
Senator Blackburn’s Allegations Against Google’s AI: Fabricating Stories and Targeting Conservatives

In a strongly worded letter to Google CEO Sundar Pichai, Republican Senator Marsha Blackburn of Tennessee has raised serious concerns about Google’s AI model Gemma, alleging that it generated false and defamatory content about conservatives, including herself. This development comes at a critical time when discussions about artificial intelligence ethics, content moderation, and political bias in technology are increasingly prominent in national discourse. According to Blackburn, Gemma fabricated a sexual assault allegation against her when prompted with a question about whether she had been accused of rape. The AI allegedly produced a completely fictional story claiming that during her 1987 campaign for Tennessee State Senate, she had a non-consensual relationship with a state trooper who also alleged she pressured him for prescription drugs. The senator emphatically pointed out that not only did she run for office in 1998, not 1987, but that “there has never been such an accusation, there is no such individual, and there are no such news stories.”

The senator’s concerns emerged following a Senate Commerce Committee hearing that examined “jawboning” – the practice where government officials indirectly pressure tech companies to censor certain speech or content. During this hearing, Blackburn confronted Google Vice President Markham Erickson about AI “hallucinations” – instances where AI systems generate false information and present it as factual. She referenced conservative activist Robby Starbuck’s lawsuit against Google, which alleges that Google’s AI tools falsely linked him to serious accusations including sexual assault, child rape, and financial exploitation. This pattern of incidents prompted Blackburn to test Gemma herself, resulting in the discovery of fabricated allegations against her own character.

In her letter to Pichai, Blackburn characterized the AI’s output not as a harmless mistake but as “an act of defamation produced and distributed by a Google-owned AI model.” She expressed alarm that a publicly accessible tool could invent criminal allegations against a sitting U.S. senator, calling it “a catastrophic failure of oversight and ethical responsibility.” The Tennessee Republican further suggested that there appears to be a consistent pattern of bias against conservatives in Google’s AI systems, whether intentional or resulting from “ideologically biased training data.” Either way, she argued, the effect remains troubling: Google’s AI models are “shaping dangerous political narratives by spreading falsehoods about conservatives and eroding public trust.”

The senator has demanded specific responses from Google by November 6th, including explanations of how and why Gemma generated these false claims, what measures Google has implemented to prevent political or ideological bias in its AI systems, which safeguards failed to prevent this incident, and what actions the company will take to remove defamatory content and prevent similar occurrences in the future. These demands reflect growing concerns among lawmakers about the potential real-world harms of AI-generated misinformation, especially when it targets public figures or creates fabricated controversies that could influence public perception and political discourse.

During the Senate hearing, Google’s representative Erickson acknowledged that “large language models will hallucinate,” a technical term for when AI generates incorrect information. However, Blackburn’s response was unequivocal: “Shut it down until you can control it.” This exchange highlights the tension between technological innovation and responsible deployment, particularly for powerful AI systems with the potential to generate convincing but entirely fictional narratives. The issue raises profound questions about whether current AI systems are ready for widespread public use if they cannot reliably distinguish fact from fiction or avoid generating potentially libelous content about real individuals.

The controversy comes at a time when major tech companies are racing to develop and deploy increasingly sophisticated AI models while simultaneously facing mounting pressure from regulators, lawmakers, and the public to ensure these technologies operate responsibly. Google did not immediately respond to requests for comment on Blackburn’s allegations, according to the report. This situation exemplifies the complex challenges facing the tech industry and policymakers as they navigate the unprecedented capabilities and risks of generative AI technology. The outcome of this particular dispute could influence how AI developers approach content safeguards and bias mitigation in the future, especially regarding political content and information about public figures.
