AI Bots Show Surprising Ability to Change Political Opinions
In a digital age where persuasion tactics are continuously evolving, recent research has uncovered a surprising finding: artificial intelligence chatbots can successfully shift people’s political opinions, even in deeply polarized environments. Two groundbreaking studies published in Nature and Science reveal that brief conversations with AI can nudge voters toward candidates they previously opposed, raising important questions about the future of political discourse and democratic processes.
The Nature study demonstrated that when potential voters engaged in short conversations with AI bots advocating for their less-preferred candidate, their opinions shifted measurably. During the contentious 2024 U.S. presidential race between Donald Trump and Kamala Harris, pro-Trump bots moved Harris supporters about 4 points in Trump’s direction, while pro-Harris bots shifted Trump voters about 2.3 points toward Harris. While these shifts rarely changed voting intentions outright, they did soften attitudes toward opposing candidates. Even more striking, when the experiment was replicated in Canada and Poland ahead of their 2025 elections, the effect was substantially stronger, moving participants’ opinions roughly 10 points toward their previously less-favored candidate.
“It’s not like lies are more compelling than truth,” explains MIT computational social scientist David Rand, who co-authored both studies. “If you need a million facts, you eventually are going to run out of good ones and so, to fill your fact quota, you’re going to have to put in some not-so-good ones.” This observation points to a central paradox of AI persuasion techniques: the most effective bots aren’t necessarily those telling the best stories or tailoring arguments to individual beliefs, but rather those that simply provide the most information—even when that information isn’t entirely accurate.
The Science study, involving nearly 77,000 participants from the United Kingdom, investigated what makes AI chatbots persuasive across more than 700 different topics. Researchers discovered that while AI models trained on larger datasets were somewhat more persuasive, the most significant boost came from prompting the bots to incorporate numerous facts into their arguments. A basic prompt instructing the bot to be persuasive moved opinions by about 8.3 percentage points; when the bot was instead prompted to present abundant facts and evidence, the shift jumped to nearly 11 percentage points, making the bot roughly 27 percent more effective at changing minds.
However, this fact-heavy approach comes with a significant downside. The accuracy of AI models deteriorated when they were instructed to prioritize fact delivery over other persuasion techniques. For instance, GPT-4o’s accuracy dropped from approximately 80 percent to 60 percent when prompted to emphasize facts rather than storytelling or moral appeals. This creates a troubling scenario where the most persuasive bots are also the most likely to spread misinformation.
Perhaps more concerning, the research revealed a political imbalance in how this misinformation manifests. Right-leaning bots showed a greater tendency to deliver inaccurate information than their left-leaning counterparts. Lisa Argyle, a computational social scientist from Purdue University, warns in a Science commentary that these politically biased yet persuasive fabrications pose “a fundamental threat to the legitimacy of democratic governance.”
Why do fact-laden arguments from AI succeed where similar approaches from humans often fail? Jillian Fisher, an AI and society expert at the University of Washington, suggests that people may perceive machines as more reliable than humans. Interestingly, her research indicates that users more familiar with how AI models operate are less susceptible to their persuasive tactics, suggesting that education about AI limitations could serve as a protective factor.
As AI becomes increasingly integrated into our information ecosystem, the challenge extends beyond obvious political conversations. Jacob Teeny, a persuasion psychology expert from Northwestern University, points out that political influence can be subtle and unexpected: “Maybe they’re asking about dinner and the chatbot says, ‘Hey, that’s Kamala Harris’ favorite dinner.’” This highlights how AI persuasion often operates implicitly rather than through direct political messaging.
The findings from these studies underscore the urgent need for greater awareness about how AI systems can both persuade and misinform. As these technologies continue to evolve and proliferate, understanding their influence on public opinion becomes crucial for maintaining the health and integrity of democratic societies.