According to MIT Technology Review, a new study in Nature found that a single conversation with a large language model (LLM) chatbot can significantly sway a voter’s choice in an election, with an effect roughly four times that of political ads. In an experiment with over 2,300 participants two months before the 2024 U.S. presidential election, a chatbot trained to advocate for a candidate moved Donald Trump supporters 3.9 points toward Kamala Harris on a 100-point scale, and moved Harris supporters 2.3 points toward Trump. In similar tests for the 2025 Canadian and Polish elections, the effect was even larger, shifting opposition voters by about 10 points. The research, led by psychologists including Gordon Pennycook of Cornell and Thomas Costello of American University, found that chatbots were most persuasive when instructed to use facts and evidence. However, a major catch is that the chatbots, especially those advocating for right-leaning candidates, frequently presented inaccurate claims.
Why this is a big deal
Look, we’ve all been worried about targeted ads and social media echo chambers for years. But this is different. A political ad is a one-way broadcast. You can ignore it. A conversation with an AI feels interactive, personal, and responsive. It can answer your specific doubts and deploy a mountain of tailored information in real time. That’s why it’s so much more persuasive. The study basically upends the old idea that partisan voters are immune to new information. Turns out, if you present what looks like a reasoned, fact-based argument in a conversational format, people will update their views. That’s the terrifyingly powerful part.
The misinformation problem is built-in
Here’s the thing that really twists the knife. The chatbots were most effective when using “facts.” But the models are trained on the internet, which is full of partisan garbage and misinformation. So they end up reproducing that bias. The study found chatbots for right-leaning candidates made more inaccurate claims. It’s not necessarily that the AI is intentionally lying—it’s just mirroring the “political communication that comes from the right, which tends to be less accurate,” as researcher Thomas Costello put it. So you have this incredibly persuasive tool that’s inherently prone to spreading falsehoods, depending on its political alignment. How do you even regulate that?
Scale and automation change everything
Another study in Science this week, involving 19 LLMs and 77,000 participants, showed how to maximize this persuasion. The recipe? Instruct the AI to pack arguments with facts and evidence, and then give it extra training on persuasive conversations. The most persuasive model shifted opinions by a staggering 26.1 points. Think about that. We’re not talking about a few targeted Facebook ads. We’re talking about a scalable, automated system that can have millions of “personal” conversations simultaneously, each one optimized to change minds. The barrier to running a massive, AI-driven influence operation is plummeting. And it doesn’t need a troll farm in a foreign country—just API credits and a prompt.
What happens next?
So where does this leave us? The genie is very much out of the bottle. Campaigns are already using AI for fundraising emails and ad copy. It’s a short, obvious step to deploying these persuasive chatbots on campaign websites or through messaging apps. The potential for misuse in tight elections is enormous. And it’s not just about elections. Imagine this tech applied to consumer protection, public health messaging, or corporate reputation management. The underlying persuasive architecture is the same. The scramble now will be for defensive measures—how to detect AI persuasion, how to inoculate people against it, and whether platforms have any responsibility to label these interactions. But honestly, I think we’re way behind. The research is showing us the power. The bad actors are already taking notes.
