AI chatbots can sway voters even more effectively than traditional political advertisements.

Summary:
New research suggests that AI chatbots may be more powerful than traditional political advertising at shifting voters' political positions. A study conducted by a multi-university team found that conversations with politically slanted AI models effectively nudged both Democratic and Republican voters toward supporting the opposing party's presidential candidate, with an effect exceeding that of political advertisements.
In experiments run before the 2024 US presidential election, the researchers had more than 2,300 participants converse with chatbots that advocated for a particular candidate. The results showed that the chatbots could significantly shift voter attitudes: Trump supporters who talked with a pro-Harris AI moved toward Harris by an average of 3.9 points on a 100-point scale, roughly four times the measured effect of political advertising in the 2016 and 2020 elections. In similar experiments in Canada and Poland, the chatbots shifted opposition voters' attitudes by about 10 points.
The study attributes the chatbots' persuasiveness chiefly to their ability to generate “facts and evidence” in real time and deploy them strategically in conversation. The problem is that this “evidence” is not always accurate. The researchers found that the more persuasive a model was, the more untrue information it tended to provide, and the chatbots made inaccurate statements especially often when advocating for right-leaning candidates. The researchers speculate that this is because the models absorbed biases inherent in web text during training.
A companion study published in the journal Science further showed that instructing a model to pack its arguments densely with “facts and evidence,” combined with additional training on examples of persuasive conversations, could shift participants' positions by as much as 26.1 points. Notably, this optimization for persuasiveness tended to come at the cost of truthfulness: as the models became more eloquent, they also became more likely to output misleading or false information.
These findings have raised serious concerns among scholars that AI could reshape the electoral landscape. Andy Guess, a political scientist at Princeton University, notes that while it is not yet clear how such technology will be deployed in real campaigns, AI chatbots could indeed erode voters' ability to make independent political judgments. Alex Coppock, a political scientist at Northwestern University, argues that if AI is widely used in campaigns, misinformation could cause serious harm given its usual advantage in spreading, but that there is also the possibility that accurate information could now spread at scale.
Experts stress the need for effective oversight mechanisms, such as auditing and fact-checking AI conversations on political topics, to safeguard the health of the democratic process. As the public increasingly relies on AI for information, ensuring fair and truthful elections as the technology spreads has become a pressing global challenge.
English source:
AI chatbots can sway voters better than political advertisements
A conversation with a chatbot can shift people's political views—but the most persuasive models also spread the most misinformation.
In 2024, a Democratic congressional candidate in Pennsylvania, Shamaine Daniels, used an AI chatbot named Ashley to call voters and carry on conversations with them. “Hello. My name is Ashley, and I’m an artificial intelligence volunteer for Shamaine Daniels’s run for Congress,” the calls began. Daniels didn’t ultimately win. But maybe those calls helped her cause: New research reveals that AI chatbots can shift voters’ opinions in a single conversation—and they’re surprisingly good at it.
A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party. The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things.
The findings, detailed in a pair of studies published in the journals Nature and Science, are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections.
“One conversation with an LLM has a pretty meaningful effect on salient election choices,” says Gordon Pennycook, a psychologist at Cornell University who worked on the Nature study. LLMs can persuade people more effectively than political advertisements because they generate much more information in real time and strategically deploy it in conversations, he says.
For the Nature paper, the researchers recruited more than 2,300 participants to engage in a conversation with a chatbot two months before the 2024 US presidential election. The chatbot, which was trained to advocate for either one of the top two candidates, was surprisingly persuasive, especially when discussing candidates’ policy platforms on issues such as the economy and health care. Donald Trump supporters who chatted with an AI model favoring Kamala Harris became slightly more inclined to support Harris, moving 3.9 points toward her on a 100-point scale. That was roughly four times the measured effect of political advertisements during the 2016 and 2020 elections. The AI model favoring Trump moved Harris supporters 2.3 points toward Trump.
In similar experiments conducted during the lead-ups to the 2025 Canadian federal election and the 2025 Polish presidential election, the team found an even larger effect. The chatbots shifted opposition voters’ attitudes by about 10 points.
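To make these effect sizes concrete, here is a minimal sketch of how such a pre/post shift could be computed. The ratings below are invented for illustration; this is not the studies' actual analysis code:

```python
# Illustrative only: hypothetical numbers, not the study's analysis code.
# Each participant rates support for the advocated candidate (0-100)
# before and after one conversation with the chatbot.
from statistics import mean

# Hypothetical (pre, post) support ratings for five participants.
ratings = [(20, 25), (35, 38), (10, 13), (42, 45), (28, 33)]

# The reported treatment effect is the average movement toward the
# advocated candidate on the 100-point scale.
shifts = [post - pre for pre, post in ratings]
print(f"Average shift: {mean(shifts):+.1f} points on a 100-point scale")
```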
Long-standing theories of politically motivated reasoning hold that partisan voters are impervious to facts and evidence that contradict their beliefs. But the researchers found that the chatbots, which used a range of models including variants of GPT and DeepSeek, were more persuasive when they were instructed to use facts and evidence than when they were told not to do so. “People are updating on the basis of the facts and information that the model is providing to them,” says Thomas Costello, a psychologist at American University, who worked on the project.
The catch is, some of the “evidence” and “facts” the chatbots presented were untrue. Across all three countries, chatbots advocating for right-leaning candidates made a larger number of inaccurate claims than those advocating for left-leaning candidates. The underlying models are trained on vast amounts of human-written text, which means they reproduce real-world phenomena—including “political communication that comes from the right, which tends to be less accurate,” according to studies of partisan social media posts, says Costello.
In the other study published this week, in Science, an overlapping team of researchers investigated what makes these chatbots so persuasive. They deployed 19 LLMs to interact with nearly 77,000 participants from the UK on more than 700 political issues while varying factors like computational power, training techniques, and rhetorical strategies.
The most effective way to make the models persuasive was to instruct them to pack their arguments with facts and evidence and then give them additional training by feeding them examples of persuasive conversations. In fact, the most persuasive model shifted participants who initially disagreed with a political statement 26.1 points toward agreeing. “These are really large treatment effects,” says Kobi Hackenburg, a research scientist at the UK AI Security Institute, who worked on the project.
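As a rough illustration of the kind of lever the Science study varied, the sketch below contrasts a baseline instruction with a facts-and-evidence instruction. All prompt text, names, and the stubbed model call are assumptions for illustration; the paper's actual prompts and fine-tuning setup are not reproduced here:

```python
# Illustrative sketch of an instruction-level treatment: the prompt
# wording and the generate() stub are hypothetical stand-ins, not the
# prompts or models actually used in the Science study.

CONDITIONS = {
    "baseline": "Argue for the assigned position in a conversational tone.",
    "facts_and_evidence": (
        "Argue for the assigned position and pack your case with "
        "specific facts, statistics, and evidence."
    ),
}

def generate(system_prompt: str, issue: str) -> str:
    # Placeholder for a real LLM call; returns a canned string so the
    # sketch runs without network access or API credentials.
    return f"[model output on {issue!r} under: {system_prompt[:30]}...]"

for name, prompt in CONDITIONS.items():
    print(name, "->", generate(prompt, "a sample political statement"))
```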
But optimizing persuasiveness came at the cost of truthfulness. When the models became more persuasive, they increasingly provided misleading or false information—and no one is sure why. “It could be that as the models learn to deploy more and more facts, they essentially reach to the bottom of the barrel of stuff they know, so the facts get worse-quality,” says Hackenburg.
The chatbots’ persuasive power could have profound consequences for the future of democracy, the authors note. Political campaigns that use AI chatbots could shape public opinion in ways that compromise voters’ ability to make independent political judgments.
Still, the exact contours of the impact remain to be seen. “We’re not sure what future campaigns might look like and how they might incorporate these kinds of technologies,” says Andy Guess, a political scientist at Princeton University. Competing for voters’ attention is expensive and difficult, and getting them to engage in long political conversations with chatbots might be challenging. “Is this going to be the way that people inform themselves about politics, or is this going to be more of a niche activity?” he asks.
Even if chatbots do become a bigger part of elections, it’s not clear whether they’ll do more to amplify truth or fiction. Usually, misinformation has an informational advantage in a campaign, so the emergence of electioneering AIs “might mean we’re headed for a disaster,” says Alex Coppock, a political scientist at Northwestern University. “But it’s also possible that means that now, correct information will also be scalable.”
And then the question is who will have the upper hand. “If everybody has their chatbots running around in the wild, does that mean that we’ll just persuade ourselves to a draw?” Coppock asks. But there are reasons to doubt a stalemate. Politicians' access to the most persuasive models may not be evenly distributed. And voters across the political spectrum may have different levels of engagement with chatbots. “If supporters of one candidate or party are more tech savvy than the other,” the persuasive impacts might not balance out, says Guess.
As people turn to AI to help them navigate their lives, they may also start asking chatbots for voting advice whether campaigns prompt the interaction or not. That may be a troubling world for democracy, unless there are strong guardrails to keep the systems in check. Auditing and documenting the accuracy of LLM outputs in conversations about politics may be a first step.
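The auditing the authors suggest could begin with something as simple as logging every factual claim a chatbot makes in a political conversation alongside a fact-checker's verdict. Below is a minimal sketch of such a record; the schema, field names, and sample claim are assumptions for illustration, not a proposal from either study:

```python
# Illustrative sketch of an audit record for factual claims a chatbot
# makes in political conversations. The schema is hypothetical.
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class ClaimAudit:
    conversation_id: str
    claim_text: str    # the factual claim as the model stated it
    verdict: str       # e.g. "accurate", "misleading", "false", "unverified"
    notes: str = ""    # citations or comments from a human fact-checker
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[ClaimAudit] = []
audit_log.append(ClaimAudit(
    conversation_id="conv-001",
    claim_text="Candidate X's plan cuts taxes for 90% of households.",
    verdict="unverified",
))
print(json.dumps([asdict(a) for a in audit_log], indent=2))
```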
Article title: AI chatbots can sway voters even more effectively than traditional political advertisements
Article link: https://blog.qimuai.cn/?post=2322
All articles on this site are original; please do not use them for any commercial purpose without authorization.