AI Might Actually Change Minds About Conspiracy Theories—Here’s How

by Staff Writer
September 20, 2024 at 10:05 AM UTC

AI can slash belief in conspiracy theories by 20% after a single three-round conversation, with effects lasting months, offering new hope in the fight against misinformation.

Clinical Relevance: Personalized AI-based interventions could help challenge entrenched, irrational beliefs and cognitive distortions.

  • AI reduced belief in conspiracy theories by 20 percent after a single personalized conversation of three rounds.
  • The effect lasted at least two months, and the conversations also lowered belief in unrelated falsehoods.
  • This study shows AI’s potential to combat misinformation at scale, though it carries risks if misused.

Artificial intelligence (AI) may be able to talk people out of their belief in conspiracy theories, finds a new study published in the journal Science.

Researchers from the MIT Sloan School of Management conducted experiments involving 2,190 participants who held various conspiracy beliefs, ranging from the assassination of John F. Kennedy to COVID-19 misinformation. Each participant conversed with an AI model—specifically, GPT-4 Turbo—which refuted their specific conspiracy belief.

A Surprising Drop in Conspiracy Confidence

After just three rounds of dialogue with the AI, participants’ faith in their conspiracy theory dropped by an average of 20 percent. This newfound skepticism held even two months after the conversation.

The AI didn’t just throw facts at participants, the authors noted. It engaged them in a conversation that directly addressed the evidence they believed supported their theory.

This personalized approach made the participants feel heard and provided them with counterarguments that were relevant to their beliefs. In one case, a participant who believed in the Illuminati conspiracy admitted that the AI had shifted their perspective for the first time, noting that the response “made real, logical sense.”
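
The paper’s exact prompts aren’t reproduced here, but a minimal sketch of such a three-round rebuttal dialogue might look like the following, assuming the openai Python SDK (v1.x) and a system prompt of our own invention, not the study’s actual materials:

```python
# A rough illustration of the study's setup: a three-round dialogue in
# which the model rebuts the evidence a believer cites. Assumes the
# openai Python SDK (v1.x) and an OPENAI_API_KEY environment variable;
# the system prompt below is an assumption for illustration only.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "The user believes a conspiracy theory. Respectfully challenge it, "
    "responding directly to the specific evidence they cite with "
    "accurate, verifiable facts."
)

def run_dialogue(opening_statement: str, rounds: int = 3) -> None:
    """Hold a short back-and-forth in which the model rebuts the belief."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": opening_statement},
    ]
    for turn in range(rounds):
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # the model family used in the study
            messages=messages,
        )
        reply = response.choices[0].message.content
        print(f"AI: {reply}\n")
        if turn < rounds - 1:
            # Carry the conversation forward with the user's next rebuttal.
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": input("You: ")})

run_dialogue("The moon landing was staged; the flag waves in a vacuum.")
```

In the study itself, participants rated their belief before and after the conversation, which is how the 20 percent drop was measured.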

Additionally, chatting with the AI didn’t just reduce the stickiness of the targeted misinformation in a participant’s mind. It also shook their belief in unrelated falsehoods, making it more likely they’d question other false claims they encountered in the future.

When presented with compelling evidence tailored to their specific point of view, even conspiracy theorists who have gone deep down the rabbit hole can keep an open mind, the authors noted.

“We wondered if it was possible that people simply hadn’t been exposed to compelling evidence disproving their theories,” David Rand, the study’s co-author and professor at MIT Sloan School of Management, explained.

“Conspiracy theories come in many varieties—the specifics of the theory and the arguments used to support it differ from believer to believer. So if you are trying to disprove the conspiracy but haven’t heard these particular arguments, you won’t be prepared to rebut them,” he added.

The Power of AI…

What makes the results of this study particularly interesting is that people become so psychologically invested in conspiratorial beliefs that changing their minds has long been considered nearly impossible.

But the success of the AI chatbot used in this study suggests that the right evidence, presented in a tailored, interactive way, can be very persuasive, painting a more optimistic picture of human reasoning, the authors write.

“This research indicates that evidence matters much more than we thought it did—so long as it is actually related to people’s beliefs,” co-author Gordon Pennycook of Cornell University said. “This has implications far beyond just conspiracy theories: Any number of beliefs based on poor evidence could, in theory, be undermined using this approach.”

The findings also highlight the potential of AI as a tool for combating baseless claims like the ones that spread like wildfire across social media, the authors point out.

AI can process vast amounts of information and personalize responses, allowing it to debunk conspiracy theories at scale. Platforms could quickly deploy AI-generated content on social media to counter fast-spreading false information or directly engage with users who share it.
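
As a purely hypothetical sketch of what that kind of deployment could look like, a platform might map a one-shot rebuttal function over posts it has flagged; the function name, prompt, and example posts below are illustrative assumptions, not anything from the study:

```python
# Hypothetical deployment sketch: generate a one-shot, tailored
# counter-reply for each post a platform has flagged as misinformation.
# The prompt and example posts are invented for illustration.
from openai import OpenAI

client = OpenAI()

def counter_reply(post: str) -> str:
    """Return a brief, factual reply addressing the post's specific claims."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {
                "role": "system",
                "content": "Write a brief, factual, non-judgmental reply "
                           "that addresses the specific claims in this post.",
            },
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content

flagged_posts = [
    "5G towers were built to spread the virus.",
    "Chemtrails are a government mind-control program.",
]
for post in flagged_posts:
    print(counter_reply(post))
```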

…And the Peril

But this potential also comes with risks.

The same technology that can correct misinformation could also spread it. The study’s authors emphasize the importance of developing responsible AI systems that are accurate and free from bias. In this study, the researchers judged 99.2 percent of the claims the AI made during its conversations to be accurate, and found that it shared no outright false information.

When used responsibly, AI has the potential to foster more informed and reasoned conversations, the authors argue. How to scale up and deploy interventions like this in the real world remains uncertain, they added, but the early results are promising.

“Before we had access to AI, conspiracy research was largely observational and correlational, which led to theories about conspiracies fulfilling psychological needs,” another of the study’s co-authors, Thomas Costello, added. “Our explanation is more mundane—much of the time, people just didn’t have the right information.”

Related Information:

Why So Many People Still Fall for Conspiracy Theories

The Science of Pathological Lying

Psychological Aspects of Factitious Disorder
