Researchers have shown that artificial intelligence (AI) could be a valuable tool in the fight against conspiracy theories. They developed a chatbot that can refute false information and encourage people to question their way of thinking.
In a study published on September 12 in Science¹, a few minutes of conversation with the chatbot, which provided detailed answers and arguments, produced a shift in thinking that lasted for several months. The result suggests that facts and evidence really can change people's minds.
“This work challenged a lot of the existing literature that assumes we live in a post-truth society,” says Katherine FitzGerald, who researches conspiracy theories and misinformation at the Queensland University of Technology in Brisbane, Australia.
Previous analyses have suggested that people are drawn to conspiracy theories because they seek security and certainty in a turbulent world. But "what we discovered in this work contradicts that traditional explanation," says study co-author Thomas Costello, a psychology researcher at American University in Washington DC. "One of the potentially exciting applications of this research is that AI could be used to debunk conspiracy theories in real life."
Harmful ideas
Although many conspiracy theories have little social impact, those that gain popularity can "cause real harm," says FitzGerald. As examples, she points to the attack on the U.S. Capitol on January 6, 2021, which was fueled in part by claims of a rigged 2020 presidential election, and to the anti-vaccine rhetoric that dampened acceptance of COVID-19 vaccines.
It is possible to persuade people to change their minds, but doing so can be time-consuming and stressful, and the sheer number and variety of conspiracy theories make it difficult to address the problem at scale. So Costello and his colleagues set out to explore whether large language models (LLMs), which can quickly process large amounts of information and generate human-like responses, could combat conspiracy theories. "They've been trained on the internet and know all the conspiracy theories and their counterarguments, so it seemed like a very natural fit," Costello says.
Believe it or not
The researchers developed a custom chatbot using GPT-4 Turbo—the latest LLM from San Francisco, California-based ChatGPT creator OpenAI—that was trained to argue convincingly against conspiracies. They then recruited over 1,000 participants whose demographics matched the U.S. Census on characteristics such as gender and ethnicity. Costello says that by recruiting “people with different life experiences and their own perspectives,” the team was able to assess the chatbot’s ability to debunk a variety of conspiracies.
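For readers curious about the mechanics, a system like this can be assembled from off-the-shelf components. Below is a minimal sketch assuming the OpenAI Python SDK; the system prompt and the function name debunking_reply are illustrative inventions, not the study's actual materials.

```python
# Minimal sketch of a debunking chatbot built on GPT-4 Turbo.
# Assumes the OpenAI Python SDK; the prompt text is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "The user believes a specific conspiracy theory. Respond to their "
    "stated reasons with accurate evidence and clear, respectful "
    "counterarguments, and answer any follow-up questions."
)

def debunking_reply(history: list[dict]) -> str:
    """Return the chatbot's next reply, given the conversation so far."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *history],
    )
    return response.choices[0].message.content
```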
Each participant was asked to describe a conspiracy theory, explain why they believed it to be true, and express the strength of their belief as a percentage. These details were shared with the chatbot, which then engaged in a conversation with the participant, citing information and evidence that undermined or refuted the conspiracy, and responded to the participant's questions. The chatbot's answers were thorough and detailed, often reaching hundreds of words. On average, each conversation lasted about 8 minutes.
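To make that protocol concrete, the sketch below encodes one participant's session: their theory, reasons and 0-100 belief rating seed the conversation, after which the chatbot and participant alternate turns. The names (Session, run_session) and the turn count are hypothetical, not taken from the study.

```python
# Hypothetical encoding of one participant's session in the protocol
# described above; the structure is illustrative, not the study's code.
from dataclasses import dataclass, field

@dataclass
class Session:
    theory: str                      # the conspiracy theory, in the participant's words
    reasons: str                     # why they believe it is true
    belief_before: float             # self-rated confidence, 0-100
    belief_after: float | None = None
    history: list[dict] = field(default_factory=list)

def run_session(session: Session, get_reply, n_turns: int = 3) -> None:
    """Run a short debunking dialogue. `get_reply` is the chatbot call,
    e.g. the debunking_reply function sketched earlier."""
    session.history.append({
        "role": "user",
        "content": (
            f"I believe this theory: {session.theory}\n"
            f"My reasons: {session.reasons}\n"
            f"My confidence: {session.belief_before}%"
        ),
    })
    for _ in range(n_turns):
        reply = get_reply(session.history)
        session.history.append({"role": "assistant", "content": reply})
        # In the study, the participant responds between chatbot turns;
        # here a console prompt stands in for that step.
        session.history.append({"role": "user", "content": input("> ")})
```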
The approach proved effective: participants' confidence in their chosen conspiracy theory fell by an average of 21% after interacting with the chatbot, and 25% of participants moved from high confidence (above 50%) to uncertainty. By contrast, the change was barely noticeable in control groups that talked to the same chatbot about an unrelated topic. A follow-up survey two months later showed that the shift in perspective had persisted for many participants.
Although the study's results are promising, the researchers note that the participants were paid survey takers and might not be representative of people who are deeply immersed in conspiracy theories.
Effective intervention
FitzGerald is excited about AI's potential to counter conspiracy theories. "If we can find a way to prevent violence offline, that's always a good thing," she says. She suggests that follow-up research could examine different metrics for assessing the chatbot's effectiveness, or repeat the study using LLMs with less advanced safety measures, to check that they do not reinforce conspiratorial thinking.
Previous studies have raised concerns about AI chatbots' tendency to "hallucinate" false information, but this study was designed to guard against that possibility: Costello's team asked a professional fact-checker to assess the accuracy of the information the chatbot provided, and the fact-checker confirmed that none of its statements were false or politically biased.
Costello says the team plans further experiments to test different chatbot strategies, such as examining what happens when the chatbot's responses are rude. By identifying "the experiments where the belief change no longer works," they hope to learn more about what made this particular study so successful.