AI chatbot gets conspiracy theorists to question their beliefs

In a study published on September 12 in Science, researchers report that conversations with an AI chatbot reduced participants' belief in conspiracy theories.
"This work questioned many existing literature that assumes that we live in a post -factual society," says Katherine Fitzgerald, who is researching conspiracy theories and misinformation at Queensland University of Technology in Brisbane, Australia.
Earlier analyses have suggested that people are drawn to conspiracy theories because they seek safety and certainty in a turbulent world. But "what we found in this work contradicts that traditional explanation," says co-author Thomas Costello, a psychology researcher at American University in Washington DC. "One of the potentially exciting applications of this research is that AI could be used to debunk conspiracy theories in real life."
Harmful ideas
Although many conspiracy theories have little social impact, those that gain popularity can "do real damage," says Fitzgerald. She points to the attack on the US Capitol on January 6, 2021, which was fuelled in part by claims of a rigged 2020 presidential election, as well as anti-vaccine rhetoric that undermined uptake of COVID-19 vaccines, as examples.
It is possible to persuade people to change their minds, but doing so can be time-consuming and exhausting, and the sheer number and variety of conspiracy theories make the problem difficult to address at scale. Costello and his colleagues therefore wanted to examine the potential of large language models (LLMs), which can process large amounts of information quickly and generate human-like responses, to counter conspiracy theories. "They were trained on the Internet and know all the conspiracy theories and their counter-arguments, so it seemed like a very natural fit," says Costello.
Believe it or not
The researchers built a customized chatbot using GPT-4 Turbo, the latest LLM from ChatGPT maker OpenAI, based in San Francisco, California, which was trained to argue convincingly against conspiracies. They then recruited more than 1,000 participants whose demographics matched the US census on characteristics such as gender and ethnicity. Costello says that by recruiting people with different life experiences and perspectives, the team was able to assess the chatbot's ability to debunk a wide variety of conspiracies.
Each participant was asked to describe a conspiracy theory they believed, explain why they thought it was true and rate the strength of their belief as a percentage. These details were shared with the chatbot, which then engaged the participant in a conversation, citing information and evidence that undermined or debunked the conspiracy and responding to the participant's questions. The chatbot's answers were thorough and detailed, often running to hundreds of words. On average, each conversation lasted about 8 minutes.
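The paper's prompts and study infrastructure are not reproduced here, but the basic interaction loop described above is straightforward to sketch. Below is a minimal, hypothetical Python example using OpenAI's chat API; the system prompt, the model name and the dialogue structure are illustrative assumptions, not the authors' actual materials.

```python
# Hypothetical sketch of the study's interaction loop, not the authors' code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a careful, factual assistant. The user believes a specific "
    "conspiracy theory. Respond with detailed, accurate evidence that "
    "addresses their stated reasons, and answer follow-up questions politely."
)

def debunking_dialogue(theory: str, reasons: str, belief_pct: int, rounds: int = 3):
    """Run a short persuasion dialogue about one conspiracy theory."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": (
                f"I believe this theory ({belief_pct}% confident): {theory}\n"
                f"My reasons: {reasons}"
            ),
        },
    ]
    for _ in range(rounds):
        reply = client.chat.completions.create(
            model="gpt-4-turbo",  # the study reportedly used GPT-4 Turbo
            messages=messages,
        )
        answer = reply.choices[0].message.content
        print(answer)
        messages.append({"role": "assistant", "content": answer})
        follow_up = input("Your response (leave blank to stop): ")
        if not follow_up:
            break
        messages.append({"role": "user", "content": follow_up})

if __name__ == "__main__":
    debunking_dialogue(
        theory="The 1969 Moon landing was staged.",
        reasons="The flag appears to wave and there are no stars in the photos.",
        belief_pct=80,
    )
```

In the actual experiment the participant's theory, reasons and belief rating were collected through a survey platform rather than a command line, but the core exchange, counter-evidence followed by responses to the participant's questions, follows the same pattern.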
The approach proved effective: participants' confidence in their chosen conspiracy theory fell by an average of 21% after they interacted with the chatbot. And 25% of participants went from being confident in the theory (rating their belief above 50%) to being uncertain. The change was barely noticeable in the control groups, whose members discussed an unrelated topic with the same chatbot. A follow-up survey two months later showed that the shift in perspective persisted for many participants.
Although the study's results are promising, the researchers note that the participants were paid survey-takers and might not be representative of people who are deeply invested in conspiracy theories.
Effective intervention
Fitzgerald is enthusiastic about AI's potential to push back against conspiracies. "If we can find a way to prevent offline violence, that's always a good thing," she says. She suggests that follow-up studies could explore different metrics for evaluating the chatbot's effectiveness, or repeat the experiment using LLMs with less-advanced safety measures, to make sure they don't reinforce conspiratorial thinking.
Earlier studies have raised concerns about the tendency of AI chatbots to "hallucinate" false information. The study, however, took care to rule this out: Costello's team asked a professional fact-checker to evaluate the accuracy of the information provided by the chatbot, who confirmed that none of its statements were false or politically biased.
Costello says the team plans further experiments to test different strategies for the chatbot, for example examining what happens when its answers are rude. By identifying "the experiments where the persuasion no longer works," they hope to learn more about what made this particular study so successful.
Costello, T. H., Pennycook, G. & Rand, D. G. Science 385, eadq1814 (2024).