AI tool promotes dialogue between people with opposing opinions

An AI-supported tool helps people with different opinions find common points of view and thus promotes dialogue.

A chatbot-like tool powered by artificial intelligence (AI) can help people with different views find areas of agreement, an experiment with online discussion groups shows.

The model, developed by Google DeepMind in London, was able to synthesize divergent opinions and produce summaries of each group's position that took different perspectives into account. Participants preferred the AI-generated summaries over those written by human mediators. This suggests that such tools could be used to support complex consultations. The study was published on October 17, 2024, in the journal Science [1].

“You can see it as a proof of concept that you can use AI, particularly large language models, to perform some of the function currently performed by citizen assemblies and deliberative polls,” says Christopher Summerfield, co-author of the study and research director at the UK AI Safety Institute. “People need to find common ground because collective action requires consent.”

Compromise machine

Democratic initiatives such as town hall meetings, where groups of people share their opinions on political issues, help ensure that politicians hear a variety of perspectives. Scaling up these initiatives is difficult, however, because the discussions are typically limited to small groups so that every voice can be heard.

Curious about the possibilities of large language models (LLMs), Summerfield and his colleagues designed a study to evaluate how AI could help people with opposing opinions reach a compromise.

They deployed a fine-tuned version of the pre-trained DeepMind LLM Chinchilla, which they called the “Habermas Machine,” named after the philosopher Jürgen Habermas, who developed a theory of how rational discussions can help resolve conflicts.

To test their model, the researchers recruited 439 British residents, who were divided into smaller groups. Each group discussed three questions about British political issues and shared their personal opinions on them. These opinions were then fed into the AI engine, which generated overarching statements that combined the perspectives of all participants. Participants were able to rate and critique each statement, which the AI then incorporated into a final summary of the group's collective view.

“The model is trained to produce a statement that has maximum support from a group of people who have volunteered their opinions,” says Summerfield. “As the model learns what your preferences are about these statements, it can then produce a statement that is most likely to satisfy everyone.”
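To make the described loop concrete: candidate group statements are sampled, scored by how much support they are predicted to win from the participants, and then revised against participant critiques. Below is a minimal sketch of such a loop in Python; llm_generate, predict_support, and critiques_from are hypothetical stand-ins for the generative model, the preference model, and the participants' feedback, not DeepMind's actual interfaces.

# A minimal sketch of the mediation loop described above; an illustration,
# not DeepMind's implementation. All three callables are hypothetical.
from typing import Callable, List

def mediate(
    opinions: List[str],
    llm_generate: Callable[[str], List[str]],      # prompt -> candidate statements
    predict_support: Callable[[str, str], float],  # (statement, opinion) -> score
    critiques_from: Callable[[str], List[str]],    # draft -> participant critiques
) -> str:
    """Draft a group statement, gather critiques, and revise it once."""

    def best(prompt: str) -> str:
        # Keep the candidate whose predicted support, summed over all
        # participants' opinions, is highest.
        return max(
            llm_generate(prompt),
            key=lambda s: sum(predict_support(s, o) for o in opinions),
        )

    # Round 1: synthesize the individual opinions into a draft statement.
    draft = best(
        "Write one statement reflecting these opinions:\n" + "\n".join(opinions)
    )

    # Round 2: collect critiques of the draft and produce a revised statement.
    critiques = critiques_from(draft)
    return best(
        "Revise this statement:\n" + draft
        + "\nto address these critiques:\n" + "\n".join(critiques)
    )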

In addition to the AI, one participant in each group was selected as a mediator and asked to write a summary that best incorporated the views of all participants. Participants were then shown both the AI's and the mediator's summaries and asked to rate them.

Most participants rated the AI-written summaries as better than the mediator's: 56% preferred the AI's summaries, compared with 44% who preferred the human-written ones. External reviewers were also asked to evaluate the summaries, and they gave the AI's summaries higher ratings for fairness, quality, and clarity.

The research team then recruited a group of participants demographically representative of the UK population for a virtual town hall meeting. In this scenario, group agreement on contentious issues increased after participants interacted with the AI. This finding suggests that AI tools, if integrated into a real citizens' assembly, could make it easier for leaders to develop policy proposals that take diverse perspectives into account.

“The LLM could be used in many ways to support deliberations and take on roles previously reserved for human facilitators,” says Ethan Busby, who studies how AI tools could improve democratic societies at Brigham Young University in Provo, Utah. “I see this as the pinnacle of work in this area, which has great potential to address pressing social and political issues.” Summerfield adds that AI could even help make conflict resolution processes faster and more efficient.

Lost connections

“Actually applying these technologies to deliberative experiments and processes is really gratifying,” says Sammy McKinney, who studies deliberative democracy and its interfaces with artificial intelligence at the University of Cambridge, UK. But he adds that researchers should carefully consider the potential impact of AI on the human aspect of deliberation. “A key reason to support citizen deliberations is that they create specific spaces in which people can relate to one another,” he says. “What do we lose by increasingly removing human contact and human facilitation?”

Summerfield recognizes the limitations associated with AI technologies such as these. “We didn’t train the model to intervene in deliberation,” he says, meaning the model’s statement could also contain extremist or other problematic beliefs. He adds that rigorous research into the impact of AI on society is crucial to understanding its value.

“It seems important to me to proceed cautiously,” McKinney says, “and then take steps to mitigate those concerns where possible.”

1. Tessler, M. H. et al. Science 386, eadq2852 (2024).

