AI tool promotes dialogue between people with opposing opinions

An AI-supported tool helps people with differing opinions find common ground, thereby promoting dialogue. (Symbolic image: natur.wiki)


A chat-like tool powered by artificial intelligence (AI) can help people with differing views find areas of agreement, as shown by an experiment with online discussion groups.

The model, developed by Google DeepMind in London, was able to synthesize divergent opinions and produce summaries of each group's position that took different perspectives into account. Participants preferred the AI-generated summaries to those written by human mediators, which suggests that such tools could be used to support complex deliberations. The study was published on 17 October in the journal Science [1].

"You can see it as a proof of concept that you can use AI, in particular large language models, to fulfil part of the function that is currently fulfilled by citizens' assemblies and deliberative polls," says Christopher Summerfield, co-author of the study and research director at the UK AI Safety Institute. "People need to find common ground, because collective action requires agreement."

Compromise machine

Democratic initiatives such as citizens' assemblies, in which groups of people are asked to share their opinions on policy questions, ensure that politicians hear a variety of perspectives. But scaling up these initiatives can be difficult, because the discussions are often restricted to small groups to ensure that every voice is heard.

Curious about research into the capabilities of large language models (LLMs), Summerfield and his colleagues designed a study to assess how AI could help people with opposing opinions reach a compromise.

They used a fine-tuned version of DeepMind's pre-trained LLM Chinchilla, which they called the "Habermas machine" after the philosopher Jürgen Habermas, who developed a theory of how rational discussion can help to resolve conflict.

To test their model, the researchers recruited 439 UK residents, who were sorted into smaller groups. Each group discussed three questions on British policy issues and shared their personal opinions on each. These opinions were then fed to the Habermas machine, which generated overarching statements combining the perspectives of all participants. The participants could rate each statement and submit critiques, which the AI then incorporated into a final summary of the group's collective view.

"The model is trained to produce a statement that is endorsed by a group of people who have volunteered their opinions," says Summerfield. "Because the model learns the group's preferences over these statements, it can then produce a statement that is most likely to satisfy everyone."
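The loop described above — collect opinions, generate candidate group statements, then use participant rankings to select the statement most likely to satisfy everyone — can be sketched as follows. This is a minimal illustrative sketch, not the published system: `draft_statements` is a stub standing in for the fine-tuned LLM, and all function names, inputs and the rank-aggregation rule are assumptions for illustration.

```python
def draft_statements(opinions, n_candidates=2):
    """Stub for the LLM step: produce candidate group statements.

    The real Habermas machine generates these with a fine-tuned LLM;
    here we simply combine the raw opinions so the loop is runnable.
    """
    joined = "; ".join(opinions)
    return [f"Candidate {i}: {joined}" for i in range(n_candidates)]

def select_consensus(candidates, rankings):
    """Pick the candidate with the best (lowest) total rank.

    `rankings` holds one list per participant, where rankings[p][i]
    is participant p's rank for candidate i (0 = most preferred).
    """
    totals = [sum(r[i] for r in rankings) for i in range(len(candidates))]
    return candidates[totals.index(min(totals))]

# Toy run: two opinions, three participants ranking two candidates.
opinions = ["lower the voting age", "keep the voting age at 18"]
candidates = draft_statements(opinions)
rankings = [[0, 1], [1, 0], [0, 1]]
print(select_consensus(candidates, rankings))  # two of three prefer candidate 0
```

In the study, participants' critiques were also fed back so the model could revise its statement; the sketch keeps only the selection step, which is the part that makes the output "most likely to satisfy everyone".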

Alongside the AI, one participant in each group was selected as a mediator and asked to write a summary that best incorporated the views of all participants. The participants were then shown both the AI's summaries and the mediator's, and asked to rate them.

Most participants rated the AI-written summaries as better than the mediator's: 56% preferred the AI's output, compared with 44% who preferred the human-written summaries. External reviewers were also asked to assess the summaries, and gave the AI's higher ratings for fairness, quality and clarity.

The research team then recruited a group of participants demographically representative of the UK population for a virtual citizens' assembly. In this setting, group agreement on contentious topics increased after interaction with the AI. This finding suggests that, if integrated into a real citizens' assembly, AI tools could make it easier for leaders to develop policy proposals that account for diverse perspectives.

"The LLM could be used in many ways to support deliberations and take on roles previously reserved for human moderators," says Ethan Busby, who studies how AI tools could improve democratic societies at Brigham Young University in Provo, Utah. "I see this as the cutting edge of work in this area, which has great potential to address pressing social and political problems." Summerfield adds that AI could even help to make conflict-resolution processes faster and more efficient.

Lost connections

"Actually applying these technologies to deliberative experiments and processes is really encouraging," says Sammy McKinney, who studies deliberative democracy and its interfaces with artificial intelligence at the University of Cambridge, UK. But he adds that researchers should carefully consider the potential effects of AI on the human side of deliberation. "A key reason to support citizen deliberation is that it creates spaces in which people can relate to each other," he says. "What do we lose if we increasingly remove human contact and human moderation?"

Summerfield acknowledges the limitations of AI technologies such as this one. "We did not train the model to intervene in the deliberation," he says, which means its statements could contain extremist or otherwise problematic views. He adds that rigorous research on AI's effects on society is crucial to understanding its value.

"Acting cautiously seems important to me," says McKinney, "and then taking steps to mitigate those concerns where possible."

  1. Tessler, M. H. et al. Science 386, eadq2852 (2024).
