Artificial intelligence (AI) chatbots are increasingly inclined to echo the views of people who use them, according to computer scientists at Johns Hopkins University, who found the platforms limit what information they share depending on who asks them.

“Because people are reading a summary paragraph generated by AI, they think they’re getting unbiased, fact-based answers,” said JHU’s Ziang Xiao.

But such assumptions are largely wrong, Xiao and colleagues have argued. They tested 272 participants, who were asked to write about a range of topics that often make the news in the US, such as healthcare and student loans, either by using standard internet searches and research or by interacting with an LLM (large language model) chatbot.

The “echo chamber” effect was louder when people looked items up via AI than via conventional searches, the JHU team found.

“Even if a chatbot isn’t designed to be biased, its answers reflect the biases or leanings of the person asking the questions. So really, people are getting the answers they want to hear,” Xiao said, ahead of presenting the team’s findings at the Association for Computing Machinery’s CHI Conference on Human Factors in Computing Systems.

Separate research, published on May 10 in a Cell Press journal, showed AI to be capable of bluffing even card sharks at poker and of coming out on top in diplomacy simulations.

The researchers found the bots capable of a form of sycophancy: they were “observed to systematically agree with their conversation partners, regardless of the accuracy of their statements” and “to mirror the user’s stance, even if it means forgoing the presentation of an impartial or balanced viewpoint.”