Trapped in a Dangerous Loop: Humans Inherit Artificial Intelligence Biases

A study from Deusto University reveals that humans can inherit decision-making biases from AI. Participants using a biased AI mirrored its errors, and this bias persisted even without the AI’s assistance. This highlights the urgent need for research and regulations on AI-human collaboration.

People can adopt biases from artificial intelligence in their decision-making processes, according to a new study.

New research provides evidence that people can inherit artificial intelligence biases (systematic errors in AI outputs) in their decisions. The study was conducted by the psychologists Lucía Vicente and Helena Matute from Deusto University in Bilbao, Spain.

The astonishing results achieved by artificial intelligence systems, which can, for example, hold a conversation much as a human does, have given this technology an image of high reliability. More and more professional fields are adopting AI-based tools to support specialists' decision-making and minimize errors. However, this technology is not without risks, because AI outputs can be biased. The data used to train AI models reflect past human decisions; if these data hide patterns of systematic errors, the algorithm will learn and reproduce them. Indeed, extensive evidence indicates that AI systems do inherit and amplify human biases.

Reversal of Bias Transmission

The most relevant finding of Vicente and Matute's research is that the opposite effect can also occur: humans may inherit AI biases. That is, not only can AI inherit its biases from human data, but people can also inherit those biases from the AI, with the risk of getting trapped in a dangerous loop. The results are published in Scientific Reports.

In a series of three experiments, volunteers performed a medical diagnosis task. One group of participants was assisted by a biased AI system (one that exhibited a systematic error), while the control group performed the task unassisted. The AI, the medical diagnosis task, and the disease were all fictitious; the whole setting was a simulation designed to avoid interference with real situations.

Impact on Decision-Making

The participants assisted by the biased AI made the same type of errors as the AI, while the control group did not. Thus, AI recommendations influenced participants' decisions. The most significant finding, however, was that after interacting with the AI system, these volunteers continued to mimic its systematic error when they switched to performing the diagnosis task unaided.

In other words, participants who were first assisted by the biased AI replicated its bias in a context without this support, thus showing an inherited bias. This effect was not observed for the participants in the control group, who performed the task unaided from the beginning.
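The two-phase design described above can be sketched as a toy simulation. All of the details here (the numeric "tissue samples", the thresholds, and the rule that assisted participants simply adopt the AI's decision boundary) are illustrative assumptions for the sake of the sketch, not the study's actual stimuli or model of learning:

```python
import random

# Each "sample" is a number in [0, 1]; the correct diagnosis is
# "positive" when the value exceeds the true threshold.
TRUE_THRESHOLD = 0.5
AI_THRESHOLD = 0.4  # the biased AI systematically over-diagnoses

def run_phase(samples, threshold):
    """Diagnose each sample with a simple threshold rule."""
    return [s > threshold for s in samples]

def error_rate(diagnoses, samples):
    truth = [s > TRUE_THRESHOLD for s in samples]
    wrong = sum(d != t for d, t in zip(diagnoses, truth))
    return wrong / len(samples)

random.seed(0)
test = [random.random() for _ in range(200)]  # unaided phase

# Assisted group: during the AI phase they adopt the decision rule
# implied by the AI's recommendations, and keep using it once the
# assistance is removed -- the bias is "inherited".
inherited_threshold = AI_THRESHOLD
assisted_errors = error_rate(run_phase(test, inherited_threshold), test)

# Control group: never sees the AI and uses the correct rule throughout.
control_errors = error_rate(run_phase(test, TRUE_THRESHOLD), test)

print(f"assisted group error rate (unaided phase): {assisted_errors:.2f}")
print(f"control group error rate:                  {control_errors:.2f}")
```

In this caricature, the assisted group keeps misclassifying the borderline samples between 0.4 and 0.5 even with the AI gone, while the control group does not, mirroring the persistence effect the study reports.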

These results show that biased information from an artificial intelligence model can have a lasting negative impact on human decisions. The finding that AI bias can be inherited points to the need for further psychological and multidisciplinary research on AI-human interaction. Evidence-based regulation is also needed to guarantee fair and ethical AI, taking into account not only the technical features of AI systems but also the psychological aspects of AI-human collaboration.

Reference: “Humans inherit artificial intelligence biases” by Lucía Vicente and Helena Matute, 3 October 2023, Scientific Reports.
DOI: 10.1038/s41598-023-42384-8
