Misinformation Express: How Generative AI Models Like ChatGPT, DALL-E, and Midjourney May Distort Human Beliefs

Generative AI models like ChatGPT, DALL-E, and Midjourney may distort human beliefs by transmitting false information and stereotyped biases, according to Celeste Kidd and Abeba Birhane. Because current generative AI is designed around information search and provision, it could be hard to correct people’s perceptions once they have been exposed to false information.

Impact of AI on Human Perception

Generative AI models such as ChatGPT, DALL-E, and Midjourney may distort human beliefs through the transmission of false information and stereotyped biases, according to researchers Celeste Kidd and Abeba Birhane. In their Perspective article, they draw on studies of human psychology to explain why generative AI holds such power to distort human beliefs.

Overestimation of AI Capabilities

They argue that society’s perception of the capabilities of generative AI models has been exaggerated, leading to a widespread belief that these models surpass human abilities. People are inherently inclined to adopt information disseminated by knowledgeable, confident entities, such as generative AI, more quickly and with greater assurance.

AI’s Role in Spreading False and Biased Information

These generative AI models can fabricate false and biased information and disseminate it widely and repeatedly, and it is this reach and repetition that ultimately determine how deeply such information becomes entrenched in people’s beliefs. People are most susceptible to influence when they are seeking information, and they tend to adhere firmly to that information once it has been received.

Implications for Information Search and Provision

Because the current design of generative AI largely caters to information search and provision, Kidd and Birhane suggest it may be especially difficult to change the minds of individuals who have been exposed to false or biased information through these AI systems.

Need for Interdisciplinary Studies

The researchers conclude by emphasizing a critical opportunity to conduct interdisciplinary studies evaluating these models, measuring people’s beliefs and biases both before and after exposure to generative AI. The opportunity is timely, they note, given that these systems are increasingly being adopted and integrated into everyday technologies.

Reference: “How AI can distort human beliefs: Models can convey biases and false information to users” by Celeste Kidd and Abeba Birhane, 22 June 2023, Science.
DOI: 10.1126/science.adi0248

4 Comments on "Misinformation Express: How Generative AI Models Like ChatGPT, DALL-E, and Midjourney May Distort Human Beliefs"

  1. Clyde Spencer | June 24, 2023 at 9:03 am

    The problem isn’t unique to AI. After all, these programs are trained on things written by humans. That is, AI gets its biases from humans; the stereotypical views are inherited from humans. When political or religious ideology is considered more important than objective truth, and people cherry-pick what ‘facts’ to present, then readers are subjected to a distortion of reality. There is an old saying that there are always two sides to a story. If one is only getting one side of the story, then one is only getting ‘half-truths.’ The situation may actually be worse than that. The Rashomon Effect suggests that there are as many sides to a story as there are observers. That is why journalists and scientists should stick to verifiable observations, and when there are apparent contradictions or different interpretations, offer both sides rather than making decisions for the readers. There was a time when the ideal scientist was considered to be a “disinterested observer” who only reported what was measured. That is, coldly objective and reluctant to be subjective except perhaps to inductively formulate a tentative hypothesis to guide further research.

    Rationalized by a claimed ‘existential crisis,’ the public today is inundated with poorly supported claims of impending doom, buttressed by non sequiturs such as “climate change,” “tipping point,” and “ocean acidification,” and is asked to change its lifestyle and even its economic systems in the name of salvation. Even articles in professional journals (let alone those intended for consumption by laymen) are often short on the uncertainties associated with measurements, and whereas most sciences use at least a 2-sigma measure of uncertainty, climatology more commonly uses only 1-sigma, so that measurements appear more precise than they are.

    It is often remarked that children grow up behaving like their parents. We shouldn’t be surprised that AI is showing the same defects as those it learns from.

  2. Kidd and Birhane’s research highlights the concerning potential of generative AI models like ChatGPT, DALL-E, and Midjourney to distort human beliefs by spreading false and biased information. They emphasize that the overestimation of AI capabilities and the inherent trust individuals place in confident entities contribute to the rapid adoption of such information. The ability of these models to fabricate and disseminate information widely poses challenges in altering entrenched beliefs. This study underscores the need for careful consideration of the impact and regulation of generative AI in information search and provision to mitigate the spread of misinformation and biases.

  3. … you don’t get it… but!
    I guess… what can happen it eventually happen…

    • Clyde Spencer | June 26, 2023 at 6:01 pm

      “… what can happen it eventually happen…”
      In a multiverse over infinite time. Otherwise, what can happen MAY eventually happen.
