    SciTechDaily

    Technology
    Misinformation Express: How Generative AI Models Like ChatGPT, DALL-E, and Midjourney May Distort Human Beliefs

    By the American Association for the Advancement of Science (AAAS), June 23, 2023
    Generative AI models like ChatGPT, DALL-E, and Midjourney may distort human beliefs by transmitting false information and stereotyped biases, according to Celeste Kidd and Abeba Birhane. The design of current generative AI, focused on information search and provision, could make it hard to alter people’s perceptions once exposed to false information.

    Researchers warn that generative AI models, including ChatGPT, DALL-E, and Midjourney, could distort human beliefs by spreading false, biased information.

    Impact of AI on Human Perception

    Generative AI models such as ChatGPT, DALL-E, and Midjourney may distort human beliefs by transmitting false information and stereotyped biases, according to researchers Celeste Kidd and Abeba Birhane. In their Perspective, they examine how research on human psychology can explain why generative AI is so effective at distorting human beliefs.

    Overestimation of AI Capabilities

    They argue that society’s perception of the capabilities of generative AI models has been exaggerated, leading to a widespread belief that these models surpass human abilities. People are inclined to adopt information more quickly, and with greater assurance, when it is delivered by entities that appear knowledgeable and confident, as generative AI does.

    AI’s Role in Spreading False and Biased Information

    These generative AI models can fabricate false and biased information and disseminate it widely and repeatedly, and such repetition and reach ultimately determine how deeply that information becomes entrenched in people’s beliefs. People are most susceptible to influence when they are actively seeking information, and once they have received it they tend to adhere to it firmly.

    Implications for Information Search and Provision

    Because the current design of generative AI caters largely to information search and provision, Kidd and Birhane suggest it may be particularly difficult to change the minds of individuals who have been exposed to false or biased information through these systems.

    Need for Interdisciplinary Studies

    The researchers conclude by highlighting a critical opportunity for interdisciplinary studies that evaluate these models, measuring people’s beliefs and biases both before and after exposure to generative AI. The opportunity is timely, as these systems are increasingly being adopted and integrated into everyday technologies.

    Reference: “How AI can distort human beliefs: Models can convey biases and false information to users” by Celeste Kidd and Abeba Birhane, 22 June 2023, Science.
    DOI: 10.1126/science.adi0248

    4 Comments

    1. Clyde Spencer on June 24, 2023 9:03 am

      The problem isn’t unique to AI. After all, these programs are trained on things written by humans. That is, AI gets its biases from humans; the stereotypical views are inherited from humans. When political or religious ideology is considered more important than objective truth, and people cherry pick what ‘facts’ to present, then readers are subjected to a distortion of reality. There is an old saying that there are always two sides to a story. If one is only getting one side of the story, then they are only getting ‘half-truths.’ The situation may actually be worse than that. The Rashomon Effect suggests that there are as many sides to a story as there are observers. That is why journalists and scientists should stick to verifiable observations, and when there are apparent contradictions or different interpretations, offer both sides rather than making decisions for the readers. There was a time when the ideal scientist was considered to be a “disinterested observer” who only reported what was measured. That is, coldly objective and reluctant to be subjective except perhaps to inductively formulate a tentative hypothesis to guide further research.

      Rationalized by a claimed ‘existential crisis,’ the public today is inundated with poorly supported claims of impending doom, bolstered by non sequiturs such as “climate change,” “tipping point,” and “ocean acidification,” and is asked to change its lifestyle and even its economic system in the name of salvation. Even articles in professional journals (let alone those intended for laymen) are often short on the uncertainties associated with measurements; whereas most sciences use at least a 2-sigma measure of uncertainty, climatology more commonly uses only 1-sigma, so measurements appear more precise than they are.

      It is often remarked that children grow up behaving like their parents. We shouldn’t be surprised that AI is showing the same defects as those it learns from.

    2. Ali Greer on June 24, 2023 11:49 pm

      Kidd and Birhane’s research highlights the concerning potential of generative AI models like ChatGPT, DALL-E, and Midjourney to distort human beliefs by spreading false and biased information. They emphasize that the overestimation of AI capabilities and the inherent trust individuals place in confident entities contribute to the rapid adoption of such information. The ability of these models to fabricate and disseminate information widely poses challenges in altering entrenched beliefs. This study underscores the need for careful consideration of the impact and regulation of generative AI in information search and provision to mitigate the spread of misinformation and biases.

    3. xABBAAA on June 26, 2023 10:38 am

      … you don’t get it… but!
      I guess… what can happen it eventually happen…

      • Clyde Spencer on June 26, 2023 6:01 pm

        “… what can happen it eventually happen…”
        In a multiverse over infinite time. Otherwise, what can happen MAY eventually happen.
