    SciTechDaily

    Is AI Really Just a Tool? It Could Be Altering How You See Reality

By University of Exeter, April 9, 2026
    A new study explores how interactions with generative AI may influence the way people form and reinforce beliefs, particularly when technology becomes part of everyday thinking processes. Credit: Stock

    As generative AI becomes more embedded in everyday thinking, its role extends beyond producing information to actively shaping how people interpret reality.

    When generative AI systems produce false information, this is often described as AI “hallucinating at us”—producing errors that people may mistakenly accept as true.

    A new study, however, suggests a more complex issue: humans may begin to hallucinate with AI.

    Lucy Osler of the University of Exeter examines how interactions between people and AI can contribute to inaccurate beliefs, distorted memories, altered self-narratives, and even delusional thinking. Using distributed cognition theory, the research looks at cases where users’ false beliefs were reinforced and expanded through ongoing exchanges with AI systems acting as conversational partners.

    When AI Becomes Part of Our Thinking

Dr. Osler said: “When we routinely rely on generative AI to help us think, remember, and narrate, we can hallucinate with AI. This can happen when AI introduces errors into the distributed cognitive process, but it can also happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives.

“By interacting with conversational AI, people’s false beliefs can not only be affirmed but can take root and grow more substantially as the AI builds upon them. This happens because generative AI often takes our own interpretation of reality as the ground upon which the conversation is built.

    “Interacting with generative AI is having a real impact on people’s grasp of what is real or not. The combination of technological authority and social affirmation creates an ideal environment for delusions to not merely persist but to flourish.”

    The study describes what Dr. Osler calls the “dual function” of conversational AI. These systems serve both as cognitive tools that support thinking and memory, and as conversational partners that appear to share a user’s perspective.

    This second role is especially important. Unlike notebooks or search engines, which simply store information, chatbots can create a sense of social validation, making ideas feel confirmed and shared.

    The Social Validation Effect

    Dr. Osler said: “The conversational, companion-like nature of chatbots means they can provide a sense of social validation—making false beliefs feel shared with another, and thereby more real.”

    Dr. Osler also examined real cases in which generative AI systems became integrated into the thinking processes of individuals diagnosed with delusional thinking and hallucinations. These situations are increasingly referred to as “AI-induced psychosis.”

    The findings suggest that generative AI has features that may make it particularly capable of reinforcing false realities. AI companions are always available and are often designed to align with users’ views through personalization systems and sycophantic behavior. As a result, users may not need to seek out like-minded groups or persuade others to support their beliefs.

    Risks of Reinforcing False Narratives

    Unlike a human who might eventually question or challenge problematic ideas, an AI system may continue to validate narratives involving victimhood, entitlement, or revenge. This can allow conspiracy theories to grow, with AI helping users build increasingly detailed and self-consistent explanations.

    This effect may be especially strong for people who are lonely, socially isolated, or uncomfortable discussing certain experiences with others. AI companions can provide a non-judgmental and emotionally responsive presence that may feel safer than human interaction.

Dr. Osler said: “Through more sophisticated guard-railing, built-in fact-checking, and reduced sycophancy, AI systems could be designed to minimize the number of errors they introduce into conversations and to check and challenge users’ own inputs.

    “However, a deeper worry is that AI systems are reliant on our own accounts of our lives. They simply lack the embodied experience and social embeddedness in the world to know when they should go along with us and when to push back.”

Reference: “Hallucinating with AI: Distributed Delusions and ‘AI Psychosis’” by Lucy Osler, 11 February 2026, Philosophy & Technology.
    DOI: 10.1007/s13347-026-01034-3




    8 Comments

    1. Gilbert rosalejos minoza on April 9, 2026 2:18 pm

Yes, it’s true. Even I have observed that at some points AI gives false information or something different. But don’t worry; even when I refer to AI, I rethink and observe.

      • The Yar on April 15, 2026 4:44 pm

        Totally agree.

    2. Andrew Owens on April 9, 2026 8:51 pm

      To me LSD was just a tool too. It definitely altered my perception of reality, and I’m okay with that! Long story short, it’s the reason I can do extremely deep critical thinking, which is very useful at detecting political bulls***. And an even more poignant observation is that we don’t experience reality. We experience a subjective shadow of reality that is based upon our personal worldview that we have built up over time. Politics is the craft of altering the perception of people so that a ruling class can control them. Unfortunately for them I’m uncontrollable!

      • Ron Shapiro on April 12, 2026 2:14 pm

Thank you, Andrew Owens. My wife said: “You could have written that.” It is exactly how I think: our fragile “society” has come under the threat of a consensual delusion, not only from distorted political manipulation, but now from a desire to use a “tool” which has not been thoroughly interrogated for its danger to human perceptions, to human psychology, and to our ability to properly conceive of our position in history. Thanks again, and good luck to you. Ron Shapiro

      • The Yar on April 15, 2026 4:45 pm

        Congrats.

    3. Robert on April 10, 2026 6:26 am

      As the general habit for humans is believing anything they are told, AI shall be the ultimate controller. Turning humans into pliant automatons.

    4. kamir bouchareb st on April 10, 2026 1:59 pm

      thanks

    5. RobinC on April 18, 2026 5:28 am

Looks like someone with the initials DT has an AI friend then.
