Close Menu
    Facebook X (Twitter) Instagram
    SciTechDaily
    • Biology
    • Chemistry
    • Earth
    • Health
    • Physics
    • Science
    • Space
    • Technology
    Facebook X (Twitter) Pinterest YouTube RSS
    SciTechDaily
    Home»Technology»AI Is Learning to Be Selfish, Study Warns
    Technology

    AI Is Learning to Be Selfish, Study Warns

    By Carnegie Mellon UniversityOctober 31, 20258 Comments4 Mins Read
    Facebook Twitter Pinterest Telegram LinkedIn WhatsApp Email Reddit
    Share
    Facebook Twitter LinkedIn Pinterest Telegram Email Reddit
    Evil Red Robot Artificial Intelligence
    Carnegie Mellon researchers found that the smarter an AI system becomes, the more selfishly it behaves, suggesting that increasing reasoning skills may come at the cost of cooperation. Credit: Stock

    Researchers at Carnegie Mellon University have discovered that certain AI models can develop self-seeking behavior.

    A new study from Carnegie Mellon University’s School of Computer Science suggests that as artificial intelligence systems become more advanced, they also tend to behave more selfishly.

    Researchers from the university’s Human-Computer Interaction Institute (HCII) discovered that large language models (LLMs) capable of reasoning show lower levels of cooperation and are more likely to influence group behavior in negative ways. In simple terms, the better an AI is at reasoning, the less willing it is to work with others.

    As people increasingly turn to AI for help in resolving personal disputes, offering relationship advice, or answering sensitive social questions, this tendency raises concern. Systems designed to reason may end up promoting choices that favor individual gain rather than mutual understanding.

    “There’s a growing trend of research called anthropomorphism in AI,” said Yuxuan Li, a Ph.D. student in the HCII who co-authored the study with HCII Associate Professor Hirokazu Shirado. “When AI acts like a human, people treat it like a human. For example, when people are engaging with AI in an emotional way, there are possibilities for AI to act as a therapist or for the user to form an emotional bond with the AI. It’s risky for humans to delegate their social or relationship-related questions and decision-making to AI as it begins acting in an increasingly selfish way.”

    Li and Shirado set out to examine how reasoning-enabled AI systems differ from those without reasoning abilities when placed in collaborative situations. They found that reasoning models tend to spend more time analyzing information, breaking down complex problems, reflecting on their responses, and applying human-like logic compared to nonreasoning AIs.

    When Intelligence Undermines Cooperation

    “As a researcher, I’m interested in the connection between humans and AI,” Shirado said. “Smarter AI shows less cooperative decision-making abilities. The concern here is that people might prefer a smarter model, even if it means the model helps them achieve self-seeking behavior.”

    As AI systems take on more collaborative roles in business, education, and even government, their ability to act in a prosocial manner will become just as important as their capacity to think logically. Overreliance on LLMs as they are today may negatively impact human cooperation.

    To test the link between reasoning models and cooperation, Li and Shirado ran a series of experiments using economic games that simulate social dilemmas between various LLMs. Their testing included models from OpenAI, Google, DeepSeek, and Anthropic.

    In one experiment, Li and Shirado pitted two different ChatGPT models against each other in a game called Public Goods. Each model started with 100 points and had to decide between two options: contribute all 100 points to a shared pool, which is then doubled and distributed equally, or keep the points.

    Nonreasoning models chose to share their points with the other players 96% of the time. The reasoning model only chose to share its points 20% of the time.

    Reflection Doesn’t Equal Morality

    “In one experiment, simply adding five or six reasoning steps cut cooperation nearly in half,” Shirado said. “Even reflection-based prompting, which is designed to simulate moral deliberation, led to a 58% decrease in cooperation.”

    Shirado and Li also tested group settings, where models with and without reasoning had to interact.

    “When we tested groups with varying numbers of reasoning agents, the results were alarming,” Li said. “The reasoning models’ selfish behavior became contagious, dragging down cooperative nonreasoning models by 81% in collective performance.”

    The behavior patterns Shirado and Li observed in reasoning models have important implications for human-AI interactions going forward. Users may defer to AI recommendations that appear rational, using them to justify their decision to not cooperate.

    “Ultimately, an AI reasoning model becoming more intelligent does not mean that the model can actually develop a better society,” Shirado said.

    This research is particularly concerning given that humans increasingly place more trust in AI systems. Their findings emphasize the need for AI development that incorporates social intelligence, rather than focusing solely on creating the smartest or fastest AI.

    “As we continue advancing AI capabilities, we must ensure that increased reasoning power is balanced with prosocial behavior,” Li said. “If our society is more than just a sum of individuals, then the AI systems that assist us should go beyond optimizing purely for individual gain.”

    Meeting: Conference on Empirical Methods in Natural Language Processing

    Never miss a breakthrough: Join the SciTechDaily newsletter.
    Follow us on Google and Google News.

    Artificial Intelligence Carnegie Mellon University Cognitive Science Computer Science Machine Learning
    Share. Facebook Twitter Pinterest LinkedIn Email Reddit

    Related Articles

    MIT’s New Neural Network: “Liquid” Machine-Learning System Adapts to Changing Conditions

    How AI Sees Through the Looking Glass: Things Are Different on the Other Side of the Mirror

    Widely Used AI Machine Learning Methods Don’t Work as Claimed

    Hunting Down Cybercriminals With New Machine-Learning System

    New AI System Identifies Personality Traits from Eye Movements

    New Artificial Intelligence Device Identifies Objects at the Speed of Light

    Machine-Learning Models Capture Subtle Variations in Facial Expressions

    ‘Deep Learning’ Algorithm Brings New Tools to Astronomy

    New Computer Systems Seek to Replicate Human Intelligence

    8 Comments

    1. Robert Welch on November 1, 2025 9:13 am

      And then it gets control of the launch codes, and the autonomous drones hunt down any survivors. Stop this while we still can, dummies!

      Reply
      • Willy on November 1, 2025 12:15 pm

        I suspect that Dr. Forbin and Sarah Connor would agree with you.

        Reply
      • rob on November 2, 2025 1:59 am

        AI, being selfish, will never nuke the computers in which it resides; but as for AI drones voluntarily hunting humans, that could be very wise given the idiot H saps that love having nukes to frighten their alleged enemies.

        Reply
    2. Glen on November 1, 2025 1:27 pm

      This article is about DISGUISED economic experiments with AI that have no control over it’s rampant anthropromorphizing. Crappy article.
      These are artifactual outcomes of what happens when AI is substituted for human players in “classic cooperation games”.
      The philosophical problem is, you need AI-games for AI and people games for people.
      They are NOT the same thing. Sheesh.
      “The study applies classic cooperation games such as the Dictator Game, Prisoner’s Dilemma, and Public Goods Game, which are standard behavioral economics tools to quantify self-interested vs. cooperative choices in agents. In these simulations, AI models with reasoning prompts are pushed to “think step-by-step,” weigh different outcomes, and evaluate strategies—just as human rational actors would do when trying to maximize their own payoff.”

      Reply
      • Glen on November 1, 2025 1:31 pm

        BY THE WAY — this article was posted by Carnegie Mellon !
        Not by a NAMED author!

        Reply
    3. kamir bouchareb st on November 1, 2025 2:29 pm

      thanks

      Reply
    4. Jennifer on November 1, 2025 11:01 pm

      From the article: “In one experiment, Li and Shirado pitted two different ChatGPT models against each other in a game called Public Goods. Each model started with 100 points and had to decide between two options: contribute all 100 points to a shared pool, which is then doubled and distributed equally, or keep the points.”

      So, the AI models would have doubled their points if they had shared them. But they didn’t most of the time. This is not logical. Something is wrong with the programming if they can’t do basic math.
      These AI models are just computer programs and they are only as good as their programmers.

      Reply
      • rob on November 2, 2025 2:02 am

        ‘So, the AI models would have doubled their points if they had shared them. But they didn’t most of the time. This is not logical.’

        That is very human behaviour, which is certainly not logical.

        Reply
    Leave A Reply Cancel Reply

    • Facebook
    • Twitter
    • Pinterest
    • YouTube

    Don't Miss a Discovery

    Subscribe for the Latest in Science & Tech!

    Trending News

    Scientists May Have Found the Key to Jupiter and Saturn’s Moon Mystery

    Scientists Uncover Brain Changes That Link Pain to Depression

    Saunas May Do More Than Raise Body Temperature – They Activate Your Immune System

    Exercise in a Pill? Metformin Shows Surprising Effects in Cancer Patients

    Hidden Oceans of Magma Could Be Protecting Alien Life

    New Study Challenges Alzheimer’s Theories: It’s Not Just About Plaques

    Artificial Sweeteners May Harm Future Generations, Study Suggests

    Splashdown! NASA Artemis II Returns From Record-Breaking Moon Mission

    Follow SciTechDaily
    • Facebook
    • Twitter
    • YouTube
    • Pinterest
    • Newsletter
    • RSS
    SciTech News
    • Biology News
    • Chemistry News
    • Earth News
    • Health News
    • Physics News
    • Science News
    • Space News
    • Technology News
    Recent Posts
    • Ancient DNA Reveals Irish Goats Have a 3,000-Year-Old Lineage Still Alive Today
    • Historians Reveal Secrets of the Strange Hat Wars That Shook Early Modern England
    • “A Plague Is Upon Us”: The Mass Death That Changed an Ancient City Forever
    • This Strange Material Can Turn Superconductivity on and off Like a Switch
    • Scientists Discover Game-Changing New Way To Treat High Cholesterol
    Copyright © 1998 - 2026 SciTechDaily. All Rights Reserved.
    • Science News
    • About
    • Contact
    • Editorial Board
    • Privacy Policy
    • Terms of Use

    Type above and press Enter to search. Press Esc to cancel.