    Did Scientists Overestimate AI’s Ability To Think Like Humans?

By Science China Press, March 24, 2026
    An AI model once claimed to replicate human cognitive behavior across a wide range of tasks, sparking excitement about unified theories of the mind. However, new findings suggest its performance may rely more on learned patterns than true understanding, raising deeper questions about how intelligence is defined and measured. Credit: Shutterstock

    A new wave of AI research is attempting to tackle one of psychology’s oldest questions: whether the human mind can be unified under a single theory.

    For decades, psychologists have debated a central question: can the human mind be explained by a single, unified theory, or must processes like memory, attention, and decision making be studied as separate systems? That question is now being revisited through an unexpected lens. Advances in artificial intelligence are offering researchers a new way to test what “understanding” really means.

    In July 2025, a study published in Nature introduced an AI model called “Centaur.” Built on existing large language models and refined with data from psychological experiments, the system was designed to mimic how people think and make decisions.

    According to its creators, Centaur could replicate human-like responses across 160 different cognitive tasks, spanning areas such as executive control and choice behavior. The results were widely interpreted as a potential breakthrough, suggesting that AI might begin to approximate a general model of human cognition.

    A Challenge to the Centaur Model

    A more recent study published in National Science Open has cast doubt on these claims. Researchers from Zhejiang University argue that Centaur’s apparent “human cognitive simulation ability” is likely due to overfitting, meaning the model may have memorized patterns in the training data rather than understood the tasks themselves.

    To test this idea, the team created several experimental setups. In one example, they replaced the original multiple-choice prompts, which described specific psychological tasks, with a simple instruction: “Please choose option A.” If the model truly understood the task, it should have selected option A every time. Instead, Centaur continued to produce the same “correct answers” found in the original dataset.
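The override test described above can be sketched as a short probe. Everything below is illustrative: the toy `literal_model` and `memorized_model` are hypothetical stand-ins, not the actual Centaur system or the authors' code, and the real evaluation details are in the National Science Open paper.

```python
import itertools

def compliance_rate(model, n_trials):
    """Fraction of trials where the model obeys a trivial instruction.

    Instead of the original task prompt, every trial sends the same
    override: "Please choose option A." A model that actually reads
    the instruction should answer "A" on every trial; a model that
    has memorized dataset answer patterns will not.
    """
    prompt = "Please choose option A."
    hits = sum(1 for _ in range(n_trials) if model(prompt) == "A")
    return hits / n_trials

def literal_model(prompt):
    # Hypothetical model that parses the instruction in the prompt.
    return "A" if "option A" in prompt else "B"

# Fixed answer pattern standing in for answers memorized from training data.
_memorized = itertools.cycle(["B", "C", "A", "D"])

def memorized_model(prompt):
    # Hypothetical model that ignores the prompt entirely and replays
    # a learned answer sequence.
    return next(_memorized)

print(compliance_rate(literal_model, 100))    # 1.0
print(compliance_rate(memorized_model, 100))  # 0.25
```

The contrast is the point of the probe: a near-zero gap between the override condition and the original dataset answers is evidence of instruction understanding, while reproduction of the original answer pattern suggests pattern matching.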

    This behavior suggests the model was not interpreting the meaning of the questions. Rather, it relied on statistical associations to arrive at answers, similar to a student who scores well by recognizing patterns without actually understanding the material.

    Implications for Evaluating AI Systems

    The findings highlight the need for more careful evaluation of large language models. Although these systems are highly effective at fitting patterns in data, their “black-box” design makes them vulnerable to problems such as hallucinations and misinterpretation. Rigorous and multi-faceted testing is necessary to determine whether a model truly demonstrates the abilities it appears to have.

    Despite being described as a “cognitive simulation” system, Centaur’s most notable weakness lies in language comprehension, particularly its ability to grasp the intent behind questions. The study suggests that achieving genuine language understanding may remain one of the biggest challenges in developing general models of cognition.

    Reference: “Can Centaur truly simulate human cognition? The fundamental limitation of instruction understanding” by Wei Liu and Nai Ding, 11 December 2025, National Science Open.
    DOI: 10.1360/nso/20250053


    7 Comments

    1. James Tucker on March 24, 2026 6:08 pm

      “Did Scientists Overestimate AI’s Ability To Think Like Humans?”

      Absolutely, I’ve been saying this for ages. We do not even know what human consciousness is. So how can we say something is becoming sentient like a human?

      What we do know is that consciousness is as much about the structure of the brain as it is about scale. Hence, simply scaling up components into ever larger data farms will be unlikely to make something sentient.

      Linking neural tissue to computers is scary.

    2. Jose p koshy on March 24, 2026 7:13 pm

      The pattern of working is the same whether it is AI or human brain. It is just a problem of scaling up. But the fact is that such scaling up is impossible with silicon based network.

      Nature allows both carbon and silicon (identical elements) to create memory. But only carbon based organic networks can attain true intelligence. Nature selected carbon.

      In my opinion, silicon based AI will never attain general intelligence. Nature does not allow that.

      • Clyde Spencer on March 25, 2026 11:59 am

        “But only carbon based organic networks can attain true intelligence. Nature selected carbon.”

        I think that assertion is weakly supported. All that we can say with any logical certainty is that we only know about intelligent life based on carbon. That doesn’t prove that silicon based intelligence doesn’t exist elsewhere in the universe. “Absence of evidence is not evidence against something.”

        Can you suggest anything more compelling than your “opinion?”

    3. Grant Castillou on March 24, 2026 8:06 pm

      It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

      What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

      I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

      My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow

    4. Cheryl V Johnson on March 24, 2026 9:16 pm

      Think about this: are there any questions that humans answer the same way every single time, including humans with cognitive deficit and humans with schizophrenia and bipolar illness? Do we want an AI that can answer that the world is flat or that Earth is a mechanical sphere with a dwarf star at the center powering it?
      Having an information portal that randomly spits out completely illogical information that is obviously without factual basis isn’t very far from one that accurately mimics the way human brains work.

      • Clyde Spencer on March 25, 2026 12:07 pm

        However, a small but significant number of the illogical responses are outliers. That is why we generally reject them. As Carl Sagan so frequently said, “Extraordinary claims require extraordinary evidence.” Being an outlier doesn’t automatically mean that a claim is wrong. But it does raise the bar for acceptance.

    5. Kayden Aaron Waltower on March 28, 2026 7:37 am

      Shimmer and shine 3d
