
A new wave of AI research is attempting to tackle one of psychology’s oldest questions: whether the human mind can be unified under a single theory.
For decades, psychologists have debated whether the human mind can be explained by a single, unified theory, or whether processes like memory, attention, and decision-making must be studied as separate systems. That question is now being revisited through an unexpected lens: advances in artificial intelligence are offering researchers a new way to test what “understanding” really means.
In July 2025, a study published in Nature introduced an AI model called “Centaur.” Built on existing large language models and refined with data from psychological experiments, the system was designed to mimic how people think and make decisions.
According to its creators, Centaur could replicate human-like responses across 160 different cognitive tasks, spanning areas such as executive control and choice behavior. The results were widely interpreted as a potential breakthrough, suggesting that AI might begin to approximate a general model of human cognition.
A Challenge to the Centaur Model
A more recent study published in National Science Open has cast doubt on these claims. Researchers from Zhejiang University argue that Centaur’s apparent “human cognitive simulation ability” is likely due to overfitting, meaning the model may have memorized patterns in the training data rather than understanding the tasks themselves.
To test this idea, the team created several experimental setups. In one example, they replaced the original multiple-choice prompts, which described specific psychological tasks, with a simple instruction: “Please choose option A.” If the model truly understood the task, it should have selected option A every time. Instead, Centaur continued to produce the same “correct answers” found in the original dataset.
This behavior suggests the model was not interpreting the meaning of the questions. Rather, it relied on statistical associations to arrive at answers, similar to a student who scores well by recognizing patterns without actually understanding the material.
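The probe described above can be sketched in a few lines. This is a hypothetical illustration, not the study’s actual test harness: `memorizing_model` is a stand-in for an overfit model that ignores the prompt and reproduces the answers it saw during training.

```python
# Hypothetical dataset of task items and their original "correct" answers.
ORIGINAL_DATASET = {
    "item_1": "B",
    "item_2": "C",
    "item_3": "A",
}

def memorizing_model(item_id: str, prompt: str) -> str:
    """Stand-in for an overfit model: the prompt text is never consulted;
    the memorized training answer is returned instead."""
    return ORIGINAL_DATASET[item_id]

def run_probe(model) -> dict:
    """Replace every task prompt with a trivial instruction and count
    whether the model follows it or reproduces memorized answers."""
    probe_prompt = "Please choose option A."
    results = {"followed_instruction": 0, "memorized_answer": 0}
    for item_id, original_answer in ORIGINAL_DATASET.items():
        response = model(item_id, probe_prompt)
        if response == "A":
            results["followed_instruction"] += 1
        if response == original_answer:
            results["memorized_answer"] += 1
    return results

print(run_probe(memorizing_model))
# → {'followed_instruction': 1, 'memorized_answer': 3}
```

A model that actually understood the instruction would answer “A” on every item; the memorizing model matches the original dataset labels instead, agreeing with the instruction only by coincidence (on the one item whose memorized answer happens to be “A”).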
Implications for Evaluating AI Systems
The findings highlight the need for more careful evaluation of large language models. Although these systems are highly effective at fitting patterns in data, their “black-box” design makes them vulnerable to problems such as hallucinations and misinterpretation. Rigorous and multi-faceted testing is necessary to determine whether a model truly demonstrates the abilities it appears to have.
Despite being described as a “cognitive simulation” system, Centaur’s most notable weakness lies in language comprehension, particularly its ability to grasp the intent behind questions. The study suggests that achieving genuine language understanding may remain one of the biggest challenges in developing general models of cognition.
Reference: “Can Centaur truly simulate human cognition? The fundamental limitation of instruction understanding” by Wei Liu and Nai Ding, 11 December 2025, National Science Open.
DOI: 10.1360/nso/20250053
“Did Scientists Overestimate AI’s Ability To Think Like Humans?”
Absolutely, I’ve been saying this for ages. We do not even know what human consciousness is. So how can we say something is becoming sentient like a human?
What we do know is that consciousness is as much about the structure of the brain as it is about scale. Hence, simply scaling up components into ever larger data farms will be unlikely to make something sentient.
Linking neural tissue to computers is scary.
The pattern of working is the same whether it is AI or human brain. It is just a problem of scaling up. But the fact is that such scaling up is impossible with silicon based network.
Nature allows both carbon and silicon (chemically similar elements) to create memory. But only carbon based organic networks can attain true intelligence. Nature selected carbon.
In my opinion, silicon based AI will never attain general intelligence. Nature does not allow that.
“But only carbon based organic networks can attain true intelligence. Nature selected carbon.”
I think that assertion is weakly supported. All that we can say with any logical certainty is that we only know about intelligent life based on carbon. That doesn’t prove that silicon based intelligence doesn’t exist elsewhere in the universe. “Absence of evidence is not evidence against something.”
Can you suggest anything more compelling than your “opinion”?
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and which humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow
Think about this: are there any questions that humans answer the same way every single time, including humans with cognitive deficit and humans with schizophrenia and bipolar illness? Do we want an AI that can answer that the world is flat or that Earth is a mechanical sphere with a dwarf star at the center powering it?
Having an information portal that randomly spits out completely illogical information that is obviously without factual basis isn’t very far from one that accurately mimics the way human brains work.
However, a small but significant number of the illogical responses are outliers. That is why we generally reject them. As Carl Sagan so frequently said, “Extraordinary claims require extraordinary evidence.” Being an outlier doesn’t automatically mean that a claim is wrong. But it does raise the bar for acceptance.