
As AI rapidly advances and ethical debates intensify, scientists contend that understanding consciousness has become more urgent than ever.
As artificial intelligence (AI) continues to advance alongside growing ethical concerns, many scientists say that understanding consciousness has become a pressing scientific goal.
In a recent paper published in Frontiers in Science, researchers caution that progress in AI and neurotechnology is rapidly outpacing our grasp of how consciousness works, raising significant ethical risks.
They emphasize that uncovering the origins of conscious experience, which could eventually allow scientists to develop reliable tests for detecting it, must now be treated as both a scientific and moral priority. Achieving this understanding could influence a wide range of fields, including AI development, prenatal policy, animal welfare, medicine, mental health, law, and new neurotechnologies such as brain-computer interfaces.
“Consciousness science is no longer a purely philosophical pursuit. It has real implications for every facet of society—and for understanding what it means to be human,” said lead author Prof Axel Cleeremans from Université Libre de Bruxelles. “Understanding consciousness is one of the greatest challenges of 21st-century science—and it’s now urgent due to advances in AI and other technologies.
“If we become able to create consciousness—even accidentally—it would raise immense ethical challenges and even existential risk,” added Cleeremans, a European Research Council (ERC) grantee.
Sentience test
Consciousness, the awareness of both ourselves and the world around us, continues to be one of science’s most perplexing puzzles. Even after decades of study, researchers have yet to agree on how subjective experience emerges from biological activity in the brain.
Although scientists have identified certain brain regions and neural mechanisms linked to conscious awareness, debate persists over which of these are truly essential and how they work together to produce experience. Some experts even question whether this approach can ever fully explain the nature of consciousness.
This new review explores where consciousness science stands today, where it could go next, and what might happen if humans succeed in understanding or even creating consciousness—whether in machines or in lab-grown brain-like systems like “brain organoids.”
The authors say that tests for consciousness—evidence-based ways to judge whether a being or a system is aware—could help identify awareness in patients with brain injury or dementia, and determine when it arises in fetuses, animals, brain organoids, or even AI.
While this would mark a major scientific breakthrough, they warn it would also raise profound ethical and legal challenges about how to treat any system shown to be conscious.
“Progress in consciousness science will reshape how we see ourselves and our relationship to both artificial intelligence and the natural world,” said co-author Prof Anil Seth from the University of Sussex and ERC grantee. “The question of consciousness is ancient—but it’s never been more urgent than now.”
Wide implications
A better understanding of consciousness could:
- Transform medical care for unresponsive patients once thought to be unconscious. Measurements inspired by integrated information theory and global workspace theory have already revealed signs of awareness in some people diagnosed as having unresponsive wakefulness syndrome. Further progress could refine these tools to assess consciousness in coma, advanced dementia, and anesthesia—and reshape how we approach treatment and end-of-life care
- Guide new therapies for mental health conditions such as depression, anxiety, and schizophrenia, where understanding the biology of subjective experience may help bridge the gap between animal models and human emotion
- Clarify our moral duty towards animals by identifying which creatures and systems are sentient. This could affect how we conduct animal research, farm animals, consume animal products, and approach conservation. “Understanding the nature of consciousness in particular animals would transform how we treat them and emerging biological systems that are being synthetically generated by scientists,” said co-author Prof Liad Mudrik from Tel Aviv University and ERC grantee.
- Reframe how we interpret the law by illuminating the conscious and unconscious processes involved in decision-making. New understanding could challenge legal ideas such as mens rea—the “guilty mind” required to establish intent. As neuroscience reveals how much of our behavior arises from unconscious mechanisms, courts may need to reconsider where responsibility begins and ends
- Shape the development of neurotechnologies. Advances in AI, brain organoids, and brain–computer interfaces raise the prospect of producing or modifying awareness beyond biological life. While some suggest that computation alone might support awareness, others argue that biological factors are essential. “Even if ‘conscious AI’ is impossible using standard digital computers, AI that gives the impression of being conscious raises many societal and ethical challenges,” said Seth.
The authors call for a coordinated, evidence-based approach to consciousness. One example is adversarial collaboration, in which rival theories are pitted against each other in experiments co-designed by their proponents. “We need more team science to break theoretical silos and overcome existing biases and assumptions,” said Mudrik. “This step has the potential to move the field forward.”
The researchers also urge more attention to phenomenology (what consciousness feels like) to complement the study of what it does (its function).
“Cooperative efforts are essential to make progress—and to ensure society is prepared for the ethical, medical, and technological consequences of understanding, and perhaps creating, consciousness,” said Cleeremans.
Reference: “Consciousness science: where are we, where are we going, and what if we get there?” by Axel Cleeremans, Liad Mudrik and Anil K. Seth, 15 September 2025, Frontiers in Science.
DOI: 10.3389/fsci.2025.1546279
Funding: National Fund for Scientific Research, European Research Council
Comments
Don’t worry – ask AI when it will come.
Also, start with Information laws: they are more basic.
Does the highest machine intelligence necessarily imply self-awareness, selfhood, and ethical obligation? Seth, Mudrik & Cleeremans seem to think it might be possible to murder a cognizant robot. Would ethical sophistication give an intelloid the right to assume power, judge, and rule? The question is not intrinsically absurd.
It’s criminal that tens of thousands, if not hundreds of thousands, of scientists researching the brain for decades upon decades have still not been able to figure out what consciousness is, how it is created, or what happens to it when we die. Additionally, no one has yet identified how memories are stored and retrieved.
What exactly do all these people do in their 40-60 hour work weeks? Play video games?
Good points, Jojo.
The SAFE AI Foundation shares the concerns you have raised.
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, perhaps by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow
Assume you could upload a human mind into a machine. Would the machine mind then be conscious by virtue of effectively being the progeny of a human being?
I’ve been reading an interesting SF series called The Singularity Chronicles. There are supposed to be 5 books in the series but the author has only completed 2 to date.
He posits that it is emotions, created via our hormones, that make us human. This would be difficult to duplicate in a machine, but in the series that is exactly what happens. This is an idea that I have never really thought about!
https://thesingularitychronicles.com/
None of the current AI implementations (including LLMs) display sentience or consciousness. This is the problem with Turing tests, which LLMs have been passing for some time. Hype.
AI technology is still in its infancy, and yet look at all the amazing things already being accomplished by AI machines (e.g., protein folding).
Who knows where we will be when the technology matures to say, the level of a 3rd grader?