Not Science Fiction Anymore: What Happens When Machine Learning Goes Too Far?


New research explores the potential risks and ethical implications of machine sentience, emphasizing the importance of understanding and preparing for the emergence of consciousness in AI and machine learning technologies. It calls for careful consideration of the ethical use of sentient machines and highlights the need for future research to navigate the complex relationship between humans and these self-aware technologies. Credit: SciTechDaily.com

Every piece of fiction carries a kernel of truth, and now is the time to get a step ahead of sci-fi dystopias and determine what risks machine sentience could pose to humans.

Although people have long pondered the future of intelligent machinery, such questions have become all the more pressing with the rise of artificial intelligence (AI) and machine learning. These machines mimic human interaction: they can help solve problems, create content, and even carry on conversations. For fans of science fiction and dystopian novels, a looming issue could be on the horizon: what if these machines develop a sense of consciousness?

Researchers published their results in the Journal of Social Computing.

While this discussion of artificial sentience (AS) in machines presents no quantitative data, it draws many parallels between human language development and the factors machines would need in order to develop language in a meaningful way.

The Possibility of Conscious Machines

“Many of the people concerned with the possibility of machine sentience developing worry about the ethics of our use of these machines, or whether machines, being rational calculators, would attack humans to ensure their own survival,” said John Levi Martin, author and researcher. “We here are worried about them catching a form of self-estrangement by transitioning to a specifically linguistic form of sentience.”

The main characteristics that appear to make such a transition possible are: unstructured deep learning, as in neural networks (computer analysis of data and training examples to produce progressively better feedback); interaction with both humans and other machines; and a wide range of actions through which to continue self-driven learning. Self-driving cars are one example. Many forms of AI already check these boxes, raising the concern of what the next step in their "evolution" might be.
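
For readers who want a concrete picture of what "training examples producing better feedback" means, here is a minimal sketch in Python with NumPy of the feedback loop inside a tiny neural network. This is an illustrative toy under our own assumptions, not code from the study, and every name in it is ours:

```python
# Minimal sketch (not from the paper): a tiny neural network adjusts its
# weights after each pass so its outputs track the training examples more
# closely -- the "feedback" loop the article describes, at toy scale.
import numpy as np

rng = np.random.default_rng(0)

# Toy training examples: XOR, a classic task a single linear layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two small weight matrices: input -> hidden, hidden -> output.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate
for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: use the prediction error as feedback to adjust weights.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ err_out
    W1 -= lr * X.T @ err_h

print(out.round(2))  # approaches [[0], [1], [1], [0]] as training proceeds
```

Scaled up by many orders of magnitude and coupled to open-ended interaction with people and other machines, this kind of self-correcting loop is what the characteristics above describe.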

This discussion argues that it's not enough to be concerned with the development of AS in machines alone; it also raises the question of whether we're fully prepared for a type of consciousness to emerge in our machinery. Right now, with AI that can generate blog posts, diagnose an illness, create recipes, predict diseases, or tell stories perfectly tailored to its inputs, it's not far-fetched to imagine having what feels like a real connection with a machine that has learned of its state of being. However, the researchers warn, that is exactly the point at which we need to be wary of the outputs we receive.

The Dangers of Linguistic Sentience

“Becoming a linguistic being is more about orienting to the strategic control of information, and introduces a loss of wholeness and integrity…not something we want in devices we make responsible for our security,” said Martin. We have already put AI in charge of much of our information, essentially relying on it to learn the way a human brain does; entrusting it with so much vital information, almost recklessly, has become a dangerous game.

Mimicking human responses and strategically controlling information are two very different things. A “linguistic being” can have the capacity to be duplicitous and calculated in its responses. An important question follows: at what point do we find out we’re being played by the machine?

What comes next is in the hands of computer scientists, who will need to develop strategies or protocols for testing machines for linguistic sentience. The ethics of using machines that have developed a linguistic form of sentience, or sense of “self,” have yet to be fully established, but one can imagine the subject becoming a social hot topic. The relationship between a self-realized person and a sentient machine is sure to be complex, and the uncharted waters of this kind of kinship would surely raise many questions regarding ethics, morality, and the continued use of this “self-aware” technology.

Reference: “Through a Scanner Darkly: Machine Sentience and the Language Virus” by Maurice Bokanga, Alessandra Lembo and John Levi Martin, December 2023, Journal of Social Computing.
DOI: 10.23919/JSC.2023.0024

5 Comments on "Not Science Fiction Anymore: What Happens When Machine Learning Goes Too Far?"

  1. Your balls shall be sacrificed for taking it too far. So it’s high time we stopped further work on AI to preclude it from springing into consciousness. #Keep the balls safe

  2. david w. ferrin | February 8, 2024 at 5:30 pm

    The problem I have with this is that most people can’t define what a self-realized person is or what a sentient machine is, and have no idea what the relationship between a self-realized person and a sentient machine is…

  3. Eventually everything will be free, once AIs and their robot workers take over, if you believe in the future as written by SF author Iain M. Banks (now deceased), who presupposed a post-scarcity reality called The Culture in 10 novels, ruled by sentient “Minds.” Resources and energy are unlimited in this future, and therefore money and power mean nothing. Warfare is mostly abolished, and what does occur is between lesser races and The Culture’s machines. People live in huge spaceships always on the move between stars, capable of carrying billions, or on planets/moons, or in artificial orbitals, etc. This is a hedonistic universe where you can acquire, do, or be almost anything you want (even change sexes and give birth). The Minds take care of all the details and people do what makes them happy. Mostly, the Minds don’t get involved in petty BS among humans.

    OTOH, another SF author, Neal Asher, created a universe called the Polity that is also ruled by sentient machines. In this universe, the machines took over when we humans created yet another war among ourselves, but the machines that were supposed to fight refused and instead took over all government and military functions. There is a big honkin’ AI in charge of everything and a lot of minor AIs that help do its bidding across the patch of the universe it controls. There are no politicians (surely a good thing!). But AIs in this universe can go rogue (e.g., the AI war machine Penny Royal) and create all sorts of mayhem, death, and destruction. The Polity is far rawer than The Culture. It is a place where money, crime, various bad aliens, and regular warfare still exist.

    If we don’t get into space and expand to other star systems, then I believe the population of Earth will drastically fall. Most people contribute nothing to the world. Few will be remembered by anyone other than their families or close friends three months after they die.

    When there is no longer work for people because robots do most everything, then there is no reason to procreate. We should start seeing a dramatic drop in births within 20 years, as long as AI/Automation/Robot technology is allowed to advance w/o some sort of modern Luddite kickback.

  4. I hate to tell you all this, but if AI follows the same patterns as other weaponize-able technologies, then the level of AI tech the public is aware of is probably 20-30 years old at this point. The DoD, the 3-letter agencies, DARPA, and their network of private contractors are the ones who fund and drive the truly cutting-edge technologies, which are typically made publicly known only decades later. The SR-71 Blackbird was developed in the 1950s. The DoD was holding meetings about the use of AI back in the mid-1980s. It’s a bit laughable that this article implies the public should stop developers from doing something that was probably already done decades ago. The more I read this website, the more it comes across as nothing more than clickbait.

  5. Grant Castillou | February 9, 2024 at 10:08 am

    It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
