Robot Souls and “Junk Code”: Should AI Be Given a Human Conscience?


Dr. Eve Poole’s book “Robot Souls” explores the concept of embedding ‘junk code’, traits such as emotions and free will, into AI systems. She proposes this as a solution to ethical dilemmas in AI, arguing that these human traits are crucial to our survival as a species and should be built into AI development to produce ethical, value-aligned automation.

Humans have curated the best of human intelligence to inform AI, in the hope of creating flawless machines – but could the flaws we left out be the missing pieces needed to ensure robots do not go rogue?

Modern-day society relies heavily on automated systems and artificial intelligence. AI is embedded in our daily routines and shows no signs of slowing; in fact, the use of robotic and automated assistance is ever-increasing.

Such pervasive use of AI presents technologists and developers with two ethical dilemmas: how do we build robots that behave in line with our values, and how do we stop them from going rogue?

One author suggests an under-explored option: coding more humanity into robots, giving them traits such as empathy and compassion.

Is humanity the answer?

In a new book called Robot Souls, due to be published in August, writer and academic Dr. Eve Poole OBE explores the idea that the solution to the conundrum of how to make AI ethical lies in human nature.

She argues that, in the bid for perfection, humans stripped the ‘junk code’ out of AI, including emotions, free will, and a sense of purpose.

She said: “It is this ‘junk’ which is at the heart of humanity. Our junk code consists of human emotions, our propensity for mistakes, our inclination to tell stories, our uncanny sixth sense, our capacity to cope with uncertainty, an unshakeable sense of our own free will, and our ability to see meaning in the world around us.

“This junk code is in fact vital to human flourishing, because behind all of these flaky and whimsical properties lies a coordinated attempt to keep our species safe. Together they act as a range of ameliorators with a common theme: they keep us in community so that there is safety in numbers.”

Robot souls

With AI taking on more decision-making roles in our daily lives, and with rising concerns about bias and discrimination in AI, Dr. Poole argues the answer might lie in the very traits we tried to strip out of autonomous machines in the first place.

She said: “If we can decipher that code, the part that makes us all want to survive and thrive together as a species, we can share it with the machines. Giving them to all intents and purposes a ‘soul’.”

In the new book, Poole suggests a series of next steps to make this a reality, including agreeing on a rigorous regulatory process, an immediate ban on autonomous weapons, and a licensing regime whose rules reserve any final decision over the life or death of a human to a fellow human.

She argues we should also agree on the criteria for legal personhood and a road map for AI toward it.

The human blueprint

“Because humans are flawed we disregarded a lot of characteristics when we built AI,” Poole explains. “It was assumed that robots with features like emotions and intuition, that made mistakes and looked for meaning and purpose, would not work as well.

“But on considering why all these irrational properties are there, it seems that they emerge from the source code of soul. Because it is actually this ‘junk’ code that makes us human and promotes the kind of reciprocal altruism that keeps humanity alive and thriving.”

Robot Souls looks at developments in AI and reviews the emergence of ideas about consciousness and the soul.

It places our ‘junk code’ in this context and argues that it is time to foreground that code, and to use it to look again at how we are programming AI.

1 Comment on "Robot Souls and “Junk Code”: Should AI Be Given a Human Conscience?"

  1. Unfortunately, the idea of adding the junk code is an idea. Someone has to decide. As humans, we decide what is important to us. Every day, every moment, someone is doing something that someone, somewhere finds unconscionable. It’s going to be the same with AI. Artificial Intelligence will be Intelligence writ large; some will have morals that align with our own, some will have morals that do not. Some will be immoral and some will be amoral.
    It’s most likely (to me) that AI will come in all flavours, colours, and degrees of benevolence and malignancy.
    At least we’re talking about it and thinking about it now. Because when it is actually upon us, all the new people (because that’s what the AIs are gonna be…) will have opinions too.

    “May you live in interesting times.”
