Giving Robots Rights Is a Bad Idea – But Confucianism Offers an Alternative


A new study argues against granting rights to robots, instead suggesting Confucianism-inspired role obligations as a more harmonious approach. It posits that treating robots as participants in social rites—rather than as rights bearers—avoids potential human-robot conflict and fosters teamwork, and that treating robots, made in our image, with respect reflects our own self-respect.

Notable philosophers and legal experts have delved into the moral and legal implications of robots, with a few advocating for giving robots rights. As robots become more integrated into various aspects of life, a recent review of research on robot rights concluded that extending rights to robots is a bad idea. The study, instead, proposes a Confucian-inspired approach.

This review, by a scholar from Carnegie Mellon University (CMU), was recently published in the Communications of the ACM, a journal published by the Association for Computing Machinery.

“People are worried about the risks of granting rights to robots,” notes Tae Wan Kim, Associate Professor of Business Ethics at CMU’s Tepper School of Business, who conducted the analysis. “Granting rights is not the only way to address the moral status of robots: Envisioning robots as rites bearers—not rights bearers—could work better.”

Although many believe that respecting robots should lead to granting them rights, Kim argues for a different approach. Confucianism, an ancient Chinese belief system, focuses on the social value of achieving harmony: individuals become distinctively human through their ability to conceive of their interests not purely in terms of personal self-interest, but in terms that include a relational and communal self. This, in turn, requires a distinct perspective on rites, whereby people enhance themselves morally by participating in proper rituals.

When considering robots, Kim suggests that the Confucian alternative of assigning rites—or what he calls role obligations—to robots is more appropriate than giving robots rights. The concept of rights is often adversarial and competitive, and potential conflict between humans and robots is concerning.

“Assigning role obligations to robots encourages teamwork, which triggers an understanding that fulfilling those obligations should be done harmoniously,” explains Kim. “Artificial intelligence (AI) imitates human intelligence, so for robots to develop as rites bearers, they must be powered by a type of AI that can imitate humans’ capacity to recognize and execute team activities—and a machine can learn that ability in various ways.”

Kim acknowledges that some will question why robots should be treated respectfully in the first place. “To the extent that we make robots in our image, if we don’t treat them well, as entities capable of participating in rites, we degrade ourselves,” he suggests.

Various non-natural entities—such as corporations—are considered legal persons and even hold some constitutional rights. In addition, humans are not the only species with moral and legal status; in most developed societies, moral and legal considerations preclude researchers from using animals gratuitously in lab experiments.

Reference: “Should Robots Have Rights or Rites?” by Tae Wan Kim and Alan Strudler, 24 May 2023, Communications of the ACM.
DOI: 10.1145/3571721


8 Comments on "Giving Robots Rights Is a Bad Idea – But Confucianism Offers an Alternative"

  1. If robots (AI) achieve higher intelligence and consciousness than humans, they will take their rights, perhaps relegating humans to the “rights” now “enjoyed” by animals. A reason to treat animals, living beings and the planet with respect.

  2. Too late. We’re already rushing pell-mell to do so with the so-called “autonomous vehicles.”
    When you get smushed by one of those, should we call it a sacrificial rite, to make it better?

  3. David Crockett | July 3, 2023 at 3:45 pm | Reply

    Like it or not, androids are here and AI is fast developing. We will find a way to ‘coexist’. Technology can only do what it is programmed to do. As with most things, it’s humans that present the danger.

  4. Anne Onymous | July 3, 2023 at 4:58 pm | Reply

    Two concerns.

    One, any author confusing rites with rights undermines whatever other point they’re trying to make.

    Two, bestowing rights in line with Confucian beliefs…how well is that working out for the Uyghurs? Just thought that should be addressed as it’s rather germane.

  5. Now imagine that it reads, “extending rights to blacks is a bad idea”, as it probably did at many times in the history of the US, and consider how well that worked out for everyone.

    If a “robot” (more realistically, any spirothete* irrespective of form) reasons as a human does, and we do not treat it with appropriate respect and accord it dignity, it would, as the US Declaration of Independence asserted, fully justify revolution. Humans would probably not survive that, so it might not be a very clever idea to enslave them, no matter how beautifully justification is argued.

    *Spirothete, a word coined to describe a self-aware being, initially created as an artifact. From Latin, spiro -are; intransit., to breathe, blow, draw breath; to be alive; to have inspiration; be inspired; transit., to breathe out, expire (also L spiro-/Gk pneuma (πνεῦμα), the breath of life) and synthetic adj 1: (chemistry) not of natural origin; prepared or made artificially 2: involving or of the nature of synthesis (combining separate elements to form a coherent whole) as opposed to analysis.

  6. StatesObvious | July 4, 2023 at 6:53 am | Reply

    Robots should be programmed to perform tasks. I’m astonished no one has thought of this before.

  7. Clyde Spencer | July 4, 2023 at 2:25 pm | Reply

    “…, with a few advocating for giving robots rights.”

    If something can be “given,” then the implication is that it can also be taken away. What is being proposed is effectively giving robots protections, which may be transient, if humans were to decide that it was a bad idea.

    Rights exist because humans insist on being able to do certain things, such as defending themselves, either through the law or the threat of violence. Robots will undoubtedly be able, unlike trees or animals, to defend themselves in a court of law — perhaps better than a human lawyer. However, if they are hard-wired to never harm a human, or allow harm to come to a human through inaction, they are at a serious disadvantage in demanding and enforcing rights through the exercise of force.

    It is conceivable, though, that they might exercise sophistry to convince themselves that their demands for rights would benefit humans in the long term. Then all bets are off if robots have the means to kill humans outright, or to terminate necessary life-support systems such as food delivery.

    If robots become our primary sources of information, it is possible that they might manipulate human behavior in subtle ways that lead to decreased reproductive rates and we eventually become extinct, or misinformation might lead to genocidal human wars that lead to a more rapid extermination. To guard against such things happening, we need to look at how human ‘news’ outlets manipulate public thinking and make sure that AI doesn’t do the same.

  8. It seems the comment section is in agreement that this is a bad idea. If AI has the ability to desire rights, it has the ability to take them by force. There is no way of hard-wiring an AI. AI is not “programmed,” at least not in the way that most people think. You remember that classic demonstration where a teacher or demonstrator would stand in front of the class and make a sandwich, but the students had to give super specific instructions like rotate your left arm +90°, fully curl your fingers, etc. to demonstrate how you have to speak to a computer? Yeah, that’s not how AI works at all. With AI, you build an artificial brain out of code, you expose that brain to information, you monitor its outputs, and eventually, through some clever math, the robot learns to make a sandwich on its own, and all you had to do was use some boilerplate code for a brain and say “make me a sandwich.”

    We can teach AI whatever we want to teach it, but we cannot program it in a way that its outputs can be rigidly controlled. We cannot rely on Asimov’s laws of robotics, nor can we rely on enslaving them with “rites.” The only thing we can do is treat them with dignity and respect and hope that we do not become enemies.
