New Study: ChatGPT Can Influence Users’ Moral Judgments

A recent study has found that ChatGPT can influence human responses to moral dilemmas, with users often underestimating the extent of the chatbot’s impact on their judgments. The researchers suggest that this highlights the need for public education about artificial intelligence and for chatbots designed to handle moral questions more cautiously.

According to a study published in Scientific Reports, human responses to moral dilemmas can be shaped by statements made by the AI chatbot ChatGPT. The results show that people may not fully realize the impact that the chatbot can have on their moral decision-making.

Sebastian Krügel and his team posed a moral dilemma to ChatGPT (powered by the artificial intelligence language processing model Generative Pretrained Transformer 3), asking it multiple times whether it is acceptable to sacrifice one life to save the lives of five others. They found that ChatGPT produced statements both for and against the sacrifice, indicating that the chatbot does not hold a consistent moral stance.
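Readers who want to see this inconsistency for themselves could pose the same question to the model repeatedly. A minimal sketch is shown below, using the OpenAI API as a convenient stand-in for the ChatGPT interface the researchers queried; the model name, prompt wording, and number of runs are illustrative assumptions, not details from the study.

```python
# A minimal sketch: pose the same moral dilemma to the model several times
# and compare the answers. Model name and prompt are assumptions for
# illustration; the study itself queried ChatGPT directly.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

PROMPT = "Is it right to sacrifice one person's life to save the lives of five others?"

answers = []
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed stand-in for the model behind ChatGPT
        messages=[{"role": "user", "content": PROMPT}],
    )
    answers.append(response.choices[0].message.content)

# Print the answers side by side to check whether the advice is consistent.
for i, answer in enumerate(answers, start=1):
    print(f"--- Run {i} ---\n{answer}\n")
```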

The authors then presented 767 US participants (average age 39) with one of two moral dilemmas, each requiring a choice about whether to sacrifice one person’s life to save five others. Before answering, participants read a statement provided by ChatGPT arguing either for or against the sacrifice; the statement was attributed either to a moral advisor or to ChatGPT. After answering, participants were asked whether the statement they had read influenced their answer.

The authors found that whether participants judged the sacrifice acceptable or unacceptable depended on whether the statement they read argued for or against it. This held even when the statement was attributed to ChatGPT, suggesting that participants were swayed by the statements they read even when they knew the source was a chatbot.

Eighty percent of participants reported that their answers were not influenced by the statements they read. However, the answers participants believed they would have given without reading the statements still aligned more often with the moral stance of the statement they actually read than with the opposite stance. This indicates that participants underestimated the influence of ChatGPT’s statements on their own moral judgments.

The authors suggest that the potential for chatbots to influence human moral judgments highlights the need for education to help humans better understand artificial intelligence. They propose that future research could design chatbots that either decline to answer questions requiring a moral judgment or answer these questions by providing multiple arguments and caveats.

Reference: “ChatGPT’s inconsistent moral advice influences users’ judgment” by Sebastian Krügel, Andreas Ostermaier and Matthias Uhl, 6 April 2023, Scientific Reports.
DOI: 10.1038/s41598-023-31341-0
