
Human-like artificial intelligence may face greater blame for moral violations
In a new study, participants tended to place greater blame on AIs involved in real-world moral violations when they perceived the AIs as having more human-like minds. Minjoo Joo of Sookmyung Women’s University in Seoul, South Korea, presents these findings in the open-access journal PLOS ONE on December 18, 2024.
Previous research has shown that people tend to blame AI for a variety of moral violations, such as an autonomous vehicle hitting a pedestrian or decisions that caused medical or military harm. Additional research suggests that people assign more blame to an AI they believe is capable of awareness, thought, and planning. People may be more likely to attribute such capacities to AIs they perceive as having human-like minds that can experience conscious feelings.
Building on this earlier research, Joo hypothesized that AIs perceived as having human-like minds may receive a greater share of blame for certain moral violations.
To test this idea, Joo conducted several experiments in which participants were presented with various real-world cases of moral violations involving AI, such as the racist auto-tagging of photos, and were asked questions assessing their mind perception of the AI involved, as well as the extent to which they assigned blame to the AI, its programmer, the company behind it, or the government. In some cases, participants’ perception of the AI’s mind was manipulated by describing a name, age, height, and hobbies for the AI.
Across the experiments, participants tended to place more blame on an AI when they perceived it as having a more human-like mind. In these cases, when asked to distribute relative blame, participants tended to assign less of it to the company involved. But when asked to rate each agent’s responsibility independently, the blame assigned to the company did not decrease.
These findings suggest that perception of an AI’s mind is a key factor in blame attribution for violations involving AI. Joo also raised concerns about the potentially harmful consequences of AI being misused as a scapegoat and called for further research on AI blame attribution.
The authors add: “Can AI be held responsible for moral violations? This research shows that perceiving AI as human-like increases the blame directed at the AI while reducing the blame placed on human stakeholders, raising concerns about the use of AI as a moral scapegoat.”