Images of the agents appearing in the stories. From left to right: Asimo, iRobot, iClooney and a human being.
Source: Honda's Asimo robot: xcaballe (CC licensed); iRobot (Sonny): Twentieth Century Fox; iClooney: see Palomäki et al. 2018; human image: Radboud Faces Database.
Collage: University of Helsinki

How do people react to the appearance of robots?

Artificial intelligence and robotics are advancing rapidly, and the number of autonomous intelligent machines making moral choices among us is continuously rising. Knowledge of moral psychology as it pertains to artificial intelligence and robotics is important when discussing the ethics of their development.

'Moralities of Intelligent Machines' is a project investigating people's attitudes towards moral choices made by artificial intelligence. In the latest study completed under the project, participants read short narratives in which either a robot, a somewhat humanoid robot known as iRobot, a strongly humanoid robot called iClooney or a human being encounters a moral dilemma along the lines of the trolley problem and makes a specific decision. The participants were also shown images of these agents, after which they assessed the morality of their decisions. The study was funded by the Jane and Aatos Erkko Foundation and the Academy of Finland.

The trolley dilemma is a problem in which a person sees a driverless trolley careening down the tracks towards five people. The person can either do nothing or divert the trolley onto another track, saving the five but killing one individual standing on the other track.
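The dilemma pits a purely consequentialist rule (minimize total deaths) against a refusal to actively sacrifice anyone. As a minimal sketch, not anything from the study itself, the consequentialist option can be expressed as a simple comparison; the function name and numbers below are hypothetical illustrations.

```python
# Illustrative sketch (hypothetical, not from the study): a purely
# consequentialist agent choosing a trolley-dilemma action by
# comparing the expected number of deaths under each option.

def consequentialist_choice(deaths_if_inaction: int, deaths_if_switch: int) -> str:
    """Pick the action that minimizes total deaths."""
    if deaths_if_switch < deaths_if_inaction:
        return "switch"  # actively sacrifice the few to save the many
    return "do nothing"

# In the classic setup, inaction kills five and switching kills one,
# so a consequentialist agent switches the track.
print(consequentialist_choice(5, 1))
```

A deontological agent, by contrast, would return "do nothing" regardless of the counts, since it refuses to actively sacrifice anyone; the study asks how people judge such choices depending on who, or what, makes them.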

Attitudes more negative towards humanoid robots

According to the study, people consider the choice made by the humanoid iRobot and iClooney less ethically sound than the same decision made by a human or by a robot with a traditional robot-like appearance. Michael Laakasuo, project lead and principal investigator of the study, links the findings to the uncanny valley effect identified in prior research.

“Humanness in artificial intelligence is perceived as eerie or creepy, and attitudes towards such robots are more negative than towards more machine-like robots. This may be due to, for example, the difficulty of reacting to a humanoid being: is it an animal, a human or a tool?”

According to Laakasuo, the findings indicate that people do not find the idea of robots making moral decisions strange, since the decisions made by a human and a traditional robot were seen as equally acceptable. Rather, it is the robot's appearance that affects how the morality of its decisions is evaluated.

Discussion guides the regulation of AI

Laakasuo says that the number of intelligent machines making moral choices is growing in our society, with self-driving cars as an example. "It's important to know how people view intelligent machines and what kinds of factors affect related moral assessment. For instance, are traffic violations perpetrated by a stylish self-driving car perceived differently from those of a less classy model?"

This knowledge can influence the direction of AI and robotics development, as well as, among other things, product branding. It can also shape the political discussion on regulating artificial intelligence. For example, self-driving cars could become test laboratories of sorts for private companies: in the event of an accident, the consequences could be settled with money, risking human health in the name of technological progress by appealing to consequentialist morals.

“What kind of robots do we want to have among us: robots who save five people from being run over by a trolley, sacrificing one person, or robots who refuse to sacrifice anyone even if it would mean saving several lives? Should robots be designed to look like humans or not if their appearance affects the perceived morality of their actions?”

Related articles

Tackling ethics concerns regarding use of carebots

An analysis highlights the realistic pros and cons of apps and other technologies that use AI to benefit older adults, including those facing dementia and cognitive decline.

Finding moral common ground in human-robot relations

Designers who use ethics to shape better companion robots will end up making better humans, too, say UNSW researchers.

Robot therapists need rules

Numerous initiatives using robots for improving mental health already exist. However, the use of embodied AI in psychiatry poses ethical questions.

How do people emotionally respond to cloned faces?

Researchers examined people’s emotional response to cloned faces, which could soon become the norm in robotics.

Expanding human-robot collaboration in manufacturing

To enhance human-robot collaboration, researchers at Loughborough University have trained an AI to detect human intention.

Bipedal robot learns to run

Cassie the robot has made history by traversing 5 kilometers, completing the route in just over 53 minutes.

A contact aware robot design

Researchers have developed a new method to computationally optimize the shape and control of a robotic manipulator for a specific task.

Companion robot to support primary care providers

Intuition Robotics announced a significant expansion of ElliQ, their AI-driven companion robot, to enable the extension of primary care teams' presence into older adults' homes.

A robot scientist is ready for drug discovery

The robot scientist Eve has been assembled and is now operating at Chalmers University of Technology. Eve's first mission is to identify and test drugs against Covid-19.
