Why do human-like robots elicit uncanny feelings?

Androids, or robots with humanlike features, are often more appealing to people than those that resemble machines — but only up to a certain point. Many people experience an uneasy feeling in response to robots that are nearly lifelike, and yet somehow not quite “right.” As a robot’s human likeness increases, the feeling of affinity can plunge into repulsion, a dip known as “the uncanny valley.”

Since the uncanny valley was first described, a common hypothesis has emerged to explain it. Known as the mind-perception theory, it proposes that when people see a robot with human-like features, they automatically add a mind to it. A growing sense that a machine appears to have a mind leads to the creepy feeling, according to this theory. “We found that the opposite is true,” says Wang Shensheng, first author of the new study, who did the work as a graduate student at Emory and recently received his PhD in psychology. “It’s not the first step of attributing a mind to an android but the next step of ‘dehumanizing’ it by subtracting the idea of it having a mind that leads to the uncanny valley. Instead of just a one-shot process, it’s a dynamic one.”

The findings have implications for both the design of robots and for understanding how we perceive one another as humans. “Robots are increasingly entering the social domain for everything from education to healthcare,” Wang says. “How we perceive them and relate to them is important both from the standpoint of engineers and psychologists.”

“At the core of this research is the question of what we perceive when we look at a face,” adds Philippe Rochat, Emory professor of psychology and senior author of the study. “It’s probably one of the most important questions in psychology. The ability to perceive the minds of others is the foundation of human relationships.”

The research may help in unraveling the mechanisms involved in mind-blindness — the inability to distinguish between humans and machines — such as in cases of extreme autism or some psychotic disorders, Rochat says.

Anthropomorphizing, or projecting human qualities onto objects, is common. “We often see faces in a cloud for instance,” Wang says. “We also sometimes anthropomorphize machines that we’re trying to understand, like our cars or a computer.”

Naming one’s car or imagining that a cloud is an animated being, however, is not normally associated with an uncanny feeling, Wang notes. That led him to hypothesize that something other than just anthropomorphizing may occur when viewing an android.

To tease apart the potential roles of mind perception and dehumanization in the uncanny valley phenomenon, the researchers conducted experiments focused on the temporal dynamics of the process. Participants were shown three types of images — human faces, mechanical-looking robot faces and android faces that closely resembled humans — and asked to rate each for perceived animacy or “aliveness.” The exposure times of the images were systematically manipulated, within milliseconds, as the participants rated their animacy.
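The paradigm described above can be pictured as a simple trial loop. The Python sketch below is purely illustrative: the face categories, exposure durations, rating scale and function names are assumptions made for the example, not the study’s actual parameters or code.

    # Illustrative sketch of a rapid-presentation animacy-rating paradigm.
    # All condition names, durations and the rating scale are hypothetical.
    from collections import defaultdict
    from itertools import product
    from statistics import mean

    FACE_TYPES = ["human", "mechanical_robot", "android"]   # stimulus categories
    EXPOSURE_MS = [50, 100, 250, 500, 1000]                  # hypothetical exposure times

    def present_and_rate(face_type, exposure_ms):
        """Placeholder for one trial: show a face for exposure_ms milliseconds
        and return the participant's animacy rating (e.g. on a 1-9 scale).
        A real experiment would drive a display and collect a response here."""
        raise NotImplementedError

    def run_block(n_repeats=10):
        """Collect ratings for every face type x exposure time combination."""
        ratings = defaultdict(list)
        for face_type, exposure in product(FACE_TYPES, EXPOSURE_MS):
            for _ in range(n_repeats):
                ratings[(face_type, exposure)].append(present_and_rate(face_type, exposure))
        return ratings

    def mean_animacy(ratings):
        """Average perceived animacy per condition; the key question is how that
        average changes with exposure time within each face type."""
        return {condition: mean(values) for condition, values in ratings.items()}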

The results showed that perceived animacy decreased significantly as a function of exposure time for android faces, but not for mechanical-looking robot or human faces. For android faces, the drop in perceived animacy occurred between 100 and 500 milliseconds of viewing time. That timing is consistent with previous research showing that people begin to distinguish between human and artificial faces around 400 milliseconds after stimulus onset.

A second set of experiments manipulated both the exposure time and the amount of detail in the images, ranging from a minimal sketch of the features to a fully blurred image. The results showed that removing details from the images of the android faces decreased the perceived animacy along with the perceived uncanniness.

“The whole process is complicated but it happens within the blink of an eye,” Wang says. “Our results suggest that at first sight we anthropomorphize an android, but within milliseconds we detect deviations and dehumanize it. And that drop in perceived animacy likely contributes to the uncanny feeling.”

The research was published in the journal Perception.
