Android-looking robot: reestablishing trust with human co-workers depends on how human-like robots look.
Source: Pixabay/botlibre
17.08.2021

Can we trust robots who goofed?

When robots make mistakes, reestablishing trust with human co-workers depends on how the machines own up to the errors and how human-like they appear, according to University of Michigan research.

In a study that examined multiple trust repair strategies—apologies, denials, explanations or promises—the researchers found that some approaches directed at human co-workers work better than others, and that their effectiveness often depends on how the robots look.

“Robots are definitely a technology but their interactions with humans are social and we must account for these social interactions if we hope to have humans comfortably trust and rely on their robot co-workers,” said Lionel Robert, associate professor at the U-M School of Information.

“Robots will make mistakes when working with humans, decreasing humans’ trust in them. Therefore, we must develop ways to repair trust between humans and robots. Specific trust repair strategies are more effective than others and their effectiveness can depend on how human the robot appears.”

For their study, Robert and doctoral student Connor Esterwood examined how the repair strategies—including a new strategy of explanations—impact the elements that drive trust: ability (competency), integrity (honesty) and benevolence (concern for the trustor).

The researchers recruited 164 participants to work with a robot in a virtual environment, loading boxes onto a conveyor belt. The human was the quality assurance person, working alongside a robot tasked with reading serial numbers and loading 10 specific boxes. One robot was anthropomorphic or more humanlike, the other more mechanical in appearance.

The robots were programmed to intentionally pick up a few wrong boxes and to make one of the following trust repair statements: “I’m sorry I got the wrong box” (apology), “I picked the correct box so something else must have gone wrong” (denial), “I see that was the wrong serial number” (explanation), or “I’ll do better next time and get the right box” (promise).

Previous studies have examined apologies, denials and promises as factors in trust or trustworthiness, but this is the first to look at explanations as a repair strategy. Explanations had the highest impact on integrity, regardless of the robot’s appearance.

When the robot was more humanlike, trust in its integrity was even easier to restore when explanations were given, and trust in its benevolence when apologies, denials and explanations were offered.

Consistent with previous research, apologies from robots produced higher ratings of integrity and benevolence than denials did. Promises outpaced apologies and denials on measures of both benevolence and integrity.

Esterwood said this study is ongoing with more research ahead involving other combinations of trust repairs in different contexts, with other violations. “In doing this we can further extend this research and examine more realistic scenarios like one might see in everyday life,” Esterwood said. “For example, does a barista robot’s explanation of what went wrong and a promise to do better in the future repair trust more or less than a construction robot?”

The study was published in the Proceedings of the 30th IEEE International Conference on Robot and Human Interactive Communication.
