Human-like robots trick people into thinking they have a mind of their own
A human-like robot that has been programmed to interact socially with human companions is tricking people into thinking that mindless machines are self-aware, according to a new study.
The robotic trickster, dubbed iCub, is a child-sized humanoid robot created by the Italian Institute of Technology (IIT) in Genoa to study social interactions between humans and robots. The state-of-the-art android, which is 3.6 feet (1.1 meters) tall, has a human-like face, camera eyes that can maintain eye contact with people, and 53 degrees of freedom that allow it to complete complex tasks and mimic human behavior. Researchers can program iCub to act very much like a human, as demonstrated by its 2016 appearance on Italia's Got Talent, when the robot performed Tai Chi moves and wowed the judges with its witty conversational skills.
In the new study, researchers programmed the iCub to interact with human participants while they watched a series of short videos together. In one set of experiments, the iCub was programmed to behave like a human: greeting participants as they entered the room and reacting to the videos with vocalizations of joy, surprise and awe. In another, the robot was programmed to behave more like a machine, ignoring the nearby humans and emitting stereotypical robotic beeps.
The researchers found that people who were exposed to the more human-like version of the iCub were more likely to adopt a perspective known as the "intentional stance," meaning they believed the robot had thoughts and desires of its own, while those exposed to the less human-like version of the robot did not. The researchers had expected this to happen but were "very surprised" by how well it worked, study lead author Serena Marchesi and co-author Agnieszka Wykowska, both of the Social Cognition in Human-Robot Interaction unit at IIT, told Live Science in a joint email.
Related: Human-like robots create scary self-portraits
The iCub robot does have a limited capacity to "learn" via neural networks (a type of artificial intelligence, or AI, that mimics human brain processes), but it is far from self-aware, the researchers said.
Changing behavior
In each experiment, one human participant sat in a room with an iCub and watched three short two-minute video clips about animals. The research team decided to use watching videos as a shared task because it’s a common activity among friends and family, and they used footage featuring animals and “not including human or robotic characters” to avoid bias, the researchers said.
In the first set of experiments, the iCub was programmed to greet human participants, introduce itself and ask their names as they entered. During this interaction, the iCub also moved its camera "eyes" to maintain eye contact with the participant. Throughout the video-watching activity, it continued to act like a human, responding vocally to the scenes the way a person would. "It laughed when there was a funny scene in the film or behaved as if amazed by a beautiful visual scene," the researchers said.
In the second set of experiments, the iCub did not interact with the participants; its only reactions while watching the videos were machine-like noises, including "beeps like the ones a car sensor makes when approaching an obstacle," the researchers said. During these experiments, the cameras in the iCub's eyes were also disabled, so the robot could not maintain eye contact.
Intentional vs. mechanistic
Before and after the experiment, the researchers had participants complete the InStance Test (IST). The survey, which the research team designed in 2019, is used to measure people's opinions about the mental states of robots.
Using the IST, the study's authors assessed participants' reactions to 34 different scenarios. "Each scenario consists of a series of three images depicting the robot in daily activities," the researchers said. "Participants then choose between two sentences that describe the scenario." One sentence uses intentional language that alludes to a mental state (e.g., "iCub wants"), and the other uses action-focused mechanistic language ("iCub does"). In one scenario, participants were shown a series of images in which the iCub selected one of several tools from a table; they then chose between statements saying that the robot "holds onto nearby objects" (mechanistic) or is "fascinated by the use of the tool" (intentional).
The team found that participants who were exposed to the iCub's human-like behavior were more likely to shift from a mechanistic to an intentional stance in their survey responses, suggesting that the human-like behavior had changed the way they viewed the robot. By comparison, participants who interacted with the more robotic version of the iCub largely maintained a mechanistic stance in the second survey. This suggests that people need to see evidence of human-like behavior from a robot before they view it as human-like, the researchers said.
The next step
The findings suggest that humans can form social relationships with robots, according to the study. This could have implications for the use of robots in healthcare, particularly for elderly patients, the researchers said. However, much remains to be learned about human-robot interactions and social bonds, they cautioned.
One of the big questions the team wants to answer is whether people can bond with robots that don't look like humans but still display human-like behavior. It is difficult to predict whether a robot with a less human-like appearance would elicit the same response, the researchers said. In the future, they hope to repeat the experiment with robots of various shapes and sizes, they added.
The researchers also argue that for humans to form lasting social bonds with robots, people must let go of the preconceptions about sentient machines popularized by science fiction.
"Humans have a tendency to be afraid of the unknown," the researchers said. "But robots are just machines, and they are far less capable than their fictional portrayals in popular culture." To help people overcome this bias, scientists can better educate the public about what robots can do, and what they can't. After that, "machines will soon become less scary," they said.
The study was published online on July 7 in the journal Technology, Mind and Behavior.
Originally published on Live Science.