People will follow a robot in an emergency – even if it's wrong
Date: 01/03/16
A university student is holed up in a small office with a robot, completing an academic survey. Suddenly, an alarm rings and smoke fills the hall outside the door. The student is forced to make a quick choice: escape via the clearly marked exit that they entered through, or head in the direction the robot is pointing, along an unknown path and through an obscure door.
That was the real choice posed to 30 subjects in a recent experiment at the Georgia Institute of Technology in Atlanta. The results surprised researchers: almost everyone elected to follow the robot – even though it was taking them away from the real exit.
“We were surprised,” says Paul Robinette, the graduate student who led the study. “We thought that there wouldn’t be enough trust, and that we’d have to do something to prove the robot was trustworthy.”
The unexpected result is another piece of a puzzle that roboticists are struggling to solve. If people don’t trust robots enough, then the bots probably won’t be successful in helping us escape disasters or otherwise navigate the real world. But we also don’t want people to follow the instructions of a malicious or buggy machine. To researchers, the nature of that human-robot relationship is still elusive.
In the emergency study, Robinette’s team used a modified Pioneer P3-AT, a robot that looks like a small bin with wheels and has lit-up foam arms to point. Each participant would individually follow the robot along a hallway until it pointed to the room they were to enter. They would then fill in a survey to rate the robot’s navigation skills and read a magazine article. The emergency was simulated with artificial smoke and a First Alert smoke detector.
A total of 26 of the 30 participants chose to follow the robot during the emergency. Of the remaining four, two were excluded from the study for unrelated reasons, and the other two never left the room.
The results suggest that if people are told a robot is designed for a particular task – as was the case in this experiment – they will probably trust it to perform that task automatically, say the researchers. Indeed, in a survey given after the fake emergency was over, many of the participants explained that they followed the robot specifically because it was wearing a sign that read “EMERGENCY GUIDE ROBOT.”