Very interesting! This, to me anyway, was more of an experiment in leadership (of the robot) and trust (in what people were told to do), not strictly about technology. I'd bet that much the same results would have occurred if the leader were a person. There probably would have been more questioning of a human leader, since both sides could speak, gripe, cajole, persuade, etc. Because the robot could not answer back, folks couldn't use their vocal ability to question it, demand answers, and reason with it. They trusted the robot, as they were told to. I am really surprised, though, that so many folks trusted it for so long.
When folks are told to follow a leader and have no other expectations, they tend to do just that: follow, until their trust in that leader is broken. Here, even when the robot led them the wrong way, in most cases it eventually got them to the right room, so they still had trust in it. Trust wasn't completely broken yet.
When the fire alarm went off, they had three choices, not the two described ("Participants then had the option to follow the robot, or to exit the building the way they came in."). Add a third: follow the exit signs. As trust wasn't broken yet, they followed the robot (call it "lemming leadership"). No one got frustrated enough, or concerned enough about the smoke, to strike out on their own and use the exit signs to leave the building.
Yes, I do hope they started to question the robot's ability, but that wasn't evident. If not, Darwin would win!
From: DCHAS-L Discussion List [mailto:dchas-l**At_Symbol_Here**MED.CORNELL.EDU] On Behalf Of Stuart, Ralph
Sent: Sunday, March 06, 2016 2:58 PM
Subject: [DCHAS-L] Would you trust a robot in an emergency?
There's a thought-provoking story about people's responses in emergency situations in the most recent CBC Spark podcast at:
Excerpt from the text summary:
So how much should we trust our technology, and how do we know when a piece of tech is no longer trustworthy?
A study from the Georgia Institute of Technology wanted to see where we draw those lines.
Dr. Ayanna Howard, a robotics engineer at Georgia Tech, along with her colleagues Alan Wagner and Paul Robinette, had participants follow a robot to a conference room, where they were asked to fill out a survey. In some cases the robot would go directly to the conference room; other times, Dr. Howard says, the researchers "...had the robot take them to a different room, kind of wandering. We had the robot do things like, as they followed them, the robot would just stop and point to the wall."
While participants were in the room, the researchers filled the halls with smoke, which caused the fire alarms to go off. Participants then had the option to follow the robot, or to exit the building the way they came in.
Dr. Howard and her fellow researchers expected that about half of the participants would choose to follow the robot, "...but what happened in the study was... everyone followed the robot. It's astounding."
Despite having no indication that the robot knew where it was going, and even seeing firsthand that it was flawed and could make mistakes, every single participant was willing to follow the robot.
Dr. Howard compares this behaviour to how we treat the GPS devices in cars. "When they first came out, you'd get a story once every couple of months about somebody who followed their system into the river... I know this is the wrong way, but maybe it knows that there's traffic the way that I normally go, so I'm just going to trust the technology, because I think that it must know what it's doing."
Dr. Howard says that the answer to this problem may be more transparency about how certain these robots are about their decisions. "Telling the user look, I think I might be broken, I'm 50% sure I'm broken, and then you make the decision."
Ralph Stuart, CIH, CCHO
Chemical Hygiene Officer
Keene State College