From: "Yanchisin, Mark" <mark**At_Symbol_Here**EHS.UFL.EDU>
Subject: Re: [DCHAS-L] Would you trust a robot in an emergency?
Date: Mon, 7 Mar 2016 22:06:13 +0000
Reply-To: DCHAS-L <DCHAS-L**At_Symbol_Here**MED.CORNELL.EDU>
Message-ID: ca3829325a0b46afbc1c9aeb6cf8d97f**At_Symbol_Here**exmbxprd13.ad.ufl.edu


Very interesting! This, to me anyway, was more of an experiment in leadership (of the robot) and trust (in what the participants were told to do) than one strictly about technology. I'd bet the results would have been much the same if the leader had been a person, though there probably would have been more questioning of a human leader, since both parties could speak, gripe, cajole, and persuade. Because the robot could not answer back, people couldn't question it, demand answers, or reason with it. They trusted the robot because they were told to. I am really surprised, though, that so many people trusted it for so long.

When people are told to follow a leader and have no other expectations, they tend to do just that: follow, until their trust in that leader is broken. Here, even when the robot led them the wrong way, in most cases it eventually got them to the right room, so they still trusted it. Their trust hadn't been completely broken yet.

When the fire alarm went off, they had three choices, not the two described ("Participants then had the option to follow the robot, or to exit the building the way they came in."). The third option: follow the exit signs. Because trust hadn't been broken yet, they followed the robot (call it "lemming leadership"). No one got frustrated enough, or concerned enough about the smoke, to strike out on their own and follow the exit signs out of the building.

Yes, I do hope they started to question the robot's ability, but that wasn't evident. If not, Darwin wins!


Mark Yanchisin


-----Original Message-----
From: DCHAS-L Discussion List [mailto:dchas-l**At_Symbol_Here**MED.CORNELL.EDU] On Behalf Of Stuart, Ralph
Sent: Sunday, March 06, 2016 2:58 PM
To: DCHAS-L**At_Symbol_Here**MED.CORNELL.EDU
Subject: [DCHAS-L] Would you trust a robot in an emergency?

There's a thought-provoking story about people's responses in emergency situations in the most recent CBC Spark podcast at:

http://www.cbc.ca/radio/spark/312-growth-and-the-start-up-economy-twitter-bot-art-and-more-1.3471294/would-you-trust-a-robot-in-an-emergency-1.3475216

Excerpt from the text summary:

So how much should we trust our technology, and how do we know when a piece of tech is no longer trustworthy?

A study from the Georgia Institute of Technology wanted to see where we draw those lines.

Dr. Ayanna Howard, a robotics engineer at Georgia Tech, along with her colleagues Alan Wagner and Paul Robinette, had participants follow a robot to a conference room, where they were asked to fill out a survey. In some cases the robot went directly to the conference room; other times, Dr. Howard says, the researchers "...had the robot take them to a different room, kind of wandering. We had the robot do things like, as they followed them, the robot would just stop and point to the wall."

While the participants were in the room, the researchers filled the halls with smoke, which set off the fire alarms. Participants then had the option to follow the robot, or to exit the building the way they came in.

Dr. Howard and her fellow researchers expected that about half of the participants would choose to follow the robot, "...but what happened in the study was... everyone followed the robot. It's astounding."

Despite having no indication that the robot knew where it was going, and even after seeing firsthand that it was flawed and could make mistakes, every single participant was willing to follow the robot.

Dr. Howard compares this behaviour to how we treat the GPS devices in cars. "When they first came out, you'd get a story once every couple of months about somebody who followed their system into the river... I know this is the wrong way, but maybe it knows that there's traffic the way that I normally go, so I'm just going to trust the technology, because I think that it must know what it's doing."

Dr. Howard says that the answer to this problem may be more transparency about how certain these robots are about their decisions: "Telling the user, look, I think I might be broken, I'm 50% sure I'm broken, and then you make the decision."

Ralph Stuart, CIH, CCHO
Chemical Hygiene Officer
Keene State College

ralph.stuart**At_Symbol_Here**keene.edu
