From: Harry J. Elston <helston**At_Symbol_Here**>
Subject: Re: [DCHAS-L] Would you trust a robot in an emergency?
Date: Mon, 7 Mar 2016 17:00:45 -0600
Reply-To: DCHAS-L <DCHAS-L**At_Symbol_Here**MED.CORNELL.EDU>
Message-ID: CAJ2hcffcEwqhqvTzhJAtOSMfuGqnrKJA7ZhfRLik+0G1ZsjXgw**At_Symbol_Here**

I'm sorry, Rob, I'm afraid I can't do that.

(Though it's a bit off topic, as HAL-9000 was a computer, not a robot).

On Mar 7, 2016 4:43 PM, "ILPI Support" <info**At_Symbol_Here**> wrote:
I can't believe nobody has yet mentioned this classic example of a robot leading someone to safety:

Rob Toreki

Safety Emporium - Lab & Safety Supplies featuring brand names
you know and trust. Visit us at
Fax: (856) 553-6154, PO Box 1003, Blackwood, NJ 08012

-----Original Message-----
From: DCHAS-L Discussion List [mailto:dchas-l**At_Symbol_Here**MED.CORNELL.EDU] On Behalf Of Stuart, Ralph
Sent: Sunday, March 06, 2016 2:58 PM
Subject: [DCHAS-L] Would you trust a robot in an emergency?

There's a thought-provoking story about people's responses in emergency situations in the most recent CBC Spark podcast at:

Excerpt from the text summary:

So how much should we trust our technology, and how do we know when a piece of tech is no longer trustworthy?

A study from the Georgia Institute of Technology wanted to see where we draw those lines.

Dr. Ayanna Howard, a robotics engineer at Georgia Tech, along with her colleagues Alan Wagner and Paul Robinette, had participants follow a robot to a conference room, where they were asked to fill out a survey. In some cases the robot went directly to the conference room; other times, Dr. Howard says, the researchers "...had the robot take them to a different room, kind of wandering. We had the robot do things like, as they followed them, the robot would just stop and point to the wall."

While the participants were in the room, the researchers filled the halls with smoke, which set off the fire alarms. Participants then had the option to follow the robot, or to exit the building the way they came in.

Dr. Howard and her fellow researchers expected that about half of the participants would choose to follow the robot, "...but what happened in the study was... everyone followed the robot. It's astounding."

Despite having no indication that the robot knew where it was going, and even after seeing firsthand that it was flawed and could make mistakes, every single participant was willing to follow it.

Dr. Howard compares this behaviour to how we treat the GPS devices in cars. "When they first came out, you'd get a story once every couple of months about somebody who followed their system into the river... I know this is the wrong way, but maybe it knows that there's traffic the way that I normally go, so I'm just going to trust the technology, because I think that it must know what it's doing."

Dr. Howard says that the answer to this problem may be more transparency about how certain these robots are about their decisions. "Telling the user look, I think I might be broken, I'm 50% sure I'm broken, and then you make the decision."

Ralph Stuart, CIH, CCHO
Chemical Hygiene Officer
Keene State College


