At QCon New York, Brittany Postnikoff presented "Robot Social Engineering: Social Engineering Using Physical Robots". Citing findings from the academic research literature, she demonstrated that humans can often be manipulated via robots. Given the increasing prevalence of robots in daily life, and the fact that these machines are often poorly designed or run old software with known exploits, attackers can co-opt robots into socially engineering humans for nefarious purposes. A core message of the talk was the need for security and privacy to be part of any robot's fundamental design.
Postnikoff, a computer systems analyst (Robotics and Embedded Systems) at GRIMM, studies three things: people, robots, and the relationship between them. She began her talk by arguing that, like it or not, smart machines are coming to a venue near you. Here are some examples:
- In San Francisco's Mission District, a robot roams the streets to watch for criminal activity and to discourage homeless people from setting up camp.
- Amazon has patented surveillance as a service. Under the plan, Amazon's delivery drones keep an eye on a customer's house while the customer is away and alert the customer about any suspicious sightings. The arrangement is a win-win: the customer gets home surveillance, and Amazon's drones get the right to fly over houses to make their deliveries.
- In Japan, robots in hospitals lift and transport patients who can't walk on their own.
- At Sinai Hospital in Baltimore, MD, robots deliver medicines to patients.
Each of these scenarios is fraught with privacy and security concerns. For example, should a robot that patrols city streets make judgment calls about people's right to inhabit public spaces? Can Amazon's drone see inside a customer's home or inside a neighbor's home? Can a malfunctioning hospital robot drop a patient? Can a robot be hacked in order to kidnap a patient? And do hospital robots always deliver the correct medications?
Some of these concerns apply to human workers as well as to robots, but the use of robots adds new layers of complexity. With the current state of machine learning, intelligent agents lack the kind of judgment required to deal with new or unusual situations. And while people can be bribed, machines can be hacked. Entire cities have been held hostage when malicious attackers broke into their computing infrastructure.
Postnikoff defines social engineering as "the act of persuading or manipulating others into doing or saying something they might otherwise not do." She identifies three social mechanisms that robots can use for social engineering purposes: authority, empathy, and persuasion.
In a recent study examining authority, people were more willing to continue renaming computer files (admittedly, a tedious chore) when a robot, rather than a human, was telling them to continue.
In regard to empathy, people often assign feelings to anything that moves on its own. In another study, human subjects played Sudoku with an amiable, talking robot. After some time, the robot appeared to malfunction and said, "I'm afraid that the researcher might have to reset me." Then a researcher reset the robot in front of the human subject. The subjects were noticeably dismayed when, after the reset, the robot spoke with a colder, more reserved personality. In Japan, customers with AIBO companion robots become emotionally attached to their robotic friends. People prefer to have broken robots repaired rather than replaced.
In yet another study, this time focusing on persuasion, a team of researchers showed images of people's faces to human subjects. The images came from the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP), in which each face displays a particular emotion such as anger, joy, or sadness. The researchers presented the images in pairs.
The subjects were asked which image in each pair showed more of the stated emotion. Each subject then compared opinions with what they believed was an autonomous robot, arguing verbally with it until the two reached a consensus. Unbeknownst to the subjects, a human behind the curtain was typing the sentences for the robot to say, a trick appropriately named "Wizard of Oz-ing." All but one of the subjects were willing to concede to the robot's choices for some pairs of images, lending evidence to the notion that humans can be persuaded by words coming from robots. (It is worth noting that this study preceded Microsoft's announcement of Project Oxford, whose software recognizes emotions with significant accuracy.)
Next in the talk, Postnikoff switched gears and argued that security and privacy should be part of a robot's fundamental design. She analyzed the nine-thousand-dollar NAO robot from SoftBank Robotics. Pressing the button on the robot's chest reveals the robot's IP address; combined with a weak password, that is enough to log on to the robot's control page from any web browser. In this instance, a design decision leads directly to a security vulnerability.
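To make the risk concrete, here is a minimal sketch (not from the talk) of how an attacker on the robot's network might probe such a control page for factory-default credentials. The IP address, login path, and credential list are illustrative assumptions, not details of the NAO's actual interface.

```python
# Sketch: probe a robot's web control page for default credentials.
# The address and credential list below are illustrative placeholders.
import requests  # third-party: pip install requests

ROBOT_URL = "http://192.168.1.50/"  # the IP the robot announces at a button press
DEFAULT_CREDENTIALS = [             # common factory defaults (hypothetical list)
    ("nao", "nao"),
    ("admin", "admin"),
    ("root", "root"),
]

def try_default_logins(url, credentials):
    """Return the first (user, password) pair the control page accepts."""
    for user, password in credentials:
        response = requests.get(url, auth=(user, password), timeout=5)
        if response.status_code == 200:
            return user, password
    return None

found = try_default_logins(ROBOT_URL, DEFAULT_CREDENTIALS)
if found:
    print(f"Robot accepts default credentials: {found[0]}/{found[1]}")
else:
    print("No default credentials accepted.")
```

Randomized per-device passwords, or a forced password change on first login, would defeat this kind of probe.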
Another robot was sending sensitive data to the cloud with absolutely no encryption. The data contained information that Postnikoff had not directly given to the robot, indicating that some of the robot's data had been mined. To bring the point home, an unknown entity eventually hacked the NAO in Postnikoff's lab.
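Unencrypted cloud traffic of this kind is easy to observe from the same network. The following sketch, assuming a robot at a placeholder LAN address, uses the scapy packet-capture library to flag outbound payloads that decode as plain ASCII; properly encrypted (TLS) traffic would fail the decode and stay opaque.

```python
# Sketch: watch a robot's outbound traffic for readable plaintext payloads.
# The robot's IP address is a placeholder; packet capture requires root.
from scapy.all import sniff, TCP, Raw  # third-party: pip install scapy

ROBOT_IP = "192.168.1.50"  # hypothetical address of the robot on the LAN

def report_plaintext(packet):
    """Print any TCP payload from the robot that decodes as ASCII text."""
    if packet.haslayer(TCP) and packet.haslayer(Raw):
        payload = packet[Raw].load
        try:
            text = payload.decode("ascii")  # encrypted bytes rarely decode cleanly
        except UnicodeDecodeError:
            return  # opaque bytes: likely encrypted or binary
        print(f"Plaintext leaving the robot: {text[:80]!r}")

# Capture only packets originating from the robot's address.
sniff(filter=f"src host {ROBOT_IP}", prn=report_plaintext, store=False)
```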
Postnikoff found instances of legacy technologies on other manufacturers' newly released robots. In one case, a robot's server software was a decade out of date and had at least twelve known exploits. Postnikoff didn't reveal the name of this robot's vendor, instead calling it "Vendor X." When Postnikoff wanted her account on Vendor X's robot reset, she sent an email to an address in Vendor X's domain. The recipient replied that she had reset the account remotely. That was good, but the recipient also noted that she no longer worked for Vendor X. From this, Postnikoff surmised that some of Vendor X's former employees may still have access to the videos that the robot recorded in her own home.
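Decade-old server software is often detectable from the outside, because many HTTP servers announce their version in a response header that can be checked against public exploit databases. The sketch below uses only Python's standard library; the host and the list of stale versions are hypothetical placeholders, not Vendor X's actual software.

```python
# Sketch: read a robot's HTTP Server banner and flag known-stale versions.
# The host and the "stale" version list are illustrative placeholders.
from http.client import HTTPConnection

ROBOT_HOST = "192.168.1.50"       # hypothetical robot address
STALE_SERVER_VERSIONS = {          # hypothetical end-of-life versions
    "Apache/2.2.3",
    "lighttpd/1.4.13",
}

def fetch_server_banner(host):
    """Issue a HEAD request and return the Server header, if any."""
    conn = HTTPConnection(host, timeout=5)
    conn.request("HEAD", "/")
    banner = conn.getresponse().getheader("Server")
    conn.close()
    return banner

banner = fetch_server_banner(ROBOT_HOST)
if banner in STALE_SERVER_VERSIONS:
    print(f"Robot runs {banner}: check it against public exploit databases.")
else:
    print(f"Server banner: {banner}")
```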
As modern machine learning shifts into high gear, we must be aware of the potential pitfalls. Machines that simulate human behavior present special problems of their own. The onus falls on everyone -- researchers, designers, vendors, and even consumers -- to treat security and privacy as their foremost concerns.