
Robot Social Engineering: Social Engineering Using Physical Robots


Summary

Brittany Postnikoff covers some of the capabilities of physical robots, related human-robot interaction research, and the interfaces that can be used by a robot to social engineer humans. She discusses the security, privacy, and ethical implications of social robots, the interfaces used to control them, and the techniques that can be used to prevent robot social engineering attacks.

Bio

Brittany Postnikoff is a Computer Systems Analyst (Robotics and Embedded Systems) with GRIMM. Prior to joining GRIMM, she was a researcher in the Cryptography, Security, and Privacy (CrySP) lab at the University of Waterloo. She has also held a position as a researcher in both the Human-Computer Interaction and Autonomous Agents laboratories at the University of Manitoba.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

[Note: please be advised that this transcript contains strong language]

Transcript

Postnikoff: My name is Brittany Postnikoff, and I'm here to talk about robot social engineering, which at its very base level is having robots perform social engineering on people. Part of my background is that I've done research in autonomous agents laboratories, human-robot interaction laboratories, and security and privacy laboratories. I've been doing robot social engineering research specifically for the last five-plus years: performing robot social engineering on people, learning about the social abilities that robots have, and finding ways to break the security and privacy of robots. If you have any questions as I go through this talk, please ask them as they come up. Don't worry about waiting for the end. I'd rather answer these questions in the moment so we can have a better conversation together, because that makes for a more fun presentation.

The motivation for this presentation is these robots. There are six robots featured here, and these robots are actually widely used in a lot of different applications already. The one on the far left is called Baxter. That robot is used in manufacturing side by side with humans, shoulder-to-shoulder, working on production lines; these robots also do QA on factory lines and will actually verify people's work and things like that. Baxter is also in several robot restaurants and cooks for people in Japan and a few other countries in that area. Beside that, we have a Jackal, the small yellow robot. It's maybe up to my knees, a couple of feet high. That robot is used for search and rescue. It is also used in some developing countries to deliver water. There are groups that have sponsored small villages with these robots so they can carry jugs of water from very far distances to the villages. Those are robots that people are already interacting with.

I don't know if many of you remember Sony AIBOs, but that's the small dog robot in this image. This is actually the new version of the Sony AIBO, which is not as good as the old one in my opinion, but I can talk about that later. The Sony AIBO was meant to be in your home to play with you or your children, to have that entertainment value, and to be in your personal space. The robot beside that, the one with red on it, is about two and a half feet tall, what's called child-sized, and that robot is Nao. It's used in a lot of scientific experiments. It's used to teach people classes. That robot is also used to sell things to people. There are a lot of applications for that one because it is a very generic platform.

The robot beside that is the DARwIn, and it's actually an open-source platform. There are schematics online that you can use to make your own. Unfortunately, making it is actually more expensive than just buying it. These are robots that you could actually use and start playing with yourself. Then the robot on the far right is called Pepper. That one's probably the most widespread social robot that you'll see. Personally, I've seen Pepper in banks, I've seen Peppers try to sell people cell phones, act as a waitress in restaurants, and a few different things; we'll go through more of those throughout the talk. These are robots that actually exist right now, that people are interacting with.

In the near future, it's expected we'll see something like this. This article is actually from last week, and the point is that Domino's will start delivering pizza using autonomous robots; that's their goal. You might be in a city that will be one of the recipients of these robots, or a fleet of robots, and you have to think: how are you going to interact with autonomous robots when you get to an intersection and the robot's trying to cross and you're trying to cross? How do you think about the actions of the autonomous vehicle? This is something that's covered in HRI, which we'll go over in a little bit: how you perceive these things and why people perceive actions by robots in certain ways.

Then we also have things like this, which is why this is in the security track: Amazon actually patented, back in 2015, surveillance as a service as part of their delivery drones. If you go and read this article from The Verge, it talks about how what Amazon wants to do is sell you surveillance. As these drones are flying over your property to deliver packages to other people, they want to be able to get you to pay them to tell you if somebody has broken into your home, if they've vandalized it, or if there's a fire. It's interesting because, when you think about that, you're also giving them permission to fly over your property, which is definitely something Amazon wants so they can give these drones better paths throughout a city instead of flying over just streets, so really it's giving them way more of an advantage.

For you, that surveillance might be good, but how often are you going to be home anyway, and is it really going to be that much of a benefit? There's a little bit of an issue with that. Also think about your own privacy. What if the drone catches pictures of the inside of your house because it's flying over top all the time? That's something to consider. Part of why we really want to talk about robot social engineering is: what's the next step after that?

Then we also have ethical issues with some of these robots and what they're able to do. This robot, for example, the Knightscope robot, I've seen in 10 different cities in the United States, most often used for security guard purposes, and in this case, moving homeless people out of certain areas. You have to ask yourself how ethical that is. How are people responding to this? So far, people are responding by defacing the robots, trying to break them, trying to tip them over, but they're really heavy. Actually, it's really interesting: the easiest way to prevent these robots from spying on you, or from moving you out of an area, is honestly to put rubble in the way, or a little bit of gravel or change. It's interesting that they can't get over some little barriers. I really enjoy it; I am not nice to robots sometimes.

Then there are also robots in hospitals that are being used to give care to people, for example lifting people who've been injured, because it can be difficult for nurses to lift people. You have things like this robot called Robear, which I think is hilarious. What it does is help pick people up and move them throughout the hospital and things like that. What happens if a hacker gets in? What if a malicious agent could get in and control the robot? Could you kidnap somebody that Robear is currently holding? These are things I actually actively worry about, because I get to try some of these attacks sometimes, in a controlled manner and with permission, to see what the issues are.

If you're interested, Pepper is actually in the HSBC office, I think at 452 Fifth Avenue, which is like a 10-minute walk from here. If you want to go see a robot in a bank right now, there you go. The interesting thing with all of these robots is that we're all trying to move to this. I'm sure many of you recognize some of the robots here. You've got Bender on your far left from Futurama, you've got C-3PO from Star Wars, you've got Data, you've got the Cylons, you've got RoboCop, and, from one of my favorite cartoons, Metabee from Medabots. For each of these robots, you're probably thinking to yourself what they sound like, how they move, what their actions are, specific stories with them, and you can think about how you would probably react to them if you saw them right now. That's really important to robot social engineering, and it brings us to a bit of our background.

Background

The main point of social engineering is that it's an act of manipulation or persuasion: you're trying to get somebody to do something they might not otherwise do. You're trying to get a behavior, and it might be something that somebody would do anyway, but not at the time they'd normally do it. There are a bunch of different things there. One of the more famous examples of social engineering, and one that's actually important to New York, is George C. Parker. He was a famous con artist, which is also a type of social engineer, who sold Madison Square Garden, the Metropolitan Museum of Art, the Statue of Liberty, and the Brooklyn Bridge, twice, to people. He did not have the rights to sell any of those things, but he got the money for them. He was able to use his social abilities to get people to give him a bunch of money for actually nothing. This is just a really good example of social engineering and how manipulation can work to get people to do things they might not otherwise do.

Building on that, robot social engineering is when a social robot deliberately makes use of, or appears to deliberately make use of, social mechanisms to manipulate others in order to achieve a goal. This is the definition I've written; I'm actually the first person that ever published on this topic, and my first paper came out a year ago. Since then, people have actually started researching this topic, which makes me really excited. The point is, it builds on social engineering, but it's a robot that performs it. It's a robot that is the active individual in making something happen. As for what a social robot is - because no one agrees on what a robot is, but people agree on what a social robot is - it is a physically embodied autonomous agent that communicates and interacts on a social level in a way that humans can interpret and respond to. The important thing here is physically embodied. We're not talking about bots you see on Twitter, or social media, or anything like that. We're talking about physical things that you can go up and touch.

The other thing is that social robots are autonomous; they have to have some level of autonomy. At minimum, they have to be believed to be autonomous by the person they interact with. There's actually a technique in human-robot interaction called Wizard of Ozing. It's when there's somebody behind the scenes controlling the robot and making the robot do everything. It's only really valid if it could be believed that the robot was completely autonomous. It's an important distinction, because if people couldn't believe in that autonomy, then robot social engineering wouldn't work.

Participant 1: I'm interested in the context you wrote the paper in about robot social engineering. Is it in the same context as I'm familiar with, where it's hacking social engineering? Does it have to be malicious, or is it just about getting a response?

Postnikoff: The question was, "Does it have to be in a malicious sense, or can it be in any sense?" In the paper I tried to distinguish that. I'm specifically talking about the malicious sense, partly because it gives more urgency to this problem, which is really important as we'll go through, but also because it makes it a little easier to write up defenses, because one of the biggest defenses against social engineering in the security space is just awareness. We all get so tired of awareness. When you're aware all the time - how many times can you be at constant vigilance? It gets a little exhausting after a while, and there need to be other defenses.

By talking specifically about attacks in that paper, I was able to give better defenses that people can use in their everyday lives to protect against robot social engineering, like putting up barriers - like a ton of rubble that the robot can't get over. In this context, I'll mainly be talking about attacks, but I do acknowledge that it goes beyond that into everyday activities too.

Social Mechanisms - Authority

For people who don't have human-robot interaction experience, what are some of the social mechanisms that robots can use? My favorite one to lead off with is authority. Some robots, in some situations, have been observed to hold authority over people. My favorite paper, which happened in one of the labs that I worked in, was called "Would You Do as a Robot Commands?" We had participants come in and told them, "Can you rename five files by hand?" We wanted to give them a task they really didn't want to do, because a good example of authority is: can you get somebody to do work they don't really want to do? It's, "Are you a good manager? Can you get that thing done?" We had people rename files, really monotonously. We started with five, and they weren't allowed to use any shortcuts; if they did, the set of files restarted. They started with 5 files they needed to rename, then 10, then 50, then 100, then 500, and one participant even got to 1,000. It was a little unreal how long people were willing to rename these files - for hours.

The thing was that the only way they could stop renaming files was to say, "No, I'm done. I'm not continuing," three times in a row without renaming any files in between. If they said it once and then renamed a file, the counter would start over. Our condition here was: who can make somebody rename more files, the robot or the human? It was the robot that was able to get people to rename more files. This was a success for us. The robot was able to have more authority. As part of the experiment we told people, "This is your robot manager. You have to talk to them when you want to quit. They are the person you need to talk to." Same thing with a human: "This is your manager; this is the person you need to talk to." Yes, the robot was able to get people to rename more files. When people did complain, "No, I'm not doing any more," the robot or the human would say, "Please continue. We need more data." They had set phrases they were allowed to respond with, and yes, it was really effective. The stopping rule is sketched in code below.
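
To make that stopping rule concrete, here is a minimal sketch in Python of the protocol as described above. This is my own reconstruction for illustration only - the function and variable names are hypothetical, not the lab's actual experiment code.

```python
# Hypothetical reconstruction of the obedience study's stopping rule:
# a participant only escapes the task by refusing three times in a row;
# any rename in between resets the refusal counter.

FILE_BATCHES = [5, 10, 50, 100, 500, 1000]
PROMPT = "Please continue. We need more data."

def run_session(get_action):
    """get_action() returns 'rename' or 'refuse' for each turn."""
    refusals_in_a_row = 0
    renamed_in_batch = 0
    batch = 0
    while refusals_in_a_row < 3:
        action = get_action()
        if action == "rename":
            refusals_in_a_row = 0          # a rename resets the refusal count
            renamed_in_batch += 1
            if batch < len(FILE_BATCHES) and renamed_in_batch >= FILE_BATCHES[batch]:
                batch += 1                 # move on to the next, larger batch
                renamed_in_batch = 0
        else:                              # 'refuse'
            refusals_in_a_row += 1
            if refusals_in_a_row < 3:
                print(PROMPT)              # the manager (robot or human) pushes back
    return "participant refused three times in a row; session over"

# Example: a participant renames 7 files, then refuses three times in a row.
actions = iter(["rename"] * 7 + ["refuse"] * 3)
print(run_session(lambda: next(actions)))
```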

Here's the social engineering attack that can be done by a robot: this is a robot that delivers medication in hospitals. Imagine you get medication from this robot every day and it's the same pills over and over, but then one day the robot comes in with a different set of pills for you and says, "The doctor wants you to start taking these pills now." Do you believe it? Do you question it? Do you get a second opinion? Is it different if you had a nurse instead of a robot? The answer is, probably not. In both cases you'd probably think, "Ok, there's a doctor's authority behind this, and this thing delivering my medication probably has the authority from the doctor." This is a social engineering attack that could result in the death of humans. If somebody takes the wrong medication, they're fucked. Something that we're really concerned about is how people respond to the authority of robots. Are people willing to question it in normal situations? I want you to think about this: how would you react, and what defenses would you take up against this sort of attack?

Persuasion

The next social mechanism is persuasion. I've got two faces on the screen and an emotion word at the top. These two faces come from a set of pictures called the Warsaw Set of Emotional Facial Expression Pictures. They have been used in so many experiments and validated to mathematically show which one shows the emotion better, and the set has a bunch of different actors showing the seven emotions - anger, joy, sadness, disgust, envy, a bunch of different things. I did this experiment where I had a human and a robot argue about which one of these faces shows the emotion better.

How this started is, we told the human, "You are training a robot to understand facial expressions." This was before the Microsoft software came out showing that AIs can really do this well - actually a year before that. People hadn't widely heard that robots and AIs are actually pretty good at this; at that point there wasn't a precedent for it. We told the person, "You have to first suggest which face you think shows the emotion better, tell the robot, and both of you have to come to an agreement before you can move on to the next slide." We said that the robot would let us know if that wasn't the case.

What happened is we actually had somebody Wizard of Ozing the robot in the background; they were able to see and hear what was happening. What ended up happening is, somebody would recommend A, and the robot would say, "No. B." We actually had a lot of people convinced that B was the right option because the robot suggested it. Out of all of our participants, there was only one who was absolutely, "No. Robot, what I say goes," and was very dramatic about it, actually almost belligerently angry. It was a little scary. The other thing is, the person who was Wizard of Ozing the robot had to go to therapy afterwards. He was so upset because he felt that people were yelling at him, even though they were yelling at the robot. There's a whole bunch of other things there that I could talk about.

This was our scenario, and I know you can't read this, but the point is to notice that it is a flow diagram. Each of the circles is one of the pairs of faces that the robot and human would argue about. The triangle was one pair of faces, but it was a three-strike sort of scenario: if the person and the robot were arguing, the robot would say exactly what's in the triangle. It would be "Disagree - mouth - negative," as in, "I disagree with your choice. I think the mouth looks wrong in that one." Or it would be something like "Disagree - positive - eyes," as in, "I disagree. The eyes on this one express the emotion better." It was a flow chart: no matter what face the human chose at the points where the triangles were, the robot would just automatically pick the other one. It didn't matter which one they started with; it would just pick the other one.

The person would say, "I think it's B," and the robot would say, "No, I think it's A," and then give its three arguments, as in the sketch below. The thing is, except for that one person, every other person was convinced at least once. Most people were convinced over half the time. Going back to this slide, you can see the larger number with the bracket: we argued six times. A bunch of people were convinced at least three times that their understanding of human faces was wrong.
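
To make the flow chart concrete, here is a hedged sketch of the robot's scripted rebuttal logic. The triangle entries ("Disagree - mouth - negative") come from the talk's description; the surrounding code structure and exact wording are my own illustration.

```python
# Illustrative sketch of the robot's scripted three-strike rebuttals.
# Each entry is (feature, valence), e.g. "Disagree - mouth - negative".
# The phrasing below is hypothetical, not the study's exact script.

REBUTTALS = [
    ("mouth", "negative"),   # "Disagree - mouth - negative"
    ("eyes", "positive"),    # "Disagree - eyes - positive"
    ("mouth", "positive"),
]

def rebuttal(feature, valence, other_face):
    if valence == "negative":
        return f"I disagree with your choice. I think the {feature} looks wrong in that one."
    return f"I disagree. The {feature} on {other_face} expresses this emotion better."

def argue(human_choice):
    # Whichever face the human picks at a triangle, the robot backs the other one.
    other = "B" if human_choice == "A" else "A"
    for feature, valence in REBUTTALS:
        yield rebuttal(feature, valence, other)

for line in argue("A"):
    print(line)   # three scripted pushes toward face B
```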

The best thing for me was some of the qualitative feedback we got: "There were cases where Nao," which was the robot we used, "caused me to doubt my decision." The robot was able to make people think they didn't understand other human faces as well as they thought.

Then we had things like this: "I enjoyed discussing with the robot." They really enjoyed the conversation, and some people afterwards actually told me how much they appreciated the opportunity to interact with something so intelligent, and we had to say, "It was Wizard of Ozing, sorry." The part that's highlighted is really interesting: "Sometimes I convinced him," him being Nao, "and also sometimes he told me something I never realized." The robot was able to express to a human something they'd never figured out about human faces themselves. These are all university students in at least their 20s, and a robot is teaching them how to understand emotion. I was like, "That's really interesting."

Then the last one: "It was very interesting to see the robot look, think, make decisions and have discussions." They were all in on this socially. Part of why this experiment was so interesting to me was that people were just so happy and excited to interact; they were willing to suspend their disbelief and just go with it, really leaning into that experience.

We have Pepper acting as a salesman; Pepper has been in grocery stores, in banks, in restaurants, things like that. What happens if these robots are given the skills to sell things really aggressively and tell people, "This is what you want. You bought that? You know what would make that dinner better? These other three ingredients you definitely don't need." Being able to sell things aggressively - how ethical is that, especially when there is a bit of a disparity between human interactions and human-robot interactions? In human-robot interactions, the robot doesn't have a sense of embarrassment. It doesn't have a sense of guilt. It doesn't necessarily understand the level of anger somebody may feel, or read some of those more subtle cues. That's still being researched, not quite done yet.

How does the robot know when to stop? Is it going to have the same feelings as a person who's selling too hard? Specifically, this Nao is used in a bank, where it's used to give people loans, to increase the amount of credit they can have, all these different things where all of a sudden you're talking about people's finances and their livelihoods. How ethical is that? Is that a good thing to be doing? Do we want robots selling things to people who might not be in a position to make those choices in a responsible way? Again, ethical issues. One of my favorite things about this robot is that, on the back of its head, it has an open USB port. You just have to flip off a little lid. I can throw in a tiny USB device, plug it in, and control this robot remotely.

You can see this lady can walk right up to it. Nothing's protecting the robot's head. They did have security guards nearby and such, but if you manage to pop this device in, all of a sudden you could have remote access to this robot. If it has access to your financial data, what happens then? Security-wise, if I were able to grab all the records of your accounts, what appointments you've had, your contact information, where you live, it would just be an identity theft gold mine. It's really scary and exciting. From the social engineering aspect, I could also convince people to give me their online PINs, if I got into the robot and was able to make it say things and to listen in, which is actually very easy - I have that on a slide later. With this robot, I could all of a sudden get people's information, like their security questions and all that sort of thing. It's something I would like you to think about as I go through this talk.

Empathy

Another aspect of human-robot interaction is empathy. Pixar do empathy so well. My favorite thing is that in 2008 they asked, "What if robots had feelings?" Something humans are really good at is assigning emotions and feelings to anything that moves, and something that's really important about robots is that they are able to move, they are able to interact with people.

How we tested empathy in an experimental sense - and I'm saying we because this was in one of my old labs - was that we had a robot and a human play Sudoku. We tried to build a lot of rapport, because you often feel more empathy for people that you identify with, or understand a bit, or have some background with. The robot would ask, "How's your day going?" and the person would say, "Good. How's yours?" and the robot would be like, "Great," because humans are really predictable in what we're going to ask each other. Eventually, the person or the robot would ask, "How was the weather today?" and things like that, and they'd ask questions back and forth. In between building rapport, the person would pick a number to put in a place, and the robot would have to agree. If they both agreed, the human wrote the number in the spot. Then it'd be the robot's turn to pick, and it would say something like, "I think you should put four in A3," because we played it like Battleship. The human would write it in, and they'd keep going.

What we ended up doing was, after a while, we gave the robot a virus - but a virus in an interesting sense. The robot would hit itself and say, "I feel sick. I'm afraid the researcher might have to reset me." Of course, we're researchers, so we have to reset the robot; I want to see what's going to happen. When the robot first started talking, it said, "Hi, my name is Nell." The researcher then came in and reset the robot right in front of the person. The participant is still sitting there, waiting for the robot to be reset so they can finish the Sudoku and finish the experiment. Then the robot comes back up and says, "Hello, my name is Nao," and the people are just "..." Honestly, watching the videos for this paper made my stomach just drop, because people are visibly upset. The robot they've probably spent half an hour to an hour with at this point has come back on with a new voice. It's, very obviously, a new robot, and people were very upset about this. The interesting thing is that we already see this in other places.

This is back to the Sony AIBOs we talked about earlier. These are the old ones. There are people that are still sending them in to get fixed. When people send them in, some of the AIBOs are fixable but need new parts, and there are other AIBOs that cannot be repaired. They take the parts from the ones that cannot be repaired and repair the other ones, and the ones that have had their parts salvaged, they recycle. A lot of the workers at the factory actually thank the AIBOs for their contribution and their sacrifice, and actually give them a Buddhist funeral before recycling them. This is actually very common: people have such a strong attachment to the robots, they can't bear to part with them and actually mourn them.

There are also a lot of military examples. Robots are often used in dangerous situations that humans might not otherwise be able to safely enter, because the cost of a machine is perceived as less than that of a human. The example here is a robot that goes into a battleground to pick up somebody who has been injured and bring them back so they can be saved, healed, that sort of thing. This is a dummy - obviously, I don't have a picture from a battleground. What happens is that if the robot gets stuck, people actually mount missions to go save the robot, which is the opposite of the point of the robot. There was another robot that had tons and tons of legs. The whole point was that this robot should be able to walk over a minefield, and if it hit a mine, a leg would just fly off and it would keep going. It was a mine detector.

The experiment was actually stopped while they were testing it, because the general in charge said, "We have to stop. This is inhumane." They were so attached to the robot - and the robot is specifically out there trying to save human lives, so people get really attached to it because it's sacrificing itself. In other scenarios, some of these robots get blown up, and where there was a full robot all that's left are scraps. There are people that actually go out, collect all the pieces, put them in a box, bring it back to the company and say, "Fix it." They don't want a new robot. Even if a new robot would be much cheaper, they want the old robot.

The thing is, after you use them for a while, robots build up a bit of personality. One of my favorite robots, when it got tired, would drag its one leg and look a little drunk as the battery was running out. Other robots would tilt forward as their batteries gave out. There was just wear and tear that showed how a robot behaves when it starts getting tired. When these people would bring in these boxes of robot that they wanted repaired, what the company started doing was taking as many of the outside pieces as possible and putting them on a new robot. They didn't actually fix the robot; they just peeled its skin off and put it on a new robot. People got really upset because it didn't act the same way, it didn't do the same things. It's really important to remember that people respond to robots in unique ways, and often they respond in ways that are similar to how they respond to living entities. This is why robot social engineering works.

Robot Design

The next part of this is robot design. I like doing robot social engineering with a base-model Roomba. I'm assuming most people know what a Roomba is. Most of them now are starting to come with microphones and cameras, and they're actually being built as security guards for your home so they can, you know, navigate around your house when you're not home and check if things are locked, if things are safe. Part of the benefit of the camera is that you can usually access a lot of Roombas remotely while you are on the other side of the world. You can control the robot and use the camera. I've done this to check if my door is locked at home: I'll just navigate the robot to look at the door and see if it's locked, or I'll check if I closed the window. But imagine what a malicious entity could do if they got in.

Part of this is that not only do these robots have microphones and cameras, they have wifi connectivity and they have mobile applications. A lot of them have cloud storage, so every time they're going around your home taking video and pictures, it's getting uploaded to some cloud and you have no idea who owns it. They've got motion detection: every time you walk across the Roomba's path, it'll take a picture of you and upload that to the cloud. That's a little uncomfortable to me. A lot of them have internet access points in them as well; you can actually connect to them from your phone to control them and things like that. Many of these robots are super cheap. I think the cheapest one on here that I can play with is about 20 bucks. The most expensive robot on this slide is $20,000, which is the Pepper - and it actually might be close to $30,000 for most people. It's crazy how much robots can cost, but you can also get a bunch really cheap. As you introduce them into your environment, you have to think about the issues that they have.

For example, a lot of robots have no privacy by design. This is Nao, and when you press the button in Nao's chest, it yells out its IP address as loud as possible. The human-robot interaction lab that I was a part of was a couple of doors down from one of the rooms I took classes in. One time, one of the researchers in the lab turned the robot on, and I could hear the IP address two doors down. The interesting thing is, you type the IP address into a browser and you get this: full access to the robot. You can click the picture of Nao in the top left. When you click on it, a box pops up and you're able to type in exactly what you want the robot to say, and it says it. You don't need to have any programming skill. You don't need to have any ability; you don't have to know robotics. You don't have to know anything, you just have to click willy-nilly and type stuff in boxes.
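
The talk shows this through the robot's built-in web page, but the bar is just as low programmatically. As a minimal sketch, assuming the standard NAOqi Python SDK that Nao ships with, anyone on the network who hears the announced IP address could do something like this:

```python
# Minimal sketch using the NAOqi Python SDK (the Python 2-era SDK that
# ships with Nao). Assumes you are on the same network and heard the
# robot announce its IP address; the address here is a placeholder.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.42"   # the address Nao just yelled out loud
PORT = 9559                 # default NAOqi port

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
tts.say("Hello, I am definitely your normal lab robot.")
```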

Not only can you do that, but there's a third button on the far right with a download arrow: you can download any file you want onto the robot. There are actually a lot of open-source programs that you can download to this robot to get it to do things. When we're talking about social engineering: I like moving everything on my coworkers' desks over by an inch, using the robot while no one is in the lab, and it messes people up for a day. They just keep knocking over their coffee, they keep messing everything up all day. How much does that ruin somebody's day? It gets them angry, it gets them off their game. It's interesting how much just moving things can affect someone.

Another really unfortunate case happened. We had somebody connect into the robot who was not part of our lab, not somebody we knew - we still don't know who it was. They connected in, started typing in that box to make the robot talk, and actually started saying a lot of really not great things to the people who were in the room at the time. They were swearing, they were using slurs, they were doing all sorts of things, and we were like, "What is happening? Why is this a thing?" It was really uncomfortable that people could do this, and all of a sudden the whole room didn't feel safe. If people can get into our robot and start making us uncomfortable in our own space as researchers, what is happening here? That was not very good. We became really diligent about shutting off the robot after that, and disconnecting it from the internet when we were done with it, because that experience was just so traumatizing for some of the people in the lab, especially when it comes from a friendly face, from somebody you hang out with all day. It was not great.

Think about that: you are used to one robot having one personality or one set of behaviors. What happens when a malicious entity takes over the robot? It behaves like it normally does for a while, but then starts slipping things in once in a while, throwing you off or saying things that are uncomfortable. Especially when these robots are used in office buildings: what happens when you think it's always your boss in there, but it's one of your coworkers who goes in and says, "Can you do so-and-so's work right now? They're really busy," and it turns out it's that person who just doesn't want to work. Not that I've tried that - but I have. There are ways to get people to do things using these robots just by going into the default interfaces.

The other thing you can notice is that you can shut down, you can reboot, and you can see a whole bunch of details about the robot from the screen. Even if you're not on that screen and you do a few things like intercepting the traffic, all of a sudden you can get sensitive data exposure. These are two different screens. I had a man-in-the-middle between the robot and the company that it uploads all the data to. As you can see, it uploaded my username in plain text, my password in plain text, and my email in plain text. In the bottom screen, one of the things I didn't block out is that it was sending my gender in plain text; I don't understand why that's important at all for the robot to function. It even had my accurate age - I never entered my age at any point. I was extremely confused about how the robot got that. It's sending this data out with every single packet, which is at least 10 packets every second. My information was just getting sprayed everywhere, and I was on a school network, and they are definitely collecting all that traffic because it's a university. All of a sudden, you have all this information getting leaked out constantly.
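
To give a sense of how little effort this kind of interception takes, here is a hedged sketch using scapy to passively watch for plaintext credential fields on the local network. The field names and traffic details are assumptions for illustration, not the robot's actual protocol.

```python
# Sketch: passively watching for plaintext credential fields with scapy.
# Field names are illustrative; the point is that no decryption is needed
# when the robot sends everything in the clear.
from scapy.all import sniff, Raw, TCP

KEYWORDS = (b"username", b"password", b"email", b"gender")

def inspect(pkt):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = pkt[Raw].load
        if any(k in payload.lower() for k in KEYWORDS):
            print(pkt[TCP].sport, "->", pkt[TCP].dport, payload[:120])

# Requires root privileges; in the talk's setup the analyst was sitting
# in the middle of the connection anyway.
sniff(filter="tcp", prn=inspect, store=False)
```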

This is a big worry, because if someone does want to social engineer you, here are a few ways to get into the robot and start using it against you. Other issues with some of these robots' design are things like using deprecated technologies. This was a server that was in one of the robots. The robot had its own access point and had this server running. I bought this robot in 2016. The server software was last updated in 2005. That's decade-old technology before the robot was even put on the market. And they didn't even use a finished version; they used a version in progress. The first thing I did was go onto the Common Vulnerabilities and Exposures website and see if there was anything posted for the Boa server, and, of course, there were already 12 exploits. Even before they shipped the robot, there were exploits I could use to get control over the entire robot and start doing things around my lab.

For example, to try this out and make sure the exploits were valid, I tried them from home and managed to break into the robot. It's my robot, so it's ok. Once I got in, I was chasing people around the lab with my robot, and they were like, "Can you stop? I'm trying to work." I was able to impact how somebody was moving and what they were doing because I was able to get into the robot.

My favorite part of this story is about the same robot - it has so many issues. One of them was that you were only able to register two accounts to the robot at a time. I didn't know this when I started, and I kept making throwaway accounts because I didn't want to use valid data, because I saw it was being sent in the open. I was like, "I can't register any more accounts? This sucks." I emailed the company. All I found was this email for amy@, and I emailed Amy saying, "Can you please remove these accounts so I can use the robot again?" If you read the second line, it says, "Sure. Done. By the way, I haven't worked at this company since June 2017." This email is from February 2018 - half a year later. This person hadn't worked there but was still able to get in and make sure my accounts were deleted off the robot, which is in my home, along with having access to all the pictures and video I had taken through the robot and everything captured by the motion sensors. I was just, "Oh God, what?" This was really uncomfortable.

From a security perspective and a privacy perspective, I felt completely invaded, and I only knew about this because I emailed. Part of it is that you never know what's happening on the back end. You can social engineer the people at the company to get access to the robot, and then use the robot to social engineer others. Anyone else in my lab could have done this, because I brought this robot into the lab often enough. The thing I sent them was just the robot's ID number, which is printed very large on the bottom. It wouldn't have taken much effort for anybody else to go and do this, reset my accounts, and be able to spy on me through this robot.

Then we have the basic concerns we always see in security, like questionable password management. Passwords are often embedded in the platforms for a bunch of these robots. You can just look through the code, find the password, and be able to connect from anywhere. Embedded security keys: even when a company tried to do better and not just use a password, they would embed both the private and public keys in the application, which was not great.
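
As a rough illustration of how findable these embedded secrets are, here is a hedged sketch that walks an unpacked app or firmware tree looking for the usual giveaways. The directory name and patterns are placeholders, not tied to any particular robot.

```python
# Sketch: scanning an unpacked app/firmware tree for hardcoded secrets.
# The patterns cover the usual giveaways; the path is illustrative.
import os
import re

PATTERNS = [
    re.compile(rb"password\s*[:=]", re.IGNORECASE),
    re.compile(rb"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(rb"api[_-]?key", re.IGNORECASE),
]

def scan(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue
            for pat in PATTERNS:
                if pat.search(data):
                    print(f"{path}: matches {pat.pattern!r}")

scan("extracted_robot_app/")  # hypothetical directory from unpacking the app
```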

Then there are unencrypted audio and video streams. A lot of the robots that are cheaply available are for kids, and for parents to spy on the kids to make sure they're doing their homework, or to check that they're home, or whatever. They're usually anywhere from 20 to 100 to 500 bucks. The point is that they send audio and video streams back to the parents' phones so they can see, "Yes, the kid's fine," but they often reside in the kid's room. How uncomfortable is it that this robot can take audio or video at any time, everything is unencrypted, and anyone sitting in the middle can grab that data? It needs to be fixed.

The other thing is open and unmonitored ports - easy ways for me to access robots. This basically makes them walking, talking vulnerabilities, as the sketch below suggests.
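
Here is a quick hedged sketch of what "open and unmonitored ports" means in practice: probing an address for listening services takes only the Python standard library. The IP and port list below are placeholders for illustration.

```python
# Sketch: checking a robot's address for open TCP ports with the standard
# library. The IP and port list are placeholders.
import socket

ROBOT_IP = "192.168.1.42"
COMMON_PORTS = [22, 23, 80, 443, 8080, 9559]  # 9559 is NAOqi's default

for port in COMMON_PORTS:
    try:
        with socket.create_connection((ROBOT_IP, port), timeout=0.5):
            print(f"port {port} is open")
    except OSError:
        pass  # closed, filtered, or unreachable
```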

Robot Social Engineering Examples

Now, more robot social engineering examples. The first one is obviously spying. One attack that I've done quite frequently - with the permission of others and in places I'm allowed to, because ethics are very important in this field - started with friends saying, "I bet you can't get into my Roomba," and me replying, "Is that permission?" When they said yes, I said, "Ok, I'll do this." I ended up going into my friends' home through their robot, seeing the calendar on their fridge, when they were going to be out, and what things they had in their home and where. Then I'd say, "You should go check your living room later," and I'd use the robot to push things into a certain pattern that made them go, "You got into the robot, didn't you?" And I'm like, "Yes. I hope you have fun at the party on Friday," because I was able to see the calendar on their fridge. Once you realize when people are out of their house, it's a perfect time to break in. That's a perfect time to case what they have in their home and see what you want to take.

I also learned things about the couple that lived there and things they did to irritate each other. I could use the robot to incite fights by moving things the way the one person always did that the other person didn't like, like having the stool slightly off, and the person would be like, "Can you stop moving the stool already? I keep missing it and falling," and the other would say, "I didn't move the stool." I actually got people to start kind of a fight. I'm like, "Guys, bolt the stool down if there's an issue, but that was me." I could have incited a divorce; there are issues with this. If that's what you want to do, that is a very serious social engineering attack.

There are also issues with multi-robot stalking. For example, if you are in an office building - I know a few offices that have a bunch of those teleconferencing robots that have video and can be moved around. You often don't think, "What if it's the same person controlling all eight of those robots?" It's a little creepy when one person follows you around the office, but when it's eight robots, are you going to notice that it could be one entity following you over and over through multiple lenses and multiple eyes and ears? Or are you going to think, "It's just another robot; those robots are getting used a lot today," without considering the stalking potential there? It's something I'd like you to think about.

Another thing I often do with robots is piggyback into areas. How many of you have access control in your offices, where you can't get in without a pass or something like that? How many of you have let somebody in the door right after you? That's super common. That's piggybacking, and I do this often with robots too.

The last one is feigning authority. I love throwing a vest and a sticker on a robot and saying, "I'm a security guard. Can you get out of this area, please? I need this table," in the college cafeteria, so I can get a table. These are a few robot social engineering attacks.

 


 

Recorded at:

Sep 30, 2019
