
The Internet of Things Might Have Less Internet Than We Thought?


Summary

Alasdair Allan looks at the possible implications of machine learning on the edge around privacy and security. The ability to run trained networks “at the edge”, nearer the data, without access to the cloud, or in some cases even without a network connection at all, means that sensor data in the field can be interpreted without storing potentially privacy-infringing data.

Bio

Alasdair Allan is a scientist, author, hacker, and journalist. An expert on the Internet of Things and sensor systems, he’s famous for hacking hotel radios and for causing one of the first big mobile privacy scandals, which eventually became known as “locationgate”. He works as a consultant and journalist, focusing on open hardware, machine learning, big data, and emerging technologies.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Allan: Machine learning is traditionally associated with heavy duty, power-hungry processes. It's something done on big servers. Even if the sensors, cameras, and microphones taking the data are themselves local, the compute that controls them is far away. The processes that tend to make the decisions are all in the cloud. This is now changing, and that change is happening remarkably quickly and for a whole bunch of different reasons.

For anyone that's been around for a while, this isn't going to come as a surprise. Throughout the history of our industry, depending on the state of our technology, we seem to oscillate between thin and thick client architectures. Either the bulk of our compute power and storage is hidden far away in sometimes distant servers, or alternatively, it's in a mass of distributed systems much closer to home. We're now on the swing back towards distributed systems once again, or at least a hybrid between the two. Machine learning has a rather nice split that can be made between development and deployment. Initially, an algorithm is trained on a large set of sample data, and that's generally going to need a fast, powerful computer or a cluster. Then the trained network is deployed into the wild and needs to interpret real data in real time, and that's a much easier fit for lower-powered distributed systems. Sure enough, this deployment or inference stage is where we're seeing the shift to local or edge computing right now, which is a good thing.
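
To make the split concrete, here's a minimal sketch in Keras, assuming a toy model, random stand-in data, and a hypothetical file name: the expensive fit() call happens once on a big machine, and the deployed side only ever loads the saved network and runs inference.

import numpy as np
import tensorflow as tf

# Development side: train once on a big machine (toy data stands in here).
x_train = np.random.rand(1000, 4).astype("float32")
y_train = np.random.randint(0, 3, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=5, verbose=0)   # the heavy, power-hungry step
model.save("sensor_classifier.keras")              # ship this artifact out to the edge

# Deployment side: load the trained network and interpret live data.
deployed = tf.keras.models.load_model("sensor_classifier.keras")
reading = np.random.rand(1, 4).astype("float32")   # stand-in for a real sensor sample
print(deployed.predict(reading, verbose=0))        # cheap enough for modest hardware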

Let's be entirely honest with ourselves: as an industry, we have failed to bring people along with us as we've moved to the cloud. We have increasingly engineered intricate data collection, storage, and correlation systems. We have lakes, we have silos, we have piles of unrelated data that we'll never touch again. It doesn't matter, because outside of this room, this conference, and this industry, we are universally viewed as a dumpster fire. A McKinsey survey last year asked CxO-level executives if their company had achieved positive returns from their big data projects. Just 7% said yes, and that's not even the real problem in our industry right now. We have much more serious problems than "our stuff doesn't actually add any value to your business."

More than a few years ago now, Mark Zuckerberg famously stated that privacy should no longer be considered a social norm. Whether or not Zuckerberg was right, it became the mantra of the big data era. We've been hearing "privacy is dead" for well over a decade now, and as long as people still feel the need to declare it, it's not going to be true, and there is now a serious privacy backlash coming. I really don't think that the current age, where privacy is no longer assumed to be a social norm, will survive the coming of the internet of things. I especially don't think it's going to survive the coming of machine learning.

This is interesting. This turned up recently. This is the Bracelet of Silence. Built by a team from the Human-Computer Interaction group at the University of Chicago, it uses ultrasonic white noise to jam microphones in the user's surroundings: your iPhone, hidden microphones, Alexa. It's hardly the first privacy wearable, but it's probably the most commercial one I've come across. The prototype is large and bulky and not particularly comfortable, but there's nothing here that couldn't be mass-produced. A retail version would be smaller, sleeker, and probably cost about 20 bucks. Privacy is not dead, it just went away for a little while.

You're Being Watched. Are You Ok With That?

The last decade can, therefore, be summed up by this question, although, realistically, we didn't give anyone a choice. As our new tools, those levers on the world, came into widespread use, we've seen an increasingly aggressive and rapid erosion of personal privacy. Privacy really isn't about keeping things private, it's about choice: the choice of what we tell other people about ourselves. In a way, the GDPR is our own fault as an industry, a reaction to the egregious fashion in which we handled the introduction of the Cookie Law. The now-ubiquitous "This site uses cookies" banner that runs across pretty much every website in the entire world may meet the letter of that law, but it violates the spirit. It ignored the point of the regulation, which was an attempt to protect users. That really annoyed the European Union. Of course, we're doing it again in response to the GDPR. Technological fixes rather than cultural ones.

The GDPR is actually pretty simple if you decide you want to abide by the spirit of the law rather than trying to find technological or legal loopholes to go around or through it. Despite that, its arrival did create an entire industry, the GDPR compliance industry, that, for the most part, is entirely snake oil. To be entirely clear, to imply someone is hawking snake oil is not only to call their product low-quality garbage. It implies that they're knowingly defrauding customers and selling them junk. Nonetheless, the arrival of the GDPR in Europe, and to a lesser extent the CCPA in California, is a symptom, not the cause, of the recent rise in debates around privacy. It's a reaction to our own industry's failings.

This is an actual screenshot from an actual app controlling actual smart lightbulbs – the internet of things at its best. Just like some U.S.-based websites, the company behind this particular lightbulb restricted their service when the GDPR was introduced. You could still turn individual lightbulbs on or off, one at a time, but everything else, groups, lights, timings, all of it went away if you told the app that you refused to accept their privacy policy. This makes you wonder, what are they doing with your data? What data about when you turn bulbs on or off is the company behind the app selling on, or using in ways you really wouldn't expect? It confuses me. To be clear, I'm fairly sure this response isn't legal under the GDPR. You can't refuse to provide the service just because the user refuses to let you have the data, unless, of course, that data is necessary to provide the service. Does anyone want to take a guess at what data necessary to turn a lightbulb on and off would infringe the GDPR?

A series of almost accidental decisions and circumstances have led us to a world where most things on the web appear to be free. It doesn't mean they are free, just that we pay for them in other ways. Our data and our attention are the currency we use to pay Google for our searches, and Facebook for keeping us in touch with our friends. Whether you view that as a problem is a personal choice, but it was, perhaps, not unanticipated. Even with no idea of the web, which was still many years in the future, the people building the internet did think about the possibility.

This is from Alan Kay, who in 1972 anticipated the black rectangle of glass and brushed aluminum that lives in all of our pockets today, and the ubiquity of the ad-blocking software that we need to make the mobile web even a little bit usable. If you have time, I highly recommend looking up some of Kay's writings. There's still a lot of value to be had there. While Kay's prediction of the existence of the smartphone was almost prophetic, it was also, in a way, naive. It was a simpler time, without the ubiquitous panopticon of the modern world, without the security threats which arguably shape the modern internet and our view of it.

Privacy, Security, Failure Modes, Ownership, Abandonware

A few years back now, it was discovered that the company behind CloudPets, essentially an internet-connected teddy bear, had left their database exposed to the internet without so much as a password to protect it. In the database were 2.2 million references to audio conversations between parents and their children. With those references, you could retrieve the actual audio recordings of those conversations from the company's entirely open Amazon S3 buckets. The person that originally discovered this problem tried to contact CloudPets to warn them. They didn't respond. Then this database was subject to a ransomware attack. It got deleted, so that's good, but it was accessed many times by unknown parties before it was deleted by yet another unknown party, which is the problem here. Suddenly, it's not just your emails or the photographs of your cat, but your location to the centimeter, your heart rate, your respiration rate, not just how you slept last night, but with whom.

A few years ago, iRobot, the company that makes the Roomba, the adorable robotic vacuum cleaner, gave it the ability to build a map of your home while keeping track of its own location within it. Very useful. Then we found out they were preparing to share that data with some of their trusted commercial partners. It turned out people weren't quite as happy about trading this data for services. They didn't think it was such a good deal anymore, especially when their free services come bundled with smart devices that they had to pay actual money to get – their money.

Back in May last year, a man named Masamba Sinclair rented a Ford Expedition from Enterprise Rent-a-Car. When he rented the car, he connected it to his FordPass app. The app allows drivers to use their phone to remotely start and stop the engine, lock and unlock the doors, and continuously track the vehicle's location. Despite him bringing it to the attention of Enterprise and Ford, we learned that in October, five months after he had returned the car and multiple renters later, he still had full remote control of that car. After his story made worldwide news, Enterprise did eventually manage to resolve the situation, but the underlying problem still exists. Ford regards this as a feature.

Then you have to think about failure modes for the internet of things. You have to think about what state the device should be left in if it fails, or if it simply loses its network connection. This is vastly more complicated for devices that rely on the cloud for their smarts. If your smart device isn't smart but the cloud behind it is, then when the network connection goes away, you suddenly have a very dumb device.

This is the Petnet SmartFeeder. It's an automatic feeding device that manages portion size, food supply, and mealtimes for your pets, all through an iPhone app. Very cool. All the way back in 2016, 4 years ago now, the Petnet SmartFeeder had a server outage, or rather the third party servers it relied on had an outage, and the devices stopped working. The servers were down for more than 12 hours, and during that time, the SmartFeeders stopped feeding their furry charges.

Perhaps you'd think that, four years on, all that would be better, that this would be a solved problem for the internet of things. Just last week, in fact, the next generation of Petnet feeders suffered from more or less the same problem. It's hard to know exactly if it was the same problem, because the company isn't actually replying to email from journalists. This time, the system outage lasted a week. This is not a cheap device, it's over £200. Really, though, price aside, "The cat dies" should never be a failure mode for your product.

These problems aren't helped by the long lifecycle that most internet of things devices are going to have. The typical connected device lives on for 10 years or much more. After all, what is the lifespan of a non-smart thermostat? When was the last time you changed the thermostat in your home? Why would consumers expect a smart thermostat to need replacing sooner? A lot of the early connected devices are now abandonware. The rush to connect devices to the internet has led to an economic model that means manufacturers are abandoning them before we, as consumers, are done with them.

To be fair, that model is forced on companies selling the thing because the other internet, the digital one, has trained us, as consumers, to be unwilling to subscribe to services. We're willing to pay for the device, a physical thing that we can hold in our hands, but we expect software and services to be free. Thanks, Google. There is no cloud, as we all know, because we're the people that run it. There are just other people's computers. If we, as consumers, don't pay for them, then somebody has to.

All of this causes another problem – ownership. As customers, we may have purchased the thing, like the Roomba or a tractor, but the software and services that make the thing smart remain in the hands of the manufacturer. The data the thing generates belongs to, or at least is in the possession of, the manufacturer, not the person that paid for it. Last year, John Deere told farmers that they don't really own their own tractors, just licenses to the software that makes them go. For anyone who doesn't know what a tractor is these days, it's just a really big computer with a wheel at each corner. That means not only can farmers not fix their own farm equipment, they can't even take it to an independent repair shop. They have to use a John Deere dealership. As a result, there's now a black market for cracked John Deere firmware on the dark web, firmware that doesn't lock down the tractor and gives farmers the option to service the thing themselves. It's 2020; it just blows my mind.

Privacy, security, dealing with failure, ownership, abandonware. All of these are problems that exist in the internet of things or, in the case of security and privacy, have been made far worse by our insistence on making our smart devices not that smart, but instead cloud-connected clients for the big data industry. We need to sit back and think about why we're doing what we're doing, what the big data industry is for, rather than just reaching for the tools that came along in 2011 that we were so excited about. We need to look at the new tools that have become available. In the end, we never really wanted the data anyway, we wanted the insights and actions the data could generate. Insights into our environment are more useful than write-only data collected and stored for a rainy day in a lake.

This made me think about something Alistair Croll said, all the way back at the start of this rollercoaster ride: "Big data isn't big, it's the interaction of small data with big systems." Personally, I think it might be time to disassemble the big systems we spent 10 years building, because I'm not sure we need them anymore. I'm not sure we need the cloud anymore. At least, not everywhere.

This is where this comes in. This is a leading indicator, as we say in the trade. It's a leading indicator of the move away from the cloud towards the edge. This is the Coral Dev Board from Google. Underneath that huge massive heat sink is something called the Edge TPU. It's part of the tidal wave of custom silicon we've seen released to market over the course of the last year or so, intended to speed up machine learning inferencing at the edge. No cloud needed, no network needed. You can take the data, you can act on the data, then you can throw the data away.

By speed up, I really do mean that. On the left, we have a MobileNet SSD model running on the Edge TPU. On the right, we have the same model running on the CPU of the Dev Board itself, a quad-core ARM Cortex-A53, if that means anything to anyone. The difference is dramatic: inferencing speeds of around 75 frames per second on the Edge TPU compared to 2 frames per second on the ARM. No cloud needed, no networking needed, this is all local.
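
For anyone wanting to reproduce numbers like these, the measurement is really just a loop around invoke(). Here's a rough sketch using the tflite_runtime package with the Edge TPU delegate; the model file names are assumptions based on Coral's published examples, and the input shape is read from whichever model you actually load.

import time
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

def benchmark(interpreter, runs=100):
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    frame = np.random.randint(0, 256, size=tuple(inp["shape"]), dtype=np.uint8)  # fake camera frame
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()                              # warm-up run, not timed
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], frame)
        interpreter.invoke()
    return runs / (time.perf_counter() - start)       # frames per second

# Same network, compiled for the Edge TPU and run via the libedgetpu delegate...
tpu = Interpreter("mobilenet_ssd_v2_coco_quant_edgetpu.tflite",
                  experimental_delegates=[load_delegate("libedgetpu.so.1")])
# ...and the plain quantized version running on the board's own ARM CPU.
cpu = Interpreter("mobilenet_ssd_v2_coco_quant.tflite")

print(f"Edge TPU: {benchmark(tpu):.1f} fps, CPU: {benchmark(cpu):.1f} fps")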

Recently, researchers at the University of Massachusetts, Amherst, performed a lifecycle assessment for training several common large AI models. They found the process can emit the equivalent of 626,000 pounds of CO2, nearly 5 times the lifetime emissions of the average American car, including manufacture of the car itself.

I've been hearing about this study a lot, and I have a few issues with it and how it looks at machine learning. First, the machine learning it looks at is natural language processing, NLP models, and that's a small segment of what's going on. It's also based on their own academic work: their last paper, where they found that the process of building and testing the final paper-ready model required 4,789 other models to be built over a 6-month period. That's just not been my experience of how, out here in the real world, you train and build a model for a task.

The analysis is fine as far as it goes, but it ignores some things about how models are used and about those two stages, training and deployment. As we'll see later, a trained model doesn't take anything like the resources required to train it in the first place. Just like software, once trained, a model isn't a physical thing. It's not an object. One person using it doesn't stop anyone else using it. You have to split the sunk cost of training a model amongst everyone or every object that uses it, potentially thousands or even millions of instances. It's ok to invest a lot into something that's going to be used a lot, unlike your average American car.

It also ignores how long those models might hang around for. My first job as an adult, fresh out of university, same height but slightly browner hair, was at a now-defunct defense contractor. There, amongst many things I can't talk about, I built neural network software for image and video compression. To be clear, this was the first time, maybe the second time, this stuff was trendy, back in the early 1990s when machine learning was still called neural networks. Anyway, the compression software I built around the neural network leaves rather specific artifacts in the video stream, and every so often, I still see those artifacts in video today, in products from a certain large manufacturer who presumably picked up the IP of the defense contractor at a bargain price after it went bankrupt. Those neural networks are presumably still around, something like 25 to 30 years later, now buried at the bottom of a software stack, wrapped in a black box with "Here, there be magic" written on the outside; the documentation I left behind wasn't that good. Time passes.

This makes the accessibility of pre-trained models, and what have become colloquially known as model zoos, rather important. While you might draw an analogy between a trained model and a binary, and between the dataset the model was trained on and a program's source code, it turns out the data isn't as useful to you as the model. Let's be real here, the secret behind the recent success of machine learning isn't the algorithms. The algorithms have gotten better, but we've been doing this for years. This stuff has been lurking in the background for literally decades, waiting for computing to catch up. Instead, the success of machine learning has relied heavily on the corpus of training data that companies like Google, for instance, have managed to build up.

For the most part, these training datasets are the secret sauce and are closely held by the companies and people that have them, but those datasets have also grown so large that most people, even if we had them, couldn't store them, let alone train new models based upon them. Unlike software, where we want source code, not the binaries, I'd actually argue that in machine learning the majority of us want models, pre-trained models, not data. Most of us, developers, hardware folks that just want to do stuff, should be looking at inferencing, not training. I'll admit this is actually a fairly controversial opinion in the community, although no more so than any of my other opinions, which are just as strongly held.

It's trained models that make it easy for you to go and build things like this. This is something I was doing at the beginning of last year when I was talking about Google's AIY Project kits.

It's essentially a Google Home stuck inside a retro phone. It's quite cool. There's a link there. Anyway, for the most part, back then, I was talking about building cloud-connected machine learning applications. Although I did show a Raspberry Pi struggling, right at the limits of its capability, to do hot word voice recognition without talking to the cloud.

Those error messages are not a good sign, but things have moved on a lot in the last year. Over the last year, there's been a realization that not everything can or should be done in the cloud. The arrival of hardware designed to run machine learning models at vastly increased speed, like that Coral Dev Board, inside relatively low-powered envelopes, without needing a connection to the cloud or a network connection at all, is starting to make edge-based computing a much more attractive proposition for a lot of people. The ecosystem around edge computing is actually starting to feel far more mature, mature enough that you can do actual work. It's not just the Coral Dev Board I mentioned earlier. This is only part of the collection of hardware that's sitting on my lab bench back in the office right now.

On the market, we have hardware from Google, Intel, and NVIDIA, with hardware from smaller, less well-known companies coming soon or already in production. Some of it is designed to accelerate existing embedded hardware, like those USB sticks attached to the Raspberry Pis here on the bench. Some of it's designed as evaluation boards for system-on-module units that are being made available to volume customers to build actual products, like the Edge TPU or NVIDIA's Jetson Nano you can see here in the middle.

Over the last six to nine months or so, I've been looking at machine learning on the edge, and I've published a series of articles, I'll put up links at the end, trying to figure out how the new custom silicon performs compared to normal processors. I based my benchmarks around the Raspberry Pi, because it's really easy to get your hands on and pretty much everyone knows what it is. This is a Raspberry Pi 3 Model B+. It's built around a 64-bit quad-core ARM Cortex-A53 clocked at 1.4 GHz. You should bear in mind, the Cortex-A53 isn't a performance core. It was designed as a mid-range core, for efficiency. This is the Raspberry Pi 4 Model B. Unlike the Pi 3, it's built around an ARM Cortex-A72, and that means some big architectural changes. The A53 used by the Pi 3 was designed as a mid-range core, the A72 was designed as a performance core. Despite the apparently very similar clock speeds, the real performance difference between the two boards is rather significant, as we'll see in a minute.

I looked at TensorFlow, one of the most heavily used platforms for deep learning, along with TensorFlow Lite, which is a version of TensorFlow for mobile and embedded device developers that uses quantization to reduce computational overhead. Quantization, which I'm going to be talking about a lot, uses techniques that allow a reduced-precision representation of weights and, optionally, activations for both storage and computation. Essentially, we're using 8 bits to represent our tensors rather than 32-bit numbers. That makes things easier for low-end hardware to do, but it also makes things a lot easier to optimize in hardware, for instance on custom silicon like Google's Edge TPU.
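
As a rough sketch of what that looks like in practice, this is post-training quantization with the standard TensorFlow Lite converter. The Keras model and the representative dataset generator below are placeholders; with real calibration data the result is a fully 8-bit model.

import numpy as np
import tensorflow as tf

# Placeholder: in practice this would be your trained MobileNet or similar.
model = tf.keras.applications.MobileNet(weights=None, input_shape=(224, 224, 3))

def representative_data():
    # A few samples of realistic input let the converter calibrate the
    # 8-bit ranges for activations; random data stands in here.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]          # enable quantization
converter.representative_dataset = representative_data        # calibration data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8                     # 8-bit tensors end to end
converter.inference_output_type = tf.uint8

with open("mobilenet_quant.tflite", "wb") as f:
    f.write(converter.convert())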

I also looked at a platform called AI2GO, from a startup called Xnor.ai. They'd been in closed testing, but I'd been hearing a lot of rumors around the community about them, so I was quite pleased when they went into beta and I managed to get my hands on it. Their platform uses a new generation of what are called binary weight models, which are entirely proprietary; they don't really tell you very much about how they work. As you'll see, the additional quantization means significant performance gains, enough that Apple went and bought them in January for an estimated $200 million, and you guys don't get to use it ever. Shame.

It was actually sort of interesting when I started out trying to do this. It's incredibly hard to find a good tutorial on how to do inferencing. A lot of the tutorials you'll find on "how to get started with TensorFlow machine learning" talk about training models. Many even just stop there once you've trained the model and then never ever touch it again. I find this sort of puzzling, and presumably, it speaks to the culture of the community around machine learning right now. It's still sort of vaguely academic. You see similar sorts of weirdness with cryptography: a lot of discussion around the mathematics and little about how to actually do cryptography.

In any case, feeding my test code for the various platforms was this image of two recognizable objects: a banana and an apple. It gives us bounding boxes around them, like this, for TensorFlow, and I must state that the importance of bananas to machine learning researchers cannot be overstated. And this, for AI2GO. It's interesting to see that the bounding boxes for the two different classes of algorithm were different: not wrong, not crazy, but definitely different, so that's something to bear in mind.
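
For reference, pulling bounding boxes out of one of those SSD models with the TensorFlow Lite interpreter looks roughly like this. The model and image file names are assumptions, and the output tensor ordering shown (boxes, classes, scores) is the usual convention for the quantized COCO SSD models, so it's worth checking it against the model you actually use.

import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="ssd_mobilenet_v2_coco_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
_, height, width, _ = inp["shape"]

# Resize the banana-and-apple test image to the model's expected input size.
image = Image.open("fruit.jpg").convert("RGB").resize((width, height))
interpreter.set_tensor(inp["index"], np.expand_dims(np.asarray(image, dtype=np.uint8), 0))
interpreter.invoke()

out = interpreter.get_output_details()
boxes = interpreter.get_tensor(out[0]["index"])[0]    # [ymin, xmin, ymax, xmax], normalised
classes = interpreter.get_tensor(out[1]["index"])[0]  # COCO class indices
scores = interpreter.get_tensor(out[2]["index"])[0]   # confidence per detection

for box, cls, score in zip(boxes, classes, scores):
    if score > 0.5:
        print(f"class {int(cls)} at {box} with confidence {score:.2f}")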

Here are the inferencing times for the Raspberry Pi 3, in blue, on the left, and the Raspberry Pi 4, in green, on the right. With the Raspberry Pi 4 having roughly twice the NEON capability of the Raspberry Pi 3, we would expect to see roughly a two-times speed-up for models running on its CPU. As expected, we do, for both TensorFlow and the new binary weight AI2GO platform. You can definitely see the big gains for those proprietary binary weight quantized models. On the far right, the right-hand two bars show the performance of Google's Coral USB Accelerator using that Edge TPU. Custom silicon, as expected, performs better. The speed-up between the Pi 3 and the Pi 4 we're seeing here is actually due to the addition of a USB 3 bus on the Raspberry Pi 4, nothing to do with the onboard CPU.

You're getting roughly 2 to 4 frames a second of image recognition using TensorFlow on the Raspberry Pi and about 12 to 13 frames per second using the proprietary binary weight models, which shows the power of quantization. You get about 60 frames per second with the Edge TPU.

It's not really until we look at TensorFlow Lite on the Raspberry Pi 4 that we see the real surprise. This time I'm using blue bars on the left for TensorFlow and green bars on the right for TensorFlow Lite. We see between a three- and four-times increase in inferencing speed between our original TensorFlow benchmark and the new results for TensorFlow Lite running on the same hardware. This isn't a new-fangled proprietary model. This isn't the stuff that Apple just bought for $200 million. This is stuff you can use yourself. It's the same MobileNet model, it's just quantized. You might actually expect some reduction in accuracy, because it's going faster and we've reduced the size of our tensor representation, but it doesn't really seem to be significant, which is interesting.

That speed-up is actually astonishing and really shows the power of quantization when we're dealing with machine learning on the edge. In fact, the decrease in inferencing times using TensorFlow Lite and quantization brings the Raspberry Pi 4 into direct head-to-head competition with a lot of the custom silicon from NVIDIA or Intel: a normal, run-of-the-mill ARM chip doing deep learning inferencing just as fast, and for a lot less money, than custom silicon from the big players.

For those of us that think better with graphs, me included, here we have the inferencing time in milliseconds for the MobileNet v1 SSD 0.75 depth model, the left-hand bars, and the MobileNet v2 SSD model, the right-hand bars. If you don't really know what that is, it doesn't really matter. It's just two different types of models, both trained on the Common Objects in Context (COCO) dataset. Standalone platforms are in green, while the single bars for the Xnor AI2GO platform and its proprietary binary weight models are in blue, off to the right. All the other measurements, for acceleration hardware attached to a Raspberry Pi 3, are in yellow, and for a Raspberry Pi 4, in red. What it comes down to is that the new Raspberry Pi 4 is the cheapest, most affordable, most accessible way to get started with embedded machine learning right now. If you want to get started with edge computing, buy a Raspberry Pi. Use it with TensorFlow Lite for competitive performance, or snag one of Google's USB Accelerators for absolute best in class.

It's notable that the only platform that kept up with TensorFlow Lite running on the Raspberry Pi's CPU was Google's own Edge TPU, which also leans heavily on quantization. The results here are really starting to make me wonder whether we've gone ahead and started optimizing in hardware, building custom silicon to do this stuff, just a little bit too soon. If we can get this much leverage out of software, out of quantization, then perhaps we need to wait until the software for the embedded edge space is just a tad more mature to know what to optimize for. In hindsight, it makes Microsoft's decision to stick with FPGAs for now, rather than rolling their own custom ASIC like everybody else seems to be doing right now, look a lot more sensible. Just something to think about.

My benchmark figures were all for image recognition, and that's hardly the only thing that you can do locally instead of in the cloud. Right now, while your voice assistants are listening to you, so are the humans behind them. This is a speech-to-text engine called Cheetah from a company called Picovoice. It runs inside the power and resource envelope of a Raspberry Pi Zero. No cloud needed, no network needed. That means that, unlike with most current voice engines, your conversation isn't going to leave your home.

It won't be monitored by humans for quality control purposes. While inferencing speed is probably our most important measure, these are devices intended to do machine learning on the edge, which means we also need to pay attention to environmental factors. Designing a smart object isn't just about the software you put on it. You have to think about heating and cooling and the power envelope. There's a new build of TensorFlow Lite intended for microcontrollers that does exactly that.

The custom acceleration hardware we've been seeing over the last year or so is actually the high end of the embedded hardware stack. This is the SparkFun Edge. It's the board that got spun up to act as the demo hardware. It's built around Ambiq Micro's latest Apollo 3 microcontroller, an ARM Cortex-M4F running at 48 MHz. It uses somewhere between 6 and 10 μA per MHz, so that's about 0.3 to 0.5 mA running flat out, and it draws just 1 μA in deep sleep mode with Bluetooth turned off. That's insanely low powered. For comparison, the Raspberry Pi draws 400 mA. The chip at the heart of this board runs flat out on a power budget lower than what many microcontrollers draw in deep sleep mode. It runs TensorFlow Lite. Keep your eye on LED 47 here.

That's real-time machine learning on a microcontroller board powered by a coin cell battery that should last for months or years. No cloud needed, no network needed, no private personal information leaves the board. At least in the open market right now, this is deep learning at the absolute limits of what our current hardware is capable of. It doesn't get any cheaper or less powerful than this. These tiny microcontrollers don't have enough power to train neural networks, but they can run inferencing using pre-trained models and tricks like model quantization. Of course, TensorFlow Lite for Microcontrollers is up and running inside the Arduino dev environment, if any of you use that. There are actually two competing forks, one by the TensorFlow team at Google and another from Adafruit in New York.
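
For what it's worth, the deployment step on a microcontroller usually means compiling the quantized model straight into the firmware image, normally done with xxd -i model.tflite. Here's a hedged Python equivalent, just to show there's no magic involved: the flatbuffer's raw bytes simply become a C array.

def tflite_to_c_array(path, var_name="g_model"):
    # Turn a .tflite flatbuffer into C source that can be compiled into firmware.
    data = open(path, "rb").read()
    lines = [f"const unsigned char {var_name}[] = {{"]
    for i in range(0, len(data), 12):
        lines.append("  " + ", ".join(f"0x{b:02x}" for b in data[i:i + 12]) + ",")
    lines.append("};")
    lines.append(f"const unsigned int {var_name}_len = {len(data)};")
    return "\n".join(lines)

# Hypothetical usage with the quantized model from earlier:
# open("model_data.cc", "w").write(tflite_to_c_array("mobilenet_quant.tflite"))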

That's not the end of the journey down the stack. TensorFlow Lite for Microcontrollers is a massively streamlined version of TensorFlow. Designed for portable and bare-metal systems, it doesn't need either the standard C libraries or dynamic memory allocation. The core runtime fits in just 16 kilobytes, with enough operations to run a speech detection model in maybe about 22 kilobytes. When people talk about machine learning, they use the phrase almost interchangeably with neural networks, but there's a lot more to it than that. You can use platforms like MicroML to go smaller. Instead of neural networks, MicroML supports SVMs, support vector machines. Good at classifying high-dimensional features, they are easy to optimize for RAM-constrained environments. While TensorFlow Lite needs 16 kilobytes, MicroML can squeeze a model, and do machine learning, into 2 kilobytes.
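
To give a sense of how small that kind of model can be, here's a scikit-learn sketch of the sort of SVM you might then port down to a microcontroller. The porting step in the final comment, using micromlgen's port() to emit plain C from a fitted estimator, is an assumption you'd want to verify against your own toolchain.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A tiny, low-dimensional problem: 4 features, 3 classes.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A linear kernel keeps the number of support vectors, and therefore the
# amount of data the microcontroller has to store, small.
clf = SVC(kernel="linear", C=1.0).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
print("support vectors to store:", clf.support_vectors_.shape)

# Assumed porting step: micromlgen can emit a C header from the fitted model.
# from micromlgen import port
# print(port(clf))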

The arrival of hardware designed to run machine learning models at vastly increased speeds, in low-powered envelopes, without needing a connection to the cloud, makes edge-based computing a much more attractive proposition. It means the biggest growth area in machine learning practice over the next year or two could be inferencing rather than training.

We're right at the edge, the start of a shift in the way we collect data and the amount of data collected. We're going to see a lot more data very soon, but most of it is going to live in low-powered systems that aren't going to be connected to the internet. The data doesn't have to go to the cloud. Instead, we can leverage new tools, machine learning models, and make that low-powered hardware run those models. New tools let us make local decisions. Local decisions mean that the seemingly inevitable data breaches, and the privacy and ownership problems we've seen with the internet of things until now, aren't necessarily going to be inevitable anymore.

Of course, no new tool or technology is a panacea, capable of solving all our problems. I'm never going to be the one to argue that line of lunacy. Not like the blockchain people. If your solution to a problem exposes the underlying technology that solves it, you've failed. The whole point of technology is to get out of people's way and let them do the things they need to do, and unless you work in technology, like us, the thing you need to do isn't the technology itself. The important thing is whether it solves the problem better than other methods, more seamlessly, without the need for the end user to actually understand what we've done ourselves. I've got a sneaking suspicion that deploying a piece of hardware that does magic, without having to fiddle around getting it onto the network and making sure that network stays up, is going to be a lot better in most cases.

Deep learning models are incredibly easy to fool, so they're not going to be useful everywhere. These are two stop signs. The left one has real graffiti, something most humans would not even think suspicious. The right one shows a stop sign with a physical perturbation: stickers. Somewhat more obvious, but it could easily be disguised as real graffiti. This is what is called an adversarial attack, and those four stickers make a machine vision network designed to control autonomous cars think the stop sign, still obviously a stop sign to you and me, says "Speed limit 45 miles an hour." Not only would the car not stop, in many cases, it would actually speed up.
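
The underlying trick is simple enough to sketch: nudge every input pixel a small step in the direction that increases the model's loss. This is the classic fast gradient sign method, shown here against a stock Keras MobileNetV2 purely as an illustration rather than a reconstruction of the stop sign attack; the preprocessing and the one-hot label are left as placeholders.

import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
loss_fn = tf.keras.losses.CategoricalCrossentropy()

def fgsm_attack(image, true_label, epsilon=0.02):
    # image: a preprocessed batch of shape (1, 224, 224, 3), values in [-1, 1]
    # true_label: a one-hot vector of shape (1, 1000) for the correct class
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = loss_fn(true_label, prediction)
    # Move every pixel a small step in the direction that increases the loss.
    perturbation = epsilon * tf.sign(tape.gradient(loss, image))
    return tf.clip_by_value(image + perturbation, -1.0, 1.0)  # stay in the valid input range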

You can launch similar attacks against face and voice recognition machine learning networks. For instance, you can bypass Apple's FaceID liveness detection using a pair of glasses with tape over the lenses. It's really interesting.

Of course, moving data out of the cloud and onto our local devices doesn't solve everything. You wouldn't take a hard drive, just throw it in the trash, or put it up for sale on eBay without properly wiping it first. At least, you shouldn't, just to be clear. Unsurprisingly, perhaps, it turns out that IoT objects have the same problem. This is a LIFX smart bulb. If you've connected this bulb to your Wi-Fi network, then your network password will be stored in plain text on the bulb and can be easily recovered by downloading the firmware and inspecting it using a hex editor. In other words, throwing this lightbulb in the trash is effectively the same as sticking a Post-it Note to your front door with your wireless SSID and password written on it.
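
You don't even need a hex editor; the whole exercise amounts to looking for printable strings in the firmware dump. Below is a rough Python equivalent of the Unix strings tool; the firmware file name and the credential keywords are, of course, placeholders.

import re

def printable_strings(path, min_length=6):
    # Yield runs of printable ASCII from a binary blob, like the Unix `strings` tool.
    data = open(path, "rb").read()
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_length, data):
        yield match.group().decode("ascii")

# Hypothetical firmware dump; flag anything that looks like a Wi-Fi credential.
for s in printable_strings("bulb_firmware.bin"):
    if any(keyword in s.lower() for keyword in ("ssid", "psk", "password")):
        print(s)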

The press around machine learning and artificial intelligence paints a picture which is not really in line with our current understanding of how AI systems work today, or will work in the foreseeable future. Machine learning systems are trained for specific tasks. We are nowhere near general intelligence, no matter what the press says, and most researchers would argue that we don't really understand how to get from here to there. Privacy, security, morals, ethics around machine learning, it's all being quietly debated, mostly behind the scenes so as not to scare the public. What scares me the most is that, as an industry, we have proven ourselves perhaps uniquely ill-suited to self-regulate. Ten years of big data has entirely convinced me that the technology industry is arrogant and childish. "Move fast and break things" shouldn't apply to our personal privacy or our civilization. Of course, this too shall pass.

This is something called the Michigan Micro Mote, and last time I looked, it was the smallest real computer in the world. It features processing, data storage, and wireless comms. It's probably the closest we've seen to the true smart dust vision from the early DARPA days. This is the next paradigm shift, perhaps. In the future, the phrase data exhaust is no longer going to be a figure of speech, it'll be a literal statement. Your data will exist in a cloud, a halo of devices surrounding you, tasked with providing you with sensor and computing support as you walk along, constantly calculating, consulting with each other, predicting, anticipating your needs. You'll be surrounded by a web of distributed computing, sensors, and data, whether you like it or not.

It's now been 30 years since the birth of the World Wide Web, which changed the internet forever. I still rather vividly remember standing in a drafty computing lab, with half a dozen other people, crowded around a Sun SPARCstation with a black and white monitor, looking over the shoulder of someone who had just downloaded the very first public build of NCSA Mosaic by some torturous method or another. I remember shaking my head and going, "It'll never catch on. Who needs images?" which, perhaps, shows what I know. Pay as much attention to me as you want, really.

One More Thing

Nonetheless, I'm going to leave you with one more thing. Just last year, Zuckerberg stood up on the Facebook F8 conference stage and said, "The future is private." Even if you don't believe him, and really, let's face it, he hasn't given us any reason to do so, the fact that this man, the man who 10 years ago stood up and sold us on the mantra of the big data age, that privacy was no longer a social norm, is now saying this tells us something. It tells us that that age is over. If we give our users, our customers, our friends the choice not to share their data with us, and yet keep for ourselves the ability to obtain and act on the insights that data could give us, that's not a bad thing. We don't need their data anymore, or at least, we don't need to keep it. Moving what we can to the edge does not solve everything, but it's going to be a good start. Take the data. Act on the data. Throw the data away.

 


 

Recorded at:

Apr 15, 2020
