Transcript
Currie: My name is Anne Currie. I have been in the tech industry for about 25 years, doing everything. I was an engineer, startup founder, eCommerce, all kinds of things. These days I am a visiting lecturer in tech ethics at the University of Hertfordshire. I'm also on sabbatical from Container Solutions writing a series of science fiction novels. I've just finished book five which will be out in time for Christmas.
Ethics 101: Don't Break the Law
I'm going to give you a very fast run-through of the major issues in tech, with the pros and cons and the ethical things we need to think about. I cover this on my course; this is three of my lectures, which would normally take about three hours, and I'm going to compress it down into 20 minutes.
I'm going to start slightly differently from that. I'm going to start with what I start the course with. The first lecture is the 101 of tech ethics, just to set a little bit of context for the rest of it. The 101 of tech ethics is: don't break the law. It might seem obvious, but it is necessary. It's not sufficient, but it's absolutely necessary. In fact, there are lots of laws that apply to technology that we do need to follow. It's not just the GDPR. The GDPR actually contains some very useful stuff about how to behave better around algorithmic transparency and right of reply. Privacy, I'm not so bothered about, but I think the other parts of it are actually very good. And it's not just the GDPR; we're talking about copyright law, contracts, and accessibility. There are an awful lot of laws. Equally important are equality and discrimination laws. If we obey the law, we are actually a large way towards being ethically correct. It's actually very easy to break the law with technology without realizing it, so staying on top of what society has already codified is the right thing to do. You do need to do that. It's not everything that you need to do, but it's a good start. The rest of this talk is about things that are not necessarily against the law yet, but are likely to be in the future, because the law tends to lag behind what technology can do and what society has decided technology should do. It doesn't lag that far behind, though, so if you're paying attention and reading the newspapers, you'll be able to see what direction society is going in and try not to do things which already look highly dodgy. So for the rest of this talk, I'm going to cover things that are not against the law yet but do need to be thought about very carefully.
Energy Use in the Tech Sector: Sustainability and Climate Change
I'm going to start with something that I normally talk about, which is pollution, climate change, and the tech industry. You might have already read Greenpeace's Clicking Clean reports, which are very good. In the 2017 one, they said the tech industry used around 12% of the world's electricity, which is probably higher now. That splits into about 10% for running devices and about 2% for operating data centers. That is quite a lot of electricity. Do I think that what we get out of it is worthwhile? I absolutely do. The pro here is the tech industry itself; the con is the electricity used and the carbon produced. The good news is that a lot of what we require is electricity, and electricity can be produced renewably, so we have no excuse for not doing so. We need to make sure that everything that can run on electricity runs on renewable power. The big three, Google, Amazon, and Microsoft, are making good progress on that today, but we do need to hold their feet to the fire. We need to make sure that they continue to do a good job and continue to convert their data centers to renewable power. You need to speak up to your providers and express a preference for renewable power, because some of them, and I'm particularly thinking about Amazon here, are entirely customer focused: if you ask for it you'll get it; if you don't ask for it you won't. We need to be asking for it. That's an easy one. We're looking at pollution there, and climate change.
AI and Big Data
Next, I'm going to talk about AI and big data. That actually breaks down into two areas: we've got machine learning, so machine-enhanced statistical analysis, and we've got automation. Machine learning is data analysis on a larger scale than we've been used to. The pros are fantastic potential improvements in medicine, in science, in engineering. Great stuff. The con is the con of anything which is data dependent, which is: crap in, crap out. If your data is bad, you will get bad results. We know that there is an awful lot of bad historical data out there for certain groups. Groups that have been disadvantaged in the past have a lot of data which could be used to continue that disadvantage, so we have to be incredibly careful about how we use big data. That's poor folk, disabled folk, racial minorities, unhealthy folk. We need to be incredibly careful that we don't continue past inequalities using crap data that we have sourced from the past. We want to change in the future, not have more of the same. Big data, we all know that. Actually, a lot of that is already covered by law, by disability and equality legislation. My 101 of tech ethics: don't break the law.
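To make "crap in, crap out" concrete, here is a minimal sketch, not from the talk and with entirely invented data, of how a naive model trained on biased historical decisions simply learns to repeat the bias. The loan scenario, group names, and functions are all hypothetical, purely for illustration:

    # Hypothetical illustration, not from the talk: a naive "model" trained on
    # biased historical decisions reproduces the bias. All data is invented.
    from collections import defaultdict

    # (group, qualified, approved): group "B" was historically denied even when qualified.
    historical_loans = [
        ("A", True, True), ("A", True, True), ("A", False, False),
        ("B", True, False), ("B", True, False), ("B", False, False),
    ]

    def train(records):
        # "Learns" nothing but the historical approval rate for each group.
        totals, approvals = defaultdict(int), defaultdict(int)
        for group, _qualified, approved in records:
            totals[group] += 1
            approvals[group] += approved  # bool counts as 0/1
        return {g: approvals[g] / totals[g] for g in totals}

    def predict(model, group):
        # Approves whoever the past approved: the old inequality, now automated.
        return model[group] > 0.5

    model = train(historical_loans)
    print(predict(model, "A"))  # True: group A keeps getting approved
    print(predict(model, "B"))  # False: group B keeps getting denied, qualified or not

Notice that the "qualified" column is never even consulted: the system faithfully relearns the historical disparity, which is exactly the failure mode equality law is there to catch.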
The other thing we're obviously talking about here is automation. Pros: this is what humanity is for. We're very good at using tools to get more done with less effort. That is why we live in nice houses and have running water these days. I would say that the pros generally outweigh the cons, but the cons are very significant. The obvious con in the short term is that when you automate, you tend to get a lot of job loss. That's extremely unpleasant and very difficult for the people who suffer from it, so we need to be mitigating it. There's a slightly more subtle danger as well, something that's happening particularly now but has probably happened all the way through history, which is: you don't lose your job, but you are now, in effect, controlled by a machine. Quite often your day might be scheduled by an algorithm, especially if you're a gig worker. That can be pretty awful. Algorithms can be quite psychopathic. They can be very uncaring about you and your needs as an individual. We need to be incredibly careful about situations where humans are controlled by algorithms. It needs to be tested, with a strong right of reply and redress. It needs to be monitored. We need to be incredibly careful about it. There are lots of cons to mitigate when algorithms are controlling humans.
Cyberwarfare, Propaganda, and Killer Robots
The next area is the future of warfare. We've got cyberwarfare, propaganda, and killer drones; those are the three that immediately spring to mind. We'll start with killer drones, although they're all constantly in the news at the moment, and correctly so. I did have Professor Noel Sharkey of the Campaign to Stop Killer Robots give a little bit of input into this talk. His argument against killer drones at the moment is that, putting aside whether it's right or wrong to have a drone make an algorithmic decision about whether someone lives or dies (and I don't think he actually wants to put that aside), the technology is simply not good enough yet for drones to be making decisions about whether people live or die. The argument for doing it is that it will kill fewer people because it'll be more accurate. It's not more accurate at the moment; therefore, it's killing different people, many of whom are innocent. The pro is that it's cheap for the military and it reduces military casualties. The con, for the West generally and for America in particular, is that it might kill the wrong people, and it can destabilize societies. If you look at what's going on now in Azerbaijan and Armenia, you can see a bigger con: by reducing the cost of warfare with killer drones, you're getting huge numbers of civilian casualties, because war is now easier and cheaper to wage. Wars being easier and cheaper to wage is not a good thing, but making things easier and cheaper is exactly what automation does, so you can't really avoid that. There's a major issue at the moment with cheap, drone-based warfare.
Going beyond killer drones, which I don't think are a good idea, we're into other new, cheap forms of warfare: cyberwarfare, using hacks to attack civilian targets like hospitals, power grids, and schools. Pro: it's cheap, and it's less violent; nothing blows up. Con: it directly targets civilian populations, you can do quite a lot of it, and it's massively destabilizing. For you folks, the moral here is: keep everything patched. If I were launching a war these days, I would be using cyberwarfare, because I don't have any money and that is the cheapest way to do it. No, that's not true. It's not the cheapest way. The cheapest way to wage a war at the moment is propaganda. Keir Giles, a Russia specialist at Chatham House, defines propaganda as destabilizing an adversary society by creating conflict within it, and creating doubt, uncertainty, and distrust in institutions. It's very easy to do, particularly on social media platforms. Propaganda has been a weapon in warfare for as long as warfare has existed, since Darius the Great, and unless social media platforms act to suppress it, it will continue to be a very effective one.
Surveillance
Pros and cons of surveillance. London, where I live, is the sixth most surveilled city in the world. Surveillance is actually very popular, because people want security, which is a pro. The other cities in the top 10 most surveilled are pretty much all in China, where, oddly enough, it's also quite popular, because it helps with social cohesion and social control. In America, companies like Amazon are starting to roll out surveillance widely to make shopping more convenient. Again, that will be enormously popular. Cons: privacy, which I think is dead anyway; I'm not that bothered about privacy. Morally, I think it's dubious. It's very powerful: you're putting a lot of power in the hands of whoever owns the surveillance data, which is good if you like them and bad if you don't. And we're a little early on it; it's not good enough yet. Noel Sharkey also speaks up on this, quite rightly. It's not good enough for what we're currently attempting to do with it. We overestimate how good it is.
Anthropomorphism and Habituation - Some of the Biggest Threats Are the Most Attractive Technologies
Moving on to the next subject, which is anthropomorphism. You might think this is not all that important, but I'm with the science fiction writer Philip K. Dick on this: I think it might actually be the scariest one. We take an algorithm, we stick it inside a cuddly toy, and then we believe it's good. There are loads of downsides to that. The pro is that it's a very cheap way of giving people love and care and attention, even if that's quite sad. The cons are that once we wrap something in a cuddly toy and make it look cute, we over-ascribe correctness to it, we over-ascribe predictability, and we over-ascribe soft skills like mercy. Because, let's face it, not very many humans are psychopaths, but all algorithms are psychopaths. They can do all kinds of crazy stuff that a human would never do. If you think of it as a cuddly toy, then you won't be watching out for that, you won't be properly monitoring it, and you won't be looking out for the mistakes that will inevitably happen.
Attention - Where Does Your Time Go?
Attention, are we entertaining ourselves to death by doom scrolling on Twitter? Probably. Actually, the stats don't tend to back that up. We spend about as much time on things like social media now as we used to do on TV. It's entertainment. We quite like it. I like doom scrolling.
Social Scoring
Social scoring, I think this is a very interesting one. Pros: you can find a plumber who actually knows what they're doing. In China, it's great for finding out whether your neighbor is trustworthy or not, and they really like it there. We all really like social scoring. It's extremely powerful, and increasingly so; in China it's very good at controlling behavior, which makes it even more powerful. I see us going in that direction. The cons are: who controls it? Again, it's the same as with surveillance. It's also very hackable at the moment; it's quite easy to give some people low reviews or high reviews and completely game the system. Very powerful and quite hackable: that's the con for social scoring at the moment.
Open Code and Data
Open code and data. Open source, what's bad about that? With open source you get more bugs fixed, you get more eyes on it, you get more progress; all good stuff. Cons: if you build it and put it out there, people can use it for things that you didn't really want them to use it for. Tough luck; that is how it works when you put code out there. If you want to guard against that, put it under the GPL. I would say this is perfectly reasonable. I always advise people to do this, and they always look very unhappy about it. The GPL or AGPL is quite off-putting to evil folks, because it forces them to open source their own code, and there are quite large organizations out there with the money to back legal suits against companies that misuse GPL code. If you're not very happy about what might be done with your code, put it under the GPL.
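As a practical footnote to that advice, here is a minimal sketch of what it looks like in a codebase; the project name, year, and trivial code are placeholders, not from the talk. Applying the GPL or AGPL normally means putting the full license text in a LICENSE file at the repository root and a short header at the top of each source file, for which the SPDX identifier is the widely used convention:

    # SPDX-License-Identifier: AGPL-3.0-or-later
    # Copyright (C) 2020 Example Project contributors
    #
    # This file is part of Example Project, free software: you can redistribute
    # it and/or modify it under the terms of the GNU Affero General Public
    # License as published by the Free Software Foundation, either version 3
    # of the License, or (at your option) any later version.

    def main():
        # Trivial placeholder body; the header above is the point of the example.
        print("This code's source must stay open, even when run as a network service.")

    if __name__ == "__main__":
        main()

The AGPL is the stronger deterrent of the two: unlike the plain GPL, it also obliges anyone offering the code as a network service to publish their modifications, which is precisely what makes it off-putting to the evil folks mentioned above.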
I love open data. That's because I'm not really interested in privacy. I think there could be amazing stuff if we put a whole load of data, medical data or behavioral data, out in public and let people look at it. The downsides: lots of companies don't like you doing that, because they consider the data a commercial secret. You think: it was recorded from me, so why shouldn't my society benefit from it? Again, if you keep it to yourself, then you get all the benefit from it, and I don't know if I trust you. Pros and cons: open data and open source I quite like, and I think the only way to make them safe is to make them more open. There are lots of other things, accessibility, security, privacy, that you need to be considering, but they are more aspects of the larger-scale issues I've already covered than issues in and of themselves.
Conclusion
I'm going to close on: how do you know if you're doing something bad? How do you know if your code is going to be evil? I used to think that if you thought it was going to create something dystopian, then you shouldn't do it. I was picked up on that by a friend who is younger than me, who pointed out that things change. What I as an old person might think of as dystopian, a young person might not, and in 10 years' time no one might think of it as dystopian at all. I can't make that judgment; I'm too old. You have to monitor. A decision that I make now might not be the same decision I would make in 10 years' time. I might not have foreseen how my technology was going to be used in 10 years' time; it might be completely different. It's not fire-and-forget. I'm going to have to constantly monitor, constantly mitigate, and constantly inspect. I'm not going to be able to sit down and think very hard about the implications of my technology now, then write it, push it into the cloud, and never look at it again. I am going to have to constantly review what's being done with it.
Resources
If you're interested in any of this stuff, I would strongly recommend that you read my books, the Panopticon series, available on Amazon. They're extremely inexpensive, and great Christmas presents. They're also available in paperback. Generally, people quite enjoy them.