Techno-solutionism, Ethical Technologists and Practical Data Privacy

In this podcast Shane Hastie, Lead Editor for Culture & Methods, spoke to Katherine Jarmul of Thoughtworks about the dangers of techno-solutionism, the challenges in the ethical application of technology, and her book Practical Data Privacy.

Key Takeaways

  • Techno-solutionism is the belief that problems will be solved by simply applying a newer and better technology to them
  • As technologists we have a bias towards technical solutions and often do not explore the potential unintended consequences of our choices
  • Technical ethics requires engaging across many disciplines in truly multi-disciplinary teams and actively looking to engage beyond the echo chamber we find ourselves in
  • Data privacy is one area where technology can be applied ethically, through techniques such as federated data analysis
  • Privacy needs to be built into the architecture and design of software from the very beginning

Transcript

Shane Hastie: Good day folks. This is Shane Hastie for the InfoQ Engineering Culture Podcast. Today, I'm sitting down with Katherine Jarmul. Katherine, welcome. Thanks for taking the time to talk to us.

Katherine Jarmul: Thanks so much for inviting me, Shane. I'm excited to be here.

Introductions [00:34]

Shane Hastie: We met at QCon San Francisco last year where you gave a challenging and interesting talk on techno-solutionism. But before we get into that, possibly a better place to start is who's Katherine?

Katherine Jarmul: Who's Katherine? I'm currently working at Thoughtworks as a principal data scientist and recently released a book called Practical Data Privacy. I see myself kind of as a privacy activist, a privacy engineer, a machine learning engineer. I've been in technology for quite a long time and interested in the intersections of the political world and the technical world.

I was also a co-founder of PyLadies, the original chapter in California back in 2010, 2011-ish. So been in technology for a while, been in machine learning for the past 10 years, been in privacy in machine learning and data science for about the past five years. And we can maybe see how that progression relates to the topic of techno-solutionism.

Shane Hastie: Probably a good starting point in that conversation is what is techno-solutionism?

Defining Techno-solutionism [01:24]

Katherine Jarmul: Yes, I think I described it in the talk and I would definitely still describe it as the idea that we have a magical technology box, and we take world problems, or problems created by other technology boxes, and we put them into the magical technology box. And then out comes happiness, problem solved, everything's good.

And I think that anything that you can fit into that type of abstract narrative, I would describe as techno-solutionism. So this idea that if we just had another piece of technology, we would solve this problem.

Shane Hastie: I'm reminded of doing process models where you put a cloud in the middle, and in the cloud are the letters ATAMO, which stands for And Then A Miracle Occurs, and after that, something happens. So now we are replacing the ATAMO cloud with And Then A Technology Occurs, and things get better. But they don't, why not?

Katherine Jarmul: Why doesn't technology solve our problems? I think one of the things that you have to think about when you look at the history of technology is that it's motivated mainly by either the desire to invent and create and change something, the desire to make something easier or to solve a problem or something like this, or other human desires like the desire to kill or the desire to destroy, when we look at the history of technology...

I think in my talk, I linked it back to the invention of gunpowder and the fact that when they discovered gunpowder, they were actually trying to research the miracle of life. They were trying to find a magical medicine that would solve humans' problems, and they found and created gunpowder. And so that's just kind of a nice metaphor to hold in our minds. To be clear, I'm not an anti-technologist, I am a technologist. I work in machine learning and privacy.

But we have this human way of anthropomorphizing technology and also of using it as kind of a reflection of the things that we see in the world. And when we do that, we essentially imprint our own biases and our own expectations and also all of our own idea of how to solve the problem into the technology. So we cannot de-link this connection between what we think the solution should be and how we build technology.

And I think that's where a solution for one person is actually a problem for another person. And I'm a believer in the fact that there's probably no universal moralism or universal truth, and therefore that becomes a difficult topic when you think: I'm going to create something that works for me and then I'm going to scale it so that it works for everyone. And where does that take us? Because depending on the context, maybe the outcomes are different.

Shane Hastie: We don't explore the unintended consequences. We don't even consider the unintended consequences in our optimistic, hopeful... As technologists, how do we take that step back?

The need to identify and explore unintended consequences [04:56]

Katherine Jarmul: Yes, it's a really good question. And again, I don't think there's one answer. I think that one of the things that we need to think about is how do we reason about the history of our fields and technology? This has been something that's fascinated me for years. We continuously think we're inventing new things. And when you really study the history of computers and computing, and even go back further to the history of mathematics and the history of science, you start to see patterns and you see the repetition of these patterns throughout time.

And so I think a useful starting point for most folks is to actually start engaging with the history of your field, whatever area of technology you're in, and the history of your industry. If you're in a consumer-facing industry, maybe the history of consumerism and these types of things, and informing yourself: what are the things that I think are the pressing problems today, did they occur before, what did people try to do to solve them, and did it work?

Just applying this curiosity, maybe a little bit of investigative curiosity, into things before assuming I'm the first person that had this idea, I'm the first person that encountered this problem and I'm going to be the first person to solve it, which of course sounds extra naive, but I also feel like I've definitely been there.

I've definitely been in that moment where I find out about a problem and I'm like, I'm going to help solve this. And I think it's a really enticing, appetizing storyline that we get told a lot by technologists themselves, by the zeitgeist of the Silicon Valley era: we're going to apply innovation and do things differently.

And I think that it's good to have hopeful energy. I'm a Californian, I'm very optimistic. In teams, I'm often the cheerleader. I have that energy, it's in the culture. But I think we can also use curiosity, humility, and also taking a step back and looking at past experimentations to try to better figure out how we might quell our own perhaps overblown expectations of our contributions and our abilities, and the ability of technology in general to address a problem.

Shane Hastie: But we don't teach aspiring technologists any of this thinking.

Widening our perspectives through engaging across multiple disciplines [07:40]

Katherine Jarmul: I know. It's really interesting. So one of the things I talked about in the talk that I think that you and I also chatted a bit about is like, why don't we have multidisciplinary teams? Why don't we have teams where there's a historian on the team, where there's an ethicist or a philosopher, where there's community group involvement, so communities that are involved in combating the problems in an "analog" way?

And I think that my background is a great example of a lot of folks that I meet that have worked in kind of the ethics and technology space in that I am not from a one discipline background. I went to school on a scholarship to study computer science because I was really good at math and I loved math, but I went to school during AI winter and I really hated Java.

We mainly did Java and I had a really bad time, so I switched my major to political science and economics because I could still study statistics and statistical reasoning, which I really enjoyed, but I didn't have to do annoying Java applets, which I really did not like.

And so I think that there's these folks that I meet along the way who kind of have careers similar to mine or who have ended up in the space of ethics and data or ethics and computing. I think a lot of these folks ended up kind of tangentially studying other disciplines and then going back to technology or they started in tech and they went a little bit elsewhere and then they came back.

And I think it would behoove us kind of just as an educational system to give people this multidisciplinary approach from the beginning, maybe even in grade school, to think and reason about technology ethics. At the end of the day, it's a skill that you have to think about and learn. It's not like magically one day you will have studied all of this. And you can also learn it yourself; it's not by any means something that you need to do in a university context.

Shane Hastie: Bringing in that curiosity. One of the things that we joked about a little bit before we started recording was what are we hearing in the echo chamber and how do we know what the echo chamber is? So if I'm actually wanting to find out, how do I break out of my own echo chamber?

Breaking out of our own echo chamber [10:14]

Katherine Jarmul: It's very difficult. We have algorithmic systems now that want you to stay in your chamber or go into neighboring chambers. And I think it's hard. I don't know what your experience has been like, but I think especially during the Corona times and so forth when people weren't traveling, it was very difficult to figure out how to connect with people in different geographies and with different disciplines. Obviously, some of that is starting to fall away.

So conferences are starting again. We got to see each other at a conference and have chats. I think that's one way, but I think another way is to specifically ask yourself: if you were to go outside of your psychological comfort zone, and I don't want to put anybody in harm's way, so obviously only if you're feeling up for it, what is just at the edge of your reach that you kind of feel afraid of learning about, or that you feel this tension or resistance to exploring?

And I think that sometimes those little bits of tension where you're curious, but you kind of also always find an excuse not to do it that maybe those are pathways for people to break out of where they're stuck and to find new ways. And most of that thinking is related by the way to lots of thinking around human psychology and communication and community.

So these are not my ideas, these are just ideas that are already out there. And I don't know, I would be very curious how you get out of your filter bubble.

Shane Hastie: Personally, I try hard to meet people that I wouldn't normally bump into.

Katherine Jarmul: How, just put yourself in a new environment?

Shane Hastie: I'll go into a new environment and try and show up with curiosity.

Katherine Jarmul: Awesome.

Shane Hastie: I don't always do it well.

Katherine Jarmul: I think that's part of it, having to learn that sometimes it will be uncomfortable or it's not going to go the way you want it to go, right?

Shane Hastie: Right. I had a wonderful experience. In my day job, I generally teach IT and business-related topics, and I had an opportunity to teach a group of people who were teaching nursing and plumbing and healthcare and hairdressing.

Katherine Jarmul: Awesome.

Shane Hastie: And it was a completely different group. They had a little bit in common in that they were all vocational training educators, whereas I'm a professional training educator. So the education side of it was in common, but their audience, their topics, the challenges of the 18-, 19-, 20-year-olds they're teaching at the start of their careers versus maybe working with people who are mid-career... it was an enlightening three days.

Katherine Jarmul: Yes, I mean sometimes, it's just... I have a few activist groups that I work with where folks are from very different walks of life and backgrounds, and I feel like sometimes I crave those conversations. I notice it when I haven't attended recently and I've just been in kind of my tech bubble or normal-life bubble of friends, and it can just be really refreshing to get out of the same topics over and over again.

Shane Hastie: So swinging back around on topics: Practical Data Privacy, the name of your new book. Tell us a bit about that.

Practical Data Privacy [13:51]

Katherine Jarmul: I wrote the book with the idea that it was the book that I wish I had when I first got interested in privacy. And I first got interested in privacy by thinking about ethical machine learning. So how do we do machine learning in a more ethical way, in a more inclusive way, and how do we deal with the stereotypes and societal biases that show up when we train large scale models, which is a pressing topic today.

But as I referenced during the talk, as part of looking at my own techno-solutionism, I thought to myself: I can't myself do anything in a technology sense to fix the societal biases that show up in these models. And for the researchers that are working in this space, I greatly admire their work. And I think that when I evaluated it, do I feel like I could contribute here in a meaningful way, and do I feel like the contributions would actually help the industry in any meaningful way and therefore bring purpose to my work?

The answer I eventually came up with was no. And of course, that could be a difficult moment, but the hope for me was that at that time I was also getting interested in privacy. And I saw privacy as greatly related to thinking through the ethical considerations of data use because of the concept of consent: should we use this data? Are we allowed to use this data? Should we ask people if we can use their data? This was very appealing to me.

And then the further I got into privacy, the more interesting it got because there's a lot of very cool math. And so the combination ended up being like, okay, this is a field I feel like I can contribute to. It has two things I love, math and maybe helping society in some ways, not with... Technology is not going to fix everything, but being a positive contribution to the world that I can make as a technologist.

And when I first got into the field, it was primarily academics and primarily PhDs who had been studying, let's say, cryptography or differential privacy or other highly technical concepts for many years. And even though I pride myself on my ability to read research, it was a rough start trying to wrap my mind around some of these concepts and to go from being somebody that knew how machine learning worked to somebody that knows how these privacy technologies work.

And so when O'Reilly gave me a call and asked would I be willing to write a book on privacy technology, I said absolutely yes. And I said, I'd be really excited to aim it towards people like me, people that know math and data science, people that have been in the field, people that have noticed maybe there's privacy concerns they'd like to address and have heard these words but haven't yet had a proper introduction to the theory behind them and then also how to implement them in real life, so in practical systems.

And so in each chapter we start with a little theory and learn some of the core concepts. And then we have some code and Jupyter notebooks that go along with the book to say, okay, here are some open source libraries that you can take, here's how you can use them, here's how you can apply them in your technology context, whether that's data engineering, data science, or machine learning or some other area of the programming world.

Shane Hastie: Can we dig into one of say your favorite? What is your favorite of those privacy technologies and how would I use it?

Practical application of data privacy – federated data analysis [17:32]

Katherine Jarmul: Yes. One of the ones I'm most excited about potentially shifting the way that we do things is thinking through federated or distributed data analysis, or federated or distributed learning. There are already systems that do this, but the idea is that the data actually remains always in the hands of the user, and we don't collect data and store it centrally. Instead, we can either ship machine learning to personal devices, or we can give machine learning away. GPT4All is an example of this, allowing people to train their own models and to guide their own experience with machine learning.
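[Editor's note: to make the federated idea a little more concrete, here is a minimal sketch, not taken from the book's notebooks, of federated averaging in plain NumPy. Each simulated client trains a small linear model on data that never leaves it, and the server only ever sees and averages the model weights. All names and numbers are illustrative assumptions.]

    import numpy as np

    def local_update(weights, features, labels, lr=0.1):
        """One step of local, on-device training for a simple linear model."""
        predictions = features @ weights
        gradient = features.T @ (predictions - labels) / len(labels)
        return weights - lr * gradient

    def federated_average(client_weights):
        """The server aggregates model weights; it never sees the raw data."""
        return np.mean(client_weights, axis=0)

    # Simulated clients, each holding its own private data locally.
    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]
    global_weights = np.zeros(3)

    for _ in range(10):  # federated training rounds
        updates = [local_update(global_weights, X, y) for X, y in clients]
        global_weights = federated_average(updates)

    print(global_weights)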

And we can also run federated queries. Let's say we need to do data analysis on some sort of device usage or something like this. We could also run those. And a lot of times when we implement these in production systems and we want them to have high privacy guarantees, we might also add some of the other technologies. We might add differential privacy, which gives us essentially a certain element of data anonymization, or we might add encrypted computation, which can also help do distributed compute, by the way, by allowing us to operate on encrypted data and process encrypted data without ever decrypting it. So, doing the actual mathematics on the encrypted data and only when we have an aggregate result do we decrypt, for example.
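[Editor's note: as a rough illustration of a federated query combined with differential privacy, the sketch below, again an illustrative assumption rather than code from the book, has each device clip its local value and add Laplace noise before reporting, so the server only ever sees noisy reports yet can still compute a useful aggregate. Encrypted computation would go further and keep even the individual reports hidden until the aggregate is decrypted.]

    import numpy as np

    def laplace_mechanism(value, sensitivity, epsilon, rng):
        """Add Laplace noise scaled to sensitivity/epsilon (differential privacy)."""
        return value + rng.laplace(scale=sensitivity / epsilon)

    rng = np.random.default_rng(42)
    epsilon = 0.5                                 # privacy budget for this query
    device_usage_minutes = [12, 45, 7, 30, 22]    # each value stays on its device

    # Each device clips its value (bounding sensitivity) and adds noise locally,
    # so only a noisy report ever leaves the device.
    noisy_reports = [
        laplace_mechanism(min(minutes, 60), sensitivity=60, epsilon=epsilon, rng=rng)
        for minutes in device_usage_minutes
    ]

    print("Estimated total usage:", sum(noisy_reports))
    print("True total usage:     ", sum(device_usage_minutes))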

And all of these can run in a distributed sense, which would significantly enhance the average secrecy and privacy of people's data, and would be, as we both probably recognize, a fundamental shift that I'm not sure will happen in my career, but I would like for it to happen. I think that would be really great, and we'll see how the winds go.

One of the cool things was the Google memo that recently got leaked, the one saying we don't have a moat and neither does OpenAI, which specifically referenced the idea of people training open source models for their own personal use. And so if Google's afraid of it, maybe it will become real.

Shane Hastie: What about the team social aspects of data privacy? So, building cross-functional teams, dealing with privacy advocates, dealing with legal and so forth, how do I as a technologist communicate?

The need for and challenges in multidisciplinary teams for data privacy [20:05]

Katherine Jarmul: Yes. And I think we're dealing with, again, this multidisciplinary action directly happening inside an organization. When we talk about privacy problems, we usually have at least three major stakeholder groups who all speak different languages. We have legal involvement or privacy advocates that also have some sort of legal or regulatory understanding. We have information security or cybersecurity or whatever you call it at your org, which has its own language and its own idea of what privacy means and what security means.

And then we have the actual technologists implementing whatever needs to be implemented. And we have our own jargon, which sometimes those other teams share and understand, but sometimes not depending on how we've specialized. So particularly when we look at specialization like something like machine learning, it could become quite difficult for a legal representative to reason about privacy leakage in a machine learning system because they didn't study machine learning. And they may or may not know how models may or may not save outlier information or other private information as they're trained.

And so when we look at these fields, if you wanted to ever enter the field of privacy engineering, you're kind of the bridge between these conversations, and you kind of operate as a spoke to allow these groups to share their concerns, to identify risks, and to assess those risks, evaluate whether they're going to be mitigated or whether they're just going to be documented and accepted, and to move forward. And I think that's why the field of privacy engineering is growing. I'm not sure if you saw the Meta fine that got announced this week, of 1.3 billion euros, for transferring a bunch of personal data from the EU into the US and storing it on US infrastructure.

These things actually affect everybody. It's not just a specialty field. It's real regulation happening in real locations. And thinking through privacy in your architecture, in your design, in your software is increasingly expensive if you don't do it, and increasingly, I think, important for folks to address. I also think, outside of just the regulatory aspects, there's something exciting about being able to tell your users, we do data differently, we collect things differently, and we can definitely start to see that there are marketing pushes around this. Specifically: we offer something that's more private than our competitors.

And I think that's because, for better or worse, and I am maybe somewhat cynical, I don't necessarily think it's all from the hearts of the CEOs around the world. I think some of it is that there's actual consumer demand for privacy. And I think that people get creeped out when they find out that things are tracking them and they don't expect it. And I think that maybe privacy by design is finally hitting its era, nearly 30 years after it was written; it's finally aligning with what people want and maybe with what we should think about implementing as technologists.

Shane Hastie: Because these are things that can't, well, can, but it's very, very difficult and expensive to retrofit them afterwards. They've got to be right in the core of the systems you're designing and building.

Privacy needs to be at the core of architecture and design, refactoring for privacy is doable but very hard [23:42]

Katherine Jarmul: Absolutely. I mean, I think that there's ways to approach it from a more, this would be your specialty, iterative and agile point of view. So, don't just rip out the core of your system and say, "Oh, we're going to refactor for privacy. We'll see you in two years." This is why the risk assessments are really helpful, especially multidisciplinary ones: get the group together. Where is everybody's biggest fear that they're not talking about, having a security breach or a privacy breach or some other bad publicity? Start to prioritize those and see, is there a small chunk of this that we can actually take on and redesign with privacy in mind?

Or even having a dream session: what would our architecture look like if we did privacy by design? And maybe there's something right there where you can say, "Oh, we've been thinking about replacing that system for a while. Let's start here." And I think that there's ways to implement small chunks of privacy, and I would never want to tell somebody, "Oh, you have to re-implement everything." I think that's unrealistic and punitive when the norm has been to not build things private by design. I think you should congratulate yourself and be excited at every small step towards something better than what you're currently doing.

Shane Hastie: Katherine, some really interesting and somewhat challenging topics here. If people want to continue the conversation, where do they find you?

Katherine Jarmul: Yes, so obviously you can check out the book, Practical Data Privacy. It should be arriving shortly in physical form, hopefully in a bookstore near you or via your favorite book retailer. But I also run a newsletter called Probably Private, and I must say it is super nerdy, so I just want to give a warning there. It's at probablyprivate.com, and it's specifically about the intersection between probability, math, statistics, machine learning, and privacy, with of course a little bit of political opinion thrown in now and then.

Shane Hastie: Wonderful. Well, thanks so much for talking to us today.

Katherine Jarmul: Thank you, Shane.
