
Leslie Miley on AI Bias and Sustainability


Live from the venue of the QCon London Conference, we are talking with Leslie Miley, a technical advisor to the CTO at Microsoft. In this podcast, Leslie shares his insights on AI bias, sustainability, and the potential impact of AI on society. He emphasizes the importance of understanding and mitigating the harm these technologies can cause, while also discussing the responsibilities of developers, tech companies, and individuals in ensuring a more responsible and ethical approach to AI development. 

Key Takeaways

  • Developers should consider the potential harm of their AI creations from the very beginning and implement mitigation measures during development.
  • Engaging with communities that could be negatively impacted by AI technologies can help developers understand the potential consequences and build better products.
  • Open-sourcing AI algorithms can help in addressing ethical concerns, but developers must be aware of the potential for misuse by bad actors.
  • Developers should strive to create AI tools that are more difficult for bad actors to exploit, promoting positive uses of technology.
  • Taking humanities courses and engaging with real-world problems can help developers better understand the human impact of their AI projects.

Transcript

Introduction [00:44]

Roland Meertens: Welcome everyone to the InfoQ podcast. My name is Roland Meertens, your host for today, and I will be interviewing Leslie Miley. He's a technical advisor to the CTO at Microsoft and we are talking to each other in person at the QCon London Conference where he gave the keynote on the first day of the conference about AI bias and sustainability. Make sure to watch his presentation as he draws interesting parallels between historical infrastructure projects and modern AI development. During today's interview, we will dive deeper into the topic of responsible AI. I hope you enjoy it and I hope you can learn something from it.

Welcome, Leslie, to the InfoQ podcast. You just gave a keynote talk at QCon London. Could you maybe give a summary of the things you talked about?

Leslie Miley: Thank you for having me, Roland. I'm appreciative of the time and the forum. Wow, that was a really interesting talk. I gave a talk on AI bias and sustainability and how the two are inextricably linked. It's interesting because when you look at AI and when you look at sustainability, they are both human problems and they can impact humans in many different ways. If we don't think about the human beings first, and the human beings that can be impacted more, so marginalized groups, we can really run into problems. Part of the talk was really about showing how the road can be paved with good intentions. What do they say? The road to hell is paved with good intentions.

The fascinating part of that is we have something in the United States called the interstate highway system, designed and built in the 1950s and 1960s: 46,000 miles of road that was supposed to connect all of the major cities and create not just a transportation network, but a network for shipments of goods and services. While it did do that, it also cut communities off from each other, and it created a huge social problem, a huge cultural problem, a huge pollution problem. It gave rise to the American car, the American gas guzzler, which increased the amount of CO2, nitrous oxide, and NO2 being released, particularly in some of the most disadvantaged communities in the country. I really do think the infrastructure that we're building for AI, and specifically the infrastructure demands for generative AI, could be the same thing. We could get a lot of good out of it, but we could also end up impacting vast numbers of people who are already being impacted by the inequities in society.

Trade-offs in AI [03:21]

Roland Meertens: I think that's really the difficult thing right now, right? All these new AI technologies are bringing us a couple of good things and a couple of bad things. How would you evaluate this? Do you have a way to weigh them for yourself? Do you see some kind of trade-offs we can make right now?

Leslie Miley: The question I ask, and this goes back to my early days at Twitter when I was running the safety and security group, is how can something be weaponized? Asking that question doesn't always get you a straight-line answer, and sometimes if I come back with "I don't know," then I have to ask the next question: who can I ask? I think that's the question we have to ask ourselves all the time: can this be weaponized? If you don't know, who would know, and how do you get in touch with them and ask them the questions? That's my framework, and that's the rubric I use. I think that OpenAI, strangely enough, may have done something similar, where they understood this in the very beginning and they had a very extensive manual review process for the data that they were collecting and the outputs, so they could tag this.

It wasn't perfect, but it's a start. I think that's what we have to do: as we design these technologies, we ask what harm they can do before we get really far down the road, and then we try to build the mitigation measures into the actual development of the technology.

Roland Meertens: I think that's the difficult thing right now with ChatGPT. They built in a couple of safeguards, or at least they tried to. The AI will always say, "I'm a language model and this is not ethical," but people also immediately started jailbreaking it, trying to get it to talk about hijacking a car, trying to get it to give them unethical advice. Do you have any idea of how they should have deployed it, or how they should have tested or thought about this?

Leslie Miley: Yes, I do actually. I'm not sure if I want to give Sam Altman any free advice. He should be paying me for the advice. I think you have to be inclusive enough in your design from the beginning to understand how it can be used. Of course, people are going to try to jailbreak it. Of course, people are going to try to get it to hallucinate and respond with pithy things or not-so-pithy things. In the rush to make this technology available, I don't think you'll ever get to a point where you can stop the harm. I'll use this statement throughout the podcast: capitalism is consistent. The rush to market, the desire to be first, the desire to scale first, the desire to derive revenue will consistently overtake any of the mitigation measures that a more measured approach would have.

I think that's a problem, and that's why I wanted to do this talk: I wanted people to start thinking about what they're doing and its potential harms even before they begin. I mean, we've learned so much in engineering and software development over the last 25 years, and one of the things we've learned, and I'll try to tie this together, is that it's actually better to test your code as you're writing it. Think about that. 25 years ago nobody was doing that. Even 20 years ago it was still a very nascent concept, and now it's a standard practice, a best practice, and it allows us to write better code, more consistent code, more stable code that more people can use. We need to start thinking along the same lines. What's the equivalent, for generative AI, of writing your tests as you write your code? I don't know, but maybe that's a way to look at it.

Roland Meertens: That's at least one of the things I see in generative AI: for example, OpenAI's DALL-E interface will push back if it detects that you're trying to generate something it doesn't like. It also has false positives, so sometimes it won't let you generate something which is absolutely innocent. But how does one deal with the unknown unknowns? Don't you think there are always more unethical cases you didn't think of and can't prevent?

Leslie Miley: Yes. Human history is replete with that. I mean, gunpowder was not made to fire bullets. It was made to celebrate. I don't know where that analogy just came from, but I just pulled it out of the depths of my memory somewhere. I'm sure the people who developed gunpowder were not thinking, "Ah, how do we keep people from making weapons of mass destruction with this?" They were just like, "Ah, this is great. It's like fireworks." I don't think you can prevent everything, but I think you can make it difficult, and I think you can prompt people, to use a term that's appropriate here, to subscribe to their better angels. Twitter, before its recent upheaval, had a feature, and may still have it, where if you are going to tweet something spicy, it will actually give you an interstitial. I don't know if you've seen that.

I've seen it where if I'm responding to political discourse, and usually my responses aren't going to be that far off or that spicy, but it gave me an interstitial saying, "Hey, are you sure you want to tweet this? It looks like it could be offensive and offend people." I was just like, "That's a good prompt for me to get in touch with my better angels, as opposed to just rage tweeting, putting it out there, and getting engagement." I really want to come back to that, because it's in response, strangely enough, to the data that shows that negative content has higher engagement on social media.

Roland Meertens: Also, social media started out as something which was fun, where you could meet your friends, you could post whatever you wanted, and it somehow became this echo chamber where people are split up into political groups and really attacking each other.

Leslie Miley: That's a great analogy. I'd just like to take us to the logical illogical conclusion, perhaps. What happens if what we're doing with generative AI has a similar societal impact as the siloing of communities, vis-a-vis social media? I mean, that's actually frightening, because now we just don't like each other and we go into our echo chambers and every now and then we'll pop out and we'll talk about right wing this, left wing that, Nazi this, fascist that. AI can make it a lot worse.

Roland Meertens: You could also see AI helping those same communities out a bit. For example, one fear of AI is that people who are poorer might have their jobs taken away. But on the other hand, ChatGPT also allows someone who's maybe not that highly educated to write a perfect application letter, or to get along in a field where previously they would have to ... maybe someone who is an immigrant, not a native English speaker, and didn't have that much training can now write perfect English sentences using a tool like ChatGPT. Don't you think it could also offer them some opportunities?

Leslie Miley: I think so. I'm trying to think of something I just read or listened to. It's almost like you need a CERN for generative AI to really have something that is for the benefit of humanity. I can see what you say. It's like, hey, we will help people do this, help people do that. Yes, it has that potential, and I would hope that it would do that. I just don't know if, like I said, we'll subscribe to our better angels. Perhaps we will, but we also run the risk, and this is part of what my talk was about: if we deploy this to communities that may be in a different country and have to learn the language, or may not have had the best school system, but now have access to basically the sum of knowledge on the entire internet, which I think GPT-4 was trained on, the other side of that is if you have a million or five million or 100 million or 200 million people doing that, what's the carbon cost of that? Where are these data centers going? How much more CO2 are we going to use?

Then you have people, yes, taking advantage of this, hopefully improving their lives, but vis-a-vis their actions, they're going to be impacting other people's lives, people who are also disenfranchised. It's like, yes, we allowed people to own cars and this created mobility and gave people jobs and gave people this, and it pumped a bunch of CO2 into the environment, and we grew these big cities and we grew all this industry and we educated all these people, and we had to build power plants that burn coal to ... it's like we do this good thing, but then we have this bad thing, and then we try to mitigate the bad thing, and then we keep repeating this pattern, and mitigating the bad thing never really fixes the original problem.

Responsibility of developers [12:08]

Roland Meertens: Where would you put the responsibility for something like this? For example, in the Netherlands we have this ongoing discussion about a data center for Meta, which many people in the Netherlands are very much against, but at the end of the day people in the Netherlands are still using a lot of Meta products, so apparently there is a need for a data center if Meta says so; they're not building it for fun. Would you put the responsibility more on the users of the services, to reconsider what they are actually using, or would you put it on the developers? Where would you put the responsibility to think about this?

Leslie Miley: It's a collaborative approach. It's not just Meta, and it's not just your local state or country government. It's also the community, and how do you make sure that everyone's voice is being heard? That includes: if Meta says, "We're putting this data center in this place and we're going to use coal to run it," what about people in the South Pacific whose islands, whose homes, are going to disappear? How do they get a voice? This is what I mean. The point I was trying to make, and I don't think I really got a chance to make it in the talk, is that the choices we make today are no longer confined to our house or our office or our country. They're worldwide in many cases. What OpenAI is doing is going to have a worldwide impact, not just on people's jobs, but also from an environmental standpoint.

The data center buildouts that Meta and Google and Microsoft and everyone else are doing, it's the same thing, and how do you do that? The reason I'm raising these questions is that there aren't answers to them, and I understand that, but they are questions that we, the global we, have to start having a dialogue about, because if we don't, we'll just continue doing what we've always done and then try to fix it on the back end after the harm has been done. I just want us as a species to break that cycle, because I don't think we survive long term as a species unless we do. I know that's really meta and really kind of far out there, but I fundamentally believe that. It's a terrible cycle that we see every day, and its impact is all over the world, and we have to stop it.

Roland Meertens: I think breaking out of such a cycle is especially hard. You mentioned the highway system as something which is great because you can drive all across America, but the downside is, of course, that we destroyed many beautiful cities in the process, and America still has many, many problems with massive sprawling suburbs where you can only get around by car, with a big vehicle emitting CO2. But I know that in many European cities, at some point people learned and started deconstructing the highways. You also see it in San Francisco, for example, where the ring road is not a ring road anymore. Do you have any ideas on how we can learn faster from the mistakes we make, or how we can see these mistakes coming before they happen, specifically in this case for AI?

Leslie Miley: I think about that and what comes to mind is the fossil fuel industry and how ridiculous it is that when you have these economic shocks like the illegal invasion of Ukraine by Russia that shoot energy prices through the roof, energy companies make billions in profits, tens of billions, hundreds of billions in profit. I'm like, "Well, there's a wrong incentive. It is a totally wrong incentive." What could happen is that you could actually just take the money from them and just say, "You don't get to profit off of this. You don't get to profit off of harm. You don't get to profit off of the necessity that you've created and that the people have bought into." I think that may be what we have to do, and it's the hardest thing in the world to do is to go to somebody and say, "Yeah, I know you made all this money, but you can't have it."

Roland Meertens: How would you apply this then to AI companies? How would you tell them what they can profit from and what they cannot profit from?

Leslie Miley: Well, maybe let's pick on Meta, or maybe even Twitter at this point. I think Meta is a little bit better. I think you can look at Meta and you can say it is demonstrably provable that your ad-serving radicalizes people, that your ad-serving causes bad behavior and causes people to take some horse dewormer instead of getting a COVID vaccine, so we're just going to start fining you when your platform is used against the public good. Yes, I know the public good is probably subjective, but I also think that at some point we just have to start holding companies responsible for the harm they do. The only way to hold companies responsible for the harm they do is to take their money, and not a $100 million or $200 million fine, but a five or 10 or $50 billion fine, and make it so that companies are incentivized, at a minimum, to not do the wrong thing.

Meta knows; their CTO said this. He's like, "Our job is to connect people. If that means that people die, okay, as long as we connect them. If people use our platform to kill each other, that's okay as long as they're connected." This is a quote from him. I'm like, "Oh no, time out, Sparky. You can't say that," and as a government we should be like, "No. No, you don't get to profit from that. You don't get to profit from amplifying the hatred that causes a genocide in a country. You don't get to profit off of misinformation that causes people to take action against their own health. You just don't get to profit from that, and we're going to take your profit from that." I can almost assure you, if you did that, the companies of course would fight it, but they would probably change their algorithms.

Balancing short-term and long-term incentives [17:41]

Roland Meertens: But how would you, for example, balance the short term against the long term? If we look at ChatGPT, in the short term it seems to help people write better letters, students can learn, so that's amazing, but we can probably already see coming that there will be a massive influx of spam messages, massive long texts being sent to people. It can be used in an evil way, maybe to steer the discourse between groups you were talking about on Twitter, to automatically generate more weird ideas and actually put those ideas on paper. How would you balance the short-term and long-term good against the short-term and long-term evil?

Leslie Miley: This is where I think open sourcing algorithms helps, open sourcing what these models are being trained on. If you deploy a model and it goes kind of up and to the right and you start to be able to realize significant revenue from it, it starts impacting society, and that can be defined as a town, a city, or a country. I think that's when the discussion starts of what this means. I don't think it's mutually exclusive. You can have a Twitter without it being a honey pot for assholes. You can have a Meta without it being a radicalization tool for fascists. You can have a DALL-E or a Midjourney that doesn't create child exploitation images. That's all possible. When it happens and there's no back pressure to it, and by back pressure I mean regulation, companies are going to act in their best interest and changing an algorithm that ...

As I said, Meta knows their algorithm, they know what their algorithm does, but they also know that changing it is going to hit their other metrics and so they don't change it. I think these are the things that we have to push back against. Once again, these companies, and I've worked at most of the ones we're talking about, love to say that they have the smartest people in the world. Well, if you can't figure out how to continue to make money without child exploitation, maybe you don't deserve to make money.

Open source AI [19:42]

Roland Meertens: But for the open sourcing, don't you think there are also inherent risks and problems? For example, if I were a bad actor, I can't really use OpenAI's DALL-E to generate certain images. I could download Stable Diffusion, which has a certain level of security in there: it actually gives you an image of Rick Astley if you try to generate something weird.

Leslie Miley: You get Rickrolled. Wow, that's great.

Roland Meertens: You get Rickrolled. But the problem is that you only have to change one or two lines to bypass the security. So there you see that some malicious actors actually use the open-source software instead of the larger commercial offerings, because the open-source software allows them to train it on whichever artists or whichever images they want.
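For context on the mechanism Roland is describing: in openly distributed Stable Diffusion tooling, the safety filter is typically just an optional component of the locally run generation pipeline, which is why bypassing it takes only a line or two of code on your own machine. Below is a minimal sketch, assuming the Hugging Face diffusers library; the podcast does not name a specific codebase, and the model ID and prompt are illustrative placeholders.

```python
# A minimal sketch, assuming the Hugging Face `diffusers` Stable Diffusion
# pipeline; the model ID and prompt are illustrative placeholders.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# After generation, the bundled safety checker flags images it considers
# unsafe and the pipeline replaces them (early reference scripts swapped in
# a Rick Astley image; diffusers returns a black image instead).
result = pipe("a watercolor of a lighthouse at dusk")
print(result.nsfw_content_detected)  # per-image flags from the checker

# Because the checker is just an attribute on a locally run pipeline, a user
# can remove it with a single line, the "one or two lines" mentioned above.
pipe.safety_checker = None
```

The point is not the specific library but that, with open weights running on your own hardware, this kind of guard is advisory rather than enforced.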

Leslie Miley: I would like to think that that will always be the exception and not the rule, and that will always exist, but it won't scale. That is my hope. I think that we have a lot of evidence that it won't scale like that. Will people use it to do that? Yes. Do people use 3D printers to print gun parts today? Yes. Do you ban 3D printers? No. Do you try to install firmware in them so that they can't do that? I don't think that's even possible, or somebody will just figure out how to rewrite the microcode. But I think the great part about it is it just doesn't scale. If it doesn't scale, then yes, you'll have it, but it won't be at a Meta level, it won't be at a Google level, it won't be at a Twitter level.

Roland Meertens: Are you not afraid that at some point some company will create a "bad GPT" which will actually answer all your unethical questions, but at two times the price, and people will switch to that whenever they want to create something evil? Or that people will switch to bad GPT in general, because it gives them an answer every time instead of always lecturing them that it's just a large language model?

Leslie Miley: I mean, we shut down Silk Road. We have ways to go after bad actors, and I think it's always a game of cat and mouse. Like I said, being in safety and security at Twitter, I used to say we had to play a global game of Whack-a-mole, and the only way you could even begin to make a significant difference is with an army of trained octopuses playing the game. Because they're just popping up everywhere and we'll always be behind, because as ubiquitous as these tools become, and as easy as they are to use, anyone can do it. There used to be a higher bar to entry: computers cost so much that you didn't do this, and then fast internet access was a bit too much. But now a lot of people can do this development on an iPad, or on a tablet, or on a $150 Chromebook, or an $80 Chromebook for that matter.

Then you have labor costs that in some countries are just ridiculously cheap, or people just sitting there. The Internet Research Agency in Russia really showed us what a troll farm could do, and that troll farm is now being replicated all over the world. People just throw them in. So my whole point is you will always have bad actors. Bad actors will always find a way to exploit the tools. What we have to look at is what type of enforcement we have that will limit the impact of that. I think we've gotten better at that over the years. But we're still going to have the dark web. You're still going to be able to do things like this if you're sufficiently motivated. Yes, somebody will figure it out, or somebody will release a model that will allow you to create your version of an explosive or your version of something bad, and there will be a lot of navel-gazing about what we should do, and the answer to the question, which won't ever happen, is that you don't let tools like that out.

Individual responsibilities [23:25]

Roland Meertens: So maybe if we bring it back to the audience of this podcast, the software engineers: what could developers introduce into their routine? Are there a couple of things we can do as a community to prevent this? Is there something we can do as individuals against AI bias?

Leslie Miley: This is interesting. I was just saying to startup founders, "Oh, talk to your customer," and you've got to get in front of your customer and talk to your customer on a regular basis. I fundamentally believe that, and I think that's part of the answer: you talk to your customer. But I think the other answer is talk to the people who could potentially be impacted, or who know what that impact feels like and looks like. We got better at mitigating abuse and harmful content on Twitter once we engaged the communities that were being impacted by it. That's what you do. You have to engage those communities. You have to engage them authentically. You can't just send them a form to fill out. You have to sit with them and you have to hear what they have to say, and you have to have empathy for the impact it has.

There are many people who are just like, "Well, you just don't have to look at it." It's like, "Oh, somebody sent you a DM, you don't have to look at that." It's like, yeah, if somebody ... I mean, I've gotten bad, terrible tweets. I've gotten hate mail. I've had people threaten me for some of the things I've said online, and it impacts you. If someone's creating a new product and they talk to people outside of their network who could potentially be negatively impacted, they will learn a lot. I think that's the part that is sometimes missing. In our rush to scale, in our rush to get a product out, we don't connect with human beings and we don't connect with our customers in a human way. We don't connect with the people who could potentially be impacted. In many ways, and I gave this example in the talk, people don't care.

One amazing example from the talk was this guy named Robert Moses, who designed part of the highway system in the United States so that bridges were too low for buses from minority neighborhoods to get to the beach. I mean, people will be bad actors, and they'll do it at scale if you let them. But imagine this dude wasn't a violent racist, or imagine other people had said, "Look at the impact that this is going to have." Maybe they would have said, "Hey, we're not going to listen to this guy. We're going to do something different."

Roland Meertens: Maybe that's another problem: people are often trying to improve the entire world at once, which in practice often means the rich people in America who can afford your app. For example, whenever I'm in Silicon Valley, I'm always so disappointed that these people are trying to improve the entire world, but they can't even solve the homelessness of the person in front of their office.

Leslie Miley: Having lived in San Francisco for the last decade, that resonates, and it's almost as if the problem of the unhoused in San Francisco portends the downfall of tech, because as the homeless problem has become more acute, tech's issues have become more acute. I think you're right. If you can't take care of your house, how can you take care of the world? And San Francisco is tech's house. For somebody who loves it (I'm a native, this is my home), I'm disgusted. I'm not disappointed, I'm disgusted. I'm disgusted with our political leaders. I'm disgusted with our business leaders who have put their time and effort into extracting money and wealth out of a system, at the expense of the humanitarian crisis that runs on the streets of San Francisco on a daily basis.

I mean, I would love tech to say, "Hey, let's work with cities and organizations that are measurably solving this problem and amplify that," and then maybe the humanity we get in touch with will help us build better products.

Roland Meertens: I'm just going to repeat the same question again, but are there specific things where you say developers should do this? Should they vote with their time? Should they raise problems?

Leslie Miley: Take a humanities course. Somebody said that. Take a humanities course. What can you do? This is not a technical problem to solve. This is a human problem. The question is how you get more in touch with your humanity. When was it ever okay that you had to step over someone to go into your office? But people do that every day in San Francisco, maybe not so much today with San Francisco not going back to the office, but we're okay with that, and that's the problem. I just don't get how we can be okay with saying, "We're changing the world," when there are people outside who are in crisis whom we have no desire to address. It's hypocritical, and there's a cognitive dissonance there that I can't get past. And I'll bring this a little closer to home. As I said, I'm a native, but right now I have packed up my place, put all my stuff in storage, and I'm out of San Francisco.

I actually do not have an address in California for the first time in, I don't know, 20 years. Because I was just like, "I can't solve this," and it's an assault on all of my senses, and I'm paying taxes in a place that is not addressing the problem. I've watched wealth get extracted by tech for the last 20 years without measurably addressing this problem. Some people have done something. I give Marc Benioff credit for sponsoring an initiative that raises $300 million a year for unhoused services in San Francisco. That's part of addressing the problem. Whatever else you want to say, that's a huge, huge initiative he took on to try to make the city he's built a big giant tower in more livable for everyone. We need more of that, though, because it's not just Benioff, it's Bob; it's not just Bob, it's Mary; it's not just Mary, it's everyone who lives there who has a say in this.

I don't want to spend too much time on that, but I think developers and product managers and co-founders have a responsibility to understand the impact of what they're doing. If they don't understand, if they don't think they know, they need to go and learn. ChatGPT could probably help you. Personally, I'd rather you go out and talk with someone. I'd rather you go out and talk with people who are impacted by these types of problems and learn what that impact is, and not just ask, "Hey, can we get product-market fit?" Because sometimes product-market fit isn't what you should pursue. Maybe what you should do is ask what the impact is going to be and build a product that makes it harder for people to be bad actors.

Assholification of Technology [29:41]

Roland Meertens: Then, last but not least, to put it a bit more broadly: you also wanted to say something about how the bigger tech companies are behaving nowadays.

Leslie Miley: Oh, gee, we could just go on and on about this. Silicon Valley Bank is an interesting story, because here was an entity that almost every startup co-founder would tell you, "This bank had our back. This bank had our back. When we needed funding, we got debt financing, we got bridge funding, we got this, we got ..." Silicon Valley Bank was the savior for many, many co-founders and for many startups, and Silicon Valley Bank was killed by the very people it helped. I mean, that's crazy town. It's like, "Hold it."

Any number of Peter Thiel-backed companies probably got some bridge financing from Silicon Valley Bank. Yes, companies have to do the right thing, and if their money's there and they think they're not going to get it back, maybe pulling it is the right thing for them. But the other side of it is that the collective wealth of the people who were tweeting about Silicon Valley Bank was enough to keep Silicon Valley Bank afloat, which to me is, once again, hypocritical in tech. I call it the assholification of tech these days. I think there are people who are really just in it for themselves, and they will burn down the house they're in. Or if they're not burning their own house down, they'll burn down the house that the people they say they support are in, which is almost as bad.

I don't know how we got there. It looks to me so much like the financial crisis that hit New York banking in the '80s and '90s, where people just fell upon each other like a pack of jackals. Is that what's happening in Silicon Valley? Is this who we've become? I guess the answer is yes, if you're ready to take down the bank that may have helped you, which many people did. Then I ask, what's the consequence of this type of behavior? What's the consequence of Elon Musk lying his way through Twitter? He's like, "Oh, we're only going to lay off this many people," and he lays off more people. Or, "We're going to give you fewer ads," and, well, I haven't gotten fewer ads. And he treats people like crap, and they just get fired and don't even know they've been fired, and he abuses somebody with mobility issues online and makes fun of him, and there doesn't seem to be any blowback.

I mean, he is valuing his company at $24 billion less than he just paid for it, so maybe that is the blowback. But I wonder if that bad behavior is being mimicked in other ways. People are now kind of flipping the bit and saying, "We need to treat workers like workers, and that means they need to be in the office. That means we need to be able to keep track of their time." Are we just turning into a bunch of assholes? I kind of think we are. I mean, Marc Benioff said this, I'm going to bring him up again: "There are a lot of tech CEOs right now who are asking if they should get in touch with their inner Elon Musk." I'm like, "Ooh, that's rough," and I think he's projecting, right? Because he's doing a lot of the same things. He's telling workers that they have to come back, and he's laying off people, and he wants to score people's productivity.

Then other companies are doing the same thing. Mark Zuckerberg, some of his statements, and you could see it. Early on he was like, "Some of you shouldn't even be here." He said that last year, all the way to his year of efficiency. I'm like, "Well, if it's a year of efficiency, what about that $10 billion you just spent on the Metaverse? That didn't seem too efficient. But you are essentially telling people they don't have a job anymore because of your mistakes, and you are unapologetic about it."

Roland Meertens: That seems like a great way to end the podcast. Thank you very much, Leslie, for being here today.

Leslie Miley: No, thank you for having me.

 

Note: Roland Meertens works as a Machine Learning Scientist at Bumble Inc. The views he expresses here are his own, and not those of his employer. 


More about our podcasts

You can keep up-to-date with the podcasts via our RSS Feed, and they are available via SoundCloud, Apple Podcasts, Spotify, Overcast and YouTube. From this page you also have access to our recorded show notes. They all have clickable links that will take you directly to that part of the audio.
