Transcript
Leong: Today, we're doing a panel on stopping online harassment, or using tech to stop online harassment. Basically, what the hell do we do about online harassment? So today I have gathered a group of lovely ladies here to talk about that very subject. First of all, I would like to apologize to Leigh Honeywell, whose name I mispronounced this morning, so it was very embarrassing. So Leigh is the CEO of Tall Poppy, which is a very exciting startup, which is going to be providing protection from online harassment to employees as a benefit. Is that correct? Cool. Next up we have Kat Fukui, who is a product designer on GitHub's Community and Safety team. Small plug, our team is pretty great. So she helps me design safety from the beginning to the end of a feature, making sure that all products that we make are consensual. Last up, we have Sri Ponnada, who is an engineer at Microsoft. This is not a Microsoft talk, I swear. And she is an engineer. You are a little bit outnumbered, I am sorry. And Sri is an engineer at Microsoft who made an app to help communities get a little bit closer to their [inaudible 00:01:23] and to give them a little bit more of a voice rather than Google saying, "This is what your page is going to look like."
What Is Your Definition of Online Harassment?
So with that, I'd like to start off the first question which is for anybody who would like to answer, what is your definition of online harassment?
Fukui: Hello. I guess I have the mic, so I'll start. My personal definition of online harassment is the targeted abuse of features on a platform, using technology in ways that may not have been accounted for. Usually, there's a pattern to it, whether it's the method of abuse or the type of people that are being abused with the tools and the technology. Yes, I think I'll let others speak too.
Honeywell: That's a super good and very like nuanced definition. Wow. So I think of a couple of things when I think of defining online harassment. One of the recurring experiences in working with people who've experienced online harassment is that they don't necessarily call it harassment. A couple of terms that have been useful are just negative online interactions or negative social experiences. One of my favorite retorts when someone is being a jerk to me on Twitter is telling them to go away and then when they don't saying, "Wow, you have really poor interpersonal boundaries." So I think just thinking of online interactions that are not positive; there's a smooth gradient from, "Ooh, someone was mean to me on Twitter" all the way into actual threats of violence and other things that potentially fall outside of the realm of like protected free speech. But I think thinking of it in terms of features and misuse of features is a really good perspective.
Ponnada: Yes. I guess, first, both of what you ladies touched on, and also lately I've been thinking about what being online even means. Your phone- what if you were dating someone and they just consistently texted you, because now they can? It's not like back in the day where they had to write a letter; they're not going to write a billion letters in a day, but they whip out their phone and they're like, "Let me bombard you with texts". So like that, or also, how are companies harassing others? Harassing me with ads. I try to go on Instagram to see what my friends are up to and then I just see a ton of ads and I'm like, "Why is this here?" I never asked to see this. Or just taking our data, selling it and, I don't know, doing things... the people putting the technology out there themselves abusing their power. So yes.
Honeywell: I think that connects it to- has anyone here read Sarah Jeong's book, ''The Internet of Garbage?'' A few people. It's really good if you haven't read it yet. It's, I think, a dollar on the Kindle store, or you can download it from The Verge, I believe. Anyway, one of the salient points in that book is making a comparison between the technological evolution of anti-spam and how today we're thinking about anti-harassment. And I think there are often situations that blur the boundaries between those two things, right? There was a Twitter thread going around this morning where someone was complaining about having been sent recruiter spam that was very clearly scraped from their GitHub account. You know all about this problem. It's both kind of harassy and misusing features, but it's also commercial spam.
I think there's a lot of things that straddle that line between just straight up commercial spam and you see that a lot with the enterprise salesy, "I'm going to follow up in automated fashion with you every two weeks until you tell me to screw off," which is the standard enterprise sales playbook. So figuring out what that line of- I like to call it the line between appropriate hustle and inappropriate thirst.
Leong: #thoughtleadership right there. So you mentioned that this is becoming normal practice now, which is somebody will non-consensually take your contact information and then do something with it that you don't expect. And it crosses some boundaries for you as a person, you as a digital citizen. Why do you think that there is such a problem with harassment online?
Boundaries
Ponnada: I want to start this one if that's okay. Yes. So you talked about boundaries, right? I feel like with the rise of technology, and that having become our primary form of communication, for the most part people don't really talk to each other anymore. And I noticed that in this room this morning. I was chatting with a gentleman and I asked him, ''Is this the Seacliff room?'' And he was like, "You know, I don't know." And then he asked one of the volunteers and he was like, ''Do you know if this is Seacliff?'' And the volunteer said, ''Let me look it up on my phone,'' or was like, "Oh, you can find it on the app". And that's fine, whatever, no harm, no foul. But why is that the first thought- "Oh, why don't you just look it up on your phone"- rather than just talk to this person? Maybe they're just not developing those kinds of interpersonal relationships in real life or setting those boundaries for themselves. And then that just kind of translates online.
Fukui: Oh yes, sure. Piggybacking off of boundaries, when I think of online harassment, I think it's still harassment. Online communities are still communities and in real life we have those boundaries pretty set. If I called Danielle a jerk or something, I would never say that …
Leong: I probably deserve it.
Fukui: … then that's not cool. And we have ways to talk about that and we have ways to combat that, especially if it's physical violence. But in online communities, we don't have these standardized, open frameworks for how to deal with that kind of stuff. If I called someone a jerk online, we wouldn't treat it the same way as we do in real life. And we haven't come up with a standard way to create those boundaries across the technology that we build. So that's how I've been seeing it lately: online harassment is real harassment, and it's just the same in person and on the internet.
Honeywell: I literally have a slide in our startup pitch deck about how the Internet is real life. It's one of the bullet points. In having done longitudinal work over the past decade with people facing online harassment, one of the things that comes up a lot in these conversations is, is this some kind of new flavor of bigotry or new flavor of misogyny that we're seeing? And I think fundamentally it's not. I think what we're seeing with online harassment is the same negative interpersonal interactions and biased interactions that have always existed in our culture. They're just in public; they're just happening in a visible way.
I think of all of the stories of men not understanding how much street harassment was a problem until they walked at a distance from a female partner and observed what was happening to her, kind of thing. It's sort of that effect, except for the entire internet when it comes to gendered harassment and racialized harassment and all of these kinds of things where this shit has always existed. Excuse my French. We can just see it now and it's not deniable.
Although it's sort of funny, because one of the great themes in many harassment campaigns, one of the sets of tactics that I've seen used, is that people will be experiencing threats and harassment and stuff, and when they talk about it publicly, like, "Wow, I had to cancel this talk because of bomb threats" or whatever, people will be like, "It's a false flag, you're just making shit up." So even though it's happening in public, there's this real crazy-making thing of people still trying to deny it even when it's super visible. So it's like Sandy Hook Truthers, except for harassment.
Ponnada: But what you said about communities, and us building communities online in this entire world- I feel like it's also- I don't want to say empowered, because that just doesn't feel right- but yes, given these hateful groups opportunities to connect …
Honeywell: Emboldened.
Ponnada: … Yes, emboldened them, so that they can connect with each other. And maybe if you live somewhere super liberal like Seattle, you're like, "Oh, I can't really voice these thoughts in person because people are going to shame me," like you were talking about. Then you go online and you're like, "Yeah, there's 10,000 of these folks online and I can just say whatever I want." I think there was a veterans' group or something a while back on Facebook and they were posting- yes, the Marines. And just these kinds of incidents that you find out about, and it's just scary.
Marginalized Communities
Leong: So then that brings me to my next question. You mentioned- I think Sarah Jeong actually talks about this- that a lot of harassment tends to be gendered, and it tends to be against women, against non-binary folks, against LGBT folks. It's also very racialized as well. So if you are a black trans person on Twitter, it is going to be terrible, because we don't have systems in place for this. So what are some marginalized communities that tech leaves behind, and how are these communities impacted by the role of tech in their lives?
Honeywell: I think one of the current situations that comes to mind around this is there's currently a major issue where Twitter is banning trans people for using the term TERF, which for those who aren't familiar stands for Trans Exclusionary Radical Feminist. There's a segment of folks who call themselves feminists who are super not cool with trans people, and harass and stalk and target them online. And there have been coordinated flagging and reporting campaigns against trans people, particularly trans people in the UK, because this TERF-dom is very much part of mainstream politics in the UK, including some prominent politicians and opinion writers and stuff.
So the thing that this brings up is, these abuse reporting mechanisms that do exist are being weaponized against a marginalized population who are simply using a descriptive term to describe people who are advocating against their rights. These trans exclusionary so-called feminists are basically weaponizing the reporting systems in order to target trans people. And I think, it brings up some of the nuances of abuse reporting. Why can't Twitter solve the Nazi problem, right? They've solved it for Germany and France where they're legally obligated to. All of these systems are double-edged swords and can be weaponized against marginalized groups too.
It's this tricky needle that everyone is trying to figure out how to thread, and people are trying to sprinkle some machine learning on it. But a lot of it actually comes down to human judgment. And sometimes those human judgments are not transparent and end up with stuff like what's happening to trans people on Twitter.
Fukui: Yes, I can speak next. So actually, before working at GitHub, my first job out of school was working on a platform to raise the visibility of precarious workers. Precarious laborers? So for example, people who work on farms. The Marriott strikers are who I would consider precarious laborers. And when we talk about groups that are being left behind, I also want to highlight the socio-economic disparity.
Leong: [inaudible 00:14:44] words. What does that mean?
Fukui: There is a huge gap in income in Silicon Valley and across the United States and internationally, but I think it's very, very glaring when you come to San Francisco. We have a huge homeless problem that is not being addressed. I also live in Oakland, and that is also extremely problematic. When I think of that kind of technology, I think we have to be thinking about how we make our technology inclusive for people who can't exactly afford it, or don't have the time to.
So when I was working on that platform, called Stories of Solidarity, the technology that we were leveraging was actually SMS texting, because most people who are precarious laborers have some sort of phone that can text. And it is used pretty commonly as a way to spread information really quickly. Like, "Oh my gosh, there's going to be a raid this Monday at the farm, we need to get the word out for undocumented workers." So whenever I'm thinking of technology that we build, I still want to think about the accessibility and what kind of tiers of technology we can build for others who may not have access to it. People who experience harassment in their real life are most likely going to experience it online too. And I definitely want to think about our responsibility in tech to accommodate people across the income disparity.
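For readers curious what that kind of SMS fan-out looks like in practice, here is a minimal sketch. The talk does not describe Stories of Solidarity's actual stack, so the use of Twilio, the credentials, and the phone numbers below are all assumptions for illustration.

```python
# Minimal sketch of broadcasting an alert over SMS, assuming the Twilio
# Python helper library. Credentials and numbers are placeholders; this is
# not the Stories of Solidarity implementation.
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

def broadcast_alert(message: str, subscriber_numbers: list[str]) -> None:
    """Send a short alert to every subscribed number; SMS works on nearly
    any phone, which is the accessibility point made above."""
    for number in subscriber_numbers:
        client.messages.create(
            body=message,
            from_="+15550000000",  # placeholder sending number
            to=number,
        )
```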
Ponnada: Wow, tough act to follow, tough acts. But, yes, you're absolutely right. Talking about the divide that exists between the kinds of technology that's available to people- are we focusing our work on building websites, or are we just making iOS apps? Are we making things for Android? How are we bridging these cultural and knowledge gaps and allowing people to access the internet, and to access the knowledge and information that we have, so that they can also be part of this new digital society?
And the other thing is, even women in the tech industry face this being left behind experience because it's in the news. HR tools have bias against women. I'm not going to name names, but we all know and it's just amazing. There is no one that isn't being left behind. The people creating this, this kind of environment, the people that are building these tools and not thinking about what they're doing, in a way are being left behind too because they don't know what they're doing. How do we bridge that? How do we let them know that, “Hey, you're a douchebag but also, yes, not.”
Honeywell: Just to link it back to specifically online harassment- I'm really grateful that you brought up the socioeconomic issues. In the work that I've done over the years with people facing online harassment, either in the internet-jerks kind of environment or as a result of domestic violence or intimate partner violence, there's a real dollar cost to the experience of being harassed online. Say it's a domestic violence situation and your ex has put spyware on your computers and you don't have the technical capabilities to get that removed; you need to just go and buy a new computer or buy a new phone, and that's a significant dollar cost as a result of online harassment. And you see it also with the various privacy services, or even stuff as simple as, when I work with people who are coming forward as #MeToo whistleblowers or stuff like that, I want to get them all set up with a hardware security key. Those are not free, right? If we don't design safety into these systems, safety ends up becoming a tax on marginalized people. As we're thinking about the big picture and how it connects into other social issues, I think it's the opposite of cui bono, right? It's like, who pays the cost?
Safer Products
Leong: And that segues into my next question. If you could do anything, how would you make safer products? If you could take any tech product and make it safer for marginalized people and which then makes it safer for everybody, what would you do? Is it too broad?
Honeywell: I mean, I have like 10 answers. Sorry, I'm giving it to someone else so I can figure out which one I want to go for here, you know?
Ponnada: Maybe. So I guess a project that I did at Buzzfeed was- well, you know, that's one of the platforms where I first became aware of how much online harassment exists. People share their stories, and that's awesome, and it spreads like crazy around the world. But if you read the comments, they're so hateful. And I've had a Twitter thing that went viral about my immigration story, and just the reactions from people- the kind of emotional impact that has on the individual who's trying to spread awareness and create a change by using online platforms as an avenue, it's hard. It's draining. And when you asked about marginalized people that are being left behind, it's preventing us from sharing our stories. It's preventing us from building communities, right?
I worked on this Hackathon project for our customer support team that essentially just loads all the comments from top trending articles and runs a machine learning algorithm on them to identify if there's any kind of hateful or homophobic, whatever kind of speech, and then allows this person to look at the context of what has been said so that they can see, “Oh, is this just somebody like joking around? And they might've said the f-bomb or is this actually something that we need to take action on?”
Just thinking about how we can build these tools- I think that's what I would like to see more of on Twitter. And that's something I noticed too: why doesn't that exist? And I think LinkedIn has done a fairly decent job of that. Maybe I just don't see it; I don't really see those kinds of comments on there. And while we're building this technology, also remembering that we're not building technology to replace people- so how can we empower others to be part of this revolution and help create safe spaces online?
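As an aside for readers who want to picture that kind of pipeline: below is a minimal, hypothetical sketch of a "score comments, queue the worrying ones for a human with context" flow using scikit-learn. It is not Buzzfeed's actual hackathon tool; the toy training data, threshold, and function names are all assumptions for illustration.

```python
# Sketch of a human-in-the-loop comment triage flow: a simple classifier
# scores comments, and anything above a threshold is queued for a moderator
# to review with context, rather than being removed automatically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = abusive, 0 = not abusive (hypothetical data).
train_comments = [
    "you are garbage and should quit",
    "great article, thanks for sharing",
    "go back to where you came from",
    "i disagree with the premise here",
]
train_labels = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_comments, train_labels)

def triage(comments, threshold=0.6):
    """Score each comment; anything above the threshold goes into a queue
    for a human moderator, who sees the comment and can pull its thread
    for context before deciding whether to act."""
    review_queue = []
    for comment in comments:
        abusive_probability = classifier.predict_proba([comment])[0][1]
        if abusive_probability >= threshold:
            review_queue.append({"comment": comment,
                                 "score": abusive_probability})
    return review_queue
```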
Honeywell: I have a very concrete and specific thing that I want to exist in the world. I'm throwing it out here for everyone; I've been starting to socialize this with different tech companies. But it would be really cool if there was an OAuth grant- a standard- that said, "These are the security properties of this account." So I'm trying to help Danielle with securing her accounts, and she grants me permission to see: when did she last change her password? Does she have 2FA turned on? But not anything else. I don't want to see your DMs, nor do you want me to see those things. But to be able to introspect the security properties of your accounts in order to help give you guidance, without granting any other permissions. Let's be real, there has not been a lot of innovation in the consumer security space in our lifetimes, basically. Remember antivirus? That was the last major innovation in consumer security, which is fairly tragic.
Leong: Twenty five years ago.
Honeywell: At least. Yes. Oh gosh. Let's not talk about McAfee please. Speaking of online harassment.
Ponnada: So many notifications.
Fukui: Pop-ups.
Honeywell: Oh, my gosh. Anyway, so that's my big idea that I'm putting out in the world. If anyone wants to talk to me about it afterwards, come find me.
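For readers who want a concrete picture of Honeywell's idea: below is a purely hypothetical sketch of what a narrowly scoped "security properties" grant might return. No such scope or endpoint exists today; the scope name, URL, and response fields are invented for illustration only.

```python
# Hypothetical sketch of the "security properties only" OAuth grant described
# above. Neither the scope name nor the endpoint exists; this just shows the
# shape such a standard might take.
import requests

SCOPE = "account_security:read"  # hypothetical, narrowly scoped grant
ENDPOINT = "https://provider.example/oauth/security-properties"  # placeholder

def fetch_security_properties(access_token: str) -> dict:
    """Return only account-hygiene metadata: no DMs, no posts, no contacts."""
    response = requests.get(
        ENDPOINT,
        headers={"Authorization": f"Bearer {access_token}"},
    )
    response.raise_for_status()
    # An example of what such a response could contain:
    # {"two_factor_enabled": true,
    #  "password_last_changed": "2018-06-01T00:00:00Z",
    #  "recovery_email_set": true}
    return response.json()
```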
Fukui: I don't think I have as specific an answer. I guess I love putting out dumpster fires, but I think I'm tired of doing that. I want to be more proactive in how we encourage and empower people to become better online citizens and make the people around them better as well. So a lot of the work that Danielle and I have been working on is rehabilitation. We've seen some interesting studies that if people have nefarious behavior or content or something, and they feel like they are being rehabilitated in a fair way- they know what content was nefarious, why it's not allowed here, and the enforcement is fair- then they will correct their behavior, and that has a cross-platform effect. If we can rehabilitate on GitHub, we found that people will actually correct their behavior in other places like Twitter, Slack, Discord, which is extremely fascinating. So I would love to see tech figure out those ways of encouraging people to be better and making it easier to be better, like letting bystanders help create safer online spaces.
What Can Social Platforms Do to Build Healthy Communities?
Leong: Which then segues into my next question. Thank you. Convenient, just totally made up based on the context of this. When we're talking about online harassment, it's really easy to be like, “Well, we should just stop doing this. We should ban hammer all of this. This entire country is just garbage, let's just get rid of it and not allow them on our platform.” But as Kat says, you then miss out on a lot of different conversations. You lose that ability to rehabilitate. So what are some things that social platforms can do to build healthy communities and have these kinds of nuanced discussions? Leigh is laughing at me. So I'm going to give it to you first.
Honeywell: I think one of my favorite blog posts of all time dates from approximately 2007, and is Anil Dash's ''If your website is full of assholes, it's your fault.'' It's just such a good title. But the fundamental thing is, it made my heart grow three sizes to hear that you guys are actively working on rehabilitating shitlords, because I think there's just so much important, even low-hanging fruit, work to be done there of gently course-correcting when people start to engage in negative behaviors. I think some of the video game platforms have been trying different ways of banning or timing out people to encourage better behavior over time. I think there is a large, hopefully very fruitful, set of low-hanging fruit to be working on in that space. So I'm really, really excited to hear that you're working on it.
I think the cautionary note is we need to be careful to not put that rehabilitative work on people who are already targets and are already marginalized. I think a lot of efforts around online harassment over the years have been very- what's the term I'm looking for? Not predatory, but sort of exploitative. A little bit exploitative from the platforms, where they're like, "Let's get this army of volunteers to do volunteery things," versus actually paying and training professionals to do moderation, to do content aggregation, stuff like that. And particularly, we've seen a lot of stuff in the content moderation space; there's currently at least one lawsuit around the PTSD that content moderators have gotten- actual PTSD- from doing content moderation.
So I think when we're figuring out the boundaries of what we get paid staff to do versus what we get volunteers and community members to do, we should be thoughtful about the long-term impact. Are we engaging in proper burnout prevention and secondary trauma prevention tactics? I see this a lot within the computer security incident response part of the field, where there's this machismo, like "We're hackers and we're going to be fine and we don't ever talk about our feelings." But people get freaking real PTSD from doing security incident response, and nobody talks about that. So I'm excited, but also caveats.
Fukui: Yes, I totally agree that we should make sure that we're not putting the onus on marginalized folks who already have to deal with their identity and their spaces. It's usually two jobs, right? Being yourself and doing the job that you went onto a technology platform to do. So we should try and absorb that burden as much as we can. And something that definitely comes to mind is understanding what those pain points are. Plug for my talk later: I will be talking about how to create user stories and understand those stressful cases that happen to your users. And those are ways that you can unite a team on a vision: we need to solve this and we need to do it quickly. Because when negative interactions happen, we found that swift action is the best way to …
Leong: Visible.
Fukui: … visible and swift action is what's going to show communities that this behavior is not okay. It will not be tolerated. And it's way better for people's mental health when someone else is taking on that burden and they don't have to defend themselves.
Ponnada: Yes. Adding to that, building allies. So I really liked that you brought up how rehabilitation can course-correct people and just continuing to do that emotional labor in real life, because who we are as a person then translates to what we put online. Yes. I don't really have much to add, you guys covered it all.
Honeywell: I think the emotional labor piece just brought up something for me which is...
Leong: What is emotional labor?
Honeywell: Oh yes, sorry. Emotional labor is the caring -- I'm not explaining this very well. Caring labor, it's having to be thoughtful and perceive other people's feelings about a particular situation, and it'll maybe make more sense in the context that I'm about to give. Which is, I think it's really important when we think about building anti-abuse into platforms, that we know the history of trust and safety and how it's often perceived as a cost center and marginalized and underfunded. And again, that's where you get the burnout and PTSD from content moderation and stuff because it's seen as just being a cost, versus a necessary feature of a healthy platform.
The thing that emotional labor brought up for me is making that case; it is its own emotional labor to continually have to justify the existence of this function, and that wears a lot of people out. So it helps when we can shift into "this is how we make our platform actually good for everyone," or at least how we retain people.
Ponnada: Yes. What you said just now made me think of it's not just the moderators that experience this kind of stress or burnout, right? Even the people that are actively building these tools, us, and talking about it, thinking about it all the time, we have that burnout, right? I have that burnout sometimes and I need to kind of disconnect because then it's just like technology everywhere. And remember that there is a world outside of this black hole on Twitter where people in my life love me and care about me and I need to go there and reconnect so that I can continue pushing the boundaries.
Leong: Cool. So that is all of the official questions that I have now. I am now opening it up to the floor. Please remember, if you have a question, it has to start with a question. Yes, I know it is revolutionary, but if you launch into a very long manifesto, I will cut you off. So if you have a question, please raise your hand. I will bring the mic to you.
Tools Used for Abuse
Participant 1: I'm just going to use this time to talk about myself. I'm kidding. So Leigh, you mentioned that a lot of the tools that are used to prevent abuse can also be used to inflict abuse. And I think we've seen that a lot, and I wonder, how do you adjust for that? Because it seems like no matter what you do, especially at scale, those tools will be used for abuse. What are some tips, tricks, techniques- how do you account for that?
Honeywell: I think there's a couple of ways to think about that. One of them is to, as you're designing the tools, be red teaming yourself; be constantly reevaluating what are the ways - I'm approaching this as a good person who wants to do right in the world, but how might I think otherwise? And if that's not your mode of thinking, then engaging with someone for whom that is their mode of thinking, whether it's your company's red team or an external contractor, a specialist. I think the other piece of it is wherever possible building transparency into the tools, so that people can know what were the criteria? And obviously in any sort of anti-abuse system, there's a tradeoff between transparency and gaming of the system. So there's no silver bullet on this stuff, but balancing that transparency and the necessary secrecy of your anti-abuse rule set is one of the important things to strike the right tone on.
Participant 2: One of the things that you brought up really briefly was awareness and the lack thereof and especially how your app or technology could be used to marginalize others. So my question is, what are techniques or tools you found that have been effective to see things through a new lens?
Leong: Deep soul searching. Damn. That was deep. Kat?
Fukui: Actually a really good workshop that I've done with the community safety team at GitHub is just drawing together and understanding user stories. So one that we added recently was a user who's trying to escape an abusive relationship. And what are the problems that end up inevitably happening, what part of the technology is holding them back from success? What does success look like for them? What are the stressful feelings they're experiencing? And those are the ways that we can highlight cases that we may have not thought of. If you think of something simple as like leaving a comment, how could that be used to abuse someone that we may have not thought about? So user stories for us and we've just been collecting them pretty much …
A user story, at least in the context of our team and how we do that, is you define a user- so in this case, someone who is escaping an abusive relationship- and literally draw the course of their journey on your platform, either a specific workflow or something more general. What are their problems? That can be literally just drawing it out. I made the engineers draw, I gave them markers. It was super fun. So what are the problems they're facing? How are they feeling in this moment, and what does success look like for them? And I think what's really helpful is that a lot of people on our team have experienced this, either in real life or on the internet. So it's easier for us to empathize. But if you do not have that context, it's really important to talk to people who have, and gather that research and those resources, and compensate them, because that is emotional labor. So pay people for that kind of work.
Leong: I'd also like to point out, tiny plug for our team, that we are a team of largely marginalized people, and it is because of our lived experiences that we are able to catch abuse vectors a lot sooner. Because if I'm using a service and it automatically broadcasts my location, for example, I'm not going to use it. I've had a stalker before; I don't want my location out there. And so that's an easy-to-close abuse vector, because I have these lived experiences. If you are trying to close a lot of these abuse vectors, ask the people around you. Ask marginalized people in their communities and pay them, and say, "What do you see in this app?" "Do you feel safe using this?" is an excellent question. Do you feel safe using this? I guarantee you somebody with a different lived experience from you is going to be like, "No, no. This doesn't strip the location data from this photo, I don't want to use this, it's creepy." So asking people in their communities, "Would you use this, do you feel safe using this?" is extremely important when you're doing this.
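As an illustration of the check Leong describes ("does this strip the location data from this photo?"), here is a minimal sketch of stripping EXIF metadata, including GPS tags, before a photo is shared, assuming Pillow. It is not any platform's actual pipeline, just the idea.

```python
# Minimal sketch: re-encode the pixels into a fresh image so EXIF tags,
# including GPS coordinates, are not carried over to the uploaded copy.
# Note: this simple version converts to RGB and drops transparency.
from PIL import Image

def strip_metadata(in_path: str, out_path: str) -> None:
    """Write a copy of the image with no EXIF metadata (and thus no GPS)."""
    original = Image.open(in_path).convert("RGB")
    clean = Image.new("RGB", original.size)
    clean.putdata(list(original.getdata()))
    clean.save(out_path)
```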
The Balance
Participant 3: You spoke a little bit about the importance of not placing the onus of rehabilitation on marginalized people. The particular context I took to be content moderation. It seems to me that with a lot of marginalized communities, encouraging them or empowering them to be their own advocates is an important component somewhere along the line. I guess the question that I have is, how do you walk the balance between encouraging them being their own advocate, without inadvertently placing the onus of rehabilitation on those communities?
Honeywell: That's a super good question. I think it is fundamentally a needle that you have to thread, because you want "nothing about us without us," but you also want to not place the entire burden on the set of people who are already marginalized. One of the ways you do that is by compensating people for their work- so supporting creators and thinkers and writers and advocates and activists who are members of marginalized groups, whether that's because they have a Patreon or they work with a nonprofit, or otherwise signal boosting their work. I think one of the sets of experiences this comes from is the constant "educate me" that people often have an entitled attitude around: "Well, if you want me to support your cause, you should put all this energy into educating me." So figuring out how to not do that. In my own personal experience, I get a lot of "How do I fix diversity at my company?" requests. "Can I pick your brain over a coffee about how to hire more women?"
Leong: $800 an hour.
Honeywell: I mean, no. I started reconfiguring how I interacted with these requests while I was still on an H-1B, so I unfortunately couldn't charge people $800 an hour, because I was on a visa. But what I ended up doing was I would answer patiently and comprehensively a number of requests, and I basically collected all of my answers into a page on my website that was like, "I get asked this question all the time, here's a bunch of resources, please go read it." And it's actually been really interesting. I do still get one of these questions now and then- and now I have a green card and I could do that consulting, but I just don't have time.
What I get back when I get these requests and I say, "Hey, thanks for asking. I get asked this quite a bit, so I have assembled some resources. And there are also some consultants linked from there that you can engage to work on this in more depth"- I've been pleasantly surprised. I haven't gotten any "Well, can you just explain it to me in little words" lately. I think I hit exactly the right note on the page. It's hypatia.ca/diversity if you want to see how it's set up.
Leong: I'll tweet that out later.
Honeywell: Yes. How do you effect the change you want to be in the world, but also do it in a way that's scalable, right? And having those one-on-one conversations over coffee about, “Yes, you should have a replicable hiring process that is as blinded as possible and not only recruit from Stanford.” I don't need to have that conversation ever again basically, because I've written it down, it's written. I've had positive experiences so far with the gentle pushback towards resources. So when you can build that kind of thing into the system, I think it can help with threading that needle.
Leong: Signal boosting writers of color is also pretty important, because then it's not just, "Well, here's my take on this"- it's somebody who has the lived experience and can speak about this topic better than me. Giving somebody a platform to help explain the very nuanced topic you're discussing is another thing I've found that helps. Someone over here.
Participant 4: What would be the short rebuttal to the Napster defense of online platforms? The defense that, "Oh no, we're not a content provider. We're providing a way for people who want to share content to connect with those who want to view it," in the same way that Uber says it doesn't run a taxi company or Airbnb doesn't run a hotel chain. How do you counteract that argument?
Honeywell: There are a couple of different pieces of this. CDA 230 is the relevant legislation in the US- and I'm not a lawyer, this is my best understanding; I just worked with lawyers at the ACLU- it governs how platforms can or can't preemptively filter content, and it's a lot of why you see reporting and takedown practices on various websites rather than preemptive filtering. Fundamentally, I think the Napster example is really interesting because it surfaces the subtlety to all of these hate speech and free speech and First Amendment and platform moderation arguments, which is that there is no one singular definition of hate speech or free speech. There are various things enshrined in American law, there are various things in international law, and it varies from country to country. I'm Canadian and we have hate speech laws there.
We do very frequently set all kinds of content-based boundaries on the platforms that we operate. You can't post child porn, you can't post copyrighted content. What other boundaries do we set? Those are choices that platforms make, and those choices have consequences. But just hand-waving it away with "we're a neutral platform"- that's never been true, because you can't be neutral on a moving train, and this train is moving rather quickly.
Subtle Abuses
Participant 5: There are lots of examples we can see for ourselves on Twitter and Facebook. But I wondered if you had any examples of more subtle abuses of features, beyond comments on social media, that would be hard for us to think of as nice people- where we'd never think anyone would do that with our systems.
Fukui: Man, I just feel like that's a lot of GitHub. People go on GitHub to collaborate on software together. It's just code. We're just coding together. Everyone's just coding. Yes. But turns out there's still conversation and human interaction that goes around code; it's still written by humans, meaning you've got to be social. And more and more we're seeing that people view GitHub as a social network now which is really strange. Yes. So that means that our tools really need to scale for that. And unfortunately when GitHub was built, that wasn't the intention and you could clearly see that in the actions that we took. So I think even if you think, “Oh, this piece of technology is just for professionals, there's no way it could be abused.” Anything with user to user interaction will be abused, and you need to accept that and make sure that you're hiring the right people to tackle that from the ground up and build that framework. And I'd love to, in general, see companies be more open about that conversation, especially ones that aren't purely social. So hoping that we can continue that conversation.
Honeywell: Two subtle examples that come to mind from my experience, and that I've seen publicly. I'm not going to name him because he's like Voldemort, but a particular internet shitlord harassed a woman writer online by sending her $14.88 on PayPal. Those two numbers are significant in neo-Nazi numerology, and she's Jewish. So that's literally just using the amount of the transaction as a harassment vector. Another example along those lines of subtle misuse, although this is a little more in spam territory: when I worked at Slack, the reason you can't put invite text in Slack anymore is that people were using it to spam, and even when we would block URLs, people would write "sketchy phish site D-O-T com," spelled out. So there was always that arms race, and any service that allows people to stuff text in a field and send it to each other will at some point …
Leong: Or images.
Honeywell: Or images … Oh goodness. Phone numbers will end up getting abused. Yes. You have one as well.
Ponnada: Yes. I've heard this from a lot of women, that they've had people hit on them on LinkedIn. And it's like, "Dude, this is my LinkedIn, I'm here to get a job, not a husband- come on- or a wife or whatever." That is just so weird. Even on monster.com, where you post your resume. So this happened to me when I shared my story of immigration online and it went viral, and then I checked my email one day and I got an email from this guy who found my email address- I'm guessing from Monster, somewhere- and sent me a wedding proposal with photos of himself and his kid, and was like, "Hey, I saw your story. You seem like a really great girl. I want you to stay in the United States. I'm 40-something, I got divorced and I'm a really nice guy. Let me know if you're interested, I can get you a green card." And I'm like, "Wow." Yes, right? So the many ways in which women in particular have been stalked or harassed in real life are also translating into these professional networks as well.
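To make the "spelled-out domain" arms race Honeywell mentions above concrete, here is a rough, hypothetical sketch of catching "sketchy phish site D-O-T com"-style obfuscation with a regular expression. Real anti-spam systems are far more elaborate; this only shows the idea.

```python
# Sketch: flag text that smuggles a domain past a plain URL filter by
# spelling out or spacing the "dot" (e.g. "sketchyfish D-O-T com").
import re

OBFUSCATED_DOMAIN = re.compile(
    r"\b[\w-]+\s*(?:\(dot\)|\[dot\]|d[\s.-]*o[\s.-]*t)\s*(?:com|net|org)\b",
    re.IGNORECASE,
)

def looks_like_obfuscated_link(text: str) -> bool:
    """True if the text appears to contain a disguised domain name."""
    return bool(OBFUSCATED_DOMAIN.search(text))

# Quick checks of the heuristic:
assert looks_like_obfuscated_link("visit sketchyfish D-O-T com for prizes")
assert not looks_like_obfuscated_link("let's dot the i's and cross the t's")
```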
Leong: All right, we have time for one more question.
Machine Learning-Based Solutions to Content Moderation
Participant 6: I remember one of you talked earlier about machine learning-based solutions to scaling content moderation and stuff like that. As companies rely more and more on machine learning to scale these types of solutions, the models themselves may internalize systemic biases and then produce biased outcomes. So what are your thoughts on what companies can do, as we offload more of our moderation to these algorithms, to maintain fair outcomes?
Honeywell: I think that comes back to that same question around striking the balance between transparency and keeping the secret sauce secret, where there has to be a certain amount of transparency so that people can feel like the system is fair. And there also has to be an ability to request a review, basically. And that review-requesting workflow is part of what's really difficult to scale. We can do a pretty broad-pass filter on internet garbage, as Sarah Jeong would say, with machine learning. But where it's tricky is that crossover error rate of the false positives and the false negatives.
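To illustrate Honeywell's point about the crossover error rate: the sketch below shows the common pattern of acting automatically only at the confident extremes of a model's score and routing the ambiguous middle band, where false positives and false negatives pile up, to human moderators with an appeals path. The thresholds and labels are arbitrary assumptions, not any platform's policy.

```python
# Sketch of score-based routing: automate the easy extremes, escalate the
# ambiguous band (where the model is least reliable) to paid, trained
# moderators, and leave an appeal path for automated actions.
AUTO_REMOVE_ABOVE = 0.95  # confident enough to act automatically
AUTO_ALLOW_BELOW = 0.20   # confident enough to ignore

def route(comment: str, toxicity_score: float) -> str:
    """Decide what happens to a scored comment."""
    if toxicity_score >= AUTO_REMOVE_ABOVE:
        return "auto-remove (user may request review)"
    if toxicity_score <= AUTO_ALLOW_BELOW:
        return "allow"
    return "human moderation queue"
```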
Ponnada: So one thing that I feel very passionately about is hiring people from diverse backgrounds. Thinking about, are we hiring people with a psychology degree? My major in college was gender studies and I feel like coming into this industry, I've brought up points that people haven't really thought about. So just thinking about those things as well.