Project W: QA with Yaël Eisenstat

Davis Wright Tremaine LLP
As Head of the Center for Technology & Society, Yaël Eisenstat leads ADL's efforts to hold tech companies accountable for hate and extremism on their platforms. Project W’s Program Director, Emily Baum, spoke with Yaël about her work to make online spaces safe, respectful, and inclusive, and to lift up the voices and experiences of those most impacted by online hate and harassment. Here is a look at their conversation:

Q: Your career journey is different from the path taken by many. You've spent more than two decades protecting our democracy, including as an intelligence officer, diplomat, and White House advisor. How did you find a passion for online security?

Yaël: I spent my entire career before moving into this technology sphere grappling with some of our biggest global political challenges. I joined government before September 11th. I joined the intelligence community and later the foreign service with a true interest in better understanding our place in the world, in learning how different communities and cultures get along, and in finding shared interests. All of those things are what drove me into that world to begin with. And then after September 11th, I went further and further down this national security path, leading our counter-extremism work primarily in East Africa. I always really cared about the human side of these issues - about researching what would make certain communities, or certain people, susceptible to extremist messaging. What is the point at which you can still, hopefully, influence people before they become radicalized? That is where most of my work focused.

Fast forward to around 2015: I started seeing those same issues - how people were being influenced, how they were being manipulated, how they were being pulled onto more and more extreme paths - manifesting here at home. So I started researching why that was happening. And the more I dug in, the clearer it was that our online ecosystem - while maybe not single-handedly responsible for anything - was certainly contributing to that path. To be frank, I called it "radicalizing people" years ago and was told that I was ridiculously extreme for thinking that. But that is what really brought me into the world of information ecosystems and social media: how they were creating, or helping to push people toward, more and more extreme viewpoints - whether viewpoints they genuinely hold or just the way they express them - and, fundamentally, what that meant for the future of our country and our democracy. So that's what got me into that space.

Q: Technology has helped amplify both bigoted voices and voices calling for positive change. How can individuals or companies use technology to lend their voice to the call for positive change or the constructive side of the equation?

Yaël: So I will be very frank; I don't have the perfect answer for this, but I have thoughts. Part of the reason I don't have the perfect answer is that any technology, any information system, can be used for good or bad. Depending on the system - whether it's Twitter or Facebook, an online blog, or an actual news aggregator - they're all used differently. And the reason I point that out is that there is no one perfect way to use them for good. It is easy to use one of these platforms to rally people behind a cause, to raise awareness about an issue, to be a positive influence, to connect with other folks who are passionate about something you're passionate about. But with the flip of a switch, it can all go south if bad influences or purposely bad actors co-opt your message or start swarming, harassing, or attacking you. So there is not a perfect answer to this yet, and I don't know if there ever will be.

I use social media not only to find people who educate me - people I have learned from and connected with - but also to share my messages and to help people understand how social media works, including how it manipulates conversation. But I did stop using it as frequently, in part because the people who want to silence someone like me are succeeding. The more I get attacked, the more I get swarmed, the more my message gets drowned out by bad actors, the less I feel like engaging. So the answer here is figuring out how to find your communities online and how to use social media in a positive way, while recognizing that, especially if you start getting a message out there that threatens power systems, you may well face a battle where you're getting swarmed online. And that lends itself to a totally different question: how do you keep yourself safe online while having your voice?

Q: So folks should engage with the causes and communities they care about but try to avoid becoming bait for an attack?

Yaël: Sure. And recognizing that if you do become the bait for the attack, think ahead about how you will handle that. For example, I will never engage with trolls. I never engage with people who are purposely trying to get an emotional reaction out of me. This might sound very basic and kindergarten, like I'm explaining how to use the internet, but it is a huge part of it. If somebody is triggering your emotions, they are trying to get a reaction out of you - and that is how social media actually works. Once you fall for that trap, or you fall for that intentionally inflammatory engagement moment, then it can spiral. For me, whenever something gets me that angry or that upset, I take a step back. I take a deep breath. I say I'll come back to this tomorrow. And usually by tomorrow, all of that has been forgotten and you can move on to the next conversation. So, for me personally, I continue to work with people who are in the same space as me, who want to figure out how to improve our conversations online, and I choose not to fall for the trap of engaging in inflammatory rhetoric.

Q: Many platforms, as I understand them, are designed to get you to that emotionally engaged place so that you're spending more time on the platform and helping their metrics, right? The tech is sort of designed to get you sucked in. I'm curious: what do you think is the ideal role and/or partnership between the online space or platform and the user? What responsibilities do they have for their customers?

Yaël: So what's so interesting in this question is that we make this assumption that we are the customer, right? The question asks, "what is the responsibility between the platform and the customer?" And it's a very logical and important question, but the sad reality is that the way tech platforms, particularly social media, are constructed today, we are not the customer. The customer is usually the advertiser. It depends on exactly how the platform is monetized, but in a case where the customer is really the advertiser, the so-called user, if that's the word we want to use, is just - I hate to fall for the cliché that we're the product, but essentially, we are. We are the ones they're trying to sell advertising to. The implications of that were essentially the entire topic of my TED Talk.

It is important to understand that these companies did not intentionally set out to say "we want to amplify the most extreme content" or "our goal is to make everybody angry." I don't think anybody set out with that goal. But when you decide to train an algorithm, you train it with a goal in mind. Every algorithm is originally coded by a human, and those algorithms are coded with the goal of keeping you on the platform as long as possible so that they can collect your data, perfect the personalization process, and tell advertisers that they have the secret sauce on how to target you better. And that is a fundamentally important point. It is very different from the talking point that "they just want you online all the time to make you angry." It's not that. It's that they want you online all the time, period. It's not just anger; it can be extreme positive emotion, too. That's the one thing people often miss when they talk about this. Yes, there are many studies that show that rage and anger and clickbait keep you engaged, but so do extreme positive emotions. The problem is that being somewhere in the middle of these extreme emotions is not necessarily going to keep you scrolling. And if you are not still scrolling, they cannot continue to improve their personalization process, and then they cannot continue to convince advertisers that they can target you in a really precise way to sell whatever the advertisers are trying to sell.

I don't know if decentralization is the answer, or if a different business model is the answer, but I do know that if we want healthier information systems and online spaces, then we need to figure out a way for these spaces to thrive without that kind of surveillance advertising model.
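[Editor's note: To make the incentive structure described above concrete, here is a deliberately simplified, hypothetical sketch of an engagement-optimized feed ranker. It is not any platform's actual code; the fields, weights, and scoring function are invented for illustration. The point it demonstrates is the one Yaël makes: the ranking objective rewards predicted attention and ad value, with no term for accuracy, civility, or harm, so emotionally extreme content - positive or negative - tends to rise.]

```python
# Hypothetical, simplified illustration of an engagement-optimized ranker.
# Nothing here reflects any real platform's code; the weights, fields, and
# scoring function are invented purely to show the incentive structure
# described above: the objective is predicted time-on-platform and ad value,
# not accuracy, safety, or user well-being.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_dwell_seconds: float   # model's guess at how long you'll linger
    predicted_ad_clicks: float       # model's guess at ad revenue potential
    emotional_intensity: float       # 0..1; extreme anger OR extreme joy both score high

def engagement_score(post: Post) -> float:
    # The only terms in the objective are attention and monetization.
    # Note what is absent: no term for truthfulness, civility, or harm.
    return (
        0.6 * post.predicted_dwell_seconds
        + 0.3 * post.predicted_ad_clicks * 100
        + 0.1 * post.emotional_intensity * 100
    )

def rank_feed(posts: list[Post]) -> list[Post]:
    # Show the highest-engagement items first, regardless of their effect on the user.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("Calm, nuanced policy explainer", 20, 0.01, 0.2),
        Post("Outrage-bait hot take", 45, 0.05, 0.9),
        Post("Heartwarming viral clip", 40, 0.04, 0.85),
    ])
    for post in feed:
        print(f"{engagement_score(post):6.1f}  {post.text}")
```

Run as written, the outrage item and the extreme-positive item both outrank the calm explainer, which is the dynamic described above: both emotional extremes keep you scrolling, and the middle does not.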

And so, again, we are not the customer. But I do think that we have the power to demand more from the platforms because if we all leave, they have nothing to sell to their advertisers, and that's where our power does lie. We feel very powerless in the sense that we don't have a financial contract. If I personally leave Facebook, Facebook is not going to fall apart. But if people en masse leave Facebook because it's not what they want anymore, then they won't have the customer base to sell to the advertisers.

Q: Well, and I suppose when users walk, these platforms could go buy the platform their users moved over to, right?

Yaël: Possibly. That's part of where the whole legislators/lawmakers piece comes in, right? Antitrust is not my field of expertise, but if government regulators were resourced to adequately enforce the rules already on the books, these platforms shouldn't be able to simply buy up all of their competitors.

Q: We work with a lot of startup founders at Project W, and they are often facing endless to-do lists and fighting fires that are right in front of them today. But they also must address the impact their companies will have on society. Realistically, what can early stage startup founders do to set themselves and their companies up to be good actors?

Yaël: This is an important question. And part of the answer is going to be somewhat unsatisfactory, unfortunately. I've worked with a lot of startups. Before I went to ADL, I was sort of the in-house advisor in residence at Betalab, which was a Betaworks initiative to help pre-seed companies thinking about how to fix the internet. And depending on the business model and the funding model of these startups, if they’re seeking venture funding and want to scale to be a unicorn, I'm not going to have great answers for them because they are going to face pressures from their funders to put growth and scale ahead of all else. And founders will say, "right now we don't have the manpower or the budget to build the trust and safety team, or to bring on advisors."

Years ago, I was talking to a company and I said, here are the five things I could see that are going to go terribly wrong with this product. And the founder looked at me like I had just crushed all his dreams and then pushed me aside and was like, "Whatever, that's not going to happen, because I'm a good person, so therefore my product will be good." And now, fast forward to today and everything I said was going to happen in the space they’re operating in is happening. 

Just because you and your team are good people and believe you are building a good product does not mean that it won’t be used to harm others. There will always be bad actors who figure out how to use technology to further their aims. Being a “good founder” is not enough. You have to commit to putting in the time and work to ensure you are building, scaling, and monetizing in a way that doesn’t cause harm. And unfortunately, sometimes that commitment - making sure that your product is not manipulated by bad actors, making sure that you're not designing and monetizing your product in a way that specifically feeds into some of what we're talking about - means that you are likely not going to be in lockstep with certain investors who want you to grow at all costs. So part of it is up to the founders to consider what type of investor they're looking for. There are investors right now who are looking for really smart, interesting, safe companies. At an early stage, you can decide what kind of investor or business model you want to go after. So that's first.

Second, there are enough opportunities out there to get good advice that does not have to cost you a lot of money. I often hear from founders or startups, "we can't afford a trust and safety person," or "we can't afford a chief privacy officer yet." And while Betalab doesn't exist anymore, when I was there, I was a free resource to startups. There are different groups that offer these kinds of resources right now - they'll bring on somebody who helps red-team your product, think through the potential dangers, and consider what future regulation might look like down the road, so that you're ahead of it. My team at ADL’s Center for Tech and Society published a Social Patterns Library on our website, which goes through all sorts of ways you can build with more of a safety-by-design mindset. And other organizations do similar things. I find that sometimes founders don't use the resources that do exist. You just have to say, "I'm going to take the time as a founder to look at those resources."

That said, I don't want to underplay the challenge. When you are small, not well-funded yet, and spending a lot of time fundraising, often building in safety and security measures is something you figure you'll do later after you scale and grow. Let me be emphatic here: it is much harder to fix the problem after you've already unleashed it than it is to at least try to take certain steps in the beginning to build in some protective measures. 

Q: In your role as Head of the Center for Technology & Society, you lead ADL's efforts to hold tech companies accountable for hate and extremism on their platforms. Where do you think the most progress is being made toward actual accountability, both for the perpetrators and for the companies themselves?

Yaël: For many years, society at large has been trying really hard to influence these companies to do the right thing - which is essentially a self-regulation model, right? And as someone who went and worked for the biggest of them all, I just don't believe that accountability is ever going to come from their own goodwill to self-regulate. Self-regulation in and of itself just cannot continue to be the model. I do think there's been some progress on the legislative front. Government may never be able to keep up with the pace of technology, but that does not mean we shouldn't support our government in building basic guardrails. We also have laws in the offline world that should apply to the online world but don't, because the tech lobby is so unbelievably powerful. There's so much money and power caught up in the future of the internet, and it's a complicated space. But to answer your question, there has been some progress. And the more the public demands this be a priority, the more progress there will be.

We at ADL actually worked to get AB 587 passed, which is the tech transparency legislation in California. We are working with other states on transparency legislation as well. And while I am not someone who works primarily on data privacy, I do think that if we can get to the point where people see the need for data privacy at the federal level, that will be hugely important moving forward.

We also have an initiative called 'Backspace Hate' working to get anti-doxing and anti-swatting legislation passed to better hold perpetrators accountable for their actions online. So it isn't just, "oh, what is Facebook doing?" We should be holding the people who doxx you online, who engage in actual activity that can cause you harm in the real world, accountable as individuals as well. 

I think there's a really interesting debate to be had in the US: if you can't get anything done at the federal level, you should definitely try to get these things passed at the state level, because no tech company wants to have to abide by a different set of laws in every state. So you kind of hope that eventually they'll want to come to the table at the federal level. What underpins all of this is our choice of what we want to prioritize, right? I will never be the person who thinks we should put responsibility on the individual user to counter the very pervasive and manipulative forces of big tech companies. But we can still say we want something better.

I am a strong believer that, at the end of the day, legislation and regulation are going to have to be key components. And a tech founder or a tech startup, especially one really ingrained in the Silicon Valley tech mindset, is going to be told time and time again that regulation is the enemy of innovation. I would argue that is fundamentally untrue. Legislation puts in place the basic guardrails that you may actually want to abide by but that your investors are pushing you not to. With smart legislation, at least you know what the rules are and how you can build. We want our government to be there to protect us when things go wrong, but we don't want it to be there to help builders build more safely from the outset? I just think those two positions are incompatible.

Technology permeates every single aspect of our society, and therefore this has to be a whole-of-society approach: legislation and regulation; builders and investors thinking differently about how they're building and investing; and we as consumers actually understanding and demanding our rights in this ecosystem. There is no one magic fix. It's the combination of all these things together. It is a whole-of-society situation.

Q: What is the role of women in creating and curating safe spaces online? Do you see a difference between the genders?

Yaël: I think that's a great question. If you've seen my Twitter bio, you know it reads, “I've been called an alarmist, a Cassandra, and someone with the resume of a 70-year-old man,” which I guess was supposed to be a compliment. But listen, usually the people most affected by the harms of any particular product, any kind of technology, any kind of innovation, are the ones who are going to see the problems early on and try to call them out early on. And in this space, whether you're talking about algorithmic harms or discriminatory technology, it has been women and people of color who have highlighted these issues first.

I would point you toward an article by Rachel Sklar in the Washington Post titled "The past decade was lousy. Women told you it would be." The whole article is about the "Cassandras" of the past decade, and it points out many of the women who were early in calling out some of these things. It is almost always women and people of color who highlight the harms first, and often they even propose the solutions. What is very frustrating is what happens once they have moved the needle far enough in public awareness that people realize there is money to be made from the solutions they're proposing - and then their voices get co-opted. I mean, look at what is happening with Timnit Gebru or Safiya Noble, who were so early in talking about algorithmic and AI harms. And now you have the letter from Elon Musk and others saying to slow down AI. That is a whole movement that these women were building. Women and people of color are always first - but then their voices get co-opted once the public starts to latch onto the ideas they are proposing.

When you are building technology and investing in technology, center the very people who are going to be most affected by it - especially women, people of color, LGBTQ+ individuals, and others most susceptible to harm. If you are trying to make a safe product, that is always going to be a key ingredient. And, of course, I would love investors to realize this and invest in these individuals more, because they are going to build the products that serve the public. I hate to say this, but if a white man is building a product that is only going to serve white men, it's not necessarily going to serve everybody else. If a woman is building a product with a broader view in mind, it's going to serve everybody.
