Episode 2

Fact-Checking and Addressing Misinformation

Ed Bice joins David and Paige Biderman to share his experience of being arrested at a protest against the Iraq War and how a subsequent email recounting his tale changed his life trajectory.

We discuss Meedan’s approach to using a hybrid network of machines and people to generate translations and why it’s critical to layer human knowledge of context on top of machine translation. Ed also shares Meedan’s role in the COVID-19 Vaccine Media Hub and the importance of providing context explainers and fact-checkers to the media.

Ed Bice is the CEO of Meedan, a social tech NGO focused on increasing the online exchange of media, dialogue, and educational materials between Arabic and English speakers. At the confluence of social networking, social translation, and open innovation, Meedan has advanced ideas and projects with the singular focus of promoting more diverse networks for knowledge, data, and idea exchange between Arabic and English speakers on the Internet.

Ed has been an invited speaker at the 2010 Harvard Advanced Leadership Initiative, the 2009 UN Internet Governance Forum, and many other ICT4D events. Ed is a member of the Partners for a New Beginning (PNB) group, a member of the Qatar Foundation International Educational Technology Working Group, and is a Co-Chair of the United Palestinian Partnership (UPP).

Episode Transcription

David:

Welcome, fond listeners, to Disarming Data. It's hosted by me, David Biderman, and a very close relative of mine.

Paige:

Me, Paige Biderman.

David:

And this is a podcast where we look at technology, data, and information from the perspective of two generations. Because as Paige and I were discussing these things, the divide between my generation, the Boomer generation, and hers is pretty significant.

Paige:

And we are fortunate enough to have our guest Ed Bice, CEO of Meedan. Meedan is a global technology not-for-profit that builds software and programmatic initiatives which strengthen journalism, digital literacy, and accessibility of information online and off. It develops open source tools for creating and sharing context on digital media through annotation, verification, archival, and translation.

David:

And Ed has been an invited speaker at the Harvard Advanced Leadership Initiative and the UN Internet Governance Forum. He's a member of Partners for a New Beginning, a member of the Qatar Foundation International Educational Technology Working Group, and I think either currently or formerly the co-chair of the United Palestinian Partnership. A couple of organizations in the Mideast that were basically formed, or at least fostered, by President Obama, as I understand it. And Ed's been an invited reviewer at the National Science Foundation. He's been portrayed in the book FREESOULS and writes frequently for the New York Times, the Economist, and other publications. This is the part we've got to ask about. Ed is a graduate of Carleton College, where he received a BA in philosophy of language, but Ed is also the holder of some patents to some pretty sophisticated software. So we wanted to hear a little bit about that. But in the meantime, welcome Ed.

Ed:

All right. Thanks David. Thanks Paige. Nice to be here. Thanks for the over-flattering introduction, especially for someone involved in the fact-checking community. So I'll put our partners and colleagues on fact-checking that intro right away and get back to you.

David:

Okay. I thought you were going to tell us we got something wrong.

Ed:

But I'll take all the compliments and hyperbole.

Paige:

So we always like to start by asking our guests for just sort of a bio: where you grew up, how school was for you, and how you kind of got into the field that you're in.

Ed:

That's going to take a couple weeks, Paige. It's a long and winding story. But I grew up in Minnesota.

David:

[inaudible 00:03:07].

Ed:

Yeah. Roots in the Midwest, and I stayed close to home for college. I was always a pretty curious student. And didn't imagine that I would study philosophy, but kind of couldn't put it down. The first philosophy course that I took, I got hooked. I also, though, at Carleton, studied with Paul Wellstone, who was at that time the most popular professor with students at Carleton College and the least popular professor with the administration. He was carting his students off to farm protests. He was agitating for change. He was criticizing the administration, and they were basically trying to get him out of the college. That was until he won a very close election and became Minnesota's United States senator, at which point the college administration of course embraced him and called him their own.

Ed:

So anyway, my educational background was philosophy of language, thinking a lot about how meaning is made. Studying Wittgenstein, who famously said the meaning of a word is its use in the language. It's like: forget about all this platonic stuff about where the meaning resides. It resides in use. And getting the equal dose of Paul Wellstone ranting about the quality of the tomatoes that were engineered to put farm workers out of work and to be resistant to the tomato-picking robots. So those are my formative educational experiences. And I think a lot of my current work does owe to those two strands of my education.

David:

That's fascinating. One comment here is that your native state's had a lot of very famous senators, including some recently. I know Al Franken's back on the road touring; unfortunately, I guess he felt the pressure to leave. But anyway, that's an aside. So how did you transition? You graduated from Carleton, and then what happened? How did you come to be doing what you're doing and get this expertise, particularly in technology?

Ed:

Yeah. So philosophy majors aren't known for making the most pragmatic decisions coming out of college. I had a friend, Mark Newcomb, who was a year behind me and won a Watson Fellowship. He was a Mandarin speaker and an econ major, I think. And he was studying rural agriculture in China. And he said, "Hey Ed, if you want to tag along, I got this Watson Fellowship. It might be able to pay for a few hotel rooms, and we can traipse around rural China." And this was 1990 or something.

Ed:

So I had the good fortune of traveling the world after I graduated. And that included travel through Northern Pakistan on the Karakoram Highway through to Kashgar in Western China. Obviously, Western China was at that time closed to foreigners. And I won't recount how we broke the law and blended in with the locals, and made our way across Western China by bicycle. That's another story.

Ed:

But we were shown so much hospitality and kindness and warmth in these Muslim countries and territories that when the war in Iraq broke out, I kind of felt this latent activism surge. And I was motivated to go and protest the first day of the war, the first day of the bombing of Iraq. And there were 130,000 people in the streets of San Francisco. And there were protests all over the world that day. I think it was the largest single day of global protest.

Ed:

And the story, which I've recounted before, was that my friend Rouge Deza, who was a human rights photographer from El Salvador, drove me into the protest. He was taking photographs, so there are photos of this. But I was on the sidewalk. There were a number of protestors who were chained across the street. I think there are good authorities and bad authorities. And one of the bad ones grabbed me on the sidewalk and threw me into the street and arrested me for being in the street.

David:

Oh, [inaudible 00:10:30].

Ed:

Yeah, it was outrageous. And so I wrote an email from that experience, sent it off to five friends. And it wasn't about the circumstances of how I was arrested; it was about US foreign policy, and what we were doing and why it didn't make sense in my view. And I guess it's fairly cliche to say that an email can change your life, but in this case, that email changed my life.

Ed:

One of the five people I sent that to in 2003, from an aol.com email handle I'll add, responded and said, "This is amazing. This is an important piece of writing. And I've talked with my wife." I didn't know at the time that he had the resources to be able to do this. And he didn't have a ton of resources, but he was incredibly passionate about what was happening in the world. And he said, "We want to publish this as a full page ad in the New York Times." Which is not an offer you get every day.

David:

No, not from an email.

Ed:

So I responded to him almost immediately. At this point, we had relocated to the Bay Area. I had been using my philosophy degree to design and build homes for the previous decade. So that was where I applied my degree. And so I was coming at this with kind of a design frame, and I said, "Okay. Are you serious, Kent? Okay, confirming you're serious, I'll do this, but I won't publish my email. The design problem here is that the world doesn't understand itself. And if people can see what I've seen when I've traveled the world, we wouldn't be rushing to a war with these people. A war that's going to have overwhelmingly terrible harmful impact on the people of this country. Granted they've got a terrible autocrat in charge, but this is a war against the people of Iraq. And I have had this incredible life experience traveling in the Muslim world, and I do not want to see this happen. If people were able to get a lens into viewpoints from across the language boundary, from across a culture, maybe we'd be less likely to see wars like this happen."

Ed:

So in the course of a single email, I said, "Let's start a project that will aggregate and translate viewpoints in media from around the world and push that into national publications, mainstream publications." The business model was that surely good-hearted people around the country would understand that this was an inspired project and would send us money if we put this ad in the Times and asked for money. So the business model failed miserably. We received $3,000 back on a $75,000 full-page ad in the New York Times. But it was well received and doors opened.

Ed:

And three months later, I quit my job and threw myself into this full time. And with very little. This was not a popular decision in my household. It was risky. It was incredibly risky. And more so because it was failing. But I hustled, cold-called, went to DC, met with people, met a technologist working in Senator Leahy's office. I explained the series of ideas to him.

Ed:

And I think one other formative thing is that my dad was an electrical engineer who worked in tech. He worked at Control Data in Minnesota on one of the very first personal computers. So I wasn't internet native, I was ARPANET native. I was on the ARPANET in a year that I won't admit to, that might have been in the '70s. But it was well before the internet. So I guess I thought about language and networks, maybe.

Ed:

But the ideas that I talked with John Shore about in DC became a patent, HDNLT, an approach to using a hybrid network of machines and people to generate translations. And super ahead of its time. I like to say that we should have stopped at the network part of the idea instead of adding on the 14 different layers, because we architected a social network for media sharing. And the leap from that to where we are now really owes a ton to a pitch I made in New York. A full podcast could basically be the story behind this, but it is an excellent story.

Ed:

So I went to New York City. I got an audience with Henry Arnhold, the philanthropist. And I described this idea to him. And I had recruited Carnegie Mellon's Language Technologies Institute into the project. Had gotten them excited about it. One of the

David:

You did all that.

Ed:

... linguistics labs in the world. Yeah, I hustled. So it was a pretty good story. It was like: I have this idea. There are really horrible things happening in the world. We have the internet over here. People are starting to notice the internet. It's 2003. And then I have the story about language and translation, and using a network of people to bring together content and to translate it and share it.

Ed:

So I go into this office, 44th floor overlooking Central Park. I am absolutely a fish out of water. I bought a suit. Pretty nervous. And I bet Henry at that point was in his upper 80s. Listens, no emotion on his face at all. Listens to my whole, probably pretty breathless, spiel. And at the end he says, "Mr. Bice." He points at this portrait at the end of the conference room. He says, "Today my grandfather is smiling." And he says, I can't remember the year, but in 19-aught-something, or 18-something, "A Swedish dentist walked into his office in Dresden with a story about language and peace." And I was like, "Esperanto?" He said, "Precisely. My grandfather funded Esperanto, so I'm going to write you a check today and I'm going to make some introductions."

Ed:

So from there, a big snowball effect happened where the MacArthur Foundation came on board. IBM put two of their labs to work on this for three years. This was two years of hustling, another year to get the IBM deal together, but suddenly, in 2006, we are running, and we reestablished the org around translation technologies focused on Arabic and English translation. That's when Meedan was born. So 2006, Meedan was born. The name Meedan came from a linguist at Carnegie Mellon. We had a MEAD acronym; I won't admit what it stands for. But she was like, "Look." And I asked, "Do we have any words in Arabic that are either really good or really bad with this acronym?" And she was like, "Well, you have to name it Meedan, because this is the town square. It is the forum. It can be the battlefield, the maydan harb. It's just the most evocative word in any language for what you're trying to do."

Ed:

So in any case, Meedan was born then, with the partnership with the Watson Lab and the Hursley Lab at IBM. We set out to build a social network with a machine-plus-human translation layer on top of it. A content remixing interface that would allow you to clip digital content from anywhere on the web and bring it in and add it to a story. So we tried to build 18 different features that were each, independently, really successful web apps, overcomplicated it, and released something that several thousand internet geeks really loved but that ultimately didn't gain a ton of traction.

Ed:

And fast forward to today: we had an office in Cairo in 2007, which means we kind of grew the organization with the people that were in the middle of the Egyptian Revolution. When that revolution happened and the dust settled, we said: everything we've been looking at in translation, in the power of decentralized networks and user-generated content to create counter narratives and bring social change, this is all happening, but it's missing a verification layer.

Ed:

It's chaos. We had no idea whether some of this content that we were redistributing was inauthentic, was a product of the autocrat's media department, or whether it was authentic. This is incredibly profound, and the web's going to need a verification layer. So I think we were five years ahead of misinfo in that regard, or of global recognition of misinfo. So in 2011 we started working on Check, and we kept the translation work going for a few years. We built a prototype human translation app for Facebook. We kind of kept translation in our work, but really have focused on verification and fact checking since 2011. So that's a long-winded backstory, and there are of course 1,000 other twists and turns that I left out.

David:

I can imagine.

Paige:

When you were working with language, did you find, because my understanding of Arabic is that it's a very unique language, and with religious texts as well, the Islamic texts, it can't be directly translated as easily. So is there a lot of mistranslation that takes place from Arabic to English?

Ed:

Yeah, I mean it's an incredibly difficult language to translate. And in those years when we started the project, machine translation was awful. It was terrible. A complication of Arabic is the fact that the dialects vary a tremendous amount. And at that time there was a pretty common usage of Roman characters to represent Arabic, Arabizi. So the internet was mixing all of these things together. It was horrendously challenging, thus the need for a human layer on top of it.

Paige:

And then another question, kind of about Arabic. My understanding is that in Israel in particular, there are not a lot of Arabic-speaking Jewish people that live there anymore; most speak Hebrew. Do you feel the language barrier, I mean obviously it wouldn't do much because there are so many issues there, but do you feel like starting to translate between those two languages more would be helpful to the people that live there?

Ed:

Yeah, I think this is an interesting question, Paige. And the reality now is that machine translation, and the rise of in-ear translation technologies that are processing an audio signal in real time, this stuff is Star Trek, but it's happening now. And it's always been our thesis that the lower the language barrier, the better for societies in conflict. So I mean, my answer is yes. I think that any cross-cultural dispute is improved the more communication you have.

Ed:

And it's a vastly different world now than it was 20 years ago. There was a kind of running cliche joke in the language technology community that MT was a five-year problem, but it was continually five years off. Every generation of researchers would say, "Oh, we're going to have this cracked in five years." And in fact, one of the very first demonstrations of computing was focused on translation. A famous demo, I think it was the Georgetown demo, in the '50s. And they said, "Well, computers are going to solve this." And it's just recently, with neural networks and other advanced machine learning approaches, that in the last handful of years we've seen incredible progress with automated translation.

David:

Do you think there still needs to be, well, we should go to this disinformation, but do you think there still needs to be a human layer on top to go with-

Ed:

I do. Yeah, I do.

David:

Why is that?

Ed:

And the reason is that language isn't static. And this is the same reason we need journalists, right? So we need human translators because language is constantly moving. We need journalists and fact checkers because events are constantly evolving. And the challenge is that you can have a fact check or a verification report that needs to be revised given changes in the event circumstances. And you can have something that's true of a thing one day and false the next. So I think there's always going to be a need for humans to bring context, to bring meaning to machine interpretations.
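To make that machine-plus-human layering concrete, here is a minimal sketch in Python of how low-confidence machine output might be routed to a human review queue. This is an illustration only, not Meedan's HDNLT design; the Translation class, the confidence score, the 0.8 threshold, and the machine_translate stub are all hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Translation:
    source: str
    machine_output: str
    confidence: float                  # 0..1, reported by a hypothetical MT engine
    human_revision: Optional[str] = None

    @property
    def text(self) -> str:
        # Human knowledge of context always wins over the raw machine output.
        return self.human_revision or self.machine_output

REVIEW_THRESHOLD = 0.8  # assumed value: below this, a human translator reviews

def machine_translate(source: str) -> Translation:
    """Stand-in for a real MT engine; returns output plus a confidence score."""
    return Translation(source, f"<machine translation of: {source}>", confidence=0.55)

def process(source: str, review_queue: list) -> Translation:
    translation = machine_translate(source)
    if translation.confidence < REVIEW_THRESHOLD:
        review_queue.append(translation)  # a human adds dialect and context knowledge here
    return translation

queue = []
result = process("صباح الخير", queue)
queue[0].human_revision = "Good morning"  # the human resolves the ambiguity
print(result.text)                        # -> Good morning

The design point is simply that the machine layer scales while the human layer supplies the context that, as Ed notes, machines still miss.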

David:

Got it. Okay, good. And we do have to get to that, but I want to have us step back a little bit, because you did do a lot trying to address the Palestinian-Israeli conflict, as I understand it, particularly with at least some guidance from President Obama's administration. Do you want to describe that a bit?

Ed:

Yeah. So our work in Egypt continued up through the first presidential election, when Morsi was elected. I'm personally a big fan of Obama. I'm not a big fan of what he did in the Middle East. And I don't think that he rose to the moment there. I think he steered the ship down the middle of the channel and lost an opportunity to really fall on the right side of history. I have friends in prison still. So the failings of US foreign policy in the Middle East are not happy discussions for me, because these are my friends whose lives have been dramatically impacted. So not probably the answer you were expecting there.

David:

It wasn't.

Ed:

But Meedan's pivot away from being an organization focused on the Arab region really came from a key supporter, a Swedish development agency, which supported the start of our work in verification in 2011 and to this day continues to be an important partner. In 2013, they asked us to begin moving our programming globally. So we now run programs in virtually every region in the world. And that was a move that we made quite willingly, because we were in those very early days of starting to understand misinformation. Seeing that it was a cross-border, cross-language challenge. A lot of the content that is most problematic is being repurposed across languages, across communities, across countries.

Ed:

So taking on the challenge of misinformation as we have, it's essential to work globally. So 2013 was really the start of us moving into doing programs in Brazil and Africa and India. And then the Electionland Project in 2016 was a US project. So we brought the work back to the US, and had 1,000 journalists on our software monitoring the 2016 US election, looking at allegations of election fraud and voting place incidents on election day.

Ed:

And then, building on that work, I think the Mexican election in 2018 was really the point where we said, okay, this is a super important, super powerful component of a country running a successful, meaningful democratic election. If we can improve the information ecosystem in which these elections are happening, give people the power to report content that they don't think is accurate or that is misleading, and engage, in the case of Mexico, 130 media organizations across the country in that work, then we can improve elections. And I think that's what we did in Mexico in 2018. A really key part of our work over the last decade has been how we bring communities together, both to surface misinformation and to distribute the assessments of the journalists, civil society groups, and experts looking at that information.

Paige:

Do you feel that the upcoming Brazilian election is being handled differently in terms of misinformation this time than last time?

Ed:

I do, Paige. So our project in Brazil is breaking new ground. In the last election cycle, in 2018, there was a monitoring effort that brought together media partners in Brazil. And they targeted WhatsApp, where we know the vast majority of Brazilians are getting their daily information. So in 2019, WhatsApp came to us and asked us to integrate our software with their business API. So a lot of the work that we've done since 2019 has really focused on the challenge of misinformation in closed networks. It's not just a wicked problem, it's where the internet is moving. I mean, there's very good data to show it. We all know, because we're all increasingly in closed messaging groups with our family. The future of the internet is closed messaging.

Ed:

So it's a really challenging problem, and it's the problem that we're going to have to address if we want healthy information ecosystems in our societies. So we integrated Check, our open source fact-checking and content annotation platform, with the WhatsApp business API for the Indian election in 2019. And we then started the third-party fact-checking program on WhatsApp, which now numbers more than 40 global media partners. The majority of those partners are using our software.

Ed:

So coming into this year's Brazil election, we had these partners in Brazil on our platform using the software. We received technology funding to integrate, essentially to create a shared feed between all of those partners. Each of those partner workspaces is a WhatsApp number. They collect their own tips. We created the technology that allows us to aggregate all of those tips across those different workspaces and look at trends and pull out insights, so we can see what content is most viral across those workspaces.

Ed:

We're preserving user anonymity, so we strip any PII data, but we take all of those incoming queries and analyze them, run them through our matching algorithms, look for connections. Even if it's an image with text or a video. So this shared DB or shared feed technology is novel and different this year. This same technology also allows us to aggregate all of the fact checks that are being produced by all of these partners, and create a feed that is serving as an on-demand fact-checking resource for, in this case, the TSE, the governmental body in Brazil entrusted with conducting the elections.
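As a rough illustration of that shared-feed idea, here is a minimal Python sketch: tips arrive in separate partner workspaces, obvious PII is stripped, and normalized duplicates are grouped so cross-partner trends surface. All names here are hypothetical, and exact hashing stands in for the fuzzier image, video, and text matching described above.

import hashlib
import re
from collections import defaultdict

PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def strip_pii(text: str) -> str:
    """Redact obvious personal identifiers before a tip leaves its workspace."""
    return EMAIL.sub("[redacted]", PHONE.sub("[redacted]", text))

def tip_key(text: str) -> str:
    """Hash a normalized tip so trivial variants from different partners collide."""
    normalized = re.sub(r"\s+", " ", text.lower()).strip()
    return hashlib.sha256(normalized.encode()).hexdigest()

shared_feed = defaultdict(list)  # tip_key -> [(workspace, anonymized tip), ...]

def submit_tip(workspace: str, raw_tip: str) -> None:
    clean = strip_pii(raw_tip)
    shared_feed[tip_key(clean)].append((workspace, clean))

# Two partners receive the same viral claim through their own WhatsApp numbers.
submit_tip("partner_a", "Is it TRUE the vote count was stopped overnight?")
submit_tip("partner_b", "is it true the   vote count was stopped overnight?")

for tips in shared_feed.values():
    if len(tips) > 1:  # the same claim is trending across workspaces
        print(f"{len(tips)} workspaces saw: {tips[0][1]}")

In production this kind of matching has to be fuzzy and multimodal; the point of the sketch is only the aggregation-with-anonymization flow.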

Ed:

So the TSE has actually, rightly I think, made misinformation an aspect of their remit, their mandate. The simple understanding is that if you have a democratic election that's based on verifiably false information, it's not democratic. It's not serving the interest of the people. So we've got a project in Brazil that's now connecting six media partners, fact-checking partners, with the TSE bot. We're seeing millions of requests coming in through that bot. We are seeing many tens of thousands of fact checks being distributed back in a one-on-one fashion.

Ed:

So a user submits a query. If we've got a match, there's a message going back to them with that, and that becomes an artifact that they can then share with their aunts and uncles and family groups. And so to me this is the future of how we improve internet information spaces. It does come back to the start of all of this, which is how do we improve people's understanding of the world? And it's exciting to see it start to happen at internet scale. Millions of users, an official government authority using this technology and data to improve the Brazilian election. So we'll see how the runoff goes, but it's pretty exciting to be in the middle of it.

Paige:

Here in America, obviously, there tends to be much more backlash against fact checking, especially on social media platforms, particularly from conservatives. Does that seem to happen in other countries as well, among the citizens?

Ed:

Oh yeah. There's a playbook. There's a playbook. And we saw this really dramatically in the Philippines. So in the Philippines, we worked with Maria Ressa and her team at Rappler. And it was an incredibly well organized, incredibly inspired project, ongoing actually. So we're still running that project in the post-election phase. But the amount of, not kind of casually mistaken misinformation, but really radically targeted counterfactual content in that election cycle basically erased and rewrote that country's history.

Ed:

And in spite of a very well organized, inspired effort to fact check and distribute on social media in that election setting, it was a landslide election premised on a revisionist history of the first Marcos era and a bunch of narratives, including one that was particularly effective that equated a vote for Marcos with essentially a lottery win for the country of the Philippines, because it would release this hidden gold that was rightfully obtained. I think we did the calculations, and the amounts that were said to be in this gold reserve were something like 700 times global GDP. The Philippines is going to be the wealthiest country in the world, but you've got to elect this guy.

Ed:

So that's where we're at right now: an internet that all of us from the early 2000s envisioned as being this paradise for collaboration and knowledge creation and improving the world. And it feels like the infrastructure, the hyperlink that was supposed to lead us down the path of collaboration and knowledge creation and progress, has been hijacked. And we've got some ideas for how to fix it, but it's getting worse. And unfortunately we're seeing erosion across societies everywhere. We're seeing the rise of nationalists and dictators whose recipe relies heavily on the attention economy of an internet that rewards hyperbole and outrageous claims. And so it feels like a greater challenge than this naive thought we started with 20 years ago of bridging language boundaries.

Paige:

How much did it change when COVID happened in terms of, I mean obviously the misinformation with COVID was-

Ed:

I've got a story. So in 2018, and I've got the receipts for this, a brilliant researcher named Nat Gyenes bought a ticket from Boston to San Francisco and said, "Ed, I want to come out and pitch you on an idea." Nat had been working on public health misinformation for five years. She arrived at the SF office in, I want to say, early 2018, April of 2018. And Scott Hale, who's now our director of research, had gotten a fellowship to come over from Oxford and hang out in the Meedan office. And An Xiao Mina, our former COO and a big thinker, was also there.

Ed:

Nat arrived. We went into brainstorm mode. I loved the idea of developing a public health vertical to kind of prove the model for misinformation response. Because you need to be targeted. Elections are great because they're narrowing: you can set up an election project, and people know that they're supposed to submit election-related misinfo. So I loved the idea of proving things in a public health vertical. There's a huge piece of my life that involves the health system and my eldest son. So I was, from a personal point of view, pretty passionate about health information.

Ed:

And we sat down, started brainstorming, and we came up with a proposal to seek funding to build the theoretical framework for misinformation response in a hypothetical pandemic. And we submitted that proposal to the Robert Wood Johnson Foundation. It was one of five Pioneer awards made that year, out of a pool of hundreds. So we won a super competitive grant process. And three months into that project, the pandemic began.

Ed:

So we pivoted from a hypothetical project about what frameworks we'd need. Google came calling and said, "Can you work with the Global Science Media Centers and start a vaccine media hub?" We spun up a team of brilliant epidemiologists and public health experts, the vast majority of them women. And we created Health Desk as an effort to provide context for the fact checkers that we were serving on the WhatsApp 3PFC Project, the third-party fact-checker project. So those context explainers are up at healthdesk.org and get used by journalists all over the world every day.

Ed:

So pandemic, absolutely. I mean, the pandemic shifted so many things about the internet and how we use it. For us at Meedan, it was an outrageous coincidence that we had started this work in advance of it. I still think that public health, and healthcare generally, is a really, really, really rich, high-impact place to address misinformation and information integrity. I think that everyone in the world has had that same experience: your thumb is hurting, you don't know why, you do the Google search, and you're convinced that your kidneys are falling out. That's a universal experience. And the internet is just so bad for it... It's getting slightly better. So I think that health misinfo is going to continue to be a growth area for us, even as our work with the pandemic and vaccines starts to tail off. We've got some ideas around pharmaceutical misinformation that's flowing through YouTube, so we're working on a concept around that now.

Paige:

When you guys were making your mock scenarios, did you predict it? Was it worse than you predicted in terms of misinformation when COVID happened, or was it about accurate?

Ed:

Oh, interesting. I don't think we were surprised. I think Nat had been working in the field long enough that we weren't surprised. But what we didn't see was the politicization of the pandemic. We knew there was going to be rampant misinformation. We just didn't know it would have the political overtones that the Trump administration brought to the pandemic. And that's been super unfortunate.

Ed:

And I guess going back to your earlier question around fact checking generally and the politics of it, I think it's the same as the strategy that sweeps away all of mainstream media by just calling it fake news. The line on fact checking as a practice is: you shouldn't trust any of them because they've all got an agenda. It's all part of the broader media conspiracy. And that's really tough, because you don't get out of the starting gate with a certain percentage of the population. And once you thoroughly pollute the fact-checking brand as an ideology, you don't have much recourse after that. If you've successfully invalidated the idea of a fact check, you've got pretty free rein in terms of what you can say. And I think that's a dilemma for fact checkers now.

Ed:

And our antidote to that is: be nice people, win friends, and also be a little less precious about the value of any one fact check. So we kind of have this idea, and embrace this model, that for any truth in the world, there are probably a few thousand different flavors of assessments, of fact checks. And I guess this goes back to the philosophy: you can have an assessment of an event that has different truth values in different places at different times, and you can have fact checks that evolve. And if you accept that, and I think there are pretty good arguments for understanding that at least some of what we think of as facts are contextual, if you accept that, then you've got quite a bit more humility around the undertaking of fact checking. And if you can get that through in your work, then maybe people will be more likely to accept it.

David:

That's interesting. We had a guest who was a utility security specialist. We talked about cyber attacks on utility networks and things like that. And he said that's really not the problem, because we have technological defenses we could put up and things such as that. But he said, for disinformation, all the technology in the world can't stop things. As you say, out of the gate, you've got a certain percentage that you just can't convince. And he really thought that state-sponsored misinformation was more dangerous than cyber attacks. So for what it's worth.

Ed:

Absolutely. The meaning of a word as it's used in the language. And if you have a charismatic leader who can convince people that the meaning of this word is [inaudible 00:58:30], then that's what they're going to take it as.

Paige:

And then just out of curiosity, this is a little bit of a pivot, but do you find that educators are taking media literacy in schools more seriously now because of everything happening? Or is it still hard to push through those boundaries and get them to understand the importance of youth education in terms of the media landscape?

Ed:

Yeah, I think we're seeing more and more media literacy work, and I think that it needs to evolve. I have kids, and we've all been trained, but particularly the kids out there, to expect to move through content so quickly. And all the media literacy in the world won't cause a person to slow down, do a web search, check the source. So there are a lot of great media literacy offerings out there, but I think it's a matter of changing behaviors. And I think those behaviors are really hard to change, because these interfaces that we've become accustomed to are throwing content at us faster and faster. And whether it's the rise of end-to-end encrypted messaging, or platforms like TikTok, these are increasingly walled gardens. And in some cases globally, that's made very explicit with data restrictions. But I think media literacy is one thing and changing online behaviors is another. And I think we've got a long way to go.

Paige:

And do you find that TikTok, I honestly don't have a TikTok, but are they very anti-fact-checking pretty much all around because of the Chinese ownership?

Ed:

No, no. Not at all. They're working with fact-checking partners, and there are a lot of good people at TikTok. I think that the format, the interface, is actually really well suited to fact checking. We've been trying to convince them that they should evolve their strategy. Just as a function of how you can interact with content, the Stitch is basically the ideal format for fact checking. And you see a lot of organic fact checking on TikTok. But the algorithm is incredibly opaque, and it's eating the internet. And there's probably some legitimate concern over the amount of data they're capturing and the reach. But just on a personal level, in conversation with their people, I've seen that they're smart, they're committed to improving that service, and they're putting some serious effort into thinking through how they moderate content and how they respond to misinformation.

Paige:

Well that is really good to hear actually.

David:

This is probably pretty simplistic on my part, because I am the Boomer here [inaudible 01:03:36], or disinformation. And two things. One, I would really recommend that our listeners go to your website and look at what you guys have done in terms of, I think, just automating the checking of disinformation or misinformation; it just seems extraordinary that you're able to do that. And I don't know if you can give an overview, for someone who really doesn't understand technology, of how that works. I could see it being done by humans with Google searches, but I'm not sure how you would do this automatically. But it sounds like your Check software is pretty amazing.

Ed:

Yeah. Thanks, David. It is. Yeah, it's super amazing. No. I mean, automation, to the extent that we have automation: the data model is that you've got users asking questions. And those questions can be in the form of free text, or they can be a link that a user has encountered in the wild, or an excerpt from a link, or an image, or a video. So that's the kind of input. The output is the response to that. A media partner, a civil society org, a fact checker takes that query and builds a response. And that response can be in the form of free text, or an image card, or a video, whatever it is or however it is they choose to respond.

Ed:

The algorithms that we're building are clustering and crosswalking between similar queries and available fact checks. So the automation is really all about how you increase the number of positive matches between queries and fact checks. If you do a good job of that, you serve back more automated responses that don't require human intervention. And if you do a bad job of that, you serve back automated responses that are not high quality, and you lose the faith and trust of your users. So yeah, we're building matching algorithms, and also hate-speech detection algorithms to protect the users who are oftentimes suffering harassment while monitoring these tip lines. So that's a significant piece of what it means to be a digital journalist working in these spaces where you're taking in user-generated or submitted content.
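Here is a minimal Python sketch of that match-or-escalate loop, with hypothetical claims and an assumed similarity threshold. Meedan's actual matching algorithms are far more sophisticated than this string comparison; the sketch only illustrates the trade-off Ed describes.

from difflib import SequenceMatcher
from typing import Tuple

# Hypothetical store: previously fact-checked claims and their responses.
FACT_CHECKS = {
    "the vaccine contains a microchip": "False. No vaccine contains a microchip. See sources...",
    "polling stations close at 5pm nationwide": "Misleading. Closing times vary by state. See sources...",
}

AUTO_REPLY_THRESHOLD = 0.85  # assumed: tuning this trades coverage against trust

def best_match(query: str) -> Tuple[str, float]:
    """Return the closest known claim and a similarity score in [0, 1]."""
    scored = [(claim, SequenceMatcher(None, query.lower(), claim).ratio())
              for claim in FACT_CHECKS]
    return max(scored, key=lambda pair: pair[1])

def handle_query(query: str) -> str:
    claim, score = best_match(query)
    if score >= AUTO_REPLY_THRESHOLD:
        # Confident match: send the existing fact check back automatically.
        return FACT_CHECKS[claim]
    # Weak match: a wrong auto-reply costs user trust, so escalate to a human.
    return "Thanks for the tip! A fact checker will review it."

print(handle_query("does the vaccine contain a microchip?"))  # automated reply
print(handle_query("my cousin says ballots were burned"))     # routed to a human

The threshold is exactly the tension in the paragraph above: raise it and fewer queries get automated answers; lower it and bad matches erode the faith and trust of users.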

David:

That's fascinating. We've kept you long, and I don't want to keep you too much longer, but really appreciate the time. And then we have to ask, obviously, because of the election, our election, and the legitimacy of the election. I saw that the Republican candidate for Senate in Arizona just recently acknowledged that Biden was "a legitimate president" after being an election denier for so long. The question is: are things getting better on that front, on that specific issue, I guess I would ask, the [inaudible 01:07:54].

Ed:

I'm not a political scientist, David, so I might take a pass on that. I'll say that we're working in the US midterms with a set of partners on specifically addressing misinformation in Spanish-language networks in the run-up to the midterms, and working through WhatsApp on that project with Knight Foundation support. So that's, we hope, a test case for a larger 2024 project. But working in WhatsApp, working in Spanish-language networks, there is so much misinformation flowing through closed networks, sometimes crossing national borders, and the work is about rooting out specifically the inauthentic narratives. There are coordinated efforts, and we saw this in Mexico in 2018. It's not just aunts and uncles coming up with crazy theories. It's state-sponsored propaganda manufacturing the narratives they think are going to stick, or divide, or outrage. So that'll be a good project for us.

David:

It sounds like a really important project. It also sounds as if you anticipated this probably about five years before others identified it as a problem. So that's terrific what you've done. It really is.

Ed:

Yeah. Yeah. Thanks.

Paige:

Yeah, thank you so much for coming on. I really appreciate it.

David:

Yeah, thank you for what you're doing. And Paige, do you have any other questions, any [inaudible 01:10:30]?

Paige:

I think he was very thorough. That was really interesting.

David:

That's great.

Ed:

Wide ranging. Yeah. Thanks. I enjoyed the conversation. Thanks a lot.