Extreme Speech & the Ethiopian Elections

 
 
Photo by Taylor Turtle on Unsplash
 
 

by Eliza Bacon

Junior Producer & Editor Global Digital Futures

There has been an interesting shift, starting from a utopian discourse of digital technology as a way of bringing democracy and development, to its dark side... Increasingly we are seeing digital technology being framed based on its negative effects.
— Dr. Matti Pohjonen

This week Chipo spoke to Matti Pohjonen about extreme speech and content moderation online. Matti works at the intersection of digital anthropology, philosophy and data science. He has taught Global Media & Post National Communications at SOAS and is now a postdoctoral researcher at the University of Helsinki, where he is working on the global and comparative dimensions of platform accountability. Their conversation raises astute questions about the global future of content moderation. Listen to it here.

As Adam Satariano points out, Silicon Valley has a problem with double standards. 2021 has seen news sites disproportionately filled with speculation over how Facebook's oversight board will deal with Donald Trump, while uneven and incoherent action is taken against leaders inciting violence in the rest of the world. It is a stark reminder that these platforms - so intimately woven into our lives they become invisible - are multinational corporations serving corporate interest; and in certain contexts, public scrutiny has higher market value.

Furthermore, the action taken over Trump has made it harder for platforms to plead reluctance to intervene in extreme speech in future, and has undermined their claims to be neutral conduits of communication. On this, I'd recommend Tarleton Gillespie's Custodians of the Internet, where he establishes the paradox of content moderation: though platforms understandably disavow it, it is their essential, definitional and constitutional work.

Last year, Ethiopian activists, journalists and civil rights organizations wrote an open letter to Facebook condemning its failure 'to prevent the escalation of incitement to violence on its services'. Now, in the build-up to what Prime Minister Abiy Ahmed has promised would be the country's 'first attempt at free and fair elections', Facebook has been working to cement a responsible public image. Between March 2020 and March 2021, it claims to have removed 87,000 pieces of hate speech. Last week it removed a network of fake accounts associated with Ethiopia's Information Network Security Agency, which is responsible for monitoring telecommunications and the internet. Read more on Facebook's 'election integrity work' in Ethiopia here. Read about some of the most common misinformation circulating online here.

Needless to say, it's not all on Facebook's shoulders to keep the elections fair. Four of ten regions won't be voting at all this week - one of which is Tigray, where communications blackouts and blocks on humanitarian aid have been reported amid the ongoing conflict. As such, 'the government helped create conditions where disinformation and misinformation can thrive'. That quote is taken from The Washington Post, which has analysed 500,000 tweets to get a sense of the ongoing information conflict - here. It was also reported in May that social media sites across the country were restricted. If you haven't already, read Deirbhile's piece on the increased use of internet shutdowns here.

I’ll leave you with some food for thought in the form of this cartoon by Gado, which Matti mentions in the podcast:

Gado, 'Trump, Museveni and Social Media' (January 13, 2021)

Eliza is studying the Global Media & Communications MA at SOAS, and holds a BA in English Literature from Cambridge. Eliza worked as a Communications Intern at UNICEF's South Asia office in Kathmandu, and hopes to become a writer on tech and global internet cultures.

 
