What the Facebook Whistleblower Reveals about Social Media and Conflict
Former Facebook employee and whistleblower Frances Haugen testifies during a hearing entitled 'Protecting Kids Online: Testimony from a Facebook Whistleblower' in Washington. REUTERS


Former Facebook employee Frances Haugen’s recent testimony before the U.S. Senate underscored the platform’s role in propagating misinformation that feeds offline conflict. Facebook should do more to reduce the spread of harmful content by revamping its moderation capacities and modifying its algorithm.

The U.S. Senate recently heard testimony from Facebook whistleblower Frances Haugen, who left the company’s civic integrity team with a trove of documents that she delivered to the Wall Street Journal. The leaked documents and her testimony support what many, including Crisis Group, have reported – that Facebook has exacerbated conflict in a number of places, and too often does not do enough to manage the fallout.

Haugen’s central criticisms in her testimony focus on Facebook’s algorithm, the set of rules that determines which content users see, including where a post appears in the “news feed”. Facebook’s algorithm is designed to give users what the company believes is the best possible experience, to keep them coming back to the platform. Though few specifics are known, among other metrics the algorithm heavily weights user engagement, which includes actions like “liking”, sharing or commenting on a post. Problems arise because polarising content tends to perform well on such metrics, and is thus rewarded with a higher position in users’ feeds. This affects the substance of the content itself: in leaked Facebook research, some European political parties reported posting more negative content in response to a 2018 change to the algorithm that weighed user engagement more heavily. Facebook tries to mitigate this risk through what it calls “integrity demotions”, which push potentially harmful content lower in the news feed, and by boosting original reporting. For Haugen and other critics, these measures are not enough.
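
To make the mechanics concrete, the sketch below shows engagement-weighted ranking combined with an “integrity demotion” multiplier, in Python. Facebook’s actual ranking system, weights and data structures are not public; every name and number here is a hypothetical stand-in for the logic described above, not the company’s code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    demotion: float  # hypothetical: 0.0 means no integrity demotion, values near 1.0 push a post far down

def engagement_score(post: Post) -> float:
    # Hypothetical weights: shares and comments count for more than likes,
    # echoing reports that the 2018 change rewarded posts that provoke interaction.
    return 1.0 * post.likes + 5.0 * post.shares + 15.0 * post.comments

def rank_feed(posts: list[Post]) -> list[Post]:
    # Score by engagement, then apply the demotion multiplier: demoted posts
    # drop lower in the feed but are not removed from the platform.
    return sorted(posts, key=lambda p: engagement_score(p) * (1.0 - p.demotion), reverse=True)
```

Even in this toy version, the dynamic critics point to is visible: posts that provoke reactions rise, and demotion only dampens their reach rather than removing them.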

The debate over Facebook’s news feed algorithm has direct implications for conflict. Crisis Group’s work on Cameroon highlighted that, by promoting polarising content, the algorithm may contribute to ethno-political tensions. Hate speech and other polarising content often spread on social media in Ethiopia, and Haugen has suggested that the algorithm may contribute to their reach. In an internal Facebook report leaked to the Wall Street Journal, a test set up to explore how the news feed operated in India found a “near constant barrage of polarizing nationalist content, misinformation, and violence”.

Conflict zones present particular challenges for Facebook in reducing harmful content: understanding what constitutes misinformation is more difficult in regions with less media freedom; hate speech is often highly contextual; and the plethora of languages and dialects spoken by users makes content moderation across the globe an enormous technical challenge.

But even with these challenges, Facebook can do better. Start with its moderation system. Harmful material is removed through a mix of human moderators, who review flagged posts, and artificial intelligence, including a hate speech algorithm. Though 90 per cent of Facebook’s users live outside the U.S. and Canada, the moderation system heavily favours the U.S. According to leaked Facebook research, only 13 per cent of the 3.2 million hours spent labelling and removing misinformation in 2020 focused on non-U.S. contexts.

Facebook also has significant gaps in its language capacities. Haugen reported that 87 per cent of the platform’s spending to counter misinformation is devoted to English speakers, even though they represent just 9 per cent of users. Most of the platform’s Arabic-speaking content reviewers know Moroccan Arabic, for example, and so cannot always identify hate speech, violence and abuse in other regional dialects, according to a leaked Facebook document. As Crisis Group has noted, in Cameroon, moderators with an understanding of local dialects are important to correctly identifying inflammatory content. Many conflict-affected countries, particularly those with multiple languages and dialects, are left under-resourced.

The same language capacity issues arise in Facebook’s automated flagging of content. Facebook’s response to harmful language relies heavily on its hate speech algorithm, which it touts as identifying 97 per cent of the hate speech ultimately removed from the platform. But this algorithm does not work in all languages and dialects, according to leaked research. It covers only two of the six languages spoken in Ethiopia, for example. In Afghanistan, the platform took action on an estimated 0.23 per cent of hate speech, according to Facebook research, due to gaps in its language capabilities. In India, Facebook’s largest market, leaked research highlighted a lack of relevant language classifiers to flag content. (Importantly, this may not be the only reason such content was allowed to remain: the Wall Street Journal previously reported that political considerations affected decisions about whether or not to remove Hindu nationalist pages and content.) One leaked study estimated that Facebook took action globally on as little as 3 to 5 per cent of hate speech and less than 1 per cent of incitement to violence.
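
As a toy illustration of that coverage gap, the sketch below routes posts according to whether an automated classifier exists for their language. The language codes and branch names are invented for this example and do not describe Facebook’s actual coverage or pipeline.

```python
# Illustrative only: the language codes and routing below are hypothetical and do
# not reflect Facebook's actual classifier coverage or moderation pipeline.

CLASSIFIER_LANGUAGES = {"en", "ar", "hi"}  # languages assumed to have an automated hate speech model

def route_post(language: str, user_reported: bool) -> str:
    """Decide how a post is screened when classifier coverage is uneven."""
    if language in CLASSIFIER_LANGUAGES:
        # Covered languages are scored automatically, before anyone reports the post.
        return "automated_screening"
    if user_reported:
        # Uncovered languages depend on user reports reaching a moderator who
        # actually understands the language or dialect in question.
        return "human_review_queue"
    # Otherwise potentially harmful content simply stays up, unseen by reviewers.
    return "no_review"
```

The Ethiopia and Afghanistan figures above are, in effect, measures of how much content falls into the last two branches.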

Facebook has taken some steps to address these imbalances. Partnerships with NGOs give the platform greater insight into conflict contexts. (Crisis Group is a partner of Facebook and in that capacity has occasionally been in contact with Facebook regarding misinformation on the platform that could provoke deadly violence.) Facebook has also provided digital literacy training to aid in reducing the spread of harmful content. Facebook’s spokesperson told the Wall Street Journal: “In countries at risk for conflict and violence, we have a comprehensive strategy, including relying on global teams with native speakers covering over 50 languages, educational resources, and partnerships with local experts and third-party fact checkers to keep people safe”. Such efforts are useful, and should be expanded.

Improving the platform’s ability to reduce harmful content across languages and local contexts is a central component of reducing the potential for violence offline. This includes channelling more funding toward expanding moderation teams and addressing language gaps, which can help limit the flow of hate speech, misinformation and incitement to violence. For example, Facebook increased the resources dedicated to Myanmar after the company publicly acknowledged its role in the atrocities against the Rohingya. Crisis Group research shows that, in part because of this past experience, the platform moved quickly and aggressively to remove pages linked to the military junta after the February 2021 coup.

But content moderation alone will not be enough. Haugen’s testimony lays bare how Facebook’s algorithm is in some cases misaligned with reducing conflict. Making the algorithm less engagement-driven would not directly address the presence of harmful content, but it could make such content less visible. Nor does this have to mean shifting to a purely chronological news feed, as Haugen recommended: Facebook has already made targeted changes to the algorithm, for health and civic content in the spring of 2020 and in certain conflict-affected countries including Myanmar and Ethiopia. Changes could also target the “explore” feature, which prompts users to look at new pages or accounts they may be interested in – one way that Facebook is inadvertently still advertising pages linked to Myanmar’s military junta. Fully understanding the consequences of any algorithmic changes, particularly in developing contexts, would be far easier if Facebook disclosed its own research and shared more data with outside experts.
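
The two options discussed in this paragraph, a purely chronological feed and a less engagement-driven one, can be sketched side by side. The post fields and the engagement_weight knob are invented for illustration; neither function reflects the specific changes Facebook made in 2020 or in Myanmar and Ethiopia.

```python
from datetime import datetime, timezone

# Each post is assumed to be a dict with "created_at" (a timezone-aware datetime)
# and "engagement" (a combined like/share/comment score); both fields are hypothetical.

def rank_chronological(posts: list[dict]) -> list[dict]:
    # Haugen's proposal: order purely by recency and ignore engagement entirely.
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)

def rank_dampened(posts: list[dict], engagement_weight: float = 0.2) -> list[dict]:
    # A middle path: keep engagement in the score but shrink its weight
    # (engagement_weight is a made-up knob, not a real Facebook parameter).
    now = datetime.now(timezone.utc)
    def score(p: dict) -> float:
        age_hours = (now - p["created_at"]).total_seconds() / 3600
        recency = 1.0 / (1.0 + age_hours)
        return recency + engagement_weight * p["engagement"]
    return sorted(posts, key=score, reverse=True)
```

The policy question is which trade-off a given country context warrants: rank_chronological removes engagement incentives entirely, while rank_dampened keeps them but limits how far a provocative post can climb.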

In the midst of all the criticism, it is easy to lose sight of the many benefits of social media for conflict-affected countries. Facebook has helped people organise mass protest movements. It has allowed users to share information about human rights abuses in conflict zones. It offers an alternative source of news in countries where the media is heavily regulated by the state. But the downsides have been substantial. With more resources, research and reform, Facebook could reduce them.
