
In Africa, Taking on Viral Hate

Kenyan lawyers participate in a virtual pre-trial consultation with a judge and Meta's legal counsel in Nairobi, Kenya, on April 12, 2023. © Tony Karumba/AFP/Getty

In the lead-up to Kenya’s elections last June, the human rights watchdog Global Witness decided to test Facebook’s content moderation policies. The group submitted 20 ads to the social media platform for approval: 10 in English and 10 in Swahili. They all contained hate speech pulled from real-life examples and mentioned beheadings, rape, and bloodshed. Some posts compared people to donkeys and goats.

Most were approved within a few hours.

When Facebook’s parent company, Meta Platforms Inc., responded, Global Witness resubmitted two of the ads; both were approved again, showing that the safeguards the company claimed to have put in place were still failing to keep such harmful content off the site.

This shocking result underscored the fact that Facebook users in Kenya and other African countries are less likely to be protected from hate speech and other harmful content on the platform. Just last April, a Kenyan government agency tasked with addressing inter-ethnic violence found that of all social media platforms in the country, Facebook had the highest level of hate speech content.

That content can have deadly consequences. A 2020 Facebook post published by a leader of the National Movement of Amhara, a group opposed to the minority Tigrayan population in Ethiopia, called for “self-defense” against Tigrayans; the post preceded a dawn attack in which gunmen killed more than 100 people in Ethiopia’s western Benishangul-Gumuz region. That same year, a post by a Facebook user going by the name Northern Patriot Tewodros Kebede Ayo accused the Qemant community—a small ethnic minority in northwestern Ethiopia—of supporting opposition forces and called for their “clean-up”; it preceded the murder of more than a dozen Qemant civilians by a local militia drawn from Ethiopia’s second-largest ethnic group. Posts like these create a climate of hatred that has led to mass attacks, indiscriminate murders, and targeted assassinations.

Two victims and a Kenyan rights group, the Katiba Institute, are now taking the tech behemoth to court, seeking $2 billion in restitution from Meta Platforms Inc. over its failure to prevent the publication of content promoting violence. They also have the backing of Open Society grantee Foxglove, a UK-based nonprofit with a track record of taking on tech giants to combat injustice caused by algorithms, data harvesting, and other abuses of technological power.

One plaintiff in the case is Abrham Meareg, the son of an Ethiopian academic who was shot dead after being attacked in Facebook posts, one of which included his photo. The plaintiffs argue that Meta protects non-English-speaking Facebook users on the continent from harmful content far less than it protects users in the U.S., and that this failure constitutes discrimination.

This lawsuit is a significant development in calls for accountability from Facebook. The plaintiffs hope that this first-of-its-kind legal challenge before the Kenyan High Court will force social media platforms to prioritize content moderation in Africa and hold them to account when they fail to invest enough in moderator and user safety across their products.

Meta “has tried to have it both ways,” says Chris Kerkering, the Katiba Institute’s litigation manager. “To be in every home and on every screen in the world, but only be accountable in the places where it has a physical office.” His organization believes that the Kenyan Constitution is “robust enough to recognize that Meta has to be accountable in the places where it causes harm, not just in the places where its executives sit.”

While Meta has not responded directly to the lawsuit, a company spokesperson released a statement noting that it has “strict rules which outline what is and isn’t allowed on Facebook and Instagram” and that it “[invests] heavily in teams and technology to help us find and remove this content.” The statement did not admit to any failure to stop the proliferation of hate speech and misinformation in Africa, even though the company’s shortcomings have been widely reported—nor did it acknowledge those who fell victim to violence incited on its platform.

Eighty-seven percent of the budget Facebook devotes to combating misinformation is spent in the U.S., even though less than 10 percent of its users reside in North America; a mere 13 percent is distributed among all 54 African countries, Latin America, and the Middle East. As a result, there are nowhere near enough Facebook content moderators for the flood of harmful content in parts of the continent particularly prone to conflict, and often no capacity at all to moderate content in the local languages in which it is likely to appear. Cori Crider, a director of Foxglove, has characterized content moderation as being “an order of magnitude worse anywhere outside of the U.S.—and particularly bad in places facing crisis or conflict.”

Content moderation for countries in Eastern and Southern Africa, home to more than 500 million people, is carried out from a hub in Nairobi with far too few staff to meet the overwhelming need. These Kenyan content moderators have also reported atrocious working conditions and much lower pay than their counterparts in other countries receive.

But underinvestment is only part of the picture: Facebook’s profit model promotes violent and harmful content by design. One of the metrics that shapes Facebook’s ranking algorithm is “Meaningful Social Interactions” (MSI), introduced in 2018, which prioritizes content predicted to elicit reactions such as comments, reshares, and “likes.” MSI has increased the company’s advertising revenue by increasing the amount of time users spend on the site. But Meta has failed to address how this weighting rewards shares of, and replies to, inciteful, hateful, and dangerous content, pushing it to ever larger audiences.
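To illustrate the mechanism the plaintiffs describe, the sketch below shows a simplified, hypothetical engagement-weighted ranking of the kind MSI-style scoring is reported to use. The weights, field names, and functions are illustrative assumptions, not Meta’s actual code or values; the point is only that when reactions such as comments and reshares are weighted heavily, provocative content naturally rises to the top of the feed.

```python
# Illustrative sketch only: a simplified engagement-weighted feed ranking.
# All names and weights here are hypothetical assumptions, not Meta's code.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_comments: float   # model-predicted volume of comments
    predicted_reshares: float   # model-predicted volume of reshares
    predicted_likes: float      # model-predicted volume of likes

# Hypothetical weights: reactions that spark back-and-forth (comments, reshares)
# count for more than passive "likes," so content that provokes scores higher.
WEIGHTS = {"comments": 4.0, "reshares": 5.0, "likes": 1.0}

def engagement_score(post: Post) -> float:
    """Score a post by predicted engagement; higher scores rank higher in the feed."""
    return (WEIGHTS["comments"] * post.predicted_comments
            + WEIGHTS["reshares"] * post.predicted_reshares
            + WEIGHTS["likes"] * post.predicted_likes)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order candidate posts by engagement score, most engaging first."""
    return sorted(posts, key=engagement_score, reverse=True)
```

Under this kind of scoring, an inflammatory post predicted to draw many angry comments and reshares outranks a calmer one with the same number of likes, which is the dynamic the lawsuit says goes unchecked in conflict-prone markets.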

The lawyers bringing suit are confident they can make the case that Meta has discriminated against African users because, in high-risk situations elsewhere in the world, it has taken measures to make its algorithm safer. On January 6, 2021, after the attack on the U.S. Capitol, the company deployed a “break the glass” procedure that diminished the visibility of content that “delegitimized” the U.S. election process and promoted violence. In Myanmar, Facebook posts promoting horrific violence against the Rohingya minority went unanswered by Meta for many months; only after thousands of Rohingya were massacred did the company tweak the algorithm to reduce the distribution of highly viral content. No such measures have been adopted in Kenya or Ethiopia.

Meta and other corporations must pay the true price of spreading harmful content on their platforms. Courts and regulators must ensure that social media platforms invest in content moderation in African markets or risk severe penalties. This means improving the working conditions of moderators and ensuring that they can cover a wider array of languages. It also means ensuring that algorithms don’t amplify inciteful content and investing in technology that can detect and remove harmful content quickly. Meta should also be required to develop crisis response processes, as it has done in the U.S., to address conflicts or other violent events that arise in Africa.

In her testimony before the U.S. Congress, Facebook whistleblower Frances Haugen said that she “genuinely [fears] that a huge number of people are going to die in the next five to ten years, or twenty years” because of Facebook’s algorithm and underinvestment in content moderation. Meta must change course before more lives are needlessly, and brutally, lost.
