A Kenyan court has ruled that Facebook and Instagram parent company Meta can be sued over its alleged role in promoting content that fuelled ethnic violence in neighbouring Ethiopia. The case stems from viral hate speech posts on the platform during the 2020-2022 civil war in northern Ethiopia's Tigray region.
Meta has argued that local courts do not have the authority to hear cases against it in countries where it is not registered as a company.
However, the Kenyan High Court rejected that argument in Thursday's ruling. Responding to the decision, the Katiba Institute, a plaintiff in the case alongside two Ethiopian researchers, said that "the court here has refused to shy away from determining an important global matter, recognising that homegrown issues must be addressed directly in our courts."
Plaintiff Nora Mbagathi, the Katiba Institute’s executive director, alleged that Facebook’s recommendation systems amplified violent posts in Ethiopia during the Tigray war.
Plaintiff Abrham Meareg explained that his father, Meareg Amare, was killed in 2021 following threatening posts on Facebook. Another, Fisseha Tekle, an Amnesty International researcher, claimed that he faced online hate for human rights work in Ethiopia.

In all, the plaintiffs demanded that Meta create a restitution fund for victims of hate and violence and alter Facebook's algorithm to stop promoting hate speech. If granted, these demands could force a significant change in how Meta manages content and works with its content moderators.
The Facebook parent company had previously said that it invested heavily in content moderation and removed hateful content from the platform. Over the years, the company has invested billions and hired thousands of content moderators globally to police sensitive content.
It also said at the time that it would stop proactively scanning for hate speech and other types of rule-breaking, reviewing such posts only in response to user reports.
See Also: Meta can be sued over moderator layoffs, Kenya court rules.
Meta’s continued battle over content moderation
The hate-speech case comes amid a string of other lawsuits against Meta.
The company has faced two previous lawsuits in Kenya. In one, content moderators employed by a local contractor said they faced poor working conditions and were fired for trying to organise a union.
In response to the allegations, the big tech company stated that it requires its partners to offer industry-leading working conditions to their employees.


In the other, content moderators sued Meta and two content moderation contractors, Sama and Majorel, for unlawful redundancy and blacklisting after Sama laid off all 260 of its content moderators in Kenya in March 2023, when its contract with Facebook ended.
According to the moderators, they lost their jobs with Sama for trying to organise a union. They said they were then blacklisted from applying for the same roles at another firm, Luxembourg-based Majorel, after Facebook changed contractors.
Meta argued that it is not incorporated in Kenya but merely conducts business there, and that it outsourced its content moderation work via Sama, a U.S.-headquartered company with operations in the East African nation.
In August 2023, the court ordered the parties to resolve their dispute out of court within 21 days, giving both sides an opportunity to reach an agreement without going through a formal trial.
Another set of lawsuits includes actions by two Ethiopian researchers and a rights institute, who accused Facebook's parent company of allowing violent and hateful content from Ethiopia to proliferate on the platform. The company responded in December that such content goes against the rules of both Facebook and Instagram, and indicated that it had taken measures to address and prevent it.


While the social media company remains embroiled in content moderation battles, it has made efforts to address the issue.
The company said in March that it would begin testing its new Community Notes feature in the United States on March 18, 2025. The move replaced its long-standing third-party fact-checking programme with a crowd-sourced model powered by an open-source algorithm originally developed by Elon Musk's X.
Drawing inspiration from X's Community Notes, which X rebranded from its earlier "Birdwatch" feature in 2022, Meta will use X's open-source algorithm as the backbone of its rating system.
The decision, which came two months after Meta scrapped its fact-checking initiative amid pressure from conservatives, is being positioned as a less biased and more community-driven approach to tackling misinformation on its platforms.