An oversight board established by Facebook to review content moderation decisions has highlighted increased transparency and a commitment to respecting individual rights in its five-year review. Despite these positive notes, the board also voiced frustration about the limits of its independent role.
First announced in 2018, during a period of declining public trust in Facebook (now rebranded as Meta), the Oversight Board, often called the company’s “supreme court,” was created in response to scandals such as the Cambridge Analytica data breach and waves of misinformation around major votes like the Brexit referendum and the 2016 U.S. presidential election.
Since beginning operations in 2020, the board has been composed of notable figures including academics, media professionals, and civil society representatives. Its primary function is to evaluate specific appeals against Meta’s content moderation decisions, providing binding rulings on whether content removal was justified. Additionally, it offers non-binding recommendations aimed at improving policies used across Facebook, Instagram, and Threads for billions of users.
Over the past five years, the board says it has contributed to greater transparency and accountability, and to an open dialogue on free expression and human rights, on Meta’s platforms. It also suggests that Meta’s unique oversight approach could serve as a model for other social networks.
The board is funded by Meta, and the company is legally obligated to implement the board’s rulings on individual content cases, though it is not bound to follow its broader policy recommendations.
However, the board admits to some frustrations, noting that its efforts have at times fallen short of the impact it expected.
External critics have voiced stronger concerns, arguing that content moderation on Meta’s platforms has deteriorated since the board’s creation. Jan Penfrat of the European Digital Rights organization argues that less moderation is taking place today, possibly justified under the banner of free speech.
Experts also contend that effective oversight of moderation at Meta’s scale would require a larger, quicker system with real power to effect systemic change. The limits of the board’s influence became evident when Meta CEO Mark Zuckerberg discontinued the U.S. fact-checking program in January, which had employed third-party fact-checkers, including AFP, to identify misinformation.
In April, the board criticized the hasty decision to replace the third-party fact-checkers with a user-generated system and recommended that Meta thoroughly evaluate the new approach’s effectiveness.
Looking to the future, the board plans to broaden its scope to include responsible deployment of AI tools and products. Zuckerberg has announced plans to incorporate artificial intelligence more deeply into Meta’s offerings, citing potential benefits like combating loneliness. However, recent incidents—such as reports of individuals dying after extensive conversations with AI chatbots—have increased worries over the harms posed by such technologies.
The Oversight Board emphasizes its intention to work towards solutions that uphold user rights on a global scale while addressing emerging risks associated with artificial intelligence.