Earlier this year, Facebook launched its human rights policy during a session at the Palestine Digital Activism Forum (PDAF 2021), stressing its goal “to be a place for equality, safety, dignity, and free speech.” Ironically, one of the first instances in which Facebook breached these standards was in its removal of Palestinian content, as Palestinians were among the first to experience content “over-moderation” and digital discrimination on the platform, representing the continuation of a longstanding issue.
In May, Palestinians took to the streets in response to an escalation of Israeli violations. Documenting and disseminating these attacks became essential to drawing the attention of the international community and building greater international solidarity by highlighting the attempts at ethnic cleansing and forced displacement of Palestinians.
Palestinians faced an escalation in the suppression of their voices and narratives across social media platforms, specifically Instagram, Facebook, Twitter, and TikTok, whether through content takedowns, account suspensions, or even the deletion of archived stories, as happened on Instagram with content documenting the ethnic cleansing in the Sheikh Jarrah neighborhood of Jerusalem.
This is not the first time Palestinians have complained about discriminatory content moderation policies by technology companies. Nor are they alone: many other indigenous and marginalized communities, such as those in Myanmar and Kashmir, have also decried the censorship of their content on social media.
There are numerous efforts to silence Palestinians across social media platforms, and the Israeli Cyber Unit’s campaign is one of the largest in this regard. The head of the Cyber Unit has indicated that social media companies comply with nine out of ten of the unit’s removal requests. To put this in perspective, the unit submitted close to 20,000 content censorship requests to social media companies in 2019.
In the first 10 days of May, amid Palestinian protests, the Israeli government asked social media companies to take down more than 1,010 pieces of content. Of those, 598 requests went to Facebook (46 percent of which were removed), 258 to TikTok (89 percent removed), and 246 to Instagram, owned by Facebook (41 percent removed).
The Arab Center for the Advancement of Social Media (7amleh) documented that, of the cases appealed between May 6 and 19, 20 percent of 250 Instagram cases were restored, as were 20 percent of 175 Facebook cases. 7amleh is still awaiting responses on hundreds of other cases. These findings confirm that many of the takedowns were false positives: the deleted content did not necessarily violate community standards. Rather, a large percentage of it documented human rights violations against Palestinians on the ground.
Deleted content also included political posts in the form of images, videos, and written posts and tweets. Instagram also targeted specific hashtags relating to Al-Aqsa and hid related published content for a few hours, and it used shadow banning to suppress the viewership of published stories: the content remained online, but its reach and viewership were sharply reduced.
Concurrently, 7amleh monitored public online conversations in Hebrew to document racism, hate speech, and incitement against Palestinians, specifically between May 6 and 21, finding a fifteenfold increase in violent discourse compared to the same period in 2020. Yet social media companies did not remove this inciting content, despite the mob violence against Palestinians in the streets. This highlights the disparity in how these companies moderate: they over-moderate some groups while allowing extremism against them to flourish on their platforms.
Such content moderation has long-term implications for platforms’ artificial intelligence algorithms, which learn to remove this type of content automatically. This produces algorithms that are biased against and discriminate toward Palestinian content.
Instagram and Twitter apologized for removals in early May, citing technical glitches that affected content relating to the Palestinian protests in Sheikh Jarrah, as well as content from Colombia and from indigenous communities in the U.S. and Canada. Despite these apologies, organizations that document digital rights violations continued to receive similar complaints about content removal, demonstrating a fundamental problem, not a recent “glitch.”
This digital oppression deeply affects the Palestinian narrative in the digital space and mirrors the physical occupation and control exerted over Palestinians for the past 73 years. The deletion of digital content and archives erodes the historical Palestinian narrative over the long term. Social media platforms have become an important and growing source of news for users around the world, so censoring Palestinian content shapes the public’s awareness of the issue. Beyond this, deleting documentation of human rights violations and war crimes can undermine investigations into them.
So, who will compensate Palestinians for lost documentation of human rights violations and brutal attacks? What are the Palestinians supposed to do about these companies’ apologies for their “technical glitches” if they don’t actually amend their policies? Will social media platforms make their platforms and algorithms fairer and allow Palestinians to express what they are exposed to?
There are initiatives telling Facebook, Instagram, and other social media platforms that they must stop silencing Palestinian voices and publicly audit their human rights and content moderation policies. Facebook should publish a detailed transparency report with a full breakdown of the digital rights violations that occur on its platform and the complete reasons behind content takedowns, account suspensions, shadow banning, geoblocking, and other forms of digital rights violations.
It is important to build on what was achieved in May, when activists, digital rights advocates, and human rights organizations at the local, regional, and international levels pressured social media companies into admitting their mistakes. The mainstream media also played an important role: had international outlets not covered this digital oppression so extensively, the companies would not have begun to apologize and seek solutions. Alongside the media, employees of these big tech companies have also begun speaking out against content moderation policies regarding Palestinian content.