Tainted Narratives: Tech Giants Harming Users in Wartime

Tech companies have largely failed to protect people from dangerous narratives fueled by disinformation, hate speech, and a lack of authentication checks. This is currently the case with the ongoing war on Gaza, as big tech algorithms mask a deeply rooted bias on their platforms.


Over the past decade, tech companies like Meta, which envisioned “connecting the world,” and X (formerly known as Twitter), with its noble claim of “defending and respecting the user’s voice,” have heralded a vision of a technological utopia. This lofty narrative, increasingly unfulfilled, not only hides the harms these tech giants can inflict but also masks a deep-rooted global inequality crisis.

What becomes glaringly evident is the striking contrast between the extensive, stringent, and swift measures social media companies take to safeguard users in the Global North, in countries like the United States and Ukraine, and the neglect shown to users in the Global South, in places like Palestine, Myanmar, and Kenya, when political crises occur.

In the wake of the events that have unfolded in Israel and Palestine since October 7, the digital realm has been inundated with a deluge of disinformation, hate speech, incitement, and violent rhetoric. These dangerous narratives did not emanate from a single source: whilst Israeli officials and politicians disseminated videos rationalizing attacks on Gaza and normalizing violence, social media platforms in general became fertile ground for incitement and racism from users, further fueling settler violence against Palestinians. This is not a new occurrence. Human rights groups had previously called on social media platforms to protect Palestinians, particularly following the 2021 events in Sheikh Jarrah and Gaza, and the burning of the town of Huwara earlier this year. For Palestinians, and for social media users from the global majority more generally, to be protected from digital harm, social media companies need to take several steps to resolve the digital inequality these users face.

Disinformation fanning the flames

The absence of safeguards, proper content moderation, and fact-checking is not an isolated incident limited to any one platform; rather, it is a widespread issue. In the current events, X has seemingly allowed incitement and racist speech against Palestinians to circulate without adequate moderation. The platform’s owner even posted a tweet encouraging users to follow two accounts with a history of spreading disinformation, undermining the platform’s accountability.


Misinformation has plagued social media platforms over the last few weeks, impeding the proper flow of information. For instance, X permitted the wide dissemination of AI-generated images that were not flagged as fabricated and that conveyed misleading information. Furthermore, some social media posts have circulated videos taken during previous attacks on Gaza, or old videos of airstrikes in Syria or Ukraine, as if they depicted recent Israeli airstrikes on Gaza. Further, while Human Rights Watch verified videos showing that white phosphorus was used in Gaza and Lebanon in October, some other videos circulating on social media were in fact taken in the Donbas in Ukraine. The ceaseless waves of disinformation burden people and distract them from getting news from the ground. And because engagement-driven platforms profit from the attention this churn generates, the tech giants are the biggest beneficiaries of disinformation.

Some international mainstream media outlets also fell into this trap and contributed to spreading disinformation that was subsequently used to justify violence on the ground. These outlets reported alleged stories as verified news without conducting thorough authenticity checks. Such reporting can manipulate global audiences and exacerbate polarization and extremism during such events.


The repercussions of this hateful speech extend far beyond the confines of the region. It has contributed to the normalization of extremism and anti-Palestinian racism, as well as of antisemitism, especially in the absence of equitable international mainstream media coverage of events in the region. It has also affected communities abroad, as exemplified by the recent murder of Wadea al-Fayoume, a six-year-old Muslim Palestinian-American boy, in Chicago. Jewish communities were likewise affected by the online hate rhetoric, leading to unconscionable attacks such as the burning of the al-Hammah synagogue, a historic site in Tunisia.

Tech companies’ contradictions: Censorship, ads, and press freedom

On the other hand, Meta has imposed stricter censorship on Palestinian content. This has included the removal of visual documentation of the attack on the al-Ahli Hospital in Gaza on the night of October 17. Moreover, Meta has reduced the reach and viewership of Instagram stories in support of Palestine, among other moderation failures. The same occurred in 2021, which only compounds existing harm and reflects the platform’s failure to address tech harms to human rights and access to information in times of crisis. In both instances, Meta attributed the problems to widespread technical glitches.

Furthermore, YouTube has allowed sponsored adverts by the Israeli Ministry of Foreign Affairs that justify the use of violence and deadly attacks against Palestinians, a tactic the Ministry of Strategic Affairs employed two years prior. Additionally, certain social media platforms have monitored and suspended the accounts of journalists and media organizations covering and documenting events in Palestine. TikTok has “permanently banned” the official account of Mondoweiss, a website focused on developments in Palestine and Israel, and Instagram has suspended the account of a Mondoweiss correspondent based in the West Bank. These actions undermine journalists’ right to work, the public’s access to information, and the right to freedom of opinion and expression.

Disparities in safeguarding users in times of crisis

In the midst of the current crisis, major technology companies have diverged markedly from their responses to protect users in the Global North. Unlike their swift actions to safeguard American democracy during the storming of the US Capitol, and their proactive measures to protect Ukrainian civilians from the outset of the Russian invasion of Ukraine, they have neglected to address disinformation, incitement, hate speech, and other content that perpetuates extremism and escalates violence on the ground. This glaring disparity mirrors a broader crisis of global inequality, highlighting the urgent need for more equitable digital protection measures in regions affected by crises.


The measures tech companies have taken on the matter show that the protection of users varies depending on various factors, including, but not limited to, their countries’ economic and political clout, as well as the support their countries receive from the Global North. Furthermore, the adequacy of tech companies’ protective measures themselves can be called into question. For example, after commissioning a due diligence report from Business for Social Responsibility (BSR) following the 2021 events in Sheikh Jarrah and Gaza, Meta rejected one of the report’s key recommendations. The report, which found that Meta had censored Palestinian voices in 2021, recommended committing resources to support public research aimed at establishing the most effective balance between the legal obligations imposed on social media platforms and their policies and practices.

Unfortunately, it seems social media platforms invest resources based on market size, not risk. It is important to note that Israel allocated approximately $319 million to social media advertising in 2021, with a staggering 95 percent of this budget dedicated to Meta platforms. This figure surpasses the collective advertising expenditures of Palestine, Jordan, and Egypt, solidifying Israel’s position as one of the largest advertising markets in the region.

Tech companies’ layoffs over the past couple of years, particularly of misinformation and safety teams, have made matters worse. In early 2023, YouTube, owned by Google, reduced its already small team of policy experts tackling misinformation, leaving one person in charge of misinformation policy globally. Moreover, Meta’s layoffs in 2022 included employees who helped lead research on hate speech, misinformation, and trust. Further, in December 2022, X disbanded its Trust and Safety Council, to which dozens of civil society organizations and leaders from around the world had volunteered their time and effort to enhance the platform’s safety. As such, tech companies are undermining the processes that safeguard their platforms from disinformation at a time when these safeguards are desperately needed.


Similar issues have occurred across these platforms in the past, despite numerous pleas from local, regional, and international civil society representatives. These calls emphasized the need for the platforms to be prepared for similar events, especially in light of recurring escalations of tensions in the Middle East. However, tech companies have not allocated sufficient resources to protect users or to conduct in-depth research to comprehend these tensions and their consequences on their platforms, which reflects a fundamental structural defect in their business models.

Tech companies may argue that they are closely monitoring what is happening and responding to escalations from “trusted partners.” However, we are at a moment when they should acknowledge that the “Whac-A-Mole” approach no longer suffices and will not bring about lasting change. This strategy is akin to applying small doses of painkillers to placate minimally resourced civil society organizations that document tech-related harms. To safeguard users worldwide, we must, now more than ever, urgently unite our efforts with all stakeholders to increase pressure on tech companies, imploring them to invest more in protecting users from tech-related harm.

Mona Shtaya is a Nonresident Fellow at TIMEP focusing on surveillance, privacy, and digital rights in the Middle East and North Africa (MENA) region. She works as the MENA Campaigns and Partnerships Manager and corporate engagement lead at Digital Action.
