United States Election Briefing

On November 5, 2024, more than 153 million ballots were cast to determine the United States’ president, one of the highest voter turnouts since women were given the right to vote. Trump’s campaign was fueled by anti-immigrant and white nationalist disinformation, in both English and non-English languages. The November 2024 ballot demonstrated the uniquely harmful ways in which offline and online disinformation campaigns target language-minority voters, and the ways in which foreign information ecosystems shape U.S. domestic politics. It also exposed how Big Tech companies, which have failed to invest sufficiently in content moderation for the languages of the Global Majority, were also abdicating their responsibility towards non-English-speaking voters in the U.S.

Language differences in policy enforcement around election disinformation impact millions of citizens. These Big Tech failures left Muslim, Latinx and Black communities in the U.S. exposed to non-English and foreign disinformation campaigns.

Big Tech U.S. Election Policies 2024
While the deadly insurrection at the U.S. Capitol on January 6, 2021 brought to light how failures to moderate content on social media can undermine democracy, Meta, Google, TikTok and X all took a step back from their election integrity plans, as part of a growing trend to de-prioritize political content. This didn’t begin in 2024. Big Tech had been rolling back electoral integrity policies since the 2022 midterm elections, despite promises to safeguard elections. Seventeen critical policies across Meta, X and YouTube were eliminated between 2022 and 2023, including Meta’s political ad policy that mandated transparency and labeling. This happened alongside mass layoffs, particularly in trust and safety, ethical engineering or responsible innovation, and content-moderation teams.

Musk’s X cut half of its election integrity team just a week prior to the November 2024 election, after having pledged to expand it. It also reduced the resources devoted to enforcing its rules and narrowed the categories of speech that violate them. For example, X previously banned posts containing claims like “unverified information about election rigging” or “claiming victory before election results have been certified.” On top of that, the tech giant has reduced the penalties for breaking its rules. As part of its civic integrity policy, it encourages users to add context to each other’s posts using its crowdsourced “fact-checking” feature, Community Notes. This feature mostly fails to combat misinformation, according to experts. X doesn’t partner with independent fact-checkers, and while its policy indicates that posts violating its civic integrity rules will be labeled as misleading rather than removed, it gives no indication of whether the accounts behind them will face any consequences.

Since the 2020 U.S. election, Meta has scaled back fact-checking labels and has done less to promote voting-process information on Facebook’s information center. In response to criticism that its platform recommended people into “Stop the Steal” groups, Meta announced that Facebook would no longer recommend political or civic groups for users to join. Meta also cut election integrity staff, and while Facebook now requires labels on AI-generated content, many unlabeled generative AI images still circulate. Like YouTube and X, it has narrowed its policies on election misinformation, weakening prohibitions against misinformation that delegitimizes election results. None of these platforms has committed to addressing election fraud rumors comprehensively.

TikTok played a bigger role in the presidential election than in any previous cycle. It is the only one of the four platforms mentioned above that prohibits political advertising. It prohibits claims that Trump won in 2020 and uses independent fact-checking partners to review unverified election claims. Still, NYU’s Stern Center for Business and Human Rights found that “TikTok has announced tough-sounding policies related to elections, but the platform’s haphazard enforcement has failed to slow the spread of deniers’ lies.”

Global Witness tested the robustness of TikTok, YouTube and Facebook by submitting advertisements containing false election claims and threats a few weeks prior to this year’s election. TikTok and YouTube each approved 50% of the ads, although YouTube blocked publication of the ads until formal identification was submitted. Facebook approved one ad containing harmful disinformation, an improvement over a similar test conducted during the 2022 midterm elections.

Spanish-language Social Media Use and Electoral Disinformation

Spanish speakers in the United States are avid social media users. Two-thirds of Latinx people treat YouTube as a primary source for news and information about politics and elections. Facebook is also widely used by Spanish speakers in the U.S., with over 28 million users, 69% of whom use it daily. Among Spanish-speaking Facebook users, bilingual and Spanish-dominant Latinxs are more receptive to the platform’s video ads and more likely to share advertising content.

Immigrant communities in the U.S. who speak languages other than English at home use social media in ways that leave them more exposed to mis- and disinformation campaigns. WhatsApp and other encrypted messaging apps are often key venues for political discussion. Latinx people in the U.S. are more than twice as likely as other groups to use messaging apps such as Telegram and WhatsApp. Encrypted messaging apps are more difficult to scrutinize and fact-check, making them particularly susceptible to disinformation campaigns.

As Roberta Braga, director of the Digital Democracy Institute of the Americas (DDIA), states, “There’s nothing inherent to Latino communities that makes us less accurate in our ability to identify false content online.” Rather, the ways in which Latinx communities use social media, coupled with the often racist and white nationalist motives behind disinformation campaigns, make racially targeted disinformation in Spanish a pressing issue. Latinx communities are not more likely to believe disinformation than other groups, but they are more likely to be impacted by the violence and hate that such disinformation incites.

Immigration disinformation and its consequences

Public understanding of the relationship between online speech and offline harms has developed significantly in recent years. Research published in 2023 showed how anti-Muslim hate crimes spiked after Donald Trump posted anti-Muslim rhetoric on social media during the 2016 presidential primaries. The pattern was in evidence again during the 2024 elections.

Most disinformation around immigration during this year’s election revolved around portraying immigrants as responsible for a rise in violent crime and for rising unemployment among people born in the U.S. Anti-immigrant disinformation was rampant in both English and Spanish, such as widespread rumours that crime rates had skyrocketed in New York due to increased immigration.

After September’s presidential debate, in which Trump alluded to disinformation that members of the Venezuelan gang Tren de Aragua were taking over a Colorado apartment complex, the complex’s residents said “they feel unsafe […] and they fear being stereotyped as criminals.” Similarly, Haitian immigrants have received threats, and Springfield, Ohio has received more than 33 bomb threats since Trump’s statements at the debate.

Immigrants, both documented and undocumented, are less likely to commit crimes than native-born U.S. citizens across various crime categories. That a large part of Trump’s campaign was built on these tropes, and that so many people voted for him, shows how disinformation shaped voting behaviour.

Trump and his allies leaned heavily on anti-immigrant rhetoric about undocumented people registering to vote to continue the “Big Lie” narrative, in an attempt to guarantee support for denying the election results if Trump lost. As polls began to show Trump’s lead, claims of a “stolen election” on social media began to disappear. Donald Trump, JD Vance and Elon Musk were top spreaders of anti-immigrant disinformation, and although we’ll never know to what extent Big Tech’s failure to moderate harmful content contributed to Trump’s re-election, it certainly played a part, not least because platforms adopted a less aggressive posture on election integrity.

During the 2024 presidential elections, polling suggested that false claims affected how people saw the candidates and shaped their views on immigration, crime and the economy. Big Tech rolled back policies meant to curb disinformation during elections, in a general attempt to depoliticize themselves and avoid the scrutiny they received in past elections. Policies around fact-checking political speech were absent, and while some argued that the risk of harm no longer outweighed the benefits of political dialogue, election disinformation caused fear, confusion and harassment of immigrant voters, and even bomb threats.

“What this moment has especially brought to my forefront,” says Sanaa, Director of the Digital Spaces Project at Muslim Counterpublics Lab, “is that I don’t trust Big Tech to do this work.” The question we need to be asking ourselves is: “what kind of power can we develop and leverage as users to democratise power, so that it doesn’t lie solely in the hands of Big Tech?”

Meta’s independent content watchdog, the Oversight Board, said there were “serious questions” about how the company deals with anti-immigrant content on Facebook. This, coupled with the multiple studies cited above showing that Meta, Google, TikTok and X did not adequately enforce their election integrity policies (and even less so in non-English languages), is particularly troubling for minority rights and freedoms.

Anti-immigrant rhetoric is less about turning people against immigration and more about amplifying already-held beliefs rooted in racism, according to Germán Cadenas, an associate professor at Rutgers University who specializes in the psychology of immigration. The biggest demographic to vote for Trump was white men, and white U.S. adults are more susceptible to core stereotypes of Latinx immigrants as a threat to American society, according to a recent Rutgers University study.

Language-minority voters are particularly vulnerable to disinformation campaigns that in turn have real-life impacts, such as the Springfield bomb threats that followed the spread of disinformation about Haitian immigrants. Disinformation has also impacted immigrants’ access to factual information. Research on immigration misinformation during this year’s election shows that mis- and disinformation has left many immigrants confused and fearful about using government benefit programs. Four in ten immigrants also say Trump’s rhetoric about immigrants has negatively affected them, including about half of Asian immigrants.

Big Tech failed non-English voters 

Given that the Census Bureau projects white Americans will be a minority by 2050, the country’s language, speech and culture are already shifting and will continue to shift. Because these shifts are fueling nativist and racist disinformation and electoral strategies, domestic media and information ecosystems must also shift their priorities and meaningfully invest in non-English languages and cultures.

Research by the Center for Democracy and Technology (CDT) shows that content moderation practices on social media are not nearly as robust for non-English languages. The study highlights how language-minority voters sit at the intersection of culturally tailored misinformation and data voids. Big Tech companies such as Meta and Google have recently revealed plans to use automated content analysis tools to combat non-English disinformation. But these tools have a limited ability to detect intent and motivation, and they perform especially poorly in languages with little digital representation (such as Indigenous languages and creole languages).

Prior to this year’s election, 24 million voters were expected to rely on translated voting materials to cast their vote. The covered languages include Native American and Alaska Native languages, certain Asian languages, and Spanish. Arabic is not among the language minorities protected by the Voting Rights Act, making Arabic-speaking voters particularly vulnerable to non-English electoral disinformation. Rima Meroueh, director of the National Network for Arab American Communities, says that “community members are more concerned about whether they might face voter intimidation or be turned away at the polls, rather than not feeling confident in the system.” Over 14,000 letters were sent to registered voters who are naturalized citizens threatening criminal prosecution for illegal voting. Widespread electoral disinformation about undocumented people registering to vote fueled voter intimidation this election cycle, in turn causing real-world harassment and fear among immigrants.

The Latinx community is also impacted by Big Tech’s failure to moderate hateful content, especially in Spanish. The University of Washington’s Center for an Informed Public released research showing that TikTok, Instagram and Facebook were not enforcing some of their own policies to safeguard against election misinformation in Spanish. In-platform searches reveal discrepancies in how election information policies are implemented: for example, searching for “fraude electoral” (“electoral fraud” in Spanish) on Instagram doesn’t produce the voting information banner that the equivalent English search does.

Foreign influence campaigns affected the U.S. elections

While tracking the origins of disinformation can be challenging, there are clear patterns of non-English and foreign campaigns promoting white nationalist and racist agendas. For instance, the U.S. Department of Justice uncovered a Russian government-led disinformation network meant to influence the elections, which targeted “hispanic descendents,” among other groups. The Russian state media outlet RT is accused of using AI and bots to spread disinformation about immigration and crime prior to this year’s U.S. elections. One notorious example was a fake video of a “Haitian man” (who wasn’t actually Haitian) claiming to have voted in two counties after just arriving in the U.S. The video, created in Russia, spread widely across social media and helped shape Trump’s campaign narrative.

Onyx Impact, a nonprofit founded to better serve and empower Black communities by fighting the harmful information ecosystems targeting them, released a pre-election study, “The Black Online Disinformation Landscape,” which found foreign actors to be one of six core networks spreading online disinformation affecting Black voters and Black social issues in the U.S. These include “individuals or entities acting on the behalf of, have strong ties to, or may be inadvertently promoting talking points from foreign governments, organisations, or interests that seek to influence or interfere in US political discourse.” Concerns over foreign influence operations escalated further after reports that Meta allowed users around the world to buy and sell Facebook accounts authorised to run political ads in the U.S.

Big Tech alignment with Donald Trump 

Over the past few years, dozens of congressional hearings have involved tech companies on issues around election integrity, online harms against children, privacy and content moderation. The Biden administration had been investigating Meta, Alphabet, Tesla, Apple and Amazon, among others. In the run-up to 2024, tech companies also faced increasing political pressure from Republicans who portrayed content moderation as anti-conservative censorship. Alphabet, Amazon, Apple, Meta and Microsoft were subpoenaed by the House Judiciary Select Subcommittee on the Weaponization of the Federal Government in 2023 for information about their communications with the executive branch over how content is moderated.

Elon Musk, owner of X, endorsed Trump during the election campaign, spent more than $250 million on Trump’s re-election, and successfully shaped X into an echo chamber for Trump’s supporters. By loosening content moderation policies on his platform, Musk allowed right-wing disinformation to flourish on X. Musk, who has 206 million followers on X, posted false and misleading claims about the 2024 elections that were viewed two billion times. His tweet insinuating that Democrats were allowing undocumented people to cross the border in order to vote received 47 million views alone.

In response to civil society groups’ research and advocacy against X’s business model, Musk “declared war” and sued such groups multiple times. In particular, he sued the Center for Countering Digital Hate (CCDH), which recently looked into how X’s crowdsourced fact-checking feature, Community Notes, failed to counter false claims about the U.S. elections. 

Beyond Musk, tech moguls in general bet on Trump to loosen government control over Big Tech, in the hope of avoiding accountability. Following the election, Meta’s CEO, Mark Zuckerberg, met with President-elect Trump at Mar-a-Lago. He criticized the Biden administration’s “pressure to censor Covid-19 content” during the pandemic and may see potential in Trump’s laissez-faire approach. Amazon’s Jeff Bezos announced he was “very optimistic this time around,” after having refused to let his newspaper, The Washington Post, endorse Kamala Harris. Meanwhile, Trump acolyte and Federal Communications Commission chair nominee Brendan Carr declared his wish to smash Big Tech’s “censorship cartel.”

This has set the stage for the unabashed alignment between Big Tech CEOs and President Trump to dismantle content moderation efforts and campaign against the digital regulatory efforts of other countries in 2025. 
