Brazil’s 2024 municipal elections, held on 6 October 2024 with runoffs on 27 October, represented a watershed moment in the intersection of democratic processes, artificial intelligence (AI) governance, and the role of Big Tech in shaping public debate. With 441,350 candidates running for mayor and city council, these local elections took place under new regulations governing AI use in elections, even though Brazil’s Congress had failed to pass its “fake news” bill (PL 2630/2020) after intense lobbying by tech companies led by Meta and Google. The Superior Electoral Court (TSE) stepped forward as one of the key architects of Brazil’s digital guardrails, issuing groundbreaking resolutions on two critical fronts: a stringent 24-hour takedown rule for electoral content that spreads demonstrably false or heavily distorted claims about Brazil’s voting system and electoral process, and an outright ban on deepfakes, meaning synthetic audio or video content designed to manipulate candidates’ images or voices, regardless of authorization. These measures came as lawmakers debated the country’s landmark AI framework law (PL 2338/2023), positioning Brazil at the forefront of regulating artificial intelligence in democratic processes.
While many expected a surge of AI-driven manipulation, the elections saw only a few notable incidents. The AI Observatory in Elections reported a handful of issues, such as AI-generated jingles in low-budget campaigns and occasional deepfakes. The elections did, however, highlight serious concerns, particularly around gender-based violence, with deepnudes targeting women candidates in local races. Content verification challenges also became clear as audio deepfakes spread on messaging apps, and even flagship AI systems such as Google’s Gemini and Meta’s AI tools had trouble providing reliable election information.
This electoral cycle unfolded against the backdrop of unprecedented tension between Big Tech and Brazilian authorities, most notably the suspension of X (formerly Twitter) from late August to early October 2024. X’s systematic noncompliance with court orders concerning anti-democratic content, profile removals, and content restrictions, combined with the company’s withdrawal of its legal representative in Brazil, led to a major showdown with the Supreme Court and Justice Alexandre de Moraes. The stand-off, which ended with the platform complying only after facing fines and asset freezes, laid bare the broader challenge of making global tech platforms bow to national regulations, a tension that intensifies during electoral periods when the stakes for democracy run particularly high. The clash also tied into ongoing investigations into threats to democracy and digital militias, raising important questions about Brazil’s platform liability laws and sparking a debate on the need to update the Civil Rights Framework for the Internet (Marco Civil da Internet). As a result, Brazil is becoming a key case study in how to balance technology and electoral integrity, showing that regulating Big Tech is about more than just AI: it is also about protecting democracy and holding corporations accountable in the digital age.
Another key development is the increasing involvement of the Superior Electoral Court and the Supreme Court (STF) in the debate over regulating digital platforms. The Supreme Court has carved out an increasingly decisive role in Brazil’s digital landscape, particularly after it began reviewing landmark cases on content moderation and platform liability at the end of 2024. Its active stance could reshape the country’s internet laws, including Article 19 of the Civil Rights Framework, potentially setting new standards for how platforms must answer for the content they host, even during election periods. These deliberations, unfolding as Brazil grapples with electoral disinformation, signal a deeper shift in how the country approaches platform governance.
Social Media Landscape
Brazil has the largest population and the largest number of online social media users in Latin America, and it is the fifth-largest social media market worldwide. Meta’s platforms are by far Brazilians’ go-to sources for information and news, particularly WhatsApp and Facebook, with Instagram close behind. In May 2024, Facebook had 175 million active users in Brazil, covering 79.2% of the population, of whom 55.3% were women.
However, despite Brazil’s massive number of social media users, the country still has a major connectivity gap. According to “Significant Connectivity,” a report on the quality of Internet access in Brazil, this gap excludes important segments of the population, especially the most vulnerable communities. The majority of users access the Internet exclusively via mobile phone, with limited data plans, which affects, among other things, their access to information. The proportion of users who fact-check their information is much higher among those who use computers and mobile phones together (71%) than among those who use mobile phones exclusively (37%).
At the same time, closed messaging platforms, particularly Telegram and WhatsApp, pose particular challenges to fighting disinformation and online harms. More than 96% of the population is active on WhatsApp. Given that social media platforms and messaging apps are widely used for election campaigning and public debate in Brazil, it comes as no surprise that electoral disinformation remains one of the biggest challenges to protecting democratic integrity.
Regulatory Framework: AI and Platform Governance
Brazil’s journey into digital regulation has been groundbreaking, with major steps taken since 2014 to address platform governance, data protection, and the emerging frontier of artificial intelligence.
In 2014, Brazil introduced the Civil Rights Framework for the Internet (Marco Civil da Internet), one of the world’s first crowdsourced internet laws and a pioneering set of platform liability rules. The law requires platforms to comply with court-ordered takedowns of posts and profiles, while providing a streamlined notice system for removing intimate content shared without consent as well as copyright violations. The framework was strengthened in 2018 with the General Personal Data Protection Law (LGPD), which set clear rules on how individuals and companies must handle personal data.
The subsequent democratic crisis prompted new legislative action. That crisis involved widespread disinformation campaigns during the 2018 election, which circulated extensively across social media platforms and messaging apps, particularly WhatsApp, and culminated in the January 8, 2023 riots, when thousands of Bolsonaro supporters stormed Brazil’s Congress, Supreme Court, and presidential palace after rejecting the electoral results. In April 2020, Brazilian lawmakers had introduced Draft Bill no. 2.630, dubbed the “Fake News Draft Bill,” the country’s most ambitious attempt to modernize platform liability rules. The proposed legislation would introduce a duty-of-care requirement and mandate more active moderation of harmful content, including anti-democratic messages, child abuse material, and terrorist content. It would also impose new transparency requirements on social media companies, forcing them to disclose advertising revenue and publish regular transparency reports. Despite initial support from civil society groups like Coalizão Direitos na Rede, the bill stalled in late 2023 after fierce opposition from tech companies and far-right groups.
Brazil remains at the forefront of AI regulation with Bill 2338/2023. This human rights-based approach to AI governance acknowledges AI’s potential risks, including discrimination, environmental harm, surveillance, and job displacement, and aims to foster innovation while maintaining crucial safeguards for human rights, setting a potential model for international cooperation. In a key step forward, the Senate approved the bill’s substitute report and sent it to the Chamber of Deputies for review. If the Chamber amends the bill, it returns to the Senate for final approval.
Brazil’s presidency of the G20 in 2024 placed AI and information integrity at the center of international discussions. At home, debates continue over possible updates to the Marco Civil, especially with a planned constitutional review of Article 19 and other cases related to content moderation and platform liability. With presidential elections approaching in 2026, digital policy is set to take on even greater importance, particularly as lawmakers prepare to review the Electoral Code before the end of 2025.
Deepfakes During the Municipal Elections
Brazil’s 2024 municipal elections marked an important milestone in AI regulation: for the first time, the country held elections governed by an electoral directive that prohibited the dissemination of deepfakes. The Superior Electoral Court (TSE) signed memorandums of understanding with Meta, TikTok, LinkedIn, Kwai, X, Google, and Telegram, securing the companies’ commitment to work with Brazilian authorities to remove disinformation from their platforms. Research by NetLab and DFRLab highlighted the need to improve AI labeling and make it systematically collectable for research purposes. Their findings also showed that self-declaration of AI use on social media platforms is insufficient and that algorithmic review techniques are ineffective.
Spotting electoral deepfakes on social media requires constant monitoring. The platforms studied (X, Facebook, Instagram, and YouTube) do not allow researchers to collect data from the labels that indicate content was AI-generated. One solution proposed by NetLab and DFRLab is to create a direct line to fact-checkers, such as Agência Lupa’s public WhatsApp account, where people can send content circulating on social media for verification. Improving platforms’ internal search systems with tools like TrueMedia would also help identify AI-generated content more effectively.
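To illustrate what “systematically collectable” AI labeling could mean in practice, here is a minimal, purely hypothetical sketch in Python. No platform studied by NetLab and DFRLab exposes such a field today; every name in the payload below is invented for illustration, not drawn from any real platform API.

```python
import json

# Hypothetical example of a machine-readable AI-disclosure field in a
# platform's researcher API. All field names here are illustrative only;
# no platform studied currently exposes such data.
post_payload = """
{
  "post_id": "123456789",
  "published_at": "2024-09-15T14:32:00-03:00",
  "ai_disclosure": {
    "labeled": true,
    "label_source": "self_declared",
    "media_types": ["audio"]
  }
}
"""

post = json.loads(post_payload)
disclosure = post.get("ai_disclosure", {})

# With a field like this, researchers could filter and aggregate labeled
# content at scale instead of inspecting on-screen labels post by post.
if disclosure.get("labeled"):
    print(f"Post {post['post_id']}: AI use declared "
          f"({disclosure['label_source']}, {disclosure['media_types']})")
```

The point of the sketch is simply that a label visible on screen but absent from the data researchers can collect does nothing for systematic monitoring.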
While civil society closely watched the use of AI to spread disinformation, AI was not deployed on a massive scale in these elections. It still had a serious impact, however, particularly on women politicians targeted by deepfakes. During the 2024 municipal elections, AI was used in several ways to spread false information, including creating deepfakes, generating political jingles, and producing deepnudes of women candidates. This highlights the need for stronger regulations to prevent widespread damage.
Political Gender-Based Violence
During the mayoral elections, women were the primary targets of online violence. In São Paulo, candidates Tabata Amaral and Marina Helena faced intensified online attacks: on YouTube and X, they received three times as many attacks as their male counterparts. Over 80% of the posts containing gender-based violence studied by Democracy Reporting International (DRI) and Fundação Getúlio Vargas (FGV) aimed to undermine women’s role in politics. FGV’s research also found that left-wing women candidates were targeted more frequently than their right-wing counterparts.
“The cost of producing a deepfake during elections against a woman is zero in this country. We presented all the evidence, documented, and the most we managed to achieve was that some videos were removed after a few days. These videos were already in people’s phones and in their WhatsApps,” said Tabata Amaral. “It’s impunity that explains all of this. And do I think that the social media platforms are responsible for this process? One hundred percent!”
YouTube hosted misogynistic and transphobic content targeting women candidates in both urban and rural areas of Brazil. A comprehensive study by MonitorA on online attacks across the country showed that most attacks against women candidates sought to portray them as inferior, while spreading misogyny and undermining their intelligence.
Several candidates, including Loreny Caetano and Suéllen Rosim, were victims of deepnudes generated using AI. A report by the AI Observatory in Elections highlights how deepnudes were used to reinforce the gender-based violence already faced by women in politics. As Yasmin Curzi, a professor at FGV Rio Law, warns,
“Women won’t feel comfortable participating in politics unless protective measures are put in place. This causes generational harm.”
Curzi’s research also points to a broader issue with content moderation: women in politics, including journalists and activists, often see their content blocked or deleted due to mass reporting. These coordinated campaigns aim to derail discussions and silence women politicians. An important first step, she says, is to implement effective measures to combat political gender-based violence in agreements with Big Tech platforms. These efforts should involve collaboration with local fact-checkers and civil society organizations.
AI is bound to play an even larger role in Brazil’s 2026 presidential elections, and the use of deepnudes against women politicians must be addressed, not only by the TSE but also through stronger platform regulations, particularly more robust content moderation. Given that women, especially those who challenge the status quo, such as trans, racialized, and left-wing women, are disproportionately targeted by tech-driven harms during elections, platform policies must adopt a feminist approach. Failing to do so risks creating a future in which generations of women politicians are blocked, discouraged, and harassed out of political spaces.
Big Tech Preparedness and Transparency
Big Tech giants are struggling to match their public promises with effective action to protect Brazil’s election integrity, according to recent investigations by leading research institutions. Despite high-profile commitments and agreements with electoral authorities, companies like Meta, Google, and TikTok have shown significant gaps in their ability to combat disinformation and regulate AI-generated content.
“The question of transparency is central,” says Bia Barbosa from Reporters Without Borders (RSF). “It’s actually one of the main points that we in Brazil always include in regulatory attempts that exist — access to information for researchers and civil society.” Barbosa highlights how limited access to platform data has hampered effective oversight. These restrictions have grown more severe over time, with researchers noting that conducting analysis “was more complicated in technical terms than two years ago because of changes in access to social networks.” The challenge became particularly evident in the platforms’ response to transparency requirements during the election. “What happened was that before the electoral process began, some platforms said ‘well, then we won’t allow [political content] so we don’t have to create an ad library,’” explains Carla Vreche from Conectas, describing how Google and others chose to restrict political content ahead of the elections rather than comply with transparency measures mandated by Brazil’s electoral court.
Studies by NetLab UFRJ (Federal University of Rio de Janeiro), DFRLab, Aláfia Lab, and Data Privacy Brasil also uncovered widespread inconsistencies in the platforms’ enforcement of their own policies. A stark example emerged when Google’s AI system Gemini continued providing information about political candidates despite the company’s announced restrictions on political content.
The municipal elections also revealed new challenges around paid digital influencers. The case of Pablo Marçal in São Paulo highlighted how platform monetization features can be exploited for political gain. Through his extensive digital following, Marçal organized viral content competitions in which followers could win cash prizes, practices that Brazilian electoral legislation may treat as abuse of political power and misuse of the media. As Carla Vreche observed, “He exposed the problem with the platform business model — hate speech and disinformation that affects our electoral system integrity are being monetized, with people producing this content getting paid by platforms because it goes viral.” This case showed how influencers-turned-politicians can exploit platform algorithms and reward systems in ways that traditional media figures cannot, since established TV personalities must observe a quarantine period before running for office. The campaign also revealed how third parties could bypass official campaign channels, as when the make-up artist of Marçal’s wife promoted campaign content, highlighting the difficulty of enforcing transparency rules in digital spaces.
On February 21, 2025, Brazil’s Electoral Justice ruled Pablo Marçal ineligible to hold public office for eight years. Judge Antonio Maria Patiño Zorz of São Paulo’s 1st Electoral Zone found Marçal guilty of abuse of political and economic power, improper use of the media, and illicit fundraising during his 2024 São Paulo mayoral campaign. The ruling came after investigations revealed that Marçal had sold his political support to city council candidates in exchange for R$5,000 (around US$800) campaign donations via Pix transfers, a practice documented in Instagram videos and promoted through registration forms. The court determined that Marçal had spread misinformation about the electoral fundraising system and conducted negative campaigning against opponents.
The technical infrastructure for monitoring election integrity is fundamentally flawed, notes a recent analysis from NetLab UFRJ and DFRLab. The research points to critical weaknesses in platform APIs and monitoring tools, particularly in tracking AI-generated content and political advertising. Meta’s much-touted Ad Library, while offering some transparency, lacks crucial functionality to identify AI-generated content. The situation has worsened with the shutdown of CrowdTangle, which researchers had previously relied on for in-depth analysis.
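Meta’s Ad Library, unlike CrowdTangle, does still expose a public API for political and issue ads, and a minimal sketch of the kind of query researchers run against it might look like the following. The endpoint and parameter names follow Meta’s public documentation as I understand it and should be verified before use; the access token is a placeholder. The sketch also illustrates the gap the researchers describe: none of the returned fields indicates whether a creative is AI-generated.

```python
import requests

# Minimal sketch: querying Meta's Ad Library API for political ads that
# reached Brazil. Endpoint and parameters follow Meta's public docs but
# should be verified; ACCESS_TOKEN is a placeholder for a valid token.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
URL = "https://graph.facebook.com/v19.0/ads_archive"

params = {
    "search_terms": "eleições",
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": '["BR"]',
    # Note: no available field flags AI-generated creatives, which is
    # precisely the missing functionality NetLab UFRJ and DFRLab cite.
    "fields": "page_name,ad_creative_bodies,ad_delivery_start_time,spend",
    "access_token": ACCESS_TOKEN,
}

response = requests.get(URL, params=params, timeout=30)
response.raise_for_status()
for ad in response.json().get("data", []):
    print(ad.get("page_name"), ad.get("ad_delivery_start_time"))
```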
“What concerns us is that they analyze content individually, but don’t assess what can happen in terms of silencing journalists who receive 300 posts with this type of comment in one day,” notes Bia Barbosa, highlighting how platforms fail to consider the cumulative impact of coordinated harassment campaigns. This systemic weakness particularly affected women journalists, who faced disproportionate attacks during the election period.
The challenge of identifying AI-generated content has become particularly acute in Brazil’s political landscape. A recent controversy in Fortaleza highlighted these difficulties when experts failed to conclusively determine whether a disputed audio message was artificially generated. In another revealing case, Bruno Reis, a mayoral candidate in Salvador, posted AI-generated videos across multiple platforms but labeled them as AI-created only on Instagram, exposing an inconsistent approach to content transparency.
Agreements between tech companies and Brazil’s Superior Electoral Court (TSE) have revealed structural weaknesses in oversight mechanisms. While these memoranda of understanding established procedures for handling complaints through the Integrated Center for Confronting Disinformation and Defending Electoral Democracy (CIEDDE), they notably lacked requirements for proactive detection tools. NetLab UFRJ and DFRLab researchers concluded that Google’s blanket ban on political content not only failed to achieve its intended goals but actually reduced transparency by hampering systematic monitoring. For Bia Barbosa, the memorandum of understanding between the platforms and the TSE during the elections “works very little, though it’s better than nothing.”
The different approaches taken by the platforms created an uneven playing field. Meta opted for a more permissive stance with enhanced transparency requirements, while Google and TikTok maintained strict bans on political advertising. This fragmentation in policy approaches, though each complied with TSE regulations, created potential gaps in the overall election information ecosystem that bad actors could exploit.
Significant investments in integrity measures have not closed the gap: Meta claims to have deployed 40,000 people globally to work on safety and security since 2016, yet implementation has fallen short. TikTok’s dedicated election monitoring systems and other platforms’ safety measures have likewise struggled to keep pace with evolving challenges, particularly in detecting and verifying AI-generated content.
The research findings come at a critical time as Brazil grapples with the intersection of artificial intelligence and electoral integrity. With deepfakes and synthetic media becoming increasingly sophisticated, the gap between Big Tech’s technological capabilities and the challenges of maintaining election integrity continues to widen.
“Even with this smaller number [of attacks compared to national elections], it’s a serious situation that generates silencing from the press about external attacks that happen in campaign situations,” Barbosa noted, emphasizing how even reduced levels of harassment can significantly impact election coverage. The findings underscore the growing disconnect between Big Tech’s stated commitments and their ability to protect democratic discourse in an evolving digital landscape.
Lessons for Brazil’s 2026 Presidential Elections
2024 was a big year for tech-related debates in Brazil. Amid ongoing discussions in Parliament and the Electoral Court on updating the regulatory frameworks for platform governance and artificial intelligence, the government held the pro tempore presidency of the G20 and introduced information integrity as one of the core themes on the forum’s agenda. While the summit brought calls for more transparency from Big Tech, along with discussions on regulating AI, harnessing its potential for societal benefit, and combating climate disinformation globally, the domestic landscape still faces significant challenges.
InternetLab research coordinator Iná Jost points to the nuanced reality of electoral resolutions in Brazil’s digital regulation landscape. “Electoral resolutions end up innovating a lot in terms of electoral propaganda — they are both a charm and a liability,” she notes, explaining that these rapid regulatory responses, while nimble enough to address the fast-changing tech ecosystem, raise concerns about democratic accountability because they are drafted within cabinet offices without broader public input. She highlights the 2024 AI regulations as an example of this dynamic: the Electoral Court successfully implemented rules on deepfakes and AI labeling requirements, while also demonstrating how heavily the effectiveness of such resolutions depends on the technical expertise of the officials crafting them.
Despite efforts by social media companies and electoral authorities to further regulate digital spaces during the 2024 elections, Brazil saw continued tech-related harms: deepfakes and online attacks targeting women journalists and candidates remained prevalent. As the country enters 2025, the questions remain: what updates will be made to the Electoral Code, set to be reviewed by Parliament this year, and will the AI Act be approved? The Supreme Court’s pending rulings on platform governance could also prompt amendments to the Civil Rights Framework for the Internet.
The failure to pass the “fake news” legislation (PL 2630/2020) and X’s months-long refusal to comply with its obligations highlighted the ongoing tensions between tech platforms and Brazilian authorities. Carla Vreche points out that the “judiciary is mostly acting because Congress failed to fulfill its role due to political disputes. If we had platform regulation, if Bill 2630 had passed, most of these details that need to be addressed in TSE resolutions or Supreme Court discussions would already be resolved.” Without more robust legislation, platforms have relied on self-regulation and voluntary commitments, which have not been enough to protect the digital civic space. Furthermore, a scattered approach to content moderation across platforms, from Meta’s transparency requirements to Google’s and TikTok’s outright bans, creates an uneven playing field that could affect future electoral cycles. This regulatory gap has forced judicial authorities to step in, underscoring the need for stronger enforcement tools and clearer obligations for platforms operating in the country.
Looking ahead to the 2026 presidential elections, civil society organizations are emphasizing the need for stronger institutional frameworks. “There’s a significant concern because the process of conducting hearings and gathering suggestions from civil society for electoral resolutions is not mandatory,” warns Carla Vreche, cautioning that changes in electoral court leadership could affect civic participation in rule-making. On the other hand, the upcoming parliamentary review of the Electoral Code and the pending AI Act present opportunities to strengthen the country’s digital democracy framework, particularly in addressing emerging challenges like AI-generated content and technology-facilitated gender-based violence (TFGBV).
The municipal elections also exposed gaps in Brazil’s ability to track and verify digital threats to electoral integrity and press freedom. As researchers from RSF, the Internet and Data Science Laboratory (Labic) at the Federal University of Espírito Santo, and the Institute for Technology and Society of Rio (ITS-Rio), working together as part of the Coalition in Defense of Journalism (CDJor), found in their analysis of press freedom during the 2024 municipal elections:
“When we use hashtags to find content against the press or when we use combinations of terms through artificial intelligence, we can conclude there’s content attacking the press, but platforms analyze content individually, not systemically.”
These continuing restrictions on content analysis and platform monitoring have made oversight increasingly difficult. Research institutions, civil society organizations, and other stakeholders looking to gather information on electoral processes are losing direct access to platform data. ITS-Rio’s experience is telling, notes Barbosa: the institute “used to do data collection themselves, but now with the changes that happened on X, they can no longer do it,” forcing organizations to rely on a shrinking pool of institutions with platform access credentials.
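The systemic, corpus-level analysis CDJor describes, combining hashtags and hostile-term lists across many posts rather than reviewing each post in isolation, can be sketched in a few lines of Python. The term lists and posts below are invented placeholders, not CDJor’s actual lexicon or data, and real pipelines would of course use far richer models.

```python
from collections import Counter

# Illustrative sketch of corpus-level detection of anti-press content:
# flag posts that combine a hashtag with a hostile term, then aggregate
# by target. Term lists and posts are invented placeholders.
HASHTAGS = {"#imprensagolpista", "#midiamentirosa"}
HOSTILE_TERMS = {"mentirosa", "comprada", "inimiga"}

posts = [
    {"author": "user1", "target": "@journalist_a",
     "text": "#midiamentirosa essa repórter comprada de novo"},
    {"author": "user2", "target": "@journalist_a",
     "text": "ótima reportagem, parabéns"},
]

def is_attack(text: str) -> bool:
    words = set(text.lower().split())
    return bool(words & HASHTAGS) and bool(words & HOSTILE_TERMS)

# Individually, each post may evade moderation; counting per target is
# what reveals the cumulative, coordinated pattern researchers describe,
# such as a journalist receiving hundreds of such posts in a single day.
attacks_per_target = Counter(
    p["target"] for p in posts if is_attack(p["text"])
)
print(attacks_per_target)  # Counter({'@journalist_a': 1})
```

The contrast with platform moderation is the aggregation step: platforms evaluate each post on its own, whereas the per-target count is what surfaces silencing campaigns.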
Looking forward, Brazil needs a balanced approach that combines stronger regulation with innovative solutions. Civil society groups are pushing for digital attacks to be recognized as serious threats to press freedom and democratic discourse, while calling for more robust enforcement mechanisms. As Barbosa notes, “When we analyze attacks on journalists during elections, we’re not just looking at individual rights violations, but at attacks on society’s right to information in the electoral context.” Similarly, political gender-based violence on social media targeting women politicians causes widespread, generational harm, with a chilling effect on women’s political participation.
As Brazil prepares for the 2026 presidential elections, the lessons learned from the 2024 municipal elections will be crucial to shaping the country’s democracy. While the government and civil society navigate the intersections of AI, platform regulation, and electoral integrity, digital rights advocates worldwide are keeping a watchful eye on what unfolds in Brazil. The path forward requires not just regulatory updates but a comprehensive approach that balances innovation with protection, transparency with privacy, and technological advancement with democratic values.