A Turning Point for Platform Responsibility in Brazil

Written by Vladimir Cortés Roshdestvensky. Spanish and Portuguese versions are linked below.

On June 26, 2025, Brazil’s Supreme Federal Court (STF) issued a landmark ruling that will reshape platform governance far beyond its borders. The Court held that Article 19 of the Marco Civil da Internet (Civil Rights Framework for the Internet), the country’s foundational internet regulation, is partially unconstitutional, effectively redefining the limits of platform liability and the country’s long-standing “safe harbour” provision. Rather than striking the rule down entirely, the STF deemed it insufficient to protect fundamental rights and democratic institutions and expanded its scope. The Court ruled that in certain serious cases, such as hate speech, child sexual abuse, and terrorism, platforms can and must act even without a prior judicial order. In doing so, the STF introduced new legal obligations and safeguards to ensure greater responsibility and more robust protection.

Although this ruling marks a significant advance in platform accountability, important gaps remain. The role of civil society and Brazil’s Congress will be essential to ensure the ruling is translated into clear, democratic and enforceable standards, especially to prevent freedom of expression from being curtailed by vague or overly broad interpretations, which could lead to the removal of legitimate content.

For over a decade, platforms like YouTube, Meta and X (formerly Twitter) operated under a “safe harbour” model: they were only held liable for illegal content if a court explicitly ordered its removal. This approach aimed to protect freedom of expression and prevent excessive moderation. But events like the January 8, 2023 riots and the widespread disinformation campaigns during the 2018 election, among others, exposed its limitations. Although the model was designed to safeguard public debate, it left serious accountability gaps. Platforms often failed, and in some cases continue to fail, to respond to serious harms such as disinformation and technology-facilitated violence, which disproportionately affect women, trans people, racialised communities and other historically marginalised groups. Some recent changes, such as those introduced by Meta, have even opened new doors to hate speech rather than closing them.

The challenge now is to strengthen platform responsibility without undermining free expression. That is precisely the gap the Supreme Federal Court is trying to close, and the timing is no coincidence.

During Brazil’s 2024 local elections, women in politics were disproportionately targeted by online violence. According to research by Democracy Reporting International and the Getúlio Vargas Foundation (FGV), over 80% of online gender-based violence aimed to discredit women’s political participation. Misogynistic and transphobic content, including AI-generated deepfakes and deepnudes, spread widely on YouTube, X and WhatsApp.

Brazilian Congresswoman Tabata Amaral described the situation plainly:

“The cost of producing a deepfake during elections against a woman is zero in this country… and platforms did nothing until days later, when the harm was already done.”

Against this backdrop, the STF’s ruling introduces a differentiated model of liability. In cases involving serious criminal offences, such as terrorism, hate speech, incitement to violence, gender-based violence, and child sexual abuse, platforms may now be held liable if they fail to act promptly, even without a prior judicial order.

The Court also established a presumption of liability in two key scenarios: (1) when unlawful content is amplified through paid promotions; and (2) when it is disseminated via bots or artificial networks. In such cases, platforms must prove they acted diligently and swiftly to avoid legal consequences.

This is not a system of strict liability. The STF was careful to clarify that platforms are not automatically responsible just because harm occurs. Instead, negligence, omission or “systemic failure” must be shown. This aligns with a growing global consensus: that platforms must exercise a duty of care when operating at scale and profiting from public communication.

The ruling also includes key safeguards. Platforms are required to implement internal appeals processes, notify users whose content is moderated, and publish annual transparency reports. These measures represent an important step towards ensuring that regulation doesn’t become a cover for arbitrary censorship.

But Brazil’s model is not a copy of Europe’s, nor of the United States’ hands-off approach under Section 230. It is better described as a hybrid model, rooted in Brazil’s constitutional commitment to both freedom of expression and the protection of dignity, equality and democracy.

Still, serious questions remain.

The ruling introduces ambiguous terms, such as “mass dissemination” or “systemic failure,” that may be unevenly interpreted. Without clear regulation, platforms may over-remove content to avoid legal risk. In a politically polarised environment, this could disproportionately affect marginalised voices, including those of activists, journalists, and women working in the public sphere.

Moreover, Brazil’s history of coordinated online campaigns, often aimed at silencing dissent or flooding reporting systems to trigger automated takedowns, shows how easily content moderation can be weaponised. Without robust oversight and public accountability, well-intentioned regulation can backfire.

That’s why the STF’s call for mandatory self-regulation cannot become a mere formality or a list of bureaucratic requirements. The Court requires platforms to establish accessible complaint mechanisms, appoint legal representatives in Brazil, and respond to judicial and administrative requests. But these measures will only be effective if they are designed and implemented together with civil society, especially with organisations working at the intersection of gender, race and digital rights, and if they are reinforced through a clear and participatory legislative process. As the STF itself rightly emphasises, Congress has a fundamental role to play in this new framework: it cannot be left on the sidelines if Brazil is to build democratic, legitimate and effective regulation of the digital ecosystem.

What Brazil is attempting is no minor reform: it seeks to move beyond the logic of passive intermediaries and towards a model of shared responsibility, where platforms can no longer profit from the amplification of harmful content without consequence. But for these shifts to be meaningful and sustainable, regulation must not be left solely to the judiciary or reduced to industry self-regulation.

Brazil’s Congress now has a crucial opportunity – and an obligation – to translate this judicial ruling into a robust legal framework, one that is transparent, rights-respecting, and responsive to the realities of those most affected by online violence and disinformation. As the STF itself noted, Congress cannot remain on the sidelines. Without democratic legitimacy and legislative clarity, even the most well-intentioned rulings risk becoming ineffective.

The world should pay close attention.

Emerging trends such as AI-driven disinformation, together with upcoming electoral cycles (Brazil will hold presidential elections in 2026), raise important concerns about the role of technology in shaping public discourse and preserving information integrity. In that context, Brazil’s model marks a critical inflexion point: controversial in some sectors, open to improvement in others, but nonetheless a necessary departure from the regulatory status quo.

Its long-term impact will depend on how the Court’s decision is translated into law, how it is enforced in practice, and whether it can be shielded from political misuse while being strengthened through democratic oversight.

The STF has sent a clear message: tech giants can no longer hide behind outdated shields while profiting from harm. The question now is whether Brazil and the world are ready to build a truly democratic digital future, grounded in accountability, justice, and public participation.

Read:

A Turning Point for Platform Responsibility in Brazil – Spanish Version

A Turning Point for Platform Responsibility in Brazil – Portuguese Version
