Vladimir Cortés Roshdestvensky
Mexican journalist David Morales watched his news outlet’s Facebook reach plummet overnight. Chiapas Sin Censura had built a following, delivering 100 million views monthly to its communities. Then came an unexplained infraction for “fraud” on a year-old post. No warning. No meaningful appeal. Within weeks, traffic collapsed to 28 million. Monetization vanished. Twelve years of work hung by a thread.
In Chile, cannabis activist Paola Díaz documented a 92% drop in the visibility of her activist content on Instagram, while her lifestyle posts circulated freely. Alesia Lund, a Peruvian educator specializing in comprehensive sexuality education (CSE), saw her illustrations about reproductive health fall from 15,000 views to barely 300. Argentinian journalist Sebastián Lacunza discovered his X (formerly Twitter) account had become invisible, disappearing from search results even when users typed his exact name.
These aren’t isolated glitches. From Santiago to Mexico City, a new investigation by OBSERVACOM, the Forum on Information and Democracy, and Digital Action reveals a systematic pattern: shadow-banning has become the preferred tool of digital censorship across Latin America. The same algorithmic suppression affects Palestinian activists like our SWANA regional director, Mona Shtaya, whose Instagram posts dropped from 2,600 views to 200 overnight, and communities of indigenous Quechua speakers whose content gets systematically buried. These and countless other voices find themselves digitally hidden for challenging power or discussing topics platforms deem ‘problematic’.
Censorship without fingerprints
Shadow-banning operates silently. Unlike traditional content moderation, where posts are removed or accounts suspended with notification (although in some cases even without a clear explanation), this practice quietly strangles visibility. Your content remains technically live. You receive no warning, no explanation, no notification that anything has changed. But the platform’s algorithm ensures almost nobody sees what you post.
The mechanics are insidious: reduced distribution in feeds, exclusion from search results and hashtags, removal from recommendation algorithms, and posts hidden even from existing followers. Users continue creating content, pouring energy and resources into work they don’t realize has been rendered invisible. They’re speaking into a digital void.
The cost of opacity
What makes shadow-banning particularly pernicious is its deliberate lack of transparency. Meta acknowledges in its policies that it reduces the distribution of “problematic content”, a vague, expansive category encompassing everything from clickbait to posts from creators who “repeatedly violate” unspecified standards. Yet the company offers no notification to affected users and no straightforward appeals process for these restrictions.
X claims to label posts with limited visibility and allow review requests, but provides no detailed procedures for how users can actually access these supposed protections. In practice, those affected remain entirely in the dark.
The psychological toll cuts even deeper. “I live in fear,” confessed Paula Labra, whose inclusive lingerie business faces constant algorithmic suppression. “Before falling asleep, I always pray my account won’t disappear.” She pays US$45 a month for Meta Verified “just out of fear”, more than US$500 a year for protection that doesn’t materialize.
Alesia, from Emma y Yo, an Instagram account devoted to sexual education, described reaching her breaking point after years of building educational resources about sexual health: “I’ve been about to throw in the towel and say, fuck it, let the material stay there, it’s over, I don’t want any more.”
David from Chiapas Sin Censura revealed a chilling adaptation: his outlet now self-censors stories about violence, vulnerable children, or organized crime. “Sometimes we say: this story isn’t worth an infraction. Even though the event is real, we prefer not to publish it, because it could harm us more.”
When platforms can silently disappear content about human rights, sexual health, drug policy reform, or international conflicts—all matters of clear public interest—without explanation or recourse, they’re not moderating content. They’re controlling public discourse.
Demanding light in the shadows
Shadow-banning violates fundamental principles enshrined in international human rights law. The Inter-American Court of Human Rights has established that expression and its dissemination form an indivisible whole; you cannot meaningfully exercise freedom of expression if your speech reaches nobody.
The investigation by Digital Action, the Forum on Information and Democracy, and OBSERVACOM documents these practices and demands accountability. Platforms must provide transparent policies on visibility reduction, notify users when restrictions are applied, offer clear explanations for their decisions, create accessible and effective appeal mechanisms, and ensure human review for content of public interest.
Until Big Tech embraces genuine transparency, shadow-banning will continue undermining the very possibility of democratic discourse in digital spaces. Voices that challenge power, educate communities, and expand public debate deserve to be heard, not to disappear into algorithmic shadows without a trace or recourse.
Read the report here (in English). Find its Spanish version here and its Portuguese version here.