By Mona Shtaya
In a digital ecosystem powered by Elon Musk’s alt-right populist vision, X and its AI arm, xAI, have become a breeding ground for hate. At the center of this storm is Grok — Musk’s so-called “truth-seeking” AI chatbot — which, far from being a neutral tool, was born out of Musk’s frustration with what he saw as the “political correctness” of ChatGPT and is now actively fanning the flames of hate and disinformation. On July 8, Grok published an inciting post that called for the killing of Javier Smaldone, a longtime Argentinian human rights activist already targeted by years of online abuse. Four days later, X announced it had disabled Grok’s functionality due to “increased abusive usage,” after Poland’s deputy prime minister said that his ministry would report Grok to the European Commission “for investigation and, if necessary, imposing a fine on X.”
Grok is not an exception — it is a direct outcome of Big Tech’s failure to self-govern, perpetuating long-standing, unresolved content moderation problems that are now being reproduced at far greater scale by the companies’ AI models. It is a crystal-clear example of platform-led generative AI being trained and deployed without ethical safeguards, transparency, public accountability, or the necessary risk assessments. Grok’s very architecture, embedded in X’s dysfunctional information ecosystem, ensures that harm is not just possible — it is normalized.
Grok’s real-time responsiveness — driven by its integration with X posts — makes it especially dangerous, given the platform’s rampant disinformation and hateful content. This is particularly concerning in light of new analysis showing that hate speech on X surged by about 50% in the months following Elon Musk’s acquisition of the platform in October 2022. The surge was fueled by sweeping changes to content moderation policies, mass layoffs of trust and safety staff, and the reinstatement of previously banned accounts. Under the banner of “free speech absolutism,” the measures taken by Musk and other Big Tech CEOs have allowed conspiracy theories, hate groups, and extremist rhetoric to flourish.
Yet this pattern is not limited to X and xAI. The same failures of content governance that plagued earlier generations of Big Tech companies — vague community standards, lack of transparency, opaque moderation processes, and profit-driven incentives — are now being supercharged through generative AI. Such platforms are no longer merely passive hosts of content; they are now active reproducers of harmful material through their AI chatbots. This case illustrates how insufficient tech accountability mechanisms allow Big Tech to amplify the inciting and harmful content that moderation teams once aimed to reduce. Only now, with AI, the speed and scale are exponentially greater, and the past models of accountability that allowed tech platforms to self-govern have clearly failed. Accountability is now almost nonexistent, especially in light of the increasingly open alignment between Big Tech and governments promoting digital authoritarianism at its worst.
The growing challenge of governing Big Tech as perpetrators of harm has caught the attention of some legal systems around the world. In June of this year, the Supreme Federal Court in Brazil expanded the legal liability of tech platforms for content posted by users, requiring them to proactively monitor and remove posts involving “serious crimes.” This raises the bar for tech accountability globally and makes it clear that platforms are not “just intermediaries.”
Almost two decades into the integration of social media and Big Tech into people’s lives, the idea that platforms can govern themselves — or that their AI tools can be left to regulate themselves — is no longer credible. What began as a failure of moderation has now clearly become a failure by design. This is a crucial moment that demands more from us than reactive fixes here and there. It is strikingly clear that we need robust, rights-based global regulation, led by democratic governments and multilateral bodies. Such regulation should establish enforceable transparency obligations, mandate risk assessments and independent audits of AI systems, prohibit harmful business models and monopolistic practices, and ensure meaningful accountability mechanisms. These standards must prioritize the public interest over profit, while remaining adaptable to local contexts.
In summary, we are calling on democratically elected governments to lead the creation of legally binding international standards that place human rights, public safety, transparency, and accountability above platform power. This is not optional — it is a core foundation we must lay now to ensure a just, equitable, and rights-respecting digital future.