The Grok scandal and the AI we've normalized

A user of the Grok app, the artificial intelligence assistant created by xAI.

The nude images generated by Grok have become the symbol of a trend that goes far beyond one specific controversy. The ability to produce sexualized images of real women and girls, facilitated by an architecture deliberately designed to circumvent boundaries, is neither an accident nor a one-off excess. It is the coherent result of a type of artificial intelligence we have been promoting and celebrating for years.

It's important to note that this isn't a problem of misuse, but of technological choice. Grok embodies an AI model built on scale, speed, and provocation as its core values. Systems trained on massive amounts of data are launched into the public sphere with the implicit promise that any harm will be dealt with later. First, the system is deployed. Then, if necessary, it's adjusted. This logic isn't neutral. Nor is it inevitable.

Artificial intelligence doesn't learn in the abstract. It learns from societies riddled with deep inequalities. When these systems are fed data traces marked by patriarchy and other forms of structural discrimination, without context or ethical friction, what they produce in return shouldn't surprise us. Imaginaries of domination are amplified, the availability of women's bodies is treated as normal, and symbolic violence is translated into automated, repeatable, and scalable output.

The public debate has quickly shifted to how to limit, block, or sanction AI. But this understandable reaction leaves the most uncomfortable question unanswered: Why this type of AI? Why do we continue to rely on models that confuse learning with accumulation, intelligence with statistical repetition, and progress with unlimited growth? Why do we call "innovation" technologies that can only function smoothly by ignoring consent and respect for human rights?

Techno-solutionism insists that there will always be a subsequent technical solution. More restrictions. More filters. More geoblocking. But this controversy and the ensuing debate reveal something different. Once a technology exists, becomes integrated into the collective consciousness, and is accessible to millions of people, it doesn't disappear. It migrates. It adapts. It reappears in other formats, in other spaces, with less visibility and less friction. The problem isn't fixed. It's displaced.

Furthermore, we must not forget that this model of artificial intelligence is not only socially aggressive. It is materially unsustainable. It requires energy-intensive infrastructure, constant data extraction, and a logic of perpetual expansion. And what (almost) no one mentions is that there are other ways to develop AI. Smaller, more localized, more limited in scope, and more mindful of human rights and the climate. They don't dominate the market because they don't fit with a notion of success based on hyper-competition and immediate impact—an idea also closely linked to patriarchy and other prevailing systems.

Elon Musk's latest stunt hasn't revealed a flaw in the system. It's revealed the system itself. An artificial intelligence designed without memory, without clear limits, and without social responsibility reproduces, on a massive scale, inequalities we already know all too well. The risk isn't in a specific function or mode. It's in continuing to normalize a technology that treats violence as an externality and harm as an acceptable cost of progress.
