Digits and contraptions

Sánchez goes after X, Meta, and TikTok over the child pornography their AIs produce: can he get away with it?

Spain orders criminal investigation of the three big platforms while Europe continues to discuss whether scanning everyone's messages is a good idea

20/02/2026

Barcelona – Last Tuesday, the Council of Ministers decided enough was enough: after months of watching the artificial intelligence tools of big tech companies become automatic generators of child pornography, the Spanish government has ordered the State Attorney General's Office to investigate X, Meta, and TikTok for possible crimes of sexual abuse of minors. Let no one say they weren't warned: when users of Grok, Elon Musk's AI, created 3 million sexualized images in 11 days – 23,000 of which showed minors – perhaps someone in Silicon Valley could have suspected that things weren't going well.

Spain thus becomes one of the first countries in the world to take direct legal action against platforms over AI-generated CSAM – the English acronym for child sexual abuse material, used because it seems no one wants to stick their neck out and plainly say "child pornography." Government spokesperson Elma Saiz has been explicit: "We cannot allow algorithms to amplify or enable these crimes." She's right, but they're already late: the Internet Watch Foundation – IWF, a British non-profit that does the work tech companies don't want to do – found thirteen AI-generated videos of child abuse in 2024; in 2025 there were already 3,440. In a single year, the figure multiplied 263-fold. And on top of that, two out of every three were category A, the British classification for "child rape and torture." Welcome to the future.

When your AI acts like a pedophile

The problem is that creating these images is remarkably simple. Anyone with a computer can download Stable Diffusion – a perfectly legal open-source AI program – and, with a few photos of a minor taken from Instagram, generate thousands of sexual images in minutes. All of it offline, leaving no trace. According to the IWF, 90% are indistinguishable from real photographs. And we're not just talking about still images: in 2025, AI-generated videos skyrocketed.

In 2024, the aforementioned foundation found this material on 42 websites; in the first half of 2025 alone, on more than 200. There are forums on the dark web where a single link gives access to more than 20,000 AI-generated images, available by monthly subscription, like Netflix. And let's not forget Telegram, with bots that automatically generate "custom" material in exchange for cryptocurrency. Pavel Durov, Telegram's CEO, was arrested in Paris last August partly for this reason. But the service keeps operating as if nothing had happened.

Sánchez, digital champion of minors

Pedro Sánchez's government wants to become the great defender of minors' digital protection. An organic law is already making its way through Congress that will bar access to social networks before the age of sixteen and will criminalize the creation and dissemination of pornographic deepfakes. In November, the Spanish Data Protection Agency imposed the first European sanction for AI-generated deepfakes: the Almendralejo (Badajoz) case, in which 15 minors created fake nudes of at least twenty girls. Now, with this new order to the Prosecutor's Office, Spain positions itself at the forefront of the fight against AI-generated CSAM.

In any case, it's not enough to focus all the blame on X, Meta, and TikTok. It makes perfect sense to accuse them and force them to curb the excesses of their algorithms: they are the ones who put image generation tools just one click away from their hundreds of millions of users. Furthermore, the hypocrisy is evident: the same platforms that censor photos of breastfeeding women because a nipple is visible offer, without batting an eyelid, tools to generate child pornography. But the problem goes beyond social networks: Stable Diffusion is open-source and can be downloaded for free. No platform controls it. No government can ban it.

Technology that works (when they want to use it)

Detection tools exist and work. PhotoDNA, developed by Microsoft in 2009, is the industry standard: it converts each image into a digital fingerprint that survives cropping and modifications, and compares it with databases of millions of known CSAM fingerprints. Google, Meta, Microsoft, and Discord use it. The most complete platform is Safer, from the company Thorn: it combines 131 million verified fingerprints with AI classifiers that detect new material not present in any database.
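How does that fingerprinting work? PhotoDNA itself is proprietary, so the sketch below is not its algorithm but a minimal illustration of the same idea, using a classic "difference hash" in Python (with the Pillow imaging library). The function names and the distance threshold are assumptions for the sake of the example; what it shows is why a perceptual fingerprint survives cropping and recompression where an exact cryptographic hash would not.

```python
# A toy perceptual hash ("dHash") – an illustration of the fingerprint idea
# behind tools like PhotoDNA, NOT the real, proprietary PhotoDNA algorithm.
from PIL import Image  # pip install Pillow

def dhash(path: str) -> int:
    """Shrink the image to 9x8 grayscale and record whether each pixel is
    brighter than its right-hand neighbour: a 64-bit fingerprint that
    survives recompression, resizing and small edits."""
    img = Image.open(path).convert("L").resize((9, 8), Image.LANCZOS)
    px = img.load()
    bits = 0
    for y in range(8):
        for x in range(8):
            bits = (bits << 1) | int(px[x, y] > px[x + 1, y])
    return bits

def hamming(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return (a ^ b).bit_count()  # Python 3.10+

def matches_known(fp: int, known: set[int], max_distance: int = 5) -> bool:
    """Match by *similarity*, not equality: a cropped or watermarked copy
    lands a few bits away from the original, not in a different universe
    as it would with SHA-256. The threshold of 5 is an arbitrary choice."""
    return any(hamming(fp, h) <= max_distance for h in known)
```

Matching by distance instead of equality is precisely what lets the fingerprint survive cropping and modification; a real system like Safer matches against its 131 million verified hashes with indexing structures rather than this linear scan, but the principle is the same.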

In 2024, Safer processed 112.3 billion images and videos. And the approach works: Meta reported 30.6 million cases of CSAM to the U.S. National Center for Missing and Exploited Children in 2023. Apple, in contrast, reported 267. It's not that Apple users are more angelic; it's that Apple doesn't scan. And it's not that they haven't tried.

In 2021, Apple announced NeuralHash, a system for scanning iPhone photos before they were uploaded to iCloud. The idea was that if it found more than ten suspicious images, a human reviewer would be alerted; and if the reviewer confirmed it was child pornography, the user would be reported to the authorities.
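The escalation flow Apple described can be sketched in a few lines. The sketch below is hypothetical – Apple never published its implementation, and the real design wrapped all of this in cryptography so the server learned nothing below the threshold – but it captures the count-matches-then-review logic:

```python
# Hypothetical sketch of the threshold-and-review flow described above –
# NOT Apple's actual implementation, which was never published.
from dataclasses import dataclass, field

MATCH_THRESHOLD = 10  # "more than 10 suspicious ones", per the announcement

@dataclass
class UploadScanner:
    known_fingerprints: set[int]           # database of known-CSAM hashes
    flagged: list[str] = field(default_factory=list)

    def scan_before_upload(self, photo_id: str, fingerprint: int) -> None:
        """Run on-device for each photo before it goes to the cloud.
        A real system would match by perceptual distance (see the
        previous sketch), not strict equality."""
        if fingerprint in self.known_fingerprints:
            self.flagged.append(photo_id)

    def needs_human_review(self) -> bool:
        """Nobody is alerted below the threshold; above it, a human
        reviewer checks the flagged photos, and only confirmed material
        is reported to the authorities."""
        return len(self.flagged) > MATCH_THRESHOLD
```

The scheme's weak point is the fingerprint itself: if different images can be forged to share a fingerprint, innocent photos can trip the counter – which is exactly where the story went next.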

However, it took researchers only a few hours to find holes in the system – among them, ways to craft pairs of different images with identical NeuralHash fingerprints – and 14 of the world's top cryptographers published a devastating paper against it. Apple abandoned the project 15 months later. Now the company faces a billion-dollar lawsuit for having abandoned it. A real dilemma: if I scan your phone, I'm a spy; if I don't scan it, I'm an accomplice.

Europe, meanwhile, keeps debating

The European Union has spent four years negotiating the CSAR Regulation, which detractors call "Chat Control." The original proposal obliged all messaging platforms to automatically scan all messages. The European Parliament rejected it. The Council abandoned mandatory scanning. Now a "voluntary" model with judicial orders is being negotiated in a three-way dialogue – of the 27 member states, only Italy, Poland, the Czech Republic, and the Netherlands still oppose it. However, the temporary derogation that allows voluntary scanning expires on April 3. Signal and Proton have already threatened to leave Europe if they are forced to implement on-device message scanning.

However, while Europe debates, Europol acts. Operation Cumberland, last February, was the first major operation against AI-generated child pornography: 25 arrests in 19 countries and 273 suspects indicted. The main accused, a Dane, ran a platform that distributed exclusively AI-generated material. But arrests don't stop the growth: one in eight adolescents already knows someone who has been the victim of a sexual deepfake. And seven out of ten creators of this material found the tools on social media. Not on the dark web: on Instagram, on TikTok, on X.

Perhaps the problem isn't that we don't have enough laws or technology to detect it, but rather that we've created machines that generate child pornography with a couple of clicks, and then we're surprised that despicable individuals use them. But don't worry: we can distract ourselves with the umpteenth round of negotiations in Brussels.
