Digits and gadgets

Judicial siege in the US against Meta and YouTube's infinite scroll

Two historic rulings condemn platforms for addictive design. The courts open a crack in the legal immunity that has protected big tech for 30 years

27/03/2026

Barcelona. It has surely been the worst week for Mark Zuckerberg since the founding of Facebook. The tycoon has had to announce the dismantling of his virtual reality metaverse, after burning through 80 billion dollars on a bet the public received with indifference. While Meta makes a desperate pivot toward AI to save face, a breach has opened on the judicial front that threatens the entire business model of social networks: two US juries have issued rulings holding Meta and YouTube directly responsible for designing deliberately addictive products.

Engineering of dependence

The Los Angeles ruling sets a major precedent. The jury ordered Meta and YouTube to pay $6 million in damages (split 70% and 30% between them) to a young woman identified as KGM, who began using the platforms at the age of six and went on to develop depression, self-harm, and a pathological obsession with her own body image. A day earlier, in New Mexico, the financial blow had been far heavier: a $375 million fine for failing to protect children from sexual predators on Facebook and Instagram.


The core of the lawsuits is not the content users saw, but how the applications are built to keep users from closing them: amplified notifications, autoplaying videos and, above all, the infinite scroll. By removing any natural stopping point, this function keeps the brain from ever saying "enough". Even its inventor, Aza Raskin (son of Jef Raskin, creator of the Macintosh), has publicly regretted it on multiple occasions. "I made it possible for the industry to turn human psychology into a source of income," he declared. Indeed, a LinkedIn search shows that the digital giants have hundreds of psychiatry PhDs on staff. Raskin estimates that, translated into lost hours, the mechanism wastes the equivalent of 200,000 human lives every day.
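The design difference at issue is easy to see in code. The following Python sketch (a toy illustration, not any platform's actual implementation) contrasts a classic paginated feed, which ends and so offers a stopping cue, with an infinite-scroll feed, which is structurally a generator with no last page:

```python
import itertools

def paginated_feed(posts, page_size=10):
    """A classic paginated feed: each page ends, giving the reader
    a natural stopping cue before requesting the next one."""
    for start in range(0, len(posts), page_size):
        yield posts[start:start + page_size]  # one finite page at a time

def infinite_feed(recommender):
    """An infinite-scroll feed: a generator with no last page.
    As long as the recommender keeps producing items, scrolling
    never reaches a stopping point."""
    while True:
        yield recommender()  # always one more item below the fold

# Toy recommender: cycles through old posts forever, much as real feeds
# fall back on recycled or low-relevance content rather than ending.
posts = [f"post-{i}" for i in range(25)]
recycle = itertools.cycle(posts)
feed = infinite_feed(lambda: next(recycle))

# Only the reader's willpower, not the product, decides where to stop.
first_100 = list(itertools.islice(feed, 100))
```

The point of the sketch is that stopping is a design decision: the paginated version runs out of pages on its own, while the infinite version yields items for as long as anyone keeps asking.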

The internal evidence presented in the trials has been devastating. Documents from Meta itself reveal that the company knew it had four million users under the age of 13 (the theoretical minimum age in the US) and that 85% of the clinical experts it consulted considered the networks addictive. Despite this, Meta prioritized growth.

How to circumvent legal shielding

Until now, tech companies have relied on Section 230 of the Communications Decency Act of 1996, which exempts platforms from liability for what their users post. The plaintiffs' lawyers achieved a brilliant legal victory by seeking liability not for content but for product engineering. The argument the courts accepted is that Section 230 protects editorial activity (platforms are not responsible for what third parties publish) but not design decisions that make a product dangerous. If a car has faulty brakes, the manufacturer is liable; if an app captures minors' attention by design, so is its designer.


The case sets a precedent that thousands of pending lawsuits will leverage. TikTok and Snap, also sued, reached out-of-court settlements because the cost of a six-week trial is astronomical even for companies of their size. But not everyone celebrates the rulings without reservations. Analyst Mike Masnick argues that the winning legal theory will work well against companies nobody likes, but will inevitably extend to everyone else. In his view, it is naive to separate content from design: infinite scrolling is not harmful because it exists, but because there is content that makes people want to keep scrolling. "Removing content from the equation is like blaming forks for obesity," he says.

Furthermore, if internal documents where employees debate risks become incriminating evidence, no company will ever again allow anyone to write anything in an email. All legal advisors in Silicon Valley have learned the same lesson: ignorance is preferable to diligence. And, paradoxically, Meta and Google can afford decades of multi-billion dollar litigation, but their smaller competitors cannot: rulings intended to punish the giants could end up reinforcing their monopoly.

Corrective measures: too little, too late

Faced with pressure, companies had already made corrections before the trial. Meta launched Instagram's "teen accounts" in September 2024. In his statement, Zuckerberg said he would have liked to act "sooner," but shifted responsibility to Apple and Google, asking that the operating system manufacturers verify age. It's understandable: if the business depends on maximizing usage time to sell ads, any measure that reduces addiction directly attacks revenue.


The precedent will not stop at social media either: there are already lawsuits against OpenAI, Google, and Character.AI for dangerous design of their chatbots, and the winning legal principle in Los Angeles can be applied to any platform that makes design decisions affecting user health.

While the US imposes limits through years-long litigation, the European Union has opted for preventive regulation with the Digital Services Act (DSA), which obliges platforms to assess and mitigate risks to physical and mental well-being, under threat of fines that do not depend on any individual case. The Commission demands transparency in algorithms and prohibits targeted advertising aimed at minors. On top of this comes the AI Act, whose Article 5 explicitly prohibits systems that exploit users' psychological weaknesses to distort their behavior, a prohibition that directly targets recommendation algorithms designed to maximize retention over well-being. The European route is preventive but slow to enforce; the American route is reactive and unpredictable, but can produce rapid change.

How to protect minors at home

No court ruling will change overnight what happens in teenagers' rooms. Assuming that, beyond a certain age, depriving children of a phone is practically impossible, a gradual approach is recommended. The first step is strict configuration: Screen Time (iOS) and Family Link (Android) allow limits per application and rest periods during which the device becomes unusable. The second is the bedroom rule: not charging the phone in the bedroom prevents unsupervised nighttime use, the kind with the highest documented risk. The third is filtering at the home router to block risky content. The fourth is the "technology contract": a family pact that defines usage obligations and keeps the phone from becoming a bargaining chip or an instrument of punishment.
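The third step, router-level filtering, is usually done by pointing the router at a family-safe DNS service, but the underlying idea reduces to a blocklist lookup. A minimal Python sketch, with placeholder domain names and addresses invented for illustration:

```python
# Placeholder blocklist; real routers subscribe to curated category lists.
BLOCKLIST = {"adult-example.com", "gambling-example.net"}

def is_blocked(domain: str, blocklist=BLOCKLIST) -> bool:
    """Block a domain if it, or any parent domain, is on the list,
    so 'videos.adult-example.com' is caught along with the parent."""
    parts = domain.lower().rstrip(".").split(".")
    # Check every suffix: videos.adult-example.com, adult-example.com, com
    return any(".".join(parts[i:]) in blocklist for i in range(len(parts)))

def resolve(domain: str) -> str:
    """Toy resolver: a filtering router answers blocked queries with a
    sinkhole address and forwards allowed ones upstream (IPs are fake)."""
    return "0.0.0.0" if is_blocked(domain) else "203.0.113.10"
```

Matching on parent domains matters in practice: filtering only exact hostnames is trivially bypassed by subdomains, which is why DNS filters evaluate every suffix of the queried name.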


The least comfortable principle remains: leading by example. A teenager is unlikely to rein in compulsive scrolling while watching their parents stare at their phones at the table. The platforms have designed the product to be addictive for everyone; recognizing that as a family is the first step out of the trap that Aza Raskin has spent two decades lamenting he helped build.