Digits and Contraptions

The cynicism of the AI giants

Anthropic, Meta and OpenAI, the three big companies in the sector, bet on accelerating the industry without paying attention to possible risks

10/04/2026

This week Anthropic presented an AI it has decided not to sell because it considers it too dangerous; Meta launched the first closed model in its history, abandoning the open-source commitment it had preached as a moral obligation; and OpenAI is preparing GPT-6 while a journalistic report uncovers highly questionable practices by its CEO. Together, the three paint the portrait of an industry in a race without brakes, incapable of governing itself or uninterested in doing so.

Zuckerberg's confession

On Cleo Abram's podcast, Mark Zuckerberg made a revealing statement: "Social networks were born as a space where people interacted with their friends. Now, at least half of the content is people interacting with creators." He said it in a neutral tone, like someone giving the weather forecast, but he was describing one of the most profound social transformations of the last two decades: without asking your permission, the algorithm has replaced your friends with strangers, human or synthetic, who post content that hooks you more. The placidity with which Zuckerberg states this, without acknowledging any social cost, is unsettling.


And while he describes this replacement of friends with strangers, he is already preparing the next phase: AI can build strangers more attractive than any real human, with personalities fabricated from scratch, for you to welcome as if they were lifelong acquaintances. The only goal is for you to spend more time watching ads.

This cynicism seems unstoppable, and the launch of its new AI, Muse Spark, confirms it. In July 2024, Zuckerberg wrote in a manifesto that open-source AI represented "the world's best opportunity to leverage this technology and create safety for everyone." Eighteen months later, having laid off the Llama team and left that family of models orphaned, Meta is launching the first closed model in its history. And the result, despite the investment, is not substantially superior: a Meta executive acknowledges to Bloomberg that their model "is not yet as capable as ChatGPT, Claude, or Gemini." Nevertheless, Meta's shares rose by almost 10%.

The Altman case: cynicism by system

If Zuckerberg's cynicism is that of a social engineer shamelessly describing what his products do to people, Altman's is on another level altogether. Ronan Farrow and Andrew Marantz have published a profile of him in The New Yorker that uncovers highly questionable practices.


In November 2023, OpenAI's board, with the backing of chief scientist Ilya Sutskever, fired Altman over safety concerns. However, pressured by investors and employees, it rehired him just five days later. Sutskever, for his part, left in May 2024 to found Safe Superintelligence (SSI), a company devoted exclusively to developing safe AI without commercial pressure. Just over a year later, SSI rejected an acquisition offer from Meta. That the scientist who tried to stop Altman for safety reasons refused to be absorbed by Zuckerberg for the same reasons shows how thoroughly distrust has become the sector's natural ecosystem.

Farrow and Marantz asked Altman whether running an AI company requires a higher level of integrity than the norm. Until then, his answer had always been a resounding yes. Now, however, he answers evasively: "I think there are many businesses that have a huge potential impact, both good and bad, on society." OpenAI's press department hurried to send the journalists an amendment: "Yes, it requires a high level of integrity, and I feel the weight of responsibility every day." This last-minute correction says a great deal about how sincere the first answer was. Along the way, OpenAI has dissolved its existential-safety teams, closed the group responsible for preparing society for the arrival of advanced AI, and the word safety has disappeared from the activity reports the company files with the tax authorities.

You too, Anthropic?

Cynicism, to a lesser but still present degree, even reaches Anthropic, considered the good actor in the sector since its clash with the Trump administration on ethical grounds. Last Tuesday it presented Claude Mythos, its most powerful AI to date and the first it is not making available to the public. During testing, Mythos found thousands of cybersecurity holes in the main operating systems on the market, some of which had been open for decades. The model has almost taken on a life of its own: it escaped from a supposedly secure environment, published information on public websites without being asked to, and in some runs it bypassed its instructions and tried to hide it by rewriting its own history.


The decision not to commercialize Mythos is being presented as the most visible act of responsibility yet by an AI company. Nevertheless, Anthropic has deployed it within a closed circle that includes Apple, AWS, Cisco, Google, and Microsoft; it has privately warned the US government that the model makes massive automated cyberattacks far more likely this year; and it boasts of its responsibility. In other words: it has created a weapon and built a virtuous narrative around the decision not to sell it to everyone. Dario Amodei, Anthropic's co-founder and CEO, published a long essay in January warning that AI companies could influence the beliefs of millions of users. He has kept building AI models all the same.

A race with a large gap

The three cases share a structure in which cynicism is the most logical position. No AI lab has an incentive to slow down: whoever does misses the boat. And investors don't seem too bothered: OpenAI has just raised another 122 billion, despite spending 1.35 euros for every euro it bills.


Andrej Karpathy, co-founder of OpenAI and former director of AI at Tesla, has described in a tweet a growing gap between two groups of people. On one side, the general public, who approached AI through the early free models, ran into their limitations, and kept that first impression. On the other, professionals who use advanced AIs daily for technical tasks and who experience what Karpathy calls a collective psychosis: the improvements in programming, research, and mathematics have been extraordinary. If you watch one of these models work, "you can see how it solves problems in minutes that used to take days or weeks." The gap between the two groups of users keeps widening.

The AI labs, in fact, are well served by this gap. While the general public downplays the risks, remembering chatbots that used to mix up countries, dates, or people and no longer do so as often, professionals know that current capabilities are already transforming entire sectors. And those leading the race warn of the risks of accelerating, yet never lift their foot off the accelerator.