ChatGPT
25/10/2025
2 min

A study by the European Broadcasting Union finds that the four main artificial intelligence assistants on the market distort 45% of the news they relay: they present it to users laced with value judgments, false attributions, or outright inaccuracies. In short, they wouldn't pass a competency test. A 45% error rate in any other service would be reason enough to abandon it for good, but something tells us that AI can breathe easy and that, even if it puts its foot in it almost half the time, it will keep working its way into our daily lives. First, because many people won't detect the error at all and will walk away happy with the answer in hand, especially if it reinforces what they already believed. Because if AI knows how to do anything well, it's how to flatter us and applaud everything we tell it. And second, because the convenience it offers is such that we're all but doomed to end up accepting that wide margin of error. Perhaps it's a fondness for dystopias, but it's tempting to imagine a novel in which humanity, after centuries of pursuing factuality and precision, finally develops a tool so all-encompassing that it prefers to settle for approximate knowledge, knowledge that confirms its biases, rather than have to do the work all over again.

(Good) journalism should be able to respond to this threat and, with all its imperfections, defend its method of offering the world a rational and coherent view. How? On the one hand, by fighting Big Tech so that it doesn't vampirize journalism's content (with a 45% error rate!). On the other, by being excellent enough that readers feel proud to spend their money defending an alternative model to that of AI and the oligarchs who control it. It's a frightening battle, but an inspiring one too.
