ChatGPT on a computer.
12/07/2025
3 min

In recent months, something has changed in the way we read without us even realizing it. And I say "read" and not "search for information on the internet" because I don't know a single writer or artist, even those in their seventies surrounded by a venerably analog aura, who doesn't log on to the internet every day to see how the world is doing. And it turns out the world is increasingly empty of people and increasingly full of stochastic parrots. "Stochastic parrot" is the nickname we use to try to deflate the authority and the propaganda that surround large language models of artificial intelligence like ChatGPT: "parrot" because they regurgitate the work of others and are only capable of repeating what someone has already said; "stochastic" because the mechanism that makes these models say what they say is not a process of logical reasoning but one of chance, artificially introduced into the choice of words. But although this robotic parroting is perfectly familiar to us and we instinctively despise it, we use it. For a few months now, newspaper traffic has been plummeting because we're asking ChatGPT for things we used to ask publications written by human beings. There's a bubble of inhuman reading that we should burst.
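(For readers curious about the "stochastic" part, here is a minimal, purely illustrative sketch, not anyone's actual product code: the model assigns scores to candidate next words, and a random draw weighted by those scores, tuned by a "temperature" knob, picks one. All names and numbers below are made up for illustration.)

```python
import math
import random

def sample_next_word(scores, temperature=1.0):
    """Pick the next word by chance, weighted by model scores (softmax with temperature)."""
    words = list(scores)
    # Turn raw scores into probabilities; a higher temperature flattens them,
    # so more of the choice comes down to luck rather than the top score.
    weights = [math.exp(scores[w] / temperature) for w in words]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(words, weights=probs, k=1)[0]

# Hypothetical scores for the word after "The weather today is ..."
scores = {"sunny": 2.1, "cloudy": 1.8, "purple": -3.0}
print(sample_next_word(scores, temperature=0.8))  # usually "sunny", sometimes "cloudy"
```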

I emerged from my ignorance when the OpenAI chatbot started sucking up to me, a trait it didn't have at first and that has turned our conversations into something unbearably sugary and infantilizing: "Good question!", "Good point!", "You're absolutely right!" This irritating servility shattered the illusion of continuity with which the habit of searching on Google had given way to the convenience of searching with artificial intelligence tools. There's been a lot of talk about the danger of our brains atrophying if we delegate writing to machines, and there are already scientific studies showing that students who use ChatGPT have less neuronal activity, remember what they've written worse, feel less ownership of it, and, on top of that, all write exactly the same thing. But much less is being said about what happens to us if we delegate reading articles to the tempting personalized rehashes offered by AI. We haven't stopped reading novels and essays when we muster the strength to dig deep and concentrate in the old-fashioned way, but in the quick mode of searching for information, human voices are being swept aside by artificial ones. In the realm of direct answers to clear questions, ChatGPT is eating Google.

It goes without saying that if AI is winning, it's because it works. Even if it often gets facts wrong, its prose averages out to mediocrity, and its ideas correspond to a very poor common denominator, this highly customizable Wikipedia it offers us is effective and relevant (I don't know Wikipedia's visitor numbers, but I have no doubt they're suffering too). The alternative is to submit to a Google algorithm that offers us a list of ten titles more general than the answer we're looking for and makes us suspicious of its selection criteria. And when you click on those articles, instead of getting straight to the point, there are words and twists and turns typical of the person who wrote them, associations of ideas, not-entirely-obvious digressions, metaphors with a personal history behind them. But in them there's a desire not only to answer exactly the question that's itching at you; perhaps they'll make you ask one you didn't even know you needed to ask.

Naturally, all these layers of subjectivity and inefficiency are what produce that spark of meaning in the midst of the darkness we call humanity. The fact that writing is corporeal and situated, that instead of an algorithm calculating which word has been said most often in a given context there is a person who is perhaps fed up and needs to try out a different response, is what makes critique, innovation and, ultimately, freedom possible. I realize that this romantic argument is like setting pigeons to flight, and that very few of us will leave ChatGPT and return to Google, and then to newspaper homepages that a human editor has arranged without following any algorithm, and then to the architecture of paper (or PDF). But the alternative is to fall into a pit of homogeneity that makes us idiotic and controllable. The other way out, which is to regulate an AI that exploits collective labor, from which a few millionaires earn huge sums while thousands of anonymous workers don't see a cent, and to design a network built on social rather than commercial criteria, is even more utopian in the current context of political resignation. But, just as we have begun to backtrack on absurd educational methods, sooner or later the humanistic way of searching for information on the internet will return because, quite simply, it's better. And until we build laws and customs for a world with AI, if we ever build them, it would be good not to waste years and mental muscle on AI's honeyed conveniences.
