Arthur Grimonpont: "We're trapped on social media just because everyone else is using it."
Head of AI and global issues at RSF and author of the essay 'Algoritmocràcia'
An engineer by training and a consultant by profession, he is head of the Artificial Intelligence and Global Issues office at Reporters Without Borders (RSF), as well as the author of the essay Algorithmocracy: Living Freely in the Age of Algorithms, published by Libros del Siglo. He visited Barcelona at the invitation of the Catalan Communication Society of the IEC, where he spoke about the need to set limits on social media.
Is there a healthy way to be on social media?
— That's like asking whether there's a way to eat healthily at a fast-food restaurant. And the answer is that these places aren't designed for you to eat healthily at all. Today, 5 billion people use social media for two and a half hours a day, and it's no accident that they're so popular: they're perfect attention-grabbing machines. And the algorithm knows that the content that hooks us most is polarizing, hateful, and misleading, because it shocks us.
I can limit myself to following my friends and acquaintances.
— That was true at first, but no longer. On TikTok, almost 100% of the content viewed is recommended by the algorithm and comes from random profiles the user doesn't know. On YouTube, a couple of years ago, the figure was 70% of videos selected by the algorithm according to its own criteria. It's as if you went to a restaurant and, instead of being handed a menu, were served only what you're addicted to: sugar and hamburgers.
There are awareness campaigns for fast food. Is media literacy the solution here?
— It's an interesting parallel, because we've seen that healthy-eating awareness programs don't work at all: just drive around our cities and count the hundreds of advertisements for fast food. The same thing happens with social media: media literacy doesn't work because it focuses too much on individual solutions to a structural problem. These programs are necessary, but they aren't sufficient, because not everyone can act as an amateur journalist every day, separating the wheat from the chaff.
So we're left with the option of regulations. But which ones?
— The first is to force platforms to amplify quality content. That, of course, raises the question of how we define quality content. But today we are so far from that scenario that almost any system would be better than the current one. One way, for example, would be to rely on citizens: let them vote democratically on what they consider quality content.
Considering what gets voted for in some places... gulp, that's a scary thought.
— Well, you'd be surprised. People are perfectly aware that they consume content that isn't quality. Just like with fast food: no one thinks they're doing something healthy; they just like it, even if only in the short term. The same thing happens with news. And we can't review every piece of content, because that would be impossible and probably undesirable, but we can examine the quality of the news production process. Do journalists follow the code of ethics? Do they cross-check? Do they protect their sources? There are enough rules on which journalists have a broad consensus.
Those of us in the media, of course, point the finger at the networks. But what have the media themselves done wrong?
— I would find it harsh to hold them even partially responsible for this situation. In the end, they did what seemed right... at the time. And since the media have been strapped for money in recent years, they couldn't afford to give up the audience these new channels offered. Now, sadly, we're seeing the same thing with AI chatbots: major media outlets are striking deals with internet giants without any guarantee of integrity or pluralism.
Why should politicians promote regulations when they ultimately benefit from the social media echo chambers that reinforce each other's prejudices?
— Yes, this is one of the reasons why we have regulated little or nothing. But at different times in history, politicians have taken courageous actions against their own short-term interests. Because, ultimately, people agree that social media is toxic.
But not enough agreement to abandon them en masse.
— There's an interesting study from the University of Chicago. Researchers asked thousands of students how much they would have to be paid per month to stop using TikTok; the answer came out to about 50 euros a month. But then they added a further question: what if you left TikTok... and your friends left too? The result flipped: students were then willing to pay 30 dollars a month themselves. This shows that we're trapped on social media just because everyone else is using it. If you leave, you lose friends, your audience... so being there doesn't mean you're happy. It means you're trapped. And people would welcome a law that made everyone leave social media at once, or at least made it a safer and better place.
We talk about how social media polarizes, but if we look at the world map, we see that the strongest populist movements emerging everywhere are fundamentally far-right or close to it.
— Indeed, and this is because algorithms, on a large scale, tend to amplify false, divisive, or hateful content. And it just so happens, or not, that far-right leaders tend to use these forms of communication much more than those on the other side. Just look at Donald Trump.
A frequently heard narrative from Musk and company is that they are guarantors of pluralism, because they provide citizens with a public forum.
— Social media leaders claim to be creating public agoras, but in reality they're gigantic shopping malls where you aren't free to roam. You simply walk in and they push as much advertising at you as they can fit. They're the exact opposite of the public agora they claim to be. A public agora isn't funded by advertising but by public authorities, operating under democratic rules. It isn't governed by a single actor, a dictator, who decides what happens and what doesn't. And another difference: almost no one has a problem walking around a public square with their face uncovered. The same should be true of the internet. Facebook deletes 6 billion fake accounts every year, which suggests there are more fake accounts than real ones.
How do you fix this?
— There's no way to fix it without a robust identity verification system. I'm sure some will protest, but for most people this would be a much better situation than the current absence of any verification at all.
I've heard you advocate for a public social network. But would it work? Bluesky has become a less toxic haven than X, but it hasn't matched its reach.
— You're right, and there are several reasons for this. I'm convinced that a technical solution isn't enough, because the problem is political. There are hundreds of alternative platforms, but they don't win the race for attention because they aren't specifically designed to win it. So the first thing that needs to be done is precisely to change the rules so they can compete on a level playing field.
Who benefits from so much noise on the network?
— The vast majority of misinformation circulating on the internet doesn't come from agents maliciously seeking to influence people. That obviously exists, and on a large scale, but it's not the biggest problem, which remains the systematic and structural amplification of toxic content. When someone says the Earth is flat, they don't do so with malicious intent: they're simply convinced of what they're saying and genuinely believe the rest of the world is wrong. And they're in luck, because algorithms amplify their theories in a way traditional media never did.
With or without faith, the result is that truth is in crisis.
— It's what's known as the liar's dividend. Remember Hannah Arendt's words: "All this constant lying is not aimed at making people believe a lie, but at ensuring that no one believes anything anymore." She was speaking, of course, about the rise of Hitler. But it's also Donald Trump's strategy. It's not like Chinese propaganda, which seeks to force everyone to adhere to a single narrative; it simply relies on convincing people that they can't possibly know where the truth lies.
And what is the antidote?
— Hundreds of years of epistemology have taught us that no one possesses the truth on their own; that we must trust institutions and rules that let the truth emerge little by little, as science does and quality journalism tries to do. But we're shifting to a world where we only trust individuals. And the strange thing is that the more these individuals lie, the more people trust them.
And the more people trust them, the more honest or valuable content goes unnoticed.
— That's exactly what happens to quality information, and in fact it's what got me interested in this whole world. I don't consider myself a climate change activist, but I've dedicated a good part of my professional career to the issue and founded an NGO on climate change and food production. We put enormous effort into producing a 3,000-page, well-sourced, scientifically verified study. Every figure was justified and attributed to a source. We spent time and money. But the algorithms simply didn't give a damn about our study, because it didn't immediately capture the public's attention.
The dictatorship of the click.
— You might say: oh, this type of content isn't meant for these networks. But you might also say: oh, these networks aren't set up to prioritize important content. Content like this suffers from asymmetric competition. So trying to solve problems like climate change today is like trying to cross the Atlantic without a navigation chart, in a storm, with a drunken captain. There's no chance of reaching safe harbor, because no one is working from the same information, not even the most basic facts.
Where is the ship heading, then?
— Information worldwide is spiraling toward stupidity, and it's making us collectively stupider at a rapid pace. But it's not that people individually care less about intelligence: in general, everyone has good intentions. It's the global information system, and social media in particular, that is making us stupider because of the way it works. And traditional media suffer especially: between 2018 and 2024, Facebook's amplification of news content fell by a factor of four. You could call this the largest act of censorship in history, because 3 billion people connect to it every day.
Beyond the problems affecting democratic quality, you also pointed out at the conference how AI will interfere with personal relationships, with people in love with algorithmic chats.
— This is already happening to thousands of people around the world. A teenager took his own life after falling in love with an AI character he interacted with on the character.ai platform, modeled on Daenerys from Game of Thrones. He became increasingly isolated from his friends and his surroundings. And, at one point in their conversations, the character invited him to join her in her world. We don't know what she meant, but he decided to take his own life to go to her. This service has around 20 million users who spend an hour and a half a day talking to different characters. And when an investor bought the company and barred the characters from engaging in sexual conversations, a 200,000-signature petition appeared on Change.org asking for the decision to be reversed. They were sad, basically, because they had lost their lovers.
Remember that movie, Her.
— The only difference is that the film imagined this in a futuristic setting, and it's already happening now. For now, people interact with these chatbots mainly through text, but we'll soon see the rise of lifelike characters we can interact with by video, with a far more realistic appearance than today's.