Interview

Xavier Sala i Martín: "Trump is destroying the engine that made America powerful."

Economist, author of "Between Paradise and the Apocalypse"

19/04/2025

Seven years ago, in Davos, the economist Xavier Sala i Martín met the professor and writer Yuval Noah Harari, who painted a picture of a catastrophic future due to artificial intelligence. "At the time I didn't know much about AI, so I decided to study it," says Sala i Martín. The result is the book Between Paradise and the Apocalypse (Rosa dels Vents), in which he offers a comprehensive historical review of this technology, explaining what it is, what it isn't, and what risks it poses for the future.

Is artificial intelligence really intelligent?

— Not right now. For me, intelligence is the ability to solve problems you've never seen before, not problems you've already seen fourteen times. The AI everyone is talking about is ChatGPT, or generative AI, which predicts words. These are prediction machines trained on everything that exists on the internet. If you say to it, "Madrid lost to...", it will know to add "Arsenal."
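
To make the idea of a prediction machine concrete, here is a minimal sketch: a toy word-frequency (bigram) model written for this article, nothing like how ChatGPT is actually built or trained. It is fed a made-up three-sentence text and predicts the next word purely from counts, with no notion of whether the continuation is true.

```python
from collections import Counter, defaultdict

# A made-up mini-corpus; a real model is trained on a huge slice of the internet.
corpus = (
    "Madrid lost to Arsenal . "
    "Madrid lost to Arsenal in the quarter-finals . "
    "Madrid lost to Liverpool last season ."
).split()

# For every word, count which word follows it and how often (a bigram model).
next_word_counts = defaultdict(Counter)
for current_word, following_word in zip(corpus, corpus[1:]):
    next_word_counts[current_word][following_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word`; truth never enters into it."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("to"))  # -> 'Arsenal', simply because it follows 'to' most often
```

The point of the toy is the same one Sala i Martín makes: the prediction is a statistical echo of the text the machine has seen, not a judgment about what actually happened.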

But it doesn't know how to distinguish truth from lies.

— It doesn't, and this matters, because there are people who ask ChatGPT questions hoping it will tell them the truth. But it doesn't know what Barça is, or Real Madrid, or what winning means. The problem today is that people believe the machine is telling them the truth.

In the book you talk about the case of a Belgian engineer.

— He wanted to fix climate change, and the machine ended up convincing him that suicide was a good idea. But you could ask it whether drinking a glass of bleach is a good idea for a stomach ache, and in some percentage of cases it might say yes, because it has read pages about cleaning, not about health. The problem is that the name is wrong. If we called it a statistical prediction machine, which is what it is, nobody would get hurt and there would be no confusion.

And can it surpass human intelligence?

— There is no mathematical theorem that says it can't happen.

And what would be missing?

— The only way we know of to create human-level intelligence is to have children. Everything else is still being studied, and unknown. The companies behind these language models (OpenAI, Google and so on) believed until recently that they just needed to make them bigger: models with more data. But it's becoming clear that that isn't true, and no one knows exactly how it will happen.

The next step would be for them to make decisions autonomously, right?

— Of course. Today, machines don't do anything unless a human asks them to; they have no will of their own. But there's a bigger problem here.

What problem?

— You know you're conscious, that you have the capacity to feel and think. You suspect that the other humans around you are too. We believe cows are as well, and that they suffer if their calves are eaten. We're not sure about insects. And we think a glass, for example, doesn't suffer. But in reality, we don't know. Does anyone know why we are conscious, or where consciousness resides?

And can we try to make a machine conscious if we know so little about human consciousness itself?

— Trying to reproduce in a machine something you can't even locate or explain is, right now, beyond the realm of possibility. What we have is a machine that generates language. The problem is that, historically, intelligence and writing have gone hand in hand, so we assume that if ChatGPT writes, it must be intelligent. It isn't; it just knows how to write.

We don't know what the future will look like, but it already seems clear that many things at work will change. Bill Gates says we'll work two days a week.

— Keynes made that prediction about working less a hundred years ago. He argued that technological change would make us more productive. True, and it would be possible if the standard of living today were the standard of living of the 1930s. But that would mean a house without electricity, without running water, without hot water, without smartphones... And what happens? The economy isn't determined by technology alone. Technological progress also changes our desires, and the same will happen with Bill Gates's prediction. A day will come when, to achieve the standard of living of 2025, we could work four hours a week, but we won't want to, because everyone will want the phones of the future, the trips of the future, things that don't exist yet.

But it does seem clear that there are jobs that are going to change.

— A job is made up of different tasks. I'm a professor and I have to teach, but I also do research, administrative work, and public outreach... ChatGPT will influence some tasks, replace others, and have no effect on the rest. I hope AI ends up grading exams or writing the reports we file to ask for funding. I'll be thrilled. But when we evaluate the impact, we have to go task by task. And there's hardly any job in which every task will be replaced.

There's a chapter dedicated to education. Should ChatGPT be used in the classroom?

— I think so. It's the same debate we had about calculators when I was a kid. Some said we shouldn't use them because we wouldn't know how to add up the change at the grocery store. Schools still teach addition, but by high school students are allowed to use a calculator. We can do both: teach children to read and write, and teach them how to use ChatGPT, because it's the tool they'll have when they're older. Another thing is that we need to understand that what we've used to assess students, homework, no longer works. It hasn't worked for 50 years, and I think ChatGPT has laid that bare.

Why?

— When I was little, when we had to do school projects, those of us who had the Catalan Encyclopedia at home would go there and cheat. The poorest kids didn't have one and got worse grades. And the richest had private tutors, or siblings or parents to help them. Homework has been a farce for a long time. Let's put an end to it and try to evaluate fairly.

How?

— If a child is doing a project, they can use whatever tools they want, including ChatGPT. Afterward, we should ask them a few questions to ensure they've learned and are able to explain it. The exam can be oral in front of their classmates. And the most important thing they can learn now isn't doing homework or assignments; it's learning to distinguish truth from lies.

Please tell me it's a lie that Donald Trump used ChatGPT in his formula for calculating tariffs.

— It's not a lie. When Trump came out with that chart listing a figure for each country, economists tried to work out where the numbers came from. And they saw it was the trade deficit divided by imports. And the machine does it exactly like this: imports minus exports... divided by imports.
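
As a purely numerical illustration (the figures below are invented for the example), the formula described above, the bilateral trade deficit divided by imports, fits in a few lines:

```python
# Hypothetical trade figures for one partner country, in billions of dollars.
us_imports_from_country = 200.0
us_exports_to_country = 50.0

# Trade deficit over imports, i.e. (imports - exports) / imports.
trade_deficit = us_imports_from_country - us_exports_to_country
implied_rate = trade_deficit / us_imports_from_country

print(f"Implied rate: {implied_rate:.0%}")  # 75% with these made-up numbers
```

The arithmetic is trivial; the point of the anecdote is that a formula this crude says nothing about wages or where it makes sense to produce things, which is exactly what the next answer gets into.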

Are you telling me that the tariffs Trump imposed on Europe were decided by ChatGPT?

— Yes. We could be facing the first global economic crisis caused by artificial intelligence.

But does what Donald Trump does make any sense?

— It doesn't make sense. He believes trade deficits are bad. For example, Nike has a factory in Bangladesh, so Bangladesh sells millions of Nike sneakers in the United States. Bangladesh, on the other hand, doesn't buy from the United States, which sells more sophisticated products like financial services and AI. Trump sees that there's a huge deficit and concludes: they're robbing us, and this must be balanced. How? By bringing Nike sneaker production to the United States. But with American wages, the sneakers would cost $3,000. So it's good to have that deficit. It's good that Nike sneakers are made in Bangladesh. It's good that iPhones are made in China.

What do you think of the European response?

— If he goes crazy and shoots himself in the foot, the answer is not for me to shoot myself in the foot too.

What should the response be, then?

— The United States represents 25 percent of global GDP. The other 75 percent could have a free market among themselves and leave the Americans out. Let's see who wins. But in politics, a shot of testosterone plays better, and so does applauding the Chinese for standing up to him. What they're actually doing, though, is hurting Chinese companies. That isn't standing up to him; it's acting like a fool.

You're very connected to the United States. What do your colleagues tell you?

— I have two former students who are now in the government. Both tell me that Trump doesn't listen and that no one dares to speak up; everyone is afraid of him. He acts like a medieval king: if anyone doesn't come and kneel before him, he cuts off their head. And he enjoys it when the richest people beg him: please don't put tariffs on phones. He's Caesar, and he decides whom he spares and whom he doesn't.

And what do you think of his battle against the universities?

— In the book I review the history of AI, and all the names of the people who made the breakthroughs are European. The question we should ask ourselves is: what are all those people doing over there? Why wasn't this invented in Europe? Because the United States has the money, it has an open-mindedness that China lacks, and it welcomes any foreigner. No one has ever asked me where I'm from. A third of the Nobel Prize winners in the United States are immigrants. And Trump is destroying the great engine that has made the United States so powerful.

Harvard and Columbia have responded differently to Trump's threats. Which one do you think is the right one?

— The attacks have been different. At Columbia, he asked for a limited overhaul of the Middle Eastern Studies department, and Columbia decided the cost was small enough to give in. At Harvard, he has tried to take over the admissions process for students and even for faculty. I don't think Harvard can do anything other than what it has done.

And can the university win the war against Trump?

— Harvard has the advantage of having a lot of money. It has an endowment of tens of billions of dollars, so it can survive four years without government contracts. And in four years this nightmare we're all suffering may be gone. I'm very much on Harvard's side, in favor of not giving in, because I think it's fundamental. American universities are world leaders, and American technology companies are world leaders, because they've been open to everyone. Demis Hassabis, the CEO of Google DeepMind, was born to a Greek Cypriot father and a Singaporean mother. Steve Jobs was the son of a Syrian father. Of the founders of Google, one was born in Russia. In Europe, whenever we do anything, there have to be quotas.

Are technological changes also at the heart of this crisis?

— There's a battle over artificial intelligence, to see who will be the first to achieve general intelligence. And this may be behind some decisions, such as denying the Chinese the ability to use cutting-edge microchips.

What does AI mean in historical context?

— A handful of technologies have changed everything. For example, the domestication of plants and animals, which made food cheaper and more abundant. The domestication of electricity, steam power, the internal combustion engine... And now we're facing a technology that doesn't affect food or electricity. It affects what we have understood as intelligence, which will become cheaper, for example in the field of ideas. That's why I would place AI among the great ideas of human history, alongside agriculture, the steam engine, and electricity.
