Labor

Jeremias Adams-Prassl: "The AI productivity revolution isn't visible"

Professor and Vice-Dean of Law at the University of Oxford

Barcelona. Who will be the winners and losers in an automation-based economy? How should we approach the rise of artificial intelligence (AI) in the world of work? Much of Jeremias Adams-Prassl's research revolves around these questions, and the many others that arise with the explosion of this technology and its already visible impact on our daily lives. The professor and Vice-Dean of Law at the University of Oxford presented some of these ideas at a conference at the Macaya Palace of the La Caixa Foundation a few weeks ago.

I would once have transcribed this interview by listening back to it; now we're using an AI tool to speed up the process.

— It's a good example of how to think about the impact of AI on work, isn't it? People often take a very general approach and say things like "everyone will lose their job," when in reality the impact is much more granular. There can be tasks within a job that are automated, such as transcription. Lawyers have always had to do a lot of document review, which involves going through a huge amount of text. No one would complain about not having to do that anymore. So the positive side of the story is that certain traditional, repetitive, or boring tasks can be automated.

How do you think technology is changing the nature of work?

— It's having a fundamental impact on the quality of work, and that's the important story. We often think about the number of jobs and whether people will lose their jobs, whether machines will replace them. But what we forget is that technology also creates new jobs. From a regulatory perspective, its impact on the quality of work is much more interesting. Because technology can improve it very significantly, but it can also worsen it considerably.

Where do we draw the line? How do we ensure it has a positive impact rather than a negative one?

— It depends a lot on the decisions we make when deploying the technology. For example, imagine you can monitor workers in great detail. You could use that for health and safety: to ensure people don't work excessive hours, to detect repetitive strain injuries, and so on. But you could also use it to pressure workers into taking on more tasks, which is far more problematic and has negative consequences. What the law should do is shape these decisions: through upfront incentives, such as requiring that workers be consulted about how the technology will be deployed, and afterwards, by giving workers access to data on the overall impact of the technology.

What do you mean when you talk about "management through algorithms"?

— Historically, management has controlled the entire employment cycle: hiring, day-to-day management, and firing. The first thing we saw in the platform economy was the use of algorithmic systems, including AI, to automate or assist with these tasks. For example: automatic CV filtering, automated online interviews, app-based monitoring, or the automatic deactivation of workers if their score drops too low. But starting in 2017, these technologies began to slowly infiltrate other jobs, beyond the platform sector.


And this hasn't been so visible, has it? Not like the delivery drivers with their backpacks.

— No, it's not that visible. A lot of surveillance technology is integrated into the software we use every day. There's no longer a need to install cameras: the data comes from office applications or video conferencing software. Then we had the pandemic. Imagine if in 2020 the authorities had said we had to put cameras in the kitchen or bedroom. There would have been a revolution. But three months later everyone was using video calls, and the level of surveillance became normal. Now we take it for granted, when in reality the amount of data being collected and those levels of surveillance are neither natural nor necessary.

Is this already causing litigation?

— There is resistance even before the courts. As with any surveillance phenomenon, people react. For example, there are mouse pads that move automatically to trick software that detects whether you're working. We also see litigation against more invasive data collection practices, often not in employment law, but in data protection.

To what extent is AI already intervening in business decisions, such as closing a factory because it considers it unprofitable?

— I think things like this are already happening. These decisions aren't necessarily being fully automated, but we are seeing more of it. Recruitment is the most affected area: most professional services firms now use algorithmic tools in their selection processes. It's the field where the technology is most developed.


We know that AI can have gender or racial biases. Is it safe for companies to use it for hiring?

— It's not safe at all. Companies must be very careful. The use of technology doesn't mean the law ceases to apply. There have always been anti-discrimination regulations in hiring. People have biases, both conscious and unconscious, and these are subject to legal oversight. With automated systems, even though they promise to be more objective and neutral, you must be doubly careful. It's been well demonstrated that, given their training, these systems can be even more biased than humans.

How does this increase in productivity associated with the use of AI translate into better working conditions for humans?

— Okay, we can start with the question of productivity, because the answer here is very clear. We're not seeing any productivity impact in the statistics. When you look at labor market data, the AI productivity revolution isn't visible, despite the enormous sums invested by both AI platforms and the companies deploying the technology.

How can this be explained?

— There are different theories. The one I find interesting is the notion of AI slop [AI-generated garbage content]. One of the biggest dangers is that we're now mainly talking about technologies like ChatGPT and language models, and what they produce is something vaguely plausible, nothing more than that. I can produce much more, but not necessarily of better quality. If I ask an AI to write a report for a colleague who then has to review it, the colleague ends up doing much more work than if I had written it myself. Initially, it seems like I can do many things more easily, but in the long run I end up with more work: I can use ChatGPT, but the AI might make things up. It's a good example of how a reduction in productivity can end up happening, because we're injecting this kind of slop into the system.

So why have companies invested so much?


— Okay, there's a lot of marketing involved. You have incredibly high valuations, and you have to justify them somehow. Systems are sold as if they have capabilities they may not technically possess. In recruitment, they sell you an AI system that claims that if you upload a three-minute video, it will tell you how that person performs in a team or in dealing with clients. But it's just a random number generator; it's not a technically valid system. That doesn't stop it from being sold, though. I think people have FOMO [fear of missing out]: if the competition uses it, perhaps I should use it too. My hope is that we can reach a world where we understand that there isn't necessarily a relationship between productivity and the quality or quantity of work.

Will the grunt work behind AI be shifted to countries in the Global South?

— Unfortunately, yes. Online outsourcing platforms have played a key role in the development of AI, and this is often not discussed enough. Data cleaning and system training are often outsourced there. Content moderation is another classic example: we think a sophisticated algorithm handles it, but in reality it's human work, often psychologically and occupationally hazardous, very poorly paid, and outsourced to these countries. It even happens with self-driving cars: when they malfunction, many companies call on a person who intervenes from a remote support center. The history of AI is therefore also a history of hidden labor.

How should this impact be regulated?

— One possible solution is to ensure accountability in global supply chains. EU regulations have focused more on the application layer, i.e., end use. And this is an interesting aspect of the AI Act: for the first time, it recognizes that there are different actors within the AI supply chain. It doesn't go very far in terms of concrete measures, but it is a step in the right direction.

What gaps do you see in this regulation?


— What it does is create a lot of regulation and bureaucracy, which may not be good for businesses, yet it also doesn't achieve much in terms of protecting people. The reason is that it tries to be a one-size-fits-all instrument, applicable to AI in many different areas: a single set of rules that applies regardless of context. But in reality, the challenges and opportunities of AI in healthcare are completely different from those in finance, consumer affairs, or employment. Having a single set of rules for all these contexts therefore creates problems.

Are there sectors where you see this technology as having the potential to improve working conditions? And vice versa?

— On the positive side, one historical aspect has been labor market activation: creating opportunities for more people to work and bringing employment to a much wider range of communities, including people who, for various reasons, have been systematically excluded from the labor market. The most negative side we're seeing is when technology is used for essentially illegal purposes. For example, software that predicts when a person might become pregnant in order to fire them sooner. This is profoundly wrong.

Are these practices becoming commonplace?

— In the European Union, this kind of thing is completely illegal. What we do often see, however, is, for example, the complete automation of layoffs, or the use of software to predict unionization.

Should we be concerned about job losses caused by AI?

— Job losses are a factor we must consider when discussing the impact of AI and technology on the labor market. But we shouldn't let them distract us from the important political debates about the quality of work that we're currently facing. Historically, there have been many such predictions that ultimately failed to materialize.


Finally, a utopian scenario: what is the most positive thing that could come out of this debate?

— The most utopian scenario would be one in which we could automate many of the most tedious and repetitive tasks, freeing up time to dedicate to higher-quality work and the tasks we truly want to accomplish. But the key is that, if productivity gains are generated, we must carefully consider how to share them equitably. Productivity gains alone are not enough; what matters is how they are distributed.