Technology

Interview with Oriol Vinyals: "I use an AI personal assistant"

Vice President of Research at Google DeepMind

Oriol Vinyals, telecommunications engineer.
27/12/2025

At the Polytechnic University of Catalonia (UPC), no one recalls an ovation as long as the one Oriol Vinyals (Sabadell, 1983) received when he was awarded an honorary doctorate (honoris causa) at the end of November. Minutes of standing applause honored the alumnus of the double degree program in telecommunications engineering and mathematics who has made crucial contributions to the development of artificial intelligence. In fact, he is among the ten most cited AI researchers in the world.

After studying at the UPC, he moved to the US in 2006, where he earned his PhD and joined Google Brain as a researcher. There, he specialized in two fields of AI: machine learning and deep learning. Since 2016, he has been Vice President of Research and Team Leader for Deep Learning at Google DeepMind in London.

This Sabadell native worked on the development of AlphaFold, an AI system that has solved one of biology's greatest challenges: predicting a protein's structure from its amino acid sequence. AlphaFold earned its creators the Nobel Prize in Chemistry in 2024. He has also been co-technical lead of the development of Gemini, Google's most advanced multimodal AI model.

You have just given a masterclass at your university, the UPC, before more than 500 people. Many were left standing at the end because there was no more room.

— It wasn't the first time I'd returned to the UPC since leaving, but this time I wasn't expecting so many personal emotions focused on me. Yesterday's honoris causa ceremony was very special and emotional, with people on their feet applauding nonstop. I couldn't stop thinking about what I had done to get there. And, in fact, that's what I tried to explain in the talk I gave, in case it could inspire any of the students in the audience.

You are returning to a university where there is currently an intense debate about the use of these tools. In most cases, students are penalized for using AI, and there is a return to oral exams.

— All technologies have dual uses. Here, we're focusing on how a student at a leading university uses them, and that's fine. But as a developer, I think about regions of the world where there are no universities, or where the nearest one is more than three hours away by car, effectively blocking access to education. AI provides these people with personalized tutoring. Globally, these tools are reshaping access to resources, tutors, and knowledge. And that's why we develop these tools. Every company has a different approach, but in this sense, Google conducts tests before releasing tools to the public and analyzes potential problems. In the case of education and AI, we have a very large team at Google Education that reviews these concerns. AI is a tool that will endure, and it's important to educate society as much as possible to have discussions about its use. Perhaps universities will need to teach other things.

Such as what?

— Skills such as negotiating, speaking, communicating, leading, and using technology in a way that helps optimize a person's overall learning, so that they both know how to compute a derivative and understand how the world works and how to take part in a productive society.

But not everything that AI teaches is correct.

— That's right. And if you don't master a particular subject, like algebra, or if everything you know was learned from AI, you won't be able to be critical. We need knowledge to be able to question what AI tells us, to re-evaluate, to correct, to be critical. In the future it won't make mistakes, but it does now. That's why having this critical perspective is so important. A professor at the UPC, Ferran Marquès, explained to me that in his exams he shows students two AI-generated solutions to questions, and the test consists of finding the errors. I think that's a very good, constructive way to build for and project into the future, because the model is going to improve a lot in every aspect.

How can you distinguish talented people from people who are just very good at using AI?

— Perhaps we need to redefine what talent is. Adapting to constant change is now a more highly valued talent than, for example, calculating 500-digit numbers. But before calculators existed, being able to do that was highly valued. If someone today uses an AI tool very well and the result they obtain is excellent, that too is a talent.

Which one is yours?

— Mine? Probably that I never stop learning. It's one of my constants. I always choose to be in places where people are working on new areas that I'm unfamiliar with. At Google Brain I learned to refine deep learning. When I went to London to work at DeepMind, I delved into reinforcement learning, which was another new area. I also have a talent for leadership. I've exercised it, for example, on Gemini.

AI, write me a tailor-made story for three-year-olds

Vinyals explains that his daughter is not currently exposed to screens. Like any good scientist, he says he first needs to understand the effect technology has on children before introducing it to his three-year-old. However, she is indirectly exposed. For example, the day before flying to Barcelona to attend the honorary doctorate ceremony at the UPC, Vinyals went to his daughter's nursery in London to read a story. "I asked Gemini to write a version of the story of AlphaFold for children of this age," he explains. And that is what he read to the children. "It was a success," he says proudly.

Some believe that AI trivializes creativity. ChatGPT opened a Pandora's box by making it possible to transform photos in the style of the Japanese animation studio Ghibli, founded by Hayao Miyazaki. There is much debate about whether it even violated intellectual property laws.

— We've created laws and systems to defend intellectual property within a framework that assumes we are the only intellect on the planet. But if we think about it, these tools do the same thing humans do: they learn from what has already been done and created in order to generate something new that transforms human knowledge. Isn't that what artists do? They study artists of the past, learn from them and draw inspiration from them to create their own work, to which they add a new contribution: their own creativity.

Does the algorithm also make new contributions?

— At DeepMind, for example, our mission is to drive scientific progress. We have a system called Co-Scientist that reads research articles in a given field and tries to find new ideas that researchers can then develop. We find many examples of genuine creativity arising from what you ask the models to do, especially in highly specialized fields like mathematics. AI models can be a brainstorming partner. It's clear they're not perfect, but experts can discern and change the prompt. I use an AI personal assistant myself.

What tasks does it perform for you?

— I lead a very large team, and we share documents with ideas. The assistant has access and tells me if we need to pay attention to a particular channel because it contains good ideas, or if I should talk to someone on the team about a specific idea. The result is that it makes connections. It also reads scientific studies in my field, summarizes the most innovative ideas for me, and suggests new avenues for research. We need to rethink concepts like talent and creativity so that these tools can truly accelerate progress.

Will these AI models eventually generate consciousness?

— I approach the debate about AI consciousness very pragmatically. When I interact with the models, I don't feel they are conscious, at least not today. In the future, it will depend on where the technology goes. It's possible they will eventually have some level of consciousness or even existence.

There are already cases of people who have married an AI or even committed suicide after being advised by a chatbot.

— Especially when a tool is used on a large scale, there is a danger of dual use. There will always be minorities who fall into the most extreme cases, but it is important to remember that the most common uses are positive.

There is concern about the presence of Catalan in an ecosystem dominated by English.

— This is a very relevant question if you are a user of chatbots. It's clear that, at least for now, the best way to get results when you ask an AI a question is in English, because it's the dominant language on the internet and the universal language of science. What do we do with machine learning? We can measure what happens when we ask the same question in English and in Catalan; quantifying the differences numerically allows us to start optimizing and understanding how to integrate languages, and that's when the tool improves. At Google, it's now a priority for Gemini to be accurate in many languages. And we're making great progress! In fact, there are many Catalans working at Google, and there's clearly a lot of interest. Perhaps there are still differences between English and other languages, but they will become smaller and less noticeable with the new generations of the model.
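
The measurement Vinyals describes can be sketched in a few lines of Python, purely as an illustration of the idea: ask the same question in two languages, score both answers with the same judge, and track the gap. The ask_model and judge functions below are invented placeholders, not Google's evaluation pipeline, and a real setup would use an actual model API and human or model-based graders.

```python
from statistics import mean

# Paired prompts: the same question in English and in Catalan (toy examples).
PAIRED_PROMPTS = [
    ("How does photosynthesis work?", "Com funciona la fotosíntesi?"),
    ("Explain Newton's second law.", "Explica la segona llei de Newton."),
]

def ask_model(prompt: str) -> str:
    # Stand-in for a call to any chat-model API; here it just echoes the prompt.
    return f"(model answer to: {prompt})"

def judge(question: str, answer: str) -> float:
    # Stand-in for a real quality judge (human raters or a grading model, 0 to 1).
    # This toy version only checks that a non-empty answer was produced.
    return 1.0 if answer.strip() else 0.0

def language_gap(pairs) -> float:
    # Positive values mean the English answers scored higher than the Catalan ones.
    gaps = []
    for q_en, q_ca in pairs:
        gaps.append(judge(q_en, ask_model(q_en)) - judge(q_ca, ask_model(q_ca)))
    return mean(gaps)

if __name__ == "__main__":
    print(f"Average English vs Catalan quality gap: {language_gap(PAIRED_PROMPTS):+.2f}")
```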

What will we be doing with AI in five years that we can't even imagine now?

— Our interaction will be much richer and more multimodal. We can already do things like ask the model to generate a drawing inspired by a podcast, for example, or visualize it with graphics. We can even ask it for a video summary of a very long interview and have it add visual elements. In five years, this will have gone even further: we'll have personalized video tutors. For example, if we want to learn how a nuclear reactor works, the model will create a small simulator that we can interact with—an infographic of the reactor. The responses will be much richer, with a graphical interface that adapts to us.

AI's resource consumption will continue to rise...

— At Google, we're committed to carbon neutrality. The demand for AI is obviously growing exponentially. Our goal is to make this technology accessible to everyone, especially those who lack access to education. We also want to reduce greenhouse gas emissions, plant trees, and install solar panels in deserts. The increasing speed of our models is directly related to computing power. In the chips we're manufacturing, the wattage required to perform the same computation has decreased 30-fold from generation 2 to generation 6. The energy we consume, particularly when training models, is reasonable, and we're significantly optimizing the process. Furthermore, we're using this technology to try to discover new materials for fusion. We do use energy, yes, but we're accelerating advances in superconductivity and fusion, which will ultimately reduce energy consumption and help combat climate change.
