Society 25/04/2021

Jordi Vallverdú: "Robots will do great wonders and great barbarities"

Just as there is bioethics, there is roboethics, a new branch of knowledge that deals with the changes (without euphemisms: the dangers) of the development of artificial intelligence. Will robots put us out of work, or will that in fact be only the first step towards their becoming the new kings of creation? Or are these fantasies learned from Kubrick's 2001? UAB professor Jordi Vallverdú Segura, a specialist in the philosophy of science and computing, artificial intelligence and robotics, works in roboethics. He has a torrential way with words, understandable, sharp and funny, as can be seen in the Interests section of his UAB web page: "Love for Johann Sebastian Bach, John Coltrane, Djivan Gasparyan and Japanese poetry in the form of haiku accompany this researcher in his investigations. And he enjoys sharing life (but not the toilet) with his partner and their three children". Vallverdú sat next to fellow philosopher Daniel Gamper at the opening of the Diàlegs de Pedralbes 2021, organised by Barcelona City Council and the ARA. This interview includes some of the most interesting moments of the conversation.

How much human emotion will we transfer to robots?

— This is the question. There is a game, the ultimatum game, in which you have 100 euros to share between two people, under a very simple rule: the money is only shared if both agree on what each one gets, and the person who makes the proposal cannot say "I'll keep a hundred and you get nothing". And what we saw is that in Western populations, if the proposal strays far from a 40-60 split, the other person never accepts.

Because it doesn't seem fair.

— Exactly. But this response is cultural. In certain places in Asia, Africa or South America, there were people capable of accepting a 99-to-1 split, because if you enter the game with zero euros and leave with one, you have already won. We, on the other hand, have a sense of pride: "If I only get €1 and he gets €99, he's pulling my leg, so there's no money for anyone here and we're all stuck". This is important because when we make machines learn from humans, which human will they learn from? From an American or an Asian? They won't learn from Einstein but from an ordinary person, who doesn't say the same thing before coffee as after coffee, or when their partner has left them or they have just had a raise.
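The acceptance rule Vallverdú describes can be sketched as a simple threshold model (a minimal illustration only; the function name and the exact thresholds are assumptions, not figures from the study he cites):

```python
def responder_accepts(offer: int, total: int = 100, min_share: float = 0.4) -> bool:
    """The responder accepts only if their share meets a culturally set minimum."""
    return offer / total >= min_share

# A responder with a hypothetical "Western" fairness threshold rejects a 99/1 split,
# while one who values any gain over nothing accepts the very same euro.
print(responder_accepts(1, min_share=0.4))   # False: "there's no money for anyone"
print(responder_accepts(1, min_share=0.01))  # True: "I entered with zero and leave with one"
```

The point of the sketch is that the same offer gets opposite responses depending on a single culturally learned parameter, which is exactly the problem for a machine deciding which humans to learn from.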

If the machine behaves like the person and changes its mind very often, it will not have a criterion.

— Or it will come to the conclusion that you have to change your mind very often. This happened with a Microsoft artificial intelligence system that learned through Twitter. What happened? Within 24 hours they had to shut it down because it immediately started making racist comments.

And this is where you philosophers come in.

— When I started doing philosophy of computation, I would go to a conference and there were four of us, the usual contingent of oddballs among the oddballs of philosophy, who were allowed to talk a bit more nonsense. A lot has changed in a short time, because now we already have robots that can walk down the street. In fact, cars could already do many more things by themselves. Why don't they? Because we are still unclear about many ethical and legal challenges. There will be automated cars that kill people unintentionally. And as Professor Daniel Gamper says, the problem is that we will program the car to make a decision when, in fact, a driver trying to avoid an accident in that split second does not make a completely rational decision: he does what he can.

What do you think we will make the machine do?

— At MIT they asked millions of people: "If you were a car, how would you react?" And they saw that there are cultural patterns: the answers were different from country to country. Italians thought it made sense to save mothers or grandmothers before small children. The problem we roboethicists have is that we have to introduce into a computer program human ethical codes that are not always coherent. Look at judges, how they interpret laws. If well-informed people disagree among themselves, what can we expect from robots? Law does not work with mathematical logic, and the machine does not have so many nuances. Now, I don't know if those of us in this room will see it, but I'm sure there will be robots that will be non-human persons.

Isn't "non-human persons" a contradiction in itself?

— On a legal level, no. In 2015, an Argentine judge ruled that a female orangutan had the status of a non-human person: not a human, but with rights as a person. And in 2017, Sophia, a humanoid robot made by Hanson Robotics and inspired by the physiological structure of a woman, was granted citizenship in Saudi Arabia. And if I can leave everything to my dogs when I die, I can leave it to my robot too; in Japan there are cemeteries for robots. Cemeteries, not warehouses.

So, if the machine says "I don't want to sweep anymore, I'm tired", what do you do? Because supposedly you bought it so it doesn't get tired.

— Here's the thing: what will happen if robots become complex enough to have personalities? People are now starting to talk about sexual robotics, so it's no longer a weird paraphilia. That's why the anthropologist Kathleen Richardson says that a new sexual slavery is being generated: we are transferring roles of domination from one place to another. On the other hand, Sergio Santos, a Catalan engineer who has made a sex robot, says it can be programmed so that it doesn't always agree. But will you spend 15,000 euros on a robot that will tell you "not today, I have a headache"?

How many people will have to agree to programme these robots - philosophers, lawyers, scientists?

— The whole of society. The country with the most robotics is Japan, for two reasons: it is an industrial power (although the Chinese will catch up soon) and a society that is very restrictive about immigration and has a low birth rate. What happens? They have more old people than anyone else, and they are alone all day. The solution? Robots. A large part of social robotics has to do with care, which is what makes us human. Because the elderly tire us out, but the robot repeats things as often as necessary.

This in the case of a good robot. Will we be so stupid that we will programme machines so that they can attack us?

— Yes, of course, we will have very intelligent military robots capable of killing people. But machines already know how to fool you. Some Japanese researchers made a machine that played rock, paper, scissors and always won, because it has a very fast motor system that allows it to know what you will do and anticipate you. It pretends to play with you under normal conditions, but in reality it has an enormous capacity to predict your body movements through its built-in cameras. Facebook or Instagram make you believe that you are going to watch the news and, in fact, you watch what they want, when they want.

Back to the evil non-human person...

— There has already been a fully autonomous US military ship, unarmed only because they chose not to arm it, that has gone through the Panama Canal all by itself. And there are fighter jets that will be able to do more than human pilots ever could, because the g-forces our bodies can withstand are limiting. They will eventually figure out more creative ways to kill people. Unfortunately, we humans have already done that: at Mauthausen there was a competition between companies over what the showers and ovens should be like. You go there and you see the crematorium oven, with the company's logo on it, and you say: "These are industrialised machines to kill people, and we people have made them". We will transfer Abu Ghraib and Beethoven to robots, and if robots learn from us and we don't control them, they can do great wonders and great barbarities.

So, better to have robots that were imperfect, like us?

— You've hit on it: why are humans special? Because we are imperfect. Just as a child learns to walk by making mistakes, the artificial intelligence systems that are later applied to robots are now made to learn the same way: by making mistakes.

Will robots know that they are part of a collective? Will they be aware that we have programmed them?

— The problem of consciousness is very complicated because we don't know what it is. We know it is a functional element for making decisions, but most of what we do every day we don't do consciously. We have consciousness depending on the body we have and how we project it on a social level, on the symbolic tools we have. When the Spaniards arrived in America, after a while they held a religious council to decide whether those individuals were people or not, and that's when Bartolomé de las Casas said: "man, I think so".

Will robots have feelings?

— Today, machines don't feel anything. They can pretend: you touch them and they go "ouch", but it's all theatre. Now, will they ever feel? If we increase the complexity of their systems in numbers of sensors, yes. Will they have emotions like ours? Not unless they have a body exactly like ours.

Is there anyone who is reflecting on the value of a humanistic education for engineers?

— Strictly speaking, no, because we all work segmented by companies, projects... Now, as engineers try to do more complex things, they realise that they need philosophy, because if they say "this machine perceives", they should ask themselves, "what is perceiving?". I'll also tell you that the people who have been most interested in social robotics are women. I'm not saying that women are designed better for this; that's just another song of the heteropatriarchy. It's a lie that women are more social; it's just that they have been forced to do certain things.

So what differentiates one person from another?

— Apart from basic biology, it is the neural structure that makes one person interested in one thing and another in something else. That is what makes people do one thing or another. I don't aspire to robots that are great mathematicians; we already have those. Why not robots that write the poetry no human being has ever been able to write, or the musical works no one has ever thought of, combinations that have never crossed anyone's mind?

[Question to the audience] After Professor Vallverdú's explanations, who leaves happier and who leaves more worried about the future? [The worried ones win]

— There are many uncertainties and a lack of trust in the human species. The problem with robots is us, humans. In Japan and South Korea it's the opposite: they have an optimistic view; there are engineers who have written about the divine nature of robots, because Buddhism and Shintoism allow this combination, and there are robots that are considered monks. I have dedicated myself to robots because I wanted to understand people, and by making machines you realise what we don't know about people, which is everything.