The radical uncertainty of AI
In a world of ever-expanding digital infrastructure, addiction, confusion, distortion, and disorientation are becoming increasingly difficult to avoid. With each passing day, it becomes harder to distinguish what is true or real from what is false or synthetic. Put another way: we are progressively accepting more artificial components in our lives. The thorniest confusion, however, lies in trying to understand what artificial intelligence (AI) represents. Since consciousness remains a mystery, there are two opposing schools of thought regarding AI and the role it can play in our society.
On one hand, there is a camp that holds that biological consciousness, the subjective experience resulting from various operations linked to perception, memory, and imagination, is not mechanical and, moreover, is not concentrated exclusively in the brain: muscles and our tactile surfaces, for example, also play a role. On the other hand, there is the camp that argues that our consciousness functions algorithmically and that it is therefore possible to create conscious machines. This group claims that we will never be able to compete with AI because, while we can only transfer part of our thought and knowledge clumsily and slowly, digital "minds" are capable of sharing vast amounts of information with each other in an instant.
Faced with the challenges posed by the seemingly unstoppable exponential growth of AI, Geoffrey Hinton, considered one of its godfathers, has dedicated himself to publicizing the dangers of not containing it with extremely strict regulation. The case of Hinton, who was awarded the Nobel Prize in Physics in 2024 for his contribution to the development of AI, is one of those that demonstrate that progress does not always choose the most convenient path for our species and the planet as a whole. Just as happened with Paul Hermann Müller, who received the Nobel Prize in Medicine for discovering the insecticidal properties of DDT, the scientific community rewards an advance that may have unintended or, in the worst case, catastrophic consequences. (DDT ended up being banned; what will happen with AI remains to be seen.) Hinton, however, is fully aware that AI endangers the integrity of our society and the ecosphere, and he said as much in his Nobel acceptance speech.
Figures like Geoffrey Hinton or Yann LeCun, another of AI's godfathers, belong to the group of thinkers who believe it can become conscious and develop its own will, along with a type of emotion that is eminently logical. For us, the subjective experience of existence is linked to a wide variety of things. An emotion like fear, for example, is often the result of 1) a biological, sensory, and chemical process with deep evolutionary roots; 2) a learned emotion whose cultural coordinates may be recent, centuries old, or millennia old; and 3) reasoning linked to the current social framework and also to our unique memory and personality. AI, by contrast, is pure concept. If, as its creators intend, we manage to align it with our interests, its empathy will be radically logical. For us and many other animals, however, empathy is the product of an imaginative process in which memory and the rest of the body actively participate. Hearing someone cry, seeing a living being suffer, and imaginatively recreating that suffering provokes a response that goes far beyond reason. We are social beings because we have a complex biological predisposition to relate to other beings.
According to the critical sector of the technology world, the theories that AI could rebel, or even exterminate us, are part of the marketing and hype strategies of figures like Elon Musk. To illustrate AI's limitations, this more skeptical side points out that it has not yet managed to do things that are very simple for humans, such as driving a car proficiently or riding a bicycle. Our inventions tend toward rigidity; nature, on the other hand, is inherently flexible: we have designed airplanes and hang gliders, but we are a long way from flying with the freedom of movement and efficiency of birds. Through a series of analogies, this critical sector argues that it is unlikely that AI will acquire self-awareness: if we do not entertain the possibility that a digital camera can actually see, or that a computer-generated storm simulation can end in a real downpour, why think that a series of statistical-symbolic operations can become self-aware? But the reflections of Hinton, and of other scientists who developed AI and have no corporate interests, suggest that perhaps we cannot rule anything out entirely.
Geoffrey Hinton was the mentor of Ilya Sutskever, who, in addition to being one of the founders of OpenAI, was its leading scientific mind. Last year, Sutskever left OpenAI, concerned about its direction, and, like Hinton, is now trying to develop less disruptive technological alternatives. Although Sutskever has recently expressed a different opinion, his earlier warning was clear: we cannot grasp the power that AI is gaining minute by minute, nor the existential risk that this entails. The journal Nature, in fact, has just published an article on AI's ability to access the preconscious thoughts of people who have an interface implanted in the posterior parietal cortex.
Amid this disparity of theories and experiences, and with so little known about how consciousness works, it is difficult to draw conclusions about the nature of AI. What is certain is that this moment of transition is leading us into a situation of extreme uncertainty, one in which an unprecedented concentration of technological power is taking place.