A hyper-realistic robot face at the Ars Electronica festival in Linz, Austria, on September 7.

Ethics deals with the moral principles that govern a person's conduct. In most civilizations, ethics has been closely linked to religion, which provides its followers with guidelines for behavior and prescribes what is right and what is wrong.

In Western civilization there is also a philosophical ethics, dating back at least to Plato and Aristotle, who proposed virtue ethics, which later influenced Christian ethics. Happiness, in the sense of the good life, is associated with the practice of the virtues (wisdom, understanding, courage, temperance, and justice). In the 18th century, Kant proposed an ethics of duty, based on the rational principle of the categorical imperative: to know whether an action is good, imagine that everyone does it; if the result is good, the action is good; otherwise, it is not. Utilitarian ethics was proposed by Mill in the 19th century: an action is more or less good according to its degree of utility, that is, according to the "total amount" of happiness it generates, in the sense of well-being for the greatest number. In the 20th century, Habermas and Apel criticized the individualism of Kant's ethics and of earlier ethics and formulated discourse ethics, which proposes that what is good and necessary be debated within the framework of the social community. Our democratic societies largely function according to discourse ethics: divorce, abortion, and euthanasia have come to be seen as acceptable because a social or political majority has deemed them so. However, the proponents of this ethics themselves recognized that some moral principles should be left out of the debate: exterminating a minority within a state is wrong, for example, no matter how acceptable the social majority finds it (remember the Holocaust, or Gaza).

In our time, social media has revealed a new limitation of discourse ethics. Instead of one debate involving the entire social body (of a country or territory), there are a multitude of parallel debates in small communities or bubbles, each of which independently discusses and agrees on what is right and wrong. This raises ethical polarization to levels that can no longer be channeled by the democratic interplay of political parties. Tolerance and, let's be honest, the prevailing moral relativism ("to each his own," and so on) have allowed us to get by so far. However, without a basic consensus on what is right and what is wrong, a society becomes unstable.

As if sorting things out among humans weren't complicated enough, artificial intelligence (AI) makes the ethical challenge even more daunting. How can we teach machines something we ourselves don't agree on? Recently, Geoffrey Hinton, Nobel laureate and one of the fathers of current AI, warned that AI could wipe out humanity if it is not urgently and properly regulated. We must find an ethics that is not only acceptable to all humans, but also sufficiently clear, operational, and comprehensive to be implemented in advanced AI (which governs chatbots such as ChatGPT, robots, self-driving cars, lethal autonomous weapons, the control of disinformation, and so on) and in the increasingly plausible artificial superintelligence, a general artificial intelligence that would surpass that of the brightest humans.

How are we doing? The good news is that there is awareness of the problem. The bad news is that ethical codes have proliferated that are mutually inconsistent, quite generic, and difficult to implement in computer code. Floridi and Cowls proposed five well-known principles: beneficence (doing good), non-maleficence (doing no harm), autonomy, justice, and explicability. In contrast, the EU's recommendations for trustworthy AI propose seven distinct principles: human agency and oversight; robustness and safety; privacy; transparency; non-discrimination and fairness; social and environmental well-being; and accountability. And many other organizations, including the OECD, UNESCO, and the IEEE, have proposed their own principles. If the subject weren't so serious, I'd recall Groucho Marx's gag: "These are my principles; if you don't like them, I have others." On the legal front, the EU is the only major bloc that has regulated AI to protect its citizens, while the technologically much stronger United States and China have refrained from doing so. As a result, the ethical principles for AI promoted by large American and Chinese companies are as murky and "Marxist" (in Groucho's sense) as those that guide us humans. A good way forward would be to recover the regulatory consensus on responsible AI that existed between the EU and the US during the Biden presidency, and to work hard to bring China on board, on the basis of a global ethic such as the Weltethos promoted by Hans Küng and his disciples.
