Is feminist AI possible?

Experts explain how artificial intelligence is being regulated and what steps need to be taken to achieve ethical algorithms that are committed to society.

07/03/2026

Barcelona. That we are starting out with artificial intelligence built on biased algorithms is no secret. The dangers of using AI without regulation have been demonstrated and emphasized, whether for personal purposes, in a company or institution, or, even more alarmingly, for disseminating content on social media. Achieving ethical algorithms, aligned with democratic values and incorporating a gender perspective, is an objective being pursued especially in Europe. Women experts in this field are shedding light on a rapidly advancing technology that seems difficult to control and in which, as in the rest of big tech, there is a lack of women.

"The teams that have been training AI so far have been made up of white men who have a built-in bias. The lack of women has meant that no one has raised the alarm to say, for example, that there are two million photos here and they're all of men," asserts Karina Gibert, professor and chair of Intelligent Data Science and Artificial Intelligence (IDEAI) at the UPC and a professor at the Faculty of Informatics in Barcelona. Gibert explains that biased AI stems "from blatant malpractice": the lack of women in the process combined with the use of biased data to train these algorithms. Who is most vulnerable to algorithmic bias? "Those who aren't at the table," says Tatiana Caldas Löttiger, a lawyer specializing in technology and AI regulation worldwide. "Women, even if they are in the workforce, must join the conversation and ask questions. We have an important role to play." Caldas has 20 years of experience in international law and is the founder and CEO of International WomenX in Business for Ethical AI, a civil society organization that seeks a world where AI is developed and used ethically.


The solution also involves regulating algorithms and training the people who train this technology, as well as fostering a cultural shift in society. Can we create a feminist AI? "It's clear we can. The problem is that if someone creates it, society then has to consume it," says Karina Gibert.

Regulating algorithms: a key piece

Europe is the most advanced region in terms of AI regulation: the EU AI Act, passed in 2024, is the first law in the world to comprehensively regulate this technology to ensure it respects fundamental rights. Although its implementation is not yet complete (it has been postponed and is expected to be finalized in August 2027), the law classifies AI systems according to their risk: unacceptable (and therefore prohibited), high, limited, or minimal.


"If you have an app that recommends books, it probably won't pose any risk, but if you have an app that gives advice to doctors or makes diagnoses, that would be high risk and should undergo a validation process to ensure that certain security requirements and minimum ethical standards are met," explains Caroline König, a Data Science and Artificial Intelligence engineer at the UPC.

"Right now, we're at a stalemate in the rollout of the law. We can still have legal AI that isn't entirely ethical," explains Gibert, who is also the dean of the Official College of Computer Engineering of Catalonia.


What can happen if a company uses unregulated AI? When hiring, for example, algorithms can discriminate because they are trained on biased data. With properly trained technology, things change. In England, for example, several insurance companies stopped selling services to divorced women once it was shown that they were in a vulnerable position and didn't have the financial means to afford them. Before, banks and insurance companies sold credit cards or insurance to all customers regardless of their financial situation; with regulated AI, they now have an ethical obligation not to do so.

Now, what's also being raised is how far regulation can keep up: technology is advancing too rapidly for rules to stay current, and updating them requires significant effort that takes time. Caldas sums it up well: "With generative AI, harms are being generated that weren't even considered. Who analyzes them? Who tells you whether something is high or low risk? Does the regulator or the lawmaker even have time to create regulations for this?" "Generative AI, like ChatGPT, will not be considered high risk, but it will have to comply with transparency requirements and European copyright legislation," the European Parliament stated some time ago. It has since been demonstrated that chatbots are a dangerous tool, but this wasn't apparent initially, not until it became clear that they were harming young people and women in particular.


A practical case

"Now we look at things we might not have looked at four years ago. At the beginning, the main objective was to develop the most accurate model possible, and you didn't pay attention to whether it was discriminatory, because it wasn't mandatory." Caroline König is a researcher working on a biomedical project considered high-risk under European AI legislation. Specifically, she is leading the development at the UPC of a predictive AI-based platform for personalized medicine called Permepsy. The project, coordinated by the San Juan de Dios Health Park, aims to establish a personalized approach to the psychological treatment of psychosis. Because the prototype is considered high-risk (it could have a direct impact on people), it must comply with legal requirements and ensure that the platform is not discriminatory. Furthermore, it cannot be freely shared until it has been validated as safe under current regulations.


What's missing: a change in our relationship with AI

However, regulation isn't the only thing that's needed. There's a more tedious and complicated task: changing societal attitudes and educating people (especially young people) about the uses of AI. Only in this way can we prevent the technology from having a sexist side. Like other experts, Tatiana Caldas agrees that the only way forward is through education and fostering an AI culture. "You have to understand that photos of you can be published online and can make you vulnerable to deepfakes and explicit images. That needs to be explained to teenagers," she asserts. Karina Gibert agrees: "We must raise the awareness of everyone involved in AI: those who invent it, those who buy it, those who sell it, those who use it, and those who consume it." And she adds: "We must return to an educational system that weighs its impact. No one teaches this in schools."

Is there an alternative to AI? Caldas's answer: probably not. "AI isn't going anywhere. Rejecting it is like not knowing how to use email. We have to see it as a tool that's here to stay, that will make our lives easier and more efficient, and we must learn to use it responsibly," she warns. It's a very powerful tool, but we need to take ownership of AI governance.