The Catalan model designed to prevent AI from violating your rights
Starting next year, high-risk AI systems will have to undergo an assessment to ensure that their decisions are not based on stigma or bias.
Barcelona. Ensuring that artificial intelligence does not violate fundamental rights is the goal of the European regulation that, starting next year, will make it mandatory to subject AI systems to a prior assessment whenever they may pose a high risk to people's rights: for example, when they relate to health, education, judicial proceedings, hiring processes, access to financial aid, or border control. From next August, such AI-based systems will have to pass this prior assessment before they can be used in the EU. However, the regulation does not specify how the assessment should be carried out, and Catalonia has developed a model that has already been adopted by the Basque Country, Croatia, Malta, and Brazil.
"We have been pioneers, and that has allowed us to reach many places, for it to be valued, and for some countries to adopt it as their own," says the president of the Catalan Data Protection Authority, Meritxell Borràs, in an interview with ARA a few days after reaching an agreement with the author of the regulation to ensure that AI respects fundamental rights. "If a significant risk is detected in an AI system, a mechanism must be found to mitigate it and, therefore, eliminate or minimize it so that it does not affect people's fundamental rights," Borràs summarizes.
The model developed in Catalonia has been tested on several practical cases that illustrate situations in which fundamental rights could potentially be violated. Given the expanding use of AI, however, Borràs anticipates that "sooner rather than later, new scenarios will emerge that could broaden the scope of the impact."
Health
In the healthcare sector, the model was applied to an AI system that uses medical images to detect cancer and monitor its progression, informing decisions about the patient's treatment. "Just as until recently dosages and treatments were the same for men and women, and we have already seen that in some cases this shouldn't be the case, the same applies to ethnicity," explains Borràs. In this case, the AI had been trained solely on data from white patients, excluding everyone else. "This has led to expanding the study to other continents to include patients of other ethnicities, so that we can assess whether or not there is a change in their disease progression and in the treatments they need," she states.
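To make the problem concrete, here is a minimal sketch, not drawn from the Catalan model itself, of how a team might check whether an imaging model's training data underrepresents certain ethnic groups before deployment. The column names, group labels, and threshold are hypothetical and chosen only for illustration.

```python
import pandas as pd

# Hypothetical training metadata for an imaging model; the column names
# and values are invented for this example, not taken from any real system.
records = pd.DataFrame({
    "patient_id": range(8),
    "ethnicity": ["white"] * 7 + ["asian"],
})

# Share of each group in the training data.
shares = records["ethnicity"].value_counts(normalize=True)

# Flag groups that fall below an arbitrary representation threshold,
# or are missing entirely, before the model is trained or deployed.
THRESHOLD = 0.10
expected_groups = {"white", "black", "asian", "latino"}
for group in sorted(expected_groups):
    share = shares.get(group, 0.0)
    if share < THRESHOLD:
        print(f"Underrepresented group in training data: {group} ({share:.0%})")
```

A check like this only reveals gaps in the data; deciding how to fill them, as in the expanded study Borràs describes, remains a clinical and ethical decision.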
Social Rights
Another example Borràs cites is the AI systems used to decide who can receive a loan or financial aid. In this case, she warns of biases that can creep in through the underlying statistics. "There can be a bias due to the type of population, or because a person lives in a neighborhood that is presumably poorer and where there may be a greater likelihood of defaulting on loans," she explains. But what should be analyzed, she says, are the person's own circumstances, not the fact that they live in that neighborhood. "An algorithm can help decide who receives financial aid, but if it is biased by the neighborhood where they live, we're not on the right track," she argues.
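As an illustration only, and not a description of the Catalan assessment itself, the sketch below shows one simple way an auditor might surface this kind of neighborhood bias: comparing approval rates across neighborhoods in a model's past decisions. The data, column names, and threshold are invented for the example.

```python
import pandas as pd

# Hypothetical audit of a credit-scoring model's past decisions; the data
# and column names are illustrative, not from the Catalan assessment model.
decisions = pd.DataFrame({
    "neighbourhood": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved":      [1,   1,   1,   0,   1,   0,   0,   1],
})

# Approval rate per neighbourhood: a large gap suggests the model may be
# using where someone lives as a proxy for their individual circumstances.
rates = decisions.groupby("neighbourhood")["approved"].mean()
gap = rates.max() - rates.min()

print(rates)
if gap > 0.2:  # arbitrary threshold for this sketch
    print(f"Warning: approval rates differ by {gap:.0%} across neighbourhoods")
```

A gap on its own does not prove unfairness, but it is the kind of signal that a prior risk assessment is meant to catch so that the cause can be examined before the system is used.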
Education
The risk analysis model developed in Catalonia has also been tested on a system designed to assess university students, summarizing their learning with a traffic-light-style color code. The aim was to detect whether a student was struggling and therefore needed more attention or support, but in practice "a psychological bias arose" for both the teacher and the student. "Although it wasn't the intended effect, it discouraged both the teacher and the student, and the right to education was affected," explains Borràs.