Anthropic's refusal to bend its artificial intelligence Claude to the will of the American military has the country's far right up in arms. The government has canceled contracts worth $200 million because the company will not allow its tool to be used to spy on US citizens, nor to make decisions to kill someone without human supervision. Elon Musk, who besides being a neo-fascist oligarch is also a competitor, has spent days denouncing Claude's alleged "woke" fits. One example: a user asked it to define "white pride" in one word and it responded "racism"; asked to do the same with "black pride," it said "empowerment," a positive attribute. As logical reasoning, this has the strength of cigarette paper: "pride" carries different connotations depending on whether it applies to an oppressor group or an oppressed one. In any case, Musk's grand gestures with these supermarket-brand syllogisms bear out the saying that he who thinks ill of others tends to act ill himself. And, in fact, Grok's troglodyte bias, quite entertaining once one is aware of it, is much easier to demonstrate, given that it feeds largely on messages from X, a network thrown off balance by the mass exodus of users who have given up on it as a venue for reasoned and reasonable dialogue about ideas.
Just as we talk about responsible consumption in food or in clothing, we will also have to start thinking about something similar for AIs. Although they are still seen as interchangeable now, it is easy to imagine them increasingly positioning themselves, even ideologically, however subtly. And the media literacy our children should receive must include at least a minimum awareness of where each one leans, whether left, right, or far right.