Grok's dangerous delusion about white genocide

For a few hours, Elon Musk's artificial intelligence tool Grok suffered a curious outbreak: it began offering answers in which it talked about an alleged "white genocide" being perpetrated in South Africa. It did so, moreover, out of context, in response to questions about baseball, scaffolding construction, or computer programs. The AI insisted that this genocide was real and that it had been "trained" by its creators to consider it so. I suppose it's not hard to guess which famous tweeter, social media owner, and richest man in the world has lately been rambling about how, in post-apartheid South Africa, people suffer hatred because of the color of their skin. All suspicion points, once again, to Musk exploiting his position as a Lord of the Networks to rig the algorithm so that Grok adopts positions that coincide with his obsessions. This was already evident when, during a Super Bowl break, he had a fit of jealousy and called X's engineers to demand that one of his tweets get more interactions than one from the then president, Joe Biden. But that episode never went beyond the typical locker-room complex of an insecure narcissist. Now, however, we are talking about a tool that many people already use for their everyday searches, and it has begun to fire off harangues that painfully coincide with those of white supremacists.

The case demands increasing pressure on regulatory agencies to get their act together. It is urgent that algorithms be auditable and that biases like these, perversely and secretly implanted, be detectable. The apparent neutrality of AI tools makes them extremely powerful instruments in irresponsible hands. And Musk's hands are clearly twisted and threatening, like those of Murnau's Nosferatu, advancing through the shadows of algorithmic opacity.