Xanthorox, the AI that markets itself as an easy tool for committing cybercrimes

Developed by an anonymous user, it promises to bypass antivirus software and operates outside the usual channels.

Aida Xart
29/05/2025

Barcelona. Visitors to its official website are greeted dramatically: against a black background, in red lettering, it calls itself "the killer of WormGPT and EvilGPT." This is how Xanthorox presents itself, a new artificial intelligence (AI) that appeared this year, similar to ChatGPT. It was created by an anonymous developer, who has promoted the tool through public channels such as GitHub, Telegram and Discord.

Unlike most commercial AI systems, which restrict their functions to prevent malicious use, Xanthorox openly presents itself as a tool that can greatly facilitate criminal activity on the Internet. On its main page, it claims to offer "the first AI-generated ransomware that bypasses all antiviruses," with "powerful and intense encryption, optimized for fast and deep penetration into the system."

Xanthorox can generate fake videos and audio (deepfakes), phishing emails (which steal data with a single click), malware and ransomware (which demands payment to recover files hijacked from the user's device). The website can also analyze images and files, perform reasoning, hold voice and audio chats, use the camera and run web searches, all on its own servers, which it claims guarantee the user's complete privacy. Once the features have been presented and absolute security promised, the website lays out its prices: $200 per month for a limited version, $300 for the full chat, or a variable amount depending on the user's needs, to be agreed directly with the tool's owner (a user named Gary Senderson) via chat.

A Repeating Pattern

Xanthorox uses open-source artificial intelligence models that do not incorporate the safety measures common in commercial systems such as ChatGPT. This setup allows it to generate unfiltered content, including instructions for illegal activities such as programming computer viruses. The use of this type of technology is not unprecedented: in 2023, platforms such as WormGPT and FraudGPT (which Xanthorox boasts of having "killed") appeared, offering similar features.

Daniel Kelley, a security researcher at SlashNext, said earlier this year in the technology magazine Scientific American that Xanthorox "is more effective than WormGPT and FraudGPT" because it is "more sophisticated." Casey Ellis, founder of the cybersecurity platform Bugcrowd, explains that, although the details are not yet known, Xanthorox appears to have advanced systems that allow different AI models to review and validate each other's responses, an architecture typical of high-end systems.

"It's just after the money"

Jordi Serra, a cybersecurity expert, explains to ARA that tools like this "make any potential attack much more general, because these websites allow you to test and generate different code in an attempt to bypass antivirus programs," although he notes that "antivirus programs look for specific behaviors in a virus." While these claims are striking, it is not entirely clear that the tool can actually be used to commit large-scale cybercrimes: Xanthorox's scant presence on the main cybercrime forums that researchers monitor suggests that, for now, its real impact on the criminal world is limited. In Serra's words, an AI cannot "generate things out of nothing, without having done much work to train it first."

"For an attack of this type to impersonate a person when the AI does not have a known voice is very difficult unless there are minutes of recordings of that person behind it," says Serra. In addition, personal data is needed to deceive people and, for the moment, AI does not have that capacity: "Many times, what they do is call random numbers claiming to be a relative of yours." He adds that attackers have no specific target audience: "They are only after the money."

Why is Xanthorox legal?

According to Kishon, while AI is not inherently bad, "it makes the job of cybercriminals much easier." Open-source AI systems can be used freely as long as they do not directly violate current laws, so creating a system like Xanthorox is not a crime. Using the tool to commit crimes is illegal, however, and for now the responsibility therefore lies with the user. Serra concludes that artificial intelligence "perfects deception."