Now it's time to escape from Telegram too.
The suspicious alliance between Pavel Durov and Elon Musk: a controversial agreement that forces us to rethink digital privacy


The public falling-out between Donald Trump and Elon Musk, with its trading of reproaches and very serious accusations, has overshadowed another recent move involving the tech mogul, one that affects us more directly as users of digital platforms. If we are even slightly concerned about our privacy, it should make us rethink the way we communicate in the age of AI.
Pavel Durov, the creator of Telegram, announced a few days ago an agreement with xAI, Elon Musk's artificial intelligence company, to integrate the Grok chatbot on the messaging platform. During the first year, xAI will pay $300 million (more than €260 million) to Telegram and will cede 50% of the revenue from Grok subscriptions sold through the app.
The context of this agreement is the open war between Musk and Sam Altman of OpenAI, the company behind ChatGPT. In February, Musk offered $97.4 billion to buy OpenAI; Altman mockingly rejected the offer and countered by proposing to buy Twitter for a tenth of that amount. OpenAI's real response has been to express its willingness to build its own social network around ChatGPT, a project that remains active and shows that we are already immersed in a genuine battle between social networks and AI. In the same vein, X has just activated XChat, which brings features more typical of a mobile chat app to its private messages.
What will xAI do with Telegram content?
The reason for the alliance between Telegram and xAI is strategic: to train Grok, Musk is no longer satisfied with the tweets of hundreds of millions of X users, and will now also have access to the chats of more than 1 billion Telegram users, a volume that OpenAI envies.
According to Durov, xAI will only access the data that Telegram users explicitly share with Grok by conversing with the AI chatbot within the application. However, this statement leaves many questions open. Within Telegram, users will be able to invoke Grok for writing suggestions, to summarize chats, links, and documents, and to create stickers. In all of these cases, Musk's AI will be accessing the content of messages.
Musk plans to feed Grok everything at his disposal. X's privacy policy already contemplates using users' public posts to train its AI models. It's unclear whether xAI will use Telegram chats in a similar way, but all indications are that this is more a data sale than a technological alliance. X, for its part, has taken care to protect itself, changing its terms and conditions to prohibit app developers from training any external AI models on public tweets.
It's worth noting that no one is blameless in this area. Meta Platforms has officially begun collecting and using public data from adult WhatsApp, Instagram, and Facebook users to train its AI models in the European Union, too. It's true that users can fill out a form to object to the use of their public data to train Meta's AI, in accordance with the General Data Protection Regulation, but many are unaware of this.
A success built on misleading promises
Back to Telegram: it has traditionally positioned itself as a more private alternative to WhatsApp. We've often explained here that it isn't: WhatsApp applies end-to-end encryption by default in all conversations, while Telegram only offers it in "secret chats." And the reality is stark: the vast majority of Telegram traffic occurs in regular chats, broadcast channels, and groups that offer no real privacy. These conversations are stored encrypted on the company's servers, but with decryption keys accessible to the company itself.
In addition to this misleading promise of greater privacy, part of Telegram's success is due to innovative features that Meta has occasionally copied into WhatsApp: sending files of up to 2 GB, animated stickers, editing and deleting messages without time limits, multi-device syncing. Meta, for its part, protects the content of chats but aggressively exploits communication metadata (who is chatting with whom, from where, what other apps are installed on their phone) to build detailed user profiles for personalizing its ads.
In practice, the agreement between Telegram and xAI puts Pavel Durov's app at a disadvantage compared to WhatsApp: both now share data, but at least Meta's service encrypts the content of all messages.
Alternatives
For users who prioritize privacy, the landscape is narrowing. Signal has become the go-to option. It's less feature-rich than Telegram, but it applies end-to-end encryption by default, collects far less data than WhatsApp, and is run by a non-profit foundation largely funded by one of WhatsApp's two creators, who left shortly after selling the app to Meta (then Facebook). Signal has been my go-to chat for years.
Another alternative is the Swiss app Threema. Open source and end-to-end encrypted, it strictly complies with the European regulatory framework. Of course, using it requires paying a one-time fee of just under six and a half euros. For users fleeing WhatsApp and/or Telegram, the bigger obstacle will probably be that their usual contacts aren't there. But the decision to leave Telegram is no longer just a matter of personal preference; it's a matter of principle. We should be skeptical of platforms run by individuals capable of betraying their commitments when it suits them financially.
The advertising pivot of the industry
All of this is part of a broader trend: consumer AI is pivoting toward advertising models because subscriptions can't sustain their enormous computational costs. OpenAI burns more than $5 billion each year, while only 4% of its 500 million weekly users pay for the full ChatGPT.
AI companies have concluded that advertising models are more lucrative than subscriptions: OpenAI aims to earn $25 billion from ads by 2029. This economic reality explains why platforms like Telegram are willing to compromise user privacy, and it raises the risk that AI-driven bias and misinformation will contaminate messaging apps as they incorporate advertising platforms.
In a world where AI companies need access to human conversations to train their models, users have become raw material for an industry that treats them with increasing cynicism.