Hate messages on social media resemble the language of personality disorders.
Research from Texas A&M University places these messages linguistically very close to texts linked to disorders such as narcissism, borderline personality disorder, and antisocial personality disorder.


Entering certain internet forums or social networks like X can often feel like navigating a minefield. One unfortunate comment, or simply something a participant doesn't like, and a storm of insults and threats erupts. Digital hate is so common that we often consider it an inevitable part of social media, and we tend to attribute it to the poor manners of some internet users. But what if hate speech on the internet weren't just a reflection of bad manners? What if it concealed patterns reminiscent of other forms of human communication linked to certain personality disorders?
A paper recently published in PLOS Digital Health by two researchers from Texas A&M University, Andrew William Alexander and Hongbin Wang, shows that hate speech found on social media shares characteristic linguistic features with texts written by people with personality disorders. The authors mapped the phenomenon using mathematical techniques and, when they placed it on a conceptual map of language, found that it sits very close to the speech patterns associated with disorders such as narcissism, borderline personality, and antisocial personality.
To summarize and simplify: narcissism manifests as a constant need for admiration and limited empathy. Borderline personality disorder, in turn, is associated with an emotional roller coaster of intense relationships and a strong fear of abandonment. Antisocial personality disorder is characterized by disregard for rules and for the rights of others, a tendency to manipulate people and situations, and very little remorse for one's actions. This does not mean, as the researchers explicitly state (and it is very important to clarify), that people with these psychiatric diagnoses are more aggressive, but rather that hate speech on social media has a structure reminiscent of the emotional dysregulation characteristic of these conditions.
With the help of AI
To reach these conclusions, the authors compared thousands of messages from hate communities and mental health forums. They collected them from 54 communities on Reddit, an online forum platform that functions as a large community of communities: hate groups, misinformation forums, communities about psychiatric disorders, and control groups. Each message was converted into a 1,536-dimensional mathematical vector using an artificial intelligence language model. Then, using topological data analysis, they constructed a map showing which communities are linguistically closest. Topological data analysis is a mathematical technique for understanding the hidden structure of very large and complex data sets: instead of looking only at individual points, it analyzes the overall way in which they cluster based on their similarity.
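To make the idea concrete, here is a minimal sketch of this kind of pipeline in Python. It is not the authors' code: it assumes off-the-shelf tools (the sentence-transformers library for the embeddings, scikit-learn for the comparison), invented messages, and made-up community labels, and it replaces the paper's topological mapping with a simple cosine-similarity comparison between community averages.

```python
# Toy illustration, not the study's pipeline: embed a few messages,
# average them per community, and see whose language sits closest
# in the embedding space.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical messages, labeled by invented community names.
corpus = {
    "hate_group": [
        "They are a plague and deserve everything that happens to them.",
        "People like that should be driven out for good.",
    ],
    "bpd_forum": [
        "I love them so much, but I am terrified they will leave me.",
        "One minute I am fine, the next I want to burn it all down.",
    ],
    "control": [
        "Does anyone have a good recipe for lentil stew?",
        "Just finished a great hike; the weather was perfect.",
    ],
}

# 1) Convert each message into a fixed-length vector. This model yields
#    384 dimensions; the study used 1,536-dimensional embeddings, but the
#    principle is identical.
model = SentenceTransformer("all-MiniLM-L6-v2")

# 2) Represent each community by the average of its message vectors.
centroids = {
    name: model.encode(messages).mean(axis=0)
    for name, messages in corpus.items()
}

# 3) Compare communities pairwise. Cosine similarity is a crude
#    stand-in for "linguistic closeness" on the paper's map.
names = list(centroids)
matrix = cosine_similarity(np.stack([centroids[n] for n in names]))
for i, a in enumerate(names):
    for b, score in zip(names[i + 1:], matrix[i, i + 1:]):
        print(f"{a} vs {b}: {score:.3f}")
```

On real Reddit data, step 3 would be replaced by the topological mapping the authors describe, which preserves the overall cluster structure instead of reducing everything to pairwise scores.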
The result was striking: hate speech landed right next to the personality disorder communities, far closer to them than to the control groups. The commonalities between the hate groups and the psychiatric disorder communities were specific: an intense use of emotional expressions, a tendency to perceive the other as a threat, and communication marked by conflict. Interestingly, the disinformation forums presented a different pattern. Their language was more similar to that of the control groups, with a slight connection to anxiety disorders. In other words, spreading fake news is not the same thing as hating.
It must be said that this does not imply that people who post hate messages have any of these disorders, only that they have similar communication styles. In other words, the expressions of "haters", as they are often called, can sound like those of someone struggling with emotional regulation. This parallel raises a very interesting idea: if therapies aimed at improving empathy and emotional management work for patients with personality disorders, could they inspire strategies to reduce online toxicity?
Hate speech, Alexander and Wang argue, is not just a matter of ideology: it is also a form of communication marked by emotional dysregulation. This allows for a better understanding of the phenomenon and opens up new avenues for action. They propose three approaches, all quite logical: emotional education, to better manage the impulsiveness the digital world invites; more humane moderation that, rather than simply censoring, explores strategies to promote empathy and reflection; and detection tools that identify hate before it erupts.
However, we must always keep three cautions in mind: personality disorders must not be equated with hate, as that would lead to stigmatization and could fuel prejudice against vulnerable people; these studies analyze texts, not people, and therefore cannot be used to make diagnoses; and detection algorithms must be developed very carefully to avoid excessive censorship that confuses legitimate, lawful emotional language with hate. The best strategy is not just to delete messages, but to learn to speak and listen differently.