Language Parallels Between Hate Speech and Mental Disorder Forums

Alongside the privileges social media brings come the darker sides of anonymity, instant access, and constant availability. Misuse of these subtle powers has made social media a breeding ground for hateful and hate-inducing content (Windisch et al., 2021). The United Nations (n.d.) defines hate speech as any form of communication—spoken, written, or behavioural—that expresses hostility or uses derogatory or discriminatory language directed at people or communities distinguished by traits like ethnicity, nationality, religion, race, gender, or other aspects of identity.

In common language, hate speech is any offensive language targeted at an individual or a group based on their inherent traits (gender, colour, etc.). With the advent of social media, this issue has become more complicated, and it tends to perpetuate stereotypical thinking and discrimination (Castaño-Pulgarín, 2021). One potential cause of hate speech is the spread of misinformation, which fuels stereotypes, discrimination, and even social violence (Turner, 2025).

For this reason, hate speech has become an increasingly important societal concern, with research highlighting its linguistic and psychological underpinnings. One such study in PLOS Digital Health (Alexander & Wang, 2025) suggests that speech-pattern similarities exist between posts in hate speech communities and communities for certain psychiatric disorders on the social media platform Reddit.

What Previous Studies Reveal

Earlier studies, motivated by the idea that lack of empathy and misinformation are probable causes, found a positive correlation between the Dark Triad personality traits (narcissism, Machiavellianism, and psychopathy) and hate speech (Withers et al., 2017). These traits have overlapping characteristics with the Cluster B personality disorders of the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) (Vossen et al., 2017). This suggests a probable relationship between hate speech and Cluster B personality disorders, and possibly other psychiatric disorders as well (Alexander & Wang, 2025). The research conducted by Alexander and Wang is therefore highly relevant today.

How the Study Was Conducted 

The research was conducted on hate speech on the social media platform Reddit, which is organised into communities dedicated to specific topics. This proved highly beneficial for the study, as it allowed the researchers to draw posts from specific psychiatric disorder populations through these already established communities. Accordingly, 54 communities were selected across four classes: Control, Hate Speech, Misinformation, and Psychiatric Disorders (with subclasses for each psychiatric disorder) (Alexander & Wang, 2025).

To analyse the thousands of posts they had collected, the researchers combined large language models (LLMs) and topological data analysis (TDA). They used GPT-3 to process the text of the posts and extract linguistic features and semantic patterns as embeddings, i.e., numerical representations. These embeddings were then analysed with TDA, which allowed them to model the complex dataset as a high-dimensional shape and identify non-obvious structural relationships.
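To give a loose sense of the TDA side of such a pipeline (this is not the authors' actual code, and the toy two-dimensional vectors below stand in for real LLM embeddings), the sketch computes 0-dimensional persistence: the distance thresholds at which separate clusters of points merge. A large gap between merge distances hints that the embeddings form distinct groups.

```python
import numpy as np

def persistence_h0(points):
    """0-dimensional persistent homology: record the distance thresholds
    at which separate components of the point cloud merge (single-linkage)."""
    n = len(points)
    # Pairwise Euclidean distances between embedding vectors.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))

    # Union-find over points: each merge "kills" one connected component.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    deaths = []
    for dist, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(round(float(dist), 3))
    return deaths  # n-1 merge distances; a large jump suggests cluster structure

# Toy "embeddings": two tight groups of posts, far apart in embedding space.
group_a = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
group_b = np.array([[5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
print(persistence_h0(np.vstack([group_a, group_b])))
# → [0.1, 0.1, 0.1, 0.1, 7.001]
```

The four small merge distances reflect points joining within each group, while the single large one marks the two groups finally connecting: a crude version of the structural signal TDA extracts from embedding spaces.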

The researchers also explored whether the language used in hate speech or misinformation communities more closely resembled the speech patterns of psychiatric disorder communities or of control groups, and, where it resembled psychiatric disorders, which ones in particular (Alexander & Wang, 2025).

What the Study Found

The study revealed that the speech patterns in hate speech communities most closely resembled those found in online communities focused on Borderline Personality Disorder, Narcissistic Personality Disorder, Antisocial Personality Disorder, Schizoid Personality Disorder, and Complex Post-Traumatic Stress Disorder. The strongest similarity was with the Cluster B personality disorders: Antisocial Personality Disorder, Borderline Personality Disorder, and Narcissistic Personality Disorder (Alexander & Wang, 2025).

Potential reasons could be that these disorders are characterised by reduced empathy, difficulty managing anger, and disregard for others (APA, 2013). Hate speech may contribute to or reinforce a lack of empathy in individuals, coming to resemble traits seen in Cluster B personality disorders. Moreover, the speech patterns of misinformation communities were found to be most similar to those of control communities, although some relation to anxiety disorder communities was found.

Conclusion 

The research suggests that hate speech communities and psychiatric disorder communities, especially those for Cluster B personality disorders, share similar speech patterns. The authors emphasise, however, that this does not mean people with such disorders are more prone to hate speech. The findings could nonetheless prove helpful: elements of therapies used for psychiatric disorders could be adapted to combat online hate speech, and strategists could develop new, better-informed interventions. Further studies are required to explore in detail the potential links between the patterns seen in hate speech communities and psychiatric disorder communities.

FAQs

1. What was the main purpose of this study? 

The study aimed to investigate the structural and linguistic relationships between online hate speech, misinformation, and mental health discussions using advanced computational methods.

2. What did this study find about hate speech and psychiatric disorders? 

Posts in online hate speech communities display speech patterns that resemble those found in communities focused on personality disorders such as borderline, narcissistic, and antisocial personality disorders.

3. Why is this research important? 

It provides a deeper understanding of how harmful content spreads online and how it intersects with mental health discourse. These insights can help improve content moderation, early warning systems, and interventions to counter online radicalisation. 

4. Can this method be used to detect harmful content in real time? 

Potentially, yes. The study demonstrates that LLMs and TDA can identify subtle patterns in large datasets, making them promising tools for real-time detection if integrated into moderation systems. However, ethical considerations (e.g., privacy, bias) must be addressed first. 

5. What’s next for this research? 

Future work could focus on: 

  • Developing real-world tools for platforms to flag harmful content. 
  • Studying how to protect vulnerable users without censoring legitimate mental health discussions. 
  • Expanding the method to other languages and cultural contexts.
References

Alexander, A. W., & Wang, H. (2025). Topological data mapping of online hate speech, misinformation, and general mental health: A large language model-based study. PLOS Digital Health, 4(7), e0000935. https://doi.org/10.1371/journal.pdig.0000935

Neuroscience News. (2025, July 29). Online hate speech resembles mental health disorders. Neuroscience News. https://neurosciencenews.com/online-hate-speech-personality-disorder-29537/

United Nations. (n.d.). What is hate speech? https://www.un.org/en/hate-speech/understanding-hate-speech/what-is-hate-speech

Windisch, S., Wiedlitzka, S., & Olaghere, A. (2021). PROTOCOL: Online interventions for reducing hate speech and cyberhate: A systematic review. Campbell Systematic Reviews, 17(1). https://doi.org/10.1002/cl2.1133

...
