
Social media often feels overwhelmingly toxic, but research suggests the reality is less bleak: most harmful content comes from a tiny fraction of users who post frequently and loudly.
Many Americans believe that hostile behavior dominates online spaces, but research suggests this belief is far off the mark. People often assume that nearly half of users on large platforms regularly post abusive or aggressive comments. In reality, severe online toxicity is much less common. For example, Americans estimate that about 43% of Reddit users write highly toxic comments, even though evidence shows that only about 3% actually do. This large gap between perception and reality can quietly encourage pessimism, making people feel that society is more hostile and morally divided than it truly is.
How Researchers Measured Online Toxicity Perceptions
To understand why these beliefs are so widespread, researchers Angela Y. Lee, Eric Neumann, and their colleagues surveyed 1,090 American adults through the online research platform CloudResearch Connect. The study compared what people think about harmful online behavior with existing platform-level data from prior research. The goal was to see whether public perceptions match what actually happens on major social media sites.
The findings showed that people consistently overestimate harmful behavior. Participants believed that toxic commenters on Reddit were 13 times more common than they are in reality. They also greatly overestimated the spread of misinformation on Facebook. On average, participants guessed that 47% of Facebook users share false news stories, while research indicates that only about 8.5% do. These results suggest that many people think harmful content is the norm, even when it represents a small fraction of overall activity.
Why Accurate Detection Does Not Change Beliefs
The researchers also explored whether people simply struggle to recognize toxic content. To test this, participants completed a signal detection task, a type of psychological test designed to measure how accurately someone can identify specific examples among many possibilities. Even when participants correctly identified which posts were toxic, many still believed that a large share of users regularly engage in this behavior.
This suggests that the problem is not a lack of awareness, but a misunderstanding of scale. Highly visible or emotionally charged posts tend to stand out and linger in memory, leading people to assume they reflect typical behavior rather than rare extremes.
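Signal detection tasks like the one described above are conventionally scored with the sensitivity index d', which separates a rater's ability to discriminate toxic from benign posts from their overall tendency to say "toxic." The sketch below shows the standard d' calculation; the hit and false-alarm counts are made-up illustration values, not data from the study.

```python
# Minimal sketch of scoring a signal detection task with d'
# (d' = z(hit rate) - z(false-alarm rate)). Counts are hypothetical.
from statistics import NormalDist

def d_prime(hits: int, misses: int,
            false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity index d'.

    A log-linear correction (add 0.5 to each cell) keeps the rates
    strictly between 0 and 1, so the z-transform never diverges.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Example: a participant labels 20 posts (10 truly toxic, 10 benign),
# correctly flagging 9 toxic posts while false-alarming on 2 benign ones.
print(d_prime(hits=9, misses=1, false_alarms=2, correct_rejections=8))
```

A higher d' means better discrimination; the study's point is that even participants with good d' still overestimated how *many* users produce toxic posts, since the task measures recognition accuracy, not perceived prevalence.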
Correcting Misperceptions Improves Social Outlook
In a follow-up experiment, the researchers tested whether providing accurate information could change attitudes. When participants learned how uncommon severe online toxicity actually is, they reported feeling more positive overall. They were less likely to believe that society is in moral decline and less likely to think that most Americans support harmful behavior online.
According to the authors, people often confuse a very small but extremely vocal group with the broader public. A limited number of prolific users produce much of the toxic and harmful content seen online, creating the illusion that such behavior is widespread. Helping people understand this distinction could reduce the negative emotional impact of social media and support stronger social cohesion by reminding users that most people online are not acting maliciously.
Reference: “Americans overestimate how many social media users post harmful content” by Angela Y. Lee, Eric Neumann, Jamil Zaki, and Jeffrey Hancock, 16 December 2025, PNAS Nexus.
DOI: 10.1093/pnasnexus/pgaf310
1 Comment
Hm, I have experience managing portions of BIX, a modest-size, technically oriented online service contemporary with CompuServe, GEnie, Delphi, and others. That experience suggests that at the time fewer than 1% of the people were toxic. Unlike its competitors, BIX did not allow removal or editing of posts, so someone could not post something beastly, edit it, and then whine that people were claiming he had said beastly things while pointing at the edited posts. Being a beast was something beasts were stuck with once they showed their colors. They tended to get ignored. (And moderators COULD remove posts, usually for profanity.) Even CompuServe seemed to have fewer than 1% obnoxious members, and they allowed editing, too.
(BIX was not perfect. One of its moderators went crazy and terrorized the system, especially the women present, for about a year. I figure that qualifies me as one of the first very few cyberstalking victims. I simply refused to be a victim and stuck it out. I ended up chief moderator for the year or two before it was turned off.)
{^_^}