SciTechDaily
Science

    Correcting Online Falsehoods Can Make Matters Worse

By Peter Dizikes, Massachusetts Institute of Technology | May 22, 2021
Politely correcting misinformation on Twitter can lead to lower-quality, more partisan, and more toxic retweets from the people being corrected. Credit: Christine Daniloff, MIT

    A new study shows Twitter users post even more misinformation after other users correct them.

    So, you thought the problem of false information on social media could not be any worse? Allow us to respectfully offer evidence to the contrary.

    Not only is misinformation increasing online, but attempting to correct it politely on Twitter can have negative consequences, leading to even less-accurate tweets and more toxicity from the people being corrected, according to new research co-authored by a group of MIT scholars.


    The study revolved around a Twitter field experiment in which a research team offered courteous corrections, complete with links to solid evidence, in replies to flagrantly false tweets about politics.

    “What we found was not encouraging,” says Mohsen Mosleh, a research affiliate at the MIT Sloan School of Management, lecturer at University of Exeter Business School, and a co-author of a new paper detailing the study’s results. “After a user was corrected … they retweeted news that was significantly lower in quality and higher in partisan slant, and their retweets contained more toxic language.”

    The paper, “Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment,” has been published online in CHI ’21: Proceedings of the 2021 Conference on Human Factors in Computing Systems.

    The paper’s authors are Mosleh; Cameron Martel, a PhD candidate at MIT Sloan; Dean Eckles, the Mitsubishi Career Development Associate Professor at MIT Sloan; and David G. Rand, the Erwin H. Schell Professor at MIT Sloan.

    From Attention to Embarrassment?

    To conduct the experiment, the researchers first identified 2,000 Twitter users, with a mix of political persuasions, who had tweeted out any one of 11 frequently repeated false news articles. All of those articles had been debunked by the website Snopes.com. Examples of these pieces of misinformation include the incorrect assertion that Ukraine donated more money than any other nation to the Clinton Foundation, and the false claim that Donald Trump, as a landlord, once evicted a disabled combat veteran for owning a therapy dog.

The research team then created a series of Twitter bot accounts, each of which had existed for at least three months, had gained at least 1,000 followers, and appeared to be a genuine human account. Upon finding any of the 11 false claims being tweeted out, a bot would send a reply message along the lines of, “I’m uncertain about this article — it might not be true. I found a link on Snopes that says this headline is false.” That reply would also link to the correct information.
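The bots' decision rule, as described above, can be sketched in a few lines. This is a hypothetical illustration, not the study's actual code: the claim keys and the Snopes URL are placeholders, and the real experiment matched tweets against 11 specific debunked articles.

```python
# Hypothetical sketch of the bots' correction logic (not the study's code).
# Maps a recognizable fragment of a known false headline to its Snopes debunk.
# Entries here are illustrative placeholders only.
FALSE_CLAIMS = {
    "ukraine top clinton foundation donor": "https://www.snopes.com/fact-check/",
}

def correction_reply(tweet_text):
    """Return the polite correction reply for a tweet repeating a known
    false claim, or None if the tweet matches no claim on file."""
    text = tweet_text.lower()
    for claim, snopes_url in FALSE_CLAIMS.items():
        if claim in text:
            return ("I'm uncertain about this article -- it might not be true. "
                    "I found a link on Snopes that says this headline is false. "
                    + snopes_url)
    return None
```

The key design point from the article is the courteous, hedged wording ("I'm uncertain... it might not be true") paired with a link to evidence, rather than a blunt accusation.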

Among other findings, the researchers observed that the accuracy of the news sources users retweeted declined by roughly 1 percent in the 24 hours after being corrected. Similarly, analyzing more than 7,000 retweets with links to political content made by those accounts in the same 24-hour window, the scholars found an increase of over 1 percent in the partisan lean of the content, and an increase of about 3 percent in the “toxicity” of the retweets, based on an analysis of the language used.

    In all these areas — accuracy, partisan lean, and the language being used — there was a distinction between retweets and the primary tweets written by the Twitter users. Retweets, specifically, degraded in quality, while tweets original to the accounts being studied did not.

    “Our observation that the effect only happens to retweets suggests that the effect is operating through the channel of attention,” says Rand, noting that on Twitter, people seem to spend a relatively long time crafting primary tweets, and little time making decisions about retweets.

    He adds: “We might have expected that being corrected would shift one’s attention to accuracy. But instead, it seems that getting publicly corrected by another user shifted people’s attention away from accuracy — perhaps to other social factors such as embarrassment.” The effects were slightly larger when people were being corrected by an account identified with the same political party as them, suggesting that the negative response was not driven by partisan animosity.

    Ready for Prime Time

As Rand observes, the current result seemingly conflicts with some of the previous findings that he and colleagues have published, such as a study in Nature in March showing that neutral, nonconfrontational reminders about the concept of accuracy can increase the quality of the news people share on social media.

    “The difference between these results and our prior work on subtle accuracy nudges highlights how complicated the relevant psychology is,” Rand says. 

    As the current paper notes, there is a big difference between privately reading online reminders and having the accuracy of one’s own tweet publicly questioned. And as Rand notes, when it comes to issuing corrections, “it is possible for users to post about the importance of accuracy in general without debunking or attacking specific posts, and this should help to prime accuracy and increase the quality of news shared by others.”

If anything, it is possible that highly argumentative corrections could produce even worse results. Rand suggests that the style of corrections and the nature of the source material used in corrections could both be subjects of additional research.

    “Future work should explore how to word corrections in order to maximize their impact, and how the source of the correction affects its impact,” he says.

    Reference: “Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment” by Mohsen Mosleh, Cameron Martel, Dean Eckles and David Rand, May 2021, CHI ’21: Proceedings of the 2021 Conference on Human Factors in Computing Systems.
    DOI: 10.1145/3411764.3445642

    The study was supported, in part, by the William and Flora Hewlett Foundation, the John Templeton Foundation, the Omidyar Group, Google, and the National Science Foundation.



    4 Comments

    1. Clyde Spencer on May 22, 2021 9:33 am

A confounding factor is that political ‘truths’ are not as straightforward as a mathematical proof or a verifiable data point such as the value of a physical constant. Snopes is not perfect. They have been known to make mistakes. However, the real problem is how information about a topic is interpreted, and whether relevant facts are omitted by those either advocating or disputing a so-called ‘falsehood.’ There is an old saw about there always being two sides to a story. Part of that is because of how people will subjectively assign weight to the ‘facts’ relating to the issue.

Despite there not being a bipartisan congressional committee, or grand jury with subpoena power, investigating numerous claims of election fraud, the mainstream media routinely and unreservedly calls all such claims “lies” or “unsupported.” Yet, those same journalists, reporting on the arrest of someone in the act of committing a crime, will judiciously refer to the person arrested as the “suspect.” As long as journalists operate with double standards, one can’t expect much better from the general public.

      Let us know when you have a fool-proof method of determining the ‘truth’ of political claims. Even a jury of peers sometimes makes mistakes.

      • Ben Nielsen on May 23, 2021 3:19 am

        Have you read the publication itself, or are you just commenting based on what you’ve read here?

        Did you look to see which Snopes articles were being referenced in the study (to see if these are applicable to your comment about Snopes being wrong sometimes)?

        Could you please cite a specific election fraud claim labeled as a lie (or unsupported) by the MSM that is true?

        Thanks in advance.

        • Clyde Spencer on May 23, 2021 9:04 am

          You asked, “Could you please cite a specific election fraud claim labeled as a lie (or unsupported) by the MSM that is true?”

          You have missed the point. We don’t have good evidence as to whether the claims of election fraud are true or false. We don’t know! Without a thorough investigation, it is impossible to have confidence that any claim that a statement is a falsehood has support. That is, the journalists are making unsupported claims that the accusations of fraud are unsupported. The evidence may be available, but no one has investigated in depth.

          A lie is a willful negation of the truth. Here there are two problems: 1) to know what the person was thinking; i.e. was the person knowingly lying, or were they just mistaken? [If simply mistaken, it isn’t a lie.] 2) How does one determine what the truth is (the opposite of a lie), when there has been no investigation? In all instances, the election fraud accusations have been dismissed as having “No Standing,” because there was no evidence, because there was no investigation to obtain evidence. Catch 22!

          If a woman files a complaint of rape, the police department is expected to take the complaint seriously and investigate, and try to obtain enough evidence to file charges and bring the matter to trial. They don’t dismiss the complaint for lack of evidence before investigating. They don’t dismiss the complaint of rape as having ‘no standing’ because the rapist isn’t present with a confession.

    2. Clyde Spencer on May 22, 2021 9:39 am

      And, it isn’t just politics that has a questionable record of correctly discerning ‘truth.’
      https://scitechdaily.com/sciences-new-replication-crisis-research-that-is-less-likely-to-be-true-is-cited-more/

      “Who shall guard the guards?”

