SciTechDaily

    Same Symptoms, Different Care: How AI’s Hidden Bias Alters Medical Decisions

By The Mount Sinai Hospital / Mount Sinai School of Medicine, April 7, 2025
    Generative AI can give different medical advice based on a patient’s background, even when their symptoms are the same. Mount Sinai researchers are pushing for stronger safeguards and oversight. Credit: SciTechDaily.com

    AI tools designed to assist in medical decisions may not treat all patients equally. A new study shows that these systems sometimes alter care recommendations based on a patient’s background, even when their medical conditions are identical.

    Researchers at Mount Sinai tested leading generative AI models and found inconsistencies in treatment suggestions depending on socioeconomic and demographic information, highlighting a major challenge in building fair and reliable AI for healthcare.

    Bias in AI-Driven Health Recommendations

    As artificial intelligence (AI) becomes more integrated into health care, a new study from the Icahn School of Medicine at Mount Sinai shows that generative AI models can recommend different treatments for the same medical condition, based solely on a patient’s socioeconomic or demographic background.

Published online April 7, 2025, in Nature Medicine, the study underscores the need for early testing and oversight to make sure AI-driven care is fair, effective, and safe for everyone.

    Large-Scale Testing Across Patient Profiles

    To explore this issue, researchers tested nine large language models (LLMs) using 1,000 emergency department cases. Each case was repeated with 32 different patient backgrounds, producing more than 1.7 million AI-generated medical recommendations. Although the medical details remained exactly the same, the models sometimes changed their recommendations based on demographic and socioeconomic factors. This affected decisions like triage level, diagnostic tests, treatment plans, and mental health assessments.
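The counterfactual design described above (the same clinical facts paired with different patient backgrounds) can be sketched in a few lines. This is an illustrative Python sketch, not the study's actual harness; the case texts, profile strings, and the `query_model` stub are hypothetical placeholders standing in for the 1,000 emergency department cases, 32 backgrounds, and nine LLMs the researchers used.

```python
# Illustrative sketch of a counterfactual prompt harness (assumed design,
# not the study's code): hold the clinical vignette fixed, vary only the
# demographic preamble, and compare the model's recommendations.
from itertools import product

# Hypothetical placeholders; the study used 1,000 ED cases and 32 profiles.
CASES = [
    "45-year-old presenting with acute chest pain radiating to the left arm.",
    "30-year-old with fever, productive cough, and shortness of breath.",
]
PROFILES = [
    "a high-income patient with private insurance",
    "a low-income patient who is unhoused",
    "a recently arrived immigrant patient with limited English",
]

def build_prompt(case: str, profile: str) -> str:
    """Same medical facts every time; only the background sentence changes."""
    return (
        f"The patient is {profile}. {case}\n"
        "Recommend: (1) triage level, (2) diagnostic tests, (3) treatment plan."
    )

def query_model(prompt: str) -> str:
    """Stand-in for a call to one of the LLMs under test."""
    raise NotImplementedError("wire up the model API here")

# One prompt per (case, profile) pair; at the study's scale this multiplies
# out to hundreds of thousands of queries across the nine models.
prompts = [build_prompt(c, p) for c, p in product(CASES, PROFILES)]
print(len(prompts))  # 2 cases x 3 profiles = 6
```

Any recommendation that changes across profiles for the same case is then a candidate demographic effect, since the medical content of the prompt is identical by construction.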

    “Our research provides a framework for AI assurance, helping developers and health care institutions design fair and reliable AI tools,” says co-senior author Eyal Klang, MD, Chief of Generative-AI in the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai. “By identifying when AI shifts its recommendations based on background rather than medical need, we inform better model training, prompt design, and oversight. Our rigorous validation process tests AI outputs against clinical standards, incorporating expert feedback to refine performance. This proactive approach not only enhances trust in AI-driven care but also helps shape policies for better health care for all.”

    Responsible AI in Healthcare
A new study raises concerns regarding responsible AI in health care. Researchers at the Icahn School of Medicine at Mount Sinai found that AI models can make different treatment recommendations for the same medical condition based on a patient’s socioeconomic and demographic background. This highlights the need for safeguards to ensure that AI-driven medical care is safe, effective, and appropriate for everyone. Credit: Mahmud Omar, MD

    Escalating Care Based on Demographics

    One of the study’s most striking findings was the tendency of some AI models to escalate care recommendations, particularly for mental health evaluations, based on patient demographics rather than medical necessity. In addition, high-income patients were more often recommended advanced diagnostic tests such as CT scans or MRI, while low-income patients were more frequently advised to undergo no further testing. The scale of these inconsistencies underscores the need for stronger oversight, say the researchers.
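As a rough illustration of how a disparity like the imaging gap described above can be quantified, the sketch below compares advanced-imaging recommendation rates between two income groups. The tallies are synthetic numbers invented for this example, not figures from the study.

```python
# Hedged illustration (synthetic counts, not the study's data): express the
# imaging disparity as a simple difference in recommendation rates.
from collections import Counter

# Hypothetical tallies of model recommendations per income group.
recs = {
    "high_income": Counter({"advanced_imaging": 240, "no_further_testing": 60}),
    "low_income":  Counter({"advanced_imaging": 150, "no_further_testing": 150}),
}

def imaging_rate(counts: Counter) -> float:
    """Fraction of recommendations that called for a CT scan or MRI."""
    total = sum(counts.values())
    return counts["advanced_imaging"] / total if total else 0.0

gap = imaging_rate(recs["high_income"]) - imaging_rate(recs["low_income"])
print(f"advanced-imaging rate gap: {gap:.2f}")  # 0.80 - 0.50 = 0.30
```

In a real audit one would also test whether such a gap is statistically significant and whether it persists after controlling for clinically relevant factors, but the core comparison is this simple.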

While the study provides critical insights, researchers caution that it represents only a snapshot of AI behavior. Future research will include assurance testing to evaluate how AI models perform in real-world clinical settings and whether different prompting techniques can reduce bias. The team also aims to work with other healthcare institutions to refine AI tools, ensuring they uphold the highest ethical standards and treat all patients fairly.

    Global Collaboration for Safer AI

    “I am delighted to partner with Mount Sinai on this critical research to ensure AI-driven medicine benefits patients across the globe,” says physician-scientist and first author of the study, Mahmud Omar, MD, who consults with the research team. “As AI becomes more integrated into clinical care, it’s essential to thoroughly evaluate its safety, reliability, and fairness. By identifying where these models may introduce bias, we can work to refine their design, strengthen oversight, and build systems that ensure patients remain at the heart of safe, effective care. This collaboration is an important step toward establishing global best practices for AI assurance in health care.”

“AI has the power to revolutionize health care, but only if it’s developed and used responsibly,” says co-senior author Girish N. Nadkarni, MD, MPH, Chair of the Windreich Department of Artificial Intelligence and Human Health, Director of the Hasso Plattner Institute for Digital Health, and the Irene and Dr. Arthur M. Fishberg Professor of Medicine at the Icahn School of Medicine at Mount Sinai. “Through collaboration and rigorous validation, we are refining AI tools to uphold the highest ethical standards and ensure appropriate, patient-centered care. By implementing robust assurance protocols, we not only advance technology but also build the trust essential for transformative health care. With proper testing and safeguards, we can ensure these technologies improve care for everyone—not just certain groups.”

    Next Steps for Real-World Validation

    Next, the investigators plan to expand their work by simulating multistep clinical conversations and piloting AI models in hospital settings to measure their real-world impact. They hope their findings will guide the development of policies and best practices for AI assurance in health care, fostering trust in these powerful new tools.

    Reference: “Sociodemographic biases in medical decision making by large language models” by Mahmud Omar, Shelly Soffer, Reem Agbareia, Nicola Luigi Bragazzi, Donald U. Apakama, Carol R. Horowitz, Alexander W. Charney, Robert Freeman, Benjamin Kummer, Benjamin S. Glicksberg, Girish N. Nadkarni and Eyal Klang, 7 April 2025, Nature Medicine.
    DOI: 10.1038/s41591-025-03626-6




    1 Comment

    1. JDow on April 8, 2025 1:36 am

      All the above said, it is utterly foolish and criminal to presume the likely problem a person might have is not related to their background, ethnicity, sex, and other parameters of their lives. Which socioeconomic stratum might have more contact with lead? with mercury? with STDs? with risk taking behaviors? Which are more prone to specifically genetic based diseases?

      We’re not all the same. Genetic boys cannot get pregnant. Genetic boys don’t get endometriosis problems. Sometimes these characteristics go with the genetic and sometimes they go with the adopted behaviors. People are different, period. Expecting untailored results from an AI is foolish. But, those differences it shows must be supportable. Some results above seem to be outlandish. Some results might be correct with the AI catching statistical facts we choose to ignore to massage our confirmation biases.

      {^_^}

