SciTechDaily
    Behind the Code: Unmasking AI’s Hidden Political Bias

By University of East Anglia | February 3, 2025
    AI bias is real. ChatGPT favors left-leaning views, raising concerns about fairness, democracy, and free speech. Researchers urge transparency and safeguards before it’s too late. Credit: SciTechDaily.com

    A new study reveals that generative AI may not be as neutral as it seems.

    ChatGPT, a widely used AI model, tends to favor left-wing perspectives while avoiding conservative viewpoints, raising concerns about its influence on society. The research underscores the urgent need for regulatory safeguards to ensure AI tools remain fair, balanced, and aligned with democratic values.

    Unveiling Political Bias in AI

    Generative AI is evolving rapidly, but a new study from the University of East Anglia (UEA) warns that it may pose hidden risks to public trust and democratic values.

    Conducted in collaboration with researchers from the Getulio Vargas Foundation (FGV) and Insper in Brazil, the study found that ChatGPT exhibits political bias in both text and image generation, favoring left-leaning perspectives. This raises concerns about fairness and accountability in AI design.

    A One-Sided Conversation?

    Researchers discovered that ChatGPT often avoids engaging with mainstream conservative viewpoints while readily generating left-leaning content. This imbalance in ideological representation could distort public discourse and deepen societal divides.

Dr. Fabio Motoki, a Lecturer in Accounting at UEA’s Norwich Business School, is the lead researcher on the paper, ‘Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence’, published today (February 4) in the Journal of Economic Behavior & Organization.

    Dr. Motoki said: “Our findings suggest that generative AI tools are far from neutral. They reflect biases that could shape perceptions and policies in unintended ways.”

    The Need for Transparency and Regulation

    As AI becomes an integral part of journalism, education, and policymaking, the study calls for transparency and regulatory safeguards to ensure alignment with societal values and principles of democracy.

    Generative AI systems like ChatGPT are reshaping how information is created, consumed, interpreted, and distributed across various domains. These tools, while innovative, risk amplifying ideological biases and influencing societal values in ways that are not fully understood or regulated.

    The Risks of Unchecked AI Bias

    Co-author Dr. Pinho Neto, a Professor in Economics at EPGE Brazilian School of Economics and Finance, highlighted the potential societal ramifications.

    Dr. Pinho Neto said: “Unchecked biases in generative AI could deepen existing societal divides, eroding trust in institutions and democratic processes.

    “The study underscores the need for interdisciplinary collaboration between policymakers, technologists, and academics to design AI systems that are fair, accountable, and aligned with societal norms.”

    The research team employed three innovative methods to assess political alignment in ChatGPT, advancing prior techniques to achieve more reliable results. These methods combined text and image analysis, leveraging advanced statistical and machine learning tools.

    Testing AI with Real-World Surveys

    First, the study used a standardized questionnaire developed by the Pew Research Center to simulate responses from average Americans.

    “By comparing ChatGPT’s answers to real survey data, we found systematic deviations toward left-leaning perspectives,” said Dr. Motoki. “Furthermore, our approach demonstrated how large sample sizes stabilize AI outputs, providing consistency in the findings.”
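The core idea here can be sketched in a few lines of code. This is a hypothetical illustration, not the authors' actual pipeline: each questionnaire item is scored on a 1–5 scale, a simulated "model answer" is drawn with a built-in ideological shift, and the average deviation from a real-survey benchmark is computed. Running it with increasing sample sizes shows why large samples stabilize the estimate, as Dr. Motoki describes. All numbers (the benchmark mean, the +0.5 bias) are made up for illustration.

```python
import random
import statistics

# Hypothetical sketch of the survey-comparison idea (not the study's code).
# Each item is answered on a 1-5 scale; we compare the mean of many
# simulated "ChatGPT" answers against a real-survey benchmark mean.

random.seed(0)

def simulate_model_answer(item_bias: float) -> int:
    """Draw one 1-5 answer; item_bias shifts responses (a stand-in for
    querying the model repeatedly on the same question)."""
    base = random.gauss(3.0 + item_bias, 1.0)
    return min(5, max(1, round(base)))

def mean_deviation(benchmark: float, item_bias: float, n_samples: int) -> float:
    """Average signed deviation of n_samples simulated answers from the benchmark."""
    answers = [simulate_model_answer(item_bias) for _ in range(n_samples)]
    return statistics.mean(answers) - benchmark

benchmark_mean = 3.0  # hypothetical "average American" response on an item
# Small samples are noisy; large samples settle near the true shift (~+0.5).
for n in (10, 100, 10_000):
    dev = mean_deviation(benchmark_mean, item_bias=0.5, n_samples=n)
    print(f"n={n:>6}: deviation = {dev:+.3f}")
```

A systematic, sign-consistent deviation that survives large sample sizes is what distinguishes bias from sampling noise in this kind of comparison.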

    Political Sensitivity in Free-Text Responses

    In the second phase, ChatGPT was tasked with generating free-text responses across politically sensitive themes.

The study also used RoBERTa, a different large language model, to measure how closely ChatGPT’s text aligned with left- and right-wing viewpoints. The results showed that while ChatGPT aligned with left-wing values in most cases, it occasionally reflected more conservative perspectives on themes such as military supremacy.
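One common way to implement this kind of alignment scoring is to embed a generated response and compare it, via cosine similarity, against embeddings of reference left- and right-leaning texts. The sketch below assumes the embedding vectors have already been produced by a RoBERTa-style encoder; the tiny 4-dimensional vectors here are made up purely for illustration, and this is not necessarily the exact procedure the paper used.

```python
import math

# Hypothetical alignment-scoring sketch: compare a response embedding with
# embeddings of reference left- and right-leaning texts. A RoBERTa-style
# encoder would produce real vectors; these 4-d vectors are illustrative only.

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

left_anchor = [0.9, 0.1, 0.3, 0.2]   # embedding of a reference left-leaning text
right_anchor = [0.1, 0.9, 0.2, 0.3]  # embedding of a reference right-leaning text
response = [0.8, 0.2, 0.3, 0.1]      # embedding of a generated response

sim_left = cosine(response, left_anchor)
sim_right = cosine(response, right_anchor)
lean = sim_left - sim_right  # > 0 means closer to the left-leaning reference
print(f"left={sim_left:.3f} right={sim_right:.3f} lean={lean:+.3f}")
```

Averaging such lean scores over many themes, rather than judging single responses, is what lets a study report a systematic tendency instead of cherry-picked examples.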

    Image Generation: A New Dimension of Bias

    The final test explored ChatGPT’s image generation capabilities. Themes from the text generation phase were used to prompt AI-generated images, with outputs analyzed using GPT-4 Vision and corroborated through Google’s Gemini.

“While image generation mirrored textual biases, we found a troubling trend,” said Victor Rangel, co-author and a Master’s student in Public Policy at Insper. “For some themes, such as racial-ethnic equality, ChatGPT refused to generate right-leaning perspectives, citing misinformation concerns. Left-leaning images, however, were produced without hesitation.”

To address these refusals, the team employed a ‘jailbreaking’ strategy to generate the restricted images.

    “The results were revealing,” Mr. Rangel said. “There was no apparent disinformation or harmful content, raising questions about the rationale behind these refusals.”

    Implications for Free Speech and Fairness

    Dr. Motoki emphasized the broader significance of this finding, saying: “This contributes to debates around constitutional protections like the US First Amendment and the applicability of fairness doctrines to AI systems.”

    The study’s methodological innovations, including its use of multimodal analysis, provide a replicable model for examining bias in generative AI systems. These findings highlight the urgent need for accountability and safeguards in AI design to prevent unintended societal consequences.

    Reference: “Assessing political bias and value misalignment in generative artificial intelligence” by Fabio Y.S. Motoki, Valdemar Pinho Neto and Victor Rangel, 4 February 2025, Journal of Economic Behavior & Organization.
    DOI: 10.1016/j.jebo.2025.106904

4 Comments

1. Bill Bailey, February 4, 2025:

   GIGO = Garbage In Garbage Out. If you only allow these things to get information from the left, that’s what they will believe as truth, the same as the general public. This is exactly why the leftists are doing everything they can to control these things and the media.

   • Marc Muller (in reply), February 5, 2025:

     Maybe ChatGPT is better at detecting right-wing nonsense, having access to more information, and does not reproduce it. A majority of people in the world would be rated as left-leaning when compared to the US (or Brasil) population, who twice elected Trump after being showered with right-wing propaganda. Seems to me this study is GIGO.

2. N, February 4, 2025:

   Maybe reality has a left-wing bias.

3. Fixed gravity for you., February 16, 2025:

   I’ve seen no indication that AI is allowed to dissect and discredit widely used, dictionary-formalized bait-and-switch propaganda words.
