
    New Research Debunks AI Doomsday Myths: LLMs Are Controllable and Safe

By University of Bath, August 16, 2024
    A recent study found that large language models (LLMs) like ChatGPT do not pose an existential threat to humanity. These models, while proficient in following instructions and generating sophisticated language, cannot independently learn new skills or develop complex reasoning. The research emphasizes that LLMs remain controllable and predictable, though they could still be misused. The study also dispels fears that AI might develop hazardous abilities, suggesting instead that future research should focus on other risks, such as the generation of fake news.

    Large language models like ChatGPT are unable to learn or develop new abilities on their own, so they do not pose an existential threat to humanity.

    According to recent research from the University of Bath and the Technical University of Darmstadt in Germany, ChatGPT and other large language models (LLMs) are unable to learn autonomously or develop new skills, and therefore do not present an existential threat to humanity.

The study, published in the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) – the premier international conference in natural language processing – reveals that LLMs have a superficial ability to follow instructions and excel at language proficiency; however, they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable, and safe.


    The research team concluded that LLMs – which are being trained on ever larger datasets – can continue to be deployed without safety concerns, though the technology can still be misused.

    With growth, these models are likely to generate more sophisticated language and become better at following explicit and detailed prompts, but they are highly unlikely to gain complex reasoning skills.

    Misconceptions About AI Threats

    “The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies and also diverts attention from the genuine issues that require our focus,” said Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the new study on the ‘emergent abilities’ of LLMs.

    The collaborative research team, led by Professor Iryna Gurevych at the Technical University of Darmstadt in Germany, ran experiments to test the ability of LLMs to complete tasks that models have never come across before – the so-called emergent abilities.

As an illustration, LLMs can answer questions about social situations without ever having been explicitly trained or programmed to do so. While previous research suggested this was a product of models ‘knowing’ about social situations, the researchers showed that it was in fact the result of models using a well-known ability of LLMs to complete tasks based on a few examples presented to them, known as ‘in-context learning’ (ICL).

    Through thousands of experiments, the team demonstrated that a combination of LLMs’ ability to follow instructions (ICL), memory, and linguistic proficiency can account for both the capabilities and limitations exhibited by LLMs.
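To make the idea of in-context learning concrete, here is a minimal, hypothetical sketch in Python of how a few-shot prompt is assembled: the task is never trained into the model; it is demonstrated with a handful of labeled examples placed directly in the prompt, and the model is asked to continue the pattern. The task, labels, and prompt format below are illustrative inventions, not taken from the study.

```python
# Sketch of in-context learning (ICL): a few worked demonstrations
# are prepended to the query, and the model completes the pattern.
# The "social judgment" task and its format are purely illustrative.

def build_icl_prompt(examples, query):
    """Assemble a few-shot prompt: demonstrations first, then the query."""
    lines = []
    for text, label in examples:
        lines.append(f"Statement: {text}\nJudgment: {label}")
    # The final block is left unanswered for the model to complete.
    lines.append(f"Statement: {query}\nJudgment:")
    return "\n\n".join(lines)

# A few demonstrations of a simple social-judgment task.
examples = [
    ("Interrupting someone mid-sentence is polite.", "inappropriate"),
    ("Thanking a host after dinner is polite.", "appropriate"),
]

prompt = build_icl_prompt(examples, "Pushing past people in a queue is fine.")
print(prompt)
```

The point of the researchers' finding is that behavior like this is pattern completion over the demonstrations in the prompt, not evidence that the model has autonomously acquired a new skill.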

    Addressing Fears and Misconceptions

Dr. Tayyar Madabushi said: “The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities including reasoning and planning.

    “This has triggered a lot of discussion – for instance, at the AI Safety Summit last year at Bletchley Park, for which we were asked for comment – but our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid.

    “Concerns over the existential threat posed by LLMs are not restricted to non-experts and have been expressed by some of the top AI researchers across the world.”

However, Dr. Tayyar Madabushi maintains this fear is unfounded, as the researchers’ tests clearly demonstrated the absence of emergent complex reasoning abilities in LLMs.

    “While it’s important to address the existing potential for the misuse of AI, such as the creation of fake news and the heightened risk of fraud, it would be premature to enact regulations based on perceived existential threats,” he said.

    “Importantly, what this means for end users is that relying on LLMs to interpret and perform complex tasks which require complex reasoning without explicit instruction is likely to be a mistake. Instead, users are likely to benefit from explicitly specifying what they require models to do and providing examples where possible for all but the simplest of tasks.”
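The practical advice above can be sketched in Python: rather than handing a model an ambiguous request and hoping it infers the task, state the instruction explicitly and attach a few worked examples. The prompt format, the date-conversion task, and the `explicit_prompt` helper are all hypothetical, chosen only to illustrate the contrast.

```python
# Hedged sketch of the end-user advice: explicit instruction plus
# worked examples, instead of an underspecified request.

def explicit_prompt(instruction, examples, item):
    """Combine an explicit instruction, worked examples, and the new item."""
    demo = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{instruction}\n\n{demo}\n\nInput: {item}\nOutput:"

# Underspecified: is 03/04/2025 March 4 or April 3?
vague = "Deal with this date: 03/04/2025"

# Explicit: the instruction names the format, and examples pin it down.
better = explicit_prompt(
    "Convert dates from DD/MM/YYYY to ISO 8601 (YYYY-MM-DD).",
    [("03/04/2025", "2025-04-03"), ("31/12/1999", "1999-12-31")],
    "07/08/2024",
)
print(better)
```

The design choice mirrors the study's conclusion: since the model's apparent task-solving rests on in-context learning, supplying the instruction and demonstrations yourself is what makes its output predictable.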

    Professor Gurevych added: “… our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news.”

    Reference: “Are Emergent Abilities in Large Language Models just In-Context Learning?” by Sheng Lu, Irina Bigoulaeva, Rachneet Sachdeva, Harish Tayyar Madabushi and Iryna Gurevych, 15 July 2024, 62nd Annual Meeting of the Association for Computational Linguistics.
    DOI: 10.48550/arXiv.2309.01809



