
    AI’s Achilles Heel: New Research Pinpoints Fundamental Weaknesses

By University of Copenhagen - Faculty of Science, January 24, 2024
    University of Copenhagen researchers have proven that fully stable Machine Learning algorithms are unattainable for complex problems, highlighting the critical need for thorough testing and awareness of AI limitations. Credit: SciTechDaily.com

    Researchers from the University of Copenhagen have become the first in the world to mathematically prove that, beyond simple problems, it is impossible to develop algorithms for AI that will always be stable.

    ChatGPT and similar machine learning-based technologies are on the rise. However, even the most advanced algorithms face limitations. Researchers from the University of Copenhagen have made a groundbreaking discovery, mathematically demonstrating that, beyond basic problems, it’s impossible to develop AI algorithms that are always stable. This research could pave the way for improved testing protocols for algorithms, highlighting the inherent differences between machine processing and human intelligence.


Machines interpret medical scans more accurately than doctors, translate foreign languages, and may soon be able to drive cars more safely than humans. However, even the best algorithms have weaknesses. A research team at the Department of Computer Science, University of Copenhagen, is working to reveal them.

Take an automated vehicle reading a road sign as an example. If someone has placed a sticker on the sign, a human driver is unlikely to be distracted. But a machine may easily be thrown off, because the sign now differs from the ones it was trained on.

“We would like algorithms to be stable in the sense that if the input is changed slightly, the output will remain almost the same. Real life involves all kinds of noise which humans are used to ignoring, while machines can get confused,” says Professor Amir Yehudayoff, who heads the group.
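This notion of stability can be probed empirically. The sketch below is illustrative only and not the researchers' method: it perturbs a model's inputs with small random noise and measures how often the prediction changes. The toy model, noise magnitude, and helper names are all assumptions for the sake of the example.

```python
import numpy as np

def stability_rate(predict, X, eps=0.05, trials=20, rng=None):
    """Fraction of (input, trial) pairs for which a perturbation of
    magnitude at most eps leaves the prediction unchanged."""
    rng = np.random.default_rng(rng)
    base = predict(X)            # predictions on the clean inputs
    unchanged, total = 0, 0
    for _ in range(trials):
        noise = rng.uniform(-eps, eps, size=X.shape)
        perturbed = predict(X + noise)
        unchanged += int(np.sum(perturbed == base))
        total += base.size
    return unchanged / total

# Toy "model": classify points by which side of the line x + y = 0 they fall on.
def toy_predict(X):
    return (X[:, 0] + X[:, 1] > 0).astype(int)

# Points far from the decision boundary are stable under small noise;
# the point very close to it may flip.
X = np.array([[1.0, 1.0], [-1.0, -1.0], [0.01, 0.0]])
print(stability_rate(toy_predict, X, eps=0.05, rng=0))
```

A rate of 1.0 would mean no perturbation ever changed a prediction; inputs near the decision boundary pull the rate below that, which is exactly the kind of fragility the researchers are formalizing.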

    A language for discussing weaknesses

As the first in the world, the group, together with researchers from other countries, has proven mathematically that, apart from simple problems, it is not possible to create Machine Learning algorithms that will always be stable. The scientific article describing the result was accepted for publication at one of the leading international conferences on theoretical computer science, Foundations of Computer Science (FOCS).

“I would like to note that we have not worked directly on automated car applications. Still, this seems like a problem too complex for algorithms to always be stable,” says Amir Yehudayoff, adding that this does not necessarily imply major consequences for the development of automated cars:

    “If the algorithm only errs under a few very rare circumstances this may well be acceptable. But if it does so under a large collection of circumstances, it is bad news.”

The scientific article cannot be used by industry to identify bugs in its algorithms. That was never the intention, the professor explains:

    “We are developing a language for discussing the weaknesses in Machine Learning algorithms. This may lead to the development of guidelines that describe how algorithms should be tested. And in the long run, this may again lead to the development of better and more stable algorithms.”

    From intuition to mathematics

    A possible application could be for testing algorithms for the protection of digital privacy.

“Some companies might claim to have developed an absolutely secure solution for privacy protection. Firstly, our methodology might help to establish that the solution cannot be absolutely secure. Secondly, it will be able to pinpoint its weak points,” says Amir Yehudayoff.

First and foremost, though, the scientific article contributes to theory. Especially the mathematical content is groundbreaking, he adds: “We understand intuitively that a stable algorithm should work almost as well as before when exposed to a small amount of input noise. Just like the road sign with a sticker on it. But as theoretical computer scientists, we need a firm definition. We must be able to describe the problem in the language of mathematics. Exactly how much noise must the algorithm be able to withstand, and how close to the original output should the new output be, if we are to accept the algorithm as stable? This is what we have suggested an answer to.”
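One illustrative way to formalize this intuition, stated here as an assumption for exposition rather than the paper's exact definition, is an (ε, δ)-style condition: an algorithm A is stable if inputs that are close produce outputs that are close.

```latex
% Illustrative (epsilon, delta)-style stability condition; the symbols
% d and d' denote distance measures on input and output space respectively.
% This is a generic sketch, not necessarily the definition used in the paper.
\[
d(x, x') \le \varepsilon
\;\Longrightarrow\;
d'\big(A(x), A(x')\big) \le \delta
\]
```

Here ε quantifies how much input noise the algorithm must withstand, and δ quantifies how far the output is allowed to drift, matching the two questions posed in the quote above.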

    Important to keep limitations in mind

The scientific article has attracted considerable interest from colleagues in the theoretical computer science world, but not from the tech industry. Not yet, at least.

“You should always expect some delay between a new theoretical development and interest from people working in applications,” says Amir Yehudayoff, adding with a smile: “And some theoretical developments will remain unnoticed forever.”

However, he does not expect that to happen in this case: “Machine Learning continues to progress rapidly, and it is important to remember that even solutions which are very successful in the real world still have limitations. Machines may sometimes seem able to think, but they do not possess human intelligence. This is important to keep in mind.”

    Reference: “Replicability and Stability in Learning” by Zachary Chase, Shay Moran and Amir Yehudayoff, 2023, Foundations of Computer Science (FOCS) conference.
    DOI: 10.48550/arXiv.2304.03757


