    The Future of AI: Self-Learning Machines Could Replace Current Artificial Neural Networks

    By Max Planck Society, November 22, 2023
    Scientists at the Max Planck Institute have devised a more energy-efficient method for AI training, utilizing physical processes in neuromorphic computing. This approach, diverging from traditional digital neural networks, reduces energy consumption and optimizes training efficiency. The team is developing an optical neuromorphic computer to demonstrate this technology, aiming to significantly advance AI systems.

    New physics-based self-learning machines could replace the current artificial neural networks and save energy.

    Artificial intelligence (AI) not only delivers impressive performance but also demands significant energy. The more complex the tasks it undertakes, the greater the energy consumption. Scientists Víctor López-Pastor and Florian Marquardt from the Max Planck Institute for the Science of Light in Erlangen, Germany, have developed a method for more efficient AI training. Their method utilizes physical processes, diverging from traditional digital artificial neural networks.

    OpenAI, the company responsible for the development of GPT-3, the technology powering ChatGPT, has not disclosed the amount of energy needed for the training of this advanced and knowledgeable AI chatbot.

    According to the German statistics company Statista, training GPT-3 required an estimated 1,000 megawatt-hours – about as much as 200 German households with three or more people consume annually. While this energy expenditure has allowed GPT-3 to learn whether the word ‘deep’ is more likely to be followed by the word ‘sea’ or ‘learning’ in its data sets, by all accounts it has not understood the underlying meaning of such phrases.
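    The comparison above implies a per-household figure that the article does not state explicitly; a quick sketch of the arithmetic (using only the numbers given in the text) shows what it works out to:

```python
# Rough arithmetic behind the article's comparison. The training-energy
# estimate and household count come from the text; the per-household
# figure is derived here, not stated in the source.
gpt3_training_mwh = 1000       # estimated GPT-3 training energy (Statista)
households = 200               # German households with three or more people

per_household_kwh = gpt3_training_mwh * 1000 / households
print(per_household_kwh)       # 5000.0 kWh per household per year
```

    That derived figure of roughly 5,000 kWh per year is consistent with typical annual consumption for a larger German household.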

    Neural networks on neuromorphic computers

    In order to reduce the energy consumption of computers, and particularly AI applications, in the past few years several research institutions have been investigating an entirely new concept of how computers could process data in the future. The concept is known as neuromorphic computing. Although this sounds similar to artificial neural networks, it in fact has little to do with them as artificial neural networks run on conventional digital computers. 

    This means that the software, or more precisely the algorithm, is modeled on the brain’s way of working, but digital computers serve as the hardware. They perform the calculation steps of the neural network in sequence, one after the other, differentiating between processor and memory. 

    “The data transfer between these two components alone devours large quantities of energy when a neural network trains hundreds of billions of parameters, i.e. synapses, with up to one terabyte of data,” says Florian Marquardt, director of the Max Planck Institute for the Science of Light and professor at the University of Erlangen.

    The human brain is entirely different and would probably never have been evolutionarily competitive, had it worked with an energy efficiency similar to that of computers with silicon transistors. It would most likely have failed due to overheating. 

    The brain is characterized by undertaking the numerous steps of a thought process in parallel and not sequentially. The nerve cells, or more precisely the synapses, are both processor and memory combined. Various systems around the world are being treated as possible candidates for the neuromorphic counterparts to our nerve cells, including photonic circuits utilizing light instead of electrons to perform calculations. Their components serve simultaneously as switches and memory cells.

    A self-learning physical machine optimizes its synapses independently 

    Together with Víctor López-Pastor, a doctoral student at the Max Planck Institute for the Science of Light, Florian Marquardt has now devised an efficient training method for neuromorphic computers.

    “We have developed the concept of a self-learning physical machine,” explains Florian Marquardt. “The core idea is to carry out the training in the form of a physical process, in which the parameters of the machine are optimized by the process itself.”

    When training conventional artificial neural networks, external feedback is necessary to adjust the strengths of the many billions of synaptic connections.

    “Not requiring this feedback makes the training much more efficient,” says Florian Marquardt. Implementing and training an artificial intelligence on a self-learning physical machine would not only save energy but also computing time. “Our method works regardless of which physical process takes place in the self-learning machine, and we do not even need to know the exact process,” explains Florian Marquardt. “However, the process must fulfill a few conditions.”

    Most importantly, it must be reversible, meaning it must be able to run forward or backward with a minimum of energy loss.
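    Reversibility in this sense is a property of the dynamics, not of the hardware. A minimal sketch (an illustration of the concept, not the authors’ optical system) is a frictionless oscillator integrated with a time-reversible scheme: running the dynamics forward, flipping the velocity, and running forward again retraces the trajectory back to its starting state.

```python
# Illustration of a reversible physical process (not the authors' method):
# a harmonic oscillator under leapfrog integration, which is exactly
# time-reversible up to floating-point error.
def leapfrog(x, v, dt, steps, k=1.0):
    """Integrate x'' = -k*x with the leapfrog (velocity Verlet) scheme."""
    for _ in range(steps):
        v += -k * x * dt / 2   # half kick
        x += v * dt            # drift
        v += -k * x * dt / 2   # half kick
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = leapfrog(x0, v0, dt=0.01, steps=1000)       # run forward
x2, v2 = leapfrog(x1, -v1, dt=0.01, steps=1000)      # flip velocity, run again
# (x2, -v2) recovers (x0, v0): the system retraces its path
```

    A dissipative process (one with friction) would fail this test: information about the initial state is lost as heat, which is exactly the energy loss the training scheme needs to avoid.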

    “In addition, the physical process must be non-linear, meaning sufficiently complex,” says Florian Marquardt. Only non-linear processes can accomplish the complicated transformations between input data and results. A pinball rolling across a plate without colliding with anything is a linear process; if it is deflected by another ball, the dynamics become non-linear.
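    The reason non-linearity is indispensable can be made concrete: composing any number of linear maps yields just another single linear map, so a purely linear process can never express more than one matrix multiplication. A small sketch (a generic linear-algebra illustration, not the authors’ system):

```python
import numpy as np

# Why a linear process is not enough: two linear steps collapse into one.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # first linear step
B = rng.standard_normal((3, 3))   # second linear step
x = rng.standard_normal(3)        # input data

# Composition of linear maps is itself linear: B(Ax) == (BA)x.
assert np.allclose(B @ (A @ x), (B @ A) @ x)

# Inserting a non-linearity (here tanh) between the steps breaks the
# collapse, allowing genuinely richer input-output transformations.
y_nonlinear = B @ np.tanh(A @ x)
y_collapsed = (B @ A) @ x
assert not np.allclose(y_nonlinear, y_collapsed)
```

    This is the same reason digital neural networks interleave their weight matrices with activation functions; a neuromorphic machine needs a physical analogue of that interleaving.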

    Practical test in an optical neuromorphic computer

    Examples of reversible, non-linear processes can be found in optics. Indeed, Víctor López-Pastor and Florian Marquardt are already collaborating with an experimental team developing an optical neuromorphic computer. This machine processes information in the form of superimposed light waves, whereby suitable components regulate the type and strength of the interaction. The researchers’ aim is to put the concept of the self-learning physical machine into practice.

    “We hope to be able to present the first self-learning physical machine in three years,” says Florian Marquardt. By then, there should be neural networks that think with many more synapses and are trained with significantly larger amounts of data than today’s.

    As a consequence, there will likely be an even greater desire to implement neural networks outside conventional digital computers and to replace them with efficiently trained neuromorphic computers. “We are therefore confident that self-learning physical machines have a strong chance of being used in the further development of artificial intelligence,” says the physicist.

    Reference: “Self-Learning Machines Based on Hamiltonian Echo Backpropagation” by Víctor López-Pastor and Florian Marquardt, 18 August 2023, Physical Review X.
    DOI: 10.1103/PhysRevX.13.031020
