    MIT’s New Neural Network: “Liquid” Machine-Learning System Adapts to Changing Conditions

By Daniel Ackerman, Massachusetts Institute of Technology, February 2, 2021
    Researchers created a neural network that continues learning on the job, beyond its initial training phase.

    The new type of neural network could aid decision making in autonomous driving and medical diagnosis.

    MIT researchers have developed a type of neural network that learns on the job, not just during its training phase. These flexible algorithms, dubbed “liquid” networks, change their underlying equations to continuously adapt to new data inputs. The advance could aid decision making based on data streams that change over time, including those involved in medical diagnosis and autonomous driving.

    “This is a way forward for the future of robot control, natural language processing, video processing — any form of time series data processing,” says Ramin Hasani, the study’s lead author. “The potential is really significant.”

    The research will be presented at February’s AAAI Conference on Artificial Intelligence. In addition to Hasani, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT co-authors include Daniela Rus, CSAIL director and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, and PhD student Alexander Amini. Other co-authors include Mathias Lechner of the Institute of Science and Technology Austria and Radu Grosu of the Vienna University of Technology.

Time series data are both ubiquitous and vital to our understanding of the world, according to Hasani. “The real world is all about sequences. Even our perception — you’re not perceiving images, you’re perceiving sequences of images,” he says. “So, time series data actually create our reality.”

He points to video processing, financial data, and medical diagnostic applications as examples of time series that are central to society. These ever-changing data streams can shift in unpredictable ways. Yet analyzing them in real time, and using them to anticipate future behavior, can boost the development of emerging technologies like self-driving cars. So Hasani built an algorithm fit for the task.

    Hasani designed a neural network that can adapt to the variability of real-world systems. Neural networks are algorithms that recognize patterns by analyzing a set of “training” examples. They’re often said to mimic the processing pathways of the brain — Hasani drew inspiration directly from the microscopic nematode, C. elegans. “It only has 302 neurons in its nervous system,” he says, “yet it can generate unexpectedly complex dynamics.”

    Modeling Neurons After C. elegans

    Hasani coded his neural network with careful attention to how C. elegans neurons activate and communicate with each other via electrical impulses. In the equations he used to structure his neural network, he allowed the parameters to change over time based on the results of a nested set of differential equations.
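For readers who want a concrete picture of what it means for the parameters to change over time, the accompanying paper (arXiv: 2006.04439) writes the hidden state x(t) as the solution of an input-dependent ordinary differential equation. The following is a simplified rendering of that liquid time-constant formulation, offered as a sketch rather than the authors’ exact parameterization:

\[
\frac{d\mathbf{x}(t)}{dt} = -\left[\frac{1}{\tau} + f\big(\mathbf{x}(t), \mathbf{I}(t), t, \theta\big)\right] \odot \mathbf{x}(t) + f\big(\mathbf{x}(t), \mathbf{I}(t), t, \theta\big) \odot A
\]

Here I(t) is the incoming data stream, f is a small neural network with learned parameters θ, and τ and A are learned vectors. Because f depends on the input, each neuron’s effective time constant, roughly τ / (1 + τ f), shifts whenever the data shift, which is the sense in which the network’s equations keep adapting after training.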

    This flexibility is key. Most neural networks’ behavior is fixed after the training phase, which means they’re bad at adjusting to changes in the incoming data stream. Hasani says the fluidity of his “liquid” network makes it more resilient to unexpected or noisy data, like if heavy rain obscures the view of a camera on a self-driving car. “So, it’s more robust,” he says.
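To make this concrete, here is a minimal numerical sketch in Python of how such a cell keeps responding to its inputs at inference time. The parameter names (W_in, W_rec, b, tau, A), the tanh nonlinearity, and the explicit Euler solver are assumptions made for this illustration; the published model uses its own parameterization and a fused ODE solver (see arXiv: 2006.04439).

import numpy as np

def ltc_step(x, I, W_in, W_rec, b, tau, A, dt=0.01):
    # f couples the hidden state x and the current input I through a
    # small nonlinearity; its output modulates the cell's dynamics.
    f = np.tanh(W_rec @ x + W_in @ I + b)
    # f appears in both the decay term and the drive term, so the
    # effective time constant of each unit depends on the incoming
    # data -- the "liquid" behavior described in the article.
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx

# Toy usage: 8 hidden units driven by a 3-dimensional input stream.
rng = np.random.default_rng(0)
x = np.zeros(8)
W_in = 0.1 * rng.normal(size=(8, 3))
W_rec = 0.1 * rng.normal(size=(8, 8))
b, tau, A = np.zeros(8), np.ones(8), np.ones(8)
for step in range(100):
    I = rng.normal(size=3)  # a new observation arrives at each time step
    x = ltc_step(x, I, W_in, W_rec, b, tau, A)

Even with fixed weights, the shared term f in the decay and drive makes each unit’s response speed a function of the current input, which is what lets the state track a shifting data stream.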

    Improving Transparency in Neural Decisions

    There’s another advantage of the network’s flexibility, he adds: “It’s more interpretable.”

Hasani says his liquid network skirts the inscrutability common to other neural networks. “Just changing the representation of a neuron,” which Hasani did with the differential equations, “you can really explore some degrees of complexity you couldn’t explore otherwise.” Because the network uses a small number of highly expressive neurons, it’s easier to peer into the “black box” of its decision making and diagnose why it made a certain characterization.

    “The model itself is richer in terms of expressivity,” says Hasani. That could help engineers understand and improve the liquid network’s performance.

    Hasani’s network excelled in a battery of tests. It edged out other state-of-the-art time series algorithms by a few percentage points in accurately predicting future values in datasets, ranging from atmospheric chemistry to traffic patterns. “In many applications, we see the performance is reliably high,” he says. Plus, the network’s small size meant it completed the tests without a steep computing cost. “Everyone talks about scaling up their network,” says Hasani. “We want to scale down, to have fewer but richer nodes.”

    Hasani plans to keep improving the system and ready it for industrial application. “We have a provably more expressive neural network that is inspired by nature. But this is just the beginning of the process,” he says. “The obvious question is how do you extend this? We think this kind of network could be a key element of future intelligence systems.”

Reference: “Liquid Time-constant Networks” by Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus and Radu Grosu, 14 December 2020, arXiv: 2006.04439.

    This research was funded, in part, by Boeing, the National Science Foundation, the Austrian Science Fund, and Electronic Components and Systems for European Leadership.

