    SciTechDaily
    Technology

    Light-Speed AI: MIT’s Ultrafast Photonic Processor Delivers Extreme Efficiency

By Adam Zewe, Massachusetts Institute of Technology | December 12, 2024
    Fully Integrated Deep Neural Network Photonic Processor
    Researchers demonstrated a fully integrated photonic processor that can perform all key computations of a deep neural network optically on the chip, which could enable faster and more energy-efficient deep learning for computationally demanding applications like lidar or high-speed telecommunications. Credit: Sampson Wilcox, Research Laboratory of Electronics.

    A new photonic chip designed by MIT scientists performs all deep neural network computations optically, achieving tasks in under a nanosecond with over 92% accuracy.

This could revolutionize high-demand computing applications, opening the door to high-speed processors that can learn in real time.

    Photonic Machine Learning

    Deep neural networks, the driving force behind today’s most advanced machine-learning applications, have become so large and complex that they are pushing the limits of traditional electronic computing hardware.

    Photonic hardware, which uses light instead of electricity to perform machine-learning calculations, offers a faster, more energy-efficient solution. However, certain neural network operations have been difficult to achieve with photonic devices, forcing reliance on external electronics that slow down processing and reduce efficiency.

    Breakthrough in Photonic Chip Technology

    After a decade of research, scientists from MIT and collaborating institutions have developed a breakthrough photonic chip that overcomes these challenges. They demonstrated a fully integrated photonic processor capable of performing all essential deep neural network computations entirely with light, eliminating the need for external processing.

    The optical device was able to complete the key computations for a machine-learning classification task in less than half a nanosecond while achieving more than 92 percent accuracy — performance that is on par with traditional hardware.

    Photonic Neural Networks and Their Implications

    The chip, composed of interconnected modules that form an optical neural network, is fabricated using commercial foundry processes, which could enable the scaling of the technology and its integration into electronics.

    In the long run, the photonic processor could lead to faster and more energy-efficient deep learning for computationally demanding applications like lidar, scientific research in astronomy and particle physics, or high-speed telecommunications.

    Research Team and Future Prospects

    “There are a lot of cases where how well the model performs isn’t the only thing that matters, but also how fast you can get an answer. Now that we have an end-to-end system that can run a neural network in optics, at a nanosecond time scale, we can start thinking at a higher level about applications and algorithms,” says Saumil Bandyopadhyay ’17, MEng ’18, PhD ’23, a visiting scientist in the Quantum Photonics and AI Group within the Research Laboratory of Electronics (RLE) and a postdoc at NTT Research, Inc., who is the lead author of a paper on the new chip.

    Bandyopadhyay is joined on the paper by Alexander Sludds ’18, MEng ’19, PhD ’23; Nicholas Harris PhD ’17; Darius Bunandar PhD ’19; Stefan Krastanov, a former RLE research scientist who is now an assistant professor at the University of Massachusetts at Amherst; Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research; Matthew Streshinsky, a former silicon photonics lead at Nokia who is now co-founder and CEO of Enosemi; Michael Hochberg, president of Periplous, LLC; and Dirk Englund, a professor in the Department of Electrical Engineering and Computer Science, principal investigator of the Quantum Photonics and Artificial Intelligence Group and of RLE, and senior author of the paper. The research was published on December 2 in Nature Photonics.

    Machine Learning with Light

    Deep neural networks are composed of many interconnected layers of nodes, or neurons, that operate on input data to produce an output. One key operation in a deep neural network involves the use of linear algebra to perform matrix multiplication, which transforms data as it is passed from layer to layer.

    But in addition to these linear operations, deep neural networks perform nonlinear operations that help the model learn more intricate patterns. Nonlinear operations, like activation functions, give deep neural networks the power to solve complex problems.
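The two operations described above can be sketched in a few lines: a linear step (matrix multiplication) followed by a nonlinear activation. This is a generic illustration of a deep-network layer, not code from the MIT system.

```python
import math

def dense_layer(W, x, activation=math.tanh):
    """One deep-network layer: matrix multiplication followed by a nonlinearity."""
    # Linear step: each output neuron is a weighted sum of the inputs (W @ x).
    z = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    # Nonlinear step: an activation function applied element-wise, which
    # lets the network learn more intricate, non-linear patterns.
    return [activation(zi) for zi in z]

W = [[0.5, -0.2],
     [0.1,  0.8]]
x = [1.0, 2.0]
print(dense_layer(W, x))
```

On the photonic chip, the linear step is carried out by optics, and it is the nonlinear step that historically required leaving the optical domain.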

    In 2017, Englund’s group, along with researchers in the lab of Marin Soljacic, the Cecil and Ida Green Professor of Physics, demonstrated an optical neural network on a single photonic chip that could perform matrix multiplication with light.

    But at the time, the device couldn’t perform nonlinear operations on the chip. Optical data had to be converted into electrical signals and sent to a digital processor to perform nonlinear operations.

    “Nonlinearity in optics is quite challenging because photons don’t interact with each other very easily. That makes it very power consuming to trigger optical nonlinearities, so it becomes challenging to build a system that can do it in a scalable way,” Bandyopadhyay explains.

    They overcame that challenge by designing devices called nonlinear optical function units (NOFUs), which combine electronics and optics to implement nonlinear operations on the chip.

    The researchers built an optical deep neural network on a photonic chip using three layers of devices that perform linear and nonlinear operations.

    A Fully Integrated Network

    At the outset, their system encodes the parameters of a deep neural network into light. Then, an array of programmable beamsplitters, which was demonstrated in the 2017 paper, performs matrix multiplication on those inputs.
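A mesh of programmable beamsplitters performs matrix multiplication by composing many 2x2 interferometer units, each acting on a pair of optical amplitudes. The sketch below uses one common textbook parameterization of such a unit; it is illustrative and not the exact convention used on the MIT chip.

```python
import cmath
import math

def beamsplitter_unit(theta, phi):
    """2x2 transfer matrix of one programmable interferometer unit.

    Meshes of such units can compose arbitrary matrix multiplications on
    optical amplitudes. The (theta, phi) parameterization here is an
    illustrative choice, not taken from the paper.
    """
    return [[cmath.exp(1j * phi) * math.cos(theta), -math.sin(theta)],
            [cmath.exp(1j * phi) * math.sin(theta),  math.cos(theta)]]

def apply(U, x):
    """Multiply a 2-vector of optical amplitudes by the 2x2 matrix U."""
    return [U[0][0] * x[0] + U[0][1] * x[1],
            U[1][0] * x[0] + U[1][1] * x[1]]

U = beamsplitter_unit(theta=math.pi / 4, phi=0.3)
out = apply(U, [1.0, 0.0])
# The unit is lossless (unitary): total optical power is preserved.
print(abs(out[0]) ** 2 + abs(out[1]) ** 2)
```

Because each unit is unitary, light passes through the whole mesh without loss in the ideal case, which is what makes the linear layers so energy-efficient.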

    The data then pass to programmable NOFUs, which implement nonlinear functions by siphoning off a small amount of light to photodiodes that convert optical signals to electric current. This process, which eliminates the need for an external amplifier, consumes very little energy.
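The NOFU mechanism described above can be caricatured numerically: a small tapped fraction of the light generates a photocurrent, which in turn sets the transmission seen by the remaining light. The tap fraction, gain, and saturating response below are all made-up illustrative values, not measurements from the device.

```python
import math

def nofu(amplitude, tap=0.1, gain=4.0):
    """Toy model of a nonlinear optical function unit (NOFU).

    A small fraction of the light ('tap') is diverted to a photodiode; the
    resulting photocurrent drives a modulator that attenuates the remaining
    light, producing a nonlinear input-output response. All parameter
    values here are illustrative, not from the paper.
    """
    tapped_power = tap * amplitude ** 2          # optical power seen by the photodiode
    through = math.sqrt(1.0 - tap) * amplitude   # light that stays in the optical path
    # The photocurrent sets the modulator transmission (a saturating response).
    transmission = 1.0 / (1.0 + gain * tapped_power)
    return through * transmission

# Weak inputs pass almost unchanged; strong inputs are compressed,
# which is the qualitative shape of an activation function.
for a in (0.1, 1.0, 3.0):
    print(round(nofu(a), 3))
```

The key point the toy model captures is that the nonlinearity comes from a local opto-electronic feedback on-chip, so no external amplifier or digital processor is needed.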

    “We stay in the optical domain the whole time, until the end when we want to read out the answer. This enables us to achieve ultra-low latency,” Bandyopadhyay says.

    Achieving such low latency enabled them to efficiently train a deep neural network on the chip, a process known as in situ training that typically consumes a huge amount of energy in digital hardware.

    “This is especially useful for systems where you are doing in-domain processing of optical signals, like navigation or telecommunications, but also in systems that you want to learn in real time,” he says.
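The paper's title refers to "forward-only training," a family of methods that estimate gradients from forward passes alone rather than backpropagation. One generic member of that family is weight perturbation, sketched below on a one-parameter toy model; this is an illustration of the general idea, not the paper's actual algorithm.

```python
import random

def forward_only_step(w, x, target, lr=0.1, eps=1e-3):
    """One training step using only forward passes (weight perturbation).

    Forward-only schemes estimate the gradient by evaluating the network
    twice with slightly perturbed weights, instead of backpropagating.
    This is a generic illustration, not the MIT chip's exact algorithm.
    """
    loss = lambda w: (w * x - target) ** 2       # tiny one-weight "network"
    delta = random.choice((-eps, eps))           # random perturbation direction
    # Finite-difference gradient estimate from two forward evaluations.
    grad = (loss(w + delta) - loss(w)) / delta
    return w - lr * grad

random.seed(0)
w = 0.0
for _ in range(200):
    w = forward_only_step(w, x=1.0, target=0.5)
print(round(w, 2))
```

Because each step needs only forward evaluations, and a forward pass through the photonic chip takes under a nanosecond, this style of in situ training maps naturally onto the hardware.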

    The photonic system achieved more than 96 percent accuracy during training tests and more than 92 percent accuracy during inference, which is comparable to traditional hardware. In addition, the chip performs key computations in less than half a nanosecond.

    “This work demonstrates that computing — at its essence, the mapping of inputs to outputs — can be compiled onto new architectures of linear and nonlinear physics that enable a fundamentally different scaling law of computation versus effort needed,” says Englund.

    The entire circuit was fabricated using the same infrastructure and foundry processes that produce CMOS computer chips. This could enable the chip to be manufactured at scale, using tried-and-true techniques that introduce very little error into the fabrication process.

    Scaling up their device and integrating it with real-world electronics like cameras or telecommunications systems will be a major focus of future work, Bandyopadhyay says. In addition, the researchers want to explore algorithms that can leverage the advantages of optics to train systems faster and with better energy efficiency.

    Reference: “Single-chip photonic deep neural network with forward-only training” by Saumil Bandyopadhyay, Alexander Sludds, Stefan Krastanov, Ryan Hamerly, Nicholas Harris, Darius Bunandar, Matthew Streshinsky, Michael Hochberg and Dirk Englund, 2 December 2024, Nature Photonics.
    DOI: 10.1038/s41566-024-01567-z

    This research was funded, in part, by the U.S. National Science Foundation, the U.S. Air Force Office of Scientific Research, and NTT Research.
