
    Hiddenite: A New AI Processor Based on a Cutting-Edge Neural Network Theory

    By Tokyo Institute of Technology, February 18, 2022


    Tokyo Tech researchers have developed a new accelerator chip called “Hiddenite” that achieves state-of-the-art accuracy in the calculation of sparse “hidden neural networks” with a lower computational burden. By employing the proposed on-chip model construction, which combines weight generation and “supermask” expansion, the Hiddenite chip drastically reduces external memory access for enhanced computational efficiency.

    Deep neural networks (DNNs) are complex machine learning architectures for artificial intelligence (AI) that require numerous parameters to learn to predict outputs. DNNs can, however, be “pruned,” reducing the computational burden and model size. A few years ago, the “lottery ticket hypothesis” took the machine learning world by storm. The hypothesis states that a randomly initialized DNN contains subnetworks that, after training, achieve accuracy equivalent to that of the original DNN. The larger the network, the more “lottery tickets” there are for successful optimization. These lottery tickets allow “pruned” sparse neural networks to achieve accuracies equivalent to those of more complex, “dense” networks, thereby reducing the overall computational burden and power consumption.
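The pruning idea described above can be sketched in a few lines: zero out the lowest-magnitude weights of a dense layer, leaving a sparse subnetwork. This is an illustrative sketch only; the matrix size, random seed, and 50% keep ratio are arbitrary choices for the example, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))  # a small dense weight matrix

# Prune: keep only the top 50% of weights by magnitude, zero the rest.
k = W.size // 2
threshold = np.sort(np.abs(W), axis=None)[-k]  # k-th largest magnitude
mask = (np.abs(W) >= threshold).astype(W.dtype)
W_sparse = W * mask

sparsity = 1.0 - mask.mean()
print(sparsity)  # 0.5
```

The sparse matrix computes the same kind of layer output with half the nonzero parameters, which is the source of the computational savings the hypothesis exploits.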

    Hidden Neural Networks (HNNs) Extract Sparse Subnetworks
    Figure 1. HNNs find sparse subnetworks which achieve equivalent accuracy to the original dense trained model. Credit: Masato Motomura from Tokyo Tech

    One technique for finding such subnetworks is the hidden neural network (HNN) algorithm, which applies AND logic (where the output is high only when all the inputs are high) to the initialized random weights and a binary mask called a “supermask” (Fig. 1). The supermask, defined by the top-k% highest scores, marks unselected and selected connections as 0 and 1, respectively. The HNN improves computational efficiency from the software side; however, neural network computation also requires improvements in hardware.
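A minimal sketch of the supermask construction just described, assuming trained importance scores are already available: the frozen random weights are never updated, and the top-k% of scores select which connections survive. The array size, seed, and 30% keep ratio are illustrative, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.standard_normal(10)  # frozen random initialization (never trained)
scores = rng.random(10)            # learned importance scores (trained instead)

k_percent = 30                     # keep the top 30% of connections
k = max(1, int(len(scores) * k_percent / 100))
top = np.argsort(scores)[-k:]      # indices of the k highest scores

supermask = np.zeros_like(weights)
supermask[top] = 1.0               # 1 = selected, 0 = unselected

effective = weights * supermask    # AND-like selection of the random weights
print(int(supermask.sum()))        # 3
```

Only the scores (and hence the mask) need to be learned and stored; the weights themselves stay at their random initialization.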

    Traditional DNN accelerators offer high performance, but they do not factor in the power consumed by external memory access. Now, researchers from Tokyo Institute of Technology (Tokyo Tech), led by Professors Jaehoon Yu and Masato Motomura, have developed a new accelerator chip called “Hiddenite” that can calculate hidden neural networks with drastically improved power consumption. “Reducing external memory access is the key to reducing power consumption. Currently, achieving high inference accuracy requires large models. But this increases the external memory access needed to load model parameters. Our main motivation behind the development of Hiddenite was to reduce this external memory access,” explains Prof. Motomura. Their study will be featured at the upcoming International Solid-State Circuits Conference (ISSCC) 2022, a prestigious international conference showcasing the pinnacle of achievement in integrated circuits.

    Hiddenite Chip Architecture Schematic
    Figure 2. The new Hiddenite chip offers on-chip weight generation and on-chip “supermask expansion” to reduce external memory access for loading model parameters. Credit: Masato Motomura from Tokyo Tech

    “Hiddenite” stands for Hidden Neural Network Inference Tensor Engine, and the chip is the first HNN inference chip. The Hiddenite architecture (Fig. 2) offers three-fold benefits to reduce external memory access and achieve high energy efficiency. The first is on-chip weight generation, which re-generates the weights using a random number generator. This eliminates the need to store the weights in, and fetch them from, external memory. The second benefit is “on-chip supermask expansion,” which reduces the number of supermasks that the accelerator needs to load. The third improvement is a high-density four-dimensional (4D) parallel processor that maximizes data reuse during computation, thereby improving efficiency.
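The on-chip weight generation rests on a simple property of pseudo-random number generators: given the same seed, they reproduce the same sequence, so frozen random weights can be regenerated on demand instead of being stored and fetched from external memory. A minimal sketch of that idea (the seed value and layer shape here are illustrative):

```python
import numpy as np

SEED = 42  # only the seed needs to be stored, not the weight values

def generate_weights(seed, shape):
    # Regenerate the frozen random weights on demand from a PRNG seed,
    # mimicking the role of Hiddenite's on-chip weight generation.
    return np.random.default_rng(seed).standard_normal(shape)

w_first = generate_weights(SEED, (64, 64))
w_again = generate_weights(SEED, (64, 64))
assert np.array_equal(w_first, w_again)  # identical on every regeneration
```

Since HNNs never train the weights, storing a seed plus the (compressed) supermask is enough to reconstruct the whole model, which is what cuts the external memory traffic.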

    Hiddenite Chip
    Figure 3. Fabricated using 40nm technology, the core of the chip area is only 4.36 square millimeters. Credit: Masato Motomura from Tokyo Tech

    “The first two factors are what set the Hiddenite chip apart from existing DNN inference accelerators,” reveals Prof. Motomura. “Moreover, we also introduced a new training method for hidden neural networks, called ‘score distillation,’ in which the conventional knowledge distillation weights are distilled into the scores because hidden neural networks never update the weights. The accuracy using score distillation is comparable to the binary model while being half the size of the binary model.”

    Based on the Hiddenite architecture, the team has designed, fabricated, and measured a prototype chip using Taiwan Semiconductor Manufacturing Company’s (TSMC) 40nm process (Fig. 3). The chip measures only 3mm x 3mm and handles 4,096 multiply-and-accumulate (MAC) operations at once. It achieves a state-of-the-art level of computational efficiency, up to 34.8 tera-operations per second per watt (TOPS/W), while reducing the amount of model transfer to half that of binarized networks.
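For a rough sense of the throughput arithmetic: each MAC counts as two operations (one multiply, one accumulate), so 4,096 MACs per cycle translate to raw throughput as below. The article does not state the chip’s actual clock frequency, so the 1 GHz used here is purely illustrative:

```python
macs_per_cycle = 4096
ops_per_mac = 2          # one multiply + one accumulate
clock_hz = 1.0e9         # assumed clock for illustration only

tera_ops_per_s = macs_per_cycle * ops_per_mac * clock_hz / 1e12
print(tera_ops_per_s)    # 8.192
```

Note that the 34.8 TOPS/W figure is an efficiency (operations per second per watt of power), not a raw throughput, so it also depends on the measured power draw.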

    These findings and their successful demonstration in a real silicon chip are sure to cause another paradigm shift in the world of machine learning, paving the way for faster, more efficient, and ultimately more environmentally friendly computing.



