
    MIT Researchers Develop a New Way of Managing Memory on Computer Chips

By Larry Hardesty, Massachusetts Institute of Technology | September 26, 2016

Engineers from MIT have found a new way of managing memory on computer chips, one that uses circuit space much more efficiently and is more consistent with existing chip designs.

    A year ago, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory unveiled a fundamentally new way of managing memory on computer chips, one that would use circuit space much more efficiently as chips continue to comprise more and more cores, or processing units. In chips with hundreds of cores, the researchers’ scheme could free up somewhere between 15 and 25 percent of on-chip memory, enabling much more efficient computation.

    Their scheme, however, assumed a certain type of computational behavior that most modern chips do not, in fact, enforce. Last week, at the International Conference on Parallel Architectures and Compilation Techniques — the same conference where they first reported their scheme — the researchers presented an updated version that’s more consistent with existing chip designs and has a few additional improvements.

    The essential challenge posed by multicore chips is that they execute instructions in parallel, while in a traditional computer program, instructions are written in sequence. Computer scientists are constantly working on ways to make parallelization easier for computer programmers.

    The initial version of the MIT researchers’ scheme, called Tardis, enforced a standard called sequential consistency. Suppose that different parts of a program contain the sequences of instructions ABC and XYZ. When the program is parallelized, A, B, and C get assigned to core 1; X, Y, and Z to core 2.

Sequential consistency doesn’t enforce any relationship between the relative execution times of instructions assigned to different cores. It doesn’t guarantee that core 2 will complete its first instruction — X — before core 1 moves on to its second — B. It doesn’t even guarantee that core 2 will begin executing its first instruction — X — before core 1 completes its last one — C. All it guarantees is that, on core 1, A will execute before B and B before C; and on core 2, X will execute before Y and Y before Z.
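The guarantees above can be sketched in a few lines of Python: sequential consistency admits exactly those interleavings of the two cores' instruction streams that preserve each core's own program order. The function below (a hypothetical illustration, using the instruction labels from the text) enumerates them.

```python
from itertools import combinations

def sc_interleavings(core1, core2):
    """Enumerate every global order that sequential consistency permits:
    instructions from the two cores may interleave arbitrarily, but each
    core's own program order (A before B before C, etc.) is preserved."""
    n, m = len(core1), len(core2)
    results = []
    # Choose which of the n+m global slots belong to core 1; the order
    # within each core is then fixed by program order.
    for slots in combinations(range(n + m), n):
        order, i, j = [], 0, 0
        for pos in range(n + m):
            if pos in slots:
                order.append(core1[i]); i += 1
            else:
                order.append(core2[j]); j += 1
        results.append("".join(order))
    return results

orders = sc_interleavings("ABC", "XYZ")
print(len(orders))         # C(6,3) = 20 permitted global orders
print("ABXCYZ" in orders)  # True: a legal interleaving
print("BAXYZC" in orders)  # False: B before A violates core 1's order
```

Note that 20 distinct global orders are legal even for this tiny example; the whole point of the model is that the programmer need not know which one the hardware picks.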

The first author on the new paper is Xiangyao Yu, a graduate student in electrical engineering and computer science. He is joined by his thesis advisor and co-author on the earlier paper, Srini Devadas, the Edwin Sibley Webster Professor in MIT’s Department of Electrical Engineering and Computer Science, and by Hongzhe Liu of Algonquin Regional High School and Ethan Zou of Lexington High School, who joined the project through MIT’s Program for Research in Mathematics, Engineering, and Science (PRIMES).

    Planned disorder

But with respect to reading and writing data — the only type of operations that a memory-management scheme like Tardis is concerned with — most modern chips don’t enforce even this relatively modest constraint. A standard chip from Intel might, for instance, assign the sequence of read/write instructions ABC to a core but let it execute them in the order ACB.

    Relaxing standards of consistency allows chips to run faster. “Let’s say that a core performs a write operation, and the next instruction is a read,” Yu says. “Under sequential consistency, I have to wait for the write to finish. If I don’t find the data in my cache [the small local memory bank in which a core stores frequently used data], I have to go to the central place that manages the ownership of data.”

    “This may take a lot of messages on the network,” he continues. “And depending on whether another core is holding the data, you might need to contact that core. But what about the following read? That instruction is sitting there, and it cannot be processed. If you allow this reordering, then while this write is outstanding, I can read the next instruction. And you may have a lot of such instructions, and all of them can be executed.”
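The reordering Yu describes can be modeled roughly with a store buffer (a simplified illustration, not Intel's actual microarchitecture): a write is parked in a per-core buffer rather than stalling the core, so a later read to a different address completes while the write is still outstanding.

```python
from collections import deque

class BufferedCore:
    """Toy model of relaxed write-to-read ordering: writes wait in a
    per-core store buffer instead of stalling the core, so a later read
    can complete while an earlier write is still outstanding."""
    def __init__(self, memory):
        self.memory = memory
        self.store_buffer = deque()  # pending (addr, value) writes

    def write(self, addr, value):
        # Buffer the write; the core does not wait for it to reach memory.
        self.store_buffer.append((addr, value))

    def read(self, addr):
        # Serve the core's own most recent buffered write if the address
        # matches; otherwise go straight to memory, overtaking the buffer.
        for a, v in reversed(self.store_buffer):
            if a == addr:
                return v
        return self.memory.get(addr, 0)

    def drain(self):
        # Buffered writes eventually reach shared memory, in order.
        while self.store_buffer:
            a, v = self.store_buffer.popleft()
            self.memory[a] = v

mem = {"y": 7}
core = BufferedCore(mem)
core.write("x", 1)    # outstanding write to x
val = core.read("y")  # completes immediately, before the write drains
core.drain()
print(val, mem["x"])  # 7 1
```

In effect the read of `y` executes "before" the earlier write of `x`, which is exactly the write-to-read reordering that sequential consistency would forbid.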

    Tardis uses chip space more efficiently than existing memory management schemes because it coordinates cores’ memory operations according to “logical time” rather than chronological time. With Tardis, every data item in a shared memory bank has its own time stamp. Each core also has a counter that effectively time stamps the operations it performs. No two cores’ counters need agree, and any given core can keep churning away on data that has since been updated in main memory, provided that the other cores treat its computations as having happened earlier in time.
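A much-simplified sketch of the logical-time idea (an illustration, not the full Tardis protocol): each shared data item carries the logical time of its last write, each core keeps its own counter, and a core that touches newer data simply jumps its counter forward rather than synchronizing in physical time.

```python
class SharedItem:
    """A data item in shared memory, stamped with the logical time
    of its last write."""
    def __init__(self, value):
        self.value = value
        self.wts = 0  # logical write timestamp

class TimestampedCore:
    def __init__(self):
        self.clock = 0  # per-core logical counter; cores need not agree

    def read(self, item):
        # Reading is consistent as long as this core's clock catches up
        # to the item's write time; no physical synchronization needed.
        self.clock = max(self.clock, item.wts)
        return item.value

    def write(self, item, value):
        # A write is ordered after everything that saw the old value,
        # so it takes a strictly larger logical timestamp.
        self.clock = max(self.clock, item.wts) + 1
        item.value = value
        item.wts = self.clock

x = SharedItem(0)
c1, c2 = TimestampedCore(), TimestampedCore()
c1.write(x, 42)  # c1.clock -> 1, x.wts -> 1
v = c2.read(x)   # c2 jumps its clock to 1 and sees 42
print(v, c1.clock, c2.clock)  # 42 1 1
```

The counters only ever move forward in response to the data a core actually touches, which is why two cores' clocks can drift apart without breaking consistency.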

    Division of labor

To enable Tardis to accommodate more relaxed consistency standards, Yu and his co-authors simply gave each core two counters, one for read operations and one for write operations. If the core chooses to execute a read before the preceding write is complete, it simply gives the read a lower time stamp, and the chip as a whole knows how to interpret the sequence of events.
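A minimal sketch of the two-counter idea (hypothetical naming, not the paper's actual protocol): a reordered read draws its stamp from a separate read counter that trails the write counter, so in logical time the read still precedes the outstanding write even though it executed first physically.

```python
class RelaxedCore:
    """Each core keeps separate logical counters for reads and writes,
    so a read executed before an earlier, still-outstanding write can
    carry a lower logical timestamp than that write."""
    def __init__(self):
        self.read_ts = 0
        self.write_ts = 0

    def issue_write(self, addr):
        self.write_ts += 1
        return (addr, "W", self.write_ts)

    def issue_reordered_read(self, addr):
        # Physically the read overtakes the pending write, but its stamp
        # places it at or before that write in logical time.
        self.read_ts += 1
        assert self.read_ts <= self.write_ts
        return (addr, "R", self.read_ts)

core = RelaxedCore()
w = core.issue_write("x")           # ('x', 'W', 1) -- outstanding write
r = core.issue_reordered_read("y")  # ('y', 'R', 1) -- executed first
print(r[2] <= w[2])                 # True: logically, the read is not later
```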

    Different chip manufacturers have different consistency rules, and much of the new paper describes how to coordinate counters, both within a single core and among cores, to enforce those rules. “Because we have time stamps, that makes it very easy to support different consistency models,” Yu says. “Traditionally, when you don’t have the time stamp, then you need to argue about which event happens first in physical time, and that’s a little bit tricky.”

    “The new work is important because it’s directly related to the most popular relaxed-consistency model that’s in current Intel chips,” says Larry Rudolph, a vice president and senior researcher at Two Sigma, a hedge fund that uses artificial-intelligence and distributed-computing techniques to devise trading strategies. “There were many, many different consistency models explored by Sun Microsystems and other companies, most of which are now out of business. Now it’s all Intel. So matching the consistency model that’s popular for the current Intel chips is incredibly important.”

    As someone who works with an extensive distributed-computing system, Rudolph believes that Tardis’ greatest appeal is that it offers a unified framework for managing memory at the core level, at the level of the computer network, and at the levels in between. “Today, we have caching in microprocessors, we have the DRAM [dynamic random-access memory] model, and then we have storage, which used to be disk drive,” he says. “So there was a factor of maybe 100 between the time it takes to do a cache access and DRAM access, and then a factor of 10,000 or more to get to disk. With flash [memory] and the new nonvolatile RAMs coming out, there’s going to be a whole hierarchy that’s much nicer. What’s really exciting is that Tardis potentially is a model that will span consistency between processors, storage, and distributed file systems.”

Reference: “Tardis 2.0: Optimized Time Traveling Coherence for Relaxed Consistency Models” by Xiangyao Yu, Hongzhe Liu, Ethan Zou and Srinivas Devadas, 11 September 2016, Proceedings of the 2016 International Conference on Parallel Architectures and Compilation.
    DOI: 10.1145/2967938.2967942
    arXiv:1511.08774