    Interactive Coding Scheme is the First to Near Optimality

    By Larry Hardesty, Massachusetts Institute of Technology | October 2, 2014
    MIT engineers present the first interactive coding scheme that approaches the optimum for noise tolerance, transmission rate, and encoding/decoding speed. Credit: Jose-Luis Olivares/MIT

    In a new research paper, MIT engineers describe the first interactive coding scheme to approach optimality on all three classical measures at once: how much noise it can tolerate, the maximum transmission rate it affords, and how time-consuming encoding and decoding are.

    Error-correcting codes are one of the glories of the information age: They’re what guarantee the flawless transmission of digital information over the airwaves or through copper wire, even in the presence of the corrupting influences that engineers call “noise.”

    But classical error-correcting codes work best with large chunks of data: The bigger the chunk, the higher the rate at which it can be transmitted error-free. In the Internet age, however, distributed computing is becoming more and more common, with devices repeatedly exchanging small chunks of data over long periods of time.

    So for the last 20 years, researchers have been investigating interactive-coding schemes, which address the problem of long sequences of short exchanges. Like classical error-correcting codes, interactive codes are evaluated according to three criteria: How much noise can they tolerate? What’s the maximum transmission rate they afford? And how time-consuming are the encoding and decoding processes?

    At the IEEE Symposium on Foundations of Computer Science this month, MIT graduate students past and present will describe the first interactive coding scheme to approach the optimum on all three measures.

    “Previous to this work, it was known how to get two out of three of these things to be optimal,” says Mohsen Ghaffari, a graduate student in electrical engineering and computer science and one of the paper’s two co-authors. “This paper achieves all three of them.”

    Vicious noise

    Moreover, where Claude Shannon’s groundbreaking 1948 analysis of error-correcting codes considered the case of random noise, in which every bit of transmitted data has the same chance of being corrupted, Ghaffari and his collaborator — Bernhard Haeupler, who did his graduate work at MIT and is now an assistant professor at Carnegie Mellon University — consider the more stringent case of “adversarial noise,” in which an antagonist is trying to interfere with transmission in the most disruptive way possible.

    “We don’t know what type of random noise will be the one that actually captures reality,” Ghaffari explains. “If we knew the best one, we would just use that. But generally, we don’t know. So you try to generate a coding that is as general as possible.” A coding scheme that could thwart an active adversary would also thwart any type of random noise.
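    To make the contrast concrete, here is a minimal Python sketch, not taken from the paper: a 5-fold repetition code faces random noise and an adversary with the same budget of bit flips. The message, the code, and the flip budget are all illustrative assumptions.

```python
import random

R = 5  # each message bit is transmitted R times (a repetition code)

def encode(bits):
    return [b for b in bits for _ in range(R)]

def decode(stream):
    # Majority vote within each block of R copies.
    return [int(sum(stream[i:i + R]) > R // 2)
            for i in range(0, len(stream), R)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]   # 8 message bits, so 40 bits on the wire
budget = 6                       # total bit flips the channel may make

# Random noise: the 6 flips land anywhere among the 40 bits, so most
# blocks keep a clear majority and most trials decode correctly.
trials, ok = 1000, 0
for _ in range(trials):
    noisy = encode(msg)
    for i in random.sample(range(len(noisy)), budget):
        noisy[i] ^= 1
    ok += decode(noisy) == msg
print(f"random noise: {ok / trials:.0%} of trials decoded correctly")

# Adversarial noise: the same 6 flips target two blocks, flipping 3 of
# the 5 copies of each, so those two message bits are always lost.
attacked = encode(msg)
for block in range(2):
    for j in range(3):
        attacked[block * R + j] ^= 1
print("adversarial noise decoded correctly:", decode(attacked) == msg)
```

    A scheme that holds up against the worst-case placement of flips, as in the second half of the sketch, automatically holds up against any random placement.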

    Error-correcting codes — both classical and interactive — work by adding some extra information to the message to be transmitted. They might, for instance, tack on some bits that describe arithmetic relationships between the message bits. Both the message bits and the extra bits are liable to corruption, so decoding a message — extracting the true sequence of message bits from the sequence that arrives at the receiver — is usually a process of iterating back and forth between the message bits and the extra bits, trying to iron out discrepancies.
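    As a concrete instance of extra bits that describe arithmetic relationships between message bits, here is the textbook (7,4) Hamming code in Python. This is a standard classical construction, offered only as an illustration, not the interactive scheme in the paper: three XOR parity bits let the decoder locate and repair any single flipped bit.

```python
def hamming_encode(d):
    """d is a list of 4 message bits; returns a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity bits record XOR relationships
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    # Standard positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c):
    """Corrects up to one flipped bit, then extracts the 4 message bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # re-checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # re-checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # re-checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # equals the 1-based error position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1         # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]

msg = [1, 0, 1, 1]
sent = hamming_encode(msg)
received = sent[:]
received[4] ^= 1                     # noise flips one bit in transit
assert hamming_decode(received) == msg
```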

    In interactive communication, the maximum tolerable error rate is one-fourth: If the adversary can corrupt more than a quarter of the bits sent, perfectly reliable communication is impossible. Some prior interactive-coding schemes, Ghaffari explains, could handle that error rate without requiring too many extra bits. But the decoding process was prohibitively complex.

    Making a list

    To keep the complexity down, Ghaffari and Haeupler adopted a technique called list decoding. Rather than iterating back and forth between message bits and extra bits until the single most probable interpretation emerges, their algorithm iterates just long enough to create a list of likely candidates. At the end of their mutual computation, each of the interacting devices may have a list with hundreds of entries.

    But each device, while it has only imperfect knowledge of the messages sent by the other, has perfect knowledge of the messages it sent. So if, at the computation’s end, the devices simply exchange lists, each has enough additional information to zero in on the optimal decoding.
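    The sketch below illustrates that two-step idea under simplifying assumptions: a tiny hypothetical codebook of candidate transcripts stands in for the real protocol, Alice list-decodes to every candidate within a small Hamming distance, and then filters on the bits she knows she sent.

```python
def hamming_dist(a, b):
    return sum(x != y for x, y in zip(a, b))

def list_decode(received, codebook, radius):
    """Return every candidate within `radius` of the received word."""
    return [c for c in codebook if hamming_dist(received, c) <= radius]

# A tiny hypothetical codebook of candidate transcripts. Positions
# 0, 2, 4 are the bits Alice sent; positions 1, 3, 5 are Bob's.
transcripts = [
    (1, 0, 1, 1, 0, 0),   # the real conversation
    (0, 0, 1, 1, 0, 0),   # differs only in one of Alice's bits
    (0, 1, 0, 0, 1, 1),
    (1, 1, 0, 1, 1, 1),
]
true_transcript = transcripts[0]

# Noise corrupts one of Bob's bits on the way to Alice.
received = list(true_transcript)
received[3] ^= 1

# List decoding keeps every plausible candidate instead of guessing one.
candidates = list_decode(tuple(received), transcripts, radius=2)
print("Alice's list:", candidates)   # two entries survive

# Alice knows exactly what she sent at positions 0, 2, 4, so she can
# discard any candidate that disagrees with her own outgoing bits.
alice_sent = {i: true_transcript[i] for i in (0, 2, 4)}
resolved = [c for c in candidates
            if all(c[i] == v for i, v in alice_sent.items())]
print("after filtering:", resolved)  # only the true transcript remains
```

    The actual scheme works over full protocol transcripts rather than a fixed codebook, but the filtering step is the same in spirit: each side's perfect knowledge of its own messages breaks the tie among the list's entries.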

    The maximum tolerable error rate for an interactive-coding scheme — one-fourth — is a theoretical result. The minimum length of an encoded message and the minimum decoding complexity, on the other hand, are surmises based on observation.

    But Ghaffari and Haeupler’s decoding algorithm is nearly linear, meaning that its execution time is roughly proportional to the length of the messages exchanged.

    “It is optimal in the sense that it is linear,” says Mark Braverman, an assistant professor of computer science at Princeton University who has also worked on interactive coding. “That’s an important benchmark.”

    But linear relationships are still defined by constants: y = x is a linear relationship, but so is y = 1,000,000,000x. A linear algorithm that takes an extra second of computation for each additional bit of data it considers isn’t as good as a linear algorithm that takes an extra microsecond.
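    A quick Python illustration of that point: both functions below run in time linear in the input size, but the deliberately exaggerated factor-of-1,000 difference in their per-element constants dominates in practice.

```python
import time

def cheap_linear(data):
    total = 0
    for x in data:               # one addition per element
        total += x
    return total

def costly_linear(data):
    total = 0
    for x in data:               # a thousand additions per element
        for _ in range(1000):
            total += x
    return total

data = list(range(10_000))
for f in (cheap_linear, costly_linear):
    start = time.perf_counter()
    f(data)
    print(f"{f.__name__}: {time.perf_counter() - start:.4f} s")
```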

    “We still need to worry a little bit about constants,” Braverman says. “But before you can worry about constants, you have to know that there is a constant-rate scheme. This is very nice progress and a prerequisite to asking those next questions.”

    Reference: “Optimal Error Rates for Interactive Coding II: Efficiency and List Decoding” by Mohsen Ghaffari and Bernhard Haeupler, 18 October 2014, 2014 IEEE 55th Annual Symposium on Foundations of Computer Science (FOCS).
    DOI: 10.1109/FOCS.2014.49