    SciTechDaily

    Computer Scientists Create Fake Videos That Fool State-of-the-Art Deepfake Detectors

    By University of California - San Diego | February 9, 2021
    Researchers demonstrated that deepfake detectors can be bypassed by inserting adversarial examples into each frame of a video.

    A new study shows deepfake detectors can be easily bypassed using specially crafted video inputs that exploit their weaknesses. 

    Systems designed to detect deepfakes — videos that manipulate real-life footage via artificial intelligence — can be deceived, computer scientists showed for the first time at the WACV 2021 conference, which took place online from January 5 to 9, 2021.

    Researchers showed detectors can be defeated by inserting inputs called adversarial examples into every video frame. The adversarial examples are slightly manipulated inputs that cause artificial intelligence systems such as machine learning models to make a mistake. In addition, the team showed that the attack still works after videos are compressed.
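    The researchers did not release their code, but the core idea of an adversarial example can be sketched with a toy model. In this illustration (entirely hypothetical, not the paper's method), a linear "detector" scores an input, and a small signed perturbation bounded by epsilon flips its decision:

```python
def detector_score(x, w):
    """Linear stand-in for a deepfake classifier: positive means 'fake'."""
    return sum(xi * wi for xi, wi in zip(x, w))

def adversarial_perturb(x, w, epsilon):
    """FGSM-style step: nudge each feature by -epsilon * sign(weight),
    pushing the score toward the 'real' side while barely changing x."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

w = [0.8, -0.5, 0.3]          # detector weights (hypothetical)
x = [0.2, -0.1, 0.1]          # frame features, classified as fake
assert detector_score(x, w) > 0
x_adv = adversarial_perturb(x, w, epsilon=0.2)
assert detector_score(x_adv, w) < 0   # small nudge, decision flipped
```

    Real attacks work the same way against deep networks, using the model's gradient instead of fixed weights, and keep the perturbation small enough to be invisible to viewers.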

    “Our work shows that attacks on deepfake detectors could be a real-world threat,” said Shehzeen Hussain, a UC San Diego computer engineering Ph.D. student and first co-author on the WACV paper. “More alarmingly, we demonstrate that it’s possible to craft robust adversarial deepfakes even when an adversary may not be aware of the inner workings of the machine learning model used by the detector.”

    Face-Focused Detectors Vulnerable to Manipulation

    In deepfakes, a subject’s face is modified to create convincingly realistic footage of events that never actually happened. As a result, typical deepfake detectors focus on the face in videos: first tracking it and then passing the cropped face data to a neural network that determines whether it is real or fake. For example, eye blinking is not reproduced well in deepfakes, so detectors focus on eye movements as one way to make that determination. State-of-the-art deepfake detectors rely on machine learning models to identify fake videos.
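    The typical pipeline the article describes — track and crop the face, then classify the crop — can be sketched as follows. Both stages here are placeholders (a center crop and a brightness threshold), not any specific detector:

```python
def crop_face(frame):
    # Placeholder for a face tracker: take the center region of the frame.
    h, w = len(frame), len(frame[0])
    return [row[w // 4: 3 * w // 4] for row in frame[h // 4: 3 * h // 4]]

def classify(face_crop):
    # Placeholder classifier: mean intensity stands in for learned "fake" cues.
    vals = [v for row in face_crop for v in row]
    return "fake" if sum(vals) / len(vals) > 0.5 else "real"

bright_frame = [[0.9] * 8 for _ in range(8)]
assert classify(crop_face(bright_frame)) == "fake"
```

    Because only the cropped region reaches the classifier, an attacker who perturbs the face pixels can control everything the detector actually sees.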


    XceptionNet, a deepfake detector, labels an adversarial video created by the researchers as real. Credit: University of California San Diego

    The extensive spread of fake videos through social media platforms has raised significant concerns worldwide, particularly hampering the credibility of digital media, the researchers point out. “If the attackers have some knowledge of the detection system, they can design inputs to target the blind spots of the detector and bypass it,” said Paarth Neekhara, the paper’s other first co-author and a UC San Diego computer science student.

    Researchers created an adversarial example for every face in a video frame. While standard operations such as video compression and resizing usually strip adversarial examples from an image, these examples are built to withstand such processing. The attack algorithm estimates, across a set of input transformations, how the model ranks images as real or fake, then uses that estimate to transform images so the adversarial image remains effective even after compression and decompression.
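    The robustness idea — optimize against an average over transformations rather than the raw input — can be sketched with a one-dimensional toy model. All functions here are stand-ins chosen for illustration, not the authors' algorithm:

```python
def detector(x):
    return 0.8 * x             # toy score: positive means "fake"

def compress(x, strength):
    return x * (1 - strength)  # toy stand-in for lossy compression

def expected_score(x, strengths):
    """Average the detector score over sampled transformations."""
    return sum(detector(compress(x, s)) for s in strengths) / len(strengths)

x = 1.0
strengths = [0.0, 0.3, 0.6]
# Descend on the *expected* score so the perturbation survives every
# transformation, not just the untransformed input.
for _ in range(50):
    grad = 0.8 * sum(1 - s for s in strengths) / len(strengths)
    x -= 0.1 * grad
assert all(detector(compress(x, s)) < 0 for s in strengths)
```

    A perturbation tuned only for the raw input could be undone by compression; averaging over transformations is what lets the adversarial frame keep fooling the detector after the video is re-encoded.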

    The modified face is then inserted into the frame, and the process is repeated for every frame to create a deepfake video. The attack can also be applied to detectors that operate on entire video frames rather than just face crops.

    The team declined to release their code so it wouldn’t be used by hostile parties.

    High Success Rate

    Researchers tested their attacks in two scenarios: one where the attackers have complete access to the detector model, including the face extraction pipeline and the architecture and parameters of the classification model; and one where attackers can only query the machine learning model to figure out the probabilities of a frame being classified as real or fake.

    In the first scenario, the attack’s success rate is above 99 percent for uncompressed videos and 84.96 percent for compressed videos. In the second scenario, the success rate is 86.43 percent for uncompressed and 78.33 percent for compressed videos. This is the first work to demonstrate successful attacks on state-of-the-art deepfake detectors.
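    The second, query-only scenario can be sketched as a black-box attack: the attacker never sees the model, only asks for the probability that a frame is fake, and estimates a gradient from those answers by finite differences. The internal logistic score below is purely hypothetical:

```python
import math

def query_prob_fake(x):
    # Black box to the attacker: internally a logistic score (hypothetical).
    return 1 / (1 + math.exp(-(2 * x - 1)))

def estimate_gradient(f, x, h=1e-4):
    """Finite-difference gradient estimate built from queries alone."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
assert query_prob_fake(x) > 0.5           # frame starts out classified as fake
for _ in range(100):
    x -= 0.5 * estimate_gradient(query_prob_fake, x)
assert query_prob_fake(x) < 0.5           # queries alone flipped the decision
```

    This is why the lower black-box success rates (86.43 and 78.33 percent) are the more alarming result: no knowledge of the detector's architecture or parameters was needed.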

    “To use these deepfake detectors in practice, we argue that it is essential to evaluate them against an adaptive adversary who is aware of these defenses and is intentionally trying to foil these defenses,” the researchers write. “We show that the current state-of-the-art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector.”

    To improve detectors, researchers recommend an approach similar to what is known as adversarial training: during training, an adaptive adversary continues to generate new deepfakes that can bypass the current state-of-the-art detector; and the detector continues improving in order to detect the new deepfakes.

    Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples

    Shehzeen Hussain, Malhar Jere, Farinaz Koushanfar, Department of Electrical and Computer Engineering, UC San Diego

    Paarth Neekhara, Julian McAuley, Department of Computer Science and Engineering, UC San Diego


