
    Which Face is Real? Using Frequency Analysis to Identify “Deep-Fake” Images

    By Ruhr-University Bochum, July 17, 2020


    This method exposes fake images created by computer algorithms rather than by humans.

    They look deceptively real, but they are made by computers: so-called deep-fake images are generated by machine-learning algorithms, and humans are pretty much unable to distinguish them from real photos. Researchers at the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum and the Cluster of Excellence “Cyber Security in the Age of Large-Scale Adversaries” (Casa) have developed a new method for efficiently identifying deep-fake images. To this end, they analyze the images in the frequency domain, using an established signal-processing technique.

    Frequency Analysis Fake Images
    Frequency analysis reveals typical artifacts in computer-generated images. Credit: © RUB, Marquard

    The team presented their work at the International Conference on Machine Learning (ICML) on 15 July 2020, one of the leading conferences in the field of machine learning. Additionally, the researchers make their code freely available online*, so that other groups can reproduce their results.

    Interaction of two algorithms results in new images

    Deep-fake images — a portmanteau of “deep learning” and “fake” — are generated with the help of computer models known as Generative Adversarial Networks, or GANs for short. Two algorithms work together in these networks: the first creates random images based on certain input data; the second decides whether a given image is a fake or not. If it judges the image to be a fake, the second algorithm instructs the first to revise the image, and the cycle repeats until the second algorithm no longer recognizes the image as a fake.
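    The adversarial loop described above can be sketched with a deliberately tiny toy model. This is an illustration only, not a real GAN: here the generator's entire output is one number and the discriminator's entire model is one number, so the interplay is visible without any neural-network machinery.

```python
import random

# Toy sketch of the adversarial loop behind a GAN (NOT a real neural
# network). "Real" data clusters around REAL_MEAN; the generator's
# output is a single number g (standing in for a fake image); the
# discriminator's whole model is its estimate d of where real data lives.
random.seed(0)
REAL_MEAN = 1.0

g = -1.0   # generator starts far from the real data
d = 0.0    # discriminator's estimate of the real-data location
lr = 0.1   # step size for both players

for _ in range(200):
    real = REAL_MEAN + random.gauss(0, 0.05)  # draw a noisy "real" sample

    # Discriminator step: refine its notion of what real data looks like.
    d += lr * (real - d)

    # Generator step: the discriminator flags g as fake because it sits
    # far from d, so the generator revises g toward what passes as real.
    g += lr * (d - g)

# After many rounds the discriminator can no longer separate g from
# real samples -- the fake has become "authentic".
print(round(g, 1))
```

    In a real GAN both players are deep networks and the feedback is a gradient through the discriminator, but the revise-until-accepted dynamic is the same.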

    Bochum Fake Image Research Team
    Members of the Bochum-based research team include: Thorsten Holz, Lea Schönherr, Joel Frank and Thorsten Eisenhofer (left to right). Credit: RUB, Marquard

    In recent years, this technique has helped make deep-fake images more and more authentic. On the website “Which Face Is Real”**, users can check whether they’re able to distinguish fakes from original photos. “In the era of fake news, it can be a problem if users don’t have the ability to distinguish computer-generated images from originals,” says Professor Thorsten Holz from the Chair for Systems Security.

    For their analysis, the Bochum-based researchers used the data sets that also form the basis of the above-mentioned page “Which face is real”. In this interdisciplinary project, Joel Frank, Thorsten Eisenhofer, and Professor Thorsten Holz from the Chair for Systems Security cooperated with Professor Asja Fischer from the Chair of Machine Learning as well as Lea Schönherr and Professor Dorothea Kolossa from the Chair of Digital Signal Processing.

    Frequency analysis reveals typical artifacts

    To date, deep-fake images have been analyzed using complex statistical methods. The Bochum group chose a different approach by converting the images into the frequency domain using the discrete cosine transform. The generated image is thus expressed as the sum of many different cosine functions. Natural images consist mainly of low-frequency functions.
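    To build intuition for this step, here is a from-scratch sketch of the transform: an unoptimized 1-D DCT-II in plain Python, with made-up signals standing in for image content. It shows the property the paragraph relies on, namely that smooth signals concentrate their energy in the low-frequency cosine coefficients.

```python
import math

def dct2_1d(x):
    """Unnormalized 1-D DCT-II: express x as a sum of cosine functions."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def spectrum_energy(x):
    """Squared magnitude of each DCT coefficient."""
    return [c * c for c in dct2_1d(x)]

# A smooth ramp, standing in for "natural-image-like" content.
smooth = [n / 7 for n in range(8)]
# A rapidly alternating signal, standing in for high-frequency content.
alternating = [(-1) ** n for n in range(8)]

smooth_spec = spectrum_energy(smooth)
alt_spec = spectrum_energy(alternating)

# The smooth signal puts almost all of its energy into the low-frequency
# half of the spectrum; the alternating one into the high-frequency half.
print(sum(smooth_spec[:4]) > sum(smooth_spec[4:]))   # True
print(sum(alt_spec[4:]) > sum(alt_spec[:4]))         # True
```

    For real images the same transform is applied in two dimensions (rows, then columns), which is what produces the spectra shown in the figure below.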

    Image Frequency Analysis
    Images of people transformed into the frequency domain: the upper left corner represents low-frequency image areas, the lower right corner represents high-frequency areas. On the left, you can see the transformation of a photo of a real person: the frequency range is evenly distributed. The transformation of the computer-generated photo (right) contains a characteristic grid structure in the high-frequency range – a typical artifact. Credit: © RUB, Lehrstuhl für Systemsicherheit

    The analysis has shown that images generated by GANs exhibit artifacts in the high-frequency range. For example, a typical grid structure emerges in the frequency representation of fake images. “Our experiments showed that these artifacts do not only occur in GAN-generated images. They are a structural problem of all deep learning algorithms,” explains Joel Frank from the Chair for Systems Security. “We assume that the artifacts described in our study will always tell us whether the image is a deep-fake image created by machine learning,” adds Frank. “Frequency analysis is, therefore, an effective way to automatically recognize computer-generated images.”
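    The detection idea that follows from this observation can be sketched as a simple energy-ratio heuristic. This is an illustration under assumptions, not the authors' method (they train classifiers on full DCT spectra), and the 8×8 "images" below are synthetic: a smooth gradient for the natural case, and the same gradient with a faint checkerboard mimicking the grid-like GAN artifact.

```python
import math

def dct_1d(x):
    """Unnormalized 1-D DCT-II."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def dct_2d(img):
    """Separable 2-D DCT: transform each row, then each column."""
    rows = [dct_1d(row) for row in img]
    cols = [dct_1d(list(col)) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]  # spec[u][v]

def high_freq_ratio(img, cutoff=4):
    """Fraction of spectral energy far from the upper-left (low-frequency) corner."""
    spec = dct_2d(img)
    total = hi = 0.0
    for u, row in enumerate(spec):
        for v, c in enumerate(row):
            total += c * c
            if u + v >= cutoff:   # toward the lower-right = high frequencies
                hi += c * c
    return hi / total

# A smooth 8x8 gradient, standing in for a natural photo patch.
natural = [[(u + v) / 14 for v in range(8)] for u in range(8)]
# The same gradient plus a faint checkerboard, mimicking the grid-like
# high-frequency artifact seen in the spectra of GAN-generated images.
faked = [[natural[u][v] + 0.25 * (-1) ** (u + v) for v in range(8)]
         for u in range(8)]

print(high_freq_ratio(natural) < high_freq_ratio(faked))  # True
```

    A single threshold on this ratio would already flag the checkerboard patch as suspicious; the paper's classifiers learn far richer decision rules from the same spectral representation.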

    Reference: “Leveraging Frequency Analysis for Deep Fake Image Recognition” by Joel Frank, Thorsten Eisenhofer, Lea Schönherr, Asja Fischer, Dorothea Kolossa and Thorsten Holz, 2020, International Conference on Machine Learning (ICML).

    Notes

    * Code available at GitHub.

    ** Website: WhichFaceIsReal.com


    3 Comments

    1. Edward Bear on July 18, 2020 7:15 am

      Unfortunately, it will be just as easy to use this technique to remove the artifacts from “deep fake” photographs in the next generation of development.

      • Mark Keller on July 18, 2020 4:23 pm

        Yep. The moment DeepFakes was created, I knew what was coming… it ain’t here yet, but it almost is, and we aren’t in the least bit ready for it.

    2. Glitch Switch on March 15, 2023 8:00 pm

      Mark Keller, not necessarily. People I work with have been testing various spectroscopic techniques to find a simple & foolproof way of discerning all computer-generated images. Consider that deep learning algorithms either generate from scratch, or using a mosaic of pixels from “real” images, or a blend. Both selfies and deep fakes have telltale photonic patterns. Night vision goggles are said to be very handy, & then there’s the code which will give away anomalies, like repetition (I’m not well versed in this area but I think that was the gist).

      I don’t want to give away too many tricks just yet, but there’s an obvious checkmate move where machine learning & code pirates with malevolent intent will ONLY be able to develop fakes from a limited pool, that will increasingly shrink as each protocol nullifies the most obvious sources. Another factor in their predicted obsolescence is the way in which amateurs have been saturating the surface net with cheesy retrograde apps, and dumb social media gimmicks, so the diligent few have already figured out the obvious telltales.
