    AI Uses Potentially Dangerous “Shortcuts” To Solve Complex Recognition Tasks

    By York University | November 9, 2022
    The researchers revealed that deep convolutional neural networks were insensitive to configural object properties.

    Research from York University finds that even the smartest AI can’t match humans’ visual processing.

    Deep convolutional neural networks (DCNNs) do not view things in the same way that humans do (through configural shape perception), which might be harmful in real-world AI applications. This is according to Professor James Elder, co-author of a York University study recently published in the journal iScience.

    The study, conducted by Elder, who holds the York Research Chair in Human and Computer Vision and is Co-Director of York’s Centre for AI & Society, and Nicholas Baker, an assistant psychology professor at Loyola College in Chicago and a former VISTA postdoctoral fellow at York, finds that deep learning models fail to capture the configural nature of human shape perception.

    To investigate how the human brain and DCNNs perceive holistic, configural object properties, the researchers used novel visual stimuli known as “Frankensteins.”

    “Frankensteins are simply objects that have been taken apart and put back together the wrong way around,” says Elder. “As a result, they have all the right local features, but in the wrong places.”

    The researchers discovered that although Frankensteins confuse the human visual system, they do not confuse DCNNs, revealing the networks’ insensitivity to configural object properties.
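    The manipulation behind this finding can be illustrated with a toy sketch. The function below is an assumption for illustration only: it shuffles square image tiles, whereas the study’s actual stimuli rearranged object parts within animal silhouettes. Either way, the key property is the same: every local feature survives, but the global configuration is destroyed.

```python
import numpy as np

def frankenstein(image: np.ndarray, grid: int = 2, seed: int = 0) -> np.ndarray:
    """Scramble an image's global configuration while keeping local features.

    Splits the image into a grid of tiles and reassembles them in a random
    order. A crude stand-in for the study's stimuli, which rearranged object
    *parts* within animal silhouettes rather than square tiles.
    """
    h, w = image.shape[:2]
    th, tw = h // grid, w // grid
    # Cut the image into grid x grid tiles, row-major order.
    tiles = [image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
             for r in range(grid) for c in range(grid)]
    # Reassemble the tiles in a random (but reproducible) order.
    order = np.random.default_rng(seed).permutation(len(tiles))
    rows = [np.concatenate([tiles[order[r * grid + c]] for c in range(grid)], axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0)

img = np.arange(64, dtype=float).reshape(8, 8)
scrambled = frankenstein(img, grid=2, seed=1)
# Every local pixel value survives; only the global arrangement changes.
assert sorted(img.ravel()) == sorted(scrambled.ravel())
```

    A configurally sensitive observer, such as a human, should find the scrambled version much harder to recognize; a model that relies only on local features would, per the study’s finding, classify it about as confidently as the original.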

    Real-World Risks of DCNN Shortcuts in Object Recognition

    “Our results explain why deep AI models fail under certain conditions and point to the need to consider tasks beyond object recognition in order to understand visual processing in the brain,” Elder says. “These deep models tend to take ‘shortcuts’ when solving complex recognition tasks. While these shortcuts may work in many cases, they can be dangerous in some of the real-world AI applications we are currently working on with our industry and government partners,” Elder points out.

    One such application is traffic video safety systems: “The objects in a busy traffic scene – the vehicles, bicycles, and pedestrians – obstruct each other and arrive at the eye of a driver as a jumble of disconnected fragments,” explains Elder. “The brain needs to correctly group those fragments to identify the correct categories and locations of the objects. An AI system for traffic safety monitoring that is only able to perceive the fragments individually will fail at this task, potentially misunderstanding risks to vulnerable road users.”

    According to the researchers, modifications to training and architecture aimed at making networks more brain-like did not lead to configural processing, and none of the networks could accurately predict trial-by-trial human object judgments. “We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition,” notes Elder.
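    The trial-by-trial comparison mentioned above can be sketched as a simple agreement score between a network’s per-trial correctness and humans’. The arrays below are hypothetical illustrations, not data from the study:

```python
import numpy as np

def trialwise_agreement(model_correct, human_correct) -> float:
    """Fraction of trials on which a model's right/wrong pattern matches humans'.

    A toy version of a trial-by-trial comparison; the study reports that no
    tested network accurately predicted which trials humans got wrong.
    """
    m = np.asarray(model_correct, dtype=bool)
    h = np.asarray(human_correct, dtype=bool)
    return float(np.mean(m == h))

# Hypothetical per-trial outcomes (1 = correct); not data from the study.
human = [1, 0, 1, 1, 0, 1]   # humans err on configurally scrambled trials
dcnn = [1, 1, 1, 1, 1, 1]    # a shortcut-driven network is right everywhere
print(trialwise_agreement(dcnn, human))  # 0.666..., i.e. 4 of 6 trials agree
```

    A network insensitive to scrambling answers correctly on trials where humans fail, so its right/wrong pattern diverges from the human one, which is exactly the mismatch the study measured.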

    Reference: “Deep learning models fail to capture the configural nature of human shape perception” by Nicholas Baker and James H. Elder, 11 August 2022, iScience.
    DOI: 10.1016/j.isci.2022.104913

    The study was funded by the Natural Sciences and Engineering Research Council of Canada. 


