Blind Spots Uncovered at the Intersection of AI and Neuroscience – Dozens of Scientific Papers Debunked


Findings debunk dozens of prominent published papers claiming to read minds with EEG.

Is it possible to read a person’s mind by analyzing the electric signals from the brain? The answer may be much more complex than most people think.

Purdue University researchers – working at the intersection of artificial intelligence and neuroscience – say a prominent dataset used to try to answer this question is confounded, and that many eye-popping findings based on it, which received high-profile recognition, are in fact false.

The Purdue team performed extensive tests on the dataset over more than a year. The dataset recorded the brain activity of individuals who viewed a series of images while wearing a cap with dozens of electrodes.

The Purdue team’s work is published in IEEE Transactions on Pattern Analysis and Machine Intelligence. The team received funding from the National Science Foundation.


Purdue University researchers are doing work at the intersection of artificial intelligence and neuroscience. In this photo, a research participant is wearing an EEG cap with electrodes. Credit: Chris Adam/Purdue University

“This measurement technique, known as electroencephalography or EEG, can provide information about brain activity that could, in principle, be used to read minds,” said Jeffrey Mark Siskind, professor of electrical and computer engineering in Purdue’s College of Engineering. “The problem is that they used EEG in such a way that the dataset itself was contaminated. The study was conducted without randomizing the order of images, so the researchers were able to tell what image was being seen just by reading the timing and order information contained in the EEG, instead of solving the real problem of decoding visual perception from the brain waves.”

The Purdue researchers originally began questioning the dataset when they could not obtain similar outcomes from their own tests. That’s when they started analyzing the previous results and determined that a lack of randomization contaminated the dataset.
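The block-design confound described above can be illustrated with a toy simulation (a hypothetical sketch, not the authors' code or data): when each stimulus class occupies one contiguous block of a recording session, a classifier can score far above chance on EEG-like signals that contain no stimulus information at all, simply by exploiting slow drift over the session. Randomizing the presentation order removes the shortcut.

```python
import numpy as np

rng = np.random.default_rng(0)

n_classes, trials_per_class, n_features = 4, 50, 8
n_trials = n_classes * trials_per_class

# Slow session drift: a signal that changes gradually over the recording,
# entirely unrelated to what the participant sees.
drift = np.linspace(0, 1, n_trials)[:, None] * rng.normal(size=(1, n_features))

# Simulated "EEG" = noise + drift. Note: no class-related signal at all.
X = rng.normal(scale=0.1, size=(n_trials, n_features)) + drift

def accuracy(labels):
    """Leave-one-out nearest-centroid classification accuracy."""
    correct = 0
    for i in range(n_trials):
        mask = np.arange(n_trials) != i  # hold out trial i
        centroids = np.stack([X[mask & (labels == c)].mean(axis=0)
                              for c in range(n_classes)])
        pred = np.argmin(np.linalg.norm(centroids - X[i], axis=1))
        correct += (pred == labels[i])
    return correct / n_trials

# Block design: all trials of a class are presented consecutively.
block_labels = np.repeat(np.arange(n_classes), trials_per_class)
# Randomized design: same trials, presentation order shuffled.
random_labels = rng.permutation(block_labels)

print(f"block design accuracy: {accuracy(block_labels):.2f}")   # far above chance
print(f"randomized accuracy:   {accuracy(random_labels):.2f}")  # near chance (0.25)
```

In the block design, each class's centroid lands at a different point along the drift, so the classifier appears to "decode" the stimulus; with randomized labels the same data yield roughly chance accuracy, showing that the apparent decoding came from timing, not brain responses to the images.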

“This is one of the challenges of working in cross-disciplinary research areas,” said Hari Bharadwaj, an assistant professor with a joint appointment in Purdue’s College of Engineering and College of Health and Human Sciences. “Important scientific questions often demand cross-disciplinary work. The catch is that, sometimes, researchers trained in one field are not aware of the common pitfalls that can occur when applying their ideas to another. In this case, the prior work seems to have suffered from a disconnect between AI/machine-learning scientists, and pitfalls that are well-known to neuroscientists.”

The Purdue team reviewed publications that used the dataset for tasks such as object classification, transfer learning, and generation of images depicting human perception and thought from brain-derived representations measured through EEG.

“The question of whether someone can read another person’s mind through electric brain activity is very valid,” said Ronnie Wilbur, a professor with a joint appointment in Purdue’s College of Health and Human Sciences and College of Liberal Arts. “Our research shows that a better approach is needed.”

Reference: “The Perils and Pitfalls of Block Design for EEG Classification Experiments” by Ren Li, Jared S. Johansen, Hamad Ahmed, Thomas V. Ilyevsky, Ronnie B. Wilbur, Hari M. Bharadwaj and Jeffrey Mark Siskind, 19 November 2020, IEEE Transactions on Pattern Analysis and Machine Intelligence.
DOI: 10.1109/TPAMI.2020.2973153

Siskind is a well-known Purdue innovator and has worked on multiple patented technologies with the Purdue Research Foundation Office of Technology Commercialization.

12 Comments on "Blind Spots Uncovered at the Intersection of AI and Neuroscience – Dozens of Scientific Papers Debunked"

  1. John Campbell | April 3, 2021 at 11:20 am | Reply

    Let’s not stop now, there’s a whole slew of papers awaiting the shredder in the fields of astrophysics, cosmology and medicine! I’d be delighted to provide a shredding service, if required!

  2. John Sellers | April 4, 2021 at 8:36 am | Reply

    It is a head-slapping, eye-rolling, tongue-in-cheek OMG experience to hear of anyone doing that kind of analysis without understanding the necessity of systematically using air-tight, best-practice standard operating procedures in their methodology.

    It sounds to me like amateur hour with some manager wet behind the ears pushing a bunch of junk out the door into operation without a clue of how to go about it.

  3. Pablo Sagalá | April 4, 2021 at 6:00 pm | Reply

    This was published some sixty years ago by an Argentinian neurobiologist. As they start from different fundamentals, it was easy to explain why EEG doesn’t map thoughts. See (in Spanish) Mario Crocco’s “Diferencias…” on academia.edu

  4. It’s common for those who support hard determinism to cite the type of studies that have now been debunked. I wonder how they will respond.

  5. Jenny zervakis | April 7, 2021 at 6:50 am | Reply

    These are not blind spots, but simply bad study design. Unfortunate.

  6. Clint campbell | April 7, 2021 at 9:25 am | Reply

    Well, this sounds like a machine-learning problem rather than a neuroscience problem: not handling the training data well, not randomizing, then overfitting to the training set. It sounds like the data is there to recrunch and build a cleaner model, if that is possible at all. Odds are that it won’t transfer well between subjects without some kind of transfer-learning set from each person – like the early days of speech recognition – but it could be possible eventually.

  7. I know it is very much being used. I happen to know for a fact that there are nano implants that work. I have several in my body for my own research and development. They link me to up to six others; imagery, sound, even smells, taste, and emotional touch can be shared. What I say is fact; I’ve gone through hell to program them and get them working.

  8. I also have nano bots in my body. This is going to be something you can now easily get. I know how amazing this technology is.

  9. Ya never know until you ask the right questions and perform the proper experiments… It’s called…science. Let’s keep moving on!

  10. I am suffering from the reading of my thoughts, and everything in my body has been manipulated, as if I were a robot. It looks like science fiction, but it is a reality. It is already possible, but it is not known to the general public; the technique used is called a psychotronic weapon.

  11. Ah! You mean the Japa guy who posted his article on Twitter? Off the Planck navigation… Yeah, you can make an app to handle the timing along the tomography activations, recorrelate, etc. Sure, these guys must have seen it. Of course you cannot read minds off EEGs, or tomographies, but surely another brain can resonate to transmissions of native encodings (brain implementation, no tribal reference), in trivial form, as the size of the signal compared to brain computing resources is negligible and effort is minimized by the common encoding. They have not tried just speech rather than images, right? Duh…

  12. Gee, boy, those helmets must generate a LOOOT of noise with electrostatic, uh, electricity, and don’t have proper resolution, eh? But then all those measures/observations must be taken and correlated anyway, for curiosity’s sake, don’t they?
