Researchers Discover Neurons That Can Predict What We Are Going To Say Before We Say It


A new study using advanced Neuropixels probes provides insights into how the brain’s neurons enable the formulation and verbal expression of thoughts, revealing the pre-verbal planning of speech sounds. The research, which could inform the development of speech prosthetics and improve treatments for language disorders, underscores the complexity and efficiency of the brain’s language production capabilities.

The findings may be used to develop new therapies for speech and language impairments.

A recent study conducted by Massachusetts General Hospital (MGH) researchers has utilized sophisticated brain recording methods to reveal the collaborative function of neurons in the human brain, enabling individuals to formulate their thoughts into words and subsequently articulate them verbally.

Together, these findings provide a detailed map of how speech sounds such as consonants and vowels are represented in the brain well before they are even spoken and how they are strung together during language production.

The work, published in Nature, reveals how the brain’s neurons enable language production and could lead to improvements in the understanding and treatment of speech and language disorders.

“Although speaking usually seems easy, our brains perform many complex cognitive steps in the production of natural speech—including coming up with the words we want to say, planning the articulatory movements, and producing our intended vocalizations,” says senior author Ziv Williams, MD, an associate professor in Neurosurgery at MGH and Harvard Medical School.

“Our brains perform these feats surprisingly fast—about three words per second in natural speech—with remarkably few errors. Yet how we precisely achieve this feat has remained a mystery.”

Technological Breakthroughs in Neuronal Recording

When they used a cutting-edge technology called Neuropixels probes to record the activities of single neurons in the prefrontal cortex, a frontal region of the human brain, Williams and his colleagues identified cells that are involved in language production and that may underlie the ability to speak. They also found that there are separate groups of neurons in the brain dedicated to speaking and listening.

“The use of Neuropixels probes in humans was first pioneered at MGH. These probes are remarkable—they are smaller than the width of a human hair, yet they also have hundreds of channels that are capable of simultaneously recording the activity of dozens or even hundreds of individual neurons,” says Williams, who worked to develop these recording techniques with Sydney Cash, MD, PhD, a professor in Neurology at MGH and Harvard Medical School, who also helped lead the study. “Use of these probes can therefore offer unprecedented new insights into how neurons in humans collectively act and how they work together to produce complex human behaviors such as language.”

Decoding Speech Elements

The study showed how neurons in the brain represent some of the most basic elements involved in constructing spoken words—from simple speech sounds called phonemes to their assembly into more complex strings such as syllables.

For example, the consonant sound /d/, which is produced by touching the tongue to the hard palate behind the teeth, is needed to produce the word “dog.”

By recording individual neurons, the researchers found that certain neurons become active before this phoneme is spoken out loud. Other neurons reflected more complex aspects of word construction such as the specific assembly of phonemes into syllables.

With their technology, the investigators showed that it’s possible to reliably determine the speech sounds that individuals will say before they articulate them.

In other words, scientists can predict what combination of consonants and vowels will be produced before the words are actually spoken. This capability could be leveraged to build artificial prosthetics or brain-machine interfaces capable of producing synthetic speech, which could benefit a range of patients.
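As a purely illustrative sketch (not the authors’ method), predicting an upcoming phoneme from pre-speech neuronal activity can be framed as a simple classification problem: each recording window yields a vector of firing rates, and a classifier maps that vector to the phoneme spoken moments later. All firing rates and labels below are invented for illustration:

```python
import math

# Toy training data: firing rates (spikes/s) of four recorded neurons,
# measured in a window *before* each phoneme is spoken.
# All values are invented for illustration only.
train = [
    ([12.0, 3.0, 8.0, 1.0], "d"),
    ([11.0, 4.0, 7.0, 2.0], "d"),
    ([2.0, 14.0, 1.0, 9.0], "a"),
    ([3.0, 13.0, 2.0, 8.0], "a"),
]

def predict_phoneme(rates):
    """Nearest-centroid classifier: return the phoneme whose mean
    training firing pattern is closest to the observed rates."""
    groups = {}
    for pattern, label in train:
        groups.setdefault(label, []).append(pattern)
    best, best_dist = None, float("inf")
    for label, patterns in groups.items():
        centroid = [sum(col) / len(col) for col in zip(*patterns)]
        dist = math.dist(rates, centroid)
        if dist < best_dist:
            best, best_dist = label, dist
    return best

# A new pre-speech activity pattern resembling the /d/ examples:
print(predict_phoneme([11.5, 3.5, 7.5, 1.5]))  # prints "d"
```

The real study used population-level recordings and far richer models, but the same principle applies: distinct phonemes evoke distinct activity patterns before articulation, which is what makes prediction possible.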

“Disruptions in the speech and language networks are observed in a wide variety of neurological disorders—including stroke, traumatic brain injury, tumors, neurodegenerative disorders, neurodevelopmental disorders, and more,” says Arjun Khanna, a co-author of the study. “Our hope is that a better understanding of the basic neural circuitry that enables speech and language will pave the way for the development of treatments for these disorders.”

The researchers hope to expand on their work by studying more complex language processes, including how people choose the words they intend to say and how the brain assembles words into sentences that convey a person’s thoughts and feelings to others.

Reference: “Single-neuronal elements of speech production in humans” by Arjun R. Khanna, William Muñoz, Young Joon Kim, Yoav Kfir, Angelique C. Paulk, Mohsen Jamali, Jing Cai, Martina L. Mustroph, Irene Caprara, Richard Hardstone, Mackenna Mejdell, Domokos Meszéna, Abigail Zuckerman, Jeffrey Schweitzer, Sydney Cash and Ziv M. Williams, 31 January 2024, Nature.
DOI: 10.1038/s41586-023-06982-w


This work was supported by the National Institutes of Health.
