    Why We Don’t Talk Like Computers: Scientists Finally Have an Answer

By Saarland University, January 18, 2026
    Human language may appear inefficient compared with digital codes, yet its structure is deeply tuned to how the brain interacts with the world. Rather than compressing information into abstract symbols, languages build meaning step by step, drawing on shared experience and learned patterns. Credit: Shutterstock

    Human language is structured to minimize mental effort by using familiar, predictive patterns grounded in lived experience.

    Human languages are remarkably complex systems. About 7,000 languages are spoken around the world, ranging from those with only a few remaining speakers to widely used languages such as Chinese, English, Spanish, and Hindi, which are spoken by billions of people.

    Despite their many differences, all languages serve the same basic purpose. They communicate meaning by combining individual words into phrases and then organizing those phrases into sentences. Each level carries its own meaning, and together they allow people to share ideas in a way that can be clearly understood.

    Why language is not digitally compressed

    “This is actually a very complex structure. Since the natural world tends towards maximizing efficiency and conserving resources, it’s perfectly reasonable to ask why the brain encodes linguistic information in such an apparently complicated way instead of digitally, like a computer,” explains Michael Hahn.

    Hahn, a Professor of Computational Linguistics at Saarland University, has been exploring this question with his colleague Richard Futrell from the University of California, Irvine. In theory, encoding information as a simple binary sequence of ones and zeros would be far more efficient because it allows information to be compressed more tightly than natural language. This raises an obvious question. Why do humans not communicate, metaphorically speaking, like R2-D2 from Star Wars, but instead rely on spoken language? Hahn and Futrell have now identified an answer.
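The compression gap Hahn alludes to can be illustrated with a toy experiment (a sketch for illustration, not a method from the study): a general-purpose compressor such as Python’s zlib exploits the redundancy of natural-language text to pack it into far fewer bytes than its plain spelling requires.

```python
import zlib

# Natural language is highly redundant: familiar words and phrases repeat.
text = "the five green cars drove past the five green cars " * 8
raw = text.encode("utf-8")

# A general-purpose compressor squeezes out that redundancy.
packed = zlib.compress(raw, level=9)

print(f"plain text: {len(raw)} bytes")
print(f"compressed: {len(packed)} bytes")
```

The compressed form is much shorter, but, as the article goes on to argue, the redundancy the compressor removes is exactly what lets listeners predict what comes next.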

    Michael Hahn, Chair for Language, Computation, and Cognition at Saarland University. Credit: Saarland University/Thorsten Mohr

“Human language is shaped by the realities of life around us,” Hahn says. “If, for instance, I were to talk about half a cat paired with half a dog and I referred to this using the abstract term ‘gol’, nobody would know what I meant, as it’s pretty certain that no one has seen a gol. It simply does not reflect anyone’s lived experience. Equally, it makes no sense to blend the words ‘cat’ and ‘dog’ into a string of characters that uses the same letters but is impossible to interpret,” he continues. A sequence like “gadcot” would be meaningless to us, even though it contains the letters from both words. By contrast, the phrase “cat and dog” is immediately understandable because both words refer to familiar animals that most people recognize.

    Familiar structure lowers cognitive effort

Hahn summarizes the main findings of the study as follows: “Put simply, it’s easier for our brain to take what might seem to be the more complicated route.”

    Although the information is not in its most compressed form, the computational load for the brain is much lower because the human brain processes language in constant interaction with the familiar natural environment. Coding the information in a purely binary digital form might seem more efficient, as the information can be transmitted in a shorter time, but such a code would be detached from our real-world experience.

Michael Hahn says the daily drive to work provides a good analogy: “On our usual commute, the route is so familiar that we drive almost on autopilot. Our brain knows exactly what to expect, so the effort it needs to make is much lower. Taking a shorter but less familiar route feels much more tiring, as the new route demands that we be far more attentive during the drive.” Mathematically speaking: “The number of bits the brain needs to process is far smaller when we speak in familiar, natural ways.”
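In information-theoretic terms, the “bits” Hahn mentions correspond to surprisal, −log₂ p(word | context): probable continuations cost few bits, improbable ones many. A minimal sketch with invented bigram probabilities (the numbers are purely illustrative, not taken from the study):

```python
import math

# Hypothetical conditional probabilities p(next word | previous word),
# standing in for the statistics that a lifetime of language use ingrains.
bigram_prob = {
    ("green", "cars"): 0.20,   # familiar ordering: adjective, then noun
    ("cars", "green"): 0.001,  # scrambled ordering: rare, hence surprising
}

def surprisal_bits(prev: str, nxt: str) -> float:
    """Information content of hearing `nxt` after `prev`, in bits."""
    return -math.log2(bigram_prob[(prev, nxt)])

print(f"familiar  'green cars': {surprisal_bits('green', 'cars'):.2f} bits")
print(f"scrambled 'cars green': {surprisal_bits('cars', 'green'):.2f} bits")
```

The familiar ordering costs only a couple of bits to process, while the scrambled one costs several times more, which is one way to read the claim that familiar phrasing lowers the brain’s computational load.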

    Prediction shapes how sentences are understood

    Encoding and decoding information digitally would therefore require significantly more cognitive effort for both speaker and listener. Instead, the human brain continuously calculates the probabilities of words and phrases occurring in sequence, and because we use our native language daily for tens of thousands of days across a lifetime, these sequence patterns become deeply ingrained, reducing the computational load even further.

Hahn offers another example: “When I say the German phrase ‘Die fünf grünen Autos’ (Engl.: ‘the five green cars’), the phrase will almost certainly make sense to another German speaker, whereas ‘Grünen fünf die Autos’ (Engl.: ‘green five the cars’) won’t,” he says.

Consider what happens when a speaker utters the phrase “Die fünf grünen Autos”. It begins with the German definite article “Die”. At that point, a German-speaking listener will already know that the word “Die” is likely to signal a feminine singular noun or a plural noun of any gender. This allows the brain to rule out masculine or neuter singular nouns immediately.

The next word, “fünf”, is highly likely to refer to something countable, which rules out non-enumerable concepts like “love” or “thirst”. The next word in the sequence, “grünen”, tells the listener that the as-yet-unknown noun will be in the plural form and is green in colour. It could be cars, but could just as well be bananas or frogs. Only when the final word in the sequence, “Autos”, is uttered does the brain resolve the remaining ambiguity. As the phrase unfolds, the number of interpretative possibilities narrows until (in most cases) only one interpretation is left.
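This word-by-word narrowing can be sketched as successive filtering over a tiny hypothetical lexicon (the entries and feature labels below are invented for illustration):

```python
# Toy lexicon: each noun carries a few grammatical and semantic features.
lexicon = {
    "Autos":   {"plural", "countable", "can_be_green"},
    "Bananen": {"plural", "countable", "can_be_green"},   # 'bananas'
    "Liebe":   {"feminine_singular"},                     # 'love': not countable
    "Durst":   {"masculine_singular"},                    # 'thirst': not countable
    "Auto":    {"neuter_singular", "countable"},
}

# Each incoming word imposes a constraint on the still-possible nouns.
constraints = [
    ("Die",    lambda f: "plural" in f or "feminine_singular" in f),
    ("fünf",   lambda f: "countable" in f and "plural" in f),
    ("grünen", lambda f: "can_be_green" in f),
]

candidates = set(lexicon)
for word, fits in constraints:
    candidates = {noun for noun in candidates if fits(lexicon[noun])}
    print(f"after '{word}': {sorted(candidates)}")

# Only the final word, 'Autos', resolves the remaining ambiguity.
```

After “grünen”, the sketch is left with cars and bananas, mirroring the article’s point that only the last word collapses the remaining interpretations to one.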

    However, in the phrase ‘Grünen fünf die Autos’ (Engl.: ‘green five the cars’), this logical chain of predictions and correlations breaks down. Our brain cannot construct meaning from the utterance because the expected sequence of cues is disrupted.

    Implications for artificial intelligence

    Michael Hahn and his US colleague Richard Futrell have now demonstrated these relationships mathematically. The significance of their study is underscored by its publication in the high-impact journal Nature Human Behaviour. Their insights could prove valuable, for example, in the further development of the large language models (LLMs) that underpin generative AI systems such as ChatGPT or Microsoft’s Copilot.

Reference: “Linguistic structure from a bottleneck on sequential information processing” by Richard Futrell and Michael Hahn, 24 November 2025, Nature Human Behaviour.
    DOI: 10.1038/s41562-025-02336-w

