SciTechDaily

    Does AI Really “Know” Anything? Why Our Words Matter More Than We Think

By Iowa State University | January 19, 2026
    Artificial intelligence is often described using language borrowed from human thought, but those word choices may quietly reshape how its capabilities are understood. Researchers analyzed large-scale news writing to see when and how mental language appears in discussions of AI. Credit: Shutterstock

    A new study explores how human-like language shapes the way we talk about artificial intelligence.

    Think, know, understand, remember.

    These are the kinds of mental verbs people use every day to describe what goes on in someone’s mind. But when the same words are applied to artificial intelligence, they can unintentionally make a computer system seem more human than it is.

    “We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines – it helps us relate to them,” said Jo Mackiewicz, professor of English at Iowa State. “But at the same time, when we apply mental verbs to machines, there’s also a risk of blurring the line between what humans and AI can do.”

Mackiewicz and Jeanine Aune, teaching professor of English and director of the advanced communication program at Iowa State, are part of a research team that investigated how writers use anthropomorphizing language – words that give human traits to non-human things – when describing AI systems. Their study was recently published in the journal Technical Communication Quarterly.

    The team also included Matthew J. Baker, an associate professor of linguistics at Brigham Young University, and Jordan Smith, an assistant professor of English at the University of Northern Colorado. Both Baker and Smith previously graduated from Iowa State University.

    How mental verbs can be misleading

    Mackiewicz and Aune said mental verbs can be misleading in AI coverage because they imply a human-like inner experience. Terms such as “think,” “know,” “understand” and “want” can signal beliefs, desires or consciousness. AI systems, however, do not have these qualities. They produce responses by drawing on patterns in data rather than feelings or intentions.

    They also warned that this language can overstate what AI can do. Phrases like “AI decided” or “ChatGPT knows” can make a tool sound more independent or intelligent than it really is, which may skew expectations about how safely or reliably it performs. Describing AI as if it has intentions can also shift attention away from the actual decision-makers: the people who build, train, deploy and supervise these systems.

    “Certain anthropomorphic phrases may even stick in readers’ minds and can potentially shape public perception of AI in unhelpful ways,” Aune said.

    Words on words

    To measure how common this kind of wording is, the researchers turned to the News on the Web (NOW) corpus, a 20-billion-word-plus dataset that is continually updated with English-language news stories from 20 countries. They used it to track how often news writers connect anthropomorphizing mental verbs – like learns, means and knows – with the terms AI and ChatGPT.
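The collocation counting the researchers describe – tallying how often a term like "AI" or "ChatGPT" is immediately followed by a mental verb – can be sketched in a few lines. This is an illustrative toy, not the team's actual NOW-corpus query; the verb list and sample sentences are stand-ins.

```python
import re
from collections import Counter

# Illustrative lists, not the study's actual search terms.
MENTAL_VERBS = {"knows", "thinks", "understands", "wants", "learns", "means", "needs"}
TERMS = {"AI", "ChatGPT"}

def count_pairings(sentences):
    """Count each (term, mental verb) pair where the verb directly follows the term."""
    counts = Counter()
    for sentence in sentences:
        tokens = re.findall(r"[A-Za-z]+", sentence)
        # Slide over adjacent token pairs, e.g. ("AI", "needs").
        for term, verb in zip(tokens, tokens[1:]):
            if term in TERMS and verb.lower() in MENTAL_VERBS:
                counts[(term, verb.lower())] += 1
    return counts

sample = [
    "AI needs large amounts of data to work.",
    "ChatGPT knows a lot about grammar.",
    "The model was trained on text, and AI learns from patterns.",
]
print(count_pairings(sample))
```

A real corpus study would of course also handle inflected forms, intervening words, and sentence context – which is exactly why, as the findings below show, raw counts alone can't settle whether a usage is anthropomorphic.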

    The results, Mackiewicz and Aune said, surprised the research team.

    In their analysis, the team identified three key findings:

    1. The terms AI and ChatGPT are infrequently paired with mental verbs in news articles.

    Mackiewicz noted that there is no single definitive comparison of anthropomorphism across spoken and written language, but existing research offers useful context. “Anthropomorphism has been shown to be common in everyday speech, but we found there’s far less usage in news writing,” she said.

    Within the dataset, “needs” appeared most often alongside AI, with 661 instances. For ChatGPT, the most frequent pairing was “knows,” which appeared 32 times.

The researchers also pointed to Associated Press guidance that discourages linking human emotions to the capabilities of AI models, noting that these recommendations may have influenced how often news coverage used mental verbs with AI and ChatGPT in recent years.

    2. When the terms AI and ChatGPT were paired with mental verbs, they weren’t necessarily anthropomorphized.

    The research team’s analysis found that writers used the mental verb “needs,” for example, in two main ways when discussing AI. In many instances, “needs” simply described what AI requires to function, such as “AI needs large amounts of data” or “AI needs some human assistance.” These uses weren’t anthropomorphic because they treated AI the same way we talk about other non‑human systems – “the car needs gas” or “the soup needs salt.”

    Second, writers sometimes used “needs” in a way that suggested an obligation to do or be something – “AI needs to be trained” or “AI needs to be implemented.” Aune said many of these instances were written in passive voice, which shifted responsibility back to humans, not AI.
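The two "needs" patterns described above lend themselves to a rough surface heuristic: "needs to be <verb>" reads as a passive obligation (responsibility pointing back at humans), while a bare "needs <thing>" is a plain requirement, like "the car needs gas." This sketch is a simplification for illustration, not the coding scheme the researchers actually applied.

```python
import re

def classify_needs(sentence):
    """Crude surface classification of 'needs' usages, per the two patterns above."""
    # Passive obligation: "needs to be trained", "needs to be implemented".
    if re.search(r"\bneeds to be \w+", sentence, re.IGNORECASE):
        return "passive obligation"
    # Plain requirement: "needs large amounts of data", "needs gas".
    if re.search(r"\bneeds\b", sentence, re.IGNORECASE):
        return "requirement"
    return "no 'needs'"

print(classify_needs("AI needs to be trained."))        # passive obligation
print(classify_needs("AI needs large amounts of data.")) # requirement
```

Even a heuristic this simple makes the study's point concrete: whether a mental verb anthropomorphizes depends on the construction around it, not on the verb alone.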

    3. Anthropomorphization with mental verbs exists on a spectrum.

    Mackiewicz and Aune said the research team also discovered there were times the usage of “needs” edged into more human‑like territory. Some sentences – “AI needs to understand the real world,” for example – implied expectations or qualities associated with people, such as fairness, ethics or a personal understanding of the world we live in.

“These instances showed that anthropomorphizing isn’t all‑or‑nothing and instead exists on a spectrum,” Aune said.

    Writing the future

    “Overall, our analysis shows that anthropomorphization of AI in news writing is far less common – and far more nuanced – than we might think,” Mackiewicz said. “Even the instances that did anthropomorphize AI varied widely in strength.”

    The study’s findings, Mackiewicz and Aune said, underscore the importance of looking beyond surface-level verb counts and considering how meaning comes from context.

    “For writers, this nuance matters: the language we choose shapes how readers understand AI systems, their capabilities and the humans responsible for them,” Mackiewicz said.

    “Our findings can help technical and professional communication practitioners reflect on how they think about AI technologies as tools in their writing process and how they write about AI,” the research team wrote in the published study.

    And as AI technologies continue to evolve, writers will continually need to consider how word choices may frame those technologies, Mackiewicz and Aune said.

    Future research, the team concluded, “could examine the anthropomorphizing impact of different words and their senses” and “look at whether or not infrequent usage has an outsized effect on how people, including news writers and other professional communicators, think about AI.”

    Reference: “Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT” by Jeanine Elise Aune, Matthew J. Baker, Jo Mackiewicz and Jordan Smith, 29 November 2025, Technical Communication Quarterly.
    DOI: 10.1080/10572252.2025.2593840



