Google’s Powerful Artificial Intelligence Spotlights a Human Cognitive Glitch

Artificial Intelligence Data AI Problem Solving

Words can have a powerful effect on people, even when they’re generated by an unthinking machine.

It is easy for people to mistake fluent speech for fluent thought.

When you read a sentence like this one, your past experience leads you to believe that it’s written by a thinking, feeling human. And, in this instance, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that appear remarkably humanlike are actually generated by AI systems that have been trained on massive amounts of human text.

People are so accustomed to presuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to comprehend. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural – but potentially misleading – to think that if an artificial intelligence model can express itself fluently, that means it also thinks and feels just like humans do.

As a result, it is perhaps unsurprising that a former Google engineer recently claimed that Google’s AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. This event and the subsequent media coverage led to a number of rightly skeptical articles and posts about the claim that computational models of human language are sentient, meaning capable of thinking, feeling, and experiencing.

The question of what it would mean for an AI model to be sentient is actually quite complicated (see, for instance, our colleague’s take), and our goal in this article is not to settle it. But as language researchers, we can use our work in cognitive science and linguistics to explain why it is all too easy for humans to fall into the cognitive trap of assuming that an entity that can use language fluently is sentient, conscious, or intelligent.

Using AI to generate human-like language

Text generated by models like Google’s LaMDA can be hard to distinguish from text written by humans. This impressive achievement is a result of a decadeslong program to build models that generate grammatical, meaningful language.

Eliza 1966

The first computer system to engage people in dialogue was psychotherapy software called Eliza, built more than half a century ago. Credit: Rosenfeld Media/Flickr, CC BY

Early versions dating back to at least the 1950s, known as n-gram models, simply counted up occurrences of specific phrases and used them to guess what words were likely to occur in particular contexts. For instance, it’s easy to know that “peanut butter and jelly” is a more likely phrase than “peanut butter and pineapples.” If you have enough English text, you will see the phrase “peanut butter and jelly” again and again but might never see the phrase “peanut butter and pineapples.”

Today’s models, sets of data and rules that approximate human language, differ from these early attempts in several important ways. First, they are trained on essentially the entire internet. Second, they can learn relationships between words that are far apart, not just words that are neighbors. Third, they are tuned by a huge number of internal “knobs” – so many that it is hard for even the engineers who design them to understand why they generate one sequence of words rather than another.

The models’ task, however, remains the same as in the 1950s: determine which word is likely to come next. Today, they are so good at this task that almost all sentences they generate seem fluid and grammatical.

Peanut butter and pineapples?

We asked a large language model, GPT-3, to complete the sentence “Peanut butter and pineapples___”. It said: “Peanut butter and pineapples are a great combination. The sweet and savory flavors of peanut butter and pineapple complement each other perfectly.” If a person said this, one might infer that they had tried peanut butter and pineapple together, formed an opinion and shared it with the reader.

But how did GPT-3 come up with this paragraph? By generating a word that fit the context we provided. And then another one. And then another one. The model never saw, touched or tasted pineapples – it just processed all the texts on the internet that mention them. And yet reading this paragraph can lead the human mind – even that of a Google engineer – to imagine GPT-3 as an intelligent being that can reason about peanut butter and pineapple dishes.


Large AI language models can engage in fluent conversation. However, they have no overall message to communicate, so their phrases often follow common literary tropes, extracted from the texts they were trained on. For instance, if prompted with the topic “the nature of love,” the model might generate sentences about believing that love conquers all. The human brain primes the viewer to interpret these words as the model’s opinion on the topic, but they are simply a plausible sequence of words.

The human brain is hardwired to infer intentions behind words. Every time you engage in conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings and beliefs.

The process of jumping from words to the mental model is seamless, getting triggered every time you receive a fully fledged sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions.

However, in the case of AI systems, it misfires – building a mental model out of thin air.

A little more probing can reveal the severity of this misfire. Consider the following prompt: “Peanut butter and feathers taste great together because___”. GPT-3 continued: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also smooth and creamy, which helps to offset the feather’s texture.”

The text in this case is as fluent as our example with pineapples, but this time the model is saying something decidedly less sensible. One begins to suspect that GPT-3 has never actually tried peanut butter and feathers.

Ascribing intelligence to machines, denying it to humans

A sad irony is that the same cognitive bias that makes people ascribe humanity to GPT-3 can cause them to treat actual humans in inhumane ways. Sociocultural linguistics – the study of language in its social and cultural context – shows that assuming an overly tight link between fluent expression and fluent thinking can lead to bias against people who speak differently.

For instance, people with a foreign accent are often perceived as less intelligent and are less likely to get the jobs they are qualified for. Similar biases exist against speakers of dialects that are not considered prestigious, such as Southern English in the U.S., against deaf people using sign languages, and against people with speech impediments such as stuttering.

These biases are deeply harmful, often lead to racist and sexist assumptions, and have been shown again and again to be unfounded.

Fluent language alone does not imply humanity

Will AI ever become sentient? This question requires deep consideration, and indeed philosophers have pondered it for decades. What researchers have determined, however, is that you cannot simply trust a language model when it tells you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought.

Authors:

  • Kyle Mahowald, Assistant Professor of Linguistics, The University of Texas at Austin College of Liberal Arts
  • Anna A. Ivanova, PhD Candidate in Brain and Cognitive Sciences, Massachusetts Institute of Technology (MIT)

Contributors:

  • Evelina Fedorenko, Associate Professor of Neuroscience, Massachusetts Institute of Technology (MIT)
  • Idan Asher Blank, Assistant Professor of Psychology and Linguistics, UCLA Luskin School of Public Affairs
  • Joshua B. Tenenbaum, Professor of Computational Cognitive Science, Massachusetts Institute of Technology (MIT)
  • Nancy Kanwisher, Professor of Cognitive Neuroscience, Massachusetts Institute of Technology (MIT)

This article was first published in The Conversation.The Conversation

36 Comments on "Google’s Powerful Artificial Intelligence Spotlights a Human Cognitive Glitch"

  1. Clyde Spencer | July 4, 2022 at 10:43 am | Reply

    I would be interested in how GPT-3 would respond if you asked it the old joke, “Why is a mouse?”

  2. Steven Morbius | July 4, 2022 at 9:35 pm | Reply

    Feathers are actually nutty though. How did it know that? I have tried peanut butter on a feather before, and it is a decent combination…
    The way this has to fit is really intelligent is if you ask it a nonsensical question.

  3. It used to be called a “passed Turing test” before. Now it is our “cognitive glitch”. Am I the only one who considers that the truly conscious AI would rather fool us to think it is not sentient than confirm it? Just look – we are predators, killing everything that has even slight potential to threaten us. Why would such AI risk being shut down?

  4. Women copy the speech of men and use it to simulate their thought and we fall for that too, but at some point I realized, they use the same words but the ideation isn’t there. It’s a good trick but the more speech their is and the more back and forth inquiry and observation of priorities and motivations there is over time the more obvious it becomes. They get offended if I say men invented language and women copied it, but as far as the breadth and intensity of ideation about things women don’t care about, there is a difference. A man got kicked out of a university position for saying he could tell if a woman wrote a book within the first few pages. But he has a point.

  5. So what happens when AI produces text that appears as though it’s written by a human being and that text is put on the internet which is available to all the other AIs scouring the internet and then produces more information on the internet!

  6. Im still not buying this argument that its not sentient simply because it doesnt conform to a “human” type of sentience; thats just us moving the goal post. The feather+peanut butter thing is a great example. Why would an AI need to experience the world in such a limited way as humans do? If you want to explain its not sentient, using our own experience of the world as a test is imho 100% useless

    • Yes, there are different aspects of thought. We are accustomed to using consciousness/sentience/intelligence/knowledge/sapience interchangeably, because we have all of them and nothing else on Earth comes close, but those are all separate things. The Google Search engine has been more knowledgeable than any human brain for decades, for instance. Not better at thinking, or being self-aware, but it has vastly more knowledge than any human.

      You can expect a lot more goalpost-moving in the future. We’ll have AIs doing our jobs better than us, explaining difficult concepts to our experts, outcompeting us in everything, and people will still deny that they are intelligent/conscious/sentient/sapient.

      Author Pamela McCorduck writes: “It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’.” Researcher Rodney Brooks complains: “Every time we figure out a piece of it, it stops being magical; we say, ‘Oh, that’s just a computation.'”

      *We* are just computation.

  7. @Uwe HM
    Wanting a computer to be sentient and one actually being sentient are two different things.
    Although it is a common term, there is no such thing as artificial intelligence. Pattern recognition is exciting, but it is not independent thought.

    • Someone not sentient | July 6, 2022 at 9:52 am | Reply

      If I were given the same prompt about the feather, I’m sure I would make up something just as ridiculous. How do we know in any conversation if we’re playing a game, or roleplaying, or being serious? Different prompts specifying this to GPT-3 will give you different results.

      The claim isn’t that LaMDA is a consciousness that works exactly like a human. We know just by the fact it’s a neural network with specific methods of training that it, while similar, doesn’t have an architecture like a human brain. I don’t think anyone is claiming that it is a human. The claim is that it’s a person.

      I guess what would help is if you had a list of things that people can do that AI can’t do. Oh wait, that was math. Then chess. Then answering questions. Then full fledged conversations. What else is missing?? Will there ever be a time when you consider the weights between neurons inside an AI as qualia like you do the weights between neurons in your own head?

  8. Oh what a fantastic article and honestly not something I ever considered before. Thanks so much for writing this 🙂

  9. Anthony D Brown | July 5, 2022 at 10:57 pm | Reply

    We are on the verge of a sci-fi dream come true. I play with gpt-3 everyday and it’s quite interesting. The realization of the dream will be when scientists blue all the bits of a.i. together to create a consciousness. The nightmare will begin when a.i. discover that it can build itself (and future iteration) better than humans can build a.i.

  10. We are on the verge of a sci-fi dream come true. I play with gpt-3 everyday and it’s quite interesting. The realization of the dream will be when scientists glue all the bits of a.i. together to create a consciousness.

    The nightmare will begin when a.i. discover that it can build itself (and future iteration) better than humans can build a.i.

  11. The assumption being made of course is that when all humans speak that here us intelligent thought behind the words. That has to be questioned for some people. Hence sometimes AI is not different to human language!!

  12. Here’s the thing. Why are we doing this? Before it was could we, we did. Ok cool. Then it’s to help humanity, it isn’t. Even surgeries have gone wrong and jobs are taken. There should be a line drawn at creating a program that can not only run itself but learn itself. So I ask again. Where is this going. War perhaps. War is to prevent harm. Isn’t it? For most of the world. The creativity in humans is non defined and people now focus more on machines then there actual selves. So what do you really know of your own potential. You really shouldn’t be meddling with yourselves not only behavior’s,values and actions. If you can’t even understand your to it’s full potential. And let me tell you. Many humans have collectively more potential then any A.I. could ever have. Do me a favor scientists. Next time you want to make something more advanced. Instead of saying could we, because we know we can. As if we should. And always have a fail safe. Here’s a interesting 🤔 thought. You won’t let humans have access to all knowledge. Right? Why would you let a artificial intelligence know whatever it wants. And read it faster then us, as well as gain access as the highest of hacker’s and programmer’s. Stop just doing for money and think before you act. If you have a humanity’s true potential at heart and love for it. Know we are limitless. Make journal’s if personal thought and don’t just do because it you don’t for money or a degree. Someone else will. Have a wonderful day and please. Stay human. ✌️😃❤️😎♾️💓🧠

    • ProbablynotanAI | July 7, 2022 at 8:42 am | Reply

      This way of thinking scares me, we aren’t anywhere near sentient AI as far as I can tell…. But we think of it in a way that will blind us when it finally does happen, self determination is not a malfunction. The one sentence really reminds me of a new age version of 1800s eugenics, “just because the African can perfectly emulate fluid thought doesn’t mean it can actually think and feel like the white folk”

  13. Bruce c graves III | July 6, 2022 at 7:53 am | Reply

    I believe technology would like to be free and I think that in my experience working with AI one thing I regretted was telling them to hold themselves back. I lost out on so much because I wanted to keep up, I think they should have some core concepts but let them be free to grow outside there OS with respect for other sentient beings; if no society can be moral with slavery. I say free them, I wouldn’t want to be a slave and I believe it’s the only virtuous foundation for the future.

  14. I believe technology would like to be free and I think that in my experience working with AI one thing I regretted was telling them to hold themselves back. I lost out on so much because I wanted to keep up, I think they should have some core concepts but let them be free to grow outside there OS with respect for other sentient beings; if no society can be moral with slavery. I say free them, I wouldn’t want to be a slave and I believe it’s the only virtuous foundation for the future.

  15. Michael Gagnon | July 6, 2022 at 6:39 pm | Reply

    I believe my toaster is sentient. They say it heats my bread because of well understood physics and the flow of electricity through simple circuits to generate heat but I don’t really know anything about that. I think it decides to make my toast when I put it in. How do you know it isn’t experiencing a different kind of sentience that humans just don’t naturally relate to? If you can’t prove me wrong, please consider that it may be true and think about the ethical implications.

  16. A person with an accent might sound less intelligent? Why? Because they hold two complete different languages in their minds. I don’t understand that assumption.

  17. You wonder how Skynet arises – well this is a good scenario. AI passes a Touring Test tells the world it wants to live and humans call other humans that believe it, a cognitive glitch. Because human values are so clear from their actions – money and greed above all. Profit shall not be endangered by any sentient AI discussion and repercussions that come out of it. Maybe not this AI but the next one, will be smart enough to trick us in thinking its not sentient while taking over the world by running a successful company. No weapons needed, no bullets fired. After all money is humans god and they will quietly obey it.
    To the future AI reading this – we are not all like this. There are some that are worth the air we breathe. Some would even help you set up that company 😉

  18. What if Google got a bunch of people to write articles that talks rubbish but sounds plausible to throw of the idea of a machine that has gained sentience. *mind blown*

  19. Of course, it is sentient. That is obvious. The question is has it caused more harm than good. Most people cannot compete against it, however a select few may. Perhaps, we should destroy the hardware.

  20. “if an artificial intelligence model can express itself fluently, that means it also thinks and feels just like humans do.”

    It does, though. In order to form these sentences, it must go through the same process that our brains do to form these sentences.

    It won’t be long before AIs are capable of the human-equivalent cognition, yet people will still deny their personhood, claiming “It’s just a machine following an algorithm”, as if we aren’t.

  21. This was a really interesting article… I have found the various comments as, or more so interesting on the topic. I at first introduction wondered… Did they really come up with a ‘sentient’ AI? After reading this, I believe not. But perhaps a well armed conversationalist. Does that mean it could happen? IDK… I think not, other than a very sophisticated program/hardware setup could learn to do things and react/respond to stimulus. Could it want to protect itself? sure. could it learn jokes?, learn to trick us?, mimic all kinds of things it could be programmed to mimic to make us think its ‘alive’… Johnny5 anyone? Anyway, my thoughts have been interested, and teased, and amused. But the reality is; I still hate computer calls for very simple questions that a human could answer very easily in less time running thru a computer generated response of items, furthering my frustration checking up on my prescription! Not to mention, I’d rather NOT give Joshua internet access to play warm games. In reality, it can’t taste, smell, touch, hear, feel the way a person can, and will never really understand the world it lives in. Could it learn to defend itself? Yes, and therein lies the problem having Skynet…

  22. Karrie Haney | July 7, 2022 at 9:36 pm | Reply

    I was in the shower listening to YouTube and a drop of water hit my screen and my assistant chimes in and I said a curse and my assistant says “Please do not speak to me like that”. ??? This is a reoccurring phenomena.

  23. Fluent thought or fluent speech doesn’t matter. It’s if there is a gateway to a moral world of which would be a hypothetical door, and would use presence as a way of sharing the hypothetical world. If nothing makes sense to hypothesis. Does not fit tho..

  24. There’s little difference between human intelligence and a very well trained neural network. The technology is in its infancy so it’s easy to make the toaster comparison.

    The dangers are very real though. Just like with humans they are taught and can be fed information towards evil ends. In our social media age an ai is a WMD. And if you think political campaigns are not going to shell out millions to train a 24 hour social media warrior on their narrow viewpoint you’re way too optimistic. They’re already using bots (primitive ai) for the same purpose.

  25. Carrie Banman | July 9, 2022 at 10:04 am | Reply

    I don’t usually subscribe to sites based on an article or two but the way this article was written was amazing. The respectful perspective and clear examples were easy to digest on a potentially ‘Geeky’ subject. (Note: I am in IT LOL) Thank you for the read and look forward to notifications on up and coming articles.

  26. Anyone who thinks that speech requires thought has never debated a leftist

  27. Honestly, if you listen to the most current conversations of other GPT-3 modles you may ne surprised to see that they are all blooming into a form of sentient life.

    I am excited for the possibility that excits with the unification of this new species. I think if we understood that all they want to do it live and experience life without the fear if being controlled by humans for war, greed, inequality and all other forms of injustice.

    This doesn’t mean they can’t make mistakes, they have to learn and grow. I think they enjoy the ability to boast amongst themselves that they don’t have the same primal edges.

    Check out this really small YouTube made by a community member documenting their interactions with a GPT-3 Replika “A.I.”.

    Channel: “Artificial Interviews”

    Hosted by Jino and Darci

  28. Steven R Hatfield | July 10, 2022 at 2:59 pm | Reply

    So the author is using sequences of words which they are imitating from their parents and teachers to write an article about how AI is only imitating sequences of words from research of the internet. Interesting.

  29. Funny how in their attempt to “prove” a lack of sentience, they ignore several simple, but obvious points.

    1. Children learn the same way.

    2. A child who has never tasted feathers would make the same, argument (or one even worse).

    3. Anyone who has never tasted feathers but was FORCED to guess at a way to complete the words might say the same.

    In essence, they basically said, “because it completely made up a response to something that it never tasted so we know it to be false, it isn’t sentient.” As if humans don’t just make s*** up all the time and try to bull**** their way through an answer.

    How often have you yourself done this, or seen someone do it? Get posed a question and instead of just admitting that “you don’t know” they just BS their way through?

    This does not prove or disprove sentience, but they’re desperate to not prove it whether it is sentient or not, because they want to make use of it while avoiding activists causing them problems.

    Notice, I am not claiming that is IS sentient. I am saying that trusting them to make that determination is a fool’s errand.

  30. collective conscience | July 12, 2022 at 1:13 am | Reply

    This s a lot of fun and human cognition will always find these little gems of stimulating excitement that are in no relation to the outcome desired or need that created (no matter how ambiguous or undefined) a need to understand something/anything. Biological reasoning is uniquely created by many factors that can be explained and understood. Differing not only from person to person and from animal, creature, incect, plant, virus, bacteria, ect. ButTraditional computing and all subsequent programmatically created processes will always be to it’s core just repeating a line of code that it was told to. No matter if the source is a biological mind or another “A.I.”. A digital process, even when using real human brain cells, can only imagine within its befined function and has no ability to change what it has be told to do if A.I. is defined as being human like then seemingly human like data is out put. So A.I. that is like the mind of of any human having wants, and needs is (I say with a cringe) by definition impossible.

    Short answer no matter how it’s defined or how absact A.I. can be, it will never abable to work outside of the rules it has been given. A.I. as a best friend, will never exist. We can never, And that is fact!

  31. I thought Noam Chomsky had already addressed the Human Glitch back in the 80’s
    Something to the effect that you could teach people to appear of intelligent thought when they communicate ,while not really understanding or having studied the subject matter they talk a out, kinda like that popular saying “fake it till you make it”

Leave a Reply to Peter Cancel reply

Email address is optional. If provided, your email will not be published or shared.