
Allowing AI to talk to itself helps it learn faster and adapt more easily. This inner speech, combined with working memory, lets AI generalize skills using far less data.
Talking to yourself often feels like a uniquely human habit. Inner dialogue helps people sort through ideas, make choices, and process emotions. New research shows that this same kind of self-talk can also benefit artificial intelligence. In a study published in Neural Computation, scientists from the Okinawa Institute of Science and Technology (OIST) found that AI systems learn more effectively when inner speech is paired with working memory, allowing them to handle a wider range of tasks.
The results point to learning as more than just a matter of system design. According to first author Dr. Jeffrey Queißer, Staff Scientist in OIST’s Cognitive Neurorobotics Research Unit, “This study highlights the importance of self-interactions in how we learn. By structuring training data in a way that teaches our system to talk to itself, we show that learning is shaped not only by the architecture of our AI systems, but by the interaction dynamics embedded within our training procedures.”
Teaching AI to Talk to Itself
To test this idea, the researchers combined self-directed internal speech, described as quiet “mumbling,” with a specially designed working memory system. This combination led to noticeable improvements in how AI models learned new information, adjusted to unfamiliar situations, and managed more than one task at a time.
Building Flexible, General-Purpose AI
The research team has long focused on content-agnostic information processing. This approach aims to help AI apply what it learns beyond specific examples by relying on general rules and methods instead of memorized patterns.
“Rapid task switching and solving unfamiliar problems is something we humans do easily every day. But for AI, it’s much more challenging,” says Dr. Queißer. “That’s why we take an interdisciplinary approach, blending developmental neuroscience and psychology with machine learning and robotics, amongst other fields, to find new ways to think about learning and inform the future of AI.”
Why Working Memory Matters
Early experiments centered on memory design, particularly the role of working memory in helping AI generalize. Working memory allows a system to temporarily hold and use information, whether it is following instructions or performing quick calculations. By testing tasks with different levels of difficulty, the researchers compared several memory structures.
They found that AI systems with multiple working memory slots (temporary containers for pieces of information) performed better on complex challenges, such as reversing sequences or recreating patterns. These tasks require holding several elements in mind and manipulating them accurately.
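The role of memory slots in a task like sequence reversal can be illustrated with a toy sketch. This is not the authors' model (their system is an active-inference network); it is a minimal Python analogy, with class and method names of our own invention, showing why a task that requires holding several items at once fails when there are too few slots.

```python
class SlotMemory:
    """Toy working memory: a fixed number of temporary slots for items."""

    def __init__(self, n_slots):
        self.n_slots = n_slots
        self.slots = []

    def store(self, item):
        # When all slots are full, the oldest item is forgotten.
        if len(self.slots) >= self.n_slots:
            self.slots.pop(0)
        self.slots.append(item)

    def recall_reversed(self):
        # Sequence reversal needs every item to still be held in memory.
        return list(reversed(self.slots))


# Four slots are enough to reverse a four-item sequence:
memory = SlotMemory(n_slots=4)
for token in ["A", "B", "C", "D"]:
    memory.store(token)
print(memory.recall_reversed())  # ['D', 'C', 'B', 'A']

# With only two slots, the start of the sequence is lost:
small = SlotMemory(n_slots=2)
for token in ["A", "B", "C", "D"]:
    small.store(token)
print(small.recall_reversed())  # ['D', 'C']
```

The point of the analogy: reversal and pattern-recreation tasks scale with how many elements must be held and manipulated at once, which is why systems with more working memory slots did better on them.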
When the team added self-mumbling targets—telling the system to talk to itself a certain number of times—performance improved even more. The biggest gains appeared in multitasking and in problems that involved many steps.
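One way to picture "structuring training data so the system talks to itself" is to expand each training example with explicit rehearsal steps before the answer. The sketch below is a hypothetical illustration, not the paper's actual data pipeline; the function name, the `<mumble>` marker, and the arithmetic example are all our own assumptions.

```python
def add_mumble_targets(task_input, answer, n_mumbles=2):
    """Expand one (input, answer) pair into a target sequence that
    rehearses the input n_mumbles times before emitting the answer.
    Hypothetical sketch; not the authors' training procedure."""
    rehearsals = [f"<mumble>{task_input}</mumble>"] * n_mumbles
    return [task_input] + rehearsals + [answer]


# A model trained on such targets learns to "talk to itself" a set
# number of times before answering:
seq = add_mumble_targets("3 + 4", "7", n_mumbles=2)
print(seq)
# ['3 + 4', '<mumble>3 + 4</mumble>', '<mumble>3 + 4</mumble>', '7']
```

The rehearsal count is a knob in the training data rather than in the architecture, which matches the authors' point that learning is shaped by interaction dynamics embedded in the training procedure.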
“Our combined system is particularly exciting because it can work with sparse data instead of the extensive data sets usually required to train such models for generalization. It provides a complementary, lightweight alternative,” says Dr. Queißer.
Learning to Learn in Real-World Conditions
Next, the researchers plan to move beyond tidy test environments and introduce more realistic challenges. Dr. Queißer explains, “In the real world, we’re making decisions and solving problems in complex, noisy, dynamic environments. To better mirror human developmental learning, we need to account for these external factors.”
This work also supports a broader goal of understanding how learning works in the human brain. “By exploring phenomena like inner speech, and understanding the mechanisms of such processes, we gain fundamental new insights into human biology and behavior,” Dr. Queißer concludes. “We can also apply this knowledge, for example in developing household or agricultural robots which can function in our complex, dynamic worlds.”
Reference: “Working Memory and Self-Directed Inner Speech Enhance Multitask Generalization in Active Inference” by Jeffrey Frederic Queißer and Jun Tani, 22 December 2025, Neural Computation.
DOI: 10.1162/NECO.a.36