    Letting AI Talk to Itself Made It Much Smarter

By Okinawa Institute of Science and Technology (OIST) Graduate University, January 31, 2026
Inner speech and working memory architecture boost AI performance when multitasking and completing complex pattern generation challenges. Credit: Kaori Serakaki/OIST

    Allowing AI to talk to itself helps it learn faster and adapt more easily. This inner speech, combined with working memory, lets AI generalize skills using far less data.

Talking to yourself often feels like a uniquely human habit. Inner dialogue helps people sort through ideas, make choices, and process emotions. New research shows that this same kind of self-talk can also benefit artificial intelligence. In a study published in Neural Computation, scientists from the Okinawa Institute of Science and Technology (OIST) found that AI systems learn more effectively when inner speech is paired with working memory, allowing them to handle a wider range of tasks.

    The results point to learning as more than just a matter of system design. According to first author Dr. Jeffrey Queißer, Staff Scientist in OIST’s Cognitive Neurorobotics Research Unit, “This study highlights the importance of self-interactions in how we learn. By structuring training data in a way that teaches our system to talk to itself, we show that learning is shaped not only by the architecture of our AI systems, but by the interaction dynamics embedded within our training procedures.”

    Teaching AI to Talk to Itself

    To test this idea, the researchers combined self-directed internal speech, described as quiet “mumbling,” with a specially designed working memory system. This combination led to noticeable improvements in how AI models learned new information, adjusted to unfamiliar situations, and managed more than one task at a time.

    Building Flexible, General-Purpose AI

    The research team has long focused on content-agnostic information processing. This approach aims to help AI apply what it learns beyond specific examples by relying on general rules and methods instead of memorized patterns.

    “Rapid task switching and solving unfamiliar problems is something we humans do easily every day. But for AI, it’s much more challenging,” says Dr. Queißer. “That’s why we take an interdisciplinary approach, blending developmental neuroscience and psychology with machine learning and robotics, amongst other fields, to find new ways to think about learning and inform the future of AI.”

    Why Working Memory Matters

Early experiments centered on memory design, particularly the role of working memory in helping AI generalize. Working memory allows a system to temporarily hold and use information, whether it is following instructions or performing quick calculations. The researchers compared several memory structures by testing them on tasks of varying difficulty.

    They found that AI systems with multiple working memory slots (temporary containers for pieces of information) performed better on complex challenges, such as reversing sequences or recreating patterns. These tasks require holding several elements in mind and manipulating them accurately.
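The paper frames its models within active inference (per its title), but the slot idea itself is easy to picture. The Python sketch below is an illustrative assumption, not the study's implementation: the SlotMemory class and reverse_with_slots function are hypothetical names, and the point is simply that holding each element in its own temporary container lets a reversal task be solved by reading the slots back in the opposite order.

# Toy sketch of the "working memory slot" idea described above.
# The names and structure are illustrative assumptions, not the
# architecture from the study.

class SlotMemory:
    """A fixed set of temporary containers for pieces of information."""

    def __init__(self, num_slots):
        self.slots = [None] * num_slots

    def write(self, index, item):
        self.slots[index] = item

    def read(self, index):
        return self.slots[index]


def reverse_with_slots(sequence):
    """Sequence reversal: store each element in its own slot,
    then read the slots back in the opposite order."""
    memory = SlotMemory(num_slots=len(sequence))
    for i, item in enumerate(sequence):  # store phase
        memory.write(i, item)
    return [memory.read(i) for i in range(len(sequence) - 1, -1, -1)]  # recall phase


print(reverse_with_slots(["A", "B", "C"]))  # ['C', 'B', 'A']

Separating storage from recall is what makes the manipulation explicit: the information is not re-derived, just held and reordered, which is exactly the demand that tasks like sequence reversal place on memory.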

    When the team added self-mumbling targets—telling the system to talk to itself a certain number of times—performance improved even more. The biggest gains appeared in multitasking and in problems that involved many steps.
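The article does not give the exact data format, so the sketch below is only a guess at what structuring training data around a self-mumbling target could look like in practice: each training example embeds a set number of inner-speech repetitions of the instruction between the input and the required output. The function name, role labels, and sequence layout are all hypothetical.

# Hedged sketch of a "self-mumbling target": training data is structured
# so the system must repeat an instruction to itself a set number of times
# before emitting its answer. The format is hypothetical; the study's
# actual training procedure differs in its details.

def build_training_sequence(instruction, answer, num_mumbles):
    """Return one training sequence with num_mumbles inner-speech
    repetitions of the instruction between input and output."""
    sequence = [("input", instruction)]
    sequence += [("mumble", instruction)] * num_mumbles  # self-directed repetitions
    sequence.append(("output", answer))
    return sequence


# Example: the system rehearses the instruction three times before answering.
for role, text in build_training_sequence("reverse A B C", "C B A", num_mumbles=3):
    print(f"{role:>6}: {text}")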

    “Our combined system is particularly exciting because it can work with sparse data instead of the extensive data sets usually required to train such models for generalization. It provides a complementary, lightweight alternative,” says Dr. Queißer.

    Learning to Learn in Real-World Conditions

    Next, the researchers plan to move beyond tidy test environments and introduce more realistic challenges. Dr. Queißer explains, “In the real world, we’re making decisions and solving problems in complex, noisy, dynamic environments. To better mirror human developmental learning, we need to account for these external factors.”

    This work also supports a broader goal of understanding how learning works in the human brain. “By exploring phenomena like inner speech, and understanding the mechanisms of such processes, we gain fundamental new insights into human biology and behavior,” Dr. Queißer concludes. “We can also apply this knowledge, for example in developing household or agricultural robots which can function in our complex, dynamic worlds.”

    Reference: “Working Memory and Self-Directed Inner Speech Enhance Multitask Generalization in Active Inference” by Jeffrey Frederic Queißer and Jun Tani, 22 December 2025, Neural Computation.
    DOI: 10.1162/NECO.a.36
