Supercharging AI With New Computational Model of Real Neurons

A new model developed by Flatiron Institute researchers proposes that biological neurons have more control over their surroundings than previously thought, something that could be replicated in the artificial neural networks used in machine learning.

A new model from the Flatiron Institute’s Center for Computational Neuroscience proposes that individual neurons are more influential controllers within their networks than previously believed, challenging the outdated 1960s model.

This advanced understanding could significantly enhance artificial neural network functionalities by integrating mechanisms similar to those in human brains, addressing current AI limitations such as errors and inefficient training processes.

Modern AI and Neuronal Models

Nearly all the neural networks that power modern artificial intelligence tools such as ChatGPT are based on a 1960s-era computational model of a living neuron. A new model developed at the Flatiron Institute’s Center for Computational Neuroscience (CCN) suggests that this decades-old approximation doesn’t capture the full computational abilities of real neurons, and that relying on it may be holding back AI development.

An artist’s illustration of a digital hand and a human hand drawing one another. Credit: Alex Eben Meyer for Simons Foundation

Revolutionizing AI with Advanced Neuronal Models

The new model developed at CCN posits that individual neurons exert more control over their surroundings than previously thought. The updated neuron model could ultimately lead to more powerful artificial neural networks that better capture the powers of our brains, the model developers say. The researchers present the revolutionary model in a paper published the week of June 24 in the journal Proceedings of the National Academy of Sciences.

“Neuroscience has advanced quite a bit in these past 60 years, and we now recognize that previous models of neurons are rather rudimentary,” says Dmitri Chklovskii, a group leader at the CCN and senior author of the new paper. “A neuron is a much more complex device — and much smarter — than this overly simplified model.”

The Functional Mechanism of Artificial Neural Networks

Artificial neural networks aim to mimic the way the human brain processes information and makes decisions, albeit in a greatly simplified manner. These networks are built from ordered layers of ‘nodes’ based on the 1960s neuron model. The network starts with an input layer of nodes that receives information, passes it through middle layers of nodes that process it, and ends with an output layer of nodes that sends out the results.

Typically, a node passes information to the next layer only if the total input it receives from the previous layer’s nodes exceeds a certain threshold. When current artificial neural networks are trained, information flows through each node in only one direction, and nodes have no way to influence the information they receive from nodes earlier in the chain.
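
As a rough illustration only (this code is not from the study), a few lines of Python can capture the kind of threshold node described above; the layer sizes, random weights, and threshold value are arbitrary assumptions:

```python
import numpy as np

# A 1960s-style threshold node: sum the weighted inputs and fire only if
# the total clears a threshold. Layer sizes, weights, and the threshold
# here are arbitrary choices for illustration.
rng = np.random.default_rng(0)

def threshold_layer(inputs, weights, threshold=0.0):
    """Each node outputs 1 only if its total weighted input exceeds the threshold."""
    totals = weights @ inputs
    return (totals > threshold).astype(float)

x = rng.random(4)                    # input layer: 4 incoming values
w_hidden = rng.normal(size=(3, 4))   # weights into a 3-node middle layer
w_output = rng.normal(size=(2, 3))   # weights into a 2-node output layer

hidden = threshold_layer(x, w_hidden)       # information flows strictly forward;
output = threshold_layer(hidden, w_output)  # nodes cannot influence their inputs
print(output)
```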

In contrast, the newly published model treats neurons as tiny ‘controllers,’ an engineering term for devices that can influence their surroundings based on information gathered about those surroundings. Not merely passive relays of input, our brain cells may actually work to control the state of their fellow neurons.

Implications and Benefits of the Neuron-As-Controller Model

Chklovskii believes that this more realistic model of a neuron-as-controller could be a significant step toward improving the performance and efficiency of many machine learning applications.

“Although AI’s achievements are very impressive, there are still many problems,” he says. “The current applications can give you wrong answers, or hallucinate, and training them requires a lot of energy; they’re very expensive. There are all these problems that the human brain seems to avoid. If we were to understand how the brain actually does this, we could build better AI.”

Future Directions and Explorations in Neuronal Control

The neuron-as-controller model was inspired by what scientists do understand about large-scale circuits in the brain made up of many neurons. Most brain circuits are thought to be organized into feedback loops, where cells later in the processing chain influence what happens earlier in the chain. Much like a thermostat maintaining the temperature of a house or building, brain circuits need to keep themselves stable to avoid overwhelming the body’s system with activity.

Chklovskii says it was not entirely intuitive that this kind of feedback control could also be realized by an individual brain cell. He and his colleagues realized that a novel form of control, known as direct data-driven control, is simple and efficient enough to be biologically plausible within a single cell.
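
For intuition only, here is a toy sketch of the direct data-driven control idea, not the algorithm from the paper: a controller that never builds an explicit model of its environment, but instead recombines previously recorded input-output data to choose its next action. The one-variable environment, its coefficients, and the least-squares solver are illustrative assumptions.

```python
import numpy as np

# A hypothetical one-variable "environment": y[t+1] = a*y[t] + b*u[t].
# The controller never sees a or b; it works only from recorded data.
a, b = 0.9, 0.5
rng = np.random.default_rng(0)

# 1) Record a short trajectory driven by random, exploratory inputs.
T = 50
u_data = rng.normal(size=T)
y_data = np.zeros(T)
for t in range(T - 1):
    y_data[t + 1] = a * y_data[t] + b * u_data[t]

U, Y_now, Y_next = u_data[:-1], y_data[:-1], y_data[1:]

# 2) Direct data-driven control: find a combination g of recorded snippets
#    whose "current output" matches the measured output and whose "next
#    output" hits the target, then read off the input that combination implies.
def control(y_measured, target):
    constraints = np.vstack([Y_now, Y_next])
    g, *_ = np.linalg.lstsq(constraints, np.array([y_measured, target]), rcond=None)
    return U @ g

# 3) Closed loop: the controller steers the environment to the target
#    without ever identifying an explicit model of it.
y, target = 0.0, 1.0
for t in range(5):
    u = control(y, target)
    y = a * y + b * u
    print(f"step {t}: input = {u:+.3f}, output = {y:.3f}")
```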

“People thought of the brain as a whole or even parts of the brain as being a controller, but no one suggested that a single neuron could do that,” Chklovskii says. “Control is a computationally intensive task. It’s hard to think of a neuron as having enough computational capacity.”

Enhancing Understanding Through Biological Noise

Viewing neurons as mini-controllers also explains several previously unexplained biological phenomena, Chklovskii says. For example, scientists have long recognized that the brain is noisy, and the purpose of this biological randomness has been debated. Through their modeling, the CCN team found that certain types of noise can actually enhance neurons’ performance.

More specifically, at the junctions where one neuron connects to another (called ‘synapses’), a neuron often transmits an electrical signal that the neuron downstream of it never receives. Whether and when the downstream neuron receives a synaptic signal appears to be governed largely by chance. While other scientists have speculated that such randomness is simply the nature of small biological systems and unimportant to neuron behavior, the Flatiron team found that adding noise to their model neuron allowed it to adapt to a constantly changing environment. The randomness, the team found, appears to be important in replicating how real neurons function.
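
As an illustration of the idea (not the team’s model), this kind of unreliable synapse can be thought of as a coin flip on every transmission; the transmission probability below is an arbitrary assumption:

```python
import numpy as np

# Each time the upstream neuron fires, the downstream neuron receives the
# message only with probability p_transmit; the rest are lost to chance.
rng = np.random.default_rng(1)

def noisy_synapse(presynaptic_spikes, p_transmit=0.5):
    """Randomly drop transmissions, mimicking the unreliability of real synapses."""
    delivered = rng.random(presynaptic_spikes.shape) < p_transmit
    return presynaptic_spikes * delivered

spikes = np.ones(10)               # the upstream neuron fires ten times
received = noisy_synapse(spikes)   # only some of those messages arrive downstream
print(received)
```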

Expanding the Neuronal Model to Other Neuron Types

Chklovskii next plans to analyze types of neurons that don’t fit the new model. For example, neurons in the retina receive direct inputs from the visual environment. These neurons may not be able to control their inputs the way neurons deeper in the brain can, but they might use some of the same principles Chklovskii and his team identified: namely, they might be able to predict their inputs even if they can’t influence them.

“Control and prediction are actually very related,” Chklovskii says. “You cannot control efficiently without predicting the impact of your actions in the world.”

Reference: “The neuron as a direct data-driven controller” by Jason J. Moore, Alexander Genkin, Magnus Tournoy, Joshua L. Pughe-Sanford, Rob R. de Ruyter van Steveninck and Dmitri B. Chklovskii, 24 June 2024, Proceedings of the National Academy of Sciences.
DOI: 10.1073/pnas.2311893121

