Bioinspired Neural Network Model Can Store Significantly More Memories


The researchers discovered that a network that incorporated both pairwise and set-wise connections performed best and retained the greatest number of memories.

By modifying a classical neural network in line with recent biological discoveries, researchers have developed a new model that shows markedly enhanced memory performance.

Computer models play a crucial role in investigating how the brain makes and retains memories and other intricate information. However, constructing such models is a delicate task. The intricate interplay of electrical and biochemical signals, together with the web of connections between neurons and other cell types, creates the infrastructure for memories to form. Even so, because this underlying biology is only partially understood, encoding the brain's complexity into a computer model for further study has proven difficult.

Researchers at the Okinawa Institute of Science and Technology (OIST) have made improvements to a widely utilized computer model of memory, known as a Hopfield network, by incorporating insights from biology. The alteration has resulted in a network that not only better mirrors the way neurons and other cells are connected in the brain, but also has the capacity to store significantly more memories.

The complexity added to the network is what makes it more realistic, says Thomas Burns, a Ph.D. student in the group of Professor Tomoki Fukai, who heads OIST’s Neural Coding and Brain Computing Unit.

“Why would biology have all this complexity? Memory capacity might be a reason,” Mr. Burns says.

Diagrams of Connections in Hopfield Networks

In the classical Hopfield network (left), each neuron (i, j, k, l) is connected to the others in a pairwise manner. In the modified network made by Mr. Burns and Professor Fukai, sets of three or more neurons can connect simultaneously. Credit: Thomas Burns (OIST)

Hopfield networks store memories as patterns of weighted connections between different neurons in the system. The network is “trained” to encode these patterns; researchers can then test its memory by presenting a series of blurry or incomplete patterns and seeing whether the network recognizes each as one it already knows. In classical Hopfield networks, however, neurons in the model reciprocally connect to other neurons in the network to form a series of what are called “pairwise” connections.
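To make that concrete, here is a minimal sketch of a classical Hopfield network, assuming the standard Hebbian storage rule and a sign-threshold recall update; the function names and parameters are illustrative, not taken from the paper:

```python
import numpy as np

def train_hopfield(patterns):
    """Store +/-1 patterns as pairwise weights via a Hebbian rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)           # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Iteratively update the state until it settles on a stored pattern."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Store two 8-unit patterns, then recover one from a corrupted cue.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = train_hopfield(patterns)
cue = patterns[0].copy()
cue[:2] *= -1                        # flip two units to "blur" the memory
print(recall(W, cue))                # should recover the original pattern
```

Presenting a corrupted cue and checking whether the update settles back onto the stored pattern is exactly the kind of memory test described above.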

Pairwise connections represent how two neurons connect at a synapse, a connection point between two neurons in the brain. But in reality, neurons have intricate branched structures called dendrites that provide multiple points for connection, so the brain relies on a much more complex arrangement of synapses to get its cognitive jobs done. Additionally, connections between neurons are modulated by other cell types called astrocytes.

“It’s simply not realistic that only pairwise connections between neurons exist in the brain,” explains Mr. Burns. He created a modified Hopfield network in which not just pairs of neurons but sets of three, four, or more neurons could link up too, such as might occur in the brain through astrocytes and dendritic trees.

Although the new network allowed these so-called “set-wise” connections, overall it contained the same total number of connections as before. The researchers found that a network containing a mix of both pairwise and set-wise connections performed best and retained the highest number of memories. They estimate it works more than twice as well as a traditional Hopfield network. “It turns out you actually need a combination of features in some balance,” says Mr. Burns. “You should have individual synapses, but you should also have some dendritic trees and some astrocytes.”
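The sketch below illustrates the general idea of mixing pairwise and set-wise (here, three-neuron) connections in a Hopfield-style network. The weighting scheme, the mixing parameter, and the update rule are our own simplifying assumptions for illustration; the authors' model additionally keeps the total number of connections fixed, which this toy version does not enforce:

```python
import numpy as np
from itertools import combinations

def train_mixed(patterns, mix=0.5):
    """Hebbian-style pairwise weights plus triple-wise (set-wise) couplings."""
    m, n = patterns.shape
    W2 = np.zeros((n, n))
    W3 = {}
    for p in patterns:
        W2 += np.outer(p, p)
        for i, j, k in combinations(range(n), 3):
            W3[(i, j, k)] = W3.get((i, j, k), 0.0) + p[i] * p[j] * p[k]
    np.fill_diagonal(W2, 0)
    return mix * W2 / m, {t: (1 - mix) * v / m for t, v in W3.items()}

def recall_mixed(W2, W3, state, steps=10):
    """Update each neuron from its pairwise and set-wise inputs combined."""
    for _ in range(steps):
        field = W2 @ state
        for (i, j, k), w in W3.items():
            # each triple contributes to the field on all three of its members
            field[i] += w * state[j] * state[k]
            field[j] += w * state[i] * state[k]
            field[k] += w * state[i] * state[j]
        state = np.where(field >= 0, 1, -1)
    return state
```

The mix parameter here stands in for the balance of features Mr. Burns describes: setting it to 1 recovers a purely pairwise network, setting it to 0 gives a purely set-wise one, and intermediate values combine the two.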

Hopfield networks are important for modeling brain processes, but they have other powerful uses too. For example, very similar types of networks called Transformers underlie AI-based language tools such as ChatGPT, so the improvements Mr. Burns and Professor Fukai have identified may also make such tools more robust.
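The link to Transformers comes from so-called modern, continuous Hopfield networks, whose retrieval step takes the same softmax form as the attention operation in Transformers. A minimal sketch of that retrieval step, with illustrative values for the stored patterns and the sharpness parameter beta:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def retrieve(memories, query, beta=2.0):
    """One retrieval step: attention-style weighting over stored patterns."""
    weights = softmax(beta * memories @ query)   # similarity -> attention weights
    return memories.T @ weights                  # weighted recombination of memories

memories = np.array([[1.0, -1.0, 1.0, -1.0],
                     [1.0, 1.0, -1.0, -1.0]])
noisy = np.array([0.8, -0.9, 1.1, -0.7])         # corrupted version of memory 0
print(retrieve(memories, noisy))                 # output moves toward memory 0
```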

Mr. Burns and his colleagues plan to continue working with their modified Hopfield networks to make them still more powerful. For example, in the brain the strengths of connections between neurons are not normally the same in both directions, so Mr. Burns wonders if this feature of asymmetry might also improve the network’s performance. Additionally, he would like to explore ways of making the network’s memories interact with each other, the way they do in the human brain. “Our memories are multifaceted and vast,” says Mr. Burns. “We still have a lot to uncover.”

Reference: “Simplicial Hopfield networks” by Thomas F Burns and Tomoki Fukai, 1 February 2023, International Conference on Learning Representations.
