Deep learning is often seen as an essential ingredient in the realization of many artificial intelligence tasks. New research suggests, however, that many of these tasks can be performed efficiently by simpler, shallow architectures.
Shallow feedforward networks can efficiently learn non-trivial classification tasks with reduced computational complexity compared to deep learning architectures, according to research published in Scientific Reports. This finding may direct the development of unique, energy-efficient hardware for shallow learning.
The earliest artificial neural network, the Perceptron, was introduced approximately 65 years ago and consisted of just one layer. However, to solve more complex classification tasks, more advanced architectures consisting of numerous feedforward (consecutive) layers were later introduced. These layered architectures are the essential component of today's deep learning algorithms, which improve performance on analytical and physical tasks without human intervention and lie behind everyday automation products such as self-driving cars and autonomous chatbots.
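For readers unfamiliar with that original single-layer architecture, the following is a minimal sketch of the classic Perceptron and its update rule, written in Python with NumPy. The toy dataset, learning rate, and number of passes are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Minimal single-layer Perceptron: one weight vector, no hidden layers.
# The data, learning rate, and epoch count are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # toy 2-D inputs
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # linearly separable labels

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(20):                           # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)            # threshold activation
        w += lr * (yi - pred) * xi            # classic Perceptron update
        b += lr * (yi - pred)

acc = np.mean([int(w @ xi + b > 0) == yi for xi, yi in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```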
The key question driving new research published today (April 20) in the journal Scientific Reports is whether efficient learning of non-trivial classification tasks can be achieved using brain-inspired shallow feedforward networks, while potentially requiring less computational complexity. “A positive answer questions the need for deep learning architectures, and might direct the development of unique hardware for the efficient and fast implementation of shallow learning,” said Prof. Ido Kanter, of Bar-Ilan’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research. “Additionally, it would demonstrate how brain-inspired shallow learning has advanced computational capability with reduced complexity and energy consumption.”
“We’ve shown that efficient learning on an artificial shallow architecture can achieve the same classification success rates that previously were achieved by deep learning architectures consisting of many layers and filters, but with less computational complexity,” said Yarden Tzach, a PhD student and contributor to this work. “However, the efficient realization of shallow architectures requires a shift in the properties of advanced GPU technology, and future dedicated hardware developments,” he added.
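To make the structural contrast concrete, here is a minimal sketch, assuming PyTorch, a 3,072-dimensional flattened input (e.g., a 32×32 RGB image), ten output classes, and an arbitrary hidden width of 512. All of these choices are illustrative assumptions, not the authors' architectures; the sketch only shows how a shallow feedforward classifier compares with a deeper stack in parameter count.

```python
import torch.nn as nn

# Illustrative only (not the paper's architectures): a shallow feedforward
# classifier with one hidden layer vs. a deeper stack of the same width.
shallow = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3072, 512), nn.ReLU(),    # single hidden layer
    nn.Linear(512, 10),
)

deep = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3072, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),     # additional consecutive layers
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 10),
)

def n_params(model: nn.Module) -> int:
    # Total number of trainable parameters.
    return sum(p.numel() for p in model.parameters())

print(f"shallow: {n_params(shallow):,} parameters")
print(f"deep:    {n_params(deep):,} parameters")
```

At equal hidden width, each extra layer adds parameters and sequential computation; the research asks whether that added depth is actually necessary for comparable classification accuracy.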
The efficient learning on brain-inspired shallow architectures goes hand in hand with efficient dendritic tree learning, which is based on previous experimental research by Prof. Kanter on sub-dendritic adaptation using neuronal cultures, together with other anisotropic properties of neurons, such as different spike waveforms, refractory periods, and maximal transmission rates.
For years, brain dynamics and machine learning were researched independently; recently, however, brain dynamics has been revealed as a source of new types of efficient artificial intelligence.
Reference: “Efficient shallow learning as an alternative to deep learning” by Yuval Meir, Ofek Tevet, Yarden Tzach, Shiri Hodassman, Ronit D. Gross and Ido Kanter, 20 April 2023, Scientific Reports.