The human brain, comprising billions of diverse neurons intricately organized into highly efficient neural networks, remains the only known system capable of achieving general intelligence. At the intersection of artificial intelligence (AI) and neuroscience, a key goal has been to develop AI models that can replicate the brain's remarkable performance and efficiency.
Brain-inspired techniques, which map biological neural architectures onto artificial neural networks, have shown significant promise across various domains. However, most of these approaches focus on neuronal firing patterns, overlooking the critical role of neuronal microcircuits in complex information processing. This raises a central question: how can neural networks balance information capacity and computational complexity while accurately modeling biological microcircuits within deep learning frameworks?
A research team led by Professor Yue Deng has integrated recent insights from neuroscience into neural networks to improve their accuracy, efficiency, and speed. Their key contribution is a Heterogeneous spiking Framework with self-inhibiting neurons (HIFI).
Inspired by self-inhibiting synapses observed in biological neurons, the researchers developed a new self-inhibiting neuron model that endows individual neurons with memory capabilities. This innovation offers a fresh perspective for simulating and understanding the complex dynamics of mammalian brain neurons. The study demonstrates that the spiking neuron model can effectively emulate neuronal potential dynamics at both the individual and network levels, significantly improving decoding efficiency in brain-computer interfaces.
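To make the idea concrete, the sketch below shows a minimal leaky integrate-and-fire neuron augmented with a slowly decaying self-inhibitory current: each spike strengthens the inhibitory trace, which suppresses later firing and so gives the single neuron a form of memory. The parameter names, values, and update rule are illustrative assumptions, not the equations from the paper.

```python
import numpy as np

def simulate_self_inhibiting_neuron(inputs, tau_mem=10.0, tau_inh=50.0,
                                    v_thresh=1.0, w_inh=0.5, dt=1.0):
    """Illustrative leaky integrate-and-fire neuron with a self-inhibitory
    current (all parameters are assumptions, not the HIFI formulation)."""
    v, inh = 0.0, 0.0
    spikes = []
    for x in inputs:
        # Membrane potential integrates the input minus the self-inhibitory current.
        v += (dt / tau_mem) * (-v + x - inh)
        if v >= v_thresh:
            spikes.append(1)
            v = 0.0          # reset the membrane potential after a spike
            inh += w_inh     # each spike strengthens self-inhibition
        else:
            spikes.append(0)
        inh -= (dt / tau_inh) * inh  # the inhibitory trace decays slowly
    return np.array(spikes)

# A constant drive above threshold fires quickly at first, then more sparsely
# as the self-inhibitory trace builds up -- the neuron "remembers" its recent spikes.
print(simulate_self_inhibiting_neuron(np.full(200, 1.5)).sum())
```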
In terms of network learning, the team drew on the brain's inherent heterogeneity to propose the HIFI model, which is composed of diverse self-inhibiting neurons. The model employs a bi-level programming paradigm that learns neuron-level biophysical variables and network-level synaptic weights in a nested, heterogeneous fashion; a rough sketch of this nested scheme appears below. HIFI demonstrated superior performance on both image classification and neuromorphic datasets, achieving up to a 10% improvement in accuracy, a 17.83-fold reduction in energy consumption, and a 5-fold reduction in latency.

Its high efficiency and low latency also position HIFI as a promising tool for analyzing large-scale, high-dimensional single-cell RNA sequencing (scRNA-seq) data. HIFI can accurately identify the rare cell types Sncg, Serpinf1, and Astro from scRNA-seq data. Although these rare cells account for only a tiny fraction of the data (e.g., 0.09%), they are key biomarkers for several brain diseases, including multiple system atrophy (Sncg), glioblastoma (Serpinf1), and brain edema (Astro).
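The bi-level learning scheme mentioned above can be pictured as two nested optimization loops: an inner loop fits the network-level synaptic weights, while an outer loop adjusts the neuron-level biophysical variables. The toy PyTorch sketch below uses a per-neuron decay constant as a stand-in for the heterogeneous biophysical variables; HIFI's actual spiking dynamics, losses, and surrogate-gradient details are not reproduced here.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_in, n_out = 16, 4

# Network-level parameters: synaptic weights (inner level of the bi-level scheme).
weights = torch.randn(n_in, n_out, requires_grad=True)
# Neuron-level parameters: one "biophysical" decay constant per neuron (outer level),
# standing in for the heterogeneous per-neuron variables.
neuron_decay = torch.full((n_out,), 0.9, requires_grad=True)

w_opt = torch.optim.Adam([weights], lr=1e-2)        # inner-level optimizer
p_opt = torch.optim.Adam([neuron_decay], lr=1e-3)   # outer-level optimizer

def forward(x):
    # Crude rate-coded stand-in for the spiking dynamics: each neuron scales
    # its summed synaptic input by its own decay constant.
    return torch.sigmoid(neuron_decay * (x @ weights))

x = torch.randn(32, n_in)   # toy input batch
y = torch.rand(32, n_out)   # toy targets

for outer_step in range(10):
    for inner_step in range(5):        # inner loop: learn synaptic weights
        w_opt.zero_grad()
        F.mse_loss(forward(x), y).backward()
        w_opt.step()
    p_opt.zero_grad()                  # outer loop: learn neuron-level variables
    F.mse_loss(forward(x), y).backward()
    p_opt.step()
```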
This interdisciplinary research introduces a learning framework that combines high accuracy, high efficiency, and low latency, and it holds great potential for broad application to general machine learning tasks. The experiments demonstrate the framework's applicability across a range of complex tasks, advancing bio-inspired computational models towards general intelligence.
Journal: National Science Review