Figure (from left to right): prototype of the BrainScaleS-2 chip on which the experiment was performed; schematic representation of a neural network; results for simple and complex tasks.
Source: Heidelberg University and MPIDS

Optimizing neural networks on a brain-inspired computer

Neural networks in both biological settings and artificial intelligence distribute computation across their neurons to solve complex tasks. New research now shows how so-called “critical states” can be used to optimize artificial neural networks running on brain-inspired neuromorphic hardware. The study was carried out by scientists from Heidelberg University, working within the Human Brain Project, and from the Max Planck Institute for Dynamics and Self-Organization (MPIDS).

Many computational properties are maximized when the dynamics of a network are at a “critical point”, a state in which a system can quickly and fundamentally change its overall characteristics, transitioning, for example, between order and chaos or between stability and instability. The critical state is therefore widely assumed to be optimal for any computation in recurrent neural networks, which are used in many AI applications.
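The notion of a critical point can be illustrated with a minimal toy model (a sketch for intuition only; the function and its parameters are illustrative and not taken from the study): a branching process in which each active unit triggers on average m successor activations. Below m = 1 activity dies out, above it activity explodes, and m = 1 is the critical point in between.

```python
import random

def simulate_branching(m, steps=30, a0=1000, seed=1):
    """Toy branching process: each active unit triggers 2 successors
    with probability m/2 and none otherwise, so the mean number of
    successors per unit is exactly m (the branching parameter)."""
    rng = random.Random(seed)
    a = a0
    trace = [a]
    for _ in range(steps):
        a = sum(2 for _ in range(a) if rng.random() < m / 2)
        trace.append(a)
    return trace

sub = simulate_branching(0.8)   # subcritical (m < 1): activity decays away
crit = simulate_branching(1.0)  # critical (m = 1): activity lingers, with large fluctuations
sup = simulate_branching(1.2)   # supercritical (m > 1): activity grows explosively
```

Only at the critical point do small changes in m flip the network between the decaying and exploding regimes, which is what gives critical systems their sensitivity and long-range memory.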

Researchers from the HBP partner Heidelberg University and the Max Planck Institute for Dynamics and Self-Organization challenged this assumption by testing the performance of a spiking recurrent neural network on a set of tasks of varying complexity, both at and away from critical dynamics. They instantiated the network on a prototype of the analog neuromorphic BrainScaleS-2 system. BrainScaleS is a state-of-the-art brain-inspired computing system with synaptic plasticity implemented directly on the chip; it is one of two neuromorphic systems currently under development within the European Human Brain Project.

First, the researchers showed that the distance to criticality can be easily adjusted on the chip by changing the input strength; they then demonstrated a clear relation between criticality and task performance. The assumption that criticality is beneficial for every task was not confirmed: whereas information-theoretic measures all showed that network capacity was maximal at criticality, only the complex, memory-intensive tasks profited from it, while performance on simple tasks actually suffered. The study thus provides a more precise understanding of how the collective network state should be tuned to different task requirements for optimal performance.

Mechanistically, the optimal working point for each task can be set very easily under homeostatic plasticity by adapting the mean input strength. The theory behind this mechanism was developed only recently at the MPIDS. “Putting it to work on neuromorphic hardware shows that these plasticity rules are very capable of tuning network dynamics to varying distances from criticality”, says senior author Viola Priesemann, group leader at MPIDS. In this way, tasks of varying complexity can be solved optimally within that space.
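The general principle can be sketched with a toy branching model (an illustrative sketch of the idea, not the actual on-chip plasticity rule; all names and parameters are assumptions): a homeostatic rule that regulates activity toward a fixed target makes the branching parameter settle near m = 1 − h/target, so a stronger mean input h drives the dynamics farther below the critical point m = 1.

```python
import random

def homeostatic_branching(h, target=100.0, eta=0.001, steps=20000, seed=2):
    """Toy branching dynamics a -> m*a + h + noise, with a homeostatic
    rule nudging the branching parameter m whenever the activity a
    deviates from `target`. At stationarity target = m*target + h,
    i.e. m settles near 1 - h/target: the stronger the mean input h,
    the farther below the critical point m = 1 the network is driven."""
    rng = random.Random(seed)
    a = target
    m = 0.5
    ms = []
    for _ in range(steps):
        drive = m * a + h
        a = max(0.0, drive + rng.gauss(0.0, drive ** 0.5))
        # homeostasis: lower m when activity is too high, raise it when too low
        m = min(max(m + eta * (target - a) / target, 0.0), 0.999)
        ms.append(m)
    # average over the second half of the run, after the initial transient
    return sum(ms[steps // 2:]) / (steps - steps // 2)

m_weak = homeostatic_branching(h=1.0)    # weak input: m near 1 (close to critical)
m_strong = homeostatic_branching(h=50.0) # strong input: m well below 1 (subcritical)
```

The same local rule thus places the network at different distances from criticality depending only on the mean input strength, mirroring how a single plasticity mechanism can match the working point to task demands.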

The finding may also explain why biological neural networks operate not necessarily at criticality, but in the dynamically rich vicinity of a critical point, where they can tune their computational properties to task requirements. Furthermore, it establishes neuromorphic hardware as a fast and scalable avenue to explore the impact of biological plasticity rules on neural computation and network dynamics. “As a next step, we now study and characterize the impact of the spiking network’s working point on classifying artificial and real-world spoken words”, says first author Benjamin Cramer of Heidelberg University.

The results have been published in Nature Communications.
