Memory abilities to make neural networks less 'forgetful'

Artificial intelligence (AI) experts at the University of Massachusetts Amherst and Baylor College of Medicine report that they have successfully addressed what they call a “major, long-standing obstacle to increasing AI capabilities” by drawing inspiration from a human brain memory mechanism known as “replay”.

First author and postdoctoral researcher Gido van de Ven and principal investigator Andreas Tolias at Baylor, with Hava Siegelmann at UMass Amherst, write in Nature Communications that they have developed a new, “surprisingly efficient” method to protect deep neural networks from “catastrophic forgetting”: upon learning new lessons, the networks forget what they had learned before.

Siegelmann and colleagues point out that deep neural networks are the main drivers behind recent AI advances, but progress is held back by this forgetting.

They write, “One solution would be to store previously encountered examples and revisit them when learning something new. Although such ‘replay’ or ‘rehearsal’ solves catastrophic forgetting,” they add, “constantly retraining on all previously learned tasks is highly inefficient and the amount of data that would have to be stored becomes unmanageable quickly.”
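To make that trade-off concrete, here is a minimal, illustrative Python sketch of exact rehearsal, with the model-fitting step elided. The `ReplayBuffer` class and the batch format are assumptions for illustration, not the authors' code; note how the store grows with every task:

```python
import random

class ReplayBuffer:
    """Naive rehearsal: keep every example ever seen and revisit it later."""
    def __init__(self):
        self.examples = []

    def add(self, batch):
        self.examples.extend(batch)

    def sample(self, k):
        return random.sample(self.examples, min(k, len(self.examples)))

buffer = ReplayBuffer()
for task_id in range(3):                                # three sequential tasks
    new_batch = [(task_id, i) for i in range(1000)]     # stand-in (input, label) pairs
    mixed = new_batch + buffer.sample(256)              # revisit old data while learning new
    # ... fit the model on `mixed` here ...
    buffer.add(new_batch)
    print(f"after task {task_id}: {len(buffer.examples)} examples stored")
```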

Unlike AI neural networks, humans are able to continuously accumulate information throughout their lives, building on earlier lessons. An important mechanism in the brain believed to protect memories against forgetting is the replay of neuronal activity patterns representing those memories, they explain.

Siegelmann says the team’s major insight lies in “recognizing that replay in the brain does not store data.” Rather, “the brain generates representations of memories at a higher, more abstract level with no need to generate detailed memories.” Inspired by this, she and colleagues created an artificial brain-like replay, in which no data is stored. Instead, like the brain, the network generates high-level representations of what it has seen before.
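As a rough sketch of how generative replay of this kind can work in practice (a generic formulation under stated assumptions, not the authors' published implementation): a generator stands in for stored data, and a frozen copy of the network taken before the new task labels the generated samples, so old responses are distilled into the updated model. The `generator.sample` interface and the data loader are hypothetical:

```python
import copy
import torch
import torch.nn.functional as F

def learn_task_with_generative_replay(model, generator, task_loader,
                                      optimizer, replay_weight=0.5):
    # Snapshot the model before the new task; it supplies targets for replay.
    prev_model = copy.deepcopy(model).eval()
    for inputs, targets in task_loader:
        with torch.no_grad():
            replay_inputs = generator.sample(len(inputs))   # hypothetical generator API
            replay_soft = prev_model(replay_inputs).softmax(dim=-1)

        optimizer.zero_grad()
        loss_new = F.cross_entropy(model(inputs), targets)             # learn the new task
        loss_old = F.cross_entropy(model(replay_inputs), replay_soft)  # retain old responses
        ((1 - replay_weight) * loss_new + replay_weight * loss_old).backward()
        optimizer.step()
```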

The “abstract generative brain replay” proved extremely efficient, and the team showed that replaying just a few generated representations is sufficient to retain old memories while learning new ones. Generative replay not only prevents catastrophic forgetting and provides a new, more streamlined path for system learning; it also allows the system to generalize learning from one situation to another, they state.

For example, “if our network with generative replay first learns to separate cats from dogs, and then to separate bears from foxes, it will also be able to tell cats from foxes without having ever seen a cat and a fox at the same time,” says Van de Ven.

He and colleagues write, “We propose a new, brain-inspired variant of replay in which internal or hidden representations are replayed that are generated by the network’s own, context-modulated feedback connections. Our method achieves state-of-the-art performance on challenging continual learning benchmarks without storing data, and it provides a novel model for abstract level replay in the brain.”
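A minimal sketch of what replay at this internal, hidden-representation level could look like, assuming the network is split into a feature extractor and a classifier. The `latent_generator`, the frozen `prev_classifier` copy, and the split itself are illustrative assumptions; the paper's context-modulated feedback connections are not modelled here:

```python
import torch
import torch.nn.functional as F

def hidden_replay_step(classifier, prev_classifier, latent_generator,
                       optimizer, batch_size=64):
    # Generate abstract hidden-level "memories" instead of reconstructing
    # detailed inputs; only the top of the network is trained on them.
    with torch.no_grad():
        hidden = latent_generator.sample(batch_size)     # hypothetical API
        soft = prev_classifier(hidden).softmax(dim=-1)   # frozen pre-task copy

    optimizer.zero_grad()
    loss = F.cross_entropy(classifier(hidden), soft)     # distill old responses
    loss.backward()
    optimizer.step()
```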

Van de Ven says, “Our method makes several interesting predictions about the way replay might contribute to memory consolidation in the brain. We are already running an experiment to test some of these predictions.”
