Summary: Dreams that seem realistic and bizarre at the same time help our brains learn and extract generic concepts from past experiences, a new study reports.
Source: Human Brain Project
A new study by researchers from the University of Bern, Switzerland, suggests that dreams — especially dreams that seem simultaneously realistic, but on closer inspection bizarre — help our brain learn and extract generic concepts from past experiences.
The study, conducted within the Human Brain Project and published in eLife, offers a new theory of the meaning of dreams using machine-learning-inspired methodology and brain simulation.
The importance of sleep and dreams for learning and memory has long been recognized – the impact a single restless night can have on our cognition is well known. “What we’re missing is a theory that links this to consolidation of experiences, generalization of concepts and creativity,” explains Nicolas Deperrois, lead author of the study.
During sleep, we usually experience two types of sleep phases, alternating one after the other: non-REM sleep, when the brain "replays" the sensory stimuli we experienced while awake, and REM sleep, when spontaneous bursts of intense brain activity produce vivid dreams.
The researchers used simulations of the cerebral cortex to model how different stages of sleep affect learning. To introduce an element of unusualness into the artificial dreams, they took inspiration from a machine learning technique called Generative Adversarial Networks (GANs).
In GANs, two neural networks compete to generate new data from the same data set, in this case a series of simple images of objects and animals. This operation produces new artificial images that may appear superficially realistic to a human observer.
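To make the adversarial principle concrete, here is a minimal, hedged sketch of a GAN on a 1-D toy problem (an assumption for illustration — the study's cortical model is far richer). A linear "generator" learns to mimic samples drawn from a Gaussian with mean 3, while a logistic "discriminator" learns to tell real samples from generated ones; the competition drives the generator's output toward the real data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: G(z) = w_g * z + b_g, with noise z ~ N(0, 1)
w_g, b_g = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w_d * x + b_d), probability that x is real
w_d, b_d = 0.0, 0.0
lr = 0.05

for step in range(2000):
    z = rng.standard_normal()
    x_real = 3.0 + rng.standard_normal()   # real data: N(3, 1)
    x_fake = w_g * z + b_g

    # Discriminator update: raise D(real), lower D(fake)
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    w_d += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b_d += lr * ((1 - d_real) - d_fake)

    # Generator update (non-saturating loss): raise D(fake)
    d_fake = sigmoid(w_d * x_fake + b_d)
    grad_x = (1 - d_fake) * w_d            # gradient of log D through x_fake
    w_g += lr * grad_x * z
    b_g += lr * grad_x

fake_mean = np.mean([w_g * rng.standard_normal() + b_g for _ in range(1000)])
print(f"generated mean ≈ {fake_mean:.2f} (real mean = 3.0)")
```

After training, the generator's samples cluster near the real mean even though it never sees the real data directly — it learns only from the discriminator's feedback, which is the mechanism the researchers borrow to produce novel but plausible "dreamed" inputs.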
The researchers then simulated the cortex during three different states: wakefulness, non-REM sleep, and REM sleep. During wakefulness, the model is exposed to images of boats, cars, dogs and other objects. During non-REM sleep, the model replays the sensory input with some occlusions.
During REM sleep, the GANs generate new sensory input: distorted but realistic versions and combinations of boats, cars, dogs and other objects.
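The two dream types can be caricatured in a few lines of code. The sketch below is an illustrative assumption, not the study's implementation: "memories" are flat latent vectors, the decoder is a fixed random linear map rather than a trained generator, and the class names are placeholders. An NREM-style dream replays one memory with part of the input occluded; a REM-style dream mixes two memories in latent space before decoding.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two stored episodic "memories" as latent codes, plus a toy linear decoder
z_boat = rng.standard_normal(8)
z_dog = rng.standard_normal(8)
decoder = rng.standard_normal((64, 8))   # latent (8) -> "sensory" input (64)

# NREM-style dream: replay one memory with ~30% of the input occluded
nrem_dream = decoder @ z_boat
occlusion = rng.random(64) < 0.3
nrem_dream[occlusion] = 0.0

# REM-style dream: creatively combine two memories in latent space
lam = 0.5
rem_dream = decoder @ (lam * z_boat + (1 - lam) * z_dog)

print(nrem_dream.shape, rem_dream.shape)
```

The occluded replay forces the model to tolerate missing input, while the latent mixture produces an input that belongs to neither original experience — the "realistic yet bizarre" quality the article describes.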
To test the performance of the model, a simple classifier evaluates how easily the identity of the object (boat, dog, car, etc.) can be read from the cortical representations.
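A toy version of such a readout test can be sketched as follows (assumptions for illustration: synthetic "cortical representations" drawn from well-separated Gaussian clusters stand in for the model's latent activity, and a nearest-centroid rule stands in for the simple classifier). If object identity is easy to read out, accuracy is high.

```python
import numpy as np

rng = np.random.default_rng(2)

# One latent "concept" per class (boat, car, dog), 50 noisy representations each
centers = rng.standard_normal((3, 16)) * 5
reps = np.concatenate([c + rng.standard_normal((50, 16)) for c in centers])
labels = np.repeat(np.arange(3), 50)

# Fit class centroids on half the data, read out identity on the other half
train = np.tile(np.arange(50) < 25, 3)
centroids = np.stack(
    [reps[train & (labels == k)].mean(axis=0) for k in range(3)]
)
dists = np.linalg.norm(reps[~train, None, :] - centroids[None], axis=2)
accuracy = (dists.argmin(axis=1) == labels[~train]).mean()
print(f"readout accuracy: {accuracy:.2f}")
```

A drop in this readout accuracy is exactly the signal the researchers used to judge the contribution of each sleep phase: suppressing REM-style dreaming degraded the representations, and the accuracy fell.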
“Non-REM and REM dreams become more realistic as our model learns,” explains Jakob Jordan, senior author and leader of the research team.
“While non-REM dreams are quite similar to waking experiences, REM dreams tend to creatively combine these experiences.”
Interestingly, when the REM sleep phase was suppressed in the model, or when its dreams were made less creative, the classifier's accuracy decreased. When the NREM sleep phase was removed instead, the representations became more sensitive to sensory perturbations (here, occlusions).
According to this study, wakefulness, non-REM and REM sleep appear to have complementary functions for learning: experiencing the stimulus, amplifying that experience, and discovering semantic concepts. “We think these findings suggest a simple evolutionary role for dreams, without interpreting their exact meaning,” Deperrois says.
“It is not surprising that dreams are bizarre: this bizarreness has a purpose. The next time you have crazy dreams, maybe don’t try to find a deeper meaning — your brain may just be organizing your experiences.”
About this sleep, dream and learning research news
Writer: Roberto Inchingolo
Source: Human Brain Project
Contact: Roberto Inchingolo – Human Brain Project
Image: The image is in the public domain
Original research: Open access.
“Learning cortical representations through perturbed and adversarial dreaming” by Nicolas Deperrois et al. eLife
Learning cortical representations through perturbed and adversarial dreaming
Humans and other animals learn to extract general concepts from sensory experiences without extensive instruction. This ability is thought to be facilitated by offline states such as sleep, during which previous experiences are systematically replayed. However, the distinctively creative nature of dreams suggests that learning semantic representations may go beyond merely replaying past experiences.
We support this hypothesis by implementing a cortical architecture inspired by generative adversarial networks (GANs).
Learning in our model is organized into three different global brain states that mimic wakefulness, non-rapid eye movement (NREM), and REM sleep, optimizing different, yet complementary, objective functions.
We train the model on standard datasets of natural images and evaluate the quality of the learned representations.
Our results suggest that generating new, virtual sensory input via adversarial dreams during REM sleep is essential for extracting semantic concepts, while replaying episodic memories via perturbed dreams during NREM sleep improves the robustness of latent representations.
The model provides a new computational perspective on sleep states, memory replay and dreams, and suggests a cortical implementation of GANs.