Reading Time: 6 minutes

Someone’s been developing a theory lately that might explain why humans’ dreams are often so illogical and weird. Since dis me 100%, the story caught my eye. What makes it even better is that it relates to the world of artificial intelligence (AI). Today, we’ll look at new discoveries about dreams: their function, their necessity, and the distinctive AI weakness that got us thinking about it all.

A scene of Brighton Pavilion, seen in an AI’s dreams.

Strange dreams

Everyone dreams. We must. Some of us can actually remember our dreams, though most can’t. But we all do it, even if we can’t remember a lick of ’em afterward. If we’re prevented from reaching the state of sleep that forms the wellspring of dreaming, then we are mightily messed up the next day. (We also get mightily messed up if prevented from sleeping at all. More than a few days of that, and people deteriorate very quickly.)

A doctor of psychiatry and sleep medicine, Dr. Alex Dimitriu, tells us:

“Whether they remember or not, all people do dream in their sleep. It is an essential function for the human brain, and also present in most species.”

So we know that humans must dream while we sleep. We even know about some of the stuff that can inspire strange and vivid dreams.

We also know that sometimes, our dreams can cause us a lot of anxiety during our waking hours, especially if they’re unpleasant.

What we don’t really know, though, is why humans must dream – or why we must sleep at all, for that matter. But we’ve got some ideas along those lines.

Dreams as filing systems

For the past few years, I’ve been watching as researchers have developed some intriguing new ideas about why people must dream. This story from NBC summarizes some of those ideas:

Our brains need offline time for processing and learning new things – and they do this during sleep. (And there’s a whole lot of evidence to support the idea that sleep makes learning and memory storing possible.)

And it might be that dreaming plays a role in that process, [Robert] Stickgold says, “Where the brain is trying to solve problems and complete processes that were going on during waking that it—in its waking hours—didn’t complete.”

That makes sense. When I was in college in the 90s, I read somewhere that staying up to study all night long was actually not as beneficial as spending a reasonable amount of time studying, then getting a good night’s sleep. At the time, I half-suspected it was a grand conspiracy on the part of parents to deprive college students of fun.

I learned better, though, when I very unwisely stayed up for several days straight to cram for finals one summer semester. I was so out of it by the last test that I caught myself filling in “D” on Scantron answer sheets for True/False questions. I never did that again.

But there might be something else that dreams do, something very important indeed.


AI dreams and stranger things

If you’ve ever monkeyed around with something like Deep Dream Generator, you know that the images it returns can get really wild really quickly. Today, I started with this picture of a pretty wine-red Miata:

Just your everyday average Miata picture. GO YATTA GO YATTA GO!

A short while later, Deep Dream served this up to me:

A warped version of the Miata. I guess someone was showing the AI snakes and frogs?

When I clicked “Go Deeper” a couple more times until I was weirded out, this was what that everyday little Miata scene had turned into:

Third try at “Go Deeper” on Deep Dream.

Do you notice any themes in the images I got?

The “dreamed” images kinda look like the AI had something on its mind, don’t they? Like someone had recently fed it tons of images of graphic badgers and snakes, and it found things in the Miata picture that sorta reminded it of what it’d seen while “awake.”

So eventually, it produced a nightmare Lisa Frank picture.

Overfitting in the absence of AI dreams

You want an AI to learn to detect patterns. It does that with what’s called a “training set” of data points.

As this Big Think article puts it, the AI needs to learn what the data means, not just what the data points are. If it doesn’t ever learn the pattern behind the data, then it’ll just mash everything users feed it into those same data points. It won’t be able to generalize enough to provide useful output with a variety of data points.

That means that if you want an AI to recognize human faces, it’s got to see millions of ’em — different ones, too, in all kinds of poses and expressions. But then if you ask it to interpret a picture of a dog, it’ll hunt through that image to see if it can find any elements that look like what it actually knows, which in this case is the human face.

And if it finds any such elements anywhere, like say the puppy’s nose looks a bit like a human ear, it might just draw you a picture of a dog with a human ear where its nose should be. And a tail that looks like a rainbow of human arms waving in the air. The AI just jams whatever it can into the pattern it recognizes.

That’s called overfitting, and AI system programmers try really hard to avoid it. It severely weakens and stunts the AI by limiting its ability to interpret data given to it.
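To see what overfitting looks like in miniature, here’s a hedged little sketch in Python with NumPy. This is my own toy illustration, not anything from the articles above: ten points that follow a simple straight-line trend (plus a little noise), fit once with a simple line and once with a degree-9 polynomial that can thread through every training point exactly. The memorizer aces the training data and flubs the held-out points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a straight-line trend (y = 2x) plus a little noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.1, size=10)

# Held-out points the model never saw, drawn from the same trend.
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2 * x_test + rng.normal(0, 0.1, size=10)

# A simple model learns the trend; a degree-9 polynomial has
# enough wiggle room to pass through all 10 training points.
simple = np.polyfit(x_train, y_train, deg=1)
overfit = np.polyfit(x_train, y_train, deg=9)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

print("train error, simple :", mse(simple, x_train, y_train))
print("train error, overfit:", mse(overfit, x_train, y_train))
print("test error,  simple :", mse(simple, x_test, y_test))
print("test error,  overfit:", mse(overfit, x_test, y_test))
```

The overfit model “wins” on the training data, because it memorized it. On the test points, it loses badly: between the memorized points, the polynomial swings wildly, exactly the way an overfit AI jams new data into the only patterns it knows.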

The dreams that may come

To prevent overfitting, those programmers have a few tricks up their sleeves. Mostly, they introduce chaotic elements to the data set so the AI doesn’t just memorize one exact zig-zag path through the training points. That way, the AI knows that some elements won’t fit into that exact data set, and it seeks a through-way—the pattern—that the data points describe.

That’s what programmers want. They want it to be able to generalize from the data, not zig-zag back and forth between exact, precise data points.
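Continuing my toy illustration from above (again, my own sketch, not from the articles): one simple version of that chaos trick is noise injection. Instead of handing the model ten pristine points, you hand it many slightly jittered copies of each one. Even a model flexible enough to memorize can no longer thread through exact positions, so it has to settle for the underlying trend.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same toy setup: a straight-line trend (y = 2x) plus noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.1, size=10)

# Noise injection: 20 jittered copies of every training point.
# Now there's no exact zig-zag path for the model to memorize.
x_aug = np.repeat(x_train, 20) + rng.normal(0, 0.05, size=200)
y_aug = np.repeat(y_train, 20) + rng.normal(0, 0.05, size=200)

exact = np.polyfit(x_train, y_train, deg=9)  # memorizes the 10 points
noisy = np.polyfit(x_aug, y_aug, deg=9)      # forced toward the trend

# Score both against the clean underlying trend.
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2 * x_test

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

print("test error without noise:", mse(exact, x_test, y_test))
print("test error with noise   :", mse(noisy, x_test, y_test))
```

Same flexible degree-9 model both times; the only difference is the injected chaos. The jittered version generalizes far better, which is the whole point of the trick.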

The way these articles describe this chaos element, it sounds like white noise – the way white noise dampens other, more extreme sounds and smooths them out, so we can sleep in noisy environments.

And guess what? Just as AI learning systems need a little chaos so they can function correctly, humans might need it as well.

The overfitted brain hypothesis

About a year ago, we began hearing about a new theory about the function of dreams. Erik Hoel, a neuroscientist and assistant professor at Tufts University, has been working on the Overfitted Brain Hypothesis. Recently, he gave us a more layperson-friendly writeup here, at Inverse.

The way he puts it, humans’ lives are pretty dang repetitive. (They were probably even way more repetitive centuries and millennia ago!) And that can lead to us being exposed to only a very narrow data set of experiences that we can learn from.

So maybe our dreams are a way of introducing noise and chaos to our minds so we can retain information more effectively, make better connections between our ideas, and learn the patterns in the information we learned that day.

He’s got some other interesting ideas about ways to perhaps bring a “dreamlike” state to people who need to stay awake for long periods. That may prove to be its own wondrous game-changer. But it was this part about vivid, wacky dreams that caught my eye most of all.

Roll to Disbelieve: “Captain Cassidy” is Cassidy McGillicuddy, a Gen Xer and ex-Pentecostal. (The title is metaphorical.) She writes about the intersection of psychology, belief, popular culture, science,...
