The Surreal Dreams of Google’s Artificial Intelligence


Google’s artificial neural network has some explaining to do. Image by Michael Tyka/Google

Google’s servers don’t just serve up your data; apparently, they dream too. According to a blog post by two Google software engineers and an intern, Google’s artificial neural networks (ANNs), stacked layers of artificial neurons run on computers, are used to process images for Google Images.

Google’s programmers teach an ANN what a fork is by showing it millions of pictures of forks, designating each one as an example of what a fork looks like. Each of the network’s 10 to 30 layers extracts progressively more complex information from the picture, from edges to shapes to, finally, the idea of a fork. Eventually, the neural network understands that a fork has a handle and two to four tines, and if there are any errors, the team corrects what the computer is misreading and tries again.
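To make that concrete, here is a minimal sketch of the kind of supervised training loop being described, written in PyTorch rather than Google’s own (unpublished) code. The model choice, the `fork_loader` dataset, and every parameter here are illustrative assumptions, not details from the post.

```python
import torch
import torch.nn as nn
from torchvision import models

# A deep, multi-layer image classifier; GoogLeNet is chosen only because the
# "Inceptionism" work used a network of this family. Weights start random.
model = models.googlenet(num_classes=1000, aux_logits=False, init_weights=True)
loss_fn = nn.CrossEntropyLoss()            # penalizes wrong labels ("that's not a fork")
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# `fork_loader` is a hypothetical DataLoader yielding (image, label) batches
# drawn from millions of labeled pictures.
for images, labels in fork_loader:
    optimizer.zero_grad()
    predictions = model(images)            # layers extract edges -> shapes -> "fork"
    loss = loss_fn(predictions, labels)    # measure what the network misread...
    loss.backward()
    optimizer.step()                       # ...and correct it, then try again
```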

The Google team realized that the same process used to discern images could be used to generate images as well. The logic holds: if you know what a fork looks like, you can, in principle, draw a fork. So here is what the network first came up with:
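The "drawing" step can be sketched the same way: freeze a trained classifier and adjust the pixels of an image, instead of the network’s weights, until the target class score goes up. Again, this is an illustrative PyTorch sketch; the untrained `model` stand-in and the fork class index are assumptions, not Google’s actual setup.

```python
import torch
from torchvision import models

# Stand-in for a fully trained classifier (in practice you'd load real weights).
model = models.googlenet(num_classes=1000, aux_logits=False, init_weights=True).eval()
FORK_CLASS = 42                            # hypothetical index of the "fork" class

# Start from random pixels and optimize the image itself, not the weights.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    score = model(image)[0, FORK_CLASS]    # how fork-like does the network find it?
    (-score).backward()                    # gradient ascent on the fork score
    optimizer.step()
```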


This is what Google’s neural network thinks animals and objects look like. Image by Michael Tyka/Google

Clearly, even when shown millions of photos, the computer couldn’t come up with a perfect Platonic form of an object. For instance, when asked to create a dumbbell, the computer depicted long, stringy arm-things stretching from the dumbbell shapes. Since arms were often found in pictures of dumbbells, the computer thought that sometimes dumbbells had arms.


Google’s artificial neural network’s take on what a dumbbell looks like. Image by Michael Tyka/Google

So the Google team took things a step further, using the ANN to amplify the patterns it saw in pictures. Each artificial neural layer works at a different level of abstraction, meaning some picked up edges based on tiny levels of contrast, while others found shapes and colors. They ran this process to accentuate color and form, and then told the network to go buck wild and keep accentuating anything it recognized. So if a cloud looked a little like a bird, the network would apply its idea of a bird over and over again in small iterations.
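In code, that "keep accentuating anything you recognize" step can be approximated by gradient ascent on the activations of a chosen layer, rather than on a single class score. The sketch below is one common way to do this in PyTorch; the layer choice, step size, and iteration count are guesses, not details from the post.

```python
import torch
from torchvision import models

# Stand-in trained network and stand-in photograph.
model = models.googlenet(num_classes=1000, aux_logits=False, init_weights=True).eval()
photo = torch.rand(1, 3, 224, 224)

# Capture the activations of one mid-level layer as the image flows through.
activations = {}
model.inception4c.register_forward_hook(lambda m, i, out: activations.update(value=out))

image = photo.clone().requires_grad_(True)
for _ in range(20):
    model(image)
    loss = activations["value"].norm()     # "more of whatever this layer sees"
    loss.backward()
    with torch.no_grad():
        image += 0.01 * image.grad / image.grad.abs().mean()  # small, repeated nudges
        image.grad.zero_()
```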


The product of an artificial neural network being asked to amplify the patterns it recognizes in an image. Image by Michael Tyka/Google

Oddly enough, patterns emerged: stones and trees often became buildings, while leaves often became birds and insects.


Google’s artificial neural network often found similar patterns in images of rocks or trees. Image by Michael Tyka/Google

Researchers then fed the picture the network produced back in as the new picture to process, creating an iterative loop with a small zoom at each step, and soon the network began to create an “endless stream of new impressions.” When started from white noise, the network would produce images purely of its own design. They call these images the neural network’s “dreams”: completely original representations of a computer’s mind, derived from real-world objects.
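The feedback loop itself is simple: take the dreamed frame, zoom in slightly, and hand it back to the same amplification step. A sketch, assuming a `dream_step` function standing in for the amplification loop above:

```python
import torch
import torch.nn.functional as F

def dream_step(frame):
    # Placeholder for the amplification loop sketched earlier; a real
    # implementation would return the pattern-boosted frame.
    return frame

frame = torch.randn(1, 3, 224, 224)        # start from white noise
for _ in range(100):
    frame = dream_step(frame)              # amplify whatever the network sees
    frame = F.interpolate(frame, scale_factor=1.05, mode="bilinear")
    off = (frame.shape[2] - 224) // 2      # center-crop back to size: a slight zoom in
    frame = frame[:, :, off:off + 224, off:off + 224]
```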


The product of an artificial neural network being asked to amplify and pull patterns out of white noise. Image by Michael Tyka/Google

Amazing, aren’t they? And you can be sure that Google will go even further (and we already have a video about that). In the meantime, here are some more of the artificial network’s dreams. See Google’s Inceptionism Gallery for hi-res versions of the images above and below, and more.


via PopSci, Google Research

