Wolfram Research

Function Repository Resource:

DeepDreamBeta


Render an image using the DeepDream-β algorithm

Contributed by: Aster Ctor

ResourceFunction["DeepDreamBeta"][net,image]

renders the image using the DeepDream-β algorithm.

ResourceFunction["DeepDreamBeta"][net,image,step]

renders the image with an iteration depth of step.

Details and Options

The algorithm has the following options:

  "Eyes"            2         network activation depth
  "Activation"      Identity  pre-activation function applied to the image
  "StepSize"        1         step size of each pattern overlay iteration
  Resampling        "Cubic"   works the same as in ImageResize
  TargetDevice      "CPU"     works the same as in NetChain
  WorkingPrecision  "Real32"  works the same as in NetChain
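As a sketch of how the options combine in one call (the iteration depth 25 and the option values here are illustrative choices, not defaults; net and image are placeholders from the usage lines above):

```wl
(* illustrative call: explicit iteration depth plus custom
   overlay step size and resampling method *)
ResourceFunction["DeepDreamBeta"][net, image, 25,
  "Eyes" -> 2,
  "StepSize" -> 0.5,
  Resampling -> "Cubic"]
```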

Examples

Basic Examples

Take a neural network as the dreamer:

In[1]:=
VGG = NetModel["VGG-19 Trained on ImageNet Competition Data"];
VGG = NetTake[VGG, {1, "pool5"}]
Out[2]=

Start with an image:

In[3]:=
img = ExampleData[{"TestImage", "Tree"}]
Out[3]=

Now process it with DeepDreamBeta:

In[4]:=
ResourceFunction["DeepDreamBeta"][VGG, img]
Out[4]=
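The iteration depth can also be given explicitly as a third argument (25 here is an arbitrary example value):

```wl
(* same call with an explicit iteration depth *)
ResourceFunction["DeepDreamBeta"][VGG, img, 25]
```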

Scope

Use GPU acceleration:

In[5]:=
ResourceFunction["DeepDreamBeta"][VGG, img, 25, TargetDevice -> "GPU"]
Out[5]=

Options

Activation

Change the pre-activation function:

In[6]:=
ResourceFunction["DeepDreamBeta"][VGG, img, 25, TargetDevice -> "GPU",
  "Activation" -> LogisticSigmoid]
Out[6]=

Eyes

"Eyes" controls the size of the receptive field:

In[7]:=
ResourceFunction["DeepDreamBeta"][VGG, img, 25, TargetDevice -> "GPU",
  "Eyes" -> 1]
Out[7]=

More eyes give a greater departure from the original picture:

In[8]:=
ResourceFunction["DeepDreamBeta"][VGG, img, 25, TargetDevice -> "GPU",
  "Eyes" -> 3]
Out[8]=

StepSize

The smaller the step, the smoother the final result will be:

In[9]:=
move1 = ResourceFunction["DeepDreamBeta"][VGG, img, TargetDevice -> "GPU", "Eyes" -> 3, "StepSize" -> 1]
Out[9]=
In[10]:=
move2 = ResourceFunction["DeepDreamBeta"][VGG, img, TargetDevice -> "GPU", "Eyes" -> 3, "StepSize" -> 0.1]
Out[10]=
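The two step-size results can be compared side by side with a standard GraphicsRow display (not part of the resource function itself):

```wl
(* show the StepSize -> 1 and StepSize -> 0.1 results together *)
GraphicsRow[{move1, move2}, ImageSize -> Large]
```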

Properties and Relations

A network with residual activations makes it difficult to activate abstract features; non-residual networks give better results:

In[11]:=
RES = NetModel["ResNet-50 Trained on ImageNet Competition Data"];
RES = NetTake[RES, {1, "5c"}]
Out[12]=

The residual network only generates messy textures:

In[13]:=
ResourceFunction["DeepDreamBeta"][RES, img, 25, TargetDevice -> "GPU"]
Out[13]=
