Wolfram Research

Function Repository Resource:

DeepDreamBeta


Render an image using the DeepDream-β algorithm

Contributed by: Aster Ctor (MoeNet)

ResourceFunction["DeepDreamBeta"][net,image]

renders the image using the DeepDream-β algorithm.

ResourceFunction["DeepDreamBeta"][net,image,step]

renders the image with an iteration depth of step.

Details and Options

ResourceFunction["DeepDreamBeta"] has the following options:

"Eyes"            2          network activation depth
"Activation"      Identity   pre-activation function applied to the image
"StepSize"        1          step size of each pattern-overlay iteration
Resampling        "Cubic"    works the same as in ImageResize
TargetDevice      "CPU"      works the same as in NetChain
WorkingPrecision  "Real32"   works the same as in NetChain
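For orientation, here is a sketch of a call that sets every option explicitly, assuming a dreamer network net and an image img defined as in the examples below; the values shown are the defaults, except for "StepSize", which is lowered for a smoother result:

```wolfram
ResourceFunction["DeepDreamBeta"][net, img, 25,
 "Eyes" -> 2,                  (* network activation depth *)
 "Activation" -> Identity,     (* pre-activation function for the image *)
 "StepSize" -> 0.5,            (* step size of each overlay iteration *)
 Resampling -> "Cubic",        (* passed through as in ImageResize *)
 TargetDevice -> "CPU",        (* passed through as in NetChain *)
 WorkingPrecision -> "Real32"] (* passed through as in NetChain *)
```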

Examples

Basic Examples (3) 

Take a neural network as the dreamer:

In[1]:=
VGG = NetModel["VGG-19 Trained on ImageNet Competition Data"];
VGG = NetTake[VGG, {1, "pool5"}]
Out[1]=

Start with an image:

In[2]:=
img = ExampleData[{"TestImage", "Tree"}]
Out[2]=

Now process it with DeepDreamBeta:

In[3]:=
ResourceFunction["DeepDreamBeta"][VGG, img]
Out[3]=

Scope (3) 

Take a neural network as the dreamer:

In[4]:=
VGG = NetModel["VGG-19 Trained on ImageNet Competition Data"];
VGG = NetTake[VGG, {1, "pool5"}]
Out[4]=

Start with an image:

In[5]:=
img = ExampleData[{"TestImage", "Tree"}]
Out[5]=

Use GPU acceleration:

In[6]:=
ResourceFunction["DeepDreamBeta"][VGG, img, 25, TargetDevice -> "GPU"]
Out[6]=

Options (10) 

Activation (3) 

Take a neural network as the dreamer:

In[7]:=
VGG = NetModel["VGG-19 Trained on ImageNet Competition Data"];
VGG = NetTake[VGG, {1, "pool5"}]
Out[7]=

Start with an image:

In[8]:=
img = ExampleData[{"TestImage", "Tree"}]
Out[8]=

Change the pre-activation function:

In[9]:=
ResourceFunction["DeepDreamBeta"][VGG, img, 25, TargetDevice -> "GPU",
  "Activation" -> LogisticSigmoid]
Out[9]=

Eyes (4) 

Take a neural network as the dreamer:

In[10]:=
VGG = NetModel["VGG-19 Trained on ImageNet Competition Data"];
VGG = NetTake[VGG, {1, "pool5"}]
Out[10]=

Start with an image:

In[11]:=
img = ExampleData[{"TestImage", "Tree"}]
Out[11]=

"Eyes" controls the size of the receptive field:

In[12]:=
ResourceFunction["DeepDreamBeta"][VGG, img, 25, TargetDevice -> "GPU",
  "Eyes" -> 1]
Out[12]=

A larger "Eyes" value gives a greater departure from the original picture:

In[13]:=
ResourceFunction["DeepDreamBeta"][VGG, img, 25, TargetDevice -> "GPU",
  "Eyes" -> 3]
Out[13]=

StepSize (3) 

Take a neural network as the dreamer:

In[14]:=
VGG = NetModel["VGG-19 Trained on ImageNet Competition Data"];
VGG = NetTake[VGG, {1, "pool5"}]
Out[14]=

Start with an image:

In[15]:=
img = ExampleData[{"TestImage", "Tree"}]
Out[15]=

The smaller the step size, the smoother the final result will be:

In[16]:=
move1 = ResourceFunction["DeepDreamBeta"][VGG, img, TargetDevice -> "GPU", "Eyes" -> 3, "StepSize" -> 1]
Out[16]=
In[17]:=
move2 = ResourceFunction["DeepDreamBeta"][VGG, img, TargetDevice -> "GPU", "Eyes" -> 3, "StepSize" -> 0.1]
Out[17]=

Properties and Relations (2) 

Non-residual networks tend to give better results.

Residual networks have difficulty activating abstract features:

In[18]:=
RES = NetModel["ResNet-50 Trained on ImageNet Competition Data"];
RES = NetTake[RES, {1, "5c"}]
Out[18]=

They tend to generate messy textures:

In[19]:=
ResourceFunction["DeepDreamBeta"][RES, img, 25, TargetDevice -> "GPU"]
Out[19]=
