SqueezeNet V1.1 Trained on ImageNet Competition Data

Identify the main object in an image

Released in 2016, SqueezeNet is a successful attempt to produce a high-performance image classification model using as few parameters as possible. It achieves good accuracy while keeping the benefits of a computationally cheap architecture, with far fewer parameters (about 1.2 million, for a trained size of roughly 5 MB) than similar nets. This is achieved by making heavy use of bottleneck (squeeze) 1x1 convolutions. ImageNet classes are mapped to Wolfram Language Entities through their unique WordNet IDs.
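For reference, a "Fire" module of the kind described above (a 1x1 squeeze convolution feeding parallel 1x1 and 3x3 expand convolutions) can be sketched in Wolfram Language as follows; the layer names and channel counts are illustrative, not extracted from the trained net:

```mathematica
fire[squeeze_, expand_] := NetGraph[
  <|
   "squeeze" -> ConvolutionLayer[squeeze, 1], (* 1x1 bottleneck *)
   "reluS" -> ElementwiseLayer[Ramp],
   "expand1x1" -> ConvolutionLayer[expand, 1],
   "expand3x3" -> ConvolutionLayer[expand, 3, "PaddingSize" -> 1],
   "relu1" -> ElementwiseLayer[Ramp],
   "relu3" -> ElementwiseLayer[Ramp],
   "cat" -> CatenateLayer[] (* join the two expand branches channelwise *)
  |>,
  {"squeeze" -> "reluS",
   "reluS" -> "expand1x1" -> "relu1",
   "reluS" -> "expand3x3" -> "relu3",
   {"relu1", "relu3"} -> "cat"}]
```

For instance, fire[16, 64] maps its input to 64 + 64 = 128 output channels while most of the computation passes through only 16 squeeze channels.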

Number of layers: 69 | Parameter count: 1,235,496 | Trained size: 5 MB

Training Set Information

Performance

Examples

Resource retrieval

Retrieve the resource object:

In[1]:=
ResourceObject["SqueezeNet V1.1 Trained on ImageNet Competition Data"]
Out[1]=

Get the pre-trained net:

In[2]:=
NetModel["SqueezeNet V1.1 Trained on ImageNet Competition Data"]
Out[2]=

Basic usage

Classify an image:

In[3]:=
CloudGet["https://www.wolframcloud.com/objects/920e196c-f162-4321-922a-61209496dfff"] (* Evaluate this cell to copy the example input from a cloud object *)
Out[3]=
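The cloud object above supplies the example image and its evaluation. In outline (with img standing in for the actual example image), it amounts to applying the net directly:

```mathematica
pred = NetModel["SqueezeNet V1.1 Trained on ImageNet Competition Data"][img]
```

The symbol pred used in the following cells is defined by this evaluation.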

The prediction is an Entity object, which can be queried:

In[4]:=
pred["Definition"]
Out[4]=

Get a list of available properties of the predicted Entity:

In[5]:=
pred["Properties"]
Out[5]=

Obtain the probabilities of the ten most likely entities predicted by the net:

In[6]:=
CloudGet["https://www.wolframcloud.com/objects/a8fd6f71-7fbf-4535-80f5-a917afb5e472"] (* Evaluate this cell to copy the example input from a cloud object *)
Out[6]=
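In outline, the hidden cell obtains these by passing a property specification as the second argument of the net (a sketch, again assuming an image img):

```mathematica
NetModel["SqueezeNet V1.1 Trained on ImageNet Competition Data"][
 img, {"TopProbabilities", 10}]
```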

An object outside the list of the ImageNet classes will be misidentified:

In[7]:=
CloudGet["https://www.wolframcloud.com/objects/3608982c-0405-4236-9d4d-1e9a89792362"] (* Evaluate this cell to copy the example input from a cloud object *)
Out[7]=

Obtain the list of names of all available classes:

In[8]:=
EntityValue[
 NetExtract[
   NetModel["SqueezeNet V1.1 Trained on ImageNet Competition Data"], 
   "Output"][["Labels"]], "Name"]
Out[8]=

Feature extraction

Remove the last three layers of the trained net so that the net produces a vector representation of an image:

In[9]:=
extractor = 
 Take[NetModel[
   "SqueezeNet V1.1 Trained on ImageNet Competition Data"], {1, -4}]
Out[9]=

Get a set of images:

In[10]:=
CloudGet["https://www.wolframcloud.com/objects/ea432996-8724-4c2f-8a7b-a9649d5af254"] (* Evaluate this cell to copy the example input from a cloud object *)

Visualize the features of a set of images:

In[11]:=
FeatureSpacePlot[imgs, FeatureExtractor -> extractor, 
 LabelingFunction -> (ImageResize[#, 100] &), ImageSize -> 800]
Out[11]=

Visualize convolutional weights

Extract the weights of the first convolutional layer in the trained net:

In[12]:=
weights = 
  NetExtract[
   NetModel[
    "SqueezeNet V1.1 Trained on ImageNet Competition Data"], {"conv1",
     "Weights"}];

Show the dimensions of the weights:

In[13]:=
Dimensions[weights]
Out[13]=

Visualize the weights as a list of 64 images of size 3x3:

In[14]:=
ImageAdjust[Image[#, Interleaving -> False]] & /@ weights
Out[14]=

Transfer learning

Use the pre-trained model to build a classifier for telling apart images of dogs and cats. Create a test set and a training set:

In[15]:=
CloudGet["https://www.wolframcloud.com/objects/632321fa-9a87-4a4b-b4d3-9f2bb8d89b67"] (* Evaluate this cell to copy the example input from a cloud object *)
In[16]:=
CloudGet["https://www.wolframcloud.com/objects/a223fdf7-2180-4031-88de-a72d8da89982"] (* Evaluate this cell to copy the example input from a cloud object *)
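NetTrain accepts training data as a list of rules from input to class label; the hidden cells build trainSet and testSet along these lines (a sketch with hypothetical image lists):

```mathematica
(* catTrainImgs, dogTrainImgs, etc. are hypothetical lists of images *)
trainSet = Join[Thread[catTrainImgs -> "cat"], Thread[dogTrainImgs -> "dog"]];
testSet = Join[Thread[catTestImgs -> "cat"], Thread[dogTestImgs -> "dog"]];
```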

Remove the last three layers from the pre-trained net:

In[17]:=
tempNet = 
 Take[NetModel[
   "SqueezeNet V1.1 Trained on ImageNet Competition Data"], {1, -4}]
Out[17]=

Create a new net composed of the pre-trained net followed by a linear layer and a softmax layer:

In[18]:=
newNet = NetChain[<|"pretrainedNet" -> tempNet, 
   "linearNew" -> LinearLayer[], "softmax" -> SoftmaxLayer[]|>, 
  "Output" -> NetDecoder[{"Class", {"cat", "dog"}}]]
Out[18]=

Train on the dataset, freezing all the weights except for those in the "linearNew" layer (use TargetDevice -> "GPU" for training on a GPU):

In[19]:=
trainedNet = 
 NetTrain[newNet, trainSet, 
  LearningRateMultipliers -> {"linearNew" -> 1, _ -> 0}]
Out[19]=

Perfect accuracy is obtained on the test set:

In[20]:=
ClassifierMeasurements[trainedNet, testSet, "Accuracy"]
Out[20]=

Net information

Inspect the number of parameters of all arrays in the net:

In[21]:=
NetInformation[
 NetModel["SqueezeNet V1.1 Trained on ImageNet Competition Data"],
 "ArraysElementCounts"]
Out[21]=

Obtain the total number of parameters:

In[22]:=
NetInformation[
 NetModel["SqueezeNet V1.1 Trained on ImageNet Competition Data"],
 "ArraysTotalElementCount"]
Out[22]=

Obtain the layer type counts:

In[23]:=
NetInformation[
 NetModel["SqueezeNet V1.1 Trained on ImageNet Competition Data"],
 "LayerTypeCounts"]
Out[23]=

Display the summary graphic:

In[24]:=
NetInformation[
 NetModel["SqueezeNet V1.1 Trained on ImageNet Competition Data"],
 "SummaryGraphic"]
Out[24]=

Export to MXNet

Export the net into a format that can be opened in MXNet:

In[25]:=
jsonPath = 
 Export[FileNameJoin[{$TemporaryDirectory, "net.json"}], 
  NetModel["SqueezeNet V1.1 Trained on ImageNet Competition Data"], 
  "MXNet"]
Out[25]=

Export also creates a net.params file containing parameters:

In[26]:=
paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}]
Out[26]=

Get the size of the parameter file:

In[27]:=
FileByteCount[paramPath]
Out[27]=

The size is similar to the byte count of the resource object:

In[28]:=
ResourceObject[
  "SqueezeNet V1.1 Trained on ImageNet Competition Data"]["ByteCount"]
Out[28]=

Represent the MXNet net as a graph:

In[29]:=
Import[jsonPath, {"MXNet", "NodeGraphPlot"}]
Out[29]=

Requirements

Wolfram Language 11.1 (March 2017) or above

Resource History

Reference

F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, K. Keutzer, "SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5MB Model Size," arXiv:1602.07360 (2016)