DenseNet-121 Trained on ImageNet Competition Data

Identify the main object in an image

This model introduces the Dense Convolutional Network (DenseNet) paradigm, which connects each layer to every other layer in a feed-forward fashion. Each layer takes the feature maps of all preceding layers as input, and its own feature maps are used as inputs to all subsequent layers. Rather than using explicit dense connections, this implementation achieves the same result with ordinary skip connections and concatenations.
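The dense connectivity pattern can be illustrated with a minimal, framework-agnostic sketch (Python/NumPy; the layer count, growth rate and random channel mixing below are illustrative stand-ins, not the actual DenseNet-121 weights): each layer receives the channel-wise concatenation of all previously produced feature maps and contributes a fixed number of new channels.

```python
import numpy as np

def dense_block(x, num_layers=6, growth_rate=32, rng=None):
    """Illustrative dense block: each layer sees the concatenation
    of the input and all previously produced feature maps."""
    rng = rng or np.random.default_rng(0)
    features = [x]  # list of (channels, height, width) arrays
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)  # all prior maps as input
        # stand-in for BN + ReLU + 3x3 conv: a random channel mixing
        w = rng.standard_normal((growth_rate, inp.shape[0]))
        out = np.einsum('oc,chw->ohw', w, inp)
        features.append(out)
    return np.concatenate(features, axis=0)

x = np.ones((64, 56, 56))  # e.g. the 64-channel output of the network stem
y = dense_block(x)
print(y.shape)  # channels grow to 64 + 6*32 = 256
```

The key point is that the channel count grows linearly with depth inside a block, since every layer's output is kept and concatenated rather than replaced.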

Number of layers: 427 | Parameter count: 8,062,504 | Trained size: 34 MB

Training Set Information

Performance

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["DenseNet-121 Trained on ImageNet Competition Data"]
Out[1]=

Basic usage

Classify an image:

In[2]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/673c76a6-119e-4da4-8fab-71b4a76aad0b"]
Out[2]=

The prediction is an Entity object, which can be queried:

In[3]:=
pred["Definition"]
Out[3]=

Get a list of available properties of the predicted Entity:

In[4]:=
pred["Properties"]
Out[4]=

Obtain the probabilities of the 10 most likely entities predicted by the net:

In[5]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/020f0303-0a1f-4e10-9bb4-482f03c64ea5"]
Out[5]=

An object outside the list of ImageNet classes will be misidentified:

In[6]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/7744e97d-3442-4657-8d03-be4dad94c906"]
Out[6]=

Obtain the list of names of all available classes:

In[7]:=
EntityValue[
 NetExtract[
   NetModel["DenseNet-121 Trained on ImageNet Competition Data"], "Output"][["Labels"]], "Name"]
Out[7]=

Feature extraction

Remove the last three layers of the trained net so that the net produces a vector representation of an image:

In[8]:=
extractor = NetDrop[NetModel[
   "DenseNet-121 Trained on ImageNet Competition Data"], -3]
Out[8]=

Get a set of images:

In[9]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/5f89a63c-fd8f-4512-b11d-2808618e3d32"]

Visualize the features of a set of images:

In[10]:=
FeatureSpacePlot[imgs, FeatureExtractor -> extractor, LabelingSize -> 100, ImageSize -> 600]
Out[10]=

Visualize convolutional weights

Extract the weights of the first convolutional layer in the trained net:

In[11]:=
weights = NetExtract[
   NetModel[
    "DenseNet-121 Trained on ImageNet Competition Data"], {"1", "Weights"}];

Show the dimensions of the weights:

In[12]:=
Dimensions[weights]
Out[12]=

Visualize the weights as a list of 64 images of size 7×7:

In[13]:=
ImageAdjust[Image[#, Interleaving -> False]] & /@ Normal[weights]
Out[13]=
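As a sanity check, assuming the weight dimensions shown above are {64, 3, 7, 7} (64 filters of size 7×7 over 3 color channels), the element count of this layer follows directly:

```python
# Assumed dimensions of the first convolutional layer's weight tensor
out_channels, in_channels, kh, kw = 64, 3, 7, 7
print(out_channels * in_channels * kh * kw)  # 9408 weight elements
```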

Transfer learning

Use the pre-trained model to build a classifier for telling apart images of sunflowers and roses. Create a test set and a training set:

In[14]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/c62ce7f6-5192-4584-9405-6796ff8499f1"]
In[15]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/5618fe9d-0718-447c-abe2-1b3e7799274f"]

Remove the last three layers from the pre-trained net:

In[16]:=
tempNet = NetDrop[NetModel[
   "DenseNet-121 Trained on ImageNet Competition Data"], -3]
Out[16]=

Create a new net composed of the pre-trained net followed by an aggregation layer, a linear layer and a softmax layer:

In[17]:=
newNet = NetChain[<|
   "pretrainedNet" -> tempNet,
   "Agg" -> AggregationLayer[Mean],
   "linear" -> LinearLayer[],
   "softmax" -> SoftmaxLayer[]|>,
  "Output" -> NetDecoder[{"Class", {"sunflower", "rose"}}]]
Out[17]=
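AggregationLayer[Mean] performs global average pooling: it collapses each feature map to a single number by averaging over the spatial dimensions, yielding a fixed-length vector regardless of the spatial size of the input. A minimal NumPy sketch (the 1024×7×7 shape assumed here matches DenseNet-121's final feature maps for a 224×224 input):

```python
import numpy as np

# Hypothetical feature maps: channels x height x width
features = np.random.default_rng(0).standard_normal((1024, 7, 7))
vector = features.mean(axis=(1, 2))  # average over the spatial dimensions
print(vector.shape)  # (1024,) -- one value per channel
```

The linear layer that follows then only needs to map this 1024-dimensional vector to the two target classes.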

Train on the dataset, freezing all the weights except for those in the "linear" layer (use TargetDevice -> "GPU" to train on a GPU):

In[18]:=
trainedNet = NetTrain[newNet, trainSet, LearningRateMultipliers -> {"linear" -> 1, _ -> 0}]
Out[18]=

Accuracy obtained on the test set:

In[19]:=
ClassifierMeasurements[trainedNet, testSet, "Accuracy"]
Out[19]=

Net information

Inspect the number of parameters of all arrays in the net:

In[20]:=
Information[NetModel["DenseNet-121 Trained on ImageNet Competition Data"], "ArraysElementCounts"]
Out[20]=

Obtain the total number of parameters:

In[21]:=
Information[NetModel["DenseNet-121 Trained on ImageNet Competition Data"], "ArraysTotalElementCount"]
Out[21]=

Obtain the layer type counts:

In[22]:=
Information[NetModel["DenseNet-121 Trained on ImageNet Competition Data"], "LayerTypeCounts"]
Out[22]=

Display the summary graphic:

In[23]:=
Information[NetModel["DenseNet-121 Trained on ImageNet Competition Data"], "SummaryGraphic"]
Out[23]=

Export to ONNX

Export the net to the ONNX format:

In[24]:=
onnxFile = Export[FileNameJoin[{$TemporaryDirectory, "net.onnx"}], NetModel["DenseNet-121 Trained on ImageNet Competition Data"]]
Out[24]=

Get the size of the ONNX file:

In[25]:=
FileByteCount[onnxFile]
Out[25]=

The size is similar to the byte count of the resource object:

In[26]:=
ResourceObject[
  "DenseNet-121 Trained on ImageNet Competition Data"]["ByteCount"]
Out[26]=

Check some metadata of the ONNX model:

In[27]:=
{opsetVersion, irVersion} = {Import[onnxFile, "OperatorSetVersion"], Import[onnxFile, "IRVersion"]}
Out[27]=

Import the model back into the Wolfram Language. The NetEncoder and NetDecoder will be absent, because they are not supported by ONNX:

In[28]:=
Import[onnxFile]
Out[28]=

Requirements

Wolfram Language 12.2 (December 2020) or above

Resource History

Reference