Wolfram Research

MobileNet V3 Trained on ImageNet Competition Data

Identify the main object in an image

Released in 2019 by researchers at Google, these models improve upon the performance of previous MobileNet versions. They introduce swish activation functions and squeeze-and-excitation modules in an efficient manner, and, as in their predecessors, the expansion layers use lightweight depthwise convolutions.
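For reference, the swish activation is swish(x) = x·σ(x); MobileNet V3 actually uses a cheaper piecewise-linear "hard" variant, h-swish(x) = x·ReLU6(x+3)/6. A minimal sketch (the function names swish and hswish are illustrative, not part of the model):

```wl
(* swish: x times the logistic sigmoid of x *)
swish[x_] := x*LogisticSigmoid[x]

(* hard swish: the piecewise-linear approximation used by MobileNet V3 *)
hswish[x_] := x*Clip[x + 3, {0, 6}]/6

(* the two curves nearly coincide *)
Plot[{swish[x], hswish[x]}, {x, -5, 5}, PlotLegends -> {"swish", "h-swish"}]
```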

Number of models: 4

Training Set Information

Performance

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["MobileNet V3 Trained on ImageNet Competition Data"]
Out[1]=

NetModel parameters

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:

In[2]:=
NetModel["MobileNet V3 Trained on ImageNet Competition Data", "ParametersInformation"]
Out[2]=

Pick a non-default net by specifying the parameters:

In[3]:=
NetModel[{"MobileNet V3 Trained on ImageNet Competition Data", "Size" -> "Small", "Width" -> 0.75}]
Out[3]=

Pick a non-default uninitialized net:

In[4]:=
NetModel[{"MobileNet V3 Trained on ImageNet Competition Data", "Size" -> "Small", "Width" -> 1}, "UninitializedEvaluationNet"]
Out[4]=

Basic usage

Classify an image:

In[5]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/83b2c58a-9e96-4523-8f60-e941f286e2d0"]
Out[5]=
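The hidden cell above applies the net directly to an image; schematically (assuming img holds the example photo), it has the form:

```wl
(* evaluate the net on an image; the output is decoded to an Entity *)
pred = NetModel["MobileNet V3 Trained on ImageNet Competition Data"][img]
```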

The prediction is an Entity object, which can be queried:

In[6]:=
pred["Definition"]
Out[6]=

Get a list of available properties of the predicted Entity:

In[7]:=
pred["Properties"]
Out[7]=

Obtain the probabilities of the 10 most likely entities predicted by the net:

In[8]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/897c908a-b350-407a-b771-952d4d0e63b6"]
Out[8]=
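Schematically, the hidden cell requests the "TopProbabilities" property of the net's output decoder (assuming img holds the example photo):

```wl
(* association of the 10 highest-probability classes and their probabilities *)
NetModel["MobileNet V3 Trained on ImageNet Competition Data"][img, {"TopProbabilities", 10}]
```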

An object outside the list of ImageNet classes will be misidentified:

In[9]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/e8e51261-9188-405b-b25a-dfd32ff5da51"]
Out[9]=

Obtain the list of names of all available classes:

In[10]:=
EntityValue[
 NetExtract[
   NetModel["MobileNet V3 Trained on ImageNet Competition Data"], "Output"][["Labels"]], "Name"]
Out[10]=

Feature extraction

Remove the last layers of the trained net so that the net produces a vector representation of an image:

In[11]:=
extractor = Take[NetModel[
   "MobileNet V3 Trained on ImageNet Competition Data"], {1, -5}]
Out[11]=

Get a set of images:

In[12]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/b07b0266-1b5a-4822-ae6a-2aff7d0e0f66"]

Visualize the features of a set of images:

In[13]:=
FeatureSpacePlot[imgs, FeatureExtractor -> extractor, LabelingSize -> 180, ImageSize -> Full]
Out[13]=

Visualize convolutional weights

Extract the weights of the first convolutional layer in the trained net:

In[14]:=
NetModel["MobileNet V3 Trained on ImageNet Competition Data"]
Out[14]=
In[15]:=
weights = NetExtract[
   NetModel[
    "MobileNet V3 Trained on ImageNet Competition Data"], {"conv", "conv", "Weights"}];

Show the dimensions of the weights:

In[16]:=
Dimensions[weights]
Out[16]=

Visualize the weights as a list of 16 images of size 3×3:

In[17]:=
ImageAdjust[Image[#, Interleaving -> False]] & /@ Normal[weights]
Out[17]=

Transfer learning

Use the pre-trained model to build a classifier for telling apart images of motorcycles and bicycles. Create a test set and a training set:

In[18]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/aad043db-52ab-4113-90b6-ce5cbc546349"]
In[19]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/b8489842-a063-4a25-beab-d84fbd667cdd"]
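Cells like the two above typically pair each image with its class label, since NetTrain accepts a list of input -> class rules. A sketch (bikeImgs and motoImgs are hypothetical lists of images):

```wl
(* build a training set as a list of image -> label rules *)
trainSet = Join[Thread[bikeImgs -> "bicycle"], Thread[motoImgs -> "motorcycle"]];
```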

Remove the last layers from the pre-trained net:

In[20]:=
tempNet = Take[NetModel[
   "MobileNet V3 Trained on ImageNet Competition Data"], {1, -5}]
Out[20]=

Create a new net composed of the pre-trained net followed by a linear layer and a softmax layer:

In[21]:=
newNet = NetChain[<|"pretrainedNet" -> tempNet, "linearNew" -> LinearLayer[], "softmax" -> SoftmaxLayer[]|>, "Output" -> NetDecoder[{"Class", {"bicycle", "motorcycle"}}]]
Out[21]=

Train on the dataset, freezing all the weights except for those in the "linearNew" layer (use TargetDevice -> "GPU" for training on a GPU):

In[22]:=
trainedNet = NetTrain[newNet, trainSet, MaxTrainingRounds -> 20, LearningRateMultipliers -> {"linearNew" -> 1, _ -> 0}]
Out[22]=

Perfect accuracy is obtained on the test set:

In[23]:=
ClassifierMeasurements[trainedNet, testSet, "Accuracy"]
Out[23]=

Net information

Inspect the number of parameters of all arrays in the net:

In[24]:=
Information[NetModel["MobileNet V3 Trained on ImageNet Competition Data"], "ArraysElementCounts"]
Out[24]=

Obtain the total number of parameters:

In[25]:=
Information[NetModel["MobileNet V3 Trained on ImageNet Competition Data"], "ArraysTotalElementCount"]
Out[25]=

Obtain the layer type counts:

In[26]:=
Information[NetModel["MobileNet V3 Trained on ImageNet Competition Data"], "LayerTypeCounts"]
Out[26]=

Export to MXNet

Export the net into a format that can be opened in MXNet:

In[27]:=
jsonPath = Export[FileNameJoin[{$TemporaryDirectory, "net.json"}], NetModel["MobileNet V3 Trained on ImageNet Competition Data"], "MXNet"]
Out[27]=

Export also creates a net.params file containing parameters:

In[28]:=
paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}]
Out[28]=

Get the size of the parameter file:

In[29]:=
FileByteCount[paramPath]
Out[29]=

Resource History

Reference