MobileNet-3D V2 Trained on Video Datasets

Identify the main action in a video

Released in 2019, this family of nets consists of three-dimensional (3D) versions of the original MobileNet V2 architecture. By introducing "inverted residual" structures, featuring shortcut connections between thin bottleneck layers, these models improve on the performance of the earlier 3D MobileNet models. Thanks to their 3D convolutions, they achieve much higher video classification accuracies than their two-dimensional counterparts.

Number of models: 8

Training Set Information

Performance

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["MobileNet-3D V2 Trained on Video Datasets"]
Out[1]=

NetModel parameters

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:

In[2]:=
NetModel["MobileNet-3D V2 Trained on Video Datasets", "ParametersInformation"]
Out[2]=

Pick a non-default net by specifying the parameters:

In[3]:=
NetModel[{"MobileNet-3D V2 Trained on Video Datasets", "Dataset" -> "Kinetics", "Width" -> 0.7}]
Out[3]=

Pick a non-default uninitialized net:

In[4]:=
NetModel[{"MobileNet-3D V2 Trained on Video Datasets", "Dataset" -> "Kinetics", "Width" -> 0.7}]
Out[4]=

Basic usage

Identify the main action in a video:

In[5]:=
bbq = ResourceData["Sample Video: Barbecuing"];
In[6]:=
NetModel["MobileNet-3D V2 Trained on Video Datasets"][bbq]
Out[6]=

Obtain the probabilities of the 10 most likely classes predicted by the net:

In[7]:=
NetModel["MobileNet-3D V2 Trained on Video Datasets"][bbq, {"TopProbabilities", 10}]
Out[7]=

Obtain the list of names of all available classes:

In[8]:=
NetExtract[NetModel["MobileNet-3D V2 Trained on Video Datasets"], "Output"][["Labels"]]
Out[8]=
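
The label list can also be inspected programmatically, for instance by counting the classes or searching them by substring. This is a minimal sketch assuming the labels are returned as a plain list of strings, as extracted above:

(* Count the classes and look up labels mentioning "barbecue" *)
labels = NetExtract[
   NetModel["MobileNet-3D V2 Trained on Video Datasets"], "Output"][["Labels"]];
Length[labels]
Select[labels, StringContainsQ[#, "barbecue", IgnoreCase -> True] &]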

Network architecture

In addition to the depthwise separable convolutions introduced in MobileNet-3D V1, MobileNet-3D V2 models are characterized by inverted residual blocks. In classical residual blocks, the convolutional chain first decreases and then increases the number of channels (bottleneck), and the shortcut connection joins feature maps with a large number of channels. Inverted residual blocks, on the other hand, first increase and then decrease the number of channels (inverted bottleneck), and the shortcut connection joins feature maps with a small number of channels:

In[9]:=
NetExtract[
 NetModel["MobileNet-3D V2 Trained on Video Datasets"], "block3"]
Out[9]=

The first convolution layer in this inverted residual block increases the number of channels from 24 to 144:

In[10]:=
NetExtract[
 NetModel["MobileNet-3D V2 Trained on Video Datasets"], {"block3", "conv1"}]
Out[10]=

The last convolution layer decreases the number of channels from 144 back to 24:

In[11]:=
NetExtract[
 NetModel["MobileNet-3D V2 Trained on Video Datasets"], {"block3", "conv3"}]
Out[11]=
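
For illustration, a block with the same expand-depthwise-project pattern (24 → 144 → 24 channels) can be assembled by hand. The following is a simplified sketch, not the exact block from the trained net: the layer names, input dimensions and ReLU6 activation are assumptions for the example, and the batch normalization layers of the real architecture are omitted:

(* A minimal 3D inverted residual block *)
invertedResidual = NetGraph[
  <|
   "expand" -> ConvolutionLayer[144, {1, 1, 1}],       (* pointwise: 24 -> 144 channels *)
   "relu1" -> ElementwiseLayer[Min[Max[#, 0.], 6.] &], (* ReLU6 nonlinearity *)
   "depthwise" -> ConvolutionLayer[144, {3, 3, 3},
     "ChannelGroups" -> 144, PaddingSize -> 1],        (* 3D depthwise convolution *)
   "relu2" -> ElementwiseLayer[Min[Max[#, 0.], 6.] &],
   "project" -> ConvolutionLayer[24, {1, 1, 1}],       (* pointwise: 144 -> 24 channels *)
   "add" -> ThreadingLayer[Plus]                       (* shortcut between thin layers *)
   |>,
  {NetPort["Input"] -> "expand" -> "relu1" -> "depthwise" -> "relu2" -> "project" -> "add",
   NetPort["Input"] -> "add"},
  "Input" -> {24, 16, 28, 28}                          (* channels, frames, height, width *)
  ]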

Feature extraction

Remove the last layers of the trained net so that the net produces a vector representation of a video:

In[12]:=
extractor = NetTake[NetModel[
   "MobileNet-3D V2 Trained on Video Datasets"], {1, -3}]
Out[12]=

Get a set of videos:

In[13]:=
videos = Join[ResourceData["Tooth Brushing Video Samples"], ResourceData["Cheerleading Video Samples"]];

Visualize the features of a set of videos:

In[14]:=
FeatureSpacePlot[videos, FeatureExtractor -> extractor, LabelingFunction -> (Callout[
     Thumbnail@VideoExtractFrames[#1, Quantity[1, "Frames"]]] &), LabelingSize -> 50, ImageSize -> 600]
Out[14]=
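
The extractor can also be applied to a single video to inspect the raw feature vector. This sketch reuses the bbq video from the "Basic usage" section; the exact vector length depends on the chosen net width:

(* Dimensions of the feature vector for a single video *)
Dimensions[extractor[bbq]]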

Transfer learning

Use the pre-trained model to build a classifier for telling apart videos from two action classes not present in the dataset. Create a test set and a training set:

In[15]:=
videos = <|
   ResourceData["Sample Video: Reading a Book"] -> "reading book", ResourceData["Sample Video: Blowing Glitter"] -> "blowing glitter"|>;
In[16]:=
dataset = Join @@ KeyValueMap[
    Thread[
      VideoSplit[#1, Most@Table[
          Quantity[i, "Frames"], {i, 16, Information[#1, "FrameCount"][[1]], 16}]] -> #2] &,
    videos
    ];
In[17]:=
{train, test} = ResourceFunction["TrainTestSplit"][dataset, "TrainingSetSize" -> 0.7];
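
As a quick check, verify how many clips ended up in each split (the exact counts depend on the videos' frame counts):

(* Number of training and test examples *)
Length /@ {train, test}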

Remove the final linear and softmax layers from the pre-trained net:

In[18]:=
tempNet = NetTake[NetModel[
   "MobileNet-3D V2 Trained on Video Datasets"], {1, -3}]
Out[18]=

Create a new net composed of the pre-trained net followed by a linear layer and a softmax layer:

In[19]:=
newNet = NetJoin[tempNet, NetChain[{"Linear" -> LinearLayer[], "SoftMax" -> SoftmaxLayer[]}], "Output" -> NetDecoder[{"Class", {"blowing glitter", "reading book"}}]]
Out[19]=

Train on the dataset, freezing all the weights except for those in the "Linear" layer (use TargetDevice -> "GPU" for training on a GPU):

In[20]:=
trainedNet = NetTrain[newNet, train, LearningRateMultipliers -> {"Linear" -> 1, _ -> 0}, ValidationSet -> Scaled[0.1], MaxTrainingRounds -> 5]
Out[20]=

Perfect accuracy is obtained on the test set:

In[21]:=
ClassifierMeasurements[trainedNet, test, "Accuracy"]
Out[21]=
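
Beyond the plain accuracy, ClassifierMeasurements can produce other diagnostics, for example a confusion matrix over the two action classes:

ClassifierMeasurements[trainedNet, test, "ConfusionMatrixPlot"]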

Net information

Inspect the number of parameters of all arrays in the net:

In[22]:=
Information[
 NetModel["MobileNet-3D V2 Trained on Video Datasets"], "ArraysElementCounts"]
Out[22]=

Obtain the total number of parameters:

In[23]:=
Information[
 NetModel["MobileNet-3D V2 Trained on Video Datasets"], "ArraysTotalElementCount"]
Out[23]=

Obtain the layer type counts:

In[24]:=
Information[
 NetModel["MobileNet-3D V2 Trained on Video Datasets"], "LayerTypeCounts"]
Out[24]=

Display the summary graphic:

In[25]:=
Information[
 NetModel["MobileNet-3D V2 Trained on Video Datasets"], "SummaryGraphic"]
Out[25]=

Export to ONNX

Export the net to the ONNX format:

In[26]:=
onnxFile = Export[FileNameJoin[{$TemporaryDirectory, "net.onnx"}], NetModel["MobileNet-3D V2 Trained on Video Datasets"]]
Out[26]=

Get the size of the ONNX file:

In[27]:=
FileByteCount[onnxFile]
Out[27]=

Check some metadata of the ONNX model:

In[28]:=
{opsetVersion, irVersion} = {Import[onnxFile, "OperatorSetVersion"], Import[onnxFile, "IRVersion"]}
Out[28]=

Import the model back into the Wolfram Language. Note that the NetEncoder and NetDecoder will be absent, because they are not supported by the ONNX format:

In[29]:=
Import[onnxFile]
Out[29]=
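
The original coders can be restored after import. This is a hedged sketch that assumes the imported net exposes the same "Input" and "Output" port names:

(* Re-attach the original video encoder and class decoder;
   the port names of the imported net are an assumption here *)
imported = Import[onnxFile];
NetReplacePart[imported,
 {"Input" -> NetExtract[
    NetModel["MobileNet-3D V2 Trained on Video Datasets"], "Input"],
  "Output" -> NetExtract[
    NetModel["MobileNet-3D V2 Trained on Video Datasets"], "Output"]}]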
