# Wolfram Neural Net Repository

Immediate Computable Access to Neural Net Models

Identify the main action in a video

Released in 2019, this family of nets consists of three-dimensional (3D) versions of the original MobileNet V2 architecture. By introducing "inverted residual" structures, featuring shortcut connections between the thin bottleneck layers, these models improve on the performance of the previous 3D MobileNet models. By using 3D convolutions, they achieve much better video classification accuracies than their two-dimensional counterparts.

Number of models: 8

- Kinetics-600 dataset, containing 600 human action classes with at least 600 video clips per class, plus 50 validation and 100 test videos per class.
- Jester dataset, containing 148,092 gesture videos across 27 classes.

The models achieve the following accuracies on the validation sets of their respective training datasets.

Get the pre-trained net:

In[1]:= |

Out[1]= |
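The step above might look like the following; the resource name is an assumption based on the paper title and may differ from the exact repository entry:

```
(* name of the model family; assumed, check the repository for the exact string *)
modelName = "Resource Efficient 3D Convolutional Neural Networks";
net = NetModel[modelName]
```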

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:

In[2]:= |

Out[2]= |
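Parameterized NetModel families expose their parameters through the "ParametersInformation" property; a sketch, assuming the same resource name as above:

```
modelName = "Resource Efficient 3D Convolutional Neural Networks"; (* assumed name *)
NetModel[modelName, "ParametersInformation"]
```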

Pick a non-default net by specifying the parameters:

In[3]:= |

Out[3]= |
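A non-default variant is selected by passing parameter rules alongside the name; the parameter keys and values here ("Dataset", "Width") are illustrative assumptions:

```
modelName = "Resource Efficient 3D Convolutional Neural Networks"; (* assumed name *)
(* pick a specific member of the family; keys and values are assumed *)
NetModel[{modelName, "Dataset" -> "Jester", "Width" -> 1.5}]
```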

Pick a non-default uninitialized net:

In[4]:= |

Out[4]= |
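An uninitialized variant can be requested with the "UninitializedEvaluationNet" property; resource name and parameters are assumptions as above:

```
modelName = "Resource Efficient 3D Convolutional Neural Networks"; (* assumed name *)
NetModel[{modelName, "Width" -> 1.5}, "UninitializedEvaluationNet"]
```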

Identify the main action in a video:

In[5]:= |

In[6]:= |

Out[6]= |
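Evaluation on a clip could be sketched as follows; the file path is hypothetical and the resource name is assumed:

```
net = NetModel["Resource Efficient 3D Convolutional Neural Networks"]; (* assumed name *)
video = Video["path/to/clip.mp4"]; (* hypothetical clip *)
net[video]
```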

Obtain the probabilities of the 10 most likely entities predicted by the net:

In[7]:= |

Out[7]= |
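The top-10 class probabilities can be requested through the net's decoder, assuming net and video from the previous steps:

```
(* the "TopProbabilities" decoder specification returns the most likely classes *)
net[video, {"TopProbabilities", 10}]
```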

Obtain the list of names of all available classes:

In[8]:= |

Out[8]= |
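The class labels live in the net's output NetDecoder and can be read off with NetExtract, assuming net from the previous steps:

```
(* extract the labels from the "Class" decoder attached to the output *)
NetExtract[net, "Output"][["Labels"]]
```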

In addition to the depthwise separable convolutions introduced in MobileNet-3D V1, MobileNet-3D V2 models are characterized by inverted residual blocks. In classical residual blocks, the number of channels is decreased and later increased in the convolutional chain (bottleneck), and the residual connection connects feature maps with a large number of channels. On the other hand, inverted residual blocks first increase and then decrease the number of channels (inverted bottleneck), and the residual connection connects feature maps with a small number of channels:

In[9]:= |

Out[9]= |

The first convolution layer in this inverted residual block increases the number of channels from 24 to 144:

In[10]:= |

Out[10]= |

The last convolution layer decreases the number of channels from 144 back to 24:

In[11]:= |

Out[11]= |

Remove the last layers of the trained net so that the net produces a vector representation of a video:

In[12]:= |

Out[12]= |
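One way to obtain such a feature extractor is to drop the final classification layers with NetDrop; the number of layers to drop is an assumption about this net's structure:

```
net = NetModel["Resource Efficient 3D Convolutional Neural Networks"]; (* assumed name *)
(* drop the final linear and softmax layers; the count of 2 is assumed *)
extractor = NetDrop[net, -2]
```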

Get a set of videos:

In[13]:= |

Visualize the features of a set of videos:

In[14]:= |

Out[14]= |
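FeatureSpacePlot accepts the truncated net as a feature extractor; videos and extractor are assumed from the previous steps:

```
(* project the high-dimensional video features into 2D for visualization *)
FeatureSpacePlot[videos, FeatureExtractor -> extractor]
```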

Use the pre-trained model to build a classifier for telling apart videos from two action classes not present in the dataset. Create a test set and a training set:

In[15]:= |

In[16]:= |

In[17]:= |

Remove the linear layer from the pre-trained net:

In[18]:= |

Out[18]= |

Create a new net composed of the pre-trained net followed by a linear layer and a softmax layer:

In[19]:= |

Out[19]= |
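The composition could be sketched with NetChain, naming the new layer "Linear" so it can be referenced during training; the class labels are hypothetical and the resource name is assumed:

```
base = NetDrop[
  NetModel["Resource Efficient 3D Convolutional Neural Networks"], (* assumed name *)
  -2]; (* drop count assumed *)
newNet = NetChain[
  <|"base" -> base, "Linear" -> LinearLayer[2], "SoftMax" -> SoftmaxLayer[]|>,
  "Output" -> NetDecoder[{"Class", {"classA", "classB"}}] (* hypothetical labels *)
]
```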

Train on the dataset, freezing all the weights except for those in the "Linear" layer (use TargetDevice -> "GPU" for training on a GPU):

In[20]:= |

Out[20]= |
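Freezing is expressed with LearningRateMultipliers, where None keeps a layer's weights fixed; newNet and trainSet are assumed from the previous steps:

```
trained = NetTrain[newNet, trainSet,
  LearningRateMultipliers -> {"Linear" -> 1, _ -> None}
  (* add TargetDevice -> "GPU" to train on a GPU *)
]
```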

Perfect accuracy is obtained on the test set:

In[21]:= |

Out[21]= |
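Accuracy on the held-out set can be computed with NetMeasurements, assuming trained and testSet from the previous steps:

```
(* fraction of test examples classified correctly *)
NetMeasurements[trained, testSet, "Accuracy"]
```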

Inspect the number of parameters of all arrays in the net:

In[22]:= |

Out[22]= |

Obtain the total number of parameters:

In[23]:= |

Out[23]= |

Obtain the layer type counts:

In[24]:= |

Out[24]= |

Display the summary graphic:

In[25]:= |

Out[25]= |
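The inspection steps above can be sketched with standard NetInformation properties; the resource name is assumed:

```
net = NetModel["Resource Efficient 3D Convolutional Neural Networks"]; (* assumed name *)
counts = NetInformation[net, "ArraysElementCounts"]; (* parameters per array *)
Total[counts]                                        (* total parameter count *)
NetInformation[net, "LayerTypeCounts"]               (* layers grouped by type *)
NetInformation[net, "SummaryGraphic"]                (* summary graphic *)
```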

Export the net to the ONNX format:

In[26]:= |

Out[26]= |

Get the size of the ONNX file:

In[27]:= |

Out[27]= |

Check some metadata of the ONNX model:

In[28]:= |

Out[28]= |

Import the model back into the Wolfram Language. Note that the NetEncoder and NetDecoder will be absent, because they are not supported by ONNX:

In[29]:= |

Out[29]= |
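The ONNX round trip might look like the following; the metadata element name is an assumption about the ONNX importer, and the resource name is assumed as above:

```
net = NetModel["Resource Efficient 3D Convolutional Neural Networks"]; (* assumed name *)
Export["net.onnx", net];            (* export the net to ONNX *)
FileByteCount["net.onnx"]           (* size of the ONNX file in bytes *)
Import["net.onnx", "OpsetVersion"]  (* metadata; element name assumed *)
net2 = Import["net.onnx"]           (* encoder and decoder are not preserved *)
```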

- O. Köpüklü, N. Kose, A. Gunduz, G. Rigoll, "Resource Efficient 3D Convolutional Neural Networks," arXiv:1904.02422 (2019)
- Available from:
- Rights: MIT License