EfficientNet-V2 Trained on ImageNet Competition Data

Identify the main object in an image

Released in 2021, this family of models improves both the training speed and the parameter efficiency of the original EfficientNet family. Compared to the original, EfficientNet-V2 avoids depthwise convolutions and squeeze-and-excitation blocks in the early layers and uses smaller kernel sizes and smaller expansion ratios in its mobile blocks. The models are also trained with an improved progressive learning technique that increases both training speed and accuracy. Overall, EfficientNet-V2 achieves up to 11x faster training and up to 6.8x better parameter efficiency than prior art on the ImageNet, CIFAR, Cars and Flowers datasets.

Number of models: 7

Training Set Information

Performance

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["EfficientNet-V2 Trained on ImageNet Competition Data"]
Out[1]=

NetModel parameters

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:

In[2]:=
NetModel["EfficientNet-V2 Trained on ImageNet Competition Data", "ParametersInformation"]
Out[2]=

Pick a non-default net by specifying the parameters:

In[3]:=
NetModel[{"EfficientNet-V2 Trained on ImageNet Competition Data", "Architecture" -> "B0"}]
Out[3]=

Pick a non-default uninitialized net:

In[4]:=
NetModel[{"EfficientNet-V2 Trained on ImageNet Competition Data", "Architecture" -> "B0"}, "UninitializedEvaluationNet"]
Out[4]=

Basic usage

Classify an image:

In[5]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/176c8658-0030-43cc-bd84-5825fa715e55"]
Out[5]=
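The hidden cloud cell evaluates the net on a sample image. A minimal sketch of the same step, assuming any photograph as input (here a hypothetical sample taken from ExampleData):

```wolfram
(* img stands for the example photo used above; any image works *)
img = ExampleData[{"TestImage", "House"}];
pred = NetModel["EfficientNet-V2 Trained on ImageNet Competition Data"][img]
```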

The prediction is an Entity object, which can be queried:

In[6]:=
pred["Definition"]
Out[6]=

Get a list of available properties of the predicted Entity:

In[7]:=
pred["Properties"]
Out[7]=

Obtain the probabilities of the 10 most likely entities predicted by the net:

In[8]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/85377d40-587b-40e0-8487-15ce4a72161f"]
Out[8]=
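The hidden cell uses the net's built-in property syntax to rank the classes. A sketch, assuming `img` holds the image being classified:

```wolfram
(* "TopProbabilities" returns the n most likely classes with their probabilities *)
NetModel["EfficientNet-V2 Trained on ImageNet Competition Data"][img, {"TopProbabilities", 10}]
```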

An object that is not among the ImageNet classes will be misidentified:

In[9]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/0f2aa57c-8166-412b-bb2d-6e15ce66d212"]
Out[9]=

Obtain the list of names of all available classes:

In[10]:=
EntityValue[
  NetExtract[
    NetModel["EfficientNet-V2 Trained on ImageNet Competition Data"], "Output"][["Labels"]], "Name"] // Short
Out[10]=

Feature extraction

Remove the last two layers of the trained net so that the net produces a vector representation of an image:

In[11]:=
extractor = NetDrop[NetModel[
   "EfficientNet-V2 Trained on ImageNet Competition Data"], -2]
Out[11]=

Get a set of images:

In[12]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/0e6e12b1-30d7-4797-bbd8-4f18ee34e049"]

Use the net as a feature extractor to build a clustering tree of the images:

In[13]:=
ClusteringTree[imgs, FeatureExtractor -> extractor, ImageSize -> Large]
Out[13]=
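The same extractor works with other feature-based functions as well; for instance, it can be used to embed the images in a low-dimensional feature space (a sketch using the `imgs` list defined above):

```wolfram
(* plot the images in a 2D feature space derived from the net's features *)
FeatureSpacePlot[imgs, FeatureExtractor -> extractor]
```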

Transfer learning

Use the pre-trained model to build a classifier for telling apart indoor and outdoor photos. Create a test set and a training set:

In[14]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/a496e5c3-e3b0-4f4c-a3fa-bb1b369b4e32"]
In[15]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/937b0ed4-100e-4e1b-9e3a-e3be7b75a0a2"]

Remove the last two layers (the linear layer and the softmax layer) from the pre-trained net:

In[16]:=
tempNet = Take[NetModel[
   "EfficientNet-V2 Trained on ImageNet Competition Data"], {1, -3}]
Out[16]=

Create a new net composed of the pre-trained net followed by a linear layer and a softmax layer:

In[17]:=
newNet = NetAppend[
   tempNet, {"linearNew" -> LinearLayer[], "softmax" -> SoftmaxLayer[]}, "Output" -> NetDecoder[{"Class", {"indoor", "outdoor"}}]];

Train on the dataset, freezing all the weights except for those in the "linearNew" layer (use TargetDevice -> "GPU" for training on a GPU):

In[18]:=
trainedNet = NetTrain[newNet, trainSet, LearningRateMultipliers -> {"linearNew" -> 1, _ -> 0}]
Out[18]=

Perfect accuracy is obtained on the test set:

In[19]:=
ClassifierMeasurements[trainedNet, testSet, "Accuracy"]
Out[19]=
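Beyond the single accuracy figure, the full measurements object exposes further diagnostics; a sketch:

```wolfram
(* build the measurements object once, then query it for a confusion matrix *)
cm = ClassifierMeasurements[trainedNet, testSet];
cm["ConfusionMatrixPlot"]
```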

Net information

Inspect the number of parameters of all arrays in the net:

In[20]:=
Information[
 NetModel["EfficientNet-V2 Trained on ImageNet Competition Data"], "ArraysElementCounts"]
Out[20]=

Obtain the total number of parameters:

In[21]:=
Information[
 NetModel["EfficientNet-V2 Trained on ImageNet Competition Data"], "ArraysTotalElementCount"]
Out[21]=

Obtain the layer type counts:

In[22]:=
Information[
 NetModel["EfficientNet-V2 Trained on ImageNet Competition Data"], "LayerTypeCounts"]
Out[22]=

Export to ONNX

Export the net to the ONNX format:

In[23]:=
onnxFile = Export[FileNameJoin[{$TemporaryDirectory, "net.onnx"}], NetModel["EfficientNet-V2 Trained on ImageNet Competition Data"]]
Out[23]=

Get the size of the ONNX file:

In[24]:=
FileByteCount[onnxFile]
Out[24]=

Check some metadata of the ONNX model:

In[25]:=
{opsetVersion, irVersion} = {Import[onnxFile, "OperatorSetVersion"], Import[onnxFile, "IRVersion"]}
Out[25]=

Import the model back into the Wolfram Language. Note that the NetEncoder and NetDecoder will be absent, because they are not supported by the ONNX format:

In[26]:=
Import[onnxFile]
Out[26]=
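The missing encoder and decoder can be reattached by copying them from the original model; a sketch:

```wolfram
(* restore the original NetEncoder and NetDecoder on the imported net *)
imported = Import[onnxFile];
NetReplacePart[imported,
 {"Input" -> NetExtract[
    NetModel["EfficientNet-V2 Trained on ImageNet Competition Data"], "Input"],
  "Output" -> NetExtract[
    NetModel["EfficientNet-V2 Trained on ImageNet Competition Data"], "Output"]}]
```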

Resource History

Reference