SSD-MobileNet V2 Trained on MS-COCO Data

Contributed by: Julian W. Francis

Detect and localize objects in an image

Released in 2019, this model is a single-stage object detection model that goes straight from image pixels to bounding box coordinates and class probabilities. Its backbone uses MobileNet V2's inverted residual structure, in which the input and output of each residual block are thin bottleneck layers, as opposed to the expanded representations used in traditional residual models. Nonlinearities are removed from the narrow layers, and the expanded intermediate layers use lightweight depthwise convolutions to filter features. This model is part of the TensorFlow Object Detection API.
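The efficiency gain from depthwise convolutions can be seen with a quick parameter count. As an illustrative Python sketch (not part of the model code; the channel widths are arbitrary examples), compare a standard 3x3 convolution with a depthwise-separable one:

```python
def standard_conv_params(k, c_in, c_out):
    # a k x k kernel spans all input channels for every output channel
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # one k x k filter per input channel, then a 1 x 1 pointwise convolution
    return k * k * c_in + c_in * c_out

standard = standard_conv_params(3, 32, 64)         # 9 * 32 * 64 = 18432
separable = depthwise_separable_params(3, 32, 64)  # 288 + 2048 = 2336
print(standard, separable)
```

At these widths the separable factorization uses roughly 8x fewer weights, which is the main source of MobileNet's small parameter count.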

Number of layers: 267 | Parameter count: 15,291,106 | Trained size: 63 MB

Training Set Information

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["SSD-MobileNet V2 Trained on MS-COCO Data"]
Out[1]=

Evaluation function

Define the label list for this model. Integers in the model's output correspond to elements in the label list:

In[2]:=
labels = {"person", "bicycle", "car", "motorcycle", "airplane", "bus",
    "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"};

Write an evaluation function to scale the result to the input image size and suppress the least probable detections:

In[3]:=
nonMaxSuppression[overlapThreshold_][detection_] := Fold[{list, new} |-> If[NoneTrue[list[[All, 1]], iou[#, new[[1]]] > overlapThreshold &], Append[list, new], list], Sequence @@ TakeDrop[Reverse@SortBy[detection, Last], 1]]

iou := iou = With[{c = Compile[{{box1, _Real, 2}, {box2, _Real, 2}}, Module[{area1, area2, x1, y1, x2, y2, w, h, int}, area1 = (box1[[2, 1]] - box1[[1, 1]]) (box1[[2, 2]] - box1[[1, 2]]);
       area2 = (box2[[2, 1]] - box2[[1, 1]]) (box2[[2, 2]] - box2[[1, 2]]);
       x1 = Max[box1[[1, 1]], box2[[1, 1]]];
       y1 = Max[box1[[1, 2]], box2[[1, 2]]];
       x2 = Min[box1[[2, 1]], box2[[2, 1]]];
       y2 = Min[box1[[2, 2]], box2[[2, 2]]];
       w = Max[0., x2 - x1];
       h = Max[0., y2 - y1];
       int = w*h;
       int/(area1 + area2 - int)], RuntimeAttributes -> {Listable}, Parallelization -> True, RuntimeOptions -> "Speed"]}, c @@ Replace[{##}, Rectangle -> List, Infinity, Heads -> True] &]
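The compiled iou function above computes the standard intersection-over-union ratio. For reference, here is the same computation as a plain Python sketch with a worked example; the box format mirrors the {{xmin, ymin}, {xmax, ymax}} corner pairs used above:

```python
def iou(box1, box2):
    # boxes are ((xmin, ymin), (xmax, ymax)) corner pairs
    (ax1, ay1), (ax2, ay2) = box1
    (bx1, by1), (bx2, by2) = box2
    w = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    h = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = w * h
    area1 = (ax2 - ax1) * (ay2 - ay1)
    area2 = (bx2 - bx1) * (by2 - by1)
    return inter / (area1 + area2 - inter)

# two 2x2 boxes offset by one unit: intersection 1, union 4 + 4 - 1 = 7
print(iou(((0, 0), (2, 2)), ((1, 1), (3, 3))))  # 0.14285714...
```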
In[4]:=
netevaluate[img_Image, detectionThreshold_ : .5, overlapThreshold_ : .45] := Module[{netOutputDecoder, net, decoded},
  netOutputDecoder[imageDims_, threshold_ : .5][netOutput_] := Module[{detections = Position[netOutput["ClassProb"], x_ /; x > threshold]}, If[Length[detections] > 0, Transpose[{Rectangle @@@ Round@Transpose[
          Transpose[
            Extract[netOutput["Boxes"], detections[[All, 1 ;; 1]]], {2, 3, 1}]*
           imageDims/{300, 300}, {3, 1, 2}], Extract[labels, detections[[All, 2 ;; 2]]],
       Extract[netOutput["ClassProb"], detections]}],
     {}
     ]
    ];
  net = NetModel["SSD-MobileNet V2 Trained on MS-COCO Data"];
  decoded = netOutputDecoder[ImageDimensions[img], detectionThreshold]@net[img];
  Flatten[
   Map[nonMaxSuppression[overlapThreshold], GatherBy[decoded, #[[2]] &]], 1]
  ]
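The per-class suppression performed by nonMaxSuppression is the usual greedy scheme: sort detections by confidence, then keep a detection only if it does not overlap an already-kept one by more than the threshold. A minimal Python sketch of that logic (the iou helper and detection triples are illustrative):

```python
def iou(b1, b2):
    # boxes are ((xmin, ymin), (xmax, ymax)) corner pairs
    (ax1, ay1), (ax2, ay2) = b1
    (bx1, by1), (bx2, by2) = b2
    w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = w * h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union

def non_max_suppression(detections, overlap_threshold=0.45):
    # detections: (box, label, confidence) triples for a single class
    kept = []
    for det in sorted(detections, key=lambda d: d[2], reverse=True):
        if all(iou(k[0], det[0]) <= overlap_threshold for k in kept):
            kept.append(det)
    return kept

dets = [(((0, 0), (10, 10)), "cat", 0.9),
        (((1, 1), (10, 10)), "cat", 0.8),   # IoU 0.81 with the first box
        (((20, 20), (30, 30)), "cat", 0.7)]
print([d[2] for d in non_max_suppression(dets)])  # [0.9, 0.7]
```

The second detection is suppressed because it overlaps the higher-confidence first one by more than the threshold; the disjoint third box survives.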

Basic usage

Obtain the detected bounding boxes with their corresponding classes and confidences for a given image:

In[5]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/896d4f4d-f1d7-437a-993d-e529cd94edeb"]
In[6]:=
detection = netevaluate[testImage, .5]
Out[6]=

Inspect which classes are detected:

In[7]:=
classes = DeleteDuplicates@detection[[All, 2]]
Out[7]=

Visualize the detection:

In[8]:=
HighlightImage[testImage, MapThread[{White, Inset[Style[#2,
      Black, FontSize -> Scaled[1/20], Background -> GrayLevel[1, .6]], Last[#1], {Right, Top}], #1} &,
   Transpose@detection]]
Out[8]=

Network result

The network computes 1,917 bounding boxes and the probability that the objects in each box are of any given class:

In[9]:=
res = NetModel["SSD-MobileNet V2 Trained on MS-COCO Data"][testImage]
Out[9]=
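The figure of 1,917 boxes follows from SSD's fixed anchor grid. Assuming the reference TensorFlow configuration for SSD-MobileNet at 300x300 input (the feature-map sizes and anchors-per-cell below are from that configuration, not stated in this document), the count can be reproduced with simple arithmetic:

```python
# feature-map side lengths and anchor boxes per cell for SSD-MobileNet at 300x300
feature_maps = [19, 10, 5, 3, 2, 1]
anchors_per_cell = [3, 6, 6, 6, 6, 6]
total_boxes = sum(side * side * n for side, n in zip(feature_maps, anchors_per_cell))
print(total_boxes)  # 1917
```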

Visualize all the boxes predicted by the net scaled by their “objectness” measures:

In[10]:=
rectangles = Rectangle @@@ res["Boxes"];
In[11]:=
Graphics[
 MapThread[{EdgeForm[Opacity[#1 + .01]], #2} &, {Total[
    res["ClassProb"], {2}], rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}
 ]
Out[11]=

Visualize all the boxes scaled by the probability that they contain a cat:

In[12]:=
idx = Position[labels, "cat"][[1, 1]]
Out[12]=
In[13]:=
Graphics[
 MapThread[{EdgeForm[Opacity[#1 + .01]], #2} &, {res["ClassProb"][[
    All, idx]], rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}
 ]
Out[13]=

Superimpose the cat prediction on top of the scaled input received by the net:

In[14]:=
HighlightImage[ImageResize[testImage, {300, 300}], Graphics[MapThread[{EdgeForm[{Opacity[#1 + .01]}], #2} &, {res[
      "ClassProb"][[All, idx]], rectangles}]], BaseStyle -> {FaceForm[], EdgeForm[{Thin, Red}]}]
Out[14]=

Advanced visualization

Write a function to apply a custom styling to the result of the detection:

In[15]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/2fa7fc9f-6850-4cfe-a07b-35540eaa9f20"]
In[16]:=
styleDetection[
  detection_] := {RandomColor[], {#[[1]], Text[Style[#[[2]], White, 12], {20, 20} + #[[1, 1]], Background -> Transparent]} & /@ #} & /@ GatherBy[detection, #[[2]] &]

Visualize multiple objects, using a different color for each class:

In[17]:=
HighlightImage[image, styleDetection[netevaluate[image, 0.2]]]
Out[17]=

Net information

Inspect the number of parameters of all arrays in the net:

In[18]:=
NetInformation[NetModel["SSD-MobileNet V2 Trained on MS-COCO Data"], "ArraysElementCounts"]
Out[18]=

Obtain the total number of parameters:

In[19]:=
NetInformation[NetModel["SSD-MobileNet V2 Trained on MS-COCO Data"], "ArraysTotalElementCount"]
Out[19]=

Obtain the layer type counts:

In[20]:=
NetInformation[NetModel["SSD-MobileNet V2 Trained on MS-COCO Data"], "LayerTypeCounts"]
Out[20]=

Display the summary graphic:

In[21]:=
NetInformation[NetModel["SSD-MobileNet V2 Trained on MS-COCO Data"], "SummaryGraphic"]
Out[21]=

Export to MXNet

Export the net into a format that can be opened in MXNet:

In[22]:=
jsonPath = Export[FileNameJoin[{$TemporaryDirectory, "net.json"}], NetModel["SSD-MobileNet V2 Trained on MS-COCO Data"], "MXNet"]
Out[22]=

Export also creates a net.params file containing parameters:

In[23]:=
paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}]
Out[23]=

Get the size of the parameter file:

In[24]:=
FileByteCount[paramPath]
Out[24]=

The size is similar to the byte count of the resource object:

In[25]:=
ResourceObject[
  "SSD-MobileNet V2 Trained on MS-COCO Data"]["ByteCount"]
Out[25]=

Requirements

Wolfram Language 12.0 (April 2019) or above

Resource History

Reference