SSD-VGG-300 Trained on PASCAL VOC Data

Contributed by: Julian W. Francis

Detect and localize objects in an image

Released in 2016, SSD (Single Shot MultiBox Detector) discretizes the output space of bounding boxes into a set of default boxes of different aspect ratios and scales. At prediction time, the network generates a score for each object class in every default box, and predictions from multiple feature maps with different resolutions cover objects of various sizes. This model processes images at 59 FPS on an NVIDIA Titan X.
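As a back-of-the-envelope check (the feature-map grid sizes and boxes per location are taken from the SSD paper, not from this page), the six SSD300 feature maps account for the model's fixed budget of default boxes:

```wl
(* SSD300 feature-map side lengths and default boxes per grid location, per the SSD paper *)
gridSides = {38, 19, 10, 5, 3, 1};
boxesPerLocation = {4, 6, 6, 6, 4, 4};
Total[gridSides^2*boxesPerLocation]  (* 8732 default boxes *)
```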

Number of layers: 145 | Parameter count: 27,076,694 | Trained size: 109 MB

Training Set Information

Performance

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["SSD-VGG-300 Trained on PASCAL VOC Data"]
Out[1]=

Evaluation function

Write an evaluation function to scale the result to the input image size and suppress the least probable detections:

In[2]:=
nonMaxSuppression[overlapThreshold_][detection_] :=
 Fold[
  (* keep a candidate only if it does not overlap any already kept box too strongly *)
  {list, new} |-> If[NoneTrue[list[[All, 1]], iou[#, new[[1]]] > overlapThreshold &], Append[list, new], list],
  (* seed the fold with the highest-confidence detection *)
  Sequence @@ TakeDrop[Reverse@SortBy[detection, Last], 1]]
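As a quick sanity check (using two hypothetical detections, not output from the model), two heavily overlapping boxes of the same class collapse to the single highest-confidence one:

```wl
nonMaxSuppression[.45][{
  {Rectangle[{0, 0}, {10, 10}], "cat", .9},
  {Rectangle[{1, 1}, {11, 11}], "cat", .8}}]
(* the IoU of the two boxes is 81/119 ≈ 0.68 > 0.45, so only the 0.9 detection survives *)
```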

iou := iou = With[{c = Compile[{{box1, _Real, 2}, {box2, _Real, 2}},
      Module[{area1, area2, x1, y1, x2, y2, w, h, int},
       (* areas of the two boxes, each given as {{xmin, ymin}, {xmax, ymax}} *)
       area1 = (box1[[2, 1]] - box1[[1, 1]]) (box1[[2, 2]] - box1[[1, 2]]);
       area2 = (box2[[2, 1]] - box2[[1, 1]]) (box2[[2, 2]] - box2[[1, 2]]);
       (* corners of the intersection rectangle *)
       x1 = Max[box1[[1, 1]], box2[[1, 1]]];
       y1 = Max[box1[[1, 2]], box2[[1, 2]]];
       x2 = Min[box1[[2, 1]], box2[[2, 1]]];
       y2 = Min[box1[[2, 2]], box2[[2, 2]]];
       w = Max[0., x2 - x1];
       h = Max[0., y2 - y1];
       int = w*h;
       (* intersection over union *)
       int/(area1 + area2 - int)],
      RuntimeAttributes -> {Listable}, Parallelization -> True, RuntimeOptions -> "Speed"]},
   (* accept Rectangle arguments by converting them to coordinate lists *)
   c @@ Replace[{##}, Rectangle -> List, Infinity, Heads -> True] &]
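A quick check of the helper on two unit-overlap rectangles (values worked out by hand):

```wl
iou[Rectangle[{0, 0}, {2, 2}], Rectangle[{1, 1}, {3, 3}]]
(* intersection 1, union 4 + 4 - 1 = 7, so 1/7 ≈ 0.143 *)
```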

Define the label list for this model. Integers in the model's output correspond to elements in the label list:

In[3]:=
labels = {"aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
      "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"};
In[4]:=
netevaluate[img_Image, detectionThreshold_ : .5, overlapThreshold_ : .45] :=
 Module[{netOutputDecoder, net},
  (* scale boxes back to the input image size and keep detections above the threshold *)
  netOutputDecoder[imageDims_, threshold_ : .5][netOutput_] :=
   Module[{detections = Position[netOutput["ClassProb"], x_ /; x > threshold]},
    If[Length[detections] > 0,
     Transpose[{
       Rectangle @@@ Round@Transpose[
          Transpose[Extract[netOutput["Boxes"], detections[[All, 1 ;; 1]]], {2, 3, 1}]*
           imageDims/{300, 300}, {3, 1, 2}],
       Extract[labels, detections[[All, 2 ;; 2]]],
       Extract[netOutput["ClassProb"], detections]}],
     {}]];
  net = NetModel["SSD-VGG-300 Trained on PASCAL VOC Data"];
  (* resize to the net's 300x300 input, decode, then apply per-class non-maximum suppression *)
  (Flatten[nonMaxSuppression[overlapThreshold] /@ GatherBy[#, #[[2]] &], 1] &)@
   netOutputDecoder[ImageDimensions[img], detectionThreshold]@
    (net@(ImageResize[#, {300, 300}] &)@img)]

Basic usage

Obtain the detected bounding boxes with their corresponding classes and confidences for a given image:

In[5]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/10e91fac-61c1-41c7-9474-c3152636da92"]
In[6]:=
detection = netevaluate[testImage]
Out[6]=

Inspect which classes are detected:

In[7]:=
classes = DeleteDuplicates@detection[[All, 2]]
Out[7]=

Visualize the detection:

In[8]:=
HighlightImage[testImage, MapThread[{White, Inset[Style[#2, Black, FontSize -> Scaled[1/12], Background -> GrayLevel[1, .6]], Last[#1], {Right, Top}], #1} &,
   Transpose@detection]]
Out[8]=

Network result

The network computes 8,732 bounding boxes and, for each box, the probability that it contains an object of each class:

In[9]:=
res = NetModel["SSD-VGG-300 Trained on PASCAL VOC Data"][testImage]
Out[9]=
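The decoder above indexes into the two output ports by shape; their dimensions can be confirmed directly (assuming the ports follow the decoder's indexing, with boxes along the first dimension and classes along the second):

```wl
Dimensions /@ res
(* "Boxes" should be {8732, 2, 2} (corner pairs); "ClassProb" should be {8732, 20} (one probability per class) *)
```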

Visualize all the boxes predicted by the net scaled by their “objectness” measures:

In[10]:=
rectangles = Rectangle @@@ res["Boxes"];
In[11]:=
Graphics[
 MapThread[{EdgeForm[Opacity[#1 + .01]], #2} &, {Total[
    res["ClassProb"], {2}], rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}
 ]
Out[11]=

Visualize all the boxes scaled by the probability that they contain a bus:

In[12]:=
idx = Position[labels, "bus"][[1, 1]]
Out[12]=
In[13]:=
Graphics[
 MapThread[{EdgeForm[Opacity[#1 + .01]], #2} &, {res["ClassProb"][[
    All, idx]], rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}
 ]
Out[13]=

Superimpose the bus prediction on top of the scaled input received by the net:

In[14]:=
HighlightImage[ImageResize[testImage, {300, 300}], Graphics[MapThread[{EdgeForm[{Opacity[#1 + .01]}], #2} &, {res[
      "ClassProb"][[All, idx]], rectangles}]], BaseStyle -> {FaceForm[], EdgeForm[{Thin, Red}]}]
Out[14]=

Advanced visualization

Write a function to apply a custom styling to the result of the detection:

In[15]:=
styleDetection[detection_] :=
 {RandomColor[], {#[[1]], Text[Style[#[[2]], White, 12], {20, 20} + #[[1, 1]], Background -> Black]} & /@ #} & /@
  GatherBy[detection, #[[2]] &]

Visualize multiple objects, using a different color for each class:

In[16]:=
HighlightImage[testImage, styleDetection[netevaluate[testImage]]]
Out[16]=

Net information

Inspect the number of parameters of all arrays in the net:

In[17]:=
NetInformation[
 NetModel["SSD-VGG-300 Trained on PASCAL VOC Data"], "ArraysElementCounts"]
Out[17]=

Obtain the total number of parameters:

In[18]:=
NetInformation[
 NetModel["SSD-VGG-300 Trained on PASCAL VOC Data"], "ArraysTotalElementCount"]
Out[18]=

Obtain the layer type counts:

In[19]:=
NetInformation[
 NetModel["SSD-VGG-300 Trained on PASCAL VOC Data"], "LayerTypeCounts"]
Out[19]=

Display the summary graphic:

In[20]:=
NetInformation[
 NetModel["SSD-VGG-300 Trained on PASCAL VOC Data"], "SummaryGraphic"]
Out[20]=

Export to MXNet

Export the net into a format that can be opened in MXNet:

In[21]:=
jsonPath = Export[FileNameJoin[{$TemporaryDirectory, "net.json"}], NetModel["SSD-VGG-300 Trained on PASCAL VOC Data"], "MXNet"]
Out[21]=

Export also creates a net.params file containing the parameters:

In[22]:=
paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}]
Out[22]=

Get the size of the parameter file:

In[23]:=
FileByteCount[paramPath]
Out[23]=

The size is similar to the byte count of the resource object:

In[24]:=
ResourceObject["SSD-VGG-300 Trained on PASCAL VOC Data"]["ByteCount"]
Out[24]=

Requirements

Wolfram Language 11.3 (March 2018) or above

Resource History

Reference