Wolfram Research

SSD Feature Pyramid Nets Trained on MS-COCO Data

Detect and localize objects in an image

This family of object detection models introduces a feature pyramid network (FPN) to effectively recognize objects at different scales. To make lower-level feature maps semantically stronger, the FPN employs a top-down pathway that upsamples the coarser, semantically stronger feature maps from higher levels and merges them with higher-resolution features from the bottom-up pathway via lateral connections. All models in this family have separate modules for box and class predictions, except for the MobileNetV2 models, which share some computations before making the final predictions. In addition, all models were trained using "focal loss," which addresses the imbalance between foreground and background classes that arises in single-stage detectors.
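The top-down merge described above can be sketched as a minimal NetGraph. This is a hypothetical illustration of a single FPN lateral connection, not the actual internals of these models; the channel counts and spatial sizes are made up for the example:

(* One FPN merge step: upsample the coarser top-down map and add it to a
   1x1-convolved bottom-up map. All sizes here are illustrative. *)
fpnMerge = NetGraph[
  <|
   "lateral" -> ConvolutionLayer[256, {1, 1}], (* project bottom-up channels *)
   "upsample" -> ResizeLayer[{Scaled[2], Scaled[2]}, Resampling -> "Nearest"],
   "merge" -> ThreadingLayer[Plus] (* elementwise sum of the two maps *)
   |>,
  {
   NetPort["BottomUp"] -> "lateral",
   NetPort["TopDown"] -> "upsample",
   {"lateral", "upsample"} -> "merge"
   },
  "BottomUp" -> {512, 32, 32}, "TopDown" -> {256, 16, 16}
  ]

The focal loss mentioned above down-weights easy, well-classified examples; for the true-class probability p it can be written as:

focalLoss[p_, gamma_ : 2] := -(1 - p)^gamma*Log[p]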

Training Set Information

Model Information

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["SSD Feature Pyramid Nets Trained on MS-COCO Data"]
Out[1]=

NetModel parameters

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:

In[2]:=
NetModel["SSD Feature Pyramid Nets Trained on MS-COCO Data", "ParametersInformation"]
Out[2]=

Pick a non-default net by specifying the parameters:

In[3]:=
NetModel[{"SSD Feature Pyramid Nets Trained on MS-COCO Data", "Architecture" -> "ResNet-101", "ImageSize" -> 640}]
Out[3]=

Pick a non-default uninitialized net:

In[4]:=
NetModel[{"SSD Feature Pyramid Nets Trained on MS-COCO Data", "Architecture" -> "ResNet-101", "ImageSize" -> 640}, "UninitializedEvaluationNet"]
Out[4]=

Evaluation function

Define the label list for this model:

In[5]:=
labels = {"person", "bicycle", "car", "motorcycle", "airplane", "bus",
    "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"};

Write an evaluation function to scale the result to the input image size and suppress the least probable detections:

In[6]:=
nonMaximumSuppression = ResourceFunction["NonMaximumSuppression"]
Out[6]=
In[7]:=
netevaluate[model_, img_Image, detectionThreshold_ : .5, overlapThreshold_ : .45] :=
 Module[{netoutput, class2box, argMaxPerBox, maxPerBox, maxClasses,
   probableDetectionInd, probableClasses, probableScores, probableBoxes,
   nmsDetections, boxDecoder},
  netoutput = model[img];
  class2box = netoutput["ClassProb"];
  (* Assuming there is one class per box *)
  argMaxPerBox = Flatten@Map[Ordering[#, -1] &, class2box];
  maxPerBox = Map[Max, class2box];
  maxClasses = labels[[argMaxPerBox]];
  (* Filter out the low-score boxes *)
  probableDetectionInd = UnitStep[maxPerBox - detectionThreshold];
  {probableClasses, probableScores, probableBoxes} =
   Map[Pick[#, probableDetectionInd, 1] &, {maxClasses, maxPerBox, netoutput["Boxes"]}];
  (* Apply non-maximum suppression *)
  nmsDetections = nonMaximumSuppression[probableBoxes -> probableScores,
    {"Region", "Index"}, MaxOverlapFraction -> overlapThreshold];
  (* Scale the boxes to the input image coordinate system *)
  boxDecoder[{a_, b_, c_, d_}, {w_, h_}] := Rectangle[{a*w, h - b*h}, {c*w, h - d*h}];
  Map[{boxDecoder[First[#], ImageDimensions[img]], probableClasses[[Last[#]]],
     probableScores[[Last[#]]]} &, nmsDetections]
  ]

Basic usage

Obtain the detected bounding boxes with their corresponding classes and confidences for a given image:

In[8]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/99f0b348-a09f-487c-8dbe-4b4edc458178"]
In[9]:=
detection = netevaluate[
  NetModel["SSD Feature Pyramid Nets Trained on MS-COCO Data"], testImage]
Out[9]=

Inspect which classes are detected:

In[10]:=
classes = DeleteDuplicates@Flatten@detection[[All, 2]]
Out[10]=

Visualize the detection:

In[11]:=
HighlightImage[testImage, GroupBy[detection[[All, ;; 2]], Last -> First]]
Out[11]=

Network result

The network computes 12,804 bounding boxes and, for each box, the probability that it contains an object of each class:

In[12]:=
res = NetModel["SSD Feature Pyramid Nets Trained on MS-COCO Data"][
   testImage];
In[13]:=
Dimensions /@ res
Out[13]=

Change the coordinate system into the graphics domain:

In[14]:=
rectangles = MapAt[1 - # &, res["Boxes"], {All, {2, 4}}];

Visualize all the boxes predicted by the net, scaled by their "objectness" measures:

In[15]:=
rectangles = ArrayReshape[rectangles, {Length@res["Boxes"], 2, 2}];
probabilities = res["ClassProb"];
In[16]:=
Graphics[
 MapThread[{EdgeForm[Opacity[Total[#1]*0.5]], Rectangle @@ #2} &, {probabilities, rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}
 ]
Out[16]=

Visualize all the boxes scaled by the probability that they contain a cat:

In[17]:=
idx = Position[labels, "cat"][[1, 1]]
Out[17]=
In[18]:=
Graphics[
 MapThread[{EdgeForm[Opacity[#1]], Rectangle @@ #2} &, {probabilities[[All, idx]], rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}
 ]
Out[18]=

Superimpose the cat prediction on top of the scaled input received by the net:

In[19]:=
HighlightImage[testImage, Graphics[MapThread[{EdgeForm[{Opacity[#1]}], Rectangle @@ (#2*{ImageDimensions[testImage], ImageDimensions[testImage]})} &, {probabilities[[All, idx]], rectangles}]], BaseStyle -> {FaceForm[], EdgeForm[{Thin, Red}]}]
Out[19]=

Net information

Inspect the number of parameters of all arrays in the net:

In[20]:=
Information[
 NetModel["SSD Feature Pyramid Nets Trained on MS-COCO Data"], "ArraysElementCounts"]
Out[20]=

Obtain the total number of parameters:

In[21]:=
Information[
 NetModel["SSD Feature Pyramid Nets Trained on MS-COCO Data"], "ArraysTotalElementCount"]
Out[21]=

Obtain the layer type counts:

In[22]:=
Information[
 NetModel["SSD Feature Pyramid Nets Trained on MS-COCO Data"], "LayerTypeCounts"]
Out[22]=

Display the summary graphic:

In[23]:=
Information[
 NetModel["SSD Feature Pyramid Nets Trained on MS-COCO Data"], "SummaryGraphic"]
Out[23]=

Resource History

References