EfficientDet Trained on MS-COCO Data

Detect and localize objects in an image

Released in 2020, this family of object detection models is obtained by uniformly scaling the resolution, depth and width of the original EfficientNet models, producing progressively larger nets. In addition, a novel weighted bidirectional feature pyramid network (BiFPN) is introduced for fast and easy multiscale feature fusion. EfficientDet-D7 achieves a state-of-the-art 55.1 AP on COCO test-dev, while being 4x–9x smaller and using 13x–42x fewer FLOPs than previous detectors.

Number of models: 9

Training Set Information

Performance

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["EfficientDet Trained on MS-COCO Data"]
Out[1]=

NetModel parameters

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:

In[2]:=
NetModel["EfficientDet Trained on MS-COCO Data", "ParametersInformation"]
Out[2]=

Pick a non-default net by specifying the parameters:

In[3]:=
NetModel[{"EfficientDet Trained on MS-COCO Data", "Architecture" -> "D5"}]
Out[3]=

Pick a non-default uninitialized net:

In[4]:=
NetModel[{"EfficientDet Trained on MS-COCO Data", "Architecture" -> "D5"}, "UninitializedEvaluationNet"]
Out[4]=

Evaluation function

Write an evaluation function to scale the result to the input image size and suppress the least probable detections:

In[5]:=
nonMaximumSuppression = ResourceFunction["NonMaximumSuppression"]
Out[5]=
In[6]:=
netevaluate[model_, img_Image, detectionThreshold_ : .5, overlapThreshold_ : .45] :=
 Module[
  {netoutput, class2box, argMaxPerBox, maxPerBox, maxClasses,
   probableDetectionInd, probableClasses, probableScores, probableBoxes,
   nmsDetections, boxDecoder, scale, targetSize},
  netoutput = model[img, {"ClassProb" -> "Probabilities", "Boxes"}];
  class2box = Transpose@Values[netoutput["ClassProb"]];
  (* Assuming there is one class per box *)
  argMaxPerBox = Flatten@Map[Ordering[#, -1] &, class2box];
  maxPerBox = Map[Max, class2box];
  maxClasses = Keys[netoutput["ClassProb"]][[argMaxPerBox]];
  (* Filter out the low-score boxes *)
  probableDetectionInd = UnitStep[maxPerBox - detectionThreshold];
  probableClasses = Pick[maxClasses, probableDetectionInd, 1];
  probableScores = Pick[maxPerBox, probableDetectionInd, 1];
  probableBoxes = Pick[netoutput["Boxes"], probableDetectionInd, 1];
  (* Apply non-maximum suppression *)
  nmsDetections = nonMaximumSuppression[
    probableBoxes -> probableScores, {"Region", "Index"},
    MaxOverlapFraction -> overlapThreshold];
  (* Rescale the boxes to the input image coordinate system *)
  targetSize = Reverse@Rest[NetExtract[model, {"Input", "Output"}]];
  scale = Max[ImageDimensions[img]/targetSize];
  boxDecoder[{{a_, b_}, {c_, d_}}, {w_, h_}] :=
   Rectangle[{a*scale, h - b*scale}, {c*scale, h - d*scale}];
  Map[
   {boxDecoder[First[#], ImageDimensions[img]],
    probableClasses[[Last[#]]], probableScores[[Last[#]]]} &,
   nmsDetections]
  ]
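
The suppression step can also be tried in isolation. The following is a minimal sketch with hand-made toy boxes and scores (not produced by the net), assuming the resource function accepts regions as corner-point pairs like those consumed above; two heavily overlapping boxes should collapse to the single higher-scoring one, while the separate box survives:

```wolfram
(* Toy boxes as {{xmin, ymin}, {xmax, ymax}} pairs: boxes 1 and 2 overlap heavily, box 3 is disjoint *)
toyBoxes = {{{0, 0}, {10, 10}}, {{1, 1}, {11, 11}}, {{20, 20}, {30, 30}}};
toyScores = {.9, .8, .7};
ResourceFunction["NonMaximumSuppression"][
 toyBoxes -> toyScores, "Index", MaxOverlapFraction -> .45]
```

Lowering MaxOverlapFraction makes the suppression more aggressive, merging boxes that overlap even slightly; raising it keeps more near-duplicate detections.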

Basic usage

Obtain the detected bounding boxes with their corresponding classes and confidences for a given image:

In[7]:=
net = NetModel["EfficientDet Trained on MS-COCO Data"]
Out[7]=
In[8]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/239d530e-6e7a-419f-8ded-d663123f0682"]
In[9]:=
detection = netevaluate[net, testImage];

Inspect which classes are detected:

In[10]:=
classes = DeleteDuplicates@Flatten@detection[[All, 2]]
Out[10]=

Visualize the detection:

In[11]:=
HighlightImage[testImage, GroupBy[detection[[All, ;; 2]], Last -> First]]
Out[11]=
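
The detection list computed above can also be summarized directly. As a small follow-up sketch (using only the detection variable from the previous evaluations), tally the boxes per class and list the detections from most to least confident:

```wolfram
(* Count how many boxes were detected for each class *)
Counts[detection[[All, 2]]]

(* List class and rounded confidence, ordered by decreasing confidence *)
Map[{#[[2]], Round[#[[3]], 0.01]} &, ReverseSortBy[detection, Last]]
```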

Resource History

Reference