YOLOX Trained on MS-COCO Data

Detect and localize objects in an image

This model is also available through the built-in functions ImageCases and ImageContents
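
For quick, high-level use, those functions can be called directly, without retrieving the net explicitly. A minimal sketch, where img stands for any input image:

ImageCases[img] (* sub-images of the detected objects *)
ImageContents[img] (* a Dataset of detected objects with their bounding boxes and probabilities *)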

YOLO (You Only Look Once) Version X is a family of object detection models published in August 2021. It revisits the YOLOv3-DarkNet53 model with several architectural and training improvements: decoupled classification and regression heads, a switch to an anchor-free pipeline, an advanced label-assignment strategy named SimOTA (Simplified Optimal Transport Assignment) and strong data augmentation techniques.

Training Set Information

Model Information

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["YOLOX Trained on MS-COCO Data"]
Out[1]=

NetModel parameters

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:

In[2]:=
NetModel["YOLOX Trained on MS-COCO Data", "ParametersInformation"]
Out[2]=

Pick a non-default net by specifying the parameters:

In[3]:=
NetModel[{"YOLOX Trained on MS-COCO Data", "Architecture" -> "L"}]
Out[3]=

Pick a non-default uninitialized net:

In[4]:=
NetModel[{"YOLOX Trained on MS-COCO Data", "Architecture" -> "L"}, "UninitializedEvaluationNet"]
Out[4]=
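
A net retrieved this way has its arrays left unspecified; it can be randomly initialized, e.g. before training from scratch. A minimal sketch using NetInitialize:

NetInitialize[
 NetModel[{"YOLOX Trained on MS-COCO Data", "Architecture" -> "L"}, "UninitializedEvaluationNet"]]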

Evaluation function

Write an evaluation function to scale the result to the input image size and suppress the least probable detections:

In[5]:=
nonMaximumSuppression = ResourceFunction["NonMaximumSuppression"];
In[6]:=
labels = {"person", "bicycle", "car", "motorcycle", "airplane", "bus",
    "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"};
In[7]:=
netevaluate[model_, img_, detectionThreshold_ : .5, overlapThreshold_ : .5] :=
   Module[{imgSize, res, obj, classes, coords, scores, bestClass, probable, probableClasses, probableScores, probableBoxes, h, w, max, scale, padding, nms, finals},
    imgSize = Last@NetExtract[model, {"Input", "Output"}];
    (*extract the network outputs by name rather than relying on their order*)
    res = model[img];
    coords = res["Boxes"];
    classes = res["ClassProb"];
    obj = Flatten@res["Objectness"];
    (*each class probability is rescaled with the box objectness*)
    scores = classes*obj;
    bestClass = Last@*Ordering /@ scores;
    (*discard detections whose objectness is below the threshold*)
    probable = UnitStep[obj - detectionThreshold];
    {probableClasses, probableBoxes, probableScores} = Map[Pick[#, probable, 1] &, {labels[[bestClass]], coords, obj}];
    If[Length[probableBoxes] == 0, Return[{}]];
    (*transform the {center x, center y, width, height} network coordinates into Rectangle specifications in the input image coordinate system, flipping the vertical axis*)
    {w, h} = ImageDimensions[img];
    max = Max[{w, h}];
    scale = max/imgSize;
    padding = imgSize*(1 - {w, h}/max);
    padding[[1]] = padding[[1]]/2;
    probableBoxes = Apply[
      Rectangle[
        scale*({#1 - #3/2, imgSize - #2 - #4/2} - padding),
        scale*({#1 + #3/2, imgSize - #2 + #4/2} - padding)
        ] &, probableBoxes, 1];
    (*perform class-agnostic non-maximum suppression, applying the overlap threshold*)
    nms = nonMaximumSuppression[probableBoxes -> probableScores, "Index", MaxOverlapFraction -> overlapThreshold];
    finals = Transpose[{probableBoxes, probableClasses, probableScores}];
    Part[finals, nms]
    ];

Basic usage

Obtain the detected bounding boxes with their corresponding classes and confidences for a given image:

In[8]:=
net = NetModel["YOLOX Trained on MS-COCO Data"]
Out[8]=
In[9]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/96f1ffd2-3499-4db5-beaa-3345691fceaf"]
In[10]:=
detection = netevaluate[net, testImage];
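
Both thresholds defined in netevaluate can be tuned; for instance, lowering the detection threshold keeps boxes with weaker objectness:

netevaluate[net, testImage, .25]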

Inspect which classes are detected:

In[11]:=
classes = DeleteDuplicates@Flatten@detection[[All, 2]]
Out[11]=
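
Count the detections per class:

Counts[detection[[All, 2]]]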

Visualize the detection:

In[12]:=
HighlightImage[testImage, GroupBy[detection[[All, ;; 2]], Last -> First]]
Out[12]=
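
The boxes can also be annotated with their class and confidence. A minimal sketch using Labeled within HighlightImage:

HighlightImage[testImage, Labeled[#1, Row[{#2, ": ", Round[#3, .01]}]] & @@@ detection]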

Network result

The network computes 8,400 bounding boxes, each with an objectness score and the probabilities of the box content belonging to each of the 80 classes:

In[13]:=
res = NetModel["YOLOX Trained on MS-COCO Data"][testImage];
In[14]:=
Dimensions /@ res
Out[14]=
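
The count of 8,400 candidates follows from the three detection scales of the architecture: one box per spatial location at strides 8, 16 and 32 over the 640×640 input. A quick check of the arithmetic:

Total[(640/#)^2 & /@ {8, 16, 32}] (* 6400 + 1600 + 400 = 8400 *)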

Rescale the bounding boxes to the coordinates of the input image and visualize them scaled by their "objectness" measures:

In[15]:=
rectangles = Block[
   {w, h, max, imgSize, scale, padding},
   {w, h} = ImageDimensions[testImage];
   max = Max[{w, h}];
   imgSize = 640;
   scale = max/imgSize;
   padding = imgSize*(1 - {w, h}/max);
   padding[[1]] = padding[[1]]/2;
   (*same coordinate transformation as in netevaluate*)
   Apply[
    Rectangle[
      scale*({#1 - #3/2, imgSize - #2 - #4/2} - padding),
      scale*({#1 + #3/2, imgSize - #2 + #4/2} - padding)
      ] &,
    res["Boxes"],
    1
    ]
   ];
In[16]:=
Graphics[
 MapThread[{EdgeForm[Opacity[Total[#1] + .01]], #2} &, {res[
    "Objectness"], rectangles}], BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}]
Out[16]=
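
Most of the 8,400 candidates have near-zero objectness, which is why the thresholding in netevaluate discards all but a handful. Inspect the distribution on a logarithmic count scale:

Histogram[Flatten@res["Objectness"], 50, "LogCount"]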

Visualize all the boxes scaled by the probability that they contain a cat:

In[17]:=
labels = {"person", "bicycle", "car", "motorcycle", "airplane", "bus",
    "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"};
In[18]:=
idx = Position[labels, "cat"][[1, 1]]
Out[18]=
In[19]:=
Graphics[
 MapThread[{EdgeForm[Opacity[#1 + .01]], #2} &,
  {Flatten[res["Objectness"]]*res["ClassProb"][[All, idx]], rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}
 ]
Out[19]=

Superimpose the cat predictions on the input image:

In[20]:=
HighlightImage[testImage, Graphics[
  MapThread[{EdgeForm[{Thickness[#1/100], Opacity[(#1 + .01)/3]}], #2} &,
   {Flatten[res["Objectness"]]*res["ClassProb"][[All, idx]], rectangles}]],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Red}]}]
Out[20]=

Net information

Inspect the number of parameters of all arrays in the net:

In[21]:=
Information[
 NetModel["YOLOX Trained on MS-COCO Data"], "ArraysElementCounts"]
Out[21]=

Obtain the total number of parameters:

In[22]:=
Information[
 NetModel["YOLOX Trained on MS-COCO Data"], "ArraysTotalElementCount"]
Out[22]=

Obtain the layer type counts:

In[23]:=
Information[
 NetModel["YOLOX Trained on MS-COCO Data"], "LayerTypeCounts"]
Out[23]=

Display the summary graphic:

In[24]:=
Information[
 NetModel["YOLOX Trained on MS-COCO Data"], "SummaryGraphic"]
Out[24]=

Resource History

Reference