YOLO V5 Trained on MS-COCO Data

Detect and localize objects in an image

YOLO (You Only Look Once) Version 5 is a family of object detection models published in June 2020. It is a single-stage architecture that goes straight from image pixels to bounding box coordinates and class probabilities. YOLO Version 5 employs the Cross Stage Partial Network (CSPNet) technique in its backbone to extract rich, informative features from the input image, uses a PANet (Path Aggregation Network) neck for feature aggregation and uses the SiLU function for its activations. Its training leverages several data augmentation techniques, such as mosaic augmentation and cutout (also used in YOLO Version 4), which help the model recognize small objects. The YOLO Version 5 "S" model is about 90% smaller than YOLOv4-custom (with Darknet architecture), meaning it can be deployed to embedded devices much more easily. It is also faster in both training and inference, achieving 140 frames per second in batch mode.
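Mosaic augmentation stitches four training images into a single composite so that objects appear at varied scales and contexts. The following is a minimal illustrative sketch of the idea (not the actual training pipeline, which also randomizes the crop center and scales), assuming four sample images are available:

(* assemble four hypothetical sample images into a 2x2 training mosaic *)
mosaic[{img1_Image, img2_Image, img3_Image, img4_Image}, size_ : 320] :=
 ImageAssemble[Partition[ImageResize[#, {size, size}] & /@ {img1, img2, img3, img4}, 2]]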

Training Set Information

Model Information

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["YOLO V5 Trained on MS-COCO Data"]
Out[1]=

NetModel parameters

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:

In[2]:=
NetModel["YOLO V5 Trained on MS-COCO Data", "ParametersInformation"]
Out[2]=

Pick a non-default net by specifying the parameters:

In[3]:=
NetModel[{"YOLO V5 Trained on MS-COCO Data", "Architecture" -> "L6"}]
Out[3]=

Pick a non-default uninitialized net:

In[4]:=
NetModel[{"YOLO V5 Trained on MS-COCO Data", "Architecture" -> "L6"}, "UninitializedEvaluationNet"]
Out[4]=
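The architectures differ mainly in network depth and width. As a rough size comparison that avoids downloading any weights (a sketch reusing the calls above), inspect the parameter counts of the uninitialized nets:

Information[NetModel["YOLO V5 Trained on MS-COCO Data", "UninitializedEvaluationNet"], "ArraysTotalElementCount"]
Information[NetModel[{"YOLO V5 Trained on MS-COCO Data", "Architecture" -> "L6"}, "UninitializedEvaluationNet"], "ArraysTotalElementCount"]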

Evaluation function

Write an evaluation function to scale the result to the input image size and suppress the least probable detections:

In[5]:=
nonMaximumSuppression = ResourceFunction["NonMaximumSuppression"];
In[6]:=
labels = {"person", "bicycle", "car", "motorcycle", "airplane", "bus",
    "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"};
In[7]:=
netevaluate[model_, img_, detectionThreshold_ : .5, overlapThreshold_ : .5] :=
  Module[{imgSize, classes, coords, obj, scores, bestClass, probable, probableClasses, probableScores, probableBoxes, h, w, max, scale, padding, nms, finals},
   imgSize = Last@NetExtract[model, {"Input", "Output"}];
   {classes, coords, obj} = Values@model[img];
   (* each class probability is rescaled with the box objectness *)
   scores = classes*obj;
   bestClass = Last@*Ordering /@ scores;
   (* filter by probability: boxes with very small objectness scores are thresholded out *)
   probable = UnitStep[obj - detectionThreshold];
   {probableClasses, probableBoxes, probableScores} = Map[Pick[#, probable, 1] &, {labels[[bestClass]], coords, obj}];
   If[Length[probableBoxes] == 0, Return[{}]];
   (* transform the network coordinates into rectangular boxes in the coordinate system of the input image *)
   {w, h} = ImageDimensions[img];
   max = Max[{w, h}];
   scale = max/imgSize;
   padding = imgSize*(1 - {w, h}/max)/2;
   probableBoxes = Apply[
     Rectangle[
       scale*({#1 - #3/2, imgSize - #2 - #4/2} - padding),
       scale*({#1 + #3/2, imgSize - #2 + #4/2} - padding)
       ] &, probableBoxes, 1];
   (* suppress highly overlapping detections; MaxOverlapFraction bounds the allowed overlap *)
   nms = nonMaximumSuppression[probableBoxes -> probableScores, "Index", MaxOverlapFraction -> overlapThreshold];
   finals = Transpose[{probableBoxes, probableClasses, probableScores}];
   Part[finals, nms]
   ];
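The coordinate transform in netevaluate undoes the letterboxing used to fit the image into the net's square input: the image is scaled so its longer side matches the input size and is padded symmetrically along the shorter one. A hypothetical worked example, assuming a 960×540 input image and the default 640×640 network input:

With[{w = 960, h = 540, imgSize = 640},
 With[{max = Max[{w, h}]},
  <|"scale" -> max/imgSize, "padding" -> imgSize*(1 - {w, h}/max)/2|>
  ]
 ]
(* <|"scale" -> 3/2, "padding" -> {0, 140}|>: network coordinates are shifted by 140 pixels of per-side vertical padding, then scaled by 3/2 back to image coordinates *)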

Basic usage

Obtain the detected bounding boxes with their corresponding classes and confidences for a given image:

In[8]:=
net = NetModel["YOLO V5 Trained on MS-COCO Data"]
Out[8]=
In[9]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/27231989-3929-44d2-abdf-3c485f6776af"]
In[10]:=
detection = netevaluate[net, testImage];

Inspect which classes are detected:

In[11]:=
classes = DeleteDuplicates@Flatten@detection[[All, 2]]
Out[11]=
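To also tally how many instances of each class were found, a small extension of the step above:

Counts[detection[[All, 2]]]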

Visualize the detection:

In[12]:=
HighlightImage[testImage, GroupBy[detection[[All, 1 ;; 2]], Last -> First]]
Out[12]=

Network result

The network computes 25,200 bounding boxes and the probability that the objects in each box are of any given class:

In[13]:=
res = NetModel["YOLO V5 Trained on MS-COCO Data"][testImage];
In[14]:=
Dimensions /@ res
Out[14]=
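For the default architecture, the box count follows from the three detection scales: the 640×640 input is processed at strides 8, 16 and 32, and every grid cell at each scale predicts three anchor boxes. A quick check of the arithmetic:

With[{imgSize = 640, strides = {8, 16, 32}, anchorsPerCell = 3},
 anchorsPerCell*Total[(imgSize/strides)^2]
 ]
(* 3*(6400 + 1600 + 400) = 25200 *)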

Rescale the bounding boxes to the coordinates of the input image and visualize them scaled by their "objectness" measures:

In[15]:=
rectangles = Block[
   {w, h, max, imgSize, scale, padding},
   {w, h} = ImageDimensions[testImage];
   max = Max[{w, h}];
   imgSize = 640; (* the net's input image size *)
   scale = max/imgSize;
   padding = imgSize*(1 - {w, h}/max)/2;
   Apply[
    Rectangle[
      scale*({#1 - #3/2, imgSize - #2 - #4/2} - padding),
      scale*({#1 + #3/2, imgSize - #2 + #4/2} - padding)
      ] &,
    res["Boxes"],
    1
    ]
   ];
In[16]:=
Graphics[
 MapThread[{EdgeForm[Opacity[Total[#1] + .01]], #2} &, {res["Objectness"], rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}]
Out[16]=

Visualize all the boxes scaled by the probability that they contain a cat:

In[17]:=
idx = Position[labels, "cat"][[1, 1]]
Out[17]=
In[18]:=
Graphics[
 MapThread[{EdgeForm[Opacity[#1 + .01]], #2} &, {res["Objectness"]*res["ClassProb"][[All, idx]], rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}
 ]
Out[18]=

Superimpose the cat prediction on top of the input received by the net:

In[19]:=
HighlightImage[testImage,
 Graphics[MapThread[{EdgeForm[{Thickness[#1/100], Opacity[(#1 + .01)/3]}], #2} &, {res["Objectness"]*res["ClassProb"][[All, idx]], rectangles}]],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Red}]}]
Out[19]=

Net information

Inspect the number of parameters of all arrays in the net:

In[20]:=
Information[
 NetModel["YOLO V5 Trained on MS-COCO Data"], "ArraysElementCounts"]
Out[20]=

Obtain the total number of parameters:

In[21]:=
Information[
 NetModel["YOLO V5 Trained on MS-COCO Data"], "ArraysTotalElementCount"]
Out[21]=

Obtain the layer type counts:

In[22]:=
Information[
 NetModel["YOLO V5 Trained on MS-COCO Data"], "LayerTypeCounts"]
Out[22]=

Display the summary graphic:

In[23]:=
Information[
 NetModel["YOLO V5 Trained on MS-COCO Data"], "SummaryGraphic"]
Out[23]=

Resource History

Reference