YOLO V8 Detect Trained on MS-COCO Data

Detect and localize objects in an image

YOLO (You Only Look Once) Version 8 by Ultralytics is the latest version of the YOLO family of models. Unlike its predecessor, YOLO Version 5, YOLO Version 8 is an anchor-free model; like its predecessor, it was trained with mosaic augmentation. It introduces new "C2f" blocks, which employ additional dense connections between bottleneck modules. At comparable sizes, YOLO Version 8 models outperform the models from all previous versions.
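The "C2f" idea can be sketched in Wolfram Language terms. The following is a minimal, illustrative construction only (the helper names, channel counts, activation and number of bottlenecks are assumptions, not the exact Ultralytics implementation): a 1×1 convolution projects the input, the channels are split in half, a chain of residual bottlenecks processes one half, and the outputs of every bottleneck are concatenated (the "dense connections") before a final projection:

```
(* hypothetical sketch of a C2f block; not the exact repository architecture *)
convBlock[c_, k_] := NetChain[{
    ConvolutionLayer[c, k, "PaddingSize" -> (k - 1)/2],
    BatchNormalizationLayer[],
    ElementwiseLayer[# LogisticSigmoid[#] &]}]; (* SiLU activation *)

(* bottleneck: two 3x3 conv blocks with a residual connection *)
bottleneck[c_] := NetGraph[
   <|"conv1" -> convBlock[c, 3], "conv2" -> convBlock[c, 3], "add" -> TotalLayer[]|>,
   {NetPort["Input"] -> "conv1" -> "conv2", {"conv2", NetPort["Input"]} -> "add"}];

(* C2f with two bottlenecks: all intermediate outputs feed the final concatenation *)
c2f[c_] := NetGraph[
   <|"proj" -> convBlock[2 c, 1],
     "half1" -> PartLayer[1 ;; c], "half2" -> PartLayer[c + 1 ;; 2 c],
     "b1" -> bottleneck[c], "b2" -> bottleneck[c],
     "cat" -> CatenateLayer[], "out" -> convBlock[c, 1]|>,
   {"proj" -> "half1", "proj" -> "half2", "half2" -> "b1" -> "b2",
    {"half1", "half2", "b1", "b2"} -> "cat" -> "out"}];
```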

Training Set Information

Model Information

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["YOLO V8 Detect Trained on MS-COCO Data"]
Out[1]=

NetModel parameters

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:

In[3]:=
NetModel["YOLO V8 Detect Trained on MS-COCO Data", "ParametersInformation"]
Out[3]=

Pick a non-default net by specifying the parameters:

In[5]:=
NetModel[{"YOLO V8 Detect Trained on MS-COCO Data", "Size" -> "L"}]
Out[5]=

Pick a non-default uninitialized net:

In[7]:=
NetModel[{"YOLO V8 Detect Trained on MS-COCO Data", "Size" -> "L"}, "UninitializedEvaluationNet"]
Out[7]=

Evaluation function

Write an evaluation function to scale the result to the input image size and suppress the least probable detections:

In[9]:=
labels = {"person", "bicycle", "car", "motorcycle", "airplane", "bus",
    "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"};
In[10]:=
netevaluate[net_, img_, detectionThreshold_ : .25, overlapThreshold_ : .5] :=
  Module[{res, imgSize, isDetection, probableClasses, probableBoxes, h, w, max, scale, padding, nms},
   (*define the input dimensions of the net*)
   imgSize = 640;
   {w, h} = ImageDimensions[img];
   (*get the inference*)
   res = net[img];
   (*filter by probability: detections whose maximal class probability falls below the threshold are discarded*)
   isDetection = UnitStep[Max /@ res["ClassProb"] - detectionThreshold];
   {probableClasses, probableBoxes} = Map[Pick[#, isDetection, 1] &, {res["ClassProb"], res["Boxes"]}];
   If[Length[probableBoxes] == 0, Return[{}]];
   (*transform the coordinates into rectangular boxes, undoing the letterbox scaling and padding*)
   max = Max[{w, h}];
   scale = max/imgSize;
   padding = imgSize*(1 - {w, h}/max)/2;
   probableBoxes = Apply[
     Rectangle[
       scale*({#1 - #3/2, imgSize - #2 - #4/2} - padding),
       scale*({#1 + #3/2, imgSize - #2 + #4/2} - padding)] &,
     probableBoxes, 2];
   (*perform non-maximum suppression on the overlapping boxes*)
   nms = ResourceFunction["NonMaximumSuppression"][
     probableBoxes -> Max /@ probableClasses, "Index", MaxOverlapFraction -> overlapThreshold];
   <|
    "Boxes" -> probableBoxes[[nms]],
    "Classes" -> labels[[Last@*Ordering /@ Part[probableClasses, nms]]]
    |>
   ];
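The box coordinates returned by the net live in the 640×640 letterboxed input space, with the vertical axis flipped relative to image coordinates. As a worked sketch of the rescaling performed above (using a hypothetical 1024×768 input image, not one from this page):

```
(* worked example of the letterbox rescaling in netevaluate, assuming a 1024x768 image *)
imgSize = 640;
{w, h} = {1024, 768};
max = Max[{w, h}];                    (* 1024: the longer side sets the scale *)
scale = max/imgSize;                  (* 8/5: one net pixel spans 1.6 image pixels *)
padding = imgSize*(1 - {w, h}/max)/2  (* {0, 80}: the shorter side is padded by 80 net pixels per side *)
```

A detection centered at {320, 320} in net coordinates then maps back to scale*({320, 640 - 320} - {0, 80}) = {512, 384}, which is the center of the 1024×768 image, as expected.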

Basic usage

Obtain the detected bounding boxes with their corresponding classes for a given image:

In[11]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/8acba9d3-bd1e-433e-8309-df969dd2897d"]
In[12]:=
detection = netevaluate[NetModel["YOLO V8 Detect Trained on MS-COCO Data"], testImage];

The model's output is an Association containing the detected "Boxes" and "Classes":

In[13]:=
Keys[detection]
Out[13]=

The "Boxes" key is a list of Rectangle expressions corresponding to the bounding boxes of the detected objects:

In[14]:=
detection["Boxes"]
Out[14]=

The "Classes" key contains the classes of the detected objects:

In[15]:=
detection["Classes"]
Out[15]=

Visualize the detection:

In[16]:=
HighlightImage[testImage, detection["Boxes"], ImageLabels -> detection["Classes"]]
Out[16]=

Resource History

Reference