YOLO V8 Pose Trained on MS-COCO Data

Detect and localize human joints in an image

YOLO (You Only Look Once) Version 8 by Ultralytics is the latest version of the YOLO models. Unlike its predecessor, YOLO Version 5, YOLO Version 8 is an anchor-free model, directly predicting object centers rather than offsets from predefined anchor boxes; like its predecessor, it was trained with mosaic augmentation. It features new "C2f" blocks, which employ additional dense connections between bottleneck modules. Although YOLO models are historically object detection models, this model performs both object detection and human joint keypoint regression at the same time.
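The two tasks are exposed as separate output ports of the network. Once the net has been retrieved (see the examples below), a quick way to inspect them is:

Information[NetModel["YOLO V8 Pose Trained on MS-COCO Data"], "OutputPorts"]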

Training Set Information

Model Information

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["YOLO V8 Pose Trained on MS-COCO Data"]
Out[1]=

NetModel parameters

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:

In[3]:=
NetModel["YOLO V8 Pose Trained on MS-COCO Data", "ParametersInformation"]
Out[3]=

Pick a non-default net by specifying the parameters:

In[4]:=
NetModel[{"YOLO V8 Pose Trained on MS-COCO Data", "Size" -> "L"}]
Out[4]=

Pick a non-default uninitialized net:

In[25]:=
NetModel[{"YOLO V8 Pose Trained on MS-COCO Data", "Size" -> "L"}, "UninitializedEvaluationNet"]
Out[25]=

Evaluation function

Write an evaluation function to scale the result to the input image size and suppress the least probable detections:

In[27]:=
labels = {"Nose", "LeftEye", "RightEye", "LeftEar", "RightEar", "LeftShoulder", "RightShoulder", "LeftElbow", "RightElbow", "LeftWrist", "RightWrist", "LeftHip", "RightHip", "LeftKnee", "RightKnee", "LeftAnkle", "RightAnkle"};
In[28]:=
netevaluate[net_, img_, detectionThreshold_ : .25, overlapThreshold_ : .5] := Module[
   {imgSize, w, h, probableObj, probableBoxes, probableScores, probableKeypoints, max, scale, padx, pady, results, nms, x1, y1, x2, y2},
   (*define the network input size and get the image dimensions*)
   imgSize = 640;
   {w, h} = ImageDimensions[img];
   (*get the inference*)
   results = net[img];
   (*threshold out detections with very small probabilities*)
   probableObj = UnitStep[results["Objectness"] - detectionThreshold];
   {probableBoxes, probableScores, probableKeypoints} = Map[Pick[#, probableObj, 1] &, {results["Boxes"], results["Objectness"], results["KeyPoints"]}];
   If[Or[Length[probableBoxes] == 0, Length[probableKeypoints] == 0], Return[{}]];
   (*compute the scale and padding of the letterboxed network input*)
   max = Max[{w, h}];
   scale = max/imgSize;
   {padx, pady} = imgSize*(1 - {w, h}/max)/2;
   (*transform the keypoint coordinates to fit the input image size*)
   probableKeypoints = Apply[
     {
       {
        Clip[Floor[scale*(#1 - padx)], {1, w}],
        Clip[Floor[scale*(imgSize - #2 - pady)], {1, h}]
        },
       #3
       } &,
     probableKeypoints, {2}
     ];
   (*transform the box coordinates into rectangles*)
   probableBoxes = Apply[
     (
       x1 = Clip[Floor[scale*(#1 - #3/2 - padx)], {1, w}];
       y1 = Clip[Floor[scale*(imgSize - #2 - #4/2 - pady)], {1, h}];
       x2 = Clip[Floor[scale*(#1 + #3/2 - padx)], {1, w}];
       y2 = Clip[Floor[scale*(imgSize - #2 + #4/2 - pady)], {1, h}];
       Rectangle[{x1, y1}, {x2, y2}]
       ) &, probableBoxes, 1
     ];
   (*perform non-maximum suppression on the remaining boxes*)
   nms = ResourceFunction["NonMaximumSuppression"][probableBoxes -> probableScores, "Index", MaxOverlapFraction -> overlapThreshold];
   results = Association[];
   results["ObjectDetection"] = Part[Transpose[{probableBoxes, probableScores}], nms];
   results["KeypointEstimation"] = Part[probableKeypoints, nms][[All, All, 1]];
   results["KeypointConfidence"] = Part[probableKeypoints, nms][[All, All, 2]];
   results
   ];
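The scale and padding values invert the letterboxing implied by the coordinate transforms above: the image is scaled so that its longer side spans the 640-pixel input, and the shorter side is padded symmetrically. A minimal check of that arithmetic, using a hypothetical 1280×960 image:

With[{imgSize = 640, w = 1280, h = 960},
 (*scale factor and {padx, pady}; expect {2, {0, 80}}*)
 {Max[w, h]/imgSize, imgSize*(1 - {w, h}/Max[w, h])/2}
]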

Basic usage

Obtain the detected bounding boxes with their corresponding confidences, as well as the locations of the human joints, for a given image:

In[29]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/340b96f9-6411-4a10-8910-8e3100b9ebae"]
In[30]:=
predictions = netevaluate[NetModel["YOLO V8 Pose Trained on MS-COCO Data"], testImage];

Inspect the prediction keys:

In[31]:=
Keys[predictions]
Out[31]=

The "ObjectDetection" key contains the coordinates of the detected objects as well as their confidences and classes:

In[32]:=
predictions["ObjectDetection"]
Out[32]=

The "KeypointEstimation" key contains the locations of the top predicted keypoints:

In[33]:=
predictions["KeypointEstimation"]
Out[33]=

The "KeypointConfidence" key contains the confidences for each person’s keypoints:

In[34]:=
predictions["KeypointConfidence"]
Out[34]=

Extract the predicted keypoint locations:

In[35]:=
keypoints = predictions["KeypointEstimation"];

Visualize the keypoints:

In[36]:=
HighlightImage[testImage, keypoints]
Out[36]=

Visualize the keypoints grouped by person:

In[37]:=
HighlightImage[testImage, AssociationThread[Range[Length[keypoints]] -> keypoints], ImageLabels -> None]
Out[37]=

Visualize the keypoints grouped by keypoint type:

In[38]:=
HighlightImage[testImage, AssociationThread[
  Range[Length[Transpose@keypoints]] -> Transpose@keypoints], ImageLabels -> None, ImageLegends -> labels]
Out[38]=

Define a function to combine the keypoints into a skeleton shape:

In[39]:=
getSkeleton[personKeypoints_] := Line[DeleteMissing[
   Map[personKeypoints[[#]] &, {{1, 2}, {1, 3}, {2, 4}, {3, 5}, {1, 6}, {1, 7}, {6, 8}, {8, 10}, {7, 9}, {9, 11}, {6, 7}, {6, 12}, {7, 13}, {12, 13}, {12, 14}, {14, 16}, {13, 15}, {15, 17}}], 1, 2]]
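The hard-coded index pairs correspond to connections between the joints in the labels list defined earlier. As a purely illustrative sanity check, the connections can be displayed by joint name:

skeletonPairs = {{1, 2}, {1, 3}, {2, 4}, {3, 5}, {1, 6}, {1, 7}, {6, 8}, {8, 10}, {7, 9}, {9, 11}, {6, 7}, {6, 12}, {7, 13}, {12, 13}, {12, 14}, {14, 16}, {13, 15}, {15, 17}};
(*e.g. {"Nose", "LeftEye"}, {"LeftEye", "LeftEar"}, ...*)
Map[labels[[#]] &, skeletonPairs]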

Visualize the pose keypoints, object detections and human skeletons:

In[40]:=
HighlightImage[testImage,
 AssociationThread[Range[Length[#]] -> #] & /@ {keypoints, Map[getSkeleton, keypoints], predictions["ObjectDetection"][[;; , 1]]},
 ImageLabels -> None
 ]
Out[40]=

Network result

The network computes 8,400 candidate bounding boxes, the positions of the keypoints with their confidences and the probability that each box contains an object:

In[41]:=
res = NetModel["YOLO V8 Pose Trained on MS-COCO Data"][testImage];
In[42]:=
Dimensions /@ res
Out[42]=
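The count of 8,400 candidates follows from YOLO's three detection scales, which use strides of 8, 16 and 32 on the 640×640 input, giving 80² + 40² + 20² grid cells. A quick check of the arithmetic:

(*grid cells across the three detection scales: 6400 + 1600 + 400*)
Total[(640/{8, 16, 32})^2]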

Rescale the "KeyPoints" to the coordinates of the input image and visualize them scaled and colored by their probability measures:

In[43]:=
imgSize = 640;
{w, h} = ImageDimensions[testImage];
max = Max[{w, h}];
scale = max/imgSize;
{padx, pady} = imgSize*(1 - {w, h}/max)/2;
heatpoints = Flatten[Apply[
    {
      {Clip[Floor[scale*(#1 - padx)], {1, w}],
        Clip[Floor[scale*(imgSize - #2 - pady)], {1, h}]
        } ->
       ColorData["TemperatureMap"][#3]
      } &,
    res["KeyPoints"], {2}
    ]];
In[44]:=
heatmap = ReplaceImageValue[ConstantImage[1, {w, h}], heatpoints]
Out[44]=

Overlay the heat map on the image:

In[45]:=
ImageCompose[testImage, {heatmap, 0.6}]
Out[45]=

Rescale the bounding boxes to the coordinates of the input image and visualize them scaled by their "Objectness" measures:

In[46]:=
boxes = Apply[
   (
     x1 = Clip[Floor[scale*(#1 - #3/2 - padx)], {1, w}];
     y1 = Clip[Floor[scale*(imgSize - #2 - #4/2 - pady)], {1, h}];
     x2 = Clip[Floor[scale*(#1 + #3/2 - padx)], {1, w}];
     y2 = Clip[Floor[scale*(imgSize - #2 + #4/2 - pady)], {1, h}];
     Rectangle[{x1, y1}, {x2, y2}]
     ) &, res["Boxes"], 1
   ];
In[47]:=
Graphics[
 MapThread[{EdgeForm[Opacity[Total[#1] + .01]], #2} &, {res[
    "Objectness"], boxes}], BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}]
Out[47]=

Superimpose the predictions on top of the input received by the net:

In[48]:=
HighlightImage[testImage, Graphics[
  MapThread[{EdgeForm[Opacity[Total[#1] + .01]], #2} &, {res[
     "Objectness"], boxes}], BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}], BaseStyle -> {FaceForm[], EdgeForm[{Thin, Red}]}]
Out[48]=

Net information

Inspect the number of parameters of all arrays in the net:

In[49]:=
Information[
 NetModel[
  "YOLO V8 Pose Trained on MS-COCO Data"], "ArraysElementCounts"]
Out[49]=

Obtain the total number of parameters:

In[51]:=
Information[
 NetModel[
  "YOLO V8 Pose Trained on MS-COCO Data"], "ArraysTotalElementCount"]
Out[51]=

Obtain the layer type counts:

In[53]:=
Information[
 NetModel["YOLO V8 Pose Trained on MS-COCO Data"], "LayerTypeCounts"]
Out[53]=

Display the summary graphic:

In[55]:=
Information[
 NetModel["YOLO V8 Pose Trained on MS-COCO Data"], "SummaryGraphic"]
Out[55]=

Export to ONNX

Export the net to the ONNX format:

In[57]:=
onnxFile = Export[FileNameJoin[{$TemporaryDirectory, "net.onnx"}], NetModel["YOLO V8 Pose Trained on MS-COCO Data"]]
Out[57]=

Get the size of the ONNX file:

In[59]:=
FileByteCount[onnxFile]
Out[59]=

The size is similar to the byte count of the resource object:

In[60]:=
NetModel["YOLO V8 Pose Trained on MS-COCO Data", "ByteCount"]
Out[60]=

Check some metadata of the ONNX model:

In[62]:=
{opsetVersion, irVersion} = {Import[onnxFile, "OperatorSetVersion"], Import[onnxFile, "IRVersion"]}
Out[62]=

Import the model back into Wolfram Language. However, the NetEncoder and NetDecoder will be absent because they are not supported by ONNX:

In[63]:=
Import[onnxFile]
Out[63]=
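To evaluate the imported net directly on images, an image NetEncoder can be reattached. A minimal sketch, assuming the imported net's input port is named "Input" (note that a plain image encoder resizes without the letterbox padding handled by the evaluation function above):

imported = Import[onnxFile];
(*attach a 640x640 RGB image encoder to the assumed "Input" port*)
NetReplacePart[imported, "Input" -> NetEncoder[{"Image", {640, 640}}]]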

Resource History

Reference