YOLO V2 Trained on MS-COCO Data

Detect and localize objects in an image

YOLO (You Only Look Once) Version 2 is an object detection model published by Joseph Redmon and Ali Farhadi in December 2016. It is a single-stage architecture that goes straight from image pixels to bounding box coordinates and class probabilities. Compared to its predecessor, it introduces batch normalization, increases the input image resolution and switches from predicting box coordinates directly to predicting offsets relative to predefined anchor boxes. On an NVIDIA Titan X, it processes images at 40-90 FPS.
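For each anchor box, the network predicts offsets (tx, ty, tw, th) that are decoded against the position (cx, cy) of the grid cell and the dimensions (pw, ph) of the anchor prior. The pre-trained net retrieved below performs this decoding internally; the following function is only an illustrative sketch of the decoding formulas from the paper:

decodeBox[{tx_, ty_, tw_, th_}, {cx_, cy_}, {pw_, ph_}] := {
  	LogisticSigmoid[tx] + cx, (* box center x, in grid-cell units *)
  	LogisticSigmoid[ty] + cy, (* box center y, in grid-cell units *)
  	pw Exp[tw], (* box width *)
  	ph Exp[th]  (* box height *)
  }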

Number of layers: 106 | Parameter count: 51,000,657 | Trained size: 205 MB

Training Set Information

Performance

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["YOLO V2 Trained on MS-COCO Data"]
Out[1]=

Evaluation function

Write an evaluation function to scale the result to the input image size and suppress the least probable detections:

In[2]:=
nonMaxSuppression[overlapThreshold_][detection_] := Fold[
  	(* greedily keep a new box only if it does not overlap too much with an already accepted one *)
  	{list, new} |-> If[
    		NoneTrue[list[[All, 1]], IoU[#, new[[1]]] > overlapThreshold &],
    		Append[list, new],
    		list
    	],
  	Sequence @@ TakeDrop[Reverse@SortBy[detection, Last], 1]
  ]
ClearAll[IoU]
(* intersection over union of two boxes, each given as a Rectangle or as {{xmin, ymin}, {xmax, ymax}} *)
IoU := IoU = With[{c = Compile[
      	{{box1, _Real, 2}, {box2, _Real, 2}},
      	Module[{area1, area2, x1, y1, x2, y2, w, h, int},
       		area1 = (box1[[2, 1]] - box1[[1, 1]]) (box1[[2, 2]] - box1[[1, 2]]);
       		area2 = (box2[[2, 1]] - box2[[1, 1]]) (box2[[2, 2]] - box2[[1, 2]]);
       		x1 = Max[box1[[1, 1]], box2[[1, 1]]];
       		y1 = Max[box1[[1, 2]], box2[[1, 2]]];
       		x2 = Min[box1[[2, 1]], box2[[2, 1]]];
       		y2 = Min[box1[[2, 2]], box2[[2, 2]]];
       		w = Max[0., x2 - x1];
       		h = Max[0., y2 - y1];
       		int = w*h;
       		int/(area1 + area2 - int)
       	],
      	RuntimeAttributes -> {Listable},
      	Parallelization -> True,
      	RuntimeOptions -> "Speed"
      ]},
   	c @@ Replace[{##}, Rectangle -> List, Infinity, Heads -> True] &
   ]
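As a sanity check, apply IoU to two hypothetical 2x2 boxes overlapping in a 1x1 region; the intersection is 1 and the union is 4 + 4 - 1 = 7, so the expected value is 1/7 ≈ 0.143:

IoU[Rectangle[{0., 0.}, {2., 2.}], Rectangle[{1., 1.}, {3., 3.}]]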
In[3]:=
netevaluate[img_, detectionThreshold_ : .2, overlapThreshold_ : .4] := Module[
  	{w, h, scale, coords, obj, classes, boxes, padding, classProb, bestClass, probable, finals},
  	{coords, classes, obj} = Values@NetModel["YOLO V2 Trained on MS-COCO Data"][img];
  	(* transform the net coordinates into rectangular boxes in the input image coordinate system *)
  	{w, h} = ImageDimensions[img];
  	scale = Max[{w, h}]/416;
  	padding = 416 (1 - {w, h}/Max[{w, h}])/2;
  	boxes = Apply[
    		Rectangle[
      			scale {#1 - #3/2 - padding[[1]], 416 - #2 - #4/2 - padding[[2]]},
      			scale {#1 + #3/2 - padding[[1]], 416 - #2 + #4/2 - padding[[2]]}
      		] &,
    		416 coords,
    		1];
  	(* rescale each class probability by the box objectness *)
  	classProb = classes*obj;
  	(* threshold away the least probable detections *)
  	probable = Position[Max /@ classProb, p_ /; p - 10^-2 > detectionThreshold];
  	If[Length[probable] == 0, Return[{}]];
  	(* gather the boxes of the same class and perform non-max suppression *)
  	bestClass = Last@*Ordering /@ classProb;
  	finals = Join @@ Values@GroupBy[
      		MapThread[
        			{#1, #2, #3[[#2]]} &,
        			{Extract[boxes, probable], Extract[bestClass, probable], Extract[classProb, probable]}
        		],
      		#[[2]] &,
      		nonMaxSuppression[overlapThreshold]
      	]
  ]

Label list

Define the label list for this model. Integers in the model’s output correspond to elements in the label list:

In[4]:=
labels = {"person", "bicycle", "car", "motorcycle", "airplane", "bus",
    "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"};

Basic usage

Obtain the detected bounding boxes with their corresponding classes and confidences for a given image:

In[5]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/4c57adde-ede2-49aa-9324-d824b6dd3935"]
In[6]:=
detection = netevaluate[testImage]
Out[6]=
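Each detection is a triple {box, class index, confidence}. For example, one can keep only the most confident detections (a minimal sketch relying on the format returned by netevaluate above):

Select[detection, Last[#] > 0.5 &]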

Inspect which classes are detected:

In[7]:=
classes = DeleteDuplicates@detection[[All, 2]]
Out[7]=
In[8]:=
labels[[classes]]
Out[8]=

Visualize the detection:

In[9]:=
HighlightImage[testImage, MapThread[{White, Inset[Style[labels[[#2]], Black, FontSize -> Scaled[1/12], Background -> GrayLevel[1, .6]], Last[#1], {Right, Top}], #1} &,
   Transpose@detection]]
Out[9]=

Network result

The network computes 845 bounding boxes, the probability of having an object in each box and the conditional probability that the object is of any given class:

In[10]:=
res = NetModel["YOLO V2 Trained on MS-COCO Data"][testImage]
Out[10]=
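The 845 boxes correspond to a 13x13 output grid (for the 416x416 input) with five anchor boxes per grid cell:

(* 13x13 grid cells, each predicting 5 anchor boxes *)
13*13*5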

Visualize all the boxes predicted by the net scaled by their “objectness” measures:

In[11]:=
rectangles = Apply[Rectangle[{#1 - #3/2, 1 - #2 - #4/2}, {#1 + #3/2, 1 - #2 + #4/2}] &, res["Boxes"], 1];
In[12]:=
Graphics[
 MapThread[{EdgeForm[Opacity[#1 + .01]], #2} &, {res["Objectness"], rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}
 ]
Out[12]=

Visualize all the boxes scaled by the probability that they contain a cat:

In[13]:=
idx = Position[labels, "cat"][[1, 1]]
Out[13]=
In[14]:=
Graphics[
 MapThread[
  	{EdgeForm[Opacity[#1 + .01]], #2} &,
  	{res["Objectness"] Extract[res["ClassProb"], {All, idx}], rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}
 ]
Out[14]=

Superimpose the cat prediction on top of the scaled input received by the net:

In[15]:=
HighlightImage[
 Image[NetExtract[NetModel["YOLO V2 Trained on MS-COCO Data"], "Input"][testImage], Interleaving -> False],
 MapThread[
  	{EdgeForm[{Thickness[#1/100], Opacity[#1 + .1]}], #2} &,
  	{res["Objectness"] Extract[res["ClassProb"], {All, idx}], rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Red}]},
 DataRange -> {{0, 1}, {0, 1}}
 ]
Out[15]=

Advanced visualization

Write a function to apply a custom styling to the result of the detection:

In[16]:=
styleDetection[detection_] := Values[GroupBy[
    	detection,
    	#1[[2]] &,
    	With[{c = RandomColor[]},
      		({c, #1[[1]], Inset[
            			Style[labels[[#1[[2]]]], FontSize -> Scaled[1/28], Black],
            			Last[#1[[1]]],
            			{Right, Top},
            			Background -> GrayLevel[1, .8]
            		]} &) /@ #1
      	] &
    ]];

Visualize multiple objects, using a different color for each class:

In[17]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/c983e8c9-e09d-4f1c-9796-c75b3317ee47"]
In[18]:=
HighlightImage[image, styleDetection[netevaluate[image, 0.2, .4]]]
Out[18]=

Net information

Inspect the number of parameters of all arrays in the net:

In[19]:=
NetInformation[
 NetModel["YOLO V2 Trained on MS-COCO Data"], "ArraysElementCounts"]
Out[19]=

Obtain the total number of parameters:

In[20]:=
NetInformation[
 NetModel["YOLO V2 Trained on MS-COCO Data"], "ArraysTotalElementCount"]
Out[20]=

Obtain the layer type counts:

In[21]:=
NetInformation[
 NetModel["YOLO V2 Trained on MS-COCO Data"], "LayerTypeCounts"]
Out[21]=

Display the summary graphic:

In[22]:=
NetInformation[
 NetModel["YOLO V2 Trained on MS-COCO Data"], "SummaryGraphic"]
Out[22]=

Export to MXNet

Export the net into a format that can be opened in MXNet:

In[23]:=
jsonPath = Export[FileNameJoin[{$TemporaryDirectory, "net.json"}], NetModel["YOLO V2 Trained on MS-COCO Data"], "MXNet"]
Out[23]=

Export also creates a net.params file containing parameters:

In[24]:=
paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}]
Out[24]=

Get the size of the parameter file:

In[25]:=
FileByteCount[paramPath]
Out[25]=

The size is similar to the byte count of the resource object:

In[26]:=
ResourceObject["YOLO V2 Trained on MS-COCO Data"]["ByteCount"]
Out[26]=

Requirements

Wolfram Language 11.3 (March 2018) or above

Resource History

Reference

J. Redmon, A. Farhadi, "YOLO9000: Better, Faster, Stronger," arXiv:1612.08242 (2016)