Wolfram Computation Meets Knowledge

YOLO V2 Trained on MS-COCO Data

Detect and localize objects in an image

YOLO (You Only Look Once) Version 2 is an object detection model published by Joseph Redmon and Ali Farhadi in December 2016. It is a single-stage architecture that goes straight from image pixels to bounding box coordinates and class probabilities. Compared to its predecessor, it introduces batch normalization, raises the input resolution and switches from directly predicting box coordinates to predicting offsets relative to a set of anchor boxes. On an NVIDIA Titan X, it processes images at 40-90 FPS.

Number of layers: 106 | Parameter count: 51,000,657 | Trained size: 205 MB
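
The anchor-box parametrization can be sketched with the decoding equations from the YOLO v2 paper. The following helper is purely illustrative (decodeBox is not part of this model's code): tx, ty, tw, th are the raw network outputs for one box, {cx, cy} is the offset of the grid cell and {pw, ph} are the prior (anchor) dimensions:

decodeBox[{tx_, ty_, tw_, th_}, {cx_, cy_}, {pw_, ph_}] := {
  LogisticSigmoid[tx] + cx, (* box center x, in grid cell units *)
  LogisticSigmoid[ty] + cy, (* box center y *)
  pw Exp[tw],               (* box width *)
  ph Exp[th]                (* box height *)
 }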

Training Set Information

Performance

Examples

Resource retrieval

Retrieve the resource object:

In[1]:=
ResourceObject["YOLO V2 Trained on MS-COCO Data"]
Out[1]=

Get the pre-trained net:

In[2]:=
NetModel["YOLO V2 Trained on MS-COCO Data"]
Out[2]=

Evaluation function

Write an evaluation function to scale the result to the input image size and suppress the least probable detections:

In[3]:=
(* greedily keep the most confident detections, discarding any box that
   overlaps an already kept box beyond the threshold *)
nonMaxSuppression[overlapThreshold_][detection_] :=
 Fold[
  {list, new} \[Function]
   If[NoneTrue[list[[All, 1]], IoU[#, new[[1]]] > overlapThreshold &],
    Append[list, new],
    list],
  (* visit detections in order of decreasing confidence *)
  Sequence @@ TakeDrop[SortBy[detection, -Last[#] &], 1]
 ]
IoU := Area[RegionIntersection[#1, #2]]/Area[RegionUnion[#1, #2]] &
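
As a quick sanity check on hypothetical data (three detections of the same class, the first two heavily overlapping), greedy suppression should keep only the most confident box of each overlapping group:

nonMaxSuppression[.4][{
   {Rectangle[{0, 0}, {10, 10}], 1, .9},
   {Rectangle[{1, 1}, {11, 11}], 1, .8},
   {Rectangle[{50, 50}, {60, 60}], 1, .7}
  }]

The first two boxes have IoU of about 0.68, above the 0.4 threshold, so the second is suppressed and the result contains the .9 and .7 detections.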
In[4]:=
netevaluate[img_, detectionThreshold_ : .2, overlapThreshold_ : .4] :=
 Module[
  {w, h, scale, padding, coords, obj, classes, boxes, classProb,
   bestClass, probable},

  {coords, classes, obj} =
   Values@NetModel["YOLO V2 Trained on MS-COCO Data"][img];

  (* transform the net coordinates into rectangular boxes
     in the input image coordinate system *)
  {w, h} = ImageDimensions[img];
  scale = Max[{w, h}]/416;
  padding = 416 (1 - {w, h}/Max[{w, h}])/2;
  boxes = Apply[
    Rectangle[
      MapThread[Clip,
       {scale ({#1 - #3/2, 416 - #2 - #4/2} - padding), {{0, w}, {0, h}}}],
      MapThread[Clip,
       {scale ({#1 + #3/2, 416 - #2 + #4/2} - padding), {{0, w}, {0, h}}}]
     ] &,
    416 coords,
    1];

  (* each class probability is rescaled with the box objectness *)
  classProb = classes*obj;

  (* filter by probability: very small probabilities are thresholded *)
  probable =
   Position[Max /@ classProb, p_ /; p - 10^-2 > detectionThreshold];
  If[Length[probable] == 0, Return[{}]];

  (* gather the boxes of the same class and perform non-max suppression *)
  bestClass = Last@*Ordering /@ classProb;
  Join @@ Values@GroupBy[
     MapThread[
      {#1, #2, #3[[#2]]} &,
      {Extract[boxes, probable], Extract[bestClass, probable],
       Extract[classProb, probable]}
      ],
     #[[2]] &, nonMaxSuppression[overlapThreshold]
    ]
  ]

Label list

Define the label list for this model. Integers in the model’s output correspond to elements in the label list:

In[5]:=
labels = {"person", "bicycle", "car", "motorcycle", "airplane", "bus",
    "train", "truck", "boat", "traffic light", "fire hydrant", 
   "stop sign", "parking meter", "bench", "bird", "cat", "dog", 
   "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", 
   "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", 
   "skis", "snowboard", "sports ball", "kite", "baseball bat", 
   "baseball glove", "skateboard", "surfboard", "tennis racket", 
   "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", 
   "banana", "apple", "sandwich", "orange", "broccoli", "carrot", 
   "hot dog", "pizza", "donut", "cake", "chair", "couch", 
   "potted plant", "bed", "dining table", "toilet", "tv", "laptop", 
   "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", 
   "toaster", "sink", "refrigerator", "book", "clock", "vase", 
   "scissors", "teddy bear", "hair drier", "toothbrush"};

Basic usage

Obtain the detected bounding boxes with their corresponding classes and confidences for a given image:

In[6]:=
CloudGet["https://www.wolframcloud.com/objects/e417bf68-de8b-4b66-be56-5853a16d0df1"] (* Evaluate this cell to copy the example input from a cloud object *)
In[7]:=
detection = netevaluate[testImage]
Out[7]=

Inspect which classes are detected:

In[8]:=
classes = DeleteDuplicates@detection[[All, 2]]
Out[8]=
In[9]:=
labels[[classes]]
Out[9]=

Visualize the detection:

In[10]:=
HighlightImage[testImage, 
 MapThread[{White, 
    Inset[Style[labels[[#2]], Black, FontSize -> Scaled[1/12], 
      Background -> GrayLevel[1, .6]], Last[#1], {Right, Top}], #1} &,
   Transpose@detection]]
Out[10]=

Network result

The network computes 845 bounding boxes, the probability of having an object in each box and the conditional probability that the object is of any given class:
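
The count of 845 follows from the 13×13 output grid with five anchor boxes per grid cell:

13^2*5

which evaluates to 845.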

In[11]:=
res = NetModel["YOLO V2 Trained on MS-COCO Data"][testImage]
Out[11]=
In[12]:=
Dimensions /@ res
Out[12]=

Visualize all the boxes predicted by the net scaled by their “objectness” measures:

In[13]:=
rectangles = 
  Apply[Rectangle[{#1 - #3/2, 1 - #2 - #4/2}, {#1 + #3/2, 
      1 - #2 + #4/2}] &, res["Boxes"], 1];
In[14]:=
Graphics[
 MapThread[{EdgeForm[Opacity[#1 + .01]], #2} &, {res["Objectness"], 
   rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}
 ]
Out[14]=

Visualize all the boxes scaled by the probability that they contain a cat:

In[15]:=
idx = Position[labels, "cat"][[1, 1]]
Out[15]=
In[16]:=
Graphics[
 MapThread[{EdgeForm[Opacity[#1 + .01]], #2} &, {res[
     "Objectness"] Extract[res["ClassProb"], {All, idx}], rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Black}]}
 ]
Out[16]=

Superimpose the cat prediction on top of the scaled input received by the net:

In[17]:=
HighlightImage[
 Image[NetExtract[NetModel["YOLO V2 Trained on MS-COCO Data"], 
    "Input"][testImage], Interleaving -> False],
 MapThread[{EdgeForm[{Thickness[#1/100], 
      Opacity[#1 + .1]}], #2} &, {res["Objectness"] Extract[
     res["ClassProb"], {All, idx}], rectangles}],
 BaseStyle -> {FaceForm[], EdgeForm[{Thin, Red}]},
 DataRange -> {{0, 1}, {0, 1}}
 ]
Out[17]=

Advanced visualization

Write a function to apply a custom styling to the result of the detection:

In[18]:=
styleDetection[detection_] :=
 Values@GroupBy[detection, #1[[2]] &,
   With[{c = RandomColor[]},
     ({c, #1[[1]],
        Inset[Style[labels[[#1[[2]]]], FontSize -> Scaled[1/28], Black],
         Last[#1[[1]]], {Right, Top},
         Background -> GrayLevel[1, .8]]} &) /@ #1] &];

Visualize multiple objects, using a different color for each class:

In[19]:=
CloudGet["https://www.wolframcloud.com/objects/884f5bd5-0f33-43f2-9764-e07532c94417"] (* Evaluate this cell to copy the example input from a cloud object *)
In[20]:=
HighlightImage[image, styleDetection[netevaluate[image, 0.1, 1]]]
Out[20]=

Net information

Inspect the number of parameters of all arrays in the net:

In[21]:=
NetInformation[
 NetModel["YOLO V2 Trained on MS-COCO Data"], "ArraysElementCounts"]
Out[21]=

Obtain the total number of parameters:

In[22]:=
NetInformation[
 NetModel["YOLO V2 Trained on MS-COCO Data"], "ArraysTotalElementCount"]
Out[22]=

Obtain the layer type counts:

In[23]:=
NetInformation[
 NetModel["YOLO V2 Trained on MS-COCO Data"], "LayerTypeCounts"]
Out[23]=

Display the summary graphic:

In[24]:=
NetInformation[
 NetModel["YOLO V2 Trained on MS-COCO Data"], "SummaryGraphic"]
Out[24]=

Export to MXNet

Export the net into a format that can be opened in MXNet:

In[25]:=
jsonPath = 
 Export[FileNameJoin[{$TemporaryDirectory, "net.json"}], 
  NetModel["YOLO V2 Trained on MS-COCO Data"], "MXNet"]
Out[25]=

Export also creates a net.params file containing parameters:

In[26]:=
paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}]
Out[26]=

Get the size of the parameter file:

In[27]:=
FileByteCount[paramPath]
Out[27]=

The size is similar to the byte count of the resource object:

In[28]:=
ResourceObject["YOLO V2 Trained on MS-COCO Data"]["ByteCount"]
Out[28]=
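
As a rough cross-check, 51,000,657 parameters stored as 32-bit floats (4 bytes each) amount to:

51000657*4

about 2.04*10^8 bytes, in line with the 205 MB trained size quoted above.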

Requirements

Wolfram Language 11.3 (March 2018) or above

Resource History

Reference