# Wolfram Neural Net Repository

Immediate Computable Access to Neural Net Models

Detect and localize objects in an image

YOLO (You Only Look Once) Version 5 is a family of object detection models released in June 2020. It is a single-stage architecture that goes straight from image pixels to bounding-box coordinates and class probabilities. YOLO Version 5 employs the Cross Stage Partial Network (CSPNet) technique in its backbone to extract rich, informative features from the input image, uses a PANet (Path Aggregation Network) neck for feature aggregation and uses the SiLU function for its activations. Its training leverages several data augmentation techniques, such as mosaic augmentation and cutout (also used in YOLO Version 4), which help the model recognize small objects. The YOLO Version 5 S model is about 90% smaller than YOLOv4-custom (with Darknet architecture), meaning it can be deployed to embedded devices much more easily. It is also faster in both training and inference, achieving 140 frames per second in batched inference.

- Microsoft COCO, a dataset for image recognition, segmentation and captioning, consisting of more than 300,000 images overall, spanning 80 object classes.

Get the pre-trained net:

In[1]:= |

Out[1]= |
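In the Wolfram Language this is a single NetModel call; the entry name below is what this repository page is assumed to use:

```wl
(* assumed repository entry name for this model family *)
net = NetModel["YOLO V5 Detector Trained on MS-COCO Data"]
```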

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:

In[2]:= |

Out[2]= |
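A sketch of the standard NetModel property for listing a family's parameters, assuming the same entry name as above:

```wl
(* returns the parameter names, their allowed values and the defaults *)
NetModel["YOLO V5 Detector Trained on MS-COCO Data", "ParametersInformation"]
```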

Pick a non-default net by specifying the parameters:

In[3]:= |

Out[3]= |
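A sketch of selecting a family member, assuming the family is parameterized by a "Size" parameter with values such as "S", "M", "L" (the parameter name and values are assumptions; the previous step's "ParametersInformation" output shows the real ones):

```wl
(* "Size" -> "M" is an illustrative parameter combination *)
NetModel[{"YOLO V5 Detector Trained on MS-COCO Data", "Size" -> "M"}]
```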

Pick a non-default uninitialized net:

In[4]:= |

Out[4]= |
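The uninitialized version of a net is obtained with the standard "UninitializedEvaluationNet" property; the parameter combination is again illustrative:

```wl
(* same architecture, but with untrained (uninitialized) arrays *)
NetModel[{"YOLO V5 Detector Trained on MS-COCO Data", "Size" -> "M"},
 "UninitializedEvaluationNet"]
```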

Write an evaluation function to scale the result to the input image size and suppress the least probable detections:

In[5]:= |

In[6]:= |

In[7]:= |
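The repository's actual evaluation function is not reproduced here; the following is a minimal sketch of the two ingredients it needs, with illustrative helper names (`iou`, `nonMaxSuppression`). It assumes boxes given as `{xmin, ymin, xmax, ymax}` corner coordinates; rescaling to the input image then just multiplies the coordinates by the ratio of the original image size to the net's input size.

```wl
(* Intersection-over-union of two boxes given as {xmin, ymin, xmax, ymax} *)
iou[{x1_, y1_, x2_, y2_}, {u1_, v1_, u2_, v2_}] :=
 Module[{w = Max[0., Min[x2, u2] - Max[x1, u1]],
   h = Max[0., Min[y2, v2] - Max[y1, v1]], inter},
  inter = w h;
  inter/((x2 - x1) (y2 - y1) + (u2 - u1) (v2 - v1) - inter)]

(* Greedy non-maximum suppression: keep a box only if it does not overlap
   an already kept, higher-scoring box by more than the threshold *)
nonMaxSuppression[boxes_, scores_, overlapThreshold_ : 0.45] :=
 Module[{order = Reverse@Ordering[scores], keep = {}},
  Do[If[AllTrue[keep, iou[boxes[[i]], boxes[[#]]] <= overlapThreshold &],
    AppendTo[keep, i]], {i, order}];
  keep]

(* the two nearly coincident boxes collapse to the higher-scoring one *)
nonMaxSuppression[{{0, 0, 2, 2}, {0, 0, 2, 2.1}, {5, 5, 6, 6}},
 {0.9, 0.8, 0.7}]  (* -> {1, 3} *)
```

The least probable detections are suppressed before this step by simply dropping rows whose objectness falls below a confidence threshold.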

Obtain the detected bounding boxes with their corresponding classes and confidences for a given image:

In[8]:= |

Out[8]= |

In[9]:= |

In[10]:= |

Inspect which classes are detected:

In[11]:= |

Out[11]= |
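With detections represented as `{box, class, confidence}` triples (an illustrative layout), the distinct classes are just the deduplicated second column:

```wl
(* illustrative detections as {box, class, confidence} triples *)
detections = {{Rectangle[{100, 50}, {300, 400}], "cat", 0.93},
   {Rectangle[{320, 60}, {500, 380}], "dog", 0.88},
   {Rectangle[{10, 10}, {90, 120}], "cat", 0.71}};
DeleteDuplicates[detections[[All, 2]]]  (* -> {"cat", "dog"} *)
```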

Visualize the detection:

In[12]:= |

Out[12]= |
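One way to sketch such a visualization is with HighlightImage; the image and box below are placeholders standing in for the test image and the boxes returned by the evaluation function:

```wl
(* illustrative: highlight a detected box on a sample image *)
img = ExampleData[{"TestImage", "House"}];
HighlightImage[img, {Rectangle[{60, 60}, {200, 200}]}]
```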

The network computes 25,200 bounding boxes and the probability that the objects in each box are of any given class:

In[13]:= |

In[14]:= |

Out[14]= |
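The count 25,200 comes from the model's three detection scales: for a 640×640 input, YOLO Version 5 predicts 3 anchor boxes per cell on 80×80, 40×40 and 20×20 grids (strides 8, 16 and 32):

```wl
(* 3 anchors per cell on the three detection grids *)
3*(80^2 + 40^2 + 20^2)  (* -> 25200 *)
```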

Rescale the bounding boxes to the coordinates of the input image and visualize them scaled by their "objectness" measures:

In[15]:= |

In[16]:= |

Out[16]= |
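A minimal sketch of this kind of plot, with illustrative boxes and objectness scores, drawing each box at an opacity proportional to its score:

```wl
(* illustrative boxes as {{xmin, ymin}, {xmax, ymax}} corner pairs *)
boxes = {{{0, 0}, {2, 2}}, {{1, 1}, {3, 3}}, {{2, 0}, {4, 1}}};
objectness = {0.9, 0.4, 0.1};
Graphics[MapThread[{Opacity[#2], Rectangle @@ #1} &, {boxes, objectness}]]
```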

Visualize all the boxes scaled by the probability that they contain a cat:

In[17]:= |

Out[17]= |

In[18]:= |

Out[18]= |
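As a sketch, assuming the raw rows have the layout `{x, y, w, h, objectness, p1, …, p80}` and that "cat" is class 16 in the COCO ordering (both are assumptions about this net's output), the per-box cat score is the class probability weighted by the objectness:

```wl
(* out: the raw 25200 x 85 output array; catClass is an assumed index *)
catClass = 16;
catScores = out[[All, 5 + catClass]]*out[[All, 5]];
```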

Superimpose the cat prediction on top of the input received by the net:

In[19]:= |

Out[19]= |
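A sketch of one way to superimpose such a visualization, where `netInput` and `heatmap` are placeholders for the resized net input image and a rasterized rendering of the cat scores from the previous step:

```wl
(* blend a semi-transparent score overlay onto the net input *)
ImageCompose[netInput, SetAlphaChannel[heatmap, 0.5]]
```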

Inspect the number of parameters of all arrays in the net:

In[20]:= |

Out[20]= |
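This uses the standard Information property for nets (the NetModel name is the assumed one from earlier):

```wl
net = NetModel["YOLO V5 Detector Trained on MS-COCO Data"];
Information[net, "ArraysElementCounts"]
```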

Obtain the total number of parameters:

In[21]:= |

Out[21]= |
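Again via the standard Information property:

```wl
Information[net, "ArraysTotalElementCount"]
```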

Obtain the layer type counts:

In[22]:= |

Out[22]= |
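Likewise:

```wl
Information[net, "LayerTypeCounts"]
```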

Display the summary graphic:

In[23]:= |

Out[23]= |
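Likewise:

```wl
Information[net, "SummaryGraphic"]
```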

- Ultralytics
- Available from:
- Rights: GNU General Public License