OpenFace Face Recognition Net Trained on CASIA-WebFace and FaceScrub Data
Released in 2015, this facial feature extractor, based on the Inception architecture, was trained to map facial images directly to 128-dimensional feature vectors. Training was performed on triplets of examples in which two examples belong to the same person and the third to a different person: embeddings of facial images sharing the same identity are optimized to lie close to each other in the feature space, while embeddings of different identities are pushed apart.
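For reference, this kind of triplet training typically minimizes a loss of the following form, where f denotes the embedding net, (a, p, n) an (anchor, positive, negative) triplet and α a margin hyperparameter whose value is not stated on this page:

    \mathcal{L}(a, p, n) = \max\bigl(0,\; \lVert f(a) - f(p)\rVert_2^2 - \lVert f(a) - f(n)\rVert_2^2 + \alpha\bigr)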
Number of layers: 172 | Parameter count: 3,743,280 | Trained size: 15 MB
Examples
Resource retrieval
Get the pre-trained net:
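A minimal sketch, assuming the resource name matches the title of this page:

    net = NetModel["OpenFace Face Recognition Net Trained on CASIA-WebFace and FaceScrub Data"]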
Basic usage
Compute a feature vector for a given image:
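Here img stands for a face image already loaded into the session (for example via Import); the net's image encoder typically handles resizing:

    features = net[img]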
Get the length of the feature vector:
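The description above states a 128-dimensional embedding, so the length should be 128:

    Length[features]
    (* 128 *)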
Use a batch of face images:
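For illustration, imgs is a placeholder list of cropped face images:

    imgs = {img1, img2, img3, img4, img5};  (* placeholder face images *)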
Compute the feature vectors for the batch of images, and obtain the dimensions of the features:
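Applying the net to a list of images evaluates it in batch; with the five placeholder images above, the result is a 5×128 array:

    features = net[imgs];
    Dimensions[features]
    (* {5, 128} *)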
Visualize the features in two and three dimensions:
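A sketch using the built-in feature space plots with the net as feature extractor:

    FeatureSpacePlot[imgs, FeatureExtractor -> net]
    FeatureSpacePlot3D[imgs, FeatureExtractor -> net]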
Obtain the five closest faces to a given one:
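One possible approach, where imgs is a gallery of faces and queryImg the face to match (both assumed placeholders), is to look up nearest neighbors in the embedding space:

    Nearest[net[imgs] -> imgs, net[queryImg], 5]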
Net information
Inspect the number of parameters of all arrays in the net:
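Assuming the net object obtained above:

    NetInformation[net, "ArraysElementCounts"]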
Obtain the total number of parameters:
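This should agree with the parameter count quoted at the top of the page:

    NetInformation[net, "ArraysTotalElementCount"]
    (* 3743280 *)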
Obtain the layer type counts:
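A tally of the layer types that make up the net:

    NetInformation[net, "LayerTypeCounts"]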
Display the summary graphic:
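The summary graphic gives an overview of the layer graph:

    NetInformation[net, "SummaryGraphic"]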
Export to MXNet
Export the net into a format that can be opened in MXNet:
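A sketch of the export call; the output file name net.json and the temporary directory are arbitrary choices:

    jsonPath = Export[FileNameJoin[{$TemporaryDirectory, "net.json"}], net, "MXNet"]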
Export also creates a net.params file containing parameters:
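The parameter file is written alongside the exported JSON file:

    paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}]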
Get the size of the parameter file:
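Using the path from the previous step:

    FileByteCount[paramPath]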
The size is similar to the byte count of the resource object:
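Assuming the same resource name as above ("ByteCount" is taken to be the relevant ResourceObject property):

    ResourceObject["OpenFace Face Recognition Net Trained on CASIA-WebFace and FaceScrub Data"]["ByteCount"]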
Requirements
Wolfram Language 11.3 (March 2018) or above
Resource History
Reference