ResNet-101 Trained on Augmented CASIA-WebFace Data
Released in 2016 and based on the ResNet-101 architecture, this facial feature extractor was trained using data augmentation techniques tailored for this task. Starting from the CASIA-WebFace dataset, far greater per-subject appearance variation was achieved by synthesizing pose, shape and expression variations from each original image.
Number of layers: 345 | Parameter count: 42,605,504 | Trained size: 172 MB
Examples
Resource retrieval
Get the pre-trained network:
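A minimal sketch, assuming the resource name matches the page title:

net = NetModel["ResNet-101 Trained on Augmented CASIA-WebFace Data"]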
Basic usage
Compute a feature vector for a given image:
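For illustration, assuming img is a face image supplied by the user (e.g. loaded with Import), applying the net returns its feature vector:

features = net[img]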
Get the length of the feature vector:
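Continuing the sketch above:

Length[features]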
Use a batch of face images:
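One way to assemble a batch, assuming a hypothetical directory "faces" containing JPEG face images:

faces = Import /@ FileNames["*.jpg", "faces"];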
Compute the feature vectors for the batch of images, and obtain the dimensions of the features:
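Applying the net to the whole list evaluates it as a batch; a sketch using the faces list above:

features = net[faces];
Dimensions[features]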
Reduce the feature vectors to two dimensions with t-SNE:
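A sketch using DimensionReduce with the t-SNE method:

reduced = DimensionReduce[features, 2, Method -> "TSNE"];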
Visualize the results:
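For example, the reduced coordinates can be shown as a scatter plot:

ListPlot[reduced, AspectRatio -> 1]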
Obtain the five closest faces to a given one:
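A sketch using Nearest on the precomputed feature vectors, assuming queryFace is one of the user's face images:

nf = Nearest[features -> faces];
nf[net[queryFace], 5]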
Net information
Inspect the number of parameters of all arrays in the net:
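A sketch using NetInformation, assuming the net from the retrieval step above:

NetInformation[net, "ArraysElementCounts"]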
Obtain the total number of parameters:
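Similarly, using the corresponding NetInformation property:

NetInformation[net, "ArraysTotalElementCount"]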
Obtain the layer type counts:
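For example:

NetInformation[net, "LayerTypeCounts"]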
Display the summary graphic:
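For example:

NetInformation[net, "SummaryGraphic"]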
Export to MXNet
Export the net into a format that can be opened in MXNet:
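A sketch, assuming the current directory is writable; exporting in the "MXNet" format writes a net.json file describing the architecture:

jsonPath = Export["net.json", net, "MXNet"]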
Export also creates a net.params file containing parameters:
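Continuing the sketch, the parameter file is written next to the exported JSON file:

paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}]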
Get the size of the parameter file:
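For example:

FileByteCount[paramPath]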
The size is similar to the byte count of the resource object:
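A sketch, assuming the resource object exposes a "ByteCount" property:

ResourceObject["ResNet-101 Trained on Augmented CASIA-WebFace Data"]["ByteCount"]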
Requirements
Wolfram Language 11.2 (September 2017) or above
Resource History
Reference