Released in 2016 and based on the ResNet-101 architecture, this facial feature extractor was trained using data augmentation techniques tailored specifically for this task. Starting from the CASIA-WebFace dataset, a far greater range of per-subject appearances was obtained by synthesizing pose, shape and expression variations from each single image.
Number of layers: 345
Parameter count: 42,605,504
Trained size: 172 MB
Examples
Resource retrieval
Get the pre-trained network:
Out[1]=
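The input cell that produced this output was stripped from the page; it was likely of the following form. The resource name is an assumption inferred from the model description above:

```wl
(* Fetch the pre-trained net from the Wolfram Neural Net Repository;
   the resource name is assumed from the model description *)
net = NetModel["ResNet-101 Trained on Augmented CASIA-WebFace Data"]
```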
Basic usage
Compute a feature vector for a given image:
Get the length of the feature vector:
Out[3]=
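The two stripped input cells for this subsection likely resembled the sketch below, where `net` is the model retrieved above and `img` is any face image (both names are assumptions):

```wl
(* Apply the net to a face image to obtain its feature vector *)
feature = net[img];

(* The feature vector has a fixed length; for a ResNet-101 trunk this is
   typically 2048, but the Out[3] cell on the original page is authoritative *)
Length[feature]
```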
Use a batch of face images:
Compute the feature vectors for the batch of images, and obtain the dimensions of the features:
Out[6]=
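A sketch of the stripped batch-evaluation inputs, assuming `imgs` holds the list of face images defined in the previous step:

```wl
(* The net maps automatically over a list of images, producing one
   feature vector per image *)
features = net[imgs];

(* Dimensions gives {batch size, feature length} *)
Dimensions[features]
```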
Reduce the feature vectors to two dimensions with t-SNE:
Out[7]=

Out[8]=
Visualize the results:
Out[9]=
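The dimension-reduction and plotting inputs were likely along these lines (a sketch; `features` is the batch of feature vectors computed above):

```wl
(* Reduce the high-dimensional feature vectors to 2D using t-SNE *)
embedded = DimensionReduce[features, 2, Method -> "TSNE"];

(* Plot the 2D embedding; images of the same subject should cluster together *)
ListPlot[embedded]
```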
Obtain the five closest faces to a given one:
Out[10]=
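The nearest-neighbor query was likely built with `Nearest` in feature space, roughly as follows (`faces` and `queryFace` are assumed names for the image list and the query image):

```wl
(* Build a NearestFunction that maps feature vectors back to their images *)
nf = Nearest[net[faces] -> faces];

(* Retrieve the five faces whose features are closest to the query's *)
nf[net[queryFace], 5]
```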
Net information
Inspect the number of parameters of all arrays in the net:
Out[11]=
Obtain the total number of parameters:
Out[12]=
Obtain the layer type counts:
Out[13]=
Display the summary graphic:
Out[14]=
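These four outputs correspond to standard `NetInformation` properties; the stripped inputs were likely:

```wl
(* Number of parameters in each array of the net *)
NetInformation[net, "ArraysElementCounts"]

(* Total number of parameters *)
NetInformation[net, "ArraysTotalElementCount"]

(* Counts of each layer type *)
NetInformation[net, "LayerTypeCounts"]

(* Summary graphic of the net's structure *)
NetInformation[net, "SummaryGraphic"]
```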
Export to MXNet
Export the net into a format that can be opened in MXNet:
Out[15]=
Export also creates a net.params file containing parameters:
Out[16]=
Get the size of the parameter file:
Out[17]=
The size is similar to the byte count of the resource object:
Out[18]=
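The export steps likely followed this pattern (a sketch; the file name `net.json` is an assumption, and `Export` with the `"MXNet"` format writes the accompanying `net.params` file in the same directory):

```wl
(* Export the net in MXNet format; this writes the network topology as
   JSON and a companion .params file holding the trained parameters *)
jsonPath = Export["net.json", net, "MXNet"];

(* Locate the parameter file written alongside the JSON file *)
paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}];

(* Its size should be close to the byte count of the resource object *)
FileByteCount[paramPath]
```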
Requirements
Wolfram Language 11.2 (September 2017) or above
Resource History
Reference