2D Face Alignment Net Trained on 300W Large Pose Data

Determine the locations of keypoints from a facial image

This model is also available through the built-in function FacialFeatures

Developed in 2017 at the Computer Vision Laboratory of the University of Nottingham, this net predicts the locations of 68 2D keypoints (17 for the face contour, 10 for the eyebrows, 9 for the nose, 12 for the eyes and 20 for the mouth) in a facial image. For each keypoint, the net produces a heat map encoding its location. Its architecture combines hourglass modules with multiscale parallel blocks.

Number of layers: 967 | Parameter count: 23,874,320 | Trained size: 97 MB

Training Set Information

Performance

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["2D Face Alignment Net Trained on 300W Large Pose Data"]
Out[1]=

Basic usage

This net outputs a 64×64 heat map for each of the 68 landmarks:

In[2]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/8e5bda12-6fb3-41c3-b8b4-6143a4473f94"]

Obtain the dimensions of the heat map:

In[3]:=
Dimensions[heatmaps]
Out[3]=

Visualize heat maps 1, 12 and 29:

In[4]:=
MatrixPlot /@ heatmaps[[{1, 12, 29}]]
Out[4]=

Evaluation function

Write an evaluation function that picks the maximum position of each heat map and returns a list of landmark positions:

In[5]:=
netevaluation[img_] := Block[
  {heatmaps, posFlattened, posMat},
  (* evaluate the net: 68 heat maps of dimensions 64x64 *)
  heatmaps = NetModel["2D Face Alignment Net Trained on 300W Large Pose Data"][
    img];
  (* flattened 1-based index of the maximum of each heat map *)
  posFlattened = Map[First@Ordering[#, -1] &, Flatten[heatmaps, {{1}, {2, 3}}]];
  (* convert flattened indices to {row, column} matrix positions *)
  posMat = QuotientRemainder[posFlattened - 1, 64] + 1;
  (* rescale to coordinates in [0, 1], flipping the vertical axis *)
  1/64.*Map[{#[[2]], 64 - #[[1]] + 1} - 0.5 &, posMat]
  ]
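The index arithmetic in netevaluation can be checked independently of the net. The following is a minimal, dependency-free Python sketch (an illustration, not part of the resource) of the same decoding step: given one 64×64 heat map, it finds the position of the maximum and rescales it to {x, y} coordinates in [0, 1] with the origin at the bottom-left corner, mirroring the QuotientRemainder arithmetic above.

```python
# Decode one heat map (a list of rows) to normalized {x, y} coordinates.
# This mirrors the Wolfram code above: argmax -> {row, column} -> rescale.
SIZE = 64

def decode_heatmap(heatmap):
    """Return (x, y) in [0, 1] for the maximum of a SIZE x SIZE heat map."""
    flat = [v for row in heatmap for v in row]
    idx = max(range(len(flat)), key=flat.__getitem__)  # 0-based argmax
    row, col = divmod(idx, SIZE)                       # 0-based matrix position
    x = (col + 1 - 0.5) / SIZE                         # column -> x
    y = (SIZE - (row + 1) + 1 - 0.5) / SIZE            # flip the vertical axis
    return x, y
```

For example, a heat map peaked at matrix position {11, 21} (1-based row 11, column 21) decodes to {20.5/64, 53.5/64}.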

Landmark positions

Get the landmarks using the evaluation function. Coordinates are rescaled to the input image size so that the bottom-left corner is identified by {0, 0} and the top-right corner by {1, 1}:

In[6]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/35f10b4e-edcc-4d7a-9c1c-6a65d21e4dc1"]
Out[6]=

Group landmarks associated with different facial features by colors:

In[7]:=
groupings = Span @@@ {{1, 17}, {18, 22}, {23, 27}, {28, 36}, {37, 42}, {43, 48}, {49, 60}, {61, 68}};
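As a sanity check (not part of the original example), the eight spans above can be verified to partition the 68 landmark indices exactly as the model summary describes: 17 contour, 10 eyebrow (5 per brow), 9 nose, 12 eye (6 per eye) and 20 mouth (12 outer, 8 inner) points. In Python:

```python
# Verify that the eight landmark groups cover indices 1..68 contiguously,
# with sizes matching the description of the net's 68 keypoints.
groupings = [(1, 17), (18, 22), (23, 27), (28, 36),
             (37, 42), (43, 48), (49, 60), (61, 68)]

sizes = [hi - lo + 1 for lo, hi in groupings]
covered = [i for lo, hi in groupings for i in range(lo, hi + 1)]

assert sizes == [17, 5, 5, 9, 6, 6, 12, 8]  # contour, brows, nose, eyes, mouth
assert covered == list(range(1, 69))        # contiguous, no gaps or overlaps
```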

Visualize the landmarks:

In[8]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/3c9c937b-280a-4b5a-a4d1-33fa50fc86b5"]
Out[8]=

Preprocessing

The net must be evaluated on facial crops only. Get an image with multiple faces:

In[9]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/a81c6a86-8b4b-4992-8a2c-79119765e53a"]

Write an evaluation function that crops the input image around faces and returns the crops and facial landmarks:

In[10]:=
findFacialLandmarks[img_Image] := Block[
  {crops, points},
  crops = ImageTrim[img, #] & /@ FindFaces[img];
  points = If[Length[crops] > 0, netevaluation /@ crops, {}];
  MapThread[<|"Crop" -> #1, "Landmarks" -> #2|> &, {crops, points}]
  ]
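The control flow of findFacialLandmarks is independent of the specific detector and landmark net. The following Python sketch expresses the same pipeline with the detector, cropping function and landmark predictor passed in as hypothetical callables (stand-ins for FindFaces, ImageTrim and netevaluation; none of these names come from the resource itself):

```python
# Generic sketch of the per-face pipeline: detect bounding boxes, crop each
# box out of the image, predict landmarks per crop, and pair each crop with
# its landmarks. detect_faces, crop and predict_landmarks are hypothetical
# stand-ins supplied by the caller.

def find_facial_landmarks(image, detect_faces, crop, predict_landmarks):
    """Return one {"Crop": ..., "Landmarks": ...} record per detected face."""
    crops = [crop(image, box) for box in detect_faces(image)]
    points = [predict_landmarks(c) for c in crops]  # empty when no faces found
    return [{"Crop": c, "Landmarks": p} for c, p in zip(crops, points)]
```

With no detected faces the result is an empty list, matching the If guard in the Wolfram code above.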

Evaluate the function on the image:

In[11]:=
output = findFacialLandmarks[img]
Out[11]=

Visualize the landmarks:

In[12]:=
HighlightImage[#Crop,
   Graphics@Riffle[
     Thread@Hue[Range[8]/8.],
     Map[Point, Function[p, Part[#Landmarks, p]] /@ groupings]],
   DataRange -> {{0, 1}, {0, 1}}, ImageSize -> 300] & /@ output
Out[12]=

Robustness to facial crop size

Get an image:

In[13]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/3aba6a35-a7f4-4742-9be1-0f52cf395c21"]

Crop the image at various sizes:

In[14]:=
crops = Table[ImageCrop[img, s, Bottom], {s, 290, 500, 70}]
Out[14]=

Inspect the network performance across the crops:

In[15]:=
HighlightImage[#,
   Graphics@Riffle[
     Thread@Hue[Range[8]/8.],
     Map[Point, Function[p, Part[netevaluation[#], p]] /@ groupings]],
   DataRange -> {{0, 1}, {0, 1}}, ImageSize -> 250] & /@ crops
Out[15]=

Net information

Inspect the number of parameters of all arrays in the net:

In[16]:=
NetInformation[
 NetModel[
  "2D Face Alignment Net Trained on 300W Large Pose Data"], "ArraysElementCounts"]
Out[16]=

Obtain the total number of parameters:

In[17]:=
NetInformation[
 NetModel[
  "2D Face Alignment Net Trained on 300W Large Pose Data"], "ArraysTotalElementCount"]
Out[17]=

Obtain the layer type counts:

In[18]:=
NetInformation[
 NetModel[
  "2D Face Alignment Net Trained on 300W Large Pose Data"], "LayerTypeCounts"]
Out[18]=

Display the summary graphic:

In[19]:=
NetInformation[
 NetModel[
  "2D Face Alignment Net Trained on 300W Large Pose Data"], "SummaryGraphic"]
Out[19]=

Export to MXNet

Export the net into a format that can be opened in MXNet:

In[20]:=
jsonPath = Export[FileNameJoin[{$TemporaryDirectory, "net.json"}], NetModel["2D Face Alignment Net Trained on 300W Large Pose Data"], "MXNet"]
Out[20]=

Export also creates a net.params file containing parameters:

In[21]:=
paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}]
Out[21]=

Get the size of the parameter file:

In[22]:=
FileByteCount[paramPath]
Out[22]=

The size is similar to the byte count of the resource object:

In[23]:=
ResourceObject[
  "2D Face Alignment Net Trained on 300W Large Pose Data"]["ByteCount"]
Out[23]=

Requirements

Wolfram Language 11.2 (September 2017) or above

Resource History

Reference