Ademxapp Model A1 Trained on ADE20K Data

Segment an image into various semantic component classes

Released in 2016 by the University of Adelaide, this model exploits recent progress in the understanding of residual architectures. With only 17 residual units, it is able to outperform previous, much deeper architectures.

Number of layers: 141 | Parameter count: 124,684,886 | Trained size: 499 MB

Training Set Information

Performance

Examples

Resource retrieval

Retrieve the resource object:

In[1]:=
ResourceObject["Ademxapp Model A1 Trained on ADE20K Data"]
Out[1]=

Get the pre-trained net:

In[2]:=
NetModel["Ademxapp Model A1 Trained on ADE20K Data"]
Out[2]=

Evaluation function

Write an evaluation function to handle net reshaping and resampling of input and output:

In[3]:=
netevaluate[img_, device_: "CPU"] := Block[
  {net, resized, encData, dec, mean, var, prob},
  net = NetModel["Ademxapp Model A1 Trained on ADE20K Data"];
  resized = ImageResize[img, {504}];
  encData = Normal@NetExtract[net, "Input"];
  dec = NetExtract[net, "Output"];
  {mean, var} = Lookup[encData, {"MeanImage", "VarianceImage"}];
  prob = NetReplacePart[net,
     {"Input" -> 
       NetEncoder[{"Image", ImageDimensions@resized, 
         "MeanImage" -> mean, "VarianceImage" -> var}], 
      "Output" -> Automatic}
     ][resized, TargetDevice -> device];
  prob = ArrayResample[prob, Append[Reverse@ImageDimensions@img, 150]];
  dec[prob]
  ]
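
As a quick sketch (an illustrative addition, not one of the page's examples), the function can be applied to any Image object; the image below is taken from ExampleData, and the optional second argument selects the target device:

```wolfram
(* hypothetical quick test of netevaluate; any Image object works *)
img = ExampleData[{"TestImage", "House"}];
mask = netevaluate[img]  (* matrix of class indices in the range 1..150 *)
(* pass "GPU" as the second argument to evaluate on a GPU instead:
   netevaluate[img, "GPU"] *)
```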

Label list

Define the label list for this model. Integers in the model’s output correspond to elements in the label list:

In[4]:=
labels = {"wall", "building", "sky", "floor", "tree", "ceiling", 
   "road", "bed", "windowpane", "grass", "cabinet", "sidewalk", 
   "person", "earth", "door", "table", "mountain", "plant", "curtain",
    "chair", "car", "water", "painting", "sofa", "shelf", "house", 
   "sea", "mirror", "rug", "field", "armchair", "seat", "fence", 
   "desk", "rock", "wardrobe", "lamp", "bathtub", "railing", 
   "cushion", "base", "box", "column", "signboard", 
   "chest of drawers", "counter", "sand", "sink", "skyscraper", 
   "fireplace", "refrigerator", "grandstand", "path", "stairs", 
   "runway", "case", "pool table", "pillow", "screen door", 
   "stairway", "river", "bridge", "bookcase", "blind", 
   "coffee table", "toilet", "flower", "book", "hill", "bench", 
   "countertop", "stove", "palm", "kitchen island", "computer", 
   "swivel chair", "boat", "bar", "arcade machine", "hovel", "bus", 
   "towel", "light", "truck", "tower", "chandelier", "awning", 
   "streetlight", "booth", "television", "airplane", "dirt track", 
   "apparel", "pole", "land", "bannister", "escalator", "ottoman", 
   "bottle", "buffet", "poster", "stage", "van", "ship", "fountain", 
   "conveyer belt", "canopy", "washer", "plaything", 
   "swimming pool", "stool", "barrel", "basket", "waterfall", "tent",
    "bag", "minibike", "cradle", "oven", "ball", "food", "step", 
   "tank", "trade name", "microwave", "pot", "animal", "bicycle", 
   "lake", "dishwasher", "screen", "blanket", "sculpture", "hood", 
   "sconce", "vase", "traffic light", "tray", "ashcan", "fan", 
   "pier", "crt screen", "plate", "monitor", "bulletin board", 
   "shower", "radiator", "glass", "clock", "flag"};
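
As a sanity check (an addition to the original page), the list should contain exactly 150 entries, matching the 150 output channels produced by the evaluation function:

```wolfram
(* the label list must line up one-to-one with the 150 class indices *)
Length[labels]  (* 150 *)
```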

Basic usage

Obtain a segmentation mask for a given image:

In[5]:=
CloudGet["https://www.wolframcloud.com/objects/8103667b-4000-4c3b-837c-25fc6ff2b4c8"] (* Evaluate this cell to copy the example input from a cloud object *)

Inspect which classes are detected:

In[6]:=
detected = DeleteDuplicates@Flatten@mask
Out[6]=
In[7]:=
labels[[detected]]
Out[7]=

Visualize the mask:

In[8]:=
Colorize[mask]
Out[8]=

Advanced visualization

Associate classes to colors:

In[9]:=
indexToColor = 
  Thread[Range[150] -> ColorData["Legacy", "ColorList"][[1 ;; 150]]];

Write a function to overlay the mask on the image, with a legend:

In[10]:=
result[img_, device_: "CPU"] := Block[
  {mask, classes, maskPlot, composition},
  mask = netevaluate[img, device];
  classes = DeleteDuplicates[Flatten@mask];
  maskPlot = Colorize[mask, ColorRules -> indexToColor];
  composition = ImageCompose[img, {maskPlot, 0.5}];
  Legended[
   Row[Image[#, ImageSize -> Large] & /@ {maskPlot, composition}], 
   SwatchLegend[indexToColor[[classes, 2]], labels[[classes]]]]
  ]

Inspect the results:

In[11]:=
CloudGet["https://www.wolframcloud.com/objects/58821d4d-848c-4380-83ab-49ac06e3a029"] (* Evaluate this cell to copy the example input from a cloud object *)
Out[11]=
In[12]:=
CloudGet["https://www.wolframcloud.com/objects/ab59771b-b347-4df7-a9c2-211961c96e9b"] (* Evaluate this cell to copy the example input from a cloud object *)
Out[12]=
In[13]:=
CloudGet["https://www.wolframcloud.com/objects/d317b020-ddc0-4e18-9ac5-31647663330f"] (* Evaluate this cell to copy the example input from a cloud object *)
Out[13]=

Net information

Inspect the sizes of all arrays in the net:

In[14]:=
NetInformation[
 NetModel["Ademxapp Model A1 Trained on ADE20K Data"], 
 "ArraysElementCounts"]
Out[14]=

Obtain the total number of parameters:

In[15]:=
NetInformation[
 NetModel["Ademxapp Model A1 Trained on ADE20K Data"], 
 "ArraysTotalElementCount"]
Out[15]=

Obtain the layer type counts:

In[16]:=
NetInformation[
 NetModel["Ademxapp Model A1 Trained on ADE20K Data"], 
 "LayerTypeCounts"]
Out[16]=

Display the summary graphic:

In[17]:=
NetInformation[
 NetModel["Ademxapp Model A1 Trained on ADE20K Data"], 
 "SummaryGraphic"]
Out[17]=

Export to MXNet

Export the net into a format that can be opened in MXNet:

In[18]:=
jsonPath = 
 Export[FileNameJoin[{$TemporaryDirectory, "net.json"}], 
  NetModel["Ademxapp Model A1 Trained on ADE20K Data"], "MXNet"]
Out[18]=

Export also creates a net.params file containing the parameters:

In[19]:=
paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}]
Out[19]=

Get the size of the parameter file:

In[20]:=
FileByteCount[paramPath]
Out[20]=

The size is similar to the byte count of the resource object:

In[21]:=
ResourceObject[
  "Ademxapp Model A1 Trained on ADE20K Data"]["ByteCount"]
Out[21]=

Requirements

Wolfram Language 11.3 (March 2018) or above

Resource History

Reference