Released in 2017, this architecture combines the technique of dilated convolutions with the paradigm of residual networks, outperforming its non-dilated counterparts in image classification and semantic segmentation.
Number of layers: 360
Parameter count: 54,497,859
Trained size: 220 MB
Examples
Resource retrieval
Get the pre-trained net:
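A minimal way to fetch the model is with NetModel; the exact resource name below is an assumption based on the description above, so check the repository for the official name:

```wl
(* Load the pre-trained net from the Wolfram Neural Net Repository.
   The resource name is an assumed placeholder. *)
net = NetModel["Dilated ResNet-105 Trained on Cityscapes Data"]
```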
Evaluation function
Write an evaluation function to handle net reshaping and resampling of input and output:
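A sketch of such an evaluation function, assuming the net is bound to `net`, accepts a fixed-size image, and returns a matrix of integer class indices; the input size used here is an assumed placeholder:

```wl
(* Resize the input to the net's expected dimensions, run the net, and
   resample the class-index matrix back to the original image size. *)
netevaluate[img_Image] := Block[{resized, mask},
  resized = ImageResize[img, {1024, 512}];  (* assumed input size *)
  mask = net[resized];                      (* integer class index per pixel *)
  ArrayResample[mask, Reverse@ImageDimensions[img],
    Resampling -> "Nearest"]
 ]
```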
Label list
Obtain the label list for this model. Integers in the model’s output correspond to elements in the label list:
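For a Cityscapes-trained segmentation net, the label list is typically the 19 standard evaluation classes; the list below and its ordering are assumptions, and the authoritative list ships with the resource itself:

```wl
(* An integer i in the net's output names the class labels[[i]]. *)
labels = {"road", "sidewalk", "building", "wall", "fence", "pole",
   "traffic light", "traffic sign", "vegetation", "terrain", "sky",
   "person", "rider", "car", "truck", "bus", "train", "motorcycle",
   "bicycle"};
```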
Basic usage
Obtain a segmentation mask for a given image:
Inspect which classes are detected:
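Assuming the evaluation function and label list described above, both steps can be sketched as:

```wl
(* Any photograph works; a street scene matches the training domain best. *)
img = ExampleData[{"TestImage", "House"}];
mask = netevaluate[img];          (* class index per pixel *)
detected = Union[Flatten[mask]];  (* distinct class indices present *)
labels[[detected]]                (* the corresponding class names *)
```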
Visualize the mask:
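One way to render the mask, applying Colorize to the label matrix from the previous step:

```wl
Colorize[mask]  (* false-color image of the segmentation mask *)
```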
Advanced visualization
Associate classes to colors:
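A sketch of one such association, keying each class index to an evenly spaced hue (the color scheme is an arbitrary choice):

```wl
(* one distinct color per class, keyed by class index *)
colors = AssociationThread[Range[Length[labels]] ->
    (ColorData["Rainbow"] /@ Subdivide[Length[labels] - 1])]
```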
Write a function to overlay the image and the mask with a legend:
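A possible implementation, assuming `colors` maps class indices to colors as above: a semi-transparent colorized mask is composed over the photo, and SwatchLegend supplies the legend.

```wl
maskOverlay[img_Image, mask_] := Legended[
  ImageCompose[img,
   SetAlphaChannel[
    Colorize[mask, ColorRules -> Normal[colors]], 0.5]],
  SwatchLegend[Values[colors], labels]
 ]
```

Calling `maskOverlay[img, mask]` then produces the annotated image.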
Inspect the results:
Net information
Inspect the number of parameters of all arrays in the net:
Obtain the total number of parameters:
Obtain the layer type counts:
Display the summary graphic:
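The four queries in this section can be sketched with NetInformation, assuming the property names available in Wolfram Language 11.3:

```wl
NetInformation[net, "ArraysElementCounts"]      (* parameters per array *)
NetInformation[net, "ArraysTotalElementCount"]  (* total: 54,497,859 *)
NetInformation[net, "LayerTypeCounts"]          (* counts by layer type *)
NetInformation[net, "SummaryGraphic"]           (* summary graphic *)
```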
Export to MXNet
Export the net into a format that can be opened in MXNet:
Export also creates a net.params file containing parameters:
Get the size of the parameter file:
The size is similar to the byte count of the resource object:
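The export steps above can be sketched as follows; the file names follow Export's "MXNet" convention, and the resource name is an assumption:

```wl
Export["net.json", net, "MXNet"];  (* also writes net.params alongside *)
FileByteCount["net.params"]        (* parameter file size in bytes *)
(* compare against the resource object's stored size *)
ResourceObject["Dilated ResNet-105 Trained on Cityscapes Data"]["ByteCount"]
```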
Requirements
Wolfram Language 11.3 (March 2018) or above
Resource History
Reference