Released in 2016, this model is an application of a powerful method for general-purpose image-to-image translation using conditional adversarial networks. Because the adversarial training procedure learns the loss function automatically, the same approach generalizes across a wide range of image translation tasks. The architecture aggregates features at multiple scales efficiently through skip connections with concatenation. This particular model was trained to generate a street map from a satellite photo.
Number of layers: 56 | Parameter count: 54,419,459 | Trained size: 218 MB
Examples
Resource retrieval
Get the pre-trained net:
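A sketch of the corresponding input; the resource name is assumed from this model's title:
net = NetModel["Pix2Pix Photo-to-Street-Map Translation"]  (* resource name assumed *)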
Basic usage
Obtain a satellite photo:
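As a stand-in for the original test image, a satellite view can be rasterized with GeoGraphics; the location, range and sizes below are arbitrary placeholders:
photo = ImageResize[
   Rasterize[
    GeoGraphics[GeoPosition[{40.7128, -74.0060}],  (* placeholder location *)
     GeoRange -> Quantity[1, "Kilometers"],
     GeoBackground -> "Satellite", ImageSize -> 512]],
   {256, 256}]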
Use the net to draw the street map:
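Assuming the net and photo from the sketches above, the translation is a single application of the net:
map = net[photo]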
Evaluate accuracy
Overlap photo and prediction:
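One way to do this is to blend the prediction over the photo at half opacity, for example:
ImageCompose[photo, {map, 0.5}]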
Obtain the actual street map:
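Continuing the placeholder example, the reference map can be rasterized for the same location with a street-map background:
actual = ImageResize[
   Rasterize[
    GeoGraphics[GeoPosition[{40.7128, -74.0060}],
     GeoRange -> Quantity[1, "Kilometers"],
     GeoBackground -> "StreetMap", ImageSize -> 512]],
   {256, 256}]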
Compare the generated street map with the actual street map:
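For instance, side by side:
GraphicsRow[{map, actual}]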
Issues
More complex patterns are harder to render. Obtain a new photo and street map pair:
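A possible way to build such a pair, again with arbitrary placeholder coordinates:
{photo2, actual2} = Table[
   ImageResize[
    Rasterize[
     GeoGraphics[GeoPosition[{41.8781, -87.6298}],  (* placeholder location *)
      GeoRange -> Quantity[1, "Kilometers"],
      GeoBackground -> bg, ImageSize -> 512]],
    {256, 256}],
   {bg, {"Satellite", "StreetMap"}}]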
Compare the prediction with the actual street map:
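For example:
GraphicsRow[{net[photo2], actual2}]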
Net information
Inspect the number of parameters of all arrays in the net:
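A sketch using NetInformation with the net from the retrieval step:
NetInformation[net, "ArraysElementCounts"]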
Obtain the total number of parameters:
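Likewise:
NetInformation[net, "ArraysTotalElementCount"]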
Obtain the layer type counts:
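Similarly:
NetInformation[net, "LayerTypeCounts"]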
Display the summary graphic:
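And for the graphic:
NetInformation[net, "SummaryGraphic"]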
Export to MXNet
Export the net into a format that can be opened in MXNet:
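A minimal sketch; the output file name is chosen arbitrarily here:
jsonPath = Export["net.json", net, "MXNet"]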
Export also creates a net.params file containing parameters:
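The parameter file sits next to the exported JSON, so its path can be reconstructed as, for example:
paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}]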
Get the size of the parameter file:
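For example:
FileByteCount[paramPath]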
The size is similar to the byte count of the resource object:
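As a comparison, one might query the resource object's byte count (resource name again assumed):
ResourceObject["Pix2Pix Photo-to-Street-Map Translation"]["ByteCount"]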
Represent the MXNet net as a graph:
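A sketch of this step, assuming the "NodeGraphPlot" import element is available for the MXNet format:
Import[jsonPath, {"MXNet", "NodeGraphPlot"}]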
Requirements
Wolfram Language 11.2 (September 2017) or above
Resource History
Reference