StarGAN Trained on CelebA Data
Released in 2017, StarGAN performs multi-domain image-to-image translation; trained on CelebA, it can modify facial attributes such as hair color, gender and age. Adapted from CycleGAN, StarGAN uses the same generator architecture and a similar objective loss, but instead of training one generator per translation task, it uses a single generator for all translations.
Examples
Resource retrieval
Get the pre-trained net:
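A minimal sketch of the retrieval call, assuming the resource name matches the page title:

```wl
NetModel["StarGAN Trained on CelebA Data"]
```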
NetModel parameters
This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:
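One way to list the parameter combinations is the standard "ParametersInformation" property of NetModel:

```wl
NetModel["StarGAN Trained on CelebA Data", "ParametersInformation"]
```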
Pick a non-default net by specifying the parameters:
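A sketch of the selection syntax; the parameter name and value here are hypothetical placeholders, so substitute the names reported by "ParametersInformation":

```wl
(* "Attributes" -> "Hair" is a hypothetical parameter setting *)
NetModel[{"StarGAN Trained on CelebA Data", "Attributes" -> "Hair"}]
```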
Pick a non-default uninitialized net:
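The uninitialized architecture can be requested with the "UninitializedEvaluationNet" property, again with a hypothetical parameter setting:

```wl
NetModel[{"StarGAN Trained on CelebA Data", "Attributes" -> "Hair"},
 "UninitializedEvaluationNet"]
```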
Basic usage
Evaluate a net on a photo:
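A sketch of an evaluation, assuming the input ports are named "Input" and "Class" and that the class vector encodes five CelebA attributes; check Information[net, "InputPorts"] for the actual specification:

```wl
net = NetModel["StarGAN Trained on CelebA Data"];
img = ExampleData[{"TestImage", "Girl"}];

(* the port names and the 5-attribute layout are assumptions *)
net[<|"Input" -> img, "Class" -> {1, 0, 0, 0, 1}|>]
```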
Attributes interpolation
Remove the class encoder from the net:
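Setting a port specification to None strips its NetEncoder; "Class" is an assumed name for the attribute port:

```wl
net = NetModel["StarGAN Trained on CelebA Data"];
(* "Class" is an assumed port name; check Information[net, "InputPorts"] *)
netRaw = NetReplacePart[net, "Class" -> None]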
Instead of using one-hot representation for the class attributes, allow continuous inputs:
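With the class encoder removed, the port accepts any real-valued vector, so attributes can be blended rather than switched on or off; port names remain assumptions:

```wl
net = NetModel["StarGAN Trained on CelebA Data"];
netRaw = NetReplacePart[net, "Class" -> None];
img = ExampleData[{"TestImage", "Girl"}];

(* fractional entries interpolate between attribute settings *)
netRaw[<|"Input" -> img, "Class" -> {0.5, 0.5, 0., 0., 1.}|>]
```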
Net information
Inspect the sizes of all arrays in the net:
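A sketch using the standard "ArraysDimensions" property of Information:

```wl
net = NetModel["StarGAN Trained on CelebA Data"];
Information[net, "ArraysDimensions"]
```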
Obtain the total number of parameters:
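The total element count across all arrays can be queried with:

```wl
net = NetModel["StarGAN Trained on CelebA Data"];
Information[net, "ArraysTotalElementCount"]
```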
Obtain the layer type counts:
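The per-type tally of layers is available through the "LayerTypeCounts" property:

```wl
net = NetModel["StarGAN Trained on CelebA Data"];
Information[net, "LayerTypeCounts"]
```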
Display the summary graphic:
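The standard summary plot of the net's structure:

```wl
net = NetModel["StarGAN Trained on CelebA Data"];
Information[net, "SummaryGraphic"]
```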
Export to ONNX
Export the net to the ONNX format:
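A sketch of the export, writing to a temporary directory; Export infers the ONNX format from the .onnx extension:

```wl
net = NetModel["StarGAN Trained on CelebA Data"];
onnxPath = Export[FileNameJoin[{$TemporaryDirectory, "StarGAN.onnx"}], net]
```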
Get the size of the ONNX file:
The size is similar to the byte count of the resource object:
Check some metadata of the ONNX model:
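A sketch of metadata queries via Import elements; "OpsetVersion" and "IRVersion" are assumed element names for the ONNX importer:

```wl
Import[FileNameJoin[{$TemporaryDirectory, "StarGAN.onnx"}], {"ONNX", "OpsetVersion"}]
Import[FileNameJoin[{$TemporaryDirectory, "StarGAN.onnx"}], {"ONNX", "IRVersion"}]
```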
Import the model back into the Wolfram Language. However, the NetEncoder and NetDecoder will be absent because they are not supported by ONNX:
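The round trip is a plain Import; the resulting net has bare tensor ports, so images must be pre- and post-processed manually:

```wl
imported = Import[FileNameJoin[{$TemporaryDirectory, "StarGAN.onnx"}]]
```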
Resource History
Reference
- Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, J. Choo, "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation," arXiv:1711.09020 (2017)
- Available from: https://github.com/yunjey/stargan
Rights:
MIT License