Resource retrieval
Get the pre-trained net:
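For illustration, a retrieval call along these lines can be used; the resource title below is an assumption and should be checked against the repository:

    (* the resource name is illustrative; check the Neural Net Repository for the exact title *)
    net = NetModel["CLIP Multi-domain Feature Extractor Trained on LAION-2B Data"]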
NetModel parameters
This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:
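A minimal sketch using the "ParametersInformation" property of NetModel (resource name assumed as above):

    NetModel["CLIP Multi-domain Feature Extractor Trained on LAION-2B Data", "ParametersInformation"]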
Pick a non-default net by specifying the parameters:
Pick a non-default uninitialized net:
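A sketch of both calls; the parameter names "InputDomain" and "Architecture" and their values are assumptions to be read off "ParametersInformation":

    (* a specific parameter combination *)
    NetModel[{"CLIP Multi-domain Feature Extractor Trained on LAION-2B Data",
       "InputDomain" -> "Text", "Architecture" -> "ViT-B/32"}]

    (* the same combination, without the trained weights *)
    NetModel[{"CLIP Multi-domain Feature Extractor Trained on LAION-2B Data",
       "InputDomain" -> "Text", "Architecture" -> "ViT-B/32"}, "UninitializedEvaluationNet"]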
Basic usage
Use the CLIP text encoder to obtain the feature representation of a piece of text:
The default CLIP text encoder embeds the input text into a vector of size 512:
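For example, assuming "InputDomain" -> "Text" selects the text encoder:

    textEncoder = NetModel[{"CLIP Multi-domain Feature Extractor Trained on LAION-2B Data", "InputDomain" -> "Text"}];
    textFeature = textEncoder["A group of people walking on the beach at sunset"];
    Dimensions[textFeature]
    (* {512} *)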
Use the CLIP image encoder to obtain the feature representation of an image:
The default CLIP image encoder embeds the input image into a vector of size 512:
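For example, assuming "InputDomain" -> "Image" selects the image encoder and using a built-in test image:

    imageEncoder = NetModel[{"CLIP Multi-domain Feature Extractor Trained on LAION-2B Data", "InputDomain" -> "Image"}];
    imageFeature = imageEncoder[ExampleData[{"TestImage", "Mandrill"}]];
    Dimensions[imageFeature]
    (* {512} *)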
Feature space visualization
Get a set of images:
Visualize the features of a set of images:
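A sketch using FeatureSpacePlot with the image encoder as feature extractor; the ExampleData test-image names are assumed to be available:

    imgs = ExampleData[{"TestImage", #}] & /@ {"Mandrill", "House", "Sailboat", "Peppers"};
    FeatureSpacePlot[imgs, FeatureExtractor -> imageEncoder]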
Define a list of sentences in two categories:
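For illustration, a hypothetical set of animal-related and finance-related sentences, visualized with the text encoder as feature extractor:

    sentences = Join[
       {"The cat stretched out in the warm sun", "A dog chased the ball across the yard",
        "The horse galloped through the open field", "A parrot repeated everything it heard"},
       {"The stock market fell sharply on Monday", "Investors are worried about rising inflation",
        "The company reported record quarterly earnings", "Central banks raised interest rates again"}];
    FeatureSpacePlot[sentences, FeatureExtractor -> textEncoder, LabelingFunction -> Callout]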
Connecting text and images
Define a test image:
Define a list of text descriptions:
Embed the test image and text descriptions into the same feature space:
Rank the text descriptions by their correspondence to the input image according to CosineDistance. Smaller distances mean a closer correspondence between the text and the image:
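A sketch of the whole ranking step, with a hypothetical test image and candidate descriptions:

    testImage = ExampleData[{"TestImage", "Sailboat"}];
    descriptions = {"a boat sailing on the water", "a portrait of a monkey",
       "a plate of food", "a city skyline at night"};
    imageVector = imageEncoder[testImage];
    textVectors = textEncoder[descriptions];
    (* smaller cosine distance = better match *)
    distances = CosineDistance[imageVector, #] & /@ textVectors;
    SortBy[Thread[descriptions -> distances], Last]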
Zero-shot image classification
By using the text and image feature extractors together, it is possible to perform generic image classification between any set of classes without explicitly training a model for those particular classes (zero-shot classification). Obtain the FashionMNIST test data, which contains 10,000 test images and 10 classes:
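A sketch assuming the "FashionMNIST" resource of the Wolfram Data Repository, with a "TestData" element consisting of image -> class pairs:

    testData = ResourceData["FashionMNIST", "TestData"];
    Length[testData]
    (* 10000 *)

    (* a few random examples from the set *)
    RandomSample[testData, 5]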
Display a few random examples from the set:
Get a mapping between class IDs and labels:
Generate the text templates for the FashionMNIST labels and embed them. The text templates will effectively act as classification labels:
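For illustration, using the standard FashionMNIST class names; the exact prompt wording ("a photo of a ...") is an assumption:

    labels = <|0 -> "T-shirt/top", 1 -> "Trouser", 2 -> "Pullover", 3 -> "Dress", 4 -> "Coat",
       5 -> "Sandal", 6 -> "Shirt", 7 -> "Sneaker", 8 -> "Bag", 9 -> "Ankle boot"|>;
    templates = "a photo of a " <> ToLowerCase[#] & /@ Values[labels];
    labelEmbeddings = textEncoder[templates];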
Classify an image from the test set. Obtain its embedding:
The result of the classification is the description whose embedding is closest to the image embedding:
Find the top 10 descriptions nearest to the image embedding:
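A sketch of the single-image classification step, assuming the test data is a list of image -> classID pairs:

    example = RandomChoice[testData];
    img = First[example]; trueClass = Last[example];
    imageEmbedding = imageEncoder[img];
    distances = CosineDistance[imageEmbedding, #] & /@ labelEmbeddings;
    (* classification result: the description with the smallest distance *)
    First[templates[[Ordering[distances]]]]
    (* all 10 descriptions, nearest first *)
    templates[[Ordering[distances]]]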
Obtain the accuracy of this procedure on the entire test set. Extract the features for all the images (if a GPU is available, setting TargetDevice -> "GPU" is recommended, as the computation will take several minutes on CPU):
Calculate the distance matrix between the computed text and image embeddings:
Obtain the top-1 prediction:
Obtain the final classification results:
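A sketch of the full evaluation, assuming the class IDs in the test data are 0-based integers:

    images = Keys[testData];
    trueIDs = Values[testData];
    (* features for all 10,000 images; drop TargetDevice or use "CPU" if no GPU is available *)
    imageEmbeddings = imageEncoder[images, TargetDevice -> "GPU"];
    (* rows = images, columns = label templates *)
    distMatrix = DistanceMatrix[imageEmbeddings, labelEmbeddings, DistanceFunction -> CosineDistance];
    (* top-1 prediction per image, converted to 0-based class IDs *)
    predictions = Flatten[Ordering[#, 1] & /@ distMatrix] - 1;
    accuracy = N[Mean[Boole[MapThread[Equal, {predictions, trueIDs}]]]]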
Attention visualization for images
Just like the original Vision Transformer (see the model "Vision Transformer Trained on ImageNet Competition Data"), the image feature extractor divides the input images into a 7x7 grid of patches and performs self-attention on a set of 50 vectors: 49 vectors, or "tokens," representing the 7x7 patches, plus an additional "feature extraction token" that is eventually used to produce the final feature representation of the image. The attention procedure for this model can therefore be visualized by inspecting the attention weights between the feature extraction token and the patch tokens. Define a test image:
Extract the attention weights used for the last block of self-attention:
Extract the attention weights between the feature extraction token and the input patches. These weights can be interpreted as showing which patches in the original image the net is "looking at" in order to perform the feature extraction:
Reshape the weights into a 3D array of 12 7x7 matrices. Each matrix corresponds to an attention head, and each element of a matrix corresponds to a patch in the original image:
Visualize the attention weight matrices. Patches with higher values (red) are what is mostly being "looked at" for each attention head:
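A sketch of these steps. The NetPort path used to read out the attention weights is hypothetical: the actual layer names must be read from Information[imageEncoder, "Layers"] or the net's summary graphic, and the weight array is assumed to have shape {heads, queries, keys} with the feature extraction token first:

    testImage = ExampleData[{"TestImage", "Mandrill"}];
    (* hypothetical port path *)
    attention = imageEncoder[testImage,
       NetPort[{"transformer", "block12", "attention", "AttentionWeights"}]];
    (* weights from the feature extraction token to the 49 patch tokens *)
    featureTokenAttention = attention[[All, 1, 2 ;;]];
    attentionMatrices = ArrayReshape[featureTokenAttention, {12, 7, 7}];
    (* one 7x7 map per head; red = high attention *)
    MatrixPlot[#, ColorFunction -> "TemperatureMap", FrameTicks -> None] & /@ attentionMatrices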
Define a function to visualize the attention matrix on an image:
Visualize the mean attention across all the attention heads:
Visualize each attention head separately:
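A sketch covering the three steps above: a helper that upscales an attention matrix to the image size and blends it over the image as a heat map, applied to the mean attention and to each head:

    visualizeAttention[img_Image, attn_?MatrixQ] := Module[{heat},
      heat = ImageResize[
        Colorize[Image[Rescale[attn]], ColorFunction -> "TemperatureMap"],
        ImageDimensions[img], Resampling -> "Linear"];
      ImageCompose[img, SetAlphaChannel[heat, 0.55]]];

    (* mean attention across all 12 heads *)
    visualizeAttention[testImage, Mean[attentionMatrices]]

    (* each head separately *)
    visualizeAttention[testImage, #] & /@ attentionMatrices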
Attention visualization for text
The text feature extractor tokenizes the input string, prepending the special token StartOfString and appending EndOfString, and then performs causal self-attention on the token embedding vectors. After the self-attention stack, the last vector (corresponding to the EndOfString token) is used to obtain the final feature representation of the text. The attention procedure for this model can therefore be visualized by inspecting the attention weights between the last vector and the previous ones. Define a test string:
Extract the NetEncoder of the net to encode the string:
Extract the list of available tokens and inspect how the input string was tokenized. Even though BPE tokenization generally segments the input into subwords, it is common for all tokens to correspond to full words. Also observe that the StartOfString and EndOfString tokens are added automatically:
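A sketch of the encoding step; the test string is hypothetical (chosen to contain the "hair" and "camera" tokens discussed below), and the lookup of the full token vocabulary is omitted here:

    (* the NetEncoder attached to the input port handles tokenization *)
    netEncoder = NetExtract[textEncoder, "Input"];
    netEncoder["A dog with long hair looking at the camera"]
    (* returns the integer token codes, with the StartOfString and EndOfString codes added automatically *)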
Feed the string to the net and extract the attention weights used for the last block of self-attention:
Extract the attention weights between the last vector and the previous ones, leaving out the initial vector corresponding to StartOfString. These weights can be interpreted as showing which tokens in the original sentence the net is "looking at" in order to perform the feature extraction:
Inspect the average attention weights for each token across the attention heads. Observe that the tokens the net is mostly focused on are "hair" and "camera":
Visualize each head separately:
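A sketch of these steps, using the same hypothetical NetPort path and shape assumptions as for the image encoder above:

    testString = "A dog with long hair looking at the camera";
    textAttention = textEncoder[testString,
       NetPort[{"transformer", "block12", "attention", "AttentionWeights"}]];
    (* weights from the last (EndOfString) vector to the word tokens, dropping StartOfString *)
    lastTokenAttention = textAttention[[All, -1, 2 ;; -2]];
    meanAttention = Mean[lastTokenAttention];
    tokens = {"a", "dog", "with", "long", "hair", "looking", "at", "the", "camera"};
    BarChart[meanAttention, ChartLabels -> tokens]

    (* each head separately *)
    BarChart[#, ChartLabels -> tokens] & /@ lastTokenAttention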
Extract the attention weights for all the 12 attention layers:
Compute the average across all heads, leaving the StartOfString token out:
Define a function to visualize the attention weights:
Explore the attention weights for every layer. A thicker arrow pointing from token A to token B indicates that the layer is paying attention to token B when generating the vector corresponding to token A:
Transfer learning
Use the pre-trained model to build a classifier for telling apart indoor and outdoor photos. Create a test set and a training set:
Remove the last linear layer from the pre-trained net:
Create a new net composed of the pre-trained net followed by a linear layer and a softmax layer:
Train on the dataset, freezing all the weights except for those in the "linearNew" layer (use TargetDevice -> "GPU" for training on a GPU):
Perfect accuracy is obtained on the test set:
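A sketch of the transfer-learning steps above, where trainSet and testSet are hypothetical lists of image -> "indoor" | "outdoor" pairs and the image encoder is assumed to be a NetChain so that NetDrop can remove its last layer:

    baseNet = NetDrop[imageEncoder, -1];
    newNet = NetChain[<|
        "base" -> baseNet,
        "linearNew" -> LinearLayer[2],
        "softmax" -> SoftmaxLayer[]|>,
      "Output" -> NetDecoder[{"Class", {"indoor", "outdoor"}}]];
    (* train only the new linear layer, freezing everything else *)
    trained = NetTrain[newNet, trainSet,
      LearningRateMultipliers -> {"linearNew" -> 1, _ -> 0},
      MaxTrainingRounds -> 5];
    NetMeasurements[trained, testSet, "Accuracy"]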
Net information
Inspect the number of parameters of all arrays in the net:
Obtain the total number of parameters:
Obtain the layer type counts:
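These three quantities can be read off with the net Information properties:

    Information[net, "ArraysElementCounts"]      (* parameters per array *)
    Information[net, "ArraysTotalElementCount"]  (* total number of parameters *)
    Information[net, "LayerTypeCounts"]          (* number of layers of each type *)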
Export to ONNX
Export the net to the ONNX format:
Get the size of the ONNX file:
Check some metadata of the ONNX model:
Import the model back into Wolfram Language. However, the NetEncoder and NetDecoder will be absent because they are not supported by ONNX:
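A sketch of the export/import round trip; the file name is arbitrary and the metadata inspection step is omitted here:

    onnxFile = Export[FileNameJoin[{$TemporaryDirectory, "clip.onnx"}], net];
    FileSize[onnxFile]
    (* the re-imported net has no NetEncoder or NetDecoder attached to its ports *)
    imported = Import[onnxFile]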