NetModel parameters
This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:
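The examples that follow use a hedged sketch of the repository API. The model name below is an assumption, inferred from the architecture described on this page (a BLIP-style captioner producing 577x1024 image features); substitute the name shown at the top of the page. "ParametersInformation" is a standard NetModel property:

    name = "BLIP Image Captioning Nets Trained on MS-COCO Data"; (* assumed name *)
    NetModel[name, "ParametersInformation"]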
Pick a non-default net by specifying the parameters:
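For example (the parameter names and values here are illustrative; the actual ones are listed by "ParametersInformation" above):

    textDecoder = NetModel[{name, "Part" -> "TextDecoder", "Size" -> "Large"}]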
Pick a non-default uninitialized net:
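The same parameter specification works with the standard "UninitializedEvaluationNet" property:

    NetModel[{name, "Part" -> "TextDecoder", "Size" -> "Large"},
     "UninitializedEvaluationNet"]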
Pick the tokenizer:
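A sketch, assuming the tokenizer is exposed as a net part (the exact parameter value is an assumption; check "ParametersInformation"):

    tokenizer = NetModel[{name, "Part" -> "Tokenizer"}]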
Basic usage
Define a test image:
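Any image works; here a standard built-in test image stands in:

    testImage = ExampleData[{"TestImage", "Girl"}]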
Generate an image caption:
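A minimal greedy-decoding sketch of how caption generation could work. The decoder port names ("Tokens", "ImageFeatures") and the helpers sepTokenId (end-of-caption token id) and idToToken (id-to-string lookup) are hypothetical placeholders; the page's actual netevaluate definition gives the exact logic:

    imageEncoder = NetModel[{name, "Part" -> "ImageEncoder"}];
    textDecoder = NetModel[{name, "Part" -> "TextDecoder"}];

    (* greedy decoding from precomputed image features *)
    decodeCaption[features_, prompt_String : "a picture of", maxTokens_ : 20] :=
     Module[{tokens = tokenizer[prompt], logits, next},
      Do[
       logits = textDecoder[<|"Tokens" -> tokens, "ImageFeatures" -> features|>];
       next = First[Ordering[Last[logits], -1]]; (* token with the highest logit *)
       If[next === sepTokenId, Break[]]; (* assumed end-of-caption token id *)
       AppendTo[tokens, next],
       maxTokens];
      StringRiffle[idToToken /@ Rest[tokens]] (* assumed id -> string lookup *)
      ];

    netevaluate[img_Image, prompt_ : "a picture of"] :=
     decodeCaption[imageEncoder[img], prompt];

    netevaluate[testImage]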
Try different initial prompts:
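The prompt is simply the token sequence the decoder starts from:

    netevaluate[testImage, "a photography of"]
    netevaluate[testImage, "an image of"]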
Obtain a test video:
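For instance, using a short clip bundled with recent Wolfram Language versions:

    video = Video["ExampleData/bullfinch.mkv"]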
Generate a caption for the video. The caption is generated from a number of uniformly spaced frames whose features are averaged. The number of frames can be controlled via an option:
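A sketch reusing decodeCaption from above: sample frames with VideoFrameList, encode each, average the features and decode as usual (here the frame count is a function argument rather than an option):

    netevaluateVideo[v_Video, nFrames_ : 8, prompt_ : "a picture of"] :=
     decodeCaption[Mean[imageEncoder /@ VideoFrameList[v, nFrames]], prompt];

    netevaluateVideo[video, 16]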
Feature space visualization
Get a set of images of cars and airplanes:
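One possible way (WebImageSearch requires service credits; any local car and airplane images work just as well):

    cars = WebImageSearch["car", "Thumbnails", MaxItems -> 5];
    airplanes = WebImageSearch["airplane", "Thumbnails", MaxItems -> 5];
    imgs = Join[cars, airplanes];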
Visualize the feature space embedding performed by the image encoder. Notice that images from the same class are clustered together:
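A sketch using the first of the 577 feature vectors as a global image descriptor (a common choice for this kind of encoder):

    FeatureSpacePlot[imgs,
     FeatureExtractor -> (First[imageEncoder[#]] &)]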
Cross-attention visualization
When generating a new token of the image caption, the text decoder attends to the image features produced by the image encoder. These features are a set of 577 vectors of length 1024, where every vector except the first corresponds to one of the 24x24 patches taken from the input image (the extra vector exists because the image encoder inherits its architecture from the image classification model Vision Transformer Trained on ImageNet Competition Data, but it has no special importance here). The decoder's attention weights over these image features can therefore be interpreted as the image patches the decoder is "looking at" when generating each new token, and this information can be visualized. Get a test image and compute the features:
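A sketch (any image can stand in for the girl-with-camera photo used on this page):

    img = ExampleData[{"TestImage", "Girl"}];
    imgFeatures = imageEncoder[img];
    Dimensions[imgFeatures]

    (* {577, 1024} *)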
Generate the caption, accumulating the attention weights for each token generation. The text decoder contains 12 attention blocks, each producing its own set of attention weights, and, as is usual in deep learning, the deeper blocks generate the most semantic information. Hence the following code extracts the attention weights from the last attention block only:
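A sketch of the accumulation loop. It assumes a hypothetical textDecoderWithAttention, a variant of the decoder (obtainable with net surgery such as NetTake/NetGraph) that also exposes the last block's attention weights as an "AttentionWeights" output; the page shows the exact construction:

    attnSeq = {};
    tokens = tokenizer["a picture of"];
    Do[
     out = textDecoderWithAttention[<|"Tokens" -> tokens, "ImageFeatures" -> imgFeatures|>];
     AppendTo[attnSeq, out["AttentionWeights"]]; (* 12 heads x 577 features *)
     AppendTo[tokens, First[Ordering[Last[out["Output"]], -1]]],
     10]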
Each generated token corresponds to a 12x577 array of attention weights, where 12 is the number of attention heads and 577 is the number of image features: the 24x24 patches plus the extra one:
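Check this on the accumulated weights:

    Dimensions /@ attnSeq

    (* {{12, 577}, {12, 577}, ..., {12, 577}} — one array per generated token *)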
Extract the attention weights related to the image patches and rescale them by their minimum value, which makes them more suitable for visualization:
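Continuing the sketch: drop the first of the 577 features, then divide by the global minimum:

    patchWeights = attnSeq[[All, All, 2 ;;]]; (* keep only the 576 patch features *)
    patchWeights = patchWeights/Min[patchWeights]; (* rescale by the minimum *)
    Dimensions[patchWeights]

    (* {10, 12, 576} *)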
Reshape the flat image patch dimension to 24x24 and take the average over the attention heads, thus obtaining a 24x24 attention matrix for each of the 10 generated tokens:
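ArrayReshape unflattens each 576-vector into a 24x24 grid, and Mean averages the 12 head matrices elementwise:

    attnMatrices = Map[Mean, ArrayReshape[patchWeights, {10, 12, 24, 24}]];
    Dimensions[attnMatrices]

    (* {10, 24, 24} *)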
Visualize the attention weight matrices. Patches with higher values (red) are the ones the decoder is mostly "looking at" when generating the corresponding token:
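For example, as an array of matrix plots (red marks high values with the "TemperatureMap" color scheme):

    MatrixPlot[#, ColorFunction -> "TemperatureMap", Frame -> False] & /@ attnMatrices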
Define a function to visualize the attention matrix on the image:
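A sketch: convert the 24x24 matrix into a heat-map image, stretch it to the image size and blend the two:

    visualizeAttention[img_Image, attn_?MatrixQ] :=
     Module[{heat},
      (* map rescaled weights to colors and resize the 24x24 grid to the image size *)
      heat = ImageResize[
        Image[Map[List @@ ColorData["TemperatureMap"][#] &, Rescale[attn], {2}]],
        ImageDimensions[img]];
      ImageCompose[img, {heat, 0.5}] (* overlay the heat map at 50% opacity *)
      ]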
Visualize the attention mechanism for each token. Notice the emphasis on the head for the tokens "little" and "girl," on the hands for "taking" and on the camera for "camera":
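Assuming generatedTokens holds the decoded string for each of the 10 tokens (e.g. via the id-to-string lookup from the decoding sketch above):

    MapThread[
     Labeled[visualizeAttention[img, #1], #2] &,
     {attnMatrices, generatedTokens}]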
Net information
Inspect the number of parameters of all arrays in the net:
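"ArraysElementCounts" and the related properties below are standard Information properties for nets:

    net = NetModel[name];
    Information[net, "ArraysElementCounts"]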
Obtain the total number of parameters:
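    Information[net, "ArraysTotalElementCount"]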
Obtain the layer type counts:
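    Information[net, "LayerTypeCounts"]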