Wolfram Research

GPT-2 Transformer Trained on WebText Data

Generate text in English and represent text as a sequence of vectors

Released in 2019, this model improves on and scales up its predecessor, GPT. It has a larger vocabulary, uses BPE tokenization on UTF-8 byte sequences and adds an extra layer normalization after the final transformer block.
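To illustrate the idea behind byte-level BPE tokenization (a toy sketch in Python, not the model's actual merge table or vocabulary): starting from raw UTF-8 bytes, the most frequent adjacent pair is repeatedly merged into a single token.

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent token pairs and return the most frequent one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with a single merged token."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

# Start from raw UTF-8 bytes, as byte-level BPE does
tokens = [bytes([b]) for b in "low lower lowest".encode("utf-8")]
pair = most_frequent_pair(tokens)   # (b'l', b'o') is the most frequent pair
tokens = merge_pair(tokens, pair)   # every 'l','o' becomes a single 'lo' token
```

Training repeats this merge step many times; the resulting merge table defines the subword vocabulary.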

Training Set Information

Performance

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["GPT-2 Transformer Trained on WebText Data"]
Out[1]=

NetModel parameters

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:

In[2]:=
NetModel["GPT-2 Transformer Trained on WebText Data", \
"ParametersInformation"]
Out[2]=

Pick a non-default net by specifying the parameters:

In[3]:=
lm = NetModel[{"GPT-2 Transformer Trained on WebText Data", 
   "Task" -> "LanguageModeling", "Size" -> "345M"}]
Out[3]=

Pick a non-default uninitialized net:

In[4]:=
NetModel[{"GPT-2 Transformer Trained on WebText Data", 
  "Task" -> "LanguageModeling"}, "UninitializedEvaluationNet"]
Out[4]=

Basic usage

Given a piece of text, the GPT-2 net produces a sequence of feature vectors of size 768, which correspond to the sequence of input words or subwords:

In[5]:=
embeddings = 
 NetModel["GPT-2 Transformer Trained on WebText Data"][
  "The cat is on the mat"]
Out[5]=

Obtain dimensions of the embeddings:

In[6]:=
Dimensions[embeddings]
Out[6]=

Visualize the embeddings:

In[7]:=
MatrixPlot[embeddings]
Out[7]=

Transformer architecture

The input string is first normalized and then tokenized, or split into words or subwords. This two-step process is accomplished using the NetEncoder "Function":

In[8]:=
net = NetModel[
  "GPT-2 Transformer Trained on WebText Data"]; netencoder = 
 NetExtract[net, "Input"]
Out[8]=

The tokenization step is performed using the NetEncoder "BPESubwordTokens" and can be extracted using the following steps:

In[9]:=
pos = First@ Position[NetExtract[netencoder, "Function"], _NetEncoder];
Extract[NetExtract[netencoder, "Function"], pos]
Out[10]=

The encoder produces an integer index for each subword token, corresponding to its position in the vocabulary:

In[11]:=
netencoder["Hello world! I am here"]
Out[11]=

Each subword token is also assigned a positional index:

In[12]:=
net["Hello world! I am here", 
 NetPort[{"embedding", "posembed", "Output"}]]
Out[12]=

A lookup maps each of these indices to a numeric vector of size 768:

In[13]:=
embeddings = net["Hello world! I am here",
   {NetPort[{"embedding", "embeddingpos", "Output"}],
    NetPort[{"embedding", "embeddingtokens", "Output"}]}];
Map[MatrixPlot, embeddings]
Out[14]=

For each subword token, these two embeddings are combined by elementwise summation with a ThreadingLayer:

In[15]:=
NetExtract[net, "embedding"]
Out[15]=
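The lookup-and-sum combination can be sketched with NumPy, using small random tables in place of GPT-2's learned embeddings (the vocabulary size and token indices here are illustrative, not the model's):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, max_pos, dim = 1000, 1024, 768

token_table = rng.normal(size=(vocab_size, dim))  # token embedding lookup table
pos_table = rng.normal(size=(max_pos, dim))       # positional embedding table

token_ids = np.array([17, 52, 3, 921, 44, 8])     # illustrative tokenizer output
positions = np.arange(len(token_ids))             # 0, 1, 2, ...

# Each index selects a row of its table; the two embeddings are then
# summed elementwise, as the ThreadingLayer above does
embeddings = token_table[token_ids] + pos_table[positions]
```

The result is one vector per subword token that encodes both its identity and its position in the sequence.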

The transformer architecture then processes the vectors using 12 structurally identical self-attention blocks stacked in a chain:

In[16]:=
NetExtract[net, "decoder"]
Out[16]=

The key part of these blocks is the attention module, comprising 12 parallel self-attention transformations, also called “attention heads”:

In[17]:=
NetExtract[net, {"decoder", 1, 1}]
Out[17]=

Each head uses an AttentionLayer at its core:

In[18]:=
NetExtract[net, {"decoder", 1, 1, "attention", 1}]
Out[18]=

Attention is done with causal masking: the embedding of a given subword token depends only on the previous subword tokens, not on the following ones. This is a prerequisite for generating text with the language model. The following figures compare causal attention to other forms of connectivity between input tokens:
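Causal masking can be sketched in NumPy (a single head, with illustrative sizes): masked positions receive a score of negative infinity before the softmax, so each token attends only to itself and earlier tokens.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

seq_len, dim = 5, 8
rng = np.random.default_rng(1)
q = rng.normal(size=(seq_len, dim))  # queries
k = rng.normal(size=(seq_len, dim))  # keys
v = rng.normal(size=(seq_len, dim))  # values

scores = q @ k.T / np.sqrt(dim)
# Causal mask: position i may only attend to positions j <= i
mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
scores[mask] = -np.inf
weights = softmax(scores)  # masked entries become exactly 0
output = weights @ v
```

After the softmax, all attention weights above the diagonal are zero, so no information flows from future tokens to past ones.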

Language modeling: Basic usage

Retrieve the language model by specifying the "Task" parameter:

In[19]:=
lm = NetModel[{"GPT-2 Transformer Trained on WebText Data", 
   "Task" -> "LanguageModeling"}]
Out[19]=

Predict the next word in a given sequence:

In[20]:=
lm["Albert Einstein was a German-born theoretical physicist"]
Out[20]=

Obtain the top 15 probabilities:

In[21]:=
topProbs = 
 lm["Albert Einstein was a German-born theoretical physicist", \
{"TopProbabilities", 15}]
Out[21]=

Plot the top 15 probabilities:

In[22]:=
BarChart[Thread@
  Labeled[Values@topProbs, 
   Keys[topProbs] /. {"\n" -> "\\n", "\t" -> "\\t"}], 
 ScalingFunctions -> "Log", ImageSize -> Large]
Out[22]=

Text generation

Define a function to predict the next token:

In[23]:=
lm = NetModel[{"GPT-2 Transformer Trained on WebText Data", 
    "Task" -> "LanguageModeling"}];
generateSample[languagemodel_][input_String, numTokens_: 10, 
   temperature_: 1] := 
  Nest[Function[
    StringJoin[#, 
     languagemodel[#, {"RandomSample", 
       "Temperature" -> temperature}]]], input, numTokens];

Generate the next 20 tokens by using it on a piece of text:

In[24]:=
generateSample[
  lm]["Albert Einstein was a German-born theoretical physicist", 20]
Out[24]=

The third optional argument is a “temperature” parameter that scales the input to the final softmax. A high temperature flattens the distribution from which tokens are sampled, increasing the probability of extracting less likely tokens:

In[25]:=
generateSample[
  lm]["Albert Einstein was a German-born theoretical physicist", 40, \
1.5]
Out[25]=

Decreasing the temperature sharpens the peaks of the sampling distribution, further decreasing the probability of extracting less likely tokens:

In[26]:=
generateSample[
  lm]["Albert Einstein was a German-born theoretical physicist", 50, \
0.5]
Out[26]=

Very high temperature settings are essentially equivalent to uniform random sampling:

In[27]:=
generateSample[
  lm]["Albert Einstein was a German-born theoretical physicist", 50, \
10]
Out[27]=

Very low temperature settings are equivalent to always picking the token with maximum probability. In this regime, sampling typically “gets stuck in a loop”:

In[28]:=
generateSample[
  lm]["Albert Einstein was a German-born theoretical physicist", 100, \
0.01]
Out[28]=
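The effect of temperature on the sampling distribution can be summarized in a few lines of NumPy (illustrative logits, not GPT-2 outputs): the logits are divided by the temperature before the softmax.

```python
import numpy as np

def sample_probs(logits, temperature):
    """Softmax over logits scaled by 1/temperature."""
    z = logits / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([3.0, 1.0, 0.5, -1.0])

cold = sample_probs(logits, 0.1)   # sharp: nearly all mass on the top token
warm = sample_probs(logits, 1.0)   # the unscaled distribution
hot = sample_probs(logits, 100.0)  # flat: close to uniform
```

At low temperature the top token dominates (greedy-like, loop-prone generation); at high temperature the distribution approaches uniform (random-like generation).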

Sentence analogies

Define a sentence embedding that consists of the last subword embedding of GPT-2 (since GPT-2 is a forward causal model, only the last embedding attends to the entire sentence):

In[29]:=
sentenceembedding = 
 NetAppend[NetModel["GPT-2 Transformer Trained on WebText Data"], 
  "last" -> SequenceLastLayer[]]
Out[29]=

Define some sentences in two broad categories for comparison:

In[30]:=
sentences = {"I put on some nice soothing music.", 
   "The song blasted from the little radio.", 
   "The soundtrack from the movie was so good.", 
   "Food is needed for survival.", "Go on, eat if you are hungry.", 
   "Her baking skills are terrible."};

Precompute the embeddings for a list of sentences:

In[31]:=
assoc = AssociationThread[sentences -> sentenceembedding[sentences]];

Visualize the similarity between the sentences using the net as a feature extractor:

In[32]:=
FeatureSpacePlot[
 Table[Labeled[(Values@assoc)[[i]], (Keys@assoc)[[i]]], {i, 
   Length@assoc}], LabelingFunction -> Callout, ImageSize -> Medium]
Out[32]=
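The grouping that FeatureSpacePlot reveals is driven by distances between the embedding vectors; cosine similarity is one common way to quantify it. A minimal sketch (toy 2-dimensional vectors stand in for the 768-dimensional GPT-2 sentence embeddings):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1 = same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins: two "music" sentences close together, one "food" sentence apart
music1 = np.array([0.9, 0.1])
music2 = np.array([0.8, 0.2])
food = np.array([0.1, 0.9])

within = cosine_similarity(music1, music2)  # high: same category
across = cosine_similarity(music1, food)    # low: different categories
```

Sentences from the same category have embeddings pointing in similar directions, which is what makes them cluster in the feature-space plot.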

Train a classifier model with the subword embeddings

Get a text-processing dataset:

In[33]:=
train = ResourceData["Sample Data: Movie Review Sentence Polarity", 
   "TrainingData"];
valid = ResourceData["Sample Data: Movie Review Sentence Polarity", 
   "TestData"];

View a random sample of the dataset:

In[34]:=
RandomSample[train, 1]
Out[34]=

Define a sentence embedding that consists of the last subword embedding of GPT-2 (since GPT-2 is a forward causal model, only the last embedding attends to the entire sentence):

In[35]:=
sentenceembedding = 
 NetAppend[NetModel["GPT-2 Transformer Trained on WebText Data"], 
  "last" -> SequenceLastLayer[]]
Out[35]=

Precompute the GPT-2 vectors for the training and the validation datasets (if available, GPU is recommended), using the last embedded vector as a representation of the entire text:

In[36]:=
trainembeddings = 
  sentenceembedding[train[[All, 1]], TargetDevice -> "CPU"] -> 
   train[[All, 2]];
validembeddings = 
  sentenceembedding[valid[[All, 1]], TargetDevice -> "CPU"] -> 
   valid[[All, 2]];

Define a simple network for classification:

In[37]:=
classifierhead = NetChain[
  {DropoutLayer[], 2, SoftmaxLayer[]},
  "Output" -> NetDecoder[{"Class", {"negative", "positive"}}]
  ]
Out[37]=

Train the network on the precomputed GPT-2 vectors:

In[38]:=
gpt2results = NetTrain[classifierhead, trainembeddings, All,
  ValidationSet -> validembeddings,
  TrainingStoppingCriterion -> <|"Criterion" -> "ErrorRate", 
    "Patience" -> 50|>,
  TargetDevice -> "CPU",
  MaxTrainingRounds -> 500]
Out[38]=

Check the classification error rate on the validation data:

In[39]:=
gpt2results["ValidationMeasurements", "ErrorRate"]
Out[39]=

Compare the results with the performance of a classifier trained on context-independent word embeddings. Precompute the GloVe vectors for the training and the validation datasets (if available, GPU is recommended):

In[40]:=
glove = NetModel[
   "GloVe 300-Dimensional Word Vectors Trained on Wikipedia and \
Gigaword 5 Data"];
In[41]:=
trainembeddingsglove = 
  glove[train[[All, 1]], TargetDevice -> "CPU"] -> train[[All, 2]];
validembeddingsglove = 
  glove[valid[[All, 1]], TargetDevice -> "CPU"] -> valid[[All, 2]];

Define a simple network for classification, using a max-pooling strategy:

In[42]:=
gloveclassifierhead = NetChain[
  {DropoutLayer[],
   NetMapOperator[2],
   AggregationLayer[Max, 1],
   SoftmaxLayer[]},
  "Output" -> NetDecoder[{"Class", {"negative", "positive"}}]]
Out[42]=
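The max-pooling strategy in this chain reduces a variable-length sequence of word vectors to a fixed-size output: a linear map is applied to each word (NetMapOperator), the elementwise maximum is taken over the sequence (AggregationLayer), and a softmax follows. A NumPy sketch with illustrative sizes and random weights:

```python
import numpy as np

rng = np.random.default_rng(2)
seq_len, dim, n_classes = 7, 300, 2

word_vectors = rng.normal(size=(seq_len, dim))  # e.g. GloVe vectors of one sentence
w = rng.normal(size=(dim, n_classes))           # linear map applied to each word
b = np.zeros(n_classes)

per_word_scores = word_vectors @ w + b          # shape (seq_len, n_classes)
pooled = per_word_scores.max(axis=0)            # max over the sequence dimension
probs = np.exp(pooled) / np.exp(pooled).sum()   # softmax over the two classes
```

Because the maximum is taken over the sequence dimension, the output shape is independent of the sentence length, which is what lets a single classifier head handle variable-length inputs.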

Train the classifier on the precomputed GloVe vectors:

In[43]:=
gloveresults = NetTrain[gloveclassifierhead, trainembeddingsglove, All,
  ValidationSet -> validembeddingsglove,
  TrainingStoppingCriterion -> <|"Criterion" -> "ErrorRate", 
    "Patience" -> 50|>,
  TargetDevice -> "CPU",
  MaxTrainingRounds -> 50]
Out[43]=

Compare the results obtained with GPT-2 and with GloVe:

In[44]:=
Dataset[<|"GPT-2" -> gpt2results["ValidationMeasurements"], 
  "GloVe" -> gloveresults["ValidationMeasurements"]|>]
Out[44]=

Net information

Inspect the number of parameters of all arrays in the net:

In[45]:=
NetInformation[
 NetModel[
  "GPT-2 Transformer Trained on WebText Data"], "ArraysElementCounts"]
Out[45]=

Obtain the total number of parameters:

In[46]:=
NetInformation[
 NetModel["GPT-2 Transformer Trained on WebText Data"], \
"ArraysTotalElementCount"]
Out[46]=

Obtain the layer type counts:

In[47]:=
NetInformation[
 NetModel["GPT-2 Transformer Trained on WebText Data"], \
"LayerTypeCounts"]
Out[47]=

Display the summary graphic:

In[48]:=
NetInformation[
 NetModel["GPT-2 Transformer Trained on WebText Data"], \
"SummaryGraphic"]
Out[48]=

Export to MXNet

Export the net into a format that can be opened in MXNet:

In[49]:=
jsonPath = 
 Export[FileNameJoin[{$TemporaryDirectory, "net.json"}], 
  NetModel["GPT-2 Transformer Trained on WebText Data"], "MXNet"]
Out[49]=

Export also creates a net.params file containing parameters:

In[50]:=
paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}]
Out[50]=

Get the size of the parameter file:

In[51]:=
FileByteCount[paramPath]
Out[51]=

Requirements

Wolfram Language 12.0 (April 2019) or above

Resource History

Reference