Deep Speech 2 Trained on Baidu English Data

Transcribe an English-language audio recording

Released in 2015, Baidu Research's Deep Speech 2 model converts speech to text end to end, mapping a normalized sound spectrogram to a sequence of characters. It consists of a few convolutional layers over both time and frequency, followed by gated recurrent unit (GRU) layers modified with an additional batch normalization. At evaluation time, the decoder explores the space of possible output sequences using a beam search algorithm. The same architecture has also been shown to train successfully on Mandarin Chinese.

Number of layers: 152 | Parameter count: 52,504,416 | Trained size: 211 MB

Training Set Information

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["Deep Speech 2 Trained on Baidu English Data"]
Out[1]=

Basic usage

Record an audio sample and transcribe it:

In[2]:=
record = AudioCapture[]
In[3]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/4faae7be-972e-4960-9b68-ea9b2de22aeb"]
In[4]:=
output = NetModel["Deep Speech 2 Trained on Baidu English Data"][
  record]
Out[4]=

Join the characters:

In[5]:=
StringJoin[output]
Out[5]=

Display the confidence of the five most likely hypotheses:

In[6]:=
MapAt[StringJoin, NetModel["Deep Speech 2 Trained on Baidu English Data"][
  record, {"TopNegativeLogLikelihoods", 5}], {All, 1}]
Out[6]=
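The scores returned by "TopNegativeLogLikelihoods" are negative log-likelihoods, so smaller is better. As an illustrative sketch (in Python, not part of this resource), such scores can be turned into relative confidences with a softmax over the negated values; the function name is hypothetical:

```python
import math

def to_confidences(neg_log_likelihoods):
    """Convert negative log-likelihoods into relative confidences that
    sum to 1: a softmax over -nll, shifted by the minimum for numerical
    stability."""
    m = min(neg_log_likelihoods)
    weights = [math.exp(-(nll - m)) for nll in neg_log_likelihoods]
    total = sum(weights)
    return [w / total for w in weights]
```

For example, two hypotheses with equal scores each receive confidence 0.5, and a lower negative log-likelihood always yields a higher confidence.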

Performance evaluation

Models trained with CTC loss have difficulty producing correct spellings:

In[7]:=
StringJoin /@ NetModel["Deep Speech 2 Trained on Baidu English Data"][{ExampleData[{"Audio", "FemaleVoice"}], ExampleData[{"Audio", "MaleVoice"}]}]
Out[7]=

The standard solution to this problem is to incorporate a language model into the beam search decoding procedure, as done in the original Deep Speech 2 implementation.
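To illustrate the idea, here is a deliberately simplified beam search in Python (not the Wolfram implementation): each beam's score combines the net's acoustic log-probability with a weighted language-model term. A real CTC prefix beam search also tracks blank and non-blank probabilities per prefix, which is omitted here, and the toy character LM and weight `alpha` are assumptions for the sketch:

```python
import math
from collections import defaultdict

def lm_logp(prefix, char):
    """Toy character-level LM: log-probability of the next character.
    The original Deep Speech 2 used an n-gram word LM; here we simply
    favor 'a' as a stand-in (an illustrative assumption)."""
    return math.log(0.6) if char == "a" else math.log(0.4)

def beam_search(frames, beam_width=3, alpha=0.5):
    """frames: list of per-timestep dicts {char: acoustic log-prob}.
    Keeps the `beam_width` best prefixes per step, scoring each
    extension as acoustic log-prob + alpha * LM log-prob."""
    beams = {"": 0.0}  # prefix -> cumulative score
    for frame in frames:
        candidates = defaultdict(lambda: -math.inf)
        for prefix, score in beams.items():
            for char, lp in frame.items():
                new_score = score + lp + alpha * lm_logp(prefix, char)
                key = prefix + char
                candidates[key] = max(candidates[key], new_score)
        beams = dict(sorted(candidates.items(),
                            key=lambda kv: kv[1], reverse=True)[:beam_width])
    return max(beams, key=beams.get)
```

When the acoustic scores tie, the LM term breaks the tie: with the toy LM above, two frames of equal evidence for "a" and "b" decode to "aa".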

Noisy environments can affect the performance too:

In[8]:=
StringJoin@
 NetModel["Deep Speech 2 Trained on Baidu English Data"]@
  ExampleData[{"Audio", "NoisyTalk"}]
Out[8]=

Visualization

Write a function to visualize the token probabilities for each time step:

In[9]:=
visualize[audio_] := With[
  {explikelihood = NetModel["Deep Speech 2 Trained on Baidu English Data"][audio, None]},
  MatrixPlot[
   Transpose@explikelihood,
   FrameTicks -> {{Transpose@{Range[29], Join[{"'", "<space>"}, Alphabet[], {"<blank>"}]}, None}, {Automatic, Automatic}},
   AspectRatio -> 0.8]
  ]

Inspect the character probabilities of the sample. The <blank> token is the standard blank token used by nets trained with CTC loss:

In[10]:=
visualize[ExampleData[{"Audio", "FemaleVoice"}]]
Out[10]=
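The blank token is what lets a CTC-trained net keep repeated characters apart. As a minimal sketch (in Python rather than the Wolfram Language), the greedy collapse rule applied to the per-frame best labels is: merge consecutive repeats, then drop blanks:

```python
def ctc_greedy_decode(frame_labels, blank="<blank>"):
    """Collapse a per-frame best-label sequence into a transcript using
    the CTC rule: merge consecutive repeats, then drop blank tokens.
    A blank between two identical labels preserves the doubled letter."""
    out = []
    prev = None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return "".join(out)
```

For example, the frame sequence h, h, <blank>, e, <blank>, l, <blank>, l, o collapses to "hello": the blank between the two l frames prevents them from merging.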

Net information

Inspect the sizes of all arrays in the net:

In[11]:=
NetInformation[NetModel["Deep Speech 2 Trained on Baidu English Data"], "ArraysElementCounts"]
Out[11]=

Obtain the total number of parameters:

In[12]:=
NetInformation[NetModel["Deep Speech 2 Trained on Baidu English Data"], "ArraysTotalElementCount"]
Out[12]=

Obtain the layer type counts:

In[13]:=
NetInformation[NetModel["Deep Speech 2 Trained on Baidu English Data"], "LayerTypeCounts"]
Out[13]=

Display the summary graphic:

In[14]:=
NetInformation[NetModel["Deep Speech 2 Trained on Baidu English Data"], "FullSummaryGraphic"]
Out[14]=

Export to MXNet

Export the net into a format that can be opened in MXNet:

In[15]:=
jsonPath = Export[FileNameJoin[{$TemporaryDirectory, "net.json"}], NetModel["Deep Speech 2 Trained on Baidu English Data"], "MXNet"]
Out[15]=

Export also creates a net.params file containing parameters:

In[16]:=
paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}]
Out[16]=

Get the size of the parameter file:

In[17]:=
FileByteCount[paramPath]
Out[17]=

The size is similar to the byte count of the resource object:

In[18]:=
ResourceObject[
  "Deep Speech 2 Trained on Baidu English Data"]["ByteCount"]
Out[18]=

Requirements

Wolfram Language 11.3 (March 2018) or above

Resource History

Reference