Deep Speech 2 Trained on Baidu English Data (Raw Model)
Released in 2015, Baidu Research's Deep Speech 2 model converts speech to text end to end, mapping a normalized sound spectrogram to a sequence of characters. It consists of a few convolutional layers over both time and frequency, followed by gated recurrent unit (GRU) layers (modified with an additional batch normalization). At evaluation time, the decoder explores the space of possible output sequences using a beam search algorithm. The same architecture has also been shown to train successfully on Mandarin Chinese.
Number of layers: 169 | Parameter count: 52,504,416 | Trained size: 211 MB
Examples
Resource retrieval
Get the pre-trained network:
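A minimal sketch using NetModel; the resource name below is assumed to match this page's title:

```wolfram
net = NetModel["Deep Speech 2 Trained on Baidu English Data"]
```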
Evaluation function
Write an evaluation function to manage audio preprocessing, net resizing and joining of output characters:
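A simplified sketch of such a function, using greedy (best-path) CTC decoding instead of a full beam search. It assumes the resource net accepts an Audio object directly (handling preprocessing via its encoder) and returns a matrix of per-time-step character probabilities; the label set and the name netevaluate are assumptions:

```wolfram
(* assumed label set for the raw model: a CTC blank followed by space,
   apostrophe and the lowercase letters *)
classes = Join[{"<blank>"}, Characters[" 'abcdefghijklmnopqrstuvwxyz"]];

(* greedy best-path CTC decoding: take the most likely token at each
   time step, collapse consecutive repeats, then drop the blank token *)
netevaluate[audio_Audio] := Module[{probs, best, collapsed},
  probs = net[audio];                  (* per-time-step probability vectors *)
  best = Last[Ordering[#]] & /@ probs; (* index of the most probable token *)
  collapsed = First /@ Split[best];    (* merge consecutive duplicates *)
  StringJoin[DeleteCases[classes[[collapsed]], "<blank>"]]
]
```

Greedy decoding is a rough stand-in for the beam search used by the original implementation; it is enough to illustrate the pipeline but produces lower-quality transcriptions.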
Basic usage
Record an audio sample and transcribe it:
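A sketch using AudioCapture, assuming an evaluation function named netevaluate as described in the previous section (the name is an assumption):

```wolfram
audio = AudioCapture[]  (* record from the default input device *)
netevaluate[audio]
```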
Performance evaluation
Models trained with CTC loss often have difficulty producing correctly spelled output:
The standard solution to this problem is to incorporate a language model into the beam search decoding procedure, as done in the original Deep Speech 2 implementation.
Visualization
Write a function to visualize the token probabilities for each time step:
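One possible sketch, assuming probs is a matrix of per-time-step probability vectors and classes is the corresponding token list; the function name and tick layout are assumptions:

```wolfram
plotProbabilities[probs_, classes_] := MatrixPlot[
  Transpose[probs],                 (* rows: tokens, columns: time steps *)
  FrameTicks -> {Transpose[{Range[Length[classes]], classes}], Automatic},
  FrameLabel -> {"token", "time step"},
  AspectRatio -> 1/3
]
```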
Inspect the character probabilities of the sample. The <blank> token is the blank symbol used by nets trained with CTC loss:
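A sketch assuming the net, the recorded audio, the token list and a visualization helper like the one the previous section asks for (all names are assumptions):

```wolfram
probs = net[audio];                 (* per-time-step token probabilities *)
plotProbabilities[probs, classes]
```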
Net information
Inspect the sizes of all arrays in the net:
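This is typically done with NetInformation, assuming net holds the model from the resource retrieval step:

```wolfram
NetInformation[net, "ArraysElementCounts"]
```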
Obtain the total number of parameters:
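A sketch using the corresponding NetInformation property:

```wolfram
NetInformation[net, "ArraysTotalElementCount"]
```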
Obtain the layer type counts:
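Again via NetInformation:

```wolfram
NetInformation[net, "LayerTypeCounts"]
```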
Display the summary graphic:
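A sketch using the summary-graphic property of NetInformation:

```wolfram
NetInformation[net, "SummaryGraphic"]
```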
Export to MXNet
Export the net into a format that can be opened in MXNet:
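A sketch using the Wolfram Language's MXNet export format; the output file name is an assumption:

```wolfram
Export["net.json", NetModel["Deep Speech 2 Trained on Baidu English Data"], "MXNet"]
```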
Export also creates a net.params file containing parameters:
Get the size of the parameter file:
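Assuming the export above wrote its parameters alongside the JSON file:

```wolfram
FileByteCount["net.params"]
```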
The size is similar to the byte count of the resource object:
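A sketch of the comparison, assuming the resource object exposes a "ByteCount" property:

```wolfram
ResourceObject["Deep Speech 2 Trained on Baidu English Data"]["ByteCount"]
```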
Requirements
Wolfram Language 11.3 (March 2018) or above
Resource History
Reference