This page documents the capabilities for "online decoding" in Kaldi.
By "online decoding" we mean decoding where the features are coming in in real time, and you don't want to wait until all the audio is captured before starting the online decoding. (We're not using the phrase "real-time decoding" because "real-time decoding" can also be used to mean decoding whose speed is not slower than real time, even if it is applied in batch mode).
Note: see Online Recognizers for some now-deprecated online recognizers.
The approach that we took with Kaldi was to focus for the first few years on offline recognition, in order to reach state-of-the-art performance as quickly as possible. Now we are making more of an effort to support online decoding.
There are two online-decoding setups: the "old" online-decoding setup, in the subdirectories online/ and onlinebin/, and the "new" decoding setup, in online2/ and online2bin/. The "old" online-decoding setup is now deprecated, and may eventually be removed from the trunk (but remain in ^/branches/complete).
There is some documentation for the older setup here, but we recommend reading this page first.
In Kaldi we aim to provide facilities for online decoding as a library; that is, we aim to provide the functionality for online decoding but not necessarily command-line tools for it. The reason is that different people's requirements will differ greatly depending on how the data is captured and transmitted. In the "old" online-decoding setup we provided facilities for transferring data over UDP and the like, but in the "new" online-decoding setup our only aim is to demonstrate the internal code; for now we don't provide any example programs that you could hook up to actual real-time audio capture, so you would have to do that yourself.
We have decoding programs for GMM-based models (see next section) and for neural net models (see section Neural net based online decoding with iVectors).
The program online2-wav-gmm-latgen-faster.cc is currently the primary example program for the GMM-based online-decoding setup. It reads in whole wave files, but internally it processes them chunk by chunk with no dependency on the future. The example script egs/rm/s5/local/online/run_gmm.sh shows how to build models suitable for this program and how to evaluate them. The main purpose of the program is to apply the GMM-based online-decoding procedure within a typical batch-processing framework, so that you can easily evaluate word error rates. We plan to add similar programs for SGMMs and DNNs. In order to actually do online decoding, you would have to modify this program. We should note (and this is obvious to speech recognition people but not to outsiders) that the audio sampling rate needs to exactly match what was used in training (downsampling a higher-rate recording will work, but upsampling a lower-rate one will not).
In Kaldi, when we use the term "decoder" we don't generally mean the entire decoding program; we mean the inner decoder object, generally of type LatticeFasterDecoder. This object takes the decoding graph (as an FST) and the decodable object (see The Decodable interface). All the decoders naturally support online decoding; it is the code in the decoding program (but outside of the decoder) that needs to change. We should note, though, that there is a difference in how you need to invoke the decoder for online decoding.
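To make that difference concrete, here is a rough sketch of the online calling pattern, assuming the InitDecoding() / AdvanceDecoding() / FinalizeDecoding() interface of LatticeFasterDecoder; it is not one of Kaldi's example programs, and the feed_new_audio and input_finished callbacks are hypothetical placeholders for whatever audio-capture code you hook up yourself.

// Sketch only: in batch mode you would instead call decoder.Decode(decodable)
// once, after all the features are available.
#include <functional>
#include "decoder/lattice-faster-decoder.h"
#include "itf/decodable-itf.h"

void DecodeAsAudioArrives(const fst::Fst<fst::StdArc> &decode_fst,
                          const kaldi::LatticeFasterDecoderConfig &config,
                          kaldi::DecodableInterface *decodable,
                          std::function<void()> feed_new_audio,
                          std::function<bool()> input_finished) {
  kaldi::LatticeFasterDecoder decoder(decode_fst, config);
  decoder.InitDecoding();
  while (!input_finished()) {
    feed_new_audio();                    // push captured samples into the
                                         // feature pipeline behind 'decodable'
    decoder.AdvanceDecoding(decodable);  // decode only the frames ready so far
  }
  decoder.FinalizeDecoding();
  // decoder.GetBestPath(...) or decoder.GetRawLattice(...) gives the result.
}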
We should mention here that the old online setup has a decoder called OnlineFasterDecoder. Do not assume from its name that it is the only decoder to support online decoding. The special thing about the OnlineFasterDecoder is that it can work out which words are going to be "inevitably" decoded regardless of what audio data comes in in the future, so that you can output those words early. This is useful in an online-transcription context, and if there seems to be demand for this, we may move that decoder from online/ into the decoder/ directory and make it compatible with the new online setup.
Most of the complexity in online decoding relates to feature extraction and adaptation.
In online-feature.h we provide classes that provide various components of feature extraction, all inheriting from class OnlineFeatureInterface. OnlineFeatureInterface is a base class for online feature extraction. The interface specifies how the object provides features to the caller (OnlineFeatureInterface::GetFrame()) and how it reports how many frames are ready (OnlineFeatureInterface::NumFramesReady()), but it does not specify how the object obtains those features; that is up to the child class.
In online-feature.h we define the classes OnlineMfcc and OnlinePlp, which are the lowest-level features. They have a member function OnlineMfccOrPlp::AcceptWaveform(), which the user should call as data is captured. All the other online feature types in online-feature.h are "derived" features: they take an object of type OnlineFeatureInterface in their constructor and read their input features through a stored pointer to that object.
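As a concrete illustration, here is a minimal sketch of how these classes fit together for a single utterance; it is not taken from any Kaldi example program, and it assumes 16kHz mono audio that has already been captured into a series of chunks.

#include <vector>
#include "feat/online-feature.h"

void ExtractOnlineFeatures(
    const std::vector<kaldi::Vector<kaldi::BaseFloat> > &chunks) {
  using namespace kaldi;
  MfccOptions mfcc_opts;                         // frame-extraction / MFCC settings
  OnlineMfcc mfcc(mfcc_opts);                    // lowest-level ("base") features
  DeltaFeaturesOptions delta_opts;
  OnlineDeltaFeature deltas(delta_opts, &mfcc);  // a "derived" feature: reads its
                                                 // input via OnlineFeatureInterface
  Vector<BaseFloat> frame(deltas.Dim());
  int32 num_frames_output = 0;
  for (size_t i = 0; i < chunks.size(); i++) {
    mfcc.AcceptWaveform(16000.0, chunks[i]);     // call this as audio is captured
    if (i + 1 == chunks.size())
      mfcc.InputFinished();                      // flush the last partial frame
    for (; num_frames_output < deltas.NumFramesReady(); num_frames_output++) {
      deltas.GetFrame(num_frames_output, &frame);
      // ... hand 'frame' to the decodable object / decoder ...
    }
  }
}

In the online2 decoding code this kind of wiring is handled for you by higher-level feature-pipeline classes, so you would not normally write it by hand.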
The only part of the online feature extraction code in online-feature.h that is non-trivial is the cepstral mean and variance normalization (CMVN); note that the fMLLR (linear transform) estimation is also non-trivial, but its complexity lies elsewhere. We describe the CMVN below.
Cepstral mean normalization is a normalization method in which the mean of the data (typically of the raw MFCC features) is subtracted. "Cepstral" simply refers to the normal feature type; the first C in MFCC stands for "cepstral" (the cepstrum is the inverse Fourier transform of the log spectrum, although in practice it is the cosine transform that is used). In cepstral variance normalization, each feature dimension is additionally scaled so that its variance is one. In all the current scripts we turn cepstral variance normalization off and only use cepstral mean normalization, but the same code handles both. In the discussion below, for brevity we will refer only to cepstral mean normalization.
In the Kaldi scripts, cepstral mean and variance normalization (CMVN) is generally done on a per-speaker basis. Obviously in an online-decoding context, this is impossible to do because it is "non-causal" (the current feature depends on future features).
The basic solution we use is to do "moving-window" cepstral mean normalization. We accumulate the mean over a moving window of, by default, 6 seconds (see the "--cmn-window" option to programs in online2bin/, which defaults to 600). The options class for this computation, OnlineCmvnOptions, also has extra configuration variables, speaker-frames (default: 600) and global-frames (default: 200). These specify how we make use of prior information from the same speaker, or a global average of the cepstra, to improve the estimate for the first few seconds of each utterance. The program apply-cmvn-online can apply this normalization as part of a training pipeline so that we can train on matched features.
The OnlineCmvn class has functions GetState and SetState that make it possible to keep track of the state of the CMVN computation between utterances of the same speaker. It also has a function Freeze(). This function causes it to freeze the state of the cepstral mean normalization at a particular value, so that after calling Freeze(), any calls to GetFrame(), even for earlier times, will apply the mean offset that we were using when the user called Freeze(). This frozen state will also be propagated to future utterances of the same speaker via the GetState and SetState function calls. The reason we do this is that we don't believe it makes sense to do speaker adaptation with fMLLR on top of a constantly varying CMN offset. So when we start estimating fMLLR (see below), we freeze the CMN state and leave it fixed from then on. The value of CMN at the time we freeze it is not especially critical, because fMLLR subsumes CMN. The reason we freeze the CMN state to a particular value, rather than just skipping the CMN once we start estimating fMLLR, is that we are actually using a method called basis-fMLLR (again, see below) in which we incrementally estimate the parameters, and it is not completely invariant to offsets.
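Below is a minimal sketch (again, not from an actual Kaldi program) of how the CMVN state might be carried from one utterance to the next for a single speaker, and how Freeze() is used. It assumes the OnlineCmvn interface described above; the point at which Freeze() is called here is arbitrary, whereas in the real GMM decoding code it is called when fMLLR estimation starts.

#include "feat/online-feature.h"

void RunOneUtterance(kaldi::OnlineFeatureInterface *base_features,
                     kaldi::OnlineCmvnState *speaker_state) {
  using namespace kaldi;
  OnlineCmvnOptions cmvn_opts;         // cmn_window, speaker/global frames, etc.
  // Start this utterance from the state left behind by the previous one
  // (a default-constructed OnlineCmvnState is fine for the first utterance):
  OnlineCmvn cmvn(cmvn_opts, *speaker_state, base_features);

  // ... feed audio into 'base_features' and decode, reading normalized
  // frames with cmvn.GetFrame(t, &frame) ...

  int32 last_frame = cmvn.NumFramesReady() - 1;
  if (last_frame >= 0) {
    // When we start estimating fMLLR we fix the CMN offset; after this,
    // GetFrame() applies the same offset even for earlier frames:
    cmvn.Freeze(last_frame);
    // Save the state (including the frozen offset) so that the next
    // utterance of this speaker can start from it:
    cmvn.GetState(last_frame, speaker_state);
  }
}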
The most standard adaptation method used for speech recognition is feature-space Maximum Likelihood Linear Regression (fMLLR), also known in the literature as Constrained MLLR (CMLLR), but we use the term fMLLR in the Kaldi code and documentation. fMLLR consists of an affine (linear + offset) transform of the features; the number of parameters is d * (d+1), where d is the final feature dimension (typically 40). In the online-decoding program we use a "basis" method to incrementally estimate an increasing number of transform parameters as more data is decoded. The top-level logic for this, at the decoder level, is mostly implemented in class SingleUtteranceGmmDecoder.
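For reference (this is just the textbook definition of fMLLR in our own notation, not something taken from the Kaldi code), the transform applied to each feature vector is

\[
  \hat{x}_t \;=\; A x_t + b \;=\; W \begin{bmatrix} x_t \\ 1 \end{bmatrix},
  \qquad W = [\,A \;\; b\,] \in \mathbb{R}^{d \times (d+1)},
\]

which is where the d * (d+1) parameter count comes from. In the basis method, W is expressed as a default transform plus a weighted sum of basis matrices, and only the leading basis coefficients are estimated when little data is available, which is what lets the number of estimated parameters grow as more data is decoded.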
The fMLLR estimation is done not continuously but periodically, since it involves computing lattice posteriors and this can't very easily be done in a continuous manner. Configuration variables in class OnlineGmmDecodingAdaptationPolicyConfig determine when we re-estimate fMLLR. The current default is, during the first utterance, to estimate it after 2 seconds and thereafter at times that increase geometrically with ratio 1.5 (so at 2 seconds, 3 seconds, 4.5 seconds, and so on). For later utterances we estimate it after 5 seconds, 10 seconds, 20 seconds and so on. For all utterances we estimate it at the end of the utterance.
Note that the CMN adaptation state is frozen, as mentioned above, the first time we estimate fMLLR for a speaker, which by default will be two seconds into the first utterance.
In the online-decoding code for GMMs, in online-gmm-decoding.h, up to three models can be supplied. These are held in class OnlineGmmDecodingModels, which takes care of the logic necessary to decide which model to use for each purpose if fewer models are supplied. The three models are a speaker-independent model (used for the first, unadapted decoding pass), a speaker-adapted model estimated with fMLLR features, and, optionally, a discriminatively trained version of the speaker-adapted model.
Our best online-decoding setup, which we recommend should be used, is the neural net based setup. The adaptation philosophy is to give the neural net unadapted and non-mean-normalized features (MFCCs, in our example recipes), and also to give it an iVector. An iVector is a vector of dimension in the hundreds (one or two hundred, in this particular context) which represents the speaker properties. For more information on this the reader can look at the speaker identification literature. Our idea is that the iVector gives the neural net as much as it needs to know about the speaker properties. This has proved quite useful. The iVector is estimated in a left-to-right way, meaning that at a certain time t, it sees input from time zero up to time t. It also sees information from previous utterances of the current speaker, if available. The iVector estimation is Maximum Likelihood, involving Gaussian Mixture Models.
If pitch is used (e.g. for tonal languages), we don't include it in the features used for iVector estimation, in order to simplify things; we just include it in the features given to the neural network. We don't yet have example scripts for the online-neural-net decoding for tonal languages; it is still being debugged.
The neural nets in our example scripts for online decoding are p-norm neural networks, typically trained in parallel on several GPUs. We have example scripts for several different setups, e.g. in egs/rm/s5, egs/wsj/s5, egs/swbd/s5b, and egs/fisher_english/s5. The top-level example script is always called local/online/run_nnet2.sh. In the case of the Resource Management recipe there is also a script local/online/run_nnet2_wsj.sh, which demonstrates how to take a larger neural net trained on out-of-domain speech with the same sampling rate (in this example, WSJ) and retrain it on in-domain data. In this way we obtained our best-ever results on RM.
We are currently working on example scripts for discriminative training for this setup.
In this section we will explain how to download already-built online-nnet2 models from www.kaldi-asr.org and evaluate them on your own data.
The reader can download the models and other related files from http://kaldi-asr.org/downloads/build/2/sandbox/online/egs/fisher_english/s5 , which were built using the fisher_english recipe. To use the online-nnet2 models, the reader only needs to download two directories: exp/tri5a/graph and exp/nnet2_online/nnet_a_gpu_online. Use the following commands to download the archives and extract them:
wget http://kaldi-asr.org/downloads/build/5/trunk/egs/fisher_english/s5/exp/nnet2_online/nnet_a_gpu_online/archive.tar.gz -O nnet_a_gpu_online.tar.gz
wget http://kaldi-asr.org/downloads/build/2/sandbox/online/egs/fisher_english/s5/exp/tri5a/graph/archive.tar.gz -O graph.tar.gz
mkdir -p nnet_a_gpu_online graph
tar zxvf nnet_a_gpu_online.tar.gz -C nnet_a_gpu_online
tar zxvf graph.tar.gz -C graph
Here the archives are extracted to the local directory. We need to modify pathnames in the config files, which we can do as follows:
for x in nnet_a_gpu_online/conf/*conf; do
  cp $x $x.orig
  sed s:/export/a09/dpovey/kaldi-clean/egs/fisher_english/s5/exp/nnet2_online/:$(pwd)/: < $x.orig > $x
done
Next, choose a single wav file to decode. The reader can download a sample file by typing
wget http://www.signalogic.com/melp/EngSamples/Orig/ENG_M.wav
This is an 8kHz-sampled wav file that we found online (unfortunately it is UK English, so the accuracy is not very good). It can be decoded with the following command:
~/kaldi-online/src/online2bin/online2-wav-nnet2-latgen-faster --do-endpointing=false \
  --online=false \
  --config=nnet_a_gpu_online/conf/online_nnet2_decoding.conf \
  --max-active=7000 --beam=15.0 --lattice-beam=6.0 \
  --acoustic-scale=0.1 --word-symbol-table=graph/words.txt \
  nnet_a_gpu_online/final.mdl graph/HCLG.fst \
  "ark:echo utterance-id1 utterance-id1|" "scp:echo utterance-id1 ENG_M.wav|" \
  ark:/dev/null
We added the --online=false option because it tends to slightly improve results. You can see the result in the logging output (although there are other ways to retrieve this). For us, the logging output was as follows:
/home/dpovey/kaldi-online/src/online2bin/online2-wav-nnet2-latgen-faster --do-endpointing=false --online=false --config=nnet_a_gpu_online/conf/online_nnet2_decoding.conf --max-active=7000 --beam=15.0 --lattice-beam=6.0 --acoustic-scale=0.1 --word-symbol-table=graph/words.txt nnet_a_gpu_online/smbr_epoch2.mdl graph/HCLG.fst 'ark:echo utterance-id1 utterance-id1|' 'scp:echo utterance-id1 ENG_M.wav|' ark:/dev/null
LOG (online2-wav-nnet2-latgen-faster:ComputeDerivedVars():ivector-extractor.cc:180) Computing derived variables for iVector extractor
LOG (online2-wav-nnet2-latgen-faster:ComputeDerivedVars():ivector-extractor.cc:201) Done.
utterance-id1 tons of who was on the way for races two miles and then in nineteen ninety to buy sodas sale the rate them all these to commemorate columbus is drawn into the new world five hundred years ago on the one to the moon is to promote the use of so the sales in space exploration
LOG (online2-wav-nnet2-latgen-faster:main():online2-wav-nnet2-latgen-faster.cc:253) Decoded utterance utterance-id1
LOG (online2-wav-nnet2-latgen-faster:Print():online-timing.cc:51) Timing stats: real-time factor for offline decoding was 1.62102 = 26.7482 seconds / 16.5009 seconds.
LOG (online2-wav-nnet2-latgen-faster:main():online2-wav-nnet2-latgen-faster.cc:259) Decoded 1 utterances, 0 with errors.
LOG (online2-wav-nnet2-latgen-faster:main():online2-wav-nnet2-latgen-faster.cc:261) Overall likelihood per frame was 0.230575 per frame over 1648 frames.
Note that for mismatched data, the iVector estimation can sometimes get confused and lead to bad results. Something that we have found useful is to down-weight silence in the iVector estimation. To do this you can set e.g. --ivector-silence-weighting.silence-weight=0.001; you also need to set the silence phones as appropriate, e.g. --ivector-silence-weighting.silence-phones=1:2:3:4 (this should be a list of silence or noise phones in your phones.txt; you can experiment with which ones to include).
Often users will need to use their own language model to improve recognition accuracy. In this section we will explain how to build a language model with SRILM, and how to incorporate it into the existing online-nnet2 models.
We first have to build an ARPA-format language model with SRILM. Note that SRILM comes with a lot of training options, and we assume it is the user's responsibility to figure out the best settings for their own application. Suppose "train.txt" is our language model training corpus (e.g., training data transcriptions), and "wordlist" is our vocabulary. Here we assume the language model vocabulary is the same as the recognizer's vocabulary, i.e., it contains only the words from data/lang/words.txt, except for the epsilon symbol "<eps>" and the disambiguation symbol "#0". We will explain how to use a different vocabulary in the next section. We can build a trigram Kneser-Ney language model using the following SRILM command:
ngram-count -text train.txt -order 3 -limit-vocab -vocab wordlist -unk \
  -map-unk "<unk>" -kndiscount -interpolate -lm srilm.o3g.kn.gz
Now that we have trained the ARPA-format language model, we have to compile it into WFST format. Let's first define the following variables:
dict_dir=data/local/dict    # The dict directory provided by the online-nnet2 models
lm=srilm.o3g.kn.gz          # ARPA format LM you just built.
lang=data/lang              # Old lang directory provided by the online-nnet2 models
lang_own=data/lang_own      # New lang directory we are going to create, which contains the new language model
Given the above variables, we can compile the ARPA-format language model into WFST format using the following command:
utils/format_lm.sh $lang $lm $dict_dir/lexicon.txt $lang_own
Now, we can compile the decoding graph with the new language model, using the following command:
graph_own_dir=$model_dir/graph_own
utils/mkgraph.sh $lang_own $model_dir $graph_own_dir || exit 1;
where $model_dir is the model directory which contains the model "final.mdl" and the tree "tree". At this point, we can use $graph_own_dir/HCLG.fst, which incorporates the language model we just built, to replace the old HCLG.fst.
For most applications users will also have to change the recognizer's existing vocabulary, for example by adding out-of-vocabulary words such as person names. In this section we will explain how this can be done.
We first have to create a new pronunciation lexicon, typically by adding more words to the recognizer's existing pronunciation lexicon. The recognizer's lexicon that we are going to modify is usually located at $dict_dir/lexicon.txt, where $dict_dir is the recognizer's dictionary directory, usually data/local/dict. The new lexicon can be created manually by adding new lexical entries to $dict_dir/lexicon.txt. If we do not have pronunciations for the new words, we can use grapheme-to-phoneme (G2P) conversion to generate pronunciations automatically. The commonly used G2P tools are Sequitur and Phonetisaurus; the latter is usually much faster.
The second step is to create a dictionary directory for our new lexicon, which contains the required files, for example lexicon.txt, lexiconp.txt, etc. If we do not change the lexicon's phone set, the old files such as extra_questions.txt, nonsilence_phones.txt, optional_silence.txt and silence_phones.txt can most likely be re-used. For details of how to create those files, we suggest that users follow the existing Kaldi scripts, for example egs/wsj/s5/local/wsj_prepare_dict.sh. The format of the dictionary directory is described here.
Now we can create a new lang directory with the updated lexicon. Suppose $lang is the recognizer's old lang directory, $lang_own is the new lang directory that we are going to create, $dict_own is the dictionary directory we just created, and "<SPOKEN_NOISE>" is the word symbol that represents out-of-vocabulary words in the lexicon. We can then generate the new lang directory with the updated lexicon using the following command:
lang_own_tmp=data/local/lang_own_tmp/  # Temporary directory.
utils/prepare_lang.sh \
  --phone-symbol-table $lang/phones.txt \
  $dict_own "<SPOKEN_NOISE>" $lang_own_tmp $lang_own
Make sure you use the option "--phone-symbol-table", which ensures that the phones in your new lexicon will be compatible with the recognizer.
The last step, of course, is to update the decoding graph, using the following command:
graph_own_dir=$model_dir/graph_own
utils/mkgraph.sh $lang_own $model_dir $graph_own_dir || exit 1;
where $model_dir is the model directory which contains the model "final.mdl" and the tree "tree". We can now use $graph_own_dir/HCLG.fst to replace the old HCLG.fst.
Online decoding with nnet3 models is basically the same as with nnet2 models, as described in Neural net based online decoding with iVectors. However, there are some limitations on the type of model you can use. In Kaldi 5.0 and earlier, online nnet3 decoding does not support recurrent models. In Kaldi 5.1 and later, online nnet3 decoding supports "forward" recurrent models such as LSTMs, but not bidirectional ones like BLSTMs. In addition, online nnet3 decoding with recurrent models may not give optimal results unless you use a "Kaldi-5.1-style" configuration, including the "decay-time" option and specifying --extra-left-context-initial 0; see Context and chunk-size in the "nnet3" setup for more discussion of these issues.
Many of the issues in online nnet3 decoding are the same as in nnet2 decoding, and the command lines are quite similar. For online nnet3 decoding with Kaldi 5.1 and later, the best example script for online decoding, including model training, is at the time of writing egs/tedlium/s5_r2/local/chain/tuning/run_tdnn_lstm_1e.sh (at the time of writing this is only available in the 'shortcut' branch, as Kaldi 5.1 has not yet been merged to master); for downloadable models that can be used with online nnet3 decoding, please see http://kaldi-asr.org/models.html (the first model there, the ASPIRE model, includes instructions in a README file).
The program to run the TCP server is online2-tcp-nnet3-decode-faster, located in the ~/src/online2bin folder. The usage is as follows:
online2-tcp-nnet3-decode-faster <nnet3-in> <fst-in> <word-symbol-table>
For example:
online2-tcp-nnet3-decode-faster model/final.mdl graph/HCLG.fst graph/words.txt
The word symbol table is mandatory (unlike other nnet3 online decoding programs) because the server outputs word strings. Endpointing is mandatory to make the operation of the program reasonable. Other, non-standard options include the audio sampling frequency (--samp-freq), the port the server listens on (--port-num), and the read timeout discussed below (--read-timeout).
The TCP protocol simply takes a raw signal on input (16-bit signed-integer encoding at the chosen sampling frequency) and outputs plain text using the following logic: as decoding progresses, the current partial result for the utterance is written out repeatedly, each time terminated by a carriage-return character ('\r'), and once the utterance is finished the final result is written out terminated by a newline character ('\n'). Each output string (delimited by '\r') should be treated as uncertain and can change entirely until the utterance delimiter ('\n') is sent. The delimiter characters are chosen specifically to make the output look neat in the terminal. It is possible to use the server with other interfaces, and a web demo (HTML/JS AudioAPI + WebSockets) exists.
To run the program from the terminal you can use one of the following commands. First, make sure the server is running and accepting connections. Using the Aspire models, the command should look like this:
online2-tcp-nnet3-decode-faster --samp-freq=8000 --frames-per-chunk=20 \
  --extra-left-context-initial=0 --frame-subsampling-factor=3 \
  --config=model/conf/online.conf --min-active=200 --max-active=7000 \
  --beam=15.0 --lattice-beam=6.0 --acoustic-scale=1.0 --port-num=5050 \
  model/final.mdl graph/HCLG.fst graph/words.txt
Note that in order to make the communication as simple as possible, the server has to accept any data on input and cannot figure out when the stream is over. It will therefore not be able to terminate the connection, and it is the client's responsibility to disconnect when it is ready to do so. As a fallback for certain situations, the read-timeout option was added, which will automatically disconnect after a chosen number of seconds has passed. Keep in mind that this is not an ideal solution, and it is a better idea to design your client to properly close the connection when necessary.
For testing purposes, we will use the netcat program. We will also use sox to re-encode files properly from any source. Netcat has an issue that, similarly to what was stated above about the server, it cannot always interpret the data and usually it won't automatically disconnect the TCP connection. To get around this, we will use the '-N' switch, which kills the connection once streaming of the file is complete, but this can have a small side effect of not reading the whole output from the Kaldi server if the disconnect comes too fast. Just keep this in mind if you intend to use any of these programs in a production environment.
To send a WAV file into the server, it first needs to be decoded into raw audio, then it can be sent to the socket:
sox audio.wav -t raw -c 1 -b 16 -r 8k -e signed-integer - | nc -N localhost 5050
It is possible to play the audio (almost) simultaneously with decoding. This may require installing the 'pv' program (used to throttle the signal into Kaldi at the same speed as the playback):
sox audio.wav -t raw -c 1 -b 16 -r 8k -e signed-integer - | \
  tee >(play -t raw -r 8k -e signed-integer -b 16 -c 1 -q -) | \
  pv -L 16000 -q | nc -N localhost 5050
Finally, it is possible to send audio from the microphone directly into the server:
rec -r 8k -e signed-integer -c 1 -b 16 -t raw -q - | nc -N localhost 5050
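Finally, if you want to talk to the server from your own code rather than through netcat, the following is a minimal, self-contained client sketch written for illustration; it is not part of Kaldi. The host, port and buffer size are assumptions, and a production client would need proper error handling and would decide for itself when to disconnect, as discussed above.

// Minimal TCP client sketch (not part of Kaldi): stream raw 16-bit audio from
// stdin to the server and print the text it sends back.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
  const char *host = "127.0.0.1";  // assumption: server runs on this machine
  const int port = 5050;           // must match --port-num given to the server
  int sock = socket(AF_INET, SOCK_STREAM, 0);
  sockaddr_in addr;
  std::memset(&addr, 0, sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_port = htons(port);
  inet_pton(AF_INET, host, &addr.sin_addr);
  if (connect(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
    std::perror("connect");
    return 1;
  }
  pollfd fds[2] = {{STDIN_FILENO, POLLIN, 0}, {sock, POLLIN, 0}};
  bool sending = true;
  char buf[4096];
  while (true) {
    if (poll(fds, 2, -1) < 0)
      break;
    if (sending && (fds[0].revents & (POLLIN | POLLHUP))) {
      ssize_t n = read(STDIN_FILENO, buf, sizeof(buf));
      if (n <= 0) {
        shutdown(sock, SHUT_WR);     // no more audio; keep reading results
        sending = false;
        fds[0].events = 0;
      } else if (write(sock, buf, n) != n) {  // (a real client would handle
        break;                                //  partial writes)
      }
    }
    if (fds[1].revents & (POLLIN | POLLHUP)) {
      ssize_t n = read(sock, buf, sizeof(buf));
      if (n <= 0)                    // server closed the connection, e.g.
        break;                       // because --read-timeout expired
      std::fwrite(buf, 1, n, stdout);  // partial results end in '\r', final
      std::fflush(stdout);             // results in '\n', as described above
    }
  }
  close(sock);
  return 0;
}

You could compile this with g++ and pipe the output of the sox or rec commands above into it in place of nc.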