In signal processing, Feature space Maximum Likelihood Linear Regression (fMLLR) is a global feature transform that is typically applied in a speaker-adaptive way: fMLLR maps acoustic features to speaker-adapted features by multiplication with a transformation matrix. In some literature, fMLLR is also known as Constrained Maximum Likelihood Linear Regression (cMLLR).
fMLLR transformations are trained in a maximum likelihood (ML) sense on adaptation data. Such transformations can be estimated in many ways, but fMLLR uses only ML estimation: the transformation is trained on a particular set of adaptation data so as to maximize the likelihood of that data given the current model set.
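Schematically, writing the transform as $W = [A \; b]$ and applying it to the features, the ML criterion can be stated as

$$W^* = \arg\max_W \sum_t \log \left( |\det A| \; p(W\hat{x}(t) \mid \mathcal{M}) \right)$$

where $\mathcal{M}$ is the current model set, $\hat{x}(t)$ is the extended feature vector at time $t$ (defined below), and the $|\det A|$ Jacobian term arises because the transform is applied to the features rather than to the model. This is a schematic statement of the objective, not the full auxiliary function used in estimation.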
This technique is a widely used approach for speaker adaptation in HMM-based speech recognition.[1][2] Later research[3] also shows that fMLLR features are an excellent acoustic input for DNN/HMM[4] hybrid speech recognition models.
The advantages of fMLLR include the following:

Major problems and disadvantages of fMLLR:
The fMLLR feature transform can be easily computed with the open-source speech recognition toolkit Kaldi; the Kaldi script uses the standard estimation scheme described in Appendix B of the original paper, in particular the section Appendix B.1, "Direct method over rows".
In the Kaldi formulation, fMLLR is an affine feature transform of the form $x \mapsto Ax + b$, which can be written as $x \mapsto W\hat{x}$, where $\hat{x} = \begin{bmatrix} x \\ 1 \end{bmatrix}$ is the feature vector with a 1 appended. Note that some literature uses the opposite convention, $\hat{x} = \begin{bmatrix} 1 \\ x \end{bmatrix}$, with the 1 appended first.
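For illustration, applying the transform is a single matrix-vector product. Below is a minimal NumPy sketch in which W is a random stand-in for an estimated $D \times (D+1)$ fMLLR transform, not a transform estimated from real adaptation data:

import numpy as np

D = 40                             # feature dimension
W = np.random.randn(D, D + 1)      # stand-in for an estimated transform [A | b]
A, b = W[:, :D], W[:, D]

x = np.random.randn(D)             # one acoustic feature vector
x_hat = np.append(x, 1.0)          # extended vector [x; 1]

assert np.allclose(W @ x_hat, A @ x + b)   # W x_hat is exactly A x + b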
The sufficient statistics stored are:
$$K = \sum_{t,j,m} \gamma_{j,m}(t) \, \Sigma_{jm}^{-1} \mu_{jm} \, x^+(t)^T$$

where $\mu_{jm}$ is the mean and $\Sigma_{jm}^{-1}$ the inverse covariance of Gaussian $m$ of state $j$, and, for each row index $0 \leq i < D$, where $D$ is the feature dimension:

$$G(i) = \sum_{t,j,m} \gamma_{j,m}(t) \, \frac{1}{\sigma^2_{jm}(i)} \, x^+(t) \, x^+(t)^T$$

where $\gamma_{j,m}(t)$ is the occupation probability of Gaussian $m$ of state $j$ at time $t$, $\sigma^2_{jm}(i)$ is the $i$-th diagonal element of $\Sigma_{jm}$, and $x^+(t) = \begin{bmatrix} x(t) \\ 1 \end{bmatrix}$ is the extended feature vector.
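As an illustration of how these statistics are accumulated, the following NumPy sketch uses random stand-ins for the posteriors, means, and diagonal variances of a real model (the state index $j$ is folded into the Gaussian index $m$ for brevity):

import numpy as np

T, M, D = 100, 8, 40                 # frames, Gaussians, feature dimension
X = np.random.randn(T, D)            # acoustic features x(t)
gamma = np.random.dirichlet(np.ones(M), size=T)   # gamma[t, m]: posterior of Gaussian m at time t
mu = np.random.randn(M, D)           # Gaussian means
var = np.random.rand(M, D) + 0.1     # diagonal variances sigma^2

K = np.zeros((D, D + 1))
G = np.zeros((D, D + 1, D + 1))      # one G(i) per row of the transform

for t in range(T):
    x_plus = np.append(X[t], 1.0)    # extended vector x^+(t)
    outer = np.outer(x_plus, x_plus) # x^+(t) x^+(t)^T
    for m in range(M):
        # K accumulates Sigma^{-1} mu x^{+T}, weighted by the posterior
        K += gamma[t, m] * np.outer(mu[m] / var[m], x_plus)
        for i in range(D):
            # G(i) accumulates x^+ x^{+T} / sigma^2(i), weighted by the posterior
            G[i] += gamma[t, m] * outer / var[m, i]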
For a thorough review of fMLLR and the commonly used estimation techniques, see the original paper, "Maximum likelihood linear transformations for HMM-based speech recognition".
Note that the Kaldi script that performs the fMLLR feature transform differs from the paper by using a column of the inverse matrix in place of the cofactor row. In other words, the factor of the determinant is ignored, as it does not affect the transform result and carries a risk of numerical underflow or overflow.
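The identity behind this substitution is that row $i$ of the cofactor matrix of $A$ equals $\det(A)$ times column $i$ of $A^{-1}$, so dropping the determinant only rescales the row. A quick NumPy check of this identity, with the cofactors computed independently from the minors:

import numpy as np

def cofactor_row(A, i):
    # Row i of the cofactor matrix, computed from the minors of A.
    n = A.shape[0]
    row = np.empty(n)
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        row[j] = (-1) ** (i + j) * np.linalg.det(minor)
    return row

A = np.random.randn(5, 5)
i = 2
assert np.allclose(cofactor_row(A, i), np.linalg.det(A) * np.linalg.inv(A)[:, i])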
Experimental results show that using fMLLR features in speech recognition yields consistent improvements over other acoustic features on various commonly used benchmark datasets (TIMIT, LibriSpeech, etc.).
In particular, fMLLR features outperform MFCC and FBANK coefficients, which is mainly due to the speaker adaptation process that fMLLR performs.
Below, the phoneme error rate (PER, %) on the test set of TIMIT is reported for various neural architectures:
Architecture | MFCC | FBANK | fMLLR
MLP | 18.2 | 18.7 | 16.7
RNN | 17.7 | 17.2 | 15.9
LSTM | 15.1 | 14.3 | 14.5
GRU | 16.0 | 15.2 | 14.9
Li-GRU | 15.3 | 14.9 | 14.2
Here, the MLP (multilayer perceptron) serves as a simple baseline, while the RNN, LSTM, and GRU are well-known recurrent models.
The Li-GRU[5] architecture is based on a single gate and thus saves 33% of the computation of a standard GRU model; Li-GRU also effectively addresses the vanishing gradient problem of recurrent models.
As a result, the best performance is obtained with the Li-GRU model on fMLLR features.
fMLLR features can be extracted as reported in the s5 recipe of Kaldi.
The Kaldi scripts can extract fMLLR features on various datasets; below are the basic steps to extract fMLLR features from the open-source speech corpus LibriSpeech.
Note that the instructions below are for the subsets train-clean-100, train-clean-360, dev-clean, and test-clean, but they can easily be extended to support the other sets dev-other, test-other, and train-other-500.
Replace the files in $KALDI_ROOT/egs/librispeech/s5/ with the files in the repository. Then change $KALDI_ROOT/egs/librispeech/s5/cmd.sh to replace queue.pl with run.pl for local execution:

export train_cmd="run.pl --mem 2G"
export decode_cmd="run.pl --mem 4G"
export mkgraph_cmd="run.pl --mem 8G"
Change the data path in run.sh to your LibriSpeech data path; the directory LibriSpeech/ should be located under that path. For example:

data=/media/user/SSD # example path
Install flac with:

sudo apt-get install flac
Run run.sh for LibriSpeech at least until Stage 13 (included); for simplicity, you can use the modified run.sh. Then copy the exp/tri4b/trans.* files into exp/tri4b/decode_tgsmall_train_clean_*/ with the following command:

mkdir exp/tri4b/decode_tgsmall_train_clean_100 && cp exp/tri4b/trans.* exp/tri4b/decode_tgsmall_train_clean_100/
Finally, compute the fMLLR features by running the following script from the s5 directory:

. ./cmd.sh ## You'll want to change cmd.sh to something that will work on your system.
. ./path.sh ## Source the tools/utils (import the queue.pl)

gmmdir=exp/tri4b

for chunk in dev_clean test_clean train_clean_100 train_clean_360 ; do
    dir=fmllr/$chunk
    steps/nnet/make_fmllr_feats.sh --nj 10 --cmd "$train_cmd" \
        --transform-dir $gmmdir/decode_tgsmall_$chunk \
        $dir data/$chunk $gmmdir $dir/log $dir/data || exit 1
    compute-cmvn-stats --spk2utt=ark:data/$chunk/spk2utt scp:fmllr/$chunk/feats.scp ark:$dir/data/cmvn_speaker.ark
done

steps/align_fmllr.sh --nj 10 data/dev_clean data/lang exp/tri4b exp/tri4b_ali_dev_clean
steps/align_fmllr.sh --nj 10 data/test_clean data/lang exp/tri4b exp/tri4b_ali_test_clean
steps/align_fmllr.sh --nj 30 data/train_clean_100 data/lang exp/tri4b exp/tri4b_ali_clean_100
steps/align_fmllr.sh --nj 30 data/train_clean_360 data/lang exp/tri4b exp/tri4b_ali_clean_360

data=/user/kaldi/egs/librispeech/s5 ## You'll want to change this path to something that will work on your system.

rm -rf $data/fmllr_cmvn/
mkdir $data/fmllr_cmvn/

for part in dev_clean test_clean train_clean_100 train_clean_360; do
    mkdir $data/fmllr_cmvn/$part/
    apply-cmvn --utt2spk=ark:$data/fmllr/$part/utt2spk ark:$data/fmllr/$part/data/cmvn_speaker.ark scp:$data/fmllr/$part/feats.scp ark:- | add-deltas --delta-order=0 ark:- ark:$data/fmllr_cmvn/$part/fmllr_cmvn.ark
done

du -sh $data/fmllr_cmvn/*
echo "Done!"
python ark2libri.py
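The contents of ark2libri.py are not reproduced here; the following is a hypothetical sketch of such a conversion step, assuming the third-party kaldi_io package (pip install kaldi_io) and an output layout of one NumPy file per utterance:

import os
import numpy as np
import kaldi_io  # third-party reader for Kaldi archives

# Hypothetical sketch, not the actual ark2libri.py: dump each utterance's
# fMLLR feature matrix from the Kaldi archive to an individual .npy file.
for part in ["dev_clean", "test_clean", "train_clean_100", "train_clean_360"]:
    ark = f"fmllr_cmvn/{part}/fmllr_cmvn.ark"
    out_dir = f"fmllr_npy/{part}"            # hypothetical output directory
    os.makedirs(out_dir, exist_ok=True)
    for utt_id, feats in kaldi_io.read_mat_ark(ark):
        np.save(os.path.join(out_dir, utt_id + ".npy"), feats)  # feats: (frames, dim)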