BasisFmllrEstimate Class Reference

Estimation functions for basis fMLLR. More...

#include <basis-fmllr-diag-gmm.h>

Public Member Functions

 BasisFmllrEstimate ()
 
 BasisFmllrEstimate (int32 dim)
 
void Write (std::ostream &out_stream, bool binary) const
 Routines for reading and writing fMLLR basis matrices. More...
 
void Read (std::istream &in_stream, bool binary)
 
void EstimateFmllrBasis (const AmDiagGmm &am_gmm, const BasisFmllrAccus &basis_accus)
 Estimate the base matrices efficiently in a Maximum Likelihood manner. More...
 
void ComputeAmDiagPrecond (const AmDiagGmm &am_gmm, SpMatrix< double > *pre_cond)
 This function computes the preconditioner matrix, prior to base matrices estimation. More...
 
int32 Dim () const
 
int32 BasisSize () const
 
double ComputeTransform (const AffineXformStats &spk_stats, Matrix< BaseFloat > *out_xform, Vector< BaseFloat > *coefficients, BasisFmllrOptions options) const
 This function performs speaker adaptation, computing the fMLLR matrix based on speaker statistics. More...
 

Private Attributes

std::vector< Matrix< BaseFloat > > fmllr_basis_
 Basis matrices. More...
 
int32 dim_
 Feature dimension. More...
 
int32 basis_size_
 Number of bases, D*(D+1). More...
 

Detailed Description

Estimation functions for basis fMLLR.

Definition at line 107 of file basis-fmllr-diag-gmm.h.
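A typical workflow looks roughly like the following; this is a minimal sketch, assuming a BasisFmllrAccus that has already accumulated the training-time gradient-scatter statistics, a filled per-speaker AffineXformStats, and that the header is included as transform/basis-fmllr-diag-gmm.h (assumed path).

#include "transform/basis-fmllr-diag-gmm.h"  // assumed include path

void ExampleBasisFmllr(const kaldi::AmDiagGmm &am_gmm,
                       const kaldi::BasisFmllrAccus &basis_accus,
                       const kaldi::AffineXformStats &spk_stats) {
  using namespace kaldi;
  // Training time: estimate the basis matrices from the accumulated
  // gradient-scatter statistics.
  BasisFmllrEstimate basis_est(am_gmm.Dim());
  basis_est.EstimateFmllrBasis(am_gmm, basis_accus);

  // Test time: compute a per-speaker transform; the number of bases actually
  // used grows with the amount of adaptation data (size_scale * beta).
  BasisFmllrOptions opts;           // default num_iters, min_count, etc.
  Matrix<BaseFloat> xform;          // resized to dim x (dim+1) internally
  Vector<BaseFloat> coefficients;   // basis weights d_1 ... d_N
  double impr = basis_est.ComputeTransform(spk_stats, &xform,
                                           &coefficients, opts);
  KALDI_LOG << "Objective-function improvement for this speaker: " << impr;
}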

Constructor & Destructor Documentation

◆ BasisFmllrEstimate() [1/2]

BasisFmllrEstimate ( )
inline

Definition at line 110 of file basis-fmllr-diag-gmm.h.

BasisFmllrEstimate() : dim_(0), basis_size_(0) { }

◆ BasisFmllrEstimate() [2/2]

BasisFmllrEstimate ( int32  dim)
inline explicit

Definition at line 111 of file basis-fmllr-diag-gmm.h.

explicit BasisFmllrEstimate(int32 dim) {
  dim_ = dim;
  basis_size_ = dim * (dim + 1);
}
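For illustration (a hypothetical 40-dimensional front end, just to make the relation concrete):

BasisFmllrEstimate est(40);
// est.Dim() == 40, est.BasisSize() == 40 * (40 + 1) == 1640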

Member Function Documentation

◆ BasisSize()

int32 BasisSize ( ) const
inline

Definition at line 139 of file basis-fmllr-diag-gmm.h.

int32 BasisSize() const { return basis_size_; }

◆ ComputeAmDiagPrecond()

void ComputeAmDiagPrecond ( const AmDiagGmm &  am_gmm,
SpMatrix< double > *  pre_cond 
)

This function computes the preconditioner matrix, prior to base matrices estimation.

Since the expected values of the G statistics are used, it takes the acoustic model as the argument, rather than the actual accumulations (AffineXformStats). See Section 5.1 of the paper.
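In LaTeX form, the quantities assembled below are roughly the following (a reconstruction from the code comments; P_j is taken as 1/J, i.e. a uniform prior over the J pdfs):

\hat{G}^{(d)} \;=\; \sum_{j=1}^{J} \sum_{m} \frac{P_j \, c_{jm}}{\sigma^2_{jm,d}}
    \left( \mu^{+}_{jm} {\mu^{+}_{jm}}^{\top} + \Sigma^{+}_{jm} \right),
\qquad
\mu^{+}_{jm} = \begin{bmatrix} \mu_{jm} \\ 1 \end{bmatrix}, \quad
\Sigma^{+}_{jm} = \mathrm{diag}\!\left(\sigma^2_{jm,1}, \ldots, \sigma^2_{jm,D}, \, 0\right)

H \;=\; H^{(1)} + H^{(2)}, \qquad
H^{(2)} = \mathrm{blockdiag}\!\left(\hat{G}^{(1)}, \ldots, \hat{G}^{(D)}\right), \qquad
H^{(1)}_{\,i(D+1)+j,\; j(D+1)+i} = 1 \quad (0 \le i, j < D)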

Definition at line 156 of file basis-fmllr-diag-gmm.cc.

References VectorBase< Real >::AddVec2(), SpMatrix< Real >::CopyFromMat(), AmDiagGmm::Dim(), BasisFmllrAccus::dim_, DiagGmm::GetMeans(), AmDiagGmm::GetPdf(), DiagGmm::GetVars(), MatrixBase< Real >::IsSymmetric(), KALDI_ASSERT, KALDI_ERR, kaldi::kSetZero, kaldi::kTakeLower, DiagGmm::NumGauss(), AmDiagGmm::NumPdfs(), PackedMatrix< Real >::NumRows(), VectorBase< Real >::Range(), MatrixBase< Real >::Range(), SpMatrix< Real >::Resize(), MatrixBase< Real >::Row(), and DiagGmm::weights().

{
  KALDI_ASSERT(am_gmm.Dim() == dim_);
  if (pre_cond->NumRows() != (dim_ + 1) * dim_)
    pre_cond->Resize((dim_ + 1) * dim_, kSetZero);

  int32 num_pdf = am_gmm.NumPdfs();
  Matrix<double> H_mat((dim_ + 1) * dim_, (dim_ + 1) * dim_);
  // expected values of the fMLLR G statistics
  vector< SpMatrix<double> > G_hat(dim_);
  for (int32 d = 0; d < dim_; ++d)
    G_hat[d].Resize(dim_ + 1, kSetZero);

  // extend the mean vectors with a 1: [mu_jm 1]
  Vector<double> extend_mean(dim_ + 1);
  // extend the covariance matrix with a row and column of 0
  Vector<double> extend_var(dim_ + 1);
  for (int32 j = 0; j < num_pdf; ++j) {
    const DiagGmm &diag_gmm = am_gmm.GetPdf(j);
    int32 num_comp = diag_gmm.NumGauss();
    // means, covariances and mixture weights for this diagonal GMM
    Matrix<double> means(num_comp, dim_);
    Matrix<double> vars(num_comp, dim_);
    diag_gmm.GetMeans(&means); diag_gmm.GetVars(&vars);
    Vector<BaseFloat> weights(diag_gmm.weights());

    for (int32 m = 0; m < num_comp; ++m) {
      extend_mean.Range(0, dim_).CopyFromVec(means.Row(m));
      extend_mean(dim_) = 1.0;
      extend_var.Range(0, dim_).CopyFromVec(vars.Row(m));
      extend_var(dim_) = 0;
      // loop over the feature dimension
      // Eq. (28): G_hat{d} = \sum_{j, m} P_{j}{m} Inv_Sigma{j, m, d}
      //           (mu_extend mu_extend^T + Sigma_extend)
      // where P_{j}{m} = P_{j} c_{j}{m}
      for (int32 d = 0; d < dim_; ++d) {
        double alpha = (1.0 / num_pdf) * weights(m) * (1.0 / vars.Row(m)(d));
        G_hat[d].AddVec2(alpha, extend_mean);
        // add the vector to the diagonal elements of the matrix;
        // this does not work for full covariance matrices
        G_hat[d].AddDiagVec(alpha, extend_var);
      }  // loop over dimension
    }  // loop over Gaussians
  }  // loop over states

  // fill H_mat with G_hat[d]; build the block-diagonal structure
  // Eq. (31)
  for (int32 d = 0; d < dim_; d++) {
    H_mat.Range(d * (dim_ + 1), (dim_ + 1), d * (dim_ + 1), (dim_ + 1))
        .CopyFromSp(G_hat[d]);
  }

  // add the extra H(1) elements
  // Eq. (30) and Footnote 1 (0-based index)
  for (int32 i = 0; i < dim_; ++i)
    for (int32 j = 0; j < dim_; ++j)
      H_mat(i * (dim_ + 1) + j, j * (dim_ + 1) + i) += 1;
  // the final H should be symmetric
  if (!H_mat.IsSymmetric())
    KALDI_ERR << "Preconditioner matrix H = H(1) + H(2) is not symmetric";
  pre_cond->CopyFromMat(H_mat, kTakeLower);
}

◆ ComputeTransform()

double ComputeTransform ( const AffineXformStats &  spk_stats,
Matrix< BaseFloat > *  out_xform,
Vector< BaseFloat > *  coefficients,
BasisFmllrOptions  options 
) const

This function performs speaker adaptation, computing the fMLLR matrix based on speaker statistics.

It takes the speaker's fMLLR statistics as its argument and explicitly optimizes the basis weights (d_{1}, d_{2}, ..., d_{N}). It returns the objective-function improvement over all iterations, relative to the value at the initial "out_xform" (or at the unit transform if "out_xform" is empty). The basis coefficients are written to "coefficients" only if that vector is provided. See Section 5.3 of the paper for more details.
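In LaTeX form, one iteration of the update below is roughly the following (a sketch following the code comments; W = [A; b] is the current D x (D+1) transform, w_d and s_d are the d-th rows of W and S, K and G_d are the standard fMLLR statistics, W_n are the basis matrices, and k is the step size returned by CalBasisFmllrStepSize):

s_d = G_d \, w_d \qquad \text{(Eq. 37)}

P = \beta \, [\, A^{-\top} \;\; 0 \,] + K - S \qquad \text{(Eq. 38)}

\Delta d_n = \mathrm{tr}\!\left( W_n^{\top} P \right), \qquad
\Delta W = \sum_{n=1}^{N} \Delta d_n \, W_n \qquad \text{(Eq. 39)}

W \leftarrow W + k \, \Delta W, \qquad d_n \leftarrow d_n + k \, \Delta d_n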

Definition at line 270 of file basis-fmllr-diag-gmm.cc.

References MatrixBase< Real >::AddMat(), VectorBase< Real >::AddVec(), AffineXformStats::beta_, kaldi::CalBasisFmllrStepSize(), MatrixBase< Real >::CopyFromMat(), AffineXformStats::dim_, BasisFmllrAccus::dim_, kaldi::FmllrAuxFuncDiagGmm(), AffineXformStats::G_, MatrixBase< Real >::InvertDouble(), MatrixBase< Real >::IsZero(), AffineXformStats::K_, KALDI_ASSERT, KALDI_VLOG, KALDI_WARN, kaldi::kNoTrans, kaldi::kSetZero, kaldi::kTrans, BasisFmllrOptions::min_count, BasisFmllrOptions::num_iters, MatrixBase< Real >::NumCols(), MatrixBase< Real >::NumRows(), MatrixBase< Real >::Range(), Vector< Real >::Resize(), Matrix< Real >::Resize(), MatrixBase< Real >::Row(), MatrixBase< Real >::Scale(), MatrixBase< Real >::SetUnit(), MatrixBase< Real >::SetZero(), BasisFmllrOptions::size_scale, BasisFmllrOptions::step_size_iters, kaldi::TraceMatMat(), and Matrix< Real >::Transpose().

Referenced by SingleUtteranceGmmDecoder::EstimateFmllr(), and main().

{
  if (coefficients == NULL) {
    Vector<BaseFloat> tmp;
    return ComputeTransform(spk_stats, out_xform, &tmp, options);
  }
  KALDI_ASSERT(dim_ == spk_stats.dim_);
  if (spk_stats.beta_ < options.min_count) {
    KALDI_WARN << "Not updating fMLLR since count is below min-count: "
               << spk_stats.beta_;
    coefficients->Resize(0);
    return 0.0;
  } else {
    if (out_xform->NumRows() != dim_ || out_xform->NumCols() != (dim_ + 1)) {
      out_xform->Resize(dim_, dim_ + 1, kSetZero);
    }
    // Initialized either as [I; 0] or as the current transform
    Matrix<BaseFloat> W_mat(dim_, dim_ + 1);
    if (out_xform->IsZero()) {
      W_mat.SetUnit();
    } else {
      W_mat.CopyFromMat(*out_xform);
    }

    // Create temporary copies of the K and G statistics for efficiency, to
    // avoid repeatedly converting them from double to single precision.
    Matrix<BaseFloat> stats_tmp_K(spk_stats.K_);
    std::vector<SpMatrix<BaseFloat> > stats_tmp_G(dim_);
    for (int32 d = 0; d < dim_; d++)
      stats_tmp_G[d] = SpMatrix<BaseFloat>(spk_stats.G_[d]);

    // Number of bases for this speaker, according to the available
    // adaptation data
    int32 basis_size = int32(std::min(double(basis_size_),
                                      options.size_scale * spk_stats.beta_));

    coefficients->Resize(basis_size, kSetZero);

    BaseFloat impr_spk = 0;
    for (int32 iter = 1; iter <= options.num_iters; ++iter) {
      // Auxf computation based on FmllrAuxFuncDiagGmm from fmllr-diag-gmm.cc
      BaseFloat start_obj = FmllrAuxFuncDiagGmm(W_mat, spk_stats);

      // Contribution of the quadratic terms to the derivative
      // Eq. (37): s_{d} = G_{d} w_{d}
      Matrix<BaseFloat> S(dim_, dim_ + 1);
      for (int32 d = 0; d < dim_; ++d)
        S.Row(d).AddSpVec(1.0, stats_tmp_G[d], W_mat.Row(d), 0.0);

      // W_mat = [A; b]
      Matrix<BaseFloat> A(dim_, dim_);
      A.CopyFromMat(W_mat.Range(0, dim_, 0, dim_));
      Matrix<BaseFloat> A_inv(A);
      A_inv.InvertDouble();
      Matrix<BaseFloat> A_inv_trans(A_inv);
      A_inv_trans.Transpose();
      // Compute the gradient of the auxf w.r.t. W_mat
      // Eq. (38): P = beta [A^{-T}; 0] + K - S
      Matrix<BaseFloat> P(dim_, dim_ + 1);
      P.SetZero();
      P.Range(0, dim_, 0, dim_).CopyFromMat(A_inv_trans);
      P.Scale(spk_stats.beta_);
      P.AddMat(1.0, stats_tmp_K);
      P.AddMat(-1.0, S);

      // Compute the gradient direction restricted to the bases. Here we only
      // use the simple gradient method, rather than conjugate gradient.
      // Finding the optimal transformation W_mat is equivalent to optimizing
      // the weights d_{1,2,...,N}.
      // Eq. (39): delta(W) = \sum_n tr(W_{n}^T P) W_{n},
      //           delta(d_{n}) = tr(W_{n}^T P)
      Matrix<BaseFloat> delta_W(dim_, dim_ + 1);
      Vector<BaseFloat> delta_d(basis_size);
      for (int32 n = 0; n < basis_size; ++n) {
        delta_d(n) = TraceMatMat(fmllr_basis_[n], P, kTrans);
        delta_W.AddMat(delta_d(n), fmllr_basis_[n]);
      }

      BaseFloat step_size = CalBasisFmllrStepSize(spk_stats, stats_tmp_K,
          stats_tmp_G, delta_W, A, S, options.step_size_iters);
      W_mat.AddMat(step_size, delta_W, kNoTrans);
      coefficients->AddVec(step_size, delta_d);
      // Check the auxiliary function
      BaseFloat end_obj = FmllrAuxFuncDiagGmm(W_mat, spk_stats);

      KALDI_VLOG(4) << "Objective function (iter=" << iter << "): "
                    << start_obj / spk_stats.beta_ << " -> "
                    << (end_obj / spk_stats.beta_) << " over "
                    << spk_stats.beta_ << " frames";

      impr_spk += (end_obj - start_obj);
    }  // loop over iters

    out_xform->CopyFromMat(W_mat, kNoTrans);
    return impr_spk;
  }
}

◆ Dim()

int32 Dim ( ) const
inline

Definition at line 137 of file basis-fmllr-diag-gmm.h.

Referenced by SingleUtteranceGmmDecoder::EstimateFmllr().

int32 Dim() const { return dim_; }

◆ EstimateFmllrBasis()

void EstimateFmllrBasis ( const AmDiagGmm &  am_gmm,
const BasisFmllrAccus &  basis_accus 
)

Estimate the base matrices efficiently in a Maximum Likelihood manner.

It takes the diagonal-GMM acoustic model as an argument, which is used for the preconditioner computation. The total number of bases is fixed at N = (dim + 1) * dim. Note that the SVD is performed in the normalized space; the base matrices are finally converted back to the unnormalized space.

The sum of the [per-frame] eigenvalues is roughly equal to the improvement in per-frame log-likelihood on the training data.
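Schematically (a sketch following the code comments below, where H = C C^{T} is the Cholesky factorization of the preconditioner and grad_scatter_ is the accumulated gradient-scatter statistic from BasisFmllrAccus):

\hat{M} = C^{-1} \, \mathtt{grad\_scatter\_} \, C^{-\top}
        = U \, \mathrm{diag}(\lambda_1, \ldots, \lambda_{D(D+1)}) \, U^{\top},
\qquad \lambda_1 \ge \lambda_2 \ge \cdots

\mathrm{vec}(W_n) = C^{-\top} u_n, \qquad n = 1, \ldots, D(D+1)

The logged per-frame eigenvalues are \lambda_n / (2\beta), where \beta is the total frame count in the accumulator.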

Definition at line 219 of file basis-fmllr-diag-gmm.cc.

References SpMatrix< Real >::AddMat2Sp(), VectorBase< Real >::AddMatVec(), BasisFmllrAccus::beta_, TpMatrix< Real >::Cholesky(), MatrixBase< Real >::CopyFromTp(), BasisFmllrAccus::dim_, BasisFmllrAccus::grad_scatter_, TpMatrix< Real >::InvertDouble(), KALDI_LOG, kaldi::kNoTrans, kaldi::kSetZero, kaldi::kTrans, MatrixBase< Real >::Row(), VectorBase< Real >::Scale(), kaldi::SortSvd(), SpMatrix< Real >::SymPosSemiDefEig(), and Matrix< Real >::Transpose().

Referenced by main().

{
  // Compute the preconditioner
  SpMatrix<double> precond_mat((dim_ + 1) * dim_);
  ComputeAmDiagPrecond(am_gmm, &precond_mat);
  // H = C C^T
  TpMatrix<double> C((dim_ + 1) * dim_);
  C.Cholesky(precond_mat);
  TpMatrix<double> C_inv(C);
  C_inv.InvertDouble();
  // From TpMatrix to Matrix
  Matrix<double> C_inv_full((dim_ + 1) * dim_, (dim_ + 1) * dim_);
  C_inv_full.CopyFromTp(C_inv);

  // Convert to the preconditioned coordinates
  // Eq. (35): M_hat = C^{-1} grad_scatter C^{-T}
  SpMatrix<double> M_hat((dim_ + 1) * dim_);
  {
    SpMatrix<double> grad_scatter_d(basis_accus.grad_scatter_);
    M_hat.AddMat2Sp(1.0, C_inv_full, kNoTrans, grad_scatter_d, 0.0);
  }
  Vector<double> Lvec((dim_ + 1) * dim_);
  Matrix<double> U((dim_ + 1) * dim_, (dim_ + 1) * dim_);
  // SVD of M_hat; sort the eigenvalues from greatest to smallest
  M_hat.SymPosSemiDefEig(&Lvec, &U);
  SortSvd(&Lvec, &U);
  // After the transpose, each row is one base (stacked as a vector)
  U.Transpose();

  fmllr_basis_.resize(basis_size_);
  for (int32 n = 0; n < basis_size_; ++n) {
    fmllr_basis_[n].Resize(dim_, dim_ + 1, kSetZero);
    Vector<double> basis_vec((dim_ + 1) * dim_);
    // Convert the eigenvectors back to the unnormalized space
    basis_vec.AddMatVec(1.0, C_inv_full, kTrans, U.Row(n), 0.0);
    // Convert the stacked vector into a matrix
    fmllr_basis_[n].CopyRowsFromVec(basis_vec);
  }
  // Output the eigenvalues of the gradient scatter matrix.
  // The eigenvalues are divided by twice the number of frames
  // in the training data, to get the per-frame values.
  Vector<double> Lvec_scaled(Lvec);
  Lvec_scaled.Scale(1.0 / (2 * basis_accus.beta_));
  KALDI_LOG << "The [per-frame] eigenvalues sorted from largest to smallest: "
            << Lvec_scaled;
  KALDI_LOG << "Sum of the [per-frame] eigenvalues, that is"
               " the log-likelihood improvement, is " << Lvec_scaled.Sum();
}

◆ Read()

void Read ( std::istream &  in_stream,
bool  binary 
)

Definition at line 133 of file basis-fmllr-diag-gmm.cc.

References BasisFmllrAccus::dim_, kaldi::ExpectToken(), KALDI_ASSERT, and kaldi::ReadBasicType().

{
  uint32 tmp_uint32;
  string token;

  ExpectToken(in_stream, binary, "<BASISFMLLRPARAM>");

  ExpectToken(in_stream, binary, "<NUMBASIS>");
  ReadBasicType(in_stream, binary, &tmp_uint32);
  basis_size_ = static_cast<int32>(tmp_uint32);

  ExpectToken(in_stream, binary, "<BASIS>");
  fmllr_basis_.resize(basis_size_);
  for (int32 n = 0; n < basis_size_; ++n) {
    fmllr_basis_[n].Read(in_stream, binary);
    if (n == 0)
      dim_ = fmllr_basis_[n].NumRows();
    else {
      KALDI_ASSERT(dim_ == fmllr_basis_[n].NumRows());
    }
  }
  ExpectToken(in_stream, binary, "</BASISFMLLRPARAM>");
}

◆ Write()

void Write ( std::ostream &  out_stream,
bool  binary 
) const

Routines for reading and writing fMLLR basis matrices.

Definition at line 116 of file basis-fmllr-diag-gmm.cc.

References kaldi::WriteBasicType() and kaldi::WriteToken().

{
  uint32 tmp_uint32;

  WriteToken(out_stream, binary, "<BASISFMLLRPARAM>");

  WriteToken(out_stream, binary, "<NUMBASIS>");
  tmp_uint32 = static_cast<uint32>(basis_size_);
  WriteBasicType(out_stream, binary, tmp_uint32);
  if (fmllr_basis_.size() != 0) {
    WriteToken(out_stream, binary, "<BASIS>");
    for (int32 n = 0; n < basis_size_; ++n) {
      fmllr_basis_[n].Write(out_stream, binary);
    }
  }
  WriteToken(out_stream, binary, "</BASISFMLLRPARAM>");
}
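A minimal write/read round trip might look like the following sketch, assuming the usual Kaldi stream wrappers kaldi::Output and kaldi::Input from util/kaldi-io.h, and a basis that has already been estimated (Read expects the <BASIS> block):

#include "transform/basis-fmllr-diag-gmm.h"
#include "util/kaldi-io.h"

void WriteThenReadBasis(const kaldi::BasisFmllrEstimate &est,
                        const std::string &filename) {
  using namespace kaldi;
  bool binary = true;
  {
    Output ko(filename, binary);
    est.Write(ko.Stream(), binary);   // <BASISFMLLRPARAM> ... </BASISFMLLRPARAM>
  }
  BasisFmllrEstimate est2;
  bool binary_in;
  Input ki(filename, &binary_in);
  est2.Read(ki.Stream(), binary_in);  // dim_ is recovered from the matrix sizes
  KALDI_ASSERT(est2.Dim() == est.Dim() && est2.BasisSize() == est.BasisSize());
}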

Member Data Documentation

◆ basis_size_

int32 basis_size_
private

Number of bases, D*(D+1), where D is the feature dimension.

Definition at line 163 of file basis-fmllr-diag-gmm.h.

◆ dim_

int32 dim_
private

Feature dimension.

Definition at line 161 of file basis-fmllr-diag-gmm.h.

◆ fmllr_basis_

std::vector< Matrix<BaseFloat> > fmllr_basis_
private

Basis matrices.

Dim is [T][D][D+1], where T is the number of bases and D is the feature dimension.

Definition at line 159 of file basis-fmllr-diag-gmm.h.


The documentation for this class was generated from the following files:
basis-fmllr-diag-gmm.h
basis-fmllr-diag-gmm.cc