Class Hierarchy


This inheritance list is sorted roughly, but not completely, alphabetically:
 CAccess
 CAccumAmDiagGmm
 CAccumDiagGmm
 CAccumFullGmmClass for computing the maximum-likelihood estimates of the parameters of a Gaussian mixture model
 CAccumulateTreeStatsInfo
 CAccumulateTreeStatsOptions
 CActivePath
 CAffineXformStats
 CAgglomerativeClustererNecessary mechanisms for the actual clustering algorithm
 CAhcClusterAhcCluster is the cluster object for the agglomerative clustering
 CAlignConfig
 CAlignedTermsPair
 CAmDiagGmm
 CAmNnet
 CAmNnetSimple
 CAmSgmm2Class for definition of the subspace Gmm acoustic model
 CAnalyzerThis struct exists to set up various pieces of analysis; it helps avoid the repetition of code where we compute all these things in sequence
 CArbitraryResampleClass ArbitraryResample allows you to resample a signal (assumed zero outside the sample region, not periodic) at arbitrary specified time values, which don't have to be linearly spaced
 CMinimumBayesRisk::Arc
 CGrammarFstPreparer::ArcCategory
 CArcIterator< GrammarFst >This is the overridden template for class ArcIterator for GrammarFst
 CArcPosteriorComputer
 CArpaFileParserArpaFileParser is an abstract base class for ARPA LM file conversion
 CArpaLine
 CArpaLmCompilerImplInterface
 CArpaParseOptionsOptions that control ArpaFileParser
 CBackpointerToken
 Cbasic_filebuf
 Cbasic_streambuf
 CBasicHolder< BasicType >BasicHolder is valid for float, double, bool, and integer types
 CBasicPairVectorHolder< BasicType >BasicPairVectorHolder is a Holder for a vector of pairs of a basic type, e.g
 CBasicVectorHolder< BasicType >A Holder for a vector of basic types, e.g
 CBasicVectorVectorHolder< BasicType >BasicVectorVectorHolder is a Holder for a vector of vector of a basic type, e.g
 CBasisFmllrAccusStats for fMLLR subspace estimation
 CBasisFmllrEstimateEstimation functions for basis fMLLR
 CBasisFmllrOptions
 CBatchedXvectorComputer
 CBatchedXvectorComputerOptions
 CLatticeFasterOnlineDecoderTpl< FST >::BestPathIterator
 CLatticeIncrementalOnlineDecoderTpl< FST >::BestPathIterator
 CBiglmFasterDecoderThis is as FasterDecoder, but does online composition between HCLG and the "difference language model", which is a deterministic FST that represents the difference between the language model you want and the language model you compiled HCLG with
 CCuBlockMatrix< Real >::BlockMatrixData
 CBottomUpClusterer
 CCacheArcIterator
 CCacheImpl
 CCacheOptions
 CCacheStateIterator
 CCachingOptimizingCompilerThis class enables you to do the compilation and optimization in one call, and also ensures that if the ComputationRequest is identical to the previous one, the compilation process is not repeated
 CCachingOptimizingCompilerOptions
 CChainExampleMergerThis class is responsible for arranging examples in groups that have the same structure (i.e
 CChainObjectiveInfo
 CCheckComputationOptions
 CLmState::ChildrenVectorLessThan
 CLmState::ChildType
 CChunkInfoChunkInfo is a class whose purpose is to describe the structure of matrices holding features
 CChunkInfo
 CChunkTimeInfoStruct ChunkTimeInfo is used by class UtteranceSplitter to output information about how we split an utterance into chunks
 CCindexHasher
 CComputationGraphBuilder::CindexInfo
 CCindexSet
 CCindexVectorHasher
 CPldaStats::ClassInfo
 CClatRescoreTuple
 CClusterable
 CClusterKMeansOptions
 CCollapseModelConfigConfig class for the CollapseModel function
 CNnetComputation::Command
 CCommandAttributes
 CNnetComputer::CommandDebugInfo
 CCommandPairComparator
 CCompactLatticeHolder
 CCompactLatticeMinimizer< Weight, IntType >
 CCompactLatticePusher< Weight, IntType >
 CCompactLatticeToKwsProductFstMapper
 CCompactLatticeWeightCommonDivisorTpl< BaseWeightType, IntType >
 CCompactLatticeWeightTpl< WeightType, IntType >
 CCompareFirst< Real >
 CCompareFirstMemberOfPair< A, B >Comparator object for pairs that compares only the first member of the pair
 CComparePosteriorByPdfs
 CCompareReverseSecond
 CCompartmentalizedBottomUpClusterer
 CCompBotClustElem
 CCompilerThis class creates an initial version of the NnetComputation, without any optimization or sharing of matrices
 CCompilerOptions
 CComponentAbstract class, basic element of the network; it is a box with defined inputs, outputs, and a transformation-function interface
 CComponentAbstract base-class for neural-net components
 CComponentPrecomputedIndexes
 CPrunedCompactLatticeComposer::ComposedStateInfo
 CComposeLatticePrunedOptions
 CCompressedAffineXformStats
 CCompressedMatrix
 CComputationAnalysisThis class performs various kinds of specific analysis on top of what class Analyzer gives you immediately
 CComputationCacheClass ComputationCache is used inside class CachingOptimizingCompiler to cache previously computed computations
 CComputationChecker
 CComputationExpander
 CComputationGraphThe first step in compilation is to turn the ComputationSpecification into a ComputationGraph, where for each Cindex we have a list of other Cindexes that it depends on
 CComputationGraphBuilderAn abstract representation of a set of Cindexes
 CNnetBatchComputer::ComputationGroupInfo
 CNnetBatchComputer::ComputationGroupKey
 CNnetBatchComputer::ComputationGroupKeyHasher
 CComputationLoopedOptimizer
 CComputationRenumberer
 CComputationRequest
 CComputationRequestHasher
 CComputationRequestPtrEqual
 CLatticePhoneAligner::ComputationState
 CLatticeLexiconWordAligner::ComputationState
 CLatticeWordAligner::ComputationState
 CComputationStepsComputerThis class arranges the cindex_ids of the computation into a sequence of lists called "steps", which will correspond roughly to the commands in the compiled computation
 CComputationVariablesThis class relates the matrices and sub-matrices in the computation to imaginary "variables", such that we can think of the operations as operating on sets of individual variables, and we can then do analysis that lets us do optimization
 CConfigLineThis class is responsible for parsing input like hi-there xx=yyy a=b c empty= f-oo=Append(bar, sss) ba_z=123 bing='a b c' baz="a b c d='a b' e" and giving you access to the fields, in this case
 CConstArpaLm
 CConstIntegerSet< I >
 CConstIntegerSet< EventValueType >
 CConstIntegerSet< int32 >
 CConstIntegerSet< Label >
 CContextDependencyInterfaceContext-dep-itf.h provides a link between the tree-building code in ../tree/, and the FST code in ../fstext/ (particularly, ../fstext/context-dep.h)
 CConvolutionComputationThis struct represents the structure of a convolution computation
 CConvolutionComputationIo
 CConvolutionComputationOptionsThis struct contains options for compiling the convolutional computation
 CConvolutionModelThis comment explains the basic framework used for everything related to time-height convolution
 CConvolutionComputation::ConvolutionStep
 CCountStats
 CCovarianceStats
 CCRnnLM
 CCuAllocatorOptions
 CCuArrayBase< T >Class CuArrayBase, CuSubArray and CuArray are analogues of classes CuVectorBase, CuSubVector and CuVector, except that they are intended to store things other than float/double: they are intended to store integers or small structs
 CCuArrayBase< int32 >
 CCuArrayBase< Int32Pair >
 CCuBlockMatrix< Real >The class CuBlockMatrix holds a vector of objects of type CuMatrix, say, M_1, M_2,
 CCuBlockMatrixData_This structure is used in cu-block-matrix.h to store information about a block-diagonal matrix
 CCuCompressedMatrixBaseClass CuCompressedMatrixBase is an abstract base class that allows you to compress a matrix of type CuMatrix<BaseFloat>
 CCuMatrixBase< Real >Matrix for CUDA computing
 CCuMatrixBase< double >
 CCuMatrixBase< float >
 CCuPackedMatrix< Real >Matrix for CUDA computing
 CCuRand< Real >
 CCuRand< float >
 CCuSparseMatrix< Real >
 CCuValue< Real >The following class is used to simulate non-const references to Real, e.g
 CCuVectorBase< Real >Vector for CUDA computing
 CCuVectorBase< double >
 CCuVectorBase< float >
 CDecisionTreeSplitter
 CDecodableInterfaceDecodableInterface provides a link between the (acoustic-modeling and feature-processing) code and the decoder
 CDecodableNnet2OnlineOptions
 CDecodableNnetSimple
 CDecodableNnetSimpleLooped
 CDecodableNnetSimpleLoopedInfoWhen you instantiate class DecodableNnetSimpleLooped, you should give it a const reference to this class, that has been previously initialized
 CDecodeInfo
 CDecodeUtteranceLatticeFasterClassThis class basically does the same job as the function DecodeUtteranceLatticeFaster, but in a way that allows us to build a multi-threaded command line program more easily
 CDeltaFeatures
 CDeltaFeaturesOptions
 CDerivativeTimeLimiter
 CDescriptor
 CDeterministicOnDemandFst< Arc >Class DeterministicOnDemandFst is an "FST-like" base-class
 CDeterministicOnDemandFst< fst::StdArc >
 CDeterministicOnDemandFst< StdArc >
 CDeterminizeLatticeOptions
 CDeterminizeLatticePhonePrunedOptions
 CDeterminizeLatticePrunedOptions
 CDeterminizeLatticeTask
 CDeterminizerStar< F >
 CDfsOrderVisitor< Arc >
 CDiagGmmDefinition for Gaussian Mixture Model with diagonal covariances
 CDiagGmmNormalDefinition for Gaussian Mixture Model with diagonal covariances in normal mode: where the parameters are stored as means and variances (instead of the exponential form that the DiagGmm class is stored as)
 CDifferentiableTransformThis class is for speaker-dependent feature-space transformations – principally various varieties of fMLLR, including mean-only, diagonal and block-diagonal versions – which are intended for placement in the bottleneck of a neural net
 CDiscriminativeComputation
 CDiscriminativeExampleMergerThis class is responsible for arranging examples in groups that have the same structure (i.e
 CDiscriminativeExampleSplitterFor each frame, judge:
 CDiscriminativeExamplesRepositoryThis struct stores neural net training examples to be used in multi-threaded training
 CDiscriminativeNnetExampleThis struct is used to store the information we need for discriminative training (MMI or MPE)
 CDiscriminativeObjectiveFunctionInfo
 CDiscriminativeObjectiveInfo
 CDiscriminativeOptions
 CDiscriminativeSupervision
 CDiscriminativeSupervisionSplitter
 CParseOptions::DocInfoStructure for options' documentation
 CDummyOptions
 CEbwAmSgmm2OptionsThis header implements a form of Extended Baum-Welch training for SGMMs
 CEbwAmSgmm2Updater
 CEbwAmSgmmUpdaterContains the functions needed to update the SGMM parameters
 CEbwOptions
 CEbwWeightOptions
 CEigenvalueDecomposition< Real >
 CHashList< I, T >::Elem
 CLatticeDeterminizer< Weight, IntType >::Element
 CDeterminizerStar< F >::Element
 CTrivialFactorWeightFstImpl< A, F >::Element
 CLatticeDeterminizerPruned< Weight, IntType >::Element
 CTrivialFactorWeightFstImpl< A, F >::ElementEqual
 CTrivialFactorWeightFstImpl< A, F >::ElementKey
 CLatticeStringRepository< IntType >::Entry
 CLatticeStringRepository< IntType >::EntryEqual
 CLatticeStringRepository< IntType >::EntryKey
 CDeterminizerStar< F >::EpsilonClosure
 CDeterminizerStar< F >::EpsilonClosure::EpsilonClosureInfo
 CCompactLatticeMinimizer< Weight, IntType >::EquivalenceSorter
 Cerror_stats
 CEventMapA class that is capable of representing a generic mapping from EventType (which is a vector of (key, value) pairs) to EventAnswerType which is just an integer
 CEventMapVectorEqual
 CEventMapVectorHash
 CExampleFeatureComputerThis class is only added for documentation, it is not intended to ever be used
 CExampleFeatureComputerOptionsThis class is only added for documentation, it is not intended to ever be used
 CExampleGenerationConfig
 CExampleMergerThis class is responsible for arranging examples in groups that have the same structure (i.e
 CExampleMergingConfig
 CExampleMergingStatsThis class is responsible for storing, and displaying in log messages, statistics about how examples of different sizes (c.f
 CExamplesRepositoryThis class stores neural net training examples to be used in multi-threaded training
 CGrammarFst::ExpandedStateRepresents an expanded state in an FstInstance
 CFasterDecoder
 CFasterDecoderOptions
 CFastNnetCombiner
 CFbankComputerClass for computing mel-filterbank features; see Computing MFCC features for more information
 CFbankOptionsFbankOptions contains basic options for computing filterbank features
 CFeatureTransformEstimateOptions
 CFeatureWindowFunction
 CFloatWeightTpl
 CFmllrOptions
 CFmllrRawAccs
 CFmllrRawOptions
 CFmllrSgmm2AccsClass for computing the accumulators needed for the maximum-likelihood estimate of FMLLR transforms for a subspace GMM acoustic model
 CFmpe
 CFmpeOptions
 CFmpeStats
 CFmpeUpdateOptions
 CForwardingDescriptorA ForwardingDescriptor describes how we copy data from another NetworkNode, or from multiple other NetworkNodes, possibly with a scalar weight
 CLatticeBiglmFasterDecoder::ForwardLink
 CForwardLink< Token >
 CLatticeSimpleDecoder::ForwardLink
 CFrameExtractionOptions
 CDiscriminativeExampleSplitter::FrameInfo
 COnlineSilenceWeighting::FrameInfo
 CGrammarFst::FstInstance
 CFullGmmDefinition for Gaussian Mixture Model with full covariances
 CFullGmmNormalDefinition for Gaussian Mixture Model with full covariances in normal mode: where the parameters are stored as means and variances (instead of the exponential form that the FullGmm class is stored as)
 CMinimumBayesRisk::GammaCompare
 CGaussInfo
 CGaussPostHolder
 CGeneralDescriptorThis class is only used when parsing Descriptors
 CGeneralMatrixThis class is a wrapper that enables you to store a matrix in one of three forms: either as a Matrix<BaseFloat>, or a CompressedMatrix, or a SparseMatrix<BaseFloat>
 CGenericHolder< SomeType >GenericHolder serves to document the requirements of the Holder interface; it's not intended to be used
 CCompressedMatrix::GlobalHeader
 CGrammarFstGrammarFst is an FST that is 'stitched together' from multiple FSTs, that can recursively incorporate each other
 CGrammarFstArc
 CGrammarFstPreparer
 CHashList< I, T >::HashBucket
 CHashList< I, T >
 CHashList< PairId, kaldi::BiglmFasterDecoder::Token *>
 CHashList< PairId, kaldi::LatticeBiglmFasterDecoder::Token *>
 CHashList< StateId, decoder::BackpointerToken *>
 CHashList< StateId, kaldi::FasterDecoder::Token *>
 CHashList< StateId, Token *>
 CHmmCacheHash
 CHmmTopology::HmmStateA structure defined inside HmmTopology to represent an HMM state
 CHmmTopologyA class for storing topology information for phones
 CHtkHeaderA structure containing the HTK header
 CHtkMatrixHolder
 CHTransducerConfigConfiguration class for the GetHTransducer() function; see The HTransducerConfig configuration class for context
 CIdentityFunction< T >
 CImageAugmentationConfig
 CImplToFst
 CIndexStruct Index is intended to represent the various indexes by which we number the rows of the matrices that the Components process: mainly 'n', the index of the member of the minibatch, 't', used for the frame index in speech recognition, and 'x', which is a catch-all extra index which we might use in convolutional setups or for other reasons
 CIndexHasher
 CIndexLessNxt
 CIndexSetAn abstract representation of a set of Indexes
 CIndexVectorHasher
 CInput
 CInputImplBase
 CInt32AndFloat
 CInt32IsZero
 CInt32Pair
 CInterval
 CExampleMergingConfig::IntSet
 CIoSpecification
 CIoSpecificationHasher
 CIvectorEstimationOptions
 CIvectorExtractor
 CIvectorExtractorComputeDerivedVarsClass
 CIvectorExtractorEstimationOptionsOptions for training the IvectorExtractor, e.g. variance flooring
 CIvectorExtractorOptions
 CIvectorExtractorStatsIvectorExtractorStats is a class used to update the parameters of the ivector extractor
 CIvectorExtractorStatsOptionsOptions for IvectorExtractorStats, which is used to update the parameters of IvectorExtractor
 CIvectorExtractorUpdateProjectionClass
 CIvectorExtractorUpdateWeightClass
 CIvectorExtractorUtteranceStatsThese are the stats for a particular utterance, i.e
 CIvectorExtractTask
 CIvectorTask
 CKaldiCompileTimeAssert< B >
 CKaldiCompileTimeAssert< true >
 CKaldiObjectHolder< KaldiType >KaldiObjectHolder works for Kaldi objects that have the "standard" Read and Write functions, and a copy constructor
 CKaldiRnnlmWrapper
 CKaldiRnnlmWrapperOpts
 CComponent::key_valueA pair of type and marker,
 CkMarkerMap
 CKwsAlignment
 CKwScoreStats
 CKwsProductFstToKwsLexicographicFstMapper
 CKwsTerm
 CKwsTermsAligner
 CKwsTermsAlignerOptions
 CKwTermEqual
 CKwTermLower
 CLatticeArcRecordThis is used in CompactLatticeLimitDepth
 CLatticeBiglmFasterDecoderThis is as LatticeFasterDecoder, but does online composition between HCLG and the "difference language model", which is a deterministic FST that represents the difference between the language model you want and the language model you compiled HCLG with
 CLatticeDeterminizer< Weight, IntType >
 CLatticeDeterminizerPruned< Weight, IntType >
 CLatticeFasterDecoderConfig
 CLatticeFasterDecoderTpl< FST, Token >This is the "normal" lattice-generating decoder
 CLatticeFasterDecoderTpl< FST, decoder::BackpointerToken >
 CLatticeFasterDecoderTpl< fst::StdFst, decoder::BackpointerToken >
 CLatticeHolder
 CLatticeIncrementalDecoderConfigThe normal decoder, lattice-faster-decoder.h, sometimes has an issue in real-time applications with long utterances: each time you get the lattice, the lattice determinization can take a considerable amount of time, which introduces latency
 CLatticeIncrementalDecoderTpl< FST, Token >This is an extension to the "normal" lattice-generating decoder
 CLatticeIncrementalDecoderTpl< FST, decoder::BackpointerToken >
 CLatticeIncrementalDeterminizerThis class is used inside LatticeIncrementalDecoderTpl; it handles some of the details of incremental determinization
 CDiscriminativeSupervisionSplitter::LatticeInfo
 CLatticeLexiconWordAligner
 CLatticePhoneAligner
 CLatticeReaderLatticeReader provides (static) functions for reading both Lattice and CompactLattice, in text form
 CLatticeSimpleDecoderSimplest possible decoder, included largely for didactic purposes and as a means to debug more highly optimized decoders
 CLatticeSimpleDecoderConfig
 CPrunedCompactLatticeComposer::LatticeStateInfo
 CLatticeStringRepository< IntType >
 CLatticeToStdMapper< Real >Class LatticeToStdMapper maps a LatticeArc to a normal arc (StdArc) by adding the elements of the LatticeArc weight
 CLatticeWeightTpl< FloatType >
 CLatticeWeightTpl< BaseFloat >
 CLatticeWordAligner
 CLbfgsOptionsThis is an implementation of L-BFGS
 CLdaEstimateClass for computing linear discriminant analysis (LDA) transform
 CLdaEstimateOptions
 CDecodableAmDiagGmmUnmapped::LikelihoodCacheRecordDefines a cache record for a state
 CLimitRankClass
 CLinearCgdOptions
 CLinearResampleLinearResample is a special case of ArbitraryResample, where we want to resample a signal at linearly spaced intervals (this means we want to upsample or downsample the signal)
 CLinearVtln
 CLmState
 CMessageLogger::Log
 CMessageLogger::LogAndThrow
 CLogisticRegression
 CLogisticRegressionConfig
 CLogMessageEnvelopeLog message severity and source location info
 CLossItf
 CLossOptions
 CMapDiagGmmOptionsConfiguration variables for Maximum A Posteriori (MAP) update
 CMapInputSymbolsMapper< Arc, I >
 CMapTransitionUpdateConfig
 CMatcherBase
 CMatrixAccesses
 CMatrixBase< Real >Base class which provides matrix operations not involving resizing or allocation
 CMatrixBase< float >
 CMatrixBufferA buffer for caching (utterance-key, feature-matrix) pairs
 CMatrixBufferOptions
 CMemoryCompressionOptimizer::MatrixCompressInfo
 CNnetComputation::MatrixDebugInfo
 CMatrixDim_Structure containing size of the matrix plus stride
 CMatrixElement< Real >
 CMatrixExtender
 CNnetComputation::MatrixInfo
 CDerivativeTimeLimiter::MatrixPruneInfo
 CMatrixRandomizerShuffles rows of a matrix according to the indices in the mask,
 CMaxChangeStats
 CMelBanks
 CMelBanksOptions
 CRestrictedAttentionComponent::Memo
 CBatchNormComponent::Memo
 CMemoryCompressionOptimizerThis class is used in the function OptimizeMemoryCompression(), once we determine that there is some potential to do memory compression for this computation
 CMessageLogger
 CMfccComputer
 CMfccOptionsMfccOptions contains basic options for computing MFCC features
 CMinibatchInfoItf
 CNnetBatchComputer::MinibatchSizeInfo
 CMinimumBayesRiskThis class does the word-level Minimum Bayes Risk computation, and gives you either the 1-best MBR output together with the expected Bayes Risk, or a sausage-like structure
 CMinimumBayesRiskOptionsThe implementation of the Minimum Bayes Risk decoding method described in "Minimum Bayes Risk decoding and system combination based on a recursion for edit distance", Haihua Xu, Daniel Povey, Lidia Mangu and Jie Zhu, Computer Speech and Language, 2011. This is a slightly more principled way to do Minimum Bayes Risk (MBR) decoding than the standard "Confusion Network" method
 CMiscComputationInfo
 CMleAmSgmm2AccsClass for the accumulators associated with the phonetic-subspace model parameters
 CMleAmSgmm2OptionsConfiguration variables needed in the SGMM estimation process
 CMleAmSgmm2Updater
 CMleAmSgmmUpdaterContains the functions needed to update the SGMM parameters
 CMleDiagGmmOptionsConfiguration variables like variance floor, minimum occupancy, etc
 CMleFullGmmOptionsConfiguration variables like variance floor, minimum occupancy, etc
 CMleSgmm2SpeakerAccsClass for the accumulators required to update the speaker vectors v_s
 CMleTransitionUpdateConfig
 CMlltAccsA class for estimating Maximum Likelihood Linear Transform, also known as global Semi-tied Covariance (STC), for GMMs
 CModelCollapser
 CModelUpdateConsolidatorThis class is responsible for consolidating the model-update part of backprop commands, for components in (e.g.) recurrent networks that need to have many separate backprop commands, into more efficient single commands operating on consolidated data in larger matrices
 CSvdApplier::ModifiedComponentInfo
 CRowOpsSplitter::MultiIndexSplitInfo
 CMultiThreadable
 CMultiThreader< C >
 CMyTaskClass
 CNaturalLess< CompactLatticeWeightTpl< LatticeWeightTpl< double >, int32 > >
 CNaturalLess< CompactLatticeWeightTpl< LatticeWeightTpl< float >, int32 > >
 CNaturalLess< CompactLatticeWeightTpl< LatticeWeightTpl< FloatType >, IntType > >
 CNaturalLess< LatticeWeightTpl< double > >
 CNaturalLess< LatticeWeightTpl< float > >
 CNaturalLess< LatticeWeightTpl< FloatType > >
 CNccfInfo
 CNetworkNodeNetworkNode is used to represent three types of thing: either an input of the network (which pretty much just states the dimension of the input vector); a Component (e.g
 Cneuron
 CNGramA parsed n-gram from ARPA LM file
 CNnet
 CNnet
 CNnet
 CNnetBatchComputerThis class does neural net inference in a way that is optimized for GPU use: it combines chunks of multiple utterances into minibatches for more efficient computation
 CNnetBatchDecoderDecoder object that uses multiple CPU threads for the graph search, plus a GPU for the neural net inference (that's done by a separate NnetBatchComputer object)
 CNnetBatchInferenceThis class implements a simplified interface to class NnetBatchComputer, which is suitable for programs like 'nnet3-compute' where you want to support fast GPU-based inference on a sequence of utterances, and get them back from the object in the same order
 CNnetChainComputeProbThis class is for computing objective-function values in a nnet3+chain setup, for diagnostics
 CNnetChainExampleNnetChainExample is like NnetExample, but specialized for lattice-free (chain) training
 CNnetChainExampleStructureCompareThis comparator object compares just the structural aspects of the NnetChainExample without looking at the value of the features
 CNnetChainExampleStructureHasherThis hashing object hashes just the structural aspects of the NnetExample without looking at the value of the features
 CNnetChainSupervision
 CNnetChainTrainerThis class is for single-threaded training of neural nets using the 'chain' model
 CNnetChainTrainingOptions
 CNnetCombineAconfig
 CNnetCombineConfigConfiguration class that controls neural net combination, where we combine a number of neural nets, trying to find for each layer the optimal weighted combination of the different neural-net parameters
 CNnetCombineFastConfigConfiguration class that controls neural net combination, where we combine a number of neural nets, trying to find for each layer the optimal weighted combination of the different neural-net parameters
 CNnetComputation
 CNnetComputationPrintInserter
 CNnetComputeOptions
 CNnetComputeProbThis class is for computing cross-entropy and accuracy values in a neural network, for diagnostics
 CNnetComputeProbOptions
 CNnetComputer
 CNnetComputerClass NnetComputer is responsible for executing the computation described in the "computation" object
 CNnetComputerFromEg
 CNnetDataRandomizerOptionsConfiguration variables that affect how frame-level shuffling is done
 CNnetDiscriminativeComputeObjfThis class is for computing objective-function values in a nnet3 discriminative training, for diagnostics
 CNnetDiscriminativeExampleNnetDiscriminativeExample is like NnetExample, but specialized for sequence training
 CNnetDiscriminativeExampleStructureCompareThis comparator object compares just the structural aspects of the NnetDiscriminativeExample without looking at the value of the features
 CNnetDiscriminativeExampleStructureHasherThis hashing object hashes just the structural aspects of the NnetExample without looking at the value of the features
 CNnetDiscriminativeOptions
 CNnetDiscriminativeStats
 CNnetDiscriminativeSupervision
 CNnetDiscriminativeTrainerThis class is for single-threaded discriminative training of neural nets
 CNnetDiscriminativeUpdateOptions
 CNnetDiscriminativeUpdater
 CNnetEnsembleTrainer
 CNnetEnsembleTrainerConfig
 CNnetExampleNnetExample is the input data and corresponding label (or labels) for one or more frames of input, used for standard cross-entropy training of neural nets (and possibly for other objective functions)
 CNnetExampleNnetExample is the input data and corresponding label (or labels) for one or more frames of input, used for standard cross-entropy training of neural nets (and possibly for other objective functions)
 CNnetExampleBackgroundReader
 CNnetExampleStructureCompareThis comparator object compares just the structural aspects of the NnetExample without looking at the value of the features
 CNnetExampleStructureHasherThis hashing object hashes just the structural aspects of the NnetExample without looking at the value of the features
 CNnetFixConfig
 CNnetGenerationOptions
 CNnetInferenceTaskClass NnetInferenceTask represents a chunk of an utterance that is requested to be computed
 CNnetIo
 CNnetIoStructureCompareThis comparison object compares just the structural aspects of the NnetIo object (name, indexes, feature dimension) without looking at the value of features
 CNnetIoStructureHasherThis hashing object hashes just the structural aspects of the NnetIo object (name, indexes, feature dimension) without looking at the value of features
 CNnetLdaStatsAccumulator
 CNnetLimitRankOpts
 CNnetMixupConfig
 CNnetOnlineComputer
 CNnetOptimizeOptions
 CNnetRescaleConfig
 CNnetRescaler
 CNnetShrinkConfigConfiguration class that controls neural net "shrinkage" which is actually a scaling on the parameters of each of the updatable layers
 CNnetSimpleComputationOptions
 CNnetSimpleLoopedComputationOptions
 CNnetSimpleTrainerConfig
 CNnetStats
 CNnetStatsConfig
 CNnetTrainerThis class is for single-threaded training of neural nets using standard objective functions such as cross-entropy (implemented with logsoftmax nonlinearity and a linear objective function) and quadratic loss
 CNnetTrainerOptions
 CNnetTrainOptions
 CNnetUpdater
 CNnetWidenConfigConfiguration class that controls neural net "widening", which means increasing the dimension of the hidden layers of an already-trained neural net
 CTreeClusterer::Node
 COnlineProcessPitch::NormalizationStats
 CNumberIstream< T >
 CObjectiveFunctionInfo
 COfflineFeatureTpl< F >This templated class is intended for offline feature extraction, i.e
 CConvolutionModel::Offset
 COnlineAudioSourceItf
 COnlineCmvnOptions
 COnlineCmvnStateStruct OnlineCmvnState stores the state of CMVN adaptation between utterances (but not the state of the computation within an utterance)
 COnlineEndpointConfig
 COnlineEndpointRuleThis header contains a simple facility for endpointing that should be used in conjunction with the "online2" online decoding code; see ../online2bin/online2-wav-gmm-latgen-faster-endpoint.cc
 COnlineFeatInputItf
 COnlineFeatureInterfaceOnlineFeatureInterface is an interface for online feature processing (it is also usable in the offline setting, but currently we're not using it for that)
 COnlineFeatureMatrix
 COnlineFeatureMatrixOptions
 COnlineFeaturePipelineCommandLineConfigThis configuration class is to set up OnlineFeaturePipelineConfig, which in turn is the configuration class for OnlineFeaturePipeline
 COnlineFeaturePipelineConfigThis configuration class is responsible for storing the configuration options for OnlineFeaturePipeline, but it does not set them
 COnlineGmmAdaptationState
 COnlineGmmDecodingAdaptationPolicyConfigThis configuration class controls when to re-estimate the basis-fMLLR during online decoding
 COnlineGmmDecodingConfig
 COnlineGmmDecodingModelsThis class is used to read, store and give access to the models used for 3 phases of decoding (first-pass with online-CMN features; the ML models used for estimating transforms; and the discriminatively trained models)
 COnlineIvectorEstimationStatsThis class helps us to efficiently estimate iVectors in situations where the data is coming in frame by frame
 COnlineIvectorExtractionConfigThis class includes configuration variables relating to the online iVector extraction, but not including configuration for the "base feature", i.e
 COnlineIvectorExtractionInfoThis struct contains various things that are needed (as const references) by class OnlineIvectorExtractor
 COnlineIvectorExtractorAdaptationStateThis class stores the adaptation state from the online iVector extractor, which can help you to initialize the adaptation state for the next utterance of the same speaker in a more informed way
 COnlineNaturalGradientKeywords for search: natural gradient, naturalgradient, NG-SGD
 COnlineNaturalGradientSimple
 COnlineNnet2DecodingConfig
 COnlineNnet2DecodingThreadedConfig
 COnlineNnet2FeaturePipelineConfigThis configuration class is to set up OnlineNnet2FeaturePipelineInfo, which in turn is the configuration class for OnlineNnet2FeaturePipeline
 COnlineNnet2FeaturePipelineInfoThis class is responsible for storing configuration variables, objects and options for OnlineNnet2FeaturePipeline (including the actual LDA and CMVN-stats matrices, and the iVector extractor, which is a member of ivector_extractor_info
 COnlinePitchFeatureImpl
 COnlinePreconditionerKeywords for search: natural gradient, naturalgradient, NG-SGD
 COnlinePreconditionerSimple
 COnlineSilenceWeighting
 COnlineSilenceWeightingConfig
 COnlineSpeexDecoder
 COnlineSpeexEncoder
 COnlineSpliceOptions
 COnlineTimerClass OnlineTimer is used to test real-time decoding algorithms and evaluate how long the decoding of a particular utterance would take
 COnlineTimingStatsClass OnlineTimingStats stores statistics from timing of online decoding, which will enable the Print() function to print out the average real-time factor and average delay per utterance
 COptimizableInterface< Real >OptimizableInterface provides a virtual class for optimizable objects
 COptimizeLbfgs< Real >
 CSimpleOptions::OptionInfo
 COptionsItf
 COtherReal< T >This class provides a way for switching between double and float types
 COtherReal< double >A specialized class for switching from double to float
 COtherReal< float >A specialized class for switching from float to double
 COutput
 COutputImplBase
 CLatticeDeterminizerPruned< Weight, IntType >::OutputState
 CPackedMatrix< Real >Packed matrix: base class for triangular and symmetric matrices
 CPackedMatrix< double >
 CPackedMatrix< float >
 CLatticeDeterminizer< Weight, IntType >::PairComparator
 CDeterminizerStar< F >::PairComparator
 CLatticeDeterminizerPruned< Weight, IntType >::PairComparator
 CRandomAccessTableReaderSortedArchiveImpl< Holder >::PairCompare
 CPairHasher< Int1, Int2 >A hashing function-object for pairs of ints
 CSgmm2LikelihoodCache::PdfCacheElement
 CPdfPrior
 CPdfPriorOptions
 CCompressedMatrix::PerColHeader
 CPhoneAlignLatticeOptions
 CPitchExtractionOptions
 CPitchFrameInfo
 CPitchInterpolator
 CPitchInterpolatorOptions
 CPitchInterpolatorStats
 CPlda
 CPldaConfig
 CPldaEstimationConfig
 CPldaEstimator
 CPldaStats
 CPldaUnsupervisedAdaptorThis class takes unlabeled iVectors from the domain of interest and uses their mean and variance to adapt your PLDA matrices to a new domain
 CPldaUnsupervisedAdaptorConfig
 CPlpComputerThis is the new-style interface to the PLP computation
 CPlpOptionsPlpOptions contains basic options for computing PLP features
 CRefineClusterer::point_info
 CComputationRenumberer::PointerCompare< T >
 CPosteriorHolder
 CNnetComputation::PrecomputedIndexesInfo
 CProcessPitchOptions
 CProfiler
 CProfileStats
 CProfileStats::ProfileStatsEntry
 CPrunedCompactLatticeComposerPrunedCompactLatticeComposer implements an algorithm for pruned composition
 CPruneSpecialClass< Arc >This class is used to implement the function PruneSpecial
 CPushSpecialClass
 CQuestionsThis class defines, for each EventKeyType, a set of initial questions that it tries and also a number of iterations for which to refine the questions to increase likelihood
 CQuestionsForKeyQuestionsForKey is a class used to define the questions for a key, and also options that allow us to refine the question during tree-building (i.e
 CRandFstOptions
 CRandomAccessTableReader< Holder >Allows random access to a collection of objects in an archive or script file; see The Table concept
 CRandomAccessTableReader< kaldi::TokenHolder >
 CRandomAccessTableReaderImplBase< Holder >
 CRandomAccessTableReaderImplBase< kaldi::TokenHolder >
 CRandomAccessTableReaderMapped< Holder >This class is for when you are reading something in random access, but it may actually be stored per-speaker (or something similar), while the keys you're using are per utterance
 CRandomizerMaskGenerates randomly ordered vector of indices,
 CRandomState
 CRbmTrainOptions
 CRecognizedWord
 CRecyclingVectorThis class serves as a storage for feature vectors with an option to limit the memory usage by removing old elements
 CRefineClusterer
 CRefineClustersOptions
 CRegressionTreeA regression tree is a clustering of Gaussian densities in an acoustic model, such that the group of Gaussians at each node of the tree are transformed by the same transform
 CRegtreeFmllrDiagGmmAn FMLLR (feature-space MLLR) transformation, also called CMLLR (constrained MLLR) is an affine transformation of the feature vectors
 CRegtreeFmllrDiagGmmAccsClass for computing the accumulators needed for the maximum-likelihood estimate of FMLLR transforms for an acoustic model that uses diagonal Gaussian mixture models as emission densities
 CRegtreeFmllrOptionsConfiguration variables for FMLLR transforms
 CRegtreeMllrDiagGmmAn MLLR mean transformation is an affine transformation of Gaussian means
 CRegtreeMllrDiagGmmAccsClass for computing the maximum-likelihood estimates of the parameters of an acoustic model that uses diagonal Gaussian mixture models as emission densities
 CRegtreeMllrOptionsConfiguration variables for FMLLR transforms
 CRemoveEpsLocalClass< Arc, ReweightPlus >
 CRemoveSomeInputSymbolsMapper< Arc, I >
 CProfileStats::ReverseSecondComparator
 CReweightPlusDefault< Weight >
 CReweightPlusLogArc
 CRowOpsSplitter
 CRspecifierOptions
 CTaskSequencer< C >::RunTaskArgsList
 Cruntime_error
 CSemaphore
 CSequentialTableReader< Holder >A templated class for reading objects sequentially from an archive or script file; see The Table concept
 CSequentialTableReaderImplBase< Holder >
 CSgmm2FmllrConfigConfiguration variables needed in the estimation of FMLLR for SGMMs
 CSgmm2FmllrGlobalParamsGlobal adaptation parameters
 CSgmm2GauPostElementThis is the entry for a single time
 CSgmm2GselectConfig
 CSgmm2LikelihoodCacheSgmm2LikelihoodCache caches SGMM likelihoods at two levels: the final pdf likelihoods, and the sub-state level likelihoods, which means that with the SCTM system we can avoid redundant computation
 CSgmm2PerFrameDerivedVarsHolds the per-frame precomputed quantities x(t), x_{i}(t), z_{i}(t), and n_{i}(t) (cf
 CSgmm2PerSpkDerivedVars
 CSgmm2Project
 CSgmm2SplitSubstatesConfig
 CShiftedDeltaFeatures
 CShiftedDeltaFeaturesOptions
 CSimpleDecoderSimplest possible decoder, included largely for didactic purposes and as a means to debug more highly optimized decoders
 CSimpleObjectiveInfo
 CFmllrRawAccs::SingleFrameStats
 CFmllrDiagGmmAccs::SingleFrameStats
 CRowOpsSplitter::SingleSplitInfo
 CSingleUtteranceGmmDecoderYou will instantiate this class when you want to decode a single utterance using the online-decoding setup
 CSingleUtteranceNnet2DecoderYou will instantiate this class when you want to decode a single utterance using the online-decoding setup for neural nets
 CSingleUtteranceNnet2DecoderThreadedYou will instantiate this class when you want to decode a single utterance using the online-decoding setup for neural nets
 CSingleUtteranceNnet3DecoderTpl< FST >You will instantiate this class when you want to decode a single utterance using the online-decoding setup for neural nets
 CSingleUtteranceNnet3IncrementalDecoderTpl< FST >You will instantiate this class when you want to decode a single utterance using the online-decoding setup for neural nets
 CSlidingWindowCmnOptions
 CSolverOptionsThis class describes the options for maximizing various quadratic objective functions
 CSparseMatrix< Real >
 CSparseMatrix< float >
 CSparseVector< Real >
 CSpeakerStatsItf
 CSpectrogramComputerClass for computing spectrogram features
 CSpectrogramOptionsSpectrogramOptions contains basic options for computing spectrogram features
 CSpeexOptions
 CSphinxMatrixHolder< kFeatDim >A class for reading/writing Sphinx format matrices
 CSplitDiscriminativeExampleConfigConfig structure for SplitExample, for splitting discriminative training examples
 CSplitDiscriminativeSupervisionOptions
 CSplitExampleStatsThis struct exists only for diagnostic purposes
 CSplitRadixComplexFft< Real >
 CSplitRadixComplexFft< float >
 CPitchFrameInfo::StateInfo
 CNnetStats::StatsElement
 CExampleMergingStats::StatsForExampleSize
 CStdToken
 CStdToLatticeMapper< Real >Class StdToLatticeMapper maps a normal arc (StdArc) to a LatticeArc by putting the StdArc weight as the first element of the LatticeWeight
 CStdVectorRandomizer< T >Randomizes elements of a vector according to a mask
 CCompiler::StepInfo
 CStringHasherA hashing function object for strings
 CStringRepository< Label, StringId >
 CComputationRenumberer::SubMatrixHasher
 CNnetComputation::SubMatrixInfo
 CDeterminizerStar< F >::SubsetEqual
 CLatticeDeterminizerPruned< Weight, IntType >::SubsetEqual
 CLatticeDeterminizer< Weight, IntType >::SubsetEqual
 CLatticeDeterminizer< Weight, IntType >::SubsetEqualStates
 CDeterminizerStar< F >::SubsetEqualStates
 CLatticeDeterminizerPruned< Weight, IntType >::SubsetEqualStates
 CDeterminizerStar< F >::SubsetKey
 CLatticeDeterminizer< Weight, IntType >::SubsetKey
 CLatticeDeterminizerPruned< Weight, IntType >::SubsetKey
 CSgmm2LikelihoodCache::SubstateCacheElement
 CSumDescriptorThis is an abstract base-class
 CSvdApplier
 Csynapse
 CTableComposeCache< F >TableComposeCache lets us do multiple compositions while caching the same matcher
 CTableComposeCache< fst::Fst< fst::StdArc > >
 CTableMatcherOptionsTableMatcher is a matcher specialized for the case where the output side of the left FST always has either all-epsilons coming out of a state, or a majority of the symbol table
 CTableWriter< Holder >A templated class for writing objects to an archive or script file; see The Table concept
 CTableWriterImplBase< Holder >
 CTarjanNode
 CPruneSpecialClass< Arc >::Task
 CLatticeDeterminizerPruned< Weight, IntType >::Task
 CLatticeDeterminizerPruned< Weight, IntType >::TaskCompare
 CTaskSequencer< C >
 CTaskSequencerConfig
 CTcpServer
 CLatticeDeterminizer< Weight, IntType >::TempArc
 CLatticeDeterminizerPruned< Weight, IntType >::TempArc
 CDeterminizerStar< F >::TempArc
 CTestFunction
 CTestFunctor< Arc >
 CThreadSynchronizerClass ThreadSynchronizer acts to guard an arbitrary type of buffer between a producing and a consuming thread (note: it's all symmetric between the two thread types)
 CThrSweepStats
 CTidToTstateMapper
 CTimer
 CSimpleDecoder::Token
 CFasterDecoder::Token
 CLatticeBiglmFasterDecoder::Token
 CLatticeSimpleDecoder::Token
 CBiglmFasterDecoder::Token
 CTokenHolder
 CLatticeIncrementalDecoderTpl< FST, Token >::TokenList
 CLatticeSimpleDecoder::TokenList
 CLatticeBiglmFasterDecoder::TokenList
 CLatticeFasterDecoderTpl< FST, Token >::TokenList
 CTokenVectorHolder
 CTrainingGraphCompiler
 CTrainingGraphCompilerOptions
 CTransitionModel
 CTreeClusterer
 CTreeClusterOptions
 CTreeRenderer
 CLatticeLexiconWordAligner::Tuple
 CLatticePhoneAligner::Tuple
 CLatticeWordAligner::Tuple
 CTransitionModel::Tuple
 CLatticeWordAligner::TupleEqual
 CLatticeLexiconWordAligner::TupleEqual
 CLatticePhoneAligner::TupleEqual
 CLatticeLexiconWordAligner::TupleHash
 CLatticePhoneAligner::TupleHash
 CLatticeWordAligner::TupleHash
 CTwvMetrics
 CTwvMetricsOptions
 CTwvMetricsStats
 CUbmClusteringOptions
 Cunary_function
 CNnetBatchInference::UtteranceInfo
 CNnetBatchDecoder::UtteranceInput
 CNnetBatchDecoder::UtteranceOutput
 CUtteranceSplitter
 CVadEnergyOptions
 CVariableMergingOptimizerThis class is responsible for merging matrices, although you probably want to access it via the function VariableMergingOptimization()
 Cvector
 CVectorBase< Real >Provides a vector abstraction class
 CVectorBase< float >
 CStringRepository< Label, StringId >::VectorEqual
 CVectorFstToKwsLexicographicFstMapper
 CVectorFstTplHolder< Arc >
 CVectorHasher< Int >A hashing function-object for vectors
 CStringRepository< Label, StringId >::VectorKey
 CVectorRandomizerRandomizes elements of a vector according to a mask
 Cvocab_word
 CWaveDataThis class's purpose is to read in Wave files
 CWaveHeaderReadGofer
 CWaveHolder
 CWaveInfoThis class reads and holds wave file header information
 CWaveInfoHolder
 CWordAlignedLatticeTester
 CWordAlignLatticeLexiconInfoThis class extracts some information from the lexicon and stores it in a suitable form for the word-alignment code to use
 CWordAlignLatticeLexiconOpts
 CWordBoundaryInfo
 CWordBoundaryInfoNewOpts
 CWordBoundaryInfoOpts
 CConstArpaLmBuilder::WordsAndLmStatePairLessThan
 CWspecifierOptions
 CBatchedXvectorComputer::XvectorTask
 Cbool
 Cfloat
 Csize_t
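
The options and Table classes listed above (ParseOptions, SequentialTableReader, RandomAccessTableReader, TableWriter and their typedefs) are normally used together in the standard Kaldi command-line-program pattern. Below is a minimal sketch of that pattern, not a program from this codebase: the tool name "example-copy-feats" and the --scale option are hypothetical, while ParseOptions, SequentialBaseFloatMatrixReader, BaseFloatMatrixWriter and Matrix<BaseFloat> are the usual classes/typedefs from base/kaldi-common.h, util/common-utils.h and matrix/kaldi-matrix.h.

#include "base/kaldi-common.h"
#include "util/common-utils.h"
#include "matrix/kaldi-matrix.h"

int main(int argc, char *argv[]) {
  using namespace kaldi;
  const char *usage =
      "Illustrative sketch only: copy feature matrices, optionally scaling them.\n"
      "Usage: example-copy-feats [options] <feats-rspecifier> <feats-wspecifier>\n";
  ParseOptions po(usage);
  BaseFloat scale = 1.0;  // hypothetical option, registered the usual way
  po.Register("scale", &scale, "Factor to scale the features by.");
  po.Read(argc, argv);
  if (po.NumArgs() != 2) {
    po.PrintUsage();
    return 1;
  }
  std::string rspecifier = po.GetArg(1), wspecifier = po.GetArg(2);

  // A SequentialTableReader iterates over (key, object) pairs in an archive
  // or script file; a TableWriter writes such pairs back out.
  SequentialBaseFloatMatrixReader reader(rspecifier);
  BaseFloatMatrixWriter writer(wspecifier);
  int32 num_done = 0;
  for (; !reader.Done(); reader.Next()) {
    std::string key = reader.Key();
    Matrix<BaseFloat> feats(reader.Value());  // copy, since Value() is owned by the reader
    feats.Scale(scale);
    writer.Write(key, feats);
    num_done++;
  }
  KALDI_LOG << "Copied " << num_done << " feature matrices.";
  return 0;
}

As in real Kaldi tools, the rspecifier/wspecifier arguments (for example "ark:feats.ark" or "scp:feats.scp") select the archive or script-file backend used by the Table classes.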