CAccess | |
CAccumAmDiagGmm | |
CAccumDiagGmm | |
CAccumFullGmm | Class for computing the maximum-likelihood estimates of the parameters of a Gaussian mixture model |
CAccumulateTreeStatsInfo | |
CAccumulateTreeStatsOptions | |
CActivePath | |
►CAffineXformStats | |
CFmllrDiagGmmAccs | This does not work with multiple feature transforms |
CAgglomerativeClusterer | Necessary mechanisms for the actual clustering algorithm |
CAhcCluster | AhcCluster is the cluster object for the agglomerative clustering |
CAlignConfig | |
CAlignedTermsPair | |
CAmDiagGmm | |
CAmNnet | |
CAmNnetSimple | |
CAmSgmm2 | Class for definition of the subspace Gmm acoustic model |
CAnalyzer | This struct exists to set up various pieces of analysis; it helps avoid the repetition of code where we compute all these things in sequence |
CArbitraryResample | Class ArbitraryResample allows you to resample a signal (assumed zero outside the sample region, not periodic) at arbitrary specified time values, which don't have to be linearly spaced |
CMinimumBayesRisk::Arc | |
CGrammarFstPreparer::ArcCategory | |
CArcIterator< GrammarFst > | This is the overridden template for class ArcIterator for GrammarFst |
CArcPosteriorComputer | |
►CArpaFileParser | ArpaFileParser is an abstract base class for ARPA LM file conversion |
CArpaLmCompiler | |
CConstArpaLmBuilder | |
CArpaLine | |
►CArpaLmCompilerImplInterface | |
CArpaLmCompilerImpl< HistKey > | |
CArpaParseOptions | Options that control ArpaFileParser |
CBackpointerToken | |
►Cbasic_filebuf | |
Cbasic_pipebuf< CharType, Traits > | |
►Cbasic_streambuf | |
Cbasic_filebuf< CharT, Traits > | |
CBasicHolder< BasicType > | BasicHolder is valid for float, double, bool, and integer types |
CBasicPairVectorHolder< BasicType > | BasicPairVectorHolder is a Holder for a vector of pairs of a basic type, e.g |
CBasicVectorHolder< BasicType > | A Holder for a vector of basic types, e.g |
CBasicVectorVectorHolder< BasicType > | BasicVectorVectorHolder is a Holder for a vector of vector of a basic type, e.g |
CBasisFmllrAccus | Stats for fMLLR subspace estimation |
CBasisFmllrEstimate | Estimation functions for basis fMLLR |
CBasisFmllrOptions | |
CBatchedXvectorComputer | |
CBatchedXvectorComputerOptions | |
CLatticeFasterOnlineDecoderTpl< FST >::BestPathIterator | |
CLatticeIncrementalOnlineDecoderTpl< FST >::BestPathIterator | |
CBiglmFasterDecoder | This is as FasterDecoder, but does online composition between HCLG and the "difference language model", which is a deterministic FST that represents the difference between the language model you want and the language model you compiled HCLG with |
CCuBlockMatrix< Real >::BlockMatrixData | |
CBottomUpClusterer | |
►CCacheArcIterator | |
CArcIterator< TrivialFactorWeightFst< A, F > > | |
►CCacheImpl | |
CTrivialFactorWeightFstImpl< A, F > | |
►CCacheOptions | |
CTrivialFactorWeightOptions< Arc > | |
►CCacheStateIterator | |
CStateIterator< TrivialFactorWeightFst< A, F > > | |
CCachingOptimizingCompiler | This class enables you to do the compilation and optimization in one call, and also ensures that if the ComputationRequest is identical to the previous one, the compilation process is not repeated |
CCachingOptimizingCompilerOptions | |
CChainExampleMerger | This class is responsible for arranging examples in groups that have the same structure
CChainObjectiveInfo | |
CCheckComputationOptions | |
CLmState::ChildrenVectorLessThan | |
CLmState::ChildType | |
CChunkInfo | ChunkInfo is a class whose purpose is to describe the structure of matrices holding features |
CChunkInfo | |
CChunkTimeInfo | Struct ChunkTimeInfo is used by class UtteranceSplitter to output information about how we split an utterance into chunks |
CCindexHasher | |
CComputationGraphBuilder::CindexInfo | |
CCindexSet | |
CCindexVectorHasher | |
CPldaStats::ClassInfo | |
CClatRescoreTuple | |
►CClusterable | |
CGaussClusterable | GaussClusterable wraps Gaussian statistics in a form accessible to generic clustering algorithms |
CScalarClusterable | ScalarClusterable clusters scalars with x^2 loss |
CVectorClusterable | VectorClusterable wraps vectors in a form accessible to generic clustering algorithms |
CClusterKMeansOptions | |
CCollapseModelConfig | Config class for the CollapseModel function |
CNnetComputation::Command | |
CCommandAttributes | |
CNnetComputer::CommandDebugInfo | |
CCommandPairComparator | |
CCompactLatticeHolder | |
CCompactLatticeMinimizer< Weight, IntType > | |
CCompactLatticePusher< Weight, IntType > | |
CCompactLatticeToKwsProductFstMapper | |
CCompactLatticeWeightCommonDivisorTpl< BaseWeightType, IntType > | |
CCompactLatticeWeightTpl< WeightType, IntType > | |
CCompareFirst< Real > | |
CCompareFirstMemberOfPair< A, B > | Comparator object for pairs that compares only the first member of each pair
CComparePosteriorByPdfs | |
CCompareReverseSecond | |
CCompartmentalizedBottomUpClusterer | |
CCompBotClustElem | |
CCompiler | This class creates an initial version of the NnetComputation, without any optimization or sharing of matrices |
CCompilerOptions | |
►CComponent | Abstract class, the basic element of the network: a box with defined inputs, outputs, and a transformation-function interface
CDctComponent | Discrete cosine transform |
CFixedAffineComponent | FixedAffineComponent is an affine transform that is supplied at network initialization time and is not trainable |
CFixedBiasComponent | FixedBiasComponent applies a fixed per-element bias; it's similar to the AddShift component in the nnet1 setup (and only needed for nnet1 model conversion)
CFixedLinearComponent | FixedLinearComponent is a linear transform that is supplied at network initialization time and is not trainable |
CFixedScaleComponent | FixedScaleComponent applies a fixed per-element scale; it's similar to the Rescale component in the nnet1 setup (and only needed for nnet1 model conversion) |
CMaxoutComponent | |
CMaxpoolingComponent | MaxPoolingComponent : Max-pooling was first used in ConvNets for selecting a representative activation in an area
►CNonlinearComponent | This kind of Component is a base-class for things like sigmoid and softmax |
CLogSoftmaxComponent | |
CNormalizeComponent | |
CPowerComponent | Take the absolute values of an input vector to a power
CRectifiedLinearComponent | |
CSigmoidComponent | |
CSoftHingeComponent | |
CSoftmaxComponent | |
CTanhComponent | |
CPermuteComponent | PermuteComponent does a permutation of the dimensions (by default, a fixed random permutation, but it may be specified) |
CPnormComponent | |
►CRandomComponent | |
CAdditiveNoiseComponent | This is a bit similar to dropout, but adds (rather than multiplies by) Gaussian noise with a given standard deviation
CDropoutComponent | This Component, if present, randomly zeroes half of the inputs and multiplies the other half by two |
CScaleComponent | |
CSpliceComponent | Splices a context window of frames together [over time] |
CSpliceMaxComponent | This is as SpliceComponent but outputs the max of any of the inputs (taking the max across time) |
CSumGroupComponent | |
►CUpdatableComponent | Class UpdatableComponent is a Component which has trainable parameters and contains some global parameters for stochastic gradient descent (learning rate, L2 regularization constant) |
►CAffineComponent | |
CAffineComponentPreconditioned | |
CAffineComponentPreconditionedOnline | Keywords: natural gradient descent, NG-SGD, naturalgradient |
►CBlockAffineComponent | |
CBlockAffineComponentPreconditioned | |
CConvolutional1dComponent | Convolutional1dComponent implements convolution over frequency axis |
►CComponent | Abstract base-class for neural-net components |
CBackpropTruncationComponent | |
CBatchNormComponent | |
CClipGradientComponent | |
CDistributeComponent | This Component takes a larger input-dim than output-dim, where the input-dim must be a multiple of the output-dim, and distributes different blocks of the input dimension to different 'x' values |
CElementwiseProductComponent | |
CFixedAffineComponent | FixedAffineComponent is an affine transform that is supplied at network initialization time and is not trainable |
CFixedBiasComponent | FixedBiasComponent applies a fixed per-element bias; it's similar to the AddShift component in the nnet1 setup (and only needed for nnet1 model conversion)
CFixedScaleComponent | FixedScaleComponent applies a fixed per-element scale; it's similar to the Rescale component in the nnet1 setup (and only needed for nnet1 model conversion) |
CMaxpoolingComponent | |
►CNonlinearComponent | |
CLogSoftmaxComponent | |
CRectifiedLinearComponent | |
CSigmoidComponent | |
CSoftmaxComponent | |
CTanhComponent | |
CNoOpComponent | NoOpComponent just duplicates its input |
CNormalizeComponent | |
CPermuteComponent | PermuteComponent changes the order of the columns
CPnormComponent | |
►CRandomComponent | |
CDropoutComponent | |
CDropoutMaskComponent | |
CGeneralDropoutComponent | GeneralDropoutComponent implements dropout, including a continuous variant where the thing we multiply is not just zero or one, but may be a continuous value |
CSpecAugmentTimeMaskComponent | SpecAugmentTimeMaskComponent implements the time part of SpecAugment |
CRestrictedAttentionComponent | RestrictedAttentionComponent implements an attention model with restricted temporal context |
CStatisticsExtractionComponent | |
CStatisticsPoolingComponent | |
CSumBlockComponent | SumBlockComponent sums over blocks of its input: for instance, if you create one with the config "input-dim=400 output-dim=100", its output will be the sum over the 4 100-dimensional blocks of the input |
CSumGroupComponent | SumGroupComponent is used to sum up groups of posteriors |
►CUpdatableComponent | Class UpdatableComponent is a Component which has trainable parameters; it extends the interface of Component |
►CAffineComponent | |
CNaturalGradientAffineComponent | |
CBlockAffineComponent | This class implements an affine transform using a block diagonal matrix e.g., one whose weight matrix is all zeros except for blocks on the diagonal |
CCompositeComponent | CompositeComponent is a component representing a sequence of [simple] components |
CConstantComponent | |
CConstantFunctionComponent | |
CConvolutionComponent | WARNING, this component is deprecated in favor of TimeHeightConvolutionComponent, and will be deleted |
CLinearComponent | |
CLstmNonlinearityComponent | |
CPerElementOffsetComponent | |
►CPerElementScaleComponent | PerElementScaleComponent scales each dimension of its input with a separate trainable scale; it's like a linear component with a diagonal matrix |
CNaturalGradientPerElementScaleComponent | NaturalGradientPerElementScaleComponent is like PerElementScaleComponent but it uses a natural gradient update for the per-element scales |
►CRepeatedAffineComponent | |
CNaturalGradientRepeatedAffineComponent | |
CScaleAndOffsetComponent | |
►CTdnnComponent | TdnnComponent is a more memory-efficient alternative to manually splicing several frames of input and then using a NaturalGradientAffineComponent or a LinearComponent |
CBlockFactorizedTdnnComponent | BlockFactorizedTdnnComponent is a modified form of TdnnComponent (it inherits from TdnnComponent) that is inspired by quaternion-based neural networks, but is more general and trainable: the idea is that blocks of parameters are linear functions of a smaller number of parameters, where the linear function itself is trainable
CTdnnComponent | TdnnComponent is a more memory-efficient alternative to manually splicing several frames of input and then using a NaturalGradientAffineComponent or a LinearComponent |
CTimeHeightConvolutionComponent | TimeHeightConvolutionComponent implements 2-dimensional convolution where one of the dimensions of convolution (which traditionally would be called the width axis) is identified with time
CTimeHeightConvolutionComponent | TimeHeightConvolutionComponent implements 2-dimensional convolution where one of the dimensions of convolution (which traditionally would be called the width axis) is identified with time
►CComponentPrecomputedIndexes | |
CBackpropTruncationComponentPrecomputedIndexes | |
CDistributeComponentPrecomputedIndexes | |
CGeneralDropoutComponentPrecomputedIndexes | |
CRestrictedAttentionComponent::PrecomputedIndexes | |
CSpecAugmentTimeMaskComponentPrecomputedIndexes | |
CStatisticsExtractionComponentPrecomputedIndexes | |
CStatisticsPoolingComponentPrecomputedIndexes | |
CTdnnComponent::PrecomputedIndexes | |
CTdnnComponent::PrecomputedIndexes | |
CTimeHeightConvolutionComponent::PrecomputedIndexes | |
CTimeHeightConvolutionComponent::PrecomputedIndexes | |
CPrunedCompactLatticeComposer::ComposedStateInfo | |
CComposeLatticePrunedOptions | |
CCompressedAffineXformStats | |
CCompressedMatrix | |
CComputationAnalysis | This class performs various kinds of specific analysis on top of what class Analyzer gives you immediately |
CComputationCache | Class ComputationCache is used inside class CachingOptimizingCompiler to cache previously computed computations |
CComputationChecker | |
CComputationExpander | |
CComputationGraph | The first step in compilation is to turn the ComputationSpecification into a ComputationGraph, where for each Cindex we have a list of other Cindexes that it depends on |
CComputationGraphBuilder | An abstract representation of a set of Cindexes |
CNnetBatchComputer::ComputationGroupInfo | |
CNnetBatchComputer::ComputationGroupKey | |
CNnetBatchComputer::ComputationGroupKeyHasher | |
CComputationLoopedOptimizer | |
CComputationRenumberer | |
CComputationRequest | |
CComputationRequestHasher | |
CComputationRequestPtrEqual | |
CLatticePhoneAligner::ComputationState | |
CLatticeLexiconWordAligner::ComputationState | |
CLatticeWordAligner::ComputationState | |
CComputationStepsComputer | This class arranges the cindex_ids of the computation into a sequence of lists called "steps", which will correspond roughly to the commands in the compiled computation |
CComputationVariables | This class relates the matrices and sub-matrices in the computation to imaginary "variables", such that we can think of the operations as operating on sets of individual variables, and we can then do analysis that lets us do optimization |
CConfigLine | This class is responsible for parsing input like hi-there xx=yyy a=b c empty= f-oo=Append(bar, sss) ba_z=123 bing='a b c' baz="a b c d='a b' e" and giving you access to the fields, in this case |
CConstArpaLm | |
CConstIntegerSet< I > | |
CConstIntegerSet< EventValueType > | |
CConstIntegerSet< int32 > | |
CConstIntegerSet< Label > | |
►CContextDependencyInterface | Context-dep-itf.h provides a link between the tree-building code in ../tree/, and the FST code in ../fstext/ (particularly, ../fstext/context-dep.h) |
CContextDependency | |
CConvolutionComputation | This struct represents the structure of a convolution computation |
CConvolutionComputationIo | |
CConvolutionComputationOptions | This struct contains options for compiling the convolutional computation |
CConvolutionModel | This comment explains the basic framework used for everything related to time-height convolution |
CConvolutionComputation::ConvolutionStep | |
CCountStats | |
CCovarianceStats | |
CCRnnLM | |
CCuAllocatorOptions | |
►CCuArrayBase< T > | Class CuArrayBase, CuSubArray and CuArray are analogues of classes CuVectorBase, CuSubVector and CuVector, except that they are intended to store things other than float/double: they are intended to store integers or small structs |
CCuArray< T > | Class CuArray represents a vector of an integer or struct of type T |
CCuSubArray< T > | |
►CCuArrayBase< int32 > | |
CCuArray< int32 > | |
►CCuArrayBase< Int32Pair > | |
CCuArray< Int32Pair > | |
CCuBlockMatrix< Real > | The class CuBlockMatrix holds a vector of objects of type CuMatrix, say, M_1, M_2, |
CCuBlockMatrixData_ | This structure is used in cu-block-matrix.h to store information about a block-diagonal matrix |
►CCuCompressedMatrixBase | Class CuCompressedMatrixBase is an abstract base class that allows you to compress a matrix of type CuMatrix<BaseFloat> |
CCuCompressedMatrix< I > | Class CuCompressedMatrix, templated on an integer type (expected to be one of: int8, uint8, int16, uint16), this provides a way to approximate a CuMatrix in a more memory-efficient format |
►CCuMatrixBase< Real > | Matrix for CUDA computing |
CCuMatrix< Real > | This class represents a matrix that's stored on the GPU if we have one, and in memory if not |
CCuSubMatrix< Real > | This class is used for a piece of a CuMatrix |
►CCuMatrixBase< double > | |
CCuMatrix< double > | |
►CCuMatrixBase< float > | |
CCuMatrix< float > | |
►CCuPackedMatrix< Real > | Packed matrix for CUDA computing
CCuSpMatrix< Real > | |
CCuTpMatrix< Real > | |
CCuRand< Real > | |
CCuRand< float > | |
CCuSparseMatrix< Real > | |
CCuValue< Real > | The following class is used to simulate non-const references to Real, e.g |
►CCuVectorBase< Real > | Vector for CUDA computing |
CCuSubVector< Real > | |
CCuVector< Real > | |
►CCuVectorBase< double > | |
CCuVector< double > | |
►CCuVectorBase< float > | |
CCuVector< float > | |
CDecisionTreeSplitter | |
►CDecodableInterface | DecodableInterface provides a link between the (acoustic-modeling and feature-processing) code and the decoder |
►CDecodableAmDiagGmmUnmapped | DecodableAmDiagGmmUnmapped is a decodable object that takes indices that correspond to pdf-id's plus one |
CDecodableAmDiagGmm | |
CDecodableAmDiagGmmRegtreeFmllr | |
CDecodableAmDiagGmmRegtreeMllr | |
CDecodableAmDiagGmmScaled | |
►CDecodableAmSgmm2 | |
CDecodableAmSgmm2Scaled | |
CDecodableDiagGmmScaledOnline | |
CDecodableMapped | |
CDecodableMatrixMapped | This is like DecodableMatrixScaledMapped, but it doesn't support an acoustic scale, and it does support a frame offset, whereby you can state that the first row of 'likes' is actually the n'th row of the matrix of available log-likelihoods |
CDecodableMatrixMappedOffset | This decodable class returns log-likes stored in a matrix; it supports repeatedly writing to the matrix and setting a time-offset representing the frame-index of the first row of the matrix |
CDecodableMatrixScaled | |
CDecodableMatrixScaledMapped | |
►CDecodableSum | |
CDecodableSumScaled | |
CDecodableAmNnet | DecodableAmNnet is a decodable object that decodes with a neural net acoustic model of type AmNnet |
CDecodableAmNnetParallel | This version of DecodableAmNnet is intended for a version of the decoder that processes different utterances with multiple threads |
CDecodableNnet2Online | This Decodable object for class nnet2::AmNnet takes feature input from class OnlineFeatureInterface, unlike, say, class DecodableAmNnet which takes feature input from a matrix |
CDecodableAmNnetSimple | |
CDecodableAmNnetSimpleLooped | |
CDecodableAmNnetSimpleParallel | |
►CDecodableNnetLoopedOnlineBase | |
CDecodableAmNnetLoopedOnline | |
CDecodableNnetLoopedOnline | |
COnlineDecodableDiagGmmScaled | |
CDecodableNnet2OnlineOptions | |
CDecodableNnetSimple | |
CDecodableNnetSimpleLooped | |
CDecodableNnetSimpleLoopedInfo | When you instantiate class DecodableNnetSimpleLooped, you should give it a const reference to this class, that has been previously initialized |
CDecodeInfo | |
CDecodeUtteranceLatticeFasterClass | This class basically does the same job as the function DecodeUtteranceLatticeFaster, but in a way that allows us to build a multi-threaded command line program more easily |
CDeltaFeatures | |
CDeltaFeaturesOptions | |
CDerivativeTimeLimiter | |
CDescriptor | |
►CDeterministicOnDemandFst< Arc > | Class DeterministicOnDemandFst is an "FST-like" base-class |
CBackoffDeterministicOnDemandFst< Arc > | This class wraps an Fst, representing a language model, using the interface for "BackoffDeterministicOnDemandFst" |
CCacheDeterministicOnDemandFst< Arc > | |
CComposeDeterministicOnDemandFst< Arc > | |
CLmExampleDeterministicOnDemandFst< Arc > | This class is for didactic purposes; it does not really do anything
CUnweightedNgramFst< Arc > | The class UnweightedNgramFst is a DeterministicOnDemandFst whose states encode an n-gram history |
►CDeterministicOnDemandFst< fst::StdArc > | |
CConstArpaLmDeterministicFst | This class wraps a ConstArpaLm format language model with the interface defined in DeterministicOnDemandFst |
CRnnlmDeterministicFst | |
►CDeterministicOnDemandFst< StdArc > | |
CInverseContextFst | |
CInverseLeftBiphoneContextFst | |
CScaleDeterministicOnDemandFst | Class ScaleDeterministicOnDemandFst takes another DeterministicOnDemandFst and scales the weights (like applying a language-model scale) |
CDeterminizeLatticeOptions | |
CDeterminizeLatticePhonePrunedOptions | |
CDeterminizeLatticePrunedOptions | |
CDeterminizeLatticeTask | |
CDeterminizerStar< F > | |
CDfsOrderVisitor< Arc > | |
CDiagGmm | Definition for Gaussian Mixture Model with diagonal covariances |
CDiagGmmNormal | Definition for Gaussian Mixture Model with diagonal covariances in normal mode: where the parameters are stored as means and variances (instead of the exponential form that the DiagGmm class is stored as) |
►CDifferentiableTransform | This class is for speaker-dependent feature-space transformations – principally various varieties of fMLLR, including mean-only, diagonal and block-diagonal versions – which are intended for placement in the bottleneck of a neural net |
CAppendTransform | This is a version of the transform class that consists of a number of other transforms, appended dimension-wise
CAppendTransform | This is a version of the transform class that consists of a number of other transforms, appended dimension-wise
CFmllrTransform | Notes on the math behind differentiable fMLLR transform |
CNoOpTransform | This is a version of the transform class that does nothing |
CSequenceTransform | This is a version of the transform class that does a sequence of other transforms, specified by other instances of the DifferentiableTransform interface |
CSimpleMeanTransform | This version of the transform class does a mean normalization: adding an offset to its input so that the difference (per speaker) of the transformed class means from the speaker-independent class means is minimized |
CDiscriminativeComputation | |
CDiscriminativeExampleMerger | This class is responsible for arranging examples in groups that have the same structure
CDiscriminativeExampleSplitter | For each frame, judge: |
CDiscriminativeExamplesRepository | This struct stores neural net training examples to be used in multi-threaded training |
CDiscriminativeNnetExample | This struct is used to store the information we need for discriminative training (MMI or MPE) |
CDiscriminativeObjectiveFunctionInfo | |
CDiscriminativeObjectiveInfo | |
CDiscriminativeOptions | |
CDiscriminativeSupervision | |
CDiscriminativeSupervisionSplitter | |
CParseOptions::DocInfo | Structure for options' documentation |
CDummyOptions | |
CEbwAmSgmm2Options | This header implements a form of Extended Baum-Welch training for SGMMs |
CEbwAmSgmm2Updater | |
CEbwAmSgmmUpdater | Contains the functions needed to update the SGMM parameters |
CEbwOptions | |
CEbwWeightOptions | |
CEigenvalueDecomposition< Real > | |
CHashList< I, T >::Elem | |
CLatticeDeterminizer< Weight, IntType >::Element | |
CDeterminizerStar< F >::Element | |
CTrivialFactorWeightFstImpl< A, F >::Element | |
CLatticeDeterminizerPruned< Weight, IntType >::Element | |
CTrivialFactorWeightFstImpl< A, F >::ElementEqual | |
CTrivialFactorWeightFstImpl< A, F >::ElementKey | |
CLatticeStringRepository< IntType >::Entry | |
CLatticeStringRepository< IntType >::EntryEqual | |
CLatticeStringRepository< IntType >::EntryKey | |
CDeterminizerStar< F >::EpsilonClosure | |
CDeterminizerStar< F >::EpsilonClosure::EpsilonClosureInfo | |
CCompactLatticeMinimizer< Weight, IntType >::EquivalenceSorter | |
Cerror_stats | |
►CEventMap | A class that is capable of representing a generic mapping from EventType (which is a vector of (key, value) pairs) to EventAnswerType which is just an integer |
CConstantEventMap | |
CSplitEventMap | |
CTableEventMap | |
CEventMapVectorEqual | |
CEventMapVectorHash | |
CExampleFeatureComputer | This class is only added for documentation; it is not intended to ever be used
CExampleFeatureComputerOptions | This class is only added for documentation; it is not intended to ever be used
CExampleGenerationConfig | |
CExampleMerger | This class is responsible for arranging examples in groups that have the same structure
CExampleMergingConfig | |
CExampleMergingStats | This class is responsible for storing, and displaying in log messages, statistics about how examples of different sizes (c.f |
CExamplesRepository | This class stores neural net training examples to be used in multi-threaded training |
CGrammarFst::ExpandedState | Represents an expanded state in an FstInstance |
►CFasterDecoder | |
COnlineFasterDecoder | |
►CFasterDecoderOptions | |
CBiglmFasterDecoderOptions | |
COnlineFasterDecoderOpts | |
CFastNnetCombiner | |
CFbankComputer | Class for computing mel-filterbank features; see Computing MFCC features for more information |
CFbankOptions | FbankOptions contains basic options for computing filterbank features |
CFeatureTransformEstimateOptions | |
CFeatureWindowFunction | |
►CFloatWeightTpl | |
CArcticWeightTpl< T > | |
CFmllrOptions | |
CFmllrRawAccs | |
CFmllrRawOptions | |
CFmllrSgmm2Accs | Class for computing the accumulators needed for the maximum-likelihood estimate of FMLLR transforms for a subspace GMM acoustic model |
CFmpe | |
CFmpeOptions | |
CFmpeStats | |
CFmpeUpdateOptions | |
►CForwardingDescriptor | A ForwardingDescriptor describes how we copy data from another NetworkNode, or from multiple other NetworkNodes, possibly with a scalar weight |
COffsetForwardingDescriptor | Offsets in 't' and 'x' values of other ForwardingDescriptors |
CReplaceIndexForwardingDescriptor | This ForwardingDescriptor modifies the indexes (n, t, x) by replacing one of them (normally t) with a constant value and keeping the rest |
CRoundingForwardingDescriptor | For use in clockwork RNNs and the like, this forwarding-descriptor rounds the time-index t down to the closest t' <= t that is an exact multiple of t_modulus_
CSimpleForwardingDescriptor | SimpleForwardingDescriptor is the base-case of ForwardingDescriptor, consisting of a source node in the graph with a given scalar weight (which will in the normal case be 1.0) |
CSwitchingForwardingDescriptor | Chooses from different inputs based on the time index modulo (the number of ForwardingDescriptors given as inputs)
CLatticeBiglmFasterDecoder::ForwardLink | |
CForwardLink< Token > | |
CLatticeSimpleDecoder::ForwardLink | |
CFrameExtractionOptions | |
CDiscriminativeExampleSplitter::FrameInfo | |
COnlineSilenceWeighting::FrameInfo | |
CGrammarFst::FstInstance | |
CFullGmm | Definition for Gaussian Mixture Model with full covariances |
CFullGmmNormal | Definition for Gaussian Mixture Model with full covariances in normal mode: where the parameters are stored as means and variances (instead of the exponential form that the FullGmm class is stored as) |
CMinimumBayesRisk::GammaCompare | |
CGaussInfo | |
CGaussPostHolder | |
CGeneralDescriptor | This class is only used when parsing Descriptors |
CGeneralMatrix | This class is a wrapper that enables you to store a matrix in one of three forms: either as a Matrix<BaseFloat>, or a CompressedMatrix, or a SparseMatrix<BaseFloat> |
CGenericHolder< SomeType > | GenericHolder serves to document the requirements of the Holder interface; it's not intended to be used |
CCompressedMatrix::GlobalHeader | |
CGrammarFst | GrammarFst is an FST that is 'stitched together' from multiple FSTs, that can recursively incorporate each other |
CGrammarFstArc | |
CGrammarFstPreparer | |
CHashList< I, T >::HashBucket | |
CHashList< I, T > | |
CHashList< PairId, kaldi::BiglmFasterDecoder::Token *> | |
CHashList< PairId, kaldi::LatticeBiglmFasterDecoder::Token *> | |
CHashList< StateId, decoder::BackpointerToken *> | |
CHashList< StateId, kaldi::FasterDecoder::Token *> | |
CHashList< StateId, Token *> | |
CHmmCacheHash | |
CHmmTopology::HmmState | A structure defined inside HmmTopology to represent an HMM state
CHmmTopology | A class for storing topology information for phones |
CHtkHeader | A structure containing the HTK header |
CHtkMatrixHolder | |
CHTransducerConfig | Configuration class for the GetHTransducer() function; see The HTransducerConfig configuration class for context |
CIdentityFunction< T > | |
CImageAugmentationConfig | |
►CImplToFst | |
CTrivialFactorWeightFst< A, F > | TrivialFactorWeightFst takes as template parameter a FactorIterator as defined above |
CIndex | Struct Index is intended to represent the various indexes by which we number the rows of the matrices that the Components process: mainly 'n', the index of the member of the minibatch, 't', used for the frame index in speech recognition, and 'x', which is a catch-all extra index which we might use in convolutional setups or for other reasons |
CIndexHasher | |
CIndexLessNxt | |
CIndexSet | An abstract representation of a set of Indexes |
CIndexVectorHasher | |
CInput | |
►CInputImplBase | |
CFileInputImpl | |
COffsetFileInputImpl | |
CPipeInputImpl | |
CStandardInputImpl | |
CInt32AndFloat | |
CInt32IsZero | |
CInt32Pair | |
CInterval | |
CExampleMergingConfig::IntSet | |
CIoSpecification | |
CIoSpecificationHasher | |
CIvectorEstimationOptions | |
CIvectorExtractor | |
CIvectorExtractorComputeDerivedVarsClass | |
CIvectorExtractorEstimationOptions | Options for training the IvectorExtractor, e.g. variance flooring |
CIvectorExtractorOptions | |
CIvectorExtractorStats | IvectorExtractorStats is a class used to update the parameters of the ivector extractor |
CIvectorExtractorStatsOptions | Options for IvectorExtractorStats, which is used to update the parameters of IvectorExtractor |
CIvectorExtractorUpdateProjectionClass | |
CIvectorExtractorUpdateWeightClass | |
CIvectorExtractorUtteranceStats | These are the stats for a particular utterance, i.e |
CIvectorExtractTask | |
CIvectorTask | |
CKaldiCompileTimeAssert< B > | |
CKaldiCompileTimeAssert< true > | |
CKaldiObjectHolder< KaldiType > | KaldiObjectHolder works for Kaldi objects that have the "standard" Read and Write functions, and a copy constructor |
CKaldiRnnlmWrapper | |
CKaldiRnnlmWrapperOpts | |
CComponent::key_value | A pair of type and marker
►CkMarkerMap | |
►CComponent | Abstract class, building block of the network |
CAveragePoolingComponent | AveragePoolingComponent : The input/output matrices are split into submatrices with width 'pool_stride_'
CBlockSoftmax | |
CCopyComponent | Rearrange the matrix columns according to the indices in copy_from_indices_ |
CDropout | |
CHiddenSoftmax | |
CKlHmm | |
CLengthNormComponent | Rescale the matrix-rows to have unit length (L2-norm) |
CMaxPoolingComponent | MaxPoolingComponent : The input/output matrices are split into submatrices with width 'pool_stride_'
►CRbmBase | |
CRbm | |
CSigmoid | |
CSimpleSentenceAveragingComponent | SimpleSentenceAveragingComponent does not have a nested network; it is intended to be used inside a <ParallelComponent>
CSoftmax | |
CSplice | Splices the time context of the input features: input dim N, output dim k*N, with frame offsets o_1, o_2, ..., o_k (example with 11 frames: -5 -4 -3 -2 -1 0 1 2 3 4 5)
CTanh | |
►CUpdatableComponent | Class UpdatableComponent is a Component which has trainable parameters; it contains SGD training hyper-parameters in NnetTrainOptions
CAddShift | Adds a shift to all the rows of the matrix (can be used for global mean normalization)
CAffineTransform | |
CConvolutionalComponent | ConvolutionalComponent implements convolution over a single axis
CFramePoolingComponent | FramePoolingComponent : The input/output matrices are split into frames of width 'feature_dim_'
CLinearTransform | |
CMultiBasisComponent | |
►CMultistreamComponent | Class MultistreamComponent is an extension of UpdatableComponent for recurrent networks, which are trained with parallel sequences |
CBlstmProjected | |
CLstmProjected | |
CParallelComponent | |
CRecurrentComponent | Component with recurrent connections, 'tanh' non-linearity |
CParametricRelu | |
CRescale | Rescale the data column-wise by a vector (can be used for global variance normalization) |
CSentenceAveragingComponent | Deprecated; kept because Katka Zmolikova used it in JSALT 2015
CKwsAlignment | |
CKwScoreStats | |
CKwsProductFstToKwsLexicographicFstMapper | |
CKwsTerm | |
CKwsTermsAligner | |
CKwsTermsAlignerOptions | |
CKwTermEqual | |
CKwTermLower | |
CLatticeArcRecord | This is used in CompactLatticeLimitDepth |
CLatticeBiglmFasterDecoder | This is as LatticeFasterDecoder, but does online composition between HCLG and the "difference language model", which is a deterministic FST that represents the difference between the language model you want and the language model you compiled HCLG with |
CLatticeDeterminizer< Weight, IntType > | |
CLatticeDeterminizerPruned< Weight, IntType > | |
CLatticeFasterDecoderConfig | |
CLatticeFasterDecoderTpl< FST, Token > | This is the "normal" lattice-generating decoder |
►CLatticeFasterDecoderTpl< FST, decoder::BackpointerToken > | |
CLatticeFasterOnlineDecoderTpl< FST > | LatticeFasterOnlineDecoderTpl is as LatticeFasterDecoderTpl but also supports an efficient way to get the best path (see the function BestPathEnd()), which is useful in endpointing and in situations where you might want to frequently access the best path |
►CLatticeFasterDecoderTpl< fst::StdFst, decoder::BackpointerToken > | |
CLatticeFasterOnlineDecoderTpl< fst::StdFst > | |
CLatticeHolder | |
CLatticeIncrementalDecoderConfig | The normal decoder, lattice-faster-decoder.h, sometimes has an issue in real-time applications with long utterances: each time you get the lattice, the lattice determinization can take a considerable amount of time, which introduces latency
CLatticeIncrementalDecoderTpl< FST, Token > | This is an extension to the "normal" lattice-generating decoder
►CLatticeIncrementalDecoderTpl< FST, decoder::BackpointerToken > | |
CLatticeIncrementalOnlineDecoderTpl< FST > | LatticeIncrementalOnlineDecoderTpl is as LatticeIncrementalDecoderTpl but also supports an efficient way to get the best path (see the function BestPathEnd()), which is useful in endpointing and in situations where you might want to frequently access the best path |
CLatticeIncrementalDeterminizer | This class is used inside LatticeIncrementalDecoderTpl; it handles some of the details of incremental determinization |
CDiscriminativeSupervisionSplitter::LatticeInfo | |
CLatticeLexiconWordAligner | |
CLatticePhoneAligner | |
CLatticeReader | LatticeReader provides (static) functions for reading both Lattice and CompactLattice, in text form |
CLatticeSimpleDecoder | Simplest possible decoder, included largely for didactic purposes and as a means to debug more highly optimized decoders |
CLatticeSimpleDecoderConfig | |
CPrunedCompactLatticeComposer::LatticeStateInfo | |
CLatticeStringRepository< IntType > | |
CLatticeToStdMapper< Real > | Class LatticeToStdMapper maps a LatticeArc to a normal arc (StdArc) by adding the elements of the LatticeArc weight |
CLatticeWeightTpl< FloatType > | |
CLatticeWeightTpl< BaseFloat > | |
CLatticeWordAligner | |
CLbfgsOptions | Options for the L-BFGS implementation (see OptimizeLbfgs)
►CLdaEstimate | Class for computing linear discriminant analysis (LDA) transform |
►CFeatureTransformEstimate | Class for computing a feature transform used for preconditioning of the training data in neural-networks |
CFeatureTransformEstimateMulti | |
CLdaEstimateOptions | |
CDecodableAmDiagGmmUnmapped::LikelihoodCacheRecord | Defines a cache record for a state |
CLimitRankClass | |
CLinearCgdOptions | |
CLinearResample | LinearResample is a special case of ArbitraryResample, where we want to resample a signal at linearly spaced intervals (this means we want to upsample or downsample the signal) |
CLinearVtln | |
CLmState | |
CMessageLogger::Log | |
CMessageLogger::LogAndThrow | |
CLogisticRegression | |
CLogisticRegressionConfig | |
CLogMessageEnvelope | Log message severity and source location info |
►CLossItf | |
CMse | |
CMultiTaskLoss | |
CXent | |
CLossOptions | |
CMapDiagGmmOptions | Configuration variables for Maximum A Posteriori (MAP) update |
CMapInputSymbolsMapper< Arc, I > | |
CMapTransitionUpdateConfig | |
►CMatcherBase | |
CTableMatcher< F, BackoffMatcher > | |
CTableMatcherImpl< F, BackoffMatcher > | |
CTableMatcher< F > | |
CTableMatcher< fst::Fst< fst::StdArc > > | |
CMatrixAccesses | |
►CMatrixBase< Real > | Base class which provides matrix operations not involving resizing or allocation |
CMatrix< Real > | A class for storing matrices |
CSubMatrix< Real > | Sub-matrix representation |
CMatrix< BaseFloat > | |
CMatrix< double > | |
►CMatrixBase< float > | |
CMatrix< float > | |
CMatrixBuffer | A buffer for caching (utterance-key, feature-matrix) pairs |
CMatrixBufferOptions | |
CMemoryCompressionOptimizer::MatrixCompressInfo | |
CNnetComputation::MatrixDebugInfo | |
CMatrixDim_ | Structure containing size of the matrix plus stride |
CMatrixElement< Real > | |
CMatrixExtender | |
CNnetComputation::MatrixInfo | |
CDerivativeTimeLimiter::MatrixPruneInfo | |
CMatrixRandomizer | Shuffles the rows of a matrix according to the indices in the mask
CMaxChangeStats | |
CMelBanks | |
CMelBanksOptions | |
CRestrictedAttentionComponent::Memo | |
CBatchNormComponent::Memo | |
CMemoryCompressionOptimizer | This class is used in the function OptimizeMemoryCompression(), once we determine that there is some potential to do memory compression for this computation |
CMessageLogger | |
CMfccComputer | |
CMfccOptions | MfccOptions contains basic options for computing MFCC features |
►CMinibatchInfoItf | |
CFmllrTransform::MinibatchInfo | |
CSimpleMeanTransform::MinibatchInfo | |
CNnetBatchComputer::MinibatchSizeInfo | |
CMinimumBayesRisk | This class does the word-level Minimum Bayes Risk computation, and gives you either the 1-best MBR output together with the expected Bayes Risk, or a sausage-like structure |
CMinimumBayesRiskOptions | The implementation of the Minimum Bayes Risk decoding method described in "Minimum Bayes Risk decoding and system combination based on a recursion for edit distance" (Haihua Xu, Daniel Povey, Lidia Mangu and Jie Zhu, Computer Speech and Language, 2011); this is a slightly more principled way to do Minimum Bayes Risk (MBR) decoding than the standard "Confusion Network" method
CMiscComputationInfo | |
CMleAmSgmm2Accs | Class for the accumulators associated with the phonetic-subspace model parameters |
CMleAmSgmm2Options | Configuration variables needed in the SGMM estimation process |
CMleAmSgmm2Updater | |
CMleAmSgmmUpdater | Contains the functions needed to update the SGMM parameters |
CMleDiagGmmOptions | Configuration variables like variance floor, minimum occupancy, etc |
CMleFullGmmOptions | Configuration variables like variance floor, minimum occupancy, etc |
CMleSgmm2SpeakerAccs | Class for the accumulators required to update the speaker vectors v_s |
CMleTransitionUpdateConfig | |
CMlltAccs | A class for estimating Maximum Likelihood Linear Transform, also known as global Semi-tied Covariance (STC), for GMMs |
CModelCollapser | |
CModelUpdateConsolidator | This class is responsible for consolidating the model-update part of backprop commands, for components in (e.g.) recurrent networks that need to have many separate backprop commands, into more efficient single commands operating on consolidated data in larger matrices |
CSvdApplier::ModifiedComponentInfo | |
CRowOpsSplitter::MultiIndexSplitInfo | |
►CMultiThreadable | |
CAccumulateMultiThreadedClass | |
CComputeNormalizersClass | |
CEbwUpdatePhoneVectorsClass | |
CExampleClass | |
CMyThreadClass | |
CDiscTrainParallelClass | |
CDoBackpropParallelClass | |
CFisherComputationClass | |
CUpdatePhoneVectorsClass | |
CUpdateWClass | |
CMultiThreader< C > | |
CMyTaskClass | |
CNaturalLess< CompactLatticeWeightTpl< LatticeWeightTpl< double >, int32 > > | |
CNaturalLess< CompactLatticeWeightTpl< LatticeWeightTpl< float >, int32 > > | |
CNaturalLess< CompactLatticeWeightTpl< LatticeWeightTpl< FloatType >, IntType > > | |
CNaturalLess< LatticeWeightTpl< double > > | |
CNaturalLess< LatticeWeightTpl< float > > | |
CNaturalLess< LatticeWeightTpl< FloatType > > | |
CNccfInfo | |
CNetworkNode | NetworkNode is used to represent three types of thing: an input of the network (which pretty much just states the dimension of the input vector); a Component (e.g
Cneuron | |
CNGram | A parsed n-gram from ARPA LM file |
CNnet | |
CNnet | |
CNnet | |
CNnetBatchComputer | This class does neural net inference in a way that is optimized for GPU use: it combines chunks of multiple utterances into minibatches for more efficient computation |
CNnetBatchDecoder | Decoder object that uses multiple CPU threads for the graph search, plus a GPU for the neural net inference (that's done by a separate NnetBatchComputer object) |
CNnetBatchInference | This class implements a simplified interface to class NnetBatchComputer, which is suitable for programs like 'nnet3-compute' where you want to support fast GPU-based inference on a sequence of utterances, and get them back from the object in the same order |
CNnetChainComputeProb | This class is for computing objective-function values in a nnet3+chain setup, for diagnostics |
CNnetChainExample | NnetChainExample is like NnetExample, but specialized for lattice-free (chain) training |
CNnetChainExampleStructureCompare | This comparator object compares just the structural aspects of the NnetChainExample without looking at the value of the features |
CNnetChainExampleStructureHasher | This hashing object hashes just the structural aspects of the NnetChainExample without looking at the value of the features
CNnetChainSupervision | |
CNnetChainTrainer | This class is for single-threaded training of neural nets using the 'chain' model |
CNnetChainTrainingOptions | |
CNnetCombineAconfig | |
CNnetCombineConfig | Configuration class that controls neural net combination, where we combine a number of neural nets, trying to find for each layer the optimal weighted combination of the different neural-net parameters |
CNnetCombineFastConfig | Configuration class that controls neural net combination, where we combine a number of neural nets, trying to find for each layer the optimal weighted combination of the different neural-net parameters |
CNnetComputation | |
CNnetComputationPrintInserter | |
CNnetComputeOptions | |
CNnetComputeProb | This class is for computing cross-entropy and accuracy values in a neural network, for diagnostics |
CNnetComputeProbOptions | |
CNnetComputer | |
CNnetComputer | Class NnetComputer is responsible for executing the computation described in the "computation" object |
CNnetComputerFromEg | |
CNnetDataRandomizerOptions | Configuration variables that affect how frame-level shuffling is done |
CNnetDiscriminativeComputeObjf | This class is for computing objective-function values in a nnet3 discriminative training, for diagnostics |
CNnetDiscriminativeExample | NnetDiscriminativeExample is like NnetExample, but specialized for sequence training |
CNnetDiscriminativeExampleStructureCompare | This comparator object compares just the structural aspects of the NnetDiscriminativeExample without looking at the value of the features |
CNnetDiscriminativeExampleStructureHasher | This hashing object hashes just the structural aspects of the NnetDiscriminativeExample without looking at the value of the features
CNnetDiscriminativeOptions | |
CNnetDiscriminativeStats | |
CNnetDiscriminativeSupervision | |
CNnetDiscriminativeTrainer | This class is for single-threaded discriminative training of neural nets |
CNnetDiscriminativeUpdateOptions | |
CNnetDiscriminativeUpdater | |
CNnetEnsembleTrainer | |
CNnetEnsembleTrainerConfig | |
CNnetExample | NnetExample is the input data and corresponding label (or labels) for one or more frames of input, used for standard cross-entropy training of neural nets (and possibly for other objective functions) |
CNnetExample | NnetExample is the input data and corresponding label (or labels) for one or more frames of input, used for standard cross-entropy training of neural nets (and possibly for other objective functions) |
CNnetExampleBackgroundReader | |
CNnetExampleStructureCompare | This comparator object compares just the structural aspects of the NnetExample without looking at the value of the features |
CNnetExampleStructureHasher | This hashing object hashes just the structural aspects of the NnetExample without looking at the value of the features |
CNnetFixConfig | |
CNnetGenerationOptions | |
CNnetInferenceTask | Class NnetInferenceTask represents a chunk of an utterance that is requested to be computed |
CNnetIo | |
CNnetIoStructureCompare | This comparison object compares just the structural aspects of the NnetIo object (name, indexes, feature dimension) without looking at the value of features |
CNnetIoStructureHasher | This hashing object hashes just the structural aspects of the NnetIo object (name, indexes, feature dimension) without looking at the value of features |
CNnetLdaStatsAccumulator | |
CNnetLimitRankOpts | |
CNnetMixupConfig | |
CNnetOnlineComputer | |
CNnetOptimizeOptions | |
CNnetRescaleConfig | |
CNnetRescaler | |
CNnetShrinkConfig | Configuration class that controls neural net "shrinkage" which is actually a scaling on the parameters of each of the updatable layers |
►CNnetSimpleComputationOptions | |
CNnetBatchComputerOptions | |
CNnetSimpleLoopedComputationOptions | |
CNnetSimpleTrainerConfig | |
CNnetStats | |
CNnetStatsConfig | |
CNnetTrainer | This class is for single-threaded training of neural nets using standard objective functions such as cross-entropy (implemented with logsoftmax nonlinearity and a linear objective function) and quadratic loss |
CNnetTrainerOptions | |
CNnetTrainOptions | |
CNnetUpdater | |
CNnetWidenConfig | Configuration class that controls neural net "widening", which means increasing the dimension of the hidden layers of an already-trained neural net |
CTreeClusterer::Node | |
COnlineProcessPitch::NormalizationStats | |
CNumberIstream< T > | |
CObjectiveFunctionInfo | |
COfflineFeatureTpl< F > | This templated class is intended for offline feature extraction, i.e |
CConvolutionModel::Offset | |
►COnlineAudioSourceItf | |
COnlinePaSource | |
COnlineTcpVectorSource | |
COnlineVectorSource | |
COnlineCmvnOptions | |
COnlineCmvnState | Struct OnlineCmvnState stores the state of CMVN adaptation between utterances (but not the state of the computation within an utterance) |
COnlineEndpointConfig | |
COnlineEndpointRule | This header contains a simple facility for endpointing, that should be used in conjunction with the "online2" online decoding code; see ../online2bin/online2-wav-gmm-latgen-faster-endpoint.cc |
►COnlineFeatInputItf | |
COnlineCacheInput | |
COnlineCmnInput | |
COnlineDeltaInput | |
COnlineFeInput< E > | |
COnlineLdaInput | |
COnlineMatrixInput | |
COnlineUdpInput | |
►COnlineFeatureInterface | OnlineFeatureInterface is an interface for online feature processing (it is also usable in the offline setting, but currently we're not using it for that) |
COnlineAppendFeature | This online-feature class implements combination of two feature streams (such as pitch, plp) into one stream |
►COnlineBaseFeature | A virtual base class for "source" features such as MFCC, PLP or pitch features
COnlineGenericBaseFeature< C > | This is a templated class for online feature extraction; it's templated on a class like MfccComputer or PlpComputer that does the basic feature extraction |
COnlinePitchFeature | |
COnlineCacheFeature | This feature type can be used to cache its input, to avoid repetition of computation in a multi-pass decoding context |
COnlineCmvn | This class does an online version of the cepstral mean and [optionally] variance normalization, but note that this is not equivalent to the offline version
COnlineDeltaFeature | |
COnlineFeaturePipeline | OnlineFeaturePipeline is a class that's responsible for putting together the various stages of the feature-processing pipeline, in an online setting |
COnlineIvectorFeature | OnlineIvectorFeature is an online feature-extraction class that's responsible for extracting iVectors from raw features such as MFCC, PLP or filterbank |
COnlineMatrixFeature | This class takes a Matrix<BaseFloat> and wraps it as an OnlineFeatureInterface: this can be useful where some earlier stage of feature processing has been done offline but you want to use part of the online pipeline |
COnlineNnet2FeaturePipeline | OnlineNnet2FeaturePipeline is a class that's responsible for putting together the various parts of the feature-processing pipeline for neural networks, in an online setting |
COnlineProcessPitch | This online-feature class implements post processing of pitch features |
COnlineSpliceFrames | |
COnlineTransform | This online-feature class implements any affine or linear transform |
COnlineFeatureMatrix | |
COnlineFeatureMatrixOptions | |
COnlineFeaturePipelineCommandLineConfig | This configuration class is to set up OnlineFeaturePipelineConfig, which in turn is the configuration class for OnlineFeaturePipeline |
COnlineFeaturePipelineConfig | This configuration class is responsible for storing the configuration options for OnlineFeaturePipeline, but it does not set them |
COnlineGmmAdaptationState | |
COnlineGmmDecodingAdaptationPolicyConfig | This configuration class controls when to re-estimate the basis-fMLLR during online decoding |
COnlineGmmDecodingConfig | |
COnlineGmmDecodingModels | This class is used to read, store and give access to the models used for 3 phases of decoding (first-pass with online-CMN features; the ML models used for estimating transforms; and the discriminatively trained models) |
COnlineIvectorEstimationStats | This class helps us to efficiently estimate iVectors in situations where the data is coming in frame by frame |
COnlineIvectorExtractionConfig | This class includes configuration variables relating to the online iVector extraction, but not including configuration for the "base feature", i.e |
COnlineIvectorExtractionInfo | This struct contains various things that are needed (as const references) by class OnlineIvectorExtractor |
COnlineIvectorExtractorAdaptationState | This class stores the adaptation state from the online iVector extractor, which can help you to initialize the adaptation state for the next utterance of the same speaker in a more informed way |
COnlineNaturalGradient | Keywords for search: natural gradient, naturalgradient, NG-SGD |
COnlineNaturalGradientSimple | |
COnlineNnet2DecodingConfig | |
COnlineNnet2DecodingThreadedConfig | |
COnlineNnet2FeaturePipelineConfig | This configuration class is to set up OnlineNnet2FeaturePipelineInfo, which in turn is the configuration class for OnlineNnet2FeaturePipeline |
COnlineNnet2FeaturePipelineInfo | This class is responsible for storing configuration variables, objects and options for OnlineNnet2FeaturePipeline (including the actual LDA and CMVN-stats matrices, and the iVector extractor, which is a member of ivector_extractor_info)
COnlinePitchFeatureImpl | |
COnlinePreconditioner | Keywords for search: natural gradient, naturalgradient, NG-SGD |
COnlinePreconditionerSimple | |
COnlineSilenceWeighting | |
COnlineSilenceWeightingConfig | |
COnlineSpeexDecoder | |
COnlineSpeexEncoder | |
COnlineSpliceOptions | |
COnlineTimer | Class OnlineTimer is used to test real-time decoding algorithms and evaluate how long the decoding of a particular utterance would take |
COnlineTimingStats | Class OnlineTimingStats stores statistics from timing of online decoding, which will enable the Print() function to print out the average real-time factor and average delay per utterance |
COptimizableInterface< Real > | OptimizableInterface provides a virtual class for optimizable objects |
COptimizeLbfgs< Real > | |
CSimpleOptions::OptionInfo | |
►COptionsItf | |
CParseOptions | The class ParseOptions is for parsing command-line options; see Parsing command-line options for more documentation |
CSimpleOptions | The class SimpleOptions is an implementation of OptionsItf that allows setting and getting option values programmatically, i.e., via getter and setter methods |
COtherReal< T > | This class provides a way for switching between double and float types |
COtherReal< double > | A specialized class for switching from double to float |
COtherReal< float > | A specialized class for switching from float to double |
COutput | |
►COutputImplBase | |
CFileOutputImpl | |
CPipeOutputImpl | |
CStandardOutputImpl | |
CLatticeDeterminizerPruned< Weight, IntType >::OutputState | |
►CPackedMatrix< Real > | Packed matrix: base class for triangular and symmetric matrices |
CSpMatrix< Real > | Packed symmetric matrix class
CTpMatrix< Real > | Packed triangular matrix class
►CPackedMatrix< double > | |
CSpMatrix< double > | |
CTpMatrix< double > | |
►CPackedMatrix< float > | |
CSpMatrix< float > | |
CTpMatrix< float > | |
CLatticeDeterminizer< Weight, IntType >::PairComparator | |
CDeterminizerStar< F >::PairComparator | |
CLatticeDeterminizerPruned< Weight, IntType >::PairComparator | |
CRandomAccessTableReaderSortedArchiveImpl< Holder >::PairCompare | |
CPairHasher< Int1, Int2 > | A hashing function-object for pairs of ints |
CSgmm2LikelihoodCache::PdfCacheElement | |
CPdfPrior | |
CPdfPriorOptions | |
CCompressedMatrix::PerColHeader | |
CPhoneAlignLatticeOptions | |
CPitchExtractionOptions | |
CPitchFrameInfo | |
CPitchInterpolator | |
CPitchInterpolatorOptions | |
CPitchInterpolatorStats | |
CPlda | |
CPldaConfig | |
CPldaEstimationConfig | |
CPldaEstimator | |
CPldaStats | |
CPldaUnsupervisedAdaptor | This class takes unlabeled iVectors from the domain of interest and uses their mean and variance to adapt your PLDA matrices to a new domain |
CPldaUnsupervisedAdaptorConfig | |
CPlpComputer | This is the new-style interface to the PLP computation |
CPlpOptions | PlpOptions contains basic options for computing PLP features |
CRefineClusterer::point_info | |
CComputationRenumberer::PointerCompare< T > | |
CPosteriorHolder | |
CNnetComputation::PrecomputedIndexesInfo | |
CProcessPitchOptions | |
CProfiler | |
CProfileStats | |
CProfileStats::ProfileStatsEntry | |
CPrunedCompactLatticeComposer | PrunedCompactLatticeComposer implements an algorithm for pruned composition |
CPruneSpecialClass< Arc > | This class is used to implement the function PruneSpecial |
CPushSpecialClass | |
CQuestions | This class defines, for each EventKeyType, a set of initial questions that it tries and also a number of iterations for which to refine the questions to increase likelihood |
CQuestionsForKey | QuestionsForKey is a class used to define the questions for a key, and also options that allow us to refine the question during tree-building
CRandFstOptions | |
CRandomAccessTableReader< Holder > | Allows random access to a collection of objects in an archive or script file; see The Table concept |
CRandomAccessTableReader< kaldi::TokenHolder > | |
►CRandomAccessTableReaderImplBase< Holder > | |
►CRandomAccessTableReaderArchiveImplBase< Holder > | |
CRandomAccessTableReaderDSortedArchiveImpl< Holder > | |
CRandomAccessTableReaderSortedArchiveImpl< Holder > | |
CRandomAccessTableReaderUnsortedArchiveImpl< Holder > | |
CRandomAccessTableReaderScriptImpl< Holder > | |
CRandomAccessTableReaderImplBase< kaldi::TokenHolder > | |
CRandomAccessTableReaderMapped< Holder > | This class is for when you are reading something in random access, but it may actually be stored per speaker (or something similar) while the keys you're using are per utterance
CRandomizerMask | Generates a randomly ordered vector of indices
CRandomState | |
CRbmTrainOptions | |
CRecognizedWord | |
CRecyclingVector | This class serves as a storage for feature vectors with an option to limit the memory usage by removing old elements |
CRefineClusterer | |
CRefineClustersOptions | |
CRegressionTree | A regression tree is a clustering of Gaussian densities in an acoustic model, such that the group of Gaussians at each node of the tree are transformed by the same transform |
CRegtreeFmllrDiagGmm | An FMLLR (feature-space MLLR) transformation, also called CMLLR (constrained MLLR), is an affine transformation of the feature vectors
CRegtreeFmllrDiagGmmAccs | Class for computing the accumulators needed for the maximum-likelihood estimate of FMLLR transforms for an acoustic model that uses diagonal Gaussian mixture models as emission densities |
CRegtreeFmllrOptions | Configuration variables for FMLLR transforms |
CRegtreeMllrDiagGmm | An MLLR mean transformation is an affine transformation of Gaussian means |
CRegtreeMllrDiagGmmAccs | Class for computing the maximum-likelihood estimates of the parameters of an acoustic model that uses diagonal Gaussian mixture models as emission densities |
CRegtreeMllrOptions | Configuration variables for MLLR transforms |
CRemoveEpsLocalClass< Arc, ReweightPlus > | |
CRemoveSomeInputSymbolsMapper< Arc, I > | |
CProfileStats::ReverseSecondComparator | |
CReweightPlusDefault< Weight > | |
CReweightPlusLogArc | |
CRowOpsSplitter | |
CRspecifierOptions | |
CTaskSequencer< C >::RunTaskArgsList | |
►Cruntime_error | |
CKaldiFatalError | Kaldi fatal runtime error exception |
CSemaphore | |
CSequentialTableReader< Holder > | A templated class for reading objects sequentially from an archive or script file; see The Table concept (a usage sketch follows this list) |
►CSequentialTableReaderImplBase< Holder > | |
CSequentialTableReaderArchiveImpl< Holder > | |
CSequentialTableReaderBackgroundImpl< Holder > | |
CSequentialTableReaderScriptImpl< Holder > | |
CSgmm2FmllrConfig | Configuration variables needed in the estimation of FMLLR for SGMMs |
CSgmm2FmllrGlobalParams | Global adaptation parameters |
CSgmm2GauPostElement | This is the entry for a single time |
CSgmm2GselectConfig | |
CSgmm2LikelihoodCache | Sgmm2LikelihoodCache caches SGMM likelihoods at two levels: the final pdf likelihoods, and the sub-state level likelihoods, which means that with the SCTM system we can avoid redundant computation |
CSgmm2PerFrameDerivedVars | Holds the per-frame precomputed quantities x(t), x_{i}(t), z_{i}(t), and n_{i}(t) |
CSgmm2PerSpkDerivedVars | |
CSgmm2Project | |
CSgmm2SplitSubstatesConfig | |
CShiftedDeltaFeatures | |
CShiftedDeltaFeaturesOptions | |
CSimpleDecoder | Simplest possible decoder, included largely for didactic purposes and as a means to debug more highly optimized decoders |
►CSimpleObjectiveInfo | |
CPerDimObjectiveInfo | |
CFmllrRawAccs::SingleFrameStats | |
CFmllrDiagGmmAccs::SingleFrameStats | |
CRowOpsSplitter::SingleSplitInfo | |
CSingleUtteranceGmmDecoder | You will instantiate this class when you want to decode a single utterance using the online-decoding setup |
CSingleUtteranceNnet2Decoder | You will instantiate this class when you want to decode a single utterance using the online-decoding setup for neural nets |
CSingleUtteranceNnet2DecoderThreaded | You will instantiate this class when you want to decode a single utterance using the online-decoding setup for neural nets |
CSingleUtteranceNnet3DecoderTpl< FST > | You will instantiate this class when you want to decode a single utterance using the online-decoding setup for neural nets |
CSingleUtteranceNnet3IncrementalDecoderTpl< FST > | You will instantiate this class when you want to decode a single utterance using the online-decoding setup for neural nets |
CSlidingWindowCmnOptions | |
CSolverOptions | This class describes the options for maximizing various quadratic objective functions |
CSparseMatrix< Real > | |
CSparseMatrix< float > | |
CSparseVector< Real > | |
►CSpeakerStatsItf | |
CFmllrTransform::SpeakerStats | |
CSpectrogramComputer | Class for computing spectrogram features |
CSpectrogramOptions | SpectrogramOptions contains basic options for computing spectrogram features |
CSpeexOptions | |
CSphinxMatrixHolder< kFeatDim > | A class for reading/writing Sphinx format matrices |
CSplitDiscriminativeExampleConfig | Config structure for SplitExample, for splitting discriminative training examples |
CSplitDiscriminativeSupervisionOptions | |
CSplitExampleStats | This struct exists only for diagnostic purposes |
►CSplitRadixComplexFft< Real > | |
CSplitRadixRealFft< Real > | |
►CSplitRadixComplexFft< float > | |
CSplitRadixRealFft< float > | |
CPitchFrameInfo::StateInfo | |
CNnetStats::StatsElement | |
CExampleMergingStats::StatsForExampleSize | |
CStdToken | |
CStdToLatticeMapper< Real > | Class StdToLatticeMapper maps a normal arc (StdArc) to a LatticeArc by putting the StdArc weight as the first element of the LatticeWeight |
CStdVectorRandomizer< T > | Randomizes elements of a vector according to a mask |
CCompiler::StepInfo | |
CStringHasher | A hashing function object for strings |
CStringRepository< Label, StringId > | |
CComputationRenumberer::SubMatrixHasher | |
CNnetComputation::SubMatrixInfo | |
CDeterminizerStar< F >::SubsetEqual | |
CLatticeDeterminizerPruned< Weight, IntType >::SubsetEqual | |
CLatticeDeterminizer< Weight, IntType >::SubsetEqual | |
CLatticeDeterminizer< Weight, IntType >::SubsetEqualStates | |
CDeterminizerStar< F >::SubsetEqualStates | |
CLatticeDeterminizerPruned< Weight, IntType >::SubsetEqualStates | |
CDeterminizerStar< F >::SubsetKey | |
CLatticeDeterminizer< Weight, IntType >::SubsetKey | |
CLatticeDeterminizerPruned< Weight, IntType >::SubsetKey | |
CSgmm2LikelihoodCache::SubstateCacheElement | |
►CSumDescriptor | This is an abstract base-class |
CBinarySumDescriptor | BinarySumDescriptor can represent either A + B, or (A if defined, else B) |
CConstantSumDescriptor | This is an alternative base-case of SumDescriptor (an alternative to SimpleSumDescriptor) which represents a constant term |
COptionalSumDescriptor | This is the case of SumDescriptor that contains just one term, where that term is optional (an IfDefined() expression) |
CSimpleSumDescriptor | This is the normal base-case of SumDescriptor which just wraps a ForwardingDescriptor |
CSvdApplier | |
Csynapse | |
CTableComposeCache< F > | TableComposeCache lets us do multiple compositions while caching the same matcher |
CTableComposeCache< fst::Fst< fst::StdArc > > | |
►CTableMatcherOptions | TableMatcher is a matcher specialized for the case where the output side of the left FST always has either all-epsilons coming out of a state, or a majority of the symbol table |
CTableComposeOptions | |
CTableWriter< Holder > | A templated class for writing objects to an archive or script file; see The Table concept |
►CTableWriterImplBase< Holder > | |
CTableWriterArchiveImpl< Holder > | |
CTableWriterBothImpl< Holder > | |
CTableWriterScriptImpl< Holder > | |
CTarjanNode | |
CPruneSpecialClass< Arc >::Task | |
CLatticeDeterminizerPruned< Weight, IntType >::Task | |
CLatticeDeterminizerPruned< Weight, IntType >::TaskCompare | |
CTaskSequencer< C > | See the usage sketch following this list |
CTaskSequencerConfig | |
CTcpServer | |
CLatticeDeterminizer< Weight, IntType >::TempArc | |
CLatticeDeterminizerPruned< Weight, IntType >::TempArc | |
CDeterminizerStar< F >::TempArc | |
CTestFunction | |
CTestFunctor< Arc > | |
CThreadSynchronizer | Class ThreadSynchronizer acts to guard an arbitrary type of buffer between a producing and a consuming thread (note: it's all symmetric between the two thread types) |
CThrSweepStats | |
CTidToTstateMapper | |
CTimer | |
CSimpleDecoder::Token | |
CFasterDecoder::Token | |
CLatticeBiglmFasterDecoder::Token | |
CLatticeSimpleDecoder::Token | |
CBiglmFasterDecoder::Token | |
CTokenHolder | |
CLatticeIncrementalDecoderTpl< FST, Token >::TokenList | |
CLatticeSimpleDecoder::TokenList | |
CLatticeBiglmFasterDecoder::TokenList | |
CLatticeFasterDecoderTpl< FST, Token >::TokenList | |
CTokenVectorHolder | |
CTrainingGraphCompiler | |
CTrainingGraphCompilerOptions | |
CTransitionModel | |
CTreeClusterer | |
CTreeClusterOptions | |
CTreeRenderer | |
CLatticeLexiconWordAligner::Tuple | |
CLatticePhoneAligner::Tuple | |
CLatticeWordAligner::Tuple | |
CTransitionModel::Tuple | |
CLatticeWordAligner::TupleEqual | |
CLatticeLexiconWordAligner::TupleEqual | |
CLatticePhoneAligner::TupleEqual | |
CLatticeLexiconWordAligner::TupleHash | |
CLatticePhoneAligner::TupleHash | |
CLatticeWordAligner::TupleHash | |
CTwvMetrics | |
CTwvMetricsOptions | |
CTwvMetricsStats | |
CUbmClusteringOptions | |
►Cunary_function | |
CComparePair | |
CPairIsEqualComparator | |
CNnetBatchInference::UtteranceInfo | |
CNnetBatchDecoder::UtteranceInput | |
CNnetBatchDecoder::UtteranceOutput | |
CUtteranceSplitter | |
CVadEnergyOptions | |
CVariableMergingOptimizer | This class is responsible for merging matrices, although you probably want to access it via the function VariableMergingOptimization() |
►Cvector | |
CSgmm2GauPost | Indexed by time |
►CVectorBase< Real > | Provides a vector abstraction class |
CSubVector< Real > | Represents a non-allocating general vector which can be defined as a sub-vector of a higher-level vector [or as the row of a matrix] |
CVector< Real > | A class representing a vector |
CVector< double > | |
►CVectorBase< float > | |
CVector< float > | |
CStringRepository< Label, StringId >::VectorEqual | |
CVectorFstToKwsLexicographicFstMapper | |
CVectorFstTplHolder< Arc > | |
CVectorHasher< Int > | A hashing function-object for vectors |
CStringRepository< Label, StringId >::VectorKey | |
CVectorRandomizer | Randomizes elements of a vector according to a mask |
Cvocab_word | |
CWaveData | This class's purpose is to read in Wave files |
CWaveHeaderReadGofer | |
CWaveHolder | |
CWaveInfo | This class reads and holds wave file header information |
CWaveInfoHolder | |
CWordAlignedLatticeTester | |
CWordAlignLatticeLexiconInfo | This class extracts some information from the lexicon and stores it in a suitable form for the word-alignment code to use |
CWordAlignLatticeLexiconOpts | |
CWordBoundaryInfo | |
CWordBoundaryInfoNewOpts | |
CWordBoundaryInfoOpts | |
CConstArpaLmBuilder::WordsAndLmStatePairLessThan | |
CWspecifierOptions | |
CBatchedXvectorComputer::XvectorTask | |
Cbool | |
Cfloat | |
Csize_t | |
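
The hashing function-objects listed above (PairHasher, StringHasher, VectorHasher) are intended to be plugged into the standard unordered containers. Below is a minimal sketch assuming the declarations in util/stl-utils.h; the keys and values inserted are made up purely for illustration.

// Minimal sketch: Kaldi hashing function-objects with std:: unordered containers.
// The inserted keys/values are hypothetical.
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <utility>
#include <vector>
#include "base/kaldi-common.h"
#include "util/stl-utils.h"

int main() {
  using namespace kaldi;
  // Map keyed on pairs of ints, e.g. (phone, pdf-class) -> id.
  std::unordered_map<std::pair<int32, int32>, int32,
                     PairHasher<int32, int32> > pair_map;
  pair_map[std::make_pair(3, 0)] = 42;
  // Set of integer sequences, e.g. phonetic-context windows already seen.
  std::unordered_set<std::vector<int32>, VectorHasher<int32> > seen;
  seen.insert(std::vector<int32>{1, 2, 3});
  // Map keyed on strings, e.g. utterance-id -> index.
  std::unordered_map<std::string, int32, StringHasher> utt_index;
  utt_index["utt-0001"] = 0;
  return 0;
}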
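The Table classes (SequentialTableReader, RandomAccessTableReader, RandomAccessTableReaderMapped, TableWriter) are normally used through the typedefs in util/table-types.h. The sketch below shows a plausible but simplified usage pattern, not taken from any particular Kaldi binary; the archive names (feats.ark, out.ark, trans.ark, utt2spk) are placeholders.

// Minimal Table I/O sketch; the rspecifiers/wspecifiers are placeholders.
#include "base/kaldi-common.h"
#include "util/common-utils.h"

int main() {
  using namespace kaldi;
  // Sequential reading: iterate over (utterance-id, Matrix<BaseFloat>) pairs.
  SequentialBaseFloatMatrixReader feat_reader("ark:feats.ark");
  // Writer for the processed features.
  BaseFloatMatrixWriter feat_writer("ark:out.ark");
  for (; !feat_reader.Done(); feat_reader.Next()) {
    std::string utt = feat_reader.Key();
    Matrix<BaseFloat> feats(feat_reader.Value());
    feats.Scale(2.0);  // stand-in for some real per-utterance processing
    feat_writer.Write(utt, feats);
  }
  // Random access through an utterance-to-speaker map: the objects are stored
  // per speaker but looked up with utterance keys (RandomAccessTableReaderMapped).
  RandomAccessBaseFloatMatrixReaderMapped trans_reader("ark:trans.ark",
                                                       "ark:utt2spk");
  if (trans_reader.HasKey("some-utterance-id")) {
    const Matrix<BaseFloat> &trans = trans_reader.Value("some-utterance-id");
    KALDI_LOG << "Transform is " << trans.NumRows() << " x " << trans.NumCols();
  }
  return 0;
}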
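TaskSequencer<C> and TaskSequencerConfig (declared in util/kaldi-thread.h) run task objects in parallel worker threads while destroying them in submission order, so output produced from the destructor comes out in order. The sketch below is an assumed-typical usage; the MyTask class and its workload are hypothetical.

// Minimal TaskSequencer sketch; MyTask and its workload are hypothetical.
#include <iostream>
#include "base/kaldi-common.h"
#include "util/kaldi-thread.h"

class MyTask {
 public:
  explicit MyTask(kaldi::int32 i) : i_(i), result_(0) {}
  // operator() does the parallelizable work; it may run in any worker thread.
  void operator()() { result_ = i_ * i_; }
  // The destructor emits the output; TaskSequencer runs destructors in the
  // order the tasks were submitted, so the output order is deterministic.
  ~MyTask() { std::cout << i_ << " -> " << result_ << '\n'; }
 private:
  kaldi::int32 i_;
  kaldi::int32 result_;
};

int main() {
  kaldi::TaskSequencerConfig config;
  config.num_threads = 4;
  kaldi::TaskSequencer<MyTask> sequencer(config);
  for (kaldi::int32 i = 0; i < 10; i++)
    sequencer.Run(new MyTask(i));  // Run() takes ownership of the pointer
  sequencer.Wait();  // wait for all tasks; remaining destructors run in order
  return 0;
}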