►Nfst | For an extended explanation of the framework of which grammar-fsts are a part, please see Support for grammars and graphs with on-the-fly parts
►Ninternal |
►CTrivialFactorWeightFstImpl |
CElement | |
CElementEqual | |
CElementKey | |
CArcIterator< GrammarFst > | This is the overridden template for class ArcIterator for GrammarFst |
CArcIterator< TrivialFactorWeightFst< A, F > > | |
CArcticWeightTpl | |
CBackoffDeterministicOnDemandFst | This class wraps an Fst, representing a language model, using the interface for "BackoffDeterministicOnDemandFst" |
CCacheDeterministicOnDemandFst | |
►CCompactLatticeMinimizer | |
CEquivalenceSorter | |
CCompactLatticePusher | |
CCompactLatticeWeightCommonDivisorTpl | |
CCompactLatticeWeightTpl | |
CComposeDeterministicOnDemandFst | |
CDeterministicOnDemandFst | Class DeterministicOnDemandFst is an "FST-like" base-class |
CDeterminizeLatticeOptions | |
CDeterminizeLatticePhonePrunedOptions | |
CDeterminizeLatticePrunedOptions | |
►CDeterminizerStar | |
CElement | |
►CEpsilonClosure | |
CEpsilonClosureInfo | |
CPairComparator | |
CSubsetEqual | |
CSubsetEqualStates | |
CSubsetKey | |
CTempArc | |
CDfsOrderVisitor | |
►CGrammarFst | GrammarFst is an FST that is 'stitched together' from multiple FSTs, that can recursively incorporate each other |
CExpandedState | Represents an expanded state in an FstInstance |
CFstInstance | |
CGrammarFstArc | |
►CGrammarFstPreparer | |
CArcCategory | |
CIdentityFunction | |
CInverseContextFst | |
CInverseLeftBiphoneContextFst | |
►CLatticeDeterminizer | |
CElement | |
CPairComparator | |
CSubsetEqual | |
CSubsetEqualStates | |
CSubsetKey | |
CTempArc | |
►CLatticeDeterminizerPruned | |
CElement | |
COutputState | |
CPairComparator | |
CSubsetEqual | |
CSubsetEqualStates | |
CSubsetKey | |
CTask | |
CTaskCompare | |
CTempArc | |
►CLatticeStringRepository | |
CEntry | |
CEntryEqual | |
CEntryKey | |
CLatticeToStdMapper | Class LatticeToStdMapper maps a LatticeArc to a normal arc (StdArc) by adding the elements of the LatticeArc weight |
CLatticeWeightTpl | |
CLmExampleDeterministicOnDemandFst | This class is for didactic purposes; it does not really do anything
CMapInputSymbolsMapper | |
CNaturalLess< CompactLatticeWeightTpl< LatticeWeightTpl< double >, int32 > > | |
CNaturalLess< CompactLatticeWeightTpl< LatticeWeightTpl< float >, int32 > > | |
CNaturalLess< CompactLatticeWeightTpl< LatticeWeightTpl< FloatType >, IntType > > | |
CNaturalLess< LatticeWeightTpl< double > > | |
CNaturalLess< LatticeWeightTpl< float > > | |
CNaturalLess< LatticeWeightTpl< FloatType > > | |
►CPruneSpecialClass | This class is used to implement the function PruneSpecial |
CTask | |
CPushSpecialClass | |
CRandFstOptions | |
CRemoveEpsLocalClass | |
CRemoveSomeInputSymbolsMapper | |
CReweightPlusDefault | |
CReweightPlusLogArc | |
CScaleDeterministicOnDemandFst | Class ScaleDeterministicOnDemandFst takes another DeterministicOnDemandFst and scales the weights (like applying a language-model scale) |
CStateIterator< TrivialFactorWeightFst< A, F > > | |
CStdToLatticeMapper | Class StdToLatticeMapper maps a normal arc (StdArc) to a LatticeArc by putting the StdArc weight as the first element of the LatticeWeight |
►CStringRepository | |
CVectorEqual | |
CVectorKey | |
CTableComposeCache | TableComposeCache lets us do multiple compositions while caching the same matcher |
CTableComposeOptions | |
CTableMatcher | |
CTableMatcherImpl | |
CTableMatcherOptions | TableMatcher is a matcher specialized for the case where the output side of the left FST always has either all-epsilons coming out of a state, or a majority of the symbol table |
CTestFunctor | |
CTrivialFactorWeightFst | TrivialFactorWeightFst takes as template parameter a FactorIterator as defined above |
CTrivialFactorWeightOptions | |
CUnweightedNgramFst | The class UnweightedNgramFst is a DeterministicOnDemandFst whose states encode an n-gram history |
CVectorFstTplHolder | |
►Nkaldi | This code computes Goodness of Pronunciation (GOP) and extracts phone-level pronunciation features for mispronunciation detection tasks
►Ndecoder | |
CBackpointerToken | |
CForwardLink | |
CStdToken | |
►Ndifferentiable_transform | |
CAppendTransform | This is a version of the transform class that consists of a number of other transforms, appended dimension-wise– e.g |
CDifferentiableTransform | This class is for speaker-dependent feature-space transformations – principally various varieties of fMLLR, including mean-only, diagonal and block-diagonal versions – which are intended for placement in the bottleneck of a neural net |
►CFmllrTransform | Notes on the math behind differentiable fMLLR transform |
CMinibatchInfo | |
CSpeakerStats | |
CMinibatchInfoItf | |
CNoOpTransform | This is a version of the transform class that does nothing |
CSequenceTransform | This is a version of the transform class that does a sequence of other transforms, specified by other instances of the DifferentiableTransform interface |
►CSimpleMeanTransform | This version of the transform class does a mean normalization: adding an offset to its input so that the difference (per speaker) of the transformed class means from the speaker-independent class means is minimized |
CMinibatchInfo | |
CSpeakerStatsItf | |
►Ndiscriminative | |
CDiscriminativeComputation | |
CDiscriminativeObjectiveInfo | |
CDiscriminativeOptions | |
CDiscriminativeSupervision | |
►CDiscriminativeSupervisionSplitter | |
CLatticeInfo | |
CSplitDiscriminativeSupervisionOptions | |
►Nkws_internal | |
CKwScoreStats | |
CKwTermEqual | |
CKwTermLower | |
CThrSweepStats | |
►Nnnet1 | |
CAddShift | Adds shift to all the lines of the matrix (can be used for global mean normalization) |
CAffineTransform | |
CAveragePoolingComponent | AveragePoolingComponent : The input/output matrices are split into submatrices with width 'pool_stride_'
CBlockSoftmax | |
CBlstmProjected | |
►CComponent | Abstract class, building block of the network |
Ckey_value | A pair of type and marker, |
CConvolutionalComponent | ConvolutionalComponent implements convolution over a single axis (i.e
CCopyComponent | Rearrange the matrix columns according to the indices in copy_from_indices_ |
CDropout | |
CFramePoolingComponent | FramePoolingComponent : The input/output matrices are split into frames of width 'feature_dim_'
CHiddenSoftmax | |
CKlHmm | |
CLengthNormComponent | Rescale the matrix-rows to have unit length (L2-norm) |
CLinearTransform | |
CLossItf | |
CLossOptions | |
CLstmProjected | |
CMatrixBuffer | A buffer for caching (utterance-key, feature-matrix) pairs |
CMatrixBufferOptions | |
CMatrixRandomizer | Shuffles rows of a matrix according to the indices in the mask, |
CMaxPoolingComponent | MaxPoolingComponent : The input/output matrices are split into submatrices with width 'pool_stride_'
CMse | |
CMultiBasisComponent | |
CMultistreamComponent | Class MultistreamComponent is an extension of UpdatableComponent for recurrent networks, which are trained with parallel sequences |
CMultiTaskLoss | |
CNnet | |
CNnetDataRandomizerOptions | Configuration variables that affect how frame-level shuffling is done |
CNnetTrainOptions | |
CParallelComponent | |
CParametricRelu | |
CPdfPrior | |
CPdfPriorOptions | |
CRandomizerMask | Generates a randomly ordered vector of indices
CRbm | |
CRbmBase | |
CRbmTrainOptions | |
CRecurrentComponent | Component with recurrent connections, 'tanh' non-linearity |
CRescale | Rescale the data column-wise by a vector (can be used for global variance normalization) |
CSentenceAveragingComponent | Deprecated!!!, kept because Katka Zmolikova used it in JSALT 2015
CSigmoid | |
CSimpleSentenceAveragingComponent | SimpleSentenceAveragingComponent does not have a nested network; it is intended to be used inside a <ParallelComponent>
CSoftmax | |
CSplice | Splices the time context of the input features: input dim N, output dim k*N, with frame offsets o_1,o_2,...,o_k (example with 11 frames: -5 -4 -3 -2 -1 0 1 2 3 4 5)
CStdVectorRandomizer | Randomizes elements of a vector according to a mask |
CTanh | |
CUpdatableComponent | Class UpdatableComponent is a Component which has trainable parameters; it contains SGD training hyper-parameters in NnetTrainOptions
CVectorRandomizer | Randomizes elements of a vector according to a mask |
CXent | |
►Nnnet2 | |
CAdditiveNoiseComponent | This is a bit similar to dropout, but adds (rather than multiplies by) Gaussian noise with a given standard deviation
CAffineComponent | |
CAffineComponentPreconditioned | |
CAffineComponentPreconditionedOnline | Keywords: natural gradient descent, NG-SGD, naturalgradient |
CAmNnet | |
CBlockAffineComponent | |
CBlockAffineComponentPreconditioned | |
CChunkInfo | ChunkInfo is a class whose purpose is to describe the structure of matrices holding features |
CComponent | Abstract class, basic element of the network; it is a box with defined inputs, outputs, and a transformation-function interface
CConvolutional1dComponent | Convolutional1dComponent implements convolution over frequency axis |
CDctComponent | Discrete cosine transform |
CDecodableAmNnet | DecodableAmNnet is a decodable object that decodes with a neural net acoustic model of type AmNnet |
CDecodableAmNnetParallel | This version of DecodableAmNnet is intended for a version of the decoder that processes different utterances with multiple threads |
CDecodableNnet2Online | This Decodable object for class nnet2::AmNnet takes feature input from class OnlineFeatureInterface, unlike, say, class DecodableAmNnet which takes feature input from a matrix |
CDecodableNnet2OnlineOptions | |
►CDiscriminativeExampleSplitter | For each frame, judge: |
CFrameInfo | |
CDiscriminativeExamplesRepository | This struct stores neural net training examples to be used in multi-threaded training |
CDiscriminativeNnetExample | This struct is used to store the information we need for discriminative training (MMI or MPE) |
CDiscTrainParallelClass | |
CDoBackpropParallelClass | |
CDropoutComponent | This Component, if present, randomly zeroes half of the inputs and multiplies the other half by two |
CExamplesRepository | This class stores neural net training examples to be used in multi-threaded training |
CFastNnetCombiner | |
CFisherComputationClass | |
CFixedAffineComponent | FixedAffineComponent is an affine transform that is supplied at network initialization time and is not trainable |
CFixedBiasComponent | FixedBiasComponent applies a fixed per-element bias; it's similar to the AddShift component in the nnet1 setup (and only needed for nnet1 model conversion)
CFixedLinearComponent | FixedLinearComponent is a linear transform that is supplied at network initialization time and is not trainable |
CFixedScaleComponent | FixedScaleComponent applies a fixed per-element scale; it's similar to the Rescale component in the nnet1 setup (and only needed for nnet1 model conversion) |
CLimitRankClass | |
CLogSoftmaxComponent | |
CMaxoutComponent | |
CMaxpoolingComponent | MaxPoolingComponent : Maxpooling component was first used in ConvNets for selecting a representative activation in an area
CNnet | |
CNnetCombineAconfig | |
CNnetCombineConfig | Configuration class that controls neural net combination, where we combine a number of neural nets, trying to find for each layer the optimal weighted combination of the different neural-net parameters |
CNnetCombineFastConfig | Configuration class that controls neural net combination, where we combine a number of neural nets, trying to find for each layer the optimal weighted combination of the different neural-net parameters |
CNnetComputer | |
CNnetDiscriminativeStats | |
CNnetDiscriminativeUpdateOptions | |
CNnetDiscriminativeUpdater | |
CNnetEnsembleTrainer | |
CNnetEnsembleTrainerConfig | |
CNnetExample | NnetExample is the input data and corresponding label (or labels) for one or more frames of input, used for standard cross-entropy training of neural nets (and possibly for other objective functions) |
CNnetExampleBackgroundReader | |
CNnetFixConfig | |
CNnetLimitRankOpts | |
CNnetMixupConfig | |
CNnetOnlineComputer | |
CNnetRescaleConfig | |
CNnetRescaler | |
CNnetShrinkConfig | Configuration class that controls neural net "shrinkage" which is actually a scaling on the parameters of each of the updatable layers |
CNnetSimpleTrainerConfig | |
►CNnetStats | |
CStatsElement | |
CNnetStatsConfig | |
CNnetUpdater | |
CNnetWidenConfig | Configuration class that controls neural net "widening", which means increasing the dimension of the hidden layers of an already-trained neural net |
CNonlinearComponent | This kind of Component is a base-class for things like sigmoid and softmax |
CNormalizeComponent | |
COnlinePreconditioner | Keywords for search: natural gradient, naturalgradient, NG-SGD |
COnlinePreconditionerSimple | |
CPermuteComponent | PermuteComponent does a permutation of the dimensions (by default, a fixed random permutation, but it may be specified) |
CPnormComponent | |
CPowerComponent | Take the absolute values of an input vector to a power
CRandomComponent | |
CRectifiedLinearComponent | |
CScaleComponent | |
CSigmoidComponent | |
CSoftHingeComponent | |
CSoftmaxComponent | |
CSpliceComponent | Splices a context window of frames together [over time] |
CSpliceMaxComponent | This is as SpliceComponent but outputs the max of any of the inputs (taking the max across time) |
CSplitDiscriminativeExampleConfig | Config structure for SplitExample, for splitting discriminative training examples |
CSplitExampleStats | This struct exists only for diagnostic purposes |
CSumGroupComponent | |
CTanhComponent | |
CUpdatableComponent | Class UpdatableComponent is a Component which has trainable parameters and contains some global parameters for stochastic gradient descent (learning rate, L2 regularization constant) |
►Nnnet3 | |
►Ntime_height_convolution | |
►CConvolutionComputation | This struct represents the structure of a convolution computation |
CConvolutionStep | |
CConvolutionComputationIo | |
CConvolutionComputationOptions | This struct contains options for compiling the convolutional computation |
►CConvolutionModel | This comment explains the basic framework used for everything related to time-height convolution |
COffset | |
CAccess | |
CAffineComponent | |
CAmNnetSimple | |
CAnalyzer | This struct exists to set up various pieces of analysis; it helps avoid the repetition of code where we compute all these things in sequence |
CBackpropTruncationComponent | |
CBackpropTruncationComponentPrecomputedIndexes | |
►CBatchedXvectorComputer | |
CXvectorTask | |
CBatchedXvectorComputerOptions | |
►CBatchNormComponent | |
CMemo | |
CBinarySumDescriptor | BinarySumDescriptor can represent either A + B, or (A if defined, else B) |
CBlockAffineComponent | This class implements an affine transform using a block diagonal matrix e.g., one whose weight matrix is all zeros except for blocks on the diagonal |
CBlockFactorizedTdnnComponent | BlockFactorizedTdnnComponent is a modified form of TdnnComponent (which inherits from TdnnComponent) that is inspired by quaternion-based neural networks, but is more general and trainable; the idea is that blocks of parameters are linear functions of a smaller number of parameters, where the linear function itself is trainable
CCachingOptimizingCompiler | This class enables you to do the compilation and optimization in one call, and also ensures that if the ComputationRequest is identical to the previous one, the compilation process is not repeated |
CCachingOptimizingCompilerOptions | |
CChainExampleMerger | This class is responsible for arranging examples in groups that have the same structure (i.e
CChainObjectiveInfo | |
CCheckComputationOptions | |
CChunkInfo | |
CChunkTimeInfo | Struct ChunkTimeInfo is used by class UtteranceSplitter to output information about how we split an utterance into chunks |
CCindexHasher | |
CCindexSet | |
CCindexVectorHasher | |
CClipGradientComponent | |
CCollapseModelConfig | Config class for the CollapseModel function |
CCommandAttributes | |
CCommandPairComparator | |
CComparePair | |
►CCompiler | This class creates an initial version of the NnetComputation, without any optimization or sharing of matrices |
CStepInfo | |
CCompilerOptions | |
CComponent | Abstract base-class for neural-net components |
CComponentPrecomputedIndexes | |
CCompositeComponent | CompositeComponent is a component representing a sequence of [simple] components |
CComputationAnalysis | This class performs various kinds of specific analysis on top of what class Analyzer gives you immediately |
CComputationCache | Class ComputationCache is used inside class CachingOptimizingCompiler to cache previously computed computations |
CComputationChecker | |
CComputationExpander | |
CComputationGraph | The first step in compilation is to turn the ComputationSpecification into a ComputationGraph, where for each Cindex we have a list of other Cindexes that it depends on |
►CComputationGraphBuilder | An abstract representation of a set of Cindexes |
CCindexInfo | |
CComputationLoopedOptimizer | |
►CComputationRenumberer | |
CPointerCompare | |
CSubMatrixHasher | |
CComputationRequest | |
CComputationRequestHasher | |
CComputationRequestPtrEqual | |
CComputationStepsComputer | This class arranges the cindex_ids of the computation into a sequence of lists called "steps", which will correspond roughly to the commands in the compiled computation |
CComputationVariables | This class relates the matrices and sub-matrices in the computation to imaginary "variables", such that we can think of the operations as operating on sets of individual variables, and we can then do analysis that lets us do optimization |
CConstantComponent | |
CConstantFunctionComponent | |
CConstantSumDescriptor | This is an alternative base-case of SumDescriptor (an alternative to SimpleSumDescriptor) which represents a constant term, e.g |
CConvolutionComponent | WARNING, this component is deprecated in favor of TimeHeightConvolutionComponent, and will be deleted |
CDecodableAmNnetLoopedOnline | |
CDecodableAmNnetSimple | |
CDecodableAmNnetSimpleLooped | |
CDecodableAmNnetSimpleParallel | |
CDecodableNnetLoopedOnline | |
CDecodableNnetLoopedOnlineBase | |
CDecodableNnetSimple | |
CDecodableNnetSimpleLooped | |
CDecodableNnetSimpleLoopedInfo | When you instantiate class DecodableNnetSimpleLooped, you should give it a const reference to this class, that has been previously initialized |
►CDerivativeTimeLimiter | |
CMatrixPruneInfo | |
CDescriptor | |
CDiscriminativeExampleMerger | This class is responsible for arranging examples in groups that have the same structure (i.e
CDiscriminativeObjectiveFunctionInfo | |
CDistributeComponent | This Component takes a larger input-dim than output-dim, where the input-dim must be a multiple of the output-dim, and distributes different blocks of the input dimension to different 'x' values |
CDistributeComponentPrecomputedIndexes | |
CDropoutComponent | |
CDropoutMaskComponent | |
CElementwiseProductComponent | |
CExampleGenerationConfig | |
CExampleMerger | This class is responsible for arranging examples in groups that have the same structure (i.e
►CExampleMergingConfig | |
CIntSet | |
►CExampleMergingStats | This class is responsible for storing, and displaying in log messages, statistics about how examples of different sizes (c.f |
CStatsForExampleSize | |
CFixedAffineComponent | FixedAffineComponent is an affine transform that is supplied at network initialization time and is not trainable |
CFixedBiasComponent | FixedBiasComponent applies a fixed per-element bias; it's similar to the AddShift component in the nnet1 setup (and only needed for nnet1 model conversion)
CFixedScaleComponent | FixedScaleComponent applies a fixed per-element scale; it's similar to the Rescale component in the nnet1 setup (and only needed for nnet1 model conversion) |
CForwardingDescriptor | A ForwardingDescriptor describes how we copy data from another NetworkNode, or from multiple other NetworkNodes, possibly with a scalar weight |
CGeneralDescriptor | This class is only used when parsing Descriptors |
CGeneralDropoutComponent | GeneralDropoutComponent implements dropout, including a continuous variant where the thing we multiply is not just zero or one, but may be a continuous value |
CGeneralDropoutComponentPrecomputedIndexes | |
CImageAugmentationConfig | |
CIndex | Struct Index is intended to represent the various indexes by which we number the rows of the matrices that the Components process: mainly 'n', the index of the member of the minibatch, 't', used for the frame index in speech recognition, and 'x', which is a catch-all extra index which we might use in convolutional setups or for other reasons |
CIndexHasher | |
CIndexLessNxt | |
CIndexSet | An abstract representation of a set of Indexes |
CIndexVectorHasher | |
CIoSpecification | |
CIoSpecificationHasher | |
CLinearComponent | |
CLogSoftmaxComponent | |
CLstmNonlinearityComponent | |
CMatrixAccesses | |
CMatrixExtender | |
CMaxChangeStats | |
CMaxpoolingComponent | |
►CMemoryCompressionOptimizer | This class is used in the function OptimizeMemoryCompression(), once we determine that there is some potential to do memory compression for this computation |
CMatrixCompressInfo | |
CMiscComputationInfo | |
CModelCollapser | |
CModelUpdateConsolidator | This class is responsible for consolidating the model-update part of backprop commands, for components in (e.g.) recurrent networks that need to have many separate backprop commands, into more efficient single commands operating on consolidated data in larger matrices |
CNaturalGradientAffineComponent | |
CNaturalGradientPerElementScaleComponent | NaturalGradientPerElementScaleComponent is like PerElementScaleComponent but it uses a natural gradient update for the per-element scales |
CNaturalGradientRepeatedAffineComponent | |
CNetworkNode | NetworkNode is used to represent three types of thing: either an input of the network (which pretty much just states the dimension of the input vector); a Component (e.g
CNnet | |
►CNnetBatchComputer | This class does neural net inference in a way that is optimized for GPU use: it combines chunks of multiple utterances into minibatches for more efficient computation |
CComputationGroupInfo | |
CComputationGroupKey | |
CComputationGroupKeyHasher | |
CMinibatchSizeInfo | |
CNnetBatchComputerOptions | |
►CNnetBatchDecoder | Decoder object that uses multiple CPU threads for the graph search, plus a GPU for the neural net inference (that's done by a separate NnetBatchComputer object) |
CUtteranceInput | |
CUtteranceOutput | |
►CNnetBatchInference | This class implements a simplified interface to class NnetBatchComputer, which is suitable for programs like 'nnet3-compute' where you want to support fast GPU-based inference on a sequence of utterances, and get them back from the object in the same order |
CUtteranceInfo | |
CNnetChainComputeProb | This class is for computing objective-function values in a nnet3+chain setup, for diagnostics |
CNnetChainExample | NnetChainExample is like NnetExample, but specialized for lattice-free (chain) training |
CNnetChainExampleStructureCompare | This comparator object compares just the structural aspects of the NnetChainExample without looking at the value of the features |
CNnetChainExampleStructureHasher | This hashing object hashes just the structural aspects of the NnetExample without looking at the value of the features |
CNnetChainSupervision | |
CNnetChainTrainer | This class is for single-threaded training of neural nets using the 'chain' model |
CNnetChainTrainingOptions | |
►CNnetComputation | |
CCommand | |
CMatrixDebugInfo | |
CMatrixInfo | |
CPrecomputedIndexesInfo | |
CSubMatrixInfo | |
CNnetComputationPrintInserter | |
CNnetComputeOptions | |
CNnetComputeProb | This class is for computing cross-entropy and accuracy values in a neural network, for diagnostics |
CNnetComputeProbOptions | |
►CNnetComputer | Class NnetComputer is responsible for executing the computation described in the "computation" object |
CCommandDebugInfo | |
CNnetComputerFromEg | |
CNnetDiscriminativeComputeObjf | This class is for computing objective-function values in a nnet3 discriminative training, for diagnostics |
CNnetDiscriminativeExample | NnetDiscriminativeExample is like NnetExample, but specialized for sequence training |
CNnetDiscriminativeExampleStructureCompare | This comparator object compares just the structural aspects of the NnetDiscriminativeExample without looking at the value of the features |
CNnetDiscriminativeExampleStructureHasher | This hashing object hashes just the structural aspects of the NnetExample without looking at the value of the features |
CNnetDiscriminativeOptions | |
CNnetDiscriminativeSupervision | |
CNnetDiscriminativeTrainer | This class is for single-threaded discriminative training of neural nets |
CNnetExample | NnetExample is the input data and corresponding label (or labels) for one or more frames of input, used for standard cross-entropy training of neural nets (and possibly for other objective functions) |
CNnetExampleStructureCompare | This comparator object compares just the structural aspects of the NnetExample without looking at the value of the features |
CNnetExampleStructureHasher | This hashing object hashes just the structural aspects of the NnetExample without looking at the value of the features |
CNnetGenerationOptions | |
CNnetInferenceTask | Class NnetInferenceTask represents a chunk of an utterance that is requested to be computed |
CNnetIo | |
CNnetIoStructureCompare | This comparison object compares just the structural aspects of the NnetIo object (name, indexes, feature dimension) without looking at the value of features |
CNnetIoStructureHasher | This hashing object hashes just the structural aspects of the NnetIo object (name, indexes, feature dimension) without looking at the value of features |
CNnetLdaStatsAccumulator | |
CNnetOptimizeOptions | |
CNnetSimpleComputationOptions | |
CNnetSimpleLoopedComputationOptions | |
CNnetTrainer | This class is for single-threaded training of neural nets using standard objective functions such as cross-entropy (implemented with logsoftmax nonlinearity and a linear objective function) and quadratic loss |
CNnetTrainerOptions | |
CNonlinearComponent | |
CNoOpComponent | NoOpComponent just duplicates its input |
CNormalizeComponent | |
CObjectiveFunctionInfo | |
COffsetForwardingDescriptor | Offsets in 't' and 'x' values of other ForwardingDescriptors |
COnlineNaturalGradient | Keywords for search: natural gradient, naturalgradient, NG-SGD |
COnlineNaturalGradientSimple | |
COptionalSumDescriptor | This is the case of class SumDescriptor, in which we contain just one term, and that term is optional (an IfDefined() expression) |
CPairIsEqualComparator | |
CPerDimObjectiveInfo | |
CPerElementOffsetComponent | |
CPerElementScaleComponent | PerElementScaleComponent scales each dimension of its input with a separate trainable scale; it's like a linear component with a diagonal matrix |
CPermuteComponent | PermuteComponent changes the order of the columns (i.e |
CPnormComponent | |
CRandomComponent | |
CRectifiedLinearComponent | |
CRepeatedAffineComponent | |
CReplaceIndexForwardingDescriptor | This ForwardingDescriptor modifies the indexes (n, t, x) by replacing one of them (normally t) with a constant value and keeping the rest |
►CRestrictedAttentionComponent | RestrictedAttentionComponent implements an attention model with restricted temporal context |
CMemo | |
CPrecomputedIndexes | |
CRoundingForwardingDescriptor | For use in clockwork RNNs and the like, this forwarding-descriptor rounds the time-index t down to the closest t' <= t that is an exact multiple of t_modulus_
►CRowOpsSplitter | |
CMultiIndexSplitInfo | |
CSingleSplitInfo | |
CScaleAndOffsetComponent | |
CSigmoidComponent | |
CSimpleForwardingDescriptor | SimpleForwardingDescriptor is the base-case of ForwardingDescriptor, consisting of a source node in the graph with a given scalar weight (which will in the normal case be 1.0) |
CSimpleObjectiveInfo | |
CSimpleSumDescriptor | This is the normal base-case of SumDescriptor which just wraps a ForwardingDescriptor |
CSoftmaxComponent | |
CSpecAugmentTimeMaskComponent | SpecAugmentTimeMaskComponent implements the time part of SpecAugment |
CSpecAugmentTimeMaskComponentPrecomputedIndexes | |
CStatisticsExtractionComponent | |
CStatisticsExtractionComponentPrecomputedIndexes | |
CStatisticsPoolingComponent | |
CStatisticsPoolingComponentPrecomputedIndexes | |
CSumBlockComponent | SumBlockComponent sums over blocks of its input: for instance, if you create one with the config "input-dim=400 output-dim=100", its output will be the sum over the 4 100-dimensional blocks of the input |
CSumDescriptor | This is an abstract base-class |
CSumGroupComponent | SumGroupComponent is used to sum up groups of posteriors |
►CSvdApplier | |
CModifiedComponentInfo | |
CSwitchingForwardingDescriptor | Chooses from different inputs based on the time index modulo (the number of ForwardingDescriptors given as inputs)
CTanhComponent | |
CTarjanNode | |
►CTdnnComponent | TdnnComponent is a more memory-efficient alternative to manually splicing several frames of input and then using a NaturalGradientAffineComponent or a LinearComponent |
CPrecomputedIndexes | |
►CTimeHeightConvolutionComponent | TimeHeightConvolutionComponent implements 2-dimensional convolution where one of the dimensions of convolution (which traditionally would be called the width axis) is identified with time (i.e |
CPrecomputedIndexes | |
CUpdatableComponent | Class UpdatableComponent is a Component which has trainable parameters; it extends the interface of Component |
CUtteranceSplitter | |
CVariableMergingOptimizer | This class is responsible for merging matrices, although you probably want to access it via the function VariableMergingOptimization()
►Nsparse_vector_utils | |
CCompareFirst | |
CAccumAmDiagGmm | |
CAccumDiagGmm | |
CAccumFullGmm | Class for computing the maximum-likelihood estimates of the parameters of a Gaussian mixture model |
CAccumulateMultiThreadedClass | |
CAccumulateTreeStatsInfo | |
CAccumulateTreeStatsOptions | |
CActivePath | |
CAffineXformStats | |
CAgglomerativeClusterer | Necessary mechanisms for the actual clustering algorithm |
CAhcCluster | AhcCluster is the cluster object for the agglomerative clustering |
CAlignConfig | |
CAlignedTermsPair | |
CAmDiagGmm | |
CAmSgmm2 | Class for definition of the subspace Gmm acoustic model |
CArbitraryResample | Class ArbitraryResample allows you to resample a signal (assumed zero outside the sample region, not periodic) at arbitrary specified time values, which don't have to be linearly spaced |
CArcPosteriorComputer | |
CArpaFileParser | ArpaFileParser is an abstract base class for ARPA LM file conversion |
CArpaLine | |
CArpaLmCompiler | |
CArpaLmCompilerImpl | |
CArpaLmCompilerImplInterface | |
CArpaParseOptions | Options that control ArpaFileParser |
Cbasic_filebuf | |
Cbasic_pipebuf | |
CBasicHolder | BasicHolder is valid for float, double, bool, and integer types |
CBasicPairVectorHolder | BasicPairVectorHolder is a Holder for a vector of pairs of a basic type, e.g |
CBasicVectorHolder | A Holder for a vector of basic types, e.g |
CBasicVectorVectorHolder | BasicVectorVectorHolder is a Holder for a vector of vector of a basic type, e.g |
CBasisFmllrAccus | Stats for fMLLR subspace estimation |
CBasisFmllrEstimate | Estimation functions for basis fMLLR |
CBasisFmllrOptions | |
►CBiglmFasterDecoder | This is as FasterDecoder, but does online composition between HCLG and the "difference language model", which is a deterministic FST that represents the difference between the language model you want and the language model you compiled HCLG with |
CToken | |
CBiglmFasterDecoderOptions | |
CBottomUpClusterer | |
CClatRescoreTuple | |
CClusterable | |
CClusterKMeansOptions | |
CCompactLatticeHolder | |
CCompactLatticeToKwsProductFstMapper | |
CCompareFirstMemberOfPair | Comparator object for pairs that compares only the first member of the pair
CComparePosteriorByPdfs | |
CCompareReverseSecond | |
CCompartmentalizedBottomUpClusterer | |
CCompBotClustElem | |
CComposeLatticePrunedOptions | |
CCompressedAffineXformStats | |
►CCompressedMatrix | |
CGlobalHeader | |
CPerColHeader | |
CComputeNormalizersClass | |
CConfigLine | This class is responsible for parsing input like hi-there xx=yyy a=b c empty= f-oo=Append(bar, sss) ba_z=123 bing='a b c' baz="a b c d='a b' e" and giving you access to the fields, in this case |
CConstantEventMap | |
CConstArpaLm | |
►CConstArpaLmBuilder | |
CWordsAndLmStatePairLessThan | |
CConstArpaLmDeterministicFst | This class wraps a ConstArpaLm format language model with the interface defined in DeterministicOnDemandFst |
CConstIntegerSet | |
CContextDependency | |
CContextDependencyInterface | Context-dep-itf.h provides a link between the tree-building code in ../tree/, and the FST code in ../fstext/ (particularly, ../fstext/context-dep.h) |
CCountStats | |
CCovarianceStats | |
CCuAllocatorOptions | |
CCuArray | Class CuArray represents a vector of an integer or struct of type T |
CCuArrayBase | Class CuArrayBase, CuSubArray and CuArray are analogues of classes CuVectorBase, CuSubVector and CuVector, except that they are intended to store things other than float/double: they are intended to store integers or small structs |
►CCuBlockMatrix | The class CuBlockMatrix holds a vector of objects of type CuMatrix, say, M_1, M_2, |
CBlockMatrixData | |
CCuCompressedMatrix | Class CuCompressedMatrix, templated on an integer type (expected to be one of: int8, uint8, int16, uint16), this provides a way to approximate a CuMatrix in a more memory-efficient format |
CCuCompressedMatrixBase | Class CuCompressedMatrixBase is an abstract base class that allows you to compress a matrix of type CuMatrix<BaseFloat> |
CCuMatrix | This class represents a matrix that's stored on the GPU if we have one, and in memory if not |
CCuMatrixBase | Matrix for CUDA computing |
CCuPackedMatrix | Matrix for CUDA computing |
CCuRand | |
CCuSparseMatrix | |
CCuSpMatrix | |
CCuSubArray | |
CCuSubMatrix | This class is used for a piece of a CuMatrix |
CCuSubVector | |
CCuTpMatrix | |
CCuValue | The following class is used to simulate non-const references to Real, e.g |
CCuVector | |
CCuVectorBase | Vector for CUDA computing |
CDecisionTreeSplitter | |
CDecodableAmDiagGmm | |
CDecodableAmDiagGmmRegtreeFmllr | |
CDecodableAmDiagGmmRegtreeMllr | |
CDecodableAmDiagGmmScaled | |
►CDecodableAmDiagGmmUnmapped | DecodableAmDiagGmmUnmapped is a decodable object that takes indices that correspond to pdf-id's plus one |
CLikelihoodCacheRecord | Defines a cache record for a state |
CDecodableAmSgmm2 | |
CDecodableAmSgmm2Scaled | |
CDecodableDiagGmmScaledOnline | |
CDecodableInterface | DecodableInterface provides a link between the (acoustic-modeling and feature-processing) code and the decoder |
CDecodableMapped | |
CDecodableMatrixMapped | This is like DecodableMatrixScaledMapped, but it doesn't support an acoustic scale, and it does support a frame offset, whereby you can state that the first row of 'likes' is actually the n'th row of the matrix of available log-likelihoods |
CDecodableMatrixMappedOffset | This decodable class returns log-likes stored in a matrix; it supports repeatedly writing to the matrix and setting a time-offset representing the frame-index of the first row of the matrix |
CDecodableMatrixScaled | |
CDecodableMatrixScaledMapped | |
CDecodableSum | |
CDecodableSumScaled | |
CDecodeUtteranceLatticeFasterClass | This class basically does the same job as the function DecodeUtteranceLatticeFaster, but in a way that allows us to build a multi-threaded command line program more easily |
CDeltaFeatures | |
CDeltaFeaturesOptions | |
CDeterminizeLatticeTask | |
CDiagGmm | Definition for Gaussian Mixture Model with diagonal covariances |
CDiagGmmNormal | Definition for Gaussian Mixture Model with diagonal covariances in normal mode: where the parameters are stored as means and variances (instead of the exponential form that the DiagGmm class is stored as) |
CDummyOptions | |
CEbwAmSgmm2Options | This header implements a form of Extended Baum-Welch training for SGMMs |
CEbwAmSgmm2Updater | |
CEbwOptions | |
CEbwUpdatePhoneVectorsClass | |
CEbwWeightOptions | |
CEigenvalueDecomposition | |
Cerror_stats | |
CEventMap | A class that is capable of representing a generic mapping from EventType (which is a vector of (key, value) pairs) to EventAnswerType which is just an integer |
CEventMapVectorEqual | |
CEventMapVectorHash | |
CExampleClass | |
CExampleFeatureComputer | This class is only added for documentation; it is not intended to ever be used
CExampleFeatureComputerOptions | This class is only added for documentation; it is not intended to ever be used
►CFasterDecoder | |
CToken | |
CFasterDecoderOptions | |
CFbankComputer | Class for computing mel-filterbank features; see Computing MFCC features for more information |
CFbankOptions | FbankOptions contains basic options for computing filterbank features |
CFeatureTransformEstimate | Class for computing a feature transform used for preconditioning of the training data in neural-networks |
CFeatureTransformEstimateMulti | |
CFeatureTransformEstimateOptions | |
CFeatureWindowFunction | |
CFileInputImpl | |
CFileOutputImpl | |
►CFmllrDiagGmmAccs | This does not work with multiple feature transforms |
CSingleFrameStats | |
CFmllrOptions | |
►CFmllrRawAccs | |
CSingleFrameStats | |
CFmllrRawOptions | |
CFmllrSgmm2Accs | Class for computing the accumulators needed for the maximum-likelihood estimate of FMLLR transforms for a subspace GMM acoustic model |
CFmpe | |
CFmpeOptions | |
CFmpeStats | |
CFmpeUpdateOptions | |
CFrameExtractionOptions | |
CFullGmm | Definition for Gaussian Mixture Model with full covariances |
CFullGmmNormal | Definition for Gaussian Mixture Model with full covariances in normal mode: where the parameters are stored as means and variances (instead of the exponential form that the FullGmm class is stored as) |
CGaussClusterable | GaussClusterable wraps Gaussian statistics in a form accessible to generic clustering algorithms |
CGaussInfo | |
CGaussPostHolder | |
CGeneralMatrix | This class is a wrapper that enables you to store a matrix in one of three forms: either as a Matrix<BaseFloat>, or a CompressedMatrix, or a SparseMatrix<BaseFloat> |
CGenericHolder | GenericHolder serves to document the requirements of the Holder interface; it's not intended to be used |
►CHashList | |
CElem | |
CHashBucket | |
CHmmCacheHash | |
►CHmmTopology | A class for storing topology information for phones |
CHmmState | A structure defined inside HmmTopology to represent a HMM state |
CHtkHeader | A structure containing the HTK header |
CHtkMatrixHolder | |
CHTransducerConfig | Configuration class for the GetHTransducer() function; see The HTransducerConfig configuration class for context |
CInput | |
CInputImplBase | |
CInt32AndFloat | |
CInt32IsZero | |
CInterval | |
CIvectorEstimationOptions | |
CIvectorExtractor | |
CIvectorExtractorComputeDerivedVarsClass | |
CIvectorExtractorEstimationOptions | Options for training the IvectorExtractor, e.g. variance flooring |
CIvectorExtractorOptions | |
CIvectorExtractorStats | IvectorExtractorStats is a class used to update the parameters of the ivector extractor |
CIvectorExtractorStatsOptions | Options for IvectorExtractorStats, which is used to update the parameters of IvectorExtractor |
CIvectorExtractorUpdateProjectionClass | |
CIvectorExtractorUpdateWeightClass | |
CIvectorExtractorUtteranceStats | These are the stats for a particular utterance, i.e |
CIvectorExtractTask | |
CIvectorTask | |
CKaldiFatalError | Kaldi fatal runtime error exception |
CKaldiObjectHolder | KaldiObjectHolder works for Kaldi objects that have the "standard" Read and Write functions, and a copy constructor |
CKaldiRnnlmWrapper | |
CKaldiRnnlmWrapperOpts | |
CKwsAlignment | |
CKwsProductFstToKwsLexicographicFstMapper | |
CKwsTerm | |
CKwsTermsAligner | |
CKwsTermsAlignerOptions | |
CLatticeArcRecord | This is used in CompactLatticeLimitDepth |
►CLatticeBiglmFasterDecoder | This is as LatticeFasterDecoder, but does online composition between HCLG and the "difference language model", which is a deterministic FST that represents the difference between the language model you want and the language model you compiled HCLG with |
CForwardLink | |
CToken | |
CTokenList | |
CLatticeFasterDecoderConfig | |
►CLatticeFasterDecoderTpl | This is the "normal" lattice-generating decoder |
CTokenList | |
►CLatticeFasterOnlineDecoderTpl | LatticeFasterOnlineDecoderTpl is as LatticeFasterDecoderTpl but also supports an efficient way to get the best path (see the function BestPathEnd()), which is useful in endpointing and in situations where you might want to frequently access the best path |
CBestPathIterator | |
CLatticeHolder | |
CLatticeIncrementalDecoderConfig | The normal decoder, lattice-faster-decoder.h, sometimes has an issue in real-time applications with long utterances: each time you get the lattice, the lattice determinization can take a considerable amount of time, which introduces latency
►CLatticeIncrementalDecoderTpl | This is an extension to the "normal" lattice-generating decoder
CTokenList | |
CLatticeIncrementalDeterminizer | This class is used inside LatticeIncrementalDecoderTpl; it handles some of the details of incremental determinization |
►CLatticeIncrementalOnlineDecoderTpl | LatticeIncrementalOnlineDecoderTpl is as LatticeIncrementalDecoderTpl but also supports an efficient way to get the best path (see the function BestPathEnd()), which is useful in endpointing and in situations where you might want to frequently access the best path |
CBestPathIterator | |
►CLatticeLexiconWordAligner | |
CComputationState | |
CTuple | |
CTupleEqual | |
CTupleHash | |
►CLatticePhoneAligner | |
CComputationState | |
CTuple | |
CTupleEqual | |
CTupleHash | |
CLatticeReader | LatticeReader provides (static) functions for reading both Lattice and CompactLattice, in text form |
►CLatticeSimpleDecoder | Simplest possible decoder, included largely for didactic purposes and as a means to debug more highly optimized decoders |
CForwardLink | |
CToken | |
CTokenList | |
CLatticeSimpleDecoderConfig | |
►CLatticeWordAligner | |
CComputationState | |
CTuple | |
CTupleEqual | |
CTupleHash | |
CLbfgsOptions | This is an implementation of L-BFGS |
CLdaEstimate | Class for computing linear discriminant analysis (LDA) transform |
CLdaEstimateOptions | |
CLinearCgdOptions | |
CLinearResample | LinearResample is a special case of ArbitraryResample, where we want to resample a signal at linearly spaced intervals (this means we want to upsample or downsample the signal) |
CLinearVtln | |
►CLmState | |
CChildrenVectorLessThan | |
CChildType | |
CLogisticRegression | |
CLogisticRegressionConfig | |
CLogMessageEnvelope | Log message severity and source location info |
CMapDiagGmmOptions | Configuration variables for Maximum A Posteriori (MAP) update |
CMapTransitionUpdateConfig | |
CMatrix | A class for storing matrices |
CMatrixBase | Base class which provides matrix operations not involving resizing or allocation |
CMelBanks | |
CMelBanksOptions | |
►CMessageLogger | |
CLog | |
CLogAndThrow | |
CMfccComputer | |
CMfccOptions | MfccOptions contains basic options for computing MFCC features |
►CMinimumBayesRisk | This class does the word-level Minimum Bayes Risk computation, and gives you either the 1-best MBR output together with the expected Bayes Risk, or a sausage-like structure |
CArc | |
CGammaCompare | |
CMinimumBayesRiskOptions | The implementation of the Minimum Bayes Risk decoding method described in "Minimum Bayes Risk decoding and system combination based on a recursion for edit distance" (Haihua Xu, Daniel Povey, Lidia Mangu and Jie Zhu, Computer Speech and Language, 2011); this is a slightly more principled way to do Minimum Bayes Risk (MBR) decoding than the standard "Confusion Network" method
CMleAmSgmm2Accs | Class for the accumulators associated with the phonetic-subspace model parameters |
CMleAmSgmm2Options | Configuration variables needed in the SGMM estimation process |
CMleAmSgmm2Updater | |
CMleDiagGmmOptions | Configuration variables like variance floor, minimum occupancy, etc |
CMleFullGmmOptions | Configuration variables like variance floor, minimum occupancy, etc |
CMleSgmm2SpeakerAccs | Class for the accumulators required to update the speaker vectors v_s |
CMleTransitionUpdateConfig | |
CMlltAccs | A class for estimating Maximum Likelihood Linear Transform, also known as global Semi-tied Covariance (STC), for GMMs |
CMultiThreadable | |
CMultiThreader | |
CMyTaskClass | |
CMyThreadClass | |
CNccfInfo | |
CNGram | A parsed n-gram from ARPA LM file |
CNumberIstream | |
COfflineFeatureTpl | This templated class is intended for offline feature extraction, i.e |
COffsetFileInputImpl | |
COnlineAppendFeature | This online-feature class implements combination of two feature streams (such as pitch, plp) into one stream |
COnlineAudioSourceItf | |
COnlineBaseFeature | Add a virtual class for "source" features such as MFCC or PLP or pitch features |
COnlineCacheFeature | This feature type can be used to cache its input, to avoid repetition of computation in a multi-pass decoding context |
COnlineCacheInput | |
COnlineCmnInput | |
COnlineCmvn | This class does an online version of the cepstral mean and [optionally] variance normalization, but note that this is not equivalent to the offline version
COnlineCmvnOptions | |
COnlineCmvnState | Struct OnlineCmvnState stores the state of CMVN adaptation between utterances (but not the state of the computation within an utterance) |
COnlineDecodableDiagGmmScaled | |
COnlineDeltaFeature | |
COnlineDeltaInput | |
COnlineEndpointConfig | |
COnlineEndpointRule | This header contains a simple facility for endpointing, that should be used in conjunction with the "online2" online decoding code; see ../online2bin/online2-wav-gmm-latgen-faster-endpoint.cc |
COnlineFasterDecoder | |
COnlineFasterDecoderOpts | |
COnlineFeatInputItf | |
COnlineFeatureInterface | OnlineFeatureInterface is an interface for online feature processing (it is also usable in the offline setting, but currently we're not using it for that) |
COnlineFeatureMatrix | |
COnlineFeatureMatrixOptions | |
COnlineFeaturePipeline | OnlineFeaturePipeline is a class that's responsible for putting together the various stages of the feature-processing pipeline, in an online setting |
COnlineFeaturePipelineCommandLineConfig | This configuration class is to set up OnlineFeaturePipelineConfig, which in turn is the configuration class for OnlineFeaturePipeline |
COnlineFeaturePipelineConfig | This configuration class is responsible for storing the configuration options for OnlineFeaturePipeline, but it does not set them |
COnlineFeInput | |
COnlineGenericBaseFeature | This is a templated class for online feature extraction; it's templated on a class like MfccComputer or PlpComputer that does the basic feature extraction |
COnlineGmmAdaptationState | |
COnlineGmmDecodingAdaptationPolicyConfig | This configuration class controls when to re-estimate the basis-fMLLR during online decoding |
COnlineGmmDecodingConfig | |
COnlineGmmDecodingModels | This class is used to read, store and give access to the models used for 3 phases of decoding (first-pass with online-CMN features; the ML models used for estimating transforms; and the discriminatively trained models) |
COnlineIvectorEstimationStats | This class helps us to efficiently estimate iVectors in situations where the data is coming in frame by frame |
COnlineIvectorExtractionConfig | This class includes configuration variables relating to the online iVector extraction, but not including configuration for the "base feature", i.e |
COnlineIvectorExtractionInfo | This struct contains various things that are needed (as const references) by class OnlineIvectorExtractor |
COnlineIvectorExtractorAdaptationState | This class stores the adaptation state from the online iVector extractor, which can help you to initialize the adaptation state for the next utterance of the same speaker in a more informed way |
COnlineIvectorFeature | OnlineIvectorFeature is an online feature-extraction class that's responsible for extracting iVectors from raw features such as MFCC, PLP or filterbank |
COnlineLdaInput | |
COnlineMatrixFeature | This class takes a Matrix<BaseFloat> and wraps it as an OnlineFeatureInterface: this can be useful where some earlier stage of feature processing has been done offline but you want to use part of the online pipeline |
COnlineMatrixInput | |
COnlineNnet2DecodingConfig | |
COnlineNnet2DecodingThreadedConfig | |
COnlineNnet2FeaturePipeline | OnlineNnet2FeaturePipeline is a class that's responsible for putting together the various parts of the feature-processing pipeline for neural networks, in an online setting |
COnlineNnet2FeaturePipelineConfig | This configuration class is to set up OnlineNnet2FeaturePipelineInfo, which in turn is the configuration class for OnlineNnet2FeaturePipeline |
COnlineNnet2FeaturePipelineInfo | This class is responsible for storing configuration variables, objects and options for OnlineNnet2FeaturePipeline (including the actual LDA and CMVN-stats matrices, and the iVector extractor, which is a member of ivector_extractor_info |
COnlinePaSource | |
COnlinePitchFeature | |
COnlinePitchFeatureImpl | |
►COnlineProcessPitch | This online-feature class implements post processing of pitch features |
CNormalizationStats | |
►COnlineSilenceWeighting | |
CFrameInfo | |
COnlineSilenceWeightingConfig | |
COnlineSpeexDecoder | |
COnlineSpeexEncoder | |
COnlineSpliceFrames | |
COnlineSpliceOptions | |
COnlineTcpVectorSource | |
COnlineTimer | Class OnlineTimer is used to test real-time decoding algorithms and evaluate how long the decoding of a particular utterance would take |
COnlineTimingStats | Class OnlineTimingStats stores statistics from timing of online decoding, which will enable the Print() function to print out the average real-time factor and average delay per utterance |
COnlineTransform | This online-feature class implements any affine or linear transform |
COnlineUdpInput | |
COnlineVectorSource | |
COptimizableInterface | OptimizableInterface provides a virtual class for optimizable objects |
COptimizeLbfgs | |
COptionsItf | |
COtherReal | This class provides a way for switching between double and float types |
COtherReal< double > | A specialized class for switching from double to float |
COtherReal< float > | A specialized class for switching from float to double |
COutput | |
COutputImplBase | |
CPackedMatrix | Packed matrix: base class for triangular and symmetric matrices |
CPairHasher | A hashing function-object for pairs of ints |
►CParseOptions | The class ParseOptions is for parsing command-line options; see Parsing command-line options for more documentation |
CDocInfo | Structure for options' documentation |
CPhoneAlignLatticeOptions | |
CPipeInputImpl | |
CPipeOutputImpl | |
CPitchExtractionOptions | |
►CPitchFrameInfo | |
CStateInfo | |
CPitchInterpolator | |
CPitchInterpolatorOptions | |
CPitchInterpolatorStats | |
CPlda | |
CPldaConfig | |
CPldaEstimationConfig | |
CPldaEstimator | |
►CPldaStats | |
CClassInfo | |
CPldaUnsupervisedAdaptor | This class takes unlabeled iVectors from the domain of interest and uses their mean and variance to adapt your PLDA matrices to a new domain |
CPldaUnsupervisedAdaptorConfig | |
CPlpComputer | This is the new-style interface to the PLP computation |
CPlpOptions | PlpOptions contains basic options for computing PLP features |
CPosteriorHolder | |
CProcessPitchOptions | |
CProfiler | |
►CProfileStats | |
CProfileStatsEntry | |
CReverseSecondComparator | |
►CPrunedCompactLatticeComposer | PrunedCompactLatticeComposer implements an algorithm for pruned composition |
CComposedStateInfo | |
CLatticeStateInfo | |
CQuestions | This class defines, for each EventKeyType, a set of initial questions that it tries and also a number of iterations for which to refine the questions to increase likelihood |
CQuestionsForKey | QuestionsForKey is a class used to define the questions for a key, and also options that allow us to refine the question during tree-building (i.e |
CRandomAccessTableReader | Allows random access to a collection of objects in an archive or script file; see The Table concept |
CRandomAccessTableReaderArchiveImplBase | |
CRandomAccessTableReaderDSortedArchiveImpl | |
CRandomAccessTableReaderImplBase | |
CRandomAccessTableReaderMapped | This class is for when you are reading something in random access, where it may actually be stored per-speaker (or something similar) but the keys you're using are per utterance
CRandomAccessTableReaderScriptImpl | |
►CRandomAccessTableReaderSortedArchiveImpl | |
CPairCompare | |
CRandomAccessTableReaderUnsortedArchiveImpl | |
CRandomState | |
CRecognizedWord | |
CRecyclingVector | This class serves as a storage for feature vectors with an option to limit the memory usage by removing old elements |
►CRefineClusterer | |
Cpoint_info | |
CRefineClustersOptions | |
CRegressionTree | A regression tree is a clustering of Gaussian densities in an acoustic model, such that the group of Gaussians at each node of the tree are transformed by the same transform |
CRegtreeFmllrDiagGmm | An FMLLR (feature-space MLLR) transformation, also called CMLLR (constrained MLLR) is an affine transformation of the feature vectors |
CRegtreeFmllrDiagGmmAccs | Class for computing the accumulators needed for the maximum-likelihood estimate of FMLLR transforms for an acoustic model that uses diagonal Gaussian mixture models as emission densities |
CRegtreeFmllrOptions | Configuration variables for FMLLR transforms |
CRegtreeMllrDiagGmm | An MLLR mean transformation is an affine transformation of Gaussian means |
CRegtreeMllrDiagGmmAccs | Class for computing the maximum-likelihood estimates of the parameters of an acoustic model that uses diagonal Gaussian mixture models as emission densities |
CRegtreeMllrOptions | Configuration variables for FMLLR transforms |
CRnnlmDeterministicFst | |
CRspecifierOptions | |
CScalarClusterable | ScalarClusterable clusters scalars with x^2 loss |
CSemaphore | |
CSequentialTableReader | A templated class for reading objects sequentially from an archive or script file; see The Table concept |
CSequentialTableReaderArchiveImpl | |
CSequentialTableReaderBackgroundImpl | |
CSequentialTableReaderImplBase | |
CSequentialTableReaderScriptImpl | |
CSgmm2FmllrConfig | Configuration variables needed in the estimation of FMLLR for SGMMs |
CSgmm2FmllrGlobalParams | Global adaptation parameters |
CSgmm2GauPost | Indexed by time |
CSgmm2GauPostElement | This is the entry for a single time |
CSgmm2GselectConfig | |
►CSgmm2LikelihoodCache | Sgmm2LikelihoodCache caches SGMM likelihoods at two levels: the final pdf likelihoods, and the sub-state level likelihoods, which means that with the SCTM system we can avoid redundant computation |
CPdfCacheElement | |
CSubstateCacheElement | |
CSgmm2PerFrameDerivedVars | Holds the per-frame precomputed quantities x(t), x_{i}(t), z_{i}(t), and n_{i}(t) (cf |
CSgmm2PerSpkDerivedVars | |
CSgmm2Project | |
CSgmm2SplitSubstatesConfig | |
CShiftedDeltaFeatures | |
CShiftedDeltaFeaturesOptions | |
►CSimpleDecoder | Simplest possible decoder, included largely for didactic purposes and as a means to debug more highly optimized decoders |
CToken | |
►CSimpleOptions | The class SimpleOptions is an implementation of OptionsItf that allows setting and getting option values programmatically, i.e., via getter and setter methods |
COptionInfo | |
CSingleUtteranceGmmDecoder | You will instantiate this class when you want to decode a single utterance using the online-decoding setup |
CSingleUtteranceNnet2Decoder | You will instantiate this class when you want to decode a single utterance using the online-decoding setup for neural nets |
CSingleUtteranceNnet2DecoderThreaded | You will instantiate this class when you want to decode a single utterance using the online-decoding setup for neural nets |
CSingleUtteranceNnet3DecoderTpl | You will instantiate this class when you want to decode a single utterance using the online-decoding setup for neural nets |
CSingleUtteranceNnet3IncrementalDecoderTpl | You will instantiate this class when you want to decode a single utterance using the online-decoding setup for neural nets |
CSlidingWindowCmnOptions | |
CSolverOptions | This class describes the options for maximizing various quadratic objective functions |
CSparseMatrix | |
CSparseVector | |
CSpectrogramComputer | Class for computing spectrogram features |
CSpectrogramOptions | SpectrogramOptions contains basic options for computing spectrogram features |
CSpeexOptions | |
CSphinxMatrixHolder | A class for reading/writing Sphinx format matrices |
CSplitEventMap | |
CSplitRadixComplexFft | |
CSplitRadixRealFft | |
CSpMatrix | Packed symmetric matrix class
CStandardInputImpl | |
CStandardOutputImpl | |
CStringHasher | A hashing function object for strings |
CSubMatrix | Sub-matrix representation |
CSubVector | Represents a non-allocating general vector which can be defined as a sub-vector of higher-level vector [or as the row of a matrix] |
CTableEventMap | |
CTableWriter | A templated class for writing objects to an archive or script file; see The Table concept |
CTableWriterArchiveImpl | |
CTableWriterBothImpl | |
CTableWriterImplBase | |
CTableWriterScriptImpl | |
►CTaskSequencer | |
CRunTaskArgsList | |
CTaskSequencerConfig | |
CTcpServer | |
CThreadSynchronizer | Class ThreadSynchronizer acts to guard an arbitrary type of buffer between a producing and a consuming thread (note: it's all symmetric between the two thread types) |
CTidToTstateMapper | |
CTimer | |
CTokenHolder | |
CTokenVectorHolder | |
CTpMatrix | Packed triangular matrix class
CTrainingGraphCompiler | |
CTrainingGraphCompilerOptions | |
►CTransitionModel | |
CTuple | |
►CTreeClusterer | |
CNode | |
CTreeClusterOptions | |
CTreeRenderer | |
CTwvMetrics | |
CTwvMetricsOptions | |
CTwvMetricsStats | |
CUbmClusteringOptions | |
CUpdatePhoneVectorsClass | |
CUpdateWClass | |
CVadEnergyOptions | |
CVector | A class representing a vector |
CVectorBase | Provides a vector abstraction class |
CVectorClusterable | VectorClusterable wraps vectors in a form accessible to generic clustering algorithms |
CVectorFstToKwsLexicographicFstMapper | |
CVectorHasher | A hashing function-object for vectors |
CWaveData | This class's purpose is to read in Wave files |
CWaveHeaderReadGofer | |
CWaveHolder | |
CWaveInfo | This class reads and holds wave file header information
CWaveInfoHolder | |
CWordAlignedLatticeTester | |
CWordAlignLatticeLexiconInfo | This class extracts some information from the lexicon and stores it in a suitable form for the word-alignment code to use |
CWordAlignLatticeLexiconOpts | |
CWordBoundaryInfo | |
CWordBoundaryInfoNewOpts | |
CWordBoundaryInfoOpts | |
CWspecifierOptions | |
►Nrnnlm | |
CCRnnLM | |
Cneuron | |
Csynapse | |
Cvocab_word | |
CCacheArcIterator | |
CCacheImpl | |
CCacheOptions | |
CCacheStateIterator | |
CCuBlockMatrixData_ | This structure is used in cu-block-matrix.h to store information about a block-diagonal matrix |
CDecodeInfo | |
CEbwAmSgmmUpdater | Contains the functions needed to update the SGMM parameters |
CFloatWeightTpl | |
CImplToFst | |
CInt32Pair | |
CKaldiCompileTimeAssert | |
CKaldiCompileTimeAssert< true > | |
CkMarkerMap | |
CMatrixDim_ | Structure containing size of the matrix plus stride |
CMatrixElement | |
CMleAmSgmmUpdater | Contains the functions needed to update the SGMM parameters |
CTestFunction | |