TimeHeightConvolutionComponent Class Reference

TimeHeightConvolutionComponent implements 2-dimensional convolution where one of the dimensions of convolution (which traditionally would be called the width axis) is identified with time (i.e. More...

#include <nnet-convolutional-component.h>

Inheritance diagram for TimeHeightConvolutionComponent:
Collaboration diagram for TimeHeightConvolutionComponent:

Classes

class  PrecomputedIndexes
 

Public Member Functions

 TimeHeightConvolutionComponent ()
 
 TimeHeightConvolutionComponent (const TimeHeightConvolutionComponent &other)
 
virtual int32 InputDim () const
 Returns input-dimension of this component. More...
 
virtual int32 OutputDim () const
 Returns output-dimension of this component. More...
 
virtual std::string Info () const
 Returns some text-form information about this component, for diagnostics. More...
 
virtual void InitFromConfig (ConfigLine *cfl)
 Initialize, from a ConfigLine object. More...
 
virtual std::string Type () const
 Returns a string such as "SigmoidComponent", describing the type of the object. More...
 
virtual int32 Properties () const
 Return bitmask of the component's properties. More...
 
virtual void * Propagate (const ComponentPrecomputedIndexes *indexes, const CuMatrixBase< BaseFloat > &in, CuMatrixBase< BaseFloat > *out) const
 Propagate function. More...
 
virtual void Backprop (const std::string &debug_info, const ComponentPrecomputedIndexes *indexes, const CuMatrixBase< BaseFloat > &in_value, const CuMatrixBase< BaseFloat > &out_value, const CuMatrixBase< BaseFloat > &out_deriv, void *memo, Component *to_update, CuMatrixBase< BaseFloat > *in_deriv) const
 Backprop function; depending on which of the arguments 'to_update' and 'in_deriv' are non-NULL, this can compute input-data derivatives and/or perform model update. More...
 
virtual void Read (std::istream &is, bool binary)
 Read function (used after we know the type of the Component); accepts input that is missing the token that describes the component type, in case it has already been consumed. More...
 
virtual void Write (std::ostream &os, bool binary) const
 Write component to stream. More...
 
virtual Component * Copy () const
 Copies component (deep copy). More...
 
virtual void ReorderIndexes (std::vector< Index > *input_indexes, std::vector< Index > *output_indexes) const
 This function only does something interesting for non-simple Components. More...
 
virtual void GetInputIndexes (const MiscComputationInfo &misc_info, const Index &output_index, std::vector< Index > *desired_indexes) const
 This function only does something interesting for non-simple Components. More...
 
virtual bool IsComputable (const MiscComputationInfo &misc_info, const Index &output_index, const IndexSet &input_index_set, std::vector< Index > *used_inputs) const
 This function only does something interesting for non-simple Components, and it exists to make it possible to manage optionally-required inputs. More...
 
virtual ComponentPrecomputedIndexes * PrecomputeIndexes (const MiscComputationInfo &misc_info, const std::vector< Index > &input_indexes, const std::vector< Index > &output_indexes, bool need_backprop) const
 This function must return NULL for simple Components. More...
 
virtual void Scale (BaseFloat scale)
 This virtual function, when called on an UpdatableComponent, scales the parameters by "scale". More...
 
virtual void Add (BaseFloat alpha, const Component &other)
 This virtual function, when called on an UpdatableComponent, adds the parameters of another updatable component, times some constant, to the current parameters. More...
 
virtual void PerturbParams (BaseFloat stddev)
 This function is to be used in testing. More...
 
virtual BaseFloat DotProduct (const UpdatableComponent &other) const
 Computes dot-product between parameters of two instances of a Component. More...
 
virtual int32 NumParameters () const
 The following new virtual function returns the total dimension of the parameters in this class. More...
 
virtual void Vectorize (VectorBase< BaseFloat > *params) const
 Turns the parameters into vector form. More...
 
virtual void UnVectorize (const VectorBase< BaseFloat > &params)
 Converts the parameters from vector form. More...
 
virtual void FreezeNaturalGradient (bool freeze)
 Freezes/unfreezes NaturalGradient updates, if applicable (to be overridden by components that use Natural Gradient). More...
 
void ScaleLinearParams (BaseFloat alpha)
 
- Public Member Functions inherited from UpdatableComponent
 UpdatableComponent (const UpdatableComponent &other)
 
 UpdatableComponent ()
 
virtual ~UpdatableComponent ()
 
virtual void SetUnderlyingLearningRate (BaseFloat lrate)
 Sets the learning rate of gradient descent; it gets multiplied by learning_rate_factor_. More...
 
virtual void SetActualLearningRate (BaseFloat lrate)
 Sets the learning rate directly, bypassing learning_rate_factor_. More...
 
virtual void SetAsGradient ()
 Sets is_gradient_ to true and sets learning_rate_ to 1, ignoring learning_rate_factor_. More...
 
virtual BaseFloat LearningRateFactor ()
 
virtual void SetLearningRateFactor (BaseFloat lrate_factor)
 
void SetUpdatableConfigs (const UpdatableComponent &other)
 
BaseFloat LearningRate () const
 Gets the learning rate to be used in gradient descent. More...
 
BaseFloat MaxChange () const
 Returns the per-component max-change value, which is interpreted as the maximum change (in l2 norm) in parameters that is allowed per minibatch for this component. More...
 
void SetMaxChange (BaseFloat max_change)
 
BaseFloat L2Regularization () const
 Returns the l2 regularization constant, which may be set in any updatable component (usually from the config file). More...
 
void SetL2Regularization (BaseFloat a)
 
- Public Member Functions inherited from Component
virtual void StoreStats (const CuMatrixBase< BaseFloat > &in_value, const CuMatrixBase< BaseFloat > &out_value, void *memo)
 This function may store stats on average activation values, and for some component types, the average value of the derivative of the nonlinearity. More...
 
virtual void ZeroStats ()
 Components that provide an implementation of StoreStats should also provide an implementation of ZeroStats(), to set those stats to zero. More...
 
virtual void DeleteMemo (void *memo) const
 This virtual function only needs to be overridden by Components that return a non-NULL memo from their Propagate() function. More...
 
 Component ()
 
virtual ~Component ()
 

Private Member Functions

void Check () const
 
void ComputeDerived ()
 
void UpdateNaturalGradient (const PrecomputedIndexes &indexes, const CuMatrixBase< BaseFloat > &in_value, const CuMatrixBase< BaseFloat > &out_deriv)
 
void UpdateSimple (const PrecomputedIndexes &indexes, const CuMatrixBase< BaseFloat > &in_value, const CuMatrixBase< BaseFloat > &out_deriv)
 
void InitUnit ()
 

Private Attributes

time_height_convolution::ConvolutionModel model_
 
std::vector< int32 > all_time_offsets_
 
std::vector< bool > time_offset_required_
 
CuMatrix< BaseFloat > linear_params_
 
CuVector< BaseFloat > bias_params_
 
BaseFloat max_memory_mb_
 
bool use_natural_gradient_
 
OnlineNaturalGradient preconditioner_in_
 
OnlineNaturalGradient preconditioner_out_
 

Additional Inherited Members

- Static Public Member Functions inherited from Component
static Component * ReadNew (std::istream &is, bool binary)
 Read component from stream (works out its type). Dies on error. More...
 
static Component * NewComponentOfType (const std::string &type)
 Returns a new Component of the given type e.g. More...
 
- Protected Member Functions inherited from UpdatableComponent
void InitLearningRatesFromConfig (ConfigLine *cfl)
 
std::string ReadUpdatableCommon (std::istream &is, bool binary)
 
void WriteUpdatableCommon (std::ostream &is, bool binary) const
 
- Protected Attributes inherited from UpdatableComponent
BaseFloat learning_rate_
 learning rate (typically 0.0..0.01) More...
 
BaseFloat learning_rate_factor_
 learning rate factor (normally 1.0, but can be set to another value so that when you call SetLearningRate(), that value will be scaled by this factor). More...
 
BaseFloat l2_regularize_
 L2 regularization constant. More...
 
bool is_gradient_
 True if this component is to be treated as a gradient rather than as parameters. More...
 
BaseFloat max_change_
 configuration value for imposing max-change More...
 

Detailed Description

TimeHeightConvolutionComponent implements 2-dimensional convolution where one of the dimensions of convolution (which traditionally would be called the width axis) is identified with time, i.e. the 't' component of Indexes. For a deeper understanding of how this works, please see convolution.h.

The following are the parameters accepted on the config line, with examples of their values.

Parameters inherited from UpdatableComponent (see the comment above the declaration of UpdatableComponent in nnet-component-itf.h for details): learning-rate, learning-rate-factor, max-change

Convolution-related parameters:

num-filters-in        E.g. num-filters-in=32. Number of input filters (the number of separate versions of the input image). The filter-dim has stride 1 in the input and output vectors, i.e. we order the input as (all-filters-for-height=0, all-filters-for-height=1, etc.)

num-filters-out       E.g. num-filters-out=64. The number of output filters (the number of separate versions of the output image). As with the input, the filter-dim has stride 1.

height-in             E.g. height-in=40. The height of the input image. The width is not specified at the model level, as it is identified with "t" and is called the time axis; the width is determined by how many "t" values were available at the input of the network, and how many were requested at the output.

height-out            E.g. height-out=40. The height of the output image. Will normally be <= (the input height divided by height-subsample-out).

height-subsample-out  E.g. height-subsample-out=2 (defaults to 1). Subsampling factor on the height axis; e.g. you might set this to 2 if you are doing subsampling on this layer, which would involve discarding every other height increment at the output. There is no corresponding config for the time dimension, as time subsampling is determined by which 't' values you request at the output, together with the values of 'time-offsets' at different layers of the network.

height-offsets        E.g. height-offsets=-1,0,1. The set of height offsets that contribute to each output pixel: with the values -1,0,1, height 10 at the output would see data from heights 9,10,11 at the input. These values will normally be consecutive. Negative values imply zero-padding on the bottom of the image, since output-height 0 is always defined. Zero-padding at the top of the image is determined in a similar way (e.g. if height-in==height-out and height-offsets=-1,0,1, then there is 1 pixel of padding at the top and bottom of the image).

time-offsets          E.g. time-offsets=-1,0,1. The time offsets that we require at the input to produce a given output; these are comparable to the offsets used in TDNNs. Note that the time axis is always numbered using an absolute scheme, so that if there is subsampling on the time axis, then later in the network you'll see time-offsets like "-2,0,2" or "-4,0,4". Subsampling on the time axis is not explicitly specified but is implicit, based on tracking dependencies.

offsets               Setting 'offsets' is an alternative to setting both height-offsets and time-offsets, which is useful for configurations with less regularity. It is a semicolon-separated list of (time-offset,height-offset) pairs that might look like: -1,-1;-1,0;-1,1;0,-1;....;1,1 (a standalone sketch of how this relates to height-offsets and time-offsets follows this list).

required-time-offsets E.g. required-time-offsets=0 (defaults to the same value as time-offsets). This is a set of time offsets, which if specified must be a nonempty subset of time-offsets; it determines whether zero-padding on the time axis is allowed in cases where there is insufficient input. If not specified it defaults to the same as 'time-offsets', meaning there is no zero-padding on the time axis. Note: for speech tasks we tend to pad on the time axis with repeats of the first or last frame, rather than zero; this is handled while preparing the data, not by the core components of the nnet3 framework. So for speech tasks we wouldn't normally set this value.

max-memory-mb         Maximum amount of temporary memory, in megabytes, that may be used as temporary matrices in the convolution computation. default=200.0.
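To make the relationship between the two offset styles concrete, here is a small standalone sketch (illustrative C++ only, not Kaldi code; the names and values are made up): specifying height-offsets and time-offsets is equivalent to specifying 'offsets' as the full cartesian product of the two lists, which is also what InitFromConfig() builds internally.

  #include <cstdio>
  #include <vector>

  int main() {
    // Hypothetical layer config: height-offsets=-1,0,1 and time-offsets=-1,0,1.
    std::vector<int> time_offsets = {-1, 0, 1}, height_offsets = {-1, 0, 1};
    // Expand to the equivalent 'offsets' string: a semicolon-separated list of
    // (time-offset,height-offset) pairs, i.e. the cartesian product of the lists.
    for (size_t i = 0; i < time_offsets.size(); i++) {
      for (size_t j = 0; j < height_offsets.size(); j++) {
        bool last = (i + 1 == time_offsets.size() && j + 1 == height_offsets.size());
        std::printf("%d,%d%s", time_offsets[i], height_offsets[j], last ? "\n" : ";");
      }
    }
    return 0;
  }

Run as-is it prints -1,-1;-1,0;-1,1;0,-1;0,0;0,1;1,-1;1,0;1,1, i.e. the 9 pairs of a 3x3 kernel.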

Initialization parameters:

param-stddev   Standard deviation of the linear parameters of the convolution. Defaults to sqrt(1.0 / (num-filters-in * num-height-offsets * num-time-offsets)), e.g. sqrt(1.0/(64*3*3)) for a 3x3 kernel with 64 input filters; this value will ensure that the output has unit stddev if the input has unit stddev (a short numeric sketch follows below).

bias-stddev    Standard deviation of bias terms. default=0.0.

init-unit      Defaults to false. If true, it is required that num-filters-in equal num-filters-out and there should exist a (height, time) offset in the model equal to (0, 0). We will initialize the parameter matrix to be equivalent to the identity transform. In this case, param-stddev is ignored.
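As a small numeric sketch of the param-stddev default (standalone illustrative code, not from Kaldi; the variable names are made up):

  #include <cmath>
  #include <cstdio>

  int main() {
    // Hypothetical 3x3 kernel with 64 input filters, as in the example above.
    int num_filters_in = 64, num_time_offsets = 3, num_height_offsets = 3;
    // Default: param-stddev = sqrt(1.0 / (num-filters-in * num-height-offsets * num-time-offsets)).
    double param_stddev =
        std::sqrt(1.0 / (num_filters_in * num_height_offsets * num_time_offsets));
    std::printf("default param-stddev = %g\n", param_stddev);  // 1/24, about 0.0417
    return 0;
  }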

Natural-gradient related options are below; you won't normally have to set these.

use-natural-gradient      E.g. use-natural-gradient=false (defaults to true). You can set this to false to disable the natural gradient updates (you won't normally want to do this).

rank-out                  Rank used in the low-rank-plus-unit estimate of the Fisher-matrix factor that has the dimension (num-rows of the parameter matrix), which equals num-filters-out. It defaults to the minimum of 80, or half of the number of output filters.

rank-in                   Rank used in the low-rank-plus-unit estimate of the Fisher-matrix factor that has the dimension (num-cols of the parameter matrix), i.e. (num-input-filters * number of time-offsets * number of height-offsets + 1), e.g. num-input-filters * 3 * 3 + 1 for a 3x3 kernel (the +1 is for the bias term). It defaults to the minimum of 80, or half the num-cols of the parameter matrix. [note: I'm considering decreasing this default to e.g. 40 or 20.]

num-minibatches-history   This is used to set the 'num_samples_history_in' configuration value of the natural gradient object. There is no concept of samples (frames) in the application of natural gradient to the convnet, because we do it all on the rows and columns of the derivative. default=4.0. A larger value means the Fisher matrix is averaged over more minibatches (it's an exponential-decay thing).

alpha-out                 Constant that determines how much we smooth the Fisher-matrix factors with the unit matrix, for the space of dimension num-filters-out. default=4.0.

alpha-in                  Constant that determines how much we smooth the Fisher-matrix factors with the unit matrix, for the space of dimension (num-filters-in * num-time-offsets * num-height-offsets + 1). default=4.0.
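For illustration only, a config line that sets the natural-gradient options explicitly might look like the following (the particular values are hypothetical; the defaults described above are normally fine):

  num-filters-in=32 num-filters-out=64 height-in=28 height-out=28 \
     height-offsets=-1,0,1 time-offsets=-1,0,1 required-time-offsets=0 \
     use-natural-gradient=true rank-in=40 rank-out=40 alpha-in=4.0 alpha-out=4.0 \
     num-minibatches-history=4.0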

Example of a 3x3 kernel with no subsampling, with zero-padding on both the height and time axes, and where there has previously been no subsampling on the time axis:

  num-filters-in=32 num-filters-out=64 height-in=28 height-out=28 \
     height-subsample-out=1 height-offsets=-1,0,1 time-offsets=-1,0,1 \
     required-time-offsets=0

Example of a 3x3 kernel with no subsampling, without zero-padding on either axis, and where there has *previously* been 2-fold subsampling on the time axis:

  num-filters-in=32 num-filters-out=64 height-in=20 height-out=18 \
     height-subsample-out=1 height-offsets=0,1,2 time-offsets=0,2,4

[note: above, the choice to have the time-offsets start at zero rather than be centered is just a choice: it assumes that at the output of the network you would want to request indexes with t=0, while at the input the t values start from zero.]

Example of a 3x3 kernel with subsampling on the height axis, without zero-padding on either axis, and where there has previously been 2-fold subsampling on the time axis:

  num-filters-in=32 num-filters-out=64 height-in=20 height-out=9 \
     height-subsample-out=2 height-offsets=0,1,2 time-offsets=0,2,4

[note: subsampling on the time axis is not expressed in the layer itself: any time you increase the distance between time-offsets, like changing them from 0,1,2 to 0,2,4, you are effectively subsampling the previous layer, assuming you only request the output at one time value or at multiples of the total subsampling factor.]
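As a rough sanity-check of the example just above (a standalone sketch assuming no zero-padding on the height axis; the real geometry checks live in ConvolutionModel::Check(), and this is not that code), the following computes the largest un-padded output height for height-in=20, height-subsample-out=2, height-offsets=0,1,2:

  #include <cstdio>
  #include <vector>

  int main() {
    // Hypothetical numbers taken from the example above.
    int height_in = 20, height_subsample_out = 2;
    std::vector<int> height_offsets = {0, 1, 2};
    int height_out = 0;
    // An output height h is valid without padding if every input height
    // h * height-subsample-out + offset lies inside [0, height-in).
    for (int h = 0; ; h++) {
      bool ok = true;
      for (size_t i = 0; i < height_offsets.size(); i++) {
        int h_in = h * height_subsample_out + height_offsets[i];
        if (h_in < 0 || h_in >= height_in)
          ok = false;
      }
      if (!ok)
        break;
      height_out = h + 1;
    }
    std::printf("height-out = %d\n", height_out);  // prints 9, as in the example
    return 0;
  }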

Example of a 1x1 kernel:

  num-filters-in=64 num-filters-out=64 height-in=20 height-out=20 \
     height-subsample-out=1 height-offsets=0 time-offsets=0

Definition at line 212 of file nnet-convolutional-component.h.

Constructor & Destructor Documentation

TimeHeightConvolutionComponent ( const TimeHeightConvolutionComponent &  other )

Definition at line 34 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::Check().

35  :
36  UpdatableComponent(other), // initialize base-class
37  model_(other.model_),
38  all_time_offsets_(other.all_time_offsets_),
39  time_offset_required_(other.time_offset_required_),
40  linear_params_(other.linear_params_),
41  bias_params_(other.bias_params_),
42  max_memory_mb_(other.max_memory_mb_),
43  use_natural_gradient_(other.use_natural_gradient_),
44  preconditioner_in_(other.preconditioner_in_),
45  preconditioner_out_(other.preconditioner_out_) {
46  Check();
47 }

Member Function Documentation

void Add ( BaseFloat  alpha,
const Component other 
)
virtual

This virtual function, when called on an UpdatableComponent, adds the parameters of another updatable component, times some constant, to the current parameters.

When called on a NonlinearComponent (or another component that stores stats, like BatchNormComponent), it relates to adding stats. Otherwise it will normally do nothing.

Reimplemented from Component.

Definition at line 591 of file nnet-convolutional-component.cc.

References CuMatrixBase< Real >::AddMat(), CuVectorBase< Real >::AddVec(), TimeHeightConvolutionComponent::bias_params_, KALDI_ASSERT, and TimeHeightConvolutionComponent::linear_params_.

592  {
593  const TimeHeightConvolutionComponent *other =
594  dynamic_cast<const TimeHeightConvolutionComponent*>(&other_in);
595  KALDI_ASSERT(other != NULL);
596  linear_params_.AddMat(alpha, other->linear_params_);
597  bias_params_.AddVec(alpha, other->bias_params_);
598 }
void Backprop ( const std::string &  debug_info,
const ComponentPrecomputedIndexes indexes,
const CuMatrixBase< BaseFloat > &  in_value,
const CuMatrixBase< BaseFloat > &  out_value,
const CuMatrixBase< BaseFloat > &  out_deriv,
void *  memo,
Component to_update,
CuMatrixBase< BaseFloat > *  in_deriv 
) const
virtual

Backprop function; depending on which of the arguments 'to_update' and 'in_deriv' are non-NULL, this can compute input-data derivatives and/or perform model update.

Parameters
[in]  debug_info   The component name, to be printed out in any warning messages.
[in]  indexes      A pointer to some information output by this class's PrecomputeIndexes function (will be NULL for simple components, i.e. those that don't do things like splicing).
[in]  in_value     The matrix that was given as input to the Propagate function. Will be ignored (and may be empty) if Properties()&kBackpropNeedsInput == 0.
[in]  out_value    The matrix that was output from the Propagate function. Will be ignored (and may be empty) if Properties()&kBackpropNeedsOutput == 0.
[in]  out_deriv    The derivative at the output of this component.
[in]  memo         This will normally be NULL, but for component types that set the flag kUsesMemo, this will be the return value of the Propagate() function that corresponds to this Backprop() function. Ownership of any pointers is not transferred to the Backprop function; DeleteMemo() will be called to delete it.
[out] to_update    If model update is desired, the Component to be updated, else NULL. Does not have to be identical to this. If supplied, you can assume that to_update->Properties() & kUpdatableComponent is nonzero.
[out] in_deriv     The derivative at the input of this component, if needed (else NULL). If Properties()&kBackpropInPlace, may be the same matrix as out_deriv. If Properties()&kBackpropAdds, this is added to by the Backprop routine, else it is set. The component code chooses which mode to work in, based on convenience.

Implements Component.

Definition at line 302 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::PrecomputedIndexes::computation, kaldi::nnet3::time_height_convolution::ConvolveBackwardData(), UpdatableComponent::is_gradient_, KALDI_ASSERT, UpdatableComponent::learning_rate_, TimeHeightConvolutionComponent::linear_params_, TimeHeightConvolutionComponent::UpdateNaturalGradient(), TimeHeightConvolutionComponent::UpdateSimple(), and TimeHeightConvolutionComponent::use_natural_gradient_.

310  {
311  const PrecomputedIndexes *indexes =
312  dynamic_cast<const PrecomputedIndexes*>(indexes_in);
313  KALDI_ASSERT(indexes != NULL);
314 
315  if (in_deriv != NULL) {
316  ConvolveBackwardData(indexes->computation, linear_params_,
317  out_deriv, in_deriv);
318  }
319  if (to_update_in != NULL) {
320  TimeHeightConvolutionComponent *to_update =
321  dynamic_cast<TimeHeightConvolutionComponent*>(to_update_in);
322  KALDI_ASSERT(to_update != NULL);
323 
324  if (to_update->learning_rate_ == 0.0)
325  return;
326 
327  if (to_update->is_gradient_ || !to_update->use_natural_gradient_)
328  to_update->UpdateSimple(*indexes, in_value, out_deriv);
329  else
330  to_update->UpdateNaturalGradient(*indexes, in_value, out_deriv);
331  }
332 }
void Check ( ) const
private

Definition at line 50 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::bias_params_, ConvolutionModel::Check(), CuVectorBase< Real >::Dim(), KALDI_ASSERT, TimeHeightConvolutionComponent::linear_params_, TimeHeightConvolutionComponent::model_, ConvolutionModel::num_filters_out, CuMatrixBase< Real >::NumCols(), CuMatrixBase< Real >::NumRows(), ConvolutionModel::ParamCols(), and ConvolutionModel::ParamRows().

Referenced by TimeHeightConvolutionComponent::Read(), and TimeHeightConvolutionComponent::TimeHeightConvolutionComponent().

50  {
51  model_.Check();
55 }
void ComputeDerived ( )
private

Definition at line 491 of file nnet-convolutional-component.cc.

References ConvolutionModel::all_time_offsets, TimeHeightConvolutionComponent::all_time_offsets_, rnnlm::i, TimeHeightConvolutionComponent::model_, ConvolutionModel::required_time_offsets, and TimeHeightConvolutionComponent::time_offset_required_.

Referenced by TimeHeightConvolutionComponent::InitFromConfig(), and TimeHeightConvolutionComponent::Read().

491  {
492  all_time_offsets_.clear();
493  all_time_offsets_.insert(
494  all_time_offsets_.end(),
495  model_.all_time_offsets.begin(),
496      model_.all_time_offsets.end());
497  time_offset_required_.resize(all_time_offsets_.size());
498  for (size_t i = 0; i < all_time_offsets_.size(); i++) {
499    time_offset_required_[i] =
500        (model_.required_time_offsets.count(all_time_offsets_[i]) != 0);
501  }
502 }
virtual Component* Copy ( ) const
inlinevirtual

Copies component (deep copy).

Implements Component.

Definition at line 245 of file nnet-convolutional-component.h.

References TimeHeightConvolutionComponent::TimeHeightConvolutionComponent().

BaseFloat DotProduct ( const UpdatableComponent other) const
virtual

Computes dot-product between parameters of two instances of a Component.

Can be used for computing parameter-norm of an UpdatableComponent.

Implements UpdatableComponent.

Definition at line 610 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::bias_params_, KALDI_ASSERT, kaldi::kTrans, TimeHeightConvolutionComponent::linear_params_, kaldi::TraceMatMat(), and kaldi::VecVec().

611  {
612  const TimeHeightConvolutionComponent *other =
613  dynamic_cast<const TimeHeightConvolutionComponent*>(&other_in);
614  KALDI_ASSERT(other != NULL);
615  return TraceMatMat(linear_params_, other->linear_params_, kTrans) +
616  VecVec(bias_params_, other->bias_params_);
617 }
void FreezeNaturalGradient ( bool  freeze)
virtual

Freezes/unfreezes NaturalGradient updates, if applicable (to be overridden by components that use Natural Gradient).

Reimplemented from UpdatableComponent.

Definition at line 642 of file nnet-convolutional-component.cc.

References OnlineNaturalGradient::Freeze(), TimeHeightConvolutionComponent::preconditioner_in_, and TimeHeightConvolutionComponent::preconditioner_out_.

void GetInputIndexes ( const MiscComputationInfo misc_info,
const Index output_index,
std::vector< Index > *  desired_indexes 
) const
virtual

This function only does something interesting for non-simple Components.

For a given index at the output of the component, tells us what indexes are required at its input (note: "required" encompasses also optionally-required things; it will enumerate all things that we'd like to have). See also IsComputable().

Parameters
[in]  misc_info         This argument is supplied to handle things that the framework can't very easily supply: information like which time indexes are needed for AggregateComponent, which time-indexes are available at the input of a recurrent network, and so on. We will add members to misc_info as needed.
[in]  output_index      The Index at the output of the component, for which we are requesting the list of indexes at the component's input.
[out] desired_indexes   A list of indexes that are desired at the input; they are to be written to here. By "desired" we mean required or optionally-required.

The default implementation of this function is suitable for any SimpleComponent; it just copies the output_index to a single identical element in input_indexes.

Reimplemented from Component.

Definition at line 504 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::all_time_offsets_, rnnlm::i, KALDI_ASSERT, kaldi::nnet3::kNoTime, Index::n, Index::t, and Index::x.

507  {
508  KALDI_ASSERT(output_index.t != kNoTime);
509  size_t size = all_time_offsets_.size();
510  desired_indexes->resize(size);
511  for (size_t i = 0; i < size; i++) {
512  (*desired_indexes)[i].n = output_index.n;
513  (*desired_indexes)[i].t = output_index.t + all_time_offsets_[i];
514  (*desired_indexes)[i].x = output_index.x;
515  }
516 }
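For intuition, here is a standalone sketch of what the loop above computes, using plain ints in place of the real Index struct (the values are hypothetical): with all_time_offsets_ = {-1, 0, 1}, an output at t=10 requests inputs at t=9, 10 and 11, while the 'n' and 'x' members are simply copied from the output index.

  #include <cstdio>
  #include <vector>

  int main() {
    // Hypothetical offsets, as ComputeDerived() would set up for time-offsets=-1,0,1.
    std::vector<int> all_time_offsets = {-1, 0, 1};
    int output_t = 10;  // the 't' of the requested output index
    // Each output frame asks for one input frame per entry of the offset list.
    for (size_t i = 0; i < all_time_offsets.size(); i++)
      std::printf("desired input t = %d\n", output_t + all_time_offsets[i]);
    return 0;
  }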
std::string Info ( ) const
virtual

Returns some text-form information about this component, for diagnostics.

Starts with the type of the component. E.g. "SigmoidComponent dim=900", although most components will have much more info.

Reimplemented from UpdatableComponent.

Definition at line 65 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::bias_params_, OnlineNaturalGradient::GetAlpha(), OnlineNaturalGradient::GetNumMinibatchesHistory(), OnlineNaturalGradient::GetRank(), ConvolutionModel::Info(), UpdatableComponent::Info(), TimeHeightConvolutionComponent::linear_params_, TimeHeightConvolutionComponent::max_memory_mb_, TimeHeightConvolutionComponent::model_, TimeHeightConvolutionComponent::NumParameters(), TimeHeightConvolutionComponent::preconditioner_in_, TimeHeightConvolutionComponent::preconditioner_out_, kaldi::nnet3::PrintParameterStats(), and TimeHeightConvolutionComponent::use_natural_gradient_.

65  {
66  std::ostringstream stream;
67  // The output of model_.Info() has been designed to be suitable
68  // as a component-level info string, it has
69  // {num-filters,height}-{in-out}, offsets=[...], required-time-offsets=[...],
70  // {input,output}-dim.
71  stream << UpdatableComponent::Info() << ' ' << model_.Info();
72  PrintParameterStats(stream, "filter-params", linear_params_);
73  PrintParameterStats(stream, "bias-params", bias_params_, true);
74  stream << ", num-params=" << NumParameters()
75  << ", max-memory-mb=" << max_memory_mb_
76  << ", use-natural-gradient=" << use_natural_gradient_;
77  if (use_natural_gradient_) {
78  stream << ", num-minibatches-history="
79         << preconditioner_in_.GetNumMinibatchesHistory()
80  << ", rank-in=" << preconditioner_in_.GetRank()
81  << ", rank-out=" << preconditioner_out_.GetRank()
82  << ", alpha-in=" << preconditioner_in_.GetAlpha()
83  << ", alpha-out=" << preconditioner_out_.GetAlpha();
84  }
85  return stream.str();
86 }
void InitFromConfig ( ConfigLine cfl)
virtual

Initialize, from a ConfigLine object.

Parameters
[in] cfl   A ConfigLine containing any parameters that are needed for initialization. For example: "dim=100 param-stddev=0.1"

Implements Component.

Definition at line 115 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::bias_params_, ConvolutionModel::Check(), ConvolutionModel::ComputeDerived(), TimeHeightConvolutionComponent::ComputeDerived(), ConfigLine::GetValue(), ConvolutionModel::height_in, ConvolutionModel::Offset::height_offset, ConvolutionModel::height_out, ConvolutionModel::height_subsample_out, rnnlm::i, UpdatableComponent::InitLearningRatesFromConfig(), TimeHeightConvolutionComponent::InitUnit(), kaldi::IsSortedAndUniq(), rnnlm::j, KALDI_ASSERT, KALDI_ERR, KALDI_WARN, TimeHeightConvolutionComponent::linear_params_, TimeHeightConvolutionComponent::max_memory_mb_, TimeHeightConvolutionComponent::model_, ConvolutionModel::num_filters_in, ConvolutionModel::num_filters_out, CuMatrixBase< Real >::NumCols(), CuMatrixBase< Real >::NumRows(), ConvolutionModel::offsets, ConvolutionModel::ParamCols(), ConvolutionModel::ParamRows(), TimeHeightConvolutionComponent::preconditioner_in_, TimeHeightConvolutionComponent::preconditioner_out_, ConvolutionModel::required_time_offsets, CuVector< Real >::Resize(), CuMatrix< Real >::Resize(), CuVectorBase< Real >::Scale(), CuMatrixBase< Real >::Scale(), OnlineNaturalGradient::SetAlpha(), OnlineNaturalGradient::SetNumMinibatchesHistory(), CuVectorBase< Real >::SetRandn(), CuMatrixBase< Real >::SetRandn(), OnlineNaturalGradient::SetRank(), kaldi::SortAndUniq(), kaldi::SplitStringToIntegers(), kaldi::SplitStringToVector(), ConvolutionModel::Offset::time_offset, TimeHeightConvolutionComponent::use_natural_gradient_, and ConfigLine::WholeLine().

115  {
116  // 1. Config values inherited from UpdatableComponent.
117  InitLearningRatesFromConfig(cfl);
118 
119  // 2. convolution-related config values.
120  model_.height_subsample_out = 1; // default.
121  max_memory_mb_ = 200.0;
122  std::string height_offsets, time_offsets, required_time_offsets = "undef",
123  offsets;
124 
125  bool ok = cfl->GetValue("num-filters-in", &model_.num_filters_in) &&
126  cfl->GetValue("num-filters-out", &model_.num_filters_out) &&
127  cfl->GetValue("height-in", &model_.height_in) &&
128  cfl->GetValue("height-out", &model_.height_out);
129  if (!ok) {
130  KALDI_ERR << "Bad initializer: expected all the values "
131  "num-filters-in, num-filters-out, height-in, height-out, "
132  "to be defined: "
133  << cfl->WholeLine();
134  }
135  // some optional structural configs.
136  cfl->GetValue("required-time-offsets", &required_time_offsets);
137  cfl->GetValue("height-subsample-out", &model_.height_subsample_out);
138  cfl->GetValue("max-memory-mb", &max_memory_mb_);
140 
141  { // This block sets up model_.offsets.
142  model_.offsets.clear();
143  if (cfl->GetValue("offsets", &offsets)) {
144  // init from offsets, like "-1,-1;-1,0;-1,1;0,-1;...;1,1"
145  std::vector<std::string> splits;
146  SplitStringToVector(offsets, ";", false, &splits);
147  for (size_t i = 0; i < splits.size(); i++) {
148  std::vector<int32> int_pair;
149  if (!SplitStringToIntegers(splits[i], ",", false, &int_pair) ||
150  int_pair.size() != 2)
151  KALDI_ERR << "Bad config value offsets=" << offsets;
152  time_height_convolution::ConvolutionModel::Offset offset;
153  offset.time_offset = int_pair[0];
154  offset.height_offset = int_pair[1];
155  model_.offsets.push_back(offset);
156  }
157  std::sort(model_.offsets.begin(), model_.offsets.end());
158  if (!IsSortedAndUniq(model_.offsets) || model_.offsets.empty())
159  KALDI_ERR << "Error in offsets: probably repeated offset. "
160  "offsets=" << offsets;
161  } else if (cfl->GetValue("height-offsets", &height_offsets) &&
162  cfl->GetValue("time-offsets", &time_offsets)) {
163  std::vector<int32> height_offsets_vec,
164  time_offsets_vec;
165  if (!SplitStringToIntegers(height_offsets, ",", false,
166  &height_offsets_vec) ||
167  !SplitStringToIntegers(time_offsets, ",", false,
168  &time_offsets_vec)) {
169  KALDI_ERR << "Formatting problem in time-offsets or height-offsets: "
170  << cfl->WholeLine();
171  }
172  if (height_offsets_vec.empty() || !IsSortedAndUniq(height_offsets_vec) ||
173  time_offsets_vec.empty() || !IsSortedAndUniq(time_offsets_vec)) {
174  KALDI_ERR << "time-offsets and height-offsets must be nonempty, "
175  "sorted and unique.";
176  }
177  model_.offsets.clear();
178  for (size_t i = 0; i < time_offsets_vec.size(); i++) {
179  for (size_t j = 0; j < height_offsets_vec.size(); j++) {
180  time_height_convolution::ConvolutionModel::Offset offset;
181  offset.time_offset = time_offsets_vec[i];
182  offset.height_offset = height_offsets_vec[j];
183  model_.offsets.push_back(offset);
184  }
185  }
186  } else {
187  KALDI_ERR << "Expected either 'offsets', or both 'height-offsets' and "
188  "'time-offsets', to be defined: " << cfl->WholeLine();
189  }
190  }
191 
192  if (model_.offsets.empty())
193  KALDI_ERR << "Something went wrong setting offsets: " << cfl->WholeLine();
194 
195 
196  { // This block sets model_.required_time_offsets.
197  std::vector<int32> required_time_offsets_vec;
198  if (required_time_offsets == "undef") {
199  // it defaults to all the time offsets that were used.
200  std::set<int32> required_time_offsets;
201  for (size_t i = 0; i < model_.offsets.size(); i++)
202  required_time_offsets_vec.push_back(model_.offsets[i].time_offset);
203  SortAndUniq(&required_time_offsets_vec);
204  } else {
205  if (!SplitStringToIntegers(required_time_offsets, ",", false,
206  &required_time_offsets_vec) ||
207  required_time_offsets_vec.empty() ||
208  !IsSortedAndUniq(required_time_offsets_vec)) {
209  KALDI_ERR << "Formatting problem in required-time-offsets: "
210  << cfl->WholeLine();
211  }
212  }
213    model_.required_time_offsets.clear();
214    model_.required_time_offsets.insert(
215        required_time_offsets_vec.begin(),
216  required_time_offsets_vec.end());
217  }
218 
219  model_.ComputeDerived();
220  if (!model_.Check(false, true)) {
221  KALDI_ERR << "Parameters used to initialize TimeHeightConvolutionComponent "
222  << "do not make sense, line was: " << cfl->WholeLine();
223  }
224  if (!model_.Check(true, true)) {
225  KALDI_WARN << "There are input heights unused in "
226  "TimeHeightConvolutionComponent; consider increasing output "
227  "height or decreasing height of preceding layer."
228  << cfl->WholeLine();
229  }
230 
231  // 3. Parameter-initialization configs.
232  BaseFloat param_stddev = -1, bias_stddev = 0.0;
233  bool init_unit = false;
234  cfl->GetValue("param-stddev", &param_stddev);
235  cfl->GetValue("bias-stddev", &bias_stddev);
236  cfl->GetValue("init-unit", &init_unit);
237  if (param_stddev < 0.0) {
238  param_stddev = 1.0 / sqrt(model_.num_filters_in *
239  model_.offsets.size());
240  }
241  // initialize the parameters.
242  linear_params_.Resize(model_.ParamRows(), model_.ParamCols());
243  if (!init_unit) {
244    linear_params_.SetRandn();
245    linear_params_.Scale(param_stddev);
246  } else {
247  InitUnit();
248  }
249  bias_params_.Resize(model_.num_filters_out);
250  bias_params_.SetRandn();
251  bias_params_.Scale(bias_stddev);
252 
253 
254  // 4. Natural-gradient related configs.
255  use_natural_gradient_ = true;
256  int32 rank_out = -1, rank_in = -1;
257  BaseFloat alpha_out = 4.0, alpha_in = 4.0,
258  num_minibatches_history = 4.0;
259  cfl->GetValue("use-natural-gradient", &use_natural_gradient_);
260  cfl->GetValue("rank-in", &rank_in);
261  cfl->GetValue("rank-out", &rank_out);
262  cfl->GetValue("alpha-in", &alpha_in);
263  cfl->GetValue("alpha-out", &alpha_out);
264  cfl->GetValue("num-minibatches-history", &num_minibatches_history);
265 
266  int32 dim_in = linear_params_.NumCols() + 1,
267  dim_out = linear_params_.NumRows();
268  if (rank_in < 0)
269  rank_in = std::min<int32>(80, (dim_in + 1) / 2);
270  preconditioner_in_.SetRank(rank_in);
271  if (rank_out < 0)
272  rank_out = std::min<int32>(80, (dim_out + 1) / 2);
273  preconditioner_out_.SetRank(rank_out);
274  preconditioner_in_.SetNumMinibatchesHistory(num_minibatches_history);
275  preconditioner_out_.SetNumMinibatchesHistory(num_minibatches_history);
276 
277  preconditioner_in_.SetAlpha(alpha_in);
278  preconditioner_out_.SetAlpha(alpha_out);
279 
280  ComputeDerived();
281 }
void InitUnit ( )
private

Definition at line 89 of file nnet-convolutional-component.cc.

References rnnlm::i, KALDI_ASSERT, KALDI_ERR, TimeHeightConvolutionComponent::linear_params_, TimeHeightConvolutionComponent::model_, ConvolutionModel::num_filters_in, ConvolutionModel::num_filters_out, CuMatrixBase< Real >::NumRows(), and ConvolutionModel::offsets.

Referenced by TimeHeightConvolutionComponent::InitFromConfig().

89  {
90   if (model_.num_filters_in != model_.num_filters_out) {
91  KALDI_ERR << "You cannot specify init-unit if the num-filters-in "
92  << "and num-filters-out differ.";
93  }
94  size_t i;
95  int32 zero_offset = 0;
96  for (i = 0; i < model_.offsets.size(); i++) {
97  if (model_.offsets[i].time_offset == 0 &&
98  model_.offsets[i].height_offset == 0) {
99  zero_offset = i;
100  break;
101  }
102  }
103  if (i == model_.offsets.size()) // did not break.
104  KALDI_ERR << "You cannot specify init-unit if the model does "
105  << "not have the offset (0, 0).";
106 
107  CuSubMatrix<BaseFloat> zero_offset_block(
108      linear_params_, 0, linear_params_.NumRows(),
109      zero_offset * model_.num_filters_in, model_.num_filters_in);
110 
111  KALDI_ASSERT(zero_offset_block.NumRows() == zero_offset_block.NumCols());
112  zero_offset_block.AddToDiag(1.0); // set this block to the unit matrix.
113 }
int32 InputDim ( ) const
virtual

Returns input-dimension of this component.

Implements Component.

Definition at line 57 of file nnet-convolutional-component.cc.

References ConvolutionModel::InputDim(), and TimeHeightConvolutionComponent::model_.

57  {
58  return model_.InputDim();
59 }
bool IsComputable ( const MiscComputationInfo misc_info,
const Index output_index,
const IndexSet input_index_set,
std::vector< Index > *  used_inputs 
) const
virtual

This function only does something interesting for non-simple Components, and it exists to make it possible to manage optionally-required inputs.

It tells the user whether a given output index is computable from a given set of input indexes, and if so, says which input indexes will be used in the computation.

Implementations of this function are required to have the property that adding an element to "input_index_set" can only ever change IsComputable from false to true, never vice versa.

Parameters
[in]  misc_info         Some information specific to the computation, such as minimum and maximum times for certain components to do adaptation on; it's a place to put things that don't easily fit in the framework.
[in]  output_index      The index that is to be computed at the output of this Component.
[in]  input_index_set   The set of indexes that is available at the input of this Component.
[out] used_inputs       If this is non-NULL and the output is computable, this will be set to the list of input indexes that will actually be used in the computation.
Returns
Returns true iff this output is computable from the provided inputs.

The default implementation of this function is suitable for any SimpleComponent: it just returns true if output_index is in input_index_set, and if so sets used_inputs to a vector containing that one Index.

Reimplemented from Component.

Definition at line 519 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::all_time_offsets_, rnnlm::i, KALDI_ASSERT, kaldi::nnet3::kNoTime, Index::t, and TimeHeightConvolutionComponent::time_offset_required_.

523  {
524  KALDI_ASSERT(output_index.t != kNoTime);
525  size_t size = all_time_offsets_.size();
526  Index index(output_index);
527  if (used_inputs != NULL) {
528  used_inputs->clear();
529  used_inputs->reserve(size);
530  for (size_t i = 0; i < size; i++) {
531  index.t = output_index.t + all_time_offsets_[i];
532  if (input_index_set(index)) {
533  // This input index is available.
534  used_inputs->push_back(index);
535  } else {
536  // This input index is not available.
537  if (time_offset_required_[i]) {
538  // A required offset was not present -> this output index is not
539  // computable.
540  used_inputs->clear();
541  return false;
542  }
543  }
544  }
545  // All required time-offsets of the output were computable. -> return true.
546  return true;
547  } else {
548  for (size_t i = 0; i < size; i++) {
549  if (time_offset_required_[i]) {
550  index.t = output_index.t + all_time_offsets_[i];
551  if (!input_index_set(index))
552  return false;
553  }
554  }
555  return true;
556  }
557 }
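To make the branch without used_inputs above concrete, here is a standalone sketch using plain ints and a std::set in place of the real Index and IndexSet types (values hypothetical): with time-offsets=-1,0,1 but required-time-offsets=0, an output at t=0 is computable even though the t=-1 input is missing, because that offset is not required and will be zero-padded.

  #include <cstdio>
  #include <set>
  #include <vector>

  int main() {
    // Hypothetical setup: time-offsets=-1,0,1 but required-time-offsets=0, so only
    // the 0 offset must be present; the others may be zero-padded if missing.
    std::vector<int> all_time_offsets = {-1, 0, 1};
    std::vector<bool> time_offset_required = {false, true, false};
    std::set<int> available_input_t = {0, 1, 2, 3};  // e.g. inputs exist only for t >= 0
    int output_t = 0;
    bool computable = true;
    for (size_t i = 0; i < all_time_offsets.size(); i++) {
      bool required = time_offset_required[i];
      bool present = available_input_t.count(output_t + all_time_offsets[i]) > 0;
      if (required && !present)
        computable = false;  // a required time-offset is missing
    }
    std::printf("output t=%d computable: %s\n", output_t, computable ? "yes" : "no");
    return 0;
  }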
int32 NumParameters ( ) const
virtual

The following new virtual function returns the total dimension of the parameters in this class.

Reimplemented from UpdatableComponent.

Definition at line 619 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::bias_params_, CuVectorBase< Real >::Dim(), TimeHeightConvolutionComponent::linear_params_, CuMatrixBase< Real >::NumCols(), and CuMatrixBase< Real >::NumRows().

Referenced by TimeHeightConvolutionComponent::Info(), TimeHeightConvolutionComponent::UnVectorize(), and TimeHeightConvolutionComponent::Vectorize().

619  {
620    return linear_params_.NumRows() * linear_params_.NumCols() +
621        bias_params_.Dim();
622 }
int32 OutputDim ( ) const
virtual

Returns output-dimension of this component.

Implements Component.

Definition at line 61 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::model_, and ConvolutionModel::OutputDim().

61  {
62  return model_.OutputDim();
63 }
void PerturbParams ( BaseFloat  stddev)
virtual

This function is to be used in testing.

It adds unit noise times "stddev" to the parameters of the component.

Implements UpdatableComponent.

Definition at line 600 of file nnet-convolutional-component.cc.

References CuMatrixBase< Real >::AddMat(), CuVectorBase< Real >::AddVec(), TimeHeightConvolutionComponent::bias_params_, CuVectorBase< Real >::Dim(), kaldi::kUndefined, TimeHeightConvolutionComponent::linear_params_, CuMatrixBase< Real >::NumCols(), CuMatrixBase< Real >::NumRows(), CuVectorBase< Real >::SetRandn(), and CuMatrixBase< Real >::SetRandn().

600  {
601  CuMatrix<BaseFloat> temp_mat(linear_params_.NumRows(),
602                               linear_params_.NumCols(), kUndefined);
603  temp_mat.SetRandn();
604  linear_params_.AddMat(stddev, temp_mat);
605  CuVector<BaseFloat> temp_vec(bias_params_.Dim(), kUndefined);
606  temp_vec.SetRandn();
607  bias_params_.AddVec(stddev, temp_vec);
608 }
ComponentPrecomputedIndexes * PrecomputeIndexes ( const MiscComputationInfo misc_info,
const std::vector< Index > &  input_indexes,
const std::vector< Index > &  output_indexes,
bool  need_backprop 
) const
virtual

This function must return NULL for simple Components.

Returns a pointer to a class that may contain some precomputed component-specific and computation-specific indexes to be used in the Propagate and Backprop functions.

Parameters
[in] misc_info        This argument is supplied to handle things that the framework can't very easily supply: information like which time indexes are needed for AggregateComponent, which time-indexes are available at the input of a recurrent network, and so on. misc_info may not even ever be used here. We will add members to misc_info as needed.
[in] input_indexes    A vector of indexes that explains what time-indexes (and other indexes) each row of the in/in_value/in_deriv matrices given to Propagate and Backprop will mean.
[in] output_indexes   A vector of indexes that explains what time-indexes (and other indexes) each row of the out/out_value/out_deriv matrices given to Propagate and Backprop will mean.
[in] need_backprop    True if we might need to do backprop with this component, so that if any different indexes are needed for backprop then those should be computed too.
Returns
Returns a child-class of class ComponentPrecomputedIndexes, or NULL if this component does not need to precompute any indexes (e.g. if it is a simple component and does not care about indexes).

Reimplemented from Component.

Definition at line 560 of file nnet-convolutional-component.cc.

References kaldi::nnet3::time_height_convolution::CompileConvolutionComputation(), TimeHeightConvolutionComponent::PrecomputedIndexes::computation, KALDI_ERR, TimeHeightConvolutionComponent::max_memory_mb_, and TimeHeightConvolutionComponent::model_.

564  {
565  using namespace time_height_convolution;
566  ConvolutionComputationOptions opts;
567  opts.max_memory_mb = max_memory_mb_;
568  PrecomputedIndexes *ans = new PrecomputedIndexes();
569  std::vector<Index> input_indexes_modified,
570  output_indexes_modified;
571  CompileConvolutionComputation(
572      model_, input_indexes, output_indexes, opts,
573  &(ans->computation), &input_indexes_modified, &output_indexes_modified);
574  if (input_indexes_modified != input_indexes ||
575  output_indexes_modified != output_indexes) {
576  KALDI_ERR << "Problem precomputing indexes";
577  }
578  return ans;
579 }
void * Propagate ( const ComponentPrecomputedIndexes indexes,
const CuMatrixBase< BaseFloat > &  in,
CuMatrixBase< BaseFloat > *  out 
) const
virtual

Propagate function.

Parameters
[in]  indexes   A pointer to some information output by this class's PrecomputeIndexes function (will be NULL for simple components, i.e. those that don't do things like splicing).
[in]  in        The input to this component. Num-columns == InputDim().
[out] out       The output of this component. Num-columns == OutputDim(). Note: output of this component will be added to the initial value of "out" if Properties()&kPropagateAdds != 0; otherwise the output will be set and the initial value ignored. Each Component chooses whether it is more convenient implementation-wise to add or set, and the calling code has to deal with it.
Returns
Normally returns NULL, but may return a non-NULL value for components which have the flag kUsesMemo set. This value will be passed into the corresponding Backprop routine.

Implements Component.

Definition at line 283 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::bias_params_, TimeHeightConvolutionComponent::PrecomputedIndexes::computation, kaldi::nnet3::time_height_convolution::ConvolveForward(), CuMatrixBase< Real >::CopyRowsFromVec(), CuMatrixBase< Real >::Data(), ConvolutionModel::height_out, KALDI_ASSERT, TimeHeightConvolutionComponent::linear_params_, TimeHeightConvolutionComponent::model_, ConvolutionModel::num_filters_out, CuMatrixBase< Real >::NumCols(), CuMatrixBase< Real >::NumRows(), and CuMatrixBase< Real >::Stride().

286  {
287  const PrecomputedIndexes *indexes =
288  dynamic_cast<const PrecomputedIndexes*>(indexes_in);
289  KALDI_ASSERT(indexes != NULL);
290  { // this block handles the bias term.
291  KALDI_ASSERT(out->Stride() == out->NumCols() &&
292               out->NumCols() == model_.height_out * model_.num_filters_out);
293  CuSubMatrix<BaseFloat> out_reshaped(
294      out->Data(), out->NumRows() * model_.height_out,
295      model_.num_filters_out, model_.num_filters_out);
296  out_reshaped.CopyRowsFromVec(bias_params_);
297  }
298  ConvolveForward(indexes->computation, in, linear_params_, out);
299  return NULL;
300 }
void Read ( std::istream &  is,
bool  binary 
)
virtual

Read function (used after we know the type of the Component); accepts input that is missing the token that describes the component type, in case it has already been consumed.

Implements Component.

Definition at line 452 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::bias_params_, TimeHeightConvolutionComponent::Check(), TimeHeightConvolutionComponent::ComputeDerived(), kaldi::nnet3::ExpectToken(), KALDI_ASSERT, TimeHeightConvolutionComponent::linear_params_, TimeHeightConvolutionComponent::max_memory_mb_, TimeHeightConvolutionComponent::model_, TimeHeightConvolutionComponent::preconditioner_in_, TimeHeightConvolutionComponent::preconditioner_out_, ConvolutionModel::Read(), CuVector< Real >::Read(), CuMatrix< Real >::Read(), kaldi::ReadBasicType(), UpdatableComponent::ReadUpdatableCommon(), OnlineNaturalGradient::SetAlpha(), OnlineNaturalGradient::SetNumMinibatchesHistory(), OnlineNaturalGradient::SetRank(), and TimeHeightConvolutionComponent::use_natural_gradient_.

452  {
453  std::string token = ReadUpdatableCommon(is, binary);
454  // the next few lines are only for back compatibility.
455  if (token != "") {
456  KALDI_ASSERT(token == "<Model>");
457  } else {
458  ExpectToken(is, binary, "<Model>");
459  }
460  model_.Read(is, binary);
461  ExpectToken(is, binary, "<LinearParams>");
462  linear_params_.Read(is, binary);
463  ExpectToken(is, binary, "<BiasParams>");
464  bias_params_.Read(is, binary);
465  ExpectToken(is, binary, "<MaxMemoryMb>");
466  ReadBasicType(is, binary, &max_memory_mb_);
467  ExpectToken(is, binary, "<UseNaturalGradient>");
468  ReadBasicType(is, binary, &use_natural_gradient_);
469  int32 rank_in, rank_out;
470  BaseFloat alpha_in, alpha_out,
471  num_minibatches_history;
472  ExpectToken(is, binary, "<NumMinibatchesHistory>");
473  ReadBasicType(is, binary, &num_minibatches_history);
474  ExpectToken(is, binary, "<AlphaInOut>");
475  ReadBasicType(is, binary, &alpha_in);
476  ReadBasicType(is, binary, &alpha_out);
477  preconditioner_in_.SetAlpha(alpha_in);
478  preconditioner_out_.SetAlpha(alpha_out);
479  ExpectToken(is, binary, "<RankInOut>");
480  ReadBasicType(is, binary, &rank_in);
481  ReadBasicType(is, binary, &rank_out);
482  preconditioner_in_.SetRank(rank_in);
483  preconditioner_out_.SetRank(rank_out);
484  preconditioner_in_.SetNumMinibatchesHistory(num_minibatches_history);
485  preconditioner_out_.SetNumMinibatchesHistory(num_minibatches_history);
486  ExpectToken(is, binary, "</TimeHeightConvolutionComponent>");
487  ComputeDerived();
488  Check();
489 }
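From the tokens expected above, the text-form layout of a written component is roughly the following sketch (the leading component tag and the learning-rate-related fields are handled by ReadUpdatableCommon() / WriteUpdatableCommon(), so they are not shown here):

  <Model> ... <LinearParams> ... <BiasParams> ... <MaxMemoryMb> ... <UseNaturalGradient> ...
  <NumMinibatchesHistory> ... <AlphaInOut> ... ... <RankInOut> ... ... </TimeHeightConvolutionComponent>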
void ReorderIndexes ( std::vector< Index > *  input_indexes,
std::vector< Index > *  output_indexes 
) const
virtual

This function only does something interesting for non-simple Components.

It provides an opportunity for a Component to reorder or pad the indexes at its input and output. This might be useful, for instance, if a component requires a particular ordering of the indexes that doesn't correspond to their natural ordering. Components that might modify the indexes are required to return the kReordersIndexes flag in their Properties(). The ReorderIndexes() function is now allowed to insert blanks into the indexes. The 'blanks' must be of the form (n,kNoTime,x), where the marker kNoTime (a very negative number) is there where the 't' indexes normally live. The reason we don't just have, say, (-1,-1,-1), relates to the need to preserve a regular pattern over the 'n' indexes so that 'shortcut compilation' (c.f. ExpandComputation()) can work correctly.

Parameters
    [in,out]  input_indexes   Indexes at the input of the Component.
    [in,out]  output_indexes  Indexes at the output of the Component.

Reimplemented from Component.

Definition at line 408 of file nnet-convolutional-component.cc.

References kaldi::nnet3::time_height_convolution::CompileConvolutionComputation(), TimeHeightConvolutionComponent::max_memory_mb_, and TimeHeightConvolutionComponent::model_.

410  {
411  using namespace time_height_convolution;
412  ConvolutionComputationOptions opts;
413  opts.max_memory_mb = max_memory_mb_;
414  ConvolutionComputation computation_temp;
415  std::vector<Index> input_indexes_modified,
 416  output_indexes_modified;
 417  CompileConvolutionComputation(
 418  model_, *input_indexes, *output_indexes, opts,
419  &computation_temp, &input_indexes_modified, &output_indexes_modified);
420  input_indexes->swap(input_indexes_modified);
421  output_indexes->swap(output_indexes_modified);
422 }
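To make the blank-padding contract concrete, the sketch below (illustrative only; the component and index vectors are assumed to come from an existing compiled computation) calls ReorderIndexes() and then skips the padded entries, which are recognizable by t == kNoTime:

    #include <vector>
    #include "nnet3/nnet-common.h"                     // Index, kNoTime
    #include "nnet3/nnet-convolutional-component.h"

    // Sketch: count the non-padding input rows after reordering.
    int CountRealInputs(const kaldi::nnet3::TimeHeightConvolutionComponent &comp,
                        std::vector<kaldi::nnet3::Index> *input_indexes,
                        std::vector<kaldi::nnet3::Index> *output_indexes) {
      using namespace kaldi::nnet3;
      comp.ReorderIndexes(input_indexes, output_indexes);   // may pad and reorder both vectors
      int num_real = 0;
      for (const Index &index : *input_indexes)
        if (index.t != kNoTime)              // skip blanks of the form (n, kNoTime, x)
          ++num_real;
      return num_real;
    }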
void Scale ( BaseFloat  scale)
virtual

This virtual function, when called on:
  - an UpdatableComponent, scales the parameters by "scale";
  - a Nonlinear component (or another component that stores stats, like BatchNormComponent), scales the activation stats, not the parameters.
Otherwise it will normally do nothing.

Reimplemented from Component.

Definition at line 581 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::bias_params_, TimeHeightConvolutionComponent::linear_params_, CuVectorBase< Real >::Scale(), CuMatrixBase< Real >::Scale(), CuVectorBase< Real >::SetZero(), and CuMatrixBase< Real >::SetZero().

581  {
 582  if (scale == 0.0) {
 583  linear_params_.SetZero();
 584  bias_params_.SetZero();
 585  } else {
586  linear_params_.Scale(scale);
587  bias_params_.Scale(scale);
588  }
589 }
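As the listing shows, Scale(0.0) is special-cased to call SetZero() on the parameters rather than multiplying by zero. A typical caller is model averaging; the sketch below is illustrative only and assumes Component::Add(alpha, other) with its usual nnet3 meaning of adding alpha times the other component's parameters:

    #include "nnet3/nnet-component-itf.h"

    // Sketch: leave 'a' holding the element-wise mean of two compatible components.
    void AverageInto(kaldi::nnet3::UpdatableComponent *a,
                     const kaldi::nnet3::UpdatableComponent &b) {
      a->Scale(0.5);      // halve a's linear and bias parameters
      a->Add(0.5, b);     // add half of b's parameters
    }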
void ScaleLinearParams ( BaseFloat  alpha)
inline
virtual std::string Type ( ) const
inlinevirtual

Returns a string such as "SigmoidComponent", describing the type of the object.

Implements Component.

Definition at line 226 of file nnet-convolutional-component.h.

226 { return "TimeHeightConvolutionComponent"; }
void UnVectorize ( const VectorBase< BaseFloat > &  params)
virtual

Converts the parameters from vector form.

Reimplemented from UpdatableComponent.

Definition at line 633 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::bias_params_, CuVectorBase< Real >::CopyFromVec(), CuMatrixBase< Real >::CopyRowsFromVec(), VectorBase< Real >::Dim(), CuVectorBase< Real >::Dim(), KALDI_ASSERT, TimeHeightConvolutionComponent::linear_params_, CuMatrixBase< Real >::NumCols(), TimeHeightConvolutionComponent::NumParameters(), CuMatrixBase< Real >::NumRows(), and VectorBase< Real >::Range().

634  {
635  KALDI_ASSERT(params.Dim() == NumParameters());
636  int32 linear_size = linear_params_.NumRows() * linear_params_.NumCols(),
637  bias_size = bias_params_.Dim();
638  linear_params_.CopyRowsFromVec(params.Range(0, linear_size));
639  bias_params_.CopyFromVec(params.Range(linear_size, bias_size));
640 }
void UpdateNaturalGradient ( const PrecomputedIndexes &  indexes,
const CuMatrixBase< BaseFloat > &  in_value,
const CuMatrixBase< BaseFloat > &  out_deriv 
)
private

Definition at line 354 of file nnet-convolutional-component.cc.

References CuMatrixBase< Real >::AddMat(), CuVectorBase< Real >::AddVec(), TimeHeightConvolutionComponent::bias_params_, TimeHeightConvolutionComponent::PrecomputedIndexes::computation, kaldi::nnet3::time_height_convolution::ConvolveBackwardParams(), CuMatrixBase< Real >::CopyColFromVec(), CuMatrixBase< Real >::Data(), CuVectorBase< Real >::Dim(), ConvolutionModel::height_out, KALDI_ASSERT, kaldi::kTrans, UpdatableComponent::learning_rate_, TimeHeightConvolutionComponent::linear_params_, TimeHeightConvolutionComponent::model_, ConvolutionModel::num_filters_out, CuMatrixBase< Real >::NumCols(), CuMatrixBase< Real >::NumRows(), OnlineNaturalGradient::PreconditionDirections(), TimeHeightConvolutionComponent::preconditioner_in_, TimeHeightConvolutionComponent::preconditioner_out_, CuMatrixBase< Real >::Row(), CuMatrixBase< Real >::RowRange(), and CuMatrixBase< Real >::Stride().

Referenced by TimeHeightConvolutionComponent::Backprop().

357  {
358 
359  CuVector<BaseFloat> bias_deriv(bias_params_.Dim());
360 
361  { // this block computes 'bias_deriv', the derivative w.r.t. the bias.
362  KALDI_ASSERT(out_deriv.Stride() == out_deriv.NumCols() &&
 363  out_deriv.NumCols() ==
 364  model_.height_out * model_.num_filters_out);
 365  CuSubMatrix<BaseFloat> out_deriv_reshaped(
 366  out_deriv.Data(), out_deriv.NumRows() * model_.height_out,
 367  model_.num_filters_out, model_.num_filters_out);
 368  bias_deriv.AddRowSumMat(1.0, out_deriv_reshaped);
369  }
370 
371  CuMatrix<BaseFloat> params_deriv(linear_params_.NumRows(),
372  linear_params_.NumCols() + 1);
373  params_deriv.CopyColFromVec(bias_deriv, linear_params_.NumCols());
374 
375 
376  CuSubMatrix<BaseFloat> linear_params_deriv(
377  params_deriv, 0, linear_params_.NumRows(),
378  0, linear_params_.NumCols());
379 
380  ConvolveBackwardParams(indexes.computation, in_value, out_deriv,
381  1.0, &linear_params_deriv);
382 
383  // the precondition-directions code outputs a scalar that
384  // must be multiplied by its output (this saves one
385  // CUDA operation internally).
386  // We don't bother applying this scale before doing the other
 387  // dimension of natural gradient, because although it's not
388  // invariant to scalar multiplication of the input if the
389  // scalars are different across iterations, the scalars
390  // will be pretty similar on different iterations
391  BaseFloat scale1, scale2;
392  preconditioner_in_.PreconditionDirections(&params_deriv, &scale1);
393 
394 
395  CuMatrix<BaseFloat> params_deriv_transpose(params_deriv, kTrans);
396  preconditioner_out_.PreconditionDirections(&params_deriv_transpose, &scale2);
397 
 398  linear_params_.AddMat(
 399  learning_rate_ * scale1 * scale2,
400  params_deriv_transpose.RowRange(0, linear_params_.NumCols()),
401  kTrans);
402 
403  bias_params_.AddVec(learning_rate_ * scale1 * scale2,
404  params_deriv_transpose.Row(linear_params_.NumCols()));
405 }
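The two preconditioners used above can also be exercised on their own; the setters and PreconditionDirections() are the same calls referenced in this listing. The sketch below is illustrative (the rank, alpha and minibatch-history values are arbitrary, not the component's defaults):

    #include "cudamatrix/cu-matrix.h"
    #include "nnet3/natural-gradient-online.h"

    // Sketch: precondition a gradient matrix in place and apply the returned scale.
    void PreconditionExample(kaldi::CuMatrixBase<kaldi::BaseFloat> *grad) {
      using namespace kaldi;
      using namespace kaldi::nnet3;
      OnlineNaturalGradient preconditioner;
      preconditioner.SetRank(20);                    // illustrative values
      preconditioner.SetAlpha(4.0);
      preconditioner.SetNumMinibatchesHistory(4.0);
      BaseFloat scale;
      preconditioner.PreconditionDirections(grad, &scale);
      // The caller applies 'scale' itself; UpdateNaturalGradient() folds
      // scale1 * scale2 into the learning rate instead of scaling here.
      grad->Scale(scale);
    }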
void UpdateSimple ( const PrecomputedIndexes &  indexes,
const CuMatrixBase< BaseFloat > &  in_value,
const CuMatrixBase< BaseFloat > &  out_deriv 
)
private

Definition at line 334 of file nnet-convolutional-component.cc.

References CuVectorBase< Real >::AddRowSumMat(), TimeHeightConvolutionComponent::bias_params_, TimeHeightConvolutionComponent::PrecomputedIndexes::computation, kaldi::nnet3::time_height_convolution::ConvolveBackwardParams(), CuMatrixBase< Real >::Data(), ConvolutionModel::height_out, KALDI_ASSERT, UpdatableComponent::learning_rate_, TimeHeightConvolutionComponent::linear_params_, TimeHeightConvolutionComponent::model_, ConvolutionModel::num_filters_out, CuMatrixBase< Real >::NumCols(), CuMatrixBase< Real >::NumRows(), and CuMatrixBase< Real >::Stride().

Referenced by TimeHeightConvolutionComponent::Backprop().

337  {
338 
339  { // this block handles the bias term.
340  KALDI_ASSERT(out_deriv.Stride() == out_deriv.NumCols() &&
 341  out_deriv.NumCols() ==
 342  model_.height_out * model_.num_filters_out);
 343  CuSubMatrix<BaseFloat> out_deriv_reshaped(
 344  out_deriv.Data(), out_deriv.NumRows() * model_.height_out,
 345  model_.num_filters_out, model_.num_filters_out);
 346  bias_params_.AddRowSumMat(learning_rate_, out_deriv_reshaped);
347  }
348 
 349  ConvolveBackwardParams(indexes.computation, in_value, out_deriv,
 350  learning_rate_, &linear_params_);
 351 }
void Vectorize ( VectorBase< BaseFloat > *  params) const
virtual

Turns the parameters into vector form.

We put the vector form on the CPU, because in the kinds of situations where we do this, we'll tend to use too much memory for the GPU.

Reimplemented from UpdatableComponent.

Definition at line 624 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::bias_params_, VectorBase< Real >::Dim(), CuVectorBase< Real >::Dim(), KALDI_ASSERT, TimeHeightConvolutionComponent::linear_params_, CuMatrixBase< Real >::NumCols(), TimeHeightConvolutionComponent::NumParameters(), CuMatrixBase< Real >::NumRows(), and VectorBase< Real >::Range().

625  {
626  KALDI_ASSERT(params->Dim() == NumParameters());
627  int32 linear_size = linear_params_.NumRows() * linear_params_.NumCols(),
628  bias_size = bias_params_.Dim();
629  params->Range(0, linear_size).CopyRowsFromMat(linear_params_);
630  params->Range(linear_size, bias_size).CopyFromVec(bias_params_);
631 }
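Vectorize() and UnVectorize() are inverses over the same NumParameters()-dimensional flat layout (the linear parameters row by row, then the bias), so parameters can be pulled to the CPU, edited, and pushed back. A minimal sketch, assuming 'comp' is an already-configured component:

    #include "matrix/kaldi-vector.h"
    #include "nnet3/nnet-convolutional-component.h"

    // Sketch: perturb every parameter of 'comp' by a constant offset.
    void NudgeParameters(kaldi::nnet3::TimeHeightConvolutionComponent *comp) {
      using namespace kaldi;
      Vector<BaseFloat> params(comp->NumParameters());
      comp->Vectorize(&params);     // flat CPU copy: linear params, then bias
      params.Add(0.01);             // edit the copy
      comp->UnVectorize(params);    // copy the edited values back
    }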
void Write ( std::ostream &  os,
bool  binary 
) const
virtual

Write component to stream.

Implements Component.

Definition at line 424 of file nnet-convolutional-component.cc.

References TimeHeightConvolutionComponent::bias_params_, OnlineNaturalGradient::GetAlpha(), OnlineNaturalGradient::GetNumMinibatchesHistory(), OnlineNaturalGradient::GetRank(), TimeHeightConvolutionComponent::linear_params_, TimeHeightConvolutionComponent::max_memory_mb_, TimeHeightConvolutionComponent::model_, TimeHeightConvolutionComponent::preconditioner_in_, TimeHeightConvolutionComponent::preconditioner_out_, TimeHeightConvolutionComponent::use_natural_gradient_, ConvolutionModel::Write(), CuVector< Real >::Write(), CuMatrixBase< Real >::Write(), kaldi::WriteBasicType(), kaldi::WriteToken(), and UpdatableComponent::WriteUpdatableCommon().

424  {
425  WriteUpdatableCommon(os, binary); // Write opening tag and learning rate.
426  WriteToken(os, binary, "<Model>");
427  model_.Write(os, binary);
428  WriteToken(os, binary, "<LinearParams>");
429  linear_params_.Write(os, binary);
430  WriteToken(os, binary, "<BiasParams>");
431  bias_params_.Write(os, binary);
432  WriteToken(os, binary, "<MaxMemoryMb>");
433  WriteBasicType(os, binary, max_memory_mb_);
 434  WriteToken(os, binary, "<UseNaturalGradient>");
 435  WriteBasicType(os, binary, use_natural_gradient_);
 436  int32 rank_in = preconditioner_in_.GetRank(),
 437  rank_out = preconditioner_out_.GetRank();
 438  BaseFloat alpha_in = preconditioner_in_.GetAlpha(),
 439  alpha_out = preconditioner_out_.GetAlpha(),
440  num_minibatches_history = preconditioner_in_.GetNumMinibatchesHistory();
441  WriteToken(os, binary, "<NumMinibatchesHistory>");
442  WriteBasicType(os, binary, num_minibatches_history);
443  WriteToken(os, binary, "<AlphaInOut>");
444  WriteBasicType(os, binary, alpha_in);
445  WriteBasicType(os, binary, alpha_out);
446  WriteToken(os, binary, "<RankInOut>");
447  WriteBasicType(os, binary, rank_in);
448  WriteBasicType(os, binary, rank_out);
449  WriteToken(os, binary, "</TimeHeightConvolutionComponent>");
450 }

Member Data Documentation

std::vector<bool> time_offset_required_
private

The documentation for this class was generated from the following files:
nnet-convolutional-component.h
nnet-convolutional-component.cc