UpdatableComponent Class Reference [abstract]

Class UpdatableComponent is a Component which has trainable parameters; it extends the interface of Component. More...

#include <nnet-component-itf.h>

Inheritance diagram for UpdatableComponent:
Collaboration diagram for UpdatableComponent:

Public Member Functions

 UpdatableComponent (const UpdatableComponent &other)
 
 UpdatableComponent ()
 
virtual ~UpdatableComponent ()
 
virtual BaseFloat DotProduct (const UpdatableComponent &other) const =0
 Computes dot-product between parameters of two instances of a Component. More...
 
virtual void PerturbParams (BaseFloat stddev)=0
 This function is to be used in testing. More...
 
virtual void SetUnderlyingLearningRate (BaseFloat lrate)
 Sets the learning rate of gradient descent; the value gets multiplied by learning_rate_factor_. More...
 
virtual void SetActualLearningRate (BaseFloat lrate)
 Sets the learning rate directly, bypassing learning_rate_factor_. More...
 
virtual void SetAsGradient ()
 Sets is_gradient_ to true and sets learning_rate_ to 1, ignoring learning_rate_factor_. More...
 
virtual BaseFloat LearningRateFactor ()
 
virtual void SetLearningRateFactor (BaseFloat lrate_factor)
 
void SetUpdatableConfigs (const UpdatableComponent &other)
 
virtual void FreezeNaturalGradient (bool freeze)
 Freezes/unfreezes NaturalGradient updates, if applicable (to be overridden by components that use Natural Gradient). More...
 
BaseFloat LearningRate () const
 Gets the learning rate to be used in gradient descent. More...
 
BaseFloat MaxChange () const
 Returns the per-component max-change value, which is interpreted as the maximum change (in l2 norm) in parameters that is allowed per minibatch for this component. More...
 
void SetMaxChange (BaseFloat max_change)
 
BaseFloat L2Regularization () const
 Returns the l2 regularization constant, which may be set in any updatable component (usually from the config file). More...
 
void SetL2Regularization (BaseFloat a)
 
virtual std::string Info () const
 Returns some text-form information about this component, for diagnostics. More...
 
virtual int32 NumParameters () const
 Returns the total dimension of the parameters in this class. More...
 
virtual void Vectorize (VectorBase< BaseFloat > *params) const
 Turns the parameters into vector form. More...
 
virtual void UnVectorize (const VectorBase< BaseFloat > &params)
 Converts the parameters from vector form. More...
 
- Public Member Functions inherited from Component
virtual void * Propagate (const ComponentPrecomputedIndexes *indexes, const CuMatrixBase< BaseFloat > &in, CuMatrixBase< BaseFloat > *out) const =0
 Propagate function. More...
 
virtual void Backprop (const std::string &debug_info, const ComponentPrecomputedIndexes *indexes, const CuMatrixBase< BaseFloat > &in_value, const CuMatrixBase< BaseFloat > &out_value, const CuMatrixBase< BaseFloat > &out_deriv, void *memo, Component *to_update, CuMatrixBase< BaseFloat > *in_deriv) const =0
 Backprop function; depending on which of the arguments 'to_update' and 'in_deriv' are non-NULL, this can compute input-data derivatives and/or perform model update. More...
 
virtual void StoreStats (const CuMatrixBase< BaseFloat > &in_value, const CuMatrixBase< BaseFloat > &out_value, void *memo)
 This function may store stats on average activation values, and for some component types, the average value of the derivative of the nonlinearity. More...
 
virtual void ZeroStats ()
 Components that provide an implementation of StoreStats should also provide an implementation of ZeroStats(), to set those stats to zero. More...
 
virtual void GetInputIndexes (const MiscComputationInfo &misc_info, const Index &output_index, std::vector< Index > *desired_indexes) const
 This function only does something interesting for non-simple Components. More...
 
virtual bool IsComputable (const MiscComputationInfo &misc_info, const Index &output_index, const IndexSet &input_index_set, std::vector< Index > *used_inputs) const
 This function only does something interesting for non-simple Components, and it exists to make it possible to manage optionally-required inputs. More...
 
virtual void ReorderIndexes (std::vector< Index > *input_indexes, std::vector< Index > *output_indexes) const
 This function only does something interesting for non-simple Components. More...
 
virtual ComponentPrecomputedIndexes * PrecomputeIndexes (const MiscComputationInfo &misc_info, const std::vector< Index > &input_indexes, const std::vector< Index > &output_indexes, bool need_backprop) const
 This function must return NULL for simple Components. More...
 
virtual std::string Type () const =0
 Returns a string such as "SigmoidComponent", describing the type of the object. More...
 
virtual void InitFromConfig (ConfigLine *cfl)=0
 Initialize, from a ConfigLine object. More...
 
virtual int32 InputDim () const =0
 Returns input-dimension of this component. More...
 
virtual int32 OutputDim () const =0
 Returns output-dimension of this component. More...
 
virtual int32 Properties () const =0
 Return bitmask of the component's properties. More...
 
virtual Component * Copy () const =0
 Copies component (deep copy). More...
 
virtual void Read (std::istream &is, bool binary)=0
 Read function (used after we know the type of the Component); accepts input that is missing the token that describes the component type, in case it has already been consumed. More...
 
virtual void Write (std::ostream &os, bool binary) const =0
 Write component to stream. More...
 
virtual void Scale (BaseFloat scale)
 When called on an UpdatableComponent, this virtual function scales the parameters by "scale". More...
 
virtual void Add (BaseFloat alpha, const Component &other)
 When called on an UpdatableComponent, this virtual function adds the parameters of another updatable component, times some constant, to the current parameters. More...
 
virtual void DeleteMemo (void *memo) const
 This virtual function only needs to be overridden by Components that return a non-NULL memo from their Propagate() function. More...
 
virtual void ConsolidateMemory ()
 This virtual function relates to memory management, and avoiding fragmentation. More...
 
 Component ()
 
virtual ~Component ()
 

Protected Member Functions

void InitLearningRatesFromConfig (ConfigLine *cfl)
 
std::string ReadUpdatableCommon (std::istream &is, bool binary)
 
void WriteUpdatableCommon (std::ostream &os, bool binary) const
 

Protected Attributes

BaseFloat learning_rate_
 learning rate (typically 0.0..0.01) More...
 
BaseFloat learning_rate_factor_
 learning rate factor (normally 1.0, but can be set to another value so that when you call SetUnderlyingLearningRate(), that value will be scaled by this factor). More...
 
BaseFloat l2_regularize_
 L2 regularization constant. More...
 
bool is_gradient_
 True if this component is to be treated as a gradient rather than as parameters. More...
 
BaseFloat max_change_
 configuration value for imposing max-change More...
 

Private Member Functions

const UpdatableComponent & operator= (const UpdatableComponent &other)
 

Additional Inherited Members

- Static Public Member Functions inherited from Component
static Component * ReadNew (std::istream &is, bool binary)
 Read component from stream (works out its type). Dies on error. More...
 
static Component * NewComponentOfType (const std::string &type)
 Returns a new Component of the given type. More...
 

Detailed Description

Class UpdatableComponent is a Component which has trainable parameters; it extends the interface of Component.

This is a base-class for Components with parameters. See comment by declaration of kUpdatableComponent. The functions in this interface must only be called if the component returns the kUpdatable flag.

Child classes support the following config-line parameters in addition to more specific ones:

learning-rate: e.g. learning-rate=1.0e-05. default=0.001. It is not normally necessary or desirable to set this in the config line, as it typically gets set in the training scripts.

learning-rate-factor: e.g. learning-rate-factor=0.5. Can be used to conveniently control per-layer learning rates (it is multiplied by the learning rate given to the --learning-rate option of nnet3-copy, or by any 'set-learning-rate' directives given to the --edits-config option of nnet3-copy). default=1.0.

max-change: e.g. max-change=0.75. Maximum allowed parameter change for the parameters of this component, in Euclidean norm, per update step. If zero, no limit is applied at this level (the global --max-param-change option still applies). default=0.0.
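
For illustration only, here is a rough sketch (not taken from the Kaldi sources) of a config line carrying these options and of reading them back with ConfigLine, which is essentially what InitLearningRatesFromConfig() does internally. The component name, type and dimensions are invented, and ConfigLine::ParseLine() is assumed to be available from util/text-utils.h; ConfigLine::GetValue() is the same function used in the code shown further below.

// Hypothetical sketch only; values and the component name/type/dims are invented.
#include "base/kaldi-common.h"
#include "util/text-utils.h"

void ExampleParseUpdatableOptions() {
  using namespace kaldi;
  std::string line =
      "component name=affine1 type=NaturalGradientAffineComponent "
      "input-dim=256 output-dim=256 learning-rate-factor=0.5 "
      "max-change=0.75 l2-regularize=1.0e-03";
  ConfigLine cfl;
  if (!cfl.ParseLine(line))
    KALDI_ERR << "Could not parse config line: " << line;
  // Defaults chosen to mirror InitLearningRatesFromConfig().
  BaseFloat learning_rate_factor = 1.0, max_change = 0.0, l2_regularize = 0.0;
  cfl.GetValue("learning-rate-factor", &learning_rate_factor);  // -> 0.5
  cfl.GetValue("max-change", &max_change);                      // -> 0.75
  cfl.GetValue("l2-regularize", &l2_regularize);                // -> 0.001
}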

Definition at line 455 of file nnet-component-itf.h.

Constructor & Destructor Documentation

◆ UpdatableComponent() [1/2]

UpdatableComponent ( const UpdatableComponent &  other )

Definition at line 229 of file nnet-component-itf.cc.

229  :
230  learning_rate_(other.learning_rate_),
231  learning_rate_factor_(other.learning_rate_factor_),
232  l2_regularize_(other.l2_regularize_),
233  is_gradient_(other.is_gradient_),
234  max_change_(other.max_change_) { }

◆ UpdatableComponent() [2/2]

UpdatableComponent ( )
inline

Definition at line 461 of file nnet-component-itf.h.

Referenced by LstmNonlinearityComponent::ConsolidateMemory().

461  : learning_rate_(0.001), learning_rate_factor_(1.0),
462  l2_regularize_(0.0), is_gradient_(false),
463  max_change_(0.0) { }

◆ ~UpdatableComponent()

virtual ~UpdatableComponent ( )
inline virtual

Definition at line 465 of file nnet-component-itf.h.

References kaldi::nnet3::DotProduct(), and kaldi::nnet3::PerturbParams().

465 { }

Member Function Documentation

◆ DotProduct()

◆ FreezeNaturalGradient()

virtual void FreezeNaturalGradient ( bool  freeze)
inline virtual

◆ Info()

std::string Info ( ) const
virtual

Returns some text-form information about this component, for diagnostics.

Starts with the type of the component. E.g. "SigmoidComponent dim=900", although most components will have much more info.

Reimplemented from Component.

Reimplemented in CompositeComponent, ScaleAndOffsetComponent, NaturalGradientPerElementScaleComponent, ConstantFunctionComponent, PerElementOffsetComponent, PerElementScaleComponent, LinearComponent, NaturalGradientAffineComponent, ConstantComponent, RepeatedAffineComponent, BlockAffineComponent, TdnnComponent, AffineComponent, LstmNonlinearityComponent, TimeHeightConvolutionComponent, and ConvolutionComponent.

Definition at line 333 of file nnet-component-itf.cc.

References Component::InputDim(), UpdatableComponent::is_gradient_, UpdatableComponent::l2_regularize_, UpdatableComponent::learning_rate_factor_, UpdatableComponent::LearningRate(), UpdatableComponent::max_change_, Component::OutputDim(), and Component::Type().

Referenced by LstmNonlinearityComponent::ConsolidateMemory(), ConvolutionComponent::Info(), TimeHeightConvolutionComponent::Info(), LstmNonlinearityComponent::Info(), AffineComponent::Info(), TdnnComponent::Info(), BlockAffineComponent::Info(), RepeatedAffineComponent::Info(), ConstantComponent::Info(), LinearComponent::Info(), PerElementScaleComponent::Info(), PerElementOffsetComponent::Info(), ConstantFunctionComponent::Info(), ScaleAndOffsetComponent::Info(), kaldi::nnet3::TestNnetComponentUpdatable(), and kaldi::nnet3::TestNnetComponentVectorizeUnVectorize().

333  {
334  std::stringstream stream;
335  stream << Type() << ", input-dim=" << InputDim()
336  << ", output-dim=" << OutputDim() << ", learning-rate="
337  << LearningRate();
338  if (is_gradient_)
339  stream << ", is-gradient=true";
340  if (l2_regularize_ != 0.0)
341  stream << ", l2-regularize=" << l2_regularize_;
342  if (learning_rate_factor_ != 1.0)
343  stream << ", learning-rate-factor=" << learning_rate_factor_;
344  if (max_change_ > 0.0)
345  stream << ", max-change=" << max_change_;
346  return stream.str();
347 }
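
For a hypothetical affine layer this might produce output of roughly the following form (the values are invented, and concrete subclasses typically append further detail such as parameter statistics):

AffineComponent, input-dim=512, output-dim=512, learning-rate=0.001, max-change=0.75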

◆ InitLearningRatesFromConfig()

void InitLearningRatesFromConfig ( ConfigLine *  cfl)
protected

Definition at line 248 of file nnet-component-itf.cc.

References ConfigLine::GetValue(), KALDI_ERR, UpdatableComponent::l2_regularize_, UpdatableComponent::learning_rate_, UpdatableComponent::learning_rate_factor_, UpdatableComponent::max_change_, and ConfigLine::WholeLine().

Referenced by LstmNonlinearityComponent::ConsolidateMemory(), ConvolutionComponent::InitFromConfig(), TimeHeightConvolutionComponent::InitFromConfig(), LstmNonlinearityComponent::InitFromConfig(), AffineComponent::InitFromConfig(), TdnnComponent::InitFromConfig(), BlockAffineComponent::InitFromConfig(), RepeatedAffineComponent::InitFromConfig(), ConstantComponent::InitFromConfig(), NaturalGradientAffineComponent::InitFromConfig(), LinearComponent::InitFromConfig(), PerElementScaleComponent::InitFromConfig(), PerElementOffsetComponent::InitFromConfig(), ConstantFunctionComponent::InitFromConfig(), and ScaleAndOffsetComponent::InitFromConfig().

248  {
249  learning_rate_ = 0.001;
250  cfl->GetValue("learning-rate", &learning_rate_);
251  learning_rate_factor_ = 1.0;
252  cfl->GetValue("learning-rate-factor", &learning_rate_factor_);
253  max_change_ = 0.0;
254  cfl->GetValue("max-change", &max_change_);
255  l2_regularize_ = 0.0;
256  cfl->GetValue("l2-regularize", &l2_regularize_);
257  if (learning_rate_ < 0.0 || learning_rate_factor_ < 0.0 ||
258  max_change_ < 0.0 || l2_regularize_ < 0.0)
259  KALDI_ERR << "Bad initializer " << cfl->WholeLine();
260 }

◆ L2Regularization()

BaseFloat L2Regularization ( ) const
inline

Returns the l2 regularization constant, which may be set in any updatable component (usually from the config file).

This value is not interrogated in the component-level code. Instead it is read by the function ApplyL2Regularization(), declared in nnet-utils.h, which is used as part of the training workflow.

Definition at line 522 of file nnet-component-itf.h.

Referenced by kaldi::nnet3::ApplyL2Regularization().

522 { return l2_regularize_; }
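
As a purely conceptual sketch of how such a constant could be consumed (this is not the ApplyL2Regularization() implementation, whose bookkeeping and scaling differ), weight decay can be expressed through the Add() interface documented above; the function name and the sign/scale convention below are invented for the example:

#include "nnet3/nnet-component-itf.h"

// Hypothetical weight-decay sketch, for illustration only: model_comp holds
// the current parameters, grad_comp accumulates the parameter update.
void ToyApplyL2(const kaldi::nnet3::UpdatableComponent &model_comp,
                kaldi::BaseFloat scale,
                kaldi::nnet3::UpdatableComponent *grad_comp) {
  kaldi::BaseFloat l2 = model_comp.L2Regularization();
  if (l2 != 0.0) {
    // Accumulate a term proportional to the model parameters into the update,
    // which is what an l2 (weight-decay) penalty amounts to; the sign and
    // overall scaling here are chosen only for illustration.
    grad_comp->Add(-scale * l2, model_comp);
  }
}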

◆ LearningRate()

BaseFloat LearningRate ( ) const
inline

Gets the learning rate to be used in gradient descent.

Definition at line 505 of file nnet-component-itf.h.

Referenced by kaldi::nnet3::ApplyL2Regularization(), Compiler::ComputeDerivNeeded(), ModelCollapser::GetDiagonallyPreModifiedComponentIndex(), UpdatableComponent::Info(), and CompositeComponent::SetUnderlyingLearningRate().

505 { return learning_rate_; }

◆ LearningRateFactor()

virtual BaseFloat LearningRateFactor ( )
inline virtual

Definition at line 489 of file nnet-component-itf.h.

489 { return learning_rate_factor_; }

◆ MaxChange()

BaseFloat MaxChange ( ) const
inline

Returns the per-component max-change value, which is interpreted as the maximum change (in l2 norm) in parameters that is allowed per minibatch for this component.

The components themselves do not enforce the per-component max-change; it's enforced in class NnetTrainer by querying the max-changes for each component. See NnetTrainer::UpdateParamsWithMaxChange() in nnet-utils.h.

Definition at line 513 of file nnet-component-itf.h.

Referenced by kaldi::nnet3::UpdateNnetWithMaxChange().

513 { return max_change_; }
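
As a rough illustration of that rule (the actual enforcement lives in kaldi::nnet3::UpdateNnetWithMaxChange(), which also accounts for the global --max-param-change limit), the per-component behaviour amounts to scaling a proposed parameter delta whose l2 norm exceeds the limit. The helper below is hypothetical:

#include "base/kaldi-common.h"

// Hypothetical helper, for illustration only: the factor by which a proposed
// parameter change would be scaled so that its l2 norm stays within
// max_change (a max_change of 0 means "no per-component limit").
kaldi::BaseFloat MaxChangeScale(kaldi::BaseFloat delta_l2_norm,
                                kaldi::BaseFloat max_change) {
  if (max_change > 0.0 && delta_l2_norm > max_change)
    return max_change / delta_l2_norm;
  return 1.0;
}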

◆ NumParameters()

◆ operator=()

const UpdatableComponent & operator= ( const UpdatableComponent &  other)
private

◆ PerturbParams()

◆ ReadUpdatableCommon()

std::string ReadUpdatableCommon ( std::istream &  is,
bool  binary 
)
protected

Definition at line 263 of file nnet-component-itf.cc.

References UpdatableComponent::is_gradient_, UpdatableComponent::l2_regularize_, UpdatableComponent::learning_rate_, UpdatableComponent::learning_rate_factor_, UpdatableComponent::max_change_, kaldi::ReadBasicType(), kaldi::ReadToken(), and Component::Type().

Referenced by LstmNonlinearityComponent::ConsolidateMemory(), ConvolutionComponent::Read(), TimeHeightConvolutionComponent::Read(), AffineComponent::Read(), TdnnComponent::Read(), BlockAffineComponent::Read(), RepeatedAffineComponent::Read(), NaturalGradientAffineComponent::Read(), LinearComponent::Read(), PerElementScaleComponent::Read(), PerElementOffsetComponent::Read(), ScaleAndOffsetComponent::Read(), and CompositeComponent::Read().

264  {
265  std::ostringstream opening_tag;
266  opening_tag << '<' << this->Type() << '>';
267  std::string token;
268  ReadToken(is, binary, &token);
269  if (token == opening_tag.str()) {
270  // if the first token is the opening tag, then
271  // ignore it and get the next tag.
272  ReadToken(is, binary, &token);
273  }
274  if (token == "<LearningRateFactor>") {
275  ReadBasicType(is, binary, &learning_rate_factor_);
276  ReadToken(is, binary, &token);
277  } else {
278  learning_rate_factor_ = 1.0;
279  }
280  if (token == "<IsGradient>") {
281  ReadBasicType(is, binary, &is_gradient_);
282  ReadToken(is, binary, &token);
283  } else {
284  is_gradient_ = false;
285  }
286  if (token == "<MaxChange>") {
287  ReadBasicType(is, binary, &max_change_);
288  ReadToken(is, binary, &token);
289  } else {
290  max_change_ = 0.0;
291  }
292  if (token == "<L2Regularize>") {
293  ReadBasicType(is, binary, &l2_regularize_);
294  ReadToken(is, binary, &token);
295  } else {
296  l2_regularize_ = 0.0;
297  }
298  if (token == "<LearningRate>") {
299  ReadBasicType(is, binary, &learning_rate_);
300  return "";
301  } else {
302  return token;
303  }
304 }

◆ SetActualLearningRate()

virtual void SetActualLearningRate ( BaseFloat  lrate)
inline virtual

Sets the learning rate directly, bypassing learning_rate_factor_.

Reimplemented in CompositeComponent.

Definition at line 483 of file nnet-component-itf.h.

Referenced by CompositeComponent::SetActualLearningRate().

483 { learning_rate_ = lrate; }

◆ SetAsGradient()

virtual void SetAsGradient ( )
inline virtual

Sets is_gradient_ to true and sets learning_rate_ to 1, ignoring learning_rate_factor_.

Reimplemented in CompositeComponent.

Definition at line 487 of file nnet-component-itf.h.

Referenced by CompositeComponent::SetAsGradient(), kaldi::nnet3::SetNnetAsGradient(), and kaldi::nnet3::TestSimpleComponentModelDerivative().

487 { learning_rate_ = 1.0; is_gradient_ = true; }

◆ SetL2Regularization()

void SetL2Regularization ( BaseFloat  a)
inline

Definition at line 524 of file nnet-component-itf.h.

524 { l2_regularize_ = a; }

◆ SetLearningRateFactor()

virtual void SetLearningRateFactor ( BaseFloat  lrate_factor)
inline virtual

Definition at line 492 of file nnet-component-itf.h.

Referenced by kaldi::nnet3::ReadEditConfig().

492  {
493  learning_rate_factor_ = lrate_factor;
494  }

◆ SetMaxChange()

void SetMaxChange ( BaseFloat  max_change)
inline

Definition at line 515 of file nnet-component-itf.h.

515 { max_change_ = max_change; }

◆ SetUnderlyingLearningRate()

virtual void SetUnderlyingLearningRate ( BaseFloat  lrate)
inline virtual

Sets the learning rate of gradient descent; the value gets multiplied by learning_rate_factor_.

Reimplemented in CompositeComponent.

Definition at line 478 of file nnet-component-itf.h.

Referenced by AffineComponent::AffineComponent(), ConvolutionComponent::ConvolutionComponent(), kaldi::nnet3::ReadEditConfig(), kaldi::nnet3::SetLearningRate(), and CompositeComponent::SetUnderlyingLearningRate().

478  {
479    learning_rate_ = lrate * learning_rate_factor_;
480  }
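
A minimal usage sketch of the two learning-rate setters follows; it assumes uc points at some concrete updatable component, and the numeric values are invented:

#include "nnet3/nnet-component-itf.h"

// Illustration only: SetUnderlyingLearningRate() stores
// lrate * learning_rate_factor_, while SetActualLearningRate() stores lrate
// unchanged.
void IllustrateLearningRateSetters(kaldi::nnet3::UpdatableComponent *uc) {
  uc->SetLearningRateFactor(0.5);        // e.g. from learning-rate-factor=0.5
  uc->SetUnderlyingLearningRate(0.002);  // as a training script would do
  // uc->LearningRate() is now 0.001 (0.002 * 0.5).
  uc->SetActualLearningRate(0.002);
  // uc->LearningRate() is now 0.002; the factor was bypassed.
}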

◆ SetUpdatableConfigs()

void SetUpdatableConfigs ( const UpdatableComponent &  other)

Definition at line 237 of file nnet-component-itf.cc.

References UpdatableComponent::is_gradient_, UpdatableComponent::l2_regularize_, UpdatableComponent::learning_rate_, UpdatableComponent::learning_rate_factor_, and UpdatableComponent::max_change_.

Referenced by SvdApplier::DecomposeComponent().

238  {
239  learning_rate_ = other.learning_rate_;
240  learning_rate_factor_ = other.learning_rate_factor_;
241  l2_regularize_ = other.l2_regularize_;
242  is_gradient_ = other.is_gradient_;
243  max_change_ = other.max_change_;
244 }

◆ UnVectorize()

◆ Vectorize()

virtual void Vectorize ( VectorBase< BaseFloat > *  params) const
inline virtual

◆ WriteUpdatableCommon()

void WriteUpdatableCommon ( std::ostream &  os,
bool  binary 
) const
protected

Definition at line 306 of file nnet-component-itf.cc.

References UpdatableComponent::is_gradient_, UpdatableComponent::l2_regularize_, UpdatableComponent::learning_rate_, UpdatableComponent::learning_rate_factor_, UpdatableComponent::max_change_, Component::Type(), kaldi::WriteBasicType(), and kaldi::WriteToken().

Referenced by LstmNonlinearityComponent::ConsolidateMemory(), ConvolutionComponent::Write(), TimeHeightConvolutionComponent::Write(), AffineComponent::Write(), TdnnComponent::Write(), BlockAffineComponent::Write(), RepeatedAffineComponent::Write(), ConstantComponent::Write(), NaturalGradientAffineComponent::Write(), LinearComponent::Write(), PerElementScaleComponent::Write(), PerElementOffsetComponent::Write(), ConstantFunctionComponent::Write(), ScaleAndOffsetComponent::Write(), and CompositeComponent::Write().

307  {
308  std::ostringstream opening_tag;
309  opening_tag << '<' << this->Type() << '>';
310  std::string token;
311  WriteToken(os, binary, opening_tag.str());
312  if (learning_rate_factor_ != 1.0) {
313  WriteToken(os, binary, "<LearningRateFactor>");
314  WriteBasicType(os, binary, learning_rate_factor_);
315  }
316  if (is_gradient_) {
317  WriteToken(os, binary, "<IsGradient>");
318  WriteBasicType(os, binary, is_gradient_);
319  }
320  if (max_change_ > 0.0) {
321  WriteToken(os, binary, "<MaxChange>");
322  WriteBasicType(os, binary, max_change_);
323  }
324  if (l2_regularize_ > 0.0) {
325  WriteToken(os, binary, "<L2Regularize>");
326  WriteBasicType(os, binary, l2_regularize_);
327  }
328  WriteToken(os, binary, "<LearningRate>");
329  WriteBasicType(os, binary, learning_rate_);
330 }
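
The tokens are therefore written in a fixed order, each optional block appearing only when its value differs from the default. For a hypothetical component with max-change=0.75, the default learning-rate factor, no l2-regularize and a learning rate of 0.001, the text-mode output of this function would look roughly like the following (component type and values invented for illustration):

<NaturalGradientAffineComponent> <MaxChange> 0.75 <LearningRate> 0.001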

Member Data Documentation

◆ is_gradient_

◆ l2_regularize_

◆ learning_rate_

◆ learning_rate_factor_

BaseFloat learning_rate_factor_
protected

◆ max_change_


The documentation for this class was generated from the following files:

nnet-component-itf.h
nnet-component-itf.cc