Backpropagation - Changes in Learning Rate and Momentum

You can dynamically increase or decrease the learning rate and/or momentum by increments you define as training proceeds.  Non-zero increments, which should be very small, are added at the end of each epoch, and the changes affect all links in the network equally.  Some experts believe that, for many types of problems, the learning rate and momentum should slowly increase or decrease during training.
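As a rough illustration only (this is not the product's actual code), the following minimal Python sketch shows how such a per-epoch increment schedule works; train_one_epoch and all variable names are hypothetical stand-ins:

    def train_one_epoch(learning_rate, momentum):
        # Placeholder for one backpropagation pass over the training set.
        # The same learning rate and momentum apply to every link in the network.
        pass

    learning_rate = 0.1    # starting learning rate
    momentum = 0.8         # starting momentum
    lr_increment = 0.0002  # added after every epoch; use a negative value to decrease
    mom_increment = 0.0    # an increment of 0 leaves momentum unchanged

    for epoch in range(1000):
        train_one_epoch(learning_rate, momentum)
        # Increments are added at the end of each epoch.
        learning_rate += lr_increment
        momentum += mom_increment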


For example, to increase the learning rate slowly from .1 to .3 over the first thousand epochs, divide .3 - .1 by 1000 and use the result (.0002) as the increment.  Use -.0002 to decrease the rate from .3 to .1 instead.
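The same arithmetic, written out in the sketch's terms (again with hypothetical variable names):

    start_rate = 0.1
    end_rate = 0.3
    epochs = 1000
    lr_increment = (end_rate - start_rate) / epochs   # 0.0002
    # After 1000 epochs: 0.1 + 1000 * 0.0002 = 0.3
    # To go from .3 back down to .1, swap start and end, giving -0.0002.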


Once you have specified an increment for either the learning rate or the momentum, the incrementing does not stop automatically.  To stop it, return to this module and reset the increment to 0.


If you stop training, the incremented learning rate and momentum are not saved automatically.  You need to return to the Design module and reset the learning rate and momentum there.


For most problems, we find that this feature is not really necessary; we therefore recommend it for experts only.