Backpropagation Learning
Use this module to train Backpropagation networks (learn the patterns in the training set).  The module allows you to view graphics and statistics for both the training and test set patterns as learning progresses.

Training Graphics:
The graphics in this module are meant for temporary use only during training, because the graphics can greatly slow down learning, especially in later stages as the graphs fill with data.  Click on the icon of the graph you wish to display during learning:

1.  X/Y graph of training set average error against epochs elapsed
2.  X/Y graph of test set average error against intervals elapsed (Calibration only)
3.  X/Y graph of error factor ranges of training set patterns
4.  X/Y graph of error factor ranges of test set patterns (Calibration only)

Statistics:
By clicking on the appropriate box, you can display a variety of statistics on the average error for both the training and test patterns as learning progresses.  Note that the training pattern statistics are computed at the end of an epoch, while the test pattern statistics are computed at the end of the number of events set for the Calibration test interval and after processing of the test set is finished.

Training Patterns:

Learning events: The number of training patterns that have been propagated through the network.

Learning epochs: The number of times the entire set of training patterns (an epoch) has been propagated through the network.  If you have a large training set, it will take a while before this number is displayed.

Last average error: The network's most recent computation of the difference between the network's predictions and the actual values or classifications for data in the training set.  If there is more than one output, the error is averaged over all of the output values.  "Error" refers to the mean squared error, a standard statistical measure of closeness of fit.
The network computes the mean (average) squared error between the actual and predicted values for all outputs over all patterns.  The network first computes the squared error for each output in a pattern, totals them, and then computes the mean of that total for the pattern.  It then computes the mean of that number over all patterns in the training set.

Note:  This error is computed within NeuroShell 2's internal operating intervals, 0 to 1 or -1 to 1, depending upon the activation function used in the output layer.  The number is not useful for its value itself.  It is useful during training to see whether the network is improving, because it gets lower as the network improves.  As the network learns the training set better, the average error for the training set gets lower, eventually making very slow downward progress.

Minimum average error: The lowest value for average error that the network achieved during training for data in the training set.

Epochs since min avg error: The number of epochs that have elapsed since the minimum average error was calculated.

Test Patterns:

Calibration interval: The number of events (test set patterns) that are propagated through the network before the average error for the test set is computed.

Last average error: The network's most recent computation of the difference between the network's predictions and the actual values or classifications for data in the test set.  If there is more than one output, the error is summed over all of the output values in each pattern and then averaged over all patterns.  Refer to Last Average Error under Training Patterns for details.

Minimum average error: The lowest value for average error that the network achieved during training for data in the test set.
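The averaging described under Last Average Error can be sketched in a few lines of Python.  This is an illustration only; the function name and the list-of-lists layout are assumptions, not NeuroShell 2's internal implementation:

```python
def average_error(actual, predicted):
    """Squared error per output, totaled and averaged within each
    pattern, then averaged over all patterns in the set.

    actual, predicted: lists of patterns, where each pattern is a
    list of output values already scaled to the network's internal
    operating interval (e.g. 0 to 1).
    """
    pattern_errors = []
    for act, pred in zip(actual, predicted):
        # Squared error for each output in the pattern, totaled...
        total = sum((a - p) ** 2 for a, p in zip(act, pred))
        # ...then averaged over the outputs of that pattern.
        pattern_errors.append(total / len(act))
    # Mean of the per-pattern errors over all patterns.
    return sum(pattern_errors) / len(pattern_errors)
```

For example, one pattern with targets [1.0, 0.0] and predictions [0.5, 0.5] gives (0.25 + 0.25) / 2 = 0.25.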
Events since min avg error: The number of events that have elapsed since the minimum average error was calculated.

Use the Run Menu's Start Training option to begin the learning process.  Use the Continue Training option to restart training once it has been stopped (which may be done with the Interrupt option on the Run Menu).

Learning may be complete when there is little or no change in the minimum average error for either the training set or the test set (if using Calibration).  If you have not set stop training criteria in the Backpropagation Training Criteria module, you may want to monitor these statistics in order to stop learning yourself (use the Interrupt Training option on the Run Menu).

Networks are usually sensitive to initial weight settings.  Use the Run Menu's Set Random Number Seed option if you want to either reproduce a training sequence or change a training sequence.  If you are trying to reproduce a training sequence, type in the same random number seed each time before you start training.  If you are trying to change the training sequence (and perhaps change your results), type in a different random number seed.  The random seed ranges from 0 to 32767.  The default is 1.

When to Save the Network:
If you have specified one of the following options in the Backpropagation Training Criteria module, that option will be the default setting during learning.  (The default setting is displayed on the screen when you begin training.)  You may change this option at any time during learning.

Best Training Set: Saves the network each time it reaches a new minimum average error for the training set.  You may want to use this option if you are NOT using Calibration.  (When using this option, the minimum average error computations are done at the end of each epoch.)

Best Test Set: Saves the network every time it reaches a new minimum average error for the test set.  Select this option if you are using Calibration.
(The computations for the test set are done at the end of the specified number of events.)

No Auto Save: Does not automatically save the network.  You may want to use this option if you plan to train the network for a long period of time and do not want it to pause to save a new best training set or test set network each time a new minimum error factor is reached.

Note:  If you change the number of input or output neurons, you must retrain the network.  You cannot continue training the original network.

Use the Options Menu to Show Weights while a network is learning.  Refer to Show Weights for details.

Learning slows as more graphics and statistics are displayed, so turn on only those you really want to see.  This is especially true for learning events.

File Note:
This module defaults to training on a .TRN file, if it exists, or on the .PAT file if there is no .TRN file.  The module uses the .FIG file created in the Design module and the .MMX file created in the Define Inputs and Outputs module.  All of these files for a given problem must reside in the same directory.

This module can be minimized and run in the background while you do other things.
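The auto-save behavior described under When to Save the Network (the Best Test Set option with Calibration) can be sketched as a monitoring loop.  This is a hedged illustration of the logic only; train_step, compute_test_error, and save_network are hypothetical stand-ins, not NeuroShell 2 APIs:

```python
def train_with_autosave(train_step, compute_test_error, save_network,
                        calibration_interval, max_events):
    """Propagate training events; every calibration_interval events,
    compute the test set average error and save the network whenever
    a new minimum is reached."""
    min_avg_error = float("inf")
    events_since_min = 0
    for event in range(1, max_events + 1):
        train_step()              # propagate one training pattern
        events_since_min += 1
        if event % calibration_interval == 0:
            avg_error = compute_test_error()  # test set average error
            if avg_error < min_avg_error:     # new minimum: save now
                min_avg_error = avg_error
                events_since_min = 0
                save_network()
    return min_avg_error, events_since_min
```

The growing "events since min avg error" counter is the same statistic the module displays; when it keeps climbing with no new minimum, learning on the test set has likely plateaued and training can be interrupted.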