GRNN Learning


(Calibration Set to Iterative or None)

Use this module to train GRNN networks.  A GRNN is essentially trained after a single pass of the training patterns, and it is capable of functioning after only a few training patterns have been entered.  (Obviously, GRNN training improves as more patterns are added.)
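
Training a GRNN amounts to storing the training patterns; a prediction is then a smoothing-weighted average of the stored outputs, which is why a single pass is enough.  The following minimal sketch assumes the common Gaussian-kernel form of GRNN (Specht, 1991); the names are illustrative, and this is not NeuroShell 2's actual code.

import numpy as np

def grnn_predict(x, train_inputs, train_outputs, smoothing):
    # Squared distance from the query pattern to every stored pattern
    d2 = np.sum((train_inputs - x) ** 2, axis=1)
    # Gaussian kernel weight for each stored pattern
    w = np.exp(-d2 / (2.0 * smoothing ** 2))
    # Prediction: weighted average of the stored outputs
    return w @ train_outputs / np.sum(w)

A larger smoothing factor blends more neighbors into each prediction; a smaller one makes the network follow individual training patterns more closely.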

 

The module allows you to view statistics for both the training and test set patterns, and you can display a graph of Calibration as it seeks the best smoothing factor.  Training slows when the graph and additional statistics are displayed.

 

Training Graphics

[Training Graphics icon]

NeuroShell 2 allows you to view a graph of the Test Set Mean Square Error (Y axis) against the Smoothing Factor (X axis) as Calibration seeks the best smoothing factor.  Click on the icon to view the graph.  Use this module to determine an effective smoothing factor that can be used when applying the network.

 

Note: If you close the graph and then reopen it, the graph is recreated and its scale may differ from the one that previously appeared.

 

Smoothing Factor Update

Save Best Factor for the Test Set: Use the smoothing factor computed by Calibration during learning.

 

Keep Present Factor: Use the smoothing factor that was set in the GRNN Architecture module.

 

Maximum Upper Value for Search

The GRNN Learning module allows you to specify an upper limit for the search for a smoothing factor.  The objective is to find the smoothing factor that produces the least mean squared error for all outputs over all test patterns.  Click one of the radio buttons to select one of the following maximum values:  0.8, 1.6, 3.2, 6.4, or 12.8.  Smaller values are usually best, but if you notice the smoothing factor pushing up against the maximum value, you should retrain using a higher limit.
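
The search itself can be pictured as evaluating candidate smoothing factors up to the chosen limit and keeping the one with the lowest test-set error.  The sketch below assumes the grnn_predict() function from the earlier sketch; the grid spacing and step count are illustrative, not NeuroShell 2's actual search procedure.

import numpy as np

def search_smoothing(train_x, train_y, test_x, test_y, upper=0.8, steps=40):
    best_s, best_mse = None, np.inf
    for s in np.linspace(upper / steps, upper, steps):
        preds = np.array([grnn_predict(x, train_x, train_y, s) for x in test_x])
        mse = np.mean((preds - test_y) ** 2)  # averaged over outputs and patterns
        if mse < best_mse:
            best_s, best_mse = s, mse
    return best_s, best_mse

If the returned factor sits at the upper limit, the true optimum may lie beyond it, which is why the text above recommends retraining with a higher limit.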

 

Statistics

By clicking on the appropriate box, you can display a variety of statistics for the Training and Test Patterns as learning progresses.

 

Training Patterns

Learning events: This box displays the number of training patterns that have been presented to the network.  Unlike Backpropagation networks, which propagate training patterns through the network many times while seeking a lower mean squared error between the network's output and the actual output or answer, GRNN training patterns are presented to the network only once.

 

Test Patterns

Learning events: This box displays the number of test patterns that have been propagated through the network.

Current best smoothing factor: If you're using Calibration, this box displays the smoothing factor that results in the least mean squared error.

Learning epochs: This box displays the number of times the entire set of test patterns has been passed through the network.

Last mean squared error: Error refers to the mean squared error, a standard statistical measure of closeness of fit.  This is the network's most recent computation of the difference between the network's predictions and the actual answers or classifications for data in the test set.  If there is more than one output, the error is "averaged" over all of the output values.

 

The network computes the mean (average) squared error between the actual and predicted values for all outputs over all patterns.  It first computes the squared error for each output in a pattern and averages those values to get an error for that pattern; it then averages the per-pattern errors over all patterns in the test set.  (A short sketch of this computation follows the statistics below.)

Minimum mean squared error: This displays the lowest value for average error that the network achieved during training for data in the test set.

 

Epochs since minimum mean squared error: This displays the number of epochs that have elapsed since the minimum mean squared error was calculated.
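
As a concrete illustration of the error computation described above, here is a minimal sketch; the array names are illustrative, and this follows one plausible reading of the averaging, not NeuroShell 2's actual code.

import numpy as np

def mean_squared_error(actual, predicted):
    # actual and predicted have shape (patterns, outputs)
    sq_err = (actual - predicted) ** 2   # squared error for each output
    per_pattern = sq_err.mean(axis=1)    # averaged over the outputs of each pattern
    return per_pattern.mean()            # then averaged over all patterns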

 

Note: When using Calibration with GRNN networks, the mean squared error shown on the screen and on the learning graph is for outputs after they have been mapped into the interval [0, 1], and therefore will not be the real mean squared error, which is displayed in the Apply GRNN Network module.
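
To see why the two numbers differ, assume the mapping is an ordinary linear min-max scaling (an assumption; the manual does not spell out the mapping): squared errors then shrink by the square of each output's range.  The values below are made up for illustration.

lo, hi = 0.0, 50.0                # assumed min/max for one output
actual, predicted = 30.0, 35.0
real_sq_err = (actual - predicted) ** 2                    # 25.0
scale = lambda y: (y - lo) / (hi - lo)                     # map into [0, 1]
scaled_sq_err = (scale(actual) - scale(predicted)) ** 2    # 25.0 / 50.0**2 = 0.01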

 

Because learning is complete after one pass of the training patterns (an epoch), you will not usually need to monitor the number of epochs since the minimum mean squared error as you would for a Backpropagation network.

 

Note:  If you change the number of input or output neurons, you must retrain the network.  You cannot continue training the original network.

File Note: This module defaults to training on a .TRN file, if it exists, or the .PAT file if there is no .TRN file.  This module uses the .FIG file created in the Design module and the .MMX file created in the Define Inputs and Outputs module.  All of these files for a given problem must reside in the same directory.

 

This module can be minimized and run in the background while you do other things.