Use this module to train GRNN networks. A GRNN is essentially trained in a single pass over the training patterns, and it is capable of functioning after only a few training patterns have been entered. (GRNN performance improves, of course, as more patterns are added.)
The module allows you to view statistics for both the training and test set patterns, and you can display a graph of Calibration as it seeks the best smoothing factor. Training slows when the graph and additional statistics are displayed.
NeuroShell 2 allows you to view the test set average error graphed against the number of generations elapsed as Calibration seeks the best smoothing factor. Click on the icon to view the graph.
Genetic Breeding Pool Size
A genetic algorithm works by selective breeding of a population of “individuals”, each of which is a potential solution to the problem. In this case, a potential solution is a set of smoothing factors, and the genetic algorithm is seeking to breed an individual that minimizes the mean squared error of the test set. The larger the breeding pool size, the greater its potential to produce a better individual. However, the networks produced by every individual must be applied to the test set on every reproductive cycle, so larger breeding pools take longer. After testing all of the individuals in the pool, a new “generation” of individuals is produced for testing.
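The breeding cycle described above can be sketched in a few lines. This is an illustrative toy, not NeuroShell's own algorithm: the `fitness` function here is a simple stand-in for evaluating the test-set mean squared error of a network built from a candidate set of smoothing factors, and the pool size, mutation scale, and generation count are arbitrary.

```python
import random

random.seed(0)

def fitness(individual):
    # Stand-in for the test-set MSE of a GRNN built with these
    # smoothing factors; lower is better.  (Toy quadratic whose
    # optimum puts every factor at 0.5.)
    return sum((g - 0.5) ** 2 for g in individual)

POOL_SIZE, N_FACTORS = 20, 3
pool = [[random.random() for _ in range(N_FACTORS)]
        for _ in range(POOL_SIZE)]

for generation in range(30):        # each full pass over the pool = one "generation"
    pool.sort(key=fitness)          # every individual is scored against the test set
    parents = pool[:POOL_SIZE // 2] # selective breeding: keep the fittest half
    children = [[g + random.gauss(0, 0.05) for g in random.choice(parents)]
                for _ in range(POOL_SIZE - len(parents))]
    pool = parents + children

best = min(pool, key=fitness)
```

A larger `POOL_SIZE` gives each generation more chances to produce a better individual, at the cost of more test-set evaluations per cycle, which is exactly the trade-off described above.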
By clicking on the appropriate box, you can display a variety of statistics for the Training and Test Patterns as learning progresses.
Learning Events: This box displays the number of training patterns that have been presented to the network. Unlike Backpropagation networks, which propagate training patterns through the network many times in search of a lower mean squared error between the network’s output and the actual output or answer, GRNN presents each training pattern to the network only once.
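The one-pass nature of GRNN training follows from how the network works: “training” amounts to storing each pattern as a hidden neuron, and prediction is a smoothing-factor-weighted average over the stored patterns. The following is a minimal sketch of that idea (class name and structure are illustrative, not NeuroShell's implementation):

```python
import numpy as np

class GRNN:
    """Minimal GRNN sketch: 'training' just stores the patterns."""

    def __init__(self, sigma=0.3):
        self.sigma = sigma  # overall smoothing factor

    def train(self, X, y):
        # One pass: each training pattern becomes a hidden neuron.
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y, dtype=float)

    def predict(self, x):
        # Weight each stored pattern by its Gaussian-kernel distance
        # to the query point, then return the weighted average output.
        d2 = np.sum((self.X - np.asarray(x, dtype=float)) ** 2, axis=1)
        w = np.exp(-d2 / (2 * self.sigma ** 2))
        return float(np.dot(w, self.y) / np.sum(w))

net = GRNN(sigma=0.3)
net.train([[0.0], [1.0], [2.0]], [0.0, 1.0, 2.0])
# Interpolates smoothly between the stored patterns.
```

Because every training pattern becomes a hidden neuron, adding patterns means the network's architecture changes, which is why retraining is required when the pattern count grows (see the note on hidden neurons near the end of this section).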
Smoothing Test Individuals: This box displays the number of times the test set has been propagated through the network with different smoothing factor adjustments. These are genetic algorithm “individuals” in the breeding pool.
Current Best Smoothing Factor: This box displays the overall smoothing factor that results in the lowest mean squared error.
Smoothing Test Generations: This box displays the total number of complete generations the genetic algorithm has been through. A generation involves the testing of all of the individuals in the breeding pool.
Last Mean Squared Error: This box displays the mean squared error calculated for the last generation.
Minimum Mean Squared Error: This box displays the lowest mean squared error calculated so far during training. This value represents the network that will be saved by Calibration as the best solution to the problem.
Generations Since Minimum Mean Squared Error: This box displays the number of generations that have passed since the generation which produced the minimum mean squared error.
If this option is turned off, the Calibration process will continue until you decide to stop it. When turned on, the learning module will automatically stop the process when there have been 20 successive reproductions (generations) of the whole population, but none has produced an individual that improved the mean squared error by at least 1 percent. Without this option, the genetic algorithm could continue for quite some time finding solutions that are only marginally better. However, if the problem is particularly stubborn, you may want to let it run for a while with the option turned off.
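The auto-stop rule above can be stated precisely as a small function. This is a sketch of the rule as described, not NeuroShell's own code; the function and parameter names are illustrative:

```python
def should_stop(mse_history, patience=20, min_improvement=0.01):
    """Return True when the last `patience` generations failed to
    improve the best mean squared error so far by at least
    `min_improvement` (1 percent)."""
    if len(mse_history) <= patience:
        return False  # not enough generations elapsed yet
    best_before = min(mse_history[:-patience])
    best_recent = min(mse_history[-patience:])
    # Stop if the recent best is not at least 1% below the prior best.
    return best_recent > best_before * (1 - min_improvement)

# 25 generations stuck at the same error: time to stop.
stalled = should_stop([1.0] * 25)
# A big improvement within the last 20 generations: keep going.
improving = should_stop([2.0] + [1.0] * 20)
```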
Input Smoothing Factor Adjustment (Sensitivity)
Once training begins (after you select the Run Menu), individual smoothing factors for each of the input variables are displayed. (If column labels exist, they are displayed after training begins.) The input smoothing factor is an adjustment used to modify the overall smoothing factor, providing a new value for each input. At the end of training, the individual smoothing factors may be used as a sensitivity analysis tool: the larger the factor for a given input, the more important that input is to the model, at least as far as the test set is concerned. Inputs with low smoothing factors are candidates for removal in a later trial. (If the smoothing factor goes to zero, the net has effectively removed the input anyway.)
Individual smoothing factors are unique to each network. The numbers are relative to each other within a given network and they cannot be used to compare inputs from different nets.
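As a sketch of how you might use the factors for sensitivity analysis outside NeuroShell (after copying them to the clipboard, for example): rank the inputs by factor and flag the low ones as removal candidates. The column names and the cutoff threshold below are invented for illustration:

```python
def rank_inputs(smoothing_factors, threshold=0.05):
    """Rank inputs by individual smoothing factor (largest = most
    important to this particular net) and flag low-factor inputs as
    candidates for removal.  Threshold is an illustrative choice."""
    ranked = sorted(smoothing_factors.items(),
                    key=lambda kv: kv[1], reverse=True)
    drop = [name for name, f in ranked if f <= threshold]
    return ranked, drop

# Hypothetical factors read from one trained network.
factors = {"Open": 0.82, "Volume": 0.31, "DayOfWeek": 0.0}
ranked, drop = rank_inputs(factors)
```

Remember that these numbers are only comparable within a single network, so a ranking like this is valid per net, never across nets.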
Viewing Individual Smoothing Factors: (32-bit training only)
The Set Cell Width option allows you to specify the width of the grid cell by clicking on the button for either By Number (the number of decimal places in the largest individual smoothing factor) or By Name (the number of letters in the largest column name).
The Sort Input Smoothing Factors option allows you to specify the order in which the smoothing factors are displayed. Select from Unsorted (displays the smoothing factors in the order in which the columns appear in the spreadsheet), Ascending (lowest to highest), or Descending (highest to lowest).
Note: Column names appear after training has begun.
Copying Individual Smoothing Factors:
You can copy the smoothing factors to a spreadsheet or word processor by doing the following:
1. Use the File Menu and select the Copy results to clipboard option.
2. Open a spreadsheet or a word processor and either select Paste from the Edit menu or hit the Control and V keys at the same time.
Note: If you have already trained a net with PNN or GRNN with the iterative option, you will be asked if you want to continue training the net. Because PNN or GRNN networks are trained after one pass of the training data, you will not normally want to continue training. If you change the number of input, output, or hidden neurons, however, you must retrain the network. This may occur when you add more training patterns because PNN or GRNN networks require one hidden neuron for each training pattern.
For networks trained with the genetic adaptive option, you may want to resume training a network after stopping (even if Calibration stopped the training because of lack of progress). You will be given that option in a dialog box that is displayed when you select the Start Training option from the Run Menu. You may also want to start over (usually after selecting a new random seed) but continue with the smoothing factors already found from the last training session. A dialog box allows you to select this option.
File Note: This module defaults to training on a .TRN file, if it exists, or the .PAT file if there is no .TRN file. This module uses the .FIG file created in the Design module and the .MMX file created in the Define Inputs and Outputs module. All of these files for a given problem must reside in the same directory.
Note: If you change the number of input or output neurons, you must retrain the network. You cannot continue training the original network.
This module can be minimized and run in the background while you do other things.