Apply PNN Network


Use this module to process a data file through a trained neural network to produce the network's classifications or predictions for each pattern in the file.  A file of outputs (the .OUT file) is produced.  If you include actual values in the file, the module gives you check boxes to include the actual values and/or the differences between the actual answers and the network's answers in the .OUT file.  If there is more than one output, the actual values and differences will be displayed for each output.  The order of display is actual values, followed by predicted values, followed by differences.

 

Smoothing Factor

When applying PNN networks, you need to supply a smoothing factor, a value required by the algorithm that affects the network's outputs.

 

The following example may help to explain what occurs when various smoothing factors are used in applying a trained PNN network.  In this example, one input is used to predict one output.  In the following figures, which display a graph of 100 input and output patterns, only input patterns 25, 50, and 75 produce an output value of 1.

 


Figure 1.

 

In Figure 1, the lowest smoothing factor of .05 is used and the output values are very close to either 0 or 1.

 


Figure 2.

 

In Figure 2, a smoothing factor of .08 is used and the output values are mostly in the range .5 to 1, especially near the patterns whose outputs are supposed to be 1.

 


Figure 3.

 

In Figure 3, a smoothing factor of .1 is used and the output values are very close to 1.  The differences between input and output values are "smoothed" out or lopped off.

 

For PNN networks, the smoothing factor must be greater than 0 and can usually range from .01 to 1 with good results. You need to experiment to determine which smoothing factor is most appropriate for your data.  Fortunately, no retraining is required to change smoothing factors, because the value is specified when the network is applied.
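
 

The following sketch is not NeuroShell 2 code; it is a generic Parzen-window (kernel) estimate, written in Python with assumed names such as kernel_output and sigma, that illustrates how a smoothing factor enters this kind of calculation.  It will not reproduce the exact numbers in the figures (NeuroShell 2 scales its inputs and smoothing factors internally), but it shows the qualitative effect: a larger smoothing factor widens each training pattern's range of influence and smooths the outputs, and changing it requires no retraining.

    import numpy as np

    def kernel_output(x_train, y_train, x_query, sigma):
        """Gaussian-kernel (Parzen-window) estimate of the output at x_query.

        Each training pattern contributes a weight that falls off with its
        distance from the query point; the smoothing factor sigma controls
        how far that influence reaches.
        """
        w = np.exp(-((x_train - x_query) ** 2) / (2.0 * sigma ** 2))
        return float(np.sum(w * y_train) / np.sum(w))

    # One input predicting one output, loosely following the example above:
    # 100 patterns scaled to 0..1, with an output of 1 only at patterns 25, 50, 75.
    x = np.linspace(0.0, 1.0, 100)
    y = np.zeros(100)
    y[[24, 49, 74]] = 1.0

    # Trying a different smoothing factor needs no retraining -- it is used
    # only at the moment the trained network is applied.
    for sigma in (0.01, 0.05, 0.1):
        outputs = [kernel_output(x, y, xq, sigma) for xq in x]
        print(f"sigma={sigma:.2f}  pattern 50 output={outputs[49]:.3f}  "
              f"pattern 10 output={outputs[9]:.3f}")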

 

You can either type in a value for the smoothing factor in the edit box or use the default setting.  

 

Default Setting:  The default setting is the smoothing factor that was specified in the Architecture and Parameters module when you designed the network.  The default value appears in the edit box when you use the Run Menu to apply the network.  If you used Calibration, then the best smoothing factor for your test set was computed during training.

 

PNN and GRNN are very local algorithms.  If you apply the network and the message “Fatal error:  Smoothing Factor out of range for this data” is displayed, increase the smoothing factor in the edit box to expand its range of influence (the ability of the network to generalize).  This works only when using the iterative version of Calibration, not the genetic adaptive version.  This message may also appear when you are applying the network to patterns that are different from the data used to train the network.  You should add new patterns to the training set to include this area of the problem domain.

 

Set Output Neuron Values

PNN network outputs show whether the network classified a data pattern as being included in a category (with a 1) or not included in a category (with a 0).  By clicking on the appropriate radio button, you can choose one of the following:

 

Set the winning output to 1 and all others to 0.

Show actual neuron values in outputs.

Show category probabilities in outputs.

 

When you set outputs to either 1 or 0, the winning neuron (the one that places the pattern in a category) is the neuron with the highest output value, even if another neuron has only a slightly smaller value; that neuron is merely the runner-up.

 

When you show actual neuron values, you have to decide which is the winning neuron yourself (the highest value).

 

When you show category probabilities, the output values from all categories will add up to 1.  The probabilities indicate how “sure” the network is.
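
 

The three choices amount to three ways of post-processing the same raw output-neuron values.  The short Python sketch below is only an illustration: the raw values are made up, and treating the category probabilities as the raw values rescaled to sum to 1 is an assumption for the sketch, not a statement of NeuroShell 2's internal formula.

    import numpy as np

    # Hypothetical raw output-neuron values for one pattern and three categories
    # (the array and its values are illustrative, not NeuroShell 2 output).
    raw = np.array([0.62, 0.58, 0.11])

    # 1) Set the winning output to 1 and all others to 0: the neuron with the
    #    highest value wins, even if another neuron is only slightly smaller.
    winner_take_all = np.zeros_like(raw, dtype=int)
    winner_take_all[np.argmax(raw)] = 1

    # 2) Show actual neuron values: displayed as-is; you pick the winner yourself.
    actual_values = raw

    # 3) Show category probabilities: values rescaled so they add up to 1,
    #    indicating how "sure" the network is about each category.
    probabilities = raw / raw.sum()

    print(winner_take_all)   # [1 0 0]
    print(actual_values)     # [0.62 0.58 0.11]
    print(probabilities)     # approximately [0.47 0.44 0.08]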

 

Check Boxes

You can click on the compute statistics check box if you have included actual output values in the file you are processing.  When the compute statistics box is checked, you can view statistics for each output to determine how well your trained network is functioning:

 

Actual winners: The number of times this output was set to 1 in the file.

 

Classified winners: The number of times the network sets this output to 1.

 

Actual losers: The number of times this output was set to 0 in the file.

 

Classified losers: The number of times the network sets this output to 0.

 

True positives: The number of times an actual value of 1 in the file was classified as a 1 by the network.

 

False positives: The number of times an actual value of 0 was classified as a 1 by the network.

 

True negatives: The number of times an actual value of 0 was classified as a 0 by the network.

 

False negatives: The number of times an actual value of 1 was classified as a 0 by the network.

 

True positive proportion: The ratio of true positives to actual winners.

 

False positive proportion: The ratio of false positives to actual losers.
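
 

If you want to verify these statistics outside the module, they can be recomputed from the actual and classified values for a single output.  The Python sketch below simply follows the definitions above, using made-up 0/1 arrays named actual and classified; it is an illustration, not NeuroShell 2 code.

    import numpy as np

    # Hypothetical 1/0 values for a single output: the actual values from the
    # file and the network's classifications (after outputs are set to 1 or 0).
    actual     = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    classified = np.array([1, 0, 0, 1, 1, 0, 1, 0])

    actual_winners     = int(np.sum(actual == 1))       # 4
    classified_winners = int(np.sum(classified == 1))   # 4
    actual_losers      = int(np.sum(actual == 0))       # 4
    classified_losers  = int(np.sum(classified == 0))   # 4

    true_positives  = int(np.sum((actual == 1) & (classified == 1)))  # 3
    false_positives = int(np.sum((actual == 0) & (classified == 1)))  # 1
    true_negatives  = int(np.sum((actual == 0) & (classified == 0)))  # 3
    false_negatives = int(np.sum((actual == 1) & (classified == 0)))  # 1

    true_positive_proportion  = true_positives / actual_winners   # 0.75
    false_positive_proportion = false_positives / actual_losers   # 0.25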

 

The statistics computed when PNN networks are applied to a file may be copied to the Windows clipboard for use in other applications.  To copy the statistics, select the Copy Results to Clipboard option from the File Menu.  For example, you may want to compare the results of different neural networks.  You can copy the results to the clipboard and paste them into a spreadsheet for easy comparison.

 

Checking the "include actuals in .OUT file" box will cause the actual values to be displayed in the first column followed by the network's classifications in the .OUT file.  (Note that actual values for the outputs must be in the file.)  If there is more than one output, the actual values for each output will be displayed, followed by a blank column, followed by the network's classifications for each output.

 

Checking the "include in .OUT file actuals minus network outputs" will cause the differences between the actual values minus the network outputs to be displayed.  (Note that actual values for the outputs must be in the file.)  If there is more than one output, the difference will be displayed for each output.  The order of display is actual values, followed by predicted values, followed by differences.

 

Note:  Do not check these boxes if you used the Race Handicapping Prenetwork Module.  If you do, the Race Handicapping Postnetwork Module will not be able to reconstruct the file.

 

If your data file includes an * in a cell beneath a column labeled A (Actual output), the * will be replaced with a 0 and a prediction will be made in that row when you apply a network.  A prediction will not be made in a row if your data file includes an * in a cell beneath a column labeled I (Input).  (The column labels were specified in the Define Inputs/Outputs module.)  Previous releases of NeuroShell 2 up to Release 2.0 would not apply a trained network to a data row if it contained an * in either an A or I column.
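
 

The rule can be summarized in a small sketch.  The Python function below is purely illustrative (the function name and the row/label layout are assumptions, not NeuroShell 2 code); it assumes each row's cell values come with the column labels ('I' for input, 'A' for actual output) assigned in the Define Inputs/Outputs module.

    # A minimal sketch of the rule above, not NeuroShell 2 code.
    def row_gets_prediction(cells, labels):
        """Return True if the network would make a prediction for this row."""
        for value, label in zip(cells, labels):
            if value == "*" and label == "I":
                return False                  # missing input: row is skipped
        return True                           # '*' under an 'A' column is read as 0

    labels = ["I", "I", "A"]
    print(row_gets_prediction(["3.2", "0.7", "*"], labels))   # True  (unknown actual treated as 0)
    print(row_gets_prediction(["*", "0.7", "1"], labels))     # False (missing input)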

 

The patterns classified edit box displays the number of patterns in the file that the network processed.

 

The patterns classified correctly edit box displays the number of patterns in the file that the network classified correctly.

 

The patterns classified incorrectly edit box displays the number of patterns in the file that the network classified incorrectly.

 

Use the Run Menu to start processing the data file through the network. Also use this menu to interrupt processing.

 

Use the File Menu to select an alternate pattern file, view the pattern file, view the output file, or copy the results (statistics computed when the network is applied) to the Windows clipboard.

 

File Note: This module defaults to processing the .PAT file, but you can apply the network to any file that is in the NeuroShell 2 file format (the same as Lotus 1-2-3 .WK1 or Excel 4 .XLS file format) simply by using the File Menu to select a file.  The inputs must be in the same columns in the same order as the .PAT file with which the network was trained.  This module places the network's classifications or predictions into an .OUT file.