PNN networks work by comparing patterns based upon their distance from each other. NeuroShell 2 gives you two methods of computing this distance. Click on the appropriate box to select the method you want to use for your problem.
The Vanilla (Euclidean) distance metric is recommended for most networks because it usually produces the most accurate results.
The City Block distance metric is the sum of the absolute values of the differences in all dimensions between the pattern and the weight vector for that neuron. City block distance is computed faster than Vanilla distance, but is usually not as accurate.
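The difference between the two metrics can be sketched as follows. This is a minimal Python illustration of the two distance computations, not NeuroShell 2 code; the function and variable names are ours:

```python
import numpy as np

def euclidean_distance(pattern, weights):
    # Vanilla (Euclidean) distance: square root of the sum of the
    # squared differences in all dimensions.
    return np.sqrt(np.sum((pattern - weights) ** 2))

def city_block_distance(pattern, weights):
    # City Block distance: sum of the absolute values of the differences
    # in all dimensions; cheaper to compute (no squaring or square root).
    return np.sum(np.abs(pattern - weights))

pattern = np.array([1.0, 2.0, 3.0])
weights = np.array([2.0, 0.0, 3.0])
print(euclidean_distance(pattern, weights))   # sqrt(1 + 4 + 0) ≈ 2.236
print(city_block_distance(pattern, weights))  # 1 + 2 + 0 = 3.0
```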
Calibration for PNN
If you have a test set for a PNN network, you will probably want to use Calibration to decide which smoothing factor is best for your problem. The success of PNN networks is dependent upon the smoothing factor.
Ward Systems Group was a pioneer in building neural networks using genetic algorithms. However, we had not previously released any neural networks built with a genetic algorithm (GA), because the traditional methods were either extremely slow or did not enhance the ability of the network to generalize (work well on new data not used to train the network). We have now developed a GA-based network algorithm that uses the GA directly with Calibration to improve the network’s generalization. It is an outgrowth of Dr. Donald Specht’s work on adaptive GRNN and adaptive PNN networks.
The three options for implementing Calibration for PNN networks are the following:
Iterative: With Calibration, training for PNN networks proceeds in two parts. The first part trains the network with the data in the training set. The second part uses Calibration to test a whole range of smoothing factors, trying to home in on the one that works best for the network created in the first part. Training is faster than with the genetic adaptive option. You may want to use the iterative option when all of the input variables have the same impact on predicting the output, such as when they are all of the same type, as they are in a waveform.
In general, it is recommended that you allow the network to choose a smoothing factor via Calibration. Remember, however, that the smoothing factor is only as good as the test set.
There may be some instances, however, when you want to find your own smoothing factor. If you have multiple outputs, some may be more important than others, and one smoothing factor may work better for one output while a different one works better for another. You can find your own smoothing factor simply by trying different values as you apply the trained network to a file, since the Apply Module allows you to change smoothing factors.
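The two-part iterative procedure can be sketched in Python as follows. This is an illustrative sketch under our own assumptions (a Gaussian-kernel PNN classifier and a simple list of candidate smoothing factors), not the actual NeuroShell 2 implementation:

```python
import numpy as np

def pnn_classify(train_x, train_y, x, sigma):
    # A basic PNN: sum a Gaussian kernel over each class's training
    # patterns and return the class with the largest total activation.
    scores = {}
    for cls in np.unique(train_y):
        d2 = np.sum((train_x[train_y == cls] - x) ** 2, axis=1)
        scores[cls] = np.sum(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

def calibrate_sigma(train_x, train_y, test_x, test_y, candidates):
    # Part two of training: test a whole range of smoothing factors and
    # keep the one with the fewest incorrect answers on the test set.
    def wrong(sigma):
        return sum(pnn_classify(train_x, train_y, x, sigma) != y
                   for x, y in zip(test_x, test_y))
    return min(candidates, key=wrong)
```

Note that the first "training" part of a PNN amounts to storing the training patterns; only the smoothing factor search involves any iteration.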
Genetic adaptive: Uses a genetic algorithm to find appropriate individual smoothing factors for each input as well as an overall smoothing factor. (The input smoothing factor is an adjustment used to modify the overall smoothing factor to provide a new value for each input.) Training takes longer than when using the iterative option.
Training for PNN nets using the genetic adaptive option also proceeds in two parts. The first part trains the network with the data in the training set. The second part uses Calibration to test a whole range of smoothing factors, trying to home in on a combination that works best on the test set with the network created in the first part. The genetic algorithm looks for a smoothing factor multiplier for each input, which in essence is an individual smoothing factor for each input.
At the end of training, the individual smoothing factors may be used as a sensitivity analysis tool: the larger the factor for a given input, the more important that input is to the model, at least as far as the test set is concerned. Inputs with low smoothing factors are candidates for removal in a later trial, especially if the smoothing factor is set to zero. (If it goes to zero, the net has removed the input anyway.) You may want to use the genetic adaptive option when the input variables are of different types and some may have more of an impact on predicting the output than others.
The genetic adaptive method produces networks that work much better on the test set but take much longer to train. As with Calibration in all of our architectures and paradigms, the key to success is using a representative test set. If you do not, you will get better answers on the test set, but generalization to future production sets may even be poorer if the future data is unlike anything in the test set.
Genetic algorithms use a “fitness” measure to determine which of the individuals in the population survive and reproduce. Thus, survival of the fittest causes good solutions to evolve. The fitness for PNN is the number of incorrect answers, which we are trying to minimize. The number of incorrect answers may not be a whole number: in some cases we add a fraction, e.g., 13.9, to show that the net almost gets 14 wrong. The fraction is an internal number that allows the genetic algorithm to distinguish between two nets that get the same number of wrong answers.
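The genetic adaptive search can be sketched as follows. This is a deliberately simplified, mutation-only GA in Python under our own assumptions (each multiplier scales its input's contribution to the distance, and fitness is the plain count of wrong answers, without the fractional refinement described above); it is not Ward Systems Group's actual algorithm:

```python
import numpy as np

def pnn_errors(train_x, train_y, test_x, test_y, sigma, multipliers):
    # Fitness: the number of incorrect answers on the test set. Each
    # input difference is scaled by that input's smoothing-factor
    # multiplier, so a multiplier of zero removes the input entirely.
    wrong = 0
    for x, y in zip(test_x, test_y):
        scores = {}
        for cls in np.unique(train_y):
            diffs = multipliers * (train_x[train_y == cls] - x)
            d2 = np.sum(diffs ** 2, axis=1) / sigma ** 2
            scores[cls] = np.sum(np.exp(-0.5 * d2))
        wrong += int(max(scores, key=scores.get) != y)
    return wrong

def genetic_adaptive(train_x, train_y, test_x, test_y, sigma,
                     pop_size=16, generations=25, seed=0):
    # Survival of the fittest: the half of the population with the
    # fewest wrong answers survives and reproduces with small mutations.
    rng = np.random.default_rng(seed)
    population = rng.uniform(0.0, 2.0, (pop_size, train_x.shape[1]))
    for _ in range(generations):
        fitness = [pnn_errors(train_x, train_y, test_x, test_y,
                              sigma, ind) for ind in population]
        survivors = population[np.argsort(fitness)[:pop_size // 2]]
        children = np.clip(survivors +
                           rng.normal(0.0, 0.2, survivors.shape), 0.0, None)
        population = np.vstack([survivors, children])
    fitness = [pnn_errors(train_x, train_y, test_x, test_y,
                          sigma, ind) for ind in population]
    return population[np.argmin(fitness)]  # one multiplier per input
```

The returned multipliers can then be read as the sensitivity-analysis figures described above: large values mark important inputs, and values near zero mark candidates for removal.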
None: Simply trains the network but does not use Calibration to find an overall smoothing factor. When using the Apply module, a default value for the smoothing factor is displayed. The user will have to manually adjust the smoothing factor by entering a new one in the edit box.
It is recommended that you allow the network to choose a smoothing factor via Calibration, although you may want to try other smoothing factors to see if you can find a better one. Remember, however, that the smoothing factor is only as good as the test set.
See Missing Values for more details.