PNN Architecture

Probabilistic Neural Networks (PNN) are known for their ability to train quickly on sparse data sets.  A PNN network separates data into a specified number of output categories.

 

PNN networks are three-layer networks in which the training patterns are presented to the input layer and the output layer has one neuron for each possible category.  There must be as many neurons in the hidden layer as there are training patterns.

 

The network produces an activation in each output neuron corresponding to the probability density function estimate for that category.  The highest output represents the most probable category.
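 

To make the picture concrete, here is a minimal sketch of how such a network could classify one pattern, assuming Gaussian kernels and a single smoothing factor shared by all links (one common PNN formulation); the function and variable names are illustrative only and are not part of the product:

    import numpy as np

    def pnn_classify(x, train_patterns, train_categories, sigma=0.5):
        """Classify one input vector with a basic PNN.

        train_patterns:   (n_patterns, n_inputs) array; one hidden neuron per row
        train_categories: (n_patterns,) integer category labels
        sigma:            smoothing factor shared by all links
        """
        train_patterns = np.asarray(train_patterns, dtype=float)
        train_categories = np.asarray(train_categories)
        x = np.asarray(x, dtype=float)

        # Hidden layer: one Gaussian kernel activation per training pattern
        sq_dist = np.sum((train_patterns - x) ** 2, axis=1)
        activations = np.exp(-sq_dist / (2.0 * sigma ** 2))

        # Output layer: sum the activations belonging to each category
        # (proportional to the probability density estimate for that category)
        categories = np.unique(train_categories)
        density = np.array([activations[train_categories == c].sum() for c in categories])

        # The highest output is the most probable category
        return categories[np.argmax(density)], density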

 

Slabs

Click on each Slab to set or inspect the number of neurons.  Change the default settings by typing a new value in the text box.

 

The number of neurons in the input layer (Slab 1) is the number of inputs in your problem, and the number of neurons in the output layer (Slab 3) corresponds to the number of categories. Because the purpose of a PNN network is to separate outputs into different categories, two or more outputs are required. All output values should be either 0 or 1.
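 

For example, a three-category problem needs three output neurons, and each training pattern carries a 1 in the column for its category and 0 in the other columns.  A small illustrative helper (not part of the product) that builds such 0/1 output columns from integer labels might look like this:

    import numpy as np

    def encode_categories(labels, n_categories):
        """Turn integer category labels into 0/1 output columns,
        one column (output neuron) per category."""
        outputs = np.zeros((len(labels), n_categories))
        outputs[np.arange(len(labels)), labels] = 1.0
        return outputs

    # Three patterns and three categories: each row contains a single 1
    print(encode_categories([0, 2, 1], 3))
    # [[1. 0. 0.]
    #  [0. 0. 1.]
    #  [0. 1. 0.]]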

 

The number of neurons in the hidden layer defaults to the number of patterns in the training set because the hidden layer consists of one neuron for each pattern in the training set.  You can make it larger if you plan to add more patterns later, but do not make it smaller.  If you are designing a network without an existing .PAT file, the default value is 0 and you must specify a value.

 

Note: If you have more than 2000 patterns in your training set, PNN may become too slow to be practical unless you have a very fast machine.  The reason is that applying a PNN network requires comparing each new pattern with every training pattern.

 

Use the mouse to select a scaling function for the input layer from the list box.

 

Connection Arrows (Links)

Click on the Connection Arrows to set or inspect the smoothing factor for the links; the same smoothing factor applies to all links.  The smoothing factor that you set in the design stage is only a default, not the one you must use when you apply the net, because you can change it in the Apply a Trained Network module.

 

You need to experiment with different smoothing factors to discover which works best for your problem.  Apply the trained network to your training set, and perhaps to a test set, with different smoothing factors and see which one gives you the best answers.
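 

As a rough illustration of that search (reusing the pnn_classify sketch above; the accuracy measure and the candidate smoothing factors below are arbitrary choices, not product defaults):

    import numpy as np

    def best_smoothing(train_x, train_y, test_x, test_y,
                       sigmas=(0.05, 0.1, 0.2, 0.5, 1.0)):
        """Apply the network with each candidate smoothing factor and
        keep the one that classifies the test set most accurately."""
        best_sigma, best_acc = None, -1.0
        for sigma in sigmas:
            preds = [pnn_classify(x, train_x, train_y, sigma)[0] for x in test_x]
            acc = float(np.mean(np.asarray(preds) == np.asarray(test_y)))
            if acc > best_acc:
                best_sigma, best_acc = sigma, acc
        return best_sigma, best_acc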

 

If you're using Calibration, the smoothing factor will be automatically computed and the default setting will be ignored.