Probabilistic Neural Networks (PNNs) are known for their ability to train quickly, even on sparse data sets. A PNN classifies input data into a specified number of output categories.
A PNN is a three-layer network: training patterns are presented to the input layer, and the output layer has one neuron for each possible category. The hidden layer must contain one neuron for each training pattern.
Each output-layer neuron produces an activation corresponding to the estimated probability density function for its category; the highest activation indicates the most probable category.
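The classification step described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: it assumes a Gaussian (Parzen-window) kernel for each hidden neuron and a single smoothing parameter `sigma`, and the function name `pnn_classify` is chosen for this example.

```python
import math

def pnn_classify(train, x, sigma=0.5):
    """Classify pattern x with a simple PNN.

    train : list of (pattern, label) pairs; each pattern is a tuple
            of floats and acts as one hidden-layer neuron.
    Each hidden neuron fires a Gaussian kernel activation based on
    its distance to x; activations are averaged per category to
    estimate that category's probability density, and the category
    with the highest estimate wins.
    """
    sums = {}    # summed kernel activations per category
    counts = {}  # number of hidden neurons (patterns) per category
    for pattern, label in train:
        sq_dist = sum((p - q) ** 2 for p, q in zip(pattern, x))
        activation = math.exp(-sq_dist / (2 * sigma ** 2))
        sums[label] = sums.get(label, 0.0) + activation
        counts[label] = counts.get(label, 0) + 1
    # average activation per category = density estimate for x
    return max(sums, key=lambda c: sums[c] / counts[c])
```

For example, with two well-separated clusters labeled `"a"` and `"b"`, a query point near the first cluster is assigned `"a"`. Note that every training pattern participates in every classification, which is why evaluation cost grows directly with the training set size.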
PNN networks were invented by Dr. Donald Specht.
Because the hidden layer holds one neuron per training pattern, every classification must evaluate all of them, so PNNs should generally not be used with more than about 1,000 training patterns unless you have a very fast machine.