Like PNN networks, General Regression Neural Networks (GRNN) are known for their ability to train quickly on sparse data sets. Rather than categorizing data as PNN does, however, GRNN applications produce continuous-valued outputs. In our tests we found that GRNN responds much better than backpropagation to many (but not all) types of problems. It is especially useful for continuous function approximation: GRNN can take multidimensional input, and it will fit multidimensional surfaces through the data.
A GRNN network is a three-layer network that contains one hidden neuron for each training pattern. There are no training parameters such as learning rate and momentum, as there are in backpropagation networks, but there is a smoothing factor that is applied when the network makes predictions on new data. The smoothing factor determines how tightly the network matches its predictions to the data in the training patterns.
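The mechanism described above can be sketched in a few lines: each hidden neuron computes a Gaussian kernel weight from the distance between the new input and one stored training pattern, and the output is the weight-averaged sum of the training targets. This is a minimal illustration of the standard GRNN formula, not the implementation in any particular product; the function name `grnn_predict` and the default `sigma` value are our own choices.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma=0.5):
    """Minimal GRNN sketch: one hidden neuron per training pattern.

    sigma is the smoothing factor: smaller values match the training
    data more tightly, larger values give smoother predictions.
    """
    X_new = np.atleast_2d(X_new)
    preds = []
    for x in X_new:
        # Squared Euclidean distance from x to every training pattern.
        d2 = np.sum((X_train - x) ** 2, axis=1)
        # Gaussian kernel weight contributed by each hidden neuron.
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        # Output is the weighted average of the training targets.
        preds.append(np.dot(w, y_train) / np.sum(w))
    return np.array(preds)
```

Note that prediction touches every stored training pattern, which is why GRNN trains almost instantly (training is just storing the patterns) but evaluation cost grows with the size of the training set.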
GRNN networks were invented by Dr. Donald Specht.
Because every prediction must evaluate one hidden neuron per training pattern, GRNN networks should generally not be used if there are more than 1000 training patterns unless you have a very fast machine.