NeuroShell 2 uses Calibration to optimize the network by applying the current network to an independent test set at intervals during training. Calibration finds the network that performs best on the test set, which means the network is able to generalize well and give good results on new data.
Calibration does this by computing the mean squared error between the actual and predicted values for all outputs over all patterns. (Mean squared error is the standard statistical measure of closeness of fit.) Calibration computes the squared error for each output in a pattern, totals those errors, and then averages that total over all patterns in the test set.
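The error computation described above can be sketched as follows. This is an illustrative reconstruction, not NeuroShell code; `actual` and `predicted` are assumed to be lists of per-pattern output vectors.

```python
def mean_squared_error(actual, predicted):
    """Sum the squared error over every output in a pattern,
    then average that per-pattern total over all test patterns."""
    total = 0.0
    for a_row, p_row in zip(actual, predicted):
        # squared error for each output in this pattern, totaled
        total += sum((a - p) ** 2 for a, p in zip(a_row, p_row))
    # mean of the per-pattern totals over all patterns
    return total / len(actual)

print(mean_squared_error([[1.0, 0.0]], [[0.5, 0.5]]))  # 0.5
```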
For Backpropagation networks, the network is saved every time a new minimum average error (mean squared error) on the test set is reached. To use Calibration, you must set the Calibration test interval, which determines how often the test set is evaluated.
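The save-on-new-minimum behavior can be sketched as a training loop. This is a minimal sketch under stated assumptions: `train_one_epoch`, `evaluate_mse`, and `save_network` are hypothetical stand-ins for the training step, the test-set evaluation, and the network save, not NeuroShell functions.

```python
def train_with_calibration(network, epochs, test_interval,
                           train_one_epoch, evaluate_mse, save_network):
    best_error = float("inf")
    for epoch in range(1, epochs + 1):
        train_one_epoch(network)
        if epoch % test_interval == 0:     # Calibration test interval
            error = evaluate_mse(network)  # MSE on the test set
            if error < best_error:         # new minimum average error
                best_error = error
                save_network(network)      # keep the best network so far
    return best_error
```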
For GRNN networks, Calibration optimizes the smoothing factor based on the values in the test set. Calibration does this by trying different smoothing factors and choosing the one that generates the least mean squared error between the actual and predicted answers. For PNN networks, Calibration also optimizes the smoothing factor, but does so by minimizing the probabilistic error.
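The smoothing-factor search for GRNN can be sketched as a simple trial over candidate values, keeping the one with the lowest test-set mean squared error. This is a hypothetical illustration: `predict_with_smoothing` stands in for the GRNN prediction step at a given smoothing factor, and the candidate list is an assumption, not how NeuroShell enumerates trials.

```python
def best_smoothing_factor(candidates, actual, predict_with_smoothing):
    """Try each candidate smoothing factor and return the one
    that yields the least mean squared error on the test set."""
    best_sf, best_err = None, float("inf")
    for sf in candidates:
        predicted = predict_with_smoothing(sf)
        err = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
        if err < best_err:
            best_sf, best_err = sf, err
    return best_sf
```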