 Alternative to Discriminant Function Analysis

Discriminant function analysis is a method for classifying data patterns into two or more categories, usually with a linear function.  Neural networks are classifiers by their basic nature, and in fact were originally designed to perform this function in a nonlinear fashion.

It is possible to use a neural network to do classifications with a single output like discriminant function analysis, although it is usually much better to use multiple outputs, one for each category to be classified.  If you were to use a single output, each category would be coded as a single integer value.  For example, if you wanted to decide whether to buy, sell, or hold a stock, you might code the outputs:  Buy = 1, Sell = -1, Hold = 0.
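NeuroShell 2 handles this coding through its own interface, but the single-output scheme can be sketched in plain Python (the names here are illustrative, not part of NeuroShell):

```python
# Hypothetical sketch: single-output coding for a buy/sell/hold decision.
# Each category is mapped to one integer target per training pattern.
SINGLE_OUTPUT_CODES = {"Buy": 1, "Sell": -1, "Hold": 0}

def encode_single(action):
    """Return the single-output target value for a category label."""
    return SINGLE_OUTPUT_CODES[action]

print(encode_single("Buy"))   # 1
print(encode_single("Sell"))  # -1
print(encode_single("Hold"))  # 0
```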

There are three problems with this approach:

 1 There is no way for the network to tell you that it does not recognize a pattern (most likely a number close to zero will occur).
 2 There is no strength or probability indicator for each category.
 3 Neural networks were not really designed to work that way.

If you make three outputs, however, these problems are eliminated.

Code the first 0 for no buy, 1 for buy.

Code the second 0 for no sell, 1 for sell.

Code the third 0 for no hold, 1 for hold.
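The three-output coding above amounts to what is often called one-of-N encoding.  A minimal Python sketch (names are my own, not NeuroShell's):

```python
# Hypothetical sketch: one output per category.
# The target is 1 for the pattern's actual category and 0 for the others.
CATEGORIES = ["Buy", "Sell", "Hold"]

def encode_one_per_category(action):
    """Return the [buy, sell, hold] target vector for a category label."""
    return [1 if c == action else 0 for c in CATEGORIES]

print(encode_one_per_category("Buy"))   # [1, 0, 0]
print(encode_one_per_category("Sell"))  # [0, 1, 0]
print(encode_one_per_category("Hold"))  # [0, 0, 1]
```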

If the network does not recognize a pattern (backprop only), the result will be a low activation in all three outputs.  If two or more outputs are high, the network has found a pattern that has characteristics of several categories.  The output strength will show you which is most likely (using PNN, the output can be an actual probability).
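These interpretation rules can be sketched as follows.  The threshold values are purely illustrative assumptions, not anything NeuroShell 2 prescribes:

```python
# Hypothetical sketch: interpreting the three output activations.
# LOW and HIGH are assumed thresholds chosen for illustration only.
CATEGORIES = ["Buy", "Sell", "Hold"]
LOW, HIGH = 0.2, 0.8

def interpret(outputs):
    """Classify a pattern from its three output activations."""
    # All activations low: the network does not recognize the pattern.
    if all(o < LOW for o in outputs):
        return "unrecognized pattern"
    # Two or more high outputs: characteristics of several categories.
    strong = [c for c, o in zip(CATEGORIES, outputs) if o > HIGH]
    if len(strong) > 1:
        return "mixed: " + "/".join(strong)
    # Otherwise the strongest output indicates the most likely category.
    best = max(range(len(outputs)), key=lambda i: outputs[i])
    return CATEGORIES[best]

print(interpret([0.05, 0.10, 0.08]))  # unrecognized pattern
print(interpret([0.90, 0.10, 0.85]))  # mixed: Buy/Hold
print(interpret([0.30, 0.70, 0.20]))  # Sell
```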

The LINES example in the NeuroShell 2 \EXAMPLES subdirectory is an excellent prototype for categorical problems, except that an output range of 0 to 1 is more usual than the 0 to 10 that we used.  (PNN likes 0 to 1, but backprop does not care because of the scaling.)