Neural network technology mimics the brain's own problem-solving process. Just as humans apply knowledge gained from past experience to new problems or situations, a neural network takes previously solved examples to build a system of "neurons" that makes new decisions, classifications, and forecasts.
Neural networks look for patterns in training sets of data, learn these patterns, and develop the ability to correctly classify new patterns or to make forecasts and predictions. Neural networks excel at problem diagnosis, decision making, prediction, and other classifying problems where pattern recognition is important and precise computational answers are not required.
Two Types of Networks
There are two basic types of neural networks: supervised and unsupervised.
Supervised networks build models which classify patterns, make predictions, or make decisions according to other patterns of inputs and outputs they have "learned." They give the most reasonable answer based upon the variety of learned patterns. In a supervised network, you show the network how to make predictions, classifications, or decisions by giving it a large number of correct classifications or predictions from which it can learn. Backpropagation, GRNN, and PNN networks are supervised network types.
Unsupervised networks can classify a set of training patterns into a specified number of categories without being shown in advance how to categorize. The network does this by clustering patterns: it groups them by their proximity in N-dimensional space, where N is the number of inputs. The user tells the network the maximum number of categories, and it usually clusters the data into that number of categories. However, occasionally the network may not be able to separate the patterns into that many distinct categories. Kohonen networks are unsupervised.
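The proximity idea can be sketched in a few lines. The following is an illustrative k-means-style clustering routine, not the Kohonen algorithm itself (Kohonen networks use a different training procedure); it only shows what "clustering by proximity in N-dimensional space" means, including the case where a category ends up empty. The function name and data are made up for the example.

```python
import math

def cluster(patterns, k, iterations=20):
    """Proximity clustering in N-dimensional input space (a k-means sketch
    to illustrate the idea; Kohonen networks use a different procedure)."""
    centers = [list(p) for p in patterns[:k]]   # naive initial centers
    groups = [[] for _ in range(k)]
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        # Assign each pattern to the nearest center by Euclidean distance.
        for p in patterns:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            groups[nearest].append(p)
        # Move each center to the mean of its assigned patterns.
        for i, g in enumerate(groups):
            if g:   # a category can end up empty: fewer distinct clusters
                centers[i] = [sum(x) / len(g) for x in zip(*g)]
    return centers, groups

# Six 2-dimensional patterns forming two obvious clusters.
data = [(0.1, 0.2), (5.0, 5.1), (0.2, 0.1), (5.2, 4.9), (0.15, 0.15), (4.9, 5.0)]
centers, groups = cluster(data, 2)
```

With the data above, the routine separates the six patterns into the two clusters a human would draw by eye, and each center settles on the mean of its cluster.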
Neither type of network is guaranteed to always give an absolutely "correct" answer, especially if patterns are in some way incomplete or conflicting. Results should be evaluated in terms of the percentage of correct answers that result from the model.
In this regard, the technology is similar to biological neural functioning after which it was designed, and differs significantly from all other conventional computer software. Neural networks may not work at all with some applications. Some problems are well suited for the pattern recognition capabilities of a neural network and others are best solved with more traditional methods.
Figure 1. Network Structure
The basic building block of neural network technology is the simulated neuron (depicted in Figure 1 as a circle). Independent neurons are of little use, however, unless they are interconnected in a network of neurons. The network processes a number of inputs from the outside world to produce an output, the network's classifications or predictions. The neurons are connected by weights (depicted as lines), which are applied to values passed from one neuron to the next.
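A single simulated neuron can be written in a few lines: weight each incoming value, sum, and squash the total into an output. This is a minimal sketch; the sigmoid activation and the numbers are assumptions for illustration, not NeuroShell 2's internal formula.

```python
import math

def neuron(inputs, weights, bias=0.0):
    """One simulated neuron: multiply each input by its connection weight,
    sum the results, and squash with a sigmoid (assumed activation)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # output between 0 and 1

out = neuron([0.5, 0.9], [0.4, -0.2])      # weighted sum = 0.02
```

Here the weighted sum is 0.5 * 0.4 + 0.9 * (-0.2) = 0.02, so the neuron's output is just above 0.5.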
A group of neurons is called a slab. Neurons are also grouped into layers by their connection to the outside world. For example, if a neuron receives data from outside of the network, it is considered to be in the input layer. If a neuron contains the network's predictions or classifications, it is in the output layer. Neurons in between the input and output layers are in the hidden layer(s). A layer may contain one or more slabs of neurons.
A typical neural network is a Backpropagation network, which usually has three layers of neurons. Input values in the first layer are weighted and passed to the second (hidden) layer. Neurons in the hidden layer "fire" or produce outputs that are based upon the sum of weighted values passed to them. The hidden layer passes values to the output layer in the same fashion, and the output layer produces the desired results (predictions or classifications).
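The flow of values through the three layers can be sketched as a forward pass. The layer sizes and weight values below are invented for illustration; only the structure (input layer weighted into hidden layer, hidden layer weighted into output layer) reflects the description above.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    """Forward pass through a 3-layer network (illustrative sketch)."""
    # Each hidden neuron fires on the weighted sum of all input values.
    hidden = [sigmoid(sum(x * w for x, w in zip(inputs, ws))) for ws in w_hidden]
    # Each output neuron fires on the weighted sum of hidden activations.
    return [sigmoid(sum(h * w for h, w in zip(hidden, ws))) for ws in w_output]

# 2 inputs -> 3 hidden neurons -> 1 output (sizes chosen arbitrarily).
w_hidden = [[0.2, -0.1], [0.4, 0.3], [-0.5, 0.6]]
w_output = [[0.7, -0.2, 0.1]]
prediction = forward([1.0, 0.5], w_hidden, w_output)
```

The output is a single value between 0 and 1, which a classification network would compare against a threshold and a prediction network would rescale to the target range.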
The network "learns" by adjusting the interconnection weights between layers. The answers the network is producing are repeatedly compared with the correct answers, and each time the connecting weights are adjusted slightly in the direction of the correct answers. Eventually, if the problem can be learned, a stable set of weights adaptively evolves and will produce good answers for all of the sample decisions or predictions. The real power of neural networks is evident when the trained network is able to produce good results for data which the network has never "seen" before.
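The "adjust the weights slightly in the direction of the correct answers" step can be shown concretely with the delta rule on a single sigmoid neuron. This is a simplification of full backpropagation (which applies the same idea layer by layer); the learning rate, epoch count, and the logical-OR training set are assumptions for the example.

```python
import math

def train_neuron(samples, lr=0.5, epochs=2000):
    """Delta-rule sketch of learning: after each example, nudge every
    weight slightly toward the correct answer. Not full backpropagation."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            total = sum(x * w for x, w in zip(inputs, weights)) + bias
            out = 1.0 / (1.0 + math.exp(-total))
            # Compare the network's answer with the correct answer...
            err = target - out
            grad = err * out * (1.0 - out)
            # ...and adjust each connecting weight a small step toward it.
            weights = [w + lr * grad * x for w, x in zip(weights, inputs)]
            bias += lr * grad
    return weights, bias

# Four "previously solved examples": the logical OR function.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias = train_neuron(samples)
```

After training, the weights have stabilized so that the neuron answers below 0.5 for (0, 0) and above 0.5 for the other three patterns.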
The biggest secret to building successful neural networks is knowing when to stop training. If you train too little, the net will not learn the patterns. If you train too much, the net will learn the noise or memorize the training patterns and not generalize well to new patterns. Fortunately, NeuroShell 2 contains Calibration, which prevents overtraining the network.
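The general principle behind Calibration is early stopping: measure error on patterns held out of training, and stop when that error stops improving. The loop below is a generic sketch of that idea, not NeuroShell 2's actual Calibration algorithm; the `step` and `val_error` callables and the `patience` parameter are assumptions for the example.

```python
def train_with_early_stopping(step, val_error, max_epochs=1000, patience=5):
    """Early-stopping sketch: `step` runs one training pass, `val_error`
    returns error on held-out patterns. Stop once validation error has
    failed to improve for `patience` epochs in a row."""
    best = float("inf")
    stale = 0
    for epoch in range(max_epochs):
        step()                      # one pass over the training set
        err = val_error()           # error on patterns the net never trains on
        if err < best:
            best, stale = err, 0    # still generalizing: keep training
        else:
            stale += 1              # validation error rose: possible overtraining
            if stale >= patience:
                break               # stop before the net memorizes noise
    return best, epoch

# Demo with a fake error curve that improves, then degrades.
errs = iter([5.0, 4.0, 3.0, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1])
best, stopped_at = train_with_early_stopping(lambda: None, lambda: next(errs))
```

With the fake error curve above, the loop stops shortly after epoch 2, where validation error bottomed out at 3.0, rather than running all 1000 epochs.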