Architecture and Parameters


Use this module to select from five different types of learning paradigms:

1. Backpropagation (BP)

Backpropagation networks are known for their ability to generalize well on a wide variety of problems.  Backpropagation networks are a supervised type of network, i.e., they are trained with both inputs and outputs.  Depending upon the number of patterns, training may be slower than with other paradigms.  Backpropagation networks are used for the vast majority of working neural network applications because they tend to generalize well.  When using Backpropagation networks, you can increase the precision of the network by creating a separate network for each output.


NeuroShell 2 offers several different variations of Backpropagation networks:


a.  Each layer connected to the immediately previous layer (with either 3, 4, or 5 layers). Using more than 3 layers is rarely necessary if you use Calibration.  Using more than 5 layers is not recommended, because there is no known benefit to doing so.


b.  Each layer connected to every previous layer (with either 3, 4, or 5 layers).


c.  Recurrent networks with dampened feedback from either the input, hidden, or output layer. These are excellent for time series data.


d.  Ward networks with multiple hidden slabs. When the different hidden slabs are given different activation functions, these networks are very powerful because the hidden layers detect different features of the input vectors.  This gives the output layer different "views" of the data.
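As a concrete illustration of variation (a), the mechanics of a 3-layer Backpropagation network can be sketched in Python/NumPy.  This is an illustrative toy, not NeuroShell 2's implementation; the hidden-layer size, learning rate, sigmoid activation, and XOR training set are all assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# supervised training patterns: both inputs and desired outputs (XOR)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def forward(inputs):
    h = sigmoid(inputs @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

initial_loss = float(((forward(X)[1] - y) ** 2).mean())

lr = 0.5                                   # assumed learning rate
for _ in range(5000):
    h, out = forward(X)
    # backward pass: propagate the output error back toward the input layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

loss = float(((forward(X)[1] - y) ** 2).mean())
pred = (forward(X)[1] > 0.5).astype(int)
```

In the recurrent variation (c), the hidden activations would additionally receive dampened feedback from a previous time step, and in a Ward network (d) several hidden slabs with different activation functions would feed the output layer side by side.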


2. Unsupervised (Kohonen)

The Kohonen Self Organizing Map network is a type of unsupervised network, which has the ability to learn without being shown correct outputs in sample patterns.  These networks are able to separate data patterns into a specified number of categories.
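The competitive-learning core of a Kohonen network can be sketched as follows.  This simplified example omits the neighborhood updates a full Self Organizing Map performs; the number of categories, the cluster data, and the learning-rate schedule are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# unlabeled data drawn from four clusters -- no correct outputs are given
centers = np.array([[0, 0], [0, 5], [5, 0], [5, 5]], dtype=float)
data = np.vstack([c + rng.normal(scale=0.3, size=(50, 2)) for c in centers])
rng.shuffle(data)

n_categories = 4                           # the specified number of categories
weights = rng.uniform(0, 5, size=(n_categories, 2))

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)            # decaying learning rate
    for x in data:
        # the unit closest to the pattern "wins" and moves toward it
        winner = np.argmin(((weights - x) ** 2).sum(axis=1))
        weights[winner] += lr * (x - weights[winner])

def categorize(x):
    """Assign a pattern to the category of its nearest weight vector."""
    return int(np.argmin(((weights - x) ** 2).sum(axis=1)))
```

After training, each weight vector has drifted toward a region of the data, so `categorize` separates new patterns into the learned categories without ever having seen labels.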


3. Probabilistic Neural Network (PNN)

Probabilistic Neural Networks are a type of supervised network known for their ability to train quickly on sparse data sets.  PNN separates data into a specified number of output categories, i.e., it classifies its input patterns into categories.
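The classification idea behind PNN (a Parzen-window style kernel estimate per category) can be sketched briefly.  The smoothing factor `sigma` and the tiny training set are assumptions for the example:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Pick the category whose summed Gaussian kernels give x the highest score."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        # squared distances from x to every training exemplar of class c
        d2 = ((train_X[train_y == c] - x) ** 2).sum(axis=1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).sum())
    return int(classes[int(np.argmax(scores))])

# sparse training set: two exemplars per category is enough to start classifying
train_X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.8, 5.1]])
train_y = np.array([0, 0, 1, 1])
```

"Training" here is just storing the patterns, which is why PNN trains quickly even on sparse data sets.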


4. General Regression Neural Network (GRNN)

Like PNN networks, General Regression Neural Networks are known for their ability to train quickly on sparse data sets.  GRNN is a type of supervised network.  Rather than categorizing data like PNN, however, GRNN applications are able to produce continuous valued outputs.  In our tests we found that GRNN responds much better than Backpropagation to many types of problems (but not all).  It is especially useful for continuous function approximation.  GRNN can have multidimensional input, and it will fit multidimensional surfaces through data.  Because GRNN networks evaluate each output independently of the other outputs, GRNN networks may be more accurate than Backpropagation networks when there are multiple outputs.
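The continuous-output behavior of GRNN can be sketched as a kernel-weighted average of the training targets.  The smoothing factor `sigma` and the sample function are assumptions for the example:

```python
import numpy as np

def grnn_predict(x, train_X, train_y, sigma=0.1):
    # weight every training target by a Gaussian kernel on input distance,
    # then return the normalized weighted average (a continuous value)
    d2 = ((train_X - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return float((w * train_y).sum() / w.sum())

# samples of the continuous function y = x^2 on [0, 1]
train_X = np.linspace(0, 1, 21).reshape(-1, 1)
train_y = train_X.ravel() ** 2
```

Because the output is a smooth average rather than a winning category, GRNN fits a surface through the data; with multiple outputs, each one would simply be a separate weighted average, evaluated independently of the others.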


5. GMDH Network (Group Method of Data Handling or Polynomial Nets)


GMDH works by building successive layers with links that are simple polynomial terms.  These polynomial terms are created by using linear and non-linear regression.  The initial layer is simply the input layer.  The first layer created is made by computing regressions of the input variables and then choosing the best ones.  The second layer is created by computing regressions of the values in the first layer along with the input variables.  Again, only the best are chosen by the algorithm.  These are called survivors.  This process continues until the net stops getting better (according to a prespecified selection criterion).


The resulting network can be represented as a complex polynomial (i.e., a familiar formula) description of the model.  You may view the formula, which contains the most significant input variables.  In some respects this is very much like using regression analysis, but it is far more powerful: GMDH can build very complex models while avoiding overfitting problems.
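The layer-building step described above can be sketched in simplified form: fit the classic two-input quadratic polynomial to every pair of variables by least squares, score each fit, and keep the best as survivors.  The survivor count, the selection criterion (plain mean squared error rather than a held-out criterion), and the sample data are all assumptions for this illustration:

```python
import numpy as np
from itertools import combinations

def quad_terms(a, b):
    # polynomial terms of the classic two-input GMDH node:
    # 1, a, b, a*b, a^2, b^2
    return np.column_stack([np.ones_like(a), a, b, a * b, a * a, b * b])

def gmdh_layer(X, y, n_survivors=2):
    """Build one layer: regress every variable pair, keep the best fits."""
    candidates = []
    for i, j in combinations(range(X.shape[1]), 2):
        A = quad_terms(X[:, i], X[:, j])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        pred = A @ coef
        mse = float(((pred - y) ** 2).mean())   # simplified selection criterion
        candidates.append((mse, pred))
    candidates.sort(key=lambda c: c[0])
    survivors = candidates[:n_survivors]
    return np.column_stack([p for _, p in survivors]), survivors[0][0]

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 3))
y = 1.0 + 2 * X[:, 0] * X[:, 1] + X[:, 2] ** 2   # target is itself polynomial
layer1, best_mse = gmdh_layer(X, y)
```

A full GMDH run would then regress the survivors together with the original input variables to build the next layer, stopping when the selection criterion stops improving; substituting each survivor's polynomial back through the layers yields the final formula.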