GMDH Architecture

The Group Method of Data Handling (GMDH) was invented by A. G. Ivakhnenko in the former Soviet Union and later enhanced by others, including A. R. Barron.  The software was programmed for Ward Systems Group by NeuroP Ltd.  This technique has also been called "polynomial nets."

Note:  The explanation of GMDH given here is brief; a complete and detailed description of the algorithm is beyond the scope of this help file.  Readers interested in more technical detail should refer to Farlow's book (listed in References).  Also, whenever the notation X^2 appears in the following documentation, it refers to X squared, X^3 refers to X cubed, and so on.

GMDH works by building successive layers with complex links (or connections) that are the individual terms of a polynomial.  These polynomial terms are created using linear and non-linear regression.  The initial layer is simply the input layer.  The first layer created is made by computing regressions of the input variables and then choosing the best ones.  The second layer is created by computing regressions of the values in the first layer along with the input variables.  (Note that we are essentially building polynomials of polynomials.)  Again, only the best are chosen by the algorithm; these are called survivors.  This process continues until the net stops improving (according to a prespecified selection criterion).

The resulting network can be represented as a complex polynomial (i.e., a familiar formula) describing the model.  You may view the formula, which contains the most significant input variables.  In some respects this is very much like regression analysis, but it is far more powerful: GMDH can build very complex models while avoiding overfitting problems.

GMDH contains several evaluation methods, called selection criteria, to determine when it should stop training.
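The layer-building loop described above can be sketched as follows.  This is a simplified illustration only, not Ward Systems Group's actual implementation: the six-term quadratic pair polynomial (the classic Ivakhnenko form), the survivor count, and the RMSE-based stopping rule are all assumptions made for the sketch.

```python
# Simplified GMDH sketch: each candidate link fits the quadratic polynomial
#   y = a + b*Xi + c*Xj + d*Xi*Xj + e*Xi^2 + f*Xj^2
# to one pair of columns; only the best "survivors" feed the next layer.
import itertools
import numpy as np

def fit_pair(xi, xj, y):
    """Least-squares fit of the 6-term quadratic polynomial for one column pair."""
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    rmse = np.sqrt(np.mean((y - pred) ** 2))
    return pred, rmse

def gmdh_layer(X, y, survivors=4):
    """Fit every column pair; keep the best `survivors` predictions as new columns."""
    results = []
    for i, j in itertools.combinations(range(X.shape[1]), 2):
        pred, rmse = fit_pair(X[:, i], X[:, j], y)
        results.append((rmse, pred))
    results.sort(key=lambda r: r[0])          # best (lowest error) first
    best = results[:survivors]
    next_X = np.column_stack([pred for _, pred in best])
    return next_X, best[0][0]                 # survivor columns plus best error

def train_gmdh(X, y, max_layers=5):
    """Add layers while the best candidate keeps improving, then stop."""
    best_err = np.inf
    for _ in range(max_layers):
        X_new, err = gmdh_layer(X, y)
        if err >= best_err:                   # net stopped getting better
            break
        best_err = err
        X = np.column_stack([X, X_new])       # survivors join the candidate pool
    return best_err
```

Note how survivor outputs are appended to the candidate pool, so later layers regress on both the raw inputs and earlier polynomial terms: polynomials of polynomials, exactly as described above.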
One of these, called Regularity, is similar to Calibration in that the net uses the constructed architecture that works best on the test set.

The other selection criteria do not need a test set, because the network automatically penalizes models that become too complex in order to prevent overtraining.  The advantage of this is that you can use all available data to train the network.  If you are not using Regularity, do not extract a test set, so that GMDH will use the entire pattern file.  If you have already extracted a test (.TST) file, erase it and the .TRN file.

A by-product of GMDH is that it recognizes the most significant variables as it trains, and it will display a list of them.

For a detailed description of how GMDH works, refer to GMDH Overview or Farlow's book (listed in References).

A word of caution:  do not use the formula generated on the Learning Module screen as the model.  It does not have enough precision in the coefficients, and it does not take into account the variable scaling.  Instead, use the code or formulas generated by the Source Code Generator.

Slabs

Click on Slab 1 to select or inspect the number of neurons in the slab.  Change the default settings by typing a new value in the text box.  The input slab must always have exactly as many "neurons" as there are inputs.

Click on the list box to change scaling for the input and output slabs.  Although GMDH will work without any scaling, for complex problems we recommend scaling your data between -1 and 1 using the linear <<-1,1>> function, unless all of the input variables are of the same type and share the same range.  This makes the algorithm more computationally stable.  For Min/Max, we recommend trying the mean plus or minus 1 standard deviation in addition to the normal min/max calculation.  Complex data such as financial predictions may work better with the mean plus or minus 3 standard deviations.  (Refer to Define Inputs and Outputs for more information.)
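The linear <<-1,1>> scaling and the mean-plus-or-minus-standard-deviation Min/Max variant can be illustrated as follows.  This is a sketch under assumptions: the function names are ours, and the exact formulas the software uses are not documented here; the code shows only the standard linear mapping onto [-1, 1].

```python
# Linear <<-1,1>> scaling: map values from [lo, hi] linearly onto [-1, 1].
import numpy as np

def linear_scale(x, lo, hi):
    """Map x so that lo -> -1 and hi -> +1."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

def minmax_bounds(x):
    """The normal Min/Max calculation: the observed extremes."""
    return x.min(), x.max()

def mean_std_bounds(x, k=1.0):
    """Mean plus or minus k standard deviations, so a single outlier
    does not dominate the scaling range (try k=3 for noisy data such
    as financial series)."""
    m, s = x.mean(), x.std()
    return m - k * s, m + k * s
```

With `mean_std_bounds`, values beyond the chosen bounds simply map outside [-1, 1]; the benefit is that the bulk of the data fills the range instead of being compressed by one extreme value.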
Click Slab N.  This is the output slab, and it must always have only one output.  If you have a multiple-output problem, split it into several problems and solve them sequentially, i.e., train with only one output at a time.

The output slab for GMDH also has a Scale Function (unlike other architectures, which have an activation function).  Everything recommended for input slab scaling applies to output scaling.

Connection Arrow (Link)

For the GMDH Architecture, there are no parameters in the link that may be defined by the user.  The ability to click on the Connection Arrow for GMDH is disabled.
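Splitting a multiple-output problem into sequential single-output runs, as recommended above, can be sketched like this.  Illustrative only: `train_one` is a hypothetical stand-in for training a single-output GMDH net (here it is just an ordinary least-squares fit).

```python
# Train one single-output model per output column, sequentially,
# since a GMDH net must have exactly one output.
import numpy as np

def train_one(X, y):
    """Hypothetical stand-in for one single-output training run:
    an ordinary least-squares fit with an intercept, for illustration."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def train_each_output(X, Y):
    """Solve a multi-output problem as several single-output problems."""
    return [train_one(X, Y[:, k]) for k in range(Y.shape[1])]
```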