NeuroShell 2 is our legacy neural network product, targeted at computer science instructors and students. It contains the classic algorithms and architectures popular with professors and graduate students.
NeuroShell 2 combines powerful neural network architectures, a Microsoft® Windows icon-driven user interface, sophisticated utilities, and popular options to give users the ultimate neural network experimental environment. It is recommended for academic users, or for users concerned with classic neural network paradigms like backpropagation. Users interested in solving real problems should consider the NeuroShell Predictor, NeuroShell Classifier, or NeuroShell Trader.
Beginner's Neural Networks
NeuroShell 2 includes both Beginner's and Advanced Systems in addition to Runtime Facilities.
NeuroShell 2 includes interfaces for beginners and experts alike. The Beginner's System in NeuroShell 2 is designed for novices and first-time users. It includes a simplified set of procedures for building and executing a complete, powerful neural network application. The Beginner's System uses our powerful enhanced backpropagation paradigm and supplies default values for neural network parameters such as learning rate, momentum, and number of hidden neurons. To use the system, you enter data, specify the inputs and outputs, and train the neural network. You can then apply the trained neural network to new data and export the results to other programs.
Advanced Neural Networks
The NeuroShell 2 Advanced Options screen displays the independent modules that may be used to create a neural network application.
The Advanced System in NeuroShell 2 gives experienced neural network users the ability to create and execute 16 different neural network architectures, with more user control than the Beginner's System provides.
Users can choose to enter their data in their familiar spreadsheet program or work in the NeuroShell 2 Datagrid. The program uses spreadsheet files as its internal format so you can view or edit them yourself. In fact, NeuroShell 2 will call your favorite Windows spreadsheet program.
File Import and Export
NeuroShell 2 imports ASCII, binary, and spreadsheet files.
The program imports and exports ASCII and binary file formats. The File Export Module allows you to convert NeuroShell 2 files to ASCII or binary files. It will also merge data into an existing spreadsheet or print a file.
The Rules Module allows you to create If/Then/Else rules to pre-process data.
The Symbol Translate module converts alphanumeric data or strings into numbers which can be processed by the neural network. For example, you may want to convert "cold", "warm", and "hot" to 1, 2, and 3 respectively.
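The cold/warm/hot conversion described above can be sketched as a simple lookup table. This is only an illustration of the idea, not the module's actual interface; NeuroShell 2's Symbol Translate module builds the mapping for you.

```python
# Hypothetical sketch of symbol translation: map category strings to numbers
# so a neural network can process them. The mapping below is illustrative.
mapping = {"cold": 1, "warm": 2, "hot": 3}

def translate(column):
    """Replace each symbol in a data column with its numeric code."""
    return [mapping[value] for value in column]

print(translate(["hot", "cold", "warm"]))  # → [3, 1, 2]
```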
The Rules Module allows you to create If/Then/Else type rules to preprocess data prior to feeding it to the neural network. For example, you can use two input variables to create a third. You may believe that if you combine the price of gold and IBM stock and the total is greater than 400, the economy is improving. You can translate this belief into a new neural network input with a single rule.
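A minimal sketch of such a rule, written here in Python rather than the Rules Module's own syntax; the column names and the 1/0 coding of the new input are illustrative assumptions.

```python
# Hypothetical If/Then/Else rule: combine two inputs (gold price and IBM
# stock price) into a new indicator input for the network.
def economy_improving(gold_price, ibm_price):
    """If gold + IBM > 400 Then 1 (improving) Else 0."""
    return 1 if gold_price + ibm_price > 400 else 0

rows = [(310.0, 105.5), (250.0, 98.0)]       # (gold, IBM) per pattern
new_inputs = [economy_improving(g, i) for g, i in rows]
print(new_inputs)  # → [1, 0]
```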
You may also use the Rules Module to post-process the neural network's predictions and classifications.
Test Set Extract
NeuroShell 2 gives you great flexibility in choosing which columns of your spreadsheet are inputs and which are outputs.
NeuroShell 2 makes it easy to pull out test and production data sets from the training data. The Test Set Extract Module offers four different methods for selecting data.
The Design Module offers a palette of 16 different neural network architectures/paradigms for different types of data. (Users may customize each neural network architecture by setting parameters.)
The NeuroShell 2 Design Module offers 16 "icon selectable" architectures, each with several learning methods. You only need to point and click on "slabs" of neurons or weight links to set parameters. Slabs are subsets of layers.
Each neural network architecture allows you to click on slabs of neurons and links to set parameters.
In the Design Module, users can specify their own learning rate, momentum, activation functions, and initial weight ranges on a per-layer basis; elect rotational or random pattern selection; choose multiple criteria for stopping training; and select different methods for handling missing data. Users can view weight values during training, including weights displayed in modified Hinton diagrams, and weights may be modified during learning.
Twelve of the architectures include the Ward Systems Group version of backpropagation, which has been enhanced for speed and accuracy.
This is the standard type of backpropagation neural network in which every layer is connected or linked only to the previous layer.
This is the type of backpropagation neural network in which every layer is connected or linked to every previous layer.
Recurrent Neural Networks
This type of backpropagation neural network is often used in predicting financial markets because recurrent neural networks can learn sequences. Therefore, they are excellent for time series data.
A regular feed forward neural network responds to a given input pattern with exactly the same output every time that pattern is presented. A recurrent neural network may respond to the same input pattern differently at different times, depending upon the patterns that have been presented previously. Recurrent neural networks build a long-term memory in their internal neurons.
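The history-dependent behavior described above can be illustrated with a toy memory update. This is an assumption-level sketch of the general idea, not NeuroShell 2's actual recurrent equations; the blending rule and dampening value are invented for illustration.

```python
# Toy sketch of a recurrent memory: each new input is blended with a
# decayed copy of the previous memory via a dampening factor.
def recurrent_memory(inputs, dampening=0.5):
    """Return the memory trace after each input in a sequence."""
    memory, trace = 0.0, []
    for x in inputs:
        memory = dampening * memory + (1.0 - dampening) * x
        trace.append(memory)
    return trace

# The same final input (1.0) yields a different memory depending on history:
print(recurrent_memory([0.0, 1.0]))  # → [0.0, 0.5]
print(recurrent_memory([1.0, 1.0]))  # → [0.5, 0.75]
```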
This type of backpropagation neural network is able to detect different features in the data with the use of multiple slabs of neurons in the hidden layer, each with a different activation function. Activation functions are functions used internally in neurons to "fire" the neurons. When you apply a different activation function to each slab in the hidden layer, the network can discover novel features in a single pattern processed through the network.
All of the backpropagation neural networks include the Calibration feature, which prevents overtraining (thereby greatly reducing training time), and increases the neural network's ability to generalize well on new data.
This is a training method for feed forward neural networks that operates much faster than backpropagation. TurboProp offers users the additional advantage of not requiring learning rate and momentum to be set.
The Kohonen Self Organizing Map neural network used in NeuroShell 2 is a type of unsupervised neural network, meaning it can learn without being shown correct outputs in sample patterns. These networks are able to separate data into a specified number of categories.
Probabilistic Neural Networks (PNN) are known for their ability to train on sparse data sets and they train in only one pass of the training set! PNN separates data into a specified number of output categories. PNN neural networks are often able to function as soon as two training patterns are available, so training can be incremental.
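The one-pass, incremental character of PNN can be seen in a minimal sketch: "training" is just storing the patterns, and classification sums one Gaussian kernel per stored pattern for each class. This is a generic textbook PNN, not NeuroShell 2's implementation; the smoothing value is an arbitrary assumption.

```python
import math

# Minimal PNN sketch: one Gaussian kernel per training pattern, summed per
# class; the class with the largest summed activation wins.
def pnn_classify(train, x, smoothing=0.5):
    """train: list of (pattern_vector, class_label); x: new pattern."""
    scores = {}
    for pattern, label in train:
        dist2 = sum((a - b) ** 2 for a, b in zip(pattern, x))
        scores[label] = scores.get(label, 0.0) + math.exp(-dist2 / (2 * smoothing ** 2))
    return max(scores, key=scores.get)

# Two training patterns are already enough to start classifying:
train = [((0.0, 0.0), "A"), ((1.0, 1.0), "B")]
print(pnn_classify(train, (0.1, 0.2)))  # → A
```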
Genetic Adaptive nets optimize architectures and select inputs.
Like PNN neural networks, General Regression Neural Networks (GRNN) are known for their ability to train in only one pass of the training set using sparse data sets. Rather than categorizing data like PNN, however, GRNN applications are able to produce continuous valued outputs.
GRNN is especially useful for continuous function approximation, and can fit multidimensional surfaces through data. In our tests we found that GRNN responds much better than Backpropagation to many types of problems. Note: GRNN is not the same as regression analysis!
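A generic GRNN can be sketched as a kernel-weighted average of the training outputs (Nadaraya-Watson regression), which is why it produces continuous values in one pass. Again this is a textbook sketch under an assumed Gaussian kernel, not NeuroShell 2's code.

```python
import math

# Minimal GRNN sketch: predict a continuous value as the kernel-weighted
# mean of the stored training outputs.
def grnn_predict(train, x, smoothing=0.5):
    """train: list of (input_vector, output_value); returns a weighted mean."""
    num = den = 0.0
    for xi, yi in train:
        dist2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        w = math.exp(-dist2 / (2 * smoothing ** 2))
        num += w * yi
        den += w
    return num / den

train = [((0.0,), 0.0), ((1.0,), 1.0)]
print(grnn_predict(train, (0.5,)))  # → 0.5 (by symmetry)
```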
Genetic Adaptive Nets
Our Genetic Adaptive feature uses a Genetic Algorithm to optimize the neural network structure of our GRNN and PNN neural networks. At the same time, the genetic algorithm eliminates bad inputs and gives you a sensitivity factor for the ones it keeps. This feature makes our Genetic Adaptive neural networks among the most powerful neural networks available today.
The GMDH Learning window.
The GMDH Advanced Setup window. This is for more mathematically sophisticated users.
NeuroShell 2 includes a very powerful architecture called Group Method of Data Handling (GMDH), or polynomial nets. GMDH neural networks derive a nonlinear polynomial formula relating the values of the most important inputs to the output variable. In fact, the GMDH network is unlike regular feedforward neural networks and was not originally represented as a neural network at all. It is implemented with polynomial terms in the links and a genetic-like component that decides how many layers are built. The result of training at the output layer can be represented as a polynomial function of all or some of the inputs.
GMDH works by building successive layers with complex links (or connections) that are the individual terms of a polynomial. These polynomial terms are created using linear and non-linear regression. The initial layer is simply the input layer. The first layer created is made by computing regressions of the input variables and then choosing the best ones. The second layer is created by computing regressions of the values in the first layer along with the input variables. (Note that we are essentially building polynomials of polynomials.)
Again, only the best are chosen by the algorithm. These are called survivors. This process continues until the neural network stops improving (according to a pre-specified selection criterion).
The resulting neural network can be represented as a complex polynomial (i.e., a familiar formula) description of the model just like the ones statisticians use. You may view the formula, which contains the most significant input variables. In some respects, it is very much like using regression analysis, but it is far more powerful than regression analysis. GMDH can build very complex models while avoiding overfitting problems.
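The layer-building procedure described above can be illustrated with a toy sketch: fit a small polynomial on each pair of candidate inputs, keep the "survivors" with the lowest error, and feed them into the next layer. This is an assumption-level illustration of the GMDH idea, not NeuroShell 2's exact algorithm, selection criterion, or polynomial form.

```python
import itertools
import numpy as np

# Toy GMDH layer: for each pair of candidate inputs, fit
# y ≈ a + b*u + c*v + d*u*v by least squares, then keep the best few
# fitted columns ("survivors") as the next layer's candidate inputs.
def gmdh_layer(X, y, survivors=2):
    candidates = []
    for i, j in itertools.combinations(range(X.shape[1]), 2):
        u, v = X[:, i], X[:, j]
        A = np.column_stack([np.ones_like(u), u, v, u * v])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        pred = A @ coef
        candidates.append((np.mean((pred - y) ** 2), pred))
    candidates.sort(key=lambda c: c[0])
    return np.column_stack([pred for _, pred in candidates[:survivors]])

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X[:, 0] * X[:, 1] + 0.1 * X[:, 2]
layer1 = gmdh_layer(X, y)                              # first created layer
layer2 = gmdh_layer(np.column_stack([layer1, X]), y)   # polynomials of polynomials
```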
Calibration solves one of the most difficult problems for neural networks -- knowing when to stop training. The longer the neural network learns the training set, the closer it gets to "memorizing" the training set. If you present the network with a pattern that was in the training set, it will make a very accurate prediction. If, however, you present it with a pattern that wasn't in the training set, it may not "generalize" well on data it hasn't "seen" before if you have let the network learn too long. Calibration corrects this.
For backpropagation neural networks, Calibration saves the neural network at the point where it gives the most accurate answers for patterns outside the training set.
For GRNN and PNN neural networks, Calibration finds the optimum smoothing factor, a parameter that is used when you apply the neural network. The Genetic Adaptive nets calibrate with the genetic algorithm.
Although not nearly as complete and powerful as the NeuroShell Trader, NeuroShell 2 has standard features and options that make it easier to import and preprocess market and futures data.
ASCII and Spreadsheet File Convert
ASCII and spreadsheet file importing allows you to import data from some of the popular price retrieval services.
Recurrent Neural Networks and Ward Nets
NeuroShell 2 includes three types of fully recurrent neural networks, which are excellent for time series, stock market, futures, and commodities prediction. These architectures have long-term memories with adjustable dampening factors. Ward Nets capture different views of the same data which result in more accurate predictions.
Market Indicator Package (See Included Features Page)
The Market Indicator Package provides 150 popular and powerful technical indicators for financial or other time series data.
The Market Indicator package is a NeuroShell 2 option which provides 150 popular and powerful technical indicators you can apply to your financial or other time series data. With this incredible option you eliminate much of the need for purchasing other trading software. You can apply indicators to other indicators to the depth you desire! When you add the Optimizer to the Market Indicator Package, you can automatically find the most appropriate indicators to apply to your raw data or other indicators. It will also find the most appropriate parameters for each indicator, e.g., how many days to use in a moving average. These, together with our Graphics and Rules modules, provide you with features of powerful trading systems built into your neural network system.
Graphics facilities enable you to look for patterns in your data.
NeuroShell® 2 includes extensive graphics utilities such as line charts, bar charts, scatter plots and high-low-close graphs. Options allow you to add 3-D effects and change colors or types of graphs.
The graphs may be printed on any Windows compatible printer, copied to the clipboard, or saved in a file. Before and after neural network processing, you may use the Variable Graphs module to create four different types of graphs:
Graph Variable(s) Across All Patterns
This graph type lets you graph different types of variables, such as advertising expenditures and cost of goods sold, across all patterns in a file. This graph is also useful for graphing time series data.
Graph Variable Sets in a Pattern
Use this graph to examine your data if all of the variables in a pattern are of the same type, e.g., 100 points in a physiological signal such as an electrocardiogram.
Correlation Scatter Plot
Use this graph type to make a scatter plot of one variable against another through all patterns. At the same time the linear correlation coefficient is computed.
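The linear correlation coefficient the scatter plot computes is Pearson's r, which can be sketched directly from its definition:

```python
# Pearson's linear correlation coefficient between two variables across
# all patterns: covariance divided by the product of standard deviations.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # → 1.0 (perfectly linear)
```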
High-Low-Close Graph
This graph allows you to select variables from your data file that are displayed as the high, low, and close values of a stock price. The graph reveals trends in your data.
NeuroShell 2 allows you to display graphs of training set/test set errors while training backpropagation neural networks. When training Kohonen neural networks, you can display a graph of category distributions in either a bar or pie chart. The Probabilistic Neural Networks (PNN) and General Regression Neural Networks (GRNN) Learning Modules allow you to display a smoothing factor optimization graph. GMDH nets graph the criterion value for the created formula against the layer number.
Statistics can be viewed when a trained neural network is applied.
NeuroShell 2 lets you analyze how well your trained neural network is doing in a variety of ways.
For Backpropagation neural networks, the Contribution Factor Module produces a number for each input variable. This number is a rough measure of the importance of that variable in predicting the neural network's output, relative to the other input variables in the same neural network. A similar sensitivity factor is available with our Genetic Adaptive Networks.
When you apply a trained Backpropagation neural network, you can choose to compute several different statistics including R squared, the statistical coefficient of multiple determination. You can also compute the mean squared error, the mean absolute error, and the minimum and maximum absolute error of the actual answers compared to the neural network's predicted answers. Also displayed for Backpropagation neural networks is the correlation coefficient (r), a statistical measure of the strength of the relationship between the actual and predicted outputs.
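The apply-time statistics listed above can be sketched from their standard definitions, given columns of actual and predicted outputs (R squared here is the usual coefficient of determination):

```python
# Standard regression statistics for actual vs. predicted outputs.
def apply_stats(actual, predicted):
    n = len(actual)
    errors = [a - p for a, p in zip(actual, predicted)]
    mean_a = sum(actual) / n
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return {
        "r_squared": 1.0 - sum(e ** 2 for e in errors) / ss_tot,
        "mse": sum(e ** 2 for e in errors) / n,
        "mae": sum(abs(e) for e in errors) / n,
        "min_abs_err": min(abs(e) for e in errors),
        "max_abs_err": max(abs(e) for e in errors),
    }

stats = apply_stats([1.0, 2.0, 3.0], [1.1, 1.9, 3.0])
print(round(stats["r_squared"], 4))  # → 0.99
```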
The computed statistics for General Regression Neural Networks are the same as those for Backpropagation neural networks.
For Probabilistic Neural Networks, which categorize data, the computed statistics are related to the numbers of correctly classified patterns, including the numbers of true positives, false positives, true negatives, and false negatives.
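For a two-class problem, the four counts mentioned above come from comparing actual and predicted labels pattern by pattern (1 denotes the positive class in this sketch):

```python
# Confusion counts for a two-class problem: true/false positives/negatives.
def confusion_counts(actual, predicted):
    tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
    fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
    tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
    return tp, fp, tn, fn

print(confusion_counts([1, 0, 1, 0], [1, 1, 0, 0]))  # → (1, 1, 1, 1)
```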
The Group Method of Data Handling (GMDH) gives you a non-linear formula for your model. Since it uses regression analysis to build the models, the formula is a familiar one to statisticians. At the same time, GMDH removes inputs that are not relevant to the model.
NeuroShell 2's Runtime System allows you to call a DLL to fire your neural network or generate source code.
The Runtime System, included in the package, lets you execute your trained neural network through a Dynamic Link Library (DLL) that may be called from other programs or Microsoft Excel. The Runtime Module will also generate Visual Basic or C source code for your trained neural network that may be called from non-Windows programs. NeuroShell 2 includes royalty-free, unlimited distribution rights for any networks that you create using NeuroShell 2.
The Dynamic Link Library (DLL) Primer
Once a network has been trained, you can process files of input data in NeuroShell 2. However, you can also build interactive programs or Microsoft Excel spreadsheets with your neural network in them. The NeuroShell 2 DLL Primer can save your neural network so that it can be reloaded later by a NeuroShell 2 DLL.
For example, if you later want to execute this neural network from a program you write in C, Pascal, Microsoft Visual Basic, Visual C++™, Access Basic, Delphi, etc., you would use the DLL function FireNet to execute it. Your program passes neural network inputs to FireNet, and FireNet gives back neural network outputs. In Excel, you place the function Predict into a cell, passing it neural network inputs. A neural network output will be placed into that cell for you.
Source Code Generator
No matter which neural network architecture you choose, the Source Code Generator makes it possible to port trained neural networks from NeuroShell 2 to non-Windows platforms by generating C source code (also Basic and plain formulas). If you're working with Windows platforms, however, the DLL module is more efficient to use.
Icon-Driven User Interface
Neural Network Architectures
Genetic Adaptive Networks
PNN and GRNN nets that utilize a genetic algorithm to find the best inputs and neural network architecture.
TurboProp™ 1 (Not to be confused with TurboProp™ 2 in our newer products)
Enables users to create a neural network without setting learning rate and momentum. It is much faster than backpropagation while still generalizing very well.
Prevents overtraining (overfitting) and increases generalization.
Every layer (or even the sublayers called slabs) can have its own learning rate, momentum, activation functions, and initial weight ranges. Pattern selection may be rotational or random. Select multiple criteria for stopping neural network training. Select different methods for handling missing data. Weights may be modified during training. A text description file is attached to each neural network.
NeuroShell 2 has theoretical limits of 65,535 rows, 32,768 columns, and 16,000 neurons. However, neural networks will not train effectively with much over 2,000 inputs, and some neural network types lose effectiveness at about 400 inputs and 5,000 to 10,000 rows (patterns), and may not function at all near the upper limits. All 65,535 rows are usable, but many older spreadsheet programs will display only 16,000 rows; our own Datagrid displays 32,000.
NeuroShell 2 has three different ways of determining sensitivity of inputs depending on the neural network type.
The Rules Module lets you apply If/Then/Else rules to your pattern files.
Symbol Translate Module
The Symbol Translate Module will translate strings to numerics and vice versa.
NeuroShell 2's on-line tutorial directs you through the steps required to use NeuroShell 2 and create a working application.
Two tutorial example programs, including one which uses stock market data, are included in the on-line manual and on the distribution diskettes. Sample programs are included for backpropagation, Kohonen, PNN, and GRNN neural networks. Several other examples are also provided.
Import and export files in the following formats: ASCII, binary, and spreadsheet.
Market Indicator Package
The Market Indicator Package provides 150 popular and powerful technical indicators for financial or other time series data.
This feature is also called the Time Series Indicator Package. The Market Indicator Package allows you to build a complex set of indicators from raw stock market or futures data such as high, low, close, and volume. When doing financial or other time series predictions, it is usually necessary to enhance the time series data by showing the neural network specific features from the past. This enhancement makes it easier for the neural network to determine trends. The Indicator Package has 150 technical indicators that market analysts have been using for years for forecasting. The Indicator Package calculates these indicators from your raw data and puts them into your spreadsheet so they can be used as additional inputs to the neural network.
Market Indicator Package with Optimizer
The Optimizer can automatically select the best indicators and corresponding parameters for the indicators.
The Optimizer included with the Market Indicator Package allows you to select the best indicators and corresponding parameters for the indicators. For example, suppose you choose a simple indicator like a lagged moving average. How long should you lag it and how long should you average it? The Optimizer will tell you these things based upon your data. It will also run through all indicators and tell you the best ones to use for your data.
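The lagged-moving-average question above can be sketched as a small grid search. This is a generic illustration of what such an optimization does, not the Optimizer's actual method or scoring; `lagged_ma` and `best_params` are hypothetical helpers, and the toy series and mean-squared-error criterion are assumptions.

```python
# Grid-search the lag and window of a lagged moving average, keeping the
# combination whose output best matches the series (lowest MSE).
def lagged_ma(series, lag, window):
    out = []
    for t in range(len(series)):
        start, end = t - lag - window + 1, t - lag + 1
        out.append(sum(series[start:end]) / window if start >= 0 else None)
    return out

def best_params(series, max_lag=4, max_window=5):
    best = None
    for lag in range(1, max_lag + 1):
        for window in range(1, max_window + 1):
            ma = lagged_ma(series, lag, window)
            pairs = [(m, y) for m, y in zip(ma, series) if m is not None]
            mse = sum((m - y) ** 2 for m, y in pairs) / len(pairs)
            if best is None or mse < best[0]:
                best = (mse, lag, window)
    return best[1], best[2]

series = [float(i % 4) for i in range(40)]   # toy series repeating every 4 steps
print(best_params(series))  # → (4, 1): lag matches the period exactly
```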
Batch Processor Option
The Batch Processor Option automates running multiple problems or variations of a single problem.
NeuroShell 2 has an intuitive graphical user interface that lets you configure your neural networks and run them interactively. But you may have a need to set up dozens, or even hundreds, of neural networks and run them one after another automatically without being near the computer (for example, overnight). The Batch Processor does this. Basically, you load each problem into a row of a matrix. The first row is a base problem, and the subsequent rows are variations of the base problem (usually). The columns contain the training sets, the configuration information, and the data sets the neural networks will be applied to after training. There are multiple tabs, so you can bring up multiple matrices, each with its own base problem.
You may also use the Batch Processor to apply previously trained neural networks to different data files.
When all runs are completed, a log file will show which neural networks give the best results. You get a statistical summary of each neural network and the name(s) of the apply files.
Race Handicapping Option
The Race Handicapping Option is useful for predicting race outcomes and solving other ranking problems.
This Package is also called the Data Ranking Package. Neural networks are often used successfully to predict the outcome of a horse race or dog race, or for other ranking problems. Input data may include variables about each horse in the field, with the neural network ranking the horses in terms of probable finishing places. Neural networks, however, do much better at ranking two horses at a time than at ranking as many as eight or more at once. So the handicapping package explodes the training patterns into all permutations of eight horses taken two at a time. The neural network then uses that file to learn to rank two horses. The pair rankings are then automatically analyzed to give you the ranking for the whole field of eight.
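The "explosion" step described above amounts to taking all ordered pairs from the field; with eight horses that produces 8 × 7 = 56 training pairs per race. A minimal sketch (the per-horse feature rows are placeholders):

```python
import itertools

# Explode a race field into all ordered pairs so the network can learn to
# rank two horses at a time.
def explode_field(horses):
    """horses: list of per-horse feature rows. Returns all ordered pairs."""
    return list(itertools.permutations(horses, 2))

field = [f"horse_{n}" for n in range(8)]
pairs = explode_field(field)
print(len(pairs))  # → 56 (permutations of 8 horses taken 2 at a time)
```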
The Graphics Add-On creates a three-dimensional plot of a trained neural network's output.
The Graphics Add-On creates a three-dimensional plot of a trained network's output value as it is calculated by varying two selected input variables. The inputs are displayed as the X and Y dimensions; the output is displayed as dimension Z.
The graphs are useful in determining how two input variables influence the neural network's output. The program selects the most significant inputs as the default values for the X and Y dimensions.
For backpropagation neural networks, the most significant input variables are determined by the Contribution Factors Module. For Genetic Adaptive General Regression neural networks, the genetic algorithm determines the most significant inputs. You also have the option of selecting which inputs to graph.
The program includes a Graphics Wizard which allows you to change picture properties.
Ward Systems Group, Inc.
Copyright © 1997-2020 Ward Systems Group, Inc. All rights reserved.