# Genetic Algorithms and What They Can Do For You

## How Does a Genetic Algorithm Work?
A genetic algorithm solves optimization problems by creating a population, or group, of possible solutions to the problem. Each individual in this population carries a chromosome: one value for each variable of the problem.
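To make this concrete, here is a minimal sketch in Python. The toy problem (maximize f(x, y) = -(x - 3)² - (y + 1)², whose optimum is at x = 3, y = -1) and the variable bounds are illustrative assumptions, not something prescribed by a genetic algorithm itself:

```python
import random

# Toy problem (an assumption for illustration): maximize
# f(x, y) = -(x - 3)^2 - (y + 1)^2, whose optimum is x = 3, y = -1.
def fitness(chromosome):
    x, y = chromosome
    return -(x - 3.0) ** 2 - (y + 1.0) ** 2

# Each individual carries a chromosome: one value per problem variable.
def random_individual(bounds):
    return [random.uniform(lo, hi) for lo, hi in bounds]

# A population is simply a group of candidate solutions.
bounds = [(-10.0, 10.0), (-10.0, 10.0)]
population = [random_individual(bounds) for _ in range(20)]
```

The fitness function is what lets the algorithm judge which individuals "solve the problem best".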

The genetic algorithm actually solves your problem by allowing the less fit individuals in the population to die (peacefully) and by selectively breeding the fittest individuals (the ones that solve the problem best). This process is called selection, as in selection of the fittest. The genetic algorithm takes two fit individuals and mates them (a process called crossover). The offspring of the mated pair receive some of the characteristics of the mother and some of the father.
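The description above does not commit to particular operators, so the sketch below uses two common choices as assumptions: tournament selection (sample a few individuals at random and keep the fittest of them) and uniform crossover (each gene of the child comes from either parent with equal probability):

```python
import random

# Tournament selection: sample k individuals at random and keep the
# fittest contender; run it once for each parent needed.
def select(population, fitness, k=3):
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)

# Uniform crossover: the child takes each gene from the mother or the
# father with equal probability.
def crossover(mother, father):
    return [random.choice(genes) for genes in zip(mother, father)]
```

Because fitter individuals win tournaments more often, they are selected for mating more often, which is exactly the selective breeding described above.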

In nature, offspring often have slight abnormalities, called mutations. Usually these mutations are harmful and inhibit the children's ability to survive, but once in a while they improve the fitness of the individual (like toes joined in a web-like fashion, which help with swimming). The genetic algorithm similarly causes occasional mutations in its population by randomly changing the value of a variable.
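A sketch of mutation, where the per-gene mutation rate and the size of the random change are illustrative assumptions:

```python
import random

# Mutation: with a small per-gene probability, change a gene's value
# by a random amount, keeping it inside the variable's bounds.
def mutate(chromosome, bounds, rate=0.05, step=1.0):
    mutated = []
    for gene, (lo, hi) in zip(chromosome, bounds):
        if random.random() < rate:
            gene = min(hi, max(lo, gene + random.gauss(0.0, step)))
        mutated.append(gene)
    return mutated
```

Most of these random changes hurt, just as in nature, but the occasional helpful one lets the population reach values that crossover alone could never produce.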

After the genetic algorithm mates fit individuals and mutates some, the population undergoes a generation change. The population then consists of the offspring plus a few of the older individuals, which the genetic algorithm allows to survive into the next generation because they are the fittest in the population and we want to keep them breeding. These fittest survivors are called elite individuals.
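A generation change with elitism can be sketched as follows; `select`, `crossover`, and `mutate` stand for whichever concrete operators are in use, and the default elite count of 2 is an illustrative assumption:

```python
# One generation change with elitism: the top n_elite individuals
# survive unchanged, and the rest of the new population is filled
# with mutated offspring of selected parents.
def next_generation(population, fitness, select, crossover, mutate, n_elite=2):
    ranked = sorted(population, key=fitness, reverse=True)
    new_population = ranked[:n_elite]  # elite individuals survive as-is
    while len(new_population) < len(population):
        mother = select(population, fitness)
        father = select(population, fitness)
        new_population.append(mutate(crossover(mother, father)))
    return new_population
```

Carrying the elite forward unchanged guarantees the best solution found so far is never lost between generations.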

After dozens or even hundreds of "generations", a population eventually emerges whose individuals solve the problem very well. In fact, the fittest (elite) individual will be an optimal or near-optimal solution.
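Putting the pieces together, a complete toy run looks like the sketch below. It maximizes -(x - 3)² with a one-variable chromosome; the population size, rates, operator choices, and generation count are all illustrative assumptions:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def fitness(individual):
    return -(individual[0] - 3.0) ** 2  # higher is better; optimum at x = 3

def evolve(generations=100, pop_size=30, n_elite=2, mutation_rate=0.2):
    population = [[random.uniform(-10.0, 10.0)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        new_population = ranked[:n_elite]  # elitism
        while len(new_population) < pop_size:
            # Tournament selection for each parent.
            mother = max(random.sample(population, 3), key=fitness)
            father = max(random.sample(population, 3), key=fitness)
            # Uniform crossover, then occasional mutation.
            child = [random.choice(genes) for genes in zip(mother, father)]
            if random.random() < mutation_rate:
                child = [child[0] + random.gauss(0.0, 0.5)]
            new_population.append(child)
        population = new_population
    return max(population, key=fitness)

best = evolve()
```

After 100 generations, the elite individual sits very close to the optimum at x = 3, even though no individual in the first generation was likely to be there.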

The processes of selection, crossover, and mutation are called genetic operators.

There are many opportunities for optimization in the business world today. Some of the best known are optimizing schedules and work flow to minimize cost and time, while maximizing output.

Some of the oldest types of optimizers use simple "exhaustive search", meaning that every possible combination is tried to see which one is best. This is a very accurate approach, since you are bound to find the best combination of variables - eventually. However, it is also very inefficient: whenever there are more than a few thousand combinations, it simply takes too long to try them all. That is why users of exhaustive search optimizers tend to limit the number of variables they use, or the number of values those variables can take.
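For contrast, here is a minimal sketch of exhaustive search on a toy problem (the objective and the grid of values are illustrative assumptions). Even this small two-variable grid needs 1,681 trials; six variables on the same grid would need roughly 4.8 billion:

```python
# Exhaustive search: try every combination of variable values and keep
# the best one. Accurate, but the trial count explodes combinatorially.
def objective(x, y):
    return (x - 3.0) ** 2 + (y + 1.0) ** 2  # lower is better

# 41 values per variable, from -10.0 to 10.0 in steps of 0.5.
values = [v / 2.0 for v in range(-20, 21)]

# 41 ** 2 = 1,681 combinations for two variables; 41 ** 6 for six.
best = min(((x, y) for x in values for y in values),
           key=lambda pair: objective(*pair))
```

Doubling the resolution or adding one more variable multiplies the work, which is exactly why exhaustive-search users restrict their variables.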

The genetic algorithm, by contrast, does not try every possible combination. Instead, it intelligently works its way closer and closer to the best solution. Far more variables can therefore be used, and each variable can be allowed to take any value. Optimization can still take a good deal of time if you give a GA a fair number of variables, but it will be doing much more useful work in that time.

Optimizers more efficient than exhaustive search are also in use. If they are not genetic algorithms, however, they are most likely searching only one section of the search space at a time. Genetic algorithms search dozens or hundreds of parts of the search space simultaneously, which makes them less likely to become stuck in "local minima", as the others quite often do. (A local minimum is a merely decent solution that the optimizer can never climb out of to find better solutions.)

There are yet other optimizers in use today that use Newtonian search techniques. While these are very fast, their biggest drawback is that they require the function being optimized to be continuous and differentiable. Genetic algorithms impose no such restrictions on the space they search.