Genetic Algorithms


Oooooooooh, PRETTY!!!!

So for the longest time, I've wanted a Beowulf architecture, but setting up the cluster has always been an issue. The bigger issue, however, is thread synchronization. The nice thing about clusters is that you can put them together with whatever crappy machines you have lying around. The not-as-nice thing about clusters is that you tend not to have 20 identical machines lying around.

With the standard MPI solutions, a process is created permanently on a node, typically with a set amount of work to do before exiting or requesting more work. There are some simple techniques you can use to try to sync everything up, and by simple, I mean archaic. You could (though I truly would never recommend this) balance the workload by hand (using arithmetic, God forbid) when developing, timing each machine and such. There are more sophisticated methods, which I personally would recommend.

Thread Pooling

The basic idea behind this is to create a simple server/client architecture, where the server hosts a data set, and the clients request an element from the set, process it, and return the result. Clients may end up waiting when the server has no data left to be processed.
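
Here's a minimal sketch of the pattern, with threads and a shared queue standing in for the network (the work items and the squaring "evaluation" are made up for illustration); on a real cluster the requests would travel over sockets or MPI, but the shape is the same:

#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

std::queue<int> workQueue;        // the server's data set
std::mutex poolLock;
std::vector<int> results(100);

void client()
{
    for (;;)
    {
        int item;
        {   // request an element from the set
            std::lock_guard<std::mutex> guard(poolLock);
            if (workQueue.empty())
                return;           // no data left to process
            item = workQueue.front();
            workQueue.pop();
        }
        results[item] = item * item;   // stand-in for the real evaluation
    }
}

int main()
{
    for (int i = 0; i < 100; i++)
        workQueue.push(i);

    std::vector<std::thread> clients;
    for (int i = 0; i < 4; i++)
        clients.emplace_back(client);
    for (auto &c : clients)
        c.join();

    std::cout << "processed " << results.size() << " items" << std::endl;
    return 0;
}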

Data Pooling

This is very similar to thread pooling, but applied to the Genetic Algorithm. Imagine your server as holding a population of whatever you're working on; the clients occasionally pull the top few members from the population, work with them, then submit results back to the population. Since there is always a full population on the server, clients will not hang waiting for the server to have work for them to do. This offers completely asynchronous operation without delays (except perhaps server overload).
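
Here's the same kind of sketch for the data pool, again with threads standing in for networked clients. The Member type and the little value bump are placeholders for real genetic material and real GA work; the point is that the pool is always full, so neither the pull nor the submit ever waits for work to exist:

#include <algorithm>
#include <mutex>
#include <thread>
#include <vector>

struct Member { double value; };   // placeholder for real genetic material

std::vector<Member> pool(50);      // the server's always-full population
std::mutex poolLock;

bool lessFit(const Member &a, const Member &b) { return a.value < b.value; }

void client()
{
    for (int round = 0; round < 1000; round++)
    {
        Member m;
        {   // pull the current best member; the pool is never empty,
            // so no client ever waits for work to appear
            std::lock_guard<std::mutex> guard(poolLock);
            m = *std::max_element(pool.begin(), pool.end(), lessFit);
        }

        m.value += 0.01;   // stand-in for real GA work on the member

        {   // submit the result back over the pool's worst member
            std::lock_guard<std::mutex> guard(poolLock);
            *std::min_element(pool.begin(), pool.end(), lessFit) = m;
        }
    }
}

int main()
{
    std::vector<std::thread> clients;
    for (int i = 0; i < 4; i++)
        clients.emplace_back(client);
    for (auto &c : clients)
        c.join();
    return 0;
}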

Each of these methods is heavy on the implementation side, so I'd like to try a cluster-based solution. Tune in next time!


PARALLEL!

It's a pun. Get over it. Anyways, now it's time to talk a little parallel GA theory. There are a lot of variations, but essentially two different ways to parallelize the genetic algorithm.

Single GA Parallelism

GA is a great algorithm for parallelization. During the evaluation step, where each member of the population is given a value, the same function is called on each member of the population, one at a time. This is referred to as embarrassingly parallel: parallelizing the process requires no separation of information and no sophisticated communication between function calls. All that's required is that each function call get its own process in which to operate. Here is some sample C-like code:

#include <array>
#include <unistd.h>   // pipe(), fork(), read(), write()
#include <vector>

void evaluatePopulation(Population &pop)   // Population is your GA's own type
{
    int n = pop.size();
    std::vector<std::array<int, 2> > fd(n);   // one pipe per member
    pid_t pid = 1;
    int me = -1;

    for (int i = 0; i < n; i++)
        pipe(fd[i].data());                   // create pipe[i]

    // fork one child per member; each child remembers which member is its
    for (int j = 0; j < n; j++)
    {
        pid = fork();
        if (pid == 0)                         // this is the child process
        {
            me = j;
            break;
        }
    }

    if (pid == 0)                             // I am a child process
    {
        double theReturn = pop[me].evaluate();
        write(fd[me][1], &theReturn, sizeof theReturn);
        _exit(0);
    }
    else                                      // I am the parent
    {
        for (int j = 0; j < n; j++)
        {
            read(fd[j][0], &pop[j].value, sizeof(double));
            close(fd[j][0]);
            close(fd[j][1]);
        }
    }
}

Multiple GA Parallelism

Another common use for parallelization is simply to scale one's application. GA is an interesting case of this, because the bulk of the work is done by "random" variation. Thus, the more "random" you have, the better chance you have of finding your optimal solution. The plan, then, is to create multiple instances of your GA and have each run in a separate process. Here is some sample C-like code:

#include <unistd.h>   // pipe(), fork(), read(), write()
#include <iostream>
using namespace std;

int main()
{
    const int NPROCS = 10;
    int fd[NPROCS][2];
    pid_t pid = 1;
    int me = -1;

    GeneticAlgorithm GA;   // the GA class is assumed from your own code
    GA.init();

    for (int i = 0; i < NPROCS; i++)
        pipe(fd[i]);       // create pipe[i]

    for (int j = 0; j < NPROCS; j++)
    {
        pid = fork();
        if (pid == 0)      // this is a child process
        {
            me = j;
            break;
        }
    }

    if (pid == 0)          // I am a child process
    {
        for (int i = 0; i < 100; i++)
            GA.step();     // runs one generation of GA

        double best = GA.pop.BestMember().value;
        write(fd[me][1], &best, sizeof best);
        _exit(0);
    }
    else                   // I am the parent
    {
        for (int i = 0; i < NPROCS; i++)
        {
            double best;
            read(fd[i][0], &best, sizeof best);
            cout << "GA " << i << " = " << best << endl;
            close(fd[i][0]);
        }
    }
    return 0;
}

To improve upon this code, you could allow each process to share its best member(s) incrementally, passing them around a ring. Here is some sample C-like code:

#include <unistd.h>   // pipe(), fork(), read(), write()
#include <iostream>
using namespace std;

int main()
{
    const int NPROCS = 10;
    int ring[NPROCS][2];     // pipes for passing members around the ring
    int result[NPROCS][2];   // pipes for the final answers
    pid_t pid = 1;
    int me = -1;

    // GeneticAlgorithm and Member are assumed from your own code;
    // Member needs to be plain data for write()/read() to make sense
    GeneticAlgorithm GA;
    GA.init();

    for (int i = 0; i < NPROCS; i++)
    {
        pipe(ring[i]);
        pipe(result[i]);
    }

    for (int j = 0; j < NPROCS; j++)
    {
        pid = fork();
        if (pid == 0)        // this is a child process
        {
            me = j;
            break;
        }
    }

    if (pid == 0)            // I am a child process
    {
        for (int round = 0; round < 10; round++)
        {
            for (int i = 0; i < 10; i++)
                GA.step();   // runs one generation of GA

            // hand my best member to one neighbor, and put my other
            // neighbor's best in place of my worst
            Member best = GA.pop.BestMember();
            write(ring[me][1], &best, sizeof best);
            Member incoming;
            read(ring[(me + 1) % NPROCS][0], &incoming, sizeof incoming);
            GA.pop.WorstMember() = incoming;
        }
        Member best = GA.pop.BestMember();
        write(result[me][1], &best, sizeof best);
        _exit(0);
    }
    else                     // I am the parent
    {
        for (int i = 0; i < NPROCS; i++)
        {
            Member best;
            read(result[i][0], &best, sizeof best);
            cout << "GA " << i << " = " << best.value << endl;
            close(result[i][0]);
        }
    }
    return 0;
}

The Implementation Differences

Single GA parallelization is easy to implement in the way shown, since you don't need to pass any genetic information, just the calculated value of each member of the population. It's also very easy to distribute across a Mosix-type cluster, with one small drawback: the evaluation function needs to be very processor-intensive in order to be distributed by the Mosix kernel. A fork()ed process must be in existence for about 3-5 seconds to give the kernel time to identify and migrate it. So, for little tiny calculations (like the airline example from my previous post, or TSP if you're more familiar with these types of problems), your processes will not migrate and you're wasting your time.

Implementing this style of parallelization on a Beowulf cluster requires you to specify a node for each member to be calculated on, send over all of its genetic material, and then await a response. That also means writing a separate worker application for all the other nodes, to accept the incoming genetic material and process it, as was mentioned in the post "Clustering, A Continuation…".

If you're already using a Beowulf cluster, Multiple GA parallelization is the easy way to go. A distribution like LAM/MPI will create identical copies of your GA on each slave node, so you don't need to fork() off any processes. How you communicate between slave nodes depends on whether or not you use blocking I/O.

For blocking I/O, you'll need to use your root node as a server that relays material between slave nodes, to keep nodes from idling while they wait for an adjacent node to reach an accepting state.

For non-blocking I/O, you can use your root node to set up something called a gene-pool, where each process intermittently drops a little genetic material into the pool, then checks to see if there's any new material in the pool to grab. If not, it can continue on without waiting. This strategy can also be implemented with blocking I/O, but it's a little bit more convoluted.
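
Here's a rough sketch of the gene-pool idea using MPI's non-blocking probe. Everything about the message format is an assumption for illustration: a single double stands in for a whole member's genetic material (a real implementation would ship the full genome), and the 100-round count is arbitrary. The root (rank 0) hosts the pool; the slaves drop material in, peek for new material with MPI_Iprobe, and carry on evolving if nothing's arrived yet:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int ROUNDS = 100;

    if (rank == 0)   // root: host the gene pool
    {
        double pool = 0.0;
        int deposits = (nprocs - 1) * ROUNDS;
        while (deposits-- > 0)
        {
            MPI_Status status;
            double incoming;
            // take a deposit, keep the best, send the pool's best back
            MPI_Recv(&incoming, 1, MPI_DOUBLE, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &status);
            if (incoming > pool) pool = incoming;
            MPI_Send(&pool, 1, MPI_DOUBLE, status.MPI_SOURCE, 1,
                     MPI_COMM_WORLD);
        }
    }
    else             // slave: evolve, deposit, peek, repeat
    {
        double best = 0.0;
        int received = 0;
        for (int round = 0; round < ROUNDS; round++)
        {
            best += 0.001 * rank;   // stand-in for running the GA a while
            MPI_Send(&best, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);

            // check the pool without waiting; if nothing has come back
            // yet, just continue on
            int ready = 0;
            MPI_Iprobe(0, 1, MPI_COMM_WORLD, &ready, MPI_STATUS_IGNORE);
            if (ready)
            {
                double fromPool;
                MPI_Recv(&fromPool, 1, MPI_DOUBLE, 0, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                if (fromPool > best) best = fromPool;   // splice it in
                received++;
            }
        }
        // the root owes us one reply per deposit; collect the rest
        for (; received < ROUNDS; received++)
        {
            double fromPool;
            MPI_Recv(&fromPool, 1, MPI_DOUBLE, 0, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
    }

    MPI_Finalize();
    return 0;
}

The only blocking call on the slave side is the final drain, after all the evolving is done.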

That just about covers the basics for GA parallelism. Next time, I'll post my research from last semester, and then perhaps get into my new clustering scheme. Until next time…

So the Genetic Algorithm (GA, since I'm a lazy immediately-post-college student) is an optimization tool. It's capable of solving all kinds of really difficult problems. The GA is best used for problems where the best answer can't be found by traditional means in a reasonable amount of time. Thus, when using the Genetic Algorithm, it's best to be looking for a good-enough solution.

"Solutions," and why that's in bunny ears:

Imagine you're trying to get from LA to NY. There's a lot of ways you could get there, or "solutions" to your problem. Different airlines, different connections, first class vs. coach; the list goes on and on. If you ask someone how to get from LA to NY, you'll get a myriad of answers, so it's not a problem where you can't find a solution, it's a matter of whether or not that solution is very good. It's not a good plan to book a first class non-stop flight when you only have $45 to spend, and if you're on a deadline, you'll need to be there without a three-day layover in Cleveland. Especially if you're allergic to Cleveland. I heard about a guy once….

Anyways, there are a lot of different variables that go into choosing the best way for you to get to NY. Now imagine this as a graph theory problem: LA and NY are at different ends of the graph, and all the different cities you could stop in on your way are points on the graph in between the start and the destination. The edges in the graph represent where you can go (probably completely interconnected: every point connected to every other point). Now imagine that each of these edges has a weight, or cost (associate it with the airline ticket price, for our example problem). Since the graph is fully connected, there are a lot of ways to get from LA to NY, but which is best (AKA cheapest)?

Graph Theory and Why It Sucks So Bad

Graph problems, like the one described above, are notorious for being a royal pain. If you do the math a little, you'll notice that the number of paths increases at an alarming rate as you add points to the graph: with just ten intermediate cities in a fully connected graph, there are already nearly ten million possible routes from LA to NY. These problems typically fall into a category of problems called NP-Complete. The exact definition of an NP-Complete problem is several pages long, but it goes something like this:

A problem is NP-Complete iff it's in NP and every other problem in NP can be reduced to it in polynomial time. The practical upshot: nobody knows a polynomial-time algorithm for finding the optimal solution (the least-cost path from above), and finding one would crack every other NP problem too.

A lot of optimization problems (problems with more than one feasible solution, and possibly more than one optimal solution) also fall into this class, along with a related class called NP-hard, which basically means the problem is at least as hard as everything in NP, but isn't necessarily in NP itself. The typical way to prove a problem NP-Complete is to reduce an already-proven NP-Complete problem to a form of your problem.

On to GA, already…

So, GA is a handy-dandy little tool for solving lots of problems like these. First, you randomly generate a bunch of "solutions" (which, for the sake of my math profs, I'll now call feasible solutions) and call them your population. Then you rank each of these members of your population.

Ranking, Elitism at its Most Useful
The ranking system can be very simple, or very complicated. For our example above, the simple version would be to add up the costs on the edges in our path from LA to NY. This is known as a single-objective GA. A more complicated version might also consider travel time, in which case the GA is attempting to minimize ticket price as well as travel time. This is known as a multiple-objective GA (MOGA).
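
For the flight example, the single-objective ranking is literally just adding up edge costs. Here's a tiny sketch, with made-up cities and made-up ticket prices:

#include <iostream>
#include <vector>

// a member's value is the sum of the edge costs along its path
double pathCost(const std::vector<int> &path,
                const std::vector<std::vector<double> > &cost)
{
    double total = 0.0;
    for (size_t i = 0; i + 1 < path.size(); i++)
        total += cost[path[i]][path[i + 1]];
    return total;
}

int main()
{
    // 0 = LA, 1 = Denver, 2 = Cleveland, 3 = NY; fully connected,
    // prices invented for the example
    std::vector<std::vector<double> > cost = {
        {  0, 120, 150, 310},
        {120,   0,  90, 160},
        {150,  90,   0, 110},
        {310, 160, 110,   0}};

    std::vector<int> path = {0, 1, 3};   // LA -> Denver -> NY
    std::cout << "cost = " << pathCost(path, cost) << std::endl;   // 280
    return 0;
}

A MOGA version would return a (price, travel time) pair instead of one number.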

Now that we've ranked our population, we need to use this information to our advantage, much like the young Hitlers we all want to be. Now that we've assigned each member a value and put them in their proper place, it's time to start the breeding.

Mating, as dirty as your nerdy mind wants it to be

There's a lot of algorithms for mating, but they all essentially do the same thing: take two members of your population (parents), and make two more (children). As in biological sexual mating, the children will share the traits of both parents.

Depending on the type of problem, you can make your mating algorithms simpler or more complex, but there are some industry standards: simple crossover, which simply takes a random combination of genes from each of the parents; and blended crossover, which takes the two parents' sets of genes and blends them with a random weight (which works well 'cause sometimes kids look more like their dad than their mom, which can be really unfortunate for little Sally with the hairy back).
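
Here's a sketch of both, assuming a genome is just a vector of reals (the names and the parents' genes are made up for illustration):

#include <cstdlib>
#include <vector>

typedef std::vector<double> Genes;   // assumed genome: a vector of reals

// simple crossover: each child gene comes from one parent or the other,
// chosen at random
void simpleCrossover(const Genes &mom, const Genes &dad,
                     Genes &kid1, Genes &kid2)
{
    for (size_t i = 0; i < mom.size(); i++)
    {
        if (rand() % 2) { kid1[i] = mom[i]; kid2[i] = dad[i]; }
        else            { kid1[i] = dad[i]; kid2[i] = mom[i]; }
    }
}

// blended crossover: each child gene is a randomly weighted average of
// the parents' genes, so one kid can take more after dad than mom
void blendedCrossover(const Genes &mom, const Genes &dad,
                      Genes &kid1, Genes &kid2)
{
    for (size_t i = 0; i < mom.size(); i++)
    {
        double w = (double)rand() / RAND_MAX;   // weight in [0, 1]
        kid1[i] = w * mom[i] + (1 - w) * dad[i];
        kid2[i] = (1 - w) * mom[i] + w * dad[i];
    }
}

int main()
{
    Genes mom = {1, 2, 3, 4}, dad = {9, 8, 7, 6};
    Genes kid1(4), kid2(4);
    simpleCrossover(mom, dad, kid1, kid2);
    blendedCrossover(mom, dad, kid1, kid2);
    return 0;
}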

Matching, dating for bits

The key part of dating for a bit is to put your best foot forward. Unfortunately, since it's been a long time since RAM manufacturers have included the "foot" option (though SanDisk is getting back to it with their new MP3 players), that's not always as simple as it sounds. So we take more simplistic, and unfortunate, approaches. Each of these algorithms has its benefits and drawbacks (both real and theoretical), but I won't address them here. If you read between the lines, you can probably determine my preferences. I'm not a subtle writer.

Random Pairing
I really don't feel the need to go too in depth on this one. It's random. Deal with it.

Best-First
Again, kinda simplistic. Take the two best and mate them, then the third and fourth, the fifth and sixth, etc. Every generation, you end up with a new set of super-jocks, and down at the bottom, your unfortunate group that has epilepsy with a side of polio.

Tournament Pairing
Create a tournament bracket, NCAA March Madness style. Say each member has a certain percent chance of winning each round of its bracket, based on its rank compared to its opponent's. Each member draws a random number between 0 and its percentage, and whichever member draws the larger number moves on to the next round. The better-valued members will always have a better chance of winning, but statistically, the lower members still have a shot at being that Cinderella team.
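
Here's a sketch of one way to run that bracket, assuming fitness values are positive and the population size is a power of two; the fitness-proportional win chance is my own stand-in for "based on its rank":

#include <cstdlib>
#include <iostream>
#include <vector>

struct Member { double value; };   // assumes positive fitness values

// one match: each contestant draws a random number between 0 and its win
// chance; the larger draw advances. Fitter members usually win, but the
// Cinderella story stays possible.
int playMatch(const std::vector<Member> &pop, int a, int b)
{
    double total = pop[a].value + pop[b].value;
    double drawA = (pop[a].value / total) * rand() / RAND_MAX;
    double drawB = (pop[b].value / total) * rand() / RAND_MAX;
    return (drawA > drawB) ? a : b;
}

// single-elimination bracket; the last index standing gets to mate
int runBracket(const std::vector<Member> &pop)
{
    std::vector<int> round;
    for (size_t i = 0; i < pop.size(); i++) round.push_back((int)i);
    while (round.size() > 1)
    {
        std::vector<int> next;
        for (size_t i = 0; i + 1 < round.size(); i += 2)
            next.push_back(playMatch(pop, round[i], round[i + 1]));
        round = next;
    }
    return round[0];
}

int main()
{
    std::vector<Member> pop(8);
    for (size_t i = 0; i < pop.size(); i++)
        pop[i].value = (double)(i + 1);   // member 7 is the favorite
    std::cout << "winner: member " << runBracket(pop) << std::endl;
    return 0;
}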

Plutonium in the Water
Now, I don't actually remember if plutonium is one of those horribly feared elements that'll cause you to grow a third limb if you look at it cross-eyed, but everyone knows what I'm talking about.

Every family has that red-headed kid whose red hair nobody can explain, so everyone suspects the milkman, 'cause he's always been a little off and never looks mom in the eye. But maybe mom's a standup lady and it's just a random mutation. It can happen. Really.

After you've finished mating, the idea is to go through and randomly mutate a few genes, just to keep things interesting. There are actually a lot of reasons for mutation, which I'll address at a later date, but this will do for now.
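
A sketch of the mutation pass, using the same vector-of-reals genome as the crossover sketch above; the 1% rate and the size of the nudge are arbitrary knobs, not gospel:

#include <cstdlib>
#include <vector>

typedef std::vector<double> Genes;

// walk the genome and occasionally scribble on a gene
void mutate(Genes &g, double rate)
{
    for (size_t i = 0; i < g.size(); i++)
        if ((double)rand() / RAND_MAX < rate)
            g[i] += ((double)rand() / RAND_MAX) - 0.5;   // small random nudge
}

int main()
{
    Genes g(10, 1.0);
    mutate(g, 0.01);   // mutate roughly 1% of genes
    return 0;
}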

Lather, rinse, repeat.

Now that you've done all this, it is highly recommended that you do it again. And again. And again. Thousands of times, actually. Each of these iterations is called a generation, and (typically) the more generations, the better your answer gets. That isn't entirely true, but again, that discussion goes beyond the scope of a first-time reader.
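
To tie the whole recipe together, here's a complete toy GA: rank, mate best-first with blended crossover, mutate, repeat. It maximizes -(x-3)^2, so the population should crowd around x = 3; every knob in it (population size, rates, generation count) is an arbitrary stand-in:

#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <vector>

struct Member { double gene, value; };

double evaluate(double x) { return -(x - 3) * (x - 3); }
double frand() { return (double)rand() / RAND_MAX; }

int main()
{
    std::vector<Member> pop(20);
    for (auto &m : pop) m.gene = 10 * frand() - 5;   // random init in [-5, 5]

    for (int gen = 0; gen < 1000; gen++)             // lather, rinse, repeat
    {
        for (auto &m : pop) m.value = evaluate(m.gene);   // rank everyone
        std::sort(pop.begin(), pop.end(),
                  [](const Member &a, const Member &b){ return a.value > b.value; });

        // best-first mating with blended crossover: the top half breeds,
        // the children overwrite the bottom half
        for (size_t i = 0; i + 1 < pop.size() / 2; i += 2)
        {
            double w = frand();
            pop[pop.size() / 2 + i].gene     = w * pop[i].gene + (1 - w) * pop[i + 1].gene;
            pop[pop.size() / 2 + i + 1].gene = (1 - w) * pop[i].gene + w * pop[i + 1].gene;
        }

        for (auto &m : pop)                               // mutate a few genes
            if (frand() < 0.05) m.gene += frand() - 0.5;
    }

    for (auto &m : pop) m.value = evaluate(m.gene);
    std::cout << "best gene ~ "
              << std::max_element(pop.begin(), pop.end(),
                 [](const Member &a, const Member &b){ return a.value < b.value; })->gene
              << std::endl;
    return 0;
}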

Now that everyone has a good idea of what GA is all about, I can continue my fireside chats about my research. Until next time…