PARALLEL!

It's a pun. Get over it. Anyway, now it's time to talk a little parallel GA theory. There are lots of variations, but essentially two different ways to parallelize the genetic algorithm.

Single GA Parallelism

GA is a great algorithm for parallelization. During the evaluation step, where each member of the population is assigned a value, the same function is called on each member, one at a time. This is referred to as embarrassingly parallel: parallelizing the process requires no partitioning of data and no sophisticated communication between function calls. All that is required is that each function call be given its own process in which to operate. Here is some sample C-like pseudocode:

void evaluatePopulation(pop)
{
    for( each member of the population i )
    {
        create pipe[i];
        open(pipe[i]);
    }

    for( each member of the population j )
    {
        fork();
        if(this is the child process)
        {
            ME = j;
            break;
        }
    }

    if(I am a child process)
    {
        TheReturn = pop[ME].evaluate();
        send(pipe[ME], TheReturn);
        exit(0);
    }
    else//I am the parent
    {
        for( each member of the population j )
        {
            pop[j].value = recv(pipe[j]);
            close(pipe[j]);
        }
    }
}

Multiple GA Parallelism

Another common use for parallelization is simply to scale one's application. GA is an interesting case of this, because the bulk of the work is done by "random" variation. Thus, the more "randomness" you throw at the problem, the better your chance of finding the optimal solution. The plan, then, is to create multiple instances of your GA and have each run in a separate process. Here is some sample C-like pseudocode:

void main()
{
    GeneticAlgorithm GA;
    GA.init();

    for( 10 times [i] )
    {
        create pipe[i]
        open(pipe[i]);
    }

    for( 10 times )
    {
        fork();
        if(this is a child process)
        {
            break;
        }
    }

    if( I am a child process )
    {

        for(int i = 0; i < 100; i++)
        {
            GA.step(); // runs one generation of GA
        }

        send(pipe[ME], GA.pop.BestMember());
    }
    else
    {
        for( 10 times [i] )
        {
            best = recv(pipe[i]);
            cout << "GA " << i << " best = " << best.value << endl;
            close(pipe[i]);
        }
    }
}

To improve upon this code, you could have the processes share their best member(s) incrementally as they run. Here is some sample C-like pseudocode:

void main()
{
    GeneticAlgorithm GA;
    GA.init();

    for( 10 times [i] )
    {
        create pipe[i]
        open(pipe[i]);
    }

    for( 10 times )
    {
        fork();
        if(this is a child process)
        {
            break;
        }
    }

    if( I am a child process )
    {
        for( 10 times )
        {
            for(int i = 0; i < 10; i++)
            {
                GA.step(); // runs one generation of GA
            }

            send(pipe[ME], GA.pop.BestMember());
            GA.pop.WorstMember = recv(pipe[(ME+1)%10]);
        }
        send(pipe[ME], GA.pop.BestMember());

    }
    else
    {
        for( 10 times [i] )
        {
            best = recv(pipe[i]);
            cout << "GA " << i << " best = " << best.value << endl;
            close(pipe[i]);
        }
    }
}

The Implementation Differences

Single GA parallelization is easy to implement in the way shown, in that you do not need to pass any genetic information, just the calculated value of each member of the population. It is also very easy to distribute across a Mosix-type cluster, given one small drawback: the evaluation function needs to be very processor-intensive in order to be distributed by the Mosix kernel. A fork()ed process must exist for about 3-5 seconds to give the kernel time to identify and migrate it. So, for little tiny calculations (like the airline example from my previous post, or TSP if you're more familiar with these types of problems), your processes will not migrate and you're wasting your time.

Implementing this style of parallelization on a Beowulf cluster requires you to specify a node for each member to be evaluated on, send it all the genetic material, and then await a response. This also requires writing a separate worker application for all the other nodes, to accept the incoming genetic material and process it, as was mentioned in the post "Clustering, A Continuation…".

If you're already using a Beowulf cluster, Multiple GA parallelization is the easy way to go. A distribution like LAM/MPI will create identical copies of your GA on each slave node, so you don't need to fork() off any processes. How the slave nodes communicate depends on whether or not you use blocking I/O.

For blocking I/O, you'll need to use your root node as a server that relays material from each slave node to its adjacent nodes, in order to avoid nodes idling while waiting for their neighbors to reach an accept state.

For non-blocking I/O, you can use your root node to set up something called a gene pool: each process intermittently drops a little genetic material into the pool, then checks whether there's any new material to grab. If not, it continues on without waiting. This strategy can also be implemented with blocking I/O, but it's a little more convoluted.

That just about covers the basics for GA parallelism. Next time, I'll post my research from last semester, and then perhaps get into my new clustering scheme. Until next time…
