Since my last post, I’ve moved from the Inland Northwest to beautiful Silicon Valley. As such, the project mentioned in my previous (and long past) post has been inherited by my new employer. It’s right up their alley, and since they’re willing to pay me to work on it, I can’t complain.

Also since my last post, I’ve moved my blog to a new site (which I’m actually hosting myself, so I hope my server doesn’t explode on some magical good day).

The new blog (which has already imported all the posts from this blog) can be found at http://clusterfudge.endoftheinternet.org .

Thanks to those of you who are still subscribed to the RSS feed after 9 months of no posts; I appreciate your laziness.

This will be the last post here.

Now that my spiffy new cluster is finally up and running, I’m taking on a new project. I’m also taking on extra hands. If there’s anyone out there who is good at multithreaded programming, databases, indexing, or just programming in general, I may need your help!

I’m working on a new type of knowledge engine, which I have yet to name, but it has four basic parts. It will have its own data delivery system (which for now is just a URL scraper and wget, but will later use AI techniques to further its intelligence), an indexer, a grouping analysis that runs on the index, and an HTM framework that has definitely yet to be designed. I’m going to start small (indexing a fraction of the web, as I only have about 1TB of storage to work with, though the index itself should be significantly smaller than the data it covers) and scale up from there, if at all possible. There will be statistical analysis, heavy distributed database work, some theoretical AI techniques, and a stupid amount of multi-threading.

Possible applications of this include, but are not limited to:

Search Engines

Speech/image recognition

Predictive Trend Analysis

And many more!

If this project sounds even vaguely interesting to you, leave a post on this board and I will contact you over the weekend. I’ll be setting up a repository on SourceForge soon.

Thanks!

So, apparently there are a lot of really angry people who read Digg on a daily basis, some with justifiably angry comments, and others with not-so-justifiably angry comments. So right now, I’m going to address the most common ones.

A dual-core 2.66GHz machine doesn’t mean you have a 5.32GHz box:

Bravo, you’ve discovered that I was glossing over some deep parallel-processing theory. Two machines or cores will not get through a single process at the combined speed of their processors, since only one processor or core executes that process at a time. However, if you break your task into multiple processes, each process can be given its own processor or core. That is the brilliance behind parallel processing: you are not doing any particular piece of work faster, you are just doing a bunch of it at the same time. With a little simple arithmetic, you can see that this brings your box’s or cluster’s total throughput to roughly the equivalent of the SUM OF ALL CORES.
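To make that concrete, here’s a minimal sketch (entirely my own illustration, nothing from any particular library) of breaking one chunk of work into several processes with fork(); independent processes like these are exactly the kind of workload openMosix can migrate across nodes:

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const int kWorkers = 4;              // one process per core (or node)
    for (int w = 0; w < kWorkers; ++w) {
        pid_t pid = fork();
        if (pid == 0) {                  // child: grind through its own slice of the range
            unsigned long sum = 0;
            for (unsigned long i = w; i < 400000000UL; i += kWorkers)
                sum += i % 7;
            std::printf("worker %d done (%lu)\n", w, sum);
            _exit(0);
        }
    }
    for (int w = 0; w < kWorkers; ++w)
        wait(NULL);                      // parent waits for every worker to finish
    std::printf("all workers finished\n");
    return 0;
}

Compile it with g++ and time it with one worker versus four: no single worker gets any faster, but the wall-clock time for the whole range drops roughly in proportion to the number of free cores.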

Moving on…

T5212 machines are bad machines:

Now, I cannot personally say anything negative about these machines, though it is entirely possible that they are a large steaming pile of crap and I’ve just gotten very lucky with mine (I have 5). It isn’t important that you use these exact machines in your cluster; it is far more important that you use compatible processors and hardware. But let me speak to the primary issue: Intel Core and Core 2 processors do not work with openMosix 2.4. That’s all there is to it. You’re also going to run into freaky problems with 64-bit AMD systems, so I’d recommend you stay away from those as well. The basic gist is that this software will run on any system with a Pentium D class processor or lower, and X Windows (if you want it) requires about 64MB of RAM to run properly. My suggestion of the T5212 came out of good experiences with it.

Now diggers, please stop being so angry!

NOTE: If you find yourself angry after reading this post, read this first.

This seems difficult, at first glance, but really, it’s not.

At all.

From the time you get all your hardware plugged in to the time you’re doing some massive parallel processing can be anywhere from 10 minutes to 2 hours, depending on your needs. And this simple guide will help you get there.

Get the Hardware

Now, mind you, I’m not trying to do this as cheaply as possible, but I am trying to do this with as much bang for your buck as possible. These are the things you need to get.

PCs: Duh, kinda the bare-bones necessity in a cluster, and I have a recommendation: the eMachines T5212. It’s got a Pentium D 805 dual-core processor with each core running at 2.66GHz, for a total of 5.32GHz per machine, and 2x1MB of L2 cache, which, while not stellar, is pretty respectable. It’s also got 1GB of RAM and a 200GB hard drive, so storage problems go away pretty quickly too. There’s a lower model with half the RAM and a smaller hard drive, the T5216, but I need the RAM, so I go with the T5212. At Best Buy and other stores, these run about $534.99 for just the tower. Mind you, I have chosen this box for the hardware’s compatibility with the software we’ll be using in a later step.

Network Cables: You’re going to need at least one for each PC, and probably a couple more if you have an external device or PC acting as your DHCP server and/or gateway.

Network Hardware: You’re going to need a switch big enough for all your PCs to connect to (or a series of small ones that you can daisy-chain together). Life will also be a lot easier if you have ONE DHCP server for all of your machines. All the machines need to be on the same IP subnet, but they don’t need to be on the same network switch or in the same geographic area.

Setting Up your Hardware:

In my personal configuration, I have a small network appliance that acts as a DHCP server, router, and print server, so I use that as the base of my networking needs. I then have a series of smaller switches which have 1 (count them, 1) link total back to the DHCP server. This is important for me, so that network traffic on the cluster doesn’t bog down the rest of my home network. How else could I play Halo while factoring 100-digit non-prime numbers? This will also help your cluster have fewer hops between nodes.

Your Software:

I strongly recommend the use of ClusterKnoppix. It’s a great tool, and it’s very stable. It uses a Debian-based 2.4 kernel and has openMosix installed and configured for auto-detection (which means nodes are essentially plug-and-play, though not really, and I’ll discuss why later). You’ll need one copy for each box, unless you choose to commit the Knoppix image to the hard drive of each machine. That’s not necessary, but it may be easier if you don’t have a stack of CD-Rs at your disposal.

Booting up the Cluster:

This is probably the easiest part of the process. Place a ClusterKnoppix CD in each box and boot it up. I can vouch that this hardware is compatible and you won’t have any issues loading, so now you’re ready to work! NOTE: If you do this with other hardware, I can’t guarantee things are going to work so swimmingly, and I am nowhere near qualified to help you troubleshoot your hardware. If you have a DHCP server and DNS somewhere on your network, your cluster should be live to the internet, so you can pick up your code from other boxes on the network or from a CVS server somewhere out there in the intarwebs. GCC and G++ are included, though I can’t remember the version numbers off the top of my head (feel free to check the link on the side of this blog, I’m sure it’s there somewhere).

Making your Mosix Cluster a Beowulf

There are two methods, but they essentially do the same thing. The first is to commit a Knoppix image to the hard drive of one of your boxes. There are tools for formatting and repartitioning the hard drive in the utilities menu in KDE (I think it’s QTParted that’s installed, along with a few others). Follow the instructions from http://www.knoppix.org or from any other live-distribution site (they’re a little extensive, or else I would include them here). The second is to commit an alteration image to your hard disk, and then boot Knoppix from this alteration image (a newer Knoppix feature that I’ve never used, so once again, check the intarwebs).

Whichever method you choose, I would strongly recommend you use LAM/MPI (or Open MPI, if you so strongly desire the biggest and baddest). The nice thing about this setup is that you do not need to configure each machine with LAM, or configure your root node with machine lists of all the other nodes in the network. All you have to do is create multiple processes on the node that has MPI installed, and openMosix will balance the cluster. It’s truly beautiful. To run a process in MPI, follow these simple instructions (for LAM):

bash$: lamboot

<some output here>

bash$: mpirun -np (some number of processes) <your executable name> <your arguments>

That’s it. You don’t even need to compile your binaries using the MPI compilers, assuming they don’t use the MPI libraries. If they do, use mpic++ or mpicc as you would g++ or gcc, respectively.
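If you’ve never touched MPI before, here’s about the smallest program that does use the MPI libraries; it’s a generic sketch of my own, nothing LAM-specific:

#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // which process am I?
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // how many processes total?
    std::printf("hello from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

Compile it with mpic++ hello.cpp -o hello (the filename is just an example), run lamboot and mpirun -np 8 ./hello exactly as above, and each of the eight processes prints its own rank while openMosix spreads them around the cluster.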

I’d love to hear success stories, so please, leave comments!

For my clustering solutions, I’ve chosen live CDs (ClusterKnoppix, to be specific). I purchased about $2500 in old hardware (Pentium D 805 dual cores with 1GB of RAM, running at 2.66GHz per core), and combining that with my existing hardware, I end up with a cluster of 15 cores running at a combined total of a little over 40GHz, with 6GB of RAM. It’s not too shabby, if I do say so myself. It is an openMosix-based cluster, so the load balancing is awesome. I’ve been using it thus far to do some prime-number factoring.

One of the nice things about Mosix clustering is its ease of use, while one of the nice things about Beowulf clustering is the ease of data sharing and segregation. I’ve discussed previously, in “GA In Parallel, some more interesting thoughts…”, that load balancing is a problem with Beowulf clusters, even with my crazy scheme of Multi-Level Hybrid Clusters (MLHC). The basic idea behind MLHC is to create a Beowulf cluster whose slave nodes are actually Mosix clusters. It was an invention more of necessity than anything else, but it solved my problem nonetheless.

My plan now is to install LAM/MPI (Beowulf software, essentially) onto one of my Mosix nodes, create all of the MPI processes on that single node, and allow the Mosix kernel to load-balance the cluster. At that point, I’ll have a load-balanced Beowulf cluster. My main issue until now has been creating multiple instances of complete populations of my GANet software, because a single member of a population can be up to 100MB. Since some of my nodes have only 256MB of RAM, they can’t even run one instance of GANet with a decent-sized population. The solution, then, is to create multiple small populations that share information, with each population being small enough to run on a single node. Also, I’ve learned that my population members don’t need to be quite as large as I’ve been making them to start with, especially considering that I allow them to grow dynamically.
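Just to make the “multiple small populations that share information” idea concrete, here’s a rough MPI sketch of how I picture it. Everything in it (the toy genome, the fitness function, the constants) is a placeholder of my own and bears no resemblance to the real GANet code:

#include <mpi.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Toy island model: each MPI process evolves its own small population and
// occasionally swaps its best member with the other islands.
static const int kGenomeLen = 8;
static const int kPopSize   = 16;    // small enough to fit on a 256MB node
static const int kGens      = 200;
static const int kExchange  = 20;    // share the best member every 20 generations

static double fitness(const std::vector<double>& g) {
    double sum = 0.0;
    for (int i = 0; i < kGenomeLen; ++i) sum += g[i] * g[i];
    return sum;                       // lower is better
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    std::srand(1234 + rank);

    // Random starting population for this island.
    std::vector< std::vector<double> > pop(kPopSize, std::vector<double>(kGenomeLen));
    for (int i = 0; i < kPopSize; ++i)
        for (int j = 0; j < kGenomeLen; ++j)
            pop[i][j] = 10.0 * std::rand() / RAND_MAX - 5.0;

    for (int gen = 1; gen <= kGens; ++gen) {
        // Crude GA step: keep the best member, refill the rest with mutated copies of it.
        int best = 0;
        for (int i = 1; i < kPopSize; ++i)
            if (fitness(pop[i]) < fitness(pop[best])) best = i;
        for (int i = 0; i < kPopSize; ++i) {
            if (i == best) continue;
            pop[i] = pop[best];
            pop[i][std::rand() % kGenomeLen] += 1.0 * std::rand() / RAND_MAX - 0.5;
        }

        // Migration: gather every island's best member and adopt the global winner.
        if (gen % kExchange == 0) {
            std::vector<double> all(size * kGenomeLen);
            MPI_Allgather(&pop[best][0], kGenomeLen, MPI_DOUBLE,
                          &all[0], kGenomeLen, MPI_DOUBLE, MPI_COMM_WORLD);
            for (int r = 0; r < size; ++r) {
                std::vector<double> other(all.begin() + r * kGenomeLen,
                                          all.begin() + (r + 1) * kGenomeLen);
                if (fitness(other) < fitness(pop[best])) pop[best] = other;
            }
        }
    }

    int best = 0;
    for (int i = 1; i < kPopSize; ++i)
        if (fitness(pop[i]) < fitness(pop[best])) best = i;
    std::printf("island %d best fitness: %f\n", rank, fitness(pop[best]));
    MPI_Finalize();
    return 0;
}

Launch every process on the same node with mpirun -np and let the Mosix kernel migrate them; no island ever holds more than kPopSize members, so memory stays within what the small nodes can handle.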

Today’s lesson: Beowulf is nice, but Mosix load-balances. So take both.

Oooooooooh, PRETTY!!!!

So for the longest time, I’ve been wanting a Beowulf architecture, but setting up the cluster has always been an issue. The bigger issue, however, is thread synchronization. The nice thing about clusters is that you can put them together with whatever crappy machines you have lying around. The not-as-nice thing about clusters is that you tend not to have 20 identical machines lying around.

With the standard MPI solutions, a process is created permanently on a node, typically with a set amount of work to do before exiting or requesting more work. There are some simple techniques you can use to try to sync everything up, and by simple, I mean archaic. You could (though I truly would never recommend this) balance the workload by hand (using arithmetic, God forbid) while developing, timing each machine and so on. There are more sophisticated methods, which I personally would recommend.

Thread Pooling

The basic idea behind this is to create a simple server/client architecture, where the server hosts a data set and the clients request an element from the set, process it, and return the result. Clients may end up waiting whenever the server has no data left to be processed.
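Here’s a bare-bones, single-box sketch of the idea (my own, with a trivial stand-in for the real data set); on an actual cluster the “server” side would sit behind a socket, but the shape is the same:

#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

// The shared data set the "server" hosts, and a lock protecting it.
static std::vector<int> workItems;
static std::mutex poolLock;

// A client asks for one element; false means the pool is empty and the client waits or quits.
static bool takeItem(int* item) {
    std::lock_guard<std::mutex> guard(poolLock);
    if (workItems.empty()) return false;
    *item = workItems.back();
    workItems.pop_back();
    return true;
}

static void client(int id) {
    int item;
    long processed = 0;
    while (takeItem(&item)) {
        processed += item % 3;           // the real per-element work would go here
    }
    std::printf("client %d finished (%ld)\n", id, processed);
}

int main() {
    for (int i = 0; i < 10000; ++i) workItems.push_back(i);   // fill the pool
    std::vector<std::thread> clients;
    for (int i = 0; i < 4; ++i) clients.push_back(std::thread(client, i));
    for (size_t i = 0; i < clients.size(); ++i) clients[i].join();
    return 0;
}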

Data Pooling

This is very similar to thread pooling, but applied to the genetic algorithm. Imagine your server as holding a population of whatever you’re working on; the clients occasionally pull the top few members from the population, work with them, then submit results back to the population. Since there is always a full population on the server, clients will not hang around waiting for the server to have work for them to do. This offers completely asynchronous operation without delays (except perhaps under server overload).
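A similarly stripped-down sketch of the data pool, again my own invention with bare fitness values standing in for real population members; the key difference from the thread pool is that takeBest() copies instead of removes, so the pool never runs dry and clients never block waiting for work:

#include <algorithm>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

// The server's population never shrinks; lower fitness is better here.
static std::vector<double> population(100, 1000.0);
static std::mutex popLock;

static double takeBest() {                    // copy the current best member, don't remove it
    std::lock_guard<std::mutex> guard(popLock);
    return *std::min_element(population.begin(), population.end());
}

static void submit(double child) {            // a result replaces the worst member if it improves on it
    std::lock_guard<std::mutex> guard(popLock);
    std::vector<double>::iterator worst = std::max_element(population.begin(), population.end());
    if (child < *worst) *worst = child;
}

static void client(int id) {
    for (int i = 0; i < 1000; ++i) {
        double parent = takeBest();
        double child = parent * 0.999 + (i % 100) * 0.001;   // stand-in for the real GA work
        submit(child);
    }
    std::printf("client %d done\n", id);
}

int main() {
    std::vector<std::thread> clients;
    for (int i = 0; i < 4; ++i) clients.push_back(std::thread(client, i));
    for (size_t i = 0; i < clients.size(); ++i) clients[i].join();
    std::printf("best fitness now: %f\n", *std::min_element(population.begin(), population.end()));
    return 0;
}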

Each of these methods is heavy on the implementation side, so I’d like to try a cluster-based solution. Tune in next time!

So, these past couple of years, there have been a few big courses I took to help me acquire the knowledge necessary for doing any kind of significant computer science research, and I can only recommend that all CS students take these:

1) Operating Systems

If you’re going to do any kind of research, chances are your software is going to run for a long time, and is going to be a series of complicated processes, as opposed to your standard “Hello, world!” program.

Things I got:

Parallel processing, inter-process communication, scheduling, file systems.

2) Data Communications

Again, like OS, if you’re going to do research, chances are you’re going to need more than one machine, so it helps to know how to do networking. This was the class that gave me my basic foundation of knowledge to build my cluster.

Things I got:

Network structure, network administration, basic sockets programming, client-server architecture, multi-threaded server design.

3) Artificial Intelligence

There were two courses in our AI program at GU, and I feel like they didn’t hold the same weight. The first studied classical AI, which can be summed up as:

If A, then B. A. Thus, B.

Not particularly stimulating, am I right? There was a bit of game theory, and some state-space traversal, but nothing too horribly complicated. And for some reason, none of the state-space stuff we generated worked all that well anyway…

Things I got:

Overview of Genetic Algorithms, introduction to neural networks. Overview of past failures of AI.

Now I suppose AI isn’t a course that you really need to be a well-rounded CS student, but I enjoyed it.

What I was supposed to be talking about…

My research: it’s complicated, kinda convoluted, and totally time-consuming. Good thing I don’t have a life. As I’ve discussed before, GA is a great tool for optimization. As I haven’t discussed before, neural networks are a great tool for recognizing patterns. Neural networks can come in many different structures, and the plan for my research is to use GA to “evolve” the structure of a neural network based on how well it learns a given training set. I’ve yet to decide what kind of training set I will use, but I’m leaning towards natural language processing.

A neural network can have several layers, and I’ve chosen to represent the links between each pair of layers as a two-dimensional array of booleans (true signifying that a link exists, false that one does not). Since there will be multiple layers, there will also be more than one of these two-dimensional arrays, thus giving birth to the three-dimensional boolean array that is the bulk of my genome (bool *** adjacencyMatrices).

I would love to use the standard templated 3DArray_Genome from GALib, but alas, I wanted more scalability. The adjacencyMatrices have the ability to “grow” in the number of layers (height) and in the number of nodes in any individual layer (width). In 3DArray_Genome, genomes are of a fixed size from GA initialization onward.
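Just to pin down what that genome looks like in memory, here’s a small allocation sketch; the dimensions are made up for illustration, and only the adjacencyMatrices name comes from my actual code:

#include <cstdio>

int main() {
    const int layers = 4;        // layers in the network (so layers - 1 adjacency matrices)
    const int width  = 10;       // nodes per layer (uniform here just for brevity)

    // adjacencyMatrices[l][i][j] == true means node i in layer l links to node j in layer l+1.
    bool*** adjacencyMatrices = new bool**[layers - 1];
    for (int l = 0; l < layers - 1; ++l) {
        adjacencyMatrices[l] = new bool*[width];
        for (int i = 0; i < width; ++i) {
            adjacencyMatrices[l][i] = new bool[width];
            for (int j = 0; j < width; ++j)
                adjacencyMatrices[l][i][j] = false;   // start with no links
        }
    }

    adjacencyMatrices[0][2][7] = true;   // e.g. layer 0, node 2 links to layer 1, node 7

    // "Growing" a layer (or adding one) means allocating a bigger array and copying
    // the old links across, which is the flexibility a fixed-size genome doesn't give me.

    for (int l = 0; l < layers - 1; ++l) {
        for (int i = 0; i < width; ++i) delete[] adjacencyMatrices[l][i];
        delete[] adjacencyMatrices[l];
    }
    delete[] adjacencyMatrices;
    std::printf("allocated and freed %d adjacency matrices\n", layers - 1);
    return 0;
}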

I suppose that’s enough of a start for now, so until next time…
