So, over the past couple of years, there have been a few big courses that gave me the knowledge necessary for doing any kind of significant computer science research, and I can only recommend that all CS students take them:

1) Operating Systems

If you’re going to do any kind of research, chances are your software is going to run for a long time and consist of a series of complicated processes, as opposed to your standard “Hello, world!” program.

Things I got:

Parallel processing, inter-process communication, scheduling, file systems.
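Just to give a flavor of the inter-process communication part: the textbook-minimal example is a parent and child process talking over a pipe. This is my own illustrative sketch, not anything lifted from the coursework:

```cpp
#include <cstdio>
#include <cstring>
#include <unistd.h>
#include <sys/wait.h>

// Parent writes a message into a pipe; the forked child reads it out.
int main() {
    int fd[2];
    pipe(fd);                          // fd[0] = read end, fd[1] = write end
    if (fork() == 0) {                 // child process
        close(fd[1]);
        char buf[32];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child got: %s\n", buf);
        return 0;
    }
    close(fd[0]);                      // parent process
    const char* msg = "hello";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(nullptr);                     // reap the child
    return 0;
}
```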

2) Data Communications

Again, like OS, if you’re going to do research, chances are you’re going to need more than one machine, so it helps to know how to do networking. This was the class that gave me the foundation I needed to build my cluster.

Things I got:

Network structure, network administration, basic sockets programming, client-server architecture, multi-threaded server design.
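To give a flavor of what “basic sockets programming” plus “multi-threaded server design” boils down to, here’s a minimal sketch of a threaded TCP echo server. This is my own illustration (the port number is an arbitrary pick), not code from the class:

```cpp
#include <thread>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

// Minimal multi-threaded TCP echo server: one thread per client.
int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5000);       // arbitrary example port
    bind(listener, (sockaddr*)&addr, sizeof(addr));
    listen(listener, 8);               // allow up to 8 pending connections
    while (true) {
        int client = accept(listener, nullptr, nullptr);
        std::thread([client] {         // serve each client in its own thread
            char buf[256];
            ssize_t n = read(client, buf, sizeof(buf));
            if (n > 0) write(client, buf, n);  // echo the bytes back
            close(client);
        }).detach();
    }
}
```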

3) Artificial Intelligence

Our AI program at GU consisted of two courses, and I feel like they didn’t carry the same weight. The first covered classical AI, which can be summed up as:

If A, then B. A. Thus, B.

Not particularly stimulating, am I right? (That’s modus ponens, for the record, and rule-based classical AI is largely an exercise in chaining it.) There was a bit of game theory and some state-space traversal, but nothing too horribly complicated. And for some reason, none of the state-space stuff we generated worked really well anyway…

Things I got:

Overview of genetic algorithms, introduction to neural networks, overview of the past failures of AI.

Now I suppose AI isn’t a course that you really need to be a well-rounded CS student, but I enjoyed it.

What I was supposed to be talking about…

My research: it’s complicated, kinda convoluted, and totally time-consuming. Good thing I don’t have a life. As I’ve discussed before, a GA is a great tool for optimization. As I haven’t discussed before, neural networks are a great tool for recognizing patterns. Neural networks can come in many different structures, and the plan of my research is to use a GA to “evolve” the structure of a neural network based on how well it learns a given training set. I’ve yet to decide what kind of training set I will use, but I’m leaning towards natural language processing.
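To make the plan a little more concrete, here’s roughly what the outer loop looks like in GALib terms. Two caveats: I’m using a flat binary-string genome just to keep the sketch short (my real genome is the 3D structure described below), and the objective function body is a stand-in; the real one would decode the genome into a network, train it, and score it against the training set:

```cpp
#include <ga/ga.h>   // GALib

// Stand-in fitness function. The real version would decode the genome
// into a neural network, train it, and return something like
// 1/(1 + training error). This one just rewards having more links.
float Objective(GAGenome& g) {
    GA1DBinaryStringGenome& genome = (GA1DBinaryStringGenome&)g;
    float links = 0;
    for (int i = 0; i < genome.length(); i++)
        links += genome.gene(i);       // each bit = one potential link
    return links;
}

int main() {
    GA1DBinaryStringGenome genome(64, Objective);  // 64 link bits (example)
    GASimpleGA ga(genome);
    ga.populationSize(30);
    ga.nGenerations(100);
    ga.pMutation(0.01);
    ga.pCrossover(0.9);
    ga.evolve();
    return 0;
}
```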

A neural network can have several layers, and I’ve chosen to represent the links between each pair of adjacent layers as a two-dimensional array of booleans (true signifying that a link exists, false that one does not). Since there will be multiple layers, there will also be more than one of these two-dimensional arrays, thus giving birth to the three-dimensional boolean array that is the bulk of my genome (bool*** adjacencyMatrices).
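In code, setting up that structure looks something like this. The function names and the random link initialization are just my illustrative choices, and for simplicity this sketch gives every matrix the same dimensions:

```cpp
#include <cstdlib>

// Allocate layerCount adjacency matrices, each rows x cols of booleans.
// true = a link exists between the two nodes, false = no link.
bool*** makeAdjacencyMatrices(int layerCount, int rows, int cols) {
    bool*** m = new bool**[layerCount];
    for (int l = 0; l < layerCount; l++) {
        m[l] = new bool*[rows];
        for (int r = 0; r < rows; r++) {
            m[l][r] = new bool[cols];
            for (int c = 0; c < cols; c++)
                m[l][r][c] = (rand() % 2 == 0);  // random initial links
        }
    }
    return m;
}

void freeAdjacencyMatrices(bool*** m, int layerCount, int rows) {
    for (int l = 0; l < layerCount; l++) {
        for (int r = 0; r < rows; r++)
            delete[] m[l][r];
        delete[] m[l];
    }
    delete[] m;
}
```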

I would love to use the standard templated GA3DArrayGenome from GALib, but alas, I wanted more scalability. My adjacencyMatrices have the ability to “grow” in the number of layers (height) and in the number of nodes in any individual layer (width), whereas in GA3DArrayGenome, the genome’s size is fixed once the GA is initialized.
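Widening a layer, for instance, means reallocating one of its matrices with an extra column. A sketch (the name is mine, and I start new links as false; the matrix on the far side of that layer would need an extra row as well):

```cpp
// Grow a rows x oldCols matrix by one column, i.e. add one node to the
// layer on the receiving side of these links.
bool** widenMatrix(bool** old, int rows, int oldCols) {
    bool** grown = new bool*[rows];
    for (int r = 0; r < rows; r++) {
        grown[r] = new bool[oldCols + 1];
        for (int c = 0; c < oldCols; c++)
            grown[r][c] = old[r][c];   // keep the existing links
        grown[r][oldCols] = false;     // the new node starts unlinked
        delete[] old[r];
    }
    delete[] old;
    return grown;
}
```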

I suppose that’s enough of a start for now, so until next time…