Aspen Systems is a world-renowned provider of Beowulf clusters. When it comes to engineering, designing, and building the most sophisticated, high-performance clusters in the world, scientists, chemists, biologists, lab technicians, and mathematicians want the experience and technical expertise of a company with over three decades in the HPC industry. Aspen Systems has designed and built high-speed Beowulf clusters for organizations such as NASA, NOAA, and the Air Force, among many other prestigious names. From the simple to the very complex, Aspen Systems has been providing Beowulf scientific cluster computing systems to some of the most prominent research labs, government agencies, and universities in the nation. To fully appreciate the experience and resources behind Aspen Systems' hundreds of successful HPC cluster implementations, it helps to understand the definition and history behind the name Beowulf.
A Beowulf cluster can be defined as a local network of multiple computers on which specific applications, programs, and libraries are installed to make the machines work together as a single high-performance parallel computing system, or virtual supercomputer. Any multi-system cluster architecture used for parallel computing can be considered a Beowulf cluster. While there is no single recipe for putting together a Beowulf computer, such systems share many of the same characteristics and attributes. The individual components, or nodes, within a Beowulf system can range from mid-grade personal computers to the latest state-of-the-art high-performance workstations. A computer within a Beowulf cluster can have one or multiple processors and may include other computational devices, such as one or more GPUs. Perhaps the key to defining a Beowulf system is the way it behaves and how it is used; there are countless ways to network a cluster of computers together.
What differentiates a Beowulf system is that it performs as a single system rather than as a network of many nodes. The client nodes of a Beowulf are true 'cluster' nodes in that they do not provide access to user devices such as keyboards, mice, or displays. Unlike typical networked computers, these cluster nodes can be thought of in the same way as the individual components, memory, CPUs, and so on, within a single computer system. Cluster nodes can be added or removed as the computational or performance needs of the overall cluster require. Of course, there must be a 'server' or 'master' node that controls the entire cluster and serves files to each of the cluster nodes. There can be more than one server node accomplishing different tasks within the cluster: one node might serve as the cluster's gateway or console to the outside world, while another might serve as one of many user stations for monitoring the entire cluster. A Beowulf system can be extremely large and complex, or it can be very small and simple. Even a cluster as small as two networked computers sharing the same file system and running remote shells in parallel can technically be called a Beowulf computer. Beowulf systems are a familiar fixture of research labs and universities worldwide, where cost, performance, size, and scale are limited only by budget. Beowulf clusters are scientific computing tools.
While there are no operating system limitations when it comes to the Beowulf, most clusters are run in a Unix-like environment. Familiar operating systems on clusters include Linux, Solaris, BSD, and other free, open-source software. A couple of other important terms you'll hear mentioned with Beowulf have to do with the libraries that are used: MPI (Message Passing Interface) and PVM (Parallel Virtual Machine) are two of the most common. Popular implementations of MPI include MPICH and Open MPI, but there are many, many more terms that cluster users and programmers must know.
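The message-passing model behind these libraries boils down to one idea: split a job into pieces, let each node compute its piece, then combine the partial results. As a rough sketch of that scatter-compute-gather pattern, the following uses only Python's standard library on a single machine; it is an analogy, not MPI itself, and a real Beowulf job would use MPI calls over the cluster network instead.

```python
# Sketch of the scatter-compute-gather pattern that MPI programs follow.
# multiprocessing workers stand in for compute nodes; on a real cluster,
# calls like MPI_Scatter and MPI_Reduce would move data between machines.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each "node" computes the sum of squares over its slice of the data.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # "Scatter": deal the data out into one chunk per worker.
    chunks = [data[i::workers] for i in range(workers)]
    # Compute the chunks in parallel, then "reduce" the partial results.
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(1000))
    print(parallel_sum_of_squares(data))  # matches the serial sum of squares
```

The appeal of the model is that the same pattern scales from the four local workers shown here to hundreds of cluster nodes: only the transport underneath changes, not the structure of the program.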
If you’re wondering where the unusual name Beowulf came from, there is an interesting story behind it. The name has not been around very long: in 1994, two NASA scientists, Donald Becker and Thomas Sterling, built a scientific cluster-type computer, and Sterling borrowed the name Beowulf from the Old English poem to describe it. In the poem, the hero Beowulf is said to hold the strength of thirty men in the grip of his hand. When you imagine the power, speed, and performance of thirty computers being used as one cluster, you have a reasonable understanding of what Beowulf means: the grip of many in one.
With a full appreciation of how a Beowulf cluster can harness the combined computing power of dozens of client nodes, its computational possibilities are limited only by the imagination of what’s to come. The performance and capabilities of the components within a cluster continue to improve at a remarkable pace. Intel and AMD continue to release new processors with faster clock speeds and greater numbers of cores. Memory (RAM) capacities continue to grow while prices decline. NVIDIA continues to increase rendering and computation speed with each new release of its graphics cards, including its latest Kepler-based Tesla cards. Bigger, faster storage drives are making it possible to crunch larger volumes of data, allowing research and science labs to do more within the same footprint. You might think this means you can do more with less, yet the real scientists are saying, “Now we can do more with more.” The components of a Beowulf system today look completely different than a cluster might have looked yesterday, and they will be just as unfamiliar to the scientists of tomorrow. While a Beowulf system doesn’t have to be built from the latest, leading-edge components, the rate at which cluster component technology advances will continue to stretch the budgets and dreams of our nation’s scientists.
The history behind the Beowulf HPC cluster and the naming of the term is a fascinating topic. Aspen Systems is proud to claim that we’ve been designing, engineering, and building Beowulf clusters since before the name was coined in 1994. In today’s HPC world, the Beowulf name has become slightly diluted as other types of cluster technology have become more prominent. Even as visualization clusters, GPU clusters, database clusters, and grid computing gain ground, the meaning of the Beowulf name remains the same to Aspen Systems. It is our lifeblood and mission to put more power into the grip of a single hand.