High Performance Computing

Written by Stephanie Bilberry

High performance computing is often abbreviated HPC. HPC means using clustered servers that work together toward a common goal or purpose. Linked by a high-speed interconnect, these servers pass information back and forth to crunch numbers at very high speed and with a high degree of efficiency.

A Question of Scalability

Simply buying a larger server will not give you HPC the way a cluster can. While it is true that you will have more capacity, you will not have the combined computing power that multiple servers can give you. Splitting a task across several servers, each handling a part of it, is called parallelization.

A cooking analogy illustrates how this works. With a single cook, the meal takes a long time to prepare. With four or five cooks, where two prepare the main course while one does the soup, one the salad, and another the dessert, you'll have a meal in a fraction of the time one cook would have taken. There is a limit to parallelization, though; too many cooks can spoil the broth.
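The cooking analogy can be sketched in code. This is a minimal illustration, not a real HPC workload: the course names and the `prepare` function are invented stand-ins, and a thread pool stands in for a cluster of servers each taking a share of the work.

```python
from concurrent.futures import ThreadPoolExecutor

def prepare(course):
    # Stand-in for real work: each "cook" handles one course.
    return f"{course} ready"

courses = ["main course", "soup", "salad", "dessert"]

# Four workers, one per course: the parallelizable part of the meal.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(prepare, courses))

print(results)
# → ['main course ready', 'soup ready', 'salad ready', 'dessert ready']
```

On a real cluster, the workers would be separate servers communicating over the interconnect rather than threads in one process, but the idea of dividing one job into independent parts is the same.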

For many of today's computing tasks, clustered servers are the best and fastest approach. Amdahl's law helps you decide how many servers you can use effectively. However, if your task cannot be broken down into independent units, it is referred to as a serial process, and installing a cluster may just be a waste of your resources. In that case, a larger single server is probably your best choice.
