
Supercomputer

"A supercomputer is a device for turning compute-bound problems into I/O-bound problems."
--Seymour Cray (widely attributed; possibly apocryphal)

A supercomputer is a computer that leads the world in processing capacity, particularly speed of calculation, at the time of its introduction. The first supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s, when Cray left to form his own company, Cray Research, which in turn took over the market. In the 1980s a large number of smaller competitors entered the field, in a parallel to the creation of the minicomputer market a decade earlier, but many of them disappeared in the mid-1990s "supercomputer market crash". Today supercomputers are typically one-off custom designs produced by "traditional" companies such as IBM and HP, which had purchased many of the 1980s companies to gain their experience.

The term "supercomputer" itself is rather fluid, and today's supercomputer tends to become tomorrow's also-ran, as can be seen from Colossus, the world's first programmable electronic computer (built from vacuum tubes rather than solid-state components), used to break German ciphers in World War II. CDC's early machines were simply very fast single processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were built around a vector processor, and many of the newer players developed their own vector processors at lower price points to enter the market. In the late 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of simple custom CPUs. Today the parallel design dominates, but it is now based on "off the shelf" RISC microprocessors such as the PowerPC or PA-RISC.

Table of contents
1 Software Tools
2 Uses
3 Design
4 Types of general-purpose supercomputers
5 Special-purpose supercomputers
6 The fastest supercomputers today
7 Timeline of supercomputers
8 See also

Software Tools

Software tools for distributed processing include standard APIs such as MPI and PVM, and open-source software solutions such as Beowulf and openMosix, which facilitate the creation of a sort of "virtual supercomputer" from a collection of ordinary workstations or servers. Technologies like Rendezvous pave the way for the creation of ad hoc computer clusters. An example of this is the distributed rendering function in Apple's Shake compositing application: computers running the Shake software merely need to be in proximity to each other, in networking terms, to automatically discover and use each other's resources. While no one has yet built an ad hoc computer cluster that rivals even yesteryear's supercomputers, the line between desktop, or even laptop, and supercomputer is beginning to blur.
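
For illustration, a minimal MPI program in C (assuming an installed MPI implementation such as MPICH or Open MPI, compiled with mpicc) shows the basic shape of such a "virtual supercomputer": the same program runs on every node, and each copy learns its own place in the whole.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* join the virtual machine       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's own identifier  */

        printf("Process %d of %d reporting\n", rank, size);

        MPI_Finalize();                        /* leave the virtual machine      */
        return 0;
    }

Launched with, for example, "mpirun -np 16 ./a.out", sixteen copies of the program run across the cluster, and any further cooperation between them happens through explicit messages.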

Uses

Supercomputers are used for highly calculation-intensive tasks such as weather forecasting, climate research (including research into global warming), molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and the like. Military and scientific agencies are heavy users.

Design

Supercomputers traditionally gained their speed over conventional computers through innovative designs that allow them to perform many tasks in parallel, as well as through complex detail engineering. They tend to be specialised for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times; in fact, much of the performance difference between slower computers and supercomputers is due to the design and componentry of the memory hierarchy. Their I/O systems tend to be designed for high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
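
A small illustrative C loop, in the style of the STREAM bandwidth benchmark, shows why: it performs only two arithmetic operations per element but must move three array elements through the memory hierarchy for each of them, so on arrays too large for the caches its speed is set by memory bandwidth rather than by the processor.

    #include <stddef.h>

    /* STREAM-style "triad": a[i] = b[i] + scalar * c[i].
       Two floating-point operations per element, but two loads and one
       store; with n large enough to overflow the caches, the loop runs
       only as fast as main memory can stream the three arrays. */
    void triad(double *a, const double *b, const double *c,
               double scalar, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            a[i] = b[i] + scalar * c[i];
    }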

As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization and to using hardware to accelerate the remaining bottlenecks.
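
Amdahl's law itself is simple arithmetic; the illustrative C sketch below computes the bound it places on speedup when a fraction s of a program must run serially, no matter how many processors p are available.

    #include <stdio.h>

    /* Amdahl's law: with serial fraction s and p processors,
       speedup <= 1 / (s + (1 - s) / p). */
    static double amdahl_speedup(double s, double p)
    {
        return 1.0 / (s + (1.0 - s) / p);
    }

    int main(void)
    {
        /* Even with 1024 processors, a 5% serial fraction caps the
           speedup below 20x, which is why supercomputer designs work
           so hard to remove every last bit of serialization. */
        printf("%.1f\n", amdahl_speedup(0.05, 1024.0));   /* prints 19.6 */
        return 0;
    }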

Supercomputer challenges and technologies

Technologies developed for supercomputers include:

Processing Techniques

Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD processing instructions for general-purpose computers.
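
As a generic illustration (not code for any particular machine), the classic SAXPY loop below is the kind of computation a vector processor executes as a few vector instructions, and that SIMD instruction sets handle in smaller fixed-size chunks.

    /* SAXPY: y = a*x + y over whole arrays.  A vector machine issues
       this as vector load / multiply-add / vector store instructions
       covering many elements at once; a SIMD-capable compiler does the
       same a few elements (e.g. four floats) per register. */
    void saxpy(float a, const float *x, float *y, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }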

Operating Systems

Their operating systems, often variants of UNIX, tend not to be as sophisticated as those for smaller machines, since supercomputers are typically dedicated to one task at a time rather than the multitude of simultaneous jobs that makes up the workload of smaller devices.

Programming

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Special-purpose FORTRAN compilers can often generate faster code than the C or C++ compilers, so FORTRAN remains the language of choice for scientific programming, and hence for most programs run on supercomputers. To exploit the parallelism of supercomputers, programming environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are being used.
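
As a small example of the shared-memory style, the following C function (assuming a compiler with OpenMP support, e.g. built with -fopenmp) needs only a single directive to spread a loop over all the processors of one machine.

    #include <omp.h>

    /* Sum a large array using every processor of a shared-memory
       machine: OpenMP divides the iterations among threads and then
       combines the per-thread partial sums. */
    double parallel_sum(const double *x, long n)
    {
        double total = 0.0;

        #pragma omp parallel for reduction(+:total)
        for (long i = 0; i < n; i++)
            total += x[i];

        return total;
    }

On a loosely connected cluster, the same result would instead be obtained by exchanging per-node partial sums as MPI messages.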

Types of general-purpose supercomputers

There are three main classes of general-purpose supercomputers: vector processing machines, tightly connected parallel machines built around specially developed high-speed interconnects, and commodity clusters made of large numbers of standard PCs or workstations joined by an ordinary high-bandwidth network.

As of 2002, Moore's Law and economies of scale are the dominant factors in supercomputer design: a single modern desktop PC is now more powerful than a 15-year-old supercomputer, and at least some of the design tricks that allowed past supercomputers to out-perform contemporary desktop machines have now been incorporated into commodity PCs. Furthermore, the cost of chip development and production makes it uneconomical to design custom chips for a small run, and favors mass-produced chips that have enough demand to recoup the cost of production.

Additionally, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, particularly, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design which can be programmed to act as one large computer. Many of these use the Linux operating system; they are then called Beowulf clusters.
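
A minimal sketch of such a coarse-grained split, using the MPI interface mentioned earlier: each process in the cluster takes one contiguous slice of the work, and only a single number per process has to be communicated at the end.

    #include <mpi.h>
    #include <stdio.h>

    #define N 1000000L   /* total amount of work, split across the cluster */

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each process handles its own contiguous slice of 0..N-1. */
        long chunk = (N + size - 1) / size;
        long start = rank * chunk;
        long end   = (start + chunk > N) ? N : start + chunk;

        /* Stand-in for a real computation: sum the integers in the slice. */
        double local = 0.0, total = 0.0;
        for (long i = start; i < end; i++)
            local += (double)i;

        /* The only communication: one number sent by each process. */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %.0f\n", total);

        MPI_Finalize();
        return 0;
    }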

As of 2003, the world's number 3 ranked supercomputer is a commodity cluster running Linux on Intel x86 hardware. However, a number of new commodity cluster projects that will run Linux on thousands of AMD x86-64 CPUs are expected to run at even higher speeds. If these trends continue, the Linux operating system is likely to become the de facto standard supercomputer operating system.

Special-purpose supercomputers

Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, which offer a better price/performance ratio at the expense of generality. They are used for applications such as astrophysics computation and brute-force codebreaking.

Examples of special-purpose supercomputers include the GRAPE family of machines, built for astrophysical N-body calculations, and the Electronic Frontier Foundation's Deep Crack, built to brute-force the DES cipher.

The fastest supercomputers today

The speed of a supercomputer is generally measured in FLOPS (floating point operations per second); this measurement ignores communication overheads and assumes that all processors of the machine are provided with data and are working at full speed.

As of early 2002, the fastest supercomputer is the Earth Simulator at the Yokohama Institute for Earth Sciences. It is a cluster of 640 custom-designed nodes, each containing eight vector processors based on the NEC SX-6 architecture, for a total of 5120 processors. It runs a customised version of the UNIX operating system.
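
The machine's theoretical peak follows from simple arithmetic, as in the illustrative C sketch below, which assumes the commonly quoted rating of 8 GFLOPS per SX-6 processor; the roughly 35 TFLOPS figure reported for it is a measured, sustained result and is necessarily lower.

    #include <stdio.h>

    int main(void)
    {
        /* Theoretical peak = number of processors x rated speed of each.
           Assumed figures: 5120 vector processors at 8 GFLOPS apiece. */
        double processors  = 5120.0;
        double gflops_each = 8.0;

        printf("peak: %.2f TFLOPS\n", processors * gflops_each / 1000.0);
        /* prints "peak: 40.96 TFLOPS"; sustained results on real
           workloads always fall below this figure. */
        return 0;
    }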

Its performance is roughly five times that of the previous fastest supercomputer, the cluster computer ASCI White at Lawrence Livermore National Laboratory. The United States government's ASCI initiative aims to replace live nuclear testing with simulation, in order to maintain the country's strategic advantage under nuclear test-ban treaties.

PARAM, developed in India by the Centre for Development of Advanced Computing (C-DAC), is another series of supercomputers.

A list of the 500 fastest supercomputers is maintained at http://www.top500.org/

Timeline of supercomputers

Period     | Supercomputer                      | Speed        | Location
1943-1944  | Colossus                           |              | Bletchley Park, England
1945-1950  | Manchester Mark I                  |              | University of Manchester, England
1950-1955  | MIT Whirlwind                      |              | Massachusetts Institute of Technology, Cambridge, MA
1955-1960  | IBM 7090                           | 210 KFLOPS   | U.S. Air Force BMEWS (RADC), Rome, NY
1960-1965  | CDC 6600                           | 10.24 MFLOPS | Lawrence Livermore Laboratory, California
1965-1970  | CDC 7600                           | 37.27 MFLOPS | Lawrence Livermore Laboratory, California
1970-1975  | CDC Cyber 76                       |              |
1975-1980  | Cray-1                             | 160 MFLOPS   | Los Alamos National Laboratory, New Mexico (1976)
1980-1985  | Cray X-MP                          | 500 MFLOPS   | Los Alamos National Laboratory, New Mexico
1985-1990  | Cray Y-MP                          | 1.3 GFLOPS   | Los Alamos National Laboratory, New Mexico
1990-1995  | Fujitsu Numerical Wind Tunnel      | 236 GFLOPS   | National Aerospace Lab
1995-2000  | Intel ASCI Red                     | 2150 GFLOPS  | Sandia National Laboratories, New Mexico
2000-2002  | IBM ASCI White, SP Power3 375 MHz  | 7226 GFLOPS  | Lawrence Livermore Laboratory, California
2002-      | Earth Simulator                    | 35 TFLOPS    | Yokohama Institute for Earth Sciences, Japan

Forthcoming supercomputers:

See also

External links: