# Amdahl's law

**Amdahl's law**, named after computer architect Gene Amdahl, states that if *F* is the fraction of a calculation that is sequential, and (1 - *F*) is the fraction that can be parallelised, then the maximum speedup that can be achieved by using *P* processors is 1 / (*F* + (1 - *F*)/*P*). In the limit, as *P* tends to infinity, the maximum speedup tends to 1/*F*. In practice, the price/performance ratio falls rapidly as *P* is increased once (1 - *F*)/*P* becomes small compared to *F*.
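
The formula can be checked directly. Below is a minimal sketch in Python; the function name `amdahl_speedup` is illustrative, not part of the law itself:

```python
def amdahl_speedup(f: float, p: int) -> float:
    """Maximum speedup on p processors when a fraction f of the
    work is inherently sequential (Amdahl's law)."""
    return 1.0 / (f + (1.0 - f) / p)
```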

As an example, if *F* is only 10%, the problem can be sped up by a factor of at most 10, no matter how large a value of *P* is used. For this reason, parallel computing is only useful either for small numbers of processors, or for problems with very low values of *F*: so-called embarrassingly parallel problems.
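
Evaluating the `amdahl_speedup` sketch above for this 10% case shows the speedup levelling off far below the processor count as it approaches the 1/*F* ceiling:

```python
for p in (2, 8, 64, 1024):
    print(p, round(amdahl_speedup(0.1, p), 2))
# 2 1.82
# 8 4.71
# 64 8.77
# 1024 9.91  (approaching the 1/F = 10 ceiling)
```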

A great part of the craft of parallel programming consists of attempting to reduce *F* to the smallest possible value.

## References

- Gene Amdahl, "Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities", *AFIPS Conference Proceedings* (30), pp. 483–485, 1967.
