You throw 100 processors at a job that takes an hour. But 20 minutes of that hour is code that runs in sequence — one step after the other, no shortcut. Those 20 minutes don’t care how many processors you have. So your theoretical best is 20 minutes plus the remaining 40 minutes split across your 100 cores.
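
Running the numbers on that example:

\[T(100) = 20 + \frac{40}{100} = 20.4 \text{ minutes} \quad\Rightarrow\quad \text{speedup} = \frac{60}{20.4} \approx 2.9\times\]

A hundredfold increase in hardware buys you less than a 3x speedup, because the serial 20 minutes never shrinks.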

Amdahl’s law quantifies that ceiling. The serial fraction of your workload sets a hard limit on how much speedup you can ever get, no matter how much hardware you add.

formula

\[S(N) = \frac{1}{(1 - p) + \frac{p}{N}}\]

where \(S(N)\) is the speedup on \(N\) processors, \(p\) is the fraction of the workload that can be parallelized, and \(1 - p\) is the serial fraction.

As \(N \to \infty\), the maximum speedup converges to \(\frac{1}{1 - p}\).

If 5% of your code is serial (\(p = 0.95\)), you will never exceed a 20x speedup. Ever.
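
A minimal sketch of the formula in Python (the name `amdahl_speedup` is just illustrative), showing how quickly the curve flattens for \(p = 0.95\):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup on n processors when a fraction p is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.95  # 5% of the work is serial
for n in (1, 10, 100, 1000, 10_000):
    print(f"{n:>6} processors -> {amdahl_speedup(p, n):6.2f}x")

# The limit as n grows without bound: 1 / (1 - p) = 20x
print(f"ceiling -> {1 / (1 - p):.0f}x")
```

Going from 100 to 10,000 processors moves you from roughly 16.8x to 20x: a hundredfold hardware increase for a few extra percent.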

notes

Fixed problem size. Amdahl assumes the total workload stays the same regardless of processor count. You’re asking: “given this exact job, how much faster can I finish it?” This makes the law inherently pessimistic about scaling.

Gustafson’s rebuttal (1988). In practice, when you get more processors you solve larger problems, not the same one faster. Gustafson’s law flips the framing: with \(N\) processors, you tackle \(N\)-sized work. The two laws aren’t contradictory — they model different scenarios.
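
A sketch contrasting the two framings, using the standard form of Gustafson's scaled speedup, \(S(N) = (1 - p) + pN\). Note the caveat that Gustafson's \(p\) is the parallel fraction of the runtime observed on the \(N\)-processor machine, so the two \(p\)'s aren't strictly the same quantity; the comparison below just reuses one value for illustration.

```python
def amdahl(p: float, n: int) -> float:
    # Fixed workload: the serial part (1 - p) caps the speedup.
    return 1.0 / ((1.0 - p) + p / n)

def gustafson(p: float, n: int) -> float:
    # Scaled workload: the parallel part grows with n, so speedup grows roughly linearly.
    return (1.0 - p) + p * n

p = 0.95
for n in (10, 100, 1000):
    print(f"n={n:>5}  Amdahl: {amdahl(p, n):7.2f}x   Gustafson: {gustafson(p, n):8.2f}x")
```

At 100 processors the same \(p = 0.95\) gives about 16.8x under Amdahl and about 95x under Gustafson, which is why the choice of framing matters so much.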

Ignores overhead. The formula doesn’t account for communication costs, synchronization, memory contention, or load imbalance. Real speedups are often worse than Amdahl predicts.
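
To see why real speedups undershoot, here is a toy extension (purely illustrative, not part of Amdahl's law) that adds a hypothetical per-processor coordination cost \(cN\) to the denominator:

```python
def speedup_with_overhead(p: float, n: int, c: float) -> float:
    # Toy model: Amdahl's denominator plus a linear coordination cost c * n.
    # c is a hypothetical per-processor overhead, expressed as a fraction of total runtime.
    return 1.0 / ((1.0 - p) + p / n + c * n)

p, c = 0.95, 0.0005
for n in (10, 50, 100, 500, 1000):
    ideal = 1.0 / ((1.0 - p) + p / n)
    print(f"n={n:>4}  ideal: {ideal:6.2f}x   with overhead: {speedup_with_overhead(p, n, c):6.2f}x")
```

With even a tiny overhead term, the curve peaks (around 50 processors here) and then falls: adding more hardware eventually makes the job slower.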

The serial fraction isn’t fixed. A common mistake is treating \(p\) as an architectural constant. In practice, the serial fraction depends on problem size, algorithm choice, and implementation.
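
One way to see this empirically is the Karp–Flatt metric, which backs out an effective serial fraction from measured speedups; if the estimate drifts as the problem size or processor count changes, \(p\) clearly wasn't a constant. The measurements below are made-up numbers for illustration.

```python
def karp_flatt(measured_speedup: float, n: int) -> float:
    # Experimentally determined serial fraction, given a measured speedup on n processors.
    return (1.0 / measured_speedup - 1.0 / n) / (1.0 - 1.0 / n)

# Hypothetical measurements: (processors, observed speedup)
runs = [(4, 3.6), (16, 11.0), (64, 24.0)]
for n, s in runs:
    print(f"n={n:>3}  speedup={s:5.1f}x  estimated serial fraction={karp_flatt(s, n):.3f}")
```

The estimate folds in overhead as well as genuinely serial code, which is exactly the point: the "serial fraction" you observe is a property of the whole run, not a fixed attribute of the program.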