Karp–Flatt metric
The Karp–Flatt metric is a measure of parallelization of code in parallel processor systems. It exists in addition to Amdahl's law and Gustafson's law as an indication of the extent to which a particular computer code is parallelized. It was proposed by Alan H. Karp and Horace P. Flatt in 1990.
Description
Given a parallel computation exhibiting speedup $\psi$ on $p$ processors, where $p > 1$, the experimentally determined serial fraction $e$ is defined to be the Karp–Flatt metric:

$$e = \frac{\frac{1}{\psi} - \frac{1}{p}}{1 - \frac{1}{p}}$$
The lower the value of $e$, the better the parallelization.
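As a minimal sketch (not code from the original paper; the function name `karp_flatt` is my own), the metric can be computed directly from a measured speedup:

```python
def karp_flatt(speedup, p):
    """Experimentally determined serial fraction e for a measured
    speedup psi on p processors (valid only for p > 1)."""
    if p <= 1:
        raise ValueError("p must be greater than 1")
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# A perfectly parallel code (speedup = p) yields e = 0;
# a code with no speedup at all (speedup = 1) yields e = 1.
print(karp_flatt(8.0, 8))  # -> 0.0
print(karp_flatt(1.0, 8))  # -> 1.0
```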
Justification
There are many ways to measure the performance of a parallel algorithm running on a parallel processor. The Karp–Flatt metric reveals aspects of the performance that are not easily discerned from other metrics. A pseudo-"derivation" of sorts follows from Amdahl's law, which can be written as:

$$T(p) = T_s + \frac{T_p}{p}$$

Where:
- $T(p)$ is the total time taken for code execution in a $p$-processor system
- $T_s$ is the time taken for the serial part of the code to run
- $T_p$ is the time taken for the parallel part of the code to run in one processor
- $p$ is the number of processors

with the result $T(1) = T_s + T_p$ obtained by substituting $p = 1$. If we define the serial fraction $e = \frac{T_s}{T(1)}$, then the equation can be rewritten as

$$T(p) = T(1)\,e + \frac{T(1)\,(1-e)}{p}$$

In terms of the speedup $\psi = \frac{T(1)}{T(p)}$:

$$\frac{1}{\psi} = e + \frac{1-e}{p}$$
Solving for the serial fraction, we get the Karp–Flatt metric as above. Note that this is not a "derivation" from Amdahl's law, as the left hand side represents a metric rather than a mathematically derived quantity.
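The algebra behind "solving for the serial fraction" is short; starting from the speedup relation above:

```latex
\begin{align}
\frac{1}{\psi} &= e + \frac{1-e}{p} \\
\frac{1}{\psi} - \frac{1}{p} &= e\left(1 - \frac{1}{p}\right) \\
e &= \frac{\frac{1}{\psi} - \frac{1}{p}}{1 - \frac{1}{p}}
\end{align}
```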
Use
While the serial fraction $e$ is often mentioned in computer science literature, it was rarely used as a diagnostic tool the way speedup and efficiency are. Karp and Flatt hoped to correct this by proposing this metric.
For a problem of fixed size, the efficiency of a parallel computation typically decreases as the number of processors increases. By using the serial fraction obtained experimentally via the Karp–Flatt metric, we can determine whether the efficiency decrease is due to limited opportunity for parallelism or to increases in algorithmic or architectural overhead: a serial fraction that stays roughly constant as $p$ grows points to limited parallelism, while a serial fraction that grows with $p$ points to mounting overhead.
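For illustration, here is a minimal sketch of this diagnostic use; the speedup figures below are invented for the example (chosen so that $e$ stays roughly constant) and are not measurements from the original paper:

```python
def karp_flatt(speedup, p):
    # Experimentally determined serial fraction for speedup on p > 1 processors.
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# Hypothetical measured speedups for a fixed-size problem (illustrative only).
measured = {2: 1.82, 3: 2.50, 4: 3.08, 5: 3.57, 6: 4.00, 7: 4.38, 8: 4.71}

for p, psi in measured.items():
    print(f"p={p}: e = {karp_flatt(psi, p):.3f}")
# Here e stays near 0.10 throughout, so the falling efficiency reflects a
# serial fraction of roughly 10%, not overhead that grows with p.
```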
References
- Karp, Alan H.; Flatt, Horace P. (1990). "Measuring Parallel Processor Performance". Communications of the ACM. 33 (5): 539–543.
- Quinn, Michael J. (2004). Parallel Programming in C with MPI and OpenMP. Boston: McGraw-Hill. ISBN 0-07-058201-7.