Computer performance by orders of magnitude

Source: Wikipedia, the free encyclopedia.

This list compares various amounts of computing power, in instructions per second (or FLOPS), organized by order of magnitude.

Scientific E notation index: 2 | 3 | 6 | 9 | 12 | 15 | 18 | 21 | 24 | >24
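The scale names used for the section headings below correspond directly to decimal exponents. That correspondence can be sketched in Python; the `SCALES` table and `scale_name` helper here are illustrative constructs, not part of the article:

```python
import math

# Scale names keyed by decimal exponent, following the section headings below.
SCALES = {
    -3: "milliscale", -1: "deciscale", 0: "scale", 1: "decascale",
    2: "hectoscale", 3: "kiloscale", 6: "megascale", 9: "gigascale",
    12: "terascale", 15: "petascale", 18: "exascale", 21: "zettascale",
}

def scale_name(ops_per_second: float) -> str:
    """Return the largest named scale not exceeding the given rate."""
    exponent = math.floor(math.log10(ops_per_second))
    return SCALES[max(e for e in SCALES if e <= exponent)]
```

For example, ENIAC's roughly 18 operations per second falls in the decascale band, while Frontier's 1.1×10¹⁸ FLOPS is exascale.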

Milliscale computing (10⁻³)

Deciscale computing (10⁻¹)

  • 1×10⁻¹: multiplication of two 10-digit numbers by a 1940s electromechanical desk calculator[1]
  • 3×10⁻¹: multiplication on the Z3 and Z4 digital computers, 1941 and 1945 respectively
  • 5×10⁻¹: computing power of the average human mental calculation[clarification needed] for multiplication using pen and paper

Scale computing (10⁰)

  • 1 OP/S: power of an average human performing calculations[clarification needed] using pen and paper
  • 1.2 OP/S: addition on Z3, 1941, and multiplication on Bell Model V, 1946
  • 2.4 OP/S: addition on Z4, 1945
  • 5 OP/S: world record for addition set

Decascale computing (10¹)

  • 1.8×10¹: ENIAC, the first programmable electronic digital computer, 1945[2]
  • 5×10¹: upper end of serialized human perception computation (light bulbs do not flicker to the human observer)
  • 7×10¹: Whirlwind I vacuum-tube computer, 1951, and IBM 1620 transistorized scientific minicomputer, 1959[2]

Hectoscale computing (10²)

  • 1.3×10²: PDP-4 commercial minicomputer, 1962[2]
  • 2×10²: IBM 602 electromechanical calculator (then called a computer), 1946[citation needed]
  • 2.2×10²: upper end of serialized human throughput. This is roughly expressed by the lower limit of accurate event placement on small scales of time (the swing of a conductor's arm, the reaction time to lights on a drag strip, etc.)[3]
  • 6×10²: Manchester Mark 1 electronic general-purpose stored-program digital computer, 1949[4]

Kiloscale computing (10³)

Megascale computing (10⁶)

Gigascale computing (10⁹)

Terascale computing (10¹²)

Petascale computing (10¹⁵)

Exascale computing (10¹⁸)

  • 1×10¹⁸: The U.S. Department of Energy and NSA estimated in 2008 that they would need exascale computing around 2018[17]
  • 1×10¹⁸: Fugaku 2020 supercomputer in single-precision mode[18]
  • 1.1×10¹⁸: Frontier 2022 supercomputer
  • 1.88×10¹⁸: U.S. Summit achieves a peak throughput of this many operations per second, whilst analysing genomic data using a mixture of numerical precisions.[19]
  • 2.43×10¹⁸: Folding@home distributed computing system during COVID-19 pandemic response[20]

Zettascale computing (10²¹)

  • 1×10²¹: Accurate global weather estimation on the scale of approximately 2 weeks.[21] Assuming Moore's law remains applicable, such systems may be feasible around 2035.[22]

A zettascale computer system could generate more single-precision floating-point data in one second than was stored by any digital means on Earth as of the first quarter of 2011.[citation needed]
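The storage comparison above can be checked with back-of-envelope arithmetic. Single-precision floats are 4 bytes each; the 2011 global-storage figure used here (roughly 295 exabytes, a commonly cited estimate) is an assumption on my part, since the article gives no number:

```python
FLOPS = 1e21                    # one zettaFLOPS: results produced per second
BYTES_PER_FLOAT = 4             # IEEE 754 single precision is 32 bits
GLOBAL_STORAGE_2011 = 2.95e20   # ~295 exabytes; assumed estimate, not from the article

# Data generated each second if every operation emits one float.
bytes_per_second = FLOPS * BYTES_PER_FLOAT
print(bytes_per_second > GLOBAL_STORAGE_2011)  # prints True
```

Under these assumptions the machine outpaces 2011-era global storage by more than an order of magnitude each second.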

Beyond zettascale computing (>10²¹)

  • 1.12×10³⁶: Estimated computational power of a Matrioshka brain, assuming 1.87×10²⁶ watts of power produced by solar panels and 6 GFLOPS/watt efficiency.[23]
  • 4×10⁴⁸: Estimated computational power of a Matrioshka brain whose power source is a Carnot engine.
  • 5×10⁵⁸: Estimated computational power of a galaxy equivalent in luminosity to the Milky Way converted into Matrioshka brains.
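The first Matrioshka-brain figure follows directly from the two inputs stated in the bullet, as a quick multiplication shows:

```python
solar_power_watts = 1.87e26      # power produced by the solar panels (from [23])
flops_per_watt = 6e9             # 6 GFLOPS per watt efficiency (from [23])

# Total throughput is simply power times efficiency.
total_flops = solar_power_watts * flops_per_watt
print(f"{total_flops:.3g}")      # prints 1.12e+36, matching the listed estimate
```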

See also

References

  1. .
  2. "Cost of CPU Performance Through Time 1944-2003". www.jcmit.net. Retrieved 2024-01-15.
  3. "How many frames per second can the human eye see?". 2004-05-19. Retrieved 2013-02-19.
  4. .
  5. .
  6. "Intel 980x Gulftown | Synthetic Benchmarks | CPU & Mainboard | OC3D Review". www.overclock3d.net. March 12, 2010.
  7. Tony Pearson, "IBM Watson - How to build your own 'Watson Jr.' in your basement", Inside System Storage.
  8. "DGX-1 deep learning system" (PDF). NVIDIA DGX-1 Delivers 75X Faster Training. Note: Caffe benchmark with AlexNet, training 1.28M images with 90 epochs.
  9. "DGX Server". Nvidia. Retrieved 7 September 2017.
  10. "NVIDIA GeForce-News". 12 October 2022.
  11. "Build and train machine learning models on our new Google Cloud TPUs". 17 May 2017.
  12. "Top500 List - June 2013 | TOP500 Supercomputer Sites". top500.org. Archived from the original on 2013-06-22.
  13. .
  14. "Brain on a Chip". 30 November 2001.
  15. "Top500 list, June 2016". top500.org/list/2016/06/.
  16. "November 2018 | TOP500 Supercomputer Sites". www.top500.org. Retrieved 2018-11-30.
  17. "'Exaflop' Supercomputer Planning Begins". 2008-02-02. Archived from the original on 2008-10-01. Retrieved 2010-01-04. "Through the IAA, scientists plan to conduct the basic research required to create a computer capable of performing a million trillion calculations per second, otherwise known as an exaflop."
  18. "June 2020 | TOP500".
  19. "Genomics Code Exceeds Exaops on Summit Supercomputer". Oak Ridge Leadership Computing Facility. Retrieved 2018-11-30.
  20. Pande lab. "Client Statistics by OS". Archived from the original on 2020-04-12. Retrieved 2020-04-12.
  21. .
  22. "Zettascale by 2035? China Thinks So". 6 December 2018.
  23. Jacob Eddison; Joe Marsden; Guy Levin; Darshan Vigneswara (2017-12-12), "Matrioshka Brain", Journal of Physics Special Topics, 16 (1), Department of Physics and Astronomy, University of Leicester.
  24. Moore, Gordon E. (1965). "Cramming more components onto integrated circuits" (PDF). Electronics Magazine. p. 4. Retrieved 2006-11-11.

External links