Supercomputer
A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS).
Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks such as weather forecasting, climate research, molecular modeling, and physical simulations.
Supercomputers were introduced in the 1960s, and for several decades the fastest were made by Seymour Cray, first at Control Data Corporation (CDC) and later at Cray Research.
The US has long been the leader in the supercomputer field, first through Cray's almost uninterrupted dominance and later through a variety of technology companies. Japan made major strides in the 1980s and 1990s, and China has become increasingly active since then. As of May 2022, the fastest supercomputer on the TOP500 list is Frontier, in the US, with a LINPACK benchmark score of 1.102 exaFLOPS, followed by Fugaku.
History
In 1960, UNIVAC built the Livermore Atomic Research Computer (LARC), today considered among the first supercomputers, for the US Navy Research and Development Center.
The third pioneering supercomputer project in the early 1960s was the Atlas at the University of Manchester, built by a team led by Tom Kilburn.
The CDC 6600, designed by Seymour Cray, was finished in 1964 and marked the transition from germanium to silicon transistors. Silicon transistors could run more quickly, and the overheating problem was solved by introducing refrigeration into the design.[18] The CDC 6600 thus became the fastest computer in the world. Since it outperformed all other contemporary computers by roughly a factor of ten, it was dubbed a supercomputer and defined the supercomputing market; about one hundred were sold at $8 million each.[19][20][21][22]
Cray left CDC in 1972 to form his own company, Cray Research. Four years later, in 1976, he delivered the 80 MHz Cray-1, which became one of the most successful supercomputers in history.
Massively parallel designs
The only computer to seriously challenge the Cray-1's performance in the 1970s was the ILLIAC IV. This machine was the first realized example of a true massively parallel computer, in which many processors worked together to solve different parts of a single larger problem. In contrast with the vector systems, which were designed to run a single stream of data as quickly as possible, this design instead feeds separate parts of the data to entirely different processors and then recombines the results. The ILLIAC's design was finalized in 1966 with 256 processors and a projected speed of up to 1 GFLOPS, compared with the Cray-1's peak of 250 MFLOPS. However, development problems led to only 64 processors being built, and the system never operated faster than about 200 MFLOPS while being much larger and more complex than the Cray. Another problem was that writing software for the system was difficult, and getting peak performance from it required serious effort.
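The scatter/compute/gather pattern described here can be sketched with Python's standard-library `multiprocessing` module. This is an illustrative modern analogy for how a data-parallel machine splits work across processors and recombines the results, not code for the ILLIAC IV itself:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker processes its own slice of the data independently."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    # Scatter: split the data into one chunk per worker.
    size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Compute in parallel, then gather: recombine the partial results.
    with Pool(n_workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))  # 332833500
```

The recombination step here is a trivial sum; on real massively parallel systems the cost of splitting the data and merging the results is precisely what limits speedup for less regular problems.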
But the partial success of the ILLIAC IV was widely seen as pointing the way to the future of supercomputing. Cray argued against this, famously quipping that "If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?"
In 1982, Osaka University's LINKS-1 Computer Graphics System used a massively parallel processing architecture with 514 microprocessors to render realistic 3D computer graphics.
Software development remained a problem, but the CM series sparked off considerable research into this issue. Similar designs using custom hardware were made by many companies, including the Evans & Sutherland ES-1, MasPar, nCUBE, Intel iPSC, and the Goodyear MPP.
In 1998, David Bader developed the first Linux supercomputer using commodity parts.
Systems with a massive number of processors generally take one of two paths. In the grid computing approach, the processing power of many computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available. In another approach, many processors are used in proximity to each other, e.g. in a computer cluster.
As the price, performance, and energy efficiency of general-purpose graphics processing units (GPGPUs) have improved, a number of petaFLOPS supercomputers such as Tianhe-I and Nebulae have come to rely on them.
High-performance computers have an expected life cycle of about three years before requiring an upgrade.
Special purpose supercomputers
A number of special-purpose systems have been designed, dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom ASICs, allowing better price/performance ratios by sacrificing generality. Examples of special-purpose supercomputers include Belle, Deep Blue, and Hydra for playing chess, Gravity Pipe for astrophysics, MDGRAPE-3 for molecular dynamics, and Deep Crack for breaking the DES cipher.
Energy usage and heat management
Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers. The large amount of heat generated by a system may also shorten the lifetime of other system components.
Heat management is a major issue in complex electronic devices and affects powerful computer systems in various ways.
The packing of thousands of processors together inevitably generates significant amounts of heat that must be removed. The Cray-2, for example, was liquid cooled, using a Fluorinert "cooling waterfall" forced through the modules under pressure.
In the Blue Gene system, IBM deliberately used low-power processors to deal with heat density.
The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt".
Because copper wires can transfer energy into a supercomputer with much higher power densities than forced air or circulating refrigerants can remove waste heat,[78] the ability of the cooling systems to remove waste heat is a limiting factor.[79][80] As of 2015[update], many existing supercomputers have more infrastructure capacity than the actual peak demand of the machine – designers generally conservatively design the power and cooling infrastructure to handle more than the theoretical peak electrical power consumed by the supercomputer. Designs for future supercomputers are power-limited – the thermal design power of the supercomputer as a whole, the amount that the power and cooling infrastructure can handle, is somewhat more than the expected normal power consumption, but less than the theoretical peak power consumption of the electronic hardware.[81]
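The "FLOPS per watt" metric can be made concrete with a short calculation. The figures below are hypothetical, chosen only to be of the same order as large 2022-era systems, not measurements of any machine listed here:

```python
def flops_per_watt(rmax_pflops: float, power_mw: float) -> float:
    """Green500-style efficiency: sustained FLOPS divided by power draw.

    rmax_pflops: sustained LINPACK performance in petaFLOPS.
    power_mw:    average power consumption in megawatts.
    Returns gigaFLOPS per watt.
    """
    flops = rmax_pflops * 1e15
    watts = power_mw * 1e6
    return flops / watts / 1e9  # GFLOPS per watt

# Hypothetical system: 1,100 PFLOPS sustained at 21 MW of power.
print(round(flops_per_watt(1100, 21), 1))  # 52.4
```

The same arithmetic explains why power-limited designs matter: doubling sustained performance without improving efficiency doubles the load the power and cooling infrastructure must absorb.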
Software and system management
Operating systems
Since the end of the 20th century, supercomputer operating systems have undergone major transformations, driven by changes in supercomputer architecture. While early operating systems were custom-tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems toward the adaptation of generic software such as Linux.
Since modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g. a small, efficient lightweight kernel on compute nodes, but a larger system such as a full Linux distribution on server and I/O nodes.
While in a traditional multi-user computer system job scheduling is, in effect, a problem of allocating processing and peripheral resources, in a massively parallel system the job management system must manage the allocation of both computational and communication resources, as well as gracefully handle inevitable hardware failures when tens of thousands of processors are present.
Although most modern supercomputers use Linux-based operating systems, each manufacturer has its own specific Linux-derivative, and no industry standard exists, partly due to the fact that the differences in hardware architectures require changes to optimize the operating system to each hardware design.[82][88]
Software tools and message passing
The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Software tools for distributed processing include standard APIs such as MPI and PVM, as well as open-source software solutions such as Beowulf.
In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared-memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes.
Moreover, it is quite difficult to debug and test parallel programs. Special techniques need to be used for testing and debugging such applications.
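The message-passing style used on these machines can be sketched in miniature with standard-library pipes between processes. This is a conceptual analogy to MPI-style point-to-point communication, not actual MPI code; the function names are illustrative:

```python
from multiprocessing import Process, Pipe

def worker(conn, rank):
    """Each rank receives its chunk, computes locally, and sends the result back."""
    data = conn.recv()            # point-to-point receive from the coordinator
    conn.send((rank, sum(data)))  # point-to-point send of the partial result
    conn.close()

def coordinator_sum(data, n_ranks=4):
    """Scatter chunks to worker ranks over pipes, then gather and combine."""
    size = (len(data) + n_ranks - 1) // n_ranks
    conns, procs = [], []
    for rank in range(n_ranks):
        parent, child = Pipe()
        p = Process(target=worker, args=(child, rank))
        p.start()
        parent.send(data[rank * size:(rank + 1) * size])  # scatter
        conns.append(parent)
        procs.append(p)
    total = sum(result for _, result in (c.recv() for c in conns))  # gather
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(coordinator_sum(list(range(100))))  # 4950
```

Even in this toy, the coordinator must track which rank owns which data and wait on every reply, which hints at why debugging real message-passing programs with thousands of ranks is hard.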
Distributed supercomputing
Opportunistic approaches
Opportunistic supercomputing is a form of networked grid computing whereby a "super virtual computer" of many loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing performance. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamics simulations.[91]
The fastest grid computing system is the volunteer computing project Folding@home (F@h). As of April 2020[update], F@h reported 2.5 exaFLOPS of x86 processing power. Of this, over 100 PFLOPS are contributed by clients running on various GPUs, and the rest from various CPU systems.[92]
The Berkeley Open Infrastructure for Network Computing (BOINC) platform hosts a number of volunteer computing projects. As of February 2017[update], BOINC recorded a processing power of over 166 petaFLOPS through over 762 thousand active computers (hosts) on the network.[93]
As of October 2016[update], the Great Internet Mersenne Prime Search (GIMPS) was another notable example, harnessing over a million volunteer computers in its distributed search for Mersenne primes.[94]
Quasi-opportunistic approaches
Quasi-opportunistic supercomputing is a form of distributed computing whereby the "super virtual computer" of many networked, geographically dispersed computers performs computing tasks that demand huge processing power.[95] Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and the use of intelligence about the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding parallel computing software in grids should be achieved through the implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault tolerant message passing libraries and data pre-conditioning.[95]
High-performance computing clouds
Cloud computing, with its recent and rapid expansion and development, has grabbed the attention of high-performance computing (HPC) users and developers in recent years. Cloud computing attempts to provide HPC-as-a-service exactly like other forms of services available in the cloud, such as software as a service, platform as a service, and infrastructure as a service. HPC users may benefit from the cloud in different respects, such as scalability, on-demand resources, speed, and lower cost. On the other hand, moving HPC applications to the cloud brings a set of challenges, too, such as virtualization overhead, multi-tenancy of resources, and network latency issues. Much research is currently being done to overcome these challenges and make HPC in the cloud a more realistic possibility.[96][97][98][99]
In 2016, Penguin Computing, Parallel Works, R-HPC, Amazon Web Services, and several other vendors began offering HPC cloud computing.
Performance measurement
Capability versus capacity
Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g. a very complex weather simulation application.
Capacity computing, in contrast, is typically thought of as using efficient cost-effective computing power to solve a few somewhat large problems or many small problems.[102] Architectures that lend themselves to supporting many users for routine everyday tasks may have a lot of capacity but are not typically considered supercomputers, given that they do not solve a single very complex problem.[102]
Performance metrics
In general, the speed of supercomputers is measured and benchmarked in FLOPS (floating-point operations per second), not in MIPS (million instructions per second), as is the case with general-purpose computers.
No single number can reflect the overall performance of a computer system, yet the goal of the Linpack benchmark is to approximate how fast the computer solves numerical problems and it is widely used in the industry.[105] The FLOPS measurement is either quoted based on the theoretical floating point performance of a processor (derived from manufacturer's processor specifications and shown as "Rpeak" in the TOP500 lists), which is generally unachievable when running real workloads, or the achievable throughput, derived from the LINPACK benchmarks and shown as "Rmax" in the TOP500 list.[106] The LINPACK benchmark typically performs LU decomposition of a large matrix.[107] The LINPACK performance gives some indication of performance for some real-world problems, but does not necessarily match the processing requirements of many other supercomputer workloads, which for example may require more memory bandwidth, or may require better integer computing performance, or may need a high performance I/O system to achieve high levels of performance.[105]
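The LU decomposition at the heart of LINPACK can be illustrated with a small pure-Python sketch. This is a simplified Doolittle factorization without pivoting, for clarity; production LINPACK/HPL implementations use partial pivoting and blocked algorithms:

```python
def lu_decompose(a):
    """Doolittle LU decomposition (no pivoting) of a square matrix.

    Returns (L, U) with L unit lower triangular and U upper triangular,
    so that the matrix product L*U reproduces the input. For large n the
    work is dominated by roughly (2/3)*n**3 floating-point operations,
    which is why LINPACK results are naturally reported in FLOPS.
    """
    n = len(a)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in a]  # work on a copy of the input
    for k in range(n):
        for i in range(k + 1, n):
            factor = U[i][k] / U[k][k]  # fails if a pivot is zero
            L[i][k] = factor
            for j in range(k, n):
                U[i][j] -= factor * U[k][j]
    return L, U

# Small check: A = [[4, 3], [6, 3]] factors as L = [[1, 0], [1.5, 1]],
# U = [[4, 3], [0, -1.5]].
L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
```

The gap between Rpeak and Rmax then has a simple reading: Rpeak counts what the arithmetic units could theoretically do, while Rmax counts how many of these decomposition operations per second the whole system actually sustains once memory and interconnect limits bite.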
The TOP500 list
Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.
This is a recent list of the computers which appeared at the top of the TOP500 list,[108] with the "Peak speed" given as the "Rmax" rating. In 2018, Lenovo became the world's largest provider of TOP500 supercomputers, with 117 units.[109]
| Rank (previous) | Rmax / Rpeak (PetaFLOPS) | Name | Model | CPU cores | Accelerator (e.g. GPU) cores | Interconnect | Manufacturer | Site, country | Year | Operating system |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1,102.00 / 1,685.65 | Frontier | HPE Cray EX235a | 591,872 (9,248 × 64-core Optimized 3rd Generation EPYC 64C @2.0 GHz) | 36,992 × 220 AMD Instinct MI250X | Slingshot-11 | HPE | Oak Ridge National Laboratory, United States | 2022 | HPE Cray OS |
| 2 | 442.010 / 537.212 | Fugaku | Supercomputer Fugaku | 7,630,848 (158,976 × 48-core Fujitsu A64FX @2.2 GHz) | 0 | Tofu interconnect D | Fujitsu | RIKEN Center for Computational Science, Japan | 2020 | Linux (RHEL) |
| 3 | 309.10 / 428.70 | LUMI | HPE Cray EX235a | 150,528 (2,352 × 64-core Optimized 3rd Generation EPYC 64C @2.0 GHz) | 9,408 × 220 AMD Instinct MI250X | Slingshot-11 | HPE | | 2022 | HPE Cray OS |
| 4 | 174.70 / 255.75 | Leonardo | BullSequana XH2000 | 110,592 (3,456 × 32-core Xeon Platinum 8358 @2.6 GHz) | 13,824 × 108 Nvidia Ampere A100 | Nvidia HDR100 Infiniband | Atos | | 2022 | Linux |
| 5 (4) | 148.60 / 200.795 | Summit | IBM Power System AC922 | 202,752 (9,216 × 22-core IBM POWER9 @3.07 GHz) | 27,648 × 80 Nvidia Tesla V100 | InfiniBand EDR | IBM | Oak Ridge National Laboratory, United States | 2018 | Linux (RHEL 7.4) |
| 6 (5) | 94.640 / 125.712 | Sierra | IBM Power System S922LC | 190,080 (8,640 × 22-core IBM POWER9 @3.1 GHz) | 17,280 × 80 Nvidia Tesla V100 | InfiniBand EDR | IBM | Lawrence Livermore National Laboratory, United States | 2018 | Linux (RHEL) |
| 7 (6) | 93.015 / 125.436 | Sunway TaihuLight | Sunway MPP | 10,649,600 (40,960 × 260-core Sunway SW26010 @1.45 GHz) | 0 | Sunway[111] | NRCPC | | 2016 | Linux (RaiseOS 2.0.5) |
| 8 (7) | 70.87 / 93.75 | Perlmutter | HPE Cray EX235n | ? × 64-core AMD Epyc 7763 @2.45 GHz | ? × 108 Nvidia Ampere A100 | Slingshot-10 | HPE | NERSC, United States | 2021 | HPE Cray OS |
| 9 (8) | 63.460 / 79.215 | Selene | Nvidia | 71,680 (1,120 × 64-core AMD Epyc 7742 @2.25 GHz) | 4,480 × 108 Nvidia Ampere A100 | Mellanox HDR Infiniband | Nvidia | Nvidia, United States | 2020 | Ubuntu 20.04.1 |
| 10 (9) | 61.445 / 100.679 | Tianhe-2A | TH-IVB-FEP | 427,008 (35,584 × 12-core Intel Xeon E5-2692 v2 @2.2 GHz) | 35,584 × 128-core Matrix-2000[112] | TH Express-2 | NUDT | National Supercomputer Center in Guangzhou, China | 2018[113] | Linux (Kylin) |
Applications
The stages of supercomputer application may be summarized in the following table:
| Decade | Uses and computer involved |
|---|---|
| 1970s | Weather forecasting, aerodynamic research (Cray-1).[114] |
| 1980s | Probabilistic analysis,[115] radiation shielding modeling[116] (CDC Cyber). |
| 1990s | Brute force code breaking (EFF DES cracker).[117] |
| 2000s | 3D nuclear test simulations as a substitute for live testing under the Nuclear Non-Proliferation Treaty (ASCI Q).[118] |
| 2010s | Molecular dynamics simulation (Tianhe-1A).[119] |
| 2020s | Scientific research for outbreak prevention / electrochemical reaction research.[120] |
The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex.[121]
Modern-day weather forecasting also relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.[122]
In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale project.[123]
The Advanced Simulation and Computing Program currently uses supercomputers to maintain and simulate the United States nuclear stockpile.[124]
In early 2020, the COVID-19 pandemic was front and center in the world. Supercomputers ran many simulations to search for compounds that could potentially stop the spread of the virus. These computers run for tens of hours using many CPUs in parallel to model the different processes.[125][126][127]
Development and trends
In the 2010s, China, the United States, the European Union, and others competed to be the first to create a 1 exaFLOPS (10^18 or one quintillion FLOPS) supercomputer.[128] Erik P. DeBenedictis of Sandia National Laboratories has theorized that a zettaFLOPS (10^21 or one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[129][130][131] Such systems might be built around 2030.[132]
Many Monte Carlo simulations use the same algorithm to process a randomly generated data set; particularly, integro-differential equations describing physical transport processes, the random paths, collisions, and energy and momentum depositions of neutrons, photons, ions, electrons, etc. The next step for microprocessors may be into the third dimension; and specializing to Monte Carlo, the many layers could be identical, simplifying the design and manufacture process.[133]
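Monte Carlo methods of the kind described apply one and the same algorithm to many independently generated random samples, which is why they map so naturally onto many identical processing layers. A minimal example is estimating π from random points; this is a toy illustration, not a transport code:

```python
import random

def estimate_pi(n_samples, seed=42):
    """Estimate pi by sampling random points in the unit square and
    counting the fraction that land inside the quarter circle.

    Every sample runs the same algorithm on independent random data,
    so the work divides almost perfectly across many processors.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # close to 3.14; converges as n grows
```

Splitting `n_samples` across processors and summing the per-processor counts gives the same estimate, which is exactly the structure the identical-layer hardware argument exploits.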
The cost of operating high-performance supercomputers has risen, mainly due to increasing power consumption. In the mid-1990s a top-10 supercomputer required about 100 kilowatts; in 2010 the top 10 supercomputers required between 1 and 2 megawatts.[134] A 2010 study commissioned by DARPA identified power consumption as the most pervasive challenge in achieving exascale computing.[135] At the time a megawatt-year of energy consumption cost about 1 million dollars. Supercomputing facilities were constructed to efficiently remove the increasing amount of heat produced by modern multi-core central processing units. Based on the energy consumption of the Green 500 list of supercomputers between 2007 and 2011, a supercomputer with 1 exaFLOPS in 2011 would have required nearly 500 megawatts. Operating systems were developed for existing hardware to conserve energy whenever possible.[136] CPU cores not in use during the execution of a parallelized application were put into low-power states, producing energy savings for some supercomputing applications.[137]
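The figure of roughly $1 million per megawatt-year corresponds to an electricity price near $0.11 per kWh, as a quick calculation shows. The price used below is an assumption chosen to match that order of magnitude, not a quoted tariff:

```python
def annual_energy_cost(power_mw, price_per_kwh):
    """Cost of running a machine continuously for one (non-leap) year."""
    hours_per_year = 24 * 365
    kwh = power_mw * 1000 * hours_per_year  # MW -> kW, times hours
    return kwh * price_per_kwh

# 1 MW for a year at an assumed $0.114/kWh:
print(round(annual_energy_cost(1, 0.114)))  # 998640, i.e. about $1 million
```

At the same assumed price, the hypothetical 500 MW exascale machine mentioned above would have cost on the order of half a billion dollars per year in electricity alone, which is why power consumption dominates exascale planning.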
The increasing cost of operating supercomputers has been a driving factor in a trend toward bundling of resources through a distributed supercomputer infrastructure. National supercomputing centers first emerged in the US, followed by Germany and Japan. The European Union launched the Partnership for Advanced Computing in Europe (PRACE) with the aim of creating a persistent pan-European supercomputer infrastructure with services to support scientists across the European Union in porting, scaling and optimizing supercomputing applications.[134] Iceland built the world's first zero-emission supercomputer. Located at the Thor Data Center in Reykjavík, Iceland, this supercomputer relies on completely renewable sources for its power rather than fossil fuels. The colder climate also reduces the need for active cooling, making it one of the greenest facilities in the world of computers.[138]
Funding supercomputer hardware also became increasingly difficult. In the mid-1990s a top 10 supercomputer cost about 10 million euros, while in 2010 the top 10 supercomputers required an investment of between 40 and 50 million euros.[134] In the 2000s national governments put in place different strategies to fund supercomputers. In the UK the national government funded supercomputers entirely and high performance computing was put under the control of a national funding agency. Germany developed a mixed funding model, pooling local state funding and federal funding.[134]
In fiction
Examples of supercomputers in fiction include HAL 9000, Multivac, Colossus, WOPR, and Deep Thought.
See also
- ACM/IEEE Supercomputing Conference
- ACM SIGHPC
- High-performance computing
- High-performance technical computing
- Jungle computing
- Nvidia Tesla Personal Supercomputer
- Parallel computing
- Supercomputing in China
- Supercomputing in Europe
- Supercomputing in India
- Supercomputing in Japan
- Testing high-performance computing applications
- Ultra Network Technologies
- Quantum computing
References
- ^ "IBM Blue gene announcement". 03.ibm.com. 26 June 2007. Retrieved 9 June 2012.
- ^ "Intrepid". Argonne Leadership Computing Facility. Argonne National Laboratory. Archived from the original on 7 May 2013. Retrieved 26 March 2020.
- ^ "The List: June 2018". Top 500. Retrieved 25 June 2018.
- ^ "AMD Playstation 5 GPU Specs". TechPowerUp. Retrieved 11 September 2021.
- ^ "NVIDIA GeForce GT 730 Specs". TechPowerUp. Retrieved 11 September 2021.
- ^ "Operating system Family / Linux". TOP500.org. Retrieved 30 November 2017.
- ^ Anderson, Mark (21 June 2017). "Global Race Toward Exascale Will Drive Supercomputing, AI to Masses." Spectrum.IEEE.org. Retrieved 20 January 2019.
- ^ Lemke, Tim (8 May 2013). "NSA Breaks Ground on Massive Computing Center". Retrieved 11 December 2013.
- ^ ISBN 978-0-309-04088-4.
- ^ ISBN 978-1-55860-539-8.
- ^ Paul Alcorn (30 May 2022). "AMD-Powered Frontier Supercomputer Breaks the Exascale Barrier, Now Fastest in the World". Tom's Hardware. Retrieved 30 May 2022.
- ^ "Japan Captures TOP500 Crown with Arm-Powered Supercomputer - TOP500 website". www.top500.org.
- ^ "Performance Development". www.top500.org. Retrieved 27 October 2022.
- ISBN 9780801887741.
- ISBN 9780801887741.
- ISBN 9780801887741.
- ^ The Atlas, University of Manchester, archived from the original on 28 July 2012, retrieved 21 September 2010
- ^ The Supermen, Charles Murray, Wiley & Sons, 1997.
- ISBN 978-0-262-53203-7.
- ^ ISBN 978-1-878592-63-7.
- ISBN 978-1-4020-8135-4.
- ISBN 978-0-253-00349-2.
- ISBN 978-1-55860-539-8, pages 41–48.
- ISBN 1-57356-521-0, page 65.
- ^ Due to Soviet propaganda, it can be read sometimes that the Soviet supercomputer M13 was the first to reach the gigaflops barrier. Actually, the M13 building began in 1984, but it was not operational before 1986. Rogachev Yury Vasilievich, Russian Virtual Computer Museum
- ^ "Seymour Cray Quotes". BrainyQuote.
- ^ Steve Nelson (3 October 2014). "ComputerGK.com : Supercomputers".
- ^ "LINKS-1 Computer Graphics System-Computer Museum". museum.ipsj.or.jp.
- ^ "VPP500 (1992) - Fujitsu Global".
- ^ "TOP500 Annual Report 1994". Netlib.org. 1 October 1996. Retrieved 9 June 2012.
- ISBN 0-8186-7901-8.
- ^ H. Fujii, Y. Yasuda, H. Akashi, Y. Inagami, M. Koga, O. Ishihara, M. Syazwan, H. Wada, T. Sumimoto, Architecture and performance of the Hitachi SR2201 massively parallel processor system, Proceedings of 11th International Parallel Processing Symposium, April 1997, pages 233–241.
- ^ Y. Iwasaki, The CP-PACS project, Nuclear Physics B: Proceedings Supplements, Volume 60, Issues 1–2, January 1998, pages 246–254.
- ^ A.J. van der Steen, Overview of recent supercomputers, Publication of the NCF, Stichting Nationale Computer Faciliteiten, the Netherlands, January 1997.
- ISBN 978-0-262-68142-1, page 182.
- ^ "David Bader Selected to Receive the 2021 IEEE Computer Society Sidney Fernbach Award". IEEE Computer Society. 22 September 2021. Retrieved 12 October 2023.
- ^ S2CID 237318907.
- ^ Fleck, John (8 April 1999). "UNM to crank up $400,000 supercomputer today". Albuquerque Journal. p. D1.
- ISBN 978-3-540-69261-4.
- ^ Knight, Will: "IBM creates world's most powerful computer", NewScientist.com news service, June 2007
- ^ N. R. Agida; et al. (2005). "Blue Gene/L Torus Interconnection Network | IBM Journal of Research and Development" (PDF). Torus Interconnection Network. p. 265. Archived from the original (PDF) on 15 August 2011.
- ISBN 978-3-540-29810-6. Archived(PDF) from the original on 9 October 2022.
- ^ Analysis and performance results of computing betweenness centrality on IBM Cyclops64 by Guangming Tan, Vugranam C. Sreedhar and Guang R. Gao The Journal of Supercomputing Volume 56, Number 1, 1–24 September 2011
- ^ Prickett, Timothy (31 May 2010). "Top 500 supers – The Dawning of the GPUs". Theregister.co.uk.
- ISBN 978-3-642-16232-9.
- ^ Damon Poeter (11 October 2011). "Cray's Titan Supercomputer for ORNL Could Be World's Fastest". Pcmag.com.
- ^ Feldman, Michael (11 October 2011). "GPUs Will Morph ORNL's Jaguar into 20-Petaflop Titan". Hpcwire.com.
- ^ Timothy Prickett Morgan (11 October 2011). "Oak Ridge changes Jaguar's spots from CPUs to GPUs". Theregister.co.uk.
- ^ "The NETL SuperComputer" Archived 4 September 2015 at the Wayback Machine. page 2.
- ^ Condon, J.H. and K.Thompson, "Belle Chess Hardware", In Advances in Computer Chess 3 (ed.M.R.B.Clarke), Pergamon Press, 1982.
- ISBN 978-0-691-09065-8.
- ^ C. Donninger, U. Lorenz. The Chess Monster Hydra. Proc. of 14th International Conference on Field-Programmable Logic and Applications (FPL), 2004, Antwerp – Belgium, LNCS 3203, pp. 927 – 932
- ^ J Makino and M. Taiji, Scientific Simulations with Special Purpose Computers: The GRAPE Systems, Wiley. 1998.
- ^ RIKEN press release, Completion of a one-petaFLOPS computer system for simulation of molecular dynamics Archived 2 December 2012 at the Wayback Machine
- ISBN 978-1-56592-520-5.
- ^ Lohr, Steve (8 June 2018). "Move Over, China: U.S. Is Again Home to World's Speediest Supercomputer". New York Times. Retrieved 19 July 2018.
- ^ "Green500 List - November 2018". TOP500. Retrieved 19 July 2018.
- S2CID 1389468.
- ISBN 0-471-04885-2, pages 133–135
- ISBN 1-60595-022-X, page 401.
- ISBN 1-60456-186-6, pages 313–314
- ^ ISBN 978-1-85233-599-1, pages 201–202
- ^ ISBN 3-540-26043-9, pages 60–67
- ^ "NVIDIA Tesla GPUs Power World's Fastest Supercomputer" (Press release). Nvidia. 29 October 2010. Archived from the original on 2 March 2014. Retrieved 21 February 2011.
- ^ Balandin, Alexander A. (October 2009). "Better Computing Through CPU Cooling". Spectrum.ieee.org.
- ^ "The Green 500". Green500.org. Archived from the original on 26 August 2016. Retrieved 14 August 2011.
- ^ "Green 500 list ranks supercomputers". iTnews Australia. Archived from the original on 22 October 2008.
- S2CID 11283177.
- ^ "IBM uncloaks 20 petaflops BlueGene/Q super". The Register. 22 November 2010. Retrieved 25 November 2010.
- ^ Prickett, Timothy (15 July 2011). "The Register: IBM 'Blue Waters' super node washes ashore in August". Theregister.co.uk. Retrieved 9 June 2012.
- ^ "IBM Hot Water-Cooled Supercomputer Goes Live at ETH Zurich". IBM News room. 2 July 2010. Archived from the original on 10 January 2011. Retrieved 16 March 2020.
- ^ Martin LaMonica (10 May 2010). "CNet 10 May 2010". News.cnet.com. Archived from the original on 1 November 2013. Retrieved 9 June 2012.
- ^ "Government unveils world's fastest computer". CNN. Archived from the original on 10 June 2008. "Performing 376 million calculations for every watt of electricity used."
- ^ "IBM Roadrunner Takes the Gold in the Petaflop Race". Archived from the original on 17 December 2008. Retrieved 16 March 2020.
- ^ "Top500 Supercomputing List Reveals Computing Trends". 20 July 2010. "IBM... BlueGene/Q system ... setting a record in power efficiency with a value of 1,680 MFLOPS/W, more than twice that of the next best system."
- ^ "IBM Research A Clear Winner in Green 500". 18 November 2010.
- ^ "Green 500 list". Green500.org. Archived from the original on 3 July 2011. Retrieved 16 March 2020.
- ^ Saed G. Younis. "Asymptotically Zero Energy Computing Using Split-Level Charge Recovery Logic". 1994. page 14.
- ^ "Hot Topic – the Problem of Cooling Supercomputers" Archived 18 January 2015 at the Wayback Machine.
- ^ Anand Lal Shimpi. "Inside the Titan Supercomputer: 299K AMD x86 Cores and 18.6K NVIDIA GPUs". 2012.
- ^ Curtis Storlie; Joe Sexton; Scott Pakin; Michael Lang; Brian Reich; William Rust. "Modeling and Predicting Power Consumption of High-Performance Computing Jobs". 2014.
- ^ ISBN 0-387-09765-1, pages 426–429.
- ISBN 0-262-63188-1, pages 149–151.
- ISBN 3-540-22924-8, page 835.
- ISBN 3-540-37783-2.
- ^ An Evaluation of the Oak Ridge National Laboratory Cray XT3 by Sadaf R. Alam etal International Journal of High Performance Computing Applications February 2008 vol. 22 no. 1 52–80
- ISBN 978-3-540-31024-2, pages 95–101.
- ^ "Top500 OS chart". Top500.org. Archived from the original on 5 March 2012. Retrieved 31 October 2010.
- ^ "Wide-angle view of the ALMA correlator". ESO Press Release. Retrieved 13 February 2013.
- ISBN 978-3-319-21903-5.
- ^ Rahat, Nazmul. "Chapter 03 Software and System Management".
- ^ Pande lab. "Client Statistics by OS". Folding@home. Stanford University. Retrieved 10 April 2020.
- BOINC. Archived from the original on 19 September 2010. Retrieved 30 October 2016. (Note: this link gives current statistics, not those on the date last accessed.)
- ^ "Internet PrimeNet Server Distributed Computing Technology for the Great Internet Mersenne Prime Search". GIMPS. Retrieved 6 June 2011.
- ^ CiteSeerX 10.1.1.135.8993. Retrieved 4 August 2011.
- S2CID 10974077.
- S2CID 10141367.
- S2CID 9405724.
- S2CID 11502126.
- ^ Eadline, Douglas. "Moving HPC to the Cloud". Admin Magazine. Retrieved 30 March 2019.
- ^ Niccolai, James (11 August 2009). "Penguin Puts High-performance Computing in the Cloud". PCWorld. IDG Consumer & SMB. Retrieved 6 June 2016.
- ^ ISBN 0-309-12485-9, page 9.
- ISBN 978-0-7923-8462-5.
- ^ Brehm, M. and Bruhwiler, D. L. (2015). "Performance Characteristics of the Plasma Wakefield Acceleration Driven by Proton Bunches". Journal of Physics: Conference Series.
- ^ S2CID 1900724
- ^ "Understanding measures of supercomputer performance and storage system capacity". Indiana University. Retrieved 3 December 2017.
- ^ "Frequently Asked Questions". TOP500.org. Retrieved 3 December 2017.
- ^ Intel brochure – 11/91. "Directory page for Top500 lists. Result for each list since June 1993". Top500.org. Archived from the original on 18 December 2010. Retrieved 31 October 2010.
- ^ "Lenovo Attains Status as Largest Global Provider of TOP500 Supercomputers". Business Wire. 25 June 2018.
- ^ "November 2022 | TOP500". www.top500.org. Retrieved 7 December 2022.
- ^ a b "China Tops Supercomputer Rankings with New 93-Petaflop Machine – TOP500 Supercomputer Sites".
- ^ "Matrix-2000 - NUDT - WikiChip". en.wikichip.org. Retrieved 19 July 2019.
- ^ "Tianhe-2A - TH-IVB-FEP Cluster, Intel Xeon E5-2692v2 12C 2.2GHz, TH Express-2, Matrix-2000 | TOP500 Supercomputer Sites". www.top500.org. Retrieved 16 November 2022.
- ^ "The Cray-1 Computer System" (PDF). Cray Research, Inc. Archived (PDF) from the original on 9 October 2022. Retrieved 25 May 2011.
- .
- ^ "Abstract for SAMSY – Shielding Analysis Modular System". OECD Nuclear Energy Agency, Issy-les-Moulineaux, France. Retrieved 25 May 2011.
- ^ "EFF DES Cracker Source Code". Cosic.esat.kuleuven.be. Retrieved 8 July 2011.
- ^ "Disarmament Diplomacy: – DOE Supercomputing & Test Simulation Programme". Acronym.org.uk. 22 August 2000. Archived from the original on 16 May 2013. Retrieved 8 July 2011.
- ^ "China's Investment in GPU Supercomputing Begins to Pay Off Big Time!". Blogs.nvidia.com. Archived from the original on 5 July 2011. Retrieved 8 July 2011.
- ^ Andrew, Scottie (19 March 2020). "The world's fastest supercomputer identified chemicals that could stop coronavirus from spreading, a crucial step toward a treatment". CNN. Retrieved 12 May 2020.
- ^ Kaku, Michio. Physics of the Future (New York: Doubleday, 2011), 65.
- ^ "Faster Supercomputers Aiding Weather Forecasts". News.nationalgeographic.com. 28 October 2010. Archived from the original on 5 September 2005. Retrieved 8 July 2011.
- ^ "IBM Drops 'Blue Waters' Supercomputer Project". International Business Times. 9 August 2011. Retrieved 14 December 2018. – via EBSCO (subscription required)
- ^ "Supercomputers". U.S. Department of Energy. Archived from the original on 7 March 2017. Retrieved 7 March 2017.
- ^ "Supercomputer Simulations Help Advance Electrochemical Reaction Research". ucsdnews.ucsd.edu. Retrieved 12 May 2020.
- ^ "IBM's Summit—The Supercomputer Fighting Coronavirus". MedicalExpo e-Magazine. 16 April 2020. Retrieved 12 May 2020.
- ^ "OSTP Funding Supercomputer Research to Combat COVID-19 – MeriTalk". Retrieved 12 May 2020.
- ^ "EU $1.2 supercomputer project to several 10-100 PetaFLOP computers by 2020 and exaFLOP by 2022 | NextBigFuture.com". NextBigFuture.com. 4 February 2018. Retrieved 21 May 2018.
- ^ DeBenedictis, Erik P. (2004). "The Path To Extreme Computing" (PDF). Zettaflops. Sandia National Laboratories. Archived from the original (PDF) on 3 August 2007. Retrieved 9 September 2020.
- ^ Cohen, Reuven (28 November 2013). "Global Bitcoin Computing Power Now 256 Times Faster Than Top 500 Supercomputers, Combined!". Forbes. Retrieved 1 December 2017.
- ISBN 978-1-59593-019-4.
- ^ "IDF: Intel says Moore's Law holds until 2029". Heise Online. 4 April 2008. Archived from the original on 8 December 2013.
- OSTI 5689714.
- ^ ISBN 9783642244483.
- ISBN 9781447144922.
- ISBN 9781447144922.
- ISBN 9781447144922.
- ^ "Green Supercomputer Crunches Big Data in Iceland". intelfreepress.com. 21 May 2015. Archived from the original on 20 May 2015. Retrieved 18 May 2015.
External links
- McDonnell, Marshall T. (2013). "Supercomputer Design: An Initial Effort to Capture the Environmental, Economic, and Societal Impacts". Chemical and Biomolecular Engineering Publications and Other Works.