IBM Blue Gene
Blue Gene was an IBM project aimed at designing supercomputers that could reach operating speeds in the petaFLOPS (PFLOPS) range with low power consumption.
The project created three generations of supercomputers, Blue Gene/L, Blue Gene/P, and Blue Gene/Q. During their deployment, Blue Gene systems often led the TOP500[1] and Green500[2] rankings of the most powerful and most power-efficient supercomputers, respectively. Blue Gene systems have also consistently scored top positions in the Graph500 list.[3] The project was awarded the 2009 National Medal of Technology and Innovation.[4]
As of 2015, IBM appears to have ended development of the Blue Gene family, though no formal announcement has been made.
History
In December 1999, IBM announced a US$100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding.
At IBM, Alan Gara started working on an extension of the QCDOC architecture into a more general-purpose supercomputer: The 4D nearest-neighbor interconnection network was replaced by a network supporting routing of messages from any node to any other; and a parallel I/O subsystem was added. DOE started funding the development of this system and it became known as Blue Gene/L (L for Light); development of the original Blue Gene system continued under the name Blue Gene/C (C for Cyclops) and, later, Cyclops64.
In November 2004, a 16-rack system, with each rack holding 1,024 compute nodes, achieved first place in the TOP500 list with a Linpack performance of 70.72 TFLOPS.[1]
While the LLNL installation was the largest Blue Gene/L installation, many smaller installations followed. In November 2006, there were 27 computers on the TOP500 list using the Blue Gene/L architecture. All these computers were listed as having an architecture of eServer Blue Gene Solution. For example, three racks of Blue Gene/L were housed at the San Diego Supercomputer Center.
While the TOP500 measures performance on a single benchmark application, Linpack, Blue Gene/L also set records for performance on a wider set of applications. Blue Gene/L was the first supercomputer ever to run over 100 TFLOPS sustained on a real-world application, namely a three-dimensional molecular dynamics code (ddcMD), simulating solidification (nucleation and growth processes) of molten metal under high pressure and temperature conditions. This achievement won the 2005 Gordon Bell Prize.
In June 2006, NNSA and IBM announced that Blue Gene/L had achieved 207.3 TFLOPS on a quantum chemical application (Qbox).
The name
The name Blue Gene comes from what it was originally designed to do, help biologists understand the processes of protein folding and gene development.[14] "Blue" is a traditional moniker that IBM uses for many of its products and the company itself. The original Blue Gene design was renamed "Blue Gene/C" and eventually Cyclops64. The "L" in Blue Gene/L comes from "Light" as that design's original name was "Blue Light". The "P" version was designed to be a petascale design. "Q" is just the letter after "P". There is no Blue Gene/R.[15]
Major features
The Blue Gene/L supercomputer was unique in the following aspects:[16]
- Trading the speed of processors for lower power consumption. Blue Gene/L used low frequency and low power embedded PowerPC cores with floating-point accelerators. While the performance of each chip was relatively low, the system could achieve better power efficiency for applications that could use large numbers of nodes.
- Dual processors per node with two working modes: co-processor mode where one processor handles computation and the other handles communication; and virtual-node mode, where both processors are available to run user code, but the processors share both the computation and the communication load.
- System-on-a-chip design. Components were embedded on a single chip for each node, with the exception of 512 MB external DRAM.
- A large number of nodes (scalable in increments of 1024 up to at least 65,536)
- Three-dimensional torus interconnect with auxiliary networks for global communications (broadcast and reductions), I/O, and management
- Lightweight OS per node for minimum system overhead (system noise).
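The torus interconnect named above can be illustrated with a short sketch (Python is used purely for illustration; the 64×32×32 dimensions correspond to the full 65,536-node configuration, and the helper name is hypothetical):

```python
# Illustrative sketch: nearest neighbors in a 3D torus interconnect.
DIMS = (64, 32, 32)  # a 64 x 32 x 32 torus holds 65,536 nodes

def torus_neighbors(coord, dims=DIMS):
    """Return the six nearest neighbors of a node, with wraparound."""
    neighbors = []
    for axis, size in enumerate(dims):
        for step in (-1, 1):
            n = list(coord)
            n[axis] = (n[axis] + step) % size  # torus wraparound
            neighbors.append(tuple(n))
    return neighbors

# A node on the "edge" still has six neighbors, because links wrap around:
print(torus_neighbors((0, 0, 0)))
```

Every node therefore has exactly six point-to-point links regardless of its position, which is what lets the machine scale without special-case boundary nodes.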
Architecture
The Blue Gene/L architecture was an evolution of the QCDSP and QCDOC architectures.
Compute nodes were packaged two per compute card, with 16 compute cards plus up to 2 I/O nodes per node board. There were 32 node boards per cabinet/rack.
Each Blue Gene/L node was attached to three parallel communications networks: a 3D toroidal network for peer-to-peer communication between compute nodes, a collective network for collective communication (broadcasts and reduce operations), and a global interrupt network for fast barriers. The I/O nodes, which ran the Linux operating system, provided communication to storage and external hosts via an Ethernet network, and handled filesystem operations on behalf of the compute nodes. Finally, a separate and private Ethernet network provided access to any node for configuration, booting and diagnostics. To allow multiple programs to run concurrently, a Blue Gene/L system could be partitioned into electronically isolated sets of nodes. The number of nodes in a partition had to be a positive integer power of 2, with at least 2⁵ = 32 nodes. To run a program on Blue Gene/L, a partition of the computer was first reserved. The program was then loaded and run on all the nodes within the partition, and no other program could access nodes within the partition while it was in use. Upon completion, the partition nodes were released for future programs to use.
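The partitioning rule just described (a positive power of two, at least 2⁵ = 32 nodes) is easy to state as a predicate; the sketch below is illustrative only, and the function name is hypothetical:

```python
def valid_partition(n_nodes, minimum=32, maximum=65536):
    """True if n_nodes is an acceptable Blue Gene/L partition size:
    a positive power of two between 32 and the full machine."""
    is_power_of_two = n_nodes > 0 and (n_nodes & (n_nodes - 1)) == 0
    return is_power_of_two and minimum <= n_nodes <= maximum

print([n for n in (16, 32, 48, 512, 65536) if valid_partition(n)])
# → [32, 512, 65536]
```

The power-of-two constraint reflects how partitions were carved out of the torus: halving a dimension repeatedly yields electrically isolated sub-tori.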
Blue Gene/L compute nodes used a minimal operating system supporting a single user program. Only a subset of POSIX calls was supported, and only one process could run at a time on a node in co-processor mode, or one process per CPU in virtual mode.
IBM published BlueMatter, the application developed to exercise Blue Gene/L, as open source.[20] This serves to document how the torus and collective interfaces were used by applications, and may serve as a base for others to exercise the current generation of supercomputers.
Blue Gene/P
In June 2007, IBM unveiled Blue Gene/P, the second generation of the Blue Gene series of supercomputers, designed through a collaboration that included IBM, LLNL, and Argonne National Laboratory's Leadership Computing Facility.[21]
Design
The design of Blue Gene/P is a technology evolution from Blue Gene/L. Each Blue Gene/P Compute chip contains four PowerPC 450 processor cores, running at 850 MHz. The cores are cache coherent and the chip can operate as a 4-way symmetric multiprocessor (SMP). The memory subsystem on the chip consists of small private L2 caches, a central shared 8 MB L3 cache, and dual DDR2 memory controllers. The chip also integrates the logic for node-to-node communication, using the same network topologies as Blue Gene/L, but at more than twice the bandwidth. A compute card contains a Blue Gene/P chip with 2 or 4 GB DRAM, comprising a "compute node". A single compute node has a peak performance of 13.6 GFLOPS. 32 compute cards are plugged into an air-cooled node board. A rack contains 32 node boards (thus 1024 nodes, 4096 processor cores).[22] By using many small, low-power, densely packaged chips, Blue Gene/P exceeded the power efficiency of other supercomputers of its generation.
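The per-node and per-rack figures quoted above are mutually consistent, as a quick arithmetic check shows; the 4 flops per core per cycle assumes the PowerPC 450's dual-pipeline floating-point unit with fused multiply-add:

```python
# Peak-performance arithmetic for a Blue Gene/P rack.
cores_per_node = 4
clock_hz = 850e6                  # 850 MHz PowerPC 450 cores
flops_per_core_cycle = 4          # two FPU pipelines x fused multiply-add
node_peak = cores_per_node * clock_hz * flops_per_core_cycle
print(node_peak / 1e9)            # 13.6 GFLOPS per compute node

nodes_per_rack = 32 * 32          # 32 compute cards per board x 32 boards
print(node_peak * nodes_per_rack / 1e12)  # ~13.9 TFLOPS peak per rack
```

The same arithmetic scales linearly to larger installations, since the peak figure is simply the node peak times the node count.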
Installations
The following is an incomplete list of Blue Gene/P installations. As of November 2009, the TOP500 list included Blue Gene/P installations of 2 racks (2048 nodes, 8192 processor cores) and larger.[1]
- On November 12, 2007, the first Blue Gene/P installation, JUGENE, with 16 racks (16,384 nodes, 65,536 processors) was running at Forschungszentrum Jülich in Germany with a performance of 167 TFLOPS.[23] When inaugurated it was the fastest supercomputer in Europe and the sixth fastest in the world. In 2009, JUGENE was upgraded to 72 racks (73,728 nodes, 294,912 processor cores) with 144 terabytes of memory and 6 petabytes of storage, and achieved a peak performance of 1 PetaFLOPS. This configuration incorporated new air-to-water heat exchangers between the racks, reducing the cooling cost substantially.[24] JUGENE was shut down in July 2012 and replaced by the Blue Gene/Q system JUQUEEN.
- The 40-rack (40960 nodes, 163840 processor cores) "Intrepid" system at Argonne National Laboratory was ranked #3 on the June 2008 Top 500 list.[25] The Intrepid system is one of the major resources of the INCITE program, in which processor hours are awarded to "grand challenge" science and engineering projects in a peer-reviewed competition.
- Lawrence Livermore National Laboratory installed a 36-rack Blue Gene/P installation, "Dawn", in 2009.
- King Abdullah University of Science and Technology (KAUST) installed a 16-rack Blue Gene/P installation, "Shaheen", in 2009.
- In 2012, a 6-rack Blue Gene/P was installed at Rice University, to be jointly administered with the University of São Paulo.[26]
- A 2.5 rack Blue Gene/P system is the central processor for the Low Frequency Array for Radio astronomy (LOFAR) project in the Netherlands and surrounding European countries. This application uses the streaming data capabilities of the machine.
- A 2-rack Blue Gene/P was installed in September 2008 in Sofia, Bulgaria, and is operated by the Bulgarian Academy of Sciences and Sofia University.[27]
- In 2010, a 2-rack (8192-core) Blue Gene/P was installed at the Victorian Life Sciences Computation Initiative.[28]
- In 2011, a 2-rack Blue Gene/P was installed at University of Canterbury in Christchurch, New Zealand.
- In 2012, a 2-rack Blue Gene/P was installed at Rutgers University in Piscataway, New Jersey. It was dubbed "Excalibur" as an homage to the Rutgers mascot, the Scarlet Knight.[29]
- In 2008, a 1-rack (1024 nodes) Blue Gene/P with 180 TB of storage was installed at the University of Rochester in Rochester, New York.[30]
- The first Blue Gene/P in the ASEAN region was installed in 2010 at Universiti Brunei Darussalam's research centre, the UBD-IBM Centre. The installation prompted research collaboration between the university and IBM on climate modelling, investigating the impact of climate change on flood forecasting, crop yields, renewable energy and the health of rainforests in the region, among other topics.[31]
- In 2013, a 1-rack Blue Gene/P was donated to the Department of Science and Technology for weather forecasting, disaster management, precision agriculture, and health applications. It is housed at the National Computer Center, Diliman, Quezon City, under the auspices of the Philippine Genome Center (PGC) Core Facility for Bioinformatics (CFB) at UP Diliman.[32]
Applications
- Veselin Topalov, the challenger to the World Chess Champion title in 2010, confirmed in an interview that he had used a Blue Gene/P supercomputer during his preparation for the match.[33]
- The Blue Gene/P computer has been used to simulate approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections.[34]
- The Kittyhawk project ported Linux to the compute nodes and demonstrated generic Web 2.0 workloads running at scale on a Blue Gene/P, including node-to-node TCP/IP connectivity.[35][36] Running standard Linux software like MySQL, their performance results on SpecJBB rank among the highest on record.[citation needed]
- In 2011, a Rutgers University / IBM / University of Texas team linked the IBM Watson Research Center into a "federated high performance computing cloud", winning the IEEE SCALE 2011 challenge with an oil reservoir optimization application.[37]
Blue Gene/Q
The third supercomputer design in the Blue Gene series, Blue Gene/Q has a peak performance of 20 PFLOPS, reaching 17.17 PFLOPS in the LINPACK benchmark. It continues to expand and enhance the Blue Gene/L and /P architectures.
Design
The Blue Gene/Q "compute node" consists of a chip containing 18 64-bit PowerPC A2 processor cores, each 4-way simultaneously multithreaded, running at 1.6 GHz. 16 cores are used for user computation, a 17th core runs operating system assist functions, and the 18th is a redundant spare used to increase manufacturing yield.
A Q32[41] "compute drawer" contains 32 compute nodes, each water cooled.[42] A "midplane" (crate) contains 16 Q32 compute drawers for a total of 512 compute nodes, electrically interconnected in a 5D torus configuration (4x4x4x4x2). Beyond the midplane level, all connections are optical. Racks have two midplanes, thus 32 compute drawers, for a total of 1024 compute nodes, 16,384 user cores, and 16 TB RAM.[42]
Separate I/O drawers, placed at the top of a rack or in a separate rack, are air cooled and contain 8 compute cards and 8 PCIe expansion slots for InfiniBand or 10 Gigabit Ethernet networking.[42]
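The packaging figures above can be cross-checked with a little arithmetic; the 8 flops per core per cycle assumes the A2 core's four-wide double-precision SIMD unit with fused multiply-add:

```python
from math import prod

midplane_torus = (4, 4, 4, 4, 2)           # 5D torus within a midplane
nodes_per_midplane = prod(midplane_torus)  # 512 compute nodes
nodes_per_rack = 2 * nodes_per_midplane    # two midplanes -> 1024 nodes

user_cores_per_node = 16
print(nodes_per_rack * user_cores_per_node)  # 16384 user cores per rack

# Per-node peak: 16 cores x 1.6 GHz x 8 flops/cycle = 204.8 GFLOPS,
# so a single rack peaks at roughly 209.7 TFLOPS.
node_peak = user_cores_per_node * 1.6e9 * 8
print(node_peak * nodes_per_rack / 1e12)
```

This single-rack figure is consistent with the "209 TFLOPS (peak)" one-rack systems that appear in the installation list.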
Performance
At the time of the Blue Gene/Q system announcement in November 2011, an initial 4-rack Blue Gene/Q system (4096 nodes, 65536 user processor cores) achieved #17 in the TOP500 list[1] with 677.1 TeraFLOPS Linpack, outperforming the original 2007 104-rack BlueGene/L installation described above. The same 4-rack system achieved the top position in the Graph500 list[3] with over 250 GTEPS (giga traversed edges per second). Blue Gene/Q systems also topped the Green500 list of most energy efficient supercomputers with up to 2.1 GFLOPS/W.[2]
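GTEPS (giga traversed edges per second) is the Graph500 figure of merit: the number of graph edges scanned during a breadth-first search, divided by the search time. The sketch below is a toy single-process illustration of the metric, not the actual benchmark kernel (which runs a distributed BFS over large synthetic graphs):

```python
import time
from collections import deque

def bfs_teps(adj, source):
    """Run a breadth-first search and return traversed edges per second."""
    seen = {source}
    queue = deque([source])
    edges_scanned = 0
    start = time.perf_counter()
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            edges_scanned += 1          # every examined edge counts
            if w not in seen:
                seen.add(w)
                queue.append(w)
    elapsed = time.perf_counter() - start
    return edges_scanned / elapsed

# Tiny ring graph: 1000 vertices, each with two neighbors.
n = 1000
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(f"{bfs_teps(adj, 0):.2e} TEPS on a toy graph")
```

Because the metric is dominated by irregular memory access rather than floating-point throughput, it rewards exactly the low-latency interconnect that the Blue Gene designs emphasized.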
In June 2012, Blue Gene/Q installations took the top positions in all three lists: TOP500,[1] Graph500[3] and Green500.[2]
Installations
The following is an incomplete list of Blue Gene/Q installations. Per June 2012, the TOP500 list contained 20 Blue Gene/Q installations of 1/2-rack (512 nodes, 8192 processor cores, 86.35 TFLOPS Linpack) and larger.[1] At a (size-independent) power efficiency of about 2.1 GFLOPS/W, all these systems also populated the top of the June 2012 Green 500 list.[2]
- A Blue Gene/Q system called Sequoia was delivered to Lawrence Livermore National Laboratory (LLNL) beginning in 2011 and was fully deployed in June 2012. In June 2012, Sequoia was ranked the world's fastest supercomputer on the TOP500 list.[1]
- A 10 PFLOPS (peak) Blue Gene/Q system called Mira was installed at the Argonne Leadership Computing Facility at Argonne National Laboratory in 2012.
- JUQUEEN at the Forschungszentrum Jülich is a 28-rack Blue Gene/Q system, and was from June 2013 to November 2015 the highest ranked machine in Europe in the Top500.[1]
- Vulcan at Lawrence Livermore National Laboratory (LLNL) is a 24-rack, 5 PFLOPS (peak), Blue Gene/Q system that was commissioned in 2012 and decommissioned in 2019.[48] Vulcan served Lab-industry projects through Livermore's High Performance Computing (HPC) Innovation Center[49] as well as academic collaborations in support of DOE/National Nuclear Security Administration (NNSA) missions.[50]
- Fermi at the CINECA Supercomputing facility, Bologna, Italy,[51] is a 10-rack, 2 PFLOPS (peak), Blue Gene/Q system.
- As part of the DiRAC facility, EPCC hosts a 6-rack (6144-node) Blue Gene/Q system at the University of Edinburgh.[52]
- A five-rack Blue Gene/Q system with additional compute hardware, called AMOS, was installed at Rensselaer Polytechnic Institute in 2013.[53] The system was rated at 1048.6 teraflops; in 2014 it was the most powerful supercomputer at any private university, and the third most powerful supercomputer among all universities.[54]
- An 838 TFLOPS (peak) Blue Gene/Q system called Avoca was installed at the Victorian Life Sciences Computation Initiative in June, 2012.[55] This system is part of a collaboration between IBM and VLSCI, with the aims of improving diagnostics, finding new drug targets, refining treatments and furthering our understanding of diseases.[56] The system consists of 4 racks, with 350 TB of storage, 65,536 cores, 64 TB RAM.[57]
- A 209 TFLOPS (peak) Blue Gene/Q system was installed at the University of Rochester in July 2012, together with high-performance storage.[59]
- A 209 TFLOPS peak (172 TFLOPS LINPACK) Blue Gene/Q system called Lemanicus was installed at EPFL in March 2013, together with IBM GPFS-GSS storage. The system belongs to the Center for Advanced Modeling Science (CADMOS).
- A half-rack Blue Gene/Q system, with about 100 TFLOPS (peak), called Cumulus was installed at A*STAR Computational Resource Centre, Singapore, in early 2011.[62]
Applications
Record-breaking science applications have been run on the BG/Q, the first system to cross 10 petaflops of sustained performance. The cosmology simulation framework HACC achieved almost 14 petaflops with a 3.6 trillion particle benchmark run, while the Cardioid code, which models the electrophysiology of the human heart, achieved nearly 12 petaflops with a near real-time simulation.
See also
References
- ^ a b c d e f g h i "November 2004 - TOP500 Supercomputer Sites". Top500.org. Retrieved 13 December 2019.
- ^ a b c d e "Green500 - TOP500 Supercomputer Sites". Green500.org. Archived from the original on 26 August 2016. Retrieved 13 October 2017.
- ^ a b c "The Graph500 List". Archived from the original on 2011-12-27.
- ^ Harris, Mark (September 18, 2009). "Obama honours IBM supercomputer". Techradar.com. Retrieved 2009-09-18.
- ^ "Supercomputing Strategy Shifts in a World Without BlueGene". Nextplatform.com. 14 April 2015. Retrieved 13 October 2017.
- ^ "IBM to Build DoE's Next-Gen Coral Supercomputers - EE Times". EETimes. Archived from the original on 30 April 2017. Retrieved 13 October 2017.
- ^ "Blue Gene: A Vision for Protein Science using a Petaflop Supercomputer" (PDF). IBM Systems Journal. 40 (2). 2017-10-23.
- ^ BusinessWeek, November 6, 2001, archived from the original on December 11, 2014.
- ^ "BlueGene/L". Archived from the original on 2011-07-18. Retrieved 2007-10-05.
- ^ "hpcwire.com". Archived from the original on September 28, 2007.
- ^ "SC06". sc06.supercomputing.org. Retrieved 13 October 2017.
- ^ "HPC Challenge Award Competition". Archived from the original on 2006-12-11. Retrieved 2006-12-03.
- ^ "Mouse brain simulated on computer". BBC News. April 27, 2007. Archived from the original on 2007-05-25.
- ^ "IBM100 - Blue Gene". 03.ibm.com. 7 March 2012. Retrieved 13 October 2017.
- ^ ISBN 9783642387500. Retrieved 13 October 2017 – via Google Books.
- ^ "Blue Gene". IBM Journal of Research and Development. 49 (2/3). 2005.
- ^ Kissel, Lynn. "BlueGene/L Configuration". asc.llnl.gov. Archived from the original on 17 February 2013. Retrieved 13 October 2017.
- ^ "Compute Node Ruby for Bluegene/L". www.ece.iastate.edu. Archived from the original on February 11, 2009.
- ^ William Scullin (March 12, 2011). Python for High Performance Computing. Atlanta, GA.
- ^ Blue Matter source code, retrieved February 28, 2020
- ^ "IBM Triples Performance of World's Fastest, Most Energy-Efficient Supercomputer". 2007-06-27. Retrieved 2011-12-24.
- ^ "Supercomputing: Jülich Amongst World Leaders Again". IDG News Service. 2007-11-12.
- ^ "IBM Press room - 2009-02-10 New IBM Petaflop Supercomputer at German Forschungszentrum Juelich to Be Europe's Most Powerful". 03.ibm.com. 2009-02-10. Retrieved 2011-03-11.
- ^ "Argonne's Supercomputer Named World's Fastest for Open Science, Third Overall". Mcs.anl.gov. Archived from the original on 8 February 2009. Retrieved 13 October 2017.
- ^ "Rice University, IBM partner to bring first Blue Gene supercomputer to Texas". news.rice.edu. Archived from the original on 2012-04-05. Retrieved 2012-04-01.
- ^ Вече си имаме и суперкомпютър Archived 2009-12-23 at the Wayback Machine, Dir.bg, 9 September 2008
- ^ "IBM Press room - 2010-02-11 IBM to Collaborate with Leading Australian Institutions to Push the Boundaries of Medical Research - Australia". 03.ibm.com. 2010-02-11. Retrieved 2011-03-11.
- ^ "Rutgers Gets Big Data Weapon in IBM Supercomputer - Hardware -". Archived from the original on 2013-03-06. Retrieved 2013-09-07.
- ^ "University of Rochester and IBM Expand Partnership in Pursuit of New Frontiers in Health". University of Rochester Medical Center. May 11, 2012. Archived from the original on 2012-05-11.
- ^ "IBM and Universiti Brunei Darussalam to Collaborate on Climate Modeling Research". IBM News Room. 2010-10-13. Retrieved 18 October 2012.
- ^ Ronda, Rainier Allan. "DOST's supercomputer for scientists now operational". Philstar.com. Retrieved 13 October 2017.
- ^ "Topalov training with super computer Blue Gene P". Players.chessdo.com. Archived from the original on 19 May 2013. Retrieved 13 October 2017.
- ^ Kaku, Michio. Physics of the Future (New York: Doubleday, 2011), 91.
- ^ "Project Kittyhawk: A Global-Scale Computer". Research.ibm.com. Retrieved 13 October 2017.
- ^ Appavoo, Jonathan; Uhlig, Volkmar; Waterland, Amos. "Project Kittyhawk: Building a Global-Scale Computer" (PDF). Yorktown Heights, NY: IBM T.J. Watson Research Center. Archived from the original on 2008-10-31. Retrieved 2018-03-13.
- ^ "Rutgers-led Experts Assemble Globe-Spanning Supercomputer Cloud". News.rutgers.edu. 2011-07-06. Archived from the original on 2011-11-10. Retrieved 2011-12-24.
- ^ "IBM announces 20-petaflops supercomputer". Kurzweil. 18 November 2011. Retrieved 13 November 2012. "IBM has announced the Blue Gene/Q supercomputer, with peak performance of 20 petaflops."
- ^ "Memory Speculation of the Blue Gene/Q Compute Chip". Retrieved 2011-12-23.
- ^ "The Blue Gene/Q Compute chip" (PDF). Archived from the original (PDF) on 2015-04-29. Retrieved 2011-12-23.
- ^ "IBM Blue Gene/Q supercomputer delivers petascale computing for high-performance computing applications" (PDF). 01.ibm.com. Retrieved 13 October 2017.
- ^ a b c "IBM uncloaks 20 petaflops BlueGene/Q super". The Register. 2010-11-22. Retrieved 2010-11-25.
- ^ Feldman, Michael (2009-02-03). "Lawrence Livermore Prepares for 20 Petaflop Blue Gene/Q". HPCwire. Archived from the original on 2009-02-12. Retrieved 2011-03-11.
- ^ B Johnston, Donald (2012-06-18). "NNSA's Sequoia supercomputer ranked as world's fastest". Archived from the original on 2014-09-02. Retrieved 2012-06-23.
- ^ "TOP500 Press Release". Archived from the original on June 24, 2012.
- ^ "MIRA: World's fastest supercomputer - Argonne Leadership Computing Facility". Alcf.anl.gov. Retrieved 13 October 2017.
- ^ "Mira - Argonne Leadership Computing Facility". Alcf.anl.gov. Retrieved 13 October 2017.
- ^ "Vulcan—decommissioned". hpc.llnl.gov. Retrieved 10 April 2019.
- ^ "HPC Innovation Center". hpcinnovationcenter.llnl.gov. Retrieved 13 October 2017.
- ^ "Lawrence Livermore's Vulcan brings 5 petaflops computing power to collaborations with industry and academia to advance science and technology". Llnl.gov. 11 June 2013. Archived from the original on 9 December 2013. Retrieved 13 October 2017.
- ^ "Ibm-Fermi | Scai". Archived from the original on 2013-10-30. Retrieved 2013-05-13.
- ^ "DiRAC BlueGene/Q". epcc.ed.ac.uk.
- ^ "Rensselaer at Petascale: AMOS Among the World's Fastest and Most Powerful Supercomputers". News.rpi.edu. Retrieved 13 October 2017.
- ^ Michael Mullaney. "AMOS Ranks 1st Among Supercomputers at Private American Universities". News.rpi.edu. Retrieved 13 October 2017.
- ^ "World's greenest supercomputer comes to Melbourne - The Melbourne Engineer". Themelbourneengineer.eng.unimelb.edu.au/. 16 February 2012. Archived from the original on 2 October 2017. Retrieved 13 October 2017.
- ^ "Melbourne Bioinformatics - For all researchers and students based in Melbourne's biomedical and bioscience research precinct". Melbourne Bioinformatics. Retrieved 13 October 2017.
- ^ "Access to High-end Systems - Melbourne Bioinformatics". Vlsci.org.au. Retrieved 13 October 2017.
- ^ "University of Rochester Inaugurates New Era of Health Care Research". Rochester.edu. Retrieved 13 October 2017.
- ^ "Resources - Center for Integrated Research Computing". Circ.rochester.edu. Retrieved 13 October 2017.
- ^ "EPFL BlueGene/L Homepage". Archived from the original on 2007-12-10. Retrieved 2021-03-10.
- ^ Utilisateur, Super. "À propos". Cadmos.org. Archived from the original on 10 January 2016. Retrieved 13 October 2017.
- ^ "A*STAR Computational Resource Centre". Acrc.a-star.edu.sg. Archived from the original on 2016-12-20. Retrieved 2016-08-24.
- ^ "Cardioid Cardiac Modeling Project". Researcher.watson.ibm.com. 25 July 2016. Archived from the original on 21 May 2013. Retrieved 13 October 2017.
- ^ "Venturing into the Heart of High-Performance Computing Simulations". Str.llnl.gov. Archived from the original on 14 February 2013. Retrieved 13 October 2017.
- ^ S2CID 12651650.