IBM Blue Gene explained

IBM Blue Gene
Developer: IBM
Type: Supercomputer platform
First release date: BG/L: Nov 2004; BG/P: June 2007; BG/Q: Nov 2011
Predecessors: IBM RS/6000 SP, QCDOC
Successors: Summit, Sierra
Processor: BG/L: PowerPC 440; BG/P: PowerPC 450; BG/Q: PowerPC A2

Blue Gene was an IBM project aimed at designing supercomputers that could reach operating speeds in the petaFLOPS (PFLOPS) range with low power consumption.

The project created three generations of supercomputers, Blue Gene/L, Blue Gene/P, and Blue Gene/Q. During their deployment, Blue Gene systems often led the TOP500[1] and Green500[2] rankings of the most powerful and most power-efficient supercomputers, respectively. Blue Gene systems have also consistently scored top positions in the Graph500 list.[3] The project was awarded the 2009 National Medal of Technology and Innovation.[4]

After Blue Gene/Q, IBM focused its supercomputer efforts on the OpenPOWER platform, using accelerators such as FPGAs and GPUs to address the diminishing returns of Moore's law.[5][6]

History

A video presentation of the history and technology of the Blue Gene project was given at the Supercomputing 2020 conference.

In December 1999, IBM announced a US$100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding.[7] The research and development was pursued by a large multi-disciplinary team at the IBM T. J. Watson Research Center, initially led by William R. Pulleyblank. The project had two main goals: to advance understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. Major areas of investigation included how to use this novel platform to effectively meet its scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets at reasonable cost through novel machine architectures.

The initial design for Blue Gene was based on an early version of the Cyclops64 architecture, designed by Monty Denneau. In parallel, Alan Gara had started working on an extension of the QCDOC architecture into a more general-purpose supercomputer. The US Department of Energy started funding the development of this system and it became known as Blue Gene/L (L for Light). Development of the original Blue Gene architecture continued under the name Blue Gene/C (C for Cyclops) and, later, Cyclops64.

Architecture and chip logic design for the Blue Gene systems was done at the IBM T. J. Watson Research Center; the chips were designed and manufactured by IBM Microelectronics; and the systems were built at IBM Rochester, Minnesota.

In November 2004 a 16-rack system, with each rack holding 1,024 compute nodes, achieved first place in the TOP500 list, with a LINPACK benchmark performance of 70.72 TFLOPS.[1] It thereby overtook NEC's Earth Simulator, which had held the title of the fastest computer in the world since 2002. From 2004 through 2007 the Blue Gene/L installation at LLNL[8] gradually expanded to 104 racks, achieving 478 TFLOPS Linpack and 596 TFLOPS peak. The LLNL BlueGene/L installation held the first position in the TOP500 list for 3.5 years, until June 2008, when it was overtaken by IBM's Cell-based Roadrunner system at Los Alamos National Laboratory, the first system to surpass the 1 PFLOPS mark.

While the LLNL installation was the largest Blue Gene/L installation, many smaller installations followed. The November 2006 TOP500 list showed 27 computers with the eServer Blue Gene Solution architecture. For example, three racks of Blue Gene/L were housed at the San Diego Supercomputer Center.

While the TOP500 measures performance on a single benchmark application, Linpack, Blue Gene/L also set records for performance on a wider set of applications. Blue Gene/L was the first supercomputer ever to run over 100 TFLOPS sustained on a real-world application, namely a three-dimensional molecular dynamics code (ddcMD), simulating solidification (nucleation and growth processes) of molten metal under high pressure and temperature conditions. This achievement won the 2005 Gordon Bell Prize.

In June 2006, NNSA and IBM announced that Blue Gene/L had achieved 207.3 TFLOPS on a quantum chemistry application (Qbox).[9] At Supercomputing 2006,[10] Blue Gene/L won the top prize in all HPC Challenge award classes.[11] In 2007, a team from the IBM Almaden Research Center and the University of Nevada ran an artificial neural network almost half as complex as the brain of a mouse for the equivalent of a second (the network was run at 1/10 of normal speed for 10 seconds).[12]

The name

The name Blue Gene comes from what the machine was originally designed to do: help biologists understand the processes of protein folding and gene development.[13] "Blue" is a traditional moniker that IBM uses for many of its products and for the company itself. The original Blue Gene design was renamed "Blue Gene/C" and eventually Cyclops64. The "L" in Blue Gene/L comes from "Light", as that design's original name was "Blue Light". The "P" version was designed to be a petascale design. "Q" is just the letter after "P".[14]

Major features

The Blue Gene/L supercomputer was unique in the following aspects:[15]

Architecture

The Blue Gene/L architecture was an evolution of the QCDSP and QCDOC architectures. Each Blue Gene/L Compute or I/O node was a single ASIC with associated DRAM memory chips. The ASIC integrated two 700 MHz PowerPC 440 embedded processors, each with a double-pipeline-double-precision Floating-Point Unit (FPU), a cache sub-system with built-in DRAM controller and the logic to support multiple communication sub-systems. The dual FPUs gave each Blue Gene/L node a theoretical peak performance of 5.6 GFLOPS (gigaFLOPS). The two CPUs were not cache coherent with one another.
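The 5.6 GFLOPS peak figure follows directly from the clock rate and FPU width. A minimal sketch of the arithmetic, assuming each double-pipeline FPU retires a fused multiply-add per pipeline (4 floating-point operations per cycle per core):

```python
# Peak FLOPS of a Blue Gene/L node (illustrative arithmetic).
# Assumption: each double-pipeline FPU sustains two fused
# multiply-adds per cycle, i.e. 4 flops/cycle/core.
CLOCK_HZ = 700e6          # PowerPC 440 clock
FLOPS_PER_CYCLE = 4       # 2 FMA pipelines x 2 flops per FMA
CORES_PER_NODE = 2

node_peak = CLOCK_HZ * FLOPS_PER_CYCLE * CORES_PER_NODE
print(f"{node_peak / 1e9:.1f} GFLOPS")  # 5.6 GFLOPS
```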

Compute nodes were packaged two per compute card, with 16 compute cards (thus 32 nodes) plus up to 2 I/O nodes per node board. A cabinet/rack contained 32 node boards.[16] Because all essential sub-systems were integrated on a single chip and low-power logic was used, each compute or I/O node dissipated about 17 watts (including DRAM). The low power per node allowed aggressive packaging of up to 1024 compute nodes, plus additional I/O nodes, in a standard 19-inch rack, within reasonable limits on electrical power supply and air cooling. The system's performance metrics, in terms of FLOPS per watt, FLOPS per m² of floor space and FLOPS per unit cost, allowed scaling up to very high performance. With so many nodes, component failures were inevitable; the system was able to electrically isolate faulty components, down to a granularity of half a rack (512 compute nodes), to allow the machine to continue to run.
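The packaging numbers above compose multiplicatively; a small sketch checking the per-rack totals (the per-node power and peak are the approximate figures quoted above):

```python
# Blue Gene/L packaging hierarchy as described above (illustrative).
NODES_PER_CARD = 2
CARDS_PER_BOARD = 16
BOARDS_PER_RACK = 32
WATTS_PER_NODE = 17        # approximate, including DRAM

nodes_per_board = NODES_PER_CARD * CARDS_PER_BOARD    # 32 nodes
nodes_per_rack = nodes_per_board * BOARDS_PER_RACK    # 1024 nodes
rack_compute_watts = nodes_per_rack * WATTS_PER_NODE  # ~17.4 kW
rack_peak_gflops = nodes_per_rack * 5.6               # ~5.7 TFLOPS peak
print(nodes_per_rack, rack_compute_watts, rack_peak_gflops)
```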

Each Blue Gene/L node was attached to three parallel communications networks: a 3D toroidal network for peer-to-peer communication between compute nodes, a collective network for collective communication (broadcasts and reduce operations), and a global interrupt network for fast barriers. The I/O nodes, which ran the Linux operating system, provided communication to storage and external hosts via an Ethernet network and handled filesystem operations on behalf of the compute nodes. A separate, private Ethernet management network provided access to any node for configuration, booting, and diagnostics.
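The appeal of a torus over a plain mesh is that every dimension wraps around, roughly halving worst-case distances. A sketch of minimal-hop distance on a 3D torus (the dimensions here are hypothetical, not the shape of any particular installation):

```python
# Minimal-hop distance between two nodes on a 3D torus, where each
# dimension wraps around. Illustrative only; real routing also
# handles link contention and fault avoidance.
def torus_hops(a, b, dims):
    """Sum, per dimension, of the shorter way around each ring."""
    return sum(min(abs(x - y), d - abs(x - y))
               for x, y, d in zip(a, b, dims))

dims = (8, 8, 16)                               # hypothetical 1024-node torus
print(torus_hops((0, 0, 0), (7, 7, 15), dims))  # 1 + 1 + 1 = 3 hops
```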

To allow multiple programs to run concurrently, a Blue Gene/L system could be partitioned into electronically isolated sets of nodes. The number of nodes in a partition had to be a positive integer power of 2, with at least 2⁵ = 32 nodes. To run a program on Blue Gene/L, a partition of the computer first had to be reserved. The program was then loaded and run on all the nodes within the partition, and no other program could access nodes within the partition while it was in use. Upon completion, the partition's nodes were released for future programs to use.
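The partition-size rule can be sketched as a simple predicate (illustrative only; the real control system also had to map partitions onto contiguous hardware):

```python
# Sketch of the Blue Gene/L partition-size rule described above:
# a partition must be a power of two and at least 32 (2**5) nodes.
def valid_partition(n_nodes):
    # n & (n - 1) == 0 tests for a power of two
    return n_nodes >= 32 and (n_nodes & (n_nodes - 1)) == 0

print([n for n in (16, 32, 48, 64, 512, 1024) if valid_partition(n)])
# [32, 64, 512, 1024]
```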

Blue Gene/L compute nodes used a minimal operating system supporting a single user program. Only a subset of POSIX calls was supported, and only one process could run at a time on a node in co-processor mode, or one process per CPU in virtual mode. Programmers needed to implement green threads to simulate local concurrency. Application development was usually performed in C, C++, or Fortran using MPI for communication. However, some scripting languages such as Ruby[17] and Python[18] were ported to the compute nodes.

IBM published BlueMatter, the application developed to exercise Blue Gene/L, as open source.[19] This serves to document how the torus and collective interfaces were used by applications, and may serve as a base for others to exercise the current generation of supercomputers.

Blue Gene/P

In June 2007, IBM unveiled Blue Gene/P, the second generation of the Blue Gene series of supercomputers and designed through a collaboration that included IBM, LLNL, and Argonne National Laboratory's Leadership Computing Facility.[20]

Design

The design of Blue Gene/P is a technology evolution from Blue Gene/L. Each Blue Gene/P compute chip contains four PowerPC 450 processor cores, running at 850 MHz. The cores are cache coherent and the chip can operate as a 4-way symmetric multiprocessor (SMP). The memory subsystem on the chip consists of small private L2 caches, a central shared 8 MB L3 cache, and dual DDR2 memory controllers. The chip also integrates the logic for node-to-node communication, using the same network topologies as Blue Gene/L, but at more than twice the bandwidth. A compute card contains a Blue Gene/P chip with 2 or 4 GB DRAM, comprising a "compute node". A single compute node has a peak performance of 13.6 GFLOPS. 32 compute cards are plugged into an air-cooled node board. A rack contains 32 node boards (thus 1024 nodes, 4096 processor cores).[21] By using many small, low-power, densely packaged chips, Blue Gene/P exceeded the power efficiency of other supercomputers of its generation, and at 371 MFLOPS/W Blue Gene/P installations ranked at or near the top of the Green500 lists in 2007–2008.[2]
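The 13.6 GFLOPS node figure follows the same arithmetic as on Blue Gene/L, assuming 4 floating-point operations per cycle per core:

```python
# Blue Gene/P peak-performance arithmetic (illustrative; assumes each
# core's double-pipeline FPU retires fused multiply-adds, 4 flops/cycle).
CLOCK_HZ = 850e6          # PowerPC 450 clock
FLOPS_PER_CYCLE = 4
CORES_PER_NODE = 4

node_peak = CLOCK_HZ * FLOPS_PER_CYCLE * CORES_PER_NODE  # 13.6 GFLOPS
rack_peak = node_peak * 1024                             # ~13.9 TFLOPS
print(f"{node_peak / 1e9:.1f} GFLOPS/node, {rack_peak / 1e12:.1f} TFLOPS/rack")
```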

Installations

The following is an incomplete list of Blue Gene/P installations. As of November 2009, the TOP500 list contained 15 Blue Gene/P installations of two racks (2048 nodes, 8192 processor cores, 23.86 TFLOPS Linpack) and larger.[1]

Applications

Blue Gene/Q

The third design in the Blue Gene series, Blue Gene/Q, significantly expanded and enhanced the Blue Gene/L and /P architectures.

Design

The Blue Gene/Q "compute chip" is based on the 64-bit IBM A2 processor core. The A2 processor core is 4-way simultaneously multithreaded and was augmented with a SIMD quad-vector double-precision floating-point unit (IBM QPX). Each Blue Gene/Q compute chip contains 18 such A2 processor cores, running at 1.6 GHz. 16 cores are used for application computing, and a 17th core handles operating system assist functions such as interrupts, asynchronous I/O, MPI pacing, and RAS. The 18th core is a redundant manufacturing spare, used to increase yield; the spared-out core is disabled prior to system operation. The chip's processor cores are linked by a crossbar switch to a 32 MB eDRAM L2 cache, operating at half core speed. The L2 cache is multi-versioned, supporting transactional memory and speculative execution, and has hardware support for atomic operations.[37] L2 cache misses are handled by two built-in DDR3 memory controllers running at 1.33 GHz. The chip also integrates logic for chip-to-chip communications in a 5D torus configuration, with 2 GB/s chip-to-chip links. The Blue Gene/Q chip is manufactured on IBM's copper SOI process at 45 nm. It delivers a peak performance of 204.8 GFLOPS while drawing approximately 55 watts. The chip measures 19×19 mm (359.5 mm²) and comprises 1.47 billion transistors. Completing the compute node, the chip is mounted on a compute card along with 16 GB DDR3 DRAM (i.e., 1 GB for each user processor core).[38]
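The 204.8 GFLOPS figure is consistent with 16 user cores issuing 4-wide QPX fused multiply-adds (8 flops per cycle per core); a quick sketch, also deriving the approximate efficiency at the quoted 55 W:

```python
# Blue Gene/Q compute-chip peak (illustrative arithmetic).
# Assumption: 4-wide QPX vectors with fused multiply-add
# => 4 * 2 = 8 flops/cycle/core on the 16 user cores.
USER_CORES = 16
CLOCK_HZ = 1.6e9
FLOPS_PER_CYCLE = 4 * 2

chip_peak = USER_CORES * CLOCK_HZ * FLOPS_PER_CYCLE   # 204.8 GFLOPS
print(f"{chip_peak / 1e9:.1f} GFLOPS, "
      f"{chip_peak / 1e9 / 55:.1f} GFLOPS/W at ~55 W")
```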

A Q32[39] "compute drawer" contains 32 compute nodes, each water cooled.[40] A "midplane" (crate) contains 16 Q32 compute drawers for a total of 512 compute nodes, electrically interconnected in a 5D torus configuration (4×4×4×4×2). Beyond the midplane level, all connections are optical. Racks have two midplanes, thus 32 compute drawers, for a total of 1024 compute nodes, 16,384 user cores, and 16 TB RAM.
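The midplane and rack totals above can be cross-checked from the 5D torus shape (an illustrative consistency check):

```python
# Blue Gene/Q packaging totals from the text above (illustrative check).
midplane_nodes = 4 * 4 * 4 * 4 * 2     # 5D torus shape => 512 nodes
nodes_per_rack = 2 * midplane_nodes    # two midplanes per rack
user_cores = nodes_per_rack * 16       # 16 user cores per chip
ram_tb = nodes_per_rack * 16 // 1024   # 16 GB DRAM per node
print(midplane_nodes, nodes_per_rack, user_cores, ram_tb)
```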

Separate I/O drawers, placed at the top of a rack or in a separate rack, are air cooled and contain 8 compute cards and 8 PCIe expansion slots for InfiniBand or 10 Gigabit Ethernet networking.

Performance

At the time of the Blue Gene/Q system announcement in November 2011,[41] an initial 4-rack Blue Gene/Q system (4096 nodes, 65536 user processor cores) achieved #17 in the TOP500 list with 677.1 TeraFLOPS Linpack, outperforming the original 2007 104-rack BlueGene/L installation described above. The same 4-rack system achieved the top position in the Graph500 list[3] with over 250 GTEPS (giga traversed edges per second). Blue Gene/Q systems also topped the Green500 list of most energy efficient supercomputers with up to 2.1 GFLOPS/W.[2]
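Dividing the Linpack figure by the theoretical peak of 4096 chips gives the system's Linpack efficiency (an illustrative calculation using the per-chip peak quoted earlier):

```python
# Linpack efficiency of the initial 4-rack Blue Gene/Q system
# (illustrative; uses the 204.8 GFLOPS per-chip peak quoted above).
linpack_tflops = 677.1
peak_tflops = 4096 * 204.8 / 1000     # ~838.9 TFLOPS theoretical peak
print(f"{100 * linpack_tflops / peak_tflops:.0f}% of peak")  # ~81%
```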

In June 2012, Blue Gene/Q installations took the top positions in all three lists: TOP500,[1] Graph500[3] and Green500.[2]

Installations

The following is an incomplete list of Blue Gene/Q installations. As of June 2012, the TOP500 list contained 20 Blue Gene/Q installations of half-rack size (512 nodes, 8192 processor cores, 86.35 TFLOPS Linpack) and larger.[1] At a (size-independent) power efficiency of about 2.1 GFLOPS/W, all these systems also populated the top of the June 2012 Green500 list.[2]

Applications

Record-breaking science applications have been run on the BG/Q, the first system to cross 10 petaflops of sustained performance. The cosmology simulation framework HACC achieved almost 14 petaflops with a 3.6 trillion particle benchmark run,[62] while the Cardioid code,[63][64] which models the electrophysiology of the human heart, achieved nearly 12 petaflops with a near real-time simulation, both on Sequoia. A fully compressible flow solver has also achieved 14.4 PFLOPS (originally 11 PFLOPS) on Sequoia, 72% of the machine's nominal peak performance.[65]
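The 72% figure can be cross-checked against Sequoia's peak; this sketch assumes Sequoia's commonly cited 96-rack, 98,304-node configuration at 204.8 GFLOPS per node:

```python
# Consistency check for the flow-solver figure (illustrative; assumes
# Sequoia = 98,304 nodes at 204.8 GFLOPS/node, ~20.1 PFLOPS peak).
NODES = 98_304
NODE_PEAK = 204.8e9

peak_pflops = NODES * NODE_PEAK / 1e15
print(f"{100 * 14.4 / peak_pflops:.0f}% of peak")  # ~72%
```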

See also

External links

Notes and References

  1. Web site: November 2004 - TOP500 Supercomputer Sites. Top500.org. 13 December 2019.
  2. Web site: Green500 - TOP500 Supercomputer Sites. Green500.org. 13 October 2017. https://web.archive.org/web/20160826075608/http://www.green500.org/. 26 August 2016. dead.
  3. Web site: The Graph500 List . dead . https://web.archive.org/web/20111227021230/http://www.graph500.org/ . 2011-12-27 .
  4. Web site: Obama honours IBM supercomputer. Harris. Mark . September 18, 2009. Techradar.com. 2009-09-18.
  5. Web site: Supercomputing Strategy Shifts in a World Without BlueGene. 14 April 2015. Nextplatform.com. 13 October 2017.
  6. Web site: IBM to Build DoE's Next-Gen Coral Supercomputers - EE Times. EETimes. 13 October 2017. https://web.archive.org/web/20170430140132/http://www.eetimes.com/document.asp?doc_id=1324631. 30 April 2017. dead.
  7. Blue Gene: A Vision for Protein Science using a Petaflop Supercomputer . IBM Systems Journal . 40 . 2. 2017-10-23 .
  8. Web site: BlueGene/L . 2007-10-05 . dead . https://web.archive.org/web/20110718034455/https://asc.llnl.gov/computing_resources/bluegenel/ . 2011-07-18 .
  9. Web site: hpcwire.com. https://web.archive.org/web/20070928004334/http://www.hpcwire.com/hpc/701665.html. dead. September 28, 2007.
  10. Web site: SC06. sc06.supercomputing.org. 13 October 2017.
  11. Web site: HPC Challenge Award Competition . 2006-12-03 . dead . https://web.archive.org/web/20061211183441/http://www.hpcchallenge.org/custom/index.html?lid=103&slid=212 . 2006-12-11 .
  12. News: Mouse brain simulated on computer. BBC News. April 27, 2007. https://web.archive.org/web/20070525081051/http://news.bbc.co.uk/2/hi/technology/6600965.stm. 2007-05-25.
  13. Web site: IBM100 - Blue Gene. 7 March 2012. 03.ibm.com. 13 October 2017.
  14. Book: Supercomputing: 28th International Supercomputing Conference, ISC 2013, Leipzig, Germany, June 16-20, 2013. Proceedings. Julian M.. Kunkel. Thomas. Ludwig. Hans. Meuer. 12 June 2013. Springer. 13 October 2017. Google Books. 9783642387500.
  15. Blue Gene . IBM Journal of Research and Development . 49 . 2/3 . 2005.
  16. Web site: BlueGene/L Configuration. Lynn. Kissel. asc.llnl.gov. 13 October 2017. 17 February 2013. https://web.archive.org/web/20130217032440/https://asc.llnl.gov/computing_resources/bluegenel/configuration.html. dead.
  17. Web site: Compute Node Ruby for Bluegene/L. www.ece.iastate.edu. https://web.archive.org/web/20090211071506/http://www.ece.iastate.edu:80/~crb002/cnr.html. dead. February 11, 2009.
  18. Python for High Performance Computing . William Scullin . March 12, 2011 . Atlanta, GA.
  19. https://github.com/IBM/BlueMatter Blue Matter source code, retrieved February 28, 2020
  20. Web site: IBM Triples Performance of World's Fastest, Most Energy-Efficient Supercomputer . 2007-06-27 . 2011-12-24.
  21. Overview of the IBM Blue Gene/P project . IBM Journal of Research and Development . Jan 2008 . 10.1147/rd.521.0199 . 52 . 199–220.
  22. News: Supercomputing: Jülich Amongst World Leaders Again . IDG News Service . 2007-11-12 .
  23. Web site: IBM Press room - 2009-02-10 New IBM Petaflop Supercomputer at German Forschungszentrum Juelich to Be Europe's Most Powerful . 03.ibm.com . 2009-02-10 . 2011-03-11.
  24. Web site: Argonne's Supercomputer Named World's Fastest for Open Science, Third Overall. Mcs.anl.gov. 13 October 2017. https://web.archive.org/web/20090208225756/http://www.mcs.anl.gov/news/detail.php?id=147. 8 February 2009. dead.
  25. Web site: Rice University, IBM partner to bring first Blue Gene supercomputer to Texas. news.rice.edu. 2012-04-01. 2012-04-05. https://web.archive.org/web/20120405051308/http://news.rice.edu/2012/03/30/rice-university-ibm-partner-to-bring-first-blue-gene-supercomputer-to-texas/. dead.
  26. http://dnes.dir.bg/2008/09/09/news3363693.html#sepultura "We Now Have a Supercomputer Too" (in Bulgarian).
  27. Web site: IBM Press room - 2010-02-11 IBM to Collaborate with Leading Australian Institutions to Push the Boundaries of Medical Research - Australia . 03.ibm.com . 2010-02-11 . 2011-03-11.
  28. Web site: Rutgers Gets Big Data Weapon in IBM Supercomputer - Hardware - . 2013-09-07 . dead . https://web.archive.org/web/20130306182855/http://www.informationweek.com/hardware/supercomputers/rutgers-gets-big-data-weapon-in-ibm-supe/232700313 . 2013-03-06 .
  29. Web site: University of Rochester and IBM Expand Partnership in Pursuit of New Frontiers in Health. University of Rochester Medical Center. https://web.archive.org/web/20120511152144/http://www.urmc.rochester.edu/news/story/index.cfm?id=3498. 2012-05-11. May 11, 2012.
  30. Web site: IBM and Universiti Brunei Darussalam to Collaborate on Climate Modeling Research. IBM News Room. 18 October 2012. 2010-10-13.
  31. Web site: DOST's supercomputer for scientists now operational. Rainier Allan. Ronda. Philstar.com. 13 October 2017.
  32. Web site: Topalov training with super computer Blue Gene P. Players.chessdo.com. 13 October 2017. 19 May 2013. https://web.archive.org/web/20130519190040/http://players.chessdom.com/veselin-topalov/topalov-blue-gene-p. dead.
  33. Kaku, Michio. Physics of the Future (New York: Doubleday, 2011), 91.
  34. Web site: Project Kittyhawk: A Global-Scale Computer. Research.ibm.com. 13 October 2017.
  35. Web site: Project Kittyhawk: Building a Global-Scale Computer . Jonathan . Appavoo . Volkmar . Uhlig . Amos . Waterland . IBM T.J. Watson Research Center . Yorktown Heights, NY . 2018-03-13 . 2008-10-31 . https://wayback.archive-it.org/all/20081031010631/http://weather.ou.edu/~apw/projects/kittyhawk/kittyhawk.pdf . bot: unknown .
  36. Web site: Rutgers-led Experts Assemble Globe-Spanning Supercomputer Cloud . News.rutgers.edu . 2011-07-06 . 2011-12-24 . dead . https://web.archive.org/web/20111110044858/http://news.rutgers.edu/medrel/special-content/summer-2011/rutgers-led-experts-20110706 . 2011-11-10 .
  37. Web site: Memory Speculation of the Blue Gene/Q Compute Chip. 2011-12-23 .
  38. Web site: The Blue Gene/Q Compute chip. 2011-12-23. 2015-04-29. https://web.archive.org/web/20150429070233/http://www.hotchips.org/wp-content/uploads/hc_archives/hc23/HC23.18.1-manycore/HC23.18.121.BlueGene-IBM_BQC_HC23_20110818.pdf. dead.
  39. Web site: IBM Blue Gene/Q supercomputer delivers petascale computing for high-performance computing applications. 01.ibm.com. 13 October 2017.
  40. Web site: IBM uncloaks 20 petaflops BlueGene/Q super. The Register. 2010-11-22. 2010-11-25.
  41. Web site: IBM announces 20-petaflops supercomputer. Kurzweil. 13 November 2012. 18 November 2011. IBM has announced the Blue Gene/Q supercomputer, with peak performance of 20 petaflops.
  42. Web site: Feldman . Michael . Lawrence Livermore Prepares for 20 Petaflop Blue Gene/Q . HPCwire . 2009-02-03 . 2011-03-11 . dead . https://web.archive.org/web/20090212034132/http://www.hpcwire.com/features/Lawrence-Livermore-Prepares-for-20-Petaflop-Blue-GeneQ-38948594.html . 2009-02-12 .
  43. Web site: B Johnston . Donald . NNSA's Sequoia supercomputer ranked as world's fastest . 2012-06-18 . 2012-06-23 . https://web.archive.org/web/20140902012510/https://www.llnl.gov/news/newsreleases/2012/Jun/NR-12-06-07.html . 2014-09-02 . dead .
  44. Web site: TOP500 Press Release. https://web.archive.org/web/20120624041004/http://www.top500.org/lists/2012/06/press-release. dead. June 24, 2012.
  45. Web site: MIRA: World's fastest supercomputer - Argonne Leadership Computing Facility. Alcf.anl.gov. 13 October 2017.
  46. Web site: Mira - Argonne Leadership Computing Facility. Alcf.anl.gov. 13 October 2017.
  47. Web site: Vulcan—decommissioned. hpc.llnl.gov. 10 April 2019.
  48. Web site: HPC Innovation Center. hpcinnovationcenter.llnl.gov. 13 October 2017.
  49. Web site: Lawrence Livermore's Vulcan brings 5 petaflops computing power to collaborations with industry and academia to advance science and technology. 11 June 2013. Llnl.gov. 13 October 2017. 9 December 2013. https://web.archive.org/web/20131209231310/https://www.llnl.gov/news/newsreleases/2013/Jun/NR-13-06-05.html. dead.
  50. Web site: Ibm-Fermi | Scai . 2013-05-13 . dead . https://web.archive.org/web/20131030045547/http://www.hpc.cineca.it/content/ibm-fermi . 2013-10-30 .
  51. Web site: DiRAC BlueGene/Q. epcc.ed.ac.uk.
  52. Web site: Rensselaer at Petascale: AMOS Among the World's Fastest and Most Powerful Supercomputers. News.rpi.edu. 13 October 2017.
  53. Web site: AMOS Ranks 1st Among Supercomputers at Private American Universities. Michael Mullaneyvar . News.rpi.edi. 13 October 2017.
  54. Web site: World's greenest supercomputer comes to Melbourne - The Melbourne Engineer. 16 February 2012. Themelbourneengineer.eng.unimelb.edu.au/. 13 October 2017. 2 October 2017. https://web.archive.org/web/20171002042114/http://themelbourneengineer.eng.unimelb.edu.au/2012/02/worlds-greenest-computer-comes-to-melbourne/. dead.
  55. Web site: Melbourne Bioinformatics - For all researchers and students based in Melbourne's biomedical and bioscience research precinct. Melbourne Bioinformatics. 13 October 2017.
  56. Web site: Access to High-end Systems - Melbourne Bioinformatics. Vlsci.org.au. 13 October 2017.
  57. Web site: University of Rochester Inaugurates New Era of Health Care Research. Rochester.edu. 13 October 2017.
  58. Web site: Resources - Center for Integrated Research Computing. Circ.rochester.edu. 13 October 2017.
  59. Web site: EPFL BlueGene/L Homepage . https://web.archive.org/web/20071210070627/http://bluegene.epfl.ch/ . 2021-03-10. 2007-12-10 .
  60. Web site: À propos. Super. Utilisateur. Cadmos.org. 13 October 2017. https://web.archive.org/web/20160110132825/http://cadmos.org/. 10 January 2016. dead.
  61. Web site: A*STAR Computational Resource Centre. Acrc.a-star.edu.sg. 2016-08-24. 2016-12-20. https://web.archive.org/web/20161220065923/https://www.acrc.a-star.edu.sg/21/hardware.html. dead.
  62. 1211.4864. The Universe at Extreme Scale: Multi-Petaflop Sky Simulation on the BG/Q . S. Habib . V. Morozov . H. Finkel . A. Pope . K. Heitmann. Katrin Heitmann . K. Kumaran . T. Peterka . J. Insley . D. Daniel . P. Fasel . N. Frontiere . Z. Lukic . amp . cs.DC . 2012 .
  63. Web site: Cardioid Cardiac Modeling Project. Researcher.watson.ibm.com. 25 July 2016. 13 October 2017. 21 May 2013. https://web.archive.org/web/20130521180128/http://researcher.watson.ibm.com/researcher/view_project.php?id=2992. dead.
  64. Web site: Venturing into the Heart of High-Performance Computing Simulations. Str.llnl.gov. 13 October 2017. 14 February 2013. https://web.archive.org/web/20130214104403/https://str.llnl.gov/Sep12/streitz.html. dead.
  65. Book: http://dl.acm.org/citation.cfm?doid=2503210.2504565 . 10.1145/2503210.2504565 . SC '13. 17 November 2013. 1–13. 9781450323789. 12651650. 11 PFLOP/S simulations of cloud cavitation collapse. Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis. Rossinelli. Diego. Hejazialhosseini. Babak. Hadjidoukas. Panagiotis. Bekas. Costas. Curioni. Alessandro. Bertsch. Adam. Futral. Scott. Schmidt. Steffen J.. Adams. Nikolaus A.. Koumoutsakos. Petros.