NASA Advanced Supercomputing Division explained

Agency Name: NASA Advanced Supercomputing Division
Preceding1: Numerical Aerodynamic Simulation Division (1982)
Preceding2: Numerical Aerospace Simulation Division (1995)
Headquarters: NASA Ames Research Center, Moffett Field, California
Coordinates: 37.4211°N 122.0647°W
Chief1 Name: Piyush Mehrotra
Chief1 Position: Division Chief
Parent Department: Ames Research Center Exploration Technology Directorate
Parent Agency: National Aeronautics and Space Administration (NASA)
Current Supercomputing Systems
Pleiades: SGI/HPE ICE X supercluster
Aitken[1]: HPE E-Cell system
Electra[2]: SGI/HPE ICE X & HPE E-Cell system
Endeavour: SGI UV shared-memory system
Merope[3]: SGI Altix supercluster
The NASA Advanced Supercomputing (NAS) Division is located at NASA Ames Research Center, Moffett Field, in the heart of Silicon Valley in Mountain View, California. For almost forty years it has been the major supercomputing, modeling, and simulation resource for NASA missions in aerodynamics, space exploration, studies of weather patterns and ocean currents, and space shuttle and aircraft design and development.

The facility currently houses the petascale Pleiades, Aitken, and Electra supercomputers, as well as the terascale Endeavour supercomputer. The systems are based on SGI and HPE architecture with Intel processors. The main building also houses disk and archival tape storage systems with a capacity of over an exabyte of data, the hyperwall visualization system, and one of the largest InfiniBand network fabrics in the world.[4] The NAS Division is part of NASA's Exploration Technology Directorate and operates NASA's High-End Computing Capability (HECC) Project.[5]

History

Founding

In the mid-1970s, a group of aerospace engineers at Ames Research Center began to look into transferring aerospace research and development from costly and time-consuming wind tunnel testing to simulation-based design and engineering using computational fluid dynamics (CFD) models on supercomputers more powerful than those commercially available at the time. This endeavor was later named the Numerical Aerodynamic Simulator (NAS) Project, and its first computer was installed at the Central Computing Facility at Ames Research Center in 1984.

Ground was broken for a state-of-the-art supercomputing facility on March 14, 1985, to construct a building where CFD experts, computer scientists, visualization specialists, and network and storage engineers could work under one roof in a collaborative environment. In 1986, NAS transitioned into a full-fledged NASA division, and in 1987 NAS staff and equipment, including a second supercomputer, a Cray-2 named Navier, were relocated to the new facility, which was dedicated on March 9, 1987.[6]

In 1995, NAS changed its name to the Numerical Aerospace Simulation Division, and in 2001 to the name it has today.

Industry-Leading Innovations

NAS has been one of the leading innovators in the supercomputing world, developing many tools and processes that became widely used in commercial supercomputing.[7]

Software Development

NAS develops and adapts software in order to "complement and enhance the work performed on its supercomputers, including software for systems support, monitoring systems, security, and scientific visualization," and often provides this software to its users through the NASA Open Source Agreement (NOSA).[8]

Important software developed at NAS includes tools such as the Flow Analysis Software Toolkit (FAST)[9] and the Cart3D analysis package.[10]

Supercomputing History

Since its construction in 1987, the NASA Advanced Supercomputing Facility has housed and operated some of the most powerful supercomputers in the world. Many of these computers were testbed systems built to evaluate new architectures, hardware, or networking setups that might be used on a larger scale.[13] Peak performance is shown in floating-point operations per second (FLOPS).
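As a worked example of how such peak figures arise, here is a minimal sketch using the standard decomposition into processor count, clock rate, and floating-point operations per cycle; the 2 FLOPs-per-cycle value is an assumption (one fused multiply-add per cycle), not a figure taken from NAS documentation, though it is consistent with the Origin-era entries in the table below.

```python
# Minimal sketch: theoretical peak = processors x clock rate x FLOPs per cycle.
# The 2 FLOPs/cycle default is an assumption, not a value from NAS documentation.

def theoretical_peak_flops(processors: int, clock_hz: float, flops_per_cycle: int = 2) -> float:
    """Return the theoretical peak performance in FLOPS."""
    return processors * clock_hz * flops_per_cycle

# 512 processors at 300 MHz with 2 FLOPs/cycle gives 307.2 gigaflops,
# matching the Lomax (SGI Origin 2800/300 MHz) entry in the table below.
peak = theoretical_peak_flops(processors=512, clock_hz=300e6)
print(f"{peak / 1e9:.1f} gigaflops")   # 307.2 gigaflops
```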

Computer Name Architecture Peak Performance Number of CPUs Installation Date
210.53 megaflops 1 1984
Navier 1.95 gigaflops 4 1985
Chuck 1.9 gigaflops 8 1987
Pierre 14.34 gigaflops 16,000 1987
43 gigaflops 48,000 1991
Stokes Cray 2 1.95 gigaflops 4 1988
Piper CDC/ETA-10Q 840 megaflops 4 1988
Reynolds 2.54 gigaflops 8 1988
2.67 gigaflops 88 1988
Lagrange 7.88 gigaflops 128 1990
Gamma Intel iPSC/860 7.68 gigaflops 128 1990
von Karman Convex 3240 200 megaflops 4 1991
Boltzmann Thinking Machines CM5 16.38 gigaflops 128 1993
Sigma 15.60 gigaflops 208 1993
von Neumann 15.36 gigaflops 16 1993
Eagle Cray C90 7.68 gigaflops 8 1993
Grace Intel Paragon 15.6 gigaflops 209 1993
Babbage 34.05 gigaflops 128 1994
42.56 gigaflops 160 1994
da Vinci 16 1994
SGI Power Challenge XL 11.52 gigaflops 32 1995
Newton 7.2 gigaflops 36 1996
Piglet 4 gigaflops 8 1997
Turing SGI Origin 2000/195 MHz 9.36 gigaflops 24 1997
25 gigaflops 64 1997
Fermi SGI Origin 2000/195 MHz 3.12 gigaflops 8 1997
Hopper SGI Origin 2000/250 MHz 32 gigaflops 64 1997
Evelyn SGI Origin 2000/250 MHz 4 gigaflops 8 1997
Steger SGI Origin 2000/250 MHz 64 gigaflops 128 1997
128 gigaflops 256 1998
Lomax SGI Origin 2800/300 MHz 307.2 gigaflops 512 1999
409.6 gigaflops 512 2000
Lou SGI Origin 2000/250 MHz 4.68 gigaflops 12 1999
Ariel SGI Origin 2000/250 MHz 4 gigaflops 8 2000
Sebastian SGI Origin 2000/250 MHz 4 gigaflops 8 2000
SN1-512 409.6 gigaflops 512 2001
Bright 64 gigaflops 32 2001
Chapman SGI Origin 3800/400 MHz 819.2 gigaflops 1,024 2001
1.23 teraflops 1,024 2002
Lomax II SGI Origin 3800/400 MHz 409.6 gigaflops 512 2002
Kalpana[14] SGI Altix 3000[15] 2.66 teraflops 512 2003
Cray X1[16] 204.8 gigaflops 2004
SGI Altix 3000[17] 63 teraflops 10,240 2004
10,296 2006
85.8 teraflops[18] 13,824 2007
Schirra IBM POWER5+[19] 4.8 teraflops 640 2007
RT Jones 43.5 teraflops 4,096 2007
SGI ICE 8200, Intel Xeon "Harpertown" Processors[20] 487 teraflops 51,200 2008
544 teraflops[21] 56,320 2009
SGI ICE 8200, Intel Xeon "Harpertown"/"Nehalem" Processors[22] 773 teraflops 81,920 2010
SGI ICE 8200/8400, Intel Xeon "Harpertown"/"Nehalem"/"Westmere" Processors[23] 1.09 petaflops 111,104 2011
SGI ICE 8200/8400/X, Intel Xeon "Harpertown"/"Nehalem"/"Westmere"/"Sandy Bridge" Processors[24] 1.24 petaflops 125,980 2012
SGI ICE 8200/8400/X, Intel Xeon "Nehalem"/"Westmere"/"Sandy Bridge"/"Ivy Bridge" Processors[25] 2.87 petaflops 162,496 2013
3.59 petaflops 184,800 2014
SGI ICE 8400/X, Intel Xeon "Westmere"/"Sandy Bridge"/"Ivy Bridge"/"Haswell" Processors[26] 4.49 petaflops 198,432 2014
5.35 petaflops[27] 210,336 2015
SGI ICE X, Intel Xeon "Sandy Bridge"/"Ivy Bridge"/"Haswell"/"Broadwell" Processors[28] 7.25 petaflops 246,048 2016
SGI UV 2000, Intel Xeon "Sandy Bridge" Processors[29] 32 teraflops 1,536 2013
SGI ICE 8200, Intel Xeon "Harpertown" Processors 61 teraflops 5,120 2013
SGI ICE 8400, Intel Xeon "Nehalem"/"Westmere" Processors 141 teraflops 1,152 2014
Electra SGI ICE X, Intel Xeon "Broadwell" Processors[30] 1.9 petaflops 1,152 2016
SGI ICE X/HPE SGI 8600 E-Cell, Intel Xeon "Broadwell"/"Skylake" Processors[31] 4.79 petaflops 2,304 2017
8.32 petaflops[32] 3,456 2018
Aitken HPE SGI 8600 E-Cell, Intel Xeon "Cascade Lake" Processors[33] 3.69 petaflops 1,150 2019

Storage Resources

Disk Storage

In 1987, NAS partnered with the Defense Advanced Research Projects Agency (DARPA) and the University of California, Berkeley on the Redundant Array of Inexpensive Disks (RAID) project, which sought to create a storage technology that combined multiple disk drive components into one logical unit. Completed in 1992, the RAID project led to the distributed data storage technology used today.
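The redundancy idea at the heart of RAID can be illustrated with a short sketch (illustrative only, not NAS or Berkeley code): data blocks are striped across several drives, and an XOR parity block lets the contents of any single failed drive be reconstructed from the survivors.

```python
# Illustrative sketch of RAID-style parity: stripe data across drives and store an
# XOR parity block so any single lost block can be rebuilt from the surviving ones.
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks byte-by-byte to form the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def reconstruct(surviving: list[bytes], parity_block: bytes) -> bytes:
    """Recover the one missing block from the surviving blocks plus parity."""
    return parity(surviving + [parity_block])

stripes = [b"blk-aaaa", b"blk-bbbb", b"blk-cccc"]   # data on three "drives"
p = parity(stripes)                                 # parity stored on a fourth drive
assert reconstruct([stripes[0], stripes[2]], p) == stripes[1]   # "drive 1" failed
```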

The NAS facility currently houses disk mass storage on an SGI parallel DMF cluster with high-availability software consisting of four 32-processor front-end systems, which are connected to the supercomputers and the archival tape storage system. The system has 192 GB of memory per front-end[34] and 7.6 petabytes (PB) of disk cache. Data stored on disk is regularly migrated to the tape archival storage systems at the facility to free up space for other user projects being run on the supercomputers.

Archive and Storage Systems

In 1987, NAS developed the first UNIX-based hierarchical mass storage system, named NAStore. It contained two StorageTek 4400 cartridge tape robots, each with a storage capacity of approximately 1.1 terabytes, cutting tape retrieval time from 4 minutes to 15 seconds.

With the installation of the Pleiades supercomputer in 2008, the StorageTek systems that NAS had been using for 20 years could no longer meet the needs of the growing number of users and the increasing file sizes of each project's datasets.[35] In 2009, NAS brought in Spectra Logic T950 robotic tape systems, which increased the maximum capacity at the facility to 16 petabytes available for users to archive their data from the supercomputers.[36] By March 2019, NAS had increased the total archival storage capacity of the Spectra Logic tape libraries to 1,048 petabytes (roughly 1 exabyte) with 35% compression. SGI's Data Migration Facility (DMF) and OpenVault software manage disk-to-tape data migration and tape-to-disk de-migration for the NAS facility.
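The actual migration policies belong to those products, but the general disk-to-tape idea can be sketched as follows; this is a hypothetical high-water/low-water policy for illustration, with invented paths and thresholds, and is not how DMF or OpenVault is implemented.

```python
# Hypothetical sketch of hierarchical storage migration: when the disk cache grows
# past a high-water mark, the least recently accessed files are moved to the tape
# tier until usage falls below a low-water mark. Paths and limits are made up.
import os
import shutil

DISK_CACHE = "/tmp/example_disk_cache"   # stand-in for the disk cache file system
TAPE_TIER = "/tmp/example_tape_tier"     # stand-in for the tape archive
QUOTA_BYTES = 100 * 1024 * 1024          # toy 100 MB quota
HIGH_WATER, LOW_WATER = 0.80, 0.60       # migrate above 80%, stop below 60%

def cache_usage() -> int:
    """Total bytes currently held in the disk cache."""
    return sum(os.path.getsize(os.path.join(root, name))
               for root, _, names in os.walk(DISK_CACHE) for name in names)

def migrate_cold_files() -> None:
    """Move least recently accessed files from disk to tape until below LOW_WATER."""
    if cache_usage() < HIGH_WATER * QUOTA_BYTES:
        return
    by_age = sorted((os.path.join(root, name)
                     for root, _, names in os.walk(DISK_CACHE) for name in names),
                    key=os.path.getatime)            # oldest access time first
    for path in by_age:
        if cache_usage() <= LOW_WATER * QUOTA_BYTES:
            break
        shutil.move(path, os.path.join(TAPE_TIER, os.path.basename(path)))
```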

As of March 2019, more than 110 petabytes of unique data were stored in the NAS archival storage system.

Data Visualization Systems

In 1984, NAS purchased 25 SGI IRIS 1000 graphics terminals, the beginning of its long partnership with the Silicon Valley-based company, which went on to have a significant impact on the post-processing and visualization of CFD results produced on the facility's supercomputers. Visualization became a key step in the analysis of simulation data, allowing engineers and scientists to view their results spatially and in ways that gave a deeper understanding of the CFD forces at work in their designs.

The hyperwall

In 2002, NAS visualization experts developed a visualization system called the "hyperwall" which included 49 linked LCD panels that allowed scientists to view complex datasets on a large, dynamic seven-by-seven screen array. Each screen had its own processing power, allowing each one to display, process, and share datasets so that a single image could be displayed across all screens or configured so that data could be displayed in "cells" like a giant visual spreadsheet.[37]
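A minimal sketch of the tiling arithmetic involved in such a display is shown below; the per-panel resolution is an assumed example value and is not stated in the source.

```python
# Sketch of mapping one large image onto a 7x7 tiled display: each panel renders a
# fixed rectangular region of the global pixel space, or, in "cell" mode, its own
# independent dataset. The per-panel resolution below is an assumed example value.
GRID_COLS, GRID_ROWS = 7, 7
PANEL_W, PANEL_H = 1280, 1024            # assumed panel resolution (illustrative)

def panel_region(col: int, row: int) -> tuple[int, int, int, int]:
    """Pixel rectangle (x0, y0, x1, y1) of the global image shown on one panel."""
    x0, y0 = col * PANEL_W, row * PANEL_H
    return x0, y0, x0 + PANEL_W, y0 + PANEL_H

def cell_dataset(col: int, row: int) -> int:
    """In spreadsheet-like 'cell' mode, each panel shows its own dataset index."""
    return row * GRID_COLS + col

# Spanning mode: the global image is GRID_COLS*PANEL_W by GRID_ROWS*PANEL_H pixels.
print(panel_region(3, 3))    # the centre panel's slice of the global image
print(cell_dataset(6, 6))    # bottom-right panel shows dataset 48 of 0..48
```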

The second generation "hyperwall-2" was developed in 2008 by NAS in partnership with Colfax International and is made up of 128 LCD screens arranged in an 8x16 grid 23 feet wide by 10 feet tall. It is capable of rendering one quarter billion pixels, making it the highest resolution scientific visualization system in the world.[38] It contains 128 nodes, each with two quad-core AMD Opteron (Barcelona) processors and a Nvidia GeForce 480 GTX graphics processing unit (GPU) for a dedicated peak processing power of 128 teraflops across the entire system—100 times more powerful than the original hyperwall.[39] The hyperwall-2 is directly connected to the Pleiades supercomputer's filesystem over an InfiniBand network, which allows the system to read data directly from the filesystem without needing to copy files onto the hyperwall-2's memory.

In 2014, the hyperwall was upgraded with new hardware: 256 Intel Xeon "Ivy Bridge" processors and 128 NVIDIA GeForce 780 Ti GPUs. The upgrade increased the system's peak processing power from 9 teraflops to 57 teraflops, and the system now has nearly 400 gigabytes of graphics memory.[40]

In 2020, the hyperwall was further upgraded with new hardware: 256 Intel Xeon Platinum 8268 (Cascade Lake) processors and 128 NVIDIA Quadro RTX 6000 GPUs with a total of 3.1 terabytes of graphics memory. The upgrade increased the system's peak processing power from 57 teraflops to 512 teraflops.[41]

Concurrent Visualization

An important feature of the hyperwall technology developed at NAS is that it allows for "concurrent visualization" of data, which enables scientists and engineers to analyze and interpret data while the calculations are running on the supercomputers. Not only does this show the current state of the calculation for runtime monitoring, steering, and termination, but it also "allows higher temporal resolution visualization compared to post-processing because I/O and storage space requirements are largely obviated... [and] may show features in a simulation that would otherwise not be visible."[42]
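The pattern can be sketched as a simulation thread that hands each time step to a visualization worker without ever blocking on it; this is an illustrative generic pattern, not the actual NAS pipeline, and the non-blocking hand-off is also why, as described below, such a pipeline can run alongside deadline-driven forecasts without slowing them down.

```python
# Illustrative sketch of concurrent (in-situ) visualization: the solver pushes each
# time step into a bounded queue and never blocks on rendering, so visualization
# cannot stall the simulation. This is a generic pattern, not the NAS implementation.
import queue
import threading

frames: "queue.Queue" = queue.Queue(maxsize=8)

def render(step: dict) -> None:
    print(f"rendered time step {step['t']}")        # stand-in for real rendering

def visualizer() -> None:
    while True:
        step = frames.get()
        if step is None:                            # sentinel: simulation finished
            break
        render(step)

def simulate(n_steps: int) -> None:
    for t in range(n_steps):
        field = {"t": t, "data": [0.1 * t] * 1000}  # stand-in for CFD output
        try:
            frames.put_nowait(field)                # drop the frame if rendering lags
        except queue.Full:
            pass                                    # the solver keeps running regardless
    frames.put(None)

worker = threading.Thread(target=visualizer)
worker.start()
simulate(20)
worker.join()
```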

The NAS visualization team developed a configurable concurrent pipeline for use with a massively parallel forecast model run on the Columbia supercomputer in 2005 to help predict the Atlantic hurricane season for the National Hurricane Center. Because each forecast had to be submitted by a deadline, it was important that the visualization process not significantly impede the simulation or cause it to fail.

Notes and References

  1. Web site: Aitken Supercomputer homepage . NAS.
  2. Web site: Electra Supercomputer homepage . NAS.
  3. Web site: Merope Supercomputer homepage . NAS.
  4. Web site: NASA Advanced Supercomputing Division: Advanced Computing. NAS. 2019.
  5. Web site: NAS Homepage - About the NAS Division . NAS.
  6. Web site: NASA Advanced Supercomputing Division 25th Anniversary Brochure (PDF) . NAS . dead . https://web.archive.org/web/20130302164116/http://www.nas.nasa.gov/assets/pdf/NAS_25th_brochure.pdf . 2013-03-02 .
  7. Web site: NAS homepage: Division History . NAS.
  8. Web site: NAS Software and Datasets . NAS.
  9. Web site: NASA Flow Analysis Software Toolkit . NASA.
  10. Web site: NASA Cart3D Homepage. https://web.archive.org/web/20020602073014/http://people.nas.nasa.gov/~aftosmis/cart3d/ . dead . 2002-06-02 .
  11. Web site: NASA.gov. 2024-05-21. 2023-01-17. https://web.archive.org/web/20230117100533/https://www.nasa.gov/centers/ames/news/releases/2002/02_76AR.html. dead.
  12. Web site: NASA.gov.
  13. NAS High-Performance Computer History . Gridpoints . Spring 2002 . 1A–12A.
  14. Web site: NASA to Name Supercomputer After Columbia Astronaut . NAS . May 2005 . 2014-03-07 . 2013-03-17 . https://web.archive.org/web/20130317094512/http://www.nas.nasa.gov/publications/news/2004/05-10-04.html . dead .
  15. Web site: NASA Ames Installs World's First Alitx 512-Processor Supercomputer . NAS . November 2003 . 2014-03-07 . 2013-03-17 . https://web.archive.org/web/20130317094536/http://www.nas.nasa.gov/publications/news/2003/11-17-03.html . dead .
  16. Web site: New Cray X1 System Arrives at NAS . NAS . April 2004.
  17. Web site: NASA Unveils Its Newest, Most Powerful Supercomputer . NASA . October 2004 . 2014-03-07 . 2004-10-28 . https://web.archive.org/web/20041028100627/http://www.nasa.gov/home/hqnews/2004/oct/HQ_04353_columbia.html . dead .
  18. Web site: Columbia Supercomputer Legacy homepage . NASA.
  19. Web site: NASA Selects IBM for Next-Generation Supercomputing Applications . NASA . June 2007.
  20. Web site: NASA Supercomputer Ranks Among World's Fastest – November 2008 . NASA . November 2008 . 2014-03-07 . 2019-08-25 . https://web.archive.org/web/20190825041603/https://www.nas.nasa.gov/publications/news/2008/11-18-08.html . dead .
  21. Web site: 'Live' Integration of Pleiades Rack Saves 2 Million Hours . NAS . February 2010 . 2014-03-07 . 2013-03-16 . https://web.archive.org/web/20130316185608/http://www.nas.nasa.gov/publications/news/2010/02-08-10.html . dead .
  22. Web site: NASA Supercomputer Doubles Capacity, Increases Efficiency . NASA . June 2010 . 2014-03-07 . 2019-08-25 . https://web.archive.org/web/20190825041604/https://www.nas.nasa.gov/publications/news/2010/06-02-10.html . dead .
  23. Web site: NASA's Pleiades Supercomputer Ranks Among World's Fastest . NASA . June 2011 . 2014-03-07 . 2011-10-21 . https://web.archive.org/web/20111021054516/http://www.nasa.gov/home/hqnews/2011/jun/HQ-11-194_Supercomputer_Ranks.html . dead .
  24. Web site: Pleiades Supercomputer Gets a Little More Oomph . NASA . June 2012.
  25. Web site: NASA's Pleiades Supercomputer Upgraded, Harpertown Nodes Repurposed . NAS . August 2013 . 2014-03-07 . 2019-08-25 . https://web.archive.org/web/20190825041604/https://www.nas.nasa.gov/publications/news/2013/09-19-13.html . dead .
  26. Web site: NASA's Pleiades Supercomputer Upgraded, Gets One Petaflops Boost . NAS . October 2014 . 2014-12-29 . 2019-08-25 . https://web.archive.org/web/20190825041557/https://www.nas.nasa.gov/publications/news/2014/10-28-14.html . dead .
  27. Web site: Pleiades Supercomputer Performance Leaps to 5.35 Petaflops with Latest Expansion . NAS . January 2015.
  28. Web site: Pleiades Supercomputer Peak Performance Increased, Long-Term Storage Capacity Tripled . NAS . July 2016 . 2020-03-05 . 2019-06-19 . https://web.archive.org/web/20190619184616/https://www.nas.nasa.gov/publications/news/2016/06-01-16.html . dead .
  29. Web site: Endeavour Supercomputer Resource homepage . NAS.
  30. Web site: NASA Ames Kicks off Pathfinding Modular Supercomputing Facility . NAS . February 2017.
  31. Web site: Recently Expanded, NASA's First Modular Supercomputer Ranks 15th in the U.S. on TOP500 List . NAS . November 2017.
  32. Web site: NASA's Electra Supercomputer Rises to 12th Place in the U.S. on the TOP500 List . NAS . November 2018.
  33. Web site: NASA Advanced Supercomputing Division: Modular Supercomputing . NAS . 2019.
  34. Web site: HECC Archival Storage System Resource homepage . NAS.
  35. Web site: NAS Silo, Tape Drive, and Storage Upgrades - SC09 . NAS . November 2009.
  36. Web site: New NAS Data Archive System Installation Completed . NAS . 2009.
  37. Web site: Mars Flyer Debuts on Hyperwall . NAS . September 2003.
  38. Web site: NASA Develops World's Highest Resolution Visualization System . NAS . June 2008.
  39. Web site: NAS Visualization Systems Overview . NAS.
  40. Web site: NAS hyperwall Visualization System Upgraded with Ivy Bridge Nodes . NAS . October 2014.
  41. Web site: NAS Visualization Systems: hyperwall . NAS . December 2020.
  42. Ellsworth . David . Bryan Green . Chris Henze . Patrick Moran . Timothy Sandstrom . Concurrent Visualization in a Production Supercomputing Environment . IEEE Transactions on Visualization and Computer Graphics . 12 . 5 . September–October 2006 . 997–1004 . 10.1109/TVCG.2006.128 . 17080827 . 14037933 .