Dates: Operational in 2008; final completion in 2009
Operators: National Nuclear Security Administration
Sponsors: IBM
Location: Los Alamos National Laboratory
Architecture: 12,960 IBM PowerXCell 8i CPUs, 6,480 AMD Opteron dual-core processors, InfiniBand
Memory: 103.6 TiB
Storage: 1,000,000 GB
Speed: 1.042 petaFLOPS
OS: Red Hat Enterprise Linux
Power: 2.35 MW
Space: 296 racks, 560 m²
Cost: US$100 million
TOP500 ranking: 10 (June 2011)
Purpose: Modeling the decay of the U.S. nuclear arsenal
Legacy: First TOP500 Linpack sustained 1.0 petaflops, May 25, 2008
Roadrunner was a supercomputer built by IBM for the Los Alamos National Laboratory in New Mexico, USA. The US$100-million Roadrunner was designed for a peak performance of 1.7 petaflops. It achieved 1.026 petaflops on May 25, 2008, becoming the first system in the world to sustain 1.0 petaflops on the TOP500 LINPACK benchmark.[1][2]
In November 2008, it reached a top performance of 1.456 petaFLOPS, retaining its top spot on the TOP500 list.[3] It was also the fourth most energy-efficient supercomputer in the world on the Supermicro Green500 list, with an operational rate of 444.94 megaflops per watt of power used. The hybrid Roadrunner design was later reused for several other energy-efficient supercomputers.[4] Roadrunner was decommissioned by Los Alamos on March 31, 2013.[5] In its place, Los Alamos commissioned a supercomputer called Cielo, which was installed in 2010.
IBM built the computer for the U.S. Department of Energy's (DOE) National Nuclear Security Administration (NNSA).[6] [7] It was a hybrid design with 12,960 IBM PowerXCell 8i[8] and 6,480 AMD Opteron dual-core processors in specially designed blade servers connected by InfiniBand. The Roadrunner used Red Hat Enterprise Linux along with Fedora[9] as its operating systems, and was managed with xCAT distributed computing software. It also used the Open MPI Message Passing Interface implementation.[10]
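Applications on Roadrunner were parallelized across nodes with MPI in the usual way. As a minimal, generic illustration of the programming model (a sketch, not code from Roadrunner itself), the following C program has every process report its rank and then combines a value from all ranks on rank 0:

```c
/* Minimal MPI example (illustrative only; not Roadrunner code). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, sum = 0;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's ID     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks */

    printf("Hello from rank %d of %d\n", rank, size);

    /* Sum the rank IDs of every process onto rank 0. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Sum of ranks: %d\n", sum);

    MPI_Finalize();
    return 0;
}
```

With an implementation such as Open MPI installed, this compiles with mpicc and runs with, for example, mpirun -np 4 ./hello.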
Roadrunner occupied approximately 296 server racks covering 560 m²[11] and became operational in 2008. It was decommissioned on March 31, 2013.[12] The DOE used the computer to simulate how nuclear materials age, in order to predict whether the USA's aging arsenal of nuclear weapons is safe and reliable. Other uses for Roadrunner included the science, financial, automotive, and aerospace industries.
Roadrunner differed from other contemporary supercomputers because it continued the hybrid approach to supercomputer design introduced by Seymour Cray in 1964 with the Control Data Corporation CDC 6600 and continued with the order-of-magnitude-faster CDC 7600 in 1969. In those machines, however, the peripheral processors handled only operating system functions, and all applications ran in the single central processor. Most previous supercomputers used only one processor architecture, since that was thought to be easier to design and program for. To realize Roadrunner's full potential, all software had to be written specially for its hybrid architecture. The hybrid design consisted of dual-core Opteron server processors, manufactured by AMD using the standard AMD64 architecture, with an IBM-designed and IBM-fabricated PowerXCell 8i processor attached to each Opteron core. As a supercomputer, Roadrunner was considered an Opteron cluster with Cell accelerators: each node consisted of a Cell attached to an Opteron core, and the Opterons were connected to each other.[13]
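In practice, "written specially" meant that code running on each Cell's general-purpose core had to explicitly load and launch work on the chip's accelerator cores. The C sketch below shows that idiom using IBM's libspe2 library for the Cell; it is a minimal illustration in which the embedded SPE program spe_kernel is hypothetical, not Roadrunner's actual application code:

```c
/* Hedged sketch of launching work on one Cell accelerator core (SPE)
 * with IBM's libspe2. "spe_kernel" is a hypothetical SPE binary that
 * would be embedded at link time; this is not Roadrunner code. */
#include <libspe2.h>
#include <stdio.h>

extern spe_program_handle_t spe_kernel;  /* hypothetical embedded SPE program */

int main(void) {
    unsigned int entry = SPE_DEFAULT_ENTRY;
    spe_context_ptr_t spe;

    spe = spe_context_create(0, NULL);   /* allocate one SPE context */
    if (spe == NULL) {
        perror("spe_context_create");
        return 1;
    }
    if (spe_program_load(spe, &spe_kernel) != 0) {  /* load the SPE image */
        perror("spe_program_load");
        return 1;
    }
    /* Run the kernel on the SPE; this call blocks until the SPE stops. */
    if (spe_context_run(spe, &entry, 0, NULL, NULL, NULL) < 0)
        perror("spe_context_run");

    spe_context_destroy(spe);
    return 0;
}
```

Real Roadrunner applications also had to move data between the Opteron hosts and the Cell blades over the PCIe links described below, which is a large part of what made the machine harder to program than a single-architecture cluster.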
Roadrunner was in development from 2002 and went online in 2006. Due to its novel design and complexity it was constructed in three phases and became fully operational in 2008. Its predecessor was a machine also developed at Los Alamos, named Dark Horse.[14] This machine was one of the earliest hybrid-architecture systems, originally based on ARM and later moved to the Cell processor. It was an entirely 3D design, integrating 3D memory, networking, processors, and a number of other technologies.
The first phase of Roadrunner was building a standard Opteron-based cluster, while evaluating the feasibility of constructing and programming the future hybrid version. This Phase 1 Roadrunner reached 71 teraflops and was in full operation at Los Alamos National Laboratory in 2006.
Phase 2, known as AAIS (Advanced Architecture Initial System), involved building a small hybrid version of the finished system using an older version of the Cell processor. This phase was used to build prototype applications for the hybrid architecture. It went online in January 2007.
The goal of Phase 3 was to reach sustained performance in excess of 1 petaflops. Additional Opteron nodes and new PowerXCell processors, five times as powerful as the Cell processors used in Phase 2, were added to the design. The system was built to full scale at IBM's Poughkeepsie, New York facility,[15] where it broke the 1 petaflops barrier during its fourth attempt on May 25, 2008. The complete system was moved to its permanent location in New Mexico in the summer of 2008.
Roadrunner used two different models of processor. The first was the AMD Opteron 2210, running at 1.8 GHz. Opterons were used both in the computational nodes, feeding the Cells with data, and in the system operations and communication nodes, passing data between computing nodes and helping the operators run the system. Roadrunner had a total of 6,912 Opteron processors, 6,480 used for computation and 432 for operations. The Opterons were connected together by HyperTransport links. Each Opteron had two cores, for a total of 13,824 cores.
The second was the IBM PowerXCell 8i, running at 3.2 GHz. Each of these processors had one general-purpose core, the Power Processing Element (PPE), and eight cores specialized for floating-point operations, the Synergistic Processing Elements (SPEs). Roadrunner had a total of 12,960 PowerXCell processors, with 12,960 PPE cores and 103,680 SPE cores, for a total of 116,640 cores.
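These totals follow directly from the per-chip figures; a quick arithmetic check in C, using only the numbers quoted above:

```c
/* Arithmetic check of Roadrunner's processor and core counts. */
#include <stdio.h>

int main(void) {
    /* Opterons: dual-core; 6,480 for computation plus 432 for operations. */
    int opteron_chips = 6480 + 432;             /* 6,912 chips   */
    int opteron_cores = opteron_chips * 2;      /* 13,824 cores  */

    /* PowerXCell 8i: one PPE plus eight SPEs per chip. */
    int cell_chips = 12960;
    int ppe_cores  = cell_chips * 1;            /* 12,960 cores  */
    int spe_cores  = cell_chips * 8;            /* 103,680 cores */

    printf("Opteron cores: %d\n", opteron_cores);
    printf("Cell cores:    %d (%d PPE + %d SPE)\n",
           ppe_cores + spe_cores, ppe_cores, spe_cores);   /* 116,640 */
    return 0;
}
```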
Logically, a TriBlade consisted of two dual-core Opterons with 16 GB of RAM and four PowerXCell 8i CPUs with 16 GB of Cell RAM.[16]
Physically, a TriBlade consisted of one LS21 Opteron blade, an expansion blade, and two QS22 Cell blades. The LS21 had two 1.8 GHz dual-core Opterons with 16 GB of memory for the whole blade, 8 GB for each CPU. Each QS22 had two PowerXCell 8i CPUs running at 3.2 GHz and 8 GB of memory, 4 GB for each CPU. The expansion blade connected the two QS22s to the LS21 via four PCIe x8 links, two links for each QS22, and provided outside connectivity through an InfiniBand 4x DDR adapter; it was connected to the Opteron blade via HyperTransport. A TriBlade thus occupied four slots, and three TriBlades fit into one BladeCenter H chassis.
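The per-CPU memory split follows from the blade totals; a small sketch (the struct and its field names are purely illustrative):

```c
/* Illustrative model of a TriBlade's logical composition. */
#include <stdio.h>

struct triblade {
    int opterons;         /* dual-core Opterons on the LS21 blade */
    int opteron_ram_gb;   /* memory shared across the LS21        */
    int cells;            /* PowerXCell 8i CPUs on the two QS22s  */
    int cell_ram_gb;      /* memory shared across the two QS22s   */
};

int main(void) {
    struct triblade tb = { 2, 16, 4, 16 };

    printf("RAM per Opteron: %d GB\n", tb.opteron_ram_gb / tb.opterons); /* 8 GB */
    printf("RAM per Cell:    %d GB\n", tb.cell_ram_gb / tb.cells);       /* 4 GB */
    return 0;
}
```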
A connected unit (CU) was 60 BladeCenter H chassis full of TriBlades, that is, 180 TriBlades. All TriBlades were connected to a 288-port Voltaire ISR2012 InfiniBand switch. Each CU also had access to the Panasas file system through twelve System x3755 servers.[16]
The final cluster was made up of 18 connected units, linked by eight additional (second-stage) ISR2012 InfiniBand switches. Each CU was connected through twelve uplinks to each second-stage switch, for a total of 96 uplink connections per CU.[16]
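The system-wide counts quoted earlier follow from this topology; a quick check, again using only figures from this article:

```c
/* Totals implied by Roadrunner's connected-unit topology. */
#include <stdio.h>

int main(void) {
    int cus                = 18;   /* connected units                     */
    int triblades_per_cu   = 180;  /* 60 BladeCenter H x 3 TriBlades each */
    int stage2_switches    = 8;    /* second-stage InfiniBand switches    */
    int uplinks_per_switch = 12;   /* uplinks from each CU to each switch */

    int triblades        = cus * triblades_per_cu;              /* 3,240  */
    int cell_chips       = triblades * 4;                       /* 12,960 */
    int compute_opterons = triblades * 2;                       /* 6,480  */
    int uplinks_per_cu   = stage2_switches * uplinks_per_switch; /* 96    */

    /* Matches the 12,960 Cells and 6,480 compute Opterons given above. */
    printf("TriBlades: %d, Cells: %d, compute Opterons: %d\n",
           triblades, cell_chips, compute_opterons);
    printf("Uplinks per CU: %d\n", uplinks_per_cu);
    return 0;
}
```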
IBM Roadrunner was shut down on March 31, 2013. While the supercomputer was one of the fastest in the world, its energy efficiency was relatively low: it delivered 444 megaflops per watt, versus the 886 megaflops per watt of a comparable supercomputer.[17] Before the supercomputer was dismantled, researchers spent one month performing memory and data-routing experiments to aid in the design of future supercomputers.
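The 444 megaflops-per-watt figure is consistent with the article's headline numbers (1.042 petaflops sustained on 2.35 MW), as a back-of-envelope check shows:

```c
/* Back-of-envelope check of Roadrunner's energy efficiency. */
#include <stdio.h>

int main(void) {
    double linpack_flops = 1.042e15;  /* sustained Linpack, FLOPS */
    double power_watts   = 2.35e6;    /* total power draw, watts  */

    double mflops_per_watt = linpack_flops / power_watts / 1e6;
    printf("%.1f megaflops per watt\n", mflops_per_watt);  /* about 443.4 */
    return 0;
}
```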
After Roadrunner was dismantled, its electronics were shredded.[18] Los Alamos performed the majority of the supercomputer's destruction itself, citing the classified nature of its calculations. Some of its parts were retained for historical purposes.