Tsubame (supercomputer)

Tsubame is a series of supercomputers operated by the Global Scientific Information and Computing Center (GSIC) at the Tokyo Institute of Technology in Japan, designed by Satoshi Matsuoka.

Versions

Tsubame 1.0

The Sun Microsystems-built Tsubame 1.0 began operation in 2006. Achieving 85 TFLOPS of performance, it was the most powerful supercomputer in Japan at the time.[1] [2] The system consisted of 655 InfiniBand-connected nodes, each with eight dual-core AMD Opteron 880 and 885 CPUs and 32 GB of memory.[3] [4] Tsubame 1.0 also included 600 ClearSpeed X620 Advance accelerator cards.[5]
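
The per-node figures above imply the following aggregate totals. This is a minimal back-of-the-envelope sketch based only on the numbers quoted in the text (and excluding the ClearSpeed accelerators), not an official specification:

```python
# Aggregate totals implied by the Tsubame 1.0 per-node figures quoted above.
# Illustrative arithmetic only.
nodes = 655
cpus_per_node = 8
cores_per_cpu = 2            # Opteron 880 and 885 are dual-core parts
memory_per_node_gb = 32

total_cores = nodes * cpus_per_node * cores_per_cpu
total_memory_tb = nodes * memory_per_node_gb / 1000   # decimal terabytes

print(total_cores)                 # 10480 CPU cores
print(round(total_memory_tb, 1))   # ~21.0 TB of system memory
```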

Tsubame 1.2

In 2008, Tsubame was upgraded with 170 Nvidia Tesla S1070 server units, adding a total of 680 Tesla T10 GPUs (four per unit) for GPGPU computing. This increased performance to 170 TFLOPS, making it at the time the second most powerful supercomputer in Japan and 29th in the world.

Tsubame 2.0

Tsubame 2.0 was built in 2010 by HP and NEC as a replacement for Tsubame 1.0.[6] With a peak of 2,288 TFLOPS, it was ranked 5th in the world in June 2011.[7] [8] It had 1,400 nodes using six-core Xeon 5600 and eight-core Xeon 7500 processors. The system also included 4,200 Nvidia Tesla M2050 GPGPU compute modules. In total the system had 80.6 TB of DRAM, in addition to 12.7 TB of GDDR memory on the GPU devices.[9]
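
Those GPU figures are internally consistent: dividing GPUs by nodes gives three M2050 modules per node, and at 3 GB of GDDR5 per board (Nvidia's published M2050 spec, not stated in the text) the aggregate GPU memory works out close to the 12.7 TB cited. A minimal sketch of that arithmetic:

```python
# Consistency check on the Tsubame 2.0 GPU figures quoted above.
# Assumes 3 GB of GDDR5 per Tesla M2050 board (Nvidia's published spec).
nodes = 1400
gpus = 4200
gddr_per_gpu_gb = 3

print(gpus // nodes)                    # 3 GPUs per compute node
print(gpus * gddr_per_gpu_gb / 1000)    # 12.6 TB of GDDR5, close to the 12.7 TB cited
```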

Tsubame 2.5

Tsubame 2.0 was further upgraded to Tsubame 2.5 in 2013, replacing all of the Nvidia Tesla M2050 GPGPU compute modules with Nvidia Tesla K20X (Kepler) compute modules.[10] [11] This yielded 17.1 PFLOPS of peak single-precision performance.

Tsubame-KFC

Tsubame-KFC added oil-based immersion cooling to reduce power consumption.[12] [13] This allowed the system to achieve a then world-best efficiency of 4.5 gigaflops per watt, placing it first on the Green500 list.[14] [15] [16]
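
Green500-style efficiency is simply sustained Linpack performance divided by total power draw, so either quantity follows from the other. A minimal sketch of the conversion using the 4.5 gigaflops-per-watt figure; the power draw below is a purely hypothetical example value, not a published Tsubame-KFC measurement:

```python
# Green500 metric: efficiency (GFLOPS/W) = Linpack Rmax / power draw.
efficiency_gflops_per_watt = 4.5      # figure from the text
power_watts = 30_000                  # hypothetical example value only

rmax_tflops = efficiency_gflops_per_watt * power_watts / 1000
print(rmax_tflops)                    # 135.0 TFLOPS at that (hypothetical) power draw
```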

Tsubame 3.0

In February 2017, Tokyo Institute of Technology announced it would add a new system, Tsubame 3.0.[17] [18] Developed with SGI, it is focused on artificial intelligence and targets 12.2 PFLOPS of double-precision performance. The design is reported to use 2,160 Nvidia Tesla P100 GPGPU modules alongside Intel Xeon E5-2680 v4 processors.
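
Most of that double-precision target is attributable to the GPUs. Assuming Nvidia's published 5.3 TFLOPS FP64 peak for the Tesla P100 (SXM2 variant), a per-module figure not stated in the text, the 2,160 modules account for roughly 11.4 PFLOPS, with the Xeon hosts supplying the rest. A minimal sketch:

```python
# Rough breakdown of the 12.2 PFLOPS double-precision target quoted above.
# Assumes ~5.3 TFLOPS FP64 peak per Tesla P100 (SXM2), Nvidia's published figure;
# the remainder would come from the Xeon E5-2680 v4 host processors.
p100_modules = 2160
fp64_tflops_per_p100 = 5.3            # assumption, not from the text

gpu_pflops = p100_modules * fp64_tflops_per_p100 / 1000
print(round(gpu_pflops, 1))           # ~11.4 PFLOPS from the GPUs alone
```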

Tsubame 3.0 ranked 13th at 8,125 TFLOPS on the November 2017 TOP500 supercomputer ranking.[19] It ranked 1st on the June 2017 Green500 energy-efficiency ranking at 14.110 GFLOPS per watt.[20]

Notes and References

  1. Konishi, Toshiaki. "The world's first GPU supercomputer! Tokyo Institute of Technology TSUBAME 1.2 released". ASCII.jp, 3 December 2008. Retrieved 20 February 2017.
  2. Morgan, Timothy Prickett. "Tokyo Tech dumps Sun super iron for HP, NEC". The Register, 31 May 2010. Retrieved 20 February 2017.
  3. Endo, Toshio; Nukada, Akira; Matsuoka, Satoshi; Maruyama, Naoya. "Linpack Evaluation on a Supercomputer with Heterogeneous Accelerators". IPDPS 2010, pp. 1–8, May 2010. doi:10.1109/IPDPS.2010.5470353. ISBN 978-1-4244-6442-5. CiteSeerX 10.1.1.456.3880. S2CID 2215916.
  4. Takenouchi, Kensuke; Yokoi, Shintaro; Muroi, Chiashi; Ishida, Junichi; Aranami, Kohei. "Research on Computational Techniques for JMA's NWP Models". World Climate Research Program. Retrieved 20 February 2017.
  5. Tanabe, Noriyuki; Ichihashi, Yasuyuki; Nakayama, Hirotaka; Masuda, Nobuyuki; Ito, Tomoyoshi. "Speed-up of hologram generation using ClearSpeed Accelerator board". Computer Physics Communications, vol. 180, no. 10, October 2009, pp. 1870–1873. doi:10.1016/j.cpc.2009.06.001. Bibcode:2009CoPhC.180.1870T.
  6. "Acquisition of next-generation supercomputer by Tokyo Institute of Technology: NEC-HP consortium receives order". Global Scientific Information and Computing Center, Tokyo Institute of Technology. Retrieved 20 February 2017.
  7. "Tokyo Institute of Technology to Add CULA Library to TSUBAME 2.0". HPCwire, 5 May 2011. http://www.hpcwire.com/hpcwire/2011-05-05/tokyo_institute_of_technology_to_add_cula_library_to_tsubame_2_0.html
  8. Pan, Hui. "Research Initiatives with HP Servers". Gigabit/ATM Newsletter, December 2010, p. 11.
  9. Feldman, Michael. "The Second Coming of TSUBAME". HPCwire, 14 October 2010. Retrieved 20 February 2017.
  10. "TSUBAME 2.0 Upgraded to TSUBAME 2.5: Aiming Ever Higher". Tokyo Institute of Technology. Retrieved 20 February 2017.
  11. Brueckner, Rich. "Being Very Green with Tsubame 2.5 Towards 3.0 and Beyond to Exascale". Inside HPC, 14 January 2014. Retrieved 20 February 2017.
  12. Rath, John. "Tokyo's Tsubame-KFC Remains World's Most Energy Efficient Supercomputer". Data Center Knowledge, 2 July 2014. Retrieved 20 February 2017.
  13. Brueckner, Rich. "Green Revolution Cooling Helps Tsubame-KFC Supercomputer Top the Green500". Inside HPC, 2 December 2015. Retrieved 20 February 2017.
  14. "Heterogeneous Systems Dominate the Green500". HPCwire, 20 November 2013. Retrieved 28 December 2013.
  15. Millington, George. "Japan's Oil-Cooled "KFC" Tsubame Supercomputer May Be Headed for Green500 Greatness". Nvidia, 19 November 2013. Retrieved 28 December 2013.
  16. Rath, John. "Submerged Supercomputer Named World's Most Efficient System in Green 500". Data Center Knowledge, 21 November 2013. Retrieved 28 December 2013.
  17. Armasu, Lucian. "Nvidia To Power Japan's 'Fastest AI Supercomputer' This Summer". Tom's Hardware, 17 February 2017. Retrieved 20 February 2017.
  18. Morgan, Timothy Prickett. "Japan Keeps Accelerating With Tsubame 3.0 AI Supercomputer". The Next Platform, 17 February 2017. Retrieved 20 February 2017.
  19. "TOP500 List - November 2017". Retrieved 2 October 2018.
  20. "Green500 List for June 2017". Retrieved 2 October 2018.