The NEC SX-Aurora TSUBASA is a vector processor of the NEC SX architecture family. Unlike previous SX supercomputers, the SX-Aurora TSUBASA is provided as a PCIe card, termed by NEC a "Vector Engine" (VE). Up to eight VE cards can be inserted into a vector host (VH), which is typically an x86-64 server running the Linux operating system. The product was announced in a press release on 25 October 2017, and NEC started selling it in February 2018. It succeeds the SX-ACE.
The SX-Aurora TSUBASA is the successor to the NEC SX series of vector computer systems and their SUPER-UX operating system, upon which the Earth Simulator supercomputer is based. Its hardware consists of x86 Linux hosts with vector engines (VEs) attached via a PCI Express (PCIe) interconnect.
Its high memory bandwidth (0.75–1.2 TB/s) comes from six HBM2 memory modules placed together with the eight-core processor on a silicon interposer, implemented in the form factor of a PCIe card.[1] Operating system functionality for the VE is offloaded to the VH and handled mainly by the user-space daemons of VEOS.[2]
Depending on the clock frequency (1.4 or 1.6 GHz), each VE CPU has eight cores and a peak performance of 2.15 or 2.45 TFLOPS in double precision. The processor has the world's first implementation of six HBM2 modules on a silicon interposer, with a total of 24 or 48 GB of high bandwidth memory. It is integrated in the form factor of a standard full-length, full-height, double-width PCIe card that is hosted by an x86-64 server, the vector host (VH). A server can host up to eight VEs, and clusters of VHs can scale to an arbitrary number of nodes.[3]
Version 2 Vector Engine[4]
Clock speed (GHz) | 1.6 | 1.6
Number of cores | 10 | 8
Core peak performance (double precision, GFLOPS) | 307 | 307
Core peak performance (single precision, GFLOPS) | 614 | 614
CPU peak performance (double precision, TFLOPS) | 3.07 | 2.45
CPU peak performance (single precision, TFLOPS) | 6.14 | 4.91
Memory bandwidth (TB/s) | 1.53 | 1.53
Memory capacity (GB) | 48 | 48
Version 1 Vector Engine
Version 1.0 of the Vector Engine was produced in a 16 nm FinFET process from TSMC and released in three SKUs; subsequent revisions append an "E" to the model name:[5]
Clock speed (GHz) | 1.6 | 1.4 | 1.4 | 1.584 | 1.408 | 1.400
Number of cores | 8 | 8 | 8 | 8 | 8 | 8
Core peak performance (double precision, GFLOPS) | 307.2 | 268.8 | 268.8 | 304 | 270 | 268
Core peak performance (single precision, GFLOPS) | 614 | 537 | 537 | 608 | 540 | 537
CPU peak performance (double precision, TFLOPS) | 2.45 | 2.15 | 2.15 | 2.43 | 2.16 | 2.15
CPU peak performance (single precision, TFLOPS) | 4.9 | 4.3 | 4.3 | 4.86 | 4.32 | 4.30
Memory bandwidth (TB/s) | 1.2 | 1.2 | 0.75 | 1.35 | 1.35 | 1.00
Memory capacity (GB) | 48 | 48 | 24 | 48 | 48 | 24
Each of the eight SX-Aurora cores has 64 logical vector registers. Each register holds 256 elements of 64 bits and is processed by a combination of pipelining and 32-fold parallel SIMD units. The registers feed three fused multiply-add (FMA) floating-point units that can run in parallel, as well as two arithmetic logic units (ALUs) handling fixed-point operations and a divide/square-root pipe. Considering only the FMA units and their 32-fold SIMD parallelism, a vector core is capable of 192 double-precision operations per cycle (3 FMA units × 32 SIMD lanes × 2 operations per FMA). In "packed" vector operations, where two single-precision values occupy the slot of one double-precision element in the vector registers, the vector unit delivers twice as many operations per clock cycle as in double precision.
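The peak figures quoted above follow directly from these numbers. The snippet below is a back-of-the-envelope sketch (not NEC code) that reproduces the per-core and per-CPU double-precision peaks from the assumed parameters of 3 FMA units, 32 SIMD lanes, 2 operations per FMA and 8 cores:

```c
#include <stdio.h>

/* Illustrative calculation of SX-Aurora peak performance from the
 * figures given in the text: 3 FMA units, 32 SIMD lanes, 2 FLOP per
 * FMA, 8 cores.  Not NEC code, just arithmetic. */
int main(void)
{
    const double fma_units    = 3.0;
    const double simd_lanes   = 32.0;
    const double flop_per_fma = 2.0;   /* multiply + add */
    const double cores        = 8.0;
    const double clocks_ghz[] = { 1.4, 1.6 };

    for (int i = 0; i < 2; i++) {
        double per_core_gflops = fma_units * simd_lanes * flop_per_fma * clocks_ghz[i];
        double per_cpu_tflops  = per_core_gflops * cores / 1000.0;
        /* NEC's data sheets quote the 1.6 GHz result as 2.45 TFLOPS. */
        printf("%.3f GHz: %.1f GFLOPS/core, %.2f TFLOPS/CPU (double precision)\n",
               clocks_ghz[i], per_core_gflops, per_cpu_tflops);
    }
    return 0;
}
```

At 1.4 GHz this yields 268.8 GFLOPS per core and about 2.15 TFLOPS per CPU; at 1.6 GHz, 307.2 GFLOPS per core and about 2.46 TFLOPS, which NEC's data sheets quote as 2.45 TFLOPS.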
A Scalar Processing Unit (SPU) handles non-vector instructions on each of the cores.
The memory of the SX-Aurora TSUBASA processor consists of six second-generation high-bandwidth memory (HBM2) modules implemented in the same package as the CPU using Chip-on-Wafer-on-Substrate technology. Depending on the processor model, the HBM2 modules are 4-die or 8-die 3D stacks with a capacity of 4 or 8 GB each, giving the SX-Aurora CPUs a total of either 24 GB or 48 GB of HBM2 memory. The models with the large HBM2 modules reach 1.2 TB/s memory bandwidth.[6]
The cores of a vector engine share 16 MB of last-level cache (LLC), a write-back cache directly connected to the vector registers and to the L2 cache of the SPU. The LLC cache line size is 128 bytes. The priority of data retention in the LLC can to some extent be controlled in software, allowing the programmer to specify which variables or arrays should be retained in the cache, a feature comparable to the Advanced Data Buffer (ADB) of the NEC SX-ACE.
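As an illustration of that software control, the sketch below assumes the NEC C compiler (ncc) and a `_NEC retain` directive as described in NEC's tuning material; the directive name, placement and exact semantics are assumptions that should be checked against the current compiler documentation:

```c
/* Sketch of software-controlled LLC retention, assuming the NEC C
 * compiler (ncc) and its "_NEC" pragma family; the "retain" directive
 * used here is an assumption taken from NEC tuning material. */
void scale(double *restrict a, const double *restrict b,
           double factor, int n, int iters)
{
    /* Hypothetical hint: ask the compiler to keep "b" resident in the
     * 16 MB last-level cache across the repeated sweeps below. */
#pragma _NEC retain(b)
    for (int it = 0; it < iters; it++)
        for (int i = 0; i < n; i++)
            a[i] += factor * b[i];
}
```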
NEC currently sells the SX-Aurora TSUBASA vector engine integrated into four platforms:[7][5]
Within a VH node, VEs can communicate with each other through PCIe. Large parallel systems built with the SX-Aurora use InfiniBand with PeerDirect as the interconnect.
NEC also used to sell the SX-Aurora TSUBASA vector engine integrated into five platforms:
All models are air cooled, with the exception of the A500 series, which additionally uses water cooling.
The operating system of the vector engine (VE) is called "VEOS" and is offloaded entirely to the host system, the vector host (VH).[9] VEOS consists of kernel modules and user-space daemons that:
VEOS supports multitasking on the VE, and almost all Linux system calls are supported in the VE libc.[10] Offloading operating system services to the VH shifts OS jitter away from the VE at the expense of increased latencies.[10] All VE operating-system-related packages are licensed under the GNU General Public License and have been published at .
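This means an ordinary C program can run on the VE largely unchanged. In the hypothetical example below (the compiler name ncc is taken from NEC's SDK; the exact build and launch commands depend on the installation), the I/O and process-related system calls are served on the VH by the VEOS daemons:

```c
/* Ordinary C program; when compiled for the VE (e.g. with NEC's "ncc"
 * compiler, assumed here), the write() behind printf() and the
 * getpid() call below are forwarded to and served by VEOS on the
 * vector host. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("Hello from VE process %d\n", (int)getpid());
    return 0;
}
```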
A Software Development Kit (SDK) is available from NEC for developers and customers. It consists of proprietary products and must be purchased from NEC. The SDK contains:
NEC MPI is also a proprietary implementation and conforms to the MPI-3.1 standard specification.[14]
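Because NEC MPI follows the MPI-3.1 specification, portable MPI code can be reused unchanged. The example below uses only standard MPI calls; the compiler wrapper and launcher named in the comments (mpincc, mpirun) are assumptions based on NEC MPI documentation and should be verified locally:

```c
/* Standard MPI-3.1 program; nothing here is VE-specific.  With NEC MPI
 * it would typically be built with a compiler wrapper (assumed here to
 * be "mpincc") and started with mpirun across one or more VEs. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```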
Hybrid programs can be created that use the VE as an accelerator, offloading selected kernel functions from the host through the VE Offloading (VEO) C-API.[15] To some extent, VE offloading is comparable to OpenCL and CUDA, but it provides a simpler API and allows the kernels to be developed in plain C, C++ or Fortran and to use almost any syscall on the VE. Python bindings to VEO are available at .
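A minimal host-side sketch of this model is shown below. It follows the publicly documented VEO C-API, but the exact function signatures should be verified against the installed ve_offload.h header; the VE kernel library libvesum.so and its function sum are hypothetical and would be compiled separately for the VE:

```c
/* Host-side sketch of VE offloading via the VEO C-API.  Function names
 * follow NEC's veoffload documentation; "libvesum.so" and its function
 * "sum" are hypothetical and built separately with the VE compilers. */
#include <ve_offload.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Start a VEO process on VE node 0 and load the VE kernel library. */
    struct veo_proc_handle *proc = veo_proc_create(0);
    uint64_t lib = veo_load_library(proc, "./libvesum.so");
    struct veo_thr_ctxt *ctx = veo_context_open(proc);

    /* Pass two integer arguments to the VE function "sum". */
    struct veo_args *args = veo_args_alloc();
    veo_args_set_i64(args, 0, 20);
    veo_args_set_i64(args, 1, 22);

    /* Asynchronous call; wait for the 64-bit return value. */
    uint64_t req = veo_call_async_by_name(ctx, lib, "sum", args);
    uint64_t result = 0;
    veo_call_wait_result(ctx, req, &result);
    printf("sum = %lu\n", (unsigned long)result);

    veo_args_free(args);
    veo_context_close(ctx);
    veo_proc_destroy(proc);
    return 0;
}
```

The design mirrors the host/device split of OpenCL and CUDA: the VE kernel is built as a shared object with the VE compilers and loaded at run time, while arguments and results are marshalled through the argument object and request handles.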
Area | Functionality | NLC¹ | MKL | CUDA
---|---|---|---|---
Linear Algebra | Dense Matrix | ✓ | ✓ | ✓
Linear Algebra | Sparse Matrix | ✓ | ✓ | ✓
Function Transform | Fourier | ✓ | ✓ | ✓
Function Transform | Real-to-Real (DCT, …) | ✓ | ✓ |
Function Transform | Laplace, Wavelet, … | ✓ | |
Statistics | Random Number Generator | ✓ | ✓ w/o MPI | ✓ w/o MPI
Statistics | Multivariate, Regression, … | ✓ | |
Other | Sorting | ✓ | |
Other | Special Functions | ✓ | |
Other | Integrals, Derivatives, … | ✓ | |
Other | Stencil Code | ✓ | |
Other | Deep Learning | ✗ (planned) | ✓ | ✓
¹ NEC Numerical Library Collection is a collection of mathematical libraries that supports the development of numerical simulation programs.