GeForce 20 series | |
Discontinued: | [1] |
Manufacturer: | TSMC |
Design firm: | Nvidia
Marketed by: | Nvidia
Codename: | TU10x |
Architecture: | Turing |
Model: | GeForce RTX series |
Fab: | TSMC 12 nm (FinFET) |
Transistors: | 10.8B (TU106), 13.6B (TU104), 18.6B (TU102)
Direct3D: | Direct3D 12 Ultimate (feature level 12_2), Shader Model 6.8
OpenCL: | OpenCL 3.0[2]
OpenGL: | OpenGL 4.6[3]
Vulkan: | Vulkan 1.3[4]
Predecessor: | GeForce 10 series |
Variant: | GeForce 16 series |
Successor: | GeForce 30 series |
Support Status: | Supported |
The GeForce 20 series is a family of graphics processing units developed by Nvidia.[5] Serving as the successor to the GeForce 10 series,[6] the line started shipping on September 20, 2018;[7] after several editions, the GeForce RTX Super line of cards was announced on July 2, 2019.
The 20 series marked the introduction of Nvidia's Turing microarchitecture, and the first generation of RTX cards, the first in the industry to implement hardware-enabled real-time ray tracing in a consumer product. In a departure from Nvidia's usual strategy, the 20 series has no entry-level range, leaving it to the 16 series to cover this segment of the market.[8]
These cards are succeeded by the GeForce 30 series, powered by the Ampere microarchitecture, which first launched in 2020.[9]
On August 14, 2018, Nvidia teased the announcement of the first card in the 20 series, the GeForce RTX 2080, shortly after introducing the Turing architecture at SIGGRAPH earlier that year.[10] The GeForce 20 series was finally announced at Gamescom on August 20, 2018,[5] becoming the first line of graphics cards "designed to handle real-time ray tracing" thanks to the "inclusion of dedicated tensor and RT cores."[11]
In August 2018, it was reported that Nvidia had trademarked GeForce RTX and Quadro RTX as names.[12]
The line started shipping on September 20, 2018.[7] Serving as the successor to the GeForce 10 series,[6] the 20 series marked the introduction of Nvidia's Turing microarchitecture and the first generation of RTX cards, the first in the industry to implement real-time hardware ray tracing in a consumer product.[13]
Released in late 2018, the RTX 2080 was marketed as up to 75% faster than the GTX 1080 in various games;[14] PC Gamer described the chip as "the most significant generational upgrade to its GPUs since the first CUDA cores in 2006."[15]
After the initial release, factory-overclocked versions followed in late 2018.[16] The first was the "Ti" edition,[17] while the Founders Edition cards were overclocked by default and carried a three-year warranty.[14] When the GeForce RTX 2080 Ti came out, TechRadar called it "the world’s most powerful GPU on the market."[18] PC Gamer reviewed the GeForce RTX 2080 Founders Edition positively for performance on September 19, 2018, but criticized its high cost to consumers,[19] noting that its ray tracing feature wasn't yet utilized by many programs or games.[20] In January 2019, Tom's Hardware also stated the GeForce RTX 2080 Ti Xtreme was "the fastest gaming graphics card available," although it criticized the loudness of the cooling solution and the card's size and heat output in PC cases.[21] In August 2018, the company claimed that the GeForce RTX graphics cards were the "world’s first graphics cards to feature super-fast GDDR6 memory, a new DisplayPort 1.4 output that can drive up to 8K HDR at 60Hz on future-generation monitors with just a single cable, and a USB Type-C output for next-generation Virtual Reality headsets."[22]
In October 2018, PC Gamer reported the supply of the 2080 Ti card was "extremely tight" after availability had already been delayed.[23] By November 2018, MSI was offering nine different RTX 2080-based graphics cards.[24] Released in December 2018, the line's Titan RTX was initially priced at $2500, significantly more than the $1300 then needed for a GeForce RTX 2080 Ti.[25]
In January 2019, Nvidia announced that GeForce RTX graphics cards would be used in 40 new laptops from various companies.[26] Also that month, in response to negative reactions to the pricing of the GeForce RTX cards, Nvidia CEO Jensen Huang stated "They were right. [We] were anxious to get RTX in the mainstream market... We just weren’t ready. Now we’re ready, and it’s called 2060," in reference to the RTX 2060.[27] In May 2019, a TechSpot review noted that the newly released Radeon VII by AMD was comparable in speeds to the GeForce RTX 2080, if slightly slower in games, with both priced similarly and framed as direct competitors.[28]
On July 2, 2019, the GeForce RTX Super line of cards was announced, comprising higher-spec versions of the 2060, 2070 and 2080. Each of the Super models was offered at a similar price to the older models but with improved specs.[29] In July 2019, Nvidia stated that the upcoming "SUPER" graphics cards in the GeForce RTX 20 series had a 15% performance advantage over the GeForce RTX 2060.[30] PC World called the Super editions a "modest" upgrade for the price, and the 2080 Super chip the "second most-powerful GPU ever released" in terms of speed.[31] In November 2019, PC Gamer wrote "even without an overclock, the 2080 Ti is the best graphics card for gaming."[32] In June 2020, PC Mag listed the Nvidia GeForce RTX 2070 Super as one of the eight "best graphics cards for 4K gaming in 2020"; the GeForce RTX 2080 Founders Edition, Super, and Ti were also listed.[33] In June 2020, graphics cards including the RTX 2060, RTX 2060 Super, RTX 2070 and the RTX 2080 Super were discounted by retailers in anticipation of the GeForce RTX 3080 launch.[34] In April 2020, Nvidia announced 100 new laptops licensed to include either GeForce GTX or RTX models.[35]
Due to production problems surrounding the RTX 30-series cards, a global semiconductor shortage stemming from production issues caused by the ongoing COVID-19 pandemic, and increased demand for graphics cards driven by a rise in cryptocurrency mining, the RTX 2060 and its Super counterpart, alongside the GTX 1050 Ti,[36] were brought back into production in 2021.[37] [38]
Furthermore, the RTX 2060 was reissued on December 7, 2021, as a variant with 12 GB of VRAM.[39] [40] However, availability of the card at launch was scarce.[41] [42]
See also: Turing (microarchitecture) and Ray-tracing hardware. The RTX 20 series is based on the Turing microarchitecture and features real-time hardware ray tracing.[43] The cards are manufactured on an optimized 14 nm node from TSMC, named 12 nm FinFET NVIDIA (FFN).[44] New features in Turing included mesh shaders,[45] ray tracing (RT) cores (bounding volume hierarchy acceleration),[46] tensor (AI) cores,[11] and dedicated integer (INT) cores for concurrent execution of integer and floating-point operations.[47] In the GeForce 20 series, real-time ray tracing is accelerated by the new RT cores, which are designed to process quadtrees and spherical hierarchies and to speed up collision tests with individual triangles.
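The core operation such RT cores accelerate is testing a ray for intersection against the individual triangles reached by traversing the hierarchy. As a rough software illustration only (a standard Möller–Trumbore test, not Nvidia's hardware implementation), a minimal C++ sketch of this kind of collision test looks like the following:

```cpp
// Minimal sketch of a ray-triangle intersection test (Moller-Trumbore).
// Illustrates the kind of collision test the RT cores accelerate in
// hardware; this is a generic software version, not Nvidia's design.
#include <array>
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                             a.z * b.x - a.x * b.z,
                                             a.x * b.y - a.y * b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the distance t along the ray at which it hits the triangle,
// or std::nullopt if the ray misses.
std::optional<float> intersect(Vec3 origin, Vec3 dir, const std::array<Vec3, 3>& tri) {
    const float kEps = 1e-7f;
    Vec3 e1 = sub(tri[1], tri[0]);
    Vec3 e2 = sub(tri[2], tri[0]);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return std::nullopt;    // ray parallel to triangle plane
    float inv = 1.0f / det;
    Vec3 s = sub(origin, tri[0]);
    float u = dot(s, p) * inv;                         // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return std::nullopt;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;                       // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    float t = dot(e2, q) * inv;                        // hit distance along the ray
    if (t > kEps) return t;
    return std::nullopt;
}
```

In a renderer this test runs enormous numbers of times per frame, which is why dedicating fixed-function hardware to it, and to traversing the hierarchy that narrows down candidate triangles, is the source of the speed-up attributed to the RT cores.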
The ray tracing performed by the RT cores can be used to produce effects such as reflections, refractions, shadows, depth of field, light scattering and caustics, replacing traditional raster techniques such as cube maps and depth maps. Instead of replacing rasterization entirely, however, ray tracing is offered in a hybrid model, in which the information gathered from ray tracing can be used to augment the rasterized shading for more photo-realistic results.
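As a loose illustration of this hybrid model (the function, its inputs and the simple linear blend below are hypothetical, not an actual RTX shading pipeline), a ray-traced reflection term can be folded into the rasterized result rather than replacing it:

```cpp
// Toy sketch of hybrid shading: the rasterized colour is augmented with a
// ray-traced reflection term. The linear blend and parameter names are
// illustrative only.
struct Color { float r, g, b; };

Color shade_hybrid(Color rasterized, Color rayTracedReflection, float reflectivity) {
    // reflectivity in [0, 1]: 0 keeps the pure raster result,
    // 1 fully replaces it with the ray-traced reflection.
    return {
        rasterized.r + reflectivity * (rayTracedReflection.r - rasterized.r),
        rasterized.g + reflectivity * (rayTracedReflection.g - rasterized.g),
        rasterized.b + reflectivity * (rayTracedReflection.b - rasterized.b),
    };
}
```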
The second-generation Tensor cores (succeeding Volta's) work in cooperation with the RT cores, and their AI features are used mainly to two ends: first, de-noising a partially ray-traced image by filling in the blanks between rays cast; second, DLSS (deep learning super sampling), a new method to replace anti-aliasing that artificially generates detail to upscale the rendered image to a higher resolution.[48] The Tensor cores apply deep learning models (for example, an image resolution enhancement model) that are constructed using supercomputers: the problem to be solved is analyzed on the supercomputer, which is taught by example what results are desired, and the resulting model is then executed on the consumer's Tensor cores. These models are delivered to consumers as part of the cards' drivers.
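The "filling in the blanks" step can be pictured with the toy sketch below, which simply averages neighbouring ray samples; the actual Tensor-core denoiser and DLSS instead run trained neural networks shipped with the drivers, so this is only a conceptual stand-in:

```cpp
// Conceptual stand-in for denoising a sparsely ray-traced image: pixels with
// no ray sample are filled from sampled neighbours. Real Tensor-core
// denoising uses trained neural networks, not this simple filter.
#include <optional>
#include <vector>

// nullopt marks a pixel that received no ray sample.
using Image = std::vector<std::vector<std::optional<float>>>;

float fill_pixel(const Image& img, int y, int x) {
    if (img[y][x]) return *img[y][x];                  // already sampled
    float sum = 0.0f;
    int   count = 0;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            int ny = y + dy, nx = x + dx;
            if (ny < 0 || nx < 0 ||
                ny >= static_cast<int>(img.size()) ||
                nx >= static_cast<int>(img[ny].size())) continue;
            if (img[ny][nx]) { sum += *img[ny][nx]; ++count; }
        }
    }
    return count ? sum / count : 0.0f;                 // no sampled neighbour: black
}
```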
Nvidia segregates the Turing GPU dies into A and non-A variants, indicated by an "A" suffix appended to (or omitted from) the three-digit part of the GPU code name (for example, TU104-400A versus TU104-400). Non-A variants are not allowed to be factory overclocked, whilst A variants are.[49]
The GeForce 20 series was launched with GDDR6 memory chips from Micron Technology. However, due to reported faults with launch models, Nvidia switched to using GDDR6 memory chips from Samsung Electronics by November 2018.[50]
See main article: Nvidia RTX. With the GeForce 20 series, Nvidia introduced the RTX development platform. RTX uses Microsoft's DXR, Nvidia's OptiX, and Vulkan for access to ray tracing.[51] The ray tracing technology used in the RTX Turing GPUs was in development at Nvidia for 10 years.[52] Nvidia's Nsight Visual Studio Edition application is used to inspect the state of the GPUs.[53]
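As a minimal sketch of the Vulkan path (this only checks for the standard Khronos ray tracing extensions and assumes a Vulkan SDK and driver are installed; error handling is omitted, and it is not an RTX-specific API), an application can ask whether a GPU exposes ray tracing support like this:

```cpp
// Minimal sketch: check whether the first Vulkan physical device exposes the
// Khronos ray tracing extensions used on the Vulkan path. Error handling is
// omitted for brevity.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    VkInstanceCreateInfo ci{};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

    uint32_t devCount = 0;
    vkEnumeratePhysicalDevices(instance, &devCount, nullptr);
    if (devCount == 0) return 1;
    std::vector<VkPhysicalDevice> devices(devCount);
    vkEnumeratePhysicalDevices(instance, &devCount, devices.data());

    uint32_t extCount = 0;
    vkEnumerateDeviceExtensionProperties(devices[0], nullptr, &extCount, nullptr);
    std::vector<VkExtensionProperties> exts(extCount);
    vkEnumerateDeviceExtensionProperties(devices[0], nullptr, &extCount, exts.data());

    bool rayPipeline = false, accelStruct = false;
    for (const auto& e : exts) {
        if (std::strcmp(e.extensionName, "VK_KHR_ray_tracing_pipeline") == 0) rayPipeline = true;
        if (std::strcmp(e.extensionName, "VK_KHR_acceleration_structure") == 0) accelStruct = true;
    }
    std::printf("ray tracing pipeline: %s, acceleration structure: %s\n",
                rayPipeline ? "yes" : "no", accelStruct ? "yes" : "no");
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```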
All of the cards in the series are manufactured on TSMC's 12 nm FinFET process, connect to the CPU through a PCIe 3.0 x16 interface, and use GDDR6 memory (initially Micron modules at launch, and later Samsung modules from November 2018).[50]
Model | Launch MSRP (USD) | Code name(s)[54] | Transistors (billion) | Die size (mm²) | Core config | SM count | L2 cache | Clock speeds | Fillrate | Memory | Processing power (TFLOPS) | Ray tracing performance | TDP | NVLink support | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Core (MHz) | Memory (MT/s) | Pixel (GP/s) | Texture (GT/s) | Size | Bandwidth (GB/s) | Bus width | Half precision (boost) | Single precision (boost) | Double precision (boost) | Rays/s (billions) | RTX-OPS (trillions) | Tensor TFLOPS | |||||||||||
2060[55] [56] | $349 | 10.8 | 445 | 1920 120:48:30:240 | 30 | 3MB | 1365 (1680) | 14000 | 65.52 | 163.8 | 6GB | 336 | 192-bit | 10.483 (12.902) | 5.242 (6.451) | 0.164 (0.202) | 5 | 37 | 51.6 | 160W | No | |
$300 | TU104-150 | 13.6 | 545 | ||||||||||||||||||||
2060 (12 GB)[57] | TU106-300 | 10.8 | 445 | 2176 136:48:34:272 | 34 | 1470 (1650) | 79.2 | 224.4 | 12GB | 12.246 (14.362) | 6.123 (7.181) | 0.191 (0.224) | 6 | 41 | 57.4 | 185W | |||||||
2060 Super[58] [59] | $399 | TU106-410 | 2176 136:64:34:272 | 4MB | 94.05 | 199.9 | 8GB | 448 | 256-bit | 175W | |||||||||||||
2070[60] | $499 | TU106-400 | 2304 144:64:36:288 | 36 | 1410 (1620) | 90.24 | 203.04 | 12.994 (14.930) | 6.497 (7.465) | 0.203 (0.233) | 45 | 59.7 | |||||||||||
$599 | TU106-400A | ||||||||||||||||||||||
2070 Super | $499 | TU104-410 | 13.6 | 545 | 2560 160:64:40:320 | 40 | 1605 (1770) | 102.72 | 256.8 | 16.435 (18.125) | 8.218 (9.062) | 0.257 (0.283) | 7 | 52 | 72.5 | 215W | 2-way | ||||||
2080[61] | $699 | TU104-400 | 2944 184:64:46:368 | 46 | 1515 (1710) | 96.96 | 278.76 | 17.840 (20.137) | 8.920 (10.068) | 0.279 (0.315) | 8 | 60 | 80.5 | ||||||||||
$799 | TU104-400A | ||||||||||||||||||||||
2080 Super | $699 | TU104-450 | 3072 192:64:48:384 | 48 | 1650 (1815) | 15500 | 105.6 | 316.8 | 496 | 20.275 (22.303) | 10.138 (11.151) | 0.317 (0.349) | 63 | 89.2 | 250W | ||||||||
2080 Ti[62] | $999 | TU102-300 | 18.6 | 754 | 4352 272:88:68:544 | 68 | 5.5MB | 1350 (1545) | 14000 | 118.8 | 367.2 | 11 GB | 616 | 352-bit | 23.500 (26.896) | 11.750 (13.448) | 0.367 (0.421) | 10 | 78 | 107.6 | |||
$1199 | TU102-300A | ||||||||||||||||||||||
Nvidia Titan RTX[63] | $2499 | TU102-400 | 4608 288:96:72:576 | 72 | 6MB | 1350 (1770) | 129.6 | 388.8 | 24GB | 672 | 384-bit | 24.884 (32.625) | 12.442 (16.312) | 0.389 (0.510) | 11 | 84 | 130.5 | 280W |
Model | Code name(s) | Transistors (billion) | Die size (mm²) | Core config | SM count | L2 cache | Clock speeds | Fillrate | Memory | Processing power (TFLOPS) | Ray tracing performance | TDP | |||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Core (MHz) | Memory (MT/s) | Pixel (GP/s) | Texture (GT/s) | Size | Bandwidth (GB/s) | Bus width | Half precision (boost) | Single precision (boost) | Double precision (boost) | Rays/s (billions) | RTX-OPS (trillions) | |||||||||
2050[64] [65] [66] [67] | GA107 (GN20-S7) | 2048 64:32:32:256 | 16 | 2MB | 1155 (1477) | 14000 | 47.26 | 94.53 | 4GB | 112.0 | 64-bit | (12.10) | (6.050) | (0.189) | 30–45W | | | | |
2060 Max-Q[68] | TU106 (N18E-G1) | 10.8 | 445 | 1920 120:48:30:240 | 30 | 3MB | 975 (1175) | 11000 | 56.88 | 142.2 | 6GB | 264.0 | 192-bit | (9.101) | (4.550) | (0.142) | 65W | |||
2060[69] | 960 (1200) | 14000 | 57.60 | 144.0 | 336.0 | (9.216) | (4.608) | (0.144) | 3.5 | 26 | 80–90W |||
[70] | TU106 (N18E-G1-B) | 115W | ||||||||||||||||||
2070 Max-Q[71] | TU106 (N18E-G2) | 2304 144:64:36:288 | 36 | 4MB | 885 (1185) | 12000 | 75.84 | 170.6 | 8GB | 384.0 | 256-bit | (10.92) | (5.460) | (0.171) | 4 | 31 | 80W | |||
2070[72] | 1215 (1440) | 14000 | 92.16 | 207.4 | 448.0 | (13.27) | (6.636) | (0.207) | 5 | 38 | 115W | |||||||||
TU106 (N18E-G1R) | 1305 (1485) | |||||||||||||||||||
2070 Super Max-Q[73] | TU104 | 13.6 | 545 | 2560 160:64:40:320 | 40 | 930 (1155) | 12000 | 69.1 | 172.8 | 352.0 | (11.06) | (5.530) | (0.173) | 4 | 34 | 80W | ||||
2070 Super[74] | 1140 (1380) | 14000 | 88.3 | 220.8 | 448.0 | (14.13) | (7.066) | (0.221) | 5 | 40 | 115W | |||||||||
2080 Max-Q[75] | TU104 (N18E-G3) | 2944 184:64:46:368 | 46 | 735 (1095) | 12000 | 70.08 | 201.5 | 384.0 | (12.89) | (6.447) | (0.202) | 5 | 37 | 80W | ||||||
2080[76] | 1380 (1590) | 14000 | 101.8 | 292.6 | 448.0 | (18.72) | (9.362) | (0.293) | 7 | 53 | 150+W | |||||||||
2080 Super Max-Q[77] | TU104 (N18E-G3R) | 3072 192:64:48:384 | 48 | 735 (1080) | 11000 | 69.1 | 207.4 | 352.0 | (13.27) | (6.636) | (0.207) | 5 | 38 | 80W | ||||||
2080 Super[78] | 1365 (1560) | 14000 | 99.8 | 299.5 | 448.0 | (19.17) | (9.585) | (0.300) | 7 | 55 | 150+W |