All eyes are going to be on GDDR6 memory soon, with the upcoming generations of graphics cards almost certain to use the new video memory standard. Mass production of GDDR6 memory is already in full swing, and it’s anticipated that new GPUs from both Nvidia and AMD could be ready to ship later in 2018.
Micron is one of the leading manufacturers of GDDR6 memory, and it has published a research paper covering the capabilities of the new standard. GDDR6 is an evolution of the technology used for GDDR5 and GDDR5X, combined with some deeper architectural tweaks. This has allowed the likes of Micron to apply a small boost to GDDR6's I/O supply voltage, pushing speeds as high as 20Gb/s while keeping power draw low.
The current limit of GDDR6 is thought to be around 16.5Gb/s, but Micron's research paper demonstrates that even a minor voltage bump can increase I/O throughput by more than 20%.
“While the preceding results demonstrate full DRAM functionality up to as high as 16.5Gb/s, it is possible for the overall performance of an architecture to be capped by timing limitations in the memory array itself. To determine if this GDDR6 interface could extend beyond the 16.5Gb/s range, the device was placed into a mode of operation which exercises only the I/O while bypassing the memory array.
“The oscilloscope measurement presented in Fig.15 confirms that when bypassing the memory array, and with a small, but helpful, boost in I/O supply voltage, it is possible to push Micron Technology, Inc.’s GDDR6 I/O as high as 20Gb/s.”
The current GDDR5X standard has a transfer rate of 10-14Gb/s, while GDDR5 runs at 5-7Gb/s, so this would represent a very impressive jump in performance. Combine that with the apparent affordability of GDDR6, made possible because it has been designed using mature, iterative technology, and GDDR6 looks like a far stronger option than HBM2.
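To put those per-pin rates in perspective, a card's total memory bandwidth is simply the per-pin data rate multiplied by the bus width in bits, divided by eight to convert to bytes. A quick sketch of that arithmetic, assuming a hypothetical 256-bit bus (the bus width is an illustrative assumption, not a figure from Micron's paper):

```python
def bandwidth_gbps(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Total memory bandwidth in GB/s: per-pin rate (Gb/s) x bus width / 8 bits per byte."""
    return per_pin_gbps * bus_width_bits / 8

# Hypothetical 256-bit bus, used purely for comparison
BUS_WIDTH = 256

for name, rate in [("GDDR5 @ 7Gb/s", 7),
                   ("GDDR5X @ 14Gb/s", 14),
                   ("GDDR6 @ 16.5Gb/s", 16.5),
                   ("GDDR6 I/O @ 20Gb/s", 20)]:
    print(f"{name}: {bandwidth_gbps(rate, BUS_WIDTH):.0f} GB/s")
# GDDR5 @ 7Gb/s: 224 GB/s
# GDDR5X @ 14Gb/s: 448 GB/s
# GDDR6 @ 16.5Gb/s: 528 GB/s
# GDDR6 I/O @ 20Gb/s: 640 GB/s
```

On that assumed bus width, moving from 14Gb/s GDDR5X to 20Gb/s GDDR6 I/O would mean nearly 200GB/s of extra bandwidth.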
Nvidia has yet to announce the GeForce GTX 1170 or GeForce GTX 1180, but it's pretty safe to assume they'll use this new GDDR6 VRAM. Certainly, for those looking to game at 4K, memory bandwidth concerns could quickly become a thing of the past.