Samsung Introduces 16Gbps GDDR6 Memory, 2GB Capacities

Samsung has announced that it’s bringing new, advanced GDDR6 modules to market with higher capacities and clock rates. It’s a step forward for the state of GDDR memory as a whole, with higher data transfer rates (up to 16Gbps per pin) and larger capacities (16Gb, or 2GB per die). In other words, a graphics card could now field 8GB of RAM in just four GDDR6 chips while maintaining a respectable 256GB/s of memory bandwidth. Obviously, lower-capacity dies will be available if companies want to deploy more memory channels with less VRAM per channel, but the headline figures for the new standard are impressive.
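For those curious how that math works out, here’s a quick back-of-the-envelope sketch. It assumes each GDDR6 chip presents a 32-bit interface, the usual width for GDDR-class memory; Samsung’s announcement doesn’t spell that part out:

    # Sanity-check of the 256GB/s figure above.
    PIN_SPEED_GBPS = 16   # per-pin data rate, per Samsung's announcement
    BUS_WIDTH_BITS = 32   # assumed interface width per GDDR6 chip
    CHIPS = 4             # four 16Gb (2GB) chips -> 8GB total

    per_chip_gb_s = PIN_SPEED_GBPS * BUS_WIDTH_BITS / 8  # 64 GB/s per chip
    total_gb_s = per_chip_gb_s * CHIPS                   # 256 GB/s across four chips
    print(f"{per_chip_gb_s:.0f} GB/s per chip, {total_gb_s:.0f} GB/s total")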

The chart below is from Micron, but it illustrates the differences between the GDDR5X and GDDR6 standards. Power consumption may decrease a bit thanks to newer process nodes, and GDDR6 uses a different channel configuration (two independent 16-bit channels per chip rather than a single 32-bit channel), but the overall bandwidth improvement should be an evolutionary gain.

Chart: GDDR5X vs. GDDR6 (Micron)

The funny thing is, once upon a time it wasn’t clear if GDDR6 would come to market at all, at least as a major GPU solution. HBM was supposed to scale quickly into HBM2, and HBM2 was supposed to deploy in major markets relatively quickly. It wasn’t all that unusual for AMD to be the only company shipping first-generation HBM; AMD and Nvidia had split similarly over GDDR4 a decade earlier, with AMD adopting that technology and Nvidia sticking with GDDR3. Nvidia’s decision to use GDDR5X as a stopgap raised a few eyebrows, but HBM2 still seemed to be the better long-term technology, especially once Nvidia deployed it first in its high-end GPUs and AMD followed suit with Vega.

But there have been consistent rumors all year that HBM2’s manufacturing difficulties are causing problems for everyone who adopts the tech, not just AMD. A report from Semiengineering suggests some reasons why. While the HBM2 data bus is 1,024 bits wide, that figure covers only the data transmission lines. Factor power, ground, and other signaling into the mix, and it’s more like 1,700 wires running to each HBM2 stack. The interposer isn’t necessarily difficult to manufacture, but designing it is still challenging: companies have to balance signal length, power consumption, and crosstalk. Managing heat flow in HBM stacks is also hard. Because each die sits on top of another, the lower dies can become extremely hot, since their heat has to radiate upward through the rest of the memory stack.
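The wire math is worth spelling out. In the sketch below, the 1,024 data lines and the roughly 1,700-wire total come from the report; lumping everything else into a single overhead figure is our simplification:

    # Rough wire accounting for a single HBM2 stack.
    DATA_WIRES = 1024    # HBM2's 1,024-bit data bus
    TOTAL_WIRES = 1700   # approximate total connections per stack, per the report

    overhead = TOTAL_WIRES - DATA_WIRES  # ~676 power/ground/signaling wires
    print(f"~{overhead} non-data wires per stack, "
          f"{overhead / TOTAL_WIRES:.0%} of all connections")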

Samsung’s diagram for its own HBM2 design.

HBM2 may well remain in the upper end of the product stack, but so far no one has had much luck bringing it down market, or even ensuring easy deployment at the high end. That’s not because the technology is intrinsically bad (AMD’s Vega gets some very real thermal headroom from HBM2), but a technology that can’t scale easily into lower-end markets is fundamentally limited in its appeal. Our argument is simple: If Nvidia deploys GDDR6 in its next generation of high-end cards and AMD makes a similar move with whatever follows the Polaris family (RX 560-RX 580), it’ll be a sign that both companies are still struggling to bring HBM2 to market in volume.
