Are you curious about HBM2 vs. HBM3 and the main differences between them? You are in the right place to find the answer.
The new generation of HBM is here, and the HBM2 vs. HBM3 battle is on. If you want to know more about these two memory revisions, I invite you to keep reading this article.
HBM2 vs HBM3 (High-Bandwidth Memory)
In this article we will look at what HBM memory is, and we will also analyze what is new in HBM3 and its advantages over HBM2.
What is HBM Memory?
As its name suggests, HBM memory offers high memory bandwidth. It uses a very wide interface architecture to achieve high-speed, low-power operation.
HBM DRAM is tightly coupled to the host compute die via a distributed interface. Typically, an HBM memory stack is made up of four DRAM dies stacked on a single base die.
A four-die (4-Hi) high-bandwidth memory stack exposes two 128-bit channels per die, for a total of eight channels and a 1024-bit-wide interface.
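As a quick sanity check on that arithmetic, here is a minimal Python sketch; the constants are simply the figures quoted above, not values taken from any vendor datasheet:

```python
# Quick check of the stack-interface arithmetic described above.
DIES_PER_STACK = 4       # a 4-Hi stack
CHANNELS_PER_DIE = 2     # two channels per DRAM die
BITS_PER_CHANNEL = 128   # 2 x 128 bits = 256 bits contributed per die

channels = DIES_PER_STACK * CHANNELS_PER_DIE   # -> 8 channels
width = channels * BITS_PER_CHANNEL            # -> 1024 bits
print(f"{channels} channels, {width}-bit interface")
```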
High Bandwidth Memory (HBM) chips have a small footprint compared to the Graphics Double Data Rate (GDDR) memory they were designed to compete with.
High-bandwidth memory (HBM) is a high-speed memory interface for stacked synchronous dynamic random-access memory (SDRAM), initially developed by Samsung, AMD, and SK Hynix.
At Hot Chips in August 2016, both Samsung and SK Hynix announced the next generation of High Bandwidth Memory technology.
To realize its HBM vision, AMD brought in partners from the memory industry, most notably the Korean firm SK Hynix, which had prior experience in 3D-stacked memory, as well as partners from the interposer industry (Taiwanese firm UMC) and the packaging industry (Amkor Technology and ASE).
HBM stands for High Bandwidth Memory, and it is the memory interface used with the 3D-stacked Dynamic Random Access Memory (DRAM) found in some graphics processing units (GPUs), such as graphics cards from AMD, as well as in the server, high-performance computing (HPC), networking, and client spaces.
Ultimately, HBM is designed to provide much higher bandwidth and lower power consumption than the GDDR memory used in most of today's gaming graphics cards, to say nothing of conventional DDR RAM.
HBM memory is superior to the previously used GDDR5 in terms of performance and power efficiency, which has opened up a growing market for high-bandwidth memory.
Its high bandwidth and low latency make HBM well suited as GPU memory, since gaming and graphics workloads are highly predictable and highly concurrent.
In short, high memory bandwidth is critical to keeping the many devices in a system, and the many compute units within each processor, supplied with data.
Even with the relatively high rates of DDR and GDDR, many AI algorithms and neural networks repeatedly run into memory bandwidth limitations.
Gamers should know that HBM is a faster memory type with much higher bandwidth than DDR/GDDR.
HBM offers a solution not only to that memory bandwidth wall but also, thanks to the proximity afforded by the interposer and its 3D architecture, delivers higher performance in a smaller form factor.
Features of HBM2
The second generation of high-bandwidth memory, HBM2, specifies up to eight dies per stack and doubles the pin transfer rate, up to 2 GT/s.
The updated HBM2 standard allows 3.2 Gbps per pin, a maximum capacity of 24 GB per stack (2 GB per die across 12 dies per stack), and a maximum throughput of 410 GB/s delivered over a 1024-bit memory interface split into 8 independent channels per stack.
Each HBM2 stack has its own dedicated 1024-bit interface, which allows the memory devices to run at comparatively low clock speeds while still delivering massive performance.
At the original 2 GT/s pin rate, that 1024-bit interface works out to 256 GB/s of memory bandwidth for a single stack.
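Both headline numbers fall out of the same simple formula; here is a minimal sketch using only the pin rates and bus width quoted in this section:

```python
# Peak per-stack bandwidth = pin rate (Gbit/s) x interface width (bits) / 8 bits per byte.
def stack_bandwidth_gb_s(pin_rate_gbit_s: float, width_bits: int = 1024) -> float:
    return pin_rate_gbit_s * width_bits / 8

print(stack_bandwidth_gb_s(2.0))  # original HBM2 at 2 GT/s   -> 256.0 GB/s
print(stack_bandwidth_gb_s(3.2))  # updated HBM2 at 3.2 Gbps  -> 409.6 GB/s (~410)
```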
HBM2, meanwhile, has the advantage of lower latency, which makes it ideal for server CPUs with dozens of cores rather than just graphics cards, and it has up to 8 independent memory channels (one per die in an 8-Hi stack).
In terms of advancements, HBM2 was expected to offer higher memory speeds as well as higher bandwidth.
Of course, HBM2 scales across a large number of stacks, and across the width of the bus, far better than GDDR6, which puts larger HBM2 topologies well ahead of standalone GDDR6 memory chips in terms of bandwidth.
For comparison, a 256-bit-wide GDDR6 memory bus running at 14 Gbps achieves a total bandwidth of 448 GB/s, so an individual stack of HBM2 lags slightly behind.
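That comparison is the same peak-bandwidth arithmetic with a narrower bus; a small sketch makes the crossover easy to see (the numbers are just those quoted above):

```python
# The same peak-bandwidth formula applied to both memory types.
def bus_bandwidth_gb_s(pin_rate_gbit_s: float, width_bits: int) -> float:
    return pin_rate_gbit_s * width_bits / 8

gddr6 = bus_bandwidth_gb_s(14.0, 256)   # 256-bit GDDR6 at 14 Gbps -> 448.0 GB/s
hbm2 = bus_bandwidth_gb_s(3.2, 1024)    # a single HBM2 stack      -> 409.6 GB/s
print(f"GDDR6 bus: {gddr6} GB/s vs. one HBM2 stack: {hbm2} GB/s")
# A card with two or four HBM2 stacks multiplies the HBM2 figure accordingly.
```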
Although it trails HBM2 in memory bandwidth at scale, GDDR6 is much cheaper than HBM2, making it a good choice for conventional graphics cards.
That is why AMD switched to GDDR6 on its Navi GPUs after using HBM and HBM2 on its Fury and Vega series graphics cards.
Enhanced HBMs
High Bandwidth Memory has seen several enhanced versions emerge on the market over time:
- HBM2E – Introduced in late 2018 by JEDEC as an upgrade to HBM2 with higher capacity and bandwidth. Specifically, it supports up to 307 GB/s per stack, or equivalently about 2.5 Tbit/s (see the quick check after this list).
- HBMnext – In late 2020, Micron announced an update to HBM2E under the working name HBMnext, which would eventually be standardized as HBM3.
- HBM-PIM – Introduced in early 2021 by Samsung, a development that adds AI computing capabilities within the memory itself, accelerating large-scale data processing thanks to AI engines embedded in the DRAM.
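As promised above, here is a quick check of the HBM2E figure; note that the 2.4 Gbps pin rate is an assumption inferred from the 307 GB/s and the 1024-bit interface, not a number stated in this article:

```python
# Check the HBM2E figure from the list above and its Tbit/s equivalent.
PIN_RATE_GBIT_S = 2.4   # assumed pin rate implied by 307 GB/s over a 1024-bit bus
WIDTH_BITS = 1024

gb_s = PIN_RATE_GBIT_S * WIDTH_BITS / 8   # -> 307.2 GB/s
tbit_s = gb_s * 8 / 1000                  # -> 2.4576, i.e. roughly "2.5 Tbit/s"
print(f"{gb_s} GB/s = {tbit_s} Tbit/s")
```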
HBM3 Features
Rambus's HBM3 memory subsystem supports a data rate of up to 8.4 Gbps per data pin. With more memory channels, HBM3 can support taller DRAM stacks per device, as well as more granular accesses.
The memory interface is still 1,024 bits wide, and HBM3 stacks support data rates of up to 6.4 GT/s per pin. Multiplying that 1,024-bit interface by 6.4 Gbps and dividing by 8 bits per byte gives a possible bandwidth of 819 gigabytes per second (GB/s) between the host processor and an individual HBM3 DRAM device. On the capacity side, a 12-high stack of 32 Gb dies yields a 48 GB HBM3 DRAM device.
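The same arithmetic reproduces both HBM3 headline figures, the 819 GB/s of bandwidth and the 48 GB of capacity; a minimal sketch, not vendor code:

```python
# HBM3 per-stack bandwidth and capacity, from the figures quoted above.
PIN_RATE_GBIT_S = 6.4    # HBM3 data rate per pin
WIDTH_BITS = 1024        # interface width, unchanged from HBM2
DIES_PER_STACK = 12      # a 12-Hi stack
DIE_DENSITY_GBIT = 32    # 32 Gb per DRAM die

bandwidth_gb_s = PIN_RATE_GBIT_S * WIDTH_BITS / 8     # -> 819.2 GB/s
capacity_gb = DIES_PER_STACK * DIE_DENSITY_GBIT / 8   # -> 48.0 GB
print(f"{bandwidth_gb_s} GB/s per stack, {capacity_gb} GB per stack")
```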
From a capacity standpoint, we expect first-generation HBM3 memory to closely resemble its current-generation counterpart, HBM2E. Offering 8-Hi and 12-Hi stacks, this means that, at least for SK Hynix, first-generation HBM3 memory remains identical in density to its last-generation HBM2E memory.
According to AnandTech's Ryan Smith, SK Hynix's first-generation HBM3 memory has the same density as its latest-generation HBM2E memory. This means device vendors looking to increase the overall memory capacity of their next-generation hardware will have to use 12-die stacks, compared to the 8-die stacks they have generally used until now.
According to JEDEC, the HBM3 standard is designed to deliver higher throughput, with twice the data rate per pin of HBM2-generation parts, up to 6.4 Gbps, which equates to 819 GB/s of bandwidth per device and corresponds to the SK Hynix HBM3 DRAM design announced last year.
Memory bandwidth is a key factor in computing performance, so the evolution of these standards needs to keep accelerating, and HBM3 sets a new benchmark.
At the data rates SK Hynix claims, a single HBM3 stack would be capable of reaching 819 GB/s of memory bandwidth, and SK Hynix intends to supply its own ultra-fast HBM3 memory.