This comparison of HBM and HBM2 aims to clarify the differences between these VRAM technologies, bearing in mind that their use is almost exclusively professional.
We find them on NVIDIA and AMD graphics cards, so we’ll see what they offer.
We are leaving the gaming segment and entering a complex world with much higher demands. Graphics card architectures matter in every segment, but in data centers and similar environments, memory technologies such as HBM become decisive.
Let’s jump in and look at things in more detail.
What is HBM Memory
The acronym stands for High Bandwidth Memory, a type of memory characterized by 3D-stacked DRAM dies.
See Also: HBM2 vs. HBM3 (High-Bandwidth Memory): Main Differences
Therefore, the memory chips are not installed around the CPU or GPU, but are stacked vertically and interconnected with each other by microbumps and through-silicon vias (TSVs).
JEDEC adopted the HBM standard in October 2013, with Samsung and SK Hynix being the main manufacturers of this type of memory, whose purpose is to supply (mainly) NVIDIA and AMD for their graphics cards and HPC products.
In fact, the first HBM chip was produced by SK hynix in 2013 and used in AMD Fiji GPUs.
According to Mordor Intelligence, the main HBM manufacturers are: Micron, Samsung, SK Hynix, IBM and Intel.
The interposer (a component that is not cheap to manufacture) acts as the switchboard, interconnecting the stacked HBM memory with the CPU, GPU, or SoC.
If you’re wondering why vertically stack memory, the answer is simple: it shortens the distance data travels, allowing for a small form factor.
So, vertical stacking responds to 2 needs:
- A higher data rate, achieved by shortening the path data travels.
- Fitting a lot of memory into a small footprint.
In this way, the memory is not integrated into the CPU, GPU, or SoC; instead, it sits very close to these chips, with the interposer being the key component that makes this fast connection viable.
According to AMD, HBM features are almost indistinguishable from on-chip RAM.
It has two 128-bit channels per die, which gives it a much wider memory bus than other types (GDDR, for example): a stack of four dies exposes a 1024-bit interface.
The purpose of HBM memory is to offer higher bandwidth (more than 100 GB/s per stack) and lower power consumption; a GPU with 4 HBM stacks gets a combined bus width of 4096 bits.
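As a rough illustration (my own sketch, not from the article), the peak figures above follow directly from bus width and per-pin data rate; first-generation HBM ran at 1 Gbps per pin over a 1024-bit stack interface:

```python
def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak theoretical bandwidth in GB/s: total bits per second divided by 8."""
    return bus_width_bits * pin_rate_gbps / 8

# One first-generation HBM stack: 1024-bit interface at 1 Gbps per pin.
print(peak_bandwidth_gbs(1024, 1.0))   # → 128.0 GB/s per stack
# A GPU with 4 HBM stacks exposes a combined 4096-bit bus.
print(peak_bandwidth_gbs(4096, 1.0))   # → 512.0 GB/s total
```

This is the theoretical ceiling; real-world throughput is lower, but it shows why even one stack clears the 100 GB/s mark.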
The big problem with these memories is their production cost and the low availability rate.
Demand for products with HBM is not massive, but production lead times are longer due to the added complexity of stacking.
Compared with the GDDR5X or GDDR6 alternatives, HBM has offered far more bandwidth since 2015.
AMD dared to include it in the Radeon R9 Fury and Radeon Pro, but in truth it proved a waste of resources and money.
What is HBM2 Memory
Its initials stand for the same thing; it was introduced in 2016 as the evolution of HBM, and JEDEC updated the HBM2 standard in 2018. The main improvements were:
- A data rate of 2 Gbps per pin (256 GB/s per stack), raised by the 2018 update to 3.2 Gbps (410 GB/s per stack).
- A maximum capacity of 24 GB per stack, configured as 2 GB per die across 12 dies.
- Initially, stacks of up to 8 memory dies supporting up to 8 GB per stack.
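The HBM2 numbers in the list above can be checked with the same bus-width arithmetic (a sketch of my own, assuming the standard 1024-bit stack interface):

```python
def stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Per-stack peak bandwidth in GB/s for a given per-pin data rate."""
    return bus_width_bits * pin_rate_gbps / 8

print(stack_bandwidth_gbs(2.0))   # → 256.0 GB/s (original HBM2)
print(stack_bandwidth_gbs(3.2))   # → 409.6 GB/s (~410, the 2018 update)
print(12 * 2)                     # → 24 GB: 12 dies of 2 GB per stack
```

The bus width stays at 1024 bits across generations; the gains come from the higher per-pin rate and taller stacks.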
In the gaming world we saw the AMD Radeon RX Vega generation and the Radeon VII, graphics cards that became better known for their high power consumption than for their performance.
See Also: Is 16 GB RAM Enough for your Gaming PC
In the professional world, AMD opted for HBM2 in its Radeon Pro, while NVIDIA did so in its Titan V and Quadro GP100.
We had to dig through the archives to find out how much an HBM2 chip costs and what the interposer adds.
According to experts, the estimated cost in 2017 was $150 plus $25 for the interposer. The unit price of HBM2 (16 GB, with 4 stacked DRAM chips) in 2019 was $120, not including the cost of the package.
Logically, the price of the final product rises sharply, all the more so when the HBM chip market is split among just five brands. In an HBM vs HBM2 comparison, price is a very important factor to consider.
The high cost of HBM2 forced AMD to use the same PCB and VRMs in the RX Vega 64 and 56, leaving it struggling to make a minimal profit per sale.
Basically, the commercial products that have integrated HBM or HBM2 have forced brands to cut production costs everywhere else (PCB, VRM, BIOS chip, etc.).
At first, HBM2 appeared as a solution to the high power consumption of GDDR5, but remember that GDDR6 and GDDR6X are now on the market as well.
HBM vs HBM2: Which is better and Why
We end the article with the HBM vs HBM2 verdict, laying out the differences and the reasons why we believe one memory is better than the other.
Logically, HBM2 is better than HBM: it is more advanced, offering more capacity per stack, higher speed, more bandwidth, and lower power consumption.
In short, it is improved HBM memory, but it has one main disadvantage: price. We do not know how the component shortage affecting the sector has impacted what HBM2 will cost you.
Why isn’t it implemented in gaming GPUs? Because AMD, NVIDIA, or Intel would be unable to sell GPUs with HBM2 at an affordable price.
See Also: Types of HBM Memory
As the professional sector is not as dependent on price, focusing instead on performance and profitability, graphics cards with HBM2 are more attractive there.
The information we have indicates that GDDR6 and GDDR6X await us for the RX 7000 and RTX 4000.
In other words, GDDR will be with us for a while, at least for the next two years. What’s more, there is already talk of GDDR7.
However, even in the professional sector we are seeing new releases that ship with GDDR rather than HBM2, which leads us to think that, despite all its improvements, its price is too high to bet on.
Zahid Khan Jadoon is an interior decorator, designer, and specialized chef who loves to write about home appliances and food. He currently runs his interior design business while managing a restaurant, and in his spare time he writes about home and kitchen appliances.