Have You Caught the HBM Gold Rush?

2024-05-30 16:51:09

High Bandwidth Memory (HBM) is a new type of memory in which multiple DRAM dies are stacked vertically using 2.5D/3D advanced packaging technology, and it has become the most visible product in today's memory industry. Recently, TSMC announced that it will combine its N12FFC+ and N5 process technologies to produce base dies for HBM4, preparing for the HBM4 ramp, and it has expanded its CoWoS advanced packaging capacity several times over, all to meet the industry's soaring demand for HBM. The three major memory manufacturers have been just as active: SK Hynix, Samsung, and Micron have each said that their HBM capacity is sold out for the next two years, and Samsung and SK Hynix recently stated that, to meet demand, they will convert more than 20% of their DRAM production lines to HBM production. As HBM3E and HBM4 continue to advance and reshape the industry ecosystem, the three major memory makers and TSMC are more closely linked than ever.


Some in the industry classify HBM as advanced packaging, while others regard it as a new type of memory. It is categorized as advanced packaging because, at present, almost all HBM systems are tightly bound to TSMC's advanced packaging technology, CoWoS.

HBM fully unleashes compute performance when combined with AI processors through 2.5D CoWoS packaging. Beyond CoWoS, the industry is developing many other advanced packaging technologies to enhance HBM, such as TSMC's next-generation wafer-scale system platform CoW-SoW, SK Hynix's Through-Silicon Via (TSV) and Mass Reflow Molded Underfill (MR-MUF) packaging, and Samsung's Thermal Compression with Non-Conductive Film (TC-NCF). These packaging technologies will shape the future development of the HBM industry.



As shown in the figure above, HBM consists of multiple stacked DRAM dies, connected mainly by Through-Silicon Vias (TSVs) and microbumps. The multi-layer DRAM stack sits on a base die at the bottom, which in turn connects to a silicon interposer through bumps. On the same interposer, the HBM stack is placed side by side with a GPU, CPU, or ASIC and interconnected through a 2.5D advanced packaging process such as CoWoS; the silicon interposer connects to the package substrate through Cu bumps, and the package substrate finally connects to the PCB below through solder balls. This clever design greatly reduces footprint while delivering high bandwidth, low latency, low power consumption, and easy capacity expansion.
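To keep that packaging hierarchy straight, here is a minimal, purely illustrative sketch of the connection chain described above; the layer names and connection labels simply restate the paragraph and do not come from any real design tool:

```python
# Purely illustrative: the HBM 2.5D packaging hierarchy described above, top to bottom.
from collections import namedtuple

Layer = namedtuple("Layer", ["name", "connects_down_via"])

hbm_2_5d_stackup = [
    Layer("DRAM dies (stacked)", "TSVs + microbumps"),
    Layer("Base die",            "bumps"),
    Layer("Silicon interposer",  "Cu bumps"),   # GPU/CPU/ASIC sits beside HBM at this level
    Layer("Package substrate",   "solder balls"),
    Layer("PCB",                 None),
]

for upper, lower in zip(hbm_2_5d_stackup, hbm_2_5d_stackup[1:]):
    print(f"{upper.name:22s} --[{upper.connects_down_via}]--> {lower.name}")
```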

With the rising demand for computing in the AI era, high-end GPUs and memory are in short supply. GPUs currently complement CPUs and continue to strengthen their compute power, but processor performance has been improving at roughly 55% per year, while memory performance improves at only about 10% per year. Traditional graphics memory such as GDDR5 also faces bottlenecks such as limited bandwidth and high power consumption, leaving the compute power of GPUs and CPUs unable to be fully utilized.
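To see how quickly that gap compounds, here is a back-of-the-envelope sketch using the 55% and 10% annual growth rates quoted above; the 10-year horizon is an arbitrary illustration, not a figure from the article:

```python
# Illustrative only: compound the processor vs. memory growth rates quoted above.
proc_growth = 1.55  # ~55% per year (processor performance)
mem_growth = 1.10   # ~10% per year (memory performance)

proc, mem = 1.0, 1.0
for year in range(1, 11):
    proc *= proc_growth
    mem *= mem_growth
    print(f"Year {year:2d}: processor {proc:7.1f}x, memory {mem:5.1f}x, gap {proc/mem:5.1f}x")

# After 10 years the processor/memory gap is (1.55/1.10)**10 ≈ 31x,
# which is why memory bandwidth, not raw compute, becomes the bottleneck.
```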

GPU memory generally uses one of two solutions, GDDR or HBM, but industry data show that HBM's performance far exceeds that of GDDR. Consider AMD's comparison of HBM against DDR (Double Data Rate) memory parameters, taking the then-most-popular GDDR5 as the example.



According to AMD's data, in terms of memory bus width, GDDR5 is 32-bit while HBM is 32 times as wide at 1024-bit; in terms of clock frequency, HBM runs at 500MHz, much lower than GDDR5's 1750MHz; in terms of memory bandwidth, a single HBM stack exceeds 100GB/s while a single GDDR5 chip delivers about 25GB/s; and in terms of overall data transfer rate, HBM is therefore much higher than GDDR5.
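Those bandwidth figures follow directly from bus width and data rate. A rough sketch of the arithmetic, assuming double data rate (2 transfers per clock) for HBM and a quad-pumped data clock (4 transfers per clock) for GDDR5, assumptions consistent with the numbers above but not stated in the article:

```python
# Illustrative bandwidth arithmetic for the figures quoted above.
def bandwidth_gb_s(bus_width_bits: int, clock_mhz: float, transfers_per_clock: int) -> float:
    """Peak bandwidth = bus width x clock x transfers per clock, converted to GB/s."""
    return bus_width_bits * clock_mhz * 1e6 * transfers_per_clock / 8 / 1e9

# HBM: 1024-bit bus, 500 MHz, double data rate (2 transfers/clock).
print(f"HBM stack : {bandwidth_gb_s(1024, 500, 2):.0f} GB/s")  # ~128 GB/s, i.e. >100 GB/s

# GDDR5: 32-bit bus, 1750 MHz, 4 transfers/clock (quad-pumped data clock).
print(f"GDDR5 chip: {bandwidth_gb_s(32, 1750, 4):.0f} GB/s")   # ~28 GB/s, close to the ~25 GB/s cited
```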
From the perspective of space utilization, HBM is packaged together with the GPU, which significantly reduces graphics card PCB area; a GDDR5 chip occupies roughly three times the area of an HBM chip, meaning HBM can achieve larger capacity in a smaller space. HBM can therefore save board area and power consumption while delivering high bandwidth and high capacity, and is regarded as an ideal solution for GPU memory.

At present, HBM has become standard in AI servers, data centers, autonomous driving, and other high-performance computing applications, and its addressable markets continue to expand.

According to the latest TrendForce study, HBM is estimated to exceed 20% of total DRAM output value in 2024 and could surpass 30% in 2025. Looking ahead to 2025, demand from major AI solution providers has clearly shifted to HBM3e, and more 12-high stack products will appear, raising the HBM capacity carried by a single chip. HBM bit demand is growing at nearly 200% annually in 2024 and is expected to double again in 2025.
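Taking those TrendForce growth figures at face value, the implied compounding is easy to check; the 2023 baseline of 1.0 below is a placeholder, not a figure from the article:

```python
# Quick check of the compounding implied by the TrendForce figures above.
base_2023 = 1.0
demand_2024 = base_2023 * 3.0    # ~200% annual bit growth means roughly 3x
demand_2025 = demand_2024 * 2.0  # expected to double again
print(f"2025 HBM bit demand ≈ {demand_2025:.0f}x the 2023 level")  # ≈ 6x
```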

