Micron advances HBM4 development, sets 2026 for mass production

Levi Li, DIGITIMES Asia, Taipei

Micron Technology announced at its latest investor conference that HBM4 development is progressing on schedule and that work on HBM4E is already underway. The company expects to begin mass production of HBM4 in 2026, according to Wccftech.

Strong quarterly performance drives momentum

In its December 18, 2024 earnings call, Micron reported earnings per share of US$1.79, exceeding the US$1.60 consensus estimate. Quarterly revenue reached US$8.71 billion, up 84.3% from the same period last year and ahead of market expectations.

The company highlighted its data center segment's robust performance, with revenue surging over 400% compared to the same period last year and 40% quarter over quarter. This segment achieved a milestone by contributing over 50% of the company's total revenue for the first time.

Micron reported significant progress in HBM shipments, with HBM revenue more than doubling quarter over quarter. The company anticipates strong growth in the HBM market in the coming years, as noted by MarketBeat and The Motley Fool.

For the second quarter of fiscal year 2025, Micron projects revenue between US$7.7 billion and US$8.1 billion and earnings per share in the range of US$1.33 to US$1.53.

Strategic partnership with TSMC enhances HBM4E capabilities

HBM4E promises a performance improvement of more than 50% over HBM3E, as noted by TweakTown and Tom's Hardware. Through its collaboration with TSMC, Micron can develop customized logic base dies tailored to specific customer needs, which the company expects to strengthen its financial performance.

Technology advancement drives AI and HPC innovation

Micron's HBM4 memory will enter mass production first, with HBM4E following in subsequent years. HBM4E introduces enhanced data transfer speeds and customizable base dies developed with TSMC, offering improved cache capacity and logic functions for AI, HPC, and networking workloads.

HBM4 utilizes Micron's 1β (5th-generation 10nm-class) DRAM technology and stacks up to 16 DRAM dies, with each stack offering 32 GB of capacity. The technology features a 2048-bit interface and 6.4 GT/s data rates, delivering a peak bandwidth of 1.64 TB/s per stack. Production timing aligns with the release of Nvidia's Vera Rubin GPUs and AMD's Instinct MI400-series GPUs, both designed for AI and HPC applications.
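As a quick sanity check, the quoted per-stack bandwidth follows directly from the interface width and data rate. The short Python sketch below reproduces the 1.64 TB/s figure, assuming decimal units (1 GT/s = 10^9 transfers per second, 1 TB = 10^12 bytes), which is how memory bandwidth is typically quoted.

# Back-of-the-envelope check of the quoted HBM4 per-stack bandwidth.
# Assumes decimal units (1 GT/s = 1e9 transfers/s, 1 TB = 1e12 bytes).

interface_width_bits = 2048   # HBM4 interface width per stack
data_rate_gtps = 6.4          # transfers per second, in GT/s

bytes_per_transfer = interface_width_bits / 8                      # 256 bytes moved per transfer
bandwidth_bytes_per_s = bytes_per_transfer * data_rate_gtps * 1e9  # bytes per second
print(f"Peak bandwidth: {bandwidth_bytes_per_s / 1e12:.2f} TB/s per stack")  # ~1.64 TB/s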

On the current generation, Micron's 8-Hi HBM3E stacks are already in use with Nvidia's Blackwell processors. Its 12-Hi HBM3E stacks, currently in testing, have received positive customer feedback for consuming 20% less power than competitors' 8-Hi versions while delivering 50% more memory capacity and higher performance, according to Micron CEO Sanjay Mehrotra.

While Micron's HBM4E development gains momentum through collaboration with multiple customers, it faces strong competition from Samsung and SK Hynix, both reportedly adopting 6th generation 10nm-class (1c) processes for HBM4. Despite the rivalry, Micron and SK Hynix continue to lead in high-bandwidth memory innovation.

Mehrotra described HBM4E as a transformative innovation for the company's memory business. The partnership with TSMC's advanced logic foundry enables Micron to deliver customized base dies tailored to specific customer needs, potentially boosting its financial performance.

The upcoming AMD Instinct MI325X and MI355X accelerators, along with Nvidia's Blackwell B300 GPUs, are expected to feature 12-Hi HBM3E memory stacks, optimized for AI and HPC workloads, as reported by Reuters, Tom's Hardware, and Heise Online.

SK Hynix accelerates HBM4 development with TSMC

SK Hynix has modified its HBM4 production strategy, adopting TSMC's 3nm process for the base die in place of the originally planned 5nm node. The change aims to boost performance and efficiency, with mass production scheduled for late 2025 to meet Nvidia's delivery timelines, according to TrendForce.

The SK Hynix-TSMC partnership is set to strengthen SK Hynix's competitive position against Samsung, which is reportedly using its own 4nm process for HBM4 base dies. By adopting TSMC's more advanced 3nm technology, SK Hynix aims for a 20–30% performance boost over 5nm-based HBM4 chips, reinforcing its leadership in the high-bandwidth memory market, according to TechPowerUp.

Nvidia has reportedly urged SK Hynix to accelerate the delivery of HBM4 chips by six months to address the increasing demand for AI applications. In response, SK Hynix plans to commence shipments in late 2025, ahead of its original schedule, as reported by Reuters.
