Micron Sells Out Entire HBM3E Supply for 2024, Most of 2025
Being the first company to ship HBM3E memory has its perks for Micron, as the company has revealed that it has managed to sell out its entire supply of the advanced high-bandwidth memory for 2024, while most of its 2025 production has been allocated as well. Micron’s HBM3E memory (which Micron alternatively calls HBM3 Gen2) was one of the first to be qualified for NVIDIA’s updated H200/GH200 accelerators, so it looks like the DRAM maker will be a key supplier to the green company.
“Our HBM is sold out for calendar 2024, and the overwhelming majority of our 2025 supply has already been allocated,” said Sanjay Mehrotra, chief executive of Micron, in prepared remarks for the company’s earnings call this week. “We continue to expect HBM bit share equivalent to our overall DRAM bit share sometime in calendar 2025.”
Micron’s first HBM3E product is an 8-Hi 24 GB stack with a 1024-bit interface, a 9.2 GT/s data transfer rate, and a total bandwidth of 1.2 TB/s. NVIDIA’s H200 accelerator for artificial intelligence and high-performance computing will use six of these cubes, providing a total of 141 GB of accessible high-bandwidth memory.
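Those headline figures are easy to sanity-check: 1,024 bits per transfer at 9.2 GT/s works out to about 1,177.6 GB/s per stack (hence the rounded 1.2 TB/s), and six 24 GB stacks yield 144 GB of raw capacity, of which the H200 exposes 141 GB. The short Python sketch below merely reproduces that arithmetic from the published numbers; it is an illustration, not Micron or NVIDIA tooling.

    # Sanity check of Micron's published HBM3E figures (numbers from the article).
    interface_width_bits = 1024   # per-stack interface width
    transfer_rate_gts = 9.2       # giga-transfers per second per pin
    per_stack_bw_gbs = interface_width_bits * transfer_rate_gts / 8
    print(f"Per-stack bandwidth: {per_stack_bw_gbs:.1f} GB/s")  # ~1177.6 GB/s, i.e. ~1.2 TB/s

    stacks, stack_capacity_gb = 6, 24
    print(f"Raw capacity: {stacks * stack_capacity_gb} GB")  # 144 GB raw; NVIDIA lists 141 GB usable on H200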
“We are on track to generate several hundred million dollars of revenue from HBM in fiscal 2024 and expect HBM revenues to be accretive to our DRAM and overall gross margins beginning in the fiscal third quarter,” said Mehrotra.
The company has also started sampling its 12-Hi 36 GB stacks, which offer 50% more capacity. These KGSDs will ramp in 2025 and will be used for next generations of AI products. Meanwhile, it does not look like NVIDIA’s B100 and B200 are going to use 36 GB HBM3E stacks, at least initially.
Demand for artificial intelligence servers set records last year, and it looks like it will remain high this year as well. Some analysts believe that NVIDIA’s A100 and H100 processors (as well as their various derivatives) commanded as much as 80% of the entire AI processor market in 2023. And while NVIDIA will face tougher competition this year from AMD, AWS, D-Matrix, Intel, Tenstorrent, and other companies on the inference front, it looks like NVIDIA’s H200 will still be the processor of choice for AI training, especially for big players like Meta and Microsoft, who already run fleets consisting of hundreds of thousands of NVIDIA accelerators. With that in mind, being a primary supplier of HBM3E for NVIDIA’s H200 is a big deal for Micron, as it enables the company to finally capture a sizable chunk of the HBM market, which is currently dominated by SK Hynix and Samsung and where Micron held only about 10% as of last year.
Meanwhile, since every DRAM device within an HBM stack has a wide interface, it is physically larger than regular DDR4 or DDR5 ICs. As a result, the ramp of HBM3E memory will affect Micron’s bit supply of commodity DRAM, the company said.
“The ramp of HBM production will constrain supply growth in non-HBM products,” Mehrotra said. “Industrywide, HBM3E consumes approximately three times the wafer supply as DDR5 to produce a given number of bits in the same technology node.”
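To illustrate what that 3:1 ratio implies for the bit-supply trade-off, here is a hedged back-of-envelope sketch in Python. The 100-wafer figure and the normalized bits-per-wafer value are arbitrary assumptions for illustration; only the 3x ratio comes from Mehrotra’s remarks.

    # Hypothetical illustration of the wafer trade-off Mehrotra describes:
    # if one HBM3E bit takes ~3x the wafer area of a DDR5 bit on the same node,
    # each wafer moved to HBM3E forgoes ~3x as many DDR5 bits as it gains in HBM.
    WAFER_RATIO = 3.0           # HBM3E-to-DDR5 wafer consumption per bit (from the quote)
    ddr5_bits_per_wafer = 1.0   # normalized output of one DDR5 wafer (assumed)
    wafers_shifted = 100        # hypothetical wafers reallocated to HBM3E

    hbm_bits_gained = wafers_shifted * ddr5_bits_per_wafer / WAFER_RATIO
    ddr5_bits_forgone = wafers_shifted * ddr5_bits_per_wafer
    print(f"HBM3E bits gained: {hbm_bits_gained:.1f} (normalized units)")
    print(f"DDR5 bits forgone: {ddr5_bits_forgone:.1f} (normalized units)")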