Micron Kicks Off Mass Production of HBM3E Memory


Micron Technology said on Monday that it had initiated volume production of its HBM3E memory. The company's HBM3E known good stack dies (KGSDs) will be used in Nvidia's H200 compute GPU for artificial intelligence (AI) and high-performance computing (HPC) applications, which will ship in the second quarter of 2024.

Micron announced that it is mass-producing 24 GB 8-Hi HBM3E devices with a data transfer rate of 9.2 GT/s and a peak memory bandwidth of over 1.2 TB/s per device. Compared to HBM3, HBM3E increases data transfer rate and peak memory bandwidth by a whopping 44%, which is particularly important for bandwidth-hungry processors like Nvidia's H200.
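These figures can be sanity-checked with a little arithmetic, assuming the standard 1024-bit HBM interface per stack and HBM3's typical 6.4 GT/s data rate (both assumptions, not stated in the announcement):

```python
# Peak bandwidth of one HBM3E stack: data rate (GT/s) x bus width (bits) / 8 bits per byte
data_rate_gts = 9.2      # HBM3E transfers per second per pin, in billions
bus_width_bits = 1024    # standard HBM interface width per stack (assumption)
peak_gb_s = data_rate_gts * bus_width_bits / 8
print(f"{peak_gb_s:.1f} GB/s per stack")   # 1177.6 GB/s, i.e. ~1.2 TB/s

# Relative gain over HBM3, assuming HBM3 runs at 6.4 GT/s
hbm3_rate_gts = 6.4
increase_pct = (data_rate_gts / hbm3_rate_gts - 1) * 100
print(f"{increase_pct:.0f}% faster than HBM3")  # ~44%
```

Both results line up with the numbers Micron quotes: roughly 1.2 TB/s per device and a ~44% jump over HBM3.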

Nvidia's H200 is based on the Hopper architecture and offers the same compute performance as the H100. Meanwhile, it is equipped with 141 GB of HBM3E memory featuring bandwidth of up to 4.8 TB/s, a significant upgrade from the H100's 80 GB of HBM3 and up to 3.35 TB/s of bandwidth.

Micron's memory roadmap for AI is further solidified by the upcoming launch of a 36 GB 12-Hi HBM3E product in March 2024. Meanwhile, it remains to be seen where these devices will be used.

Micron uses its 1β (1-beta) process technology to produce its HBM3E. Deploying its latest manufacturing node for data center-grade products is a significant achievement for the company and a testament to its production technology.

Starting mass production of HBM3E memory ahead of competitors SK Hynix and Samsung is a significant achievement for Micron, which currently holds a 10% market share in the HBM sector. The move is crucial for the company, as it allows Micron to introduce a premium product earlier than its rivals, potentially increasing its revenue and profit margins while gaining a larger market share.

"Micron is delivering a trifecta with this HBM3E milestone: time-to-market leadership, best-in-class industry performance, and a differentiated power efficiency profile," said Sumit Sadana, executive vice president and chief business officer at Micron Technology. "AI workloads are heavily reliant on memory bandwidth and capacity, and Micron is very well positioned to support the significant AI growth ahead through our industry-leading HBM3E and HBM4 roadmap, as well as our full portfolio of DRAM and NAND solutions for AI applications."

Source: Micron
