SK hynix Now Sampling 24GB HBM3 Stacks, Preparing for Mass Production

When SK hynix initially announced its HBM3 memory portfolio in late 2021, the company said it was developing both 8-Hi 16GB memory stacks as well as even more technically complex 12-Hi 24GB memory stacks. Now, almost 18 months after that initial announcement, SK hynix has finally begun sampling its 24GB HBM3 stacks to multiple customers, with an aim towards going into mass production and market availability in the second half of the year. All of which should be a very welcome development for SK hynix's downstream customers, many of whom are champing at the bit for additional memory capacity to meet the needs of large language models and other high-end computing uses.

Based on the same technology as SK hynix's existing 16GB HBM3 memory modules, the 24GB stacks are designed to further improve on the density of the overall HBM3 memory module by increasing the number of DRAM layers from 8 to 12 – adding 50% more layers for 50% more capacity. This is something that has been in the HBM specification for quite some time, but it has proven difficult to pull off, as it requires making the extremely thin DRAM dies in a stack even thinner in order to squeeze more in.
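As a quick sanity check on the capacity math, the figures line up if every DRAM die in a stack is the same capacity – an inference from the 8-Hi/16GB configuration, not something SK hynix has spelled out:

```python
# Per-die capacity implied by the 8-Hi 16GB stack (assumes identical dies).
die_capacity_gb = 16 / 8                  # 2 GB per DRAM die
layers_for_24gb = 24 / die_capacity_gb    # 12 layers, matching the 12-Hi design
print(layers_for_24gb)

# 50% more layers yields exactly 50% more capacity.
extra_layers = (12 - 8) / 8
extra_capacity = (24 - 16) / 16
print(extra_layers == extra_capacity)
```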

Standard HBM DRAM packages are typically 700 – 800 microns high (Samsung claims its 8-Hi and 12-Hi HBM2E are 720 microns high), and, ideally, that height needs to be maintained in order for these denser stacks to be physically compatible with existing product designs, and to a lesser extent to avoid towering over the processors they're paired with. As a result, to pack 12 memory devices into a standard KGSD, memory manufacturers need to either shrink the thickness of each DRAM layer without compromising performance or yield, reduce the space between layers, minimize the base layer, or introduce a combination of all three measures.

While SK hynix's latest press release offers limited details, the company has apparently opted to scale down both the DRAM dies and the space between them, aided by an improved underfill material. For the DRAM dies themselves, SK hynix has previously stated that it was able to shave its die thickness down to 30 microns. Meanwhile, the improved underfill material on its 12-Hi stacks is being supplied as part of the company's new Mass Reflow Molded Underfill (MR-MUF) packaging technology. This technique involves bonding the DRAM dies together all at once via a reflow process, while simultaneously filling the gaps between the dies with the underfill material.
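To get a feel for why 30-micron dies matter, here is a rough height-budget sketch for a 12-Hi stack. The 720-micron total is Samsung's cited 12-Hi HBM2E figure; how the remainder splits between the inter-die gaps, the base logic die, and the mold cap is an illustrative assumption, not a published breakdown:

```python
# Illustrative height budget for a 12-Hi stack (figures partly assumed).
die_um = 30                     # die thickness SK hynix has cited
dies = 12
silicon_um = dies * die_um      # total DRAM silicon height
budget_um = 720                 # Samsung's cited 12-Hi HBM2E package height
remaining_um = budget_um - silicon_um  # left for 11 gaps + base die + mold
print(silicon_um, remaining_um)        # 360 360
```

Even at 30 microns per die, half of the height budget is consumed by DRAM silicon alone, which is why shrinking the underfill gaps is the other lever SK hynix has to pull.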

SK hynix calls its improved underfill material "liquid Epoxy Molding Compound", or "liquid EMC", which replaces the non-conductive film (NCF) used in older generations of HBM. Of particular interest here, besides the thinner layers this allows, according to SK hynix liquid EMC offers twice the thermal conductivity of NCF. Keeping the lower layers of stacked chips reasonably cool has been one of the biggest challenges with chip stacking technology of all kinds, so doubling the thermal conductivity of their fill material marks a significant improvement for SK hynix. It should go a long way towards making 12-Hi stacks more viable by better dissipating heat from the well-buried lowest-level dies.

Assembly aside, the performance specifications for SK hynix's 24GB HBM3 stacks are identical to its existing 16GB stacks. That means a maximum data transfer speed of 6.4Gbps/pin running over a 1024-bit interface, providing a total bandwidth of 819.2 GB/s per stack.
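The per-stack bandwidth figure follows directly from the pin rate and interface width quoted above:

```python
# Per-stack HBM3 bandwidth from the published figures.
pin_rate_gbps = 6.4       # data rate per pin, in Gbps
interface_bits = 1024     # interface width (pins) per stack
total_gbps = pin_rate_gbps * interface_bits  # 6553.6 Gbps per stack
total_gbytes = total_gbps / 8                # 819.2 GB/s per stack
print(total_gbytes)
```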

In the end, all of the meeting difficulties with 12-Hello HBM3 stacks must be greater than justified by the advantages that the extra reminiscence capability brings. SK hynix’s main clients are already using 6+ HBM3 stacks on a single product with a purpose to ship the whole bandwidth and reminiscence capacities they deem mandatory. A 50% increase in reminiscence capability, in flip, will likely be a big boon to merchandise corresponding to GPUs and different types of AI accelerators, particularly as this period of enormous language fashions has seen reminiscence capability turn into bottlenecking consider mannequin coaching. NVIDIA is already pushing the envelope on reminiscence capability with their H100 NVL – a specialised, 96GB H100 SKU that permits previously-reserved reminiscence – so it is easy to see how they might be keen to have the ability to present 120GB/144GB H100 components utilizing 24GB HBM3 stacks.
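The 120GB/144GB figures are consistent with five or six 24GB stacks per GPU – the stack counts here are inferred from those capacities, not confirmed by NVIDIA:

```python
# Candidate H100 capacities with 24GB HBM3 stacks (stack counts inferred).
stack_24_gb = 24
print(5 * stack_24_gb)   # 120 GB with five stacks
print(6 * stack_24_gb)   # 144 GB with six stacks

# The current 96GB H100 NVL figure lines up with six 16GB stacks.
print(6 * 16)            # 96 GB
```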

Source: SK Hynix
