The CXL consortium has had a regular presence at FMS (which rechristened itself from 'Flash Memory Summit' to the 'Future of Memory and Storage' this year). Back at FMS 2022, the consortium had announced v3.0 of the CXL specifications. This was followed by CXL 3.1's introduction at Supercomputing 2023. Having started off as a host-to-device interconnect standard, it has slowly subsumed other competing standards such as OpenCAPI and Gen-Z. As a result, the specifications have grown to cover a wide variety of use cases by building a protocol on top of the ubiquitous PCIe expansion bus. The CXL consortium includes heavyweights such as AMD and Intel, as well as a number of startups looking to play in different segments on the device side. At FMS 2024, CXL had a prime place in the booth demos of many vendors.

The migration of server platforms from DDR4 to DDR5, along with the rise of workloads demanding large RAM capacity (but not particularly sensitive to either memory bandwidth or latency), has opened up memory expansion modules as one of the first widely available classes of CXL devices. Over the last couple of years, we have had product announcements from Samsung and Micron in this area.

SK hynix CMM-DDR5 CXL Memory Module and HMSDK

At FMS 2024, SK hynix was showing off their DDR5-based CMM-DDR5 CXL memory module with a 128 GB capacity. The company was also detailing their associated Heterogeneous Memory Software Development Kit (HMSDK) – a set of libraries and tools at both the kernel and user levels aimed at increasing the ease of use of CXL memory. This is achieved in part by taking the memory pyramid / hierarchy into account and relocating data between the server's main memory (DRAM) and the CXL device based on usage frequency.
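HMSDK performs this tiering automatically at the kernel and library level. For illustration only, the sketch below shows the manual equivalent using the standard Linux libnuma API, under the assumption that the CXL module is exposed as a CPU-less NUMA node with ID 2 (the node number is hypothetical and system-dependent):

/* Minimal sketch (not HMSDK itself): place a rarely-touched buffer on a
 * CXL-backed NUMA node with the standard libnuma API.
 * The node ID below is an assumption -- check `numactl -H` for the
 * CPU-less node on your system. Build with: gcc cxl_place.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define CXL_NODE 2                       /* assumed node ID of the CXL expander */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA support not available on this system\n");
        return 1;
    }

    size_t cold_size = 1UL << 30;        /* 1 GiB of infrequently accessed data */
    void *cold = numa_alloc_onnode(cold_size, CXL_NODE);
    if (!cold) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(cold, 0, cold_size);          /* touch pages so they are actually backed */
    /* ... cold data lives on the CXL node; hot data stays in regular DRAM ... */

    numa_free(cold, cold_size);
    return 0;
}

The point of HMSDK is precisely to avoid hard-coding placement decisions like this – it profiles access frequency and moves pages between the DRAM and CXL tiers on the application's behalf.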

The CMM-DDR5 CXL memory module comes in the EDSFF form factor (E3.S 2T) with a PCIe 5.0 x8 host interface. The internal memory is based on 1α technology DRAM, and the device promises DDR5-class bandwidth and latency within a single NUMA hop. As these memory modules are meant for use in data centers and enterprises, the firmware includes features for RAS (reliability, availability, and serviceability) along with secure boot and other management features.
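On the host side, a CXL memory expander like this typically appears to Linux as a memory-only NUMA node. As a quick, vendor-agnostic way to see that "single NUMA hop" in practice, a small libnuma sketch can dump the distance matrix the OS reports – the CXL module shows up as the extra, CPU-less entry:

/* Sketch: print the OS-reported NUMA distance matrix. With a CXL memory
 * expander installed, an extra CPU-less node appears whose distance from
 * the CPU nodes reflects the additional hop.
 * Build with: gcc numa_topo.c -lnuma
 */
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA support not available\n");
        return 1;
    }

    int max = numa_max_node();
    for (int from = 0; from <= max; from++) {
        for (int to = 0; to <= max; to++)
            printf("%4d", numa_distance(from, to));   /* 10 == local node */
        printf("\n");
    }
    return 0;
}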

SK hynix was also demonstrating Niagara 2.0 – a hardware solution (currently based on FPGAs) to enable memory pooling and sharing – i.e., connecting multiple CXL memories so that different hosts (CPUs and GPUs) can optimally share their capacity. The previous version only allowed capacity sharing, but the latest version enables sharing of data as well. SK hynix had presented these solutions at CXL DevCon 2024 earlier this year, but some progress appears to have been made in finalizing the specifications of the CMM-DDR5 at FMS 2024.

Microchip and Micron Demonstrate CZ120 CXL Memory Expansion Module

Micron had unveiled the CZ120 CXL Memory Expansion Module last year, based on the Microchip SMC 2000 series CXL memory controller. At FMS 2024, Micron and Microchip demonstrated the module on a Granite Rapids server.

Additional insights into the SMC 2000 controller were also provided.

The CXL memory controller incorporates DRAM die failure handling, and Microchip also provides diagnostics and debug tools to analyze failed modules. The memory controller supports ECC, which forms part of the enterprise-class RAS feature set of the SMC 2000 series. Its flexibility ensures that SMC 2000-based CXL memory modules using DDR4 can complement the main DDR5 DRAM in servers that support only the latter.

Marvell Announces Structera CXL Product Line

A few days prior to the start of FMS 2024, Marvell had announced a new CXL product line under the Structera branding. At FMS 2024, we had a chance to discuss this new line with Marvell and gather some additional insights.

Unlike other CXL device solutions focusing on memory pooling and expansion, the Structera product line also includes a compute accelerator part in addition to a memory-expansion controller. All of these are built on TSMC's 5nm technology.

The compute accelerator part, the Structera A 2504 (A for Accelerator), is a PCIe 5.0 x16 CXL 2.0 device with 16 integrated Arm Neoverse V2 (Demeter) cores running at 3.2 GHz. It incorporates four DDR5-6400 channels with support for up to two DIMMs per channel, along with in-line compression and decompression. The integration of powerful server-class Arm CPU cores means that the CXL memory expansion part scales the memory bandwidth available per core, while also scaling the compute capabilities.
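To see why placing cores next to the expander's DRAM is attractive, a rough back-of-the-envelope comparison (our own arithmetic, not Marvell's figures) of the device-local DDR5 bandwidth against what a PCIe 5.0 x16 link can carry to the host looks like this:

/* Back-of-the-envelope comparison (not Marvell figures): device-local DRAM
 * bandwidth vs. what the CXL link can deliver to the host.
 */
#include <stdio.h>

int main(void)
{
    /* 4 x DDR5-6400 channels, 8 bytes per transfer per channel */
    double dram_gbps = 4 * 6400e6 * 8 / 1e9;   /* ~204.8 GB/s on the device      */

    /* PCIe 5.0: 32 GT/s per lane, ~4 GB/s per lane per direction (pre-overhead) */
    double link_gbps = 16 * 4.0;               /* ~64 GB/s over the x16 CXL link */

    printf("on-device DRAM: %.1f GB/s, CXL link: %.1f GB/s (ratio %.1fx)\n",
           dram_gbps, link_gbps, dram_gbps / link_gbps);
    return 0;
}

The sixteen on-device cores can stream from the four DDR5-6400 channels at roughly three times the rate the host could pull over the CXL link, which is the essence of the near-memory compute argument.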

Applications such as deep-learning recommendation models (DLRM) can benefit from the compute capability available in the CXL device. The scaling in bandwidth availability is also accompanied by reduced energy consumption for the workload. The approach also contributes towards disaggregation within the server for a better thermal design as a whole.

The Structera X 2404 (X for eXpander) will be available as a PCIe 5.0 device (either a single x16 link or two x8 links) with four DDR4-3200 channels (up to three DIMMs per channel). Features such as in-line (de)compression, encryption / decryption, and secure boot with hardware support are present in the Structera X 2404 as well. Compared to the 100 W TDP of the Structera A 2504 accelerator, Marvell expects this part to consume around 30 W. The primary goal of this part is to allow hyperscalers to recycle DDR4 DIMMs (up to 6 TB per expander) while increasing server memory capacity.

Marvell also has a Structera X 2504 part that supports four DDR5-6400 channels (with two DIMMs per channel for up to 4 TB per expander). Other aspects remain the same as those of the DDR4-recycling part.

The company stressed some unique aspects of the Structera product line – the inline compression optimizes available DRAM capacity, and the three-DIMM-per-channel support of the DDR4 expander maximizes the amount of DRAM per expander (compared to competing solutions). The 5nm process lowers the power consumption, and the parts support accesses from multiple hosts. The integration of Arm Neoverse V2 cores appears to be a first for a CXL accelerator, and enables the delegation of compute tasks to improve overall system performance.

While Marvell announced specifications for the Structera parts, it does appear that sampling is at least a few quarters away. One of the interesting aspects of Marvell's roadmaps / announcements in recent years has been their focus on creating products tuned to the demands of high-volume customers. The Structera product line is no different – hyperscalers are hungry to recycle their DDR4 memory modules and apparently can't wait to get their hands on the expander parts.

CXL is only beginning its slow ramp-up, and the hockey-stick segment of the growth curve is definitely not in the near term. However, as more host systems with CXL support get deployed, products like the Structera accelerator line start to make sense from a server efficiency viewpoint.
