At its 2024 Analyst Day, Marvell announced a custom high-bandwidth memory (CHBM) solution for its custom XPUs designed for artificial intelligence applications, allowing memory capacity, die size, and cost to be tailored to each design. CHBM will be compatible with Marvell's custom XPUs but will not be part of the HBM standard defined by JEDEC, at least initially.
"Breaking News: @Marvell is partnering with leading HBM vendors to develop customized HBM interfaces to enable faster, smaller and lower-power die-to-die interconnects. #Marvell2024AIDay" pic.twitter.com/rnQ1ZZSox8 (December 10, 2024)
Marvell’s custom HBM solution allows the interface and stack to be tailored to specific applications, though the company has yet to reveal details. One of Marvell’s goals is to reduce the area the industry-standard HBM interface occupies inside the processor, freeing up space for compute and additional functionality. The company claims that with its proprietary die-to-die I/O it can not only pack up to 25% more logic into its custom XPUs, but also potentially fit up to 33% more CHBM stacks alongside the compute dies, increasing the amount of DRAM available to the processor. In addition, the company expects to cut memory interface power consumption by up to 70%.
Because Marvell’s CHBM does not rely on JEDEC-specified standards, on the hardware side it will require new controllers, customizable physical interfaces, new die-to-die interfaces, and a complete overhaul of the HBM base die. Marvell’s new die-to-die HBM interface will offer a bandwidth density of 20 Tbps/mm (2.5 TB/s per millimeter), a significant increase over the 5 Tbps/mm (625 GB/s per millimeter) offered by current HBM, according to a slide from the company’s Analyst Day presentation. Over time, Marvell envisions the interface reaching 50 Tbps/mm (6.25 TB/s per millimeter).
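For readers checking the figures above, the parenthetical values are a straight bits-to-bytes conversion of the per-millimeter bandwidth densities from Marvell's slides. A minimal sketch of that arithmetic (the function name and labels are illustrative, not from Marvell):

```python
# Convert bandwidth density from Tbps/mm (terabits per second per
# millimeter of die edge) to TB/s per mm (terabytes per second per mm).
BITS_PER_BYTE = 8

def tbps_per_mm_to_tb_per_s_per_mm(tbps_per_mm: float) -> float:
    """Divide by 8 to go from terabits to terabytes."""
    return tbps_per_mm / BITS_PER_BYTE

figures = [
    ("current HBM", 5),    # 625 GB/s per mm
    ("Marvell CHBM", 20),  # 2.5 TB/s per mm
    ("future target", 50), # 6.25 TB/s per mm
]

for label, density in figures:
    tb_s = tbps_per_mm_to_tb_per_s_per_mm(density)
    print(f"{label}: {density} Tbps/mm = {tb_s} TB/s per mm")
```

Running this reproduces the 625 GB/s, 2.5 TB/s, and 6.25 TB/s per millimeter figures quoted in the article.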
Marvell did not specify the width of its CHBM interface and revealed few other details about the solution, saying only that it “enhances the XPU by serializing and accelerating the I/O interface between its internal AI compute accelerator die and the HBM base die.” This somewhat hints at a serialized, narrower interface compared with industry-standard HBM3E or HBM4 solutions. However, it appears the CHBM interface will be customizable in this respect as well.
“Enhancing the XPU by customizing HBM for specific performance, power consumption and total cost of ownership is the latest step in a new paradigm of how AI accelerators are designed and delivered,” said Will Chu, senior vice president and general manager of the custom compute and storage group at Marvell. “We’re grateful to be working with leading memory designers to accelerate this revolution and help cloud data center operators continue to scale their XPUs and infrastructure for the AI era.”
Collaboration with Micron, Samsung, and SK hynix is critical to the success of Marvell’s CHBM, as it sets the stage for relatively widespread adoption of custom high-bandwidth memory.