This week, Micron provided an update on its HBM4 and HBM4E plans. The new-generation HBM4 memory with a 2048-bit interface is expected to reach mass production in 2026, with HBM4E following in the subsequent years. In addition to offering higher data transfer rates than HBM4, HBM4E will introduce the option to customize its base die, marking a paradigm shift for the industry.
There’s no doubt that HBM4’s 2048-bit memory interface is impressive. However, HBM4E promises to go further, allowing Micron to offer customized base dies to select customers, providing a more optimized solution with potential additional features. The customized logic dies will be manufactured by TSMC on advanced nodes, allowing them to pack in more cache and logic to improve performance and functionality.
“HBM4E will bring a paradigm shift to the memory business by using TSMC’s advanced logic foundry manufacturing process to customize logic base wafers for certain customers,” said Micron President and CEO Sanjay Mehrotra. “We expect this customization capability to drive improved financial results for Micron.”
For now, we can only speculate about how Micron plans to customize its HBM4E base logic die. Nonetheless, the list of possibilities is long: basic processing capabilities such as enhanced caching, interface protocols tailored to specific applications (AI, HPC, networking, etc.), memory-to-memory transfer capabilities, variable interface widths, advanced voltage scaling and power gating, and custom ECC and/or security algorithms. Keep in mind that this is speculative; it is currently unclear whether the JEDEC standard will support such customization.
Micron says development of HBM4E products is well underway with multiple customers, so we can expect different customers to adopt different configurations of the base die. This marks a step toward custom memory solutions for bandwidth-hungry artificial intelligence, high-performance computing, networking, and other applications. It remains to be seen how Micron’s custom HBM4E solutions will stack up against Marvell’s custom HBM (cHBM4) solution launched earlier this month.
Micron’s HBM4 will use DRAM built on the company’s proven 1β process (its fifth-generation 10nm-class node), stacked on a base die with a 2048-bit-wide interface and a data transfer rate of approximately 6.4 GT/s. This yields a peak theoretical bandwidth of 1.64 TB/s per stack. Micron plans to ramp HBM4 production in 2026, which coincides with Nvidia’s Vera Rubin and AMD’s Instinct MI400-series GPUs for artificial intelligence and high-performance computing applications. Interestingly, both Samsung and SK Hynix are rumored to use sixth-generation 10nm-class manufacturing technology for their HBM4 products.
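As a sanity check on that figure, peak theoretical bandwidth for an HBM stack is simply the interface width multiplied by the per-pin data rate. Below is a minimal sketch of that arithmetic; the function name and the use of decimal terabytes are our own conventions, not Micron’s.

```python
def peak_bandwidth_tbps(interface_width_bits: int, data_rate_gtps: float) -> float:
    """Peak theoretical bandwidth of an HBM stack in TB/s (decimal units).

    interface_width_bits: width of the stack interface in bits (2048 for HBM4).
    data_rate_gtps: per-pin data rate in gigatransfers per second.
    """
    bits_per_second = interface_width_bits * data_rate_gtps * 1e9  # total bits transferred per second
    return bits_per_second / 8 / 1e12                              # bits -> bytes -> terabytes

# HBM4 per Micron's stated figures: 2048-bit interface at ~6.4 GT/s
print(f"{peak_bandwidth_tbps(2048, 6.4):.2f} TB/s")  # prints 1.64 TB/s per stack
```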
Micron also revealed this week that shipments of its 8-Hi HBM3E devices for Nvidia Blackwell processors are in full swing. The company’s 12-Hi HBM3E stacks are being tested by its major customers, who are reportedly pleased with the results.
“We continue to receive positive feedback from key customers about the Micron HBM3E 12-Hi stack, which offers best-in-class power consumption that is 20% lower than the competitor’s HBM3E 8-Hi, despite the fact that the Micron product offers 50% more memory capacity and industry-leading performance,” said Mehrotra.
The 12-Hi HBM3E stacks are expected to be used by AMD’s Instinct MI325X and MI355X accelerators, as well as Nvidia’s Blackwell B300-series compute GPUs for AI and HPC workloads.