- HBM4 chips will power Tesla’s advanced artificial intelligence ambitions
- Tesla’s Dojo supercomputer will integrate high-performance HBM4 chips
- Samsung and SK Hynix compete for Tesla AI memory chip orders
As the high-bandwidth memory (HBM) market continues to grow and is expected to reach $33 billion by 2027, competition between Samsung and SK Hynix is intensifying.
Tesla has reportedly contacted Samsung and SK Hynix, South Korea’s two largest memory chip manufacturers, to request samples of their next-generation HBM4 chips, further fanning the flames.
Now, a report from the Korea Economic Daily claims Tesla plans to evaluate the samples for possible integration into its custom-built Dojo supercomputer, a tool designed for the company’s AI ambitions, including its self-driving car technology.
Tesla’s ambitious AI and HBM4 plans
The Dojo supercomputer is powered by Tesla’s proprietary D1 AI chip, which helps train the neural networks required for its Full Self-Driving (FSD) capabilities. This latest request indicates that Tesla is preparing to replace its older HBM2e chips with the more advanced HBM4, which offers significant improvements in speed, power efficiency, and overall performance. The company is also expected to incorporate HBM4 chips into its artificial intelligence data centers and future autonomous vehicles.
Longtime rivals in the memory chip market, Samsung and SK Hynix, are both preparing HBM4 chip prototypes for Tesla. These companies are also actively developing customized HBM4 solutions for major US technology companies such as Microsoft, Meta, and Google.
According to industry insiders, SK Hynix remains the leader in the high-bandwidth memory (HBM) market, supplying HBM3e chips to NVIDIA and holding a significant market share. However, Samsung is quickly closing the gap, forming partnerships with companies such as Taiwan Semiconductor Manufacturing Company (TSMC) to produce key components for its HBM4 wafers.
SK Hynix seems to be making progress with its HBM4 chips. The company claims its solution delivers 1.4 times the bandwidth of HBM3e while consuming 30% less power. The HBM4 chip is expected to have a bandwidth of more than 1.65 TB/s and consume less power, providing the performance and efficiency needed to train large-scale AI models using Tesla’s Dojo supercomputer.
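Taking the article’s figures at face value, the claimed gains can be sanity-checked with simple arithmetic. The sketch below treats the 1.4x bandwidth, 30% power reduction, and 1.65 TB/s numbers as stated assumptions rather than confirmed specifications:

```python
# Rough sanity check of the reported HBM4 figures.
# All constants are the article's claims, not official specs.

HBM4_BANDWIDTH_TBPS = 1.65   # claimed HBM4 per-stack bandwidth
BANDWIDTH_GAIN = 1.4         # claimed gain over HBM3e
POWER_REDUCTION = 0.30       # claimed 30% lower power than HBM3e

# Implied HBM3e baseline bandwidth from the 1.4x claim
hbm3e_bandwidth = HBM4_BANDWIDTH_TBPS / BANDWIDTH_GAIN
print(f"Implied HBM3e bandwidth: ~{hbm3e_bandwidth:.2f} TB/s")

# Bandwidth per watt scales by gain / (1 - power reduction)
perf_per_watt_gain = BANDWIDTH_GAIN / (1 - POWER_REDUCTION)
print(f"Bandwidth-per-watt improvement: ~{perf_per_watt_gain:.1f}x")
```

Notably, combining the two claims implies roughly a doubling of bandwidth per watt, which is the metric that matters most for power-constrained AI training clusters.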
The new HBM4 chips are also expected to feature a logic die at the bottom of the die stack that acts as a control unit for the memory chips. This logic chip design enables faster data processing and greater energy efficiency, making HBM4 ideal for Tesla’s artificial intelligence-driven applications.
Both companies are expected to accelerate HBM4 development, with SK Hynix aiming to deliver chips to customers by the end of 2025, which could help it secure its competitive advantage in the global HBM market.
Via TrendForce