AI, power availability and Intel’s future
January 8, 2025

6) Liquid cooling takes off

The heat generated by artificial intelligence systems has already created a need for liquid cooling, because air cooling is simply not enough. And it's not just GPUs that run hot; the newest Intel and AMD processors get pretty hot, too. I expect to see increased use of stand-alone liquid cooling systems that can be deployed in existing data centers without major retrofits. I also think HPE and Dell will finally build their own liquid cooling offerings, similar to Lenovo's Project Neptune. Until now, HPE and Dell have been content to leave liquid cooling to third parties, but they may finally have to do it themselves.

7) Intel is splitting up

There is simply no way around this. Intel should spin off its manufacturing business the way AMD did in 2008. It was expensive, painful, and necessary for long-term success. Gelsinger simply didn't have the bandwidth to manage both Intel's foundries and Intel's product lines, and everything suffered as a result: the foundry business got off to a slow start, the chip business lagged behind AMD, and Gelsinger's tenure was cut short. It's time to break free from the fabs, Intel.

8) Continued forecasts of a mass exodus from on-premises data centers

This completely inaccurate forecast will keep being made, and it will keep being wrong. There are too many reasons to maintain an on-premises data center, starting with data confidentiality and integrity. Data repatriation from the cloud back to on-premises infrastructure happens every year. On-premises data centers will die off around the same time mainframes do.

9) GPU utilization becomes paramount

Nvidia shows no signs of reducing power consumption, so it's up to others to make these devices run as efficiently as possible. That means maximizing hardware utilization and scaling. As a result, maximizing GPU utilization will become a key design goal for modern data centers. This will drive innovation in hardware and software to support the infrastructure needed for training and to minimize the delays, stalls and other issues that cause pauses in training. Success will be measured by how efficiently data centers can use their GPU resources; a simple way to keep an eye on that is sketched below.
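For the curious, NVIDIA's NVML library (exposed in Python through the pynvml package) reports per-device utilization and power draw, which is the raw data behind any utilization metric. What follows is a minimal, illustrative sketch, not a production monitoring tool; the sample count and two-second interval are arbitrary choices of mine.

# Minimal sketch: sample per-GPU utilization and power draw via NVML (pynvml).
# The number of samples and the polling interval are illustrative choices.
import time
import pynvml

pynvml.nvmlInit()
try:
    device_count = pynvml.nvmlDeviceGetCount()
    for _ in range(5):                      # take a handful of samples
        for i in range(device_count):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)   # percent busy
            power_mw = pynvml.nvmlDeviceGetPowerUsage(handle)     # milliwatts
            print(f"GPU {i}: compute {util.gpu}%, memory {util.memory}%, "
                  f"power {power_mw / 1000:.0f} W")
        time.sleep(2)
finally:
    pynvml.nvmlShutdown()

Sustained low compute utilization during training is usually a sign of input-pipeline or communication stalls, which is exactly the kind of waste data center operators will be trying to engineer away.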

10) Power constraints affect data center locations

With nearly 500 data centers in the Virginia area, it's safe to say the region is reaching its limit. The same goes for Texas and Santa Clara. Demand for large-scale processing for artificial intelligence, data analytics and quantum computing will change where new data centers are built. The good news is that these workloads, especially AI training, are not latency sensitive, so they can afford to be located in remote areas where land is plentiful and power is cheap. This will mostly apply to data centers designed for large-scale computing, so there's no need to worry about your colocation provider setting up shop in the Sierras.
