Jensen Huang claims Nvidia’s AI chips are outpacing Moore’s Law
January 10, 2025

Perspective: Challenging conventional wisdom, Nvidia CEO Jensen Huang said his company’s AI chips are outpacing historical performance gains established by Moore’s Law. The announcement, made during his keynote at CES in Las Vegas and repeated in interviews, signals a potential paradigm shift in the world of computing and artificial intelligence.

For decades, Moore’s Law, formulated by Intel co-founder Gordon Moore in 1965, has been the driving force behind progress in computing. Moore predicted that the number of transistors on a chip would roughly double every year (a figure he revised in 1975 to every two years), delivering exponential growth in performance and plummeting costs. In recent years, however, that pace has slowed.
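For intuition, the compounding Moore described takes only a few lines of Python to sketch; the starting transistor count below is a made-up figure for illustration, not a real chip specification:

```python
# Illustrative Moore's Law compounding; the 1e9 starting count is hypothetical.
def projected_transistors(start: float, years: float, doubling_period: float) -> float:
    """Transistor count after `years` if it doubles every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

# Moore's original 1965 prediction: doubling every year -> ~1,024x per decade.
print(projected_transistors(1e9, years=10, doubling_period=1))   # ~1.02e12

# The 1975 revision: doubling every two years -> 32x per decade.
print(projected_transistors(1e9, years=10, doubling_period=2))   # 3.2e10
```

The gap between those two cadences is worth keeping in mind when weighing the 1,000x-in-a-decade claim below.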

Huang, however, painted a different picture of Nvidia’s AI chips. “Our systems are evolving much faster than Moore’s Law,” he said, as TechCrunch reports, pointing to the company’s latest data center superchip, which is said to be more than 30 times faster at running AI inference workloads than its predecessor.

Huang attributed the accelerated progress to Nvidia’s comprehensive approach to chip development. “We can create the architecture, the chip, the system, the libraries and the algorithms all at the same time,” he explained. “If you do that, you can move faster than Moore’s Law because you can innovate across the entire stack.”

This strategy has apparently produced impressive results. Huang said Nvidia’s AI chips today deliver 1,000 times the performance of those the company produced a decade ago, far ahead of the roughly 32x gain that a two-year doubling cadence would yield over the same period.

Rejecting the idea that AI progress has stalled, Huang identified three active scaling laws for AI: pre-training, post-training and test-time compute. He stressed the importance of test-time compute, which occurs during the inference phase and gives an AI model more time to “think” about each question.
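To make the idea concrete, one simple and widely used form of test-time compute is best-of-N sampling: spend extra inference compute generating several candidate answers, then keep the best-scoring one. The sketch below is a toy illustration; `generate_answer` and `score_answer` are hypothetical stand-ins for a real model and verifier, not any Nvidia or OpenAI API:

```python
import random

# Toy illustration of one form of test-time compute: best-of-N sampling.
# `generate_answer` and `score_answer` are hypothetical placeholders.

def generate_answer(prompt: str, seed: int) -> str:
    rng = random.Random(seed)
    return f"candidate-{rng.randint(0, 999)}"

def score_answer(prompt: str, answer: str) -> float:
    # Stand-in for a verifier or reward model that judges answer quality.
    return random.Random(hash((prompt, answer))).random()

def best_of_n(prompt: str, n: int) -> str:
    # Larger n means more inference-time "thinking" per question,
    # at the cost of proportionally more compute.
    candidates = [generate_answer(prompt, seed) for seed in range(n)]
    return max(candidates, key=lambda ans: score_answer(prompt, ans))

print(best_of_n("What is 17 * 24?", n=8))
```

Techniques in this family are one reason inference cost scales with answer quality, which is exactly the cost curve Huang argues faster chips will bend.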

During his CES keynote, Huang showed off Nvidia’s latest data center superchip, the GB200 NVL72, which he said delivers 30 to 40 times faster performance on AI inference workloads than its predecessor, the H100. That jump, Huang argued, will make expensive AI reasoning models like OpenAI’s o3 more affordable over time.
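The economics follow from throughput: if the price and power draw of a system stay roughly flat, the cost of serving a query falls in proportion to the speedup. The dollar figures below are hypothetical, purely for the arithmetic, and are not Nvidia or OpenAI pricing:

```python
# Hypothetical back-of-envelope: per-query cost under a throughput speedup.
# The $1.00 baseline is made up for illustration; it is not real pricing.

def cost_after_speedup(cost_per_query: float, speedup: float) -> float:
    # Assumes chip price and power stay roughly constant, so cost per
    # query scales inversely with throughput.
    return cost_per_query / speedup

print(cost_after_speedup(1.00, 30))  # $1.00/query -> ~$0.033
print(cost_after_speedup(1.00, 40))  # $1.00/query -> $0.025
```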

“The direct and immediate solution for test-time compute, both in performance and cost affordability, is to increase our computing capability,” Huang said. He added that, in the long term, AI reasoning models can be used to generate better data for the pre-training and post-training of AI models.

Nvidia’s announcements come at a pivotal time for the artificial intelligence industry, with AI companies such as Google, OpenAI and Anthropic relying on its chips and their continued performance gains. Moreover, as the tech industry’s focus shifts from training to inference, questions are being raised about whether Nvidia’s high-end products will maintain their dominance. Huang’s statements suggest that Team Green is not only keeping up with the times, but also setting new standards for performance and cost-effectiveness.

While early versions of AI reasoning models such as OpenAI’s o3 have been expensive to operate, Huang expects the trend of sharply falling AI model costs to continue, driven by computing breakthroughs from hardware companies such as Nvidia.
