Nvidia CEO defends his moat as AI labs change how to improve their AI models


Nvidia said on Wednesday that it generated net income of more than $19 billion during the most recent quarter, but that did not reassure investors that its rapid growth would continue. On the earnings call, analysts pressed CEO Jensen Huang on how Nvidia would fare if tech companies started using new methods to improve their AI models.

The method behind OpenAI’s o1 model, often called “test-time compute,” came up a lot. It’s the idea that AI models will give better answers if you give them more time and computing power to “think” through questions. Specifically, it adds more computation to the AI inference phase, which is everything that happens after the user hits the enter button on their prompt.

The Nvidia CEO was asked whether he sees AI model developers shifting to these new methods, and how Nvidia’s older chips would handle AI inference.

Huang told investors that o1, and test-time scaling more broadly, could play a bigger role in Nvidia’s business moving forward, calling it “one of the most exciting developments” and a “new scaling law.” Huang went out of his way to assure investors that Nvidia is well-positioned for the change.

The Nvidia CEO’s statements are consistent with what Microsoft CEO Satya Nadella said on stage at a Microsoft event on Tuesday: o1 represents a new way for the AI industry to improve its models.

This is important for the chip industry because it places greater emphasis on AI inference. While Nvidia’s chips are the gold standard for training AI models, a wide range of well-funded startups, such as Groq and Cerebras, are making ultra-fast AI inference chips. Inference could be a more competitive space for Nvidia to operate in.

Despite recent reports that improvements in generative models are slowing, Huang told analysts that AI model developers are still improving their models by adding more computation and data during the pre-training phase.

Anthropic CEO Dario Amodei also said Wednesday, during an on-stage interview at the Cerebral Valley summit in San Francisco, that he does not see a slowdown in model development.

“Scaling of foundation model pre-training is intact and it’s continuing,” Huang said on Wednesday. “You know, this is an empirical law, not a fundamental law of physics, but the evidence is that it continues to scale. What we’re learning, however, is that it’s not enough.”

This is certainly what Nvidia investors have wanted to hear: the chipmaker’s shares have risen by more than 180% in 2024 on sales of the AI chips that OpenAI, Google, and Meta use to train their models. However, Andreessen Horowitz partners and several other AI executives have previously said that these approaches are already starting to show diminishing returns.

Huang noted that most of Nvidia’s computing workloads today revolve around pre-training AI models, not inference, but he attributed that more to where the AI world is today. One day, he said, there will simply be more people running AI models, which means more AI inference will occur. Huang noted that Nvidia is the largest inference platform in the world today, and that the company’s scale and reliability give it a significant advantage over startups.

“Our hopes and dreams are that one day, the world does a ton of inference, and that’s when AI will have really succeeded,” Huang said. “Everyone knows that if they innovate on top of CUDA and the Nvidia architecture, they can innovate more quickly, and they know that everything should work.”
