Nvidia’s Groq bet shows that the economics of AI chip-building are still unsettled
Nvidia built its AI empire on GPUs. But its $20 billion bet on Groq suggests the company isn’t convinced GPUs alone will dominate the most important phase of AI yet: running models at scale, known as inference. 

The battle over AI inference, of course, is a battle over its economics. Once a model is trained, every useful thing it does—answering a query, generating code, recommending a product, summarizing a document, powering a chatbot, or analyzing an image—happens during inference. That’s the moment AI turns from a sunk cost into a revenue-generating service, with all the accompanying pressure to reduce costs, shrink latency (how long you have to wait for an AI to answer), and improve efficiency.

That pressure is exactly why inference has become the industry’s next battleground for potential profits—and why Nvidia, in a deal announced just before the Christmas holiday, licensed technology from Groq, a startup building chips designed specifically for fast, low-latency AI inference, and hired most of its team, including CEO and founder Jonathan Ross.

Inference is AI’s ‘industrial revolution’

Nvidia CEO Jensen Huang has been explicit about the challenge of inference. While he says Nvidia is “excellent at every phase of AI,” he told analysts at the company’s Q3 earnings call in November that inference is “really, really hard.” Far from a simple case of one prompt in and one answer out, modern inference must support ongoing reasoning, millions of concurrent users, guaranteed low latency, and relentless cost constraints. And AI agents, which have to handle multiple steps, will dramatically increase inference demand and complexity—and raise the stakes of getting it wrong. 

“People think that inference is one shot, and therefore it’s easy. Anybody could approach the market that way,” Huang said. “But it turns out to be the hardest of all, because thinking, as it turns out, is quite hard.”

Nvidia’s support of Groq underscores that belief, and signals that even the company that dominates AI training is hedging on how inference economics will ultimately shake out. 

Huang has also been blunt about how central inference will become to AI’s growth. In a recent conversation on the BG2 Podcast, Huang said inference already accounts for more than 40% of AI-related revenue—and predicted that it is “about to go up by a billion times.”

“That’s the part that most people haven’t completely internalized,” Huang said. “This is the industry we were talking about. This is the industrial revolution.”

The CEO’s confidence helps explain why Nvidia is willing to hedge aggressively on how inference will be delivered, even as the underlying economics remain unsettled.

Nvidia wants to corner the inference market

Nvidia is hedging its bets to make sure it has a hand in every part of the market, said Karl Freund, founder and principal analyst at Cambrian-AI Research. “It’s a little bit like Meta acquiring Instagram,” he explained. “It’s not that they thought Facebook was bad, they just knew that there was an alternative that they wanted to make sure wasn’t competing with them.”

That is so even though Huang had made strong claims about the inference economics of Nvidia’s existing platform. “I suspect they found that it either wasn’t resonating as well with clients as they’d hoped, or perhaps they saw something in the chip memory-based approach that Groq and another company called D-Matrix have,” said Freund, referring to another fast, low-latency AI chip startup, backed by Microsoft, that recently raised $275 million at a $2 billion valuation.

Freund said Nvidia’s move into Groq could lift the entire category. “I’m sure D-Matrix is a pretty happy startup right now, because I suspect their next round will go at a much higher valuation thanks to the [Nvidia-Groq deal],” he said. 

Other industry executives say the economics of AI inference are shifting as AI moves beyond chatbots into real-time systems like robots, drones, and security tools. Those systems can’t afford the delays that come with sending data back and forth to the cloud, or the risk that computing power won’t always be available. Instead, they favor specialized chips like Groq’s over centralized clusters of GPUs. 

Behnam Bastani, CEO and founder of OpenInfer, which focuses on running AI inference close to where data is generated—such as on devices, sensors, or local servers rather than distant cloud data centers—said his startup is targeting these kinds of applications at the “edge.”

The inference market, he emphasized, is still nascent, and Nvidia is looking to corner it with the Groq deal. With inference economics unsettled, he said, Nvidia is trying to position itself as the company that spans the entire inference hardware stack, rather than betting on a single architecture.

“It positions Nvidia as a bigger umbrella,” he said. 

This story was originally featured on Fortune.com
