Groq’s chief executive says the hunger for AI inference is overwhelming, a claim that comes as the chip startup’s valuation doubled after its latest funding round.
In a new Bloomberg interview, Groq CEO Jonathan Ross says the company's hardware aims to compete with Nvidia's GPUs and Google's TPUs in powering the next phase of artificial intelligence.
Ross highlights that inference (running models after they're trained) is becoming the decisive battleground.
“The bottom line is that the demand for inference is insatiable. The total amount of capacity that people are trying to deploy is mind-boggling, the numbers that people are putting up, and that’s only growing.”
The Groq executive also says the firm is well positioned to capture the soaring demand for inference chips.
“And in our case, we don’t feel that we’re competitive. We feel that we’re differentiated. If you’re not differentiated, there’s no point. And in our case, what we’re able to do is we’re able to provide price performance.
“So when you’re training these models, you don’t need the chips to run particularly fast. You just need them to have a lot of throughput. And in our case, we’re able to provide both speed and throughput, lowering the cost. And when you’re spending money on every single query, the difference in that cost can be the difference between profitability and losing money.”
Ross also says the demand for generative AI compute appears to have no ceiling.
“It’s insatiable for the reason that, unlike with the industrial age, where as you’re producing oil, and compute is the new oil, let’s be clear about that, you can only consume as much oil as you had hardware built that could consume it. In the generative age, every model that you train, you can scale it up to the amount of compute that you have available. As such, people are scaling as much as they can possibly get compute.”
Groq recently raised $750 million in a fresh financing round, which pushed its post-money valuation to $6.9 billion, more than doubling its worth in just over a year.