This smart move by Nvidia accomplishes two goals at once: it eliminates a potential competitor and obtains a new chip technology to offer its customers.
On Friday, artificial intelligence (AI) chip start-up Groq announced via a very brief press release that it “has entered into a non-exclusive licensing agreement with Nvidia (NVDA) for Groq’s inference technology.”
The deal also includes Jonathan Ross, Groq’s founder and CEO; Sunny Madra, Groq’s president; and other members of the Groq team joining Nvidia to “help advance and scale the licensed technology.”
This was a smart move by Nvidia, in my view. Nvidia has tons of cash, and it makes sense to use it to accomplish two goals at once: eliminate a potential competitor and obtain a new chip technology to offer its customers.
Here’s what investors should know.
Image source: Getty Images.
As close to an acquisition as it gets
That Nvidia is not only entering into a non-exclusive license with Groq but also hiring its founder-CEO, its president, and reportedly key engineering talent makes this deal an “acqui-hire,” about as close to a full-fledged acquisition as it gets without being one.
Granted, Groq will reportedly continue to operate, with its chief financial officer stepping into the CEO role, and will keep running its GroqCloud service. However, with the founder, the mastermind behind the company’s tech, moving to Nvidia, it appears that future advances in Groq’s technology will now be made under Nvidia.
No doubt, Nvidia structured the deal to avoid regulatory scrutiny. The company already dominates the AI chip space, so an outright acquisition that could further increase its current or future market share would likely draw a very close look from regulators.
Nvidia-Groq deal size
The companies involved didn’t disclose the deal’s size, but one major financial outlet has reported it at $20 billion, which would make it Nvidia’s largest deal to date by far. Its previous largest was the $6.9 billion acquisition of high-performance networking specialist Mellanox Technologies, completed in 2020. That acquisition has proved extremely successful, as Nvidia’s networking business is booming.
If the $20 billion figure is accurate, it represents a huge premium over Groq’s most recent valuation. After a $750 million financing round in September, Groq was valued at $6.9 billion, so the reported price would be nearly three times that figure.
Nvidia tried to buy leading central processing unit (CPU) chip designer Arm Holdings in 2020, but that massive deal was called off due to significant antitrust concerns from regulators in the U.S. and elsewhere.

Groq’s chip technology
Groq’s chips are language processing units (LPUs) designed for AI inference. Inference is the second step in the two-step AI process, following training. Training uses vast amounts of data to teach an AI model, while inference is the deployment of that trained model to generate outputs, such as answers to a user’s questions, images, and more.
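To make that distinction concrete, here is a minimal, generic sketch in Python using a small scikit-learn model. It is purely illustrative and has nothing to do with Groq’s or Nvidia’s actual software; it simply shows that training fits a model on a dataset once, while inference runs the finished model again and again to serve new requests, which is the high-volume step that chips like Groq’s LPUs target.

```python
# Illustrative only: a tiny scikit-learn model standing in for an AI model.
# Training happens once over a dataset; inference runs repeatedly on new inputs.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A small public dataset of handwritten digits stands in for "vast amounts of data."
X, y = load_digits(return_X_y=True)
X_train, X_new, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=0)

# Training: compute-heavy and done up front.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

# Inference: lighter per request, but repeated for every user query.
# This is where speed and cost per output matter most.
predictions = model.predict(X_new)
print(predictions[:10])
```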
Nvidia’s graphics processing units (GPUs) have long dominated the AI training stage. They also lead in AI inference, but they face growing competition there. Competitors include Advanced Micro Devices’ (AMD) data center GPUs, as well as the custom application-specific integrated circuits (ASICs) that Broadcom and Marvell Technology are making for big tech customers. Until now, those big tech companies have used their custom AI chips only internally and, where applicable, in their cloud computing services.
However, it was recently reported that social media giant Meta Platforms is considering buying Alphabet’s Google custom AI chips, called tensor processing units (TPUs), for inference in its data centers.
The big tech companies are exploring alternatives to Nvidia’s GPUs for two reasons: to reduce costs and to diversify their supply chains. Relying on just a single supplier for anything can be risky.
Groq’s goal was for its LPUs to become a big player in the AI inference market. The company claims its technology is faster than alternatives for specific inference applications, and it planned to sell its chips for less than Nvidia’s GPUs and perhaps other competing offerings.
It makes sense that Nvidia views Groq’s technology as potentially very valuable and evidently saw the company as a significant potential future rival. Groq founder and CEO Jonathan Ross is widely credited as the creator of Google’s TPU. Granted, he didn’t create the chip alone, but he was the driving force behind the effort to develop it.







