Nvidia Licenses Groq's AI Chip Tech, Hires CEO Jonathan Ross

The chip giant is tapping into Groq's LPU innovation and leadership to bolster its AI capabilities.

Nvidia has entered a non-exclusive licensing agreement with AI chip challenger Groq, acquiring access to its LPU technology. This strategic move also includes hiring Groq's CEO, Jonathan Ross, known for his work on Google's TPUs, signaling a significant shift in the competitive AI chip landscape.

By Mark Ellison

December 25, 2025

4 min read

Key Facts

  • Nvidia has secured a non-exclusive licensing agreement for Groq's LPU technology.
  • Nvidia is hiring Groq's CEO, Jonathan Ross, who previously invented Google's TPU.
  • Groq's LPUs can run LLMs 10 times faster and use one-tenth the energy of GPUs.
  • Groq powers AI apps for over 2 million developers.
  • Groq raised $750 million at a $6.9 billion valuation in September.

Why You Care

Ever wonder how the AI tools you use every day get faster and smarter? What if the biggest player in AI chips just made a move that could supercharge their future? Nvidia, the industry leader, has licensed technology from AI chip challenger Groq and hired its CEO, Jonathan Ross, according to the announcement. This development could directly impact the speed and efficiency of the AI applications you rely on.

What Actually Happened

Nvidia has secured a non-exclusive licensing agreement with Groq, a company known for its Language Processing Unit (LPU) technology, as mentioned in the release. The deal also brings Groq’s CEO, Jonathan Ross, into Nvidia’s fold. Ross is a notable figure in AI chip development; he previously helped create the Tensor Processing Unit (TPU), Google’s specialized AI accelerator chip. Groq’s LPUs are designed to run large language models (LLMs) significantly faster and with less energy: the company reports that its LPUs can run LLMs 10 times faster while using one-tenth the energy of standard GPUs. Groq’s rapid growth is also remarkable; the company reports it now powers AI apps for over 2 million developers, up from approximately 356,000 last year.
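To put the reported figures in perspective, here is a back-of-envelope sketch. The baseline GPU numbers below are hypothetical placeholders for illustration only; the 10x speed and one-tenth energy ratios are the ones Groq reports.

```python
# Illustrative comparison of the reported LPU-vs-GPU ratios.
# Baseline GPU figures are hypothetical; only the 10x speed and
# 1/10 energy ratios come from Groq's reported claims.

gpu_tokens_per_second = 100.0     # hypothetical GPU throughput
gpu_energy_per_token_j = 1.0      # hypothetical GPU energy per token (joules)

lpu_tokens_per_second = gpu_tokens_per_second * 10    # "10 times faster"
lpu_energy_per_token_j = gpu_energy_per_token_j / 10  # "one-tenth the energy"

speedup = lpu_tokens_per_second / gpu_tokens_per_second
energy_gain = gpu_energy_per_token_j / lpu_energy_per_token_j

# Same power draw per second, but 10x the tokens per joule of energy.
print(f"speedup: {speedup}x, energy per token improved: {energy_gain}x")
```

Taken together, the two ratios mean each token of LLM output would cost a tenth of the energy while arriving ten times sooner, under the company's own figures.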

Why This Matters to You

This move by Nvidia is a big deal for anyone interested in AI’s future. It indicates that even dominant players are constantly seeking new innovations. The integration of Groq’s LPU technology could lead to faster and more efficient AI models. Imagine your favorite AI assistant responding almost instantly, or complex generative AI tasks completing in seconds instead of minutes. That is the kind of improvement this collaboration aims to achieve. Do you think this licensing agreement will accelerate AI development across the board?

Consider the practical implications for developers and users alike. If you’re a developer, access to more efficient hardware means you can build more capable AI applications. For end-users, this translates directly into better performance and potentially lower costs for AI-powered services. For example, think about how quickly new AI features are rolled out in your preferred apps. This agreement could speed up that process considerably.

Here are some potential benefits of this collaboration:

  • Faster AI Model Inference: LLMs could run at up to 10 times their current speeds.
  • Reduced Energy Consumption: AI operations become more environmentally friendly.
  • Broader AI Accessibility: More efficient hardware can make AI more affordable.
  • Enhanced Developer Tools: New capabilities for building AI applications.

As CNBC reports, “As tech companies compete to grow their AI capabilities, they need computing power, and Nvidia’s GPUs have emerged as the industry standard.”

The Surprising Finding

The most surprising aspect of this announcement isn’t just the licensing deal itself, but the nature of it. Despite Nvidia’s strong position in the AI chip market, it is licensing technology from a competitor rather than relying solely on internal R&D. This suggests that Groq’s LPU technology offers a truly distinct advantage, specifically in running large language models. It challenges the common assumption that market leaders develop or acquire everything in-house. The company states that Groq’s LPUs can run LLMs 10 times faster while using one-tenth the energy. This efficiency gain is a significant differentiator. It highlights a pragmatic approach from Nvidia, embracing external innovation to maintain its competitive edge in a rapidly evolving field.

What Happens Next

The future will likely involve the integration of Groq’s LPU technology into Nvidia’s ecosystem. We can expect to see initial results within the next 12 to 18 months, perhaps by late 2026 or early 2027. This could manifest as new product lines or enhanced capabilities within existing Nvidia offerings. For example, future generations of Nvidia’s AI accelerators might incorporate LPU-inspired designs for specialized LLM tasks. For readers, this means keeping an eye on announcements from both Nvidia and major cloud providers, as they will likely be early adopters of these advancements. The industry implications are vast, potentially accelerating the development of more complex and responsive AI systems across various sectors. This strategic move could redefine what’s possible in AI processing.
