Niv-AI Boosts GPU Power Efficiency for Data Centers

A new startup aims to unlock wasted power in AI data centers, improving GPU performance and reducing energy costs.

Niv-AI, a Tel Aviv-based startup, has emerged from stealth to tackle significant power inefficiencies in AI data centers. Its technology uses sensors and AI to optimize GPU power usage, preventing throttling and maximizing performance for advanced AI models. That could recapture substantial revenue for data centers.

By Katie Rowan

March 18, 2026

4 min read

Key Facts

  • Niv-AI, a Tel Aviv-based startup, has exited stealth mode to improve GPU power performance in data centers.
  • Data centers currently throttle GPU usage by up to 30% due to millisecond-scale power demand surges.
  • Niv-AI is deploying rack-level sensors to detect and understand GPU power usage at a granular level.
  • The company plans to build an AI model to predict and synchronize power loads across data centers.
  • Niv-AI expects to have operational systems in U.S. data centers within six to eight months.

Why You Care

Ever wonder why your favorite AI tools sometimes feel slower than they should? Or why the future of AI seems to hit invisible power walls? The truth is, a huge amount of electricity is wasted in the massive data centers powering artificial intelligence. This inefficiency can throttle GPU performance by as much as 30%, according to industry observations. What if there were a way to unlock all that squandered energy? A new company, Niv-AI, is stepping in to solve this pressing problem, and your AI experiences could get a whole lot better.

What Actually Happened

Niv-AI, a startup based in Tel Aviv, recently exited stealth mode, according to the announcement. Its mission is to improve power performance for graphics processing units (GPUs) in data centers, the workhorses for training and running AI models. The company was founded last year by CEO Tomer Timor and CTO Edward Kizis and is backed by several prominent venture capital firms, including Glilot Capital and Grove Ventures. The core issue it addresses is how data centers manage power delivery: millisecond-scale power demand surges occur when GPUs switch tasks, forcing data centers either to pay for expensive temporary energy storage or, more commonly, to throttle their GPU usage. Both options reduce the return on investment on these costly chips.

Why This Matters to You

Imagine you’ve invested heavily in AI hardware, only to find you can’t use it to its full potential. That’s the reality for many data centers today. Niv-AI’s approach starts by measuring the problem: the company is deploying rack-level sensors that detect GPU power usage at the millisecond level, as detailed in the blog post. The goal is to pinpoint the power profiles of different deep learning tasks, and that data will then inform mitigation techniques, ultimately allowing data centers to unlock more of their existing capacity. “We just can’t continue building data centers the way we build them now,” said Lior Handelsman, a partner at Grove Ventures and a Niv-AI board member. How much more efficient could your AI applications be if power weren’t the bottleneck?

Consider this impact:

  • Reduced Operational Costs: Less wasted electricity means lower utility bills for data centers.
  • Improved AI Performance: GPUs can run at their full capacity, accelerating model training and inference.
  • Extended Hardware Lifespan: Better power management can reduce stress on expensive GPU hardware.
  • Environmental Benefits: More efficient energy use contributes to a smaller carbon footprint.
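To make the sensor idea concrete, here is a toy sketch of flagging millisecond-scale surges in a sampled GPU power trace. This is purely illustrative, not Niv-AI’s actual method; the 1 kHz sampling rate (one reading per millisecond) and the 150 W jump threshold are assumptions for the example.

```python
# Toy illustration: detect abrupt millisecond-scale power surges in a
# GPU power trace sampled at 1 kHz (one watt reading per millisecond).

def find_surges(trace_watts, jump_threshold=150):
    """Return the sample indices (milliseconds) where power jumps by at
    least `jump_threshold` watts between consecutive 1 ms readings."""
    return [
        i for i in range(1, len(trace_watts))
        if trace_watts[i] - trace_watts[i - 1] >= jump_threshold
    ]

# Synthetic trace: steady draw, then a task switch spikes the load.
trace = [250, 255, 260, 600, 640, 630, 300, 295]
print(find_surges(trace))  # → [3]
```

In a real deployment the threshold and sampling rate would come from measured hardware behavior, and surge events would feed a predictive model rather than a simple list.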

The Surprising Finding

The most surprising aspect of this situation is the sheer scale of the wasted power. “There is so much power squandered in these AI factories,” Nvidia CEO Jensen Huang stated during a keynote speech, adding, “Every unused watt is revenue lost.” This highlights a critical inefficiency that is often overlooked: data centers are leaving money and performance on the table, forced to throttle their expensive GPUs by as much as 30% to avoid power grid issues. The limitation isn’t a lack of processing power in the GPUs themselves but how power is managed and distributed. That challenges the assumption that simply adding more GPUs is the only path to scaling AI; better power management could be just as crucial.
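The “unused watt” point lends itself to a back-of-envelope calculation. The sketch below uses the 30% throttling figure cited in the article; the fleet size and per-GPU wattage are illustrative assumptions, not Niv-AI data.

```python
# Back-of-envelope: power budget stranded when a GPU fleet is throttled.
# All figures except the 30% throttle are hypothetical.

GPU_COUNT = 10_000          # assumed fleet size
TDP_WATTS = 700             # assumed per-GPU draw at full load (H100-class)
THROTTLE_FRACTION = 0.30    # throttling figure cited in the article

stranded_watts = GPU_COUNT * TDP_WATTS * THROTTLE_FRACTION
stranded_megawatts = stranded_watts / 1_000_000

print(f"Stranded capacity: {stranded_megawatts:.1f} MW")  # → 2.1 MW
```

For this hypothetical 10,000-GPU fleet, 2.1 MW of provisioned power budget sits unused, which is the revenue Huang’s remark alludes to.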

What Happens Next

Niv-AI’s roadmap starts with collecting extensive data. The company is deploying sensors with design partners to map GPU power profiles, and the engineers then plan to build an AI model on that data, as the team revealed. The model will predict and synchronize power loads across the data center, acting as a “copilot” for data center engineers. Niv-AI expects to have an operational system in a handful of U.S. data centers within the next six to eight months, which means significant improvements could arrive by late 2026 or early 2027. For you, this could mean faster AI services and more capable generative AI tools in the near future. Companies should consider evaluating their current power management strategies: this approach offers a tangible way to get more from existing infrastructure without massive new investment.
