C2i Secures $15M to Tackle AI Data Center Power Bottleneck

Indian startup C2i, backed by Peak XV, aims to revolutionize power delivery for energy-hungry AI infrastructure.

AI data centers face a critical power bottleneck. Indian startup C2i has raised $15 million in Series A funding to address this issue. Their innovative 'grid-to-GPU' system promises significant energy savings and improved data center efficiency.

By Katie Rowan

February 16, 2026

4 min read

Key Facts

  • C2i raised $15 million in Series A funding, led by Peak XV Partners.
  • The startup's total funding now stands at $19 million.
  • AI data centers' primary limiting factor is shifting from compute to power.
  • Data center electricity consumption is projected to nearly triple by 2035.
  • C2i aims to cut end-to-end energy losses by approximately 10% with its 'grid-to-GPU' system.

Why You Care

Ever worried about the massive energy footprint of AI? As AI models grow, the data centers powering them are hitting a wall. This isn’t about running out of computing power; it’s about running out of power itself. An Indian startup, C2i, just secured $15 million in funding to tackle this problem, and its technology could directly shape the future of AI and the energy efficiency of the systems you use daily.

What Actually Happened

Peak XV Partners has led a $15 million Series A round in C2i, an Indian startup tackling the growing power consumption of AI data centers, according to the announcement. C2i, whose name stands for control, conversion and intelligence, also drew participation from Yali Deeptech and TDK Ventures, bringing the two-year-old startup’s total funding to $19 million, the company reports. The investment comes as global data center energy demand accelerates rapidly. The core issue isn’t generating electricity, but efficiently converting it within the data centers themselves.

Why This Matters to You

Imagine your phone charger wasting 15% of the electricity before it even reaches your device. That’s similar to what’s happening in AI data centers today: high-voltage power must be stepped down thousands of times for GPUs, wasting about 15% to 20% of the energy, C2i co-founder and CTO Preetam Tadeparthy said in an interview. C2i is redesigning power delivery as a single, plug-and-play “grid-to-GPU” system that spans from the data-center bus directly to the processor. The team estimates this integrated approach can cut end-to-end losses by around 10%, which translates to roughly 100 kilowatts saved for every megawatt consumed, with positive knock-on effects for cooling costs and GPU utilization.
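The back-of-the-envelope arithmetic can be sketched in a few lines of Python. This is only an illustrative reading of the figures quoted above, treating the ~10% reduction as a fraction of total power drawn (the interpretation that matches the 100-kW-per-megawatt figure); the function name is our own, not C2i’s.

```python
def power_savings_kw(load_mw: float, loss_reduction: float = 0.10) -> float:
    """Rough estimate of kilowatts saved at a given data-center load,
    assuming a fractional reduction in end-to-end power drawn
    (default 0.10, per the ~10% figure quoted in the article)."""
    return load_mw * 1000 * loss_reduction  # 1 MW = 1000 kW

# A 1 MW load with a 10% reduction saves roughly 100 kW,
# matching the article's quoted figure.
print(power_savings_kw(1.0))  # → 100.0
```

At a 100 MW hyperscale facility, the same arithmetic would imply on the order of 10 MW saved, which is why the knock-on effects for cooling and overall economics are significant.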

Here’s how C2i’s approach could benefit the industry:

  • Reduced Energy Waste: Less electricity is lost during power conversion.
  • Lower Operating Costs: Data centers save on electricity and cooling expenses.
  • Improved GPU Performance: More efficient power delivery can enhance processor stability.
  • Environmental Impact: A smaller carbon footprint for AI operations.

Preetam Tadeparthy highlighted the increasing demands on power systems: “What used to be 400 volts has already moved to 800 volts, and will likely go higher.” That escalation underscores the need for more efficient conversion, and advances like C2i’s could make your future interactions with AI more sustainable.

The Surprising Finding

Here’s an interesting twist: the primary bottleneck for scaling AI data centers is no longer compute but power itself, as mentioned in the release. That challenges the common assumption that processing speed is the main hurdle. Electricity consumption from data centers is projected to nearly triple by 2035, according to a December 2025 report from BloombergNEF, and Goldman Sachs Research likewise estimates a significant increase in data-center power demand. This shift makes companies like C2i, focused on power efficiency, increasingly vital: it’s not just about making faster chips; it’s about making sure those chips have enough power, efficiently delivered.

What Happens Next

C2i, founded in 2024 by former Texas Instruments power executives, is now positioned to roll out its solutions. Expect its integrated power delivery systems to reach data centers over the next 12 to 24 months, likely starting with early adopters looking to cut operating costs and improve efficiency. Imagine a large cloud provider upgrading its infrastructure with C2i’s ‘grid-to-GPU’ system to run more AI workloads on less energy; such deployments could also influence industry standards for data center design. By integrating power conversion, control, and packaging into a holistic system, the company reports, its approach delivers knock-on benefits for cooling costs and overall data-center economics. For data center operators, the practical takeaway is to consider integrated power solutions early in planning.
