Sarvam's New Open-Source AI Models Boost India's Tech Vision

Indian AI lab Sarvam unveils powerful new models, challenging global AI giants and promoting local language support.

Indian AI lab Sarvam has launched a suite of new open-source AI models, including 30-billion and 105-billion parameter options. These models aim to reduce India's reliance on foreign AI and enhance local language applications. The initiative highlights a significant bet on open-source AI viability.

By Mark Ellison

February 19, 2026

3 min read

Key Facts

  • Indian AI lab Sarvam launched new 30-billion and 105-billion parameter open-source AI models.
  • The models include text-to-speech, speech-to-text, and vision capabilities for document parsing.
  • They utilize a mixture-of-experts architecture to reduce computing costs.
  • The models were trained from scratch on trillions of tokens, including multiple Indian languages.
  • Training was supported by India's government-backed IndiaAI Mission, Yotta, and Nvidia.

Why You Care

Will big tech companies always dominate the AI landscape, or can smaller, regional players make a real impact? India's AI lab Sarvam just made a bold move by launching a suite of new open-source AI models. The release could reshape how AI is developed and used globally, and it directly affects your digital experiences, especially if you interact with AI in languages other than English.

What Actually Happened

Sarvam, an Indian AI lab, announced a significant expansion of its open-source AI offerings. The launch occurred at the India AI Impact Summit in New Delhi, according to the announcement. This aligns with India’s goal to lessen its dependence on foreign AI platforms. What’s more, it aims to customize AI models for local languages and specific use cases.

The new lineup includes 30-billion and 105-billion parameter models, the company reports. It also features a text-to-speech model, a speech-to-text model, and a vision model for document parsing. These represent a sharp upgrade from their previous 2-billion-parameter Sarvam 1 model, released in October 2024. The models were trained from scratch, not just fine-tuned on existing systems, Sarvam said.

Why This Matters to You

These new Sarvam models bring several practical implications for users and developers alike. Imagine you are a small business owner in India. You want to create a voice assistant that understands local dialects. These models could make that possible. They offer capabilities tailored for diverse linguistic needs. This is a big step towards more inclusive AI.

Key Features of Sarvam’s New Models

| Feature | 30B Model | 105B Model |
| --- | --- | --- |
| Parameters | 30 billion | 105 billion |
| Architecture | Mixture-of-experts | Mixture-of-experts |
| Context Window | 32,000 tokens (real-time conversational) | 128,000 tokens (complex reasoning) |
| Training Data | ~16 trillion tokens of text | Trillions of tokens (multiple Indian languages) |

Sarvam explained that the 30-billion and 105-billion-parameter models use a mixture-of-experts architecture. This design activates only a fraction of their total parameters at any given time. This significantly reduces computing costs, the team revealed. The 30B model supports a 32,000-token context window. This is ideal for real-time conversational use. The larger 105B model offers a 128,000-token window. This allows for more complex, multi-step reasoning tasks. How might more affordable and localized AI change your daily digital interactions?
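To see why a mixture-of-experts design cuts computing costs, here is a minimal toy sketch (not Sarvam's actual architecture or code): a router scores several small "expert" networks per token and only the top-scoring expert runs, so most of the layer's parameters stay idle on any given input. All sizes and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 8    # toy hidden size
N_EXPERTS = 4  # total experts in the layer
TOP_K = 1      # experts actually activated per token

# Each expert is a simple linear map; the router scores experts per token.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D_MODEL, N_EXPERTS))

def moe_forward(x):
    """Route each token to its top-k experts and mix their outputs."""
    scores = x @ router                            # (tokens, experts)
    top = np.argsort(scores, axis=-1)[:, -TOP_K:]  # chosen expert indices
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = scores[t, top[t]]
        w = np.exp(sel - sel.max())  # softmax over selected experts only
        w /= w.sum()
        for weight, e in zip(w, top[t]):
            out[t] += weight * (x[t] @ experts[e])
    return out, top

tokens = rng.standard_normal((3, D_MODEL))
out, chosen = moe_forward(tokens)
# With TOP_K=1 of 4 experts, only a quarter of the expert
# parameters are exercised per token.
```

Production models use far larger experts and more sophisticated load-balancing, but the principle is the same: total parameter count grows without a proportional rise in per-token compute.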

The Surprising Finding

What’s truly surprising here is Sarvam’s commitment to training these models from scratch. Many open-source initiatives fine-tune existing large models. However, Sarvam chose a more resource-intensive path. They did not fine-tune on existing open-source systems, the company reports. The 30B model was pre-trained on about 16 trillion tokens of text. The 105B model was trained on trillions of tokens spanning multiple Indian languages. This challenges the common assumption that only global giants can afford such extensive foundational training. It shows a deep investment in creating truly unique and localized AI capabilities. This approach is a strong vote of confidence in open-source viability, as mentioned in the release.

What Happens Next

Sarvam’s new open-source AI models are poised to have a significant impact. We can expect to see them integrated into various applications over the next 12-18 months. For example, imagine government services offering accurate voice support in dozens of regional Indian languages, greatly improving accessibility. The startup said the models are designed to support real-time applications, including voice-based assistants and chat systems in Indian languages. Training used resources from India’s government-backed IndiaAI Mission, according to the announcement, indicating strong national support. Developers should start exploring these models for new localized AI solutions. The move could also inspire other nations to pursue similar independent AI development strategies, boosting the global open-source AI ecosystem.
