Governments Face 'Buy vs. Build' Dilemma for LLMs

A new framework helps policymakers decide how to integrate large language models into public services.

Governments are grappling with how to adopt large language models (LLMs) for public services. A new paper offers a strategic framework for deciding between buying existing solutions, building domestic capabilities, or using a hybrid approach. This decision impacts national sovereignty, security, and cost.

By Sarah Kline

February 16, 2026

Key Facts

  • Governments face a strategic choice: buy, build, or hybrid approaches for Large Language Models (LLMs).
  • LLMs can support public-sector applications from citizen services to sensitive state functions.
  • The decision framework evaluates options based on sovereignty, safety, cost, resource capability, cultural fit, and sustainability.
  • 'Building' capabilities does not require governments to act alone; it can involve collaborations with research institutions, universities, and state-owned enterprises.
  • National AI strategies are often pluralistic, combining sovereign, commercial, and open-source models.

Why You Care

Ever wondered how governments will use AI to serve you better? Imagine your local council using AI for citizen services. But who builds that AI? A new paper presents an essential framework to help governments decide whether to ‘buy’ existing large language models (LLMs) or ‘build’ their own. This choice directly affects your data security and the quality of public services.

What Actually Happened

A recent paper, “Buy versus Build an LLM: A Decision Framework for Governments,” addresses a crucial challenge for public sectors worldwide: how to integrate large language models (LLMs), AI systems that understand and generate human-like text. According to the paper, LLMs can support many public-sector applications, from general citizen services to sensitive state functions. The paper outlines the strategic choices involved: buying existing services, building domestic capabilities, or adopting hybrid approaches. These decisions are especially consequential because leading model providers are often foreign corporations.

Why This Matters to You

This framework is vital for ensuring secure and effective public services, helping governments navigate complex decisions about AI adoption. Your personal data could be handled by these systems, so the framework matters beyond policy circles. The paper evaluates options across several dimensions, including sovereignty, safety, cost, and cultural fit. For example, imagine a government using an LLM for healthcare inquiries. Do you want that LLM managed by a foreign company or a domestic entity? This choice impacts data privacy and national security. The paper states that national AI strategies are typically pluralistic, meaning sovereign, commercial, and open-source models often coexist. “Governments may rely on commercial models for non-sensitive or commodity tasks, while pursuing greater control for essential, high-risk or strategically important applications,” the paper states. This approach allows for flexibility while ensuring protection for sensitive areas. How might this affect the digital services you use daily?

Key Decision Dimensions for LLM Adoption:

  • Sovereignty: Control over data, algorithms, and infrastructure.
  • Safety: Ensuring LLM outputs are reliable and secure.
  • Cost: Financial investment for development, deployment, and maintenance.
  • Resource Capability: Availability of skilled personnel and technical infrastructure.
  • Cultural Fit: Alignment with national values, languages, and societal norms.
  • Sustainability: Long-term viability and adaptability of the chosen approach.
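To make the trade-off concrete, a multi-dimension decision like this is often formalized as a weighted score: each option (buy, build, hybrid) is rated on each dimension, and national priorities set the weights. The sketch below is purely illustrative; the paper does not prescribe this scoring method, and every weight and score here is an invented example.

```python
# Hypothetical weighted-score comparison of LLM sourcing options across
# the six decision dimensions. All numbers are invented for illustration,
# not taken from the paper.

DIMENSIONS = ["sovereignty", "safety", "cost", "resource_capability",
              "cultural_fit", "sustainability"]

# Example national priorities (weights sum to 1.0).
weights = {"sovereignty": 0.25, "safety": 0.25, "cost": 0.15,
           "resource_capability": 0.15, "cultural_fit": 0.10,
           "sustainability": 0.10}

# Illustrative 0-10 ratings for each option on each dimension.
options = {
    "buy":    {"sovereignty": 3, "safety": 6, "cost": 8,
               "resource_capability": 9, "cultural_fit": 5,
               "sustainability": 6},
    "build":  {"sovereignty": 9, "safety": 7, "cost": 3,
               "resource_capability": 4, "cultural_fit": 9,
               "sustainability": 5},
    "hybrid": {"sovereignty": 7, "safety": 7, "cost": 6,
               "resource_capability": 6, "cultural_fit": 7,
               "sustainability": 7},
}

def weighted_score(scores: dict, priorities: dict) -> float:
    """Sum each dimension's rating weighted by its national priority."""
    return sum(priorities[d] * scores[d] for d in DIMENSIONS)

# Rank options from highest to lowest weighted score.
ranking = sorted(options, reverse=True,
                 key=lambda o: weighted_score(options[o], weights))
print(ranking[0])  # prints "hybrid" under these example weights
```

A government prioritizing sovereignty more heavily would raise that weight and could see "build" overtake "hybrid", which is the point of the exercise: the framework makes the priorities explicit rather than dictating a single answer.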

The Surprising Finding

Interestingly, the paper clarifies what ‘building’ truly means: it does not imply governments must act alone. This challenges a common assumption that ‘building’ requires solely government-led initiatives. Instead, the team shows that domestic capabilities can develop through various collaborations, including public research institutions, universities, and state-owned enterprises. Joint ventures and broader national ecosystems also contribute, the research shows. This offers a more flexible and collaborative path to AI independence: even without massive internal development, a nation can foster its own LLM capabilities. It significantly expands the definition of ‘sovereign AI’.

What Happens Next

Policymakers can use this framework immediately, applying it to their national AI strategies over the next 12-24 months. For example, a country might buy commercial LLMs for public information websites while building a specialized LLM for national defense applications, ensuring sensitive data remains secure. The paper aims to serve as a reference for policymakers, helping them determine the best approach for their specific national needs. The framework will guide significant investment and policy decisions, shape how governments interact with AI providers, and influence the development of national AI ecosystems. For readers, the actionable takeaway is to follow these policy shifts, which will directly impact future digital public services. This framework could lead to more tailored and secure AI solutions globally.
