Multiverse Computing Unveils Tiny AI Models for On-Device Performance

New 'brain-sized' AI models promise powerful local processing for everyday devices, from smartphones to IoT.

Multiverse Computing, a European AI startup, has introduced two remarkably small, high-performing AI models. These models, named after a chicken's and a fly's brain due to their compact size, are designed to run directly on devices like smartphones and IoT hardware, potentially transforming how AI is integrated into daily tech.

By Mark Ellison

August 15, 2025

4 min read

Key Facts

  • Multiverse Computing released two new, extremely small, high-performing AI models.
  • The models are named after a chicken's and a fly's brain due to their size.
  • They are intended for local execution on IoT devices, smartphones, tablets, and PCs.
  • The core technology enabling this is 'CompactifAI' model compression.
  • Multiverse Computing recently raised €189 million ($215 million) in funding.

Why You Care

Imagine your phone or smart device handling complex AI tasks without needing an internet connection. Multiverse Computing's new, incredibly compact AI models are making that a reality, promising faster, more private, and more reliable AI experiences right in your pocket or smart home.

What Actually Happened

Multiverse Computing, a prominent European AI startup, recently announced the release of two new AI models that are remarkably small yet high-performing. These models are so compact that, according to TechCrunch, they have been named after a chicken’s brain and a fly’s brain. The core technology enabling this miniaturization is a proprietary model-compression system the company calls “CompactifAI.”

According to founder Román Orús, as quoted by TechCrunch, the primary goal for these tiny models is integration into Internet of Things (IoT) devices, as well as running locally on consumer electronics such as smartphones, tablets, and personal computers. Orús stated: "We can compress the model so much that they can fit on devices. You can run them on premises, directly on your iPhone, or on your Apple Watch." The announcement follows Multiverse Computing's recent success in securing €189 million (approximately $215 million) in June 2025, bringing its total funding since its 2019 founding to around $250 million, as reported by TechCrunch.
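
Multiverse has not published CompactifAI's internals (it is reportedly a quantum-inspired, tensor-network-based approach), so the following is only a minimal sketch of the general idea using PyTorch's off-the-shelf dynamic quantization; the toy network, layer sizes, and size-measurement helper are illustrative assumptions, not the company's method.

```python
# Illustrative only: CompactifAI is proprietary, so this sketch uses
# PyTorch's standard dynamic quantization to show the general idea of
# shrinking a model's weights for on-device use.
import io

import torch
import torch.nn as nn

# A small stand-in network; in practice the targets are LLM weight matrices.
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 256),
)

# Convert Linear weights from 32-bit floats to 8-bit integers, cutting
# weight storage roughly 4x with no retraining.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialize the model's weights in memory and report megabytes."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"fp32: {size_mb(model):.2f} MB -> int8: {size_mb(quantized):.2f} MB")
```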

Why This Matters to You

For content creators, podcasters, and AI enthusiasts, this development carries significant practical implications. Currently, many advanced AI tools rely on cloud-based processing, which means your data travels to remote servers and back. This can introduce latency, privacy concerns, and a dependency on stable internet connectivity. With Multiverse's compact models, a new era of on-device AI becomes possible.

Consider a podcaster using AI for real-time transcription or noise reduction. If these AI models can run directly on a laptop or even a dedicated audio interface, the processing would be instantaneous, without the lag associated with cloud APIs. For video creators, imagine an editing suite with AI features like intelligent object removal or automatic color correction that processes footage locally, enhancing privacy and significantly speeding up workflows. AI enthusiasts experimenting with custom models could deploy them on low-power edge devices, opening up new possibilities for localized smart applications without constant cloud access or expensive server infrastructure. This shift to on-device processing could also democratize access to advanced AI functionality, making it available to a wider range of users and devices, even in areas with limited internet access.
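
As a concrete illustration of that local-transcription workflow, here is a minimal sketch using the open-source openai-whisper package; the model choice and file path are assumptions for the example, and these are not Multiverse's models.

```python
# A minimal sketch of fully local transcription with the open-source
# openai-whisper package (pip install openai-whisper; requires ffmpeg).
# These are not Multiverse's models; the point is the same on-device pattern.
import whisper

# "tiny" is the smallest checkpoint (~39M parameters); it downloads once,
# then inference runs entirely offline on CPU or GPU.
model = whisper.load_model("tiny")

# "episode.mp3" is a placeholder path to a local recording.
result = model.transcribe("episode.mp3")
print(result["text"])
```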

The Surprising Finding

The most surprising aspect of Multiverse Computing's announcement isn't just the small size of the models, but their claimed high performance despite that diminutive footprint. Smaller AI models typically trade away accuracy or capability; the company, however, emphasizes that these are "high-performing models," suggesting that its CompactifAI system has overcome some of the traditional limitations of model compression. The ability to shrink an advanced AI model to the equivalent of a fly's brain while retaining significant real-world utility challenges the conventional wisdom that bigger models are always better. It implies a significant leap in efficiency: the computational resources required to run these models are drastically reduced, opening doors for AI integration into devices where power consumption and processing capacity are severely constrained, such as wearables or basic IoT sensors.
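
Some back-of-envelope arithmetic shows why footprint matters so much here; the parameter counts below are illustrative assumptions, not figures Multiverse has published.

```python
# Back-of-envelope arithmetic with assumed figures (not Multiverse's specs):
# raw weight storage is parameters x bits-per-weight, which is why both
# model size and numeric precision decide whether a model fits on-device.
def weight_storage_mb(params: float, bits_per_weight: int) -> float:
    """Megabytes needed just to store the weights, ignoring runtime overhead."""
    return params * bits_per_weight / 8 / 1e6

for params, label in [(7e9, "7B-parameter LLM"), (1e8, "100M-parameter model")]:
    for bits in (32, 8, 4):
        print(f"{label:>20} at {bits:>2}-bit: "
              f"{weight_storage_mb(params, bits):>8,.0f} MB")

# A 7B model at 32-bit needs ~28,000 MB of weights; even at 4-bit it is
# ~3,500 MB, far too big for a watch or a sensor. Hence the push toward
# models that are orders of magnitude smaller to begin with.
```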

What Happens Next

The immediate future will likely see Multiverse Computing focusing on integrating these models into commercial products, particularly within the IoT and consumer electronics sectors. We can anticipate partnerships with hardware manufacturers looking to embed more intelligent features directly into their devices. For content creators and AI developers, this means keeping an eye on any software development kits (SDKs) or APIs that might emerge, allowing direct access to these on-device AI capabilities. The true test will be how these models perform in diverse real-world scenarios and whether their 'high-performing' claims hold up under scrutiny. If successful, this trend toward highly efficient, on-device AI could accelerate the development of truly smart environments, where AI processing is ubiquitous, private, and always available regardless of network connectivity. That could mean a proliferation of AI-powered features in unexpected places, from smart home appliances that learn your habits locally to portable recording gear with built-in, real-time AI enhancements, fundamentally changing the landscape of consumer technology within the next few years.
