Tiny AI Models from Multiverse Promise Big Impact for On-Device Processing

New 'chicken brain' and 'fly brain' models could revolutionize AI on smartphones and IoT devices.

Multiverse Computing has unveiled two remarkably small yet high-performing AI models, playfully nicknamed after a chicken's brain and a fly's brain for their minimal size. These models are designed to run directly on consumer devices like smartphones and IoT hardware, potentially enabling advanced AI capabilities without cloud reliance.

August 15, 2025

4 min read

Key Facts

  • Multiverse Computing released two tiny, high-performing AI models.
  • Models are named 'chicken brain' and 'fly brain' due to their small size.
  • Designed for local execution on smartphones, tablets, PCs, and IoT devices.
  • Multiverse raised €189 million ($215 million) in June 2025 based on its 'CompactifAI' technology.
  • The technology aims to enable powerful AI without cloud reliance, enhancing privacy and speed.

Why You Care

Imagine running capable AI tools directly on your phone or smart device, without needing an internet connection or sending your data to the cloud. This isn't a distant dream; it's becoming a reality, and it means more privacy, faster processing, and new creative possibilities for you.

What Actually Happened

Multiverse Computing, a prominent European AI startup, recently announced two new AI models so compact they've been playfully named after a chicken's brain and a fly's brain. According to TechCrunch, these models are specifically engineered to be embedded in Internet of Things (IoT) devices and to run locally on smartphones, tablets, and PCs. Román Orús, a co-founder of Multiverse, told TechCrunch: "We can compress the model so much that they can fit on devices. You can run them on premises, directly on your iPhone, or on your Apple Watch." The release follows Multiverse Computing's significant funding round in June 2025, in which it raised €189 million (approximately $215 million), largely on the strength of its model compression technology, "CompactifAI." The company, founded in 2019, has secured around $250 million in total funding, as reported by TechCrunch.
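
To make the "fit on devices" claim concrete, a quick back-of-envelope calculation shows why compression is the gatekeeper for on-device AI. The sketch below uses illustrative parameter counts and precisions; none of these figures come from Multiverse.

```python
# Back-of-envelope memory math behind "fit on devices": a model's raw
# weight footprint is roughly parameter_count * bytes_per_parameter.
# The parameter counts below are illustrative, not Multiverse's figures.

def footprint_gb(params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes."""
    return params * bytes_per_param / 1e9

for params, label in [(7e9, "7B-parameter model"), (1e9, "1B-parameter model")]:
    fp16 = footprint_gb(params, 2.0)   # 16-bit floats
    int4 = footprint_gb(params, 0.5)   # 4-bit quantized weights
    print(f"{label}: {fp16:.1f} GB at fp16 vs {int4:.2f} GB at 4-bit")
```

A 7B-parameter model at 16-bit precision needs roughly 14 GB just for weights, far beyond a phone's budget, while aggressive compression can bring a small model under a gigabyte.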

Why This Matters to You

For content creators, podcasters, and AI enthusiasts, the implications of these tiny, on-device AI models are significant. First, consider the immediate benefit: privacy. If your AI tasks, such as transcribing audio, generating captions, or even basic video editing, can be done locally on your device, your sensitive data never leaves your control. This is a major advantage over cloud-based AI services, where data transmission and storage carry inherent privacy risks.

Second, speed and reliability improve dramatically. Without the latency of sending data to and from a server, AI operations can happen almost instantaneously. That means real-time transcription during a live podcast, quick content suggestions while editing, or on-the-fly audio enhancement without a stable internet connection. Imagine recording a podcast in a remote location and having AI-powered noise reduction or voice enhancement applied in real time, directly on your portable recorder or smartphone. This capability could democratize access to sophisticated AI tools, making them available to a wider range of creators regardless of their internet access or budget for cloud computing.
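
For a feel of what fully local inference looks like in code, here is a minimal sketch using the open-source Hugging Face `transformers` library. The model path is a placeholder for whatever compact checkpoint you have on disk; this is a generic illustration, not Multiverse's tooling.

```python
# A minimal sketch of fully local text generation with Hugging Face
# `transformers`. The model path is a placeholder: point it at whatever
# compact checkpoint you have; nothing here is Multiverse-specific.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="./compact-model",  # hypothetical local checkpoint directory
    device=-1,                # -1 = CPU only; no GPU, no cloud round-trip
)

# The prompt and the output never leave the machine.
result = generator("Draft show notes for today's episode:", max_new_tokens=64)
print(result[0]["generated_text"])
```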

The Surprising Finding

The truly surprising aspect of Multiverse's announcement isn't just that they've made AI models small, but that these models are also described as "high-performing." Historically, shrinking AI models often meant sacrificing performance, leading to less accurate results or limited capabilities. The promise here is that these "chicken brain" and "fly brain" models can deliver useful, practical AI functions despite their minimal footprint. This challenges the conventional wisdom that larger models inherently equate to superior performance, and it suggests that Multiverse's "CompactifAI" technology has found a way to retain essential functionality and accuracy even after aggressive compression. That could open doors for sophisticated AI applications in resource-constrained environments, like smart home devices, wearable tech, or even embedded systems in cameras and microphones, where computational power and memory are severely limited.
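
Multiverse hasn't disclosed how CompactifAI works internally, but even standard techniques hint at why size and quality aren't strictly opposed. The toy example below uses plain 8-bit quantization, not CompactifAI, to show a 4x size reduction with small numerical error; real compression pipelines are considerably more sophisticated.

```python
# Toy illustration (standard 8-bit quantization, *not* CompactifAI, whose
# internals are proprietary): a weight matrix shrinks 4x versus float32
# while its reconstruction error stays small.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(1024, 1024)).astype(np.float32)

# Symmetric quantization: map each float onto one of 255 integer levels.
scale = np.abs(weights).max() / 127.0
quantized = np.round(weights / scale).astype(np.int8)
restored = quantized.astype(np.float32) * scale

print(f"compression: {weights.nbytes / quantized.nbytes:.0f}x")
print(f"mean absolute error: {np.abs(weights - restored).mean():.5f}")
```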

What Happens Next

The immediate future will likely see these tiny models integrated into a broader range of consumer electronics. Expect smartphones, tablets, and potentially even smartwatches advertising enhanced on-device AI capabilities, from improved voice assistants to more intelligent photo and video processing that doesn't rely on constant cloud connectivity. For content creators, this could translate into new generations of portable recording equipment with built-in AI for live transcription, intelligent sound mixing, or even basic content generation at the source. The success of Multiverse's approach could also spur other AI companies to invest more heavily in model compression research, potentially making capable, private, and efficient AI the norm on personal devices. However, the exact performance benchmarks and real-world applicability of these models will need to be rigorously validated by the broader developer community before their impact and limitations across diverse use cases are fully understood.