DolphinGemma: Google AI Unlocks Dolphin Communication Secrets

A new AI model from Google is helping scientists decode the complex language of dolphins.

Google has developed DolphinGemma, a large language model designed to analyze dolphin communication. This AI tool, used with data from the Wild Dolphin Project, aims to understand the patterns in dolphin clicks and whistles. It could eventually help us generate responses to their vocalizations.

By Sarah Kline

December 3, 2025

4 min read

Key Facts

  • DolphinGemma is a large language model developed by Google.
  • It helps scientists analyze dolphin communication, including clicks, whistles, and burst pulses.
  • The project collaborates with Georgia Tech and the Wild Dolphin Project (WDP).
  • The WDP has conducted the world's longest-running underwater dolphin research project since 1985.
  • DolphinGemma is a ~400M parameter model designed to run on Pixel phones in the field.

Why You Care

What if we could truly talk to animals? Imagine understanding the intricate conversations happening beneath the waves. Google’s new AI, DolphinGemma, is making strides toward this goal. It is helping scientists decipher dolphin communication, which could change our understanding of marine life. This work matters because it opens a window into another species’ world, offering insights into animal intelligence and potentially supporting new conservation efforts. Your curiosity about the natural world is about to get a fascinating new outlet.

What Actually Happened

Google has introduced DolphinGemma, a large language model (LLM) designed specifically to analyze dolphin vocalizations. According to the announcement, the model assists scientists in studying how dolphins communicate, with the goal of understanding the patterns within their complex clicks, whistles, and burst pulses. Ultimately, researchers hope to generate realistic responses to these sounds.

The initiative is a collaboration between Google, researchers at Georgia Tech, and the Wild Dolphin Project (WDP). The WDP has conducted the world’s longest-running underwater dolphin research project since 1985, as mentioned in the release. Its non-invasive approach has produced a rich dataset of audio and video, meticulously paired with individual dolphin identities and observed behaviors. DolphinGemma builds on Google audio technologies, including the SoundStream tokenizer, which efficiently represents dolphin sounds as discrete tokens; these are then processed by a Gemma-based model architecture suited to complex sound sequences, the company reports.
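The tokenize-then-model pipeline described above can be sketched in miniature. This is purely illustrative: the nearest-codebook lookup stands in for SoundStream, and a bigram counter stands in for the Gemma-style autoregressive model; all names, sizes, and data are invented, not details Google has published.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) "Tokenize" audio: map each spectral frame to its nearest codebook
#    entry, yielding a discrete token sequence (what SoundStream does,
#    far more cleverly, with a learned neural codec).
codebook = rng.normal(size=(16, 8))          # 16 tokens, 8-dim frames

def tokenize(frames: np.ndarray) -> np.ndarray:
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)              # one token id per frame

# 2) Model the token stream: count bigrams and predict the most likely
#    next token, a toy stand-in for an autoregressive LLM over sound tokens.
def fit_bigrams(tokens: np.ndarray, vocab: int = 16) -> np.ndarray:
    counts = np.ones((vocab, vocab))         # add-one smoothing
    for a, b in zip(tokens[:-1], tokens[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

frames = rng.normal(size=(500, 8))           # fake "recording"
tokens = tokenize(frames)
probs = fit_bigrams(tokens)
next_token = int(probs[tokens[-1]].argmax()) # predicted continuation
```

The key idea this toy preserves is that once sound becomes a discrete token sequence, predicting "what comes next" is the same problem language models already solve for text.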

Why This Matters to You

Understanding dolphin communication has significant implications. It could offer insights into animal cognition and social structures. For example, imagine you are a marine biologist. This system could allow you to understand specific warnings dolphins issue about predators. Or you might learn about their methods for locating food. The ability to link sounds to specific behaviors is crucial. The WDP has correlated various sound types with behavioral contexts for decades, the research shows. This provides a solid foundation for the AI’s analysis. The ultimate goal is to find patterns and rules that indicate language within these natural sound sequences. How might deciphering dolphin language change your perspective on humanity’s place in the animal kingdom?

Here are some examples of correlated dolphin sounds:

  1. Signature whistles: These are unique ‘names’ used by mothers and calves to reunite.
  2. Burst-pulse “squawks”: Often observed during aggressive encounters or fights.
  3. Click “buzzes”: Frequently used during courtship rituals or when chasing sharks.

Knowing the individual dolphins involved is essential for accurate interpretation, the team revealed. This long-term analysis of natural communication forms the bedrock of WDP’s research. It provides essential context for any AI analysis, the paper states. As Dr. Denise Herzing, Research Director/Founder of the Wild Dolphin Project, states, “For decades, understanding the clicks, whistles and burst pulses of dolphins has been a scientific frontier. What if we could not only listen to dolphins, but also understand the patterns of their complex communication well enough to generate realistic responses?”
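The sound-to-behavior correlations above amount to tallying which contexts co-occur with which sound types. A minimal sketch, using invented observation data that mirrors the article's examples (this is an illustration, not WDP's actual methodology):

```python
from collections import Counter

# Hypothetical annotated observations: (sound_type, behavioral_context)
# pairs. The labels follow the article's examples; the data is invented.
observations = [
    ("signature_whistle", "mother_calf_reunion"),
    ("signature_whistle", "mother_calf_reunion"),
    ("burst_pulse_squawk", "aggressive_encounter"),
    ("burst_pulse_squawk", "aggressive_encounter"),
    ("click_buzz", "courtship"),
    ("click_buzz", "shark_chase"),
    ("click_buzz", "courtship"),
]

pair_counts = Counter(observations)

def most_common_context(sound: str) -> str:
    """Return the behavioral context most often seen with a sound type."""
    contexts = {ctx: n for (s, ctx), n in pair_counts.items() if s == sound}
    return max(contexts, key=contexts.get)
```

Decades of such annotated pairings are what give an AI model labeled ground truth to learn from, rather than unlabeled audio alone.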

The Surprising Finding

Here’s an interesting twist: despite its capabilities, DolphinGemma is surprisingly compact. This ~400M parameter model is sized to run directly on the Pixel phones WDP uses in the field, according to the announcement. That is unexpected for a language model tackling such complex biological data; LLMs typically require substantial computational resources. The fact that Google has built an acoustic-analysis model that runs on a consumer-grade smartphone challenges the common assumption that AI research demands data-center hardware. This portability makes real-time analysis in remote locations far more feasible and could significantly accelerate data collection and interpretation.
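As a rough sanity check on that portability claim: weight memory scales linearly with parameter count and numeric precision. The precisions below are common on-device deployment choices, not figures Google has published for DolphinGemma.

```python
# Back-of-envelope memory estimate for a ~400M-parameter model.
params = 400_000_000

def weight_memory_mb(params: int, bits_per_param: int) -> float:
    """Approximate weight storage in megabytes at a given precision."""
    return params * bits_per_param / 8 / 1e6

fp16_mb = weight_memory_mb(params, 16)   # 16-bit floats: ~800 MB
int8_mb = weight_memory_mb(params, 8)    # 8-bit quantized: ~400 MB
int4_mb = weight_memory_mb(params, 4)    # 4-bit quantized: ~200 MB
```

Even at full 16-bit precision the weights fit comfortably in a modern phone's RAM, which is why a ~400M model is a plausible on-device size where multi-billion-parameter models are not.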

What Happens Next

This project is still in its early stages, but the implications are vast. Over the next 12-24 months, we might see initial findings published from DolphinGemma’s analysis. For instance, researchers could identify previously unknown patterns in dolphin vocalizations. This could lead to new theories about their social structures or environmental awareness. The industry could see a push for more portable, specialized AI models for field research. This would extend beyond marine biology to areas like ornithology or entomology. If you’re a budding scientist, consider how such accessible AI tools could empower your own research. The ongoing collaboration between Google and the Wild Dolphin Project will undoubtedly yield more insights. This will deepen our understanding of these intelligent marine mammals.
