CoPlay Solves Acoustic Sensing's Music Interference Problem

New deep learning algorithm allows smart devices to perform acoustic sensing without degrading music playback quality.

Researchers have developed CoPlay, a deep learning algorithm that enables acoustic sensing on smart devices even while they play music. By preventing the two audio streams from interfering, it maintains both sensing accuracy and playback quality, removing a major real-world limitation.

By Sarah Kline

September 18, 2025

4 min read

Key Facts

  • CoPlay is a deep learning-based optimization algorithm for acoustic sensing.
  • It solves the problem of interference when speakers are used for both sensing and playing music.
  • CoPlay maximizes sensing signal magnitude and minimizes music playback distortion.
  • A study with 12 users showed CoPlay maintained sensing accuracy comparable to no-music scenarios.
  • CoPlay was presented at ICCCN'25, indicating future industry relevance.

Why You Care

Ever tried using your smart device for health monitoring while enjoying your favorite tunes? Did you notice the sound quality drop, or the monitoring become unreliable? This common issue has plagued acoustic sensing, but a new algorithm might change that. What if your phone could accurately track your breathing and play crystal-clear music simultaneously? That is precisely the problem CoPlay aims to solve, making your smart devices smarter and more versatile.

What Actually Happened

Researchers Yin Li, Bo Liu, and Rajalakshmi Nandakumar have introduced CoPlay, a deep learning-based optimization algorithm, detailed in the paper “CoPlay: Audio-agnostic Cognitive Scaling for Acoustic Sensing.” The work tackles a long-standing challenge in acoustic sensing: using a device’s speaker for both sensing and playing music causes interference, according to the announcement, forcing a choice between degraded music quality and compromised sensing accuracy. Traditional remedies like clipping or down-scaling the signals failed to adequately address the problem. CoPlay instead intelligently adapts the sensing signal, preserving sensing range and accuracy while minimizing any frequency distortion that would affect music playback, the team revealed.
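To make the baseline failure concrete, here is a minimal sketch, assuming 16 kHz mono audio normalized to [-1, 1]; the chirp parameters and function names are illustrative choices, not taken from the paper:

```python
import numpy as np

FS = 16_000  # assumed sample rate (Hz); the paper's exact setup may differ

def sensing_chirp(duration_s: float, f0: float = 6_000.0, f1: float = 8_000.0) -> np.ndarray:
    """Illustrative near-inaudible linear chirp used as a sensing probe."""
    t = np.linspace(0.0, duration_s, int(FS * duration_s), endpoint=False)
    phase = 2.0 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2.0 * duration_s))
    return 0.5 * np.sin(phase)

def mix_clip(music: np.ndarray, chirp: np.ndarray) -> np.ndarray:
    """Naive baseline 1: clip the sum to full scale. Wherever music + chirp
    exceeds [-1, 1], the waveform is flattened and the music audibly distorts."""
    return np.clip(music + chirp, -1.0, 1.0)

def mix_downscale(music: np.ndarray, chirp: np.ndarray) -> np.ndarray:
    """Naive baseline 2: globally down-scale to avoid clipping. The chirp
    shrinks along with the music, cutting sensing range and accuracy."""
    mixed = music + chirp
    peak = np.max(np.abs(mixed))
    return mixed / peak if peak > 1.0 else mixed
```

With `mix_clip`, any sample where music and chirp sum past full scale is flattened, which is the audible distortion the paper describes; with `mix_downscale`, the chirp’s amplitude drops along with the music’s, which is the loss of sensing range. CoPlay’s learned adaptation is designed to avoid both failure modes.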

Why This Matters to You

Imagine your smartwatch precisely monitoring your respiration during a workout, all while your motivational playlist streams flawlessly. CoPlay makes this possible by allowing acoustic sensing to coexist with other audio applications, meaning your devices can perform more functions without sacrificing your user experience. The algorithm works by maximizing the sensing signal’s magnitude within the available bandwidth while minimizing the resulting frequency distortion, as explained in the paper. This dual optimization is key to its effectiveness.
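As a rough illustration of that dual objective, the sketch below scores a candidate per-bin gain for the sensing signal against the amplitude headroom the current music frame leaves free. The variable names and the exact penalty form are assumptions for illustration, not the paper’s actual loss:

```python
import numpy as np

def coplay_style_loss(gain: np.ndarray,
                      chirp_mag: np.ndarray,
                      headroom: np.ndarray,
                      lam: float = 1.0) -> float:
    """Conceptual stand-in for CoPlay's dual objective (not the paper's loss).
    `gain` is a per-frequency-bin scaling a network would learn to predict,
    `chirp_mag` the sensing signal's magnitude spectrum, and `headroom` the
    amplitude budget the current music frame leaves unused in each bin."""
    scaled = gain * chirp_mag
    # Reward: a louder sensing signal means longer range and better accuracy.
    sensing_strength = np.sum(np.minimum(scaled, headroom))
    # Penalty: energy beyond the headroom would clip and distort the music's
    # frequency content, which is exactly what CoPlay is meant to avoid.
    distortion = np.sum(np.maximum(scaled - headroom, 0.0))
    return -sensing_strength + lam * distortion
```

In a deep learning setup, a network could predict the gain from the upcoming audio frame and be trained to minimize a loss of this general shape over many frames.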

For example, consider an elderly relative using a smart speaker for fall detection. With CoPlay, that same speaker can play their favorite podcasts without affecting the essential sensing capabilities, improving both safety and enjoyment. A study with 12 users demonstrated that respiration monitoring and gesture recognition using CoPlay’s adapted signal achieved accuracy similar to scenarios without concurrent music. Traditional clipping and down-scaling methods, by contrast, delivered worse accuracy, the research shows. A qualitative study also found that music playback quality remained undegraded, unlike with older techniques. How will this impact the next generation of health and interaction technologies you use daily?

CoPlay’s key benefits:
  • Maximizes sensing signal magnitude within available bandwidth.
  • Minimizes frequency distortion for music playback.
  • Maintains high accuracy for acoustic sensing tasks.
  • Preserves music playback quality.

The Surprising Finding

Here’s the twist: acoustic sensing, despite its potential, has long overlooked an essential real-world problem. The same speaker used for sensing, when also playing music, causes significant interference, making concurrent use impractical, as stated in the abstract. Most surprising is that previous solutions, like clipping or down-scaling, hurt both music quality and sensing performance: they either overloaded the speaker’s mixer or reduced sensing range and accuracy. CoPlay, however, achieves both optimal sensing and pristine audio quality. This challenges the long-held assumption that one must be sacrificed for the other, and it demonstrates that intelligent signal processing can overcome these seemingly inherent limitations.

What Happens Next

This system, presented at ICCCN'25, suggests a future where acoustic sensing becomes far more integrated into our daily lives. We could see CoPlay or similar deep learning solutions in smart devices within the next 12-24 months. Future smartphones might use acoustic sensing for touchless gestures or enhanced security features, all while you’re on a video call, unlocking new possibilities for human-computer interaction. Device manufacturers should consider incorporating cognitive scaling algorithms into their products to enable more capable and user-friendly acoustic sensing applications, the team revealed. The industry implications are vast, paving the way for smarter, more versatile device experiences.
