A new AI translation system for headphones clones multiple voices simultaneously

Spatial Speech Translation consists of two AI models, the first of which divides the space surrounding the person wearing the headphones into small regions and uses a neural network to search for potential speakers and pinpoint their direction. 
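In rough terms, that first stage amounts to sweeping a listening beam around the wearer and asking a speech detector whether anyone is talking in each direction. The sketch below is a minimal illustration of that idea, not the team’s actual code: it assumes a simple delay-and-sum beamformer, a fixed grid of 36 angular regions, and a caller-supplied `speech_scorer` callable standing in for the neural network.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 16_000    # Hz

def delay_and_sum(mic_frames, mic_positions, angle_deg):
    """Steer a delay-and-sum beam toward angle_deg.

    mic_frames: (n_mics, n_samples) array from the headphone's mics.
    mic_positions: (n_mics, 2) microphone coordinates in metres.
    """
    direction = np.array([np.cos(np.radians(angle_deg)),
                          np.sin(np.radians(angle_deg))])
    # Plane-wave arrival delay at each mic, converted to samples.
    delays = (mic_positions @ direction) / SPEED_OF_SOUND * SAMPLE_RATE
    delays -= delays.min()
    aligned = np.zeros(mic_frames.shape[1])
    for signal, delay in zip(mic_frames, delays):
        aligned += np.roll(signal, -int(round(delay)))
    return aligned / len(mic_frames)

def localize_speakers(mic_frames, mic_positions, speech_scorer,
                      n_regions=36, threshold=0.5):
    """Divide the space around the wearer into angular regions and
    return the directions where the speech detector fires."""
    angles = np.linspace(0.0, 360.0, n_regions, endpoint=False)
    return [angle for angle in angles
            if speech_scorer(delay_and_sum(mic_frames, mic_positions,
                                           angle)) > threshold]
```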

The second model then translates the speakers’ words from French, German, or Spanish into English text, using models trained on publicly available data sets. The same model extracts the unique characteristics and emotional tone of each speaker’s voice, such as its pitch and amplitude, and applies those properties to the translated text, essentially creating a “cloned” voice. The result is that when the translated version of a speaker’s words is relayed to the headphone wearer a few seconds later, it sounds as if it’s coming from the speaker’s direction, in a voice much like the speaker’s own rather than a robotic-sounding computer.
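Put together, that second stage behaves something like the pipeline below. This is a hypothetical sketch of the data flow just described, not the published system: `asr`, `translate`, `tts`, and `spatialize` are caller-supplied stand-ins for the actual models, and `VoiceProfile` is an invented container for the per-speaker properties the article mentions.

```python
from dataclasses import dataclass

@dataclass
class VoiceProfile:
    pitch_hz: float       # speaker's characteristic pitch
    amplitude: float      # speaker's loudness
    direction_deg: float  # direction found by the first model

def translate_with_cloned_voice(speech, profile,
                                asr, translate, tts, spatialize):
    """One pass for one speaker: transcribe, translate into English,
    re-synthesize in a voice matching the profile, and render the
    audio so it appears to come from the speaker's direction."""
    source_text = asr(speech)              # e.g. a French transcript
    english_text = translate(source_text)  # French -> English
    cloned_audio = tts(english_text,
                       pitch_hz=profile.pitch_hz,
                       amplitude=profile.amplitude)
    return spatialize(cloned_audio, profile.direction_deg)
```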

Given that separating out human voices is hard enough for AI systems, being able to incorporate that ability into a real-time translation system, map the distance between the wearer and the speaker, and achieve decent latency on a real device is impressive, says Samuele Cornell, a postdoctoral researcher at Carnegie Mellon University’s Language Technologies Institute, who did not work on the project.

“Real-time speech-to-speech translation is incredibly hard,” he says. “Their results are very good in the limited testing settings. But for a real product, one would need much more training data—possibly with noise and real-world recordings from the headset, rather than purely relying on synthetic data.”

Gollakota’s team is now focusing on reducing the amount of time it takes for the AI translation to kick in after a speaker says something, which will accommodate more natural-sounding conversations between people speaking different languages. “We want to really get down that latency significantly to less than a second, so that you can still have the conversational vibe,” Gollakota says.

This remains a major challenge, because the speed at which an AI system can translate one language into another depends on the languages’ structure. Of the three languages Spatial Speech Translation was trained on, the system was quickest to translate French into English, followed by Spanish and then German, reflecting the fact that German, unlike the other two, places a sentence’s verbs and much of its meaning at the end rather than the beginning, says Claudio Fantinuoli, a researcher at the Johannes Gutenberg University of Mainz in Germany, who did not work on the project.

Reducing the latency could make the translations less accurate, he warns: “The longer you wait [before translating], the more context you have, and the better the translation will be. It’s a balancing act.”
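One common way to make that balancing act concrete is a wait-k policy from the simultaneous-translation literature: hold back until k source words have arrived, then emit translation incrementally as each new word comes in. The toy sketch below only illustrates the trade-off, with `translate_prefix` standing in for an incremental translation model; it is not how Spatial Speech Translation schedules its output.

```python
def wait_k_translate(source_words, translate_prefix, k=3):
    """Toy wait-k schedule: start translating only after k source
    words have arrived, then extend the output word by word.
    A larger k means more context (better translations) but a
    longer wait (higher latency)."""
    target = []
    for seen in range(k, len(source_words) + 1):
        # translate_prefix extends the current translation given the
        # source prefix observed so far (stand-in for a real
        # incremental MT model).
        target = translate_prefix(source_words[:seen], target)
    return target
```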
