Google's new AI is able to pick out voices in a crowd
The tech could be used for hearing aids and video conferencing
Google's research team has developed technology that can pick out individual voices in a crowd, much as a human can.
It's based on the cocktail party effect, the human brain's ability to mute other sounds and voices when holding a conversation with one person in a busy place. The technology works by splitting a single audio track into separate speech streams, one for each speaker.
Google's demonstration uses a video with lots of people talking at once. The user can select a particular face and hear the soundtrack of just that person. Users can also select the context of a conversation, so that only speech relating to that conversation is played, even if multiple people are discussing the same subject.
The company explained its technology could be used to improve how hearing aids work, or to boost video conferencing tools, enabling calls to take place in the middle of an office space, for example, rather than only in a soundproofed meeting room.
Inbar Mosseri and Oran Lang, software engineers working on the project at Google Research, explained that the sound goes hand in hand with visual cues, with the system analysing mouth movements to match the sound with the right person.
"Intuitively, movements of a person's mouth, for example, should correlate with the sounds produced as that person is speaking, which in turn can help identify which parts of the audio correspond to that person," said researchers Inbar Mosseri and Oran Lang, writing in a blog post.
"The visual signal not only improves the speech separation quality significantly in cases of mixed speech (compared to speech separation using audio alone, as we demonstrate in our paper), but, importantly, it also associates the separated, clean speech tracks with the visible speakers in the video."
The data used to develop the technology was collated from 100,000 videos of lectures and training talks on YouTube. Segments with just a single person in view and no background sound were extracted, then mixed together to generate videos of a cocktail party-style environment, with non-speech background noise obtained from AudioSet.
Google's researchers could then use this synthetic content to train a multi-stream convolutional neural network-based model to pull individual speech tracks out of the mixture.
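The mixing step itself is simple to illustrate. The sketch below is a hypothetical helper, not Google's actual pipeline: it sums two clean single-speaker clips and adds non-speech background noise at a chosen signal-to-noise ratio, with the original clean clips serving as the ground-truth targets the network is trained to recover.

```python
import numpy as np

def make_cocktail_party_mixture(speech_a, speech_b, noise, snr_db=0.0):
    """Synthesise one training example from clean ingredients.

    speech_a, speech_b: clean single-speaker waveforms (equal length assumed).
    noise: non-speech background audio, e.g. a clip drawn from AudioSet.
    snr_db: target speech-to-noise ratio in decibels.

    Returns the noisy mixture; speech_a and speech_b remain the clean
    targets the separation model learns to recover.
    """
    mixture = speech_a + speech_b
    # Scale the noise so the mixture sits at the requested SNR.
    speech_power = np.mean(mixture ** 2)
    noise_power = np.mean(noise ** 2) + 1e-10
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return mixture + scale * noise
```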
Google Research's findings are outlined in an in-depth paper called Looking to Listen at the Cocktail Party, and the company said it would be applying the principles of the technology to its products in the future.