DeepMind's AI can lip read better than humans
DeepMind's AI outperformed a professional lip reader in tests on BBC TV footage
Google's DeepMind has partnered with University of Oxford researchers to create a new AI that can read lips, called Watch, Listen, Attend and Spell (WLAS).
The researchers released a scientific paper showing that the newly developed AI can correctly interpret more words than a trained professional lip reader.
When tested on the same 200 randomly selected clips, a professional human lip reader deciphered words correctly 12.4% of the time, while WLAS achieved an accuracy of 46.8%.
The paper reads: "The WLAS model trained on the LRS dataset surpasses the performance of all previous work on standard lip reading benchmark datasets, often by a significant margin. This lip reading performance beats a professional lip reader on videos from BBC television, and we also demonstrate that visual information helps to improve speech recognition performance even when the audio is available."
The system was trained on a dataset of 118,000 sentences, spanning a vocabulary of 17,500 words, drawn from 5,000 hours of BBC video footage.
The BBC videos were prepared using machine learning algorithms, and the system also learned to realign the video and audio streams when they were out of sync.
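The paper doesn't spell out exactly how that realignment works, but the basic idea can be sketched in a few lines of Python. The function below (estimate_av_offset is a hypothetical name, not anything from DeepMind's pipeline) slides an audio-energy envelope over a mouth-motion signal and picks the lag with the highest normalised correlation; it is a minimal illustration of the concept, assuming both signals have already been sampled at the same frame rate.

```python
import numpy as np

def estimate_av_offset(audio_energy: np.ndarray,
                       mouth_motion: np.ndarray,
                       max_lag: int = 25) -> int:
    """Estimate the frame offset between an audio-energy envelope and a
    mouth-motion signal: slide one over the other and return the lag
    with the highest normalised correlation. A negative result means
    the video trails the audio."""
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a = audio_energy[lag:]
            v = mouth_motion[:len(mouth_motion) - lag]
        else:
            a = audio_energy[:lag]   # drop the last |lag| audio samples
            v = mouth_motion[-lag:]  # drop the first |lag| video samples
        n = min(len(a), len(v))
        if n < 2:
            continue
        a, v = a[:n], v[:n]
        # Normalised cross-correlation at this lag
        score = np.dot(a - a.mean(), v - v.mean()) / (a.std() * v.std() * n + 1e-8)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Toy check: a mouth-motion signal delayed by 3 frames relative to the audio
rng = np.random.default_rng(0)
audio = rng.random(200)
video = np.roll(audio, 3) + 0.05 * rng.random(200)
print(estimate_av_offset(audio, video))  # prints -3: video trails by 3 frames
```

DeepMind's actual pipeline is far more involved, but the sketch captures the intuition: lip movement and speech energy should peak together once the two streams are properly aligned.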
Earlier this month, the University of Oxford published a similar research paper, testing a lip reading program called LipNet. LipNet achieved 93.4% lip reading accuracy, compared with 52.3% scored by a human expert on the same material.
However, LipNet was tested on videos of volunteers reciting formulaic sentences drawn from a vocabulary of only 51 words, whereas WLAS was tested on a far wider range of data, analysing real conversations from BBC programmes.
There are various possible applications for this lip reading technology. A tool such as WLAS could help improve the quality of live subtitles and better support people with hearing impairments.
It could also be integrated into virtual assistants such as Siri, which could use a phone's camera to lip read, improving their understanding of users' speech in crowded or noisy environments.
Such a tool could also be used for surveillance, although reading lips from grainy CCTV footage could prove far more challenging.