Spotting words in silent speech videos: a retrieval-based approach

Published in Machine Vision and Applications (MVA), 2019

Recommended citation: A. Jha, V. P. Namboodiri and C. V. Jawahar, "Spotting words in silent speech videos: a retrieval-based approach", Machine Vision and Applications, March 2019, Volume 30, Issue 2, pp 217–229. https://link.springer.com/article/10.1007/s00138-019-01006-y

Download paper here

Our goal is to spot words in silent speech videos, where the speaker's lip motion is clearly visible and audio is absent, without explicitly recognizing the spoken words. Existing work in this domain has mainly focused on recognizing a fixed set of words in word-segmented lip videos, which limits the applicability of the learned model due to its restricted vocabulary and heavy dependence on the model's recognition performance. Our contribution is twofold: (1) we develop a pipeline for recognition-free retrieval and compare its performance against recognition-based retrieval on a large-scale dataset and on a separate set of out-of-vocabulary words; (2) we introduce a query expansion technique using pseudo-relevance feedback and propose a novel re-ranking method based on maximizing the correlation between the spatiotemporal landmarks of the query and those of the top retrieval candidates. Our word spotting method achieves 35% higher mean average precision than the recognition-based method on the large-scale LRW dataset. We also demonstrate the approach by spotting words in a popular speech video ("The Great Dictator" by Charlie Chaplin), showing that word retrieval can perhaps be used to understand what was spoken in silent movies. Finally, we compare our model against ASR in a noisy environment and analyze how the performance of the underlying lip-reader and the input video quality affect the proposed word spotting pipeline.
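To make the query expansion and re-ranking steps concrete, here is a minimal sketch in NumPy. It is not the paper's exact pipeline: the embedding source, the pseudo-relevant set size `top_m`, the blending weight `alpha`, and the use of Pearson correlation over flattened, length-normalized landmark trajectories are all assumptions made for illustration.

```python
import numpy as np

def cosine_sim(q, X):
    """Cosine similarity between a query vector q and each row of X."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ (q / np.linalg.norm(q))

def expand_query(query_emb, gallery_embs, top_m=5, alpha=0.5):
    """Pseudo-relevance feedback: treat the top-m initial results as
    relevant, blend their mean into the query, and score the gallery
    again under the expanded query."""
    scores = cosine_sim(query_emb, gallery_embs)
    top_m_idx = np.argsort(-scores)[:top_m]          # pseudo-relevant set
    feedback = gallery_embs[top_m_idx].mean(axis=0)
    expanded = alpha * query_emb + (1 - alpha) * feedback
    return cosine_sim(expanded, gallery_embs)

def rerank_by_landmark_correlation(query_lm, gallery_lms, candidate_idx):
    """Re-rank the top candidates by Pearson correlation between the
    query's lip-landmark trajectory and each candidate's trajectory
    (both flattened to 1-D; assumed resampled to the same length)."""
    q = query_lm.ravel()
    corrs = np.array([np.corrcoef(q, gallery_lms[i].ravel())[0, 1]
                      for i in candidate_idx])
    return candidate_idx[np.argsort(-corrs)]
```

A typical use would score the whole gallery with `expand_query`, keep the top candidates, and then pass only those indices through `rerank_by_landmark_correlation`, so the (more expensive) landmark comparison runs on a short list rather than the full collection.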
