Cross-language Speech Dependent Lip-synchronization

Published in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019

Recommended citation: A. Jha, V. Voleti, V. Namboodiri and C. V. Jawahar, "Cross-language Speech Dependent Lip-synchronization," ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom, 2019, pp. 7140-7144.

Download paper: http://vinaypn.github.io/files/icassp2019.pdf

Videos of people speaking are hard to follow across international borders, as audiences in different regions do not understand the language. Such videos are often supplemented with subtitles, but these hamper the viewing experience because the viewer's attention is divided between reading and watching. Simply dubbing the audio into another language makes the video appear unnatural due to unsynchronized lip motion. In this paper, we propose a system for automated cross-language lip synchronization of re-dubbed videos. Our model synthesizes photorealistic, lip-synchronized video from the original footage, improving on current re-dubbing practice, which leaves the original lip motion unchanged. Through a user study, we verify that viewers prefer our synchronized output over unsynchronized dubbed videos.
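For intuition, here is a minimal sketch of the kind of audio-conditioned re-dubbing pipeline the abstract describes: chunk the dubbed audio into one window per video frame, then regenerate the mouth region of each frame conditioned on that window. Every name in it (extract_audio_features, generate_mouth_region, redub) is hypothetical, and the generator is an identity stub standing in for a trained network; this is not the authors' actual implementation.

```python
# Illustrative sketch only: all function names are hypothetical,
# and the generator below is an identity stub, not the paper's model.
import numpy as np

def extract_audio_features(audio: np.ndarray, sr: int, fps: int) -> np.ndarray:
    """Chunk the dubbed audio into one window per video frame.

    A real system would compute MFCCs or a learned embedding per window;
    here we simply reshape raw samples to keep the sketch self-contained.
    """
    samples_per_frame = sr // fps
    n_frames = len(audio) // samples_per_frame
    return audio[: n_frames * samples_per_frame].reshape(n_frames, samples_per_frame)

def generate_mouth_region(face_frame: np.ndarray, audio_window: np.ndarray) -> np.ndarray:
    """Stand-in for the audio-conditioned generator that would synthesize
    a lip region matching the dubbed phonemes."""
    return face_frame  # identity stub: a trained network would go here

def redub(video_frames, dubbed_audio, sr=16000, fps=25):
    """Replace the mouth region of each frame so lips match the new audio."""
    windows = extract_audio_features(dubbed_audio, sr, fps)
    return [generate_mouth_region(frame, window)
            for frame, window in zip(video_frames, windows)]

# Toy usage: 25 blank frames and one second of silent "dubbed" audio.
frames = [np.zeros((128, 128, 3), dtype=np.uint8) for _ in range(25)]
audio = np.zeros(16000, dtype=np.float32)
synced = redub(frames, audio)
print(len(synced), "lip-synced frames")
```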
