Towards Automatic Face-to-Face Translation

Published in the 27th ACM International Conference on Multimedia (ACM-MM), 2019

Recommended citation: Prajwal Renukanand*, Rudrabha Mukhopadhyay*, Jerin Philip, Abhishek Jha, Vinay Namboodiri and C.V. Jawahar, “Towards Automatic Face-to-Face Translation”, 27th ACM International Conference on Multimedia (ACM-MM), Nice, France, 2019, Pages 1428–1436. https://cvit.iiit.ac.in/research/projects/cvit-projects/facetoface-translation

Download paper here

In light of recent breakthroughs in automatic machine translation systems, we propose a novel approach that we term "Face-to-Face Translation". As today's digital communication becomes increasingly visual, we argue that there is a need for systems that can automatically translate a video of a person speaking in language A into a target language B with realistic lip synchronization. In this work, we create an automatic pipeline for this problem and demonstrate its impact in multiple real-world applications. First, we build a working speech-to-speech translation system by cascading existing automatic speech recognition, machine translation, and speech synthesis modules. We then move towards "Face-to-Face Translation" by incorporating a novel visual module, LipGAN, which generates realistic talking faces from the translated audio. Quantitative evaluation of LipGAN on the standard LRW test set shows that it significantly outperforms existing approaches across all standard metrics. We also subject our Face-to-Face Translation pipeline to multiple human evaluations and show that it can significantly improve the overall user experience for consuming and interacting with multimodal content across languages. Code, models, and a demo video are publicly available.
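
To make the cascaded design concrete, here is a minimal Python sketch of the pipeline's flow. All function names below (demux, asr, nmt, tts, lipgan, mux) are hypothetical placeholders marking where each off-the-shelf module would plug in; they are not the released API.

```python
# Hypothetical sketch of the Face-to-Face Translation cascade.
# Each stage is a placeholder stub standing in for a real module.

def demux(video_path):                 # split input video into audio + frames
    raise NotImplementedError

def asr(audio, lang):                  # 1. speech recognition (source language)
    raise NotImplementedError

def nmt(text, src, tgt):               # 2. text-to-text machine translation
    raise NotImplementedError

def tts(text, lang):                   # 3. speech synthesis (target language)
    raise NotImplementedError

def lipgan(frames, audio):             # 4. LipGAN: sync lip motion to new audio
    raise NotImplementedError

def mux(frames, audio):                # reassemble frames + audio into a video
    raise NotImplementedError

def face_to_face_translate(video_path, src_lang, tgt_lang):
    """Video of a speaker in src_lang -> lip-synced video in tgt_lang."""
    audio, frames = demux(video_path)
    text_src = asr(audio, lang=src_lang)          # transcribe source speech
    text_tgt = nmt(text_src, src_lang, tgt_lang)  # translate the transcript
    audio_tgt = tts(text_tgt, lang=tgt_lang)      # synthesize target speech
    frames_tgt = lipgan(frames, audio_tgt)        # regenerate the talking face
    return mux(frames_tgt, audio_tgt)
```

The sketch only fixes the stage ordering; each stub would be replaced by an existing system (an ASR engine, an NMT model, a TTS model, and the LipGAN generator) in the actual pipeline.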
