U-DADA: Unsupervised Deep Action Domain Adaptation

Published in Asian Conference on Computer Vision (ACCV), 2018

Recommended citation: Jamal A., Namboodiri V.P., Deodhare D., Venkatesh K.S. (2019) U-DADA: Unsupervised Deep Action Domain Adaptation. In: Jawahar C., Li H., Mori G., Schindler K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11363. Springer. https://link.springer.com/chapter/10.1007/978-3-030-20893-6_28

Download paper here

The problem of domain adaptation has been extensively studied for the task of object classification. However, it has not been as well studied for recognizing actions. While object recognition is well understood, the diverse variety of videos in action recognition makes addressing domain shift more challenging. We address this problem by proposing a novel adaptation technique that we term unsupervised deep action domain adaptation (U-DADA). The main concept we propose is that of explicitly modeling density-based adaptation and using it while adapting domains for action recognition. We show that these techniques work well both for domain adaptation through adversarial learning to obtain invariant features and for explicitly reducing the domain shift between distributions. The method is shown to work well on existing benchmark datasets such as UCF50, UCF101, HMDB51, and Olympic Sports. As a pioneering effort in the area of deep action adaptation, we present several benchmark results and techniques that could serve as baselines to guide future research in this area.
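To make the adversarial route mentioned in the abstract concrete, below is a minimal PyTorch sketch of domain-adversarial feature alignment via a gradient reversal layer (in the style of DANN). This is not the paper's U-DADA architecture: the MLP feature extractor, layer sizes, class count, and all names are illustrative assumptions only.

```python
# Minimal sketch of adversarial feature alignment with gradient reversal.
# Assumptions: pre-extracted 2048-d clip features, 51 action classes
# (HMDB51-sized); a real pipeline would use a video backbone instead.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainAdversarialNet(nn.Module):
    def __init__(self, in_dim=2048, feat_dim=512, n_classes=51, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)  # action labels
        self.discriminator = nn.Linear(feat_dim, 2)       # source vs. target

    def forward(self, x):
        f = self.features(x)
        class_logits = self.classifier(f)
        # Reversed gradients push features toward domain invariance.
        domain_logits = self.discriminator(GradReverse.apply(f, self.lambd))
        return class_logits, domain_logits

# Labelled source clips supervise the classifier; both domains feed the
# discriminator with domain labels (0 = source, 1 = target).
model = DomainAdversarialNet()
src, tgt = torch.randn(8, 2048), torch.randn(8, 2048)
src_y = torch.randint(0, 51, (8,))
ce = nn.CrossEntropyLoss()
cls_logits, dom_src = model(src)
_, dom_tgt = model(tgt)
loss = (ce(cls_logits, src_y)
        + ce(dom_src, torch.zeros(8, dtype=torch.long))
        + ce(dom_tgt, torch.ones(8, dtype=torch.long)))
loss.backward()
```

The gradient reversal layer lets a single backward pass train the discriminator to separate domains while simultaneously training the feature extractor to fool it, which is how the invariant features discussed above are obtained.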