# Multimodal Humor Dataset: Predicting Laughter tracks for Sitcoms

Published in The IEEE Winter Conference on Applications of Computer Vision, 2021

Recommended citation: Badri N. Patro, Mayank Lunayach, Deepankar Srivastava, Sarvesh, Hunar Singh, Vinay P. Namboodiri (2021). "Multimodal Humor Dataset: Predicting Laughter tracks for Sitcoms", The IEEE Winter Conference on Applications of Computer Vision (WACV), USA, 2021. https://delta-lab-iitk.github.io/Multimodal-Humor-Dataset/

A great number of situational comedies (sitcoms) are regularly produced, and adding laughter tracks to them is a critical task. The ability to predict whether something will be humorous to an audience is equally crucial. In this project, we aim to automate this task. Towards doing so, we annotate an existing sitcom ('Big Bang Theory') and use the laughter cues present in the show to obtain a manual annotation. We provide a detailed analysis of the dataset design and evaluate various state-of-the-art baselines for this task. We observe that existing LSTM- and BERT-based networks operating on text alone do not perform as well as joint text-and-video or video-only networks. Moreover, it is challenging to ascertain that the words attended to while predicting laughter are indeed humorous. The dataset and analysis provided in this paper are a valuable resource towards solving this interesting semantic and practical task. As an additional contribution, we develop a novel multimodal self-attention based model that outperforms the currently prevalent models for this task. The project page for our paper is https://delta-lab-iitk.github.io/Multimodal-Humor-Dataset/.
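The abstract does not spell out the multimodal self-attention architecture, so as a rough, hypothetical illustration only (not the authors' actual model), the sketch below runs a single self-attention pass over text and video features fused by simple concatenation, then pools the result into a laughter/no-laughter logit. All dimensions, the random stand-in weights, and the concatenation-based fusion are assumptions for demonstration:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over attention scores.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d_k, seed=0):
    # Random projections stand in for learned Q/K/V weight matrices.
    rng = np.random.default_rng(seed)
    d = tokens.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(Q @ K.T / np.sqrt(d_k))  # (n, n) attention weights
    return scores @ V                          # (n, d_k) attended features

# Hypothetical per-scene features: 10 text tokens and 5 video frames,
# each embedded in a shared 64-d space (placeholder zeros here).
text_feats = np.zeros((10, 64))
video_feats = np.zeros((5, 64))

# Fuse modalities by concatenating along the sequence axis, so attention
# can relate text tokens to video frames and vice versa.
fused = np.concatenate([text_feats, video_feats], axis=0)  # (15, 64)
attended = self_attention(fused, d_k=32)                   # (15, 32)

# Mean-pool the attended sequence and apply a (placeholder) linear head
# to produce a single laughter/no-laughter logit.
pooled = attended.mean(axis=0)
logit = float(pooled @ np.zeros(32))
print(attended.shape, logit)
```

In a trained model the projection and classifier weights would be learned end-to-end, and the text and video encoders would be real feature extractors; this sketch only shows how concatenation-based fusion lets one attention layer mix the two modalities.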