A few-shot learning model generally consists of a feature extraction network and a classification module. In this paper, we propose an approach to improve few-shot image classification performance by increasing the representational capacity of the feature extraction network and improving the quality of the features it extracts. The ability of the feature extraction network to extract highly discriminative features from images is essential to few-shot learning. Such features are generally class-agnostic and capture the general content of the image. Our approach improves the training of the feature extraction network to enable it to produce such features. We train the network using filter-grafting along with an auxiliary self-supervision task and a knowledge distillation procedure. In particular, filter-grafting rejuvenates unimportant (invalid) filters in the feature extraction network to make them useful and thereby increases the number of important filters, which can then be further improved using self-supervision and knowledge distillation. This combined approach significantly improves the few-shot learning performance of the model. We perform experiments on several few-shot learning benchmark datasets, namely mini-ImageNet, tiered-ImageNet, CIFAR-FS, and FC100. We also present various ablation studies to validate the proposed approach. We empirically show that our approach performs better than other state-of-the-art few-shot learning methods.
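The core idea of filter-grafting described above is to blend a layer's weights with the corresponding layer of a parallel network, so that uninformative filters are "rejuvenated" with information from elsewhere. The sketch below is a minimal, hypothetical NumPy illustration of that idea, not the paper's implementation: it scores a layer by a histogram entropy of its weights and blends toward the more informative network; the sigmoid weighting and the `base`/`scale` hyperparameters are illustrative assumptions.

```python
import numpy as np

def layer_entropy(weights, bins=10):
    # Histogram-based entropy of a layer's weights, used here as a
    # rough proxy for how much information its filters carry.
    hist, _ = np.histogram(weights, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def graft(weights_a, weights_b, base=0.4, scale=50.0):
    # Blend the corresponding layer of a parallel network B into
    # network A. The coefficient alpha stays in [base, 1 - base] and
    # grows with the entropy gap, so a layer keeps more of its own
    # weights when they are more informative than the donor's.
    # (base and scale are illustrative, not the paper's values.)
    h_a = layer_entropy(weights_a)
    h_b = layer_entropy(weights_b)
    alpha = base + (1.0 - 2.0 * base) / (1.0 + np.exp(-scale * (h_a - h_b)))
    return alpha * weights_a + (1.0 - alpha) * weights_b
```

In a grafting setup, two (or more) networks are trained in parallel and a blend like this is applied layer-wise at the end of each epoch, so a layer whose filters have collapsed (low entropy) absorbs weights from its better-trained counterpart.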
Recommended citation: Pratik Mazumder, Pravendra Singh, and Vinay P. Namboodiri, "GIFSL - grafting based improved few-shot learning", Image and Vision Computing, volume 104, 2020, ISSN 0262-8856, doi: https://doi.org/10.1016/j.imavis.2020.104006