Stability Based Filter Pruning for Accelerating Deep CNNs

Published in IEEE Winter Conference on Applications of Computer Vision (WACV), 2019

Recommended citation: P. Singh, Manikandan V.S.R. Kadi, N. Verma and V. P. Namboodiri, "Stability Based Filter Pruning for Accelerating Deep CNNs," IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, Hawaii, USA. https://arxiv.org/abs/1811.08321

Download paper here

Convolutional neural networks (CNNs) have achieved impressive performance on a wide variety of tasks (classification, detection, etc.) across multiple domains, at the cost of high computational and memory requirements. Leveraging CNNs for real-time applications therefore necessitates model compression approaches that reduce not only the total number of parameters but also the overall computation. In this work, we present a stability-based approach for filter-level pruning of CNNs. We evaluate the proposed approach on different architectures (LeNet, VGG-16, ResNet, and Faster R-CNN) and datasets, and demonstrate its generalizability through extensive experiments. Moreover, our compressed models can be used at run time without requiring any special libraries or hardware. Our model compression method reduces the number of FLOPs by an impressive factor of 6.03X and the GPU memory footprint by more than 17X, significantly outperforming other state-of-the-art filter pruning methods.
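To make the idea of filter-level pruning concrete, below is a minimal PyTorch sketch that removes whole convolutional filters from a layer and rebuilds a smaller layer from the survivors. The L1-norm importance score used here is only a stand-in assumption for illustration; the paper ranks filters by a stability criterion instead, and `prune_conv_filters` and `keep_ratio` are hypothetical names, not part of the paper's code.

```python
import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Return a new Conv2d that keeps only the highest-scoring filters.

    NOTE: the L1-norm score below is a placeholder; the paper's method
    ranks filters with a stability-based criterion instead.
    """
    w = conv.weight.data                      # shape: (out_ch, in_ch, kH, kW)
    scores = w.abs().sum(dim=(1, 2, 3))       # one importance score per filter
    n_keep = max(1, int(keep_ratio * conv.out_channels))
    keep_idx = torch.topk(scores, n_keep).indices.sort().values

    pruned = nn.Conv2d(conv.in_channels, n_keep,
                       kernel_size=conv.kernel_size,
                       stride=conv.stride,
                       padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = w[keep_idx].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep_idx].clone()
    return pruned

# Example: keep half of the 64 filters in a 3x3 conv layer
layer = nn.Conv2d(3, 64, kernel_size=3, padding=1)
smaller = prune_conv_filters(layer, keep_ratio=0.5)
print(smaller.weight.shape)  # torch.Size([32, 3, 3, 3])
```

Because entire filters (output channels) are removed, the next layer's input channels must be reduced accordingly; this is what lets filter pruning cut FLOPs and memory at run time without any special sparse-inference libraries or hardware.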
