On defocus, diffusion and depth estimation

Published in Pattern Recognition Letters, 2007

Recommended citation: V.P. Namboodiri and S. Chaudhuri (2007). “On defocus, diffusion and depth estimation.” Pattern Recognition Letters, Volume 28, Issue 3, 1 February 2007, Pages 311-319. http://vinaypn.github.io/files/prl07.pdf

Download paper here

An intrinsic property of real aperture imaging is that the observations tend to be defocused. Researchers have exploited this artifact in innovative ways for depth estimation, since the amount of defocus varies with depth in the scene. Various methods have been proposed to model the defocus blur. We model the defocus process using the model of diffusion of heat. The diffusion process has traditionally been used in low-level vision problems such as smoothing, segmentation and edge detection. In this paper, a novel application of the diffusion principle is made for generating the defocus space of the scene. The defocus space is the set of all possible observations of a given scene that can be captured using a physical lens system. Using the notion of defocus space, we estimate the depth in the scene and also generate the corresponding fully focused equivalent pin-hole image. The algorithm described here also brings out the equivalence of the two modalities, viz. depth from focus and depth from defocus, for structure recovery.
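The core idea of modeling defocus as heat diffusion can be illustrated with a small sketch. This is not the paper's implementation; it is a minimal NumPy example, assuming an explicit finite-difference scheme for the isotropic heat equation, where evolving a sharp image for a diffusion time t produces a blur equivalent to a Gaussian of variance 2t, so a stack of images at increasing t plays the role of a (simplified, spatially uniform) defocus space:

```python
import numpy as np

def diffuse(img, t, dt=0.1):
    """Evolve the isotropic heat equation du/dt = laplacian(u)
    for total time t using explicit Euler steps (stable for dt <= 0.25).
    Diffusion time relates to Gaussian blur by sigma^2 = 2*t, so larger
    t emulates stronger defocus. This is an illustrative sketch, not the
    paper's depth-estimation algorithm."""
    u = img.astype(float).copy()
    for _ in range(int(round(t / dt))):
        # 5-point Laplacian with replicated (Neumann) borders,
        # which conserves total image intensity
        up    = np.pad(u, ((1, 0), (0, 0)), mode='edge')[:-1, :]
        down  = np.pad(u, ((0, 1), (0, 0)), mode='edge')[1:, :]
        left  = np.pad(u, ((0, 0), (1, 0)), mode='edge')[:, :-1]
        right = np.pad(u, ((0, 0), (0, 1)), mode='edge')[:, 1:]
        u += dt * (up + down + left + right - 4.0 * u)
    return u

# A toy "defocus space": the same step-edge scene observed at
# increasing diffusion times, i.e. increasing amounts of defocus.
scene = np.zeros((32, 32))
scene[:, 16:] = 1.0
defocus_space = [diffuse(scene, t) for t in (0.5, 1.0, 2.0, 4.0)]
```

In the paper's setting the diffusion coefficient would vary spatially with scene depth; this sketch uses a uniform coefficient only to show how diffusion time stands in for blur radius.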