
Richardson-Lucy Deblurring for
Scenes under a Projective Motion Path
Yu-Wing Tai, Member, IEEE, Ping Tan, Member, IEEE, and Michael S. Brown, Member, IEEE
Abstract—This paper addresses how to model and correct image blur that arises when a camera undergoes ego motion while
observing a distant scene. In particular, we discuss how the blurred image can be modeled as an integration of the clear scene under a
sequence of planar projective transformations (i.e., homographies) that describe the camera’s path. This projective motion path blur
model is more effective at modeling the spatially varying motion blur exhibited by ego motion than conventional methods based on
space-invariant blur kernels. To correct the blurred image, we describe how to modify the Richardson-Lucy (RL) algorithm to
incorporate this new blur model. In addition, we show that our projective motion RL algorithm can incorporate state-of-the-art
regularization priors to improve the deblurred results. The projective motion path blur model, along with the modified RL algorithm, is
detailed, together with experimental results demonstrating its overall effectiveness. Statistical analysis of the algorithm’s convergence
properties and robustness to noise is also provided.
Index Terms—Motion deblurring, spatially varying motion blur.
1 INTRODUCTION
Motion blur from camera ego motion is an artifact in
photography caused by the relative motion between
the camera and an imaged scene during exposure. Assuming
a static and distant scene and ignoring the effects of defocus
and lens aberration, each point in the blurred image can be
modeled as the convolution of the unblurred image by a
point spread function (PSF) that describes the relative
motion trajectory at that point’s position. The aim of image
deblurring is to reverse this convolution process to recover
the clear image of the scene from the captured blurry image,
as shown in Fig. 1.
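For a spatially invariant PSF, the forward blur model described above can be sketched in a few lines of numpy. This is a toy grayscale example, not the paper's code; `motion_blur_psf` and `apply_psf` are illustrative names, and the horizontal-line kernel is just one possible motion trajectory:

```python
import numpy as np

def motion_blur_psf(length=3):
    """A toy horizontal-motion PSF: a normalized 1 x `length` line of pixels."""
    return np.full((1, length), 1.0 / length)

def apply_psf(img, psf):
    """Blur `img` with `psf` via direct 2-D filtering (zero padding,
    same-size output). For the symmetric line kernel above, correlation
    and convolution coincide."""
    kh, kw = psf.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, kh - 1 - ph), (pw, kw - 1 - pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += psf[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

# Blurring an impulse reproduces the PSF itself, centered at the impulse:
# the PSF literally is the image of a single bright point under the motion.
img = np.zeros((5, 7))
img[2, 3] = 1.0
blurred = apply_psf(img, motion_blur_psf(3))
```

Deblurring, as the paragraph above states, is the inverse problem: recovering `img` given only `blurred` and (in the non-blind case) the PSF.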
A common assumption in existing motion deblurring
algorithms is that the motion PSF is spatially invariant. This
implies that all pixels are convolved with the same motion
blur kernel. However, as recently discussed by Levin et al.
[17], this global PSF assumption is typically invalid. In their
experiments, images taken with camera shake exhibited
notable amounts of rotation that causes spatially varying
motion blur within the image. Fig. 2 shows a photograph
that illustrates this effect. As a result, Levin et al. [17]
advocated the need for a better motion blur model as well
as image priors to impose on the deblurred results. In this
paper, we address the former issue by introducing a new
and compact motion blur model that is able to describe
spatially varying motion blur caused by a camera under-
going ego motion.
We refer to our blur model as the projective motion blur
model as it represents the degraded image as an integration
of the clear scene under a sequence of planar projective
transforms. Fig. 3 shows a diagram of this representation.
One key benefit of this model is that it is better suited to
representing camera ego motion than the conventional
kernel-based PSF parameterization that would require the
image to be segmented into uniform blur regions, or worse,
a separate blur kernel per pixel. However, because our
approach is not based on convolution with an explicit PSF,
it has no apparent frequency domain equivalent. One of the
key contributions of this paper is to show how our blur
model can be used to extend the conventional pixel-domain
Richardson-Lucy (RL) deblurring algorithm. We refer to this
modified RL algorithm as the projective motion Richardson-
Lucy algorithm. Similarly to the conventional RL deblur-
ring, regularization based on various priors can be
incorporated in our algorithm.
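The two ideas in this paragraph can be illustrated with a small numpy sketch: the forward model averages the clear image warped by each homography in the motion path, and an RL-style update corrects the estimate by warping the residual ratio back through the inverse homographies. This is a toy sketch under simplifying assumptions (grayscale image, nearest-neighbour sampling, zeros outside the frame, known homographies); the function names are illustrative, and the actual update rule is derived in Section 4:

```python
import numpy as np

def warp(img, H):
    """Inverse warp: output pixel x takes the value of img at H @ x
    (nearest-neighbour sampling; pixels mapping outside the frame get 0)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # 3 x N homogeneous
    src = H @ pts
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros(h * w)
    out[valid] = img[sy[valid], sx[valid]]
    return out.reshape(h, w)

def projective_motion_blur(img, Hs):
    """Forward model: the blurred image is the average of the clear
    scene seen under each homography along the motion path."""
    return sum(warp(img, H) for H in Hs) / len(Hs)

def projective_motion_rl(B, Hs, iters=20, eps=1e-8):
    """RL-style iteration by analogy with conventional Richardson-Lucy:
    convolution is replaced by the forward warps, and the correlation
    step by the inverse warps (see Section 4 for the derivation)."""
    I = B.copy()
    Hinvs = [np.linalg.inv(H) for H in Hs]
    for _ in range(iters):
        Bt = projective_motion_blur(I, Hs)       # predicted blurred image
        ratio = B / (Bt + eps)                   # multiplicative correction
        I = I * sum(warp(ratio, Hi) for Hi in Hinvs) / len(Hs)
    return I
```

When every homography in `Hs` is the same pure translation scaled along the path, this forward model reduces to convolution with a line-shaped PSF, which is why the kernel-based model is a special case of the projective one.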
Our paper is focused on developing the projective
motion blur model and the associated RL algorithm. We
assume that the motion path of the camera is known and
that the camera’s motion satisfies our projective motion blur
model. Recent methods useful in estimating the projective
motion path are discussed in Section 6. As with other
camera shake deblurring approaches, we assume that the
scene is distant and void of moving objects.
The remainder of this paper is organized as follows:
Section 2 discusses related work, Section 3 details our
motion blur model, Section 4 derives the projective motion
Richardson-Lucy deconvolution algorithm, Section 5 de-
scribes how to incorporate regularization into our modified
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 33, NO. 8, AUGUST 2011
. Y.-W. Tai is with the Department of Computer Science, Korea Advanced
Institute of Science and Technology (KAIST), Rm 2425, E3-1 CS Building,
373-1 Guseong-dong, Yuseong-gu, Daejeon, 305-701, South Korea.
. P. Tan is with the Department of Electrical and Computer Engineering,
National University of Singapore (NUS), Blk E4, Level 8, Room 4, 4
Engineering Drive 3, Singapore 117576, Republic of Singapore.
. M.S. Brown is with the School of Computing, National University of
Singapore (NUS), School of Computing, Computing 1, 13 Computing
Drive, Singapore 117417, Republic of Singapore.
Manuscript received 17 July 2009; revised 11 Apr. 2010; accepted 11 Nov.
2010; published online 13 Dec. 2010.
Recommended for acceptance by K. Kutulakos.
For information on obtaining reprints of this article, please send e-mail to:
tpami@computer.org, and reference IEEECS Log Number TPAMI-2009-07-0459.
Digital Object Identifier no. 10.1109/TPAMI.2010.222.
0162-8828/11/$26.00 © 2011 IEEE. Published by the IEEE Computer Society.