Recent Advances in Image Dehazing
Recent Advances in Image Dehazing
3, JULY 2017
Abstract—Images captured in hazy or foggy weather conditions even remove interference due to haze by special approaches, in
can be seriously degraded by scattering of atmospheric particles, order to obtain satisfactory visual effects and obtain more use-
which reduces the contrast, changes the color, and makes the ful information. In theory, image dehazing removes unwanted
object features difficult to identify by human vision and by some
outdoor computer vision systems. Therefore image dehazing is visual effects and is often considered as an image enhancement
an important issue and has been widely researched in the field technique. However, it differs from traditional noise removal
of computer vision. The role of image dehazing is to remove and contrast enhancement methods since the degradation to
the influence of weather factors in order to improve the visual image pixels that is induced by the presence of haze depends
effects of the image and provide benefit to post-processing. This on the distance between the object and the acquisition device
paper reviews the main techniques of image dehazing that have
been developed over the past decade. Firstly, we innovatively and the regional density of the haze. The effect of haze on
divide a number of approaches into three categories: image image pixels also suppresses the dynamic range of the colors.
enhancement based methods, image fusion based methods and
image restoration based methods. All methods are analyzed
and corresponding sub-categories are introduced according to
principles and characteristics. Various quality evaluation meth-
ods are then described, sorted and discussed in detail. Finally,
research progress is summarized and future research directions
are suggested.
Index Terms—Atmospheric scattering model, image dehazing,
image enhancement, quality assessment.
I. I NTRODUCTION
Fig. 1. Comparison between hazy image and hazy free image.
Although a large number of image dehazing methods have range, to improve the image contrast and enhance the details
been proposed, the research is still scattered and a complete of the image. In other words, histogram equalization enhances
theoretical system has still not been established. In particular, the overall contrast of a hazy image by increasing the dynamic
there is a lack of systematic summary of the advances in range of the gray values. An example is shown in Fig. 4: (a)
related work until now [41]. Therefore, it is necessary to sum- is a hazy image, (b) is the histogram of (a), (c) is the dehazed
marize the development of image dehazing methods in the last image from (a), and (d) is the histogram of (c).
decade. This paper provides an extensive review of the recent
advances of image dehazing techniques and related methods.
To facilitate a comprehensive overview, existing techniques
are categorized based on their principles and characteristics.
In addition, various quality evaluation methods are described
and discussed in detail, and research progress is summarized
and future research directions are suggested. In this paper, we
try to elaborate on existing image dehazing methods including
application characteristics, dehazing performance, algorithm
complexity and other aspects.
The remainder of this paper is organized as follows. Section
II introduces the dehazing methods according to their classi-
fication in Fig. 3 and their principles and characteristics are
analyzed in detail. In Section III, related quality assessment
criteria of dehazing algorithms are described. Finally, a sum-
mary of conclusions is given and future research directions are
suggested in Section IV. Fig. 3. Classification of image dehazing methods.
enhancement of images that are too dark or bright overall. an illumination component.
It is usually used to compress the brightness of pixels to
obtain more uniform exposure characteristics [42]. However,
the algorithm’s gray statistics across the whole image make it
difficult for each local area to restore the optimal values since
the method cannot adapt to the local brightness characteristics
of an input image, and often causes a “halo” effect and
brightness distortion. Therefore, some scholars have proposed
a local histogram equalization algorithm to solve this problem Fig. 5. The model of illumination reflection.
which has been widely used.
LHE extends the histogram equalization algorithm to all
local regions of the image, and adaptively enhances local F (x, y) = R(x, y)I(x, y) (2)
information of the image by local operations. It is suitable for where R(x, y) is the reflection component, which represents
processing a hazy image with low contrast and a changeable the reflection of the surface of an object and is related to
depth of field, but block effect usually appears and the calcula- the intrinsic nature of the image, I(x, y) is the illumination
tion complexity is large. Local histogram equalization methods component, which depends on the ambient light and is related
have been proven to provide better performance than global to the dynamic range of the image and F (x, y) is the captured
methods and reveal more local image details with stronger image. Based on Retinex theory, if a method can be found
image enhancement performance [43]−[45]. to estimate and separate the reflection component from the
Some optimized methods will now be described. Reference total light, the impact of the illumination component on the
[46] used adaptive histogram equalizations (AHE) for contrast image can be reduced, achieving the goal of enhancing the
enhancement while [47] used partially overlapped sub-block image. The Retinex algorithm has the characteristics of color
histograms to enhance the contrast. Huang et al. [48] have constancy, dynamic range compression and color fidelity, and
proposed a novel local histogram equalization algorithm which its workflow is shown in Fig. 6, where log is a logarithmic
had good performance for improving the contrast of the image operation and exp is an exponential operation.
while preserving the brightness. Xu et al. [49] have estab-
lished a generalized equalization model that integrates contrast
enhancement and white balancing into a unified framework
for convex programming of the image histogram. In [50],
histogram equalization and a wavelet transform (WT) method
are combined to enhance images, which can improve the
gray distribution of images. Xu et al. [51] have proposed Fig. 6. Workflow of the Retinex method.
a contrast limited adaptive histogram equalization (CLAHE)
method to remove the effects of fog, which can limit noise The Retinex method used for enhancement of hazy images
while enhancing the image contrast. In [52] combined the can be divided into two categories: single-scale Retinex (SSR)
CLAHE method with the Weiner filter and [53] combined and multi-scale Retinex (MSR).
the CLAHE method with the finite impulse response filter to An SSR algorithm has been proposed by Jobson et al.
enhance the contrast of images. [56] based on the center/surrounding Retinex method. The
In summary, the histogram equalization algorithm can essence of this algorithm is to obtain the reflection image by
achieve better performance for gray images than for color estimating the ambient brightness. In order to keep a good
images, and can lead to noise amplification in some hazy balance between the dynamic range compression and the color
images. constancy, Rahman et al. [57] extended the SSR algorithm to
2) Retinex Method multiple scales and proposed an MSR algorithm.
Retinex, i.e., retinal cerebral cortex theory, was created by Since the reflection image has little dependence on the
Land and McCann based on color perception by the human intensity of the illumination, the Retinex algorithm can easily
eyes [54], [55]. Retinex-based algorithms have been widely realize image dehazing. The formulas for SSR and MSR can
applied in the field of image enhancement for applications such be expressed by (3) and (4), respectively.
as shadow removal and haze removal. Its principal concept
ri (x, y) = log Ri (x, y)
is to obtain the reflection properties of objects from the
influence of light on the image, and it provides a model = log Fi (x, y) − log[G(x, y) ∗ Fi (x, y)] (3)
for describing the color invariance. The concept is based
on the fact that during visual information transmission, the rMSRi (x, y)
human vision system performs some information processing = log Ri (x, y)
to remove the uncertainty related to the light source’s intensity X N
and irradiation, and only information reflecting the nature of = wk {log F i (x, y)−log[Gk (x, y)∗Fi (x, y)]} (4)
the object, such as the reflection coefficient. The model of k=1
illumination reflection is shown in Fig. 5 and (2), which show where F (x, y) is the input image, ri (x, y) is the output of
that an image can be expressed as a reflection component and the Retinex, R(x, y) is the reflection image, i is the color
WANG AND YUAN: RECENT ADVANCES IN IMAGE DEHAZING 413
channel, (x, y) is the position of a single pixel, ∗ represents by a slow variation in space, and the reflection component
the convolution operator, N is the number of scales, G(x, y) = is often associated with the details of the scene. Image
1 2 2
e−(x +y )/c is the low-pass convolution surrounding function, enhancement is achieved by removing the radiation compo-
wk is the weighted coefficient, and c is the Gauss surrounding nent. By combining frequency filtering with the gray scale
scale. transformation, the dynamic range of the compressed image
The algorithm combines the advantages of different Gaus- can be used to improve the image quality.
sian functions convolved with the original image, including Thus, the basic principle of homomorphic filtering for de-
the characteristics of large, medium and small scales, and can hazing is still based on the illumination model. The flowchart
achieve high dynamic range compression and color constancy of this algorithm is shown in Fig. 7. where log is the loga-
for better visual effects. rithmic transform, FFT is the Fourier transform, H(u, v) is
However, since Gaussian filtering does not have good edge the frequency filtering function, IFFT is the inverse Fourier
preservation performance, the phenomena of edge degradation transform and exp is the exponential operation.
and “halo” artifacts will appear in the dehazing result. In
order to solve these problems as much as possible, Xu et
al. [58] estimated the illumination values by using a mean
shift smoothing filter to overcome the uneven illumination and Fig. 7. The flowchart of homomorphic filtering.
eliminate the halo phenomenon. Yang et al. [59] presented an
adaptive filter which combined sub-block local information Seow et al. [64] processed foggy color images using a
to estimate the luminance component. Hu et al. [60] used homomorphic filter and achieved good enhancement effects.
bilateral filtering to replace Gaussian filtering to estimate In [65], a self-adaptive homomorphic filtering method is
the illumination component. In [61], a novel Multi-Scale proposed to remove thin clouds.
Retinex color image enhancement method has been proposed The homomorphic filtering algorithm can remove uneven
to enhance the contrast and better preserve the color of the regions generated by light, while maintaining the contour
original image. In this method, the orientation of the long axis information of the image. However, it needs two Fourier trans-
of the Gaussian filter is determined according to the gradient formations, one exponential operation and one logarithmic
orientation at that position. Shu et al. [62] also proposed a type operation for each pixel of the image, so the computation is
of MSR algorithm based on sub-band decomposition for image large.
enhancement. Fu et al. [33] proposed a variation framework b) The basic principle of the wavelet transform (WT) is sim-
for Retinex to process the reflection and the illumination from ilar to homomorphic filtering for image enhancement. Firstly,
a single underwater image by decomposing, enhancing and a wavelet transform is performed on the original image, and
combining after color correction. Zhang et al. [63] adopted images with different frequency characteristics are obtained.
an improved Retinex-based method to remove fog in a traffic The details of the image are then enhanced for the non-low
video. Experimental results showed that the proposed method frequency sub blocks to improve their clearness.
can not only remove the fog but also enhance the clarity of The WT can be described by the following steps: perform
the traffic video images. displacement processing on the basic wavelet function ψ(t)
The advantages of the Retinex algorithm are clear and at step τ , then create a product with signal x(t) for different
easy to implement. These methods can not only increase the scales a:
contrast and brightness of the image, but also can regulate the 1
Z +∞ µ
t−τ
¶
∗
dynamic range of the gray level with a priority given to color W Tx (a, τ ) = √ x (t) ψ dt, a > 0. (5)
a −∞ a
image dehazing. However, the algorithm uses the Gaussian
convolution template for illumination estimation and does not Its equivalent expression in the frequency domain is:
have the ability to preserve edges, which will lead to halo √ Z +∞
a
phenomena in some sharp boundary regions or cause the whole W Tx (a, τ ) = X (ω) Ψ∗ (aω) ejωτ dω (6)
2π −∞
image to be too bright.
3) Frequency Domain Filtering where X(ω) and Ψ(ω) are the Fourier transform of x(t) and
Under foggy conditions, the low frequency components of ψ(t), respectively.
an image are enhanced, so a high-pass filter can be used for Grewe et al. [66] proposed a fusion method based on
image filtering to suppress low frequencies and enhance high wavelet analysis, and fused and processed a large number of
frequencies. The frequency domain enhancement always uses foggy images to obtain a high quality visual effect. Russo [67]
Fourier analysis and other methods to convert an image into implemented equalization with different scales on a degraded
the frequency domain. After completing the filtering operation, image, and achieved good sharpening results of the details.
an inverse transform is performed back to the spatial domain. Du et al. [68] suggested that haze is distributed in the low-
Typical methods based on the frequency domain include frequency layer, and thus introduced a single-scene based haze
homomorphic filtering, the wavelet transform and the Curvelet masking method that uses wavelet analysis to decompose a
transform. hazy image. However, the application of this method is limited
a) The principle of homomorphic filtering is to divide the to ice/snow-free scenes. Reference [69] assumed that the fog
image into a radiation component and a reflection component. is mainly in low frequency regions while scene details are in
The radiation component of the foggy image is characterized high frequency regions, and improved the image quality by
414 IEEE/CAA JOURNAL OF AUTOMATICA SINICA, VOL. 4, NO. 3, JULY 2017
dehazing the low frequency regions and enhancing the high less scattered by particles in the air. This makes it desirable
frequency regions. Zhu et al. [70] applied the wavelet trans- for image dehazing to reveal details of distant objects in
form to image dehazing, and then used the SSR algorithm to landscape photographs. The near-infrared spectrum can easily
enhance the color performance and get the expected haze-free be acquired by using off-the-shelf digital cameras with minor
image. Reference [71] described a new method for mitigating modifications [75], or potentially through a single RGBN
the effects of atmospheric distortion using a regional fusion camera which can capture multiple images with different
method based on the dual tree complex wavelet transform properties simultaneously.
(DT-CWT) which improved the visibility. John et al. [72] Schaul et al. [76] took advantage of the fact that NIR
introduced a wavelet based method for enhancing weather images are less sensitive to haze and proposed a method to
degraded video sequences, which processed the foreground dehaze images using both visible and NIR images. In their
and background pixels of the reduced quality video using method, the optimization framework of the edge preserving
wavelet fusion theory. multiresolution decomposition is applied to both the visible
The WT is a local transformation of space and frequency and the NIR images based on weighted least squares (WLS),
and has advantages of multi-scale analysis and multi-resolution and a pixel level fusion criterion is used to maximize the image
characteristics for image contrast enhancement. However, contrast. The advantage of this approach for dehazing is that
over-bright, over-dark and unevenly illuminated images are there is no requirement for a scattering model. An example is
difficult to resolve. shown in Fig. 8. In contrast with [76], reference [77] performed
c) The curvelet transform (CT) is a multi-scale analysis dehazing on visible images and infrared images by firstly
method developed from the wavelet transform, which can using a processing method, and then used a fusion strategy
overcome the edge enhancement limitation of WT. CT has to complete the image fusion. In [78], the authors proposed
been used to perform automatic processing of foggy images. a two-stage dehazing scheme: an air-light color estimation
Starck et al. [73] presented a new method for contrast en- stage that exploits the dissimilarity between RGB and NIR;
hancement based on the CT, which can represent edges better and an image dehazing stage that enforces the NIR gradient
than wavelets, and is therefore well-suited to multi-scale edge constraint through an optimization framework. This method
enhancement. The authors also found that curvelet based en- also achieved good results.
hancement out-performs other enhancement methods for noisy
images, but on noiseless or near noiseless images, curvelet
based enhancement is not much effective than wavelet based
enhancement. In [74], the authors implemented an efficient
algorithm which can extract a clear image from a blurred and
hazy image by using the curvelet to increase the clarity of the
image as well as removing image haze.
Although it can improve the visual image quality by
enhancing the curved edges, it cannot in essence remove
interference due to fog from the image. Its general application
includes SAR (synthetic aperture radar) image enhancement
and ceramic micro image enhancement.
In summary, the main purpose of foggy image enhancement
is to satisfy the visual effect requirement for the human eyes,
or make computer recognition easier. While the image quality
is not considered, the methods only need to highlight certain
information while reducing or removing the information that
is unnecessary within an image. Since there is no physical
mechanism and degradation model for foggy image process-
ing, this is not essentially dehazing, especially for foggy color
images, which generally cannot achieve a satisfactory result.
weighted by three normalized weight maps (luminance, chro- proposed to combine the initial recovered image with an image
matic, saliency) and finally blended in a multi-scale fashion with sufficient details and color information. The combined
that avoids introducing artifacts. image is more informative than any of the input images and
The first input I1 (x) is obtained by white-balancing the should also appear “natural” [86].
original hazy image. The second image I2 (x) is obtained by These methods employ a fusion-based strategy for two
subtracting the original image from the mean image using the images derived from the original image, and thus the images
expression: are perfectly aligned. However, this technique is limited to
¯
¡ ¢
I2 (x) = γ I(x) − I(x) (7) processing only color images.
where γ is a factor that increases linearly with luminance in
hazy regions. C. Image Restoration Based Methods
In order to balance the contribution of each input and ensure Image restoration based methods for dehazing are studied
that regions with high contrast are obtained, three measures to explore the reasons for the image degradation and analyze
(weight maps) are introduced: the luminance weight map the imaging mechanism, then recover the scene by an inverse
WLk (x), the chromatic weight map WCk (x) and the saliency transformation. In this method, the physical model of the
weight map WSk (x), where k is the index of the inputs. degraded images is the basis, and many researchers have used
Assume that the resulting weights W k are obtained by the following general model for image restoration.
multiplying the processed weight maps WLk , WCk and WSk . 1) Degradation Model: As shown in Fig. 9, f (x) is the input
Then, each pixel x of the output F is computed by summing image, h(x) is the degradation function, n(x) is the noise, g(x)
the inputs Ik weighted by corresponding normalized weight is the degraded image, h′ (x) is the restoration function and
maps W k : X f ′ (x) is the restored image. The linear time invariant system
F (x) = W̄ k (x)Ik (x) (8) can be generally expressed as:
k
g(x) = f (x) ∗ h(x) + n(x). (11)
where Ik symbolizes
P the input (k is the index of the inputs)
and W̄ k = W k / k W k are the normalized weight maps.
Using Gaussian and Laplacian pyramids, the above equation
becomes:
X
Fl (x) = Gl {W̄ k (x)}Ll {Ik (x)} (9)
k
imaging model, which forms a theoretical basis of a foggy where E∞ (λ) is the radiation intensity of the atmospheric light
image with characteristics of blur and low contrast that can at infinity.
be used to understand the degradation mechanism of foggy According to the mechanisms of the McCartney model [88],
images, thus enabling degraded images to be restored. A the attenuation process and the airlight imaging process are
schematic diagram of the atmospheric scattering model is both dominant and lead to a decrease in contrast of the foggy
shown in Fig. 10. The solid line is the light from the object to image. Therefore, the total radiant intensity received by the
the camera, and the dotted line is the air-light. camera is equivalent to the linear superposition of the scene
The principle of the attenuation model is described in radiation light with the addition of scattered light entering the
Fig. 11. If a beam of light is emitted into an atmospheric imaging system, and the formula is:
medium, when the incident light passes through a unit area (the
E(d, λ) = E0 (λ)e−β(λ)d + E∞ (λ)(1 − e−β(λ)d ) (14)
shaded part), the energy of the light will then be attenuated.
It can be expressed by (12). where the first term is the direct attenuation, which describes
the attenuated result of reflected light in the medium, and
the second term is the airlight (the atmospheric veil), which
reflects the scattering of global atmospheric light. Letting
I(x) = E(d, λ) represent the hazy image, J(x) = E0 (λ)
represent the haze-free image, t(x) = e−β(λ)d denote the
transmission, and A = E∞ (λ) denote the atmospheric light
(skylight or airlight color), then, equation (14) can be simpli-
fied to:
Fig. 10. Atmospheric scattering model.
I(x) = J(x)t(x) + A(1 − t(x)). (15)
As can be seen from (15), the main difficulties in solving
single image dehazing are the double unknowns of the haze-
free image J(x) and the transmission map t(x), which are
severely ill-posed. However, if the depth information of an
image is known, or if multiple images can be used to estimate
the depth, or some prior knowledge is available for a single
image, the J(x) can still be resolved.
Therefore, in recent years, many scholars have used (14) or
Fig. 11. Attenuation model.
(15) as the prototype to propose a large number of dehazing
algorithms, many of which have achieved satisfactory results.
Several representative methods based on a physical model will
Ed (d, λ) = E0 (λ)e−β(λ)d (12) now be introduced in the following section.
1) Single Image Dehazing With Additional Information
where λ is the wavelength of visible light, d is the distance a) Knowing the scene information
from the scene to the camera, β(λ) is the atmospheric scat- The method was first proposed by Oakley and Satherley
tering coefficient and E0 (λ) is the beam radiation intensity at [87], who studied a degradation model that was based on
x = 0. multi-parameter statistics under the assumption that the depth
The principle of the airlight scattering model is described of the scene is known, and then completed the scattering
in Fig. 12. If it is assumed that the direction, intensity and attenuation compensation using the estimated weights of pixel
spectrum of the atmospheric light are unknown, and that the scattering and reflection, and obtained good recovery results.
light traveling along the line of sight has constant energy, then The method was only suitable for gray images when first
the radiation intensity reaching the camera can be expressed proposed by Oakley; later, Tan et al. [89], [90] improved the
by (13). algorithm by performing an in-depth study of the relationship
between the quality of the image contrast and the wavelength,
and extended the degraded image restoration to color images.
On the basis of this study, Robinson et al. [91] constructed a
dynamic and real-time weather system, which was based on
the atmospheric scattering model to compensate for the loss
of contrast by removing the environmental light components
in each color channel.
Later, Hautière et al. [92] estimated the visibility distance
using side geographical information that was obtained using
Fig. 12. Airlight scattering model. an on-board optical sensor system [93], [94] to establish the
relationship between the road visibility and the contrast in the
foggy image [24]. They then computed the depth of the scene
Ea (d, λ) = E∞ (λ)(1 − e−β(λ)d ) (13) by modeling the depth value of each point as a Euclidean
WANG AND YUAN: RECENT ADVANCES IN IMAGE DEHAZING 417
distance function, and used the 3D geographical model to as knowledge of the physics of the atmosphere. Zhu et al.
remove the fog. Kopf et al. [95] introduced a deep photo [100], [101] later proposed a simple and powerful method of
system that uses the existing digital terrain to provide the basic prior color attenuation to create a linear model for scene depth
information. A three-dimensional model of the scene was built of hazy images. In this method, a linear model is firstly created
firstly by estimating a large amount of information such as as follows.
depth and texture, and then the depth information values as
well as the structure of the image colors and texture were d(x) = θ0 + θ1 v(x) + θ2 s(x) + ε(x) (16)
determined in order to estimate a stable value for the curve
haze. The final physical model can be used for the purpose of where x is the position within the image, d is the scene
dehazing. depth, v is the brightness component of the hazy image, s
This method is based on the premise that the depth of the is the saturation component, θ0 , θ1 , θ2 are the unknown linear
scene is known and that the restoration of the image is good. coefficients and ε(x) is the random error. Assuming a Gaussian
However, the hardware requirements for expensive radars and density for ε with zero mean and variance σ 2 , then according
distance sensors and the requirement for an existing database to the Gaussian distribution property, d(x) can be expressed
to obtain accurate scene depth information severely limits the as:
real-time applicability of this algorithm. d(x)∼p(d(x)|x, θ0 , θ1 , θ2 , σ 2 ) = N (θ0 +θ1 v+θ2 s, σ 2 ). (17)
b) User interaction
In addition, Narasimhan et al. [96] proposed a single foggy By learning the parameters of the linear model with a super-
image interactive restoration method, which requires a user to vised learning method on 500 training samples containing 120
input the area of the sky or the areas that are seriously affected million scene points, the bridge between the hazy image and
by weather, the artificially-specified maximum depth of field its corresponding depth map can be effectively built with the
and the minimum depth of the field area to obtain rough depth best learning results such that θ0 = 0.121779, θ1 = 0.959710,
information. Using the estimation of scene depth map, the im- θ2 = −0.780245, σ = 0.041337. Using the recovered depth
age is then restored based on the atmospheric scattering model. information, the haze can be easily removed from a single hazy
This method does not require precise information about the image. The proposed approach runs quickly and can achieve
scene or the weather conditions, and does not require changes good results, but the training procedure is complex and the
in weather conditions between image acquisitions. It is clear parameters rely too much on the training data.
that such simple techniques are easy-to-use and can effectively 2) Multi-image Dehazing Methods
restore clear daytime colors and contrasts from images taken Depth or detailed information can also be estimated using
in poor weather conditions, as shown by the example in two or more different images of the same scene. The recovery
Fig. 13. Sun et al. [97] later proposed a method that assumed principles used by this method can be divided into two
gentle changes in the depth of the scene, and then simplified categories: different polarizing filters and different weather
the atmospheric scattering model to a monochromatic model. conditions.
With user assistance, the sky region and the maximum and
a) Different polarizing conditions
minimum depth regions are obtained, and image dehazing is
A team of researchers led by Schechner et al. [102] have
realized by solving the partial differential equation. This type
studied the polarized characteristics of light and found that
of interaction based method can obviously improve the visual
reflected light from the target has no polarization charac-
effects and contrast, but since it requires a certain degree of
teristics, and sky light has some polarization characteristics
user interaction, it cannot be done automatically by a real-time
after medium scattering. Therefore, using the polarization
system.
characteristics of sky light, the authors captured multiple
images of the same scene with different polarization angles
and obtained the degree of polarization, and then restored the
degraded image.
In order to facilitate the description, eq. (14) can be updated
as follows:
Similarly, the light of the polarization imaging system can to improve the visibility and color of the image and achieve the
be decomposed into I Π and I ⊥ (I ⊥ > I Π ), and the DOP of purpose of dehazing. In addition, the authors also processed
the scene is defined as: the noise added during the course of dehazing. After obtaining
I⊥ − IΠ the distribution rate and the intensity of the ambient light,
PJ = . (20) the noise is considered to be related to the distance and
I
will be amplified when recovery occurs through the physical
Since the two images in orthogonal polarization directions
model. So the authors used the regularization method, adaptive
are I Π = J/2 + AΠ and I ⊥ = J/2 + A⊥ , then the intensity
weights related to distance [105] and the nano flow method
of the atmospheric light can be calculated:
[106] to remove noise.
I × PJ In another paper [107], the authors proposed a type of
A= . (21)
PA polarimetric dehazing method to enhance the contrast and the
Based on above equations, the dehazing image can be range of visibility of images based on angle-of-polarization
calculated using the formula: (AOP) distribution analysis. Reference [108] introduced an
PJ effective method to synthesize the optimal polarized-difference
I(1 − PA )
Jobject = . (22) (PD) image and presented a new polarization hazy imaging
I×PJ
1− A∞ ×PA model that considers the joint polarization effects of airlight
From the above analysis, PJ and I can be obtained by using and the object radiance in the imaging process. After an-
two or more images in different polarization directions. If A∞ alyzing several methods for estimating airlight parameters,
and PA are estimated by the polarized image in the infinity reference [109] proposed blind estimation of the DOP based
scene, then a clear image can be obtained from the fog. An on independent component analysis (ICA). In the paper by
example is shown in Fig. 14. Treibitz and Schechner [110], different angles of polarized
filters are quantitatively analyzed according to their signal-
to-noise ratio (SNR) to estimate the dehazing effects. A
quality assessment method suitable for polarization analysis
images in foggy conditions is proposed [111]. Reference [112]
proposed a method that estimates the haze parameters from
the polarization information of two known objects at different
distances, and the estimated parameters are used to remove the
haze effect from the image. Some methods can also be applied
to underwater images [113]−[115], which can not only obtain
clear images, but also enhance the structural information about
the scene.
These methods are very dependent on the DOP of sky light.
While they can enhance the image contrast under thin fog
and dense fog, the dehazing effect may be greatly reduced
Fig. 14. Dehazing with ploarized images [102]. because of inaccuracies in the scene information. In addition,
it is difficult to find the maximum and minimum degrees of
Schechner et al. [102], [103] analyzed the imaging pro- polarization under the same scene during rapid scene changes,
cess of a foggy image, and explained the physical principle and the operation is complicated, so it is not conducive to
of the polarization effect based on atmospheric scattering. image restoration in real time.
Firstly, two or more images were collected through adjusting b) Different weather conditions
the polarization direction of the polarizer; then, the contrast Another method of obtaining depth information of a scene
and correct colors of the scene were recovered using these is by capturing two images of the same scene under different
obtained images in order to estimate the atmospheric optical weather conditions. Narasimhan and Nayar [116]−[120] have
polarization coefficient using this data. The optical depth of extensively studied the extraction of depth information of
the scene is then obtained and image dehazing is realized using a scene from different perspectives. By analyzing multiple
the atmospheric scattering physical model. Using these results, obtained images of the same scene under different foggy
the scene depth map and the atmospheric particle properties conditions, it was found that under different scenarios, the
can also be calculated. However, this method mainly depends intensity and color of the image was mainly determined by the
on information about the infinite sky, so it has some limitations atmospheric light and the scattering of atmospheric particles.
for application. Shwartz et al. [104] then proposed a type of Therefore, when there are multiple unknown parameters in the
blind classification method to solve the limitation of parameter physical model, the authors combined two or more different
estimation based on sky information. The sky information degraded images to obtain useful information, proposed a
may be ignored by assuming that there is no correlation geometric framework describing the impact of atmospheric
between the airlight component and the direct transmission scattering on color and used this framework for image de-
component in some parts of the image, and an independent hazing. Firstly, the geometric constraints of color changes in
component analysis (ICA) method is adopted to restore the different images is calculated; then, these constraints and the
airlight component and other related data information in order atmospheric scattering model are combined and the color and
WANG AND YUAN: RECENT ADVANCES IN IMAGE DEHAZING 419
depth information is computed; finally, a three dimensional levels of fog are obtained to estimate the scene depth, similar
structure is obtained to restore the clear image, which achieves to the work of Nayar and Narasimhan [116]. Wu and Dai
good results. An example is shown in Fig. 15. additionally provided a segmentation step to adapt to changes
in a scene, such as planes moving across the field of view.
Therefore, this approach can account for ambiguous regions.
These types of dehazing methods are simple and can achieve
good results. However, two or more different images in the
same scene are required, so it is difficult to realize image de-
hazing within a short time for real-time monitoring situations,
and difficult to apply and popularize in practice.
3) Single Image Dehazing Method With Prior Knowledge
Single image dehazing is essentially an under-constrained
problem. In order to make image dehazing more practical,
some image dehazing methods based on additional priors or
constraints have been proposed in recent years, adding new
vitality to image processing [124]. Some classic algorithms of
this type of method are introduced in the following paragraphs.
a) Tan method
In 2008, Tan [125] proposed an effective image dehazing
method based on two prior conditions. The first condition is
that the contrast in the image without fog should be higher
than that of the foggy image. The second condition is that the
Fig. 15. Dehazing with ploarized images [117]. attenuation of field spots is a continuous function of distance
which should be smooth. The author firstly defined the color
Reference [116] analyzed the effects of the atmosphere of light, and then by separating each color channel of the
on imaging and proposed a two-color atmospheric scattering image brightness, the airlight color of the input image can be
model. The image degradation process due to atmospheric transformed to white. The equation can be modified as follows:
scattering was described as an interactive function of the color,
the depth and the environmental light at particular points in the 1
scene, and a structure of the fog concentration and image depth I ′ (x) = J ′ (x)t(x)′ + A(x) 1 (23)
was constructed. The model takes into account the dependence 1
of atmospheric scattering on wavelength, which requires a where I ′ (x) is the image after color standardization, J ′ (x) is
clear image of the scene without fog. In order to avoid this the corresponding dehazed image, and the invariant A(x) =
constraint, Narasimhan et al. [117] used color changes in the (Ar + Ab + Ac )(1 − t(x)).
degraded images under different weather conditions as the Based on the first prior knowledge, the cost function of
constraint condition and proposed an effective algorithm for the edge strength is then constructed, and the formula can be
the reconstruction of a 3D scene structure including scene expressed as:
color information, which can be extended to color images. X
In [118], the method of constructing the depth information Cedges (I) = |∇Ic (x)| (24)
of the scene is described in detail which uses two images x,c
of the same scene with different weather conditions. Other
reference [119], [120] introduced a method to calculate the where c ∈ {R, G, B} are RGB channels and ∇ is the
scene structure, enhance the image contrast and restore clear differential operator.
images by searching for the depth of discontinuities. Based on the second prior knowledge, the airlight is ob-
Sun et al. [121] later improved the above method by tained using Markov random fields (MRFs). The potential
changing the original distribution mode for the concentration, function of the MRFs is:
scattering coefficient and color information from global mode E ({ Ax } | Px ) =
X
φ (Px |Ax ) + η
X
ψ (Ax , Ay ) (25)
to local mode, where the gradient field related to the depth x x,y∈Nx
is obtained from the partial derivative of the atmospheric
degradation equation, and the Poisson equation is solved to where φ (Px |Ax ) is the data item, ψ (Ax , Ay ) is the smooth
realize restoration of the foggy image. Chen et al. [122] item, Px is the region with x as the center, Ax is the constant
used a foggy image and a clear image in the same scene of the region, η is the strength of the smooth item and Nx is
as the samples to conduct optical modeling of the scene. the neighbor of pixel x.
After computing the depth ratio with corresponding points, A is finally obtained using the graph cut method to maxi-
the image was then restored with the atmospheric scattering mize the probability of a Gibbs distribution, and it is used to
model. calculate the transmission rate for the image restoration.
A data-driven approach was presented by Wu and Dai [123], This method can realize dehazing by maximizing the local
where multiple observations of the same scene with various contrast with only one image. However, serious “halo” effects
420 IEEE/CAA JOURNAL OF AUTOMATICA SINICA, VOL. 4, NO. 3, JULY 2017
can easily occur due to sudden changes of depth, leading to and the factorial Markov random field (FMRF) is used to
color oversaturation in images with heavy haze. compute the depth information in order to recover the haze-
Based on the same assumption [126], Ancuti et al. later free image.
proposed another dehazing technique which is optimized to In this literature, eq. (15) is deformed as follows:
preserve the original color spatial distribution and local con-
ln L−1
¡ ¢
trast that is suitable for the challenging problem of image ∞ I (x) − 1 = ln (ρ (x) − 1) − βd (x) (28)
matching based on local feature points. where ρ (x) is the albedo information and d (x) is the depth
In order to address the over-enhancing effects of the Tan information.
method, and inspired by the Bertalmı́o method [127], Galdran Setting I˜ (x) = ln L−1
¡ ¢
∞ I (x) − 1 , C (x) = ln (ρ (x) − 1)
et al. proposed a perception-inspired variational framework and D (x) = −βd (x), (28) can be expressed as:
[128], [129] for the task of single image dehazing without
requiring the depth structure of the scene to be estimated. The I˜ (x) = C (x) + D (x) (29)
method performs a spatially-variant contrast enhancement that
effectively removes haze from far away regions. where C (x) and D (x) represent the scene albedo item and
b) Fattal method the scene depth item and it can be assumed that both are
Based on the prior knowledge that there is no correlation independent statistically. If p (C) and p (D) are the prior
between object surface shading and the transmission map, knowledge, then, C (x) and D (x) can be computed through
Fattal [130] used independent component analysis (ICA) and the maximum posterior probability:
a Markov random field (MRF) model to estimate the surface ³ ´ ³ ´
arg max p C, D|I˜ = arg max p I|C, ˜ D p (C) p (D) .
albedo, then obtained the medium transmission of the scene ρ̃,d˜ ρ̃,d˜
and recovered the clear image from the foggy image. The key (30)
steps can be described as follows: Kratz’s method can recover a haze-free image with fine edge
Firstly, each pixel in the unknown clear image J is modeled details, but the results are often over-enhanced and suffer from
as the product of the surface reflection coefficient R and a oversaturation.
shadowing factor l, i.e. J = Rl. Therefore, equation (15) can The technique of Kratz and Nishino [132] was later ex-
be transformed to: tended in [21]. Kratz et al. [132] introduced a novel Bayesian
probabilistic method that jointly estimates the scene albedo
I(x) = t(x)l(x)R + (1 − t(x))A (26)
and depth from a single degraded image by fully leveraging its
R is then decomposed into two components. The first compo- latent statistical structures. Their approach models the image
nent is parallel to the direction of atmospheric light, A and the with a factorial Markov random field (FMRF) by jointly
second component is called residual vector the R′ ∈ A⊥ and is estimating two statistically independent latent layers for the
perpendicular to the direction of A. Therefore, the transmission scene albedo and depth. Experimental results show that the
can be calculated using the formula: method can achieve good results but the technique produces
some dark artifacts in regions approaching infinite depth.
t (x) = 1 − (IA (x) − ηIR′ (x)) / kAk (27)
Similar to the MRF model in [133], Caraffa and Tarel
where IA (x) and IR′ (x) are the projections of the input [134], [135] took advantage of both stereo and atmospheric
image along the A direction and R′ direction, respectively, veil depth cues to achieve better stereo reconstructions in
η = hR, Ai / (kR′ k kAk) is the measurement of atmospheric foggy weather and proposed a Markov random field model
light and h·, ·i is the standard 3D point multiplication in RGB of the stereo reconstruction and defogging problem. Their
space. method can be optimized iteratively using an α-expansion
Finally, the foggy image is recovered through an inverse algorithm. Based on the Bayesian framework, Nan et al. [136]
process of the image degradation model with the transmission proposed a method for single image dehazing taking noise
function. into consideration, and obtained the reflectance image using an
This approach is physically sound and can usually produce iterative approach with feedback to obtain a balance between
impressive results when there is sufficient color information. dehazing and denoising. In order to reduce the computation
Nevertheless, it cannot effectively restore images with heavy time of [133], Mutimbu et al. [137] considered the defog-
haze and may fail in cases where the original assumptions are ging problem as a relaxed factorial Markov random field
invalid. (FMRF) of albedo and depth layers, which can be efficiently
In a subsequent work [130], the same author Fattal [131] solved using sparse Cholesky factorization techniques. Rather
presented another new single-image dehazing method based than factorizing the scene albedo and depth through a log-
on the color-line pixel regularity in natural images, and also transform, Dong et al. [138] introduced a sparse prior and
proposed an augmented GMRF model with long-range cou- an additive noise argument in the degraded image model,
pling in order to more accurately resolve the transmission in and proposed an alternative optimization method to iteratively
isolated pixels lacking their own estimates. approximate the maximum a posteriori (MAP) estimators of
c) Kratz method these variables. Zhang et al. [139], [140] later described a
Kratz et al. [132] proposed another approach that is related new framework for video dehazing based on the Markov
to the Tan method [125], which assumes that the foggy image random field and optical flow estimation, which builds an
is composed of albedo and depth in independent latent layers, MRF model on the transmission map to improve the spatial
WANG AND YUAN: RECENT ADVANCES IN IMAGE DEHAZING 421
and temporal coherence of the transmission. Exploring this in Many improvements were later done to refine the coarse
further depth, Wang et al. [141] proposed a multi-scale depth transmission map based on DCP, such as WLS edge-preserving
fusion (MDP) scheme which obtains the depth map of the smoothing [147], bilateral filtering [148]−[150], a fast O(1)
physical model using an inhomogeneous Laplacian-Markov bilateral filter [151], joint bilateral filtering [152], a joint
random field (ILMRF), which can better estimate the depth trilateral filter [153], guided image filtering [6], [154]−[160],
map while accommodating the advantages of different patches weighted guided image filtering [161], [162], content adap-
and reducing the drawbacks of other methods. tive guided image filtering [163], smooth filtering [164],
d) He method anisotropic diffusion [165], window adaptive method [166],
He et al. [142], [143] proposed a dark channel prior (DCP) associative filter [167], edge-preserving and mean filters [168],
algorithm that can effectively overcome the deficiencies of a joint mean shift filtering algorithm [169], adaptively subdi-
the above two algorithms (Tan [125], Fattal [130]) to some vided quadtree [170], edge-guided interpolated filter [171], an
extent. The dark channel principle is sourced from a remote adaptive Wiener filter [172], guided trigonometric bilateral fil-
sensing image and an underwater image is used to summarize ters [32], median filter and gamma correction [173], Laplacian-
the rules from a natural image with no fog. The authors based gamma correction [174], fuzzy theory and weighted
then combined the principle with the atmospheric scattering estimation [175], opening operation and fast joint bilateral
model and realized single image dehazing based on DCP. The filtering [176], cross bilateral filtering [177] and a fusion
motivation of DCP is that for most non-sky patches in haze- strategy [4], [86], [178], [179] to optimize the transmission
free outdoor images, at least one color channel has very low image.
intensity for some pixels, with a brightness value J dark (x) Some approaches have also been proposed based on im-
that is close to 0. It can be expressed as: proved DCP. In [9], a median DCP (MDCP) algorithm was
µ ¶ proposed in order to improve He’s transmission model [142].
dark c
J (x) = min min (J (y)) → 0 (31) By calculating the median neighborhood instead of the min-
y∈Ω(x) c∈{r,g,b}
imum value of the DCP algorithm, the halo phenomenon
where J is the haze-free image and Ω(x) is the local patch appearing at the edge of the scene is reduced. Shiau et al. [180]
with pixel x at the center. applied a weighted technique to estimate the atmospheric light
Using this priori, He et al. were able to identify the local and transmission. The method mitigates halo artifacts around
dark channel patches in the image and used these to roughly the sharp edges and computes the transmission map adaptively
estimate the atmospheric transmission. Thus the atmospheric using a trade-off between the 1 × 1 pixels and the 15 ×
scattering model (15) can be transformed into: 15 pixel dark channel maps. While this method can preserve
µ c µ c
edges, it generates oversaturation.
µ ¶¶ µ ¶¶
I (y) J (y)
min min = t̃(x) min min Based on the observation that areas with dramatic color
y∈Ω(x) c Ac y∈Ω(x) c Ac
¡ ¢ changes tend to have similar depths, a window variation
+ 1 − t̃(x) . (32)
mechanism was proposed in another paper [181] that uses the
Based on the DCP (32), the rough transmission map t̃(x) neighborhood scene complexity and the color saturation rate
can then be obtained: to achieve an ideal compromise between depth resolution and
µ c
precision.
µ ¶¶
I (y)
t̃(x) = 1 − min min . (33) The other issue is the invalidity of DCP when other objects
y∈Ω(x) c Ac
have similar colors as the atmospheric light. The method
If the atmospheric scattering model is directly inverted
proposed in [182] defines a reliability map that depicts how
to obtain the haze-free image, there will be a significant
many objects or areas meet the dark channel prior assumption,
block effect on the transmission map. Therefore, the authors
and then estimates the transmission map using the reliable
optimized their method using soft matting [144]. The optimal
pixels only. Wang and Zhu [183] introduced a novel variational
t(x) can be obtained by solving the following sparse linear
model (VM) to optimize the transmission using a smoothness
system:
term and a gradient-preserving term to prevent false edges and
(L + λU )t(x) = λt̃(x). (34)
distorted sky areas in the recovered image.
The matrix L is called the matting Laplacian matrix. λ is Later, Meng et al. [184] provided a new geometric per-
set to 10−4 and t(x) is softly constrained by t̃(x). spective for DCP using a boundary constraint, and proposed
The DCP algorithm is an important breakthrough in the field a transmission image optimization algorithm that explores
of single image dehazing. Gibson and Nguyen [145], [146] the boundary constraint and contextual regularization. This
described the effectiveness of this approach using principal method is fast and can attenuate image noise and enhance
component analysis and minimum volume ellipsoid approxi- some interesting image structures. Chen et al. [185] proposed
mation, and Tang et al. [98] confirmed that the dark-channel an approach based on Bi-Histogram modification that exploits
feature is the most informative feature for dehazing from the features of gamma correction and histogram equalization
a learning perspective. DCP provides a new concept for to flexibly adjust the haze thickness in the transmission map
researchers, but refinement of its transmission map requires of DCP. Reference [186] later presented a new image haze
high computations. Additionally, when the image contains removal approach that can solve the problems associated with
large bright areas such as sky, water or white objects, the dark the presence of localized light sources and color shifts, which
channel prior assumptions will be invalid. was based on Fisher’s linear discriminant-based dual dark
422 IEEE/CAA JOURNAL OF AUTOMATICA SINICA, VOL. 4, NO. 3, JULY 2017
channel prior scheme. In order to handle edge contours which cause sudden
Motivated by DCP, Ancuti et al. [187] proposed a semi- changes in depth in the image, median filtering is performed
inverse (SI) method of converting the image to LCH (lightness, on W (x) to obtain B(x) with a window size sv.
chroma and hue) space by using the inverse operator for fast
A(x) = mediansv (W (x)) (37)
dehazing, which reduces the complexity of He et al.’s [142]
algorithm by converting the approach from block-based to B(x) = A(x) − mediansv (|W (x) − A(x)|). (38)
layer-based. Gao et al. [188] later combined DCP to present
a fast image dehazing algorithm based on negative correction Then, the atmospheric veil function can be calculated.
to improve the perceptual quality while reducing the compu- V (x) = max(min(pB(x), W (x)), 0) (39)
tational complexity. Rather than estimating the transmission
map, the correction factor of the negative of the images is where p is the adjusting factor of the dehazing degree.
estimated and used to rectify the corresponding hazy images. After solving V (x), the haze free image J(x) is revealed
Li et al. [189] proposed a luminance reference model for through (35).
transmission estimation by searching for the lowest intrinsic The Tarel method greatly simplifies the dehazing process
luminance with small sliding windows, and then refined it and improves efficiency, and Gibson et al. [194] used the color
utilizing a bilateral filter to smooth out noises and obtain a ellipsoid framework to explain its principle. However, after
reliable result. median filtering, the smoothed atmospheric veil did not main-
Additionally, DCP-based methods have been extended for tain the depth edge information, so the algorithm is sometimes
use to night-time images [190]−[192], underwater images [31] invalid in small edge regions. There are many parameters in
and rainy or snowy [39] conditions. For example, in order to the algorithm, which cannot be adjusted adaptively.
improve the robustness of the DCP algorithm for night-time Based on Tarel et al’s method [193], Yu et al. [195]
hazy images, Pei et al. [190] combined a color transfer method proposed an edge-preserving smoothing approach based on
which transformed the airlight colors from a “blue shift” to a weighted least squares (WLS) optimization framework to
“grayish”, and then used a DCP method to remove night-time smooth the edges of image. Bilateral filtering [196] has also
haze. Their method can achieve results with more details but been used to refine the atmospheric veil function estimation.
the color characteristics of the input are also changed by the Zhao et al. [197] proposed another edge-preserving smoothing
color transfer procedure. Therefore, reference [191] presented approach based on local extremes to estimate the atmospheric
a new imaging model for night-time haze conditions, which veil, finally applying the inverse scene albedo for the recovery
takes into account both the non-uniform light conditions and process. Xiao et al. [198] later improved Yu et al’s [195]
the color characteristics of artificial light sources, achieving method further by combining joint bilateral filtering [199], and
both illumination-balance and haze free results. Reference proposed a guide joint bilateral filter to refine the transmission
[192] presented an improved DCP model which was integrated map obtained by median filtering. This method can preserve
with local smoothing and image Gaussian pyramid operators edges and reduce the computation complexity to O(N ). Bao
to enhance the perceptual quality of the night videos. For et al. [200] proposed an edge-preserving texture-smoothing
underwater images, reference [31] proposed an underwater filtering method to improve the visibility of images in the
DCP (UDCP) methodology which basically considers blue and presence of haze or fog. Their method can effectively achieve
green color channels to be the underwater visual information strong textural smoothing while maintaining sharp edges,
source. This method provides a significant improvement over and any low-pass filter can be directly integrated into the
existing methods based on DCP. framework. Based on Tarel’s framework, reference [201] later
e) Tarel method introduced non-local structure-aware regularization to properly
constrain the transmission estimation without introducing halo
Tarel et al. [193] introduced a contrast-based enhancement
artifacts.
approach to remove haze effects, which aims to be faster
Due to the properties of the median filter, the results of
than previous approaches. It assumes that the atmospheric
Tarel’s work cannot remarkably preserve the edges and gradi-
veil function changes gently over a local region, so the
ents of the images and may cause halo artifacts around objects.
transmission coefficient of the medium can be estimated by
Thus, reference [202] introduced a digital total variation filter
pretreatment and median filtering. Firstly, a white-balancing
with color transfer (DTVFCT) for single color image dehazing.
operation is applied to the foggy image, and the foggy regions
The estimation of the atmospheric veil is a filtering problem
are regulated to white. Then, the atmospheric scattering model
on the minimal component image and a digital TV filter is
given by (15) is transformed to:
applied to preserve the edges and gradients of the images, in
order to avoid halo artifacts. Negru et al. [203] proposed an
I(x) = J(x)(1 − A−1 V (x)) + V (x) (35) efficient single image enhancement algorithm that is suitable
for daytime fog conditions, which take the exponential decay
where V (x) = A(1 − t(x)) is the atmospheric veil function. present in foggy images into account when computing the
The minimum color components W (x) of the input image atmospheric veil. Li et al. [204] presented a change of detail
I(x) can be calculated by: (CoD) prior in an image model, which can estimate the
atmospheric veil through a sharper operator and a smoothing
W (x) = min(I(x)), c ∈ {r, g, b}. (36) operator effectively to recover the haze-free image.
c
WANG AND YUAN: RECENT ADVANCES IN IMAGE DEHAZING 423
f) Kim method Most of the present methods are targeted mainly at im-
In order to maintain a balance between avoiding over- proving the quality of the estimated transmission, while of-
stretching the contrast [125], [132], [142], [193] and the ten computing rough estimates of the atmospheric light. In
inability to remove dense haze [130] because of incorrect fact, the atmospheric light estimation is as important as the
estimation of scene depths, Kim et al. [158], [205] presented a transmission estimation, and an incorrect atmospheric light
dehazing algorithm based on optimized contrast enhancement calculation can cause a dehazed image to look unrealistic.
by maximizing the block-wise contrast while minimizing the However, there are some methods that can be used to address
information loss due to pixel truncation. Using a temporal this problem.
coherence measure, the algorithm has been extended for video Narasimhan et al. [96] adopted a direct manual method
dehazing. to define image regions affected by atmospheric light, but
Firstly, the atmospheric light in a hazy image is selected it is not applicable to realistic application due to frequent
using the quad-tree based subdivision. interruption. Nayar et al. [116] and Kratz et al. [132] employed
The scene depths are then assumed to be locally similar and a method to estimate atmospheric light by selecting a patch of
the haze equation, equation (15) can be rewritten as the sky in the foggy image. Their methods can achieve good
estimation and have been used in some following algorithms.
1 However, the methods only work if there is sky in the
J (x) = (I (x) − A) + A. (40)
t scene. Narasimhan et al. [117] and Fattal [130] calculated the
The contrast cost Econtrast and the information loss cost direction of atmospheric light, but it was hard to determine
Eloss of each block Ω are then defined as: the intensity of the light. Fattal [130] applied the principle
of uncorrelation to search within small windows of constant
X X Ic (x) − I¯c 2
¡ ¢
albedo for white pixels that have the lowest correlation.
Econtrast = − (41)
t2 N However, over-saturation may occur when there are white
c∈{r,g,b} x∈Ω
objects with high intensities. In [141], the authors assumed
that fog-opaque pixels exist not only in the deepest regions of
αc µ
½X ¶2
X i − Ac the depth map but also in smooth regions of foggy images,
Eloss = + Ac hc (i)
t since fog-opaque regions exhibit atmospheric luminance and
c∈{r,g,b} i=0
conceal the textured appearance of the scene. All pixels in
the fog-opaque region are averaged to obtain the color vector
255 µ ¶2
of the atmospheric luminance. In the work of Tan et al.
¾
X i − Ac
+ + Ac − 255 hc (i) (42) [125], the brightest pixels in the hazy image were used as
t
i=βc the atmospheric light. However, when there is a white object
where I¯c and N are the average values of Ic (x) and the in the image, this method is not appropriate. He et al. [143]
number of pixels in Ω, hc (i) is the histogram of the input used the pixels with the highest intensity in the hazy image,
pixel value i in the color channel c, and αc and βc denote e.g., the top 0.1% of the brightest pixels was selected from
truncated values due to underflow and overflow, respectively. the dark channel. However, this method is also influenced by
Finally, for block Ω, the optimal transmission t̃ can be white objects. Tarel et al. [193] estimated the atmospheric
obtained by minimizing the overall cost function: light by calibrating the white balance of the image. This
method is simple to operate and works well for most practical
E = Econtrast + λL Eloss (43) scenes. Kim et al. [158] selected the atmospheric light in
a hazy image using a hierarchical searching method based
where λL is a weighting parameter. on quad-tree subdivision, which repeats the steps in order to
Experimental results have demonstrated that the proposed divide it into four rectangular regions. The brightest region
algorithm is capable of effectively removing haze and faith- is chosen as the atmospheric light according to a threshold.
fully restoring images, as well as achieving real-time process- This method is simple and reliable. Pedone et al. [210]
ing. However, it is not suitable for image dehazing in thick proposed a method based on novel statistics gathered from
fog. natural images regarding frequently occurring air-light colors,
Similar to Kim’s method, reference [206] later used local which used statistics to design a new robust solution for
atmospheric light to estimate the transmission for each local computing the color hue of the air-light. This method is easy
region using an objective function represented by a modi- to compute. In contrast with previous methods that focus on
fied saturation evaluation metric and an intensity difference, luminance estimation, Cheng et al. [211] proposed a linear
consisting of image entropy and information fidelity [207]. time atmospheric light estimation algorithm based on color
Motivated by this, Lai el al. [208], [209] assumed that the analysis, by estimating the color probability in YCbCr space
transmission map is under a locally constant variable, and to select candidates from the representative fog pixels for air-
proposed an optimal transmission map method using an ob- light color computation. This method is effective and has very
jective function, which guarantees a global optimal solution. low computation cost.
The obtained transmission map accurately preserves the depth In summary, although atmospheric light is an important
consistency of each object. parameter for restoration based image dehazing, there are not
4) Atmospheric Light Estimation as many algorithms proposed to estimate the atmospheric light
424 IEEE/CAA JOURNAL OF AUTOMATICA SINICA, VOL. 4, NO. 3, JULY 2017
as there are for estimating the transmission map. At present, the dehazed image assessment methods can be
divided into two categories in this paper according to their
III. Q UALITY A SSESSMENT FOR I MAGE D EHAZING special purpose: ordinary method and special method. The
Image quality assessment (IQA) is an essential step in former is a general method used for evaluating the quality
image dehazing. Generally speaking, the assessment of image of any image, which is adapted to evaluate dehazing effects
quality includes two main aspects: image fidelity and image only; and the latter is specially designed for use in the
readability which can be classified as the subjective assessment dehazing applications, which uses an assessment principle that
and the objective assessment. is combined with the characteristics of the hazy conditions.
1) Ordinary IQA
As can be seen from [187], [212], many general IQA
A. Subjective Assessment
have been employed for image dehazing applications, such
The subjective assessment method uses observers to make as Ancuti et al. [187] who have compared images with
the quality assessment using a set of assessment criteria radically different dynamic ranges [212] to evaluate both the
according to their visual opinion of the processed image. contrast and the structural changes. Liu et al. [178] adopted a
The results are summarized to compare the performance of color naturalness index (CNI) and a color colorfulness index
the algorithm. The score was divided into 5 grades. The (CCI) [213] for algorithm evaluation and analysis. Wang et al.
assessment required that there were more than 20 assessors [179] considered that images captured in hazy weather often
and that some people have experience in image processing suffer from a degradation in contrast, color distortion, and
while others should have no knowledge of image processing. missing image content, then applied an average gradient (AG),
The final quality score, called the mean opinion score (MOS), a color consistency (CC) [214] and a structure similarity
is computed to obtain the overall assessment score by averag- (SSIM) for objective evaluation. Ma et al. [215] adopted eight
ing the subjective scores from all assessors. The assessment dehazing algorithms to perform image dehazing on 25 images,
criteria are shown in Table I. then evaluated the quality through subjective users and some
TABLE I general IQA methods (BIQI [216], BRISQUE [217], NIQE
T HE C RITERIA OF S UBJECTIVE A SSESSMENT [218], BLINDS-II [219], DILT [220] and NCDQI [221]) and
concluded that none of these IQA models properly predicts the
Score Assessment grade Quality criteria
perceived quality of dehazed images. Some of the commonly-
1 Worst The worst in the group
used IQAs for dehazing images are introduced as follows:
2 Worse Worse than average
a) Standard deviation (STD): The STD reflects the degree
3 Average Average in the group
of dispersion in the image relative to its average value, and
4 Better Better than average
is a measure of the contrast in a certain range. The larger the
5 Best The best in the group
standard deviation, the better the visual effect will be:
Although this method is simple and can reflect the visual XM X N r
(f (i, j) − µ)2
quality of the image, it lacks stability and is often subject δ= (44)
M ×N
to experimental conditions, the knowledge of the observers, i=1 j=1
their emotions, motivation, and many other factors. In the where M and N are the width and the height of the image,
current literature, the most common existing solution is to respectively; f (i, j) is the gray value of pixel (i, j) and µ is
manually present several images in bad visibility alongside the average value of the whole image.
their corresponding enhanced images which have been pro- b) Mean gradient (MG): The average gradient reflects the
cessed by different algorithms, and then enlarge some regions ability to express details of an image [164] and can be used
with key details for subjective comparison. This method lacks to measure the relative clarity of the image. It is formulated
consistency from different assessors, and is difficult to use in as
engineering applications.
M −1 N−1
r
X X (f (i, j)−f (i+1, j))2 +(f (i, j)−f (i, j +1))2
G=
B. Objective Assessment i=1 j=1
2
The objective assessment method evaluates the image with (45)
qualitative data according to objective criteria. In general, there where M and N are the width and the height of the the image,
are three major categories of quantitative metrics depending on respectively, and f (i, j) is the gray value of pixel (i, j).
the availability of an original image: full-reference methods, c) Information entropy (IE): If an image is taken as a source
reduced-reference methods and no-reference methods, with the of random output sets {ai } and the probability of ai is P (ai ),
first two categories needing to use a reference image. However, then the average amount of information in the image is as
for image dehazing, the reference image of the same scene follows:
L
without haze is usually very difficult to obtain, so there is X
no ideal image to be used as a reference. Therefore, the no- H=− P (ai ) log2 P (ai ). (46)
i=1
reference evaluation method is often used or a dehazed image
is used as the reference image to evaluate the performance of According to the theory of entropy, the larger the value of
the algorithms. IE, the more information is in the image.
WANG AND YUAN: RECENT ADVANCES IN IMAGE DEHAZING 425
d) Mean squared error (MSE): The simplest and most above measures are often used for simple calculation since
widely used full-reference quality metric which is computed they have clear physical meaning and are mathematically con-
by averaging the squared intensity differences of the distorted venient in the context of optimization. However, unfortunately
and reference image pixels [98], [101], [164]. It is formulated these approaches cannot be simply adopted, because existing
as IQA metrics are generally inappropriate for this application
M X N
1 X since they are designed to assess distortion levels rather than
M SE = [f (i, j) − f ′ (i, j)] (47)
M × N i=1 j=1 the visibility of fog in images which may not be otherwise
distorted.
where M and N are the width and the height of the image, 2) Special IQA
respectively, f (i, j) is the original image and f ′ (i, j) is the Some IQAs have been designed particularly for use in image
dehazed image. dehazing from different views. These IQAs are introduced as
e) Peak signal to noise ratio (PSNR): The PSNR can be used follows.
as an index of the signal distortion. A large PSNR corresponds a) Visible edge based method
to a smaller image distortion [141], [153], [164]. It can be At present, within research on dehazing effect assessment,
expressed as: the most famous approach is a blind contrast enhancement
f2 assessment approach proposed by Hautière et al. [223], which
P SN R = 10 lg max (48)
M SE is mainly based on an atmospheric luminance model and the
where fmax is the largest gray value, in general fmax = 255. concept of a visibility level, which is usually used in lighting
f) Structural similarity (SSIM): Generally, the human visual engineering. The method evaluates the contrast enhancement
perception is highly adapted to extracting structural informa- detail between a hazy image and a haze-free image with three
tion from a scene. So, Wang et al. [222] proposed an SSIM indexes: e (the rate of new visible edges), r̄ (the ratio of the
index method to measure the restored image quality from the gradient of the visible edges before and after restoration) and
perspective of image formation, using the three components σ (the ratio of saturated (black or white) pixels).
of luminance comparison l(x, y), contrast comparison c(x, y) n r − n0
e = (50)
and structural comparison s(x, y). The three components are n0
combined to yield an overall similarity measure. Its formula
is as follows: 1 X
r̄ = exp log ri (51)
nr
S(i, j) = F (l(x, y), c(x, y), s(x, y)). (49) Pi ∈ψr
ns
A diagram for the SSIM measurement system is shown in σ = (52)
dimx × dimy
Fig. 16.
where n0 and nr are the number of visible edges before and
after dehazing, Ψr is the visible edge sets of the dehazed
image, Pi are the pixels of the visible edges, ri is the Sobel
gradient ratio of Pi and the corresponding points of the
original image, ns is the number of saturated pixels (black and
white) and dimx and dimy denote the width and the height
of the image respectively. The larger that e, or r̄ are and the
smaller that σ is, the better the dehazing performance will be.
This method can efficiently reflect the edge details of the
Fig. 16. Diagram of the SSIM measurement system.
images before and after dehazing [80], [83], [173]−[175],
[177], [178], and it is used for dehazing method evaluation
The similarity of the two images is dependent on SSIM, [189], [197], [203], [204], [206], [224]. However, it only
and has a value between [0,1]. When the value is close to 1, provides three indices for evaluation rather than a generalized
the two images are more similar. This method can effectively assessment result, and sometimes the evaluation results will
simulate the human eye to extract structural information from be inconsistent. The method also cannot evaluate the color
the image, and the evaluation results are very close to the distortion.
human eye. This method has been used in many studies to b) Color distortion based method
evaluate the performance of dehazing methods [101], [141], To address the color distortion problem due to halo artifacts
[153], [179], [183]. and color shifts, Li et al. [225] proposed a color quality
STD reflects the contrast of the image; IE reflects the assessment of dehazed images based on a color histogram,
information contained in the image; AG reflects the clarity histogram similarity and a color recovery coefficient. The
of the image; and MSE, PSNR and SSIM reflect the degree of original image and the dehazed image are decomposed into
distortion of an image. For MSE, PSNR and SSIM, foggy an illumination component and a reflection component using
images are usually adopted as references, because no fog- a Gauss low-pass filter. Detailed intensity detection, color
free images exist in these benchmark data. Higher MSE, recovery detection and scene structure detection are then
lower PSNR and SSIM scores imply greater dissimilarity performed and finally the recovery coefficient of the dehazed
between restored results and referenced foggy image. The image is obtained. The diagram is shown in Fig. 17.
426 IEEE/CAA JOURNAL OF AUTOMATICA SINICA, VOL. 4, NO. 3, JULY 2017
TABLE II
O BJECTIVE E VALUATION OF VARIOUS M ETHODS ON “M OUNTAIN ” I MAGE
TABLE III
O BJECTIVE E VALUATION OF VARIOUS M ETHODS ON “N EW YORK ” I MAGE
Among all of the methods, the Tan method [125], the The objective IQA experiment is also implemented on the
He method [142] and the Tarel method [193] have the best above images. Several IQA methods were selected including
dehazing effect on the whole hazy image, especially for long- STD, AG, IE, PSNR, SSIM [222], Visible edges (e, r̄) [223],
range scenery. However, the Tan and Tarel methods resulted BIQI [216], BRISQUE [217] and NIQE [218]. Since the
in color shifting or over saturation, which looks like pseudo IQA outputs have different dimensions, all data needs to be
color in the haze-free image. The Kopf method and the Fattal normalized. The formula is expressed as
method can better maintain the color of the original image, but
(ymax − ymin ) × (x − xmin )
their overall effect lacks competitiveness. The Meng method y= + ymin (53)
[184], the Kim method [158] and the Zhu method [101] have xmax − xmin
similar results with relatively consistent tones. However, these where xmax and xmin are the maximum and minimum values
three methods are not good at processing sharp-jumps in of the data before normalization and ymax and ymin are
depth of field due to edges in the scene. Of the above image the maximum and minimum values of the normalized data,
restoration methods, the He method [142] can achieve a good respectively. In this paper, ymax = 1 and ymin = 0, and
compromise of both close-up scenery and long-range scenery, each index’s score is proportional to its performance. The
while maintaining an outstanding visual effect on fidelity. experimental results for the images “Mountain” and “New
From the analysis of Fig. 20, it can be seen that overall, York” are shown in Table II and Table III, respectively.
the image restoration methods are better than the image From Tables II and III, it can be easy seen that for the same
enhancement methods, especially in terms of color fidelity method, different IQA indexes will give different scores. In
from a human visual perspective. some cases, the evaluation results have opposite values, since
WANG AND YUAN: RECENT ADVANCES IN IMAGE DEHAZING 429
the general IQA indexes are considering different aspects when resolution of the model. For a clear image priori, it is necessary
evaluating a restored image. On the whole, the scores for the to consider the human visual color constancy, brightness
image restoration based methods are lower than the image constancy and contrast sensitivity as the research objects based
enhancement based methods, especially for the histogram on existing statistical priori, and exploit priori knowledge
equalization and AHE which obtain very high scores, which suitable for the human eye from clear images. From prior
conflicts with the subjective evaluation. This is mainly because research on hazy image degradation, it is necessary to consider
this IQA focuses on the contrast and structure while ignoring the feature variations with environmental light by combining
the color fidelity. Combining the subjective evaluation of the effect of turbid media and focusing on different types of
human vision, the above IQA indexes are not consistent with scenarios, including different depths, different concentrations
the subjective evaluation, and may not be suitable for direct of haze, different light intensities and different backgrounds
evaluation of dehazed images. Thus, development of an to explore universal priori knowledge, which can constrain
IQA method for dehazed images is necessary for future work. the image solution process effectively and help to estimate
the scene albedo precisely.
IV. C ONCLUSIONS AND E XPECTATION
3) Integrate the image fusion approach and the image
There are three types of dehazing methods seen in current enhancement approach into the physical model. Many image
research: image enhancement based methods, image fusion enhancement methods have been developed based on the hu-
based methods and image restoration based methods. All man vision system, which can quickly and accurately estimate
of these methods have advantages and disadvantages. Image the image brightness and maintain the true color. Image fusion
enhancement based methods improve the image contrast from methods can determine or mine effective information from
the perspective of subjective vision, using a color correction different source images. Therefore, in physical model based
which conforms to the perception of human visual system dehazing methods, it is necessary to apply the human visual
on a color scene. The early methods are mature and reliable, perception mechanism to the process of model resolution,
but these methods result in unpredictable distortion, especially and explore a fast and optimized method that uses multi
where there is complex depth in the field image. Image scale information fusion technology and machine learning
fusion based methods maximize the beneficial information technology.
from multiple sources to finally form a high quality image.
4) Strengthen research on video dehazing. Currently, most
These methods do not need a physical model, but the fusion
video dehazing methods are improvements of single image
strategy for multiple sources of information is complex. Image
dehazing methods and usually contain a large number of
restoration based methods are related to the image degradation
complex data processing algorithms, such as large-scale matrix
mechanism, and are suitable for image dehazing with different
decomposition and mass equation group solutions. These
depth of fields. However, optimal tools are required to find
complex operations often require a long processing time, but
the solution and these methods may be time-consuming. In
real-time performance of the algorithm is very important for
summary, image restoration based methods are better than the
certain application including safety monitoring systems and
other two types of methods for real scene dehazing and is now
military reconnaissance systems. So it is important to establish
the current research hotspot. The characteristics of some main
how to effectively use potential information between adjacent
approaches are shown in Table IV.
frames in a video stream. In addition, use of programmable
In view of the above analysis, some open questions that
hardware to accelerate image dehazing is another future re-
require further study are as follows.
search direction.
1) Study of a comprehensive degradation model. The con-
struction and resolution technique is core to physical model 5) Design a special IQA mechanism. Effective performance
based methods for hazy image. At present, in addition to evaluation of image dehazing can guide the study of dehazing
the widely-used atmospheric scattering model, there are other methods, and can lay the foundation for the design of closed-
degradation models such as the dual-color atmospheric scatter- loop dehazing systems. At present, the research on quality
ing model and the ATF (atmospheric transfer function) model. assessment of dehazed images still requires further develop-
However, none of these models can accurately describe the ment, and the evaluation indexes are mainly concentrated on
phenomenon of haze degradation. Therefore, it is necessary image clarity, contrast, color and structural information, while
to explore some cues that have been obtained from research lacking comprehensive scientific criteria. The no-reference
results of modern atmospheric optics. In addition to IQA method based on feature cognition can better fit human
considering the haze attenuation, another approach that should visual characteristics, which can be combined with an image
be explored is to introduce complex atmospheric light, atmo- analysis model, a statistical model, a visual information model
spheric turbulence and other factors causing degradation of and machine learning theory to evaluate the image dehazing
the image, so as to establish a more comprehensive physical objectively, and will be a very important research direction.
model. In summary, image dehazing techniques started relatively
2) Explore the prior knowledge of the physical model. A late due to the random nature and complexity of weather con-
reasonable priori is a prerequisite for success of single image ditions, and there is only approximately a decade of research.
dehazing methods based on physical models. Therefore, in At present, as a research hotspot in the field of machine vision,
order to accurately obtain the scene albedo, a clear scene image dehazing techniques are developing rapidly, and a large
prior is needed as well as a haze degradation prior for the number of new methods continue to appear. Although some
430 IEEE/CAA JOURNAL OF AUTOMATICA SINICA, VOL. 4, NO. 3, JULY 2017
TABLE IV
C OMPARISON OF D IFFERENT A PPROACHES
It is easy to implement, but difficult to keep a good balance between dynamic range comp-
Single-scale Retinex
ression and color constancy.
Retinex method It can overcome the shortage of SSR but it does not have an edge preservation ability and
Image enhancement Multi-scale Retinex
will lead to halo phenomena.
based method
Homomorphic
It is suitable for the processing of images with uneven light, but its computation is large.
filtering
It has the advantages of multi-scale analysis and multi-resolution characteristics on image
Frequency domain Wavelet transform contrast enhancement, but over-brightness, over-darkness and uneven illumination are di-
transform fficult to resolve.
It can improve the visual image quality by enhancing the curve edges but cannot remove
Curvelet transform
the interference of fog in essence.
This method does not need atmospheric light or a depth map, but it is difficult to obtain
Fusion with multi-spectral image
Image fusion the source images and yield few halo artifacts.
based method The images for fusion are to be perfectly aligned, but this technique is limited to process-
Fusion with single image
ing color images.
Known the scene The restoration effect of the image is good, but scene information is needed from the sen-
Single image information sors, or an existing database.
dehazing with It can improve the visual effect and obvious contrast and it can run automatically in a
User interaction
additional real-time system.
information It runs fast and can achieve good results, but the training procedure is complex and the
Machine learning
parameters rely on the training data.
Multi-images Different polarizing It can enhance the contrast of the image in thin fog, but it is complicated to obtain the
dehazing with conditions source images.
different Different weather It is simple and can achieve good results, but it is difficult to obtain the source images
Image conditions conditions and it cannot be used in real-time systems.
restoration
based method It can maximize the local contrast with only one image, but easily results in color over
Tan method [125]
saturation in images with heavy haze.
It can usually produce impressive results when there is sufficient color information while
Fattal method [130]
it may fail in the cases where the original assumption is invalid.
It can recover a haze-free image with fine edge details, but the results often tend to be
Kratz method [132]
Single image over enhanced and suffer from oversaturation.
dehazing with It is simple and can keep high fidelity of the natural scene, but it is invalid when there
He method [142]
prior knowledge are white objects.
It simplifies the dehazing process and improves the efficiency, but many parameters in
Tarel method [143]
the algorithm cannot be adjusted adaptively.
It can keep the balance between contrast enhancement and information loss, but it is not
Kim method [158]
suitable for image dehazing with thick fog.
WANG AND YUAN: RECENT ADVANCES IN IMAGE DEHAZING 431
research works have shown outstanding results under certain [18] J. C. McCall and M. M. Trivedi, “Video-based lane estimation and
conditions, these methods still need further improvement. Ex- tracking for driver assistance: Survey, system, and evaluation,” IEEE
Trans. Intell. Transp. Syst., vol. 7, no. 1, pp. 20−37, Mar. 2006.
ploiting image dehazing methods with universality, robustness
and real-time performance will be a challenging task in the [19] J. P. Tarel, N. Hautière, L. Caraffa, A. Cord, H. Halmaoui, and D.
Gruyer, “Vision enhancement in homogeneous and heterogeneous fog,”
future. IEEE Intell. Transp. Syst. Mag., vol. 4, no. 2, pp. 6−20, Apr. 2012.
[20] M. Negru, S. Nedevschi, and R. I. Peter, “Exponential contrast restora-
R EFERENCES tion in fog conditions for driving assistance,” IEEE Trans. Intell.
Transp. Syst., vol. 16, no. 4, pp. 2257−2268, Aug. 2015.
[1] H. Halmaoui, A. Cord, and N. Hautiere, “Contrast restoration of
road images taken in foggy weather,” in Proc. 2011 IEEE Int. Conf. [21] M. Pavlic, G. Rigoll, and S. Ilic, “Classification of images in fog and
Computer Vision Workshops, Barcelona, Spain, 2011, pp. 2057−2063. fog-free scenes for use in vehicles,” in Proc. 2013 IEEE Intelligent
Vehicles Symposium, 2013, pp. 481−486.
[2] S. Bronte, L. M. Bergasa, and P. F. Alcantarilla, “Fog detection
system based on computer vision techniques,” in Proc. 12th Int. IEEE [22] N. Hautière, J. P. Tarel, H. Halmaoui, R. Bremond, and D. Aubert, “En-
Conf. Intelligent Transportation Systems, St. Louis, MO, USA, 2009, hanced fog detection and free-space segmentation for car navigation,”
pp. 1−6. Mach. Vis. Appl., vol. 25, no. 3, pp. 667−679, Apr. 2014.
[3] M. S. Shehata, J. Cai, W. M. Badawy, T. W. Burr, M. S. Pervez, R. [23] N. Hautière, J. P. Tarel, and D. Aubert, “Mitigation of visibility loss for
J. Johannesson, and A. Radmanesh, “Video-based automatic incident advanced camera-based driver assistance,” IEEE Trans. Intell. Transp.
detection for smart roads: The outdoor environmental challenges re- Syst., vol. 11, no. 2, pp. 474−484, Jun. 2010.
garding false alarms,” IEEE Trans. Intell. Transp. Syst., vol. 9, no. 2,
pp. 349−360, Jun. 2008. [24] N. Hautière, J. P. Tarel, and D. Aubert, “Towards fog-free in-vehicle
vision systems through contrast restoration,” in Proc. 2007 IEEE Conf.
[4] S. C. Huang, B. H. Chen, and Y. J. Cheng, “An efficient visibility Computer Vision and Pattern Recognition, Minneapolis, MN, USA,
enhancement algorithm for road scenes captured by intelligent trans- 2007, pp. 1−8.
portation systems,” IEEE Trans. Intell. Transp. Syst., vol. 15, no. 5,
pp. 2321−2332, Oct. 2014. [25] H. J. Song, Y. Z. Chen, and Y. Y. Gao, “Velocity calculation by auto-
matic camera calibration based on homogenous fog weather condition,”
[5] S. C. Huang, “An advanced motion detection algorithm with video Int. J. Autom. Comput., vol. 10, no. 2, pp. 143−156, Apr. 2013.
quality analysis for video surveillance systems,” IEEE Trans. Circuits
Syst. Video Technol., vol. 21, no. 1, pp. 1−14, Jan. 2011. [26] R. Spinneker, C. Koch, S. B. Park, and J. J. Yoon, “Fast fog detection
for camera based advanced driver assistance systems,” in Proc. IEEE
[6] B. Xie, F. Guo, and Z. X. Cai, “Universal strategy for surveillance 17th Int. Conf. Intelligent Transportation Systems, Qingdao, China,
video defogging,” Opt. Eng., vol. 51, no. 10, pp. 101703, May 2012. 2014, pp. 1369−1374.
[7] Z. Jia, H. C. Wang, R. E. Caballero, Z. Y. Xiong, J. W. Zhao,
[27] R. Sato, K. Domany, D. Deguchi, Y. Mekada, I. Ide, H. Murase,
and A. Finn, “A two-step approach to see-through bad weather for
and Y. Tamatsu, “Visibility estimation of traffic signals under rainy
surveillance video quality enhancement,” Mach. Vis. Appl., vol. 23,
weather conditions for smart driving support,” in Proc. 15th Int. IEEE
no. 6, pp. 1059−1082, Nov. 2012.
Conf. Intelligent Transportation Systems, Anchorage, AK, USA, 2012,
[8] I. Yoon, S. Kim, D. Kim, M. H. Hayes, and J. Paik, “Adaptive pp. 1321−1326.
defogging with color correction in the HSV color space for consumer
surveillance system,” IEEE Trans. Consum. Electron., vol. 58, no. 1, [28] N. Carlevaris-Bianco, A. Mohan, and R. M. Eustice, “Initial results
pp. 111−116, Feb. 2012. in underwater single image dehazing,” in Proc. 2010 IEEE OCEANS,
Seattle, WA, USA, 2010, pp. 1−8.
[9] K. B. Gibson, D. T. Vo, and T. Q. Nguyen, “An investigation of
dehazing effects on image and video coding,” IEEE Trans. Image Proc., [29] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, “Enhancing underwa-
vol. 21, no. 2, pp. 662−673, Feb. 2012. ter images and videos by fusion,” in Proc. 2012 IEEE Conf. Computer
Vision and Pattern Recognition, Providence, RI, 2012, pp. 81−88.
[10] M. Chacon-Murguia and S. Gonzalez-Duarte, “An adaptive neural-
fuzzy approach for object detection in dynamic backgrounds for [30] J. Y. Chiang and Y. C. Chen, “Underwater image enhancement by
surveillance systems,” IEEE Trans. on Industr. Electron., vol. 59, no. 8, wavelength compensation and dehazing,” IEEE Trans. Image Process.,
pp. 3286−3298, Aug. 2012. vol. 21, no. 4, pp. 1756−1769, Apr. 2012.
[11] S. Y. Tao, H. J. Feng, Z. H. Xu, and Q. Li, “Image degradation [31] P. Drews, E. do Nascimento, F. Moraes, S. Botelho, and M. Campos,
and recovery based on multiple scattering in remote sensing and bad “Transmission estimation in underwater single images,” in Proc. 2013
weather condition,” Opt. Express, vol. 20, no. 15, pp. 16584−16595, IEEE Int. Conf. Computer Vision Workshops, Sydney, NSW, Australia,
Jul. 2012. 2013, pp. 825−830.
[12] J. Long, Z. W. Shi, W. Tang, and C. S. Zhang, “Single remote sensing [32] H. M. Lu, Y. J. Li, and S. Serikawa, “Underwater image enhancement
image dehazing,” IEEE Geosci. Remote Sens. Lett., vol. 11, no. 1, using guided trigonometric bilateral filter and fast automatic color cor-
pp. 59−63, Jan. 2014. rection,” in Proc. 20th IEEE Int. Conf. Image Processing, Melbourne,
VIC, Australia, 2013, pp. 3412−3416.
[13] A. Makarau, R. Richter, R. Muller, and P. Reinartz, “Haze detection
and removal in remotely sensed multispectral imagery,” IEEE Trans. [33] X. Y. Fu, P. X. Zhuang, Y. Huang, Y. H. Liao, X. P. Zhang, and X.
Geosci. Remote Sens., vol. 52, no. 9, pp. 5895−5905, Sep. 2014. H. Ding, “A retinex-based enhancing approach for single underwater
[14] J. Liu, X. Wang, M. Chen, S. G. Liu, X. R. Zhou, Z. F. Shao, and P. image,” in Proc. 2014 IEEE Int. Conf. Image Processing, Paris, France,
Liu, “Thin cloud removal from single satellite images,” Opt. Express, 2014, pp. 4572−4576.
vol. 22, no. 1, pp. 618−632, Jan. 2014. [34] S. H. Sun, S. P. Fan, and Y. C. F. Wang, “Exploiting image structural
[15] H. F. Li, L. P. Zhang, and H. F. Shen, “A principal component based similarity for single image rain removal,” in Proc. 2014 IEEE Int. Conf.
haze masking method for visible images,” IEEE Geosci. Remote Sens. Image Processing, Paris, France, 2014, pp. 4482−4486.
Lett., vol. 11, no. 5, pp. 975−979, May 2014. [35] S. D. You, R. T. Tan, R. Kawakami, and K. Ikeuchi, “Adherent
[16] X. X. Pan, F. Y. Xie, Z. G. Jiang, and J. H. Yin, “Haze removal for a raindrop detection and removal in video,” in Proc. 2013 IEEE Conf.
single remote sensing image based on deformed haze imaging model,” Computer Vision and Pattern Recognition, Portland, OR, USA, 2013,
IEEE Signal Process. Lett., vol. 22, no. 10, pp. 1806−1810, Oct. 2015. pp. 1035−1042.
[17] L. X. Wang, W. X. Xie, and J. H. Pei, “Patch-based dark channel prior [36] M. Desvignes and G. Molinie, “Raindrops size from video and image
dehazing for RS multi-spectral image,” Chin. J. Electron., vol. 24, no. 3, processing,” in Proc. 19th IEEE Int. Conf. Image Processing, Orlando,
pp. 573−578, Jul. 2015. FL, USA, 2012, pp. 1341−1344.
432 IEEE/CAA JOURNAL OF AUTOMATICA SINICA, VOL. 4, NO. 3, JULY 2017
[37] Z. Jia, H. C. Wang, R. Caballero, Z. Y. Xiong, J. W. Zhao, and A. Finn, [58] X. Xu, Q. Chen, P. A. Heng, H. J. Sun, and D. S. Xia, “A fast halo-free
“Real-time content adaptive contrast enhancement for see-through fog image enhancement method based on retinex,” J. Computer-Aided Des.
and rain,” in Proc. 2010 IEEE Int. Conf. Acoustics Speech and Signal Comput. Graph., vol. 20, no. 10, pp. 1325−1331, Oct. 2008.
Processing, Dallas, TX, USA, 2010, pp. 1378−1381.
[59] W. T. Yang, R. G. Wang, S. Fang, and X. Zhang, “Variable filter
[38] K. Garg and S. K. Nayar, “When does a camera see rain?” in retinex algorithm for foggy image enhancement,” J. Computer-Aided
Proc. 10th IEEE Int. Conf. Computer Vision, Beijing, China, vol. 2, Des. Comput. Graph., vol. 22, no. 6, pp. 965−971, Jun. 2010.
pp. 1067−1074, 2005.
[60] W. W. Hu, R. G. Wang, S. Fang, and Q. Hu, “Retinex algorithm for
[39] H. Kawarabuki and K. Onoguchi, “Snowfall detection in a foggy image enhancement based on bilateral filtering,” J. Eng. Graph., vol. 31,
scene,” in Proc. 22nd IEEE Int. Conf. Pattern Recognition, Stockholm, no. 2, pp. 104−109, Apr. 2010.
Sweden, 2014, pp. 877−882.
[61] X. Y. Hu, X. H. Gao, and H. B. Wang, “A novel retinex algorithm
[40] L. R. Bissonnette, “Imaging through fog and rain,” Opt. Eng., vol. 31, and its application to fog-degraded image enhancement,” Sens. Transd.,
no. 5, pp. 1045−1052, May 1992. vol. 175, no. 7, pp. 138−143, Jul. 2014.
[41] C. T. Cai, Q. Y. Zhang, and Y. H. Liang, “A survey of image [62] T. Shu, Y. F. Liu, B. Deng, Y. P. Tan, and B. Q. Chen, “Multi-scale
dehazing approaches,” in Proc. 27th Chinese Control and Decision Retinex algorithm for the foggy image enhancement based on sub-band
Conf., Qingdao, China, 2015, pp. 3964−3969. decomposition,” J. Jishou Univ., vol. 36, no. 1, pp. 40−45, Jan. 2015.
[42] Q. Wang and R. K. Ward, “Fast image/video contrast enhancement [63] K. Zhang, C. C. Wu, J. X. Miao, and L. Z. Yi, “Research about using
based on weighted thresholded histogram equalization,” IEEE Trans. the retinex-based method to remove the fog from the road traffic video,”
Consum. Electron., vol. 53, no. 2, pp. 757−764, May 2007. ICTIS 2013, pp. 861−867.
[43] R. Dale-Jones and T. Tjahjadi, “A study and modification of the
[64] M. J. Seow and V. K. Asari, “Ratio rule and homomorphic filter
local histogram equalization algorithm,” Pattern Recogn., vol. 26, no. 9,
for enhancement of digital colour image,” Neurocomputing, vol. 69,
pp. 1373−1381, Sep. 1993.
no. 7−9, pp. 954−958, Mar. 2006.
[44] M. F. Khan, E. Khan, and Z. A. Abbasi, “Segment dependent dynamic
multi-histogram equalization for image contrast enhancement,” Digit. [65] W. T. Cai, Y. X. Liu, M. C. Li, L. Cheng, and C. X. Zhang, “A self-
Signal Process., vol. 25, pp. 198−223, Feb. 2014. adaptive homomorphic filter method for removing thin cloud,” in Proc.
19th Int. Conf. Geoinformatics, Shanghai, China, 2011, pp. 1−4.
[45] T. Celik and T. Tjahjadi, “Contextual and variational contrast enhance-
ment,” IEEE Trans. Image Process., vol. 20, no. 12, pp. 3431−3441, [66] L. L. Grewe and R. B. Richard, “Atmospheric attenuation reduc-
Dec. 2011. tion through multi-sensor fusion,” in Proc. SPIE Sensor Fusion:
Architectures, Algorithms and Applications II, Orlando, FL, vol. 3376,
[46] T. K. Kim, J. K. Paik, and B. S. Kang, “Contrast enhancement system pp. 102−109, 1998.
using spatially adaptive histogram equalization with temporal filtering,”
IEEE Trans. Consum. Electron., vol. 44, no. 1, pp. 82−87, Feb. 1998. [67] F. Russo, “An image enhancement technique combining sharpening
and noise reduction,” IEEE Trans. Instrum. Meas., vol. 51, no. 4,
[47] J. Y. Kim, L. S. Kim, and S. H. Hwang, “An advanced contrast enhance- pp. 824−828, Aug. 2002.
ment using partially overlapped sub-block histogram equalization,”
IEEE Trans. Circuits Syst. Video Technol., vol. 11, no. 4, pp. 475−484, [68] Y. Du, B. Guindon, and J. Cihlar, “Haze detection and removal in high
Apr. 2001. resolution satellite image with wavelet analysis,” IEEE Trans. Geosci.
Remote Sens., vol. 40, no. 1, pp. 210−217, Jan. 2002.
[48] S. C. Huang and C. H. Yeh, “Image contrast enhancement for preserv-
ing mean brightness without losing image features,” Eng. Appl. Artif. [69] S. D. Zhou, M. Wang, F. Huang, Z. H. Liu, and S. Ye, “Color image de-
Intell., vol. 26, no. 5−6, pp. 1487−1492, May−Jun. 2013. fogging based on intensity wavelet transform and color improvement,”
J. Harbin Univ. Sci. Technol., vol. 16, no. 4, pp. 59−62, Aug. 2011.
[49] H. T. Xu, G. T. Zhai, X. L. Wu, and X. K. Yang, “Generalized
equalization model for image enhancement,” IEEE Trans. Multimed., [70] R. Zhu and L. J. Wang, “Improved wavelet transform algorithm for
vol. 16, no. 1, pp. 68−82, Jan. 2014. single image dehazing,” Optik-Int. J. Light Electron Opt., vol. 125,
no. 13, pp. 3064−3066, Jul. 2014.
[50] L. J. Wang and R. Zhu, “Image defogging algorithm of single color
image based on wavelet transform and histogram equalization,” Appl. [71] N. Anantrasirichai, A. Achim, D. Bull, and N. Kingsbury, “Mitigating
Math. Sci., vol. 7, no. 79, pp. 3913−3921, 2013. the effects of atmospheric distortion using DT-CWT fusion,” in Proc.
19th IEEE Int. Conf. Image Processing, Orlando, FL, USA, 2012,
[51] Z. Y. Xu, X. M. Liu, and N. Ji, “Fog removal from color images using
pp. 3033−3036.
contrast limited adaptive histogram equalization,” in Proc. 2nd IEEE
Int. Congress on Image and Signal Processing, Tianjin, China, 2009, [72] J. John and M. Wilscy, “Enhancement of weather degraded color
pp. 1−5. images and video sequences using wavelet fusion,” in Advances in
[52] M. F. Al-Sammaraie, “Contrast enhancement of roads images with Electrical Engineering and Computational Science, S. L. Ao and L.
foggy scenes based on histogram equalization,” in Proc. 10th Int. Conf. Gelman, Eds. Netherlands: Springer, 2009, pp. 99−109.
Computer Science & Education, Cambridge, UK, 2015, pp. 95−101. [73] J. L. Starck, F. Murtagh, E. J. Candes, and D. L. Donoho, “Gray and
[53] G. Yadav, S. Maheshwari, and A. Agarwal, “Foggy image enhancement color image contrast enhancement by the curvelet transform,” IEEE
using contrast limited adaptive histogram equalization of digitally Trans. Image Process., vol. 12, no. 6, pp. 706−717, Jun. 2003.
filtered image: Performance improvement,” in Proc. 2014 Int. Conf. [74] M. Verma, V. D. Kaushik, and V. K. Pathak, “An efficient deblurring
Advances in Computing, Communications and Informatics, New Delhi, algorithm on foggy images using curvelet transforms,” in Proc. 3rd Int.
India, 2014, pp. 2225−2231. Symposium on Women in Computing and Informatics, New York, NY,
[54] E. H. Land and J. J. McCann, “Lightness and Retinex theory,” J. Opt. USA, 2015, pp. 426−431.
Soc. Am., vol. 61, no. 1, pp. 1−11, Jan. 1971. [75] N. Salamati, A. Germain, and S. Süsstrunk, “Removing shadows from
[55] T. J. Cooper and F. A. Baqai, “Analysis and extensions of the images using color and near-infrared,” in Proc. 18th IEEE Int. Conf.
Frankle-McCann retinex algorithm,” J. Electron. Image, vol. 13, no. 1, Image Processing, Brussels, Belgium, 2011, pp. 1713−1716.
pp. 85−92, Jan. 2004.
[76] L. Schaul, C. Fredembach, and S. Süsstrunk, “Color image dehazing
[56] D. J. Jobson, Z. U. Rahman, and G. A. Woodell, “Properties and using the near-infrared,” in Proc. 16th IEEE Int. Conf. Image Process-
performance of a center/surround retinex,” IEEE Trans. Image Process., ing, Cairo, Egypt, 2009, pp. 1629−1632.
vol. 6, no. 3, pp. 451−462, Mar. 1997.
[77] J. Son, H. Kwon, T. Shim, Y. Kim, S. Ahu, and K. Sohng, “Fusion
[57] Z. U. Rahman, D. J. Jobson, and G. A. Woodell, “Multi-scale retinex method of visible and infrared images in foggy environment,” in
for color image enhancement,” in Proc. IEEE Int. Conf. Image Pro- Proc. Int. Conf. Image Processing, Computer Vision, and Pattern
cessing, Lausanne, Switzerland, vol. 3, pp. 1003−1006, 1996. Recognition, 2015, pp. 433−437.
WANG AND YUAN: RECENT ADVANCES IN IMAGE DEHAZING 433
[78] C. Feng, S. J. Zhuo, X. P. Zhang, L. Shen, and S. Susstrunk, “Near- [99] K. B. Gibson, S. J. Belongie, and T. Q. Nguyen, “Example based depth
infrared guided color image dehazing,” in Proc. IEEE Int. Conf. Image from fog,” in Proc. 20th IEEE Int. Conf. Image Processing, Melbourne,
Process., Melbourne, VIC, Australia, 2013, pp. 2363−2367. VIC, Australia, 2013, pp. 728−732.
[79] C. O. Ancuti, C. Ancuti, and P. Bekaert, “Effective single image [100] Q. S. Zhu, J. M. Mai, and L. Shao, “Single image dehazing using color
dehazing by fusion,” in Proc. 17th IEEE Int. Conf. Image Process., attenuation prior,” in Proc. 25th British Machine Vision Conference,
Hong Kong, China, 2010, pp. 3541−3544. 2014, pp. 1−10.
[80] C. O. Ancuti and C. Ancuti, “Single image dehazing by multi-scale [101] Q. S. Zhu, J. M. Mai, and L. Shao, “A fast single image haze removal
fusion,” IEEE Trans. Image Process., vol. 22, no. 8, pp. 3271−3282, algorithm using color attenuation prior,” IEEE Trans. Image Process.,
Aug. 2013. vol. 24, no. 11, pp. 3522−3533, Nov. 2015.
[81] C. O. Ancuti, C. Ancuti, C. Hermans, and P. Bekaert, “Image and video [102] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant dehazing
decolorization by fusion,” Asian Conference on Computer Vision, R. of images using polarization,” in Proc. 2001 IEEE Computer Society
Kimmel, R. Klette, and A. Sugimoto, Eds. Berlin Heidelberg, Germany: Conf. Computer Vision and Pattern Recognition, Kauai, HI, USA, 2001,
Springer-Verlag, 2011, pp. 79−92. pp. 325−332.
[82] S. Fang, R. Deng, Y. Cao, and C. L. Fang, “Effective single underwater [103] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-
image enhancement by fusion,” J. Comput., vol. 8, no. 4, pp. 904−911, based vision through haze,” Appl. Opt., vol. 42, no. 3, pp. 511−525,
Apr. 2013. Feb. 2003.
[83] Z. L. Ma, J. Wen, C. Zhang, Q. Y. Liu, and D. N. Yan, “An effective [104] S. Shwartz, E. Namer, and Y. Y. Schechner, “Blind haze separation,” in
fusion defogging approach for single sea fog image,” Neurocomputing, Proc. 2006 IEEE Computer Society Conf. Computer Vision and Pattern
vol. 173, pp. 1257−1267, Jan. 2016. Recognition, New York, NY, USA, vol. 2, pp. 1984−1991, 2006.
[84] Z. Wang and Y. Feng, “Fast single haze image enhancement,” Comput. [105] Y. Y. Schechner and Y. Averbuch, “Regularized image recovery in
Electr. Eng., vol. 40, no. 3, pp. 785−795, Apr. 2014. scattering media,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 29,
[85] F. Guo, J. Tang, and Z. X. Cai, “Fusion strategy for single image no. 9, pp. 1655−1660, Sep. 2007.
dehazing,” Int. J. Digit. Content Technol. Appl., vol. 7, no. 1, pp. 19−28, [106] R. Kaftory, Y. Y. Schechner, and Y. Y. Zeevi, “Variational distance-
Jan. 2013. dependent image restoration,” in Proc. 2007 IEEE Conf. Computer Vi-
[86] H. Zhang, X. Liu, Z. T. Huang, and Y. F. Ji, “Single image dehazing sion and Pattern Recognition, Minneapolis, MN, USA, 2007, pp. 1−8.
based on fast wavelet transform with weighted image fusion,” in [107] F. Liu, L. Cao, X. P. Shao, P. L. Han, and X. L. Bin, “Polarimetric
Proc. 2014 IEEE Int. Conf. Image Processing, Paris, France, 2014, dehazing utilizing spatial frequency segregation of images,” Appl. Opt.,
pp. 4542−4546. vol. 54, no. 27, pp. 8116−8122, Sep. 2015.
[87] J. P. Oakley and B. L. Satherley, “Improving image quality in poor
[108] S. Fang, X. S. Xia, H. Xing, and C. W. Chen, “Image dehazing
visibility conditions using a physical model for contrast degradation,”
using polarization effects of objects and airlight,” Opt. Express, vol. 22,
IEEE Trans. Image Process., vol. 7, no. 2, pp. 167−179, Feb. 1998.
no. 16, pp. 19523−19537, Aug. 2014.
[88] E. J. McCartney, Optics of the Atmosphere: Scattering by Molecules
[109] E. Namer, S. Shwartz, and Y. Schechner, “Skyless polarimetric cal-
and Particles. New York, USA: John Wiley and Sons, Inc., 1976,
ibration and visibility enhancement,” Opt. Express, vol. 17, no. 2,
pp. 1−42.
pp. 472−493, Jan. 2009.
[89] K. Tan and J. P. Oakley, “Physics-based approach to color image
[110] T. Treibitz and Y. Y. Schechner, “Polarization: Beneficial for visibility
enhancement in poor visibility conditions,” J. Opt. Soc. Am. A, vol. 18,
enhancement?,” in Proc. 2009 IEEE Conf. Computer Vision and Pattern
no. 10, pp. 2460−2467, 2001.
Recognition, Media, Iran, 2009, pp. 525−532.
[90] K. Tan and J. P. Oakley, “Enhancement of color images in poor
visibility conditions,” in Proc. 2000 Int. Conf. Image Processing, [111] C. L. Li, W. J. Lu, S. Xue, Y. C. Shi, and X. N. Sun, “Quality
Vancouver, BC, Canada, 2000, pp. 788−791. assessment of polarization analysis images in foggy conditions,” in
Proc. 2014 IEEE Int. Conf. Image Processing, Paris, France, 2014,
[91] M. J. Robinson, D. W. Armitage, and J. P. Oakley, “Seeing in the mist: pp. 551−555.
Real time video enhancement,” Sens. Rev., vol. 22, no. 2, pp. 157−161,
Jun. 2002. [112] D. Miyazaki, D. Akiyama, M. Baba, R. Furukawa, S. Hiura, and N.
Asada, “Polarization-based dehazing using two reference objects,” in
[92] N. Hautière, J. P. Tarel, J. Lavenant, and D. Aubert, “Automatic Proc. 2013 IEEE Int. Conf. Computer Vision Workshops, Washington,
fog detection and estimation of visibility distance through use of an DC, USA, 2013, pp. 852−859.
onboard camera,” Mach. Vis. Appl., vol. 17, no. 1, pp. 8−20, Apr. 2006.
[113] Y. Y. Schechner and N. Karpel, “Clear underwater vision,” in Proc.
[93] N. Hautière and D. Aubert, “Contrast restoration of foggy images 2004 IEEE Computer Society Conf. Computer Vision and Pattern
through use of an onboard camera,” in Proc. 2005 IEEE Intelligent Recognition, Washington, DC, USA, 2004, pp. 536−543.
Transportation Systems, Vienna, Austria, 2005, pp. 601−606.
[114] Y. Y. Schechner and N. Karpel, “Recovery of underwater visibility and
[94] N. Hautière, R. Labayrade, and D. Aubert, “Real-time disparity contrast structure by polarization analysis,” IEEE J. Oceanic Eng., vol. 30, no. 3,
combination for onboard estimation of the visibility distance,” IEEE pp. 570−587, Jul. 2005.
Trans. Intell. Transp. Syst., vol. 7, no. 2, pp. 201−212, Jun. 2006.
[115] T. Treibitz and Y. Y. Schechner, “Active polarization descattering,”
[95] J. Kopf, B. Neubert, B. Chen, M. F. Cohen, D. Cohen-Or, O. Deussen, IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 3, pp. 385−399,
M. Uyttendaele, and D. Lischinski, “Deep photo: Model-based pho- Mar. 2009.
tograph enhancement and viewing,” ACM Trans. Graphics (TOG),
vol. 27, no. 5, Article ID 116, 2008. [116] S. K. Nayar and S. G. Narasimhan, “Vision in bad weather,” in
Proc. 7th IEEE Int. Conf. Computer Vision, Kerkyra, Greece, 1999,
[96] S. G. Narasimhan and S. K. Nayar, “Interactive (de) weathering of an pp. 820−827.
image using physical models,” in Proc. IEEE Workshop on Color and
Photometric Methods in Computer Vision, pp. 1−8, 2003. [117] S. G. Narasimhan and S. K. Nayar, “Chromatic framework for vision in
bad weather,” in Proc. 2000 IEEE Conf. Computer Vision and Pattern
[97] Y. B. Sun, L. Xiao, Z. H. Wei, and H. Z. Wu, “Method of defogging Recognition, Hilton Head Island, SC, USA, 2000, pp. 598−605.
image of outdoor scenes based on PDE,” J. Syst. Simul., vol. 19, no. 16,
pp. 3739−3744, Aug. 2007. [118] S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” Int.
J. Comput. Vis., vol. 48, no. 3, pp. 233−254, Jul. 2002.
[98] K. T. Tang, J. C. Yang, and J. Wang, “Investigating haze-relevant
features in a learning framework for image dehazing,” in Proc. 2014 [119] S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather
IEEE Conf. Computer Vision and Pattern Recognition, Columbus, OH, degraded images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25,
Italy, 2014, pp. 2995−3002. no. 6, pp. 713−724, Jun. 2003.
434 IEEE/CAA JOURNAL OF AUTOMATICA SINICA, VOL. 4, NO. 3, JULY 2017
[120] S. G. Narasimhan and S. K. Nayar, “Removing weather effects from [142] K. M. He, J. Sun, and X. O. Tang, “Single image haze removal using
monochrome images,” in Proc. 2001 IEEE Computer Society Conf. dark channel prior,” in Proc. IEEE Conf. Computer Vision and Pattern
Computer Vision and Pattern Recognition, Kauai, HI, USA, 2001, Recognition, New York, USA, 2009, pp. 1956−1963.
pp. 186−193.
[143] K. M. He, J. Sun, and X. O. Tang, “Single image haze removal using
[121] J. Sun, J. Y. Jia, C. K. Tang, and H. Y. Shum, “Poisson matting,” ACM dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33,
Trans. Graph., vol. 23, no. 3, pp. 315−321, Aug. 2004. no. 12, pp. 2341−2353, Dec. 2011.
[122] G. Chen, T. Wang, and H. Q. Zhou, “A novel physics-based method [144] A. Levin, D. Lischinski, and Y. Weiss, “A closed-form solution to
for restoration of foggy day images,” J. Image Graph., vol. 13, no. 5, natural image matting,” IEEE Trans. Pattern Anal. Mach. Intell.,
pp. 885−893, May 2008. vol. 30, no. 2, pp. 228−242, Feb. 2008.
[123] D. Wu and Q. H. Dai, “Data-driven visibility enhancement using [145] K. B. Gibson and T. Q. Nguyen, “On the effectiveness of the dark
multi-camera system,” in Proc. SPIE Enhanced and Synthetic Vision, channel prior for single image dehazing by approximating with mini-
Orlando, Florida, USA, vol. 7689, Article ID 76890H, 2010. mum volume ellipsoids,” in Proc. IEEE Int. Conf. Acoustics, Speech,
and Single Processing, Prague, Czech Republic, 2011, pp. 1253−1256.
[124] D. Wu and Q. S. Zhu, “The latest research progress of image dehazing,”
Acta Autom. Sin., vol. 41, no. 2, pp. 221−239, Feb. 2015. [146] K. B. Gibson and T. Q. Nguyen, “An analysis of single image defogging
methods using a color ellipsoid framework,” EURASIP J. Image Video
[125] R. T. Tan, “Visibility in bad weather from a single image,” in Proc. Process., vol. 2013, pp. 37, Jan. 2013.
2008 IEEE Conf. Computer Vision and Pattern Recognition, Anchor-
age, AK, USA, 2008, pp. 1−8. [147] D. Park, D. K. Han, and H. Ko, “Single image haze removal with
WLS-based edge-preserving smoothing filter,” in Proc. 2013 IEEE
[126] C. Ancuti and C. O. Ancuti, “Effective contrast-based dehazing for Int. Conf. Acoustics, Speech and Signal Processing, Vancouver, BC,
robust image matching,” IEEE Geosci. Remote Sens. Lett., vol. 11, Canada, 2013, pp. 2469−2473.
no. 11, pp. 1871−1875, Nov. 2014.
[148] J. Yu, C. B. Xiao, and D. P. Li, “Physics-based fast single image fog
[127] M. Bertalmı́o, V. Caselles, E. Provenzi, and A. Rizzi, “Perceptual color removal,” in Proc. IEEE 10th Int. Conf. Signal Processing, Beijing,
correction through variational techniques,” IEEE Trans. Image Process., China, 2010, pp. 1048−1052.
vol. 16, no. 4, pp. 1058−1072, Apr. 2007.
[149] C. H. Yeh, L. W. Kang, M. S. Lee, and C. Y. Lin, “Haze effect removal
[128] A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmio, “En- from image via haze density estimation in optical model,” Opt. Express,
hanced variational image dehazing,” SIAM J. Imaging Sci., vol. 8, no. 3, vol. 21, no. 22, pp. 27127−27141, Nov. 2013.
pp. 1519−154, Feb. 2015.
[150] S. Fang, J. Q. Zhan, Y. Cao, and R. Z. Rao, “Improved single image
[129] A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmio, “A dehazing using segmentation,” in Proc. 17th IEEE Int. Conf. Image
variational framework for single image dehazing,” in Proc. 2014 Processing, Hong Kong, China, 2010, pp. 3589−3592.
Springer European Conf. Computer Vision, Zurich, Switzerland, 2014,
pp. 259−270. [151] F. C. Cheng, C. H. Lin, and J. L. Lin, “Constant time O(1) image fog
removal using lowest level channel,” Electron. Lett., vol. 48, no. 22,
[130] R. Fattal, “Single image dehazing,” ACM Trans. Graph. (TOG), vol. 27, pp. 1404−1406, Oct. 2012.
no. 3, Article ID 72, Aug. 2008.
[152] L. Chen, B. L. Guo, J. Bi, and J. J. Zhu, “Algorithm of single image fog
[131] R. Fattal, “Dehazing using color-lines,” ACM Trans. Graph. (TOG), removal based on joint bilateral filter,” J. Beijing Univ. Posts Telecomm.,
vol. 34, no. 1, Article ID 13, Nov. 2014. vol. 35, no. 4, pp. 19−23, Aug. 2012.
[132] L. Kratz and K. Nishino, “Factorizing scene albedo and depth from a [153] S. Serikawa and H. M. Lu, “Underwater image dehazing using joint tri-
single foggy image,” in Proc. IEEE 12th Int. Conf. Computer Vision, lateral filter,” Comput. Electr. Eng., vol. 40, no. 1, pp. 41−50, Jan. 2014.
Kyoto, Japan, 2009, pp. 1701−1708. [154] K. M. He, J. Sun, and X. O. Tang, “Guided image filtering,” in Proc.
[133] K. Nishino, L. Kratz, and S. Lombardi, “Bayesian defogging,” Int. J. 11th European Conf. Computer Vision, Berlin Heidelberg, Germany,
Computer Vis., vol. 98, no. 3, pp. 263−278, Jul. 2012. 2010, pp. 1−14.
[134] L. Caraffa and J. P. Tarel, “Stereo reconstruction and contrast restora- [155] K. M. He, J. Sun, and X. O. Tang, “Guided image filtering,” IEEE
tion in daytime fog,” in Proc. 11th Asia Conf. Computer Vision, Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1397−1409,
Daejeon, Korea, 2013, pp. 13−25. Jun. 2013.
[135] L. Caraffa and J. P. Tarel, “Markov random field model for single image [156] R. J. Gao, X. Fan, J. L. Zhang, and Z. X. Luo, “Haze filtering with
defogging,” in Proc. 2013 IEEE Intelligent Vehicles Symposium, Gold aerial perspective,” in Proc. 19th IEEE Int. Conf. Image Processing,
Coast, QLD, Australia, 2013, pp. 994−999. Orlando, FL, USA, 2012, pp. 989−992.
[136] D. Nan, D. Y. Bi, C. Liu, S. P. Ma, and L. Y. He, “A Bayesian [157] F. Guo, J. Tang, and Z. X. Cai, “Image dehazing based on haziness
framework for single image dehazing considering noise,” Sci. World analysis,” Int. J. Com., vol. 11, no. 1, pp. 78−86, Feb. 2011.
J., Vol. 2014, Article ID 651986, 2014. [158] J. H. Kim, W. D. Jang, J. Y. Sim, and C. S. Kim, “Optimized contrast
[137] L. Mutimbu and A. Robles-Kelly, “A relaxed factorial Markov random enhancement for real-time image and video dehazing,” J. Vis. Commun.
field for colour and depth estimation from a single foggy image,” Image Represent., vol. 24, no. 3, pp. 410−425, Apr. 2013.
in Proc. 20th IEEE Int. Conf. Image Processing, Melbourne, VIC, [159] C. Feng, F. P. Da, and C. X. Wang, “Single image dehazing using dark
Australia, 2013, pp. 355−359. channel prior and adjacent region similarity,” in Proc. Chinese Conf.
[138] X. M. Dong, X. Y. Hu, S. L. Peng, and D. C. Wang, “Single color Pattern Recognition, Beijing, China, 2012, pp. 463−470.
image dehazing using sparse priors,” in Proc. 17th IEEE Int. Conf. [160] Z. L. Ma, J. Wen, and L. L. Hao, “Video image defogging algo-
Image Processing, Hong Kong, China, 2010, pp. 3593−3596. rithm for surface ship scenes,” Syst. Eng. Electron., vol. 36, no. 9,
pp. 1860−1867, 2014.
[139] J. W. Zhang, L. Li, G. Q. Yang, Y. Zhang, and J. Z. Sun, “Local albedo-
insensitive single image dehazing,” Vis. Comput., vol. 26, no. 6−8, [161] Z. G. Li, J. H. Zheng, Z. J. Zhu, W. Yao, and S. Q. Wu, “Weighted
pp. 761−768, Jun. 2010. guided image filtering,” IEEE Trans. Image Process., vol. 24, no. 1,
pp. 120−129, Jan. 2015.
[140] J. W. Zhang, L. Li, Y. Zhang, G. Q. Yang, X. C. Cao, and J. Z. Sun,
“Video dehazing with spatial and temporal coherence,” Vis. Comput., [162] Z. G. Li and J. H. Zheng, “Edge-preserving decomposition-based single
vol. 27, no. 6−8, pp. 749−757, Jun. 2011. image haze removal,” IEEE Trans. Image Process., vol. 24, no. 12,
pp. 5432−5441, Dec. 2015.
[141] Y. K. Wang and C. T. Fan, “Single image defogging by multiscale depth
fusion,” IEEE Trans. Image Process., vol. 23, no. 11, pp. 4826−4837, [163] Z. G. Li, J. H. Zheng, W. Yao, and Z. J. Zhu, “Single image haze
Nov. 2014. removal via a simplified dark channel,” in Proc. 2015 IEEE Int.
WANG AND YUAN: RECENT ADVANCES IN IMAGE DEHAZING 435
Conf. Acoustics, Speech and Signal Processing, South Brisbane, QLD, [184] G. F. Meng, Y. Wang, J. Y. Duan, S. M. Xiang, and C. H. Pan, “Efficient
Australia, 2015, pp. 1608−1612. image dehazing with boundary constraint and contextual regulariza-
tion,” in Proc. 2013 IEEE Int. Conf. Computer Vision, Sydney, NSW,
[164] J. B. Wang, N. He, L. L. Zhang, and K. Lu, “Single image dehazing Australia, 2013, pp. 617−624.
with a physical model and dark channel prior,” Neurocomputing,
vol. 149, pp. 718−728, Feb. 2015. [185] B. H. Chen, S. C. Huang, and J. H. Ye, “Hazy image restoration by
bi-histogram modification,” ACM Trans. Intell. Syst. Technol., vol. 6,
[165] A. K. Tripathi and S. Mukhopadhyay, “Single image fog removal using no. 4, Article ID 50, Jul. 2015.
anisotropic diffusion,” IET Image Process., vol. 6, no. 7, pp. 966−975,
Oct. 2012. [186] B. H. Chen and S. C. Huang, “An advanced visibility restoration
algorithm for single hazy images,” ACM Trans. Multimed. Comput.
[166] F. M. Fang, F. Li, and T. Y. Zeng, “Single image Dehazing and Commun. Appl. (TOMM), vol. 11, no. 4, Article ID 53, Apr. 2015.
Denoising: A fast variational approach,” SIAM J. Imaging Sci., vol. 7,
no. 2, pp. 969−996, Apr. 2014. [187] C. O. Ancuti, C. Ancuti, C. Hermans, and P. Bekaert, “A fast semi-
inverse approach to detect and remove the haze from a single image,” in
[167] B. Li, S. H. Wang, J. Zheng, and L. P. Zheng, “Single image haze Proc. 10th Asian Conf. Computer Vision, Berlin, Heidelberg, Germany,
removal using content-adaptive dark channel and post enhancement,” 2010, pp. 501−514.
IET Comput. Vis., vol. 8, no. 2, pp. 131−140, Apr. 2014.
[188] Y. Y. Gao, H. M. Hu, S. H. Wang, and B. Li, “A fast image dehazing
[168] Y. H. Shiau, H. Y. Yang, P. Y. Chen, and Y. Z. Chuang, “Hardware algorithm based on negative correction,” Signal Process., vol. 103,
implementation of a fast and efficient haze removal method,” IEEE pp. 380−398, Oct. 2014.
Trans. Circuits Syst. Video Technol., vol. 23, no. 8, pp. 1369−1374,
[189] J. F. Li, H. Zhang, D. Yuan, and H. L. Wang, “Haze removal from
Aug. 2013.
single images based on a luminance reference model,” in Proc. 2nd
[169] W. Sun, B. L. Guo, D. J. Li, and W. Jia, “Fast single-image dehazing Asian Conf. Pattern Recognition, Naha, Japan, 2013, pp. 446−450.
method for visible-light systems,” Opt. Eng., vol. 52, no. 9, pp. 093103, [190] S. C. Pei and T. Y. Lee, “Nighttime haze removal using color transfer
May 2013. pre-processing and dark channel prior,” in Proc. 19th IEEE Int. Conf.
[170] M. Ding and R. F. Tong, “Efficient dark channel based image dehazing Image Processing, Orlando, FL, USA, 2012, pp. 957−960.
using quadtrees,” Sci. China Inf. Sci., vol. 56, no. 9, pp. 1−9, Sep. 2013. [191] J. Zhang, Y. Cao, and Z. F. Wang, “Nighttime haze removal based on a
[171] X. M. Zhu, Y. Li, and Y. Qiao, “Fast single image dehazing through new imaging model,” in Proc. 2014 IEEE Int. Conf. Image Processing,
edge-guided interpolated filter,” in Proc. 14th IAPR Int. Conf. Machine Paris, France, 2014, pp. 4557−4561.
Vision Applications, Tokyo, Japan, 2015, pp. 443−446. [192] X. S. Jiang, H. X. Yao, S. P. Zhang, X. S. Lu, and W. Zeng, “Night
video enhancement using improved dark channel prior,” in Proc. 20th
[172] K. B. Gibson and T. Q. Nguyen, “Fast single image fog removal
IEEE Int. Conf. Image Processing, Melbourne, VIC, Australian, 2013,
using the adaptive wiener filter,” in Proc. 20th IEEE Int. Conf. Image
pp. 553−557.
Processing, Melbourne, VI, Australia, 2013, pp. 714−718.
[193] J. P. Tarel and N. Hautiere, “Fast visibility restoration from a single
[173] S. C. Huang, B. H. Chen, and W. J. Wang, “Visibility restoration of
color or gray level image,” in Proc. IEEE 12th Int. Conf. Computer
single hazy images captured in real-world weather conditions,” IEEE
Vision, Kyoto, Japan, 2009, pp. 2201−2208.
Trans. Circuits Syst. Video Technol., vol. 24, no. 10, pp. 1814−1824,
Oct. 2014. [194] K. B. Gibson and T. Q. Nguyen, “Hazy image modeling using color
ellipsoids,” in Proc. 18th IEEE Int. Conf. Image Processing, Brussels,
[174] S. C. Huang, J. H. Ye, and B. H. Chen, “An advanced single-image Belgium, 2011, pp. 1861−1864.
visibility restoration algorithm for real-world hazy scenes,” IEEE Trans.
Industr. Electron., vol. 62, no. 5, pp. 2962−2972, May 2015. [195] J. Yu and Q. M. Liao, “Fast single image fog removal using edge-
preserving smoothing,” in Proc. 2011 IEEE Int. Conf. Acoustics, Speech
[175] J. G. Wang, S. C. Tai, and C. J. Lin, “Image haze removal using a and Signal Processing, Prague, Czech Republic, 2011, pp. 1245−1248.
hybrid of fuzzy inference system and weighted estimation,” J. Electron
Imaging, vol. 24, no. 3, Article ID 033027, Jun. 2015. [196] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color
images,” in Proc. 6th IEEE Int. Conf. Computer Vision, Bombay, India,
[176] W. Sun, “A new single-image fog removal algorithm based on 1998, pp. 839−846.
physical model,” Optik-Int. J. Light Electron Opt., vol. 124, no. 21,
pp. 4770−4775, Nov. 2013. [197] H. Y. Zhao, C. B. Xiao, J. Yu, and X. J. Xu, “Single image fog
removal based on local extrema,” IEEE/CAA J. Autom. Sin., vol. 2,
[177] W. Sun, H. Wang, C. H. Sun, B. L. Guo, W. Y. Jia, and M. G. Sun, “Fast no. 2, pp. 158−165, Apr. 2015.
single image haze removal via local atmospheric light veil estimation,”
Comput. Electr. Eng., vol. 46, pp. 371−383, Aug. 2015. [198] C. X. Xiao and J. J. Gan, “Fast image dehazing using guided joint
bilateral filter,” Vis. Comput., vol. 28, no. 6−8, pp. 713−721, Jun. 2012.
[178] H. B. Liu, J. Yang, Z. P. Wu, and Q. N. Zhang, “Fast single image
[199] J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint
dehazing based on image fusion,” J. Electron. Imaging, vol. 24, Article
bilateral upsampling,” ACM Trans. Graph., vol. 26, no. 3, Article ID
ID 013020, Jan. 2015.
96, Jul. 2007.
[179] W. Wang, W. H. Li, Q. J. Guan, and M. Qi, “Multiscale single [200] L. C. Bao, Y. B. Song, Q. X. Yang, and N. Ahuja, “An edge-preserving
image dehazing based on adaptive wavelet fusion,” Math. Probl. Eng., filtering framework for visibility restoration,” in Proc. 21st Int. Conf.
vol. 2015, Article ID 131082, May 2015. Pattern Recognition, Tsukuba, Japan, 2012, pp. 384−387.
[180] Y. H. Shiau, P. Y. Chen, H. Y. Yang, C. H. Chen, and S. S. Wang, [201] Q. Yan, L. Xu, and J. Y. Jia, “Dense scattering layer removal,” in Proc.
“Weighted haze removal method with halo prevention,” J. Visual ACM SIGGRAPH Asia 2013 Technical Briefs, New York, NY, USA,
Commun. Image Represent., vol. 25, no. 2, pp. 445−453, Feb. 2014. 2013.
[181] J. Chen and L. P. Chau, “An enhanced window-variant dark chan- [202] X. Liu, F. X. Zeng, Z. T. Huang, and Y. F. Ji, “Single color image
nel prior for depth estimation using single foggy image,” in Proc. dehazing based on digital total variation filter with color transfer,”
IEEE Int. Conf. Image Processing, Melbourne, VIC, Australia, 2013, in Proc. 20th IEEE Int. Conf. Image Processing, Melbourne, VIC,
pp. 3508−3512. Australia, 2013, pp. 909−913.
[182] T. H. Kil, S. H. Lee, and N. I. Cho, “Single image dehazing based on [203] M. Negru, S. Nedevschi, and R. I. Peter, “Exponential image enhance-
reliability map of dark channel prior,” in Proc. 20th IEEE Int. Conf. ment in daytime fog conditions,” in Proc. 17th IEEE Int. Conf. Intel-
Image Processing, Melbourne, VIC, Australia, 2013, pp. 882−885. ligent Transportation Systems, Qingdao, China, 2014, pp. 1675−1681.
[183] D. Wang and J. Zhu, “Fast smoothing technique with edge preser- [204] J. F. Li, H. Zhang, D. Yuan, and M. G. Sun, “Single image dehazing
vation for single image dehazing,” IET Comput. Vis., vol. 9, no. 6, using the change of detail prior,” Neurocomputing, vol. 156, pp. 1−11,
pp. 950−959, Dec. 2015. May 2015.
436 IEEE/CAA JOURNAL OF AUTOMATICA SINICA, VOL. 4, NO. 3, JULY 2017
[205] J. H. Kim, J. Y. Sim, and C. S. Kim, “Single image dehazing based on [223] N. Hautière, J. P. Tarel, D. Aubert, and E. Dumont, “Blind contrast
contrast enhancement,” in Proc. 2011 IEEE Int. Conf. Acoustics, Speech enhancement assessment by gradient ratioing at visible edges,” Image
and Signal Processing, Prague, Czech Republic, 2011, pp. 1273−1276. Anal. Stereol. J., vol. 27, no. 2, pp. 87−95, Jun. 2008.
[206] H. Park, D. Park, D. K. Han, and H. Ko, “Single image haze removal [224] L. K. Choi, J. You, and A. C. Bovik, “Referenceless prediction of
using novel estimation of atmospheric light and transmission,” in Proc. perceptual fog density and perceptual image defogging,” IEEE Trans.
IEEE Int. Conf. Image Processing, Paris, France, 2014, pp. 4502−4506. Image Process., vol. 24, no. 11, pp. 3888−3901, Nov. 2015
[207] H. Park, D. Park, D. K. Han, and H. Ko, “Single image dehazing with [225] D. P. Li, J. Yu, and C. B. Xiao, “No-reference quality assess-
image entropy and information fidelity,” in Proc. 2014 IEEE Int. Conf. ment method for defogged images,” J. Image Graph., vol. 16, no. 9,
Image Processing, Paris, France, 2014, pp. 4037−4041. pp. 1753−1757, Sep. 2011.
[208] Y. S. Lai, Y. L. Chen, and C. T. Hsu, “Single image dehazing with [226] F. Guo and Z. X. Cai, “Objective assessment method for the clearness
optimal transmission map,” in Proc. 21st IEEE Int. Conf. Pattern effect of image defogging algorithm,” Acta Autom. Sin., vol. 38, no. 9,
Recognition, Tsukuba, Japan, 2012, pp. 388−391. pp. 1410−1419, Sep. 2012.
[209] Y. H. Lai, Y. L. Chen, C. J. Chiou, and C. T. Hsu, “Single-image [227] Z. Y. Chen, T. T. Jiang, and Y. H. Tian, “Quality assessment for
dehazing via optimal transmission map under scene priors,” IEEE comparing image enhancement algorithms,” in Proc. 2014 IEEE Conf.
Trans. Circuits Syst. Video Technol., vol. 25, no. 1, pp. 1−14, Jan. 2015. Computer Vision and Pattern Recognition, Columbus, OH, Italy, 2014,
pp. 3003−3010.
[210] M. Pedone and J. Heikkilä, “Robust airlight estimation for haze
removal from a single image,” in Proc. 2011 IEEE Computer Society
Conf. Computer Vision and Pattern Recognition Workshops, Colorado
Springs, CO, USA, 2011, pp. 90−96.
[211] F. C. Cheng, C. C. Cheng, P. H. Lin, and S. C. Huang, “A hierarchical Wencheng Wang received the B.S. degree in au-
airlight estimation method for image fog removal,” Eng. Appl. Artif. tomatic engineering in 2002, the M.S. and Ph.D.
Intell., vol. 43, pp. 27−34, Aug. 2015. degrees in pattern recognition and intelligent system
from Shandong University, Jinan, in 2005 and 2011,
[212] T. O. Aydin, R. Mantiuk, K. Myszkowski, and H. S. Seidel, “Dynamic
respectively. And now he is an associate professor of
range independent image quality assessment,” ACM Trans. Graph.
Department of Information and Control Engineering
(TOG), vol. 27, no. 3, Article ID 69, Aug. 2008.
in Weifang University. From 2006 to 2007, he was
[213] K. Q. Huang, Q. Wang, and Z. Y. Wu, “Natural color image en- a visiting scholar at Qingdao University of Science
hancement and evaluation algorithm based on human visual system,” and Technology, and now he is a visiting scholar
Comput. Vis. Image Underst., vol. 103, no. 1, pp. 52−63, Jul. 2006. in University of North Texas and engaging in the
research of computer vision and automatic detection
[214] B. S. Manjunath, J. R. Ohm, V. V. Vasudevan, and A. Yamada, “ Color technology, especially on image dehazing. His group has published and
and texture descriptors,” IEEE Trans. Circuits Syst. Video Technol., authored more than 30 papers on academic journals and conference, four book
vol. 11, no. 6, pp. 703−715, Jun. 2001. chapters and 5 patents, and more than 30 papers have been indexed by SCI/EI.
[215] K. D. Ma, W. T. Liu, and Z. Wang, “Perceptual evaluation of single His main research interests include computer vision, pattern recognition,
image dehazing algorithms,” in Proc. 2015 IEEE Int. Conf. Image and intelligent computing. He is currently serving as an associate editor of
Processing, Quebec City, QC, Canada, 2015, pp. 3600−3604. international journal Transactions of the Institute of Measurement and Control.
He was awarded the Young researchers award of Weifang University in 2010.
[216] A. K. Moorthy and A. C. Bovik, “A two-step framework for construct-
ing blind image quality indices,” IEEE Signal Process. Lett., vol. 17,
no. 5, pp. 513−516, May 2010.
[217] A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference image
quality assessment in the spatial domain,” IEEE Trans. Image Process., Xiaohui Yuan received the B.S. degree in electrical
vol. 21, no. 12, pp. 4695−4708, Dec. 2012. engineering from Hefei University of Technology,
China in 1996 and Ph.D. degree in computer sci-
[218] A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a completely ence from Tulane University in 2004. After his
blind image quality analyzer,” IEEE Signal Process. Lett., vol. 20, no. 3, graduation, he worked at the National Institutes of
pp. 209−212, Mar. 2013. Health on medical imaging and analysis till 2006.
[219] M. A. Saad, A. C. Bovik, and C. Charrier, “Blind image quality He joined the University of North Texas (UNT) as
assessment: A natural scene statistics approach in the dct domain,” an Assistant Professor in 2006 and was promoted
IEEE Trans. Image Process., vol. 21, no. 8, pp. 3339−3352, Aug. 2012. to Associate Professor with tenure in 2012. His
research interests include computer vision, data min-
[220] Q. B. Wu, H. L. Li, K. N. Ngan, B. Zeng, and M. Gabbouj, “No ing, machine learning, and artificial intelligence. He
reference image quality metric via distortion identification and multi- served as PI and co-PI in projects supported by Air Force Laboratory, National
channel label transfer,” in Proc. 2014 IEEE Int. Symposium on Circuits Science Foundation (NSF), Texas Advanced Research Program, Oak Ridge
and Systems, Melbourne VIC, Australia, 2014, pp. 530−533. Associated Universities, and UNT. His research findings are reported in over
70 peer-reviewed papers. Dr. Yuan is a recipient of Ralph E. Powe Junior
[221] Y. M. Fang, K. D. Ma, Z. Wang, W. S. Lin, Z. J. Fang, and G. T. Zhai,
Faculty Enhancement award in 2008 and the Air Force Summer Faculty
“No-reference quality assessment of contrast-distorted images based
Fellowship in 2011, 2012, and 2013. He also received two research awards
on natural scene statistics,” IEEE Signal Process. Lett., vol. 22, no. 7,
and a teaching award from UNT in 2007, 2008, and 2012, respectively. He
pp. 838−842, Jul. 2015.
served in the editorial board of several international journals and served as
[222] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image session chairs in many conferences, as well as panel reviewer for funding
quality assessment: From error visibility to structural similarity,” IEEE agencies including NSF, NIH, and Louisiana Board of Regents Research
Trans. Image Process., vol. 13, no. 4, pp. 600−612, Apr. 2004. Competitiveness program. He is a member of IEEE and SPIE.