PII: S1047-3203(16)30224-3
DOI: http://dx.doi.org/10.1016/j.jvcir.2016.11.001
Reference: YJVCI 1886
Please cite this article as: C. Jung, Q. Yang, T. Sun, Q. Fu, H. Song, Low Light Image Enhancement with Dual-
Tree Complex Wavelet Transform, J. Vis. Commun. Image R. (2016), doi: http://dx.doi.org/10.1016/j.jvcir.
2016.11.001
Low Light Image Enhancement with Dual-Tree Complex Wavelet Transform
Cheolkon Jung, Qi Yang, Tingting Sun, Qingtao Fu, and Hyoseob Song
Abstract—In low light conditions, the low dynamic range of the image distorts the contrast and results in high noise
levels. In this work, we propose an effective contrast enhancement method based on the dual-tree complex wavelet
transform (DT-CWT) that operates on a wide range of imagery without noise amplification. In terms of
enhancement, we exploit the nonlinear response of the human eye to luminance to design a logarithmic function for
global brightness promotion. Moreover, local contrast is enhanced by contrast limited adaptive histogram
equalization (CLAHE) in the low-pass subbands, which makes the structure of the image clearer. In terms of noise
reduction, thanks to the direction-selective property of DT-CWT, a content-based total variation (TV) diffusion is
designed that assigns a weight to the smoothing criterion to treat noise and edges differently in the high-pass
subbands. The proposed method performs very well on low-light images, outperforming conventional methods in
terms of both contrast enhancement and noise reduction in the output data.
Index Terms—Contrast enhancement, dual-tree complex wavelet transform, noise reduction, wavelet coefficient.
I. INTRODUCTION
Contrast enhancement plays an important role in image processing, computer vision, and pattern recognition. Low
contrast in images is caused by many reasons such as the user’s operational error, poor quality of the imaging device,
and low light condition. For instance, the image captured in a dark environment often contains large regions of too
This work was supported by the National Natural Science Foundation of China (No. 61271298) and the International S&T Cooperation Program of China
(No. 2014DFG12780).
C. Jung is with School of Electronic Engineering, Xidian University, Xi'an 710071, China (e-mail: [email protected]).
Q. Yang is with School of Electronic Engineering, Xidian University, Xi'an 710071, China (e-mail:[email protected]).
T. Sun is with School of Electronic Engineering, Xidian University, Xi'an 710071, China (e-mail: [email protected]).
Q. Fu is with School of Electronic Engineering, Xidian University, Xi'an 710071, China (e-mail: [email protected]).
H. Song is with VA Lab, Samsung SDS, Seoul 130-240, Korea (e-mail: [email protected])
dark pixels whose visibility is remarkably reduced [1]. That is, ambient light is an indispensable factor for the quality
of images captured by imaging devices. Above all, images captured in the dark condition often have low and
concentrated gray scales, thus making images have a narrow dynamic range and low contrast [2]. It is necessary to
improve the contrast of images captured under low light conditions. In general, contrast enhancement aims to make the
image have a perceptually more pleasing or visually more informative vision effect [3].
Up to now, researchers have proposed many contrast enhancement methods to improve the contrast of low-light
images. Histogram equalization (HE) remaps input pixel values according to the probability distribution of
the input image so that the enhanced image has a uniform histogram and fully utilizes the
dynamic range [4, 5]. However, since HE does not preserve the mean brightness effectively, over-enhancement is
its main problem, which causes visible distortions such as contouring or noise/artifacts [6-8]. To overcome this
drawback of HE, researchers have proposed many outstanding methods in recent years; the representative ones
are the histogram modification framework (HMF) [9] and optimized contrast-tone mapping (OCTM) [10]. HMF
formulated contrast enhancement as an optimization problem that minimized special penalty terms to adjust the degree of enhancement [9]. OCTM
formulated contrast enhancement as one of optimal allocation of an output dynamic range by maximizing contrast
gains while minimizing tone distortions, which was solved by linear programming [10]. Both HMF and OCTM
overcame excessive enhancement and produced visually pleasing results by selecting the proper constraints. However,
they ignored the special characteristics of images captured under poor ambient illumination conditions. In practice,
low light images were different from natural scenes captured under ordinary conditions. In general, they had low
signal-to-noise ratios (SNRs), and thus suffered from noise and color distortions [11]. For this reason, some enhancement and
denoising algorithms have been proposed in recent years. Malm et al. proposed a low light image enhancement
algorithm based on adaptive spatiotemporal smoothing and contrast limited histogram equalization (CLHE) to reduce
noise and expand the dynamic range of low light images [12]. However, this method required high computational
costs and suffers from motion blur. Chen et al. [13] proposed an intra-and-inter-constraint based algorithm for
video enhancement. This method analyzed features from different region-of-interests (ROIs) and created a global
tone mapping curve for the entire image. Although this method could achieve relatively good intra-frame quality in a
video, it involved the detection of ROIs, feature extraction, and other complex operations, so its
computational costs were very high. Rao et al. [14, 15] proposed an image-based fusion video enhancement algorithm
for night-time videos. This method fused video frames from high-quality day-time and night-time backgrounds with
low-quality night-time videos. However, the moving objects in day-time videos were hard to remove completely,
and the enhanced frames became unnatural. Moreover, the moving objects inevitably caused ghost artifacts. Yin et al.
[16] presented a novel framework for low light image enhancement and noise reduction by performing
brightness/contrast stretching and noise reduction in the HSI and YCbCr color spaces. Huang et al. provided an
automatic transformation technique to improve the brightness of dimmed images via the gamma correction and
probability distribution of luminance pixels [17]. Although most grey-level transformation methods are performed in
the spatial domain, some researchers have utilized wavelets for this purpose in recent years. The advantage of using
these transformation methods is their ability to analyze and modify image features based on their spatial-frequency
content at different resolutions. Loza et al. [18] proposed an automatic contrast enhancement method for low-light
images based on local statistics of wavelet coefficients. They used a nonlinear enhancement function based on the
local distribution of the wavelet coefficients modeled as a Cauchy distribution to stretch brightness/contrast and
utilized a shrinkage function to prevent noise amplification. However, the effect of noise reduction in this method
was not obvious. Recently, Easley et al. [19] proposed a highly efficient denoising method which combined the
shearlets with total variation (TV) diffusion. They reduced noise based on shearlet representation by constraining the
residual coefficients from a projected adaptive total variation scheme. This method adapted the diffusion to the
local structure by considering the content type. Although the above methods have improved the visual quality of
low-light images to some extent, it remains a challenging task to achieve both noise reduction and color
reproduction for low light images.
In this paper, we propose a simple but effective low light image enhancement method based on DT-CWT [20]. In
the DT-CWT framework, we perform contrast enhancement in the low-pass subbands by CLAHE [21]. Moreover,
we apply TV diffusion to high-pass subbands for noise reduction. During the TV diffusion process, we adjust the
diffusion degree in noisy areas of the image based on content information. Experimental results show that the
effectiveness of the algorithm is threefold: noise reduction, local contrast enhancement, and global brightness
promotion. Compared with existing methods, our main contributions are as follows:
1) We achieve both contrast enhancement and noise reduction in the DT-CWT framework. We perform contrast
enhancement in the low-pass subbands using CLAHE, while we apply TV diffusion to the high-pass subbands for
noise reduction. During the TV diffusion process, we adjust the diffusion strength in noisy areas of the image based
on content information. 2) We successfully preserve edges and details using the direction selective property of
DT-CWT. The direction-selective property is beneficial to the TV diffusion; thus, details are successfully preserved.
Experimental results demonstrate that the proposed method is very effective in enhancing low-light images while suppressing noise.
The remainder of this paper is organized as follows. Section II provides the proposed method in detail, while
Section III presents the experimental results and their corresponding analysis. Conclusions are drawn in Section IV.
II. PROPOSED METHOD
Fig. 1 illustrates the flowchart of the proposed method for low light image enhancement. As shown in the figure,
we first convert the input image into the YUV color space to get the Y component. Then, we perform bilateral
filtering to decompose the input Y channel into base and detail layers and enhance the detail layer to recall the lost
details. We conduct a logarithmic mapping for global brightness enhancement by remapping a narrow intensity range
in the input image to a wider range. Next, we remove noise in high frequency wavelet coefficients using edge
directionality, and enhance local contrast in the low frequency ones using CLAHE via DT-CWT. Finally, we perform
color correction to get the final result. The detailed descriptions are provided as follows:
A. Color Space Conversion
First, we convert the input image from the RGB color space to the YUV color space to get the Y channel for contrast
enhancement and noise reduction. Since the chrominance channel provides relatively little information in low-light
condition compared with the luminance channel, we use the Y channel. We obtain the Y channel as follows:
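The conversion matrix itself is not shown in this excerpt. A minimal sketch (in Python rather than the paper's MATLAB), assuming the standard ITU-R BT.601 luminance weights conventionally used for YUV:

```python
import numpy as np

def rgb_to_y(rgb):
    """Luminance (Y) from an RGB image via BT.601 weights.

    Assumes rgb has shape (H, W, 3). The paper's exact conversion is
    not recoverable from this excerpt; BT.601 is the conventional
    choice for YUV and is used here as an assumption.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights  # weighted sum over the channel axis

# A pure-white pixel maps to full luminance; pure green to 0.587.
img = np.array([[[1.0, 1.0, 1.0], [0.0, 1.0, 0.0]]])
y = rgb_to_y(img)
```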
Fig. 1 Flowchart of the proposed method. The Y channel is decomposed by a J-level DT-CWT; the denoising model is
applied to the real and imaginary parts of the high-pass subbands, followed by a (J-1)-level inverse DT-CWT; a
one-level DT-CWT is applied to the low-pass subbands and CLAHE to the LL subbands, followed by an inverse
DT-CWT; the reconstructed luminance Ye and the input R, G, B then enter color correction.
B. Illumination Compensation
Images captured under poor illumination condition often suffer from a low dynamic range, thus resulting in losing
much detail information in low light images. Illumination compensation aims to solve this problem. To increase the
detail information, we utilize bilateral filtering to decompose the Y channel into base and detail layers. After
smoothing by bilateral filtering, we get the base layer Yb that preserves region boundaries and other structures in the
Y channel. By subtracting Yb from the Y channel, we obtain the detail layer Yd which contains detail information in
images. Thus, we obtain the enhanced detail layer Y’d from Yb and Yd as follows [16]:
However, the resulting luminance still has a narrow dynamic range, which is not matched with the dynamic range of the sensing and/or display devices. Thus, it is necessary to adjust
the illumination by a mapping function to promote the global brightness. Because the logarithmic function is strongly
correlated to the nonlinear response of the human eye to the luminance, the adjusted Yp is obtained as follows:
where we should change the range of Y’ from [0, 1] to [0, 255]; Y’max is the maximum luminance; and b is a
parameter varying in the range of [0.6, 2]. The result of illumination compensation is shown in Fig. 2. The detail
layer (Fig. 2(c)) is obtained by subtracting the base layer (Fig. 2(b)), i.e. the bilateral filtering result, from the input Y
channel (Fig. 2(a)). It can be observed that the structure and waves on the lake are more obvious (Fig. 2(e)) after
enhancing the detail layer. Moreover, its global brightness is successfully enhanced (Fig. 2(f)), which makes the
visual effect better. However, noise also becomes more obvious after the enhancement.
Fig. 2 Illumination compensation result. (a) Input Y channel. (b) Base layer Yb. (c) Detail layer Yd. (d) Enhanced
detail layer Y’d. (e) Y channel with more details. (f) Adjusted Yp.
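Equation (3) itself is not recoverable from this excerpt. The sketch below therefore uses a generic logarithmic curve with the parameters described in the text (Y'max and b) as an assumed form only, to illustrate how such a mapping lifts dark values while leaving the endpoints fixed:

```python
import numpy as np

def log_brightness(y, y_max=255.0, b=1.2):
    """Global brightness promotion by a logarithmic mapping.

    Hypothetical form standing in for the paper's equation (3), which
    is missing from this excerpt: a concave log curve parameterized by
    Y'max and b (b in [0.6, 2] per the text). y is in [0, 255].
    """
    return y_max * np.log1p(b * y / y_max) / np.log1p(b)

y = np.linspace(0.0, 255.0, 256)
yp = log_brightness(y)  # dark inputs are promoted; 0 and 255 are fixed
```

Because the curve is concave and passes through (0, 0) and (Y'max, Y'max), every intermediate gray level is mapped upward, which matches the described global brightness promotion.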
C. Dual-Tree Complex Wavelet Transform
The wavelet theory integrates mathematics and signal processing, and has popularly been applied to the image
processing field in recent years. It has a superior performance in noise reduction, image restoration, and resolution
enhancement [22, 23]. In the discrete wavelet transform (DWT), coefficients are down-sampled after high and low
pass filtering.
Fig. 3 Impulse responses of 2-D complex wavelet filters with six dominant orientations of ±15°, ±45°, and ±75° for the real
and imaginary parts.
Fig. 4 The difference between DWT and DT-CWT when they represent a curve. (a) DWT. (b) DT-CWT.
Although DWT avoids redundancy and allows the same pair of filters to be used at different scales, it lacks the
direction-selective property. Thus, DWT is not able to represent 2D signals effectively. For example, as shown in
Fig. 4(a), it needs relatively many coefficients (isolated dots) to fit a curve. In contrast, DT-CWT has six
evenly spaced orientations represented by the complex coefficients as shown in Fig. 3. The direction selective
property is very effective in representing the curve using directional wavelet coefficients to fit the curve. Fig. 4
illustrates the difference between DWT and DT-CWT when they represent the curve. Thus, the direction selective
property of DT-CWT makes it possible to preserve the gradient of edges, which can separate the signal and noise in
the high-pass subbands. The TV diffusion depends on the gradient operator based on the diffusion flux to distinguish
the signal from noise. It iteratively eliminates small variations due to noise while preserving large variations due to
edges. That is, we combine TV diffusion with DT-CWT to deal with the denoising problem. The TV function is used
as a weight for the wavelet coefficients. We utilize the content information to control the diffusion degree in the
dual-tree wavelet domain. Since the textural information often exists in the high-pass subbands, textures would not
be lost if we perform contrast enhancement only in the low-pass subbands. Thanks to the characteristics of DT-CWT,
we can enhance the contrast while avoiding the amplification of noise in low-light images.
D. Content-Based TV Diffusion
After contrast enhancement in low-light images, noise is also amplified, and becomes more obvious. In our work,
we perform content-based total variation (TV) diffusion which assigns a weight to the wavelet coefficients of
DT-CWT. The TV diffusion depends on the gradient operator to distinguish the signal from noise as follows [24]:
$$ S = \arg\min_S \sum_p \left\{ \frac{1}{2}\left(S_p - I_p\right)^2 + \lambda \left|\nabla S_p\right| \right\} \qquad (4) $$

where I is the input, which could be the coefficients of each high-pass subband, and p indexes 2D coefficients; S is the
resulting denoised high-pass subband; and λ is the Lagrange parameter, which controls the influence of noise on S. The
gradient term is given by

$$ \left|\nabla S_p\right| = \left|\partial_x S_p\right| + \left|\partial_y S_p\right| \qquad (5) $$

with the anisotropic expression in 2D, where ∂x and ∂y are the partial derivatives in the two directions.
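Objective (4) with the anisotropic gradient of (5) can be minimized in many ways; the following is a minimal illustrative sketch using plain gradient descent on a smoothed absolute value (|t| ≈ √(t² + ε)), not necessarily the solver used in the paper:

```python
import numpy as np

def tv_denoise(I, lam=0.2, tau=0.05, eps=1e-2, iters=400):
    """Gradient descent on sum_p 0.5*(S_p - I_p)^2 + lam*|grad S_p|
    (anisotropic TV as in eq. (5)), with |t| smoothed to sqrt(t^2 + eps)
    so the objective is differentiable. Illustration only; the paper's
    solver may differ.
    """
    S = I.astype(np.float64).copy()
    for _ in range(iters):
        # forward differences (last row/column replicated -> zero there)
        dx = np.diff(S, axis=1, append=S[:, -1:])
        dy = np.diff(S, axis=0, append=S[-1:, :])
        px = dx / np.sqrt(dx ** 2 + eps)  # derivative of smoothed |dx|
        py = dy / np.sqrt(dy ** 2 + eps)
        # negative divergence of (px, py) is the gradient of the TV term
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        S -= tau * ((S - I) - lam * div)
    return S

rng = np.random.default_rng(0)
clean = np.zeros((16, 16))
clean[:, 8:] = 1.0                                   # a step edge
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

The iteration removes small noise-induced gradients while the large, consistent gradient at the step edge survives, which is exactly the selective-smoothing behavior described above.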
Based on careful observations, we have found that directly applying (4) to denoising is not able to achieve
effective noise reduction. This is because its criterion for selective smoothing depends on the gradient modulus
|∇S_p|, which is not able to effectively distinguish between edges and noise. Some of the noise could have
higher gradients than some edges, and hence weaker diffusivities. This arrangement makes it difficult to ensure both the
smoothness of noise and the preservation of edges. Therefore, it is necessary to update this smoothing criterion to
gain a higher scale of the smoothness in noise than that in edges. The proposed method contains a general pixel-wise
TV measure as follows:
$$ \Phi_x(p) = \sum_{q \in R(p)} g_{p,q}\, \partial_x S_q, \qquad \Phi_y(p) = \sum_{q \in R(p)} g_{p,q}\, \partial_y S_q \qquad (6) $$

where q belongs to R(p), the rectangular region centered at a pixel p; Φ_x(p) and Φ_y(p) are the windowed TVs in the
x and y directions for a pixel p; ∂S_q captures the variation of coefficients; and g_{p,q} is a weighting function defined as

$$ g_{p,q} \propto \exp\left( -\frac{(x_p - x_q)^2 + (y_p - y_q)^2}{2\sigma^2} \right) \qquad (7) $$

where σ controls the scale of the window, which has a range of (0, 6]. The sum of ∂S_q depends on whether the
gradients in a window are coincident or not in terms of their directions, because ∂S_q for one coefficient could be
either positive or negative. The resulting Φ in a window that contains only noise is generally smaller than that in a
window including edges. An intuitive explanation is that an edge in a local window contains more similarly directed
gradients than noise with complex patterns. Thus, we use Φ to control the smoothing degree as follows:
$$ S = \arg\min_S \sum_p \left\{ \frac{1}{2}\left(S_p - I_p\right)^2 + \lambda\, \Gamma_p^{-1} \left|\nabla S_p\right| \right\} \qquad (8) $$

$$ \Gamma_p = \frac{1}{2}\left( \Phi_x(p) + \Phi_y(p) \right) \qquad (9) $$

where Γ_p is the diffusion factor, which varies adaptively according to the characteristics of the image itself. We
perform content-based TV diffusion in the dual-tree wavelet domain. The denoising function in (8) is used in the
high-pass subbands at the corresponding decomposition level for noise reduction. The low-pass subbands at the final
decomposition level are processed using contrast limited adaptive histogram equalization (CLAHE) [21], which
limits the local contrast gain by restricting the maximum number of pixels at a grayscale level to improve the contrast
of low-pass subbands while preventing over-enhancement. Therefore, the readability of low-light images is
significantly enhanced by noise reduction and contrast enhancement in high-pass and low-pass subbands.
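The windowed TV measure of (6) and (7) can be sketched as follows; the function names and the O(N·R²) double loop are mine. A window straddling an edge accumulates consistently signed derivatives, while a pure-noise window largely cancels, which is the property the diffusion factor exploits:

```python
import numpy as np

def windowed_tv(S, sigma=3.0, radius=4):
    """Windowed TV measures of eq. (6): Gaussian-weighted sums of the
    *signed* derivatives over a window R(p). Signed sums cancel for
    noise but accumulate along an edge. Plain O(N*R^2) loops for clarity.
    """
    H, W = S.shape
    dx = np.diff(S, axis=1, append=S[:, -1:])
    dy = np.diff(S, axis=0, append=S[-1:, :])
    phi_x = np.zeros_like(S, dtype=np.float64)
    phi_y = np.zeros_like(S, dtype=np.float64)
    for py in range(H):
        for px in range(W):
            for qy in range(max(0, py - radius), min(H, py + radius + 1)):
                for qx in range(max(0, px - radius), min(W, px + radius + 1)):
                    # eq. (7): Gaussian spatial weight g_{p,q}
                    g = np.exp(-((px - qx) ** 2 + (py - qy) ** 2) / (2 * sigma ** 2))
                    phi_x[py, px] += g * dx[qy, qx]
                    phi_y[py, px] += g * dy[qy, qx]
    return phi_x, phi_y

rng = np.random.default_rng(1)
edge = np.zeros((12, 12)); edge[:, 6:] = 1.0     # vertical edge
noise = 0.1 * rng.standard_normal((12, 12))      # pure noise patch
ex, _ = windowed_tv(edge)
nx, _ = windowed_tv(noise)
```

Comparing the two responses shows the edge window producing a much larger |Φ| than any noise window, so a Γ built from Φ smooths noise more strongly than edges.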
E. Color Correction
After noise reduction and contrast enhancement in the wavelet domain, we perform inverse DT-CWT to
reconstruct the luminance image as output. Denote the luminance image by Ye. We obtain the output color image by
$$ \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = \begin{bmatrix} Y_e/Y & 0 & 0 \\ 0 & Y_e/Y & 0 \\ 0 & 0 & Y_e/Y \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (10) $$
where [R’,G’,B’] and [R,G,B] represent the color channels of the output and input color images, respectively.
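Equation (10) amounts to scaling each color channel by the per-pixel luminance gain Ye/Y. A small sketch (the eps guard against division by zero in black pixels is my addition):

```python
import numpy as np

def color_correct(rgb, y_in, y_out, eps=1e-6):
    """Eq. (10): scale every RGB channel by the luminance gain Ye/Y.
    eps (not in the paper) avoids division by zero for black pixels.
    """
    gain = y_out / (y_in + eps)          # per-pixel gain Ye/Y
    return rgb * gain[..., None]         # broadcast over the 3 channels

rgb = np.array([[[40.0, 20.0, 10.0]]])   # one pixel
y_in = np.array([[20.0]])
y_out = np.array([[60.0]])               # enhanced luminance: gain of 3
out = color_correct(rgb, y_in, y_out)    # -> approximately [120, 60, 30]
```

Because all three channels share one gain, hue ratios R:G:B are preserved while brightness follows the enhanced luminance.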
III. EXPERIMENTAL RESULTS
To verify the superiority of the proposed method, we perform experiments on a PC with an Intel Core i5 3.20 GHz
CPU and 4.00 GB RAM using MATLAB. For the tests, we use the test images from Memorial dataset [28],
Greenwich dataset [29], and Eden project multi-sensor dataset [30], which represent a wide range of lighting
conditions. First, we compare the performance of the proposed method with those of state-of-the-art denoising
methods of BM3D [31], NLM [32], Bivariate shrinkage denoising [33], and soft-threshold denoising [34]. We set
Y’max to 255, b to 1.2 in (3), and σ to 3 in (7). Fig. 5 shows the enhancement results of the proposed method on
low-light images. Second, we compare the performance of the proposed method with those of five contrast
enhancement methods: CLAHE [21], HMF [9], OCTM [10], AGCWD [17], and ACE [18].
A. Noise Reduction
We provide denoising results of the proposed method in Fig. 5 on five low-light images: Greenwich, Landrover,
Memorial, Ramp, and Dusk. Notice that we add additive Gaussian noise with σ = 10 to the test images to verify the
denoising performance. It can be observed that the noise is also enhanced after contrast enhancement if there is no
denoising operation (see the 3rd row). However, the denoising operation in the proposed method effectively
suppresses noise and produces satisfactory results (see the 4th row). It can be observed from Fig. 5(a) that noise and
artifacts in the sky and lake are successfully reduced by the proposed method. In Fig. 5(b), the proposed method
reduces noise in the people and car while successfully preserving details. In Figs. 5(c) and 5(d), noise and artifacts in
the whole image are well suppressed; however, some textures are lost. However, in Fig. 5(e), the image is so dark
and the signal-to-noise ratio is too low, and thus the denoising effect of the proposed method is not obvious. For
more quantitative measurements, we measure peak signal-to-noise ratio (PSNR) and root-mean-square error (RMSE).
Fig. 5 Denoising results of the proposed method on five low-light images: (a) Greenwich, (b) Landrover, (c)
Memorial, (d) Ramp, and (e) Dusk. 1st row: Original test images. 2nd row: Noisy images with σ = 10. 3rd row: Contrast
enhancement results without denoising. 4th row: Contrast enhancement results with denoising.
The PSNR is the ratio between the reference signal and the distortion signal in an image (unit: dB). The higher the
PSNR, the closer the distorted image is to the original one. PSNR is defined as follows:
$$ \mathrm{PSNR} = 10 \log_{10} \frac{\mathrm{MAX}^2}{\mathrm{MSE}} \qquad (11) $$
where MSE is the mean-square error between the original and the distorted image, and MAX is the maximum pixel
value of the image, i.e. MAX = 255. We compare the performance of PSNR and RMSE with those of BM3D [31],
NLM [32], Bivariate shrinkage denoising [33], soft-threshold denoising [34], and the proposed method in TABLE I
and TABLE II. In the experiments, we add additive Gaussian noise with σ=10 in the test images.
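PSNR in (11) can be computed directly; a short sketch:

```python
import numpy as np

def psnr(ref, dist, max_val=255.0):
    """Eq. (11): PSNR = 10*log10(MAX^2 / MSE), in dB."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((8, 8))
dist = ref + 10.0          # constant error of 10 gray levels -> MSE = 100
val = psnr(ref, dist)      # 10*log10(255^2 / 100) ≈ 28.13 dB
```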
As shown in the two tables, the proposed method performs worse than BM3D, which achieves the best performance
in both PSNR and RMSE. NLM removes noise based on patch
similarities. The denoising performance of NLM is very good while successfully preserving details and fine
structures. The complexity of NLM in an image is O(N²), where N is the number of pixels in the image. BM3D
is a state-of-the-art denoising method that assumes an image has a locally sparse representation in the transform domain.
However, its complexity is much higher than that of NLM. In contrast, the computational complexity of the proposed
method is O(N·R²), where R is the window size, which is much smaller than those of NLM and BM3D. Thus, the
proposed method achieves a significant improvement in computational complexity over NLM and BM3D.
B. Contrast Enhancement
Figs. 6-10 show experimental results in Greenwich, Landrover, Memorial, Ramp, and Dusk, respectively. We
provide the contrast enhancement results by CLAHE [21], HMF [9], OCTM [10], AGCWD [17], ACE [18], and the
proposed method. CLAHE, HMF, and OCTM improve visual quality of low-light images to some extent. However,
the enhancement results are not enough to reveal details and textures in images. Moreover, their colors are distorted,
thus not natural-looking (see Figs. 6-10 (b)-(d)). CLAHE improves the contrast of images by limiting the local
contrast gain and restricting the maximum number of pixels at a grayscale level. However, it often produces noise in
results, and its brightness enhancement is not sufficient to reveal the detail information (see Figs. 6-10 (b)). HMF
provides a general histogram modification framework to find an optimal tradeoff between the original histogram and
the uniformly distributed histogram, which adjusts the enhancement degree as shown in Figs. 6-10 (c). Compared
with CLAHE, HMF produces a better visual quality in most images (see Figs. 6 (b) and (d)), but produces noise and
color distortions in the results (see Fig. 6 (c)). The experimental results of OCTM, which formulated contrast
enhancement as an optimal allocation of an output dynamic range to maximize contrast gains while minimizing tone
distortions, are shown in Figs. 6-10 (d). However, OCTM produces noise when the whole brightness of the
enhanced images is somewhat dark, especially in Figs. 7(d) and 10(d). AGCWD often generates too dark regions in
the results, and is not effective in preserving detail information (see Figs. 6-7(e)). ACE, one of the state-of-the-art
methods, successfully performs contrast enhancement and noise reduction by utilizing the properties of DT-CWT. It
generates a more visually pleasing result than previous ones, but is not effective in image denoising (Figs. 7-10(f)).
Figs. 6-10(g) show the experimental results by the proposed method. As shown in Fig. 8(g), the proposed method
outperforms the others in noise reduction while enhancing contrast. Moreover, the whole brightness and color of the
enhanced images are more natural-looking as shown in Figs. 6-10(g). That is, the readability of the enhanced images
is greatly improved by the proposed method, and its visual effect is better than that of the other methods. For more
quantitative measurements, we provide performance evaluation results on them in TABLE III in terms of discrete
entropy (DE) [35], absolute mean brightness error (AMBE) [36], and colorfulness metric (CM) [37].
▪ DE computes the amount of information in an image and higher DE indicates that the image contains more
details as follows:
$$ H = -\sum_{i=0}^{L-1} p(i) \log_2 p(i) \qquad (12) $$
▪ AMBE is the absolute difference between input and output means as follows:
$$ \mathrm{AMBE}(X, Y) = \left| X_M - Y_M \right| \qquad (13) $$
where XM is the mean of the input image X and YM is the mean of the output image Y.
(a) (b) (c) (d) (e) (f) (g)
Fig. 6 Experimental results in Greenwich. (a) Original image. (b) CLAHE [21]. (c) HMF [9]. (d) OCTM [10]. (e)
AGCWD[17]. (f) ACE [18]. (g) Proposed method.
▪ CM measures the perceived color of an image, and is computed based on the means and standard deviations of
two parameters α = R − G and β = 0.5(R + G) − B [37] as follows:

$$ \mathrm{CM} = \sqrt{\sigma_\alpha^2 + \sigma_\beta^2} + 0.3\sqrt{\mu_\alpha^2 + \mu_\beta^2} \qquad (14) $$

where σ_α and σ_β are the standard deviations, and μ_α and μ_β are the means, of α and β, respectively.
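The three metrics can be written as short functions; the CM form below follows the standard Hasler and Susstrunk colorfulness formula, which is assumed here to be the metric of [37]:

```python
import numpy as np

def discrete_entropy(img, levels=256):
    """DE, eq. (12): -sum p(i) log2 p(i) over the gray-level histogram."""
    hist = np.bincount(img.ravel().astype(np.int64), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0*log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

def ambe(x, y):
    """AMBE, eq. (13): |mean(X) - mean(Y)|."""
    return abs(float(x.mean()) - float(y.mean()))

def colorfulness(rgb):
    """CM from alpha = R-G, beta = 0.5(R+G)-B (Hasler-Susstrunk form,
    assumed here to match metric [37])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    alpha, beta = r - g, 0.5 * (r + g) - b
    return float(np.hypot(alpha.std(), beta.std())
                 + 0.3 * np.hypot(alpha.mean(), beta.mean()))

flat = np.full((8, 8), 100, dtype=np.uint8)   # one gray level -> DE = 0
halves = np.concatenate([np.zeros(32), np.full(32, 255)]).astype(np.uint8)
```

A flat image carries zero entropy, a two-level image carries exactly one bit, and a gray image (R = G = B) has zero colorfulness, which makes these small cases convenient sanity checks.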
As listed in TABLE III, our DE performance is better than that of the existing methods HMF, OCTM, and AGCWD.
That is, the proposed method successfully preserves detail information in the enhanced images. Moreover, it can be
observed that the AMBE values of the proposed method are much higher than those of the other methods, which
indicates that our method remarkably promotes the brightness of the image. Since color is one of the main elements in performance
evaluation, we use the colorfulness metric to evaluate the color fidelity. The higher the value is, the better the color is.
As shown in TABLE III, the colorfulness value of the proposed method is much higher than the other ones, i.e., our
method ensures the fidelity of the perceived color in brightness enhancement. Also, we provide boxplots on the
results in Fig. 11 to show the statistics of the evaluation results. As shown in the figure, the proposed method
achieves better performance than the others in terms of DE, AMBE, and Colorfulness. Consequently, we can safely
conclude that the proposed method outperforms the other ones in terms of information preservation, brightness promotion, and color reproduction.
TABLE III Objective Evaluation Results of Test Images in terms of DE, AMBE, and Colorfulness.
Metric        Method       Greenwich  Landrover  Memorial  Ramp     Dusk     Average
DE            CLAHE [21]   6.7968     6.7786     6.3573    6.4552   4.9681   6.2712
              HMF [9]      6.0602     5.4844     4.9654    5.0402   3.8718   5.0844
              OCTM [10]    6.0552     5.4751     4.9658    5.0440   3.8744   5.0829
              AGCWD [17]   6.0539     5.5050     4.9678    4.9953   3.8731   5.0790
              ACE [18]     6.9756     6.9085     6.5708    7.6551   5.9319   6.8084
              Proposed     6.3414     6.9111     6.7675    8.7932   5.7185   6.9063
AMBE          CLAHE [21]   25.0019    25.1544    22.2434   24.5481  13.0209  21.9940
              HMF [9]      26.4890    27.5763    26.0346   28.6480  42.8662  30.3230
              OCTM [10]    35.6452    16.2407    14.8990   15.1529  13.0288  18.9930
              AGCWD [17]   25.9751    13.4028    10.5051   8.3792   12.7918  14.2110
              ACE [18]     40.8979    25.2706    18.1405   13.0547  14.0468  22.2820
              Proposed     55.4501    35.5396    23.7590   60.5616  21.1726  39.2966
Colorfulness  CLAHE [21]   0.1482     15.2903    15.8767   0.0574   0.0625   6.2872
              HMF [9]      17.7825    16.0737    12.9755   10.3960  15.3751  14.5201
              OCTM [10]    19.8196    11.3177    13.8401   5.6347   6.2456   11.1715
              AGCWD [17]   18.0032    9.5358     12.5439   10.3034  7.1217   11.5016
              ACE [18]     23.5340    19.2529    19.4393   18.9913  8.7496   15.9934
              Proposed     48.3908    20.8678    29.5054   20.7412  8.5688   25.6148
(a) (b) (c)
Fig. 11 Box plots on the statistics of the evaluation results in TABLE III. (a) DE. (b) AMBE. (c) Colorfulness.
IV. CONCLUSIONS
We have proposed low light image enhancement based on DT-CWT. Low light images have low dynamic range,
low contrast, and much noise. Considering the properties of low light images, we have employed DT-CWT to
decompose the input image into low-pass subbands and high-pass subbands. Then, we have performed TV diffusion
in high-pass subbands to suppress noise. We have combined TV diffusion with multi-resolution decomposition to
control the diffusion degree based on the content information. Moreover, we have utilized CLAHE in the low-pass
subbands to improve the contrast of the image while preserving the details in high-pass subbands. The experimental
results demonstrate that the proposed method successfully reproduces informative, natural-looking, and visually pleasing
images on low light images, and achieves a good performance in terms of both noise reduction and contrast
enhancement.
In the proposed method, we have performed noise reduction in the luminance channel. However, a small amount of
noise remains in the chrominance channels. Thus, our future work includes investigating color distortions
caused by noise.
REFERENCES
[1] R. C. Bilcu and M. Vehvilainen, "A novel tone mapping method for image contrast enhancement," Proceeding of
International Symposium on Image and Signal Processing and Analysis, pp. 268-273, 2007.
[2] X. D. Zhang, P. Y. Shen, and L.-L. Luo, "Enhancement and noise reduction of very low light level images," Proceedings of
[3] K. Gu, G. Zhai, and X. Yang, "Automatic contrast enhancement technology with saliency preservation," IEEE Trans.
Circuits Sys. Video Technol., vol. 25, no. 9, pp. 1480-1494, 2014.
[4] Y.-T. Kim, "Contrast enhancement using brightness preserving bi-histogram equalization," IEEE Trans. Consumer Electron,
[5] Y. Wan, Q. Chen, and B. Zhang, "Image enhancement based on equal area dualistic sub-image histogram equalization
method," IEEE Trans. Consumer Electron, vol. 45, no. 1, pp. 68-75, Feb.1999.
[6] S.-D. Chen and A.-R. Ramli, "Minimum mean brightness error bi-histogram equalization in contrast enhancement," IEEE
[7] Q. Wang and R. K. Tan, "Fast image/video contrast enhancement based on weighted thresholded histogram equalization,"
IEEE Trans. Consumer Electron, vol. 53, no. 2, pp. 757-764, Jul.2007.
[8] C. Wang, J. Peng, and Z. Ye, "Flattest histogram specification with accurate brightness preservation," IET Image Processing,
[9] T. Arici, S. Dikbas, and Y. Altunbasak, "A histogram modification framework and its application for image contrast
enhancement," IEEE Trans. Image Processing, vol. 18, no. 9, pp. 1921-1935, Sep. 2009.
[10] X. Wu, "A linear programming approach for optimal contrast-tone mapping," IEEE Trans. Image Processing, vol. 20, no. 5,
[11] J. J. Cheng, X. F. Lv, and Z. X. Xie, "A predicted compensation model of human vision system for low-light image,"
Proceedings of International Congress on Image and Signal Processing (CISP 2010), pp. 605-609, 2010.
[12] H. Malm, M. Oskarsson, E. Warrant, "Adaptive enhancement and noise reduction in very low light-level video,"
Proceedings of IEEE International Conference on Computer Vision (ICCV), pp. 1-8, Oct. 2007.
[13] Y. Chen, W. Lin, C. Zhang, Z. Chen, N. Xu, J. Xie, "Intra-and-inter-constraint-based video enhancement based on piecewise
tone mapping," IEEE Trans. Circuits and Systems for Video Technology, vol. 23, no. 1, pp. 74-82, 2013.
[14] Y. Rao, W. Lin, and L. Chen, "A global-motion-estimation-based method for nighttime video enhancement," Optical
Engineering.
[15] Y. Rao, W. Lin, and L. Chen, "Image-based fusion for video enhancement of nighttime surveillance," Optical Engineering, vol.
[16] W. Yin, X. Lin, and Y. Sun, "A novel framework for low-light colour image enhancement and denoising," Proceedings of
International Conference on Awareness Science and Technology (iCAST), pp. 20-23, 27-30 Sep. 2011.
[17] S.-C. Huang, F.-C. Cheng, and Y.-S. Chiu, "Efficient contrast enhancement using adaptive gamma correction with
weighting distribution," IEEE Trans. Image Processing, vol. 22, no. 3, pp. 1032-1041, Mar. 2013.
[18] A. Loza, D. Bull, and A. Achim, "Automatic contrast enhancement of low-light images based on local statistics of wavelet
coefficients," Proceedings of International Conference on Image Processing (ICIP), pp. 3553-3556, Sep. 2010.
[19] G. R. Easley, D. Labate, and F. Colonna, "Shearlet-based total variation diffusion for denoising," IEEE Trans. Image
Processing, vol. 18, no. 2, pp. 260-268, Feb. 2009.
[20] I. W. Selesnick, R. G. Baraniuk, and N. C. Kingsbury, "The dual-tree complex wavelet transform," IEEE Signal Processing
Magazine, vol. 22, no. 6, pp. 123-151, Nov. 2005.
[21] S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, T. Greer, B. H. Romeny, J. B. Zimmerman, and K. Zuiderveld,
"Adaptive histogram equalization and its variations," Comput. Vision, Graphics and Image Processing, vol. 39, no. 2, pp.
355-368, 1987.
[22] N. Kingsbury, "The dual-tree complex wavelet transform: a new efficient tool for image restoration and enhancement," Proc.
European Signal Processing Conference (EUSIPCO), Sep. 1998.
[23] T. Celik and T. Tjahjadi, "Image resolution enhancement using dual-tree complex wavelet transform," IEEE Geoscience
and Remote Sensing Letters, vol. 7, no. 3, pp. 554-557, Jul. 2010.
[24] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear
Phenomena, vol. 60, no. 1-4, pp. 259-268, 1992.
[25] P.-S. Tsai, C-K Liang, and T.-H. Huang, "Image enhancement for backlight-scaled TFT-LCD Displays," IEEE Trans.
Circuits and Systems for Video Technology, vol. 19, no. 4, pp. 574-583, Apr. 2009.
[26] C.-H. Lee, P.-Y. Lin, L.-H. Chen, and W.-K. Wang, "Image enhancement approach using the just-noticeable-difference
model of the human visual system," Journal of Electronic Imaging, vol. 21, no. 3, pp. 033007, Jul.-Sep. 2012.
[27] T.-H. Huang, K.-T. Shih, S.-L. Yeh, and H. Chen, "Enhancement of backlight-scaled images," IEEE Trans. Image
Processing.
[28] C. Tchou and P. Debevec, "HDR shop (dataset)," [online], Available: http://projects.ict.usc.edu/graphics/HDRShop, 2009.
[30] The Eden Project multi-sensor data set, [online], Available: http://www.imagefusion.org/, 2006.
[31] M. Lebrun, "An analysis and implementation of the BM3D image denoising method," Image Processing On Line, vol. 2, pp.
175-213, 2012.
[32] A. Buades, B. Coll, and J.-M. Morel, "A non-local algorithm for image denoising," Proc. IEEE Computer Vision and
Pattern Recognition (CVPR), vol. 2, pp. 60-65, Jun. 2005.
[33] L. Sendur and I.W. Selesnick, "Bivariate shrinkage functions for wavelet-based denoising exploiting interscale dependency,"
IEEE Transactions on Signal Processing, vol. 50, no. 11, pp. 2744-2756, 2002.
[34] K. Berkner and R. O. Wells, "Smoothness estimates for soft-threshold denoising via translation-invariant wavelet
transforms," Applied and Computational Harmonic Analysis, vol. 12, pp. 1-24, 2002.
[35] C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., vol. 27, pp. 379-423, 623-656, Jul.-Oct.
1948.
[36] S.-D. Chen and A. R. Ramli, "Minimum mean brightness error bi-histogram equalization in contrast enhancement," IEEE
Trans. Consumer Electron., vol. 49, no. 4, pp. 1310-1319, Nov. 2003.
[37] S. Susstrunk and S. Winkler, "Color image quality on the internet," Proc. IS&T/SPIE Electronic Imaging 2004: Internet
Imaging V, Jan. 2004.
BIOGRAPHIES
Cheolkon Jung received the B.S., M.S., and Ph.D. degrees in Electronic Engineering from Sungkyunkwan University, Republic
of Korea, in 1995, 1997, and 2002, respectively. He was with the Samsung Advanced Institute of Technology (Samsung
Electronics), Republic of Korea, as a research staff member from 2002 to 2007. He was a research professor in the School of
Information and Communication Engineering at Sungkyunkwan University, Republic of Korea, from 2007 to 2009. Since 2009,
he has worked for the School of Electronic Engineering at Xidian University, China, as a professor. His main research interests
include computer vision, pattern recognition, machine learning, image and video processing, multimedia content analysis and
Qi Yang received the B.S. degree in Electronic Engineering from Xidian University, China, in 2014. She is currently pursuing
her M.S. degree at the same university. Her research interests include color image processing and display technology.
Tingting Sun received the B.S. degree in Electronic Engineering from Hebei University, China, in 2013. She is currently
pursuing her M.S. degree at the same university. Her research interests include color image processing and display technology.
Qingtao Fu received the B.S. degree in Telecommunication Engineering and the M.S. degree in Information and
Communication Engineering from Xidian University, China, in 2012 and 2015, respectively. He is currently pursuing his Ph.D.
degree at the same university. His research interests include image processing and video coding.
Hyoseob Song received the B.S. and M.S. degrees in Information Engineering from Korea University, Republic of Korea, in
1996, and 1998, respectively. He was a mobile software developer at Pantech, Republic of Korea, from 2004 to 2010. Currently,
he is working for Samsung SDS as a senior engineer. His main research interests include computer vision, pattern recognition,
Research highlights