1. Introduction

Low visibility underwater images strongly affect the development and utilization of the abundant mineral and biological resources in oceans. With the development of underwater exploration, underwater image and video processing technology has great potential in computer vision [1–3]. Unfortunately, as shown in Fig. 1, underwater images face two major challenges, color distortion and low contrast, due to the scattering and absorption of light in water, which reduces the ability to obtain valuable information and is not conducive to further processing of underwater images.

To address these problems, our previous work [5] reported that various underwater image restoration and enhancement methods have been proposed to improve the visibility of underwater images. However, the existing methods sometimes fail to highlight the details of images captured in deep-sea environments [6–8], and some methods are prone to introduce red artifacts into enhanced images [9–11]. Therefore, an effective method is needed to handle the aforementioned problems.

In this paper, a novel method of color correction and Bi-interval contrast enhancement for underwater images is proposed. The proposed method has two main tasks: correcting image color on the one hand, and improving image visibility on the other, targeting the two major issues of color distortion and low visibility. Firstly, a sub-interval linear transformation strategy is used to correct color. Then, we propose a Bi-interval histogram based on an optimal equalization threshold strategy and an S-shaped function to enhance the low-frequency and high-frequency components, respectively, obtained by applying a Gaussian low-pass filter to the L channel. Inspired by the multi-scale fusion strategy [9], we employ a simple linear fusion to integrate the enhanced high- and low-frequency components. Extensive comparison experiments are performed to evaluate the performance of the proposed method in terms of enhancement quality, time efficiency, and applicability. The contributions of this paper are summarized as follows:

(1) A novel underwater image enhancement method based on color correction and contrast enhancement is presented, which can effectively reduce color distortion, improve image visibility, and highlight image details.

(2) A simple and efficient sub-interval linear transformation strategy is used to adjust the histogram distributions of the three channels, suppressing the low and high pixel areas and stretching the middle pixel area according to the limited ratios of the red, green, and blue channels.

(3) A Bi-interval histogram based on an optimal equalization threshold is applied to improve the contrast of the low-frequency component of the L channel. Additionally, an S-shaped function is used to highlight the details of the high-frequency component of the L channel.

2. Related works

In this section, we provide an overview of related work from three aspects: underwater image restoration methods, underwater image enhancement methods, and deep learning methods.
∗ Corresponding author.
E-mail address: [email protected] (L. Dong).
https://doi.org/10.1016/j.image.2020.116030
Received 3 February 2020; Received in revised form 11 August 2020; Accepted 3 September 2020
Available online 20 October 2020
0923-5965/© 2020 Elsevier B.V. All rights reserved.
W. Zhang, L. Dong, T. Zhang et al. Signal Processing: Image Communication 90 (2021) 116030
Fig. 1. Examples of different underwater images degraded by selective absorption and scattering of light.
Source: These degraded underwater images are from UIEB dataset [4].
2.1. Underwater image restoration methods

Underwater image restoration methods estimate imaging model parameters based on captured images and invert the degradation process to restore high-quality images. The results obtained by underwater optical imaging-based methods [12–14] and polarization characteristics-based methods [15–17] are close to the ground truth in terms of image content, but these methods require specialized hardware devices and their modeling parameters are complex. Prior information-based methods [6–8,18–20] are inspired by the dark-channel prior (DCP) [21]. Galdran et al. [18] proposed an underwater image restoration method based on the red channel to recover color and improve visibility. Li et al. [6,19] combined an underwater image defogging method with a histogram distribution prior. Drews et al. [7] proposed an underwater dark-channel prior (UDCP) by modifying the DCP [21]. However, the UDCP is not always effective when there are white objects or artificial light in the underwater environment. Peng et al. [8] proposed a method based on image blurriness and light absorption to restore underwater images. Peng et al. [20] proposed a generalized dark channel prior method for single image restoration. Zhou et al. [22] used a color-lines model to restore underwater images. Berman et al. [23] used haze-lines and a new quantitative dataset to restore underwater images. Song et al. [24] proposed a statistical model of background light and an optimization of the transmission map to enhance underwater images. However, the acquisition of prior information depends on the underwater imaging environment. These restoration-based methods are sensitive to imaging assumptions and model parameters, so they may not be applicable in a strongly changing underwater environment.

2.2. Underwater image enhancement methods

Underwater image enhancement methods improve the quality of images by modifying pixel values. Iqbal et al. [25] proposed an unsupervised color correction method for underwater images. Ancuti et al. [26] presented a fusion method for enhancing underwater images and videos. Fu et al. [9] proposed a Retinex-based method for color correction and contrast enhancement of underwater images. Li et al. [10] proposed a hybrid underwater image enhancement method by introducing a color correction method [9]. Fu et al. [11] proposed a two-step method to enhance underwater images. Ghani et al. [27] modified their previous work [28] and proposed a recursive adaptive histogram method for improving underwater image contrast and color. Zhang et al. [29] proposed an underwater image enhancement method based on extended multi-scale Retinex. Dong et al. [30] proposed a Multi-Channel Convolutional MSRCR-based image defogging method and applied it to underwater image enhancement. Ghani et al. [31] proposed a natural-based underwater image color enhancement method consisting of four main steps. Ancuti et al. [4] presented a color balance and fusion method to enhance underwater images. Ancuti et al. [32] proposed a color channel compensation method as an image pre-processing step for various scenes. Gao et al. [33] proposed a method based on adaptive retinal mechanisms to enhance underwater images. Zhang et al. [34] proposed global and local histogram equalization with multi-scale fusion for underwater image enhancement. These enhancement-based methods have obvious advantages in improving the contrast and brightness of underwater images, and they are faster and simpler than restoration methods.

2.3. Deep learning methods

Recently, deep learning has received wide attention for visual tasks and has been applied to a variety of them [35–41]. Deep-learning-based methods have also gradually been applied to underwater image enhancement [42–51]. Li et al. [42] proposed a weakly supervised color transfer for underwater image enhancement. A multiscale dense generative adversarial network was proposed in [43]. Chen et al. [44] proposed a GAN-based restoration network, which achieved comprehensively superior performance in terms of visual quality and feature restoration. Ding et al. [45] applied a joint adversarial network to wavelength compensation and dehazing of underwater images, and established underwater image datasets for different water types using the NYU-depth2 dataset [46]. Islam et al. [47] presented a conditional generative adversarial network-based method for underwater image enhancement and constructed a large-scale dataset of paired and unpaired underwater images. Li et al. [48] designed an underwater image enhancement convolutional neural network based on an underwater scene prior and proposed a new underwater image synthesis method. Fu et al. [49] employed global–local networks and compressed-histogram equalization for underwater image enhancement. Yang et al. [50] presented a conditional generative adversarial network method for enhancing underwater images. Moreover, Li and Anwar [51] provided a comprehensive overview of deep-learning-based underwater image enhancement methods. Additionally, Li et al. [52] and Liu et al. [53] each constructed a real-world underwater image enhancement benchmark dataset. However, these deep-learning-based methods are data-driven, and their network structures are complex and variable.

3. Proposed method

In this paper, we present a novel underwater image enhancement method based on color correction and Bi-interval contrast enhancement. The proposed method first performs color correction and then proceeds from the perspective of contrast enhancement and detail sharpening. It is divided into three main steps: color correction; decomposition of the L channel of the color-corrected image into high-frequency and low-frequency components; and contrast enhancement of the low-frequency component together with detail sharpening of the high-frequency component. An overview of the proposed method is shown in Fig. 2.
3.1. Color correction

Since the attenuation of green light is less than that of red and blue light, and water bodies usually have a blue–green appearance, most captured underwater images appear blue or green. To solve this color shift, the methods of Fu et al. [9,11] and Li et al. [10] have demonstrated the effectiveness of color correction. However, these methods do not fully consider the characteristics of color degradation in underwater scenes: when a method based on the mean value and mean square error is used to compensate color, it easily causes red-channel overcompensation. Therefore, a color correction method based on sub-interval linear transformation is proposed. Defining I as the input image, the color correction process is as follows. First, the total pixel values of the red, green, and blue channels are calculated:

Sum_R = Σ_{i=1}^{M×N} I_R(x)   (1)

Sum_G = Σ_{i=1}^{M×N} I_G(x)   (2)

Sum_B = Σ_{i=1}^{M×N} I_B(x)   (3)

where M×N represents the total number of pixels of a single-channel image. Meanwhile, the ratios of the red, green, and blue channels are calculated as:

P_R = Max{Sum_R, Sum_G, Sum_B} / Sum_R   (4)

P_G = Max{Sum_R, Sum_G, Sum_B} / Sum_G   (5)

P_B = Max{Sum_R, Sum_G, Sum_B} / Sum_B   (6)

where Max is the function taking the maximum value, and P_R, P_G, and P_B represent the ratios of the maximum channel pixel sum to each of the three channel sums. In order to divide each channel into three intervals, two cut-off ratios r_1^c and r_2^c need to be defined:

r_1^c = α1 × P_c   (7)

r_2^c = α2 × P_c   (8)

where c ∈ {R, G, B}, and α1 and α2 are constants in (0, 1). Then, the cut-off thresholds t_1^c and t_2^c corresponding to r_1^c and r_2^c are determined by (9) and (10) according to the lower quantile function [54]:

t_1^c = F(I_c(x), r_1^c)   (9)

t_2^c = F(I_c(x), r_2^c)   (10)

where F is the lower quantile function. To suppress the shadow and highlight values effectively, the following operation is performed for each color channel:

I_out^c(x) = { t_1^c,  if I_c(x) < t_1^c;   t_2^c,  if I_c(x) > t_2^c }   (11)

Finally, a linear operation is performed on the pixel values of the middle region:

I_CR^c(x) = (I_out^c(x) − t_1^c) / (t_2^c − t_1^c) × 255   (12)

where I_CR^c(x) is the color-corrected image. The sub-interval linear transformation strategy effectively suppresses the shadow and highlight values while stretching the middle-area pixels well. In other words, this method achieves color correction by adjusting the histograms of the three channels. The color correction method was applied to the UIEB dataset [52]; based on large-scale statistical analysis, it performs well when α1 and α2 are set to 0.001 and 0.995.

Fig. 3 shows the tricolor histograms of the original image and the corrected image. The cut-off thresholds of the shadow and highlight values for the three channels are determined as t_1^R = 58, t_2^R = 197, t_1^G = 83, t_2^G = 239, t_1^B = 61, and t_2^B = 183 by Eqs. (9) and (10) when α1 = 0.001 and α2 = 0.995. As shown in Fig. 3(a), the initial image has poor visual quality, low contrast, color distortion, and blur; moreover, the histogram distributions of the red, green, and blue channels are concentrated. In contrast, the corrected image shown in Fig. 3(b) has high visibility and well-corrected color, and the histogram distributions of the red, green, and blue channels are more uniform. Nevertheless, the color-corrected image still suffers from under-exposure and blurred details; the following sections focus on solving these two problems.

3.2. Decomposing the low-frequency and the high-frequency

For color-corrected underwater images, the problems of under-exposure and blurred details remain. The focus of this section is on solving these two issues without affecting the color of the corrected image.
Fig. 3. Color correction. (a) and (b) are tricolor histograms of the initial image and the color corrected image, respectively.
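The sub-interval linear transformation of Section 3.1 (Eqs. (1)–(12)) can be sketched in a few lines of numpy. This is a sketch of our reading of the method, not the authors' code: `np.quantile` stands in for the lower quantile function F of Eqs. (9)–(10), and cut-off ratios above 1 are clamped to 1, an assumption the paper does not spell out.

```python
import numpy as np

def color_correct(img, alpha1=0.001, alpha2=0.995):
    """Sub-interval linear transformation (sketch of Eqs. (1)-(12)).

    img: H x W x 3 uint8 RGB image; returns a color-corrected uint8 image.
    """
    img = img.astype(np.float64)
    out = np.empty_like(img)
    sums = img.reshape(-1, 3).sum(axis=0)            # Sum_R, Sum_G, Sum_B (Eqs. (1)-(3))
    p = sums.max() / sums                            # channel ratios P_c (Eqs. (4)-(6))
    for c in range(3):
        r1, r2 = alpha1 * p[c], alpha2 * p[c]        # cut-off ratios (Eqs. (7)-(8))
        # lower-quantile cut-off thresholds (Eqs. (9)-(10)); ratios are clamped
        # to [0, 1], an assumption for channels whose ratio exceeds 1
        t1 = np.quantile(img[..., c], min(r1, 1.0))
        t2 = np.quantile(img[..., c], min(r2, 1.0))
        if t2 <= t1:                                 # degenerate (near-constant) channel
            out[..., c] = img[..., c]
            continue
        ch = np.clip(img[..., c], t1, t2)            # suppress shadows/highlights (Eq. (11))
        out[..., c] = (ch - t1) / (t2 - t1) * 255.0  # stretch the middle interval (Eq. (12))
    return out.astype(np.uint8)
```

With α1 = 0.001 and α2 = 0.995, the values reported above, roughly the darkest and brightest fractions of each channel are clamped and the remaining middle interval is stretched to [0, 255].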
Since the low-frequency component mainly contains the main information of the image, while the high-frequency component mainly contains its texture and edge information, we use different enhancement strategies for the low- and high-frequency components to address under-exposure and blurred details.

Since the brightness channel L and the color channels A and B are independent of each other in the CIELAB color space [29], we first convert the corrected image I_CR^RGB from the RGB color space to the image I_CR^LAB in the LAB color space. Then, a Gaussian low-pass filter is defined as follows:

G(x, y) = (1 / (2πσ²)) exp( −((x − x_center_w)² + (y − y_center_w)²) / (2σ²) )   (13)

where w denotes the window size of the filter and σ denotes the variance of the filter. G(x, y) is applied to the L channel I_CR^L of image I_CR^LAB to obtain the low-frequency component I_Low^L of the channel:

I_Low^L(σ) = I_CR^L(σ) ∗ G(x, y)   (14)

Finally, the high-frequency component I_High^L is obtained by formula (15):

I_High^L = I_CR^L − I_Low^L   (15)

3.3. Dealing with under-exposure and blurred details

In order to improve the under-exposure and blurry details of underwater images, different enhancement operations should be performed according to the characteristics of the low- and high-frequency components. In this section, a Bi-interval histogram based on an optimization threshold strategy is used to enhance the low-frequency component to address under-exposure, and an S-shaped function is used to enhance the high-frequency component to address detail blur.

(1) Bi-interval equalization applied to low-frequency components

Typically, an underwater image under a normal light source is composed of a background image and a foreground image. According to this characteristic, the low-frequency component I_Low^L is segmented into a background sub-image and a foreground sub-image by the maximum between-cluster variance method [55]. In our work, we first use the median T_med = (T_min + T_max)/2 as the initial threshold to divide the image into two categories. Then, Eq. (16) is used as the fitness function to calculate their variance:

function = w_1 (u_1 − u)² + w_2 (u_2 − u)²   (16)

where w_1 and w_2 are the probabilities of all gray values in the first and second categories, respectively; u_1 and u_2 are the averages of all gray values in the first and second categories, respectively; and u is the average value of the overall grayscale. The greater the variance, the greater the difference between the target and the background, and the more reasonable the selected threshold. Therefore, the threshold is gradually varied over the whole gray-level range, and the optimal segmentation threshold T_opt is obtained where the variance is largest. T_opt is also called the optimal threshold of the Bi-interval, and [0, 255] = [0, T_opt) ∪ [T_opt, 255]. Therefore, the input image can be divided into a background sub-image I_Low^D and a foreground sub-image I_Low^U as follows:

I_Low^D = { I_Low^L(i, j) | I_Low^L(i, j) < T_opt, ∀ I_Low^L(i, j) ∈ I_Low^L }   (17)

I_Low^U = { I_Low^L(i, j) | I_Low^L(i, j) ≥ T_opt, ∀ I_Low^L(i, j) ∈ I_Low^L }   (18)

Suppose that the background sub-image I_Low^D is composed of { I_Low_0^L, I_Low_1^L, …, T_opt } and the foreground sub-image I_Low^U is composed of { T_opt+1, T_opt+2, …, I_Low_{N−1}^L }. Then, the probabilities of sub-images I_Low^D and I_Low^U appearing in the sub-histograms of the low and high intervals are
defined as P_D(I_Low^L) and P_U(I_Low^L), respectively, which are expressed as follows:

P_D(I_Low^L) = H(I_Low^L) / N_D   (19)

P_U(I_Low^L) = H(I_Low^L) / N_U   (20)

where H(I_Low^L) represents the frequency of occurrence of grayscale I_Low^L, and N_D and N_U represent the total numbers of pixels of the background sub-image and the foreground sub-image, respectively. The cumulative probability density functions of the background sub-image and the foreground sub-image are defined as C_D(I_Low^L) and C_U(I_Low^L), which are expressed as follows:

C_D(I_Low^L) = Σ_{I_Low^L = 0}^{T_opt − 1} P_D(I_Low^L)   (21)

C_U(I_Low^L) = Σ_{I_Low^L = T_opt}^{255} P_U(I_Low^L)   (22)

Finally, the background sub-image and foreground sub-image are equalized by Eq. (23) to solve the under-exposure of the image:

f_D = I_Low_0^L + (I_Low_m^L − I_Low_0^L) × C_D(I_Low^L),   I_Low_m^L ∈ [0, T_opt)
f_U = I_Low_{m+1}^L + (I_Low_{n−1}^L − I_Low_{m+1}^L) × C_U(I_Low^L),   I_Low_m^L ∈ [T_opt, 255]   (23)

The low-frequency component equalized by Eq. (23) is denoted I_LowF^L, and the high-frequency component enhanced by the S-shaped function is denoted I_HighF^L; they are fused into the final L channel as:

I_Final^L(i, j) = { I_LowF^L(i, j) + I_HighF^L(i, j),   0 < I_LowF^L(i, j) < 255;
                    255,   I_Final^L(i, j) ≥ 255 or I_LowF^L(i, j) ≥ 255 }   (25)

The final enhanced L channel image and the A and B channel images are converted from the LAB color space back to the RGB color space to obtain the enhanced underwater image.

Fig. 4 gives the final enhancement result in the third row. It can be observed that the image enhanced by Bi-interval contrast enhancement and the S-shaped function has a more natural appearance, highlighted details, and high visibility.

4. Experimental results

In this section, we perform qualitative and quantitative comparisons with several state-of-the-art methods to evaluate the performance of the proposed method. In addition, we also compare the performance of different methods in terms of detail preservation. The compared methods include the automatic red channel (ARC) method [18], the underwater dark channel prior (UDCP) method [7], the image blurriness and light absorption (IBLA) method [8], the generalization dark channel prior (GDCP) method [20], the fusion-based (FB) method [26], the hybrid (HB) method [10], and the two-step (TS) method [11], chosen for their representativeness in single image defogging [7,18], underwater image restoration [8,20], and underwater image enhancement [10,11,26], respectively.

In our experiments, we conduct an ablation study on the UIEB dataset [52] to demonstrate the effect of each component in our method. In addition, we also analyze the running time and potential applications of the proposed method. We adopt the average gradient (AG) [30], information entropy (IE) [30], and patch-based contrast quality index (PCQI) [56] to evaluate the quality of the underwater images shown in Figs. 5–7. In addition, we further select two no-reference metrics, the underwater image quality measure (UIQM) [57] and the underwater color image quality evaluation metric
Fig. 4. The results of the color correction and the final enhancement. From top to bottom are the color corrected images, the local enlarged images corresponding to the red boxes in the color corrected images, the final enhanced images, and the local enlarged images corresponding to the red boxes in the final enhanced images.
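The L-channel decomposition of Section 3.2 (Eqs. (13)–(15)) can be sketched as follows. The explicit Gaussian window and direct convolution below are a minimal numpy reading of the method; the window size w = 5 and σ = 2 are illustrative assumptions, since the filter parameters are not fixed in the text above.

```python
import numpy as np

def gaussian_kernel(w, sigma):
    """w x w Gaussian window centered on (x_center_w, y_center_w), normalized to sum to 1 (Eq. (13))."""
    ax = np.arange(w) - (w - 1) / 2.0        # offsets from the window center
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

def decompose(L, w=5, sigma=2.0):
    """Split an L channel into low- and high-frequency parts (Eqs. (14)-(15))."""
    g = gaussian_kernel(w, sigma)
    H, W = L.shape
    pad = w // 2
    Lp = np.pad(L.astype(np.float64), pad, mode="edge")
    low = np.zeros((H, W), dtype=np.float64)
    # direct 2-D convolution; the kernel is symmetric, so correlation is equivalent
    for i in range(w):
        for j in range(w):
            low += g[i, j] * Lp[i:i + H, j:j + W]
    high = L.astype(np.float64) - low        # Eq. (15)
    return low, high
```

By construction, low + high reproduces the input exactly, so the two branches can be enhanced independently and re-fused without losing information.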
Fig. 5. Subjective comparisons on the greenish underwater images. From left to right are raw underwater images, and the results of ARC [18], UDCP [7], IBLA [8], GDCP [20], FB [26], HB [10], TS [11], and the proposed method.
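The Bi-interval equalization of Section 3.3 (Eqs. (16)–(23)) can be sketched as follows. The exhaustive search over gray levels implements the maximum between-cluster variance criterion of Eq. (16), and each interval is then equalized against its own sub-histogram. This is a sketch of our reading of the method, not the authors' code.

```python
import numpy as np

def optimal_threshold(gray):
    """Maximum between-cluster variance threshold T_opt, searched over all gray levels (Eq. (16))."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    levels = np.arange(256)
    u = (prob * levels).sum()                     # mean of the whole grayscale
    best_t, best_var = 0, -1.0
    for t in range(1, 255):
        w1 = prob[:t].sum()
        w2 = 1.0 - w1
        if w1 == 0.0 or w2 == 0.0:
            continue
        u1 = (prob[:t] * levels[:t]).sum() / w1   # class means
        u2 = (prob[t:] * levels[t:]).sum() / w2
        var = w1 * (u1 - u) ** 2 + w2 * (u2 - u) ** 2   # fitness (Eq. (16))
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def bi_interval_equalize(gray):
    """Equalize [0, T_opt) and [T_opt, 255] separately (Eqs. (17)-(23))."""
    t = optimal_threshold(gray)
    out = gray.astype(np.float64)
    for lo, hi in ((0, t - 1), (t, 255)):
        mask = (gray >= lo) & (gray <= hi)
        n = int(mask.sum())
        if n == 0 or hi <= lo:
            continue
        hist = np.bincount(gray[mask], minlength=256).astype(np.float64)
        cdf = np.cumsum(hist[lo:hi + 1]) / n               # Eqs. (21)-(22), per interval
        out[mask] = lo + (hi - lo) * cdf[gray[mask] - lo]  # Eq. (23), per interval
    return out.astype(np.uint8)
```

Because each interval is stretched only within its own bounds, background pixels never cross T_opt into the foreground range, which is what keeps the equalization from washing out the global brightness structure.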
Fig. 6. Subjective comparisons on the bluish underwater images. From left to right are raw underwater images, and the results of ARC [18], UDCP [7], IBLA [8], GDCP [20], FB [26], HB [10], TS [11], and the proposed method.
(UCIQE) [58], to evaluate the quality of the underwater images shown in Figs. 5–7. AG mainly represents the sharpness of the image. IE denotes the average amount of information and describes the color richness of underwater images. PCQI objectively evaluates the contrast perception of underwater images by the human eye. UIQM uses the underwater image colorfulness measure (UICM), underwater image sharpness measure (UISM), and underwater image contrast measure (UIConM) to evaluate the quality of underwater images. UIQM can be expressed as UIQM = c1 × UICM + c2 × UISM + c3 × UIConM, where c1 = 0.0282, c2 = 0.2953, and c3 = 3.5753 in [57]. UCIQE is a linear combination of chroma, saturation, and contrast, providing a comprehensive evaluation of these three attributes. UCIQE can be defined as UCIQE = c1 × σ_c + c2 × con_l + c3 × μ_s, where σ_c is the standard deviation of chroma, con_l is the contrast of luminance, and μ_s is the average saturation; c1 = 0.4859, c2 = 0.2745, and c3 = 0.2576 in [58]. Therefore, we believe that these metrics can provide a comprehensive evaluation of the effectiveness of the different methods.
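The two linear combinations above can be sketched directly. The component measures have fuller definitions in [57,58]; the `uciqe_from_lab` helper below is an assumption-laden sketch, using a common approximation of saturation and the 1%–99% spread of L as the luminance contrast, rather than the exact reference implementations.

```python
import numpy as np

def uiqm(uicm, uism, uiconm, c=(0.0282, 0.2953, 3.5753)):
    """UIQM as the weighted sum of its three component measures [57]."""
    return c[0] * uicm + c[1] * uism + c[2] * uiconm

def uciqe_from_lab(lab, c=(0.4859, 0.2745, 0.2576)):
    """UCIQE from a CIELAB image [58]: chroma std, luminance contrast, mean saturation."""
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()                                # standard deviation of chroma
    # luminance contrast: spread between the top and bottom 1% of L
    # (an assumption; implementations differ in how con_l is computed)
    con_l = np.quantile(L, 0.99) - np.quantile(L, 0.01)
    sat = chroma / np.maximum(np.sqrt(chroma ** 2 + L ** 2), 1e-12)
    mu_s = sat.mean()                                     # average saturation
    return c[0] * sigma_c + c[1] * con_l + c[2] * mu_s
```

Note how the UIQM weights put most of the mass on UIConM (c3 = 3.5753), so contrast dominates the score, which matches the emphasis of the comparisons below.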
Fig. 7. Subjective comparisons on the hazy underwater images. From left to right are raw underwater images, and the results of ARC [18], UDCP [7], IBLA [8], GDCP [20], FB [26], HB [10], TS [11], and the proposed method.
Table 1A
AG, IE, and PCQI values of different methods in Fig. 5. (The bold values represent the best results).
ARC UDCP IBLA GDCP FB HB TS Our method
AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI
Green1 2.392 7.424 1.090 2.294 7.210 1.019 3.168 7.662 1.145 3.478 7.702 1.078 3.026 7.368 1.151 3.295 7.609 1.117 4.152 7.468 1.189 4.758 7.875 1.173
Green2 3.247 7.264 1.130 4.249 7.068 1.016 3.823 7.504 1.203 3.465 7.255 0.965 4.206 7.198 1.191 5.441 7.643 1.210 4.458 7.027 1.154 7.367 7.887 1.189
Green3 2.261 7.008 1.036 2.890 7.325 1.105 2.790 7.352 1.147 3.074 7.447 1.141 3.285 7.215 1.179 3.699 7.583 1.169 3.359 7.135 1.160 5.479 7.693 1.169
Green4 3.386 7.167 1.146 3.699 6.968 0.950 3.983 7.614 1.223 3.328 7.053 1.014 4.144 6.977 1.188 5.531 7.624 1.245 4.149 6.870 1.152 6.815 7.819 1.199
Average 2.821 7.216 1.101 3.283 7.143 1.023 3.441 7.533 1.179 3.336 7.364 1.049 3.665 7.189 1.177 4.491 7.614 1.185 4.029 7.125 1.163 6.104 7.818 1.182
Table 1B
UIQM and UCIQE values of different methods in Fig. 5. (The bold values represent the best results).
ARC UDCP IBLA GDCP FB HB TS Our method
UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE
Green1 3.635 0.562 3.026 0.560 2.954 0.548 1.094 0.480 4.244 0.602 3.463 0.569 4.536 0.545 4.304 0.648
Green2 2.355 0.516 2.671 0.569 1.545 0.510 −0.412 0.410 2.749 0.575 2.574 0.595 3.612 0.488 3.569 0.625
Green3 3.581 0.531 4.580 0.543 3.494 0.569 4.995 0.572 3.366 0.619 3.476 0.618 4.157 0.522 13.48 0.679
Green4 4.044 0.508 2.143 0.501 2.651 0.478 1.562 0.374 4.557 0.564 3.779 0.560 4.453 0.458 4.390 0.624
Average 3.404 0.529 3.105 0.543 2.661 0.526 1.809 0.459 3.729 0.590 3.323 0.585 4.189 0.503 6.435 0.644
Table 2A
AG, IE, and PCQI values of different methods in Fig. 6. (The bold values represent the best results).
ARC UDCP IBLA GDCP FB HB TS Our method
AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI
Blue1 4.947 7.458 1.061 5.099 6.894 0.809 7.704 7.718 1.183 5.726 7.226 0.959 5.762 7.332 1.087 6.487 7.619 1.115 7.071 7.521 1.164 10.89 7.908 1.181
Blue2 4.476 7.333 0.992 6.527 6.643 0.895 4.762 7.317 1.092 6.531 7.313 1.125 5.081 7.182 0.991 6.713 7.509 1.168 5.807 7.054 1.098 9.227 7.61 1.145
Blue3 3.041 7.131 0.957 6.650 7.014 1.079 4.019 7.519 1.184 4.127 6.913 1.053 4.028 7.136 1.093 5.055 7.603 1.21 3.636 6.658 1.066 13.68 7.798 1.236
Blue4 5.307 6.968 0.965 6.096 7.197 0.946 5.672 7.025 1.220 7.069 6.771 1.051 6.373 6.986 1.014 8.683 7.58 1.226 6.313 6.938 1.114 7.601 7.795 1.168
Average 4.442 7.222 0.994 6.093 6.937 0.932 5.539 7.394 1.169 5.863 7.055 1.047 5.311 7.159 1.046 6.734 7.577 1.179 5.706 7.042 1.110 10.34 7.777 1.182
Table 2B
UIQM and UCIQE values of different methods in Fig. 6. (The bold values represent the best results).
ARC UDCP IBLA GDCP FB HB TS Our method
UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE
Blue1 4.240 0.513 8.600 0.534 3.721 0.589 2.214 0.414 4.811 0.542 4.062 0.541 5.046 0.505 5.215 0.587
Blue2 3.836 0.572 2.763 0.631 4.018 0.548 3.895 0.557 3.797 0.589 3.371 0.596 3.975 0.522 4.259 0.649
Blue3 3.579 0.561 2.469 0.688 2.202 0.577 2.794 0.478 3.423 0.599 2.962 0.612 3.270 0.465 4.809 0.637
Blue4 4.104 0.536 4.503 0.563 3.615 0.496 3.332 0.451 4.316 0.589 3.619 0.613 4.288 0.482 4.189 0.658
Average 3.940 0.545 4.584 0.604 3.389 0.552 3.058 0.475 4.086 0.579 3.503 0.590 4.144 0.493 4.618 0.632
Tables 1, 2, and 3 give the evaluation scores of these methods applied to the greenish, bluish, and hazy underwater images shown in Figs. 5–7 in terms of AG, IE, PCQI, UIQM, and UCIQE. In Tables 1A, 2A, and 3A, the proposed method obtains similar, and in most cases better, values of AG, IE, and PCQI. In Tables 1B, 2B, and 3B, the proposed method obtains higher values of the UIQM and UCIQE metrics. However, Li et al. [52] reported that UCIQE does not take full account of color casts and artifacts. Nevertheless, our method achieves satisfactory results on all quantitative assessment indicators.

To further evaluate the effectiveness and robustness of these methods, we also perform quantitative assessments on the UIEB dataset. UIEB is a large-scale real-world dataset which includes 890 real-world underwater images collected from the Internet. Fig. 1 shows several degraded underwater images from the UIEB dataset. The average scores of the five assessment metrics are given in Table 4. On all quantitative assessment metrics, it can be observed that our method significantly outperforms several state-of-the-art methods.
Table 3A
AG, IE, and PCQI values of different methods in Fig. 7. (The bold values represent the best results).
ARC UDCP IBLA GDCP FB HB TS Our method
AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI
Haze1 4.577 7.655 1.029 6.28 7.684 1.041 6.686 5.489 0.757 7.966 7.663 1.095 5.194 7.471 1.083 5.369 7.689 1.04 5.756 7.57 1.113 8.850 7.936 1.183
Haze2 2.443 6.912 1.007 3.387 7.207 0.993 3.584 7.160 1.202 5.303 7.712 1.223 2.854 6.892 1.047 4.782 7.36 1.223 3.828 7.1 1.179 7.405 7.847 1.22
Haze3 7.442 7.122 0.983 9.275 7.084 0.981 8.293 7.378 1.161 11.17 7.073 1.116 8.656 7.097 1.059 11.54 7.552 1.233 9.329 7.073 1.192 17.85 7.723 1.292
Haze4 3.863 7.307 1.005 4.749 6.911 0.909 4.623 7.664 1.151 5.536 7.190 1.060 4.542 7.318 1.104 5.43 7.544 1.142 5.000 6.919 1.182 9.028 7.821 1.238
Average 4.581 7.249 1.006 5.922 7.221 0.981 5.796 6.922 1.067 7.493 7.409 1.123 5.311 7.194 1.073 6.78 7.536 1.159 5.978 7.165 1.166 10.78 7.832 1.233
Table 3B
UIQM and UCIQE values of different methods in Fig. 7. (The bold values represent the best results).
ARC UDCP IBLA GDCP FB HB TS Our method
UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE UIQM UCIQE
Haze1 2.806 0.518 3.908 0.589 1.134 0.590 4.857 0.601 3.806 0.524 3.144 0.522 4.093 0.528 4.394 0.562
Haze2 1.669 0.471 3.042 0.530 2.26 0.540 4.125 0.568 2.677 0.523 2.579 0.584 3.356 0.471 4.078 0.582
Haze3 4.634 0.543 5.060 0.596 4.506 0.493 4.730 0.502 4.612 0.564 3.876 0.575 4.853 0.477 4.814 0.617
Haze4 3.799 0.541 4.246 0.588 3.645 0.565 3.263 0.494 3.639 0.570 2.813 0.557 3.969 0.467 4.454 0.615
Average 3.227 0.518 4.064 0.575 2.886 0.547 4.244 0.541 3.683 0.545 3.103 0.559 4.068 0.485 4.435 0.594
Fig. 8. Subjective comparisons on synthetic underwater images. From left to right: Original images, ARC [18], UDCP [7], IBLA [8], GDCP [20], FB [26], HB [10], TS [11], and
the proposed method.
Table 5A
AG, IE, and PCQI values of different methods in Fig. 8. (The bold values represent the best results).
ARC UDCP IBLA GDCP FB HB TS Our method
AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI AG IE PCQI
Blue1 7.462 4.688 0.9904 6.862 3.225 0.9149 7.447 4.002 1.104 7.526 4.796 0.920 7.119 4.195 1.0093 7.555 4.409 0.9538 7.488 4.781 1.0452 7.727 6.681 1.043
Blue2 7.662 5.371 0.9793 7.430 3.849 0.9449 7.058 3.51 0.9965 7.376 5.408 1.029 7.379 4.409 0.9808 7.434 3.547 0.8444 7.693 4.95 1.0311 7.73 6.396 1.0366
Blue3 7.48 4.82 0.9796 6.996 3.512 0.9037 7.638 4.104 1.0422 7.683 5.068 0.897 7.387 4.053 0.9853 7.598 4.131 0.9285 7.607 4.978 1.0659 7.845 6.698 1.0136
Green1 7.692 5.885 1.0013 7.086 3.975 0.8686 7.557 4.228 1.0490 7.256 5.657 0.971 7.418 4.467 0.9765 7.609 4.244 0.9281 7.707 6.471 1.0431 7.821 8.564 1.0006
Green2 7.422 5.395 1.0279 6.758 3.336 0.9248 7.357 4.389 1.1491 7.344 5.03 0.973 7.124 4.528 1.0417 7.511 5.184 1.0227 7.453 5.422 1.1358 7.718 7.154 1.1008
Green3 7.792 4.999 1.0725 6.788 3.53 0.8704 7.507 3.911 1.0387 7.199 4.981 1.024 7.215 4.354 1.0442 7.496 3.953 1.0099 7.611 6.153 1.0928 7.764 9.681 1.0141
Average 7.585 5.193 1.0085 6.986 3.571 0.9045 7.427 4.024 1.0632 7.397 5.157 0.969 7.273 4.334 1.0063 7.533 4.244 0.9479 7.593 5.459 1.0689 7.767 7.529 1.0347
Table 5B
PSNR and SSIM values of different methods in Fig. 8. (The bold values represent the best results).
ARC UDCP IBLA GDCP FB HB TS Our method
PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM PSNR SSIM
Blue1 26.014 0.877 24.927 0.683 28.201 0.854 28.724 0.785 27.985 0.873 29.219 0.838 29.209 0.851 30.700 0.869
Blue2 26.348 0.848 24.795 0.697 26.197 0.775 24.723 0.757 25.408 0.851 25.848 0.78 25.569 0.821 32.268 0.854
Blue3 26.775 0.853 25.156 0.724 29.009 0.851 29.6 0.743 28.362 0.857 30.126 0.801 29.939 0.838 36.589 0.841
Green1 26.645 0.826 24.967 0.683 27.234 0.83 24.752 0.715 26.364 0.828 27.286 0.768 28.098 0.846 35.584 0.835
Green2 26.136 0.816 24.742 0.661 27.664 0.806 28.876 0.713 27.100 0.818 30.245 0.792 29.539 0.803 30.705 0.822
Green3 33.486 0.753 25.721 0.669 30.122 0.668 25.594 0.736 27.967 0.726 27.522 0.663 28.163 0.758 32.239 0.718
Average 27.567 0.828 25.051 0.686 28.071 0.797 27.044 0.741 27.197 0.825 28.374 0.773 28.419 0.819 33.014 0.823
W. Zhang, L. Dong, T. Zhang et al. Signal Processing: Image Communication 90 (2021) 116030
Fig. 9. Experimental results on detail preservation for real-world underwater images. In the first and third rows, from left to right: the raw images and the results of ARC [18], UDCP [7], IBLA [8], GDCP [20], FB [26], HB [10], TS [11], and the proposed method. In the second and fourth rows, from left to right: the locally enlarged regions corresponding to the red boxes in the raw images and in the results of the different methods.
Fig. 10. Experimental results on detail preservation for synthetic underwater images. In the first and third rows, from left to right: the raw images and the results of ARC [18], UDCP [7], IBLA [8], GDCP [20], FB [26], HB [10], TS [11], and the proposed method. In the second and fourth rows, from left to right: the locally enlarged regions corresponding to the red boxes in the raw images and in the results of the different methods.
Fig. 11. Results obtained by different components. The top row shows the raw image and the images enhanced by the different components. The bottom two rows show the locally enlarged regions corresponding to the blue and red boxes in the top row.
4.6. Robustness to different cameras

It is well known that robustness plays an important role in the wide applicability of underwater image enhancement methods. We selected underwater images containing the standard Macbeth Color Checker, taken by different specialized underwater cameras, to evaluate the robustness of our method. As shown in Figs. 5–10, our method significantly improves visibility and removes color distortion. From Fig. 12, since the different
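A color checker makes color restoration directly measurable. The paper does not specify its evaluation protocol, but one common approach is the mean per-patch color error against the chart's reference values; a hedged sketch (the patch RGB values below are illustrative assumptions, not Macbeth reference data):

```python
import numpy as np

def mean_patch_error(measured, reference):
    """Mean Euclidean RGB distance between measured and reference patch colors.
    measured, reference: (N, 3) arrays of per-patch mean RGB values."""
    m = np.asarray(measured, dtype=np.float64)
    r = np.asarray(reference, dtype=np.float64)
    return float(np.mean(np.linalg.norm(m - r, axis=1)))

# Illustrative values only: two neutral reference patches vs. measurements
# showing the blue-green cast typical of raw underwater images.
reference = [[243, 243, 242], [200, 200, 200]]
measured = [[180, 220, 255], [150, 190, 220]]
```

A lower error after enhancement than before would indicate that the color cast has been reduced; averaging the error over all 24 chart patches and all cameras gives a single robustness score.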
Fig. 12. Results of our method (the second row) on underwater images captured by different cameras (the first row). The images are provided by [4]. From left to right, the camera types are Canon D10, Olympus Tough 6000, Olympus Tough 8000, Pentax W60, Pentax W80, and FujiFilm Z33.
Fig. 13. Qualitative evaluation on low-light images. From top to bottom: the low-light images and the results enhanced by our method.
Fig. 14. Qualitative evaluation on foggy images. From top to bottom: the hazy images and the results defogged by our method.
Fig. 15. Local feature point matching using SIFT [61]. From left to right: feature point matching on the original image pair and on the enhanced image pair.
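The matching shown in Fig. 15 relies on Lowe's ratio test [61]: a SIFT descriptor in one image is matched to its nearest neighbor in the other image only if that neighbor is clearly closer than the second-nearest. A minimal sketch of the test itself, operating on precomputed descriptor arrays (detector and descriptor extraction are omitted):

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.75):
    """Match rows of desc1 to rows of desc2 using Lowe's ratio test.
    desc1: (M, D) descriptors, desc2: (N, D) descriptors with N >= 2.
    Returns a list of (i, j) index pairs that pass the test."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # accept only unambiguous matches: nearest << second nearest
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

More matches surviving the ratio test on the enhanced pair than on the raw pair is exactly the improvement Fig. 15 illustrates.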
5. Conclusion

We have presented a color correction and bi-interval contrast enhancement method for underwater image enhancement. Subjective evaluation indicates that the proposed method improves visibility, corrects color, and produces a natural appearance for underwater images in a variety of scenarios. Objective evaluation shows that the proposed method achieves the highest, or close to the highest, quantitative scores when compared with existing classical methods. In addition, our method also performs well in defogging and enhancing ordinary hazy images.

Although the proposed method produces high-quality images for both underwater and natural images of various scenes, it cannot obtain good results when the source images contain considerable noise. Our future work will focus on underwater images with concentrated noise.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This paper was supported in part by the National Natural Science Foundation of China under Grant 61701069, and in part by the Fundamental Research Funds for the Central Universities of China under Grants 3132019340 and 3132019200.

References

[1] J. Ahn, S. Yasukawa, T. Sonoda, Y. Nishida, K. Ishii, T. Ura, An optical image transmission system for deep sea creature sampling missions using autonomous underwater vehicle, IEEE J. Ocean. Eng. (2018) 1–12.
[2] C. Sánchez-Ferreira, L.S. Coelho, H.V.H. Ayala, M.C.Q. Farias, C.H. Llanos, Bio-inspired optimization algorithms for real underwater image restoration, Signal Process., Image Commun. 77 (2019) 49–65.
[3] H. Lu, Y. Li, X. Xu, J. Li, Z. Liu, X. Li, J. Yang, S. Serikawa, Underwater image enhancement method using weighted guided trigonometric filtering and artificial light correction, J. Vis. Commun. Image Represent. 38 (2016) 504–516.
[4] C.O. Ancuti, C. Ancuti, C. De Vleeschouwer, P. Bekaert, Color balance and fusion for underwater image enhancement, IEEE Trans. Image Process. 27 (1) (2018) 379–393.
[5] W.D. Zhang, L.L. Dong, X.P. Pan, P.Y. Zou, L. Qin, W.H. Xu, A survey of restoration and enhancement for underwater images, IEEE Access 7 (2019) 182259–182279.
[6] C. Li, J. Guo, S. Chen, Y. Tang, Y. Pang, J. Wang, Underwater image restoration based on minimum information loss principle and optical properties of underwater imaging, in: Proc. IEEE Int. Conf. Image Process. (ICIP), Sep. 2016, pp. 1993–1997.
[7] P. Drews Jr., E.R. Nascimento, S. Botelho, M.F.M. Campos, Underwater depth estimation and image restoration based on single images, IEEE Comput. Graph. Appl. 36 (2) (2016) 24–35.
[8] Y. Peng, P. Cosman, Underwater image restoration based on image blurriness and light absorption, IEEE Trans. Image Process. 26 (4) (2017) 1579–1594.
[9] X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, X. Ding, A retinex-based enhancing approach for single underwater image, in: Proc. IEEE Int. Conf. Image Process. (ICIP), Jan. 2015, pp. 4572–4576.
[10] C. Li, J. Guo, C. Guo, R. Cong, J. Gong, A hybrid method for underwater image correction, Pattern Recognit. Lett. 94 (2017) 62–67.
[11] X. Fu, Z. Fan, M. Ling, Two-step approach for single underwater image enhancement, in: Proc. IEEE Int. Symp. Intell. Signal Process. Commun. Syst. (ISPACS), 2017, pp. 789–794.
[12] H.M. Lu, Y.J. Li, Y.D. Zhang, M. Chen, S.C. Serikawa, H. Kim, Underwater optical image processing: A comprehensive review, Mobile Netw. Appl. 22 (6) (2017) 1204–1211.
[13] D. Akkaynak, T. Treibitz, A revised underwater image formation model, in: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2018, pp. 6723–6732.
[14] D. Akkaynak, T. Treibitz, Sea-thru: A method for removing water from underwater images, in: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2019, pp. 1682–1691.
[15] T. Treibitz, Y.Y. Schechner, Turbid scene enhancement using multi-directional illumination fusion, IEEE Trans. Image Process. 21 (11) (2012) 4662–4667.
[16] B. Huang, T. Liu, H. Hu, J. Han, M. Yu, Underwater image recovery considering polarization effects of objects, Opt. Express 24 (2016) 9826–9838.
[17] H. Hu, L. Zhao, X. Li, H. Wang, Underwater image recovery under the non-uniform optical field based on polarimetric imaging, IEEE Photonics J. 10 (1) (2018).
[18] A. Galdran, D. Pardo, A. Picón, A. Alvarez-Gila, Automatic red-channel underwater image restoration, J. Vis. Commun. Image Represent. 26 (2015) 132–145.
[19] C. Li, J. Guo, R. Cong, Y. Pang, B. Wang, Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior, IEEE Trans. Image Process. 25 (12) (2016) 5664–5677.
[20] Y. Peng, K. Cao, P. Cosman, Generalization of the dark channel prior for single image restoration, IEEE Trans. Image Process. 27 (6) (2018) 2856–2868.
[21] K. He, J. Sun, X. Tang, Single image haze removal using dark channel prior, IEEE Trans. Pattern Anal. Mach. Intell. 33 (12) (2011) 2341–2353.
[22] Y. Zhou, Q. Wu, K. Yan, L. Feng, Underwater image restoration using color-line model, IEEE Trans. Circuits Syst. Video Technol. 29 (3) (2019) 907–911.
[23] D. Berman, D. Levy, S. Avidan, T. Treibitz, Underwater single image color restoration using haze-lines and a new quantitative dataset, IEEE Trans. Pattern Anal. Mach. Intell. (2020).
[24] W. Song, Y. Wang, D. Huang, A. Liotta, C. Perra, Enhancement of underwater images with statistical model of background light and optimization of transmission map, IEEE Trans. Broadcast. 66 (1) (2020) 153–169.
[25] K. Iqbal, M. Odetayo, A. James, R.A. Salam, A.Z.H. Talib, Enhancing the low-quality images using unsupervised colour correction method, in: Proc. IEEE Int. Conf. Syst. Man Cybern. (SMC), Oct. 2010, pp. 1703–1709.
[26] C. Ancuti, C.O. Ancuti, T. Haber, P. Bekaert, Enhancing underwater images and videos by fusion, in: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2012, pp. 81–88.
[27] A.S.A. Ghani, N.A.M. Isa, Automatic system for improving underwater image contrast and color through recursive adaptive histogram modification, Comput. Electron. Agric. 141 (2017) 181–195.
[28] A.S.A. Ghani, N.A.M. Isa, Enhancement of low quality underwater image through integrated global and local contrast correction, Appl. Soft Comput. 37 (2015) 332–344.
[29] S. Zhang, T. Wang, J. Dong, H. Yu, Underwater image enhancement via extended multi-scale Retinex, Neurocomputing 245 (5) (2017) 1–9.
[30] W. Zhang, L. Dong, X. Pan, J. Zhou, L. Qin, W. Xu, Single image defogging based on multi-channel convolutional MSRCR, IEEE Access 7 (1) (2019) 72492–72504.
[31] K.Z.M. Azmi, A. Ghani, Z.M. Yusof, Z. Ibrahim, Natural-based underwater image color enhancement through fusion of swarm-intelligence algorithm, Appl. Soft Comput. 85 (2019).
[32] C.O. Ancuti, C. Ancuti, C. De Vleeschouwer, M. Sbert, Color channel compensation (3C): A fundamental pre-processing step for image enhancement, IEEE Trans. Image Process. (2019).
[33] Y. Li, M. Zhang, Q. Zhao, X. Zhang, S. Gao, Underwater image enhancement using adaptive retinal mechanisms, IEEE Trans. Image Process. (2019).
[34] L. Bai, W. Zhang, X. Pan, C. Zhao, Underwater image enhancement based on global and local equalization of histogram and dual-image multi-scale fusion, IEEE Access 8 (2020) 128973–128990.
[35] X. Pan, L. Li, H. Yang, Z. Liu, J. Yang, L. Zhao, Y. Fan, Accurate segmentation of nuclei in pathological images via sparse reconstruction and deep convolutional networks, Neurocomputing 229 (15) (2017) 88–99.
[36] W. Zhao, H. Lu, D. Wang, Multisensor image fusion and enhancement in spectral total variation domain, IEEE Trans. Multimedia 20 (4) (2018) 866–879.
[37] C. Li, R. Cong, J. Hou, S. Zhang, Y. Qian, S. Kwong, Nested network with two-stream pyramid for salient object detection in optical remote sensing images, IEEE Trans. Geosci. Remote Sens. 57 (11) (2019) 9156–9166.
[38] C. Guo, C. Li, J. Guo, R. Cong, H. Fu, P. Han, Hierarchical features driven residual learning for depth map super-resolution, IEEE Trans. Image Process. 28 (5) (2019) 2545–2557.
[39] W. Ren, J. Pan, H. Zhang, X. Cao, M. Yang, Single image dehazing via multi-scale convolutional neural networks with holistic edges, Int. J. Comput. Vis. 126 (2019) 1–20.
[40] C. Li, C. Guo, J. Guo, et al., PDR-Net: Perception-inspired single image dehazing network with refinement, IEEE Trans. Multimedia 22 (3) (2020).
[41] C. Guo, C. Li, J. Guo, C. Loy, J. Hou, S. Kwong, R. Cong, Zero-reference deep curve estimation for low-light image enhancement, in: Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2020.
[42] C. Li, J. Guo, C. Guo, Emerging from water: Underwater image color correction based on weakly supervised color transfer, IEEE Signal Process. Lett. 25 (3) (2018) 323–327.
[43] Y. Guo, H. Li, P. Zhuang, Underwater image enhancement using a multiscale dense generative adversarial network, IEEE J. Ocean. Eng. (2019) 1–9.
[44] X. Chen, J. Yu, S. Kong, Z. Wu, X. Fang, L. Wen, Towards real-time advancement of underwater visual quality with GAN, IEEE Trans. Ind. Electron. 66 (12) (2019).
[45] X. Ding, Y. Wang, Y. Yan, Z. Liang, Z. Mi, X. Fu, Jointly adversarial network to wavelength compensation and dehazing of underwater images, 2019, arXiv preprint arXiv:1907.05595.
[46] N. Silberman, D. Hoiem, P. Kohli, R. Fergus, Indoor segmentation and support inference from RGBD images, in: Proc. Eur. Conf. Comput. Vis. (ECCV), Springer, 2012, pp. 746–760.
[47] M.J. Islam, Y. Xia, J. Sattar, Fast underwater image enhancement for improved visual perception, IEEE Robot. Autom. Lett. 5 (2) (2020) 3227–3234.
[48] C. Li, S. Anwar, F. Porikli, Underwater scene prior inspired deep underwater image and video enhancement, Pattern Recognit. 98 (2020).
[49] X. Fu, X. Gao, Underwater image enhancement with global–local networks and compressed-histogram equalization, Signal Process., Image Commun. 86 (2020).
[50] M. Yang, K. Hu, Y. Du, Z. Wei, Z. Sheng, J. Hu, Underwater image enhancement based on conditional generative adversarial network, Signal Process., Image Commun. 8 (2020).
[51] S. Anwar, C. Li, Diving deeper into underwater image enhancement: A survey, Signal Process., Image Commun. 89 (2020).
[52] C. Li, C. Guo, W. Ren, R. Cong, J. Hou, S. Kwong, D. Tao, An underwater image enhancement benchmark dataset and beyond, IEEE Trans. Image Process. 29 (2020) 4376–4389.
[53] R. Liu, X. Fan, M. Zhu, M. Hou, Z. Luo, Real-world underwater enhancement: Challenges, benchmarks and solutions, IEEE Trans. Circuits Syst. Video Technol. (2020).
[54] T.D. Hunt, P.M. Bentler, Quantile lower bounds to reliability based on locally optimal splits, Psychometrika 80 (1) (2015) 182–195.
[55] Y. Liu, J. Han, Q. Zhang, L. Wang, Salient object detection via two-stage graphs, IEEE Trans. Circuits Syst. Video Technol. 29 (4) (2018) 1023–1037.
[56] S. Wang, K. Ma, H. Yeganeh, Z. Wang, W. Lin, A patch-structure representation method for quality assessment of contrast changed images, IEEE Signal Process. Lett. 22 (12) (2015) 2387–2390.
[57] K. Panetta, C. Gao, S. Agaian, Human-visual-system-inspired underwater image quality measures, IEEE J. Ocean. Eng. 41 (3) (2015) 541–551.
[58] M. Yang, A. Sowmya, An underwater color image quality evaluation metric, IEEE Trans. Image Process. 24 (12) (2015) 6062–6071.
[59] C. Li, R. Cong, Y. Piao, Q. Xu, C.C. Loy, RGB-D salient object detection with cross-modality modulation and selection, in: Proc. Eur. Conf. Comput. Vis. (ECCV), Springer, 2020.
[60] L. Liu, D. Wang, Z. Peng, C.L.P. Chen, T. Li, Bounded neural network control for target tracking of underactuated autonomous surface vehicles in the presence of uncertain target dynamics, IEEE Trans. Neural Netw. Learn. Syst. 66 (11) (2019) 8724–8732.
[61] D.G. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 60 (2) (2004) 91–110.