
ISPRS Journal of Photogrammetry and Remote Sensing 171 (2021) 101–117

Contents lists available at ScienceDirect

ISPRS Journal of Photogrammetry and Remote Sensing


journal homepage: www.elsevier.com/locate/isprsjprs

A review of image fusion techniques for pan-sharpening of high-resolution satellite imagery

Farzaneh Dadrass Javan a,b,*, Farhad Samadzadegan a, Soroosh Mehravar a, Ahmad Toosi a, Reza Khatami c, Alfred Stein b

a School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran, Iran
b Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, 7522 NB Enschede, the Netherlands
c Department of Geography, University of Florida, Gainesville, FL 32611, United States

A R T I C L E I N F O

Keywords:
Image fusion
Pan-sharpening
Spectral and spatial quality

A B S T R A C T

Pan-sharpening methods are commonly used to synthesize multispectral and panchromatic images. Selecting an appropriate algorithm that maintains the spectral and spatial information content of the input images is a challenging task. This review paper investigates a wide range of algorithms, including 41 methods. For this purpose, the methods were categorized as Component Substitution (CS-based), Multi-Resolution Analysis (MRA), Variational Optimization-based (VO), and Hybrid, and were tested on a collection of 21 case studies. These include images from WorldView-2, 3 & 4, GeoEye-1, QuickBird, IKONOS, KompSat-2, KompSat-3A, TripleSat, Pleiades-1, Pleiades with the aerial platform, and Deimos-2. Neural network-based methods were excluded due to their substantial computational requirements for operational mapping purposes. The methods were evaluated based on four spectral and three spatial quality metrics. An Analysis of Variance (ANOVA) was used to statistically compare the pan-sharpening categories. Results indicate that MRA-based methods performed better in terms of spectral quality, whereas most Hybrid-based methods had the highest spatial quality and CS-based methods had the lowest results both spectrally and spatially. The revisited version of the Additive Wavelet Luminance Proportional pan-sharpening method had the highest spectral quality, whereas Generalized IHS with Best Trade-off Parameter with Additive Weights showed the highest spatial quality. CS-based methods generally had the fastest run-times, whereas the majority of methods belonging to the MRA and VO categories had relatively long run times.

* Corresponding author at: School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Postal Code: 1439957131, P.O. Box: 11155-4563, Tehran, Iran.
E-mail addresses: [email protected], [email protected] (F. Dadrass Javan), [email protected] (F. Samadzadegan), [email protected] (S. Mehravar), [email protected] (A. Toosi), [email protected] (R. Khatami), [email protected] (A. Stein).

https://doi.org/10.1016/j.isprsjprs.2020.11.001
Received 22 January 2020; Received in revised form 25 October 2020; Accepted 1 November 2020
Available online 21 November 2020
0924-2716/© 2020 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.

1. Introduction

Several Earth observation remote sensing platforms acquire both multispectral (MS) and panchromatic (PAN) images, where MS images include rich spectral content but at a lower spatial resolution than the PAN band (Hasanlou and Saradjian, 2016; Rodríguez-Esparragón et al., 2017). Using either the MS or the PAN image alone means that part of the information content is discarded. Image fusion techniques are therefore frequently used to combine two or more images into an enhanced image (Pohl and Van Genderen, 1998; DadrasJavan and Samadzadegan, 2014; Alidoost et al., 2015; Jagalingam and Hegde, 2015). When a fusion method is used to fuse MS and PAN images of the same scene acquired by the same satellite, the fusion task is called "pan-sharpening". Pan-sharpening methods aim to produce synthesized MS images that contain both the spectral and the spatial information content of the input MS and PAN images (DadrasJavan and Samadzadegan, 2014; Loncan et al., 2015; Hasanlou and Saradjian, 2016). Generally, pan-sharpening methods can be categorized into three classes: pixel-, feature-, and decision-level methods (Pohl and Van Genderen, 1998; Belgiu and Stein, 2019). In pixel-level fusion methods, a new image is obtained through a pixel-by-pixel combination of the values of the input images. In feature-level methods, geometrical, structural, and spectral features, such as edges, textures, shapes, spectra, and angles, are extracted from the input images and fused. In decision-level fusion methods, the input images are processed separately, and the decisions extracted from them, instead of the original images, are fused (Pandit and Bhiwani, 2015). Pixel-level pan-sharpening approaches are the most common fusion methods applied in remote sensing and computer vision applications to generate fused images with high spatial and spectral quality.
The quality of subsequent processing and modeling tasks that use pan-sharpened images as inputs depends on the accuracy and reliability of the pan-sharpened products. In addition to the quality of the input images, the performance of the fusion method affects the quality of the synthesized images. A vast array of algorithms has therefore been examined for pan-sharpening image fusion and quality assessment (Jagalingam and Hegde, 2015; Pandit and Bhiwani, 2015), which indicates the importance of the pan-sharpening problem. However, choosing a suitable or optimal method among a large number of methods can be challenging. Consequently, many research articles have evaluated the efficiency of different pan-sharpening methods based on different quality assessment procedures (Blasch et al., 2008; Yakhdani and Azizi, 2010; Alimuddin et al., 2012; Witharana et al., 2013; Student, 2014; Vivone et al., 2014b; Pandit and Bhiwani, 2015; Ghassemian, 2016; Pushparaj and Hegde, 2017; Snehmani et al., 2017; Duran et al., 2017; DadrasJavan et al., 2018). Commonly, articles introducing new fusion algorithms compare the performance of their methods with that of some existing benchmark methods (for examples, see Chen et al. (2005), Garzelli et al. (2007), Chen et al. (2008), Khan et al. (2008), Wang et al. (2008), Khan et al. (2009), Padwick et al. (2010), Baisantry et al. (2011), Jing and Cheng (2012), Zhu and Bamler (2012), Gharbia et al. (2014), Kang et al. (2013), Huang et al. (2015), Shahdoosti and Ghassemian (2011), Ghahremani and Ghassemian (2016), Kaplan and Erer (2016), Lari and Yazdi (2016), Masi et al. (2016), Restaino et al. (2016a), Shahdoosti (2017), Li et al. (2018), and Liu et al. (2020)). However, as the focus of those articles was to introduce new algorithms, the algorithms were frequently compared against small numbers of benchmark algorithms, usually on small numbers of case studies. On the other hand, several articles compared existing methods, but most focused on a few methods applied to small numbers of datasets (Chavez et al., 1991; Garzelli et al., 2004; Vijayaraj et al., 2006; Alparone et al., 2007; Nikolakopoulos, 2008; Riyahi et al., 2009; Yakhdani and Azizi, 2010; Alimuddin et al., 2012; Mandhare et al., 2013; Witharana et al., 2013; Li et al., 2017; Pushparaj and Hegde, 2017; DadrasJavan et al., 2018; Toosi et al., 2020), whereas a few studies compared large numbers of methods on small numbers of datasets (Witharana et al., 2013; Snehmani et al., 2017; Vivone et al., 2014b; Duran et al., 2017). In a meta-analysis, Meng et al. (2019) aggregated the results obtained from a collection of articles published from 2000 to 2016. That analysis compared three groups of pan-sharpening methods, namely Component Substitution (CS), Multi-Resolution Analysis (MRA), and Variational Optimization (VO) methods, based on quantitative results reported by different articles for different case studies.

This paper aims to present a comprehensive review of different pixel-level image fusion methods. In particular, it aims to provide a clear analysis of the relative performance of different pan-sharpening methods. To do so, the most common pan-sharpening methods, represented by 41 algorithms, are investigated. The algorithms are applied to a large variety of image datasets, including different land cover compositions. To draw generalizable conclusions about the performance of the pan-sharpening methods, the methods are implemented and tested on 21 case studies distributed around the world, covering different landscapes and acquired by different imaging sensors. All 41 methods are tested on the same images, following a randomized complete block design (RCBD) as the statistical experimental design. In doing so, any differences in the quality of the fused images can be attributed to the performance of the methods. For the evaluation, the methods are investigated quantitatively, in terms of both the spectral and the spatial quality of the outputs, using seven quality assessment metrics, and qualitatively, based on visual inspection of the results.

2. Materials and methods

2.1. Pan-sharpening methods

One major category of pan-sharpening methods is the CS group. The main idea of CS-based methods is to apply a pre-determined transformation to the MS bands to separate their spectral and spatial information components (Vivone et al., 2014b; DadrasJavan et al., 2018). Next, the PAN image is histogram-matched with the spatial component extracted from the MS image, and the matched PAN band is substituted for that spatial component. An inverse transformation is then conducted to transfer the modified components back to the MS image space (Vivone et al., 2014b). Intensity-hue-saturation (IHS), principal component analysis (PCA), arithmetic combinations, and Brovey transformation (BT)-based methods are some of the most common CS-based pixel-level pan-sharpening methods (Vivone et al., 2014b; Ghassemian, 2016). These methods are simple and usually provide better spatial quality, but commonly at the cost of a reduction in spectral quality (DadrasJavan et al., 2018). Among all CS-based methods, color-transformation-based fusion methods (such as IHS) have become popular due to their simple and fast computational process and the high spatial quality of their outputs (Tu et al., 2004; Choi, 2006). CS methods are also considered spectral-based algorithms, since the fusion process occurs through spectral transformations (Scarpa et al., 2018; Vivone et al., 2019). A minimal code sketch of this scheme follows.
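The following Python/NumPy sketch illustrates the CS pipeline in its simplest GIHS-like form. It is an illustrative example rather than the authors' MATLAB toolbox, and the unweighted band average used as the intensity component is a simplifying assumption.

import numpy as np

def cs_pansharpen(ms, pan):
    # ms:  (H, W, B) MS image, already upsampled to the PAN grid
    # pan: (H, W) panchromatic band
    # 1) "Transformation": extract a spatial (intensity) component.
    intensity = ms.mean(axis=2)
    # 2) Histogram-match PAN to the intensity component (mean/std matching).
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-12) * intensity.std() + intensity.mean()
    # 3) Substitution + inverse transform: for GIHS this collapses to adding
    #    the PAN-minus-intensity detail plane to every band.
    return ms + (pan_m - intensity)[:, :, None]

In the GIHS formulation the inverse transformation reduces to injecting the same detail plane into each band, which is why no explicit forward/inverse color transform appears in the sketch.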
Another group of methods is MRA, which deals with the pan-sharpening problem from a spatial perspective. In MRA-based methods, the MS and PAN images are decomposed into scale levels using pyramid or wavelet transformation functions. The spatial information of the PAN image at the selected decomposition level is then extracted and injected into the same level of the decomposed MS image (Aiazzi et al., 2002; Vivone et al., 2014b). Finally, the inverse decomposition process generates the fused image. Laplacian pyramids, wavelets, contourlets, and curvelet transformations are some of the most common MRA-based methods (DadrasJavan et al., 2018). In contrast to the CS-based methods, the MRA-based approaches usually perform well spectrally (Kim et al., 2010). A sketch of the additive wavelet variant is given below.
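As a concrete illustration of the MRA scheme, the following sketch implements an additive "à trous" wavelet injection (the AW flavor) with the two decomposition levels used in this study. The B3-spline kernel is the one conventionally used for the à trous transform; the code is an assumed example, not the paper's implementation.

import numpy as np
from scipy.ndimage import convolve

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # 1-D B3-spline kernel

def atrous_detail(img, levels=2):
    # Undecimated (a trous) decomposition: returns the sum of detail planes.
    approx, detail = img.astype(float), 0.0
    for j in range(levels):
        k = np.zeros(4 * 2**j + 1)   # dilate the kernel: insert 2**j - 1 zeros
        k[::2**j] = B3
        smooth = convolve(approx, np.outer(k, k), mode='reflect')
        detail += approx - smooth    # wavelet (detail) plane at scale j
        approx = smooth
    return detail

def additive_wavelet_fusion(ms, pan, levels=2):
    # Inject the PAN high-frequency planes into every upsampled MS band.
    return ms + atrous_detail(pan, levels)[:, :, None]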
Considering the limitations of the CS- and MRA-based methods, Hybrid methods have been proposed, aiming to preserve both the spectral and the spatial information content of the input images (González-Audícana et al., 2005; Zhang and Hong, 2005; Kim et al., 2010; Loncan et al., 2015; Vivone et al., 2019). For instance, some Hybrid methods combine wavelets with IHS-based or PCA-based fusion methods (Gonzales et al., 2004). Merging wavelet-based methods with IHS or PCA is based on the idea of improving the fused image spatially by adding the spatial information of the PAN image to the I or PC1 component of the MS image (Gonzales et al., 2004; DadrasJavan et al., 2018), as in the sketch below.
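To illustrate the Hybrid idea, the sketch below injects the à trous wavelet details of the PAN image into only the first principal component of the MS image (an AWPC-like scheme). It reuses atrous_detail from the MRA sketch above; the gain and sign handling is a crude assumption, not the exact procedure of any specific published algorithm.

import numpy as np

def awpc_like_fusion(ms, pan, levels=2):
    # ms: (H, W, B) MS image upsampled to the PAN grid; pan: (H, W).
    H, W, B = ms.shape
    X = ms.reshape(-1, B).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    pcs = (X - mean) @ Vt.T              # principal-component scores
    pc1 = pcs[:, 0].reshape(H, W)        # treated as the spatial component
    detail = atrous_detail(pan, levels)
    # Crude gain assumption; real implementations histogram-match PAN to PC1
    # and handle the sign ambiguity of the principal component carefully.
    detail *= pc1.std() / (detail.std() + 1e-12)
    pcs[:, 0] = (pc1 + detail).ravel()
    return (pcs @ Vt + mean).reshape(H, W, B)  # inverse PCA to band space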
These three classes can be regarded as the classical approaches (Amro et al., 2011; Restaino et al., 2016a,b). Many other pan-sharpening algorithms that do not fit this classification have been proposed in recent years. Many contributions in the literature are based on Bayesian estimation theory (Fasbender et al., 2008; Wang et al., 2018; Vivone et al., 2014b). Considering the PAN and MS images as coarse measurements of a desirable high-resolution MS image, that image can be estimated through regularized solutions of the resulting ill-posed problem (Aly and Sharma, 2014). Some research has investigated total variation penalization terms (Palsson et al., 2013; He et al., 2014; Vivone et al., 2014a) and representation coefficient sparsity (compressive sensing theory) (Vivone et al., 2014a; Ghahremani and Ghassemian, 2015). Many of the recent improvements of these methods have been obtained from the application of super-resolution (Pan et al., 2013; Yin, 2017). The sparse representation model of an image is based on the assumption that every image patch can be approximately expressed as a linear combination of several atoms from a predefined dictionary. Sparse representation is known as an efficient way to extract local structures of imagery in the field of remotely sensed image fusion (Cheng et al., 2015), and it has been used in pan-sharpening techniques (Zhu and Bamler, 2012; Vicinanza et al., 2014; Yin, 2015; Zhu et al., 2015; Yang et al., 2018a,b).
The regularization-based, Bayesian-based, model-based optimization, and sparse-representation-based methods are all directly or indirectly founded on the optimization of a variational model, and they are therefore usually considered VO-based algorithms. These algorithms usually require heavy computations in comparison with the previously mentioned groups.

Several algorithms based on convolutional neural networks (CNNs) have been proposed for pan-sharpening (Zhang et al., 2017; Xing et al., 2018; Jiang et al., 2020), adopting super-resolution architectures (Masi et al., 2016; Scarpa et al., 2018), de-noising auto-encoder architectures (Huang et al., 2015; Azarang et al., 2019), deep residual network architectures (Wei and Yuan, 2017), and generative adversarial network architectures (Liu et al., 2018). Most of these methods are based on single-image super-resolution networks, which contradict pan-sharpening at one important point: in single-image super-resolution, the spatial details are derived from the low-resolution image, while in pan-sharpening, the high-frequency information of the synthesized product is extracted from the high-resolution PAN image. This issue leaves many deep-learning-based methods unable to fully exploit the useful spatial information of the PAN image, resulting in blurred fusion products (Jiang et al., 2020). To cope with this challenge, He et al. (2019) and Shen et al. (2019) incorporated the CS/MRA categories and the variational model into deep-learning-based approaches. Besides, many CNN methods treat pan-sharpening as a black-box learning procedure (He et al., 2019). Nevertheless, the great improvements in the performance of such state-of-the-art approaches are undeniable, given the many successful contributions reported (Azarang et al., 2019; He et al., 2019). On the other hand, deep-learning image fusion techniques often require large numbers of images for training the underlying model (Vivone et al., 2019), demanding high-end hardware configurations, especially GPUs. Deep-learning-based fusion methods are also not deterministic solutions to the fusion process, and different factors, from the initial training data to even the applied hardware, can affect the final quality of the products. Moreover, CNN-based methods are still implemented on small datasets in experimental studies. Therefore, deep-learning-based fusion methods are excluded from this study, which consequently focuses on the CS-based methods, the MRA family, the Hybrid MRA-CS family, and the VO-based algorithms.

In this research, 41 pixel-level pan-sharpening methods from the CS-based, MRA-based, Hybrid, and VO-based categories are studied. Table 1 lists the investigated methods. For each category, the most common methods, based on the literature, are included to provide a comprehensive analysis while keeping the workload at a manageable scale. Pair-wise comparisons were conducted between all pairs of methods using statistical tests (details of the statistical analysis are presented in Section 2.4). Additionally, statistical tests were conducted between the four groups of methods to examine their relative performance and to see whether some groups systematically outperform the others. For the wavelet-family methods, the "à trous" decomposition scheme was applied, which is an undecimated discrete wavelet transformation and yields a shift-invariant discrete wavelet decomposition (Rockinger and Fechner, 1998; González-Audícana et al., 2005; Amolins et al., 2007). The level of decomposition was set to two, based on the experiment conducted by DadrasJavan et al. (2018). The "ratio" parameter, the scale ratio between the MS and PAN images, was set to four in the Morphological Half Gradient (MF-HG), Band Dependent Spatial Detail (BDSD), Partial Replacement Adaptive Component Substitution (PRACS), High Pass Filter (HPF), Smoothing Filter-based Intensity Modulation (SFIM), and Indusion methods, similar to Vivone et al. (2014b). The "block size" parameter, used in the BDSD algorithm, was set to 128 to be consistent across datasets with different dimensions. A "haar" wavelet (Snehmani et al., 2017) was used as the wavelet in the Gram-Schmidt-Wavelet (GS-Wavelet) method. In the DWTSR method, the required dictionary was learned from the training-set images utilized by Liu and Wang (2013), employing the K-SVD algorithm. The regularization parameter and the number of iterations needed to minimize the energy in the optimization procedure of the Variational Pan-Sharpening algorithm with Local Gradient Constraints, proposed by Fu et al. (2019), were empirically set to 0.2 and 100, respectively. The input parameter values of each method were based on the values recommended in the article introducing that method.

Table 1
Pixel-level pan-sharpening methods.

No. | Category | Method | References
1 | CS | Principal Component Analysis (PCA) | Jiang et al., 2011
2 | CS | Brovey Transformation (BT) | Jiang et al., 2011; Mandhare et al., 2013
3 | CS | Generalized Intensity-Hue-Saturation (GIHS) | Tu et al., 2001, 2004, 2007; Zhang and Hong, 2005
4 | CS | GIHS with Tradeoff Parameter (GIHS-TP) | Choi, 2006
5 | CS | GIHS with Best Tradeoff Parameter (GIHS-BTP) | Tu et al., 2007
6 | CS | GIHS with Additive Weights (GIHS-AW) | Aiazzi et al., 2007
7 | CS | Improved GIHS-AW (IGIHS-AW) | Xu et al., 2008
8 | CS | Nonlinear IHS (NIHS) | Ghahremani and Ghassemian, 2016
9 | CS | Ehlers | Klonus and Ehlers, 2009; Snehmani et al., 2017
10 | CS | Gram-Schmidt (GS) | Klonus and Ehlers, 2009
11 | CS | Smoothing Filter-based Intensity Modulation (SFIM) | Liu, 2000; Wald and Ranchin, 2002
12 | CS | Matting Model Pan-sharpening (MMP) | Kang et al., 2013
13 | CS | Band Dependent Spatial Detail (BDSD) | Garzelli et al., 2007; Vivone et al., 2014b
14 | CS | BDSD with Physical Constraints (BDSD-PC) | Vivone, 2019
15 | CS | Partial Replacement Adaptive Component Substitution (PRACS) | Choi et al., 2010; Vivone et al., 2014b
16 | CS | High Pass Filter (HPF) | Chavez et al., 1991
17 | CS | Indusion | Khan et al., 2008
18 | CS | Hyperspherical Color Space (HCS) | Padwick et al., 2010
19 | MRA | Substitutive Wavelet (SW) | Amolins et al., 2007; Kim et al., 2010
20 | MRA | Additive Wavelet (AW) | Amolins et al., 2007
21 | MRA | Contrast Pyramid | Zhang and Han, 2004
22 | MRA | Morphological Half Gradient (MF-HG) | Restaino et al., 2016a
23 | MRA | Modulation Transfer Function-Generalized Laplacian Pyramid (MTF-GLP) | Aiazzi et al., 2002, 2006
24 | MRA | MTF-GLP-Context-Based Decision (MTF-GLP-CBD) | Aiazzi et al., 2002, 2006, 2007
25 | MRA | MTF-GLP with High-Pass Modulation model (MTF-GLP-HPM) | Aiazzi et al., 2003, 2006
26 | MRA | MTF-GLP-HPM with Post-Processing (MTF-GLP-HPM-PP) | Aiazzi et al., 2006; Lee and Lee, 2009
27 | MRA | GLP based on Full Scale Regression-based injection coefficients (GLP-Reg-FS) | Vivone et al., 2018
28 | Hybrid | Substitute Wavelet Intensity (SWI) | Gonzales et al., 2004
29 | Hybrid | Additive Wavelet Intensity (AWI) | Nunez et al., 1999
30 | Hybrid | Additive Wavelet Luminance Proportional (AWLP) | Otazu et al., 2005; Kim et al., 2010
31 | Hybrid | Revisited AWLP (AWLP-H) | Vivone et al., 2019
32 | Hybrid | Weighted Wavelet Intensity (WWI) | Zhang and Hong, 2005
33 | Hybrid | Substitute Wavelet Principal Component (SWPC) | González-Audícana et al., 2004
34 | Hybrid | Additive Wavelet Principal Component (AWPC) | González-Audícana et al., 2005
35 | Hybrid | GS-Wavelet | Proposed by the authors
36 | Hybrid | Pan-sharpening method using a Guided Filter (PGF) | Li et al., 2018
37 | VO | P + XS | Ballester et al., 2006
38 | VO | Filter Estimation (FE) | Vivone et al., 2014a
39 | VO | Sparse representation under the framework of the wavelet transform (DWTSR) | Liu and Wang, 2013
40 | VO | Pan-sharpening with structural consistency and ℓ1/2 gradient prior (L12-norm) | Zeng et al., 2016
41 | VO | Variational Pan-Sharpening with Local Gradient Constraints (VP-LGC) | Fu et al., 2019
2.2. Pan-sharpening quality assessment measures

While several measures have been developed for the quality assessment of satellite imagery (Yakhdani and Azizi, 2010) and aerial imagery (Sagan et al., 2019), many metrics have been proposed and utilized specifically for evaluating pan-sharpening methods. These measures aim to determine how successful a pan-sharpening method is in preserving the spectral and spatial information content of the original input images (Wald, 2000; Piella and Heijmans, 2003; Zhang, 2008; Pandit and Bhiwani, 2015; Jagalingam and Hegde, 2015). The assessment methods can be classified into qualitative and quantitative categories (Jagalingam and Hegde, 2015). The qualitative methods are commonly based on a visual comparison of the colors and spatial details of the fused image with those of the input MS and PAN images (Chen and Blum, 2005; Alparone et al., 2007; Zhang, 2008). Because these methods are subjective, i.e. rely on human opinion, inconsistencies among assessments from different analysts are common (Zhang, 2008). Thus, quantitative methods, based on pre-defined metrics that quantify the similarities between the input and fused images, are used in this research (Alparone et al., 2007; Zhang, 2008; Jagalingam and Hegde, 2015).

The quantitative pan-sharpening quality assessment metrics can be classified into two main categories: spectral and spatial metrics (Jagalingam and Hegde, 2015). The spectral metrics compare the spectral similarity and proximity of corresponding features in the fused and input MS images. Quantitative spectral metrics can further be classified into mono-modal and multi-modal metrics, based on the number of spectral levels considered in the quality assessment process (Xydeas and Petrovic, 2000; Thomas and Wald, 2006; Alparone et al., 2007). Mono-modal metrics consider a single image layer, while multi-modal metrics consider several image channels at the same time (Thomas and Wald, 2006). Although the fused image should preserve the spectral information of the MS image, it should also present the spatial information of the PAN image (Javan et al., 2013; Hasanlou and Saradjian, 2016; Yakhdani and Azizi, 2010; Witharana et al., 2013).

The quality assessment of the pan-sharpening methods was conducted using some of the most commonly used spectral and spatial quality metrics in the existing literature. Four spectral quality indices, namely the Correlation Coefficient (CC), Spectral Angle Mapper (SAM), Relative Dimensionless Global Error in Synthesis (ERGAS), and Universal Image Quality Index (UIQI), are employed for the spectral evaluations. CC and UIQI are mono-modal metrics, as opposed to SAM and ERGAS, which are multi-modal. The spatial quality validation of the pan-sharpened images is done using the correlation of edge extraction (edge CC), the Spatial Correlation Coefficient (SCC), and the DS component of the QNR metric. Letting MS and F represent the multispectral and pan-sharpened images, respectively, each of M × N pixels and B bands, with means denoted by an overbar, the metric definitions are expressed as follows.

CC computes the spectral similarity between the MS and fused images. It ranges from −1 to 1, with larger values indicating more similarity between the MS and F images (Padwick et al., 2010; Zhu and Bamler, 2012; Yang et al., 2018a). CC is defined in Eq. (1), in which the subscripts (i, j) specify the location of the pixels.

CC(MS, F) = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\left[MS(i,j)-\overline{MS}\right]\left[F(i,j)-\overline{F}\right]}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}\left[MS(i,j)-\overline{MS}\right]^{2}\,\sum_{i=1}^{M}\sum_{j=1}^{N}\left[F(i,j)-\overline{F}\right]^{2}}}   (1)

SAM is the most commonly used spectral metric (Loncan et al., 2015; Meng et al., 2019). SAM represents the angle between the reference and the processed vectors of a given pixel in the spectral feature space of an image (Leung et al., 2001). SAM is calculated as shown in Eq. (2),

SAM(MS, F) = \arccos\left(\frac{\sum_{i=1}^{B} MS_i\,F_i}{\sqrt{\sum_{i=1}^{B} MS_i\,MS_i}\,\sqrt{\sum_{i=1}^{B} F_i\,F_i}}\right)   (2)

where MS_i and F_i are the spectral vectors of the multispectral and fused images, respectively, and B is the number of image bands. The computed SAM is the spectral angle between the MS and fused vectors of a given pixel and can range from 0 to 90 degrees. Smaller SAM values represent more similarity between the multispectral and fused vectors; the ideal value of SAM is therefore zero (Leung et al., 2001; Alparone et al., 2007). The final SAM value for a given pan-sharpened image is calculated by averaging the SAM values computed for all pixels.

UIQI computes the level of relevant information transferred from the MS image into the fused image. The value of UIQI is in the range of −1 to 1, and a value of 1 indicates perfect similarity of the two images (Wang and Bovik, 2002; Alparone et al., 2008). UIQI is defined as

UIQI(MS, F) = \frac{4\,\sigma_{MS,F}\,\overline{MS}\,\overline{F}}{\left(\sigma_{MS}^{2}+\sigma_{F}^{2}\right)\left(\overline{MS}^{2}+\overline{F}^{2}\right)}   (3)

in which σ_{MS,F} is the covariance between MS and F, and σ²_{MS} and σ²_{F} are the variances of MS and F, respectively.

ERGAS computes the quality of the fused image as the normalized average error over the bands of the fused image. It can range between zero and infinity, with lower values indicating a higher degree of similarity between the two images (Wald, 2000). The ERGAS metric is given by Eq. (4), where d_h/d_l denotes the ratio between the pixel sizes of PAN and MS, and μ(l) is the mean of the lth band.

ERGAS = 100\,\frac{d_h}{d_l}\,\sqrt{\frac{1}{B}\sum_{l=1}^{B}\left(\frac{RMSE(l)}{\mu(l)}\right)^{2}}   (4)
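The four spectral indices are straightforward to transcribe into code. The NumPy sketch below follows Eqs. (1)-(4) directly; it is an illustration, not the paper's MATLAB toolbox, and the small epsilon guards against division by zero are an added assumption.

import numpy as np

def cc(ms, f):
    # Eq. (1): correlation coefficient between two single-band images.
    a, b = ms - ms.mean(), f - f.mean()
    return (a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum())

def sam_deg(ms, f):
    # Eq. (2): mean spectral angle in degrees; ms, f have shape (H, W, B).
    num = (ms * f).sum(axis=2)
    den = np.sqrt((ms**2).sum(axis=2) * (f**2).sum(axis=2)) + 1e-12
    return np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean()

def uiqi(ms, f):
    # Eq. (3): universal image quality index for two single-band images.
    cov = np.mean((ms - ms.mean()) * (f - f.mean()))
    return 4 * cov * ms.mean() * f.mean() / (
        (ms.var() + f.var()) * (ms.mean()**2 + f.mean()**2))

def ergas(ms, f, ratio=0.25):
    # Eq. (4): ERGAS; ratio = d_h / d_l (PAN pixel size over MS pixel size).
    terms = [np.mean((ms[..., l] - f[..., l])**2) / ms[..., l].mean()**2
             for l in range(ms.shape[-1])]
    return 100 * ratio * np.sqrt(np.mean(terms))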
SCC is a metric in which the high-frequency details of the PAN and fused images are first extracted by a high-pass 2-D filter; a correlation coefficient is then used to determine the similarity of those filtered images (Li et al., 2017; Sulaiman et al., 2020). The Laplacian filter is the most well-known filter used for the SCC calculation (Zhou et al., 1998; Pushparaj and Hegde, 2017). SCC thus reflects the similarity of the spatial information of the PAN and fused images. For the edge CC metric, the similarity value is computed on the edge maps of the PAN and fused images using the correlation coefficient (Klonus and Ehlers, 2009; Ehlers et al., 2010; Yakhdani and Azizi, 2010).

The spatial component of the QNR index is computed through similarity measurements between pairs of scalar images obtained by employing the UIQI metric (Pálsson, 2013). Eq. (5) gives the DS component of the QNR metric, where MS_l is the low-resolution lth MS band, PAN_L is the PAN image degraded to the resolution of the MS image, F_l is the lth band of the pan-sharpened image and is compared against the original PAN image, and q is a constant (Alparone et al., 2015).

D_S = \sqrt[q]{\frac{1}{B}\sum_{l=1}^{B}\left|\,UIQI(MS_l, PAN_L) - UIQI(F_l, PAN)\,\right|^{q}}   (5)
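The two main spatial measures can be sketched in the same style. Below, scc uses SciPy's Laplacian as the high-pass filter, and ds reuses the uiqi function from the previous sketch; both are hedged illustrations of the metric definitions, not the exact filters or implementations used in the paper.

import numpy as np
from scipy.ndimage import laplace

def scc(pan, fused_band):
    # SCC sketch: correlate Laplacian-filtered (high-pass) images.
    hp1, hp2 = laplace(pan.astype(float)), laplace(fused_band.astype(float))
    a, b = hp1 - hp1.mean(), hp2 - hp2.mean()
    return (a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum())

def ds(ms_lr, pan_lr, fused, pan, q=1):
    # Eq. (5) sketch: D_S component of QNR.
    # ms_lr : (h, w, B) original low-resolution MS
    # pan_lr: (h, w) PAN degraded to the MS resolution
    # fused : (H, W, B) pan-sharpened image; pan: (H, W) original PAN
    B = ms_lr.shape[-1]
    total = sum(abs(uiqi(ms_lr[..., l], pan_lr) - uiqi(fused[..., l], pan))**q
                for l in range(B))
    return (total / B)**(1.0 / q)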
Depending on the availability and selection of a reference image, different approaches have been used for the evaluation of pan-sharpening processes (Ghassemian, 2016). In this study, Wald's protocol (Ranchin and Wald, 2000) is applied, as it is the most commonly used approach in the image fusion community (Vivone et al., 2014b; Zeng et al., 2016; Ghahremani and Ghassemian, 2016; Restaino et al., 2016a; Duran et al., 2017; Snehmani et al., 2017; DadrasJavan et al., 2018; Dou, 2018). In this approach, down-sampled versions of the input MS and PAN images are fed to the pan-sharpening process, and the generated fused image is compared with the initial MS image using the applied quality assessment metrics.
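The protocol is easy to express in code. The sketch below degrades both inputs by the MS/PAN scale ratio, fuses them, and scores the result against the original MS image; the block-average degradation is a crude stand-in for a proper MTF-shaped low-pass filter, and sam_deg/ergas refer to the earlier metric sketches.

import numpy as np

def degrade(img, ratio=4):
    # Crude spatial degradation by block averaging (an assumed stand-in
    # for an MTF-matched low-pass filter followed by decimation).
    h, w = img.shape[0] // ratio, img.shape[1] // ratio
    return img[:h * ratio, :w * ratio].reshape(
        h, ratio, w, ratio, *img.shape[2:]).mean(axis=(1, 3))

def wald_assessment(ms, pan, fuse_fn, ratio=4):
    # Wald's protocol: fuse degraded inputs, score against the original MS.
    # fuse_fn(ms_lr, pan_lr) must return an image on the grid of `ms`.
    ms_lr = degrade(ms, ratio)    # MS degraded by the MS/PAN scale ratio
    pan_lr = degrade(pan, ratio)  # PAN degraded to the original MS resolution
    fused = fuse_fn(ms_lr, pan_lr)
    return {'SAM': sam_deg(ms, fused), 'ERGAS': ergas(ms, fused, 1.0 / ratio)}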
2.3. Experimental dataset

A large sample of datasets is required to conduct a rigorous statistical analysis of the performance of the pan-sharpening methods and to draw generalizable conclusions. Therefore, the pan-sharpening methods (Table 1) were tested on a sample of 21 images, including data from WorldView-2, 3, and 4, GeoEye-1, QuickBird, IKONOS, KompSat-2, KompSat-3A, TripleSat, Pleiades-1A, and Deimos-2. The images were sampled from a variety of scenes, including urban, agricultural, forested, industrial, and water-covered areas. Table 2 presents the specifications of the case study datasets, and the MS images of the 21 case studies are shown in Fig. 1. Because the main purpose of this research is to evaluate the spectral and spatial quality of different pan-sharpening methods, all the sample images are selected as registered datasets.

Table 2
Specifications of the datasets used for pan-sharpening.

Case study | Platform | Location | Type of area | Coverage | GSD¹ (MS/PAN) | Number of spectral bands | Size of PAN image (pixels)
a | WorldView-2 | Rio de Janeiro (Rio.), Brazil | urban | residential, water, vegetation | 1.84/0.46 m | 8 (coastal, blue, green, yellow, red, red edge, NIR1² and NIR2) | 3072 × 3072
b | WorldView-2 | San Francisco-1, USA | urban | road | 1.84/0.46 m | 8 (as for case a) | 1024 × 1024
c | WorldView-2 | Stockholm, Sweden | urban | road | 1.84/0.46 m | 8 (as for case a) | 1680 × 1680
d | WorldView-2 | Sydney, Australia | urban | residential, water | 1.84/0.46 m | 8 (as for case a) | 2048 × 2048
e | WorldView-2 | Washington DC, USA | urban | residential, vegetation | 1.84/0.46 m | 8 (as for case a) | 2048 × 2048
f | WorldView-3 | Tripoli, Libya | industrial | industrial buildings, vegetation | 1.24/0.31 m | 8 (red, red edge, coastal, blue, green, yellow, NIR1 and NIR2) | 2048 × 2048
g | WorldView-4 | New York-1, USA | urban | residential, water | 1.24/0.31 m | 4 (red, green, blue and NIR) | 2048 × 2048
h | GeoEye-1 | Isfahan-1, Iran | urban | residential | 1.84/0.46 m | 4 (blue, green, red and NIR) | 512 × 512
i | GeoEye-1 | Cape Town, South Africa | urban | residential, water, vegetation | 1.84/0.46 m | 4 (blue, green, red and NIR) | 2048 × 2048
j | QuickBird | Isfahan-2, Iran | urban | residential, water, vegetation | 2.4/0.6 m | 4 (blue, green, red and NIR) | 1024 × 1024
k | QuickBird | Jaipur, Rajasthan, India | urban | residential, water | 2.4/0.6 m | 4 (blue, green, red and NIR) | 2048 × 2048
l | QuickBird | India Lake, India | suburban | soil, vegetation | 2.4/0.6 m | 4 (blue, green, red and NIR) | 1024 × 1024
m | QuickBird | San Francisco-2, USA | urban | water, vegetation, soil | 2.4/0.6 m | 4 (blue, green, red and NIR) | 1152 × 1152
n | IKONOS | Sichuan, China | suburban | vegetation | 3.2/0.82 m | 4 (blue, green, red and NIR) | 2048 × 2048
o | IKONOS | Sao Paulo, Brazil | urban | residential | 3.2/0.82 m | 4 (blue, green, red and NIR) | 2048 × 2048
p | KompSat-2 | Hong Kong, China | urban | residential, water | 4/1 m | 4 (blue, green, red and NIR) | 2048 × 2048
q | KompSat-3A | New York-2, USA | urban | residential, water | 2.2/0.55 m | 4 (blue, green, red and NIR) | 2048 × 2048
r | TripleSat | Lakewood, California, USA | urban | residential, vegetation | 3.2/0.8 m | 4 (blue, green, red and NIR) | 2048 × 2048
s | Pleiades-1A | Boulder County, Colorado, USA | urban | residential, vegetation | 2/0.5 m | 4 (blue, green, red and NIR) | 1024 × 1024
t | Pleiades (aerial platform) | Toulouse, France | urban | residential, vegetation | 0.6/* m | 4 (blue, green, red and NIR) | 1024 × 1024
u | Deimos-2 | Barcelona, Spain | port area | water | 4/1 m | 4 (blue, green, red and NIR) | 1536 × 1536

* The PAN image was simulated using Vivone et al.'s (2014b) method for case study "t".
¹ GSD: ground sampling distance.
² NIR: near-infrared.

Fig. 1. The MS images from the 21 case studies.
2.4. Statistical comparison of pan-sharpening methods

Analysis of variance (ANOVA) was used to conduct statistical comparisons among the pan-sharpening methods. Separate one-way ANOVAs were conducted to compare the performance of the methods based on each of the seven quality assessment metrics. For each ANOVA, the observed values of the quality metric for the 21 case studies were used as the response values. The ANOVAs were implemented for a randomized complete block design (RCBD) with the case study images as the blocking factor, because all methods were applied to the same dataset. Given this blocking design, any differences among the quality values of the methods can be attributed directly to the performance of the methods, as they were all tested on the same images. The overall null hypothesis of each ANOVA was equality of all methods, i.e. that all methods have identical performance, and the alternative hypothesis was inequality of at least two methods. If the null hypothesis for an ANOVA was rejected, two series of subsequent tests were performed. The first series included comparisons between the categories of methods, i.e. the CS, MRA, VO-based, and Hybrid categories. In addition, pair-wise comparisons were conducted between individual methods. The Tukey–Kramer method was used to control the experiment-wise type I error rate at α = 0.05 for the pair-wise comparisons.
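For readers who want to reproduce this kind of analysis, the sketch below shows one way to fit a blocked ANOVA and run Tukey comparisons in Python with statsmodels. The data-frame column names (quality, method, image) are illustrative, and the convenience function pairwise_tukeyhsd ignores the block term, so it only approximates the Tukey-Kramer comparisons reported here.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def rcbd_analysis(df: pd.DataFrame):
    # df: one row per (method, case-study image) with columns
    # 'quality' (metric value), 'method', and 'image' (blocking factor).
    model = smf.ols('quality ~ C(method) + C(image)', data=df).fit()
    print(anova_lm(model, typ=2))  # F-test for the method effect, block removed
    # Pair-wise comparisons at an experiment-wise alpha of 0.05.
    print(pairwise_tukeyhsd(df['quality'], df['method'], alpha=0.05))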

3. Results

3.1. Qualitative assessment of pan-sharpened imagery

Qualitative evaluation of the pan-sharpening methods was conducted through visual inspection of the fusion results. Figs. 2 to 4 show the results for three case study images from the QuickBird, WorldView-2, and WorldView-3 sensors, respectively. The zoomed images are selected from areas with distinct objects and include high spatial and spectral information content. Considering spectral distortion, it can be seen from Fig. 2 that the methods IGIHS-AW (7), L12-norm (40), HCS (18), and SW (19) have the lowest spectral quality, while the methods PGF (36), MF-HG (22), MTF-GLP (23), MTF-GLP-CBD (24), MMP (12), and AWLP-H (31) have the highest spectral quality, judging by the color distortion in the building roof and grassland. In Fig. 3, the methods BDSD (13), AW (20), AWI (29), and AWPC (34) have the worst spectral quality, and the methods PGF (36), SWI (28), and GS-Wavelet (35) the best, when compared with the colors of the MS image. In Fig. 4, considering the color distortion in the truck and the cars, it can be noted that the methods AWI (29), BDSD (13), SW (19), BT (2), and Indusion (17) have the most spectral distortion, and the methods SWI (28), AWLP-H (31), and MMP (12) the least.

In terms of spatial quality, in Fig. 2 the rooftop edges, the lanes, and the car boundary provide a suitable structure for the spatial assessment of the images. Accordingly, the methods AWPC (34), AWI (29), Indusion (17), NIHS (8), and GIHS-TP (4) present the lowest spatial quality, and the methods WWI (32), AWLP-H (31), GIHS (3), MTF-GLP-HPM-PP (26), and L12-norm (40) the highest. Judging by the edges of the letters in Fig. 3, the methods BDSD (13), Indusion (17), VP-LGC (41), NIHS (8), and GIHS-TP (4) show less spatial clarity, and the methods WWI (32), SWPC (33), GIHS (3), and HCS (18) more. From the spatial details in the bridge, the cars, and the lanes in Fig. 4, it is noticeable that the methods AWI (29), GIHS-TP (4), and NIHS (8) suffer from spatial distortion, while the methods AWLP-H (31), WWI (32), and SWPC (33) present higher-quality spatial information.

Moreover, as Figs. 2, 3, and 4 show, some methods present either higher spectral or higher spatial quality, while other methods present a moderate level of both. The methods AWLP-H (31), WWI (32), GIHS (3), and SWI (28) in Fig. 2, the methods WWI (32), SWPC (33), PGF (36), and GIHS (3) in Fig. 3, and the methods AWLP-H (31) and GIHS (3) in Fig. 4 are examples of methods that show moderately good spectral and spatial quality.

Fig. 2. Qualitative assessment of the 41 pan-sharpening techniques, QuickBird dataset.
Fig. 3. Qualitative assessment of the 41 pan-sharpening techniques, WorldView-2 dataset.
Fig. 4. Qualitative assessment of the 41 pan-sharpening techniques, WorldView-4 dataset.
3.2. Spectral comparison of pan-sharpening methods

Statistical tests were conducted to analyze the performances of the pan-sharpening methods. The results for each of the spectral quality metrics were analyzed separately using an ANOVA (four ANOVAs in total), using the results from the 21 case studies as the response values. An F-test was performed for each ANOVA to examine the null hypothesis of equality of all methods. The p-value of the F-test was smaller than 0.001 for all four metrics, indicating that the differences between the performances of at least two methods were statistically significant for each metric. Therefore, two sets of statistical tests were performed for each metric to study the differences among the methods in more detail. One group of tests comprised pair-wise comparisons between all pairs of methods. In addition, tests were conducted to compare the four categories of methods (see Table 1) and to examine whether some categories systematically outperform others.

A comparison of the different fusion categories in terms of the spectral quality assessment metrics is presented in Table 3. Table 3 shows the difference between the mean metric values of the categories, where the mean value of a category is calculated by averaging the values for all methods of that category. T-tests were conducted among all categories of methods to examine whether there was a statistically significant difference between the mean values of the categories (null hypothesis: mean of category i = mean of category j; i and j = 1–4). P-values can be used to decide whether a difference is statistically significant. MRA-based methods outperformed CS-based methods in terms of mean SAM: on average, MRA methods had 0.390 smaller SAM values than the CS-based methods (averaged over all methods of the categories and all case studies), and the difference between the two categories was statistically significant (p-value < 0.001). Similarly, the Hybrid methods displayed better performance than the CS-based methods, with an average improvement of 0.314 in SAM, 0.015 in CC, 0.635 in ERGAS, and 0.03 in UIQI. Moreover, the average difference in SAM between the MRA-based and Hybrid methods was 0.076, which was not statistically significant (p-value = 0.358) at the 0.05 significance level. The average differences of 0.005 in CC, 0.093 in ERGAS, and 0.006 in UIQI between the MRA-based and Hybrid methods indicate the better spectral behavior of the MRA-based methods. Comparing the CS-based and VO-based methods, average differences of 0.004 in CC, 0.198 in ERGAS, 0.005 in UIQI, and 0.352 in SAM can be seen. Overall, considering the CC metric, the MRA-based and VO-based categories had the spectrally best and worst performance, respectively. In terms of the ERGAS, UIQI, and SAM metrics, the MRA-based and CS-based categories were the best and the worst, respectively.

Table 3
Comparisons between the four pan-sharpening categories based on the spectral assessment metrics (mean difference between category means, with the associated p-value).

Categories | CC mean diff. | CC p-value | ERGAS mean diff. | ERGAS p-value | UIQI mean diff. | UIQI p-value | SAM mean diff. | SAM p-value
CS − MRA | −0.019 | <0.0001 | 0.729 | <0.0001 | −0.036 | <0.0001 | 0.390 | <0.0001
CS − Hybrid | −0.015 | <0.0001 | 0.635 | <0.0001 | −0.030 | <0.0001 | 0.314 | <0.0001
CS − VO | 0.004 | 0.037 | 0.198 | 0.0012 | −0.005 | 0.0385 | 0.352 | <0.0001
MRA − Hybrid | 0.005 | 0.015 | −0.093 | 0.1008 | 0.006 | 0.015 | −0.076 | 0.358
MRA − VO | 0.023 | <0.0001 | −0.530 | <0.0001 | 0.030 | <0.0001 | −0.038 | 0.696
Hybrid − VO | 0.019 | <0.0001 | −0.438 | <0.0001 | 0.025 | <0.0001 | 0.038 | 0.698

In addition to the comparisons between categories of methods, pair-wise comparisons based on the spectral quality assessment metrics were conducted between pairs of individual methods (Fig. 5). Fig. 5 shows the rose diagrams of the mean metric values of the methods (means over the 21 case studies) on the right and the statistical t-test results on the left. T-tests were conducted between each pair of methods to examine whether there was a statistically significant difference between their mean values (null hypothesis: mean of method i = mean of method j; i and j = 1–41). In the t-test charts (Fig. 5), methods sharing a common horizontal line did not differ statistically significantly at α = 0.05. For example, in terms of CC, the AWLP-H method had the largest CC value, which was statistically significantly larger than the CC of the SWPC method (the two methods do not share a horizontal line), whereas the difference between the CC of the AWLP-H and MMP methods was not statistically significant (they share a horizontal line in Fig. 5). For plotting the rose diagrams, the methods are sorted alphabetically, starting from the x-axis (the direction of the cosine axis) and proceeding counterclockwise, and are colored based on their values: the best methods are colored green, the worst red. As Fig. 5 shows, the AWLP-H method had the best performance in terms of all four spectral metrics, followed by the GLP-Reg-FS method, and all metrics placed the MMP method third.

Fig. 5. Comparison between pan-sharpening methods in terms of spectral quality. Methods sharing a common horizontal line did not have a statistically significant difference at α = 0.05.

The PCA, Brovey, and Gram-Schmidt methods had the poorest performances, i.e. the largest SAM and ERGAS values and the smallest UIQI and CC values. Most MRA-based and Hybrid methods had higher quality values than the CS-based algorithms. Most of the IHS-based fusion methods had relatively low quality values, indicating poor performance compared with the other methods. The fused images from the simple wavelet-based methods (SW and AW) had low spectral quality, but when those methods were combined with other algorithms, such as Gram-Schmidt, IHS, and PCA, their spectral performance increased substantially. The IHS-based methods, i.e. the derivatives of the GIHS method, and the Hybrid algorithms derived from GIHS had better spectral performance than GIHS itself. Most of the pyramid-based methods, such as MTF-GLP-HPM, MTF-GLP-CBD, and MTF-GLP, also had relatively high spectral quality. The VO-based methods had moderate spectral performance, and none of them was among the best or worst methods.
3.3. Spatial comparison of pan-sharpening methods

Similar to the analysis of the spectral metrics, statistical analyses were conducted to compare the fusion methods in terms of spatial quality, based on the edge CC, SCC, and DS values. One ANOVA was used to analyze the results for each of the three metrics. The overall F-test for each of the three metrics was statistically significant (p-value < 0.001), meaning that at least two methods had statistically different performances for each metric. Therefore, two sets of statistical tests, i.e. a comparison of the categories of methods and pair-wise comparisons of all methods, were conducted to investigate the differences among the performances of the methods, similar to the analysis of the spectral results in Section 3.2. As Table 4 shows, the Hybrid methods performed better than the other three categories, with an average improvement in SCC of 0.017, 0.013, and 0.020 compared to the CS-based, MRA-based, and VO-based methods, respectively (all three differences were statistically significant with p-value < 0.001). The difference between the CS-based and the MRA-based methods in terms of SCC was 0.004, which was not statistically significant (p-value = 0.150). In terms of the edge CC and DS metrics, the Hybrid and CS-based methods were the best and the worst categories, respectively, as displayed in Table 4. When considering SCC as the measure, the Hybrid methods still presented the best spatial quality, while the VO-based methods had the worst.

Table 4
Comparisons between the four pan-sharpening categories based on the spatial assessment metrics (mean difference between category means, with the associated p-value).

Categories | Edge CC mean diff. | Edge CC p-value | SCC mean diff. | SCC p-value | DS mean diff. | DS p-value
CS − MRA | −0.016 | 0.0005 | −0.004 | 0.150 | 0.008 | <0.0001
CS − Hybrid | −0.065 | <0.0001 | −0.017 | <0.0001 | 0.021 | <0.0001
CS − VO | −0.027 | <0.0001 | 0.002 | 0.487 | 0.017 | <0.0001
MRA − Hybrid | −0.049 | <0.0001 | −0.013 | <0.0001 | 0.013 | <0.0001
MRA − VO | −0.012 | 0.058 | 0.006 | 0.092 | 0.009 | <0.0001
Hybrid − VO | 0.038 | <0.0001 | 0.020 | <0.0001 | −0.004 | 0.035

For the second set of tests, pair-wise t-tests based on the spatial quality assessment metrics were conducted between pairs of individual methods (Fig. 6). Based on the results depicted in Fig. 6, the GIHS-BTP, Ehlers, and WWI methods had the best spatial quality, while the PRACS, NIHS, and PCA methods had the lowest average spatial quality among all methods. The IGIHS-AW, GLP-Reg-FS, DWTSR, and AWLP-H methods had the best spatial performance within the CS, MRA, VO-based, and Hybrid groups, respectively; conversely, the PRACS, MTF-GLP-CBD, L12-norm, and AWI methods had the poorest performance within each of those groups. Most of the Hybrid and CS-based algorithms were superior in terms of spatial quality, whereas most of the VO-based and MRA-based methods were relatively weak from this standpoint. Most of the wavelet-based and IHS-based methods had relatively high spatial quality. Several IHS-based fusion methods, such as GIHS, IGIHS-AW, GIHS-BTP, and Ehlers, had similar spatial performance, and the same holds for some of the wavelet-based algorithm pairs, e.g. WWI-SWI and AWLP-AWI.

Fig. 6. Comparison among pan-sharpening methods in terms of spatial quality. Methods sharing a common horizontal line did not have a statistically significant difference at α = 0.05.
F. Dadrass Javan et al. ISPRS Journal of Photogrammetry and Remote Sensing 171 (2021) 101–117

most of the Vo-based and MRA-based methods were relatively weak coefficients at full resolution, proposed for the GLP-Reg-FS method, can
from this standpoint. Most of the wavelet-based and IHS-based methods estimate the injection coefficients in a way to minimize the spectral and
had relatively high spatial quality. Several IHS-based fusion methods spatial distortion of fusion product (Vivone et al., 2018). The good
such as GIHS, IGIHS-AW, GIHS-BTP, and Ehlers had similar spatial qualitative and quantitative results of this method was also reported by
performances. This is also true for some of the wavelet-based algorithm Li et al. (2020). The high spectral quality of MMP can be attributed to its
pairs e.g. WWI-SWI and AWLP-AWI. success in separating the spectral foreground and background, and
opacity of foreground color (called alpha channel) from the MS image.
4. Discussion According to (Kang et al., 2013), replacing the alpha channel with
spatial details of PAN image through a linear composite of foreground
4.1. Comparison between pan-sharpening methods and categories and background color does not create much spectral distortion. The
good spectral preservation capability of MMP was quantitatively and
A rigorous statistical comparison of a variety of pixel-level fusion visually confirmed by Liu et al. (2017) as well. The PCA and Brovey
methods for the pan-sharpening of high-resolution satellite imagery was methods had the poorest performance. The poor performance of PCA
conducted in this research. To provide a comprehensive evaluation, the and Brovey methods were also mentioned in Jelének et al. (2016) and
most commonly used pan-sharpening methods, including 41 methods in Helmy and El-Tawel (2015). Based on the results of comparing families
four categories were selected and tested on 21 case study images with spectrally, most MRA-based and Hybrid methods resulted in higher
different land cover types, scene heterogeneity levels, and topographic fusion spectral quality compared to the Vo-based and CS-based methods.
conditions. The methods were implemented as a fusion toolbox in Most of the IHS-based fusion methods had relatively poor spectral per­
MATLAB. The quality assessment was performed based on spectral and formances. The fused images from the simple wavelet-based methods
spatial aspects using four spectral and three spatial quality assessment (SW and AW) had lower spectral quality, but when those methods were
metrics. Pairwise comparisons were conducted between each pair of combined with other algorithms such as Gram-Schmidt, IHS, and PCA,
pan-sharpening methods and groups of methods were compared as well. their spectral performances increased substantially. This could be
The comparisons of groups were conducted to examine the relative because input images can be better decomposed into high and
performance of different “families” of methods. Results from this low-frequency information when the intensity component in IHS or the
research may be utilized by analysts when choosing an appropriate first principal component in PCA derived from MS image is incorporated
fusion method for their corresponding application. into the wavelet-based procedures. In this way, the spatial details of the
Based on the qualitative assessment of the results, SW, BDSD, and PAN image can be injected more efficiently, thereby causing less spectral
AWI methods demonstrated the lowest level of spectral quality, and distortion (Yang et al., 2017b). The IHS-based methods, i.e. the de­
PGF, MMP, AWLP-H, and SWI presented the highest level of spectral rivatives of the GIHS method, and the Hybrid algorithms derived from
quality. The superiority of the MMP approach was also reported by Kang GIHS had higher spectral performances compared with GIHS. Most of
et al. (2013) where it was shown that MMP outperforms a variety of the pyramid-based methods, such as MTF-GLP-HPM, MTF-GLP-CBD,
state-of-the-art pan-sharpening algorithms. Moreover, AWI, NIHS, GHS- and MTF-GLP also had higher spectral quality. The multiscale and
TP, and BDSD were less efficient and WWI, GHS, SWPC, and AWLP-H oversampled structure of GLP-based algorithms which employs the MTF
were more efficient considering the spatial quality. AWLP-H and BDSD of the MS scanner to design the GLP reduction filter is the main reason
resulted in the highest and the lowest quality, respectively, considering for its good spectral preservation (Snehmani et al., 2017; Vivone et al.,
both spectral and spatial quality. GIHS, PGF, and WWI resulted in 2018).
moderate spectral and spatial quality. On the other hand, the substantial In terms of spatial quality, the Hybrid methods performed better than
changes of color occurred in the pan-sharpened images belonging to the the other three categories of methods, while the MRA-based, CS-based,
CS family, i.e. NIHS, GS, SFIM, Indusion, and HCS. The best spectral and Vo-based methods were the next best categories. The GIHS-BTP
preservation was observed in the Hybrid family where most of the method had the best spatial performance, followed by the Ehlers
methods also presented high spatial quality. Spectral and spatial quali­ method, which was also declared by Tu et al. (2007) and Ehlers et al.
ties of the fusion products are almost inextricable, hence the spatial (2010). The high spatial performance of GIHS-BTP is probably due to the
artifacts can lead to spectral distortions and vice versa. The sharpness simple energy-normalization procedure employed by Tu et al. (2007) to
level of the CS family was not considerably different from the MRA- seek the best tradeoff for spatial details and color distortion (Dadras­
based results. Even though VO-based methods had more computa­ Javan et al., 2018).
tional complexities than the other categories of methods, they did not PRACS, NIHS, and PCA methods had the poorest performance, which
provide higher quality outputs than the other categories. confirms the results from Vivone et al. (2014b) and Xing et al. (2018).
In terms of quantitative spectral quality assessment, the AWLP-H The weak performance of PRACS, PCA, and BT seems to be due to the
method had the best performance, followed by the GLP-Reg-FS and substitution step conducted for the injection of spatial information. The
MMP methods. AWLP-H algorithm which showed an overall superiority replacement solution that is performed in all CS-based methods inevi­
among all methods is the instrument-optimized, haze-corrected version tably results in some spectral distortion, and hence some degrees of
of the popular AWLP method. The good performance of this algorithm spatial artifacts (Li et al., 2018). This is because a great amount of
was also reported by Li et al. (2020). As the important effect of haze on spectral content existing in the substitutional component of these
spatial and spectral quality of pan-sharpening algorithms was addressed methods is lost after the replacement (Alparone et al., 2017). The ma­
by Li and Jing. (2017), haze correction can be a reason why the quali­ jority of IHS-based fusion methods had relatively high spatial quality
tative and quantitative results of AWLP-H could yield spatially/spec­ values, which indicate good spatial performances. Also, this is true for
trally fine results. Moreover, the modifications pursued by Vivone et al. most of the wavelet-based methods. Generally, none of the pyramid-
(2019), achieving an implicit optimization towards the spectral and based methods had high spatial performance in comparison with the
spatial responses of the imaging instrument, are arguably the other other methods.
reasons for spectrally and spatially better output of AWLP-H. Regarding Generally, it is difficult to preserve both high spectral and spatial
the GLP-Reg-FS method, its superior performance compared to the other qualities of the input images during pan-sharpening (Vivone et al.,
methods can be attributed to its proposed scheme for injection co­ 2014b; Snehmani et al., 2017). Some degree of trade-off was observed
PRACS, NIHS, and PCA had the poorest performance, which confirms the results of Vivone et al. (2014b) and Xing et al. (2018). The weak performance of PRACS, PCA, and BT seems to be due to the substitution step conducted for the injection of spatial information: the replacement performed in all CS-based methods inevitably results in some spectral distortion, and hence some degree of spatial artifacts (Li et al., 2018), because a great amount of the spectral content in the substituted component is lost after the replacement (Alparone et al., 2017). The majority of the IHS-based fusion methods had relatively high spatial quality values, indicating good spatial performance, and the same holds for most of the wavelet-based methods. By contrast, none of the pyramid-based methods had high spatial performance in comparison with the other methods.

Generally, it is difficult to preserve both high spectral and high spatial quality of the input images during pan-sharpening (Vivone et al., 2014b; Snehmani et al., 2017), and some degree of trade-off was observed between the spectral and spatial performances of several methods. Fig. 7 presents the scatterplot of the mean spectral and spatial quality values of the pan-sharpening methods, averaged over all 21 case studies.


Fig. 7. Scatterplot of spatial and spectral quality values for all 41 pan-sharpening methods averaged over all 21 case studies.

To average the metric results, all achieved values were normalized to the range 0–1, from the worst to the best. Methods in the rightmost part of the spectral axis therefore have the best spectral quality, and those in the upper part of the spatial axis have the best spatial quality. As Fig. 7 shows, not all methods achieved both high spectral and high spatial fusion quality. For example, methods such as Indusion, SFIM, MTF-GLP-CBD, and MTF-GLP-HPM-PP maintain the spectral properties of the images better than the spatial quality, while CS-based methods such as GIHS, GIHS-AW, and Ehlers produce pan-sharpened images with high spatial quality but relatively poor spectral quality. Depending on the target application, an analyst might prioritize one of the quality aspects over the other when selecting a pan-sharpening method.
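A minimal sketch of the per-metric min-max rescaling implied here (the handling of metrics for which lower values are better is an assumption, not a detail stated in the text):

    import numpy as np

    def minmax_scale(scores, higher_is_better=True):
        """Rescale one metric's scores over all methods to 0 (worst) .. 1 (best)."""
        s = np.asarray(scores, dtype=float)
        span = s.max() - s.min()
        s = (s - s.min()) / span if span > 0 else np.zeros_like(s)
        return s if higher_is_better else 1.0 - s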
The spatial and spectral qualities of the fusion methods are not the only aspects to consider during method selection; the computational speed of the algorithms is also important, especially for operational projects with large numbers of images. Table 5 reports the time spent running each method on a PC with a 2.50 GHz Intel Core i7 processor and 6 GB of RAM; the dimensions of the MS and PAN images in all datasets in this comparison were 128 × 128 and 512 × 512 pixels, respectively. While the absolute time values will differ on machines with other specifications, the relative processing times can still be used for method prioritization. Generally, most of the fastest algorithms are CS-based, followed by the Hybrid methods. The MRA-based methods required longer run times, and the VO-based algorithms were considerably more time-consuming still. GIHS and its derivatives were the fastest algorithms, while among the MRA family the derivatives of MTF-GLP required longer times to run. VP-LGC and DWTSR were by far the slowest techniques, taking significantly longer than all other methods.
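Such a comparison can be reproduced on other hardware with a harness along the following lines; the method registry and the best-of-N timing policy are our assumptions rather than the exact protocol used for Table 5.

    import time

    def benchmark(methods, ms_up, pan, repeats=5):
        """Time each pan-sharpening callable on the same MS/PAN pair.

        methods : dict mapping a method name to a callable f(ms_up, pan)
        Returns (name, best wall-clock seconds) pairs, fastest first.
        """
        results = []
        for name, fusion in methods.items():
            timings = []
            for _ in range(repeats):
                t0 = time.perf_counter()
                fusion(ms_up, pan)
                timings.append(time.perf_counter() - t0)
            results.append((name, min(timings)))   # best-of-N damps OS jitter
        return sorted(results, key=lambda r: r[1])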

Table 5
Execution times for the compared pan-sharpening methods, ordered from fastest to slowest. The algorithms belonging to the CS, MRA, Hybrid, and VO categories are shown in red, blue, green, and black, respectively.


4.2. New developments in the pan-sharpening domain based on deep learning

In recent years, numerous neural network and deep learning-based algorithms, especially CNN methods, have been introduced for the pan-sharpening of satellite imagery. Studies such as Huang et al. (2015), Masi et al. (2016), and Wei and Yuan (2017) obtained promising results using deep learning-based methods, and more recent studies (Yang et al., 2017a; Scarpa et al., 2018; Zhang et al., 2019; Rohith et al., 2020; Vitale and Scarpa, 2020) have further improved the efficiency and accuracy of deep learning methods. Deep learning-based approaches usually include three components: a feature-extraction subnetwork, a fusion layer, and a super-resolution subnetwork (Wei et al., 2020). Commonly, the learning strategy, the cost function, and the network architecture have been the main areas of improvement in deep learning pan-sharpening studies (Liu et al., 2020; Cai and Huang, 2020; Qu et al., 2020; Wei et al., 2020). Even though deep learning-based methods have shown promising performance, they usually require large training datasets and substantial computational resources, so in most published studies they are applied to small experimental case studies. Consequently, deep learning-based methods were excluded from this research; further research is needed to examine their suitability for operational large-area mapping.
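As a purely illustrative sketch of this three-component layout (the layer counts and widths are assumptions, not a reproduction of any published network), a minimal PyTorch model might look as follows; real pan-sharpening CNNs are considerably deeper and carefully trained.

    import torch
    import torch.nn as nn

    class ToyPanSharpenNet(nn.Module):
        """Toy three-component pan-sharpening CNN (illustrative only)."""

        def __init__(self, ms_bands=4, feat=32):
            super().__init__()
            # 1) Feature-extraction subnetworks, one per input.
            self.ms_feat = nn.Sequential(nn.Conv2d(ms_bands, feat, 3, padding=1), nn.ReLU())
            self.pan_feat = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU())
            # 2) Fusion layer: concatenated features are merged.
            self.fusion = nn.Sequential(nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU())
            # 3) Super-resolution / reconstruction subnetwork.
            self.reconstruct = nn.Conv2d(feat, ms_bands, 3, padding=1)

        def forward(self, ms_up, pan):
            # ms_up: (N, B, H, W) MS upsampled to the PAN grid; pan: (N, 1, H, W)
            f = torch.cat([self.ms_feat(ms_up), self.pan_feat(pan)], dim=1)
            # Residual prediction: the network learns only the missing detail.
            return ms_up + self.reconstruct(self.fusion(f))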
5. Conclusion

Frequently, pan-sharpened images are used as inputs to subsequent analyses, and the quality of the pan-sharpened images can affect the reliability of those analyses. The results from this research can be used by analysts to make more informed decisions during pan-sharpening method selection. Depending on the target application, the spectral or the spatial quality of pan-sharpening might be prioritized. Based on this research's case studies, the MRA-based methods performed better in terms of spectral quality, whereas most of the Hybrid methods had the highest spatial quality; AWLP-H had the highest spectral quality, whereas GIHS-BTP showed the highest spatial fidelity. Moreover, trade-offs between the spectral and spatial qualities of the fusion outputs were observed for many methods. Therefore, methods with a balance between the two quality aspects might be preferred when both aspects are important for a given application. Run time can also affect method selection, especially when processing large datasets: the CS-based methods generally had the fastest run times, whereas most MRA-based methods had relatively long run times.

Besides the spectral and spatial quality of the pan-sharpened images, several other factors might affect method selection, such as the algorithm's level of automation, its simplicity, its scalability (how a method's performance and requirements are affected as the study area or the number of images increases), the analysts' familiarity with the method, software and hardware availability, and the scene type, heterogeneity, and topographic variation. These factors can be investigated in future research. Moreover, some recently introduced pan-sharpening methods, e.g., deep neural network-based pan-sharpening (Huang et al., 2015), CNN-based pan-sharpening (Masi et al., 2016), pan-sharpening via lattice structures (Kaplan and Erer, 2016), NLPCA-based hybrid methods (Licciardi et al., 2016), and the methods of Zheng et al. (2017) and Zhang et al. (2020), still need to be evaluated. While this research examined the relative spectral and spatial performance of several pan-sharpening methods, the practical significance of the differences among the methods should also be investigated in future work: it is necessary to quantify how differences in the spectral and spatial quality of pan-sharpening methods affect the quality of the subsequent analyses, which depends on the target application.

Funding sources

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

Aiazzi, B., Baronti, S., Selva, M., 2007. Improving component substitution pansharpening through multivariate regression of MS + Pan data. IEEE Trans. Geosci. Remote Sens. 45, 3230–3239.
Aiazzi, B., Alparone, L., Baronti, S., Garzelli, A., 2002. Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis. IEEE Trans. Geosci. Remote Sens. 40, 2300–2312.
Aiazzi, B., Alparone, L., Baronti, S., Garzelli, A., Selva, M., 2006. MTF-tailored multiscale fusion of high-resolution MS and Pan imagery. Photogramm. Eng. Remote Sens. 72, 591–596.
Aiazzi, B., Alparone, L., Baronti, S., Garzelli, A., Selva, M., 2003. An MTF-based spectral distortion minimizing model for pan-sharpening of very high resolution multispectral images of urban areas. In: 2003 2nd GRSS/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas. IEEE, pp. 90–94.
Alidoost, F., Sharifi, M.A., Stein, A., 2015. Region- and pixel-based image fusion for disaggregation of actual evapotranspiration. Int. J. Image Data Fusion 6, 216–231.
Alimuddin, I., Sumantyo, J.T.S., Kuze, H., 2012. Assessment of pan-sharpening methods applied to image fusion of remotely sensed multi-band data. Int. J. Appl. Earth Obs. Geoinf. 18, 165–175.
Alparone, L., Wald, L., Chanussot, J., Thomas, C., Gamba, P., Bruce, L.M., 2007. Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data-fusion contest. IEEE Trans. Geosci. Remote Sens. 45, 3012–3021.
Alparone, L., Aiazzi, B., Baronti, S., Garzelli, A., Nencini, F., Selva, M., 2008. Multispectral and panchromatic data fusion assessment without reference. Photogramm. Eng. Remote Sens. 74, 193–200.
Alparone, L., Aiazzi, B., Baronti, S., Garzelli, A., 2015. Remote Sensing Image Fusion. CRC Press.
Alparone, L., Garzelli, A., Vivone, G., 2017. Intersensor statistical matching for pansharpening: Theoretical issues and practical solutions. IEEE Trans. Geosci. Remote Sens. 55, 4682–4695.
Amro, I., Mateos, J., Vega, M., Molina, R., Katsaggelos, A.K., 2011. A survey of classical methods and new trends in pansharpening of multispectral images. EURASIP J. Adv. Signal Process. 2011, 1–22.
Amolins, K., Zhang, Y., Dare, P., 2007. Wavelet based image fusion techniques—An introduction, review and comparison. ISPRS J. Photogramm. Remote Sens. 62, 249–263.
Aly, H.A., Sharma, G., 2014. A regularized model-based optimization framework for pan-sharpening. IEEE Trans. Image Process. 23, 2596–2608.
Azarang, A., Manoochehri, H.E., Kehtarnavaz, N., 2019. Convolutional autoencoder-based multispectral image fusion. IEEE Access 7, 35673–35683.
Baisantry, M., Negi, D.S., Manocha, O.P., Singh, B.P., Kishore, S., 2011. Object-based image fusion for minimization of color distortion in high-resolution satellite imagery. In: 2011 International Conference on Image Information Processing. IEEE, pp. 1–6.
Ballester, C., Caselles, V., Igual, L., Verdera, J., Rougé, B., 2006. A variational model for P+XS image fusion. Int. J. Comput. Vis. 69, 43–58.
Belgiu, M., Stein, A., 2019. Spatiotemporal image fusion in remote sensing. Remote Sens. 11, 818.
Blasch, E., Li, X., Chen, G., Li, W., 2008. Image quality assessment for performance evaluation of image fusion. In: 2008 11th International Conference on Information Fusion. IEEE, pp. 1–6.
Cai, J., Huang, B., 2020. Super-resolution-guided progressive pansharpening based on a deep convolutional neural network. IEEE Trans. Geosci. Remote Sens.
Chavez, P., Sides, S.C., Anderson, J.A., 1991. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. Photogramm. Eng. Remote Sens. 57, 295–303.
Chen, Y., Fung, T., Lin, W., Wang, J., 2005. An image fusion method based on object-oriented image classification. In: Proceedings 2005 IEEE International Geoscience and Remote Sensing Symposium (IGARSS'05). IEEE, pp. 3924–3927.
Chen, S., Luo, J., Shen, Z., Luo, G., Zhu, C., 2008. Using synthetic variable ratio method to fuse multi-source remotely sensed images based on sensor spectral response. In: IGARSS 2008 - 2008 IEEE International Geoscience and Remote Sensing Symposium. IEEE, pp. II-1084.
Chen, Y., Blum, R.S., 2005. Experimental tests of image fusion for night vision. In: 2005 7th International Conference on Information Fusion. IEEE, 8 pp.
Cheng, J., Liu, H., Liu, T., Wang, F., Li, H., 2015. Remote sensing image fusion via wavelet transform and sparse representation. ISPRS J. Photogramm. Remote Sens. 104, 158–173.
Choi, M., 2006. A new intensity-hue-saturation fusion approach to image fusion with a tradeoff parameter. IEEE Trans. Geosci. Remote Sens. 44, 1672–1682.
Choi, J., Yu, K., Kim, Y., 2010. A new adaptive component-substitution-based satellite image fusion by using partial replacement. IEEE Trans. Geosci. Remote Sens. 49, 295–309.
DadrasJavan, F., Samadzadegan, F., 2014. An object-level strategy for pan-sharpening quality assessment of high-resolution satellite imagery. Adv. Sp. Res. 54, 2286–2295.
DadrasJavan, F., Samadzadegan, F., Fathollahi, F., 2018. Spectral and spatial quality assessment of IHS and wavelet based pan-sharpening techniques for high resolution satellite imagery. Adv. Image Video Process. 6, 1.
Dou, W., 2018. Image degradation for quality assessment of pan-sharpening methods. Remote Sens. 10, 154.
Duran, J., Buades, A., Coll, B., Sbert, C., Blanchet, G., 2017. A survey of pansharpening
methods with a new band-decoupled variational model. ISPRS J. Photogramm.
Remote Sens. 125, 78–105.
Ehlers, M., Klonus, S., Johan Åstrand, P., Rosso, P., 2010. Multi-sensor image fusion for
pansharpening in remote sensing. Int. J. Image Data Fusion 1, 25–45.


Fasbender, D., Radoux, J., Bogaert, P., 2008. Bayesian data fusion for adaptable image pansharpening. IEEE Trans. Geosci. Remote Sens. 46, 1847–1857.
Fu, X., Lin, Z., Huang, Y., Ding, X., 2019. A variational pan-sharpening with local gradient constraints. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10265–10274.
Garzelli, A., Nencini, F., Alparone, L., Aiazzi, B., Baronti, S., 2004. Pan-sharpening of multispectral images: A critical review and comparison. IEEE.
Garzelli, A., Nencini, F., Capobianco, L., 2007. Optimal MMSE pan sharpening of very high resolution multispectral images. IEEE Trans. Geosci. Remote Sens. 46, 228–241.
Ghahremani, M., Ghassemian, H., 2015. A compressed-sensing-based pan-sharpening method for spectral distortion reduction. IEEE Trans. Geosci. Remote Sens. 54, 2194–2206.
Ghahremani, M., Ghassemian, H., 2016. Nonlinear IHS: A promising method for pan-sharpening. IEEE Geosci. Remote Sens. Lett. 13, 1606–1610.
Ghassemian, H., 2016. A review of remote sensing image fusion methods. Inf. Fusion 32, 75–89.
González-Audícana, M., Saleta, J.L., Catalán, R.G., García, R., 2004. Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 42, 1291–1299.
González-Audícana, M., Otazu, X., Fors, O., Seco, A., 2005. Comparison between Mallat's and the 'à trous' discrete wavelet transform based algorithms for the fusion of multispectral and panchromatic images. Int. J. Remote Sens. 26, 595–614.
Gharbia, R., El Baz, A.H., Hassanien, A.E., Tolba, M.F., 2014. Remote sensing image fusion approach based on Brovey and wavelets transforms. In: Proceedings of the Fifth International Conference on Innovations in Bio-Inspired Computing and Applications IBICA 2014. Springer, pp. 311–321.
Hasanlou, M., Saradjian, M.R., 2016. Quality assessment of pan-sharpening methods in high-resolution satellite images using radiometric and geometric index. Arab. J. Geosci. 9, 45.
He, X., Condat, L., Bioucas-Dias, J.M., Chanussot, J., Xia, J., 2014. A new pansharpening method based on spatial and spectral sparsity priors. IEEE Trans. Image Process. 23, 4160–4174.
He, L., Rao, Y., Li, J., Chanussot, J., Plaza, A., Zhu, J., Li, B., 2019. Pansharpening via detail injection based convolutional neural networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 12, 1188–1204.
Helmy, A.K., El-Tawel, G.S., 2015. An integrated scheme to improve pan-sharpening visual quality of satellite images. Egypt. Informatics J. 16, 121–131.
Huang, W., Xiao, L., Wei, Z., Liu, H., Tang, S., 2015. A new pan-sharpening method with deep neural networks. IEEE Geosci. Remote Sens. Lett. 12, 1037–1041.
Jagalingam, P., Hegde, A.V., 2015. A review of quality metrics for fused image. Aquat. Proc. 4, 133–142.
Javan, F.D., Samadzadegan, F., Reinartz, P., 2013. Spatial quality assessment of pan-sharpened high resolution satellite imagery based on an automatically estimated edge based metric. Remote Sens. 5, 6539–6559.
Jiang, D., Zhuang, D., Huang, Y., Fu, J., 2011. Survey of multispectral image fusion techniques in remote sensing applications. Image Fusion Its Appl. 1–23.
Jelének, J., Kopačková, V., Koucká, L., Mišurec, J., 2016. Testing a modified PCA-based sharpening approach for image fusion. Remote Sens. 8, 794.
Jiang, M., Shen, H., Li, J., Yuan, Q., Zhang, L., 2020. A differential information residual convolutional neural network for pansharpening. ISPRS J. Photogramm. Remote Sens. 163, 257–271.
Jing, L., Cheng, Q., 2012. An image fusion method based on object-oriented classification. Int. J. Remote Sens. 33, 2434–2450.
Kang, X., Li, S., Benediktsson, J.A., 2013. Pansharpening with matting model. IEEE Trans. Geosci. Remote Sens. 52, 5088–5099.
Kaplan, N.H., Erer, I., 2016. Pansharpening of multispectral satellite images via lattice structures. Int. J. Comput. Appl. 140.
Kim, Y., Lee, C., Han, D., Kim, Y., Kim, Y., 2010. Improved additive-wavelet image fusion. IEEE Geosci. Remote Sens. Lett. 8, 263–267.
Khan, M.M., Chanussot, J., Condat, L., Montanvert, A., 2008. Indusion: Fusion of multispectral and panchromatic images using the induction scaling technique. IEEE Geosci. Remote Sens. Lett. 5, 98–102.
Khan, M.M., Alparone, L., Chanussot, J., 2009. Pansharpening quality assessment using the modulation transfer functions of instruments. IEEE Trans. Geosci. Remote Sens. 47, 3880–3891.
Klonus, S., Ehlers, M., 2009. Performance of evaluation methods in image fusion. In: 2009 12th International Conference on Information Fusion. IEEE, pp. 1409–1416.
Lari, S.N., Yazdi, M., 2016. Improved IHS pan-sharpening method based on adaptive injection of à trous wavelet decomposition. Int. J. Signal Process. Image Process. Pattern Recognit. 9, 291–308.
Lee, J., Lee, C., 2009. Fast and efficient panchromatic sharpening. IEEE Trans. Geosci. Remote Sens. 48, 155–163.
Leung, L.W., King, B., Vohora, V., 2001. Comparison of image data fusion techniques using entropy and INI. In: 22nd Asian Conference on Remote Sensing, p. 9.
Li, H., Jing, L., Tang, Y., 2017. Assessment of pansharpening methods applied to WorldView-2 imagery fusion. Sensors 17, 89.
Li, H., Jing, L., 2017. Improvement of a pansharpening method taking into account haze. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 10, 5039–5055.
Li, Q., Yang, X., Wu, W., Liu, K., Jeon, G., 2018. Pansharpening multispectral remote-sensing images with guided filter for monitoring impact of human behavior on environment. Concurr. Comput. Pract. Exp. e5074.
Li, X., Yan, H., Xie, W., Kang, L., Tian, Y., 2020. An improved pulse-coupled neural network model for pansharpening. Sensors 20, 2764.
Licciardi, G., Vivone, G., Dalla Mura, M., Restaino, R., Chanussot, J., 2016. Multi-resolution analysis techniques and nonlinear PCA for hybrid pansharpening applications. Multidimens. Syst. Signal Process. 27, 807–830.
Liu, J.G., 2000. Smoothing filter-based intensity modulation: A spectral preserve image fusion technique for improving spatial details. Int. J. Remote Sens. 21, 3461–3472.
Liu, Y., Wang, Z., 2013. A practical pan-sharpening method with wavelet transform and sparse representation. In: 2013 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE, pp. 288–293.
Liu, P., Xiao, L., Li, T., 2017. A variational pan-sharpening method based on spatial fractional-order geometry and spectral–spatial low-rank priors. IEEE Trans. Geosci. Remote Sens. 56, 1788–1802.
Liu, X., Liu, Q., Wang, Y., 2020. Remote sensing image fusion based on two-stream fusion network. Inf. Fusion 55, 1–15.
Liu, X., Wang, Y., Liu, Q., 2018. PSGAN: A generative adversarial network for remote sensing image pan-sharpening. In: 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, pp. 873–877.
Loncan, L., De Almeida, L.B., Bioucas-Dias, J.M., Briottet, X., Chanussot, J., Dobigeon, N., Fabre, S., Liao, W., Licciardi, G.A., Simoes, M., 2015. Hyperspectral pansharpening: A review. IEEE Geosci. Remote Sens. Mag. 3, 27–46.
Mandhare, R.A., Upadhyay, P., Gupta, S., 2013. Pixel-level image fusion using Brovey transform and wavelet transform. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 2, 2690–2695.
Masi, G., Cozzolino, D., Verdoliva, L., Scarpa, G., 2016. Pansharpening by convolutional neural networks. Remote Sens. 8, 594.
Meng, X., Shen, H., Li, H., Zhang, L., Fu, R., 2019. Review of the pansharpening methods for remote sensing images based on the idea of meta-analysis: Practical discussion and challenges. Inf. Fusion 46, 102–113. https://doi.org/10.1016/j.inffus.2018.05.006.
Nikolakopoulos, K.G., 2008. Comparison of nine fusion techniques for very high resolution data. Photogramm. Eng. Remote Sens. 74, 647–659.
Nunez, J., Otazu, X., Fors, O., Prades, A., Pala, V., Arbiol, R., 1999. Multiresolution-based image fusion with additive wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 37, 1204–1211.
Otazu, X., González-Audícana, M., Fors, O., Núñez, J., 2005. Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods. IEEE Trans. Geosci. Remote Sens. 43, 2376–2385.
Padwick, C., Deskevich, M., Pacifici, F., Smallwood, S., 2010. WorldView-2 pan-sharpening. In: Proceedings of the ASPRS 2010 Annual Conference, San Diego, CA, USA.
Pálsson, F., 2013. Pansharpening and Classification of Pansharpened Images. M.Sc. Thesis, Electrical and Computer Engineering, University of Iceland.
Palsson, F., Sveinsson, J.R., Ulfarsson, M.O., 2013. A new pansharpening algorithm based on total variation. IEEE Geosci. Remote Sens. Lett. 11, 318–322.
Pan, Z., Yu, J., Huang, H., Hu, S., Zhang, A., Ma, H., Sun, W., 2013. Super-resolution based on compressive sensing and structural self-similarity for remote sensing images. IEEE Trans. Geosci. Remote Sens. 51, 4864–4876.
Pandit, V.R., Bhiwani, R.J., 2015. Image fusion in remote sensing applications: A review. Int. J. Comput. Appl. 120.
Piella, G., Heijmans, H., 2003. A new quality metric for image fusion. In: Proceedings 2003 International Conference on Image Processing (Cat. No. 03CH37429). IEEE, pp. III-173.
Pohl, C., Van Genderen, J.L., 1998. Review article: Multisensor image fusion in remote sensing: concepts, methods and applications. Int. J. Remote Sens. 19, 823–854.
Pushparaj, J., Hegde, A.V., 2017. Evaluation of pan-sharpening methods for spatial and spectral quality. Appl. Geomatics 9, 1–12.
Qu, Y., Baghbaderani, R.K., Qi, H., Kwan, C., 2020. Unsupervised pansharpening based on self-attention mechanism. IEEE Trans. Geosci. Remote Sens.
Ranchin, T., Wald, L., 2000. Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation.
Restaino, R., Vivone, G., Dalla Mura, M., Chanussot, J., 2016a. Fusion of multispectral and panchromatic images based on morphological operators. IEEE Trans. Image Process. 25, 2882–2895.
Restaino, R., Dalla Mura, M., Vivone, G., Chanussot, J., 2016b. Context-adaptive pansharpening based on image segmentation. IEEE Trans. Geosci. Remote Sens. 55, 753–766.
Riyahi, R., Kleinn, C., Fuchs, H., 2009. Comparison of different image fusion techniques for individual tree crown identification using QuickBird images. Int. Soc. Photogramm. Remote Sensing, High-Resolution Earth Imaging Geospatial Inf. 38, 1–4.
Rockinger, O., Fechner, T., 1998. Pixel-level image fusion: the case of image sequences. In: Signal Processing, Sensor Fusion, and Target Recognition VII. International Society for Optics and Photonics, pp. 378–388.
Rodríguez-Esparragón, D., Marcello, J., Eugenio, F., García-Pedrero, A., Gonzalo-Martín, C., 2017. Object-based quality evaluation procedure for fused remote sensing imagery. Neurocomputing 255, 40–51.
Rohith, G., Kumar, L.S., 2020. Super-resolution based deep learning techniques for panchromatic satellite images in application to pansharpening. IEEE Access 8, 162099–162121.
Sagan, V., Maimaitijiang, M., Sidike, P., Eblimit, K., Peterson, K.T., Hartling, S., Esposito, F., Khanal, K., Newcomb, M., Pauli, D., 2019. UAV-based high resolution thermal imaging for vegetation monitoring, and plant phenotyping using ICI 8640 P, FLIR Vue Pro R 640, and thermomap cameras. Remote Sens. 11, 330.
Scarpa, G., Vitale, S., Cozzolino, D., 2018. Target-adaptive CNN-based pansharpening. IEEE Trans. Geosci. Remote Sens. 56, 5443–5457.
Shahdoosti, H.R., Ghassemian, H., 2011. Multispectral and panchromatic image fusion by combining spectral PCA and spatial PCA methods.
Shahdoosti, H.R., 2017. MS and PAN image fusion by combining Brovey and wavelet methods. arXiv Prepr. arXiv1701.01996.


Shen, H., Jiang, M., Li, J., Yuan, Q., Wei, Y., Zhang, L., 2019. Spatial-spectral fusion by combining deep learning and variational model. IEEE Trans. Geosci. Remote Sens. 57, 6169–6181.
Snehmani, Gore, A., Ganju, A., Kumar, S., Srivastava, P.K., RP, H.R., 2017. A comparative analysis of pansharpening techniques on QuickBird and WorldView-3 images. Geocarto Int. 32, 1268–1284.
Student, P., 2014. Study of image fusion-techniques method and applications.
Sulaiman, A.G., Elashmawi, W.H., El-Tawel, G.S., 2020. A robust pan-sharpening scheme for improving resolution of satellite images in the domain of the nonsubsampled shearlet transform. Sens. Imaging 21, 3.
Thomas, C., Wald, L., 2006. Analysis of changes in quality assessment with scale. In: 2006 9th International Conference on Information Fusion. IEEE, pp. 1–5.
Toosi, A., Javan, F.D., Samadzadegan, F., Mehravar, S., 2020. Object-based spectral quality assessment of high-resolution pan-sharpened satellite imageries: new combined fusion strategy to increase the spectral quality. Arab. J. Geosci. 13, 1–17.
Tu, T.-M., Cheng, W.-C., Chang, C.-P., Huang, P.S., Chang, J.-C., 2007. Best tradeoff for high-resolution image fusion to preserve spatial details and minimize color distortion. IEEE Geosci. Remote Sens. Lett. 4, 302–306.
Tu, T.-M., Huang, P.S., Hung, C.-L., Chang, C.-P., 2004. A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery. IEEE Geosci. Remote Sens. Lett. 1, 309–312.
Tu, T.-M., Su, S.-C., Shyu, H.-C., Huang, P.S., 2001. A new look at IHS-like image fusion methods. Inf. Fusion 2, 177–186.
Vicinanza, M.R., Restaino, R., Vivone, G., Dalla Mura, M., Chanussot, J., 2014. A pansharpening method based on the sparse representation of injected details. IEEE Geosci. Remote Sens. Lett. 12, 180–184.
Vijayaraj, V., Younan, N.H., O'Hara, C.G., 2006. Quantitative analysis of pansharpened images. Opt. Eng. 45, 46202.
Vitale, S., Scarpa, G., 2020. A detail-preserving cross-scale learning strategy for CNN-based pansharpening. Remote Sens. 12, 348.
Vivone, G., 2019. Robust band-dependent spatial-detail approaches for panchromatic sharpening. IEEE Trans. Geosci. Remote Sens. 57, 6421–6433.
Vivone, G., Restaino, R., Chanussot, J., 2018. Full scale regression-based injection coefficients for panchromatic sharpening. IEEE Trans. Image Process. 27, 3418–3431.
Vivone, G., Simões, M., Dalla Mura, M., Restaino, R., Bioucas-Dias, J.M., Licciardi, G.A., Chanussot, J., 2014a. Pansharpening based on semiblind deconvolution. IEEE Trans. Geosci. Remote Sens. 53, 1997–2010.
Vivone, G., Alparone, L., Chanussot, J., Dalla Mura, M., Garzelli, A., Licciardi, G.A., Restaino, R., Wald, L., 2014b. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 53, 2565–2586.
Vivone, G., Alparone, L., Garzelli, A., Lolli, S., 2019. Fast reproducible pansharpening based on instrument and acquisition modeling: AWLP revisited. Remote Sens. 11, 2315.
Wald, L., Ranchin, T., 2002. Liu 'Smoothing filter-based intensity modulation: A spectral preserve image fusion technique for improving spatial details'. Int. J. Remote Sens. 23, 593–597.
Wald, L., 2000. Quality of high resolution synthesised images: Is there a simple criterion?
Wang, Z., Bovik, A.C., 2002. A universal image quality index. IEEE Signal Process. Lett. 9, 81–84.
Wang, L., Cao, X., Chen, J., 2008. ISVR: an improved synthetic variable ratio method for image fusion. Geocarto Int. 23, 155–165.
Wang, T., Fang, F., Li, F., Zhang, G., 2018. High-quality Bayesian pansharpening. IEEE Trans. Image Process. 28, 227–239.
Wei, Y., Yuan, Q., 2017. Deep residual learning for remote sensed imagery pansharpening. In: 2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP). IEEE, pp. 1–4.
Wei, J., Xu, Y., Cai, W., Wu, Z., Chanussot, J., Wei, Z., 2020. A two-stream multiscale deep learning architecture for pan-sharpening. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 13, 5455–5465.
Witharana, C., Civco, D.L., Meyer, T.H., 2013. Evaluation of pansharpening algorithms in support of earth observation based rapid-mapping workflows. Appl. Geogr. 37, 63–87.
Xing, Y., Wang, M., Yang, S., Jiao, L., 2018. Pan-sharpening via deep metric learning. ISPRS J. Photogramm. Remote Sens. 145, 165–183.
Xu, J., Guan, Z., Liu, J., 2008. An improved IHS fusion method for merging multi-spectral and panchromatic images considering sensor spectral response. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 37, 1169–1174.
Xydeas, C.S., Petrovic, V., 2000. Objective image fusion performance measure. Electron. Lett. 36, 308–309.
Yakhdani, M.F., Azizi, A., 2010. Quality assessment of image fusion techniques for multisensor high resolution satellite images (case study: IRS-P5 and IRS-P6 satellite images).
Yang, J., Fu, X., Hu, Y., Huang, Y., Ding, X., Paisley, J., 2017a. PanNet: A deep network architecture for pan-sharpening. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 5449–5457.
Yang, Y., Wan, W., Huang, S., Lin, P., Que, Y., 2017b. A novel pan-sharpening framework based on matting model and multiscale transform. Remote Sens. 9, 391.
Yang, X., Jian, L., Yan, B., Liu, K., Zhang, L., Liu, Y., 2018a. A sparse representation based pansharpening method. Futur. Gener. Comput. Syst. 88, 385–399.
Yang, C., Zhan, Q., Liu, H., Ma, R., 2018b. An IHS-based pan-sharpening method for spectral fidelity improvement using ripplet transform and compressed sensing. Sensors 18, 3624.
Yin, H., 2015. Sparse representation based pansharpening with details injection model. Signal Process. 113, 218–227.
Yin, H., 2017. A joint sparse and low-rank decomposition for pansharpening of multispectral images. IEEE Trans. Geosci. Remote Sens. 55, 3545–3557.
Zeng, D., Hu, Y., Huang, Y., Xu, Z., Ding, X., 2016. Pan-sharpening with structural consistency and ℓ1/2 gradient prior. Remote Sens. Lett. 7, 1170–1179.
Zhang, X., Han, J., 2004. Multiscale contrast image fusion scheme with performance measures. Opt. Appl. 34, 453–461.
Zhang, Y., Hong, G., 2005. An IHS and wavelet integrated approach to improve pan-sharpening visual quality of natural colour IKONOS and QuickBird images. Inf. Fusion 6, 225–234.
Zhang, Y., 2008. Methods for image fusion quality assessment - a review, comparison and analysis. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 37, 1101–1109.
Zhang, K., Zuo, W., Gu, S., Zhang, L., 2017. Learning deep CNN denoiser prior for image restoration. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3929–3938.
Zhang, Y., Liu, C., Sun, M., Ou, Y., 2019. Pan-sharpening using an efficient bidirectional pyramid network. IEEE Trans. Geosci. Remote Sens. 57, 5549–5563.
Zhang, L., Sun, Y., Zhang, J., 2020. Pan-sharpening based on common saliency feature analysis and multiscale spatial information extraction for multiple remote sensing images. Int. J. Remote Sens. 41, 3095–3118.
Zheng, Y., Guo, M., Dai, Q., Wang, L., 2017. A pan-sharpening method based on guided image filtering: a case study over GF-2 imagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 42.
Zhou, J., Civco, D.L., Silander, J.A., 1998. A wavelet transform method to merge Landsat TM and SPOT panchromatic data. Int. J. Remote Sens. 19, 743–757.
Zhu, X.X., Bamler, R., 2012. A sparse image fusion algorithm with application to pan-sharpening. IEEE Trans. Geosci. Remote Sens. 51, 2827–2836.
Zhu, X.X., Grohnfeldt, C., Bamler, R., 2015. Exploiting joint sparsity for pansharpening: The J-SparseFI algorithm. IEEE Trans. Geosci. Remote Sens. 54, 2664–2681.
