Segmentation and Visualization of Flooded Areas Through Sentinel-1 Images and U-Net

Abstract—Floods are the most common natural phenomenon and cause the most significant economic and social damage to the population. They are becoming more frequent and dangerous. Consequently, it is necessary to create strategies to intervene effectively in the mitigation and resilience of the affected areas. Different methods and techniques have been developed to mitigate the damage caused by this phenomenon. Satellite programs provide a large amount of data on the earth's surface, and geospatial information processing tools help manage different natural disasters. Likewise, deep learning is an approach capable of forecasting time series that can be applied to satellite images for flood prediction and mapping. This article presents an approach for flood segmentation and visualization using the U-Net architecture and Sentinel-1 synthetic aperture radar (SAR) satellite imagery. The U-Net architecture can capture relevant features in SAR images. The approach comprises various phases, from data loading and preprocessing to flood inference and visualization. For the study, the georeferenced dataset Sen1Floods11 is used to train and validate the model across different numbers of training epochs. A study area in southeastern Mexico that experiences frequent floods was chosen. The results demonstrate that the segmentation model achieves high accuracy in detecting flooded areas, with promising metrics regarding loss, precision, and F1-score.

Index Terms—Deep learning (DL), flood mapping, flood segmentation, natural disasters, Sentinel-1, U-Net.

Manuscript received 7 December 2023; revised 29 January 2024 and 26 March 2024; accepted 31 March 2024. Date of publication 11 April 2024; date of current version 1 May 2024. (Corresponding author: Raúl Aquino Santos.)
Fernando Pech-May is with the Computer Systems Department, Instituto Tecnológico Superior de los Ríos, Balancán, Tabasco 86930, Mexico (e-mail: [email protected]).
Raúl Aquino-Santos is with the General Coordination of Scientific Research, Universidad de Colima, Colima, CO 28017, Mexico (e-mail: [email protected]).
Omar Álvarez-Cárdenas is with the Telematics Faculty, Universidad de Colima, Colima, CO 28040, Mexico (e-mail: [email protected]).
Jorge Lozoya Arandia is with the Computational Science and Technological Innovation Department, Universidad de Guadalajara, Guadalajara, Jalisco 44100, Mexico (e-mail: [email protected]).
German Rios-Toledo is with the Sistemas y Computación Department, Tecnológico Nacional de México campus Tuxtla Gutiérrez, Tuxtla Gutiérrez, Chiapas 29038, México (e-mail: [email protected]).
Digital Object Identifier 10.1109/JSTARS.2024.3387452

I. INTRODUCTION

RECENT studies from the Centre for Research on the Epidemiology of Disasters indicate that natural disasters have increased [1], [2]. The ravages of this phenomenon cause human losses, considerable economic damage to infrastructure, and different collateral damages to entire populations, both rural and urban, pushing approximately 26 million people into poverty annually [3]. Nevertheless, what is the reason for the increase in these disasters? There are many factors, but without a doubt, climate change and human activities are triggering factors. In 2021, 432 disasters occurred, causing almost 11 000 deaths; 223 were floods (see Fig. 1). In 2022, there were 387 disasters and nearly 31 000 deaths; 176 were floods. Floods have the most significant impact of these catastrophes, affecting more than 45% of the world's population (see Fig. 2) [4].

The countries that suffer the most from floods are India, China, Afghanistan, Germany, and Western Europe [1]. Flooding also significantly impacts food production, since it causes losses in crops and livestock, affecting food sovereignty in different countries [5], [6]. Mexico is no stranger to these catastrophes. The climate impact, whether of natural origin or due to human activities, has increased susceptibility in various regions of the country. Hydrometeorological phenomena have increased in the southeastern areas and on the coast of the Gulf of Mexico. Consequently, floods have triggered catastrophes, causing severe damage to economic and industrial infrastructure and to the well-being of the region's inhabitants [7]. The most severe cases occurred in October 2007 and November 2020. According to official data from the Economic Commission for Latin America and the Caribbean [8], the damage caused in 2007 was US$3B: 31.77% in the productive sector, 26.9% in agriculture, and 0.5% in the environment. In 2020 [9], more than 800 000 people were affected, 200 400 homes were damaged, and more than US$1M was spent on emergency response.

The factors that cause flooding can be diverse [10]: 1) pluvial, the result of excess precipitation; 2) fluvial, an increase in water levels in rivers, seas, or water bodies; 3) failures of hydraulic works, such as the breaking of dams, dikes, or banks; and 4) failure of natural drainage, when the soil can no longer absorb more water.

Given the devastation caused by floods, timely information on their occurrence and their impact on the population is needed. In this sense, flood prediction, identification, and mapping are fundamental. They allow the authorities to act promptly to implement rescue services, damage assessment, and identification of affected areas for the prompt relief of the population and, in general, the resilience of populations affected by floods.

In recent years, remote sensing has shown notable growth due to its ability to obtain terrestrial data through sensors and cameras implemented on satellites or satellite programs [11], [12], [13], [14].
Fig. 1. Occurrence of the five most common natural disasters in the world from 2005 to 2022: Floods, storms, earthquakes, droughts, and wildfires.
Fig. 2. Floods in the world: 2000–2022. The years 2006, 2007, and 2021 have been the years with the highest flooding in different regions of the world.
Satellite programs generally have two types of sensors: passive, which capture optical images, and active, which capture radar images. Optical images are high-resolution multispectral products and correlate well with the open water surface. However, they can be affected by the presence of clouds during precipitation, making it impossible to acquire clean and reliable images. In contrast, radar images can penetrate clouds and operate day and night in any weather conditions, because the sensors operate at longer wavelengths and are independent of solar radiation. This makes them ideal for monitoring and mapping floods and estimating the damage caused. Satellite programs include Copernicus [14], Landsat [12], and Terra/Aqua (MODIS) [13].

Copernicus stands out for its remarkable capacity to acquire remote data with high temporal and spatial resolution. It is made up of satellites for different purposes: Sentinel-1 provides synthetic aperture radar (SAR) images helpful in observing the earth and oceans; Sentinel-2 provides multispectral optical terrestrial images; Sentinel-3 is for marine and land observation; Sentinel-4 and Sentinel-5 are for air quality monitoring; and Sentinel-6 is for marine observation [15], [16], [17].

These satellite data have different properties, such as: 1) spatial resolution, which determines the area of the earth's surface covered by each pixel of the image; 2) spectral resolution, which represents the electromagnetic spectrum captured by the remote sensor and the number and width of its regions; and 3) temporal resolution, which determines how often satellite information can be obtained from the exact location with the same satellite and radiometric resolution [18].

In addition, artificial intelligence algorithms are being used to analyze these data. Together, the two technologies are being used to study climate change, precipitation, carbon flow prediction, drought forecasting, detection of soil changes, earthquakes, bodies of water, floods, and crops, among others.

Specifically, deep learning (DL) algorithms have taken on a highly relevant role due to their ability to discriminate data and to automate and improve the precision of tasks such as image classification, element detection, and the generation of thematic cartographic representations [19]. Furthermore, they can learn feature representations appropriate for classification tasks
of spatial learning using convolutional neural networks (CNNs) and sequential learning using recurrent neural networks (RNNs). These approaches have presented better results compared with other techniques. However, they suffer from some problems: CNNs suffer from inductive biases, while RNNs suffer from gradient disappearance [20]. Furthermore, satisfactory results of DL algorithms require an extensive dataset for training [21], [22]. Due to this need, labeled image datasets have been used.

Some datasets used in different proposals for flood analysis and mapping are Sen1Floods11 [23], which has Sentinel-1 and Sentinel-2 images of 11 manually labeled flood events; UNOSAT [24], with labeled Sentinel-1 SAR images over 15 flood events; OMBRIA [25], with labeled Sentinel-1 and Sentinel-2 images over 23 floods; SEN12-FLOOD [26], with labeled Sentinel-1 and Sentinel-2 images; and WorldFloods, which contains information on 119 floods that occurred from 2015 to 2019. These datasets are used in different flood analysis proposals [21], [25], [27], [28], [29].

Free access to these data has allowed various institutions to expand their research using large volumes of data. Satellite data are an effective tool for estimating damage caused by natural disasters and improving risk management, owing to the sensors' different resolutions and capture methods on space platforms. This data availability has led to the development of services that enable the rapid creation of flood maps through automated or semiautomated processes. However, these methods still carry uncertainties, both because verification is limited and because the maps must be produced quickly.

This article explores a strategy for flood segmentation based on the U-Net architecture and the Sen1Floods11 georeferenced dataset, with the goal of segmenting and visualizing flooded areas in satellite images. The study area belongs to southeastern Mexico, which has experienced severe flooding.

II. RELATED WORKS

As a strategy for flood mapping, remote sensing has shown promising results [27], [29], [30], [31]. Many works propose analyzing, classifying, detecting, and mapping floods and water bodies using optical (multispectral) or SAR images. Others combine SAR and optical data. Despite promising results, there are still difficulties with satellite images, such as their spatial [32] and temporal resolution [33]. Artificial intelligence also provides different supervised, unsupervised, and contrastive algorithms for flood analysis using satellite images [33], [34]. Deep neural networks, specifically CNNs, are the most widely used [35]. In this sense, the more training data the CNNs have, the better the results they will obtain [36].

Regardless of the strategy of the proposed approaches, they share a fundamental premise: the analysis of floods in different locations. Some map floods to coordinate rescue efforts; others analyze flood extents to mitigate and predict their effects.

Generally, traditional machine learning approaches use optical images [5], [37], [38], [39]. Spectral indices are applied to images based on the interactions between vegetation and the electromagnetic energy of the shortwave infrared and near-infrared spectrum bands [40], [41]. These indices apply to images with different resolutions, such as Landsat, Spot, or Sentinel [42]. However, to map bodies of water and soil vegetation, the following are mainly used: the normalized difference vegetation index [43] and the normalized difference water index [44]. Although optical sensors are highly correlated with open water surfaces, they cannot penetrate clouds, which limits them in rainy or cloudy weather; consequently, it is then impossible to acquire high-resolution, multispectral, cloud-free images. Deroliya et al. [45] present an approach for flood risk mapping considering geomorphic descriptors. They used three algorithms: decision tree, random forest (RF), and gradient-boosted decision trees. Zhou et al. [46] use a support vector machine (SVM); Tulbure et al. [47] and Schumann et al. [48] use RF for the analysis of water bodies. Pech-May et al. [5] analyze the behavior of land cover and water bodies of floods in the rainy season using multispectral images and RF, SVM, and classification and regression tree algorithms. Anusha and Bharathi [49] use multispectral imaging with the algorithms mentioned earlier. Konapala et al. [50] presented a strategy for flood identification from SAR satellite images. Rudner et al. [51] used Sentinel-1 and Sentinel-2 to identify flooded areas. Li et al. [52] conducted a study analyzing the damage caused by hurricanes.

Most current approaches use CNNs. They rely on dimensionality reduction to reduce the number of parameters and preserve the relative locations of pixels. Increasing the depth of CNNs can improve their performance because deep networks incorporate multidimensional features and classifiers in multiple end-to-end layers. Consequently, the deeper the network structure, the richer the feature level. However, deeper networks can cause problems such as: 1) gradient disappearance; 2) gradient explosion; and 3) network degradation. To solve these problems, ResNet [53] was proposed, effectively mitigating network degradation and allowing deeper training through residual blocks. Zhao et al. [54] used SAR images to classify buildings, vegetation, roads, and water bodies using TerraSAR images [55]. Other approaches, such as those of Xing et al. [56] and Tavus et al. [57], use the U-Net architecture [58]. Katiyar et al. [59] use the Sen1Floods11 dataset with SegNet [60]. Notably, U-Net uses skip connections between different blocks of each stage to preserve the acquired feature maps, while SegNet reuses the encoder's pooling indices for nonlinear upsampling, thus improving the results in flood detection. Bai et al. [61] improved on this work using BASNet [62], an image segmentation network similar to U-Net, combining it with a hybrid loss function of structural similarity loss, intersection over union (IoU) loss, and focal loss.

On the other hand, Scepanovic et al. [63] created a land cover mapping system with five classes. They applied several semantic segmentation models, such as U-Net, DeepLabV3+ [64], PSPNet [65], BiSeNet [66], SegNet, FCDenseNet [67], and FRRN-B [68]. Other approaches explore self-supervised and semisupervised learning based on SimCLR [69] and FixMatch [70] to segment land use and map floods via Sen1Floods11.

Some RNN approaches have also been proposed for analyzing water bodies and land cover using Sentinel images [71], [72].
Fig. 6. Geographic points where flood data were collected for Sen1Floods11.

Fig. 7. Schematic of a U-Net architecture that receives as input a 512 × 512 pixel image with three channels.

Fig. 8. General graphical scheme of the U-Net architecture used for the detection and segmentation of flooded areas in SAR images.

The dataset comprises image chips with a size of 512 × 512 pixels, covering a total area of 120 406 km². Sentinel-1 images consist of two bands, vertical–vertical (VV) and vertical–horizontal (VH), representing backscatter values. Sentinel-2 images include 13 bands of top-of-atmosphere (TOA) reflectance values.

1) Parameter Definition: Some parameters were considered for our model; a code sketch illustrating them follows this list.
a) Size of input images: This parameter sets the size of the images in the dataset that feed the neural model. This is important to ensure optimal performance in flood detection and segmentation. The declared size for the input images is 512 × 512 pixels, large enough to identify specific characteristics associated with flooding. The image size seeks to balance the need to capture relevant details in SAR images with the computational efficiency of the model.
b) Bands to use from the input images: In SAR imaging, channels relate to the different polarization bands in the images. The images generated by Sentinel-1 have two polarization bands: VV and VH. Each band represents unique information and characteristics inherent to the acquisition process and the interactions between electromagnetic waves and the observed terrain. The two bands were selected for primary model training because they capture distinct terrain properties: VV is sensitive to surface structure and roughness, including features such as vegetation, and VH is sensitive to the humidity and volume of objects, such as water on the ground. Combining both bands provides a more complete and detailed picture of the observed surface.
c) Input layers: The input layer is adjusted to the established image size of 512 × 512. This layer provides a structure for entering data into the model and ensures that images are transformed and processed consistently according to the settings in each subsequent layer.
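Taken together, parameters a)–c) define a two-channel 512 × 512 input tensor for a U-Net. The paper does not publish its implementation, so the following is a minimal, hypothetical sketch in Keras; the encoder depth, filter counts, and compile settings are illustrative assumptions rather than the authors' exact configuration (the reference list does cite the Adam optimizer [81] and binary cross-entropy [82], so those are used here).

```python
# Minimal U-Net sketch for 512x512 two-band (VV, VH) SAR chips.
# Hypothetical configuration: depth and filter widths are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, as in the original U-Net stages [58].
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(512, 512, 2), base_filters=32):
    inputs = layers.Input(shape=input_shape)          # VV and VH bands
    # Encoder: downsample while storing feature maps for skip connections.
    skips, x = [], inputs
    for i in range(4):
        x = conv_block(x, base_filters * 2**i)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, base_filters * 16)              # bottleneck
    # Decoder: upsample and concatenate the stored skip connections.
    for i in reversed(range(4)):
        x = layers.Conv2DTranspose(base_filters * 2**i, 2,
                                   strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[i]])
        x = conv_block(x, base_filters * 2**i)
    # One output channel per pixel: probability of "flooded".
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```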
C. Preprocessing

In this phase, the images and masks of the Sen1Floods11 dataset are preprocessed before being fed to the neural model. One of the challenges of SAR images is their processing: the geometry of acquisition generates geometric and radiometric deformation effects such as slant-range distortion, layover, and foreshortening [80]. Warping effects can affect the backscatter values of the images. Loading the images in TIF format begins with preprocessing to adapt them to the format required by the model. A transformation ensures that the images have the dimensions defined in the previous step. In addition, a transposition of the images is performed to adjust their channels to match the dimensions of the channels required by the model.
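As a rough illustration of this loading and transposition step, the snippet below reads a two-band Sentinel-1 chip and rearranges it to the channels-last layout assumed by the model sketch above. It assumes the rasterio library; the file name and the clipping/normalization range are hypothetical, since the authors' exact preprocessing values are not given.

```python
# Sketch of the TIF loading/transposition step; normalization range
# is an assumption, not the value reported by the authors.
import numpy as np
import rasterio

def load_s1_chip(path, size=512):
    with rasterio.open(path) as src:
        img = src.read()                   # (bands, H, W): VV, VH
    img = np.transpose(img, (1, 2, 0))     # to channels-last (H, W, bands)
    img = np.nan_to_num(img)               # SAR chips may contain NaNs
    # Clip backscatter (dB) to a plausible range and scale to [0, 1].
    img = np.clip(img, -50.0, 1.0)
    img = (img + 50.0) / 51.0
    return img.astype("float32")[:size, :size, :]

chip = load_s1_chip("S1_example_chip.tif")  # hypothetical file name
print(chip.shape)                           # (512, 512, 2)
```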
Fig. 11. Performance of the model with 150 epochs. (a) Loss in training. (b) Precision and validation. (c) Confusion matrix.

Fig. 12. Performance of the model with 200 epochs. (a) Loss in training. (b) Precision and validation. (c) Confusion matrix.

Fig. 13. Results of the visualization of the segmentation. Right, ground truth. Left, model predictions. (a) 50 epochs. (b) 100 epochs. (c) 150 epochs. (d) 200 epochs.
This phase encompasses the workflow from training to the application of the model. The main components are detailed as follows.
1) Loss and precision, to understand how the model adapts to the data and training over time: These metrics provide information about model convergence and whether overfitting or underfitting occurs. A gradual decrease in loss and an increase in accuracy indicate successful training.
2) Model evaluation: The test subset is used to evaluate the model's actual performance. The model predictions are applied to these images and compared to the flood masks. This allows various evaluation metrics to be calculated, such as precision, recall, and IoU score.
3) IoU score calculation, to evaluate the quality of the segmentation: This is calculated by dividing the intersection area between the predicted mask and the actual mask by the area of their union. A higher IoU indicates higher overlap and accuracy in predicting flooded areas.
4) Prediction on test images, applied to a test image to generate a prediction of the flooded areas: This prediction is visually compared to the actual flood mask of the same image to evaluate the accuracy and quality of the segmentation. Detected and actual areas can be overlaid to analyze coincidences and deviations.

G. Validation

An inference test is performed to predict new flood images obtained from SAR imagery. In this phase, the knowledge acquired during model training is applied to detect flooded areas in real-world scenarios. The key components are described as follows.
1) Loading of the trained neural model: It contains the weights and architecture learned during the training process for classifying flooded areas.
2) Preprocessing and postprocessing: Preprocessing functions are used to prepare the images properly, including normalizing pixel values and adjusting the size to match the model input format. After obtaining model predictions, postprocessing functions are used to improve and refine the outputs. This could involve removing small groups of unwanted pixels and improving the consistency of segmented areas.
3) New image classification: Classification proceeds once the image has been preprocessed and the model loaded. The image is input into the model, and predictions are generated about the areas that could be flooded. The model uses its prior understanding of patterns learned during training to make these predictions.
4) Visualization of results: The model predictions can be visualized by overlaying them on the original image. This allows a visual assessment of how the model has identified flooded areas compared to reality. The overlay can also indicate the quality of the segmentation and whether there are areas for improvement.

IV. RESULTS OBTAINED

A. Model Evaluation Metrics

The following metrics were selected to evaluate the developed neural model: loss, recall, precision, F1-score, accuracy, confusion matrix, and IoU [83], [84] (a minimal computation sketch is given after the discussion below).
1) Loss: A metric that quantifies the difference between model predictions and actual labels. A smaller loss indicates better agreement between model predictions and labels. The loss was evaluated at different training epochs (50, 100, 150, and 200) to understand its evolution and convergence toward a minimum value for better fitting the data.
2) Recall: It measures the proportion of positive instances (flooded areas) the model correctly identified compared to the total number of positive instances. A high recall indicates the model's ability to detect most flooded areas in the SAR image.
3) Precision: It measures the fraction of the model's detections that are correct.
4) F1-score: A metric that combines the precision and recall of the model. It measures the ratio between true and false positive predictions compared to the actual labels. It is advantageous when there is an imbalance between classes, such as in segmenting flooded areas where nonflooded areas are predominant.
5) Accuracy: It evaluates the overall accuracy of a classifier and indicates the overall performance of the model.
6) Confusion matrix: It shows the number of true positives, false positives, true negatives, and false negatives in the model classification.
7) IoU: It measures the overlap between the segmentation masks generated by the model and the ground truth masks.

These metrics were evaluated at different numbers of training epochs: 50, 100, 150, and 200. The model's improvement can be observed across the epochs, along with the equilibrium points where performance stabilizes. In addition, this allows identifying the stage where the model achieves an optimal balance between precision and recall.

Training the neural network for 50 epochs reached a loss of 0.3666 on the training set and 0.4462 on the validation set [see Fig. 9(a)]. The loss in training indicates the magnitude of the difference between the model predictions and the actual labels. The increase in loss on the validation set occurred because the model was overfitting. The accuracy achieved on the training set was 0.8756, and on the validation set it was 0.8244 [see Fig. 9(b)]. This indicates that 87.56% of the model predictions match the actual labels in the training set. Although the model shows excellent predictive ability on the training set, its performance in validation is slightly lower. The F1-score was 0.0230 on the test set, indicating how well the model balances accuracy with the ability to detect true positives. However, it is essential to note that the low F1-value is due to the imbalance between the flooded and nonflooded classes in the test set. It is worth mentioning that the terrain characteristics and the angle of incidence of the image produce areas with excessive shadowing; this causes the model to detect false positives in areas where flooding is present.
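As an illustration, all of the metrics listed in Section IV-A can be derived from the pixel counts of the confusion matrix. The following is a minimal NumPy sketch; the 0.5 threshold and the epsilon guard are illustrative choices, not values reported by the authors.

```python
# Sketch: segmentation metrics from a probability mask and a binary
# ground-truth mask. Threshold and epsilon are illustrative.
import numpy as np

def flood_metrics(prob_mask, gt_mask, threshold=0.5):
    pred = (prob_mask >= threshold).astype(np.uint8)
    gt = gt_mask.astype(np.uint8)
    tp = np.sum((pred == 1) & (gt == 1))   # flooded, correctly detected
    fp = np.sum((pred == 1) & (gt == 0))   # false alarms
    fn = np.sum((pred == 0) & (gt == 1))   # missed flooded pixels
    tn = np.sum((pred == 0) & (gt == 0))
    eps = 1e-9                             # avoid division by zero
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)        # intersection over union
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return dict(precision=precision, recall=recall, f1=f1,
                iou=iou, accuracy=accuracy)
```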
TABLE IV
RESULTS OF THE METRICS USED TO EVALUATE THE PERFORMANCE OF THE U-NET MODEL TRAINED AT 200 EPOCHS

The visualization of the detected areas provides a visual assessment of the accuracy of the model predictions. On the right side is the ground truth mask used for the model input, where flooded areas are marked in blue.

As shown in Fig. 13(a), segmentation with 50 epochs is still deficient: the IoU score was 1.17% and the accuracy was 91.93% (see Table I). The model achieves significant segmentation skill with 100 epochs [see Fig. 13(b)]; its IoU score is 32.94% and its accuracy is 92.87% (see Table II). Tests conducted with 150 epochs [see Fig. 13(c)] highlight the model's ability to identify flooded areas and achieve segmentation that overlaps significantly with the actual flooded areas; the IoU score was 64.94% and the accuracy was 80.73% (see Table III). The training with 200 epochs obtained the best results, with an IoU score of 73.02% and an accuracy of 94.31% (see Table IV). These visualizations provide a concrete graphical representation of how the model identifies and segments flooded areas in SAR imagery.

V. CONCLUSION

The segmentation model based on the U-Net architecture effectively identifies flooded areas. The ability of U-Net to capture relevant features in SAR images and its training and validation are reflected in the obtained results. The tests with 200 epochs obtained the best results, with an IoU score of 73.02% and an accuracy of 94.31%.

Despite the achievements made, several paths could be explored in future work to further improve flood detection and segmentation in SAR images.
1) Architecture improvement: Other architectures, such as DeepLab and PSPNet, could be considered to explore new feature extraction capabilities.
2) Parameter optimization: Although tests have been performed to determine the optimal number of training epochs, deeper optimization can be performed to fine-tune the hyperparameters and achieve a balance between accuracy and training time.
3) Use of multitemporal data: Integrating multitemporal data from different satellites could allow better flood detection by considering the temporal evolution of the affected areas.

As challenges are addressed and new opportunities are explored, this methodology will likely continue to improve and significantly impact disaster management and data-driven decision making.

REFERENCES

[1] CRED, "2021 disasters in numbers," Centre Res. Epidemiol. Disasters, Brussels, Belgium, Tech. Rep., 2021. [Online]. Available: https://cred.be/sites/default/files/2021_EMDAT_report.pdf
[2] D. Tin, L. Cheng, D. Le, R. Hata, and G. Ciottone, "Natural disasters: A comprehensive study using EMDAT database 1995–2022," Public Health, vol. 226, pp. 255–260, 2024.
[3] E. Psomiadis, "Flash flood area mapping utilising SENTINEL-1 radar data," Proc. SPIE, 2016, vol. 10005, Art. no. 100051G.
[4] P. Wallemacq and R. House, "Economic losses, poverty and disasters (1998–2017)," Centre for Research on the Epidemiology of Disasters and United Nations Office for Disaster Risk Reduction, Tech. Rep., 2018. [Online]. Available: https://www.preventionweb.net/files/61119_credeconomiclosses.pdf
[5] F. Pech-May, R. Aquino-Santos, G. Rios-Toledo, and J. P. F. Posadas-Durán, "Mapping of land cover with optical images, supervised algorithms, and Google earth engine," Sensors, vol. 22, no. 13, 2022, Art. no. 4729.
[6] E. Benami et al., "Uniting remote sensing, crop modelling and economics for agricultural risk management," Nature Rev. Earth Environ., vol. 2, pp. 1–20, 2021.
[7] J. Paz, F. Jiménez, and B. Sánchez, "Urge manejo del agua en tabasco," Universidad Nacional Autónoma de México y Asociación Mexicana de Ciencias para el Desarrollo Regional A.C., Ciudad de México, Mexico, Tech. Rep., 2018.
[8] CEPAL, "Tabasco: Características e impacto socioeconómico de las inundaciones provocadas a finales de octubre y a comienzos de noviembre de 2007 por el frente frío número 4," Comisión Económica para América Latina y el Caribe Sede Subregional en México, Mexico City, México, Tech. Rep., 2008. [Online]. Available: https://hdl.handle.net/11362/25881
[9] J. Cuevas, M. F. Enriquez, and R. Norton, "Tabasco floods of 2020—Learning from the past to prepare for the future," ISET International and the Zurich Flood Resilience Alliance, Boulder, CO, USA, Tech. Rep., 2022. [Online]. Available: https://preparecenter.org/wp-content/uploads/2023/03/PERC-fullreport_Mexico_ENG.pdf
[10] M. Perevochtchikova and J. Torre, "Causas de un desastre: Inundaciones del 2007 en tabasco, México," J. Latin Amer. Geography, vol. 9, pp. 73–98, 2010.
[11] G. J.-P. Schumann and D. K. Moller, "Microwave remote sensing of flood inundation," Phys. Chem. Earth, Parts A/B/C, vol. 83–84, pp. 84–95, 2015.
[12] W. Emery and A. Camps, "The history of satellite remote sensing," in Introduction to Satellite Remote Sensing, W. Emery and A. Camps, Eds., New York, NY, USA: Springer, 2017, pp. 1–42.
[13] T. M. Lillesand, Remote Sensing and Image Interpretation. Hoboken, NJ, USA: Wiley, 2006.
[14] S. Jutz and M. Milagro-Pérez, "1.06—Copernicus program," in Comprehensive Remote Sensing, S. Liang, Ed., Amsterdam, The Netherlands: Elsevier, 2018, pp. 150–191.
[15] A. Twele, W. Cao, S. Plank, and S. Martinis, "Sentinel-1-based flood mapping: A fully automated processing chain," Int. J. Remote Sens., vol. 37, no. 13, pp. 2990–3004, 2016.
[16] M. Chini, R. Pelich, L. Pulvirenti, N. Pierdicca, R. Hostache, and P. Matgen, "Sentinel-1 InSAR coherence to detect floodwater in urban areas: Houston and hurricane Harvey as a test case," Remote Sens., vol. 11, no. 2, 2019, Art. no. 107.
[17] K. K. Singh and A. Singh, "Identification of flooded area from satellite images using Hybrid Kohonen Fuzzy C-means sigma classifier," Egyptian J. Remote Sens. Space Sci., vol. 20, no. 1, pp. 147–155, 2017.
[18] V. Lalitha and B. Latha, "A review on remote sensing imagery augmentation using deep learning," Mater. Today: Proc., vol. 62, pp. 4772–4778, 2022.
[19] L. Alzubaidi, J. Zhang, A. J. Humaidi, and A. Al-Dujaili, "Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions," J. Big Data, vol. 53, no. 8, pp. 1–74, 2021.
[20] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA, USA: MIT Press, 2016.
[21] R. Bentivoglio, E. Isufi, S. N. Jonkman, and R. Taormina, "Deep learning methods for flood mapping: A review of existing applications and future research directions," Hydrol. Earth Syst. Sci., vol. 26, no. 16, pp. 4345–4378, 2022.
[22] C. P. Patel, S. Sharma, and V. Gulshan, "Evaluating self and semi-supervised methods for remote sensing segmentation tasks," CoRR, vol. abs/2111.10079, 2021. [Online]. Available: https://dblp.unitrier.de/rec/journals/corr/abs2111-10079.html?view=bibtex
[23] D. Bonafilia, B. Tellman, T. Anderson, and E. Issenberg, "Sen1Floods11: A georeferenced dataset to train and test deep learning flood algorithms for Sentinel-1," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops, 2020, pp. 835–845.
[24] UNOSAT, "UNOSAT flood dataset," 2019. Accessed: Jun. 20, 2022. [Online]. Available: http://floods.unosat.org/geoportal/catalog/main/home.page
[25] G. I. Drakonakis, G. Tsagkatakis, K. Fotiadou, and P. Tsakalides, "OmbriaNet—Supervised flood mapping via convolutional neural networks using multitemporal Sentinel-1 and Sentinel-2 data fusion," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 2341–2356, 2022.
[26] C. Rambour, N. Audebert, E. Koeniguer, B. Le Saux, M. Crucianu, and M. Datcu, "SEN12-FLOOD: A SAR and multispectral dataset for flood detection," 2020, doi: 10.21227/w6xz-s898.
[27] V. Tsyganskaya, S. Martinis, P. Marzahn, and R. Ludwig, "SAR-based detection of flooded vegetation—A review of characteristics and approaches," Int. J. Remote Sens., vol. 39, no. 8, pp. 2255–2293, 2018.
[28] C. Rambour, N. Audebert, E. Koeniguer, B. Le Saux, M. Crucianu, and M. Datcu, "Flood detection in time series of optical and SAR images," in Proc. Int. Arch. Photogrammetry, Remote Sens. Spatial Inf. Sci., 2020, pp. 1343–1346.
[29] G. Mateo-Garcia et al., "Towards global flood mapping onboard low cost satellites with machine learning," Sci. Rep., vol. 11, no. 1, 2021, Art. no. 7249.
[30] S. Grimaldi, Y. Li, V. Pauwels, and J. Walker, "Remote sensing-derived water extent and level to constrain hydraulic flood forecasting models: Opportunities and challenges," Surv. Geophys., vol. 37, no. 5, pp. 977–1034, 2016.
[31] X. Shen, D. Wang, K. Mao, E. Anagnostou, and Y. Hong, "Inundation extent mapping by synthetic aperture radar: A review," Remote Sens., vol. 11, no. 7, 2019, Art. no. 879.
[32] M. V. Bernhofen et al., "The role of global data sets for riverine flood risk management at national scales," Water Resour. Res., vol. 58, no. 4, 2022, Art. no. e2021WR031555.
[33] R. Sadiq, M. Imran, and F. Ofli, Remote Sensing for Flood Mapping and Monitoring. Singapore: Springer, 2023, pp. 679–697.
[34] J. Yu, Z. Wang, V. Vasudevan, L. Yeung, M. Seyedhosseini, and Y. Wu, "CoCa: Contrastive captioners are image-text foundation models," 2022, arXiv:2205.01917.
[35] J. Rosentreter, R. Hagensieker, and B. Waske, "Towards large-scale mapping of local climate zones using multitemporal Sentinel 2 data and convolutional neural networks," Remote Sens. Environ., vol. 237, 2020, Art. no. 111472.
[36] S. Martinis, S. Groth, M. Wieland, L. Knopp, and M. Rättich, "Towards a global seasonal and permanent reference water product from Sentinel-1/2 data for improved flood mapping," Remote Sens. Environ., vol. 278, 2022, Art. no. 113077.
[37] Z. Gou, "Urban road flooding detection system based on SVM algorithm," in Proc. 2nd Int. Conf. Mach. Learn. Comput. Appl., 2021, pp. 1–8.
[38] A. H. Tanim, C. B. McRae, H. Tavakol-Davani, and E. Goharian, "Flood detection in urban areas using satellite imagery and machine learning," Water, vol. 14, no. 7, 2022, Art. no. 1140.
[39] K. Kunverji, K. Shah, and N. Shah, "A flood prediction system developed using various machine learning algorithms," in Proc. 4th Int. Conf. Adv. Sci. Technol., 2021. [Online]. Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3866524
[40] C. Alexander, "Normalised difference spectral indices and urban land cover as indicators of land surface temperature (LST)," Int. J. Appl. Earth Observ. Geoinf., vol. 86, 2020, Art. no. 102013.
[41] V. Kumar, A. Sharma, R. Bhardwaj, and A. K. Thukral, "Comparison of different reflectance indices for vegetation analysis using Landsat-TM data," Remote Sens. Appl.: Soc. Environ., vol. 12, pp. 70–77, 2018.
[42] J. Campbell and R. Wynne, Introduction to Remote Sensing, 5th ed. New York, NY, USA: Guilford Publications, 2011.
[43] J. W. Rouse, R. H. Haas, J. A. Schell, and D. W. Deering, "Monitoring vegetation systems in the great plains with ERTS," in Proc. 3rd ERTS Symp., 1974, vol. 351, pp. 309–317.
[44] B.-C. Gao, "NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space," Remote Sens. Environ., vol. 58, no. 3, pp. 257–266, 1996.
[45] P. Deroliya, M. Ghosh, M. P. Mohanty, S. Ghosh, K. D. Rao, and S. Karmakar, "A novel flood risk mapping approach with machine learning considering geomorphic and socio-economic vulnerability dimensions," Sci. Total Environ., vol. 851, 2022, Art. no. 158002.
[46] Y. Zhou, J. Luo, Z. Shen, X. Hu, and H. Yang, "Multiscale water body extraction in urban environments from satellite images," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 7, no. 10, pp. 4301–4312, Oct. 2014.
[47] M. G. Tulbure, M. Broich, S. V. Stehman, and A. Kommareddy, "Surface water extent dynamics from three decades of seasonally continuous Landsat time series at subcontinental scale in a semi-arid region," Remote Sens. Environ., vol. 178, pp. 142–157, 2016.
[48] G. Schumann, J. Henry, L. Hoffmann, L. Pfister, F. Pappenberger, and P. Matgen, "Demonstrating the high potential of remote sensing in hydraulic modelling and flood risk management," in Proc. Annu. Conf. Remote Sens. Photogrammetry Soc. NERC Earth Observ. Conf., 2005, pp. 6–9.
[49] N. Anusha and B. Bharathi, "Flood detection and flood mapping using multi-temporal synthetic aperture radar and optical data," Egypt. J. Remote Sens. Space Sci., vol. 23, pp. 207–219, 2020.
[50] G. Konapala, S. V. Kumar, and S. K. Ahmad, "Exploring Sentinel-1 and Sentinel-2 diversity for flood inundation mapping using deep learning," ISPRS J. Photogrammetry Remote Sens., vol. 180, pp. 163–173, 2021.
[51] T. G. J. Rudner et al., "Multi3Net: Segmenting flooded buildings via fusion of multiresolution, multisensor, and multitemporal satellite imagery," in Proc. AAAI Conf. Artif. Intell., 2019, vol. 33, pp. 702–709.
[52] Y. Li, S. Martinis, and M. Wieland, "Urban flood mapping with an active self-learning convolutional neural network based on TerraSAR-X intensity and interferometric coherence," ISPRS J. Photogrammetry Remote Sens., vol. 152, pp. 178–191, 2019.
[53] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 770–778.
[54] B. Zhao, H. Sui, C. Xu, and J. Liu, "Deep learning approach for flood detection using SAR image: A case study in Xinxiang," in Proc. Int. Arch. Photogrammetry, Remote Sens. Spatial Inf. Sci., 2022, pp. 1197–1202.
[55] J. Betbeder, S. Rapinel, T. Corpetti, E. Pottier, S. Corgne, and L. Hubert-Moy, "Multitemporal classification of TerraSAR-X data for wetland vegetation mapping," J. Appl. Remote Sens., vol. 8, 2014, Art. no. 083648.
[56] Z. Xing et al., "Flood vulnerability assessment of urban buildings based on integrating high-resolution remote sensing and street view images," Sustain. Cities Soc., vol. 92, 2023, Art. no. 104467.
[57] B. Tavus, R. Can, and S. Kocaman, "A CNN-based flood mapping approach using Sentinel-1 data," ISPRS Ann. Photogrammetry, Remote Sens. Spatial Inf. Sci., vol. 3, pp. 549–556, 2022.
[58] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Proc. Med. Image Comput. Computer-Assisted Interv., 2015, pp. 234–241.
[59] V. Katiyar, N. Tamkuan, and M. Nagai, "Near-real-time flood mapping using off-the-shelf models with SAR imagery and deep learning," Remote Sens., vol. 13, no. 12, 2021, Art. no. 2334.
[60] A. Moradi Sizkouhi, M. Aghaei, and S. M. Esmailifar, "A deep convolutional encoder-decoder architecture for autonomous fault detection of PV plants using multi-copters," Sol. Energy, vol. 223, pp. 217–228, 2021.
[61] Y. Bai et al., "Enhancement of detecting permanent water and temporary water in flood disasters by fusing Sentinel-1 and Sentinel-2 imagery using deep learning algorithms: Demonstration of Sen1Floods11 benchmark datasets," Remote Sens., vol. 13, no. 11, 2021, Art. no. 2220.
[62] X. Qin, Z. Zhang, C. Huang, C. Gao, M. Dehghan, and M. Jagersand, "BASNet: Boundary-aware salient object detection," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2019, pp. 7471–7481.
[63] S. Scepanovic, O. Antropov, P. Laurila, Y. Rauste, V. Ignatenko, and J. Praks, "Wide-area land cover mapping with Sentinel-1 imagery using deep learning semantic segmentation models," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 10357–10374, 2021.
[64] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, "Encoder-decoder with atrous separable convolution for semantic image segmentation," in Proc. Eur. Conf. Comput. Vis., 2018, pp. 801–818.
[65] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid scene parsing network," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 6230–6239.
[66] C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang, "BiSeNet: Bilateral segmentation network for real-time semantic segmentation," in Proc. Eur. Conf. Comput. Vis., 2018, pp. 334–349.
[67] S. Jégou, M. Drozdzal, D. Vazquez, A. Romero, and Y. Bengio, "The one hundred layers Tiramisu: Fully convolutional DenseNets for semantic segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops, 2017, pp. 11–19.
[68] T. Pohlen, A. Hermans, M. Mathias, and B. Leibe, "Full-resolution residual networks for semantic segmentation in street scenes," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 4151–4160.
[69] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, "A simple framework for contrastive learning of visual representations," in Proc. 37th Int. Conf. Mach. Learn., 2020, pp. 1597–1607.
[70] K. Sohn et al., "FixMatch: Simplifying semi-supervised learning with consistency and confidence," in Proc. Int. Conf. Neural Inf. Process. Syst., 2020, vol. 33, pp. 596–608.
[71] Y. Bengio, A. C. Courville, and P. Vincent, "Representation learning: A review and new perspectives," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 8, pp. 1798–1828, Aug. 2013.
[72] D. Ienco, R. Gaetano, R. Interdonato, K. Ose, and D. H. T. Minh, "Combining Sentinel-1 and Sentinel-2 time series via RNN for object-based land cover classification," in Proc. IEEE Int. Geosci. Remote Sens. Symp., 2019, pp. 4881–4884.
[73] X. Shi, Z. Chen, H. Wang, D. Yeung, W. Wong, and W. Woo, "Convolutional LSTM network: A machine learning approach for precipitation nowcasting," in Proc. Int. Conf. Neural Inf. Process. Syst., 2015, pp. 802–810.
[74] R. Marc and K. Marco, "Multi-temporal land cover classification with sequential recurrent encoders," ISPRS Int. J. Geo-Inf., vol. 7, no. 4, 2018, Art. no. 129.
[75] M. Volpi and D. Tuia, "Dense semantic labeling of subdecimeter resolution images with convolutional neural networks," IEEE Trans. Geosci. Remote Sens., vol. 55, no. 2, pp. 881–893, Feb. 2017.
[76] H. Zhong, C. Chen, Z. Jin, and X. Hua, "Deep robust clustering by contrastive learning," 2020, arXiv:2008.03030. [Online]. Available: https://api.semanticscholar.org/CorpusID:210920406
[77] M. Huang and S. Jin, "Rapid flood mapping and evaluation with a supervised classifier and change detection in Shouguang using Sentinel-1 SAR and Sentinel-2 optical data," Remote Sens., vol. 12, no. 13, 2020, Art. no. 2073.
[78] H. Jung, Y. Oh, S. Jeong, C. Lee, and T. Jeon, "Contrastive self-supervised learning with smoothed representation for remote sensing," IEEE Geosci. Remote Sens. Lett., vol. 19, 2022, Art. no. 8010105.
[79] J. Cuevas, M. F. Enriquez, and R. Norton, "Inundaciones de 2020 en tabasco: Aprender del pasado para preparar el futuro," Centro Nacional de Prevención de Desastres, Ciudad de México, Mexico, Tech. Rep., 2022. [Online]. Available: https://preparecenter.org/wp-content/uploads/2022/08/PERC_Mexico_ESP.pdf
[80] M. Tzouvaras, C. Danezis, and D. G. Hadjimitsis, "Differential SAR interferometry using Sentinel-1 imagery-limitations in monitoring fast moving landslides: The case study of Cyprus," Geosciences, vol. 10, no. 6, 2020, Art. no. 236.
[81] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in Proc. 3rd Int. Conf. Learn. Represent., Dec. 2014.
[82] P. Hurtik, S. Tomasiello, J. Hula, and D. Hynar, "Binary cross-entropy with dynamical clipping," Neural Comput. Appl., vol. 34, pp. 12029–12041, 2022.
[83] G. Keren, "Neural network supervision: Notes on loss functions, labels and confidence estimation," Ph.D. dissertation, Dept. Faculty Comput. Sci. Math., Univ. Passau, Passau, Germany, 2020.
[84] M. Vakili, M. K. Ghamsari, and M. Rezaei, "Performance analysis and comparison of machine and deep learning algorithms for IoT data classification," 2020, arXiv:2001.09636.

Raúl Aquino-Santos received the Ph.D. degree in electrical and electronic engineering from the University of Sheffield, Sheffield, U.K., in 2005.
In 2007, he was a Postdoctoral Researcher of Networking and Telecommunications with the Centre for Scientific Research and Higher Education at Ensenada, Ensenada, Mexico. In 2008, he was also a Postdoctoral Researcher of Networking and Telecommunications with the Department of Telecommunications, National Autonomous University of Mexico, Mexico City, Mexico. He is a Member of the National System of Researchers and has authored or coauthored six books, 12 book chapters, and more than 30 published articles in international journals. His research interests include the design, development, and implementation of Industry 4.0, the Internet of Things, smart cities, and natural disaster management. He participated in the International Visitor Leadership Program "Advanced Informatics and Communications Technologies" in the USA, a project for Mexico from the U.S. Department of State Bureau of Educational and Cultural Affairs (2010), and represented Mexico (2014 and 2015) at the Asia-Pacific Economic Cooperation held in Shanghai, China.
Dr. Aquino-Santos is the Chair of the Topic Group AI for Flood Monitoring and Detection at the International Telecommunication Union.

Omar Álvarez-Cárdenas received the master's degree in telematics and the Ph.D. degree in education from the University of California, Los Angeles, CA, USA, both specialized in active learning using remote laboratories applied to satellite communications, in 1999 and 2018.
He is currently a Research Professor in the Mobile Computing Area with the School of Telematics, University of Colima, Colima, Mexico. His research interests include wireless networks, satellite communications, remote laboratories, cybersecurity, and active learning methodologies.

Jorge Lozoya Arandia received the master's degree in information technologies and the doctorate degree in energy and water from the University of Guadalajara, Guadalajara, Mexico, in 2013 and 2021.
He is an Engineer in communications and electronics. He develops data analysis and computer systems projects, with a specialty in energy optimization, dissemination of image science, and analysis through deep learning. He currently works on natural sensors to develop natural analysis of ecosystems. His research interests include high-performance computing, energy, and digital literacy.
Mr. Arandia is a Member of the National System of Researchers.