
IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, VOL. 17, 2024

Segmentation and Visualization of Flooded Areas Through Sentinel-1 Images and U-Net

Fernando Pech-May, Raúl Aquino-Santos, Omar Álvarez-Cárdenas, Jorge Lozoya Arandia, and German Rios-Toledo

Abstract—Floods are the most common phenomenon and cause the most significant economic and social damage to the population. They are becoming more frequent and dangerous. Consequently, it is necessary to create strategies to intervene effectively in the mitigation and resilience of the affected areas. Different methods and techniques have been developed to mitigate the damage caused by this phenomenon. Satellite programs provide a large amount of data on the earth's surface, and geospatial information processing tools help manage different natural disasters. Likewise, deep learning is an approach capable of forecasting time series that can be applied to satellite images for flood prediction and mapping. This article presents an approach for flood segmentation and visualization using the U-Net architecture and Sentinel-1 synthetic aperture radar (SAR) satellite imagery. The U-Net architecture can capture relevant features in SAR images. The approach comprises various phases, from data loading and preprocessing to flood inference and visualization. For the study, the georeferenced dataset Sen1Floods11 is used to train and validate the model through different epochs and training runs. A study area in southeastern Mexico that presents frequent floods was chosen. The results demonstrate that the segmentation model achieves high accuracy in detecting flooded areas, with promising metrics regarding loss, precision, and F1-score.

Index Terms—Deep learning (DL) and Sentinel-1, flood mapping, flood segmentation, flood with deep learning, Sentinel-1, U-Net and natural disasters.

I. INTRODUCTION

RECENT studies from the Centre for Research on the Epidemiology of Disasters indicate that natural disasters have increased [1], [2]. The ravages of this phenomenon cause human losses, considerable economic damage to infrastructure, and different collateral damages to entire populations, both rural and urban, which puts approximately 26 million people into poverty annually [3]. Nevertheless, what is the reason for the increase in these disasters? There are many factors, but without a doubt, climate change and human activities are triggering factors. In 2021, 432 disasters occurred, causing almost 11 000 deaths; 223 were floods (see Fig. 1). In 2022, there were 387 disasters and nearly 31 000 deaths; 176 were floods. Floods have the most significant impact of these catastrophes, affecting more than 45% of the world's population (see Fig. 2) [4].

The countries that suffer the most from floods are India, China, Afghanistan, Germany, and Western Europe [1]. Flooding also significantly impacts food production, since it causes losses in crops and livestock, affecting food sovereignty in different countries [5], [6]. Mexico is no stranger to these catastrophes. The climate impact, whether of natural origin or due to human activities, has increased susceptibility in various regions of the country. Hydrometeorological phenomena have increased in the southeastern areas and the coast of the Gulf of Mexico. Therefore, floods have triggered catastrophes, causing severe damage to economic and industrial infrastructure and the well-being of the region's inhabitants [7]. The most severe cases occurred in October 2007 and November 2020. According to official data from the Economic Commission for Latin America and the Caribbean [8], the damage caused in 2007 was US$3B: 31.77% in the productive sector, 26.9% in agriculture, and 0.5% in the environment. In 2020 [9], more than 800 000 people were affected, 200 400 homes were damaged, and more than US$1M was spent in emergency response.

The factors that cause flooding can be diverse [10]: 1) pluvial, the result of excess precipitation; 2) fluvial, an increase in water levels in rivers, seas, or water bodies; 3) failures of hydraulic works, breaking of dams, dikes, or banks; and 4) failure of natural drainage when the soil can no longer absorb more water.

Given the devastation caused by floods, timely information on the occurrence of floods and their impact on the population is needed. In this sense, flood prediction, identification, and mapping are fundamental. This will allow the authorities to act promptly to implement rescue services, damage assessment, and identification of affected areas for the prompt relief of the population and, in general, the resilience of populations affected by floods.

In recent years, remote sensing has shown notable growth due to its ability to obtain terrestrial data through sensors and cameras implemented on satellites or satellite programs [11], [12], [13], [14]. Satellite programs generally have two types of sensors: passive, which captures optical images, and active,

Manuscript received 7 December 2023; revised 29 January 2024 and 26 March 2024; accepted 31 March 2024. Date of publication 11 April 2024; date of current version 1 May 2024. (Corresponding author: Raúl Aquino Santos.)
Fernando Pech-May is with the Computer Systems Department, Instituto Tecnológico Superior de los Ríos, Balancán, Tabasco 86930, Mexico (e-mail: [email protected]).
Raúl Aquino-Santos is with the General Coordination of Scientific Research, Universidad de Colima, Colima, CO 28017, Mexico (e-mail: [email protected]).
Omar Álvarez-Cárdenas is with the Telematics Faculty, Universidad de Colima, Colima, CO 28040, Mexico (e-mail: [email protected]).
Jorge Lozoya Arandia is with the Computational Science and Technological Innovation Department, Universidad de Guadalajara, Guadalajara, Jalisco 44100, Mexico (e-mail: [email protected]).
German Rios-Toledo is with the Sistemas y Computación, Tecnológico Nacional de México campus Tuxtla Gutiérrez, Tuxtla Gutiérrez, Chiapas 29038, México (e-mail: [email protected]).
Digital Object Identifier 10.1109/JSTARS.2024.3387452

© 2024 The Authors. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see
https://creativecommons.org/licenses/by-nc-nd/4.0/

Fig. 1. Occurrence of the five most common natural disasters in the world from 2005 to 2022: Floods, storms, earthquakes, droughts, and wildfires.

Fig. 2. Floods in the world: 2000–2022. The years 2006, 2007, and 2021 have been the years with the highest flooding in different regions of the world.

which captures radar images. The optical images are high-resolution multispectral and are correlated with the open water surface. However, they can be affected by the presence of clouds during precipitation, making it impossible to acquire clean and reliable images. In contrast, radar images can penetrate clouds and operate day and night in any weather conditions. This is because the sensors operate at longer wavelengths and are independent of solar radiation. This makes them ideal for monitoring and mapping floods and estimating the damage caused. Satellite programs include Copernicus [14], Landsat [12], and Terra/Aqua (MODIS) [13].

Copernicus stands out for its remarkable capacity to acquire remote data with high temporal and spatial resolution. It is made up of satellites for different purposes: Sentinel-1 provides synthetic aperture radar (SAR) images helpful in observing the earth and oceans; Sentinel-2 provides multispectral optical terrestrial images; Sentinel-3 is for marine and land observation; Sentinel-4 and 5 are for air quality monitoring; and Sentinel-6 is for marine observation [15], [16], [17].

These satellite data have different properties, such as: 1) spatial resolution, which determines the area of the earth's surface covered by each pixel of the image; 2) spectral resolution, which represents the electromagnetic spectrum captured by the remote sensor and the number and width of its regions; and 3) temporal resolution, which determines how often satellite information can be obtained from the exact location with the same satellite and radiometric resolution [18].

In addition, artificial intelligence algorithms are being used to analyze these data. Both technologies are being used to study climate change, precipitation, carbon flow prediction, drought forecasting, detection of soil changes, earthquakes, bodies of water, floods, crops, etc.

Specifically, deep learning (DL) algorithms have taken on a highly relevant role due to their ability to discriminate data and automate and improve the precision of tasks such as image classification, element detection, and generating thematic cartographic representations [19]. Furthermore, they can learn from feature representations appropriate for classification tasks

of spatial learning using convolutional neural networks (CNNs) and sequential learning using recurrent neural networks (RNNs). These approaches have presented better results compared to other techniques. However, they suffer from some problems. CNNs suffer from inductive biases, while RNNs suffer from gradient disappearance [20]. Furthermore, satisfactory results of DL algorithms require an extensive dataset for training [21], [22]. Due to this need, labeled image datasets have been used.

Some datasets used in different proposals for flood analysis and mapping are Sen1Floods11 [23], which has Sentinel-1 and Sentinel-2 images of 11 manually labeled flood events; UNOSAT [24], with labeled Sentinel-1 SAR images over 15 flood events; OMBRIA [25], with labeled Sentinel-1 and Sentinel-2 images over 23 floods; SEN12-FLOOD [26], with labeled Sentinel-1 and Sentinel-2 images; and WorldFloods, which contains information on 119 floods that occurred from 2015 to 2019. These datasets are used in different flood analysis proposals [21], [25], [27], [28], [29].

Free access to these data has allowed various institutions to expand their research using large volumes of data. Satellite data are an effective tool for estimating damage caused by natural disasters and improving risk management. This is due to the sensors' different resolutions and capture methods on space platforms. This data availability has led to the development of services that enable the rapid creation of flood maps using automated or semiautomated processes. However, these methods present some uncertainties due to the need for more verification and the rapidity with which they occur.

This article explores a strategy for flood segmentation based on the U-Net architecture and the Sen1Floods11 georeferenced dataset. This is done to segment and visualize flooded areas through satellite images. The study area belongs to southeastern Mexico, which has experienced severe flooding.

II. RELATED WORKS

As a strategy for flood mapping, remote sensing has shown promising results [27], [29], [30], [31]. Many works propose analyses, classifying, detecting, and mapping floods and water bodies using optical (multispectral) or SAR images. Others combine SAR and optical data. Despite promising results, there are still difficulties in satellite images, such as their spatial [32] and temporal resolution [33]. Artificial intelligence also provides different supervised, unsupervised, and contrastive algorithms for flood analysis using satellite images [33], [34]. Deep neural networks, specifically CNNs, are the most widely used [35]. In this sense, the more training data the CNNs have, the better results they will obtain [36].

Regardless of the strategy of the proposed approaches, they have a fundamental premise: the analysis of floods in different locations. Some map floods to coordinate rescue efforts, others analyze flood extents to mitigate and predict their effects, etc.

Generally, traditional machine learning approaches use optical images [5], [37], [38], [39]. Spectral indices are applied to images based on the interactions between vegetation and the electromagnetic energy of the shortwave infrared and near-infrared spectrum bands [40], [41]. These indices apply to images with different resolutions, such as Landsat, Spot, or Sentinel [42]. However, to map bodies of water and soil vegetation, the following are mainly used: the normalized difference vegetation index [43] and the normalized difference water index [44]. Although optical sensors are highly correlated with open water surfaces, they cannot penetrate clouds, which limits them in rainy or cloudy weather. Consequently, it is impossible to acquire high-resolution, multispectral, cloud-free images. Deroliya et al. [45] present an approach for flood risk mapping considering geomorphic descriptors. They used three algorithms: decision tree, random forest (RF), and gradient-boosted decision trees. Zhou et al. [46] use a support vector machine (SVM); Tulbure et al. [47] and Schumann et al. [48] use RF for the analysis of water bodies. Pech-May et al. [5] analyze the behavior of land cover and water bodies of floods in the rainy season using multispectral images and RF, SVM, and classification and regression trees algorithms. Anusha and Bharathi [49] use multispectral imaging with the algorithms mentioned earlier. Konapala et al. [50] presented a strategy for flood identification from SAR satellite images. Rudner et al. [51] used Sentinel-1 and Sentinel-2 to identify flooded areas. Li et al. [52] conducted a study analyzing the damage caused by hurricanes.

Most current approaches use CNNs. They rely on dimensionality reduction to reduce the number of parameters and preserve the relative locations of pixels. Increasing the depth of CNNs can improve their performance because deep networks incorporate multidimensional features and classifiers in multiple end-to-end layers. Consequently, the deeper the network structure, the richer the feature level. However, the network can cause problems such as: 1) gradient disappearance; 2) gradient explosion; and 3) network degradation. To solve these problems, ResNet [53] was proposed, effectively mitigating network degradation and allowing deeper training of DL models through residual blocks. Zhao et al. [54] used SAR images to classify buildings, vegetation, roads, and water bodies using TerraSAR images [55]. Other approaches, such as those of Xing et al. [56] and Tavus et al. [57], use the U-Net architecture [58]. Katyar et al. [59] use the Sen1Floods11 dataset with SegNet [60]. Notably, U-Net uses skip connections between different blocks of each stage to preserve the acquired feature maps. At the same time, SegNet reuses the encoder's pooling indices for nonlinear upsampling, thus improving the results in flood detection. Bai et al. [61] improved on this work using BASNet [62], an image segmentation network similar to U-Net; they combined it with a hybrid loss function: structural similarity loss, intersection over union (IoU) loss, and focal loss.

On the other hand, Scepanovic et al. [63] created a land cover mapping system with five classes. They applied several semantic segmentation models such as U-Net, DeepLabV3+ [64], PSPNet [65], BiSeNet [66], SegNet, FCDenseNet [67], and FRRN-B [68]. Other approaches explore using self-supervised and semisupervised learning based on SimCLR [69] and FixMatch [70] to segment land use and flood mapping via Sen1Floods11.

Some RNN approaches have also been proposed for analyzing water bodies and land cover using Sentinel images [71], [72].

In [73], [74], and [75], approaches incorporating recurrent and convolutional operations for treating spatiotemporal data were proposed. Contrastive learning [76] has recently emerged to avoid reliance on labeled data for flood mapping [22], [76], [77], [78].

III. METHODOLOGY FOR FLOOD SEGMENTATION IN SAR IMAGES

Satellite images have become a fundamental tool for understanding and mitigating the impact of natural disasters. The proposed methodology uses SAR images captured by the Sentinel-1 satellite to detect and segment floods. The U-Net neural network architecture identifies patterns and characteristics that differentiate flooded and nonflooded areas. The methodology can be seen in Fig. 3 and consists of a series of steps, which are explained as follows.

Fig. 3. Proposed architecture for flood segmentation.

A. Study Area

The southeast of Mexico, Tabasco, was selected as the study area. Tabasco is located on the coast of the Gulf of Mexico. Its territorial extension is 24 661 km², representing 1.3% of the country. Two regions are recognized in the entity: Grijalva and Usumacinta, which contain two subregions (swamps and rivers). Together, they form one of the largest river systems in the world in terms of volume. In addition, the state's average precipitation is three times higher than the average in Mexico, representing almost 40% of the country's fresh water. The abundance of water and the impact of dams on the hydrology of the region, by altering the natural flow of rivers, cause flash floods and floods, which affect drinking water, health, and the livelihoods of thousands of Tabasco residents [79]. Therefore, flooding is expected in the region. However, in the fall of 2020, several river fronts and hurricanes caused the worst flooding in decades, causing human and economic losses. The study area focuses on the Ríos subregion (see Fig. 4). It is in the easternmost part of the state, on the borders of Campeche and the Republic of Guatemala. This is because of the many rivers that cross it, including the Usumacinta River, the largest in the country, and the San Pedro Mártir River. The municipalities that make up this subregion are Tenosique, Emiliano Zapata, and Balancán. Its surface is approximately 6000 km², representing 24.67% of the state's total.

Fig. 4. Geographic location of the study area—Ríos subregion, Tabasco, Mexico.

SAR images with identical polarization in the return wave, horizontal–horizontal (HH), obtained from the Sentinel-1 satellite, were downloaded using the Copernicus Open Access Hub¹ platform. The images are found within a tile that covered the states of Campeche, Chiapas, and Tabasco (see Fig. 5). Given that the study area has a large amount of vegetation, it was decided to use HH polarization since it has greater penetration through the canopy.

B. Load Data

This stage focuses on data acquisition and organization. Images of the study area and labels corresponding to the floods are collected for subsequent processing and model training. The Sen1Floods11 dataset is used to train the neural network.

Sen1Floods11 [23] was created to train DL algorithms for flood detection; the type of calibration used is beta nought. It covers 11 flood events (see Fig. 6) distributed in 14 biomes, 357 ecoregions, and six continents worldwide. It comprises 4831

¹https://scihub.copernicus.eu/

Fig. 5. Tile location containing the SAR images of the study.

image chips with a size of 512 × 512 pixels, covering a total area of 120 406 km². Sentinel-1 images consist of two bands, vertical–vertical (VV) and vertical–horizontal (VH), representing backscatter values. Sentinel-2 images include 13 bands, all of which are top-of-atmosphere (TOA) reflectance values.

Fig. 6. Geographic points where flood data were collected for Sen1Floods11.

1) Parameter Definition: Some parameters were considered for our model.

a) Size of input images: This parameter sets the size of the images in the dataset that feed the neural model. This is important to ensure optimal performance in flood detection and segmentation. The declared size for the input images is 512 × 512 pixels to identify specific characteristics associated with flooding. The image size seeks to balance the need to capture relevant details in SAR images with the computational efficiency of the model.

b) Bands to use from the input images: In SAR imaging, channels relate to the different polarization bands in the images. The images generated by Sentinel-1 have two polarization bands: VV and VH. Each band represents unique information and characteristics inherent to the acquisition process and the interactions between electromagnetic waves and the observed terrain. The two bands were selected for primary model training because they capture distinct terrain properties: VV is sensitive to surface structure and roughness, including features such as vegetation, and VH is sensitive to the humidity and volume of objects, such as water on the ground. Combining both bands provides a complete and more detailed picture of the observed surface.

c) Input layers: The input layer is adjusted to the size of the established images of 512 × 512. This layer provides a structure for entering data into the model and ensures that images are transformed and processed consistently according to the settings in each subsequent layer.

Fig. 7. Schematic of a U-Net architecture that receives as input a 512 × 512 pixel image with three channels.

Fig. 8. General graphical scheme of the U-Net architecture used for the detection and segmentation of flooded areas in SAR images.

Fig. 9. Performance of the model with 50 epochs. (a) Loss in training. (b) Precision and validation. (c) Confusion matrix.

Fig. 10. Performance of the model with 100 epochs. (a) Loss in training. (b) Precision and validation. (c) Confusion matrix.

C. Preprocessing

In this phase, the images and masks of the Sen1Floods11 dataset are preprocessed before being fed to the neural model. Among the challenges of SAR images is their processing. This is due to the geometry of their acquisition, which generates geometric and radiometric deformation effects such as slant range distortion, layover, and foreshortening [80]. Warping effects can affect the backscatter values of images. Loading images in TIF format begins with preprocessing to adapt them to the format required by the model. A transformation ensures that the images have the dimensions defined in the previous step. In addition, a transposition of the images is performed to adjust their channels to match the dimensions of the channels required by the

Fig. 11. Performance of the model with 150 epochs. (a) Loss in training. (b) Precision and validation. (c) Confusion matrix.

Fig. 12. Performance of the model with 200 epochs. (a) Loss in training. (b) Precision and validation. (c) Confusion matrix.

model. Radiometric calibration, terrain correction, and thermal noise elimination are performed in preprocessing. Backscatter coefficients are converted to decibels. VV + VH dual-band scenes acquired in Interferometric Wide swath mode are recovered. The scenes are then filtered according to the ascending and descending passes due to the influence of the angle of incidence on the backscatter coefficient. Channels VV and VH are clipped within the range of (−23, 0) dB for VV and (−28, −5) dB for VH. Subsequently, the pixel intensity values are normalized.

Normalization is a fundamental step in standardizing the scale of the data and ensuring that the values are within a range that facilitates the training of the neural model. Normalization is performed by dividing the pixel intensity values by 255, which scales the values to between 0 and 1: 1 for pixels corresponding to floods and 0 in nonflooded areas.

The matrices created for training house the preprocessed images and their respective flood reference masks. Two matrices, X and Y, are created. The masks are represented in binary format: 1 for flooded and 0 for nonflooded areas.
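The clipping and scaling steps described above can be sketched in a few lines of NumPy. The function names are ours, not the authors'; the paper divides 8-bit pixel intensities by 255, and for dB-valued backscatter we assume the analogous operation is min-max scaling over the clip window:

```python
import numpy as np

# Clip ranges for Sentinel-1 backscatter in dB, as stated in the text.
DB_RANGE = {"VV": (-23.0, 0.0), "VH": (-28.0, -5.0)}

def preprocess_scene(vv_db, vh_db):
    """Clip each polarization band to its dB range and scale it to [0, 1].

    vv_db, vh_db: 2-D float arrays of backscatter coefficients in decibels.
    Returns an (H, W, 2) array with values in [0, 1], channels last.
    """
    bands = []
    for band, name in ((vv_db, "VV"), (vh_db, "VH")):
        lo, hi = DB_RANGE[name]
        clipped = np.clip(band, lo, hi)
        bands.append((clipped - lo) / (hi - lo))  # min-max scaling to [0, 1]
    return np.stack(bands, axis=-1)

def binarize_mask(mask):
    """Flood reference mask in binary format: 1 flooded, 0 nonflooded."""
    return (mask > 0).astype(np.uint8)
```

Stacking the scaled bands channel-last matches the transposition step mentioned earlier, where channels are moved to the dimension the model expects.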

Fig. 13. Results of the visualization of the segmentation. Right, ground truth. Left, model predictions. (a) 50 epochs. (b) 100 epochs. (c) 150 epochs. (d) 200 epochs.

D. U-Net Architecture

The U-Net architecture is a CNN designed for image segmentation. It can learn specific features in images by combining low-level and high-level features. Despite being one of the simplest models, it offers more precise or better-adjusted results than other models (see Fig. 7). The accuracy is due to its handling of small datasets [58].

The U-Net architecture created for the detection and segmentation of flooded areas consists of two main parts (see Fig. 8): the contraction path (encoder) and the expansion path (decoder). Skip connections interconnect both. Furthermore, it ends in an output layer, which generates the segmented mask of the areas of interest.

E. Training

Essential aspects for training were configured to compile the model. The Adam optimization algorithm was used. Adam combines the advantages of RMSprop and Momentum to improve the model learning process [81]. Both use the history of previous gradients to update the model parameters. However, instead of a constant learning rate, Adam adjusts the rate of each parameter individually based on its estimate of the momentum and magnitude of the gradient. This allows for more efficient and accurate fitting to the training data, resulting in greater prediction accuracy than other optimization methods. Binary cross-entropy [82] was used as the loss function; it is generally used in binary classification problems but also in problems where the variables to be predicted take values between 0 and 1.

1) Subdataset for training and validation: Manually labeled Sen1Floods11 data are divided into subsets to train, validate, and test the model. These manually labeled data are Sentinel-1 SAR images that expert remote sensing analysts have labeled to indicate the presence or absence of flooding in each pixel. Three subsets were made: a) model training set (70% of the dataset); b) validation set (15% of the dataset) to tune the model hyperparameters and prevent overfitting; and c) test set (15% of the dataset), data to evaluate the model and provide a realistic estimate of its performance.

2) Definition of callbacks: Callbacks were implemented to control the training process and make decisions based on the model's performance. One of the most critical callbacks is ModelCheckpoint, which saves the model with the lowest loss on the validation set during training. In addition, EarlyStopping was used to stop model training if no improvement in validation loss was observed for a specific number of epochs (in this case, 70 epochs).

The model was trained and validated using the subsets created for these purposes. A batch size of 32 images per iteration was used during training. Different training tests were performed with different numbers of epochs (50, 100, 150, and 200) to evaluate performance over time. This allowed us to determine with how many epochs the best results are obtained regarding loss and precision in the validation set. Training was performed by iterating through the training batches; at each iteration, model weights are updated to minimize the loss function. Training progress is monitored, and loss and accuracy are recorded at each epoch on the training and validation sets.

F. Output

1) Viewing Results: In this phase, metrics and visualizations that allow us to understand the performance and effectiveness of the model in detecting and segmenting floods in SAR images are obtained. It ranges from the evaluation of the quality of the
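The contraction/expansion structure and the skip connections can be illustrated at the level of array shapes. This is a sketch of our own (convolutions omitted, helper names hypothetical), showing only the resolution and channel bookkeeping of the encoder-decoder scheme in Fig. 8:

```python
import numpy as np

def maxpool2x(x):
    """2x2 max pooling over an (H, W, C) array (H and W must be even)."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of an (H, W, C) array."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_shapes(x, depth=3):
    """Walk an input through the contraction path, then the expansion path,
    concatenating each saved encoder feature map with the matching decoder
    stage (the skip connections). Returns the shape after every stage."""
    skips, shapes = [], []
    for _ in range(depth):                        # contraction path (encoder)
        skips.append(x)                           # saved for the skip connection
        x = maxpool2x(x)
        shapes.append(x.shape)
    for skip in reversed(skips):                  # expansion path (decoder)
        x = upsample2x(x)
        x = np.concatenate([x, skip], axis=-1)    # skip connection: reuse features
        shapes.append(x.shape)
    return shapes
```

For a 512 × 512 input with the two VV/VH channels, the encoder halves the resolution at each stage, and each decoder stage doubles it back while the concatenation grows the channel count; a real U-Net would apply convolution blocks between these steps.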
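The 70/15/15 split and the ModelCheckpoint/EarlyStopping behaviour described here can be sketched as follows. This is a simplified stand-in for the Keras callbacks, with hypothetical names, driven by an already-computed list of validation losses rather than a live model:

```python
import numpy as np

def split_dataset(n, seed=0):
    """70/15/15 train/validation/test split over n sample indices."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_tr, n_va = int(0.70 * n), int(0.15 * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

def fit_with_callbacks(val_losses, patience=70):
    """Minimal checkpoint/early-stopping logic: remember the epoch with the
    lowest validation loss, and stop once `patience` epochs pass without
    improvement. Returns (best_epoch, epochs_actually_run)."""
    best, best_epoch, wait = float("inf"), -1, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:                  # "checkpoint": keep the best weights
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:         # early stopping triggers
                return best_epoch, epoch + 1
    return best_epoch, len(val_losses)
```

With the 4831 chips of Sen1Floods11, the split above yields 3381 training, 724 validation, and 726 test indices (the remainder goes to the test set).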
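For reference, the loss function and one Adam update can be written out directly. This is our own NumPy sketch of the equations, not the authors' code:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy averaged over all pixels; predictions in (0, 1)."""
    p = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: a per-parameter rate built from running estimates of
    the gradient's first moment (momentum, m) and magnitude (v)."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

The per-parameter division by the square root of `v_hat` is what replaces the constant learning rate mentioned above.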
training to the application of the model. The main components are detailed as follows.

1) Loss and precision to understand how the model adapts to the data and training over time: These metrics provide information about model convergence and whether overfitting or underfitting occurs. A gradual decrease in loss and an increase in accuracy indicate successful training.

2) Model evaluation: The test subset is used to evaluate the model's actual performance. The model predictions are applied to these images and compared to the flood masks. This allows various evaluation metrics to be calculated, such as precision, recall, and IoU score.

3) IoU score calculation to evaluate the quality of the segmentation: This is calculated by dividing the intersection area between the predicted mask and the actual mask by the area of their union. A higher IoU indicates higher overlap and accuracy in predicting flooded areas.

4) Prediction on test images, applied to a test image to generate a prediction of the flooded areas: This prediction is visually compared to the actual flood mask in the same image to evaluate the accuracy and quality of the segmentation. Detected and actual areas can be overlaid to analyze coincidences and deviations.

G. Validation

An inference test is performed to predict new flood images obtained from SAR images. In this phase, the knowledge acquired during model training is applied to detect flooded areas in real-world scenarios. The key components are described as follows.

1) Loading of the trained neural model: It contains the weights and architecture learned during the training process for classifying flooded areas.

2) Preprocess and postprocess: Preprocessing functions are used to prepare the images properly, including normalizing pixel values and adjusting the size to match the model input format. After obtaining model predictions, postprocessing functions are used to improve and refine the outputs. This could involve removing small groups of unwanted pixels and improving the consistency of segmented areas.

3) New image classification: Classification proceeds once

IV. RESULTS OBTAINED

A. Model Evaluation Metrics

The following metrics were selected to evaluate the developed neural model: loss, recall, precision, F1-score, accuracy, confusion matrix, and IoU [83], [84].

1) Loss: A metric that quantifies the difference between model predictions and actual labels. A smaller loss indicates better agreement between model predictions and labels. The loss was evaluated at different training epochs (50, 100, 150, and 200) to understand its evolution and convergence to a minimum value for better fitting the data.

2) Recall: It measures the proportion of positive instances (flooded areas) the model correctly identified compared to the total number of positive instances. A high recall indicates the model's ability to detect most flooded areas in the SAR image.

3) Precision: It measures the fraction of detections made by the model that are correct.

4) F1-score: A metric that combines the precision and recall of the model. It measures the ratio between true and false positive predictions compared to the actual labels. It is advantageous when there is an imbalance between classes, such as in segmenting flooded areas where nonflooded areas are predominant.

5) Accuracy: It evaluates the overall accuracy of a classifier. It indicates the overall performance of the model.

6) Confusion matrix: This shows the number of true positives, false positives, true negatives, and false negatives in the model classification.

7) IoU: It measures the overlap between the segmentation masks generated by the model and the ground truth masks.

These metrics were evaluated in different training epochs: 50, 100, 150, and 200. The model's improvement can be observed throughout each epoch, as well as the equilibrium points where performance stabilizes. In addition, this allows for identifying the stage where the model achieves an optimal balance between precision and recall.

Training the neural network with 50 epochs reached a loss level of 0.3666 in training and 0.4462 on the validation set [see Fig. 9(a)]. The loss in training indicates the magnitude of the difference between the model predictions and the actual labels. The increase in loss on the validation set is because the model was overfitting. The accuracy achieved in the training set was 0.8756, and on the validation set, it was 0.8244 [see Fig. 9(b)]. This
the image has been preprocessed and the model loaded. indicates that 87.56% of the model predictions match the actual
The image is input into the model, and predictions are labels in the training set. Although the model shows excellent
generated about the areas that could be flooded. The model predictive ability in the training set, its performance in valida-
uses its prior understanding of patterns learned during tion is slightly lower. The F1-score was 0.0230 in the test set,
training to make these predictions. indicating that the model balances accuracy and can detect true
4) Visualization of results: The model predictions can be positives. However, it is essential to note that the low F1-value is
visualized by overlaying them on the original image. This due to the imbalance in the flooded and nonflooded classes in the
allows a visual assessment of how the model has identified test set. It is worth mentioning that the terrain characteristics and
flooded areas compared to reality. The overlay can also the angle of incidence of the image produce areas with excessive
indicate the quality of the segmentation and whether there shadowing; this causes the model to detect false positives from
are areas for improvement. areas with flooding present.
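All of the mask-level metrics above reduce to the four confusion-matrix counts. As a minimal NumPy sketch (not the authors' code; the function and array names are illustrative), the arithmetic is:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Pixel-wise metrics for binary flood masks (1 = flooded, 0 = not flooded)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()      # flooded, predicted flooded
    fp = np.logical_and(pred, ~truth).sum()     # not flooded, predicted flooded
    fn = np.logical_and(~pred, truth).sum()     # flooded, missed
    tn = np.logical_and(~pred, ~truth).sum()    # not flooded, predicted not flooded
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0  # intersection over union
    return {"precision": precision, "recall": recall, "f1": f1,
            "accuracy": accuracy, "iou": iou}
```

Note how the IoU denominator (tp + fp + fn) omits true negatives, which is why a model that predicts almost everything as nonflooded can report high accuracy while its IoU stays near zero.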
TABLE I
RESULTS OF THE METRICS USED TO EVALUATE THE PERFORMANCE OF THE U-NET MODEL TRAINED AT 50 EPOCHS

TABLE III
RESULTS OF THE METRICS USED TO EVALUATE THE PERFORMANCE OF THE U-NET MODEL TRAINED AT 150 EPOCHS
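The inference steps described in the Validation subsection (normalize pixel values, classify, then postprocess by removing small groups of unwanted pixels) can be sketched as follows. This is not the authors' implementation: the percentile normalization, the 0.5 threshold, and the 64-pixel minimum region size are illustrative assumptions, and `model` stands in for any trained network exposed as a callable.

```python
from collections import deque

import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Normalize pixel values to [0, 1] via robust percentile clipping (illustrative)."""
    lo, hi = np.percentile(image, [1, 99])
    return np.clip((image - lo) / (hi - lo + 1e-8), 0.0, 1.0)

def remove_small_regions(mask: np.ndarray, min_pixels: int = 64) -> np.ndarray:
    """Postprocess: drop 4-connected flooded regions smaller than min_pixels (BFS labeling)."""
    mask = mask.copy()
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                seen[r, c] = True
                queue, region = deque([(r, c)]), [(r, c)]
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                            region.append((ny, nx))
                if len(region) < min_pixels:  # spurious speckle region: discard
                    for y, x in region:
                        mask[y, x] = False
    return mask

def run_inference(model, image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """model: any callable mapping a normalized image to per-pixel flood probabilities."""
    prob = model(preprocess(image))
    return remove_small_regions(prob > threshold).astype(np.uint8)
```

Removing small connected components after thresholding is a common way to suppress the speckle-induced false positives that SAR imagery produces; a production pipeline would typically use a library routine for this step instead of an explicit BFS.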
TABLE II
RESULTS OF THE METRICS USED TO EVALUATE THE PERFORMANCE OF THE U-NET MODEL TRAINED AT 100 EPOCHS

The recall had a value of 0.0117, indicating that the model has difficulty detecting most of the flooded areas in the image. This is due to the limitation of the model with 50-epoch training. The precision, which measures the model's ability to identify flooded areas correctly, was 0.9193. This means that 91.93% of the areas the model predicted as flooded were actually flooded. The confusion matrix shows that the model classified 1.16% as true positives (167 919 instances), 0.01% as false positives (14 731 instances), and 99.99% as true negatives (102 503 103 instances); however, it classified 98.84% (14 230 471 instances) as false negatives [see Fig. 9(c)]. The IoU score was 0.0117. This value reflects the model's limited ability to perform accurate segmentation and coincides with the low recall value observed.

The above results suggest that the model trained with 50 epochs has limitations in detecting and segmenting flooded areas in SAR images. Although it shows acceptable precision, its recall and IoU are low. Table I presents a summary of the results obtained.

Training the model with 100 epochs reached a loss value of 0.2017 on the training set, thus reducing the discrepancy between predictions and labels; the loss on the validation set was 0.2267 [see Fig. 10(a)]. The accuracy on the training set was 0.9211, which means that the model correctly classified 92.11% of the instances; on the validation set, the accuracy was 0.9137, slightly lower but significant [see Fig. 10(b)]. The F1-score was 0.4956, which shows the model's ability to find a balance in identifying flooded and nonflooded areas. The recall reached a value of 0.3379, detecting about a third of the flooded areas. The precision was 0.9287, indicating that 92.87% of the predicted instances correspond to flooded areas. The confusion matrix shows that the model correctly identified 91.46% of nonflooded areas (102 144 006 instances) and 92.87% of flooded areas (4 865 920 instances). However, it misclassified 8.52% as flooded areas (9 532 470 instances) and 7.12% (373 828 instances) as nonflooded areas [see Fig. 10(c)]. The IoU score was 0.3294, indicating a significant correlation between the areas identified by the model and the actual flooding areas. Table II presents a summary of the obtained results.

Training with 150 epochs achieves promising and robust performance in flooded area segmentation. Its loss was 0.1467, which suggests that the model has managed to minimize the discrepancy between its predictions and the actual labels [see Fig. 11(a)]. The overall accuracy reaches a solid 0.9420 on the training set, showing that the model can perform accurate classification in most instances. The validation set's accuracy remains at a satisfactory level of 0.9363 [see Fig. 11(b)]. The F1-score had a value of 0.7874, demonstrating that the model balances precision and recall when considering true positives, false positives, and false negatives. Recall performed well, with a solid value of 0.7718. The precision was 0.8037, which avoids most false positives and yields adequate segmentations. The confusion matrix shows that the model improved its performance; it correctly classified 97.34% of instances as nonflooded areas (99 802 794 instances) and 77.17% as flooded areas (11 112 871 instances). However, 2.64% of cases were misclassified as false positives (2 715 040 instances), and 22.83% (3 285 519 instances) were misclassified as false negatives [see Fig. 11(c)]. The model obtains an IoU score of 0.6494, meaning that there is a significant overlap between the areas segmented by the model and the actual flooded areas. Table III presents a summary of the results obtained.

Training with 200 epochs achieved a loss of 0.0697 and, during validation, a loss of 0.1396, indicating that the model achieved excellent agreement between predictions and actual labels during training [see Fig. 12(a)]. The accuracy was 0.9519 [see Fig. 12(b)], demonstrating that the model effectively generalized the relationships learned during training to new data. The F1-score was 0.8441 and the recall 0.7639. The precision was 0.9431, underlining the model's reliability in classifying flooded areas. The confusion matrix shows high model performance: 99.35% of nonflooded areas (101 854 538 instances) were correctly identified, along with 76.39% of flooded areas (10 998 738 instances); 0.65% were incorrectly classified as flooded areas (663 296 instances), and 23.60% (3 399 652 instances) of flooded areas were missed as false negatives [see Fig. 12(c)]. Finally, the IoU score was 0.7302. Table IV presents a summary of the results obtained.

B. Visualization of Segmentations

Fig. 13 shows the results of the segmentation tests. On the left side, the prediction made by the model can be seen. Areas that the model identifies as flooded are highlighted in white. Overlaying the white segmented areas with the actual flooded
TABLE IV
RESULTS OF THE METRICS USED TO EVALUATE THE PERFORMANCE OF THE U-NET MODEL TRAINED AT 200 EPOCHS

areas provides a visual assessment of the accuracy of the model predictions. On the right side is the ground truth mask used for the model input, where flooded areas are marked in blue.

As shown in Fig. 13(a), the segmentation obtained with 50 epochs is deficient: the IoU score was 1.17% and the precision 91.93% (see Table I). The model achieves significant segmentation skill with 100 epochs [see Fig. 13(b)]; its IoU score is 32.94% and its precision 92.87% (see Table II). Tests conducted with 150 epochs [see Fig. 13(c)] highlight the model's ability to identify flooded areas and achieve segmentation that overlaps significantly with the actual flooded areas; the IoU score was 64.94% and the precision 80.37% (see Table III). The training with 200 epochs obtained the best results, with an IoU score of 73.02% and a precision of 94.31% (see Table IV). These visualizations provide a concrete graphical representation of how the model identifies and segments flooded areas in SAR imagery.

V. CONCLUSION

The segmentation model based on the U-Net architecture effectively identifies flooded areas. The ability of U-Net to capture relevant features in SAR images and its training and validation are reflected in the obtained results. The tests with 200 epochs obtained the best results, with an IoU score of 73.02% and a precision of 94.31%.

Despite the achievements made, several paths could be explored in future work to further improve flood detection and segmentation in SAR images.
1) Architecture improvement: Other architectures, such as DeepLab and PSPNet, could be considered to explore new feature extraction capabilities.
2) Parameter optimization: Although tests have been performed to determine the optimal number of training epochs, deeper optimization can be performed to fine-tune the hyperparameters and achieve a balance between accuracy and training time.
3) Use of multitemporal data: Integrating multitemporal data from different satellites could allow better flood detection by considering the temporal evolution of the affected areas.

As challenges are addressed and new opportunities are explored, this methodology will likely continue to improve and significantly impact disaster management and data-driven decision making.

REFERENCES

[1] CRED, “2021 disasters in numbers,” Centre Res. Epidemiol. Disasters, Brussels, Belgium, Tech. Rep., 2021. [Online]. Available: https://cred.be/sites/default/files/2021_EMDAT_report.pdf
[2] D. Tin, L. Cheng, D. Le, R. Hata, and G. Ciottone, “Natural disasters: A comprehensive study using EMDAT database 1995–2022,” Public Health, vol. 226, pp. 255–260, 2024.
[3] E. Psomiadis, “Flash flood area mapping utilising SENTINEL-1 radar data,” Proc. SPIE, 2016, vol. 10005, Art. no. 100051G.
[4] P. Wallemacq and R. House, “Economic losses, poverty and disasters (1998–2017),” Centre for Research on the Epidemiology of Disasters and United Nations Office for Disaster Risk Reduction, Tech. Rep., 2018. [Online]. Available: https://www.preventionweb.net/files/61119_credeconomiclosses.pdf
[5] F. Pech-May, R. Aquino-Santos, G. Rios-Toledo, and J. P. F. Posadas-Durán, “Mapping of land cover with optical images, supervised algorithms, and Google earth engine,” Sensors, vol. 22, no. 13, 2022, Art. no. 4729.
[6] E. Benami et al., “Uniting remote sensing, crop modelling and economics for agricultural risk management,” Nature Rev. Earth Environ., vol. 2, pp. 1–20, 2021.
[7] J. Paz, F. Jiménez, and B. Sánchez, “Urge manejo del agua en tabasco,” Universidad Nacional Autónoma de México y Asociación Mexicana de Ciencias para el Desarrollo Regional A.C., Ciudad de México, Mexico, Tech. Rep., 2018.
[8] CEPAL, “Tabasco: Características e impacto socioeconómico de las inundaciones provocadas a finales de octubre y a comienzos de noviembre de 2007 por el frente frío número 4,” Comisión Económica para América Latina y el Caribe Sede Subregional en México, Mexico City, México, Tech. Rep., 2008. [Online]. Available: https://hdl.handle.net/11362/25881
[9] J. Cuevas, M. F. Enriquez, and R. Norton, “Tabasco floods of 2020—Learning from the past to prepare for the future,” ISET International and the Zurich Flood Resilience Alliance, Boulder, CO, USA, Tech. Rep., 2022. [Online]. Available: https://preparecenter.org/wp-content/uploads/2023/03/PERC-fullreport_Mexico_ENG.pdf
[10] M. Perevochtchikova and J. Torre, “Causas de un desastre: Inundaciones del 2007 en tabasco, México,” J. Latin Amer. Geography, vol. 9, pp. 73–98, 2010.
[11] G. J.-P. Schumann and D. K. Moller, “Microwave remote sensing of flood inundation,” Phys. Chem. Earth, Parts A/B/C, vol. 83–84, pp. 84–95, 2015.
[12] W. Emery and A. Camps, “The history of satellite remote sensing,” in Introduction to Satellite Remote Sensing, W. Emery and A. Camps, Eds., New York, NY, USA: Springer, 2017, pp. 1–42.
[13] T. M. Lillesand, Remote Sensing and Image Interpretation. Hoboken, NJ, USA: Wiley, 2006.
[14] S. Jutz and M. Milagro-Pérez, “1.06—Copernicus program,” in Comprehensive Remote Sensing, S. Liang, Ed., Amsterdam, The Netherlands: Elsevier, 2018, pp. 150–191.
[15] A. Twele, W. Cao, S. Plank, and S. Martinis, “Sentinel-1-based flood mapping: A fully automated processing chain,” Int. J. Remote Sens., vol. 37, no. 13, pp. 2990–3004, 2016.
[16] M. Chini, R. Pelich, L. Pulvirenti, N. Pierdicca, R. Hostache, and P. Matgen, “Sentinel-1 InSAR coherence to detect floodwater in urban areas: Houston and hurricane Harvey as a test case,” Remote Sens., vol. 11, no. 2, 2019, Art. no. 107.
[17] K. K. Singh and A. Singh, “Identification of flooded area from satellite images using Hybrid Kohonen Fuzzy C-means sigma classifier,” Egyptian J. Remote Sens. Space Sci., vol. 20, no. 1, pp. 147–155, 2017.
[18] V. Lalitha and B. Latha, “A review on remote sensing imagery augmentation using deep learning,” Mater. Today: Proc., vol. 62, pp. 4772–4778, 2022.
[19] L. Alzubaidi, J. Zhang, A. J. Humaidi, and A. Al-Dujaili, “Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions,” J. Big Data, vol. 53, no. 8, pp. 1–74, 2021.
[20] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA, USA: MIT Press, 2016.
[21] R. Bentivoglio, E. Isufi, S. N. Jonkman, and R. Taormina, “Deep learning methods for flood mapping: A review of existing applications and future research directions,” Hydrol. Earth Syst. Sci., vol. 26, no. 16, pp. 4345–4378, 2022.
[22] C. P. Patel, S. Sharma, and V. Gulshan, “Evaluating self and semi-supervised methods for remote sensing segmentation tasks,” CoRR, vol. abs/2111.10079, 2021. [Online]. Available: https://dblp.unitrier.de/rec/journals/corr/abs2111-10079.html?view=bibtex
[23] D. Bonafilia, B. Tellman, T. Anderson, and E. Issenberg, “Sen1Floods11: A georeferenced dataset to train and test deep learning flood algorithms for sentinel-1,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops, 2020, pp. 835–845.
[24] UNOSAT, “Unosat flood dataset,” 2019. Accessed: Jun. 20, 2022. [Online]. Available: http://floods.unosat.org/geoportal/catalog/main/home.page
[25] G. I. Drakonakis, G. Tsagkatakis, K. Fotiadou, and P. Tsakalides, “OmbriaNet—Supervised flood mapping via convolutional neural networks using multitemporal Sentinel-1 and Sentinel-2 data fusion,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 15, pp. 2341–2356, 2022.
[26] C. Rambour, N. Audebert, E. Koeniguer, B. Le Saux, M. Crucianu, and M. Datcu, “SEN12-FLOOD: A SAR and multispectral dataset for flood detection,” 2020, doi: 10.21227/w6xz-s898.
[27] V. Tsyganskaya, S. Martinis, P. Marzahn, and R. Ludwig, “SAR-based detection of flooded vegetation—A review of characteristics and approaches,” Int. J. Remote Sens., vol. 39, no. 8, pp. 2255–2293, 2018.
[28] C. Rambour, N. Audebert, E. Koeniguer, B. Le Saux, M. Crucianu, and M. Datcu, “Flood detection in time series of optical and SAR images,” in Proc. Int. Arch. Photogrammetry, Remote Sens. Spatial Inf. Sci., 2020, pp. 1343–1346.
[29] G. Mateo-Garcia et al., “Towards global flood mapping onboard low cost satellites with machine learning,” Sci. Rep., vol. 11, no. 1, 2021, Art. no. 7249.
[30] S. Grimaldi, Y. Li, V. Pauwels, and J. Walker, “Remote sensing-derived water extent and level to constrain hydraulic flood forecasting models: Opportunities and challenges,” Surv. Geophys., vol. 37, no. 5, pp. 977–1034, 2016.
[31] X. Shen, D. Wang, K. Mao, E. Anagnostou, and Y. Hong, “Inundation extent mapping by synthetic aperture radar: A review,” Remote Sens., vol. 11, no. 7, 2019, Art. no. 879.
[32] M. V. Bernhofen et al., “The role of global data sets for riverine flood risk management at national scales,” Water Resour. Res., vol. 58, no. 4, 2022, Art. no. e2021WR031555.
[33] R. Sadiq, M. Imran, and F. Ofli, Remote Sensing for Flood Mapping and Monitoring. Singapore: Springer, 2023, pp. 679–697.
[34] J. Yu, Z. Wang, V. Vasudevan, L. Yeung, M. Seyedhosseini, and Y. Wu, “CoCa: Contrastive captioners are image-text foundation models,” 2022, arXiv:2205.01917.
[35] J. Rosentreter, R. Hagensieker, and B. Waske, “Towards large-scale mapping of local climate zones using multitemporal Sentinel 2 data and convolutional neural networks,” Remote Sens. Environ., vol. 237, 2020, Art. no. 111472.
[36] S. Martinis, S. Groth, M. Wieland, L. Knopp, and M. Rättich, “Towards a global seasonal and permanent reference water product from Sentinel-1/2 data for improved flood mapping,” Remote Sens. Environ., vol. 278, 2022, Art. no. 113077.
[37] Z. Gou, “Urban road flooding detection system based on SVM algorithm,” in Proc. 2nd Int. Conf. Mach. Learn. Comput. Appl., 2021, pp. 1–8.
[38] A. H. Tanim, C. B. McRae, H. Tavakol-Davani, and E. Goharian, “Flood detection in urban areas using satellite imagery and machine learning,” Water, vol. 14, no. 7, 2022, Art. no. 1140.
[39] K. Kunverji, K. Shah, and N. Shah, “A flood prediction system developed using various machine learning algorithms,” in Proc. 4th Int. Conf. Adv. Sci. Technol., 2021. [Online]. Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3866524
[40] C. Alexander, “Normalised difference spectral indices and urban land cover as indicators of land surface temperature (LST),” Int. J. Appl. Earth Observ. Geoinf., vol. 86, 2020, Art. no. 102013.
[41] V. Kumar, A. Sharma, R. Bhardwaj, and A. K. Thukral, “Comparison of different reflectance indices for vegetation analysis using Landsat-TM data,” Remote Sens. Appl.: Soc. Environ., vol. 12, pp. 70–77, 2018.
[42] J. Campbell and R. Wynne, Introduction to Remote Sensing, 5th ed. New York City, NY, USA: Guilford Publications, 2011.
[43] J. W. Rouse, R. H. Haas, J. A. Schell, and D. W. Deering, “Monitoring vegetation systems in the great plains with ERTS,” in Proc. 3rd ERTS Symp., 1974, vol. 351, pp. 309–317.
[44] B.-C. Gao, “NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space,” Remote Sens. Environ., vol. 58, no. 3, pp. 257–266, 1996.
[45] P. Deroliya, M. Ghosh, M. P. Mohanty, S. Ghosh, K. D. Rao, and S. Karmakar, “A novel flood risk mapping approach with machine learning considering geomorphic and socio-economic vulnerability dimensions,” Sci. Total Environ., vol. 851, 2022, Art. no. 158002.
[46] Y. Zhou, J. Luo, Z. Shen, X. Hu, and H. Yang, “Multiscale water body extraction in urban environments from satellite images,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 7, no. 10, pp. 4301–4312, Oct. 2014.
[47] M. G. Tulbure, M. Broich, S. V. Stehman, and A. Kommareddy, “Surface water extent dynamics from three decades of seasonally continuous Landsat time series at subcontinental scale in a semi-arid region,” Remote Sens. Environ., vol. 178, pp. 142–157, 2016.
[48] G. Schumann, J. Henry, L. Hoffmann, L. Pfister, F. Pappenberger, and P. Matgen, “Demonstrating the high potential of remote sensing in hydraulic modelling and flood risk management,” in Proc. Annu. Conf. Remote Sens. Photogrammetry Soc. NERC Earth Observ. Conf., 2005, pp. 6–9.
[49] N. Anusha and B. Bharathi, “Flood detection and flood mapping using multi-temporal synthetic aperture radar and optical data,” Egypt. J. Remote Sens. Space Sci., vol. 23, pp. 207–219, 2020.
[50] G. Konapala, S. V. Kumar, and S. K. Ahmad, “Exploring Sentinel-1 and Sentinel-2 diversity for flood inundation mapping using deep learning,” ISPRS J. Photogrammetry Remote Sens., vol. 180, pp. 163–173, 2021.
[51] T. G. J. Rudner et al., “Multi3Net: Segmenting flooded buildings via fusion of multiresolution, multisensor, and multitemporal satellite imagery,” in Proc. AAAI Conf. Artif. Intell., 2019, vol. 33, pp. 702–709.
[52] Y. Li, S. Martinis, and M. Wieland, “Urban flood mapping with an active self-learning convolutional neural network based on TerraSAR-X intensity and interferometric coherence,” ISPRS J. Photogrammetry Remote Sens., vol. 152, pp. 178–191, 2019.
[53] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 770–778.
[54] B. Zhao, H. Sui, C. Xu, and J. Liu, “Deep learning approach for flood detection using SAR image: A case study in Xinxiang,” in Proc. Int. Arch. Photogrammetry, Remote Sens. Spatial Inf. Sci., 2022, pp. 1197–1202.
[55] J. Betbeder, S. Rapinel, T. Corpetti, E. Pottier, S. Corgne, and L. Hubert-Moy, “Multitemporal classification of TerraSAR-X data for wetland vegetation mapping,” J. Appl. Remote Sens., vol. 8, 2014, Art. no. 083648.
[56] Z. Xing et al., “Flood vulnerability assessment of urban buildings based on integrating high-resolution remote sensing and street view images,” Sustain. Cities Soc., vol. 92, 2023, Art. no. 104467.
[57] B. Tavus, R. Can, and S. Kocaman, “A CNN-based flood mapping approach using Sentinel-1 data,” ISPRS Ann. Photogrammetry, Remote Sens. Spatial Inf. Sci., vol. 3, pp. 549–556, 2022.
[58] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in Proc. Med. Image Comput. Computer-Assisted Interv., 2015, pp. 234–241.
[59] V. Katiyar, N. Tamkuan, and M. Nagai, “Near-real-time flood mapping using off-the-shelf models with SAR imagery and deep learning,” Remote Sens., vol. 13, no. 12, 2021, Art. no. 2334.
[60] A. Moradi Sizkouhi, M. Aghaei, and S. M. Esmailifar, “A deep convolutional encoder-decoder architecture for autonomous fault detection of PV plants using multi-copters,” Sol. Energy, vol. 223, pp. 217–228, 2021.
[61] Y. Bai et al., “Enhancement of detecting permanent water and temporary water in flood disasters by fusing Sentinel-1 and Sentinel-2 imagery using deep learning algorithms: Demonstration of Sen1Floods11 benchmark datasets,” Remote Sens., vol. 13, no. 11, 2021, Art. no. 2220.
[62] X. Qin, Z. Zhang, C. Huang, C. Gao, M. Dehghan, and M. Jagersand, “BASNet: Boundary-aware salient object detection,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2019, pp. 7471–7481.
[63] S. Scepanovic, O. Antropov, P. Laurila, Y. Rauste, V. Ignatenko, and J. Praks, “Wide-area land cover mapping with Sentinel-1 imagery using deep learning semantic segmentation models,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 14, pp. 10357–10374, 2021.
[64] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in Proc. Eur. Conf. Comput. Vis., 2018, pp. 801–818.
[65] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 6230–6239.
[66] C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang, “BiSeNet: Bilateral segmentation network for real-time semantic segmentation,” in Proc. Eur. Conf. Comput. Vis., 2018, pp. 334–349.
[67] S. Jégou, M. Drozdzal, D. Vazquez, A. Romero, and Y. Bengio, “The one hundred layers Tiramisu: Fully convolutional DenseNets for semantic segmentation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops, 2017, pp. 11–19.
[68] T. Pohlen, A. Hermans, M. Mathias, and B. Leibe, “Full-resolution residual networks for semantic segmentation in street scenes,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 4151–4160.
[69] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive learning of visual representations,” in Proc. 37th Int. Conf. Mach. Learn., 2020, pp. 1597–1607.
[70] K. Sohn et al., “FixMatch: Simplifying semi-supervised learning with consistency and confidence,” in Proc. Int. Conf. Neural Inf. Process. Syst., 2020, vol. 33, pp. 596–608.
[71] Y. Bengio, A. C. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 8, pp. 1798–1828, Aug. 2013.
[72] D. Ienco, R. Gaetano, R. Interdonato, K. Ose, and D. H. T. Minh, “Combining Sentinel-1 and Sentinel-2 time series via RNN for object-based land cover classification,” in Proc. IEEE Int. Geosci. Remote Sens. Symp., 2019, pp. 4881–4884.
[73] X. Shi, Z. Chen, H. Wang, D. Yeung, W. Wong, and W. Woo, “Convolutional LSTM network: A machine learning approach for precipitation nowcasting,” in Proc. Int. Conf. Neural Inf. Process. Syst., 2015, pp. 802–810.
[74] R. Marc and K. Marco, “Multi-temporal land cover classification with sequential recurrent encoders,” ISPRS Int. J. Geo-Inf., vol. 7, no. 4, 2018, Art. no. 129.
[75] M. Volpi and D. Tuia, “Dense semantic labeling of subdecimeter resolution images with convolutional neural networks,” IEEE Trans. Geosci. Remote Sens., vol. 55, no. 2, pp. 881–893, Feb. 2017.
[76] H. Zhong, C. Chen, Z. Jin, and X. Hua, “Deep robust clustering by contrastive learning,” 2020, arXiv:2008.03030. [Online]. Available: https://api.semanticscholar.org/CorpusID:210920406
[77] M. Huang and S. Jin, “Rapid flood mapping and evaluation with a supervised classifier and change detection in Shouguang using Sentinel-1 SAR and Sentinel-2 optical data,” Remote Sens., vol. 12, no. 13, 2020, Art. no. 2073.
[78] H. Jung, Y. Oh, S. Jeong, C. Lee, and T. Jeon, “Contrastive self-supervised learning with smoothed representation for remote sensing,” IEEE Geosci. Remote Sens. Lett., vol. 19, 2022, Art. no. 8010105.
[79] J. Cuevas, M. F. Enriquez, and R. Norton, “Inundaciones de 2020 en tabasco: Aprender del pasado para preparar el futuro,” Centro Nacional de Prevención de Desastres, Ciudad de México, Mexico, Tech. Rep., 2022. [Online]. Available: https://preparecenter.org/wp-content/uploads/2022/08/PERC_Mexico_ESP.pdf
[80] M. Tzouvaras, C. Danezis, and D. G. Hadjimitsis, “Differential SAR interferometry using Sentinel-1 imagery—Limitations in monitoring fast moving landslides: The case study of Cyprus,” Geosciences, vol. 10, no. 6, 2020, Art. no. 236.
[81] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proc. 3rd Int. Conf. Learn. Represent., Dec. 2014.
[82] P. Hurtik, S. Tomasiello, J. Hula, and D. Hynar, “Binary cross-entropy with dynamical clipping,” Neural Comput. Appl., vol. 34, pp. 12029–12041, 2022.
[83] G. Keren, “Neural network supervision: Notes on loss functions, labels and confidence estimation,” Ph.D. dissertation, Faculty Comput. Sci. Math., Univ. Passau, Passau, Germany, 2020.
[84] M. Vakili, M. K. Ghamsari, and M. Rezaei, “Performance analysis and comparison of machine and deep learning algorithms for IoT data classification,” 2020, arXiv:2001.09636.

Fernando Pech-May received the master's degree in artificial intelligence from the Center for Research and Advanced Studies, National Polytechnic Institute, Mexico City, Mexico, in 2009, and the Ph.D. degree in computer science from the Centro Nacional de Investigación y Desarrollo Tecnológico, Cuernavaca, Mexico, in 2019. He is currently a Research Teacher with the Instituto Tecnológico Superior de los Ríos, Tabasco, México, where he works on projects related to the monitoring of crops and water bodies using artificial intelligence techniques. He has authored or coauthored several research articles in national and international journals. His research interests include natural language processing, information extraction and retrieval, the semantic web, and deep learning. Dr. Pech-May is a Member of the Topic Group AI for Flood Monitoring and Detection at the International Telecommunication Union.

Raúl Aquino-Santos received the Ph.D. degree in electrical and electronic engineering from the University of Sheffield, Sheffield, U.K., in 2005. In 2007, he was a Postdoctoral Researcher of Networking and Telecommunications with the Centre for Scientific Research and Higher Education at Ensenada, Ensenada, Mexico. In 2008, he was also a Postdoctoral Researcher of Networking and Telecommunications with the Department of Telecommunications, National Autonomous University of Mexico, Mexico City, Mexico. He is a Member of the National System of Researchers and has authored or coauthored six books, 12 book chapters, and more than 30 articles in international journals. His research interests include the design, development, and implementation of Industry 4.0, the Internet of Things, smart cities, and natural disaster management. He participated in the International Visitor Leadership Program “Advanced Informatics and Communications Technologies” in the USA, a project for Mexico from the U.S. Department of State Bureau of Educational and Cultural Affairs (2010), and represented Mexico (2014 and 2015) at the Asia-Pacific Economic Cooperation held in Shanghai, China. Dr. Aquino-Santos is the Chair of the Topic Group AI for Flood Monitoring and Detection at the International Telecommunication Union.

Omar Álvarez-Cárdenas received the master's degree in telematics and the Ph.D. degree in education from the University of California, Los Angeles, CA, USA, both specialized in active learning using remote laboratories applied to satellite communications, in 1999 and 2018, respectively. He is currently a Research Professor in the Mobile Computing Area with the School of Telematics, University of Colima, Colima, Mexico. His research interests include wireless networks, satellite communications, remote laboratories, cybersecurity, and active learning methodologies.

Jorge Lozoya Arandia received the master's degree in information technologies and the doctorate degree in energy and water from the University of Guadalajara, Guadalajara, Mexico, in 2013 and 2021, respectively. He is an Engineer in communications and electronics. He develops data analysis and computer systems projects, with a specialty in energy optimization, the dissemination of image science, and analysis through deep learning. He currently works on natural sensors for the analysis of ecosystems. His research interests include high-performance computing, energy, and digital literacy. Mr. Arandia is a Member of the National System of Researchers.

German Rios-Toledo received the Ph.D. degree in computer science from the Centro Nacional de Investigación y Desarrollo Tecnológico, Cuernavaca, México, in 2019. He is currently a Full-Time Professor with the Computer Department, National Technology of Mexico, Tuxtla Gutierrez, Mexico. His main research interests include natural language processing, particularly the use of syntactic information as a feature of writing style, as well as traditional machine learning algorithms and deep learning applied to the processing of images, audio, and video.
