
agronomy

Review
Deep Learning-Based Weed–Crop Recognition for Smart
Agricultural Equipment: A Review
Hao-Ran Qu and Wen-Hao Su *

College of Engineering, China Agricultural University, Beijing 100083, China; [email protected]


* Correspondence: [email protected]

Abstract: Weeds and crops engage in a relentless battle for the same resources, leading to potential re-
ductions in crop yields and increased agricultural costs. Traditional methods of weed control, such as
heavy herbicide use, come with the drawback of promoting weed resistance and environmental pollu-
tion. As the demand for pollution-free and organic agricultural products rises, there is a pressing need
for innovative solutions. The emergence of smart agricultural equipment, including intelligent robots,
unmanned aerial vehicles and satellite technology, proves to be pivotal in addressing weed-related
challenges. The effectiveness of smart agricultural equipment, however, hinges on accurate detection,
a task influenced by various factors, like growth stages, environmental conditions and shading. To
achieve precise crop identification, it is essential to employ suitable sensors and optimized algorithms.
Deep learning plays a crucial role in enhancing weed recognition accuracy. This advancement enables
targeted actions such as minimal pesticide spraying or precise laser excision of weeds, effectively
reducing the overall cost of agricultural production. This paper provides a thorough overview of
the application of deep learning for crop and weed recognition in smart agricultural equipment.
Starting with an overview of intelligent agricultural tools, sensors and identification algorithms, the
discussion delves into instructive examples, showcasing the technology’s prowess in distinguishing
between weeds and crops. The narrative highlights recent breakthroughs in automated technologies
for precision plant identification while acknowledging existing challenges and proposing prospects.
By marrying cutting-edge technology with sustainable agricultural practices, the adoption of intelli-
gent equipment presents a promising path toward efficient and eco-friendly weed management in
modern agriculture.
Keywords: deep learning; smart agricultural equipment; weeds and crops; recognition

Citation: Qu, H.-R.; Su, W.-H. Deep Learning-Based Weed–Crop Recognition for Smart Agricultural Equipment: A Review. Agronomy 2024, 14, 363. https://doi.org/10.3390/agronomy14020363
Academic Editor: Sung-Cheol Koh
Received: 27 December 2023; Revised: 24 January 2024; Accepted: 7 February 2024; Published: 11 February 2024
Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Weeds are a big threat in agriculture as they occur in all parts of the field and compete with crop plants for resources. The result of competition for resources is reduced crop yields. Yield losses depend on factors such as weed species, population density and relative time of emergence and distribution, as well as on the soil type, soil moisture levels, pH and fertility [1,2]. For decades, researchers and farmers have struggled to control weeds to overcome the thorny challenges they pose. Weeds in the field compete with crops for water, nutrients and sunlight. If not controlled properly, weeds can adversely affect crop yield and quality. In addition, research has shown that there is a significant link between reduced crop yields and weed competition [2]. For example, the annual cost of weeds in Australia within grain production systems is USD 3.3 billion, comprising USD 2.6 billion in costs for weed control and USD 0.7 billion in lost yield [3].
In today's agricultural sector, accurately identifying crops and weeds is crucial for improving agricultural productivity, reducing production costs and achieving sustainable agricultural development. The fast development of deep learning techniques for wide application in computer vision provides new opportunities for crop and weed recognition. The high automation and learning capabilities of deep learning models enable them to
learn from large datasets and gradually improve their performance, bringing unprece-
dented breakthroughs to precision agriculture. Recently, the main methods of weed control
in agricultural fields have included hand weeding, mechanical weeding, laser weeding
and chemical weeding. Chemical weeding provides the advantage of low cost, and it is
unaffected by terrain factors. It is widely used all over the world [4]. The heavy use of
herbicides increases weed resistance and increases the cost of agricultural inputs. Reducing
the use of herbicides is also a critical step towards sustainable agriculture. Site-specific
weed control can save up to 90% of herbicide expenditures. In addition, annual sales of
pesticides worldwide amount to about USD 100 billion. If this idea can become reality,
it will significantly reduce agricultural expenditure [5]. Spraying pesticides over large
areas can also pollute the environment. For example, indiscriminate broadcast spraying
throughout tobacco fields, especially during the early growth phase, can lead to unneces-
sarily spraying bare soil off target between any two contiguous tobacco plants, causing
environmental pollution and pesticide seepage into the ground [6,7]. Pesticide use also
has an impact on human health. The WHO has estimated that 1 million adverse reactions
have been reported when hand-sprayed insecticides are used in crop fields [8]. Owing to the massive increase in over-reliance on herbicides and in herbicide-resistant weeds, the EU's agricultural system has become more fragile and unsustainable, making better control of herbicide use essential. The EU Green Deal has a goal of cutting the use and risks of
chemical fertilizers by 50 percent by 2030 [9]. The European Food Safety Authority (EFSA)
has announced that 98.9% of food products contain agrochemical residues (of which 1.5%
exceed legal limits). In addition, plant resistance to agrochemicals (e.g., herbicides) is
becoming a huge threat to crop yields in many countries [10].
Manual weeding not only involves a heavy workload but also cannot easily detect weeds
in a timely manner. The only solution to the problem is to increase manpower, but this
will inevitably increase agricultural costs. Mechanical weed control is especially suitable
for weed control in organic farmland and can also be useful in traditional farmland. On
the other hand, the utilization of machines may also have a downside effect by damaging
and eroding crops and the environment [11]. Currently, weed removal in crop rows still
relies on manual removal in many cases, but manual weeding is less efficient. With the
development of deep learning algorithms, weed management has achieved successful
results. Agricultural robotics research has increased over the past few years due to the
potential applications of robots and industry efforts in robot development. The role of robots
in many agricultural tasks has been studied, focusing mainly on improving the automation
of traditional agricultural machinery and weeding processes [12,13]. Such robots can accurately recognize weeds and treat them precisely, which greatly reduces herbicide use, avoids environmental pollution and reduces agricultural costs. In smart agriculture, using sensors installed on satellites, unmanned aerial vehicles or ground tractors to discriminate between weeds and crops is becoming an effective method of weed management.
Remote sensing technology allows for quickly charting the distribution of weeds and crops
over large areas [14]. An SVM-based crop/weed detection system for tractor boom sprayers to spot-spray tobacco crops in the field was constructed, with a classification accuracy of 96% [6]. In the last decade or so, Earth observation satellites have provided
higher-resolution free remote sensing data, making the detection of agriculture by high-
resolution satellites possible. Google Street View images were tested using a convolutional
neural network (CNN), with an overall accuracy of 83.3 % [15]. Laser weed control also
offers a new possibility for weed removal. A YOLOX convolutional neural network-based
weeding robot utilizes a blue laser to weed with a weed recognition rate of 88.94% [16].
Drones are considered to be more efficient than robotic or satellite acquisition because
they can rapidly collect field data at very high spatial resolution and at low cost [17–19].
In a typical application, drones were used to capture RGB images, which were then evaluated on a test set using SVM, KNN, AdaBoost and CNN classifiers; their accuracies for recognizing rice weeds were 89.75%, 85.58%, 90.25% and 92.41%, respectively [20].
This paper reviews the current state of research on applying deep learning to crop and weed recognition for smart agricultural equipment. There are many previous review articles related to this topic. For example, Imran Zualkernan et al. [21] focused on new deep learning models and architectures for research using drone image data since 2018. Jiayou Shi et al. [22] presented a thorough review of the methods and applications related to crop row inspection in agricultural machinery navigation. They paid special attention to the sensors and systems used for crop row detection in order to validate and, thus, improve their sensing and inspection capabilities. Ana I. de Castro et al. [23] reviewed the sensor types, configurations and image processing algorithms of UAVs for agriculture and forestry applications. WenHao Su [14] discussed RGB, hyperspectral and spot spectroscopy in sensors for crop and weed identification. However, these reviews did not provide a comprehensive introduction to intelligent agricultural equipment. We briefly describe the need for intelligent weed management and then present aspects of weed control. Section 2 focuses on the image recognition steps for smart devices, including image collection, image preprocessing and feature extraction. Section 3 describes the application of deep learning algorithmic models for recognizing weeds in smart agricultural equipment. These mainly utilize convolutional neural networks (CNNs) and their variants, such as Faster RCNN [24], MTS-CNN [25], FHGSO-based Deep CNN [26] and DRCNN [27]. In addition, the support vector machine (SVM) is also heavily used, mostly in agricultural equipment such as tractors, drones, etc. [4,6,24]. Most notably, the Transformer Neural Network and its variants, for example, ViT [28], Swin-DeepLab+ [29] and Deformable DETR [30], are used. Moreover, the ViT model is a relatively recently proposed model that outperforms some advanced models such as EfficientNet and ResNet, so it has great potential [28]. With the rapid transformation of agricultural landscapes, driven by technological innovations, this review aims to synthesize the current state of the art in the application of deep learning-based smart agricultural equipment for weed and crop differentiation. By elucidating state-of-the-art technologies, identifying research gaps and suggesting potential directions for future research, this study aims to contribute to the development of intelligent and autonomous systems that empower farmers with the tools to address weed management challenges, leading to sustainable and efficient agricultural management.

2. Weed Detection Using Remote Sensing Technique
The workflow of image recognition of crops and weeds can generally be divided into four steps: image data acquisition, preprocessing, feature extraction and classification of weeds and crops [31]. The specific details are shown in Figure 1.

Figure 1. General workflow of image processing-based weed detection.

2.1. Image Data Collection


DL-based weed inspection and classification techniques require a sufficient quantity
of labeled data. Data can be gathered using various types of sensors mounted on various
smart agricultural devices. The main sensors commonly used are as follows: RGB sensors,
multispectral sensors, hyperspectral sensors and LiDAR sensors. Table 1 shows the images
collected by different sensors.
Visible Light Sensors are most commonly used by UAVs in precision agriculture and
related smart agriculture applications. RGB imaging or color imaging has gained popularity
due to its clear color-revealing principle, simple hardware structure and proven production
process. RGB cameras are comparatively inexpensive and lightweight, and they perform well in producing orthophoto maps, capturing images and aerial videos of the entire field in a single pass. UAVs equipped with RGB cameras have the benefits of small size, low cost, great productivity and mobility [32]. Despite these benefits, however, RGB imaging provides only limited data at a limited number of wavelengths [33].
Recently, technologies such as hyperspectral imaging (HSI) systems have provided
a chance to quickly categorize plant species, both in the laboratory and the field. The
advantage of HSI is to provide an integrated analysis of spectroscopy and the relationship
between various chemical components and absorption in the spectrum. The principle of
HSI spectroscopy is based on the vibration of molecules in the infrared region. Therefore, absorbance at specific wavelengths, which might be related to specific chemical bonds, can
be used for different materials’ classification and quality determination. Weed identification
techniques based on RGB imaging are based on shape, size and color discrimination, while
the use of HSI increases the value of such techniques [34]. However, hyperspectral images
typically contain a great deal of superfluous information, which may mask the real
information of ground objects and adversely affect spectral data recognition. In addition,
high-dimensional spectral data not only increase temporal and spatial complexity but
also tend to cause a dimensional disaster. To address the above problems, Zhihua Diao
et al. [35] proposed a lightweight three-dimensional convolution neural network model.
An image enhancement method was used to improve the training results to address the
problem of sparse training samples in hyperspectral images. A lightweight unit module was
introduced on this basis to reduce the number of parameters in the network. Meanwhile,
Zhaoxia Lou proposed a 3D-CNN model for predicting a competition index (CCI).
There are two key aspects of hyperspectral band selection, the effective preservation of
information and the elimination of redundancy. Many hyperspectral studies use VIP for
band selection because it performs better in terms of information preservation. However,
this method has the limitation of retaining an excessive number of bands in the band
selection process, which may identify irrelevant bands as significant. Therefore, the use of
the VIP method in band selection may require further research [36,37].
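As a concrete illustration of reducing redundant hyperspectral bands, the short Python sketch below applies principal component analysis (PCA) to a hyperspectral cube. PCA is shown here only as a widely used alternative to the VIP-based selection discussed above, and the array shapes and variable names are illustrative assumptions rather than details from the cited studies.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in hyperspectral cube: (rows, cols, bands) of reflectance values
cube = np.random.rand(64, 64, 200).astype(np.float32)
pixels = cube.reshape(-1, cube.shape[-1])        # one spectrum per pixel

pca = PCA(n_components=10)                       # keep 10 informative components
reduced = pca.fit_transform(pixels).reshape(64, 64, 10)
print(pca.explained_variance_ratio_.sum())       # fraction of variance preserved
```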
Compared to hyperspectral cameras, a multispectral camera is lightweight, low-cost
and has high spatial resolution, making it suitable for large areas [38]. In contrast to RGB
cameras, multispectral cameras have additional spectral bands and are capable of sensing
radiation in both the invisible (red-edge and near-infrared) and visible segments of the
spectrum, typically spanning four to six bands. The inclusion of a reflectance calibration
panel makes multispectral cameras less susceptible to environmental variation [39,40].
A multispectral image is essentially a collection of grayscale images, with each image
corresponding to a specific wavelength or band of wavelengths in the electromagnetic
spectrum. Multispectral imaging (MSI) involves capturing images from various spectral
bands to gather both spatial and spectral information. MSI technology enables the creation
of wavelength channels in the near-UV, visible, near-IR, mid-IR and far-IR bands [33].
One of the most commonly used techniques for the composition of multispectral images
is the co-registration of the bands of interest. The images captured by the multispectral
cameras show significant band misregistration effects due to lens distortion and the varying
viewing angles of each lens or sensor [41]. To obtain accurate spectral and geometrical
information, a precise geometric distortion correction and band-to-band co-registration
method is necessary [42]. Multispectral imaging, with the advantage of light hardware and
faster calculation speed, is emerging as the successor to hyperspectral technology.
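The band-to-band co-registration step described above can be sketched with OpenCV's ECC image alignment, as in the following minimal example; the affine motion model, band roles and parameter values are assumptions for illustration, not the exact procedure of the cited works.

```python
import cv2
import numpy as np

def coregister_band(reference: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Align one spectral band to a reference band with an affine ECC warp (OpenCV >= 4.1)."""
    ref = cv2.normalize(reference, None, 0, 1, cv2.NORM_MINMAX).astype(np.float32)
    mov = cv2.normalize(moving, None, 0, 1, cv2.NORM_MINMAX).astype(np.float32)
    warp = np.eye(2, 3, dtype=np.float32)                         # initial affine transform
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(ref, mov, warp, cv2.MOTION_AFFINE, criteria, None, 5)
    h, w = reference.shape
    return cv2.warpAffine(mov, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
```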
Thermal infrared sensors capture the temperature of objects and generate images based on the information collected. Infrared sensors and
optical lenses are used in thermal cameras to capture thermal energy [43]. The development
of higher-resolution thermal imaging systems compatible with unmanned aerial vehicles
(UAVs) has facilitated the practical application of thermal imaging in agriculture. The
use of thermal measurement, in conjunction with other sensor measurements, such as
hyperspectral, visible and optical distance, has proven to be more effective in field-scale
crop phenotyping [44]. When combined with deep learning, remote heat sensing technology
is able to recognize crops and weeds and to assess crop stress [45].
LiDAR, which stands for Light Detection and Ranging, is a highly advanced and
dependable sensor that has been widely used in the fields of crop row detection and robotic
navigation. This sensor is famous for its high precision, wide range and strong immunity
to interference [46]. LIDAR works on the principle that the transmitting system emits
visible or near-infrared light waves. These light waves are then reflected off the target
and detected by the receiving system. The data obtained are subsequently processed to
generate parametric information, including distance. LiDAR sensors have been utilized in
crop row detection to provide highly accurate and detailed 3D maps of crop canopies [47].
Additionally, LiDAR sensors have the capability to penetrate vegetation and capture ground
surface data, facilitating the detection of crop rows, even in densely vegetated fields [22].
LiDAR can be used in intensive agricultural scenarios.

Table 1. Examples of public datasets.

Dataset | Crop and Weed | Sensor | Number of Images | Reference
Dataset of annotated food crops and weed images | 6 crops, 8 weeds | RGB | 1176 | Sudars, K., et al. [48]
DeepWeeds | 8 weeds | RGB and ground-based weed control robot (AutoWeed) | 17,509 | Olsen, A., et al. [49]
Weed-Corn/Lettuce/Radish | maize, lettuce, radish | RGB | 7200 | Jiang, H.H., et al. [50]
Sugar Beet/Weed Dataset | sugar beet | Multispectral and Micro Aerial Vehicle (MAV) | 465 | Sa, I., et al. [51]
Rumex and Urtica weed plants Dataset | Rumex and Urtica weed plants | RGB and crawler robots | 10,000 | Binch, A. and C.W. Fox [52]
Multispectral Lettuce Dataset | Lettuce and weeds | Multispectral bands and UAV | 100 | Osorio, K., et al. [53]
Early crop weed | tomato, cotton, velvetleaf and black nightshade | RGB | 508 | Espejo-Garcia, B., et al. [54]
AIWeeds | flax and 14 most common weeds | RGB | 10,000 | Du, Y., et al. [5]
TobSet | Tobacco crop and weeds | RGB | 1700 | Alam, M.S., et al. [55]
Crop and weed | Maize, the common bean, and a variety of weeds | RGB and autonomous electrifier robot | 83 | Champ, J., et al. [56]
Datasets for sugar beet crop/weed detection | Capsella bursa pastoris | RGB-NIR and BOSCH Bonirob farm robot | 8518 | Di Cicco, M., et al. [57]
RGB—red, green, blue; NIR—near infrared.

2.2. Preprocessing
After acquiring data from various sources, it is essential to prepare the data for the
training, testing and validation of models. Raw data may not always be suitable for
deep learning (DL) models. Approaches for dataset preparation include the application
of various image processing techniques, data labeling, utilization of image enhancement
methods to augment the input data and introduce variations, as well as the generation of
synthetic data for training. The commonly used image processing techniques are removal
of background, resizing of captured images, green component segmentation, removal
of motion blur, denoising, image enhancement, extraction of color vegetation indices
and alteration in color models [58]. Table 2 demonstrates the effect of different image
enhancement techniques on segmentation.

Table 2. Effect of different image enhancements on image segmentation.

Crop | Methods | Enhancement | Input Representation | MIoU | Reference
Sugar beet | An encoder-decoder deep learning network | HE / PS-AC / DPE | RGB | 92.75% / 94.29% / 93.50% | AICHEN WANG et al. [59]
Oilseed | An encoder-decoder deep learning network | HE / PS-AC / DPE | RGB | 94.80% / 95.80% / 96.12% | AICHEN WANG et al. [59]
Soybean | DeepLabv3+ / Swin+DeepLabv3+ / Swin-DeepLab | Random rotation, random flipping, random cropping, adding Gaussian noise, and increasing contrast | RGB | 88.59% / 91.10% / 91.53% | Yu, H., et al. [29]
Sunflower | An algorithm proposed by Lopez, L.O., et al. [41] / U-Net / FPN | Perspective Deformity Correction Program | Multispectral | 89% / 90% / 89% | Lopez, L.O., et al. [41]
HE—Histogram Equalization; PS-AC—PS Auto Contrast; DPE—Deep Photo Enhancer; RGB—red, green, blue;
Swin-DeepLab—Hierarchical Vision Transformer for Semantic Segmentation.

2.2.1. Image Resizing


Achieving good accuracy with lower patch sizes proved to require less training time
for the model. To expedite processing and reduce computational complexity, many studies
performed image resizing operations on the dataset before feeding it into the deep learning
(DL) model. Following the collection of field images, their resolution was adjusted to meet
the DL network’s requirements [58]. Julien Champ et al. [56] resized the image so that
their shorter edge was 1200 pixels and the longest one 2048 pixels. This allowed the model
to be run in a reasonable time on a standard graphics processing unit. Reenul Reedha
et al. [28] extracted the crop and weed image patches from the bounding boxes. Then, the
image patches were resized to 64 × 64 pixels. This choice of image size aligned with the
dimensions of the bounding boxes, possibly corresponding to the altitude at which the
UAV was flown and the size of the crops in the study field. Images with high resolution are
sometimes split into a number of patches to reduce the computational complexity. Ramirez
et al. [60] captured only five images at high resolution using a drone, which were then
segmented into non-overlapping chunks and chunks with overlap. These adjustments to the
image pixel size can reduce the computational complexity and decrease the computational
duration of the DL model to achieve optimal results.
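A minimal sketch of the patch-splitting and resizing operations described above is given below; the patch size, target resolution and synthetic input are illustrative assumptions rather than values taken from the cited studies.

```python
import cv2
import numpy as np

def split_into_patches(image: np.ndarray, patch: int = 512):
    """Split an image into non-overlapping patch x patch tiles (incomplete edges are dropped)."""
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tiles.append(image[y:y + patch, x:x + patch])
    return tiles

# Synthetic stand-in for a high-resolution UAV orthophoto
field = np.zeros((2048, 2048, 3), dtype=np.uint8)
patches = split_into_patches(field, patch=512)
inputs = [cv2.resize(p, (64, 64)) for p in patches]   # resize each tile to the network input size
```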

2.2.2. Image Enhancement and Denoising


Strategies such as image enhancement and denoising can effectively improve the accuracy of algorithm recognition. Reenul Reedha et al. [28] utilized data augmentation strategies
to enrich datasets, including random resized crop, color dithering and rand augments.
This technology is achieved using Keras ImageDataGenerator, which instantly generates
enhanced images. As a result, the basic ViT B-16 model reached a recognition accuracy of
99.4%. The use of data augmentations aimed to enhance the model’s robustness and gener-
alization capabilities. Aichen Wang et al. [59] assessed the performance of the DL model
based on the input representation of images. They applied many image preprocessing oper-
ations, such as histogram equalization, automatic adjustment of the contrast of images and
deep photo enhancement. Babu et al. [27] performed image enhancement through CLAHE,
which allowed for better visual interpretation of images. CLAHE has superior contrast
limiting compared to ordinary adaptive histogram equalization. In conventional adaptive


histogram equalization, the noise in the near-constant regions in images is magnified. The
CLAHE algorithm improves the image contrast and limits the amplification, improving
the quality of the image. CLAHE is widely used for enhancing medical imagery, satellite
images, etc. Dmitrii Vypirailenko et al. [61] utilized two methods for data enhancement.
The first was to resize the image to 128 × 128 and then enhance the data by horizontal
and vertical flipping, panning and rotating. They also used random contrast correction
as an enhancement method to ensure the effectiveness of the enhanced image. Another
approach was to use random affine transformation. Each enhancement method was applied
to an image when it passed to the model. They also used weights in cross-entropy loss to
overcome the imbalance in the dataset. The result of the enhancement should be similar
to the image taken at the real site. In conclusion, effective enhancement and denoising of
images have a significant impact on the recognition of the algorithm.
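As an illustration of contrast-limited adaptive histogram equalization (CLAHE) of the kind used by Babu et al. [27], the following Python sketch applies OpenCV's CLAHE to the lightness channel of an image; the clip limit and tile size are assumed example values, not parameters from the cited work.

```python
import cv2

def clahe_enhance(bgr, clip_limit=2.0, tile_grid=(8, 8)):
    """Apply CLAHE to the lightness channel only, leaving colour information untouched."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```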

2.2.3. Background Removal


Background removal has an important role in weed identification. The aim of segmen-
tation is to extract plant ROI by segregating the background (i.e., soil, stones, etc.) from the
vegetation (i.e., leaves of different weeds). Zhaoxia Lou et al. [37] extracted the vegetation
canopy spectra for the acquired images. The contrast between the vegetation canopy and
the soil background was improved using OSAVI. Mask images of soil background and
vegetation canopy were generated through a threshold segmentation method, effectively
eliminating the soil portion from the digital orthophoto map (DOM) image, retaining only
the vegetation canopy area. Li et al. [34] developed a threshold segmentation algorithm
involving spectral data extraction with a threshold of 0.19 at a 950 nm wavelength. The
mask generated in this way was multiplied by the original HSI image, and the resulting
image contained only plants. At the same time, they used a simple linear iterative clustering
algorithm to segment the plant images into hyperpixels. This was accomplished by taking
the similarity in spectral and spatial domains into account when grouping pixels into clus-
ters. The results show that the separation of a crop from the background can be achieved
by spectral characterization and threshold adjustment. MLP developed using Sp data is a
more robust and reliable method compared to traditional classification methods. Similarly,
a threshold adjustment in the color is utilized to achieve separation of the crop from the
background. Borja Espejo-Garcia et al. [54] first normalized the R, G and B channels of the image and then used the ExG (excess green) index for the initial vegetation segmentation, followed by OTSU thresholding of the grayscale image to obtain a binary mask. Based on this threshold segmentation method, many
algorithms also obtained more than 95% accuracy in weed identification. Gee, C. et al. [62]
proposed a new vegetation index called MetaIndex, which combined the advantages of six
vegetation indices. The method refined the results by geodesic segmentation and obtained
a black-and-white vegetation image, also known as a black-and-white vegetation mask.
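A minimal sketch of ExG-based vegetation segmentation with OTSU thresholding, in the spirit of the approach of Borja Espejo-Garcia et al. [54], is shown below; the exact normalization and threshold handling in the cited work may differ.

```python
import cv2
import numpy as np

def exg_otsu_mask(bgr: np.ndarray) -> np.ndarray:
    """Excess-green (ExG) index followed by Otsu thresholding -> binary vegetation mask."""
    b, g, r = cv2.split(bgr.astype(np.float32))
    total = b + g + r + 1e-6                      # avoid division by zero
    r_n, g_n, b_n = r / total, g / total, b / total
    exg = 2.0 * g_n - r_n - b_n                   # ExG = 2g - r - b on normalised channels
    exg_u8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask                                   # 255 = vegetation, 0 = background
```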

2.3. Feature Extraction of Weeds


In agriculture, there are four groups of descriptive features: visual textures, spatial
contexts, spectral features and biological morphological features [63].

2.3.1. Visual Texture Feature


For textural features, humans can judge them through their senses, such as identifying
whether soft or hard, rough or fine, horizontally or vertically corrugated, etc. [64]. Research
on texture-aware properties has its origins in computer vision as well as cognitive science.
In computer vision-based approaches, visual textures have played a key role in image
understanding. And because the texture of the local image descriptors is pooled in an
unordered manner, the texture of the image is represented by computing the intensity of
the clustered pixels in the space, and six common variability directions are identified [65].
Figure 2 is a sample image of texture-based segmentation using a Gabor filter. The Gabor filter, which is a group of Gabor wavelets, automatically determines the boundaries between tobacco and non-tobacco objects (weeds) based on their texture characteristics. The extracted Gabor texture features are input to a k-means clustering algorithm. This classifies textured regions of tobacco from other texture classes (weeds), as shown in Figure 2. It is evident from (b) that the tobacco plant has prominent texture features, as compared to the surrounding objects. Table 3 shows deep learning recognition based on texture feature weed identification.

Figure 2. Texture-based segmentation using Gabor filters (orientation between [0 and 135] degrees in steps of 45 degrees) [6].

GLCM is a way to define the texture of images using the information of intensity values that co-occur spatially. The technique used texture features derived from a gray-level co-occurrence matrix (GLCM). The next step was the extraction of four texture features from the GLCM. These features include contrast, correlation, energy and homogeneity, with 73% accuracy using the Radial Basis Function (RBF) kernel in the support vector machine (SVM) [66]. The Gabor wavelet transform enables the analysis of image scenes in both spatial and frequency domains. It is important to note that the wavelet transform of an image is a well-established multi-resolution filtering technique for extracting texture features. Each derived (preprocessed) image was filtered with a bank of Gabor wavelet filters computed with designated lower (Ul) and higher (Uh) frequencies selected to be 0.1 and 0.5, respectively. Four levels of orientation and ten levels of scale were chosen [67]. Yajun Chen et al. [3] identified six texture features, comprising histogram of oriented gradient features, the rotation-invariant local binary pattern (LBP) feature, the Hu invariant moment feature, the Gabor feature, the gray-level co-occurrence matrix and the gray-level-gradient co-occurrence matrix. These six feature descriptors were combined to create a set of 18 feature combinations. For the problem of image size normalization, they proposed a strategy that kept the shape of the leaves unchanged and padded the blank area of the normalized size with 0 pixels. Lei Zhang et al. [68] proposed a weed recognition method for support vector machines using any combination of three sets of texture features, including oriented gradient histogram features, rotation-invariant local binary pattern (LBP) features, and the grayscale co-occurrence matrix (GLCM). The application of six different texture features for weed identification is enumerated in Table 3. For hybrid feature extraction, the accuracy obtained using machine learning is greater than for single feature extraction, and the accuracy of deep learning is greater than that of machine learning. For the study of deep learning in crop and weed recognition, hybrid texture features can be utilized.
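For illustration, the following Python sketch extracts the four GLCM statistics mentioned above (contrast, correlation, energy and homogeneity) with scikit-image; the distances, angles and gray-level settings are assumed example values, and the resulting feature vector could be fed to an SVM as in [66].

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_patch: np.ndarray) -> np.ndarray:
    """GLCM statistics for a uint8 grayscale patch: contrast, correlation, energy, homogeneity."""
    glcm = graycomatrix(gray_patch, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```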
Table 3. Deep learning recognition based on texture feature weed identification.

Feature Combination Methods Crop Weed Test Accuracy Reference


ANN with 15 units in
LBP carrot Weed 83.5% Lease, B.A., et al. [7]
ensemble
GLCM SVM Rice Grasses 73% Ashraf, T. and Y.N. Khan [66]
Ashwagandha of the
LBP SVM spinach quinoa family, prickly pear 83.78% Miao, R., et al. [69]
of the aster family
LBP k-FLBPCM broadleaf canola wild carrot >96.75% Vi Nguyen Thanh Le et al. [70]
Goldenrod,
lamb’s-quarters, sheep
sorrel, goldenrod, poplar
Gabor LDA blueberry 81.4% Ayalew, G., et al. [67]
spreading dogbane,
mouse-eared hawkweed
and a few black bulrushes.
GLCM-M DA-WDGN Crop Detect broadleaf weeds 99.4% Raja, G., et al. [71]
Gabor SVM oil palm broad weed, narrow weed 95.0% Zaman, M.H.M., et al. [72]
Gabor MLPNN oil palm broad weed, narrow weed 94.5% Zaman, M.H.M. et al. [72]
Chenopodium serotinum,
LBP+GLCM GA-SVM lettuce 87.55% Zhang, L., et al. [68]
Polygonum lapathifolium
Chenopodium serotinum,
LBP+GLCM SVM lettuce 81.33% Zhang, L., et al. [68]
Polygonum lapathifolium
Chenopodium serotinum,
HOG+LBP+GLCM GA-SVM lettuce 86.02% Zhang, L., et al. [68]
Polygonum lapathifolium
Chenopodium serotinum,
HOG+GLCM GA-SVM lettuce 85.46% Zhang, L., et al. [68]
Polygonum lapathifolium
Cirsium setosum (Willd.)
MB, Poa annua L., Eleusine
GGCM+RotLBP SVM Crop 97.50% Chen, Y., et al. [4]
indica (L.) Gaertn., and
Chenopodium album L.
LBP—local binary pattern; GLCM—Gray-Level Co-occurrence Matrix; HOG—Histogram of Oriented Gradi-
ents; SVM—Support vector machine; ANN—Artificial Neural Network; LDA—Linear Discriminant Analysis;
k-FLBPCM—filtered Local Binary Patterns with contour masks and coefficient k; GA—Genetic Algorithm;
MLPNN—Multi-layer perceptron neural networks.

2.3.2. Spatial Context Feature


Plant discrimination based on morphological and spectral properties is prone to varia-
tions in plant appearance, exhibiting significant differences in the field, across fields and
during the growing season. This variability makes the detection method less stable. In
contrast, the sowing pattern of crops is relatively stable, as most crops are sown or planted
in rows following a predetermined pattern. Leveraging spatial contexts or position infor-
mation can enhance discrimination accuracy [73]. In a crop field, crops are often planted
regularly in the field, and spatial coordinates can be used to discriminate between crops
and weeds. Weeds and crops can also be identified by spatial features [64]. Considering
that most crops are sown or planted in rows with a predetermined pattern, spatial contexts
or position information can contribute to improving discrimination accuracy. For cereals,
the detection of inter-row weeds can be effectively achieved by identifying the centerline
and edge of crop rows between adjacent crop plants. Figure 3 shows a sample image of
bean and spinach based on spatial feature recognition.
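Before turning to the Hough transform itself, a hedged sketch of classical crop-row line detection on a binary vegetation mask is given below, using OpenCV's probabilistic Hough transform; the edge thresholds, vote threshold and minimum line length are illustrative assumptions, not parameters from the cited works.

```python
import cv2
import numpy as np

def detect_row_lines(veg_mask: np.ndarray):
    """Detect candidate crop-row line segments in a binary vegetation mask (255 = plant)."""
    edges = cv2.Canny(veg_mask, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=200, maxLineGap=40)
    return [] if lines is None else [tuple(l[0]) for l in lines]   # (x1, y1, x2, y2)
```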
The Hough transform is a widely employed method for identifying linear features in
an image. It works by representing a straight line as a spike in parameter space, where
the parameters correspond to the characteristics of the line. In addition, the linear Hough
transform can be utilized for detecting or analyzing arbitrary (non-parametric) curves by
examining the shape of peaks or their locations in the parameter space [74]. Teplyakov
et al. [75] proposed a lightweight Artificial Neural Network for line detection with several
convolutional layers and a fast Hough transform layer that can be trained in an end-to-end
manner. They proposed to use fast Hough transform (FHT) with O(N2logN) complexity.
The FHT approximated the lines with dyadic patterns and utilized an efficient solution
for summation. In complex backgrounds, the model of YOLOv5s was more accurate
than the detection of Hough transform variants and was faster. In order to solve the problems of large memory overhead, long time consumption and low recognition accuracy of the offset Hough transform, Slam, N. et al. [76] proposed an efficient circle localization algorithm based on multi-resolution segmentation (a two-step optimized Hough transform). First, the target circle was obtained by adaptive image preprocessing to determine the location of the effective search area. Then, high-quality images were separated by shape quality inspection to be used as accurate data sources. Finally, the location accuracy was improved to the sub-pixel level using least squares circle fitting. The effects of burrs, misalignments, defects and contamination were also reduced. The extraction of spatial features can also be used as an auxiliary recognition criterion when the UAV is flying overhead.

Figure 3. From left to right: line detection in bean (a) and spinach (b) fields. Detected lines are in blue. In the spinach field, inter-row distance and the crop row orientation are not regular. The detected lines are mainly located in the center of the crop rows [17].

2.3.3. Spectral Feature

Spectroscopy is used to acquire spectral information over a wide spectral range, in which specific frequencies of vibration can be perceived that match the transition energy of a chemical bond or group. Spectroscopy is also categorized in many ways; the common ones are point spectroscopy, RGB and hyperspectral imaging, fluorescence spectroscopy and multispectral imaging. The theoretical basis for using spectral detection is that weed competition leads to changes in plant physiology that alter light-absorbing and crown reflectance properties [37]. Regions of interest for corn seedlings and weeds on hyperspectral images, together with their corresponding average spectral curves, illustrate these differences. Figure 4 shows sample images of hawkweed flowers based on spectral feature recognition, and Table 4 summarizes deep learning recognition based on spectral feature weed identification.

Figure 4. Sample images of hawkweed flowers based on spectral feature recognition. (a) Actual multispectral image; (b) Prediction result; (c) Prediction results are overlayed with actual image (EPSG:4326—WGS 84) [40].

Islam et al. [77] employed RGB images captured by RGB cameras mounted on a drone. They extracted the reflectance of the red, green and blue bands and subsequently calculated
vegetation indices, including normalized red band, normalized green band and normalized
blue band. The purpose of this normalization was to reduce the effects of different lighting
conditions on the color channels. Moreover, in addition to RGB data, Fawakherji et al. [78]
took into account near-infrared (NIR) information, generating four channel multispectral
synthetic images. They extracted the plant cover from the entire image cover. The plant
cover was a binary image where the plant pixels to be learned were set to 1, and the
other pixels were set to 0. The plant cover was then mapped to a realistic multispectral
image, and the resulting image was used for data enhancement. The use of an NIR channel
helps to enhance the accuracy of the activity for which vegetation inspection is required.
Photosynthesis in healthy green plants leads to the absorption of more solar energy in
the visible spectrum, resulting in a low reflectance level in the RGB channels. Similarly,
the reflectance of the NIR spectrum is affected by the same phenomena with opposite
results, with a high reflectance level in the NIR channel, where generally 10% or less of
radiation is absorbed [78,79]. Jinya Su et al. [38] found that the triangular greenness index (TGI), derived from the green and NIR bands, was the most discriminative spectral index (SI). Its recognition accuracy was
93.0%. Utilizing thermal measurements in conjunction with other sensor data, such as
hyperspectral, visible and optical distance, has demonstrated increased effectiveness in
field-scale crop phenotyping [80–82].
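As a worked example of the spectral features discussed above, the sketch below computes normalized r, g, b chromatic coordinates (mirroring the normalization used by Islam et al. [77]) and a generic normalized difference vegetation index (NDVI) from NIR and red reflectance; NDVI is included only as a common illustration and is not claimed to be the index used in the cited studies.

```python
import numpy as np

def normalised_bands(red, green, blue):
    """Normalised r, g, b chromatic coordinates reduce the effect of varying illumination."""
    total = red + green + blue + 1e-6
    return red / total, green / total, blue / total

def ndvi(nir, red):
    """Normalised difference vegetation index from NIR and red reflectance arrays."""
    return (nir - red) / (nir + red + 1e-6)
```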

Table 4. Deep learning recognition based on spectral feature weed identification.

Feature Combination Methods Crop Weed Test Accuracy Reference


ANN with 15 units in
LBP carrot Weed 83.5% Lease, B.A. et al. [7]
ensemble
Blackberry, various species
grasslands, meadows,
430 hyperspectral RF of goldenrod, wood >78.4% Sabat-Tomala, A. et al. [83]
and forests
small-reed grass
Blackberry, various species
grasslands, meadows,
30 MNF SVM of goldenrod, wood >85.0% Sabat-Tomala, A. et al. [83]
and forests
small-reed grass
kochia, waterhemp,
Thermal ML soybean redroot pigweed, and 82% Eide, A. et al. [44]
common ragweed
sunflower crops, Sugar
RGB+NIR cGAN Weed 94% Fawakherji, M., et al. [78]
beet
unwanted weed and
RGB RF and SVM chilli 96%, 94% Islam, N. et al. [77]
parasites within crops
wheat husk, wheat straw,
terahertz spectral Wheat-V2 wheat wheat leaf, wheat >96.7% Shen, Y. et al. [84]
grain, weed, and ladybugs
hyperspectral lightweight-3D-CNN Crop seedlings Weed >97.4% Diao, Z. et al. [35]
(Chenopodium album L.,
Multispectral U-Net and FPN sunflower Convolvulus arviensis L.; 90%, 89% Lopez, L.O. et al. [41]
and Cyperus rotundus L.
Multispectral SFS-Top3 Triticum aestivum L. Alopecurus myosuroides 93.8% Su, J. et al. [38]
pasture lands and forest
Multispectral RF and XGB meadows of New Hawkweeds (Pilosella spp.) 97%, 98% Amarasingam, N. et al. [40]
Zealand
LBP—local binary pattern; MNF—Maximum Noise Fraction; SVM—Support vector machine; ANN—Artificial
Neural Network; RF—Random Forest; ML—Machine Learning; cGAN—Conditional Adversarial Nets;
CNN—Convolutional Neural Networks; U-Net—Convolutional Networks for Biomedical Image Segmenta-
tion; FPN—Feature Pyramid Network; SFS—Shape from shading; XGB—Tree Ensemble.

2.3.4. Biological Morphological Features


Biological morphological features are five characteristics represented by the shape, structure, size, pattern and color of an organism. In agriculture, biomorphic traits can identify the biomorphic characteristics of weeds and crops, although they are more susceptible to leaf-folding or shading problems. They also have a high accuracy rate after training. The current deep learning approach based on biomorphic feature recognition is innovative [64]. Figure 5 illustrates a schematic of biometric extraction through leaves.

Figure 5. Pixelated segmentation of green plant leaves [85].
Color features are extracted from the pixels of images, with advantages of stable
features after rotation, scale and translation changes [86]. Weeds and crop seedlings
are the same green color. It is difficult to distinguish them by color alone [30]. The
extraction of color features requires the use of color moments, which provide unique
features for distinguishing objects based on their color. Color moments are founded on
the probability distribution of image intensities, characterized by statistical moments, like
mean, variance and skewness. These three are the central moments of intensity distribution
and can be easily found for all color spaces, such as RGB, HSV and L*a*b [6]. Apart from
these color features, there are other shape descriptors/features proposed by researchers.
Tannouche et al. [87] used a region-based adjacencies descriptor to discriminate between
Dicot and Monocot weeds. The proposed descriptor calculated two numbers of adjacencies
between a given original pixel and their adjacent pixels. The first was the number of
horizontal and vertical adjacencies, and the second one was the number of diagonal
adjacencies. Shape factors that were generated by transformations typically required
the use of information about the boundaries or contours of the segmented region and
required complex calculations, compared with region-based shape measurements and
indices. Therefore, they are often referred to as region-based shape descriptors. Hu’s
moment invariants (MIs) are popular shape descriptors, which are normalized functions
created based on the information of both shape boundary and interior region [88]. Weed
detection using machine vision relies on features, like plant color, leaf texture, shape
and patterns. Drought stress can impact leaf color and morphological features in plants,
potentially affecting the reliability of machine vision-based weed detection [89]. But they
still lack universal segmentation capabilities for different crop varieties with varying leaf
shapes and canopy structures. Designing a universal 3D segmentation method for different
varieties at multiple growth stages is the current research frontier of plant phenotyping [90].
Biomorphic feature extraction has the advantages of strong interpretability, high stability
and wide versatility in weed recognition, and it is especially suitable for scenarios that
require the identification of different types of plants.
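To illustrate a region-based shape descriptor of the kind discussed above, the following sketch computes log-scaled Hu moment invariants from the largest contour of a binary leaf mask using OpenCV; this is a generic example rather than the exact descriptor pipeline of the cited works.

```python
import cv2
import numpy as np

def hu_shape_descriptor(mask: np.ndarray) -> np.ndarray:
    """Log-scaled Hu moment invariants of the largest contour in a binary (0/255) leaf mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros(7)
    largest = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(largest)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # compress the dynamic range
```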
In deep learning, hybrid feature extraction refers to the simultaneous use of multiple
levels, sources or types of features for model training and recognition. Different levels and
types of features contain different levels of abstraction and semantic information. Hybrid
feature extraction captures this diverse information and enables the model to represent
the input data more richly. Single feature extraction may ignore or lose some critical
information. Using features from multiple sources can make the model more robust and
better adaptable to variations and noise in the input data.

3. Applications for Weed/Crop Discrimination


Deep learning algorithms have developed rapidly over the past few years, opening up the possibility of smart farms. Many scientists have studied the problem of applying deep
learning algorithms to smart agricultural equipment to recognize weeds and crops.

3.1. Learning Algorithm


Deep Neural Networks (DNNs) aim to replicate the communication between biolog-
ical neurons through layers of nodes, comprising input, hidden and output layers [91].
DNNs extend the complexity, number of connections and hidden layers of Artificial Neural Networks (ANNs). A convolutional neural network (CNN), a
type of DNN, assigns learnable weights and biases to different aspects and objects within
input images to distinguish and classify objects, such as weeds [1]. Unlike traditional
machine learning algorithms that require manual feature selection and classifier choice,
deep learning algorithms automatically extract features through self-learning from errors.
This automatic feature extraction sets deep learning apart from the broader field of machine
learning [1,92,93]. To train and evaluate a deep CNN model, each input image undergoes a
sequence of convolution layers with filters, followed by flattening, pooling layers and fully
connected layers. CNNs autonomously capture the spatial and temporal dependencies
within the input image using relevant filters, resulting in enhanced and more efficient
image processing. This is achieved with a significantly reduced number of estimable pa-
rameters and processing time. Due to potential slight jittering in the graphical information
formed by adjacent positions, the pooling operation extracts essential information from the
upper feature map. Common pooling operations include maximum pooling and average
pooling. The model maintains translation and rotation invariance while preserving crucial
features [94].
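For illustration only (not a network from the reviewed studies), a minimal CNN of this kind, with convolution, pooling, flattening and fully connected layers, can be written in PyTorch as follows; the layer sizes and the assumed 224 x 224 input are arbitrary choices.

```python
import torch
import torch.nn as nn

class TinyWeedNet(nn.Module):
    """Minimal CNN: two convolution/pooling stages followed by a fully connected classifier."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # pooling keeps salient responses
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 64), nn.ReLU(),  # assumes 224x224 RGB inputs
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = TinyWeedNet()(torch.randn(1, 3, 224, 224))  # dummy 224x224 RGB image
```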
The attention mechanism is becoming a key concept in the deep learning field. The
inspiration for attention comes from the human perception process, where individuals
naturally concentrate on specific information, simultaneously neglecting other perceptible
details. This attention mechanism has significantly influenced the realm of natural lan-
guage processing, particularly in prioritizing a subset of crucial words. The self-attention
paradigm has evolved from the attention concepts, demonstrating enhancements in the
performance of deep networks [95]. The utilization of the self-attention mechanism enables
the establishment of global references during both model training and prediction. This
significantly reduces the training time required to attain high accuracy [96,97]. The self-
attention mechanism is a crucial element in transformers, explicitly modeling interactions
among all entities in a sequence for structured prediction tasks. Essentially, a self-attention
layer updates each element of a sequence by consolidating global information from the
entire input sequence. In contrast to the fixed K × K neighborhood grid of convolution
layers, the self-attention’s receptive field encompasses the entire image. This expanded
receptive field of self-attention enhances its capability compared to CNN, all without intro-
ducing the computational costs associated with excessively large kernel sizes. Moreover,
self-attention remains invariant to permutations and variations in the number of input
points. Consequently, it can seamlessly operate on irregular inputs, in contrast to standard
convolution that necessitates grid structures [98].
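The core computation is easy to sketch. The following illustrative example (assumed shapes and random weights, not a full transformer) implements scaled dot-product self-attention over a sequence of patch embeddings, showing how every element aggregates information from the entire input.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (batch, seq_len, dim). Every element attends to the whole sequence."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)  # pairwise similarities
    weights = F.softmax(scores, dim=-1)                       # global attention weights
    return weights @ v                                        # aggregate over all positions

dim = 64
x = torch.randn(1, 196, dim)             # e.g. 14 x 14 = 196 patch embeddings
w_q, w_k, w_v = (torch.randn(dim, dim) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)   # shape (1, 196, 64)
```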
Overall, the attention mechanism has some advantages in improving model perfor-
mance, processing sequence data and improving interpretability. However, for specific
tasks, the attention mechanism is not always necessarily superior to the traditional deep
neural network structure but, rather, the appropriate model structure should be selected
according to the specific application scenario and task requirements.

3.2. Recognition Applications


The sections above describe image collection and pre-processing for deep learning. The following is a review of the latest applications of these techniques for weed recognition and weed control in smart agricultural equipment. Table 5 summarizes the accuracy of
different deep learning algorithm models for crop/weed recognition.

Table 5. Classification of weeds and crops with regard to algorithms.

Methods | Crop | Weed | Sensor | Accuracy | Reference
Swin-DeepLab | Soybean | Graminoid weeds such as Digitaria sanguinalis (L.) Scop and Setaria viridis (L.) Beauv; broadleaf weeds such as Chenopodium glaucum L., Acalypha australis L., and Amaranthus retroflexus L. | RGB | 91.53% | Yu, H. et al. [29]
lightweight-3D-CNN | Crop seedlings | Weed | Hyperspectral | >97.4% | Diao, Z. et al. [35]
A combination of fine-tuned Densenet and Support Vector Machine | Tomato (Solanum lycopersicum L.) and Cotton (Gossypium hirsutum L.) | Black nightshade (Solanum nigrum L.) and Velvetleaf (Abutilon theophrasti Medik.) | RGB | 99.29% | Espejo-Garcia, B. et al. [54]
VGG16, VGG19, Xception | Corn | NLW, BLW | RGB | 97.83%, 97.44%, 97.24% | Garibaldi-Marquez, F. et al. [93]
GA-SVM | Lettuce | Chenopodium serotinum, Polygonum lapathifolium | RGB | 87.55% | Zhang, L. et al. [68]
ViT | Maize and Wheat | Black-grass, Charlock, Cleaver, Common Chickweed, Common wheat, Fat Hen, Loose Silky-bent, Maize, Scentless Mayweed, Shepherds Purse, Small-flowered Cranesbill, and Sugar beet | RGB | 98.1% | Guo, X. et al. [85]
AlexNet, GoogleNet, VGG | Bahiagrass | Florida pusley | RGB | 95%, 96%, 95% | Zhuang, J. et al. [89]
PlantNet | Tobacco, Tomato, and Sorghum | Monocotyledonous weed | High Precision 3D Laser | 95.04%, 96.44%, 98.03% | Li, D. et al. [90]
VGG-SVM | Winter rape seedlings | Rape seedlings associated weeds | RGB | 92.1% | Tao, T. and X. Wei [99]
YOLOv4-Tiny | Peanuts | Portulaca oleracea, Eleusine indica, Chenopodium album, Amaranth blitum, Abutilon theophrasti, and Calystegia | RGB | 96.7% | Zhang, H. et al. [100]
U-Net | Sunflower | Chenopodium album L., Convolvulus arviensis L. and Cyperus rotundus L. | Multispectral | 90% | Lopez, L.O. et al. [41]
YOLO-v3, CenterNet, Faster R-CNN | Bok choy | Weeds | RGB | 98.4%, 98.3%, 97.5% | Xiaojun Jin et al. [101]
SVM, YOLOv3, Mask R-CNN | Lettuce Crops | Weeds | Multispectral and UAV | 88%, 94%, 94% | Osorio, K. et al. [53]
ML | Wheat | blackgrass weeds | Multispectral and UAV | 93.8% | Su, J. et al. [38]
SVM, KNN, AdaBoost and CNN | Rice | Leptochloa chinensis, Sedges | RGB and UAV | 89.75%, 85.58%, 90.25%, 92.41% | Zhu, S. et al. [20]
BPNN | Soybean, Sugar Beet and Carrot | Broad-leaf, Grass, Pig-weed, Lambs-quarter, Hares-ear Mustard, Turnip Weed, Wild Carrot, Corsican Mint | RGB, UAV and BONIROB Robot | 96.6%, 97.7%, 93% | Abouzahir, S. et al. [102]
RF, SVM and KNN | Chilli | Unwanted weeds and parasites within crops | RGB and UAV | 94%, 96%, 63% | Islam, N. et al. [77]
Improved Faster R-CNN | Pea and Strawberry | Annual Goosegrass (Eleusine indica) weeds | RGB and UAV | average of 95.3% | Khan, S. et al. [24]
RF | Bean and Spinach | Thistles and young Potato sprouts | RGB and DJI Phantom 3 Pro drone | 96.99% | Bah, M.D. et al. [17]
CNNLVQ | Soybean | Grassy weeds and Broadleaf weeds | RGB and UAV | 99.44% | Haq, M.A. [103]
CNN | Soybean | Weeds | RGB+NIR and UAV | 99.66% | Milioto, A. et al. [104]
CNN | Chinese cabbage | Weeds | RGB and UAV | 92.41% | Ong, P. et al. [105]
ViT | Beet, Parsley and Spinach | Weeds | RGB and UAV | >98.63% | Reedha, R. et al. [28]
MobileNetV2 | Flax | 14 most common weeds | RGB and SAMBot | 90% | Du, Y. et al. [5]
SVM | Tobacco | Weeds | RGB and a tractor-mounted boom sprayer | 96% | Tufail, M. et al. [6]
YOLOX | Corn seedlings | Weeds | RGB | 92.45% | Zhu, H.B. et al. [16]
Faster R-CNN and YOLOv5 | Tobacco | Bare soil and weeds (that grow up in tobacco fields) | RGB and pesticide spraying robot | 98.43%, 94.45% | Alam, M.S. et al. [55]
Faster R-CNN with VGG | Maize seedling | Weeds | Industrial USB cameras and field robot platform (FRP) | 98.2% | Quan, L. et al. [106]
An encoder-decoder network with atrous separable convolution | Sugar Beet and Oilseed | Weed | RGB and BoniRob robot | 96.12% | Wang, A. et al. [60]
ANN with 15 units in ensemble | Carrot | Weeds | Multispectral and Bonirob | 83.5% | Lease, B.A. et al. [7]
cGAN | Sunflower, Sugar beet | Weeds | Multispectral and BOSCH Bonirob farm robot | 94% | Fawakherji, M. et al. [78]
CNN | Soybeans | Weeds | RGB and Precision Sprayer | In the experiment, spray volume was reduced by up to 48.89% | Sanchez, P.R. and H. Zhang [107]
CNN | Grass | Broad-leaf weed | High resolution camera and Quadbike | 96.88% | Zhang, W.H. et al. [108]
lightweight DCNN | Organic carrot | Weeds | RGB and AgBot II | 93.9% | McCool, C. et al. [109]

SVM—Support vector machine; ANN—Artificial Neural Network; RF—Random Forest; ML—Machine Learning; cGAN—Conditional Adversarial Nets; CNN—Convolutional Neural Networks; U-Net—Convolutional Networks for Biomedical Image Segmentation; Faster R-CNN—Faster Region Convolutional Neural Network; YOLO—You Only Look Once; ViT—Vision Transformer; CNNLVQ—Convolutional Neural Network with Learning Vector Quantization; MobileNetV2—MobileNet Version 2; BPNN—Backpropagation Neural Network; Mask R-CNN—Mask Region-based Convolutional Neural Network; VGG—Visual Geometry Group; CenterNet—Objects as Points: CenterNet; KNN—K-Nearest Neighbors; AdaBoost—Adaptive Boosting; PlantNet—PlantNet Plant Identification; AlexNet—Imagenet Classification with Deep Convolutional Neural Networks; GoogleNet—Inception-v1; Xception—Extreme Inception; Swin-DeepLab—Hierarchical Vision Transformer for Semantic Segmentation.

3.2.1. Spot Photographic Image Recognition


Spot photography refers to taking images with a cell phone or camera at a fixed location. This method of acquiring images is relatively simple but labor-intensive. Fixed-point photography usually occurs in relatively fixed environments, which
means that the images are relatively consistent in terms of background, lighting and
camera angle. This consistency helps train deep learning models to better adapt to specific
environments and conditions. It also facilitates image labeling, improving the training efficiency of deep learning.

Taskeen Ashraf et al. [66] sought to classify images based on grass density into three
classes. The first approach utilized texture features extracted from the gray-level co-
occurrence matrix (GLCM) with a Radial Basis Function (RBF) kernel in a support vector
machine (SVM), achieving an accuracy of 73%. Another technique employed scale and
rotation-invariant moments to classify grass density. The second technique outperformed
the first, achieving an accuracy of 86% with a Random Forest classifier. This kind of
quantitative agricultural spraying for different densities of weeds can effectively reduce
the use of pesticides. To improve weed recognition, some scientists have combined ma-
chine learning with deep learning. Tao, T. et al. [99] proposed a deep convolutional neural
network with a support vector machine classifier aimed at improving the classification
accuracy of winter oilseed rape seeding and field weeds. They used a VGG network model
with true-color images (224 × 224 pixels) of oilseed rape/weeds as input. The proposed
VGG-SVM model obtained higher classification accuracy, greater robustness and real-time performance. Borja Espejo-Garcia et al. [54] proposed a novel crop/weed identification system.
The method involved fine-tuning pre-trained convolutional networks, such as Xception,
Inception-ResNet, VGGNets, MobileNet and DenseNet. These networks were combined with
“traditional” machine learning classifiers, like support vector machines, XGBoost, and Lo-
gistic Regression. These classifiers were trained with features extracted from deep learning
models. The aim of this approach was to prevent overfitting and achieve a robust and
consistent performance. Attention mechanisms have become increasingly popular in recent
years and can greatly increase the rate of recognition. Helong Yu et al. [29] introduced a
soybean field weed recognition model named Swin-DeepLab. This model was built upon
an enhanced DeepLabv3+ model, incorporating a Swin transformer as the feature extraction
backbone. Furthermore, a convolution block attention module (CBAM) was integrated after
each feature fusion to improve the model’s utilization of focused information within the
feature maps. The proposed network can further address the problem of weed recognition
in intensive agricultural scenarios.
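A hedged sketch of the general idea behind such hybrid pipelines (not the exact configurations of [54] or [99]): a pre-trained backbone is used as a frozen feature extractor whose outputs feed a classical SVM; the backbone choice, preprocessing values and data are placeholders.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Pre-trained backbone used as a frozen feature extractor.
backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()   # drop the original ImageNet head
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

@torch.no_grad()
def deep_features(pil_images):
    """Return one deep feature vector per input PIL image."""
    batch = torch.stack([preprocess(im) for im in pil_images])
    return backbone(batch).numpy()

# X_train = deep_features(train_images); clf = SVC(kernel="rbf").fit(X_train, y_train)
```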

3.2.2. Satellite Photo Image Recognition


In recent decades, substantial progress has been achieved in sensing technologies,
wireless communication, autonomous systems and artificial intelligence through collabora-
tive research efforts worldwide [110]. Agricultural satellites use remote sensing techniques,
including visible, infrared and microwave radiation, to capture information about the
Earth’s surface. These satellites can provide high-resolution images that can be used to
monitor different aspects of agricultural land. Some civil satellites in agriculture, combined
with high-performance sensors, have produced a large number of images of farmland with
various temporal, spatial and spectral resolutions. Among other things, these images are of
great significance to farmers for seeding scheduling, pest and disease tracking and weed
control [111]. Although satellite remote sensing provides large spatial coverage, its development in smart agriculture is limited by fixed, long revisit intervals and problems such as cloud cover [112].
Anita Sabat-Tomala et al. [83] conducted a comparison between two machine learning
algorithms, support vector machine (SVM) and Random Forest (RF), for the identification of
Solidago spp., Calamagrostis epigejos and Rubus spp. on HySpex hyperspectral aerial images.
The classifications were performed on 430 spectral bands and on the most informative
30 bands extracted using the Minimum Noise Fraction (MNF) transformation. While
satellite images are less suitable for weed recognition, semantic segmentation of remotely
sensed images proves to be more effective. In the realm of digital agricultural services, there
is a growing need for farmers or their advisors to provide digital records of field boundaries.
Automatic extraction of field boundaries from satellite imagery would reduce the reliance
on manual input of these records, which is time consuming and would underpin the
provision of remote products and services [113,114].

3.2.3. Application of Drone Weed Identification


An unmanned aerial vehicle (UAV) is a powered flying vehicle that operates without a
human operator. It can fly autonomously or be controlled remotely, equipped with various
payloads. UAVs are rapidly advancing due to their benefits in flexible data acquisition and
high spatial resolution. They offer a potent technical solution for numerous applications in
precision agriculture (PA) [115,116]. For better acquisition of image data, the flight altitude
of the UAV is an important parameter as it has a great impact on the resolution of the image,
the flight time and the computational cost of image processing [117]. Moreover, UAVs
have the flexibility to carry various payloads tailored to specific purposes. In precision
agriculture (PA), UAVs are commonly equipped with remote sensors, such as RGB imaging,
multispectral and hyperspectral imaging sensors, thermal infrared sensors, Light Detection
and Ranging (LiDAR) and Synthetic Aperture Radar (SAR) to capture agricultural informa-
tion [112,118]. UAVs are currently used for surveillance [119], disease detection [120] and
weed management [20,24]. The use of these data allows for the identification of specific spatial features and time-varying information on crop characteristics as well as the targeted spraying of pesticides and fertilizers, resulting in a reduction in pests and diseases and an increase in crop yields and quality [115,121]. Figure 6 shows some of the uses of drones in smart agriculture.

Figure 6. Unmanned aerial vehicles used in agriculture. (a) Unmanned aerial vehicle (UAV) hyperspectral imaging system [120]. (b) Drone spraying of pesticides.

Narmilan Amarasingam et al. [40] studied the potential of machine learning (ML) algorithms for the detection of mouse-ear hawkweed leaves and flowers from multispectral (MS) images acquired by unmanned aerial vehicles (UAVs) at different spatial resolutions and compared different machine learning algorithms. The highest machine learning recognition was
achieved with 100% accuracy. Jinya Su et al. [38] analyzed and mapped blackgrass in
wheat fields by incorporating unmanned aerial vehicles (UAVs), multispectral imagery
and machine learning techniques. Eighteen widely used vegetation indices (VIs) were derived from five raw spectral bands, and various feature selection algorithms were then used to improve the simplicity and interpretability of the model. The selection of these raw spectral bands and VIs was important for weed identification in multispectral images. Mohd Anul Haq et al. [103] proposed a novel
CNNLVQ model to detect weeds in soybean crop images and distinguish between grassy
weeds and broadleaf weeds. The uniqueness of their study lies in the development of this
innovative CNNLVQ model, meticulous hyperparameter optimization and the utilization
of authentic datasets. Faster R-CNN stands out as a deep learning approach incorporating
a region proposal network (RPN). This network, formed by merging convolutional features
with a classification network, facilitates training and testing through a seamless process.
It results in a fast detection rate and outperforms other conventional object detection
methods. Shahbaz Khan et al. [24] optimized the architecture of the traditional Faster-R-
CNN. Residual Network 101 (ResNet-101) was deployed as a convolutional neural network
instead of the normally used Visual Geometry Group 16 (VGG16). Anchors are classified
using a traditional SoftMax classifier. In addition, Saad Abouzahir et al. [102] used HOG
blocks as key points to generate visual words based on the Bag of Visual Words (BOVW)
method and feature vectors as histograms of these visual words. And a backpropagation
neural network was used to detect weeds and classify plants from three different crop fields
(sugar beet, carrot, soybean). The algorithm had 97.7%, 93% and 96.6% accuracy in weed
and crop differentiation.
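As a rough illustration of this index-based workflow (the band names, the specific indices and the Random Forest ranking step are assumptions, not the exact feature set of [38]), vegetation indices can be computed per pixel from co-registered band arrays and then ranked by importance.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def vegetation_indices(red, green, nir, red_edge):
    """Compute a few common indices from co-registered reflectance arrays (values in [0, 1])."""
    eps = 1e-6
    ndvi = (nir - red) / (nir + red + eps)
    ndre = (nir - red_edge) / (nir + red_edge + eps)
    gndvi = (nir - green) / (nir + green + eps)
    return np.stack([ndvi, ndre, gndvi], axis=-1)

# X: per-pixel index vectors, y: weed/crop/soil labels (hypothetical training data)
# rf = RandomForestClassifier(n_estimators=200).fit(X, y)
# ranking = sorted(zip(rf.feature_importances_, ["NDVI", "NDRE", "GNDVI"]), reverse=True)
```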
Drones have an important role in identifying weeds in fields and spraying pesticides in
real time. Shahbaz Khan et al. [116] developed a deep learning-based real-time recognition
system for drones. The capability of the system is achieved through a two-step process
where the target recognizer part is based on a CNN model. The developed deep learning
system achieved an average F1 score of 0.955, while the classifier recognition average
computation time was 3.68 ms. This deep learning model can effectively solve the problem
of real-time pesticide spraying by UAVs to recognize weeds. Meanwhile, Gunasekaran
Raja et al. [71] proposed a UAV-assisted weed detection method (DA-WDGN) using a modified multi-channel gray-level co-occurrence matrix (GLCM-M) and a normalized difference index with red threshold (NDIRT) to assist the weed detection process. In DA-WDGN,
the UAV incorporates information and communication techniques to capture far-field data
and accurately detect weeds. The accurate detection of weeds limits the need for pesticides
and helps to protect the environment. Reenul Reedha et al. [28] investigated a Vision Transformer (ViT) and applied it to plant classification in unmanned aerial vehicle (UAV) images. They utilized a transfer learning strategy to increase performance on the test set while reducing the size of the training set. The ViT algorithm is able to efficiently process
large-scale image data, thus better adapting to the large number of images produced by
UAVs in aerial photography. This efficient image processing capability helps to improve
the speed and accuracy of weed identification. Moreover, the ViT algorithm is based on the
self-attention mechanism, which is able to capture global information in the image and not
only limited to local features. This feature gives ViT and UAVs a huge advantage in the
future development of weed recognition.
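A minimal sketch of fine-tuning a ViT for such a task, using the torchvision implementation as one possible choice; the model variant, number of classes and training data are placeholders rather than details from [28].

```python
import torch
import torchvision.models as models

num_classes = 4                              # e.g. beet, parsley, spinach, weed (hypothetical)
vit = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
vit.heads = torch.nn.Linear(vit.hidden_dim, num_classes)   # replace the classification head

optimizer = torch.optim.AdamW(vit.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

# for images, labels in dataloader:           # UAV image crops and labels (hypothetical)
#     loss = criterion(vit(images), labels)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```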

3.2.4. Application of Agricultural Robotics for Weed Recognition


Agricultural robots represent an important trend in modern agricultural automation.
By combining machines, sensors and autonomous navigation technologies, they are revolu-
tionizing agricultural production. Agricultural robots can include modified tractors, small
ground robots and aerial robots [13]. Modern agricultural equipment integrates advanced
technologies, such as artificial intelligence, navigation, sensing systems and communi-
cation, to increase agricultural productivity and promote smart agriculture [22,122,123].
Among other things, navigation data and image recognition data require the work of sensors, including monocular cameras, binocular cameras, RGB cameras, panoramic cameras and spectral imaging systems [22]. In the early days of precision agriculture, most
image data from fields were collected using ground cameras either mounted on unmanned
ground vehicles (UGVs) or fixed next to vegetation patches [21]. Through image recogni-
tion, agricultural robots can perform laser weeding [124–126], spraying pesticides for weed
control [13], spot picking [127,128], fertilizer application and other tasks. Figure 7 shows
some of the agricultural robots used in smart agriculture.
Yajun Chen et al. [4] trained SVM classifiers to compare six single features with different fusion strategies. The highest classification accuracy was obtained by a fusion feature combining rotationally invariant LBP features with a gray gradient co-occurrence matrix, and the resulting SVM classifier accurately detected various weeds
and maize seedlings. Tufail, M. et al. [6] presented a machine learning-based crop/weed
detection system for tractor boom sprayers to spot spray tobacco crops in the field and
proposed an SVM classifier with carefully selected combinations of tobacco plant features
(texture, shape and color) with a classification accuracy of 96%. Julien Champ et al. [56]
trained and evaluated an instance segmentation convolutional neural network designed to
segment and identify each plant specimen visible in an agricultural robot image. And they
adjusted the hyperparameters of a mask region-based convolutional neural network (R-
CNN) to this specific task and evaluated the resulting training model. Data augmentation
via Generative Adversarial Networks (GANs) can add entire synthetic scenes to the training
data, thus expanding and enriching their information content [78].
There has been a lot of progress in image recognition using smart agricultural robots,
and a number of scientists are working on smart weeding by agricultural robots to reduce
the burden on farmers. Yayun Du et al. [5] provided a complete process, from model training at maximum efficiency to deploying TensorRT-optimized models on single-board computers, and tested the performance of five different CNN models. They deployed
MobileNetV2 on a small autonomous robot, SAMBot, for real-time weed detection. In a
previously unseen scenario in a flax field (row spacing of 0.2–0.3 m), with crops and weeds,
distortions, blurring and shadows, 90% accuracy was achieved. Paolo Rommel Sanchez
et al. [107] developed a modular precision sprayer that distributes the high computational
load of CNNs to parallel low-cost, low-power vision computing devices. The sprayer
employed a customized precision spray algorithm based on SSD-MobileNetV1 running
on a Jetson Nano 4 GB. The model achieved 76% mAP0.5 at 19 fps in detecting weeds
and soybeans in a widely planted field. Muhammad Shahab Alam et al. [55] developed
and deployed a vision-based robotic spraying system. By using the vision system in
combination with speed sensors, flow sensors and pressure sensors, the technology detected
and categorized tobacco plants and weeds in real time. The use of targeted pesticide
spraying technology has reduced the use of pesticides, and environmental pollution has
been effectively controlled; meanwhile, laser-targeted weed control is being developed as an even more environmentally friendly option. Huibin Zhu et al. [16] designed a weeding
robot based on the YOLOX convolutional neural network for removing weeds from corn
seedling fields. They verified the feasibility of a blue laser as a non-contact weeding
tool. Similarly, Azmat Hussain et al. [126] designed a laser weeding robot based on the
YOLOV5 convolutional neural network. The field trials demonstrated that the robot took
approximately 23.7 h at a linear velocity of 0.07 m/s for the weeding of one acre plot. It
included 5 s of laser exposure to kill one weed plant. In another study, researchers proposed an innovative weeding operation
method, applying herbicides after causing mechanical damage to weeds, and designed a
composite intelligent in-row weeding robot based on this method. Based on the YOLOv5
algorithmic model, the detection accuracy reached 93.33% under real operating conditions.
The machine was more efficient at weeding compared to simple machines and reduced the
amount of pesticides used compared to chemical pesticide spraying robots [129].
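As an illustrative sketch only (not the exact models or code used in [16], [126] or [129]), a detector of this kind can be loaded through the public YOLOv5 torch.hub entry point and run on a camera frame, with each detection yielding a target point that a laser or nozzle could be aimed at; the weights, frame path and confidence threshold are assumptions.

```python
import torch

# Publicly available YOLOv5 small model; in practice, custom weed/crop weights would be loaded.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.5                                   # confidence threshold (assumed value)

frame = "field_frame.jpg"                          # hypothetical camera frame from the robot
results = model(frame)
detections = results.pandas().xyxy[0]              # one row per detection: box, score, class

for _, det in detections.iterrows():
    x_center = (det["xmin"] + det["xmax"]) / 2     # target point a laser/nozzle could aim at
    y_center = (det["ymin"] + det["ymax"]) / 2
    print(det["name"], round(det["confidence"], 2), (x_center, y_center))
```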

Figure 7. (a) An autonomous agricultural robot for weed removal [130]. (b) Precision agricultural sprayer [6]. (c) YOLOX-based blue laser cornfield weeding robot [16]. (d) Components of the modular agrochemical precision sprayer mounted on a push-type frame [107].


4. Discussion
Against the backdrop of rapid advances in artificial intelligence, smart agriculture has become a key development direction for major agricultural countries, and its progress is inseparable from the development of intelligent agricultural equipment. In recent years, the rapid development of agricultural robots, agricultural drones and satellites has provided new solutions for intelligent agriculture. The development of all three types of smart agricultural equipment is the mainstream of
the future, and all have great potential for application in the development of smart farms.
Satellites, as part of smart farming equipment, play an important role in delineating farm
boundaries for effective farm management, although they are less capable of weed and crop identification. Waldner, F. et al. proposed a method to facilitate the extraction of field boundaries from satellite images [113]. The use of satellite technology to segment and
monitor sites in agriculture has a number of benefits that can help farmers to plan land use
more accurately. This includes identifying the most suitable locations for specific crops,
avoiding overuse of land and increasing the sustainable utilization of agricultural land.
It also allows for better allocation of resources, such as water, fertilizers and pesticides, reducing waste of resources and environmental pollution. UAVs, operating overhead, also play an integral role in smart agriculture. For example, high-resolution image acquisition provides datasets for the training of deep learning algorithms, and the detection and identification of crops using sensors allows for the precise application of pesticides and irrigation [24,38]. Combining deep learning with drones allows for weed/crop identification and targeted pesticide spraying. Based on RGB camera sensing, CNNs achieve more than 92% accuracy in weed recognition, which is higher than that of traditional machine learning, and the results shown by ViT point to the possibility of real-time recognition for drone-based pesticide spraying in the future. Weeds can thus be dealt with more efficiently and with less waste of resources. Deep learning-based agricultural robots are generally about 95%
accurate in weed recognition. The use of agricultural robots in agriculture is not only in
data collection and weed identification and processing, as it also allows for precise picking
and harvesting of crops. Overall, the combination of deep learning and smart agricultural
equipment has been widely used in weed/crop identification research. In smart agriculture
scenarios, deep learning has been used to solve the problem of crop and weed identification.
Deep learning has four steps in weed/crop detection: data collection, dataset preparation,
weed detection and weed/crop localization and classification. First of all, for the collection
of datasets, with the help of intelligent agricultural equipment, the collection of images
is no longer a problem. Moreover, a variety of sensors have improved the quality of
image acquisition. Multispectral cameras have some advantages over RGB and hyperspectral cameras in that they provide more spectral bands than RGB cameras and are cheaper than hyperspectral cameras, so they can be utilized in smart agriculture to reduce cost and improve the quality of collected images [38,39]. Thermal measurements
from thermal infrared sensors can complement measurements from other sensors, such as
hyperspectral, visible and optical distance, and have also been shown to be more effective
in field crop phenotyping [44]. For training datasets, manual labeling by researchers is
still required, which is a very labor-intensive task. However, semi-supervised learning
algorithms and unsupervised learning algorithms are a worthwhile solution for the future,
as they can perform labeling during iterations, greatly reducing the human workload.
Feature extraction of weeds and crops is an important part of the recognition process, and
the main features are texture features, spectral features, spatial features and biomorphic
features. All four features have a great role in weed recognition by deep learning, but the
current trend in recognition is hybrid feature extraction of spectral features, texture features
and biomorphic features. The similarity between weeds and crops makes using a single
image feature to detect weeds and crops almost impossible. The commonly used image
features can achieve the purpose of weed detection, but the experimental accuracy is low,
and the stability is poor in a nonideal environment due to the complex interference factors
in the actual field. Acquired images need to be preprocessed for better recognition and
classification. The scientists segmented the crop and background by threshold segmentation
and color segmentation and performed noise reduction on the images [34,131].
The performance of different deep learning algorithm models in weed/crop identi-
fication is influenced by a variety of factors. The main factor is the network structure. In
general, lightweight CNN models are less accurate in weed recognition compared to CNN
models. However, lightweight CNN models are usually designed to be more concise, using
fewer parameters and computational resources, and they require relatively less memory
space [104,109]. Some of the lightweighting techniques include network pruning, quan-
tization and depth-separable convolution, which aim to minimize the size of the model
while maximizing the retention of its representational power. Due to the performance
improvement in the Faster R-CNN architecture, it is possible to perform target detection,
image classification and instance segmentation simultaneously in a single neural network.
Researchers have improved Mask R-CNN by adding an attention mechanism and depthwise separable convolution. This approach improves the model’s ability to represent weed-
related features and reduces the number of model parameters, increasing computational
speed [132]. In addition to this, the performance of deep learning algorithms is greatly
influenced by the training strategy used. The training strategy involves the training process
of the model, selection of hyperparameters, data augmentation, etc. For example, batch
normalization of deep learning models by some researchers accelerates training and im-
proves the generalization performance of the model [54]. In addition, the input dataset is
key to training deep learning models as it is the basic source of information. The accuracy
of deep learning is improved by data augmentation of sample images, as stated in Section 2
of this article. Algorithmic models such as Swin transformer and DeepLabv3+ also excel in
weed identification.
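To make one of the lightweighting techniques mentioned above concrete, a depthwise separable convolution can be written as a per-channel (depthwise) convolution followed by a 1 x 1 (pointwise) convolution; this generic sketch is not the layer of any specific cited model.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution per channel, then 1x1 pointwise channel mixing."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Weight-count comparison against a standard 3x3 convolution (64 -> 128 channels):
# standard:  3*3*64*128           = 73,728 weights
# separable: 3*3*64 + 1*1*64*128  =  8,768 weights
```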

5. Challenges for Weed Recognition in Smart Farming Equipment and Future Trends
In terms of future development, the combination of sensor and drone technology
can effectively increase the efficiency of identification. Among the recent innovations, un-
manned aerial vehicles (UAVs) or drones have demonstrated their suitability for the timely
tracking and assessment of vegetation status due to several advantages, as follows: (1) They
can operate at low altitudes to provide aerial imagery with ultra-high spatial resolution,
allowing for the detection of fine details of vegetation. (2) The flights can be scheduled
with great flexibility according to critical moments imposed by vegetation progress over
time. (3) They can use diverse sensors and perception systems, acquiring different ranges of
the vegetation spectrum (visible, infrared, thermal). (4) This technology can also generate
digital surface models (DSMs) with three-dimensional (3D) measurements of vegetation by
using highly overlapping images and applying photoreconstruction procedures with the
structure-from-motion (SfM) technique [23,35,44].
The future of agricultural robotics promises more developments in weed removal:
(1) Increased intelligence and autonomy: Future agricultural robots will be more intelligent,
with highly autonomous decision-making capabilities. Combined with artificial intelligence
and deep learning technology, the robot can analyze farmland images and data in real time,
make intelligent weed identification and weeding decisions, without human intervention,
and improve operational efficiency. (2) The integration of multimodal sensing technology:
Agricultural robots will integrate a variety of sensors, including vision, infrared, ultrasonic
and other multimodal sensors, to obtain richer and more accurate information about the
farmland. This will help identify weeds more accurately and adapt to different farmland
environments. (3) Efficient and precise weeding technology: Future agricultural robots
will use more precise and efficient weeding technology. This will require more advanced
weeding systems and automated control technologies. Although laser weeding is currently very advantageous, there are still issues to consider, such as whether it is safe and whether it can cause fires [124,125].
Deep learning also faces several challenges in weed and crop recognition. First, due to
the small visual differences between weeds and crops, there are large similarities between
categories, which leads to models that are prone to confusion. In addition, there are varia-
tions in weeds and crops such as growth stages and environmental differences, and the
models need to have good generalization capabilities to accommodate these variations [73].
In addition, datasets are costly to annotate, especially when collected and labeled in a large-
scale farmland environment. This poses certain difficulties in model training. To overcome
these challenges, future research can be expanded in the following aspects: First, further
improve the robustness and generalization ability of deep learning models, and design
more effective feature extraction methods and classification algorithms for the similarities
between weeds and crops. Second, develop larger-scale datasets containing samples from
different times, locations and farming conditions to enhance the generalization ability of
the model. At the same time, techniques such as augmented learning and transfer learning
are reasonably utilized to achieve better results with fewer data. In addition, combining
sensors and smart agricultural equipment technologies for the real-time identification of
weeds and crops contributes to intelligent and precise decision making in agricultural
production. Proper dosage of plant protection products is one of the key issues in agricul-
tural production. Using advanced sensor technology, crop growth can be monitored more
accurately. This technology allows weeds or diseases to be treated in a timely manner: spraying the right amount of plant protection product neither causes contamination by using too much nor reduces crop yields by using too little.

6. Conclusions
This review concentrates on the forefront applications of intelligent agricultural equip-
ment, specifically emphasizing crop and weed identification, pivotal components in the
trajectory of smart agriculture. The integration of sensors into smart agricultural equipment
assumes a critical role in data acquisition, capturing extensive sets of high-dimensional
images that serve as foundational training data for deep learning algorithms. Various
preprocessing techniques are employed to refine the algorithmic processes, encompassing
noise reduction, background effect elimination and image resizing. Deep learning algo-
rithms emerge as powerful tools capable of analyzing complex, high-dimensional data with
distinct characteristics compared to the training set, facilitating accurate crop identification.
The adoption of hybrid feature extraction techniques underscores the inherent advantages
of leveraging multiple features in tandem, contributing significantly to the efficacy of weed
and crop identification processes. In the realm of machine learning and deep learning,
the attention mechanism stands out as a particularly valuable and promising learning
algorithm. Renowned for its high accuracy and expedited processing time, the attention
mechanism proves advantageous in the context of crop and weed identification. These
attributes position it as a formidable asset for smart agricultural equipment engaged in
real-time weeding operations within agricultural fields. The emphasis on attention mecha-
nisms reflects a forward-looking perspective, acknowledging their potential to augment
the efficiency and accuracy of smart agricultural practices, particularly in the domain of
weed management.

Author Contributions: Conceptualization, W.-H.S.; methodology, H.-R.Q.; software, H.-R.Q.; valida-


tion, H.-R.Q.; formal analysis, W.-H.S.; investigation, H.-R.Q.; resources, W.-H.S.; writing—original
draft preparation, H.-R.Q. and W.-H.S.; writing—review and editing, W.-H.S.; supervision, W.-H.S.;
project administration, W.-H.S.; funding acquisition, W.-H.S. All authors have read and agreed to the
published version of the manuscript.
Funding: This research was funded by the National Natural Science Foundation of China, grant
number 32371991.
Data Availability Statement: Data are available on request due to privacy.
Conflicts of Interest: The authors declare no conflicts of interest.

References
1. Murad, N.Y.; Mahmood, T.; Forkan, A.R.M.; Morshed, A.; Jayaraman, P.P.; Siddiqui, M.S. Weed Detection Using Deep Learning:
A Systematic Literature Review. Sensors 2023, 23, 3670. [CrossRef] [PubMed]
2. Hamuda, E.; Glavin, M.; Jones, E. A survey of image processing techniques for plant extraction and segmentation in the field.
Comput. Electron. Agric. 2016, 125, 184–199. [CrossRef]
3. Llewellyn, R.; Ronning, D.; Clarke, M.; Mayfield, A.; Walker, S.; Ouzman, J. Impact of Weeds in Australian Grain Production; Grains
Research and Development Corporation: Canberra, Australia, 2016.
4. Chen, Y.; Wu, Z.; Zhao, B.; Fan, C.; Shi, S. Weed and Corn Seedling Detection in Field Based on Multi Feature Fusion and Support
Vector Machine. Sensors 2021, 21, 212. [CrossRef]
5. Du, Y.; Zhang, G.; Tsang, D.; Jawed, M.K. Deep-CNN based Robotic Multi-Class Under-Canopy Weed Control in Precision
Farming. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27
May 2022; pp. 2273–2279.
6. Tufail, M.; Iqbal, J.; Tiwana, M.I.; Alam, M.S.; Khan, Z.A.; Khan, M.T. Identification of Tobacco Crop Based on Machine Learning
for a Precision Agricultural Sprayer. IEEE Access 2021, 9, 23814–23825. [CrossRef]
7. Lease, B.A.; Wong, W.K.; Gopal, L.; Chiong, W.R. Weed Pixel Level Classification Based on Evolving Feature Selection on Local
Binary Pattern with Shallow Network Classifier. In Proceedings of the 2nd International Conference on Materials Technology and
Energy (ICMTE), Curtin Univ Malaysia, Sarawak, Malaysia, 6–8 November 2020.
8. Mogili, U.M.R.; Deepak, B.B.V.L. Review on Application of Drone Systems in Precision Agriculture. In Proceedings of the 1st
International Conference on Robotics and Smart Manufacturing (RoSMa), Chennai, India, 19–21 July 2018; pp. 502–509.
9. Tataridas, A.; Kanatas, P.; Chatzigeorgiou, A.; Zannopoulos, S.; Travlos, I. Sustainable Crop and Weed Management in the Era of
the EU Green Deal: A Survival Guide. Agronomy 2022, 12, 589. [CrossRef]
10. Jeanmart, S.; Edmunds, A.J.F.; Lamberth, C.; Pouliot, M. Synthetic approaches to the 2010-2014 new agrochemicals. Bioorganic
Med. Chem. 2016, 24, 317–341. [CrossRef]
11. Eyre, M.D.; Critchley, C.N.R.; Leifert, C.; Wilcockson, S.J. Crop sequence, crop protection and fertility management effects on
weed cover in an organic/conventional farm management trial. Eur. J. Agron. 2011, 34, 153–162. [CrossRef]
12. Ampatzidis, Y.; De Bellis, L.; Luvisi, A. iPathology: Robotic Applications and Management of Plants and Plant Diseases.
Sustainability 2017, 9, 1010. [CrossRef]
13. Aravind, K.R.; Raja, P.; Perez-Ruiz, M. Task-based agricultural mobile robots in arable farming: A review. Span. J. Agric. Res. 2017,
15, e02R01-01. [CrossRef]
14. Su, W.-H. Advanced Machine Learning in Point Spectroscopy, RGB- and Hyperspectral-Imaging for Automatic Discriminations
of Crops and Weeds: A Review. Smart Cities 2020, 3, 767–792. [CrossRef]
15. Ringland, J.; Bohm, M.; Baek, S.-R. Characterization of food cultivation along roadside transects with Google Street View imagery
and deep learning. Comput. Electron. Agric. 2019, 158, 36–50. [CrossRef]
16. Zhu, H.B.; Zhang, Y.Y.; Mu, D.L.; Bai, L.Z.; Zhuang, H.; Li, H. YOLOX-based blue laser weeding robot in corn field. Front. Plant
Sci. 2022, 13, 1017803. [CrossRef]
17. Bah, M.D.; Hafiane, A.; Canals, R. Deep Learning with Unsupervised Data Labeling for Weed Detection in Line Crops in UAV
Images. Remote Sens. 2018, 10, 1690. [CrossRef]
18. Teimouri, N.; Dyrmann, M.; Nielsen, P.R.; Mathiassen, S.K.; Somerville, G.J.; Jorgensen, R.N. Weed Growth Stage Estimator Using
Deep Convolutional Neural Networks. Sensors 2018, 18, 1580. [CrossRef]
19. Oghaz, M.M.; Razaak, M.; Kerdegari, H.; Argyriou, V.; Remagnino, P. Scene and Environment Monitoring Using Aerial Imagery
and Deep Learning. In Proceedings of the 15th Annual International Conference on Distributed Computing in Sensor Systems
(DCOSS), Santorini, Greece, 29–31 May 2019; pp. 362–369.
20. Zhu, S.; Deng, J.; Zhang, Y.; Yang, C.; Yan, Z.; Xie, Y. Study on distribution map of weeds in rice field based on UAV remote
sensing. J. South China Agric. Univ. 2020, 41, 67–74. [CrossRef]
21. Zualkernan, I.; Abuhani, D.A.; Hussain, M.H.; Khan, J.; ElMohandes, M. Machine Learning for Precision Agriculture Using
Imagery from Unmanned Aerial Vehicles (UAVs): A Survey. Drones 2023, 7, 382. [CrossRef]
22. Shi, J.Y.; Bai, Y.H.; Diao, Z.H.; Zhou, J.; Yao, X.B.; Zhang, B.H. Row Detection BASED Navigation and Guidance for Agricultural
Robots and Autonomous Vehicles in Row-Crop Fields: Methods and Applications. Agronomy 2023, 13, 1780. [CrossRef]
23. de Castro, A.I.; Shi, Y.; Maja, J.M.; Pena, J.M. UAVs for Vegetation Monitoring: Overview and Recent Scientific Contributions.
Remote Sens. 2021, 13, 2139. [CrossRef]
24. Khan, S.; Tufail, M.; Khan, M.T.; Khan, Z.A.; Anwar, S. Deep learning-based identification system of weeds and crops in strawberry
and pea fields for a precision agriculture sprayer. Precis. Agric. 2021, 22, 1711–1727. [CrossRef]
25. Kim, Y.H.; Park, K.R. MTS-CNN: Multi-task semantic segmentation-convolutional neural network for detecting crops and weeds.
Comput. Electron. Agric. 2022, 199, 107146. [CrossRef]
26. Deepa, S.N.; Rasi, D. FHGSO: Flower Henry gas solubility optimization integrated deep convolutional neural network for image
classification. Appl. Intell. 2022, 53, 7278–7297. [CrossRef]
27. Babu, V.S.; Ram, N.V. Deep Residual CNN with Contrast Limited Adaptive Histogram Equalization for Weed Detection in
Soybean Crops. Trait. Du Signal 2022, 39, 717–722. [CrossRef]
28. Reedha, R.; Dericquebourg, E.; Canals, R.; Hafiane, A. Transformer Neural Network for Weed and Crop Classification of High
Resolution UAV Images. Remote Sens. 2022, 14, 592. [CrossRef]
29. Yu, H.; Che, M.; Yu, H.; Zhang, J. Development of Weed Detection Method in Soybean Fields Utilizing Improved DeepLabv3+
Platform. Agronomy 2022, 12, 2889. [CrossRef]
30. Sun, Y.; Chen, Y.; Jin, X.; Yu, J.; Chen, Y. AI differentiation of bok choy seedlings from weeds. Fujian J. Agric. Sci. 2021, 36,
1484–1490. [CrossRef]
31. Wu, Z.N.; Chen, Y.J.; Zhao, B.; Kang, X.B.; Ding, Y.Y. Review of Weed Detection Methods Based on Computer Vision. Sensors 2021,
21, 3647. [CrossRef] [PubMed]
32. Xu, X.; Wang, L.; Shu, M.; Liang, X.; Ghafoor, A.Z.; Liu, Y.; Ma, Y.; Zhu, J. Detection and Counting of Maize Leaves Based on
Two-Stage Deep Learning with UAV-Based RGB Image. Remote Sens. 2022, 14, 5388. [CrossRef]
33. Fan, K.-J.; Su, W.-H. Applications of Fluorescence Spectroscopy, RGB- and MultiSpectral Imaging for Quality Determinations of
White Meat: A Review. Biosensors 2022, 12, 76. [CrossRef]
34. Li, Y.; Al-Sarayreh, M.; Irie, K.; Hackell, D.; Bourdot, G.; Reis, M.M.; Ghamkhar, K. Identification of Weeds Based on Hyperspectral
Imaging and Machine Learning. Front. Plant Sci. 2021, 11, 611622. [CrossRef]
35. Diao, Z.; Yan, J.; He, Z.; Zhao, S.; Guo, P. Corn seedling recognition algorithm based on hyperspectral image and lightweight-3D-
CNN. Comput. Electron. Agric. 2022, 201, 107343. [CrossRef]
36. Dashti, H.; Glenn, N.F.; Ustin, S.; Mitchell, J.J.; Qi, Y.; Ilangakoon, N.T.; Flores, A.N.; Luis Silvan-Cardenas, J.; Zhao, K.; Spaete,
L.P.; et al. Empirical Methods for Remote Sensing of Nitrogen in Drylands May Lead to Unreliable Interpretation of Ecosystem
Function. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3993–4004. [CrossRef]
37. Lou, Z.; Quan, L.; Sun, D.; Li, H.; Xia, F. Hyperspectral remote sensing to assess weed competitiveness in maize farmland
ecosystems. Sci. Total Environ. 2022, 844, 157071. [CrossRef] [PubMed]
38. Su, J.; Yi, D.; Coombes, M.; Liu, C.; Zhai, X.; McDonald-Maier, K.; Chen, W.-H. Spectral analysis and mapping of blackgrass weed
by leveraging machine learning and UAV multispectral imagery. Comput. Electron. Agric. 2022, 192, 106621. [CrossRef]
39. Su, J.; Coombes, M.; Liu, C.; Zhu, Y.; Song, X.; Fang, S.; Guo, L.; Chen, W.H. Machine Learning-Based Crop Drought Mapping
System by UAV Remote Sensing RGB Imagery. Unmanned Syst. 2020, 8, 71–83. [CrossRef]
40. Amarasingam, N.; Hamilton, M.; Kelly, J.E.; Zheng, L.; Sandino, J.; Gonzalez, F.; Dehaan, R.L.; Cherry, H. Autonomous Detection
of Mouse-Ear Hawkweed Using Drones, Multispectral Imagery and Supervised Machine Learning. Remote Sens. 2023, 15, 1633.
[CrossRef]
41. Lopez, L.O.; Ortega, G.; Aguera-Vega, F.; Carvajal-Ramirez, F.; Martinez-Carricondo, P.; Garzon, E.M. Multispectral Imaging for
Weed Identification in Herbicides Testing. Informatica 2022, 33, 771–793. [CrossRef]
42. Aguera-Vega, F.; Aguera-Puntas, M.; Aguera-Vega, J.; Martinez-Carricondo, P.; Carvajal-Ramirez, F. Multi-sensor imagery
rectification and registration for herbicide testing. Measurement 2021, 175, 109049. [CrossRef]
43. Allred, B.; Martinez, L.; Fessehazion, M.K.; Rouse, G.; Williamson, T.N.; Wishart, D.; Koganti, T.; Freeland, R.; Eash, N.; Batschelet,
A.; et al. Overall results and key findings on the use of UAV visible-color, multispectral, and thermal infrared imagery to map
agricultural drainage pipes. Agric. Water Manag. 2020, 232, 106036. [CrossRef]
44. Eide, A.; Koparan, C.; Zhang, Y.; Ostlie, M.; Howatt, K.; Sun, X. UAV-Assisted Thermal Infrared and Multispectral Imaging of
Weed Canopies for Glyphosate Resistance Detection. Remote Sens. 2021, 13, 4606. [CrossRef]
45. Pineda, M.; Baron, M.; Perez-Bueno, M.L. Thermal Imaging for Plant Stress Detection and Phenotyping. Remote Sens. 2021, 13, 68.
[CrossRef]
46. Wang, X.; Pan, H.; Guo, K.; Yang, X.; Luo, S. The evolution of LiDAR and its application in high precision measurement. IOP Conf.
Ser. Earth Environ. Sci. 2020, 502, 012008. [CrossRef]
47. Moreno, H.; Valero, C.; Bengochea-Guevara, J.M.; Ribeiro, A.; Garrido-Izard, M.; Andujar, D. On-Ground Vineyard Reconstruction
Using a LiDAR-Based Automated System. Sensors 2020, 20, 1102. [CrossRef] [PubMed]
48. Sudars, K.; Jasko, J.; Namatevs, I.; Ozola, L.; Badaukis, N. Dataset of annotated food crops and weed images for robotic computer
vision control. Data Brief 2020, 31, 105833. [CrossRef]
49. Olsen, A.; Konovalov, D.A.; Philippa, B.; Ridd, P.; Wood, J.C.; Johns, J.; Banks, W.; Girgenti, B.; Kenny, O.; Whinney, J.; et al.
DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning. Sci. Rep. 2019, 9, 2058. [CrossRef]
50. Jiang, H.H.; Zhang, C.Y.; Qiao, Y.L.; Zhang, Z.; Zhang, W.J.; Song, C.Q. CNN feature based graph convolutional network for weed
and crop recognition in smart farming. Comput. Electron. Agric. 2020, 174, 105450. [CrossRef]
51. Sa, I.; Chen, Z.T.; Popovic, M.; Khanna, R.; Liebisch, F.; Nieto, J.; Siegwart, R. weedNet: Dense Semantic Weed Classification
Using Multispectral Images and MAV for Smart Farming. IEEE Robot. Autom. Lett. 2018, 3, 588–595. [CrossRef]
52. Binch, A.; Fox, C.W. Controlled comparison of machine vision algorithms for Rumex and Urtica detection in grassland. Comput.
Electron. Agric. 2017, 140, 123–138. [CrossRef]
53. Osorio, K.; Puerto, A.; Pedraza, C.; Jamaica, D.; Rodriguez, L. A Deep Learning Approach for Weed Detection in Lettuce Crops
Using Multispectral Images. Agriengineering 2020, 2, 471–488. [CrossRef]
54. Espejo-Garcia, B.; Mylonas, N.; Athanasakos, L.; Fountas, S.; Vasilakoglou, I. Towards weeds identification assistance through
transfer learning. Comput. Electron. Agric. 2020, 171, 105306. [CrossRef]
55. Alam, M.S.; Alam, M.; Tufail, M.; Khan, M.U.; Guenes, A.; Salah, B.; Nasir, F.E.; Saleem, W.; Khan, M.T. TobSet: A New Tobacco
Crop and Weeds Image Dataset and Its Utilization for Vision-Based Spraying by Agricultural Robots. Appl. Sci. 2022, 12, 1308.
[CrossRef]
56. Champ, J.; Mora-Fallas, A.; Goeau, H.; Mata-Montero, E.; Bonnet, P.; Joly, A. Instance segmentation for the fine detection of crop
and weed plants by precision agricultural robots. Appl. Plant Sci. 2020, 8, e11373. [CrossRef]
57. Di Cicco, M.; Potena, C.; Grisetti, G.; Pretto, A. Automatic Model Based Dataset Generation for Fast and Accurate Crop and
Weeds Detection. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)/Workshop
on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics, Vancouver, BC, Canada, 24–28 September 2017;
pp. 5188–5195.
58. Hasan, A.; Sohel, F.; Diepeveen, D.; Laga, H.; Jones, M.G.K. A survey of deep learning techniques for weed detection from images.
Comput. Electron. Agric. 2021, 184, 106067. [CrossRef]
59. Wang, A.; Xu, Y.; Wei, X.; Cui, B. Semantic Segmentation of Crop and Weed using an Encoder-Decoder Network and Image
Enhancement Method under Uncontrolled Outdoor Illumination. IEEE Access 2020, 8, 81724–81734. [CrossRef]
60. Ramirez, W.; Achanccaray, P.; Mendoza, L.F.; Pacheco, M.A.C. Deep Convolutional Neural Networks For Weed Detection in
Agricultural Crops Using Optical Aerial Images. In Proceedings of the IEEE Latin American GRSS and ISPRS Remote Sensing
Conference (LAGIRS), Santiago, Chile, 21–26 March 2020; pp. 133–137.
61. Vypirailenko, D.; Kiseleva, E.; Shadrin, D.; Pukalchik, M. Deep learning techniques for enhancement of weeds growth classification.
In Proceedings of the IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Glasgow, UK,
17–20 May 2021.
62. Gee, C.; Denimal, E. RGB Image-Derived Indicators for Spatial Assessment of the Impact of Broadleaf Weeds on Wheat Biomass.
Remote Sens. 2020, 12, 2982. [CrossRef]
63. Slaughter, D.C. The Biological Engineer: Sensing the Difference Between Crops and Weeds. In Automation: The Future of Weed
Control in Cropping Systems; Young, S.L., Pierce, F.J., Eds.; Springer: Dordrecht, The Netherlands, 2014; pp. 71–95. [CrossRef]
64. Al-Badri, A.H.; Ismail, N.A.; Al-Dulaimi, K.; Salman, G.A.; Khan, A.R.; Al-Sabaawi, A.; Salam, M.S.H. Classification of weed
using machine learning techniques: A review-challenges, current and future potential techniques. J. Plant Dis. Prot. 2022, 129,
745–768. [CrossRef]
65. Cimpoi, M.; Maji, S.; Kokkinos, I.; Vedaldi, A. Deep Filter Banks for Texture Recognition, Description, and Segmentation. Int. J.
Comput. Vis. 2016, 118, 65–94. [CrossRef] [PubMed]
66. Ashraf, T.; Khan, Y.N. Weed density classification in rice crop using computer vision. Comput. Electron. Agric. 2020, 175, 105590.
[CrossRef]
67. Ayalew, G.; Zaman, Q.U.; Schumann, A.W.; Percival, D.C.; Chang, Y. An investigation into the potential of Gabor wavelet features
for scene classification in wild blueberry fields. Artif. Intell. Agric. 2021, 5, 72–81. [CrossRef]
68. Zhang, L.; Zhang, Z.; Wu, C.; Sun, L. Segmentation algorithm for overlap recognition of seedling lettuce and weeds based on
SVM and image blocking. Comput. Electron. Agric. 2022, 201. [CrossRef]
69. Miao, R.; Yang, H.; Wu, J.; Liu, H. Weed identification of overlapping spinach leaves based on image sub-block and reconstruction.
Trans. Chin. Soc. Agric. Eng. 2020, 36, 178–184.
70. Vi Nguyen Thanh, L.; Ahderom, S.; Alameh, K. Performances of the LBP Based Algorithm over CNN Models for Detecting Crops
and Weeds with Similar Morphologies. Sensors 2020, 20, 2193. [CrossRef]
71. Raja, G.; Dev, K.; Philips, N.D.; Suhaib, S.A.M.; Deepakraj, M.; Ramasamy, R.K. DA-WDGN: Drone-Assisted Weed Detection
using GLCM-M features and NDIRT indices. In Proceedings of the IEEE Conference on Computer Communications Workshops
(IEEE INFOCOM), Vancouver, BC, Canada, 9–12 May 2021.
72. Zaman, M.H.M.; Mustaza, S.M.; Ibrahim, M.F.; Zulkifley, M.A.; Mustafa, M.M. Weed Classification Based on Statistical Features
from Gabor Transform Magnitude. In Proceedings of the International Conference on Decision Aid Sciences and Application
(DASA), Sakheer, Bahrain, 7–8 December 2021.
73. Wang, A.; Zhang, W.; Wei, X. A review on weed detection using ground-based machine vision and image processing techniques.
Comput. Electron. Agric. 2019, 158, 226–240. [CrossRef]
74. Bailey, D.; Chang, Y.; Le Moan, S. Analysing Arbitrary Curves from the Line Hough Transform. J. Imaging 2020, 6, 26. [CrossRef]
[PubMed]
75. Teplyakov, L.; Kaymakov, K.; Shvets, E.; Nikolaev, D. Line detection via a lightweight CNN with a Hough Layer. In Proceedings
of the 13th International Conference on Machine Vision, Rome, Italy, 2–6 November 2021.
76. Qi, M.; Wang, Y.; Chen, Y.; Xin, H.; Xu, Y.; Meng, H.; Wang, A. Center detection algorithm for printed circuit board circular marks
based on image space and parameter space. J. Electron. Imaging 2023, 32, 011002. [CrossRef]
77. Islam, N.; Rashid, M.M.; Wibowo, S.; Xu, C.-Y.; Morshed, A.; Wasimi, S.A.; Moore, S.; Rahman, S.M. Early Weed Detection Using
Image Processing and Machine Learning Techniques in an Australian Chilli Farm. Agriculture 2021, 11, 387. [CrossRef]
78. Fawakherji, M.; Potena, C.; Pretto, A.; Bloisi, D.D.; Nardi, D. Multispectral Image Synthesis for Crop/Weed Segmentation in
Precision Farming. Robot. Auton. Syst. 2021, 146, 103861. [CrossRef]
79. Ustin, S.L.; Jacquemoud, S. How the Optical Properties of Leaves Modify the Absorption and Scattering of Energy and Enhance
Leaf Functionality. Remote Sens. Plant Biodivers. 2020, 14, 349–384.
80. Zhu, W.; Sun, Z.; Huang, Y.; Yang, T.; Li, J.; Zhu, K.; Zhang, J.; Yang, B.; Shao, C.; Peng, J.; et al. Optimization of multi-source UAV
RS agro-monitoring schemes designed for field-scale crop phenotyping. Precis. Agric. 2021, 22, 1768–1802. [CrossRef]
81. Calderon, R.; Montes-Borrego, M.; Landa, B.B.; Navas-Cortes, J.A.; Zarco-Tejada, P.J. Detection of downy mildew of opium poppy
using high-resolution multispectral and thermal imagery acquired with an unmanned aerial vehicle. Precis. Agric. 2014, 15,
639–661. [CrossRef]
82. Bellvert, J.; Zarco-Tejada, P.J.; Girona, J.; Fereres, E. Mapping crop water stress index in a ‘Pinot-noir’ vineyard: Comparing
ground measurements with thermal remote sensing imagery from an unmanned aerial vehicle. Precis. Agric. 2014, 15, 361–376.
[CrossRef]
83. Sabat-Tomala, A.; Raczko, E.; Zagajewski, B. Comparison of Support Vector Machine and Random Forest Algorithms for Invasive
and Expansive Species Classification Using Airborne Hyperspectral Data. Remote Sens. 2020, 12, 516. [CrossRef]
84. Shen, Y.; Yin, Y.; Li, B.; Zhao, C.; Li, G. Detection of impurities in wheat using terahertz spectral imaging and convolutional neural
networks. Comput. Electron. Agric. 2021, 181, 105931. [CrossRef]
85. Guo, X.; Ge, Y.; Liu, F.; Yang, J. Identification of maize and wheat seedlings and weeds based on deep learning. Front. Earth Sci.
2023, 11, 1146558. [CrossRef]
86. Wang, Y.; Zhang, X.; Ma, G.; Du, X.; Shaheen, N.; Mao, H. Recognition of weeds at asparagus fields using multi-feature fusion
and backpropagation neural network. Int. J. Agric. Biol. Eng. 2021, 14, 190–198. [CrossRef]
87. Tannouche, A.; Sbai, K.; Rahmoune, M.; Zoubir, A.; Agounoune, R.; Saadani, R.; Rahmani, A. A Fast and Efficient Shape
Descriptor for an Advanced Weed Type Classification Approach. Int. J. Electr. Comput. Eng. 2016, 6, 1168–1175.
88. Bakhshipour, A.; Jafari, A. Evaluation of support vector machine and artificial neural networks in weed detection using shape
features. Comput. Electron. Agric. 2018, 145, 153–160. [CrossRef]
89. Zhuang, J.; Jin, X.; Chen, Y.; Meng, W.; Wang, Y.; Yu, J.; Muthukumar, B. Drought stress impact on the performance of deep
convolutional neural networks for weed detection in Bahiagrass. Grass Forage Sci. 2023, 78, 214–223. [CrossRef]
90. Li, D.; Shi, G.; Li, J.; Chen, Y.; Zhang, S.; Xiang, S.; Jin, S. PlantNet: A dual-function point cloud segmentation network for multiple
plant species. ISPRS J. Photogramm. Remote Sens. 2022, 184, 243–263. [CrossRef]
91. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [CrossRef]
92. Zhu, Y.; Wang, M.; Yin, X.; Zhang, J.; Meijering, E.; Hu, J. Deep Learning in Diverse Intelligent Sensor Based Systems. Sensors
2023, 23, 62. [CrossRef]
93. Garibaldi-Marquez, F.; Flores, G.; Mercado-Ravell, D.A.; Ramirez-Pedraza, A.; Valentin-Coronado, L.M. Weed Classification from
Natural Corn Field-Multi-Plant Images Based on Shallow and Deep Learning. Sensors 2022, 22, 3021. [CrossRef]
94. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.Q.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.;
Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8,
53. [CrossRef]
95. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need.
In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9
December 2017; pp. 6000–6010.
96. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Houlsby, N. An Image is Worth 16x16 Words: Transformers for Image
Recognition at Scale. arXiv 2020, arXiv:2010.11929.
97. Jiang, K.; Afzaal, U.; Lee, J. Transformer-Based Weed Segmentation for Grass Management. Sensors 2023, 23, 65. [CrossRef]
98. Khan, S.; Naseer, M.; Hayat, M.; Zamir, S.W.; Khan, F.S.; Shah, M. Transformers in Vision: A Survey. ACM Comput. Surv. 2022, 54, 200.
[CrossRef]
99. Tao, T.; Wei, X. A hybrid CNN-SVM classifier for weed recognition in winter rape field. Plant Methods 2022, 18, 29. [CrossRef]
100. Zhang, H.; Wang, Z.; Guo, Y.; Ma, Y.; Cao, W.; Chen, D.; Yang, S.; Gao, R. Weed Detection in Peanut Fields Based on Machine
Vision. Agriculture 2022, 12, 1541. [CrossRef]
101. Jin, X.; Sun, Y.; Che, J.; Bagavathiannan, M.; Yu, J.; Chen, Y. A novel deep learning-based method for detection of weeds in
vegetables. Pest Manag. Sci. 2022, 78, 1861–1869. [CrossRef]
102. Abouzahir, S.; Sadik, M.; Sabir, E. Bag-of-visual-words-augmented Histogram of Oriented Gradients for efficient weed
detection. Biosyst. Eng. 2021, 202, 179–194. [CrossRef]
103. Haq, M.A. CNN Based Automated Weed Detection System Using UAV Imagery. Comput. Syst. Sci. Eng. 2022, 42, 837–849.
[CrossRef]
104. Milioto, A.; Lottes, P.; Stachniss, C. Real-Time Blob-Wise Sugar Beets vs. Weeds Classification for Monitoring Fields Using
Convolutional Neural Networks. In Proceedings of the International Conference on Unmanned Aerial Vehicles in Geomatics,
Bonn, Germany, 4–7 September 2017; pp. 41–48.
105. Ong, P.; Teo, K.S.; Sia, C.K. UAV-based weed detection in Chinese cabbage using deep learning. Smart Agric. Technol. 2023, 4, 100181.
[CrossRef]
106. Quan, L.; Feng, H.; Li, Y.; Wang, Q.; Zhang, C.; Liu, J.; Yuan, Z. Maize seedling detection under different growth stages and
complex field environments based on an improved Faster R-CNN. Biosyst. Eng. 2019, 184, 1–23. [CrossRef]
107. Sanchez, P.R.; Zhang, H. Evaluation of a CNN-Based Modular Precision Sprayer in Broadcast-Seeded Field. Sensors 2022, 22, 9723.
[CrossRef]
108. Zhang, W.H.; Hansen, M.F.; Volonakis, T.N.; Smith, M.; Smith, L.; Wilson, J.; Ralston, G.; Broadbent, L.; Wright, G. Broad-Leaf
Weed Detection in Pasture. In Proceedings of the 3rd IEEE International Conference on Image, Vision and Computing (ICIVC),
Chongqing, China, 27–29 June 2018; pp. 101–105.
109. McCool, C.; Perez, T.; Upcroft, B. Mixtures of Lightweight Deep Convolutional Neural Networks: Applied to Agricultural
Robotics. IEEE Robot. Autom. Lett. 2017, 2, 1344–1351. [CrossRef]
110. Asseng, S.; Asche, F. Future farms without farmers. Sci. Robot. 2019, 4, eaaw1875. [CrossRef]
111. Wang, D.S.; Cao, W.J.; Zhang, F.; Li, Z.L.; Xu, S.; Wu, X.Y. A Review of Deep Learning in Multiscale Agricultural Sensing. Remote
Sens. 2022, 14, 559. [CrossRef]
112. Zhang, H.D.; Wang, L.Q.; Tian, T.; Yin, J.H. A Review of Unmanned Aerial Vehicle Low-Altitude Remote Sensing (UAV-LARS)
Use in Agricultural Monitoring in China. Remote Sens. 2021, 13, 1221. [CrossRef]
113. Waldner, F.; Diakogiannis, F.I. Deep learning on edge: Extracting field boundaries from satellite images with a convolutional
neural network. Remote Sens. Environ. 2020, 245, 1221. [CrossRef]
114. Yuan, X.H.; Shi, J.F.; Gu, L.C. A review of deep learning methods for semantic segmentation of remote sensing imagery. Expert
Syst. Appl. 2021, 169, 114417. [CrossRef]
115. Liu, J.; Xiang, J.J.; Jin, Y.J.; Liu, R.H.; Yan, J.N.; Wang, L.Z. Boost Precision Agriculture with Unmanned Aerial Vehicle Remote
Sensing and Edge Intelligence: A Survey. Remote Sens. 2021, 13, 4387. [CrossRef]
116. Khan, S.; Tufail, M.; Khan, M.T.; Khan, Z.A.; Iqbal, J.; Wasim, A. Real-time recognition of spraying area for UAV sprayers using a
deep learning approach. PLoS ONE 2021, 16, e0249436. [CrossRef] [PubMed]
117. De Castro, A.I.; Ehsani, R.; Ploetz, R.; Crane, J.H.; Abdulridha, J. Optimum spectral and geometric parameters for early detection
of laurel wilt disease in avocado. Remote Sens. Environ. 2015, 171, 33–44. [CrossRef]
118. Xie, C.Q.; Yang, C. A review on plant high-throughput phenotyping traits using UAV-based sensors. Comput. Electron. Agric.
2020, 178, 105731. [CrossRef]
119. Allred, B.; Eash, N.; Freeland, R.; Martinez, L.; Wishart, D. Effective and efficient agricultural drainage pipe mapping with UAS
thermal infrared imagery: A case study. Agric. Water Manag. 2018, 197, 132–137. [CrossRef]
120. Guo, A.T.; Huang, W.J.; Dong, Y.Y.; Ye, H.C.; Ma, H.Q.; Liu, B.; Wu, W.B.; Ren, Y.; Ruan, C.; Geng, Y. Wheat Yellow Rust Detection
Using UAV-Based Hyperspectral Technology. Remote Sens. 2021, 13, 123. [CrossRef]
121. Radoglou-Grammatikis, P.; Sarigiannidis, P.; Lagkas, T.; Moscholios, I. A compilation of UAV applications for precision agriculture.
Comput. Netw. 2020, 172, 107148. [CrossRef]
122. Sivakumar, A.N.; Modi, S.; Gasparino, M.V.; Ellis, C.; Velasquez, A.E.B.; Chowdhary, G.; Gupta, S. Learned Visual Navigation
for Under-Canopy Agricultural Robots. In Proceedings of the Conference on Robotics: Science and Systems, Virtual,
12–16 July 2021.
123. Subeesh, A.; Mehta, C.R. Automation and digitization of agriculture using artificial intelligence and internet of things. Artif. Intell.
Agric. 2021, 5, 278–291. [CrossRef]
124. Andreasen, C.; Scholle, K.; Saberi, M. Laser Weeding With Small Autonomous Vehicles: Friends or Foes? Front. Agron. 2022, 4,
841086. [CrossRef]
125. Tran, D.; Schouteten, J.J.; Degieter, M.; Krupanek, J.; Jarosz, W.; Areta, A.; Emmi, L.; De Steur, H.; Gellynck, X. European
stakeholders’ perspectives on implementation potential of precision weed control: The case of autonomous vehicles with laser
treatment. Precis. Agric. 2023, 24, 2200–2222. [CrossRef]
126. Hussain, A.; Fatima, H.S.; Zia, S.M.; Hasan, S.; Khurram, M.; Stricker, D.; Afzal, M.Z. Development of Cost-Effective and Easily
Replicable Robust Weeding Machine-Premiering Precision Agriculture in Pakistan. Machines 2023, 11, 287. [CrossRef]
127. Xu, S.Y.; Wu, J.J.; Zhu, L.; Li, W.H.; Wang, Y.T.; Wang, N. A novel monocular visual navigation method for cotton-picking robot
based on horizontal spline segmentation. In Proceedings of the 9th International Symposium on Multispectral Image Processing
and Pattern Recognition (MIPPR)—Automatic Target Recognition and Navigation, Enshi, China, 31 October–1 November 2015.
128. Jia, W.K.; Zhang, Y.; Lian, J.; Zheng, Y.J.; Zhao, D.; Li, C.J. Apple harvesting robot under information technology: A review. Int. J.
Adv. Robot. Syst. 2020, 17, 1729881420925310. [CrossRef]
129. Jiang, W.; Quan, L.Z.; Wei, G.Y.; Chang, C.; Geng, T.Y. A conceptual evaluation of a weed control method with post-damage
application of herbicides: A composite intelligent intra-row weeding robot. Soil Tillage Res. 2023, 234, 105837. [CrossRef]
130. Mohamed, E.S.; Belal, A.; Abd-Elmabod, S.K.; El-Shirbeny, M.A.; Gad, A.; Zahran, M.B. Smart farming for improving agricultural
management. Egypt. J. Remote Sens. Space Sci. 2021, 24, 971–981. [CrossRef]
131. Darwin, B.; Dharmaraj, P.; Prince, S.; Popescu, D.E.; Hemanth, D.J. Recognition of Bloom/Yield in Crop Images Using Deep
Learning Models for Smart Agriculture: A Review. Agronomy 2021, 11, 646. [CrossRef]
132. Jin, S.; Dai, H.; Peng, J.; He, Y.; Zhu, M.; Yu, W.; Li, Q. An Improved Mask R-CNN Method for Weed Segmentation. In Proceedings
of the 17th Conference on Industrial Electronics and Applications (ICIEA), Chengdu, China, 16–19 December 2022.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.