Chapter - 8
Advanced Applications of Neural Networks:
From Image Recognition to Natural Language
Processing
Authors
Abhishek Rawat
Department of Computer Science & Engineering, Pranveer
Singh Institute of Technology, Kanpur, Uttar Pradesh, India
Rajat Verma
Department of Computer Science & Engineering, Pranveer
Singh Institute of Technology, Kanpur, Uttar Pradesh, India
Raghuraj Singh Suryavanshi
Department of Computer Science & Engineering, Pranveer
Singh Institute of Technology, Kanpur, Uttar Pradesh, India
In: Computational Techniques in Engineering, Science and Technology, Edn. 2024, ISBN: 978-1-80433-922-0
Abstract
This chapter focuses on advanced applications of neural networks, particularly in natural language processing and image recognition. The first section examines the methods and strategies used for image recognition. It explores the internal structure of convolutional neural networks (CNNs), emphasizing how effectively they extract relevant features from images for precise object detection and classification. The difficulties of training CNNs are discussed, along with insights into possible ways to improve their efficiency. The focus then shifts to natural language processing (NLP), where recurrent neural networks (RNNs) are driving important developments. The mechanisms of long short-term memory (LSTM) networks are examined, demonstrating how they capture long-range dependencies in textual material and improve the precision and fluency of NLP applications. Advanced neural network applications also raise several ethical questions, which are covered here, including privacy concerns, bias, and the responsible use of AI; the chapter stresses how crucial it is to create fair and reliable models that take the societal effects of their outputs into account. Remarkable advances in domains including driverless cars, virtual assistants, and medical imaging are presented, underscoring the deep influence of machine learning on a range of sectors. This chapter offers a thorough summary of cutting-edge uses of neural networks, demonstrating their adaptability and promise across fields, from natural language processing to image recognition. In today's rapidly changing technological landscape, it is an invaluable resource for researchers, practitioners, and enthusiasts looking to push the boundaries of artificial intelligence and apply neural networks to challenging problems.
Keywords: Image recognition, convolutional neural networks, natural language processing, neural networks
1. Introduction
Neural networks have become a game-changer in modern computing,
merging the realms of artificial intelligence and brain-inspired design. These
networks are built upon interconnected layers of artificial neurons that mimic
the intricate workings of the human brain [1]. They can carry out intricate
calculations and learn from enormous volumes of data thanks to their design,
which has produced amazing advances in machine learning. Neural networks
come in various forms, each intended for a particular purpose. Feedforward
neural networks, for example, are widely used for tasks like image recognition
and natural language processing [2]. By passing data through multiple layers,
these networks can extract valuable features and make accurate predictions.
Recurrent neural networks, on the other hand, incorporate feedback loops that
allow them to process sequential data, making them suitable for tasks like
speech recognition and time series analysis [3]. Convolutional neural networks
have transformed computer vision applications by facilitating tasks like object
detection and image categorization. This is due to their capacity to interpret
grid-like data.
The applications of artificial neural networks extend across various industries. In image recognition, neural networks play an essential role: they are used to evaluate and categorize images, enabling applications such as facial recognition and object detection in fields like security and autonomous vehicles. Natural Language
Processing (NLP) is another area where neural networks have made
significant contributions [4]. They are utilized in speech recognition, machine
translation and text processing, enabling virtual assistants and language
translation systems to understand and generate human language. Moreover, in
industries such as finance, neural networks are employed to make predictions
about stock prices, market trends, and investment opportunities based on
historical financial data. These are only a handful of the many uses for neural
networks that demonstrate how they have the power to transform a wide range
of industries and change the way we solve challenging issues.
2. Image Recognition: Harnessing the Power of Convolutional Neural Networks (CNNs)
Convolutional neural networks (CNNs) have become a potent tool for computer vision and image recognition applications. By automatically extracting hierarchical features from raw input images, these networks remove the need for laborious manual feature engineering and have transformed the field of computer vision [5]. CNNs excel at distinguishing objects and features in images by using convolutional layers that apply filters to detect local patterns. Their remarkable success in applications such as image classification, object detection, and image recognition has made them the state-of-the-art method in computer vision. They are frequently employed in machine learning modeling and have become increasingly popular in the data science community, especially for building image classifiers.
CNNs are specifically built for tasks involving image processing and recognition [6]. They are made up of several layer types: convolutional, pooling, and fully connected layers. Convolutional layers apply filters to the input image, performing convolutions to extract features. Pooling layers simplify computation by lowering the spatial dimensionality of the features. Fully connected layers, which connect every neuron in one layer to every neuron in the next, allow the network to make predictions based on the learned features. CNNs have many uses in computer vision and image recognition and are widely utilized in industries such as healthcare, driverless cars, and security [7]. In security systems, CNNs perform functions such as object detection and facial recognition, allowing precise tracking and identification of people or objects. In self-driving automobiles, CNNs are essential for operations such as lane identification, road marking recognition, and pedestrian detection, which boosts both the effectiveness and the safety of these vehicles. In the medical field, CNNs are used to analyze medical images, helping with disease diagnosis and anomaly detection in scans. CNNs have also been applied in several other industries, such as retail, entertainment, and agriculture: in agriculture, to estimate yield and detect crop diseases; in retail, for tasks like inventory control and product identification; and in the entertainment sector, for video analysis and content recommendation.
2.1 Convolutional Layers: Extracting Features from Images
In convolutional neural networks (CNNs), convolutional layers are essential for extracting features from images. These layers are responsible for detecting and identifying structures, patterns, and shapes in the input images. Using a set of learnable filters, convolutional layers capture local properties of the input image and build feature maps that represent different aspects of the image. A convolutional layer slides a small filter, called a kernel, across the input and applies element-wise multiplication and summation to extract characteristics from the image [8]. This operation produces a feature map that highlights the presence of particular traits or motifs in the image. The filters in the first layers of a CNN identify low-level properties such as corners, edges, and textures. Filters in subsequent layers of the network become increasingly adept at recognizing complex and abstract aspects, including shapes, people, animals, and objects.
Because of this hierarchical structure, convolutional layers can learn more complex characteristics as the network gets deeper [9]. In deeper layers, lower-level features found in early layers are combined and transformed into higher-level features. Much like the way the human visual system interprets visual stimuli, hierarchical feature extraction allows CNNs to acquire and interpret complex visual information. Convolutional layers produce feature maps (also called activation maps), which are then passed as input to the next levels in the network, namely pooling or fully connected layers. To generate predictions or carry out tasks such as object detection or image classification, these layers further analyze the extracted information.
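To make the filtering operation concrete, the following minimal PyTorch sketch applies a single convolutional layer to a dummy image; the framework, filter count, and kernel size are illustrative choices, not prescribed by the chapter.

```python
import torch
import torch.nn as nn

# A single convolutional layer: 3 input channels (RGB), 16 learnable filters,
# each a 3x3 kernel slid across the image; padding=1 preserves spatial size.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

image = torch.randn(1, 3, 224, 224)   # one dummy RGB image
feature_maps = conv(image)            # element-wise multiply-and-sum per window
print(feature_maps.shape)             # torch.Size([1, 16, 224, 224])
```

Each of the 16 output channels is one feature map, produced by one filter responding to a particular local pattern.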
2.2 Pooling Layers: Reducing Dimensionality
Convolutional neural networks (CNNs) rely on pooling layers to reduce feature map dimensionality while retaining meaningful information. Convolutional layers extract local characteristics from the input data, but the resulting feature maps may contain redundant information and be computationally costly to process. Pooling layers are used to overcome these obstacles: they downsample the feature maps using a chosen window size and stride length, which lowers the computational cost and the model's parameter count. This dimensionality reduction makes the network more efficient and less prone to overfitting [10].
Additionally, pooling layers help CNNs achieve translation invariance. Translation invariance is the property that the same features are recognized regardless of an object's precise location in the image, meaning that an object's position has no bearing on the classification outcome. Pooling operations, such as max pooling and average pooling, are applied over each region of the feature map to retain the most significant characteristics and discard less significant data. This enhances the network's overall performance and capacity for generalization while also helping to reduce noise and focus on the most discriminative elements. In summary, pooling layers in CNNs play a vital role in reducing the dimensionality of feature maps while preserving important information [11]. They contribute to the efficiency of the network by reducing computational complexity and parameter count, and they provide translation invariance, making the network more robust to variations in object position.
By selecting the most important characteristics within each region of the feature map, pooling layers improve the network's capacity to focus on pertinent information, which enhances its overall performance across many computer vision applications.
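The sketch below, again assuming PyTorch and illustrative sizes, shows how a 2x2 max-pooling layer halves each spatial dimension of a feature map.

```python
import torch
import torch.nn as nn

# 2x2 max pooling with stride 2 halves each spatial dimension,
# keeping only the strongest activation in each window.
pool = nn.MaxPool2d(kernel_size=2, stride=2)

feature_maps = torch.randn(1, 16, 224, 224)
pooled = pool(feature_maps)
print(pooled.shape)  # torch.Size([1, 16, 112, 112]) -- 4x fewer activations
```

Keeping only the maximum in each window is what makes the output insensitive to small shifts of the pattern inside the window.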
2.3 Fully Connected Layers: Making Predictions
Fully connected (dense) layers are essential components of convolutional neural networks (CNNs) and are responsible for producing predictions from the information extracted by the preceding layers [12]. Pooling and convolutional layers operate on restricted regions of the input, but fully connected layers connect all neurons in the preceding layer to every neuron in the current layer. As a result, the network can learn intricate patterns and correlations and integrate information globally. Fully connected layers are designed to convert the high-level, abstract features retrieved by the preceding convolutional and pooling layers into class probabilities or regression values. Every neuron in a fully connected layer receives input from every neuron in the layer above and applies its own weights and biases to the data. During training, these weights and biases are learned using techniques such as gradient descent and backpropagation.
Non-linear activation functions, such as the sigmoid or ReLU (rectified linear unit), are commonly applied as data traverses the fully connected layers to introduce non-linearity into the network. This non-linearity is essential for networks to learn intricate decision boundaries and recognize non-linear correlations between features. In a classification task, the number of neurons in the final fully connected layer typically equals the number of classes; using a softmax activation function, this layer produces class probabilities, i.e., the likelihood that an input belongs to each class. Regression tasks may instead use a single neuron in the last fully connected layer that outputs the predicted value directly [13].
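The following minimal sketch puts the three layer types together into a toy classifier; all dimensions, the class name, and the ten-class output are invented for illustration and are not part of the chapter.

```python
import torch
import torch.nn as nn

# A minimal CNN combining the three layer types discussed above.
# Layer sizes (16 filters, 10 classes, 32x32 input) are illustrative only.
class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # feature extraction
        self.pool = nn.MaxPool2d(2, 2)                          # dimensionality reduction
        self.fc = nn.Linear(16 * 16 * 16, num_classes)          # prediction head

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x)))  # ReLU adds non-linearity
        x = x.flatten(start_dim=1)               # flatten maps for the dense layer
        return self.fc(x)                        # raw class scores (logits)

model = SmallCNN()
logits = model(torch.randn(1, 3, 32, 32))
probs = torch.softmax(logits, dim=1)  # softmax turns logits into class probabilities
```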
3. Natural Language Processing (NLP): Using Neural Networks to Enhance Language Comprehension
Enabling computers to understand and use human language is the aim of the multidisciplinary field of natural language processing (NLP). It involves the analysis and manipulation of natural language data sets, including text and speech corpora, using rule-based or statistical machine learning techniques. NLP has found extensive use across numerous industries, including healthcare research, search engines, business intelligence, voice assistants, chatbots, dialogue systems, sentiment analysis, summarization, and question answering. Natural language generation (NLG) and natural language understanding (NLU) are two overlapping subfields of NLP [14]. While NLG is focused on text synthesis by machines, NLU is concerned with semantic analysis, i.e., determining the intended meaning of text. Using NLP, speech recognition technology can convert spoken language into written text and vice versa.
The use of neural networks, especially deep learning models, has significantly advanced natural language processing (NLP) in recent years. The accuracy and performance of NLP systems have increased significantly thanks to deep learning models such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) [15]. These models can accurately extract meaning from large volumes of unstructured text and speech data, automatically extracting, categorizing, and labeling elements of that data. Resources and features for NLP tasks are provided by libraries and toolkits such as Intel NLP Architect, Gensim, and the Natural Language Toolkit (NLTK). Gensim is a Python library for topic modeling and document indexing; Intel NLP Architect is a library of neural network topologies and techniques; and NLTK is a freely available Python module that includes data sets and tutorials. By enabling computers to comprehend and process human language, NLP supports a broad range of applications across multiple fields. The integration of neural networks and deep learning has improved the accuracy of NLP systems and their capability to extract meaning from vast amounts of audio and text input, while NLP tools and libraries offer features and resources that make NLP work easier.
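As a small illustration of the last point, assuming NLTK is installed (and noting that recent NLTK versions name the tokenizer resource "punkt_tab" rather than "punkt"), word tokenization with the toolkit looks roughly like this:

```python
import nltk

# The tokenizer models ship separately from the library itself;
# older versions use the "punkt" resource, newer ones "punkt_tab".
nltk.download("punkt", quiet=True)

text = "Neural networks have advanced natural language processing."
tokens = nltk.word_tokenize(text)
print(tokens)  # ['Neural', 'networks', 'have', 'advanced', ...]
```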
3.1 Recurrent Neural Networks (RNNs) in NLP
Recurrent neural networks (RNNs) are a useful tool in natural language processing (NLP). Many NLP applications, such as speech recognition, sentiment analysis, image captioning, and machine translation, have shown success using RNNs. This is because RNNs can analyze sequential input: they are adept at capturing contextual information and dependencies between words or tokens in a sequence, which lets them model the sequential structure of language and capture long-term dependencies. Because RNNs can handle variable-length sequences, they are a good choice for situations where the length of the input or output may vary [16]. In addition to being memory-efficient, their shared-parameter design results in fewer parameters than feedforward neural networks.
The application of RNNs in NLP has significantly improved language analysis and comprehension. Machine translation, sentiment analysis, and text mining are just a few of the applications where RNNs have proven their worth. They can model the ordered structure of language and capture long-term dependencies because they retain contextual information and dependencies within a sequence. Additionally, RNNs excel at managing sequences of varying lengths, which is important for tasks like machine translation where the lengths of the input and output can change. Thanks to their memory-efficient shared-parameter architecture, RNNs are a powerful way to process and analyze sequential input in NLP applications [17].
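A minimal PyTorch sketch of the variable-length property mentioned above: the same RNN, with its shared parameters, summarizes sequences of different lengths into a fixed-size hidden state (all sizes are illustrative).

```python
import torch
import torch.nn as nn

# An RNN reads a sequence one step at a time, carrying a hidden state
# that summarizes everything seen so far (the "context").
rnn = nn.RNN(input_size=50, hidden_size=128, batch_first=True)

# Two inputs of different lengths: RNNs handle variable-length sequences.
short_seq = torch.randn(1, 5, 50)    # 5 tokens, each a 50-dim embedding
long_seq = torch.randn(1, 12, 50)    # 12 tokens, same network

_, h_short = rnn(short_seq)
_, h_long = rnn(long_seq)
print(h_short.shape, h_long.shape)   # same (1, 1, 128) summary either way
```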
3.2 Sequential Data with Long Short-Term Memory (LSTM) Networks
In sequential data processing, long short-term memory (LSTM) networks have become increasingly prominent, especially in natural language processing (NLP) [18]. The LSTM, a recurrent neural network (RNN) architecture designed to capture long-term dependencies, is ideal for sequence-based tasks such as language modeling, machine translation, speech recognition, and image captioning. LSTM networks differ from traditional neural networks in that, thanks to the addition of feedback connections, they can handle entire data sequences rather than just individual data points. As a result, they excel at recognizing and predicting patterns in sequential data, including time series, text, and audio. LSTM networks are widely employed in many different fields [19]. For instance, Facebook used LSTM networks to execute billions of machine translations per day, and in the challenging video game Dota 2, OpenAI defeated human players using LSTMs trained via policy gradients. Moreover, LSTMs have been used for applications such as sentiment analysis, text summarization, and language modeling.
The capacity of LSTM networks to accommodate variable-length sequences is one of their advantages; this matters in NLP applications where the length of the input or output can change. Moreover, LSTM networks can capture long-term dependencies and contextual information between words or symbols in a sequence, which enables them to model the sequential structure of language. It is important to remember that, in comparison to simpler models, LSTM networks are more complex, can be computationally expensive, and require longer training cycles. Notwithstanding these difficulties, LSTM networks have proven to be an effective tool for NLP research and software, facilitating improvements in a range of tasks involving sequential data analysis [20].
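A short PyTorch sketch of an LSTM layer, with illustrative dimensions; note the two recurrent states (hidden and cell) that the gating mechanism maintains.

```python
import torch
import torch.nn as nn

# An LSTM maintains both a hidden state and a cell state; its gates let it
# retain information across long gaps, which plain RNNs struggle with.
lstm = nn.LSTM(input_size=50, hidden_size=128, batch_first=True)

sequence = torch.randn(1, 100, 50)    # a 100-step input sequence
outputs, (h_n, c_n) = lstm(sequence)  # h_n: final hidden, c_n: final cell state
print(outputs.shape)                  # torch.Size([1, 100, 128])
```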
3.3 Language Modeling and Sentiment Analysis
Two crucial tasks in natural language processing (NLP) are sentiment analysis and language modeling [21]. Sentiment analysis attempts to categorize the feelings or viewpoints expressed in a text, whereas language modeling predicts the likelihood of a word sequence appearing in a given context. Much NLP research and development has focused on these two goals because of their close relationship. Language modeling is crucial for many NLP applications, including text generation, speech recognition, and machine translation. It facilitates comprehension of linguistic patterns and structure, allowing systems to produce responses that are coherent and appropriate for the context. Language models such as recurrent neural networks (RNNs) and transformer-based architectures like BERT have been employed extensively for language modeling. These models can provide meaningful text representations by capturing the relationships between words.
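As a hedged illustration of what a neural language model computes, the toy model below (all names and sizes are our own invention) embeds tokens, runs an LSTM over them, and turns the final-position scores into a probability distribution over the next word:

```python
import torch
import torch.nn as nn

# Toy language model: embed tokens, run an LSTM, project to vocabulary scores.
# Vocabulary size and dimensions are arbitrary illustrative values.
class TinyLM(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, token_ids):
        x, _ = self.lstm(self.embed(token_ids))
        return self.head(x)  # one score per vocabulary word, per position

lm = TinyLM()
tokens = torch.randint(0, 1000, (1, 6))            # a 6-token input sequence
next_word_probs = torch.softmax(lm(tokens)[0, -1], dim=0)
print(next_word_probs.sum())                       # ~1.0: a valid distribution
```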
Sentiment analysis, on the other hand, focuses on identifying the sentiment and emotion communicated in a text [22]. It supports numerous activities, such as sentiment analysis of customer feedback, sentiment tracking for social media monitoring, and public opinion analysis. Sentiment analysis can be performed at several levels, including the aspect, sentence, and document levels. Deep learning and machine learning methods, including LSTM and BERT models, have been applied to sentiment analysis tasks [23]. Because these models can classify text according to the sentiment it conveys, textual data can provide businesses and organizations with useful insights. In short, sentiment analysis and language modeling are crucial NLP tasks: sentiment analysis categorizes the sentiment or opinion communicated in text, whereas language modeling aids comprehension of language structure and patterns [24]. The application of deep learning and machine learning techniques has led to tremendous progress in both tasks, enabling more precise and sophisticated interpretation of textual data.
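As an illustrative sketch, assuming the Hugging Face transformers library is installed (the library and its default pretrained model are our assumption, not something the chapter prescribes), an off-the-shelf sentiment classifier can be used as follows:

```python
from transformers import pipeline

# The pipeline loads a pretrained sentiment model by default; an internet
# connection is needed on first run to download the model weights.
classifier = pipeline("sentiment-analysis")

reviews = [
    "The product arrived quickly and works perfectly.",
    "Terrible support; I would not recommend this service.",
]
for result in classifier(reviews):
    print(result)  # e.g. {'label': 'POSITIVE', 'score': 0.99...}
```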
4. Conclusion
The advanced applications of neural networks, with a particular emphasis on image recognition and natural language processing, have been examined in this chapter. Neural network advances have transformed the way we perceive and engage with textual and visual material, creating a world of new possibilities.
Advancements in Image Recognition: Convolutional neural networks (CNNs) have enabled image recognition tasks to achieve unprecedented levels of efficiency and accuracy [25]. CNNs have made possible advances in several areas, including object recognition, image classification, and semantic segmentation [26]. These developments have opened new avenues for practical applications in fields such as medical image analysis, facial recognition software, and autonomous vehicles. CNNs have proven extremely adept at deciphering complex visual information and extracting important features, making strong and dependable recognition systems possible.
Enhancements in Natural Language Processing: Natural language processing tasks have been substantially improved by the combination of attention mechanisms and recurrent neural networks (RNNs) [27]. Because they perform well on sequential data, RNNs are well suited to language tasks and have been applied effectively to text generation, sentiment analysis, and language translation. By capturing contextual dependencies within text, RNNs have substantially improved language interpretation and generation [28]. Attention mechanisms have improved these models further by allowing them to concentrate on pertinent data, leading to notable gains in language translation and question-answering systems [29].
Challenges and Ongoing Research: Even though neural networks have made remarkable progress in natural language processing and image recognition, issues remain to be resolved [30]. One such difficulty is that these models need large amounts of labelled data to be trained properly. The interpretability of neural network models presents another difficulty, since it can be challenging to understand how they make decisions [31]. Furthermore, the efficiency and scalability of the neural network topologies in use today may be limited. Current research and development in the field aims to solve these issues and enhance neural networks' capabilities [32].
Future Potential and Impact: Through their sophisticated applications in image recognition and natural language processing, neural networks have shown enormous potential and impact across a wide range of sectors. They have created fresh opportunities for creativity and problem-solving [33]. As technology develops, neural networks are expected to remain essential for tackling complicated issues and fostering innovation in various fields, with applications ranging from enhancing healthcare diagnoses to facilitating more natural human-computer interactions [34].
All things considered, the development of neural networks has revolutionized the domains of natural language processing and image recognition, opening previously untapped possibilities for the use of textual and visual data. As we keep pushing the limits of what neural networks are capable of, we will undoubtedly see fascinating discoveries and developments in the future.
References
1. Zhang, Q., Yu, H., Barbiero, M., Wang, B., & Gu, M. (2019). Artificial
neural networks enabled by nanophotonics. Light: Science &
Applications, 8(1), 42.
2. Goldberg, Y. (2022). Neural network methods for natural language
processing. Springer Nature.
3. Wang, D., Wang, X., & Lv, S. (2019). An overview of end-to-end
automatic speech recognition. Symmetry, 11(8), 1018.
4. Toneva, M., & Wehbe, L. (2019). Interpreting and improving natural-
language processing (in machines) with natural language processing (in
the brain). Advances in neural information processing systems, 32.
5. Khan, A., Sohail, A., Zahoora, U., & Qureshi, A. S. (2020). A survey of
the recent architectures of deep convolutional neural networks. Artificial
intelligence review, 53, 5455-5516.
6. Hu, W., Zhang, Y., & Li, L. (2019). Study of the application of deep
convolutional neural networks (CNNs) in processing sensor data and
biomedical images. Sensors, 19(16), 3584.
7. Javed, A. R., Shahzad, F., ur Rehman, S., Zikria, Y. B., Razzak, I., Jalil,
Z., & Xu, G. (2022). Future smart cities: Requirements, emerging
technologies, applications, challenges, and future aspects. Cities, 129,
103794.
8. Maheshwari, K., Shaha, A., Arya, D., Rajasekaran, R., & Tripathy, B. K.
(2020). Convolutional neural networks: a bottom-up approach. Deep
learning research with engineering applications, 21-50.
9. Roy, D., Panda, P., & Roy, K. (2020). Tree-CNN: a hierarchical deep
convolutional neural network for incremental learning. Neural
Networks, 121, 148-160.
10. Abdulhammed, R., Musafer, H., Alessa, A., Faezipour, M., & Abuzneid,
A. (2019). Features dimensionality reduction approaches for machine
learning based network intrusion detection. Electronics, 8(3), 322.
11. Ma, J., & Yuan, Y. (2019). Dimension reduction of image deep feature
using PCA. Journal of Visual Communication and Image
Representation, 63, 102578.
12. Ketkar, N., Moolayil, J., Ketkar, N., & Moolayil, J. (2021). Convolutional
neural networks. Deep Learning with Python: Learn Best Practices of
Deep Learning Models with PyTorch, 197-242.
13. Gu, J., Wang, Z., Kuen, J., Ma, L., Shahroudy, A., Shuai, B., & Chen, T.
(2018). Recent advances in convolutional neural networks. Pattern
recognition, 77, 354-377.
14. Priyadarshini, S. B. B., Bagjadab, A. B., & Mishra, B. K. (2020). A brief
overview of natural language processing and artificial
intelligence. Natural Language Processing in Artificial Intelligence, 211-
224.
15. Banerjee, I., Ling, Y., Chen, M. C., Hasan, S. A., Langlotz, C. P.,
Moradzadeh, N., & Lungren, M. P. (2019). Comparative effectiveness of
convolutional neural network (CNN) and recurrent neural network (RNN)
architectures for radiology text report classification. Artificial
intelligence in medicine, 97, 79-88.
16. Viswambaran, R. A., Chen, G., Xue, B., & Nekooei, M. (2020, July).
Evolving deep recurrent neural networks using a new variable-length
genetic algorithm. In 2020 IEEE Congress on Evolutionary Computation
(CEC) (pp. 1-8). IEEE.
17. Zhang, X., Xia, H., Zhuang, D., Sun, H., Fu, X., Taylor, M. B., & Song,
S. L. (2021, June). η-lstm: Co-designing highly-efficient large lstm
training via exploiting memory-saving and architectural design
opportunities. In 2021 ACM/IEEE 48th Annual International Symposium
on Computer Architecture (ISCA) (pp. 567-580). IEEE.
18. Vennerød, C. B., Kjærran, A., & Bugge, E. S. (2021). Long short-term
memory RNN. arXiv preprint arXiv:2105.06756.
19. Lindemann, B., Maschler, B., Sahlab, N., & Weyrich, M. (2021). A survey
on anomaly detection for technical systems using LSTM
networks. Computers in Industry, 131, 103498.
20. Zhao, J. (2023). Adding time-series data to enhance performance of natural language processing tasks (Doctoral dissertation).
21. Rajput, A. (2020). Natural language processing, sentiment analysis, and
clinical analytics. In Innovation in health informatics (pp. 79-97).
Academic Press.
22. Kim, E., & Klinger, R. (2018). A survey on sentiment and emotion
analysis for computational literary studies. arXiv preprint
arXiv:1808.03137.
23. Chintalapudi, N., Battineni, G., & Amenta, F. (2021). Sentimental
analysis of COVID-19 tweets using deep learning models. Infectious
disease reports, 13(2), 329-339.
24. Wankhade, M., Rao, A. C. S., & Kulkarni, C. (2022). A survey on
sentiment analysis methods, applications, and challenges. Artificial
Intelligence Review, 55(7), 5731-5780.
25. Liu, Y., Pu, H., & Sun, D. W. (2021). Efficient extraction of deep image
features using convolutional neural network (CNN) for applications in
detecting and analysing complex food matrices. Trends in Food Science
& Technology, 113, 193-204.
26. Geng, Q., Zhou, Z., & Cao, X. (2018). Survey of recent progress in
semantic image segmentation with CNNs. Science China Information
Sciences, 61, 1-18.
27. Galassi, A., Lippi, M., & Torroni, P. (2020). Attention in natural language
processing. IEEE transactions on neural networks and learning
systems, 32(10), 4291-4308.
28. Li, J., Tang, T., Zhao, W. X., Nie, J. Y., & Wen, J. R. (2022). Pretrained
language models for text generation: A survey. arXiv preprint
arXiv:2201.05273.
29. Chaudhari, S., Mithal, V., Polatkan, G., & Ramanath, R. (2021). An
attentive survey of attention models. ACM Transactions on Intelligent
Systems and Technology (TIST), 12(5), 1-32.
30. Goldberg, Y. (2022). Neural network methods for natural language
processing. Springer Nature.
31. Zhang, Y., Tiňo, P., Leonardis, A., & Tang, K. (2021). A survey on neural
network interpretability. IEEE Transactions on Emerging Topics in
Computational Intelligence, 5(5), 726-742.
32. Misra, J., & Saha, I. (2010). Artificial neural networks in hardware: A
survey of two decades of progress. Neurocomputing, 74(1-3), 239-255.
33. Treffinger, D. J., Isaksen, S. G., & Stead-Dorval, K. B. (2023). Creative
problem solving: An introduction. Routledge.
34. Nazar, M., Alam, M. M., Yafi, E., & Su’ud, M. M. (2021). A systematic
review of human–computer interaction and explainable artificial
intelligence in healthcare with artificial intelligence techniques. IEEE
Access, 9, 153316-153348.