Recent AI Trends and Applications
Rekha Agarwal
Govt. Model Science College, Jabalpur
All content following this page was uploaded by Rajesh Kumar Mishra on 07 January 2025.
Introduction:
Artificial Intelligence (AI) has transcended from being a theoretical concept to a
cornerstone of technological advancement. The integration of AI across industries
demonstrates its potential to revolutionize processes, systems, and services. This chapter
examines recent trends in AI, exploring key innovations and their applications across
diverse fields, and concludes by discussing future directions and challenges. The beginning
of modern AI can be traced to the classical philosophers' attempts to describe human
thinking as a symbolic system. But the field of AI was not formally founded until 1956, at a
conference at Dartmouth College, in Hanover, New Hampshire, where the term “artificial
intelligence” was coined. The organizers included John McCarthy, Marvin Minsky, Claude
Shannon, and Nathaniel Rochester, all of whom went on to contribute greatly to the field. In the
years following the Dartmouth Conference, impressive advances were made in AI.
Machines were built that could solve school mathematics problems, and a program called
Eliza became the world's first chatbot, occasionally fooling users into thinking that it was
conscious.
The first “AI winter” lasted from 1974 until around 1980. It was followed in the
1980s by another boom, thanks to the advent of expert systems, and the Japanese fifth
generation computer initiative, which adopted massively parallel programming. Expert
systems limit themselves to solving narrowly defined problems from single domains of
expertise (for instance, litigation) using vast databases. They avoid the messy
complications of everyday life, and do not tackle the perennial problem of trying to
inculcate common sense. The funding dried up again in the late 1980s because the
Bhumi Publishing, India
difficulty of the tasks being addressed was once again underestimated, and also because
desktop computers overtook mainframes in speed and power, rendering very expensive
legacy machines redundant.
AI has crossed the threshold for the simple reason that it works. AI has provided
effective services that make a real difference in people's lives, while enabling companies
to generate substantial revenue. A central goal of AI is the design of automated systems that can
accomplish a task despite uncertainty. Such systems can be viewed as taking inputs from
the environment and producing outputs toward the realization of some goal. Modern
intelligent-agent approaches combine methodologies, techniques, and
architectures from many areas of computer science, cognitive science, operations research,
and cybernetics. AI planning is an essential function of intelligence and is central to
intelligent-agent applications.
Artificial Intelligence (AI) has witnessed rapid advancements in recent years,
transforming various sectors by enhancing efficiency, automating tasks, and enabling more
intelligent decision-making processes (Mishra et al., 2024a, 2024b, 2024c, 2024d). The
integration of AI into diverse industries such as healthcare, finance, materials science, and
autonomous systems has paved the way for revolutionary applications and innovations.
Below are the key trends in AI and its applications.
Recent Trends in Artificial Intelligence
Generative AI, a subset of machine learning, focuses on creating new content such as
images, text, music, and videos. Powered by advanced neural networks, these systems can
produce human-like creativity. Generative AI is built on advanced machine learning models
known as deep learning models—algorithms that mimic the human brain's learning and
decision-making processes. These models function by recognizing and encoding patterns
and relationships found in vast datasets, which they then utilize to comprehend users'
natural language requests or inquiries and generate relevant new content in response.
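At a toy scale, the pattern-learning idea can be sketched in a few lines. The example below is a deliberately minimal illustration, not how production deep learning models work: it records word-to-word transition patterns from a tiny invented corpus and samples them to generate new text.

```python
import random

def train(corpus):
    """Record word-to-word transition patterns from text -- a crude
    stand-in for what deep generative models learn at vastly larger scale."""
    words = corpus.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Generate new text by sampling the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

corpus = "the model learns patterns and the model generates new text"
model = train(corpus)
print(generate(model, "the"))
```

The same encode-patterns-then-sample loop, scaled to billions of parameters and trained on web-scale corpora, is what gives modern generative models their fluency.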
For the past ten years, AI has remained a prominent topic in technology discussions,
but generative AI—particularly with the introduction of ChatGPT in 2022—has propelled
AI into global headlines and sparked a remarkable wave of innovation and adoption within
the field. Generative AI provides significant productivity advantages for both individuals
and organizations; despite the legitimate challenges and risks it poses, companies are
actively pursuing ways that this technology can enhance their internal operations and add
value to their products and services. Research conducted by the management consulting
firm McKinsey indicates that a third of organizations are already utilizing generative AI on
Artificial Intelligence: Trends and Applications
(ISBN: 978-93-95847-63-6)
a regular basis in at least one aspect of their business.¹ Industry analyst Gartner forecasts
that over 80% of organizations will have implemented generative AI applications or
utilized generative AI application programming interfaces (APIs) by the year 2026.
Generative AI has the ability to produce various forms of content across multiple domains.
Text
Generative models, particularly those utilizing transformers, can craft coherent and
contextually appropriate text—ranging from instructions and documentation to brochures,
emails, website content, blogs, articles, reports, research papers, and even creative writing.
They can also handle repetitive or monotonous writing tasks (for instance, drafting
document summaries or meta descriptions for web pages), allowing writers to focus on
more creative and higher-value endeavors.
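As a small illustration of the document-summary task mentioned above, the sketch below uses a classical frequency-based extractive baseline; real generative models are far more capable, but it shows the kind of repetitive drafting work being automated. The sample text and scoring rule are invented for illustration.

```python
import re
from collections import Counter

def extract_summary(text, n=1):
    """Pick the n highest-scoring sentences by word frequency --
    a classical extractive baseline, far simpler than the
    generative transformer models discussed above."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    return " ".join(scored[:n])

doc = ("Edge AI processes data locally. Local processing cuts latency. "
       "Cats are nice.")
print(extract_summary(doc))
```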
Images and Video
Image generation tools like DALL-E, Midjourney, and Stable Diffusion can produce
realistic images or original artwork, as well as perform style transfer, image-to-image
translation, and other editing or enhancement tasks. New generative AI video applications
can create animations from text prompts and apply special effects to existing video content
more swiftly and cost-effectively than traditional methods.
Sound, Speech, and Music
Generative models can create natural-sounding speech and audio for voice-activated
AI chatbots and digital assistants, audiobook narration, and similar uses. This technology
can also compose original music that resembles the structure and sound of professional
pieces.
Software Code
Generative AI can generate original source code, autocomplete code snippets,
translate code between different programming languages, and summarize the functionality
of code. This enables developers to quickly prototype, refactor, and debug applications
while providing a natural language interface for coding tasks.
Design and Art
Generative AI models can create distinct pieces of art and design or aid in graphic
design. Applications include the dynamic generation of environments, characters, or
avatars, as well as special effects for virtual simulations and video games.
Simulations and Synthetic Data
Generative AI models can be trained to produce synthetic data or synthetic
structures derived from real or synthetic data. For example, generative AI is utilized in drug
discovery.
Edge AI
Edge AI merges Artificial Intelligence and Edge computing. Edge computing
strategically locates computation and data storage close to the source of data requests,
thereby minimizing latency and optimizing bandwidth usage, along with offering various
additional advantages. AI is a broad discipline of computer science focused on constructing
intelligent machines capable of executing tasks that generally necessitate human
intelligence. With Edge AI, machine learning algorithms can operate directly at the Edge,
and data and information processing can take place directly on IoT devices, rather than in a
centralized cloud computing facility or private data center. Machine learning (ML), a subset
of artificial intelligence, is a domain of research dedicated to understanding and developing
methods that allow machines to replicate intelligent human behavior on complex tasks.
Edge computing is rapidly expanding due to its capacity to support
AI and ML and its inherent benefits. Its primary advantages include:
• Reduced latency
• Real-time analytics
• Low bandwidth consumption
• Improved security
• Reduced costs
Edge AI systems leverage these benefits and can execute machine learning algorithms on
existing CPUs or even less advanced microcontrollers (MCUs). In comparison to other
applications that utilize AI chips, Edge AI offers enhanced performance, particularly
concerning latency in data transmission, and reduces security risks within the
network.
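As a rough sketch of how lightweight such on-device inference can be, the example below scores a sensor reading with a tiny linear classifier using hard-coded weights. The weights, threshold, and feature values are all invented for illustration; a real deployment would use a model trained offline and then shipped to the device.

```python
# Minimal on-device inference sketch (illustrative only): a tiny
# pre-trained linear classifier, small enough in principle to run
# on a modest CPU or even a microcontroller. All numbers below are
# hypothetical, standing in for weights learned offline.

WEIGHTS = [0.8, -0.5, 0.3]   # hypothetical, trained in the cloud
BIAS = -0.2
THRESHOLD = 0.0

def classify(features):
    """Score a sensor reading locally; no cloud round-trip needed."""
    score = BIAS
    for w, x in zip(WEIGHTS, features):
        score += w * x
    return "anomaly" if score > THRESHOLD else "normal"

print(classify([1.0, 0.2, 0.1]))  # 0.8 - 0.1 + 0.03 - 0.2 = 0.53 > 0
```

Because the entire model is a handful of multiply-adds, inference latency is microseconds and no raw data ever leaves the device.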
Deploying AI at the edge (or edge AI) signifies a change in paradigm. Unlike
conventional AI models, which are centralized in the cloud, edge AI handles data locally on
devices or edge servers. This decentralized method brings intelligence nearer to the data
origin, diminishing the latency linked with cloud-based solutions to facilitate real-time
decision-making. The incorporation of edge AI into enterprise ecosystems is not simply a
standard technology upgrade; it is a strategic necessity. By processing data at the edge and
enhancing it with AI inference, organizations can attain unparalleled speed, efficiency and
agility. This has a direct influence on business outcomes by improving operational
efficiency, minimizing latency and unlocking new pathways for innovation.
Applications of AI at the Edge
Edge AI is revolutionizing a variety of sectors by facilitating immediate intelligence
and decision-making. Let’s examine some significant applications.
Manufacturing
In manufacturing, machinery downtime can incur high costs. Edge AI mitigates this
by overseeing equipment condition and forecasting possible malfunctions before they
happen. By evaluating data from sensors in real time, AI models can recognize
irregularities and notify maintenance teams to undertake preventive measures. This not
only diminishes downtime but also prolongs the lifespan of equipment. Maintaining
product quality is crucial in manufacturing. AI-enhanced cameras with edge AI capabilities
can examine products for imperfections in real time. These systems interpret visual data to
pinpoint defects such as scratches, dents, or faulty assembly. By automating the inspection
procedures, manufacturers can attain greater precision, consistency, and efficiency,
ultimately improving product quality and consumer satisfaction.
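The predictive-maintenance monitoring described above can be sketched with a simple rolling statistic. This is an illustrative baseline only; the window size, z-score threshold, and sensor readings are assumptions, not a production condition-monitoring system.

```python
from collections import deque
from statistics import mean, stdev

def make_monitor(window=20, z_limit=3.0):
    """Flag sensor readings that deviate sharply from recent history --
    a simple stand-in for the anomaly detection described above."""
    history = deque(maxlen=window)

    def check(reading):
        alert = False
        if len(history) >= 3:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(reading - mu) / sigma > z_limit:
                alert = True     # notify the maintenance team
        history.append(reading)
        return alert

    return check

check = make_monitor()
readings = [10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 35.0]   # spike at the end
alerts = [check(r) for r in readings]
print(alerts)
```

The final reading deviates far beyond three standard deviations of the recent window, so only it is flagged; steady readings pass silently.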
Healthcare
The healthcare sector is gaining substantial advantages from Edge AI. Portable
devices integrated with edge AI can evaluate medical images like X-rays, MRIs, and CT
scans, delivering quicker diagnoses. This functionality is especially beneficial in remote or
underprivileged areas where access to specialized radiologists may be scarce. By
processing images locally, edge AI minimizes the time required for diagnosis, facilitating
timely treatment and enhancing patient outcomes. Wearable gadgets featuring edge AI are
transforming patient care by facilitating continuous observation of health metrics. These
devices gather information like heart rate, blood pressure, and glucose levels, assessing it
in real-time to uncover anomalies. If a serious condition is detected, the device can
promptly alert healthcare professionals. This proactive method of patient monitoring aids
in managing chronic illnesses, identifying health issues early, and decreasing hospital visits.
Retail
Effective inventory management is vital for retail operations. AI-integrated cameras
and sensors can monitor inventory quantities in real time, ensuring that shelves remain
adequately filled. By examining data from these devices, edge AI can optimize stock
replenishment, slash waste, and avert stockouts. This results in enhanced customer
satisfaction and reduced inventory expenses. Comprehending customer behavior is
essential for offering personalized shopping experiences. Edge AI evaluates data from in-
store cameras and sensors to glean insights into customer preferences and actions. Based
on this analysis, it can provide tailored suggestions and promotions to individual shoppers.
Personalization elevates the shopping experience, fosters customer loyalty, and increases
sales.
Smart Cities
Controlling urban traffic is a complicated undertaking that necessitates real-time
data analysis. Edge AI can enhance traffic flow by assessing data from traffic cameras,
sensors, and GPS units. By identifying congestion trends and forecasting traffic situations, it
can modify traffic signals, redirect vehicles, and furnish real-time traffic updates to drivers.
This boosts traffic efficiency, cuts down travel time, and improves road safety. Safeguarding
public safety is a primary concern for smart cities. AI-driven surveillance systems equipped
with edge AI can oversee public areas, recognize irregularities, and detect potential
dangers. These systems scrutinize video feeds in real time, identifying suspicious behaviors
such as unauthorized access or unattended bags. By notifying authorities swiftly, edge AI
bolsters security and allows for a rapid response to incidents.
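The signal-timing adjustment mentioned above can be illustrated with a toy proportional policy. The cycle length, minimum green time, and queue figures are assumptions for illustration; real adaptive traffic control is considerably more sophisticated.

```python
def green_splits(queues, cycle=60, min_green=5):
    """Allocate a fixed signal cycle across approaches in proportion
    to detected queue lengths (a toy control policy, not a real one)."""
    total = sum(queues.values())
    if total == 0:
        share = cycle / len(queues)
        return {k: share for k in queues}
    # guarantee every approach a minimum green, split the rest by demand
    spare = cycle - min_green * len(queues)
    return {k: min_green + spare * q / total for k, q in queues.items()}

splits = green_splits({"north": 12, "east": 4})
print(splits)   # the congested approach gets the larger share
```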
Advantages of AI at the Edge
Edge computing shifts AI processing tasks from the cloud to devices situated closer
to the end-users. This resolves the inherent issues associated with traditional cloud
systems, such as significant latency and inadequate security. Therefore, relocating AI
computations to the network edge creates possibilities for innovative products and
services featuring AI-driven applications.
Lower Data Transfer Volume
One of the primary advantages of edge AI is that the device transmits a considerably
reduced volume of processed data to the cloud. By decreasing traffic between a small cell
and the core network, we can enhance connection bandwidth to avert bottlenecks.
Consequently, this lowers the traffic amount within the core network.
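The reduction in transmitted data can be made concrete with a small sketch: aggregating a window of raw sensor samples into a compact summary before sending it upstream. The JSON payload format and window size are illustrative assumptions.

```python
import json

def summarize(samples):
    """Reduce a window of raw sensor samples to a compact summary
    before transmission -- the edge-side preprocessing described above."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": sum(samples) / len(samples),
    }

raw = [20.0 + 0.01 * i for i in range(1000)]         # one raw window
raw_bytes = len(json.dumps(raw).encode())
summary_bytes = len(json.dumps(summarize(raw)).encode())
print(raw_bytes, summary_bytes)                      # summary is far smaller
```

Sending the summary instead of the raw window cuts the upstream payload by orders of magnitude, which is exactly the bandwidth saving the text describes.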
Speed for Real-time Computing
Real-time processing is a key benefit of Edge Computing. The physical closeness of
edge devices to data sources enables the achievement of reduced latency. As a result, this
enhances the performance of real-time data processing. It facilitates delay-sensitive
applications and services such as remote surgery, tactile internet, unmanned vehicles, and
vehicle accident prevention. Edge servers furnish decision support, decision-making, and
data analysis in a timely fashion.
Privacy and Security
While transmitting sensitive user data across networks poses increased
vulnerability, executing AI at the edge ensures data remains confidential. Edge computing
enables the assurance that private data never exits the local device (on-device machine
learning). When it is necessary to process data remotely, edge devices can eliminate
personally identifiable information prior to data transfer. This bolsters user privacy and
security. Computer vision in public spaces necessitates privacy-preserving
place complete faith in them. Explainable AI can assist individuals in grasping and
clarifying machine learning (ML) algorithms, deep learning, and neural network models,
which are frequently regarded as black boxes that are impossible to decipher.² Neural
networks utilized in deep learning rank among the most challenging for a person to
comprehend. Bias, which is often rooted in race, gender, age, or location, has long been a
significant concern in training AI models. Moreover, AI model performance can shift or
deteriorate because production data can vary from training data. This makes it imperative
for a business to continuously monitor and manage models to enhance AI explainability
while assessing the business effects of employing such algorithms. Explainable AI also
assists in fostering end user confidence, model auditability, and effective utilization of AI. It
additionally alleviates compliance, legal, security, and reputational hazards of operational
AI. Explainable AI stands as one of the fundamental requirements for executing responsible
AI, a methodology for the extensive implementation of AI methods within real
organizations, emphasizing fairness, model explainability, and accountability. ³ To facilitate
the responsible adoption of AI, organizations must incorporate ethical values into AI
applications and procedures by developing AI systems founded on trust and transparency.
Regulatory Compliance
The rapid advancement of artificial intelligence systems has prompted
regulatory bodies around the world to establish strict transparency requirements,
with the EU AI Act emerging as a landmark framework for ensuring AI accountability.
This legislation mandates that high-risk AI systems provide clear explanations for their
decision-making processes, marking a crucial shift toward responsible AI development. Under
the EU AI Act's requirements, organizations deploying AI systems must implement
robust transparency mechanisms. These requirements are particularly strict for high-
risk AI applications, which must undergo thorough conformity assessments and provide
detailed documentation of their inner workings. Non-compliance can result in considerable
penalties – up to €30 million or 6% of worldwide annual revenue – underscoring the
critical importance of explainability in AI systems.
Explainable AI (XAI) technologies have become essential tools for meeting these
regulatory demands. Rather than operating as black boxes, AI systems must now
provide clear rationales for their outputs, enabling stakeholders to understand how
decisions are reached. This transparency is crucial not only for regulatory
compliance but also for building trust with users, who are increasingly
concerned about algorithmic bias and fairness. Real-world implementation
helps identify potential biases before they affect
operations. This proactive approach not only fulfills regulatory requirements but
also enhances the overall quality and reliability of AI systems.
Ethical Concerns
Reasonable AI (XAI) has ended up a pivotal viewpoint of moral contemplations in AI
frameworks, especially as the complexity of calculations and machine learning models
increments. XAI points to form AI frameworks more straightforward by giving human-
understandable experiences into how choices are made. Typically particularly vital in
settings where AI is utilized to form choices that essentially affect people and society, such
as in healthcare, fund, law requirement, and independent frameworks. The significance of
XAI in morals lies in its potential to address concerns around responsibility, decency, and
believe.
One key moral issue that XAI addresses is the “black box” nature of numerous AI
models, particularly those based on profound learning. These models regularly create
profoundly precise comes about but are troublesome for indeed specialists to translate.
This darkness can lead to a need of responsibility, making it challenging to decide the
thinking behind choices when something goes off-base. In areas like independent driving or
therapeutic determination, where lives may be at stake, the failure to clarify why a
framework made a certain choice may lead to moral problems. XAI gives components to
break down these decision-making forms, empowering partners to get it, believe, and
intercede when essential. Another ethical challenge addressed by XAI is the risk of bias in
AI systems. AI models are trained on large datasets, and if these datasets are skewed or
biased, the AI's decisions can reflect and even exacerbate societal inequalities. XAI can help
identify when and where such biases occur, providing explanations that enable developers
and users to rectify these biases and ensure more equitable outcomes. For example, in
credit scoring or hiring processes, XAI can help ensure that decisions are made based on
fair and transparent criteria rather than on flawed or biased algorithms. By promoting
transparency, accountability, and fairness, XAI strengthens the ethical foundation of AI
systems, encouraging their responsible use while fostering public trust (Arrieta et al.,
2020).
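The kind of bias check that XAI tooling supports can be sketched as a simple group-rate comparison. The decisions below are synthetic, and reading ratios well below 1.0 as a warning sign follows the common "four-fifths rule" heuristic, used here only as an illustrative interpretation.

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates, reference):
    """Ratio of each group's approval rate to the reference group's --
    values well below 1.0 flag decisions worth explaining and auditing."""
    return {g: r / rates[reference] for g, r in rates.items()}

# synthetic credit decisions: group A approved 8/10, group B only 4/10
decisions = ([("A", True)] * 8 + [("A", False)] * 2
             + [("B", True)] * 4 + [("B", False)] * 6)
rates = approval_rates(decisions)
print(disparate_impact(rates, reference="A"))
```

A check like this does not explain *why* the model disadvantages a group; that is where explanation methods such as those discussed below come in.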
Accountability in the realm of decision-making is an important concern regarding
artificial intelligence (AI) and autonomous systems. As AI technologies become increasingly
widespread, the decisions made by these systems have profound effects on individuals,
organizations, and society as a whole. The capacity to hold both the systems and their
operators accountable is vital for ensuring that decisions made by AI are fair, transparent,
and ethical (Binns, 2018). In conventional systems, accountability is generally clear-cut,
with a direct chain of responsibility connecting human agents to the results of their
decisions. Conversely, in the case of AI, especially within autonomous systems, the
decision-making process can often be less transparent. Autonomous systems like self-
driving cars or automated trading platforms typically make choices based on intricate
algorithms and extensive datasets, which may not be readily comprehensible, even to their
operators. This "black box" characteristic of AI creates difficulties in assigning
accountability when issues occur, such as accidents involving autonomous vehicles or
incorrect financial trades (Lipton, 2018).
Explainable AI (XAI) is essential in tackling these accountability issues. By clarifying
the decision-making processes of AI, XAI helps stakeholders comprehend how specific
outcomes are determined. This clarity permits increased scrutiny, making it simpler to
ascertain who or what is liable for a given decision. For example, if an autonomous vehicle
is involved in an incident, XAI could clarify whether an error originated from the AI system,
whether the data it was trained upon contained biases, or whether there was insufficient
human oversight. In this manner, XAI allows for tracing decisions back to their origins,
which aids in accountability (Doshi-Velez & Kim, 2017). Moreover, the concept of
accountability in decision-making is closely related to ethical issues such as fairness, bias,
and discrimination. AI systems have the potential to unintentionally reinforce or
exacerbate existing biases in their training datasets. In the absence of transparency, it
becomes challenging to hold AI developers, operators, or users accountable for biased or
unjust decisions. XAI can aid in detecting and rectifying these biases, ensuring that the
decision-making processes are more fair and equitable. For instance, in the financial sector,
XAI can help confirm that credit-scoring algorithms do not discriminate against particular
demographic groups by shedding light on how decisions are arrived at and what factors are
influential (Baracas et al., 2019).
The significance of regulatory frameworks is also vital for ensuring accountability.
In industry sectors such as finance, healthcare, and transportation, regulatory agencies are
progressively highlighting the necessity of transparency and explainability in AI systems.
Regulations like the European Union’s General Data Protection Regulation (GDPR) and the
proposed AI Act incorporate clauses that compel organizations to clarify their AI-driven
decisions, especially when those decisions greatly impact individuals. These regulations
aim to hold organizations accountable for their AI systems, thus building trust and
minimizing
potential harm (European Commission, 2020). In summary, the issue of accountability in AI
decision-making is intricate yet crucial. By enhancing transparency through XAI and
following regulatory frameworks, organizations can ensure that decisions made by AI are
not only efficient and effective but also ethical and just. Mechanisms for accountability,
which include human oversight, comprehensive documentation, and explainability, are
essential for fostering trust in AI systems and mitigating the risks that come with their
increased integration.
Transparency is vital for establishing trust in AI systems, especially in industries
where decisions significantly affect people's lives, like finance and healthcare. When
organizations utilize Explainable AI (XAI) methods, they can clarify the reasoning behind
algorithm-driven outcomes, allowing stakeholders to grasp how results are produced
(Lipton, 2018). This transparency is crucial for building confidence, as users tend to trust
systems that offer insight into their workings. Moreover, transparency helps alleviate
concerns regarding bias and unfair treatment. By transparently sharing their data sources,
algorithmic methods, and decision-making criteria, organizations can illustrate their
dedication to ethical practices and accountability. For example, if a financial institution
reveals the workings of its credit scoring algorithms, it empowers consumers to
understand the elements that affect their scores, thus fostering fairness and alleviating
fears about automated judgments (Kauffman & Hsu, 2019). Additionally, regulatory
agencies are increasingly requiring transparency to ensure compliance and safeguard
consumer rights. This obligation not only adheres to ethical benchmarks but also boosts
the credibility of AI systems, ultimately fostering a more trusting connection between
organizations and their stakeholders.
User Trust
Artificial intelligence (AI) has gained increasing momentum across various fields as a
means of managing growing complexity, scale, and automation, including in today's digital
networks. The complexity and sophistication of AI-driven systems have increased to the
point that humans often cannot comprehend the intricate mechanisms by which these
systems operate or how they arrive at particular decisions; this is particularly problematic
when AI-based systems generate outputs that are surprising or seemingly erratic. It is
especially true of opaque decision-making systems, such as those utilizing deep neural
networks (DNNs), which are regarded as intricate black box models. The inability of humans
to peer inside these black boxes can impede AI adoption (and even its continued
advancement), which is why
escalating levels of autonomy, complexity, and ambiguity in AI methods intensify the
demand for interpretability, transparency, understandability, and explainability of AI
products/outputs (like predictions, decisions, actions, and recommendations). These
aspects are vital to ensuring that humans can grasp and — as a result — trust AI-driven
systems (Mujumdar et al., 2020). Explainable artificial intelligence (XAI) pertains to
methods and techniques that produce precise, interpretable models of why and how an AI
algorithm reaches a particular decision so that the outcomes from AI solutions can be
comprehended by humans (Barredo Arrieta et al., 2020).
In the absence of explanations regarding an AI model’s internal workings and the
decisions it renders, there exists a danger that the model will not be viewed as reliable or
legitimate. XAI provides the necessary clarity and transparency to foster greater confidence
in AI-based solutions. Therefore, XAI is recognized as an essential attribute for the effective
implementation of AI models in systems and, more importantly, for fulfilling the basic
rights of AI users concerning AI decision-making (according to European Commission
ethical guidelines for trustworthy AI). Standardization organizations like the European
Telecommunications Standards Institute (ETSI) and the Institute of Electrical and
Electronics Engineers Standards Association (IEEE SA) also highlight the significance of XAI
where AI models are utilized, demonstrating XAI’s increasing relevance for the future
(Frost et al., 2020). AI deployers and developers must adhere to these ethical guidelines
and regulations to ensure that their AI solutions are both explainable and trustworthy
(Anneroth, 2019).
Building trust is crucial if users are to embrace AI-driven solutions and the
systems that incorporate decisions made by them. There are, nonetheless, considerable
obstacles in creating explainability methods. One such obstacle is the trade-off between
achieving algorithmic transparency and preserving the high performance of intricate yet
opaque models (and when transparency is enhanced, privacy and the protection of sensitive
information can be called into question). Another hurdle is determining the appropriate
information for the user, where varying levels of understanding will become relevant.
Apart from choosing the level of comprehension retained by the user, producing a brief
(simple yet meaningful) explanation also presents a challenge. Most explainability
techniques emphasize clarifying the mechanisms behind an AI decision, which can
sometimes disregard the specific context of its use, leading to unrealistic explanations.
Researchers are working to incorporate knowledge-based systems so that the explanation
aligns with the context of its application.
XAI aids in fostering trust through the following attributes:
• Trustworthiness, to gain human trust in the AI model by elucidating the features and
rationale behind the AI output
• Transferability, wherein the explanation of an AI model enhances comprehension so
that it can be appropriately applied to another problem or domain/application
• Informativeness, pertaining to educating a user on how an AI model operates to
prevent misconceptions (this is also connected to human agency and autonomy,
ensuring that humans grasp AI results and can take action based on that
understanding)
• Confidence, which is realized through possessing a model that is robust, stable, and
explainable to bolster human assurance in employing an AI model
• Privacy awareness, ensuring that the AI and XAI techniques do not reveal private
information (which can be achieved through data anonymization)
• Actionability, with XAI offering guidance on how a user might modify an action to
achieve a different result while also providing the rationale for an outcome
• Tailored (user-focused) explanations, enabling humans — as users of AI systems from
varied knowledge backgrounds — to comprehend the behavior and forecasts made
by AI-based systems through customized explanations aligned with their roles,
objectives, and preferences.
Tools
SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic
Explanations) are widely used for interpretability. SHAP attributes a prediction to
individual input features using Shapley values from cooperative game theory, while LIME
fits a simple local surrogate model around a single prediction.
Artificial Intelligence: Trends and Applications
(ISBN: 978-93-95847-63-6)
Bhumi Publishing, India
LIME trains a simple, interpretable "glass box" model to approximate the decision-making process of any black box model's predictions.
LIME explicitly tries to model the local neighborhood of any prediction – by focusing on a
narrow enough decision surface, even simple linear models can provide good
approximations of black box model behavior. Users can then inspect the glass box model to
understand how the black box model behaves in that region. LIME works by perturbing any
individual data point and generating synthetic data which gets evaluated by the black box
system, and ultimately used as a training set for the glass box model. LIME’s advantages are
that you can interpret an explanation the same way you reason about a linear model, and
that it can be used on almost any model. On the other hand, explanations are occasionally
unstable and highly dependent on the perturbation process.
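The perturb-and-fit loop described above can be sketched in a few lines. This is a minimal, self-contained illustration, not the actual LIME library: the black-box function, the Gaussian perturbation, and the proximity kernel below are all illustrative assumptions.

```python
import math
import random

# Hypothetical black-box classifier over two numeric features.
def black_box(x1, x2):
    return 1.0 if math.sin(x1) + 0.5 * x2 > 0.5 else 0.0

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lime_sketch(point, n_samples=500, sigma=0.5, seed=0):
    """Fit a proximity-weighted linear 'glass box' model around one prediction."""
    rng = random.Random(seed)
    X, y, w = [], [], []
    for _ in range(n_samples):
        p = [v + rng.gauss(0, sigma) for v in point]   # perturb the data point
        d2 = sum((a - c) ** 2 for a, c in zip(p, point))
        w.append(math.exp(-d2 / sigma ** 2))           # proximity kernel weight
        X.append([1.0] + p)                            # bias term + features
        y.append(black_box(*p))                        # black-box labels
    k = len(point) + 1
    # Weighted least squares via the normal equations: (X'WX) beta = X'Wy.
    XtWX = [[sum(w[s] * X[s][i] * X[s][j] for s in range(n_samples))
             for j in range(k)] for i in range(k)]
    XtWy = [sum(w[s] * X[s][i] * y[s] for s in range(n_samples)) for i in range(k)]
    return solve(XtWX, XtWy)  # [intercept, coef_x1, coef_x2]
```

Inspecting the returned coefficients shows how each feature pushes the black-box decision in the neighborhood of the chosen point, exactly as one would read a linear model.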
Ethical AI
The need for fairness, accountability, and transparency has led to a focus on ethical
AI. Organizations and governments are establishing guidelines to address AI’s potential
biases and societal impact. Ethical AI refers to the development and deployment of artificial
intelligence systems that prioritize fairness, accountability, transparency, and the well-
being of all stakeholders. It encompasses considerations that ensure AI technologies align
with societal values, respect human rights, and avoid harm. The ethical dimensions of AI
are vast, ranging from data privacy and algorithmic bias to accountability in decision-
making systems and environmental sustainability.
Key Principles of Ethical AI
Fairness and Avoidance of Bias
AI systems can inadvertently perpetuate or exacerbate societal biases present in
their training data. Ensuring fairness involves:
• Identifying and mitigating biases in data.
• Implementing fairness-aware machine learning techniques.
• Regular audits to prevent discrimination based on race, gender, religion, or other
sensitive attributes.
For example, hiring algorithms have faced scrutiny for penalizing candidates based on
gender or educational background due to biased historical data.
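One concrete audit the points above describe is measuring the gap in positive-prediction rates across groups (demographic parity). This is a minimal sketch of one metric only; real fairness audits combine several measures, and the example data is illustrative.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Group "a" gets 1 of 2 positive outcomes, group "b" gets 2 of 2: gap = 0.5.
gap = demographic_parity_gap([1, 0, 1, 1], ["a", "a", "b", "b"])
```

A regular audit would track such a gap over time and trigger review when it exceeds an agreed threshold.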
Transparency and Explainability
Transparency ensures stakeholders understand how AI systems work, including the
data and models used. Explainability is crucial in sensitive applications like healthcare or
law enforcement, where decisions must be interpretable.
• Open documentation of algorithms and processes.
Federated Learning
Federated learning (FL) trains a shared model across decentralized devices or servers
while keeping raw data local; only model updates are exchanged. Its advantages include:
• Efficiency: Reduces the need for centralized data collection and storage.
• Scalability: Supports large-scale deployment across distributed systems.
• Adaptability: Enables continuous model improvement based on diverse, real-world
data.
Federated learning holds promise for applications in healthcare, finance, and IoT,
where data privacy and security are paramount.
Types of Federated Learning
1. Horizontal Federated Learning
Used when datasets across different clients have the same feature space but
different data samples. Example: Smartphones collecting user typing patterns to improve a
keyboard prediction model.
2. Vertical Federated Learning
Applied when datasets share overlapping users but have different feature sets.
Example: A bank and an e-commerce platform collaborating to predict loan defaults
without sharing raw customer data.
3. Federated Transfer Learning
Useful when clients have different feature spaces and only partially overlapping
data samples.
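All three settings above rely on some form of server-side aggregation of client updates. The sketch below shows the size-weighted averaging at the heart of FedAvg-style training; client models are represented as plain lists of parameters for illustration, whereas a real system would use tensors and secure aggregation.

```python
def federated_average(client_weights, client_sizes):
    """Size-weighted average of client parameter vectors (FedAvg-style).

    client_weights: one parameter list per client (same length each).
    client_sizes: number of local training samples per client.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients with equal data: the global model is the plain mean.
global_model = federated_average([[1.0, 2.0], [3.0, 4.0]], [1, 1])
```

Weighting by local dataset size keeps clients with more data from being drowned out by many small ones, which matters under the non-IID conditions discussed below.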
Benefits of Federated Learning
1. Enhanced Data Privacy
• Data remains localized, reducing the risk of exposure during transit or centralized
storage.
2. Regulatory Compliance
• FL aligns with data protection laws like GDPR and HIPAA, which restrict cross-border
data sharing and centralized storage.
3. Reduced Communication Costs
• Only model updates are transmitted, which typically have a smaller data footprint
than raw datasets.
4. Scalable and Decentralized
• FL can scale to millions of devices, making it suitable for IoT and mobile
applications.
Challenges in Federated Learning
1. Heterogeneous Data and Systems
- Non-IID Data: Local datasets may not represent the overall data distribution, causing
training inefficiencies.
Healthcare
- Medical Imaging: AI detects abnormalities in X-rays, MRIs, and CT scans with higher
accuracy.
- Drug Discovery: Algorithms predict drug efficacy and identify molecular targets.
- Predictive Analytics: AI forecasts disease progression and personalizes treatments.
Finance
The financial sector leverages AI for fraud detection, personalized banking, and
automated trading.
- Algorithmic Trading: AI analyzes market trends for high-frequency trading.
- Fraud Detection: Machine learning models identify suspicious activities.
- Customer Support: AI chatbots offer round-the-clock assistance.
Transportation
AI powers autonomous systems, optimizing logistics and enhancing safety in
transportation.
- Autonomous Vehicles: AI algorithms enable self-driving cars to navigate complex
environments.
- Traffic Management: AI systems predict congestion and optimize traffic flows.
- Fleet Optimization: Predictive maintenance minimizes downtime.
Education
AI has revolutionized education by personalizing learning experiences and enhancing
accessibility.
- Adaptive Learning Platforms: Systems adjust course content based on individual
performance.
- Language Processing Tools: AI assists in translation and transcription, breaking
language barriers.
- Virtual Assistants: AI tutors provide real-time help for learners.
Manufacturing
AI has become a cornerstone of Industry 4.0, driving automation and operational
efficiency.
- Predictive Maintenance: AI monitors equipment health and predicts failures.
- Quality Assurance: Computer vision systems detect product defects in real-time.
- Supply Chain Optimization: AI streamlines inventory management and logistics.
Entertainment
AI has transformed content creation and user engagement in the entertainment
industry.
unless it is used by research institutions, students, research scholars, and authors in their
future works. The authors will remain ever grateful to Dr. H. S. Ginwal, Director, ICFRE-
Tropical Forest Research Institute, Jabalpur; the Principal, Jabalpur Engineering College,
Jabalpur; and the Principal, Government Science College, Jabalpur, who helped by giving
constructive suggestions for this work. The authors remain responsible for any errors and
shortcomings in the book, despite their best attempt to make it immaculate.
References:
1. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
2. Vaswani, A., et al. (2017). Attention is all you need. Advances in Neural Information
Processing Systems, 5998–6008.
3. Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach. Pearson.
4. Silver, D., et al. (2016). Mastering the game of Go with deep neural networks and tree
search. Nature, 529(7587), 484–489.
5. EU AI Act (2021). Regulatory framework for trustworthy AI. European Commission.
6. Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., et al. (2020). Explainable Artificial
Intelligence (XAI): Concepts, Taxonomies, Opportunities, and Challenges toward
Responsible AI. Information Fusion, 58, 82-115.
7. Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning.
Fairness, Accountability, and Transparency in Machine Learning, 1(1), 1-28.
8. Binns, R. (2018). Fairness in Machine Learning: Lessons from Political Philosophy. In
Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency
(pp. 149-158).
9. Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable
machine learning. Proceedings of the 34th International Conference on Machine
Learning, 70, 1-16.
10. European Commission. (2018). General Data Protection Regulation (GDPR). Retrieved
from EU Data Protection Rules.
11. European Commission. (2020). White Paper on Artificial Intelligence: A European
Approach to Excellence and Trust. Retrieved from EU White Paper.
12. European Commission. (2021). Proposal for a Regulation on Artificial Intelligence.
Retrieved from EU Proposal.
13. Federal Trade Commission (FTC). (2020). Face Facts: A Consumer Guide to Facial
Recognition Technologies. Retrieved from FTC Facial Recognition.
14. Gonzalez, C., Gamberini, L., & Zambianchi, R. (2020). Human Factors and the
Automation of Complex Systems: A Review of Current Research and Future
Directions. Human Factors and Ergonomics in Manufacturing & Service Industries,
30(5), 660-670.
15. Hwang, J., Gahm, J., & Lee, C. (2020). The Impact of Explainable AI on Surgical
Robotics. Journal of Robotic Surgery, 14(2), 195-201. International Journal of Science
and Research Archive, 2024, 13(01), 1807–1819.
16. Kumar, R., Bansal, A., & Mohan, A. (2020). Drones in Disaster Management: A
Systematic Review. International Journal of Disaster Risk Reduction, 50, 101746.
17. Lepri, B., Oliver, N., & Pentland, A. (2018). Fair, Transparent, and Accountable
Algorithmic Decision-Making Processes. Philosophical Transactions of the Royal
Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20170308.
18. Lipton, Z. C. (2016). The Mythos of Model Interpretability. Communications of the
ACM, 59(10), 36-43.
19. Lipton, Z. C. (2018). The Mythos of Model Interpretability. Communications of the
ACM, 61(10), 36-43.
20. Miller, T. (2019). Explanation in Artificial Intelligence: Insights from the Social
Sciences. Artificial Intelligence, 267, 1-38.
21. National Institute of Standards and Technology (NIST). (2020). A Proposal for
Identifying and Managing Bias in Artificial Intelligence. Retrieved from NIST Proposal.
22. O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality
and Threatens Democracy. Crown Publishing Group.
23. Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting Racial Bias
in an Algorithm Used to Manage the Health of Populations. Science, 366(6464), 447-
453.
24. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?" Explaining
the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD
International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144).
25. Lundberg, S. M., & Lee, S. I. (2017). A Unified Approach to Interpreting Model
Predictions. In Advances in Neural Information Processing Systems (pp. 4765-4774).
26. Zeng, Z., & Zhang, Y. (2020). Explainable Artificial Intelligence for Credit Scoring: A
Case Study. Journal of Financial Innovation, 6(1), 1-16.
27. Falkner, J., Löchner, J., & Mattfeld, D. C. (2021). Ethics in Drone Applications for
Humanitarian Logistics. Journal of Humanitarian Logistics and Supply Chain
Management, 11(2), 125-142.
28. Sharma, P., Thakur, R., & Mangal, A. (2021). Role of Explainable AI in Robotic
Rehabilitation Systems. Healthcare Technology Letters, 8(3), 73-80.
29. Gilpin, L. H., Bau, D., Yuan, B., & Bajcsy, R. (2018). Explaining Explanations: An
Overview of Interpretability of Machine Learning. 2018 IEEE 5th International
Conference on Data Science and Advanced Analytics (DSAA), 80-89.
30. Chukwunweike, J., Anang, A. N., Adeniran, A. A., & Dike, J. (2024). Enhancing
manufacturing efficiency and quality through automation and deep learning: addressing
redundancy, defects, vibration analysis, and material strength optimization. World
Journal of Advanced Research and Reviews, 23. GSC Online Press. Available from: [Link]
31. Chukwunweike, J. N., Yussuf, M., Okusi, O., Bakare, T. O., & Abisola, A. J. (2024).
The role of deep learning in ensuring privacy integrity and security: Applications in
AI-driven cybersecurity solutions. [Link]
32. Altmann, A., Toloşi, L., Sander, O., & Lengauer, T. (2010). Permutation importance: a
corrected feature importance measure. Bioinformatics, 10, 1340-1347.
33. Anneroth (2019). Responsible AI – a human right?
34. Barredo Arrieta, A., Diaz Rodriguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado
González, A., Garcia, S., Gil-López, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F.
(2020). Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities
and Challenges toward Responsible AI. Information Fusion, 58, 82-115.
35. Chakraborti, T., Sreedharan, S., & Kambhampati, S. (2020). The Emerging Landscape
of Explainable Automated Planning & Decision Making. Proceedings of the Twenty-
Ninth International Joint Conference on Artificial Intelligence, (IJCAI-20), 4803-4811.
36. Chen, T., & Guestrin, C. (2016). XGBoost: A Scalable Tree Boosting System.
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining. San Francisco, California, USA. Association for Computing
Machinery. pp. 785- 794.
37. Cyras, K., Badrinath, R., Mohalik, S. K., Mujumdar, A., Nikou, A., Previti, A., Sundararajan,
V., & Vulgarakis, A. (2020). Machine Reasoning Explainability. arXiv. [Preprint]
Related tutorial: Machine Reasoning Explainability
38. European Commission. (2020). Assessment List for Trustworthy Artificial Intelligence
(ALTAI) for self-assessment.
39. Frost, L., Meriem, T. B., Bonifacio, J. M., Cadzow, S., Silva, F. d., Essa, M., Forbes, R.,
Marchese, Odini, M., Sprecher, N., Toche, C., & Wood, S. (2020). Artificial Intelligence
and future directions for ETSI. ETSI White Paper No. 34, 1 (ISBN 979-10-92620-30-
1).
40. Mishra, R.K. and Agarwal, R. (2024a), Artificial Intelligence in Material Science:
Revolutionizing Construction, In: Research and Reviews in Material Science Volume I,
ISBN: 978-93-95847-83-4, 69- 99.
41. Mishra, R.K. and Agarwal, R. (2024b), Impact of digital evolution on various facets of
computer science and information technology, In: Digital Evolution: Advances in
Computer Science and Information Technology, First Edition: June 2024, ISBN: 978-
93-95847-84-1.
42. Mishra, R.K., Mishra, Divyansh and Agarwal, R. (2024c), An Artificial Intelligence-
Powered Approach to Material Design, In: Cutting-Edge Research in Chemical and
Material Science Volume I, First Edition: August 2024, ISBN: 978-93-95847-39-1.
43. Mishra, R.K., Mishra, Divyansh and Agarwal, R. (2024d), Artificial Intelligence and
Machine Learning in Research, In: Innovative Approaches in Science and Technology
Research Volume I, First Edition: October 2024, ISBN: 978-81-979987-5-1.
44. Korobov, M., & Lopuhin, K. (2019). ELI5 Documentation, Release 0.9.0.
45. Looveren, A. V., & Klaise, J. (2019). Interpretable Counterfactual Explanations Guided
by Prototypes. Arxiv. [Preprint]
46. Lundberg, S., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model
Predictions. Advances in Neural Information Processing Systems 30 (NIPS 2017).
Curran Associates, Inc. pp. 4765-4774
47. Mujumdar, A., Cyras K., Singh S., & Vulgarakis A. (2020). Trustworthy AI:
explainability, safety and verifiability
48. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?": Explaining
the Predictions of Any Classifier. 22nd ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining. San Francisco, California, USA. Association for
Computing Machinery.
49. Terra, A., Inam, R., Baskaran, S., Batista, P., Burdick, I., & Fersman, E. (2020).
Explainability Methods for Identifying Root-Cause of SLA Violation Prediction in 5G
Edge AI advances healthcare by enabling portable devices to process medical images locally, offering quicker diagnoses, especially in underserved areas. Wearable devices utilize edge AI to continuously monitor and analyze health metrics, alerting healthcare professionals to anomalies, which improves chronic illness management and patient care.
Edge AI differs from traditional cloud AI by performing data processing locally on edge devices, reducing latency compared to centralized cloud processing. This proximity to data sources minimizes the time taken for data to travel, allowing real-time decision-making and reducing dependency on cloud infrastructure.
Edge AI systems complement cloud-based AI by handling latency-sensitive tasks locally while offloading non-time-critical computations to the cloud. This balance facilitates efficient resource utilization, reduces response times, and enhances data processing capabilities. By combining the strengths of both approaches, organizations can achieve scalable and flexible AI solutions.
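The edge/cloud division of labor can be made concrete with a toy routing policy. The parameter names and the 100 ms latency threshold below are purely illustrative assumptions, not a standard.

```python
def route_task(latency_budget_ms, contains_pii, compute_heavy):
    """Decide where a task runs under a hypothetical edge/cloud policy.

    Latency-sensitive or privacy-sensitive work stays on the edge device;
    heavy, non-urgent computation is offloaded to the cloud.
    """
    if contains_pii or latency_budget_ms <= 100:
        return "edge"
    if compute_heavy:
        return "cloud"
    return "edge"

# A real-time vision task with a 50 ms budget must stay local.
placement = route_task(50, False, True)
```

In practice such a policy would also weigh battery level, network quality, and current device load, but the two-way split captures the core trade-off described above.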
Explainable AI (XAI) seeks to make AI systems more interpretable and transparent by clarifying decision-making processes. It is essential for accountability as it allows stakeholders to trace back decisions to their origins, enabling scrutiny and understanding of outcomes. XAI helps identify biases and errors in AI models, thus facilitating fair and ethical decision-making.
Edge AI enhances security and privacy by ensuring that sensitive data is processed locally on devices, thus minimizing the transmission of personal data over networks. This approach reduces vulnerabilities associated with data transfer and allows data to remain confidential. Additionally, personally identifiable information can be stripped from data before any necessary remote processing, further bolstering privacy.
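The idea of stripping personally identifiable information before remote processing can be sketched as a simple field filter. The field names below are hypothetical examples, not a defined schema.

```python
def strip_pii(record, pii_fields=("name", "email", "device_id")):
    """Return a copy of a sensor record with identifying fields removed
    before any off-device transmission (field names are illustrative)."""
    return {k: v for k, v in record.items() if k not in pii_fields}

# Only the health metric leaves the device; the identifier stays local.
safe = strip_pii({"name": "a", "heart_rate": 72})
```

Real deployments layer further protections (pseudonymization, differential privacy, encryption in transit) on top of such filtering.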
Edge AI contributes to operational efficiency in manufacturing by enabling real-time monitoring of equipment and predicting potential malfunctions, thereby reducing downtime and extending equipment lifespan. It also automates inspection processes using AI-enhanced cameras, improving product quality and consistency by identifying defects in real time.
Black-box AI models, such as those utilizing deep neural networks, pose challenges as their decision-making processes are often opaque, leading to trust and accountability issues. XAI addresses these challenges by shedding light on how these models arrive at decisions, making it possible for stakeholders to understand, audit, and improve AI systems, thereby enhancing trust and reducing biases.
Regulatory oversight is crucial in integrating XAI to improve trust in AI technologies by mandating transparency and explainability in AI systems. Regulations like the GDPR require organizations to clarify AI-driven decisions affecting individuals, ensuring accountability and fairness, and enhancing trust in AI systems by making them more interpretable.
Edge AI enhances real-time decision-making in smart cities by processing data from traffic cameras and sensors to improve traffic flow and safety. It can modify traffic signals and provide real-time updates to drivers based on congestion trends. Additionally, AI-driven surveillance systems utilize edge AI to identify safety concerns and alert authorities quickly, thus enhancing public safety.
Edge AI reduces data transfer volumes by processing and analyzing data locally, transmitting only necessary information to the cloud. This decreases overall network traffic and alleviates bottlenecks, enhancing connection bandwidth and allowing for more efficient use of network resources.