Ai Presentation by Izhar Ali and Sahil

The document discusses the challenges and potential solutions for AI-powered real-time embedded systems, particularly in safety-critical applications. It highlights issues related to reliability, safety, and security, emphasizing the need for trustworthy AI methodologies and hardware acceleration. The paper outlines major problems, possible solutions, and future research directions to improve the performance and predictability of these complex systems.


Overall Goal

Can we trust AI-powered real-time embedded systems, i.e., embedded systems that use AI to operate in real time? Specifically, the paper is concerned with their reliability, safety, and security.

Key terms:
- Embedded systems: specialized computing devices that perform a dedicated or specific task within a larger system (e.g., smartphones, sensors, microcontrollers).
- Real time: tasks must be performed within a specific time or deadline.
- Artificial intelligence: systems that perform tasks that typically require human intelligence.

Can We Trust AI-Powered Real-Time Embedded Systems?
Giorgio Buttazzo
Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Pisa, Italy

Abstract
The excellent performance of deep neural networks and machine learning algorithms is pushing the industry to adopt such a technology in several application domains, including safety-critical ones, as self-driving vehicles, autonomous robots, and diagnosis support systems for medical applications. However, most of the AI methodologies available today have not been designed to work in safety-critical environments and several issues need to be solved, at different architecture levels, to make them trustworthy. This paper presents some of the major problems existing today in AI-powered embedded systems, highlighting possible solutions and research directions to support them, increasing their security, safety, and time predictability.

2012 ACM Subject Classification Computer systems organization
Keywords and phrases Real-Time Systems, Heterogeneous architectures, Trustworthy AI, Hypervisors, Deep learning, Adversarial attacks, FPGA acceleration, Mixed criticality systems
Digital Object Identifier 10.4230/OASIcs.NG-RES.2022.1
Category Invited Paper

Note: Artificial intelligence (AI), especially through advanced techniques like deep neural networks and machine learning, has shown amazing results. Because of this, many industries are starting to use AI in important areas such as self-driving cars, autonomous robots, and medical diagnosis support systems.

Note: This paper talks about three things:
1. Major Problems: the significant issues that exist in AI-powered systems today.
2. Possible Solutions: ideas on how to fix these problems.
3. Research Directions: suggestions on where future research should focus to improve AI systems.

1 Introduction
Embedded computing platforms are becoming more complex every day to manage the increasing computational load generated by emerging applications, as autonomous vehicles (cars, trains, drones, aircraft), advanced robotic systems, intelligent appliances, and so on. Such systems are equipped with a variety of sensors that produce a large amount of data, hence demanding real-time processing and high-performance computing. To provide the required computational power, computer architectures are evolving towards heterogeneous platforms that integrate on the same board, or even on the same chip, multicore processors of different types, field programmable gate arrays (FPGAs), general purpose graphics processing units (GPGPUs), and special co-processors optimized for executing operations on tensors, as tensor processing units (TPUs).

Notes:
1. These systems are becoming more complex to handle the increasing demands of modern applications.
2. These systems use many sensors that create a lot of data. This data needs to be processed in real time.
3. To manage this, modern computer systems are evolving to include different types of processors and special hardware on a single board or chip.
4. Even though new tools and libraries are available to help, creating safe and reliable applications on these complex platforms is very difficult.

Although new tools and libraries are becoming available every year, developing safety-critical applications on top of such heterogeneous platforms, while providing the required guarantees, is quite difficult due to a number of non-trivial problems. The following list presents just a few of such problems, related to the use of artificial intelligence (AI) in safety-critical applications with real-time constraints.

- The interaction among the various components through the shared resources available on the computing platform (as buses, memories, and I/O devices) generates a significant amount of interference, introducing large and unpredictable delays on the computational activities. Such a large variability in responding to external events makes it very difficult to provide timing guarantees on the application behavior. This is also a serious problem for the certification of safety-critical software components.
- Programming modern heterogeneous platforms requires a deep knowledge of low-level details of the architecture, which prolongs the developing and testing times.

- Distributing the computational activities in an optimal way between hardware accelerators and processors is not trivial, but it can make a huge difference in the overall system performance, as well as in satisfying the real-time application constraints.
- Deep neural networks (DNNs) are commonly developed and inferred by means of state-of-the-art frameworks (e.g., Tensorflow, Caffe, and PyTorch), which greatly simplify the implementation of new models. Unfortunately, however, none of the current frameworks is specifically optimized to be used in safety-critical environments, nor capable of providing bounded response times. This prevents their use in real-time applications like autonomous driving, where DNNs should have a highly predictable behavior, not only in the functional domain, but also in the time domain, responding within specific deadlines.
- The use of deep learning algorithms and the related frameworks increases the software attack surface, posing serious security issues for the overall system. This problem is exacerbated by the fact that such frameworks usually run on top of rich operating systems, as Linux, which are more vulnerable to cyber attacks.
- In spite of their excellent capabilities in perception tasks, deep neural networks have been shown to be prone to adversarial attacks, i.e., malicious inputs with imperceptible perturbations that force a neural network to produce a wrong output with a high confidence score.
- Similar threats derive from inputs that significantly differ from the distribution of the training set. Predicting the behavior of a neural network on such inputs is not easy and, in some cases, the network could also respond with a wrong output with a high confidence score.
- Finally, since the behavior of a neural network is not explicitly programmed, but encoded in a huge number of parameters, interpreting the output of a neural model and deciding whether its prediction can be trusted is a challenging task.

Note (security risks): Using AI frameworks increases the risk of cyber attacks, especially since these frameworks often run on operating systems like Linux, which are more susceptible to such threats.
Note: To address these issues, extensive research is being conducted to develop AI-powered applications on complex platforms for safety-critical, real-time systems.

To address the issues described above, a lot of research is being devoted to support the development of AI-powered applications on top of heterogeneous platforms for safety-critical real-time systems.

Paper Organization. The remainder of the paper is organized as follows: Section 2 discusses the problems and preliminary solutions related to architecture issues; Section 3 presents problems and promising solutions related to security issues; Section 4 describes the major threats caused by the use of AI algorithms and some solutions aimed at mitigating them; Section 5 proposes an architectural approach to address all the issues discussed above; and Section 6 states the conclusions and outlines some promising research lines.
2 Architecture issues

Note: To make AI work in real-time applications, modern deep neural network (DNN) models need special hardware to speed up their processing. This is called hardware acceleration.

To be used in real time, the inference of modern DNN models requires hardware acceleration. This can be achieved by exploiting modern heterogeneous computing platforms equipped with GPUs or programmable hardware, as FPGAs. This section discusses the main problems related to such computing platforms and presents existing solutions to them.

2.1 General platform issues


To cope with the different computational requirements of real-time applications, modern
heterogeneous computing platforms integrate different processing elements, as multi-core
processors of different types, general purpose GPGPUs, FPGAs, and tensor processing units.
1. Modern AI-powered systems need to process a lot of data quickly, especially for real-time
applications like self-driving cars or robots. To handle this, these systems use advanced computing
platforms that combine different types of processors and hardware components.
Note: When different parts of the system try to access shared resources like memory and data pathways (buses), interference is created, which can cause long and unpredictable delays in the system's tasks.

The concurrent accesses to shared devices existing on such architectures, as buses, memory controllers, and high-level caches, create a significant interference on the computations, introducing long and variable delays in application tasks.

For instance, Cavicchioli et al. [12] observed significant and variable delays when using GPU acceleration on heterogeneous embedded platforms due to the contention occurring on shared memory, especially for memory-intensive GPU tasks. Restuccia et al. [29] identified some anomalous situations that can arise in an AXI bus arbiter in FPGA-based SoCs and proposed a reservation mechanism to prevent this phenomenon and restore fairness during bus transactions.
Note: The AXI Stall Monitor (ASM) can detect and safely resolve stalls during AXI bus transactions, ensuring smooth operation.

A timing analysis has also been proposed [28] to bound the execution of periodically-invoked hardware accelerators in nominal conditions. This analysis can be used to configure a latency-free hardware module named AXI Stall Monitor (ASM) to detect and safely solve possible stalls during AXI bus transactions. Efforts have also been devoted to analytically bound the delay experienced by AXI bus transactions issued by hardware accelerators on FPGA [30].

Hardware acceleration typically involves memory-intensive computations. Therefore, an accurate control of the memory traffic is crucial to achieve predictability in the execution of HW-tasks.

Pagani et al. [26] proposed a bandwidth reservation mechanism for AXI-based transactions on FPGAs able to control the bus traffic generated by hardware accelerators. The mechanism, named Memory Budget and Protection Unit (MBPU), aims at "shielding" hardware accelerators from excessive or unpredictable memory interference. MBPUs are installed between AXI master ports and the interconnect, enforcing a given budget of memory transactions within a periodic interval of time. Budgets are recharged in a periodic fashion and are configurable from the CPU via memory-mapped registers. MBPUs also protect the system from unrestricted accesses to memory by HW-tasks: this is accomplished by masking the accesses that fall outside a set of configurable memory address spaces.

Note: This mechanism helps control the memory traffic. It limits the number of memory transactions a hardware task can make in a given time period.
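To make the budget idea concrete, here is a minimal Python sketch of a periodic memory-transaction budget in the spirit of the MBPU described above; the class name, numbers, and interface are illustrative assumptions, not the actual hardware unit.

```python
# A minimal sketch (my own simplification, not the actual MBPU interface) of a
# periodic memory-transaction budget: each accelerator gets a budget of transactions
# per period; requests beyond the budget are stalled until the next replenishment,
# and accesses outside the allowed address ranges are masked.
class MemoryBudgetUnit:
    def __init__(self, budget, period, allowed_ranges):
        self.budget = budget                  # transactions allowed per period
        self.period = period                  # replenishment period (time units)
        self.allowed_ranges = allowed_ranges  # list of (start, end) addresses
        self.remaining = budget
        self.next_recharge = period

    def tick(self, now):
        # recharge the budget at every period boundary
        if now >= self.next_recharge:
            self.remaining = self.budget
            self.next_recharge += self.period

    def request(self, now, address):
        self.tick(now)
        # mask accesses that fall outside the configured address spaces
        if not any(lo <= address < hi for lo, hi in self.allowed_ranges):
            return "masked"
        # enforce the per-period transaction budget
        if self.remaining == 0:
            return "stalled"      # accelerator must wait for the next recharge
        self.remaining -= 1
        return "granted"

mbpu = MemoryBudgetUnit(budget=2, period=10, allowed_ranges=[(0x1000, 0x2000)])
for t, addr in [(0, 0x1100), (1, 0x1200), (2, 0x1300), (12, 0x1400), (13, 0x3000)]:
    print(t, hex(addr), mbpu.request(t, addr))
```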

2.2 GPU-related issues


Today, the most common way for accelerating DNNs is by executing them on a GPU-based
platform. This solution has two main advantages: (i) the response time can be reduced by
two orders of magnitude and (ii) the development is supported by standard frameworks.
However, GPUs also have disadvantages. First, they are closed systems and multiple tasks
are scheduled in a non preemptive fashion. This means that, if the system includes multiple
neural networks with different complexity and periodicity requirements, those with shorter
periods will be more likely to experience longer delays and higher response time variability.
An example of non-preemptive schedule of three DNNs with different execution times and
periods is illustrated in Figure 1.
Notice that, in this example, the total GPU utilization is less than one (U = 0.95), but
DNN1 is forced to skip the second and the fifth execution instances, because it cannot preempt
the execution of DNN3.
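The effect can be reproduced with a small simulation. The following Python sketch uses the task periods of Figure 1 (20, 50, 70) but assumed execution times; it schedules the three DNNs non-preemptively and reports the DNN1 instances that are skipped because a longer job cannot be preempted.

```python
# A small discrete-time sketch (illustrative execution times, periods from Figure 1)
# showing why non-preemptive execution on a GPU penalizes the DNN with the shortest
# period: once a long job starts, shorter jobs released in the meantime cannot
# preempt it and may be skipped.
def simulate_nonpreemptive(tasks, horizon):
    """tasks: list of (name, exec_time, period); a pending job is skipped if the
    next job of the same task is released before it could start."""
    pending, skipped = [], []          # pending: (release_time, name, exec_time, period)
    t, busy_until = 0, 0
    while t < horizon:
        for name, c, p in tasks:
            if t % p == 0:
                # skip a previous instance of this task still waiting at its next release
                for job in [j for j in pending if j[1] == name]:
                    pending.remove(job)
                    skipped.append((name, job[0]))
                pending.append((t, name, c, p))
        if t >= busy_until and pending:
            # non-preemptive: take the earliest-released pending job and run it to completion
            pending.sort()
            rel, name, c, p = pending.pop(0)
            busy_until = t + c
            print(f"t={t}: start {name} (released at {rel}), runs until {busy_until}")
        t += 1
    print("skipped instances:", skipped)

simulate_nonpreemptive([("DNN1", 7, 20), ("DNN2", 15, 50), ("DNN3", 21, 70)], horizon=140)
```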
To solve this problem, Capodieci et al. [9], in collaboration with NVIDIA, proposed to
modify the GPU internal scheduler with a preemptive scheduler based on Earliest Deadline
First (EDF) [18], also providing bandwidth isolation by means of a Constant Bandwidth
Server (CBS) [3]. Unfortunately, however, this solution is not yet available on commercial
NVIDIA GPU platforms.
Other problems with GPU acceleration are due to the high power consumption and their
significant weight and encumbrance, which prevent their usage in small embedded systems,
as unmanned aerial vehicles (UAVs).

Note: In Figure 1, a blue block represents a DNN running, and an X marks a skipped instance. DNN1 has short and frequent jobs, DNN2 has medium-length jobs, and DNN3 has long and less frequent jobs. Even though the GPU's total utilization is less than its full capacity (95% used), DNN1 misses some of its scheduled instances (the second and the fifth) because it cannot interrupt DNN3, which takes longer to execute: DNN1 is not running while DNN3 is occupying the GPU. This causes delays and inconsistency in the response time of DNN1.

[Figure 1: timelines of DNN1 (period 20), DNN2 (period 50), and DNN3 (period 70); skipped DNN1 instances are marked with X.]

Figure 1 Example of non-preemptive schedule of three DNNs with different execution times and periods. As clear from the figure, DNN1 experiences longer and variable delays.
2.3 FPGA-related issues

Note: Field Programmable Gate Arrays (FPGAs) are special integrated circuits that can be programmed by the user after manufacturing to perform specific tasks. Unlike GPUs, which are also used for accelerating AI algorithms, FPGAs offer several advantages (predictable execution times, lower power consumption, lower weight and cost).

An interesting alternative to GPUs for accelerating AI algorithms is provided by FPGAs. They are integrated circuits designed to be configured after manufacturing for implementing arbitrary logic functions in hardware. As such, they exhibit a highly predictable behavior in terms of execution times. In addition, they consume much less power with respect to GPUs, and existing commercial platforms are characterized by lower weight, encumbrance, and cost. Hence, they represent an ideal solution for being used on battery-operated embedded systems with size, weight, power and cost (SWaP-C) constraints, as space robots, satellites, and UAVs.

Nevertheless, FPGAs have other problems when used as DNN accelerators:

- No floating point unit (FPU) is available on the chip, unless it is explicitly programmed by the user, but consuming a significant fraction of the available fabric.
- Programming FPGAs is quite more difficult than programming CPUs or GPUs, and efficient coding requires a deep knowledge of low-level architecture details.
- The frameworks available today for developing AI applications on FPGA-based platforms are less rich and flexible than those available for GPUs, and the same is true for related libraries and tools.
- The overall FPGA area available in medium size SoCs could be insufficient to host more than one DNN, or even a single large DNN.

To overcome the problems outlined above, a lot of research has been carried out in the
recent years.
The absence of an FPU is overcome by performing a preliminary parameter quantization
to convert floating point numbers into integers with n-bit precision. Several quantization
methods have been proposed in the literature [17], including symmetrical, asymmetrical,
non-uniform, and statistical. An extreme quantization converts weights into binary numbers
using the sign function. Courbariaux, Bengio, and David [14] have shown that a binarized
DNN can achieve 98.8% accuracy in classifying the handwritten digits of the MNIST dataset.
Other optimization steps (e.g., network pruning and layer fusion) can also be performed,
both on GPUs and FPGAs, to reduce the computation time and the memory footprint of
trained DNNs while minimizing the loss in accuracy.
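As an illustration of the quantization step mentioned above, the sketch below shows a simple symmetric per-tensor quantization to n-bit integers and the extreme 1-bit (binarized) case; the schemes surveyed in [17] differ in details, so treat this only as a minimal example.

```python
# A minimal sketch, assuming a simple symmetric (per-tensor) scheme: floating-point
# weights are mapped to n-bit signed integers, the kind of preliminary parameter
# quantization used to run DNNs on FPGAs that lack a floating point unit.
import numpy as np

def quantize_symmetric(weights, n_bits=8):
    """Map float weights to integers in [-(2^(n-1)-1), 2^(n-1)-1] with a single scale."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax       # one scale factor for the whole tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_symmetric(w, n_bits=8)
w_hat = dequantize(q, scale)
print("max abs quantization error:", np.max(np.abs(w - w_hat)))

# Extreme (1-bit) quantization: binarize weights with the sign function, as in
# BinaryConnect [14]; zeros are mapped to +1 so that only {-1, +1} remain.
w_bin = np.sign(w)
w_bin[w_bin == 0] = 1
```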
To overcome the limitation of the FPGA area, Biondi et al. [6] proposed a programming framework, called FRED (see details on http://fred.santannapisa.it), to support the design, development, and execution of predictable software on FPGAs. FRED exploits dynamic partial reconfiguration and recurrent execution to virtualize the FPGA area, thus enabling the user to allocate a larger number of hardware accelerators than those that could otherwise be fit into the physical fabric. FRED also integrates a tool for automated floorplanning [33] and a set of runtime mechanisms to enhance predictability by scheduling hardware resources and regulating bus/memory contentions [29].

An application targeted by FRED consists of a set of software tasks (SW-tasks) running on the CPU cores that can periodically invoke the execution of hardware accelerators (HW-tasks) to be dynamically programmed on the FPGA. The communication scheme adopted between SW-tasks and HW-tasks is illustrated in Figure 2.

Note (communication between software and hardware tasks): In systems using FPGAs, communication between software tasks (SW-tasks) running on a CPU and hardware tasks (HW-tasks) running on an FPGA is crucial. Here is how it works:
1. Shared Memory: both SW-tasks and HW-tasks use shared memory to exchange data.
2. Requesting Hardware Acceleration: SW-tasks can request HW-tasks to perform certain operations. This is done by sending a request for hardware acceleration to the FPGA, together with the bitstream required by the accelerator.
3. Data Exchange: SW-tasks put input data into shared memory, which the HW-task processes. After processing, the HW-task puts the output results back into shared memory for the SW-task to retrieve.

[Figure 2: a SW-task puts input data into shared memory (<put data>), issues a <request> for hardware acceleration to the FPGA area, which is programmed with the bitstream required by the accelerator, and then retrieves the output results from shared memory (<get data>).]

Figure 2 Communication scheme adopted between SW-tasks and HW-tasks in FRED.
Note: This framework allows efficient use of FPGA resources and ensures that the SW-tasks and HW-tasks work seamlessly together.

The FPGA virtualization is achieved through a timesharing mechanism that replaces inactive accelerators (i.e., those that finished their computation and are waiting for the next activation) with active ones. In this way, the total number of HW-tasks that can run on the FPGA can be much higher than the number of HW-tasks that would statically fit in the physical area available on the fabric. Hence, this mechanism virtualizes the FPGA by creating a virtual area much larger than the physical one. The resulting approach is similar to multitasking, where tasks continuously change their context, or a virtual memory mechanism, where memory pages are swapped between hard disk and dynamic memory.

Thanks to a set of design choices and a proper scheduling infrastructure, resource contention delays experienced by tasks running under FRED are bounded and predictable, and hence they can be estimated to verify the system schedulability.

A full support for FRED has been developed under both FreeRTOS and Linux [25, 24]. The Linux support comes with a user-space daemon and a set of custom kernel drivers to handle the processor configuration port (PCAP) and the shared-memory communication buffers between CPUs and FPGA. A preemptable reconfiguration interface has also been developed by Rossi et al. [31] to achieve a finer control in scheduling the reconfiguration requests and a better control on the reconfiguration delays incurred by HW-tasks.
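The timesharing idea can be illustrated with a small simulation. The following Python sketch is not FRED's actual API; it only mimics how a limited number of physical reconfigurable slots can serve more HW-tasks than would statically fit, by programming pending accelerators onto slots as they become free.

```python
# A minimal conceptual sketch of FPGA virtualization by timesharing: HW-task requests
# are served by a limited number of physical reconfigurable slots; when a slot is
# free, the next pending accelerator is programmed onto it (FIFO order).
from collections import deque
from dataclasses import dataclass

@dataclass
class HwTask:
    name: str
    exec_time: int  # abstract time units needed on the fabric

def run_timeshared(requests, num_slots=2, reconfig_time=1):
    pending = deque(requests)
    slots = [None] * num_slots          # each slot holds (task, remaining time) or None
    t = 0
    while pending or any(slots):
        for i, entry in enumerate(slots):
            if entry is None and pending:
                task = pending.popleft()
                # programming the slot with the task bitstream costs reconfig_time
                slots[i] = (task, task.exec_time + reconfig_time)
                print(f"t={t}: slot {i} reconfigured for {task.name}")
        # advance time by one unit and retire finished accelerators
        t += 1
        for i, entry in enumerate(slots):
            if entry is not None:
                task, remaining = entry
                remaining -= 1
                if remaining == 0:
                    print(f"t={t}: {task.name} completed, slot {i} freed")
                    slots[i] = None
                else:
                    slots[i] = (task, remaining)

if __name__ == "__main__":
    run_timeshared([HwTask("FFT", 3), HwTask("CNN_layer", 5),
                    HwTask("Sobel", 2), HwTask("MatMul", 4)])
```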

2.4 Framework issues


DNNs are commonly developed and inferred by means of state-of-the-art frameworks, as
Tensorflow, Caffe, and PyTorch. Unfortunately, such frameworks are not optimized for being
used in real-time applications and they are not supported by commercial real-time operating
systems, as VxWorks and QNX. As a consequence, DNN tasks may be subject to a variable
interference.
Casini et al. [10] addressed this problem by modifying the internal scheduler of the
TensorFlow framework and adapting it for the SCHED_DEADLINE scheduling class of
Linux. Extensive experiments demonstrated the effectiveness of the approach, showing a
significant reduction of both average and longest-observed response times of TensorFlow
tasks.

Note: Restuccia and Biondi proposed techniques to enhance the predictability of DNN execution on FPGA-based platforms using the Vitis AI framework by Xilinx.

Recently, Restuccia and Biondi [27] proposed a set of techniques for accelerating DNNs
on FPGA-based platforms with a highly predictable timing behavior under the Vitis AI
framework by Xilinx. In Vitis AI, the execution of the DNN layers relies on the deep
learning processing unit (DPU) core, a hardware accelerator optimized for the execution
of convolutional DNNs. Based on an extensive profiling campaign conducted on the Xilinx
Zynq Ultrascale+ platform, they proposed an execution model for the DPU employed to
derive a response time analysis for guaranteeing real-time applications constraints.

3 Security issues

Note: Safety-critical systems, like self-driving cars, use AI algorithms and consist of components with varying levels of complexity and requirements. Understanding how to secure these systems is crucial for their safe operation.

Safety-critical systems that make use of AI algorithms consist of several components with different complexity and requirements.

Consider for example a self-driving car. The functions responsible for steering, throttle modulation, braking, and engine control are highly critical and must satisfy stringent requirements in terms of safety, security, and real-time behavior. As such, they need to be managed by a real-time operating system, that must be certified to guarantee the required safety integrity levels. (Examples of critical functions: steering, throttle modulation, braking, engine control.)

On the other hand, high-level functions related to sensory perception, object tracking, and vehicle localization, which heavily rely on AI algorithms, need to be executed on a rich operating system (e.g., Linux) to exploit all the available device drivers, libraries, and development frameworks required for such complex computations. These components are far from being certified and offer a large software surface for cyber attacks. (Examples of high-level AI components: sensory perception, object tracking, vehicle localization.)

A real-world example of a security breach: In 2015, two hackers, Charlie Miller and Chris Valasek, discovered a vulnerability in the Jeep Cherokee, which they exploited to remotely access the vehicle and gain physical control, including steering, braking, turning on the wipers, blasting the radio, and finally, killing the engine to bring the vehicle to a complete stop [15]. They wrote a long paper [19] where they explain how they accessed the CAN bus through the infotainment system, detailing the full attack chain.
Although cyber attacks to non critical components accessible by a wireless network cannot
be avoided completely, the security of a system can greatly be enhanced by preventing such
attacks from spreading to more critical components. This can be achieved by isolating
software components with different level of criticality into different execution domains,
through the use of a hypervisor.
3.1 Hypervisor-based architecture

Note: A hypervisor is a software layer that manages multiple execution domains (virtual machines) on the same hardware platform. Each domain can run its own operating system independently.

A hypervisor is a software layer above the hardware platform able to create and manage multiple execution domains, each hosting a virtual machine with its own operating system. An example of a hypervisor-based architecture for safety-critical embedded systems is the one proposed by the SPHERE project [7], which supports the creation of multiple virtual machines on the same computing platform, providing time/memory isolation, security, real-time communication channels, and I/O virtualization to allow different virtual machines to share the peripheral devices.

Figure 3 shows an example of a hypervisor managing two execution domains with different levels of criticality. One domain hosts a virtual machine running all safety-critical functions on a real-time operating system (RTOS), while the other domain hosts another virtual machine running AI-powered software on the Linux operating system.

Key Features of a Hypervisor-Based System

- Time/Memory Isolation: ensures each virtual machine operates independently without interference.
- Security: protects critical functions from potential attacks on less critical components.
- Real-Time Communication: enables timely and secure data exchange between virtual machines.
- I/O Virtualization: allows multiple virtual machines to share peripheral devices without compromising performance.

In simple terms, a hypervisor helps different parts of a system run efficiently by dividing tasks into high-criticality (safety-critical) and low-criticality (non-safety-critical) domains. This ensures that crucial tasks always get the resources they need while maintaining overall system performance and security.

Low-Criticality Domain: this domain is for tasks that are not safety-critical but still important, like AI-powered applications. It uses a general-purpose operating system, such as Linux. These tasks can be interrupted and rescheduled without severe consequences.

High-Criticality Domain: this domain handles safety-critical tasks, for which any delay or error can have serious consequences. It uses a Real-Time Operating System (RTOS), which is designed to handle tasks with strict timing requirements, and ensures that critical tasks get the priority they need.

Hypervisor: sits between the hardware and the operating systems, and manages the allocation of resources so that both domains operate smoothly and safely.

Hardware Platform: includes the physical hardware components, such as processors and memory, as well as hardware accelerators that speed up specific tasks.

Trusted Execution Environment (Trusted EE): a secure area within the hardware platform that ensures sensitive tasks are executed safely and securely, helping protect the system from potential security threats.

[Figure 3: a low-criticality domain (AI-powered software on Linux) and a high-criticality domain (safety-critical software on an RTOS) run on top of the hypervisor, which executes on a hardware platform with hardware accelerators and a Trusted EE.]

Figure 3 Example of a hypervisor managing two execution domains with different criticality.

Such an architecture has been successfully implemented and tested on a number of AI-powered control applications using CLARE [2], a novel hypervisor purposely designed to support mixed-criticality real-time systems exploiting AI hardware acceleration in heterogeneous embedded platforms.
3.2 CLARE hypervisor

Note: The CLARE hypervisor is a special type of software that helps manage different virtual machines (VMs) on a single hardware platform, especially in safety-critical systems. In systems where safety is crucial (like medical devices, automotive systems, or industrial control systems), it is essential that different software components run predictably and securely.

The CLARE hypervisor is a novel bare-metal (type-1) hypervisor at the core of the CLARE software stack [2]. It integrates cutting-edge mechanisms to host safe, secure, and time-predictable virtual machines that can execute in isolation upon the same hardware platform. The CLARE hypervisor follows a fully-static approach with off-line configurations and optimization to allocate the onboard resources to virtual machines. It has been designed to support modern heterogeneous platforms, such as GPGPU- and FPGA-based SoCs, to better exploit and control their computational resources. In particular, it provides a number of real-time and security features that make it suitable for safety-critical systems, such as:

- Improved key management and attack detection under control flow integrity by pointer authentication code [16]. (Note: advanced techniques like pointer authentication code ensure the integrity of control flows, which helps in detecting attacks.)
- Hardware-based isolation exploiting the ARM TrustZone technology to perform key management and provide attack detection and recovery strategies.
- Virtualization of trusted execution environments leveraging ARM TrustZone [13].
- Protection mechanisms at the hypervisor level to control temporal and spatial interference among domains, also preventing side-channel attacks [20].
- I/O device virtualization and I/O related memory contention control, with related latency analysis [11].
- FPGA virtualization to allow multiple domains to exploit hardware accelerators in isolation.

4 AI related issues

Deep neural networks have shown an impressive performance in several recognition tasks, but their suitability for mission-critical applications has been questioned by Szegedy et al. [35] and many other authors [34], who showed that imperceptible perturbations added to an input sample can fool a neural network in perceiving objects that are not present in the input. Such perturbed inputs are called adversarial examples (AEs) and represent a serious threat for the security of AI-based systems.

Note: Deep neural networks are very good at tasks like recognizing images and sounds. However, some experts, like Szegedy and others, are worried about their reliability in important situations. These tricky inputs are called adversarial examples (AEs) and they are a big problem for the safety and accuracy of neural networks.

Note: An adversarial attack is a means to fool a system with specially crafted tricky inputs: if some carefully chosen noise is added to a genuine sample, the network is fooled.

[Figure 4: a genuine sample is correctly classified by the neural network; adding an adversarial perturbation to it produces an adversarial example that the same network misclassifies with a high score.]

Figure 4 Example in which a neural network is able to correctly classify a genuine image of a stop sign (top). However, the same image can be modified by adding an adversarial perturbation, so that it is classified as a parking sign with a high confidence score (bottom).
An example of adversarial image is shown in Figure 4, where the picture of a stop sign is perturbed in such a way that it is perceived by the network as a parking sign with a high confidence score.
Although a significant effort has been spent to develop defense methods against adversarial
examples [5], the problem remains open and challenging, since these attacks violate the
fundamental stationarity assumption of learning algorithms, i.e., that training and testing
data are drawn from the same distribution.
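For readers unfamiliar with how such perturbations are crafted, the sketch below applies the well-known Fast Gradient Sign Method (FGSM) to a toy logistic-regression "model"; the paper does not prescribe this or any specific attack, so the model, label, and epsilon are purely illustrative.

```python
# Illustration only: FGSM on a toy linear classifier. The input is nudged by epsilon
# in the direction that increases the loss, producing a small per-feature perturbation
# that can sharply lower (or flip) the predicted score.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.1          # toy "model": logistic regression on 16 features
x = rng.normal(size=16)                  # a genuine input
y = 1.0                                  # its assumed true label

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # P(class 1 | x)

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w for this model.
grad_x = (predict(x) - y) * w
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)    # small, sign-based perturbation

print("score on genuine input:    ", predict(x))
print("score on adversarial input:", predict(x_adv))
```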
The trustworthiness of DNNs is also threatened by genuine inputs characterized by a
distribution that is quite different from that of the training samples. Such inputs are referred
to as out-of-distribution (OoD) samples. Two examples of OoD images are shown in Figure 5.

Figure 5 Two examples of OoD images that could cause a deep neural network to produce a
wrong output.

Considering that the prediction score of a DNN can be high in the presence of both AEs and OoD samples, the output score of the best classified class cannot be considered as an indication of the prediction confidence of the model.

Several methods have been proposed in the literature to detect AEs and OoD samples. Two of these methods are presented below with more details.
Note: To solve this problem, researchers have suggested several methods. Below, two of these methods are explained in detail.

4.1 Detection by input transformations
One method for detecting AEs relies on the fact that DNN models are usually robust to certain types of input transformations (e.g., translation, rotation, scaling, blurring, noise addition, etc.). This means that, if a genuine image is correctly recognized by a DNN, the prediction score reduces only slightly when the same image is translated, rotated, or modified with one of the mentioned transformations. However, the same is not true for most AEs, and it has been observed that they result to be more sensitive to input transformations, which cause a much higher degradation in the prediction score.

Note: Normally, if you make small changes to a genuine image, like rotating it, resizing it (scaling), blurring it, or adding some noise, the DNN still recognizes the image pretty well: the prediction score (how confident the DNN is about its guess) only drops a little. Adversarial examples are different: when you apply the same small changes to an AE, the DNN's prediction score drops a lot more.

This property of AEs has been exploited by some authors [37] to detect whether an input x is adversarial or genuine. If y = f(x) is the top class score produced by the DNN on input x and yT = f(T(x)) is the score produced on the transformed input T(x), a simple detection method is to consider x to be adversarial if the difference y − yT is higher than a given threshold τ.

Unfortunately, it is possible to generate adversarial examples that are robust to input transformations. To cope with this case, Nesti et al. [22] proposed a new method, called defense perturbation, capable of detecting AEs that are robust to input transformations. The defense perturbation is generated by a proper optimization process capable of making robust AEs sensitive again to input transformations. Furthermore, the paper introduces multi-network AEs that can fool multiple DNN models simultaneously, presenting a solution for detecting them.

Note: To handle AEs that resist input transformations, Nesti proposed a new method called "defense perturbation." In AI, a defense perturbation is a technique used to detect tricky inputs called adversarial examples.
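A minimal sketch of the simple threshold-based detector described above follows: compare the top-class score on the input and on a transformed copy, and flag the input when the drop exceeds τ. The classifier and the transformation used here are placeholders, not a specific model from the paper.

```python
# Transformation-based AE detection: an input is flagged as adversarial when the
# score of its top class drops by more than tau after a mild input transformation.
import numpy as np

def detect_adversarial(f, x, transform, tau=0.3):
    """f(x) -> vector of class scores; transform(x) -> transformed input."""
    scores = f(x)
    top_class = int(np.argmax(scores))
    y = scores[top_class]                       # y  = f(x), top-class score
    y_t = f(transform(x))[top_class]            # yT = f(T(x)), score of the same class
    return (y - y_t) > tau                      # adversarial if the score drops too much

# Usage sketch with a dummy classifier and a noise-addition transform:
def dummy_classifier(x):
    logits = np.array([x.mean(), -x.mean()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

add_noise = lambda x: x + np.random.default_rng(0).normal(scale=0.05, size=x.shape)
x = np.random.default_rng(1).normal(size=(8, 8))
print("flagged as adversarial:", detect_adversarial(dummy_classifier, x, add_noise))
```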
4.2 Detection by coverage analysis

Note: Rossolini presented a new methodology, called coverage analysis, for detecting these tricky inputs. A DNN is organized into layers (an input layer, hidden layers, and an output layer), and coverage analysis works in two main phases:
1. Offline Phase (before using the AI): a trusted dataset is used. First, a set of correct and trusted examples is fed to the model, and how the neurons react is recorded; these responses are combined into a "signature" for each output class. (A trusted dataset provides accurate and reliable examples that help the system learn correctly and identify the inputs. Example: imagine you are teaching a child to recognize animals; you show them a trusted set of clear, correctly labeled pictures of dogs and cats. Once the child has learned from these trusted examples, they can more easily recognize new pictures of dogs and cats and spot any unusual or incorrect pictures.)
2. Runtime (or Testing) Phase: when a new input is given to the AI, we check how the neurons react and compare this reaction to the stored signature for the predicted class. If the neurons' reactions are very different from the signature (the normal pattern), the input might be tricky or unfamiliar.

A different approach for detecting AEs is based on a deeper analysis of the neuron activation values in the different DNN layers. In fact, in order to force a DNN model to classify an input with a desired wrong class, AEs usually cause an overactivation of some neurons in different network layers. To identify such neurons, Rossolini et al. [32] presented a new coverage analysis methodology capable of detecting both adversarial and out-of-distribution inputs.

The approach works in two distinct phases: in a preliminary (off-line) phase, a trusted dataset is presented to the DNN and the neuron outputs, in each layer and for each class, are analyzed and aggregated into a set of covered states, which all together represent a sort of signature describing how the model responds to the trusted samples for each given class. Then, at runtime, each new input is subject to an evaluation phase, in which the activation state produced by the input in each layer is compared with the corresponding signature for the class predicted by the network. The higher the number of activation values outside the range observed during the presentation of the trusted dataset, the higher the probability that the current input is not trustworthy. The approach is schematically illustrated in Figure 6.

The nice thing about this approach is that the comparison against the signature allows computing a confidence value c distinct from the prediction score, indicating how much the current prediction can be trusted.

4.3 Interpretability issues

Another problem of complex machine learning models is that they are hardly interpretable by humans. In fact, they encode their input-output function in millions of parameters and, therefore, it is not trivial to understand why a given input produces a certain output. Many people demand transparency and precise mechanisms as a prerequisite for trust. Especially for safety-critical applications, developers cannot trust a critical decision system if it is not possible to explain the reasons that brought to that decision.

To address this issue, a new branch of research, referred to as Explainable AI (XAI) [36], started around 2014 with the goal of reconstructing and representing in a comprehensible fashion the features that caused an AI model to produce a given output.
Note (confidence value c): The confidence value helps us understand how confident the AI is in its decision. If the new input's activations closely match the stored signature, the confidence value will be high; if some activations differ from the stored signature, the confidence value will be low, indicating that the input might be tricky or unusual. The "active state" is how the neurons of the network react when they see an input; the aggregation algorithm combines these reactions to create the signature.

[Figure 6: in the offline phase, the trusted dataset is fed to the DNN, and a coverage analysis plus an aggregation algorithm produce a signature; in the runtime/testing phase, the active state of the current input is evaluated against the signature to compute a confidence value c.]

Figure 6 Overview of the coverage-based method proposed in [32]: (top) off-line phase for producing the signature for each layer and each class; (bottom) online detection phase based on comparing the current activation states with the stored signature. It produces a confidence value c indicating how much the current prediction can be trusted.
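The two-phase idea of Figure 6 can be sketched as follows. This is a simplified stand-in, not the actual method of [32]: the signature is reduced to per-neuron min/max ranges per class, and the confidence c is the fraction of activations that stay inside the recorded range.

```python
# Coverage-style detection sketch: offline, record per-neuron activation ranges on a
# trusted dataset, separately for each class (the "signature"); at runtime, count how
# many activations of a new input fall outside the recorded range and turn that into
# a confidence value c, distinct from the prediction score.
import numpy as np

def build_signature(activations, labels, num_classes):
    """activations: (N, D) hidden-layer outputs on the trusted dataset."""
    signature = {}
    for c in range(num_classes):
        acts = activations[labels == c]
        signature[c] = (acts.min(axis=0), acts.max(axis=0))   # per-neuron covered range
    return signature

def confidence(signature, activation, predicted_class):
    low, high = signature[predicted_class]
    outside = np.logical_or(activation < low, activation > high)
    return 1.0 - outside.mean()        # c close to 1: activations match the signature

# Usage sketch with random stand-in activations:
rng = np.random.default_rng(0)
trusted_acts = rng.normal(size=(200, 32))
trusted_labels = rng.integers(0, 3, size=200)
sig = build_signature(trusted_acts, trusted_labels, num_classes=3)
print("c (in-distribution-like):", confidence(sig, rng.normal(size=32), 1))
print("c (overactivated input): ", confidence(sig, rng.normal(size=32) * 10, 1))
```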
Note: The above methods provide solutions for some cases, such as distinguishing genuine inputs from tricky ones.

Thanks to these methods, it has been found that, in some cases, a DNN learned to classify images using features that have nothing to do with the object of interest, but appear
frequently in most training samples of a specific class (e.g., water for the class ship, snow for
the class wolf, or even copyright tags present in most of the images of a given class). Such
biases in the training data limit the generalization capabilities of a neural model and can
cause wrong predictions that would be quite harmful in applications like self-driving cars or
medical diagnoses.
Identifying such biases in the training set is the main objective of XAI, which in the
last years proposed several methods and tools for making AI decisions more interpretable to
humans [21]. For instance, Pacini et al. [23] presented X-BaD, a flexible tool for detecting
biases in training sets while providing user-interpretable explanations for DNN outputs. The
tool can also be used to compare the performance of different XAI methods.

5 Towards trustworthy AI-based systems

Note: From the previous sections it is clear that AI algorithms and models are very good at tasks like understanding and making decisions, but they still have some weaknesses:
1) Safety: they can sometimes make mistakes that might cause harm.
2) Security: they can be vulnerable to attacks.
3) Timing Predictability: it is sometimes hard to predict exactly when they will finish their tasks.
4) Certifiability: it can be difficult to prove that they will always work correctly.

From the considerations presented in the previous sections, it should be clear that, although AI algorithms exhibit a great performance in several perception and control tasks, they have intrinsic weaknesses in terms of safety, security, timing predictability, and certifiability. Does it mean that complex cyber-physical systems cannot take advantage of such an amazing software technology? Fortunately, there is a promising way to exploit the power of modern deep learning algorithms in safety-critical systems.

While we cannot prevent AI algorithms from being attacked or producing unsafe results, we can take a number of countermeasures to prevent them from harming the whole system. For instance, the level of temporal predictability, as well as the level of security of the whole system, can be increased by using a suitable real-time hypervisor capable of isolating the safety-critical components from the AI-powered functions in two separated virtual machines, as described in Section 3. In this way, an attack to the AI domain cannot propagate to the high-criticality domain, which can be protected by exploiting the hardware security features available in modern computer architectures.

Note: While we can't completely stop AI algorithms from being attacked or making unsafe decisions, we can still protect our systems. One way to do this is by using a real-time hypervisor, which separates the safety-critical parts of the system from the AI functions, like putting them in different rooms: even if there is an attack on the AI part, it won't affect the safety-critical parts.
Note: To protect against attacks, unexpected problems, and harmful situations, Biondi et al. adopted an approach that makes deep learning safer for critical systems.

To cope with adversarial attacks and OoD samples, the defense methods described in Section 4 are essential to detect both malicious as well as unsafe inputs that would cause a DNN to produce a wrong output. In these cases, the system must react by excluding the attacked AI component from the decision pipeline and switching to a simpler, but safer, backup control module that can bring the system into a safe state. For instance, in a self-driving car, the backup module could take control of the vehicle to stop it at the side of the road.

The approach described above has been undertaken by Biondi et al. [8] for exploiting deep learning models in safety-critical systems. The architecture includes the two execution domains illustrated in Figure 7: a high-performance domain, running under Linux, and a safety-critical domain, running under the Erika Enterprise real-time kernel [1], both managed by the CLARE hypervisor [2].
Note (Biondi approach, two domains): The system is split into two parts: a high-performance domain that runs on Linux and handles all the AI functions (it has three deep neural networks, DNN1, DNN2, and DNN3, which receive the sensory input), and a safety-critical domain that runs on a real-time operating system called Erika Enterprise and handles all the critical control functions to ensure safety. Both are managed by the CLARE hypervisor. For example, in a self-driving car, if there is an issue with the AI, a backup system can take control and safely stop the car at the side of the road.

[Figure 7: in the high-performance domain (Linux), the sensory input feeds DNN1, DNN2, and DNN3, whose outputs go to a voter and whose confidence signals are integrated; the safety-critical domain (Erika Enterprise) hosts the Safety Monitor and a backup controller producing the safe output; both domains run on the CLARE hypervisor over the hardware platform.]

Figure 7 Architecture scheme proposed in [8], including a high-performance domain (running all the AI functions) and a safety-critical domain (running all critical control functions), managed by the CLARE hypervisor.
The high-performance domain is in charge of executing all the AI algorithms and tasks that must run under Linux, whereas the safety-critical domain runs all the vital system functions under the Erika Enterprise real-time kernel [1].

To increase the level of robustness against AEs and OoD inputs, each perceptual function is replicated by three different DNN models trained on different datasets. Each DNN also includes a coverage analyzer to provide a confidence signal (represented by the red arrow coming out from each DNN box). The three confidence signals are then integrated into an overall confidence signal, which is used to decide whether to switch to the safe controller. The three redundant DNN outputs go instead to a majority voter to resolve possible disagreements in the predictions.

Note: Each DNN gives its own result; the voter looks at the three results and chooses the one that most of them agree on. Each DNN also sends a confidence signal to show how sure it is about its result.

An extra safety feature is represented by the presence of a Safety Monitor, which is in charge of detecting possible DNN outputs that could have a negative consequence on the controlled system. In fact, even in the presence of a non-malicious input, a DNN could generate an output that is not detected as unsafe by the voter and the confidence integrator, but could cause the controlled system to fail or misbehave. If such a condition is detected by the Safety Monitor, the system control is switched from the high-performance AI controller to the backup controller.

Note: The Safety Monitor checks whether the output might cause problems; if it detects an issue, it switches control to the backup controller to ensure safety. If the AI is not working correctly, the backup controller takes over to keep the system safe.
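A minimal sketch of the redundancy logic described above follows (majority voter plus integrated confidence deciding when to fall back to the safe controller); the integration rule (taking the minimum confidence) and the threshold are assumptions for illustration, not the actual design of [8].

```python
# Redundancy sketch: three DNN predictions go to a majority voter, three per-DNN
# confidence values are integrated into an overall confidence, and the safe backup
# controller is selected when there is no majority or the confidence is too low.
from collections import Counter

def majority_vote(predictions):
    label, votes = Counter(predictions).most_common(1)[0]
    return label if votes >= 2 else None          # None: no majority among the 3 DNNs

def integrate_confidence(confidences):
    return min(confidences)                       # assumption: pessimistic integration

def select_controller(predictions, confidences, threshold=0.7):
    label = majority_vote(predictions)
    c = integrate_confidence(confidences)
    if label is None or c < threshold:
        return ("backup_controller", None)        # switch to the safe backup module
    return ("ai_controller", label)

print(select_controller(["stop", "stop", "go"], [0.9, 0.85, 0.8]))   # AI controller
print(select_controller(["stop", "stop", "go"], [0.9, 0.4, 0.8]))    # backup controller
```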

Note (self-driving car example): The high-performance part does complex tasks like recognizing images and making decisions; the safety-critical part keeps the car's important functions safe, like braking, steering, and controlling speed; if the AI fails or makes a risky choice, the backup system takes over to stop the car safely.

Note: The first version of this system was successfully tested on an inverted pendulum. The Safety Monitor in this system uses a Lyapunov approach to check whether the current state is in the safe area or in a warning area.

A first version of the architecture has been successfully implemented and tested on a
control system for an inverted pendulum. In this case, the Safety Monitor uses a Lyapunov
approach to detect when the system state exits a safe region of the state space and enters a
margin region in which the stability can still be recovered by the safe controller. When this
happens, the control is given to the backup controller. The AI controller is re-enabled when
the system state goes back to the safe zone.
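A minimal sketch of such a Lyapunov-style monitor is shown below; the quadratic function V(x) = x^T P x, the matrix P, and the thresholds are illustrative assumptions, not the values used in the actual pendulum controller.

```python
# Lyapunov-style safety monitor sketch for a 2-state system (e.g., pendulum angle and
# angular velocity): V(x) measures the distance from the equilibrium; control is handed
# to the backup controller when the state leaves the safe region, and the AI controller
# is re-enabled once the state is back inside.
import numpy as np

P = np.array([[2.0, 0.5],
              [0.5, 1.0]])        # positive-definite weight matrix (assumed)
V_SAFE, V_RECOVERABLE = 1.0, 4.0  # safe region: V <= 1; recoverable margin: V <= 4

def lyapunov(x):
    return float(x @ P @ x)

def monitor(x, current_controller):
    v = lyapunov(x)
    if v > V_RECOVERABLE:
        return "backup"           # gray region: stability may no longer be recoverable
    if v > V_SAFE and current_controller == "ai":
        return "backup"           # yellow margin region: hand control to the backup
    if v <= V_SAFE and current_controller == "backup":
        return "ai"               # back in the green safe zone: re-enable the AI
    return current_controller

state_trace = [np.array([0.2, 0.1]), np.array([0.9, 0.8]), np.array([0.3, 0.0])]
ctrl = "ai"
for x in state_trace:
    ctrl = monitor(x, ctrl)
    print(f"V={lyapunov(x):.2f} -> controller: {ctrl}")
```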
Figure 8 illustrates a simplified and qualitative representation of the state space for an
inverted pendulum, where the black dot represents the current system state, the safe region
is visualized in green, the recoverable region in yellow, and the unstable region in gray.

Note: The black dot shows the current state. The green safe region is the area where the system is stable and working correctly. The yellow recoverable region means the system is not completely safe, but it can recover from issues and return to the green area. The gray unstable region is a dangerous, non-recoverable area where the system might fail and cannot easily return to the safe region.

[Figure 8: the state space of the inverted pendulum divided into a safe region, a recoverable region, and an unstable (non-recoverable) region, with the black dot marking the current system state.]

Figure 8 Qualitative representation of the state space for an inverted pendulum. The black dot represents the current system state.

On the same line, Belluardo et al. [4] presented a safe and secure multi-domain software
architecture tailored for autonomous driving.
6 Conclusions

Note: AI algorithms and models are very good at tasks like understanding and making decisions, but they still have the weaknesses already discussed above. This paper discusses several deep learning problems and their solutions in domains like self-driving vehicles, robots, and medical applications. The problems are complex, so several issues remain unsolved; the research community is actively working to address them, and some solutions have already been proposed.

This paper presented a set of problems that today prevent the use of deep learning algorithms in mission-critical systems, as self-driving vehicles, autonomous robots, and medical applications. Among them, the most relevant issues are the low predictability of modern heterogeneous architectures and the high vulnerability of AI software to cyber attacks. Fortunately, the research community is readily reacting to address such problems and some solutions to overcome such limitations have already been proposed at different architecture levels. However, the problems are many and complex, so several issues remain unsolved and many require some joint effort from the AI and the real-time research communities.

References

1 Erika Enterprise RTOS. URL: https://www.erika-enterprise.com/.
2 The CLARE Software Stack. URL: https://accelerat.eu/clare.
3 Luca Abeni and Giorgio Buttazzo. Resource Reservation in Dynamic Real-Time Systems.
Real-Time Systems, 27(2):123–167, July 2004.
4 L. Belluardo, A. Stevanato, D. Casini, G. Cicero, A. Biondi, and G. Buttazzo. A Multi-domain
Software Architecture for Safe and Secure Autonomous Driving. In Proc. of the 27th IEEE
International Conference on Embedded and Real-Time Computing Systems and Applications
(RTCSA 2021), Online event, August 18–20, 2021.
5 Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine
learning. Pattern Recognition, 84:317–331, December 2018.

6 Alessandro Biondi, Alessio Balsini, Marco Pagani, Enrico Rossi, Mauro Marinoni, and Giorgio
Buttazzo. A Framework for Supporting Real-Time Applications on Dynamic Reconfigurable
FPGAs. In Proc. of the IEEE Real-Time Systems Symposium (RTSS 2016), Porto, Portugal,
November 29 – December 2, 2016.
7 Alessandro Biondi, Daniel Casini, Giorgiomaria Cicero, Niccolò Borgioli, and Giorgio But-
tazzo et al. SPHERE: A Multi-SoC Architecture for Next-generation Cyber-Physical Systems
Based on Heterogeneous Platforms. IEEE Access, 9:75446–75459, May 2021.
8 Alessandro Biondi, Federico Nesti, Giorgiomaria Cicero, Daniel Casini, and Giorgio Buttazzo.
A Safe, Secure, and Predictable Software Architecture for Deep Learning in Safety-Critical
Systems. IEEE Embedded Systems Letters, 12(3):78–82, September 2020.
9 N. Capodieci, R. Cavicchioli, M. Bertogna, and A. Paramakuru. Deadline-Based Scheduling
for GPU with Preemption Support. In Proc. of the 39th IEEE Real-Time Systems Symposium
(RTSS 2018), Nashville, Tennessee, USA, December 11–14, 2018. URL: http://arxiv.org/
abs/1312.6199.
10 Daniel Casini, Alessandro Biondi, and Giorgio Buttazzo. Timing Isolation and Improved
Scheduling of Deep Neural Networks for Real-Time Systems. Software: Practice and Experience,
50(9):1760–1777, September 2020.
11 Daniel Casini, Alessandro Biondi, Giorgiomaria Cicero, and Giorgio Buttazzo. Latency Analysis
of I/O Virtualization Techniques in Hypervisor-Based Real-Time Systems. In Proceedings
of the 27th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS
2021), Online event, May 18–21, 2021.
12 R. Cavicchioli, N. Capodieci, and M. Bertogna. Memory Interference Characterization Between
CPU Cores and Integrated GPUs in Mixed-Criticality Platforms. In Proc. of the 22nd IEEE
International Conference on Emerging Technologies and Factory Automation (ETFA 2017),
Limassol, Cyprus, September 12–15, 2017.
13 Giorgiomaria Cicero, Alessandro Biondi, Giorgio Buttazzo, and Anup Patel. Reconciling
Security with Virtualization: A Dual-Hypervisor Design for ARM TrustZone. In Proceedings of
the 18th IEEE International Conference on Industrial Technology (ICIT 2018), Lyon, France,
February 20–22, 2018.
14 Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training Deep
Neural Networks with binary weights during propagations. In Proc. of the 29th Conference
on Neural Information Processing Systems (NIPS 2015), Montreal, Canada, December 7–10,
2015.
15 Blane Erwin. The Groundbreaking 2015 Jeep Hack Changed Automotive Cybersecurity, 2021.
URL: https://fractionalciso.com/the-groundbreaking-2015-jeep-hack-changed-
automotive-cybersecurity/.
16 Giulia Ferri, Giorgiomaria Cicero, Alessandro Biondi, and Giorgio Buttazzo. Towards the
Hypervision of Hardware-based Control-Flow Integrity for Arm Platforms. In Proceedings of
the Italian Conference on CyberSecurity (ITASEC 2019), Pisa, Italy, February 12–15, 2019.
17 Yunhui Guo. A Survey on Methods and Theories of Quantized Neural Networks. ArXiv,
abs/1808.04752, 2018.
18 C. Liu and J. Layland. Scheduling algorithms for multiprogramming in a hard real-time
environment. Journal of the ACM, 20(1):40–61, January 1973.
19 Charlie Miller and Chris Valasek. Remote Exploitation of an Unaltered Passenger Vehicle,
August 10, 2015. URL: http://illmatics.com/Remote%20Car%20Hacking.pdf.
20 Paolo Modica, Alessandro Biondi, Giorgio Buttazzo, and Anup Patel. Supporting Temporal
and Spatial Isolation in a Hypervisor for ARM Multicore Platforms. In Proceedings of the 18th
IEEE International Conference on Industrial Technology (ICIT 2018), Lyon, France, February
20–22, 2018.
21 Christoph Molnar. Interpretable Machine Learning: A Guide for Making Black Box Models
Explainable, 2021. URL: https://christophm.github.io/interpretable-ml-book/.
22 Federico Nesti, Alessandro Biondi, and Giorgio Buttazzo. Detecting Adversarial Examples
by Input Transformations, Defense Perturbations, and Voting. IEEE Transactions on Neural
Networks and Learning Systems, August 2021.


23 Marco Pacini, Federico Nesti, Alessandro Biondi, and Giorgio Buttazzo. X-BaD: A Flexible
Tool for Explanation-Based Bias Detection. In Proc. of the IEEE International Conference on
Cyber Security and Resilience, Online event, July 26–28, 2021.
24 M. Pagani, A. Biondi, M. Marinoni, L. Molinari, G. Lipari, and G. Buttazzo. A Linux-Based
Support for Developing Real-Time Applications on Heterogeneous Platforms with Dynamic
FPGA Reconfiguration. Future Generation Computer Systems, To appear.
25 Marco Pagani, Alessio Balsini, Alessandro Biondi, Mauro Marinoni, and Giorgio Buttazzo.
A Linux-based Support for Developing Real-Time Applications on Heterogeneous Platforms
with Dynamic FPGA Reconfiguration. In Proceedings of the 30th IEEE International System-
on-Chip Conference (SOCC 2017), Munich, Germany, September 5–8, 2017.
26 Marco Pagani, Enrico Rossi, Alessandro Biondi, Mauro Marinoni, Giuseppe Lipari, and
Giorgio Buttazzo. A Bandwidth Reservation Mechanism for AXI-based Hardware Accelerators
on FPGAs. In Proc. of the Euromicro Conference on Real-Time Systems (ECRTS 2019),
Stuttgart, Germany, July 9–12, 2019.
27 Francesco Restuccia and Alessandro Biondi. Time-Predictable Acceleration of Deep Neural
Networks on FPGA SoC Platforms. In Proc. of the 42nd IEEE Real-Time Systems Symposium
(RTSS 2021), Online event, December 7–10, 2021.
28 Francesco Restuccia, Alessandro Biondi, Mauro Marinoni, and Giorgio Buttazzo. Safely
preventing unbounded delays during bus transactions in FPGA-based SoC. In Proceedings of
the 28th Annual Int. Symposium on Field-Programmable Custom Computing Machines (FCCM
2020), Fayetteville, Arkansas, USA, May 3–6, 2020.
29 Francesco Restuccia, Marco Pagani, Alessandro Biondi, Mauro Marinoni, and Giorgio Buttazzo.
Is Your Bus Arbiter Really Fair? Restoring Fairness in AXI Interconnects for FPGA SoCs.
ACM Transactions on Embedded Computing Systems, 18(5-51):1–22, October 2019.
30 Francesco Restuccia, Marco Pagani, Alessandro Biondi, Mauro Marinoni, and Giorgio Buttazzo.
Modeling and analysis of bus contention for hardware accelerators in FPGA SoCs. In
Proceedings of the 32nd Euromicro Conference on Real-Time Systems (ECRTS 2020), Online
event, July 7–10, 2020.
31 Enrico Rossi, Marvin Damschen, Lars Bauer, Giorgio Buttazzo, and Jörg Henkel. Preemption
of the Partial Reconfiguration Process to Enable Real-Time Computing with FPGAs. ACM
Transactions on Reconfigurable Technology and Systems, 11(2):10:1–10:24, November 2018.
32 Giulio Rossolini, Alessandro Biondi, and Giorgio Buttazzo. Increasing the Confidence of
Deep Neural Networks by Coverage Analysis. In arXiv:2101.12100 [cs.LG], January 2021.
arXiv:2101.12100.
33 Biruk Seyoum, Alessandro Biondi, and Giorgio Buttazzo. FLORA: FLoorplan Optimizer
for Reconfigurable Areas in FPGAs. ACM Transactions on Embedded Computing Systems,
18(5-73):1–20, October 2019.
34 Tom Simonite. AI Has a Hallucination Problem That’s Proving Tough to Fix, March 12, 2018.
URL: https://www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-
tough-to-fix/.
35 Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian
Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In Proc. of the 2nd
International Conference on Learning Representations (ICLR 2014), Banff, AB, Canada, April
14–16, 2014. URL: http://arxiv.org/abs/1312.6199.
36 The Royal Society. Explainable AI: the basics – policy briefing, 2019. URL: https://
royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability
-policy-briefing.pdf.
37 Shixin Tian, Guolei Yang, and Ying Cai. Detecting Adversarial Examples through Image
Transformation. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence
(AAAI-18), New Orleans, Louisiana, USA, February 2—7, 2018.
