AI Presentation by Izhar Ali and Sahil
NG-RES 2022
In Figure 1, each blue block represents a DNN instance running on the GPU: DNN1 has short and frequent tasks, DNN2 has medium-length tasks, and DNN3 has long and less frequent tasks. Even though the GPU's total utilization is less than its full capacity (95% used), DNN1 misses some of its scheduled activations (the second and fifth instances) because it cannot interrupt DNN3, which takes longer to execute: DNN1 is not running because DNN3 is occupying the GPU. This causes delays and inconsistency in the response time of DNN1.

Figure 1 Example of non-preemptive schedule of three DNNs with different execution times and periods. As clear from the figure, DNN1 experiences longer and variable delays.
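To make the effect shown in Figure 1 concrete, here is a minimal sketch of a non-preemptive scheduler simulation in Python; the task periods and execution times are hypothetical values chosen for illustration, not the ones used in the figure.

```python
# Minimal non-preemptive scheduling simulation (illustrative only).
# Task parameters (period, execution time) are made up, not taken from Figure 1.
tasks = {
    "DNN1": {"period": 20, "exec": 5},    # short, frequent
    "DNN2": {"period": 50, "exec": 20},   # medium
    "DNN3": {"period": 70, "exec": 40},   # long, infrequent
}

HORIZON = 140
time = 0
pending = []                                   # list of (release_time, task_name)
next_release = {name: 0 for name in tasks}
start_delays = {name: [] for name in tasks}

while time < HORIZON:
    # Release all jobs whose release time has arrived.
    for name, params in tasks.items():
        while next_release[name] <= time:
            pending.append((next_release[name], name))
            next_release[name] += params["period"]
    if not pending:
        time += 1
        continue
    # Pick the pending job with the shortest period (rate-monotonic priority),
    # but once started it runs to completion (non-preemptive).
    pending.sort(key=lambda job: tasks[job[1]]["period"])
    release, name = pending.pop(0)
    start_delays[name].append(time - release)   # how long the job had to wait
    time += tasks[name]["exec"]                  # runs without interruption

for name, delays in start_delays.items():
    print(name, "start delays:", delays)
```

Running the sketch shows that the short-period task accumulates large and variable start delays whenever a long job is in execution, which is exactly the behavior highlighted in Figure 1.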
Field Programmable Gate Arrays (FPGAs) are special integrated circuits that can be programmed by the user after manufacturing to perform specific tasks. Unlike GPUs, which are also used for accelerating AI algorithms, FPGAs offer several advantages.

2.3 FPGA-related issues

An interesting alternative to GPUs for accelerating AI algorithms is provided by FPGAs. They are integrated circuits designed to be configured after manufacturing to implement arbitrary logic functions in hardware. As such, they exhibit a highly predictable behavior in terms of execution times. In addition, they consume much less power than GPUs, and existing commercial platforms are characterized by lower weight, encumbrance, and cost. Hence, they represent an ideal solution for battery-operated embedded systems with size, weight, power, and cost (SWaP-C) constraints, such as space robots, satellites, and UAVs.

Nevertheless, FPGAs have other problems when used as DNN accelerators:
- No floating-point unit (FPU) is available on the chip, unless it is explicitly programmed by the user, at the cost of a significant fraction of the available fabric.
- Programming FPGAs is considerably more difficult than programming CPUs or GPUs, and efficient coding requires a deep knowledge of low-level architecture details.
- The frameworks available today for developing AI applications on FPGA-based platforms are less rich and flexible than those available for GPUs, and the same is true for the related libraries and tools.
- The overall FPGA area available in medium-size SoCs could be insufficient to host more than one DNN, or even a single large DNN.
To overcome the problems outlined above, a lot of research has been carried out in recent years.
The absence of an FPU is overcome by performing a preliminary parameter quantization
to convert floating point numbers into integers with n-bit precision. Several quantization
methods have been proposed in the literature [17], including symmetrical, asymmetrical,
non-uniform, and statistical. An extreme quantization converts weights into binary numbers
using the sign function. Courbariaux, Bengio, and David [14] have shown that a binarized
DNN can achieve 98.8% accuracy in classifying the handwritten digits of the MNIST dataset.
Other optimization steps (e.g., network pruning and layer fusion) can also be performed,
both on GPUs and FPGAs, to reduce the computation time and the memory footprint of
trained DNNs while minimizing the loss in accuracy.
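As an illustration of the quantization step described above, here is a minimal NumPy sketch of symmetric uniform quantization to n-bit integers and of binarization with the sign function; the scale computation and rounding policy are simplified assumptions of mine, not the specific scheme of any cited work.

```python
import numpy as np

def quantize_symmetric(weights, n_bits=8):
    """Symmetric uniform quantization of a weight tensor to signed n-bit integers."""
    qmax = 2 ** (n_bits - 1) - 1              # e.g. 127 for 8 bits
    scale = np.max(np.abs(weights)) / qmax    # map the largest magnitude to qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    q = q.astype(np.int8 if n_bits <= 8 else np.int32)
    return q, scale                           # dequantize with q * scale

def binarize(weights):
    """Extreme quantization: keep only the sign of each weight (+1 / -1)."""
    return np.where(weights >= 0, 1, -1).astype(np.int8)

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_symmetric(w, n_bits=8)
print("max reconstruction error:", np.max(np.abs(w - q * s)))
print("binarized weights:\n", binarize(w))
```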
To overcome the limitation of the FPGA area, Biondi et al. [6] proposed a programming
framework, called FRED (see http://fred.santannapisa.it), to support the design, development, and execution of predictable
software on FPGAs. FRED exploits dynamic partial reconfiguration and recurrent execution
to virtualize the FPGA area, thus enabling the user to allocate a larger number of hardware
accelerators than those that could otherwise be fit into the physical fabric. FRED also
integrates a tool for automated floorplanning [33] and a set of runtime mechanisms to enhance
predictability by scheduling hardware resources and regulating bus/memory contentions [29].
An application targeted by FRED consists of a set of software tasks (SW-tasks) running
on the CPU cores that can periodically invoke the execution of hardware accelerators (HW-
tasks) to be dynamically programmed on the FPGA. The communication scheme adopted
between SW-tasks and HW-tasks is illustrated in Figure 2.

In systems using FPGAs, the communication between software tasks (SW-tasks) running on a CPU and hardware tasks (HW-tasks) running on the FPGA is crucial. It works as follows:
1. Shared memory: both SW-tasks and HW-tasks use a shared memory area to exchange data.
2. Requesting hardware acceleration: an SW-task asks an HW-task to perform a certain operation by sending a request for hardware acceleration to the FPGA, together with the bitstream required to configure the accelerator on the FPGA area.
3. Data exchange: the SW-task puts the input data into shared memory, where the HW-task processes them; after processing, the HW-task puts the output results back into shared memory, from which the SW-task retrieves them.

Figure 2 Communication scheme adopted between SW-tasks and HW-tasks in FRED.
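The put-data / request / get-data pattern of Figure 2 can be sketched as follows. This is a generic Python illustration using an in-process shared buffer and hypothetical names (FRAME_SHAPE, sw_task, fake_hw_task); it does not reproduce FRED's actual API, which relies on a user-space daemon and custom Linux kernel drivers.

```python
# Illustrative sketch of the SW-task / HW-task communication pattern
# (put data -> request acceleration -> get results).
import numpy as np
from multiprocessing import shared_memory

FRAME_SHAPE = (64, 64)          # assumed size of the data exchanged
shm = shared_memory.SharedMemory(create=True, size=int(np.prod(FRAME_SHAPE)) * 4)
buffer = np.ndarray(FRAME_SHAPE, dtype=np.float32, buffer=shm.buf)

def sw_task(input_frame, request_acceleration):
    buffer[:] = input_frame                 # 1. put input data into shared memory
    request_acceleration("edge_filter")     # 2. request the HW-task (by accelerator id)
    return buffer.copy()                    # 3. get the results written back by the HW-task

def fake_hw_task(accelerator_id):
    # Stand-in for the hardware accelerator: processes the shared buffer in place.
    buffer[:] = np.clip(buffer * 2.0, 0.0, 1.0)

result = sw_task(np.random.rand(*FRAME_SHAPE).astype(np.float32), fake_hw_task)
print("result checksum:", float(result.sum()))

del buffer          # release the view before closing the shared memory block
shm.close()
shm.unlink()
```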
This framework allows efficient use of FPGA resources and ensures that SW-tasks and HW-tasks work seamlessly together.

The FPGA virtualization is achieved through a timesharing mechanism that replaces inactive accelerators (i.e., those that finished their computation and are waiting for the next activation) with active ones. In this way, the total number of HW-tasks that can run on the FPGA can be much higher than the number of HW-tasks that would statically fit in the physical area available on the fabric. Hence, this mechanism virtualizes the FPGA by creating a virtual area much larger than the physical one. The resulting approach is similar to multitasking, where tasks continuously change their context, or to a virtual memory mechanism, where memory pages are swapped between the hard disk and dynamic memory.

Thanks to a set of design choices and a proper scheduling infrastructure, the resource contention delays experienced by tasks running under FRED are bounded and predictable, and hence they can be estimated to verify the system schedulability.

Full support for FRED has been developed under both FreeRTOS and Linux [25, 24]. The Linux support comes with a user-space daemon and a set of custom kernel drivers to handle the processor configuration port (PCAP) and the shared-memory communication buffers between the CPUs and the FPGA. A preemptable reconfiguration interface has also been developed by Rossi et al. [31] to achieve a finer control in scheduling the reconfiguration requests and a better control on the reconfiguration delays incurred by HW-tasks.
Restuccia and Biondi proposed techniques to enhance the predictability of DNN
execution on FPGA-based platforms using the Vitis AI framework by Xilinx.
Recently, Restuccia and Biondi [27] proposed a set of techniques for accelerating DNNs
on FPGA-based platforms with a highly predictable timing behavior under the Vitis AI
framework by Xilinx. In Vitis AI, the execution of the DNN layers relies on the deep
learning processing unit (DPU) core, a hardware accelerator optimized for the execution
of convolutional DNNs. Based on an extensive profiling campaign conducted on the Xilinx
Zynq Ultrascale+ platform, they proposed an execution model for the DPU employed to
derive a response time analysis for guaranteeing real-time application constraints.
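The DPU execution model and the analysis of [27] are not reproduced here; to give the flavor of a response-time analysis, the following sketch implements the classical fixed-priority recurrence R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j by fixed-point iteration, with made-up task parameters.

```python
import math

def response_time(C, T, i):
    """Classical fixed-priority response-time recurrence (preemptive, implicit deadlines).
    C[i], T[i]: worst-case execution time and period of task i; tasks 0..i-1 have higher priority.
    Returns the worst-case response time, or None if it exceeds the period (deadline miss)."""
    R = C[i]
    while True:
        interference = sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        R_next = C[i] + interference
        if R_next > T[i]:
            return None
        if R_next == R:
            return R
        R = R_next

# Hypothetical task set (e.g., three DNN inference tasks invoking the accelerator).
C = [5, 20, 40]
T = [20, 50, 140]
for i in range(len(C)):
    print(f"task {i}: worst-case response time = {response_time(C, T, i)}")
```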
Safety-critical systems, like self-driving cars, use AI algorithms and consist of components with varying levels of complexity and requirements. Understanding how to secure these systems is crucial for their safe operation.

3 Security issues

Safety-critical systems that make use of AI algorithms consist of several components with different complexity and requirements.

Consider for example a self-driving car. The functions responsible for steering, throttle modulation, braking, and engine control are highly critical and must satisfy stringent requirements in terms of safety, security, and real-time behavior. As such, they need to be managed by a real-time operating system, which must be certified to guarantee the required safety integrity levels.

On the other hand, high-level functions related to sensory perception, object tracking, and vehicle localization, which heavily rely on AI algorithms, need to be executed on a rich operating system (e.g., Linux) to exploit all the available device drivers, libraries, and development frameworks required for such complex computations. These components are far from being certified and offer a large software surface for cyber attacks.

A real-world example of such a security breach occurred in 2015, when two hackers, Charlie Miller and Chris Valasek, discovered a vulnerability in the Jeep Cherokee, which they exploited to remotely access the vehicle and gain physical control, including steering, braking, turning on the wipers, blasting the radio, and finally, killing the engine to bring the vehicle to a complete stop [15]. They wrote a long paper [19] in which they explain how they accessed the CAN bus through the infotainment system, detailing the full attack chain.
Although cyber attacks on non-critical components accessible through a wireless network cannot be avoided completely, the security of a system can be greatly enhanced by preventing such attacks from spreading to more critical components. This can be achieved by isolating software components with different levels of criticality into different execution domains, through the use of a hypervisor.
A hypervisor is a software layer that manages multiple execution domains (virtual machines) on the same hardware platform; each domain can run its own operating system independently.

3.1 Hypervisor-based architecture

A hypervisor is a software layer above the hardware platform able to create and manage multiple execution domains, each hosting a virtual machine with its own operating system. An example of a hypervisor-based architecture for safety-critical embedded systems is the one proposed by the SPHERE project [7], which supports the creation of multiple virtual machines on the same computing platform, providing time/memory isolation, security, real-time communication channels, and I/O virtualization to allow different virtual machines to share the peripheral devices.

Figure 3 shows an example of a hypervisor managing two execution domains with different levels of criticality. One domain hosts a virtual machine running all safety-critical functions on a real-time operating system (RTOS), while the other domain hosts another virtual machine
running AI-powered software on the Linux operating system. Such an architecture has been
Low-criticality domain: this domain is for tasks that are not safety-critical but still important, like AI-powered applications. It uses a general-purpose operating system, such as Linux, and its tasks can be interrupted and rescheduled without severe consequences.

High-criticality domain: this domain handles safety-critical tasks, for which any delay or error can have serious consequences. It uses a real-time operating system (RTOS), designed to handle tasks with strict timing requirements, and it ensures that critical tasks get the priority they need.

Figure 3 A hypervisor hosting a low-criticality domain (AI-powered software on Linux) and a high-criticality domain (safety-critical software on an RTOS).
4 AI-related issues
Deep neural networks have shown an impressive performance in several recognition tasks, but
their suitability for mission-critical applications has been questioned by Szegedy et al. [35]
and many other authors [34], who showed that imperceptible perturbations added to an
input sample can fool a neural network into perceiving objects that are not present in the
input. Such perturbed inputs are called adversarial examples (AEs) and represent a serious
threat for the security of AI-based systems. An example of an adversarial image is shown in
Figure 4, where the picture of a stop sign is perturbed in such a way that it is perceived by
the network as a parking sign with a high confidence score.
Although a significant effort has been spent to develop defense methods against adversarial
examples [5], the problem remains open and challenging, since these attacks violate the
fundamental stationarity assumption of learning algorithms, i.e., that training and testing
data are drawn from the same distribution.
The trustworthiness of DNNs is also threatened by genuine inputs characterized by a
distribution that is quite different from that of the training samples. Such inputs are referred
to as out-of-distribution (OoD) samples. Two examples of OoD images are shown in Figure 5.
Figure 5 Two examples of OoD images that could cause a deep neural network to produce a
wrong output.
Considering that the prediction score of a DNN can be high in the presence of both AEs
and OoD samples, the output score of the best classified class cannot be considered as an
indication of the prediction confidence of the model.
Several methods have been proposed in the literature to detect AEs and OoD samples.
Two of these methods are presented below in more detail.

4.1 Detection by input transformations
One method for detecting AEs relies on the fact that DNN models are usually robust to
certain types of input transformations (e.g., translation, rotation, scaling, blurring, noise
addition, etc.). This means that, if a genuine image is correctly recognized by a DNN, the
prediction score reduces only slightly when the same image is translated, rotated, or modified with one of the mentioned transformations. However, the same is not true for most AEs: it has been observed that they are more sensitive to input transformations, which cause a much higher degradation in the prediction score.

In other words, if you make small changes to a genuine image, like rotating it, resizing it (scaling), blurring it, or adding some noise, the DNN still recognizes the image well, and its prediction score (how confident the DNN is about its guess) drops only a little. Adversarial examples behave differently: when you apply the same small changes (rotation, scaling, blurring, added noise, etc.) to an AE, the DNN's prediction score drops a lot more.
This property of AEs has been exploited by some authors [37] to detect whether an input x is adversarial or genuine. If y = f(x) is the top class score produced by the DNN on input x and y_T = f(T(x)) is the score produced on the transformed input T(x), a simple detection method is to consider x to be adversarial if the difference y − y_T is higher than a given threshold τ.

Unfortunately, it is possible to generate adversarial examples that are robust to input transformations. To cope with this case, Nesti et al. [22] proposed a new method, called defense perturbation, capable of detecting AEs that are robust to input transformations: the defense perturbation is generated by a proper optimization process capable of making robust AEs sensitive again to input transformations. Furthermore, the paper introduces multi-network AEs that can fool multiple DNN models simultaneously, presenting a solution for detecting them.
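A minimal sketch of the basic detection rule described above (flag x when y − y_T exceeds τ) is shown below; the model, transformation, and threshold are placeholders chosen for illustration, while [37] and [22] rely on more elaborate transformations, defense perturbations, and voting schemes.

```python
import numpy as np

def is_adversarial(x, model, transform, tau=0.2):
    """Flag x as adversarial if the top-class score drops by more than tau
    after applying an input transformation T."""
    scores = model(x)                      # model returns a vector of class scores
    top_class = int(np.argmax(scores))
    y = scores[top_class]                  # y   = f(x)
    y_t = model(transform(x))[top_class]   # y_T = f(T(x)) for the same class
    return (y - y_t) > tau

# Placeholder model and transformation, only to make the sketch runnable.
def toy_model(x):
    logits = np.array([x.mean(), -x.mean()])
    e = np.exp(logits - logits.max())
    return e / e.sum()                     # softmax class scores

def add_noise(x, sigma=0.1):
    return x + np.random.normal(0.0, sigma, size=x.shape)

x = np.random.rand(8, 8)
print("flagged as adversarial:", bool(is_adversarial(x, toy_model, add_noise)))
```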
4.2 Detection by coverage analysis

A different approach for detecting AEs is based on a deeper analysis of the neuron activation values in the different DNN layers. In fact, in order to force a DNN model to classify an input with a desired wrong class, AEs usually cause an overactivation of some neurons in different network layers. To identify such neurons, Rossolini et al. [32] presented a new coverage analysis methodology capable of detecting both adversarial and out-of-distribution inputs.

The approach works in two distinct phases: in a preliminary (off-line) phase, a trusted dataset is presented to the DNN and the neuron outputs, in each layer and for each class, are analyzed and aggregated into a set of covered states, which all together represent a sort of signature describing how the model responds to the trusted samples for each given class. Then, at runtime, each new input is subject to an evaluation phase, in which the activation state produced by the input in each layer is compared with the corresponding signature for the class predicted by the network. The higher the number of activation values outside the range observed during the presentation of the trusted dataset, the higher the probability that the current input is not trustworthy. The approach is schematically illustrated in Figure 6.

The nice thing about this approach is that the comparison against the signature allows computing a confidence value c, distinct from the prediction score, indicating how much the current prediction can be trusted.

In short, coverage analysis works in two main phases:
1. Offline phase (before deploying the AI): take a trusted dataset of correct, reliably labeled examples, feed them to the model, record how the neurons react, and aggregate these responses into a signature for each class. (A trusted dataset provides accurate and reliable examples, much like a child shown a trusted set of clear, correctly labeled pictures of dogs and cats can later recognize new pictures more easily and spot unusual or incorrect ones.)
2. Runtime (testing) phase: when a new input is given to the AI, check how the neurons react, compare this reaction with the stored signature for the predicted class, and if the reactions are very different from the signature (the normal pattern), conclude that the input might be tricky or unfamiliar.

In summary, these detection methods provide solutions for some cases, such as recognizing genuine versus tricky inputs.
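The following is a minimal sketch of this idea (my own simplification in Python/NumPy, not the actual method of [32]): record per-neuron activation ranges on a trusted dataset to form a per-class signature, then at runtime count how many activations fall outside the recorded ranges.

```python
import numpy as np

def build_signature(activations_per_class):
    """Offline phase: aggregate trusted activations into per-class (min, max) ranges.
    activations_per_class: dict class -> array of shape (num_samples, num_neurons)."""
    return {c: (a.min(axis=0), a.max(axis=0)) for c, a in activations_per_class.items()}

def confidence(activation, predicted_class, signature):
    """Runtime phase: fraction of neuron activations that stay inside the recorded ranges."""
    lo, hi = signature[predicted_class]
    covered = np.logical_and(activation >= lo, activation <= hi)
    return covered.mean()   # close to 1.0 -> input behaves like the trusted samples

# Toy example with random "activations" for two classes.
rng = np.random.default_rng(0)
trusted = {0: rng.normal(0, 1, (100, 32)), 1: rng.normal(3, 1, (100, 32))}
sig = build_signature(trusted)
genuine = rng.normal(0, 1, 32)       # resembles the trusted samples of class 0
unusual = rng.normal(10, 1, 32)      # far from anything seen in the trusted set
print("confidence (genuine):", confidence(genuine, 0, sig))
print("confidence (unusual):", confidence(unusual, 0, sig))
```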
4.3 Interpretability issues
Another problem of complex machine learning models is that they are hardly interpretable by humans. In fact, they encode their input-output function in millions of parameters and, therefore, it is not trivial to understand why a given input produces a certain output. Many people demand transparency and precise mechanisms as a prerequisite for trust. Especially for safety-critical applications, developers cannot trust a critical decision system if it is not possible to explain the reasons that led to that decision.

To address this issue, a new branch of research, referred to as Explainable AI (XAI) [36], started around 2014 with the goal of reconstructing and representing in a comprehensible fashion the features that caused an AI model to produce a given output.
Confidence value c: this value indicates how much the AI's decision can be trusted. If the activations produced by a new input closely match the stored signature, the confidence value will be high; if many activations differ from the stored signature, the confidence value will be low, indicating that the input might be tricky or unusual. (The active state is the way the neurons of the network react when they see an input; the aggregation algorithm combines the reactions to the trusted dataset into a signature.)

Figure 6 Overview of the coverage-based method proposed in [32]: (top) off-line phase for producing the signature for each layer and each class; (bottom) online detection phase based on comparing the current activation states with the stored signature. It produces a confidence value c indicating how much the current prediction can be trusted.

Thanks to these methods, it has been found that, in some cases, a DNN learned to
frequently in most training samples of a specific class (e.g., water for the class ship, snow for
the class wolf, or even copyright tags present in most of the images of a given class). Such
biases in the training data limit the generalization capabilities of a neural model and can
cause wrong predictions that would be quite harmful in applications like self-driving cars or
medical diagnoses.
Identifying such biases in the training set is the main objective of XAI, which in the
last years proposed several methods and tools for making AI decisions more interpretable to
humans [21]. For instance, Pacini et al. [23] presented X-BaD, a flexible tool for detecting
biases in training sets while providing user-interpretable explanations for DNN outputs. The
tool can also be used to compare the performance of different XAI methods.
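To give a sense of how such explanation methods work, here is a minimal sketch of one simple XAI technique, occlusion sensitivity: mask one region of the input at a time and measure how much the prediction score drops. This is my own illustrative example, not the X-BaD tool or any specific method from [21, 23]; the toy model and patch size are placeholders.

```python
import numpy as np

def occlusion_map(image, model, target_class, patch=4):
    """Slide a gray patch over the image and record how much the target-class
    score drops at each position (high drop = region the prediction depends on)."""
    base = model(image)[target_class]
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5      # gray out one patch
            heat[i // patch, j // patch] = base - model(occluded)[target_class]
    return heat

# Placeholder "model": scores class 0 by the brightness of the top-left quadrant only,
# mimicking a model that relies on a spurious region of the image.
def toy_model(img):
    s = img[:img.shape[0] // 2, :img.shape[1] // 2].mean()
    return np.array([s, 1.0 - s])

img = np.random.rand(16, 16)
print(occlusion_map(img, toy_model, target_class=0))
```

In the toy example, only the top-left patches receive high sensitivity values, revealing that the model's decision depends on that region alone, which is the kind of bias such tools aim to expose.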
In the self-driving car example:
- The high-performance part does complex tasks like recognizing images and making decisions.
- The safety-critical part keeps the car's important functions safe, like braking, steering, and controlling speed.
- If the AI fails or makes a risky choice, the backup system takes over to stop the car safely.
The first version of this system was successfully tested on an inverted pendulum, where the Safety Monitor uses a Lyapunov-based approach to check whether the current state is in the safe area or in a warning area.
A first version of the architecture has been successfully implemented and tested on a
control system for an inverted pendulum. In this case, the Safety Monitor uses a Lyapunov
approach to detect when the system state exits a safe region of the state space and enters a
margin region in which the stability can still be recovered by the safe controller. When this
happens, the control is given to the backup controller. The AI controller is re-enabled when
the system state goes back to the safe zone.
Figure 8 illustrates a simplified and qualitative representation of the state space for an
inverted pendulum, where the black dot represents the current system state, the safe region
is visualized in green, the recoverable region in yellow, and the unstable region in gray.
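As a rough illustration of the switching logic described above (not the actual implementation of the cited work: the Lyapunov function, thresholds, and controllers below are made-up placeholders), the monitor can be sketched as follows.

```python
import numpy as np

# Made-up quadratic Lyapunov function V(x) = x' P x for a two-state pendulum model.
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
V_SAFE = 1.0      # boundary of the safe (green) region
V_MARGIN = 4.0    # boundary of the recoverable (yellow) region

def lyapunov(x):
    return float(x @ P @ x)

def select_controller(x, ai_controller, backup_controller):
    """Hand control to the backup controller as soon as the state leaves the safe region;
    the AI controller is (re-)enabled whenever the state is back in the safe zone."""
    v = lyapunov(x)
    if v <= V_SAFE:
        return ai_controller(x), "AI"
    elif v <= V_MARGIN:
        return backup_controller(x), "backup (recoverable region)"
    else:
        return backup_controller(x), "backup (outside recoverable region)"

ai = lambda x: float(-np.array([1.0, 0.5]) @ x)      # placeholder AI control law
backup = lambda x: float(-np.array([4.0, 2.0]) @ x)  # conservative stabilizing law
for state in (np.array([0.3, -0.1]), np.array([1.5, 0.5])):
    u, mode = select_controller(state, ai, backup)
    print(f"state={state}, u={u:.2f}, mode={mode}")
```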
Along the same lines, Belluardo et al. [4] presented a safe and secure multi-domain software
architecture tailored for autonomous driving.
AI algorithms and models are very good at tasks like understanding and making decisions, but they still have the weaknesses discussed above.

6 Conclusions

This paper presented a set of problems that today prevent the use of deep learning algorithms in mission-critical systems, such as self-driving vehicles, autonomous robots, and medical applications. Among them, the most relevant issues are the low predictability of modern heterogeneous architectures and the high vulnerability of AI software to cyber attacks. Fortunately, the research community is readily reacting to address such problems, and some solutions to overcome such limitations have already been proposed at different architecture levels. However, the problems are many and complex, so several issues remain unsolved, and many require a joint effort from the AI and the real-time research communities.

References

1 Erika Enterprise RTOS. URL: https://www.erika-enterprise.com/.
2 The CLARE Software Stack. URL: https://accelerat.eu/clare.
3 Luca Abeni and Giorgio Buttazzo. Resource Reservation in Dynamic Real-Time Systems.
Real-Time Systems, 27(2):123–167, July 2004.
4 L. Belluardo, A. Stevanato, D. Casini, G. Cicero, A. Biondi, and G. Buttazzo. A Multi-domain
Software Architecture for Safe and Secure Autonomous Driving. In Proc. of the 27th IEEE
International Conference on Embedded and Real-Time Computing Systems and Applications
(RTCSA 2021), Online event, August 18–20, 2021.
5 Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine
learning. Pattern Recognition, 84:317–331, December 2018.
6 Alessandro Biondi, Alessio Balsini, Marco Pagani, Enrico Rossi, Mauro Marinoni, and Giorgio
Buttazzo. A Framework for Supporting Real-Time Applications on Dynamic Reconfigurable
FPGAs. In Proc. of the IEEE Real-Time Systems Symposium (RTSS 2016), Porto, Portugal,
November 29 – December 2, 2016.
7 Alessandro Biondi, Daniel Casini, Giorgiomaria Cicero, Niccolò Borgioli, and Giorgio But-
tazzo et al. SPHERE: A Multi-SoC Architecture for Next-generation Cyber-Physical Systems
Based on Heterogeneous Platforms. IEEE Access, 9:75446–75459, May 2021.
8 Alessandro Biondi, Federico Nesti, Giorgiomaria Cicero, Daniel Casini, and Giorgio Buttazzo.
A Safe, Secure, and Predictable Software Architecture for Deep Learning in Safety-Critical
Systems. IEEE Embedded Systems Letters, 12(3):78–82, September 2020.
9 N. Capodieci, R. Cavicchioli, M. Bertogna, and A. Paramakuru. Deadline-Based Scheduling
for GPU with Preemption Support. In Proc. of the 39th IEEE Real-Time Systems Symposium
(RTSS 2018), Nashville, Tennessee, USA, December 11–14, 2018.
10 Daniel Casini, Alessandro Biondi, and Giorgio Buttazzo. Timing Isolation and Improved
Scheduling of Deep Neural Networks for Real-Time Systems. Software: Practice and Experience,
50(9):1760–1777, September 2020.
11 Daniel Casini, Alessandro Biondi, Giorgiomaria Cicero, and Giorgio Buttazzo. Latency Analysis
of I/O Virtualization Techniques in Hypervisor-Based Real-Time Systems. In Proceedings
of the 27th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS
2021), Online event, May 18–21, 2021.
12 R. Cavicchioli, N. Capodieci, and M. Bertogna. Memory Interference Characterization Between
CPU Cores and Integrated GPUs in Mixed-Criticality Platforms. In Proc. of the 22nd IEEE
International Conference on Emerging Technologies and Factory Automation (ETFA 2017),
Limassol, Cyprus, September 12–15, 2017.
13 Giorgiomaria Cicero, Alessandro Biondi, Giorgio Buttazzo, and Anup Patel. Reconciling
Security with Virtualization: A Dual-Hypervisor Design for ARM TrustZone. In Proceedings of
the 18th IEEE International Conference on Industrial Technology (ICIT 2018), Lyon, France,
February 20–22, 2018.
14 Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training Deep
Neural Networks with binary weights during propagations. In Proc. of the 29th Conference
on Neural Information Processing Systems (NIPS 2015), Montreal, Canada, December 7–10,
2015.
15 Blane Erwin. The Groundbreaking 2015 Jeep Hack Changed Automotive Cybersecurity, 2021.
URL: https://fractionalciso.com/the-groundbreaking-2015-jeep-hack-changed-automotive-cybersecurity/.
16 Giulia Ferri, Giorgiomaria Cicero, Alessandro Biondi, and Giorgio Buttazzo. Towards the
Hypervision of Hardware-based Control-Flow Integrity for Arm Platforms. In Proceedings of
the Italian Conference on CyberSecurity (ITASEC 2019), Pisa, Italy, February 12–15, 2019.
17 Yunhui Guo. A Survey on Methods and Theories of Quantized Neural Networks. ArXiv,
abs/1808.04752, 2018.
18 C. Liu and J. Layland. Scheduling algorithms for multiprogramming in a hard real-time
environment. Journal of the ACM, 20(1):40–61, January 1973.
19 Charlie Miller and Chris Valasek. Remote Exploitation of an Unaltered Passenger Vehicle,
August 10, 2015. URL: http://illmatics.com/Remote%20Car%20Hacking.pdf.
20 Paolo Modica, Alessandro Biondi, Giorgio Buttazzo, and Anup Patel. Supporting Temporal
and Spatial Isolation in a Hypervisor for ARM Multicore Platforms. In Proceedings of the 18th
IEEE International Conference on Industrial Technology (ICIT 2018), Lyon, France, February
20–22, 2018.
21 Christoph Molnar. Interpretable Machine Learning: A Guide for Making Black Box Models
Explainable, 2021. URL: https://christophm.github.io/interpretable-ml-book/.
22 Federico Nesti, Alessandro Biondi, and Giorgio Buttazzo. Detecting Adversarial Examples
by Input Transformations, Defense Perturbations, and Voting. IEEE Transactions on Neural
Networks and Learning Systems, August 2021.
23 Marco Pacini, Federico Nesti, Alessandro Biondi, and Giorgio Buttazzo. X-BaD: A Flexible
Tool for Explanation-Based Bias Detection. In Proc. of the IEEE International Conference on
Cyber Security and Resilience, Online event, July 26–28, 2021.
24 M. Pagani, A. Biondi, M. Marinoni, L. Molinari, G. Lipari, and G. Buttazzo. A Linux-Based
Support for Developing Real-Time Applications on Heterogeneous Platforms with Dynamic
FPGA Reconfiguration. Future Generation Computer Systems, To appear.
25 Marco Pagani, Alessio Balsini, Alessandro Biondi, Mauro Marinoni, and Giorgio Buttazzo.
A Linux-based Support for Developing Real-Time Applications on Heterogeneous Platforms
with Dynamic FPGA Reconfiguration. In Proceedings of the 30th IEEE International System-
on-Chip Conference (SOCC 2017), Munich, Germany, September 5–8, 2017.
26 Marco Pagani, Enrico Rossi, Alessandro Biondi, Mauro Marinoni, Giuseppe Lipari, and
Giorgio Buttazzo. A Bandwidth Reservation Mechanism for AXI-based Hardware Accelerators
on FPGAs. In Proc. of the Euromicro Conference on Real-Time Systems (ECRTS 2019),
Stuttgart, Germany, July 9–12, 2019.
27 Francesco Restuccia and Alessandro Biondi. Time-Predictable Acceleration of Deep Neural
Networks on FPGA SoC Platforms. In Proc. of the 42nd IEEE Real-Time Systems Symposium
(RTSS 2021), Online event, December 7–10, 2021.
28 Francesco Restuccia, Alessandro Biondi, Mauro Marinoni, and Giorgio Buttazzo. Safely
preventing unbounded delays during bus transactions in FPGA-based SoC. In Proceedings of
the 28th Annual Int. Symposium on Field-Programmable Custom Computing Machines (FCCM
2020), Fayetteville, Arkansas, USA, May 3–6, 2020.
29 Francesco Restuccia, Marco Pagani, Alessandro Biondi, Mauro Marinoni, and Giorgio Buttazzo.
Is Your Bus Arbiter Really Fair? Restoring Fairness in AXI Interconnects for FPGA SoCs.
ACM Transactions on Embedded Computing Systems, 18(5-51):1–22, October 2019.
30 Francesco Restuccia, Marco Pagani, Alessandro Biondi, Mauro Marinoni, and Giorgio Buttazzo.
Modeling and analysis of bus contention for hardware accelerators in FPGA SoCs. In
Proceedings of the 32nd Euromicro Conference on Real-Time Systems (ECRTS 2020), Online
event, July 7–10, 2020.
31 Enrico Rossi, Marvin Damschen, Lars Bauer, Giorgio Buttazzo, and Jörg Henkel. Preemption
of the Partial Reconfiguration Process to Enable Real-Time Computing with FPGAs. ACM
Transactions on Reconfigurable Technology and Systems, 11(2):10:1–10:24, November 2018.
32 Giulio Rossolini, Alessandro Biondi, and Giorgio Buttazzo. Increasing the Confidence of
Deep Neural Networks by Coverage Analysis. In arXiv:2101.12100 [cs.LG], January 2021.
arXiv:2101.12100.
33 Biruk Seyoum, Alessandro Biondi, and Giorgio Buttazzo. FLORA: FLoorplan Optimizer
for Reconfigurable Areas in FPGAs. ACM Transactions on Embedded Computing Systems,
18(5-73):1–20, October 2019.
34 Tom Simonite. AI Has a Hallucination Problem That’s Proving Tough to Fix, March 12, 2018.
URL: https://www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-tough-to-fix/.
35 Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian
Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In Proc. of the 2nd
International Conference on Learning Representations (ICLR 2014), Banff, AB, Canada, April
14–16, 2014. URL: http://arxiv.org/abs/1312.6199.
36 The Royal Society. Explainable AI: the basics – policy briefing, 2019. URL: https://royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf.
37 Shixin Tian, Guolei Yang, and Ying Cai. Detecting Adversarial Examples through Image
Transformation. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence
(AAAI-18), New Orleans, Louisiana, USA, February 2–7, 2018.