The EU AI Act meets MDR
Everything AI-enabled medical device manufacturers need to know.
Contents

Introduction
The new legislative framework
AI Act & MDR scope interplay
Intersection of MDR and AIA
AIA classification and its interplay with MDR
Misconceptions when classifying AI-enabled medical devices
Economic operators
Additional requirements set by the AIA for high-risk AI systems
Quality management requirements
Risk management
Data and data governance
Post-market monitoring system
Systems for reporting serious incidents and malfunctions
Technical documentation requirements
Product requirements
Automatically generated logs
Accuracy, robustness, and cybersecurity
Human oversight
Transparency and provision of information to deployers
Accessibility
Declaration of conformity
AI database and registration
CE mark
Conformity assessments & Notified Bodies
Notified Body testing
Notified Body certificate
Changes to AI systems
Managing changes under the AIA
Defining significant changes and substantial modifications
Implications for other AIA actors
Documenting pre-determined changes
Conclusion
Practical use cases
Peace of Mind LLC.
Radiopic Limited Power
Reality Vieux
References
© 2025 BSI. All rights reserved.
Introduction
For over 30 years, medical devices have been regulated at the European level, with a well-established infrastructure ensuring their safety and efficacy. The introduction of the Medical Device Directives (MDD)1 and later the Medical Device Regulation (MDR)2 has created a framework for managing the risks associated with medical technologies.

Recent advancements in Artificial Intelligence (AI) are driving a digital transformation across the healthcare sector. AI is increasingly embedded in medical devices, ranging from AI systems that detect breast cancer during mammograms3 and wearable patient monitoring solutions for virtual patient care4 to autonomous robotic surgeons5. While these advances present significant benefits to patients, they also raise new risks and safety concerns. For example, biased data, often due to underrepresentation of minority groups in medical datasets, can lead to inaccurate or misleading diagnoses for these populations. This can result in disparities in treatment, potentially jeopardizing patient safety and exacerbating discrimination in healthcare6.

The European Union (EU) considered that the existing EU legal instruments that directly regulate medical devices, such as the MDR and the General Data Protection Regulation (GDPR), are not sufficient to handle the specific challenges posed by AI models and systems. In this context, the European Commission recently published the Artificial Intelligence Act (AIA)7, a Regulation that aims to harmonize AI rules across the EU and create an ecosystem of trust in AI by aligning its use with European values, fundamental rights, and principles.

This whitepaper explores the key aspects of the AIA and MDR that manufacturers of AI-enabled medical devices need to understand. It focuses on the interplay between these regulations, the classification of AI systems, the conformity assessment process, the roles of economic operators, and the additional requirements the AIA imposes on high-risk AI-enabled medical devices. Manufacturers will gain insights into how to integrate AIA requirements into their existing MDR compliance frameworks and prepare for the challenging landscape of AI regulation.
The new legislative framework
The AIA is a horizontal regulation, meaning it applies across all sectors and industries, not just one specific field. This approach ensures that AI models and systems, no matter which domain they fall under, follow a consistent set of rules. The AIA aims to provide clarity and promote uniform regulation of AI technologies across the EU, creating a fair competitive environment and equal opportunities for businesses.

The AIA and MDR align with the EU's New Legislative Framework (NLF) for CE marking, which aims to improve the internal market by enhancing product safety, boosting conformity assessments, and clarifying CE marking. The NLF includes Regulation (EC) 765/20088, Decision 768/20089, and Regulation (EU) 2019/102010. Decision 768/2008, in particular, sets a common framework for product marketing and serves as the template for future product harmonization laws. Currently, 27 pieces of harmonization legislation, including the AIA, MDR, Toy Safety Directive, Low Voltage Directive, and Machinery Regulation, are based on this template, which is why they often share similar structures.

The AIA should be seen as complementary legislation to existing product safety laws11. In fact, the AIA clarifies in Article 2(9) that its rules should be applied without prejudice to existing Union legal acts related to consumer protection and product safety.

The intention of the AIA is to avoid inconsistencies when applying several EU laws at the same time. For this reason, the principle of lex specialis vs lex generalis is to be applied whenever a matter is regulated by two different rules: the more specific rule will 'win' over the more general one. In this regard, we understand the AIA to be considered lex generalis, which sets general requirements on the use and risks of AI, while the MDR should be considered lex specialis, which contains more specific requirements with respect to the safe use of AI in medical devices.

This approach should ensure that products covered by both the AIA and MDR are not subjected to conflicting regulatory requirements.
AI Act & MDR scope interplay
At the EU level, medical devices, including those with AI, are primarily regulated under the MDR. The MDR aims to guarantee a high level of health and safety of medical devices while supporting innovation13.

According to the MDR, a medical device means 'any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes', for example, 'diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease'14. Software can also be part of a medical device, improving the device's functionalities. Depending on the intended purpose of the software, we need to differentiate the following:

• Medical Device Software (MDSW15) '(...) is software that is intended to be used, alone or in combination, for a purpose as specified in the definition of a "medical device" in the medical devices regulation (...)'16. Bear in mind that Software as a Medical Device (SaMD) is the term used by the IMDRF Guidance17.

• Software driving or influencing the use of a device means 'software which is intended to drive or influence the use of a (hardware) medical device and does not have or perform a medical purpose on its own, nor does it create information on its own for one or more of the medical purposes described in the definition of a medical device (...).' In other words, the software is considered a part/component of, or an accessory to, the medical device. Under the MDR, an accessory is defined as 'an article which, whilst not being itself a medical device, is intended by its manufacturer to be used together with one or several particular medical device(s) to specifically enable the medical device(s) to be used in accordance with its/their intended purpose(s) or to specifically and directly assist the medical functionality of the medical device(s) in terms of its/their intended purpose(s)'18. Software in a Medical Device (SiMD) is the term used by the IMDRF Guidance19.
In this whitepaper, we will adopt the terminology from the Medical Device Coordination Group (MDCG) guidance on the qualification and classification of software under Regulation (EU) 2017/745 (MDR) and Regulation (EU) 2017/746 (IVDR).

In short, whether a device, including software, is a medical device under the MDR depends on two criteria: 1) the objective element of (at least) one of the medical purposes listed in Article 2(1) MDR, and 2) the subjective element of the manufacturer's intention to use the software for a specific medical purpose.

Manufacturers should first assess whether their software qualifies as a medical device under these criteria. If the software is classified as a medical device, it must comply with the MDR.

Fig. 1. Software terms between MDCG and IMDRF guidance
Intersection of MDR and AIA

The next consideration is whether the MDSW or software driving or influencing the use of a device includes AI, as defined by the AIA. If the software qualifies as a medical device and includes AI, it must also comply with the AIA.

The definition of AI was a controversial topic throughout the EU legislative process; the key challenge was to differentiate AI from traditional software systems.

According to the AIA, an AI system is 'a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;'20.

The key criteria that differentiate an AI system from simpler traditional software systems or programming approaches are: (i) inference, (ii) autonomy, and (iii) adaptiveness.
Recital 12 AIA further elaborates on this definition and states that 'inference' refers to 'the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments, and to a capability of AI systems to derive models or algorithms, or both, from inputs or data.'

The techniques that enable inference for AI include 'machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved,' with either implementation going 'beyond basic data processing' and enabling 'learning, reasoning or modelling.' The recital also explains that 'autonomy' means that AI systems have 'some degree of independence of actions from human involvement and of capabilities to operate without human intervention', whereas 'adaptiveness' refers to 'self-learning capabilities, allowing the system to change while in use.'

Now that we have a definition, it is important to understand how AI systems can be presented to the market. AI systems can either:

1. be used on a stand-alone basis, outside of existing product safety laws, or

2. serve as a component of a product, whether physically integrated (embedded) or serving the functionality of the product without being integrated (non-embedded)21.

For an AI system to be evaluated within the scope of product safety law, such as the MDR, it must be associated with a medical device product, as described in the second option. As we will see in more detail in the following section, to fall under the scope of the AIA, the AI system must either function as a safety component of a product or be 'itself a product'.

In summary, we have seen that the regulation of medical devices in the EU, particularly under the MDR, aims to ensure that medical devices, including those incorporating AI, meet stringent safety and performance conditions.

Whether an AI system is classified as a medical device depends on its intended medical purpose and function. If an AI application qualifies as a medical device, it must comply with the MDR; if it also falls under the definition of AI within the AIA, it may be subject to that regulation as well. Understanding this scope is the first step for manufacturers and providers to ensure full compliance with applicable EU regulations.
AIA classification and its interplay with MDR
Under the MDR, medical devices are classified based on their intended purpose and inherent risks. Article 51 MDR introduces four risk classes: Class I (lowest risk), Class IIa (medium risk), Class IIb (medium/high risk), and Class III (highest risk).

The class of a device is decided according to 22 classification rules22, and within this list, Rule 11 explicitly classifies MDSW23 based on the risk it poses to patient safety. Once the class of a device is identified, the manufacturer will need to follow the applicable general safety and performance requirements (GSPRs) set out in Annex I. This system ensures that devices with higher risks undergo stricter regulatory checks, while lower-risk devices face less regulatory burden. Moreover, the purpose of the classification is to guide manufacturers in selecting the appropriate conformity assessment pathway for a medical device. A conformity assessment under the MDR is the process by which a medical device's compliance with the regulatory requirements is evaluated before it enters the EU market.

Similar to the MDR, the AIA follows a risk-based approach: the higher the risk, the stricter the rule. Four risk classes are used: 'unacceptable risk'24, 'high risk'25, 'limited risk'26 and 'minimal risk'27 28. General Purpose AI systems (GPAI) are subject to specific obligations that vary depending on factors such as whether the model is open source, its computing power, and the size of its user base29.

Fig. 2. Risk classification in the AIA

The bulk of the AIA rests on two pillars: the requirements that high-risk systems must satisfy, and the obligations for the economic operators involved in their lifecycle.
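Rule 11's decision logic for MDSW can be sketched in Python. This is an illustrative, non-authoritative paraphrase of MDR Annex VIII, Rule 11; the function name and boolean inputs are hypothetical and no substitute for a documented classification assessment.

```python
def mdr_rule_11_class(
    informs_decisions: bool,       # provides information for diagnostic/therapeutic decisions
    may_cause_death: bool,         # decisions could cause death or irreversible deterioration
    may_cause_serious_harm: bool,  # decisions could cause serious deterioration or surgery
    monitors_physiology: bool,     # monitors physiological processes
    monitors_vital_params: bool,   # vital parameters whose variation endangers the patient
) -> str:
    """Illustrative paraphrase of MDR Annex VIII, Rule 11 for MDSW."""
    if informs_decisions:
        if may_cause_death:
            return "Class III"
        if may_cause_serious_harm:
            return "Class IIb"
        return "Class IIa"
    if monitors_physiology:
        return "Class IIb" if monitors_vital_params else "Class IIa"
    return "Class I"  # all other software

# Software informing a non-critical diagnosis defaults to Class IIa
assert mdr_rule_11_class(True, False, False, False, False) == "Class IIa"
# Monitoring vital physiological parameters escalates to Class IIb
assert mdr_rule_11_class(False, False, False, True, True) == "Class IIb"
# All other software remains Class I
assert mdr_rule_11_class(False, False, False, False, False) == "Class I"
```

The sketch mirrors the rule's structure: decision-informing software defaults to Class IIa and escalates with the severity of the potential consequences.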
According to Article 6 AIA, two classes of high-risk systems can be identified:

1. Stand-alone AI systems listed in Annex III. It is important to note that these systems are not covered by existing product safety laws. This category includes software used in healthcare that does not fall under the MDR. For example, AI systems that play a crucial role in the management and prioritization of emergency calls and services are classified as high-risk AI systems under Annex III30.

2. AI systems covered under Annex I, Section A. These are classified as high-risk if they meet both criteria:

a. they are intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonization legislation listed in Annex I, Section A, such as the MDR; and

b. the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required by that Union harmonization legislation to undergo a third-party conformity assessment before being placed on the market or put into service.

We recognize that the term AI system being 'itself a product' can refer to an independent AI medical device that performs a medical purpose and is intended to be placed on the market or put into service. Therefore, MDSW qualifies as 'itself a product' when the AI software functions independently of any other device31.

However, when the MDSW drives or influences a (hardware) medical device and also serves a medical purpose32, the AI component will only be regarded as a 'safety component' if it is specifically intended to perform a safety function. In both cases, the MDSW would qualify as high-risk under the AIA if classified as Class IIa or higher under the MDR.

On the other hand, AI software driving or influencing the use of a medical device – if intended to perform as an accessory or part/component of a product33 – falls under the AIA's high-risk category only if it drives safety functionalities; otherwise, it is classified as non-high-risk.

Fig. 3. Article 6(1) AIA, risk classification of products covered under Union Harmonization Legislation
To summarize the key aspects, we can outline the following categorization scenarios regarding AI-enabled medical devices:

1. MDSW that is 'itself a product' with its own intended medical purpose: In this case, the MDSW itself is an AI-based system, such as software for automated CT image segmentation aimed at the early diagnosis of specific cancer types. Here, the AI functions as the core medical device with a direct diagnostic purpose.

2. MDSW where AI is a safety component with a medical purpose: In this scenario, the AI drives or influences a (hardware) medical device and acts as a safety-enhancing component with a medical purpose. An example is an insulin pump system with glucose monitoring software, where an AI-based algorithm predicts blood sugar trends using continuous glucose monitoring data. The AI component plays a safety role by adjusting insulin delivery to prevent adverse glycemic events.

3. AI software driving or influencing the use of a medical device, where AI is a safety component without a medical purpose: Here, the AI component ensures the operational safety of the system but does not directly serve a medical purpose. For example, an AI monitoring system that oversees hardware performance in a surgical robot enhances the system's reliability and safety but does not contribute directly to medical decision-making or treatment.

The European Commission plans to release guidelines in early 2025 to clarify its expectations on the classification of AI systems in more detail34.

Fig. 4. Differentiation between AI systems as a 'safety component' of a product and AI systems being 'itself a product'
Misconceptions when classifying AI-enabled medical devices

A common misconception is to think that all AI medical devices will automatically qualify as high-risk under the AIA just because they contain some form of AI. However, we are of the view that this is not accurate, for two reasons:

1. It is true that when we talk about MDSW that is 'itself a product', if it is classified as Class IIa or higher under the MDR, requiring mandatory third-party conformity assessment, it is automatically deemed high-risk under the AIA.

However, when it comes to MDSW that drives or influences a (hardware) medical device, if the AI is not intended to perform a safety function, it will not be classified as high-risk, even if the device requires mandatory third-party conformity assessment under the MDR. The MDR does not explicitly define 'safety component'35; however, the definition of safety component found in Article 3(14) AIA can apply to medical device components.

In the context of AI software driving or influencing the use of a device, while such an AI system may not have a medical purpose (it is merely an accessory), it can potentially be considered a 'safety component' if, again, the AI is intended to perform a safety function36.

It is clear from the above that additional interpretative guidance from the Commission is needed, especially regarding the definition of a 'safety component' under the MDR.

2. Not all AI-enabled medical devices require third-party conformity assessment under the MDR. For example, some Class I medical devices may incorporate AI and maintain their classification under the MDR. In this scenario, the AI-enabled Class I device will not be classified as high-risk under the AIA. However, it is important to note that if, for example, MDSW has been classified as Class I in accordance with MDR Rule 11 but the provider then integrates AI functionality, the risk classification can change to a higher class (e.g. Class IIa).

It is also important to note that certain Class I medical devices, such as those with a measuring function, sterile devices, and reusable surgical instruments, require conformity assessment by a third party for those specific aspects. If these devices are enhanced with AI, they will be classified as high-risk under the AI Act.

It is crucial to emphasize that risk classification under the AIA does not change the risk classification under the MDR. In particular, Recital 51 AIA specifies that the classification of an AI system as high-risk under the AIA should not necessarily mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered high-risk under the MDR.

One might wonder why AIA risk classification matters if it does not affect MDR risk classification. The reason is that the AIA risk classification determines which AI systems must adhere to additional obligations and requirements specified by the AIA, in addition to MDR requirements. If an AI medical device is not classified as high-risk under the AIA, then the AIA's high-risk requirements do not apply.

Both the MDR and the AIA follow a risk-based approach in their classification rules, and their schemes should be seen as complementary rather than directly affecting each other. The MDR determines the level of regulatory scrutiny based on the device's risk classification. The AIA, on the other hand, builds on this classification to assess whether the AI device is considered high-risk under its own criteria, adding extra requirements beyond those of the MDR.
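The two-step reasoning above can be condensed into a small decision sketch. This is an illustrative simplification of Article 6(1) AIA as discussed in this section, not legal advice; the function and its boolean inputs are hypothetical.

```python
def aia_high_risk(
    is_medical_device: bool,         # qualifies as a device under MDR Article 2(1)
    is_itself_a_product: bool,       # MDSW with its own medical purpose
    performs_safety_function: bool,  # intended as a 'safety component' (AIA Art. 3(14))
    third_party_assessment: bool,    # MDR requires Notified Body involvement
) -> bool:
    """Illustrative sketch of Article 6(1) AIA for AI-enabled medical devices.

    High-risk requires BOTH: (a) the AI is a safety component of, or is itself,
    a product covered by Annex I Section A legislation (e.g. the MDR), and
    (b) that product must undergo third-party conformity assessment.
    """
    if not is_medical_device:
        return False  # outside the MDR; Annex III stand-alone rules would apply instead
    covered = is_itself_a_product or performs_safety_function  # criterion (a)
    return covered and third_party_assessment                  # criterion (b)

# Class I MDSW with AI, no Notified Body involvement -> not high-risk
assert aia_high_risk(True, True, False, False) is False
# Class IIa AI diagnostic software ('itself a product') -> high-risk
assert aia_high_risk(True, True, False, True) is True
# Driving software with no safety function, even with third-party assessment -> not high-risk
assert aia_high_risk(True, False, False, True) is False
```

The last case captures the first misconception discussed above: third-party conformity assessment alone does not make driving or influencing software high-risk unless it performs a safety function.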
Economic operators
The two primary economic operators recognized under the AIA are providers37 and deployers38 of AI systems. The AIA also introduces specific obligations for importers, distributors, deployers and authorized representatives of providers.

This distribution of roles is loosely aligned with the established categories of economic operators under the MDR.

Providers under the AIA are entities that place AI systems on the market or put them into service. For example, a company that places on the market or puts into service AI-based diagnostic imaging software will act as both a "provider" under the AIA and a "manufacturer" under the MDR. Thus, in this paper, when referring to an AI provider, we are talking about an AI-enabled medical device manufacturer.

Deployers under the AIA are responsible for using AI systems in accordance with their intended purposes. At first glance, this role may appear similar to that of "users" of medical devices under the MDR. However, the responsibilities of users primarily involve following instructions for use, maintaining devices, and reporting any issues. In contrast, deployers under the AIA have broader obligations that encompass the ethical implications and societal impact of AI technologies. These responsibilities include conducting fundamental rights impact assessments, ensuring transparency in AI operations, and adhering to reporting requirements39. Non-compliance can result in penalties or restrictions on the deployment of AI systems.

Something important to bear in mind is Recital 87 AIA, which states that when a high-risk AI system functions as a safety component of a medical device and is not placed on the market independently from the medical device, the medical device manufacturer is the one responsible for ensuring that the overall final product complies with both regulations, the MDR and the AIA.
Additional requirements for high-risk AI systems
The AIA recognizes that a single AI system may fall under different Union harmonization laws. In the case of a medical device with AI, it could pose risks not fully addressed by the MDR, requiring the coordinated application of multiple EU regulations40.

The AIA seeks to ensure consistency, prevent duplication, and minimize additional burdens when applied alongside the MDR. It allows manufacturers to incorporate the necessary compliance measures for the AIA into the existing procedures and documentation required by the MDR. This enables manufacturers of AI-enabled medical devices to streamline their processes by integrating AIA requirements into their MDR submissions. While the MDR takes precedence as the lex specialis for medical devices, manufacturers must still meet the additional AIA requirements, which can be addressed within a unified compliance process41.

The requirements for high-risk AI systems and manufacturers of AI-enabled medical devices are specified in Chapter III, Sections 2 and 3 AIA. When comparing the requirements of both regulations at a broad level (see Table 1 below), significant overlap is evident in areas such as Risk Management, Technical Documentation, Quality Management Systems (QMS), and Post-Market Surveillance (PMS). This overlap suggests some degree of interoperability between the two regulations. However, a detailed clause-by-clause analysis is needed to identify any new or additional requirements introduced by the AIA. Even where requirements appear similar, differences may exist at a more granular level.

In the following section, we will address the most important points that will lead to an adjustment of the AI-enabled medical devices' QMS.
Table 1 - Overview of relevant AI Act requirements for AI-enabled medical device manufacturers
AI Act – High-risk AI provider obligations
Quality management requirements
Risk management system
Data and data governance
Post-market surveillance
Systems for reporting serious incidents and malfunctions
Technical documentation requirements
Product requirements
Automatically generated logs
Accuracy, robustness, and cybersecurity
Human oversight
Transparency and provision of information
Accessibility requirements
Declaration of conformity
CE mark
Quality management requirements
Article 17 AIA mandates that providers of high-risk AI systems implement a robust QMS. This requirement will sound familiar to medical device manufacturers, since it essentially mirrors the obligations prescribed under the MDR. High-risk AI providers must establish and maintain documented policies, procedures, and instructions to ensure compliance.
Below is a table summarizing the key components that must be included in the QMS:
Table 2 - List of QMS Requirements
Regulatory compliance strategy
Design control and design verification
Quality control and assurance procedures
Testing and validation
Technical specifications (harmonized standards)
Data management systems
Risk management
Post market surveillance
Serious incident reporting
Communication with relevant authorities
Record keeping procedures
Resource management (security-of-supply) and accountability management
There are certain aspects of the AI QMS that are unique to the needs of AI systems. For example, data management, risk management, and the testing and validation of AI algorithms require specific attention. Data management, in particular, must ensure that data used in AI systems is secure, accurate, and compliant with data protection laws42 43. This is critical given AI's reliance on large datasets.

The AIA recognizes the unique challenges faced by SMEs and includes provisions for the development of simplified QMS frameworks44. These simplified systems, which will be developed by the Commission, aim to make compliance more accessible for smaller organizations while maintaining the safety and effectiveness of their AI systems.

For AI-enabled medical devices, the AIA allows manufacturers to integrate its specific QMS requirements into their existing MDR QMS framework45. Since medical device manufacturers typically adhere to the ISO 13485 standard, they may wonder whether adopting an additional AI-specific QMS, such as ISO/IEC 42001, is necessary. It is important to note that while manufacturers can voluntarily choose which standards to follow, the mandatory requirement is to incorporate the relevant AIA Article 17 obligations into their current system46.

Risk management

The development and deployment of AI systems come with a wide range of risks, including algorithmic bias, data security vulnerabilities, a lack of transparency, and potential issues with faulty model updates. For companies in the AI value chain, it is essential, and required by regulatory requirements, to manage risks holistically, from identification and analysis to mitigation and continuous monitoring. This applies not only across the AI and data lifecycle (data, algorithms, model performance, cybersecurity) but also from economic, legal, and ethical perspectives. The capacity to reliably identify, accurately assess, and adequately respond to risks is especially critical in high-stakes environments, like in this case, the health sector.

Within the AIA, the requirements on risk management48 are particularly important. To mitigate the risks associated with high-risk AI systems, providers are required to comply with the requirements outlined in Chapter III, Section 2 AIA49. However, the AIA acknowledges that even full compliance with these requirements may not reduce all risks to an acceptable level50. This is where Article 9 comes into play. This article requires providers of high-risk AI systems to identify any remaining risks and implement additional measures to mitigate them.

Pursuant to Article 9, a risk management system needs to be "established, implemented, documented and maintained". It also emphasizes that risk management for AI systems must be an ongoing and iterative process throughout the system's entire lifecycle. This involves the continuous identification and analysis of known and reasonably foreseeable risks to health, safety, and fundamental rights.
The risk management of an AI-enabled medical Special attention must also be given to addressing
device will require reorganization to address risks that affect vulnerable groups, particularly
new objectives related to fundamental individuals under the age of 1858.
rights. While the MDR has traditionally focused
on safety and performance-related risks, the AIA introduces a broader scope of protection, ensuring more comprehensive safeguards for the fundamental rights of individuals, as outlined in the EU Charter of Fundamental Rights51.

However, variations in how member states define and protect fundamental rights can influence the risk assessment process, as the standards for safeguarding human rights may differ across jurisdictions52. Examples include algorithms discriminating against people with dual nationality and low income53.

Providers are required to estimate and evaluate these risks, including those arising from misuse, and to implement targeted measures to mitigate them54. The MDR requires risks to be reduced as far as possible, while the AIA requires their elimination or reduction as far as technically feasible. Risk management measures shall be taken in such a way that as few interactions as possible occur55, but also that residual risks are still considered acceptable56. To identify risks, high-risk AI systems must undergo regular testing, including under real-world conditions, to ensure they meet established safety and performance standards57.

Article 9 will be supported by future harmonized international standards on risk management methodologies. Currently, there is no specific, defined approach concerning fundamental rights.

Something relevant for AI-enabled medical device manufacturers is that they can integrate AI risk management into their existing risk management processes under the MDR. However, compliance with ISO 14971:2019 alone is insufficient, as this standard does not address risks related to business operations, society, or the environment. ISO/IEC 23894 expands on these areas, outlining where they should be considered within an organization's risk management activities.

Additionally, BS/AAMI 34971:2023, Application of ISO 14971 to Machine Learning in Artificial Intelligence - Guide, provides guidance on incorporating AI systems into ISO 14971-based risk management. It is also important to note that ongoing work on the EN standard is focused on addressing the requirements of Article 9 of the EU AIA.
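For orientation, a risk-register entry spanning both MDR-style safety risks and AIA-style fundamental-rights risks might be structured as sketched below. The field names, the 1-5 scoring scale, and the acceptability threshold are illustrative assumptions for this sketch, not values prescribed by the MDR, the AIA, or ISO 14971.

```python
from dataclasses import dataclass

# Illustrative risk-register entry covering both safety and
# fundamental-rights risks. All names and thresholds are assumptions
# for this sketch, not terms defined by either regulation.
@dataclass
class RiskEntry:
    hazard: str                  # e.g. "biased triage recommendation"
    harm: str                    # health, safety, or fundamental-rights impact
    category: str                # "safety" or "fundamental_rights"
    severity: int                # 1 (negligible) .. 5 (catastrophic)
    probability: int             # 1 (improbable) .. 5 (frequent)
    mitigations: list            # measures applied to reduce the risk
    residual_severity: int       # severity after mitigation
    residual_probability: int    # probability after mitigation

    def risk_score(self) -> int:
        return self.residual_severity * self.residual_probability

    def acceptable(self, threshold: int = 6) -> bool:
        # Placeholder criterion: both regimes ultimately require the
        # residual risk to be judged acceptable after mitigation.
        return self.risk_score() <= threshold

entry = RiskEntry(
    hazard="model under-performs for patients with rare skin tones",
    harm="discriminatory misdiagnosis (fundamental-rights impact)",
    category="fundamental_rights",
    severity=4, probability=3,
    mitigations=["augment training data", "subgroup performance gate"],
    residual_severity=4, residual_probability=1,
)
print(entry.acceptable())  # True with the illustrative threshold of 6
```

A register like this can hold MDR hazards and AIA fundamental-rights hazards side by side, which is one way to realize the integration into an existing ISO 14971 process discussed above.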
© 2025 BSI. All rights reserved. 18
Data and data governance
Data plays an essential role in the AI context, as there is no AI without data. Data of the highest quality is paramount to ensure that AI is trustworthy. For this reason, Article 10 AIA stands as one of the key requirements that high-risk AI systems are expected to fulfil.

Article 10 introduces new obligations for AI-enabled medical devices compared to MDR requirements, as it sets out specific data governance and management practices tailored to AI systems. Unlike the MDR, which primarily focuses on safety and performance aspects, the AIA emphasizes detailed management of training, validation, and testing data. This includes careful consideration of design choices, data collection methods, preparation, bias prevention, and addressing any data gaps or shortcomings59. The data used must be relevant, representative, and ‘to the best extent possible, free of errors and complete in view of the intended purpose’60.

Given the extensive amount of data required to properly train, validate, and test AI models or systems, it will be difficult to determine when training is ‘complete’ according to the AIA. This issue is particularly significant for AI-enabled medical devices, where the sensitive nature of the data involved may have a more profound impact on the development process compared to other industries61.

In this context, the AIA’s requirements for handling special categories of personal data (genetic, health, biometric data) under strict conditions for bias monitoring and fundamental rights protection go beyond the MDR’s scope, which does not specifically address these data management challenges for AI systems62.

Data governance measures under the AIA must work in tandem with key EU regulations such as the General Data Protection Regulation (GDPR), the Data Act, the Data Governance Act and the future European Health Data Space Regulation (EHDS). These laws, part of the broader European Strategy for Data, collectively shape the framework for data management and protection.

To sum up, the AIA mandates that organizations implement comprehensive data management practices. This includes documenting these processes, procedures, and technical details as part of their Quality Management System63 and Technical Documentation64. Additionally, aligning these practices with GDPR and other EU data laws ensures robust data protection, enhances transparency, and fosters trust in AI systems.
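The Article 10 themes above, completeness and representativeness in particular, can be operationalized as automated dataset checks run before training. The record schema, fields, and metrics below are assumptions for this sketch, not criteria taken from the Regulation.

```python
# Illustrative pre-training dataset checks in the spirit of Article 10 AIA:
# completeness (missing values) and subgroup representation. The record
# schema and attribute names are assumptions for this sketch.
from collections import Counter

def completeness_rate(records, required_fields):
    """Fraction of records with every required field present and non-empty."""
    if not records:
        return 0.0
    complete = sum(
        all(r.get(f) not in (None, "") for f in required_fields) for r in records
    )
    return complete / len(records)

def representation_report(records, attribute):
    """Share of each subgroup for a relevant attribute, to surface data gaps."""
    counts = Counter(r.get(attribute, "unknown") for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

records = [
    {"age": 71, "sex": "F", "image_id": "a1"},
    {"age": 64, "sex": "M", "image_id": "a2"},
    {"age": None, "sex": "F", "image_id": "a3"},  # incomplete record
]
print(completeness_rate(records, ["age", "sex", "image_id"]))  # 0.666...
print(representation_report(records, "sex"))
```

Checks of this kind, versioned together with the datasets they were run on, are one way to produce the documented evidence of data governance that the summary above calls for.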
Post-market monitoring system

The post-market monitoring system (PMS) serves as a critical component in the regulatory framework for high-risk AI systems, ensuring their ongoing compliance with legal requirements. The AIA mandates providers to establish and document an appropriate PMS based on a PMS plan to continuously monitor the performance, safety, and compliance of high-risk AI systems throughout their lifecycle.

In addition to the requirements outlined in MDR articles 32, 61, and 84, as well as the MDCG guidance on post-market surveillance and vigilance, and ISO/TR 20416, Article 72 AIA requires providers to actively and systematically collect data specifically related to the AI system’s performance. This includes data from deployers as well as from other sources, emphasizing continuous monitoring throughout the AI system’s operational life. Moreover, this article also mandates the analysis of the AI system’s interactions with other AI systems, which is very relevant given the interoperable and interconnectable nature of many AI applications. This requirement excludes the analysis of sensitive operational data from deployers that are law enforcement authorities65.

Something relevant for AI-enabled medical devices is that manufacturers are allowed to integrate the additional PMS requirements of the AIA into the existing post-market surveillance framework stipulated by the MDR66. However, this integration is not just about adding new requirements; it involves adapting the monitoring system to specifically address the unique characteristics of AI systems. In particular, the AIA requires manufacturers to use a standardized AI PMS template that will be provided by the European Commission67. This template is expected to cover those aspects not addressed by the current MDR framework.

Finally, for AI-enabled medical devices the responsibility for market surveillance will continue to reside with the authority designated under the MDR. This approach ensures that enforcement of PMS requirements is conducted by authorities familiar with the specificities of medical devices. However, Member States can decide to appoint an alternative authority for overseeing AI-specific requirements, as long as they ensure coordination between relevant bodies68. It is important to note that the enforcement procedures outlined in the AIA will not be applicable to AI-enabled medical devices, as the procedures established under the MDR will take precedence69.

The PMS requirements outlined so far are not fully detailed; providers will need to wait for the Commission to release the AI PMS template, which will specify the list of items that need to be covered in the PMS. However, there are additional PMS requirements hidden in the AIA that are worth noting70.

Risk management is a crucial part of the PMS plan for high-risk AI systems, as new risks to health, safety, and fundamental rights can emerge after the system is placed on the market or put into service. Consequently, organizations may need to establish a PMS that systematically evaluates market data and addresses these emerging risks71.

High-risk AI systems are also required to automatically log events throughout their entire lifecycle, which we believe is vital for effective PMS72. These logs help track system performance, quickly identify issues, and enable efficient risk management. They also enhance transparency and accountability, allowing authorities to verify compliance with the AIA and investigate incidents if needed.
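The automatic event logging described above might look like the following minimal sketch: each inference is recorded as a structured, timestamped, append-only entry that PMS tooling can later analyze. The log schema is an assumption made for illustration; the AIA does not prescribe a specific format.

```python
import json
import time
import uuid

# Minimal sketch of automatic event logging for an AI-enabled device:
# each inference becomes a structured, serialized, append-only record.
# The schema below is this sketch's own assumption, not mandated text.
def log_event(log, model_version, input_ref, output, confidence):
    entry = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),        # reference time of the event
        "model_version": model_version,  # ties the event to a specific model
        "input_ref": input_ref,          # pointer to input data, not the data itself
        "output": output,
        "confidence": confidence,
    }
    log.append(json.dumps(entry))        # append-only, serialized record
    return entry

log = []
log_event(log, "1.4.2", "scan-0091", "lesion: benign", 0.93)
print(len(log))  # 1
```

Keeping the model version and an input reference in every record is what later allows performance to be traced back to a specific deployed model during post-market analysis.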
Furthermore, the AIA mandates that high-risk AI systems must be subject to human oversight while in use. This ensures that risks, such as bias or system errors, can be continuously monitored and addressed by human operators when necessary73. Some authors argue that the post-market monitoring plan should further specify the level of human oversight, the information to be collected, and the actions required to address potential system failures74.

Additionally, deployers play a key role in ensuring the ongoing safety and performance of AI systems. They are responsible for actively monitoring the AI system based on the provider’s instructions, identifying risks to health, safety, or fundamental rights, and reporting these to the provider or distributor and relevant market surveillance authorities75. Authors further suggest that the specific responsibilities of deployers should be clearly outlined in the PMS plan. This should also include details on communication between the involved parties in the value chain and with relevant authorities76.

In conclusion, although medical device manufacturers are familiar with PMS requirements under the MDR, they must ensure that their system also meets the key requirement of enabling continuous assessment of the product’s conformity.
Systems for reporting serious incidents and malfunctions

Article 73 AIA outlines the reporting obligations for providers of high-risk AI systems, focusing on serious incidents.

The AIA defines serious incidents as those that ‘directly or indirectly led, might have led or might lead to (…) (a) the death of a person, or serious harm to a person’s health; (b) a serious and irreversible disruption of the management or operation of critical infrastructure; (c) the infringement of obligations under Union law intended to protect fundamental rights; (d) serious harm to property or the environment’77.

Providers must report such incidents to the market surveillance authorities in the member state where the incident took place. The reporting must occur within 15 days after the provider establishes a causal link between the AI system and the incident, or suspects such a link, with faster timelines for more severe cases, such as deaths or widespread infringements. In those cases, reporting must occur immediately and no later than 10 days for fatalities, or 2 days for widespread incidents or a serious incident regarding the management and operation of critical infrastructure.

For high-risk AI systems that are safety components of devices, or are themselves devices covered by the MDR, the notification of serious incidents would be limited to ‘the infringement of obligations under Union law intended to protect fundamental rights’78 79 (e.g., discrimination, bias, privacy breaches).

This means that incidents related to traditional health or safety risks of medical devices only need to be reported under the MDR, not the AIA, while AIA reporting is required only if the AI-enabled medical device infringes fundamental rights such as privacy or non-discrimination, areas that the MDR does not directly regulate. Manufacturers will need to evaluate whether their current reporting systems need adjustments to ensure they do not miss AI-specific risks, such as violations of fundamental rights, which may not be directly linked to health (e.g., biased diagnostic recommendations).

In this case, the notification will have to be made ‘to the national competent authority [NCA] chosen for that purpose by the Member States where the incident occurred’80 and not to the market surveillance authority, as stated in Article 73 (1) AIA. This means that, instead of managing two separate reporting pathways, manufacturers of AI-enabled medical devices will primarily deal with NCAs for AIA incident notifications. This is crucial because medical device manufacturers are already accustomed to reporting safety incidents through their established channels under the MDR.
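The tiered deadlines described above (15 days in the general case, 10 days for fatalities, 2 days for widespread incidents or serious incidents involving critical infrastructure) can be expressed as a small helper. The category labels below are this sketch's own shorthand; Article 73 AIA remains the authoritative source for the exact triggers and for the immediate-reporting obligation in severe cases.

```python
from datetime import date, timedelta

# Sketch of the tiered AIA reporting deadlines described above.
# Category labels are invented for this example; consult Article 73 AIA
# for the authoritative wording and edge cases.
DEADLINE_DAYS = {
    "death": 10,                   # fatality: no later than 10 days
    "widespread": 2,               # widespread incident: no later than 2 days
    "critical_infrastructure": 2,  # serious incident re critical infrastructure
    "other_serious": 15,           # general case: 15 days
}

def reporting_deadline(causal_link_established: date, category: str) -> date:
    """Latest reporting date once a causal link is established or suspected."""
    days = DEADLINE_DAYS.get(category, 15)
    return causal_link_established + timedelta(days=days)

print(reporting_deadline(date(2026, 3, 1), "death"))       # 2026-03-11
print(reporting_deadline(date(2026, 3, 1), "widespread"))  # 2026-03-03
```

Note that the clock runs from the point where the provider establishes, or suspects, the causal link, so incident-triage procedures need to timestamp that determination explicitly.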
Technical documentation requirements

As we introduced in section 2, the NLF regime, in particular Decision No 768/2008/EC, provides a standardized approach to product legislation. This Decision requires, among other things, that technical documentation be in place to demonstrate that the product complies with EU safety, health and environmental standards, which is essential for its placement on the market81. While each NLF regulation has its own specific checklist of required technical documentation, there are notable similarities across them. In this section, we will examine the technical information required for AI systems.

Article 11 AIA is the provision that requires providers of high-risk AI systems to prepare and maintain up-to-date technical documentation that, at a minimum, includes the elements specified in Annex IV. This documentation must be kept for at least 10 years after the AI system is placed on the market or put into service and must be available to national competent authorities upon request82.

Article 11 (2) AIA also allows manufacturers of AI-enabled medical devices to create a single set of technical documentation that fulfils the requirements of both the AIA and the MDR. Therefore, it is expected that medical device manufacturers will rely on this provision to adapt their existing MDR technical documentation system to add the information required in Annex IV of the AIA.

There are concerns that the stringent documentation requirements could be difficult for smaller companies and startups. To alleviate this burden, the AIA allows these entities to submit the required elements in Annex IV in a simplified format83.
The Commission will create a simplified technical documentation form specifically for small and micro-enterprises, which Notified Bodies (third-party bodies, see section 10) are required to accept. Companies will need to consult Commission Recommendation 2003/361/EC to see whether they qualify as an SME or micro-enterprise.
In Annex IV we find the set of technical documentation essential for demonstrating compliance with the
AIA. It includes details on system descriptions, development processes, performance monitoring, and
risk management (See table 3).
Table 3 - Technical Documentation Requirements (AIA Annex IV sections, with AIA comments & observations)

1. General Description: Both MDR and AIA require detailed descriptions of the medical device/AI system, the manufacturer/provider’s name and its purpose.

2. Detailed Description: The AI Act focuses on the internal functioning of the AI system, its version, interaction with hardware/software, how it was designed and developed, system architecture, data requirements, human oversight, pre-determined changes and validation/testing procedures.

3. Monitoring & Control: The AIA requires monitoring the information on the AI system’s performance, capabilities and limitations, including accuracy for the specific persons on which the system is intended to be used. It should also outline potential unintended outcomes for health, safety and fundamental rights, necessary human oversight measures, the technical measures used to help deployers interpret results, and details on input data.

4. Performance Metrics: Related to the above point, the AIA emphasizes ongoing performance metrics and accuracy, or potential discriminatory impacts.

5. Risk Management: The AIA requires consideration of risks to the health, safety and fundamental rights of persons and of unintended outcomes. Risk assessment against fundamental rights is not covered under the MDR.

6. Description of Changes: The AIA requires updates to documentation if there have been changes to the AI system, but also documentation of pre-determined changes for those high-risk AI systems that continue to learn after being placed on the market or put into service.

7. Harmonized Standards: The presumption of conformity with AIA requirements when conformity with harmonized standards is demonstrated is also present in the AIA.

8. Declaration of Conformity: The Declaration of Conformity should cover all applicable legislation.

9. Post-Market Monitoring: The AIA post-market surveillance focuses on continuously collecting, documenting and analyzing relevant data gathered on the performance of the high-risk AI system during its lifetime. This analysis also includes interactions with other AI systems. (More information in section 7.3.)
From the table on the previous page, it is apparent that the AIA imposes more rigorous documentation and monitoring requirements, going beyond what is required by the MDR. While both regulations demand detailed descriptions and continuous documentation, the AI Act places a stronger emphasis on the internal functioning and lifecycle management of AI systems, particularly in areas such as data governance, performance monitoring, and risk management. In other words, the AIA focuses more on how the high-risk AI systems have been developed and how they perform throughout their lifetime85.

It is evident that design and development decisions for AI systems should be meticulously documented and integrated into a comprehensive QMS. This proactive approach supports compliance with the AIA, while enabling the traceability and transparency of high-risk AI systems, both during operation and post-market surveillance86.

Finally, it is important to note that Technical Documentation requirements may be updated over time, as Article 11 of the AIA allows the Commission to modify them as needed through delegated acts87.
Product requirements

The MDR’s GSPRs do not fully address the unique challenges posed by AI. As a result, the AIA introduces new requirements for AI-enabled medical devices to ensure they protect individuals’ health, safety, and fundamental rights.

Automatically generated logs

High-risk AI systems shall be designed so that they automatically record events (logs) relevant for identifying national-level risks and substantial modifications throughout the system’s lifecycle88. Article 12 AIA explains that this is primarily about recording events relevant for the following: a) identifying risks or substantial modifications in the AI system’s behavior that could cause risks or harm; b) helping monitor the system after it is released to make sure it stays safe and functions correctly (facilitating post-market monitoring); and c) tracking how the AI system works to ensure it continues to follow safety rules during operation. While the MDR mandates post-market surveillance and vigilance requirements, it does not mandate automated logging, which is essential for monitoring AI behavior over time.

Accuracy, robustness, and cybersecurity

The MDR includes requirements for device safety and performance, including cybersecurity aspects; however, it does not fully address the dynamic nature of AI systems, such as the need for ongoing accuracy, robustness and adaptive cybersecurity measures89.
Article 15 AIA states that high-risk AI systems must be designed to be accurate, reliable, and secure throughout their entire lifecycle. They should be resistant to errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, due to its interaction with natural persons or other systems.

The robustness of high-risk AI systems can be achieved through technical redundancy, which may include backup or fail-safe plans. AI systems that keep learning after being released must avoid biased outputs with appropriate risk mitigation measures. The system’s security should also prevent unauthorized changes or exploitation. Cybersecurity measures for high-risk AI systems should match the specific risks they face.

These measures should help prevent, detect, and handle attacks that attempt to manipulate the training data set (data poisoning) or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks, or model flaws90.

Human oversight

The AIA mandates that high-risk AI systems be designed so that deployers can implement human oversight91. Human oversight is a mechanism to prevent or minimize the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used as intended or under conditions of foreseeable misuse92. Here, contrary to the MDR, the AIA recognizes that AI systems may operate with a degree of autonomy that needs additional safeguards to allow for human intervention.

Simply put, the natural persons overseeing the system must be able to understand what is happening in the product and interpret the output. In addition, these persons must be able to step in and intervene in the high-risk system’s operation or safely shut down the system if needed.

Transparency and provision of information to deployers

The MDR requires transparency through device labelling and instructions for use, but the AIA introduces additional transparency obligations specific to AI systems. High-risk AI providers have transparency obligations towards AI deployers with a view to enabling the latter to ‘interpret the system’s output and use it appropriately’93.

This includes providing detailed instructions for use, such as the system’s intended purpose, accuracy, performance, data input specifications, and human oversight measures, along with technical documentation94 that covers the AI’s general logic, design choices, training data, and potential discriminatory impacts. Deployers must use this information when conducting Data Protection Impact Assessments (DPIAs) to comply with the transparency requirements of the AIA95.

However, such transparency obligations apply only to high-risk AI systems; for non-high-risk AI systems, transparency requirements are limited to communicating the presence of AI, as outlined in Article 50, such as in the case of deepfakes.

Accessibility

Often overlooked, article 16(l) AIA specifies that high-risk AI systems must comply with accessibility requirements outlined in two specific EU accessibility directives (2016/2102 and 2019/882)96. Thus, these systems should be designed so that all deployers and other intended users, including those with disabilities, may easily access and use them. Compliance involves integrating features such as readable interfaces, alternative text for images, and compatibility with assistive technologies.

It is important to note that the MDR does not specifically address accessibility for AI-enabled medical devices; therefore, manufacturers will need to integrate the accessibility requirements to ensure inclusivity for all users.
Declaration of conformity

Another key requirement is for the provider to draw up a written machine-readable, physical or electronically signed EU declaration of conformity for each high-risk AI system and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. For AI-enabled medical devices, the AIA allows a single declaration of conformity covering all applicable laws to be issued. The declaration must include the information specified in Annex V of the AIA97, including a statement of conformity with the GDPR.

AI database and registration

Similar to EUDAMED for medical devices, there will also be an EU database for high-risk AI systems in the future. However, it is important to clarify that only high-risk AI systems listed in Annex III (except for critical infrastructure) will need to be registered in this upcoming database98. This means that AI-enabled medical devices must be registered under EUDAMED instead.

CE mark

AI providers must affix the CE mark on the device, its packaging, or accompanying documentation, along with the identification number of the Notified Body responsible for conformity assessment99. For AI-enabled medical devices there will be a single CE mark indicating that the device complies with both regulations, the AIA and the MDR100.
Conformity assessments & Notified Bodies

To ensure a high level of trustworthiness, the AIA states that high-risk AI systems are only allowed on the EU market after they have undergone and passed a conformity assessment101. This process enables providers to demonstrate that their high-risk AI systems meet the requirements specified in Chapter III, Section 2 AIA102. If a product meets all relevant requirements, a declaration of conformity103 is issued, and the “CE” marking104 is applied. Once the CE marking is affixed, the system can be deployed and freely circulated within the EU internal market.

In industries such as medical devices, the conformity assessment process is already well-established. To reduce administrative burdens and avoid duplication, the AIA allows high-risk AI systems covered by Union harmonization laws, such as the MDR, to use the existing conformity assessment process from that law, while also incorporating the specific requirements of the AIA105.

Article 43 AIA outlines the different conformity assessment routes. The appropriate assessment route will depend on the specific Annex applicable to the high-risk AI system (see Figure 5).

When it comes to AI systems not classified as high-risk, these conformity routes will not be applicable. However, non-high-risk AI systems are still subject to important obligations. While these requirements are less stringent, they remain critical for ensuring transparency, accountability, and safety.
Fig. 5. AIA conformity assessment routes
Route 4 is the one applicable to high-risk AI-enabled medical devices. According to Article 43 (3) AIA, for high-risk AI systems covered by Union harmonization legislation listed in Section A of Annex I, providers must follow the relevant conformity assessment procedures already required by those laws. The specific requirements in Chapter III, Section 2 AIA must also be included within the assessment. Additionally, Points 4.3, 4.4, 4.5, and the fifth paragraph of Point 4.6 in Annex VII AIA are applicable (dataset testing, see more in section 10.1).

This means that the responsibility for third-party conformity assessments lies with conformity assessment bodies, known as “Notified Bodies,” which are designated by “notifying authorities” established by member states106. In some cases, public authorities can also serve as Notified Bodies107. These bodies are tasked with evaluating the QMS and technical documentation which must be included in the provider’s application108.

In particular, the Notified Body responsible for the AI assessment is the one that has been designated under Union harmonization legislation, provided it fulfils the obligations set out in Article 43 (3), second paragraph. These obligations concern Notified Body independence109, professional integrity110 and sufficient internal competence of personnel in AI111, which should have been assessed during its designation under Union harmonization legislation.
In the context of the MDR, this means that MDR-designated Notified Bodies will be the ones controlling the conformity assessment procedure of AI-enabled medical devices, ensuring that the specific requirements from the AIA and the relevant Notified Body testing are integrated. However, MDR Notified Bodies can only perform AI assessments if they meet the obligations outlined in Article 43 (3), second paragraph. Since these obligations should have been evaluated during the initial designation of the MDR Notified Bodies, many will likely need to request an extension of their scope. This extension is necessary to ensure they have appropriate internal AI competencies. The AIA demands not only technical AI expertise but also legal, administrative, and scientific knowledge to effectively carry out these conformity assessments112.

Article 113(b) AIA sets August 2025 as the date of applicability for Chapter III, Section 4, on notifying authorities and Notified Bodies. Under this section of the AIA (Articles 28 and 29), Member States are mandated to have in place at least one notifying authority so that conformity assessment bodies are able to submit an application.

Finally, it is important to note that the AIA allows Notified Bodies to conduct necessary tests on AI systems and request access to trained models, including their relevant parameters113. This requires MDR Notified Bodies to have competent AI personnel and adequate testing facilities to perform such assessments.

Notified Body testing

As discussed earlier, sections 4.3, 4.4, 4.5, and the fifth paragraph of section 4.6 in Annex VII focus on testing procedures.

The MDR Notified Body is responsible for examining the QMS and technical documentation of AI-enabled medical device systems to ensure compliance with the AIA. To do this effectively, the Notified Body may need access to training, validation, and testing datasets, potentially through remote means. The Notified Body may require additional evidence or tests from the provider and, if unsatisfied, may conduct its own tests. If all other methods to verify compliance have been exhausted, the Notified Body can also access the AI system’s training models and parameters, provided this access complies with intellectual property and trade secret laws.

It is important for providers to establish clear agreements detailing how and when this access will be granted. These contracts should outline the process for providing access to technical documentation and data, including the necessary security measures to protect sensitive information. This is particularly important if the manufacturer does not own the datasets used for training or testing. In such cases, the provider must ensure that contracts with dataset owners include provisions for granting the Notified Body access.
Notified Body certificate

When a Notified Body conformity assessment is required, the AI provider will need to lodge an application with a Notified Body of their choice to examine the QMS and the technical documentation of the AI system(s) that the provider intends to place on the market or put into service. As stated in section 10, in the case of AI-enabled medical devices, the MDR-designated Notified Bodies will be the ones controlling the conformity assessment procedure.

The provider’s QMS for AI-enabled medical devices must undergo initial examination and continuous surveillance by the Notified Body, as per article 17 AIA114. The provider’s application should include contact details, relevant documentation, and a written declaration that the same application has not been lodged with another Notified Body. The Notified Body will assess compliance with Article 17 AIA and notify the provider of its decision115.

To ensure ongoing compliance with the terms and conditions of the approved QMS, the Notified Body will conduct regular audits. The Notified Body may also perform additional tests on the AI-enabled medical device system and will provide audit reports to ensure the QMS remains adequate and effective116. It is crucial to emphasize the need for MDR Notified Bodies to have competent AI personnel and appropriate testing facilities to conduct these assessments effectively.

In addition, the application to the Notified Body shall also cover the assessment of the AI system’s technical documentation, which the Notified Body will review, providing a decision along with an explanation117.

If the AI system meets the requirements outlined in Chapter III, Section 2, the MDR Notified Body will issue a Union technical documentation assessment certificate, which has a limited period of validity and can be suspended or withdrawn by the Notified Body118 119. This certificate will include the provider’s name and address, the conclusions of the assessment, any conditions for the certificate’s validity, and essential data for identifying the AI system120.

Although the intention of the AIA is to reduce overlaps between sectoral legislation and the AIA, it remains unclear whether AI-enabled medical devices will be covered by a single certificate for both the MDR and the AIA or whether the Union technical documentation assessment certificate will be issued separately only for the AI part. Further guidance on this matter is undoubtedly necessary.
Changes to AI systems

One of the key advantages of AI-enabled medical devices is their ability to learn and enhance their performance through real-world experience. These devices have the potential to revolutionize healthcare by extracting valuable insights from the extensive data generated daily in medical settings. However, the adaptability of AI also presents certain challenges121.

Specifically, these challenges arise from the continuous training of AI models in post-market settings. This concerns algorithms that are locked or unlocked. Locked AI systems do not undergo retraining or updates after their initial deployment. As a result, they do not adapt to new data or changing conditions. Unlocked AI systems, however, are designed to be continuously retrained and updated with new data over time. This capability allows them to learn from real-world experience and adapt to new information even after they have been distributed122 123.

Despite the benefits of continuous adaptation, unlocked AI systems come with risks. Continuous updates can alter the system’s intended use or modify its classification, which may impact its performance or regulatory status124 125. To manage these risks, the AIA imposes stringent obligations on providers regarding changes to AI systems.
Managing changes under the AIA

According to the AIA, any intended change to the approved QMS or to the list of AI systems covered by it must be brought to the attention of the Notified Body by the provider. For AI-enabled medical devices, this means contacting the MDR-designated Notified Body.

The MDR Notified Body will then examine the proposed changes and determine whether the modified system still satisfies Article 17 AIA, or whether a reassessment is required. Once the examination is complete, the Notified Body will notify the provider of its decision, including a reasoned assessment of the changes [126].

Moreover, any changes to the AI-enabled medical device that could impact its compliance with the AIA, or its intended purpose, must be assessed by the MDR Notified Body that issued the Union technical documentation assessment certificate. The provider is responsible for informing the Notified Body of any such changes, or if they become aware of changes that may affect the system. The Notified Body will then decide whether the changes require a new conformity assessment or can be addressed by issuing a supplement to the original certificate. Where the changes are approved, a supplement to the Union technical documentation assessment certificate will be issued to the provider [127].

However, there is an exception. The AIA indicates that changes to the algorithm and the performance of AI systems which continue to 'learn' after being placed on the market (unlocked AI systems) should not be considered a 'substantial modification' if those changes have been pre-determined by the provider and assessed during the conformity assessment [128]. In other words, the provider will need to specify how their AI system will change while continuously learning in post-market settings. In this case, a new conformity assessment will not be required.
Defining significant changes and substantial modifications

A critical question for AI-enabled medical device manufacturers is understanding what constitutes a "change". The AIA uses the terms 'significant change' and 'substantial modification' interchangeably: Recital 177 clarifies that the concept of significant change under the AIA is equivalent to substantial modification [129]. This could potentially create conflicts with how substantial modification is understood under other sector-specific laws, such as the MDR. However, Recital 84 AIA addresses this issue, stating that sector-specific laws should take precedence over the AIA when more specific provisions exist. For instance, Article 16(2) of the MDR, which outlines certain changes that should not be considered modifications of a medical device, still applies to high-risk AI-enabled medical devices under the MDR.
Implications for other AIA actors

The AIA also includes provisions that impact distributors, importers, and other third parties. Any third party that makes a substantial modification to a CE-marked high-risk AI system, or that changes the intended purpose of a non-high-risk AI system so that it becomes a high-risk one, will be considered the provider under the AIA and will therefore assume all the relevant obligations [130]. For example, repurposing an AI system for medical purposes could result in the actor who made the substantial modification being reclassified as both the provider under the AIA and the manufacturer under the MDR, requiring compliance with both sets of regulations.

Documenting pre-determined changes

Another key consideration for manufacturers is how to properly plan for and document pre-determined changes. According to Annex IV, Section 2(f) AIA, the provider's technical documentation must include details on the pre-determined changes to the AI system and its performance, along with information on the technical solutions used to ensure continuous compliance. However, further guidance is needed on what constitutes pre-determined changes.

An example of such guidance is the Predetermined Change Control Plan (PCCP) for Machine Learning-Enabled Medical Devices [131] by the U.S. Food and Drug Administration (FDA), Health Canada, and the United Kingdom's Medicines and Healthcare products Regulatory Agency (MHRA). This document identifies five guiding principles for predetermined change control plans for machine learning-enabled medical devices (MLMD). These guiding principles are voluntary and offer best practices for monitoring the performance and the potential risks that come with retraining models.

Although the AIA aligns closely with the FDA's guidance, detailed and specific guidelines for manufacturers would be helpful to ensure clear and effective compliance with the AIA's requirements, particularly for AI systems that continue to learn after market release.
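One way to think about documenting a pre-determined change is as a performance envelope agreed at conformity assessment, against which each retrained model version is checked. The sketch below is hypothetical — the field names and thresholds are illustrative, not prescribed by Annex IV, Section 2(f) AIA or by any PCCP template.

```python
# Hypothetical sketch of a pre-determined change record; field names
# and thresholds are illustrative, not prescribed by the AIA.
from dataclasses import dataclass

@dataclass
class PredeterminedChange:
    description: str        # what may change (e.g. periodic retraining)
    trigger: str            # when the change is applied
    min_sensitivity: float  # performance envelope agreed at assessment
    min_specificity: float

    def within_envelope(self, sensitivity: float, specificity: float) -> bool:
        """True if a retrained model stays inside the assessed bounds."""
        return (sensitivity >= self.min_sensitivity
                and specificity >= self.min_specificity)

plan = PredeterminedChange(
    description="Quarterly retraining on newly collected scans",
    trigger="every 3 months",
    min_sensitivity=0.92,
    min_specificity=0.90,
)

print(plan.within_envelope(0.94, 0.91))  # True  - inside the assessed plan
print(plan.within_envelope(0.89, 0.95))  # False - falls outside the plan
```

Under this framing, an update that stays inside the envelope is a pre-determined change; one that falls outside it would have to be treated as a modification requiring Notified Body involvement.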
Conclusion
In conclusion, the integration of AI into medical devices brings transformative potential to healthcare, improving patient outcomes and creating new possibilities for diagnostics and treatment. However, it also introduces new risks and regulatory challenges. While the MDR has long provided a framework for regulating medical devices, the rapid advancements in AI have necessitated additional safeguards, which the AIA aims to provide. The AIA and the MDR form a complementary regulatory framework that together ensures the safety, efficacy, and ethical deployment of AI-enabled medical devices in the European market.

The AIA introduces a risk-based approach to AI, focusing on high-risk systems and ensuring their compliance with stringent requirements for data governance, transparency, and fundamental rights.

For manufacturers, understanding how these two regulations intersect is crucial. Compliance with both the MDR and the AIA will ensure that AI-enabled medical devices not only meet medical safety and performance standards but also uphold fundamental rights and European values for AI technology.

As AI continues to evolve, manufacturers must remain proactive in adapting their compliance strategies, integrating the specific requirements of the AIA into their existing MDR framework. By staying informed and compliant with these complex regulations, AI-enabled medical device manufacturers will be well positioned to contribute to the future of healthcare.
Authors
Inma Perez Ruiz - Regulatory Lead - AI Notified Body
Sarah Mathew - Regulatory Lead - AI Notified Body
Alex Zaretsky - Regulatory Lead - AI Notified Body
Aris Tzavaras - Head of AI Notified Body
Practical use cases
In this section, we explore three hypothetical examples to illustrate how different AI-enabled medical devices navigate the conformity assessments under the MDR and AIA. Each example outlines the relevant device classifications, economic operators, and legal obligations that must be met under these regulations. The scenarios reiterate the concepts detailed in this whitepaper and highlight the complexities of choosing the correct route to market.

Disclaimers:

1. Any similarities with existing MDs or manufacturers in the examples below are coincidental; the devices described are hypothetical examples created by BSI.
2. The information presented in the examples below is limited and may not be sufficient to undergo an actual assessment under both the MDR and AIA. Therefore, the conclusions drawn might not be representative of real-world AI-enabled MDs.
Peace of Mind LLC

Background information
Peace of Mind LLC has created an app for adult patients who have been diagnosed with moderate to severe depression. The mobile app uses real-time biometrics and lifestyle activities of the diagnosed person to suggest activities, either physical (e.g. exposure therapy treatments, exercise) or to be completed on the mobile device, to assist in alleviating the symptoms of depression. The app recommends these activities based on two in-house-built deep learning models that are integrated within the app. In the EU, the app is only available with a healthcare professional's prescription.

MDR classification
What are the legal obligations for this entity and its product? As Peace of Mind LLC has developed the app along with the two deep learning models, it is the product manufacturer. The product would be considered a Class IIb medical device under Rule 11 of the MDR, as it is software intended to provide information which is used to take decisions with diagnostic or therapeutic purposes. Why Class IIb specifically? Because the activities suggested to the patient may possibly cause "a serious deterioration of a person's state of health or a surgical intervention, in which case it is classified as class IIb". The classification must always consider the worst-case scenario.

AIA classification
Whether the AIA is applicable to the device depends on two main factors:
1. Is the system considered an AI system under the AIA definitions?
2. Is it high-risk under the AIA? Is the AI-enabled medical device a product on its own, or a safety component of an MD, that undergoes third-party conformity assessment?

For the first question, the AIA definition is general; it does not directly address specific AI technologies. However, the AI technology described above falls under the general definition, as it is a machine-based system that, for explicit objectives, makes an inference. Taking into consideration Recital 12, machine learning approaches such as deep learning are considered AI. Therefore, the AI techniques used in this example are considered AI systems, and the AIA is applicable. It is worth noting that more details about the models would be needed to rule this out as general-purpose AI.

For the second question, as this is a product on its own that requires third-party conformity assessment under the MDR (Class IIb), it is considered a high-risk AI system per Article 6(1).

Under the AIA, since Peace of Mind LLC developed the app and intends to place it on the market, it is considered an AI provider.

With the identified classification and economic operator, the organization has obligations under both the MDR and the AIA. The device will need to undergo a conformity assessment with one Notified Body under the MDR and an assessment against the Chapter III, Section 2 requirements of the AIA. The Notified Body must be appropriately designated under both the MDR and the AIA.

If this is an MD to be marketed in the EU after August 2027, CE marking and certification must be in place before placing the product on the market.

If the MD is intended to be placed on the market prior to August 2027, the AIA requirements are not applicable, and the initial assessment should be performed under MDR requirements. However, in this case, significant changes following August 2027 should be assessed against both the MDR and the AIA.
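The two-question test used in this example can be summarized as a small decision sketch. This is a deliberate simplification — the real Article 6(1) AIA analysis is fact-specific and this toy function is not legal advice; the parameter names are our own.

```python
# Simplified sketch of the two-question high-risk test described above;
# the real Article 6(1) AIA analysis is more nuanced and fact-specific.
def is_high_risk_under_aia(is_ai_system: bool,
                           is_product_or_safety_component: bool,
                           needs_third_party_assessment: bool) -> bool:
    """High-risk per Art. 6(1): an AI system that is (or is a safety
    component of) a product requiring third-party conformity assessment
    under Union harmonization legislation such as the MDR."""
    return (is_ai_system
            and is_product_or_safety_component
            and needs_third_party_assessment)

# Peace of Mind's app: an AI system, a product on its own, Class IIb
# under the MDR (third-party assessment required) -> high-risk.
print(is_high_risk_under_aia(True, True, True))   # True

# By contrast, a self-certified Class I device would not be caught.
print(is_high_risk_under_aia(True, True, False))  # False
```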
Radiopic Limited Power

Background information
The Dutch organization Windy Tree Hospital Consortium implants Radiopic Limited's Powertini devices to treat certain types of focal seizures by sensing and modulating electrical stimulation in the area(s) of interest in the brain. The device's operation is built on a static AI machine learning system. The AI system drives the operation of the device, and its failure may compromise the safety of the patient.

MDR classification
Under the MDR, these products would be considered Class III medical devices due to Rule 8, since they are active implantable devices.

AIA classification
Whether the AIA is applicable to the device depends on two main factors:
1. Is the system considered an AI system under the AIA definitions?
2. Is it high-risk under the AIA? Is the AI-enabled medical device a product on its own, or a safety component of an MD, that undergoes third-party conformity assessment?

As in the previous example, for the first question, the AIA definition is general; it does not directly address specific AI technologies. However, the AI technology described above falls under the general definition, as it is a machine-based system that, for explicit objectives, makes a decision to modulate brain electrical stimulation that influences the patient. Taking into consideration Recital 12, machine learning approaches are considered AI. Therefore, the AI techniques used in this example are considered AI systems, and the AIA is applicable.

It is important to note that static AI systems fall under the scope of the AIA even though they do not continue to learn in the field. Static systems are expected to be retrained after a period of time in order to stay relevant to their affected domain.

The second question is more difficult to address. This is not a "stand-alone" software product, as the AI is embedded into the MD's functionality. Therefore, the question is whether it is a safety component. The MDR does not make any reference to "safety components", while the AIA (Art. 3(14)) defines a safety component as "a component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property". Following the AIA definition, the AI system should be considered an MD safety component, as its failure or malfunctioning might endanger the health of the patient.

As Class III products are required to undergo a third-party conformity assessment under the MDR, the risk classification per the AIA would be as a high-risk AI system per Article 6(1).

Radiopic Limited is considered the AI provider for the product, while Windy Tree Hospital Consortium, which uses the devices in a professional capacity, would be considered an AI deployer. The AI provider of this high-risk AI system, which is also the product manufacturer for the device, must undergo a conformity assessment with an appropriately designated Notified Body under the MDR and an assessment against Chapter III, Section 2 of the AIA.

If this is an MD to be marketed in the EU after August 2027, CE marking and certification must be in place before placing the product on the market.

If the MD is intended to be placed on the market prior to August 2027, the AIA requirements are not applicable, and the initial assessment should be performed under MDR requirements. However, in this case, significant changes following August 2027 should be assessed against both the MDR and the AIA.

Windy Tree Hospital Consortium will need to comply with Article 26, Obligations of deployers of high-risk AI systems, of the AIA. As the date of applicability for Article 6(1) is August 2027, the above obligations are mandatory following this date.
Reality Vieux

Background information
Reality Vieux is a French company that develops an AI system software application that uses Convolutional Neural Networks and deep learning models to provide clinicians with estimates of the likelihood of the presence of carcinomas from magnetic resonance imaging or computed tomography scans. It is intended to be placed on the market in the EU in Q4 of 2026.

MDR classification
Under the MDR, the product is considered "software as a medical device" and is classified under Rule 11. To classify the MD, considerations of Rule 11 and MDCG 2019-11 should take into account the following:
• Is the MD diagnosing carcinomas? The system should be considered a clinical decision support system (CDSS), as it provides an "opinion" on the likelihood of the presence of a disease. Clinicians make the final diagnosis, taking the AI system's recommendation into consideration.
• Is the MD driving clinical management? Not directly. The "opinion" of the CDSS is considered among other clinical factors by the clinician to decide the need for additional diagnostic or treatment steps. However, the CDSS aids in patient diagnosis, as it helps predict diseases, potentially in early stages.
• Is the patient's situation critical or serious? Under the assumption that the diagnosis of carcinomas will require major therapeutic interventions, the patient's situation should be considered critical. However, if clinical claims include early diagnosis, the patient's situation might be considered serious, as early diagnosis or treatment is important to avoid unnecessary interventions.

The information provided in this example for the MD is not sufficient to come to a verdict on MDR classification. Other factors that need to be considered are:
• Performance metrics used and their values: How might the performance metrics affect treatment and the patient's situation? As an example, consider an MD system with a high False Negative (FN) rate or False Positive (FP) rate. Suggestions produced by such a system may heavily affect a percentage of patients' disease diagnosis and future treatment (unnecessary treatment in the case of high FP).
• Risks identified and mitigations: Although this system is a CDSS, the risk of clinicians' overreliance on MD suggestions is not to be ignored. Risk mitigation strategies need to be assessed for their effectiveness before deciding whether the system merely supports the clinician or whether clinicians might rely on it for the diagnosis.

For this example, we will classify the device as Class IIb, considering the scenarios of the MD driving clinical management (aiding in diagnosis) and the patient's situation being critical (requiring major therapeutic interventions).

AIA classification
Whether the AIA is applicable to the device depends on two main factors:
1. Is the system considered an AI system under the AIA definitions?
2. Is it high-risk under the AIA? Is the AI-enabled medical device a product on its own, or a safety component of an MD, that undergoes third-party conformity assessment?

For the first question, the AIA definition is general; it does not directly address specific AI technologies. However, the AI technology described above falls under the general definition, as it is a machine-based system that, for explicit objectives, makes a recommendation. Taking into consideration Recital 12, machine learning approaches such as deep learning and Convolutional Neural Networks are considered AI. Therefore, the AI techniques used in this example are considered AI systems, and the AIA is applicable.

For the second question, as this is a product on its own that requires third-party conformity assessment under the MDR (Class IIb), it is considered a high-risk AI system per Article 6(1).

Reality Vieux is the AI system provider under the AIA. As the MD is intended to be placed on the market prior to August 2027, the AIA requirements are not applicable, and the initial assessment should be performed under MDR requirements. However, in this case, significant changes following August 2027 should be assessed against both the MDR and the AIA by the MDR Notified Body.
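The performance-metric concern raised in this example (high FN or FP rates) maps onto standard confusion-matrix arithmetic. The counts below are made up for illustration; in a real submission they would come from clinical validation data.

```python
# Minimal illustration of the FN/FP concern above, with made-up counts.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return {
        # A high FN rate drives sensitivity down: missed carcinomas.
        "sensitivity": tp / (tp + fn),
        # A high FP rate drives specificity down: unnecessary follow-up
        # interventions for healthy patients.
        "specificity": tn / (tn + fp),
    }

m = diagnostic_metrics(tp=90, fp=30, tn=70, fn=10)
print(m["sensitivity"])  # 0.9 - 10% of carcinomas would be missed
print(m["specificity"])  # 0.7 - 30% of healthy patients flagged
```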
References
1. Directive 93/42/EEC of the European Parliament and of the Council of 14 June 1993 concerning medical devices. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A31993L0042.
2. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices (MDR). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32017R0745.
3. McKinney et al., "International evaluation of an AI system for breast cancer screening," Nature, 577 (2020), 89–94. Available at: International evaluation of an AI system for breast cancer screening
4. Baig, M.M., GholamHosseini, H., Moqeem, A.A., Mirza, F., & Lindén, M., "A Systematic Review of Wearable Patient Monitoring Systems – Current Challenges and Opportunities for Clinical Adoption," Journal of Medical Systems, 41 (2017), 115. Available at: https://link.springer.com/article/10.1007/s10916-017-0760-1.
5. Markoff, J., "The Robot Surgeon Will See You Now," The New York Times, 30 April 2021. Available at: The Robot Surgeon Will See You Now - The New York Times (nytimes.com)
6. Ghafur, S., O'Shaughnessy, J., Darzi, A., van Dael, J., & Gardner, C. J. (2022). Addressing Racial and Ethnic Inequities in Data-Driven Health Technologies. Institute of Global Health Innovation, Imperial College London. Available at: Spiral: Addressing racial and ethnic inequities in data-driven health technologies (imperial.ac.uk)
7. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence (AI) and amending various regulations (AIA). Available at: https://data.europa.eu/eli/reg/2024/1689/oj.
8. Regulation (EC) No 765/2008 of the European Parliament and of the Council of 9 July 2008 setting out the requirements for accreditation and market surveillance. Available at: https://eur-lex.europa.eu/eli/reg/2008/765/oj.
9. Decision No 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the marketing of products. Available at: https://eur-lex.europa.eu/eli/dec/2008/768/oj.
10. Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products. Available at: https://eur-lex.europa.eu/eli/reg/2019/1020/oj.
11. European Commission, "Consumer product safety – product safety rules in the EU." Available at: https://ec.europa.eu/consumer-safety.
12. Commission notice, The 'Blue Guide' on the implementation of EU product rules 2022, p. 11. Available at: EUR-Lex - 52022XC0629(04) - EN - EUR-Lex (europa.eu)
13. Recital 1, MDR.
14. Article 2(1), MDR.
15. In Europe, SaMD is referred to as Medical Device Software (MDSW) by the Medical Device Coordination Group, a group of representatives of Member States dealing with issues and guidelines related to MDs. Medical Device Coordination Group (MDCG), "Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR," 2019. Available at: https://health.ec.europa.eu/document/download/b45335c5-1679-4c71-a91c-fc7a4d37f12b_en
16. Idem.
17. International Medical Device Regulators Forum (IMDRF) SaMD Working Group, "Software as a Medical Device (SaMD): Key Definitions," 2013. Available at: https://www.imdrf.org/sites/default/files/docs/imdrf/final/technical/imdrf-tech-131209-samd-key-definitions-140901.pdf
18. Article 2(2), MDR.
19. International Medical Device Regulators Forum (IMDRF) SaMD Working Group, "Software as a Medical Device (SaMD): Key Definitions," 2013. Available at: https://www.imdrf.org/sites/default/files/docs/imdrf/final/technical/imdrf-tech-131209-samd-key-definitions-140901.pdf
20. Article 3(1), AIA.
21. Recital 12, AIA.
22. Annex VIII, Chapter III, MDR.
23. The classification of software that drives or influences the use of a medical device will mirror the classification of the medical device it interacts with. For stand-alone MDSW, the classification will be based on Rule 11, Annex VIII MDR.
24. Article 5, AIA.
25. Article 6, AIA.
26. Article 50, AIA.
27. Recital 165 & Article 95, AIA.
28. For more information on the AIA's classification, see: Zaretsky, A., Seneca, D., Pérez, I., Mathew, S., & Tzavaras, A., "Artificial Intelligence Act: What AI Providers and Deployers Need to Know," 2024. Available at: EU AI Act (bsigroup.com)
29. Chapter V, AIA.
30. Annex III, point 5(d), AIA.
31. Medical Device Coordination Group (MDCG), "Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR," 2019, p. 8, Note 1. Available at: https://health.ec.europa.eu/document/download/b45335c5-1679-4c71-a91c-fc7a4d37f12b_en
32. Idem, p. 8, Note 2.
33. Idem, p. 9.
34. Recital 53, AIA.
35. Article 3(14) AIA defines "safety component"; however, we do not see a similar definition under the MDR.
36. Cepeda Zapata, K.A., Patil, R., Ward, T., Loughran, R., & McCaffery, F., "Analysis of the Classification of Medical Device Software in the AI Act Proposal," 2024, p. 8. Available at: [PDF] Analysis of the Classification of Medical Device Software in the AI Act Proposal | Semantic Scholar
37. Article 3(2) AIA defines providers as "a natural or legal person, public authority, agency or other body that develops an AI system or a general purpose AI model or that has an AI system or a general purpose AI model developed and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge".
38. Article 3(4) AIA defines deployers as "any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
39. Articles 26 & 27, AIA.
40. Recital 64, AIA.
41. Idem.
42. Article 2(7), AIA.
43. Recital 67, AIA.
44. Article 17(2), AIA.
45. Article 17(3), AIA.
46. Aboy, M., Minssen, T., & Vayena, E., "Navigating the EU AI Act: Implications for Regulated Digital Medical Products," npj Digital Medicine, 7 (2024), 237. https://doi.org/10.1038/s41746-024-01232-3
47. Schuett, J., "Risk Management in the Artificial Intelligence Act," European Journal of Risk Regulation (2023), 1–19. https://doi.org/10.1017/err.2023.1.
48. Article 9, AIA.
49. Articles 8 and 16(a), AIA. Chapter 2 contains requirements on risk management (Art 9), data and data governance (Art 10), technical documentation (Art 11), record-keeping (Art 12), transparency and the provision of information to users (Art 13), human oversight (Art 14), and accuracy, robustness and cybersecurity (Art 15).
50. Schuett, J., "Risk Management in the Artificial Intelligence Act," European Journal of Risk Regulation (2023), 1–19. https://doi.org/10.1017/err.2023.1.
51. European Union, Charter of Fundamental Rights of the European Union, Official Journal of the European Union, C 326/391 (2012). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A12012P%2FTXT.
52. Doorn, L., "Risk Management under the AI Act," MatrixReq. Available at: Risk Management under the AI Act (matrixreq.com)
53. AI: Decoded, "A Dutch algorithm scandal serves a warning to Europe – The AI Act won't save us," POLITICO. Available at: https://politico.eu.
54. Article 9(5), AIA.
55. Article 9(4), AIA.
56. Article 9(5), AIA.
57. Article 9(7), AIA.
58. Article 9(9), AIA.
59. Article 10(2), AIA.
60. Article 10(3), AIA.
61. Johnson, H.R., "The EU AI Act: How Will It Impact Medical Device Manufacturers?" MDDI Online, 2024. Available at: EU Endorses AI Act: Implications for Medical Device Manufacturers & SMEs (mddionline.com)
62. Article 10(5), AIA.
63. Article 17(f), AIA.
64. Annex IV(2)(d), AIA.
65. Article 72(2), AIA.
66. Article 72(4), AIA.
67. Article 72(3), AIA.
68. Article 74(2) & (3), AIA.
69. Article 74(4), AIA.
70. Doorn, L., "Monitoring AI Systems under the AI Act within Medical Devices," MatrixReq. Available at: Monitoring AI Systems under the AI Act within Medical Devices (matrixreq.com)
71. Idem.
72. Article 12(2)(b), AIA.
73. Article 14, AIA.
74. Doorn, L., "Monitoring AI Systems under the AI Act within Medical Devices," MatrixReq. Available at: Monitoring AI Systems under the AI Act within Medical Devices (matrixreq.com)
75. Article 26, AIA.
76. Doorn, L., "Monitoring AI Systems under the AI Act within Medical Devices," MatrixReq. Available at: Monitoring AI Systems under the AI Act within Medical Devices (matrixreq.com)
77. Idem.
78. Article 3, point (49)(c), AIA.
79. Article 73(9), AIA.
80. Article 73(10), AIA.
81. Article 4, Decision 768/2008.
82. Article 18, AIA.
83. Article 11(1), paragraph 2, AIA.
84. Commission Recommendation of 6 May 2003 concerning the definition of micro, small, and medium-sized enterprises. Available at: https://eur-lex.europa.eu/eli/reco/2003/361/oj.
85. Recital 71, AIA.
86. Idem.
87. Article 11(3), AIA.
88. Article 12, AIA.
89. Article 15, AIA.
90. Article 15(5), paragraph 3, AIA.
91. Article 14, AIA.
92. Article 14(2), AIA.
93. Article 13(1), AIA.
94. Article 11, AIA.
95. Article 26(9), AIA.
96. Article 16(l), AIA.
97. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annex V by updating the content of the EU declaration of conformity set out in that Annex, in order to introduce elements that become necessary considering technical progress.
98. Article 49, AIA.
99. Article 48(3), AIA.
100. Article 48(5), AIA.
101. Recital 123, AIA.
102. Article 3(20), AIA.
103. Article 47, AIA.
104. Article 48, AIA.
105. Recital 124, AIA.
106. Articles 28 & 30, AIA.
107. Article 43, third paragraph, AIA.
108. The provider must apply to an accredited notified body. This application should include all the documents and information specified in Annex VII.
109. Article 33(4), AIA.
110. Article 33(9), AIA.
111. Article 33(10), AIA.
112. Article 31(11), AIA.
113. Annex VIII, paragraphs 4.4 & 4.5, AIA.
114. Annex VII, point 2, AIA.
115. Annex VII, point 3, AIA.
116. Annex VII, point 5, AIA.
117. Annex VII, point 4, AIA.
118. Annex VII, point 4.6, first paragraph, AIA.
119. Article 44, AIA.
120. Annex VII, point 4.6, second paragraph, AIA.
121. FDA, "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)," April 2019. Available at: https://www.fda.gov/media/122535/download.
122. Idem.
123. Cepeda Zapata, K.A.C., Ward, T., Loughran, R., & McCaffery, F., "A Review of the Artificial Intelligence Act Proposal and the Medical Device Regulation," 31st Irish Conference on Artificial Intelligence and Cognitive Science (AICS), 2023, doi:10.1109/AICS60730.2023.10470832.
124. FDA, "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)," April 2019. Available at: https://www.fda.gov/media/122535/download.
125. Cepeda Zapata, K.A.C., Ward, T., Loughran, R., & McCaffery, F., "A Review of the Artificial Intelligence Act Proposal and the Medical Device Regulation," 31st Irish Conference on Artificial Intelligence and Cognitive Science (AICS), 2023, doi:10.1109/AICS60730.2023.10470832.
126. Annex VII, point 3.4, AIA.
127. Annex VII, point 4.7, AIA.
128. Recital 128, AIA.
129. Article 3(23), AIA: 'substantial modification' means a change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the compliance of the AI system with the requirements set out in Chapter III, Section 2 is affected or results in a modification to the intended purpose for which the AI system has been assessed.
130. Recital 84, AIA.
131. FDA, "Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles," October 2023. Available at: Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles | FDA
BSI Group
389 Chiswick High Road
London, W4 4AL
United Kingdom
+44 345 080 9000
bsigroup.com