Electronic Person: Who Is Liable for Automated Decisions?
As digitization continues to advance, more and more decisions are being automated. Technologies paired with AI make it possible to take many decisions fully or semi-autonomously. In general, AI can hardly be defined in a clear-cut way: it is a mix of many different technologies that enables machines to understand, act, and learn with human-like intelligence.
In the broadest sense, AI therefore covers all automated decisions that are largely independent of human intervention. AI thus results in a transfer of decisions from humans to machines or algorithms.
SIGNIFICANCE OF AI
AI is already relatively widespread. In e-commerce, for example, shoppers can be presented with additional offers in a more targeted manner. Other typical areas of application include assistance programs, language translation (e.g., DeepL), automated consulting processes, and autonomous driving. In industry, networked and automated machines (e.g., robots) or production lines are used as part of so-called Industry 4.0. In the broader area of legal advice, so-called smart contract solutions (i.e., computer protocols that map or check contracts, or that technically support the negotiation or settlement of a contract) can also be based on AI or supported by its use.
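As a minimal sketch of this idea, the toy protocol below checks a contract condition and triggers settlement automatically once it is met. All names and rules here are hypothetical illustrations, not drawn from any real smart-contract platform.

```python
# Hypothetical toy example: a "smart contract" as a computer protocol that
# checks contract conditions and executes the settlement step automatically.
# PurchaseContract and settle() are invented for illustration only.

from dataclasses import dataclass

@dataclass
class PurchaseContract:
    price: float            # agreed purchase price in EUR
    delivered: bool = False # set by a delivery confirmation event
    paid: bool = False      # set once settlement has been executed

def settle(contract: PurchaseContract) -> str:
    """Check the contract terms and settle automatically if they are met."""
    if contract.paid:
        return "already settled"
    if not contract.delivered:
        return "waiting for delivery confirmation"
    # Condition met: the protocol executes the agreed payment step itself,
    # without a human deciding the individual case.
    contract.paid = True
    return f"released payment of {contract.price:.2f} EUR to the seller"

order = PurchaseContract(price=499.00)
print(settle(order))   # waiting for delivery confirmation
order.delivered = True
print(settle(order))   # released payment of 499.00 EUR to the seller
```

The point of the sketch is that the decision to settle is taken by the protocol itself once the encoded condition holds, which is exactly where the liability question of this section arises.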
Despite the steadily increasing importance of AI, the legal situation in its fields of application is still relatively undeveloped. In essence, the open questions almost always concern liability. When it comes to liability for autonomous decisions, three protagonists come into consideration:
• The AI itself
• The user
• The programmer
CURRENT LEGISLATIVE PROJECTS
General Projects
Even though AI is less widespread in the EU than in other parts of the world, a degree of AI skepticism in Europe has led to broader regulatory efforts concerning AI. As a first step, the EU wants to clarify liability issues and has, among other things, created a kind of definition of AI and robots with its proposal on civil law rules in the field of robotics (EU Commission, Draft Resolution of the European Parliament, with recommendations to the Commission on civil law regulations in the field of robotics, 2015/2103(INL) of 27.05.2018).
According to this proposal, AI is characterized by:
• Acquiring autonomy via sensors and/or via data exchange with its environment (interconnectivity) and by providing and analyzing these data
• The ability to self-learn through experience and interaction (optional criterion)
Beyond that, the EU wants to address the legal challenges posed by AI and to regulate the core areas it has identified: product liability, IT and system security, and data protection. This is to be achieved through appropriate national oversight of AI users and programmers. In this context, the EU essentially recognizes that, for decisions made by AI, legal protection must always be available against a person responsible for the AI. Furthermore, the protection of data and information online plays a decisive role. It is clearly recognizable that, in its legislative plans on liability for AI, the EU also wants to focus on the human protagonists behind the technology.
BASIC CONSIDERATIONS ON ACCOUNTABILITY FOR AI
The starting point is to prevent protagonists from evading responsibility by transferring decisions to AI. At the same time, the principle of ultimate human responsibility must apply. A principle must be established that the legal standards for what is "fair" and "right" are incorporated into the act of programming, so that they become part of the AI's decision-making standard. AI must not change the basic legal order and its essential standards; rather, and this is the legislative mandate, it must be ensured that these values are taken into account in AI decisions. This can be done within the framework of a voluntary commitment (Nathmann, 2021).
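To make the idea of "incorporating legal standards into the act of programming" concrete, the sketch below shows one way a legal constraint could be encoded directly into automated decision logic, with escalation to a human where the constraint bites. The rule, threshold, feature names, and roles are all assumptions invented for illustration; they do not reflect any specific statute or real system.

```python
# Hypothetical sketch: a legal standard (here, an assumed non-discrimination
# rule) encoded as an explicit guard in the decision logic, so it becomes
# part of the AI's decision-making standard. All names are illustrative.

PROHIBITED_FEATURES = {"gender", "ethnicity"}  # assumed prohibited inputs

def automated_credit_decision(applicant: dict, score: float) -> dict:
    # The legal standard is checked before any model output is acted upon,
    # and a human addressee remains accountable for the outcome.
    used = set(applicant) & PROHIBITED_FEATURES
    if used:
        return {
            "decision": "escalate_to_human",
            "reason": f"prohibited features in input: {sorted(used)}",
            "responsible": "compliance officer",  # ultimate human responsibility
        }
    return {
        "decision": "approve" if score >= 0.7 else "decline",
        "responsible": "credit department",
    }

print(automated_credit_decision({"income": 52000, "gender": "f"}, score=0.9))
print(automated_credit_decision({"income": 52000}, score=0.9))
```

The design choice worth noting is that the constraint is not left to the learned model but enforced around it, and every outcome names a responsible human addressee, mirroring the principle of ultimate human responsibility described above.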
To this end, there must first always be ultimate human responsibility. This requires that a responsible addressee be defined, which can be the programmer, the user, or both. This follows the idea that liability must be assumed for the risk posed by AI, similar to the principles of product liability.