
ELECTRONIC PERSON – WHO IS LIABLE FOR AUTOMATED DECISION?
SESSION 11

SUBJECT MATTER EXPERT


Yakob Utama Chandra, SE., MMSI
LEARNING OUTCOME

LO 4: Analyze production project Industry 4.0 and digital ecosystems


ACKNOWLEDGEMENT

• These slides have been adapted from:
Tobias Endress (2023). Digital Project Practice for New Work and Industry 4.0. First Edition. CRC Press, USA. ISBN: 978-1-032-27604-5
TECHNOLOGIES PAIRED WITH AI

As digitization continues to advance, decisions are increasingly being automated. Technologies paired with AI make it possible for many decisions to be made fully or semi-autonomously. In general, AI can hardly be defined in a clear-cut way: it is a mix of many different technologies that enables machines to understand, act, and learn with human-like intelligence.

AI therefore refers, in the broadest sense, to all automated decisions that are made largely independently of human intervention. AI thus results in a transfer of decisions from humans to machines or algorithms.
SIGNIFICANCE OF AI

AI is already relatively widespread. For example, in e-commerce, shoppers can be provided with additional offers in a more targeted manner. Other typical areas of application include assistance programs, language translation (e.g., DeepL), automated consulting processes, and autonomous driving. In industry, networked and automated machines (e.g., robots) or production lines are used as part of so-called Industry 4.0. In the broader area of legal advice, so-called smart contract solutions (i.e., computer protocols that map or check contracts or technically support the negotiation or settlement of a contract) can also be based on AI or supported by the use of AI.
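To make the smart contract idea more concrete, the following is a minimal, hypothetical sketch (not taken from the source slides): a plain-Python stand-in for a contract protocol that checks agreed conditions and triggers settlement automatically. The Escrow class, its fields, and release_payment are illustrative names only, not part of any referenced system.

```python
# Minimal sketch of a "smart contract"-style rule: conditions encoded in code
# decide the settlement of a contract without human intervention.
from dataclasses import dataclass
from datetime import date


@dataclass
class Escrow:
    """Holds a buyer's payment until the agreed contract conditions are met."""
    amount: float
    delivery_confirmed: bool = False
    delivery_deadline: date = date(2024, 12, 31)  # hypothetical deadline

    def release_payment(self, today: date) -> str:
        # The decision rule is part of the contract itself: pay the seller if
        # delivery was confirmed in time, refund the buyer after the deadline.
        if self.delivery_confirmed and today <= self.delivery_deadline:
            return f"Release {self.amount} to seller"
        if today > self.delivery_deadline:
            return f"Refund {self.amount} to buyer"
        return "Keep funds in escrow"


if __name__ == "__main__":
    contract = Escrow(amount=1000.0, delivery_confirmed=True)
    print(contract.release_payment(today=date(2024, 11, 1)))
    # -> "Release 1000.0 to seller"
```

In a real deployment, such logic would typically run on a blockchain platform rather than as plain Python, but the liability question raised in these slides is the same: the settlement decision is executed automatically, so a responsible human or legal addressee behind the code must still be identifiable.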

AI is considered to be of great importance in Europe and internationally. As far as industry investment in AI is concerned, the U.S. is one of the frontrunners, while the EU brings up the rear internationally. In the EU region, only between €2.4 billion and €3.2 billion was invested in AI projects in 2016. This is less than half the investment in Asia and low compared to North America (especially the U.S.), where between €12.1 billion and €18.6 billion was invested in AI projects in 2016. The same is true for research on AI (Manyika, 2017). Accordingly, the AI competence of employees in European and, in particular, German companies is below average by international comparison (EY, 2019). Europe also lags behind in terms of hardware: 53.4% of all supercomputers are located in Asia. However, Asia's absolute economic performance is significantly lower, which would suggest a much higher share for North America and Europe.
FUNDAMENTAL LEGAL CHALLENGE – ACCOUNTABILITY

Despite the steadily increasing importance of AI, the legal situation with regard to its fields of
application is still relatively undeveloped. In essence, this almost always relates to liability issues. When
it comes to liability for autonomous decisions, there are three protagonists that could be considered:

• The AI itself

• The programmer/manufacturer

• The user
CURRENT LEGISLATIVE PROJECTS

General Projects

Even though AI is not as widespread in the EU as in other parts of the world, a degree of AI skepticism in Europe has led to broader regulatory efforts with regard to AI. As a first step, the EU wants to clarify liability issues and has also created a kind of definition, among others, for AI and robots with the proposal on civil law regulations in the field of robotics (EU Commission, Draft Resolution of the European Parliament, with recommendations to the Commission on civil law regulations in the field of robotics, 2015/2103 (INL) of 27.05.2018).
CURRENT LEGISLATIVE PROJECTS

General Projects

According to this proposal, AI is characterized by:

• Obtaining autonomy via sensors and/or via data exchange with its environment (interconnectivity) and the provision and analysis of these data.

• The ability to self-learn through experience and interaction (an optional criterion).

• At least minimal physical support.

• The ability to adapt its behavior and actions to its environment.

• Not being a living being in the biological sense.


CURRENT LEGISLATIVE PROJECTS

Beyond this definition, the EU wants to address the legal challenges posed by AI and regulate the core areas it has identified: product liability, IT or system security, and data protection. This is to be achieved through appropriate national oversight of AI users and programmers. In this context, the EU essentially recognizes that, for decisions made by AI, legal protection must always be available against a person responsible for the AI. Furthermore, the protection of data and information in the online sphere plays a decisive role. It is clearly recognizable that, in its legislative plans regarding liability for AI, the EU also wants to focus on the human protagonists behind the technology.
BASIC CONSIDERATIONS ON ACCOUNTABILITY FOR AI

The starting point is to prevent protagonists from evading responsibility by transferring decisions to AI. At the same time, the principle of ultimate human responsibility must apply. A principle must be established that the legal standards for “fair” and “right” are incorporated into the act of programming, so that they become part of the decision-making standard of the AI. AI must not change the basic legal order and the essential legal standards; rather, and this is the legislative mandate, it must be ensured that these values are taken into account in AI decisions. This can be done within the framework of a voluntary commitment (Nathmann, 2021).

To this end, it is first necessary that there is always an ultimate human responsibility. This requires that a responsible addressee is defined, which can be either the programmer, the user, or both. This would follow the idea that liability must be assumed for the risk posed by AI, similar to the principles of product liability.
THANK YOU
