
MULTISCIENCE E-ISSN 22722 - 2985

JANUARY - APRIL 2025

CRIMINAL LIABILITY FOR MISUSE OF ARTIFICIAL INTELLIGENCE (AI) IN DEEPFAKE CRIMES

Jandrie Sembiring1, Sunny Ummul Firdaus2
1,2) Faculty of Law, Sebelas Maret University, Surakarta
Email: [email protected], [email protected]

ABSTRACT

Artificial Intelligence (AI) has brought significant innovations to various fields, but it has also created new challenges in the realm of criminal law. One of the negative implications of AI is its misuse in deepfake crimes: a media-manipulation technology that can create video, audio, or image content that appears authentic but is fake. Crimes involving deepfakes can include blackmail, the spread of fake news, defamation, and privacy violations. This paper discusses criminal liability for the use of AI in deepfake crimes, highlighting the relevant legal aspects, including who can be held liable: the AI creator, the technology user, or other related parties. The study analyzes existing regulations, including the Electronic Information and Transactions Law (UU ITE), as well as the role of law enforcement agencies in combating deepfake crimes. It also identifies challenges faced in the law enforcement process, such as limited resources, a lack of coordination between agencies, and gaps in technological understanding among law enforcement officers. The results show a legal and regulatory vacuum governing artificial intelligence and deepfake crimes: although regulations on Information and Electronic Transactions (ITE) exist, they have not been able to resolve the deepfake crimes that are increasingly occurring in Indonesia. Despite significant progress in the legal framework and policies related to cybercrime, many obstacles still reduce the effectiveness of law enforcement. Efforts are therefore needed to strengthen ongoing regulation, build the capacity of law enforcement institutions, increase international cooperation, and update regulations in line with technological developments. It is hoped that law enforcement against deepfake crimes that utilize artificial intelligence in electronic transactions in Indonesia can then better protect the public from the misuse of AI.
Keywords: Criminal Liability, Artificial Intelligence, Deepfake Crimes.

INTRODUCTION

Human civilization in this era has entered the Industrial Revolution 4.0, an era characterized by the entry of technology into the industrial world. Industry 4.0 is a development of the previous industrial revolutions, beginning with James Watt's invention of the steam engine in 1784, which triggered the first industrial revolution. The second industrial revolution was marked by the use of electricity around 1870. Toward the end of the 20th century, computer technology was developed, a period known as the third industrial revolution.
The era of the industrial revolution 4.0 is also marked by the presence of artificial
intelligence (henceforth referred to as AI). AI is a term to refer to the simulation of human
intelligence and thought processes by machines connected to a sea of data and information.
These machines are built with capacities and intelligence that resemble human intelligence. AI

is a technology that humans can use as a tool to support their own activities. Functionally, AI serves much the same purpose as a robot, but it appears in a different form: a computer system presented visually. AI can thus also be described as the brain of a robot.
In its development, AI has undergone three stages of evolution. The first is Artificial Narrow Intelligence (ANI), known as Weak AI. Next is Artificial General Intelligence (AGI), called Strong AI, with capabilities equal to those of humans. Finally, there is Artificial Super Intelligence (ASI), a form of AI designed to exceed human capabilities. The current evolution of AI is still at the Weak AI stage. One example is the use of AI in the automotive industry, such as driverless cars equipped with an autopilot feature that allows them to operate without a driver. AI has also taken over many human tasks in daily life; Google Translate, for instance, can translate between many of the world's languages quickly, without the need for a dictionary.
The impact of AI on human life is not limited to easing work; it goes further, profoundly changing human lifestyles and habits. In its development, AI has touched various sectors of human activity, including the legal field. Since 2017, China has used AI technology as a judge in digital cases; this use is still limited but will continue to grow over time. The Netherlands also uses AI in the legal field, providing open access to the regulations and agreements that apply in the country. Not to be outdone by these developed countries, Indonesia has also begun to use AI in the legal world, including platforms such as heylaw and hukumonline, which provide various features for obtaining legal data and information to assist the law enforcement process.
The existence of AI in technological development cannot be separated from the legal arrangements that apply in a country. Given AI's ability to perform human tasks, various legal issues arise regarding its actions. AI is an artificial intelligence limited by the code that underlies its ability to act.
In Indonesia, there are no regulations that specifically and clearly regulate AI, which
could certainly become a legal problem in the future if AI technology performs actions that are
contrary to applicable positive law. In this case, given AI's ability to perform legal actions, AI
can behave like humans, including committing criminal acts that harm other parties. One
example of a criminal act utilizing AI that is currently rampant is deepfake.
This technology was first introduced by Ian Goodfellow in 2014, initially for entertainment purposes. One early implementation of deepfake technology was the Face Swap application, which allowed users to swap photos of their faces with other people's faces. Deepfake was subsequently misused to take other people's photos, videos, or voices from the internet without permission and manipulate them for certain purposes. Not only are the parties whose likenesses are used harmed, but so are the people who consume the media, who can be affected by the spread of false information. Deepfake began to gain widespread attention in 2017, when it was used unethically on the Reddit platform by users who shared pornographic videos made with facial manipulation technology. Deepfakes are difficult to recognize at a glance because they use real footage, making them look very convincing.
Some countries that have adopted AI technology in various fields have recognized AI
as a legal subject with rights and obligations. However, this is not the case in Indonesia, where
AI is not recognized as a legal subject under positive law. Therefore, it is important to explain
the responsibility for legal acts committed by AI in this study, particularly from a criminal law
perspective. This becomes important when dealing with AI-based criminal offenses such as
deepfake.


Linked to this legal issue, the author would like to present it in an article with the title:
“Criminal Liability for Misuse of Artificial Intelligence (AI) in Deepfake Crimes.”

METHODS

In conducting this research, the method used is a qualitative method, with a normative
juridical approach. The type of research is library research. The data sources of this research
are obtained from laws and regulations, books, journals and document studies.

RESULTS AND DISCUSSION

1. The Position of AI as a Legal Subject in Criminal Law


The position of AI in criminal law concerns whether AI, when it commits a criminal offense, can be called a legal subject like a person or a legal entity. This discussion begins with several doctrinal definitions of the legal subject in the realm of civil law, including the following:
1. Subekti, defines a legal subject as a bearer of rights, or a subject in law, namely a person;
2. Soedikno Mertokusumo, defines a legal subject as everything that can obtain rights and
obligations from the law. Only humans can be the subject of law;
3. Syahran, defines the subject of law as a supporter of rights and obligations;
4. Chaidir Ali, defines the subject of law as a human being with legal personality and
everything that is based on the demands of the needs of society as such and by law is
recognized as a supporter of rights and obligations;
5. Agra, defines the subject of law as everyone who has rights and obligations and thus has legal authority (rechtsbevoegdheid).

Meanwhile, in the realm of criminal law, the subject of law has developed from consisting only of persons (natuurlijk persoon) to also include legal entities (rechtspersoon). Wirjono Prodjodikoro, in his book The Principles of Indonesian Criminal Law, says that in the view of the old Criminal Code, which is currently still in effect, the subject of a criminal offense is the human being as an individual. This is evidenced by the forms of criminal sanction contained in the old Criminal Code: imprisonment, fines, confinement, and closure.
The old Criminal Code that is currently still in effect provides that only the management (directors) of a corporation can be held criminally liable. In its development, however, corporations themselves can also be held legally responsible, a concept first introduced by Law No. 32/2009 on Environmental Protection and Management.
In a subsequent development, corporations were stipulated as legal subjects for all criminal offenses by Law Number 1 of 2023 concerning the Criminal Code, which comes into force in 2026, three years after its enactment. Article 45 paragraph (1) states that the corporations that are subjects of criminal offenses include:
1. Legal entities such as PT, foundations, cooperatives, State-Owned Enterprises
(BUMN), Regional-Owned Enterprises (BUMD), or the equivalent;
2. Associations both incorporated and unincorporated;
3. Business entities in the form of a firm, partnership, or the equivalent in accordance with
the provisions of laws and regulations.


Meanwhile, the conditions under which corporations can be held accountable for criminal acts can be found in the UUPPLH, namely if:
1. included in the scope of business or activities as specified in the articles of association
or other provisions applicable to the corporation;
2. benefit the corporation unlawfully;
3. accepted as corporate policy;
4. the corporation does not take the necessary steps to prevent the offense, mitigate its greater impacts, and ensure compliance with applicable legal provisions in order to avoid criminal acts; and/or
5. the corporation allows the criminal offense to occur.
Legal subjects have a very important position and role in the field of law, especially civil law, because legal subjects have legal authority. In the legal literature, there are two types of legal subjects, as follows:
1. Human (Natuurlijk persoon)
Every person has the same position as a supporter of rights and obligations. In principle, a person is a legal subject from birth until death. Even a child still in the womb is considered to have been born if the child's interests so require, for example for inheritance purposes. A human (person) can be a subject of law under the following conditions:
A. The person is an adult, aged 21 years (Marriage Law No. 1 of 1974 and Article 330 of the Civil Code);
B. The person is under 21 years old but has been married;
C. The person is of sound mind.

2. Legal Entity (Rechtspersoon)


According to Article 1653 of the Civil Code, a legal entity is an association of persons
recognized by law or held by public authority and established for a specific purpose that is not
contrary to law or morality.
According to Prof. Dr. Mr. L.J. van Apeldoorn, a legal entity is every association of humans that acts in legal relations as if it were a single person. Meanwhile, according to E. Utrecht, a legal entity (rechtspersoon) is a body that according to the law has the power or authority to be a supporter of rights: one that has no soul, or more precisely is not human. As a social phenomenon, a legal entity is something real, a fact that truly exists in legal interaction even though it has neither a human form nor a physical body made of iron, wood, and so on.
Regarding the translation of zedelijk lichaam as "legal entity," Purnadi Purbacaraka and Soerjono Soekanto argue that "body" is the correct translation of lichaam, but "legal" is wrong as a translation of zedelijk, whose real meaning is "moral." Since the term zedelijk lichaam is nowadays synonymous with rechtspersoon, it is better rendered as "legal person."
AI cannot simply be given rights, because it is man-made; the rights and obligations of an artificial intelligence are borne by the individual or legal entity that created it. The construction of those rights is not far removed from human rights, such as intellectual property rights, the right to obtain information, and so on. In terms of legal responsibility, the actions of an artificial intelligence can be charged to its user or to the legal entity that created it. The owner of an artificial intelligence and the artificial intelligence itself are very closely related in giving rise to an authority. Given that no legislative product specifically regulates the rights and obligations of artificial intelligence, an analogy can be used to regulate them. In line with the author's opinion, Atsar and Sutrisno liken its position to that of a legal entity as a legal subject, though they leave open the possibility of treating it as a new kind of entity.


AI does not meet these criteria. AI does not have rights and obligations like humans or legal entities. Even a legal entity has an authority that controls, manages, and is responsible for its actions, such as the directors of a company. AI has no internal "manager" of its own that is responsible; rather, it is operated by the humans or companies that control it.
Conceived as the work of a legal entity or an individual, the position of artificial intelligence can be supported by an alternative interpretation of the Civil Code (KUH Perdata) that analogizes it to a worker. The relationship between workers and employers appears in Article 1367 paragraphs (1) and (3) of the Civil Code: "Paragraph (1): A person is liable not only for damage caused by his own acts, but also for damage caused by the acts of persons for whom he is responsible or by goods under his supervision." "Paragraph (3): Employers and those who appoint others to represent their affairs are liable for damage caused by their servants or subordinates in the performance of the work for which they are employed." From an anthropomorphic viewpoint, artificial intelligence is analogized to a person as a worker, with the characteristics of a worker attributed to it. This logic accords with the view that artificial intelligence is used daily to do things that humans do.
Atsar and Sutrisno, in their 2022 research, stated that artificial intelligence is an extension of the human hand as a legal subject: it works only according to the program set for it by humans. Humans are therefore responsible for unexpected events later caused by artificial intelligence, whether in the civil, criminal, or administrative domain. Furthermore, in their view, artificial intelligence does not hold values, ethics, or a conscience. In criminal law, artificial intelligence lacks inner responsibility in the form of actus reus and mens rea as qualifications for its actions and accountability. In terms of rights and obligations, artificial intelligence does not have what its inventor has, so it is more appropriately treated as an object rather than a subject of law.

2. Criminal Liability for AI Used to Commit Deepfake Abuse


Criminal responsibility is the imposition of punishment on an actor in order to determine whether a defendant or suspect can be held accountable for a criminal act that has occurred. Article 34 of the new Criminal Code formulates criminal responsibility as, objectively, reproach for a criminal act on the basis of applicable legal provisions and, subjectively, directed at a maker who meets the requirements of the criminal law to be punished for the act. The requirement for criminal responsibility, or for the imposition of a penalty, is an element of guilt in the form of intent or negligence.
The development of artificial intelligence gave birth to a related technology later called deepfake. Deepfake uses generative adversarial network (GAN) technology, which relies on neural networks that analyze large sets of samples to mimic human facial expressions, mannerisms, voices, and tone of voice. The term deepfake combines deep learning, meaning deep machine-learning technology, and fake. The combination of facial-recognition algorithms and deep-learning networks used for this is referred to as a variational auto-encoder.
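As a loose, purely illustrative sketch of the shared-encoder idea described above (the names, vector sizes, and untrained linear maps below are hypothetical stand-ins; real deepfake systems train deep convolutional networks on thousands of frames):

```python
import numpy as np

# Toy sketch of the shared-encoder / per-identity-decoder idea behind
# deepfake face swapping. The linear maps are hypothetical, untrained
# stand-ins for the deep networks described in the text.

rng = np.random.default_rng(seed=0)
FACE_DIM, LATENT_DIM = 64, 8  # illustrative sizes, not real image dimensions

# One shared encoder compresses any face into a compact latent code.
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM))

# One decoder per identity reconstructs a face from that latent code.
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM))

def encode(face: np.ndarray) -> np.ndarray:
    """Map a face vector to its latent representation."""
    return W_enc @ face

def decode(latent: np.ndarray, decoder: np.ndarray) -> np.ndarray:
    """Reconstruct a face vector from a latent code with a given decoder."""
    return decoder @ latent

# A frame of person A (random stand-in for real pixel data).
face_a = rng.standard_normal(FACE_DIM)

# The "swap": encode A's frame, then decode with B's decoder, yielding
# A's pose and expression rendered with B's identity.
swapped = decode(encode(face_a), W_dec_b)
print(swapped.shape)  # (64,)
```

The design point is that the shared encoder captures identity-independent features (pose, expression), while each decoder re-renders those features as one specific identity; in a trained system, each identity's decoder is optimized to reconstruct that identity's own frames.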
Deepfake technology is often used for the purpose of sexualizing victims. With the help
of AI, deepfake is monetized by irresponsible people who provide paid editing services with a
credit recharge system. Customers only need to send a photo of the victim to be edited.
Deepfake video creation services are also in high demand, with costs ranging from US$300 to US$20,000 per minute depending on the level of difficulty (Herman, 2023). It does not stop there: manipulated media is usually accompanied by threats to spread it. The motivation of this
intimidation is to instill terror and to blackmail the victim into handing over a certain amount of money as the condition for the content not being spread. Many people enjoy deepfake content even though they know it is a form of sexual harassment and against the law. A now-deleted subreddit called r/deepfakes reportedly had nearly 90,000 users, whose main activity was sharing deepfake pornography. The internet is the main platform for explicit content, and over the years pornographic content has become more varied, making it easy for anyone to access or upload indecent material. Unlimited sexual preferences and fantasies also easily find a place on pornographic websites that can be accessed for free. This points to a moral crisis, and easy access to pornography has had a hugely negative impact, because those harmed are numerous and no one is exempt. It is ironic, then, that security measures on pornographic pages tend to be minimal. Victims of such pornographic content also often suffer prolonged physical and psychological harm, including sexual predation, emotional trauma, cyberbullying, and even suicidal tendencies. Sharing pornographic content is often trivialized, but restoring reputations and erasing digital footprints is difficult.
Children can access the internet easily because there are no strict age restrictions from
internet page providers. The internet provides an enormous amount of information and feeds children's curiosity, creating a butterfly effect in which children can be exposed to things they should not be. According to a Pew Research Center survey, almost 95% of
teenagers have access to the internet. The Ministry of Women's Empowerment and Child
Protection (Kemen PPPA) revealed that 66.6% of boys and 62.3% of girls in Indonesia are
exposed to pornographic content on the internet. The more time teenagers and children spend
on the internet, the more likely they are to be accidentally exposed to explicit content, such as
those in advertisements while browsing the internet. Deepfake perpetrators not only target
adults, but underage children and teenagers are also easy targets, especially for people with
pedophilia disorders.
Pornography is a crime that is clearly regulated by law. In Indonesia, Article 4 of Law Number 44/2008 on Pornography stipulates that the act of producing, making,
reproducing, duplicating, disseminating, broadcasting, importing, exporting, offering, trading,
renting, or providing pornography can be charged with criminal penalties. Criminal threats
against the perpetrators are regulated in Article 45 of Law Number 19 of 2016 concerning
Amendments to Law Number 11 of 2008 concerning Electronic Information and Transactions
which reads “Every person who intentionally and without right distributes and/or transmits
and/or makes accessible Electronic Information and/or Electronic Documents that have content
that violates decency as referred to in Article 27 paragraph (1) shall be punished with a
maximum imprisonment of 6 (six) years and/or a maximum fine of Rp1,000,000,000.00 (one
billion rupiah).”
The criminal punishment aims to provide a deterrent effect on the perpetrator and justice for the victim. In fact, far more people are working to develop deepfake technology than to develop the technology to detect and remove it. Deepfake is not specifically regulated in civil or criminal law, but existing legislation has been adapted to cover defamation, identity fraud, and deepfake identity impersonation. Indonesia has several legal grounds that can be used to prosecute deepfake offenders. One is Article 48 paragraph (1) of the ITE Law, which reads: "Every person who fulfills the elements referred to in Article 32 paragraph (1) shall be punished with a maximum imprisonment of 8 (eight) years and/or a maximum fine of Rp2,000,000,000.00 (two billion rupiah)." Article 32 paragraph (1) stipulates that changing, adding, reducing, transmitting, damaging, removing, moving, or hiding Electronic Information and/or Electronic Documents belonging to another person or to the public is an unlawful act.


Another article that can be used is Article 68 of Law Number 27 of 2022 concerning
Personal Data Protection which reads “Every person who intentionally makes false Personal
Data or falsifies Personal Data with the intention of benefiting themselves or others which may
result in harm to others as referred to in Article 66 shall be punished with a maximum
imprisonment of 6 (six) years and/or a maximum fine of Rp6,000,000,000.00 (six billion
rupiah).” The legal framework plays a vital role in regulating the creation and dissemination of
deepfake content (Kumar, 2023). The increasing spread of misinformation and deepfakes has also led to a decline in public trust in the authorities, and that distrust in turn makes deepfakes easier to spread and consume. Advances in cybersecurity are needed to develop more sophisticated detection tools capable of identifying deepfake content; overcoming this problem is a standing challenge for cybersecurity. The challenge is compounded by people who remain unaware of personal data protection and who readily consume hoax news because they are reluctant to check the validity of media. Increasing public awareness and transparency is also an important component.
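To make the idea of automated screening concrete, here is a deliberately naive sketch (the flagging rule, the threshold ratio, and the synthetic data are all made up for illustration; real deepfake detectors rely on trained neural networks, not simple frame statistics): flag clips whose largest frame-to-frame change dwarfs the typical change, since crude manipulation can introduce abrupt, artifact-like jumps.

```python
import numpy as np

def frame_difference_scores(frames: np.ndarray) -> np.ndarray:
    """Mean absolute pixel change between each pair of consecutive frames.

    frames: array of shape (num_frames, height, width), grayscale values.
    """
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

def looks_suspicious(frames: np.ndarray, ratio: float = 10.0) -> bool:
    """Made-up rule: flag the clip if the largest inter-frame change exceeds
    `ratio` times the median change (real detectors are learned, not rules)."""
    scores = frame_difference_scores(frames)
    return bool(scores.max() > ratio * np.median(scores))

rng = np.random.default_rng(seed=1)
# A "smooth" clip: slow random drift, like ordinary video motion.
smooth_clip = np.cumsum(rng.normal(0.0, 0.1, size=(30, 8, 8)), axis=0)
# A "glitchy" clip: the same drift plus one abrupt, artifact-like jump.
glitchy_clip = smooth_clip.copy()
glitchy_clip[15] += 50.0

print(looks_suspicious(smooth_clip))   # False
print(looks_suspicious(glitchy_clip))  # True
```

A median-based baseline is used instead of the mean so that the injected outlier does not inflate its own detection threshold; even so, a fixed-ratio rule like this is far too crude for real deepfakes, which is precisely why the text calls for more sophisticated detection tools.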
Evidence is a very crucial stage in criminal law that directly affects the running of the
trial process. Without an evidentiary process and evidence, a criminal event cannot be
considered valid. The judge's decision is strongly influenced by evidence that supports a crime.
Article 6 paragraph (2) of Law Number 48 of 2009 concerning Judicial Power states “no one
shall be sentenced to punishment unless the court, by means of valid evidence under the law, is
convinced that a person who is considered liable, has been guilty of the act charged against
him." This is strengthened by Article 1 point 13 of Law Number 2 of 2002 concerning the Police, which reads: "Investigation is a series of actions by investigators, in the cases and in the manner regulated by this law, to seek and collect evidence which sheds light on the criminal act that occurred and to find the suspect." Photos, videos, and voice recordings can be used as important evidence in the investigation process.
With deepfakes, these media can be faked and deliberately created to fabricate events that never happened and to distort the facts. The impact of false evidence is that it becomes increasingly difficult to know what to believe, which slows down the trial process and can even influence the judge's decision to the detriment of the parties to the case. The widespread use of deepfakes requires police officers to examine evidence in greater depth and verify whether it is authentic. The trial process for cybercrime cases begins
verify whether the evidence is authentic or not. The trial process for cybercrime cases begins
with the police making a police report and calling witnesses from the owner of the internet
provider used by the perpetrator, examining the crime scene to collect evidence used by the
perpetrator in carrying out his actions, especially on computer devices, examining witnesses
and asking for expert testimony, arresting and detaining the perpetrator to conduct a direct
investigation, and finally determining the article that can be imposed to continue to the trial
process. The public prosecutor needs to make an indictment followed by valid evidence and in
accordance with Article 183 of the Criminal Procedure Code, at least two pieces of evidence
and accompanied by the judge's conviction. Of course, what is difficult in this process is
finding the perpetrator whose whereabouts and identity are difficult to trace. This phenomenon
requires law enforcement to develop increasingly complex skills and legal processes. Such
sophisticated technology makes it easy for perpetrators to cover their identities and digital
footprints and makes it difficult for law enforcement to carry out the tracking process.
Perpetrators can access the internet from any device, anywhere, with hidden identities, so collecting evidence requires extra effort. Virtual Private Networks (VPNs) also help perpetrators, because a VPN is designed to hide its users' IP addresses. As the saying goes, "there is no perfect crime": conventional crimes always leave traces. The ease of internet access in this era, however, has made cybercrime
possible without a trace and unseen by any witness. Therefore, law enforcement officers not only need to improve their skills in detecting deepfakes but must also invest in the ability to meet future technological challenges (European Union Agency for Law Enforcement Cooperation, 2022).
Digital analysis and digital forensics were born as disciplines aimed at working together to combat cybercrime. The threat posed by AI technology and the internet demands more efficient computing tools to detect and block pornographic content. Reddit and Pornhub have banned deepfake pornography and work to follow up on user reports of such content. The United States has taken legal action by passing the National Defense Authorization Act (NDAA).
The law requires the US Director of National Intelligence to report on the use of deepfakes by foreign governments. However, this policy is considered insufficient to address the problems deepfakes cause, as technological advances continue to accelerate and produce innovations the law cannot necessarily reach. Since the technology's inception, only five states in the United States have passed laws regarding deepfakes. Starting in 2019, Texas and California banned the use of deepfakes to influence upcoming elections. California and Virginia also passed laws in the same year prohibiting the creation and distribution of non-consensual deepfake pornography. A year later, New York passed a law establishing the right to take legal action against the publication of unlawful deepfakes. Bills to regulate AI were also proposed and passed in several states in 2022.

CONCLUSIONS

A. Artificial Intelligence (AI) cannot be qualified as a legal subject in criminal law; it is more precisely an object. The actions carried out by AI are commands originating from AI users, in this case humans, sent via code or descriptors, so the actions of an AI are not conscious actions like those of humans.
B. The lack of regulation and the widening spread of this crime threaten not only popular figures but potentially anyone. Media uploaded to the internet can be accessed by anyone, so anyone's photos, videos, and voice circulating online can be used as deepfake material. The impact on privacy and personal security is an effect of cybercrime that must be monitored. If not dealt with firmly, deepfake crime, with its rapid development, can destroy someone's life through the spread of false information. Internet technology is, at bottom, controlled by humans and cannot be designated a legal subject; violations committed through AI are the full responsibility of the person operating it.

REFERENCES

Abdul Atsar and Budi Sutrisno, “Tanggungjawab Kecerdasan Buatan Sebagai Subjek
Hukum Paten Di Indonesia,” In Proceeding Justicia Conference, Vol. 1, 2022, 1–14.
Amboro, Priyo, and Komarhana, “Prospek Kecerdasan Buatan Sebagai Subjek Hukum Perdata Di Indonesia,” Law Review, Vol. 21, No. 2 (2021): 145–172.
Eka N.A.M Sihombing & Syaputra, Implementasi Penggunaan Kecerdasan Buatan dalam
Pembentukan Peraturan Daerah, Jurnal Ilmiah Kebijakan Hukum, Vol. 14, No. 3, 2020
Hamzah Hatrik, 1996, Asas Pertanggungjawaban Korporasi Dalam Hukum Pidana Indonesia,
Raja Grafindo, Jakarta


Harumiati Natadimaja, 2009, Hukum Perdata Mengenai Hukum Orang dan Hukum Benda,
Graha Ilmu, Yogyakarta
Itok Dwi Kurniawan, Analisis Terhadap Artificial Intelligence sebagai Subjek Hukum Pidana,
Jurnal Mutiara Multidisiplin, Vol. 1, No. 1, 2023
Michael Reskianto Pabubung, Human Dignity Menurut Paus Yohanes Paulus II dan Relevansi
Terhadap Kecerdasan Buatan (AI), Jurnal Teologi, Vol. 10, No. 1, 2021
Muhammad RM Fayasy Failaq, Transplantasi Teori Fiksi dan Konsesi Badan Hukum terhadap Hewan dan Kecerdasan Buatan sebagai Subjek Hukum, Jurnal Hukum dan HAM Wara Sains, Vol. 1, No. 2.
Muhammad Tan Abdul Rahman Haris & Tantimin, Analisis Pertanggungjawaban Hukum
Pidana Terhadap Pemanfaatan Artificial Intelligence di Indonesia, Jurnal Komunikasi
Hukum, Vol. 8, No. 1, 2022
Novyanti, Jerat Hukum Penyalahgunaan Aplikasi Deepfake Ditinjau Dari Hukum Pidana,
Novum: Jurnal Hukum, 2021
Qur’ani Dewi Kusumawardani, Hukum Progresif dan Perkembangan Teknologi Kecerdasan
Buatan, Veritas Et Justitia, Vol. 5, No. 1, 2021
Shannon Gandrova & Ricky Banke, Penerapan Hukum Positif Indonesia Terhadap Kasus
Kejahatan Dunia Maya Deepfake, Madani: Jurnal Multidisiplin, Vol. 1, No. 10, 2023
Shannon Gandrova, Jurnal Ilmiah Multidisipline, No. 10, 2023
Soesi Idayanti, dkk, Pembangunan Hukum Bisnis Dalam Perspektif Pancasila Pada Era
Revolusi Industri 4.0, Jurisprudence, Vol. 9, No. 1, 2022
Supriyadi & Asih, Implementasi Artificial Intelligence (Ai) Di Bidang Administrasi Publik
Pada Era Revolusi Industri 4.0, Jurnal RASI, Vol. 2, No. 2, 2020
Surden, Artificial Intelligence & Law, An Overview, Georgia State University Law Review,
Vol. 35, No. 2, 2019
T.C. Lin, Artificial Intelligence: Finance, and the Law, Fordham Law Rev, Vol. 88, No. 1, 2019
Verheij, Artificial Intelligence as Law, Artificial Intelligence and Law, Vol. 28, No. 2, 2020
Wirjono Prodjodikoro, 2011, Asas-Asas Hukum Pidana di Indonesia, Refika Aditama,
Bandung
