AI Ethics
5.1 INTRODUCTION
The dictionary defines ethics as "the moral principles that govern a person's or a group's behaviour or actions" or "the moral correctness of a conduct or action". In short, ethics is the moral responsibility of anyone or anything that can impact others.
Since AI has gained so much power that it can change the lives of people, it becomes important to look into the ethical issues around it, as AI can have an enormous impact on societies and even nations. Major ethical issues of AI are :
♦ Bias and Fairness
♦ Accountability
♦ Transparency
♦ Safety
♦ Human-AI Interaction
♦ Trust, Privacy and Control
♦ Malicious Use of AI
♦ Automation and Impact over Jobs
♦ Human Rights in the Age of AI
[Figure: Ethical issues around AI — AI impact, control of AI (over things and people), automation, impact over jobs, human rights vs. robot rights, auditability, interpretability]
1. Bias and Fairness
Ethically an AI system should be free from all types of biases and be fair, e.g., an AI system
designed for picking candidates for a job must not be biased against any gender, race,
colour or sexuality and so forth. It should be free from all such things and be totally fair.
2. Accountability
AI learns and evolves over time and with data. What if an evolved algorithm makes some big
mistake? Who would be accountable for it? For instance, when an autonomous Tesla car hit
a random pedestrian during a test, Tesla was blamed and not the human test driver sitting
inside, and certainly not the algorithm itself. But what if the program was created by
dozens of different people and was also modified with each incident and more data
available? Can the developers or the testers be blamed then?
3. Transparency

4. Safety
AI tools and practices should be implemented safely such that they cause no harm to data or people, and the processes and AI practices must be safe to use.
Figure 5.2
5. Human-AI Interaction
With the evolution of AI, many AI technologies, such as humanoid robots, have their actions and appearance similar to those of human beings or other living beings. Humans very easily
attribute mental properties to objects, and empathise with them, especially when the
outer appearance of these objects is similar to that of living beings. Thus, AI must be
responsible enough and must not exploit this (i.e., looks and actions similar to living
beings) to deceive humans (or animals) into attributing more intellectual or even
emotional significance to robots or AI systems than they deserve. In short, AI must not
deceive humans or other living beings, and it must not threaten or violate human dignity
in any way.
6. Trust, Privacy and Control
Improved AI "faking" technologies turn what once was reliable evidence into unreliable evidence - this has already happened to digital photos, sound recordings, and
video. It will soon be quite easy to create (rather than alter) "deep fake" text, photos, and
video material with any desired content. Soon, sophisticated real-time interaction with persons over text, phone, or video will be faked, too. So, we cannot trust digital interactions while we are at the same time increasingly dependent on such interactions. Thus, it is the ethical responsibility of the creator and user of AI to ensure that these are not misused.
7. Malicious Use of AI
AI can identify cyber threats and test a system's vulnerabilities much faster than the traditional methods. But if the same AI tools are available to hackers, then they will know a system's weaknesses and can create even more cyber threats. Thus, it is the ethical responsibility of the creators and users of AI to have human control over AI usage, in terms of its span and control, so that it is not available to hackers for malicious use.
8. Automation and Impact over Jobs
AI and robotics are leading to increased automation in all types of fields and industries, and at many places robots are replacing humans too. This will lead to many humans losing their jobs. But AI does not mean that jobs are reduced; it just means that the nature of jobs and work is predominantly changing.
Thus, it is the ethical responsibility of an organisation to upgrade the skillset of its workers so that they are ready for futuristic AI-oriented jobs. It is the ethical responsibility of governments too (equally, and even more) to bring appropriate changes to the education, training, internships and opportunities for their people, keeping in mind the evolving nature of jobs and the impact of AI over them.
9. Human Rights in the Age of AI
AI has generated new forms of threats, and this has led to a new discussion - how to protect human rights in the age of AI? There are many consequences of the use and application of AI in our lives, such as :
♦ With smart cities, people create a trail of data for nearly every aspect of their lives, which reveals minute details about them. AI can process and analyse all this data, and all this data is available not only to the government but also to potential advertisers. This is a huge risk to data privacy and protection - it violates the human right to privacy.
♦ Decisions based on AI-processed data and algorithms may be biased depending upon the accuracy/inaccuracy and bias of the algorithm, e.g., many facial recognition software being developed have shown bias towards fair-skinned people. This leads to biased decisions and violates the human right to fair chance and justice.
♦ AI can predict a person's possible future health conditions, and insurance companies may deny insurance to some people based on such predictions - this violates the human right to affordable health care.
There are many more such incidents and examples where AI can be used to violate human rights (some more examples are given below in Fig. 5.3). Thus, it is important for governments and authorities to ensure the protection of human rights from various forms of misuse of AI.
AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.
Fig. 5.3 : Exemplary cases
♦ Use of data without or against the explicit will of customers
♦ Disproportionate use of intimate and personal data of individuals by public institutions
♦ Infringement of the right to opinion, due to excessive use of algorithms in social media
♦ Replacement of democratic decisions by AI decisions (robotocracy)
AI Bias (Artificial Intelligence Bias) is an important term which you should know about. 'Bias', as you must be knowing, means inclination or prejudice for or against one person or group, especially in a way considered to be unfair. When AI programs, tools and algorithms exhibit any kind of bias, it is called AI bias.
In 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination. It used a computer program to determine which applicants would be called for interviews, and the list of applicants it picked was found to be biased against women and against applicants with non-European names. The computer program had been developed to mimic earlier human admission decisions, and so the human bias got carried into the computer algorithm.
AI Bias
Let us now define AI bias formally. AI bias is an anomaly (irregularity or abnormality) in the results produced through AI-based programs and algorithms because of prejudiced (discriminatory) assumptions made during the algorithm development process or prejudices in the training data.
Training data is a huge collection of labelled information that is used to build an AI model (e.g., a machine learning model). The training data usually consists of annotated text, images, videos or audio. Through training data, an AI model learns to perform its task at a high level of accuracy.
If some groups are over-represented or under-represented in the training data, the results produced by the AI model get skewed. For example, suppose that in a training dataset, a minority group's preferred colour for cars is 'Red' because of their cultural influence. Now if another source of data shows that 'Red' is also a preferred choice of colour among aggressive drivers, then without much representation of that group, this dataset may link the minority group with aggressive driving - an AI bias, here.
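The representation of groups can be checked before a model is ever trained. Below is a minimal Python sketch (the records, group names and threshold are purely hypothetical) of how one might flag under-represented groups in a training dataset, along the lines of the car-colour example above.

```python
from collections import Counter

# Hypothetical training records: (community, preferred_colour, aggressive_driver)
training_data = [
    ("majority", "Red", False), ("majority", "Red", False),
    ("majority", "Blue", False), ("majority", "Red", False),
    ("majority", "White", False), ("majority", "Red", False),
    ("minority", "Red", True), ("minority", "Red", False),
]

def representation_report(records, min_share=0.3):
    """Print each group's share of the dataset and flag under-represented groups."""
    counts = Counter(group for group, _, _ in records)
    total = len(records)
    for group, count in counts.items():
        share = count / total
        status = "UNDER-REPRESENTED" if share < min_share else "ok"
        print(f"{group}: {count}/{total} records ({share:.0%}) -> {status}")

representation_report(training_data)
# With only 2 'minority' records, any pattern in them (e.g., one aggressive driver)
# can be wrongly generalised to the whole group - the AI bias described above.
```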
Reasons for AI Bias in Data
Other than the over- and under-representation, there are many more reasons that can contribute to AI bias. These are :
(i) Human bias in decisions.
(ii) Bias in data collection, which refers to flawed or unbalanced collection of data.
These are just a few common examples of AI bias. There are many instances of unfair AI practices with or without the knowledge of the developer.
Let us now learn how AI professionals can reduce AI biases in data collection and decisions :
(i) Thorough Research. The data collectors must research, in advance, the users or subjects about whom the data are being collected. They should be aware of the general results and the odd results of the data.
(ii) Diversity of Team. The team working on data collection or algorithm development must be diverse, so that no one person or team has a major influence on the data and the decision-making algorithm.
(iii) Data Diversity. Combine inputs from multiple sources to ensure data diversity.
(iv) Standardised Data Labelling. The team must have a standardised way of labelling so that accurate, consistent and standardised data labels are used in data collection (a small sketch of this idea follows this list).
(v) Identify Bias-proneness. The team should identify the possible occurrences of biases among data sets and use multi-class annotations, i.e., multiple labels.
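As a small illustration of point (iv) above, here is a minimal Python sketch (the label names and mapping are hypothetical) showing how raw labels collected from different annotators could be mapped onto one standard label set, so that the same category is never recorded under several different names.

```python
# Hypothetical standard label set and a mapping for the variants annotators actually typed.
STANDARD_LABELS = {"male", "female", "other"}

LABEL_MAP = {
    "m": "male", "M": "male", "Male": "male",
    "f": "female", "F": "female", "Female": "female",
    "nb": "other", "non-binary": "other",
}

def standardise(raw_label: str) -> str:
    """Map a raw annotator label to the standard set; reject anything unknown."""
    label = LABEL_MAP.get(raw_label.strip(), raw_label.strip().lower())
    if label not in STANDARD_LABELS:
        # Odd or unknown labels should be surfaced to the team for review,
        # not silently kept in the data collection.
        raise ValueError(f"Unknown label: {raw_label!r}")
    return label

print([standardise(x) for x in ["M", "Female", "nb"]])  # ['male', 'female', 'other']
```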
https://www.youtube.com/watch?v=BfHaRUt7EXU&t=271s
https://www.youtube.com/watch?v=fASxpRnqbKM
Check Point
3. Results based on incomplete or prejudiced data, produced by AI tools, are termed as _______.
(a) AI fairness    (b) Deepfake    (c) Robotics    (d) AI bias
4. 'A college student created a fake picture of his fellow student using AI's deepfake technology and registered it for a competition.' Which AI ethical issue is related to this?
(a) Transparency    (b) Bias and Fairness
(c) Trust, privacy and control    (d) Automation and impact over jobs
(b) Cyber security and ...    (d) Safety
'... company laid off many factory workers ever since ...'
'... research developed two AI-based chatbots, namely Alice and Bob. But after some time, ... shut down the chatbots. The reason told behind it was that Alice and Bob developed their own language and started interacting in that language.'
14. In 2014, Amazon developed an AI recruiting system to streamline their hiring process. It was found to be discriminatory against women, as the data used to train the model was from the past 10 years, where most selected applicants were men due to the male dominance in the tech industry. Amazon scrapped this system in 2018.
This is an example of the AI ethical issue _______.
(a) Bias and Fairness    (b) Accountability
(c) Transparency    (d) Human-AI interaction
LET US REVISE
❖ Various ethical issues of AI are : Bias and Fairness, Accountability, Transparency, Safety, Human-AI interaction, Trust, Privacy and Control, Malicious use of AI, Impact over jobs, Human and robot rights and so forth.
❖ Deepfake is a technology that can generate fake digital photos, sound recordings and videos which look just as original as possible.
❖ AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.
❖ AI bias is an anomaly (irregularity or abnormality) in the results produced by AI-based programs and algorithms because of prejudiced (discriminatory) assumptions made during the algorithm development process or prejudices in the training data.
❖ Training data is a huge collection of labelled information that is used to build an AI model (e.g., a machine learning model).
What is deepfake ?
Deepfake is a technology that can generate fake digital photos, sound recordings, and videos which look just as original as possible.
What is AI bias ? Give example.
AI bias is an anomaly (irregularity or abnormality) in the result produced through AI-based programs and algorithms because of prejudiced (discriminatory) assumptions made during the algorithm development process or prejudices in the training data, e.g., a health insurance AI model preferred whites over blacks for extra healthcare services because of an AI bias.
What is meant by fairness of an AI model ?
A given AI model is fair if its outputs are independent of sensitive parameters (e.g., gender, sexuality, religion, disability, ethnicity, etc.) for a specific class that is already affected by social discrimination.
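One simple (and by no means complete) way to probe this independence is to compare the model's positive-outcome rate across the groups defined by a sensitive parameter. The Python sketch below uses hypothetical shortlisting outputs; a large gap between the groups' rates would hint that the model is not fair by the above definition.

```python
# Hypothetical model outputs: (gender, shortlisted) for a job-shortlisting model.
predictions = [
    ("female", True), ("female", False), ("female", False), ("female", True),
    ("male", True), ("male", True), ("male", True), ("male", False),
]

def positive_rate_by_group(rows):
    """Return each group's rate of positive (shortlisted) outcomes."""
    totals, positives = {}, {}
    for group, selected in rows:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(selected)
    return {group: positives[group] / totals[group] for group in totals}

rates = positive_rate_by_group(predictions)
print(rates)  # {'female': 0.5, 'male': 0.75}
# If the rates differ widely across a sensitive parameter such as gender,
# the outputs are not independent of it - a warning sign of an unfair model.
```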
How does training data affect an AI model ?
An AI model processes information in a certain way using a sample set of data, called the training data. Any imbalance in the training data in terms of representation (over-representation and under-representation of groups) affects the outcomes produced by the AI model/algorithm that used the biased training data. Thus, it is important that the training data used for an AI model must be representative and free from biases.