AI Bias and AI Access

5.1 INTRODUCTION

There is a popular saying, "With power comes responsibility", which is true for everyone and everything. Everything? Aren't you surprised? Since when are the 'things' expected to be responsible?
Well, you know the answer to this question yourself; and that is, ever since the 'things' have become intelligent and powerful, even though artificially (AI).
And when we talk of responsibility, it involves ethical, moral, legal responsibility and even
more. In this session, we shall explore ethics pertaining to artificial intelligence, AI bias
and advantages and disadvantages of AI. So, let us begin our discussion.

5.2 ETHICAL ISSUES AROUND AI

The dictionary defines ethics as "the moral principles that govern a person's or a group's behaviour or actions" or "the moral correctness of a conduct or action". In short, ethics are the moral responsibility of anyone or anything that can impact others.
Since AI has gained so much power that it can change the lives of people, we also have to look into the ethical issues around it, as AI can have an enormous impact on societies and even nations. Major ethical issues of AI are :
♦ Bias and Fairness
♦ Accountability
♦ Transparency
♦ Safety
♦ Human-AI interaction
♦ Trust, Privacy and Control
♦ Automation and Impact over jobs
♦ Auditability and Interpretability
♦ Control of AI (over things and people)
♦ Human rights vs. Robot rights

[Figure 5.1 Ethical Issues of AI: the issues above, grouped under AI Setup, AI Actions, AI Impact and AI Future]


Examples of AI Ethical Issues
Let us discuss some examples of AI ethical issues.
1. Bias and Fairness

Ethically an AI system should be free from all types of biases and be fair, e.g., an AI system
designed for picking candidates for a job must not be biased against any gender, race,
colour or sexuality and so forth. It should be free from all such things and be totally fair.
2. Accountability

AI learns and evolves over time and data. What if an evolved algorithm makes some big
mistake? Who would be accountable for it? For instance, when an autonomous Tesla car hit
a random pedestrian during a test, Tesla was blamed and not the human test driver sitting
inside, and certainly not the algorithm itself. But what if the program was created by
dozens of different people and was also modified with each incident and more data
available? Can the developers or the testers be blamed then?
3. Transparency

Transparency means nothing is hidden and everything that AI performs is explainable. Transparency ensures that there is full information and knowledge about these :
♦ the data used, its range, interval and sources, etc.
♦ whether the models used are appropriate for the context and make sense
♦ whether the models are thoroughly tested
♦ why particular decisions are made

tools and p1<1ct 11 ,,,, st lllt itd 111 , •:11 1111pl\ lll\ 11kd ,;nd\• th,1.t they cause no
1 1

ect
' harm to dalo Jl 13'f' . I.i: 1111 1 1h,· P11/Ct 1·H.:'- \l phKtll'cs must be safe to
l
1

being of 1nd1v1du 1I per s u11 s <rnd t lw µubh~ w~lfaie. AI practices must


through the re pons1ble use ot tel hnoloq1es.



Ablllty to
vary with ch
Ethics In Al
Ethical purpose,
build, and use
rtfes to assess data ] t:?
arid provide assurance that
aulputs can be trusted 1
✓ Fair
Audltable r5l ; Eliminates or reduces
the impact of bias on certain
users

Figure 5.2

5. Human-AI Interaction
With the evolution of AI, many AI technologies, such as humanoid robots, have their actions and appearance similar to those of human beings or other living beings. Humans very easily attribute mental properties to objects, and empathise with them, especially when the outer appearance of these objects is similar to that of living beings. Thus, AI must be responsible enough and must not exploit this (i.e., looks and actions similar to living beings) to deceive humans (or animals) into attributing more intellectual or even emotional significance to robots or AI systems than they deserve. In short, AI must not deceive humans or other living beings, and it must not threaten or violate human dignity in any way.
6. Trust, Privacy and Control
Improved AI "faking" technologies make what once was reliable evidence unreliable; this has already happened to digital photos, sound recordings, and video. It will soon be quite easy to create (rather than alter) "deepfake" text, photos, and video material with any desired content. Soon, sophisticated real-time interaction with persons over text, phone, or video will be faked, too. So, we cannot trust digital interactions while we are, at the same time, increasingly dependent on such interactions. Thus, it is the ethical responsibility of the creators and users of AI to ensure that these technologies are not misused.
7. Cyber Security and Malicious Use of AI
AI can do much more than the traditional tools to create smarter cyber threats and to test systems' vulnerabilities. If AI tools are available to hackers, then they will know a system's weaknesses and can create even more cyber threats. Thus, it is the ethical responsibility of AI creators to have human control over AI usage, in terms of its span and control, so that AI is not available to hackers for malicious use.
8. Automation and Impact over Jobs
AI and robotics are leading to increased automation in all types of fields and industries, and at many places, robots are replacing humans too. This will lead to many humans losing their jobs. But AI does not mean that jobs are reduced; it just means that the nature of jobs and work is predominantly changing.
Thus, it is the ethical responsibility of an organisation to upgrade the skillset of its workers so that they are ready for futuristic AI-oriented jobs. It is the ethical responsibility of governments too (equally and even more) to bring appropriate changes to the education, training, internships and opportunities for their people, keeping in mind the evolving nature of jobs and the impact of AI over them.
9. Human Rights in the Age of AI
AI has generated new forms of threats and it has led to a new discussion : how to protect human rights in the age of AI? There are many consequences of the use and application of AI in our lives, such as :
♦ With smart cities, people create a trail of data for nearly every aspect of their lives, which reveals minute details about them. AI can process and analyse all this data, and all this data is available not only to the government but also to potential advertisers. This is a huge risk to data privacy and protection; it violates the human right to privacy.
♦ Decisions based on AI-processed data and algorithms may be biased, depending upon the accuracy/inaccuracy and bias of the algorithm, e.g., many facial recognition software being developed have shown bias towards fair-skinned people. This leads to biased decisions and violates the human right to fair chance and justice.

♦ AI-based processing of personal and health data can be used against individuals; in a possible future scenario, insurance companies may deny insurance to some people based on AI predictions. This violates the human right to affordable health care.
.....
There are many more such incidents and examples where AI can be used to violate human rights (some more examples are given below in Fig. 5.3). Thus, it is important for governments and authorities to ensure the protection of human rights from various forms of misuse of AI.

AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.

Situation I : The input of AI conflicts with human rights.
Exemplary cases :
• Use of data without or against the explicit will of customers
• Disproportionate use of intimate and personal data of individuals by public institutions

Situation II : The output of AI leads to unintended human rights violations.
Exemplary cases :
• Unlawful discrimination in job applications based on ethnicity
• Illicit discrimination of women in the public health system

Situation III : The use of AI in specific areas conflicts with human rights.
Exemplary cases :
• Infringement of the right to opinion, due to excessive use of algorithms in social media
• Replacement of democratic decisions by AI decisions (robotocracy)

Situation IV : A human rights violator uses AI.
Exemplary cases :
• Use of AI to monitor the citizens criticizing the government
• Use of AI to suppress ethnic minorities and to track individuals

Figure 5.3 Human Rights Violations and AI

5.3 AI BIAS AND AI ACCESS

AI Bias (Artificial Intelligence Bias) is an important term, which you should know about. 'Bias', as you must be knowing, means inclination or prejudice for or against one person or group, especially in a way considered to be unfair. When AI programs, tools and algorithms exhibit any kind of bias, it is called AI bias.

Let us understand it with the help of examples.


Example 1
In 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination. It used a computer program to determine which applicants would be invited for interviews. And the list of applicants it picked was found to be biased against women and applicants with non-European names. The computer program had been developed to match previous human admissions decisions, so the human bias got embedded into the computer algorithm.

Example 2
In another incident in the US, a healthcare algorithm, which was being used to allocate medical facilities for people, was found to produce faulty results that favoured white patients over black patients. Again, the bias in the algorithm crept in from the data of human-made decisions in the past.

[A collage of news headlines on AI bias, e.g., 'Gender bias in AI: building fairer algorithms', 'The Week in Tech: Algorithmic Bias Is Bad. Uncovering It Is Good.', 'Artificial intelligence has a gender bias problem - just ask Siri']
Let us now define AI bias formally. AI bias is an anomaly (an irregularity or abnormality) in the results produced through AI-based programs and algorithms, because of prejudiced (discriminatory) assumptions made during the algorithm development process or prejudices in the training data.
Possible Bias in Data Collection

Data plays an important role in an AI model/algorithm's functioning. An AI model/algorithm is trained to function in a certain way using a huge sample set of data known as training data. Now, if this training data is biased, the result produced by the AI model/algorithm (that used this data) will also be biased. Let us understand what bias in data collection is.

Training data is a huge collection of labelled information that is used to build an AI model (machine learning model). The training data usually consists of annotated text, images, videos or audio. Through training data, an AI model learns to perform its task at a high level of accuracy.

For example, suppose a dataset records that the preferred colour for cars among a minority group is 'Red' because of their cultural influence. Now, if another source of information states that 'Red' is also a preferred choice of colour for aggressive drivers, then, without much representation of the minority group in the data, a model (e.g., a machine learning model) trained on this dataset may link the minority group with aggressive driving - an AI bias, here.
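To make over- and under-representation measurable, here is a minimal Python sketch (the dataset, group names and function name are hypothetical, made up for illustration) that reports each group's share of a training set before any model is trained :

from collections import Counter

def representation_report(records, group_key):
    # Print each group's share of the dataset, so that under- or
    # over-representation is visible before training begins.
    counts = Counter(rec[group_key] for rec in records)
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{group:>10}: {n:5d} samples ({100 * n / total:.1f}%)")

# Hypothetical training data: the minority group is barely represented,
# so any pattern among its few samples (such as a colour preference)
# can easily be over-generalised by a model trained on this data.
training_data = (
    [{"group": "majority", "preferred_colour": "red"}] * 950
    + [{"group": "minority", "preferred_colour": "red"}] * 50
)
representation_report(training_data, "group")

A report like this (95% vs. 5% here) is a quick first signal that the dataset needs rebalancing before training.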
Reasons for AI Bias in Data

Other than the over- and under-representation, there are many more reasons that can contribute to AI bias. These are :
(i) Human bias in decisions
(ii) Flawed and unbalanced data collection
(iii) Under- or over-representation of specific features
(iv) Wrong assumptions
(v) No proper bias testing
(vi) No bias mitigation (i.e., reducing the severity of bias)

Bias in Data Collection : Bias in data collection refers to flawed or unbalanced data, with over- or under-representation of data related to specific features or groups or ethnicity, etc., in the final data collection.
Human Biases in Data
• Reporting bias
• Stereotypical bias
• Group attribution error
• Selection bias
• Historical unfairness
• Halo effect (overall impression of a person influences judgement about the person)
• Overgeneralization
• Implicit associations (associations of concepts (e.g., black, gay) and evaluations (e.g., good, bad))
• Out-group homogeneity bias
• Implicit stereotypes
• Prejudice

Human Biases in Collection and Annotation
• Sampling error
• Bias blind spot
• Neglect of probability
• Non-sampling error
• Confirmation bias
• Anecdotal fallacy
• Insensitivity to sample size
• Subjective validation
• Illusion of validity
• Correspondence bias
• Experimenter's bias
• In-group bias
• Choice-supportive bias

Ensuring Data Fairness

It is very important for the collectors and users of data to ensure data fairness and, thereby, fair decision-making. The following help ensure the fairness of data :
♦ Checking the correlation of features with the data (the data should be diverse and balanced)
♦ Observing biases in human decisions and the collected data
♦ Supervised decision-making
♦ Regular bias testing, by learning about biases and inducing fairness through repeated testing of the data and modification in the training data (a quick fairness check is sketched after the note below)
Note
A given AI model is fair if the outputs are independent of sensitive parameters (e.g., gender, race, sexuality, religious ideology, age, disability, etc.) for a specific task that is already affected by social discrimination.
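The fairness condition in the note above (outputs independent of sensitive parameters) can be spot-checked by comparing selection rates across groups, often called a demographic parity check. The sketch below is a minimal illustration with made-up model outputs; the function names and the 0.1 tolerance are assumptions, not a standard API :

def selection_rates(outcomes, groups):
    # Fraction of positive outcomes (1 = selected) per group.
    rates = {}
    for g in set(groups):
        picked = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(picked) / len(picked)
    return rates

def looks_fair(outcomes, groups, tolerance=0.1):
    # Rough check: selection rates for all groups should differ
    # by no more than `tolerance` (an assumed threshold).
    rates = selection_rates(outcomes, groups).values()
    return max(rates) - min(rates) <= tolerance

# Made-up outputs of a hypothetical hiring model for 10 candidates.
outcomes = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
groups = ["men"] * 5 + ["women"] * 5
print(selection_rates(outcomes, groups))  # e.g., {'men': 0.6, 'women': 0.2}
print(looks_fair(outcomes, groups))       # False: outputs depend on gender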

Trusted AI Principles
♦ Responsible : Safeguarding human rights and protecting the data we are entrusted with.
♦ Accountable : Seeking and leveraging feedback for continuous improvement.
♦ Transparent : Developing a transparent user experience to guide users through machine-driven recommendations.
♦ Empowering : Promoting economic growth and employment for our customers, their employees, and society as a whole.
♦ Inclusive : Respecting the societal values of all those impacted, not just those of the creators.

Consequences of Biases in AI Technology

AI bias can lead to biased decisions and nullify the intended use of AI technology in a specific context. You have already read some examples of AI biases in earlier lines. Following are some more examples of how AI bias impacts decisions and results in biased outcomes.

Here are some real-life examples of AI bias :


♦ COMPAS. COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a software used by the US courts to judge the probability of a defendant (the person accused of committing a crime) becoming a recidivist (a criminal who repeats an offence). Due to the heavily biased data used to train it, the model predicted twice as many false positives for recidivism in black offenders as in white offenders (a measurable version of this check is sketched after these examples).

♦ Amazon's Recruiting System. In 2014, Amazon developed an AI recruiting system to streamline their hiring process. It was found to be discriminatory against women, as the data used to train the model was from the past 10 years, where most selected applicants were men due to the male dominance in the tech industry. Amazon scrapped this system in 2018.
♦ US Healthcare Algorithm. A healthcare risk-prediction algorithm used in US hospitals ranked the health risk of black patients lower than that of comparably sick white patients, resulting in lower healthcare standards for black people.
♦ Twitter's Image Cropping. In September of 2020, Twitter users found that Twitter's image cropping algorithm favoured white faces over black faces, i.e., when an image with a different aspect ratio than the preview window is posted on Twitter, the algorithm crops parts of the image and shows only a certain portion of the image in the preview. This AI model often showed white faces in the preview window for a picture containing both white and black faces.

♦ Facebook's Advertisement Algorithm. In 2019, Facebook allowed advertisers to target people based on their race, gender, and religion. This led to jobs like nursing and secretary being targeted to women, while jobs like janitor and taxi driver were targeted to men, especially men of colour. The model also learned that real estate ads had a better click-through rate when shown to white people, resulting in a lack of real-estate advertisements shown to minority people.

These are just a few common examples of AI bias. There are many instances of unfair AI practices, with or without the knowledge of the developer.
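The COMPAS finding above (twice as many false positives for one group) can be expressed as a simple per-group audit. The sketch below uses made-up predictions and labels, not the actual COMPAS data, to show how such a comparison could be computed :

def false_positive_rate(predicted, actual):
    # Fraction of truly negative cases (actual == 0) that the
    # model wrongly flagged as positive (predicted == 1).
    negatives = [p for p, a in zip(predicted, actual) if a == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

# Hypothetical predictions (1 = "will re-offend") and ground truth.
by_group = {
    "group_a": ([1, 1, 0, 1, 0, 0], [0, 1, 0, 0, 0, 0]),
    "group_b": ([0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0]),
}
for name, (predicted, actual) in by_group.items():
    print(name, "false positive rate =", false_positive_rate(predicted, actual))
# group_a: 0.4, group_b: 0.0 -- a gap this large across groups
# signals biased results that need investigation.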

Reducing and Mitigating AI Bias

Let us now learn how the AI people can reduce AI biases in data collections and decisions :

(i) Thorough Research. The data collectors must research their users or subjects, about whom the data are being collected, in advance. They should be aware of general results and odd results of the data.
(ii) Diversity of Team. The team working on data collection or algorithm development must be diverse, so that one person or team does not have a major influence on the data and the decision-making algorithm.
(iii) Data Diversity. Combine inputs from multiple sources to ensure data diversity.
(iv) Standardised Data Labelling. The team must have a standardised way of labelling so that accurate, consistent and standardised data labels are used in data collection.
(v) Identify Bias-proneness. The team should identify the possible proneness to biases among data sets and use multi-pass annotations, i.e., multiple annotators label the data so as to minimise the possible bias (a sketch of this follows below).
(vi) Data Review. Enlist the help of someone with domain expertise to review the collected and/or annotated data. Someone from outside of the team may see biases the team has overlooked.
(vii) Regular Data Analysis. The team should keep track of errors and problem areas so as to respond to and resolve them quickly.
(viii) Regular Bias Testing. Test the training data and the overall performance of the algorithm against biases.

[Figure: The eight measures for reducing and mitigating AI bias listed above]
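As an illustration of multi-pass annotation (measure (v) above), the following minimal sketch, with hypothetical items and labels, keeps a label only when a clear majority of independent annotators agree, so that no single annotator's bias silently decides a data point :

from collections import Counter

def majority_label(labels, min_agreement=2 / 3):
    # Return the label most annotators chose, or None when agreement
    # falls below the (assumed) threshold, flagging the item for review.
    label, votes = Counter(labels).most_common(1)[0]
    return label if votes / len(labels) >= min_agreement else None

# Each data point is labelled independently by three annotators.
annotations = {
    "image_001": ["cat", "cat", "cat"],  # unanimous: keep "cat"
    "image_002": ["cat", "dog", "cat"],  # 2 of 3 agree: keep "cat"
    "image_003": ["cat", "dog", "fox"],  # no majority: flag for review
}
for item, labels in annotations.items():
    print(item, "->", majority_label(labels))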


Digital Resource Links
https://www.youtube.com/watch?v=BfHaRUt7EXU&t=271s
https://www.youtube.com/watch?v=fASxpRnqbKM
https://www.youtube.com/watch?v=cS~IRj~fac9s

Advantages and Disadvantages of AI

Advantages
♦ Use of AI reduces human error.
♦ Use of AI helps in lessening repetitive work.
♦ AI technology provides digital assistance.
♦ AI technology aids in faster and more accurate decisions.

Disadvantages
♦ Use of AI results in a loss of some jobs.
♦ There is a dearth of talent in the AI domain.
♦ There is a lack of judgement in AI based products.
♦ There is a big potential for misuse of AI.

Check Point

1. Which of the following are some ethical issues around AI ?


(a) Bias and Fairness (b) Accountability
(c) Cyber Security and Malicious use (d) All of these
2. 'For hiring purposes, a big firm made use of an AI based software and it resulted in hiring very few women in past 5 years'. Which AI ethical issue is related to this ?
(a) Transparency (b) Bias and Fairness
(c) Trust, privacy and control (d) Automation and impact over jobs

3. Results based on incomplete or prejudiced data, produced by AI tools, are termed as ___.
(a) AI fairness (b) Deepfake (c) Robotics (d) AI bias
4. 'A college student created a fake picture of his fellow student using AI deepfake technology and registered it for a competition'. Which AI ethical issue is related to this ?
(a) Transparency (b) Bias and Fairness
(c) Trust, privacy and control (d) Automation and impact over jobs
5. ___
(b) Cyber security and malicious use (d) Safety
6. 'A company laid off many factory workers ever since robots are being used in place of these workers. It has affected the employment and livelihood of these workers'. Which AI ethical issue is related to this ?
(b) Bias and Fairness
(d) Automation and impact over jobs
7. A huge collection of labelled information that is used to build AI models is known as ___ data.
(b) Testing (c) Training (d) Annotated
8. What is/are the reason(s) behind AI bias in data ?
(a) Flawed and unbalanced data collection (b) Wrong assumptions
(c) No proper bias testing and mitigation (d) All of the above
(e) None of these
9. Which of the following measure(s) will ensure Data Fairness in AI ?
(a) Ensuring balanced data
(b) Observing biases in human decisions and the data collected
(c) Supervised decision-making (d) Regular bias testing
(e) All of the above (f) None of these
10. Which of the following measure(s) are useful for reducing and mitigating AI bias ?
(a) Team Diversity (b) Data Diversity
(c) Data Review (d) Bias testing
(e) All of the above (f) None of these

Competency Based Questions

kn: c·ilr<;h cll'Vl'l,ii>1·d IW() /\I h.isl'd di,itb 1its n,,nwlv \/it't' ,rnd_ l\t)!•. Hl,tt .•,th.'r ~,,nw tin,~'
k hut town tlw .,. , li.tll>()h I Ill' n•.is 1111 t 11ld ln·himl 1I w.,~ th,1t U.,',· ,,nd 1,,-_, d"'\d,,l"t.'d O\\ \
,...,.DL110.. l'•(ll.tllf,•"• ,111d •.t.11 t,•11 11tl1· 1,1d11lf, Ill th,ll l,ll\)~ll-lf.,'.

cl1c,1l<>1 ,d /\I 1·tli11•,


tf•) \v1 ·111111t,1l•d1l\
( t 11d I ( ,I
lr/) \ II 111 tin"•''
12. ___
(b) Accountability (d) Human-AI interaction
13. XYZ manufacturing company is a hugely successful company. It keeps innovating, keeping in mind the ease of use and quality. Recently, XYZ company improved the design of its packaging boxes, which proves highly useful. However, every packaging box should have the correct design on it. To ensure this, the XYZ company employed an AI based software. It was shown a huge collection of images showing the correct design of packaging box. After repeated use of this huge collection of images, the AI software can now remove boxes with incorrect design.
Such a huge collection of data used for teaching AI software to work in a certain way is called ___ data.
(a) AI (b) Testing (c) Training (d) All of these

14. In 2014, Amazon developed an AI recruiting system to streamline their hiring process. It was found to be discriminatory against women as the data used to train the model was from the past 10 years, where most selected applicants were men due to the male dominance in the tech industry. Amazon scrapped this system in 2018.
This is an example of AI ethical issue ___.
(a) Bias and Fairness (b) Accountability
(c) Transparency (d) Human-AI interaction

LET US REVISE

❖ Various ethical issues of AI are : Bias and Fairness, Accountability, Transparency, Safety, Human-AI interaction, Trust, Privacy and Control, Malicious use of AI, Impact over jobs, Human and Robot rights, and so forth.
❖ Deepfake is a technology that can generate fake digital photos, sound recordings and videos which look just as original as possible.
❖ AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.
❖ AI bias is an anomaly (an irregularity or abnormality) in the result produced by AI-based programs and algorithms because of prejudiced (discriminatory) assumptions made during the algorithm development process or prejudices in the training data.
❖ Training data is a huge collection of labelled information that is used to build an AI model (machine learning model).

❖ Various reasons of AI bias are : human bias in decisions, flawed and unbalanced data collection, under- or over-representation of specific features, wrong assumptions, no proper bias testing, and no bias mitigation (i.e., reducing the severity of bias).
Q. Match the following with respect to ethics in AI :
(a) Clear, consistent and understandable in its working ... (i) Auditable
(b) Eliminates or reduces the impact of bias ... (ii) Interpretable
(c) Allows third parties to assess data inputs and provides assurance that data can be trusted ... (iii) Transparent
(d) Ability to see how results can vary with changing inputs ... (iv) Fair
Ans. (a)-(iii) ; (b)-(iv) ; (c)-(i) ; (d)-(ii)
Q. Give examples of human rights violations by AI.
Ans. (a) Collecting personal data of people without their knowledge. (b) AI bias.
Q. Name some ethical issues related to : (i) AI setup, (ii) AI actions, (iii) AI impact, (iv) AI future.
Ans. (i) Bias and fairness, Accountability, Transparency
(ii) Safety, Human-AI interaction, Trust, Privacy and control
(iii) Automation, Impact over jobs, Auditability and Interpretability
(iv) Control of AI over things and people, Human rights, Robot rights
Q. What is deepfake ?
Ans. Deepfake is a technology that can generate fake digital photos, sound recordings, and videos which look just as original as possible.
Q. What is AI bias ? Give an example.
Ans. AI bias is an anomaly (an irregularity or abnormality) in the result produced through AI-based programs and algorithms because of prejudiced (discriminatory) assumptions made during the algorithm development process or prejudices in the training data, e.g., a health insurance AI system favoured whites over blacks for extra healthcare services because of an AI bias.
Q. Define fairness of an AI model.
Ans. A given AI model is fair if the outputs are independent of sensitive parameters (e.g., gender, race, sexuality, religious ideology, age, disability, etc.) for a specific task that is already affected by social discrimination.
Q. What is the role of training data in an AI model ? How can bias creep into it ?
Ans. An AI model is trained to function in a certain way using a huge sample set of data known as training data. Any imbalance in the training data in terms of representation (i.e., over- or under-representation of specific features or groups) affects the outcomes and decisions produced by the AI model/algorithm that used this data, resulting in AI bias. It is, therefore, important that the training data used for an AI model must be balanced and diverse.
