
ARTIFICIAL INTELLIGENCE

AI ETHICS

CLASS 10
AI Ethics
We need to keep ethical practices in mind while developing solutions
using AI. Some of the key ethical concerns are discussed below:
• Unemployment: What happens after the end of jobs?
• Inequality: How do we distribute the wealth created by machines?
• Humanity: How do machines affect our behaviour and interaction?
• Security: How do we keep AI safe from adversaries?
AI ETHICS
• The term “Ethics” is derived from the Greek word ethos, which means
custom, habit, character or disposition.
• In the simplest possible words, we can define ‘ethics’ as a system of
moral principles that govern an individual’s behaviour or actions.
• Ethics is concerned with what is good for individuals and societies.
• Similarly, ethical concerns are the issues or situations that cause
individuals and societies to evaluate different choices in terms of
what is right (ethical) and what is wrong (unethical).
ETHICS DEALS WITH QUESTIONS SUCH AS:
• Have I taken the right decision?
• What are our rights and responsibilities?
• Is a command or instruction right or wrong?
• What is morally good and what is bad?

Take one example: a cyclist was wheeling her cycle across a street when
she was suddenly hit by a self-driving vehicle that had a human operator
on board. The vehicle was under the control of Artificial Intelligence.
The cyclist did not survive the collision. Who was responsible for the
death? The programmer of the AI system? The human operator of the
vehicle? The companies testing the vehicle? The designer of the system?
Or even the manufacturer of the vehicle? What was the responsibility of
each of them?
The behaviour of many modern AI-empowered systems reflects the data the
system is trained on and the human labellers who annotate that data.
Such systems are often called a “black box”, which refers to an inherent
problem of AI systems: it is hard to see why they make a particular
decision.

Today it is quite challenging to determine whether a system will behave
ethically or not, because its behaviour is not based on simple rules;
rather, it is the byproduct of numerous surrounding factors. An AI
system learns and reacts by looking at data from millions of examples,
and it is difficult to predict how it will behave in a new scenario.
Several examples have been cited where AI predictions turned out to be biased:
• In 2014, Amazon developed a recruiting tool for identifying software engineers to
hire; the system produced results that discriminated against women, and the company
abandoned it in 2017.
• In 2015, a software engineer reported that the image recognition algorithm in Google
Photos was classifying his black friend as a “gorilla”. Google accepted the mistake and
assured him that it would be corrected.
• In 2016, ProPublica reported on a system used to predict the chance of repeat offences
by criminals. The idea was to help judges make better judgements with this additional
information, but the system picked up the racial bias that had existed in America for
hundreds of years and was biased against the black community.
• In recent years, self-driving cars, which operate on rules and algorithms, have caused
fatal accidents when confronted with unfamiliar sensory inputs on which they could not
make the right decision.
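
To get a feel for how such bias creeps in, here is a minimal sketch in Python
(not part of the original notes). It assumes the scikit-learn library is
available, and the small “hiring” dataset in it is entirely invented for
illustration:

from sklearn.tree import DecisionTreeClassifier

# Each row is [years_of_experience, gender], with gender coded as
# 0 = male and 1 = female. The invented historical data is biased:
# women were not hired regardless of their experience.
X = [[5, 0], [7, 0], [3, 0], [6, 1], [8, 1], [4, 1]]
y = [1, 1, 1, 0, 0, 0]          # 1 = hired, 0 = not hired

model = DecisionTreeClassifier().fit(X, y)

# Two candidates with the same experience, differing only in gender:
print(model.predict([[6, 0]]))  # predicts "hired" (1)
print(model.predict([[6, 1]]))  # predicts "not hired" (0), reproducing the bias

The model is never told to discriminate; it simply learns the pattern present
in its training data, which is exactly why biased data produces biased AI.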
There is a need to develop ethical, human-centric AI programs that are aligned
with the values and ethical principles of the society or community they affect.
Steps to ensure ethics are practiced while designing AI systems
• Must be responsible
• Must take fair decisions
• Must be understandable by humans
• Must produce accurate results based on validated assumptions
• Must be transparent

AI is like a double-edged sword: at one end it can solve a problem intelligently,
while at the other end it can pose a problem. Hence, we must handle it properly.
Facts and Interpretation
For each fact below, think of a possible interpretation and decide whether
that interpretation is biased or not:
• Women rarely occupied top managerial positions in the companies.
• Most of the managerial positions were occupied by people from five cities.
• The number of women leaving jobs after pregnancy or marriage was
significantly more than the number of men leaving the job for similar
reasons.
ETHICAL CONCERNS
• a) Related to data (bias and inclusion)
• b) Related to the implications of AI technology itself


https://www.youtube.com/watch?v=x9gan8vOBJ8

https://www.youtube.com/watch?v=vgUWKXVvO9Q

Moral Issues: Self-Driving Cars

https://www.youtube.com/watch?v=ixIoDYVfKA0
Moral Issues: Self-Driving Cars
Scenario 1:
Imagine that we are in the year 2030. Let us assume that one day, your
relative is going to the office in his self-driving car. He is sitting in
the back seat as the car is driving itself. Suddenly, a small boy comes in
front of the car. The incident is so sudden that the car is only able to
make one of two choices:
1. Go straight and hit the boy who has come in front of the car,
injuring him severely.
2. Take a sharp right turn to save the boy and smash the car into a
metal pole, thus damaging the car as well as injuring the person
sitting in it.
With the help of this scenario, we need to understand that the developer
of the car goes through all such dilemmas while developing the car’s
algorithm.
Here, the morality of the developer gets transferred into the machine:
whatever he/she considers right is given a higher priority and hence
becomes the choice the machine makes.
If you were in the place of this developer and there was no other
alternative to the situation, which of the two choices would you
prioritise, and why?
Scenario 2:
Let us now assume that the car has hit the boy who came in front of
it. Considering this as an accident, who should be held responsible for it?
Why?
1. The person who bought this car
2. The Manufacturing Company
3. The developer who developed the car’s algorithm
4. The boy who came in front of the car and got severely injured

The answer might differ from person to person, and one must understand that
nobody is wrong in this case.
