FUNDAMENTALS OF
ARTIFICIAL INTELLIGENCE
LECTURE-11
DR. M.SHUJAH UR REHMAN
AI Supports Decision-Making
1. Automated Data Processing:
► AI can handle vast amounts of data quickly and accurately, which
supports decision-makers by providing insights based on real-time or
historical data.
► Example: In business, AI algorithms can analyze sales data to
recommend pricing strategies or inventory management.
2. Predictive Analytics:
► Using machine learning, AI can forecast future trends based on
historical patterns, enabling businesses to make decisions
proactively.
► Example: AI can predict customer behavior, enabling businesses to
adjust marketing strategies accordingly.
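The forecasting idea above can be sketched with a tiny example: fitting a linear trend to historical data and extrapolating one step ahead. The sales figures and the choice of a simple least-squares line are illustrative assumptions, not a method prescribed by the lecture.

```python
# Minimal predictive-analytics sketch: fit a linear trend to
# hypothetical monthly sales and forecast the next month.
# Data and model choice are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b on small lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

months = [1, 2, 3, 4, 5]
sales = [100, 110, 120, 130, 140]   # perfectly linear toy data

a, b = fit_line(months, sales)
forecast = a * 6 + b                 # predict month 6
print(forecast)                      # 150.0 for this toy data
```

Real predictive-analytics pipelines would use richer models (e.g., from a machine-learning library) and validate against held-out data; the point here is only the pattern: learn from history, then extrapolate.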
3. Optimization:
► AI can help in optimizing decisions by analyzing different variables
and recommending the best course of action, whether it’s for
resource allocation, supply chain management, or financial
planning.
► Example: In manufacturing, AI can optimize production schedules
based on factors like supply, demand, and available resources.
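The optimization point can be made concrete with a small allocation heuristic: given a shared resource limit, favor the products that earn the most profit per unit of resource consumed (a fractional-knapsack-style greedy rule). The products and numbers below are hypothetical.

```python
# Hedged sketch of resource-allocation optimization: greedily assign
# production capacity to the highest profit-per-resource products.
# Product names and figures are invented for illustration.

def allocate(products, capacity):
    """products: list of (name, profit_per_unit, resource_per_unit, max_units)."""
    plan = {}
    # Consider products in order of profit per unit of resource, best first.
    for name, profit, resource, max_units in sorted(
            products, key=lambda p: p[1] / p[2], reverse=True):
        units = min(max_units, capacity // resource)
        if units > 0:
            plan[name] = units
            capacity -= units * resource
    return plan

products = [("widget", 5, 2, 100), ("gadget", 9, 3, 40)]
plan = allocate(products, capacity=200)
print(plan)   # {'gadget': 40, 'widget': 40}
```

Production-scheduling systems in practice use linear or integer programming solvers rather than a greedy pass, but the underlying idea is the same: search over variables (quantities) to recommend the best feasible course of action.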
4. Risk Assessment:
► AI systems can assess risks in decision-making scenarios by analyzing
potential outcomes and their probabilities, allowing decision-makers
to choose the least risky or most profitable option.
► Example: In finance, AI can predict market risks or evaluate the risk
of a loan applicant.
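A minimal way to express "analyzing potential outcomes and their probabilities" is an expected-value comparison between options. The loan scenarios and probabilities below are made up for illustration.

```python
# Illustrative risk assessment: score each option by the
# probability-weighted sum of its possible payoffs.
# Scenarios and numbers are hypothetical.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

safe_loan = [(0.98, 1000), (0.02, -10000)]    # small chance of default
risky_loan = [(0.80, 3000), (0.20, -10000)]   # larger upside, larger risk

print(expected_value(safe_loan))    # 780.0
print(expected_value(risky_loan))   # 400.0
```

A real system would also weigh variance (how bad the worst case is), not just the mean, which is why risk-averse decision-makers may still prefer the option with the lower expected value.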
5. Decision Support Systems (DSS):
► AI is integrated into DSS to assist in making complex decisions. AI
tools help decision-makers process data, evaluate alternative
options, and simulate outcomes.
► Example: A healthcare system might use AI to recommend the best
treatment options based on a patient’s medical history.
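One classic DSS pattern for "evaluating alternative options" is a weighted-sum decision matrix: score each alternative against weighted criteria and rank the results. The treatments, criteria, and weights here are invented purely for illustration (higher is better on every criterion).

```python
# Simple decision-support sketch: a weighted-sum decision matrix.
# Criteria, weights, and option scores are hypothetical.

weights = {"efficacy": 0.5, "side_effects": 0.3, "cost": 0.2}
options = {
    "treatment_a": {"efficacy": 0.9, "side_effects": 0.4, "cost": 0.3},
    "treatment_b": {"efficacy": 0.7, "side_effects": 0.8, "cost": 0.9},
}

def score(option):
    """Weighted sum of an option's criterion scores."""
    return sum(weights[c] * option[c] for c in weights)

best = max(options, key=lambda name: score(options[name]))
print(best)   # treatment_b (0.77 vs 0.63)
```

Note how the ranking depends entirely on the weights: a DSS makes that trade-off explicit so a human decision-maker can inspect and adjust it rather than accept a recommendation blindly.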
6. Real-time Decision-Making:
► AI systems enable quick decision-making by providing instant
analyses and suggestions, which is particularly useful in dynamic
environments like financial markets or autonomous vehicles.
► Example: In autonomous driving, AI makes real-time decisions about
navigation, speed, and obstacle avoidance.
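The driving example can be reduced to a toy rule-based controller that maps the current sensor reading to an action on every tick. The thresholds below are made up; real autonomous-driving stacks use learned perception and planning, not three hand-written rules.

```python
# Toy real-time decision rule for the obstacle-avoidance example.
# Thresholds are arbitrary illustrative values.

def decide(distance_m, speed_kmh):
    """Choose an action from the distance to the obstacle ahead."""
    if distance_m < 10:
        return "brake"
    if distance_m < 30 and speed_kmh > 50:
        return "slow"
    return "maintain"

print(decide(5, 60))     # brake
print(decide(20, 60))    # slow
print(decide(100, 60))   # maintain
```

The defining property of real-time decision-making is that this loop must finish within a hard time budget on every sensor update, which constrains how expensive the analysis inside it can be.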
Importance of Aligning AI with
Human Values
► Value Alignment: One of the central ethical challenges in AI is
ensuring that AI systems align with human values. AI, when
deployed for decision-making, needs to be designed in such a way
that its actions reflect what humans deem to be desirable or
acceptable.
► Goal Specification: AI systems are created to achieve certain goals,
but the design process must ensure that those goals are
well-defined and ethically sound. If the goals are poorly defined or
misaligned, AI systems may pursue unintended outcomes that can
be harmful to individuals or society at large.
Transparency and Accountability
► Black-box Problem: Many AI systems, especially deep learning
models, function as "black boxes," meaning that their
decision-making processes are not transparent. This opacity can be
a significant ethical issue, especially when decisions made by AI
affect individuals or communities (e.g., loan approval, medical
diagnoses).
► Need for Explainability: AI systems should be interpretable and
explainable so that humans can understand how decisions are
being made. For instance, in critical sectors like healthcare or law
enforcement, understanding how an AI system arrived at its decision
is crucial for trust, accountability, and fairness.
Fairness and Bias
► Bias in AI Systems: AI systems can inadvertently inherit biases present
in the data used to train them. This may lead to discriminatory or
unfair outcomes, particularly in areas like hiring, law enforcement,
and lending.
► Ensuring Fairness: Ethical AI design should focus on mitigating bias
and ensuring that AI decisions are fair. This may involve using
techniques to detect and reduce bias in training data or
developing algorithms that explicitly account for fairness in
decision-making.
► Equity Considerations: The authors discuss the need for AI systems to
avoid reinforcing or exacerbating existing inequalities, particularly
those based on race, gender, socio-economic status, or other
protected characteristics.
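One simple bias-detection technique alluded to above is a demographic parity check: compare the rate of positive decisions across groups and flag large gaps for review. The records below are synthetic, and parity is only one of several fairness criteria in use.

```python
# Hedged sketch of a fairness audit: demographic parity gap between
# two groups' approval rates. Records are synthetic toy data.

def positive_rate(records, group):
    """Fraction of records in `group` that received a positive decision."""
    hits = [r["approved"] for r in records if r["group"] == group]
    return sum(hits) / len(hits)

records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

gap = positive_rate(records, "A") - positive_rate(records, "B")
print(gap)   # 0.5: a large gap that would flag the model for review
```

A gap near zero does not by itself prove fairness (it ignores error rates and legitimate differences between groups), which is why ethical AI design combines several metrics with domain judgment.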
Safety and Control
► Ensuring Safety: As AI systems take on more responsibilities, their
actions must be safe and reliable. Ethical AI systems need
mechanisms to ensure they avoid harmful consequences—whether
through unintended side effects or malicious misuse.
► Human-in-the-loop: AI systems should maintain the ability for human
oversight, especially in decision-making contexts where the stakes
are high (e.g., autonomous vehicles, military drones). Humans should
be able to intervene and correct AI actions if necessary.
Autonomy and Human Control
► Human Autonomy: AI should augment human decision-making, not
replace it. The ethical use of AI involves preserving human
autonomy, ensuring that AI outputs serve as tools that assist
human judgment rather than overtake it.
► Control and Accountability: Ethical AI requires clear mechanisms of
accountability—i.e., if an AI system causes harm or makes a bad
decision, it should be clear who is responsible, whether it is the
creators, deployers, or operators of the system.
Long-Term Ethical Considerations
► AI and the Future of Work: The rise of AI and automation raises
ethical questions about the impact on employment and society.
While AI can bring significant benefits, it may also displace jobs,
creating social and economic challenges.
► AI and Global Risks: There are also concerns about AI-driven arms
races and the potential for AI to be used in harmful ways, such as in
warfare or surveillance. The authors highlight the importance of
considering long-term global risks when designing and deploying AI
technologies.
Ethical Decision-Making
Frameworks
► Ethical AI Design: The authors emphasize the need for
multidisciplinary collaboration to design AI systems that reflect
ethical principles. This could involve incorporating insights from
philosophy, law, sociology, and economics to ensure AI systems
respect human dignity, rights, and fairness.
► Value-sensitive Design: AI systems should be designed with ethical
principles at the core. This includes designing AI that respects
privacy, promotes fairness, and supports human well-being.
Key Ethical Principles Highlighted
by Russell and Norvig
► Beneficence: AI should work to enhance human well-being,
improving quality of life and providing benefits.
► Non-maleficence: AI systems should avoid causing harm or
negative outcomes, ensuring their design does not lead to
detrimental consequences.
► Autonomy: AI should respect human freedom and decision-making
power, not undermining or replacing human choice.
► Justice: AI systems must be fair, providing equal treatment and
opportunities to all individuals, avoiding discrimination or bias.
THANKS