Revisiting AI Project Cycle &
Ethical Frameworks for AI
What is an Ethical Framework?
An ethical framework is a structured set of guidelines or principles that helps individuals and
organizations make morally responsible decisions. These frameworks provide a foundation
for assessing what is right or wrong, helping us ensure fairness, accountability and
transparency when making decisions in fields such as business, healthcare and
technology.
Use of Ethical Frameworks in AI
With the rapid adoption of AI systems across the globe and their significant impact on
society, it has become essential to ensure that these systems are designed ethically and used
responsibly. We must also ensure that these systems:
1. Prevent Bias and Discrimination: AI should make fair decisions and avoid reinforcing
social biases (a simple check for this is sketched after this list).
2. Ensure Transparency and Explainability: AI models should be understandable and
interpretable.
3. Protect Privacy and Data Security: User data must be handled responsibly and securely.
4. Promote Accountability: AI developers and organizations must take responsibility for AI
decisions.
5. Enhance Human Well-being: AI should be designed to improve people’s lives and serve
the interests of society.
Without ethical frameworks, AI can lead to unintended consequences such as biased hiring
systems, the spread of misinformation or abuse of privilege. Ethical AI ensures trust,
fairness and safety in the use of technology.
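To see what point 1 above (checking for bias) might look like in practice, consider the following minimal Python sketch. The data, group names and the 0.2 threshold are all hypothetical; it simply compares a model’s approval rates across two groups, a basic fairness check often called demographic parity.

# Minimal sketch of a demographic-parity check (hypothetical data).
# Each record: (group, model_decision), where decision 1 = approved.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def approval_rate(records, group):
    # Fraction of applicants from `group` that the model approved.
    outcomes = [decision for g, decision in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(decisions, "group_a")
rate_b = approval_rate(decisions, "group_b")
print(f"group_a: {rate_a:.2f}, group_b: {rate_b:.2f}")

# A large gap between the rates signals possible bias worth investigating.
if abs(rate_a - rate_b) > 0.2:  # 0.2 is an illustrative threshold
    print("Warning: approval rates differ notably across groups.")

On this toy data the model approves 75% of group_a but only 25% of group_b, so the check flags a possible bias to investigate.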
The Bias Inside Us
AI systems learn from human data, which may include biases like favouring certain groups,
overlooking hidden factors or making assumptions based on incomplete information.
Factors affecting human decision-making:
1. Personal and Emotional Factors: Our decisions are usually influenced by emotions, past
experiences and upbringing. People may favour choices that are connected with their values,
beliefs or personal experiences.
2. Perception of Need and Impact: Our choices are also governed by how urgent or
effective an option appears. We tend to prioritize actions that seem to have a direct or
visible impact.
3. Bias in Human vs Non-Human Considerations: Humans are more likely to prioritize their
own needs over those of animals or the environment. However, emotional attachment or
ethical beliefs can shift preferences.
4. Geographic and Demographic Biases: People are more likely to make decisions that
benefit those in familiar locations or social groups. Stereotypes and personal identification
can shape preferences and priorities.
5. Religious and Ethical Views: Faith and moral beliefs influence decision-making, affecting
judgments on fairness, responsibility and what is considered right or wrong.
6. Transparency and Trust: People prefer options that feel reliable and verifiable. Lack of
information or fear of deception can discourage certain choices.
Classification of Ethical Frameworks in AI
AI ethics can be broadly classified into sector-based and value-based frameworks. Both
approaches are important and provide different ways to address ethical concerns in AI
decision-making.
1. Sector-based Ethical Frameworks: These frameworks apply ethical principles to the
specific industries where AI is used and help us tackle the unique challenges of each field.
For example:
• Bioethics: Ensures AI in healthcare respects patient privacy, fairness and
autonomy.
• Business Ethics: Prevents bias and promotes transparency in hiring, lending and
customer interactions.
• Legal and Justice Ethics: Ensures fairness and accountability in AI-assisted law
enforcement and court decisions.
• Environmental Ethics: Examines AI’s impact on sustainability, climate change and
nature conservation.
2. Value-based Ethical Frameworks: These frameworks focus on core moral values that
guide AI decision-making across all sectors. They reflect human values in AI-driven
choices and are categorized as:
• Rights-based Ethics: Protects fundamental human rights such as privacy, dignity
and freedom. It ensures AI prioritizes human lives and treats individuals fairly.
• Utility-based Ethics: Aims to maximize overall good by evaluating AI decisions
based on their impact. It prioritizes solutions that benefit the most people, even if
trade-offs are needed (a worked example follows this list).
• Virtue-based Ethics: Focuses on choosing ethical decision-makers who uphold
honesty, compassion and integrity in AI governance. It ensures AI behaviour is guided
by moral values and not just rules.
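To make the utility-based approach concrete, here is a small Python sketch; the options and the benefit/harm scores are purely hypothetical. It adds up the estimated impact of each option on everyone affected and picks the one with the highest net utility.

# Illustrative utility-based comparison (hypothetical options and scores).
# Each option lists the estimated benefit (+) or harm (-) to each affected party.
options = {
    "deploy_in_all_villages": [8, 6, -3],  # wide reach, some privacy risk
    "pilot_in_one_village": [4, 3, -1],    # smaller reach, lower risk
    "do_not_deploy": [0, 0, 0],            # no benefit, no new risk
}

# Utility-based ethics favours the option with the greatest overall good.
for name, impacts in options.items():
    print(f"{name}: net utility = {sum(impacts)}")
best = max(options, key=lambda name: sum(options[name]))
print("Chosen option:", best)

Note how this approach can justify trade-offs: the chosen option still carries some harm, as long as the total benefit outweighs it.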
BIOETHICS: THE GUIDING PRINCIPLES FOR LIFE AND TECHNOLOGY
What is Bioethics?
Bioethics is the study of ethical issues and principles that arise in biology, medicine and
healthcare. This domain of ethics examines how we should act when dealing with complex
questions about life, health and the human condition. AI is becoming increasingly embedded
in healthcare today and is changing the way we define life and existence. It is important
for us to carefully understand where bioethics and AI ethics meet.
The Hippocratic Oath
Bioethical principles aren’t just theoretical ideas; they have deep-rooted significance in
human history, experiences and values across cultures. For example, consider the ancient
Hippocratic Oath, written in the 5th century BCE, in which physicians pledged to ‘do no
harm’ (non-maleficence). This principle remains central to medical ethics
even thousands of years later. Similarly, many cultures emphasize the importance of
respecting individual autonomy. Modern healthcare reflects this value when families are
included in important health decisions, ensuring their voices are heard and respected. By
integrating such age-old principles with AI ethics, we can ensure that new technologies serve
humanity in ways that are responsible, compassionate and fair.
Principles of Bioethics
1. Respect for Persons: This principle recognizes that each person has inherent value and
dignity and is capable of making their own decisions.
2. Beneficence (Doing good): This principle is a call to action, a moral imperative to
act in the best interests of others, seeking ways to help them.
3. Non-Maleficence (Avoiding harm): This bioethics principle is the commitment to ‘do
no harm’. Doctors, researchers and healthcare providers must be cautious about
potential risks, actively avoiding unnecessary or unjustifiable harm to their patients.
4. Justice (Fairness): It is the ethical principle that reminds us to treat everyone fairly,
irrespective of social, economic or other differences. Resources should be distributed
equitably and access to healthcare should be guaranteed for all. This principle
requires that healthcare be a right of every human, not a privilege.
Bioethics and AI Ethics
While ethical guidelines are shaping the life sciences, they are equally important for AI as
it gradually becomes an integral part of our lives and healthcare.
Recent advances in AI are merging biology and technology, making it essential to bring
bioethics into AI ethics. Since AI can influence medical decisions, it is essential that ethical
principles of bioethics guide its development and use.
The adoption of AI in healthcare introduces challenges that intersect both technological and
ethical considerations. While on the one hand, AI improves diagnosis, treatment and
personalized care, on the other, it raises concerns about data privacy, algorithmic bias and
equitable care for all patients.
• How do we ensure AI does not harm vulnerable populations?
• How much control over a patient’s care should be given to a machine?
• How do we protect human autonomy in this new era?
Let us understand the joint application of bioethics and AI ethics with a hypothetical case
study.
SMART MEDICINE DISPENSER AND THE VILLAGE DOCTOR
Consider Asha Gram, a rural village in India. Like many other villages, Asha Gram faces
challenges in healthcare access. There is one primary health centre, run by Dr Sharma, a
dedicated doctor who works long hours in the service of the people. Reaching remote parts
of the village is difficult, so delivering medicines on time is a constant challenge.
HealTech, a tech company, has developed a new ‘Smart Medicine Dispenser’. This small,
AI-powered device is designed to automatically dispense the right medicine and dosage to
patients, based on the doctor’s prescription and the patient’s unique identification (through
a fingerprint scan or Aadhaar card). It is equipped with a screen that shows simple
instructions while recording the details of each dispensing. This could be particularly
helpful in rural areas with a shortage of trained medical staff.
HealTech proposes a pilot program for Asha Gram: they will install multiple smart
dispensers at community centres and train local volunteers to assist people in using them.
Dr Sharma will initially prescribe medicines as usual. Eventually, however, the AI dispenser
could also give suggestions based on the data it collects, such as a patient’s prior health
records and symptom descriptions (entered by the local health volunteer or by the patient
themselves). It can also help track the usage of medicines and provide analytics to public
health workers to identify outbreaks or gaps in health service delivery.
Key Issues
• Limited Access: Asha Gram has limited access to healthcare professionals and medications and relies solely on
Dr Sharma.
• New Technology: Villagers are unfamiliar with AI-powered devices and may find them hard to trust or use.
• Patient Data: The dispenser collects and stores sensitive information, including health records and biometric
identification.
• AI Decision Making: Over time, the dispenser may suggest medicines itself rather than only follow Dr Sharma’s
prescriptions.
• Equity of Access: Differences in technology literacy, health awareness and accessibility may leave some
villagers behind.
• Data Security: The patient data collected by the dispensers must be protected from misuse and breaches.
Bioethical Considerations
1. Autonomy: Does the Smart Medicine Dispenser respect the autonomy of patients? Do patients have the right to
choose whether or not to use this technology? Who makes the ultimate decision about their healthcare?
2. Beneficence: How can the Smart Medicine Dispenser improve healthcare in Asha Gram? What are the benefits for the
patients and the community? Does using this device truly do any good?
3. Non-Maleficence: What are the risks associated with this technology? How might it potentially harm patients? What
safeguards need to be put in place?
4. Justice: Would using these dispensers be fair to everyone in Asha Gram? How might differences in technology literacy,
health awareness or accessibility create inequities in healthcare access?
AI Ethics Considerations
1. Data Privacy: What are the ethical concerns about collecting and storing patient data with the Smart Medicine
Dispenser? How should this data be protected? Is consent being obtained fairly and in a culturally appropriate manner?
(A small sketch follows these considerations.)
2. Transparency: How does the AI system determine which medication to dispense? How transparent is the decision-
making process to Dr Sharma and the patients?
3. Bias: How can we ensure that the AI system is not biased against certain groups of patients? Is the AI using data from
other countries which may not be suitable for patients in India?
4. Accountability: Who is responsible if the AI system makes an error and causes harm to a patient—HealTech, Dr
Sharma or the village volunteer?
5. Solutions: What steps should Dr Sharma, the villagers and HealTech take to ensure that the Smart Medicine Dispenser
is implemented ethically and effectively? How should the villagers’ concerns be addressed? What happens when things go
wrong?
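To make the data-privacy and accountability points concrete, here is a minimal Python sketch of how a dispenser like this might log each dispensing event. All names and values are hypothetical; a real system would also need consent handling, secure storage and proper key management.

import hashlib
import json
from datetime import datetime, timezone

def hash_patient_id(raw_id: str, salt: str) -> str:
    # Store a salted hash instead of the raw Aadhaar/fingerprint ID.
    return hashlib.sha256((salt + raw_id).encode()).hexdigest()

def make_audit_record(raw_id, medicine, dose, prescriber, salt="demo-salt"):
    # One record per dispensing event; no raw patient ID is kept.
    return {
        "patient": hash_patient_id(raw_id, salt),
        "medicine": medicine,
        "dose": dose,
        "prescriber": prescriber,  # named prescriber supports accountability
        "time": datetime.now(timezone.utc).isoformat(),
    }

record = make_audit_record("1234-5678-9012", "paracetamol", "500 mg", "Dr Sharma")
print(json.dumps(record, indent=2))

Keeping only a hashed identifier limits the damage if the log leaks, while recording the prescriber and timestamp helps answer the accountability question of who is responsible for each decision.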