Ethics Exam Study Guide
Chapter Summary
What is Ethics?
Ethics (moral philosophy) is the branch of philosophy that systematizes, defends, and recommends
concepts of right and wrong behavior. It has ancient roots in texts such as the Epic of Gilgamesh, Homer's
Iliad, and various religious teachings.
Descriptive vs. Normative Ethics
Descriptive Ethics: Describes how people actually behave and what moral standards they claim to
follow (non-evaluative, like anthropology)
Normative Ethics: Creates or evaluates moral standards; determines what people should do
Three Main Ethical Approaches
1. Virtue Ethics (Ethics of Character)
Focus: Character and virtues that determine ethical behavior
Origins: Plato and Aristotle
Key Principle: Emphasizes "being" rather than "doing"
Approach: Morality stems from individual character; actions reflect inner morality
2. Deontological Ethics (Duty-Based/Kantian Ethics)
Focus: Adhering to ethical principles, duties, and obligations
Key Principle: Some actions are right or wrong in themselves (moral absolutes), regardless of circumstances or consequences
Sources:
Kant: Actions motivated by reason, not emotions
Divine Command Theory: Actions are good because God decrees them
Examples: Golden Rule, Ten Commandments
3. Consequentialism/Utilitarianism (Teleological)
Focus: Morality based on consequences/outcomes
Utilitarianism: Seeks the "greatest happiness for the greatest number" (Jeremy Bentham, John Stuart Mill)
Key Principle: Weigh the net amount of happiness produced, the number of people affected, and the duration of that happiness
Challenge: Trolley Problem scenarios
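The utilitarian "calculation" described above can be made concrete with a toy sketch. This is an illustration only, not from the study guide: the function and the numeric weights are hypothetical, and real utilitarian reasoning is of course not reducible to a formula.

```python
# Toy model of a utilitarian calculation: weigh signed happiness per person
# by the number of people affected and the duration of the effect.
# All numbers here are made up for illustration.

def net_utility(outcomes):
    """Sum (happiness per person) x (people affected) x (duration)."""
    return sum(h * people * duration for h, people, duration in outcomes)

# Hypothetical trolley-style choice:
divert = [(-10, 1, 1.0), (+8, 5, 1.0)]   # one person harmed, five spared
do_nothing = [(-10, 5, 1.0)]             # five people harmed

print(net_utility(divert))      # 30
print(net_utility(do_nothing))  # -50
```

On these (arbitrary) weights, diverting yields higher net utility, which is why simple utilitarian accounts favor it; the Trolley Problem's force is that many people still find the action troubling, which deontologists take as evidence against pure outcome-counting.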
Professional Ethics Components
Key Qualities
Honesty
Integrity
Transparency
Accountability
Confidentiality
Objectivity
Respectfulness
Obedience to Law
Whistleblowing
A whistleblower is a person who reports dishonest or illegal activities in government or private organizations, raising concerns
about fraud, corruption, and mismanagement.
Three R's of Ethics
Rules
Responsibility
Respect
Codes of Ethics
Purpose
Provide a basic framework for ethical judgment and help professionals:
Perform their roles properly
Conduct themselves appropriately
Resolve ethical issues
Apply moral principles to specific situations
Positive Roles
Inspiration and guidance
Support for responsible conduct
Deterring unethical behavior
Education and mutual understanding
Positive public image
Promotion of business interests
Limitations
Too broad and general to cover every situation
Internal conflicts between different code entries
Cannot serve as final moral authority
Computer Ethics
Don'ts
Don't harm others with computers
Don't interfere with others' computer work
Don't snoop in others' files
Don't use computers to steal
Don't copy proprietary software for which you have not paid
Don't use others' computer resources without authorization
Don't appropriate others' intellectual output
Do's
Consider social consequences of programs/systems
Use computers with consideration and respect for others
Practice Scenarios and Solutions
Scenario 1: Software Piracy at Work
Situation: Your company asks you to install unlicensed software to save money on a critical project with a
tight deadline. Your boss emphasizes that everyone does this and it's "just business."
Virtue Ethics Analysis: A virtue ethicist would ask: "What kind of person do I want to be, and what would
a person of good character do?" Key virtues at stake include honesty, integrity, and justice. A person of
integrity maintains consistency between their values and actions, regardless of external pressure. Even if
"everyone does it," stealing software corrupts one's character and sets a precedent for future moral
compromises. The virtuous person would refuse, explaining that their character and professional
reputation are worth more than short-term convenience. They might suggest legal alternatives like open-
source software or requesting a budget for proper licensing.
Deontological Analysis: Following Kantian duty-based ethics, we must ask if this action could become a
universal law. If everyone pirated software whenever it was convenient, the software industry would
collapse, making the action self-defeating. Kant's categorical imperative tells us to "act only according to
maxims we could will to be universal laws." Additionally, we have clear moral duties: not to steal
(regardless of circumstances) and to respect others' intellectual property rights. The duty not to steal is
absolute - it doesn't matter if the company needs to save money or if the deadline is tight. Divine
Command Theory would also condemn this as violating the commandment "thou shalt not steal."
Utilitarian Analysis: A utilitarian would calculate the total happiness/well-being for all affected parties.
Negative consequences: Legal risks (fines, lawsuits, criminal charges), damage to company reputation if
discovered, lost revenue to software developers affecting their employees and families, setting bad
precedent encouraging more piracy, potential malware risks from illegal downloads, and personal stress
from acting unethically. Positive consequences: Short-term cost savings, meeting project deadline,
temporary job security. The utilitarian calculation clearly shows that the aggregate harm (legal
consequences, industry damage, moral precedent) far outweighs the minimal short-term benefits. The
greatest good for the greatest number requires respecting intellectual property rights.
Scenario 2: Whistleblowing Dilemma
Situation: You discover your company's medical device software has a critical bug that could cause
patient monitoring systems to fail, potentially leading to deaths. Management knows about it but decides
to delay the fix for six months to avoid disrupting a major product launch and maintain quarterly profits.
Professional Ethics Analysis: This scenario puts multiple professional obligations in direct conflict. Your code
of ethics likely prioritizes public safety above all else, while also requiring loyalty to your employer. The
Three R's of Ethics apply: Rules (professional codes mandate protecting public welfare), Responsibility
(you have a duty to prevent harm), and Respect (for human life and dignity). The core ethical
components at stake include honesty (concealing the truth), accountability (taking responsibility for
known risks), and obedience to law (potentially violating safety regulations).
Virtue Ethics Approach: A person of integrity cannot stand by while people's lives are at risk. The
virtuous professional demonstrates courage by speaking up despite personal cost, compassion by
prioritizing patient welfare, and justice by ensuring those in power are held accountable. While loyalty is a
virtue, it cannot override the fundamental virtue of protecting innocent life.
Deontological Approach: Kant's moral framework provides clear guidance: we have an absolute duty
not to harm others, which includes not allowing preventable harm when we have the power to stop it.
The categorical imperative test asks: could we universalize "employees should stay quiet about life-
threatening defects to protect company profits"? This would create a world where corporate profits
always trump human safety - clearly unacceptable.
Utilitarian Approach: The happiness/well-being calculation is stark: potential deaths and suffering of
patients and families vs. temporary financial losses and disruption for the company. The utilitarian would
factor in not just immediate consequences but long-term effects: lawsuits, regulatory penalties, destroyed
company reputation, and loss of public trust in the industry.
Recommended Action: Report the defect to regulatory authorities while documenting everything. Public
safety must take precedence over corporate loyalty.
Scenario 3: Privacy vs. Security
Situation: Following a terrorist attack, the government demands that your social media company provide
backdoor access to all user communications and location data, claiming it's necessary to prevent future
attacks. They threaten regulatory action if you refuse. Users trust your platform specifically because of
your privacy protections.
Virtue Ethics Analysis: The virtue ethicist must consider what character traits are most important in this
situation. Integrity requires consistency between stated values (privacy protection) and actions. Courage
may be needed to resist government pressure. Justice demands fair treatment - is it just to violate
everyone's privacy for the actions of a few? Prudence requires careful consideration of long-term
consequences for society. A person of good character would likely seek a middle ground that respects
both security needs and privacy rights, such as targeted warrants for specific suspects rather than mass
surveillance.
Deontological Analysis: Kant's framework presents competing duties: the duty to protect users' privacy
and autonomy vs. the duty to help prevent harm to innocent people. The categorical imperative test is
crucial: "Should all companies give governments backdoor access when demanded?" This would create a
world without digital privacy, potentially enabling authoritarian control. However, we must also consider:
"Should companies never cooperate with legitimate security investigations?" The resolution lies in
processes that respect both privacy and security - requiring specific warrants, judicial oversight, and
transparency about government requests.
Utilitarian Analysis: The utilitarian must weigh complex consequences. Security benefits: Potentially
preventing terrorist attacks, saving lives, and reducing public fear. Privacy costs: Chilling effect on free
speech, potential for government abuse, destruction of user trust, possible economic harm to the tech
industry, and precedent for authoritarian regimes worldwide. Historical evidence suggests that mass
surveillance has limited effectiveness in preventing terrorism while causing substantial harm to civil
liberties. The greatest good for the greatest number likely requires protecting privacy while cooperating
with targeted, warrant-based investigations.
Recommended Approach: Refuse blanket access but offer cooperation with specific, warranted requests
that include judicial oversight and transparency reporting.
Scenario 4: AI Bias in Hiring
Situation: Your company's AI hiring system, which screens thousands of applications, consistently ranks
applications from women and minorities lower than equally qualified white male candidates. When you
report this to management, they respond that the AI is just "learning from historical hiring data" and that
changing it would be expensive and time-consuming. They argue that since they're not explicitly
programming discrimination, they're not responsible for the bias.
Computer Ethics Analysis: This directly violates core computer ethics principles: "Think about the social
consequences of the program you are writing or the system you are designing" and "Use a computer in
ways that ensure consideration and respect for your fellow humans." The system perpetuates and
amplifies historical discrimination, causing real harm to qualified candidates.
Virtue Ethics Analysis: A person of integrity cannot ignore discrimination simply because it's automated.
Justice requires fair treatment regardless of race or gender. Honesty means acknowledging the system's
flaws rather than hiding behind technical explanations. Courage is needed to challenge management's
complacent attitude. The virtuous engineer would advocate for fixing the bias, possibly volunteering to
lead the effort or refusing to maintain discriminatory systems.
Deontological Analysis: We have clear moral duties not to discriminate and to treat people as ends in
themselves, not merely as means. The fact that discrimination is automated doesn't eliminate moral
responsibility - the categorical imperative asks: "Should all companies use biased AI systems when it's
cheaper than fixing them?" This would create a world where technological bias amplifies social inequality.
The duty to respect human dignity requires addressing known bias, regardless of cost or convenience.
Utilitarian Analysis: Harms of maintaining bias: Perpetuating workplace inequality, denying
opportunities to qualified candidates, potential legal liability, damage to company reputation, reduced
diversity limiting innovation and performance, and broader societal harm from normalized algorithmic
discrimination. Benefits of maintaining status quo: Short-term cost savings, avoiding technical work.
The utilitarian calculation clearly favors fixing the bias - the widespread harm to society far outweighs the
company's inconvenience and expense.
Professional Response: Document the bias, propose specific solutions (retraining models, adjusting
datasets, implementing fairness metrics), and escalate to higher management or relevant authorities if
necessary.
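The "fairness metrics" mentioned above can be illustrated with one widely used check, the disparate impact (four-fifths) ratio. This is a minimal sketch with hypothetical numbers, not the company's actual system; real bias audits use many complementary metrics.

```python
# Sketch of the disparate impact ("four-fifths rule") check:
# compare the selection rate of a protected group to a reference group.
# The applicant counts below are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants that the screener advances."""
    return selected / applicants

def disparate_impact_ratio(rate_protected, rate_reference):
    """Ratio of selection rates; values below ~0.8 are commonly
    treated as evidence of adverse impact."""
    return rate_protected / rate_reference

# Hypothetical outcomes from an AI resume screener:
rate_women = selection_rate(30, 100)  # 0.30
rate_men = selection_rate(50, 100)    # 0.50

ratio = disparate_impact_ratio(rate_women, rate_men)
print(round(ratio, 2))  # 0.6 -> fails the four-fifths threshold
```

A ratio this far below 0.8 would support the engineer's case that the bias is real and measurable, countering management's claim that no one is "explicitly programming discrimination."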
Scenario 5: Code of Ethics Conflict
Situation: You're a software engineer working on autonomous vehicle software. Your professional code
requires both "loyalty to your employer" and "protection of public safety." Your company wants to rush
the software to market to beat competitors, but you've identified potential safety issues that need more
testing. Management argues that competitors are taking similar risks and that delays could bankrupt the
company, affecting hundreds of jobs.
Analysis of Code Limitations: This perfectly demonstrates the limitations mentioned in your study
material: codes of ethics are "broad guidelines" that "often have internal conflicts" and "cannot serve as
the final moral authority." When professional codes conflict, we must use deeper ethical reasoning.
Virtue Ethics Resolution: The virtuous engineer considers which action reflects better character.
Integrity means being honest about safety concerns regardless of pressure. Courage requires speaking
up when lives are at stake. Prudence demands careful risk assessment. While loyalty is important, it
cannot override the fundamental virtue of protecting innocent life. A person of good character would
document safety concerns, propose reasonable solutions, and escalate appropriately if ignored.
Deontological Resolution: Kant's hierarchy of duties helps resolve this conflict. The duty to preserve
human life is more fundamental than loyalty to any organization. The categorical imperative test clarifies
this: "Should all engineers stay quiet about safety issues to protect their company's competitive
position?" This would create a world where profit consistently trumps safety - clearly unacceptable.
Professional engineers have a special duty to society that goes beyond ordinary employee loyalty.
Utilitarian Resolution: The utilitarian weighs all consequences: Risks of rushing: Potential accidents,
deaths, massive lawsuits, regulatory backlash, destroyed company reputation, setback for entire
autonomous vehicle industry. Benefits of rushing: Temporary competitive advantage, short-term job
security, faster deployment of potentially beneficial technology. Even considering the positive aspects of
autonomous vehicles, the utilitarian calculation favors thorough safety testing - the catastrophic risks of
premature deployment far outweigh competitive benefits.
Professional Resolution: Professional ethics codes typically establish a hierarchy: public safety trumps
employer loyalty. Document concerns, propose solutions, seek peer review, and if necessary, report to
professional organizations or regulatory bodies. The engineer might say: "I'm being loyal to the
company's long-term interests by preventing catastrophic liability and reputational damage."
Exam Preparation Questions
Short Answer Practice
1. Define normative vs. descriptive ethics
2. Explain the trolley problem and its ethical implications
3. List three limitations of codes of ethics
4. What are the three R's of ethics?
5. Define whistleblowing and give an example
Essay Practice
1. Compare how virtue ethics, deontology, and utilitarianism would approach the same ethical
dilemma
2. Analyze the role and limitations of professional codes of ethics
3. Discuss the unique ethical challenges in computer science and technology
4. Evaluate the ethical considerations of AI and automation in society
Application Practice
Be prepared to:
Apply all three ethical frameworks to any given scenario
Identify which ethical principles are in conflict
Justify your ethical reasoning
Discuss professional responsibilities and codes of ethics
Address computer-specific ethical issues
Key Terms to Remember
Categorical Imperative (Kant)
Greatest Happiness Principle (Mill)
Virtue Ethics, Deontology, Consequentialism
Professional codes of ethics
Whistleblowing
Computer ethics principles
Three R's: Rules, Responsibility, Respect