
CCS345-ETHICS AND AI-UNITIV

UNIT IV

ROBOETHICS: SOCIAL AND ETHICAL IMPLICATION OF ROBOTICS


Robot - Roboethics - Ethics and Morality - Moral Theories - Ethics in Science and Technology - Ethical Issues in an ICT Society - Harmonization of Principles - Ethics and Professional Responsibility - Roboethics Taxonomy.
Robot

Robots are artificial agents that act in a real-world environment.


Robotics is a branch of engineering and computer science, closely tied to AI, that involves the conception,
design, manufacture and operation of robots.
Robotics can take on a number of forms. A robot might resemble a human or be in the form
of a robotic application, such as robotic process automation, which simulates how humans
engage with software to perform repetitive, rules-based tasks.

Objective

Robots are aimed at manipulating objects: perceiving, picking, moving, or modifying the
physical properties of an object, destroying it, or otherwise having an effect, thereby freeing
human workers from repetitive functions, which robots can perform without getting bored,
distracted, or exhausted.

Aspects of Robotics

• The robots have mechanical construction, form, or shape designed to accomplish a
particular task.
• They have electrical components which power and control the machinery.
• They contain some level of computer program that determines what, when and how
a robot does something.

Difference between Robot Systems and Other AI Programs

Here is the difference between the two −

AI Programs:
• They usually operate in computer-simulated worlds.
• The input is given in symbols and rules.
• They need general-purpose computers to operate on.

Robots:
• They operate in the real physical world.
• Inputs are analog signals, such as speech waveforms or images.
• They need special hardware with sensors and effectors.

CSE/IIIYEAR/VI SEM Page 1



Tasks of Computer Vision


• OCR − Optical Character Recognition, software that converts scanned documents
into editable text; it often accompanies a scanner.
• Face Detection − Many state-of-the-art cameras come with this feature, which
enables the camera to detect a face and take the picture at the perfect expression. It is
also used to let a user access software on a correct match.
• Object Recognition − Object recognition systems are installed in supermarkets and
in high-end cars from makers such as BMW, GM, and Volvo.
• Estimating Position − Estimating the position of an object with respect to the camera,
as in locating the position of a tumor in a human body.

Laws of robotics
Laws of robotics are any set of laws, rules, or principles intended as a
fundamental framework to underpin the behavior of robots designed to have a degree
of autonomy.
The best-known set of laws is Isaac Asimov's "Three Laws of Robotics".
The Three Laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to
come to harm.
2. A robot must obey the orders given it by human beings except where such orders
would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict
with the First or Second Laws.
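As a toy illustration, the Three Laws can be read as prioritized constraints on action selection. The sketch below is purely hypothetical: the action names and the world model are invented, and no real safety system reduces to a lookup table.

```python
# Hypothetical sketch: Asimov's Three Laws as prioritized constraints.
# The world model and action names are invented for illustration.

def permitted(action, world):
    """Return True if the action passes all three laws in priority order."""
    # First Law: the robot may not injure a human being.
    if world["harms_human"].get(action, False):
        return False
    # Second Law: obey human orders unless they conflict with the First Law.
    order = world.get("current_order")
    if order and action != order and not world["harms_human"].get(order, False):
        return False  # disobeying a lawful (non-harmful) order
    # Third Law: protect its own existence, subject to Laws 1 and 2.
    if world["self_destructive"].get(action, False):
        return False
    return True

world = {
    "harms_human": {"push_cart_into_crowd": True, "fetch_tool": False},
    "self_destructive": {"fetch_tool": False},
    "current_order": "fetch_tool",
}
print(permitted("fetch_tool", world))            # True
print(permitted("push_cart_into_crowd", world))  # False
```

Note how the ordering of the checks encodes the priority of the laws: a harmful action is rejected before obedience or self-preservation are even considered.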

Applications of Robotics

Robotics has been instrumental in various domains, such as −

• Industries − Robots are used for handling material, cutting, welding, color coating,
drilling, polishing, etc.
• Military − Autonomous robots can reach inaccessible and hazardous zones during
war. A robot named Daksh, developed by the Defence Research and Development
Organisation (DRDO), is in service to safely destroy life-threatening objects.
• Medicine − The robots are capable of carrying out hundreds of clinical tests
simultaneously, rehabilitating permanently disabled people, and performing complex
surgeries such as brain tumor removal.
• Exploration − The robot rock climbers used for space exploration and the underwater
drones used for ocean exploration are a few examples.
• Entertainment − Disney’s engineers have created hundreds of robots for movie
making.
Types of robotics
Robots are designed to perform specific tasks and operate in different environments. The
following are some common types of robots used across various industries:

• Industrial robots. Frequently used in manufacturing and warehouse settings, these
large programmable robots are transforming the supply chain by performing tasks such as
welding, painting, assembling and material handling.


• Service robots. These robots are used in a variety of fields in different scenarios, such as
domestic chores, hospitality, retail and healthcare. Examples include cleaning robots,
entertainment robots and personal assistance robots.
• Medical robots. These robots help with surgical procedures, rehabilitation and
diagnostics in healthcare settings. Robotic surgery systems, exoskeletons and artificial
limbs are a few examples of medical robots.
• Autonomous vehicles. These robots are mainly used for transportation purposes and
can include self-driving cars, drones and autonomous delivery robots. They navigate
and make decisions using advanced sensors and AI algorithms.
• Humanoid robots. These robots are programmed to imitate and mimic human
movements and actions. They look humanlike and are employed in research,
entertainment and human-robot interactions.
• Cobots. Contrary to the majority of other types of robots, which do their tasks alone
or in entirely separated work environments, cobots can share workplaces with human
employees, enabling them to work more productively. They're typically used to
remove costly, dangerous or time-consuming tasks from routine workflows. Cobots
can occasionally recognize and respond to human movement.
• Agricultural robots. These robots are used in farming and agricultural applications.
They can plant, harvest, apply pesticides and check crop health.
• Exploration and space robots. These robots are used in missions to explore space as
well as in harsh regions on Earth. Examples include underwater exploration robots
and rovers used on Mars expeditions.
• Defense and military robots. These robots aid military tasks and operations
including surveillance, bomb disposal and search-and-rescue missions. They're
specifically designed to operate in unknown terrains.
• Educational robots. These robots are created to instruct and educate kids about
robotics, programming and problem-solving. Kits and platforms for hands-on learning
in academia are frequent examples of educational robots.
• Entertainment robots. Created for entertainment purposes, these robots come in the
form of robotic pets, humanoid companions and interactive toys.
The pros and cons of robotics
Robotic systems are coveted in many industries because they can increase accuracy, reduce
costs and increase safety for human beings.

Common advantages of robotics include the following:

• Safety. Safety is arguably one of robotics' greatest benefits, as many dangerous or
unhealthy environments no longer require the human element. Examples include the
nuclear industry, space, defense and maintenance. With robots or robotic systems,
workers can avoid exposure to hazardous chemicals and even limit psychosocial and
ergonomic health risks.


• Increased productivity. Robots don't readily become tired or worn out as humans do.
They can work continuously without breaks while performing repetitive jobs, which
boosts productivity.
• Accuracy. Robots can perform precise tasks with greater consistency and accuracy
than humans can, reducing the risk of errors and inconsistencies.
• Flexibility. Robots can be programmed to carry out a variety of tasks and are easily
adaptable to new use cases.
• Cost savings. By automating repetitive tasks, robots can reduce labor costs.

Roboethics

Roboethics is an applied ethics whose objective is to develop scientific, cultural, and
technical tools that can be shared by different social groups and beliefs.

Robot ethics is a growing interdisciplinary research effort roughly situated in the intersection
of applied ethics and robotics with the aim of understanding the ethical implications and
consequences of robotic technology, in particular, autonomous robots.

Roboethics ensures that robots are developed and used in an ethical and responsible way that
benefits humanity and the environment.

Roboethics will become increasingly important as we enter an era where more advanced and
sophisticated robots as well as Artificial General Intelligence (AGI) are going to become an
integral part of our daily life.

Therefore, the debate on ethical and social issues in advanced robotics is becoming
increasingly important. The current growth of robotics and the rapid developments in
Artificial Intelligence require roboticists, and humans in general, to be prepared sooner rather
than later.

Roboethics Methodologies
Roboethics methodologies are developed by adopting particular ethics theories. Therefore,
before discussing these methodologies, it is helpful to have a quick look at the branches and
theories of ethics.


Branches of ethics.

• Meta-ethics. The study of concepts, judgements, and moral reasoning (i.e., what is
the nature of morality in general, and what justifies moral judgements? What does right
mean?).
• Normative (prescriptive) ethics. The elaboration of norms prescribing what is right
or wrong, what must be done or what must not (What makes an action morally acceptable?
What are the requirements for a human to live well? How should we act? What ought to be
the case?).
• Applied ethics. The ethics branch which examines how ethics theories can be applied
to specific problems/applications of actual life (technological, environmental, biological,
professional, public sector, business ethics, etc., and how people take ethical knowledge and
put it into practice). Applied ethics is usually contrasted with theoretical ethics.
• Descriptive ethics. The empirical study of people’s moral beliefs, and the question:
What is the case?

Roboethics Methodologies

Roboethics has two basic methodologies:


➢ Top-down methodology
➢ Bottom-up Methodology

Top-down roboethics methodology:


• In this methodology, the rules of the desired ethical behaviour of the robot are
programmed and embodied in the robot system.
• Top-down methodology in ethics originated from several areas, including
philosophy, religion, and literature. In control and automation systems design, the top-
down approach means to analyze or decompose a task into simpler sub-tasks that can
be hierarchically arranged and performed to achieve a desired output product.

Bottom-up roboethics methodology:


• This methodology assumes that the robots possess adequate computational and
artificial intelligence capabilities to adapt themselves to different contexts.


• The robot must be capable of learning ethical behaviour, starting from perception of
the world, then planning actions based on sensory data, and finally executing the
action.
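The contrast between the two methodologies can be sketched in code. Below, a hypothetical top-down agent carries a designer-written rule, while a bottom-up agent infers acceptability from labeled past experiences; the actions, labels, and the 0.5 threshold are all invented for illustration.

```python
# Hypothetical illustration of the two roboethics methodologies.

def top_down_allows(action):
    # Top-down: a rule programmed in advance by the designer.
    forbidden = {"block_fire_exit", "grab_person"}
    return action not in forbidden

class BottomUpAgent:
    """Bottom-up: learns which actions are acceptable from labeled experiences."""
    def __init__(self):
        self.judgements = {}  # action -> list of observed outcomes (True = acceptable)

    def observe(self, action, acceptable):
        self.judgements.setdefault(action, []).append(acceptable)

    def allows(self, action):
        outcomes = self.judgements.get(action)
        if not outcomes:
            return False  # be conservative about actions never seen before
        return sum(outcomes) / len(outcomes) > 0.5

agent = BottomUpAgent()
agent.observe("hand_over_tool", True)
agent.observe("hand_over_tool", True)
agent.observe("block_fire_exit", False)

print(top_down_allows("block_fire_exit"))  # False
print(agent.allows("hand_over_tool"))      # True
print(agent.allows("block_fire_exit"))     # False
```

The key difference is where the ethical knowledge lives: in the top-down case it is embodied in the rule set at design time; in the bottom-up case it emerges from the agent's accumulated experience.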

Levels of Robot Morality


The morality of robots can be classified into one of three levels:
➢ Operational morality (moral responsibility lies entirely in the robot designer and
user).
➢ Functional morality (the robot has the ability to make moral judgments without top-
down instructions from humans, and the robot designers can no longer predict the
robot’s actions and their consequences).
➢ Full morality (the robot is so intelligent that it fully autonomously chooses its actions,
thereby being fully responsible for them).


Moral Theories
A distinction is made between persons and moral agents, such that it is not necessary for a
robot to have personhood in order to be a moral agent. Three criteria apply:
• The first is achieved when the robot is significantly autonomous from any
programmers or operators of the machine.
• The second is when one can analyze or explain the robot’s behavior only by ascribing
to it some predisposition or ‘intention’ to do good or harm.
• And finally, robot moral agency requires the robot to behave in a way that shows an
understanding of responsibility to some other moral agent.
Robots with all of these criteria will have moral rights as well as responsibilities regardless
of their status as persons.
In order to evaluate the moral status of any autonomous robotic technology, one needs to ask
three questions of the technology under consideration:
➢ Is the robot significantly autonomous?
➢ Is the robot’s behaviour intentional?
➢ Is the robot in a position of responsibility?
➢ Autonomy
The first question asks whether the robot can be seen as significantly autonomous from
any programmers, operators, and users of the machine. Here we use the term ‘autonomy’ in the


engineering sense, simply that the machine is not under the direct control of any other agent
or user. The robot must not be a telerobot or be temporarily behaving as one.

Autonomy as described is not sufficient in itself to ascribe moral agency. Thus entities such
as bacteria, or animals, ecosystems, computer viruses, simple artificial life programs, or
simple autonomous robots, all of which exhibit autonomy as I have described it, are not to be
seen as responsible moral agents simply on account of possessing this quality.

➢ Intentionality
• The second question addresses the ability of the machine to act ‘intentionally.’
Remember, we do not have to prove the robot has intentionality in the strongest sense,
as that is impossible to prove without argument for humans as well.
• As long as the behaviour is complex enough that one is forced to rely on standard folk
psychological notions of predisposition or ‘intention’ to do good or harm, then this is
enough to answer in the affirmative to this question.
• If the complex interaction of the robot’s programming and environment causes the
machine to act in a way that is morally harmful or beneficial, and the actions are
seemingly deliberate and calculated, then the machine is a moral agent.
➢ Responsibility
Finally, we ascribe moral agency to a robot when the robot behaves in such a way that
we can only make sense of that behaviour by assuming it has a responsibility to some
other moral agent(s).
If the robot behaves in this way, fulfils some social role that carries with it some
assumed responsibilities, and the only way we can make sense of its behaviour is to
ascribe to it the ‘belief’ that it has a duty to care for its patients, then we can ascribe
to this machine the status of a moral agent.
For example, robotic caregivers are being designed to assist in the care of the elderly.
Certainly a human nurse is a moral agent; when and if a machine carries out those
same duties, it will be a moral agent if it is autonomous as described above, behaves in
an intentional way, and its programming is complex enough that it understands the
responsibility that the health care system it operates in has towards the patient under
its direct care.
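The three-question test above can be condensed into a checklist. The sketch below is illustrative only: the attribute names paraphrase the criteria in the text, and the example robots are invented.

```python
# Illustrative checklist for the three moral-agency criteria discussed above.
from dataclasses import dataclass

@dataclass
class Robot:
    autonomous: bool   # not under the direct control of any other agent or user
    intentional: bool  # behaviour best explained by ascribing intentions
    responsible: bool  # fills a social role carrying assumed duties

def is_moral_agent(r):
    """All three criteria must hold for moral agency to be ascribed."""
    return r.autonomous and r.intentional and r.responsible

care_robot = Robot(autonomous=True, intentional=True, responsible=True)
teleoperated_drone = Robot(autonomous=False, intentional=False, responsible=False)

print(is_moral_agent(care_robot))          # True
print(is_moral_agent(teleoperated_drone))  # False
```

A teleoperated machine fails the first criterion immediately, which is why the text insists the robot must not be a telerobot or be temporarily behaving as one.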

Ethical Issues in Science and Technology


In today's rapidly advancing digital world, information technology (IT) has become
essential to our personal and professional lives, transforming how we access
information, communicate, and work.
However, as technology advances, it presents ethical dilemmas that require careful
consideration. These challenges have gained significant attention due to privacy
breaches and biased algorithms.
This section explores ethical issues in IT and emphasizes the importance of balancing
innovation with accountability.


The importance of ethics in technology


Ethics plays a vital role in technology for several reasons.
➢ Firstly, ethical behavior fosters trust and confidence among users, crucial for
successful technological advancements and user adoption.
➢ Secondly, ethical considerations protect individuals' privacy and ensure responsible
handling of personal data. Fairness and equity are also essential, as technology
should benefit everyone regardless of their background.

Ethical issues in information technology


Ethical issues in information technology encompass various concerns and considerations
related to technology use, development, and impact. Some of the prominent ethical issues in
information technology include:

1. Privacy and data protection: Collecting, storing, and using personal data raises
significant ethical concerns. Protecting individuals' privacy rights through data privacy
measures, informed consent, and robust security is essential.

Example: A social media platform that collects and sells users' data without explicit
consent violates ethical privacy and data protection standards. Users' information
should be safeguarded and used responsibly (GDPR standards), with transparent
privacy policies and options for users to control their data.
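As a toy sketch of the consent principle (not a GDPR implementation; the field names and the consent model are invented):

```python
# Hypothetical sketch: release only the user fields covered by explicit consent.
users = {
    "alice": {"email": "alice@example.com", "location": "Berlin"},
}
consents = {
    "alice": {"email"},  # alice consented to sharing her email only
}

def export_profile(user_id):
    """Return only the fields the user explicitly consented to share."""
    profile = users.get(user_id, {})
    allowed = consents.get(user_id, set())
    return {k: v for k, v in profile.items() if k in allowed}

print(export_profile("alice"))  # {'email': 'alice@example.com'}
```

The point of the sketch is the default: any field without a recorded consent is withheld, rather than shared until someone objects.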

2. Access rights: Unequal access to technology and the digital divide raise ethical concerns
about equal opportunities and limited access to essential services. Bridging the digital divide
and ensuring fair access to technology is crucial for creating an inclusive society.

• Example: A government or a nonprofit organization implementing a program to
provide free internet access and computer literacy training to underprivileged
communities ensures equal opportunities for education, employment, and access to
essential services. This initiative promotes the ethical principles of inclusivity and
fair access to technology.

3. Harmful actions: Ethical conduct in IT prohibits intentional damage or alteration of IT
systems, including actions that harm users, employees, employers, or the general public.

• Example: A hacker intentionally breaching a company's IT system to steal and
exploit sensitive customer information for personal gain engages in unethical
behavior. Such harmful actions compromise the security and privacy of individuals
and have broader implications for the affected organization and the public.

4. Intellectual property: Issues related to patents, copyright, and trade secrets arise in
information technology. Protecting intellectual property rights while encouraging innovation
and fair use of technology is a complex ethical challenge.

• Example: Using copyrighted software or proprietary algorithms without proper
licensing or authorization is an ethical violation. Companies and individuals should
respect intellectual property rights, give credit where it's due, and seek appropriate
permissions to foster a fair and innovative technology environment.


5. Liability and accountability: Determining liability and responsibility for software or
hardware defects that can cause data breaches or other negative consequences is an ethical
concern. It is crucial to address this issue in a way that holds responsible parties accountable
for their actions.

• Example: If a software company releases a product with known security
vulnerabilities and fails to address them promptly, it should be held accountable for
any resulting data breaches. Ethical responsibility involves acknowledging and
rectifying mistakes, compensating affected parties, and implementing defined
measures to prevent future incidents.

6. Cybersecurity and ethical hacking: The ever-growing threat of cyber attacks raises
ethical questions about cybersecurity practices, particularly in the context of ethical hacking
(white-hat hacking). Ethical hacking is the authorized practice of finding and exploiting
security weaknesses in computer systems to improve their security. It involves hacking
techniques to identify vulnerabilities and assists organizations in enhancing their defenses
against cyber threats, making it a crucial ethical consideration.

• Example: An organization hires ethical hackers to conduct security assessments and
penetration tests on its systems to identify vulnerabilities and improve security
measures. These ethical hackers operate within legal boundaries and adhere to a code
of ethics while helping organizations enhance their cybersecurity defenses.

7. Algorithmic bias and fairness: Algorithms can be biased, perpetuating discrimination and
social inequalities. Ensuring fairness, transparency, and accountability in algorithmic
decision-making is an ethical imperative.

• Example: A hiring algorithm used by a company favors certain demographic groups,
leading to discrimination against qualified individuals from underrepresented
backgrounds. Similarly, a social media platform regularly audits its content
recommendation algorithms to identify and eliminate biases that may promote
harmful stereotypes or exclusionary practices.
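An audit of this kind can be sketched as a comparison of selection rates across groups (demographic parity). The data and the four-fifths threshold used here are illustrative, not a statement about any real system:

```python
# Toy bias audit: compare selection rates across groups (demographic parity).
applicants = [
    {"group": "A", "selected": True},  {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "B", "selected": True},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]

def selection_rates(records):
    totals, hits = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + (1 if r["selected"] else 0)
    return {g: hits[g] / totals[g] for g in totals}

rates = selection_rates(applicants)
# Four-fifths rule of thumb: flag if any group's rate falls below
# 80% of the highest group's rate.
disparate = min(rates.values()) < 0.8 * max(rates.values())
print(rates)      # A ≈ 0.67, B ≈ 0.33
print(disparate)  # True
```

Here group B is selected at half the rate of group A, so the audit flags a potential disparate impact worth investigating.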

8. Artificial intelligence and automation: Ethical considerations arise with the increasing
use of AI and the automation of business processes, including concerns about job
displacement, privacy, and biased or unethical decisions by autonomous systems.

• Example: An autonomous vehicle manufacturer establishes clear ethical guidelines to
ensure their self-driving cars prioritize passenger safety, follow traffic regulations,
and make unbiased decisions in critical situations, reducing the risk of accidents
caused by biased or unethical decision-making.

9. Digital manipulation and misinformation: The spread of misinformation and digital
manipulation raises ethical concerns about the impact on public trust, democratic processes,
and individual well-being. Addressing these issues involves promoting media literacy and the
responsible use of technology.

• Example: Spreading false information through social media platforms to manipulate
public opinion during elections undermines democratic processes and erodes public
trust. Ethical responsibility involves promoting accurate information, fact-checking,

and educating users about media literacy to combat the harmful effects of
misinformation.

10. Social impact and responsibility: The broader social impact of technology, such as its
effect on communities, the environment, and society at large, requires ethical reflection.
Ensuring that technology contributes positively to society and respects human rights and
environmental sustainability is a moral imperative.

• Example: A technology company showcases ethical responsibility by minimizing its
carbon footprint, responsibly recycling electronic waste, and actively contributing to
environmental conservation efforts. Simultaneously, it donates a portion of its profits
to support educational programs in underprivileged communities, promoting digital
literacy and equal access to technology resources. This demonstrates the company's
commitment to reducing harm, fostering social responsibility, and positively
impacting society.

Ethical Issues in an ICT Society

Information and Communications Technology (ICT) can impact student learning when
teachers are digitally literate and understand how to integrate it into the curriculum. Schools
use a diverse set of ICT tools to communicate, create, disseminate, store, and manage
information.


Some examples include online harassment and cyberbullying, unprotected access to personal
data, lack of safety regulations for data processing, and the identification and exploitation of
digital divides.

Information technology ethics is the study of the ethical issues arising out of the use and
development of electronic technologies. Its goal is to identify and formulate answers to
questions about the moral basis of individual responsibilities and actions, as well as the moral
underpinnings of public policy.


ICT ethics issues: accessibility in automation systems


Harmonization of Principles

Harmonization means working on those areas which are complementary in order to have the
plans working together for the achievement of an overall strategic objective. Harmonization
helps different departments in local authorities share the same vision, work together and
optimize the use of resources.

As changing consumer trends continue to shorten product life cycles, manufacturers need
production lines to meet the requirements of frequent product change-overs. In addition, as
labour shortage becomes a reality, manufacturers strive to automate the simpler and
monotonous tasks, to create a workplace where people can contribute in the more creative
tasks.

How Harmonization Principles Work

Harmonization Principles

When deciding whether and how to harmonize, we will create outcomes that allow us to:

1. Prioritize the end user experience.

a. End users should not have to become experts in the way staff are.

b. If someone has to experience complexity, can it be staff instead of patrons?
Leverage staff expertise to allow patrons a shorter path.

c. We will reduce or eliminate barriers for patrons to discover, access, and effectively


use our research collections.

2. Simplify.

a. Complexity is costly: proliferation of code, of special technical needs, and of
breakage points all require more staffing resources to maintain.

b. Complexity can result in communication or training errors when reference
staff are working with patrons.

3. Keep evolving: we will not be bound to long-standing practices and systems that are no
longer useful.

a. Build the freedom to experiment into our processes, and create a culture of
comfort with reiteration.

b. We will create a cycle of assessment to determine “usefulness.”

c. Remember: future changes do not retroactively malign past decisions.

d. Changes we make now are still beneficial even if changes are made again in
the future. Future changes will be helped by incremental changes.

4. Strongly support unique needs.

a. Recognize that the UC Libraries are different for good reasons and
specialize in different things.

b. Localization will be supported in a way that is still interoperable.

c. Allow campuses to support user-oriented needs locally.

d. Help campuses define and prioritize the local needs in relation to system
needs.

e. Shining a light on the unique collections can be accomplished through
standardization or localization.

5. Enable data-driven decision making.


a. Our data needs to be compatible and interoperable in order to make sense.

b. We need to be measuring and capturing the same things.

c. Understand and share definitions of what we are measuring in order to
improve measurement.

d. Understand what is impactful, why, and how we calculate that (as a system
and locally).

6. Unify the experience for users of our general collections.

7. Share the load.

a. Don’t reinvent the wheel; don’t over-expend energy.

b. Reduce unnecessary / problematic work.

c. Stop doing some things altogether.

d. Identify where there is value in doing it once for the consortium.

e. Free up resources to spend on unique challenges and opportunities.

Example: Mobile Robot

Our TM series provides a unique solution to easily install a robot to automate applications
such as assembly, pick & place, and inspection. As part of the TM series launch, we will
release a “mobile-compatible” model which will seamlessly integrate into our market-leading
LD series autonomous mobile robot. This enables users to automate more complex tasks, such
as pick and place onto a tray or container, and to connect processes with autonomous mobile
robots.

The TM series introduces 12 robots with combinations of the following specifications:

• Arm length: 700 mm, 900 mm, 1100 mm, 1300 mm
• Payload: 4 kg, 6 kg, 12 kg, 14 kg
• Power supply: AC, DC
• Conforming or not conforming to SEMI S2 Safety Guidelines


Three Key Features of the TM Series Collaborative Robot:

• Intuitive programming interface reduces programming time: Reduced installation
and setup times compared to traditional industrial robots. The flowchart-based
intuitive programming interface and easy teaching require little to no previous robot
programming experience.
• Integrated on-arm vision system reduces setup time: The TM series comes with
built-in vision and integrated lighting, allowing the user to capture products with a
wide viewing angle. Equipped with image sensing functions such as pattern matching,
bar code reading, and colour identification, this robot system makes inspection,
measurement and sorting applications extremely easy to set up out of the box.
• Conforms to all human-machine collaborative safety standards reducing
installation time: The TM series conforms with all safety standards that enable
cooperation between humans and machines and can be safely operated around people
without industrial safety fencing traditionally required for industrial robots, greatly
reducing the installation time. (Conforms to safety requirements for industrial robots
ISO10218-1 as well as safety requirements for collaborative industrial robots
ISO/TS15066)

Example: workpiece feeding operation using a hand guiding device


• The operator guides the robot arm to the workpiece position and makes the arm
grasp the workpiece in the collaborative workspace by using the hand guiding device.
• After this, the operator moves the arm to the automatic operation space, and once the
arm has passed the boundary provided by the safeguards, the robot transitions to
automatic operation mode to carry out a programmed process.
• Beyond this, assist operations for handling workpieces or tools in parts assembly
or welding are expected as further concrete applications of collaborative operation.

Ethics and professional responsibility


Ethics and professional responsibility in robotics are critical considerations as robotics
technologies become increasingly integrated into various aspects of society.
Here are some key points to consider:
Ethical Considerations:
➢ Autonomy and Control:
Robotics often involves autonomous systems. Ethical questions arise about who is
responsible when robots make decisions.
Should there be human oversight, and to what extent should humans be able to
intervene in robotic actions?
➢ Safety:
Ensuring robots are safe for human interaction is a primary concern.
How can we prevent robots from causing harm, whether through physical injury or
data breaches?
➢ Privacy:
Robots, especially those with cameras and sensors, raise concerns about privacy
infringement.
How should data collected by robots be used, stored, and protected?
➢ Bias and Fairness:
Like AI, robots can inherit biases from their designers or datasets.
Ensuring fairness in robot behavior, especially in areas like hiring (robotic HR) or law
enforcement (robotic policing), is crucial.
➢ Job Displacement:
Automation can lead to job loss for humans. Ensuring a fair transition for affected
workers is an ethical concern.


Should companies using robots to replace human workers be required to provide
training or compensation?
➢ Environmental Impact:
The manufacturing and disposal of robots can have environmental consequences.
How can robotics be made more sustainable and environmentally friendly?

Professional Responsibility:
➢ Design Ethics:
Roboticists have a responsibility to design robots ethically, considering the potential
impact on society.
This includes transparency in design decisions and considering the long-term effects
of their creations.
➢ Regulatory Compliance:
Adherence to existing laws and regulations is crucial. Roboticists should ensure their
creations meet safety and ethical standards.
They should also advocate for appropriate regulations where none exist.
➢ Education and Transparency:
Educating the public about the capabilities and limitations of robots is important for
responsible adoption.
Providing transparent documentation about how robots operate and make decisions
builds trust.
➢ Accountability:
Establishing clear lines of accountability when something goes wrong is necessary.


This includes defining who is responsible in the case of accidents or ethical breaches
involving robots.
➢ Continual Assessment:
As technology evolves, so do ethical considerations.
Roboticists have a responsibility to continually assess the impact of their work on
society and adjust accordingly.
Case Studies:
• Autonomous Vehicles:
Who is responsible in a crash involving an autonomous vehicle? The manufacturer,
the owner, or the AI?
• Robotic Surgery:
What happens if a surgical robot malfunctions during a procedure? Who bears the
responsibility?
• Military Robotics:
How should autonomous weapons be used ethically and legally in warfare?
Initiatives and Guidelines:
IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems:
Provides guidelines for ethically aligned design in robotics and AI.
Asimov's Three Laws of Robotics:
Though fictional, these laws inspire discussions on robot ethics.
Conclusion:
Ethics and professional responsibility in robotics are essential for the safe and
beneficial integration of robots into society. Roboticists, policymakers, and society as
a whole must work together to ensure that robots are developed and deployed
ethically, with a focus on safety, fairness, transparency, and accountability.
