UNIT IV
Objective
Robots are aimed at manipulating objects: perceiving them, picking them up, moving them, modifying their physical properties, or destroying them. This frees people from repetitive tasks that robots can perform without becoming bored, distracted, or exhausted.
Aspects of Robotics
AI programs need general-purpose computers to operate on, whereas robots need special hardware with sensors and effectors.
Laws of robotics
Laws of robotics are any set of laws, rules, or principles intended as a fundamental framework to underpin the behavior of robots designed to have a degree of autonomy.
The best-known set of laws is Isaac Asimov's "Three Laws of Robotics".
The Three Laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to
come to harm.
2. A robot must obey the orders given it by human beings except where such orders
would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict
with the First or Second Laws.
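The Three Laws form a strict priority ordering: a lower-numbered law always overrides a higher-numbered one. The sketch below illustrates that ordering as a simple permission check; the action flags are hypothetical illustrations, not real robot APIs.

```python
# Sketch: Asimov's Three Laws as a strict priority ordering.
# The "action" dict and its flags are hypothetical illustrations.

def permitted(action):
    """Return True if the action is allowed under the Three Laws."""
    # First Law: never injure a human, or allow harm through inaction.
    if action["harms_human"] or action["allows_harm_by_inaction"]:
        return False
    # Second Law: obey human orders unless obeying conflicts with the First Law.
    if action["disobeys_order"] and not action["order_conflicts_first_law"]:
        return False
    # Third Law: self-preservation, but only if Laws 1 and 2 are satisfied.
    if action["endangers_self"] and not action["self_risk_needed_for_laws_1_2"]:
        return False
    return True

safe_action = {
    "harms_human": False,
    "allows_harm_by_inaction": False,
    "disobeys_order": False,
    "order_conflicts_first_law": False,
    "endangers_self": False,
    "self_risk_needed_for_laws_1_2": False,
}
print(permitted(safe_action))  # a harmless, obedient action is permitted
```

Note how each law is only consulted after the higher-priority laws have been satisfied, mirroring the "except where such orders would conflict" clauses.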
Applications of Robotics
• Industries − Robots are used for handling material, cutting, welding, color coating,
drilling, polishing, etc.
• Military − Autonomous robots can reach inaccessible and hazardous zones during war. Daksh, a robot developed by the Defence Research and Development Organisation (DRDO), is used to safely destroy life-threatening objects.
• Medicine − Robots are capable of carrying out hundreds of clinical tests simultaneously, rehabilitating permanently disabled people, and performing complex surgeries such as brain-tumor removal.
• Exploration − The robot rock climbers used for space exploration and the underwater drones used for ocean exploration are a few examples.
• Entertainment − Disney’s engineers have created hundreds of robots for movie
making.
Types of robotics
Robots are designed to perform specific tasks and operate in different environments. The
following are some common types of robots used across various industries:
• Service robots. These robots are used in a variety of fields in different scenarios, such as
domestic chores, hospitality, retail and healthcare. Examples include cleaning robots,
entertainment robots and personal assistance robots.
• Medical robots. These robots help with surgical procedures, rehabilitation and
diagnostics in healthcare settings. Robotic surgery systems, exoskeletons and artificial
limbs are a few examples of medical robots.
• Autonomous vehicles. These robots are mainly used for transportation purposes and
can include self-driving cars, drones and autonomous delivery robots. They navigate
and make decisions using advanced sensors and AI algorithms.
• Humanoid robots. These robots are programmed to imitate and mimic human
movements and actions. They look humanlike and are employed in research,
entertainment and human-robot interactions.
• Cobots. Contrary to the majority of other types of robots, which do their tasks alone
or in entirely separated work environments, cobots can share workplaces with human
employees, enabling them to work more productively. They're typically used to
remove costly, dangerous or time-consuming tasks from routine workflows. Cobots
can occasionally recognize and respond to human movement.
• Agricultural robots. These robots are used in farming and agricultural applications.
They can plant, harvest, apply pesticides and check crop health.
• Exploration and space robots. These robots are used in missions to explore space as
well as in harsh regions on Earth. Examples include underwater exploration robots
and rovers used on Mars expeditions.
• Defense and military robots. These robots aid military tasks and operations
including surveillance, bomb disposal and search-and-rescue missions. They're
specifically designed to operate in unknown terrains.
• Educational robots. These robots are created to instruct and educate kids about
robotics, programming and problem-solving. Kits and platforms for hands-on learning
in academia are frequent examples of educational robots.
• Entertainment robots. Created for entertainment purposes, these robots come in the form of robotic pets, humanoid companions, and interactive toys.
The pros and cons of robotics
Robotic systems are coveted in many industries because they can increase accuracy, reduce
costs and increase safety for human beings.
• Increased productivity. Robots don't readily become tired or worn out as humans do. They can work continuously without breaks while performing repetitive jobs, which boosts productivity.
• Accuracy. Robots can perform precise tasks with greater consistency and accuracy than humans can. This reduces the risk of errors and inconsistencies.
• Flexibility. Robots can be programmed to carry out a variety of tasks and are easily
adaptable to new use cases.
• Cost savings. By automating repetitive tasks, robots can reduce labor costs.
Roboethics
Robot ethics is a growing interdisciplinary research effort roughly situated in the intersection
of applied ethics and robotics with the aim of understanding the ethical implications and
consequences of robotic technology, in particular, autonomous robots.
Roboethics ensures that robots are developed and used in an ethical and responsible way that
benefits humanity and the environment.
Roboethics will become increasingly important as we enter an era where more advanced and
sophisticated robots as well as Artificial General Intelligence (AGI) are going to become an
integral part of our daily life.
The debate on ethical and social issues in advanced robotics is therefore becoming increasingly important. The current growth of robotics and the rapid developments in Artificial Intelligence require roboticists, and people in general, to be prepared sooner rather than later.
Roboethics Methodologies
Roboethics methodologies are developed by adopting particular ethical theories. Therefore, before discussing these methodologies, it is helpful to take a quick look at the branches and theories of ethics.
Branches of ethics.
• Meta-ethics. The study of concepts, judgements, and moral reasoning (i.e., what is
the nature of morality in general, and what justifies moral judgements? What does right
mean?).
• Normative (prescriptive) ethics. The elaboration of norms prescribing what is right or wrong, and what must or must not be done (What makes an action morally acceptable? What are the requirements for a human to live well? How should we act? What ought to be the case?).
• Applied ethics. The branch of ethics that examines how ethical theories can be applied to specific problems and applications of actual life (technological, environmental, biological, professional, public-sector, business ethics, etc., and how people take ethical knowledge and put it into practice). Applied ethics is contrasted with theoretical ethics.
• Descriptive ethics. The empirical study of people’s moral beliefs, and the question:
What is the case?
• These methodologies require an ethical robot to be capable of learning: it starts from perception of the world, then plans its actions based on the sensory data, and finally executes the chosen action.
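This perceive-plan-act cycle is the classic sense-plan-act control loop in robotics. A minimal sketch, in which the sensor, planner, and actuator functions are illustrative stand-ins:

```python
# Minimal sense-plan-act loop; sense(), plan(), act() are illustrative stubs.

def sense(world):
    """Perception: read the (simulated) world state."""
    return world["obstacle_ahead"]

def plan(obstacle_ahead):
    """Planning: choose an action based on the sensory data."""
    return "turn" if obstacle_ahead else "forward"

def act(world, action):
    """Execution: apply the chosen action to the world."""
    world["last_action"] = action

world = {"obstacle_ahead": True}
for _ in range(1):  # one iteration of the control loop
    action = plan(sense(world))
    act(world, action)
print(world["last_action"])  # turn
```

A real robot runs this loop continuously, and an "ethical" controller would add moral constraints at the planning step.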
Moral Theories
A distinction is made between persons and moral agents, such that it is not necessary for a robot to have personhood in order to be a moral agent.
• The first is achieved when the robot is significantly autonomous from any
programmers or operators of the machine.
• The second is when one can analyze or explain the robot’s behavior only by ascribing
to it some predisposition or ‘intention’ to do good or harm.
• And finally, robot moral agency requires the robot to behave in a way that shows an understanding of responsibility to some other moral agent.
Robots with all of these criteria will have moral rights as well as responsibilities regardless
of their status as persons.
In order to evaluate the moral status of any autonomous robotic technology, one needs to ask three questions of the technology under consideration:
- Is the robot significantly autonomous?
- Is the robot's behaviour intentional?
- Is the robot in a position of responsibility?
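These three questions can be read as a simple conjunction: a robot counts as a moral agent only if all three answers are affirmative. A minimal sketch, in which the field names are assumptions chosen to mirror the text:

```python
from dataclasses import dataclass

@dataclass
class RobotAssessment:
    # Illustrative fields mirroring the three questions in the text.
    significantly_autonomous: bool    # not under direct control of another agent
    behaviour_intentional: bool       # behaviour best explained by 'intentions'
    position_of_responsibility: bool  # fills a social role with assumed duties

def is_moral_agent(r: RobotAssessment) -> bool:
    """A robot is a moral agent only if all three criteria hold."""
    return (r.significantly_autonomous
            and r.behaviour_intentional
            and r.position_of_responsibility)

caregiver = RobotAssessment(True, True, True)  # e.g. an autonomous care robot
teleop = RobotAssessment(False, False, False)  # a telerobot fails the first test
print(is_moral_agent(caregiver), is_moral_agent(teleop))
```

The point of the conjunction is that autonomy alone (question one) is never sufficient, as the discussion below makes clear.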
➢ Autonomy
The first question asks whether the robot could be seen as significantly autonomous from any programmers, operators, and users of the machine. Here 'autonomy' is used in the engineering sense: the machine is simply not under the direct control of any other agent or user. The robot must not be a telerobot or be temporarily behaving as one.
Autonomy as described is not sufficient in itself to ascribe moral agency. Thus entities such as bacteria, animals, ecosystems, computer viruses, simple artificial-life programs, or simple autonomous robots, all of which exhibit autonomy as described here, are not to be seen as responsible moral agents simply on account of possessing this quality.
➢ Intentionality
• The second question addresses the ability of the machine to act ‘intentionally.’
Remember, we do not have to prove the robot has intentionality in the strongest sense,
as that is impossible to prove without argument for humans as well.
• As long as the behaviour is complex enough that one is forced to rely on standard folk
psychological notions of predisposition or ‘intention’ to do good or harm, then this is
enough to answer in the affirmative to this question.
• If the complex interaction of the robot's programming and environment causes the machine to act in a way that is morally harmful or beneficial, and the actions are seemingly deliberate and calculated, then the machine is a moral agent.
➢ Responsibility
Finally, we ascribe moral agency to a robot when the robot behaves in such a way that we can only make sense of that behaviour by assuming it has a responsibility to some other moral agent(s).
If the robot fulfils some social role that carries with it assumed responsibilities, and the only way we can make sense of its behaviour is to ascribe to it the 'belief' that it has a duty to care for its patients, then we can ascribe to this machine the status of a moral agent.
For example, robotic caregivers are being designed to assist in the care of the elderly. A human nurse is certainly a moral agent; when a machine carries out those same duties, it will be a moral agent if it is autonomous as described above, behaves in an intentional way, and its programming is complex enough that it understands the role it plays in the responsibility the health-care system has towards the patient under its direct care.
1. Privacy and data protection: Collecting, storing, and using personal data raises
significant ethical concerns. Protecting individuals' privacy rights through data privacy
measures, informed consent, and robust security is essential.
Example: A social media platform that collects and sells users' data without explicit
consent violates ethical privacy and data protection standards. Users' information
should be safeguarded and used responsibly (GDPR standards), with transparent
privacy policies and options for users to control their data
2. Access rights: Unequal access to technology and the digital divide raise ethical concerns
about equal opportunities and limited access to essential services. Bridging the digital divide
and ensuring fair access to technology is crucial for creating an inclusive society.
4. Intellectual property: Issues related to patents, copyright, and trade secrets arise in
information technology. Protecting intellectual property rights while encouraging innovation
and fair use of technology is a complex ethical challenge.
6. Cybersecurity and ethical hacking: The ever-growing threat of cyber attacks raises
ethical questions about cybersecurity practices, particularly in the context of ethical hacking
(white-hat hacking). Ethical hacking is the authorized practice of finding and exploiting
security weaknesses in computer systems to improve their security. It involves hacking
techniques to identify vulnerabilities and assists organizations in enhancing their defenses
against cyber threats, making it a crucial ethical consideration.
7. Algorithmic bias and fairness: Algorithms can be biased, perpetuating discrimination and
social inequalities. Ensuring fairness, transparency, and accountability in algorithmic
decision-making is an ethical imperative.
8. Artificial intelligence and automation: Ethical considerations arise with the increasing
use of AI and business processes’ automation, including concerns about job displacement,
privacy, and biased or unethical decisions by autonomous systems.
9. Misinformation: Combating the harmful effects of misinformation requires responsible handling of online content and educating users about media literacy.
10. Social impact and responsibility: The broader social impact of technology, such as its
effect on communities, the environment, and society at large, requires ethical reflection.
Ensuring that technology contributes positively to society and respects human rights and
environmental sustainability is a moral imperative.
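The algorithmic-fairness concern above (item 7) can be made concrete by comparing a model's positive-decision rates across groups, a criterion known as demographic parity. A minimal sketch with made-up data:

```python
from collections import defaultdict

def positive_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

# Hypothetical loan decisions: (group label, approved?)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = positive_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, gap)  # a large gap signals possible demographic disparity
```

Demographic parity is only one of several fairness criteria; auditing a real system also requires examining the data and the decision process, not just the output rates.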
Information and Communications Technology (ICT) can impact student learning when teachers are digitally literate and understand how to integrate it into the curriculum. Schools use a diverse set of ICT tools to communicate, create, disseminate, store, and manage information.
Some examples include online harassment and cyberbullying, unprotected access to personal data, lack of safety regulations for data processing, and the identification and exploitation of digital divides.
Information technology ethics is the study of the ethical issues arising out of the use and
development of electronic technologies. Its goal is to identify and formulate answers to
questions about the moral basis of individual responsibilities and actions, as well as the moral
underpinnings of public policy.
Harmonization means working on those areas which are complementary in order to have the
plans working together for the achievement of an overall strategic objective. Harmonization
helps different departments in local authorities share the same vision, work together and
optimize the use of resources.
As changing consumer trends continue to shorten product life cycles, manufacturers need production lines that can accommodate frequent product changeovers. In addition, as labour shortages become a reality, manufacturers strive to automate the simpler, monotonous tasks, creating a workplace where people can contribute to more creative tasks.
Harmonization Principles
When deciding whether and how to harmonize, we will create outcomes that allow us to:
1. Put end users first.
   a. End users should not have to become experts in the way staff are.
   b. We will reduce or eliminate barriers for patrons to discover, access, and effectively use resources.
2. Simplify.
3. Keep evolving: we will not be bound to long-standing practices and systems that are no longer useful.
   a. Build the freedom to experiment into our processes, and create a culture of comfort with reiteration.
   b. Changes we make now are still beneficial even if changes are made again in the future.
4. Respect local differences.
   a. Recognize that the UC Libraries are different for good reasons and specialize in different things.
   b. Help campuses define and prioritize local needs in relation to system needs.
   c. Understand what is impactful, why, and how we calculate that (as a system and locally).
Example: Mobile Robot
Our TM series provides a unique solution for easily installing a robot to automate applications such as assembly, pick-and-place, and inspection. As part of the TM series launch, we will
release a “mobile-compatible” model which will seamlessly integrate into our market leading
LD series autonomous mobile robot. This enables users to automate more complex tasks such
as pick and place onto a tray or container, and to connect processes with autonomous mobile
robots.
• The operator guides the robot arm to the workpiece position and makes the arm grasp the workpiece in the collaborative work space by using the hand-guiding device.
• The operator then moves the arm to the automatic operation space; once the arm has passed the boundary provided by the safeguards, the robot transitions to automatic operation mode to carry out a programmed process.
• Beyond this, assist operations for handling workpieces or tools in parts assembly or welding are expected as concrete applications of collaborative operation.
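The hand-guiding workflow above is essentially a two-mode state machine: the robot stays in collaborative (hand-guided) mode until the arm crosses the safeguard boundary, then switches to automatic operation. A simplified sketch, in which the boundary coordinate and mode names are assumptions for illustration:

```python
# Sketch of the collaborative -> automatic mode transition described above.
# The boundary coordinate and mode names are illustrative assumptions.

SAFEGUARD_BOUNDARY_X = 1.0  # hypothetical boundary between the two work spaces

class CollaborativeRobot:
    def __init__(self):
        self.mode = "hand_guided"  # operator guides the arm directly
        self.x = 0.0

    def move_arm_to(self, x):
        """Update arm position; switch modes when the boundary is crossed."""
        if self.mode == "hand_guided" and x > SAFEGUARD_BOUNDARY_X:
            self.mode = "automatic"  # run the programmed process
        self.x = x

robot = CollaborativeRobot()
robot.move_arm_to(0.5)   # still in the collaborative work space
print(robot.mode)        # hand_guided
robot.move_arm_to(1.5)   # arm passes the safeguard boundary
print(robot.mode)        # automatic
```

In a real cobot the transition is enforced by safety-rated hardware (light curtains, laser scanners), not by application code; this sketch only shows the logical structure of the workflow.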
Professional Responsibility:
➢ Design Ethics:
Roboticists have a responsibility to design robots ethically, considering the potential
impact on society.
This includes transparency in design decisions and considering the long-term effects
of their creations.
➢ Regulatory Compliance:
Adherence to existing laws and regulations is crucial. Roboticists should ensure their
creations meet safety and ethical standards.
They should also advocate for appropriate regulations where none exist.
➢ Education and Transparency:
Educating the public about the capabilities and limitations of robots is important for
responsible adoption.
Providing transparent documentation about how robots operate and make decisions
builds trust.
➢ Accountability:
Establishing clear lines of accountability when something goes wrong is necessary.
This includes defining who is responsible in the case of accidents or ethical breaches
involving robots.
➢ Continual Assessment:
As technology evolves, so do ethical considerations.
Roboticists have a responsibility to continually assess the impact of their work on
society and adjust accordingly.
Case Studies:
• Autonomous Vehicles:
Who is responsible in a crash involving an autonomous vehicle? The manufacturer,
the owner, or the AI?
• Robotic Surgery:
What happens if a surgical robot malfunctions during a procedure? Who bears the
responsibility?
• Military Robotics:
How should autonomous weapons be used ethically and legally in warfare?
Initiatives and Guidelines:
IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems:
Provides guidelines for ethically aligned design in robotics and AI.
Asimov's Three Laws of Robotics:
Though fictional, these laws inspire discussions on robot ethics.
Conclusion:
Ethics and professional responsibility in robotics are essential for the safe and
beneficial integration of robots into society. Roboticists, policymakers, and society as
a whole must work together to ensure that robots are developed and deployed
ethically, with a focus on safety, fairness, transparency, and accountability.