Ph.D. Dissertation - Tomonori Yamamoto
Tomonori Yamamoto
Baltimore, Maryland
January, 2011
Abstract
detect a lump during teleoperated palpation. With a custom version of the da Vinci
Classic Surgical System, we conducted experiments using both artificial and biological tissues and quantified the accuracy of our technique. Second, we examine the
impact of using a mathematical environment model on the stability and transparency
of a general bilateral teleoperation system. Considering an estimated environment
model, we design a teleoperator controller that would provide high-fidelity force feedback to an operator. The stability conditions of our controller relax the stability margins required by a previously proposed controller. Third, we present the development
and evaluation of an open platform for augmented reality and haptic interfaces for
robot-assisted surgery. Our goal is to create a system that enables interoperability
between various kinds of telesurgical devices and enhances the surgeon's performance
by providing haptic feedback and augmented reality. We demonstrate the feasibility
of the interface for two augmentations, both of which are created using the detected
geometry of the tissue surface: the real-time material property overlay described earlier, and forbidden-region virtual fixtures that prevent a patient-side manipulator
from entering undesired regions of the workspace.
Dissertation Advisor:
Professor Allison M. Okamura
Dissertation Readers:
Professor Russell H. Taylor and Professor Louis L. Whitcomb
Acknowledgments
First and foremost, I would like to thank my advisor, Dr. Allison Okamura, for
her support, patience, encouragement, and guidance. Throughout my graduate studies, she has always been supportive and helpful. She also provided me with many
valuable experiences, including a summer school in Paris and a research opportunity
in Munich. Furthermore, I am grateful to have had the opportunity to closely observe
her successful career as she was promoted from Assistant Professor to Full Professor.
Additionally, I would like to thank Dr. Louis Whitcomb for his insightful comments
on this dissertation, particularly in Chapter 2. Without his advice, our tissue property
overlay technique would not have been a successful application.
Also, I would like to acknowledge Dr. Russell Taylor for agreeing to be a member
of my thesis committee. His immense knowledge of computer-integrated surgery and
his useful feedback on this thesis helped me complete the dissertation.
I received much help from many collaborators over the years. Balazs Vagvolgyi,
Kamini Balaji, Vasiliki Koropouli, and Alex Vacharat helped on the study described
in Chapter 2. In particular, Balazs significantly contributed to the project to make
an excellent da Vinci demonstration. Antonio Cedillos and several engineers at Intelligent Automation, Inc. (Tim Judkins, Niki Abolhassani, and Sung Jung) helped considerably in the Robotic Surgery Open Interface project presented in Chapter 4.
Paul Griffiths, Lawton Verner, and Sarthak Misra always welcomed my questions and
were willing to discuss interesting research topics. The work reported in Appendix A
was done with collaborators at Technische Universität München: Michael Bernhardt, Angelika Peer, and Dr. Martin Buss.
I have been fortunate to have had many great times with friends and colleagues at
Hopkins. I would like to thank: Jake Abbott, Tope Akinbiyi, Amy Blank, Alex Burtness, Steven Charles, Tricia Gibo, David Grow, Netta Gurari, Jim Gwilliam, Vinutha
Kallem, Jakob Kemper, Yoshihiko Koseki, Katherine Kuchenbecker, Mohsen Mahvash, Ann Majewicz, Pannada (Nim) Marayong, Steven Marra, Kyle Reed, Carol
Reiley, Joe Romano, Sunipa Saha, Josh Wainer, Robert Webster, and Tom Wedlick
of the Haptics Laboratory; Dan Abretske, Omar Ahmad, Marcin Balicki, Dr. Darius Burschka, Tiffany Chen, Gouthami Chintalapani, Dr. Gregory Chirikjian, Dr.
Noah Cowan, Csaba Csoma, Anton Deguet, Greg Fischer, Ioana Fleming, Pezhman Foroughi, Dr. Gregory Hager, Tamas Haidegger, Dr. Iulian Iordachita, Min
Yang Jung, Ankur Kapoor, Dr. Peter Kazanzides, Axel Krieger, Dr. Rajesh Kumar,
Jusuk Lee, Kiju Lee, Henry Lin, Manu Madhav, Babak Matinfar, Topher McFarland,
Dan Mirota, Matt Moses, Shinya Onogi, Yoshito Otake, Wooram Park, Zach Pezzementi, John Pliam, Hassan Rivaz, Eatai Roth, Sharmi Seshamani, Aris Skliros, John
Dedication
To my wife Miki,
for all she has done for me.
Contents

Abstract
Acknowledgments
List of Tables
List of Figures

1 Introduction
  1.1 Motivation
  1.2 Dissertation Contributions
  1.3 Prior Work
    1.3.1
    1.3.2
    1.3.3
  1.4 Dissertation Outline

  2.1 Introduction
    2.1.1 Contributions
    2.1.2 Outline
  2.2 Apparatus
  2.3
    2.3.1
    2.3.2
  2.4
    2.4.1
    2.4.2
    2.4.3
    2.4.4
  2.5
    2.5.1 Estimation Technique
    2.5.2 Method
    2.5.3 Results
    2.5.4 Discussion
  2.6
    2.6.1 Estimation Technique
    2.6.2 Methods
    2.6.3 Analysis
    2.6.4 Results
    2.6.5
    2.6.6 Discussion
  2.7 Conclusions

  3.1 Introduction
    3.1.1 Related Work
    3.1.2 Contributions
  3.2
    3.2.1
    3.2.2
    3.2.3 Stability Analysis
  3.3
    3.3.2 Stability Margin
  3.4 Discussion
  3.5 Conclusions

  4.1 Introduction
  4.2
    4.2.2 Hardware
    4.2.3
    4.2.4
    4.2.5
    4.2.6
      4.2.6.1
      4.2.6.2
      4.2.6.3
    4.2.7
    4.2.8
      4.2.8.1
      4.2.8.2
    4.2.9
  4.3 Results
    4.3.1
    4.3.2
    4.3.3
    4.3.4
  4.4 Discussion
    4.4.1
    4.4.2
    4.4.3
    4.4.4
  4.5 Conclusions

  5.1 Summary
  5.2

Bibliography
Vita
List of Tables

2.1
2.2
2.3
2.4
2.5
2.6
3.1
3.2
A.1 Estimated parameters of the environment and average force errors for each estimation algorithm (Teleoperation without PE)
A.2 Estimated parameters of the environment and average force errors for each estimation algorithm (Teleoperation mimicking PE)
A.3 Estimated parameters of the environment and average force errors for each estimation algorithm (Autonomous Control with PE)
List of Figures

1.1
1.2
2.1
2.2
2.3
2.4
2.5
2.6
2.7
2.8
2.9
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23 Force-displacement data of porcine livers and kidneys with the artificial lumps that were used for the real-time experiments reveal that the differences between soft and hard regions are not always clear.
3.1
3.2
3.3
3.4
3.5
3.6
3.7
3.8
3.9
4.2
4.3
4.4
4.5
4.6
4.7
4.8
4.9 Comparison of the point clouds before interpolation (left) and after interpolation (right). Our interpolation technique successfully restored some of the missing surface points.
4.10 Tool-environment interaction during palpation of soft and hard regions of the artificial tissue. The Hunt-Crossley model approximates both nonlinear curves well.
4.11 A camera image of the 3D material property overlay after several palpations (left) and a color bar displaying the stiffness range (right). A hard lump is located on the right side of the artificial tissue sample.
4.12 Cross-section view (on the local x-z plane) of points in the FRPC and the corresponding normal vectors before (top) and after (bottom) the position and normal average filters were applied.
4.13 In simulation, a user travels on the reference surface curve from the center to the right limit of the artificial tissue. The number of points in the point cloud changes the surface of the FRVF. We observed how the magnitude and the angle of the force changed as the user moved.
4.14 Relationship between the number of points and the average change in force magnitude (top) and force angle (bottom) the user would feel at every increment.
4.15 The trajectories of the slave manipulator tip and the force measured by the force sensor with FRVF (top) and without FRVF (bottom). Blue dots are positions of the slave manipulator and red arrows are the measured interaction forces.
Chapter 1
Introduction
1.1
Motivation
[Figure 1.1: block diagram of a teleoperation system: Human Operator, Master, Communication Channel, Slave, Environment]
the original system or object. The accuracy of the model is important, but the model does not have to reproduce the original exactly. Instead, in order to make the model most useful
for a particular application, unnecessary details from the original can be excluded to
decrease complexity and emphasize what should be displayed.
Acquisition of the patient's tissue model will be useful for surgeons to learn, for example, the stiffness distribution, dynamic behavior, and geometrical shapes and boundaries of the tissues. The model can be obtained a priori or in real time. We seek to
estimate mechanical properties of the tissue to detect hard lumps hidden in soft tissues and enhance teleoperator control. We also aim to employ a tissue geometry
model to prevent the surgeon from inadvertently cutting or touching delicate tissue
structures.
1.2
Dissertation Contributions
We address the development and evaluation of an open platform for augmented reality and haptic interfaces for robot-assisted surgery. Key
features of the interface are augmented visual feedback using 3D material property overlays and forbidden-region virtual fixtures with haptic feedback to provide assistance to the operator. A stereo reconstruction technique computes a
point cloud, consisting of points on the surface of an object. Based on the point
cloud, a forbidden-region virtual fixture is created. We evaluate the effects of
smoothing filters and the number of points in the point cloud on virtual fixture
performance. Two bilateral teleoperation assistance modes are provided during interaction with artificial prostate tissue: 3D tissue property overlay and
point-cloud-based forbidden-region virtual fixtures to prevent the patient-side
manipulator from entering undesired regions of the workspace. This work was
performed in collaboration with Timothy N. Judkins, Niki Abolhassani, and
Sung Jung of Intelligent Automation, Inc., who programmed the communication protocol and 3D surface reconstruction. Antonio Cedillos of the Johns
Hopkins University (JHU) created an artificial prostate tissue.
1.3
Prior Work
This section reviews prior work related to the studies in this dissertation. In Section 1.3.1, we describe methods for measuring mechanical tissue properties, including
tools to measure the properties, types of tissue models, and estimation techniques.
Section 1.3.2 reviews haptic displays for surgery. Haptic (force and tactile) information collected during a surgical procedure can be displayed to the surgeon haptically or graphically. We also address virtual fixtures. In Section 1.3.3, we review
robot-assisted surgical systems, describing existing complete systems as well as open
interfaces.
1.3.1
Knowledge of tissue properties helps surgeons detect locations of tissue abnormalities such as a hard nodule in the prostate or a calcified artery in the heart. We
review tools to measure the properties of tissues, mathematical tissue models, and
offline and online estimation techniques.
of the patient's body from hard materials by aligning the atoms' nuclei in the patient's body, applying radio frequency pulses to change the alignment, and detecting the radio frequency energy emitted by the nuclei. MRI is particularly useful in diagnosing soft-tissue structures of the body, such as the brain, muscles, and heart.
Despite their popularity and effectiveness, CT and magnetic resonance images are
not readily available in today's interventional suites, except for a few applications in which preoperative images are overlaid on a stereoscopic video [154] or used by a neurosurgical robot [9, 143]. Due to tissue mobility and deformation as well as patient movement, it is difficult for surgeons to align the locations of tissue abnormalities detected in preoperative images with the patient's body. To overcome this issue,
researchers have integrated a laparoscopic ultrasound probe with the da Vinci Surgical
System to provide intuitive visualization of the ultrasound data [91].
A promising approach that could be employed during a surgical procedure is
elastography [119, 120, 121, 131]. It measures elasticity of soft tissues and displays
strain distribution of the tissues for detection of tumors and ablated areas. Ultrasound
imaging is the most commonly used technique for elastography, but MRI or other
diagnostic imaging modalities have also been used [119, 120]. The main disadvantage
of elastography is that it is computationally expensive [120]. In addition, ultrasonic
sensors do not work well for the lung or the stomach because the signals are attenuated
quickly due to the air in those organs [79].
Some specialized tools for contact-based localization of tumors have been developed. Broadly, those tools are classified into two categories depending on the
sensing type: force and tactile. Force sensors have been widely used for acquisition
of mechanical properties of the tissues. Tactile sensors have also been extensively
used for lump detection because a single palpation provides more information than
force sensors. Applications for those tools include breast cancer [42, 43, 77, 162],
prostate cancer [41, 161], lung cancer [75, 103, 104], and coronary artery bypass
grafting (CABG) [52, 167]. Puangmali et al., [129], provide a literature review on
force and tactile sensing for minimally invasive surgery (MIS).
Force sensors have been used to acquire tissue properties. Kato et al., [77, 78],
developed the first automatic palpation robot, the WAPRO system, for detection
of breast cancer. They proposed a discriminative function to estimate stiffness of a
palpated point and clinically tested the robot system on 16 patients who had breast
cancer. Bicchi et al., [21], modified a commercially available laparoscopic tool to
develop a prototype of a sensorized surgical tool. They estimated compliance of the
manipulated object by fitting force-displacement curves. Xu and Simaan, [165], investigated the intrinsic force-sensing capabilities of a continuum robot using a load cell.
To meet demands for MRI compatibility and space limitations in MIS, they placed the sensor away from the tip of the surgical tool and used joint-level information to convert the measured force to the tip force. They validated the robot's stiffness-detection capability using artificial tissues and hard lumps. The rolling indenter [93]
uses a force/torque sensor mounted on a rotating cylinder that rolls over the tissue
surface to localize hard lumps hidden in soft tissues during robot-assisted minimally
invasive surgery (RMIS). By keeping the penetration depth constant, a colored force
map is created to identify the locations of hard nodules.
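The colored-force-map idea can be sketched as follows. This is an illustrative reconstruction, not the published implementation; the function name and the simple blue-to-red colormap are assumptions. Forces measured on a grid at constant penetration depth are normalized and mapped to colors so that stiffer regions stand out.

```python
import numpy as np

def force_color_map(forces):
    """Map a grid of palpation forces (N), all measured at the same
    penetration depth, to RGB colors: red = stiff, blue = soft."""
    f = np.asarray(forces, dtype=float)
    # Normalize the forces to [0, 1] over the palpated grid.
    span = f.max() - f.min()
    norm = (f - f.min()) / span if span > 0 else np.zeros_like(f)
    # Blue-to-red colormap: high-force (stiff) regions appear red.
    rgb = np.stack([norm, np.zeros_like(norm), 1.0 - norm], axis=-1)
    return rgb

# A 3x3 palpation grid with a hard lump (higher reaction force) at center.
grid = [[0.5, 0.6, 0.5],
        [0.6, 2.0, 0.6],
        [0.5, 0.6, 0.5]]
colors = force_color_map(grid)
print(colors[1][1])  # center is fully red: [1. 0. 0.]
```

Because the penetration depth is held constant, the measured force acts as a proxy for local stiffness, which is the same reasoning the rolling indenter relies on.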
Tactile array sensors have been employed in numerous devices. Mechanical imaging (MI) [126, 136] provides a 3D reconstruction of an internal mechanical structure
from the measurement of surface strain and stress patterns by a pressure array sensor.
MI uses continuum mechanics-based models to assess geometrical and mechanical parameters of the tissue structures. It is able to detect size, shape, consistency/hardness,
and mobility of detected lesions. Both prostate [41, 161] and breast [42, 43] MI systems have been clinically tested. Tactile imaging [162] uses a hand-held scan head
where piezoresistive pressure sensors are mounted for breast cancer diagnosis. It can
provide more accurate and reproducible estimates of the size of breast masses than
physical examination. Tactile imaging system (TIS) [103, 104] uses tactile sensors
made of capacitance-based arrays to measure contact pressure. A pseudo-color map
is generated to visualize the measured pressure and overlay it on a monitor for lump
detection. A graphical overlay is created based on relative measured pressure on
each sensor element. A tactile sensing instrument (TSI) [150], attached at the tip of a surgical tool on an industrial robot, also uses a tactile array sensor. It was developed for localization of tumors during RMIS. The robot performed a force-controlled
autonomous palpation and generated a pressure map. PVDF sensing grippers have
four pairs of tactile sensors attached on the grippers [33]. When an object is pinched,
the sensors discern the softness and hardness of the object. The stress distribution is
visualized using an intuitive, simple color image. An active tactile sensor consists of
piezoresistors and a magnetic actuating unit [57]. It can measure the contact force
and estimate both stiffness and damping.
Other than force or tactile sensors, some researchers have addressed a technique to
localize a hard lump using air flow. The active strobe imager [75] applies a vibrating
air jet on a surface of an organ and records its dynamic behavior using a camera.
The tissue surface is directly observed under strobe light to allow the user to detect
a tumor, so no visual overlay is necessary. In the extension of their study [79],
Kaneko et al. performed both animal experiments using live pigs and experiments
using a piece of human lung with a tumor removed in open surgery. The tumors
could be detected by phase difference between input air flow and output displacement
measurements. One advantage of using air flow is that damage to the tissue is minimal
compared to forceps grasping. Althoefer et al., [16], developed an air-cushion force-sensitive indentation probe; the approach used to generate a colored force map is similar to [93]. Another method for measuring tissue properties includes pipette
tissue aspiration [157].
Tissue Models
Developing a mechanical tissue model based on tool-tissue interaction is useful in
many scenarios, including developing a surgical simulator to provide realistic haptic
feedback [106] and helping surgeons detect cancerous tumors, as malignant tissues
are typically stiffer than normal healthy tissues [69, 88, 126, 128, 135]. For characterization of the mechanical properties of biological tissues, many researchers refer to Fung's work [48], which modeled tissues from a continuum mechanics point of view. Rosen
et al., [132], obtained the structural biomechanical properties, such as stress-strain
curves and stress relaxation, of seven abdominal organs: bladder, gallbladder, small
and large intestines, liver, spleen, and stomach, from 14 pigs.
Biological tissues are known to exhibit nonlinear properties and consist of inhomogeneous structures. While finite element modeling [68] can provide superior tissue
characterization, its computational complexity has limited its utility in real-time applications. Thus, for computational efficiency, many researchers assume a simple
linear tissue model. In particular, a classical linear tissue model, such as a spring
model [28, 29, 35, 165] or the Kelvin-Voigt model [57, 105, 159, 166], is commonly
employed. Polynomial functions are also used [21, 115]. A nonlinear Hunt-Crossley
model [71] takes into account the energy loss during impact, which is neglected in
the Kelvin-Voigt model. This model has been employed to characterize soft tissue
behavior in [40, 167]. The quasi-linear Hammerstein model has also been used to
emulate compliant viscoelastic environments [20]. A mechanics-based model can be
obtained by solving an inverse problem using stress patterns on the surface of compressed tissue [126, 136]. Misra et al., [106], reviewed the literature on tool-tissue
interaction modeling.
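To make the contrast between the linear and nonlinear models concrete, a minimal numerical sketch of the two follows; the parameter values k, b, lam, and n are illustrative assumptions, not values from the cited studies.

```python
def kelvin_voigt(x, xdot, k=100.0, b=2.0):
    """Linear Kelvin-Voigt model: f = k*x + b*xdot."""
    return k * x + b * xdot

def hunt_crossley(x, xdot, k=100.0, lam=2.0, n=1.5):
    """Hunt-Crossley model: f = k*x**n + lam*(x**n)*xdot.
    The damping term vanishes with the penetration depth x, so the
    contact force is continuous at contact and release; this is how
    the model accounts for energy loss during impact."""
    return k * x**n + lam * (x**n) * xdot

# At release (penetration x -> 0 while retracting at 0.05 m/s):
print(kelvin_voigt(0.0, -0.05))   # -0.1: a nonphysical pulling force
print(hunt_crossley(0.0, -0.05))  # 0.0: the contact force vanishes
```

The small example highlights why the Hunt-Crossley model is preferred for impact-like tool-tissue contact: the Kelvin-Voigt model predicts a tensile force at zero penetration whenever the tool retracts, while the Hunt-Crossley force goes smoothly to zero.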
Estimation Methods
Once a tissue model is selected, tissue mechanical properties need to be estimated.
For online environment parameter estimation, there exist several methods, including
recursive least squares (RLS) [27, 40, 95, 159], adaptive identification [58, 105, 137,
139], Kalman filter approaches [28, 29, 35, 68], and a multi-estimator technique [110,
166]. Erickson et al., [44], reviewed and compared four methods: RLS, indirect
adaptive control, model-reference adaptive control, and a signal processing technique.
They estimated environment stiffness and damping to improve force tracking and
stability of impedance control [66] for the application of robotic assembly operations.
They concluded that indirect adaptive control, with persistent excitation, showed the
best performance among the four schemes. For surgical applications, Yamamoto et
al., [166], compared RLS, adaptive identification, and the multi-estimator technique
to estimate unknown parameters of the Kelvin-Voigt model. They recommend RLS
or the multi-estimator for online tissue parameter estimation. The details of [166] are
given in Appendix A.
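As an illustration of online estimation, a minimal recursive least squares sketch for the Kelvin-Voigt model f = k x + b dx/dt is shown below on synthetic data; the true parameter values, trajectory, and forgetting factor are arbitrary assumptions for the example, not values from [166].

```python
import numpy as np

def rls_kelvin_voigt(xs, xdots, fs, lam=0.99):
    """Recursive least squares for the Kelvin-Voigt model f = k*x + b*xdot.
    lam is a forgetting factor (lam = 1.0 means no forgetting).
    Returns the estimated parameter vector [k, b]."""
    theta = np.zeros(2)           # parameter estimate [k, b]
    P = np.eye(2) * 1e6           # covariance, large initial uncertainty
    for x, xd, f in zip(xs, xdots, fs):
        phi = np.array([x, xd])                  # regressor vector
        K = P @ phi / (lam + phi @ P @ phi)      # estimator gain
        theta = theta + K * (f - phi @ theta)    # prediction-error update
        P = (P - np.outer(K, phi @ P)) / lam     # covariance update
    return theta

# Synthetic palpation data with true k = 300 N/m, b = 5 Ns/m.
t = np.linspace(0.0, 2.0, 400)
x = 0.01 * (1.0 - np.cos(2.0 * np.pi * t))  # penetration depth (m)
xdot = np.gradient(x, t)
f = 300.0 * x + 5.0 * xdot
k_hat, b_hat = rls_kelvin_voigt(x, xdot, f)
print(round(k_hat, 1), round(b_hat, 1))  # approximately 300.0 and 5.0
```

With persistently exciting motion (here, a sinusoidal palpation), the estimates converge to the true stiffness and damping; with a forgetting factor below one, the estimator can also track slowly varying tissue properties.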
1.3.2
RMIS systems such as the da Vinci Surgical System provide numerous advantages over traditional MIS, including precise motion of a surgical tool, dexterous wrist manipulation control, and enhanced 3D vision systems, among others. However, commercial, clinically used RMIS systems provide no or limited haptic feedback, which has
been recognized as a major drawback [116, 117]. Acquired haptic information can
be fed back to the surgeon via haptic or graphical modalities to enhance the surgeon's capabilities. We also address virtual fixtures (assistive, computer-generated forces) for
surgical applications.
Haptic Displays
We first review haptic devices, which are typically hand-held and convey touch sensations to a human operator. From an operator's point of view, the type
of signals sent to and returned from a haptic interface varies depending on device
mechanisms. A typical haptic interface can be categorized as being of the impedance
or admittance type. An impedance-type device is typically light-weight, low-friction,
and backdrivable. For impedance-type devices, it appears to the operator that input
to the device is velocity (or position) and output from the device is force. Examples include the PHANTOM family of haptic devices (SensAble Technologies, Inc.,
Woburn, Massachusetts, USA) [11], the master manipulator of the da Vinci Surgical
System (Intuitive Surgical, Inc., Sunnyvale, California, USA) [7], and the Immersion
Impulse Engine 2000 (Immersion Corporation, San Jose, California, USA) [3]. Using
the Laplace transform and assuming the device dynamics are linear, an impedance-type device is modeled as

F(s) = Z(s)V(s)

(1.1)
where V is the velocity (or position) of the device, F is the net force (i.e., the force applied by the human operator minus the force applied by the actuators of the device), and Z is the inherent mechanical impedance of the device.
In contrast, an admittance-type device is typically non-backdrivable due to a
highly-geared mechanism. A force sensor is often needed to control the device. It
appears to the operator that input to the device is force and output from the device
is velocity (or position):
V(s) = Y(s)F(s)
(1.2)
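The two device models (1.1) and (1.2) can be illustrated with a discrete-time sketch, taking Z(s) = ms + b as an assumed mass-damper impedance; the function names and parameter values are illustrative, not part of any particular device.

```python
def impedance_force(v, v_prev, dt, m=0.1, b=0.5):
    """Impedance-type device, F = Z(s)V with Z(s) = m*s + b:
    velocity in, force out. The derivative s*V is approximated
    by a backward difference."""
    a = (v - v_prev) / dt
    return m * a + b * v

def admittance_velocity(f, v_prev, dt, m=0.1, b=0.5):
    """Admittance-type device, V = Y(s)F with Y(s) = 1/(m*s + b):
    force in, velocity out. One explicit Euler step of
    m*vdot + b*v = f."""
    vdot = (f - b * v_prev) / m
    return v_prev + vdot * dt

# Impedance type: motion is the input, force is the output.
print(impedance_force(v=0.2, v_prev=0.1, dt=0.01))      # ~1.1 N

# Admittance type: force is the input, motion is the output.
print(admittance_velocity(f=1.0, v_prev=0.0, dt=0.01))  # ~0.1 m/s
```

The same dynamics appear in both functions; only the causality is reversed, which is exactly the distinction between the impedance and admittance device types.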
back to the surgeon via a haptic interface. Alternatively, the interaction force can
be estimated using a dynamic model of the manipulator. From a control point of
view, however, returning the force to the operator is challenging, particularly in a
teleoperation system.
A number of researchers [25, 56, 59, 89] have pointed out that there is a trade-off
between stability and transparency (a measure of the accuracy of motion tracking
and haptic feedback) of the teleoperator. Commonly used control architectures for
a bilateral teleoperator are position-position [97, 100], position-force [32, 142], and
position-position/force feedback [25] control. A position-position controller exchanges
position (and velocity) between the master and slave to minimize position tracking
errors. It is a simple controller, but the fidelity (degree of accuracy) of force feedback is
not high. To increase transparency of this controller, researchers have compensated
for friction/inertia of the slave manipulator [98, 99, 100]. A position-force controller
sends position (and velocity) information from the master to the slave so that the slave
tracks the motion of the master, and measured environment force from the slave to the
master to convey realistic interaction force to the operator. This controller appears to
provide good transparency, but has a fundamental limitation: the product of the force and position scaling gains must be smaller than the ratio of the master inertia to the slave inertia [32]. Both position-position and position-force architectures
are often referred to as a two-channel controller since the information exchanged
between master and slave robots uses two communication channels.

Figure 1.2: A block diagram of a general four-channel teleoperation system [59, 89]. No communication delay is assumed in this block diagram.

The position-position/force feedback controller adds direct force feedback from the environment
on top of the position-position controller. Thus, it is regarded as a three-channel architecture (position from the master, and position and force from the slave). When
both position and force information are exchanged bidirectionally between master
and slave, it is called a four-channel controller. The performance of the teleoperator architectures has been compared between two-channel and three-channel controllers [25] and between two-channel and four-channel controllers [15, 89, 147]. Lawrence, [89], presented a general framework of the bilateral teleoperator architecture (Figure 1.2), later extended by Hashtrudi-Zaad and Salcudean, [59], and derived perfect transparency conditions for the four-channel teleoperator. Around the same time as [89], Yokokohji and Yoshikawa, [169], also independently derived perfect transparency conditions. Perfect transparency could be achieved by the three-channel
or even two-channel architecture [112], but from a practical point of view, achieving perfect transparency would not be possible due to unmodeled dynamics of the
manipulators, time lag due to filtered signals, among others. Instead of achieving
perfect transparency, there are some efforts to enhance sensitivity of the environment
property feedback for detection of tissue abnormalities [25, 37].
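For reference, transparency in the sense used above is commonly quantified via the impedance transmitted to the operator; the notation below follows the general teleoperation literature and is an assumption, not this dissertation's later notation.

```latex
% Z_to: impedance transmitted to (felt by) the operator,
% F_h: operator force at the master, V_m: master velocity,
% Z_e: environment impedance.
Z_{to}(s) = \frac{F_h(s)}{V_m(s)},
\qquad \text{perfect transparency:} \quad Z_{to}(s) = Z_e(s).
```

In these terms, the controllers above trade how closely the transmitted impedance matches the environment impedance against the stability margins of the closed loop.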
Graphical Displays
As an alternative to force feedback, the visual modality can be used to display
acquired haptic information to the surgeon via (artificially generated) graphics. This
approach is particularly effective for lump detection because humans are generally better at interpreting large amounts of data graphically than haptically. Moreover, graphical feedback has a significant advantage over force feedback: it does not inject energy into the closed-loop dynamics of the bilateral teleoperator, and thus eliminates many stability issues.
Graphical force feedback can be displayed using a separate display [145] or overlaid in the surgeon's console [52, 85, 122, 130] of a teleoperated RMIS system. Many
researchers have displayed tactile information using a graphical modality, often with
specialized tools such as those described in Section 1.3.1. The obtained tactile information can be categorized into pressure distribution [41, 42, 104, 150, 162], force
distribution [93, 165], and stiffness distribution [167]. The details of [167] and additional research results are given in Chapter 2.
Virtual Fixtures
The accuracy and speed of surgical tasks in RMIS can be improved by the introduction of computer-generated assistive forces, known as virtual fixtures [133]. Virtual
fixtures are typically added to any natural sensory feedback provided from a remote
environment. Virtual fixtures can be generally categorized as either guidance virtual
fixtures (GVFs) or forbidden-region virtual fixtures (FRVFs). As their names suggest,
GVFs assist a human operator to follow desired trajectories while FRVFs keep the
operator from entering into undesired regions. Another category of virtual fixtures
includes a constraint-based approach, which does not distinguish GVFs and FRVFs.
First proposed by Funda et al., [47], and then extended by Li, [92], Kapoor, [76],
and their collaborators, this type of virtual fixture uses a least-squares optimization
technique to determine manipulator motions by minimizing other task criteria. The
constraint-based virtual fixtures become quite useful when robot motion is limited
due to spatial constraints, such as inside the chest or abdomen. With the use of virtual fixtures, the operator is able to increase speed and accuracy [101]. An intuitive example of a real fixture akin to a virtual fixture is the use of a ruler (straightedge).
A ruler gives the user physical guidance to draw a straight line in the workspace.
It also physically prevents the user from going into unwanted regions. Rosenberg,
[133], described an example of a virtual fixture for a surgical application, in which
a surgeon makes a precise incision with a scalpel by following desired paths (GVF)
without penetrating below a certain depth (FRVF).
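A penalty-type forbidden-region virtual fixture can be sketched as follows. This is a generic illustration; the function name, the planar boundary, and the gain k_vf are assumptions, not the implementation presented in Chapter 4.

```python
import numpy as np

def frvf_force(tip, boundary_point, normal, k_vf=500.0):
    """Forbidden-region virtual fixture (penalty type).
    tip: tool tip position; boundary_point and normal define the local
    plane of the forbidden-region boundary (normal points toward the
    allowed region). Returns the repulsive force vector on the tip."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    penetration = np.dot(np.asarray(boundary_point) - np.asarray(tip), n)
    if penetration <= 0.0:
        return np.zeros(3)          # tip is in the allowed region: no force
    return k_vf * penetration * n   # spring force pushing the tip back out

# Boundary is the z = 0 plane; the forbidden region is z < 0.
print(frvf_force([0.0, 0.0, 0.01], [0, 0, 0], [0, 0, 1]))   # [0. 0. 0.]
print(frvf_force([0.0, 0.0, -0.002], [0, 0, 0], [0, 0, 1])) # [0. 0. 1.]
```

The force is zero everywhere in the allowed region and grows linearly with penetration depth, so the operator feels a stiff virtual wall at the forbidden-region boundary, analogous to the scalpel depth limit in Rosenberg's example.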
Many RMIS research studies have demonstrated the effectiveness of GVFs [19, 39,
87, 92], FRVFs [125], and constraint-based virtual fixtures [76, 92]. Bettini et al., [19],
designed and implemented a vision-assisted control method for human-machine cooperative manipulation tasks at millimeter to micrometer scales. Computer vision was
employed to detect a desired 2D path and provide force feedback to guide a user along
a path. Dewan et al., [39], also used a stereo imaging system to help a user follow the
spherical surface of an environment for applications in retinal microsurgery. Li et al.,
[92], used registered CT models to generate constraint-based guidance virtual fixtures
for endoscopic sinus surgery. Kragic et al., [87], created guidance virtual fixtures for
microsurgical applications by integrating a real-time segmentation of user motions
using continuous hidden Markov models (HMMs) into human-machine collaborative
systems. Park et al., [125], implemented FRVFs for robotic cardiac surgery. With the
ZEUS surgical robot system [152], they conducted in vitro experiments to evaluate
human performance of a blunt dissection task with and without the FRVFs. Abbott
and Okamura, [13], studied the stability of FRVFs in a general class of telemanipulator
control architectures. They derived theoretical stability conditions and experimentally validated the results. Kapoor, [76], in collaboration with other researchers, has
created a software library of virtual fixtures, which could be applied to either teleoperative or cooperative tasks as well as impedance-type or admittance-type devices.
Some commercially available surgical robots, such as the RIO Robotic Arm Interactive Orthopedic System (MAKO Surgical Corp., Lauderdale, Florida, USA) [8], and
the ROBODOC Surgical System (Curexo Technology Corporation, Fremont, California, USA) [1], have also used virtual fixtures defined using medical images. In contrast
to those related works, the FRVFs presented in Chapter 4 use an impedance-type manipulator and are created using a 3D surface of an object automatically reconstructed
from stereo vision information.
1.3.3
customized PHANTOM manipulators are installed at the surgeon's side, and four
multi-purpose robot manipulators are mounted on the ceiling at the patient's side. All
the surgical instruments have force sensing capabilities using strain gauges. Tadano
and Kawashima, [144], developed a pneumatically-actuated master-slave laparoscopic
surgical system. To achieve bilateral teleoperation, they estimated tool-environment
interaction force using pneumatic cylinders.
and evaluated the performance of a specialized Telerobotic Fundamentals of Laparoscopic Surgery Block Transfer task using Skype video. The University of Washington
and Technische Universität München also collaborated to establish multimodal telepresence using a Session Initiation Protocol. They used a standard Internet session
and transport protocols to share audio, video, and haptic feedback information [83].
Although the concept of the Robotic Surgery Open Interface (RSOI) described in Chapter 4 is similar to those prior works, its main focus lies in transferring haptic and
augmented-reality information to enhance the surgeon's performance.
1.4
Dissertation Outline
with ex vivo tissues to identify the locations of artificial lumps embedded in porcine
liver and kidney. We perform a quantitative analysis of the experimental results.
In Chapter 3, we analyze the stability and transparency of a bilateral teleoperation
system considering estimated environment models. We first review a bilateral teleoperator architecture, including a master-slave two-port network and a general block
diagram of a four-channel teleoperator. We propose a three-channel controller that
employs knowledge of an estimated environment model. When all the impedance estimates are correct, it achieves perfect transparency. We analyze robust stability and
position and force tracking performance of our controller and an existing controller.
In Chapter 4, we present the development of an open platform for augmented
reality and haptic interfaces for robot-assisted surgery. We provide an overview of
the robotic surgery open interface (RSOI), describing hardware and software frameworks and communication architecture. We then address reconstruction of the 3D
surface of an object and create point-cloud-based forbidden-region virtual fixtures.
The feasibility of the open interface is tested in two bilateral teleoperation scenarios
performed on artificial prostate tissue: palpation to detect a lump hidden in the artificial prostate and point-cloud-based forbidden-region virtual fixtures to prevent the
patient-side manipulator from entering unwanted regions of the workspace.
We conclude our studies and findings in this dissertation in Chapter 5. We also
address future directions of the work.
Chapter 2
Real-Time Tissue Property Estimation and Graphical Overlay
2.1
Introduction
However, in traditional and robot-assisted MIS, sensing and displaying such information to the surgeon is technically challenging. Compared to force feedback, there has
been relatively little work done to display spatially distributed tissue properties for
RMIS.
In this work, we propose a real-time graphical overlay technique to display the
location of hard objects hidden in soft materials. Our approach is to estimate the
mechanical properties of tissues using recursive least-squares (RLS) estimators and
simultaneously overlay the stiffness distribution on the surgeon's visual display. Our proposed technique may also be practical to implement, since simple force-sensing
instruments designed specifically for palpation may be easier to develop than the force-sensing dexterous manipulation instruments required for conventional force feedback.
2.1.1
Contributions
The main contributions of this study are (1) validation of a mathematical tissue
model appropriate for real-time mechanical property estimation during teleoperation
and (2) use of the model and teleoperated robot to create a real-time graphical overlay
to represent tissue stiffness, thus enabling an operator to identify an invisible hard
inclusion inside a soft material.
To choose an accurate mathematical model of the tissue, we analyzed experimental
tool-environment interaction data and compared seven candidate models. By postprocessing the data using the least-squares method, we evaluated the model accuracy
based on the force estimation error in both self-validation and cross-validation. Since
we aim to model real tissues, which are inhomogeneous and complex, both validations are important to mathematically approximate tissue dynamic behavior. The
Hunt-Crossley model was chosen because of its accuracy and inclusion of stiffness
information that distinguishes hard objects from the soft materials.
We also developed an online graphical overlay technique that displays stiffness
distribution based on estimated tissue properties. A hue-saturation-lightness (HSL)
representation was used to overlay a semi-transparent colored stiffness map on the
environment image. Hue corresponds to a stiffness value and saturation is calculated
considering the distance from a palpated point. When multiple circles overlap in the
stereo display, hue is blended to make a continuous colored map. Also, saturation
values are summed to increase reliability at the palpated point. This procedure
is repeated in real time as new data are added. As a result, we have achieved a
semi-transparent, colored stiffness map that displays the location of a hard inclusion,
without obscuring the surgeon's view of the tissue or surgical instrument.
This work improves upon existing techniques by (1) enabling real-time mechanical property estimation and display, (2) providing an intuitive, 3D augmented reality
display of tissue mechanical properties, (3) basing the mechanical property estimation on a validated model, (4) performing experiments with ex vivo tissues, and (5)
evaluating system performance with quantitative metrics.
This work was performed in collaboration with other researchers at the Johns
Hopkins University. Louis Whitcomb advised the author on model validation (Section 2.3). Balazs Vagvolgyi co-designed and implemented the visual system and
graphical overlay described in Section 2.4. Alex Vacharat made artificial heart tissues,
and Vasiliki Koropouli contributed to the development of 3D surface reconstruction.
2.1.2
Outline
In this chapter, we report the results of two sets of experiments using (1) artificial
heart tissues and (2) ex vivo tissues (porcine liver and kidney). In Section 2.2, we
briefly describe a teleoperated robotic system used for both experiments. In Section 2.3, we compare seven dynamic models of (artificial and biological) tissues. The
accuracy of the models is analyzed in both self-validation and cross-validation based
on preliminary palpation experiments. An appropriate model is selected based on
the accuracy (low force estimation errors) and usefulness (a parameter to indicate
hardness). Section 2.4 describes a stereo camera system of the experimental setup
and the graphical overlay technique. In Section 2.5, we address real-time palpation
experiments with the artificial tissue to detect the locations of an artificial calcified
artery. In Section 2.6, we present real-time palpation experiments with the ex vivo
tissue to detect the locations of an artificial lump embedded in porcine liver or kidney.
Figure 2.1: Custom version of the da Vinci Classic Surgical System [97]. The master
manipulators and the 3D vision display system are located on the right, and the
patient-side manipulators and a stereo camera are on the left.
2.2
Apparatus
Throughout this study, we use a custom version of the da Vinci Surgical System,
shown in Figure 2.1. It consists of four components: master manipulators, a stereo
camera, a 3D vision system, and patient-side manipulators. The manipulators and
the vision system are provided by Intuitive Surgical, Inc. [7], and a custom control
system has been developed at the Johns Hopkins University. A position-position
controller is used to achieve bilateral telemanipulation. Friction and inertia of the
patient-side manipulator are compensated to improve transparency, but the amount
of force to cancel out the dynamics is limited to make sure the entire teleoperation system is always stable [98, 99, 100]. A user receives force feedback based on a
proportional-derivative (PD) controller, but this does not affect the results of preliminary experiments or real-time experiments since all we need for estimation are motion
data of the patient-side manipulator and tool-environment interaction force. Since
palpation is a simple task, we use only the right arms of the master and patient-side manipulators. More details about the teleoperation system and the control architecture
are given in [97].
2.3
Tissue Models

Table 2.1: The seven candidate mathematical models of the tool-environment interaction.

  Model 1 (Kelvin-Voigt):              \hat{f} = a_0 + k x + b \dot{x}
  Model 2 (mass-damper-spring):        \hat{f} = a_0 + k x + b \dot{x} + m \ddot{x}
  Model 3 (2nd-order polynomial):      \hat{f} = a_0 + a_1 x + a_2 x^2
  Model 4 (3rd-order polynomial):      \hat{f} = a_0 + a_1 x + a_2 x^2 + a_3 x^3
  Model 5 (4th-order polynomial):      \hat{f} = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4
  Model 6 (2nd-order polynomial
           + velocity-dependent):      \hat{f} = a_0 + a_1 x + a_2 x^2 + b \dot{x}
  Model 7 (Hunt-Crossley):             \hat{f} = a_0 + k x^n + b x^n \dot{x}
For both artificial and ex vivo tissues, we compare the seven mathematical models
listed in Table 2.1. \hat{f} is the estimated interaction force between the tool and the
environment, and x, \dot{x}, and \ddot{x} are the position, velocity, and acceleration
of the tool, respectively. The remaining terms, a_i, k, b, m, and n, are unknown
parameters to be estimated.
manipulator was restricted to up and down motions along the prismatic joint of the
patient-side manipulator, so the system has one degree of freedom. A force sensor was
placed underneath an experimental setup (in artificial tissue experiments) or mounted
on a palpation tool (in ex vivo tissue experiments). To start and end recording
palpation data, we set a force threshold to detect contact between the palpation tool
and the tissue. To account for the noise level of the force sensor, the tool was
considered to have made contact with the environment when the product of the previous
and current force samples along the prismatic joint of the patient-side robot exceeded
0.002 N². Probed locations differed from trial to trial to obtain a wide range of data. Position, velocity,
and acceleration of the tool were computed from encoder readings of the da Vinci
patient-side manipulator, and applied force to a tissue was recorded by the force
sensor.
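The contact-detection rule above can be sketched as follows; the 0.002 N² threshold is taken from the text, while the function and variable names are illustrative:

```python
def in_contact(prev_force, curr_force, threshold=0.002):
    """Contact detection along the prismatic joint.

    The tool is considered in contact with the environment when the
    product of the previous and current force samples exceeds the
    threshold (in N^2). Zero-mean sensor noise tends to flip sign
    between samples, so its product stays below the threshold.
    """
    return prev_force * curr_force > threshold

# A sustained 0.5 N reading indicates contact; noise-level readings do not.
```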
Model Validation
To verify model accuracy, batch post-processing was employed, and unknown
parameters and interaction force for each trial and model were estimated. Models 1
to 6 are linear in unknown parameters, and the linear least-squares method was used.
\hat{f}(t) = \phi^T(t) \hat{\theta},    (2.1)

with the least-squares estimate

\hat{\theta} = (X^T X)^{-1} X^T Y,    (2.2)

where X = [\phi_1, \phi_2, \ldots, \phi_n]^T and Y = [f_1, f_2, \ldots, f_n]^T.
Model 7 is nonlinear in the unknown parameters, and its estimated force is written as

\hat{f} = g(\phi, \hat{\theta}),    (2.3)

where \phi = [x, \dot{x}]^T and \hat{\theta} = [\hat{a}_0, \hat{k}, \hat{b}, \hat{n}]^T.
At the k-th iteration of the resulting Gauss-Newton procedure, the estimated parameters are

\hat{\theta}_{k+1} = \hat{\theta}_k + \Delta\theta,    (2.4)

\Delta\theta = (J^T J)^{-1} J^T \Delta Y,    (2.5)

where J_{ij} = \partial g(\phi, \theta) / \partial \theta_j evaluated at (\phi_i, \hat{\theta}_k),
\Delta Y = [\Delta y_1, \Delta y_2, \ldots, \Delta y_n]^T, and \Delta y_i = f_i - g(\phi_i, \hat{\theta}_k).
Nonlinear least squares has no closed-form solution and requires initial values for
unknown parameters. We set the initial values, 0 , by trial and error to avoid local
minima (sometimes observed when 0 was a zero vector). Throughout this chapter,
the units of force and position are Newton and centimeter, respectively. The iteration
number k started from 0 and increased until the norm of the increment parameter
vector was less than 0.001.
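The batch procedure can be sketched as follows, using synthetic palpation data and illustrative parameter values; `fit_kelvin_voigt` is the linear estimate of (2.2) applied to Model 1, and `fit_hunt_crossley` is the iterative update of (2.4)-(2.5) applied to Model 7:

```python
import numpy as np

def fit_kelvin_voigt(x, xdot, f):
    """Linear least squares for Model 1: f = a0 + k*x + b*xdot."""
    X = np.column_stack([np.ones_like(x), x, xdot])   # regressor matrix
    theta, *_ = np.linalg.lstsq(X, f, rcond=None)     # (X^T X)^-1 X^T Y
    return theta                                       # [a0, k, b]

def fit_hunt_crossley(x, xdot, f, theta0, tol=1e-3, max_iter=100):
    """Gauss-Newton iteration for Model 7: f = a0 + k*x^n + b*x^n*xdot."""
    a0, k, b, n = theta0
    for _ in range(max_iter):
        xn = x ** n
        g = a0 + k * xn + b * xn * xdot
        # Jacobian of g with respect to [a0, k, b, n]
        J = np.column_stack([
            np.ones_like(x),
            xn,
            xn * xdot,
            (k + b * xdot) * xn * np.log(x),
        ])
        dtheta, *_ = np.linalg.lstsq(J, f - g, rcond=None)
        a0, k, b, n = np.array([a0, k, b, n]) + dtheta
        if np.linalg.norm(dtheta) < tol:  # stop when the increment is small
            break
    return np.array([a0, k, b, n])

# Synthetic sinusoidal palpation (position in cm, force in N, as in the text):
t = np.linspace(0.0, 2.0, 400)
x = 0.5 + 0.4 * np.sin(np.pi * t)        # indentation depth, always > 0
xdot = 0.4 * np.pi * np.cos(np.pi * t)   # analytic velocity
f = 0.5 + 6.0 * x ** 1.6 + 0.1 * (x ** 1.6) * xdot
theta = fit_hunt_crossley(x, xdot, f, theta0=[0.4, 5.0, 0.05, 1.5])
```

On exact synthetic data the iteration recovers the generating parameters; on measured data it requires the trial-and-error choice of initial values described above.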
In self-validation, by using either linear or nonlinear least-squares estimate, we
can estimate the unknown parameters for each model and the interaction force, f, in
(2.1) or (2.3). Thus, we can compute force estimation errors for each time step as
\Delta y_i = f_i - \hat{f}_i,    (2.6)

where f_i is the interaction force measured by the force sensor. The model accuracy is
evaluated based on the averaged force estimation error.
In self-validation, we can obtain a set of estimated parameters from a single trial
for each model. Cross-validation uses those parameters to compute estimated interaction forces for the other trials. Therefore, if there are n preliminary palpation trials,
n force estimation errors will be computed in self-validation, and n2 force estimation
errors in cross-validation.
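The bookkeeping behind these error counts can be sketched generically; `fit` and `predict` stand in for the least-squares routines of the previous section, and the mean-absolute-error metric is an illustrative choice:

```python
import numpy as np

def validation_errors(trials, fit, predict):
    """Self- and cross-validation force errors over n palpation trials.

    trials  : list of (phi, f) regressor/force pairs, one per trial
    fit     : (phi, f) -> estimated parameter vector theta
    predict : (phi, theta) -> estimated force
    Returns the n self-validation errors and the n-by-n matrix of
    cross-validation errors (mean absolute force error per trial).
    """
    thetas = [fit(phi, f) for phi, f in trials]
    n = len(trials)
    self_err = np.array([
        np.mean(np.abs(trials[i][1] - predict(trials[i][0], thetas[i])))
        for i in range(n)
    ])
    # Parameters from trial i evaluated against the data of every trial j
    cross_err = np.array([
        [np.mean(np.abs(trials[j][1] - predict(trials[j][0], thetas[i])))
         for j in range(n)]
        for i in range(n)
    ])
    return self_err, cross_err
```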
2.3.1
In this subsection, specific settings and conditions for artificial tissue experiments
are explained.
Figure 2.2: The diameter and thickness of an artificial heart tissue are approximately
60 mm and 18.5 mm, respectively. A coffee stir straw with diameter of 4 mm, which
mimics a calcified artery, is embedded at the depth of 5 mm from the surface. (Left)
a model used for an experiment and (Right) a transparent one to show the artificial
calcified artery.
Figure 2.3: (a) Palpation tool attachment, with labeled parts: da Vinci instrument, instrument extension, plastic disc, artificial heart tissue, plastic base, and force/torque sensor.
Figure 2.3 shows an instrument extension with a plastic disc (Figure 2.3(a)) attached at the tip of a surgical instrument. The diameter of the artificial calcified
artery was 4 mm, and the disc was 10 mm in diameter. Due to the size and flatness
of the disc, the tool does not slide off the artery when force is applied. The instrument was mounted on a patient-side manipulator of the da Vinci robot. As shown in
Figure 2.3(b), the artificial heart tissue was placed on a plastic base mounted on a
Nano17 transducer (ATI Industrial Automation, Apex, North Carolina, USA).
Five palpation trials were performed each for the artificial soft tissue and the
artificial calcified artery. In each trial, a user moved down the instrument through
teleoperation, and the palpation was initiated as soon as the contact was detected
by the force sensor and the force threshold described earlier. One trial consisted of
several palpations on the same location without losing contact with the environment.
Model Validation
Table 2.2 and Figure 2.4 summarize the mean and the standard deviation of the
force estimation errors for each model. Since there are five sets of palpation data
for the artificial soft tissue and the artificial calcified artery, the averaged values are
given (i.e., average of 5 sets of data in self-validation and 25 sets in cross-validation).
The lower the mean and the standard deviation of the force estimation errors a model
yields, the more accurate it is.
Table 2.2: Comparison of Mean and Standard Deviation of Force Estimation Errors of Artificial Heart Tissues

                                   Self-Validation                    Cross-Validation
                              Soft Tissue  Calcified Artery      Soft Tissue  Calcified Artery
  Model                       Mean   Std   Mean   Std            Mean   Std   Mean   Std
  1 Kelvin-Voigt              0.149  0.120  0.478  0.389         0.375  0.163  0.645  0.465
  2 mass-damper-spring        0.135  0.108  0.445  0.352         0.364  0.151  0.637  0.438
  3 2nd-order polynomial      0.095  0.104  0.291  0.275         0.369  0.112  0.439  0.288
  4 3rd-order polynomial      0.091  0.102  0.256  0.285         0.392  0.140  0.448  0.323
  5 4th-order polynomial      0.090  0.102  0.232  0.271         0.406  0.170  0.514  0.342
  6 2nd-order polynomial
    + velocity-dependent      0.074  0.070  0.234  0.198         0.391  0.134  0.464  0.322
  7 Hunt-Crossley             0.078  0.081  0.245  0.235         0.369  0.104  0.431  0.271
Figure 2.4: Mean and standard deviation of force estimation errors for the seven mathematical models of the artificial tissue: (a) self-validation and (b) cross-validation, plotted against model number for the artificial soft tissue and the artificial calcified artery.
In self-validation, the higher the order of a polynomial function model is, the
lower errors the model yields. Models 4-7 seem to characterize the dynamics of both
artificial soft tissue and artificial calcified artery well, while Models 1-3 do not. In
cross-validation, Models 3, 4, 6, and 7 are superior to the others both for the artificial
soft tissue and calcified artery. Thus, Models 1 and 2 are inferior to the others in
both validations. Model 5 is good in self-validation but not in cross-validation because
the model captures signal noise as well as the underlying interaction data when estimating
the unknown parameters (overfitting). Figure 2.5 shows sample plots of interaction
data of the tool and the environment, and Figure 2.6 shows a comparison of the measured
force and the force estimated with the identified parameters for Models 1 and 7, based
on one trial of the artificial calcified artery data.
From both validation tests, Models 6 and 7 show better performance in force
Figure 2.5: Sample plots of palpation on (a) artificial soft tissue and (b) artificial calcified artery showing (top) force vs. elapsed time and (bottom) force vs. displacement.
Model 7 is used to calculate the estimated force.
Figure 2.6: Comparison of estimated force plots of (a) force vs. elapsed time and (b)
force vs. displacement from one of the artificial calcified artery data. Model 1 yields
more force errors than Model 7.
Table 2.3: Estimated Parameter Values of Models 6 and 7 for the Artificial Heart Tissues

                             Model 6                        Model 7
  Trial             a_0    a_1    a_2    b         a_0    k       b      n
  Soft Tissue
  1                 0.155  3.697  4.588  0.258     0.708  5.518   0.090  1.679
  2                 0.519  0.427  6.687  0.286     0.668  5.544   0.115  1.570
  3                 0.028  5.679  2.461  0.202     0.416  5.741   0.198  1.391
  4                 0.036  4.867  2.581  0.208     0.327  4.960   0.134  1.686
  5                 0.373  3.302  5.654  0.306     1.025  5.984   0.211  1.587
  Calcified Artery
  1                 0.352  3.432  6.984  0.294     0.178  8.687   0.200  1.849
  2                 0.210  4.936  5.701  0.228     0.249  8.490   0.272  1.934
  3                 0.374  5.374  6.181  0.205     0.702  10.412  0.411  1.613
  4                 0.720  5.087  6.533  0.259     0.458  9.960   0.380  1.622
  5                 0.328  5.963  5.174  0.418     0.770  10.293  0.491  1.750
estimation. We conclude that, for our purpose, Model 7 is the more appropriate
choice. To discriminate the hard inclusions from the soft surrounding materials, we
would like to identify a physically meaningful difference between the two. Table 2.3
and Figure 2.7 summarize the estimated parameter values of Models 6 and 7. While
k of Model 7 shows a consistent difference between the artificial soft tissue and the
artificial calcified artery, no parameter of Model 6 does. Therefore, we choose Model 7,
the Hunt-Crossley model, for the subsequent hard-lump detection experiments.
Figure 2.7: Estimated parameter values of (a) Model 6 and (b) Model 7 for the artificial soft tissue and the artificial calcified artery.
2.3.2
In this subsection, specific settings and conditions for ex vivo tissue experiments
are explained.
Figure 2.9: Artificial lumps made of epoxy are mixed with radio-opaque substances
so that they are visible in an X-ray image. The large lump is for porcine liver and
the small ones are for porcine kidney.
placed on it. For porcine liver, the artificial lumps were glued on the sandpaper,
and the tissue was directly placed to cover them. For porcine kidney, after making
a small incision on the dorsal side of the kidneys with a scalpel, the artificial lumps
were pushed into the tissue. To prevent the palpation tool from sticking to the tissue
surface during palpation, water was perfused over the tissue surface prior to trials.
The force sensor was attached to the end-effector of the patient-side manipulator
instrument
extension
force/torque
sensor
plastic
disc
Figure 2.10: A force sensor is placed on a palpation tool attachment to record tool-environment interaction data.
via an instrument extension, such that the wrist degrees of freedom of the minimally
invasive surgical instrument are bypassed (Figure 2.10). A small rod attached the
force sensor to a 10mm-diameter flat plastic disc, which directly contacts the tissue.
Due to their viscosity, biological tissues show a relaxation effect such that they
have a response that changes with time. The relaxation of the ex vivo tissues degraded the accuracy of the dynamic models because force-displacement data were not
repeatable or consistent. To minimize this effect, one trial in the ex vivo experiments
consisted of only a single palpation. Since one trial was quite short, we performed 50
trials each for the soft tissue and the artificial tumor.
Model Validation
Figures 2.11 and 2.12, as well as Tables 2.4 and 2.5, summarize the mean and the
standard deviation of the force estimation errors for porcine liver and kidney. Since
there are 50 sets of palpation trials each for the soft and hard regions, the mean and
the standard deviation are averaged (50 sets of data in self-validation and 2,500 sets
in cross-validation).
The porcine liver case is analyzed first. In self-validation, the higher the order
of a polynomial function model is, the lower errors the model yields. However, even
the 4th order polynomial function (Model 5) is inferior to the other types of the
models. Models 1 and 2 show better performance for both soft and hard regions
than the polynomial models, but are inferior to Models 6 and 7. In cross-validation,
on the other hand, the two linear models show lower force errors for both soft and
hard regions than any other models, with Models 6 and 7 comparable to them. The
polynomial models, particularly the 4th-order one due to overfitting, performed the
worst in estimating the interaction force. In summary, Models 1 and 2 are good in
cross-validation but much worse than Models 6 and 7 in self-validation, while Models
6 and 7 are comparable to Models 1 and 2 in cross-validation. Thus, either Model
6 or 7 seems appropriate based on both validation tests. The porcine kidney case
shows similar results to the porcine liver, with Models 6 and 7 again showing good
performance.
Figure 2.11: Mean and standard deviation of force estimation errors for the seven mathematical models of porcine liver: (a) self-validation and (b) cross-validation, plotted against model number for the soft tissue and the artificial tumor.
Figure 2.12: Mean and standard deviation of force estimation errors for the seven mathematical models of porcine kidney: (a) self-validation and (b) cross-validation, plotted against model number for the soft tissue and the artificial tumor.
Table 2.4: Comparison of Mean and Standard Deviation of Force Estimation Errors of Porcine Liver

                                   Self-Validation                    Cross-Validation
                              Soft Tissue  Artificial Tumor      Soft Tissue  Artificial Tumor
  Model                       Mean   Std   Mean   Std            Mean   Std   Mean   Std
  1 Kelvin-Voigt              0.121  0.097  0.113  0.109         0.596  0.212  1.010  0.301
  2 mass-damper-spring        0.120  0.096  0.112  0.107         0.597  0.212  1.010  0.302
  3 2nd-order polynomial      0.133  0.097  0.150  0.134         0.781  0.360  1.223  0.477
  4 3rd-order polynomial      0.131  0.096  0.140  0.136         0.748  0.362  1.172  0.520
  5 4th-order polynomial      0.126  0.098  0.131  0.139         0.987  0.610  2.068  1.253
  6 2nd-order polynomial
    + velocity-dependent      0.075  0.061  0.092  0.081         0.686  0.266  1.067  0.321
  7 Hunt-Crossley             0.061  0.048  0.080  0.071         0.692  0.319  1.060  0.356
Table 2.5: Comparison of Mean and Standard Deviation of Force Estimation Errors of Porcine Kidney

                                   Self-Validation                    Cross-Validation
                              Soft Tissue  Artificial Tumor      Soft Tissue  Artificial Tumor
  Model                       Mean   Std   Mean   Std            Mean   Std   Mean   Std
  1 Kelvin-Voigt              0.134  0.119  0.094  0.089         0.782  0.252  0.597  0.208
  2 mass-damper-spring        0.132  0.118  0.093  0.088         0.782  0.253  0.596  0.209
  3 2nd-order polynomial      0.159  0.124  0.096  0.087         0.967  0.383  0.660  0.235
  4 3rd-order polynomial      0.157  0.124  0.092  0.088         0.919  0.381  0.705  0.285
  5 4th-order polynomial      0.148  0.126  0.086  0.089         1.087  0.554  1.660  0.913
  6 2nd-order polynomial
    + velocity-dependent      0.103  0.081  0.065  0.051         0.844  0.273  0.655  0.208
  7 Hunt-Crossley             0.082  0.062  0.055  0.042         0.887  0.354  0.650  0.225
Contrary to the artificial heart tissues used in Section 2.3.1, none of the estimated
parameters of either model showed a clear distinction between the soft and hard
regions. If the indentation velocity is small, the Hunt-Crossley model (Model 7) can
be approximated by f \approx k x^n, so both k and n are related to the hardness of a
palpated point. Due to the inhomogeneity of biological tissues, even palpating the same
location gave different force-displacement curves, and thus a different set of k and
n. We therefore define a stiffness index that relates these two parameters.

Assume we have two curves, f = k_1 x^{n_1} and f = k_2 x^{n_2}, that should give the same
stiffness index if the tissue is palpated up to an indentation depth of x_c. Solving
k_1 x_c^{n_1} = k_2 x_c^{n_2} gives the condition k_1 = k_2 x_c^{n_2 - n_1}. We can generalize
this to find the set of estimated parameters that share the same stiffness index. Given
(\hat{k}, \hat{n}), any pair (k, n) satisfying k x_c^n = \hat{k} x_c^{\hat{n}} has the same stiffness
index, assuming the indentation depth is at most x_c. We can then define the stiffness
index as K = \hat{k} x_c^{\hat{n}}. Figure 2.13 shows the distributions of (\hat{k}, \hat{n}) from the preliminary
palpation experiments with porcine liver, and the maximum, mean, and minimum of
the stiffness index K are overlaid; red dots are for hard regions and blue stars are for
soft regions. We could not find a similar solution for Model 6. Therefore, for the ex
vivo tissues, we use the stiffness index, K, to represent the hardness of a palpated point.
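A minimal sketch of the stiffness index; the depth x_c and parameter values below are illustrative, not values from the experiments:

```python
def stiffness_index(k_hat, n_hat, x_c):
    """Stiffness index K = k_hat * x_c**n_hat for a maximum indentation x_c.

    Any two Hunt-Crossley fits (k1, n1) and (k2, n2) that predict the
    same force at depth x_c share the same index, giving a single
    hardness value per palpated point despite tissue inhomogeneity.
    """
    return k_hat * x_c ** n_hat

# Two fits related by k1 = k2 * x_c**(n2 - n1) agree at x_c (illustrative values):
x_c = 0.8
k2, n1, n2 = 10.0, 1.4, 1.8
k1 = k2 * x_c ** (n2 - n1)
```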
Figure 2.13: Distributions of the estimated (k, n) pairs from the preliminary palpation experiments with porcine liver, plotted as estimated k versus estimated n. Red dots (hard regions) and blue stars (soft regions) are shown with the maximum, mean, and minimum stiffness-index curves overlaid.
2.4
2.4.1
2.4.2
The tissue stiffness map is rendered on the live stereo video as a texture map
projected on the reconstructed 3D surface of a tissue sample. An image of a colored
circle is generated for each palpation. The texture map has the same resolution as the
surface. Therefore, there is one stiffness value assigned for each pixel in the region of
interest. The texture is an image in hue, saturation, and lightness (HSL) color space
where individual H, S, and L values are computed from the estimated stiffness of the
palpated points. Since the data are analyzed in real time, while the tissue is being
palpated, the estimated stiffness and the palpated position data are simultaneously
transferred to a vision computer and displayed as a semi-transparent color graphical
overlay at a rate of 30 Hz. With transparency of 50%, the camera images are clearly
visible behind the overlaid image. Thus, the colored stiffness map does not obscure
the user's view of the tool or the surface features of the tissue.
Hue corresponds to stiffness, and the color ranges from green, passing through
yellow, to red (Figure 2.14(a)). Based on the preliminary palpation experimental
results, we define the range of stiffness (k for the artificial tissues and K for the
biological tissues). Saturation value is computed for each colored circle considering
the distance from the center of a palpated point to each pixel on the camera images.
For graphical purposes, we employ a weighted Gaussian probability density function,
h(x) = exp(-(x - \mu)^2 / \sigma), to normalize the distance range between 0 and 1.
This renders a blurred pixel image at the edge of the circle. The term (x - \mu)^2 is the
squared distance between the center of the palpated point and the surrounding region,
and the value of \sigma is chosen such that the size of the overlaid circle corresponds
to that of the palpation disc. A simple summation interpolation technique is used when
two or more circles overlap, as shown in Figure 2.14(b); if the sum exceeds 1, it is cut
off to 1. The lightness value is fixed at 0.5 for all pixels to show the overlay in color.
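The saturation channel can be sketched as below; the image size, the width sigma, and the palpation pixel coordinates are illustrative, not values from the text:

```python
import numpy as np

def saturation_map(shape, centers, sigma=40.0):
    """Saturation channel of the stiffness overlay.

    Each palpated point contributes a Gaussian-weighted circle,
    exp(-d^2 / sigma), of saturation around its pixel position;
    overlapping circles are summed and clipped to 1, so repeatedly
    palpated areas saturate fully. sigma is an illustrative width
    standing in for the palpation-disc size described in the text.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    sat = np.zeros(shape)
    for cx, cy in centers:
        d2 = (xs - cx) ** 2 + (ys - cy) ** 2  # squared pixel distance to center
        sat += np.exp(-d2 / sigma)            # blurred circle edge
    return np.clip(sat, 0.0, 1.0)             # cut off sums above 1

sat = saturation_map((64, 64), [(20, 20), (24, 20)])
```

Hue (stiffness color) and the fixed lightness of 0.5 would be combined with this channel per pixel before blending with the camera image.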
2.4.3
In artificial tissue experiments, the calibration between the camera frame and
the overlay (environment) platform frame was performed using a rigid registration
Figure 2.14: Our HSL mapping uses transparency of 50% so that the camera images
are clearly visible with the overlay. (a) Hue represents stiffness, changing from green
to red. (b) Saturation is calculated using a weighted probability density function
(pdf) of Gaussian distribution, and changes from gray to color. When two or more
circles overlap, both saturation values are summed.
between the rigid body defined by the platform and the 3D point cloud reconstructed
from the stereo video. There were 11 fiducial markers of known geometry on the
experimental platform (Figure 2.3(b)). Prior to real-time experiments, those points
were selected manually with a mouse on both the left and right camera images.
In order to overlay the stiffness map precisely on the camera images, we first transformed the tool-tip position from the robot coordinates to the camera coordinates,
then from the camera coordinates to the platform coordinates. Since the platform texture was parallel to the x-z plane, we could project the transformed tool-tip position
onto the texture surface.
2.4.4
In ex vivo tissue experiments, the calibration between the camera frame and the
overlay platform frame was performed only for validation purposes. In order to evaluate the accuracy of the stiffness map after a real-time palpation experiment, the exact
locations of lumps need to be known. We used a C-arm (9600 mobile imaging system;
OEC Medical Imaging Systems, Inc., Salt Lake City, Utah, USA) to take X-ray images of the tissues. Those images were used as the ground truth of the lump locations
and compared to those detected by the stiffness map. Visible markers of known geometry were placed on the experimental platform (Figure 2.15) and manually selected
by clicking with a mouse on both left and right images. Given the image coordinates
of the markers, we reconstructed their 3D Cartesian positions in the camera frame,
then the resulting point cloud was registered to the known geometry of the markers
to determine the transformation between the two coordinate systems using Arun's
method [18]. This calibration was used later to transform the measurement results
to X-ray image coordinates. Figure 2.16 shows an X-ray image of the experimental
platform with visible fiducial markers, on which two artificial tumors and porcine
kidney were placed.
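Arun's method is the standard SVD-based solution of this rigid registration problem; a self-contained sketch with synthetic correspondences follows (the point coordinates are random placeholders, not the actual fiducial geometry):

```python
import numpy as np

def arun_registration(P, Q):
    """Least-squares rigid registration via Arun et al.'s SVD method.

    Finds rotation R and translation t minimizing ||R @ P_i + t - Q_i||
    over corresponding 3D point sets P and Q (n-by-3 arrays), as used
    here to register the reconstructed marker point cloud to the known
    marker geometry.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known transform from noiseless correspondences:
rng = np.random.default_rng(0)
P = rng.random((8, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = arun_registration(P, Q)
```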
In order to reconstruct the 3D geometry of the anatomical surface observed from
the two camera views, we used the computational stereo algorithm proposed by
Figure 2.15: The experimental platform consists of the outer frame and the exchangeable inner base. Eight fiducial markers are embedded in the outer frame and visible
on camera images as well as X-ray images.
Figure 2.16: In an X-ray image, locations of the fiducial markers and the artificial
lumps are clearly visible.
Vagvolgyi et al. [154]. This algorithm has been optimized for extracting geometry from anatomical targets in real time. The algorithm uses a 2D version of dynamic programming to perform global optimization on a cost function that contains smoothness and similarity constraints. Other stereo algorithms described in the literature focus on scan-line (one-dimensional) dynamic programming [31], hierarchical
methods [156], multiple smoothness constraints [63], and graph cuts [86]. However,
most real-time versions make use of area-matching techniques [24] that can be easily
parallelized.
Computational stereo algorithms generate a disparity map from which one can
reconstruct the 3D position of the visible objects measured in the camera frame. The
algorithm used in this study generates a disparity for each image pixel in the region
of interest, and then the corresponding 3D positions form a point cloud topology that
can be interpreted as a continuous 3D surface representing the visible objects. In our
case, the 3D surface has been visualized as a triangle mesh, where each pixel was
represented by a rectangle built from two triangles.
Conveniently, the resulting 3D surface will be generated directly in the camera
coordinate frame. Thus, the relationship between the patient-side manipulator and
the anatomical surface can be directly calculated using the calibrations described
earlier.
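The back-projection from a disparity map to a camera-frame point cloud follows the standard rectified-stereo relations Z = f b/d, X = (u - cx)Z/f, Y = (v - cy)Z/f. The sketch below assumes rectified images with known focal length f, baseline b, and principal point (cx, cy); the interface is illustrative.

```python
import numpy as np

def disparity_to_points(disparity, f, b, cx, cy):
    """Convert a disparity map (pixels) to an HxWx3 array of camera-frame
    points; pixels with non-positive disparity become NaN."""
    v, u = np.indices(disparity.shape, dtype=float)
    d = np.where(disparity > 0, disparity, np.nan)
    Z = f * b / d                  # depth from disparity
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.dstack([X, Y, Z])
```

Each valid pixel yields one 3D point, so the result can be triangulated directly into the surface mesh described above.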
2.5
Using the experimental setup and techniques presented so far, real-time palpation
experiments with artificial heart tissues were conducted. The goal was to detect the
artificial calcified artery hidden in the artificial tissue.
2.5.1
Estimation Technique
We employed the RLS method to estimate unknown parameters of the Hunt-Crossley model in real time. Due to the fast parameter convergence and accuracy of RLS, even one palpation may be enough for the unknown parameters to converge [166]. Since the Hunt-Crossley model is nonlinear in its unknown parameters, the parameters are decoupled so that two linear RLS estimators can run in parallel [40]: the first estimates a0, k, and λ, and the second estimates the exponent n. Both estimators are interconnected via feedback. This is an ad-hoc solution, and the initial values have to be close to the true ones to facilitate parameter convergence [40]. We added a position offset, a0, to the original method presented in [40]. At each time step, the parameter vector is updated as:
θ̂n = θ̂n−1 + Ln (yn − φnᵀ θ̂n−1) ,   (2.7)

Ln = Pn−1 φn (1 + φnᵀ Pn−1 φn)⁻¹ ,   Pn = (I − Ln φnᵀ) Pn−1 ,   (2.8)

where, for the second estimator, y = log( f / (k̂ + λ̂ẋ) ), φ = log(x − â0), and θ = n.
2.5.2
Method
A human operator was seated at the master console, looking at the stereo display,
and teleoperated the patient-side manipulator to detect the locations of the artificial calcified artery using the stiffness map. We employed the same artificial heart
tissue, as well as artificial calcified artery, used in the preliminary experiments in Section 2.3.1. As soon as the contact between the palpation tool and the artificial tissue
was detected, the RLS estimators automatically started estimating the parameters, and a color representation of the estimated stiffness, k̂, was overlaid on the stereo display at the master console, as described in Section 2.4.2. To avoid the effects of tremor
of the tool on the graphical overlay, the position of the palpated point was fixed at
the contact. It was released when the tool lost contact with the environment. The
parameters of the RLS estimators were all initialized at every release.
2.5.3
Results
Figure 2.17 shows four screenshots taken during a trial. As soon as contact between the tool and the artificial tissue was detected, a semi-transparent colored circle was displayed on the stereo monitor of the surgeon's console. Figure 2.17(a) shows the
result of the first palpation. In Figure 2.17(b), there are several circles displayed due
to random palpation. When a currently palpated area was close to previously palpated areas, the interpolation technique was used to blend the colors by summing their saturation values. One of the intermediate results is shown in Figure 2.17(c). At this
point, one can see a red area that vertically goes through the center. When the entire
surface of the artificial tissue had been palpated, the red region was very distinct, as
shown in Figure 2.17(d).
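The blending rule described above, in which each palpation contributes a distance-weighted Gaussian saturation bump and overlapping contributions are summed, can be sketched as follows; the grid, the Gaussian width sigma, and the clipping to [0, 1] are illustrative assumptions.

```python
import numpy as np

def add_palpation(sat_map, xs, ys, center, sigma):
    """Accumulate a Gaussian saturation bump centered at a palpated point;
    overlapping palpations blend by summation, clipped to [0, 1]."""
    X, Y = np.meshgrid(xs, ys)
    d2 = (X - center[0])**2 + (Y - center[1])**2
    sat_map += np.exp(-d2 / (2.0 * sigma**2))      # weighted Gaussian
    np.clip(sat_map, 0.0, 1.0, out=sat_map)        # cap the summed saturation
    return sat_map
```

Repeated palpation of the same spot therefore saturates rather than overflows, which is what produces the smoothly blended map in Figure 2.17.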
2.5.4
Discussion
Due to the size of the overlaid circle, whose diameter is more than twice the width
of the artificial calcified artery, the red region is approximately 8 mm wide while the
actual artificial calcified artery is 4 mm wide. Once the approximate location of hard objects is identified, however, the size of the overlaid circle (or its Gaussian weighting value) could be reduced to detect the size and location of the artificial calcified artery more precisely. The user was able to palpate the artificial tissue at a velocity of his or her choosing because of the
accuracy of the selected tissue model and the speed of the RLS estimators. Each
Figure 2.17: Four screenshots taken during palpation over the surface of the artificial
heart tissue with an invisible artificial calcified artery. In (d), the red region correctly
indicates that there is a stiff object beneath it.
palpation took approximately 1.0 second on average. In the case of the trial shown
in Figure 2.17, the completion time was about 75 seconds.
2.6
As shown in the previous section, the proposed tissue property overlay technique
worked well for detection of the artificial calcified artery embedded in the artificial
heart tissue. In this section, we extend our method to ex vivo tissue experiments. We
use two types of ex vivo tissues, porcine liver and porcine kidney, and evaluate the
accuracy of the proposed technique.
2.6.1
Estimation Technique
2.6.2
Methods
2.6.3
Analysis
To measure the accuracy of the stiffness map, we compare locations of the artificial lumps segmented from the stiffness map to those identified in the X-ray image.
We manually aligned each pair of the stiffness map and the X-ray image, and overlaid
the two images by matching the eight fiducial markers.

[Table: Liver 1: 12, Liver 2: 13, Liver 3: 16, Liver 4: 19, Liver 5: 20, Liver 6: 18; Kidney 1-6.]

Figure 2.18: To compute two performance metrics, we segment a region of a true positive (A), a false positive (B), and a false negative (C). A true negative is not used in this analysis.

By post-processing the overlaid images, we consider three categories of regions: a true positive (the area A in
Figure 2.18), a region that is predicted positive by the stiffness map and actually
contains the lump area; a false positive (B in Figure 2.18), a region that is predicted
positive but does not contain the lump area; and a false negative (C in Figure 2.18),
a region that is predicted negative but actually contains the lump area. With those
three values, two performance metrics, commonly used in medicine and radiology
(e.g., [170]), can be employed to evaluate the effectiveness of the stiffness map: true
positive rate (TPR) and positive predictive value (PPV). A true negative is not used
in this analysis.
TPR is the fraction of the true positives out of the positives (i.e., the percentage
of the area of the artificial lump that has been detected by the stiffness map):
TPR = true positive / (true positive + false negative) = A / (A + C) .
If TPR is 1, it indicates that all the lump area has been detected by the stiffness map.
This performance metric is useful for removal of a tumor because surgeons want to
make sure that they have cut out all the cancerous tissues during a procedure.
PPV is the fraction of the true positives out of the summation of the true positives
and the false positives (i.e., the percentage of the area of the artificial lump that is
located inside the stiffness map):
PPV = true positive / (true positive + false positive) = A / (A + B) .
If PPV is 1, it indicates that the area predicted by the stiffness map always lies inside the artificial lump area. This metric is useful for biopsy because PPV reveals how likely surgeons are to reach the tissue abnormality during a biopsy.
We need to set a stiffness (or color) threshold to define the regions, A and B,
that are segmented from the overlaid images. Therefore, both of the performance
metrics vary depending on the threshold value. They also depend on the choice of
the stiffness index range used in the real-time experiments. Ideally, TPR becomes
100% for some threshold and PPV is always 100%.
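Given boolean masks of the thresholded stiffness map and the X-ray lump region, both metrics reduce to pixel counting. A sketch under the assumption that the segmented regions are represented as boolean arrays:

```python
import numpy as np

def tpr_ppv(predicted, lump):
    """TPR = A/(A+C) and PPV = A/(A+B) from boolean masks: 'predicted' is
    the thresholded stiffness map, 'lump' the X-ray ground truth."""
    A = np.sum(predicted & lump)     # true positive area
    B = np.sum(predicted & ~lump)    # false positive area
    C = np.sum(~predicted & lump)    # false negative area
    tpr = A / (A + C) if A + C > 0 else float('nan')
    ppv = A / (A + B) if A + B > 0 else float('nan')
    return tpr, ppv
```

Sweeping the stiffness threshold that defines `predicted` traces out curves like those in Figure 2.21.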
An additional metric, the distance between the center of the lump detected in the
X-ray image and the centroid of the predicted regions in the stiffness map, was also
computed.
Figure 2.19: Four screenshots taken from the left camera during palpation over the
surface of the porcine liver with an invisible artificial lump. In (d), the red region
correctly indicates that there is a hard object beneath it.
2.6.4
Results
Similar to the experiments with the artificial heart tissue, Figure 2.19 shows four
screenshots taken during a trial. Figure 2.19(a) shows the result of the first palpation.
In Figure 2.19(b), there are approximately 15 circles displayed from palpations at
different locations. An intermediate stage is shown in Figure 2.19(c). At this point,
one can see a red region developing, indicating an area of higher stiffness. When the
entire surface of the tissue had been palpated, the red region was very distinct, as
shown in Figure 2.19(d).
2.6.5
After performing 6 real-time palpation trials each for porcine liver and porcine
kidney, we evaluated the accuracy of the stiffness map. After registering a stiffness
map and an X-ray image of each trial, we segmented the artificial lump area by a
Figure 2.20: The location of the artificial lump is overlaid by a dashed line on the
corresponding stiffness map of each trial for porcine liver (top) and porcine kidney
(bottom). The areas surrounded by the dashed line are used as the ground truth of
the artificial lump location.
dashed line, as shown in Figure 2.20. The regions surrounded by the dashed line are thus the artificial lumps detected in the X-ray images, which were used as the ground truth of the lump locations.
We computed TPR and PPV by changing stiffness thresholds (ranging from Kmax
to Kmin ), as shown in Figure 2.21. The thick line is a mean value of the six trials
for both liver and kidney, and a 95% confidence interval is based on the standard
deviation of those six trials.
Figure 2.22 shows the distance between the center of the artificial tumor detected
in X-rays and the centroid of the predicted regions in the stiffness map for each liver
and kidney sample for different stiffness thresholds.
Figure 2.21: Average TPR and PPV (lines) with 95% confidence interval (shaded)
for (a) porcine liver and (b) porcine kidney.
Figure 2.22: Distance between the center of the artificial tumor and the centroid of
the predicted regions in the stiffness map for (a) porcine liver and (b) porcine kidney.
2.6.6
Discussion
The performance of the system was exemplary in terms of real-time acquisition and display of models and the intuitiveness of the stereo graphical overlay of the stiffness map. The tissue property overlay technique was quite intuitive for the operator to use, since it only requires a simple up-and-down motion. The only caution for the operator was the time required for estimation. In the current settings, as soon as contact is detected, a colored circle is overlaid on the camera images. This does not mean that estimation is done. Prior work [166] indicates that the RLS estimates take up to 0.6 seconds for the estimated parameters to converge. Therefore, the user has to hold the manipulator for a short time during the contact. Alternatively, we could wait to display the colored circle until the increment of the norm of the estimated parameter vector, θ̂,
is under a threshold. Furthermore, in the ex vivo tissue experiments, the motion of
the instrument had to be very slow in order to minimize the effects of tissue relaxation
for more accurate estimation results. On average, a single palpation took about 2.5 seconds for an average indentation depth of approximately 2.2 mm. The completion time for each trial depended entirely on the area of the region of interest, and can be roughly approximated by the area (cm²) multiplied by 2.5 seconds, since the area of each overlaid circle is approximately 1 cm². In the case of the trial shown in Figure 2.19,
it took about 250 seconds. This clearly made the ex vivo tissue experiments much
longer than the artificial tissue ones.
The experiments on multiple lumps in different tissue samples revealed several
Figure 2.23: Force-displacement data of porcine livers and kidneys with the artificial
lumps that were used for the real-time experiments reveal that the differences between
soft and hard regions are not always clear.
advantages and drawbacks of our method. The quantitative metrics, TPR, PPV,
and the distance between the centers were found to depend heavily on the threshold
used for the stiffness metric, K. This was expected, as the area of the detected
hard region relies completely on K. Selecting an optimal K to detect the lump
depends on the relative importance of avoiding false negatives and false positives for
the particular medical application. The current method of setting a threshold for K
is quite simplistic; other methods for segmenting lumps using the stiffness map could
be considered. It is also important to note that these metrics are most useful as an
objective comparison of the accuracy of different lump detection techniques. Such
metrics could be used to compare the accuracy of our method with that of [93, 104,
150, 165]. In addition, the metrics can evaluate how well the system could be used
as an autonomous lump detector. However, our design goal was for such a stiffness map to be directly observed by a human operator using the real-time display, as shown in Figure 2.19. A human operator can decide how to interpret the
stiffness map based on his or her a priori knowledge of anatomy, tissue properties (in
particular, the expected inhomogeneity of the tissue), and the manner in which the
tissue was palpated.
The experiments revealed several improvements that can be made to the system
and its use with particular organs. First, a stiffness range was identified for each tissue
based on the preliminary palpation experiments in Section 2.3, before the real-time
experiments were performed. One can see from Figure 2.21(b) that the stiffness index
range for the porcine kidney was not appropriately chosen. Since we used different pieces of tissue for the preliminary experiments and the real-time experiments, the
range of stiffness was probably different. Ideally, the stiffness range could be based
on a large database of tissue properties or adjusted in real time so that all the colors
of the stiffness map would shift as the palpation progresses.
Second, the tissues used were very hard in some locations due to tissue inhomogeneity. If we look at the force-displacement plots of porcine liver and porcine kidney (Figure 2.23), it is difficult to see differences between soft (normal tissue) and hard (over the embedded epoxy) regions in the porcine kidney case. This explains why we had worse experimental results with the porcine kidneys. If we cannot see a difference
in the force-displacement plots, our material property overlay approach would not
work. Guidelines should be developed for what organs and their respective tumors
would be most appropriate for stiffness estimation and display using this technique.
2.7
Conclusions
This chapter presents the use of a teleoperated surgical robot for online estimation
of tissue properties, and identification of a hard object hidden in soft materials using
a colored stiffness map overlaid on the surgeon's monitor. Among seven potential
mathematical models for artificial and biological tissues, we chose the Hunt-Crossley
model, which provided lower force estimation errors in both self-validation and cross-
validation and showed a difference between soft and hard regions. Recursive least
squares was used to estimate the unknown parameters of the Hunt-Crossley model.
The estimated stiffness was transmitted to a vision computer to create a graphical
overlay using the hue-saturation-lightness map. The hue of the graphical overlay
is determined by the stiffness. Saturation is computed using a weighted Gaussian
function, considering the distance from a palpated point. To create a continuously
blended colored map, a simple interpolation technique is used. We conducted real-time palpation experiments with artificial tissues and ex vivo porcine liver and kidney,
which showed successful acquisition of a color map indicating the location of the
hard object. In the ex vivo tissue experiments, we evaluated the accuracy of our
technique using quantitative performance metrics, which can be used to compare the
performance of other techniques.
Chapter 3
Stability and Transparency of a
Bilateral Teleoperator Considering
Estimated Environment Models
3.1
Introduction
such as dissection or suturing, force feedback may also be valuable in surgical task
performance. In this chapter, we will consider using an estimated environment model
in the design of a teleoperator controller. As a model of the experimental setup in
Chapter 2, we consider a three-channel teleoperator architecture, in which position
sensing is available at the master side and position and force sensing are available at
the slave side.
The goal of robotic teleoperation is to enable a human operator to explore or
manipulate a remote environment. A wide variety of teleoperator applications have
been developed, including radioactive material handling to protect operators from
hazard [49, 160], minimally invasive surgery to scale motion and/or force [51, 152],
and remote exploration underwater [46, 168] or in space [64, 140] to overcome distance. As seen in Chapter 1, a teleoperation system is divided into five components
(Figure 1.1). The operator directly manipulates the master robot. A controller implemented in the master, the communication channel, and/or the slave, is designed such
that the slave robot follows the motion of the master. The slave robot either moves in
free space or directly interacts with objects in the environment. The operator usually
receives visual feedback through a 2D or 3D video display and possibly audio feedback as well, but not necessarily haptic (force and tactile) feedback. Although force
feedback provides valuable information to the operator, due to issues of performance
and safety, bilateral teleoperation systems that render realistic haptic feedback to
the operator are not presently commercially available. Although it is desirable to
trol approach, the Routh-Hurwitz stability criterion. In Section 3.3, robust stability
and position and force tracking performance of a bilateral controller are analyzed in
the frequency-domain simulator by changing the accuracy of impedance estimates of
the master, slave, and environment, as well as the slave local force controller gain.
Simulation results are discussed in Section 3.4. Finally, Section 3.5 concludes the
study presented in this chapter.
3.1.1
Related Work
of time delay and still provide the operator with high-fidelity force feedback. Yokokohji and Yoshikawa, [169], proposed an intervening impedance that makes the dynamics
of master and slave manipulators act as a certain kind of impedance. The concept
of their idea is quite similar to the bilateral impedance control used in [55], but their
method requires neither a human dynamics model nor an environment model. They defined transparency in terms of position/force tracking between the master and slave, and used it as a performance index of the maneuverability of the teleoperator. Lawrence,
[89], presented a general structure of a teleoperation system to quantify performance
as well as stability of the system when communication delays are present. He showed
that transparency and robust stability are conflicting design goals in bilateral teleoperation systems. He also proposed a transparency-optimized controller and compared its performance and stability with those of common bilateral teleoperator controllers including position-position and position-force architectures. Hashtrudi-Zaad
and Salcudean, [59], extended the general framework of the bilateral teleoperation
system proposed by Lawrence for any combinations of impedance/admittance types
of master-slave teleoperators. They also added local force controllers that are missing
in the original architecture, and derived the perfect transparency conditions for each
teleoperator architecture. In stability and performance analysis, Llewellyn's absolute
stability criterion [61] and the minimum and the dynamic range of Z-width [26] of
the transmitted impedance to the operator were used. In [60], Hashtrudi-Zaad and
Salcudean found that perfect transparency could be achieved even under a three-
channel control architecture by canceling out the physical interaction force between
the operator and the master manipulator or between the environment and the slave
manipulator. Later, Naerum and Hannaford, [112], thoroughly analyzed global transparency conditions of Lawrence's teleoperator architecture. They proved necessary
and sufficient conditions to achieve perfect transparency and summarized perfectly
transparent controller settings for each teleoperator architecture.
Unlike other researchers who use the two-port network-based approach, Speich and
Goldfarb, [142], designed a controller from a frequency-domain loop-shaping perspective for a two-channel position-force control architecture. With a 3-DOF master-slave
manipulator, they showed improvements in transparency bandwidth and stability robustness of a teleoperation system using a loop-shaping compensator. De Gersem et
al., [36], analyzed the influence of friction force on transparency for position-position and
position-force control architectures. Mahvash and Okamura, [98, 100], presented a
friction compensation technique and implemented it on a da Vinci Classic Surgical
System to enhance transparency for a position-position control architecture.
There are some researchers who used the estimated environment model for teleoperator controller design. Hannaford, [55], used estimates of both human operator and
environment impedances to achieve a four-channel perfectly transparent teleoperator.
Hashtrudi-Zaad and Salcudean, [58], incorporated composite adaptive control [141]
and impedance bilateral control [55] into Lawrence's teleoperator architecture. The
estimated environment model is updated online and fed back to both master and
slave controllers. Though the system is quite sensitive to noise, it does not require
force measurements on the slave manipulator side. Park and Khatib, [124], proposed
a new teleoperation approach that connects the master and slave via a virtual spring,
and force control is used at the slave manipulator side. The virtual spring stiffness
is updated based on the environment model, which is estimated online using a modified Kalman estimator. Mobaser and Hashtrudi-Zaad, [108], proposed a controller
based on environment impedance reflection for rate (velocity) mode control [81] to
achieve perfect transparency under a two-channel control architecture. Mitra and
Niemeyer, [107], proposed model-mediated telemanipulation. Their motivation for
using model estimates of the remote environment was to mitigate the effects of large
communication delays on bilateral teleoperator performance.
3.1.2
Contributions
Our motivation to use an estimated environment model for designing a teleoperator controller is that few teleoperator architectures to date have taken advantage of
knowledge of the environment dynamics. Controllers can be implemented in such a
way that the combined system of the master manipulator, the communication channel, and the slave manipulator is passive, so control designers would not need to
know the dynamics of the human operator or environment, assuming both are passive. However, prior knowledge or online estimates of the environment impedance
could be used to design the controller.
Our main contribution in this study is the incorporation of an estimated environment model to achieve perfect transparency for a three-channel teleoperator architecture under ideal conditions, i.e., when all the impedance estimates are correct.
We start from a hybrid matrix that relates position and force of the human operator
and the environment, and design the hybrid matrix parameters to achieve perfect
impedance matching between the impedance transmitted to the operator and that of
the environment. The only previous work using the environment model explicitly in
the hybrid matrix is from Hannaford, [55], but his controller design results in all hybrid matrix parameters being a single value, which is different from our approach. We
analyze stability conditions of the proposed controller and an existing controller [60]
by taking into account impedance estimate errors of the master, slave, and environment. We also quantify position and force tracking performance and robust stability
of our controller in a frequency-domain simulation, which indicates that the controller
is robust and capable of transparency.
3.2
This section first reviews a general framework of a bilateral teleoperator architecture for impedance-impedance devices. Next, perfectly transparent controller design
is discussed with and without considering an environment model. Finally, stability
3.2.1
(3.1)
Fe = Fe* + Ze Ve .   (3.2)

We can also define a hybrid matrix, H, that relates the position and force of the human operator and the environment as:

[ Fh ]   [ h11  h12 ] [ Vh ]
[ Ve ] = [ h21  h22 ] [ Fe ] .   (3.3)
From (3.2) and (3.3), assuming the environment exogenous force is zero, we can derive
the impedance transmitted to the operator in terms of the hybrid matrix parameters.
Zto := (Fh / Vh) | Fe* = 0   (3.4)

    = h11 + (h12 h21 Ze) / (1 − h22 Ze) .   (3.5)
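Eliminating Fe and Ve from (3.2) and (3.3) with Fe* = 0 gives Zto = h11 + h12 h21 Ze / (1 − h22 Ze) under the sign conventions used here. The numeric check below is an illustrative sketch, not code from the dissertation:

```python
def transmitted_impedance(h11, h12, h21, h22, Ze):
    """Z_to = F_h/V_h with F_e* = 0: eliminate F_e and V_e from
    F_h = h11*V_h + h12*F_e, V_e = h21*V_h + h22*F_e, F_e = Ze*V_e."""
    Fe_per_Vh = Ze * h21 / (1.0 - Ze * h22)   # F_e expressed per unit V_h
    return h11 + h12 * Fe_per_Vh

# under the transparency conditions h11 = h22 = 0 and h12*h21 = 1,
# the operator feels exactly the environment impedance
assert transmitted_impedance(0.0, 1.0, 1.0, 0.0, 5.0) == 5.0
```

The assertion mirrors the perfect-impedance-matching case discussed in the next subsection.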
Next, the impedance transmitted to the operator is described in terms of controllers and impedances of the general teleoperation system. Lawrence, [89], first
introduced the general block diagram (Figure 1.1) for a four-channel bilateral teleoperator. He visualized the master-slave two-port network diagram clearly to enable
more complex teleoperator controller design. The controllers C1 to C4 are called communication channels as they usually transmit either position or force signals over a
communications system. The impedances of the master and slave manipulators are
denoted by Zm and Zs, and their local position controllers are Cm and Cs, respectively. Hashtrudi-Zaad and Salcudean, [59], extended Lawrence's general bilateral
teleoperator block diagram by adding local force controllers, C5 and C6 . Then, the
control commands on the master and slave sides are
Fcm = −C6 Fh − Cm Vh + C2 Fe + C4 Ve ,   (3.6)

Fcs = −C5 Fe − Cs Ve + C3 Fh + C1 Vh .   (3.7)
The hybrid matrix, H, and the impedance transmitted to the operator, (3.5), can
be described in terms of the impedances and the controllers shown in Figure 1.2. By
using the control commands, Fcm and Fcs , equation of motion of the master and slave
manipulators can be derived.
1
(Fcm + Fh )
Vh = Zm
(3.8)
Ve = Zs1 (Fcs Fe )
(3.9)
After substituting (3.6)-(3.7) into (3.8)-(3.9), we obtain the hybrid matrix parameters
h11 =   (3.10)
h12 =   (3.11)
h21 =   (3.12)
h22 =   (3.13)
Zto =   (3.14)

3.2.2
review the general perfect impedance matching conditions from previous work, and
then show our controller that considers an estimated environment model.
h11 = 0 ,
(3.15)
h22 = 0 ,
(3.16)
h12 h21 = 1 .
(3.17)
Fh = h12 Fe ,
(3.18)
Ve = h21 Vh .
(3.19)
Therefore, if there is no position scaling between the master and slave manipulators,
h12 = h21 = 1 .
(3.20)
Under the conditions of (3.15), (3.16), and (3.20), perfect position and force tracking
between the master and the slave is also achieved.
For an impedance-impedance teleoperation system, the conditions for achieving
perfect transparency are derived by solving (3.10)-(3.13), (3.15)-(3.16), and (3.20) [59,
89].
C1 = Ẑs + Cs ,
C2 = 1 + C6 ,
C3 = 1 + C5 ,   (3.21)
C4 = −(Ẑm + Cm) ,

where Ẑm and Ẑs are estimates of the master and slave impedances, and we require Ẑm = Zm and Ẑs = Zs for perfect transparency.
Hashtrudi-Zaad and Salcudean later pointed out that not all four channels are
necessary to achieve perfect transparency [60]. For example, by introducing a local
force controller, C5 , and canceling out physical interaction between the slave manipulator and the environment with the local force controller, one can achieve perfect
transparency with the following controller under the ideal conditions, i.e., when all
the impedance estimates are correct:
C1 = Ẑs + Cs ,
C2 = 1 (C6 = 0) ,
C3 = 0 (C5 = −1) ,   (3.22)
C4 = −(Ẑm + Cm) .
Figure 3.2 is redrawn from Figure 1.2 to mimic a typical block diagram from the operator's point of view. Note that the red blocks contain an estimate of the impedance of either the master or the slave manipulator. To simplify later analysis, Vm and Vs in the original block diagram are replaced by Xm and Xs to represent position signals.
Figure 3.2: A block diagram of the bilateral teleoperation system with the controller
defined in (3.22). When all estimates are perfect, the transfer function from Fh to Xm becomes Ze⁻¹.
h′11 =   (3.23)
h′12 =   (3.24)
h′21 =   (3.25)
h′22 =   (3.26)
  (3.27)
One way to achieve perfect transparency would be to invoke the following constraints:
Zcm Zcs + C1 C4 = 0 ,   (3.28)

1 + C5 = 0 ,   (3.29)

(1 + C5) Zcm + C1 C2 = Zcs ,   (3.30)
to cross out the first term of the numerator, (3.28), and the coefficient of Ze of the
denominator, (3.29), and cancel out the coefficient of Ze of the numerator by Zcs in
the denominator, (3.30), which would result in (3.22). Instead, if we design the controller using an estimated environment model, the controllers Ci (i = 1, 2, 4, and 5) may be functions of the estimated environment impedance, Ẑe. One way to achieve
perfect impedance matching is:
Zcm Zcs + C1 C4 = 0 ,   (3.31)

(1 + C5) Zcm + C1 C2 = (1 + C5) Ze + Zcs ,   (3.32)
to cancel out the first term of the numerator, (3.31), and cross out the coefficient of
Ze of the numerator by the denominator, (3.32). Then, let us choose
C1 = Ẑs + Cs ,   (3.33)
C4 = −(Ẑm + Cm) ,   (3.34)
C2 = 1 ,   (3.35)
Cm = −Ẑm + Ẑe ,   (3.36)

to satisfy (3.32). If all the impedance estimates, Ẑm, Ẑs, and Ẑe, are true values, perfect
impedance matching is achieved. To summarize the conditions for our proposed
controller,
C1 = Ẑs + Cs ,
C2 = 1 ,
C3 = 0 ,
C4 = −(Ẑm + Cm) = −Ẑe ,   (3.37)
C5 = C ,
C6 = 0 ,
where C is any constant. Unlike the three-channel perfectly transparent controller,
(3.22), proposed by Hashtrudi-Zaad and Salcudean, [60], we do not necessarily have
to cancel out the physical interaction between the slave and the environment using the
slave local force controller, C5 . The physical interpretation of (3.37) is that we cancel
Figure 3.3: A block diagram of the bilateral teleoperation system with the controllers defined in (3.37). When all estimates are perfect, the transfer function from Fh to Xm becomes Ze^-1.
out the dynamics of the master manipulator and render the estimated environment impedance at the master side. Thus, if the estimated environment impedance is correct, the operator would feel as if he were directly interacting with the environment. The block diagram of this system is redrawn in Figure 3.3, where red blocks contain impedance estimates of the master, slave, or environment.
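The substitution behind (3.37) can be checked directly. Assuming the choices C1 = Ẑs + Cs, C2 = 1, and Cm = -Ẑm + Ẑe (hats denoting estimates), with Zcm = Zm + Cm and Zcs = Zs + Cs, the left-hand side of (3.32) becomes:

```latex
(1 + C_5)Z_{cm} + C_1 C_2
  = (1 + C_5)\left(Z_m - \hat{Z}_m + \hat{Z}_e\right) + \hat{Z}_s + C_s ,
```

which equals the right-hand side of (3.32), (1 + C5)Ẑe + Zcs, exactly when Ẑm = Zm and Ẑs = Zs, i.e., when the estimates are perfect.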
The above controller design is just one possibility. In fact, we can achieve perfect impedance matching even in the two-channel architecture under the following
conditions:
C1 = Ẑs + Cs
C2 = 0
C3 = 0
C4 = Ẑe - (Ẑm + Cm)  .  (3.38)
C5 = -1
C6 = 0
This controller is quite similar to (3.22), and its stability conditions are anticipated to be similar to those of (3.22). We therefore use controller (3.37) for the later analysis.
Clearly, the impedance estimates will not always be perfect. Thus, in the remainder of this chapter we examine how the accuracy of the impedance estimates, Ẑm, Ẑs, and Ẑe, affects teleoperator performance in comparison to a more traditional controller.
3.2.3
Stability Analysis
In the previous section, we designed the teleoperator controllers to achieve high-fidelity force feedback based on impedance matching. Another important goal is to
make the system stable, and preferably robust as well. This must be accomplished in
any teleoperator scenario, especially for teleoperated robot-assisted surgery, to ensure
that the robots do not harm patients during a surgical procedure. In this section,
we analyze stability of the three-channel perfectly transparent controller proposed
by Hashtrudi-Zaad and Salcudean [60] (Controller I, (3.22)) and the one designed
by us (Controller II, (3.37)). We employ a classical stability analysis tool, the
Routh-Hurwitz stability criterion, to check if a teleoperation system is bounded-input bounded-output (BIBO) stable and show that consideration of the environment
model for designing the teleoperator controllers relaxes stability conditions. In this
study, we take into account prior knowledge of the environment model. Thus, we
assume that the environment model does not change or is not updated during bilateral
teleoperation.
Stability analysis of a bilateral teleoperator can be quite complicated due to unknown impedances of the human operator and the environment, as well as unmodeled
dynamics of the master and slave manipulators. A common approach for stability
analysis is to apply passivity theory for the master-slave two-port network system
to guarantee stability of the entire combined system, assuming the operator and the
environment are both passive.¹ The passivity approach makes it easier to analyze
stability of the entire teleoperation system than using classical control approaches
since it does not require knowledge of the impedance of the operator or the environment. However, passivity imposes strict conditions on achievable controllers, and is
often overly conservative as a means for guaranteeing stability. Thus, we will not use
passivity analysis in this work.
As shown in Figures 1.2 and 3.1, the entire teleoperation system consists of the human operator, master manipulator, communication channel, slave manipulator, and environment.
¹Although the human operator and/or the environment could be nonpassive, it is a widespread assumption in the teleoperation literature that both are passive [114].
2. When the human operator controls the teleoperator in free space, the system
must be stable. In other words, the combined system of the human operator,
master manipulator, communication channel, and slave manipulator needs to
be stable.
3. When the slave manipulator contacts the environment and the human operator
does not touch the master manipulator, the system must be stable. In other
words, the combined system of the master manipulator, communication channel,
slave manipulator, and environment, i.e., Zto^-1, needs to be stable.
4. When the human operator controls the teleoperator and the slave manipulator
interacts with a passive environment, the system must be stable. In other words,
the combined system of the five components needs to be stable.
Typically, researchers ensure stability conditions 1 and/or 4. In the following analysis, we mainly focus on condition 3. That is, we analyze the stability of the system under condition 3 because, for condition 4, there exist systems that are unstable but could be stabilized by the human operator. However, the operator should not have to act as a stabilizing controller for the teleoperation system. Also, under such a condition, the operator's maneuverability is not high [72]. We also comment on condition 2, since the stability of the teleoperator moving in free space is important in a real system as well, but is often neglected in the literature.
Zto^-1 = (b2 s^2 + b1 s + b0) / (a4 s^4 + a3 s^3 + a2 s^2 + a1 s + a0) ,  (3.39)
where
b2 = Ms
b1 = Ds
b0 = Ps + Ke
a4 = Mm Ms - M̂m M̂s
a3 = Mm Ds + Ms Dm  .  (3.40)
a2 = Mm Ps + Ms Pm + (Mm + M̂s)Ke
a1 = (Dm + Ds)Ke
a0 = (Pm + Ps)Ke
To check the stability of the system, we employ the Routh-Hurwitz stability criterion, which requires that all values in the first column of the Routh table (Table 3.1) have the same sign for a stable system. Thus, the stability conditions are:
Mm Ms - M̂m M̂s > 0
Mm Ds + Ms Dm > 0
R31 > 0  .  (3.41)
R41 > 0
(Pm + Ps)Ke > 0
The conditions of (3.41) are quite complicated due to the master position controller, Cm = Dm s + Pm. To simplify the stability analysis, we remove the master position controller from Controller I by setting Dm = Pm = 0. This relaxes the stability conditions, since we remove one closed loop from the system. Also, this would not
Table 3.1: Routh table for the three-channel perfectly transparent controller
s^4 | Mm Ms - M̂m M̂s | Mm Ps + Ms Pm + (Mm + M̂s)Ke | (Pm + Ps)Ke
s^3 | Mm Ds + Ms Dm | (Dm + Ds)Ke
s^2 | R31 | (Pm + Ps)Ke
s^1 | R41
s^0 | (Pm + Ps)Ke
where
R31 = (a3 a2 - a4 a1) / a3 ,  R41 = (R31 a1 - a3 a0) / R31 .  (3.42)
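The same-sign test on the first column can be sketched in code. The two example polynomials below are illustrative and unrelated to the teleoperator parameters:

```python
def routh_first_column(a4, a3, a2, a1, a0):
    """First column of the Routh table for a4*s^4 + a3*s^3 + a2*s^2 + a1*s + a0."""
    r31 = (a3 * a2 - a4 * a1) / a3      # s^2 row entry
    r41 = (r31 * a1 - a3 * a0) / r31    # s^1 row entry
    return [a4, a3, r31, r41, a0]       # s^0 row entry equals a0

def is_bibo_stable(a4, a3, a2, a1, a0):
    """Routh-Hurwitz criterion: no sign changes in the first column."""
    col = routh_first_column(a4, a3, a2, a1, a0)
    return all(c > 0 for c in col) or all(c < 0 for c in col)

# (s + 1)^4: all roots at s = -1, hence stable.
assert is_bibo_stable(1, 4, 6, 4, 1)
# s^4 + 2s^3 + 3s^2 + 4s + 5 has right-half-plane roots: unstable.
assert not is_bibo_stable(1, 2, 3, 4, 5)
```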
The stability conditions of the teleoperator moving in free space can be derived by substituting Ze = 0 into (3.39), which gives:
Zto^-1 |Ze=0 = (Ms s^2 + Ds s + Ps) / [(Mm Ms - M̂m M̂s)s^4 + (Mm Ds + Ms Dm)s^3 + (Mm Ps + Ms Pm)s^2] .  (3.43)
Since (3.43) has two poles at the origin, s = 0, the transfer function (3.43) is BIBO stable only if either (a) Ds = Ps = 0, Mm Ms - M̂m M̂s > 0, Mm > 0, and Ms > 0, or (b) the numerator of (3.43) is canceled by the denominator. In the former case, there is no position tracking controller on the slave manipulator side. This would render the teleoperator ineffective. The latter condition would rarely occur in real teleoperator settings since the impedances are physical parameters. Therefore, this controller would have a stability issue or poor tracking performance in free space.
For Controller II, the transmitted impedance becomes
Zto^-1 = [(Zs + Cs) + (1 + C5)Ze] / {(Zm - Ẑm + Ẑe)[(Zs + Cs) + (1 + C5)Ze] + (Zs + Cs)(Ze - Ẑe)} ,  (3.44)
and the corresponding Routh table, constructed from the denominator coefficients of (3.44) in the same manner as Table 3.1, yields the stability conditions (3.45) in terms of Mm, Ms, M̂s, Ds, Ps, Ke, and K̂e.
affect the position tracking accuracy in contact with the environment if the gains are large.
We could introduce a gain γ (0 ≤ γ ≤ 1) to tune the D or PD controller, C′m, such that γ gradually varies as the state of the slave manipulator changes (in contact with the environment or in free space). Thus, the controller is expected to be stable and to work well in free space as well.
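A minimal sketch of such a scheduled gain uses a first-order blend toward a contact-dependent target; the blend law, time constant, and targets below are illustrative assumptions rather than part of the design above:

```python
def update_gamma(gamma, in_contact, dt, tau=0.1):
    """First-order blend of the scheduling gain gamma in [0, 1]:
    gamma -> 0 when the slave is in contact (master position
    controller effectively removed), gamma -> 1 in free space
    (position controller active for good free-space tracking)."""
    target = 0.0 if in_contact else 1.0
    # discrete first-order lag: gamma converges to target with time constant tau
    blend = dt / (tau + dt)
    return gamma + blend * (target - gamma)

# At a 1 kHz servo rate, gamma ramps down smoothly after contact is detected.
g = 1.0
for _ in range(1000):            # 1 s of sustained contact
    g = update_gamma(g, True, dt=0.001)
assert g < 0.01                  # controller essentially removed
```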
3.3
This section focuses on performance and stability analysis of the two three-channel
controllers presented in Section 3.2.2. As we have seen by comparison of (3.41) and (3.45), Controller II imposes less restrictive stability conditions. The position tracking transfer functions of Controller I are given in (3.46) and (3.47), and its force tracking transfer function is
(Zs + Cs)Ze / [(Zm + Cm)(Zs + Cs + Ze) + (Zs + Cs)(Ze - (Ẑm + Cm))] .  (3.48)
Similarly, for Controller II, the position and force tracking transfer functions can be derived as (3.49) and (3.50), where Z̃m = Zm - Ẑm, Z̃s = Zs - Ẑs, and Z̃e = Ze - Ẑe.
3.3.1
We analyze how the impedance estimates of the master, slave, and environment, as well as the slave local force controller, γ, affect position and force tracking accuracy. For this purpose, three parameters, α, β, and δ, are introduced to define:
Ẑm = αZm ,  (3.51)
Ẑs = βZs ,  (3.52)
Ẑe = δZe .  (3.53)
Figure 3.4: Bode plots of the position tracking transfer function when the accuracy of the slave mass estimate, β, varies from 0.6 to 1.4 by 0.2 (left) and the slave local force controller, γ, varies from 0 to 1 by 0.2 (right). The parameter settings are α = 0.9, δ = 0.9, and γ = 0.03 (left) and α = 0.9, β = 0.95, and δ = 0.9 (right). The thick line represents the perfect estimate case, i.e., β = 1 (left) and γ = 0 (right), and the arrows indicate how the responses change as the parameters increase.
With perfect position tracking between the master and slave, the magnitude and phase of the transfer function are 0 dB and 0°, respectively, at all frequencies. In both magnitude and phase plots, the accuracy of the slave mass estimate, β, has little effect on position tracking performance at low frequencies, while canceling out the slave interaction force, i.e., γ → 0, improves force tracking errors.
Next, we analyze force tracking performance. The force tracking transfer functions
for both controllers, (3.48) and (3.50), are compared here. Similar to the position
tracking transfer functions, the ideal response is that the magnitude and phase are 0 dB and 0°, respectively, at all frequencies. Figure 3.5 shows the force tracking
performance of the two controllers when the accuracy of the master mass estimate,
Figure 3.5: Bode plots of the force tracking transfer functions of Controllers I (left) and II (right) when the accuracy of the master mass estimate, α, varies from 0.6 to 1.0 by 0.1. The parameter settings are β = 0.95, δ = 0.9, and γ = 0.03. The thick line represents the perfect estimate case, i.e., α = 1, and the arrows indicate how the responses change as α increases.
Figure 3.6: Bode plots of the force tracking transfer functions of Controllers I (left) and II (right) when the accuracy of the slave mass estimate, β, varies from 0.6 to 1.4 by 0.2. The parameter settings are α = 0.9, δ = 0.95, and γ = 0.03. The thick line represents the perfect estimate case, i.e., β = 1, and the arrows indicate how the responses change as β increases.
α, varies from 0.6 to 1.0 by increments of 0.1. The upper bound of α was chosen to keep the teleoperation system stable based on (3.42) and (3.45). Similarly, Figures 3.6 and 3.7 compare the force tracking performance when the accuracy of the slave mass estimate, β, varies from 0.6 to 1.4 by increments of 0.2 and the slave local force controller, γ, from 0 to 1 by increments of 0.2, respectively. In the above three cases, the accuracy of the impedance estimates or the slave local force controller does not change the force tracking performance at low frequencies. Therefore, if the human operator uses the teleoperator to slowly interact with the environment, the accuracy of the impedance estimates would not matter for the force tracking performance.
Finally, Figure 3.8 shows the force tracking performance when the accuracy of the environment impedance estimate, δ, varies from 0 to 1 by increments of 0.2. In the left figure, we use γ = 0.03, while in the right figure, γ = 1. When γ is close to zero, i.e., the interaction force between the slave and the environment is canceled out, the accuracy of the environment impedance estimate does not affect the performance. On the other hand, when there is no cancellation of the interaction force, i.e., γ = 1, a more accurate estimate of the environment impedance results in larger force tracking errors. The stability condition of Controller II, (3.45), implies that increasing δ makes the system more stable. Thus, this may be regarded as a trade-off between stability and transparency.
Figures 3.9 and 3.10 compare averaged position and force tracking errors of both controllers with respect to the accuracy of the master mass estimate (α) and that of
Figure 3.7: Bode plots of the force tracking transfer functions of Controllers I (left) and II (right) when the slave local force controller, γ, varies from 0 to 1 by 0.2. The parameter settings are α = 0.9, β = 0.95, and δ = 0.9. The thick line represents the perfect estimate case, i.e., γ = 0, and the arrows indicate how the responses change as γ increases.
Figure 3.8: Bode plots of the force tracking transfer functions of Controllers I (dash) and II (solid) when the accuracy of the environment impedance estimate, δ, varies from 0 to 1 by 0.2. The parameter settings are α = 0.9, β = 0.95, and γ = 0.03 (left) and γ = 1 (right). The thick solid line represents the case δ = 0, and the arrows indicate how the responses change as δ increases.
Figure 3.9: Comparison of averaged position tracking errors of both controllers. The system is unstable in the regions rendered in white. The environment estimate accuracy is δ = 0.9 for Controller II.
Figure 3.10: Comparison of averaged force tracking errors of both controllers. The system is unstable in the regions rendered in white. The environment estimate accuracy is δ = 0.9 for Controller II.
Figure 3.11: Differences of the averaged position (left) and force (right) tracking errors between the two controllers as the environment impedance estimate rate, δ, of Controller II varies from 0.8 to 1.0. Errors of Controller I are subtracted from those of Controller II, so negative position tracking errors mean Controller II yields smaller tracking errors. The same convention applies to the force tracking errors.
the slave (β). An environment impedance estimate accuracy of δ = 0.9 was used in both analyses. Since the two controllers have exactly the same position tracking transfer functions, the averaged position errors of both controllers are almost the same. Controller I shows a slightly better force tracking performance than Controller II on average, although the stable regions of Controller I are much smaller. Finally, Figure 3.11 shows the differences of the averaged position and force tracking errors between the two controllers as the estimate rate of the environment impedance varies from 0.8 to 1.0. The position errors are always negative, which means Controller II has smaller position tracking errors; however, the difference is trivial since the amplitude of the position is 0.1 m, and the errors are many orders of magnitude smaller. On the other hand, the force errors are always positive, which means Controller I has smaller force tracking errors.
Figure 3.12: Stability radius is defined as the closest distance from the Nyquist plot to the point (-1, 0). The distance between the point (-1, 0) and any point on the Nyquist plot, L(jω), can be computed as |-1 - L(jω)| = |1 + L(jω)|.
3.3.2
Stability Margin
We analyze the robust stability of the bilateral teleoperation system using the stability radius [45] as well as gain and phase margins as stability metrics. The stability radius is a measure of robustness and can be used as a single metric since it combines the gain and phase margins, which are not correlated. It is defined as the closest distance from the Nyquist plot to the critical point, (-1, 0). It can be computed by:
min_ω |-1 - L(jω)| = min_ω |1 + L(jω)| = min_ω |S^-1(jω)| ,  (3.54)
where S(jω) = 1/(1 + L(jω)) is the sensitivity function. The stability radius is thus equal to the reciprocal of the peak in the Bode gain plot of the sensitivity function. Figure 3.12 illustrates the notion of stability radius. The larger the stability radius, the larger the stability margin. Gain and phase margins can be derived
using an open-loop transfer function of the system, L(jω), which is obtained from
Figures 3.2 or 3.3.
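The computation in (3.54) can be sketched numerically. The open-loop response below, L(s) = 10/(s(s + 1)), is an arbitrary stand-in for the loop transfer function obtained from Figure 3.2 or 3.3:

```python
import math

def L(w):
    """Example open-loop frequency response L(jw); illustrative only."""
    s = 1j * w
    return 10.0 / (s * (s + 1.0))

# Stability radius: min over frequency of |1 + L(jw)|, equivalently the
# reciprocal of the peak of the sensitivity |S(jw)| = |1/(1 + L(jw))|.
freqs = [10 ** (k / 200.0) for k in range(-600, 601)]   # 0.001 .. 1000 rad/s
radius = min(abs(1 + L(w)) for w in freqs)
peak_sensitivity = max(abs(1.0 / (1 + L(w))) for w in freqs)

assert math.isclose(radius, 1.0 / peak_sensitivity)
assert 0.0 < radius < 1.0   # this loop is stable but lightly damped
```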
Figure 3.13 compares the stability radii of both controllers with respect to the accuracy of the impedance estimates of the master (α) and the slave (β). Controller II has a stability radius of 1 for any combination of α and β (0.6 ≤ α ≤ 1.0 and 0.6 ≤ β ≤ 1.4) except for α = 1, which has an even larger stability radius. This indicates that Controller II has a larger stability margin, since the maximum stability radius of Controller I is 0.24. Indeed, both gain and phase margins of Controller II are infinite for the specified range of α and β. The gain and phase margins of Controller I are shown in Figure 3.14, where unstable regions are rendered in white.
3.4
Discussion
Figure 3.13: Comparison of stability radii of Controllers I (left) and II (right). The environment estimate accuracy rate is δ = 0.9 for Controller II.
Figure 3.14: Gain margin (left) and phase margin (right) of Controller I.
As the stability conditions (3.41) and (3.45) indicate, this limits the possible combinations of the master and slave impedance estimates, especially in the presence of unmodeled dynamics, and thus more severe stability conditions are imposed on the controller design. As pointed out by many researchers, e.g., [59, 89], this can also be regarded as a fundamental trade-off between performance and stability of the bilateral teleoperator. Alternatively, based on (3.46) and (3.49), the position tracking accuracy of both controllers could be improved if the PD gains of the slave local position controller, Cs, are much larger than the impedance of the environment multiplied by the gain γ.
In both controllers, overestimating the slave mass makes the system more stable, which is counterintuitive. First, this simulation is based on frequency-domain analysis using the transfer functions. In a real system, a slight time lag, which is not taken into account in the simulation, may be introduced to filter velocity and/or acceleration measurements, because those measurements are typically noisy. This restricts the range of the slave mass estimate. Second, the simulation assumes that the slave manipulator is always in contact with the environment. As noted in the stability analysis, overestimating the slave mass with Controller I makes the system unstable in free space.
In both the performance and robust stability analyses, the accuracy of the environment impedance estimate in Controller II does not appear to be very important as long as the gain of the slave local force controller, γ, is small. In the case where we have a perfect estimate of the environment impedance, the teleoperation system uses a position-forward control law, since there is no force or position information fed back from the slave to the master (Figure 3.3). In this situation, the system is robust to time delay, and accurate knowledge of the environment impedance is therefore expected to be more important in a time-delayed system, which is beyond the scope of this work. More thorough analysis of the proposed controller is needed to characterize its behavior under time delay.
3.5
Conclusions
Chapter 4
Haptic Feedback in the Context of
Robotic Surgery Open Interfaces
4.1
Introduction
Robot-assisted minimally invasive surgery (RMIS) has achieved widespread clinical application since the introduction of the da Vinci Surgical System [51] (Intuitive
Surgical, Inc., Sunnyvale, California, USA) in 2000. Radical prostatectomy, the surgical removal of the cancerous prostate gland, is by far the most common procedure
performed with RMIS; the RMIS approach currently accounts for more than 70% of
radical prostatectomies performed in the U.S. [6]. The benefits of RMIS over traditional minimally invasive surgery include improved dexterity, more precise motion of
the surgical tools, and enhanced 3D visualization of the operating field.
However, lack of significant haptic (force and tactile) feedback to the surgeon
has been recognized as a major drawback of RMIS. Numerous research results have
demonstrated that force feedback improves the performance of surgical tasks involving manipulation, such as suturing [85], blunt dissection [158], and palpation [52].
Major issues that prevent the use of force feedback in a commercially available system are the intrinsic trade-off between stability and transparency (a measure of the
accuracy of motion tracking and haptic feedback) of bilateral teleoperators [55, 89]
and the challenge of sensing forces under cost, biocompatibility, and sterilizability
constraints [116, 117]. Currently, surgeons use visual cues such as changes in texture,
color, and contour to compensate for the lack of haptic sensation [149].
In this chapter, we present the development and evaluation of an open platform
for augmented reality and haptic interfaces for robot-assisted surgery. This work
was done in collaboration with Timothy Judkins, Niki Abolhassani, and Sung Jung
of Intelligent Automation, Inc. (IAI) [5]. T. Judkins originally proposed a robotic
surgery open interface (RSOI) project and approached Allison Okamura due to her team's expertise in haptics and teleoperated surgical robots. T. Judkins designed the software framework (Section 4.2.1), T. Judkins and S. Jung designed and co-implemented the communication architecture (Section 4.2.3), and N. Abolhassani
implemented the stereo vision system and 3D surface reconstruction (Section 4.2.6).
Antonio Cedillos at JHU, under the supervision of the author, created an artificial
prostate tissue (Section 4.2.5). The author did all the work required for a Linux-based
system that controlled the master and slave manipulators, including porting the communication architecture code from Windows to Linux, implementing the registration program (Section 4.2.4), and co-designing and implementing two bilateral teleoperation tasks: the 3D material property overlay (Section 4.2.7) and point-cloud-based forbidden-region virtual fixtures (Section 4.2.8).
RSOI pursues a similar goal as SAW [74, 153] and Plugfest [82], which is to create a system that enables interoperability between myriad telesurgical devices. The emphasis in this thesis is on using the RSOI to enhance the surgeon's performance by providing haptic feedback and augmented reality. Key features of the RSOI developed
by the author are augmented visual feedback using 3D material property overlays
and virtual fixtures with haptic feedback to provide assistance to the surgeon. We
do not use any RSOI-specific code to perform these tasks, so they are easily transferable to other architectures. The software framework consists of four main elements:
control and command software, robot plug-ins, image processing plug-ins, and 3D
surface reconstructions of an unknown environment. We demonstrate the feasibility
of the interface in two bilateral teleoperation scenarios performed on artificial prostate
tissue: palpation to detect a lump hidden in the artificial prostate and point-cloud-based forbidden-region virtual fixtures to prevent the patient-side manipulator from
entering unwanted regions of the workspace.
The relevant prior work is in the following areas: (1) platforms for robot-assisted
surgery, (2) detection of tissue abnormalities through pressure, force, and material
property sensing, and (3) virtual fixtures for surgeon assistance. These are reviewed
in Section 1.3.
4.2
4.2.1
The RSOI, which focuses on capabilities for augmented reality and haptic feedback, consists of the following main components: haptic manipulators (master and
slave robots), a stereo vision camera system, and a stereoscopic 3D display. The
software framework is designed to interface with any surgical robot, acquire useful
data about a surgical task by sharing measured information among the components,
and provide augmented feedback to the surgeon. Figure 4.1 shows an overview of the
RSOI design, where yellow rectangular blocks represent generalized hardware components and blue rounded rectangular blocks represent software components. The
left column shows the interconnections of the robotic system components, and the
right shows those of the vision system. Sensor measurements, particularly motion
and force, are transmitted from the robotic side to the vision side to enable augmented reality visual displays. In addition, geometric environment data acquired
using computer vision informs haptic feedback, in particular virtual fixtures, implemented by the robotic system. Because the RSOI is designed for telemanipulation,
there is a clear division between the surgeon-side hardware components (the master
Figure 4.1: Overview of the robotic surgery open interface (RSOI). The software
framework enables appropriate communication between surgical robot hardware components and visual interfaces.
manipulator and the 3D display) and the patient-side hardware components (the slave
manipulator and the stereo camera).
4.2.2
Hardware
The RSOI is designed to use any master and slave robots. In the implementation used for the experiments described in this chapter, we used a pair of 3-DOF
PHANTOM Premium manipulators (SensAble Technologies, Inc., California, USA).
The master was a 1.0 PHANTOM and the slave was a 1.5 PHANTOM.
In the current version of the RSOI, we use two computers: a Linux-based machine
(robot computer) controlling master and slave robots at 1 kHz, which is a typical update rate for haptic interfaces; and a Windows machine (vision computer) performing
stereo vision processing and reconstructing a 3D surface of the environment.
Existing RMIS systems use stereo endoscopes to enable 3D visualization. For
software development purposes, the vision system of our testbed uses a Bumblebee2 IEEE-1394 stereo vision camera (Point Grey Research, Inc., Richmond, British
Columbia, Canada) and Triclops Software Development Kit (SDK). The stereoscopic
display system consists of the NVIDIA 3D Vision Kit and a 120 Hz LCD monitor.
We configured the video settings of a Quadro FX 580 in Windows Vista to enable
stereo display.
For palpation and surface tracing tasks, we mounted the same tool attachment
on the tip of the slave manipulator as the one used in Section 2.3.1. A Nano 17
transducer (ATI Industrial Automation, Apex, North Carolina, USA) was attached
on the palpation tool as shown in Fig. 2.10.
4.2.3
Communication Architecture
To communicate between the system hardware components, we use a Transmission Control Protocol (TCP) client-server architecture as shown in Figure 4.2, as
designed by T. Judkins and S. Jung of Intelligent Automation, Inc. A TCP server
acts as a network router to pass messages between computers controlling the hardware components. This approach allows each computer to register with the system
Figure 4.2: The TCP client-server architecture. A computer controlling each hardware component is connected via the TCP server and the server maintains IP addresses as a network router.
dynamically so that one master manipulator and visual display can connect with one
of many possible slave robots. An IP address is assigned to each computer, and the
computers commanding the hardware components each run a TCP client that sends
a hello message to the TCP server upon start-up. The TCP server maintains a list
of IP addresses and routes messages accordingly.
The message protocol supports switching between standard teleoperation, palpation with augmented visualization, and forbidden-region virtual fixture (FRVF)
modes. In the standard teleoperation mode, the slave robot follows motion of the
master robot with a position-tracking controller. In the palpation mode, data from
the force sensor are acquired at the slave robot side, and then the robot computer
sends an estimated tissue stiffness value to the vision computer, which is overlaid on
camera images using a translucent color map. In the FRVF mode, the user defines
a forbidden region by mouse input and the virtual fixture is created on a surface
constructed using a 3D point cloud computed from the stereo vision system.
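A minimal sketch of the robot-side forbidden-region test follows; the point data and radius are fabricated for illustration, whereas in the RSOI the cloud comes from the stereo reconstruction and the user's mouse selection:

```python
import math

def inside_forbidden_region(tip, cloud, radius):
    """True if the commanded tip position penetrates the fixture, modeled
    here as spheres of the given radius around each point-cloud sample."""
    return any(math.dist(tip, p) < radius for p in cloud)

# Illustrative 5 x 5 surface patch (meters) selected as the forbidden region.
cloud = [(0.01 * i, 0.01 * j, 0.0) for i in range(5) for j in range(5)]

assert inside_forbidden_region((0.02, 0.02, 0.002), cloud, radius=0.005)
assert not inside_forbidden_region((0.02, 0.02, 0.02), cloud, radius=0.005)
```

In a full implementation the controller would not merely reject such commands but project them back onto the allowed side of the reconstructed surface.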
Each message is composed of two parts: the message header and data. The message header contains information about the source of the message, the destination,
the message type, and the amount of data in the message. The source is the component (master, slave, display, or camera) from which the message is sent, and the
destination is the component to which the message is sent. For both the source and
the destination, the TCP server determines where to route the message based on
which components are registered with the server. The message type indicates the
contents of the message (hello, mode change, etc). The message data are specific to
the message type. The following message types have been implemented:
Hello (Register): Registers a client with the TCP server.
Mode Change: Changes the mode of the robot.
Send Point Cloud: Sends point cloud data for the forbidden-region virtual fixture.
Send Stiffness: Sends estimated stiffness values for the material property overlay.
After each message is sent, the recipient must acknowledge the message to confirm
receipt. This ensures that data are not lost and components are synchronized. An
example message sequence is given in Figure 4.3. In this example, the console (display)
and the robot (master) send hello messages to register with the TCP server. The
console then sends a mode change message to switch to the FRVF mode followed
by sending point cloud data to define the forbidden region. The robot operates in
Figure 4.3: An example of message protocol sequence between the display console,
the robot manipulator, and the TCP server.
this mode until the console sends a mode change message to switch to the palpation
mode. The robot then sends stiffness values repeatedly to be displayed.
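The message framing described above can be sketched as follows. The byte-level layout and the numeric component and message identifiers are assumptions for illustration, not the actual RSOI wire format:

```python
import struct

# Header: source, destination, message type (1 byte each); payload length (4 bytes).
HEADER_FMT = "!BBBI"                       # network byte order (assumed layout)
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 7 bytes

# Component ids and message types (illustrative numbering).
MASTER, SLAVE, DISPLAY, CAMERA, SERVER = range(5)
HELLO, MODE_CHANGE, SEND_POINT_CLOUD, SEND_STIFFNESS, ACK = range(5)

def pack_message(src, dst, mtype, payload=b""):
    """Serialize one message: fixed header followed by the raw payload."""
    return struct.pack(HEADER_FMT, src, dst, mtype, len(payload)) + payload

def unpack_message(buf):
    """Parse a serialized message back into (src, dst, mtype, payload)."""
    src, dst, mtype, n = struct.unpack(HEADER_FMT, buf[:HEADER_SIZE])
    return src, dst, mtype, buf[HEADER_SIZE:HEADER_SIZE + n]

# The slave registers, then streams a stiffness value toward the display.
hello = pack_message(SLAVE, SERVER, HELLO)
stiff = pack_message(SLAVE, DISPLAY, SEND_STIFFNESS, struct.pack("!f", 850.0))
src, dst, mtype, body = unpack_message(stiff)
assert (src, dst, mtype) == (SLAVE, DISPLAY, SEND_STIFFNESS)
assert abs(struct.unpack("!f", body)[0] - 850.0) < 1e-3
```

The TCP server would forward each such buffer unmodified, using only the destination field to route it.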
4.2.4
To convert between the robot coordinate system and the stereo camera coordinate system, we defined a local coordinate system to which the other systems were
registered (Figure 4.4).
The local coordinate system, which is fixed on the patient (in our testbed, this is
Figure 4.4: Homogeneous transformations are computed to transform the robot and camera coordinate systems into the local coordinate system and vice versa. ^L_R T and ^L_C T are the 4 x 4 transformation matrices that convert from robot to local coordinates and from camera to local coordinates, respectively.
an artificial tissue sample), is defined by measuring three points: the location of the
local origin (G PLorg ), a point on the local x-axis (G PLx ), and a point on the local y-axis
(G PLy ). A transformation matrix is computed from these three points to transform
points in the robot coordinate system to the local coordinate system as:
$$
{}^{L}_{R}T = \begin{bmatrix} {}^{L}_{R}R & -\,{}^{L}_{R}R\,{}^{R}P_{Lorg} \\ 0 \;\; 0 \;\; 0 & 1 \end{bmatrix}
\qquad (4.1)
$$
where ${}^{R}P_{Lorg}$ is a 3 × 1 vector of the position of the origin of the local coordinate system in the robot coordinate system, and the rotation matrix is
$$
{}^{L}_{R}R = \begin{bmatrix} u_x & u_y & u_z \end{bmatrix}^{T}
\qquad (4.2)
$$
where $u_x$, $u_y$, and $u_z$ are the unit vectors defining the x-, y-, and z-axes of the local coordinate system with respect to the robot coordinate system, respectively. The unit vectors
$u_x$, $u_y$, and $u_z$ are defined by
$$
u_x = \frac{{}^{R}P_{Lx} - {}^{R}P_{Lorg}}{\|{}^{R}P_{Lx} - {}^{R}P_{Lorg}\|}
\qquad (4.3)
$$
$$
u_z = u_x \times \frac{{}^{R}P_{Ly} - {}^{R}P_{Lorg}}{\|{}^{R}P_{Ly} - {}^{R}P_{Lorg}\|}
\qquad (4.4)
$$
$$
u_y = u_z \times u_x
\qquad (4.5)
$$
A point ${}^{R}P_A$ in the robot coordinate system is then transformed to the local coordinate system as
$$
{}^{L}P_A = {}^{L}_{R}T \, {}^{R}P_A
\qquad (4.6)
$$
Similarly, a point in the robot coordinate system is transformed to the camera coordinate system as
$$
{}^{C}P_A = ({}^{L}_{C}T)^{-1} \, {}^{L}_{R}T \, {}^{R}P_A
\qquad (4.7)
$$
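The three-point frame construction of Eqs. (4.1)–(4.5) can be sketched in plain Python as follows. The function names are illustrative, and the translation column assumes the convention $-{}^{L}_{R}R\,{}^{R}P_{Lorg}$, i.e., the inverse of the transform that places the local frame at ${}^{R}P_{Lorg}$.

```python
import math

def sub(a, b):    return [a[i] - b[i] for i in range(3)]
def norm(v):      return math.sqrt(sum(c * c for c in v))
def unit(v):      n = norm(v); return [c / n for c in v]
def cross(a, b):  return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]
def dot(a, b):    return sum(a[i]*b[i] for i in range(3))

def robot_to_local_transform(p_org, p_x, p_y):
    """Build the local frame from three points measured in robot coordinates:
    the local origin, a point on the local x-axis, and a point on the local y-axis."""
    ux = unit(sub(p_x, p_org))                 # Eq. (4.3)
    uz = unit(cross(ux, sub(p_y, p_org)))      # Eq. (4.4)
    uy = cross(uz, ux)                         # Eq. (4.5)
    # The rows of the rotation are the local axes, so the matrix maps robot -> local.
    R = [ux, uy, uz]
    t = [-dot(R[i], p_org) for i in range(3)]
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

def apply(T, p):
    """Transform a 3D point by a 4x4 homogeneous matrix (Eq. 4.6)."""
    return [sum(T[i][j] * (p + [1.0])[j] for j in range(4)) for i in range(3)]

T = robot_to_local_transform([1.0, 2.0, 0.0], [2.0, 2.0, 0.0], [1.0, 3.0, 0.0])
print(apply(T, [1.0, 2.0, 0.0]))   # the measured origin maps to [0.0, 0.0, 0.0]
```

A quick sanity check is that the measured local origin must map to the zero vector and the point on the local x-axis must land on that axis.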
4.2.5
The system was tested using artificial tissue representing a region of the prostate.
The prostate was chosen as the organ of interest because of the popularity of robotic
radical prostatectomy in current clinical practice, and the potential for improving
this procedure with haptic feedback and augmented reality. Of particular concern is
recent data suggesting that a greater likelihood of positive surgical margin (existence
of cancer cells at the edge of removed tissue) occurs with RMIS than with a traditional,
open radical retropubic prostatectomy [163].
Figure 4.5: A 3D computer-aided design model of the artificial prostate (left) and
the bottom mold with a hard protrusion (right). The artificial prostate model was
created from silicone rubber using plastic molds. The bottom mold was attached
under the model to simulate a hidden lump.
The artificial prostate tissue was made from Ecoflex 0030 platinum cure silicone
rubber (Smooth-on, Inc., Pennsylvania, USA). Plastic molds were designed and constructed using a rapid prototyping machine. Based on the geometry of a commercially
available artificial prostate tissue, the model has a hill at each side and a valley at
the center (Figure 4.5, left). The palpation task used in the experiment requires a
hard lump to be hidden in the prostate, so the bottom part of the mold (Figure 4.5,
right) includes a protrusion positioned under one of the hills. Due to the short focal
length of the stereo camera, a large object was required to be clearly visible in stereo
images. Therefore, the dimensions of the artificial prostate were scaled approximately
four times larger than those of a standard prostate. The dimensions of the artificial
tissue were approximately 0.03 m high, 0.14 m wide, and 0.12 m deep. Details of
artificial tissue construction are provided by Gwilliam et al. [52].
4.2.6
4.2.6.1
3D Display System
Real-time stereo images are captured from the operating scene, which includes the
tip of the robotic instrument and the tissue surface. The two-lens camera used in our
testbed provides two images of the scene, which are used for displaying stereoscopic
images and computing a 3D surface model. To display both the original camera
images and the graphical overlays in 3D, we used OpenGL-based stereo rendering,
implemented with quad buffer drawings. The stereo camera provides individual left
and right images, and the images are drawn onto the back-left and the back-right
buffers to generate flicker-free stereo images. The position of the right image can be
horizontally shifted to adjust the separation of the eye viewpoints. The resolution of
the display images is 1280 × 768. OpenGL's GLUT game mode was used to enable a
full-screen mode display.
The stereo display system includes a graphical user interface that enables operating modes appropriate for various tasks and feedback modalities. The display
application consists of four views: live stereo view, camera frame calibration view,
palpation overlay view, and forbidden region selection view. The application allows
mouse input from the user, overlays a stiffness map onto stereo images, and calculates
frame transformation functions while displaying real-time stereo images.
4.2.6.2
In our system, a point cloud representing the tissue surface is computed using the
Triclops SDK [12]. The SDK uses the Sum of Absolute Differences algorithm, which
is a correlation-based similarity measure, to establish correspondence between 2D left
and right images and obtain depth information. Prior to stereo matching, a low-pass
filter is applied to the raw left and right images to smooth them and then the filtered
images are rectified to correct for distortion. Thereafter, an edge detection algorithm
is applied to the rectified images in order to use changes in the brightness rather
than the absolute pixel value in the correlation process. The computational cost of
edge detection does not affect the performance of our palpation or virtual fixture
applications because point cloud computation is performed before real-time bilateral
teleoperation commences. Finally, to perform the stereo processing and obtain depth
information, a mask is selected for every pixel in the left image and compared with a
number of neighborhoods along the same row in the right image. The best match is
selected using
$$
\min_{d_{min} \le d \le d_{max}} \; \sum_{i=-\frac{m}{2}}^{\frac{m}{2}} \; \sum_{j=-\frac{m}{2}}^{\frac{m}{2}} \left| I_{right}[x+i][y+j] - I_{left}[x+i+d][y+j] \right|
\qquad (4.8)
$$
where dmin and dmax are the minimum and maximum disparities, m is the mask
size, and Iright and Ileft are right and left images, respectively [73]. The displacement
between left and right images and the known intrinsic geometry of the camera provide
the 3D location, including depth information, i.e., the distance of each point from the
cameras. In this study, the disparity range was [1, 200], and the correlation mask was
13 × 13. Figure 4.6 summarizes the steps used to compute the point clouds. The
resulting point cloud data set includes the 3D coordinates, the RGB values, and
row-column of each valid pixel in the right 2D image with respect to the camera
coordinate system. Only the 3D coordinates are sent to the robot computer via the
TCP server, while the complete point cloud data is kept in the buffer. The origin
of the right image is the top left corner and that of the camera coordinates is the
midpoint between the two lenses. Our software framework allows up to 8,000 surface
points of the point cloud data to be sent with a single transmission.
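A minimal sketch of the Sum of Absolute Differences search of Eq. (4.8) is shown below on a tiny synthetic image pair; the mask size and disparity range here are illustrative, not the 13 × 13 mask and [1, 200] range used in the actual system.

```python
def sad_disparity(left, right, x, y, m=3, dmin=0, dmax=4):
    """Return the disparity minimizing the Sum of Absolute Differences (Eq. 4.8)
    for the m x m mask centered at (x, y) of the right image."""
    h = m // 2
    best_d, best_cost = None, float("inf")
    for d in range(dmin, dmax + 1):
        cost = sum(
            abs(right[y + j][x + i] - left[y + j][x + i + d])
            for i in range(-h, h + 1)
            for j in range(-h, h + 1)
        )
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Synthetic pair: the left image is the right image shifted 2 pixels horizontally.
right = [[(r * 13 + c * 7) % 50 for c in range(12)] for r in range(5)]
left = [[row[(c - 2) % 12] for c in range(12)] for row in right]
print(sad_disparity(left, right, x=4, y=2))   # → 2
```

Because the left image is an exact 2-pixel shift of the right one, the SAD cost is zero at disparity 2 and the search recovers it.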
4.2.6.3
Figure 4.6: A flow chart of 3D point cloud detection. The point cloud will be sent to
either the master robot or the slave robot depending on which side the user wants to
implement virtual fixtures on.
We developed an interpolation algorithm to fill holes in the region of interest (ROI) of the point cloud. The
stereo algorithm can only compute surface points that are in the camera's viewpoint. The
algorithm searches for pixels in the right 2D image whose associated 3D coordinates
are missing in the point cloud data. Then, for each of these unmatched pixels, the
nearest neighbors with associated 3D coordinates in the point cloud data are selected.
The 3D coordinates for the unmatched pixel are then computed using the average
3D coordinates of the selected nearest neighbors. This search is restricted to a given
neighborhood, e.g., 20 pixels offset in each of eight directions from the unmatched
pixel, i.e., top, bottom, left, right, top-left, top-right, bottom-left and bottom-right.
If no neighbor is found within the offset in a given direction, that direction is not
included in the computation.
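The eight-direction nearest-neighbor averaging described above can be sketched as follows, assuming the point cloud is stored as a 2D grid of optional 3D points (one entry per pixel, `None` where the 3D coordinates are missing); the representation and names are illustrative.

```python
DIRECTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]

def fill_holes(grid, max_offset=20):
    """For each pixel with no 3D point (None), walk up to max_offset pixels in each
    of the eight directions for the first valid neighbor, then average those found."""
    rows, cols = len(grid), len(grid[0])
    filled = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] is not None:
                continue
            neighbors = []
            for dr, dc in DIRECTIONS:
                for k in range(1, max_offset + 1):   # walk outward in this direction
                    rr, cc = r + dr * k, c + dc * k
                    if not (0 <= rr < rows and 0 <= cc < cols):
                        break                        # left the image: give up this way
                    if grid[rr][cc] is not None:
                        neighbors.append(grid[rr][cc])
                        break
            if neighbors:                            # directions with no match are skipped
                filled[r][c] = tuple(
                    sum(p[i] for p in neighbors) / len(neighbors) for i in range(3)
                )
    return filled

grid = [[(0.0, 0.0, 1.0), None, (0.0, 0.0, 3.0)]]
print(fill_holes(grid)[0][1])   # → (0.0, 0.0, 2.0)
```

Directions whose walk leaves the image or finds no valid point simply contribute nothing, matching the rule that a direction with no neighbor within the offset is excluded from the average.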
4.2.7
A 3D graphical overlay displaying local mechanical properties of the tissue is generated in real time to assist surgeons in detecting hard lumps in tissue via palpation.
Our approach integrates this novel form of haptic feedback with the RSOI framework.
This is an alternative implementation of the techniques presented in Chapter 2. Our
technique consists of two parts: online estimation of environment mechanical properties and real-time graphical overlay of a stiffness map.
First, we describe the online estimation process, reviewing the approach from
Section 2.3 in Chapter 2. To estimate mechanical properties of the artificial prostate
tissue, we compute position and velocity from robot joint sensors and kinematics and
measure interaction force using a force/torque sensor. With those data, a recursive
least-squares (RLS) method is used to estimate unknown parameters of a mathematical environment model in real time. As in Chapter 2, we employed the Hunt-Crossley
model, $\hat{f} = \hat{a}_0 + \hat{k}x^{\hat{n}} + \hat{\lambda}x^{\hat{n}}\dot{x}$, where $\hat{f}$ is the estimated interaction force, $\hat{a}_0$ is an estimated position offset, $\hat{k}$ is an estimated stiffness coefficient, $\hat{\lambda}$ is an estimated damping coefficient, $\hat{n}$ is an estimated exponent, and $x$ and $\dot{x}$ are the position and velocity of the
slave manipulator, respectively. By running two RLS estimators in parallel [40], the
four unknown parameters are estimated as new measurements are added. We set a
force threshold to detect contact with the environment.
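A generic recursive least-squares update for a linear-in-parameters model is sketched below. This is a single estimator fitting a linear spring with an offset, a deliberately simplified stand-in: the actual system runs two RLS estimators in parallel [40] to handle the nonlinear Hunt-Crossley terms, and the class and parameter names here are illustrative.

```python
class RLS:
    """Standard recursive least-squares for a model y = phi . theta."""
    def __init__(self, n, p0=1e6, lam=0.99):
        self.theta = [0.0] * n                       # parameter estimates
        self.P = [[p0 if i == j else 0.0 for j in range(n)] for i in range(n)]
        self.lam = lam                               # forgetting factor

    def update(self, phi, y):
        n = len(phi)
        Pphi = [sum(self.P[i][j] * phi[j] for j in range(n)) for i in range(n)]
        denom = self.lam + sum(phi[i] * Pphi[i] for i in range(n))
        K = [v / denom for v in Pphi]                # gain vector
        err = y - sum(phi[i] * self.theta[i] for i in range(n))   # prediction error
        self.theta = [self.theta[i] + K[i] * err for i in range(n)]
        self.P = [[(self.P[i][j] - K[i] * Pphi[j]) / self.lam for j in range(n)]
                  for i in range(n)]
        return self.theta

# Example: fit f = a0 + k*x from noise-free samples (a0 = 0.5, k = 8.0).
est = RLS(2)
for x in [0.0, 0.1, 0.2, 0.3, 0.4]:
    theta = est.update([1.0, x], 0.5 + 8.0 * x)
print([round(t, 2) for t in theta])   # → [0.5, 8.0]
```

Each update costs only a few vector-matrix products, which is why the estimates can be refreshed at every sample during teleoperation as new force and position measurements arrive.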
Next, we describe the real-time graphical overlay. Once the contact is detected
and the estimators start running, the estimated stiffness value, k in the model, the
minimum and maximum stiffness values, and a palpated location in the 3D coordinates are sent from the robot computer to the vision computer. The position in the
3D coordinates is then converted to that in the 2D coordinates by the generic formula
of an optical focal length calculation for the left and right images. To indicate the
hardness of the palpated location, the stiffness value is converted to a hue value in the
hue-saturation-lightness color space. We used a fixed range of stiffness values, so the
minimum and maximum stiffness values were always the same during the palpation
task. However, the stiffness-to-color mapping could be updated as the range of the
estimated stiffness values changes. In the graphical overlay, the minimum stiffness
(soft region) corresponds to green and the maximum (hard region) corresponds to
red through yellow. A colored circle is overlaid on the left and right images. We
have less confidence in the estimated stiffness the farther a pixel in the camera images is from the center of the palpated point. A weighted Gaussian probability density function, $h(x) = \exp(-(x-\mu)^2/\sigma)$, is used, where the term $(x-\mu)^2$ is the squared distance between the center of the palpated point and the surrounding regions in the 2D camera images, and $\sigma$ determines the size of an overlaid circle. The range of saturation is between 0 and 1 since the function is normalized. The lightness value is always 0.5 so that the graphical overlay is in color. When
multiple circles overlap in the images, OpenGL's blending function, glBlendFunc, is
used. The function combines new and existing images pixel by pixel in RGB mode.
We apply a translucent ratio, α, to the graphical overlay, so the alpha value is always
multiplied with the incoming image. The translucent effect allows the user to clearly
see the original camera images behind the overlay. The resulting overlay appears 3D
from the user's perspective in the stereoscopic 3D display.
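The stiffness-to-color mapping described above can be sketched as follows: stiffness is mapped linearly to hue from green (soft) through yellow to red (hard), and the Gaussian weight drives saturation. The exact hue endpoints and parameter names are assumptions for illustration.

```python
import colorsys
import math

def stiffness_to_rgb(k, k_min, k_max, dist2=0.0, sigma=1.0, lightness=0.5):
    """Map a stiffness estimate to an RGB color: minimum stiffness -> green
    (hue 120 deg), maximum -> red (hue 0 deg) through yellow. Saturation falls
    off with the squared distance from the palpated point via exp(-dist2/sigma)."""
    t = min(max((k - k_min) / (k_max - k_min), 0.0), 1.0)   # normalized stiffness
    hue = (1.0 - t) * (120.0 / 360.0)        # 120 deg = green, 0 deg = red
    saturation = math.exp(-dist2 / sigma)    # Gaussian weight in [0, 1]
    return colorsys.hls_to_rgb(hue, lightness, saturation)

print(stiffness_to_rgb(9.5, 5.0, 9.5))   # hardest stiffness → (1.0, 0.0, 0.0), pure red
```

A lightness of 0.5 keeps the overlay fully colored, matching the fixed lightness used in the system, and the saturation term reproduces the fading of confidence away from the palpated point.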
4.2.8
For haptic virtual and augmented environments, the 3D surface of an object can
be represented by a number of methods, e.g., points, meshes, and splines. Ruspini et
al. [134] developed a proxy method for interacting with complex 3D virtual environments composed of a large number of polygons. The proxy always moves in free space or
on the surface, but never penetrates the object. Cotin et al. [30] reconstructed
an elastic deformable soft tissue made of simplex meshes [38] from 3D medical images.
They developed a surgical simulator that includes tissue deformation, collision detection, and force feedback. Dewan et al. [39] used a stereo reconstruction technique
with third-order tensor B-splines to estimate surface geometry for guidance virtual
fixtures. They employed an admittance-type manipulator to guide the user along the
surface in both position and orientation. Hua and Qin [70] developed a dynamic
volumetric modeling technique for 3D haptic interaction. In order to further design
and edit the 3D objects in real time, a user is able to manipulate existing volumetric
datasets as well as point clouds, and the object is approximated by B-splines. Li
et al. [92] developed a real-time task-based control of a surgical robot and tested
constraint-based virtual fixtures on a phantom skull model obtained from registered
CT scans. The 3D surface model of the skull was composed of 99,000 vertices and
182,000 triangles.
In this study, we use a point cloud to reconstruct the 3D surface of the environment to implement FRVFs, using the RSOI for efficient and general implementation
based on acquired stereo vision data. We employ an impedance-type manipulator to
interact with the FRVFs. Our approach of force feedback computation is intuitive and
computationally inexpensive. Furthermore, the point cloud can be easily extended
7. A position average filter and a normal average filter are applied to make the
surface of the FRVF smoother (Section 4.2.8.2).
8. All the points in the filtered FRPC are offset by a certain amount from the original
surface position to create virtual walls (Section 4.2.8.2).
9. During teleoperation, if the slave manipulator enters the forbidden region, reaction force is computed and returned to the user (Section 4.2.8.2).
4.2.8.1
In the FRVF mode of the RSOI, an interactive selection window was implemented
to allow the user to define a region of interest (ROI) and a forbidden region. A 2D
rectified image of the scene is displayed on the monitor, and the user draws a polygon
with multiple mouse clicks on the image. The polygon defines the 2D boundary of the
selected region. A minimum of three points is required to define both of the regions;
however, more points can define a smoother and more precise boundary. The ROI
is selected as a quadrilateral and always contains the forbidden region. Therefore,
the forbidden region is a subset of the ROI. After the user confirms the ROI, the
program applies the interpolation technique and then extracts the 3D coordinates of
the forbidden-region polygon surface from the ROI point cloud. Although the ROI
point cloud is computed, only the forbidden-region point cloud (FRPC) is sent to the
robot computer. Figure 4.7 shows a screenshot of the interactive selection view. The
Figure 4.7: A screenshot of the interactive selection view shows the user-defined region
of interest (red rectangle) and forbidden region (green polygon).
ROI boundary is indicated by the red rectangle and the forbidden region boundary
by the green polygon.
4.2.8.2
The goal of a FRVF is to prevent the user from entering an undesired region, or
at least generate sufficient force to make the user aware that he is entering such a
region. Because our FRVFs are rendered as haptic virtual walls (unilateral springs),
the stiffness of the FRVF is limited to ensure stability in the face of non-idealities
such as sampling rate, sample-and-hold, and encoder resolution using standard approaches [13, 40]. In addition, the surface of the FRVF must be smooth so that the
user does not encounter discontinuities in the surface that result in unexpected force
jumps or oscillations. Our method of creating a smooth, point-cloud-based FRVF is
divided into offline and online computation. In the offline step, we pre-process the
point cloud to make the surface of the FRVF smooth. In the online step, we compute appropriate reaction force to the user. We describe these steps in the following
paragraphs.
In the offline step, we pre-process the transferred FRPC by (1) downsampling the
number of points, (2) finding n closest points to each point, (3) applying a position
average filter, (4) applying a normal average filter, and (5) adding an offset to create
virtual walls above the tissue surface.
Our software framework allows transmission of up to 8,000 points at one time.
However, using more points does not always make the surface of the FRVF smoother.
As discussed later, we find two neighboring points to each point to compute a plane
and thus a normal vector. If those three points are located very close to each other,
even small position errors of the points significantly affect direction of the normal
vector. This implies that if the computed points are noisy, a smaller number of
points in the point cloud may result in a smoother surface. Thus, uniform
downsampling of the number of points in the transferred FRPC may be necessary if
the number of points in the transferred FRPC is large.
Prior to applying the two smoothing filters, we perform a nearest neighbor search
to find n closest points to each point in the FRPC. Those n closest points are used
to smooth out position of each point and direction of a corresponding normal vector.
We first compute the distance between each pair of points and use the quicksort [65]
algorithm to sort the computed distances in descending order. We then record the n closest
points to each point so that those data can be easily extracted for later use.
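The neighbor-table construction can be sketched as follows; the thesis sorts pairwise distances with quicksort [65], and Python's built-in `sorted`/`sort` (Timsort) stands in for it here.

```python
import math

def n_closest(points, n):
    """For each point in the cloud, record the indices of its n closest points
    (Python's built-in sort stands in for the quicksort used in the thesis)."""
    table = {}
    for i, p in enumerate(points):
        dists = [(math.dist(p, q), j) for j, q in enumerate(points) if j != i]
        dists.sort()                          # ascending by distance
        table[i] = [j for _, j in dists[:n]]  # keep only the n nearest indices
    return table

pts = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (5, 5, 5)]
print(n_closest(pts, 2)[0])   # → [1, 2]
```

Precomputing the table once offline is what lets the two smoothing filters below look up each point's neighbors in constant time.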
For each point in the FRPC, the position filter takes an average of the position
of the point and n neighboring points, and replaces the original position of the point
by the averaged one. In the process of computing the average, a weighting factor,
$\beta$ ($0 \le \beta \le 1$), is assigned to the original point so that it may be more dominant than
the neighboring points. For example, for a point $P_m$, assume its $n$ closest points
are $P_{m_1}, \ldots, P_{m_n}$. The position of the reassigned point, $P_m'$, can be computed as
$$
P_m' = \beta P_m + \frac{1-\beta}{n} \sum_{i=m_1}^{m_n} P_i
$$
The filter is applied to the height of each
point in the local coordinates because the smoothness of the surface of the FRVF
mostly depends on the height of the points in our experimental setup.
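The position filter above can be sketched as follows; $\beta$ is the weighting factor assigned to the original point (the symbol is reconstructed), and for simplicity this sketch blends all three coordinates rather than only the height.

```python
def position_filter(points, neighbors, beta=0.25):
    """Blend each point with the mean of its n closest neighbors:
    P' = beta * P + (1 - beta)/n * sum(neighbors)."""
    out = []
    for i, p in enumerate(points):
        idx = neighbors[i]                    # precomputed n-closest indices
        n = len(idx)
        mean = [sum(points[j][c] for j in idx) / n for c in range(3)]
        out.append(tuple(beta * p[c] + (1 - beta) * mean[c] for c in range(3)))
    return out

pts = [(0.0, 0.0, 0.0), (0.0, 0.0, 4.0), (0.0, 0.0, 8.0)]
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(position_filter(pts, nbrs, beta=0.5)[1])   # → (0.0, 0.0, 4.0)
```

With beta = 1 the cloud is unchanged; smaller values pull each point toward its neighborhood mean, which is what suppresses height noise in the virtual-wall surface.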
Next, the effective normal to the surface represented by the position-averaged
FRPC is computed. For each point, we compute a unit normal vector by taking
an average of the $n-1$ normal vectors derived from its $n$ closest points. For example,
for a point $P_m$ with its $n$ neighboring points $P_{m_1}, \ldots, P_{m_n}$, we can find $n-1$ planes
such that one point on a plane is always $P_m$ and the other two points are $P_{m_i}$ and
$P_{m_{i+1}}$ ($1 \le i \le n-1$). By changing $i$ between 1 and $n-1$, we have $n-1$ planes and
thus $n-1$ normal vectors. We average them and normalize the result to assign a
unit normal vector for each point. There exist two possible unit normal vectors for a
plane (one pointing out of the surface and the other pointing into the surface),
and we choose the normal vector that points toward the camera (outside the tissue)
to correctly define the forbidden region. Although it is beyond the scope of this
study, visualization of our FRVFs would be complicated since the normal average
filter changes the direction of a plane, and thus the boundaries between each plane
would not be straightforward.
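The normal average filter can be sketched as follows. Choosing the normal "toward the camera" is approximated here by a fixed upward reference direction, an assumption that matches the testbed geometry but not necessarily an arbitrary camera pose.

```python
def averaged_normal(p, neighbors, up=(0.0, 0.0, 1.0)):
    """Average the n-1 plane normals formed by p and consecutive neighbor pairs,
    flipping each so it points out of the surface (positive dot with 'up')."""
    def sub(a, b):   return [a[k] - b[k] for k in range(3)]
    def cross(a, b): return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]
    def dot(a, b):   return sum(a[k] * b[k] for k in range(3))
    total = [0.0, 0.0, 0.0]
    for q1, q2 in zip(neighbors, neighbors[1:]):     # n-1 planes (p, q_i, q_{i+1})
        nv = cross(sub(q1, p), sub(q2, p))
        if dot(nv, up) < 0:                          # pick the outward-pointing normal
            nv = [-c for c in nv]
        total = [total[k] + nv[k] for k in range(3)]
    mag = sum(c * c for c in total) ** 0.5
    return [c / mag for c in total]                  # normalize the averaged vector

print(averaged_normal((0, 0, 0), [(1, 0, 0), (0, 1, 0), (-1, 0, 0.0)]))
```

Averaging before normalizing means noisy individual plane normals partially cancel, which is the smoothing effect evaluated in Section 4.3.4.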
The final offline step is to offset the filtered FRPC away from the tissue surface
by a specified amount. In our testbed, we move the points in the positive vertical
direction of the local coordinate system to make virtual walls that prevent penetration
into the surface.
The point cloud computation, forbidden region definition, and filtering steps described so far are considered to be offline computations, since they can occur before
the user begins real-time teleoperation. Now, we describe the online steps that must
occur while the user is teleoperating. First, we perform the nearest neighbor search
again to find the closest point in the filtered FRPC to the user. Similarly to the
offline step, we compute the distance between the slave instrument tip and each point
in the FRPC, and then use the quicksort algorithm to find the closest point. Second, from the data structure constructed offline, we extract the unit normal vector
$\vec{n}_j = [a_j \;\; b_j \;\; c_j]$ corresponding to the closest point $P_j = [x_j \;\; y_j \;\; z_j]$. Then, the equation of the plane, $\Pi_j$, that contains the closest point is derived as $a_j x + b_j y + c_j z + d_j = 0$, where $d_j = -a_j x_j - b_j y_j - c_j z_j$. Third, we compute the distance, $D$, between the slave instrument tip, $[X_s \;\; Y_s \;\; Z_s]$, and the plane, $\Pi_j$, as $D = a_j X_s + b_j Y_s + c_j Z_s + d_j$; no normalization is needed because $\vec{n}_j$ is a unit vector. When the tip penetrates the virtual wall, the reaction force is
computed using Hooke's law: $\vec{f} = kD\vec{n}_j$, where $k$ is a virtual wall spring coefficient.
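The online steps can be sketched end to end as follows; the brute-force nearest-neighbor search, the signed-distance convention (negative below the wall), and the function names are assumptions for illustration.

```python
def frvf_force(tip, cloud, normals, k=160.0):
    """Online FRVF step: find the closest point, form its tangent plane
    a*x + b*y + c*z + d = 0, and return a Hooke's-law force if the tip is below it."""
    # 1) nearest-neighbor search over the filtered forbidden-region point cloud
    j = min(range(len(cloud)),
            key=lambda i: sum((tip[c] - cloud[i][c]) ** 2 for c in range(3)))
    a, b, c = normals[j]                       # unit normal stored offline
    x, y, z = cloud[j]
    d = -(a * x + b * y + c * z)
    # 2) signed distance of the tip from the plane (normal is unit length)
    D = a * tip[0] + b * tip[1] + c * tip[2] + d
    if D >= 0.0:                               # outside the forbidden region: no force
        return (0.0, 0.0, 0.0)
    # 3) spring force along the outward normal pushes the tip back out
    return (-k * D * a, -k * D * b, -k * D * c)

cloud = [(0.0, 0.0, 0.01), (0.05, 0.0, 0.01)]      # offset virtual-wall points
normals = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
print(frvf_force((0.0, 0.0, 0.005), cloud, normals))   # 5 mm penetration, k = 160
```

Only the nearest-neighbor search scales with the cloud size, which is why downsampling the FRPC (described earlier) also keeps this per-sample computation cheap.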
Figure 4.8: The experimental setup, showing the stereo camera, the slave manipulator, and the artificial tissue, with the local y- and z-axes indicated.
The virtual fixture was created on the master manipulator side in our implementation;
it could also be generated on the patient side [13].
4.2.9
Experimental Setup
To demonstrate the feasibility of the RSOI, we set up the slave robot, the artificial
prostate, and the stereo camera as shown in Figure 4.8. The master robot and the
stereoscopic 3D display were located at the operator's side for remote operation. On
the operator's side, a user held the end-effector of the master manipulator and looked
at the stereoscopic display while wearing 3D active shutter glasses. We tested the
performance of communication and registration, surface detection and reconstruction,
material property displays, and forbidden-region virtual fixtures.
4.3
Results
4.3.1
Communication and Registration
After communication between the robot computer and the vision computer was
confirmed, the three-point measurement method described in Section 4.2.4 was used
to register the robot, camera and local coordinates. The local coordinate system
was defined as the x-axis (left/right), the y-axis (backward/forward), and the z-axis
(down/up) when the user faced the slave manipulator (as shown in Figure 4.8). After
computing the homogeneous transformations $^L_RT$ and $^L_CT$, we measured the accuracy
of the registration. While we moved the slave instrument tip along the x- and y-axes of the local coordinates, we measured positions manually (using a ruler) and
computed them from the homogeneous transformation $^L_RT$. Over the tested workspace of
the slave instrument tip, the maximum difference between the manual and computed
positions was 2 mm.
4.3.2
To provide image features for stereo matching, we randomly drew lines and patterns on the tissue surface with colored
markers. Furthermore, we sprinkled pepper and coffee grounds on the surface to
provide visual surface texture (which is naturally present in live organs). Although
these techniques improved reconstruction of the 3D surface of the artificial prostate,
the point cloud still had missing points. We applied the interpolation technique to
restore missing points in the ROI point cloud. For a ROI whose size is approximately
0.30 × 0.22 m, we obtained 86,770 points as the original ROI point cloud. After
removing points that were outside the ROI, the interpolation technique restored some
missing points. The total number of points was 87,822 after interpolation. The area
of the forbidden region selected was approximately two thirds of a circle whose radius
is 0.05 m. For this area, we obtained $N_{iFRPC}$ = 4,632 points as the interpolated
FRPC. Figure 4.9 compares the original ROI point cloud (left) and the interpolated
ROI point cloud (right). Note that each point has an RGB value in the images, and
thus the point cloud looks like a texture map. The accuracy of the reconstructed
point cloud in the forbidden region is discussed in Section 4.3.1.
4.3.3
Figure 4.9: Comparison of the point clouds before interpolation (left) and after interpolation (right). Our interpolation technique successfully restored some of the missing
surface points.
We compared three representative mathematical models (the Kelvin-Voigt model, a 3rd order polynomial, and the
Hunt-Crossley model) to approximate tool-environment interaction. A user palpated
five locations within a soft region of the artificial tissue sample, and five locations
within a hard region, using the teleoperator. The position and velocity of the slave
manipulator and the interaction force between the slave manipulator and environment were recorded. The accuracy of the models was determined by estimating the
interaction force for each model via least squares, and comparing the result to the
measured force. For self-validation, the estimated interaction force was computed
using estimated model parameters from the same data set, while in cross-validation,
the estimated force was computed using estimated model parameters from a different
data set. A nonlinear Hunt-Crossley model was the best fit for the artificial prostate
material in both self-validation and cross-validation tests. Figure 4.10 shows stiffness
plots for palpation of a soft region and a hard region, using the Hunt-Crossley model
Figure 4.10: Tool-environment interaction during palpation of soft and hard regions
of the artificial tissue. The Hunt-Crossley model approximates both nonlinear curves
well.
Figure 4.11: A camera image of the 3D material property overlay after several palpations (left) and a color bar displaying the stiffness range (right). A hard lump is
located on the right side of the artificial tissue sample.
graphical overlay. Based on the preliminary palpation results, the estimated stiffness
range was set to 5.0–9.5. A translucent ratio of α = 0.5 was applied so that the
original camera images were visible behind the stiffness map overlay. For the Gaussian distribution function, the value of σ was chosen such that the size of an overlaid
circle corresponded to that of the palpation disc in the camera images. Figure 4.11
shows an image of the artificial prostate tissue with the graphical overlay after several
palpations. As indicated by the colored stiffness map, a hard lump was found on the
right side of the artificial tissue model.
4.3.4
We evaluated the process and performance of FRVFs using the artificial prostate
tissue as the environment. We first analyzed the effects of the following three factors
on the surface smoothness of the FRVF: spatial distribution of the points (position
average filter), direction of the normal vectors (normal average filter), and the number
of points in the point cloud. In the following analysis, we only consider half of the
points in the interpolated FRPC due to the symmetry of the artificial prostate tissue
and the selected forbidden region. The number of points used in the first two analyses
is $N_{iFRPC}/2$ = 2,316.
The position average filter is expected to smooth the surface. We first used a
sixth-order polynomial to fit a curve to a 2D slice of the interpolated FRPC. (The
2D slice is that seen from an x-z plane in the local coordinates, where the artificial
prostate tissue is sliced to reveal two hills and a valley.) The reference surface curve
was obtained from the CAD drawing of the artificial prostate, and compared with the
sixth-order polynomial fit to the slice of FRPC data. With six neighboring points,
i.e., n = 6, and a weighting factor of $\beta$ = 0.25 for the position filter, the $R^2$ value
between the two curves (representing the proportion of variability in the polynomial
fit to the FRPC that is accounted for by the reference surface curve) was 0.9960.
Without applying the position average filter, the R2 value was 0.9935.
Next, the performance of the normal average filter was evaluated. From the equation of the reference surface curve, we computed a normal vector for each point. This
was the ground truth and compared with a normal vector computed from the FRPC.
For n = 6, there are five surrounding normal vectors for each point. The average of
the five vectors was normalized to compute a unit normal vector for each point, and
an angular error was computed with respect to the ground truth of the normal vector.
Figure 4.12: Cross-section view (on the local x-z plane) of points in the FRPC and
the corresponding normal vectors before (top) and after (bottom) the position and
normal average filters were applied.
Without applying the normal average filter, the average and maximum angle errors
were 25.9 and 105.7 degrees, respectively. With the normal average filter, they were
9.2 and 42.1 degrees, respectively. Without using the position average filter before
applying the normal average filter, the average and maximum angle errors were 15.6
and 50.5 degrees, respectively. Figure 4.12 compares the points and the corresponding
unit normals before and after position and normal average filters were applied.
In addition, we examined the relationship between the number of points and the
Figure 4.13: In simulation, a user travels on the reference surface curve from the
center to the right limit of the artificial tissue. The number of points in the point
cloud changes the surface of the FRVF. We observed how the magnitude and the
angle of the force changed as the user moved.
smoothness of the virtual fixture surface. We simulated the reaction force vector
resulting from a user tracing the surface of the artificial tissue in the virtual wall, and
observed how the magnitude and the angle of the force changed as the user moved. In
simulation, the x-position of the user was incremented by 0.0005 m starting from the
center (x = 0) to the right limit of the interpolated FRPC (x = 0.05). The z-position
of the user was always on the reference curve obtained from the 3D CAD model of
the artificial tissue. We offset the points in the interpolated FRPC up by 0.01 m in
the vertical direction in the local coordinate system to generate virtual walls above
the artificial tissue surface. Figure 4.13 illustrates this simulation.
Due to the distribution of the points in 3D space, we ran several simulations by
changing the y-position of the user, because the reaction force to the user depended on
which x-z slice of the point cloud was used. During the simulation, we recorded the change
in magnitude and angle of the force the user would feel at every increment. We then
Figure 4.14: Relationship between the number of points and the average change in
force magnitude (top) and force angle (bottom) the user would feel at every increment.
computed an average. This was done for the number of points $8 \le N \le 2{,}316$.
Figure 4.14 summarizes the relationship between the number of points and average
change in magnitude and angle of the force vectors.
We also tested the smoothness of the FRVF surface using a PHANTOM Premium
by changing the number of points. We downsampled the number of points in the
interpolated FRPC from $N_{iFRPC}$ = 4,632 to 220 for the defined forbidden region,
because the downsampled surface felt smooth enough to avoid unexpected force jumps or oscillations
while remaining similar in shape to the artificial tissue surface.
After applying the two smoothing filters and downsampling the number of points,
we demonstrated a surface-tracing task on the artificial prostate tissue. A force/torque
Figure 4.15: The trajectories of the slave manipulator tip and the force measured
by the force sensor with FRVF (top) and without FRVF (bottom). Blue dots are
position of the slave manipulator and red arrows are the measured interaction forces.
sensor was attached to the tip of the slave instrument. A stiffness of k = 160 N/m
was used to create a virtual wall for each plane defined by a point and the associated
unit normal. The goal of the task was (1) to avoid contacting the environment surface within the forbidden region and (2) to achieve a smooth motion on the surface
of the virtual fixtures. A user manipulated the master robot in order to make the
slave trace the surface of the artificial tissue with and without the virtual fixture.
Figure 4.15 compares sample trajectories of the slave robot and corresponding force
measurements from the force sensor with and without the FRVF.
4.4
Discussion
4.4.1
Communication and Registration
Communication between the vision computer and the robot computer was always
stable and we did not lose any packets during testing. Although the communication
protocol was preliminary (two computers controlled four hardware components, and
the software was not completely cross-platform), it demonstrated the feasibility of an
RSOI in which the components of the system are interchangeable with a standard
network protocol.
Registration errors could potentially result in misplacement of the stiffness map
in the camera coordinate system, due to conversion of palpated location data from
robot to local coordinates, and then local to camera coordinates. Misplacement of
the point cloud in the robot coordinate system could also occur due to conversion
of the FRPC from camera to local coordinates, and then local to robot coordinates.
Such misplacements would be problematic for the material property display, resulting
in incorrect localization of a tumor, and the forbidden-region virtual fixtures, such
that the patient-side manipulator might not be prevented from entering the defined
forbidden region. Due to the size of the artificial prostate tissue, the area of the
stiffness map, and the area covered by the point cloud in our testbed, the observed
registration errors did not result in noticeable errors in the material property display
or forbidden-region virtual fixtures. In microsurgery cases, however, even submillimeter registration errors could result in surgical errors. This could be addressed by
taking the average of multiple homogeneous transformations created using separate
calibrations. Another solution would be to implement a program that allows the user
to manually adjust the position offset of the material property display and/or the
point cloud based on 3D visualization.
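One way to realize the transform averaging suggested above is to average the translations directly and project the element-wise mean of the rotation blocks back onto SO(3); this sketch is ours, not the dissertation's implementation, and assumes the calibration transforms differ only by small errors:

```python
import numpy as np

def average_transforms(T_list):
    """Average 4x4 homogeneous transforms from repeated calibrations.

    Translations are averaged directly; the mean of the rotation blocks
    is projected back onto SO(3) with an SVD (the orthogonal polar
    factor), which is well behaved for small rotation differences."""
    Ts = np.asarray(T_list)
    t_mean = Ts[:, :3, 3].mean(axis=0)
    U, _, Vt = np.linalg.svd(Ts[:, :3, :3].mean(axis=0))
    R = U @ Vt
    if np.linalg.det(R) < 0:          # enforce a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t_mean
    return T
```

For example, averaging two calibrations that disagree by ±0.1 rad about one axis yields the intermediate rotation, which is the behavior one would want when combining separate stereo-camera/robot calibrations.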
4.4.2 Surface Detection and Reconstruction

Surface detection and reconstruction functioned as expected and was straightforward to implement within the RSOI framework. The reconstruction was useful for
both overlaying graphics describing material properties in an intuitive fashion and
creating forbidden-region virtual fixtures, as described in the following two subsections.
The surface of the artificial prostate lacked the visual texture found in live organs. Thus, the addition of random patterns to the surface, accomplished by drawing
lines and sprinkling pepper/coffee grounds, was required to generate sufficient image
features for surface detection. We also placed in the background color images with
non-uniform patterns to improve surface detection. Because the colors of an in vivo
tissue surface vary significantly, and vessels on the surface of an organ create distinct
visual textures, our efforts to enhance the image features of artificial tissues would
not be required in actual robot-assisted surgery [154].
The test of the Forbidden-Region Virtual Fixture demonstrated excellent accuracy.

4.4.3 Material Property Display
The material property display involved all aspects of the RSOI: teleoperation and
measurement of haptic interactions, stereo vision to detect the surface of the environment, and use of the stereoscopic display to create a graphical overlay representing
tissue stiffness. The RSOI framework enabled implementation of an effective material
property display in a modular fashion, where common changes (such as using a different material model) can be easily realized. The resulting material property display
was straightforward for the user to generate, and clearly showed the location of a
hard lump in an artificial prostate (Figure 4.11). The method should be particularly
useful in systems that do not have real-time haptic feedback directly to the user's hands.
Nonlinearity and hysteresis are present in the force-displacement curves used for
material modeling. Because the Hunt-Crossley model has both a nonlinear stiffness
term and a damping parameter, it approximated this behavior better than the other
models tested. There may exist other mathematical models that also fit the force-displacement data well. Our goal was to find a model that was not only accurate
for the artificial prostate used in the test, but also useful to locate a lump by clearly
differentiating hard and soft tissue. The Hunt-Crossley model is accurate for our
artificial tissue sample, and has a stiffness parameter that can be used to discriminate
the hard lump from the soft surrounding material. For different materials or biological
tissues, other models might be more effective.
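A minimal sketch of such a model fit, assuming the common Hunt-Crossley form f = k·xⁿ + λ·xⁿ·ẋ (a nonlinear stiffness term plus a depth-dependent damping term, as described above); the parameter values, exponent grid, and synthetic palpation data are illustrative, not the dissertation's:

```python
import numpy as np

def fit_hunt_crossley(x, xdot, f, n_grid):
    """Fit f = k*x**n + lam*x**n*xdot: search the exponent n over a grid
    and solve a linear least-squares problem for (k, lam) at each n."""
    best = None
    for n in n_grid:
        Phi = np.column_stack([x**n, (x**n) * xdot])
        theta = np.linalg.lstsq(Phi, f, rcond=None)[0]
        err = np.sum((Phi @ theta - f) ** 2)
        if best is None or err < best[0]:
            best = (err, n, theta)
    _, n, (k, lam) = best
    return k, lam, n

# Synthetic palpation data generated from known (illustrative) parameters.
rng = np.random.default_rng(0)
x = rng.uniform(0.001, 0.01, 200)       # indentation depth [m]
xdot = rng.uniform(-0.05, 0.05, 200)    # indentation velocity [m/s]
f = 2000.0 * x**1.5 + 50.0 * x**1.5 * xdot
k, lam, n = fit_hunt_crossley(x, xdot, f, np.linspace(1.0, 2.0, 101))
```

Because the model is linear in (k, λ) once n is fixed, the fit reduces to a one-dimensional search plus ordinary least squares, which keeps it fast enough to refit as palpation data accumulate.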
Although it is beyond the scope of this study, an estimated size of the detected
lump, accuracy of the detected lump location, and the use of the display to determine
appropriate surgical margins should be thoroughly analyzed in order to consider this
method for clinical use. It is anticipated that repeatability of a stiffness map would
be less consistent with inhomogeneous materials, since we estimated a model based
on tool-tissue interaction data that is controlled by the user rather than an automated process. Finally, we used a fixed range of estimated stiffness values based on
preliminary palpation data; however, this range would naturally depend on the environment. Some users might prefer a dynamic range of the stiffness, where maximum
and minimum stiffness values are updated as new data are added.
4.4.4 Forbidden-Region Virtual Fixtures
The FRVF test demonstrated the ability of the RSOI to use stereo images for
computing and rendering haptic surface boundaries that can be used to protect forbidden regions of the workspace. Computer vision was successfully used to define
the FRVF surface, and filtering techniques ably created smooth surfaces for haptic
rendering. Experiments during real-time teleoperation and example force profiles
generated using simulation based on actual point cloud data showed the effectiveness
of the FRVFs to prevent penetration into the curved surface of the artificial prostate
model.
We analyzed how the position average filter, normal average filter, and the number
of points affected the smoothness of the FRVF surface. The position average filter
alone did not significantly improve R2 between the reference surface curve and the
fitted surface curve from the point cloud, because the original position error was
negligible. The normal average filter significantly improved the directions of the
normal vectors for smooth haptic rendering. Furthermore, if the position average
filter was applied prior to the normal average filter, we observed smaller angle errors.
The combination of the two filters was effective because the position average filter
corrects the spatial distributions of points in the point cloud, and then the normal
average filter adjusts the direction of the normal vectors so that they point at the
direction perpendicular to the surface tangent. More consistent point spacing results
in a smoother set of local virtual walls. The number of points in the point cloud
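The two-stage smoothing analyzed above (position average filter first, then normal average filter) can be sketched as follows; the k-nearest-neighbor neighborhoods, the brute-force distance computation, and the sample data are our assumptions, not the implementation used in the experiments:

```python
import numpy as np

def smooth_point_cloud(points, normals, k=8):
    """Two-stage smoothing of an interpolated point cloud: average each
    point over its k nearest neighbors (position average filter), then
    average and renormalize the unit normals over the same neighborhoods
    (normal average filter)."""
    # Pairwise distances (brute force; adequate for a few thousand points).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    nbr = np.argsort(d, axis=1)[:, :k]          # k nearest, incl. self
    p_s = points[nbr].mean(axis=1)              # position average filter
    n_s = normals[nbr].mean(axis=1)             # normal average filter
    n_s /= np.linalg.norm(n_s, axis=1, keepdims=True)
    return p_s, n_s
```

The filter order mirrors the observation in the text: correcting the spatial distribution of the points first makes the subsequent normal averaging more effective, yielding a smoother set of local virtual walls.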
4.5 Conclusions
Chapter 5
Conclusions and Future Work
In robot-assisted minimally invasive surgery (RMIS), acquisition of mechanical
and/or geometrical models of a patient's tissue provides us with stiffness distribution, dynamic behavior, and geometrical shapes/boundaries of the tissues. Use of
those models in diagnosis or surgical planning facilitates lump detection in soft tissues, teleoperator controller design, and forbidden-region avoidance during a procedure. This dissertation focuses on the development, evaluation, and application of
environment models in surgical scenarios to provide useful feedback to the surgeons.
In the long term, we aim to enhance the surgeon's performance and the patient's clinical outcomes by applying the tissue models to teleoperated robot-assisted surgery.
This chapter summarizes the studies presented in this thesis and addresses possible
future directions of the work.
5.1 Summary
5.2 Future Work
the reconstructed 3D surface of the tissues as necessary. One limitation of the tissue
model we employed is that we cannot obtain the depth of the tissue abnormalities. A
mechanics-based approach may be considered (e.g., [126, 136]). Although the intention of the proposed method is to assist a surgeon in detecting a lump location in RMIS,
this could be automated by properly defining the stiffness threshold of the stiffness
map.
The tissue property overlay technique can be improved by incorporating the accuracy of the estimated stiffness. Although the Hunt-Crossley model performed best
in the preliminary experiments with both artificial and ex vivo tissues, the accuracy
of the estimated ex vivo tissue dynamics was worse, particularly in cross-validation
(Tables 2.2, 2.4, and 2.5). In a real-time palpation trial, we could display the estimation accuracy by changing the color saturation of the overlay. In vivo tissues might exhibit better lump
detection accuracy than ex vivo tissues because higher elasticity of the in vivo tissues
is expected to alleviate the tissue relaxation. We can also improve visualization of
the surgical instrument by segmenting it in the image and placing it over the semitransparent display of tissue properties. Finally, such a system should be tested on
in vivo tissues, including live animal experiments, to have a better understanding of
dynamic behavior of soft tissues and hard lumps.
The acquired tissue properties can be used in automated diagnosis and surgical
planning, extracted for the creation of realistic patient-specific surgical simulators,
and used to improve the design of a bilateral teleoperator controller. In
Chapter 3, using a teleoperator simulation, we showed that consideration of the estimated environment model relaxes stability conditions and makes the system more
robust. The models used in the simulation are quite simplified, and the analysis is
only applicable at a local point due to the linearity assumption. Thus, more thorough
analysis is required before implementing the proposed controller on a real teleoperator,
particularly a surgical teleoperator that interacts with a patient. Also, if the environment model is updated online, further discussion is required since our controller
design assumes that the estimated environment model does not change. Furthermore,
the effects of time delay on the proposed controller may be of interest toward real
telesurgery.
The robotic surgery open interface presented in Chapter 4 can be improved by
incorporating all the features mentioned above. The RSOI should be extended by
implementing a more complete message set that includes control commands between
a master robot computer and a slave robot computer as well as enables numerous plug-in components (e.g., force and tactile sensing, vibrotactile display, and preoperative
images). We are also interested in extending the software framework to interface
with various types of surgical robots. Other areas of future work include providing
platform-independent operation and performing additional experiments with RSOI
plug-ins to demonstrate performance enhancement in clinical tasks.
Appendix A
Multi-Estimator Technique for
Environment Parameter
Estimation During
Telemanipulation
In this appendix, we report automated environment parameter identification methods for bilateral telemanipulation, with a focus on surgical applications. We first
present a multi-estimator technique and demonstrate that, in practice, it finds the best
estimator for a Kelvin-Voigt material. Using a one-degree-of-freedom teleoperation
system, cubes with various material properties were palpated to acquire data under
three control conditions: teleoperation without persistent excitation, teleoperation in
which the operator mimics persistent excitation, and autonomous control with persistent excitation. The estimation performance of three online estimation techniques
(recursive least-squares, adaptive identification, and multi-estimator) is compared.
Neither the cube type nor the control condition affected the estimation performance.
By considering practical aspects, recursive least-squares or multi-estimator would be
suitable for online estimation of tissue properties.
A.1 Introduction
models will help improve transparency of the teleoperation system during a surgical
procedure, evaluate tissue health status for online diagnosis, and create realistic simulations based on the estimated tissue models. For example, in long-distance surgical
teleoperation, patient models updated during the procedure could be used to increase
realism and stability. Even if a surgical teleoperator does not provide direct haptic
feedback to the surgeon, acquired information such as estimated stiffness over the tissue surface could assist the surgeon in detecting the locations of tumors. In addition,
in vivo models may be added to a database to recreate tool-tissue interactions for
realistic surgical simulation.
Considering the application of robot-assisted surgery, we implement several estimation techniques and use them to find mechanical properties of artificial tissues.
We use a one-degree-of-freedom (DOF) teleoperator under three conditions, from
the most to least desirable for surgical implementation: teleoperation without persistent excitation, teleoperation in which the operator mimics persistent excitation,
and autonomous control with persistent excitation. Persistent excitation would be a
deviation from the natural motions/forces applied in surgery, but may be required to
acquire complex tissue models for certain estimation methods.
A.1.1 Previous Work
In this work, we focus on online parameter estimation. There are many parameter
estimation techniques that can be implemented online. Recursive least-squares (RLS)
has been widely employed. Love and Book, [95], used RLS to estimate unknown
stiffness of an environment to improve the performance of an impedance controller.
Wang et al. extended the work of [95] in order to enhance the fidelity of a teleoperation
system as well as to achieve robust stability [159]. RLS was also used by Diolaiti et
al., [40], to estimate unknown environment parameters of the Hunt-Crossley model.
Colton and Hollerbach, [27], have implemented exponentially-weighted recursive least-squares to identify models of a class of nonlinear passive devices. RLS is a simple but powerful estimation technique if the environment model is linear in the unknown parameter vector.
Adaptive control can also be used as an estimation method. Seraji and Colbaugh, [137], have used indirect adaptive control to achieve force tracking within impedance control and to estimate the unknown damping and stiffness of the environment. Singh and Popa,
[139], implemented a model-reference adaptive controller to estimate the stiffness of
an unknown environment. Hashtrudi-Zaad and Salcudean, [58], employed composite
adaptive control to achieve transparency for a teleoperator and to estimate unknown
properties of a second-order linear environment model. Erickson et al., [44], reviewed
and compared four parameter identification algorithms: RLS [95], indirect adaptive
control [137] with some modification, model-reference adaptive control [139], and a signal processing method they proposed. They concluded that indirect adaptive control,
given persistent excitation, showed the best performance among the four schemes.
Misra and Okamura, [105], extended the work of [44, 137] to simulate estimation
A.2 Estimation Methods

f = bẋ + kx    (A.1)
where f is the interaction force between the robot and the environment, and b, k,
and x are damping, stiffness, and displacement of the environment, respectively.
A.2.1 Least-squares estimation (LSE)
The least-squares estimate (LSE) minimizes the mean square value of the model
error. This offline identification method is used to obtain the reference model of the
environment since no official mechanical properties of the soft cubes are available.
Let us define φ(t) = [ẋ(t), x(t)]^T and θ = [b, k]^T, so that the model (A.1) can be written as

f(t) = φ(t)^T θ    (A.2)

The batch least-squares estimate over n samples is

θ̂ = (X X^T)^{-1} X F    (A.3)

where

X = [ ẋ_1  ẋ_2  ···  ẋ_n
      x_1  x_2  ···  x_n ]    (A.4)

F = [f_1, f_2, ···, f_n]^T    (A.5)
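A sketch of the batch estimate (A.3)-(A.5) on synthetic Kelvin-Voigt data; the plant values (b = 10 Ns/m, k = 2200 N/m) and the sinusoidal trajectory are illustrative, not measured cube data:

```python
import numpy as np

def lse_reference(xdot, x, f):
    """Offline least-squares estimate of (b, k) for f = b*xdot + k*x,
    used as the reference model when no official properties exist."""
    X = np.vstack([xdot, x])                  # 2 x n regressor matrix
    theta = np.linalg.lstsq(X.T, f, rcond=None)[0]
    return theta                              # [b_hat, k_hat]

t = np.linspace(0.0, 1.0, 1000)
x = 0.005 * np.sin(2 * np.pi * t)             # displacement [m]
xdot = 0.005 * 2 * np.pi * np.cos(2 * np.pi * t)
f = 10.0 * xdot + 2200.0 * x                  # Kelvin-Voigt plant
b_hat, k_hat = lse_reference(xdot, x, f)
```

Because the whole data set is used at once, this offline estimate serves as the reference against which the online methods below are judged.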
A.2.2
For bilateral teleoperation, estimation needs to be updated online as data become available. Recursive least-squares (RLS), which is a common online estimation method, is used. Let θ̂ = [b̂, k̂]^T, where b̂ and k̂ are the estimates of b and k, respectively. The parameter vector is calculated every sample time as [94]:

θ̂_n = θ̂_{n-1} + L_n (f_n − φ_n^T θ̂_{n-1})

L_n = P_{n-1} φ_n (1 + φ_n^T P_{n-1} φ_n)^{-1}    (A.6)

P_n = (I − L_n φ_n^T) P_{n-1}
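The recursion (A.6) can be sketched as follows, with P0 = 100 I as in (A.25) and a 1 kHz update rate; the sinusoidal plant data are illustrative, and the forgetting factor of 0.98 used in the experiments is omitted for simplicity, so the estimate carries a small bias from the finite initial covariance:

```python
import numpy as np

def rls_step(theta, P, phi, f):
    """One recursive least-squares update (A.6) for f = phi^T theta."""
    L = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + L * (f - phi @ theta)
    P = (np.eye(len(theta)) - np.outer(L, phi)) @ P
    return theta, P

theta = np.zeros(2)                         # initial guess (b, k) = (0, 0)
P = 100.0 * np.eye(2)                       # P0 from (A.25)
dt = 0.001                                  # 1 kHz update rate
t = np.arange(0.0, 10.0, dt)
x = 0.01 * np.sin(2 * np.pi * t)            # sinusoidal (PE) displacement
xdot = 0.01 * 2 * np.pi * np.cos(2 * np.pi * t)
for xk, xdk in zip(x, xdot):
    phi = np.array([xdk, xk])
    f = 10.0 * xdk + 2200.0 * xk            # Kelvin-Voigt plant sample
    theta, P = rls_step(theta, P, phi, f)
b_hat, k_hat = theta
```

Only the two-element state (θ̂, P) must be stored between samples, which is what makes RLS attractive for online use during telemanipulation.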
A.2.3 Adaptive identification (AI)
f̂(t) = φ(t)^T θ̂(t)    (A.7)

θ̃(t) = θ̂(t) − θ    (A.8)

e(t) = f̂(t) − f(t)    (A.9)

Note that θ in (A.8) is the true parameter vector and is constant. To ensure parameter convergence, an update law is chosen as

dθ̂(t)/dt = −Γ φ(t) (f̂(t) − f(t))    (A.10)

where Γ > 0 is a gain matrix. We can compute the parameter vector θ̂(t) by integrating (A.10). By substituting (A.2) and (A.7)-(A.9) into (A.10), we obtain

dθ̃(t)/dt = dθ̂(t)/dt = −Γ φ(t) φ(t)^T θ̃(t) .    (A.11)
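Integrating (A.10) with forward Euler gives a simple implementation; the adaptation gains and plant values below are illustrative (the actual gains were hand-tuned per cube, and their values are not reproduced here):

```python
import numpy as np

def adaptive_identification(x, xdot, f, gamma, dt):
    """Gradient-type adaptive identification: integrate the update law
    (A.10), d(theta_hat)/dt = -Gamma*phi*(f_hat - f), with forward Euler."""
    theta = np.zeros(2)                    # initial guess (b, k) = (0, 0)
    for xk, xdk, fk in zip(x, xdot, f):
        phi = np.array([xdk, xk])
        e = phi @ theta - fk               # output estimation error
        theta = theta - dt * (gamma @ phi) * e
    return theta

dt = 0.001                                  # 1 kHz update rate
t = np.arange(0.0, 20.0, dt)
x = 0.01 * np.sin(2 * np.pi * t)            # sinusoidal (PE) displacement
xdot = 0.01 * 2 * np.pi * np.cos(2 * np.pi * t)
f = 10.0 * xdot + 2200.0 * x                # Kelvin-Voigt plant
gamma = np.diag([2e3, 2e5])                 # illustrative adaptation gains
b_hat, k_hat = adaptive_identification(x, xdot, f, gamma, dt)
```

As the discussion later notes, the convergence rate of this scheme is quite sensitive to the choice of Γ, which is the main practical drawback of AI relative to RLS and ME.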
A.2.4 Multi-estimator (ME)
The multi-estimator (ME) method is a part of supervisory control originally proposed by Morse, [110, 111]. The goal of supervisory control is the same as that of conventional adaptive control: to control uncertain systems by adaptation. The main
Figure A.1: Schematic of supervisory control shows three components: Multi-estimator E, Monitoring signal generator M, and Switching logic S. The controller and plant of the system are represented by C and P, respectively [62].
difference is that conventional methods design a controller and update unknown parameters in a continuous way, while supervisory control does so by means of logic-based switching. A large number of candidate controllers/estimators are prepared in
advance, and switching among them is automatically done by a supervisor. As shown
in Figure A.1, supervisory control consists of three components: multi-estimator E,
monitoring signal generator M, and switching logic S. In the original idea, the switching logic outputs a signal that is used to define the control law u. We only pursue
the estimation part in this paper, and thus the output from the switching logic is not
used to design the controller.
Figure A.2 shows a block diagram of a ME modified for our system. In our design,
there are m distinct candidate estimators.

[Figure A.2: Block diagram of the multi-estimator E modified for our system. Each candidate estimator output f̂_pi is compared with the measured force f to form an estimation error e_pi; the norms of the errors drive the monitoring signals μ_pi that feed the switching logic S.]

Each estimator accepts two signals, x and ẋ, and yields an estimated output f̂_pi (1 ≤ i ≤ m). The estimated output is compared
to the measured output f and we have an estimation error of

e_pi = f̂_pi − f .    (A.12)
The estimation error is passed to the monitoring signal generator, M, which outputs

μ_pi(t) = ∫₀ᵗ e^{−2λ(t−τ)} ‖e_pi(τ)‖ dτ    (A.13)

where λ > 0 is a forgetting factor. In discrete time with sampling interval T, the monitoring signal at t_n is

μ_p[t_n] = Σ_{k=0}^{n} e^{−2λ(n−k)T} ‖e_p[t_k]‖    (A.16)

so that

μ_p[t_{n+1}] = Σ_{k=0}^{n+1} e^{−2λ(n+1−k)T} ‖e_p[t_k]‖    (A.17)

= e^{−2λT} { Σ_{k=0}^{n} e^{−2λ(n−k)T} ‖e_p[t_k]‖ } + ‖e_p[t_{n+1}]‖    (A.18)

= e^{−2λT} μ_p[t_n] + ‖e_p[t_{n+1}]‖ .    (A.19)
In order for two estimators to yield the same amount of monitoring signal for t ∈ [t_p, t_q], the two estimation errors also need to be the same for t ∈ [t_p, t_q] from (A.19). Thus, the switching logic, S, chooses the estimator that yields the minimum amount of monitoring signal, μ. In supervisory control, the output, σ, is used to modify the control law for stabilizing the system. We do not use this signal to control the system in this paper. Therefore, the output, σ, is used to indicate which estimator best approximates the environment model.
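A compact sketch of the whole scheme, using the recursion (A.19) for the monitoring signals. The 41 × 193 candidate grid (m = 7913) and λ = 10 match the experimental setup described later, while the grid bounds and the plant values are our assumptions, chosen so that the plant lies exactly on the grid:

```python
import numpy as np

def multi_estimator(x, xdot, f, b_grid, k_grid, lam=10.0, dt=0.001):
    """Run all candidate Kelvin-Voigt estimators in parallel, accumulate
    the exponentially forgotten monitoring signals via the recursion
    (A.19), and let the switching logic pick the smallest one."""
    B, K = np.meshgrid(b_grid, k_grid, indexing='ij')
    b_c, k_c = B.ravel(), K.ravel()          # m = 41*193 = 7913 candidates
    mu = np.zeros(b_c.size)                  # monitoring signals mu_pi
    decay = np.exp(-2.0 * lam * dt)
    for xk, xdk, fk in zip(x, xdot, f):
        e = b_c * xdk + k_c * xk - fk        # estimation errors e_pi (A.12)
        mu = decay * mu + np.abs(e)          # recursion (A.19)
    i = np.argmin(mu)                        # switching logic S
    return b_c[i], k_c[i]

t = np.arange(0.0, 2.0, 0.001)
x = 0.01 * np.sin(2 * np.pi * t)
xdot = 0.01 * 2 * np.pi * np.cos(2 * np.pi * t)
f = 10.0 * xdot + 2250.0 * x                 # plant chosen to lie on the grid
b_hat, k_hat = multi_estimator(x, xdot, f,
                               np.linspace(-100.0, 100.0, 41),
                               np.linspace(0.0, 48000.0, 193))
```

Because no gradient or covariance is propagated, there is essentially no rise time: the candidate closest to the plant dominates the monitoring signals almost immediately, which matches the behavior reported in the results.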
A.3 Parameter convergence
For estimated parameters to converge to true values, the input signal needs to be rich enough to excite all the modes of the plant. This property is called persistent excitation [113] and is referred to as PE. If the regression vector φ is PE, the matrix P_n in (A.6) is nonsingular and thus the estimated parameters will be consistent for RLS [94]. In the case of AI, if φ is PE, the differential equation (A.11) is uniformly asymptotically stable [113]. The regression vector is therefore required to be PE for RLS and AI.
A.3.1 Parameterization redundancy
A.3.1.1
Lemma 1. Assume we have the true estimator that is exactly the same as the plant. There exist estimators that produce zero error and thus zero monitoring signal for all t if and only if x(t) = x₀ exp(−(Δk/Δb) t), where Δb and Δk are defined later.
Proof. [Necessary condition] Given the plant f = bẋ + kx, we have the true estimator f̂_p* = b* ẋ + k* x that is exactly the same as the plant, i.e. (b*, k*) = (b, k). The estimation error e_p* is always zero, and thus μ_p*(t) = 0 for all t from (A.13). Suppose we have another estimator, f̂_p = b′ ẋ + k′ x where b′ = b + Δb and k′ = k + Δk, that is distinct from the true estimator but produces the same amount of monitoring signal, i.e. μ_p(t) = μ_p*(t) for all t. In order to have μ_p(t) = 0 for all t, e_p(t) = 0 for all t. Then,

e_p = f̂_p − f = Δb ẋ + Δk x = 0 .

If x(t) is constant and equal to zero, the above equation always holds. This is trivial, and the condition of x = 0 is ignored. If Δb ≠ 0,

ẋ = −(Δk/Δb) x ,

x(t) = x₀ exp(−(Δk/Δb) t) .    (A.20)

If Δb = 0, Δk also needs to be zero, which contradicts the assumption that the two estimators are distinct.
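Lemma 1's condition can be checked numerically: along x(t) = x₀ exp(−(Δk/Δb) t), the error contributed by a parameter offset (Δb, Δk) cancels identically (the values below are arbitrary illustrative choices):

```python
import numpy as np

# db, dk, and x0 are arbitrary illustrative values (dk/db = 4).
db, dk, x0 = 2.0, 8.0, 0.01
t = np.linspace(0.0, 1.0, 1000)
x = x0 * np.exp(-(dk / db) * t)        # the trajectory of Lemma 1
xdot = -(dk / db) * x                  # its exact time derivative
e = db * xdot + dk * x                 # offset estimator's error db*xdot + dk*x
```

The error vanishes for every t, confirming that such exponentially decaying trajectories are exactly the ones on which distinct estimators become indistinguishable.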
A.3.1.2

Lemma 2. If no estimator is exactly the same as the plant, there exist estimators that can produce the same amount of error and thus the same amount of monitoring signal for t ∈ [t_p, t_q] in the following three cases: (1) (Δb″, Δk″) = (−Δb′, −Δk′); (2) x(t) = x₀ exp(−((Δk′ − Δk″)/(Δb′ − Δb″)) t); and (3) x(t) = x₀ exp(−((Δk′ + Δk″)/(Δb′ + Δb″)) t) and Δb′ Δk″ = Δb″ Δk′.
Proof. Given the plant f = bẋ + kx, we do not have any estimator that is exactly the same as the plant. Assume we have two distinct estimators f̂_p′ = b′ ẋ + k′ x and f̂_p″ = b″ ẋ + k″ x, and define Δb′ = b′ − b, Δk′ = k′ − k, Δb″ = b″ − b, and Δk″ = k″ − k. The estimation errors are

e_p′ = Δb′ ẋ + Δk′ x ,

e_p″ = Δb″ ẋ + Δk″ x .
Suppose there are conditions that lead to μ_p′ = μ_p″ for some t ∈ [t_p, t_q]. This requires that the two estimators produce the same amount of estimation error for t ∈ [t_p, t_q]. Since any 2-norm is greater than or equal to zero and e_p is scalar in our case,

|e_p′| = |e_p″| .    (A.21)

(i) If e_p′ = e_p″:

(A.21) ⇒ (Δb′ − Δb″) ẋ + (Δk′ − Δk″) x = 0 .

If Δb′ ≠ Δb″, the solution is

x(t) = x₀ exp(−((Δk′ − Δk″)/(Δb′ − Δb″)) t) .    (A.22)

(ii) If e_p′ = −e_p″:

(A.21) ⇒ (Δb′ + Δb″) ẋ + (Δk′ + Δk″) x = 0 .

If Δb′ + Δb″ = 0, then Δk′ + Δk″ = 0. Therefore,

(Δb″, Δk″) = (−Δb′, −Δk′) .    (A.23)

Otherwise, the solution is

x(t) = x₀ exp(−((Δk′ + Δk″)/(Δb′ + Δb″)) t) .    (A.24)
A.3.2
When the plant is given as f = bẋ + kx, regardless of the existence of the true
estimator, redundancy exists. However, if we consider the physical meaning of the
conditions that lead to redundancy, conditions (A.20), (A.22), and (A.24) would
not occur. To meet these conditions, the user would have to make the slave go
exponentially to infinity or zero. The condition (A.23), on the other hand, could
occur. The physical interpretation of the condition (A.23) is that we may overestimate
or underestimate environment parameters by the same amount. Hence, we cannot
say if either model is closer to the plant unless we have another criterion to choose a
desired estimator. Therefore, in our framework, the ME method practically chooses
the best approximating model.
A.4 Experiments
The main goal of this chapter is to find out what needs to be considered to
estimate unknown mechanical properties of the environment online. The experiments consist of two parts: (1) data acquisition and (2) parameter estimation. We did not
update a controller based on the online estimation. This allows us to separate the
two procedures. However, we could have done both tasks simultaneously during
telemanipulation, as would be desirable in real applications.
A.4.1 Design
A.4.1.1 Materials
As an environment for the experiments, three soft cubes were prepared. The
dimensions of the soft cubes are 50 mm × 50 mm × 40 mm. They each have a different
stiffness: soft (Cube 1), medium (Cube 2), and hard (Cube 3). All the cubes are
made of silicone rubber. There are no official data showing the damping and stiffness
values of the soft cubes.
A.4.1.2 Control conditions
The input signal to the system needs to be PE for some estimation algorithms in
order to guarantee the parameter convergence. To test whether additional reference
forces other than the one generated by the operator are needed, we used three types
of control conditions: (i) teleoperation without PE, (ii) teleoperation mimicking PE
by the operator, and (iii) autonomous control with PE.
A.4.1.3 Estimation methods
Four estimation methods were applied: linear least-squares (LSE), recursive least-squares (RLS), adaptive identification (AI), and multi-estimator (ME). There is no official data showing the damping and stiffness values of the soft cubes. We chose to
use estimation data using LSE as a reference model because it takes into account the
entire data set offline and hence, it is expected to yield the best estimate.
A.4.1.4
Estimation methods
Figure A.3: Experimental setup of the 1-DOF teleoperation system with a soft cube as the environment.
A.4.2 Experimental setup
Data were acquired using a 1-DOF telemanipulation system, as shown in Figure A.3. A force sensor is attached on the master robot to measure the input force
from the operator. The input force is used to calculate the master velocity that is
used to command the slave velocity. A square metal plate is attached at the tip of
the slave. It covers all the surface area of the environment. The soft cube is gently
attached to the metal wall with tape, and it is always in contact with the slave. This
setting allows us to use the simple environment model (A.1).
The operator only had direct visual feedback and received no force feedback from
the environment. The operator was asked to palpate a soft cube with arbitrary motion
(teleoperation without PE) or to make a sinusoidal movement with a constant
frequency and amplitude (teleoperation mimicking PE). Autonomous control with PE
was completed without any human operator, and sinusoidal force input was chosen
as a typical PE. There are three soft cubes and three control methods prepared, and
thus nine different conditions exist.
A.4.3 Estimation setup
Prior to the estimation, some parameters are tuned in advance by trial and error. The initial value of the covariance matrix P of RLS (A.6) is set as

P₀ = [ 100    0
         0  100 ]    (A.25)

The adaptation gain of AI (A.10) is set as Γ = diag(·, ·) (A.26), a 2 × 2 diagonal matrix whose entries are tuned for each cube. For ME, the damping and stiffness values of the candidate estimators are chosen as a grid of values b_i and k_j (A.27), where 1 ≤ i ≤ 41 and 1 ≤ j ≤ 193. There are 41 and 193 candidates for damping and
stiffness, respectively, and thus 7913 candidate estimators are available in total, i.e.
m = 7913 in Figure A.2. Note that, unlike AI, which requires a different adaptation gain depending on the cube as seen in (A.26), the same candidate estimators were used for all the cubes for ME. The initial parameter guess for both RLS and AI is (b̂, k̂) = (0, 0), which is a common assumption for parameter estimation.
The forgetting factor λ in (A.13) should be set very small if the environment is linear and uniform. In our experiments, contrary to the model, the material is not linear, and thus its mechanical properties are not uniform. Therefore, λ = 10 is chosen. For RLS, we chose a forgetting factor of 0.98 so that it matches e^{−2λT} with an update rate of 1 kHz.
A.4.4 Results
Once the data acquisition task was performed, unknown parameters and interaction forces were estimated by the parameter identification algorithms. Figure A.4 shows sample profiles of the estimated parameters for each algorithm. The sample plots
shown in Figure A.4 were acquired using teleoperation without PE with Cube 1 as
the plant.
All the estimation results are summarized in Tables A.1-A.3. Estimated parameters for AI and ME are averaged over an experimental period of 10 seconds.
[Figure A.4: Sample profiles of the parameters estimated online by RLS, AI, and ME, shown with the corresponding displacement [mm] and velocity [mm/s] of the slave over time [s].]
Figure A.5: Estimated stiffness for Cubes 1, 2, and 3. The control method was teleoperation without PE. For all the cubes, all three estimation methods show quick parameter convergence.
The average force error e_f, computed from the measured force f and the estimated force f̂, is different from the estimation error e_pi (A.12), which is defined only for the multi-estimator.
Figure A.6: Estimated stiffness under the control conditions of teleoperation without PE, teleoperation mimicking PE, and autonomous control with PE. The plant was
Cube 1.
              Cube 1                       Cube 2                       Cube 3
Algorithm   b [Ns/m]  k [N/m]  ef [N]   b [Ns/m]  k [N/m]  ef [N]   b [Ns/m]  k [N/m]  ef [N]
RLS            -12.5   2229.2    0.34      -28.7   5235.1    0.25       -492    33034    0.99
AI               9.2   2212.7    0.97      108.9   5046.5    0.90         13    33207    1.53
ME             -16.8   2202.7    0.49      -74.3   5158.1    0.38       -367    33276    1.01

Table A.1: Estimated parameters of the environment and average force errors for each estimation algorithm (Teleoperation without PE)
              Cube 1                       Cube 2                       Cube 3
Algorithm   b [Ns/m]  k [N/m]  ef [N]   b [Ns/m]  k [N/m]  ef [N]   b [Ns/m]  k [N/m]  ef [N]
RLS            -19.5   2372.6    0.39      -50.1   4857.8    0.56       -429    28308    0.53
AI              23.5   2310.0    1.02       36.0   5113.0    1.22        608    31099    2.34
ME             -42.5   2338.0    0.57      -53.9   4901.0    0.57       -226    29288    0.57

Table A.2: Estimated parameters of the environment and average force errors for each estimation algorithm (Teleoperation mimicking PE)
              Cube 1                       Cube 2                       Cube 3
Algorithm   b [Ns/m]  k [N/m]  ef [N]   b [Ns/m]  k [N/m]  ef [N]   b [Ns/m]  k [N/m]  ef [N]
RLS             -7.3   2085.1    0.10      -52.1   4626.6    0.34       -441    19813    0.76
AI               1.2   2093.3    0.72      -41.3   4613.7    1.11        101    20187    1.47
ME              35.1   2100.1    0.21      -50.6   4605.3    0.31       -284    19850    0.77

Table A.3: Estimated parameters of the environment and average force errors for each estimation algorithm (Autonomous Control with PE)
A.5 Discussion
There were three main factors in the experiment: cube type (soft, medium, and
hard), system control condition (teleoperation without PE, teleoperation mimicking
PE, and autonomous with PE), and estimation algorithm (RLS, AI, and ME). In
this section, we will consider the role of each factor in the accuracy and speed of
estimation. Our discussion will be limited to the stiffness estimation because the
damping of the cubes was so low as to have a negligible effect on total force. In
addition, some of the estimated damping values are negative, which should not occur
for a physical object. This may be due to limitations of the model or noise in the
sensed force and velocity calculated from encoders.
A.5.1 Cube type
The stiffness of the cubes did not affect the accuracy or rise time of any of the
estimation algorithms. Figure A.5 gives the estimation response for each of the estimation methods done online. Fast convergence is obtained for all three online estimation methods. The oscillation is desirable because the stiffness of the environment
varies with penetration depth. More force error is observed as the stiffness of the
cube increases (Tables A.1-A.3). For increased interaction forces, the errors between
the estimated forces and the measured forces naturally become larger.
A.5.2 Control condition
The control conditions also had little effect on estimation performance. Figure A.6
shows the estimation response for each of the estimation methods under the different control conditions. In our experimental settings, even without having PE, all
online estimation algorithms converge rapidly. Comparing the average force errors
in Tables A.1-A.3, autonomous control with PE shows smaller errors than the other
control methods. This is partly because the penetration depth by autonomous control
was smaller than the other control methods. With small penetration, the estimated
stiffness is small, and thus the interaction force is also small. Since the material is
nonlinear, the penetration depth plays an important role for comparison. Further
research is required to find the role of control methods on estimation performance.
This will be especially important for very viscoelastic or massive environments.
A.5.3 Estimation algorithm
and thus short enough to estimate the stiffness by one palpation. Although AI may
appear to converge more slowly than the other two methods, a closer initial guess
and better adaptation gain would improve the rise time. It is interesting to see that
there is no rise time for ME since it does not need an initial estimation guess and can choose the best approximating estimator quickly.
Regarding estimation accuracy, RLS showed the lowest force estimation error in most
cases. Given the faster convergence of ME, it might be expected to have lower estimation
error than RLS. However, ME cannot converge to the true parameter values, which yields
some error even at steady state, whereas RLS converges to zero error. This made RLS
slightly more accurate than ME.
In our experimental setup, we used open-loop control: the interaction force between the
slave and the environment was not fed back to the operator. In a closed-loop system,
adaptive control would be easy to implement, since it uses AI to estimate the unknown
environment parameters, which can then be used to update a controller [58, 105].
However, the choice of the adaptation gain is key to obtaining good estimation results
(convergence and accuracy). Adaptive control may therefore not be an ideal method for
surgical applications, since the environments encountered in surgery are inhomogeneous,
making it quite difficult to tune the adaptation gain in advance.
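This tuning sensitivity can be illustrated with a minimal gradient-type adaptive identification law for a stiffness-damping environment model f = k*x + b*v. The signals, gains, and parameter values below are purely illustrative, not those used in the experiments:

```python
import numpy as np

# Gradient-type adaptive identification for f = k*x + b*v (illustrative sketch).
k_true, b_true = 300.0, 2.0                    # "unknown" environment [N/m, Ns/m]
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
x = 0.01 * (1.0 - np.cos(2.0 * np.pi * t))     # persistently exciting penetration [m]
v = 0.02 * np.pi * np.sin(2.0 * np.pi * t)     # analytic derivative of x [m/s]
f = k_true * x + b_true * v                    # noise-free "measured" force [N]

# The adaptation gain matrix must be matched to the environment -- the tuning
# burden noted above, which is hard to do in advance for inhomogeneous tissue.
Gamma = np.diag([5e4, 1e3])
theta = np.array([100.0, 0.0])                 # initial guess [k_hat, b_hat]
for xi, vi, fi in zip(x, v, f):
    phi = np.array([xi, vi])                   # regressor [penetration, velocity]
    e = phi @ theta - fi                       # force prediction error
    theta = theta - dt * (Gamma @ phi) * e     # Euler step of theta_dot = -Gamma*phi*e
print(theta)  # approaches [300, 2] when the gains are well tuned
```

With gains an order of magnitude off, the same loop converges far more slowly or becomes oscillatory, which is the practical drawback discussed above.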
RLS showed fast convergence as well as accurate force estimation. With persistent
excitation, parameter convergence is guaranteed. Moreover, RLS is easy to implement:
only a simple initial parameter setting is required. RLS can thus be expected to work
well for tissue property estimation with a minimal amount of tuning. In a closed-loop
system, the controller needs to take the updated environment model into account to
improve the transparency and stability of the teleoperation system; unlike for adaptive
control, this has not yet been well established.
Like RLS, ME requires little tuning. The Kalman active observer in [28] is able to
differentiate between soft and stiff objects, but it has a number of parameters that
must be carefully tuned in advance. While the gain matrix of AI (A.26) needed to be
changed depending on the stiffness of the cube, the same candidate estimators for ME
(A.27) were used for all three cubes. The forgetting factor should be chosen based on
whether the material is uniform; compared to tuning the gains of the adaptive
identification algorithm, this is much less complicated. The stability of a closed-loop
multi-estimator/controller system is somewhat more complicated [110, 111], since the
dwell time for switching controllers must be considered.
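The low tuning burden of ME can be sketched with a fixed bank of candidate stiffnesses and a forgetting-factor-weighted squared prediction error as the monitoring signal. The candidate values and signals below are illustrative, not the actual estimators of (A.27):

```python
import numpy as np

# Multi-estimator sketch: a fixed bank of candidate stiffnesses; the candidate
# whose monitoring signal (forgetting-factor-weighted squared prediction error)
# is smallest is selected. All numbers are illustrative.
candidates = np.array([100.0, 200.0, 300.0, 500.0, 1000.0])   # candidate k [N/m]
lam = 0.95                                                    # forgetting factor

k_true = 300.0
s = np.linspace(0.0, 4.0 * np.pi, 500)
x = 0.01 * (1.0 - np.cos(s))          # penetration samples [m]
f = k_true * x                        # "measured" force, elastic-only model [N]

monitor = np.zeros_like(candidates)
for xi, fi in zip(x, f):
    pred_err = (candidates * xi - fi) ** 2    # each candidate's squared error
    monitor = lam * monitor + pred_err        # exponential forgetting
best = candidates[np.argmin(monitor)]
print(best)  # selects 300.0, the candidate matching the true stiffness
```

The forgetting factor is the main design choice: values near 1 suit uniform materials, while smaller values let the selection track an inhomogeneous environment, mirroring the discussion above.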
A.6 Conclusions
This appendix presents three parameter identification techniques (recursive
least-squares, adaptive identification, and multi-estimator) for estimating unknown
environment parameters. We first show by theoretical analysis that the multi-estimator
chooses the best estimator among the candidates. Using a teleoperation system and
artificial tissues, real interaction data were obtained under three control conditions:
teleoperation without persistent excitation, teleoperation mimicking persistent
excitation, and autonomous control with persistent excitation. Three cubes of different
stiffnesses were chosen as environments. Only the choice of estimation algorithm had a
notable effect on estimation performance. For online parameter estimation in surgical
applications, either RLS or ME would be appropriate.
Bibliography
[1] Curexo Technology Corporation, Fremont, California, USA. Last Accessed:
November 1, 2010. [Online]. Available: http://www.robodoc.com/
[2] Hansen Medical, Inc., Mountain View, California, USA. Last Accessed:
November 1, 2010. [Online]. Available: http://www.hansenmedical.com/
[3] Immersion Corporation, San Jose, California, USA. Last Accessed: November
1, 2010. [Online]. Available: http://www.immersion.com/
[4] IMRIS, Winnipeg, Manitoba, Canada. Last Accessed: November 1, 2010.
[Online]. Available: http://www.imris.com/
[5] Intelligent Automation, Inc., Rockville, Maryland, USA. Last Accessed:
December 3, 2010. [Online]. Available: http://www.i-a-i.com/
[6] Intuitive Surgical, Inc. Investor Presentation (Q1 2010). Last Accessed: May
11, 2010. [Online]. Available: http://investor.intuitivesurgical.com/
[7] Intuitive Surgical, Inc., Sunnyvale, California, USA. Last Accessed: November
1, 2010. [Online]. Available: http://www.intuitivesurgical.com/
[8] MAKO Surgical Corp., Lauderdale, Florida, USA. Last Accessed: November
1, 2010. [Online]. Available: http://www.makosurgical.com/
[9] Medtech SAS, Montpellier, France. Last Accessed:
[15] I. Aliaga, A. Rubio, and E. Sanchez, "Experimental Quantitative Comparison of Different Control Architectures for Master-Slave Teleoperation," IEEE Transactions on Control Systems Technology, vol. 12, no. 1, pp. 2–11, 2004.
[16] K. Althoefer, D. Zbyszewski, H. Liu, P. Puangmali, L. Seneviratne, B. Challacombe, P. Dasgupta, and D. Murphy, "Air-cushion force sensitive probe for soft tissue investigation during minimally invasive surgery," in IEEE Conference on Sensors, 2008, pp. 827–830.
[17] R. J. Anderson and M. W. Spong, "Bilateral control of teleoperators with time delay," IEEE Transactions on Automatic Control, vol. 34, no. 5, pp. 494–501, 1989.
[18] K. S. Arun, T. S. Huang, and S. D. Blostein, "Least-Squares Fitting of Two 3-D Point Sets," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 9, no. 5, pp. 698–700, 1987.
[19] A. Bettini, P. Marayong, S. Lang, A. M. Okamura, and G. D. Hager, "Vision Assisted Control for Manipulation Using Virtual Fixtures," IEEE Transactions on Robotics, vol. 20, no. 6, pp. 953–966, 2004.
[20] L. Biagiotti and C. Melchiorri, "Environment Estimation in Teleoperation Systems," in Advances in Telerobotics, M. Ferre, M. Buss, R. Aracil, C. Melchiorri, and C. Balaguer, Eds. Springer-Verlag, 2007, pp. 211–231.
[82] H. H. King, B. Hannaford, K. Kwok, G. Yang, P. Griffiths, A. M. Okamura, I. Farkhatdinov, J. Ryu, G. Sankaranarayanan, V. Arikatla, K. Tadano, K. Kawashima, A. Peer, T. Schauß, M. Buss, L. Miller, D. Glozman, J. Rosen, and T. Low, "Plugfest 2009: Global Interoperability in Telerobotics and Telemedicine," in IEEE International Conference on Robotics and Automation, 2010, pp. 1733–1738.
[83] H. H. King, B. Hannaford, J. Kammerl, and E. Steinbach, "Establishing multimodal telepresence sessions using the Session Initiation Protocol (SIP) and advanced haptic codecs," in IEEE Haptics Symposium, 2010, pp. 321–325.
[84] H. H. King, K. Tadano, R. Donlin, D. Friedman, M. J. H. Lum, V. Asch, C. Wang, K. Kawashima, and B. Hannaford, "Preliminary Protocol for Interoperable Telesurgery," in International Conference on Advanced Robotics, 2009, pp. 1–6.
[85] M. Kitagawa, D. Dokko, A. M. Okamura, and D. D. Yuh, "Effect of Sensory Substitution on Suture Manipulation Forces for Robotic Surgical Systems," Journal of Thoracic and Cardiovascular Surgery, vol. 129, no. 1, pp. 151–158, 2005.
[86] V. Kolmogorov and R. Zabih, "Graph Cut Algorithms for Binocular Stereo with Occlusions," in Handbook of Mathematical Models in Computer Vision, 2006, pp. 423–437.
[105] S. Misra and A. M. Okamura, "Environment Parameter Estimation During Bilateral Telemanipulation," in 14th Symposium on Haptic Interfaces for Virtual Environments and Teleoperator Systems, 2006, pp. 301–307.
[106] S. Misra, K. T. Ramesh, and A. M. Okamura, "Modeling of Tool-Tissue Interactions for Computer-Based Surgical Simulation: A Literature Review," Presence: Teleoperators and Virtual Environments, vol. 17, no. 5, pp. 463–491, 2008.
[107] P. Mitra and G. Niemeyer, "Model-mediated Telemanipulation," The International Journal of Robotics Research, vol. 27, no. 2, pp. 253–262, 2008.
[108] F. Mobasser and K. Hashtrudi-Zaad, "Transparent Rate Mode Bilateral Teleoperation Control," The International Journal of Robotics Research, vol. 27, no. 1, pp. 57–72, 2008.
[109] F. W. Mohr, V. Falk, A. Diegeler, T. Walther, J. F. Gummert, J. Bucerius, S. Jacobs, and R. Autschbach, "Computer-enhanced robotic cardiac surgery: experience in 148 patients," The Journal of Thoracic and Cardiovascular Surgery, vol. 121, no. 5, pp. 842–853, 2001.
[110] A. S. Morse, "Supervisory Control of Families of Linear Set-Point Controllers - Part 1: Exact Matching," IEEE Transactions on Automatic Control, vol. 41, no. 10, pp. 1413–1431, 1996.
[111] ——, "Supervisory Control of Families of Linear Set-Point Controllers - Part 2: Robustness," IEEE Transactions on Automatic Control, vol. 42, no. 11, pp. 1500–1515, 1997.
[112] E. Naerum and B. Hannaford, "Global transparency analysis of the Lawrence teleoperator architecture," in IEEE International Conference on Robotics and Automation, 2009, pp. 4344–4349.
[113] K. S. Narendra and A. M. Annaswamy, Stable Adaptive Systems. Dover Publications, Inc., 2005.
[114] G. Niemeyer, C. Preusche, and G. Hirzinger, "Telerobotics," in Springer Handbook of Robotics. Springer, 2008, ch. 31, pp. 741–757.
[115] D. P. Noonan, H. Liu, Y. H. Zweiri, K. A. Althoefer, and L. D. Seneviratne, "A Dual-Function Wheeled Probe for Tissue Viscoelastic Property Identification during Minimally Invasive Surgery," in IEEE International Conference on Robotics and Automation, 2007, pp. 2629–2634.
[116] A. M. Okamura, "Methods for Haptic Feedback in Teleoperated Robot-Assisted Surgery," Industrial Robot, vol. 31, no. 6, pp. 499–508, 2004.
[117] ——, "Haptic Feedback in Robot-Assisted Minimally Invasive Surgery," Current Opinion in Urology, vol. 19, no. 1, pp. 102–107, 2009.
[118] A. M. Okamura, L. N. Verner, C. E. Reiley, and M. Mahvash, "Haptics for Robot-Assisted Minimally Invasive Surgery," in International Symposium
actions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 37, no. 6, pp. 1512–1528, 2007.
[148] R. Taylor, P. Jensen, L. Whitcomb, A. Barnes, R. Kumar, D. Stoianovici, P. Gupta, Z. Wang, E. DeJuan, and L. Kavoussi, "A Steady-Hand Robotic System for Microsurgical Augmentation," The International Journal of Robotics Research, vol. 18, no. 12, pp. 1201–1210, 1999.
[149] A. K. Tewari, N. D. Patel, R. A. Leung, R. Yadav, E. D. Vaughan, Y. El-Douaihy, J. J. Tu, M. B. Amin, M. Akhtar, M. Burns, U. Kreaden, M. A. Rubin, A. Takenaka, and M. M. Shevchuk, "Visual Cues as a Surrogate for Tactile Feedback during Robotic-Assisted Laparoscopic Prostatectomy: Posterolateral Margin Rates in 1340 Consecutive Patients," British Journal of Urology International, vol. 106, no. 4, pp. 528–536, 2010.
[150] A. L. Trejos, J. Jayender, M. T. Perri, M. D. Naish, R. V. Patel, and R. A. Malthaner, "Robot-assisted Tactile Sensing for Minimally Invasive Tumor Localization," The International Journal of Robotics Research, vol. 28, no. 9, pp. 1118–1133, 2009.
[151] R. Y. Tsai and R. K. Lenz, "A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration," IEEE Transactions on Robotics and Automation, vol. 5, no. 3, pp. 345–358, 1989.
[152] S. Uranues, H. Maechler, P. Bergmann, S. Huber, G. Hoebarth, J. Pfeifer,
[164] H. Xin, J. S. Zelek, and H. Carnahan, "Laparoscopic surgery, perceptual limitations and force: A review," in First Canadian Student Conference on Biomedical Computing, Kingston, Ontario, Canada, 2006.
[165] K. Xu and N. Simaan, "An Investigation of the Intrinsic Force Sensing Capabilities of Continuum Robots," IEEE Transactions on Robotics, vol. 24, no. 3, pp. 576–587, 2008.
[166] T. Yamamoto, M. Bernhardt, A. Peer, M. Buss, and A. M. Okamura, "Multi-Estimator Technique for Environment Parameter Estimation During Telemanipulation," in IEEE International Conference on Biomedical Robotics and Biomechatronics, 2008, pp. 217–223.
[167] T. Yamamoto, B. Vagvolgyi, K. Balaji, L. L. Whitcomb, and A. M. Okamura, "Tissue Property Estimation and Graphical Display for Teleoperated Robot-Assisted Surgery," in IEEE International Conference on Robotics and Automation, 2009, pp. 4239–4245.
[168] D. Yoerger, J. Newman, and J.-J. E. Slotine, "Supervisory control system for the JASON ROV," IEEE Journal of Oceanic Engineering, vol. 11, no. 3, pp. 392–400, 1986.
[169] Y. Yokokohji and T. Yoshikawa, "Bilateral Control of Master-Slave Manipulators for Ideal Kinesthetic Coupling: Formulation and Experiment," IEEE Transactions on Robotics and Automation, vol. 10, no. 5, pp. 605–619, 1994.
Vita
Tomonori Yamamoto was born in Tokyo, Japan in 1979. He received his B.E. degree in Control and Systems Engineering from Tokyo
Institute of Technology, Japan in 2003, and he enrolled in the Ph.D.
program in the Mechanical Engineering Department at the Johns Hopkins University in 2004. Before the commencement of his graduate
studies, he was a part-time engineer at ZMP, Inc. in Tokyo, where he worked on signal processing and humanoid robots. Tomonori Yamamoto completed his Master's
degree in December 2006 and his Ph.D. in January 2011. His research focuses on
robotics, control, telemanipulation, and haptics. He is interested in designing and
developing robotic systems that improve quality of life, particularly in computer-assisted surgery and healthcare fields. During Dr. Yamamoto's doctoral studies, he
was an active member of Kagakusha Network, a support group for prospective graduate students coming from Japan to the United States. Collaborating with members
of the Kagakusha Network, he co-authored and edited a book titled U.S. Graduate
Schools: A Path for Global Scientists & Engineers (in Japanese) in March 2010.