A Unified Bi-Directional Model For Natural and Artificial Trust in Human-Robot Collaboration
...
...
...
while trustee agents are characterized by their individual capabilities.
robot should trust a human. Existing models are performance-centric and ignore trustees' non-performance capabilities or factors, which are needed for determining artificial trust. To accommodate both natural and artificial trust in (human or robotic) trustees, a computational model of trust must be able to consider assessments of a trustee's non-performance capabilities, such as honesty, benevolence or integrity levels [2], [9]. Therefore, although existing trust models are sufficient for planning algorithms, these trust models cannot be used in more sophisticated control authority allocation applications, which are likely to be based on comparisons between the human's trust in the robot and the robot's trust in the human [10].

To address those shortcomings, we propose a novel capabilities-based bi-directional trust model. Our model characterizes tasks on a set of standard requirements that can represent either performance or non-performance capabilities that affect trust, and builds trustee capability profiles based on the trustee's history of executing those tasks. Trust is represented by the probability that an agent can successfully execute a task, considering that agent's capability profile (built after observations). By considering the agent's capabilities (performance or non-performance) [9] and the task requirements, our model can be used to determine a robot's artificial trust in a trustee agent. Moreover, our model can be used for predicting trust transfer between tasks, similar to the model proposed in [6]. However, as compared to [6], our model improves trust transfer predictions by representing tasks in terms of capability requirements instead of using natural language processing (NLP) similarity metrics. We show the superiority of our trust model by comparing its prediction results with those from other models, using a dataset collected in an online experiment with 284 participants. In sum, our contributions with this work are:
• a new trust model that (i) can be used for artificial trust computation and (ii) outperforms existing models for multi-task natural trust transfer prediction; and
• an online experiment that resulted in a dataset relating trust and task capability measurements.

II. TRUST IN HUMAN–ROBOT INTERACTION

A. Origins and Current Stage of Trust in HRI

Trust in robots that interact with humans can be considered an evolution of trust in automation, which in turn has evolved from theoretical frameworks on interpersonal trust. Muir [11] proposed the concept of trust in automation after adapting sociologists' interpersonal trust definitions [1], [12] to humans and automated machines [13]. Trust in automation is a dynamic construct [14] that can be directly measured with subjective scales [3], [15] or estimated through behavioral variables [16], [17].

People's trust in an automated system must be calibrated, which means it has to align with the system's capabilities. Miscalibrated trust is likely to lead to inappropriate use of the system [14], [18]–[20]. However, the evolution of automated systems into autonomous robots with powerful sensing technologies has paved the way for new trust calibration strategies. Robots can now perceive and process humans' trust and take action to increase or decrease that trust when necessary [20]–[22].

B. Trust Definition

Several trust definitions have been proposed, generally pointing to the trustor's attitude or willingness to be vulnerable to the trustee's actions [2], [4]. In this work, we adopt the (adapted) definition of trust recently proposed by Kok and Soh, which states that "given a trustor agent A and a trustee agent B, A's trust in B is a multidimensional latent variable that mediates the relationship between events in the past and A's subsequent choice of relying on B in an uncertain environment" [19]. Kok and Soh's definition establishes important aspects of our model, such as the multidimensionality of trust and its dependence on a history of events involving the trustor and the trustee agents.

C. Trust Computational Models

Trust models are usually applied to determine how much a human trusts a robot to perform a task (e.g., Fig. 1, where the robot R is chosen to execute a task). The robot uses this estimate of human trust to predict the human's behavior, such as intervening in the task execution. For example, trust models have been used in different trust-aware POMDP-based algorithms proposed for robotic planning and decision-making [22], [23]. Their objective is to eventually improve the robot's collaboration with the human, using human trust as a vital factor when planning the robot's actions.

Planning and decision-making frameworks usually rely on probabilistic models of trust [5], [24], [25]. Xu and Dudek proposed an online probabilistic trust inference model for human–robot collaborations (OPTIMo) that uses a dynamic Bayesian network (DBN) combined with a linear Gaussian model and recursively reduces the uncertainty around the human operator's trust. OPTIMo was tested in a human–unmanned aerial vehicle (UAV) collaboration setting [5] and, although some dynamic models had been proposed before [13], [26], OPTIMo was the first trust model capable of tracking a human's trust in a robot with low latency and relatively high accuracy. The UAV, with OPTIMo, was able to track the human operator's trust by observing how much the human intervened in the UAV's operation.

Other Bayesian models have been proposed since OPTIMo, including personalized trust models that apply inference over a history of robot performances, such as [24] and [25]. Mahani et al. proposed a model for trust in a swarm of UAVs, establishing a baseline for human–multi-robot interaction trust prediction [25]. Guo and Yang [24] improved trust prediction accuracy (as compared to Lee's ARMAV model [13] and OPTIMo [5]) by proposing a formulation that describes trust in terms of Beta probability distributions and aligns the inference processes with trust formation and evolution processes. Without explicitly modeling trust, Lee et al. showed that a robot that estimates and calibrates humans' intents and capabilities while making decisions can engender higher trust from humans [27].
Although all previously mentioned approaches for trust modeling represent important advances in how we understand and describe humans' trust in robots, they suffer from a common limitation: these models depend on the history of robots' performances on unique, specific tasks and are not applicable for trust transfer between different tasks. The issue of multi-task trust transfer was recently approached by Soh et al. [6], who proposed Gaussian process and neural methods for predicting the transferred trust among different tasks that were described with NLP-based text embeddings. A major goal for our model was to deepen that discussion and improve prediction accuracy for multi-task trust transfer by (i) describing tasks in terms of capability requirements, and (ii) describing potential trustee agents in terms of their proven capabilities, which can be used to transfer trust to another task.

The other major goal for our model was to be bi-directional, i.e., to be able to represent either natural trust or artificial trust. Because the existing trust models are usually performance-centric, they are suited to represent humans' natural trust in robots. Although mutual trust has been modeled as a single variable that depends on both the human's and the robot's performances on collaborative tasks [28], to represent a robot's artificial trust in humans, trust models must be more comprehensive. Computational models of trust must consider not only performance factors but also non-performance factors that describe human trustees [2], [9], [29], [30]. Until recently, only a few trust models have considered the robot's trust perspective, focusing only on non-performance factors that affect trust. For instance, a model that reproduces theory of mind (ToM) aspects in robots to identify deceptive humans has been proposed and applied in [29] and [30]. Our model is applicable for either natural or artificial trust because it explicitly considers a general form of agents' capabilities and task requirements, which can represent performance or non-performance trustee capabilities.

III. BI-DIRECTIONAL TRUST MODEL DEVELOPMENT

A. Context Description

Consider the following situation: two agents (human H or robot R) collaborate and must execute a sequence of tasks. These tasks are indivisible and must be executed by only one agent. The execution of each task can either succeed or fail. For each task, one of the agents is in the position of trustor, and the other is the trustee. Therefore, the trustor is vulnerable to the trustee's performance in that task. From previous experiences with the trustee, the trustor has some implicit knowledge about the trustee's capabilities. This implicit knowledge is used by the trustor to assess how likely the trustee is to succeed or fail in the execution of a task. We define the terms and concepts we need for developing our trust model:

Definition 1 - Task. A task that must be executed is represented by γ ∈ Γ, where Γ represents the set of all tasks that can be executed by the agents.

Definition 2 - Agent. An agent a ∈ {H, R} represents a trustee that could execute a task γ.

Definition 3 - Capability. The representation of a specific skill that agents have and/or that is required for the execution of tasks. We represent a capability as an element of a closed interval Λi = [0, 1], i ∈ {1, 2, ..., n}, with n being a finite number of dimensions characterizing distinct capabilities.

Definition 4 - Capability Hypercube. The compact set representation of n distinct capabilities, given by the Cartesian product Λ = Λ1 × Λ2 × ... × Λn = [0, 1]^n. This definition is inspired by the particular capabilities from Mayer et al.'s model [2], namely ability, benevolence and integrity, but is intended to be broader than these three dimensions.

Definition 5 - Agent's Capability Transform. The agent capability transform ξ : {H, R} → Λ maps an agent to a point in the capability hypercube representing the agent's capabilities, given by ξ(a) = λ = (λ1, λ2, ..., λn) ∈ Λ.

Definition 6 - Task Requirements Transform. The task requirements transform ϱ : Γ → Λ maps a task γ to the minimum capabilities required for the execution of γ, given by ϱ(γ) = λ̄ = (λ̄1, λ̄2, ..., λ̄n) ∈ Λ.

Definition 7 - Time Index. Time is discrete and represented by t ∈ N.

Definition 8 - Task Outcome. The outcome of a task γ after being executed by the agent a at time t is represented by Ω(ξ(a), ϱ(γ), t) ∈ {0, 1}, where 0 represents a failure and 1 represents a success. We also define the Boolean complement of Ω, denoted by Ω̄, such that Ω̄ = 1 when Ω = 0, and Ω̄ = 0 when Ω = 1.

Leveraging the previous definitions, we can finally define trust.

Definition 9 - Trust. A trustor agent's trust in a trustee agent a to execute a task γ at a time instance t can be represented by

τ(a, γ, t) = P(Ω(ξ(a), ϱ(γ), t) = 1) = ∫_Λ p(Ω(λ, λ̄, t) = 1 | λ, t) bel(λ, t − 1) dλ,    (1)

where λ = ξ(a), λ̄ = ϱ(γ), and bel(λ, t − 1) represents the trustor's belief in the agent's capabilities λ at time t − 1 (i.e., before the actual task execution). The belief is a dynamic probability distribution over the capability hypercube Λ. Note that, at each time instance t, trust is a function of the task requirements λ̄, representing a probability of success in [0, 1].

B. Bi-directional Trust Model

Our bi-directional model is defined by Eq. (1) and depends on the combination of:
• a function to represent the "trust given the trustee's capability", represented by the conditional probability p(Ω(λ, λ̄, t) = 1 | λ, t); and
• a process to dynamically update the trustor's belief over the trustee's capabilities, bel(λ, t).

We assume that an agent that successfully performs a task is more likely to be successful on less demanding tasks. Conversely, an agent that fails on a task is more likely to fail on more demanding tasks. We adapt the sigmoid function to represent that logic, and for each capability dimension we can write

τi = [ 1 / (1 + e^{βi (λ̄i − λi)}) ]^{ζi},    (2)
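To make Eqs. (1) and (2) concrete, here is a minimal numerical sketch for a two-dimensional capability hypercube: Λ is discretized into a grid, Eq. (2) is evaluated at each grid point, and the results are weighted by a discrete belief over capabilities. The grid resolution, the βi and ζi values, the uniform initial belief, and the choice of combining dimensions by multiplying the per-dimension terms τi are illustrative assumptions, not the parameters or combination rule used in the paper.

import numpy as np

def conditional_success_prob(lam, lam_bar, beta, zeta):
    """Eq. (2) per capability dimension, combined across dimensions by a
    product (an assumed combination rule for this sketch)."""
    tau_i = (1.0 / (1.0 + np.exp(beta * (lam_bar - lam)))) ** zeta
    return np.prod(tau_i)

def trust(lam_bar, grid, belief, beta, zeta):
    """Eq. (1), approximated as a weighted sum over the discretized
    capability hypercube: trust = sum_k p(success | lam_k) bel(lam_k)."""
    p_success = np.array([conditional_success_prob(lam, lam_bar, beta, zeta) for lam in grid])
    weights = belief / belief.sum()   # normalize the discrete belief bel(lam, t - 1)
    return float(np.dot(p_success, weights))

# Illustrative two-dimensional setup (e.g., sensing and processing).
axis = np.linspace(0.05, 0.95, 10)                        # 10 cells per dimension
grid = np.array([(l1, l2) for l1 in axis for l2 in axis])
belief = np.ones(len(grid))                               # uniform belief before any observation
beta = np.array([20.0, 20.0])                             # assumed sigmoid steepness per dimension
zeta = np.array([1.0, 1.0])                               # assumed shape parameter per dimension

# Trust in an agent for a task requiring (0.6, 0.4) in the two dimensions.
tau = trust(np.array([0.6, 0.4]), grid, belief, beta, zeta)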
Fig. 2. Capability update procedure, where each capability dimension changes after the trustor agent observes the trustee agent a executing a task γt (at a specific time instance t). The belief distribution over a's capabilities before the task execution, bel(λi, t − 1), is updated to bel(λi, t), depending on the task capability requirements ϱ(γt)i = λ̄i and on the performance of a in γt, which can be a success (Ω = 1) or a failure (Ω = 0). The capability belief: (i) expands either when the agent succeeds on a task whose requirement exceeds ui, or when the agent fails on a task whose requirement is less than ℓi; (ii) contracts when the agent succeeds or fails on a task whose requirement falls between ℓi and ui; or (iii) remains the same either when the agent fails on a task whose requirement exceeds ui, or when the agent succeeds on a task whose requirement is less than ℓi.
models, such as Soh’s models [6] and OPTIMo [5]. We τ in the AV to execute the fourth remaining task (i.e., the
aimed to emulate a human-automated vehicle (AV) interaction trust prediction task) on a 7-point Likert scale varying from
setting, asking participants to (1) assess the requirement levels “very low trust” to “very high trust”, as an indication of how
for driving tasks that were to be executed by the AV, (2) much they disagreed or agreed with the sentence: “I believe
watch videos of the AV executing a part of those tasks and (3) that the AV would successfully execute the task.” Participants
evaluate their trust in the AV to execute other tasks (distinct were asked to consider all videos they had seen during the
from those they have watched in the videos). observation tasks and rate their trust in the AV to execute the
Initially, only images and verbal descriptions of four driving trust prediction task. Finally, participants received a random
tasks were presented in random order to the participants (Fig. 4-digit identifier code to upload in the MTurk platform and
3). Participants were asked to rate the capability requirements receive their payment.
for each of the presented tasks in terms of two distinct To keep work-related regulations consistent, we restricted
capabilities of the AV: sensing and processing, which were our participants to those who were physically in the United
defined and presented to the participants as, States when accepting the MTurk human intelligence task
(HIT). A total of 284 MTurk workers participated in our
• Sensing (λs ) - The accuracy and precision of the sensors
experiment and received a payment of $1.80 for completing
used to map the environment where the AV is located and
the HIT without failing to correctly answer the attention
perceive elements within that environment, such as other
checker questions. The HITs were completed in approximately
vehicles, people and traffic signs.
6min40s, on average. We collected no demographics data or
• Processing (λp ) - The speed and performance of the
other personal information from the participants because these
AV’s computers that use the information from sensors to
were not needed for our analyses. The obtained dataset and our
calculate the trajectories and the steering, acceleration,
implementations are available at https://bit.ly/3sfVtuK. The
and braking needed to execute those trajectories.
research was reviewed and approved by the University of
Participants were asked to indicate the required capability Michigan’s institutional review board (IRB# HUM00192470).
levels (λ̄s , λ̄p ) ∈ [0, 1]2 for each task, providing a score (i.e.,
indicating a slider position on a continuous scale) varying from V. R ESULTS
low to high.
After evaluating all four presented tasks, participants A. Human-drivers’ (natural) trust in robotic AVs
watched short videos (approximately 20s to 30s) of a sim- We implemented a 10-fold cross-validation to train and
ulated AV executing three of the four tasks. Those three evaluate our bi-directional trust model (BTM) with the data
were considered observation tasks. The videos showed the obtained in the experiment described in Section IV. For
AV succeeding or failing to execute each observation task. comparison, we also evaluated the performance of Soh’s
(All videos are available at https://bit.ly/37gXXkI.) Next, par- Bayesian Gaussian process model (GP) [6] and that of a
ticipants were asked to indicate whether the AV successfully linear Gaussian model similar to Xu and Dudek’s OPTIMo
executed the task. That question served both as an attention (OPT) [5] on our collected dataset. We obtained the tasks’
checker and as a way to make the participant acknowledge the vector representations for the GP model with GloVe [31], by
performance of the AV in that specific task. After watching processing the verbal descriptions presented in Fig. 3. There
each video, participants were also asked to rate their trust were no closed forms for Eq. (1), therefore we discretized each
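For illustration, a task description can be embedded by averaging pretrained GloVe word vectors; the sketch below only shows that general approach, and the vector file name, preprocessing, and averaging are assumptions rather than the exact procedure used for the GP baseline.

import numpy as np

def load_glove(path="glove.6B.300d.txt"):
    """Load pretrained GloVe vectors from a plain-text file into a dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def embed_task(description, glove):
    """Represent a task description by the mean of its words' GloVe vectors."""
    words = [w.strip(".,").lower() for w in description.split()]
    found = [glove[w] for w in words if w in glove]
    return np.mean(found, axis=0)

# Example with one of the verbal descriptions from Fig. 3:
# glove = load_glove()
# task_vector = embed_task("Park parallel to curb in a space between cars.", glove)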
Fig. 3. Tasks presented to the experiment participants in terms of images and corresponding verbal descriptions. The participants had to rate the capability requirements for each of these tasks, considering two capability dimensions: sensing and processing. In other words, they had to assign a pair (λ̄s, λ̄p) ∈ [0, 1]^2 for each task. Tasks were presented in random order to avoid ordering effects. The four verbal descriptions were: (1) "Park, moving forward, in an empty space." (2) "Park parallel to curb in a space between cars." (3) "When reaching a roundabout, check left for oncoming traffic and complete the right turn when safe." (4) "When navigating on a two-way road behind a vehicle and in foggy weather, check for oncoming traffic and pass when safe."
[Fig. 4 (plots): trust prediction mean absolute error (MAE) for the BTM, GP and OPT models, with ±1 standard deviation and statistical significance markers.]

There were no closed forms for Eq. (1); therefore, we discretized each task capability dimension into 10 equal parts and computed numerical approximations for τ. Because we considered only two outcome possibilities (failure or success in executing a task), the trust measurements from both the dataset and the model outputs were considered probability parameters of Bernoulli distributions. We considered the cross entropy between those distributions to be the loss function to be minimized. We used PyTorch [32] and the Adam optimizer [33] to run the optimizations.
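As a concrete illustration of this loss, the sketch below computes the cross entropy between Bernoulli distributions parameterized by the measured trust values and a model's predicted trust values; the clipping constant and function names are illustrative assumptions, not the paper's implementation.

import numpy as np

def bernoulli_cross_entropy(measured_trust, predicted_trust, eps=1e-7):
    """Mean cross entropy between Bernoulli distributions parameterized by
    the measured trust values and the model's predicted trust values."""
    p = np.clip(np.asarray(measured_trust, dtype=float), eps, 1.0 - eps)
    q = np.clip(np.asarray(predicted_trust, dtype=float), eps, 1.0 - eps)
    return float(np.mean(-(p * np.log(q) + (1.0 - p) * np.log(1.0 - q))))

# Example: loss over one cross-validation fold.
# loss = bernoulli_cross_entropy(tau_measured_fold, tau_predicted_fold)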
Fig. 5. Artificial trust results, where a robotic trustor agent's belief over a trustee agent a's capabilities is updated after N observations of a's performances in different tasks, represented by points in Λ = [0, 1]^2. When N = 0, bel(λ, N) is "spread" over the entire Λ. As the robot trustor collects observations, it starts building a's capability profile and reducing the gray area in the bel(λ, N) distribution. This profile gets more accurate as N increases and (λ1, λ2) becomes better defined. This is also reflected in the evolution of the conditional trust function τ(a, γ, N).
for tasks inside a bin, dividing the number of successes by the total number of tasks that fell in each bin (i.e., the approximation for τ̂). Finally, we ran optimizations to find the parameters that best characterized bel(λ1, N) and bel(λ2, N), solving the problem represented by Eq. (8). Fig. 5 illustrates the evolution of bel(λ, N) and of τ(a, γ, N) for increasing values of N. The higher the number of observations, the better the accuracy of a's identified capabilities.
VI. DISCUSSION

Our model is based on general capability representations that can be either performance or non-performance trust factors. This particular aspect of our bi-directional trust model makes it useful for representing a robot's artificial trust, as presented in Subsection V-B, and allows for better human trust predictions in comparison to existing models, as presented in Subsection V-A. Additionally, our model considers task capability requirements in its description, describing how hard a task is for an agent to execute. The model's mathematical formulation captures the differences between those task requirements and the potential trustee agent's observed capabilities. Differently from the Gaussian process-based method presented in [6], this formulation allows for the adequate representation of lower trust levels when the requirements of a task exceed the capabilities of the agent and, conversely, higher trust levels when the agent's capabilities exceed the task requirements.

The results reveal that our proposed bi-directional trust model has better performance for predicting a human's trust in a robot (in our specific experiment, an AV) than the models from [5] and [6]. This performance improvement was expected because current models are limited in capturing important trust-related parameters, such as the agents' capabilities or the tasks' requirements, in their formulation. To the best of our knowledge, only our model and Soh's models [6] distinguish and describe the trust transfer between different tasks, while OPTIMo [5] is more appropriate for predicting a human's trust in a robot to execute one specific task.

Section V-B presents simulations that show how the proposed model can be used for representing a robot's artificial trust. In the future, the proposed bi-directional trust model could be used in real-world human subjects experiments. An example could be a study where participants would execute some tasks represented in the capability hypercube, and the robot would be able to establish its trust in the participants based on their failures or successes on those tasks. In parallel, the robot could estimate the human's natural trust for different tasks, and use both natural and artificial trust metrics to compute expected rewards for the execution of new tasks. Tasks could be allocated between the human and the robot to maximize the expected reward of a whole set of tasks, eventually improving the joint performance of the human–robot team.
Despite the improvement in multi-task trust prediction performance, the use of task capability requirements could also be considered a drawback of our model because it calls for one more subjective input dimension in comparison with current models. Rating and describing tasks that must be executed by humans and robots in terms of specific human/robotic capability dimensions depends on the trustor agent's individual beliefs and experiences (natural, in the case of a human trustor agent, or artificial, in the case of a robotic trustor agent). Our model's trust prediction performance might also have been restricted by inconsistencies related to task characterization by each participant of our experiment. We believe that better trust prediction results can be achieved with in-person longitudinal experiments involving fewer participants and more predictions.

VII. CONCLUSION

We presented a multi-task bi-directional trust model that depends both on a trustee agent's proven capabilities (as observed by the trustor agent) and on the task capability requirements (as characterized by that same trustor agent). Our model outperformed the most relevant and recent trust models (i.e., [5] and [6]) in terms of predicting the transferred trust between distinct tasks by addressing the main limitations of those models. With a generalist capability dimension representing trustee agents' capabilities, our model can also represent robots' artificial trust in different trustee agents. Our model is useful for future applications where humans and robots collaborate and must sequentially take turns in executing different tasks.

REFERENCES

[1] B. Barber, The Logic and Limits of Trust. New Brunswick, NJ, USA: Rutgers Univ. Press, 1983, vol. 96.
[2] R. C. Mayer, J. H. Davis, and F. D. Schoorman, "An integrative model of organizational trust," Acad. Manage. Rev., vol. 20, no. 3, p. 709, Jul. 1995.
[3] B. M. Muir, "Trust in automation: Part I. Theoretical issues in the study of trust and human intervention in automated systems," Ergonomics, vol. 37, no. 11, pp. 1905–1922, 1994.
[4] J. D. Lee and K. A. See, "Trust in automation: Designing for appropriate reliance," Hum. Factors, vol. 46, no. 1, pp. 50–80, 2004.
[5] A. Xu and G. Dudek, "OPTIMo: Online probabilistic trust inference model for asymmetric human-robot collaborations," in Proc. ACM/IEEE Int. Conf. Human-Robot Interact., 2015, pp. 221–228.
[6] H. Soh, Y. Xie, M. Chen, and D. Hsu, "Multi-task trust transfer for human–robot interaction," Int. J. Robot. Res., vol. 39, no. 2-3, pp. 233–249, 2020.
[7] S. You and L. P. Robert, "Human-robot similarity and willingness to work with a robotic co-worker," in Proc. 2018 ACM/IEEE Int. Conf. Human-Robot Interact., 2018.
[8] K. J. Anstey, J. Wood, S. Lord, and J. G. Walker, "Cognitive, sensory and physical factors enabling driving safety in older adults," Clin. Psychol. Rev., vol. 25, no. 1, pp. 45–65, 2005.
[9] B. F. Malle and D. Ullman, "A multidimensional conception and measure of human-robot trust," in Trust in Human-Robot Interaction. Amsterdam, The Netherlands: Elsevier, 2021, pp. 3–25.
[10] H. Azevedo-Sa, X. J. Yang, L. Robert, and D. Tilbury, "Handling trust between drivers and automated vehicles for improved collaboration," in Proc. 2021 ACM/IEEE Int. Conf. Human-Robot Interact., 2021.
[11] B. M. Muir, "Trust between humans and machines, and the design of decision aids," Int. J. Man Mach. Stud., vol. 27, pp. 527–539, 1987.
[12] J. K. Rempel, J. G. Holmes, and M. P. Zanna, "Trust in close relationships," J. Pers. Soc. Psychol., vol. 49, no. 1, p. 95, 1985.
[13] J. Lee and N. Moray, "Trust, control strategies and allocation of function in human-machine systems," Ergonomics, vol. 35, no. 10, pp. 1243–1270, 1992.
[14] M. Lewis, K. Sycara, and P. Walker, "The role of trust in human-robot interaction," in Studies in Systems, Decision and Control. Berlin, Germany: Springer-Verlag, 2018, vol. 117, pp. 135–159.
[15] J.-Y. Jian, A. M. Bisantz, and C. G. Drury, "Foundations for an empirically determined scale of trust in automated systems," Int. J. Cogn. Ergon., vol. 4, no. 1, pp. 53–71, 2000.
[16] J. D. Lee and N. Moray, "Trust, self-confidence, and operators' adaptation to automation," Int. J. Hum.-Comput. Stud., vol. 40, no. 1, pp. 153–184, 1994.
[17] H. Azevedo-Sa, S. K. Jayaraman, C. T. Esterwood, X. J. Yang, L. P. Robert, and D. M. Tilbury, "Real-time estimation of drivers' trust in automated driving systems," Int. J. Soc. Robot., pp. 1–17, 2020.
[18] J. D. Lee and K. A. See, "Trust in automation: Designing for appropriate reliance," Hum. Factors, vol. 46, no. 1, pp. 50–80, 2004.
[19] B. C. Kok and H. Soh, "Trust in robots: Challenges and opportunities," Curr. Robot. Rep., vol. 1, pp. 297–309, 2020.
[20] H. Azevedo-Sa, S. K. Jayaraman, X. J. Yang, L. P. Robert, and D. M. Tilbury, "Context-adaptive management of drivers' trust in automated vehicles," IEEE Robot. Autom. Lett., vol. 5, no. 4, pp. 6908–6915, 2020.
[21] M. Chen, S. Nikolaidis, H. Soh, D. Hsu, and S. Srinivasa, "Trust-aware decision making for human-robot collaboration," ACM Trans. Human-Robot Interact., vol. 9, no. 2, pp. 1–23, 2020.
[22] S. Sheng, E. Pakdamanian, K. Han, Z. Wang, J. Lenneman, and L. Feng, "Trust-based route planning for automated vehicles," in Proc. 12th ACM/IEEE Int. Conf. Cyber-Physical Syst. (ICCPS '21), 2021.
[23] M. Chen, S. Nikolaidis, H. Soh, D. Hsu, and S. Srinivasa, "Planning with trust for human-robot collaboration," in Proc. 2018 ACM/IEEE Int. Conf. Human-Robot Interact., 2018, pp. 307–315.
[24] Y. Guo and X. J. Yang, "Modeling and predicting trust dynamics in human–robot teaming: A Bayesian inference approach," Int. J. Soc. Robot., pp. 1–11, 2020.
[25] M. Fooladi Mahani, L. Jiang, and Y. Wang, "A Bayesian trust inference model for human-multi-robot teams," Int. J. Soc. Robot., pp. 1–15, 2020.
[26] M. Desai, P. Kaniarasu, M. Medvedev, A. Steinfeld, and H. Yanco, "Impact of robot failures and feedback on real-time trust," in Proc. 8th ACM/IEEE Int. Conf. Human-Robot Interact., 2013, pp. 251–258.
[27] J. Lee, J. Fong, B. C. Kok, and H. Soh, "Getting to know one another: Calibrating intent, capabilities and trust for human-robot collaboration," in Proc. 2020 IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), 2020, pp. 6296–6303.
[28] Y. Wang, Z. Shi, C. Wang, and F. Zhang, "Human-robot mutual trust in (semi)autonomous underwater robots," in Cooperative Robots and Sensor Networks 2014. Berlin, Germany: Springer, 2014, pp. 115–137.
[29] M. Patacchiola and A. Cangelosi, "A developmental Bayesian model of trust in artificial cognitive systems," in Proc. 2016 IEEE Int. Conf. Dev. Learn. Epigen. Robot. (ICDL-EpiRob), 2016, pp. 117–123.
[30] S. Vinanzi, M. Patacchiola, A. Chella, and A. Cangelosi, "Would a robot trust you? Developmental robotics model of trust and theory of mind," Philos. Trans. R. Soc. Lond. B Biol. Sci., vol. 374, no. 1771, 2019.
[31] J. Pennington, R. Socher, and C. D. Manning, "GloVe: Global vectors for word representation," in Proc. 2014 Conf. Empir. Methods Nat. Lang. Process. (EMNLP), 2014, pp. 1532–1543.
[32] A. Paszke, S. Gross, F. Massa, A. Lerer et al., "PyTorch: An imperative style, high-performance deep learning library," in Adv. Neural Inf. Process. Syst., vol. 32, 2019, pp. 8024–8035.
[33] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in Proc. Int. Conf. Learn. Representat. (ICLR), 2015.