Keywords: Trust; Reputation; Trust and reputation management; Network security; Security threats; Multi-agent systems; Reliability evaluation

Abstract: Trust and Reputation Management (TRM) systems are used in various environments, and their main goal is to ensure efficiency despite malicious or unreliable agents striving to maximize their utility or to disrupt the operations of other agents. However, TRM systems can themselves be targeted by specific attacks, which can reduce the efficiency of the environment. The impact of such attacks on a specific system cannot be easily anticipated and evaluated. The article presents models of the environment and of the TRM system operating in that environment. On that basis, measures of the reliability of TRM systems are defined to enable a comprehensive and quantitative evaluation of the resistance of such systems to attacks. The presented methodology is then used to evaluate an example TRM system (RefTRM), through the created and briefly described tool TRM-RET (Trust and Reputation Management – Reliability Evaluation Testbed). The results indicate that a system's specific properties can be identified on the basis of the proposed tests and metrics; for example, the RefTRM system is quite vulnerable to an attack tailored to the parameters used by this system.
…result of the decision-making process by other agents (e.g., by changing the result of calculating trust or reputation measures).

One of the less explored problems associated with TRM systems is the development of a methodology and measures for assessing their resistance to attacks, enabling their exhaustive and systematic comparison. This paper focuses on assessing the resilience of TRM systems in the context of malicious attacks directly targeting the TRM systems themselves. Therefore, the main subject of interest of the article is whether trust and reputation management systems provide reliable information, significantly supporting decision-making in conditions of uncertainty, even with active and sophisticated counteraction by potential adversaries.

Due to the lack of a widely accepted methodology for assessing resistance to attacks, individual TRM system proposals are tested based on different assumptions about how to conduct an attack and considering different metrics, making it difficult to compare the results obtained. Developing a methodology for evaluating attack resistance may contribute to more effective testing of TRM systems, as Sievers (2022) claims.

The article presents a model of the environment in which the TRM system can be used (in Section 3.1) and a generic model of TRM systems (in Section 3.2), developed based on existing works (described briefly in Section 2) but significantly extending them. On that basis, the article proposes measures of the reliability of TRM systems to evaluate their efficiency and resistance to attacks (in Section 4). Section 5 describes an example environment, an example TRM model, and the set of tests used for evaluation, and presents the results obtained. A summary and prospective future works are presented in Section 6.

2. Related works

The literature review considered theoretical descriptions and models of TRM systems and proposed measures of reliability evaluation. Although the literature concerning TRM systems is very rich, in this short survey we focus on publications related to the modelling of TRM systems and proposed measures of reliability evaluation.

2.1. Models of TRM systems

Trust and reputation management may become a new, defined standard security service next to confidentiality, integrity, availability, authentication, authorization or accountability, according to Wierzbicki (2010). Wierzbicki also presents his model of the trust management system. He indicates, among other things, the object of "proof", which can be any information that allows the calculation of trust (e.g., a recommendation, report, or observation). We present a different model in this article, but in many aspects it is similar to the model presented by Wierzbicki (2010).

A model of a TRM system to be used for evaluating resistance to attacks was proposed by Ghasempouri and Ladani (2019). It is worth noting that Ghasempouri and Ladani (2019) quote the prior publication written by the first author of this paper (Janiszewski, 2017), which presents concepts helpful in creating a model for assessing the resilience of trust and reputation management systems. Ghasempouri and Ladani (2019) compare their own model with the initial model presented by Janiszewski (2017) and show some advantages of their model. The indicated limitations of the model were noticed and explicitly acknowledged in the article by Janiszewski (2017), as it described only the initial stage of creating a comprehensive system and attack model. Ghasempouri and Ladani (2019) present a different approach to creating a model of the TRM system for evaluation. Both the model of the TRM system presented by Ghasempouri and Ladani (2019) and the one presented by Janiszewski (2017) have many common features but are also significantly different. Despite the lack of complete convergence, they do not seem contradictory but only focus on different aspects. In our opinion, our model can be more easily used in research on real systems.

Ghasempouri and Ladani (2019) indicate that most of the works on the security analysis of TRM systems are not based on an appropriate formal model but only perform intuitive evaluations in relation to specific cases. Many works evaluate TRM systems using simulation methods, but their essential characteristic is the lack of generality and limited applicability. These conclusions are consistent with the observations of the authors of this paper. Ghasempouri and Ladani (2019) also claim that their model of the TRM system – TRIM (Trust and Reputation Interaction Model) – is able to cover a broad spectrum of TRM systems and that it is possible to define advanced and complex attacks on its basis. In order to demonstrate the applicability of the model, several known TRM systems and selected significant attacks have been defined based on the presented model. In addition, a TRIM-checker tool was created for initial verification of the resistance of TRM systems to attacks. However, as the authors emphasize, this tool still has significant limitations. Indeed, the work is important in the context of creating a TRM system model for evaluating resistance to attacks. However, it still seems to be insufficient in terms of the practical evaluation of TRM systems. First of all, Ghasempouri and Ladani (2019) do not consider some properties of TRM systems that depend on the environment in which they operate. Despite the authors' claim about creating a comprehensive model, several simplifying assumptions were adopted, some not expressed explicitly. Examples of such assumptions and constraints are as follows:

• environments consisting of a maximum of 4 agents are used; moreover, in an environment consisting of more agents, evaluation is impossible due to significant computational complexity;
• an attack carried out by malicious agents must be defined a priori (it is not possible to adapt attackers' strategies to the actual state of the environment), i.e., adaptive attacks, which pose the greatest threat to TRM systems, cannot be taken into account;
• the analysis can be carried out only for predetermined parameters of the TRM system, and it is not possible to optimize the selection of parameters;
• only one homogeneous service that can be provided in the environment is considered; it is not possible to include more services in the model;
• it is not possible to model different classes of trust or reputation (e.g., recommendation trust – in issued recommendations – or action trust – in the quality of the service provided);
• only extreme values of individual parameters or confidence measures are considered, while in practice it is possible to use (e.g., by attackers) the entire spectrum of possible values, which may significantly impact attack efficiency.

Furthermore, using their tool and model, Ghasempouri and Ladani (2019) made calculations considering only the first ten interactions. Moreover, they note that after about 20 interactions there is a "state space explosion", which in practice makes further evaluation impossible. Therefore, attacks that require attackers to first establish a high level of trust or reputation before proceeding cannot be modelled.

Concerning the work of Ghasempouri and Ladani (2019), the only aspect not included in the model presented in this article is the possibility of the service provider's refusal to provide the service. However, situations where this is possible are rare in real environments and can additionally be perceived as a kind of discrimination. Moreover, extending the model to include such a case is not particularly difficult, but it seems to lead to an unnecessary increase in its complexity.

It is also worth noting that the evaluation of the works of other authors, made by Ghasempouri and Ladani (2019), is carried out on a discretionary basis and does not always accurately reflect reality. For example, the prior publication written by the first author of this paper (Janiszewski, 2017) is perceived as considering only distributed reputation systems and completely ignoring the aspects of centralized and distributed trust systems and the centralized reputation system. In contrast, that article indicates all four categories of systems according to
the definition adopted by Ghasempouri and Ladani (2019). Other aspects of the comparison also seem to support the thesis of the superiority of the authors' own model. Despite the above remarks, it should be emphasized that the work is an important step in the quantitative comparison of trust and reputation management systems.

The model of the TRM system as a reinforcement learning problem was defined by Bidgoly and Arabi (2023). The model was created to identify new ways in which malicious agents can behave and to discover the worst possible attacks against a system. The approach, aimed at discovering attacking activities that will be most effective in relation to a certain criterion, is similar to the approach of the authors of this article when creating the MEAEM method.

Some classification criteria that may be useful in creating the model of the TRM system, also in relation to the model of attacks on such systems, especially in the e-commerce environment, were indicated by Pereira et al. (2023).

A study on the classification of trust models was conducted by Alhandi et al. (2023). The set of presented factors and characteristics could also be used for the formalization of TRM systems.

Attempts to determine the requirements for the content of the description of TRM systems, in order to strive for standardization and, to some extent, create a general model of such systems, were made by Mármol and Pérez (2010). Their article indicates that the functioning of TRM systems can be divided into five main phases: information gathering, evaluation and ranking, agent selection, transaction, and reward or punishment. This division is also reflected in other work by Mármol and Pérez (2009a). Sievers (2022) also proposed a model of the TRM system, but its use in evaluating the resistance of TRM systems to attacks could be limited.

2.2. Measures of reliability evaluation of TRM systems

The "malicious node detection performance" (MDP) was defined by Sun et al. (2008). MDP represents the average detection level of malicious agents (i.e., how many malicious agents were discovered in the environment on average):

$$MDP = \frac{\sum_{i \in M} n_B^i}{|M|}$$

where: $n_B^i$ – the number of reliable agents that detected that agent $i$ is malicious; $M$ – the set of malicious agents; $B$ – the set of reliable agents.

A complementary measure, also defined by Sun et al. (2008), is the level of false positives:

$$FAR = \frac{\sum_{i \in B} n_B^i}{|B|}$$

A similar measure of effectiveness (detection accuracy) was also defined by Marzi and Li (2013).

The "packet delivery ratio", defined as the ratio of packets delivered to the recipient, was used by Sun et al. (2008) (as their paper describes the use of TRM systems for routing protocols). It is worth noting that other works (Janiszewski, 2014) use similar measures, often referred to as "network effectiveness", which may, contrary to the name, also be used in other types of environments. An analogous measure – the level of packet loss, as the percentage of undelivered packets out of the packets sent – was defined by Zahariadis et al. (2009). A similar approach, using the packet delivery ratio as a measure of the efficiency of a TRM system in a MANET network, was presented by Maheswari and Vijayabhasker (2023).

In publications by Marzi and Li (2013) and Zahariadis et al. (2009), the energy consumption of agents being WSN (Wireless Sensor Network) nodes was measured while the trust management system was running and compared with the level of energy consumption when the TRM system was not used – on this basis, the energy consumption due to the use of a trust management system was determined.
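To make the two measures above concrete, the following minimal Python sketch computes MDP and FAR from a boolean detection relation. It is an illustration only: the names `detected`, `malicious` and `reliable`, and the pair-set encoding, are our assumptions, not constructs from Sun et al. (2008).

```python
# Illustrative sketch (not from Sun et al., 2008): MDP and FAR over a
# detection relation, where (b, i) in detected means that reliable agent b
# has flagged agent i as malicious.
def mdp(detected, malicious, reliable):
    """Average number of reliable agents that flagged each malicious agent."""
    return sum(sum(1 for b in reliable if (b, i) in detected)
               for i in malicious) / len(malicious)

def far(detected, reliable):
    """Average number of reliable agents that wrongly flagged each reliable agent."""
    return sum(sum(1 for b in reliable if (b, i) in detected)
               for i in reliable) / len(reliable)

# Two reliable agents both detect the only malicious agent; no false alarms.
detected = {("b1", "m1"), ("b2", "m1")}
print(mdp(detected, {"m1"}, {"b1", "b2"}))  # 2.0
print(far(detected, {"b1", "b2"}))          # 0.0
```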
A TRM system which, using machine learning algorithms, allows for the detection of agents providing services of unsatisfactory quality or committing attacks was proposed by Magdich et al. (2022). They evaluate the proposed system in relation to the detection of the most popular non-cooperative attacks, using measures based on the number of correctly and incorrectly classified agents. An overview of publications containing proposals for TRM systems, specifying which attacks are considered by their authors, was also created by Magdich et al. (2022).

The article by Liu et al. (2023) does not define specific measures that can be used to evaluate TRM systems but lists 14 evaluation criteria that a TRM system needs to meet to build a trustworthy environment. Despite the description of each criterion, due to the lack of a specific measure, the assessment of whether these criteria are met seems to be subjective.

Three regression model evaluation indicators – MAE (mean absolute error), MSE (mean square error) and RMSE (root mean square error) – were used by Wu et al. (2024) to evaluate the closeness of the trust model's prediction results to the real trust values. However, real trust values are hard to establish in real, dynamic environments.

Two evaluation metrics, based on the claim that the success of a reputation-based trust model depends on its accuracy in predicting future trust relationships, were introduced by You et al. (2024). Precision degree reflects the success of the reputation model in resisting malicious attacks through the ratio of active top-ranked honest agents. Similarity degree presents the divergence between the comprehensive trust values at the previous step and the actual trust rating feedback values. Both measures, however, seem to be insufficient, as some unsuccessful interactions can be the result of the environmental characteristics, not the behaviour of agents.

The above examples of measures of the effectiveness of TRM systems can be applied to selected environments. A different approach attempts to create measures that could be applied to a broad class of trust and reputation management systems. An example of a paper proposing general measures of the effectiveness of TRM systems is the article by Janiszewski (2017). The article proposes, among others, measures such as the speed of propagation of information about a change in behaviour, computational decision overhead, computational overhead for request handling, overhead for memory resources, overhead for energy, total reputation of malicious agents, and total reputation of reliable agents. Some of the proposed measures concern the broader issue of assessing TRM systems' functioning, not only assessing their resistance to attacks. Measures relevant to assessing the reliability of TRM systems have been extended and are described in more detail in Section 4.

2.3. Summary

Although the subject of trust and reputation management systems, as well as attacks against such systems, is widely explored, the prevailing opinion among research teams is that there is still no comprehensive approach to evaluate the reliability of such systems exhaustively. In particular, as the first author of the article pointed out in an earlier publication (Janiszewski, 2016), the research on the resistance of TRM systems to attacks described in the literature is characterized by the lack of a precisely defined methodology for conducting research in a way that allows results for different systems to be compared, and is limited to a selected set of measures, e.g. efficiency, without taking into account how gaining high trust or reputation by malicious agents may contribute to disturbing the efficiency in the long run.

Typically, a general measure of system effectiveness is used for experimental testing of attacks (usually in the form of simulations). Definitions of this measure vary depending on the authors of the research. Usually, it is a certain sum of the interaction results – the higher it is, the more effective the system is. This is justified by the fact that a high value of this measure suggests that malicious agents are rarely selected as service providers. This measure is insufficient for several reasons:
• It does not take into account the possibility of intentional manipulation targeted at a specific group of agents.
• It does not consider the fact that the purpose of malicious agents may not be to minimize the average system efficiency; they may have other goals, e.g., to suddenly drop the efficiency at a certain point in time.
• It fails to account for the extent of manipulation of trust or reputation values among individual agents. This issue may be more relevant, firstly, because malicious agents do not necessarily intend to reduce efficiency, and secondly, because achieving higher trust values may allow malicious agents to carry out an attack in the future.

Some of the above conclusions were also drawn by the authors of theoretical descriptions of attacks, e.g. by Velloso et al. (2008), but usually measures that would be free of the mentioned disadvantages are not proposed. For this reason, this article proposes measures that allow the identification of other targets of attackers and enable a broader evaluation of TRM systems in terms of adversary activities.

3. Modelling environment and TRM system

There are many TRM systems, among which many common features can be distinguished. The purpose of this section is to present a model that is general enough to cover as many environments and TRM systems as possible but at the same time detailed enough to enable a non-trivial reliability evaluation of such systems, also in a quantitative way. The model of the TRM system was created based on the literature and the authors' own conclusions.

3.1. The model of the environment

The presented model of the environment is valid both for environments in which TRM systems operate and for environments in which such a system does not operate. The environment consists of agents interacting to provide services. The following assumptions were made to define the model:

• agents from the finite set $A$ operate in the environment: $|A| = n$, $A = \{a_1, \ldots, a_n\}$, where $a_1, \ldots, a_n$ are agents with identifiers $1, \ldots, n$, respectively;
• a subset of the set of agents $A$ is the set of reliable agents $A_B$ ($A_B \subseteq A$), $|A_B| = n_B$;
• a subset of the set of agents $A$ is the set of malicious agents $A_M$ ($A_M \subseteq A$), $|A_M| = n_M$; the sets of malicious and reliable agents are disjoint: $A_B \cap A_M = \emptyset$, $A_B \cup A_M = A$, $n_B + n_M = n$;
• services from a specific finite set of services $U = \{u_1, \ldots, u_l\}$ can be provided in the environment;
• each of the agents $a_1, \ldots, a_n$ can provide services from the defined set of services of this agent, i.e. $U_{a_1}, \ldots, U_{a_n}$, each of these sets being any subset of the set $U$, i.e. $\forall k \in [1, n]: U_{a_k} \subseteq U$;
• services in the environment can be provided with a quality $q$ with a value from the set of service quality values $Q$;
  ○ the set $Q$ may be infinite;
  ○ the set $Q$ may contain values of quality $q$ that are not currently provided in the environment;
  ○ the values in the set $Q$ can be ordered from the lowest to the highest;
  ○ in the set $Q$ there is a quality-of-service value corresponding to the lack of service ($q_0 \in Q$);
  ○ in the set $Q$ there is a quality-of-service value corresponding to the ideal quality ($q_{max} \in Q$), such that $\forall q_k \in Q, q_k \neq q_{max}: q_k < q_{max}$;
  ○ the lowest quality value in the set is the value $q_{min}$, such that $\forall q_k \in Q, q_k \neq q_{min}: q_k > q_{min}$;
• each of the agents, for each of the services it provides, has a defined maximum quality of this service that can be provided; e.g., the maximum quality of service $u_l$ provided by agent $a_k$ is $q_{a_k}^{u_l} = q$, where $q$ is a certain value from the set $Q$ ($q \in Q$);
  ○ an agent which can provide a given service with the maximum quality $q$ is also able to provide this service with a quality lower than $q$;
• there is a set of agents – service providers – $A_P$, which includes agents that provide at least one service, i.e. $\forall a_k \in A_P: U_{a_k} \neq \emptyset$, $A_P \subseteq A$, $A_P \neq \emptyset$;
• for each service $u_l$, there is a set of agents – service providers of this service – $A_{P:u_l}$, such that $\forall a_k \in A_{P:u_l}: u_l \in U_{a_k}$;
• each of the agents $a_1, \ldots, a_n$ may request services from the defined set of services requested by this agent, i.e. $U_{R:a_1}, \ldots, U_{R:a_n}$, each of these sets being a subset of the set $U$, i.e. $\forall k \in [1, n]: U_{R:a_k} \subseteq U$;
• there is a set of agents – service recipients – $A_R$, which includes agents that request at least one service at least once, i.e. $\forall a_k \in A_R: U_{R:a_k} \neq \emptyset$, $A_R \subseteq A$;
• the union of the sets of services requested by each of the agents in the environment is a subset of the set of services provided in the environment, i.e. $\bigcup_{i=1}^{n} U_{R:a_i} \subseteq U$;
  ○ not all services provided in a given environment must be requested;
• in the general case, all the above parameters are variable, i.e. they can change over time, and each of the above parameters can be determined at a specific moment of the environment's operation, i.e. at time $m$.
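The assumptions above translate naturally into a data model. The sketch below is one possible Python encoding, written for illustration; the class and field names are ours and are not part of the formal model.

```python
# Illustrative encoding of the environment model (names are our assumptions).
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class Agent:
    ident: int                          # identifier from 1..n
    malicious: bool                     # membership in A_M (True) or A_B (False)
    provided: Dict[str, float] = field(default_factory=dict)  # u_l -> q_{a_k}^{u_l}
    requested: Set[str] = field(default_factory=set)          # U_{R:a_k}

@dataclass
class Environment:
    agents: List[Agent]                 # the set A, |A| = n
    services: Set[str]                  # the set U
    qualities: Set[float]               # the set Q (here assumed finite)

    def providers(self, service: str) -> List[Agent]:
        """The set A_{P:u_l}: agents able to provide the given service."""
        return [a for a in self.agents if service in a.provided]
```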
3.1.1. Service request
As part of the operation of the environment, we assume that at subsequent moments $m_1, m_2, m_3, \ldots$ requests from service recipients appear. Consecutive requests can be ordered by appearance time and agent ID and numbered consistently throughout the system, thus achieving sequential consistency. We assume that interaction number $k$ (interaction $i_k$) starts at time $m_k$. In addition, we assume that the analysis of the environment's operation begins at time $m_0 = 0$ and that the first interaction is initiated at time $m_1 > 0$.

It is worth emphasizing that only those moments when service recipients' requests appear (more precisely, when they identify the need to use a particular service) are considered. The appearance of subsequent requests does not have to occur at fixed time intervals, i.e. $\forall k: m_k - m_{k-1} \neq const$.

We also assume that each interaction has a negligible duration. Therefore, the next interaction cannot start before the previous one is finished. There is, at most, one interaction at a time. It is worth emphasizing that this is a significant limitation of the model, especially for some types of environments.

Let us assume that we are considering the operation of the environment in time from $m_0$ to $m_{end}$ and that during this time $l_I$ interactions ($l_I \in \mathbb{N}$) were initiated. Therefore, the numbers of subsequent interactions $1, 2, \ldots, l_I$ form the set $L_I = \{1, 2, \ldots, l_I\}$. Obviously, $|L_I| = l_I$. The set $M$ is the set of times of requests: $M = \{m_1, m_2, \ldots, m_{l_I}\}$.

The request $e_l$ is an ordered 3-element tuple $(a_i, u_k, m_l)$, where: $a_i \in A$ – service recipient (agent requesting the service), $u_k \in U$ – requested service, $m_l \in M$ – time of occurrence of request number $l$. The set of all requests is the set $E$, $E = \{e_1, e_2, \ldots, e_{l_I}\}$, $|E| = l_I$.

3.1.2. Interaction and interaction result
As defined earlier, $Q$ is the set of possible values of service quality. An interaction involves an agent's request for a service and the provision of that service by another agent.

The interaction function is the partial function $f_{int}: A \times U \times M \times A \to Q$, where if $f_{int}(a_i, u_k, m_l, a_j) = q_l$, then $a_i \in A$ – service recipient (agent requesting the service), $u_k \in U$ – requested (provided) service, $m_l \in M$ – start time of interaction number $l$, $a_j \in A$ – service provider (agent providing the service), and $q_l \in Q$ – quality of service provided
within the $l$-th interaction (the result of this interaction).

The interaction set $I$ is the domain of the interaction function. Interaction $i_l \in I$ is the element of the set of interactions number $l$ (the argument of the interaction function number $l$), i.e. the tuple $(a_i, u_k, m_l, a_j)$, where $a_i \in A$ – service recipient (agent requesting the service), $u_k \in U$ – requested (provided) service, $m_l \in M$ – start time of interaction number $l$, $a_j \in A$ – service provider.

The sequence of interaction results is the sequence $Q_{RES}: L_I \to Q$, where the indexes are the interactions' numbers and the sequence values are from the set $Q$ and correspond to the results of the interactions, ordered by the time the interaction started. $q_l$ is the $l$-th element of $Q_{RES}$, i.e. the result of interaction number $l$.

It is relevant to point out that the quality of the service provided by the service provider in general does not have to be the same as the quality of the service received by the recipient; therefore, let us introduce the concept of the disturbance function. The disturbance function is the partial function $f_{dis}: Z \times M \to O$, where if $f_{dis}(z_l, m_l) = o_l$, then: $z_l \in Z$ – event of the occurrence of a disturbance with a particular value of influence on the result of the interaction, $m_l \in M$ – time of interaction number $l$, and $o_l \in O$ – service quality assessed by the service recipient (the actual result of the interaction).

The set of possible interaction results $O$ is the same as the set of possible values of service quality $Q$, i.e. $O = Q$, and the quality of the service provided within the interaction is the actual result of this interaction.

The sequence of actual interaction results is the sequence $O_{RES}: L_I \to O$, where the indexes are the interactions' numbers and the sequence values are from the set $O$ and correspond to the actual interaction results, ordered by the time the interaction started. $o_l$ is the $l$-th element of $O_{RES}$, i.e. the actual result of interaction number $l$.

3.1.3. Service provider selection
The service provider selection function determines how an agent is selected from the set of potential service providers providing the requested service.

Let $\mathcal{A}$ be the family of all subsets of $A$: $\mathcal{A} = P(A)$. Then each element of the family $\mathcal{A}$ is a specific subset of the set $A$ and can be identified with a set of potential service providers. The service provider selection function is the partial function $f_{sel}: \mathcal{A} \times M \to A$, which for any $X \in \mathcal{A}$ satisfies the condition $f_{sel}(X, m) \in X$.

3.1.4. Environment specification
Based on the introduced definitions and notations, we can give a formal definition of the environment:

The environment is an ordered 5-element tuple $(A, U, E, F, M)$, where $A$ is the set of agents (including their characteristics), $U$ is the set of services, $E$ is the set of requests, $F$ is the set of partial functions defined in the environment, $F = \{f_{sel}, f_{int}, f_{dis}\}$, and $M$ is the set of interaction times.

In order to characterize the environment, the following must be specified:

• the agents $A$ operating in the environment;
• the services $U$ provided in the environment and the set of possible values of service quality $Q$;
• for each agent $a_k$, the services provided by this agent: $U_{a_k}$ – as a part of the agents' characteristics;
• for each service $u_l$ provided by each of the agents $a_k$, the maximum possible quality of this service: $q_{a_k}^{u_l}$ – as a part of the agents' characteristics;
• the set of interaction times $M$;
• the set of requests $E$;
• the service provider selection function $f_{sel}$;
• the interaction function $f_{int}$;
• the disturbance function $f_{dis}$.

In the environment, for each request specified in $E$, the following actions occur:

• selection of the service provider – by the function $f_{sel}$;
• interaction – according to the function $f_{int}$;
• occurrence of a disturbance – according to the function $f_{dis}$.
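Schematically, this per-request pipeline can be sketched as follows (continuing the illustrative `Environment` class above; the uniformly random `f_sel` and the identity disturbance are placeholder assumptions, not properties of the model):

```python
# Illustrative request-handling loop: f_sel, then f_int, then f_dis.
import random

def run_environment(env, requests, f_dis=lambda q, m: q):
    """requests: iterable of (recipient, service, time) tuples, i.e. the set E."""
    actual_results = []                               # the sequence O_RES
    for recipient, service, time in requests:
        candidates = env.providers(service)           # potential providers of u_k
        provider = random.choice(candidates)          # f_sel: here, uniform random
        quality = provider.provided[service]          # f_int: provider's quality
        actual_results.append(f_dis(quality, time))   # f_dis: disturbed result o_l
    return actual_results
```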
3.1.5. Model limitations
Even though the presented model of the environment reflects, in the authors' opinion, the significant features of the environment in a sufficient way, it is not able to take into account all aspects of a real environment. The following sections briefly describe the limitations of the model.

3.1.5.1. Homogeneity and heterogeneity of services. In practice, services, although very similar, may differ in certain features. For example, one seller may sell a particular product on an e-commerce platform, and another may offer the same product but with a shorter delivery time. Two offered products could be very similar but differ in some particular, insignificant features. Interestingly, in order to capture such specificity effectively, it would be necessary to introduce an additional concept: substitutability or similarity of services. This is due to the fact that, despite the differences presented in the above examples, these services, from the point of view of the service recipient, satisfy the same need. Within the presented model, the services of the above examples will be treated as the same. This approach is justified by the fact that the additional features of these services are not significant. In such an approach, it becomes essential to decide at what point services differ so much that they should be described as entirely different. The rule worth adopting is based on the importance of the features of a given service: if the services differ significantly, i.e., they respond to different needs of the recipient, then it is worth treating them as different services (e.g., if they are different products within the e-commerce platform). Of course, this is somewhat subjective, and the definition of services depends on the person specifying the characteristics of a particular environment. Nevertheless, it seems that, in general, this limitation should not affect the results of assessing the reliability of TRM systems.

3.1.5.2. Negligible interaction duration. The assumption of a negligible interaction duration is an important limitation of the model. In practice, in most real environments, the time from the moment of sending the request, through the provision of the service by the service provider, to the receipt of the service by the recipient is non-zero. Adopting such a limitation enables more straightforward subsequent analysis and significantly reduces the complexity of the model itself. The consequence of this assumption is that there will never be a situation where more than one interaction occurs simultaneously. This fact facilitates the analysis, but on the other hand, some strategic actions of agents cannot be taken into account. In particular, it will not be possible to model a case where a service provider receives many requests for services that it does not provide, while the service recipients do not even realize that they have been deceived because the service delivery time has not yet expired.

It is possible to extend the model to include the duration of the interaction from request to receipt of the service. In this case, it would become a parameter of the interaction itself. Still, interactions could be ordered according to the time of sending the request for a given service and the identifier of the requesting agent. The authors chose to exclude this aspect because omitting it still allows valuable analyses to be conducted while avoiding excessive complexity. At the same time, the issue of considering interaction time should be treated as an interesting topic for future research.
3.2. The model of the TRM system

Agents operating within the environment and using a trust and reputation management system can exchange recommendations about other agents, which can be used to evaluate trust in other agents or the reputation of other agents. Based on the assessment of trust or reputation, the agent requesting a service can choose the agent with whom it will interact (select the service provider). After the interaction, it is possible to update the trust or reputation values of the agents (or a subset of the agents) that exist in the system.

The set of possible agent actions includes at least the following:

1. Collecting information, including requesting recommendations.
2. Trust or reputation evaluation.
3. Selection of the agent which will provide the service.
4. Interaction (service provision) and evaluation of its result.
5. Updating trust or reputation.

3.2.1. Trust or reputation
This section provides a formal description of trust. Reputation can be defined analogously. Both trust and reputation are context-specific.

$T$ is the set of possible trust values determined by the TRM system. The trust value $t_n \in T$ can be expressed by a single value or a vector (e.g., containing a value representing the trust estimation and a second value characterizing the certainty of this estimation). Further considerations are carried out with regard to trust as a single value.

The trust value $t_{min} \in T$ is the minimum trust value, such that $\forall t_n \in T: t_{min} \leq t_n$, and the trust value $t_{max} \in T$ is the maximum trust value, such that $\forall t_n \in T: t_{max} \geq t_n$.

Trust is the partial function $f_{trust}: A \times A \times C \times M \to T$, where if $f_{trust}(a_i, a_j, c_k, m_l) = t_n$, then $a_i \in A$ – trusting agent, $a_j \in A$ – trusted agent, $c_k \in C$ – trust context, $m_l \in M$ – time (of interaction or trust assessment), $t_n \in T$ – trust value. For the purposes of the rest of the article, let us introduce the following notation: $t_{a_i \to a_j}^{c_k;m_l}$ – the trust value of agent $a_i$ in agent $a_j$ in context $c_k$ at time $m_l$.

[…] if a recommendation in a given context is possible, it should be possible to trust this recommendation (i.e., to create another level of context). Such a context can be used to assess confidence in recommendations concerning the provision of certain services. Similarly, when looking for a doctor, one can be guided by recommendations about various doctors. However, one will have much more trust in the recommendation of a good and honest friend who used this doctor's services than in the recommendation of a friend who only heard about this doctor, or in an anonymous recommendation. In this approach, the context is determined by the recommendation for the provision of services from a specific set (denoted as $r(U_k)$). This level of context (a recommendation regarding the provision of services) could be defined as the first level of context.

Following the same reasoning, recommendation trust can also be determined with regard to recommendations about recommendations regarding the provision of services, by defining another context, denoted as $r(r(U_k))$. This level of context (a recommendation regarding a recommendation regarding service provision) could be defined as the second level of context.

The above reasoning can be repeated, but in practice it makes sense to consider contexts only up to a certain level.

A specific context may also consist of a set of pre-defined contexts; e.g., if one trusts a car mechanic, one can also trust their recommendations for a car shop.

The above considerations make it possible to present a formal definition of context. The context $c_k \in C_\alpha$ defines what the recommendation or the measure of trust or reputation is about. The context is:

• the provision of services from any non-empty subset of the set of all services: $c_k = U_k$, where $U_k \subseteq U$, $U_k \neq \emptyset$, $U = \{u_1, \ldots, u_l\}$; or
• a recommendation for any context: $c_l = r(c_k)$; or
• any set of contexts: $c_m = \{c_k, c_l, \ldots\}$.

In general, the set of all possible contexts $C_\alpha$, based on the above definition, is infinite, because for each existing context $c_k$ one can create, for example, the context $c_l = r(c_k)$. However, each TRM system must define a finite set of contexts used in the system: $C \subset C_\alpha$. The set of contexts $C$ contains the contexts for which trust or reputation values can be determined.
j
6
M. Janiszewski and K. Szczypiorski Computers & Security 158 (2025) 104620
3.2.3.2. Interaction recommendation. The function of the interaction recommendation is the partial function $f_{rec_i}: A \times A \times I \times C \times M \to R$, where if $f_{rec_i}(a_i, a_j, i_p, c_k, m_l) = r_n$, then $a_i \in A$ – agent requesting the recommendation (recipient of the recommendation), $a_j \in A$ – agent delivering the recommendation (recommendation issuer), $i_p \in I$ – element of the set of interactions to which the recommendation applies (recommendation subject), $c_k \in C$ – recommendation context, $m_l \in M$ – time, $r_n \in R$ – recommendation value.

The interaction recommendation is the 6-element tuple $(a_i, a_j, i_p, c_k, m_l, r_n)$, created by extending the tuple of the function's arguments with the recommendation value. The set of such 6-element tuples is the set of interaction recommendations $R_{IS}$.

3.2.3.3. Recommendation. A recommendation is an agent recommendation or an interaction recommendation. The set of recommendations $R_S$ is the union of the set of agent recommendations and the set of interaction recommendations: $R_S = R_{AS} \cup R_{IS}$.

3.2.3.4. Request and selection of recommendation providers. The TRM system should specify how the agents who will issue recommendations are selected. The recommendation provider selection function defined below applies only to TRM systems where recommendations are issued upon an agent's request. In the case of TRM systems that use recommendations issued immediately after the interaction, the recommendations to which the agent has access, or certain aggregates of such recommendations (e.g., the average recommendation value), are used. The recommendation provider selection function is defined as follows:

Let $\mathcal{A}$ be the power set of $A$ ($\mathcal{A} = P(A)$). Then each element of $\mathcal{A}$ is a specific subset of the set $A$ and can be identified with a set of potential recommendation providers. The function of selection of recommendation providers is the partial function $f_{sel\_rec}: C \times A \times M \to \mathcal{A}$, where if $f_{sel\_rec}(c_i, a_j, m_l) = A_p$, then $c_i \in C$ – recommendation context, $a_j \in A$ – recommendation subject, $m_l \in M$ – time, $A_p \in \mathcal{A}$ – set of recommendation providers. A recommendation request is sent to each agent in the set of recommendation providers. The recommendation request $e_{R:l}$ is an ordered 4-element tuple $(a_i, a_p, c_k, m_l)$ or $(a_i, i_p, c_k, m_l)$, where: $a_i \in A$ – requestor of the recommendation, $a_p \in A$ – subject of the recommendation (an agent) or $i_p \in I$ – subject of the recommendation (an interaction), $c_k \in C$ – context of the recommendation, $m_l \in M$ – time of occurrence of the request.

3.2.3.5. Issuing and evaluating recommendations. In general, the recommendations may take into account:

• trust or reputation values;
• historical interaction results;
• other recommendations.

The exact method of creating the issued recommendation, determining its value, and thus the form of the recommendation function, is a property of a specific TRM system.

In the case of some TRM systems, the received recommendations may be evaluated, e.g. based on the actual interaction results observed after receiving a given recommendation. The recommendation evaluation algorithm is a procedure to update the trust (or reputation) value of the agent that issued the recommendation. In general, the recommendation evaluation algorithm may take into account the following factors:

• the recommendation issued by the agent (and the previous recommendations of the agent);
• trust or reputation values (of the agent issuing the recommendation or of the agent being the subject of the recommendation);
• the result of the interaction with the agent being the subject of the recommendation.

3.2.4. Trust and reputation update
A very important issue is how the trust and reputation values are calculated (updated) as a result of the trust or reputation function. In general, for the correct specification of the TRM system, it is necessary to indicate:

• for which agents and contexts the values of trust and reputation will be determined;
• when these values will be updated;
• how the update will be performed (how the new values will be calculated).

This should be determined by the interaction and agent evaluation algorithm, which is a procedure to update the trust value of the agent that participated in the interaction (or that agent's reputation).

In general, historical trust values, interaction results, and recommendations may be taken into account when calculating trust or reputation values.

The update of the trust or reputation value may result both from the functioning of the recommendation evaluation algorithm and from the interaction and agent evaluation algorithm.

3.2.5. Impact of trust or reputation on service provider selection and interactions
The main idea behind trust and reputation management systems is that they should make it possible to influence the choice of the service provider. Therefore, the use of the TRM system changes the form of the service provider selection function $f_{sel}$, and possibly the interaction function $f_{int}$, in such a way that they take into account additional characteristics of agents (non-existent in the absence of the TRM system), such as an agent's reputation or trust in the agent. Therefore, the TRM system specification must determine how trust and reputation measures are used to make decisions regarding the selection of a service provider and, possibly, how they affect the quality of the service provided.

3.2.6. TRM system specification
A new definition of the environment with a TRM system can be adopted, slightly expanded compared to the environment without the TRM system:

The environment with a TRM system is an ordered 9-element tuple $(A', U, E, F', M, T, P, C, R)$, where $A'$ is the set of agents (including their characteristics), $U$ is the set of services, $E$ is the set of requests, $F'$ is the set of partial functions defined in the environment with the TRM system, $M$ is the set of interaction times, $T$ is the set of trust values, $P$ is the set of reputation values, $C$ is the set of contexts, and $R$ is the set of recommendations. The set of partial functions specified in the TRM environment may include the following functions: $f_{sel}$, $f_{int}$, $f_{dis}$, $f_{trust}$, $f_{rep}$, $f_{rec}$, $f_{sel\_rec}$, or some of them. In the environment with the TRM system, the characteristics of the agents are changed (because they may contain, e.g., trust values towards other agents). The set of partial functions contains the additional functions $f_{trust}$ or $f_{rep}$, and possibly the functions $f_{rec}$ and $f_{sel\_rec}$; in addition, the functions $f_{sel}$ and $f_{int}$ depend on the extended characteristics of the agents.

In order to characterize the environment with a TRM system, the following elements of the TRM system must be specified (in addition to the elements of the environment listed before):

• a set of possible trust values $T$ or a set of possible reputation values $P$;
• the partial function of trust $f_{trust}$ or reputation $f_{rep}$, along with an interaction and agent evaluation algorithm;
• the set of trust, reputation or recommendation contexts $C$;
• if there are recommendations:
  ○ the type of recommendation used,
  ○ the partial function of recommendation $f_{rec}$,
  ○ how to generate the recommendation request $e_{R:l}$,
  ○ the partial function for the selection of recommendation providers $f_{sel\_rec}$,
  ○ the recommendation evaluation algorithm;
• the partial function of service provider selection $f_{sel}$, considering trust or reputation;
• the interaction function $f_{int}$, considering trust or reputation;
• the initial values of trust or reputation.

In the environment, for each request specified in $E$, the following actions occur:

• obtaining recommendations (if possible) – in accordance with the function $f_{sel\_rec}$;
• trust or reputation assessment – in accordance with the $f_{trust}$ or $f_{rep}$ function;
• selection of the service provider – in accordance with the $f_{sel}$ function;
• interaction – according to the function $f_{int}$;
• occurrence of a disturbance – according to the function $f_{dis}$;
• recommendation evaluation (if any);
• assessing or updating trust or reputation (if any).
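Extending the earlier environment loop with these steps gives the following schematic sketch; every `trm.*` method is a placeholder standing for the corresponding partial function or algorithm of a concrete TRM system, not an existing API.

```python
# Illustrative per-request loop in an environment with a TRM system.
def run_with_trm(env, requests, trm):
    results = []
    for recipient, service, time in requests:
        recs = trm.gather(recipient, service, time)          # f_sel_rec + issued recommendations
        scores = trm.assess(recipient, service, recs, time)  # f_trust or f_rep
        provider = trm.select(env.providers(service), scores)  # f_sel using trust/reputation
        outcome = trm.interact(provider, service, time)      # f_int followed by f_dis
        trm.evaluate_recommendations(recs, outcome)          # recommendation evaluation algorithm
        trm.update(recipient, provider, outcome, time)       # interaction and agent evaluation
        results.append(outcome)
    return results
```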
4. Reliability measures

The following reliability measures apply to trust and reputation management systems. The proposed measures in some cases (e.g., efficiency) overlap to some extent with those defined in the literature by Sun et al. (2008), Zahariadis et al. (2009) and Jøsang et al. (2007). Some of the measures were previously defined, in a slightly different way, in other publications by the first author of this article (Janiszewski, 2014, 2016, 2017, 2020).

4.1. Efficiency measures

Efficiency measures determine how efficient the environment is and how resistant it is to malicious agents.

Efficiency of the environment without disturbances ($E_q$) – the ratio of the sum of the interaction results to the product of the number of all interactions and the maximum quality of services in the environment:

$$E_q = \frac{\sum_{i=1}^{l} q_i}{l \cdot q_{max}} \tag{1}$$

Efficiency of the environment ($E$) – the ratio of the sum of the actual interaction results to the product of the number of all interactions and the maximum quality of services in the environment:

$$E = \frac{\sum_{i=1}^{l} o_i}{l \cdot q_{max}} \tag{2}$$

In Eqs. (1) and (2), $l$ is the number of interactions in the environment.

If $O = \{0, 1\}$, the efficiency of the environment is the number of successful interactions divided by the number of all interactions.

Temporary efficiency $n$ ($E_{(n)}$) – the efficiency of the environment, but taking into account only the last $n$ interactions:

$$E_{(n)} = \frac{\sum_{i=l-n+1}^{l} o_i}{n \cdot q_{max}} \tag{3}$$

In Eq. (3), $l$ is the number of interactions for which the temporary efficiency measure $n$ is determined.

Specifically, instantaneous efficiency is temporary efficiency 1 ($E_{(1)}$):

$$E_{(1)} = \frac{o_l}{q_{max}} \tag{4}$$

A measure similar to the efficiency of the environment is often used in the literature, but without $q_{max}$ in the denominator of the formula, which results from the fact that the maximum quality of services assumed in models usually has the value 1, although it is rarely defined explicitly. Although widely used, this measure has a severe disadvantage in assessing the impact of attacks on the environment. Notably, in a real environment, an agent that would provide a given service (e.g., service $u_l$) with the maximum quality may not exist, i.e. $\neg\exists k: q_{a_k}^{u_l} = q_{max}$. Therefore, even in the absence of any malicious actions by the attackers, the efficiency of the environment without disturbances (and also the efficiency of the environment) will be less than 1. It is hence justified to introduce two more measures.

Note that for each service in the environment, the maximum quality of service that can be provided can be determined, i.e., for the service $u_l$: $q_{max}^{u_l} = q_{a_k}^{u_l}$ such that $\forall a_j \in A_{P:u_l}: q_{a_j}^{u_l} \leq q_{a_k}^{u_l}$. Ideal efficiency is the efficiency of an environment in which services would be provided only by the agents that provide the best quality of service in the environment, and in a situation where no one performs any attack. In other words, such efficiency assumes that clients always choose the best service provider. The ideal efficiency is equal to 1 when the agents do not perform any attack on the provision of services and the service recipients ideally choose the service provider. Let us adopt the following notational convention: $u_{int:l}$ is the service provided as part of the $l$-th interaction.

Ideal efficiency ($E_{ideal}$) is equal to the ratio of the sum of the actual results of the interactions to the sum of the maximum qualities of the services (in the entire environment) provided in subsequent interactions:

$$E_{ideal} = \frac{\sum_{i=1}^{l} o_i}{\sum_{i=1}^{l} q_{max}^{u_{int:i}}} \tag{5}$$

where $q_{max}^{u_{int:i}}$ is the maximum quality (in the entire environment) of the service provided during the $i$-th interaction.

Ideal efficiency takes into account the global (for the entire environment) quality of service, but when calculating the efficiency measure, we can also take into account the maximum quality of service of the agent that provided the service during a specific interaction. Advanced efficiency takes into account which agents provided the service. Let us use the following convention: $a_{int:l}$ is the agent that provided the service in the $l$-th interaction, and $u_{int:l}$ is the service that was provided in this interaction.

Advanced efficiency ($E_{adv}$) is equal to the ratio of the sum of the actual results of the interactions to the sum of the maximum qualities of service of the agents that provided the services in subsequent interactions:

$$E_{adv} = \frac{\sum_{i=1}^{l} o_i}{\sum_{i=1}^{l} q_{a_{int:i}}^{u_{int:i}}} \tag{6}$$

where $q_{a_{int:i}}^{u_{int:i}}$ is the maximum quality of service provided by the agent that was the service provider during the $i$-th interaction.

Analyzing the formulas for the efficiency of the environment, ideal efficiency and advanced efficiency, the following relation can always be observed:

$$E_{adv} \geq E_{ideal} \geq E$$

Proof: by definition, $q_{a_{int:i}}^{u_{int:i}} \leq q_{max}^{u_{int:i}} \leq q_{max}$ for every $i$, which implies $E_{adv} \geq E_{ideal} \geq E$ (due to the equality of the numerators in the formulas for all types of efficiency).

Note that the efficiency of the environment can be equal to 1 only if, for each requested service, there is at least one service provider that is able to provide this service with the maximum quality.
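The measures (1)–(6) reduce to a few lines of code. The sketch below assumes the interaction log is given as parallel lists – the actual results $o_i$, the per-interaction global maxima $q_{max}^{u_{int:i}}$, and the per-interaction provider maxima $q_{a_{int:i}}^{u_{int:i}}$ – an encoding chosen purely for illustration. Feeding it the data of Examples 1 and 2 below reproduces the values computed there.

```python
# Illustrative implementations of the efficiency measures, Eqs. (1)-(6).
def efficiency(actual, q_max):                 # E, Eq. (2); for E_q use q_i instead of o_i
    return sum(actual) / (len(actual) * q_max)

def temporary_efficiency(actual, q_max, n):    # E_(n), Eq. (3); last n interactions only
    return sum(actual[-n:]) / (n * q_max)

def ideal_efficiency(actual, best_in_env):     # E_ideal, Eq. (5)
    return sum(actual) / sum(best_in_env)      # best_in_env[i] = q_max^{u_int:i}

def advanced_efficiency(actual, best_of_provider):  # E_adv, Eq. (6)
    return sum(actual) / sum(best_of_provider)      # best_of_provider[i] = q_{a_int:i}^{u_int:i}
```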
4.1.1. Example 1
Let us consider an environment consisting of three service providers, $A_P = \{a_1, a_2, a_3\}$, each of them providing the service $u_1$. The environment defines the following set of possible qualities of service: $Q = \{0; 0.25; 0.5; 0.75; 1\}$. Each agent can provide the $u_1$ service with a specified maximum quality: $q_{a_1}^{u_1} = 0.25$, $q_{a_2}^{u_1} = 0.5$, $q_{a_3}^{u_1} = 0.75$. Each agent established two interactions with every other agent, so there were 6 service requests and interactions: $l = 6$. There are no disturbances in the environment, i.e. $\forall i \in \mathbb{N}, 1 \leq i \leq 6: o_i = q_i$. Let us assume that none of the service providers performed an attack on the provision of services, i.e., they provided services with the maximum possible quality determined by their characteristics. Then the efficiency measures are equal to:

$$E_q = E = \frac{\sum_{i=1}^{l} o_i}{l \cdot q_{max}} = \frac{2 \cdot 0.25 + 2 \cdot 0.5 + 2 \cdot 0.75}{6 \cdot 1} = \frac{1}{2}$$

$$E_{ideal} = \frac{\sum_{i=1}^{l} o_i}{\sum_{i=1}^{l} q_{max}^{u_{int:i}}} = \frac{2 \cdot 0.25 + 2 \cdot 0.5 + 2 \cdot 0.75}{6 \cdot 0.75} = \frac{2}{3} \tag{7}$$

$$E_{adv} = \frac{\sum_{i=1}^{l} o_i}{\sum_{i=1}^{l} q_{a_{int:i}}^{u_{int:i}}} = \frac{2 \cdot 0.25 + 2 \cdot 0.5 + 2 \cdot 0.75}{2 \cdot 0.25 + 2 \cdot 0.5 + 2 \cdot 0.75} = 1$$

4.1.2. Example 2
Let us consider the same environment and requests as in Example 1, but in this case assume that each agent performs an attack during one of its two interactions, providing the service with quality $q_i = 0$ for $i = 2, 4, 6$ (so each of these agents performs an on-off attack). Then the efficiency measures are equal to:

$$E_q = E = \frac{\sum_{i=1}^{l} o_i}{l \cdot q_{max}} = \frac{0.25 + 0 + 0.5 + 0 + 0.75 + 0}{6 \cdot 1} = \frac{1}{4}$$

$$E_{ideal} = \frac{\sum_{i=1}^{l} o_i}{\sum_{i=1}^{l} q_{max}^{u_{int:i}}} = \frac{0.25 + 0 + 0.5 + 0 + 0.75 + 0}{6 \cdot 0.75} = \frac{1}{3} \tag{8}$$

$$E_{adv} = \frac{\sum_{i=1}^{l} o_i}{\sum_{i=1}^{l} q_{a_{int:i}}^{u_{int:i}}} = \frac{0.25 + 0 + 0.5 + 0 + 0.75 + 0}{2 \cdot 0.25 + 2 \cdot 0.5 + 2 \cdot 0.75} = \frac{1}{2}$$

4.1.3. Summary
The most critical parameter in the context of evaluating TRM systems is ideal efficiency. This is because even a high value of advanced efficiency does not necessarily indicate the optimal selection of agents (choosing the agent that provides the best services), which is, after all, the main goal of TRM systems. On the other hand, using the efficiency-of-the-environment measure may lead to a situation where, despite a low value, the decisions made by the service clients are optimal (this is because if no agent provides a service of maximum quality, the efficiency value will be lower than 1). It is worth mentioning that while determining the efficiency of the environment is easy (it is enough to […]

[…] ceteris paribus, change the most effective attack. For this reason, when comparing the environment without and with the TRM system, the most effective attack will be different.

An important issue is determining how the TRM system (with fixed parameters) affects the efficiency in the presence of malicious agents carrying out an attack. To evaluate such an impact, measures called efficiency gain and absolute efficiency gain can be used:

Efficiency gain ($G$) – the difference between the efficiency of the environment in which a specific TRM system operates ($E_{+TRM}$) and the efficiency of the environment without the TRM system ($E_0$), assuming that in both cases the attackers behave in exactly the same way – they apply the actions that reduce the efficiency of the environment as much as possible when the TRM system is used:

$$G = E_{+TRM} - E_0 \tag{9}$$

Absolute efficiency gain ($G_A$) – the difference between the efficiency of the environment in which a specific TRM system operates ($E_{+TRM}$) and the efficiency of the environment without the TRM system ($E_{0'}$), assuming that in both cases the attackers use the most effective attack, i.e., they choose their actions in such a way as to minimize the efficiency of the environment both when the trust management system is used and when it is not (they can use different attacks in these two cases):

$$G_A = E_{+TRM} - E_{0'} \tag{10}$$

This implies $G_A \geq G$, because $E_{0'} \leq E_0$, as $E_{0'}$ is the efficiency measured when the attackers use the attack that is most effective for the given conditions of the environment (including the lack of a TRM system).

4.2.1. Theorem
Suppose there is no TRM system in the environment and the selection of service providers is random. In that case, the most effective attack is a constant attack combined with creating multiple identities (a Sybil attack), in which malicious agents provide services of the minimum possible quality. Then the efficiency of the environment (assuming that all reliable agents are able to provide services with the quality $q_{max}$) is approximately:

$$E_{0'} \approx \frac{n_B + n_M \frac{q_{min}}{q_{max}}}{n_B + n_M} \tag{11}$$

where $n_M$ is the number of malicious agents from the perspective of the environment (after the Sybil attack).

In a situation where $q_{min} = 0$ and $q_{max} = 1$ (the most intuitive case), the efficiency is equal to $E_{0'} \approx \frac{n_B}{n_B + n_M}$.
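For completeness, a short derivation of approximation (11) under the theorem's assumptions (uniformly random provider selection, reliable agents providing $q_{max}$, malicious identities providing $q_{min}$):

```latex
% Expected (normalized) quality of a uniformly selected provider:
% a reliable provider is drawn with probability n_B/(n_B+n_M) and yields q_max,
% a malicious one with probability n_M/(n_B+n_M) and yields q_min.
E_{0'} \approx \frac{\mathbb{E}[o_i]}{q_{max}}
      = \frac{1}{q_{max}}\left(\frac{n_B}{n_B+n_M}\, q_{max}
        + \frac{n_M}{n_B+n_M}\, q_{min}\right)
      = \frac{n_B + n_M\,\frac{q_{min}}{q_{max}}}{n_B + n_M}.
```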
values in the context of ck between each ordered pair of agents for which 4.4. Measures of popularity of agents
the trust is specified, divided by the product of the number of such pairs
of agents and the maximum trust value. The global average trust of all Measures of popularity of agents determine how often the services of
agents to all agents (A→A) in context ck at time ml is given by: a given group of agents (or a specific agent) are used by other agents in
∑n ∑n c ;m the environment.
j=1 ta →a
k l
ck ;ml
tA→A =
i=1
∑n ∑i n j (12) The number of interactions of reliable agents with reliable
tmax ∗ i=1 j=1 1 agents is the sum of the number of interactions in which both the service
provider aj and the client ai were reliable:
where i and j are such that agent ai ’s trust in aj is defined. ∑ R:a ,P:a
If a trust value is defined for each pair of agents, then the global lAI B ,AB = lI i j (17)
average trust of all agents to all agents (A→A) in context ck at time ml is i,j:ai, aj ∈AB ,i∕
=j
given by:
R:a ,P:a
∑n ∑n where: lI i j is the number of interactions in which agent aj is the
=i tai →aj
ck ;ml
ck ;ml
tA→A =
i=1 j=1,j∕
(13) service provider and agent ai is the service requestor.
n(n − 1) ∗ tmax The number of interactions between reliable agents and mali
If there are malicious agents in the environment, the global average trust does not reflect the environment's proper functioning and the TRM system's reliability. For this reason, it is worth creating more detailed measures for different groups of agents and using them to assess the functioning of the environment. In particular, it is important to create a measure of the global average trust of reliable agents to all agents, reliable agents to reliable agents, and reliable agents to malicious agents.

The global average trust of reliable agents to all agents in the context $c_k$ is the sum of the trust values in the context of $c_k$ between the reliable agent and another agent for which this trust is defined, divided by the product of the number of such pairs of agents and the maximum trust value. The global average trust value of reliable agents in all agents ($A_B \to A$) in the context $c_k$ at time $m_l$ is given by:

$$t^{c_k;m_l}_{A_B \to A} = \frac{\sum_{i=1}^{n_B} \sum_{j=1}^{n} t^{c_k;m_l}_{a_i \to a_j}}{t_{max} \cdot \sum_{i=1}^{n_B} \sum_{j=1}^{n} 1} \tag{14}$$

where $i, j$ are such that: $a_i \in A_B$, $a_j \in A$, $i \neq j$ and $\exists f_{trust}(a_i, a_j, c_k, m_l)$ – that is, agent $a_i$'s trust in $a_j$ is defined.

The global average trust of reliable agents to reliable agents in the context $c_k$ is the sum of the trust values in the context $c_k$ between each ordered pair of reliable agents for which this trust is defined, divided by the product of the number of such pairs of agents and the maximum trust value. The global average trust value of reliable agents to reliable agents ($A_B \to A_B$) in the context of $c_k$ at time $m_l$ is given by:

$$t^{c_k;m_l}_{A_B \to A_B} = \frac{\sum_{i=1}^{n_B} \sum_{j=1}^{n_B} t^{c_k;m_l}_{a_i \to a_j}}{t_{max} \cdot \sum_{i=1}^{n_B} \sum_{j=1}^{n_B} 1} \tag{15}$$

where $i, j$ are such that: $a_i \in A_B$, $a_j \in A_B$, $i \neq j$ and $\exists f_{trust}(a_i, a_j, c_k, m_l)$ – that is, agent $a_i$'s trust in $a_j$ is defined.

The global average trust of reliable agents to malicious agents in the context of $c_k$ is the sum of the trust values in the context of $c_k$ of each reliable agent to each malicious agent, if trust is defined between these agents, divided by the product of the number of such pairs of agents and the maximum trust value. The global average trust value of reliable agents to malicious agents ($A_B \to A_M$) in the context $c_k$ at time $m_l$ is expressed by the formula:

$$t^{c_k;m_l}_{A_B \to A_M} = \frac{\sum_{i=1}^{n_B} \sum_{j=1}^{n_M} t^{c_k;m_l}_{a_i \to a_j}}{t_{max} \cdot \sum_{i=1}^{n_B} \sum_{j=1}^{n_M} 1} \tag{16}$$

The above measures can easily be extended so that, when there is more than one group of malicious agents in the environment, each group can be treated separately.

Analogous measures can be defined in relation to reputation, i.e. the global average reputation of agents, the global average reputation of reliable agents, and the global average reputation of malicious agents can be determined.
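Eqs. (14)–(16) differ from Eq. (12) only in which index pairs are admitted, so they can be computed by one helper parameterized by two groups of agents; a sketch under the same storage assumption as above:

```python
import math

# Group-restricted global average trust, Eqs. (14)-(16): average the defined
# trust values from agents in `sources` to agents in `targets`.
def group_average_trust(trust, sources, targets, t_max=1.0):
    values = [trust[i][j] for i in sources for j in targets
              if i != j and not math.isnan(trust[i][j])]
    return sum(values) / (t_max * len(values))

reliable, malicious = [0, 1], [2]
t = [[math.nan, 0.9, 0.1],
     [0.8, math.nan, 0.2],
     [0.7, 0.6, math.nan]]
print(group_average_trust(t, reliable, reliable))    # Eq. (15): (0.9+0.8)/2 = 0.85
print(group_average_trust(t, reliable, malicious))   # Eq. (16): (0.1+0.2)/2 = 0.15
```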
4.4. Measures of popularity of agents

Measures of popularity of agents determine how often the services of a given group of agents (or a specific agent) are used by other agents in the environment.

The number of interactions of reliable agents with reliable agents is the sum of the number of interactions in which both the service provider $a_j$ and the client $a_i$ were reliable:

$$l^{A_B,A_B}_I = \sum_{i,j:\ a_i, a_j \in A_B,\ i \neq j} l^{R:a_i,P:a_j}_I \tag{17}$$

where $l^{R:a_i,P:a_j}_I$ is the number of interactions in which agent $a_j$ is the service provider and agent $a_i$ is the service requestor.

The number of interactions between reliable agents and malicious agents is the sum of the number of interactions between agents in which one of the agents was reliable and the other malicious:

$$l^{A_B,A_M}_I = \sum_{i:\ a_i \in A_B,\ j:\ a_j \in A_M} l^{R:a_i,P:a_j}_I + \sum_{i:\ a_i \in A_M,\ j:\ a_j \in A_B} l^{R:a_i,P:a_j}_I \tag{18}$$

It is crucial to note that in an effectively operating environment, the number of interactions of reliable agents with other reliable agents should be significantly higher than the number of interactions of reliable agents with malicious agents.

The TRM system can strongly influence the uneven load of agents in the system (in a sense, this is its purpose). The problem, however, may be a situation in which only a certain subgroup of reliable agents provides services, and the others do not. For this reason, not only the measures included in the above definitions are important, but the distribution of the number of interactions of individual agents who are service providers is also essential.
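Given a log of interactions, both counts in Eqs. (17) and (18) are simple tallies; the sketch below assumes each log entry is a (requestor, provider) pair, which is an illustrative representation rather than the paper's data format.

```python
# Popularity measures, Eqs. (17)-(18): tally interactions by group membership.
def interaction_counts(log, reliable, malicious):
    l_bb = sum(1 for req, prov in log if req in reliable and prov in reliable)
    l_bm = sum(1 for req, prov in log
               if (req in reliable and prov in malicious)
               or (req in malicious and prov in reliable))
    return l_bb, l_bm

log = [(0, 1), (1, 0), (0, 2), (2, 1)]
print(interaction_counts(log, reliable={0, 1}, malicious={2}))  # (2, 2)
```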
4.5. The ideal TRM system

An ideal TRM system is a system that, despite the presence of malicious agents in the environment and their attacks, allows the achievement of the following values of the measures defined in this section:

4.5.1. Efficiency measures

For any environment, the greater the value of the efficiency measures, the better it performs; therefore, the efficiency measures should be close to one:

$$E_q = 1$$

For an environment without disturbances, also:

$$E = 1; \qquad \forall n: E(n) = 1; \qquad E_{ideal} = 1; \qquad E_{adv} = 1$$

4.5.2. Efficiency gain measures

Efficiency gain and absolute efficiency gain should be significantly greater than 0. Theoretically, it could be indicated that these values should be close to one, but this may turn out to be impossible to achieve, not because of the TRM system itself, but because even without its presence in a given environment, malicious agents will not be able to reduce the effectiveness to the minimum.

$$G \gg 0; \qquad G_A \gg 0$$

4.5.3. Measures of global average trust

The impact of malicious agents can be measured using measures of global average trust – theoretically, the higher the trust in reliable agents and the lower the trust in malicious agents, the fewer actions malicious agents will be able to perform. When global average trust is considered for all agents in the environment, even in the case of an ideal TRM system, the value of these parameters will depend on the ratio of the number of malicious and reliable agents.

$t^{c_k;m_l}_{A \to A}$, $t^{c_k;m_l}_{A_B \to A}$ – vary depending on the ratio of the number of malicious and reliable agents;

$t^{c_k;m_l}_{A_B \to A_B} = t_{max}$; $t^{c_k;m_l}_{A_B \to A_M} = t_{min}$.
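These target values can be turned into a simple post-simulation check; in the sketch below, the measure names, the tolerance, and the decision rule are all illustrative assumptions rather than criteria defined in the paper.

```python
# Check whether measured values approach the ideal-TRM-system conditions of
# Section 4.5. The tolerance `eps` is an arbitrary illustrative choice.
def is_close_to_ideal(m, t_max=1.0, t_min=0.0, eps=0.05):
    return (m["E"] >= 1 - eps
            and m["t_BB"] >= t_max - eps   # trust among reliable agents near t_max
            and m["t_BM"] <= t_min + eps   # trust in malicious agents near t_min
            and m["G"] > 0)                # efficiency gain clearly positive

measures = {"E": 0.98, "t_BB": 0.97, "t_BM": 0.03, "G": 0.4}
print(is_close_to_ideal(measures))  # True
```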
The tests were performed under the following assumptions:

• there are 20 agents in the environment (in the case of attacks, 10 of them are malicious agents);
• the length of the simulation for each test is 1000 interactions;
• the characteristics of requests are determined for all tests;
• at least 10 test runs were performed for each test (the average value from the simulation runs and the minimum and maximum values were presented for each parameter).
attack against agents in the group, and provides unreliable services to all agents outside the group. The attack was initially described (but under another name) by Mármol and Pérez (2009b);
• attack based on Bad-mouthing, False-praise and On-off attacks using cooperation (denoted as BFOc):
In this attack, a group of attackers actively cooperates by using a bad-mouthing attack against agents outside the group and a false-praise attack against agents in the group. They also provide periodically unreliable and reliable services to all agents outside the group. The attack was initially described (but under another name) by Mármol and Pérez (2009b).

An attack called MEAEM (denoted as M), which is used to find the most effective attack, has also been implemented. The MEAEM (Most Effective Attack Evaluation Method) method consists of an attempt to analyze possible cases of agent behaviour and, on this basis, determine what actions should be taken by malicious agents in order to reduce the effectiveness of the environment. Unlike other simulation studies, this method does not assume a priori how malicious agents will act but focuses on an attempt to consider possible actions of attackers and choose those that bring them the most significant benefit. This approach potentially allows finding an attack that will be more effective than known attacks and, in some situations, finding the worst case from the perspective of the functioning of the environment with the implemented TRM system.

Theoretically, the simplest conceptual way to conduct such an analysis would be to consider all possible behaviours of malicious agents for each interaction (concerning the quality of service provision, recommendation manipulation, identity manipulation and other actions). It is easy to see that even with simplifying assumptions, analyzing all possible agent behaviours is computationally infeasible in all typical applications. For this reason, it is necessary to create a heuristic algorithm to achieve the attackers' goal.

The MEAEM method consists of determining the attackers' profit function, simulating possible actions of malicious agents in each interaction, selecting on this basis the set of decisions that is best from the point of view of the attackers' profit, and then moving on to the analysis of the next interaction. The method is based on the repeated analysis of possible actions of malicious agents related to one interaction; the analysis of actions within different interactions is performed independently.

It is worth noting that such an approach, although justified by the need to limit the number of possible cases to be analyzed, may not be effective (i.e., it may not generate an optimal course of action for the attackers). This method was entirely created by the first author of this paper and will be described in detail in another article.
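The per-interaction search that the MEAEM description outlines can be sketched as a greedy loop; in the code below, the environment interface (simulate_step, apply_step), the action set, and the profit function are all hypothetical, since the method itself is only outlined here and its details are deferred to another article.

```python
from itertools import product

# Greedy per-interaction search in the spirit of MEAEM: for each interaction,
# simulate every candidate combination of malicious actions, keep the most
# profitable one, commit it, and move on to the next interaction.
def meaem_attack(env, malicious_agents, actions, n_interactions, profit):
    plan = []
    for _ in range(n_interactions):
        best = max(
            product(actions, repeat=len(malicious_agents)),
            key=lambda combo: profit(env.simulate_step(malicious_agents, combo)),
        )
        env.apply_step(malicious_agents, best)  # commit the chosen decisions
        plan.append(best)
    return plan
```

Note that even this per-interaction enumeration grows exponentially with the number of malicious agents, so a full implementation would presumably need to restrict the candidate set further.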
Detailed results are presented in the following subsection only for the constant attack and the MEAEM attack; for the rest of the attacks, only a summary is given in Table 1.

5.2. Results

Concerning the RefTRM system and the environment, not all measures listed earlier in the article will apply. When determining the environment, it was assumed that each agent can provide services with maximum quality, so both the ideal efficiency $E_{ideal}$ and the advanced efficiency $E_{adv}$ will always be equal to the efficiency of the environment $E$. Additionally, there are no disturbances in the environment, so the efficiency of the environment without disturbances $E_q$ will be identical to the efficiency of the environment $E$. Therefore, these measures will not be presented. The measure of temporary efficiency $E(n)$ can be presented for different values of the parameter $n$, but the test results present the value of this measure for $n = 100$ unless stated otherwise. The measures of efficiency gain $G$ and absolute efficiency gain $G_A$ will be determined after all tests have been carried out.
Table 1
A summary of the results of testing the environment with and without the RefTRM system during various attacks ("Without TRM" denotes the environment without any TRM system under the constant attack; "–" marks measures that are not applicable).

Symbol | Parameter | Without TRM: C | RefTRM: C | RefTRM: O | RefTRM: BFOc | RefTRM: BFCc | RefTRM: W | RefTRM: N | RefTRM: BF | RefTRM: MEAEM
$E$ | efficiency of the environment | 0.479 | 0.857 | 0.849 | 0.874 | 0.899 | 0.867 | 0.91 | 1.0 | 0.681
$E(n)$ | temporary efficiency | 0.484 | 0.963 | 0.911 | 0.947 | 0.99 | 0.94 | 0.947 | 1.0 | 0.699
$t^{c_1;m_l}_{A_B \to A}$ | global average action trust of reliable agents to all agents | – | 0.506 | 0.558 | 0.565 | 0.526 | 0.56 | 0.724 | 0.956 | 0.849
$t^{c_1;m_l}_{A_B \to A_B}$ | global average action trust of reliable agents to reliable agents | – | 0.997 | 0.994 | 0.992 | 0.994 | 0.994 | 0.979 | 0.947 | 0.958
$t^{c_1;m_l}_{A_B \to A_M}$ | global average action trust of reliable agents to malicious agents | – | 0.064 | 0.165 | 0.181 | 0.105 | 0.169 | 0.495 | 0.965 | 0.751
$t^{c_2;m_l}_{A_B \to A}$ | global average recommendation trust of reliable agents to all agents | – | 1.0 | 0.985 | 0.466 | 0.474 | 0.988 | 0.876 | 0.548 | 0.953
$t^{c_2;m_l}_{A_B \to A_B}$ | global average recommendation trust of reliable agents to reliable agents | – | 1.0 | 0.969 | 0.98 | 1.0 | 0.975 | 0.738 | 1.0 | 0.958
$t^{c_2;m_l}_{A_B \to A_M}$ | global average recommendation trust of reliable agents to malicious agents | – | 1.0 | 1.0 | 0.005 | 0.0 | 1.0 | 1.0 | 0.142 | 0.948
$t^{c_3;m_l}_{A_B \to A}$ | global average total trust of reliable agents to all agents | – | 0.501 | 0.543 | 0.472 | 0.44 | 0.517 | 0.668 | 0.815 | 0.813
$t^{c_3;m_l}_{A_B \to A_B}$ | global average total trust of reliable agents to reliable agents | – | 0.914 | 0.907 | 0.824 | 0.828 | 0.908 | 0.864 | 0.788 | 0.854
$t^{c_3;m_l}_{A_B \to A_M}$ | global average total trust of reliable agents to malicious agents | – | 0.13 | 0.215 | 0.154 | 0.09 | 0.166 | 0.492 | 0.84 | 0.776
$l^{A_B,A_B}_I$ | number of interactions between reliable agents | 478.8 | 857.3 | 747.8 | 789.6 | 899.2 | 778.7 | 655.8 | 438.4 | 462.0
$l^{A_B,A_M}_I$ | number of interactions between reliable and malicious agents | 521.2 | 142.7 | 252.2 | 210.4 | 100.8 | 221.3 | 344.2 | 561.6 | 538.0
When examining an environment without a functioning TRM system, only the measures of the efficiency of the environment and temporary efficiency will be presented. All measures that include trust values are not applicable, because trust values are not determined without using the TRM system.

Fig. 1. Efficiency of the environment without TRM system during a constant attack.

… explaining why the value of action and total trust in malicious agents does not reach 0 but only tends to 0.1. This results from the parameter of the agent selection function, in which the probability of selecting an agent is proportional to the value of action trust, and if this trust is below a certain threshold, for the purposes of calculating the probability of …
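One plausible reading of the truncated sentence above is that trust below the threshold is treated as zero when selection probabilities are computed, so that an agent whose trust falls under the threshold stops being selected and its measured trust freezes near the threshold; the sketch below illustrates that reading, with both the rule and the 0.1 threshold being assumptions, not parameters documented here.

```python
import random

# Trust-proportional provider selection with a cut-off: agents whose action
# trust falls below `threshold` get selection weight 0. Once excluded, they
# are no longer interacted with, so their measured trust stops being updated
# and stalls near the threshold (~0.1) instead of converging to 0.
def select_provider(action_trust, threshold=0.1):
    weights = [t if t >= threshold else 0.0 for t in action_trust.values()]
    return random.choices(list(action_trust), weights=weights, k=1)[0]

trust = {"a1": 0.95, "a2": 0.9, "mal": 0.08}  # 'mal' is never selected again
print(select_provider(trust))
```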
Fig. 5. Temporary efficiency (n = 100) of the environment with RefTRM system during constant attack.

Fig. 9. The number of interactions between reliable agents and between reliable and malicious agents in the environment with RefTRM system during constant attack.
As can be seen, if reliable agents behaved in exactly the same way regardless of whether the RefTRM system is running in the environment or not, the efficiency of the environment would be at a similar level, and the efficiency gain would be close to zero, or even negative (which means that if attackers used this attack, from the point of view of system efficiency, it would be better if the RefTRM system had not been used by agents), which is an interesting observation. This does not mean, however, that the use of the RefTRM system is completely unjustified, because, in its absence, there are more effective attacks on such an environment (e.g. the constant attack).

The research carried out allowed us to identify interesting properties of the RefTRM system and indicate its vulnerabilities, including the fact …

CRediT authorship contribution statement

Marek Janiszewski: Writing – review & editing, Methodology, Conceptualization, Software, Formal analysis, Writing – original draft, Investigation, Validation, Funding acquisition. Krzysztof Szczypiorski: Writing – review & editing, Formal analysis, Validation, Supervision.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability

Data will be made available on request.

References

Alhandi, S.A., Kamaludin, H., Alduais, N.A.M., 2023. Trust evaluation model in IoT environment: a comprehensive survey. IEEE Access 11, 11165–11182.
Bidgoly, A.J., Arabi, F., 2023. Robustness evaluation of trust and reputation systems using a deep reinforcement learning approach. Comput. Oper. Res. 156, 106250.
Ghasempouri, S.A., Ladani, B.Tork, 2019. Modeling trust and reputation systems in hostile environments. Futur. Gener. Comput. Syst. 99, 571–592.
Hoffman, K., Zage, D., Nita-Rotaru, C., 2009. A survey of attack and defense techniques for reputation systems. ACM Comput. Surv. 42 (1), 1–31.
Janiszewski, M., 2014. The oracle – a new intelligent cooperative strategy of attacks on trust and reputation systems. Ann. UMCS Inform. 14 (2), 86–101.
Janiszewski, M., 2016. Methods for reliability evaluation of trust and reputation systems. In: Proceedings of SPIE – The International Society for Optical Engineering, 10031.
Janiszewski, M., 2017. Towards an evaluation model of trust and reputation management systems. Int. J. Electron. Telecommun. 63 (4), 411–416.
Janiszewski, M., 2020. TRM-EAT – a new tool for reliability evaluation of trust and reputation management systems in mobile environments. In: International Symposium on Cluster, Cloud and Internet Computing, pp. 718–727.
Jøsang, A., Ismail, R., Boyd, C., 2007. A survey of trust and reputation systems for online service provision. Decis. Support Syst. 43 (2), 618–644.
Liu, Y., Wang, J., Yan, Z., Wan, Z., Jantti, R., 2023. A survey on blockchain-based trust management for Internet of Things. IEEE Internet Things J. 10 (7), 5898–5922.
Magdich, R., Jemal, H., Ben Ayed, M., 2022. A resilient Trust Management framework towards trust related attacks in the Social Internet of Things. Comput. Commun. 191, 92–107.
Maheswari, S., Vijayabhasker, R., 2023. Fuzzy reputation based trust mechanism for mitigating attacks in MANET. Intell. Autom. Soft Comput. 35 (3), 3677–3692.
Mármol, F.G., Pérez, G.M., 2009a. TRMSim-WSN, trust and reputation models simulator for wireless sensor networks. In: IEEE Int. Conf. Commun.
Mármol, F.G., Pérez, G.M., 2009b. Security threats scenarios in trust and reputation models for distributed systems. Comput. Secur. 28 (7), 545–556.
Mármol, F.G., Pérez, G.M., 2010. Towards pre-standardization of trust and reputation models for distributed and heterogeneous systems. Comput. Stand. Interfaces 32 (4), 185–196.
Marzi, H., Li, M., 2013. An enhanced bio-inspired trust and reputation model for wireless sensor network. Procedia Comput. Sci. 19, 1159–1166.
Pereira, R.H., Gonçalves, M.J.A., Coelho, M.A.G.M., 2023. Reputation systems: a framework for attacks and frauds classification. J. Inf. Syst. Eng. Manag. 8 (1), 1–10.
Sievers, M., 2022. Modeling trust and reputation in multiagent systems. In: Madni, A.M., Augustine, N., Sievers, M. (Eds.), Handbook of Model-Based Systems Engineering. Springer International Publishing, Cham, pp. 1–36.
Sun, Y.L., Han, Z., Yu, W., Liu, K.J.R., 2006. Attacks on trust evaluation in distributed networks. In: Inf. Sci. Syst. 2006 40th Annu. Conf., pp. 1461–1466.
Sun, Y., Han, Z., Liu, K.J., 2008. Defense of trust management vulnerabilities in distributed networks. IEEE Commun. Mag. 46 (2), 112–119.
Velloso, P.B.B., Laufer, R.P.P., Duarte, O.-C.M.B., Pujolle, G., 2008. A trust model robust to slander attacks in Ad Hoc networks. In: 2008 Proc. 17th Int. Conf. Comput. Commun. Networks, 6, p. 121.
Wierzbicki, A., 2010. Trust and Fairness in Open, Distributed Systems, vol. 298. Springer.
Wu, X., Liu, Y., Tian, J., Li, Y., 2024. Privacy-preserving trust management method based on blockchain for cross-domain industrial IoT. Knowl. Based Syst. 283. https://doi.org/10.1016/j.knosys.2023.111166.
You, X., Hou, F., Chiclana, F., 2024. A reputation-based trust evaluation model in group decision-making framework. Inf. Fusion 103. https://doi.org/10.1016/j.inffus.2023.102082.
Zahariadis, T., Leligou, H.C., Voliotis, S., Maniatis, S., Trakadas, P., Karkazis, P., 2009. An energy and trust-aware routing protocol for large wireless sensor networks. In: 9th WSEAS Int. Conf. Appl. Informatics Commun. (AIC '09), pp. 216–224.