Liuwen Yu, Leendert van der Torre, and Réka Markovich
1 Introduction
Argumentation means different things to different people. Even in the two
volumes of the Handbook of Formal Argumentation, one can find a range of
definitions, some focusing more on the formal aspects, others focusing more on
the computational aspects. We believe that the thirteen challenges discussed
in this chapter pertain to all these definitions or can be rephrased to adhere to
all these definitions. Nevertheless, to clarify some of the issues we discuss, we
present in Table 1 the definitions we will use in this chapter.
Moreover, as its title suggests, this chapter is particularly concerned with
formal and computational argumentation as discussed in the Handbook of For-
mal Argumentation, the proceedings of the Computational Models of Argument
(COMMA) conferences, the Argument and Computation journal, and the wider
artificial intelligence (AI) literature on argumentation. In particular, whereas
formal argumentation has developed as a branch of knowledge representation
and reasoning, an essential part of AI, it now intersects with numerous disci-
plines, including natural language processing (NLP), and multiagent systems.
Therefore, when we refer to argumentation without further clarification, we
mean formal and computational argumentation. When we specifically discuss
formal argumentation only or computational argumentation only, we will make
this explicit. Similarly, when referring to natural argumentation, we will do so
explicitly.
To structure our discussion of these challenges, we use the attack-defense
paradigm shift brought about by Dung [1995] as a pivotal point. In particular,
Table 2 distinguishes between three groups of challenges. The first group is
concerned with the background to this paradigm shift, the second group is
Table 1: Definitions used in this chapter.

Natural argumentation: Refers to the way humans naturally reason and communicate in everyday language, combining elements of linguistics, philosophy, and rhetoric.

Formal argumentation: A process of representing, managing and (sometimes) resolving conflicts.

Algorithmic argumentation: A step-by-step procedure or set of rules designed to perform a specific task or solve a particular argumentation problem.

Computational argumentation: Refers to the study and implementation of argumentation processes using computational methods. Involves theoretical and practical aspects of how argumentation can be modeled and executed by computers.

Argumentation technology: A computational approach incorporating argumentation reasoning mechanisms with other technologies, e.g., NLP, large language models (LLMs), distributed ledger technologies, etc.
concerned with the paradigm shift itself, and the third group is concerned with
the consequences of this paradigm shift for computational argumentation.
For each challenge, we begin by presenting an “observation”. Here, we mean
an observation about the above-mentioned literature on argumentation, i.e., the
Handbook on Formal Argumentation, the COMMA proceedings, the Argument
and Computation journal, and the wider AI literature on argumentation.
Given the wide range of topics discussed in this literature, and the changes
taking place due to technological developments such as LLMs, the observations
we chose to focus on have had a big influence on the contents of this chapter.
Other researchers in argumentation might make different observations and, as
a result, would approach this chapter differently. Thus, this chapter reflects
our personal interpretation of the literature on argumentation.
We use a diverse array of examples for illustrative purposes from areas such
as decision-making, ethical and legal reasoning, and practical reasoning, and
these are listed in Table 3. The selected examples cover a wide range of disci-
plines and issues, illustrating also the breadth of potential application domains
for the techniques discussed in this chapter. We reuse examples across different
challenges so that we can look at these examples from different angles. We end
each challenge by presenting several open questions for further research.
We selected the topics judiciously, leaving out many topics we would have liked to cover but that would have made this chapter too long. To provide the reader with some additional information, we complement these challenges with open research questions.
This chapter follows the structure of Table 2 and is organized as follows. Sec-
tion 2 discusses natural argumentation approaches that were prevalent before
the paradigm shift, identifying four key challenges. Section 3 focuses on the
paradigm shift itself, acknowledging its contributions while also highlighting
Table 3: Examples used in this chapter.

Example                                      Discipline                                  Challenge Number(s)
Jiminy ethical governor                      Machine ethics                              1, 2
Fatio dialogue protocol                      Speech act theory                           2
Dialogue between autonomous robot NS-4       Speech act theory, argumentation schemes    2
  and Spooner
Child custody                                AI & law                                    2, 3, 9
Scottish fitness lover, snoring professor    Knowledge representation and reasoning      4, 6
Untidy room                                  Neuro-symbolic AI                           4
Bachelor vs. married                         Knowledge representation and reasoning      7
Dialogue between accuser and suspect         AI & law                                    8
Murder at Facility C                         AI & law                                    12
Intelligent Human-Input-Based Blockchain     Computer science, financial markets,        13
  Oracle (IHiBO)                               AI & law
process.
2.1 Individual and collective reasoning
In this section, we discuss reasoning, its philosophical and mathematical foun-
dations, and its use across many disciplines. From this perspective, we illustrate
in Table 1 our definition of formal argumentation — representing, managing
and (sometimes) resolving conflicts — using as an example the six layers of
conflict addressed by the Jiminy ethical governor.
Russell and Norvig [2010] identify four schools of thought on AI — machines
that: think like humans, act like humans, think rationally, and act rationally.
We can interpret these four schools of thought from various perspectives:
Practical reasoning vs. theoretical reasoning Practical reasoning is ori-
ented towards choosing a course of action on the basis of goals and knowl-
edge of one’s own situation, while theoretical reasoning is oriented to-
wards finding reasons for determining that a proposition about the world
is true or false [Walton, 1990].
Descriptive reasoning vs. prescriptive reasoning Descriptive reasoning
aims to replicate human intelligence and behavior, while prescriptive rea-
soning aims to simulate decision-making and prescribe actions that align
with ethics and laws.
In all these different kinds of reasoning, there could be individual reasoning
and collective reasoning. This brings us to the distinction between intelli-
gent systems and multiagent systems across various disciplines. In the social
sciences, the distinction between individual and collective reasoning is called
the micro-macro dichotomy [Coleman, 1984]. Another prototypical example is
the distinction between classical decision theory based on the expected utility
paradigm and classical game theory [Savage, 1972]. Whereas classical deci-
sion theory is a kind of optimization problem (concerned with maximizing the
agent’s expected utility), classical game theory is a kind of equilibrium analy-
sis [Nash Jr, 1950]. This leads to the following challenge.
Challenge 1. Connecting individual and collective reasoning.
One of the central topics in reasoning is how to handle conflict, whether
it arises among beliefs (logical inconsistency) or choices of actions (practical
conflicts). This is relevant both to the reasoning process of an individual agent
and to interactions among multiple agents.
Example 2.1 illustrates a conflict from the perspective of a single agent.
Example 2.1 (Car accident dilemma in I, Robot). In the film I,
Robot, Detective Del Spooner is driving when he has an accident, plunging
his own car and another vehicle carrying a child into a river. An autonomous
general-purpose humanoid robot, NS-4, is passing by and is faced with a con-
flict because it cannot save all the humans involved in the accident, i.e., the
drivers and the child. NS-4 must make a descriptive analysis of the situation
and follow prescriptive actions guided by ethical and legal considerations.
Example 2.2 illustrates a conflict from a multiagent perspective.
Example 2.2 (Continued from Example 2.1). Instead of seeing the con-
flict in terms of saving Spooner or saving the girl, a conflict that is faced by
NS-4 only, we can consider it as a disagreement between two stakeholders: NS-
4 and Spooner. NS-4 wants to save Spooner, while Spooner wants to save the
girl.
NS-4 and Spooner might reach a consensus through a process known as
judgment aggregation where they combine their individual judgments to arrive
at a collective decision [Caminada and Pigozzi, 2011]. However, in the context
of game theory, the goal is not always to resolve all conflicts but rather to
understand the dynamics at play and sometimes agree to disagree [Aumann,
2016]. This concept, known as equilibrium analysis, allows for a situation
where Spooner and NS-4 recognize their differing priorities and accept the
disagreement without necessarily resolving the conflict.
In a multi-stakeholder setting, conflicts can be conceptualized and managed
at various layers. The Jiminy architecture [Liao et al., 2023] is an ethical
governor that uses techniques from normative systems and formal argumen-
tation to get moral agreements from multiple stakeholders. Each stakeholder
has their own normative system. The Jiminy architecture combines norms into
arguments, identifies their conflicts as moral dilemmas, and evaluates the argu-
ments to resolve each dilemma whenever possible. In particular cases, Jiminy
decides which of the stakeholders’ normative systems takes preference over the
others.
Example 2.3 (The six layers of conflict of I, Robot in the Jiminy
architecture).
Layer 1: conclusions only The conflict is based on the conclusions drawn by
each stakeholder. NS-4 concludes it should save Spooner, while Spooner
concludes that the girl should be saved. This layer represents a straight-
forward conflict of decisions without going into the underlying reasoning,
possibly making it difficult to resolve.
Layer 2: assumptions and reasons Agents present their conclusions along
with their assumptions and reasons. We refer to these conclusions to-
gether with the assumptions and reasons as arguments. Conflict reso-
lution may involve formal argumentation techniques such as assigning
attacks among arguments. The reason Spooner wants to save the girl is
that she has a longer potential lifespan. That reason could be attacked by
an argument from NS-4 that the girl has incurable cancer and therefore has
a short life expectancy.
Layer 3: combining normative systems This layer involves combining mul-
tiple normative systems into a single normative system. As a conse-
quence, there may be new arguments built from the norms of distinct
stakeholders, and the combined knowledge may be sufficient to reach
a moral agreement. For example, NS-4 has information unknown to
Spooner — the child’s illness. By aligning their knowledge bases, they
may reach an agreement to save Spooner instead.
Layer 4: context-sensitive meta-reasoning as ethical governors Jiminy
considers the agents’ norm preferences. These meta-norms are context-
dependent norms that select the one stakeholder who has the most rel-
evant expertise. Jiminy may decide that NS-4’s preference takes prece-
dence over Spooner’s because NS-4 can get a more accurate evaluation of
the imminent accident, leading to a more reasonable decision. This mech-
anism is comparable to those used in private international law [Markovich,
2019].
Layer 5: suspend decisions and observation Traditional conflict resolu-
tion often assumes that dilemmas must be addressed immediately. How-
ever, suspending a decision to allow for additional information to emerge
can be beneficial in certain situations.
Layer 6: dialogue In this layer, stakeholders engage in a dialogue, attempt-
ing to persuade one another. Through structured communication, NS-4
and Spooner present their arguments, counterarguments, and justifica-
tions. The dialogue helps them explore each other’s perspectives and can
lead to a more informed and mutually agreeable resolution.
There are many other examples similar to the one in I, Robot. For instance,
the tunnel dilemma and the trolley dilemma [Awad et al., 2018] involve ethical
decisions by autonomous vehicles and the question of who should decide how
they respond in life-and-death situations. Another example is a smart speaker
that passively listens in and stores voice recordings, acting like a surveillance
device [Liao et al., 2023]. Should it assist in the prevention of or investigation
into crimes? This presents a moral dilemma involving household members,
law enforcement agencies, and the manufacturer of the smart speaker. Which
stakeholder should be alerted in such cases?
The Jiminy example illustrates the general problem of connecting individual
and collective reasoning, and its relation to practical reasoning. Different
mechanisms could
be implemented in the Jiminy architecture to connect the two kinds of reason-
ing. For example, philosophical concepts such as the veil of ignorance [Rawls,
2001] and Kantian imperative [Kant, 1993] offer valuable perspectives. The veil
of ignorance principle requires individuals to make decisions without knowledge
of their own personal characteristics or societal position, thus promoting im-
partiality and fairness in collective decision-making. This aligns closely with
the AI challenge of designing a system that makes unbiased decisions. Simi-
larly, Kant’s categorical imperative suggests that one should act only according
to maxims that can be universally applied to build universally ethical and ra-
tional guidelines for behavior. Both principles emphasize the importance of
considering the broader implications of actions, and they encourage integrat-
ing individual actions into collective norms and ethics.
In this section, we discussed the general challenge of connecting individual
and collective behavior from the perspective of argumentation. We end this
section by raising a number of questions for further research.
CQ1 What other goals do I have that should also be considered even though
they might conflict with G?
CQ2 What alternative actions to my bringing about A that would also bring
about G should be considered?

CQ3 From the solutions of me bringing about A and these alternative actions,
which can be argued to be the most efficient?
CQ4 What grounds are there for arguing that it is possible for me to bring
about A in practice?
CQ5 What other consequences of me bringing about A should be taken into
account?
Example 2.4 illustrates the dialogue between NS-4 and Spooner based on
the argumentation schemes of practical reasoning.
Example 2.4 (Dialogue between NS-4 and Spooner).
Spooner: Save the girl! That is the moral and ethical choice. She deserves
the chance to live her life fully.
NS-4: What other goals do you have that might conflict with this one?
Spooner: My goal is to save the most vulnerable lives. There is no conflict.
NS-4: What alternative actions should be considered?
Spooner: Saving the girl should be the only course of action. It should have
the highest priority.
NS-4: What is the most efficient choice?
Spooner: Saving the girl. She is lighter, so this course of action is more likely
to succeed.
NS-4: What grounds are there for arguing that it is practically possible to
save the girl?
Spooner: The girl’s lighter weight makes her rescue quicker and less risky.
NS-4: What consequences should be considered?
Spooner: Saving the girl aligns with the duty to protect the vulnerable.
NS-4: Your argument is sound and aligns with ethical and practical consider-
ations. I will save the girl.
Speech act theory, a subfield of pragmatics, studies how words are used not
only to present information but also to carry out actions [Austin, 1975]. This
theory has been formalized in the Foundation for Intelligent Physical Agents
(FIPA) standards, which are widely used in computer science to facilitate com-
munication between autonomous agents [FIPA, 2002]. The scheme allows mul-
tiple labels to be applied to one utterance since a single utterance can perform
multiple actions in a dialogue [Kissine, 2013]. Such labels range from a few
basic types such as assertions, questions and commands to more complex ones
like promises and declarations [Searle, 1979]. One of the key features of speech
acts, as opposed to physical actions, is that their main effects are on the mental
and interactional states of agents [Traum, 1999]. The attitudes of belief, desire
and intention are familiar to agency theories [Georgeff et al., 1999]. In the
context of human-like chatbots, speech acts can be used to design interactions
between the chatbot and the user [Hakim et al., 2019]. Specifically, questions
that seek justification are crucial as they prompt the chatbot to provide reasons
and explanations, which not only enriches the interaction but also drives the
conversation towards deeper engagement and understanding.
McBurney and Parsons [2004] proposed an interaction protocol called Fatio
comprising five locutions for argumentation, which can be considered as a set
of speech acts.
F1: assert (Pi , ϕ): A speaker Pi asserts a statement ϕ. In doing so, Pi creates
a dialectical obligation within the dialogue to provide a justification for
ϕ if required subsequently by another participant.
F4: justify (Pi , Φ ⊢ ϕ): A speaker Pi , who had uttered assert(Pi , ϕ) and was
then questioned or challenged by another speaker, is able to provide a
justification Φ ∈ A for the initial statement ϕ by means of this locution.
F5: retract (Pi , ϕ): A speaker Pi , who had uttered assert(Pi , ϕ) or jus-
tify(Pi , Φ ⊢ ϕ), can withdraw this statement with the utterance of re-
tract(Pi , ϕ) or the utterance of retract(Pi , Φ ⊢ ϕ) respectively. This
removes the earlier dialectical obligation on Pi to justify ϕ or Φ ⊢ ϕ if
questioned or challenged.
Example 2.5 illustrates a dialogue between NS-4 and Spooner following these
speech acts.

Example 2.5 (A dialogue between NS-4 and Del Spooner).
Spooner: Because the girl is young and has a much longer lifespan. (justify)
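To make the bookkeeping behind these locutions concrete, here is a minimal Python sketch (our own illustration; the class and method names are not from McBurney and Parsons' specification) of how assert, justify, and retract update a speaker's dialectical obligation store:

```python
class FatioDialogue:
    """Sketch of Fatio-style bookkeeping: who owes a justification for what."""

    def __init__(self):
        # Maps each speaker to the claims they are obliged to justify on demand.
        self.obligations = {}

    def assert_(self, speaker, phi):
        # F1: asserting phi creates a dialectical obligation to justify it.
        self.obligations.setdefault(speaker, set()).add(phi)

    def justify(self, speaker, premises, phi):
        # F4: providing a justification Phi |- phi; the justification itself
        # may later be questioned, so it is recorded as well.
        self.obligations.setdefault(speaker, set()).add((frozenset(premises), phi))

    def retract(self, speaker, phi):
        # F5: retracting phi removes the obligation to justify it.
        self.obligations.get(speaker, set()).discard(phi)

d = FatioDialogue()
d.assert_("Spooner", "save the girl")
d.justify("Spooner", {"the girl is young", "longer lifespan"}, "save the girl")
d.retract("Spooner", "save the girl")
```

Presumably the two locutions not shown above (F2 and F3, by which another speaker questions or challenges an assertion) would consult this store to decide which justifications may be demanded.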
Table 4: Three conceptualizations of argumentation.

Argumentation as inference
  Process: Logical structure and reasoning to derive conclusions from incomplete and inconsistent premises.
  Theories and formal approaches: Graph theory, nonmonotonic logic, computational logic, causal reasoning, Bayesian reasoning.
  Applications: Automated reasoning systems, knowledge representation, expert systems.

Argumentation as dialogue
  Process: Dynamic verbal interaction between stakeholders to exchange information or resolve conflicts of opinion.
  Theories and formal approaches: Speech act theory, game theory, axiomatic semantics, operational semantics.
  Applications: Debating technologies, chatbots, recommender systems.

Argumentation as balancing
  Process: Balancing pros and cons to reach a justified decision.
  Theories and formal approaches: Multi-criteria decision theory, machine ethics, computational law, case-based reasoning.
  Applications: Deliberative decision-making in law, ethics, and economics.
Table 4 might give the impression that the three approaches are distinct and
that they have distinct application areas. We would like to point out that this
is not the case. The approaches (or types) of argumentation are not mutually
exclusive or even incompatible. You can switch from one to another if you want
to look at the same problem or situation from different angles, highlight differ-
ent aspects, or select a modeling approach that is more suitable for a particular
purpose. This also means that complex application areas like the legal domain
can make very good use of each approach. Indeed, legal reasoning often en-
gages with each of the three conceptualizations — argumentation as inference,
dialogue, and balancing — across different contexts and legal roles. Judges
and attorneys may rely on one form of argumentation more than another, de-
pending on the nature of the case and their specific role in the legal process.
For instance, inference is commonly used by judges, attorneys, and indeed any
type of lawyer, when applying legal rules to facts or deriving conclusions from
incomplete or inconsistent premises. Dialogue plays a central role in courtroom
exchanges between opposing parties. The structure of a trial often resembles
a dialogue: each party presents their arguments and responds to those of the
other while the judge oversees the process to ensure it follows legal procedures.
Balancing is typically the domain of judges as they weigh multiple factors, con-
flicting interests, or values, to determine the most appropriate outcome. This is
particularly important in discretionary decision-making where the law, instead
of trying to provide detailed rules, assigns special power to judges so that they
can make decisions based on their own evaluations. In such cases, judges ex-
ercise their judicial discretion by carefully balancing competing considerations
within the framework of legal principles to reach a fair and just decision.
Hence, these different modes of reasoning can correspond to and interact
with one another, creating a comprehensive tool set for legal reasoning and
decision-making. Below, we shall illustrate each of the three conceptualizations
using the legal example of child custody in a divorce case.
Research on argumentation-based dialogue (see the overview of Black et
al. [2021]) is often carried out against the background of the six types of dia-
logue and in accordance with their respective goals [Walton and Krabbe, 1995],
as shown in Table 5. When argumentation is viewed as a kind of dialogue be-
tween multiple agents (whether human or artificial), new issues arise. One issue
is the distributed nature of information (among the agents). Another issue is
the dynamic nature of information — agents do not reveal everything they be-
lieve initially, and they can learn from one another. There are also strategic issues
— agents will have their own internal preferences, desires and goals [Prakken,
2018]. In Section 2.2, we described the speech act theory on dialogue forma-
tion [McBurney and Parsons, 2002]. For better comparison, we use a legal child
custody case [Yu et al., 2020] to illustrate the three conceptualizations.
Table 5: Six types of dialogue [Walton and Krabbe, 1995].

Type of dialogue      Initial situation             Participant goal                Dialogue goal
Persuasion            Conflict of opinions          Persuade other party            Resolve or clarify issue
Inquiry               Need to have proof            Find and verify evidence        Prove (or disprove) hypothesis
Negotiation           Conflict of interests         Get what they most want         Reasonable settlement they can both live with
Information-seeking   Need information              Acquire or give information     Exchange information
Deliberation          Dilemma or practical choice   Co-ordinate goals and actions   Decide best available course of action
Eristic               Personal conflict             Verbally hit out at opponent    Reveal deeper basis of conflict
Example 2.6 (Child custody dialogue). Alice and Lucy are talking about
a divorce case, specifically whether it is in the child’s best interest to live with
her mother or with her father. They have the following dialogue.
Alice: It is in the ten-year-old child’s best interest that she lives with her
mother. (assert)
[Figure graphics: the child custody argumentation framework, with statements (1)-(12) (e.g., (1) Peter says that most ten-year-old children know what they want; (2) Peter is a child psychologist; (4) the child says she wants to live with her mother; (9) the father is wealthy) linked by rules r1-r6 and rebut/undercut attacks.]

[Figure graphics: argumentation as balancing for the claim "It is in the child's best interest that she lives with her mother", with PRO argument a1 (the child knows what she wants and wants to live with her mother) and CON argument a2 (the mother is less wealthy than the father).]
Empirical generalizations: e.g., adults are usually employed, birds can typ-
ically fly, etc.
Exceptions to legal rules: e.g., when a father dies, the child inherits, except
when the child killed the father.
Exceptions to moral principles: e.g., one should not lie, except when a lie
can save lives.
Alternative explanations: e.g., the grass is wet so it must have rained, but
the sprinkler was on.
Defaults:
  BornInScotland ⇒1 Scottish
  Scottish ⇒3 LikesWhisky
  FitnessLover ⇒2 ¬LikesWhisky
Facts: BornInScotland, FitnessLover

E1 = {BornInScotland, FitnessLover}
E2 = {BornInScotland, FitnessLover, ¬LikesWhisky}
E3 = {BornInScotland, FitnessLover, ¬LikesWhisky, Scottish}
E4 = E3
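The stepwise construction of E1-E4 can be mimicked in code. Here is a minimal Python sketch (our own illustration, not an implementation of prioritized default logic proper; literals are strings, and larger numbers mean higher priority, as in this example): it repeatedly fires the strongest applicable default whose consequent is consistent with what has been derived so far.

```python
def pdl_extension(facts, defaults):
    """Greedy construction in the spirit of E1-E4: repeatedly fire the
    strongest applicable default whose consequent is consistent with the
    current set. Literals are strings; '~x' negates 'x'."""
    def neg(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    extension = set(facts)
    changed = True
    while changed:
        changed = False
        # Scan defaults from strongest (largest priority) to weakest.
        for priority, antecedents, consequent in sorted(defaults, reverse=True):
            if (all(a in extension for a in antecedents)
                    and neg(consequent) not in extension
                    and consequent not in extension):
                extension.add(consequent)
                changed = True
                break  # restart: the new literal may enable stronger defaults
    return extension

defaults = [
    (1, ["BornInScotland"], "Scottish"),
    (3, ["Scottish"], "LikesWhisky"),
    (2, ["FitnessLover"], "~LikesWhisky"),
]
print(pdl_extension({"BornInScotland", "FitnessLover"}, defaults))
# {'BornInScotland', 'FitnessLover', '~LikesWhisky', 'Scottish'} = E4
```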
((Pi, φ, +) ∉ DOS(Pi)) ∧ (∀j ≠ i)(Di Bj Bi φ).
Nonmonotonic deduction: The mother deduces from the context and her
knowledge that her daughter does not typically live in disarray. Thus,
something extraordinary must have happened.
Each argument is derived by applying the steps above (1 and 2) finitely many
times to ensure a structured process for argumentation within the framework.
We now use Example 3.1 to illustrate the commutative diagram, and we
explain the technical details later in Section 3.2.
Example 3.1 (Two approaches to nonmonotonic reasoning). Consider
a knowledge base containing three defeasible rules ⇒n (where n denotes the
rule's priority) as well as facts ({⊤}), as in Figure 4(a). Logical approaches to
defeasible reasoning select a subset of
rules whose conclusions are maximally consistent. For example, PDL [Brewka
and Eiter, 1999], discussed in Section 2.4, selects the strongest applicable rules,
i.e., the order (i) → (ii) in Figure 4(a), with output {a, ¬b}. While (iii) is now
made applicable by a, its consequent b conflicts with ¬b and cannot be selected.
Argumentation approaches, in turn, build explicit arguments (Figure 4(b))
and represent these conflicts (b, ¬b) as attacks between arguments (B and C).
Observe how the arguments in Figure 4(b) activate the rules in Figure 4(a). To
specifically capture PDL, one needs a selection of attacks (discussed in Section
3.2), such as the attack induced by the weakest link in Figure 4(c), which
defines that the strength of an argument reflects its weakest rules. Intuitively,
the jointly acceptable arguments here are {A, C}, which corresponds to the
PDL extension {a, ¬b}. But in Figure 4(c), the last link, which defines that the
strength of an argument is that of its last rule, selects the arguments {A, B}
with output {a, b}.
[Figure 4: (a) knowledge base: (i) ⊤ ⇒2 ¬b, (ii) ⊤ ⇒1 a, (iii) a ⇒3 b; (b) arguments: A = ⊤ ⇒1 a, B = ⊤ ⇒1 a ⇒3 b, C = ⊤ ⇒2 ¬b; (c) attack assignments and the resulting extensions under last link and weakest link.]
conflicts. The concept is straightforward: in a gunfight, one stays alive iff all
attackers are dead, and one dies iff at least one attacker is still alive. Under-
standing this analogy essentially captures the core idea of abstract argumenta-
tion:
1. An argument is labeled in iff all its attackers are labeled out;

2. An argument is labeled out iff it has at least one attacker that is labeled
in;
[Figure graphics: a framework with three arguments a, b, and c.]
Now the question is: which labelings in Dung’s theory are called extensions?
We illustrate the extensions to the framework in Example 3.2 below. We use
thick nodes for in, normal nodes for out, and dotted nodes for undec, to obtain
a visualization similar to a colored graph. A three-valued labeling satisfying
the above conditions corresponds to a complete extension, as in the first item
below. A stable extension is a complete extension in which no argument is
labeled undec, i.e., there is no dotted node, as in the second item below. The
unique grounded extension is the most skeptical complete extension; only
arguments that cannot avoid being accepted are labeled in, as in the third
item. For some frameworks, there are no stable extensions. Then we can use
preferred extensions, which are the maximal complete extensions, as in the
fourth item.
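The two labeling conditions above invite a brute-force implementation. The following Python sketch (our own illustration, feasible only for small frameworks) enumerates all complete labelings and filters out the stable ones:

```python
from itertools import product

def complete_labelings(arguments, attacks):
    """Brute-force enumeration of complete labelings: an argument is 'in'
    iff all its attackers are 'out', and 'out' iff some attacker is 'in'."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    labelings = []
    for values in product(("in", "out", "undec"), repeat=len(arguments)):
        lab = dict(zip(arguments, values))
        if all((lab[a] == "in") == all(lab[b] == "out" for b in attackers[a])
               and (lab[a] == "out") == any(lab[b] == "in" for b in attackers[a])
               for a in arguments):
            labelings.append(lab)
    return labelings

# A chain a -> b -> c has a unique complete labeling, which is also stable.
args, atts = ["a", "b", "c"], [("a", "b"), ("b", "c")]
complete = complete_labelings(args, atts)
stable = [lab for lab in complete if "undec" not in lab.values()]
print(complete)   # [{'a': 'in', 'b': 'out', 'c': 'in'}]
print(stable)     # the same labeling: no argument is undec
```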
et al., 2021] specifies a numeric value that indicates the relative strength of an
attack, as shown in Figure 6(c). Abstract agent argumentation [Yu et al.,
2021] extends Dung’s framework with a set of agents and a relation associat-
ing arguments with agents, as shown in Figure 6(d). Value-based argumenta-
tion [Atkinson and Bench-Capon, 2021], as shown in Figure 6(e), defines values
that are associated with an argument. The preference ordering of the values
may depend on a specific audience. To model defeat for a specific audience: an
argument A attacks an argument B for audience a if A attacks B and the value
associated with B is not preferred to the value associated with A for audience
a. Higher (second)-order argumentation [Cayrol et al., 2021] introduces a new
kind of attack which is a binary relation from arguments to attack relations,
as shown in Figure 6(f).
One technique that has already proven to be useful in the past for studying
such extensions is a meta-argumentation methodology involving the notion of
flattening [Boella et al., 2009]. Flattening is a function that maps some ex-
tended argumentation frameworks into Dung frameworks. There are two main
flattening techniques. One is that we keep the arguments the same while re-
moving attacks or introducing auxiliary attacks (sometimes also called
reductions). This technique is used in preference-based argumentation, ab-
stract agent argumentation, bipolar argumentation, etc. The other technique
is to use not only auxiliary attacks but also auxiliary arguments in higher-order
argumentation.
Example 3.3 (Four reductions of preference-based argumentation).
Figure 7 illustrates the differences between the four reductions from a
preference-based argumentation framework to abstract argumentation frame-
works [Kaci et al., 2021]. The basic idea of Reduction 1 is that an attack
succeeds only when the attacked argument is not preferred to the attacker.
Reduction 2 enforces that one argument defeats another when the former is
preferred but attacked by the latter. The idea of Reduction 3 is that if an
argument is attacked by a less preferred argument, then the former should de-
fend itself against its attacker. Reduction 4 mixes the second and the third
reductions.
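A hedged Python sketch of these reductions, following the intuitive descriptions above (the function names are ours; Reduction 4 is omitted because its exact definition mixes the other two, for which we refer to Kaci et al. [2021]):

```python
def reduce_paf(attacks, prefers, reduction):
    """Turn a preference-based framework into a plain defeat relation.
    `attacks` is a set of pairs; `prefers(x, y)` means x is strictly
    preferred to y. Sketch of Reductions 1-3 as described in the text."""
    defeats = set()
    for (a, b) in attacks:
        if reduction == 1:
            # Attack succeeds only if the target is not preferred to the source.
            if not prefers(b, a):
                defeats.add((a, b))
        elif reduction == 2:
            # Keep successful attacks; reverse the attack if the target is preferred.
            defeats.add((b, a) if prefers(b, a) else (a, b))
        elif reduction == 3:
            # Keep every attack; a preferred target also defends itself.
            defeats.add((a, b))
            if prefers(b, a):
                defeats.add((b, a))
    return defeats

# A single-attack framework: a attacks b, with b preferred to a.
attacks = {("a", "b")}
prefers = lambda x, y: (x, y) == ("b", "a")
for r in (1, 2, 3):
    print(r, reduce_paf(attacks, prefers, r))
# 1 set()   2 {('b', 'a')}   3 {('a', 'b'), ('b', 'a')}
```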
Flattening by adding auxiliary arguments is a way of implementing the
methodology of meta-argumentation [Boella et al., 2009]. Meta-argumentation
generally involves taking into account the arguments of, e.g., lawyers, com-
mentators, citizens, teachers, or parents (in accordance with the level of their
expertise) but it can also go beyond this — the arguers and the meta-arguers
can be represented by the same reasoners. For example, a lawyer may debate
whether a suspect’s argument attacks another argument, and she may also
argue in a similar way about her own arguments. To give another example,
people may be in the middle of an argument, but then start questioning the
rules of the dialogue game, and argue about that. A further example is that of
a child arguing that the argument I was ill attacks the argument I have to do
my homework but then finds that the argument I have a nice tan attacks the
[Figure 6 graphics: extended frameworks, including value-based argumentation with values v1 and v2 and audience-dependent preferences.]
[Figure 7 graphics: the original framework (a attacks b, with b ≻ a) and its four reductions.]
Figure 7: From left to right: the original argumentation framework, and the
results after applying the four reductions respectively. The defeat relation
is visualized with thick lines, and arguments that are accepted in grounded
semantics also have thick lines.
ASPIC+ system by Modgil and Prakken [2013; 2014], and it has also been
used to reconstruct and compare a variety of nonmonotonic logics, namely de-
fault logic [Reiter, 1980], Pollock’s [1987] argumentation system, and several
logic-programming semantics. More representations have been developed —
for details, please refer to the work of Heyninck [2019]. However, as discussed
in Section 2, there is also a diversity of natural argumentation, conceptual-
izations, and formal methods. Notwithstanding the initial appeal of Dung’s
abstract argumentation theory, there are many different kinds of argumenta-
tion frameworks and semantics. That leads to the following challenge:
Challenge 6. Representing nonmonotonic logics and solution concepts.
Before we get into approximating PDL with argumentation, let us first talk
about methodologies employed to compare different nonmonotonic logics and,
in particular, their use of examples. There are different approaches to the use
of examples in different disciplines. In law, ethics, and linguistics, examples
are central to the development and validation of theories because they help
ground abstract concepts in real-world scenarios, which helps to align logical
frameworks with intuitive understanding. In contrast, knowledge representa-
tion (KR) and other areas of computer science often use examples as a practical
tool to test, demonstrate, and communicate the effectiveness of a formal theory
rather than using them as foundational elements in theory construction.
NLP task: translating natural language into formal language. Consider the
aforementioned example of the fitness-lover Scot and an additional ex-
ample about a snoring professor:
The fitness-lover Scot: It is commonly assumed that if a man was
born in Scotland, then he is Scottish. And if he is Scottish, we
can normally deduce that he likes whiskey. However, fitness lovers
normally avoid alcohol for health reasons. Stewart was born in Scot-
land, and he is also a fitness lover. Does he like whiskey?
The snoring professor A library has a general rule that misbehavior,
such as snoring loudly, leads to denial of access. However, there
is another rule that professors are normally allowed access. Bob is
a professor and he is snoring loudly in the library. Should he be
allowed access to the library?
NLP could be used to identify three rules for each example, and
then further abstract them into these formal (default) rules with
priorities: {⊤ ⇒1 a, a ⇒3 b, ⊤ ⇒2 ¬b}. We then have:
KR task: after inputting some requirements, i.e., the goal of the reasoning,
the system asks what you want to derive from what you have. Although
the above two examples share a similar structure, there could be differ-
ent reasoning requirements that lead to the selection of different rules
and ultimately different conclusions. In the fitness-lover example, one
might prioritize the rule ⊤ ⇒2 ¬b and conclude that Stewart does not
like whiskey. In contrast, in the snoring professor example, one might
prioritize the rule a ⇒3 b and conclude that Bob should be denied access.
Logic design task: According to these requirements, the system asks what is
the best logic for your application. These two examples have been used to
illustrate the difference between prescriptive and descriptive reasoning in
nonmonotonic reasoning and between the weakest link and the last link,
which are two principles regarding how an argument draws strength from
its defaults.
[Figure 9 graphics: three rows (top, middle, bottom) of attack assignments over arguments such as ⊤ ⇒1 a, ⊤ ⇒1 a ⇒2 ¬b, ⊤ ⇒1 b, and ⊤ ⇒1 b ⇒2 ¬a, with outputs {a, ¬b} and {b, ¬a} (the bottom row refers to Example 3.7).]
⊤ ⇒1 a ⇒3 b ↦ 1 = min{1, 3}        ⊤ ⇒2 ¬b ↦ 2 = min{2}
A comparison of the strengths in this conflict produces the attack shown in
Figure 9 (top). The semantics then gives the argument selection also shown.
Our three attack relations (swl, dwl, pdl) do in fact agree on the verdict for this
example.1
(Descriptive.) This reading favours the set {⊤ ⇒1 a, a ⇒3 b} as its priorities
{1, 3} are more desirable than the rival ones {1, 2}. Last link can be seen as an
implementation of this reading: the contribution of a new default to a selection
or argument, say {⊤ ⇒1 a}, is defined by the desirability of this default (2 vs. 3
in the example). Last link thus agrees on the above preference but arrives at
it through argumentative means. First, one computes argument strength:
⊤ ⇒1 a ⇒3 b ↦ 3 = last(1, 3)        ⊤ ⇒2 ¬b ↦ 2 = last(2)

Based on this, argument ⊤ ⇒1 a ⇒3 b attacks ⊤ ⇒2 ¬b. Using a standard
argumentation semantics, one obtains the output {a, b}, not shown in Figure 9
(top).
1 This example represents the Tweety scenario {penguin → bird, bird ⇒ flies,
penguin ⇒ ¬flies} with priorities instead of the strict rule (→). Without priorities, the
solution {penguin, bird, ¬flies} obtains from specificity (of penguin over bird): birds fly is
overruled by the more specific penguins do not fly. Without specificity the solution obtains
from appropriate priorities using PDL or swl.
The simple weakest link, though, does not always capture the prescriptive
reading. In response to this, a more intuitive disjoint variant of the weakest
link has been considered [Young et al., 2016]. This variant assumes a rela-
tional measure of argument strength. It ignores all the shared defaults before
searching for the weakest link between two arguments.
Example 3.6 (Simple vs. disjoint weakest link). Let {⊤ ⇒1 a, a ⇒3 b,
a ⇒2 ¬b} define our knowledge base. Note that the two arguments ⊤ ⇒1 a ⇒3 b
and ⊤ ⇒1 a ⇒2 ¬b share the default ⊤ ⇒1 a with the lowest priority. See the
middle row in Figure 9.
(Disjoint weakest link) The attack relation defined by disjoint weakest link
(dwl) assigns strengths 3 > 2 to the above arguments, after excluding
the default they share. This generates the tie-breaking dwl-attack shown
in Figure 9 (mid, right). This figure also shows the set of arguments se-
lected by our semantics. The selected arguments’ conclusions match the
PDL output {a, b}.
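The contrast between the two measures is easy to express in code. In the following Python sketch (our own encoding, with an argument represented as a tuple of (default name, priority) pairs), swl takes the minimum over all defaults, while dwl first discards the defaults shared with the rival argument:

```python
def swl_strength(argument):
    """Simple weakest link: strength = the priority of the weakest default."""
    return min(priority for (_, priority) in argument)

def dwl_strength(argument, rival):
    """Disjoint weakest link: discard the defaults shared with the rival
    argument, then take the weakest remaining priority."""
    shared = set(argument) & set(rival)
    rest = [p for (name, p) in argument if (name, p) not in shared]
    # If every default is shared, the comparison imposes no constraint here.
    return min(rest) if rest else float("inf")

# Example 3.6: both arguments extend the shared default d1 of priority 1.
A = (("d1", 1), ("d3", 3))   # the argument for b  (via a)
B = (("d1", 1), ("d2", 2))   # the argument for ~b (via a)
print(swl_strength(A), swl_strength(B))        # 1 1: a tie under swl
print(dwl_strength(A, B), dwl_strength(B, A))  # 3 2: dwl breaks the tie for A
```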
Pollock’s definition of weakest link swl [Pollock, 2001] was adopted and
studied for ASPIC+ by Modgil and Prakken [2013; 2014]. Then, Young et
al. [2016; 2017] introduced dwl and proved that argument extensions under the
dwl-attack relation correspond to PDL extensions under total orders; see also
the results presented by Liao et al. [2019] and Pardo and Straßer [2022]. Under
total preorders, a new attack relation is needed for more intuitive outputs and
a better approximation of PDL — that is, better than dwl.
Example 3.7 (Beyond dwl). Let {⊤ ⇒1 a, ⊤ ⇒1 b, a ⇒2 ¬b, b ⇒2 ¬a} be the
defaults.

(swl, dwl) Weakest link attacks, depicted in Figure 9 (bottom, left), admit
the selection of arguments {⊤ ⇒1 a, ⊤ ⇒1 b}. This selection fits neither
the prescriptive interpretation nor PDL. Selecting either default ought to
be followed by the selection of a stronger default, namely a ⇒2 ¬b and
b ⇒2 ¬a respectively.

(PDL) As PDL selects the strongest default one at a time, this excludes by
construction the concurrent selection of {⊤ ⇒1 a, ⊤ ⇒1 b}. The PDL-
inspired attack relation in Figure 9 (bottom, right) also excludes this
selection.
In contrast, two PDL constructions exist for K, and so do two PDL extensions:

(⊤ ⇒1 a, a ⇒2 b, ⊤ ⇒1 c) ↦ {a, b, c}
(⊤ ⇒1 c, ⊤ ⇒1 a, a, c ⇒2 ¬b) ↦ {a, ¬b, c}

As a consequence, disjoint weakest link cannot characterize PDL under stable
semantics. Observe that attswl here coincides with PDL.
[Figure 10 graphics: arguments A = ⊤ ⇒1 a, ⊤ ⇒1 a ⇒2 b, C = ⊤ ⇒1 c, and A, C ⇒2 ¬b, with their attacks.]
Figure 10: The stable belief set {a, c, b} under attdwl for Example 3.9. Two
extensions, {a, c, b} and {a, c, ¬b}, exist under PDL.
[Figure 11 graphics: arguments ⊤ ⇒1 a, ⊤ ⇒1 a ⇒2 ¬b, ⊤ ⇒1 b, and ⊤ ⇒1 b ⇒2 ¬a, with arrows between conflicting arguments.]
Figure 11: pPDL differs from PDL. PDL has two extensions {a, ¬b} and
{b, ¬a}. pPDL has an additional extension {a, b}. Arrows denote logical con-
flicts.
As seen in Examples 3.7 and 3.9, disjoint weakest link and PDL are incom-
parable under total preorders. As a first step towards their convergence, one
can slightly modify PDL to make it closer to the disjoint weakest link. To this
end, Pardo et al. [2024] propose parallel PDL (pPDL), a concurrent variant of
PDL. The main novelty of pPDL is that each inductive step can concurrently
select a set of defaults, rather than just one; for the technical details, we refer
to the paper of Pardo et al. [2024].
Example 3.10 (pPDL, DWL vs. PDL). Let us use Example 3.7 to show
that the default logic PDL differs from pPDL. Figure 11 illustrates the three
pPDL extensions {a, ¬b}, {b, ¬a}, {a, b}, of which {a, b} is not a PDL extension.
Although pPDL and attdwl agree in this and other examples, pPDL does
not always match the disjoint weakest link.
Example 3.11 (pPDL vs. DWL). Example 3.9 showed a unique stable
belief set, {a, b, c}, under attdwl . But there are two pPDL extensions: {a, b, c}
and {a, ¬b, c}.
To sum up, the first goal of Pardo et al. [2024] was to identify an attack
relation that captures PDL extensions and compare it with attacks based
on the simple and disjoint weakest link using the eight principles advanced
by Dung [2016; 2018]. They proved which principles for attack relations are
satisfied by weakest link, disjoint weakest link and PDL-based attacks. Their
principle-based analysis clarified the differences between several kinds of attack
relation assignment. They identified and explained the nature of the weakest
link principle and revealed that there is still potential to improve the weak-
est link attack. On this last question, they proposed pPDL (parallel PDL), a
concurrent variant of PDL, and they showed by way of examples that it falls
closer to the disjoint weakest link than PDL does. While the pPDL variant
still does not match the disjoint weakest link, one might conjecture that some
further refinement might do.
In addition to presenting the argumentation framework, Dung [1995] also
investigated two examples of problems from microeconomics — cooperative
game theory and matching theory. In each case, Dung showed how an appro-
1. We showed that PDL and the weakest link definitions are similar but
not exactly the same. How can PDL be changed to make it fit one of
the weakest link definitions? How can the weakest link be changed to fit
PDL?
2. We discussed the logic of the weakest link. What is the logic that corre-
sponds to the last link?
4. PDL is only one of many logics for prioritized rules. How can all the
other systems for prioritized rules be represented?
of two arguments conflict with one another. Two kinds of rebuts have been
discussed in the literature: restricted rebuts and unrestricted rebuts. The in-
tuition behind restricted rebut is: if an argument is built up with only strict
rules, then the conclusion should also be strict, and the argument cannot be
attacked. The intuition behind unrestricted rebut is that a conclusion is de-
feasible, i.e., it can be attacked, iff the argument is built up with at least one
defeasible rule. Different choices on rebuts influence how to define an argu-
mentation formalism that derives reasonable conclusions. This distinction
exists in the ASPIC family of argumentation frameworks, including ASPIC+
[Modgil and Prakken, 2013; Modgil and Prakken, 2014], ASPIC- [Caminada
et al., 2014] and ASPIC-END [Dauphin and Cramer, 2018].
Example 3.12 illustrates the rationality postulates, comparing unrestricted
rebut and restricted rebut, and it shows the adjustments to restricted rebut
required to satisfy the rationality postulates for this example.
Example 3.12 (Married John [Caminada and Amgoud, 2007]). Con-
sider an argumentation system consisting of the strict rules {→ r, → n, m →
hs, b → ¬hs} and the two defeasible rules {r ⇒ m, n ⇒ b}. An intuitive in-
terpretation of this example is the following: “John wears a ring (r) on his
finger. John is also a regular nightclubber (n). Someone who wears a ring
on his finger is usually married (m). Someone who is a regular nightclubber is
usually a bachelor (b). Someone who is married has a spouse (hs) by definition.
Someone who is a bachelor does not have a spouse (¬hs) by definition.” We can
construct the following arguments:
A1: → r        A3: A1 ⇒ m        A5: A3 → hs
A2: → n        A4: A2 ⇒ b        A6: A4 → ¬hs
A1: ⊤ ⇒ p        A4: q, r → ¬p
A2: ⊤ ⇒ q        A5: p, r → ¬q
A3: ⊤ ⇒ r        A6: p, q → ¬r

[Figure graphics: the attack graphs among arguments A1-A6 under the two notions of rebut.]
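To make the two notions of rebut concrete, here is a Python sketch of the Married John example (our own encoding; ASPIC+ defines rebut on sub-arguments more generally than this simplified check):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Arg:
    name: str
    conclusion: str        # a literal; '~x' is the negation of 'x'
    defeasible_top: bool   # is the last rule applied defeasible?
    subs: tuple = ()       # proper sub-arguments

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def rebuts(x, y, restricted):
    """x rebuts y iff x's conclusion contradicts the conclusion of y or of one
    of y's sub-arguments; restricted rebut additionally requires the target's
    top rule to be defeasible."""
    return any(x.conclusion == neg(t.conclusion)
               and (t.defeasible_top or not restricted)
               for t in (y,) + y.subs)

A1 = Arg("A1", "r", False)
A2 = Arg("A2", "n", False)
A3 = Arg("A3", "m", True, (A1,))
A4 = Arg("A4", "b", True, (A2,))
A5 = Arg("A5", "hs", False, (A3, A1))
A6 = Arg("A6", "~hs", False, (A4, A2))

for restricted in (True, False):
    pairs = [(x.name, y.name) for x in (A5, A6) for y in (A5, A6)
             if x is not y and rebuts(x, y, restricted)]
    print("restricted:" if restricted else "unrestricted:", pairs)
# restricted: []  -- both top rules are strict, so neither attack succeeds
# unrestricted: [('A5', 'A6'), ('A6', 'A5')]
```

Under restricted rebut neither A5 nor A6 attacks the other, so both hs and ¬hs can end up jointly acceptable, which is exactly the kind of postulate violation the example is designed to expose.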
There are more postulates. For example, noninterference and crash resis-
tance are particularly relevant when the strict rules are derived from classical
logic, and again we examine various ways of satisfying these properties. How-
ever, there have been comparatively few results establishing them in systems
of the ASPIC family.
As a result, there are various ways to define the semantics. Below we discuss
abstract agent argumentation [Yu et al., 2021], which uses a minimal extension
of Dung's framework as a common core. This work introduces only an abstract
set of agents and a relation associating arguments with agents. There are four types
of semantics, defined by adaptations of defense, reductions, aggregations, and
selections:
Agent defense approaches adapt Dung’s notion of defense to argumenta-
tion semantics.
Social approaches are based on counting the number of agents [Leite and
Martins, 2011] and a reduction to preference-based argumentation [Am-
goud and Cayrol, 2002].
Agent reductions take the perspective of individual agents and aggregate
their individual perspectives [Giacomin, 2017].
Filtering methods are inspired by agents’ knowledge or trust [Arisaka et al.,
2022]. They leave out some arguments or attacks because they do not
belong to any agent.
Yu et al. [2021] have defined individual agent defense and collective agent
defense. Roughly, in individual agent defense, if an agent puts forward an
argument, it can only be defended by arguments from that same agent: a set
of arguments E individually agent-defends an argument c iff there exists an
agent α who has argument c such that for every argument b attacking c, there
exists an argument a in E from α attacking b. With collective agent defense,
the defenders may come from several agents: a set of arguments E collectively
agent-defends c iff there exists an agent α who has c and, for every argument b
attacking c, there exists an argument a in E (from any of the agents) such
that a attacks b.
Example 3.15 illustrates these two agent defenses.
Example 3.15 (Individual agent defense vs. collective agent defense).
In Figure 6(d), argument c defends argument a, but it does not individually
agent-defend it because c and a come from different agents. Consider another
abstract agent framework visualized in Figure 13. Here, {c1 , c2 } collectively
agent-defend argument a, but they do not individually agent-defend it.
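A hedged Python sketch of the two notions (our own data layout; following Example 3.15, we assume argument a belongs to agent α):

```python
def attackers_of(c, attacks):
    return {b for (b, x) in attacks if x == c}

def individually_agent_defends(E, c, attacks, owner):
    """Some agent owns c and supplies, from its own arguments in E,
    a counter-attacker for every attacker of c."""
    return any(
        c in args and all(
            any((a, b) in attacks and a in args for a in E)
            for b in attackers_of(c, attacks))
        for args in owner.values())

def collectively_agent_defends(E, c, attacks, owner):
    """Some agent owns c; the counter-attackers in E may come from any agents."""
    return (any(c in args for args in owner.values())
            and all(any((a, b) in attacks for a in E)
                    for b in attackers_of(c, attacks)))

# Figure 13, roughly: b1 and b2 (agent gamma) attack a; c1 (alpha) and
# c2 (beta) counter-attack. We assume a belongs to alpha.
attacks = {("b1", "a"), ("b2", "a"), ("c1", "b1"), ("c2", "b2")}
owner = {"alpha": {"a", "c1"}, "beta": {"c2"}, "gamma": {"b1", "b2"}}
E = {"c1", "c2"}
print(individually_agent_defends(E, "a", attacks, owner))  # False
print(collectively_agent_defends(E, "a", attacks, owner))  # True
```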
Social semantics is based on a reduction to preference-based argumentation
for each argument, by counting the number of agents that have those argu-
ments. It thus interprets agent argumentation as a kind of voting procedure.
Example 3.16 illustrates social reduction.
Example 3.16 (Social reduction). Consider the agent argumentation
framework visualized in Figure 14. Arguments a and b both belong to agent α,
b also belongs to agent β, and a attacks b. In that situation, argument b is pre-
ferred to argument a because it is held by more agents. We can then apply the
four reductions from the preference-based argumentation framework to abstract
argumentation frameworks, followed by the application of Dung's semantics.
[Figure 13 graphics: an abstract agent framework in which c1 (agent α) and c2 (agent β) attack b1 and b2 (agent γ), the attackers of argument a.]
[Figure 14 graphics: agents α and β, arguments a and b with a attacking b and b ≻ a, and the four resulting abstract frameworks.]
Figure 14: Social reduction: the left-hand side shows an abstract agent argu-
mentation framework, the middle the corresponding preference-based frame-
work, and the right-hand side four corresponding abstract argumentation
frameworks (as discussed in Example 3.3).
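Continuing the sketches above, the counting step of social reduction can be written as follows (names ours; the resulting preference can then be fed into the reduce_paf sketch given earlier):

```python
from collections import Counter

def social_preference(owner):
    """Arguments held by more agents are preferred (the counting step)."""
    counts = Counter(arg for args in owner.values() for arg in args)
    return lambda x, y: counts[x] > counts[y]

owner = {"alpha": {"a", "b"}, "beta": {"b"}}
prefers = social_preference(owner)
print(prefers("b", "a"))  # True: b is held by two agents, a by one
# Feeding this into the earlier reduce_paf sketch, e.g. with reduction 1,
# the attack (a, b) fails because b is preferred to a.
```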
Agent reductions take the perspective of each agent and obtain the semantics
accordingly. Intuitively, agents prefer their own arguments over the arguments
of others. Thus, for each agent, there is a preference-based argumentation
framework. In social reduction semantics, there is a unique abstract argumen-
tation framework after each reduction. However, in agent reduction semantics,
we obtain a set of abstract argumentation frameworks — one for each agent.
The final step is to take the union of all these frameworks to form a combined
abstract argumentation framework. Example 3.17 illustrates agent reductions.
Example 3.17 (Agent reduction). Reconsider the abstract agent argumen-
tation framework in Figure 14. Agent β prefers argument b over argument a.
Thus, we get the same preference-based framework as depicted in Figure 14,
but for a very different reason than that of social reduction. For agent α,
arguments a and b are equally preferred. To compute the agent extensions,
we take the union of
the reductions for each agent.
Agent filtering semantics remove arguments that do not belong to an agent,
or they remove attacks that do not belong to an agent. An attack belongs to
an agent if both the attacking and attacked arguments belong to that agent.
Example 3.18 illustrates agent filtering semantics.
Example 3.18 (Agent filtering). Consider the two abstract agent frame-
works visualized in Figure 15. For the framework on the left, we might say
that argument a is not known because it doesn’t belong to any agent. And for
the framework on the right, we might say that the attack is unknown because
no agent holds both arguments a and b. The filtering methods remove such
unknown arguments and unknown attacks. This is followed by the application
of Dung’s semantics.
[Figure 15 graphics: two abstract agent frameworks; on the left, argument a belongs to no agent while b belongs to agent β, and on the right, a and b belong to different agents (α and β), so no agent holds both.]
Pr2 : “It is you who killed the victim. Only you were near the scene at the
time of the murder.”
Pr3 : “At facility A? Then, it is impossible that you had your ID card stolen
because facility A does not allow any person to enter without an ID card.”
In this example, the opponent tries to defend himself with the claim “I had
my ID card stolen!” (Op2 ). However, the proponent strategically uses this very
claim against the opponent (Pr3 ), arguing that if the opponent was at facility
A, it would have been impossible for his ID card to have been stolen because
the facility does not permit entry without an ID card. This demonstrates how
an argument can backfire in a strategic dialogue.
In this section, we discussed extending the attack-defense paradigm for di-
alogue, particularly with agents. We end the section with the following ques-
tions:
3. What is the next step required to bridge the gap between 1) Dung’s
attack-defense paradigm and 2) strategic argumentation and dialogue?
4. There are many kinds of dialogues. What are the main components of a
dialogue? For example, what are the components of persuasion dialogue
systems like Fatio?
5. For all these kinds of dialogue, what makes a good dialogue? For example,
what is a successful Fatio dialogue? Does a successful dialogue happen
when someone is convinced of an argument they did not hold previously
or does it happen when the parties agree about where they disagree?
At an abstract level, it seems that pro and con arguments and the rela-
tions between them can be represented intuitively in the bipolar argumentation
frameworks discussed by Cayrol and Lagasquie-Schiex [2005; 2009; 2010; 2013],
which extend abstract argumentation frameworks with a support relation that
is independent of attack. Figure 16 illustrates three bipolar argumentation
frameworks, where attack relations are depicted by solid arrows, and support
relations are depicted by dashed arrows. Similarly to abstract agent argumen-
tation semantics, there are also three types of bipolar argumentation semantics
defined by Yu et al. [2023].
There are three new notions of defense based on both attack and support
relations, called defense1, defense2, and defense3, all of which add require-
ments to Dung's defense. Defense1 requires that an argument that defends
(in Dung's sense) another argument also supports it. Defense2 requires that a
defender is itself supported. Moreover, defense3 requires not only that the at-
tackers are attacked, but also that all supporters of the attackers are attacked
as well. We illustrate the three notions of defense in Figure 16.
[Figures 16-18 graphics: bipolar argumentation frameworks with attack (solid) and support (dashed) relations among arguments a, b, c, d, e.]
Figure 19: A child custody deliberation with possible arguments and their
relations in a divorce case [Yu et al., 2020].
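A minimal Python sketch of the three notions of defense (our own reading of the informal definitions above; Yu et al. [2023] give the precise ones):

```python
def attackers_of(c, attacks):
    return {b for (b, x) in attacks if x == c}

def dung_defends(E, c, attacks):
    return all(any((d, b) in attacks for d in E) for b in attackers_of(c, attacks))

def defenders(E, c, attacks):
    return {d for d in E for b in attackers_of(c, attacks) if (d, b) in attacks}

def defense1(E, c, attacks, supports):
    # Each defending argument must also support c.
    return dung_defends(E, c, attacks) and \
        all((d, c) in supports for d in defenders(E, c, attacks))

def defense2(E, c, arguments, attacks, supports):
    # Each defending argument must itself be supported by some argument.
    return dung_defends(E, c, attacks) and \
        all(any((s, d) in supports for s in arguments)
            for d in defenders(E, c, attacks))

def defense3(E, c, arguments, attacks, supports):
    # Attack the attackers of c and also all supporters of those attackers.
    targets = attackers_of(c, attacks)
    targets |= {s for s in arguments for b in list(targets) if (s, b) in supports}
    return all(any((d, t) in attacks for d in E) for t in targets)

# b attacks a, c attacks b, and c also supports a.
args, atts, sups = {"a", "b", "c"}, {("b", "a"), ("c", "b")}, {("c", "a")}
print(defense1({"c"}, "a", atts, sups))        # True: the defender c supports a
print(defense2({"c"}, "a", args, atts, sups))  # False: nothing supports c
print(defense3({"c"}, "a", args, atts, sups))  # True: b has no supporters
```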
2013]. Addressing this criticism, the literature on legal interpretation has dis-
cussed the possibility that legal knowledge-based systems contain alternative
syntactic formalizations. It has been observed that while, on the syntactic
level, formalization commits us to a given interpretation, on the conceptual
level, classification of factual situations as legal concepts is not an issue of
logical form [Prakken, 2013]. Alternatively, we can restrict the investigation
by saying that “the only aspects of legal reasoning that can be formalized are
those aspects that concern the following problem: given a particular interpre-
tation of a body of legal knowledge, and given a particular description of some
legal problem, what are the general rational patterns of reasoning with which a
solution to the problem can be obtained?” [Prakken, 2013]. If a formal frame-
work offers the different interpretations itself, though, then it might be directly
exploitable for comparing the different possibilities and routes of reasoning
given each interpretation.
It has been argued that for the validation of a bipolar argumentation theory,
so-called theory-based validation is preferable to empirical validation [Prakken,
2020; Polberg and Hunter, 2018], which itself is preferable to intuition-based
validation. Nevertheless, in this context, the principle-based analysis discussed
in Section 4.1 complements these validation methods. The theory of formal
argumentation needs to be complemented with examples and case studies con-
cerning the use of the theory.
In this section, we discussed extending the attack-defense paradigm for bal-
ancing. We end this section with the following questions:
2. When we introduce a new concept like support at the abstract level, it can
be interpreted in various ways at the structured level. For example, what
does support mean other than inferential relation (e.g., in ASPIC+)?
3. How can we better represent argumentation as balancing (e.g., in law
and ethics)? For example, how should the pros and cons be aggregated?
What is the role of weights?
4. How should other aspects of balancing be incorporated? For example,
how should argument strength be represented and evaluated? It is im-
portant to distinguish between different kinds of argument strength, in
particular logical, dialectical and rhetorical argument strength [Prakken,
2024a].
5. In the previous and present sections, we discussed extended abstract ar-
gumentation inspired by dialogue and balancing. Which other inspira-
tions can be utilized to design extensions of abstract argumentation? For
example, how can natural argumentation inspire extensions of abstract
argumentation?
Postulates
1. Reflexivity
2. Right weakening
3. Left logical equivalence
4. Cautious monotonicity
5. Cut
6. ...
Principles
1. Conflict-free
2. Naivety
3. Admissibility
4. Reinstatement
5. Directionality
6. SCC-recursiveness
7. ...
Classify and study the sets of principles — study the relations between
principles. For example, a set of principles may imply another principle.
Or there may be incompatibilities among principles. Or there may be a
set of principles that characterizes a semantics.
We illustrate the above three branches of semantics with Example 4.1 be-
low.
Example 4.1 (Three branches of abstract argumentation semantics).
Consider the three frameworks in Figure 21. For framework (a), the only
extension in AB semantics is the empty set whereas under the CF2 semantics,
[Figure 21 graphics: three argumentation frameworks (a), (b), and (c) over arguments a, b, c, d.]
Naivety states that every extension under the semantics is a maximal conflict-
free set.
Example 4.2. Given the framework shown in Figure 22, the complete and
weak complete semantics coincide: both yield {∅}. Yet ∅ is not a maximal
conflict-free set in this framework since, e.g., {a} is also conflict-free.
[Figure 22 graphics: an argumentation framework over arguments a and b.]
Figure 22: Complete and weak complete semantics do not satisfy naivety
Semantics  P1    P2    P3   P4    P5    P6   P7   P8  P9
TR         CGPS  CGPS  CGP  CGPS  CGPS  ×    ×    ×   ×
Sem1       CGPS  CGPS  CGP  S     S     CGP  CGP  S   S
Sem2       CGPS  CGPS  CGP  S     S     ×    CGP  S   S
SR1        ×     ×     CGP  ×     ×     ×    ×    ×   ×
SR2        CGPS  ×     ×    ×     ×     ×    ×    ×   ×
SR3        ×     CGPS  CGP  CGPS  CGPS  ×    ×    ×   ×
SR4        CGPS  G     ×    ×     G     ×    ×    ×   ×
AR1        ×     ×     CGP  ×     S     ×    ×    ×   ×
AR2        CGPS  ×     ×    ×     ×     ×    ×    ×   ×
AR3        CGPS  CGPS  CGP  CGPS  CGPS  ×    ×    ×   ×
AR4        CGPS  G     ×    ×     G     ×    ×    ×   ×
OR         CGPS  ×     CGP  CGPS  CGPS  ×    ×    ×   ×
NBR        ×     ×     CGP  ×     S     ×    ×    ×   ×

Table 9: Comparison between the reductions and new agent principles. P10 =
AgentAdditionPersistence, P11 = AgentAdditionUniversalPersistence, P12 =
PermutationPersistence, P13 = PermutationPersistence, P14 = RemovalAgent-
Persistence, P15 = AgentNumberEquivalence, P16 = ConflictfreeInvolvement,
P17 = RemovalArgumentPersistence.
Arrow's impossibility theorem, for example, shows that no preference aggregation system can simultaneously satisfy the whole set of seemingly reasonable criteria — non-dictatorship, unrestricted domain, Pareto efficiency, and independence of irrelevant alternatives — when there are three or more options available.
This kind of result highlights the inherent trade-offs involved in designing sys-
tems that attempt to balance competing principles. Similarly, as discussed in
Section 3.2, any attempt to realize PDL in ASPIC+ should preserve the def-
initional principle of attack closure. The impossibility theorem explains how
this is incompatible with context independence [Pardo et al., 2024]. These im-
possibility results are crucial because they reveal the boundaries of what can
be achieved within a given formal system. They also guide re-
searchers to either accept certain trade-offs or seek alternative approaches that
might circumvent these limitations.
In this section, we discussed principle-based analysis and, as usual, we list
several research questions about that topic.
1. How can we provide guidance to users who are not experts in formal
or computational argumentation on how to use the theory of argumen-
tation for their needs? Can we develop a user guide for the theory of
argumentation?
Step 1 Partition the argumentation framework into SCCs. There are two SCCs in the framework: S1 = {a, b} and S2 = {c, d, e}. Here, S1 is identified as the initial SCC as it does not depend on S2.
Step 2 Evaluate the initial SCC S1 in isolation under the chosen semantics; this yields the candidate extensions of S1.
Step 3 For each possible extension determined in Step 2, apply the reinstate-
ment principle. This involves suppressing the nodes directly attacked
within subsequent SCCs and considering the distinction between defended
and undefended nodes. Let’s take candidate extension {b}. Here, argu-
ment c in S2 cannot be included in the extension because it is attacked
by argument b. Therefore, only {d, e} can be taken into consideration.
[Figure: the argumentation framework of the SCC example over arguments a–e]
[Figure: multiagent argumentation frameworks M1 and M2 over agents e1, e2 and arguments a1–a4, with an argument o, panels (a) and (b)]
The first area where locality and modularity principles find application is in
the development of algorithms for efficiently computing argumentation seman-
tics, particularly through divide-and-conquer strategies. By leveraging locality,
one can focus on specific parts of an argumentation framework and thus reduce
the computational burden. There are three locality- and modularity-based
approaches that demonstrate how these principles can enhance the efficiency
of computing semantics in dynamic, static, and partial argumentation frame-
works. For instance, when only partial semantics are required — such as in sce-
narios where the status of certain arguments needs to be queried — algorithms
can be designed to focus solely on the relevant arguments, disregarding those
that do not impact the outcome. Similarly, in dialogues where new arguments
are introduced, the computation can be streamlined by ignoring the effects of
irrelevant arguments. For a comprehensive overview of these approaches, we
refer to the work of Baroni et al. [2018].
Other principles used in the design of algorithms include robustness princi-
ples [Rienstra et al., 2020], which deal with the behavior of a semantics when
the argumentation framework changes due to the addition or removal of an
attack between two arguments. Robustness principles have been classified into
two kinds: persistence principles and monotonicity principles. The former deal
with the question of whether a labeling in an argumentation framework under
a given semantics persists after a change. The latter deal with the question of whether new labelings are created after a change.
Persistence and monotonicity principles are also useful for addressing en-
forcement problems [Baumann, 2012] in abstract argumentation. This is about
the problem of determining minimal sets of changes to an argumentation frame-
work in order to enforce some result, such as the acceptance of a given set of
arguments. Because persistence and monotonicity principles can be used to de-
termine which changes to the attack relation of an argumentation framework
do or do not change its evaluation, these principles can be used to guide the
search for sets of changes that address the enforcement problem. This idea has
already been used for extension enforcement under grounded semantics [Niska-
nen et al., 2018].
In this section, we have discussed compositionality principles, algorithms and
other computational approaches that exploit principles. We end this section
with the following questions:
1. Most algorithms are developed for abstract argumentation and for argu-
mentation as inference. What are the computational tasks for structured
argumentation, for argumentation as dialogue, and for argumentation as
balancing?
2. Apart from algorithms, what other tools does computer science have to
offer, e.g., analysis of computational complexity, efficient implementation
of algorithms?
3. What else can we learn from artificial intelligence? The rise of machine
learning and foundation models is changing the landscape of argumenta-
tion. How could these new approaches speed up computation?
a1 He was at Facility A on the day of the murder [this is a fact Acc knows].
a6 Only Acc could have killed the victim [this is Prc’s claim].
Meanwhile, Wit has certain beliefs on the basis of which he has three arguments:
a3 Acc stayed home on the day of the murder, having previously lost his ID
card [this Wit originally believes to be a fact].
a4 Acc could enter any facility provided he had his ID card on him [this is a
fact known to Wit].
a5 Acc could not have been at Facility C at the time of the murder [this is
Wit’s claim].
Further, the relationship between the three arguments is such that a3 attacks
a4 , which attacks a5 . Altogether, these arguments by the three agents form
the argumentation framework in Figure 26(a). Acc, Prc and Wit reveal their
own internal argumentation, partially or elaborately, for the judge to evaluate.
But since agents may come to learn the arguments of other agents if, say, they
are expressed before they present their own arguments, it is possible that they
take the additional information into account when deciding which arguments
to present. In this example, both Prc and Acc have the characteristic of being
unaware agents. Prc has no reason to drop argument a6, and Acc sees no benefit in conceding it. However, how Wit responds to the
fact known to Acc (a1 ) could prove crucial to whether he is found innocent or
guilty.
[Figure 26: the argumentation frameworks Fi and Fj over arguments a1–a6, with a closed-minded unaware Wit and a closed-minded aware Wit]
[Figure: an abstract argumentation framework over arguments a–h]
Argumentation can also serve as a crucial tool for improving the logical consistency and depth of responses generated by LLMs.
Additionally, LLMs prompt us to reevaluate the relationship between abstract
and structured argumentation. Traditionally, structured arguments were nec-
essary because the attack relations were defined based on the internal structure
of the arguments, as discussed in Section 3.2. However, with LLM capabili-
ties, it becomes possible to retain arguments in their natural language form
and use an LLM to extract the underlying argumentation framework. This
approach allows argumentation to be integrated more naturally with conversa-
tional models because LLMs provide the contextual understanding needed to
facilitate these processes.
Furthermore, continuous improvements in computational power, together
with the capabilities of foundational models like LLMs, have opened up new
avenues for integrating argumentation into more complex systems. In such
systems, argumentation can synergize with other technologies, enhancing the
overall functionality and enabling more sophisticated applications. Such inte-
gration will not only advance the field of computational argumentation but will
also push the boundaries of what can be achieved in AI-driven reasoning and
decision-making systems. In this section, we discuss the following challenge:
Challenge 13. Integrating argumentation with technologies.
Distributed argumentation technology [Yu, 2023] is a computational approach
that incorporates argumentation reasoning mechanisms within multiagent sys-
tems. An instantiation of distributed argumentation technology is Intelligent
Human-Input-Based Blockchain Oracle (IHiBO) [Yu et al., 2022]. The mo-
tivation for IHiBO comes from fund management for the securities market.
Figure 28 shows a toy fund management procedure. Investors first pool their
money together and then fund managers conduct investment research and pre-
pare the specific plan for the investment portfolio. Fund managers invest in securities on behalf of their clients (investors), according to their research and the final decision in the investment plan. The investment generates returns, and
the returns are passed down to investors. Fund managers play an important
role in the investment and financial world as they give investors peace of mind
that their money is in the hands of experts. However, reality is not always as hoped for: investors are supposed to know, but often do not actually know, where their money is going, why, and what the true profit is.
A significant aspect of IHiBO is its human-input-based oracle, which bridges
the gap between a blockchain and real-world data, allowing human experts
to input information into the blockchain. IHiBO was envisioned for use in
fund management, where managers provide their arguments in terms of the
investment plan for stocks. Specifically, IHiBO utilizes multiagent abstract
argumentation frameworks to model decision-making processes, which are then
implemented by smart contracts and stored on a blockchain. The integration
of argumentation reasoning and blockchain makes the decision-making process
more transparent and traceable.
Figure 29: The architecture of the IHiBO framework. DLT = distributed ledger
technology.
In this section, we discussed the integration of argumentation with other technologies. We end this section with the following question:
1. This is just the beginning of the use of the attack-defense paradigm shift in argumentation technology. For a start, how can we use algorithmic argumentation methods in argumentation technologies?
5 Summary
This chapter has discussed the evolving landscape of argumentation, explor-
ing its natural forms, the paradigm shift initiated by Dung, and subsequent
advancements in computational approaches.
Natural argumentation is rooted in both theoretical and practical reasoning,
with formal argumentation grounded in philosophical and mathematical foun-
dations. This foundational approach is essential for representing, managing,
and resolving conflicts in various disciplines. For instance, the Jiminy ethical
governor, which operates across six layers of conflict, exemplifies the complexity
and depth of formal reasoning in ethical decision-making. However, natural ar-
gumentation is inherently diverse, reflecting the complexities of human thought
and communication. This diversity is evident in psychological evaluations of
arguments, where understanding and generating arguments involve intricate cognitive processes, and in explainable AI, where argumentation-based explanations are used for balancing. These connections underscore the importance of
argumentation for making AI systems more transparent and understandable.
Additionally, the integration of argumentation techniques with emerging tech-
nologies such as distributed argumentation technology has expanded the poten-
tial applications of argumentation in areas like blockchain and AI. For instance,
the IHiBO architecture integrates argumentation with blockchain technology
to enhance transparency and trust in decision-making processes.
In summary, this chapter presents an overview of argumentation: past
achievements, the current state of the art, and future directions. We discussed
the three pillars in the context of natural argumentation before discussing the
attack-defense paradigm shift initiated by Dung and advancements in compu-
tational argumentation that are shaping the future of the field.
6 Acknowledgments
All authors acknowledge financial support from the Luxembourg National
Research Fund (FNR) — L. van der Torre through the project The Epis-
temology of AI Systems (EAI) (C22/SC/17111440), L. van der Torre and
R. Markovich through the projects Logical Methods for Deontic Explana-
tions (LODEX) (INTER/DFG/23/17415164/LoDEx) and Study of the Lim-
its, Problems and Risks Associated with Autonomous Technologies (INTE-
GRAUTO) (INTER/AUDACE/21/16695098) and all authors through the
project Deontic Logic for Epistemic Rights (DELIGHT) (O20/14776480). R.
Markovich and L. Yu are also supported by the University of Luxembourg’s
Marie Speyer Excellence Grant of 2024 Formal Analysis of Discretionary Rea-
soning (MSE-DISCREASON).
BIBLIOGRAPHY
[Alchourrón et al., 1985] Carlos E Alchourrón, Peter Gärdenfors, and David Makinson. On
the logic of theory change: partial meet contraction and revision functions. The journal
of symbolic logic, 50(2):510–530, 1985.
[Aleinikoff, 1986] T Alexander Aleinikoff. Constitutional law in the age of balancing. Yale Law Journal, 96:943, 1986.
[Alkaissi and McFarlane, 2023] Hussam Alkaissi and Samy I McFarlane. Artificial halluci-
nations in chatgpt: implications in scientific writing. Cureus, 15(2), 2023.
[Altay et al., 2022] Sacha Altay, Marlène Schwartz, Anne-Sophie Hacquin, Aurélien Allard,
Stefaan Blancke, and Hugo Mercier. Scaling up interactive argumentation by providing
counterarguments with a chatbot. Nature Human Behaviour, 6(4):579–592, 2022.
[Amgoud and Cayrol, 2002] Leila Amgoud and Claudette Cayrol. Inferring from inconsis-
tency in preference-based argumentation frameworks. International Journal of Approxi-
mate Reasoning, 29(2):125–169, 2002.
[Amgoud and Vesic, 2012] Leila Amgoud and Srdjan Vesic. On the use of argumentation for
multiple criteria decision making. In International Conference on Information Processing
and Management of Uncertainty in Knowledge-Based Systems, pages 480–489. Springer,
2012.
[Amgoud, 2005] Leila Amgoud. An argumentation-based model for reasoning about coali-
tion structures. In International Workshop on Argumentation in Multi-Agent Systems,
pages 217–228. Springer, 2005.
[Amgoud, 2009] Leila Amgoud. Argumentation for decision making. Argumentation in
artificial intelligence, pages 301–320, 2009.
[Arisaka et al., 2022] Ryuta Arisaka, Jérémie Dauphin, Ken Satoh, and Leendert van der
Torre. Multi-agent argumentation and dialogue. IfCoLog Journal of Logics and Their
Applications, 9(4):921–954, 2022.
[Atkinson and Bench-Capon, 2005] Katie Atkinson and Trevor Bench-Capon. Legal case-
based reasoning as practical reasoning. Artificial Intelligence and Law, 13:93–131, 2005.
[Atkinson and Bench-Capon, 2021] Katie Atkinson and Trevor Bench-Capon. Value-based
argumentation. Handbook of Formal Argumentation, 2:397–441, 2021.
[Aumann, 2016] Robert J Aumann. Agreeing to disagree. Springer, 2016.
[Austin, 1975] John Langshaw Austin. How to do things with words. Harvard University
Press, 1975.
[Awad et al., 2018] Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph
Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan. The moral machine
experiment. Nature, 563(7729):59–64, 2018.
[Baroni and Giacomin, 2007] Pietro Baroni and Massimiliano Giacomin. On principle-based
evaluation of extension-based argumentation semantics. Artificial Intelligence, 171(10-
15):675–700, 2007.
[Baroni et al., 2005] Pietro Baroni, Massimiliano Giacomin, and Giovanni Guida. Scc-
recursiveness: a general schema for argumentation semantics. Artificial Intelligence,
168(1-2):162–210, 2005.
[Baroni et al., 2011] Pietro Baroni, Paul E Dunne, and Massimiliano Giacomin. On the
resolution-based family of abstract argumentation semantics and its grounded instance.
Artificial Intelligence, 175(3-4):791–813, 2011.
[Baroni et al., 2014] Pietro Baroni, Guido Boella, Federico Cerutti, Massimiliano Giacomin,
Leendert van der Torre, and Serena Villata. On the input/output behavior of argumen-
tation frameworks. Artificial Intelligence, 217:144–197, 2014.
[Baroni et al., 2018] Pietro Baroni, Massimiliano Giacomin, and Beishui Liao. Locality and
modularity in abstract argumentation. Handbook of formal argumentation, pages 937–979,
2018.
[Barringer et al., 2005] Howard Barringer, Dov Gabbay, and John Woods. Temporal dy-
namics of support and attack networks: From argumentation to zoology. Mechanizing
Mathematical Reasoning: Essays in Honor of Jörg H. Siekmann on the Occasion of His
60th Birthday, pages 59–98, 2005.
[Baumann et al., 2020] Ringo Baumann, Gerhard Brewka, and Markus Ulbricht. Revis-
iting the foundations of abstract argumentation–semantics based on weak admissibility
and weak defense. In Proceedings of the AAAI Conference on Artificial Intelligence,
volume 34, pages 2742–2749, 2020.
[Baumann, 2012] Ringo Baumann. What does it take to enforce an argument? Minimal
change in abstract argumentation. In ECAI 2012, pages 127–132. IOS Press, 2012.
[Baumeister et al., 2021] Dorothea Baumeister, Daniel Neugebauer, and Jörg Rothe. Collec-
tive acceptability in abstract argumentation. Handbook of Formal Argumentation, Volume
2, 2021.
[Bengel et al., 2024] Lars Bengel, Lydia Blümel, Tjitze Rienstra, and Matthias Thimm.
Argumentation-based probabilistic causal reasoning. In Conference on Advances in Ro-
bust Argumentation Machines, pages 221–236. Springer, 2024.
[Bezou-Vrakatseli, 2023] Elfia Bezou-Vrakatseli. Evaluation of LLM reasoning via argument
schemes. Online Handbook of Argumentation for AI, 4, 2023.
[Bistarelli et al., 2021] Stefano Bistarelli, Francesco Santini, et al. Weighted argumentation.
Handbook of Formal Argumentation, Volume 2, 2021.
[Black et al., 2021] Elizabeth Black, Nicolas Maudet, and Simon Parsons. Argumentation-
based dialogue. Handbook of Formal Argumentation, Volume 2, 2021.
[Boella et al., 2009] Guido Boella, Dov M Gabbay, Leendert van der Torre, and Serena
Villata. Meta-argumentation modelling I: Methodology and techniques. Studia Logica,
93:297–355, 2009.
[Boella et al., 2010] Guido Boella, Dov M Gabbay, Leendert van der Torre, and Serena
Villata. Support in abstract argumentation. In Proceedings of the Third International
Conference on Computational Models of Argument (COMMA’10), pages 40–51. Frontiers
in Artificial Intelligence and Applications, IOS Press, 2010.
[Borg and Bex, 2024] AnneMarie Borg and Floris Bex. Minimality, necessity and sufficiency
for argumentation and explanation. International Journal of Approximate Reasoning,
page 109143, 2024.
[Brewka and Eiter, 1999] G. Brewka and T. Eiter. Preferred answer sets for extended logic programs. Artif. Intell., 109:297–356, 1999.
[Brewka, 1994] Gerhard Brewka. Reasoning about priorities in default logic. In Barbara
Hayes-Roth and Richard E. Korf, editors, Proc. of the 12th National Conference on AI,
volume 2, pages 940–945. AAAI Press / The MIT Press, 1994.
[Budzynska et al., 2018] Katarzyna Budzynska, Serena Villata, et al. Processing natural
language argumentation. Handbook of formal argumentation, 1:577–627, 2018.
[Caminada and Amgoud, 2005] Martin Caminada and Leila Amgoud. An axiomatic account
of formal argumentation. In AAAI, volume 6, pages 608–613, 2005.
[Caminada and Amgoud, 2007] Martin Caminada and Leila Amgoud. On the evaluation of
argumentation formalisms. Artificial Intelligence, 171(5-6):286–310, 2007.
[Caminada and Ben-Naim, 2007] Martin Caminada and Jonathan Ben-Naim. Postulates for
paraconsistent reasoning and fault tolerant logic programming. PhD thesis, Department
of Information and Computing Sciences, Utrecht University, 2007.
[Caminada and Gabbay, 2009] Martin WA Caminada and Dov M Gabbay. A logical account
of formal argumentation. Studia Logica, 93:109–145, 2009.
[Caminada and Pigozzi, 2011] Martin Caminada and Gabriella Pigozzi. On judgment aggre-
gation in abstract argumentation. Autonomous Agents and Multi-Agent Systems, 22:64–
102, 2011.
[Caminada and Wu, 2011] Martin Caminada and Yining Wu. On the limitations of abstract
argumentation. In Proceedings of the 23rd Benelux Conference on Artificial Intelligence
(BNAIC 2011), pages 59–66, 2011.
[Caminada et al., 2012] Martin WA Caminada, Walter A Carnielli, and Paul E Dunne.
Semi-stable semantics. Journal of Logic and Computation, 22(5):1207–1254, 2012.
[Caminada et al., 2014] Martinus Wigbertus Antonius Caminada, Sanjay Modgil, and Nir
Oren. Preferences and unrestricted rebut. Computational Models of Argument, 2014.
[Caminada, 2006] Martin Caminada. Semi-stable semantics. COMMA, 144:121–130, 2006.
[Caminada, 2017] Martin Caminada. Argumentation semantics as formal discussion. Jour-
nal of Applied Logics, 4(8):2457–2492, 2017.
[Caminada, 2018a] Martin Caminada. Argumentation semantics as formal discussion. Hand-
book of Formal Argumentation, 1:487–518, 2018.
[Caminada, 2018b] Martin Caminada. Rationality postulates: Applying argumentation the-
ory for non-monotonic reasoning. Handbook of Formal Argumentation, Volume 1, pages
771–796, 2018.
[Castagna et al., 2024a] Federico Castagna, Nadin Kökciyan, Isabel Sassoon, Simon Par-
sons, and Elizabeth Sklar. Computational argumentation-based chatbots: a survey. Jour-
nal of Artificial Intelligence Research, 80:1271–1310, 2024.
[Castagna et al., 2024b] Federico Castagna, Peter McBurney, and Simon Parsons.
Explanation–question–response dialogue: An argumentative tool for explainable AI. Ar-
gument & Computation, (Preprint):1–23, 2024.
[Castagna et al., 2024c] Federico Castagna, Isabel Sassoon, and Simon Parsons. Can formal
argumentative reasoning enhance LLMs performances? arXiv preprint arXiv:2405.13036,
2024.
[Cayrol and Lagasquie-Schiex, 2005] Claudette Cayrol and Marie-Christine Lagasquie-
Schiex. On the acceptability of arguments in bipolar argumentation frameworks. In
European Conference on Symbolic and Quantitative Approaches to Reasoning and Un-
certainty, pages 378–389. Springer, 2005.
[Cayrol and Lagasquie-Schiex, 2009] Claudette Cayrol and Marie-Christine Lagasquie-
Schiex. Bipolar abstract argumentation systems. In Argumentation in Artificial Intelli-
gence, pages 65–84. Springer, 2009.
[Cayrol and Lagasquie-Schiex, 2010] Claudette Cayrol and Marie-Christine Lagasquie-
Schiex. Coalitions of arguments: A tool for handling bipolar argumentation frameworks.
International Journal of Intelligent Systems, 25(1):83–109, 2010.
[Cayrol and Lagasquie-Schiex, 2013] Claudette Cayrol and Marie-Christine Lagasquie-
Schiex. Bipolarity in argumentation graphs: Towards a better understanding. Inter-
national Journal of Approximate Reasoning, 54(7):876–899, 2013.
[Cayrol et al., 2021] Claudette Cayrol, Andrea Cohen, and Marie Christine La-
gasquie Schiex. Higher-order interactions (bipolar or not) in abstract argumentation:
a state of the art. 2021.
[Cerutti et al., 2021] Federico Cerutti, Marcos Cramer, Mathieu Guillaume, Emmanuel
Hadoux, Anthony Hunter, and Sylwia Polberg. Empirical cognitive studies about for-
mal argumentation. Handbook of Formal Argumentation, Volume 2, 2021.
[Coleman, 1984] James S Coleman. Micro foundations and macrosocial behavior. Angewandte Sozialforschung and AIAS Informationen Wien, 12(1-2):25–37, 1984.
[Cramer and Guillaume, 2018a] Marcos Cramer and Mathieu Guillaume. Directionality of
attacks in natural language argumentation. In CEUR Workshop Proceedings. RWTH
Aachen University, Aachen, Germany, 2018.
[Cramer and Guillaume, 2018b] Marcos Cramer and Mathieu Guillaume. Empirical cogni-
tive study on abstract argumentation semantics. In Computational Models of Argument,
pages 413–424. IOS Press, 2018.
[Cramer and Guillaume, 2019] Marcos Cramer and Mathieu Guillaume. Empirical study on
human evaluation of complex argumentation frameworks. In Logics in Artificial Intelli-
gence: 16th European Conference, JELIA 2019, Rende, Italy, May 7–11, 2019, Proceed-
ings 16, pages 102–115. Springer, 2019.
[Da Costa et al., 2007] Newton CA Da Costa, Décio Krause, and Otávio Bueno. Paracon-
sistent logics and paraconsistency. In Philosophy of logic, pages 791–911. Elsevier, 2007.
[Da Costa, 1974] Newton CA Da Costa. On the theory of inconsistent formal systems. Notre
dame journal of formal logic, 15(4):497–510, 1974.
[Danry et al., 2023] Valdemar Danry, Pat Pataranutaporn, Yaoli Mao, and Pattie Maes.
Don’t just tell me, ask me: AI systems that intelligently frame explanations as questions
improve human logical discernment accuracy over causal AI explanations. In Proceedings
of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1–13, 2023.
[Dauphin and Cramer, 2018] Jérémie Dauphin and Marcos Cramer. ASPIC-END: struc-
tured argumentation with explanations and natural deduction. In Theory and Applica-
tions of Formal Argumentation: 4th International Workshop, TAFA 2017, Melbourne,
VIC, Australia, August 19-20, 2017, Revised Selected Papers 4, pages 51–66. Springer,
2018.
[D’Avila Garcez et al., 2005] Artur S D’Avila Garcez, Dov M Gabbay, and Luis C Lamb.
Value-based argumentation frameworks as neural-symbolic learning systems. Journal of
Logic and Computation, 15(6):1041–1058, 2005.
[Delgrande et al., 2004] James Delgrande, Torsten Schaub, Hans Tompits, and Kewen
Wang. A classification and survey of preference handling approaches in nonmonotonic
reasoning. Computational Intelligence, 20(2):308–334, 2004.
[Dong et al., 2019] Huimin Dong, Beishui Liao, Reka Markovich, and Leendert van der
Torre. From classical to non-monotonic deontic logic using ASPIC+. In Logic, Ratio-
nality, and Interaction: 7th International Workshop, LORI 2019, Chongqing, China,
October 18–21, 2019, Proceedings 7, pages 71–85. Springer, 2019.
[Dong et al., 2021] Huimin Dong, Réka Markovich, and Leendert van der Torre. Towards
AI logic for social reasoning. arXiv preprint arXiv:2110.04452, 2021.
[Dung and Thang, 2018] Phan Minh Dung and Phan Minh Thang. Fundamental properties
of attack relations in structured argumentation with priorities. Artificial Intelligence,
255:1–42, 2018.
[Dung, 1995] Phan Minh Dung. On the acceptability of arguments and its fundamental role
in nonmonotonic reasoning, logic programming and n-person games. Artificial intelligence,
77(2):321–357, 1995.
[Dung, 2014] Phan Minh Dung. An axiomatic analysis of structured argumentation for
prioritized default reasoning. volume 263 of Frontiers in Artificial Intelligence and Ap-
plications, pages 267–272. IOS Press, 2014.
[Dung, 2016] Phan Minh Dung. An axiomatic analysis of structured argumentation with
priorities. Artif. Intell., 231:107–150, 2016.
[FIPA, 2002] FIPA. Communicative act library specification. http://www.fipa.org/specs/fipa00037, 2002.
[Gabbay and Rivlin, 2017] Dov M Gabbay and Lydia Rivlin. Heal2100: human effective
argumentation and logic for the 21st century. The next step in the evolution of logic.
IFCoLog Journal of Logics and Their Applications, 2017.
[Garcez et al., 2008] Artur S. D’Avila Garcez, Luis C Lamb, and Dov M Gabbay. Neural-symbolic cognitive reasoning. Springer Science & Business Media, 2008.
[Gargouri et al., 2020] Anis Gargouri, Sébastien Konieczny, Pierre Marquis, and Srdjan
Vesic. On a notion of monotonic support for bipolar argumentation frameworks. In
20th International Conference on Autonomous Agents and MultiAgent Systems, 2020.
[Georgeff et al., 1999] Michael Georgeff, Barney Pell, Martha Pollack, Milind Tambe, and
Michael Wooldridge. The belief-desire-intention model of agency. In Intelligent Agents V:
Agents Theories, Architectures, and Languages: 5th International Workshop, ATAL’98
Paris, France, July 4–7, 1998 Proceedings 5, pages 1–10. Springer, 1999.
[Giacomin, 2017] Massimiliano Giacomin. Handling heterogeneous disagreements through
abstract argumentation (extended abstract). In PRIMA 2017: Principles and Practice
of Multi-Agent Systems, pages 3–11, 2017.
[Gillioz and Zufferey, 2020] Christelle Gillioz and Sandrine Zufferey. Introduction to exper-
imental linguistics. John Wiley & Sons, 2020.
[Giunchiglia et al., 2004] Enrico Giunchiglia, Joohyung Lee, Vladimir Lifschitz, Norman
McCain, and Hudson Turner. Nonmonotonic causal theories. Artificial Intelligence, 153(1-
2):49–104, 2004.
[Gordon et al., 2007] Thomas F Gordon, Henry Prakken, and Douglas Walton. The
Carneades model of argument and burden of proof. Artificial Intelligence, 171(10-15):875–
896, 2007.
[Gordon, 1993] Thomas F Gordon. The Pleadings Game: An exercise in computational
dialectics. Artificial Intelligence and Law, 2:239–292, 1993.
[Gordon, 2018] Thomas F Gordon. Towards requirements analysis for formal argumentation.
In Pietro Baroni, Dov Gabbay, Massimiliano Giacomin, and Leendert van der Torre, edi-
tors, Handbook of formal argumentation, Volume 1, pages 145–156. College Publications,
2018.
[Governatori et al., 2021] Guido Governatori, Michael J Maher, and Francesco Olivieri.
Strategic argumentation. Handbook of Formal Argumentation, Volume 2, 2021.
[Hakim et al., 2019] Fauzia Zahira Munirul Hakim, Lia Maulia Indrayani, and Rosaria Mita
Amalia. A dialogic analysis of compliment strategies employed by Replika chatbot. In
Third International conference of arts, language and culture (ICALC 2018), pages 266–
271. Atlantis Press, 2019.
[Hamblin, 1970] C. L. Hamblin. Fallacies. Methuen, London, 1970.
[Heyninck, 2019] Jesse Heyninck. Investigations into the logical foundations of defeasible
reasoning: an argumentative perspective. PhD thesis, Ruhr University Bochum, Germany,
2019.
[Hinton and Wagemans, 2023] Martin Hinton and Jean HM Wagemans. How persuasive
is AI-generated argumentation? An analysis of the quality of an argumentative text
produced by the GPT-3 AI text generator. Argument & Computation, 14(1):59–74, 2023.
[Kaci et al., 2021] Souhila Kaci, Leendert van Der Torre, Srdjan Vesic, and Serena Villata.
Preference in abstract argumentation. Handbook of Formal Argumentation, Volume 2,
2021.
[Kakas and Moraitis, 2003] Antonis Kakas and Pavlos Moraitis. Argumentation based de-
cision making for autonomous agents. In Proceedings of the second international joint
conference on Autonomous agents and multiagent systems, pages 883–890, 2003.
[Kant, 1993] Immanuel Kant. Groundwork of the Metaphysics of Morals. Hackett Publish-
ing Company, Indianapolis, 3rd edition, 1993. [1785].
[Kissine, 2013] Michail Kissine. Speech act classifications. Pragmatics of speech actions,
173:202, 2013.
[Knocks et al., 2024] Aleks Knocks, Muyun Shao, Leendert van der Torre, Vincent De Wit,
and Liuwen Yu. A principle-based analysis for numerical balancing. In Logics for New-
Generation Artificial Intelligence (LNGAI2024). College Publications, United Kingdom,
2024.
[Knoks and van der Torre, 2023] Aleks Knoks and Leendert van der Torre. Reason-based
detachment. In Logics for New-Generation Artificial Intelligence (LNGAI2023). College
Publications, London, United Kingdom, 2023.
[Kraus et al., 1990] Sarit Kraus, Daniel Lehmann, and Menachem Magidor. Nonmonotonic
reasoning, preferential models and cumulative logics. Artificial intelligence, 44(1-2):167–
207, 1990.
[Leite and Martins, 2011] João Leite and João G. Martins. Social abstract argumentation.
In Toby Walsh, editor, IJCAI 2011, Proceedings of the 22nd International Joint Con-
ference on Artificial Intelligence, Barcelona, Catalonia, Spain, July 16-22, 2011, pages
2287–2292. IJCAI/AAAI, 2011.
[Liao and van der Torre, 2024] Beishui Liao and Leendert van der Torre. Attack-defense
semantics of argumentation. In Computational Models of Argument, pages 133–144. IOS
Press, 2024.
[Liao et al., 2019] Beishui Liao, Nir Oren, Leendert van der Torre, and Serena Villata. Prior-
itized norms in formal argumentation. Journal of Logic and Computation, 29(2):215–240,
2019.
[Liao et al., 2023] Beishui Liao, Pere Pardo, Marija Slavkovik, and Leendert van der Torre.
The Jiminy advisor: Moral agreements among stakeholders based on norms and argumen-
tation. Journal of Artificial Intelligence Research, 77:737–792, 2023.
[Longo et al., 2024] Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Con-
falonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas
Holzinger, et al. Explainable artificial intelligence (XAI) 2.0: A manifesto of open chal-
lenges and interdisciplinary research directions. Information Fusion, 106:102301, 2024.
[Makinson, 2005] David Makinson. Bridges from classical to nonmonotonic logic. King’s
College, 2005.
[Markovich, 2019] Réka Markovich. On the formal structure of rules in conflict of laws. In
JURIX, pages 199–204, 2019.
[McBurney and Parsons, 2002] Peter McBurney and Simon Parsons. Games that agents
play: A formal framework for dialogues between autonomous agents. Journal of Logic,
Language and Information, 11(3):315–334, 2002.
[McBurney and Parsons, 2004] Peter McBurney and Simon Parsons. Locutions for argu-
mentation in agent interaction protocols. In International Workshop on Agent Commu-
nication, pages 209–225. Springer, 2004.
[McBurney et al., 2021] Peter McBurney, Simon Parsons, et al. Argument schemes and
dialogue protocols: Doug walton’s legacy in artificial intelligence. FLAP, 8(1):263–290,
2021.
[McDougall et al., 2020] Rosalind McDougall, Cade Shadbolt, and Lynn Gillam. The prac-
tice of balancing in clinical ethics case consultation. Clinical Ethics, 15(1):49–55, 2020.
[Mercier and Sperber, 2017] Hugo Mercier and Dan Sperber. The enigma of reason. Harvard
University Press, 2017.
[Modgil and Prakken, 2013] Sanjay Modgil and Henry Prakken. A general account of argu-
mentation with preferences. Artif. Intell., 195:361–397, 2013.
[Modgil and Prakken, 2014] Sanjay Modgil and Henry Prakken. The ASPIC+ framework for structured argumentation: a tutorial. Argument Comput., 5(1):31–62, 2014.
[Moore, 1993] David John Moore. Dialogue game theory for intelligent tutoring systems.
PhD thesis, Leeds Metropolitan University, 1993.
[Musi and Palmieri, 2022] Elena Musi and Rudi Palmieri. The fallacy of explainable gener-
ative AI: evidence from argumentative prompting in two domains. 2022.
[Nash Jr, 1950] John F Nash Jr. Equilibrium points in n-person games. Proceedings of the
national academy of sciences, 36(1):48–49, 1950.
[Niskanen et al., 2018] Andreas Niskanen, Johannes P Wallner, and Matti Järvisalo. Exten-
sion enforcement under grounded semantics in abstract argumentation. In Sixteenth In-
ternational Conference on Principles of Knowledge Representation and Reasoning, 2018.
[Nouioua and Risch, 2010] Farid Nouioua and Vincent Risch. Bipolar argumentation frame-
works with specialized supports. In 2010 22nd IEEE International Conference on Tools
with Artificial Intelligence, volume 1, pages 215–218. IEEE, 2010.
[Nute, 1994] Donald Nute. Defeasible logic. Handbook of logic in artificial intelligence and
logic programming, 3:353–395, 1994.
[Okuno and Takahashi, 2009] Kenichi Okuno and Kazuko Takahashi. Argumentation sys-
tem with changes of an agent’s knowledge base. In Twenty-First International Joint
Conference on Artificial Intelligence. Citeseer, 2009.
[Pardo and Straßer, 2022] Pere Pardo and Christian Straßer. Modular orders on defaults in
formal argumentation. Journal of Logic and Computation, 2022.
[Pardo et al., 2024] Pere Pardo, Liuwen Yu, Chen Chen, and Leendert van der Torre. Weak-
est link, prioritised default logic and principles in argumentation. Journal of Logic and
Computation, 2024. Forthcoming.
[Phillips et al., 2021] P Jonathon Phillips, P Jonathon Phillips, Carina A Hahn, Peter C
Fontana, Amy N Yates, Kristen Greene, David A Broniatowski, and Mark A Przybocki.
Four principles of explainable artificial intelligence. 2021.
[Pigozzi and van der Torre, 2018] Gabriella Pigozzi and Leendert van der Torre. Arguing
about constitutive and regulative norms. Journal of Applied Non-Classical Logics, 28(2-
3):189–217, 2018.
[Polberg and Hunter, 2018] Sylwia Polberg and Anthony Hunter. Empirical evaluation of
abstract argumentation: Supporting the need for bipolar and probabilistic approaches.
Int. J. Approx. Reason., 93:487–543, 2018.
[Pollock, 1987] John L Pollock. Defeasible reasoning. Cognitive science, 11(4):481–518,
1987.
[Pollock, 1992] John L Pollock. How to reason defeasibly. Artificial Intelligence, 57(1):1–42,
1992.
[Pollock, 1994] John L Pollock. Justification and defeat. Artificial Intelligence, 67(2):377–
407, 1994.
[Pollock, 1995] John L Pollock. Cognitive carpentry: A blueprint for how to build a person.
Mit Press, 1995.
[Pollock, 2001] John L Pollock. Defeasible reasoning with variable degrees of justification.
Artificial intelligence, 133(1-2):233–282, 2001.
[Pollock, 2009] John L Pollock. A recursive semantics for defeasible reasoning. In Argumen-
tation in artificial intelligence, pages 173–197. Springer, 2009.
[Pollock, 2010] John L Pollock. Defeasible reasoning and degrees of justification. Argument
and Computation, 1(1):7–22, 2010.
[Prakken and Sartor, 2015] Henry Prakken and Giovanni Sartor. Law and logic: A review
from an argumentation perspective. Artificial intelligence, 227:214–245, 2015.
[Prakken, 2010a] Henry Prakken. An abstract framework for argumentation with structured
arguments. Argument & Computation, 1(2):93–124, 2010.
[Prakken, 2010b] Henry Prakken. Slides on nonmonotonic logic for commonsense reasoning,
2010.
[Prakken, 2013] Henry Prakken. Logical tools for modelling legal argument: a study of
defeasible reasoning in law, volume 32. Springer Science & Business Media, 2013.
[Prakken, 2018] Henry Prakken. Historical overview of formal argumentation. In Handbook
of formal argumentation, pages 73–141. College Publications, 2018.
[Prakken, 2020] Henry Prakken. On validating theories of abstract argumentation frame-
works: the case of bipolar argumentation frameworks. In Proceedings of the 8th Workshop
on Computational Models of Natural Argument (CMNA 2020), Perugia, Italy (and on-
line). CEUR-WS.org, 2020.
[Prakken, 2024a] Henry Prakken. An abstract and structured account of dialectical argu-
ment strength. Artificial Intelligence, 335:104193, 2024.
[Prakken, 2024b] Henry Prakken. On evaluating legal-reasoning capabilities of generative AI. In Proceedings of the 24th Workshop on Computational Models of Natural Argument, pages 100–112, 2024.
[Priest, 2002] Graham Priest. Paraconsistent logic. In Handbook of philosophical logic, pages
287–393. Springer, 2002.
[Rahwan et al., 2003] Iyad Rahwan, Sarvapali D Ramchurn, Nicholas R Jennings, Peter
McBurney, Simon Parsons, and Liz Sonenberg. Argumentation-based negotiation. The
Knowledge Engineering Review, 18(4):343–375, 2003.
[Rawls, 2001] John Rawls. Justice as fairness: A restatement. Harvard University Press,
2001.
[Reiter, 1980] Raymond Reiter. A logic for default reasoning. Artificial intelligence, 13(1-
2):81–132, 1980.
[Rienstra et al., 2020] Tjitze Rienstra, Chiaki Sakama, Leendert van der Torre, and Beishui
Liao. A principle-based robustness analysis of admissibility-based argumentation seman-
tics. Argument & Computation, 11(3):305–339, 2020.
[Ross et al., 2023] Steven I Ross, Fernando Martinez, Stephanie Houde, Michael Muller,
and Justin D Weisz. The programmer’s assistant: Conversational interaction with a
large language model for software development. In Proceedings of the 28th International
Conference on Intelligent User Interfaces, pages 491–514, 2023.
[Roth and Verheij, 2004] Bram Roth and Bart Verheij. Cases and dialectical arguments–an
approach to case-based reasoning. In On the Move to Meaningful Internet Systems 2004,
pages 634–651. Springer, 2004.
[Russell and Norvig, 2010] Stuart J Russell and Peter Norvig. Artificial intelligence: A
modern approach. London, 2010.
[Savage, 1972] Leonard J Savage. The foundations of statistics. Courier Corporation, 1972.
[Schindler et al., 2020] Samuel Schindler, Anna Drozdzowicz, and Karen Brøcker. Linguistic
intuitions: Evidence and method. Oxford University Press, 2020.
[Searle, 1979] John R Searle. Expression and meaning: Studies in the theory of speech acts.
Cambridge University Press, 1979.
[Sklar and Azhar, 2018] Elizabeth I Sklar and Mohammad Q Azhar. Explanation through
argumentation. In Proceedings of the 6th International Conference on Human-Agent
Interaction, pages 277–285, 2018.
[Straßer and Arieli, 2019] Christian Straßer and Ofer Arieli. Normative reasoning by
sequent-based argumentation. Journal of Logic and Computation, 29(3):387–415, 2019.
[Sycara, 1990] Katia P Sycara. Persuasive argumentation in negotiation. Theory and deci-
sion, 28:203–242, 1990.
[Tennent, 1991] Robert D Tennent. Semantics of programming languages. Prentice Hall, 1991.
[Toulmin, 1958] Stephen E Toulmin. The uses of argument. Cambridge university press,
1958.
[Traum, 1999] David R Traum. Speech acts for dialogue agents. In Foundations of rational
agency, pages 169–201. Springer, 1999.
[Turner, 2004] Hudson Turner. Strong equivalence for causal theories. In Logic Program-
ming and Nonmonotonic Reasoning: 7th International Conference, LPNMR 2004 Fort
Lauderdale, FL, USA, January 6-8, 2004 Proceedings 7, pages 289–301. Springer, 2004.
[Ulbricht and Wallner, 2021] Markus Ulbricht and Johannes P Wallner. Strong explana-
tions in abstract argumentation. In Proceedings of the AAAI Conference on Artificial
Intelligence, volume 35, pages 6496–6504, 2021.
[van der Lee et al., 2021] Chris van der Lee, Albert Gatt, Emiel van Miltenburg, and Emiel
Krahmer. Human evaluation of automatically generated text: Current trends and best
practice guidelines. Computer Speech & Language, 67:101151, 2021.
[van der Torre and Vesic, 2018] Leendert van der Torre and Srdjan Vesic. The principle-
based approach to abstract argumentation semantics. In Pietro Baroni, Dov Gabbay,
Massimiliano Giacomin, and Leendert van der Torre, editors, Handbook of formal argu-
mentation, Volume 1, pages 797–838. College Publications, 2018.
[Van Laar and Krabbe, 2018] Jan Albert Van Laar and Erik CW Krabbe. The role of ar-
gument in negotiation. Argumentation, 32(4):549–567, 2018.
[Villata et al., 2011] Serena Villata, Guido Boella, and Leendert van der Torre. Attack
semantics for abstract argumentation. In IJCAI. IJCAI/AAAI, 2011.
[Vilone and Longo, 2021] Giulia Vilone and Luca Longo. Classification of explainable artifi-
cial intelligence methods through their output formats. Machine Learning and Knowledge
Extraction, 3(3):615–661, 2021.
[Visser et al., 2020] Jacky Visser, John Lawrence, and Chris Reed. Reason-checking fake
news. Communications of the ACM, 63(11):38–40, 2020.
[Von Neumann and Morgenstern, 1947] John Von Neumann and Oskar Morgenstern. Theory of games and economic behavior. 2nd rev. ed., Princeton University Press, 1947.
[Walton and Krabbe, 1995] Douglas Walton and Erik CW Krabbe. Commitment in dia-
logue: Basic concepts of interpersonal reasoning. State University of New York Press,
1995.
[Walton et al., 2008] Douglas Walton, Christopher Reed, and Fabrizio Macagno. Argumen-
tation schemes. Cambridge University Press, 2008.
[Walton, 1990] Douglas N Walton. What is reasoning? What is an argument? The Journal
of Philosophy, 87(8):399–419, 1990.
[Walton, 2013] Douglas Walton. Argumentation schemes for presumptive reasoning. Rout-
ledge, 2013.