Three Levels of AI Transparency

Haresamudram, Kashyap; Larsson, Stefan; Heintz, Fredrik

Published in:
Computer

DOI:
10.1109/MC.2022.3213181

2023

Document Version:
Peer reviewed version (aka post-print)

Link to publication

Citation for published version (APA):


Haresamudram, K., Larsson, S., & Heintz, F. (2023). Three Levels of AI Transparency. Computer, 56(2), 93-100.
https://doi.org/10.1109/MC.2022.3213181

Total number of authors: 3

General rights
Unless other specific re-use rights are stated the following general rights apply:
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors
and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the
legal requirements associated with these rights.
• Users may download and print one copy of any publication from the public portal for the purpose of private study
or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain
• You may freely distribute the URL identifying the publication in the public portal

Read more about Creative commons licenses: https://creativecommons.org/licenses/


Take down policy
If you believe that this document breaches copyright please contact us providing details, and we will remove
access to the work immediately and investigate your claim.

LUND UNIVERSITY
PO Box 117
221 00 Lund
+46 46 222 00 00
Digital Object Identifier: 10.1109/MC.2022.3213181

Three Levels of AI Transparency


Kashyap Haresamudram, Doctoral Researcher, Lund University, Sweden. Stefan Larsson, Associate
Professor, Lund University, Sweden. Fredrik Heintz, Professor, Linköping University, Sweden.

Abstract—Transparency is generally cited as a key consideration towards building Trustworthy AI. However, the concept of
transparency is fragmented in AI research, often limited to transparency of the algorithm alone. While considerable attempts have been
made to expand the scope beyond the algorithm, there has yet to be a holistic approach that includes not only the AI system, but also
the user, and society at large. We propose that AI transparency operates on three levels, (1) Algorithmic Transparency, (2) Interaction
Transparency, and (3) Social Transparency, all of which need to be considered to build trust in AI. We expand upon these levels using
current research directions, and identify research gaps resulting from the conceptual fragmentation of AI transparency highlighted
within the context of the three levels.

Index Terms—Artificial Intelligence, Transparency, Explainability, Trustworthy AI, Interaction

1 INTRODUCTION

Transparency is often viewed as a prerequisite for trust in society [1]. And in relation to AI, transparency has been highlighted as one of the key ethical considerations required to build trustworthy AI [1]. Particularly in sociology and political science, transparency has been studied extensively and is believed to lead to greater trust in groups and institutions [1]. The conversation around transparency in AI has developed relatively recently, rooted in the governance of AI and closely tied to the concept's origins in socio-legal discourse. However, it has often been argued that AI transparency is needed in order to build trust in the decision-making of AI systems, and also to understand their implications for the larger socio-political and cultural context within which they exist and operate. With AI systems, particularly decision-making and recommender systems, being deployed in all domains from healthcare and law enforcement to retail and e-commerce, questions regarding whether an algorithm is accurate and should be trusted require opaque 'black-box' algorithms to become transparent [1]. While transparency is generally useful in the case of decision-making systems, especially when decisions are suggested to aid human decision-makers, it is not entirely clear whether the same is true for all types of AI systems and contexts of application. Additionally, this is transparency of the algorithm alone, outside of its situated context and excluding the user interactions. Such a specific definition of transparency, arguably, is unlikely to have an effect on trust. On the other hand, some research has found transparency to lead to information overload and negatively affect trust in consumers [2]. In general, however, there is a need for more empirical research on transparency requirements from a user perspective, in various contexts, for any real conclusions to be drawn [3]. We believe that the current scarcity of such research is the result of a fragmented understanding of AI transparency, and highlight the need to expand the conceptual scope of AI transparency to include not only the AI system, but also the various stakeholders interacting with the system, the context of use of the system, and the larger social implications of its continued use.

While conceptually transparency is rooted in a socio-legal context, within AI research it has come to be predominantly understood as transparency of the algorithm, closely related to the emerging field of Explainable Artificial Intelligence (XAI) [1]. However, XAI has been criticised for being techno-centric, led by individual XAI researchers' intuition on explanations [3], and with limited consideration of the existing research on both transparency and explanations within the social sciences [4]. This phenomenon has been described as 'inmates running the asylum' [4]. Within this context, transparency is defined simply as the ability to understand an algorithm and its decision making, through nuances such as simulatability, decomposability and algorithmic transparency, all of which focus on the algorithm [5]. We argue that this is a narrow conceptualisation. AI transparency should extend beyond the algorithm into the entire life-cycle of AI development and application, incorporating various stakeholders.

Larsson and Heintz have argued for a broader conceptualisation of AI transparency beyond the algorithm, and elaborated upon the socio-legal context of AI transparency [1][6], although it still remains domain-specific. It can be useful to understand how these domain-specific conceptualisations are interconnected, and we argue that a wider framing of the concept of AI transparency can help achieve that. We build that argument within the context of AI by identifying three distinct levels at which AI transparency can be realised, distinguishing three central elements in applied AI: the AI system, the user, and the social context. Bringing transparency across these elements together, we envision a cross-domain framework to build truly transparent, and consequentially, trustworthy AI. We conceptualise these levels as (1) algorithmic transparency, as seen above in XAI, (2) interaction transparency, realised through human-AI interaction, and (3) social transparency, realised through institutions, laws, and socio-cultural norms. This, we believe, can serve as a road-map to better organise and prioritise gaps in trustworthy AI research, enable a clearer understanding of the larger social context of AI, and help identify cross-domain collaboration opportunities for various stakeholders in AI research and development.

© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any
current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating
new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in
other works.
Digital Object Identifier: 10.1109/MC.2022.3213181

1.1 Terminology

This section serves to clarify what we mean by some of the key terms we use in this paper. Of late, AI research has been inundated with numerous, overlapping, sometimes interchangeable terms describing various associated concepts. The multidisciplinary nature of the current conversation around AI also means that the same terms can sometimes be understood differently in different fields. This could potentially be one of the reasons for the fragmented understanding of transparency. To alleviate any misinterpretation, this is our key to the terminology used in this paper.

1.1.1 Trustworthy AI

According to the Ethics Guidelines for Trustworthy AI outlined by the European Commission High-Level Expert Group on AI, for AI to be trustworthy it must meet three broad criteria: it must be (1) Lawful, (2) Ethical, and (3) Robust [7]. Evidently, this is a very broad definition. The concept of trust itself does not have a universally accepted definition. Transparency is clearly highlighted in the guidelines as a crucial element of trustworthy AI.

1.1.2 AI Transparency

We use the term AI transparency as an umbrella term encompassing several notions of the concept of transparency from various disciplines that speak to making AI more understandable and human-compatible, both individually and societally [1][6]. In this paper we propose sparse usage of this term (and transparency in general) in favour of the three specific levels of AI transparency that better articulate the different contexts and stakeholders involved. This, we suggest, will help alleviate confusion arising from the myriad understandings of transparency.

1.1.3 Explainable AI

Explainable AI or XAI can be defined as algorithms that explicitly consider human comprehensibility of their decisions as a criterion in their computations [8]. They encompass tools and methods to explain algorithmic predictions made by black-box AI. Generally, they tend to produce post-hoc explanations. Currently, the field of Explainable AI deals with a wide variety of research, from the highly computational and algorithmic to methods of representation of information.

1.1.4 Explanation

While explainability is the ability to explain algorithmic decision making in human-compatible terms, the explanation itself is a much more qualitative element pertaining to the nature of information exchange between the human and the AI [8]. Explainable AI deals with the algorithm, but the explanation itself has nothing to do with the algorithm; rather, it speaks more to the resultant interaction between the AI and the user. We make this distinction explicit, since the criticism of XAI is precisely that while AI developers may have an understanding of XAI methods, in most cases they probably do not have a nuanced understanding of explanations [8].

2 LEVELS OF AI TRANSPARENCY

Conceptualising AI transparency beyond the algorithm is not a novel endeavour. Several scholars have proposed their own frameworks. Wortham [9] highlights system transparency and organisational transparency as being key to building trust in AI. And Larsson [6] expands upon transparency in the legal context with seven nuances of the concept: proprietorship, avoiding abuse, literacy, data ecosystems, distributed/personalised outcomes, algorithmic transparency, and concepts, terminology and metaphor. While these conceptualisations make significant expansions over the widely used concept of algorithmic transparency, they do not touch upon all aspects of AI development and use. We believe such a holistic approach is needed to truly achieve trustworthy AI.

AI systems are not only algorithms, but through their use give rise to complex interactions between individuals and devices, within specific contexts and environments, which are in turn governed by social norms, cultural expectations, and laws. The complex interplay of these interactions is, we have found, not adequately captured in AI transparency research. Conceptually, Meijer [10] distinguishes three broad perspectives on transparency: transparency as a virtue, as a relation, and as a system. The first perspective encapsulates transparency as a norm or inherently desirable value in public actors. The second perspective captures the relational notion where one actor is made transparent to another, and transparency exists as a consequence of this relationship. The third perspective speaks about the complex network of relations that exist within a system and work together to produce transparency [10]. Echoing Meijer's perspectives in the context of AI, we propose an overarching framework where transparency is realised on three levels: the AI system/algorithm, the user interaction, and the social context. They can roughly be seen as representing transparency within the AI system, between the AI and the human user, and between the AI and society at large, see Fig. 1. However, while Meijer [10] seems to treat the perspectives as three separate views on transparency, we argue that the three levels we propose in relation to AI are inherently connected, likely even interdependent, and work together to make AI systems transparent.

Broadly, this conceptualisation is in some ways aligned with the ethos of the 'Transparency by Design' framework by Felzmann et al. [11], and we view their nine principles as complementary to our framework. The principles are: "(1) Be proactive, not reactive, (2) Think of transparency as an integrative process, (3) Communicate in an audience-sensitive manner, (4) Explain what data is being used and how it is being processed, (5) Explain decision-making criteria and their justifiability, (6) Explain the risk and risk mitigation measures, (7) Ensure inspectability and auditability, (8) Be responsive to stakeholder queries and concerns, and (9) Report diligently about the system" [11]. However, it can be argued that the principles largely pertain to algorithmic transparency (with some exceptions), and relate to a specific set of stakeholders. Through the three levels we seek to cast a broader lens on transparency. The principles however prove useful in further elaborating upon some of the concepts covered here.


Fig. 1. Levels of AI Transparency

It is important to note that the relevance of each individual level may vary for different stakeholders. For example, algorithmic transparency may be crucial to developers and auditors of the system, but not necessarily as important to the end user, to whom interaction transparency might take precedence. The three levels can be used to create stakeholder maps that can help identify collaboration needs and opportunities when developing and deploying AI algorithms. For example, the hypothetical stakeholder map in Table 1 shows an overview of how the levels of transparency may potentially map onto various stakeholders (in terms of relevance). Neither the stakeholders nor the mapping presented here are exhaustive, and both will likely differ based on the AI system in question and the context it is used in. But we propose that creating such stakeholder maps can help identify areas of cross collaboration as well as explicitly address transparency requirements on different levels.

TABLE 1
Example Stakeholder Interest Map

Stakeholder   Algorithmic   Interaction   Social
Developer          X
Designer                         X
Owner                                        X
User                             X           X
Regulator          X                         X
Society                                      X

2.1 Algorithmic Transparency

Most widely (mis)understood as AI transparency in general, algorithmic transparency is seemingly the most researched and well understood of the three levels. The primary problem is that complex AI systems process humanly unmanageable amounts of data in humanly incomprehensible ways, resulting in unknown biases in the resulting decisions. Such algorithms are sometimes referred to as 'black-boxes'. Several notable examples exist of black-box algorithms that have made biased decisions not intended by their developers, for example, Amazon's AI recruiting tool, which was found to be biased against women, and Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a criminal recidivism prediction algorithm employed by some states in the US, which has been shown to be racially biased against African-Americans [6].

Closely associated is the concept of openness. The ability to access and scrutinise code, data sets, and accompanying systems is essential for accountability, and an important part of AI transparency [1]. The two examples listed above are proprietary algorithms, making them harder to study and evaluate. Additionally, complex arguments regarding data collection, privacy, biased data, historical data and the like have stemmed from algorithmic transparency research, and there are still important questions to be answered here, such as transparency with regards to synthetic data. Algorithmic decisions are generally going to be influenced by the characteristics of the data used to train the algorithm, as well as the data the algorithm is used on. Understanding this interaction between the algorithm and the data is at the core of algorithmic transparency.

Various methods to open these black boxes and 'explain' the decisions made by these algorithms have been proposed under a relatively new research domain called Explainable AI. Using several methods, most often secondary AI algorithms designed to decode the primary 'black-box' algorithms, the decisions are broken down into human-comprehensible terms. For example, Shapley Additive Explanations (SHAP) is a popular XAI algorithm that produces explanations by highlighting the feature weights within the black-box AI that most influenced a given prediction/decision; this information is presented through graphs [5]. XAI methods, however, are not always accurate and are also generally inaccessible to non-domain experts. The explanations, which often take the form of probabilities or graphical representations, can also be unintuitive and convoluted. As also mentioned above, XAI has been criticised for relying on individual XAI researchers' intuition on explanations [3], and for limited understanding of explanations within the social sciences [4]. We argue that the 'explanation' itself is less suited as part of algorithmic transparency, and more suited as part of the interaction transparency that we elaborate in the next section. Alternatively, other methods for algorithmic transparency have also been proposed that do not need post-hoc explanations of algorithmic decisions. Interpretable AI is one such method, where simple, human-comprehensible algorithms are used instead of black-box models. It is usually understood that complex models with large data sets yield better and more accurate predictions; however, Rudin [12] has recently demonstrated that leaner, inherently interpretable models can be just as effective in certain domains.

Generally though, the solutions to achieve algorithmic transparency tend to cater to domain experts. This is likely due to a justified bias in research pertaining to 'high-stakes' AI systems such as in healthcare, where AI is generally used by experts as decision-support systems. Whether algorithmic transparency is needed in everyday contexts for end users, such as during online shopping, is not well understood, although the limited research that does exist finds that this type of transparency may not matter to users in everyday contexts [13]. Algorithmic transparency is probably most relevant to domain experts and auditors/regulators.
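To make the two routes discussed in this section concrete, the following Python sketch contrasts post-hoc explanation of a black-box model with an inherently interpretable model. It is a minimal illustration rather than part of the original article: the dataset, model choices, and parameters are assumptions, and it presumes the shap, xgboost, and scikit-learn libraries.

    # Route 1: a "black-box" model plus a secondary explanation algorithm (SHAP)
    # that attributes each prediction to the input features that influenced it.
    # Dataset and models below are illustrative assumptions only.
    import shap
    import xgboost
    from sklearn.datasets import fetch_california_housing
    from sklearn.tree import DecisionTreeRegressor, export_text

    X, y = fetch_california_housing(return_X_y=True, as_frame=True)

    black_box = xgboost.XGBRegressor(n_estimators=200).fit(X, y)
    explainer = shap.Explainer(black_box, X)   # secondary explanation model
    shap_values = explainer(X.iloc[:200])      # per-prediction feature attributions
    shap.plots.beeswarm(shap_values)           # graphical presentation of feature influence

    # Route 2: an inherently interpretable model whose decision rules can be
    # read directly, without any post-hoc explanation step (cf. Rudin [12]).
    tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
    print(export_text(tree, feature_names=list(X.columns)))

In the first route, transparency hinges on a separate explanation step whose graphical output still has to be interpreted; in the second, the printed rules are the model itself, which is the trade-off Rudin [12] highlights for high-stakes settings.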


2.2 Interaction Transparency

Miller claims about explainability that, "Ultimately, it is a human–agent interaction problem. Human-agent interaction can be defined as the intersection of artificial intelligence, social science, and human–computer interaction (HCI)" [8]. As we have stated before, we argue instead that it is not explainability but rather the explanation itself that is the human-agent interaction part of the problem (explainability is the algorithmic operationalisation of an explanation).

As AI systems get more complex and advanced, so too does their ability to interact with their users as well as influence their shared environment. The ability of AI systems to learn and adapt to their users brings forth an entirely different interaction paradigm and affordances towards transparency through the interaction. But discourse on AI transparency has evolved as though the various elements of interaction do not influence it. Research on how AI transparency translates in an applied setting is limited [13], what it means to the user is not well understood, and how to design for it is not clear either [13]. How transparency translates in interaction is seemingly the least studied of all three levels we have highlighted in this paper.

Tangibility, embodiment and entanglement form a compelling basis for interaction transparency. Increasingly, our interactions with AI are embodied. We wear AI in smart watches, we let AI change and adjust our environment in smart homes, smartphones are intrinsically linked to the social fabric of modern lives, and we experience these devices as an extended continuum of our body. As objects that can be touched and interacted with, the affordances through the materiality of these objects could be used to further embody the experience and entangle the individual with the device. Ghajargar et al. [14] conducted a design workshop to ideate and build upon the concept of 'Graspable AI' as an extension of the scope of explainable AI, using tangible and embodied interaction and the material body of the object to create rich, contextual, situated explanations that could enable transparency.

Lakoff and Johnson [15] write extensively about the nature of language and the relational metaphors we use with regards to our body to make sense of the world, making our experiences necessarily embodied. The use of metaphors can be extended outside of language to objects, designing interaction possibilities with one object through the embodied experience of another analogous object, for example, representing e-books as real books digitally and transferring existing knowledge about interaction possibilities with real books onto e-books. "The stronger this coupling, the more natural and pervasive the metaphor(s) involved, the more naturalistic and transparent the interaction becomes" [16]. 'Third-wave HCI' embraces the concepts of tangibility and embodiment to understand knowledge production in interaction. Frauenberger [17] proposes Entanglement as a new paradigm in interaction design, arguing for objects and the environment as forming social actors equal to human actors within interaction, and that knowledge is co-created between all actors as part of the interaction rather than existing entirely in an objective reality or as an external social phenomenon. This theory aligns perfectly with the argument we make about transparency (knowledge) arising as a result of interaction. To illustrate this, Frauenberger gives the example of a hypothetical device, Flow, that provides information about the ease or anxiety levels of different actors in an interaction, and postulates that the input from the device may become an inherent part of interaction with time, making this new sense (input) shape future interactions, exhibiting an entanglement between the technology and the users [17]. In the context of this paper, this example can be interpreted as enabling a form of transparency, providing information about the state of anxiety of an actor in an interaction, information that would otherwise not be available to the actors. With AI, such entanglements can be used as tools to open new avenues of interaction, as well as transparency.

Ultimately, it is through the interaction that knowledge exchange between the AI system and its users takes place. An intimate coupling of behaviours between AI and user is a form of transparency that could potentially enable a nuanced understanding of the strengths and limitations of an AI system, strengthening human trust in it. Embodied/entangled interaction design can play a much larger role in enabling transparency than has seemingly been recognised, and much research is needed in this space.

2.3 Social Transparency

Today AI, specifically machine learning, is being applied in almost every domain that has access to big data, and domains that do not are rushing to create those opportunities [1]. Big tech has made AI-enabled services ubiquitous, leaving experts and researchers to play catch-up with relevant legal frameworks and to understand its larger social impact.

2.3.1 Law and regulation

With high-profile cases such as Cambridge Analytica still in our recent collective memory, the conversation around data privacy and algorithmic responsibility is highly relevant [6]. Given that applied AI is so commonplace in society, several governments have begun to formulate frameworks to regulate it. Transparency in AI is widely considered an ethical obligation [18]. This is evidenced by recent developments within the EU, where a high-level expert group has drafted seven key facets to evaluate when implementing AI in the Ethics Guidelines for Trustworthy AI, highlighting transparency as a key facet [7]. In a recent study, transparency in some form was found to be the most commonly mentioned element in 84 different ethics guidelines on AI across the world [18].

The approach to AI transparency is necessarily domain and application specific [1]. And this is reflected in how the EU regulates AI as outlined in the proposal for an Artificial Intelligence Act presented in 2021, by dividing applications into four broad, risk-based buckets: unacceptable risk, high risk, limited risk, and minimal risk [19]. High-risk AI is scrutinised much more heavily; its transparency requirements are much higher [19] and operate on multiple of the levels of transparency categorised above. Limited-risk AI also has transparency obligations according to the proposed AI regulation, insofar as users have the right to know when they are interacting with an AI [19].

This risk-based approach is also echoed in recent work by Rudin [12], who proposes the use of interpretable models as an alternative to black-box AI in high-stakes AI applications. Rudin's [12] work echoes a common theme in a majority of AI research, not just in computer science but also the social sciences, with a key emphasis on high-stakes AI. Arguably, the risk-based approach is the first attempt to incorporate the situatedness (highly context-dependent nature) of AI systems within a legal framework. However, risk levels are only one way to define the context in which AI operates; while some form of categorisation is necessary to differentiate between various AI systems and their contexts of use, it remains to be seen whether this is the ideal approach.

2.3.2 Society and Culture

In European consumer and data protection, much emphasis is placed on information as a means of transparency and on individual responsibility towards that information. This has resulted in implementing solutions like cookie consent banners for transparency in data collection. However, these individual privacy agreements are far too many, causing information overload and indicating a flawed approach [2]. Critics have argued that a collective approach is likely more beneficial to society, and have advocated for institutionalised solutions instead, akin to the institutionalised solutions seen in the aviation or food industries, whereby we as consumers do not need to inspect and build trust in the individual companies or products making up the industry. Rather, trust is formed in the system as a whole, whose individual parts are highly regulated by laws that encourage transparency [20]. Closely related to the idea of institutionalised trust is organisational transparency, where transparency enables accountability, thereby forming trust. It is not known whether such an industry-wide standardisation is possible with regards to AI, but there are indications that consumers would prefer such a solution too.

Overarching the conversation around AI transparency, data privacy and 'datafied' living in general is literacy. Digital literacy is at the core of end users' ability to comprehend the technology they are interacting with, and consequently for transparency to be realised [6]. Studies have found that a majority of users have very little understanding of online data collection [2]. With data being an integral part of AI, and consumer-oriented AI relying on online data collection, one can then extrapolate, perhaps, that literacy regarding AI in general is probably extremely low. While some have argued that trustworthy AI should not default to placing the burden of literacy on the consumer, preferring institutions and regulations to mediate instead [20], it is still worrying that active measures to improve literacy are scarce.

Lastly, ethics and norms are not necessarily universally consistent. Larsson writes that "this could for example regard different groups, ethnicities, religions, demographics with different notions of what is regarded as right and wrong for everything from families, nudity, gender, sexuality, to free speech, media habits, driving behaviour, and so on" [6]. These nuanced, often sensitive, social challenges with regards to AI transparency will require careful consideration.

3 CONCLUSION

Given that AI transparency is often understood as a prerequisite for trustworthy AI, but at the same time is a fragmented concept, this broad framework expands the scope to include the various levels that AI transparency encompasses. The framework provides the ability to identify and weigh different notions of transparency based on the context, to enable informed prioritisation. Based on our review, we identify potential research areas that can contribute to the current understanding of AI transparency and its role towards trust in AI. Firstly, user-centred research on AI transparency is limited. Much remains to be learned about user needs with regards to AI transparency in various contexts, as well as the role interaction plays in generating transparency. Secondly, further research needs to be conducted on alternative methods to achieve transparency that do not rely on great volumes of information and individual responsibility. More work is needed towards establishing the collective responsibility (institutionalised trust) argument, which is seemingly at odds with parts of the current direction of AI regulation worldwide. Lastly, novel approaches in the form of embodied interaction should be embraced and researched to address the new interaction problems posed by emerging technology within the broad AI domain.

In conclusion, given the widely accepted notion that AI transparency can greatly contribute towards building trustworthy AI, our proposed three-level approach to AI transparency, through (1) Algorithmic transparency, (2) Interaction transparency, and (3) Social transparency, sheds some light on the various stakeholders and contexts involved. It expands the scope of AI transparency beyond the algorithm. And most importantly, it illustrates the complex and multifaceted nature of transparency, and emphasises the need for multidisciplinary research and cross-domain collaboration in the field.

ACKNOWLEDGMENTS

This work was funded as part of the 'AI Transparency and Consumer Trust' project, by the Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS).

REFERENCES

[1] Larsson, S., and Heintz, F. Transparency in artificial intelligence, Internet Policy Review, 9(2), 2020.
[2] Larsson, S., Jensen-Urstad, A., and Heintz, F. Notified But Unaware: Third-Party Tracking Online, Critical Analysis of Law, 8(1), 2021, 101-120.
[3] Ehsan, U., Liao, Q. V., Muller, M., Riedl, M. O., and Weisz, J. D. Expanding explainability: Towards social transparency in AI systems, In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-19), 2021.
[4] Miller, T., Howe, P., and Sonenberg, L. Explainable AI: Beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences, arXiv preprint arXiv:1712.00547, 2017.
[5] Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., and Herrera, F. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI, Information Fusion, 58, 2020, 82-115.
[6] Larsson, S. The socio-legal relevance of artificial intelligence, Droit et Société, (3), 2019, 573-593.
[7] High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI, European Commission, 2019.


[8] Miller, T. Explanation in artificial intelligence: Insights from the social sciences, Artificial Intelligence, 267, 2019, 1-38.
[9] Wortham, R. H. Transparency for Robots and Autonomous Systems: Fundamentals, technologies and applications, Institution of Engineering and Technology, 2020.
[10] Meijer, A. Transparency, The Oxford Handbook of Public Accountability, 2014.
[11] Felzmann, H., Fosch-Villaronga, E., Lutz, C., and Tamò-Larrieux, A. Towards transparency by design for artificial intelligence, Science and Engineering Ethics, 26(6), 2020, 3333-3361.
[12] Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, Nature Machine Intelligence, 1(5), 2019, 206-215.
[13] Felzmann, H., Villaronga, E. F., Lutz, C., and Tamò-Larrieux, A. Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns, Big Data and Society, 6(1), 2019.
[14] Ghajargar, M., Bardzell, J., Smith-Renner, A. M., Höök, K., and Krogh, P. G. Graspable AI: Physical Forms as Explanation Modality for Explainable AI, In Sixteenth International Conference on Tangible, Embedded, and Embodied Interaction (pp. 1-4), 2022.
[15] Lakoff, G., and Johnson, M. Metaphors We Live By, University of Chicago Press, 2008.
[16] Fishkin, K. P., Moran, T. P., and Harrison, B. L. Embodied user interfaces: Towards invisible user interfaces, IFIP International Conference on Engineering for Human-Computer Interaction, Springer, Boston, MA, 1998.
[17] Frauenberger, C. Entanglement HCI: The next wave?, ACM Transactions on Computer-Human Interaction (TOCHI), 27(1), 2019, 1-27.
[18] Jobin, A., Ienca, M., and Vayena, E. The global landscape of AI ethics guidelines, Nature Machine Intelligence, 1(9), 2019, 389-399.
[19] European Commission. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final, 2021. https://artificialintelligenceact.eu/the-act/
[20] Knowles, B., and Richards, J. T. The sanction of authority: Promoting public trust in AI, In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021.

Kashyap Haresamudram is a Doctoral Researcher at the Department of Technology and Society, Lund University, Sweden. His research interests include AI Transparency, Human-AI Trust, and Human-AI Interaction. Haresamudram received his Master of Arts in Cognitive Semiotics from Aarhus University, Denmark. He is a candidate at The Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS). Contact him at [email protected].

Stefan Larsson is a Senior Lecturer and Associate Professor at the Department of Technology and Society, Lund University, Sweden. His research interests include Trust, Transparency, and the socio-legal impact of autonomous and AI-driven technologies. Larsson holds two Doctor of Philosophy degrees, in Sociology of Law and Spatial Planning, respectively, as well as a Master of Laws (LL.M.). Contact him at [email protected].

Fredrik Heintz is a Professor at the Department of Computer and Information Science, Linköping University, Sweden. His research interests include AI, Trustworthy AI, and the combination of reasoning and learning. Heintz received his Doctor of Philosophy in Computer Science from Linköping University. He is a fellow of the Royal Swedish Academy of Engineering Sciences. Contact him at [email protected].

