Applying Transparency in Artificial Intelligence Based Personalization Systems
Laura Schelenz1 and Avi Segal2 and Kobi Gal3
1 University of Tübingen, Germany, email: [email protected]
2 Ben-Gurion University of the Negev, Israel, email: [email protected]
3 The University of Edinburgh, UK, email: [email protected]

Abstract. …with which they interact. Increasing transparency is an important goal for personalization based systems. Unfortunately, system designers lack guidance in assessing and implementing transparency in their developed systems.

In this work we combine insights from technology ethics and computer science to generate a list of transparency best practices for machine generated personalization. Based on these best practices, we develop a checklist to be used by designers wishing to evaluate and increase the transparency of their algorithmic systems. Adopting a designer perspective, we apply the checklist to prominent online services and discuss its advantages and shortcomings. We encourage researchers to adopt the checklist in various environments and to work towards a consensus-based tool for measuring transparency in the personalization community.

1 Introduction

Recent years have seen a significant increase in personalization approaches for online systems [52, 62, 65]. Such personalization can be used to direct users' attention to relevant content [37], increase their motivation when working online [43], improve their performance [35], extend their engagement [53], and more. These approaches rely on social theories of human behaviour (e.g. [25, 15]) as well as on machine learning based abilities to predict human behavior and human responses to various interventions [47, 30].

Yet personalization technology that focuses on maximizing system designers' goals runs the risk of marginalizing users [1]. Personalized recommendations of content, products, and services usually attempt to influence a person's decision-making. When such influences are hidden and subtly try to persuade users (perhaps even against their expressed goals), this constitutes a form of manipulation [58]. Subverting a person's decision-making abilities reduces their autonomy. Especially with regard to personalized advertisement, personalization can exploit users' vulnerabilities [56] and may even threaten democratic processes [22].

Despite frameworks that prescribe transparency in data collection, processing, and storage [4], system designers require increased awareness and guidance in the implementation of transparency in their systems. Recent efforts recognize this need for guidance and seek to synthesize and make ethics principles more tangible for implementation in systems [10, 9].

We begin our analysis by discussing the need for transparency in Artificial Intelligence systems (section 2). We then provide a definition of transparency drawing on prior art and address transparency's distinction from explainability and related concepts (section 3.1). We identify transparency as a practice of communication and interaction with the end user regarding a system's features and possible effects, including options for user control. We move on to derive transparency best practices for personalization in online systems (section 3.2). These best practices constitute ethical responsibilities on the part of system designers. Based on these best practices, we specify questions that should be addressed when considering transparency in personalization processes. This constitutes a concrete first checklist that can be used by system designers to evaluate and operationalize transparency in their systems (section 3.3). We then describe a preliminary application of the checklist to existing web sites in the wild (section 4). Specifically, we look at Facebook, Netflix, YouTube, Spotify and Amazon. For each such destination, we check how the elements from the checklist are supported on the particular site. We finish by discussing the value but also the limitations of our approach and by pointing to further research needed in this area (sections 5 and 6).

We make the following main contributions: we create a new definition of transparency in the context of AI based personalization; we develop a set of best practices for the community based on this definition and on prior art; and we generate a concrete tool-set to help system designers assess and realize these best practices in their respective systems.

2 The Need for Transparency in AI Based Systems

The speedy uptake of Artificial Intelligence based approaches has raised concerns about their ambivalence. Personalized recommendation may help users find relevant content and items but also
introduce bias and pose a risk of manipulation [16]. Similarly, algorithmic decision-making may help allocate resources in a fairer manner but also poses risks of discriminating against social groups due to bias in the system [68]. Such risks are especially high if the user is unaware of the processes of personalization and decision making. This is often the case with algorithmic systems, as filtering, classification, personalization, and recommendation remain intransparent or even opaque [46, 20].

Transparency can help address the concerns voiced about AI based personalization systems. First, transparency can balance power asymmetry, empowering users while curtailing the influence of companies on customer behavior. While companies have easy access to user data, users lack knowledge of algorithmic systems [40]. Especially big players in the information system economy hold enormous power vis-à-vis users, to the extent that they can shape information, knowledge, and culture production [13]. User empowerment by means of transparency and user control may level the playing field.

Second, transparency can increase user autonomy. Recommender systems usually filter content according to preference models that easily create a feedback loop. A classic example is the filter bubble in social media platforms [45, 63]. When users lack exposure to information diversity, their autonomy and ability to make independent decisions is impacted [42]. However, if users understand why and how an algorithm presents information to them, they can better reflect on how sources of information inform their decisions.

Third, transparency can boost privacy rights and user trust in algorithmic systems. Users can only give meaningful informed consent when they understand the risks of algorithmic decision-making [42].

Fourth, transparency can enable fairness and non-discrimination in algorithmic decision-making. Algorithmic decision-making is becoming ever more pervasive, affecting individuals in pivotal areas of life [24]. While human decision-making reserves the possibility of providing a straightforward face-to-face explanation of why someone's application was denied, algorithmic systems are considered too complex for operators to provide a simple answer [41] (for an opposing view, see [66]). Transparency may thus increase subjects' ability to understand the cause of decisions made by algorithms. Thereby, transparency enables users to assess whether a decision-making process is fair and non-discriminatory [5].

While transparency is highly relevant, it is not absolute. Calls for transparency may not always be ethical and warranted. Indeed, they depend on the standing of the different actors that are involved and interact in algorithmic assemblages [36, 51, 17]. For instance, demanding increased transparency on behalf of users (in terms of sharing more data) seems inappropriate given their vulnerability to loss of informational privacy [39]. It is thus appropriate to focus attention on promoting transparency from the system design perspective, and to increase users' understanding of the logic underlying designers' activities [36, 44, 57].

3 Best Practices for Transparency

We take a three step approach to developing best practices for transparency in machine generated personalization. First, we develop a new definition of transparency for algorithmic systems by drawing on prior art. Second, from this definition, we derive best practices for the implementation of transparency in machine generated personalization. Third, we translate these best practices into questions for system designers to be used as a reflection and assessment tool, presented as an online checklist for open usage.

3.1 Step 1: Transparency Definition

To generate a list of best practices, we began by asking: What is transparency in the context of AI systems? When working with the term transparency, we should first clarify the relationship of transparency to principles of ethics. According to Turilli and Floridi [60], transparency is not an ethics principle itself. Rather, transparency can enable or prevent ethics. In some cases, calls for transparency may for instance inhibit privacy rights. We thus frame transparency not as a principle of ethics but as a practice that can achieve ethics goals such as autonomy and accountability [60]. We investigated views on transparency from technology ethics, the philosophy of technology, and computer science, but also ethics guidelines and legal documents. Based on our analysis, we define transparency as follows:

Transparency is a practice of system design that centers on the disclosure of information to users, where this information should be understandable to the respective user and provide insights about the system. Specifically, the information disclosed should enable the user to understand why and how the system may produce, or why and how it has produced, a certain outcome (e.g. why a user received a certain personalized recommendation).

Recent and emerging scholarship on explainable AI has provided computational methods to increase explainability in computer systems [34, 49, 67, 48, 8, 33]. While transparency is often used synonymously with explainability and similar concepts such as observability, controllability, and interpretability, transparency is in fact broader than explaining a system's functionality or enabling the user to infer information from a system's outcome. Transparency follows a more comprehensive approach, as it combines several components, which we now lay out.

The first important component of transparency is the notion that information must be "understandable". The user of a system must be able to comprehend the information disclosed to them. For instance, the GDPR [4] states with regard to data processing that information must be provided in "clear and plain language and it should not contain unfair terms" [4].

Here, we can see that transparency is a relational concept and a performative practice [11]. Whether the information provided is transparent to an individual user (or data subject) depends on their cognitive abilities, their language skills, and epistemic conditions. Therefore, practices of transparency must be personalized to the user at hand, given the diversity of users' ability to comprehend information [61].

Several sources stress the importance of information comprehensibility. According to Chromnik et al. [21], transparency is an enabling condition for the user to "understand the cause of a decision". Ananny and Crawford [11] describe transparency as a form of seeing and understanding an actor-network. The authors stress that transparency means not merely looking inside a system but across systems. Transparency thus means explaining a model as it interacts with other actors in an algorithmic system [11]. Floridi et al. [26] understand transparency as explainability, whereby explainability incorporates both intelligibility and accountability: AI decision-making processes can only be understood if we are able to grasp how models work and who is responsible for the way they work [26]. For Vakarelov and Rogerson [61], transparency means communication of information under two conditions: information must be a) sufficient and b) accessible. The latter means that the recipient of the information must be able to comprehend and act upon the information.
Another crucial element of transparency is information disclosure about deliberation or decision-making processes. The IEEE Guideline for Ethically Aligned Design states that transparency means the possibility to ascertain why a certain decision was made [2]. For Turilli and Floridi [60], disclosing information refers to communication about the deliberation process, i.e. how information came about. The rationale here is that the deliberation process reveals the values that guide organizations or system designers in their everyday practices and illustrates how they make decisions.

Similarly, for Tene and Polonetsky [59], transparency refers to the revelation of information about the criteria used in decision-making processes. The disclosure of the dataset (or its existence) is less relevant than the actual factors (such as inferences made from the data) that inform a model and its effects on users. Zerilli et al. [66] likewise argue that, similar to explanations in human decision-making, a system should reveal the factors in decision-making and how they might be weighted.

Dahl [23] even argues that it is not necessary to reveal the inner workings of a model for the user to determine whether a system is trustworthy. Rather, transparency means providing key details about how the results came about, or offering expert testimony about how the system usually works. Burrell [20] suggests that improving the interpretability of models is crucial to reduce opacity: one approach to building more interpretable classifiers is to implement an end-user-facing component that provides not only the classification outcome but also exposes some of the logic of this classification.
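To illustrate Burrell's suggestion, the following minimal sketch shows an end-user-facing component that returns not only a classification outcome but also some of the logic behind it. The sketch assumes a simple linear scoring model; all names and the recommendation scenario are our own illustrative assumptions, not an implementation from the cited work.

    # Illustrative sketch (our assumption, not code from [20]): a user-facing
    # component that exposes both the outcome and the logic of a classification.
    from dataclasses import dataclass

    @dataclass
    class ExplainedOutcome:
        outcome: str         # the classification shown to the user
        contributions: dict  # feature -> signed contribution to the score

    def classify_with_logic(features: dict, weights: dict,
                            threshold: float = 0.0) -> ExplainedOutcome:
        """Score a simple linear classifier and expose per-feature contributions."""
        contributions = {name: weights.get(name, 0.0) * value
                         for name, value in features.items()}
        score = sum(contributions.values())
        outcome = "recommended" if score > threshold else "not recommended"
        # Rank the factors so the user sees what mattered most first.
        ranked = dict(sorted(contributions.items(), key=lambda kv: -abs(kv[1])))
        return ExplainedOutcome(outcome, ranked)

    # Example: why did the user receive a certain personalized recommendation?
    print(classify_with_logic(
        features={"watched_similar": 1.0, "topic_match": 0.8, "recency": 0.2},
        weights={"watched_similar": 1.5, "topic_match": 0.9, "recency": -0.3}))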
Finally, there can be an element of participation in transparency. The user is expected to assess the system with regard to its trustworthiness based on the information that is disclosed. Furthermore, the user may become active in choosing between different models, i.e. different options of personalization [54]. The user thus becomes involved in the process of transparency, which increases user control while interacting with the system.

3.2 Step 2: Best Practices

From our definition of transparency, we derived nine principles of transparency for responsible personalization. They reflect the three core elements of transparency: information provided must be understandable to users, information must be disclosed about why and how a model reaches a certain outcome, and users should have a say in personalization processes. The best practices further reflect additional needs for information about the data collection processes, the composition of datasets, the functionality of a model, the responsibilities for the model or system, and how the model may interact with other models across algorithmic systems.

Table 1 shows the list of best practices as well as the sources on which these practices build. Based on the qualitative analysis in step 1, particular relevance can be ascribed to practices 1, 2, 3, and 8. The table also identifies the different system architecture components relevant to each best practice, based on the Input-Processing-Output architecture model [14]. These components include: "Input" for transparency relating to the data used by the system, "Processing" for transparency relating to system models, and "Output" for presenting the transparent information to the user. We extend this architecture with a "Control" component to represent the control given to the user over the system's personalization behaviour.

We define user control as the possibility for users to interact with the system and adjust elements of it to their respective needs and preferences. It is important that users do not merely feel that they have control, because such a feeling can put them at risk of exploitation. If users think that they have control, they might feel encouraged to share more data [59]. User control is thus of particular ethical sensitivity and significance, as it relates directly to the autonomy of a person. Past research has demonstrated the importance of user control mechanisms in Artificial Intelligence based systems [29].

3.3 Step 3: Checklist

Based on steps 1 and 2, we can now move to define a checklist for system designers to assess the transparency of machine generated personalization. We map each architecture component in Table 1 (namely Input, Processing, Output, Control) to a section in the checklist. Questions for each section are then derived from the best practices uncovered in the previous steps. In this process, we prioritize some best practices that were overwhelmingly affirmed by the literature.

The resulting checklist is given at http://tiny.cc/evxckz. It includes a total of 23 questions, presented in Table 2 (described in the next section). After filling it in, the system designer can download a PDF file with their responses. They can also print an empty copy of the checklist to fill in offline if needed.

To arrive at a comprehensible and user friendly checklist, we omitted some questions. If system designers want to aim for particularly high standards of transparency, they can also answer the following additional questions:

• Does the system explain to the user how the model(s) may interact with other models in algorithmic systems?
• Does someone from the design team provide expert testimony to the users about how the model(s) works (e.g. in a video)?

We note that the checklist is supplied as an assessment tool for system designers, enabling them to identify areas of their system which suffer from a lack of transparency, as well as to point to imbalances between the transparency aspects of a system and the control it gives users over its operation. Ideally, a system designer has implemented transparency so that they can check yes for every question. However, the goal should not be to score high on the checklist but rather to make an honest assessment and decide on priorities and next steps.
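To make the structure of such a checklist concrete, it can be represented as a small data structure in which sections mirror the architecture components (Input, Processing, Output, Control) and each question takes one of the three replies used in our assessment. The sketch below is purely illustrative; the names, fields, and abridged questions are our assumptions, not the format of the online tool.

    # Illustrative sketch of the checklist structure (abridged to one question
    # per section; the actual checklist contains 23 questions).
    from enum import Enum

    class Reply(Enum):
        YES = "yes"
        PARTIAL = "partial"
        NO = "no"

    CHECKLIST = {
        "General":    ["Does the system inform the user about the purpose of personalization?"],
        "Input":      ["Does the system inform the user that data is collected for personalization?"],
        "Processing": ["Does the system explain to the user why they receive a certain personalization?"],
        "Output":     ["Does the system make it clear to the user that they interact with a machine?"],
        "Control":    ["Does the system provide opt-in and opt-out options (e.g. for data collection)?"],
    }

    # A designer's self-assessment is then a mapping from question to reply.
    assessment = {q: Reply.PARTIAL for qs in CHECKLIST.values() for q in qs}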
4 Case Study: Applying the checklist

We performed an initial application of the proposed checklist as a reflective and assessment tool for the following online services that use personalization: Facebook, Netflix, YouTube, Spotify, and Amazon. For each of these destinations, we took a system designer's point of view and asked "how are the transparency elements from the checklist supported on this particular site?". For this assessment we adopted the checklist and examined the above web services using one of the authors' accounts on these sites. Specifically, we checked the information available to registered users on the sites, including the privacy policy, the legal terms and conditions, and other information that is shared with the user and covers any of the checklist elements. We answered each checklist question for each site with a "yes", "no" or "partial" reply.

Table 2 presents our application of the checklist to Facebook. As can be seen from the table, while some transparency elements are well established on this site, other elements are only partially supported or not supported at all, and should be considered for future improvement.
1. [Input, Processing, Output, Control] Disclosing accessible and actionable information, meaning that the user can comprehend and act upon the information. Sources: [61, 4]
2. [Input, Processing] Disclosing relevant and detailed information about data collection and processing; this includes notification of data collected for personalization, information about pre-processing, and possible biases in the dataset. Sources: [4, 19, 6]
3. [Processing] Disclosing relevant and detailed information about the goals of the designer/system, the reasoning of a system, the factors and criteria used (potentially also how they are weighted), as well as the inferences made to reach an algorithmic decision. Sources: [60, 26, 20, 66, 59, 21, 55, 2, 5]
4. [Processing] Providing expert testimony (e.g. by a member of the design team) about how a system works and reaches a certain outcome, including information about the stochastic nature of a model as well as the lab accuracy performance of a model. Sources: [23]
5. [Processing] Disclosing information about how the algorithmic model may affect the user and how the model may interact with other models across algorithmic systems. Sources: [11]
6. [Output] Disclosing that a machine is communicating with the user and not a real person. Sources: [3]
7. [Output] Disclosing information about those responsible for the model (e.g. name of the company or designer). Sources: [26]
8. [Control] Proposing alternative choices for user interaction with the system, e.g. different options for personalization. Sources: [54]
9. [Control] Providing the user with opportunities to give feedback about personalization; providing the user with opportunities to specify their goals, as these goals are expected to drive personalization. Sources: [31]

Table 1. Transparency Best Practices for Machine Generated Personalization.
To perform a preliminary comparison between the different sites, and between the different sections of the checklist for each site, we also compute the percentage of "Yes" replies (i.e. full adherence to the best practices) for each checklist section. Namely, for each checklist question we give a "Yes" reply a value of 1. We then sum these values for each section and divide the sum by the total number of questions in the corresponding section. This computation, while limited and potentially biased due to the subjective filling of the checklist by the research team, may offer comparative information about the different sites, and between the different checklist sections, as to adequate coverage of transparency items. Figure 1 presents the result of this comparison. We further discuss these results in the next section.
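As a sketch, this per-section score can be computed as follows. The code is a minimal illustration under our own assumptions about the data layout ("partial" and "no" replies simply count as zero); it is not the tooling used to produce Figure 1.

    # Minimal sketch of the section scores behind Figure 1: a "yes" counts as 1,
    # "partial" and "no" count as 0, normalized by the section's question count.
    def section_scores(checklist: dict, assessment: dict) -> dict:
        return {
            section: 100.0 * sum(1 for q in questions if assessment[q] == "yes") / len(questions)
            for section, questions in checklist.items()
        }

    # Toy example with two sections of two questions each.
    checklist = {"Input": ["q1", "q2"], "Control": ["q3", "q4"]}
    assessment = {"q1": "yes", "q2": "partial", "q3": "no", "q4": "yes"}
    print(section_scores(checklist, assessment))  # {'Input': 50.0, 'Control': 50.0}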
5 Discussion

5.1 Advantages of the transparency checklist

The major advantage of the transparency checklist is that it helps system designers understand where they are strong on transparency and where improvements are needed. Looking at Figure 1 and the online systems we have examined from the designer perspective, we notice that they primarily focus on realizing transparency in the "Input" category, i.e. with regard to data collection and the handling of user data. They are particularly weak in providing information about why and how models bring about certain personalization ("Processing"). They also lack participatory elements such as offering the user different options of personalization or allowing the user to supply feedback to the system ("Control").

This trend to follow best practices of data or "Input" transparency may be attributable to the rise of data protection laws such as the GDPR. System designers so far pay less attention to transparency about the reasoning and underlying logic of personalization. This is a severe shortcoming, as ethics and philosophical work on transparency in algorithmic systems clearly identifies the need to disclose information about how a certain outcome (personalization) emerged. Making processing-related information transparent does not necessarily mean cracking open and looking inside the system, but rather providing meaningful and understandable information about the goals of personalization as well as the factors that contribute to making a personalized recommendation.

We suspect that transparency about the reasoning of a system will gain relevance in the future. In fact, there is an ongoing debate whether the GDPR even provides a legal right to receive an explanation for algorithmic decision-making [64]. Legal cases in the future will shed light on such questions and, eventually, disclosure of why and how a computer model caused a certain outcome may become customary practice.

Literature also points to the need for user control to fulfill transparency [54]. Users should be provided with different options of personalization to best align with their personal goals and increase their autonomy. Our application of the checklist points to significant shortcomings in the realm of "user control". While both the areas of processing and user control exhibit a lack of transparency, increasing only one of the two areas would miss the point of transparency's comprehensive nature.
General:
• Does the system inform the user about the purpose of personalization? Yes.
• Does the system inform the user who developed the technology and is liable in cases of wrongdoing? Yes.
• Does the system inform the user about their rights under data protection law? Partial (local law rights are not specified).
• Does the system inform the user about possible risks of engaging with the system? No (risks are not specified).

Input:
• Have users given informed consent about the collection, processing, and storage of their data? Partial (default data collection policies are not specified).
• Does the system inform the user about the fact that data is collected for personalization? Yes.
• Does the system inform the user about which data is collected to produce personalized content for them? Yes.
• Does the system inform the user about pre-processing done with the data collected for personalization purposes? No (pre-processing of data is not explained).
• Does the system inform the user if their data is used and shared beyond the goals of personalization? Partial (information about sharing data with partners is given without sufficient details as to the use of this data).

Processing:
• Does the system inform the user about the kind of data that is processed to create a certain personalized item? Partial (the link between data sources and personalization is not clear).
• Does the system explain to the user why they are receiving a certain personalization? Partial (the notion of personalization is generally mentioned but not specified enough).
• Does the system inform the user about the behavioural models underlying the personalization system? No (missing information about the models used).
• Does the system inform the user about possible constraints of the model, such as may result from pre-processing or biases in the dataset? No (missing information about model constraints).

Output:
• Does the system present information to the user in a location where they can notice it and access it easily? Partial (the links to this data are hard to find; visibility and accessibility are lacking).
• Does the system provide information to the user in a comprehensible way, and can they act upon this information? Partial (the settings options are hard to understand and follow).
• Does the system provide the user with information in a clear and simple language that avoids technical terms? Yes.
• Does the system make it clear to the user that they interact with a machine? Yes.

Control:
• Does the system provide the user with the opportunity to specify their goals, which are then used for personalization? No (missing capability).
• Does the system provide the user with different options as to the personalized content they receive? Partial (notification control is good; ads control is poor; data control is very partial).
• Does the system provide the user with opt-in and opt-out options (e.g. for data collection)? Partial (complicated; users have to control each option separately).
• If applicable, can the user adjust the frequency and timing of personalized content? Partial (not supported for some content).
• Does the user have a say in which data or models are used for personalization? Partial (users cannot fully understand the connection between data and personalization).
• Does the system encourage the user to give feedback and express their opinion about the personalization mechanisms used (type, frequency, duration, etc.)? No (feedback is not strongly encouraged).

Table 2. Preliminary Checklist Application to Facebook.
Figure 1. Preliminary checklist application to the online sites: the y-axis shows the percentage of positive replies in each checklist section.
Disclosing the factors weighted in a personalized recommendation AND providing the user with the opportunity to adjust these factors meets the demands of transparency's comprehensive approach, and potentially leads to more meaningful interaction between the user and the system.

As a system designer, having applied the checklist and seen some blind spots, one would now be able to make a deliberate decision about whether to increase transparency and user control in one's own system.

5.2 Transparency from multiple stakeholders' point of view

Another advantage of the checklist is that it can be used as an assessment tool, not just internally for self-assessment but also as an openly accessible evaluation of a system's transparency performance. An online service may commission a "transparency check" by an independent organization to assess the system's trustworthiness. This application of the checklist may increase users' trust in a system. Studies show that transparency can be a competitive advantage for companies [18], and thus companies may have an interest in providing information about the transparency levels of their online services. The desire for independent audits may increase in the future with movements to certify "trustworthy use of Artificial Intelligence" [27].

Beyond commissioned reviews of a system's transparency performance, users and activists may employ the transparency checklist as a means of control and oversight of online services. A comparison of online services' transparency performance, as in Figure 1, exposes the brands behind them and may generate pressure to implement increased transparency. The checklist can thus be used as a means to raise awareness of algorithmic transparency, and may be adopted by like-minded institutions and projects (e.g. CyCat, http://www.cycat.io/).

We should note here that, while an ethics perspective promotes user control and meaningful transparency, it is not certain that end users desire transparency and control. From extensive work in the field of privacy and data protection, we know that users claim privacy to be an important issue for them but rarely take steps to protect their data (the privacy paradox) [28]. Similar dynamics may apply to transparency. Therefore, feedback from end users on how much transparency and what kind of transparency they prioritize in a system would be helpful for system designers. Such feedback could be obtained in further studies or in collaborative designs with the active involvement of end users. Nevertheless, independent of users' personal preferences, users should have the opportunity to take advantage of transparency. Even when users disregard information provided to them, system designers have an ethical responsibility to implement transparency best practices.

Another significant issue concerns the relationship of the information to the user. Transparency is a relational concept. The same information may make something transparent to one group or individual but not to others [61]. It follows that transparency must be configured to the individual user. In fact, we may need personalization technology to fulfill the transparency best practices we have suggested in this paper [38, 50].
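As a hypothetical illustration of such configuration, a system might keep the disclosed information constant while tailoring its form to a coarse user profile. The sketch below is our own example of this idea, not a mechanism proposed in the cited work.

    # Hypothetical sketch: tailoring the form of a disclosure to the user
    # while the underlying information stays the same.
    def render_disclosure(factors: dict, profile: str) -> str:
        if profile == "technical":
            # Full factor weights for users comfortable with numbers.
            return "; ".join(f"{name}: {weight:+.2f}" for name, weight in factors.items())
        # Plain-language summary for everyone else: name only the top factor.
        top = max(factors, key=lambda name: abs(factors[name]))
        return f"Recommended mainly because of your {top.replace('_', ' ')}."

    factors = {"watch_history": 0.7, "topic_match": 0.4}
    print(render_disclosure(factors, "technical"))  # watch_history: +0.70; topic_match: +0.40
    print(render_disclosure(factors, "casual"))     # Recommended mainly because of your watch history.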
5.3 Limitations of transparency

We now briefly point to some limitations of our approach. We note that the idea or "ideal" of transparency itself has limits [11]. For instance, transparency rests on the idea that something can be known [11]. There is no guarantee that we succeed in understanding a model, even when transparency is in place. This can be due to a lack of resources or human capital [42], or a lack of basic digital or technical literacy [20]. Disclosing information can also confuse users and not add to clarity or insight [12, 11]. Transparency may further clash with important ethical principles such as privacy. Full disclosure of input or output data may put users at risk of being re-identified, especially in areas like finance and medicine [40]. Business interests may also be legitimate reasons to reject full disclosure [40].

These limitations of transparency also put a checklist in perspective. Whether transparency is appropriate or warranted depends on the unique use case. It remains an open question how much and what kind of transparency should be provided. These are questions for the personalization community or the respective design teams. Ideally, such questions will be answered in collaboration with end users.
6 Conclusion and Future Work

In this work, we have presented our best practices for transparency in Artificial Intelligence based personalization systems, and we have applied our own transparency checklist to existing systems. While transparency needs may vary depending on the use case, the checklist is valuable as a supporting instrument that guides system designers in embedding transparency into their work. We have demonstrated such a use by a preliminary application of the checklist, from a system designer's perspective, to prominent AI based services that use personalization.

While we propose a first transparency and user control checklist, we recognize that it may be amended in future engagement with researchers and system designers. An important next step is to have an exchange with practitioners in the field and develop a consensus regarding a transparency checklist for the personalization community [7]. This can increase the checklist's likelihood of adoption. We therefore encourage researchers, funding agencies, and journals to provide feedback and recommendations. Furthermore, tangible design actions based on the best practices have to be developed in future work. Tutorials and workshops may invite system designers to apply the checklist and create innovative design solutions that implement transparency in their respective systems.

Another line of research that follows up on this work relates to end users' transparency needs. On the one hand, studies may provide additional insights into how transparency helps or hinders end users in their engagement with a system. On the other hand, further research is required to understand the respective transparency needs of diverse end users. Possibilities to personalize transparency should be explored to ensure that users of different capabilities receive tailored information and user control options.

ACKNOWLEDGEMENTS

This project has received funding from the European Union's Horizon 2020 WeNet research and innovation program under grant agreement No 823783. The authors kindly thank participants of the ACM UMAP 2020 conference session "Demo and Late-Breaking Results" for their comments and questions on an earlier version of this paper. We further thank our colleagues PD Dr. Jessica Heesen and Michael Lohaus for their feedback on our research.

REFERENCES

[1] https://uxdesign.cc/user-experience-vs-business-goals-finding-the-balance-7507ac85b0a9, 2019.
[2] Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems, 2019.
[3] https://www.brookings.edu/research/the-case-for-ai-transparency-requirements/, 2020.
[4] General Data Protection Regulation (Regulation (EU) 2016/679), 27 April 2016.
[5] Behnoush Abdollahi and Olfa Nasraoui, 'Transparency in fair machine learning: the case of explainable recommender systems', in Human and Machine Learning, eds., Jianlong Zhou and Fang Chen, volume 44 of Human-Computer Interaction Series, 21–35, Springer, Cham, Switzerland, (2018).
[6] ACM Council, ACM Code of Ethics and Professional Conduct, 2018.
[7] Balazs Aczel, Barnabas Szaszi, Alexandra Sarafoglou, Zoltan Kekecs, Šimon Kucharskỳ, Daniel Benjamin, Christopher D. Chambers, Agneta Fisher, Andrew Gelman, Morton A. Gernsbacher, et al., 'A consensus-based transparency checklist', Nature Human Behaviour, 1–3, (2019).
[8] Amina Adadi and Mohammed Berrada, 'Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)', IEEE Access, 6, 52138–52160, (2018).
[9] AI Ethics Impact Group, From principles to practice: An interdisciplinary framework to operationalise AI ethics.
[10] Saleema Amershi, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, Eric Horvitz, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, and Paul N. Bennett, 'Guidelines for human-AI interaction', in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), eds., Stephen Brewster, Geraldine Fitzpatrick, Anna Cox, and Vassilis Kostakos, pp. 1–13, New York, NY, USA, (2019). ACM Press.
[11] Mike Ananny and Kate Crawford, 'Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability', New Media & Society, 20(3), 973–989, (2018).
[12] Frank Bannister and Regina Connolly, 'The trouble with transparency: A critical review of openness in e-government', Policy & Internet, 3(1), 158–187, (2011).
[13] Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom, Yale University Press, New Haven and London, 2006.
[14] Sebastian Boell and Dubravka Cecez-Kecmanovic, 'Conceptualizing information systems: From "input-processing-output" devices to sociomaterial apparatuses', (2012).
[15] Ali Borji and Laurent Itti, 'State-of-the-art in visual attention modeling', IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1), 185–207, (2012).
[16] Engin Bozdag, 'Bias in algorithmic filtering and personalization', Ethics and Information Technology, 15(3), 209–227, (2013).
[17] J. Brill, 'Scalable approaches to transparency and accountability in decisionmaking algorithms: remarks at the NYU conference on algorithms and accountability', Federal Trade Commission, 28, (2015).
[18] Ryan W. Buell and MoonSoo Choi, 'Improving customer compatibility with operational transparency', SSRN Electronic Journal, (2019).
[19] Joy Buolamwini and Timnit Gebru, 'Gender shades: Intersectional accuracy disparities in commercial gender classification', Proceedings of Machine Learning Research, 81, 1–15, (2018).
[20] Jenna Burrell, 'How the machine "thinks": Understanding opacity in machine learning algorithms', Big Data & Society, 3(1), 205395171562251, (2016).
[21] Michael Chromnik, Malin Eiband, Sarah Theres Völk, and Daniel Buschek, Dark patterns of explainability, transparency, and user control for intelligent systems, 20 March 2019.
[22] Crain and Nadler, 'Political manipulation and internet advertising infrastructure', Journal of Information Policy, 9, 370, (2019).
[23] E. S. Dahl, 'Appraising black-boxed technology: the positive prospects', Philosophy & Technology, 31(4), 571–591, (2018).
[24] Paul B. de Laat, 'Algorithmic decision-making based on machine learning from big data: Can transparency restore accountability?', Philosophy & Technology, 31(4), 525–541, (2018).
[25] Edward L. Deci and Richard M. Ryan, 'Overview of self-determination theory: An organismic dialectical perspective', Handbook of Self-Determination Research, 3–33, (2002).
[26] Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena, 'AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations', Minds and Machines, 28(4), 689–707, (2018).
[27] Fraunhofer Institute for Intelligent Analysis and Information Systems, Trustworthy use of artificial intelligence.
[28] Nina Gerber, Paul Gerber, and Melanie Volkamer, 'Explaining the privacy paradox: A systematic review of literature investigating privacy attitude and behavior', Computers & Security, 77, 226–261, (2018).
[29] Jaron Harambam, Dimitrios Bountouridis, Mykola Makhortykh, and Joris Van Hoboken, 'Designing for the better by taking users into account: a qualitative evaluation of user control mechanisms in (news) recommender systems', in Proceedings of the 13th ACM Conference on Recommender Systems, pp. 69–77, (2019).
[30] Jason S. Hartford, James R. Wright, and Kevin Leyton-Brown, 'Deep learning for predicting human strategic behavior', in Advances in Neural Information Processing Systems, pp. 2424–2432, (2016).
[31] Jeffrey Heer, 'Agency plus automation: Designing artificial intelligence into interactive systems', Proceedings of the National Academy of Sciences, 116(6), 1844–1850, (2019).
[32] High-Level Expert Group on Artificial Intelligence, Ethics guidelines for trustworthy AI.
[33] Robert Hoffman, Shane Mueller, Gary Klein, and Jordan Litman, Metrics for explainable AI: Challenges and prospects, December 2018.
[34] Joana Hois, Dimitra Theofanou-Fuelbier, and Alischa Janine Junk, 'How to achieve explainability and transparency in human AI interaction', in HCI International 2019 - Posters, ed., Constantine Stephanidis, volume 1033 of Communications in Computer and Information Science, 177–183, Springer, (2019).
[35] Corey Jackson, Gabriel Mugar, Kevin Crowston, and Carsten Østerlund, 'Encouraging work in citizen science: Experiments in goal setting and anchoring', in Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion, pp. 297–300, (2016).
[36] Rob Kitchin, 'Thinking critically about and researching algorithms', Information, Communication & Society, 20(1), 14–29, (2017).
[37] Sucheta V. Kolekar, Radhika M. Pai, and Manohara Pai M. M., 'Rule based adaptive user interface for adaptive e-learning system', Education and Information Technologies, 24(1), 613–641, (2019).
[38] Pigi Kouki, James Schaffer, Jay Pujara, John O'Donovan, and Lise Getoor, 'Personalized explanations for hybrid recommender systems', in Proceedings of the 24th International Conference on Intelligent User Interfaces, eds., Wai-Tat Fu, Shimei Pan, Oliver Brdiczka, Polo Chau, and Gaelle Calvary, pp. 379–390, New York, NY, USA, (2019). ACM.
[39] Marjolein Lanzing, 'The transparent self', Ethics and Information Technology, 18(1), 9–16, (2016).
[40] Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland, and Patrick Vinck, 'Fair, transparent, and accountable algorithmic decision-making processes', Philosophy & Technology, 31(4), 611–627, (2018).
[41] Tim Miller, Explanation in artificial intelligence: Insights from the social sciences.
[42] Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi, 'The ethics of algorithms: Mapping the debate', Big Data & Society, 3(2), 205395171667967, (2016).
[43] Conor Muldoon, Michael J. O'Grady, and Gregory M. P. O'Hare, 'A survey of incentive engineering for crowdsourcing', The Knowledge Engineering Review, 33, (2018).
[44] Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Allen Lane, London, 2016.
[45] Eli Pariser, The Filter Bubble: What the Internet is Hiding From You, Penguin Books, London, 2012.
[46] Frank Pasquale, The Black Box Society: The Secret Algorithms that Control Money and Information, Harvard University Press, Cambridge, 2015.
[47] Ori Plonsky, Reut Apel, Eyal Ert, Moshe Tennenholtz, David Bourgin, Joshua C. Peterson, Daniel Reichman, Thomas L. Griffiths, Stuart J. Russell, Evan C. Carter, et al., 'Predicting human decisions with behavioral theories and machine learning', arXiv preprint arXiv:1904.06866, (2019).
[48] Alun Preece, 'Asking "why" in AI: Explainability of intelligent systems - perspectives and challenges', Intelligent Systems in Accounting, Finance and Management, 25(2), 63–72, (2018).
[49] Avi Rosenfeld and Ariella Richardson, 'Explainability in human–agent systems', Autonomous Agents and Multi-Agent Systems, 33(6), 673–705, (2019).
[50] Johanes Schneider and Joshua Handali, 'Personalized explanation in machine learning: A conceptualization', arXiv preprint arXiv:1901.00770, (2019).
[51] Nick Seaver, 'Knowing algorithms', in DigitalSTS, eds., Janet Vertesi and David Ribes, Princeton University Press, Princeton, New Jersey, (2019).
[52] Avi Segal, Kobi Gal, Ece Kamar, Eric Horvitz, and Grant Miller, 'Optimizing interventions via offline policy evaluation: studies in citizen science', in Thirty-Second AAAI Conference on Artificial Intelligence, (2018).
[53] Avi Segal, Ece Kamar, Eric Horvitz, Alex Bowyer, Grant Miller, et al., 'Intervention strategies for increasing engagement in crowdsourcing: Platform, predictions, and experiments', (2016).
[54] Judith Simon, 'The entanglement of trust and knowledge on the web', Ethics and Information Technology, 12(4), 343–355, (2010).
[55] Frode Sørmo, Jörg Cassens, and Agnar Aamodt, 'Explanation in case-based reasoning–perspectives and goals', Artificial Intelligence Review, 24(2), 109–143, (2005).
[56] Shaun B. Spencer, 'The problem of online manipulation', SSRN Electronic Journal, (2019).
[57] Christopher Steiner, Automate This: How Algorithms Came to Rule Our World, Portfolio/Penguin, New York, N.Y., 2013.
[58] Daniel Susser, Beate Roessler, and Helen F. Nissenbaum, 'Online manipulation: Hidden influences in a digital world', SSRN Electronic Journal, (2018).
[59] Omer Tene and Jules Polonetsky, 'Big data for all: Privacy and user control in the age of analytics', Northwestern Journal of Technology and Intellectual Property, 239, (2013).
[60] Matteo Turilli and Luciano Floridi, 'The ethics of information transparency', Ethics and Information Technology, 11(2), 105–112, (2009).
[61] Orlin Vakarelov and Kenneth Rogerson, 'The transparency game: Government information, access, and actionability', Philosophy & Technology, 23(1–2), 193, (2019).
[62] Rob van Roy, Sebastian Deterding, and Bieke Zaman, 'Collecting Pokémon or receiving rewards? How people functionalise badges in gamified online learning environments in the wild', International Journal of Human-Computer Studies, 127, 62–80, (2019).
[63] Giuseppe A. Veltri, 'The political bubble', Longitude, 93, 54–59, (2019).
[64] Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, 'Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation', SSRN Electronic Journal, (2016).
[65] Stav Yanovsky, Nicholas Hoernle, Omer Lev, and Kobi Gal, 'One size does not fit all: Badge behavior in Q&A sites', in Proceedings of the 27th ACM Conference on User Modeling, Adaptation and Personalization, pp. 113–120, (2019).
[66] John Zerilli, Alistair Knott, James Maclaurin, and Colin Gavaghan, 'Transparency in algorithmic and human decision-making: Is there a double standard?', Philosophy & Technology, 32(4), 661–683, (2019).
[67] Jianlong Zhou and Fang Chen (eds.), Human and Machine Learning: Visible, Explainable, Trustworthy and Transparent, Human-Computer Interaction Series, Springer, Cham, Switzerland, 2018.
[68] James Zou and Londa Schiebinger, 'AI can be sexist and racist - it's time to make it fair', Nature, 559(7714), 324–326, (2018).