Inconclusive evidence
When algorithms draw conclusions from the data they process using inferential statistics
and/or machine learning techniques, they produce probable yet inevitably uncertain
knowledge. Statistical learning theory and computational learning theory are both
concerned with the characterisation and quantification of this uncertainty. Statistical
methods can identify significant correlations, but correlations are typically insufficient to
demonstrate causality, and may therefore be insufficient grounds for acting on such a
connection. The concept of an ‘actionable insight’ captures both the uncertainty inherent in
statistical correlations and the normativity of choosing to act upon them.
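As a minimal illustration of this point, consider the Python sketch below (entirely hypothetical data and variable names): two causally unrelated series that merely share a trend can yield a strong, statistically ‘significant’ correlation, which disappears once the trend is removed.

```python
# Hypothetical illustration: a 'significant' correlation with no causal link.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t = np.arange(100)

# Two independent series that merely share an upward trend.
sales = 2.0 * t + rng.normal(0, 10, size=t.size)
temperature = 0.5 * t + rng.normal(0, 5, size=t.size)

r, p = stats.pearsonr(sales, temperature)
print(f"r = {r:.2f}, p = {p:.2e}")  # strong, 'significant' correlation

# Differencing removes the shared trend; the apparent relationship vanishes.
r_d, p_d = stats.pearsonr(np.diff(sales), np.diff(temperature))
print(f"detrended r = {r_d:.2f}, p = {p_d:.2f}")
```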
Inscrutable evidence
When data are used as (or processed to produce) evidence for a conclusion, it is
reasonable to expect that the connection between the data and the conclusion should be
intelligible and open to scrutiny. Given the complexity and scale of many AI systems,
intelligibility and scrutiny cannot be taken for granted. Both a lack of access to the
underlying datasets and the inherent difficulty of mapping how the many data points and
features an AI system considers contribute to a specific conclusion or output impose
practical as well as principled limitations.
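To make this scrutiny gap concrete, the sketch below (synthetic data and names, not drawn from the original study) contrasts a linear model, whose per-feature weights can be read off directly, with an ensemble whose feature contributions can only be estimated post hoc, for instance via scikit-learn's permutation importance.

```python
# Sketch of the scrutiny gap on synthetic data: a linear model exposes
# per-feature weights directly; an ensemble must be probed indirectly.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] * X[:, 2] > 0).astype(int)  # includes an interaction

linear = LogisticRegression().fit(X, y)
print("linear coefficients:", linear.coef_.round(2))  # directly inspectable

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# No coefficients to read off; importance is estimated post hoc, and it
# assigns a single score per feature even where features interact.
imp = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
print("permutation importances:", imp.importances_mean.round(2))
```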
Misguided evidence
Algorithms process data and are therefore subject to a limitation shared by all forms of data
processing, namely that the quality of the output can never exceed the quality of the input.
The informal ‘garbage in, garbage out’ principle captures this limitation and its significance:
conclusions can only be as reliable (but also only as neutral) as the data they are based on.
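A minimal sketch of the principle, on entirely synthetic data: when historical labels encode a bias against one group, a model fitted to those labels reproduces that bias, however sound the fitting procedure itself.

```python
# 'Garbage in, garbage out' sketch on synthetic data: biased training
# labels yield a biased model, regardless of the learning algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, size=n)   # 0 = group A, 1 = group B
skill = rng.normal(size=n)           # the genuinely relevant feature

# Historical labels: same skill threshold, but group B was marked down.
label = (skill - 0.8 * group + rng.normal(0, 0.3, size=n) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# Identical skill, different group -> different predicted outcome.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1].round(2))
```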
Unfair outcomes
Algorithmically driven actions can be scrutinised from a variety of ethical perspectives,
criteria, and principles. The normative acceptability of an action and its effects is observer-
dependent and can be assessed independently of the epistemic quality of the evidence that
informed it. An action can be found discriminatory, for example, solely from its effect on a
protected class of people, even if it rests on conclusive, scrutable and well-founded evidence.
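One way such effect-based assessment is operationalised in practice is the ‘four-fifths rule’ for disparate impact used in US employment contexts; the sketch below (hypothetical counts) flags an outcome by comparing selection rates across groups, without inspecting the decision procedure at all.

```python
# Disparate-impact check (the 'four-fifths rule'): assesses an outcome
# purely by its effect on groups, ignoring how decisions were reached.
# The counts below are hypothetical.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rate_a = selection_rate(selected=120, applicants=200)  # group A: 0.60
rate_b = selection_rate(selected=30, applicants=100)   # group B: 0.30

ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"disparate impact ratio = {ratio:.2f}")
# Below the conventional 0.8 threshold -> prima facie discriminatory
# effect, whatever the epistemic quality of the underlying evidence.
print("flag:", ratio < 0.8)
```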
Transformative effects
The impact of AI systems cannot always be attributed to epistemic or ethical failures. Much
of their impact can initially appear ethically neutral in the absence of obvious harm. A
separate set of impacts, which can be referred to as transformative effects, concerns subtle
shifts in how the world is conceptualised and organised.
Traceability
AI systems often involve multiple agents, including human developers and users,
manufacturers and deploying organisations, and the systems and models themselves. AI
systems can also interact directly, forming multi-agent networks whose behaviours evade
the oversight and comprehension of their human counterparts owing to their
speed, scale, and complexity. As suggested in the original landscaping study by Mittelstadt
et al., “algorithms are software-artefacts used in data-processing, and as such inherit the
ethical challenges associated with the design and availability of new technologies and those
associated with the manipulation of large volumes of personal and other data.” All of these
factors mean it is difficult to detect harms, find their cause, and assign blame when AI
systems behave in unexpected ways. Challenges arising through any of the five types of
concern discussed above can thus raise a related challenge of traceability, wherein both the
cause of, and the responsibility for, harmful behaviour must be established.
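As an illustrative sketch only (the record fields and names below are assumptions, not a prescribed scheme), one minimal technical response to the traceability challenge is to attach a provenance record to every automated decision, so that cause and responsibility can later be reconstructed.

```python
# Illustrative provenance sketch: attach a traceable record to each
# automated decision so its origins can later be reconstructed.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    timestamp: str      # when the decision was made
    model_version: str  # which model/agent produced it
    input_hash: str     # fingerprint of the exact input used
    output: str         # the decision itself

def log_decision(model_version: str, inputs: dict, output: str) -> DecisionRecord:
    digest = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
    return DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_hash=digest,
        output=output,
    )

# Hypothetical usage with made-up model name and inputs.
record = log_decision("credit-model-v2.1", {"income": 40000, "age": 31}, "deny")
print(json.dumps(asdict(record), indent=2))
```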