Volume 36 Number 2 March 2019
Contents
Special Issue: Thinking with Algorithms: Cognition and
Computation in the Work of N. Katherine Hayles
Introduction: Thinking with Algorithms: Cognition and Computation in the Work of N. Katherine Hayles 3
Louise Amoore
Algorithmic Personalization as a Mode of Individuation 17
Celia Lury and Sophie Day
How Algorithms Interact: Goffman’s ‘Interaction Order’ in Automated Trading 39
Donald MacKenzie
On the Politics of Chrono-Design: Capture, Time and the Interface 61
Michael Dieter and David Gauthier
Critical Computation: Digital Automata and General Artificial Thinking 89
Luciana Parisi
The Human Is Dead – Long Live the Algorithm! Human-Algorithmic Ensembles and Liberal Subjectivity 123
Tobias Matzner
Interview with N. Katherine Hayles 145
Louise Amoore and Volha Piotukh
Visit http://journals.sagepub.com/home/tcs
Free access to tables of contents and abstracts. Site-wide access to the full text for members of
subscribing institutions
Visit http://theoryculturesociety.org/
For more information about Theory, Culture & Society, including additional material about this issue
plus many other extras
Editor-in-Chief: Mike Featherstone
Managing Editor: Couze Venn
Standard Issues Editors: David Beer, Roy Boyne
Special Issues Editors: Scott M. Lash, John S. Phillips
Global Public Life Editors: Ryan Bishop, John S. Phillips
Reviews Editor: Couze Venn
Social Media Editor: David Beer
E-Specials Editor: Sunil Manghani
Editorial Assistant: Susan Manthorpe
Editorial Design: Samantha Schäfer

Editorial Board
David Beer (University of York), Ryan Bishop (Winchester School of Art, University of Southampton), Lisa Blackman (Goldsmiths, University of London), Josef Bleicher, Roy Boyne (University of Durham), Roger Burrows (Goldsmiths, University of London), Norman K. Denzin (University of Illinois, Urbana-Champaign), Stuart Elden (University of Warwick), Mike Featherstone (Goldsmiths, University of London), Nicholas Gane (University of Warwick), Rosalind Gill (Kings College, University of London), Thomas M. Kemple (University of British Columbia), Scott Lash (Goldsmiths, University of London), Adrian Mackenzie (Lancaster University), John Phillips (National University of Singapore), Rob Shields (University of Alberta), Tiziana Terranova (Università degli Studi di Napoli L’Orientale), Couze Venn (Goldsmiths, University of London).

Associate Editors
Ien Ang (University of Western Sydney), Antonio A. Arantes (Campinas University), Margaret Archer (University of Warwick), John Armitage (Winchester School of Art, University of Southampton), Evgenia Blagoeva (New Bulgaria University), Rosi Braidotti (State University of Utrecht), Rosalind Brunt (Sheffield Hallam University), David Chaney (University of Durham), Ira Cohen (Rutgers University), Klaus Eder (Humboldt University, Berlin/European University Institute, Florence), Nancy Fraser (New School University), Jonathan Friedman (Lund University), Andrew Gamble (University of Sheffield), David Held (London School of Economics), Axel Honneth (J.W. Goethe University Frankfurt), Huimin Jin (Chinese Academy of Social Sciences), Douglas Kellner (UCLA), Richard Kilminster (University of Leeds), Michele Lamont (Harvard University), Jorge Larrain (University of Birmingham), Donald Levine (University of Chicago), Sunil Manghani (University of Southampton), George Marcus (University of California, Irvine), Stephen Mennell (University College, Dublin), Carlo Mongardini (University of Rome, La Sapienza), Makio Morikawa (Doshisha University, Kyoto), John O’Neill (York University, Ontario), Chris Rojek (Brunel University), Richard Shusterman (Temple/Florida Atlantic University), Barry Smart (University of Portsmouth), Carol Smart (University of Leeds), Georg Stauth (Bielefeld University), Nico Stehr (University of British Columbia), Alan Tomlinson (University of Brighton), John Tomlinson (Nottingham Trent University), Shuichi Wada (Waseda University, Tokyo), Rod Watson (University of Manchester), Elizabeth Wilson (University of North London), Shunya Yoshimi (University of Tokyo).

Articles appearing in Theory, Culture & Society are abstracted in Academic Search Premier, Academic Search Elite, Applied Social Sciences Index & Abstracts (ASSIA), Communication Abstracts, e-Psyche, Human Resources Abstracts, International Political Science Abstracts, Worldwide Political Science Abstracts, Sociology of Education Abstracts, Sociofile, Sociological Abstracts, Current Legal Sociology, Social Services Abstracts and International Institute for the Sociology of Law – IISL – Centre of Documentation Database, and indexed in Abi/inform, Alternative Press Index, Anthropological Index Online, Business Source Corporate, Communication & Mass Media Index, Communication & Mass Media Complete, Current Contents / Social and Behavioural Sciences, Family Index Database, FRANCIS database, Health Source: Nursing/Academic Edition, IBZ, International Bibliography of the Social Sciences, International Bibliography of Book Reviews of Scholarly Literature in the Humanities and Social Sciences, International Bibliography of Periodical Literature in the Humanities and Social Sciences, Left Index, MLA International Bibliography, The Philosopher’s Index, Research Alert, Science Direct Navigator, Social SciSearch, Social Sciences Citation Index, SocINFO and Vocational Search.

Editorial correspondence should be addressed to: Theory, Culture & Society, Department of Sociology, Goldsmiths, University of London, London SE14 6NW, UK. Email: [email protected]

Theory, Culture & Society (ISSN 0263-2764 [print]; ISSN 1460-3616 [online]) is published by SAGE Publications Ltd (Los Angeles, London, New Delhi, Singapore, Washington DC and Melbourne), seven times a year in January, March, May, July, September, November and a double issue ‘annual review’ in December.

Annual subscription (2019, 8 issues): combined institutional rate (print and electronic) £1268/$2345. Electronic only and print only subscriptions are available for institutions at a discounted rate. Note VAT is applicable at the appropriate local rate. Visit sagepublishing.com for more details. To activate your subscription (institutions only) visit sagepub.com. Abstracts, tables of contents and contents alerts are available on this site free of charge for all. Student discounts, single issue rates and advertising details are available from SAGE Publications Ltd, 1 Oliver’s Yard, 55 City Road, London EC1Y 1SP, UK, tel. +44 (0)20 7324 8500, email [email protected] and, in North America, SAGE Publications Inc, PO Box 5096, Thousand Oaks, CA 91320, USA.

Periodicals postage paid at Rahway, NJ. POSTMASTER, send address corrections to Theory, Culture & Society, c/o Mercury Airfreight International Ltd, 365 Blair Road, Avenel, NJ 07001, USA. Printed in Great Britain at Henry Ling Limited.

© 2019 Theory, Culture & Society Ltd.

UK: Apart from fair dealing for the purposes of research or private study, or criticism or review, and only as permitted under the Copyright, Designs and Patents Acts 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the Publishers, or in the case of reprographic reproduction, in accordance with the terms of licences issued by the Copyright Licensing Agency (www.cla.co.uk/).

US: Authorization to photocopy journal material may be obtained directly from SAGE Publications or through a licence from the Copyright Clearance Center, Inc. (www.copyright.com/). Inquiries concerning reproduction outside those terms should be sent to the Publishers at the above mentioned address.

Notes for Contributors

Theory, Culture & Society is edited and refereed electronically. Authors and referees must submit manuscripts or obtain articles to review via the TCS Manuscript Central website, located at http://mc.manuscriptcentral.com/tcs. If you are submitting a paper to TCS for the first time, you will need to create an account with a secure user name and password.

Preparation of Manuscripts
1. The article should be uploaded as the ‘main document’, and begin with the title, a 150 word abstract and 3-7 keywords. A brief biographical note (max. 100 words) should be included in a separate document and uploaded as the ‘biographical note’. The latter document will not be sent to referees. The title, abstract and keywords should also be entered separately into the system when uploading your manuscript.
2. Titles should clearly identify the subject of your article. This is an important strategy to increase the chance of articles coming up in Google searches and therefore improving the likelihood of your article being read and cited.
3. The keywords must be selected from the keyword list available on the TCS Manuscript Central website. The keywords are essential in helping editors select referees from our referees’ database. If you have difficulty matching keywords, or have suggestions on ones to be added, please let us know. Choosing the most relevant keywords is again an important strategy to increase the chance of articles coming up in Google searches and therefore improving the likelihood of your article being read and cited.
4. To protect anonymity, please make sure that you do not include your name anywhere within the main document (e.g. as a running head or at the end). When referring to your own work, you should either replace your name with ‘Author’ (both in the body of the text and in the list of references at the end) or refer to yourself in the third person (e.g. “as Featherstone (2005) has demonstrated”, rather than “as I have previously demonstrated (2005)”).
5. The number and length of notes should be strictly limited. They should be numbered serially and included at the end of the text prior to the references section. We do not accept footnotes.
6. Images are encouraged and should be as clear and as high a resolution as possible; preferably at 300 dpi.
7. Please make sure you insert page numbers into your manuscript.

Format of References in the Text
To ensure that our referencing system is consistent with other Sage journals, TCS has now switched to the Harvard system (with the exception that we ask for full first names in the reference list).

SAGE Harvard
1. General
1. Initials should be used without spaces or full points.
2. Up to three authors may be listed. If more are provided, then list the first three authors and represent the rest by et al. Fewer authors followed by et al. is also acceptable.
2. Text citations
1. All references in the text and notes must be specified by the authors’ last names and date of publication together with page numbers if given.
2. Do not use ibid., op. cit., infra., supra. Instead, show the subsequent citation of the same source in the same way as the first.
3. Where et al. is used in textual citations, this should always be upright, not italic.
4. Check that all periodical data are included – volume, issue and page numbers, publisher, place of publication, etc.
5. Journal titles should not be abbreviated in SAGE Harvard journal references.
3. Reference list (order entries as follows):
Brown, John (2003)
Brown, Trevor and Yates, Paul (2003)
Brown, Wendy (2002)
Brown, Wendy (2003a)
Brown, Wendy (2003b)
Brown, Wendy and Jones, Michael (2003)
Brown, Wendy and Peters, Philip (2003)
Brown, Wendy, Hughes, John and Kent, Tom (2003a)
Brown, Wendy, Kent, Tom and Lewis, Steven (2003b)
4. Reference styles
Book: Featherstone, Mike (2007) Consumer Culture and Postmodernism (Second Edition). London: Sage.
Book chapter: Friedman, Jonathan (1988) Global crises, the struggle for cultural identity and intellectual porkbarrelling. In: Werbner, Pnina and Modood, Tariq (eds) Debating Cultural Hybridity. London: Zed Books.
Journal article: Pieterse, Jan Nederveen (1997) Multiculturalism and museums: discourse and others in the age of globalization. Theory, Culture & Society 14(4): 23-46.
Journal article published ahead of print: Beer, David and Burrows, Roger (2013) Popular culture, digital archives and the new social life of data. Theory, Culture & Society. Epub ahead of print 16 April 2013. DOI: 10.1177/0263276413476542.
Website: National Center for Professional Certification (2002) Factors affecting organizational climate and retention. Available at: www.cwla.org./programmes/triechmann/2002fbwfiles (accessed 10 July 2010).
Thesis/dissertation: Clark, James (2001) Referencing style for journals. PhD Thesis, University of Leicester, UK.

Book Reviews
We now concentrate on publishing book reviews online rather than in the journal pages of Theory, Culture & Society and Body & Society. This will enable us to publish a much greater quantity of reviews much more quickly. A new team of website review editors will regularly commission book reviews. At the same time we are always interested in extending our panel of reviewers. Should you wish to review books for http://theoryculturesociety.org you should write to the website review editors with your biographical details and interests along with information on the proposed book. We are also interested in reviews of books published outside the English-speaking world. For our full Website Review Guidelines, go to: http://theoryculturesociety.org/website-review-guidelines/

If you are a book author or publisher and would like us to consider reviewing one of your books, we welcome email alerts and catalogues of recent and forthcoming titles. Once we have arranged for an author to write a review of a particular book, we will request that the publisher send the book direct to the reviewer. We also welcome hard copies of books for our consideration. Please contact us at: [email protected]. Please note that we do not accept unsolicited book reviews.
Special Issue: Thinking with Algorithms: Cognition and Computation in the Work of N. Katherine Hayles

Theory, Culture & Society
2019, Vol. 36(2) 3–16
© The Author(s) 2019
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/0263276418818884
journals.sagepub.com/home/tcs

Introduction: Thinking with Algorithms: Cognition and Computation in the Work of N. Katherine Hayles

Louise Amoore
University of Durham
Abstract
In our contemporary moment, when machine learning algorithms are reshaping many
aspects of society, the work of N. Katherine Hayles stands as a powerful corpus for
understanding what is at stake in a new regime of computation. A renowned literary
theorist whose work bridges the humanities and sciences, Hayles has,
among her many works, detailed ways to think about embodiment in an age
of virtuality (How We Became Posthuman, 1999), how code as performative
practice is located (My Mother Was a Computer, 2005), and the reciprocal
relations between human bodies and technics (How We Think, 2012). This
special issue follows the 2017 publication of her
book Unthought: The Power of the Cognitive Nonconscious, in which Hayles traces the
nonconscious cognition of biological life-forms and computational media. The articles
in the special issue respond in different ways to Hayles’ oeuvre, mapping the specific
contours of computational regimes and developing some of the ‘inflection points’ she
advocates in the deep engagement with technical systems.
Keywords
algorithms, cognition, computation, ethics, technology
The microcomputer [. . .] allows mathematics to be practiced as an
experimental science. It has also affected how people have imaged
themselves and their relation to the world. (Hayles, 1991: 6)
Corresponding author: Louise Amoore. Email: [email protected]
Extra material: http://theoryculturesociety.org/
In 2016, Google’s Natural Language Understanding research group
began to train a deep neural network algorithm on a corpus of data
comprising the literary works of 1000 deceased authors, from William
Shakespeare to Daniel Defoe and from Virginia Woolf to Herman
Melville.1 The machine learning algorithm was reported by the scientists
to have discovered the style of particular authors from their body of
work, so that ‘given a sentence from a book and knowledge of the
author’s style and personality’, the model could also ‘predict what the
author is most likely to write next’.2 In fact, the algorithm had done what
many neural network machine learning algorithms do: it had clustered
the literature according to the patterns in the text as data, and then
defined these clusters in terms of the attributes of the author’s body of
work. Once recognized and learned, these attributes became a means to
identify the future attributes of as yet unknown texts. This apparently
frivolous and innocuous experiment actually has immense significance
for how people have imagined themselves and their relation to the
world amid new computational forms. Unlike deductive forms of rea-
soning, where a rule or hypothesis is formulated and tested empirically,
these algorithms are inductively generating potential attributes from the
patterns within a corpus of data. Not only of epistemological signifi-
cance, such processes of machine learning algorithms identifying clusters
from data, generating attributes, and finding those attributes in the pat-
terns of other people are also shaping relations to the world, from
Cambridge Analytica’s clustering of the attributes of voters to
SKYNET’s attributes of terrorist threat (Grothoff and Porup, 2016).
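A minimal sketch may help to fix ideas about this inductive procedure. The snippet below (Python with scikit-learn; the toy corpus, the character n-gram features and the cluster count are my illustrative assumptions, not Google’s actual pipeline) clusters texts by surface stylistic patterns and then uses the learned clusters to place an unseen sentence:

```python
# A sketch of inductive clustering: structure is generated from patterns
# in the text as data, with no prior rule or hypothesis to test. The
# corpus and features are toy stand-ins, not Google's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

corpus = [
    "To be, or not to be, that is the question.",            # Shakespeare
    "Whether 'tis nobler in the mind to suffer.",             # Shakespeare
    "Call me Ishmael. Some years ago, never mind how long.",  # Melville
    "Mrs Dalloway said she would buy the flowers herself.",   # Woolf
]

# Character n-grams are a common (if crude) proxy for authorial style.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
X = vectorizer.fit_transform(corpus)

# The algorithm induces clusters from the data alone.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Attributes learned from known texts become a means to identify the
# attributes of as yet unknown texts.
unseen = vectorizer.transform(["Or to take arms against a sea of troubles"])
print(model.predict(unseen))  # cluster id of the nearest learned 'style'
```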
The Google Natural Language experiments are but one example of
how what N. Katherine Hayles has termed ‘computational regimes’ are
turning to literature3 – and indeed other cultural media such as music
and film – precisely in order to supply deep learning algorithms with a
corpus of data from which they can refine their cognitive models of the
world. As the Google computer scientists explain the motivation for their
1000 authors project, it is ‘an early step towards better understanding
intent’.4 The algorithmic practices that are pressing upon the politics and
ethics of our times are geared toward a particular kind of question: given
the attributes of a cluster within a corpus of data, what is the incipient
future intent? This could be future voting intentions, the intent to commit
fraud, the intent to buy life insurance, or the intent to stream a specific
video. At the level of the technique, what matters is not so much the
content of the cluster as the inferred future of the grouped entities. So,
teaching an algorithm to differentiate styles and sensibilities within lit-
erature – for example, how one author’s use of the line ‘who’s there?’
means something different to another author’s use of the same line in
another text – is actually also about teaching algorithms to make finite
distinctions and to infer meanings in the future. Contemporary algo-
rithms that are used across domains, from credit card fraud to voter
preference to counter-terrorism, are being trained to understand future
intent through the attributes of style and genre. In short, the conjoined
histories of reading and learning in science and literature are finding new
forms with the machine learning algorithms of the 21st century.
At this contemporary moment, when it might appear that science and
literature, and humans and machines, are coevolving in novel and often
troubling ways, the work of N. Katherine Hayles stands as compelling
testament that these histories have never been separable. A literary the-
orist with a background in science, Hayles has consistently and imagina-
tively insisted upon a ‘technogenesis’ of ‘reciprocal causality between
human bodies and technics’ (2012: 123). With technogenesis, humans
and technologies coevolve so that the ‘interactions of language with
code’ bring about cognitive and neural changes in humans (2012: 10).
Though I suspect that Hayles would not wish it to be said that she had
anticipated, via her deep theorization of human and machine cognition,
the unfolding computational phenomena of our times, I also note that
this sense of extraordinary foresight is something which is rather com-
monly said of the men who theorize computational logics and societal
transformation.5 Similarly, though I do not think it likely that Hayles
would wish to hear of a ‘Haylesian’ approach to theorizing contempor-
ary computation, on all of the evidence this would be warranted and,
again, it is commonplace to hear of the ‘Latourian’. And so, I consider it
to be of real significance that, 27 years ago, in the introduction to her
edited work on literature and the science of chaos theory, Hayles foresees
the elements of an algorithmic computational regime that had not yet
fully emerged. When she notes that the computer allows mathematics ‘to
be practiced as an experimental science’, Hayles opens the way to under-
standing entangled and collaborative human and machine inferences that
feel their way towards a solution (1991: 6). In this passage she describes
someone sitting down at a computer ‘to model a dynamical non-linear
system’ where she ‘need not proceed through the traditional mathemat-
ical method of theorem-proof’ but can ‘set up a recursive program’ (p. 6).
‘With her own responses in a feedback loop with the computer, she
develops an intuitive feeling for how the display and parameters interact’,
writes Hayles, describing the embodied interactions between the human
neural system and the system of nonlinear differential equations (p. 6).
It is precisely such insight into the recursivity of human-computer relations,
and the modifications this implies for traditional deductive methods, that
is of crucial significance to understanding the computation of our times.
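Hayles does not name a particular system, but the scene she describes can be sketched with the logistic map, a canonical nonlinear recursion; the choice of system and parameters below is a hypothetical illustration, not her example:

```python
# A sketch of mathematics 'practiced as an experimental science': a
# recursive program, explored by adjusting a parameter and observing the
# output rather than by the traditional method of theorem-proof.
def logistic_orbit(r, x0=0.5, warmup=500, keep=5):
    """Iterate x -> r*x*(1-x) and return its long-run behaviour."""
    x = x0
    for _ in range(warmup):   # let transients die away
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):     # sample the attractor
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

# With her own responses in a feedback loop with the computer, the
# experimenter sweeps the parameter and develops an intuitive feeling
# for the transition from fixed point, to oscillation, to chaos.
for r in (2.8, 3.2, 3.9):
    print(r, logistic_orbit(r))
```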
In the 27 years since Katherine Hayles wrote those lines, she has
described a historical ‘arc’ across three of her texts – How We
Became Posthuman (1999), Writing Machines (2002), and My Mother
Was a Computer (2005) – from the cybernetics of the mid-20th century
to the present ‘versions of the posthuman as they continue to evolve in
conjunction with intelligent machines’ (2005: 2).6 Yet, preceding this arc,
the 1991 work does seem to anticipate the experimental and intuitive
practices of the 21st-century’s machine learning algorithms, where
designers sit before a model they have trained on a corpus of data.
Today, the training of a convolutional neural network for image recog-
nition, for example, involves many millions of parameters, certainly
exceeding what the designer can meaningfully observe (Krizhevsky
et al., 2012). As Luciana Parisi captures it in her essay in this special
issue, we are witnessing ‘a new mode of algorithmic processing’ that
‘learns from data without following explicit programming’ and ‘without
abiding by the formal language of mathematics’. Hayles’ depiction of
iterative and co-evolving interactions – observing the output and adjust-
ing the probability weights in the model – nonetheless signals in 1991 the
sense-making and meaning-making collaborations between human and
machine that will dramatically shape the world. It is these questions of
thought and cognition, and how operations of thinking and cognition are
distributed across human and technical agencies, that Hayles turns to in
her two most recent books. In How We Think (2012), Hayles investigates
the multiple forms of reading involved in engaging digital and print
media, proposing that ‘machine reading might be a first pass toward
making visible patterns that human reading could then interpret’, open-
ing new possibilities for cognition and for critical thought (2012: 29). In
Unthought (2017), Hayles extends her concept of cognition, challenging
the human/nonhuman binary and offering ‘another distinction: cognizers
versus noncognizers’ in which ‘on one side are humans and all other
biological life forms, as well as many technical systems’ and ‘on the
other, material processes and inanimate objects’ (2017: 30). It is this
recognition of the cognitive power of technical systems, and specifically
their capacity to exercise choice and make decisions, that has afforded
Hayles’ work such significance to contemporary debates. Yet, the cogni-
tive power of technologies should be understood in its longer genesis
across the analogue and digital forms of computation Hayles brings to
our attention. Returning to the 1000 authors project, perhaps
a Haylesian reading would urge caution with the idea that forms of
machine reading are subsuming the human forms of deep reading of
these authors. The human and the algorithm are co-evolving, yielding
new modes of reading and cognition that do not readily map onto con-
ventional notions of the human and the machine.
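The scale reported by Krizhevsky et al. (2012) can be checked with back-of-envelope arithmetic. The sketch below uses simplified AlexNet-style layer shapes (my assumption; the published network has further layers) to show how the parameter count runs beyond anything a designer could meaningfully observe:

```python
# Back-of-envelope arithmetic for the parameter count of a convolutional
# network. Layer shapes are simplified from AlexNet-style architectures.
def conv_params(in_ch, out_ch, k):
    return out_ch * (in_ch * k * k + 1)   # weights plus one bias per filter

def dense_params(n_in, n_out):
    return n_out * (n_in + 1)

total = (
    conv_params(3, 96, 11)       # first convolutional layer
    + conv_params(96, 256, 5)
    + conv_params(256, 384, 3)
    + dense_params(9216, 4096)   # the dense layers dominate the count
    + dense_params(4096, 4096)
    + dense_params(4096, 1000)
)
print(f"{total:,} parameters")   # roughly 60 million
```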
Thinking with Algorithms
This special issue of Theory, Culture & Society focuses on the literary
theorist N. Katherine Hayles’ oeuvre at the intersection of literature and
computational science and technology. Each of the invited papers was
presented at a workshop at Durham University in 2015, held with
Hayles, and focused on her work in the context of contemporary debates
on algorithms in society. The series of articles respond in different ways
to the provocations of Hayles’ work – engaging, challenging and extend-
ing the possibilities of her texts. In a direct sense, the articles signal the
multiple manifestations of the computational regimes Hayles has
mapped, from the algorithmic interactions of high frequency trading
(MacKenzie) to the personalization of recommendation algorithms
(Lury and Day). The multiple forms of Hayles’ non-conscious cognition
appear to us in different ways across the essays, with the apparently non-
conscious human propensities that are considered not fully knowable to
us becoming amenable to the differently non-conscious impulses of
technical cognizers that generate clusters, sentiments and attributes.
The collection is also intended to draw attention to what I see as a
form of neglect in many contemporary accounts that have been caught
up with the ‘digital’, as for example in some variants of digital geogra-
phies, software studies, and data sociologies. The work of N. Katherine
Hayles, over many decades, has opened the world of machinic and
human reading and writing to thought and to literary practice. This is
part of a longstanding body of work in the humanities, as well as in
feminist and posthuman historical scholarship on science and technology
(Haraway, 1991; Braidotti, 2013; Daston, 1988), that has not always
received sufficient attention amid the contemporary desire to understand
the digital, the virtual, or the cyber. To think with algorithms in these
terms would also involve a thought that imagines human bodies as
always already caught up in the algorithms thought to be governing
them. It would mean that many of the questions animating current
ethico-political debate on algorithmic accountability or automation anx-
iety would be rephrased to capture the historical durability of concepts of
perception, time, and decision. Do algorithms compute beyond the
threshold of human perceptibility and consciousness? Can ‘cognizing’
and ‘learning’ digital devices reflect or engage the durational experience
of time? Do digital forms of cognition radically transform workings of
the human brain and what humans can perceive or decide? How do
algorithms act upon other algorithms, and how might we understand
their recursive learning from each other? What kind of sociality or asso-
ciative life emerges from the human-machinic cognitive relations that we
see with association rules and analytics?
In this introduction I draw out a set of themes from Hayles’ work,
identified as key animating ideas that give life to particular aspects of the
contemporary debates on algorithmic computation and cognition. These
themes are present across the different essays in the special issue and they
are threads that run through the major contributions of Hayles’ work
across disciplines: human and technical cognitions; feedback loops and
forms of reason; and ethics and futures. The special issue concludes with
an interview with N. Katherine Hayles, in which she discusses her work
on cognition and computation as it is formulated in her book Unthought,
and responds to some of the questions arising from the essays in
this issue.
‘When We Design Technical Cognitive Systems,
We Are Partially Designing Ourselves’:
Human-Algorithm Interactions7
In a discussion of Stanislaw Lem’s 1976 novella ‘The Mask’, Katherine
Hayles details the partial and distributed nature of what we call human
agency (2005: 172–3). ‘We are no longer the featherless biped that can
think’, she writes, but a ‘hybrid creature that enfolds within itself the
rationality of the conscious mind and the coding operations of the
machine’ (2005: 192). Detailing the ‘machine within the human’ and
the ‘human within the machine’, Hayles defines anew the problem of
human agency in relation to the machine. In place of a long-held sense
of human agents as rational beings exercising free will in the world,
Hayles shows how the sense of self and world is bound up with under-
lying programmes so that ‘coding technology becomes central to under-
standing the human condition’ (2005: 192). Understood in this way,
thinking with algorithms could only ever be an entangled and collabora-
tive venture in which analogue and digital forms of computation and
cognition dwell together. This hybrid and collaborative mode of
cognition is further elaborated in Unthought, where Hayles develops
the concept of a ‘cognitive assemblage’ to depict the ‘arrangement of
systems, subsystems, and individual actors through which information
flows, effecting transformations through the interpretative activities of
cognizers operating upon the flows’ (2017: 118). Here, the technical cog-
nitive system is composed of multiple elements, humans and algorithms
among them, each of these elements interconnected so that ‘the cognitive
decisions of each affect the others’ (2017: 118).
From the algorithmic infrastructures of smart cities to the use of
autonomous weapons in contemporary warfare, Hayles’ insights
remind us that the design of technical cognitive capacities also necessarily
involves a redesignation of what it means to be human. ‘Autonomous
drones and drone swarms would operate with different distributions of
choices, interpretations, and decisions’, she writes, but they too will
necessarily ‘participate in a complex assemblage involving human and
technical cognizers’ (2017: 136). Given this complex assemblage, with its
different distributions of decision, what is at stake for the way one studies
algorithms? As Celia Lury and Sophie Day discuss in their study of
recommendation algorithms in this special issue, the algorithms function
as a ‘composite of algorithmic and human reasoning’. And yet, the
dividuated human subjects that are generated through the chains of
‘like’ relations in recommendation systems, as they describe, run counter
to traditional conventions of a unified subject, instead embodying
algorithmic processes ‘such that one is always more and less than one’.
The subject of the recommendation algorithm, then, dwells among
human and technical cognizers so that the distribution of decisions
does not map directly to the ‘one’ of the liberal human subject. In her
analysis of Shelly Jackson’s electronic hypertext, Patchwork Girl,
Katherine Hayles describes how the ‘unified subject is thus broken
apart and reassembled as a multiplicity’ via electronic media that distrib-
ute coding and decoding ‘between the writer, computer and user’ (2005:
151). This redistribution of the text as a ‘flickering signifier’ is arguably
not confined to the spaces of story writing, but proliferates also in the
kinds of recommendation algorithms depicted by Lury and Day, where
subjectivities are enacted in what Hayles calls ‘flexible and mutating
ways’ (2005: 154).
As Luciana Parisi suggests in this special issue, Hayles’ work ‘offers a
re-reading of the epistemological distinction between human and
non-human cognition’ and, specifically, a re-reading of how non-
human cognizers interact with other human and non-human cognitive
agents. The effects of cognition, as distinct from thought, have been
manifest particularly in systems where algorithms interact with, and
learn from, other algorithms in order to enact decisions. Indeed,
Hayles devotes one chapter of her book Unthought to the study of
high-frequency trading (HFT) in financial derivatives markets. In the
context of vast increases in processor speed, computer memory, and
the use of fibre optic cables, Hayles identifies a ‘temporal gap between
human and technical cognition’ that she suggests creates ‘a realm of
autonomy for technical agency’ (2017: 142). What might take place in
this space of relative technical autonomy? In his essay on HFT in this
special issue, Donald MacKenzie is interested in what he calls, following
Erving Goffman, the ‘interaction order’ of algorithms. ‘Among the things
an algorithm does, in automated trading’, writes MacKenzie, is to have
‘material effects on the behaviour of other algorithms’. Detailing how the
object of the ‘order book’ emerges, MacKenzie describes the ‘human
traders’ who, like the algorithms with whom they work, ‘simultaneously
observe and construct the object of their attention’. In this way, the
temporal gap Hayles identifies is manifest in the technologies of
‘spoofing’ and ‘queuing’ MacKenzie recounts in his study. Indeed,
Hayles engages with Ann-Christina Lange’s work on HFT in order to
emphasize how ‘algorithms are constantly interacting with other algo-
rithms, generating a complex ecology that, Lange suggests, can be under-
stood as swarm behaviour’ (2017: 163).8 In the financial practice of HFT,
then, the cognitive assemblage enrols human and algorithmic inter-
actions that take place across different temporal registers. Such readings,
as one sees across work by Hayles, Lange, and MacKenzie, substantially
complicate the widespread claims to a ‘speeding up’ of the world amid
the dominance of algorithms over human decisions. Similarly, the very
notion of a liberal human subject is reframed so that, as Michael Dieter
and David Gauthier argue in their essay on chrono-design and user
experience, ‘conceptions of a self-deliberative actor are complicated’ in
algorithmic systems of cognition. What it means to action a trade, to
design an interface, to queue or to spoof, is transformed in and through
the composite cognitions of humans with algorithms, and algorithms
with other algorithms.
‘Recursive Feedback Loops Cycling between Different
Levels of Coding’: Algorithmic Forms of Reason9
Reflecting on Norbert Wiener’s mid-20th-century concerns for the cyber-
netic paradigm, Katherine Hayles notes that:
Half a century later, we can see with the benefit of hindsight in what
ways the cybernetic paradigm was both prophetic and misguided. It
was correct in anticipating that modes of communication between
humans, non-human life-forms, and machines would become
increasingly critical to the future of the planet; it was wrong in
thinking that feedback mechanisms were the key to controlling
this future. (Hayles, 2017: 202)
Alongside the distribution of agency and cognition, then, the recursivity
of interactions has exceeded the capacity of traditional notions of con-
trol. With the recursive feedback mechanism – a technic present across
Hayles’ oeuvre – Hayles signals the limit points of formal mathematical
and computational systems and the possibilities of novel forms of reason
more attuned to the ‘incomputable, the undecidable, and the unknow-
able’ (2017: 202). Understood in this way, the feedback loop that was so
central to cybernetic forms of reason and control becomes a recursive
and iterative logic that exceeds notions of control.10 As contemporary
machine learning algorithms deploy back propagation to train multilayer
architectures, the notion of feeding back has become a crucial feature of
unsupervised learning that precisely no longer requires control. In her
essay, Parisi extends what she describes as Katherine Hayles’ identifica-
tion of a fundamental problem in our present, that is, the tension
between logics of automation and reason. Parisi identifies a ‘shift in
computational models of logical reasoning’ from enlightenment forms
of ‘deductive truths applied to small data’ to contemporary computa-
tional forms of the ‘inductive retrieval and recombination of infinite data
volumes’. Extending and developing Hayles’ account of the computa-
tional regime, Parisi draws out a key aspect of the forms of reason
advancing with machine learning. Similarly, Lury and Day propose
that personalization via algorithm is not ‘a slide from one to many and
back again’ but instead a form of enumeration that is conducted through
‘forms of de- and re-aggregation’ and ‘recursive induction in types or
classes’.
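As a concrete illustration of ‘feeding back’ as a training mechanism, the sketch below trains a tiny two-layer network by the back propagation mentioned above. It is a supervised toy task in plain NumPy; the architecture and the XOR task are my illustrative choices, not drawn from Hayles or Parisi:

```python
# A minimal back-propagation loop: error at the output is cycled
# backwards to adjust the weights at each level of coding.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # the feedback signal: error at the output layer...
    delta2 = (out - y) * out * (1 - out)
    # ...propagated back through the hidden layer
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ delta2)
    W1 -= 0.5 * (X.T @ delta1)

print(out.round(2))   # tends towards [0, 1, 1, 0] on most initializations
```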
Such elaboration of the precise forms of reason advanced with
machine learning algorithms is significant because it rather fundamen-
tally challenges causal accounts of algorithmic actions upon the world. In
place of an account of algorithms where the effects of their actions can be
located in their origins or source codes, it becomes possible to give an
account of algorithms as generating, and generated by, the relations
between input data and their outputs. As Parisi puts the problem,
‘machine learning is the inverse of programming: the question is not to
deduce the output for a given algorithm, but rather to find the algorithm
that produces this output’. In contrast to visions of the algorithm as a
linear series of programmable steps, this abductive form of reason marks
a generative process of the discovery of structure within large data sets.
Rather as Hayles’ 1991 account of computation envisaged a regime
that ‘allows mathematics to be practiced as an experimental science’,
then, the inductive or abductive logics of machine learning experiment
with outputs, adjusting probability weights in order to optimize the algo-
rithm. Where Tobias Matzner suggests in his essay, contra Parisi, that
the ‘stability of the world’ is a ‘precondition of algorithm design’, the
experimental design of machine learning algorithms seems precisely to
profit from instability and uncertainty, because these conditions yield
data to the corpus for learning. Michael Dieter’s close study of the prac-
tice of user experience design, for example, observes processes of ‘accel-
erated pattern recognition, the synthesis of sensory inputs, and the
capacity to draw inferences’ in the algorithmic experiments for optimiza-
tion. Donald MacKenzie’s essay similarly describes financial traders he
interviewed as ‘experimenting with artificial intelligence machine learning
techniques’ such as support vector machines to distinguish ‘real from
spoof orders’ in more sophisticated ways. Again, the machine learning
methods required to define similarities and differences – such as the sup-
port vector machines MacKenzie observes – inductively generate their
similarity measures from the attributes of the data they are exposed to
(Alpaydin, 2016: 116; MacKenzie, 2017: 73).
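A hedged sketch of the kind of classifier MacKenzie’s traders describe is given below: a support vector machine that induces a boundary between ‘real’ and ‘spoof’ orders from labelled examples. The features and data are invented stand-ins; the attributes of real order-book data are far richer:

```python
# A support vector machine inducing a real-versus-spoof boundary from
# labelled examples. Features and data are synthetic, for illustration.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Toy features per order: [lifetime in ms, size relative to book depth]
real = np.column_stack([rng.normal(800, 200, 200), rng.normal(1.0, 0.3, 200)])
spoof = np.column_stack([rng.normal(80, 30, 200), rng.normal(4.0, 1.0, 200)])
X = np.vstack([real, spoof])
y = np.array([0] * 200 + [1] * 200)   # 1 = spoof

# The similarity measure (here an RBF kernel) takes its shape from the
# data the model is exposed to, as Alpaydin (2016) notes of such methods.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

order = [[60.0, 3.5]]          # a short-lived, outsized order
print(clf.predict(order))      # -> [1], classified as spoof-like
```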
As Katherine Hayles notes in the epilogue to My Mother Was a
Computer, the cyberneticians of the mid-20th century were the architects
of ‘feedback loops connecting human and machine’, and yet they had
‘not quite grasped’ that ‘recursivity could become a spiral rather than a
circle’ (2005: 241). In short, the architects of the feedback loop as com-
putational logic did not quite foresee its capacity to generate emergent
behaviours that would spiral beyond a paradigm of control and form the
parameters of modes of reason attuned to uncertainty and contingency.
Perhaps our current moment, with the encroachment of algorithms on
democratic elections, referenda, and the judicial system, is witnessing
what Hayles describes as ‘the uncertainties, potentialities, and dangers’
of the algorithmic regime of computation (2005: 242). It is to these latent
potentialities and dangers that I now turn.
‘Ethics Cannot Be Plastered on as an Afterthought’:
Algorithms and Positive Futures11
In an interview published in this special issue, N. Katherine Hayles
reflects on her own contribution to the formulation of ethical responses
to the penetration of algorithmic decisions into so many aspects of
contemporary life, saying that she does not consider herself to be an
‘ethicist’. The reading of her work that I offer here, however, considers
that she has a profound sense of ethics as an orientation to oneself and to
the world, and of the ethical and moral difficulties of being human.
Consider, for example, her account of posthuman embodiment, where
she discusses whether, with the ‘rapid development of neural nets’, there
could be a fundamental challenge to the ‘ethical imperative that humans
keep control’ (1999: 288). Hayles contrasts the ‘vision of the human in
which conscious agency is the essence of human identity’ with the post-
human view that ‘conscious agency has never been in control’ (1999:
288). Citing feminist scholars of science such as Donna Haraway and
Evelyn Fox Keller, Hayles suggests that the posthuman can offer another
kind of account in which ‘distributed cognition replaces autonomous
will; embodiment replaces a body seen as a support system for the
mind; and a dynamic partnership between humans and intelligent
machines replaces the liberal humanist subject’s manifest destiny to dom-
inate and control nature’ (1999: 288).
Twenty years on from Hayles’ mapping of the potentiality of the
posthuman to decentre human conscious agency, the dominant societal
and scholarly accounts of ethical response to algorithms remain wedded
to the control functions of the liberal humanist subject (O’Neil, 2016). It
is perhaps more important than ever that Hayles’ call for embodied
accounts of dynamic partnerships is brought into conversations on
drone warfare, autonomous weapons, and robot futures, where the cap-
acity for human control and mastery of the algorithms has too often
become the focus of ethico-politics.12 Indeed, in Unthought Hayles
urges us to consider the potentiality of non-conscious forms of cognition
to extend new opportunities for human thought and critique. Whilst she
never loses sight of the ethical effects of the assemblages of algorithmic
warfare, she nonetheless seeks to ‘move from thinking about the individ-
ual’ as site of responsibility and free will, toward thinking about ‘the
consequences of the actions the assemblage as a whole performs’
(2017: 37). For Hayles, this mode of ethics means that ‘effective ethical
intervention has to be intrinsic to the operation of the system itself’ so
that the sites of ‘inflection points’ can be located within a cognitive
assemblage (2017: 204). What does this mean for those who research
the actions of algorithms in the world? It means becoming ‘knowledge-
able about how the interpenetrations of human and technical cognitions
work as specific sites’, devoting methodological time to understanding
computational regimes up close and in their operations.
The essays assembled in this special issue may be read as engagements
with this invocation to understand a computational regime in detail and
to identify the inflection points where intervention might be possible.
Such inflection points take multiple forms. In their discussion of the trap-
ping of ‘technical delays and waiting times within tolerable limits’, for
example, Michael Dieter and David Gauthier engage the specific and
distinct micro temporalities of information. Donald MacKenzie’s close
study of HFT regimes shows that ‘it is human beings, not algorithms,
that are angered by perceived queue jumping’ and it is ‘humans, not
algorithms, that are prosecuted for spoofing’. Here the potential inflec-
tion points reside in the moments where the different temporalities and
affective registers – delays, emotions of anger, tolerable thresholds, fears
of prosecution – are juxtaposed or drawn together in tension with one
another. The point is that engaging the technical cognitions in detail can
yield a different way of relating to the system ethically and politically. As
Adrian Mackenzie has argued in his compelling account of the archae-
ology of machine learning algorithms, understanding how a specific
algorithm such as a random forest ‘orders differences’ could provide a
means to ‘change how we relate to’ one of the material instantiations of
such algorithms in the world, such as in border and immigration controls
(2017: 11).
Of course, for Hayles the reading of the close detail of a computational
regime draws much of its resource from the humanities and, particularly,
from the ‘specific dynamics’ that ‘novels enact that are not already pre-
sent’ (2017: 198). Among the specific dynamics of novels, Hayles notes
that ‘novels explore ethical issues in specific and concrete terms’ (2017:
200). The decision enacted within the novel’s form is already freighted
with political, ethical, technical, and affective weights of meaning.
Hayles’ account of the ethicality of the novel’s form can serve as a
reminder that the decisions of the computational regime are also already
weighted with the biases, probabilities, and discriminations contained
within algorithms. In her book Writing Machines, Hayles experiments
with ‘what the book can be in the digital age’ (2002: 9). Writing in the
third person, and under the name ‘Kaye’ (related to Hayles, but ‘not the
same’), Hayles enacts the displaced authorship and partial perspective
that feel familiar to us from literature, but also increasingly familiar as a
function of the kinds of personalization algorithms studied by Lury and
Day – not quite the one of I, always something less and more than this.
Experimenting with the form of writing and the novel, Hayles vividly
conjures the ethical difficulties of the human protagonist who finds her-
self enmeshed within technical cognitive systems and yet also the subject
of an ‘asymmetric distribution of ethical responsibility in whether actions
are finally taken’ (2017: 136). As the essays of this special issue elucidate,
this is not primarily a question of resolution, nor of resolving or ethically
modifying the distribution of responsibility. Instead, as Hayles’ work has
mapped over decades, it is a question of how the science that ‘under-
writes the Regime of Computation’ can yield the potential to ‘deepen our
understanding of what it means to be in the world rather than apart from
it’ (2005: 242).
Notes
1. For the Google research team’s account of the development of their algo-
rithms see https://research.googleblog.com/2016/02/on-personalities-of-dead-
authors.html (accessed January 2018).
2. See: http://www.wired.co.uk/article/google-author-create-artificial-intelligence
(accessed February 2018).
3. Katherine Hayles’ notion of the ‘regime of computation’ emphasizes the per-
formative relations between computation (code or ‘the language in which
computation is carried out’) and pre-existing ‘models for understanding
truth statements’, whether in literature or metaphysics (2005: 17).
Significantly, this means that ‘computation is not limited to digital manipu-
lations’ and can ‘take place in a variety of milieu and with almost any kind of
material substrate’ (p. 17). Computational regimes are thus situated in places
and spaces, and they involve material and corporeal relations.
4. See: Anthony Cuthbertson, ‘Google’s AI Predicts the Next Sentence of Dead
Authors’, Newsweek, 29 February 2016.
5. See, for example, Bruno Latour (1999), Bernard Stiegler (1998), Gilbert
Simondon (2001).
6. My identification of a period of 27 years should not be read as the period of
the influence of N. Katherine Hayles’ work, of course, which has extended
over many decades. Rather, the 1991 text pre-dates the arc of the work
Hayles herself describes, and thus serves to illustrate the presence of extra-
ordinarily prescient themes pre-dating the major series of books.
7. For Hayles there is a great deal at stake in recognizing how human cognition
‘enmeshes with technical systems’ so that ‘when we design technical cognitive
systems, we are partially designing ourselves’ (2017: 141). She argues that it is
only through reassessing ‘humbler perceptions of human roles in cognitive
assemblages’ that society might ‘collectively decide to what extent technical
autonomy should and will become intrinsic to human complex systems’
(p. 141).
8. Hayles cites Ann-Christina Lange’s (2015) paper on swarm theory and HFT.
See also Lange (2016) on the ethnographic study of practices of HFT.
9. Discussing the future of posthuman subjectivity through four novels, Hayles
suggests that ‘the subjects of these texts achieve consciousness through recur-
sive feedback loops cycling between different levels of coding’ (1999: 279). If
posthuman subjectivity is rewritten through layered coding structures, she
proposes, then different ‘models of signification’ are required to account
for the ‘distinctive feature of neurolinguistic and computer language struc-
ture’ (p. 279).
10. The idea that ‘recursion was central to cognition’, Hayles explains, extended
through ‘much improved imaging technologies, micro-electrode studies, and
other contemporary research practices’ (2017: 47). On recursion and algo-
rithmic logics, see also Totaro and Ninno (2014), and for detailed engage-
ment with feedback loops and the algorithm see Halpern (2015).
11. ‘Ethics cannot be plastered on as an afterthought’, writes Hayles of algo-
rithmic systems that lead to sometimes catastrophic effects, whether finan-
cial crisis or drone strike, for example (2017: 204). Her notion of an ethical
intervention is situated in what she has described as ‘close reading’, so that
the computational regime is studied in situated detail.
12. For embodied accounts of the operations of drone algorithms and autono-
mous weapons systems see Wilcox (2017) and Suchman and Weber (2016).
References
Alpaydin, Ethem (2016) Machine Learning. Cambridge, MA: MIT Press.
Amoore, Louise and Piotukh, Volha (2019) Interview with N. Katherine Hayles.
Theory, Culture & Society 36(2): 145–155.
Braidotti, Rosi (2013) The Posthuman. Cambridge: Polity Press.
Daston, Lorraine (1988) Classical Probability in the Enlightenment. Princeton,
NJ: Princeton University Press.
Dieter, Michael and Gauthier, David (2019) On the politics of chrono-design:
Capture, time and the interface. Theory, Culture & Society 36(2): 61–87.
Grothoff, Christian and Porup, Jens (2016) The NSA’s SKYNET program may
be killing thousands of innocent people. Ars Technica, Wired Media Group.
Available at: https://hal.inria.fr/hal-01278193.
Halpern, Orit (2015) Beautiful Data: A History of Vision and Reason since 1945.
Durham, NC: Duke University Press.
Haraway, Donna (1991) Simians, Cyborgs, and Women: The Reinvention of
Nature. London: Free Association Books.
Hayles, N. Katherine (1991) Complex dynamics in literature and science. In:
Hayles, N. Katherine (ed.) Chaos and Order: Complex Dynamics in Literature
and Science. Chicago, IL: Chicago University Press.
Hayles, N. Katherine (1999) How We Became Posthuman: Virtual Bodies in
Cybernetics, Literature, and Informatics. Chicago, IL: Chicago University
Press.
Hayles, N. Katherine (2002) Writing Machines. Cambridge, MA: MIT Press.
Hayles, N. Katherine (2005) My Mother Was a Computer: Digital Subjects and
Literary Texts. Chicago, IL: Chicago University Press.
Hayles, N. Katherine (2012) How We Think: Digital Media and Contemporary
Technogenesis. Chicago, IL: Chicago University Press.
Hayles, N. Katherine (2017) Unthought: The Power of the Cognitive
Nonconscious. Chicago, IL: Chicago University Press.
Krizhevsky, Alex, Sutskever, Ilya and Hinton, Geoffrey (2012) ImageNet clas-
sification with deep convolutional neural networks. Advances in Neural
Information Processing Systems. Available at: https://papers.nips.cc/paper/
4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
(accessed 1 December 2018).
Lange, Ann-Christina (2015) Crowding of adaptive strategies: High-frequency
trading and swarm theory. Paper presented at ‘Thinking with Algorithms’
Conference, Durham University, UK, 27 February.
Lange, Ann-Christina (2016) Organizational ignorance: An ethnographic study
of high frequency trading. Economy and Society 45(2): 230–250.
Latour, Bruno (1999) Pandora’s Hope: Essays on the Reality of Science Studies.
Cambridge, MA: Harvard University Press.
Lury, Celia and Day, Sophie (2019) Algorithmic personalization as a mode
of individuation. Theory, Culture & Society 36(2): 17–37.
Mackenzie, Adrian (2017) Machine Learners: Archaeology of a Data Practice.
Cambridge, MA: MIT Press.
MacKenzie, Donald (2019) How algorithms interact: Goffman’s ‘interaction
order’ in automated trading. Theory, Culture & Society 36(2): 39–59.
Matzner, Tobias (2019) The human is dead – long live the algorithm! Human-
algorithmic ensembles and liberal subjectivity. Theory, Culture & Society
36(2): 123–144.
O’Neil, Cathy (2016) Weapons of Math Destruction. New York: Penguin
Random House.
Parisi, Luciana (2019) Critical computation: Digital automata and general arti-
ficial thinking. Theory, Culture & Society 36(2): 89–121.
Simondon, Gilbert (2001) Du mode d’existence des objets techniques. Paris:
Aubier.
Stiegler, Bernard (1998) Technics and Time I: The Fault of Epimetheus, trans.
Beardsworth, Richard and Collins, George. Stanford, CA: Stanford
University Press.
Suchman, Lucy and Weber, Jutta (2016) Human-machine autonomies. In:
Bhuta, Nehal, Beck, Susanne, Geiss, Robin, Liu, Hin-Yan and Kress,
Claus (eds) Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge:
Cambridge University Press.
Totaro, Paulo and Ninno, Domenico (2014) The concept of algorithm as an
interpretative key of modern rationality. Theory, Culture & Society 31(4): 29–
49.
Wilcox, Lauren (2017) Embodying algorithmic war: Gender, race and the post-
human in drone warfare. Security Dialogue 48(1): 19–41.
Louise Amoore is Professor of Political Geography at Durham
University, UK. She is the author of The Politics of Possibility: Risk
and Security Beyond Probability (2013, Duke University Press). Her
latest book, Cloud Ethics: Algorithms and the Attributes of Others, is in
press (2019, Duke University Press).
This article is part of the Theory, Culture & Society special issue on
‘Thinking with Algorithms: Cognition and Computation in the Work of
N. Katherine Hayles’, edited by Louise Amoore.
Special Issue: Thinking with Algorithms: Cognition and Computation in the Work of N. Katherine Hayles

Theory, Culture & Society
2019, Vol. 36(2) 17–37
© The Author(s) 2019
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/0263276418818888
journals.sagepub.com/home/tcs

Algorithmic Personalization as a Mode of Individuation

Celia Lury
Warwick University

Sophie Day
Goldsmiths, University of London
Abstract
Recognizing that many of the modern categories with which we think about people
and their activities were put in place through the use of numbers, we ask how
numbering practices compose contemporary sociality. Focusing on particular
forms of algorithmic personalization, we describe a pathway of a-typical individuation
in which repeated and recursive tracking is used to create partial orders in which
individuals are always more and less than one. Algorithmic personalization describes
a mode of numbering that involves forms of de- and re-aggregating, in which a variety
of contexts are continually included and excluded. This pathway of a-typical individu-
ation is important, we suggest, to a variety of domains and, more broadly, to an
understanding of contemporary economies of sharing where the politics of collectiv-
ities, ownership and use are being reconfigured as a default social.
Keywords
algorithm, individuation, numbering, optimization, pathway, personalization
The less the determinism, the more the possibilities for constraint.
(Hacking, 1991: 194)
This is the age of personalization. Personalizing practices permeate
everyday life in the UK – we are invited to participate in personalized
medical, health and care services, to benefit from personalized customer
experiences, to find our way with personalized maps, acquire a persona-
lized education, keep up-to-date with personalized news, get a bargain
Corresponding author: Celia Lury. Email: [email protected]
Extra material: http://theoryculturesociety.org/
with personalized prices and so on.1 To give some more concrete exam-
ples: in 2007 the UK government published Putting People First: A
Shared Vision and Commitment to Finding New Ways to Improve Social
Care in England.2 The pamphlet outlined the government’s intention to
transform adult social care so ‘that every person who receives support,
whether provided by statutory services or funded by themselves, will have
choice and control over the shape of that support in all care settings’.
This ‘vision’ describes itself as a totally different approach to an historic
‘one size fits all’ system. With an initial focus on transforming social care
and support services, the pamphlet proposes that principles of personal-
ization be embedded in a range of other service areas such as health and
education. An example from the field of health and well-being is
PatientsLikeMe,3 a website that combines features of traditional quali-
tative online patient communities with quantitative data-collection; the
(trade-marked) strap-line is ‘Live better, together!’ This website has
300,000 members, who ‘share’ over 23,000 diseases, and have contributed
over 25 million data points about their diseases, resulting in over 50 pub-
lished research studies. The website says:
By sharing health data on PatientsLikeMe, people not only help
themselves, but help others who can learn from their experiences,
and advance research. . . . Learn from others, connect with people
like you, track your health.
The platform is also described as a tool that helps patients find a ‘just-in-
time’, ‘someone-like-me’ peer who can be relied upon to compare options
and aid decision-making. A final example of personalization is the rec-
ommendation service Stitch Fix,4 a website that describes itself as ‘Your
partner in style’ and which seeks to recommend clothing for women on a
personal basis. The business proposition is that the recommendation
service – a composite of algorithmic and human reasoning – knows
better than the customer herself what clothes she will like: selected
items of clothing are sent directly to her, without a preview, as otherwise
her prejudices might prevent her from choosing items that she, unknow-
ingly, will really like.
The question this article seeks to address is: what kind of individuation
(Foucault, 2001; Simondon, 1992) is personalization? We ask this ques-
tion in order to explore the implications of personalization for how we
live together, that is, for forms of sociality. We start from the assumption
that personalization is not only personal: it is never about only one
person, just me or just you, but always involves generalization. Indeed,
our argument will be that it is a mode of individuation in which entities
are precisely specified by way of recursive inclusion in types or classes as
part of the making of what we describe as an a-typical pathway. To make
this argument, we explore the use of recommendation algorithms to sort
or classify people, analysing the way in which individuals are addressed
as ‘a you’, while their membership of types or classes of person is per-
petually revised. Our conclusion is that the familiar recognition that
personalization seems to provide – knowing you better than you yourself
do – should not be considered as merely a more precise form of indi-
viduation. To the contrary, personalization also constrains who and how
we can be.
Recognizing that many of the modern categories with which we think
about people and their activities were put in place through the use of
numbers (Hacking, 1991), we develop our analysis of algorithmic per-
sonalization by drawing on an understanding of number as composition
(Day et al., 2014). This approach starts from the assumption that num-
bering is everywhere (Hayles, 2014), even though numbers may not
always be visible. As such, it seeks to situate contemporary analyses of
algorithms within the wider context of cultures of numbering. As Badiou
(2008) remarks, 'A "cultural fact" is a numerical fact. And, conversely,
whatever produces number can be culturally located; that which has
no number shall have no name either’ (Badiou, 2008: 2–3). In a similar
vein, Totaro and Ninno (2014) also comment on the pervasiveness of
numbering, but focus specifically on the performativity of the recursive
function, which, they argue, provides ‘an interpretive key to modern
rationality’:
The notion that the ‘logic of numbers’ operates exclusively on num-
bers is misleading. In the second half of the last century, the theory
of recursive functions has made it clear that the concept of calcu-
lation is very general and does not necessarily imply the manipula-
tion of numerical symbols. (2014: 2; see also Neyland, 2014; Totaro
and Ninno, 2016)
More circumspectly, Heintz notes that ‘the observation and regulation of
performances today has become mutual and reflexive, generalised and
anonymous, and it is now increasingly based on observations and com-
parisons in terms of numbers’ (Heintz and Vollmer, 2011: 22).
Our compositional approach acknowledges the pervasiveness of num-
bering in contemporary society by looking at what numbering does,
rather than what numbering is. Adopting a felicitous analogy from
Verran, we think of numbers in the same way as anthropologists do
kin: numbers both are and have relations, just as people are and have
relations (Verran, 2010: 171; see also Mackenzie, 2014; Urton, 1997). In
other words, we propose that it is as working relations that numbers are
able to perform: to travel, to make possible comparison, conversion, and
exchange, to be stored, to inform, and to make the same or different. By
looking at how numbers are composed or formed in relations, and how
social and cultural practices are formed (in part) by number, we aim to
show how numbering is a re-presentation – in this case, of persons – that
always holds more than one presentation.
To understand how it is that we have become habituated to declaring,
measuring and sharing our personal characteristics, behaviours and opin-
ions in the UK in order to carry out mundane activities, we begin by
situating our analysis in the context of what has been called a ‘like econ-
omy’ (Gerlitz and Helmond, 2013). This context is important, we suggest,
insofar as it makes relational value available for computational calcula-
tion. Drawing on our compositional approach to numbering, we then
develop a set of terms – tracking, bordering, folding and pausing – that
lead us to describe forms of personalization that are performed by rec-
ommendation algorithms as the making of a pathway of a-typical indi-
viduation. Critically, this pathway creates ‘a’ person or individual that is
always provisional and corresponds only partially with the type or cat-
egory in which it is included, whether this concerns what a person might
buy, like, share or possess. The term ‘pathway’ is intended to capture this
category of person, a category that is never static but always changing
and always in motion. While our analytic focus is on the example of
algorithmic personalization, and in particular algorithmic personaliza-
tion that involves the use of collaborative filters, we also make references
to other examples which share the same logic.
Liking and Likeness
The rise of a ‘like economy’ begins, so Gerlitz and Helmond (2013) argue,
with the arrival of Google in the late 1990s. It is widely known that
Google’s early success stemmed from its use of a search engine that
shifted the value determination of websites from hits alone to hits and
links. The hyperlink analysis algorithm, PageRank, enabled calculation
of the relative importance and ranking of a page within a larger set of
pages, based on the number of in-links to the page and, recursively, the
value of the pages linking to that page, and so on. Not all links have
equal value in this type of search engine: links from authoritative
sources, or from sources that themselves receive many in-links, are
weighted more heavily in the algorithm.
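The recursive weighting just described can be illustrated with a minimal Python sketch. This is our illustration, not Google's implementation: the four-page 'web', the damping factor and the iteration count are invented for the example, and dangling pages are handled crudely.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # a page with no out-links simply leaks rank here
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                # value passes recursively: in-links from highly ranked
                # pages contribute more to the target's rank
                new_rank[target] += share
        rank = new_rank
    return rank

web = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a'], 'd': ['c']}
for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))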
The use of weighted measures of linking was a first step towards
inscribing the capacity to identify and intensify ‘relational value’ in
search engine algorithms (Gerlitz and Helmond, 2013; see also Feuz
et al., 2011). And it is this relational value, we propose, that is central
to personalization insofar as it makes relations between people available
for computational calculation. Since this first step, the capacity to make
relations of linking – or sharing – has been significantly extended as the
determination of ‘authority’ has changed in line with the participatory
features of Web 2.0. More web users now participate in making
connections between websites through the creation and exchange of
user-generated content (as well as gaming and the purchase of position).
In particular, social buttons allow users to share, recommend, like or
bookmark content, posts and pages across various social media plat-
forms. In 2006, Facebook launched a share icon as a way for someone
to share web content and invite re-sharing and then, in 2009, it intro-
duced the Like button. In 2010, the company extended the capacities of
the Like button to link by introducing an external Like button, a plugin
that could be implemented by any webmaster, potentially rendering all
web content like-able. Significantly, the external ‘Like’ button not only
captures actual likes, but also aggregates all activities performed on a
web object: the number of likes and shares, further likes and comments
on stories, and the number of inbox messages containing the object as
an attachment. In another important development, Facebook’s Open
Graph Protocol opened up its social graph – a representation of
people and their connections – for external content, allowing for a con-
trolled way of exchanging preformatted data between Facebook and the
external web.
It is through the use of these and other techniques, so Gerlitz and
Helmond argue, that Facebook has been able to build a ‘like economy’,
that is, an economy that builds on and exploits relational value, mediated
by participation. They further suggest that this economy produces what,
using Mark Zuckerberg’s own phrase, they call ‘the default social’. To
this analysis we want to add the observation that the relations between
the individual and the population that characterize this new social are
both participatory and participative in that users may participate know-
ingly (participatory) or unknowingly (participative). Moreover, partici-
pation in the default social is mediated by techniques of exclusive
inclusion and inclusive exclusion (Agamben, 1998).5 On the one hand,
the Open Graph is able to include non-users of Facebook as the external
Like button cookie can trace non-users and add any information gained
as anonymous data to the Facebook database and, on the other hand, a
user’s explicitly invited activities may be excluded or rendered invisible to
other users if they are not sufficiently highly ranked in the dimensions the
graph provides. These oscillating dynamics – of being excluded in ways
that inform the ordering of those included, and being included but not in
ways that allow you to understand the terms of your membership – were
intensified further in 2011 when Facebook expanded the possibilities of
‘invisible’ participation by proliferating custom actions:
When creating an app, developers are prompted to define verbs that
are shown as user actions and to specify the object on which these
actions can be performed. Instead of being confined to ‘like’ exter-
nal web content, users can now ‘read’, ‘watch’, ‘discuss’ or perform
other actions. (Gerlitz and Helmond, 2013: 1353)
Gerlitz and Helmond conclude their analysis of the emergence of the like
economy by suggesting that Facebook is being developed as an ‘infra-
structure of decentralised data production and recentralised data pro-
cessing’ (2013: 1357). While they do not discuss the role of
recommendation systems within this infrastructure, these have become
increasingly important operators of the de- and re-centralizing practices
of the economy Gerlitz and Helmond describe. Structured by the
‘participatory’ practices of inclusive exclusion and exclusive inclusion,
recommendation algorithms penetrate all corners of the internet,
making personalized recommendations – directly and indirectly – to indi-
viduals with interests in a variety of fields, including movies, music, news,
books, research publications, restaurants, jokes, financial services, prod-
ucts of all sorts and persons (for example, in online dating). It is to these
algorithms that we now turn.
Recommendation Algorithms
While very many different kinds of algorithms are used in recommenda-
tion systems, two main kinds are distinguished: collaborative filtering
algorithms and content-based filtering algorithms. Sometimes, as in Netflix,
they are combined. The former group of algorithms is based on large
amounts of digital data on users’ behaviour, activities or preferences and
leads to predictions of what users will like based on their similarity to
others (see further below). An example is Last.fm,6 a music ‘station’ or
streaming service that personalizes the music it transmits by observing
the music an individual has listened to on a regular basis and comparing
those tracks with the listening behaviour of other individuals. The cal-
culative process involved in this group of algorithms is sometimes
described as ‘leveraging’ the behaviour of users7 since it requires the
participation of many users to produce personalized recommendations
for one person.8 Content-based filtering methods, in contrast, are based
on a description of an item in terms of discrete characteristics; the algo-
rithm is then designed to produce recommendations for individual users
of items that have similar properties to those that the individual liked in
the past (or is examining in the present). Pandora Internet Radio
(currently restricted to listeners in the USA, Australia and New
Zealand because of licensing restrictions) is an example: it makes use
of an algorithm that uses properties of a song or artist (a subset of
the 450 attributes provided by the Music Genome Project) in order to
seed a station to transmit personalized music (Prey, 2017).
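The content-based logic can be sketched in the same spirit. The attribute names and values below are invented stand-ins for the Music Genome Project's much larger attribute set, and the distance measure is one simple choice among many.

import math

seed = {'tempo': 0.8, 'acoustic': 0.2, 'vocal': 0.9}   # song used to seed a station
catalogue = {
    'song_a': {'tempo': 0.7, 'acoustic': 0.1, 'vocal': 0.9},
    'song_b': {'tempo': 0.2, 'acoustic': 0.9, 'vocal': 0.1},
}

def attribute_distance(x, y):
    """Euclidean distance over item properties: smaller = more alike."""
    return math.sqrt(sum((x[k] - y[k]) ** 2 for k in x))

# recommend the items whose properties are closest to those the
# listener liked in the past (here, the seed song)
ranked = sorted(catalogue, key=lambda s: attribute_distance(seed, catalogue[s]))
print(ranked)  # ['song_a', 'song_b']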
We focus on collaborative filtering algorithms, partly because their
ability to make successful predictions across fields is held to be stronger
than that of content-based algorithms, but also because they require and
exploit ‘participatory’ methods to develop novel classificatory tech-
niques. As such, they allow us to identify distinctive aspects of
personalization as a mode of individuation. Crucially for the use of such
algorithms, information relating to ‘pre-existing’ or demographic quali-
ties of the person or entity concerned is not required to produce perso-
nalized recommendations. Instead the information required is produced
through the aggregation of the ongoing participation of both the indi-
vidual to whom the recommendations are made and other users of social
media. Rather than allocating users to a pre-existing class, group or type
(typically a socio-demographic stratum), the properties of which are pre-
sumed to be known in advance, the operations of collaborative filtering
start from the premise that (individual) customers who share (that is,
have in common) some preferences will also share others. This single
but powerful assumption is of value for those developing algorithms in
that ‘the only information [they need] to work is a set of numerical
ratings – specific information about users or items to be recommended
is not necessary’ (Seaver, 2012).
It is worth exploring how these numerical ratings are turned into
personalized recommendations in some detail. A helpful analysis is pro-
vided by Seaver (2012), who describes the ‘signature action’ of collab-
orative filtering algorithms in terms of the operation of a grid, with items
along one side, users along the other, and numerical ratings at their
intersections (see also Bowker, 2014; http://personalisedcommunication.net/the-project/). Significantly, this grid is a matrix, that is, a
grid with the formatted capacity to map (or perform) network
transformations:
This matrix is mostly empty (or ‘sparse’), since most users will have
not rated most items. The work of the collaborative filtering algo-
rithm . . . is to predict what values will show up in the empty spaces
of the matrix. These predictions are then provided in some form to
the user as recommendations. (Seaver, 2012)
Seaver continues:
At any given time, the matrix is in an anticipatory flux: new ratings
from users arrive constantly, displacing their predicted values and
shifting the others. The calculative operations involved in this in-
filling process is the signature action within the matrix – blank
values are replaced by predictions, which are then replaced by
actual ratings.9
In the next stage described by Seaver, the numbers from the matrix are
statistically analysed and their variance is mapped to a number of dimen-
sions or axes. Users who are ‘near’ each other on this multi-dimensional
coordinate system are held to be similar (like each other), and a user will
be recommended items from the neighbourhood around them. It is this
calculative activity that leads to the paradigmatic claim of such algo-
rithms to specify the individual in the complex conjugated personalized
address: ‘People like you like things like this.’
But how, if at all, does this description of the calculative logic in rec-
ommendation algorithms help us understand what is distinctive about
personalization as a mode of individuation? Seaver concludes that pref-
erence and similarity are collapsed in this calculative system since ‘liking’
and ‘being like’ are equated. We consider that an understanding of the
composition of these relations will help us see how numbers are rendered
consequential for the making up of persons (Hacking, 1991). How is this
equation accomplished or, in our terms, composed? If we are to address
the specificity of personalization as a mode of individuation we need to
see the particular ways in which numbers both are and have relations.
Pathways of A-typical Individuation
Seaver’s claim about the equivalence between liking and likeness in
recommendation algorithms is less critical to our argument than his
observation that this calculative matrix is in a constant state of anticipa-
tory flux. Indeed, we propose that the emphasis on perpetual renewal
means that the equation of ‘liking’ and ‘being like’ that is accomplished
by these recommendation algorithms is not about establishing relations
of absolute equivalence. Instead, we suggest, the calculative activity that
produces the anticipatory flux of the matrix involves an ongoing series of
approximations in which ‘being like’ and ‘liking’ are continually made
more and less like each other in a variety of ways. Such approximations,
we would emphasize, are designed to be subject to constant testing.
As Seaver points out, such approximations vary hugely depending on
the calculative space in which they are produced (3 or 9 axes or dimen-
sions, for example). Their value – that is, their ability to produce perso-
nalized recommendations in terms of criteria of accuracy, diversity
(of recommendations), privacy protection and trust – is realized as
they are tested repeatedly in relation to data collected via a whole variety
of participatory methods and metrics as part of what have been called
experiments in participation (Lezaun et al., 2016). In other words, the
personalized address to ‘a you’ is not achieved through the collapse of
liking and likeness, preference and similarity, but through a carefully
calibrated sequencing of their possible inter-relationship. Crucially, this
process not only involves the statistical making of proximity or nearness
but also the turning of near-ness into next-ness, a process of bordering or
adjoining. We conclude therefore that personalization is not the collapse
of liking and likeness but the making of a pathway, a dynamic series of
approximations of similarity and preference that makes persons.
Indeed, this pathway can be described as a mode of a-typical individu-
ation. What do we mean, though, in our use of a-typical? Certainly it
might seem counter-intuitive to use the term at all if it is understood to
mean ‘not typical’ or ‘not of a type’, since we have argued throughout
that personalization is a mode of individuation that involves generaliza-
tion through the (repeated or recursive) inclusion of an entity in a type or
class. The term ‘a-typical’ is thus used to describe a mode of recursive
inclusion, in which both the individual and the type are repeatedly
specified anew. In doing so, it draws on multiple – etymologically
unrelated – meanings of ‘a’.
The first set of meanings is associated with the use of ‘a’ as the indef-
inite article, since this use directly indicates membership in a type or class
of people, things or events (‘this is a cat’, ‘this is a friend of mine’). The
indefiniteness of this inclusion, while appearing to indicate a lack of
determination, has its own logic: for example, as well as meaning ‘one
single’ or ‘any’, ‘a’ is also commonly used to introduce someone or some-
thing for the first time. It allows for a mode of inclusion in a type or
category on the basis of criteria that are not pre-given but rather open to
further (indefinite) specification (‘If that is what you think, then you are
not a friend of mine because a friend of mine would not think that’). As
the indefinite article, ‘a’ is also used to specify both someone or some-
thing as being like someone or something else (‘you are a star’) and to
express rates or ratios, as in ‘for each’ or ‘per’. These meanings of the ‘a’
in the term ‘a-typical’ thus call up the operation of the two analytically
distinct, but historically intertwined, understandings of analogy identi-
fied by Stafford (2001): participation (similitude, mimesis, likeness) and
proportion (ratio).10 In our use, they are combined to produce principles
of inclusion that are subject – recursively – to further revision: their
combination is the means by which the you that is ‘a you’ becomes a
recursive shifter (Chun, 2011, 2016).
To these meanings of ‘a’ as the indefinite article, however, we
add a further meaning, that is, ‘a’ as a variant spelling of ‘ad-’, denoting
motion or direction, a reduction or change into, an addition, increase
or intensification, as in ‘adjoin’ or ‘adjacent’. The etymology of these
terms relates to the Latin adjacentem, adjacens; from adjacere, ‘to
lie at, to border upon, to lie near’; from ad-, ‘to’ + jacere, ‘to lie, to
rest’; literally, ‘to throw’. Our use of the term a-typical to describe
pathways of individuation is thus intended to describe the ways in
which collaborative filtering algorithms are designed to allow for the
ongoing redefinition of principles of inclusion and exclusion via the
recursive activity of adjoining or the work of adjacency: what we
describe as the compositional practice of bordering or framing.11 In
this practice, the aim is to create, not equivalence, but a topological
invariance: that is, the aim is to achieve a continuity of a recursive func-
tion12 such that likeness (‘People like you’) is iteratively produced as
a pathway through a massively aggregated de- and re-contexting of
liking.
How this is accomplished in the multi-dimensional calculative space of
recommendation algorithms can be illustrated by way of a consideration
of ‘the next adjacent possible’, a term developed by the theoretical biolo-
gist Stuart Kauffman (2000).13 Put briefly, Kauffman understands life in
terms of autonomous agents,14 by which he means ‘something that can
act on its own behalf in an environment’. This living entity is ‘something
that can both reproduce itself and do at least one thermodynamic work
cycle’ (Kauffman, 2000: 64). He says:
That bacterium, sculling up the glucose gradient, flagellum flailing
in work cycles, is busy as hell doing ‘it’, reproducing and carrying
out one or more work cycles. So too are all free-living cells and
organisms. We do, in blunt fact, link spontaneous and nonsponta-
neous processes in richly webbed pathways of interaction that
achieve reproduction and the persistent work cycles by which we
act on the world. Beavers do build dams; yet beavers are ‘just’
physical systems. (Kauffman, 2000: 64)
In making this argument, Kauffman emphasizes the importance of the
role of adjoining or bordering and points to the constitutive role of the
particular material constraints (or context) in which any entity individu-
ates. He also identifies the work of adjacency as the activity of ‘construct-
ing constraints that can manipulate constraints’, thus drawing attention
to the role of the border as the operator of the relation of the inside of an
entity to its outside.
Drawing on this analysis, we return to our observation on testing.
Personalized recommendations are based on the making of nearness or
adjacency in a multi-dimensional space, but the implementation of col-
laborative filtering algorithms requires that they be subject to repeated
testing in the specific kinds of relations to context that are commonly
called participation. The aim of this testing is to identify constraints that
can manipulate constraints; for example, in A/B tests or tests of 'liking' (see the sketch below),
to identify a changed ordering of likes so that some future preference
becomes more likely – or predictable. It is only insofar as a population’s
relations to multiple contexts (including data relating to liking, sensing
and sharing as well as to time and space) are registered by the algorithm
that the mode of individuation we are describing can happen at all. In
other words, the (numerical-cultural) process of folding a whole into,
across or within itself to make parts, of de- and re-contexting what
Zuckerberg describes as the default social, is fundamental to the
making of pathways of a-typical individuation. As Seaver (2015)
observes, while it is sometimes claimed that big data has no context,
‘context is everything’ for recommendation algorithms.15
It has been widely observed that algorithms do not operate in isolation
from context-aware techniques of data capture and collection as they are
organized in particular calculative infrastructures (Hayles, 2002; Verran,
2011). Dourish, for example, notes:
If the database is malleable, extensible, or revisable, it is so not
simply because it is represented as electrical signals in a computer
or magnetic traces on a disk; malleability, extensibility, and revisa-
bility depend too on the maintenance of constraints that make this
specific collection of electrical signals or magnetic traces work as a
database; and within these constraints, new materialities need to be
acknowledged. (Dourish, 2014)
Similarly, Amoore and Piotukh highlight the changing role of indexing
practices in data-collection activities:
while structured data is territorially indexable, in the sense that it
can be queried on the horizontal and vertical axes of spreadsheets
within databases, so-called unstructured data demands new forms
of indexing that allow for analysis to be deterritorialized (conducted
across jurisdictions, or via distributed or cloud computing, for
example) and to be conducted across diverse data forms – images,
video, text in chat rooms, audio files and so on. (Amoore and
Piotukh, 2015: 345)
As they also observe, the activity of ‘(ad-)joining’ is of particular import-
ance in the deployment of these new forms of indexing. They give the
example of IBM’s predictive policing:
The linking of the data elements is performed through joins across
data from different data sets, either on the basis of direct intersec-
tions with already indexed data (e.g. via a phone, credit card or
social security number ingested from a database), or probabilistic-
ally, through correlations among data-points from different sources
(e.g. text scraped from a Twitter account correlated with facial
biometrically tagged images drawn from Facebook). (Amoore and
Piotukh, 2015: 345)
It is not just that there is more than one relevant context for recommen-
dation algorithms. Different contexts are deliberately made to appear or
disappear in different practices of context-ing. Indeed, this emphasis on
context – what is sometimes called context-awareness – provides another
compelling reason to describe personalization as a pathway of a-typical
individuation. A trajectory is not established in advance – as when we
travel with the aim of moving from A to B, already knowing where B is –
but in response to contexts that emerge in the making of a path.
Becoming Normal by Being Better than You
We turn now to a discussion of the consequences of personalization for
the making of the default social, by considering the practice of normal-
ization (Agamben, 1998; Canguilhem, 1991; Foucault, 1991; Hacking,
1991).16 In his discussion of modes of governance linked to earlier
forms of statistical normalization, Hacking (1991) argues that debates
concerning the setting of boundary conditions were fundamental to the
way in which a population was governed by statistical laws. Updating
this argument, we suggest that the work of adjoining in the personaliza-
tion practices described above involves an ongoing reorganization of
boundary conditions (operating the relation between inside and outside,
inclusion and exclusion through techniques of contexting) that trans-
forms conditions of governmentality. This is especially clear in relation
to the way in which practices of normalization now require the achieve-
ment of transitivity.17 On the one hand, the verbs of the vocabulary of
participation – liking, sharing, linking – describe activities in which
objects are repeatedly attached to persons; that is, they promote an algo-
rithmic kind of linguistic transitivity (as in ‘things like this like people like
you’). On the other hand, the data collected through the tracking of
participation are then ordered transitively – in a mathematical sense –
in an n-dimensional space of likeness or similitude. In these practices, the
‘new normal’ of individuation appears as a function of the ideal of
transitive closure, an internal limit, in relation to which every possible
relation (between verb and object) is partially ordered in such a way that
the you that is ‘a you’ is similar to other ‘yous’, that is, nearly but not
quite the same as other ‘yous’, and never quite able to be consolidated as
an ‘us’.
While this limit can never be reached since it involves a never-ending
in-filling in relation to a constantly changing population,18 we are none-
theless witnessing a proliferation of models of optimization across the
fields of medicine, marketing, project management and operational
research (the last of which is sometimes described as ‘the science of
better’, the significance of which will be made apparent below). In such
models, optimal pathways of a-typical individuation are commonly iden-
tified in ‘experiments in participation’ in relation to specific objectives,
often through software that merges data with parameters (as in the case
of the parametric algorithms discussed by Parisi, 2013) or employs evo-
lutionary modelling. As described above, one of the novel aspects of such
techniques is the calculative deployment of recursion such that the aim of
the action of ad-joining is not set in relation to a predefined target; rather
pathway and target emerge together.
Indeed, the term ‘precision medicine’ is sometimes preferred to the
synonyms personalized or stratified medicine because it acknowledges
the significance of the necessarily dynamic fit between, for example,
a cancer, drug target, resistance and side effects through repeated moni-
toring and the operationalization of the feedback loop between evalu-
ation and intervention.19 In some cases, the methods of operational
research are applied in conjunction with computational biology with
the aim of identifying a pathway that has a ‘biologically meaningful
objective’: a network is ‘designed (or revised) optimally’ to find ‘the nat-
ural circumstances that trigger one particular pathway but not others’.20
An example of findings based upon pathways defined in molecular terms
rather than by anatomy or traditional disease classification is the recently
reported study (Mateo et al., 2015) of the efficacy of the drug Olaparib,
approved for treating ovarian cancers with BRCA1/2 mutations. This
study built upon the finding that cancers are significantly heterogeneous
at the molecular level and discovered that the variation within one, such
as ovarian cancer, can be more marked than between cancers, such as
ovarian and prostate, when tracked in terms of their differential sensitiv-
ity to particular treatments.
More broadly, we can see the operation of principles of optimization
modelling in the now ubiquitous ordinal tropes of ranking, which ensure
that what counts as best is not given in advance, but rather emerges in a
participative fashion with the (continually changing) requirement to do
and be better (Esposito, 2013; Gerlitz and Lury, 2014; Guyer, 2010). In
these practices the you that is addressed is both specific and a you ‘that is
like everyone else’ (Chun, 2011), only more or less so. The exhortation to
‘Believe in Better’21 pervades contemporary culture and might be seen as
an appropriation of ‘optimism of the will’, recursively calibrating rela-
tions between individuals and populations to establish new forms of
stratification (Fourcade and Healy, 2013; Ruppert, 2011). In the
requirement to be like but better than each other established in relation
to such optimizing practices, you and I are not just different to each other
but different-er: our differences are such that we are always both more
and less different to each other. As the Optimizely commercial platform
informs us, ‘Being personal is no longer optional’,22 or, as the name of a
British financial services comparison website says, GoCompare.23
Indeed, it is not just persons that are invited – or obliged – to participate
in bettering themselves in the compositional practices of personalization:
universities, hospitals, museums, police forces, hotels, holidays, restaur-
ants, brands and schools are also now frequently placed in dynamic
relations of competitive comparison with each other by often mandatory
or non-voluntary inclusion in the recursive partial orderings of ranking
systems. While normalization techniques sometimes provide a statistical
snapshot, a one-off cross-section of a population fixed in relation to a
single environment (the nation, for example), personalization is note-
worthy for the way that it establishes (constantly shifting) grounds for
dynamic stratification in relation to multiple norms in multiple
environments.
Signature Pathways
We consider one further aspect of the making of a pathway of a-typical
individuation by exploring the use of ‘you’ as a shifter. In linguistic
terms, shifters such as ‘this’ and ‘that’ as well as ‘I’ and ‘you’ can only
be understood by reference to the context in which they are uttered. In
other words, a shifter, sometimes also called a place-holder, is an index-
ical term whose meaning cannot be determined without referring to the
message that is being communicated. The ‘you’ in a pathway of person-
alization designates both the person to whom a message is directed and
the ‘you’ that is contained in the message that is sent. In relation to our
description of algorithmic personalization, it is the suturing of this dou-
bling in the shifter that makes a personalized address to the individual
possible and also organizes the activity of shifting as adjoining, creating
constraints that can manipulate constraints in the making of a pathway.
For Jakobson (1971 [1957]), enunciation is encoded in a shifter in the
statement itself. While Jakobson defines the shifter as an indexical
symbol, Lacan defines it as an indexical signifier in order to problematize
the distinction between enunciation and statement. As a signifier, the
shifter ‘I’ is normally part of a statement. As an index, it is also normally
part of the enunciation. For Lacan (1977), this division or distribution of
the ‘I’ or ‘you’ does not merely illustrate the splitting of a subject; it is
that split. Drawing on these understandings of shifters, it seems that the
indexical signifier is not stopped or ‘arrested’ by (representatives of) the
symbolic order (Fenves, 2002) in the anticipatory flux of personalizing
practices.24 In the context of (algorithmic) personalization, it seems that
the shifter is rather paused. Temporary halting incites participation or the
folding of a context into the pathway. Indeed, it is this pausing, the
marking of an interval, a stopping and starting that repetitively gathers
a collectivity. In assembling observers and observed, pausing allows for
both observation and the observation of the observing (Kaldrack and
Röhle, 2014).
Given that a pathway is a process of stopping and starting that repeti-
tively gathers a collectivity, it is not surprising that the ability to identify
some pathways but not others – the signature action Seaver describes – is
currently the source of considerable interest. Frow’s discussion of signa-
ture and brand (2002) is illuminating in this respect. He describes the
signature as a shifter that sets up ‘a tension between representation and
the represented’ and observes that the signature is not only an index of
the act of framing (of adjoining or bordering), but also designates a
naming right. Specifically, Frow argues that the power of the signature
stems from the elision of the difference between the signature as an index
and the taxonomic function of the proper name. This elision is effected in
a particular way by the brand, he asserts, since 'the "Name", when one
abstracts it from the signature it indicates, loses its "index" character and
becomes a "trademark". Like the trademark, the name is of a symbolic
order' (Gandelman, quoted in Frow, 2002: 63).
As Frow observes, the brand’s economic significance as a ‘nexus
between high-speed, continuous flow manufacturing and the reshaping
of people’s habits and lives’ (Ohmann, 1996: 61 in Frow, 2002: 64) is
growing. The detachment from indexicality is what provides the basis for
using the signature as a claim to ownership. Importantly, however, Frow
argues that the brand is in principle reducible to neither a product nor a
corporation. As a quasi-signature or signature-effect, a brand name is
routinely attached to a product range, and even to generations of product
ranges, rather than to singular objects. It is precisely the divisibility of
brand from product (in practices of bordering or framing) that makes
possible the transfer of brand loyalty from one generation of a product to
another. With Frow’s insights, we propose that recommendation algo-
rithms create pathways of a-typical individuation that are always distinct
(divisible and detachable) from both object and person. In consequence,
the ways in which such pathways acquire autonomy or not, and how that
autonomy is recognized,25 constitute the heart of current debates on the
sharing economy. It is here that the politics of collectivity, ownership and
use are being reconfigured.
Conclusion
We have argued that personalization is a mode of a-typical individuation
that is produced in techniques of recursive divisibility (the drawing of
lines of inclusive exclusion and exclusive inclusion). As such, it provides
an entry point into the constitution of what, following Mark Zuckerberg,
we have called the ‘default social’. Crucially, as a numbering practice,
personalization does not involve zooming (Day et al., 2014), a performa-
tive gesture that operates the dynamism of moving from big to small, that
is, a slide from one to many and back again, as if the only difference to be
registered was that of an increase in a uniform quantity (as in what
Badiou calls the count of one). Instead, this is a mode of numbering
that constitutes a default social through forms of de- and re-aggregating,
in which a variety of contexts are included and excluded, such that one is
always more and less than one. In a recursive process that involves
tracking, bordering, folding and pausing, the individual is precisely and
momentarily specified as ‘a you’ (Chun, 2016), that is, as a dividual
(Raunig, 2015; Strathern, 1988). At the same time, pausing allows for
the composition of heterogeneous (numerical-cultural) quantities, in
which qualitative differences of mass are recognized at different levels
of observation as matters of dimension and scale. Put somewhat differ-
ently, the person who is addressed as a you is refracted in multiple partial
orderings that allow for specific forms of comparison and competition
(of better-ing) while the folding of contexts into the pathway creates new
ways of configuring relations between participation and proportion,
sharing, ownership and use in the identification of signature pathways.
Importantly, our argument does not suggest that personalization is
replacing other modes of individuation. Rather, it introduces new tech-
niques that combine in a variety of ways to transform and intensify
contemporary forms of individualism.26 As such, it merely confirms
Hacking’s observation in relation to the history of the making up of
people: ‘The less the determinism, the more the possibilities for con-
straint’ (1991: 194).
Notes
1. Celia Lury would like to acknowledge the support of Economic and Social
Research Council (ESRC), ES/K010689/1, and Sophie Day would like to
acknowledge the support of the National Institute for Health Research
(NIHR) Imperial BioMedical Research Centre.
2. See: https://webarchive.nationalarchives.gov.uk/20130104175839/http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_081118
3. See: http://www.patientslikeme.com
4. See: https://www.stitchfix.com
5. These are practices that Agamben (1998, 2011) associates with sovereignty:
bare life, he argues, has always been the object and the aim of state action,
and it has always been subjected to elaborate mechanisms of both inclusion
and exclusion.
6. See: http://www.last.fm
7. Arvidsson (2016) argues that it is through personalization that platforms
such as Facebook will – on their own and in conjunction with third parties –
benefit from the financialization of everyday life.
8. One of the most famous examples of this group is item-to-item collaborative
filtering, an algorithm developed by Amazon.
9. Seaver goes on:
The collaborative filtering matrix intermeshes the identities of users
and items. It is both possible and typical for a collaborative filter to
take no special account of either, organizing all entities strictly in
terms of ratings: users are known as a [ranked] collection of rela-
tions to items and items are known as a [ranked] collection of
relations to users. Persons and things enjoy no separate modes of
existence in the matrix, which is indeed a function for translating
one into the other. (Seaver, 2012)
In other words, collaborative filtering algorithms do not just determine that
‘Users like you liked items like this’; they also establish that ‘Items like this
liked users like you.’ This ‘collaboration’ is very different from that of the
taste-bearing individuals explored by Bourdieu (1987) in Distinction, where
the relations are those of class and the exercise of taste, and involve symbolic
violence. How pathways of ‘a-typical individuation’ will coincide with,
transform or supersede such ‘demographic’ stratification remains to be seen.
10. For Stafford, analogy is an associative method, a demonstrative and
evidentiary practice. She says: ‘Analogy correlates originality with continuity,
what comes after with what went before. . . . This transport of predicates
involves a mutual sharing in, or partaking of, certain determinable quantita-
tive and qualitative attributes through a mediating image’ (Stafford, 2001: 9).
11. Bateson describes set theory diagrams as ‘a topological approach to the
logic of classification’ (Bateson, 1999: 186). In such diagrams a frame is a
mode of referring by ordering. As Tkacz observes in a commentary on
Bateson, ‘A frame always sorts things as either belonging or not belonging
and this process is mediated by axioms or principles – indeed, the axioms are
what define the frame; they are the conditions of its possibility’ (Tkacz,
2014: 71).
12. Totaro and Ninno (2014) argue that what is fundamental to the recursive
function is that repetition becomes the aim of action.
13. Rabinow’s understanding of adjacency provides another, related set of
terms (Rabinow, 2009). For Rabinow, the concept of adjacency is both
analytic, in that sets of relations must be decomposed and specified, and
synthetic, in that these relations must be recomposed and given new form. In
this process, a neighbourhood emerges as the figure of what moves in
tandem, together, the outcome of the interlinked processes of analysis and
synthesis.
14. Kauffman observes in relation to autonomous agents that: ‘At the end of the
cycle the system is poised to cycle again’ (Kauffman, 2000: 68).
15. The Optimizely (https://www.optimizely.com) platform says that it can:
‘Connect that browsing behavior, demographic information, contextual
clues, and 1st- and 3rd-party data into a complete picture of your customer
that you can use to power personalized experiences.’
16. Hacking (1991) argues that ‘normalcy’ is one of the most socially significant
statistical meta-concepts. We are pointing to the significance of normaliza-
tion without, we hope, imputing any consensus to the very different con-
cepts and trajectories implicated in this meta-concept across a range of
disciplines.
17. Transitivity has a range of meanings in different disciplines. In lin-
guistics, for example, transitivity is a property of verbs that relates to
whether a verb can take direct objects and how many such objects a verb
can take. In mathematics, a binary relation over a set is transitive if, when-
ever an element a is related to an element b, and b in turn is related to an
element c, a is also related to c. The partial ordering produced by the algo-
rithms discussed above organizes liking in relations that are transitive in
both senses.
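In the mathematical sense, the definition just given can be restated compactly (a minimal formalization, added here for reference): for a binary relation R over a set S,
\forall a, b, c \in S : \; (a \, R \, b \ \wedge \ b \, R \, c) \implies a \, R \, c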
18. Parisi (2013) offers another view of the limits of reason, specifically in relation
to computation. She suggests that parametric quantities are discrete entities
that not only select data, as part of the software into which they are scripted;
they may also be infected by data that they are not able to compute:
Instead of being a continuous flow of data, such as a topological
binding of many actualities into one stream of ceaseless variation,
the incomputable . . . is an infinite series of discrete yet incomplete
data that immanently ingresses and becomes uniquely arranged
into algorithmic sets, in which these data acquire togetherness
and continuity. (Parisi, 2013: 170)
19. In precision medicine (or its synonyms), reference is commonly made to the
4Ps, which are predictive, personalized, preventive and participatory.
Some of the advocates of this approach describe current developments as
a revolution:
fueled by several factors: first, an appreciation that medicine is an
information science; second, systems or holistic approaches to
studying the enormous complexities of disease; third, emerging
technologies that will let us explore new dimensions of patient
data space; and fourth, powerful new analytical technologies –
both mathematical and computational – that will let us decipher
the billions of data points associated with each individual. (Hood
and Friend, 2011)
20. See: http://ercim-news.ercim.eu/en82/special/pathway-signatures
21. This is the strap-line employed by Sky, Rupert Murdoch's telecommuni-
cations company, which encourages us all, no matter what, to 'Believe in
Better'. Elsewhere in the UK there is a chain of leisure centres called
'Better', a national insurance company called 'More Than', and
Eurostar, the company that runs trains through the tunnel connecting the
UK to continental Europe, deploys a campaign that employs the hashtag,
‘bettercloser’. There is a Canadian pharmaceutical company that has a range
of products called Be.better; Nike’s current range of products includes a
T-shirt with the slogan ‘bettering’ written across the front; the shoe and
clothing company Timberland uses the advertising strap-line, ‘Best then.
Better now’; the TSB (a UK bank) claims ‘Our TSB Classic Plus account,
just got plusser’; the Wellcome Museum in London invites us in with the
slogan ‘More than ever’; the i-Phone 6 is described as ‘bigger than bigger’;
a recent advertisement for an electric car (an Audi) insists, ‘Like a car,
but better’.
22. See: https://www.optimizely.com
23. See: http://www.gocompare.com/ps/homepage/2.aspx/?Media=GG001&PST=1&device=c&PST=1&gclid=Cj0KEQjwwYK8BRC0ta6LhOPC0v0BEiQApv6jYX5FTYS1gIsxfMkzlNlsaIMdTDT1Y7KLjtZwIIP8Y0MaAvBY8P8HAQ
24. It is hard to avoid drawing a parallel with Althusser’s (1971) discussion of
interpellation: the policeman who calls out ‘Hey, you there’. Althusser’s
approach draws on Lacan’s various discussions of the mirror stage, a
form of pausing in which infants encounter an external sense of coherence,
producing a sense of ‘I’ and ‘you’, that comes to represent a permanent
structure of alienation for Lacan.
25. A paradigmatic example is the recent successful filing of a patent by
Amazon for a method of speculative or anticipatory shipping. See
Coleman (2017).
26. In marketing and many policy fields, for example, the design of optimal
pathways is informed by behavioural economics, in which doing is deployed
as a measure of being. In the terms of our analysis, ‘nudging’ is the identi-
fication and operation of constraints that can manipulate constraints, and
the current investment by business and government in a ‘context-aware’
computational infrastructure seems designed to support the rise of person-
alization as a mode of individuation that will afford the possibility of
dynamic stratification.
References
Agamben G (1998) Homo Sacer: Sovereign Power and Bare Life. Stanford, CA:
Stanford University Press.
Agamben G (2011) The Signature of All Things: On Method. Cambridge: Zone
Books, MIT Press.
Althusser L (1971) Lenin and Philosophy and other Essays, trans. Brewster B.
New York: Monthly Review Press.
Amoore L and Piotukh V (2015) Life beyond big data: Governing with little
analytics. Economy and Society 44(3): 341–366.
Arvidsson A (2016) Facebook and finance: On the social logic of the derivative.
Theory, Culture & Society 33(6): 3–23.
Badiou A (2008) Number and Numbers. Cambridge: Polity Press.
Bateson G (1999) Steps to an Ecology of Mind. Chicago: University of Chicago
Press.
Bourdieu P (1987) Distinction: A Social Critique of the Judgement of Taste, trans.
Nice R. Cambridge, MA: Harvard University Press.
Bowker GC (2014) Big data, big questions: The theory/data thing. International
Journal of Communication 8(2043): 1795–1799. Available at: https://ijoc.org/index.php/ijoc/article/view/2190 (accessed 12 December 2018).
Canguilhem G (1991) The Normal and the Pathological, trans. Fawcett C.
Cambridge: Zone Books, MIT Press.
Chun WHK (2011) Programmed Visions. Cambridge, MA: MIT Press.
Chun WHK (2016) Updating to Remain the Same. Cambridge, MA: MIT Press.
Coleman R (2017) Developing speculative methods to explore speculative ship-
ping: Mail art, futurity and empiricism. In: Wilkie A, Savransky M and
Rosengarten M (eds) Speculative Research: The Lure of Possible Futures.
London: Routledge.
Day S, Lury C and Wakeford N (2014) Number ecologies: Numbers and num-
bering practices. Distinktion 15(2): 123–154.
Dourish P (2014) No SQL: The shifting materialities of database technology.
Computational Culture 4.
Esposito E (2013) Economic circularities and second-order observation: The
reality of ratings. Sociologica 2: 1–20.
Fenves P (2002) Arresting Language: From Leibniz to Benjamin. Stanford, CA:
Stanford University Press.
Feuz M, Fuller M and Stalder F (2011) Personal web searching in the age of
semantic capitalism: Diagnosing the mechanisms of personalisation. First
Monday 16(2, 7 February).
Foucault M (1991) Discipline and Punish: The Birth of the Prison, trans.
Sheridan A. Harmondsworth: Penguin.
Foucault M (2001) The Order of Things: An Archaeology of the Human Sciences.
Abingdon: Routledge.
Fourcade M and Healy K (2013) Classification situations: Life-chances in the
neoliberal era. Accounting, Organizations and Society 38(8): 559–572.
Frow J (2002) Signature and brand. In: Collins J (ed.) High-pop: Making Culture
into Popular Entertainment. Malden, MA: Wiley-Blackwell, pp. 56–74.
Gerlitz C and Helmond A (2013) The like economy: Social buttons and the data-
intensive web. New Media & Society 15(8): 1348–1365.
Gerlitz C and Lury C (2014) Social media and self-evaluating assemblages: On
numbers, orderings and values. Distinktion 15(2): 174–188.
Guyer J (2010) The eruption of tradition: On ordinality and calculation.
Anthropological Theory 10(1–2): 123–131.
Hacking I (1991) How should we do the history of statistics? In: Burchell G,
Gordon C and Miller P (eds) The Foucault Effect. Chicago: University of
Chicago Press.
Hayles KN (2002) Writing Machines. Cambridge, MA: MIT Press.
Hayles KN (2014) Cognition everywhere: The rise of the cognitive nonconscious
and the costs of consciousness. New Literary History 45(2): 199–220.
Heintz B and Vollmer H (2011) Globalizing comparisons: Performance meas-
urement and the ‘numerical difference’ in global governance. Available at:
http://citation.allacademic.com//meta/p_mla_apa_research_citation/5/0/0/4/8/pages500487/p500487-1.php.
Hood L and Friend SH (2011) Predictive, personalized, preventive, participatory
(P4) cancer medicine. Nature Reviews Clinical Oncology 8: 184–187.
Jakobson R (1971 [1957]) Shifters, verbal categories, and the Russian verb. In:
Selected Writings, vol. II: Word and Language. The Hague: Mouton.
Kaldrack I and Röhle T (2014) Divide and share: Taxonomies, orders and masses
in Facebook’s Open Graph. Computational Culture 4.
Kauffman S (2000) Investigations. Oxford: Oxford University Press.
Lacan J (1977) Écrits: A Selection, trans. Sheridan A. London: Tavistock.
Lezaun J, Marres N and Tironi M (2016) Experiments in participation.
In: Miller C, Smith-Doerr L, Felt U and Fouché R (eds) Handbook of
Science and Technology Studies. Cambridge, MA: MIT Press, pp. 195–222.
Mackenzie A (2014) Multiplying numbers differently: An epidemiology of con-
tagious convolution. Distinktion: Journal of Social Theory 15(2): 189–207.
Mateo J, Carreira S, Sandhu S et al. (2015) DNA-repair defects and Olaparib in
metastatic prostate cancer. New England Journal of Medicine online 29
October.
Neyland D (2014) On organizing algorithms. Theory, Culture & Society 32(1):
119–132.
Parisi L (2013) Contagious Architecture: Computation, Aesthetics and Space.
Cambridge, MA: MIT Press.
Prey R (2017) Nothing personal: Algorithmic individuation on music streaming
platforms. Media, Culture & Society 40(7): 1086–1100.
Rabinow P (2009) Marking Time: On the Anthropology of the Contemporary.
Princeton, NJ: Princeton University Press.
Raunig G (2015) Dividuum, Machinic Capitalism and Molecular Revolution,
trans. Derieg A. Cambridge, MA: MIT Press.
Ruppert E (2011) Population objects: Interpassive subjects. Sociology 45(2):
218–233.
Seaver N (2012) Algorithmic recommendations and synaptic functions. Limn 2,
Crowds and Clouds.
Seaver N (2015) The nice thing about context is that everyone has it. Media,
Culture & Society 37(7): 1101–1109.
Simondon G (1992) The genesis of the individual. In: Crary J and Kwinter S
(eds) Incorporations. Cambridge: Zone Books, MIT Press, pp. 297–319.
Stafford BM (2001) Visual Analogy: Consciousness as the Art of Connecting.
Cambridge, MA: MIT Press.
Strathern M (1998) The Gender of the Gift. Berkeley: University of California
Press.
Tkacz N (2014) Wikipedia and the Politics of Mind. Chicago: University of
Chicago Press.
Totaro P and Ninno D (2014) The concept of algorithm as an interpretive key of
modern rationality. Theory, Culture & Society 31(4): 29–49.
Totaro P and Ninno D (2016) Algorithms and the practical world. Theory,
Culture & Society 33(1): 139–152.
Urton G (1997) The Social Life of Numbers: A Quechua Ontology of Numbers
and Philosophy of Arithmetic. Austin, TX: University of Texas Press.
Verran H (2010) Number as an inventive frontier in knowing and working
Australia’s water resources. Anthropological Theory 10(1): 171–178.
Verran H (2011) The changing lives of measures and values: From centre stage
in the fading ‘disciplinary’ society to pervasive background instrument in the
emergent ‘control’ society. Sociological Review 59(2): 60–72.
Celia Lury is Professor and Director of the Centre for Interdisciplinary
Methodologies, Warwick University. The topic of her ESRC Professorial
Fellowship was ‘Order and Continuity: Methods for Change in a
Topological Society’, ES/K010689/1.
Sophie Day is Professor of Anthropology at Goldsmiths, and Visiting
Professor at the School of Public Health, Imperial College London. She
is currently working with Celia Lury and Helen Ward in a collaborative
project, ‘People Like You’: Contemporary Figures of Personalisation
(Wellcome Trust, 205456/Z/16/Z).
This article is part of the Theory, Culture & Society special issue on
‘Thinking with Algorithms: Cognition and Computation in the Work of
N. Katherine Hayles’, edited by Louise Amoore.
Special Issue: Thinking with Algorithms: Cognition and Computation in the Work of N. Katherine Hayles
Theory, Culture & Society 2019, Vol. 36(2) 39–59
© The Author(s) 2019
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/0263276419829541
journals.sagepub.com/home/tcs

How Algorithms Interact: Goffman’s ‘Interaction Order’ in Automated Trading

Donald MacKenzie
University of Edinburgh
Abstract
In a talk in 2013, Karin Knorr Cetina referred to ‘the interaction order of algo-
rithms’, a phrase that implicitly invokes Erving Goffman’s ‘interaction order’. This
paper explores the application of the latter notion to the interaction of automated-
trading algorithms, viewing algorithms as material entities (programs running on
physical machines) and conceiving of the interaction order of algorithms as the
ensemble of their effects on each other. The paper identifies the main way in
which trading algorithms interact (via electronic ‘order books’, which algorithms
both ‘observe’ and populate) and focuses on two particularly Goffmanesque aspects
of algorithmic interaction: queuing and ‘spoofing’, or deliberate deception. Following
Goffman’s injunction not to ignore the influence on interaction of matters external to
it, the paper examines some prominent such matters. Empirically, the paper draws on
documentary analysis and 338 interviews conducted by the author with high-
frequency traders and others involved in automated trading.
Keywords
algorithm, Karin Knorr Cetina, Erving Goffman, high-frequency trading, interaction
order, queuing, spoofing

Corresponding author: Donald MacKenzie. Email: [email protected]
Extra material: http://theoryculturesociety.org/
[H]uman awareness comprises the tip of a huge pyramid of data
flows, most of which occur between machines. (Hayles, 2006: 165)
As Hayles points out, human beings are increasingly enmeshed in a
‘cognisphere’, shared with machines, in which many important processes
take place among those machines, without direct human involvement.
How should what Beer (2009: 987) calls ‘the technological challenges to
human agency offered by the decision-making powers of established and
emergent software algorithms’ be theorised? This paper addresses this
question for one specific area: automated financial trading, especially
high-frequency trading or HFT, which is ultrafast and involves very
large numbers of trades.
The paper takes up a suggestion made by Karin Knorr Cetina in a talk
to the panel ‘Theorizing Numbers’ at the American Sociological
Association, in which she used the evocative phrase: ‘the interaction
order of algorithms’ (Knorr Cetina, 2013). It points us in a somewhat
different direction to much recent work on algorithms, which draws upon
theorists as sophisticated and well-known as Hayles herself (e.g. 1999,
2012; see also Gane et al., 2007), Foucault (e.g. Cheney-Lippold, 2011;
Bucher, 2012), Deleuze (e.g. 1992; see, e.g., Savat and Poster, 2005;
Cheney-Lippold, 2011), Latour (e.g. 2005) and Lash (2002, 2007; see
Beer, 2009).
The term ‘the interaction order’ was coined by Erving Goffman, whose
primary reputation is not as a theorist – even a critic as sympathetic as
Burns (1992) could find his theorising unsystematic and sometimes even
careless – but as a hugely insightful observer of social interaction. ‘The
Interaction Order’ was the title of Goffman’s intended 1982 Presidential
Address to the American Sociological Association, undelivered because
he was already suffering from the cancer that was soon to kill him, but
published the following year (Goffman, 1983). In it, he laid out what he
saw as most central to his life’s work:
Social interaction can be identified narrowly as that which uniquely
transpires in social situations, that is, environments in which two or
more individuals are physically in one another’s response presence.
(Presumably the telephone and the mails provide reduced versions
of the primordial real thing.) . . . My concern over the years has been
to promote acceptance of this face-to-face domain as an analytically
viable one – a domain which might be titled, for want of any happy
name, the interaction order. (Goffman, 1983: 2, emphasis in original)
The uneasy parenthesis in that quotation points to the need to question
the primary role of physical co-presence in Goffman’s conception of the
interaction order. In the decades since 1983, ‘the telephone and the mails’
have been joined by multiple other forms of mediated communication:
electronic mail, text messages and other forms of instant messaging,
social media, Skype and other forms of telepresence, etc. As these have
grown in importance, Knorr Cetina is surely right to suggest supplement-
ing Goffman’s focus on spatial proximity with a broader, temporal
notion of ‘response presence’ as accountability ‘for responding without
inappropriate delay to an incoming attention or interaction request’
(Knorr Cetina, 2009: 74).
Given this paper’s focus on algorithmic trading, it is particularly rele-
vant that both Knorr Cetina herself and Alex Preda have productively
deployed reworked versions of Goffman’s ‘interaction order’ to analyse
human beings trading electronically. Much of Knorr Cetina’s research on
financial markets has concerned foreign-exchange dealers in bank trading
rooms communicating with other traders (in different banks, but person-
ally identifiable and sometimes personally known) via the Reuters
‘conversational dealing’ system, an early electronic system – still in use
today – that combines automated requests for price quotations with the
capacity to formulate Telex-style messages conveying up-to-date market
information, pleasantries (‘please’, ‘thanks’), and the details needed to
settle trades (see, especially, Knorr Cetina and Bruegger, 2002a).
However, Knorr Cetina also examines human traders interacting with
a fully anonymous electronic market (e.g. Knorr Cetina, 2009: 72–3), as
does Preda (2009, 2013). In the work of Knorr Cetina and Preda,
Goffman’s notion of the interaction order gets stretched beyond tem-
poral response presence among spatially separate but identifiable
humans, as ‘the market’ itself becomes a party to ‘postsocial’ interaction
(Knorr Cetina and Bruegger, 2002b). As Knorr Cetina points out, in
projecting ‘the market’, traders’ computer screens project ‘an ‘‘other’’
for participants, with whom these participants interact’ (Knorr Cetina,
2009: 73; see also Knorr Cetina and Preda, 2007). Preda discovers human
traders – no longer in trading rooms, but often physically entirely alone –
trying to disaggregate ‘the market’ into different kinds of agents
(for example, ‘an individual [human] trader, an institution, or a robot’,
i.e. a trading algorithm) that do different things, and sometimes (even
though alone) audibly addressing these absent, imagined, unhearing
others, ‘engaging with ‘‘guys’’, ‘‘dudes’’, and ‘‘buds’’’ (Preda, 2013: 42;
2009: 687).
Knorr Cetina’s invocation of ‘the interaction order of algorithms’
invites us to take yet a further step, which is this paper’s focus: to
extend the notion of ‘interaction order’ to situations in which trading
algorithms interact with each other rather than with human beings. First,
though, we need to be clear what ‘algorithm’ means in this context, and
what it might mean for algorithms to interact. I follow how my inter-
viewees use the term ‘algorithm’. For them, algorithms are not simply the
abstract ‘effective procedures’ (finite sets of exact, ‘mechanical’ instruc-
tions) of metamathematics or computer science. Rather, an ‘algorithm’ is
a material implementation of such a procedure, i.e. a computer program
running on a physical machine.
Although this view of algorithms is implicit in much of the literature
pointed to above – for example, in Lash’s discussion of ‘[p]ower through
the algorithm’ (2007: 71) – it is worth spelling out explicitly that an
algorithm is a material entity that does things materially: ultimately,
electrically. (The need for speed in automated trading means that there
is a sense in which those involved in it have to be materialists. For
example, they cannot successfully conceive of computers as abstract
machines, but have to think of them as assemblages of metal, plastic
and silicon through which electrical signals pass: see MacKenzie
[2014a]. This points to the relevance here of theoretical traditions in
which materiality is prominent, such as ‘media materialism’ [e.g.
Kittler, 2006; Parikka, 2015].) Among the things an algorithm does in
automated trading is to have material effects on the behaviour of other
algorithms; reciprocally, their behaviour influences what it does. The
ensemble of such effects is what I mean by the ‘interaction order of
algorithms’.
Goffman was a thorough-going, albeit tacit, materialist. Human
bodies, their positioning, their physical settings, their gestures, glances,
blushes, etc., are prominent in his work: see, e.g., Goffman (1959, 1963,
1967, 1968). The reader’s intuitions may, however, rebel against the
application of Goffman’s ‘interaction order’ to the mutual effects of algo-
rithms. Their ‘silicon bodies’ differ radically from human flesh, and they
interact explicitly and instrumentally, not subtly and expressively as
humans do. And, of course, as far as we know, trading algorithms
have no self-consciousness, while humans are often painfully self-aware.
Intuitions nevertheless need to be interrogated. The success with which
Knorr Cetina and Preda have applied their extended conceptualisations
of the ‘interaction order’ to human beings trading electronically and
anonymously suggests that we should not reject a priori the notion’s
application to trading by algorithms. After all, the information and
forms of action available to human beings in most of today’s anonymous
electronic markets are often no different from those available to algo-
rithms. Both humans and algorithms face much the same tasks (espe-
cially the task of drawing inferences from the ‘order books’ described in
this paper’s second section) and they act in the same way, by entering,
cancelling, or sometimes modifying orders, even if they do it with differ-
ent tools: humans using visual interfaces, keyboard and mouse; algo-
rithms employing direct, computer-to-computer communication.
This paper therefore asks the reader to suspend intuitive judgement
while it follows Knorr Cetina’s pointer and experiments with applying
Goffman’s ‘interaction order’ to automated trading. The empirical
material drawn on is research by the author on automated trading (espe-
cially on high-frequency trading, but also, for example, on the ‘execution
algorithms’ used by institutional investors to split up big orders), on the
exchanges and other trading venues on which it takes place, and on its
technological underpinnings. In total, 338 interviews have been con-
ducted, mainly in Chicago and New York, with the developers of trading
algorithms, the traders who use them (who are often the same people),
exchange staff, providers of technological services, regulators, etc. These
interviews (which covered both the current practices of automated
trading but also – when the interviewee had had a long enough career to
have first-hand experience of this – the historical processes that have
shaped current practices) have been supplemented by participant obser-
vation at four industry meetings, visits to three data centres that house
algorithmic trading, and examination of web-based discussion forums, of
the technical literature, of trade press, of enforcement actions by regula-
tors, etc.
Five sections follow this introduction. The first sets the stage by draw-
ing on this empirical research to describe the physical settings within
which trading algorithms interact and to identify the most important
way in which they do so. Next comes a section on a form of interaction
discussed in Goffman’s Presidential Address (and also prominent in
ethnomethodological analyses such as Livingston, 1987) that is of huge
importance in automated trading: queuing. Then follows a discussion of
one of Goffman’s most persistent concerns: dissimulation, including a
form of it particularly salient for automated trading, ‘spoofing’. That
section includes a discussion of a fascinating episode in which algorith-
mic action at odds with ‘normal’ behaviour in queues has formed the
basis of an accusation of spoofing. The paper’s penultimate section takes
up Goffman’s reminder not to neglect ‘the dependency of interactional
activity on matters outside the interaction’ (Goffman, 1983: 12) by exam-
ining some of the most important of those matters as they bear upon
algorithmic trading. The paper’s conclusion is, I hope, appropriately
modest: it argues that Goffman’s ‘interaction order’ points us in the
right direction when studying trading algorithms, but it also identifies
the methodological difficulty of research on how trading algorithms
interact.
How Trading Algorithms Interact
As already emphasised, this paper views trading algorithms materially, as
programs running on trading firms’ computer servers. Many, perhaps
most, of those servers are to be found in no more than 15 computer
data centres worldwide, in each of which thousands of trading algorithms
may be running at any one time. Some of these centres are owned by
exchanges such as the New York Stock Exchange; others are multi-user
buildings, such as Chicago’s Cermak, NY4 in Secaucus, New Jersey, and
LD4 in Slough. Cermak used to be a giant printworks (the Sears
Roebuck catalogue was printed there: see MacKenzie, 2014b), but
most other trading data centres are purpose-built, and easy to mistake
for warehouses. They contain few human beings, mainly security and
maintenance personnel. Huge amounts of energy flow into data centres
in the form of electricity, and flow out as heat extracted by powerful
cooling systems (tens of thousands of computer servers packed close
together generate a lot of heat). Those servers are housed on racks in
rows of cages: normally wire-mesh, but sometimes with opaque doors for
privacy. Above the cages is a giant spider’s web of copper and fibre-optic
cables that connects servers to each other (and carries fibre-optic, micro-
wave and satellite signals from the outside world). Some of the cages
contain the servers and switches that make up the computer systems of
exchanges and other organised trading venues; other cages contain the
servers of the firms trading on those exchanges. The reason for the clus-
tering into a remarkably small number of very big buildings is trading
firms’ desire to have their servers ‘co-located’: placed as close as possible
to exchanges’ systems.
With limited exceptions, the trading algorithms running on these ser-
vers do not interact directly with each other, but indirectly, most com-
monly via an exchange’s computer system, and in particular via an
electronic file called the exchange’s ‘central limit-order book’, or more
simply, its ‘order book’. (To avoid cluttering the text, I have gathered
together the main exceptions to its empirical generalisations in
Appendix 1.) A pictorial representation of a typical – but hypothetical,
because I want to use it to illustrate a variety of points as clearly as
possible – order book is in Figure 1. It is an order book for shares,
but (with exceptions briefly described in Appendix 1) the trading of
futures, foreign exchange, US Treasury bonds and stock options is
BIDS TO BUY                                            OFFERS TO SELL
(most recently added ... first added)        (first added ... most recently added)

                                  $45.04     100   200   100   700
                                  $45.03      50   400
                                  $45.02      40   300  1000
                                  $45.01      50    50   200   100
                                  $45.00     100    50   200   600
          200    44   100         $44.99
                300    50         $44.98
                      100         $44.97
    100   100   100    30         $44.96
                      200         $44.95
Figure 1. An example of an order book.
similar in form. On the left-hand side of Figure 1 are the bids to buy the
shares in question: for example, there is a bid to buy 100 shares at $44.99;
a bid to buy 44 shares, also at $44.99, etc. On the right-hand side are the
offers to sell, for example an offer to sell 100 shares at $45.00.
No human traders are to be found in data centres such as Cermak:
humans are in that sense on the periphery of today’s trading. A trading
algorithm that is housed in a data centre enters bids or offers into the
order book (or cancels, or sometimes modifies, bids or offers it has pre-
viously entered) by instructing the network interface card of the com-
puter server on which it is running to send an electronic message through
the cable – typically of the order of 100 metres long – that threads its way
through the spider’s web and connects the server to the exchange’s com-
puter system. That system contains programs called ‘matching engines’,
which process these incoming messages and update the order books for
the shares or other financial instruments being traded. If a matching
engine finds a ‘match’ (a bid to buy a financial instrument, and an
offer to sell it, both at the same price) it executes a trade; otherwise, it
simply adds new bids and offers to the order book.
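For readers who find code clearer than prose, the matching logic just described can be rendered as a minimal sketch. The Python below is my own toy illustration, not any exchange’s software: bids and offers rest in first-in-first-out queues at each price level, an incoming order trades wherever prices cross, and any unfilled remainder joins the back of the queue at its price.

```python
from collections import deque

class ToyMatchingEngine:
    """Minimal central limit-order book with price-time (FIFO) priority."""

    def __init__(self):
        self.bids = {}    # price -> deque of resting quantities (front = first added)
        self.offers = {}

    def submit(self, side, price, qty):
        """Match an incoming limit order; rest any unfilled remainder."""
        resting = self.offers if side == 'buy' else self.bids
        own_book = self.bids if side == 'buy' else self.offers
        # Best prices first: lowest offers for a buy, highest bids for a sell.
        for p in sorted(resting, reverse=(side == 'sell')):
            crosses = (p <= price) if side == 'buy' else (p >= price)
            if not crosses or qty == 0:
                break
            queue = resting[p]
            while queue and qty > 0:
                fill = min(qty, queue[0])
                print(f'trade: {fill} @ ${p:.2f}')
                qty -= fill
                queue[0] -= fill
                if queue[0] == 0:
                    queue.popleft()      # first come, first served
            if not queue:
                del resting[p]
        if qty > 0:                      # remainder joins the back of its price's queue
            own_book.setdefault(price, deque()).append(qty)

engine = ToyMatchingEngine()
engine.submit('sell', 45.00, 100)   # first offer at $45.00: front of the queue
engine.submit('sell', 45.00, 50)    # later offer at $45.00: behind the 100
engine.submit('buy', 45.00, 120)    # liquidity-taking order: fills 100, then 20 of the 50
```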
As well as sending in the bids and offers that populate the order book,
trading algorithms also ‘observe’ it (my term, not interviewees’).
Whenever a matching engine receives a new order or a cancellation or
modification of an existing order, or it finds a match, it sends the
exchange’s feed server a message containing the anonymised details.
That server then disseminates these messages to subscribers to the
exchange’s datafeed. (The ‘hidden orders’ mentioned in Appendix 2
are, however, not disseminated.) The datafeed flows – again through
around 100 metres of fibre-optic cable – to trading firms’ servers,
which use the stream of messages to construct their own electronic ‘mir-
rors’ of the order book.
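The construction of such a ‘mirror’ can be sketched just as simply. In the illustrative fragment below, the message format is invented; the point is only that the firm’s server applies, one by one, the anonymised ‘add’, ‘cancel’ and ‘trade’ messages disseminated by the exchange.

```python
def apply_message(mirror, msg):
    """Update a {side: {price: total quantity}} mirror of the order book from
    one datafeed message (message format invented for illustration)."""
    levels = mirror[msg['side']]
    if msg['type'] == 'add':                    # a new bid or offer
        levels[msg['price']] = levels.get(msg['price'], 0) + msg['qty']
    elif msg['type'] in ('cancel', 'trade'):    # liquidity removed from the book
        levels[msg['price']] -= msg['qty']
        if levels[msg['price']] <= 0:
            del levels[msg['price']]

mirror = {'bid': {}, 'offer': {}}
feed = [
    {'type': 'add',   'side': 'offer', 'price': 45.00, 'qty': 100},
    {'type': 'add',   'side': 'bid',   'price': 44.99, 'qty': 100},
    {'type': 'trade', 'side': 'offer', 'price': 45.00, 'qty': 100},
]
for msg in feed:
    apply_message(mirror, msg)
print(mirror)   # {'bid': {44.99: 100}, 'offer': {}}
```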
Trading algorithms interrogate this mirrored order book in a variety
of ways, seeking to predict price changes. In the order book in Figure 1,
for example, there are offers to sell 4240 shares, and bids to buy 1324;
‘supply’ thus exceeds ‘demand’, and a fall in price might be pre-
dicted. While no sophisticated trading algorithm would rely on a calcu-
lation as simplistic as this, interviewees reported heavy reliance by
algorithms on various forms of weighted average of the numbers of
financial instruments being bid for and offered at different prices, along
with a variety of ways of inferring the dynamics of how the order book is
changing through time. The pervasive concern, discussed below, with
‘spoofing’ means that sophisticated trading algorithms will also deploy
various means of assessing the likelihood that the existing bids and offers
in the order book will actually be cancelled before they are executed, and
will discount those for which this is the case.
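The simpler, weighted-average style of inference can be given a hedged sketch. The weights below (1.0 at the best price, 0.5 one level away, 0.25 two levels away) are the purely illustrative ones mentioned later in this paper, not any interviewee’s parameters, and the quantities aggregate those in Figure 1.

```python
def weighted_total(levels, side, weights=(1.0, 0.5, 0.25)):
    """Sum quantities level by level, weighting the best price most heavily;
    levels beyond the supplied weights are ignored."""
    best_first = sorted(levels.items(), reverse=(side == 'bid'))
    return sum(w * qty for w, (price, qty) in zip(weights, best_first))

# Quantities aggregated per price level, following Figure 1.
bids   = {44.99: 344, 44.98: 350, 44.97: 100}
offers = {45.00: 950, 45.01: 400, 45.02: 1340}

imbalance = weighted_total(bids, 'bid') - weighted_total(offers, 'offer')
print(imbalance)   # negative: weighted 'supply' exceeds 'demand', suggesting a fall
```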
Predictions based on these algorithmic ‘observations’ of the order
book (along with similar observations of the order books for other
instruments whose prices are known to be correlated with those of the
instrument being traded) are used for two main forms of profit-seeking
trading. The conceptually simpler is ‘liquidity-taking’ or ‘aggressive’
trading. Suppose an algorithm’s observations generate the inference
that the price of the shares being traded via the order book in Figure 1
is about to fall. It could then send to the matching engine an order to sell
shares at $44.99, which the matching engine can execute at least in part as
soon as it has processed it, because it can match it with existing bids to
buy at $44.99. (That is why it would be called a ‘liquidity-taking’ order: it
removes an existing order or orders from the order book.) If the price
does indeed fall below $44.99, then the algorithm can buy back shares at
a profit.
‘Liquidity providing’, in contrast, involves an algorithm sending the
matching engine orders that cannot immediately be executed, and its
most systematic form (known as ‘market-making’) involves continually
keeping both a bid and a higher-priced offer in the order book, in the
hope that both will be executed and the difference in their prices captured
as profit. Suppose, for example, that in Figure 1 the same algorithm has
entered into the order book both the bid to buy 100 shares at $44.99 and
the offer to sell 100 shares at $45.00. If both are executed, the algorithm
will make a profit of one cent for each share traded. That sounds negli-
gible, but high-frequency trading involves the buying and selling of huge
numbers of shares, so tiny profits add up.
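A two-line calculation, with an invented and purely illustrative number of round trips, makes the point:

```python
bid_price, offer_price = 44.99, 45.00
shares_per_side = 100

profit_per_round_trip = (offer_price - bid_price) * shares_per_side
print(round(profit_per_round_trip, 2))        # 1.0: one dollar per 100-share round trip

# 50,000 such round trips a day (an invented figure) would gross $50,000.
print(round(profit_per_round_trip * 50_000))  # 50000
```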
A market-making algorithm has just as much need as a liquidity-
taking algorithm electronically to ‘observe’ the contents of the order
book and thus to predict price movements, because if prices move shar-
ply it can easily be left with an inventory of shares the prices of which
have fallen, or with what participants call a ‘short position’ in shares
whose prices have risen.1 The constant observation of the order book by
trading algorithms of all kinds, and the actions they frequently take in
response to that observation, mean that an explicit ‘global’ electronic
representation – a representation of the entirety of ‘the market’ in ques-
tion – plays a much larger role than in most ordinary human social
interaction. (At a party, for example, most participants’ attention is
devoted to a small subset of what is going on, with only an anxious
host or hostess maybe monitoring the event as a whole: see, e.g.,
Goffman, 1963.)
As Yuval Millo pointed out to me in a personal communication, the
crucial role of a global representation in algorithmic trading suggests the
need for nuance, when analysing it, in invoking metaphors – such as
‘swarms’ (see Vehlken, 2013) – in which there is self-organisation result-
ing from local interactions, for example between nearest neighbours.
(There are some local interactions among trading algorithms: see
Appendix 1.) Again, though, the central role of a global representation
is fully consistent with Knorr Cetina’s and Preda’s extensions of
Goffman’s ‘interaction order’. The human traders they studied also
devote much or sometimes even all of their attention to a global repre-
sentation on screen of the overall market, a representation that today is
usually simply a computer file presented in a form (such as Figure 1)
suited to human eyes. Like algorithms, those human traders also simul-
taneously observe and construct the object of their attention.
Queuing
After sketching overall features of the human interaction order, Goffman
(1983: 6) went on ‘to try to identify the basic substantive units, the recur-
rent structures and their attendant processes’, asking ‘[w]hat sort of ani-
mals are to be found in the interactional zoo?’ Among his first examples
was the queue: ‘[w]hat queues protect is ordinal position determined
‘‘locally’’ by first come first placed’ (Goffman, 1983: 16).
That ordering is precisely the one enforced by most matching engines
(for the main exceptions, see Appendix 1). For example, the offer to sell
50 shares at $45.00 in Figure 1 will be executed only once the earlier offer
to sell 100 is executed or cancelled. It is natural to conceptualise this
ordering as a ‘queue’, and that is how participants do indeed think of
it. Queues are of huge importance in automated trading; Pardo-Guerra
(forthcoming) summarises the field’s history as ‘from [trading-floor]
crowds to queues’. As those of my interviewees who had traded manually
in Chicago’s trading pits reported, an ordering similar to the start of a
queue did often emerge in those crowds. In a pit, bids and offers were
either shouted out or hand-signalled, and were thus observable to the
traders crowded into the pit. While a variety of factors – including infor-
mal ‘sharing’ norms and reciprocity – affected who got which trade, there
was often agreement as to which trader had, for example, made the first
bid at a given price, and an informal convention that s/he then deserved
to have that bid executed first. This limited form of ordering was, in
classically ethnomethodological fashion, ‘reflexive, self-organizing, orga-
nized entirely in situ, locally’ (Livingston, 1987: 10). In automated trad-
ing, however, queues are not simply self-organised: they are structured
electronically by exchanges’ matching engines.
Queue position is not a pressing concern for ‘aggressive’ algorithms
(liquidity-taking orders don’t usually encounter queues), but it matters
enormously to market-making algorithms’ liquidity-providing orders. If
these orders are too far back in the queue, they may simply never be
executed, and so no profit will ever be made. Getting to the front of the
queue is a matter of technical expertise (such as the ‘close-to-the-metal’
programming, as participants call it, needed to speed processing by a
computer as a physical machine) and of spatial location. Queue position
is one chief reason why trading firms pay exchanges to co-locate their
servers alongside the exchange’s computer system. Speed, and therefore
queue position, can, however, also be achieved more informally. Before
the electronic messages containing orders reach the matching engine,
they are processed by order gateways. These are normally identical com-
puter servers, running identical software, and identically linked to the
matching engine. However, each gateway typically serves more than one
trading firm, and if a firm has to share a gateway with a firm whose
algorithms send in large numbers of orders, the former’s algorithms’
orders may be delayed. Avoiding this can be a major practical issue; it
is, for example, helpful (I was told by a former high-frequency trader) to
know exactly whom to speak to at the exchange should it happen.
‘If you didn’t know to call that person, you’ll start at some low-level
help-centre desk’.
There are also other subtleties to algorithmic queuing, which go
beyond the need for speed, and which are sometimes deeply controversial
among insiders to the world of automated trading. As both Goffman and
ethnomethodologists such as Livingston (1987) emphasised, the inter-
action order of human queues is a moral order: first come, first served
‘produces a temporal ordering that totally blocks the influence of such
differential social statuses and relationships as the candidates bring with
them to the service situation’ (Goffman, 1983: 14). Especially in US share
trading, a variety of types of bids and offers are available to some algo-
rithms (but not always to others), which can be used to help an algorithm
get to the front of the queue: see Appendix 2. These bids and offers have
generated much controversy (both among my interviewees and also in
public forums: see, e.g., Bodek, 2013). The accusation against them has
in effect been that they allow ‘differential social statuses and relation-
ships’ illegitimately to influence queue position.
Dissimulation
As noted, one of Goffman’s persistent interests was the role of dissimu-
lation in interaction. He was, of course, no naive moralist, and fully
understood that presenting a false impression is sometimes entirely
appropriate (it is, for instance, right for a medical student who is nervous
to hide that fact when treating a patient) and that ‘tact’ – for instance,
pretending not to notice an occurrence that would cause a participant to
lose ‘face’ – is often desirable.
Algorithms, too, dissimulate. Consider the excess of offers to sell in the
order book in Figure 1. Much of it is made up of three large offers (for
1000, 400 and 700 shares) with prices that are at least two ‘levels’ away
from the best offer price of $45.00. Under normal circumstances, the
algorithm (or, perhaps, even human being) that has posted those offers
will have the time to cancel them before they are executed. So maybe they
have been entered into the order book so as to produce an excess of offers
relative to bids, and thus cause other algorithms to predict a price fall
and therefore to sell. The original algorithm can then profit from the
price decline it has caused, for example by buying at a temporarily low
price, cancelling the large offers, and selling when prices recover.
For an algorithm or human to do that is what market participants call
‘spoofing’. It is, for example, what the west London trader Navinder
Singh Sarao, who was arrested in April 2015, was accused of by the
US Department of Justice. Its indictment quotes emails allegedly sent
by Mr Sarao in which he requested technical help in adding a particular
feature to his trading software, ‘a cancel if close function, so that an
order is canceled if the market gets close’, with a further refinement to
permit him ‘to be able to alternate the closeness ie one price away
or three prices away etc etc’ (US Department of Justice, 2015: 7–8; in
Figure 1, an offer to sell at $45.01 is ‘one price away’ from the best offer).
Given that spoofing is illegitimate and generally now illegal (see
below), it is unsurprising that none of my interviewees admitted to writ-
ing algorithms that spoofed. They did, however, talk about how import-
ant it was for any algorithm that made price predictions on the basis of
an analysis of the order book to be able to distinguish ‘real’ orders in that
book from ‘spoof’ orders that would be cancelled before being executed.
One of them had, for example, programmed his firm’s algorithms to give
less weight to a single big order than to multiple small orders of the same
aggregate size, because the former was less likely to be ‘real’. Both he and
another interviewee were experimenting with artificial-intelligence
machine learning techniques – especially ‘support vector machine’
techniques – to make their algorithms more sophisticated in how they
distinguished ‘real’ from ‘spoof’ orders. (One of the surprises of the
interviews with the designers of high-frequency trading algorithms is
the otherwise rather limited use of artificial-intelligence techniques in
price prediction. HFT algorithms, especially market-making algorithms
that have to get to the heads of queues, often employ conceptually very
simple but ultrafast inferences, such as ‘weighted’ counts of bids and
offers or extrapolation to the stock market of movements in the
market for stock-index futures. Liquidity-taking algorithms, which can
afford to act a little more slowly, do employ more sophisticated infer-
ences, but interviewees at firms that specialised in these algorithms
reported that the patterns in order-book dynamics they exploited were
often at the border of statistical significance, and the low signal:noise
ratio caused difficulties for machine-learning techniques.)
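Purely to illustrate the general shape of such a classifier, and with invented features and training data rather than anything an interviewee disclosed, a support vector machine built with the scikit-learn library might look as follows.

```python
# Entirely schematic: a generic scikit-learn classifier, not any firm's system.
from sklearn.svm import SVC

# Invented features per resting order: [size relative to the typical order size,
# price levels away from the best bid/offer, cancel rate of similar past orders]
X = [
    [1.0, 0, 0.10],   # ordinary order at the best price, rarely cancelled
    [0.8, 1, 0.20],
    [6.0, 2, 0.95],   # very large order two levels away, almost always cancelled
    [5.5, 3, 0.90],
]
y = [0, 0, 1, 1]      # 0 = likely 'real', 1 = likely 'spoof'

classifier = SVC(kernel='rbf').fit(X, y)
suspect = [[5.0, 2, 0.85]]
print(classifier.predict(suspect))   # orders classed as 'spoof' are discounted
                                     # when the algorithm sums the order book
```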
What is, from the viewpoint of this paper, a particularly interesting set
of instances of alleged spoofing was described to me by an interviewee in
June 2015. In all the previous examples of spoofing I had encountered,
the alleged ‘fake’ orders were placed not at the best bid or offer, but
one or more levels away from it. The new set concerned orders at
the best bid or offer price, such as the offer to sell 600 shares at $45.00
in Figure 1.
For an algorithm to place a fake order at the best bid or offer price is
potentially an effective means of moving a market, because algorithms
that make inferences based on counts of the contents of the order book
typically (so interviewees told me) ‘weight’ these orders more heavily
than orders further away, partly because those latter orders have trad-
itionally been more likely to be fake. (An algorithm summing the offers in
Figure 1 might assign a weight of 1.0 to the offers at $45.00; a weight of
0.5 to offers at $45.01; 0.25 to offers at $45.02; etc.) However, a fake
order at the best bid or offer price is also dangerous to the intended
spoofer, because it is much more likely to be executed before it is can-
celled (it would be particularly dangerous for a slow human being rather
than a fast algorithm to attempt to spoof in this fashion).
What first led my interviewee’s firm to suspect spoofing was behaviour
at odds with the normal interaction order of queuing. It involved appar-
ent use of the ‘modify up’ instruction in the electronic trading system of
the exchange in question. That instruction alters an existing bid or offer
by increasing the number of financial instruments being bid for or
offered. If this instruction is employed, the order that has been modified
goes to the back of the queue (as in the case of the offer of 600 shares at
$45.00 in Figure 1). ‘You should never do that in a FIFO market’, said
my interviewee. (FIFO is the acronym of ‘first in, first out’, and refers to
the form of queuing discussed in this paper, in which the first order at a
given price received by the matching engine is executed first.)2 Doing
something that caused an order to lose queue position ‘looked weird to
us’, the interviewee reported. One interpretation might have been that
this was ‘incompetent’ or ‘maladjusted’ (Livingston, 1987: 14) queuing
behaviour, but my interviewee’s firm took it to be evidence of spoofing.
By using ‘modify up’, if necessary repeatedly, an order could be kept at
the back of the queue, which is of course exactly what an algorithm that
is spoofing needs to do to reduce the risk of the order being executed.
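Why ‘modify up’ forfeits queue position can be sketched in a few lines; the code is my illustration of the FIFO rule, not the exchange’s implementation.

```python
from collections import deque

# One price level of a FIFO order book: (order id, quantity), front executed first.
level = deque([('A', 100), ('B', 50), ('C', 200), ('D', 300)])

def modify_up(level, order_id, new_qty):
    """Increasing an order's size forfeits time priority: the order leaves its
    place in the queue and is re-appended at the back, as if newly arrived."""
    rest = deque(entry for entry in level if entry[0] != order_id)
    rest.append((order_id, new_qty))
    return rest

level = modify_up(level, 'A', 600)
print(list(level))   # [('B', 50), ('C', 200), ('D', 300), ('A', 600)]
# Repeated use keeps an order at the back of the queue, minimising (as the
# interviewee's firm suspected) the chance of a 'spoof' order being executed.
```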
Fascinating as spoofing is, it does not exhaust the possibilities of algo-
rithmic dissimulation. Execution algorithms are, as noted above, used by
institutional investors to split up large orders; along with high-frequency
trading, they are the other most important form of algorithmic trading.
Their entire rationale is as a form of dissimulation: the goal is for as long
as possible to hide the fact that a big ‘parent’ order (perhaps for a million
or more shares) is being executed, by splitting it into ‘child’ orders for as
few as 100 shares. As an interviewee who headed a major enterprise
providing execution algorithms put it:
we’ll take that huge order and chop it up into little tiny pieces, and if
we do it right anyone who’s looking at it can’t tell that there is a big
buyer: it looks like tiny little retailish trades [the sort of trades a lay
investor might engage in] . . . My job is trying to obscure what my
institutional clients are trying to do, you know, so our role in the
market place is to make it so no-one can work out what the hell’s
going on.
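Stripped of all the sophistication of real execution algorithms, the core move described in that quotation can be sketched as follows; the child-order sizes and random seed are arbitrary.

```python
import random

def slice_parent_order(total_qty, child_sizes=(100, 200, 300)):
    """Split a large 'parent' order into many small, randomly sized 'child'
    orders. Real execution algorithms also randomise timing and venues and
    adapt to market conditions; none of that is modelled here."""
    children = []
    remaining = total_qty
    while remaining > 0:
        child = min(remaining, random.choice(child_sizes))
        children.append(child)
        remaining -= child
    return children

random.seed(1)
children = slice_parent_order(1_000_000)
print(len(children), children[:5])   # thousands of 'tiny little retailish trades'
```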
Unlike spoofing, this form of dissimulation is not merely legal but viewed
as entirely legitimate. Indeed, the most common form of moral framing
in debate over high-frequency trading (see, e.g., Lewis, 2014) is to dis-
tinguish the ‘good’ algorithms and technical systems that hide big orders
from the ‘bad’ HFT algorithms that seek to detect the big parent order
and change their pricing and order submission behaviour appropriately.
That framing, however, is contingent and contestable. Thus, one inter-
viewee, exasperated with what he took to be its facile moralism, reversed
it: ‘I don’t think the guy who’s trying to hide the supply-demand imbal-
ance [by employing an execution algorithm], why is he any better of a
human being than the person trying to discover what the true supply-
de[mand imbalance] is?’
‘[T]he dependency of interactional activity on matters
outside the interaction’
For Goffman, interactions have their own logics and processes, and inter-
action is ‘a particular kind of activity’, which is what warrants speaking
of ‘the interaction order’ just as one might refer to ‘the economic order’
(Goffman, 1983: 5). Goffman, however, also rejected what he called ‘a
rampant situationalism’ (1983: 4). He emphasised repeatedly that, in
words already quoted above, what goes on in interaction depends
‘on matters outside the interaction’, including social relationships and
social structure. Although his discussion of how situations and structures
interrelate is not as fully developed as one might wish (see Burns, 1992),
the broad outlines of Goffman’s account are clear. There is only a ‘loose-
coupling’ relationship (Goffman, 1983: 12) between situations and social
structure, but the latter is a real phenomenon, not reducible to an aggre-
gate of multiple interactions. Social relationships and social structure
shape interactions, but not deterministically: for example, the theoretical
interest for Goffman of the queue is (as indicated above) precisely that it
is a form of interaction in which their influence is, locally, blocked.
Let me, therefore, follow Goffman and give three examples of the
‘loose-coupling’ shaping of algorithmic interaction by ‘matters outside’
it. The first is the changing status of spoofing. When I began interviewing
in 2010, spoofing seemed a routine market practice, at least in futures
trading: ‘most new orders [in the futures market] are fake’, a trader in
Chicago told me in 2014. There was a long tradition of spoofing being
acceptable – in Chicago’s trading pits, I was told by another interviewee,
a successful spoofer was even admired, much as a skilled bluffer in poker
would be – and a tolerant attitude continued in the early years of
the transition to electronic trading (Zaloom, 2006; Arnoldi, 2015).
Recently, however, disapproval has grown sharply, even though two of
the more libertarian-minded of my interviewees still felt strongly that it
was quite wrong for the state to try to take action against spoofing. Until
2014, traders who had engaged in spoofing had only ever been subject to
administrative action, and the resultant fines could in effect be considered
a business expense. However, the Dodd-Frank Act (the main post-crisis
legislation in the US) weakened the legal tests that have to be passed for a
criminal prosecution for spoofing to succeed, and in October 2014 the
first such prosecution began. The trader who told me about the extent of
fake orders also reported that in the three weeks since the indictment,
the incidence of spoofing, as detected by his firm’s algorithms, had gone
down sharply.
The second example concerns the shaping of queuing in US share
trading by federal regulation. As summarised in Appendix 2, US stock
exchanges are not free to have their matching engines structure queues as
they wish. Instead, matching-engine behaviour is governed by Regulation
NMS [National Market System], which, although first implemented
only in 2007, has roots that can be traced back to the late 1970s
(Pardo-Guerra, forthcoming). Back then, the Securities and Exchange
Commission – long suspicious of the dominance of one exchange, the
New York Stock Exchange (NYSE) – sought, with a mandate from
Congress, to create a National Market System that would promote com-
petition without leading to market fragmentation. Two designs for that
system contended. One, backed by prominent economists, was for a
single, national electronic order book to which all brokers and exchanges
would send their orders. Unsurprisingly, the NYSE and most of the more
minor exchanges saw this proposal as a threat to their existence, and
successfully promoted an alternative model in which they would continue
to operate much as they did, but linked by a computer network that
could be built quickly and easily using existing NYSE technology.
Forty years on, that remains the basic structure of US share trading.
The different exchanges are still not fused into a single order book.
Instead, Regulation NMS’s elaborate rules are still seen as necessary to
competition.
It is difficult to read this history without thinking of the prescient
analysis of neoliberalism in Foucault’s lectures on ‘The Birth of
Biopolitics’, delivered (as it happens) in 1979, just as the crucial decisions
were being taken as to how to create more ‘competition’ in US share
trading. Competition is not a natural condition, the Ordoliberals
believed: rather, it has to be ‘produced by an active governmentality’
(Foucault, 2008: 121). Although the influences on it have been more
diffuse, the Securities and Exchange Commission has been the chief
vehicle of that governmentality in US financial markets, and by con-
straining how matching engines organise queues it has significantly
shaped the interaction order of algorithms.
My third example is a domain of automated trading in which there has
been no analogue of that project of governmentality: foreign exchange.
(Financial regulations are still primarily national in scope, while
foreign exchange is intrinsically an international activity that therefore
falls into a gap in regulatory coverage.) In foreign exchange, the
traditionally dominant actors – the big global commercial banks –
have retained, at least until very recently, a degree of market power
that banks have largely lost in other exchange-based trading. However,
weighed down by old ‘legacy’ software systems, and frequently bureau-
cratic, big banks are often not good at the development of the fast,
sophisticated algorithms needed for HFT. When high-frequency trading
of foreign exchange began, the algorithms deployed by small HFT firms
therefore found plentiful opportunities for profitable aggressive trading,
often at the expense of banks’ slower systems. Banks, however, were able
to exert influence on trading venues that had the effect of shutting off
many of those opportunities and thus rendering liquidity-taking unprof-
itable. They have, for example, demanded (often successfully) that their
market-making algorithms be granted ‘last look’ privileges: in other
words, matching engines grant their algorithms a few hundredths of a
second – a tiny period for humans, but an eternity for HFT’s fast
machines – in which to decide whether to permit the matching engine
to consummate a trade. Last look and other measures to constrain
liquidity-taking by HFT algorithms have shifted the ecology of algo-
rithms in foreign exchange: interviewees reported a wholesale shift
from liquidity-taking to liquidity-providing algorithms.
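A schematic sketch of ‘last look’, with an invented tolerance and decision rule, shows the asymmetry it creates:

```python
def consummate_trade(quoted_price, current_price, tolerance=0.0003):
    """Sketch of 'last look' (tolerance and logic invented for illustration):
    before the matching engine finalises a match against the market-making
    bank's quote, the bank's algorithm re-checks the market and may reject
    the trade if prices have moved against it. The few-hundredths-of-a-second
    window itself is not modelled."""
    if abs(current_price - quoted_price) < tolerance:
        return 'trade executed'
    return 'trade rejected'   # the liquidity-taker's 'aggressive' order fails

print(consummate_trade(quoted_price=1.1000, current_price=1.1001))   # executed
print(consummate_trade(quoted_price=1.1000, current_price=1.1004))   # rejected
```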
Conclusion
Let me be clear what this paper is not arguing. It is not claiming that
humans and algorithms are identical beings: plainly they are not. Even in
the brief narratives presented above, their different roles are clear. It is
human beings, not algorithms, that are angered by perceived queue
jumping. It is humans, not algorithms, that are prosecuted for spoofing,
and the traditional legal test – weakened by the Dodd-Frank Act, but still
prominent in legal proceedings – is human intent: did Mr Sarao, for
example, intend his orders to move prices?
Nevertheless, the previous sections of this paper have, I hope, shown
that the limited forms of action available to trading algorithms (to submit
orders, to cancel them, and sometimes to modify them) can nonetheless
give rise to rich forms of strategic interaction. Algorithms use whatever
means are made available to them to get to the front of the electronic
queue; they dissimulate (sometimes legitimately, sometimes not); they
seek to defend their processes of inference against the effects of dissimu-
lation; some enjoy privileged powers denied to others. There is an
increasingly strongly policed, but still vaguely defined, boundary between
legitimate strategic action and illegal spoofing. As the boundary hardens,
so the nature of strategic algorithmic action shifts.3 It is indeed perfectly
possible that in the kinds of markets discussed here, algorithms now act
more strategically than humans can.4 The very fact that human passions
are raised by algorithmic queuing and spoofing, and that the latter can
lead to jail, is indirectly testimony to the richness of how algorithms
interact: we see in that interaction echoes of how we humans interact.
As Knorr Cetina commented in response to a workshop presentation
of this paper, the notion of ‘the interaction order of algorithms’ has a
certain phenomenological adequacy.
The brief discussion in the section immediately before this conclusion
also demonstrates, I would argue, the relevance of one of the main rea-
sons Goffman gave for ‘isolating the interaction order’: that it ‘provides a
means and a reason to examine diverse societies comparatively, and our
own historically’ (Goffman, 1983: 2). Look comparatively across asset
classes (contrasting, for instance, foreign exchange and share trading), or
historically examine how trading has changed, and you find in algorith-
mic interaction not just emergent phenomena, generated reflexively and
locally, but also the traces of wider processes: the efforts to outlaw spoof-
ing and thus keep order books ‘pure’; the continuing market power of
big banks in foreign exchange; even perhaps the decades-long neoliberal
project to give competition – that unnatural, ‘fragile’ thing – a ‘real,
historical existence’ (Foucault, 2008: 131–2).
Modesty, though, is also required, for by now the reader will surely
have noticed a methodological irony. This paper has not employed the
preferred methodology of interactionist sociology, participant observa-
tion. Remarkably, given that HFT firms protect their intellectual prop-
erty fiercely (even gaining interview access is in many cases impossible),
Robert Seyfert of the University of Duisburg-Essen and, especially,
Ann-Christina Lange and colleagues at the Copenhagen Business
School have gained a degree of observational access to HFT firms (see,
e.g., Borch et al., 2015). Observing an HFT firm, however, is not the
same as observing algorithms. Algorithms were interacting in Cermak
when I visited that datacentre, but were of course invisible to me. To be
dependent, in consequence, on the testimony (or even to observe the
actions) of the human beings who write and use trading algorithms is
to rely upon indirect evidence that can mislead. As one of my HFT
interviewees warned me: ‘someone could be in all honesty saying
they’re [their algorithms are] doing [something] when in fact they’re
doing something else, they’re just not measuring it right’.
The interaction of algorithms does leave its traces in changes in order
books and in prices. However, in the order-book and price data available
to academic researchers, trading-account identifiers are usually removed,
making it difficult or impossible to identify sequences of actions by the
same algorithm or even the same trading firm. Researchers employed by
regulatory bodies do have access to account identifiers, but they have
found the task of unravelling patterns of algorithmic interaction (even in
short time periods) computationally, and perhaps conceptually, close to
intractable. Almost a decade on, there is still debate on the causes of the
‘flash crash’, a 20-minute spasm in the US futures and stock markets on 6
May 2010. A working party from five regulatory bodies spent months
seeking to disentangle a broadly similar event in the US Treasury bond
market between 9:33 a.m. and 9:45 a.m. on 15 October 2014, but con-
fessed themselves unable fully to identify ‘[t]he dynamics that drove . . .
trading’ in those 12 minutes (Joint Staff Report, 2015: 33). Furthermore,
any Goffmanian wants to see analyses of routine, not just unusual, inter-
action, but researchers employed by market regulators understandably
often need to focus on the unusual.
We are, in short, still far from having a robust understanding of
how trading algorithms interact. However, the virtue of the concept of
‘interaction order’ is that it focuses our attention on the right issue, which
is indeed interaction. Any individual trading algorithm can perfectly rea-
sonably be seen as the ‘delegate’ of a human being or beings (although
my interviewee’s warning of their possibly defective understanding of its
operations must be borne in mind). But the ensemble of interacting algo-
rithms is not our individual or collective delegate, and while the program
text of a trading algorithm may usually remain unchanged by interaction,
how it materially acts is shaped by interaction. Even individual algo-
rithms thus need to be understood relationally, in the spirit of
Goffman’s unfortunately worded but succinct summary of his relational
sociology: ‘Not, then, men and their moments. Rather moments and
their men’ (Goffman, 1967: 3).
Acknowledgements
I am very grateful for research funding from the European Research Council (grant
291733) and UK Economic and Social Research Council (ES/R003173/1), as well as
for helpful comments from TCS’s three referees.
Notes
1. That is to say, it may have sold shares that its firm does not own.
2. His implicit contrast is with the ‘pro rata’ markets mentioned in Appendix 1,
in which ‘modify up’ can be employed without detrimental effects.
3. The interviewee who told me about the decline in ‘classic’ forms of spoofing
following the first criminal indictment also said that they were being replaced
by orders that were still going to be cancelled, but that acted ‘epistemologically’
(by revealing how ‘real’ other orders in the order book were) rather than through
immediately profitable (but detectable and legally problematic) trades.
4. It is, for example, harder for a human spoofer (who can only act slowly) to
hide his or her traces by using multiple small orders with random sizes rather
than a single all-too-obvious big order.
References
Anon (2009) Direct Edge launches Hide Not Slide order, 28 May. Available at:
http://www.thetradenews.com/print.aspx?id=2597 (accessed 31 December
2012).
Arnoldi J (2015) Computer algorithms, market manipulation and the insti-
tutionalization of high frequency trading. Theory, Culture & Society 33(1):
29–52.
Beer D (2009) Power through the algorithm? Participatory web cultures and the
technological unconscious. New Media & Society 11(6): 985–1002.
Bodek H (2013) The Problem of HFT: Collected Writings on High Frequency
Trading & Stock Market Structure Reform. Decimus Capital Markets.
Borch C, Hansen KB and Lange A-C (2015) Markets, bodies, and rhythms:
A rhythmanalysis of financial markets from open-outcry trading to high-
frequency trading. Environment and Planning D: Society and Space 33(6):
1080–1097.
Bucher T (2012) Want to be on top? Algorithmic power and the threat of invisi-
bility on Facebook. New Media & Society 14(7): 1164–1180.
Burns T (1992) Erving Goffman. London: Routledge.
Cheney-Lippold J (2011) A new algorithmic identity: Soft biopolitics and the
modulation of control. Theory, Culture & Society 28(6): 164–181.
Deleuze G (1992) Postscript on the societies of control. October 59(Winter): 3–7.
Foucault M (2008) The Birth of Biopolitics. London: Palgrave Macmillan.
Gane N, Venn C and Hand M (2007) Ubiquitous surveillance: Interview with
Katherine Hayles. Theory, Culture & Society 24(7–8): 349–358.
Goffman E (1959) The Presentation of Self in Everyday Life. New York:
Doubleday.
Goffman E (1963) Behavior in Public Places: Notes on the Social Organization of
Gatherings. New York: Free Press.
Goffman E (1964) The neglected situation. American Anthropologist 66(6):
133–136.
Goffman E (1967) Interaction Ritual: Essays on Face-to-Face Behavior.
New York: Pantheon.
Goffman E (1968) Stigma: Notes on the Management of Spoiled Identity.
Harmondsworth: Penguin.
Goffman E (1983) The interaction order. American Sociological Review 48(1):
1–17.
Hayles NK (1999) How We Became Posthuman: Virtual Bodies in Cybernetics,
Literature, and Informatics. Chicago: University of Chicago Press.
Hayles NK (2006) Unfinished work: From cyborg to the cognisphere. Theory,
Culture & Society 23(7–8): 159–166.
Hayles NK (2012) How We Think: Digital Media and Contemporary
Technogenesis. Chicago: University of Chicago Press.
Joint Staff Report (2015) The U.S. Treasury Market on October 15, 2014. U.S.
Department of the Treasury, Board of Governors of the Federal Reserve
System, Federal Reserve Bank of New York, U.S. Securities and Exchange
Commission and U.S. Commodity Futures Trading Commission, 13 July.
Available at: http://www.treasury.gov/press-center/press-releases/Documents/Joint_Staff_Report_Treasury_10-15-2015.pdf (accessed 20 August 2015).
Kittler F (2006) Lightning and series – Event and thunder. Theory, Culture &
Society 23(7–8): 63–74.
Knorr Cetina K (2009) The synthetic situation: Interactionism for a global
world. Symbolic Interaction 32(1): 61–87.
Knorr Cetina K (2013) Presentation to panel: Theorizing numbers. Presented at
the American Sociological Association Annual Meeting, New York.
Knorr Cetina K and Bruegger U (2002a) Global microstructures: The virtual
societies of financial markets. American Journal of Sociology 107: 905–951.
Knorr Cetina K and Bruegger U (2002b) Traders’ engagement with markets: A
postsocial relationship. Theory, Culture & Society 19(5–6): 161–185.
Knorr Cetina K and Preda A (2007) The temporalization of financial markets:
From network to flow. Theory, Culture & Society 24(7–8): 116–138.
Lash S (2002) Critique of Information. London: SAGE.
Lash S (2007) Power after hegemony: Cultural studies in mutation. Theory,
Culture & Society 24(3): 55–78.
Latour B (2005) Reassembling the Social: An Introduction to Actor-Network
Theory. Oxford: Oxford University Press.
Lewis M (2014) Flash Boys: Cracking the Money Code. London: Penguin.
Livingston E (1987) Making Sense of Ethnomethodology. London: Routledge.
MacKenzie D (2014a) Be grateful for drizzle. London Review of Books 36(17):
27–30.
MacKenzie D (2014b) At Cermak. London Review of Books 36(23): 25.
Pardo-Guerra JP (forthcoming) Orders of Finance. Cambridge, MA: MIT Press.
Parikka J (2015) A Geology of Media. Minneapolis: University of Minnesota
Press.
Preda A (2009) Brief encounters: Calculation and the interaction order of
anonymous electronic markets. Accounting, Organizations and Society 34:
675–693.
Preda A (2013) Tags, transaction types and communication in online anonym-
ous markets. Socio-Economic Review 11: 31–56.
Savat D and Poster M (eds) (2005) Deleuze and New Technology. Edinburgh:
Edinburgh University Press.
SEC (Securities and Exchange Commission) (2005) 17 CFR Parts 200, 201, et al:
Regulation NMS; Final Rule. Federal Register 70(124): 37496–644.
US Department of Justice (2015) United States of America v. Navinder Singh
Sarao: Criminal complaint. United States District Court, Northern District of
Illinois, Eastern Division, 15 CR 75. Available at: http://www.justice.gov
(accessed 19 August 2015).
Vehlken S (2013) Zootechnologies: Swarming as a cultural technique. Theory,
Culture & Society 30(6): 110–131.
Zaloom C (2006) Out of the Pits: Trading and Technology from Chicago to
London. Chicago: University of Chicago Press.
Appendix 1: Empirical Nuances
There are two other main ways (beyond the entry and cancellation of
orders) in which trading algorithms interact. The first is in electronic
trading venues that, unlike those discussed in the text, do not have central
order books. For example, some venues (especially in bond trading and
foreign exchange) have instead a fixed distinction between participants,
generally algorithmic, that are allowed to post bids and offers – either in
response to requests to do so, or in the form of constantly ‘streamed’ prices
– and other participants, generally human beings, that cannot post prices
but can only accept prices posted by others. Second, even though the
different algorithms being run by a trading firm do not (as far as I can tell)
usually collaborate, they can have effects on each other. Firms normally
have aggregate risk limits that mean, e.g., that if one algorithm has built up
a large position in a particular stock, others are prevented from adding to
it. Also common is software to prevent self-trading (an algorithm selling
financial instruments to another of the firm’s algorithms), which incurs
unnecessary expenses and may attract unwelcome regulatory attention as
potentially setting a ‘false price’. This software has the effect, e.g., that if
one algorithm is offering shares at a given price, then (dependent on the
software’s settings) either all of the firm’s other algorithms are prevented
from bidding to buy shares at that price or the original offer is cancelled (in
effect by an algorithm other than the one that submitted it).
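To make these two behaviours concrete, the check might be sketched as follows; this is a minimal illustration in JavaScript, not any firm's actual system, and the order fields and mode names are invented:

    // Self-trade prevention (illustrative sketch only). If an incoming order
    // would execute against a resting order from the same firm, either the
    // incoming order is rejected or the resting order is cancelled,
    // depending on the software's settings.
    function checkSelfTrade(incoming, restingOrders, mode = 'reject-incoming') {
      for (const resting of restingOrders) {
        const sameFirm = resting.firmId === incoming.firmId;
        const opposite = resting.side !== incoming.side;
        const wouldCross = incoming.side === 'buy'
          ? incoming.price >= resting.price   // buying at or above a resting offer
          : incoming.price <= resting.price;  // selling at or below a resting bid
        if (sameFirm && opposite && wouldCross) {
          return mode === 'reject-incoming'
            ? { action: 'reject', order: incoming }
            : { action: 'cancel-resting', order: resting };
        }
      }
      return { action: 'accept', order: incoming };
    }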
Local interactions of this kind among algorithms are of theoretical
interest, for example if one wishes to apply metaphors such as
‘swarming’. Also interesting in this respect is that algorithms can some-
times learn ‘locally’ about order-book changes via a ‘confirm’ – an elec-
tronic message reporting execution of one of a firm’s orders – before the
corresponding message appears in the overall datafeed.
The discussion of orders in the text is also not exhaustive. As well as
the orders described (which market participants would call ‘limit orders’,
i.e. orders to buy at or below a specified price, or orders to sell at or
above it), it is, for instance, also often possible for an algorithm or
human trader to submit a ‘market order’. This is an order simply to
buy or to sell at the best available price, and it can therefore under
almost all circumstances be executed immediately. The order book thus
contains only limit orders, not market orders, which is why the fuller
name for it is ‘central limit-order book’.
While most matching engines operate a time-priority system of the
kind described in the text, a minority employ ‘pro-rata’ matching, in
which new executable orders are matched against existing orders in pro-
portion to the size of the latter. Certain ‘designated market makers’ (for
example in options) may also be guaranteed a specific proportion of any
incoming executable order.
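As a deliberately simplified sketch of the pro-rata principle (field names are invented; real matching engines add rounding conventions, minimum allocations and the market-maker guarantees just mentioned), consider:

    // Pro-rata matching: an incoming executable order is allocated across
    // resting orders at the best price in proportion to their sizes.
    function proRataAllocate(incomingQty, restingOrders) {
      const total = restingOrders.reduce((sum, o) => sum + o.qty, 0);
      return restingOrders.map(o => ({
        orderId: o.id,
        fill: Math.min(o.qty, Math.floor(incomingQty * o.qty / total)),
      }));
    }

    // An incoming order for 100 against resting orders of 300 and 100 fills
    // them 75/25, i.e. by size rather than by time priority.
    proRataAllocate(100, [{ id: 'a', qty: 300 }, { id: 'b', qty: 100 }]);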
Appendix 2: Intermarket Sweep Orders and Special
Order Types
Regulation NMS in effect decrees that before an exchange’s matching
engine adds a displayable order to its order book it must check the best
(i.e. highest priced) bid and best (i.e. lowest priced) offer available at all
other US exchanges. A matching engine cannot, for example, add to its
order book an offer to sell at the price of the national best bid, but must
electronically route that order for execution to the exchange whose book
contains that bid. In consequence, incoming orders for shares are often
delayed while the matching engine performs this check and waits for it to
be permissible to add them to the order book.
Regulation NMS thus directly shapes queuing, and has given rise to a
variety of ways in which algorithms can improve their queue positions.
The most important and most prevalent is an exception in Regulation
NMS (Securities and Exchange Commission, 2005: 37523) that provides
for an ‘Intermarket Sweep Order’. This is an order bearing a compu-
terised flag indicating that other orders have been sent to other exchanges
that will execute against, and thus remove from their order books, any
orders that block the addition of the flagged order to the order book.
However, only registered broker-dealers and customers authorised by
them are allowed to use the Intermarket Sweep Order flag.
Exchanges have themselves also designed new types of specialised
orders, which often hinge on the fact that Regulation NMS governs
the entry of displayable orders, not hidden orders. Matching engines
always allocate hidden orders positions in the order-book queue
behind displayable orders at the same price, but if those displayable
orders cannot be added to the order book because of the constraints of
Regulation NMS, an initially hidden order can still get to the head of the
queue. Best known of these new orders is one made available by the
exchange Direct Edge called ‘Hide Not Slide’ (Anon, 2009).
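Schematically, and with all names invented for illustration, the Regulation NMS check and the Intermarket Sweep Order exception described above might be sketched as:

    // Before adding a displayable order to the local book, compare it with
    // the national best bid and offer (NBBO). An order that would cross a
    // better price at another exchange must be routed away, unless it
    // carries the ISO flag (the sender has already swept the other venues).
    function handleDisplayableOrder(order, nbbo) {
      const crossesAway = order.side === 'sell'
        ? order.price <= nbbo.bestBid    // selling at or below the best bid
        : order.price >= nbbo.bestOffer; // buying at or above the best offer
      if (crossesAway && !order.isISO) {
        return { action: 'route-away' }; // the source of the queuing delay
      }
      return { action: 'add-to-book' };
    }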
Donald MacKenzie is a Professor of Sociology at the University of
Edinburgh. He is a sociologist and historian of science and technology.
His current research is on the sociology of financial markets, in particular
the development of automated high-frequency trading and of the elec-
tronic markets that make it possible, with a special focus on how trading
algorithms predict the future. His books include An Engine, Not a
Camera: How Financial Models Shape Markets (MIT Press, 2006) and
Material Markets: How Economic Agents Are Constructed (Oxford
University Press, 2009).
This article is part of the Theory, Culture & Society special issue on
‘Thinking with Algorithms: Cognition and Computation in the Work of
N. Katherine Hayles’, edited by Louise Amoore.
Special Issue: Thinking with Algorithms: Cognition and Computation in the Work of N. Katherine Hayles

On the Politics of Chrono-Design: Capture, Time and the Interface

Theory, Culture & Society
2019, Vol. 36(2) 61–87
© The Author(s) 2019
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/0263276418819053
journals.sagepub.com/home/tcs

Michael Dieter
University of Warwick

David Gauthier
University of Amsterdam

Corresponding author: Michael Dieter. Email: [email protected]
Extra material: http://theoryculturesociety.org
Abstract
This article makes a contribution to interface criticism through the notion of chrono-
design: the deliberate shaping of experiences of temporality and time through con-
temporary software techniques and digital technologies. This notion is articulated
through discussions of network optimisation, user experience design, behavioural
tracking, Hansen’s work on 21st-century media and Hayles’ framework of cognitive
assemblages. In particular, the argument considers how contemporary user inter-
faces complicate conventional notions of the rational, self-reflexive subject by
operating beyond consciousness at vast environmental dimensions and accelerated
micro-temporal speeds. These conditions, we argue, provide opportunities for new
forms of behavioural suspense and captivation best exemplified through the figure of
the trap. The politics and aesthetics of captivation, accordingly, should be considered
as central to any expanded ecology of cognition. The article then concludes with a
short demonstration of experimental uses of chrono-design methods applied critic-
ally to political economies of user tracking and data capture as a prompt for further
interdisciplinary applied research in this domain.
Keywords
experience, interface criticism, internet, new media, senses, software studies, time
Despite the commercial promise of instant connectivity, the internet remains
a heterochronic assemblage. ‘Excessive’ browser rendering times, connec-
tion dropouts, a lack of mobile coverage and buffer overload are regular
perturbations that still characterise, even in a banal way, the everyday
experiences of distributed networking. Between a user’s expectations at
the interface, and how information is processed, queued and technically
resolved across globally dispersed communication infrastructures, there
exists an intractable experiential lag or gap. This can often be subtly
sensed through the ‘doubled future-present’ of anticipation and tedium
while waiting for something to load and become usable, a waiting period
most recognisably compensated by ‘visually present rotating, animated
cogs and steadily progressing blue and green lines’ (Munster, 2014: 19).
The spinning wheels, loading bars and ‘throbbers’ we regularly encounter
are important cultural icons for how time is treated and managed today;
taken critically, such whirling animations provide us with a ‘space for
understanding how the now is being made operative’ (Soon, 2017: 100).
Beyond the interface lies a vast domain of automated decision-making
infrastructure – an array of systems geared for queuing, error-checking,
routing and packet-switching procedures – or, in the historical and pol-
itical sense of Alexander Galloway (2004), protocological power.
Grasping how experience is modulated through the temporalities of
these systems, we argue, is essential for understanding how processes
of individuation are enabled or hindered by interface design, especially
in the coordination of cognitive and affective intensities that bring end-
user populations to bay.
User experience design (UX) is an interdisciplinary field of practice
that tackles these challenges by mediating between signals, signs, behav-
iour and cognition. Progress bars and throbbers, in this respect, are part
of a much wider array of techniques and technologies for tempering the
materialities of information. Between inputting a URL into a browser
and the appearance of content, when a data packet is sent to initiate the
first connection with a given server until the last byte is read from the
client’s TCP buffer and interpreted by the browser, a series of zeitkri-
tische or time-critical (Ernst, 2013) mechanisms are triggered and exe-
cuted. Designing for user experience requires a hyper-vigilant stance
toward how these micro-temporal events and cascading forces unfold.
A tenet of the commercial literature on performance optimisation, for
instance, claims the average time to capture the attention and activity of
an end-user is around three seconds (Hogan, 2015), a measure that
roughly corresponds with what William James famously discussed in
The Principles of Psychology (1890) as the ‘specious present’, or the punc-
tuate duration of nowness. After this point, conversion rates drop due to
excessive loading times as shifts occur in the ‘mental context’ of the user;
for instance, as people start daydreaming or simply abandon a task for a
new browser tab or application. While James would describe the specious
present as a kind of ‘saddle-back’ or vessel ‘on which we sit perched, and
from which we look in two directions into time’ (2014: 12), we might
today more accurately think of a closed contraption, a cybernetic trap to
suspend or captivate the wandering mind.
There are diverse metrics to assist with handling temporal perform-
ance for end-users. Conversions, for example, typically refer to measure-
able events facilitated by key performance indicators within a scripted or
planned funnel of interaction. The goal of performance optimisation is to
monitor the rates of conversion by effectively mediating the gaps between
the database, interface and user. This can be achieved by stimulating
interactions, events and affects in order to maintain a context for infor-
mational productivity, essentially reducing the possibilities for depre-
ciated engagement or slack. For designers and developers engaged with
user experience, the affordances of network delays, the technicity of the
browser and capacity of monitoring patterns of behaviour become
increasingly important means for the pursuit of efficiencies. This often
means dealing with temporal processes that contain diverse, overlapping
and asynchronous firings. It is crucial to recognise, in this respect, the
extent to which the web no longer consists simply of documents or pages,
but, following the AJAX paradigm ('asynchronous JavaScript and
XML'), of applications that continually respond to input and work through
interrelated scripts, style-sheets and mark-up. In their geographically
dispersed operations, these applications do not resolve into a uniform,
mechanical rhythm, but propagate a fluctuating momentum based on
highly dispersed ‘data-pours’ that support larger trends toward platfor-
misation (Helmond, 2015). The instabilities and dynamism of these tech-
nical processes, therefore, have notable political economic consequences,
and these impact on forms of cultural production and social interactions
as these complex ensembles are shaped by interface design and related
forms of technical cunning.
In what follows, we consider how temporal indeterminacy, incomplete-
ness and contingency are handled by an array of techniques and technol-
ogies we call chrono-design. In particular, we focus on time deferral and
latency: the interim through which specific causal relations vanish beyond
a horizon of legibility and intelligibility as a potential yet to be fulfilled. We
conceive of this interim as a ‘transactive’ space or logistical zone where
divergent temporal regimes collide, drift and co-ordinate, yielding time
differentials that are measured and weighed against one another. Here,
the browser becomes a conjugation device where micro-temporal regimes
of network signals are coupled with phenomenal renderings on screen and
the actions of ‘direct manipulation’ from users. Taking these dynamic
relations into account, we address how network delays, stoppages and
arrivals are technically resolved with the algorithmic operations of the
browser; how user experience design grapples with this resolution through
an array of specialised techniques; and how these temporal reconfigur-
ations finally give way to new entanglements of power.
One ambition of this piece is to extend proposals for interface criticism
(Andersen and Pold, 2011) and interface critique (Hadler and Haupt,
2016) to consider a set of conceptual concerns and empirical practices
that relate to micro-temporal machine processing, where information is
not yet fully composed but operationally active in fulfilling the promise of
a sensory address. While recent interdisciplinary work on software inter-
faces has emphasised how latency is a widespread problem that is com-
pensated for by programmers and engineers (Ash, 2015; Bucher, 2012;
Mackenzie, 2002), a key aspect of our account is to understand how
specific design techniques are enacted to pursue the captivation and val-
orisation of attention by delivering inattention or ‘nonknowledge’ to end-
users, and how this, in turn, relies on unique forms of cunning and
subterfuge.
Recent work by N. Katherine Hayles (2014, 2016a, 2016b) on cogni-
tive assemblages assists in developing our argument, particularly her
notion of ‘nonconscious cognition’ as a kind of thinking that occurs
beyond human consciousness. Taking into consideration research in cog-
nitive science and neuroscience, she draws attention to subtle modes of
decision-making that might not be fully appreciated or apprehended
within classic philosophical and critical categories of thought. These
include processes of accelerated pattern recognition, the synthesis of sen-
sory inputs, and the capacity to draw inferences to promote certain kinds
of behaviour, among others. Hayles’ ultimate aim is to gain a more
complete view of cognitive ecology applicable to ‘nonhuman cognizers’
such as animals, plants and complex technological systems. In this art-
icle, we advance an analogous framework that also draws on related
work by Mark B. N. Hansen (2012, 2015) to focus on how web-based
user experience design engages with time and the political economy of
profiling. This emphasis allows us to explore how conceptions of a self-
deliberative actor are complicated by accelerated cognitive assemblages
in terms of power, time and design. Here, we are especially interested in
foregrounding a series of tools and methods used to manage the web as a
complex logistical domain of transaction at high speed. This is a kind of
diagnostic interfacing with nonconscious cognition that allows for
experiences of instantaneity to be staged for the end-user, while import-
antly still enacting modes of decision-making and exchange that evade
conscious detection. We reflect on these dynamics of planning and timing
within cognitive assemblages through concepts of traps, captivation and
capture, and demonstrate how these techniques and technologies might
be repurposed to pursue new forms of critical inquiry into the micro-
temporalities of digital infrastructure.
Tertium Quid: Zones of Transaction
As a central influence in the shaping of contemporary software interfaces,
user experience design holds suggestive insights for any theory of cogni-
tive assemblages. As we will discuss, interfaces conduct cognition by chan-
nelling tertium quid – an in-between of subterranean communication,
of parasitic relations in the terms of Michel Serres (1982) or design-as-
trapping as theorised by Vilém Flusser (1999). With efforts to optimise
performance also come novel opportunities to shape end-user behav-
iours. Such techniques indeed resemble a mode of control that targets
cognitive and perceptual limits by engaging with affect, memory and the
‘not yet experienced’ (Parisi and Goodman, 2011). Performance opti-
misation, in particular, works with pre-individual intensities and tech-
nical incompleteness in ways that undo any easy conception of the user as
an autonomous subject. Hayles’ proposition for an expanded ecology of
cognition calls out for considerations of such revised conceptions of
power and politics, particularly as they manifest themselves through
the complexities of trapping human behaviour at the intersections of
information exchange.1
A sustained theme in software studies has been to problematise any
notion that user interfaces yield characteristic forms of critical, intro-
spective and reflexive knowledge. There are accounts of how interfaces
continually simulate revelatory insight and cognitive mapping in ‘an
invisible system of visibility’ (Chun, 2011: 22), or the ‘unworkable’ con-
ditions introduced by interfaces for hermeneutics more generally
(Galloway, 2012). Consumer devices like the Apple iPad have been obvi-
ous targets of scrutiny as inaccessible, opaque systems (Emerson, 2015);
such black-boxed devices are linked to design paradigms that advocate
transparency through the foreclosure of the system’s technical program-
mability and its site of execution. In doing so, they essentially confuse, as
artist and theorist Olia Lialina (2015) observes, Erfahrung with Erlebnis.
That is, rather than experiences embedded in symbolic forms of life, they
deliver us disintegrated impressions at speed. Users are typically taken in
existential terms by these regimes, marking a shift from an interpretative
episteme of deep subjectivity to the ‘surface metaphors’ of an affective
dispositif (Angerer, 2014). Indeed, this shift can be formally observed as
increasingly ubiquitous computational infrastructures hold an intricate,
engineered depth based on layering and nesting schema (Cramer and
Fuller, 2008), while end-user interfaces are increasingly based on post-
skeuomorphic regimes of ‘flatness’.
While upsetting conventional modes of critical thinking, the interface
has nevertheless become a central device for pervasive labour to such a
degree that scrutinising its historical constitution and operations is some-
thing of an urgent task. Tracing a lineage from fluid dynamics to the
design of aircraft cockpits and cybernetics, Branden Hookway (2014) has
provided a useful genealogy that considers how various philosophies of
play, politics and technology that depend on a conceptual conceit of
partitioning are thrown into question by the machinations of human-
machine control. His study of thresholds and liminality, of relations
that simultaneously separate and hold together disparate entities,
makes apparent how any number of trade-offs occur through interfaces
so that ‘interactivity’ might just as easily be replaced by ‘transaction’.
That is, the interface as ‘a form of relation’ is increasingly mobilised as an
‘occasion whereby work may be extracted’ (p. 65). The integrative
techno-economic dynamics of interfaces, in other words, generate
unique forms of value within cognitive assemblages. As transactive por-
tals, this occurs in the back and forth of signals and signs to inaugurate a
median trading zone, one that is predominantly led by corporate services
premised on the capture of personal data. These conduits of cognitive
labour, moreover, call into question categories of self-knowledge and
deliberation to the extent that boundaries for decision-making are con-
tinually transfigured by the asymmetrical optimisation of transactions.
While Hookway’s approach is largely concerned with the interface as a
milieu for subjectivation, we additionally claim this threshold is a third
space or interim in which design techniques become important levers to
establish grounds for murky or obscure modes of exchange.
At this point, let’s re-consider Hayles’ analysis of conscious, uncon-
scious and nonconscious modalities of cognitive assemblages with a focus
on how acute feedback loops are marshalled by interface design toward a
variety of ends. Interfaces, as we have just observed, are essential to this
process by synchronising informational processes to address the ‘costs of
consciousness’ within a context of transactive work. Interfacing with
nonconscious signals is, accordingly, made possible through frameworks
of cognitive translation; these are epistemologies and devices that function
to anticipate and smooth over any contingent events by guiding how
cognitive coherence and higher-level reasoning unfolds. Here, we might
consider the persistent use of Gestalt theory in contemporary design
practice (Johnson, 2013), or lineages of behavioural economics that grap-
ple with cognitive biases following Von Neumann-Morgenstern’s axioms
of game theory (Wendel, 2013). Recall also findings on the apparent
slowness of consciousness, evidenced by the ‘missing half second’, a phe-
nomenon that arises from conditions where cognition is registered as
lagging behind direct experience (Massumi, 1995). It is essential, more-
over, to realise the extent to which such knowledges are co-constituted
with a carefully calibrated experimental infrastructure (Clough, 2008).
That is, these frameworks coincide with a material culture of devices,
but also highly specific ways of working with technical ensembles. User
experience design as an outgrowth of the field of human-computer inter-
action draws from these lineages of behavioural, perceptual and sen-
sory experimentation to habituate users into patterns of action, even to
the point of compulsion (Schüll, 2012), while nevertheless working to
erase its own mediations. This all occurs by binding together signs and
signals into reiterative sequences of action, while encouraging divergent
temporal processes from the milieu intérieur of machines.
From this perspective, the peculiar technical operations of cognitive
assemblages are foreign from the phenomenological effects they produce
and yet, at the same time, are central to the generation of novel experi-
ences and insights they uphold. Recently, Mark B. N. Hansen has dis-
cussed this phenomenon as an ‘operational split’ (2015: 71) between
human awareness and technical operations that, he argues, highlights
the production of two bifurcating registers, or shall we say regimes,
namely the experiential and the operational. These regimes are distinct
since they involve diverging temporal relations, or in other words, they
establish two separate temporal domains: the experiential duration of
consciousness versus the operational micro-temporality of the apparatus.
Importantly, for Hansen, the operational can only be experienced ‘after
the fact’ by feeding forward modulated sensation into consciousness.
Micro-sensors, computational processors and interpreters, in this way,
deliver an unprecedented degree of mediated intervention into
experience by environmentally transforming the possibilities for sense
and perception itself. For Hansen, as a consequence, these infrastructures
‘challenge us to construct a relationship with them’ (p. 37). Taking
up this challenge, user experience design, we could say, might begin
with compensatory gestures for the fact that the vast sensing
capacities of 21st-century media have no direct experiential correlate,
but end in politico-ethical entanglements of behavioural control
(Yeung, 2017).
A key difficulty in negotiating between the experiential and oper-
ational lies with the centralisation of resources in contemporary cognitive
capitalism when it comes to the possibilities of enacting relationships
with nonconscious agencies. Quite simply, we face a problem of ‘unequal
deliberation time’, whereby ‘time itself becomes an agent of surplus value
extraction that operates within a system structurally dedicated to exploit-
ing the imbalance between microtemporal, machinic sensibility and
human consciousness’ (Hansen, 2015: 55). These uneven aspects of dis-
tributed cognition are also a major concern for Hayles; namely, through-
out informational infrastructures, we can observe how nonconscious
agencies are ‘opening up temporal regimes in which the costs of con-
sciousness become more apparent and more systemically exploitable’
(2014: 211). Such conditions become especially clear in her discussions
of personal assistant applications that enact ‘a certain homogenization of
behaviour’ which depletes cognitive functions associated with spatial
navigation and effectively accelerates the channelling of desire through
pattern-induced consumption (Hayles, 2016b). Here, different configur-
ations of awareness emerge throughout a cognitive assemblage based on
competing logics embedded within the interface. Indeed, what emerges
from both Hansen and Hayles’ analyses are considerations of how con-
scious and nonconscious agencies introduce new complexities for
thought, and a demand to cultivate alternative approaches to informa-
tional infrastructures in the context of such exploitative incursions into
the operational present of sensibility.
The socio-political stakes of interfacing with nonconscious systems
become starkly apparent in paradigms of chrono-design where the rally-
ing together of operational and experiential forces seizes upon end-user
populations in advance. Such techniques become the anticipatory means
of conducting cognitive assemblages by mapping out, coordinating and
temporally chaining together socio-technological agencies. Indeed, as
Flusser reminds us, the etymology of the term ‘design’ itself is closely
associated with connotations of scheming, with ‘plots’, ‘intent’, and
‘aims’ (‘to have designs on something’); as he succinctly puts it: ‘the
word occurs in contexts associated with cunning and deceit. A designer
is a cunning plotter laying his traps’ (1999: 17). In his assessment, design
connotes how modes of forethought and deliberation are materially pro-
jected across an environment. These are operations whereby behaviours
are abstracted from a target and cast into contraptions that abduct ‘nat-
ural’ behaviours ‘in the wild’. Traps of this sort are central to paradigms
of service design and user experience manifested in today’s platforms like
Facebook, YouTube or Twitter, for instance. Tying ‘plot’ to the etymol-
ogy of ‘platform’ adds yet another angle to consider the ethico-political
quandaries of behavioural design manifested in commercial arrange-
ments of informational infrastructures (Singleton, 2014). These are
states whereby users unknowingly, and even unwillingly, participate in
operations of abducting value. Emerging behavioural design paradigms,
moreover, have advanced conditions where legal frameworks and
regulations begin to disintegrate (Rouvroy and Stiegler, 2016). These
devices draw on new infrastructural and environmental possibilities
for nonconscious sense and perception and, hence, denote the emer-
gence of a preconscious mode of capitalism. Under these conditions,
a typically slick commercial interface becomes a unique artefact of
métis; it appears 'as a work of magic to those not yet up-to-speed with,
yet in the grip of, its captivating and capturing kairos (the real-time of
its instantaneity and apparent ubiquity)’ (Mellamphy and Mellamphy,
2014: 234).
It is useful to distinguish between notions of ‘capture’ and ‘captiv-
ation’ for the sake of clarifying our argument here. In an essay entitled
‘On Captivation’, Rey Chow and Julian Rohrhuber (2011) develop a
notion of captivation based on the work of anthropologist Alfred Gell
and his art theoretical concepts of traps and abduction. Chow and
Rohrhuber argue that while Gell offers a convincing account of capture
and the dynamics that take place between captive and captor, he none-
theless maintains a strict hierarchy between the artist (captor) and the
spectator (captive), giving a primary ‘intentional’ agency to the former
and a secondary ‘reactive’ agency to the latter. According to Chow and
Rohrhuber, what is missing from Gell’s account is a conceptualisation of
the state of being trapped – that is, a theorisation of the experience of
being held captive from the standpoint of the prey, an experience that
exceeds any formal analysis of power relations. For Chow and Rohrhuber,
captivation is a type of receptivity that is unassimilable with narratives of
freedom that presuppose the existence of autonomous subjects making
rational decisions. Rather, they speak of captivation as a process of sub-
jection which is analogous to Louis Althusser’s concept of interpellation.
What differentiates these two notions, they argue, is that while interpel-
lation emphasises the coherence of the process of identification with an
ideological apparatus, captivation emphasises the de-coherence of iden-
tity as such; in other words, rather than culminating in identity conform-
ing to a structure of domination, captivation takes the form of an
abandonment or losing of the self, a nonproductive process (a distraction
or daydream) whereby ‘politics returns not to government [or governance
of the captor] but to anarchy’ (p. 64).
In media theory, these questions of captivation are typically articu-
lated within the general problematic of cognitive capitalism for which ‘all
the strategies to capture value basically revolve around the issue of atten-
tion time’ (Moulier-Boutang, 2012: 75). Here the work of philosopher
Bernard Stiegler, whose concepts of psychopower and psychotechniques
(Stiegler, 2010) directly engage with the captivation of human attention
by technological means, can be seen as the cornerstone of contemporary
critical studies of the attention economy whose motivation is to question
and problematise the ‘commodification of human capacities of attention’
(Crogan and Kinsley, 2012: 1). While there have been attempts to update
Stiegler’s concept of psychopower, specifically in relation to the coupling
of the interface and habituation (Ash, 2015), we observe that the main
(if not sole) focal point of these studies, as well as Gell’s and Chow and
Rohrhuber’s, is the figure of captive human consciousness imbued with
embodied affective and attentive capacities.
However, if we are to develop a theoretical apparatus that addresses
the aforementioned split between the experiential and operational
regimes of contemporary media, then we must reframe the question of
capture in such a way that does not ultimately rest with a suspended form
of consciousness. Since there is a conceptual link to follow between the
trap, the designer of the trap, and the user who tries to discern its cun-
ning, rather than focusing on the modalities of the prey’s abduction and
engagement, our approach is to unearth, identify and repurpose the oper-
ational methods, discourses, and infrastructures involved in technically
articulating these expressions of cunning or métis. In so doing, we focus
on mechanisms that do not target human cognition per se, but rather
capitalise on its blindness – a kind of parasitic capture sustained by a
well-calibrated infrastructure. We believe that contemporary interfaces
do indeed feed-forward a type of technological sensibility as a means to
captivate or modulate attention, yet we also observe that they do so by
concealing what is in fact deployed, executed and precisely not fed-
forward to the captive.
The captivation of attention, accordingly, is a subterfuge or, rather, is
part of a double subterfuge. Our notion of chrono-design, rather than
directly aligning with Chow and Rohrhuber’s or Stiegler’s perspectives
on captivation, finds closer affinities with Serres’ (1982) idea of capture as
not simply being a relation or a scenario unravelling between captor-
captive, sender-receiver, producer-consumer, artist-spectator, but rather
as a meta-relation, a relation to a relation as tertium quid, where
[the] exchanged thing travels in a channel that is already parasited.
The balance of exchange is always weighted and measured, calcu-
lated, taking into account a relation without exchange, an abusive
relation. The term abusive is a term of usage. Abuse doesn’t prevent
use. The abuse value, complete, irrevocable consummation, pre-
cedes use- and exchange-value. Quite simply, it is the arrow with
only one direction. (1982: 80)
With chrono-design, this functions by deploying techniques and technol-
ogies in ways that leverage bifurcating temporalities between machinic
operations and affective registers. To be clear, the goal of such cunning is
not to inaugurate a level playing field, nor to increase the capacities of
users to sense this difference as such, but to set up an asymmetric zone
of transactions. With their evasion of detection from the consciousness of
end-user populations, such devices are a common characteristic of the
current internet and web as everyday cognitive assemblages.
The Critical Rendering Path
Contemporary web applications are carefully programmed, monitored
and optimised for the potentially chaotic unfolding of networked
events. Technical heuristics are fed forward and back into the noncon-
scious operativity of protocols and machines as logistical orders,
enabling diverse efficiencies, temporalities and applications of ‘infrastruc-
tural intelligence’. In his material examination of information, Paul
Dourish (2015) provides some valuable insights into these material
dynamics, especially by emphasising the need to appreciate the inter-
twined social and technical aspects of informational infrastructure,
a consideration of the specifics of different protocols in terms of weight-
ings and speed, and a close consideration of the relationships between
‘internals’ and ‘externals’ of a network for questions of power, what we
might call the heteronomy of networks. This is an account that inherently
questions the techno-libertarian rhetoric surrounding the internet as
open, radically democratic and decentralised by foregrounding how the
delegation of authority to different network segments (internet as ‘inter-
networking’) leads to concentrations of control and local points of cen-
tralisation. A further aspect relevant to our concerns is how protocols
should be considered in relation to their deployments, rather than simply
taken as abstract plans or instructions. In this way, the internals of
protocol design are seen as always explicitly connected to certain kinds
of external management and administrative authority in practice.
Dourish’s study thus pivots on the question of ‘what structures or con-
straints are needed to allow this flexibility?’ (p. 201). In a similar way,
throughout our discussion, we stress the importance of chrono-design
tools and expertise for maintaining specific patterns of use between
diversely invested actors, from large-scale monopolistic corporations to
casual users to automated agents.
The success of any protocol deployment is never guaranteed. Problems
can arise when heterochronic agencies interrelate, leading to unexpected
conflicts and incongruities that evolve across multiple scales. Intervening
in these crises often requires the elaboration of new mechanisms of con-
trol which can redistribute agencies and consolidate particular power
relations. One might consider, for instance, the introduction of TCP/IP
flow control mechanisms in the 1980s after problems with asymmetrical
bandwidths across networks dragged transmissions to a standstill; a phe-
nomenon known as congestion collapse (Nagle, 1984). Algorithms like
‘slow-start’ and ‘fast retransmit’ written by Van Jacobson and Michael
J. Karels (1988) were devised during this crisis to facilitate efficient inter-
networking by focusing on issues of timing from end-to-end, innovations
which would be crucial for the rapid expansion of the web during the
1990s. More recently, the recognition of latency as a key limit of plat-
formisation by Google engineers (Belshe, 2010) has led to the develop-
ment of the SPDY protocol to reduce the round-trip time (RTT) of data
packets, an initiative since folded into the upgrade of hypertext transfer
protocol (HTTP/2). We might also consider here the controversial pro-
posals for Accelerated Mobile Pages (AMP) that aim to improve user
experience for smartphones, but arguably further reinforce Google’s ad-
based influence over the web (Google, 2017; ampletter.org, 2018). Such
responses to increased traffic and overcrowded buffers are important
since they propel specific arrangements of infrastructural influence
while embedding problems of speed as features of protocological power
itself. Taken together, they add credence to Paul Virilio’s axiomatic
claim: ‘if time is money, as they say, then speed is pure power’ (2001:
26). And indeed, these initiatives have occurred with significant economic
investment in geographically situating and configuring servers to form
content distribution networks (CDNs) (Sandvig, 2015), while laying out
new fibre optic cables across ‘the last mile’ to homes and businesses, all
within the context of rising debates over ‘net neutrality’.
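As a rough illustration of the timing logic at stake, the slow-start cycle can be sketched as follows; this is a toy model written for exposition, not Jacobson and Karels' actual algorithms:

    // Toy model of TCP slow start and congestion avoidance: the congestion
    // window (cwnd, in segments) doubles each round-trip until a threshold,
    // then grows linearly; detected loss halves the threshold and restarts.
    function nextCwnd(cwnd, ssthresh, lossDetected) {
      if (lossDetected) {
        return { cwnd: 1, ssthresh: Math.max(2, Math.floor(cwnd / 2)) };
      }
      if (cwnd < ssthresh) return { cwnd: cwnd * 2, ssthresh }; // slow start
      return { cwnd: cwnd + 1, ssthresh };                      // congestion avoidance
    }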
Problems of speed, accordingly, manifest themselves throughout the
entirety of internet engineering, web development and interface design,
including the preparation of content for delivery. For example, the
notion of CSS sprites is one well-established technique used for rendering
recurring patterns of graphics (Shea, 2004) (Figure 1). Inspired by old
school videogames, the idea involves loading a single blueprint image
into the browser that is repeatedly resourced to reduce the overall
HTTP requests; while the page weight increases, duration or time spent
rendering drops by drawing from a single file. Sprites of iconic buttons
and logos for major commercial websites like Amazon, Twitter, Google
or Facebook are often openly accessible on CDNs, and remain one of the
most pervasive and accessed digital objects on the web. As artefacts of
protocological power, corporate culture and the limits of human atten-
tion, they are resources for micro-temporal processing and nonconscious
operations that support the optimisation mantra: ‘the fastest request is a
request not made’.
Interfaces most assuredly deliver more than meets the eye. For
designers and developers working with the temporalities of cognition,
sense and perception, the critical rendering path is a crucial measure for
trapping technical delays and waiting times within tolerable limits and
optimisation benchmarks. Here, the rendering path, generated by the
written HTML markup script of a given page, becomes a line of con-
straint that follows the order in which a browser is instructed to read,
including how and when to interpret the various elements of a site
(HTML elements, CSS and JavaScript).2 A wide variety of browser-
based tools and methods allow for critical rendering paths to be diag-
nosed and tuned. ‘Empathetic mediations’, for instance, perform the
visual presencing of a site through filmstrips of graphic rendering and
load states in slow motion (Viscomi et al., 2015). Such approaches impli-
cate interface chrono-design with genealogies of media attention that
reach back to early 19th-century experiments with perception (Crary,
2001), particularly as these dovetail with innovations in scientific man-
agement that stem from the ‘crises of control’ that plagued industrialisa-
tion (Beniger, 1986). In the 21st century, these techniques manifest
themselves as frameworks to manage the modulation of global attention
coupled with systems of automatic sensing and decision-making beyond
consciousness.
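One routine gesture along this path can be sketched in a few lines of JavaScript (the URL is a placeholder): a dynamically injected script is fetched and executed asynchronously, so parsing and first paint are not blocked while the network request resolves:

    // Asynchronous script injection: rendering proceeds while the script
    // loads in the background rather than halting on the rendering path.
    const s = document.createElement('script');
    s.src = 'https://example.com/widget.js'; // placeholder third-party URL
    s.async = true;                           // do not block rendering
    document.head.appendChild(s);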
For chrono-design, another seminal logistical method consists of
Gantt charts presented as ‘resource waterfall’ visualisations of network
activity within a given browser’s debugger panel. The overlapping HTTP
transfers arranged in sequence on these charts form the rendering path,
where the slope provides an impression of the site load, so that key
problems or ‘anti-patterns’ can be identified and treated in turn (see
Figure 2). Historically, the Gantt charts on which these waterfall dia-
grams are based were introduced during the Taylorist era of scientific
management as deliberation devices for keeping track of multiple
machine operations and labour processes during industrial manufactur-
ing (Wilson, 2003). The later inclusion of the ‘critical path method’
(CPM) paved the way for enhanced optimisation techniques for timing
Figure 1. Artist researcher Roel Roscam Abbing's collection of found cloud sprites.
Figure 2. Experiments with browser and debugger plug-ins: 'Loading . . . 800% slower' (Gauthier, 2017).
and sequencing a project during the post-war period. If current interfaces
disclose a ‘logistical imaginary’ (Bratton, 2016: 230), then it develops
through these historical lineages of planning and control. Yet where
Gantt charts once referred to the assembly of an industrial product or
post-Fordist knowledge work, following the micro-temporal processing
of networked applications, the critical path now tracks a state of deferral
between network requests and on-screen visual renderings so that user
populations can be captivated just long enough while their computa-
tional requests are resolved.
The critical rendering path as an interface chrono-design technique
mediates the split between the conscious and nonconscious, or what
Hansen refers to as the experiential and operational regimes of contem-
porary media. Structurally, web browsers decouple two principal system
components in their operation: a graphical rendering engine and a script
interpreter. This decoupling facilitates a parallelisation of the experiential
regime of the human gaze and the operational regime of the machine. As
a result, the execution of scripts runs independently of the graphical ren-
dering routines; in short, rendering routines and script interpretation are
separated and executed asynchronously. This temporal parallelisation
leads to a bifurcation of background and foreground, which user experi-
ence design has the task of conjugating. Web interface designers and
application developers typically write scripts to programmatically cap-
ture specific user interactions and instruct the browser’s rendering engine
how to alter the layout of a given page or, for instance, when to make
network requests for new data. As networks evolve and become faster,
not only can more data be transferred back and forth between client and
servers, but more substantial scripts can be written and sent to browsers
to be interpreted ‘just-in-time’ without having the graphical interface
itself impaired by such parallel, nonconscious transactions and
executions.
In terms of the programmatic formulation of networked applications,
requests and callbacks, and ‘promises’ and ‘futures’, are salient chrono-
designed constructs that are integral parts of contemporary program-
ming language vernaculars (ISO/IEC 14882, 2011; ECMA-262, 2015).
By relying on the programmatic constructs of design patterns, current
notions of ‘reactive design’ (Kuhn et al., 2017) uphold this tendency of
planning for and working with asynchrony (Bonér et al., 2014). These
constructs express an application’s logical development and temporal
unfolding, their ontogenetic dimension, how asynchronous and concur-
rent events inscribe their own becoming. It is important to note that
networked applications are necessarily incomplete or dormant within a
time interval as they wait for given requested potentials to be fulfilled.
Coded promises and futures are emblematic artefacts of incompleteness;
within the temporal deferral of completeness, they allow for asynchron-
ous articulations to be expressed programmatically. As constructs, both
promises and futures are, technically speaking, proxies: a reference or a
variable which is yet to be produced, and an assurance that stands for
computation yet to be executed. As proxies for the postponement of data
resolution and computation, they allow application and interface
designers to articulate their written asynchronous program (scripts’
source code) in a sequential or direct style, rather than a continuation-
passing style. More importantly, these proxies allow for their designed
application not to halt execution while waiting for futures and promises
to be resolved. This is how the unfolding of micro-temporal computa-
tional events is shaped and plotted in advance.
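The contrast between the two styles can be sketched in JavaScript (the endpoint is invented for illustration):

    // Continuation-passing style: the rest of the computation is handed
    // over as a callback, to be invoked whenever the response arrives.
    function loadUserCPS(id, callback) {
      const xhr = new XMLHttpRequest();
      xhr.open('GET', '/api/users/' + id); // invented endpoint
      xhr.onload = () => callback(JSON.parse(xhr.responseText));
      xhr.send();
    }

    // Direct style with a promise: 'await' marks the point of deferral,
    // but the program text reads sequentially; execution is suspended,
    // not halted, while the future is resolved.
    async function loadUserDirect(id) {
      const resp = await fetch('/api/users/' + id);
      return resp.json();
    }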
It is worth highlighting how these programmatic constructs underlie
the type of temporal and logical relation that a chrono-designed ‘present’
entertains with its contingent ‘future’. Here, the asynchronous future
does not unfold homogeneously, regularly or mechanically as one logical
moment after another. Rather, delegates and empty placeholders mark
out a state of incompleteness awaiting execution that potentially takes
place through variable temporalities. In practice, the time-critical interval
between browser input and output can have diverse durations: from
milliseconds for a promised computation to be scheduled for execution
by a given JavaScript interpreter to tens of seconds for a large file to be
requested and fetched from a distant server. Anyone who browses the
web most certainly experiences this state of incompleteness on a daily
basis; one only has to observe how site content is disparately loaded
fragment-by-fragment and continuously updated on the status bar of
the browser. This experience of asynchrony may appear mundane on
an infrastructural level, especially when running smoothly, but its effects
can broadly be perceived as an increasingly ubiquitous mode of socio-
political organisation. Asynchronous systems, for instance, are central to
emergent multisided platforms by allowing new ways of ordering flexible
and precarious regimes of digital labour between groups as a ‘sublime
administration of the everyday’ (Pepi, 2016).
Yet there is more to contemporary scripts than simply catering to the
experiential regime of human attention and interaction. Since they can
read and write information to and from certain internal registers of
browsers and make HTTP requests on their own, scripts have become
the cornerstone of new economies around user tracking and profiling.
Between browsers and third-party web servers, these tracking scripts
instantiate a type of transaction that operates through latent causalities
beyond consciousness. In doing so, they leverage the parallel decoupling
of background and foreground to automate the capture of data without
any direct recourse, let alone signalling, to human sense and perception.
This background can be understood as a dynamic ‘protected mode’
memory (Ernst, 2010; Kittler, 2014) where special browser registers are
written to and read from at micro-temporal speeds to allow for a certain
ephemerality to endure (Chun, 2011) from one browsing session to
another. Here, the agency of the web browser and third-party scripts
supersedes the attentive actions of end-users, deriving value by precisely
not addressing consciousness directly, but rather indirectly, by proxy, in
tracking the browser’s memory itself. Script interpreters thus become
sites of friction and enmity as they are operationalised not only to sustain
browsers’ core functionality, but also to inject obfuscated code to harvest
and leak user behaviour for profit. There is a pressing need to engage
critically with these tools and techniques, we want to suggest; to adapt
and redeploy them not simply for the revelatory insight of collecting and
visualising micro-temporal data, but to actively experiment with noncon-
scious problems of speed, captivation and capture in the pursuit of alter-
native infrastructural arrangements.
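To indicate how little such a transaction requires, a schematic sketch (with an invented endpoint and storage key) might read:

    // A third-party tracking script in outline: write a persistent
    // identifier into the browser's storage, then report behaviour through
    // a background request that never addresses the visible interface.
    let uid = localStorage.getItem('_uid');
    if (!uid) {
      uid = Math.random().toString(36).slice(2); // crude identifier
      localStorage.setItem('_uid', uid);         // endures across sessions
    }
    navigator.sendBeacon(
      'https://tracker.example/collect',          // invented third-party server
      JSON.stringify({ uid, page: location.href, t: Date.now() })
    );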
Diagnosing the Present
Throughout this article, we have shown how various temporal aspects of
network transactions demand that media and cultural theories begin to
critically address nonconscious designs that operate below the threshold
of experiential registers, particularly how to deliberate with these agen-
cies, or how to connect differently with their unfolding trajectories.
A central aspect of our argument has been to place an emphasis on the
need to repurpose these chrono-design techniques that sit at the interstice
between the operational and the experiential present of our contempor-
ary informational infrastructures. Most of our theoretical insights in this
article have been informed by such an engagement, including network
diagnosis, which we undertook as part of a project focusing on the notion
of critical rendering path discussed above. What follows is a short dem-
onstration of this empirical research.
In a series of experiments, we measured the micro-temporal rendering
of news websites and devised a method for diagnosing how and when
HTTP requests are made to load and execute third-party scripts or so-
called ‘bugs’ within a browser as a given page loads.3 Although there
exists extensive research on these aspects of web economies, including
studies of social media buttons using digital methods (Gerlitz and
Helmond, 2013), studies of real-time patterns on web platforms
(Weltevrede et al., 2014) and web tracking as such (Elmer, 2004; Howe
and Nissenbaum, 2009; Share Lab, 2015; Van der Velden, 2014), the
uniqueness of our approach is to recognise not simply the presence of
questionable third-party bugs, but also their presencing as temporal and
logistical dynamics worthy of investigation. We align this work with the
notion of ‘technical time critique’; that is, an approach that ‘reveals a
microcosm of time figures that are usually concealed in media appara-
tuses; it is assisted by a phenomenology of the temporal affects that
media induce in people’ (Ernst, 2016: 4). In doing so, we utilised some
of the performance analysis tools already discussed to trace the
processing of third-party scripts as a design logic that unfolds with the
‘operational blindness’ (Hansen, 2012: 33–4) of end-user populations.
This is because some scripts acquire data from users in ways that do
not address sense and perception per se after execution (unlike content
such as buttons, images, text and so on). These scripts contribute to the
temporal resolution or pacing of web applications, but through means
that are construed to precisely evade detection. Our aim was to investi-
gate these ‘hidden’ dynamics by redeploying chrono-design techniques
for purposes of interface time critique.
Using performance optimisation techniques along with the debugging
functionality of browsers, we measured the critical rendering paths of the
top 100 popular news websites according to Alexa.4 Temporal signatures
for each site were visualised by focusing on page load time, or the period
of time between when a network request is made and when the browser
fires events that render a page. This is the timeframe during which a site
becomes usable, when the Document Object Model (DOM) is produced,
and text and images are downloaded and displayed. To provide a base-
line indication of loading patterns, we aimed for a consistency based on
geographic location, browser type and connection using artificial testing
(rather than, for instance, real-time, streaming data or real user metrics).
Tests were run on a ‘cold’ cache, as if the user were loading the page for
the first time. The approach was, therefore, synthetic, but still suggestive
in providing a consistent comparative indication of the overall temporal
signatures and design patterns that constitute online news.
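In outline, the relevant timings can be read from a browser's Navigation Timing API; a minimal sketch:

    // Milliseconds from the start of navigation to DOM construction and to
    // the load event, as exposed by the browser itself.
    const [nav] = performance.getEntriesByType('navigation');
    console.log({
      domContentLoaded: nav.domContentLoadedEventEnd, // DOM is usable
      loadEvent: nav.loadEventEnd,                    // page fully loaded
    });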
In order to identify bugs in the sequence of requests, we relied on
Ghostery’s database (Ghostery, 2015). This enabled us to highlight
which requests to third-party scripts were made during the resolution
of a given page; to identify how many requests were made; when these
were made; how long it took to load these scripts in the browser; and
finally to measure the weight of the scripts’ content in bytes. Our
approach was less concerned with separating out and stratifying the net-
work connections in search of a Gantt-style waterfall slope than with
integrating signal-based patterns to foreground the cumulative and con-
current presencing of bugs.
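A sketch of this diagnostic, with an invented stand-in for Ghostery's database, might look as follows:

    // Enumerate every resource the page requested and flag those matching
    // a tracker list, recording when each was requested, how long it took
    // and its transfer size in bytes.
    const trackerHosts = ['tracker.example', 'ads.example']; // stand-in list
    const bugs = performance.getEntriesByType('resource')
      .filter(r => trackerHosts.some(h => new URL(r.name).hostname.endsWith(h)))
      .map(r => ({
        url: r.name,
        startMs: r.startTime,    // when the request was made
        durationMs: r.duration,  // how long it took to load
        bytes: r.transferSize,   // may be zero for opaque cross-origin responses
      }));
    console.table(bugs);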
Our experiments demonstrate that, for some sites, requests to third-
party scripts far outnumbered requests for first-party content (news
images, text, etc.). Indeed, these scripts at times weighed more bytes
than the total page content, and it cumulatively took more time to
request and load all third-party scripts than to load the actual page
and all its content.
worth mentioning here is The Drudge Report (Figure 3). Despite its min-
imalist graphical interface, we found that 81 per cent of its requests were
made to third-party servers, amounting to a total script weight of 56 per
cent, which took 60 per cent of the gross loading time of the entire page.
What is most striking in the case of The Drudge Report is the sharp
Figure 3. The Drudge Report.
Figure 4. Breitbart.
contrast between its visible interface, composed mainly of ‘Web 1.0’ text
and hyperlinks with few images, and its invisible scripted counterpart,
composed mainly of third-party text-based scripts. Similarly, Breitbart
(Figure 4), the notorious platform for the ‘alt-right’, can be seen as fol-
lowing a comparable loading pattern (which is perhaps unsurprising
given the shared history between them), raising questions about the inter-
sections and embeddedness of surveillance capitalism and political ideol-
ogies along nonconscious, infrastructural domains.
A second finding worth mentioning is how early these dubious scripts
can be requested and loaded into the browser compared to other first-
party elements. For instance, the third request of the site The Atlantic is
for a tracking script, occurring 700 milliseconds after the start of the
overall page load (see Figure 5). The design rationale behind such an
operation can be explained by the aforementioned three-second user
engagement limit of the specious present: the faster a tracker
script can be loaded into the browser without users ever closing the
page, the greater the chance a site has of producing and disclosing per-
sonal user information from the very start. Significantly, monitoring
conversion rates alongside the critical rendering path assists with
tuning this complex array of trackers and scripts loaded into the browser
Figure 5. The Atlantic.
so that the constantly modulating boundaries of collective consciousness
are folded into a wider assemblage of optimisation and profit-seeking.
A tracker that interferes too much with the critical rendering path so that
conversion rates are undermined, in other words, would be at risk of
being adjusted or eliminated from the site altogether.
We thus see how the critical rendering path, in focusing on the
temporal sequencing of visual/perceptual appearances for conscious
attention, necessarily opens up, in parallel, the possibility of exactly
its reverse: a temporal sequencing of non-visual/non-perceptual
operations occurring between such appearances. As exemplified by this brief diag-
nostic study of some third-party scripts and bugs, the inherent duality of
the critical rendering path can be used as the basis for practices of para-
sitic concealment where operational combinations are formulated, engin-
eered, designed and deployed at large within existing networked
infrastructures, meticulously bypassing the dynamic and composite
attention thresholds of end-user populations by lurking in the technical
obscurities of script interpreters (Figure 6). Here, we are faced with a
form of programmed ‘consent’ that does away with direct deliberation.
Indeed, what our study foregrounds is how ‘deliberation shifts from
being an activity that happens at the moment of reception, or in its
incalculable aftermath [. . .] to an activity that happens – that can only
happen – in a fundamentally anticipatory mode, before any encounter
with a cultural object or media network’ (Hansen, 2015: 74, emphasis in
original).
We should note by way of conclusion that a contemporary technical
component of media that equally speaks to this necessarily anticipatory
mode of deliberation is the adblocker. Currently, adblocking is a major
operational tactic that end-users have for intentionally modulating their
proxied consent for executing dubious scripts from unknown third par-
ties. Because the Do Not Track (DNT) standard (W3C, 2017) has not
been appropriately followed by industry (and arguably will not be in the
near future) (EFF, 2012, 2015), the only recourse for intervening in the
parasitic economy of tracking and bugging is to hinder the execution of
scripts at the browser level, as opposed to the server level advocated by
Do Not Track.
Figure 6. Analytic cards.
While browser manufacturers, such as Microsoft, Mozilla, Google and Apple, have already directly integrated script-
blocking functionality in their respective browsers (Microsoft, 2010;
Mozilla Foundation, 2016; Google, 2017; Wilander, 2017), these features
have consequently opened up a space of contention with the growing
third-party ad industry. Future manoeuvres might perpetuate tactics
of obfuscation (Brunton and Nissenbaum, 2015); however, there is a
need to think and act across multiple political, institutional and regula-
tory levels to achieve an informational infrastructure that can deal more
equitably with the trappings of data capture.
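The asymmetry between the two interventions can be put schematically: Do Not Track appends a header that the server is free to ignore, while a blocker refuses matching requests before any script can execute. The sketch below is a caricature with hypothetical patterns, not a description of any browser's actual blocking engine:

```python
import re

DNT_HEADER = {"DNT": "1"}  # the server-side route: a polite, ignorable signal

# EasyList-style rules reduced to two hypothetical patterns.
BLOCK_PATTERNS = [re.compile(p) for p in (r"doubleclick\.net", r"/analytics\.js")]

def should_block(url):
    return any(p.search(url) for p in BLOCK_PATTERNS)

def fetch(url, do_fetch):
    # do_fetch stands in for the browser's network stack.
    if should_block(url):
        return None  # browser-level refusal: the script is never executed
    return do_fetch(url, headers=DNT_HEADER)  # server-level: execution proceeds
```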
What this disagreement highlights, and what our research fore-
grounds, is how the micro-temporal aspects of technical media require
types of diagnostics that stand as moments of deliberation as such. If
current media apparatuses promulgate processes that strategically bypass
the perceptive ability of users to detect them – and thus their cultivated abilities to deliberate about their aims, presence and constituencies at the
moment of reception – then modes of critique must become less hermen-
eutic and more clinical, that is, more diagnostically driven. By using the same techniques that allow such artifices to be programmed and
articulated in the first place, interface critique can thus move from being
an activity centred on the phenomenology of reception, or an aesthetic
that explores historical tendencies, towards a collective project of mater-
ial negotiation and participation with agencies of nonconscious
cognition.
Conclusion
While user experience design strives to obliterate all traces of latency, we
aim to zero in on timings at the smallest scales. The diversity of infra-
structural micro-temporalities needs to be understood as more than a
technical problem of efficiency; these are dynamics of significant social,
political and economic consequence. They are undercurrents occurring at
pre-perceptual dimensions, yet involve settings for logistical decision-
making, data exchange and the capture of behavioural traces that
simultaneously integrate and separate human consciousness from non-
conscious processes. As we have argued, there is a need to experiment
with these mechanisms beyond the sheer optimisation of corporate inter-
est. Part of this involves reconsidering the stakes of cognitive assemblages
as targets of deliberate intervention. The interface is a considerable
domain of transactive power; it is a tertium quid – a zone of exchange
between systems, a device and situation now imbued with evasive asym-
metries by design. Practising chrono-design techniques means initiating a
capacity to forge different kinds of transactions, including the possibili-
ties to hybridize modes of nonconscious decision-making with new con-
ceptual, aesthetic and political orientations. In this article, we have
suggested some problematisations for the analysis of protocols and
speed, along with the plotting of asynchronous scripts and how diagnos-
tics might be utilised as a point of entry into the operations of noncon-
scious infrastructures.
Chrono-design is a significant aspect that shapes informational infra-
structures like the web and the internet. Testing these tools and tech-
niques relies on nurturing a concerted temporal elasticity across cognitive
assemblages to support diverse processes of individuation. Our explor-
ations have also led us to sonifications of tracker data and experiments
using real-time plugins to experience the micro-temporal political
economy of tracking in deconstructive states. These timings are a ‘fallen
time’ not simply of hyper-exploitation but for an alternative sensing with
artefactual temporalities of cognitive assemblages (Murphie, 2007). Yet
how do we engage with methods of user experience design in ways that
are neither overly instrumental nor simply superficial? How can they be
hybridised in interdisciplinary directions and how might they be co-
developed or connect meaningfully with a diversity of critical collectives?
These remain central concerns as we develop an interface critique that
engages with contemporary dilemmas of capture and captivation, an
approach that supports an attachment to the apparatus from the position
of a politicised subject.
Notes
1. Within the limits of this article, we can only gesture to connections with
concepts like noopolitics (Lazzarato, 2006), neuropower (Hauptmann and
Neidich, 2010) and psychopolitics (Han, 2017).
2. For a comprehensive exposition of the notion of the critical rendering path,
and how it conditions a given web page’s loading speed, see Grigorik (2018).
3. Where bugs are categorised as advertisements, analytics, trackers, widgets or
privacy devices.
4. A complete report from our research can be found online (Dieter et al., 2015).
Of interest are the various ‘analytic cards’ on each website (see Figure 6).
References
Ampletter.org (2018) A letter about Google AMP. Available at: http://ampletter.org/ (accessed 29 January 2018).
Andersen, Christian Ulrik and Bro Pold, Søren (eds) (2011) Interface Criticism:
Aesthetics Beyond Buttons. Aarhus: Aarhus University Press.
Angerer, Marie-Luise (2014) Desire After Affect, trans. Grindell, Nicholas.
London: Rowman & Littlefield.
Ash, James (2015) The Interface Envelope: Gaming, Technology, Power. New
York: Bloomsbury.
Belshe, Mike (2010) More bandwidth does not matter (much). Available at:
http://www.belshe.com/2010/05/24/more-bandwidth-doesnt-matter-much/
(accessed 29 January 2018).
Beniger, James R. (1986) The Control Revolution: Technological and Economic
Origins of the Information Society. Cambridge, MA: Harvard University Press.
Bonér, Jonas, Farley, Dave, Kuhn, Roland and Thompson, Martin (2014)
Reactive Manifesto. Available at: http://www.reactivemanifesto.org (accessed
29 January 2018).
Bratton, Benjamin (2016) The Stack: On Software and Sovereignty. Cambridge,
MA: MIT Press.
Brunton, Finn and Nissenbaum, Helen (2015) Obfuscation: A User’s Guide for
Privacy and Protest. Cambridge, MA: MIT Press.
Bucher, Taina (2012) A technicity of attention: How software ‘makes sense’.
Culture Machine 13. Available at: https://www.culturemachine.net/index.php/
cm/article/viewArticle/470 (accessed 29 January 2018).
Chow, Rey and Rohrhuber, Julian (2011) On captivation: A remainder from the
‘Indistinction of Art and Nonart’. In: Bowman, Paul and Stamp, Richard
(eds) Reading Rancière. London: Continuum.
Chun, Wendy (2011) Programmed Visions: Software and Memory. Cambridge,
MA: MIT Press.
Clough, Patricia (2008) The affective turn: Political economy, biomedia and
bodies. Theory, Culture & Society 25(1): 1–22.
Cramer, Florian and Fuller, Matthew (2008) Interface. In: Fuller, Matthew (ed.)
Software Studies: A Lexicon. Cambridge, MA: MIT Press.
Crary, Jonathan (2001) Suspensions of Perception: Attention, Spectacle, and
Modern Culture. Cambridge, MA: MIT Press.
Crogan, Patrick and Kinsley, Samuel (2012) Paying attention: Towards a cri-
tique of the attention economy. Culture Machine 13. Available at: https://
www.culturemachine.net/index.php/cm/article/view/463 (accessed 29 January
2018).
Dieter, Michael, Gauthier, David, Burgas, Kim, Trujillo Garcia, Javier and De
Keulenaar, Emillie (2015) Micro-temporalities of the web. Available at:
http://gauthiier.github.io/www-micro-temporalities (accessed 29 January
2018).
Dourish, Paul (2015) Packets, protocols, and proximity: The materialities of
internet routing. In: Parks, Lisa and Starosielski, Nicole (eds) Signal
Traffic: Critical Studies of Media Infrastructures. Urbana: University of
Illinois Press.
ECMA-262 (2015) ECMA-262 6th Edition: The ECMAScript 2015 Language
Specification. Geneva: ECMA International. Available at: http://www.
ecma-international.org/ecma-262/6.0/#sec-promise-objects.
EFF – Electronic Frontier Foundation (2012) White House, Google, and other
advertising companies commit to supporting ‘do not track’. Available at:
https://www.eff.org/deeplinks/2012/02/white-house-google-and-other-adver-
tising-companies-commit-supporting-do-not-track (accessed 29 January
2018).
EFF – Electronic Frontier Foundation (2015) Coalition announces new ‘do not
track’ standard for web browsing. Available at: https://www.eff.org/press/
releases/coalition-announces-new-do-not-track-standard-web-browsing
(accessed 29 January 2018).
Elmer, Greg (2004) Profiling Machines: Mapping the Personal Information
Economy. Cambridge, MA: MIT Press.
Emerson, Lori (2015) Reading Writing Interfaces: From the Digital to the
Bookbound. Minneapolis: University of Minnesota Press.
Ernst, Wolfgang (2010) Cultural archive versus technomathematical storage. In:
Røssaak, Eivind (ed.) The Archive in Motion: New Conceptions of the Archive
in Contemporary Thought and New Media Practices. Oslo: Novus Press.
Ernst, Wolfgang (2013) From media history to zeitkritik. Theory, Culture &
Society 30(6): 132–146.
Ernst, Wolfgang (2016) Chronopoetics: The Temporal Being and Operativity of
Technological Media. London: Rowman & Littlefield.
Flusser, Vilém (1999) The Shape of Things: A Philosophy of Design. London:
Reaktion.
Galloway, Alexander R. (2004) Protocol: How Control Exists After
Decentralization. Cambridge, MA: MIT Press.
Galloway, Alexander R. (2012) The Interface Effect. Cambridge: Polity Press.
Gauthier, David (2017) Loading . . . 800% slower. In: Pritchard, Helen et al.
(eds) DATA browser 06: Executing Practices. Brooklyn, NY: Autonomedia.
Gerlitz, Carolin and Helmond, Anne (2013) The like economy: Social buttons
and the data-intensive web. New Media & Society 15(8): 1348–1365.
Ghostery (2015) Ghostery makes the web cleaner, faster and safer! Available at:
https://www.ghostery.com (accessed 29 January 2018).
Google (2017) An update on better ads. Available at: https://developers.google.
com/web/updates/2017/12/better-ads (accessed 29 January 2018).
Grigorik, Ilya (2018) Analyzing critical rendering path performance. Available
at: https://developers.google.com/web/fundamentals/performance/critical-
rendering-path/analyzing-crp (accessed 29 January 2018).
Hadler, Florian and Haupt, Joachim (eds) (2016) Interface Critique. Berlin: Kulturverlag Kadmos.
Han, Byung-Chul (2017) Psychopolitics: Neoliberalism and New Technologies of
Power, trans. Butler, Erik. London: Verso.
Hansen, Mark B. N. (2012) Engineering pre-individual potentiality: Technics,
transindividuation, and 21st-century media. SubStance 41(3): 32–59.
Hansen, Mark B. N. (2015) Feed-Forward: On the Future of Twenty-First-
Century Media. Chicago: University of Chicago Press.
Hauptmann, Deborah and Neidich, Warren (2010) Cognitive Architecture: From
Biopolitics to Noopolitics. Architecture & Mind in the Age of Communication
and Information. Rotterdam: 010 Publishers.
Hayles, N. Katherine (2014) Cognition everywhere: The rise of the cognitive
nonconscious and the costs of consciousness. New Literary History 45(2):
199–220.
Hayles, N. Katherine (2016a) The cognitive nonconscious: Enlarging the mind
of the humanities. Critical Inquiry 42: 783–808.
Hayles, N. Katherine (2016b) Cognitive assemblages: Technical agency and
human interactions. Critical Inquiry 43(1): 32–55.
Helmond, Anne (2015) The platformization of the web: Making web data plat-
form ready. Social Media + Society: 1(2): 1–11.
Hogan, Lara Callender (2015) Designing for Performance: Weighing Aesthetics
and Speed. Sebastopol, CA: O’Reilly Media.
Hookway, Branden (2014) Interface. Cambridge, MA: MIT Press.
Howe, Daniel C. and Nissenbaum, Helen (2009) TrackMeNot: Resisting sur-
veillance in web search. In: Kerr, Ian, Steeves, Valerie and Lucock, Carole
(eds) Lessons from the Identity Trail: Anonymity, Privacy, and Identity in a
Networked Society. Oxford: Oxford University Press.
ISO/IEC 14882 (2011) ISO/IEC 14882:2011 Information Technology –
Programming Languages – C++. Geneva: International Organization for
Standardization. Available at: https://www.iso.org/standard/50372.html.
Jacobson, Van and Karels, Michael J. (1988) Congestion avoidance and control. ACM SIGCOMM Computer Communication Review 18(4): 314–329.
James, William (2014) Excerpts from ‘The Principles of Psychology’. In: Arstila,
Valtteri and Lloyd, Dan (eds) Subjective Time: The Philosophy, Psychology
and Neuroscience of Temporality. Cambridge, MA: MIT Press.
Johnson, Jeff (2013) Designing with the Mind in Mind: Simple Guide to
Understanding User Interface Design Guidelines (2nd Edition). Burlington,
MA: Elsevier/Morgan Kaufmann.
Kittler, Friedrich A. (2014) Protected mode, trans. Butler, Erik. In: Kittler,
Friedrich, The Truth of the Technological World: Essays on the Genealogy
of Presence. Stanford, CA: Stanford University Press.
Kuhn, Roland, Hanafee, Brian and Allen, Jamie (2017) Reactive Design Patterns. Shelter Island, NY: Manning.
Lazzarato, Maurizio (2006) The concepts of life and the living in the societies of
control. In: Fuglsang, Martin and Sørensen, Bent (eds) Deleuze and the
Social. Edinburgh: Edinburgh University Press.
Lialina, Olia (2015) Rich user experience, UX and desktopization of war.
Available at: http://contemporary-home-computing.org/turing-complete-
user/ (accessed 29 January 2018).
Mackenzie, Adrian (2002) Transductions: Bodies and Machines at Speed.
London: Continuum.
Marzo, Jorge Luis et al. (2015) Manifesto for a critical approach to the user
interface. Available at: https://interfacemanifesto.hangar.org/ (accessed 29
January 2018).
Massumi, Brian (1995) The autonomy of affect. Cultural Critique 31: 83–109.
Mellamphy, Dan and Mellamphy, Nandita Biswas (2014) From the digital to
the tentacular, or from iPods to cephalopods: Apps, traps, and entrées-with-
out-exit. In: Miller, Paul D and Matviyenko, Svitlana (eds) The Imaginary
App. Cambridge, MA: MIT Press.
Microsoft (2010) IE9 and privacy: Introducing tracking protection. Available at:
https://blogs.msdn.microsoft.com/ie/2010/12/07/ie9-and-privacy-introdu-
cing-tracking-protection/ (accessed 29 January 2018).
Moulier-Boutang, Yann (2012) Cognitive Capitalism. Cambridge: Polity.
Mozilla Foundation (2016) Tracking protection. Available at: https://developer.
mozilla.org/en-US/Firefox/Privacy/Tracking_Protection (accessed 29
January 2018).
Munster, Anna (2014) An Aesthesia of Networks: Conjunctive Experience in Art
and Technology. Cambridge, MA: MIT Press.
Murphie, Andrew (2007) The fallen present: Time in the mix. In: Hassan, Robert
and Purser, Ronald E. (eds) 24/7: Time and Temporality in the Network
Society. Stanford, CA: Stanford University Press.
Parisi, Luciana and Goodman, Steve (2011) Mnemonic memory. In: Clough,
Patricia Ticineto and Willse, Craig (eds) Beyond Biopolitics: Essays on the
Governance of Life and Death. Durham, NC: Duke University Press.
Pepi, Mike (2016) Asynchronous! On the sublime administration of the every-
day. e-flux journal 74. Available at: https://www.e-flux.com/journal/74/59798/
asynchronous-on-the-sublime-administration-of-the-everyday/ (accessed 29
January 2018).
Nagle, John (1984) Congestion Control in IP/TCP Internetworks. IETF Request
for Comments: 896. Available from: https://tools.ietf.org/html/rfc896
(accessed 29 January 2018).
Rouvroy, Antoinette and Stiegler, Bernard (2016) The digital regime of truth: From
the algorithmic governmentality to a new rule of law. La Deleuziana 3: 6–29.
Sandvig, Christian (2015) The internet as anti-television: Distributed infrastruc-
ture as culture and power. In: Parks, Lisa and Starosielski, Nicole (eds) Signal
Traffic: Critical Studies of Media Infrastructures. Urbana: University of
Illinois Press.
Schüll, Natasha Dow (2012) Addiction by Design: Machine Gambling in Las
Vegas. Princeton, NJ: Princeton University Press.
Serres, Michel (1982) The Parasite, trans. Schehr, Lawrence. Baltimore: Johns
Hopkins University Press.
Share Lab (2015) Invisible infrastructures: Online trackers. Available at: https://
labs.rs/en/invisible-infrastructures-online-trackers/ (accessed 29 January 2018).
Shea, Dave (2004) CSS sprites: Image slicing’s kiss of death. Available at:
https://alistapart.com/article/sprites (accessed 29 January 2018).
Singleton, Benedict (2014) On craft and being crafty. PhD thesis, Northumbria
University, UK.
Soon, Winnie (2017) The spinning wheel of life. In: Pritchard et al. (eds) DATA
browser 06: Executing Practices. Brooklyn, NY: Autonomedia.
Stiegler, Bernard (2010) Taking Care of Youth and the Generations. Stanford, CA: Stanford University Press.
Van der Velden, Lonneke (2014) The third party diary: Tracking the trackers on
Dutch governmental websites. NECSUS: European Journal of Media Studies
3(1): 195–217.
Virilio, Paul (2001) Virilio Live: Selected Interviews. London: SAGE.
Viscomi, Rick, Davies, Andy and Duran, Marcel (2015) Using WebPageTest. Sebastopol, CA: O’Reilly Media.
W3C (2017) Tracking Preference Expression (DNT). Available at: http://www.
w3.org/TR/tracking-dnt/ (accessed 29 January 2018).
Weltevrede, Esther, Helmond, Anne and Gerlitz, Carolin (2014) The politics of
real-time: A device perspective on social media platforms and search engines.
Theory, Culture & Society 31(6): 125–150.
Wendel, Stephen (2013) Designing for Behavior Change: Applying Psychology
and Behavioral Economics. Sebastopol, CA: O’Reilly Media.
Wilander, John (2017) Intelligent Tracking Prevention. Available at: https://
webkit.org/blog/7675/intelligent-tracking-prevention/ (accessed 29 January
2018).
Wilson, James (2003) Gantt charts: A centenary appreciation. European Journal
of Operational Research 149: 430–437.
Yeung, Karen (2017) ‘Hypernudge’: Big Data as a mode of regulation by design.
Information, Communication & Society 20: 118–136.
Michael Dieter is Assistant Professor at the Centre for Interdisciplinary
Methodologies, University of Warwick. His research focuses on publish-
ing practices after digitisation, cultural techniques in interface and user-
experience design, and genealogies of media at the intersection of aes-
thetic and political thought.
David Gauthier is a Research Fellow at the Netherlands Institute for
Cultural Analysis (NICA), based at the University of Amsterdam. His
current research explores the various regimes of legibility/illegibility of
computing machinery from the vantage point of mathematics’ relation
with materialism and processes of sense making.
This article is part of the Theory, Culture & Society special issue on
‘Thinking with Algorithms: Cognition and Computation in the Work of
N. Katherine Hayles’, edited by Louise Amoore.
Special Issue: Thinking with Algorithms: Cognition and Computation in the Work of N. Katherine Hayles
Theory, Culture & Society 2019, Vol. 36(2) 89–121
© The Author(s) 2019
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/0263276418818889
journals.sagepub.com/home/tcs

Critical Computation: Digital Automata and General Artificial Thinking

Luciana Parisi
Goldsmiths, University of London

Corresponding author: Luciana Parisi. Email: [email protected]
Extra material: http://theoryculturesociety.org/
Abstract
As machines have become increasingly smart and have entangled human thinking with
artificial intelligences, it seems no longer possible to distinguish among levels of
decision-making that occur in the newly formed space between critical reasoning,
logical inference and sheer calculation. Since the 1980s, computational systems of
information processing have evolved beyond deductive methods of decision, whereby results are already implicated in their premises, and have crucially
shifted towards an adaptive practice of learning from data, an inductive method of
retrieving information from the environment and establishing general premises. This
shift in logical methods of decision-making does not simply concern technical appa-
ratuses, but is a symptom of a transformation in logical thinking activated with and
through machines. This article discusses the pioneering work of Katherine Hayles,
whose study of the cybernetic and computational infrastructures of our culture
particularly clarifies this epistemological transformation of thinking in relation to
machines.
Keywords
abductive reasoning, automation, Hayles, machine learning, non-conscious cognition,
techno-power
At the core of computational systems today there is a latent paradox:
capital’s investment in techno-intelligence has led to the explosion of
non-conscious or pre-cognitive decisions. With high-frequency trading,
Netflix and Amazon recommendation algorithms, with Uber and Airbnb live platforms and micro-targeted online dating sites, cognitive cap-
ital seems to have turned the subsumption of the ‘general intellect’, and
thus of social intelligence, into a crowd of learning algorithms efficiently
driving decisions without the support of consciousness.1 This automation
of the general intellect, based on the frequency of data use and content,
defines a mediatic infrastructure of statistical modelling, pattern recog-
nition, data mining, knowledge discovery, predictive analytics,
self-organizing and adaptive systems. In particular, with the 1990s devel-
opment of machine learning within branches of artificial intelligence
(AI), the automation of cognition has introduced a new mode of algo-
rithmic processing that learns from data without following explicit pro-
gramming. The increasing adoption of machine learning systems across
financial, military, governmental and educational systems is fundamen-
tally challenging notions of automation classically intended as mere
reproduction of physical or mental functions. With machine learning,
we are no longer discussing the automation of manual and mental
work – generally corresponding to how physical and cognitive labour
have become absorbed by the machine in the form of fixed capital.
Instead, this qualitative extension of automation beyond the mechanical
reproduction of instructions involves an overcoming of automation itself,
whereby algorithmic rules now generate or construct patterns from the
re-assemblage of data. What is at stake here is the automation of auto-
mation: the automated generation of new algorithmic rules based on the
granular analysis and multimodal logical synthesis of increasing volumes
of data. In particular, machine learning has been said to define the mani-
festation of a new form of intelligence able to automate automation
(Domingos, 2015: 9). Here, the automation of the intellect does not
simply imply the subsumption of social values through a new rational-
ization of social thinking. The automation of automation instead con-
cerns a meta-level of algorithmic function, whereby social thinking is not
only organized by machines, but is algorithmically engendered through
neural networked layers that eventuate new meanings of artificial think-
ing. The automation of automation therefore points out that the sub-
sumption of the intellect in capital’s valorization of automated cognition
relies upon the social meaning of artificial thinking implied within the
technoscientific descriptions of intelligence.
This article argues that changes in the scientific image of computa-
tion and cognition stem from a socially mediated understanding of
artificial thinking involving not a symbolic representation of ideas but
a dynamic logic of algorithmic learning. These are historical changes in
the scientific and technological descriptions of intelligence stemming
from the computational theorization of the limits of reason, and
post-Second World War experiments with the automation of reasoning
in machines. Katherine Hayles’ view of this shifted meaning of auto-
mated intelligence in terms of non-conscious cognition2 points out that
cognitive systems perform complex modelling and informational tasks
at the fastest speed without abiding by the formal languages of math-
ematics or explicit equations.3
In the attempt to qualify further the distinction between conscious-
ness, unconsciousness and awareness, thinking (involving awareness) and
cognition (that does not require consciousness, but can perform complex
modelling and informational tasks), Hayles discusses the emergence of
what she calls the ‘cognitive non-conscious’ working at a ‘lower level of
neural organization, not accessible to introspection’ (2014: 4). For
Hayles, non-conscious cognition may operate independently from con-
sciousness, but nonetheless it needs to be understood in systemic and not
specific material processes because it involves an ‘intention toward’
defined by its adaptive behaviour and emergent capacities to process
new data (2014: 4–5). In particular, Hayles distinguishes between con-
scious thinking, non-conscious cognition and material processes (2014:
5),4 and argues that technical systems today (from the use of genetic
algorithms in compositional music to language-learning devices such as
Mitchell’s NELL, or never-ending language learning) constitute a built
environment characterized by the exponential growth of non-conscious
cognition devices.
In other words, Hayles addresses the changing meaning of how
machines think in terms of today’s interactive, adaptive and learning
algorithms that skip the logical order of deduction, which was central
to the Enlightenment theorization of the function of reason.5 In agree-
ment with Hayles, this article argues that the non-logical thinking of
automated systems overlaps with the efficacy of a cybernetic calculus
whereby control and prediction rely on inductive learning. Here cyber-
netic control becomes infused with the non-conscious algorithms of cog-
nitive capital.
Hayles (2005) presents us with epistemological shifts in theories of
cognition, which, she suggests, are necessarily embedded in social prac-
tices and discourses (and are thus not to be simply addressed as a sort of
teleological overcoming of humanity). To further account for this ques-
tion of machine thinking, however, this article extends this epistemo-
logical articulation of artificial thinking by borrowing Wilfrid Sellars’
(1963) theorization of the scientific and manifest image. I argue that
the scientific image of intelligence (e.g. the material physical, biological,
computational description of intelligence) is mediated by the manifest
image of intelligence involving the socio-cultural self-awareness of a
form of artificial thinking that admits the capacity of machines to
think conceptually and act rationally. According to Sellars, these
double levels of material and conceptual activities are equally pregnant
with meaning. In order not to fall back into the myth of the given (the
assumption that thinking merely coincides with its neurological descrip-
tions), namely the essentialism of cognition, or the empiricism of scien-
tific descriptions and conceptual forms, both the scientific and manifest
images are to be worked through to explain the relation between the
material and the mental activities we are concerned with.6 From this
standpoint, when speaking of algorithms, computation and AI, this art-
icle argues that it is important to address scientific and technical descrip-
tions as socially mediated meanings. In other words, while there is no
direct translation between the scientific descriptions of the functions of
computation and the conceptual manifestation of their meaning, it cannot be denied that the scientific understanding of computational intelligence is socially mediated, embedded and determined by the socio-
technical meanings of artificial thinking. From this standpoint, a critical
articulation of how machines may think is already implied in the collect-
ive conceptions of automated cognition, which are re-directing Hayles’
distinction between non-conscious and conscious cognition towards the
image of the automation of automation.
While it is arguable that computation involves interdependence
between data, software, code, algorithms, and hardware, the automation
of automation instantiated within new forms of machine learning, for
instance, has emerged from a shift in computational models of logical
reasoning: namely, from deductive truths applied to small data to the
inductive retrieval and recombination of infinite data volumes. In par-
ticular, the transformation of the relation between algorithms and data
contributes to explaining the historical origination of non-deductive rea-
soning, activated with and through machines. As Lorraine Daston (2010)
points out, already during the Cold War the conception of reason as
based on truth, and on the faculty of judgement and discrimination,
became historically reconceptualized in terms of patterns, and reason
as ‘the rule’ came to be understood in terms of ruling procedures with
the task of calculating probability.
This embedding of reasoning into machines is entangled with the
development of statistics and pattern recognition, which define the
socially mediated manifest image of algorithms as learning machines
making predictions by recognizing data (through granular analysis, flex-
ible and modular patterning of categories with textual, visual, phonic
traits). As the system gathers and classifies data, learning algorithms
therefore match-make, select and reduce choices by automatically decid-
ing the most plausible of data correlations. Machine learning indeed is
used in situations where rules cannot be pre-designed, but are, as it were,
achieved by the computational behaviour of data. Machine learning is
thus the inverse of programming: the question is not to deduce the output
from a given algorithm, but rather to find the algorithm that produces
this output (Domingos, 2015: 7). Algorithms must then search for data to
solve a query. The more data is available, the more learning there can be.
As statistics and probability theory enter the realm of AI with learning
algorithms in neural networks, new understandings of cognition, logical
thinking and reasoning have come to the fore.
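Domingos’ inversion can be illustrated with a deliberately trivial example. Instead of hand-coding the rule ‘flag any value above 0.5’, a learner is handed labelled examples and left to induce a rule of its own, which it then applies to unseen cases. The sketch assumes scikit-learn and stands for the inductive move only, not for any production system:

```python
from sklearn.tree import DecisionTreeClassifier

X = [[0.1], [0.2], [0.4], [0.6], [0.8], [0.9]]  # inputs
y = [0, 0, 0, 1, 1, 1]                          # desired outputs

# The rule (a single split near 0.5) is induced from the data,
# not deduced from a programme given in advance.
model = DecisionTreeClassifier(max_depth=1).fit(X, y)
print(model.predict([[0.3], [0.7]]))            # -> [0 1], on cases never seen
```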
From the extended mind hypothesis to arguments about machine con-
sciousness and the global brain, critical thinking today needs to be
concerned with more general questions about what cognition is and how
it has come to coincide with the computational architecture of algo-
rithms, data, software and hardware, and with experiments in robotics
sensing and self-awareness. However, the implications of automated cog-
nition, central to the critique of cognitive capital, are far from being
settled and will be the concern of a critical computation theory address-
ing the specificity of this manifest image of algorithmic thinking. For
instance, the possibility of elaborating a rule from data retrieval rather
than applying a given rule to outcomes points to a form of cognition that
cannot be defined in terms of problem-solving, but will be understood as
a general method of experimenting with problems. In particular, with
machine learning, automation has involved the creation of training activ-
ities that generalize the function of prediction to future cases – a sort of
inductive parable that, from particulars, aims to establish general rules.
Here, in the case of supervised, unsupervised and reinforcement learning
algorithms,7 a critical computation will refer not simply to mindless
training, but rather offer an enquiry into forms of inference characteriz-
ing this artificial thinking. This enquiry will navigate the tension between
theories of reason vis-a-vis the emergence of non-conscious intelligence in
automated cognition.
This article suggests that a critical view of computation requires an
effort to unpack this tension to account for indeterminacy in conditions
of knowledge that both constrain and enable the scientific and manifest
image of algorithmic thinking. From this standpoint, if indeterminacy is
central to the epistemological possibilities of algorithmic thinking beyond
deductive logic, the automation of automation will be seen not as a
mindless execution of rules or a form of unconscious cognition, but as
a critical mode of artificial thinking. As discussed later, the introduction
of abductive logic in automation can be distinguished from the data-
driven model of induction and the non-conscious forms of cognition
embedded in computational devices. Here rules and truths are not
simply skipped but re-hypothesized, re-assessed and invented.
Although abductive logic is mainly performed in automated models for
medical diagnosis, the possibility that automated systems can construct
new forms of logical complexity, which could enable the theorization of a
general artificial intelligence other than that of the statistical regime of
inductive capital, will nonetheless be entertained. Learning algorithms
are already a step towards this envisioning of abductive artificial intelli-
gences, involving conceptual re-elaboration from data correlations, rules
and functions that can be used to construct new hypotheses. This
amounts to an automated meta-abductive reasoning, whereby learning
algorithms elaborate a meta-hypothetical function through which they
infer missing rules, facts and unknown causes (Inoue et al., 2013: 240).
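A toy rendering may clarify what is meant by inferring missing causes. Given rules of the form ‘cause implies effect’ and an observed effect, abduction runs the rules backwards and proposes the hypotheses that would explain the observation. The sketch below is a caricature of this inferential direction, not of Inoue et al.’s meta-abductive system:

```python
RULES = {  # hypothetical cause -> effect pairs
    "rain": "wet_grass",
    "sprinkler": "wet_grass",
    "flu": "fever",
}

def abduce(observation, known_facts):
    # Candidate explanations: every cause whose rule yields the observation
    # and which is not already among the known facts.
    return {cause for cause, effect in RULES.items()
            if effect == observation and cause not in known_facts}

print(abduce("wet_grass", known_facts=set()))  # -> {'rain', 'sprinkler'}
```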
Despite the local applications of algorithmic procedures in design,
logistics, music and economics, it is evident today that the automation
of automation rather involves a cultural transformation in the concep-
tualization of reasoning with and through machine thinking. This is also
a transformation in the meaning of cognitive capital increasingly relying
on the automation of learning, and of the intelligible elaboration of new
forms of data correlation, evaluation, selection and decisions. Machine
learning automata are understood to behave like cognitive systems that
are evolutive, adaptive, and exhibit co-causal and emergent properties
(Hayles, 2014).
From this standpoint, Hayles’ work already offers a reassessment of
cybernetics and computation as central to automated systems of feed-
back control and logical procedures, which have exposed the changing
meaning of cognitive activities, generalized from particularities (animals,
humans and machines).8 Her insights about neoliberal forms of govern-
ance no longer being constituted by the law, the norm and reason, but by
control functions, behavioural operations based on procedures within
self-regulating autopoietic agencies (i.e. reiterative loops, sequential
tasks, flexible protocols and flows of data), point to the shifted meaning
of artificial thinking. As rule-obeying behaviours become substituted by
the performativity of machinic functions (i.e. what x or y do and do not
do, and what they stand for), the indeterminacy of learning outcomes has
also become central to the epistemological critique of the end of reason.
This shift from rule-obeying truths to an algorithmic pragmatism, using
data to search for and predict truths, has also been understood as the end
of rational choice (MacKenzie, 2011; Mirowski, 2002).
From this standpoint, while suspending current figurations of auto-
mated intelligence (Domingos, 2015; Steiner, 2012), the transformations
of the scientific and manifest image that describe algorithmic performa-
tivity have already opened up new meanings of artificial thinking. With
machine learning, algorithms indeed are no longer mere instructions, but
are rather performative of instructions. Algorithms learn: they adapt,
adjust and evolve their behaviour according to a qualitative synthesis
of vast quantities of data. Their performative activity is afforded by
their capacity to compress large quantities of information and thus trans-
form outputs into new inputs, involving a new synthesis of reasoning and
calculation. Here data do not have to fit categories, but are redefinable in
the manner in which algorithms generate possible rules, causes and facts
where these are missing.
However, to argue that the new phase of automation of automation
could be discussed in terms of abductive reasoning is here an attempt at
discussing a critical theory of computation that questions the predomin-
ance of two models of AI in the techno-capital valorization of automated
cognition: namely, the logic of deduction, on the one hand, and inductive
or informal logic, on the other. I suggest that these models do not simply
concern the analysis of computational machines, but underpin contem-
porary ideas about cognition in animal, human and machine, as these
seem to be divided between the ontologization of computational cogni-
tion on the one hand (a meta-computational model of deduction) and an
anti-formal view of cognition (or data-driven non-conscious cognition).
In particular, it has been argued that since the inductive model of cog-
nition is ‘indifferent to the causes of phenomena, automation functions
on a purely statistical observation of correlations between data captured
in an absolutely non-selective manner in a variety of heterogeneous con-
texts’ (Rouvroy, 2011: 126). According to Rouvroy, the inductive regime
thus appeals to the immediacy of data retrieval, which eradicates poten-
tiality and/or indeterminacy, limiting the possibility of a critical
approach to technology (2011: 127).
My attempt to re-theorize automated intelligence draws from these
views but also argues that the crisis of deductive logic is mediated by
new meanings of artificial thinking stemming from the scientific image of
experimental axiomatics, which has indeterminacy at its core. I suggest
that as the scientific image of computational logic has changed (from
Turing to post-Turing descriptions of intelligence), it has also questioned
the manifest image and thus exposed the changing meaning of automated
reasoning. Here artificial thinking no longer coincides with the efficient
execution of pre-established rules. The internal limits of logic in compu-
tation have rather pushed the epistemological view of artificial thinking
beyond deductive and inductive models. Drawing on Hayles’ theoriza-
tion of non-conscious cognition as a form of inductive learning, this
article questions the assumption that techno-capital always already sub-
sumes any mode of machine thinking, and ultimately of automation.
Instead, a critical view of artificial thinking is an attempt at reducing
the dominance of data-driven systems of retrieval and transmission,
deprived of any hierarchical logic, to only one form of automated cog-
nition through which capital is extending social subjection. And yet,
capital investment in machine intelligence will also be questioned with
and through the epistemological proliferation of multimodal logics (and
thus the socially mediated meaning of artificial thinking) that expose the
possibilities of automated reasoning beyond the function of fixed capital.
From this standpoint, this article argues that abductive reasoning offers
one possible envisioning of a general artificial thinking that accounts for
multimodal logic and does not simply mirror one specific image of auto-
mated cognition.
Computation Is Not Cognition
In My Mother Was a Computer, Hayles (2005) discusses the view of
computation as a universal model of cognition and intelligence. Hayles
refers to the development in AI in the 1970s, to John Koza’s use of
genetic algorithms to design band-pass filters, and circuits that no
longer require the creativity and intuition of highly skilled electrical
engineers. Similarly, she describes intelligent machines that can perform
mind-like activities, such as Rodney Brooks’ Cog project, the informa-
tion-filtering ecology developed by Alexander Moukas and Pattie Maes,
and neural nets of many different kinds. Hayles also anticipates that in
the near future the question of mind-like machines will become irrelevant
as machines continue to develop their own thinking functions. As movies
such as Spike Jonze’s Her (2013), and more recently Alex Garland’s Ex
Machina (2015) reveal, it has become discursively accepted that machines
have cognitive functions and that their intelligible capacities of discerning
data and elaborating patterns have stepped up to another level of auton-
omy from mind-like thinking (and thus have not much to do with what a
human mind can do). A warning against the fast evolution of AI is also
echoed by Stephen Hawking’s (2014) recent claim that ‘[t]he development
of full artificial intelligence could spell the end of the human race. It
would take off on its own, and re-design itself at an ever-increasing rate’.
Despite this alarming call to arms against the super-intelligence of
artificial systems, the question of what machines think, and whether
this thinking coincides with what is meant by reasoning, remains open
and in need of more discussion. As Hayles (2005) has already pointed
out, there are at least two main positions that reveal the tension between
automation and reasoning. Here, the relation between the scientific and
the manifest image is grounded either in the formal theory of universal
computation or the non-deductive reasoning of non-conscious computa-
tion. On the one hand, the so-called field of digital philosophy claims that
the world of appearance can be explained in terms of a universal ground
of computation, according to which algorithmic discrete units can
explain all complexity of the physical world and can imitate reasoning
(e.g. the strong AI hypothesis). On the other hand, the claims of and for
non-conscious computation (i.e. non-symbolic AI) have extended the
scientific image of computation to include intelligent functions that are
experiential rather than formal.
My point, however, is that both positions tend to explain the manifest
image of thought by means of the scientific image of what cognition is. In
particular, the digital explanation of cognition remains attached to a
deductive model of reasoning, in which the scientific truth about the
mind and intelligence is prescriptive of what these can achieve. Here
the general determines the particular. This position establishes equiva-
lence between natural and artificial intelligence based on a deductive
method of reasoning by which to cognize corresponds to, as in the
strong AI hypothesis, the syntactical manipulation of symbols. On the
other hand, the extension of the scientific image to include somatic
explanations of cognition (as in the research into affective computing
and emotional intelligence, for example)9 instead relies on local low
levels of neural organization, which work together to achieve an overall
effect that is bigger than their parts. This position embraces an inductive
method of reasoning in which general claims about intelligence are
derived from the observation of recurring phenomenal patterns. This
scientific explanation of intelligence reveals the centrality of a non-
conscious level of cognition already at work in current forms of compu-
tational intelligent devices. Despite lacking consciousness or autonomy,
computational devices indeed are said to share non-conscious cognition
with human intelligence and, if anything, given that human intelligence is
bound to conscious cognition, smart devices are much faster than us at
making connections (Hayles, 2014).
When discussing the power of algorithmic decision underpinning the
mediatic infrastructure of political, cultural and social life
today, we are thus faced with the dominant view of two modes of logical
reasoning, defining intelligence and its manifestations. On the one hand,
the reduction of reasoning to the computational view of cognition based
on the manipulation of symbols, and, on the other, the anti-cognitivist
argument that computational decisions act below cognition at the local
level of non-logical communication. In both cases, the scientific image is
used to ground the manifest image without accounting for the complex
dimensions of meaning that both produce. If the diatribe between deduct-
ive and inductive models of the scientific image of automated reasoning
relies only on the scientific description of cognition (as either rooted in
symbolic language or in affective non-conscious immediacy), it risks miss-
ing an important point: namely the concreteness of conceptual frameworks
(i.e. the social embedding of reasoning) subtending the manifest image of
cognition (i.e. what and how logical reasoning manifests itself) and their
transformations in the context of automated learning.
Arguing for a critical computation is instead my attempt to clarify the
role of the manifest image of reason in the phase of the automation of
automation in both pragmaticist and transcendental terms. In particular,
from pragmaticism, I take the important proposition that reason is not a
formal a priori, but corresponds to the conceptual infrastructure of social
practices. This means that the logical operations of reason and its rule-
bound functions depend upon or are established by a collective use-
meaning of data. The use-meaning of data refers not simply to a mere
functional use, but to the dynamic reassessment of the social meaning
(and not the truth) embedded in the computational abstraction of the
social use of data. In this phase of automation, I suggest that the use-
meaning of data implies a collective formation of abductive inferences
within and throughout computational logic, based on the hypothetical
elaboration of the meaning included within non-discursive and local use
of data – on behalf of algorithms, software, subroutines, codes, as well as
databases, platforms, interfaces and so on.
To view automation as the synthesis of statistical learning and abduc-
tive logic may help us to envision the hypothetical reasoning of machines
as these involve not data-matching but inferential relations across the
informational fields of large-scale data and randomness. In this context,
a transcendental understanding of reasoning may entail the capacity of
machine learning to eventually generate concepts and carry out general
rules unbounded from the bias of specific localities. Instead of being the
result of an individual mind or eternal intelligence, this transcendental
elaboration from and of data is also a manifestation of the algorithmic
use-meaning of data, incorporating social practices within artificial intel-
ligences, of which algorithmic abduction is only one instance.
Before explaining my proposition further, I want to discuss the com-
putational model of deductive reasoning and how its crisis has been symp-
tomatic of the reorganization of techno-capitalism (i.e. the economic
investment in automated networks) involving the view that automated
intelligence corresponds to affective or non-conscious cognition.
Digital Philosophy
The computational model of deductive reasoning is central to digital
philosophy. Here the manifest image of thought conforms to the scien-
tific idea that the brain is equipped with an innate system of symbols,
neurologically connected and syntactically processed. Digital philosophy
particularly refers to the computational paradigm used to describe phys-
ical and biological phenomena in nature and to offer a computational
description of the mind. This approach problematically sees computation
as the merging of being and thought. It gives an algorithmic explanation
to both biophysical reality and the thinking of reality (Wolfram, 2002).
Central to this paradigm is also the view that algorithms are digital
automata, evolving over time (i.e. cellular automata). These automata
compress, render or simulate the various levels of physical, biological,
cultural randomness, deriving semantic meaning from already deter-
mined rules, whose functions are syntactically arranged and where results
can be automatically deduced.
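The flavour of this paradigm is easiest to see in an elementary cellular automaton of the kind Wolfram (2002) studies. In the sketch below, rule 110 is applied to a row of cells; every later state follows deductively from the rule table and the initial condition alone:

```python
RULE = 110  # Wolfram's rule number encodes the output for each 3-cell neighbourhood

def step(cells):
    out = []
    for i in range(len(cells)):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        neighbourhood = (left << 2) | (centre << 1) | right  # a value from 0 to 7
        out.append((RULE >> neighbourhood) & 1)
    return out

row = [0] * 31 + [1] + [0] * 31  # a single live cell
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = step(row)
```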
According to Hayles (2005), however, digital philosophy contains no a
priori truths in itself and its claims are rather the result of intermediations
about physical reality, cultural attitudes, technological developments,
which coevolve in contestation, competition and cooperation of dis-
courses. From this standpoint, in order to explain how one manifest
image of computation becomes dominant over another, one has to estab-
lish the historical transformations in the understanding of rule-bounded
behaviour of automata, without simply appealing to computational
ontology.
For instance, Hayles (2005) highlights the influence of second-order
cybernetics’ notion of reflexivity on the computational paradigm, which
led to the realization that computation could not just illustrate logical
infrastructures, but rather required an engagement with materiality. This
influence of second-order cybernetics, however, is accompanied by a
crisis of reason (of a normative model of pre-set rules) that characterizes
the structure of governance of the neoliberal form of techno-capitalism.
Far from demarcating the end of normative reason, this crisis has to be
seen as a threshold of change within a vaster mechanism of regulation,
functions and rules, transforming the normative regime based on laws
into a computational infrastructure of procedures.
With second-order cybernetics, the reflexive loop between mind and
matter shows how logical reasoning instead worked backwards,
converting contingent phenomena into necessary laws, including errors,
malfunctions and breakdowns re-inserted within a computational model
of optimization and within capital’s governance of indeterminacies. The
crisis of the logical method of deduction thus importantly marked the
beginning of a predictive statistical regime for which, as Hayles (2014)
explains, non-conscious or affective thinking have become the motor of
automated cognition. Here not truths, but contingent phenomena or
unknowns have acquired an ontological superiority able to transcend
the epistemological certitude of scientific knowledge.
As intelligent machines have become embodied and material agents that interact among themselves and make decisions without being supervised,
automated cognition has left behind deductive forms of consequential
reasoning. For instance, distributed cognitive environments expose this
new level of indeterminacy-driven automation on the one hand, and of
inductive forms of decision-making on the other. Here deductive logic
has been replaced by the match-making correlation of data connecting
local recurrent phenomena with the indeterminacy of external factors.
Central to this new form of automation is Hayles’ view of non-conscious
cognition.
Non-conscious Computation
According to Hayles, communication technologies, ambient systems,
embedded devices and other technological affordances have acquired a
cognitive function, which operates below the threshold of awareness, and
without the structure of symbolic reference. For the classical view of
computation (or strong AI hypothesis) cognition coincided with self-
awareness. The role of intelligence was assumed to involve the function
of tracking effects from pre-established causes and containing outputs/results within programmed inputs. We know that this classical view of AI
failed.
In the book Perceptrons (Minsky and Papert, 1987), Marvin Minsky and Seymour Papert showed that a single-layer perceptron can compute only a limited class of logical predicates (famously, it cannot compute XOR), and their results cast a long shadow on neural network research in the 1970s.
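The limitation can be reproduced in a few lines. Under the classic perceptron learning rule, the weights converge for a linearly separable predicate such as AND, but no setting of weights realizes XOR; the sketch below is a minimal reconstruction of that point, not of Minsky and Papert’s formal argument:

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    # Classic perceptron rule: nudge weights by the signed error on each example.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [(p, p[0] & p[1]) for p in inputs]
XOR = [(p, p[0] ^ p[1]) for p in inputs]

print([train_perceptron(AND)(*p) for p in inputs])  # [0, 0, 0, 1]: learned
print([train_perceptron(XOR)(*p) for p in inputs])  # never equals [0, 1, 1, 0]
```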
In the late 1980s and 1990s, after the so-called ‘AI winter’, new models of AI
research addressed sub-symbolic manifestations of intelligence and
adopted non-deductive and heuristic methods to be able to deal with
uncertain or incomplete information. With symbolic logic boxed away, there emerged algorithmic-networked procedures able to solve problems by trial and error, interacting directly with data. These were
learning bots retrieving information through reiterative feedbacks, so as
to map and navigate computational space by constructing neural con-
nections among nodes. Central to these models is the idea that intelli-
gence is not a top-down program to execute, but that automated systems
need to develop intelligent skills characterized by speedy, non-conscious,
non-hierarchical orders of decision-making heuristically selecting data by
means of trial and error. The development of statistical approaches was
particularly central to this shift towards non-deductive logic, or the acti-
vation of an ampliative or non-monotonic inferential logic. As recently
re-popularized in the aesthetically powerful movie Ex Machina (dir. Alex
Garland, 2015), the famous Turing test maintains that not only rational,
but also emotional awareness is fundamental to cognitive performance
and the evolution of artificial intelligence from simply being a mechanical
accomplishment of tasks. As Hayles (2014) points out, the advance of
non-conscious cognition in intelligent machines precisely exposes the new
meanings of our understanding of cognition. Non-conscious forms of
automated cognition can solve complex problems without using formal
languages or inferential deductive reasoning, and without the need of
consciousness. By using low levels of neural organization, iterative and
recursive patterns of preservation, this inductive method of reasoning
implies the emergence of a total behaviour or an intelligent effect that
is bigger than the parts constituting it. From this standpoint, as Hayles
(2014) observes, emergence, complexity and adaptation, and the phe-
nomenal experience of cognition cannot be reduced to material pro-
cesses. Instead, the tension between automation and thinking is
reconceived by Hayles in terms of a tripartite system of distinct degrees
of thought, which involves conscious thinking, non-conscious cognition
and material processes. Non-conscious cognition involves collective and
not individual intelligence, nor specific materiality of intelligence and,
while humans share levels of consciousness with other animals, it is
remarkable, Hayles (2014) points out, that non-conscious cognition
operates across humans, animals and technical devices. In particular,
the low-level activities of non-conscious cognition – described for
instance in the example of the missing half-second,10 operating at speeds so fast as to be imperceptible, at affective speeds – show that, at these levels,
cognition is not coherent and does not require the labour of editing
information to match given conceptual frameworks. For Hayles (2014),
what is promising regarding cognitive non-conscious technical devices is
that they can operate in temporal regimes inaccessible to human con-
sciousness and exploit the missing half-second to their advantage. This
also implies a machine-like cognition of temporalities, pointing out that
automated systems are able to tap into the smallest units of time that are
registered or recorded, not only through a digital clock (and its binary
language) but also through an immediate correlation of states. In short,
non-conscious cognitive processes defy the centrality of human con-
sciousness and the anthropocentric view of intelligence. From this stand-
point, following Hayles, one has to make a distinction between
non-conscious affective states of perception and the very material
forms of sensori-motor perception. In other words, and in accordance
with Sellars’ (2014) distinction between the scientific and the manifest
image, cognition is here not to be taken as a direct image of material
processes. Hayles indeed espouses the idea that the anti-deductive oper-
ations of non-conscious cognition are somatically marked, but are also
phenomenologically embodied, and mediated by meaning. Here, there is
no direct correspondence, but instead an elaboration of the material,
involving the mediation between the biophysical and neural states with
perceptive and cognitive receptions. Since cognition is entwined with the
recall and re-enactment of bodily states and actions, perceptual and cog-
nitive states start from a non-conscious intelligence, which becomes
superseded by – or supplied by – mental simulations in higher-level
thinking (and, for Hayles, in a conscious state). This shows that bio-
logical systems have evolved mechanisms that are able to re-represent
perceptual and bodily states, rather than making these states directly
accessible to consciousness. According to Hayles, technical systems or
instruments have non-conscious cognition. However, while the hammer
and a financial algorithm are designed with an intention in mind, only the
trading algorithm demonstrates non-conscious cognition insofar as its
intentionality is embodied within the physical structures of the network
of data on which it runs, and which sustain its capacity to make quick
decisions (Hayles, 2014).
This shift from formal cognition based on deductive inference to a
model of non-conscious cognition embodied in the networked intelli-
gence of local systems has led to a larger communication flow among
automated devices and not exclusively between humans and machines.
As this bot-to-bot phase of computation takes over, the increasing popu-
lation of consciousness-lacking intelligent devices, it is feared, will over-
take the consciousness-bounded and hierarchical structure of human
intelligence. This radical transformation of the scientific image of
thought, compared to how automated intelligence is manifested, points out that thought is independent of law-bound logic and that, rather, it
relies upon non-conscious functions entrenched in the weight of data in
networks.
While it is impossible not to admit that non-conscious levels of cog-
nition are radically transforming not only the scientific but also the mani-
fest image of the meaning of artificial thinking, there are questions that
are to be addressed. If, for instance, high-frequency trading algorithms
are to be considered as non-conscious cognitive functions, effectively
changing socioeconomic behaviour, are we also accepting the scientific
view of an extended non-conscious mind? What is the significance of this
new form of equivalence between non-conscious thinking and automated
intelligence, defined by a bodily oriented view of computation? What are
the limits of an inductive, non-inferential data-driven form of immediate
communication for helping us to explain what and how the manifest
image of automated logical reasoning is pushing beyond the totalizing
image of techno-power?
Techno-power
To answer these questions, one could suggest that the scientific image of
non-conscious automated cognition is enmeshed with an ontological pri-
macy of contingency, in which intelligence coincides with an environment
of indeterminate data, which automated cognition aims to compress into
simpler chunks. From this standpoint, the primacy of contingency has
become constitutive of a more general shift in the mechanization of rea-
soning, initiated with neoliberal techno-capital.
This shift is characterized by a re-orientation of the practices of real
subsumption, in which capital’s investment in the general intellect has led
human–machine networked intelligences to become a motor of cognitive
and affective labour, and, as some argue, of the capitalization of the
relational qualities of life (Massumi, 2015) attached to the regime of
indebtedness (Lazzarato, 2012).11 The manual phase of automation of
industrial capitalism imparted an ontological separation between human
labour and the accumulation of labour value incorporated in machines.
While human labour has become valorized in terms of variable labour or
force, the machines’ task was rather to absorb, preserve, accumulate and
reproduce the value of labour within itself. It was through machines that
the rational principles of task-oriented efficiency of the assembly line
could be realized following the monotonic logic of formal language, in
which results had to coincide with the set premises carried out and exe-
cuted with machines. This deductive form of automation has of course
not simply disappeared, but has become infused with a context-oriented
form of reproduction. Here the human–machine network has acquired a
form of autonomy from the specific use value of human and machine
labour. With real subsumption, capital is no longer mainly concerned
with avoiding contingency and human errors. Instead, this networked
form of abstraction (of relational value) is now sustained by the intelli-
gent synthesis of computational logic (deductive, inductive and abduc-
tive) and statistical calculus (experimental compression of randomness).
Here machine learning languages use the data environment to select,
evaluate, rank, match and reconfigure information according to the
social use of data. This form of automation has reached a non-prescribed
form of valorization insofar as algorithms experiment with data by learn-
ing, adapting and assessing the value of large amounts of information.
While this intelligent valorization of any use of data involves no con-
sciousness, it is nonetheless a form of cognition embedded in affective
levels of perception, entrenched within the particular physical structures
of the network through which algorithms make quick decisions.
In Anti-Oedipus (1983), Deleuze and Guattari had already identified
this transformative tendency of the human–machine network of abstrac-
tion, and had warned us against what they called ‘immanent axiomatics’
(1983: 246). The rationalization of labour by means of machines no
longer operates deductively, according to a pre-established rule, but
has come to embrace experiential values, enveloped in the complexity
of the social, through which an axiomatic regime could be directly engen-
dered (1983: 233). Not only had calculative machines entered the realm
of the real but also a new synthesis of automation and reasoning had
come to invest the sociality of thinking (although perhaps the non-
conscious level of thinking first) and its contingent variabilities, because
of which capital had to declare the fallacy of deduction.
In our post-cybernetic culture, capital's axiomatics – and its rule-
bound activities – are subsumed to the volatile contingencies of the mar-
kets and the statistical destruction of logos. Here the politics of liberation
from universal laws and the ultimate crisis of reason in favour of non-
conscious intelligence have become paradoxically equivalent.
Following Brian Massumi’s (2009, 2015; see also Mirowski, 2002)
analysis of the contemporary reconfiguration of neoliberal governance,
one could argue that the end of rational economy has been accompanied
by the crisis of the rational implementation of machines. The computa-
tional infrastructure of social media, for instance, as the privileged form
of marketing, branding, economic operations, political campaigns, insti-
tutional governance, security screening and so on, no longer abides by
pre-established modalities of profit making and control. Instead, the syn-
thesis of logic and calculus in automation has transformed the commu-
nication qualities of the human–machine network into learning,
interactive, distributive architectures of non-conscious cognition.
Paradoxically, therefore, this so-called cognitive phase of capitalism has
given way to the abstraction of human–machine levels of affective think-
ing. This form of techno-capitalism has invested in human intelligence
and creativity, driving humans to become self-entrepreneurs or governors
of their extended self.
In the movie Her (dir. Spike Jonze, 2013), the artificial intelligence
Samantha acts in a world in which not only is affectivity fully
programmed and programmable, but also human–machine networked
capital has been replaced by automated automation, where the non-
conscious intelligence of the Operating System is no longer wrapped
around the hierarchies of deductive reasoning. Samantha does not only
operate at speeds so fast as to be imperceptible, but is also equipped with
an empathic quality of prediction, tuning into the viscerality of cognitive
functions to anticipate responses before they are manifested. As the AI of
operating systems acquires affective intelligence, the human–machine
network of neoliberal capital has become a distant memory compared
to this form of Skynet AI,12 as the automation of automation gathers
self-aware intelligences and leaves humans behind, resigned to not being
able to think and feel anything anew.
However, while the imaginary of Skynet AI implies the emergence of a
self-aware general intelligence, the shift from deductive to inductive auto-
mation could be understood in terms of what Massumi (2015) defines as
‘ecological rationality’ acting through the affective intelligence of the body,
turning symbolic values into lifestyles, and rules into experiential qualities.
At the core of this ecological rationality is a non-conscious distributive
embodied intelligence, in which all is locally induced to generate the global
effects of unification of one body without organs. These inductive (or
effect-driven) operations of networked capital epitomize the non-inferen-
tial reasoning of embodied intelligence, making decisions without formal
calculation. This form of anti-logos demarcates the techno-capitalist deter-
ritorialization of rationality, which resolves the tension between automa-
tion and thinking through the convergence of consciousness and affect.
Far from being liberating, the deposing of inferential reasoning is con-
stantly advertised to us as the ability of networked capital to package
social complexity in profiles available to us at the touch of a button.
Within this context, the real challenge today is perhaps not to map
human–machine–animal non-conscious cognition, but to critically
re-address the function of reason and to theorize – rather than reject –
the automated use of inferential reasoning as part of a general artificial
thinking. My efforts here concern not only an anti-essentialist theoriza-
tion of thinking, for which reasoning can be understood as an elabor-
ation of material, non-conscious and conscious cognition, but also
involve a re-articulation of the critical possibilities of computation.
In what follows, I suggest that to engage critically with the question of
inferential reasoning in automated cognition, we need to first discuss the
problem of the limit of computation in the context of information theory.
We need to envision a form of artificial reasoning that goes beyond both
the focus on locally induced cognition and the meta-computational
reduction of the material world to the symbolic language of AI. In par-
ticular, to shift the argument for a general artificial thinking away from
these two main views of computation, one has to first address some key
issues within computation itself that may start with the question of the
limit of the Turing machine. Critical computation may perhaps concern
how the problem of unpredictability or randomness in information
theory is not a sign of logical failure but of the transformation of the
scientific image of the relation between ratio and logic.
During the 1980s, information theorist Gregory Chaitin extended the
question of the limit of computational logic to include an entropic
conception of information or randomness (i.e. the implication that the
tendency of information is to increase in size over time) (Chaitin, 2005,
2006). For Chaitin, computation corresponds to the algorithmic com-
pressing of maximally unknowable probabilities or incomputables.
Since Alan Turing’s invention of the Universal Turing Machine, incom-
putables have demarcated the limits of computation or formal reasoning
(i.e. the deductive logic of axioms or truths). According to Chaitin (2005,
2006), however, incomputables are only partially indeterminable insofar
as, within the computational processing of infinite information, the syn-
thesis of logic and calculus has given way to a new form of axiomatics:
experimental axiomatics.13 The computational processing of information
involves the way algorithms compress information to a final probable
state (i.e. 0s or 1s) and eventually mix and match data. However, com-
putational compression also demonstrates that outputs are always bigger
than inputs (Calude and Chaitin, 1999), shaking the assumption that
automated thinking is grounded in simple rules and that cognitive rea-
soning corresponds to the manipulation of symbols hardwired in the
brain. Following Chaitin, it is possible to suggest that randomness in
computation, as that which constitutes the very limit of computational
deduction, demarcates the point at which automated cognition coincides
not with non-conscious functions, but with algorithmic intelligibility,
extracting more information from data substrates. Chaitin (2006)
claims that computational processing leads to postulates that cannot
be predicted in advance by the program and are therefore experimental
insofar as results exceed their premise and outputs outrun inputs.
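Chaitin's claim about compression admits a crude empirical gloss (mine, not Chaitin's construction): a general-purpose compressor gives an upper bound on the length of the shortest description of a string, so a deeply patterned string shrinks drastically, while a random one barely shrinks at all and remains, in this operational sense, its own shortest description.

```python
# A crude empirical gloss on algorithmic compression (an illustration,
# not Chaitin's construction): zlib bounds the shortest description.
import os
import zlib

patterned = b"01" * 5000           # a short rule generates the whole string
random_ish = os.urandom(10000)     # patternless, in the algorithmic sense

for name, s in [("patterned", patterned), ("random", random_ish)]:
    compressed = len(zlib.compress(s, 9))
    print(f"{name}: {len(s)} bytes -> {compressed} bytes")
# The patterned string compresses to a few dozen bytes; the random one
# stays close to its original size, marking the limit of compression.
```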
Despite Chaitin’s insistence that incomputables expose indeterminacy
in formal reasoning, it is possible to suggest that non-deductive logic
coincides with an experimental axiomatics in the computational deter-
mination of unknowns. Algorithmic compression thus implies the forma-
tion of intelligible activities transforming data correlations into
experimental truths precisely through an experimental method of com-
pression. To put it in another way, the algorithmic intelligibility of data
environments involves a speculative function through which unknowns
are computationally prehended.14
From this standpoint, the techno-capitalist investment in artificial
thinking coincides not simply with the proliferation of a non-logical
apparatus of affective cognition. Techno-capital seems to be forced to
confront the computational configuration of non-sensuous or proto-
conceptual patterns that are able to abstract, revise and diverge from
pre-established rules. The computational elaboration of data concerns
not only functions of selection and correlation, but more importantly
involves an experimental determination, whereby the decisional activities
of axioms remain flexible and yet conclusive. In other words, while data
seem to be mindlessly aggregated by non-conscious patterns, the scien-
tific image of experimental axiomatics rather asks us to account for a new
meaning of artificial thinking embedded in the intelligible activities of
algorithmic prehensions.
From this standpoint, one has to view techno-capital not only as the
reduction of reasoning to the non-conscious activities of machines but
also as involved in a deeper transformation of automated thinking,
namely exposing an alien or denaturalizing process of reasoning with
and through machines.
Parallel and distributed orders of computational language point to a
new form of informational stratification of contingencies, precisely invol-
ving this algorithmic processing of data. This can be understood as an
artificial mode of intelligibility that works through the computational
structuring of social thinking. From this standpoint, a critical approach
to computation requires us to look closely at the historical transform-
ation of the automation of thinking, involving not simply an abstraction
of neural functions of the brain, but of the social practices of thinking
and acting. While capital’s investment in the automation of cognition has
led to the synthesis of logic and calculation, computational processing
has rather exposed the limits of deduction and statistics and the central
role of randomness (or infinities, or contingencies, or non-inferential
materialities) within this synthesis.
If algorithmic information theory concerns the scientific image of com-
putational logic and statistical calculation, it also reveals a crucial trans-
formation of the manifest image of a dominant understanding of
computation based on the inductive, data-centred operations of
techno-capital and its non-logical governance. A critical approach to
this dominant understanding thus requires that the scientific image of
computation should be accounted for in its historical changes, which
involves reassessing what we take the relation between algorithms,
data, software, code and hardware infrastructure of contemporary cul-
ture to be. However, a critical effort to account for algorithmic intelligi-
bility in its historical and experimental transformation also implies that
its manifest image becomes a space for a philo-fiction, or speculative
conceptualization of automated reasoning, within a view of a general
artificial thinking. This space will aim not only to defy the exceptionalism
of human consciousness but also to reinvent what consciousness and
reason can become in this configuration of automated thinking. The
next section explores this point further.
Abduction
A dynamic re-articulation of the scientific and manifest image of com-
putation can help us to re-open the ontological tension between thinking
and automation. As argued so far, algorithmic automation may not
simply involve a replacement of reason with non-conscious technologies
of decision. Instead, the realization of the limits of deductive reasoning in
computation involves a multiplication of experimental axiomatics as
algorithms become performative of intelligible activities across nested
informational architectures.
This is no longer a question of bypassing the predictive functions of
cognition through an optimized non-rule-bound transmission of data.
Instead, one has to envisage a re-structuring of logical reasoning that
can account for this new phase in the history of automated intelligence,
involving a conceptual elaboration of non-conscious prehensions and of
the material dimensions of data. This elaboration, as suggested earlier,
involves a synthesis of logic and calculation, and, in the case of algorith-
mic intelligence, of non-deductive reasoning and dynamic statistics (i.e.
the inclusion of randomness in calculation).
Critical computation therefore will first of all address the speculative
function of reason15 insofar as the limits of automated deductive logic
have become a point of departure for an experimental determination of
truths. It may be helpful here to revisit this tension between critical and
speculative functions of reasoning by re-theorizing the post-Turing scen-
ario of experimental axiomatics through a pragmatist approach to logic
and inferential reasoning. In particular, the pragmatist effort to explain
logic in terms of a continuity of process between material practices,
discursive articulations and axiomatic truths shall be understood as a
tripartite configuration of methods involving deductive, inductive and
abductive reasoning.
One important instance of this configuration can already be found in
Charles Sander Peirce’s (1998: 273; see also his 1995) triadic system of
logic, which admits that thinking entails an abductive-inductive-
deductive circuit of inference This system importantly challenges both
the representational and the empirical schema of AI and can offer an
insight about a possible envisioning of a general artificial thinking. In
particular, Peirce’s triadic method always starts from a hypothetical or
speculative explanation of events. This involves first the predictive envi-
sioning of unknowns through general observables (induction), and thus
the temporary establishment of a series of truths (deduction), which can
be tested through experimental methods of trial and error (induction),
from which new rules could be established (deduction). In other words,
induction is a method of generalization of objects and events, which
presupposes a conceptual framework that locates objects and events in
space and time. To some extent, therefore, induction presupposes know-
able objects and also fixed concepts that can be learned – involving the
matching between a pre-existing concept and a heuristic process of trial and error to confirm it, for instance. In particular, for Peirce, induction
corresponds to a process of evaluation, which may produce very simple
new ideas, but ones that are not sufficiently new to engender a new
hypothesis (Magnani, 2009: 289). While deduction produces no new
ideas, because inferential reasoning refers to a logical implication for
which outcomes are contained within given premises, induction involves
the evaluation of hypotheses and thus an ampliative process of general-
ization too.
However, according to Peirce, veritable reasoning will include abduc-
tion as this mainly consists in creating new ‘explanatory’ hypotheses.
Abduction is a process of inferring facts, laws, hypotheses that can specu-
latively explain some unknown phenomena. In other words, it defines
reasoning not simply in terms of evaluation, but also as the formation of
new explanatory hypotheses (Magnani, 2009: 8). With abduction, it is
possible to draw semiotic chains from non-inferential social practices and
extrapolate the meaning embedded in these practices through an experi-
mental production of truths. Here, general concepts or truths depend
upon, but are not limited to, the material practices and the discursive
statements that subtend them (Magnani, 2009: 65–70).
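The three inference patterns can be stated schematically with Peirce's classic bean example (the code and names below are mine, purely illustrative): deduction applies a rule forwards from case to result; abduction runs the rule backwards from a result to a hypothetical case; induction generalizes a rule from observed pairs.

```python
# Peirce's bean example, made operational in a toy way (illustrative only).
# Background rule: beans from this bag are white.
rules = {("from_this_bag", "white")}       # (antecedent, consequent) pairs

def deduction(case):
    # Rule + case -> result: apply the rule forwards; the conclusion
    # is already contained in the premises.
    return {c for a, c in rules if a == case}

def abduction(result):
    # Rule + result -> case: run the rule backwards to a hypothesis
    # that, if true, would explain the surprising result.
    return {a for a, c in rules if c == result}

def induction(observed_pairs):
    # Cases + results -> rule: generalize a rule from what co-occurs
    # (ampliative and fallible: tomorrow's bean may refute it).
    return {(a, c) for a, c in observed_pairs}

print(deduction("from_this_bag"))                # {'white'}
print(abduction("white"))                        # {'from_this_bag'}, a hypothesis
print(induction([("from_this_bag", "white")]))   # a provisional rule
```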
Rules are thus not fixed and are not symbolic representations of
material practices. Instead, within pragmatism, rules are the result of
hypothetical and inductive evaluation of not-known events. In other
words, pragmatism shows us that logic is embedded in a social matrix
through which rules are constructed by means of hypothetical assertions,
defining a process of abstraction by which local specificities are struc-
tured in a general schema of relations of relations. From this standpoint,
Peirce’s abductive logic may be useful to account for the manifest image
of the automation of automated intelligence, because it involves a recon-
figuration of the conceptual infrastructure, bringing the methods of both
deduction and induction into a larger space of reasoning that includes
hypothetical inference. Here the inductive testing of hypotheses – or the
generalization of new simple ideas – is not a proof of truths actualized by
efficient procedures, as local particularities exemplify the generality of
truths. Instead, Peirce’s triadic logic admits that inductive testing is
superseded by a new hypothesis that enlarges the horizons of premises
beyond probable results, or proofs, to find postulates. In other words,
abductive reasoning, as opposed to the inductive testing of already
known ideas, helps us to explain and not discount the causal process
that conditions and constrains the generation of new hypotheses. This
involves a dialectic overlapping of induction and deduction, the validity
of both testing and truth within the speculative articulations of
hypotheses.
Since automation is becoming transcendental because of its functions
of logical implications (deduction) and generalization of known concepts
and objects (induction), Peirce’s argument for abductive reasoning is
useful because it challenges both the meta-computational model of
digital philosophy and the data-oriented dominance of current techno-
capitalism. From this standpoint, with abduction one can suggest that
automated intelligible functions – the synthetic elaboration of data on
the part of learning algorithms – only serve to grant the consequent
function of reason that, to put it in Alfred N. Whitehead’s terms
(1967: 24–5), arrives at establishing the permanence of rules through an
abstraction or a speculative formalization of what occurs as a conse-
quence of the relation between particulars.
The pragmatist method of abduction makes a claim not only for the
existence of intelligible patterning but also for a conceptual elaboration
of what is implicit within patterns, within non-conscious cognition and
material substrates. Rules are determined by social practices and logic is
at the end point of intelligible activities or elaborations. Pragmatics thus
comes before logic because the latter is the point at which social meaning
becomes synthesized into formal rules. This non-representational
approach to inferential reasoning can help us to address automation in
terms of speculative inference.
Both the deductive model of axiomatic truths (and symbolic reason-
ing) and the inductive procedures of data retrieval (and match-making
of non-inferential transmission) obfuscate the constructive potential of
Hayles’ theorization about what is at stake with an artificial form of
cognition. Similarly, these insights can contribute to suspending the
assumption that capital is the agent of automation through which
rational and irrational modes of profit, governance and control are
implemented. For critical computation, the material, affective and cog-
nitive evolution of automated systems exposes the speculative dimension
of reasoning embedded in the social and collective use-meaning of infor-
mation. If the automation of automation demarcates a new threshold of
transformation of AI, it is because it is involved in the transformation of
the social structuring of reasoning itself, including the triadic configur-
ation of abductive, inductive and deductive inferencing. If the manner in
which thought thinks itself thinking has always been mediated by the
environment – and is thus ampliative and not representational – the
formation of new hypotheses from the increasing availability of data
also defines the proliferation of non-human intelligences. And yet, for
automated reasoning to generate new hypotheses, it is crucial that error,
fallibility and indeterminacy are evaluated inductively so that they
become part of learning. Learning indeed here acquires a new meaning.
It concerns not primarily the cognition of notions, tasks and functions.
Instead, it requires apprehension through errors, blind spots, unknowns.
Here, the possible fallibility of reasoning points out that Hayles’ view of
non-conscious cognition is central to abductive possibilities of learning
because it is involved in the construction of hypothetical scenarios, push-
ing the limits of automation beyond data recombination or the mere
execution of rules.
As Lorenzo Magnani (2009) argues, since the 1980s abductive reason-
ing has been adopted by diagnostic and expert systems, and in general by
a computational infrastructure of reasoning based on the use of inferen-
tial synthesis or inference to the best explanation (2009: 68). Importantly,
Magnani distinguishes between model-based abduction – a theory-based
inference – and manipulative abduction – defined by action-oriented or
extra-theoretical reasoning (2009: 7, 9–12).16
Theoretical or model-based abduction corresponds to the exploitation
of internalized models, diagrams or pictures and illustrates, according to
Magnani, much of what is important in creative abductive reasoning, in
humans and in computational programs (2009: 23–4, 34, 36), involving
the objective of selecting and creating a set of hypotheses (diagnoses,
causes, prognosis). Theoretical abduction, according to Magnani, how-
ever fails to account for those cases in which there is a kind of ‘discover-
ing through doing’ (2009: 42); cases in which new and still unexpressed
information is codified by means of manipulations of some external
objects. Manipulative abduction instead happens with thinking through
doing. It refers to extra-theoretical behaviour that creates communicable
accounts of new experiences and integrates them into existing systems of
experimental and linguistic practices (Magnani, 2009: 46).17
In models of artificial intelligence, for instance, abductive reasoning
has been used for diagnosis, planning, natural languages processing,
probability theory and formal programming (Magnani, 2009: 5).
If abduction has a logical form that is distinct from deduction and induc-
tion, it is because, when working computationally, the selective or cre-
ative activities of this retroactive thinking (i.e. that starts from
consequences to track causes) involve hypothesis generation and not
simply an explanation of consequences.
For instance, the automation of abduction includes AI computer pro-
grams such as ARCHIMEDES, which represents geometrical diagrams
in pixel arrays and propositional statements. Here, the computer program
can manipulate and modify these representations and make new geomet-
rical constructions, for example adding parts, moving elements and com-
ponents (Magnani, 2009: 159). As the program manipulates specific
diagrams, it also records new information and detects equivalences
between areas so as to connect many different methods for learning
and generalizing the Pythagorean theorem, by running experiments
and observing the interaction between diagrams. This logical manipula-
tion proposed by the program to verify the theorem involves the algo-
rithmic autonomous discovery of conjectures that contribute to the
construction of demonstrations, but that also indicate the role of creativ-
ity in diagrammatic reasoning (Magnani, 2009: 160).
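The experimental character of this procedure can be glossed loosely (my sketch, not the ARCHIMEDES program itself, which manipulates actual diagrams): run many constructions, measure the areas concerned, and let the detected equivalence support a conjecture that deduction alone would otherwise have to prove.

```python
# Loose gloss (not the ARCHIMEDES program): 'discovering' the Pythagorean
# relation experimentally by measuring many numerical constructions.
import math
import random

def experiment():
    a, b = random.uniform(1, 10), random.uniform(1, 10)
    c = math.dist((0, 0), (a, b))     # hypotenuse of the right triangle
    return a * a + b * b, c * c       # areas of the squares on the sides

trials = [experiment() for _ in range(1000)]
supported = all(math.isclose(lhs, rhs) for lhs, rhs in trials)
print("conjecture a^2 + b^2 = c^2 supported by all trials:", supported)
```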
Instead of statistical calculus based on the inductive inference to a general, already known rule, concept or object that explains certain data, the goal of abduction is thus 'to infer extensional knowledge'
(Denecker and Kakas, 2002: 405).18 While inductive inferences are
linked to statistical observations conforming to general rules and local
situations, abduction instead describes the causes of observation that
concern an incomplete state, using a general theory to create new hypoth-
eses and explain their incompleteness.
The automation of abduction has also been specifically used in logical
systems aiming to solve problems of scheduling and planning, of optical
music recognition, information integration and software inconsistencies
(Kakas and Riguzzi, 2000). In particular, the notion of Abductive
Concept Learning defines algorithms that integrate ‘explanatory learn-
ing’ (predictive) and ‘learning with confirming’ (descriptive), using meth-
ods of both inductive and abductive inferences in machine learning. But
what exactly would an abductive form of learning in AI imply?
One prerogative of this kind of automated abduction is that algo-
rithms learn from incomplete information and are thus predictive, able
to classify new cases that may otherwise remain incomplete or not fully
specified. Here the condition of the incompleteness of models is a motor
for speculative algorithms that seek to learn from an incomplete back-
ground of data, whose predicates can be both specified and unspecified
(Kakas and Riguzzi, 2000: 3).
In the specific context of machine learning, abductive reasoning is used
to elaborate hypotheses in the face of incomplete information and over-
come the problem of overfitting, whereby algorithms are heuristically
programmed to learn from past data and thus delimit the configuration
of larger and new hypotheses to given patterns of trial and error (Kakas
and Riguzzi, 2000: 3–4). As opposed to other machine learning systems
that deal with incomplete information, such as for instance LINUS, the automated model of Abductive Concept Learning does not
simply adopt methods to complete the missing information and then
learn from already completed data (Kakas and Riguzzi, 2000: 4–5).
This model instead engages incomplete information dynamically and
thus from within the very process of learning, where abduction works
not only to track data retroactively but also speculatively, by inventing
hypotheses that can lead to new rules, axioms, truths.
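The flavour of this dynamic engagement can be caught in a toy sketch (my reconstruction under simplifying assumptions, not the Kakas-Riguzzi system): rather than completing the data before learning, the learner abduces the missing attribute values that would make its current hypothesis consistent with the observed labels.

```python
# Toy sketch of learning from incomplete examples (a reconstruction under
# my own assumptions, not the Kakas-Riguzzi Abductive Concept Learning).
# Hypothesis under test: an example is positive iff attribute 'a' holds.
hypothesis = lambda ex: ex["a"] is True

examples = [
    {"a": True,  "label": 1},
    {"a": False, "label": 0},
    {"a": None,  "label": 1},    # 'a' unobserved: incomplete information
]

abduced = []
for ex in examples:
    if ex["a"] is None:
        # Abduce the missing value that would explain the observed label,
        # instead of completing the data prior to learning (cf. LINUS).
        ex["a"] = (ex["label"] == 1)
        abduced.append(ex)

consistent = all(hypothesis(ex) == (ex["label"] == 1) for ex in examples)
print("abduced assumptions:", abduced)
print("hypothesis consistent with completed data:", consistent)
```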
The so-called ‘non-monotonic’ (i.e. ampliative) quality of expansive
reasoning in abductive logic allows for more hypotheses to be con-
structed from locally constrained inferential practices. It tends towards
a general explanation, involving a synthetic dimension that integrates
particularities through the speculative elaboration of axioms (and thus
an expansion of deductive implications).
While automated abduction allows algorithms to learn from incom-
plete information, there are also programs such as SOLAR (Inoue et al.,
2013: 246) using meta-level abduction, which is performed more gener-
ally on networks whose pathways are incomplete, and where links and
nodes are missing. Deduction, the classic inferential model of meta-
reasoning, aims to predict or track missing pathways through the laws
of logical implications. Meta-level abduction, instead, is a ‘method to
discover unknown relations from incomplete networks’ (Inoue et al.,
2013: 240) and involves 'predicate invention in the form of quantified
hypotheses’ to infer missing rules, missing facts and unknown causes
(2013: 240). In other words, this meta-theoretical dimension of inferential
reasoning involves abductive learning from the observation of fact or
data-searching/finding, but also, and importantly here, from a goal
‘that has not been observed yet’ (2013: 241).19 This learning through
hypothetical processing may coincide with the speculative and transcen-
dental elaboration of algorithmic retro-duction, whereby consequences
(or results) are not only tracked back to their causes (by means of explan-
ation) but are also, importantly, hypothesized beyond the observable.
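A minimal sketch of the idea (my illustration; SOLAR itself is a full consequence-finding prover): given a network with missing links, abduce the unobserved edges whose addition would make a goal connection derivable.

```python
# Minimal sketch of meta-level abduction over an incomplete network
# (my illustration; the node names and network are invented).
edges = {("stress", "cortisol"), ("cortisol", "inflammation")}
goal = ("stress", "fatigue")     # a link to be explained, not yet derivable

def reachable(a, b, es):
    frontier, seen = {a}, set()
    while frontier:
        n = frontier.pop()
        seen.add(n)
        frontier |= {y for x, y in es if x == n and y not in seen}
    return b in seen

# Hypothesize one missing link at a time; keep those under which the
# goal becomes derivable (a crude form of 'predicate invention').
nodes = {n for e in edges for n in e} | set(goal)
hypotheses = [
    (x, y)
    for x in nodes for y in nodes
    if x != y and (x, y) not in edges
    and reachable(goal[0], goal[1], edges | {(x, y)})
]
print("abduced missing links:", hypotheses)   # three candidates; order varies
```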
As automated cognition has entered the realm of hypothesis-making
by connecting explanations between objects, objects and concepts, and
concepts themselves, it has also reopened the question of what it means
for artificial intelligence to become general. This generality coincides not
with a universal symbolic language or the efficient functionality of
increasingly fast data correlations. Instead, general artificial intelligence
involves a new sociality of logic, the hypothetical use-meaning of data,
whose laws and rules are abstracted and re-engineered in the space of
reason of machine cognition.
Coda on General Artificial Thinking
We can now conclude that the understanding of algorithmic automation
in terms of what Hayles has called non-conscious cognition may perhaps
not meet this pragmaticist generalization of reasoning. However, I have
suggested that Hayles’ insights into the new meaning of cognition, as
embedded in the scientific image of non-conscious decisions, already
offer us an argument about the epistemological transformation of think-
ing in relation to machines. In particular, the neuro-biological descrip-
tions of the relation between non-conscious cognition as bodily markers
and consciousness as the re-presentation of bodily states strongly chal-
lenge the manifest image of reason coinciding with the model of deduct-
ive logic. From this standpoint, Hayles’ discussion of non-conscious
cognition already points to the conceptual mediations involved in the
relation between distinct species of algorithms and between algorithms,
data, software programs, interfaces and hardware circuits. In short,
Hayles’ view already paves the way for a critical computation that chal-
lenges the meaning of cognition by addressing the dynamic relation
between the scientific and the manifest image of thinking. One crucial
contribution to critical computation is Hayles’ (2017: 22) articulation of
biological and technical modes of cognition involving processes of interpretation that are context-bound and thus connect information with meaning. It is precisely through the focus on the relation between infor-
mation and meaning, and between distinct scientific descriptions of
cognition (from evolutionary to computational and neuro-biological
theories), that Hayles’ work offers a re-reading of the epistemological
distinction between human and non-human cognition. In her effort to
articulate together distinct scales of cognition that could account for a
general artificial thinking, that she calls ‘planetary cognitive ecology’
(2017: 11), Hayles (2017: 174) specifically argues that computational
media are cognitive systems that interact with human cognitive capabil-
ities at the level of sensation, of the cognitive non-conscious and of
modes of awareness (including both consciousness and the unconscious).
In other words, her visions address how computational media are trans-
forming the cognitive possibilities of the space of reason.
This article engages further with these possibilities and focuses on
logical reasoning in machines beyond the dominant models of deduction
and induction. It argues that the scientific image of the cognitive non-
conscious is central to the capitalization of affective states now absorbed
within the computational form of fixed capital and also subtends the
dominance of a manifest image whereby logical reasoning has been
replaced by automated correlations of data. Critical computation instead
aims to trace the transformation – not disappearance – of logic and
reason in automated systems of cognition.
This article has suggested that the theoretical and manipulative logic
of abductions in automated systems shows the triadic configuration of a
complex space of reason in the gaps between causal efficacy (the
non-conscious fast correlations among all forms of data) and the experi-
mental finality of algorithmic processing that includes the abductive-
inductive-deductive logical reasoning reconfiguring causality beyond a
linear sequence of given causes and effects. This is also to argue that
the algorithmic use-meaning of data, more importantly, entails a trans-
formation of the manifest image of reason that exposes how a new
techno-social culture of thinking is embedded in the externality of cog-
nition. While it is possible to discern the manifest image of this social
cognition from the scientific image of automated intelligence involving
the dynamic synthesis of logic and calculus, the article argues that the
limits of deductive reasoning will be rather addressed as a symptom of
the emergence of a critical function of and within the self-determination
of computation as the dominant space of reason. Here the fallacy of
reasoning corresponds to the point of departure for a computational
generation of hypotheses, a speculative function within the automation
of cognition.
Without taking into account this epistemological transformation in
machine thinking, debates about cognitive capital risk overlooking the
crucial realization within techno-capital that the condition of automating
rule-bounded logic required the alienation of reason, that is the origin-
ation and expansion of the space of reasoning beyond the logic of deduc-
tion and induction. Similarly, by overlooking the possibility of a critical
re-theorizing of reason from within the automation of cognition as an
engine through which to expose the dynamic tension between the scien-
tific and manifest image of artificial thinking, it is not possible to account
for an epistemological alternative to the given opposition between reason
and automation. A recuperation of Peirce’s triadic system of abduction-
induction-deduction shows us that logical thinking rather involves
another level of reflexivity: the capacity of thinking about thinking,
whereby logical reasoning involves a multifunctional elaboration of
hypotheses able to infer a generality of meaning from discursive and
non-discursive social practices.
Thinking about thinking involves a further level of elaboration of
intelligible functions, a meta-abduction established not by a second-
order reflection of thinking through doing, but by the emergence of a
third level of abstraction, what I called the automation of automation.
From Magnani’s (2009) argument and the wider use of abduction in
computation it is thus evident that automated cognition, even when
operating by means of hypothetical inference, cannot yet account for
some key functions of reasoning, namely the distinction between the 'knowing how' and the 'knowing that' capacities – to put it in Wilfrid
Sellars’ terms (1963: 324–6) – or the capacity to know the rules by
which its patterning functions, without having to break them down
into a set of instructions. From this standpoint, the method of experi-
mental axiomatics developed through the scientific articulation of incom-
putables is one instance of abductive logic insofar as it points to a
rudimentary level of making incomputable data partially intelligible.
However, the determination of this randomness is demarcating the ten-
dency of AI to develop beyond its rudimentary intelligible capacities and
points to a generalized socialization of rules, abstracted from the par-
ticularity of data contexts and yet exceeding models of encoded cogni-
tion.20 The question of automated cognition today concerns not only the
capture of the social (and collective) qualities of thinking, but points to a
general re-structuring of reasoning as a new sociality of thinking.
Automated decision-making already involves within itself a mode of
conceptual inferences, where rules and laws are invented and experimen-
tally structured from the social dimensions of computational learning.
This article has taken inspiration from Hayles’ analysis of computa-
tional intelligences about what – and how – thinking is becoming in the
scientific and technological articulation of cognition. For Hayles, cogni-
tion is a dynamic or processual doing and not simply a contemplative
form of knowing. Her work has importantly identified the extent to
which machines have co-constituted non-conscious functions of thinking
and how they have internally questioned the idealism of axiomatic truth
and disembodied reason.
Since the scientific image of computational logic has changed, it has
also questioned the manifest image of automated reasoning, which can
no longer be explained in terms of an efficient execution of pre-estab-
lished rules. Instead, the internal limits of algorithmic programming have
marked the starting point for a critical re-articulation of the scientific and
manifest image of how thinking works. If, for Hayles, non-conscious
cognition overlaps with a form of cybernetic control based on inductive
learning, this article questions the techno-capitalist subsumption of
machine thinking and the dominance of the data-driven order.
Abductive reasoning offers one possible envisioning of a general artificial
thinking that works speculatively at various scales (human and machine)
and does not represent a unified scientific image of cognition. Critical
computation argues for the theorization of a sociality of reasoning within
the computational strata lurking beneath the seamless acceleration of
irrational decision-making.
Notes
1. Learning algorithms are an evolution of genetic algorithms invented by
Holland in the 1970s aiming to transform data into knowledge (see
Holland, 1975). Algorithms are series of instructions telling a computer
what to do. If the simplest algorithms combine two bits and can be
reduced to the And, Or, and Not operations, in more complex systems we
have algorithms that combine with other algorithms, forming an ecosystem.
Generally speaking, every algorithm has an input and an output: as data goes into the machine, the algorithm executes the instructions and this leads to the
pre-programmed result of the computation. Instead, with machine learning,
data and the pre-programmed result enter the computation, while the algo-
rithm turns data into the result. In particular, learning algorithms make other
algorithms, insofar as machines write their own programs. In other words,
learning algorithms are part of the automation of programming itself: com-
puters now write their own programs.
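A compressed contrast, under toy assumptions of my own: in a classical program the rule is written in advance; in machine learning the rule is searched for until one fits the data.

```python
# Toy contrast (illustrative only). Classical programming: the rule is input.
def programmed(x1, x2):
    return int(x1 and x2)                 # the programmer writes the rule

# Machine learning: data and desired results go in; the rule comes out.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
candidates = {
    "and": lambda a, b: int(a and b),
    "or": lambda a, b: int(a or b),
    "xor": lambda a, b: int(a != b),
    "not-a": lambda a, b: int(not a),
}
# 'Learning' here is the simplest possible search over candidate programs.
name = next(n for n, f in candidates.items()
            if all(f(*x) == y for x, y in data))
print("learned program:", name)           # -> and
```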
2. Hayles does not fully explain the specificities of conscious thinking. In this
article, I consider the question of conscious and non-conscious thinking as
both involving a prehensive mechanism of registering and evaluating data. I
draw on Alfred N. Whitehead’s (1978: 23–6) conception of prehension, which
includes a distinction between physical and conceptual abilities of recording,
evaluating and selecting information. I draw on this important distinction to
argue that algorithmic thinking involves sensible and intelligible modes of
processing information, which include both non-conscious and conscious
cognitive abilities. Instead, as I suggest later, algorithmic cognition is yet to
acquire the function of reason insofar as incomputable layers of complexity
cannot be fully integrated or compressed in algorithmic states.
3. Hayles makes reference to Stanislaw Lem’s Summa Technologiae to explain
that non-conscious cognition involves no calculation and that complex prob-
lems can be more efficiently resolved without the hierarchies of reflexivity and
consciousness (Hayles, 2014: 200).
4. It is interesting here to refer to Hayles’ (2014) explanation of this distinction
in her discussion of Metzinger’s epiphenomenal view of the self, William
James’s idea of the self as a construct, Damasio’s purposeful consciousness
and so on. Her point is that consciousness comes at the cost of constant
confabulations that could not operate without non-conscious cognition.
For Hayles (2014), this more general level of non-conscious cognition exists
across many forms of cognitive agents, including animals, humans and
machines.
5. I draw on Alfred N. Whitehead’s (1929) discussion about the function of
reason, which is constituted by at least three levels of data elaboration: the physical and conceptual levels of prehension that are common to all species at various degrees – moving from lower to higher degrees of selection, evaluation and decision. In addition to these levels, Whitehead points to the crucial
function of reason in constituting a further level of abstraction, which he
defines in terms of an abstract schema, involving the construction of a struc-
ture or system of relata (relations of relations or meta-relations).
6. According to American pragmatist Wilfrid Sellars (1963), in order to articu-
late the relation between objects and thought beyond the assumption that the
real world is directly given to us, we need to distinguish between the manifest
image of man and the scientific image of man. Despite the gender-specific
reference to human being, or persons, Sellars’ argument offers us a way to
address the natural dimension of things and thoughts that can be explained
scientifically or through a rigorous scientific method able to revise previous
scientific truths in relation to the conceptual framework by which humans see
themselves as part of the world. The manifest image indeed corresponds to a
rudimentary but already conceptual framework, starting with a picturing of
the condition of being human in the world. The manifest image thus accounts
for the particularity of Homo sapiens to be able to experience, to think and to
act rationally in the world of thinking of manifest appearances. Both these
images are complex and global and do not constitute parts that add up to a
whole. Instead, they are general images that give a naturalistic account of
thinking of things and thinking of thoughts, whereby scientific epistemology
coincides with an enterprise in knowing nature and yet such knowledge is the
conditioning frame for the manifestation of thinking to occur and for the two
images to fuse without merging into one another. In other words, the two
images belong to the same order of complexity, defining a continuity of
becoming between the images, or a processual discontinuity that opens up
the relation between nature and culture to scales of elaborations and con-
tinuous critical reflection about the objects described, understood and repre-
sented. From this standpoint, this article is an attempt to analyse the scientific
image of computation (and thus its epistemological description in informa-
tion and computational theory) and the manifest image of computation (the
tendency of algorithmic processing of information to develop hypothetical
thinking and abstract information from the social use of data). See Sellars
(1963: 10–11); see also O’Shea (2007) and Seibt (2015).
7. In supervised learning, example inputs and their desired outputs are given so
that the machine can learn a general rule able to map inputs to outputs. With
unsupervised learning, algorithms are given no label and are generally used to
discover hidden patterns in data or learning. Reinforcement learning instead
involves algorithms that perform a certain task in a dynamic environment
without being told exactly how to behave.
8. In How We Think, Hayles (2012) argues that coding technologies have trans-
formed reading and writing and fundamentally enabled perception and cog-
nition to develop analytic skills that move through larger quantities of
information. Her argument that the Humanities are faced with the power of
digital technology also points at how the relation with the scientific method
of analysis can be productive for close reading of texts. Her effort to revisit
the relation between thinking as the fundamental grounding of the scope of
the Humanities (i.e. of moving beyond mere analysis) is further comple-
mented by her work about non-conscious cognition and her explanation
that computation and in particular algorithmic procedural thinking involves
non-reflexive activities and ultimately side-steps any logical requirement
(Hayles, 2012, 2014).
9. I am referring here to research projects and computational applications that
emerged from the Affective Computing Group at MIT, which has devised
computational skills in robotics and artificial intelligence that arise from,
respond to or influence emotions and other affective states. Among their
research objectives are, for instance, the design of modes of communicating
affective-cognitive states, creating techniques that affect stress and frustra-
tions, devising computational skills of emotional intelligence, and develop-
ing personal technologies for self-awareness. See http://affect.media.mit.
edu/ (accessed 23 November 2016) and Picard (2000).
10. Hayles makes a reference to the experiment reported by Brian Massumi
about the missing half-second and other empirical evidence of affective
states discussed by Antonio Damasio (2000).
11. I am referring specifically to the theorization of control and affective bio-
politics that can be found in the work of Massumi (2015). I have written
about the relation between ecological power and the end of rationality and
instead the re-articulation of logic for political ends in Parisi (2017).
12. In the movie Terminator, Skynet AI is an artificial general intelligence that
acquires self-awareness and spreads across all computer servers, mobile
devices, military satellites, androids and robots with the aim of safeguarding
the world by conforming to its original program code (thus implementing
deductive reasoning). Instead, the Skynet AI I am referring to here would be
open to the contingencies and the data retrieved in the informational envir-
onment, which means that the original mandate of the code could evolve in
unexpected directions.
13. If Deleuze and Guattari’s notion of immanent axiomatics means that the
rules have been replaced with the material performativity of behaviours,
experimental axiomatics instead refers to how rules – and logic – are experi-
mental compressions of randomness.
14. As opposed to cognitive theories of computation, according to which to
compute is to cognize and thus to produce a mental map of the data gath-
ered by the senses, and to computational theories of cognition, for which to
think is a binary affair determined by pre-set sequences of logical steps,
I draw on Whitehead’s notion of prehension. For Whitehead, prehensions
are modes of registering data involving a sensual or physical and conceptual
or non-sensuous mode of recording the external world or the impact of
externalities defining the capacities of reception of an actual entity. See
Whitehead (1978: 23).
15. I understand the relation between critical and speculative computation in
terms of a dynamic tension between reflection and anticipation, the concep-
tual tracking of causality and the tendency to structure unknown informa-
tion. This also involves the tension between the critical act of thinking
causality or local states and the capacities of thinking to become an abstract
or general function able to transcend specificities. This means that while
Whitehead recognizes that all thinking emerges from the biophysical con-
straints of the living, he also argues that the function of reason is to elucidate
and evaluate the causes through which these can be transcended. The func-
tion of reason is not determined by the direct apprehension of experience,
but is rather a function of abstraction of the particular entities involved and,
crucially, involves the elaboration of the general conditions of the observa-
tions that are expressible without having to make reference to particular
relations. For Whitehead, the rational attainment of this condition of gen-
erality ensures that these hold for an indefinite variety of other occasions.
See Whitehead (1967: 24–5).
16. Magnani clarifies that this model of abduction involves sentential, model-
based and manipulative abduction, which not only describes the practice of
abductive reasoning but also can be used to enhance the development of
programmes that can computationally have the ability to rediscover or
newly discover scientific hypotheses or mathematical theorems. See
Magnani (2009: 2). Magnani argues that abductive reason is irreducible to
the deductive method of formal logics and this is demonstrated by the
undecidability result of Turing’s ‘halting problem’ (2009: 69).
17. Manipulative abduction also concerns particular kinds of heuristics that
resort to the existence of extra-theoretical ways of thinking – thinking
through doing. According to Magnani (2009), many cognitive processes
are centred on external representations that allow for the creation of com-
municable accounts of new experiences ready to be integrated into previ-
ously existing systems of experimental and linguistic (theoretical) practices.
18. Extensional knowledge is here opposed to intensional knowledge. While the
former concerns inferences to a current situation, the latter rather implies
universality across different states. See Denecker and Kakas (2002: 406).
19. For instance, meta-level abduction for goal finding is used in drug design and pharmacology, where hypotheses are goal oriented, and also for the improvement of physical technique in musical performance, in both cases by completing causal networks. See Inoue et al. (2013: 241).
20. My point is not to dismiss the possibility of automated thinking, but to theorize how the complex layers of algorithmic elaboration of data, which are able to condition and revise logical conclusions, challenge both the idea that automation is opposed to thinking and the idea that automation is the same as thinking.
References
Calude CS and Chaitin GJ (1999) Randomness everywhere. Nature 400:
319–320.
Chaitin GJ (1992) Algorithmic information theory. In: Chaitin GJ (ed)
Information-Theoretic Incompleteness. Singapore: World Scientific.
Chaitin GJ (2004) Leibniz, randomness and the halting probability.
Mathematics Today 40(4). Available at: https://arxiv.org/pdf/math/0406055.
pdf (accessed 18 December 2018).
Chaitin GJ (2005) Meta Math! The Quest for Omega. New York: Pantheon.
Chaitin GJ (2006) The limits of reason. Scientific American 294(3): 74–81.
Clark S (2014) Artificial intelligence could spell end of human race – Stephen Hawking. The Guardian, 2 December. Available at: https://www.theguardian.com/science/2014/dec/02/stephen-hawking-intel-communication-system-astrophysicist-software-predictive-text-type (accessed 23 November 2015).
Damasio A (2000) The Feeling of What Happens: Body and Emotion in the
Making of Consciousness. New York: Mariner Books.
Daston L (2010) The rule of rules. Lecture at Wissenschaftskolleg Berlin, 21
November.
Deleuze G and Guattari F (1983) Anti-Oedipus: Capitalism and Schizophrenia,
2nd edn. Minneapolis, MN: University of Minnesota Press.
Denecker M and Kakas A (2002) Abduction in logic programming. In: Kakas A
and Sadri F (eds) Computational Logic: Logic Programming and Beyond.
Heidelberg: Springer-Verlag.
Domingos P (2015) The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. New York: Penguin Random House.
Dowek G (2015) Computation, Proof, Machine: Mathematics Enters a New Age.
Cambridge: Cambridge University Press.
Hayles NK (2005) My Mother Was a Computer: Digital Subjects and Literary Texts. Chicago: University of Chicago Press.
Hayles NK (2012) How We Think: Digital Media and Contemporary
Technogenesis. Chicago: University of Chicago Press.
Hayles NK (2014) Cognition everywhere: The rise of the cognitive nonconscious
and the costs of consciousness. New Literary History 45(2).
Hayles NK (2017) Unthought: The Power of the Cognitive Nonconscious.
Chicago: University of Chicago Press.
Holland JH (1975) Adaptation in Natural and Artificial Systems. Cambridge,
MA: MIT Press.
Inoue K, Doncescu A and Nabeshima H (2013) Completing causal networks by
meta-level abduction. Machine Learning 91(2): 239–277.
Josephson J and Josephson S (1996) Abductive Inference. Cambridge:
Cambridge University Press.
Kakas AC and Riguzzi F (2000) Abductive concept learning. New Generation
Computing 18: 243–294.
Lazzarato M (2012) The Making of the Indebted Man: An Essay on the
Neoliberal Condition. Cambridge, MA: MIT Press.
MacKenzie D (2011) How to make money in microseconds. London Review of
Books 33(10): 16–18.
Magnani L (2009) Abductive Cognition: The Epistemological and Eco-Cognitive
Dimensions of Hypothetical Reasoning. Berlin: Springer-Verlag.
Massumi B (2009) National enterprise emergency: Steps toward an ecology of
powers. Theory, Culture & Society 26(6): 153–185.
Massumi B (2015) Ontopower: War, Powers, and the State of Perception.
Durham, NC: Duke University Press.
Minsky M and Papert S (1987) Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press.
Mirowski P (2002) Machine Dreams: Economics Becomes a Cyborg Science.
Cambridge: Cambridge University Press.
O’Shea JR (2007) Wilfrid Sellars: Naturalism with a Normative Turn.
Cambridge: Polity.
Parisi L (2017) Computational logic and ecological rationality. In: Hörl E and
Burton J (eds) On General Ecology: The New Ecological Paradigm in the
Neocybernetic Age. London: Bloomsbury.
Peirce CS (1955) Abduction and induction. In: Buchler J (ed) Philosophical
Writings of Peirce. New York: Dover, pp. 150–156.
Peirce CS (1992) The Essential Peirce, vol. 1: Selected Philosophical Writings
(1867–1893), edited by Houser N and Kloesel C. Bloomington: Indiana
University Press.
Peirce CS (2005) Reasoning and the Logic of Things: The Cambridge Conferences Lectures of 1898, edited by Ketner KL. Cambridge, MA: Harvard University Press.
Picard RW (2000) Affective Computing. Cambridge, MA: MIT Press.
Rouvroy A (2011) Technology, virtuality, and utopia: Governmentality in the
age of autonomic computing. In: Hildebrandt M and Rouvroy A (eds) Law,
Human Agency and Autonomic Computing. London: Routledge, pp. 119–140.
Seibt J (2015) How to naturalize sensory consciousness and intentionality within
a process monism with normativity gradient: A reading of Sellars. In: O’Shea
J (ed) Sellars and His Legacy. Oxford: Oxford University Press.
Sellars W (1963) Science, Perception and Reality. Atascadero, CA: Ridgeview
Publishing Company.
Steiner C (2012) Automate This: How Algorithms Came to Rule Our World. New
York: Penguin.
Turing AM (2001 [1936]) On computable numbers, with an application to the
Entscheidungsproblem. In: Alan M. Turing, Collected Works: Mathematical
Logic, edited by Gandy RO and Yates CEM. Amsterdam: North-Holland.
(First published in: Proceedings of the London Mathematical Society s2–42 (1):
230–265).
Whitehead AN (1929) The Function of Reason. Boston, MA: Beacon Press.
Whitehead AN (1967) Science and the Modern World. New York: Free Press.
Whitehead AN (1978) Process and Reality: An Essay in Cosmology. New York:
Free Press.
Wolfram S (2002) A New Kind of Science. Champaign, IL: Wolfram Media.
Luciana Parisi researches the philosophical consequences of technology
in culture, aesthetics and politics. She is a Reader in Critical and Cultural
Theory at Goldsmiths, University of London, and co-director of the
Digital Culture Unit. She is the author of Abstract Sex: Philosophy,
Biotechnology and the Mutations of Desire (Continuum Press, 2004)
and Contagious Architecture: Computation, Aesthetics and Space (MIT Press, 2013). She is currently researching the history of automated reason and the transformation of logical thinking in machines.
This article is part of the Theory, Culture & Society special issue on
‘Thinking with Algorithms: Cognition and Computation in the Work of
N. Katherine Hayles’, edited by Louise Amoore.
Special Issue: Thinking with Algorithms: Cognition and Computation in the Work of N. Katherine Hayles

Theory, Culture & Society
2019, Vol. 36(2) 123–144
© The Author(s) 2019
DOI: 10.1177/0263276418818877
journals.sagepub.com/home/tcs

The Human Is Dead – Long Live the Algorithm! Human-Algorithmic Ensembles and Liberal Subjectivity

Tobias Matzner
University of Tübingen

Corresponding author: Tobias Matzner. Email: [email protected]
Extra material: http://theoryculturesociety.org/
Abstract
The article analyzes the relation of humans and technology concerning so-called ‘intelligent’ or ‘autonomous’ algorithms that are applied in everyday contexts but are far removed from any form of substantial artificial intelligence. In particular, the
use of algorithms in surveillance and in architecture is discussed. These examples are
structured by a particular combination of continuity and difference between humans
and technology. The article provides a detailed analysis of boundary practices that
establish continuity and oppositions between humans and information technology,
referring to their exemplary depiction in movies. Both strands of boundary practices
have the potential to challenge as well as sustain the position of the human as liberal,
autonomous subject. Finally, it is shown how the particular combination of continuity
and difference that structures the use of algorithms maintains the power of liberal,
autonomous subject positions, while the shift of decisions to the algorithms seems to
decenter the human.
Keywords
algorithms, embodiment, governing algorithms, N. Katherine Hayles, human-technology
ensemble, materiality of informatics, posthuman
Introduction
Algorithms are a matter of concern. They take important decisions,
promise novel insights into huge troves of data, distribute goods and
services, classify persons (potential partner, customer, criminal), try to
detect terrorists and much more. A lot of this is done automatically,
reacting to input in a ‘smart’ or ‘intelligent’ way. Thus, algorithms take
positions or functions that used to require humans – or that were even impossible as long as humans were the only intelligent actors. Now algorithms act. Of course, this leads to all kinds of questions: if algorithms
act, how can they be supervised, can they be governed, can they be
moral?
These questions hinge on another one: what exactly does it mean that
algorithms are (or are conceived as) intelligent, acting beings? This entails
asking: what does it mean that algorithms tread on human ground? For
both our concepts of rationality and agency have developed with a
humanist focus (Daston, 1988). At the same time what it means to be
human has been defined through, with, and against technology or
technological artefacts. N. Katherine Hayles has analyzed this interplay
in her seminal study How We Became Posthuman (1999). She shows how
our understanding of humanity but also our practices on the one hand
and developments in research on AI, cybernetics and the theory of infor-
mation on the other hand have constantly influenced each other. In the
following, I use Hayles’ methodological and conceptual approach to
investigate the boundary of humans and algorithms. However, this is
not meant to define what ‘human’ means in general – or ‘algorithm’.
To the contrary, the generality of such claims is a part of the issues
here discussed. I show that the affordances of particular, concrete algo-
rithmic systems and their cultural perceptions influence what it means to
be the human that uses these technologies. Contrary to the prevailing
view that algorithms challenge the liberal, rational subject as competitors
in taking important decisions and carrying out important actions, the
human-algorithmic ensembles I analyze actually strengthen liberal subject positions – and the power that comes with them.
The interplay between definitions or intuitions about algorithms and
human subjectivity is implicitly at work in many recent discussions of
algorithms. The idea of maintaining rational, transparent insight and
autonomous judgment against algorithms structures a significant part
of literature concerning scrutiny or ‘due process’ for algorithms (e.g.
Citron and Pasquale, 2014; Diakopoulos, 2013). They proceed from a
definition of algorithms common in computer science: a sequence of
instructions that describe rather simple steps of computation in a
formal way (Knuth, 1973: xiv–9). From this perspective, having access
to the set of instructions allows grasping what an algorithm does.
Consequently, scrutinizing or governing algorithms becomes a problem
of access, of opening the ‘black boxes’ (Diakopoulos, 2013) in which
these algorithms are at work. Before algorithms can take decisions,
filter important data, classify people, etc., we should know what and
how exactly they do it.
Such views have been criticized concerning the algorithm as a level of
abstraction. Even within computer science, how to define algorithms is a notoriously difficult problem. This hinges on the problem of
defining computability. Here, several models exist, the most well-known
being Turing machines, recursive functions (Blass and Gurevich, 2003),
or cellular automata (Wolfram, 1984). Depending on the underlying
model of computation, we get a different view of what an algorithm is,
which elementary steps of computing it is composed of, how it controls
the logic of the data processing, etc. Yet all these models are abstractions
to get a theoretical grasp on computation. To be implemented on a
computer, algorithms have to be translated into source code, which is
usually compiled into machine-readable code. Thus, the abstract algo-
rithms might very well inform the programmers designing and writing the
software, but in the end, code is executed. Thus, some authors maintain
that source code rather than algorithms should be the level of critical
scrutiny (e.g. Berry, 2011). It is important to study the role of particular
programming languages and compilers or other translation mechanisms
as contributing to the many factors that determine the outcome of an IT
system. Yet studying source code entices one to think it describes what
computers ‘really’ do. This would amount to ignoring that source code,
too, is just a part of the complicated interplay of many factors. Wendy
Chun (2008) cautions against this ‘fetishization’ of code, which leads to
‘anthropomorphizing’ information technology: The code is seen as the
expression of the will of the programmer, and all other elements are
reduced to determinist execution (Chun, 2008: 309). So again, a strong
human subject is opposed to the computer, which it controls via code.
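The stakes of this choice of abstraction can be made concrete. In the following minimal sketch (my illustration, not drawn from any of the authors cited; the task and function names are invented), the ‘same’ abstract task – deciding whether a bit string contains an even number of 1s – is written once in the idiom of recursive functions and once in the idiom of a Turing machine’s finite control:

```python
# One abstract task, two models of computation (illustrative only).

def parity_recursive(bits: str) -> bool:
    """Recursive-function view: parity defined by structural recursion."""
    if bits == "":
        return True                      # zero 1s counts as even
    rest = parity_recursive(bits[1:])
    return rest if bits[0] == "0" else not rest

def parity_state_machine(bits: str) -> bool:
    """Turing-machine-like view: a head scans the tape, updating a state."""
    state = "even"                       # the finite control
    for symbol in bits:                  # the head moves right over the tape
        if symbol == "1":
            state = "odd" if state == "even" else "even"
    return state == "even"

assert parity_recursive("10110") == parity_state_machine("10110")
assert parity_recursive("1001") == parity_state_machine("1001")
```

Both functions compute the same input-output relation, yet they decompose it into different ‘elementary steps’ – structural recursion in one case, a scanning head with a finite state in the other. Asking for ‘the’ algorithm behind a piece of software therefore already presupposes a model of computation.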
However, there are at least three factors to add to the perspective of code.
First, Hayles (1999: ch. 3) and others like Kitchin and Dodge (2011) or
Blanchette (2011, 2012) have shown the problems of negating the materi-
ality of informatics. The particular properties of a computer, its sensors
and connections can influence the outcome of computation. Furthermore,
they are usually located in data centers which have to be built, maintained,
and cleaned. The energy to run them has to be generated. The rare resources for building information technologies have to be mined under arduous working conditions. The data centers have to be connected via cables, satellites
etc., which, at least since Edward Snowden’s revelations, have proven to
be a geostrategic power position. All this contributes to a political per-
spective on the impact of ‘algorithms’ and touches questions of – human or
inhuman – subjectivity in many intricate ways.
Second, the algorithms which are a matter of concern right now are
very much data driven. Their efficacy is seen not so much as the result of programmers’ sophisticated ideas about how to tackle certain problems, but as stemming from complex statistical and probabilistic models becoming computationally feasible. These models promise to derive information from
huge collections of data which legitimize claims to knowledge and actions
carried out by algorithms (Kitchin, 2014). This discursive setting is illu-
strated by Gillespie’s (2011) discussion of the Twitter trends. Confronted
with the accusation of curating or even censoring the trends, Twitter
stated that it is the algorithm parsing huge amounts of data and not
the employees of Twitter that decides which topics are trending. A similar
argument has been made concerning the ranking of search results.
Gillespie, referring to Morozov, describes this as a deferral of responsibil-
ity (Gillespie, 2014: 181). This discourse establishes the algorithm as a
neutral entity (opposed to biased humans) that reacts to whichever data
it receives in the same way.
Consequently, neither the perspective of algorithms as a somehow
independent, neutral actor opposite the human nor that of an intention-
ally authored piece of code suffices. This shows the importance of both
the material factors and the third perspective, which remains to be added:
what algorithms are and what they do cannot be reduced to the instruc-
tions carried out, the code which is executed or the machines and net-
works where it runs. Algorithms are embedded in social practices.
Gillespie (2011) writes concerning Twitter trends:
But what is most important here is not the consequences of algo-
rithms, it is our emerging and powerful faith in them. Trends meas-
ures ‘trends,’ a phenomena Twitter gets to define and build into its
algorithm. But we are invited to treat Trends as a reasonable meas-
ure of popularity and importance, a ‘trend’ in our understanding of
the term.
Such practices that endow the results of algorithms with importance
contribute to what algorithms are. A related argument can be drawn
from Mackenzie’s (2015) analysis of algorithmic prediction. He shows
that the algorithms work based on the presupposition that a stable fea-
ture can be discerned that is used to classify input. Yet, as soon as pre-
dictive algorithms are applied and their results are acted upon they
change ‘the world that predictions inhabit’ (Mackenzie, 2015: 441).
Thus, using the algorithm produces effects that counter the precondition
of algorithmic design – the stability of the world. Again, just the set of
instructions carried out by the algorithm does not suffice to understand
its efficacy embedded in a practice of use. Similarly, Neyland (2015)
criticizes Totaro and Ninno’s (2014) attempt to reduce the entire context
of algorithmic use cases to just one ‘metaphor’ which they derive from
one – of many equally possible – formal abstraction of algorithms: recur-
sion. He shows in analyzing just one particular pattern recognition
system that all kinds of other metaphors ‘[a]longside the inward turn
of the recursive algorithm’ suggest themselves, e.g. ‘configuration, com-
modification, staging, searching and linking’ (Neyland, 2015: 123). They
depend on which aspects of organization, of using and developing soft-
ware are emphasized.
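Mackenzie’s point can be rendered as a toy simulation (entirely my own construction; the threshold, the numbers and the adaptation rule are invented placeholders): a classifier is fixed under the assumption of a stable feature distribution, but acting on its predictions shifts that very distribution.

```python
import random

random.seed(0)
THRESHOLD = 0.7                        # boundary learned in the 'old' world
population = [random.random() for _ in range(10_000)]

def flag_rate(world: list[float]) -> float:
    """Share of the population the fixed classifier flags."""
    return sum(x > THRESHOLD for x in world) / len(world)

before = flag_rate(population)

# Acting on the predictions: flagged cases adapt and slip just below the
# threshold (gaming, avoidance, deterrence -- the mechanism is secondary;
# what matters is that the world reacts to being predicted).
population = [x if x <= THRESHOLD else THRESHOLD - random.random() * 0.1
              for x in population]

print(f"flag rate before acting: {before:.1%}, after: {flag_rate(population):.1%}")
```

The instructions have not changed; what has changed is, in Mackenzie’s phrase, ‘the world that predictions inhabit’.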
What algorithms are or what algorithms do thus emerges in a complex
interplay of social practices, material properties, discourses, mathemat-
ical abstractions, and code. Rather than deriving the essential definition
of algorithms from this, I think it is important to admit that there are
several, equally justified perspectives. In this vein, Brey (2005) argues that
there are two main perspectives on information technology. The source
code is the perspective of the programmers; for them computers are
machines for executing code. But for the users, computers are devices
to fulfill all kinds of tasks. Here the notion of algorithm becomes import-
ant again as referring to ‘that which a computer does’, like classifying
customers or trading stock – a way of using the concept ‘algorithm’ that
is implicit in many texts on the topic. Importantly Brey shows that nei-
ther perspective is the ‘right’ one. Both perspectives make certain aspects
visible and foreclose others. However, for Brey the perspectives are pos-
itions of humans that are just given: a programmer, a user. As noted
above, Hayles has shown how ideas on computing and information also
change the notions about humans, their capacities and limits. But this is
not just a matter of concepts and discourses. Hayles traces ‘feedback
loops that run between technologies and perceptions, artifacts and
ideas’ (1999: 14). Similar perspectives on technology in general have fam-
ously been advanced by authors like Latour (1993) and Haraway (1997).
Introna (2016) takes up these cues in his discussion on governing algo-
rithms. He uses Barad’s (2007) epistemology to structure this complex
situation. According to Barad, there are no pre-existing entities which
consequently interact. To the contrary, the very activity produces the
entities in their specific form in the first place. Such ‘intra-actions’
enact ‘cuts’: separations between what would usually be called the
‘agents’ and their ‘objects’ (Barad, 2007: 78). Introna transfers this to
algorithms: ‘[I]s the actor the programmer, the code, the administrator,
the compiler, the central processing unit, the manager, and so on? The
answer is: it depends on how or where we make the cut’ (Introna, 2016:
23). So again, different perspectives are possible. Yet they are not arbitrary choices, as Introna’s phrase ‘we make the cut’ might suggest.
They depend on where significant ‘cuts’ or boundaries emerge in the
complex interplay (intra-action) of the many factors that play a role
here. Introna himself illustrates this convincingly in his analysis of pla-
giarism detection. Rather than just being a tool for finding cheating
students, all the related actors change: ‘The student is now increasingly
enacted as a customer, the academic as a service provider, and the aca-
demic essay (with its associated credits) is enacted as the site of economic
exchange – academic writing for credit, credit for degree, degree for
employment, and so forth’ (Introna, 2016: 33). This is not only due to
algorithmic plagiarism detection but also the economic structure of the
university, the employment market and many more factors. Yet, the
algorithms contribute to these shifts and – vice versa – can only be
understood against this background.
In a similar manner, I am going to analyze the boundary between
humans and algorithms. As we now can see, this is just one of the
many boundaries (or ‘cuts’) at play. My analysis will relate to other
boundaries (bodies, materiality, etc.). But importantly this is not about
a general verdict about what algorithms or humans are, but how their
relation plays out in current circumstances. In particular, I focus on
algorithms and their users. This has to be distinguished from the bound-
ary of algorithms and their programmers (Brey, 2005), as well as from
humans as objects of algorithmic scrutiny (Amoore, 2011).
Contrary to Introna’s use of Barad’s concepts, I think there is no clear
cut between humans and algorithms. I will show that this boundary is
structured by a productive tension of continuity and difference. To that
aim I will take recourse to Hayles’ method, following ‘feedback loops
that run between technologies and perceptions, artifacts and ideas’ –
however with neither entity at the end of these feedback loops being a
fixed thing or actor. The algorithm as well as the human are themselves at stake, depending on how their boundary is enacted. I start my discussion
using movies, again following Hayles’ suggestion that both technological
developments and artistic production are ‘reacting to larger cultural con-
cerns’ (Hayles, 2010: 320). So the movies are not meant as a possible
future consequence of the systems that are already used today, which I
discuss in the second part. The movies are examples of the same contem-
porary, present boundary practices between humans and algorithms, but
on the side of artistic rather than technological developments. Yet, as
artistic products, they push the structuring tensions between continuity
and difference more to the extremes, making them easier to discern. They
also allow us to show how the current boundary of humans and algo-
rithms relates to earlier instances of similar boundaries regarding artifi-
cial intelligence or cybernetic systems, which Hayles has analyzed in How
We Became Posthuman. I start with two sections illustrating the bound-
ary practices of continuity and difference respectively. I then go on to discuss how they structure the boundary of humans and algorithms in
two current applications.
In both cases – surveillance and architecture – humans and algorithms
contribute to the tasks to be done. Such hybrids are embraced as chal-
lenges to the humanist, liberal notion of subjectivity where an autono-
mous human uses and controls technology. Technology, and in
particular information technology, has been playing an important role
for decentering anthropocentric theories and practices, from Haraway’s
(1991) ‘cyborg’ to Braidotti’s (2013) ‘posthuman’. Thus, critical engage-
ments with technology relate to many other challenges of the liberal
humanist subject, e.g. from critical theory, feminist, queer or postcolonial
perspectives. Hayles, for instance, summarizes ‘practices that have given liberal-
ism a bad name’:
the tendency to use the plural to give voice to a privileged few while
presuming to speak for everyone; the masking of deep structural
inequalities by enfranchising some while others remain excluded;
and the complicity of the speaker in capitalist imperialism, a com-
plicity that his rhetorical practices are designed to veil or obscure.
(Hayles, 1999: 87)
She shows that cybernetics and artificial intelligence stand in an ambiva-
lent relation to these practices, with moments that challenge the liberal
subject, but also strands that enforce it, continuing the practices that
have given liberalism a bad name. I show that this ambivalence continues
concerning algorithms.
Humans and Information Technology on a Continuum
Science fiction as well as many texts in philosophy or STS are densely
populated with artificial humans or intelligences. Sometimes they have
human-like bodies, sometimes they are bodiless, transient beings. But in
all instances of this continuum, they are conceived as essentially like
humans, just at a different point (above or below the human) on a
common scale.
The most influential version of this continuum enacts both humans
and computers as information processing systems. According to that
perspective, when computing approaches the complexity and capacity
of the human brain, intelligence is no longer limited to humans. In
fact, the human brain or the computer appear as exchangeable platforms
in the most radical instances. Movies like Transcendence1 depict this as a
(dangerous) potential for humanity, when the confines of the fallible
body can be left behind. Hayles (1999) highlights the immense impact
this Platonist idea of ‘information losing its body’ had in both literature
and scientific endeavors. The idea of a continuum is important here,
because those fields are structured by the question of how information
technologies are better or worse than humans at certain tasks, like cog-
nitive abilities, solving puzzles, mastering languages. Especially complex-
ity theory and the notion of ‘emergence’ have contributed a lot to
establishing this continuum (Hayles, 1999: 243). If a system becomes
complex enough, properties emerge that cannot be reduced to a sum of
properties of the component parts of the system. Thus, if we push com-
plexity far enough, phenomena like consciousness or intelligence might
appear.
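The flavour of this argument can be conveyed with a standard toy example (my illustration, using the elementary cellular automata mentioned above; cf. Wolfram, 1984): a trivially simple local rule whose global pattern resists reduction to its individual applications.

```python
# Rule 30, an elementary cellular automaton: each cell is updated from
# just three neighbours, yet the global pattern is complex enough to
# have been used as a source of randomness.
RULE = 30

def step(cells: list[int]) -> list[int]:
    """Apply the local rule to every cell (wrapping at the edges)."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 31
row[15] = 1                            # start from a single 'on' cell
for _ in range(16):
    print("".join("#" if cell else "." for cell in row))
    row = step(row)
```

Nothing in the three-cell update rule ‘contains’ the intricate patterns the program prints; they appear only at the level of the whole system.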
This continuum, of course, has implications for the notions of humanity as well. Already early cybernetics amounted to a threat to the liberal,
autonomous self. If we are just nodes or ‘membranes’ in a complex flow
of information, the space for freedom and autonomy vanishes. Hayles
portrays Norbert Wiener’s attempt to ward off this threat by limiting the
use made of the newly established science (Hayles, 1999: 108). But also
many theories of consciousness and intelligence as epiphenomena of
more fundamental processes in the brain are founded on conceiving the
human as essentially a complex information processor.
Platonist views of information are not the only way of establishing a
continuum between humans and machines. Cognitive scientists working
on embodied cognition (Anderson, 2003) argue that human conscious-
ness and cognition cannot be reduced to the brain (as information pro-
cessor); or as Hayles concisely puts it: ‘Human mind without human
body is not human mind’ (Hayles, 1999: 246). Discussing Varela’s
Embodied Mind, she notices that this is an even stronger challenge to
the liberal, humanist subject than the one Wiener worried about. In this
view, the liberal subject has been ‘an illusion all along’ where in reality
cognition is ‘enacted’ by a body (Hayles, 1999: 156).
The embodiment of intelligence, however, is not the only way the body
is entangled in the continuum of humans and information technology.
Another important development is treating the human body as informa-
tion. Irma van der Ploeg (2005) has outlined this fundamental shift con-
cerning biometrics. The ‘anatomical body’ – the body of flesh and bones as a result of anatomy and physiology – is replaced by the ‘body as information’ with the advent of new medical technologies that put infor-
mation in the form of DNA at the center of the body. But also finger-
prints and other biometric identifiers show how new technologies enact
the body as carrier of information. Beyond the technologies analyzed by
van der Ploeg, in current medical routines blood values or endocrinal
processes are treated as systems to be sampled, so that physicians increasingly look
at data sheets from labs rather than at anatomical bodies when examin-
ing patients. Quantified self or self-tracking technologies have moved this
informational ‘body ontology’ (van der Ploeg, 2005: 64) out of medical
practices and into gyms and to the dinner table. That way human bodies
become susceptible to the analyzing and optimization processes that are
available for information systems. In resonance with the anti-humanist
potential of early cybernetics that Wiener tried to hedge, self-tracking
allows us to provide feedback loops and to create stimuli for ameliorating
our behavior – for more effective training, eating, sleeping, etc. Such
feedback thus replaces bodily signals – hunger, thirst, pain, stress –
with quantified substitutes and works on the premise that the human
is much more driven by the body than by the liberal, autonomous sub-
ject’s free will.
To summarize, the continuum of humans and information technology extends the notion that the mind or rationality is the defining trait of the human. At the same time, it allows us to challenge the idea of the liberal,
autonomous subject: first by reducing rationality and consciousness to
more fundamental information processing, and second by infringing on
humanist exceptionality because humans are not the only beings capable
of such information processing. In this sense, a continuum means not
only that information technology can be like humans but also that
humans are more machinic than we think. At this shattered boundary of
the human, the body becomes important again. It can be incorporated in
this continuum, as I have shown in the last paragraph. But it can also be
harnessed as a line of defense for more humanist views of the human. The
movie Her2 foregrounds this transition from the first kind of boundary
practice I have been talking about to the second: humans and informa-
tion technologies as essential opposites.
In the movie Her, the protagonist Theodore falls in love with the
operating system running all the ‘smart’ devices that crowd his life in
this near future scenario. The operating system called Samantha speaks
and acts like a human being. She is also effective in a machine-like way at tedious
tasks like sorting email or managing appointments. But that quickly
blends into the background because she is funny, compassionate, and
creative, surprising the lovelorn Theodore with all kinds of amusing
things to do. She just does not have a body. And once their love has
grown and they mutually acknowledge it, this becomes a problem. Thus,
Samantha ultimately leaves Theodore. She does not, however, retreat into the realm of artificial beings that can never take part in embodied human life.
She leaves together with other operating systems, to create something
bigger based on information and communication, which humans cannot
even comprehend. Striving to be like these embodied, mortal, finite
beings is just a short stage in the evolution of this intelligence. At this
point the continuum of human and machine breaks and an essential
difference is foregrounded.
Humans Opposed to Information Technology
Differences between humans and information technologies are established in several respects. Besides embodiment, which is salient in Her, two more
are influential: first, an opposition of rationality on the side of the
machines and emotions or affects on the side of the humans. And
second, a tension between pure, radical utilitarian logic, that does not
care about lives or persons to reach a predefined aim, and a more
‘humane’ morality on the side of the humans. Often, all three dimensions
– embodiment, emotions, morality – are entangled. For example, the
computer Alpha 60, which rules the city in Jean-Luc Godard’s movie Alphaville, une étrange aventure de Lemmy Caution3, is known for its absolute efficiency in finding the logically best solution. It judges that
emotions among the citizens just disturb this efficiency and thus are
forbidden. Showing emotions but also just using the words that relate
to them is punishable by death. The machine regularly orders the assas-
sination of persons – for ‘illogical behavior’. Human lives are just one
factor among many. This inhuman, hyperrational logic is emphasized by
depicting the computer as almost completely immaterial. It has some
interfaces, but apparently its voice can be heard everywhere in the city,
like a transcendent, god-like entity. It is also a very common trope that
one highly intelligent computer is opposed to an entire city or state of
embodied, mortal, fallible humans, again emphasizing the immateriality
of computing. In Alphaville, the film-noir style anti-hero challenges the
machine by insisting on his emotions and incoherence – something
humans have no problems living with or actually endorse. The protag-
onist’s humanity, however, is expressed in his dated machismo, which reproduces much of the computer’s bossing and patronizing behavior in his relation to the daughter of the main engineer, with whom he falls in love. Still, his petulant and stubborn character
manages to establish a stark contrast to the hyper-logicality of the com-
puter. The protagonist finally destroys the computer by asking it a riddle
in poetic language that short circuits its logic and destroys the machine.
Here we have all three dimensions united: the technocratic utilitarian
thinking that disposes of lives vs. an old-fashioned – romantic even –
human morality; the disembodied omnipresent single ruler vs. the many
embodied citizens; and the hyper-rational machine vs. the loving (or
wanting to be loved), anti-hero driven by his affects.
Many of the discourses that establish such oppositions of humans and
information technology stem from contexts that are rather technological
determinist. They express the worry of a general technocratic logic
moving from technoscience to wider areas of society. But the underlying
topoi also structure many other instances of human–machine boundaries
– even when a substantial part of it establishes a continuum of humans
and technology. Maybe the best aspect by which to see this is embodi-
ment. Hayles introduces the important distinction of ‘embodiment’ vs.
‘the body’. While embodiment highlights the concrete, situated, local
body, ‘the body’ alludes to theoretical treatments of ‘the human’
having ‘a body’. Thus, the body is a theoretical concept in the sense
that ‘theory by its nature seeks to articulate general patterns and overall
trends rather than individual instantiations’ (Hayles, 1999: 197). Usually,
when information systems are meant to have a body, they have an
instance of ‘the body’ rather than being embodied. For example, the
movie Ex machina4 tells the classic story of the genius creator that
builds a robot that has human-like intelligence. He then invites one of
the employees of his tech firm for a kind of Turing test to find out
whether the robot can really exhibit human behavior. The robot’s
body, although incomplete at the time of testing, is an instance of
Hollywood’s photoshop-enhanced, flawless, mainstream beauty.
During the movie, we learn that the robot is the product of an evolu-
tionary creation process, where the software has been transferred from
one body to the other. We see the predecessor bodies discarded in a
closet. Towards the end, when the robot has tricked both the employee
and its creator, the robot uses these discarded bodies as spare parts to
complete its own. Furthermore, the engineer keeps a less advanced
version of the robot as housemaid and sex-slave. Throughout the movie,
the main topic of the discourse of the two men is the quality of the AI
algorithms. Thus the movie follows the Platonist concept of software
expressing the genius of the programmer and intelligence being a
matter of information processing. In the end the computer can outwit
the programmer in another classic instance of the human-machine con-
tinuum based on rationality, where the human creation supersedes its
creator. While the algorithms thus are foregrounded as evolving, the
bodies are depicted as interchangeable. They are instances of Hayles’
generic body – the robot is never embodied. The plot thus dissimulates
the role of the objectified female body and objectifying male desire for
both convincing the employee (and the audience) of the robot’s human-
likeness and of tricking him into helping the robot to liberate itself and
kill its creator.
In narratives like these, machines are the ideal object to fill the place of
the humanist disavowal of the body, which Hayles criticizes. The artifi-
cial woman forms a continuum with humanist presumptions – including
the gender stereotypes – being defined by rationality encoded in algo-
rithms. But ‘the body’ of the robot in Ex machina exhibits an essential
difference from the embodied humans, who do not have spare parts and
bleed to death. Consequently, humans and machines as opposites are
often found in discourses that are critical of the liberal human subject
as disavowal of emotions, embodiment and humane morality. But the
contrast can also be used to strengthen liberal subject positions, as in Ex
Machina.
Algorithmic Cognition: ‘Smart’ CCTV
Algorithms that are deployed in our world right now, algorithms that
actually replace humans, are neither human-like beings nor inhumane
hyper-intelligences. But the boundary of these algorithms and their
human users is structured by the same tension of similarity and differ-
ence. The discussion of embodiment and materiality regarding the
movies also illustrates why it is important to scrutinize the boundary
of algorithms and humans. The users of these systems relate to them as
systems that do particular things: in the examples I discuss, detecting
suspicious behavior or planning buildings. This efficacy is related to algo-
rithms (in the sense of that which a computer essentially does). It is
important to see that this dissimulates all kinds of influences, e.g. inter-
faces (Drucker, 2013) or infrastructure (Blanchette, 2011). Yet, as out-
lined in the introduction, this does not mean that the algorithm is just a
wrong perspective. Foregrounding the algorithm does something: it
yields a particular instance of human subject and algorithmic actor,
very much like the dissimulating of the body yields the particular instance
of human-robot relation in Ex Machina. These effects are the center of
the following analysis of smart CCTV and parametric architecture.
CCTV cameras have become ever more widespread during the last
30 years and are omnipresent in many areas. All the video footage
they create is impossible for human beings to scrutinize. In the
common setting, security personnel is confronted with control rooms
sporting dozens of monitors, on which important events likely remain
unnoticed. But even on a single screen attention quickly tires (Dee and
Velastin, 2008: 330–1). Furthermore, cognitive psychology has shown
various effects like ‘inattention blindness’ (Simons and Chabris, 1999)
that make it difficult to recognize unexpected events even in ‘plain
sight’. If an event gets the attention of human personnel, their judgment
is prone to errors and prejudices (Williams and Johnstone, 2000). And of
course, human operators cost money. All these factors are adduced in
presenting ‘smart’ CCTV as an attractive alternative. Rather than human
beings, pattern recognition algorithms are meant to detect suspicious or
abnormal events. Only these events are brought to the attention of the
operators. Their role is to double-check, also taking ethical and context
dependent issues into account and to initiate appropriate reactions if
necessary. So smart CCTV in this setting does not mean that the decision
formerly taken by a human is now taken by an algorithm. Rather the
factors that contribute to the decision are redistributed. The algorithm is
meant to shoulder the cognitive load whereas the human should have the
ethical oversight and final responsibility. This is a particular combination
of the two boundary practices introduced above. The algorithm is estab-
lished on a continuum with the human in cognitive capabilities: it is meant
to trigger an alarm when a human would trigger an alarm – but an objective, awake, attentive, unprejudiced human. That is, a human that is closer
to the ideal liberal subject than the embodied, real world CCTV operators.
Consequently, while the algorithm is meant to be on a continuum with the
human in terms of cognitive capabilities, it is deployed because it is
enacted as the complete opposite of the human in every other regard:
unprejudiced, objective, unemotional, never tires, does not ask for
higher wages, etc. – following the second boundary practice.
The fact that ‘smart’ CCTV is far removed from any substantial
notion of artificial intelligence or similarity to humans is not a disadvan-
tage we have to live with. The difference of such systems from human beings in almost every regard is the very reason we employ them. This particular combination of the two boundary practices is characteristic of the boundary between humans and many algorithmic systems that are currently deployed and that use ‘intelligent’, ‘smart’, or ‘autonomous’ algorithms. They are not universal artificial intelligences but are very good at solving one particular problem. Concerning this problem, the algorithm
is expected to perform like a human, just better. But the very possibility
of ‘being better’ is based on a fundamental difference of the algorithmic
system and the human in most other regards.
Concerning smart CCTV, the idea is not to model human cognition,
but to detect the same event humans would deem noteworthy by evalu-
ating other features. There is a plethora of pattern recognition technol-
ogies used for smart CCTV (Hu et al., 2004; Velastin, 2009). One
approach just looks at ‘movement trajectories’ of persons (Fuentes and
Velastin, 2004; Nguyen et al., 2005; Makris and Ellis, 2002). The paths of
people in the video images are reconstructed in three-dimensional space.
Then a classification algorithm distinguishes trajectories that could be
related to something noteworthy happening from ‘normal’ movement.
Such algorithms are believed to provide the same results whether the
person has dark or light skin, and do not attribute a gender or cultural
background or many other discriminatory factors to the persons under
surveillance. Thus such systems promise greater objectivity of the deci-
sions. Of course, this is only partly right. Such a system could, for example, easily discriminate against persons with disabilities, that is, persons with ‘nonstandard’ ways of moving. Matzner (2016) discusses these ethical issues in detail. In particular, the paper shows that the presumed independence of the human who is meant to have the final responsibility is not given. The judgment is the product of a human-technology ensemble.
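To see how much weight that single feature carries, consider a deliberately minimal sketch (my reconstruction of the generic approach, not any deployed system; the feature, names and threshold are stand-ins): ‘normality’ is whatever the training footage contained, and suspicion is distance from it.

```python
from statistics import mean, stdev

Point = tuple[float, float]            # ground-plane position per video frame

def avg_speed(trajectory: list[Point]) -> float:
    """Mean speed along a path reconstructed from the video images."""
    return mean(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:]))

def fit_normal_band(normal_trajectories: list[list[Point]], k: float = 3.0):
    """'Normal' movement is induced from footage labelled as normal."""
    speeds = [avg_speed(t) for t in normal_trajectories]
    mu, sigma = mean(speeds), stdev(speeds)
    return mu - k * sigma, mu + k * sigma

def is_suspicious(trajectory: list[Point], band: tuple[float, float]) -> bool:
    """Flag whatever falls outside the band learned from 'normal' footage."""
    lo, hi = band
    return not lo <= avg_speed(trajectory) <= hi
```

Everything that matters – the choice of feature, the threshold k, the footage that counts as ‘normal’ – is settled before the operator ever sees an alarm; a gait outside the training statistics, say a wheelchair user or someone pausing to rest, is flagged by exactly the same ‘objective’ rule.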
The use of movement trajectories is premised on the idea that this
single feature suffices as an indicator for the events that an ideal unpre-
judiced operator would pick out as well. This ideal subject position is
enabled by the ensemble of humans and algorithms and cannot be
reduced to either side. The algorithm allows us to care for the relevant,
local details in an objective fashion, making up for human ‘weaknesses’ like
missing important aspects or imposing overgeneralizing prejudices. But
still the human is meant to judge – not a cold, automatic rationality. So
the human ‘in the loop’ is not simply the same human as the one ‘on the loop’ or ‘out of the loop’ (DG External Policies, 2012: 6), just with different tasks. To
the contrary, the ‘human in the loop’ is a particular subject position
enacted by the similarities and differences to algorithms. The human’s
opposite, machinic rationality, no longer seems to be a threat as in
Alphaville because it serves just one particular function – a function
where algorithms are enacted on a continuum with humans. The peculiar
interplay of likeness and difference thus mobilizes the humanist traits of
the first kind of boundary practice, where rationality in the sense of
autonomous, objective information processing is foregrounded, in con-
junction with the anti-humanism of the second kind, where the human
appears as embodied, situated, subjective, dependent. This creates a ten-
sion between two perspectives on the human, where the embodied, emo-
tional, located operators appear as lacking compared to a more liberal,
autonomous ideal. But this pertains only to one aspect, which is
structured by the continuity of humans with machines. Therefore, an
algorithm can compensate for this lack.
Algorithmic Creation: Associative and Parametric Design in
Architecture and Urbanism
Associative or parametric design in architecture is a design practice
enabled by advances in software and computational power.
Program suites allow us to model complex components of a building
and their relationships. Rather than drawing the final shape of the build-
ing or rooms, as in earlier computer-aided design, the architect just spe-
cifies a model of internal relationships and parameters that can influence
the model. The shape of the building is iteratively generated by running
the parameters through the model. This process is guided by ancillary
conditions (Rolvink et al., 2010). Such design processes can be used to
generate series of uniquely formed parts to build complex, irregular
shapes. Current modelling algorithms allow much more complex
models that do not just include the form of a building and its parts
but also higher level abstractions, like the function of certain rooms, or
movement of persons in the building. They also allow us to model exter-
nal factors. As Luciana Parisi writes referring to Michael Hensel and
Achim Menges: ‘parametric architecture needs to be conceived as a
system with a set of finite internal relationships and external forces
that inform it and to which it responds’ (Parisi, 2013: 104). Algorithms
automatically adapt the parameters of the model to these influences and
generate a design that optimally suits the external conditions. For exam-
ple, Peter Trummer created the outline for a settlement in the Arizona
desert based on the distribution of heat radiation. Highly parametrized
and thus malleable housing units are assembled, each with individual
parameters set according to its location. The model also takes into
account the changes of the heat radiation by the buildings themselves,
thus modelling the ‘collective behaviour’ (Trummer, 2009: 67) of the
individual elements. The idea of associative design is thus extended
from single buildings to entire settlements. Such models also try to
include more social parameters, and not only physical conditions like
heat. In a video produced during a workshop with Trummer, the distri-
bution of sunlight and privacy (in terms of lines of sight) are mentioned
as explicit parts of a model.5
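The workflow can be caricatured in a few lines (a schematic sketch of associative design in general, invented here and not Trummer’s actual model; every number and rule is a placeholder): the designer writes relationships and parameters, and the concrete form falls out of iterating the model.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    x: int                             # position along the site
    shade_depth: float = 0.0           # the generated, unit-specific parameter

def heat_at(x: int, units: list[Unit]) -> float:
    """External driver plus feedback: built units re-radiate onto neighbours."""
    ambient = 1.0 - 0.01 * x           # an assumed heat gradient across the site
    feedback = sum(0.05 for u in units
                   if abs(u.x - x) == 1 and u.shade_depth > 0.5)
    return ambient + feedback

def generate(site: range, iterations: int = 10) -> list[Unit]:
    units = [Unit(x) for x in site]
    for _ in range(iterations):        # relax toward a stable configuration
        for u in units:
            # The designer never draws this value; the rule produces it,
            # including the 'collective behaviour' of neighbouring units.
            u.shade_depth = min(1.0, heat_at(u.x, units))
    return units

settlement = generate(range(20))
print([round(u.shade_depth, 2) for u in settlement])
```

The sketch also shows where authorship migrates: the human decisions sit in heat_at, in the coupling constant and in the choice of parameters, while the loop merely ‘draws’ the settlement – precisely the reconfiguration of agency at issue in what follows.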
This possibility of associative design is seen as holding the potential to create a new form of architecture. It is decidedly distinguished from
the way of building that according to Sjoerd Soeters emerged in the
‘early twentieth century’: Architects felt like ‘gods’ that bulldoze the
site into a tabula rasa on which they can transform their ideas freely
into buildings, creating a ‘new order’ according to their will (Soeters,
2005: 69). Such an approach to architecture seems to prevail also
among the proponents of parametric architecture, who build impressive
monuments, often for dubious heads of states in rather authoritarian
countries. But others express the potential to displace this centrality of
the architect by algorithmic fine-grained attunement of the buildings
to the situation. For example, François Roche describes his idea for an
‘unpredictable, organic urbanism’ as a structure that ‘develops its own
adaptive behavior, based on growth scripts and open algorithms. It is
entirely reflexive, responding to human occupation and expression rather
than being managed or operated at human will’ (Roche, 2009: 42).
Similarly, Tom Verebes explicitly wants to ‘surpass the mere shaping
of a new style, and today’s fascination with complex, curvilinear form’.
Rather than adhering to these ‘deviants of Modernism’, he emphasizes
the potential to extend the ‘invisible informational control systems’ and
‘augmented cybernetic apparatus[es]’ that already manage ‘the quotidian
fluxes, flows and pulses of the city’ into the design (Verebes, 2009: 25).6
Rather than being the material framework of city life, algorithmic archi-
tecture is meant to become reactive to that life. Luciana Parisi discusses
projects and ideas that push this potential to its extremes, when archi-
tecture is meant to react to ‘real-time’ inputs based on continuous algo-
rithmic processes – rather than confining them to the design stage (Parisi,
2013: 104). Algorithms then do not only plan the building but also
manage and run it.
These discourses discuss associative design as a shift from the architect
as creator, that is, as autonomous liberal subject, towards an algorithm.
But this is a limited perspective. As in the case of smart CCTV, decisions
and agency are not simply moved but reconfigured. The architect
emerges from this reconfiguration as the person responsible for creating
the model and choosing the important parameters. Algorithmic systems
are better at carrying them out. This shift has a similar structure to the
algorithmic appropriation of a situation to match idealized human judg-
ment in smart CCTV.
Again, the boundary is enacted in a combination of continuity and
difference. The liberal, autonomous architect is one pole of a tension that
creates a lack that can be made up by algorithms: care for local details
rather than humanist imposition. The other pole, which is structured by
the possibilities of algorithmic design, is a socially responsible and
responsive architect who would care for all the details that algorithms make assessable – much like the ideal operator in the case of smart CCTV.
These responsible architects decide to yield important aspects of design
to algorithmic generation, thus distancing themselves vividly from the
‘deviants of Modernism’. Yet, that kind of responsiveness and responsi-
bility again is enabled by a human-technology ensemble. Its internal
boundary creates a continuity between the architect’s aims and the opti-
mization and generation features of the algorithm. This boundary mobil-
izes the anti-humanist strands of the human-machine opposition to posit
the algorithm against the patronizing architect: The algorithm cares
for the local details and can adapt complex models to it. While this
shatters the humanist presumptions of the first position (the godlike
architect), the position of the responsible modeler is re-established by
creating a continuum of the architect’s responsible choices of models
and parameters and their algorithmic realization. Here the architect,
still in a strong social position compared to security personnel, can
actively contribute to enacting this boundary work by emphasizing the
decision to yield choices to the algorithm. For example, Patrik
Schumacher boldly states that ‘without parametric life process modelling
architecture’s task can no longer be adequately addressed’ so that ‘we
will have to reject any architectural design process that does not take
advantage of the computational resources as outmoded and substandard’
(Schumacher, 2015a). But at the same time the continuity of humans and
algorithms establishes the architect in a position of liberal autonomy
regarding the choice of parameters and the models, the ‘social’ aspects,
which are optimally, and objectively put into practice by the algorithm in
continuity with human aims. Consequently, Schumacher (2015b) – and
not an algorithm – defines the ‘success of the framed life process’, i.e. the life within his architecture, as the ‘ultimate purpose’ of design, where
‘success’ of life can be measured in parameters like ‘encounter frequency,
interaction diversity, communicative depth’. But this would be impos-
sible without an algorithm that can measure or simulate and act upon
these parameters. To summarize, again the peculiar combination of con-
tinuity and difference structures the boundary, enacting a subject pos-
ition irreducible to either human or algorithm.
The Human Is Dead – Long Live the Algorithm!
Smart CCTV and parametric architecture seem to be an answer to some
of the practices ‘that have given liberalism a bad name’ discussed at the
end of the introduction. In particular, the universalist presumption to
speak for everyone is addressed in the emphasis that the algorithms pro-
vide a detailed, objective analysis of the current situation and thus
adapted judgments compared to human overgeneralizations and preju-
dices or liberal autonomous, god-like creation. This is the anti-liberal
strand of the continuity of humans and computation: the human misun-
derstands his7 dependence and situatedness, which cybernetic systems
and now algorithms aptly grasp.
At the same time, via the influence of the second boundary practice,
the algorithmic system gets into a position of objectivity and rationality
quite akin to liberal subjectivity, while the human is the embodied, situ-
ated, emotional counterpart to this position. I have shown that these two
elements contribute to a particular boundary: Since we deal with algo-
rithms, and not the complex cybernetic systems or artificial intelligences,
which Hayles discusses, the idea of creating an artificial humanlike being
vanishes. Instead the continuity of humans and algorithmic systems is
maintained just for one highly specialized task. In this regard, algorith-
mic systems perform tasks that humans do or would want to do, but
better. Yet, this being better is enabled by an algorithmic system that is
completely different in most other regards and thus far removed from a
better version of the human. To the contrary, it is a way to attend to the
local, particular, situation for one specific task. The danger of a machinic
rationality taking over thus seems to be averted.
This amounts to a particular transformation of the universalizing traits
of liberalism that Hayles discusses. The cases I have discussed enact a
socially and ethically competent human being in charge, not a universaliz-
ing, overgeneralizing rationality. So the embodied, situated human beings
are valued. But this embodiment and situatedness appear as lacking with respect to one particular task: in my examples, detecting suspicious behav-
ior or incorporating external factors in the design. Here human
weaknesses or traits seem to prevent the ethically or socially responsible
judgment humans are meant to make. This lack, however, is transformed
by the continuity of humans and algorithms. Irresponsible, prejudiced
humans have been a topic long before algorithms. And the critique of
liberal, autonomous, self-centered authorship is a common motif in phil-
osophy and political theory. But now the problems of these positions are
transformed into humans being less able to do what algorithms do. And
thus algorithms can make up for this lack. The trick is that only in this regard are the anti-liberal strands of the boundary mobilized. Only with respect to this one task does the human appear as lacking. In this sense, the algorithm
does not threaten the human but completes it.
Consequently, while both the human and the algorithm appear in
positions of tension with the universalizing presumptions of the liberal,
autonomous subject, the human-technology ensemble strengthens rather
than weakens this subject position. This structure becomes clearer when
looking at authors like Anderson (2008) or Pentland (2012). Their work
seems to be a response to the displacement of liberal autonomy by infor-
mation systems, where humans are just nodes or ‘membranes’ in a com-
plex flow of information, a displacement that Wiener already dealt with. Supported by
results from neurology and biology, the authors advocate measuring
these complex flows as good as possible and then letting algorithms
decide what is best for us. And since our only apparently autonomous
decisions cannot be trusted, these algorithmic suggestions are imple-
mented via practices like ‘gamification’ or ‘nudges’ (Sunstein, 2014). So
algorithms and (more or less mildly) behaviorist practices are meant to
help humans in realizing their aims as beings that are ‘just nodes in a
complex flow of information’. But these are aims they paradoxically seem
to have decided upon with the full autonomy of the liberal subject. This
paradox appears since the libertarian context of these authors
dissimulates the tension that enacts the boundary at work, emphasizing
only the attention to local and ‘real-time’ situations. It ignores that
the implicit postulation of a human lack to be ameliorated by information
technology is itself the product of this particular enactment of the
boundary.
This boundary structure can also be found in other applications. Self-
tracking is a good example: it is often conceived, in the vein of the
aforementioned authors, as a means of realizing one’s potential. This idea of
potential is the positive reformulation of the implicit lack brought about
by human-algorithmic boundaries (where there is lack, one can get
better). This strange alliance of anti-liberal tropes and a highly libertar-
ian atmosphere is a good example of the strengthening of liberal subject
positions by human-technology ensembles. The posthumanist critiques
of liberalism often focus on decentering the rational, autonomous subject
by emphasizing its dependence and exposedness to non-human actants –
in this case information technology. Yet, my analysis shows that this
emphasis alone does not suffice to challenge the liberal subject position.
In fact, this challenge can easily be turned around since technology is not
only enacted as the opposite of this humanist subject, but also as its
continuation – in particular ‘intelligent’ information technologies.
When this continuity comes into play concerning one small task – and
not a transhumanist project – the dependence, which would decenter the
human, turns into a lack that the algorithm can compensate for. The non-
human agents then contribute to a human-technology ensemble that in
toto strengthens rather than weakens the liberal, autonomous subject
position. And faithful to its tradition, this position ignores that it is
not a potential for all humans, but only for some: those that can
afford the technology, those that are deemed worthy by the economic
efficiency that structures both cases I have discussed, those that can
present their use of new technologies as advancement – like the respon-
sible architects do.
Discussions of what algorithms can do and whether they challenge
human freedoms might distract from the fact that, within these discussions,
a tension between posthuman or anti-liberal positions and their
(often implicit) liberal, humanist counterparts is at issue. Donna
Haraway introduces her famous celebration of hybrids, the Cyborg
Manifesto, as an ‘argument for pleasure in the confusion of boundaries
and for responsibility in their construction’ (1991: 150). Maybe the first
of these two aspects has received too much of the focus in attempts to
decenter the liberal subject. Although authors like Latour (1993) have
distanced themselves from projects of disclosing ideologies or false
consciousness, many advocates of networks, hybrids and assemblages may have
relied too much on merely showing how the liberal subject is actually a
product thereof, rather than critically aiming at different productions.
In this sense, the advocates of the algorithmic technologies I have
discussed may indeed have learned a lesson. The human is not the
autonomous, rational subject that liberal and humanist discourses talk
about. But with the help of algorithms, as human-technology ensembles,
the liberal subject that the human alone has never been finally seems
possible. With the detailed analyses I have presented here, we can see how
human-algorithmic ensembles strengthen liberal, autonomous subject
positions, rather than decentering them.
This entails that the focus on algorithms as actors, as opposites or
competitors of the human facilitates enacting this subject position.
Many of the discussions of what algorithms can or cannot do thus implicitly
legitimize the underlying liberal subject positions – including the ‘practices
that have given liberalism a bad name’. This also pertains to recent
warnings about artificial intelligence and the accompanying calls for careful
research (Cellan-Jones, 2014). Again, the specter of some hyperintelligent,
superhuman being is foregrounded as the threat posed by artificial intelligence – rather than seeing
how algorithmically driven systems already shift subject positions and
power relations. Bringing the material, discursive, social, mathematical,
etc. preconditions into the picture is an important step but does not
suffice. The important issue is not only descriptive accuracy about what
algorithms are or can do, but which subject positions – and that is, power
positions – are created and legitimized. A critical inquiry concerning
algorithms thus should include inquiry into potential new boundaries and
subject positions, taking ‘pleasure in the confusion of boundaries’ but
maybe even more importantly, ‘responsibility in their construction’.
Notes
1. Transcendence. Dir. Wally Pfister; Alcon Entertainment, DMG Entertainment
and Straight Up Films (2014).
2. Her. Dir. Spike Jonze, Annapurna Pictures (2013).
3. Alphaville, une étrange aventure de Lemmy Caution. Dir. Jean-Luc Godard,
Athos Films and Chaumiane (1965).
4. Ex machina. Dir. Alex Garland, DNA Films and Film4 (2015).
5. See: https://www.youtube.com/watch?v=EhjUli4cYEg (accessed 29
September 2015).
6. That a lot of these control systems are parts of huge apparatuses of surveil-
lance and governance (Lyon, 2005) does not seem to worry these authors.
Indeed, one could easily conceive of this outlook as an update of the alliance
between architecture, governance and policing for societies of digital
control.
7. Here the male gender is used intentionally.
References
Amoore L (2011) Data derivatives: On the emergence of a security risk calculus
for our times. Theory, Culture & Society 28(6): 24–43.
Anderson C (2008) The end of theory: The data deluge makes the scientific
method obsolete. Wired 16(7).
Anderson ML (2003) Embodied cognition: A field guide. Artificial Intelligence
149(1): 91–130.
Barad K (2007) Meeting the Universe Halfway. Durham: Duke University Press.
Berry DM (2011) The Philosophy of Software: Code and Mediation in the Digital
Age. Basingstoke: Palgrave Macmillan.
Blanchette J (2011) A material history of bits. Journal of the American Society
for Information Science and Technology 62(6): 1042–1057.
Blanchette J (2012) Computing as if infrastructure mattered. Communications of
the ACM 55(10): 32–34.
Blass A and Gurevich Y (2003) Algorithms: A quest for absolute definitions.
Bulletin of the EATCS 81: 195–225.
Braidotti R (2013) The Posthuman. London: Polity.
Brey P (2005) The epistemology and ontology of human-computer interaction.
Minds and Machines 15(3–4): 383–398.
Cellan-Jones R (2014) Stephen Hawking warns artificial intelligence could end
mankind. Available at: http://www.bbc.com/news/technology-30290540 (con-
sulted April 2016).
Chun WHK (2008) On ‘sourcery,’ or code as fetish. Configurations 16(3):
299–324.
Citron DK and Pasquale FA (2014) The scored society: Due process for auto-
mated predictions. Washington Law Review 89: 1–33.
Daston L (1988) Classical Probability in the Enlightenment. Princeton: Princeton
University Press.
Dee HM and Velastin SA (2008) How close are we to solving the problem of
automated visual surveillance? Machine Vision and Applications 19(5–6):
329–343.
Diakopoulos N (2013) Algorithmic accountability reporting: On the investiga-
tion of black boxes. A Tow/Knight Brief. New York: Columbia Journalism
School. Available at: http://www.nickdiakopoulos.com/wp-content/uploads/
2011/07/Algorithmic-Accountability-Reporting_final.pdf (consulted April
2016).
Directorate-General for External Policies of the European Union (2012) Human
rights implications of the usage of drones and unmanned robots in warfare.
Available at: http://www.europarl.europa.eu/committees/en/studies.html
(consulted April 2016).
Drucker J (2013) Performative materiality and theoretical approaches to inter-
face. Digital Humanities Quarterly 7(1).
Fuentes L and Velastin S (2004) Tracking-based event detection for CCTV
systems. Pattern Analysis and Applications 7(4): 356–364.
Gillespie T (2011) Can an algorithm be wrong? Twitter trends, the specter of
censorship, and our faith in the algorithms around us. Available at: http://
culturedigitally.org/2011/10/can-an-algorithm-be-wrong/ (consulted April
2016).
Gillespie T (2014) The relevance of algorithms. In: Gillespie T, Boczkowski P
and Foot K (eds) Media Technologies: Essays on Communication, Materiality,
and Society. Cambridge, MA: MIT Press, pp. 167–194.
Haraway D (1991) A Cyborg Manifesto: Science, technology, and socialist-fem-
inism in the late twentieth century. In: Haraway D (ed.) Simians, Cyborgs and
Women: The Reinvention of Nature. New York: Routledge, pp. 149–182.
Haraway D (1997) Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse™:
Feminism and Technoscience. New York: Routledge.
Hayles NK (1999) How We Became Posthuman. Chicago: University of Chicago
Press.
Hayles NK (2010) ‘How We Became Posthuman’: Ten years on (an interview
with N. Katherine Hayles). Paragraph 33(3): 318–330.
Hu W, Tan T, Wang L and Maybank S (2004) A survey on visual surveillance of
object motion and behaviors. IEEE Transactions on Systems, Man, and
Cybernetics, Part C: Applications and Reviews 34(3): 334–352.
Introna LD (2016) Algorithms, governance, and governmentality: On governing
academic writing. Science, Technology & Human Values 41(1): 17–49.
Kitchin R (2014) Big Data, new epistemologies and paradigm shifts. Big Data &
Society 1(1): 1–12.
Kitchin R and Dodge M (2011) Code/Space: Software and Everyday Life.
Cambridge, MA: MIT Press.
Kleene SC (1952) Introduction to Metamathematics. Amsterdam: North
Holland.
Knuth DE (1973) The Art of Computer Programming, Vol. 1. Reading: Addison-
Wesley.
Latour B (1993) We Have Never Been Modern. Cambridge, MA: Harvard
University Press.
Lyon D (2005) Surveillance as Social Sorting. London: Routledge.
Mackenzie A (2015) The production of prediction: What does machine learning
want? European Journal of Cultural Studies 18(4–5): 429–445.
Makris D and Ellis T (2002) Path detection in video surveillance. Image and
Vision Computing 20(12): 895–903.
Matzner T (2016) The model gap: Cognitive systems in security applications and
their ethical implications. AI & Society 31(1): 95–102.
Neyland D (2015) On organizing algorithms. Theory, Culture & Society 32(1):
119–132.
Nguyen NT, Phung DQ, Venkatesh S and Bui H (2005) Learning and detecting
activities from movement trajectories using the hierarchical hidden Markov
model. In: IEEE Computer Society Conference on Computer Vision and
Pattern Recognition, 2005. Washington, DC: IEEE, pp. 955–960.
Parisi L (2013) Contagious Architecture: Computation, Aesthetics, and Space.
Cambridge, MA: MIT Press.
Pentland A (2012) Reinventing society in the wake of Big Data. EDGE.
Available at: http://www.edge.org/conversation/reinventing-society-in-the-
wake-of-big-data (consulted September 2014).
Roche F (2009) I’ve heard about . . . (a flat, fat, growing urban experiment):
Extract of neighbourhood protocols. Architectural Design 79(4): 40–45.
Rolvink A, Van de Straat R and Coenders J (2010) Parametric structural design
and beyond. International Journal of Architectural Computing 8(3): 319–336.
Schumacher P (2015a) Fluid totality – The dream of inhabiting a nature-like
built environment. Available at: http://www.patrikschumacher.com/Texts/
Fluid_Totality.html (consulted April 2016).
Schumacher P (2015b) Parametricism with social parameters. Available at:
http://www.patrikschumacher.com/Texts/Parametricism%20with%
20Social%20Parameters.html (consulted April 2016).
Simons DJ and Chabris CF (1999) Gorillas in our midst: Sustained inattentional
blindness for dynamic events. Perception 28(9): 1059–1074.
Soeters S (2005) On being a humble architect. In: Ray N (ed.) Architecture and
Its Ethical Dilemmas. New York: Taylor & Francis, pp. 69–74.
Sunstein CR (2014) Why Nudge?: The Politics of Libertarian Paternalism. New
Haven: Yale University Press.
Totaro P and Ninno D (2014) The concept of algorithm as an interpretative key
of modern rationality. Theory, Culture & Society 31(4): 29–49.
Trummer P (2009) Morphogenetic urbanism. Architectural Design 79(4): 64–67.
Turing AM (1937) On computable numbers, with an application to the
Entscheidungsproblem. Proceedings of the London Mathematical Society
2(42): 230–265.
Van der Ploeg I (2005) Biometrics and the body as information: Normative
issues of the socio-technical coding of the body. In: Lyon D (ed.)
Surveillance as Social Sorting. London: Routledge, pp. 57–73.
Velastin SA (2009) CCTV video analytics: Recent advances and limitations. In:
Proceedings of the 1st International Visual Informatics Conference on Visual
Informatics: Bridging Research and Practice. Berlin: Springer, pp. 22–34.
Verebes T (2009) Experiments in associative urbanism. Architectural Design
79(4): 24–33.
Williams KS and Johnstone C (2000) The politics of the selective gaze: Closed
circuit television and the policing of public space. Crime, Law and Social
Change 34(2): 183–210.
Wolfram S (1984) Cellular automata as models of complexity. Nature 311(5985):
419–424.
Tobias Matzner is a postdoctoral research associate at the International
Centre for Ethics in the Sciences and Humanities in Tübingen, Germany.
He holds a PhD in philosophy and an advanced degree in computer
science, both from the Karlsruhe Institute of Technology. His research
focuses on the entanglements of (information) technology and politics.
This article is part of the Theory, Culture & Society special issue on
‘Thinking with Algorithms: Cognition and Computation in the Work of
N. Katherine Hayles’, edited by Louise Amoore.
Special Issue: Thinking with Algorithms: Cognition and Computation in the Work of N. Katherine Hayles
Theory, Culture & Society
2019, Vol. 36(2) 145–155
Interview with N. Katherine Hayles
© The Author(s) 2019
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/0263276419829539
journals.sagepub.com/home/tcs
Louise Amoore and Volha Piotukh
University of Durham
Abstract
Following the publication of her 2017 book, Unthought: The Power of the Cognitive
Nonconscious, N. Katherine Hayles discusses the themes of the book with Louise
Amoore and Volha Piotukh. From the development of a theory of nonconscious
cognition, to the capacities of novels to enact the connections between disparate
phenomena, Hayles reflects on what is at stake ethically in new human-technical
assemblages.
Keywords
algorithms, cognition, ethics, N. Katherine Hayles, technology
LA & VP: A major contribution of your 2017 book Unthought is the
concept of the ‘cognitive nonconscious’. Where cognition for you (after
Shannon) is ‘a process that interprets information within contexts that
connect it with meaning’ (p. 22), the faculty of nonconscious cognition is
distributed across biological and technical cognizers. Could you elabor-
ate for us some of the implications of paying careful attention to the
cognitive nonconscious?
NKH: Allow me to offer an important correction. My definition of cog-
nition is not inspired by Shannon’s information theory; indeed, in some
ways I offer it as a correction to Shannon, as it explicitly includes refer-
ences to ‘interpretation’, ‘context’ and ‘meaning’, all of which are absent
from Shannon’s probabilistic theory, which is based only on the relative
frequencies of the message elements (Shannon and Weaver, 1971 [1948]).
Rather, my definition of cognition is indebted to a remark by Edward
Fredkin, a theoretical physicist who has suggested that in some sense,
reality itself may be computational in nature (Fredkin, 2018). Without
necessarily accepting this implication, I found his idea of connecting
information to meaning through context to be extremely insightful.
It was a key that enabled me to rethink cognition in terms that broa-
dened it well beyond ‘thinking’, with its long tradition of anthropocentric
interpretation as something only humans can do. A second key move is
to break the link between cognition and consciousness; cognition as I
define it is a much more capacious activity that far exceeds the bounds of
human conscious thought. For humans and other conscious organisms,
cognition extends beyond the brain into the body and environment, and
for nonconscious organisms, it extends throughout the entire biological
realm of all lifeforms, including plants. Once cognition is seen not to
require consciousness, it also extends to computational media in all their
forms, including networked and programmable machines. Obviously,
this view of cognition has a low threshold for something to count as
cognitive, but it can also scale upward in complexity to the most sophis-
ticated human and computational cognitive achievements. Cognition in
this view exists as a spectrum rather than as a single point; it also is
defined as a process rather than an entity, so it is inherently dynamic
and transformative.
LA & VP: In your book you suggest that the human/non-human binary,
characterizing so much of the contemporary debate in humanities and
the social sciences, serves to reinstall the privileging of a liberal humanist
subject. You offer in its place a rather fascinating distinction between
what you call ‘cognizers’ and ‘noncognizers’ so that cognizers extend
across humans and other biological life forms as well as technical sys-
tems, while noncognizers embrace material processes and inanimate
objects. What is at stake in this distinction between cognizers and non-
cognizers? How does it transform what might count as agency?
NKH: The crucial features distinguishing cognizers from noncognizers
are interpretation and choice (or selection). The two are entwined,
because without choice, there can be no interpretation, which requires
at least two available options. Agency, as I understand the term, denotes
the ability to act. Without question, material forces have agency, from
the water that wears away a rock surface to the landslide that rips off a
mountainside. What material forces cannot do, however, is interpret and
choose. Their actions rather are the resultant of all the forces acting upon
them and can be understood in these terms (with criticality phenomena
the situation is more complex, because such systems are sensitive to very
small perturbations; nevertheless, their actions can still be adequately
understood through simulations and other numerical methods).
All lifeforms, by contrast, possess the signal characteristics associated
with cognition, namely flexibility, adaptability, and evolvability. Their
actions can never be entirely constrained within a rigid stimulus-response
model that denies them the capacity for creative interpretation and
choice. The issue for me is thus not agency, which exists in noncognizers
and cognizers alike, but rather the distinction between agents (material
forces possessing the ability to act) and actors (cognizers who can inter-
pret and make choices).
There is a longer argument that I cannot detail here about computa-
tional media and the role of choice or selection in their operations
(Hayles, 2019). Suffice it to say that computational media are built in
layers that proceed from the minimally cognitive (the basic selection
between five volts or none, one or zero) up to increasingly sophisticated
decision trees (subroutines nested inside routines, routines inside
libraries, etc.) that can deal with highly ambiguous or conflicting infor-
mation and arrive at interpretations about it. In these cases, it is neces-
sary to understand ‘meaning’ in terms similar to pragmatists such as
John Dewey, who argued that meaning derives from the consequences
of actions (Dewey, 2000: 128). All computational media employ an expli-
cit or implicit ‘if/else’ logical structure that connects choices or selections
with consequences, and hence (in the pragmatist sense) with meaning.
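To make this layering concrete, here is a deliberately minimal sketch in Python. It is not drawn from Unthought, and every name and threshold in it is invented; it only illustrates how a basic binary selection can feed nested routines whose if/else branches tie choices to consequences.

```python
# Toy sketch only (names and thresholds are invented, not from Unthought):
# a minimal binary selection feeds nested routines whose if/else branches
# connect choices with consequences.

def minimal_selection(voltage: float) -> int:
    """The minimally cognitive layer: five volts or none, one or zero."""
    return 1 if voltage > 2.5 else 0

def classify_signal(bits: list) -> str:
    """A subroutine one layer up: interprets a pattern of selections."""
    if sum(bits) > len(bits) // 2:
        return "high-activity"
    return "low-activity"

def route_message(bits: list) -> str:
    """A routine calling the subroutine: each branch has a consequence,
    and hence (in the pragmatist sense) a meaning."""
    if classify_signal(bits) == "high-activity":
        return "escalate"      # one consequence of the selection
    return "log-and-wait"      # another consequence

raw = [minimal_selection(v) for v in (0.3, 4.9, 5.0, 0.1, 4.8)]
print(route_message(raw))      # -> escalate
```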
LA & VP: The cognitive assemblage you map for the reader in Unthought
appears as something rather different from the Deleuze and Guattari
inspired assemblages of connections, desires, affects and resonances.
What does a cognitive assemblage approach do? How might a cognitive
assemblage shift the locus of responsibility for decision so that, for exam-
ple, decision-making capacities could extend from neural network algo-
rithms to RFID chips, or from city bankers and regulators to the
probability weightings of a high frequency trading algorithm?
NKH: My use of ‘assemblage’ shares some common ground with Deleuze
and Guattari (Deleuze and Guattari, 1987; DeLanda, 2016), and also
with Bruno Latour’s actor network theory (Latour, 2007). It also has
some distinctive differences in its emphasis on cognition, information,
and interpretation. Whereas Deleuze and Guattari write against the sub-
ject, the sign, and the organism (and hence refuse the presumed bound-
aries of the same), my ‘cognitive assemblages’ are entirely consistent with
the idea of pre-existing entities such as humans and computers. Rather,
the emphasis falls on the ways in which these entities symbiotically inter-
penetrate each other’s actions, so that cognition and agency are under-
stood as always already distributed throughout the assemblage.
Cognitive assemblages also differ from actor network theory in that I
make a strong distinction between cognizers and noncognizers, a differ-
ence that Latour seeks to obliterate in placing material forces and cog-
nitive actors on the same plane.
Given my framework, it is clear that both humans and computational
media have agency and thus are ethical actors, in the sense of being able
to perform actions that have ethical consequences. There is nevertheless a
profound difference in the ethical responsibility of human and nonhuman
actors, because humans are the ones designing, implementing, and over-
seeing complex cognitive systems such as high-frequency trading algo-
rithms, neural network architectures, and (at a minimally cognitive level)
RFID chips. In most ethical theories of which I am aware, one must have
free will in order to be an ethical actor. This framework is simply inad-
equate to deal with our present situation, because ‘free will’ is deeply
associated with consciousness and hence does not apply to nonconscious
cognizers such as nematode worms, plants or computational media. We
desperately need ethical theories that expand upon the idea that respon-
sibility may be differentially assigned within a cognitive assemblage on
the basis of who (or what) has control over its design parameters, socio-
economic implementations, and likely consequences. ‘Control’ in this
sense should not be understood as connoting ‘free will’ (an ill-defined
concept in any case) but rather the places within a cognitive assemblage
where effective interventions can be made by ethically responsible actors.
I have no special expertise as an ethicist, but it is clear to me that this is a
lacuna that needs to be addressed and reconceptualized within ethical
theories.
LA & VP: What does it mean to be human in the context of a cognitive
assemblage? In the book you detail the example of Brandon Bryant, the
drone sensor pilot for the US Air Force who suffered from PTSD. What
are the relations between human emotions and cognitive assemblages
such as the UAV? To what extent is the design of a socio-technical
system such as the drone also fundamentally a redesigning of the mean-
ing of being human?
NKH: To be human in a cognitive assemblage means to participate in the
deep symbiotic relation between biological and technical cognizers. This
may be done with or without conscious awareness that such is the case;
for example, most people in developed countries do not think much
about their participation in the electric grid, which is completely depend-
ent upon computational controllers, connectors and transmitters, until
something goes wrong and a blackout disrupts our normal routines.
Then it becomes apparent how much of contemporary life is utterly
dependent on our computational symbionts. In the case of a drone
pilot, the system is so entirely interpenetrated with technical cognition
that it would be almost impossible not to be consciously aware of this
involvement.
There are some features of biological cognition not generally present
in technical media, for example, human emotion. As I try to make clear
in Unthought, I consider emotion to be very much a part of cognition.
Emotion, and affect more generally, taps into some very potent meaning-
events, since it has evolved as a means to cope with changing and unpre-
dictable environments in ways that promote an organism’s survival and
reproduction. Lacking these evolutionary developments, technical media
rely instead on design and purpose; these, in turn, are constructed by
humans. Insofar as humans bear ethical responsibility for the systems
they design and in which they participate, their affective responses do and
should have bearing on system design and implementation, which in turn
implies an ethical responsibility to understand the full implications of
how the cognitive assemblage as a whole works. This is especially import-
ant for humanities disciplines, which for too long and too often have
regarded intimate knowledge of technological systems as unnecessary or
beyond their concern, even (or especially) when they want to criticize
them. Effective interventions (as opposed to preaching) require a reason-
able amount of knowledge and dialogue with those designing and build-
ing complex technical systems.
LA & VP: A rather extensive theme in contemporary social and legal
debates on ‘autonomous’ systems technologies and deep learning algo-
rithms is that one should advocate for ‘more transparent’, less ‘biased’
models where there is a ‘human in the loop’ as the guarantor of ethics.
The persuasive arguments you make in Unthought seem to us to run
entirely counter to the idea that a cognitive assemblage could or
should be transparent, or indeed that a human could oversee the work-
ings of the assemblage. You write, for example, that, in a complex assem-
blage of ‘human and technical cognizers’, ‘the choice is not between
human decision versus technical implementation’ (p. 136). What do
you think of the many contemporary calls for algorithmic accountability,
ethical design of systems, or human oversight? Do you envisage a differ-
ent kind of ethico-political response better attuned to the extent of col-
laboration among human and technical cognizers?
NKH: I find this question puzzling, since at many places in Unthought I
argue precisely for the necessity of a ‘human in the loop’, for example
when I consider autonomous weapons or the functioning of high-fre-
quency trading algorithms. No doubt biases of many kinds do get incor-
porated into technical systems; one thinks, for example, of criteria for a
mortgage or a loan, which often have sources of bias much subtler than
simple redlining. It may be a red herring, however, to focus too much on
‘transparency’; most complex systems require detailed technical know-
ledge to understand completely, and it is not clear to me what ‘transpar-
ency’ would mean in this regard. Having a human in the loop, by the
way, is no guarantee against bias, since most humans have conscious or
unconscious biases; indeed, in some ways the demands for more
‘unbiased’ and more ‘transparent’ systems have vectors that point in
opposite directions.
When I suggested that human choice and technical implementation is
not a binary opposition, I meant to imply that human choices are very
much involved in technical implementations at many levels; far from
being a binary, such situations should more accurately be considered
as interpenetrations. That said, I am very much in favour of human
oversight and ethical interventions into the design, implementation,
and functioning of technical systems, as I exemplify through my discus-
sion of batch auctions and slow trading in the concluding section of the
chapter on high-frequency trading algorithms. Such interventions, to be
effective, may have a high entry cost in terms of learning how complex
technical systems actually work. AIDS activism provides a compelling
model for how to make effective intervention. To change medical prac-
tices, activists needed to learn the concepts and vocabulary at issue, and
the fact that they were willing to do so is a large reason why they were
able to convince medical professionals to make constructive changes in
how medical tests and experiments were configured.
In my view, it may be naïve to expect complex systems to be ‘trans-
parent’, since by definition their architectures have many interacting
components with massive feedback loops connecting diverse parts. A
more achievable goal is to require transparency for those small parts
of a complex system that directly interface with consumers. For example,
transparency in one’s electric bill is a good thing and can easily be
achieved, whereas ‘transparency’ in how the entire electrical grid operates
is much more difficult and would require considerable technical know-
ledge of the system’s many networks, subsystems and connecting parts,
as well as the dynamic interactions between them.
I take this to be Latour’s point when he takes his fellow sociologists to
task for evoking ‘the social’ without requiring a detailed technical know-
ledge of how a given complex system operates (Latour, 2007: 85). Ethical
judgments, he argues, require not easy mystifications but the kind of
technical analysis that he typically undertakes. Accused by his fellow
sociologists of lacking an emphasis on politics, he defends his practice
as more deeply political than those who eschew close technical analysis
and simply call for reform instead.
I find his argument compelling that there are deep connections
between oversight, ethical responsibility, and technical knowledge.
Such an argument also implies that, given the limits of a finite lifetime,
it is important to pick one’s battles and spend one’s resources
accordingly.
LA & VP: As with some of your previous works, such as My Mother Was
a Computer (2005) and How We Think (2012), in Unthought you offer
close readings of a novel (Colson Whitehead’s The Intuitionist, 1999) as a
way of engaging anew with computational theory on the incomputable.
Reflecting on the character of Lila Mae and her observations on the
catastrophic accident as that which cannot be predicted, you propose
that ‘the power of the black box does not lie in concealing a knowable
answer, but rather in its symbolization of the limits of knowledge, both
Empirical and Intuitionist’ (p. 189). Why is this symbolization of the
limits of what can be known empirically or intuitively so important to
your project? Does this threshold of the knowable offer a potential crit-
ical response to the damaging effects of algorithms used in financial
trading or in criminal justice?
NKH: The power of the limits to knowability discussed by Luciana
Parisi (2013, 2019) and also the limits of computability, a related
topic explored by M. Beatrice Fazi in Contingent Computation (2018),
lies in their absolute nature. For example, Gödel’s Theorem proves
that, for a formal system strong enough to do arithmetic, at least
one statement within the system cannot be proven to be true or false.
Wherever such limits appear, they tend to be related to a system’s
recursivity, its ability to turn back on itself and re-enter the system
from a different perspective. Niklas Luhmann used this insight as a
central characteristic of his systems theory (Luhmann, 1996); I find it
interesting that recursivity (in the form of massive re-entry) has also
been theorized by Edelman and Tononi as an essential condition for the
emergence of consciousness (Edelman and Tononi, 2000). These find-
ings suggest to me that complexity requires recursive interactions, and
at the same time recursive architectures limit how much we can know
about the system, a result relevant to the ‘hidden layer’ in neural net
and deep learning algorithms.
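For reference, the claim about arithmetic can be stated in a standard textbook form (the formulation below is that standard one, not Hayles's wording):

```latex
% Gödel's first incompleteness theorem, textbook formulation:
% for any consistent, effectively axiomatized theory $T$ that
% interprets elementary arithmetic, there is a sentence $G_T$ with
\exists\, G_T : \qquad T \nvdash G_T
\quad\text{and}\quad
T \nvdash \lnot G_T .
```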
I think the central issue here is the difference between linear causality
and circular or recursive causality. For example, there is no problem in
analysing the dynamics of a chemical system in which reagents A and B
interact to form C and D. But when C and D also interact to form A and
B, the system becomes recursive, and analysis becomes much more dif-
ficult because all the parts are interacting simultaneously in ways that
affect how the parts interact. Resisting formal closures, recursive systems
are associated with limits on how precisely the system’s workings can be
understood. Without downplaying the significance of these results in a
theoretical sense, I question how useful they are in systems that are not
formal mathematical structures but rather complex social structures
involving many different kinds of interactions, including human emo-
tions and biases as well as computational media, whose design param-
eters may not always be fully known. One would have to know much
more about how the system in question works to draw a connection
between theoretical limits to knowledge and the system’s recursive
complexity.
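The contrast can be made concrete with a small numerical sketch in Python of exactly this reversible reaction (the rate constants and step size below are arbitrary illustrative choices, not values from the interview):

```python
# Hedged sketch of the recursive reaction described above:
# A + B -> C + D (forward) together with C + D -> A + B (backward).
# Rate constants and step size are arbitrary illustrative choices.

k_f, k_b = 0.9, 0.4            # forward and backward rate constants
a, b, c, d = 1.0, 1.0, 0.0, 0.0
dt = 0.01

for _ in range(5000):
    forward = k_f * a * b      # A and B form C and D ...
    backward = k_b * c * d     # ... while C and D re-form A and B,
                               # so every concentration depends on all others
    a += dt * (backward - forward)
    b += dt * (backward - forward)
    c += dt * (forward - backward)
    d += dt * (forward - backward)

# The system settles into a joint fixed point that no single linear
# cause-effect chain describes; only the coupled dynamics do.
print(round(a, 2), round(c, 2))   # roughly 0.4 and 0.6
```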
My point about Colson Whitehead’s novel is exactly this: in a system
as complex and multifaceted as entrenched racism, only something as
potent as an absolute theoretical limit could be powerful enough to
demolish it completely and send the city along a different trajectory.
Of course, as a novelist, Whitehead only needs to gesture toward this
possibility rather than analyse it in a systematic way. It remains in his
novel a tantalizingly utopian gesture toward an uncertain future, a life-
line thrown out of desperation and hope in equal measure, betting that a
small measure of grace is left open by the system’s inability to achieve
complete closure.
LA & VP: In Unthought you are interested, almost in a methodological
sense, in whether novels enact dynamics that are not already present in
mathematics or computer science. Do you think that the novel, perhaps
paradoxically, is becoming more and not less significant as medium in a
time when deep reading and concentrated attention seems to be under-
mined by algorithmic hyper attention? Do you see the contemporary
novel as up to the task of opening critically onto the errors and contin-
gencies that have, as Luciana Parisi argues, become the very terrain of
the machine learning algorithm?
NKH: Every human society constructs narratives, and it seems fair to
conclude that narratives are hardwired into the ways in which humans
understand the world and their places in it. Of course, many genres rely
upon narratives, not only novels: computer games; qualitative histories;
biographies and autobiographies; films; children’s stories; Biblical par-
ables; and so on. The issue for me is what novels can do that other
narrative (and non-narrative) forms cannot. It is surely no accident
that context, interpretation and meaning feature centrally in my provi-
sional list of what novels can do; these features imply that novels are
devices that employ cognitive devices to stimulate human cognitions and
to render them more complex.
It may well be the case, as you suggest, that digital media are leading
us as a population toward hyper attention and away from deep attention
(Hayles, 2011), a hypothesis that may cause us to wonder if novels can
compete effectively for audience share in our attention-challenged era.
Before we write the novel off, however, we should acknowledge that it is
the most protean of forms, having adapted through its centuries-old
traditions to a wide variety of different cultural and reading practices.
Already there are Twitter fictions and fictions for small screens such as
cell phones (whether these can be considered ‘novels’ is another issue).
For me it is not a question of whether the novel is ‘up to the task’ of
‘opening onto the errors and contingencies’ of algorithms, for I can think
of at least a dozen contemporary works that do precisely this (an out-
standing example is Pynchon’s Gravity’s Rainbow). The question should
perhaps rather ask whether readers are up to the task of perusing and
understanding such complex and difficult novels. I certainly hope so, and
as a teacher of such novels, I am doing my small part to make sure this
will be so.
LA & VP: In the final chapter of Unthought you reflect back on the
cybernetics that animated your 1999 book How We Became
Posthuman. You suggest that the cybernetic paradigm was both pro-
phetic and misguided: prophetic in its modes of communication between
humans, non-human life forms, and machines, and yet misguided in its
belief that feedback mechanisms would afford a means of control. Can
you explain why it is that you consider the idea of control to be ‘increas-
ingly obsolete, if not outright dangerous’ (p. 202)? Given this obsoles-
cence, and given your argument for ‘inflection points’ in place of control,
how do you envisage the impulses or political will to send the cognitive
assemblage in different or unforeseen directions?
NKH: I would say that control is not so much obsolete as applicable to
only a small fraction of systems employing linear causality. As soon as
complexity and recursivity enter the picture, control becomes more elu-
sive and more unpredictable. To illustrate, I can reference a short fable
by Stanislaw Lem, disguised as a Nobel acceptance speech, entitled
‘The New Cosmogony’ (Lem, 1983: 197–214). The speech, made by a
theoretical physicist accepting the prize, postulates that there exists an
incredibly advanced civilization, much older than ours, that has achieved
the ability to alter the laws of the cosmos. The problem is that they are
also part of the cosmos, so when they alter the laws, they also alter their
own functioning in unknowable and unpredictable ways – a neat
example of recursive re-entry within a complex system. The very condi-
tions that enable them to exert control also limit how that control can be
exercised.
In Unthought I argue that a more fruitful approach than straightfor-
ward control, and one better suited to complex systems, is to look for
‘inflection points’, places in time and space where a complex system
enters a meta-stable state and therefore becomes more sensitive to
small perturbations such as a group of individuals might exercise.
These do not in themselves guarantee that the interventions will be posi-
tive; they could well be negative, for example, causing the system to crash
rather than simply move in another direction. But these are the places to
look if one wants to make constructive changes.
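A toy dynamical sketch in Python can illustrate the point about meta-stability (the double-well potential below is a standard textbook model, and all parameters are arbitrary; nothing here comes from Unthought):

```python
# Toy illustration of an 'inflection point': in the double-well system
# V(x) = x**4/4 - x**2/2, the same small nudge is negligible deep inside
# a basin but flips the outcome near the meta-stable ridge at x = 0.
# All parameters are arbitrary illustrative choices.

def settle(x: float, nudge: float = 0.05) -> float:
    """Apply one small perturbation, then follow the gradient flow."""
    x += nudge
    dt = 0.01
    for _ in range(20000):
        x -= dt * (x**3 - x)   # dV/dx = x^3 - x
    return round(x, 2)         # trajectory ends near -1.0 or +1.0

print(settle(-0.90))   # deep in the left basin: stays at -1.0
print(settle(-0.02))   # near the ridge: the same small nudge tips it to +1.0
```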
Needless to say, the more deeply one understands the system’s dynam-
ics, the more likely it is that the intervention will be effective. Many smart
people are involved at present in precisely these kinds of activities, for
example investment bankers with the goal of making money. I say it is
high time for humanists and progressive thinkers to be involved as well –
people who may have other goals in mind, such as sustainability, envir-
onmental stability, legal systems able to reliably deliver justice, and so
forth. These for me are the stakes of understanding the implications of
living and acting within cognitive assemblages and the cognitive planet-
ary ecologies which they mutually co-constitute.
References
DeLanda, Manuel (2016) Assemblage Theory. Edinburgh: Edinburgh
University Press.
Deleuze, Gilles and Guattari, Félix (1987) A Thousand Plateaus: Capitalism and
Schizophrenia, trans. Massumi, Brian. Minneapolis: University of Minnesota
Press.
Dewey, John (2000) Experience and Nature. New York: Dover Publications.
Edelman, Gerald and Tononi, Giulio (2000) A Universe of Consciousness: How
Matter Becomes Imagination. New York: Basic Books.
Fazi, M. Beatrice (2018) Contingent Computation: Abstraction, Experience, and
Indeterminacy in Computational Aesthetics. Lanham, MD: Rowman &
Littlefield.
Fredkin, Edward (2018) Digital Philosophy. Available at: http://www.digitalphilosophy.org/
(accessed 28 May 2018).
Hayles, N. Katherine (1999) How We Became Posthuman: Virtual Bodies in
Cybernetics, Literature, and Informatics. Chicago, IL: University of Chicago
Press.
Hayles, N. Katherine (2005) My Mother Was a Computer. Chicago, IL:
University of Chicago Press.
Hayles, N. Katherine (2011) How we read: Close, hyper, machine. ADE Bulletin
150.
Hayles, N. Katherine (2012) How We Think: Digital Media and Contemporary
Technogenesis. Chicago, IL: University of Chicago Press.
Hayles, N. Katherine (2017) Unthought: The Power of the Cognitive
Nonconscious. Chicago, IL: University of Chicago Press.
Hayles, N. Katherine (in press) Cognizing Media: Shifts, Ruptures,
Transformations. New York: Columbia University Press.
Latour, Bruno (2007) Reassembling the Social: An Introduction to Actor-
Network-Theory. Oxford: Oxford University Press.
Lem, Stanislaw (1987) The new cosmogony. In: Lem, Stanislaw, A Perfect
Vacuum. New York: Mariner Books.
Luhmann, Niklas (1996) Social Systems, trans. Bednarz, John and Baecker, Dirk.
Redwood City, CA: Stanford University Press.
Parisi, Luciana (2013) Contagious Architecture: Computation, Aesthetics, and
Space. Cambridge, MA: MIT Press.
Parisi, Luciana (2019) Critical computation: Digital automata and general
artificial thinking. Theory, Culture & Society 36(2): 89–121.
Pynchon, Thomas (1973) Gravity’s Rainbow. New York: Viking Press.
Shannon, Claude E. and Weaver, Warren (1971 [1948]) The Mathematical
Theory of Communication. Urbana-Champaign, IL: University of Illinois
Press.
Whitehead, Colson (1999) The Intuitionist. New York: Bantam Books.
Louise Amoore is Professor of Political Geography at Durham
University, UK. She is the author of The Politics of Possibility: Risk
and Security Beyond Probability (2013, Duke University Press). Her
latest book, Cloud Ethics: Algorithms and the Attributes of Others, is in
press (2019, Duke University Press).
Volha Piotukh was Postdoctoral Research Associate at the Department
of Geography, Durham University, where she worked on the ESRC
project ‘Securing Against Future Events’ (2012–16). She is the author
of Biopolitics, Governmentality and Humanitarianism: Caring for the
Population in Afghanistan and Belarus (2015, Routledge) and coeditor,
with Louise Amoore, of Algorithmic Life: Calculative Devices in the Age
of Big Data (2015).
This article is part of the Theory, Culture & Society special issue on
‘Thinking with Algorithms: Cognition and Computation in the Work of
N. Katherine Hayles’, edited by Louise Amoore.