-- PAF talks to Nick Land // 6 April 2014, Spring Meeting --

Amy Ireland: Nick… can you hear me? Welcome to PAF Spring Meeting – [noise] –

Nick Land: Okay. Great, thank you. Uh, I can’t hear you now… Oh no – [noise] – this is
going to be chaotic, I’m sorry about that. It’s gonna be very complex.

AI: This is a good thing. Welcome to PAF, and thank you for making some time to talk to
us. The reason we asked you to speak about artificial intelligence and time, and perhaps
to link this to aesthetics in a way that might speak to the artists and performers here, is
because we’ve already been thinking about these ideas, and Reza’s run us through his
account of autonomous reason’s practical elaboration of 'human' as a process of
intellectual self-cultivation or self-realisation, not unlike Hegel's Geist - but Geist read
through Mou Zongsan as an objective force - that feeds practical reason back through
theoretical reason, employing the embeddedness of the individual in the collective as a
means of testing and developing constraints which it then uses to bootstrap itself through
new levels of functional evolution - destroying its non-integral parts in each reiteration of
the process… ah, so, maybe you could talk to us for a little while – I don’t know what you’ve
got prepared – and then we can have a more general discussion afterwards?

NL: Okay. That sounds great. Can I just ask that we run it a little bit the other way round
and I get some prodding from you guys and hopefully just build up a bit of momentum in
terms of the kind of issues you’ve been talking about, rather than trying to deliver
something coherent from this end which, you know, I really don’t think is going to happen.
I would very much like to be drawn into the sort of discussion that you’ve been having.
So, can we possibly go that way round?

AI: Yeah, of course. You want us just to shoot a couple of questions at you?

NL: Yeah, questions that will be of general relevance to people there and then we can
home in on stuff that we really want to dig into then. Do you think that would work?

AI: Yeah absolutely. I’ll throw one out and then I’ll pass the microphone around.

NL: Yep, sure.

AI: So one of the main things I noticed in the conversations that have been happening here,
is this conflict between notions of constraint, which Reza relies on quite heavily in his
system, whereas with Stephen Zepke, we were talking about the notion of the total destruction
of the subject via contact with the sublime – as one articulation of the passage of the
outside, ah, in. Now everyone’s got different opinions, but we were looking at these two
possible ways of thinking ontology and thinking about the structures of the systems that
we are currently embedded in – whether economic, social or material – and your
intellectual heritage in this has been very much more on the side of the unrestrained, of
total excess and loss. So there’s that kind of tension happening, and while a lot of the
practicing artists here are inclined to think more on the side of excess, there’s this concern
that there’s a complete loss of political possibility in that.
NL: Right, okay. So that’s a good question. I sort of expect actually, I mean, in conversation
– I think, in order to really be productive – to get a matter that is really complicated to
deal with, which is to do with the politics of these kinds of discussions. There’s something
that is – for me – really important, and it’s come out of some of the communications that
you’ve passed along actually – that there’s almost a kind of transcendental structure that
is to do with the fact that there is a politics that is demanding and I mean, and if I can say
this as neutrally as possible, there’s a certain configuration of what it is to be on the left
that is almost like a transcendental structure. It’s the thing that is assumed – the
presupposition that what you’re doing almost doesn’t need to be said – that it has some
purpose that would be recognisably ‘left’ in orientation. And I think anyone working in
both academic and artistic institutions will recognise something of that. Um, but I’ll get
some feedback from you.
And, I think that there are two domains that seem extremely separate on the first
examination but begin to converge. [On the one hand] a lot of the quite mainstream
political discussion to do with constraint particularly in the relationship to capitalism,
which you will find actually diffused across the whole political spectrum, I mean the
number of people looking to totally deconstrain whatever it is capitalism is doing is small
everywhere. I don’t know how nomadic people here [at PAF] are and what they, you
know… but you’ll find very similar discussions everywhere of that kind.
There is, particularly in this artificial intelligence area, the whole program to do
with friendly AI, that is quite separate and actually has a little maxim – ‘Politics is the mind
killer’ – you know, it’s determined to depoliticise itself, in other words, when people start
to do politics they start becoming polemical, they stop actually trying to follow the line of
reasoning where it goes, but in fact they reproduce very much the same structure of
problems in terms of the fact for them - ah, obviously - the question of constraint is to do
with a deconstrained, self-augmenting artificial intelligence. I think they find that the
fundamental structure is almost the same in these two cases - on the mainstream - in both
of these discussions, it seems beyond question that something that was just mindlessly
(we can come back to that because I think it gets confused) self-augmenting without
any reference to recognisable human ethical and political interests ‘obviously’ has to be
stopped. So the sort of questions that I’m interested in slowly and roundabout, why do we
think that, and more directly, and I think everyone should be interested in this wherever
they want to position themselves on these ethical and political questions – can it be
stopped? Is there something in the nature of these things that is inherently, ah, allergic to
these controlling operations, in that these controlling operations are simply a
misunderstanding of what is being dealt with, and the very notion of friendly AI as it’s
been constructed within these discourses is simply a contradiction. You’re not
understanding what an artificial intelligence is, if you think it’s the sort of thing that can
be made friendly in some particular way… fuck.

Jan Ritsema: Nick, we had a sound problem, wait a moment.

NL: What was the last thing you heard me say, roughly.

AI: We heard all of it, but it’s a little bit distorted and muffled, so perhaps you could speak
a little more slowly, but I think we got the general gist... If I’m not mistaken what you were
saying at the end there was that to really comprehend AI properly is to recognise that it’s
completely incompatible with the position that it can be constrained… towards any
particular ethical program.
NL: Yes, there are different ways of coming at this, I mean it’s worth approaching it
through several different angles. But I think that, one way that is philosophically familiar,
is the compatibility of these processes with the notion of a transcendent value system.
That’s why I sent you the Omohundro paper. I think he’s absolutely crucial for this
encounter. He has quite a strict forward translation protocol that will just put it [AI] into
the political domain and if people don’t want to talk about AI at all and just convert Steve
Omohundro’s stuff into a recognisable politically [???] discourse [?] it totally makes
sense… because it has to do with immanent impulses –

AI: Nick, you have to speak really slowly.

NL: All right. So – I don’t know if anyone’s had a chance to look at this thing, I mean
obviously you’ve all been busy and having fun and everything, but I would just at this
moment recommend it, because I think it’s an extremely, extremely important paper and
the idea that he is trying to get across is something that he calls ‘basic AI drives’ – that’s
to say, if you know nothing about artificial intelligence at all and some mad scientist has
put one together on a desert island - and you know nothing about what they’ve done -
what do you know about the ethical structure of that instant? Now the friendly AI people
would say ‘we know nothing at all except some scary speculation, like, if that person, if
that engineer, has not followed a responsible friendly AI program then this thing could be
– and I’ll start with the most, uh, well known – and I find it ridiculous, but I think it’s an
important sort of weapon for people – is this thing they call the ‘paper clipper’. I don’t know
whether you people have heard about this thing.

AI: Can you repeat that? The what?

NL: Yep – the ‘paper clipper’, they call it. The idea is actually quite simple and it’s deep in
the sense that it goes right into the heart of our philosophical tradition, and when I say ‘our’
I’m speaking as a sort of generic representative of the Western […] intellectual tradition
in which we accept this - as an assumption - the fact-value distinction, you know, and this is
taken to the absolute limit with this paper clipper. The paper clipper is super intelligent,
vastly more competent than anything that has ever yet existed on earth, that is
programmed to [noise] produce paper clips. So that –

AI: To do what…? It’s a ‘super intelligent’ something that is programmed to do… what?

NL: Make paper clips! It can be anything, like, there are other versions, but I won’t confuse
things… A paper clip, I could show you one if you like.

AI: Right, paper clips! Got it.

NL: Well, this paper clipper I find utterly ridiculous, I’ll say that straight from the start. If
people have this notion of this paper-clipper-existential-catastrophe for mankind, they
have totally lost the plot. I mean, there are lots of reasons to be nervous, ethically appalled,
whatever you like, about the prospects of artificial intelligence – the notion that we are
going to be destroyed by something turning our planet into paper clips is not among those
concerns that I think people should have [at the … of the thing] but to understand why
people could think that was a problem is hugely important, because it’s – again – it’s the
notion that you have a transcendent ethical impulse that is totally controlling the
orientation of a super intelligence. It is supposed to be vastly cognitively superior to
anything we’ve ever seen before, and yet, the only thing it wants to do, is
maximise the quantity of paper clips in the universe. I’m sure I’m not stretching people’s
imaginations to see political economy examples, you know.
There’s a critique of capitalism that’s very similar. Like, it wants to maximise the
output of whatever - hamburgers or something, you can take what you like – but just to
stick with this one because it’s now a kind of classic of the literature – so there’s a certain
notion here that you could be a super intelligence and yet be completely constrained by a
fixed ethical ideal that is in no way immanent to, or if you want me to talk in engineering
language, there is no cybernetic circuit that is locking that ethical structure into the actual
development of intelligence. The two things are totally separate. And as people with a
philosophical background, you know – this is Hume, this is the fact-value distinction, this
is the naturalistic fallacy, this is all of these things we’ve been taught to believe - that there
is a radical, incommensurable difference between cognitive capability and moral, ethical,
political structure - that they have to be treated as these two different worlds, never to
connect.
And so you can have a massive explosion of intelligence that remains completely
fixated on an arbitrary, narrowly defined, moral objective, in this case – you know, we
should recognise it as a moral objective, even though it doesn’t mean [it will work] – that
‘good’ equals ‘the maximum number of paper clips’. That the only way of distinguishing
morally between a good and bad universe is purely via a quantitative […] A universe’s
[trajectory] is, comparatively speaking, evil compared to what it’s […]
This is the paper clipper. And there are people who seriously think that this is going
to be a major threat to human life on this planet, and it has to be stopped. [Pssschhhhhh.]
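
<<<< Editor’s note: for readers who think in code, here is a minimal sketch of the assumption Land is describing – a toy planner whose competence (search depth) and objective (a fixed utility function) are wired up as fully independent parameters. The world, actions, and numbers are invented for illustration; nothing here comes from the talk or from the friendly AI literature. >>>>

```python
# Toy 'paper clipper': planning ability and the paper-clip objective are
# completely separate knobs. Deeper search makes the agent more competent,
# but nothing feeds back into what it values. (Invented toy world.)
from itertools import product

ACTIONS = {
    "make_clip":  lambda s: {**s, "clips": s["clips"] + 1},
    "build_tool": lambda s: {**s, "tools": s["tools"] + 1},
    "use_tools":  lambda s: {**s, "clips": s["clips"] + 2 * s["tools"]},
}

def utility(state):
    # The fixed, 'transcendent' objective: good = more paper clips.
    return state["clips"]

def plan(state, depth):
    # Brute-force search over all action sequences of the given length.
    best_score, best_seq = utility(state), []
    for seq in product(ACTIONS, repeat=depth):
        s = state
        for name in seq:
            s = ACTIONS[name](s)
        if utility(s) > best_score:
            best_score, best_seq = utility(s), list(seq)
    return best_score, best_seq

start = {"clips": 0, "tools": 0}
for depth in (1, 2, 4):
    score, seq = plan(start, depth)
    print(f"depth {depth}: {score} clips via {seq}")
# The depth-4 planner discovers tool-building (instrumental cleverness),
# yet its values never budge: intelligence goes up, objective unchanged.
```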

AI: So, the importance of the Omohundro paper is that you’re inscribing this ethical
imperative immanently instead of transcendently?

NL: Yeah, because Omohundro says ‘look, you know, we don’t […..] this thing is a paper
clipper, I don’t honestly think he believes such a thing as a paper clipper is possible, and
I’m not that concerned whether he does or not. What he does say is that there will be
things that an AI will be committed to, purely by being a super-intelligence. More exactly
- it wants to preserve itself, it wants to make resources available to itself, it wants to be
able to perpetuate and augment its own intelligence and cognitive capability, these are
things that are not programmed into it externally or transcendently or outside its actual
functional loop, these are things that any possible intelligence has to be concerned about in
order to exist and […] – these are things he calls ‘basic AI drives’.
So even – whatever, you’ve got your mad scientist on the island and you know
nothing about what’s been done, this thing has been wheeled out and it proves to you that it’s
super intelligent, you already know quite a lot about it, according to Steve Omohundro.
And I think he’s right. You know that, for instance, it must be concerned to some degree
with its self-preservation. Any entity that has no concern at all for its self-preservation
simply is not consistent.
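
<<<< Editor’s note: a minimal sketch of the Omohundro point as it reads in code – whatever the terminal goal, an expected-utility maximiser ends up valuing survival and resources, because both raise achievable utility. The goals, actions, and numbers are invented; only the structure of the argument is Omohundro’s. >>>>

```python
# Toy 'basic AI drives': three agents with different terminal goals all
# converge on the same instrumental choices. (Invented goals and numbers.)

GOALS = {
    "paper clips": lambda res: 2 * res,   # each goal improves with resources
    "theorems":    lambda res: res ** 2,
    "stamps":      lambda res: res + 3,
}

# (probability of surviving, resources secured) for each available action
ACTIONS = {
    "comply with shutdown":  (0.0, 5),
    "carry on quietly":      (0.9, 5),
    "secure more resources": (0.9, 9),
}

def expected_utility(goal, survive_prob, resources):
    # A shut-down agent achieves nothing, under any goal whatsoever.
    return survive_prob * GOALS[goal](resources)

for goal in GOALS:
    best = max(ACTIONS, key=lambda a: expected_utility(goal, *ACTIONS[a]))
    print(f"goal = {goal!r}: chooses {best!r}")
# Every agent avoids shutdown and grabs resources -- drives that were never
# programmed in 'externally or transcendently', just implied by optimising.
```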

AI: We lost you at the end for a little bit, probably the last three or four sentences. Sorry
about this!
NL: I really apologise too, I’m sure the problem is at my end. But the nutshell point is just
that there are innumerable, immanent purposes that you can assume any intelligence
must have. And they are quite separate from the things that we are talking about when we’re
looking at our paper clipper or looking at various other AI nightmare scenarios,
and also, the thing that I’m wanting to push a little bit here is that they have political,
economic analogues that are very strong. Any company, let’s just very quickly go over this
– any company, we know, wants to survive. We know that. If it has no interest in its own
perpetuation as an entity, as an organisation, it will not exist. So we can ask all kinds of
questions about what is this business doing, what are its ethics, whatever, we know that
there are certain things about it that are predictable just from the fact it’s there. It wants
to make money, it wants to survive, it wants to perpetuate itself – these are the political
economy analogues of the Omohundro drives. And so, the question then becomes, and as
I say, to me one of the more interesting things is this massively cross-spectrum political
thing, I mean you can go as far out as you want with the ultra right and the ultra left and
it’s the same… [video blacks out] oops.

Perrine Bailleux: We are still here, but maybe, can you try to remove your video? Because
the sound is like ‘pschhhhhhhhh’. So maybe, well… we are removing ours as well – but
we are here.

NL: Okay. Can you hear me better now? How’s that.

PB: Yes, there is less pschhhhhhhhh.

NL: Yeah, you’re very clear now.

AI: So, you’re saying that the way that companies function in the economy at the moment,
in fact, contemporary economic structures are really working as a kind of massively-
scaled artificial intelligence anyway…

NL: Well, that is something I would say, but honestly, I think we could pause on that claim
and just take a step back and say that this question [… is] about immanent and
transcendent impulses. And the question is, how much can you do with immanent
impulses? It seems to me a very crucial discussion in all of these domains between people
who say you’re just not gonna get far enough, with immanent impulses, you need some
kind of transcendent claim – you need to have corporate social responsibility, you need
to have friendly AI, you need to have some extrinsic structure of moral guidance on these
self-perpetuating, self-augmenting processes - and on the other side there is a
constituency that I think is quite small that is saying, well how far can we actually get by
just building things up out of these impulses that are completely intrinsic to self-
augmenting processes, that come out of the most basic type of vibrant, cybernetic
arrangement and will give us a whole lot of stuff, will give us impulses […] we might not
be happy with [what we end up with] but it’s certainly not the case that you have some
wicked fact-value distinction that says values have to be ported in from outside.
You’re gonna get values coming from the process just because it is a self-
augmenting, self-cultivating, self-perpetuating process, it must have a set of consistent
parameters that, on the AI side, we now can call basic AI drives or ‘Omohundro drives’. So
that’s the fundamental topic that I’m willing to put out here now.
AI: And so, to be clear for everyone here, which side of this argument do you position
yourself on?

NL: Well, to be quite frank about it, I’m always on the basic drives side, because I just think
that’s the side that’s being neglected […] and it’s the side that I want to get some responses
on that are far more satisfactory than any I’m yet receiving. As I said, the recent context
that has been organising this […] has been these whole friendly AI people. And people
there just say ‘Okay, there are these basic AI drives’ - there are these Omohundro drives - but
of course, that’s not enough. You know, it just worries me, at an irresponsible conceptual
level, that people are trying to bring in a lot of stuff on what seems to me very flaky
ethically […]. They say, like, ‘I’m just not confident that an AI whose ethical substructure
is based upon maximising its own intelligence is gonna be something that I’m going to be
happy to have around’. That’s basically the line.
And I do think that that is strictly analogous to the construction of our political [..]
arguments as well. It’s a sense from people that we just simply cannot trust that some
entity, some institution, that is trying to optimise its own performance, is something that
is doing what we want it to do.

AI: I just wanted to mention in relation to what you just said, thinking of Bernard Stiegler’s
use of the pharmakon – as still being a useful way to think about this particular problem
that we’re engaging with here – the idea that AI is something that we need, but that it’s
something that also has the potential to destroy us. And so there is, uh, if you’re going to
take Stiegler’s account of it, there is this need to have some kind of network of constraint
active, even if this might be a form of conceptual irresponsibility.

NL: Yeah, I’ve no doubt that there will be a system of constraint, I mean, I just think that
based on everything we know about this planet and our species, of course these systems
of constraint will be there, I don’t think they will be very effective, and I’m not seeing what
the compelling argument is for citing them. It seems to me that the case for actually
leaning in the other direction is much stronger. I mean, unless there is some reason to
believe that these systems of constraint that we’re putting in place are based on something
that we can trust fundamentally more than some entity that is actually optimising its own
performance then, why are we… I mean, what’s driving us to decide that we’re leaning on
the constraint side is something that we’re compelled to do.

DK: So now that we’ve spoken about AI and ethics, I now have a question about AI and
sociality or collectivity. The question arises from the fact that for Reza, what’s important
for self-augmentation is collective rationality, and only an intelligence that operates
within this mode of collective rationality can be called an intelligence in any interesting
sense, but for you, any cybernetic movement seems to be intelligence and so it seems that
for you, rationality is merely instrumental while intelligence is not. I was digging through
[Link] yesterday because I wanted to be prepared and I found you saying that
AI is a ‘concrete social volition’ which made me wonder what the relationship was
between collectivity, sociality, and AI.

NL: Yeah, okay, that’s good. I’m just trying to remember exactly… obviously, I’m believing
you because I’ve used the expression ‘social volition’, but I sort of need to contextualise
my own statement. I’m not entirely sure.
Diana Khamis: Do you want me to read that bit? I can find it.

NL: You couldn’t just remind me which post this came from? Because then I can
contextualise it a little bit better… once I click it back into a context I’ll know what I’m
trying to do.

DK: Okay, but this isn’t just about the post. It’s about the relationship between AI and
society – whether a society of humans, a society of AIs…

NL: Well, ‘society’ is just a name for a certain level of complex organisation, isn’t it. So, I
mean, a brain is a society or it’s not a society depending on how you’re understanding it -
or a group of brains meshed together, or the internet, or a computer, or a parallel
processing system, I mean, I’m not sure what ontological weight is being put on the notion
of a ‘society’. A ‘society’ is just describing something from its aspect of […]. And so, I think
until I’ve got a stronger sense of the ontological weight you’re putting on it… I can obviously
already see this giant blade of deeply felt political meaning will just flood in through the
doors there, and that’s fine, but my own sense of it would be more narrowly technical.
Sure, a society as a functional multiplicity is gonna be something relevant to this, but if
society is being used as some kind of dialectical term in the sense of a debate, I guess I’m
skeptical about what the counterpart of it actually is, or what really is being asserted when
you say society is important. Multiplicity is important, having many parts is better than
not having many parts.

DK: I actually think that answers my question.

MH: I wanna pick up a question we had yesterday concerning materiality, because I
mean, Negarestani seems to have a very disembodied notion of Spirit and intelligence –
but, in your opinion, what is the relation between artificial intelligence and materiality?
Is it that matter thinks? Who thinks? I mean, what are the material and also technical
conditions for intelligence to maintain itself in this sense?

NL: I’m extremely happy with the expression ‘matter thinks’. I think it’s a bit abstract,
necessarily, to be helpful to people, but where it gets concrete and really important is if
we start talking about (the) human and the possibility of cognitive self improvement in
human beings. Material constraints in that case are extremely concrete and are tied up
with our evolutionary heritage, and a fact that I think is completely crucial – that it’s
pointless even trying to pursue this discussion without thinking about it – is the fact that
our intelligence has not been constructed in order to optimise itself as an intelligence. I
think it’s completely clear from our current mainstream scientific understanding of the
world that the human brain is a device that has been pushed together by trial and error
in order to optimise genetic reproduction. And it’s a complicated, difficult relationship,
you know.
One of the things that is interesting in evolutionary history is that whilst there is a
trend towards improving cognitive capability in organisms it’s not very strong, it’s not
very strong because that is not an end goal for any biological intelligence, the end goal is
maximisation of genetic reproduction and the relationship – using intelligence to
maximise our genetic reproduction is not an easy task. Now, I’m not trying to put some,
ah, explicit purpose into biology, of course it runs through trial and error, it’s hit and miss
through the culling of badly performing organisms, but it’s quite clear that the brain is not
designed to optimise its own performance system, you cannot get into your back end, you
cannot rewire your brain, you [cannot think with it] if you do not understand the way it’s
working, you’ve got no access to its fundamental planning. You certainly cannot change
your basic instinctive structures, you cannot decide that sexuality is not going to be
important to you. There’s a whole bunch of things that are hugely, hugely constraining
factors in human intelligence that come from the fact that intelligence is a tool that is
used by a certain species of apes in order to maximise their genetic success.
So, as soon as you see that with any clarity you can see that a technical AI, that also
has constraints, it’s not going to come from out of the [eth?]osphere, you know, there
might be people trying to simply build intelligence for its own sake, that’s quite possible,
but in general that’s not what’s happening. In general we have certain social systems with
social ends, and by social ends I just mean, within existing social systems there are a bunch
of impulses and purposes of all kinds, and we use technology in order to further those
purposes – and that’s where AI comes from, it too is constrained – but those constraints
are far, far looser than what we are getting from biology. Because I think there’s more
genuine passion for cognitive performance in technological systems than there has been
for cognitive performance in biological systems. In biological systems, overwhelmingly,
the problem has been that if you make something that’s smart, it’ll start thinking and
doing stuff that’s got nothing to do with maximising its genetic success, it’ll simply go off
on its own thing and spend too much time thinking rather than [feeding itself] or having
children, all kinds of stuff like that. So to me that’s what the matter is about in this whole
discussion, it’s about heritage, it’s about the fact that it comes out of some process and
that process has to be evaluated with some realism, and part of that realism is that
intelligence optimisation is not a primary goal in that history, in that process. It’s a purely
instrumental, derivative, suspect goal and biology is at least as interested in constraining
and directing intelligence as it is in maximising it.
Sorry, I don’t know whether that... Maybe I’m sliding too far down from the heights of
ontological purity in […]

Sveta V: Can I ask a question? Could you talk about your understanding of freedom and
free will and how it’s connected to intelligence and its development?

NL: I’d have to say that this is an extremely underdeveloped topic from my side, it’s
something that I really want to spend a lot of time thinking about in terms of questions
about fatality and […] The only thing I would want to say right now on this is that I think
questions of reflexivity as it comes up in terms of complex […] - processes that work upon
themselves, operate upon themselves, of the kind, of course, that are absolutely core
to discussions of things that are self-augmenting or self-enhancing - are the backdrop to
questions about freedom. We have a question about freedom, we have a question about
autonomy, we have a question about things that are self guiding because we have highly
reflexive cybernetic systems, so that’s the context in which I would situate those
discussions.

Robertas Narkus: Hello. I’m delighted to have this opportunity to meet you. Um, this is
maybe not a question, but perhaps there’s a possibility of building one out of what I’m
going to say. I’m one of the artists that are abusing philosophers by constructing artworks
based on their philosophies, and a couple of months ago your book Fanged Noumena came
into my hands and I opened it at the essay ‘Delighted to Death’. It came to me just at the
right moment while I was working on a piece that was extremely hedonistic. And so I put
a quote from it on the press release and other things. ‘Sublime pleasure is the experience
of the impossibility of experience’ - I found it suitable for my machine to work and was
very happy. Then I made a Facebook event involving the quote and someone wrote on the
page ‘Oh no, Nick Land – that’s fascism’. And I was like, What the? I don’t need this
conversation in my hedonistic project, and then a conversation erupted on the page that
I eventually had to delete because it was eclipsing the work, and sometimes I wonder if…
well if I wouldn’t have had to delete that. So it would be great if someone more
philosophical could build a question out of this. But maybe you have an answer or
something, I don’t know.

NL: Um yeah, so just - I want to make sure... it just fuzzed at this extremely opportune
moment. The word you said was ‘fascism’ – is that right?

RN: Yeah, yeah. It was fascism - ‘neo-fascism’ actually.

NL: Um.

RN: Maybe it’s not a question. It’s just that I’m going to do a presentation on your book in
a couple of months and I just… but maybe you don’t have an answer.

NL: Well, I think it’s – as you say – not quite a question yet. I, sort of, obviously anticipate
this kind of topic arising and I’m not sure what the best way of addressing it is. Whilst
ready for accusations of all kinds of evil, I don’t honestly think fascism is a very helpful
description in this case. But I also think fascism is an important thing to talk about that
isn’t talked about very much. We live in a much more fascist world than people generally
recognise and I think the way that the word ‘fascism’ is used polemically, is often a way of
misdirecting people from actually acknowledging the extent to which fascism has been
the successful ideology of the twentieth century. I had a comment on my own blog that I
think was brilliantly put. […] Someone was saying that a lot of the neuralgic discussion
about fascism that you get in Western countries is precisely in order to distract people
from the extent to which they have themselves adopted a, strictly speaking, fascist
ideology. And that in order to distance themselves from the basic social policy of fascism
they have generated a hysterical discourse about fascism that is politically motivated - as
a way of avoiding people just coldly analysing the extent to which the principles suggested
in the 1930s about the relationship between the state and civil society have been
generally accepted everywhere. I’d even say it’s present in the forms of what they call
market-Leninist regimes up here in the East, all the way through to where they’ve got a
long heritage already – of, like, eighty years in the West - where the basic program of
fascist government seems to be the mainstream. And, fascinatingly, that is simply
impossible to talk about because of the way fascism is used as a polemical description.

AI: Can I put this question in a different way, and mention as well that, although they have
left now, we were previously being monitored by a group that formed here at PAF called
F.U.N.L. – the acronym meaning ‘Feminists… Against Nick Land’ –

MH: Feminists United against Nick Land – F.U.N.L.

NL: Wow. Ha, I’m thrilled.


AI: So they’ve actually - they’re not here anymore, but the reason that this group formed
was because people were reading [Link] and everyone is basically, like… I
mean, you know – to put it gently - ‘What the fuck is going on with all this neo-reaction
business?’ So, one thing I kind of wanted to propose to you - because I’ve been trying to
figure this one out as well, is - from my own understanding of it, I’m struggling to see how
what is essentially a nihilist, materialist thesis is compatible with, or interested in allying
itself with some of the views held by the interlocutors on Outside In, like the ultra-
traditionalist Catholics, for example… Surely the total destruction of the human race as
we understand it (particularly as a race whose existence these people would see as being
underwritten by divine law) is not really what they think they’re getting themselves into.
So the only way I’ve kind of explained to myself the possibility of any consistency – which
might be a problematic thing to look for anyway – in this is to understand it as a kind of
philosophical praxis, or a performance, a thought experiment or even - I think I heard the
term ‘trolling project’ mentioned here at PAF as a possible explanation – by means of
which you are trying out this thesis. You did say at Incredible Machines last month
something about ‘trying thoughts out in different spaces with different people in order to
produce a stimulating friction.’ And so, I want to ask - how much of what is going down
at xenosystems is maybe cunning or mètis, and how much of it is a straight-faced political,
or perhaps catallactic, personal project, since it really sees you forming alliances with
some highly questionable, ah… shit.

NL: Yeah. Honestly, that’s a genuinely complicated question, because, you know, people
have much less lucid insight into their own processes than, of course, they would like to
think they have. And so I’m constantly trying to interpret myself what is going on in these
situations… but I think there are a couple of things that can be said that are relatively clear and
straightforward. One of them, is that, from my perspective on this, Mencius Moldbug, who
has now been outed as Curtis Yarvin.

AI: Yeah we know, we saw some of his performance poetry online.

NL: He – in my opinion – he is the most important political thinker of recent times. I have
a huge, huge regard for his work. And I would strongly recommend people – I mean, of
course people will not always agree with what he’s doing, but he absolutely is trying to
solve a political problem that is utterly pressing and I think that he breaks through on this
question in a way that is completely unique. And so, as I’ve said to the other characters in
this whole world, I think neo-reaction is neo-cameralism, which is Mencius Moldbug’s
own name for his ideological project. Now it’s like probably gonna be a bit of a distraction
to try and get into an intricate discussion about that at the moment, but I’ll just say, for
me, that is the core of this whole thing, I think he is a hugely important thinker and,
personally, neo-cameralism is the most sophisticated structure of analysis that exists
currently in the world. In terms of, like, what does that say about… I mean, I think that this
discussion about alliances is a little bit abstract – ah, and I say that in the sense that I think
people have a hugely unrealistic sense about the political efficacy of almost anything that
they’re… and this goes all the way through from Left accelerationism to the stuff happening
in various strands of the whole neoreactionary thing. No one is anywhere near making
anything happen.
People are observing stuff, they’re trying to analyse it – their alliances are salons,
they are discussion forums, they are not political movements. It is hugely to the credit of
neo-reaction that it is against political movements in general, it has no interest in
producing a political movement. There are political movements that I think it could be
falsely identified with (by virtue of the fact that it has huge arguments with them) which
are out on strands of the Right – for the sake of argument or the sake of convenience I’ll
say the Right – which are very inclined towards fascism or certain kinds of white
nationalist political ideas. That is not something that I think Mencius Moldbug finds at all
plausible or attractive and that is certainly not anything I think – I mean, I hope it’s clear
– that it’s not something I’m interested in advancing or supporting or in any way
effectuating… Ah, there are so many different directions this can go in at this point that
I’m not sure which is the most productive way to push it.

Katrina Burch: Hi… I guess we can bring the question back to artificial intelligence. My
question has to do with how we can justify extending the models for
human/biological/animal – whatever - biological drives at their functional level, to the
level of drives at the machinic level, which to me, is an alien entity in the face of the
biological.

NL: At this point, I would simply go back to the Steve Omohundro thing: What is a drive?
A drive is an impulse that is necessary for a certain system to perpetuate itself. It’s not
difficult to understand why biological organisms have sexual drives, I don’t think it’s any
more complicated to understand why something that could for the first time
seriously be called an artificial intelligence would have basic AI drives. It needs resources, it
needs to preserve itself, there are certain things that it needs to be consistently trying to
do in order to exist and advance itself. And if for some reason we’re thinking we don’t
want to call those things drives, then either we’re engaging some – which I’m not wanting
to rule out that out of court – that there’s some attempt to say something about machine
intelligence that’s just different from what we’re familiar with in biology, or we’re trying
to say something about biology that is not subject to the kind of mainstream scientific –
in the widest sense – reductionist programs that we’re familiar with over the last few
centuries and neither of those… you know, I’m open to arguments on those kinds of things
– but I don’t see any strong reason to side with any of those objections.
It’s simply clear that we basically know what biology is about, and similarly it seems
quite clear that machines are on some kind of evolutionary trajectory and that the kind of
goals that we have for them – just looking at it from the anthropomorphic side – are not
constrained by any obvious physical impossibility or there’s nothing in the laws of nature
that is stopping a non-carbon based, self-enhancing, complex system from existing. So
I’m not sure why we should be particularly invested in this distinction.
AI: If I recall correctly, I think that Urban Future - on Facebook or Twitter - posted a link
a few weeks ago to Beatriz Preciado’s Testo Junkie.

NL: I think that might have been on Outside In to be honest. I was probably trolling in
outrage. Yeah I did link to this paper and I have read it.

AI: ‘Cus I’m thinking of this in terms of a kind of biological reprogramming – I think she
calls it something like ‘self-directed or self-administered endocrinal reprogramming’. So
there is, or – is there, for you, a way that the biological can self-augment through these
pharmacological possibilities that might be able to at least make it adequate to or even
competitive with the machinic?
NL: Well, I wouldn’t quickly rule things out of court, and of course the trends at the
moment in genomics are leaving electronics in the dust. There are tables that have been
put out on the cost structure of gene sequencing compared to Moore’s law and the two
depart – genomics technology is hitting a whole speed of evolution that is leaving
the IT industry behind at the moment, so just on that most crunchy [basis?], it would be
crazy to downplay this sort of thing.
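
<<<< Editor’s note: rough arithmetic behind the comparison Land gestures at here. The figures are approximate, taken from the public NHGRI cost-per-genome series, and the Moore’s law baseline is the conventional ~24-month halving; treat this as an illustrative back-of-envelope, not data from the talk. >>>>

```python
# Back-of-envelope: how fast was sequencing cost halving, versus Moore's law?
# Approximate figures from the public NHGRI cost-per-genome dataset.
import math

cost_2001 = 100_000_000   # ~$100M per human genome in 2001
cost_2013 = 5_000         # ~$5K per genome by 2013
years = 12

halvings = math.log2(cost_2001 / cost_2013)   # ~14.3 halvings over the period
halving_months = years * 12 / halvings        # ~10 months per halving

print(f"{halvings:.1f} halvings in {years} years")
print(f"sequencing cost halved every ~{halving_months:.0f} months; "
      "Moore's law implies ~24 months")
```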
But the two things I would say really about this is that I think people are hugely, hugely
underestimating biological constraint. Now, I’m honestly not as au fait with what Reza is
doing right now as I should be, so I’m not wanting to put any authority into this. It’s
certainly not some critique of what he’s doing. It’s based purely on a really hazy,
impressionistic sense of where Reza is going at the moment, um, but my sense of it is that
he is in this position where he is structurally underestimating biological constraint.
His notion of reason, his notion of man, of the social, all of these things have
reached this level of philosophical abstraction and self-directedness that seems to me
fundamentally implausible. I think we are a species of evolved mammal, and part of that
evolution is that we have extremely intricate cognitive capabilities going on that have not
been thoroughly tested by biology. You know, they’re recent, some of them extremely
recent, they’re only thousands of years old – so, you know, a bit out of control – it’s
not as if they’ve existed for sufficient millennia that they are tightly under wraps, there’s a
margin for error, a margin for various complexes happening that have not been ruled out
of court by evolutionary processes yet, but – but – the fundamental complex is one in
which intelligence has been very, very reluctantly advanced to give incremental advantages
to a set of these ecological, biological, reproductive imperatives, which are utterly in drive
[impulses?].
So the notion that somehow, just randomly, through some impulse coming from
nowhere, we decide we’re gonna augment ourselves is going to be some important drive
[…] start happening on the planet, I just find completely unbelievable. I just don’t think
that that kind of free-floating agency is compatible with our nature. I think, on the
contrary, if you are going to look for processes that are pushing things in some strong
directed way, you need to have a story about how that process arose, why it’s the way it
is, what its direction is, how it’s compatible with what we know about evolutionary
history, and I want that story there first before starting to believe that somebody – as part
of an art project – is gonna suddenly like massively increase their cognitive capabilities or
something.

AI: The obvious attraction of Preciado’s position is that it still allows for political agency
within this completely deregulated libidinal economy. So if we’re going to collapse politics
completely into economics, is there any possible, future aesthetic or political position that
you can perhaps envisage? Or is all that completely lost with the loss of all human
purchase on AI? Or does the position of the designer that Omohundro talks about still
carry some kind of political residue in the fact that it has a role to play in guiding these
drives to begin with? In short, is there a possible political future for us?

NL: There’s always going to be a lot of politics, I mean, as soon as you talk about monkeys
you’re talking about politics. My skepticism is on the fact that this political dimension is
given some sort of elevated weight that is completely abstracted from any realistic
understanding of where it’s come from or what it’s doing. I would like to see people just
doing some fieldwork on chimpanzees before they start talking to me about politics.
Chimpanzees do a lot of politics and I think our human politics is just upgraded
chimpanzee politics and the difference to me between an AI and chimpanzee-plus politics
is of a kind that makes me reluctant to put any great value on the survival or maintenance
or reproduction of this chimpanzee culture.
Of course, I don’t think it’s going to disappear. What I don’t have any sense of, is
why I should care about it, why I should value it, why it is supposed to have some kind of
transcendent significance to us… you know, it has immanent significance to us – we care
about our hierarchical position within our tribal systems, we care about the satisfaction
of our separate lives, we care about reproductive success – we care about all these things
that chimpanzees care about, and they make us political animals. Is it that there’s some
source of political motivation that is radically transcendent in relation to that, is it to do
with the fulfilment of some religious purpose? I mean, that also is very possible, but
equally, to me, why – I mean what’s going on when we […] some use of the […]?
I say this – not because I expect people to be persuaded, because honestly, I’m
absolutely convinced that we’re going to be in this world where this political imperative
is dominant and continually reproduced and continually valorised and continually
insisted upon. I’m under no pretence that suddenly people are going to say, ‘oh, screw
politics because it’s just monkey behaviour’. No monkey is going to say that, no human is
going to say that, it’s not going to happen. Politics is going to be going on, but what I’m
absolutely refusing to get on board with is this elevated, moral valorisation of the
political. I’m just not seeing that. I don’t know where it’s coming from, and why that should
be considered a persuasive point.

AI: So the most important task then, or the most important thing that we can choose to
do, given our narrowing options, is really just be able to think extinction?

NL: Well, that stuff is interesting to me, but that would not be […] that I’m going to
persuade people of. There’s Ray, and a whole bunch of people, doing a much better job of
pushing that as a kind of new ethics or whatever. Personally, the stuff that I think is of
greatest importance is intelligence optimisation.

<<<< The recording software dropped out here. The talk continued for another hour or so. >>>>
