'The formula that killed Wall Street': The Gaussian copula and modelling
practices in investment banking
Donald MacKenzie and Taylor Spears
Social Studies of Science 2014 44: 393 originally published online 6 February 2014
DOI: 10.1177/0306312713517157
Abstract
Drawing on documentary sources and 114 interviews with market participants, this and a
companion article discuss the development and use in finance of the Gaussian copula family of
models, which are employed to estimate the probability distribution of losses on a pool of loans
or bonds, and which were centrally involved in the credit crisis. This article, which explores
how and why the Gaussian copula family developed in the way it did, employs the concept of
‘evaluation culture’, a set of practices, preferences and beliefs concerning how to determine the
economic value of financial instruments that is shared by members of multiple organizations. We
identify an evaluation culture, dominant within the derivatives departments of investment banks,
which we call the ‘culture of no-arbitrage modelling’, and explore its relation to the development
of Gaussian copula models. The article suggests that two themes from the science and technology
studies literature on models (modelling as ‘impure’ bricolage, and modelling as articulating with
heterogeneous objectives and constraints) help elucidate the history of Gaussian copula models
in finance.
Keywords
finance, financial modelling, Gaussian copula, investment banking, performativity
‘The Formula that Killed Wall Street’: that was how Wired’s editors introduced the
Gaussian copula to the readers of a February 2009 article by journalist Felix Salmon. The
model had ‘devastated the global economy’. Its author, ‘math wizard … David X. Li …
won’t be getting [a] Nobel [prize] anytime soon’, wrote Salmon: ‘Li’s Gaussian copula
formula will go down in history as instrumental in causing the unfathomable losses that
brought the world financial system to its knees’ (Salmon, 2009).
In this and a companion article published in this journal (MacKenzie and Spears,
2014), we examine the history of the Gaussian copula family of models, their embedding
in organizational practices in finance and their role in the global financial crisis. This
article presents a history of these models set in the context of a discussion of the domi-
nant ‘evaluation culture’ – as we call it – of the modelling of financial derivatives, a
culture that enjoys a degree of intellectual hegemony in modern investment banking. (A
derivative is a contract or security the value of which depends on the price of an underly-
ing asset or the level of an index or interest rate.)
Given how crucial mathematical models are to financial markets, surprisingly little
research has been devoted to how financial models develop, which is our topic in this
article; we return to other issues in the companion article. One theme in work on models
by researchers on finance influenced by science and technology studies (STS) has been
the ‘performativity’ of models: models are not simply representations of markets, but
interventions in them and part of how markets are constructed (Callon, 1998, 2007).
Models do indeed have effects, but – vital though that issue is – exclusive attention to
their effects occludes attention to the processes that shape models and their development.
Research on the history of financial modelling has seldom gone beyond 1970 when the
canonical financial-derivatives model, the Black–Scholes or Black–Scholes–Merton
options pricing model, was constructed.1 If the history of modelling in the decades since
1970 is treated in detail at all – and these are decades in which global financial markets
have changed utterly – it is by practitioners (the best such work is Rebonato’s (2004)
history of interest-rate modelling).
Modelling more generally has, however, become a significant focus in STS and in
philosophy of science. Much of the research on models in STS and philosophy of science
addresses issues – for instance, whether modelling is a form of knowledge generation
distinct from both theory and experiment (e.g. Dowling, 1999; Galison, 1997) or whether
models are the crucial intermediaries between theory and reality (Cartwright, 1983) –
that have no exact analogues in financial markets. Of course, the institutional contexts
and purposes of modelling in finance and in science are different. The goal of most mod-
elling in finance, after all, is to make money, not to contribute to knowledge. Experiment
– the relationship of which to modelling in science has been an important topic for schol-
ars (e.g. Morgan, 2005) – is much less prominent in finance. Finance does have its exper-
iments (see Muniesa and Callon, 2007), but they are generally looser affairs. Furthermore,
neither experiments nor experimental evidence played a part in the history discussed in
this article. Nor does ‘theory’ occupy the prominent place in finance that it does in many
sciences; for many financial practitioners, ‘theory’ (option pricing theory, for example)
simply is a collection of models, not something separate from models.
Nevertheless, there is much in the science-studies literature on models that can help
frame research such as ours. A common finding is that building models is a creatively
‘messy’ process that draws on a heterogeneity of elements (e.g. Breslau and Yonay, 1999;
Morgan, 2012; Morgan and Morrison, 1999). Boumans (1999), for example, argues that
model construction in economics is ‘a trial and error process’, ‘like building a cake
without a recipe’ from heterogeneous ingredients (pp. 95 and 67). In a one-word sum-
mary, model construction is bricolage (MacKenzie, 2003).
Other relevant themes from the science-studies literature on models emerge, for
example, from Sismondo’s (2000) nuanced analysis of controversy surrounding Robert
MacArthur and Edward O Wilson’s ‘island biogeography’ model, which posits a simple
mathematical relationship between an island’s area and the number of species on it
(MacArthur and Wilson, 1967). The model is not unitary, argues Sismondo (2000): ‘it
has multiple uses and interpretations’. It can legitimately be viewed either as ‘true but
quite abstract’ or as ‘interesting but false’. Its ‘success in representing nature’ thus
depends on who is assessing that success, their professional identities, the kind of scien-
tific work they engage in and the role of models in that work. ‘[T]he success of IB [the
island biogeography model] in representing nature depends in part upon [scientific] life-
style and labor issues’ (Sismondo, 2000: 251–254). ‘Seeing models and simulations just
in a space between theories and data’, Sismondo (1999) argues, ‘misses their articulation
with other goals, resources, and constraints’ (p. 254). The articulation of Gaussian copula
models with broader ‘goals, resources, and constraints’ is central to this article and espe-
cially to our companion article (MacKenzie and Spears, 2014).
There is a particular tension that characterizes much of the history of Gaussian copula
models. On the one hand, during the period on which we focus (from the late 1980s to
the present), the modelling of financial derivatives was, as suggested above, character-
ized increasingly by a dominant approach. On the other hand, there was a pressing need
to evaluate a class of products known as collateralized debt obligations (CDOs, explained
below). The tension arose because CDOs could not initially be evaluated by models of
the kind that were highly regarded in the dominant culture. The ensuing bricolage
involved in the construction of Gaussian copula models thus took place in an uneasy
interstitial space in which both practical demands and intellectual – and on occasion,
perhaps, even aesthetic – preferences played important roles.
The importance of intellectual preferences is part of what we want to highlight by
emphasizing the presence here of an evaluation culture. We intend the term to signal a
phenomenon similar to that captured by recent uses of ‘culture’ in science studies, in
which the concept has been employed to express the pervasive finding that scientific
practices (even within the same discipline at the same point in time) are not uniform:
there are different ‘local scientific cultures’ (Barnes et al., 1996), ‘experimental cultures’
(Rheinberger, 1997), ‘epistemic cultures’ (Knorr Cetina, 1999), ‘epistemological cul-
tures’ (Keller, 2002) and ‘evidential cultures’ (Collins, 2004).2
Such differences in practices are found in finance too, as Smith (1999), for example,
demonstrates in the case of the US stock market. An appropriate term for the more coherent and more distinct clusters of practices that we focus on here is ‘evaluation cultures’
because evaluation – determining the economic worth and risks of financial instruments
– is the activity at their core. ‘An evaluation culture’, as we use the term, is an at least
partially shared set of practices, preferences, forms of linguistic or non-linguistic com-
munication, meanings and beliefs, which perhaps includes an ontology or a distinctive
set of assumptions about what ‘the economic world’ is made of, together with a mecha-
nism of socialization into those practices and beliefs.3 Crucially, to count for us as an
evaluation culture, such a set must go beyond the boundaries of any particular bank or other organization.
Although Salmon’s argument needs to be qualified, the Gaussian copula was implicated. Those
interviewed after the crisis may well be influenced by a desire to avoid blame. Fortunately,
however, we had conducted a reasonable number of pilot interviews prior to the crisis;
29 of the 114 interviews, including 8 of the 29 interviews with quants, took place before
summer 2007. This is important to our analysis of criticisms by quants of the Gaussian
copula family of models (MacKenzie and Spears, 2014).
The evaluation culture on which we focus, which we call the culture of no-arbitrage
modelling, is outlined in the next section. We discuss the way in which the culture organ-
izes its activities in relation to an ontology of probabilities (‘risk-neutral’ or ‘martingale’
probabilities) invisible to others. We emphasize the culture’s close connections to hedg-
ing practices in banks’ derivatives departments (practices utterly central to their work;
see Lépinay (2011)). The sections ‘The origins of the Gaussian copula’ and ‘Broken
hearts, corporate defaults and investment banks’ turn to the history of the Gaussian cop-
ula family of models. We refer to a ‘family’ because the Gaussian copula is not a single,
unitary model. It has been developed mathematically in different ways by different peo-
ple in different contexts. Indeed, the modellers we discuss in the section ‘The origins of
the Gaussian copula’ did not explicitly employ copula functions; only after such func-
tions were introduced in this area by Li were earlier one-period models seen as Gaussian
copulas. The section ‘Broken hearts, corporate defaults and investment banks’ shows
how Li imported the idea of a copula function (an idea explained in that section and in
Appendix 2) from actuarial science. The section also sketches differences between the
two most important organizational contexts in which Gaussian copulas were used: the
credit-rating agencies (Moody’s, Standard & Poor’s and Fitch) and the derivatives
departments of investment banks, our focus here. The section ends by sketching the ori-
gins of the canonical (although still not entirely unitary) set of Gaussian models in invest-
ment banking: Gaussian copula base correlation models.
In developing their ‘no-arbitrage’ model of option prices, Black, Scholes and Merton used what had by
the early 1970s become the new specialism’s standard model of share-price movements:
geometric Brownian motion. Brownian motion is the random movement of tiny parti-
cles, for example, of dust and pollen, that results from collisions with the molecules of
the gas or liquid in which they are suspended. The standard mathematical-physics model
of this had been imported into finance. Given geometric Brownian motion and other
simplifying assumptions (e.g. of a ‘frictionless’ market, in which both the underlying
asset and riskless bonds can be bought or sold without incurring brokers’ fees or other
transaction costs), Black, Scholes and Merton showed that it was possible to create a
perfect hedge for an option, in other words, a position in the underlying asset and in risk-
less bonds that, if adjusted appropriately, would have the same payoff as the option,
whatever the path followed by the price of the asset. Since the option and perfect hedge
have identical payoffs, the price of the option must equal the cost of the hedge, or else
there is an opportunity for arbitrage. This simple argument determines the price of the
option, and nowhere in the formula for such a price is there any reference to investors’
attitudes to risk or beliefs about whether the price of the underlying asset was going to
rise or fall.
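To make the replication argument concrete, here is a minimal numerical sketch (ours, not the article’s, with illustrative parameters) of its one-period binomial analogue: a call option’s payoff is replicated with a holding of the stock plus riskless borrowing, the option price equals the cost of that hedge, and the ‘actual’ probability of the stock rising never enters the calculation.

```python
# One-period binomial replication: the option price equals the cost of the
# perfect hedge, whatever the 'actual' probability of an up-move. Illustrative
# numbers only; a sketch of the argument, not the Black-Scholes model itself.

S0 = 100.0          # current stock price
u, d = 1.2, 0.8     # up and down multipliers for the stock over the period
r = 0.05            # riskless interest rate for the period (simple compounding)
K = 100.0           # strike of a European call option

payoff_up = max(S0 * u - K, 0.0)    # option payoff if the stock rises
payoff_down = max(S0 * d - K, 0.0)  # option payoff if the stock falls

# Choose a stock holding (delta) and a riskless bond position whose payoff
# matches the option in both states: delta*S0*u + bond*(1+r) = payoff_up, etc.
delta = (payoff_up - payoff_down) / (S0 * u - S0 * d)
bond = (payoff_up - delta * S0 * u) / (1.0 + r)
hedge_cost = delta * S0 + bond      # cost today of setting up the perfect hedge

# The same number re-expressed with 'martingale' (risk-neutral) probabilities.
q = ((1.0 + r) - d) / (u - d)
price_q = (q * payoff_up + (1.0 - q) * payoff_down) / (1.0 + r)

print(f"hedge: {delta:.3f} shares and {bond:.2f} in bonds; cost = {hedge_cost:.4f}")
print(f"price from risk-neutral probabilities = {price_q:.4f}")  # identical to the hedge cost
```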
The Black–Scholes model could have been taken as simply a surprising result about
an unimportant security, because options were not central to finance in the early 1970s.
Even as modelling of this kind was adopted in investment banking, initially it often
was thought of as simply a cluster of loosely similar practices, sometimes called ‘the
PDE approach’, because it typically involved finding a way of translating a problem in
derivatives pricing into a partial differential equation (PDE) akin to the canonical
Black–Scholes equation.7 Gradually, however, ‘the PDE approach’ was supplanted by
a more systematic conceptualization of no-arbitrage modelling based on the work of
Stanford University applied mathematician and operations researcher Michael
Harrison, his economist colleague David Kreps, and former Stanford PhD student
Stanley Pliska. They proved the two propositions about arbitrage-free, ‘complete’ mar-
kets that have become known as the ‘fundamental theorems of asset pricing’, and in so
doing introduced the idea, key to the ontology of no-arbitrage modelling, of ‘martin-
gale probabilities’.8
Let us give a flavour of martingale probabilities by using ‘the parable of the book-
maker’, with which Martin Baxter and Andrew Rennie (quants at Nomura and Merrill
Lynch, respectively) began an early textbook organized around martingale theory (Baxter
and Rennie, 1996). Consider a race between two horses, and a bookmaker who knows
the actual probabilities of each horse winning: 0.25 for the first horse and 0.75 for the
second. The bookmaker could therefore set the odds on the first horse at ‘3-1 against’,
and on the second at ‘3-1 on’. (Odds of ‘3-1 against’ mean that if a punter bets US$1 and
wins, the bookmaker pays out US$3 plus the original stake. ‘3-1 on’ means that if a bet
of US$3 is successful, the bookmaker pays US$1 plus the original stake. In this simpli-
fied parable, the adjustments to the odds necessary for the bookmaker to earn a profit are
ignored.) Imagine, however, that ‘there is a degree of popular sentiment reflected in the
bets made’, for example, that US$5000 has been bet on the first horse and US$10,000 on
the second (Baxter and Rennie, 1996: 1). Over the long run, a bookmaker who knows the
actual probabilities of each outcome and sets odds accordingly will break even, no matter
how big the imbalance in money staked, but in any particular race he or she might lose
heavily. However, a quite different strategy is available. The bookmaker can set odds not
according to the actual probabilities but according to the amounts bet on each horse; in
this example, ‘2-1 against’ for the first horse, and ‘2-1 on’ for the second. Then,
‘[w]hichever horse wins, the bookmaker exactly breaks even’ (Baxter and Rennie, 1996:
1). As a probability theorist would put it, by adopting this strategy the bookmaker has
changed ‘the measure’, replacing the actual probabilities of each outcome (a quarter and
three-quarters) with probabilities that ensure no loss (one-third and two-thirds). Those
latter probabilities are the loose analogue of the ‘martingale’ probabilities introduced to
finance by Harrison, Kreps and Pliska.
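The arithmetic of the parable is easily checked. The sketch below (ours, not Baxter and Rennie’s) sets the odds from the amounts staked rather than from the actual probabilities and verifies that the bookmaker breaks even whichever horse wins; the stake-implied probabilities of one-third and two-thirds are the ‘changed measure’.

```python
# Baxter and Rennie's bookmaker parable in numbers: odds set from the amounts
# staked, not from the actual probabilities (0.25 and 0.75), give a book that
# breaks even in every outcome.

stakes = {"horse 1": 5000.0, "horse 2": 10000.0}
total = sum(stakes.values())

for horse, stake in stakes.items():
    odds_against = (total - stake) / stake     # 2.0 ('2-1 against') and 0.5 ('2-1 on')
    implied_prob = stake / total               # the changed measure: 1/3 and 2/3
    payout_if_wins = stake * (1.0 + odds_against)   # winnings plus returned stakes
    net = total - payout_if_wins               # bookmaker's position in that outcome
    print(f"{horse}: odds {odds_against:.1f}-to-1 against, "
          f"implied probability {implied_prob:.3f}, bookmaker's net if it wins {net:+.0f}")
```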
The diffusion from academia to banking of the martingale approach was pivotal to
no-arbitrage pricing ceasing to be simply a cluster of mathematical practices and becom-
ing in our terminology an evaluation culture. The shift in measure from actual probabili-
ties to martingale probabilities (or ‘risk-neutral’ probabilities, as they are sometimes
called) is common practice in the derivatives department of investment banks. The math-
ematics of derivatives pricing is then conducted not in the world of actual probabilities
but in a world with a different ontology, the world of martingale probabilities which are
simultaneously less real and more real than actual probabilities. Martingale probabilities
are less real in the sense that they do not correspond to the actual probabilities of events,
but are more real in the sense that (at least in finance) actual probabilities cannot be
determined while martingale or risk-neutral probabilities can be calculated from current
market prices. (Similarly, a bookmaker cannot actually know the true probabilities of the
outcomes of a race, but can easily calculate how much punters have bet with him or her
on each horse.) As an interviewee put it, martingale probabilities ‘have nothing to do
with the past [they are not based on the statistical analysis of past events] or the future
[they are not the actual probabilities of events] but are simply the recoding of … prices’.
Working with martingale probabilities in a world in which prices change continually
through time requires specialist training, because the underlying mathematics –
‘Brownian integrals’, or more generally stochastic calculus (the mathematics of random
processes in continuous time) – is not part of standard university mathematics curricula.
Socialization into the practices of no-arbitrage modelling was originally quite localized.
At MIT, Robert C. Merton taught a notoriously mathematically demanding graduate
course, described to us by two of his students. In the 1990s, however, such modelling
was incorporated into textbooks (e.g. Baxter and Rennie, 1996) and into newly created
Masters courses in mathematical finance. ‘[T]here was an influx of people who were not
scared of performing Brownian integrals and so on’, said an interviewee who worked in
investment banking in the 1990s and 2000s. ‘I think it [no-arbitrage modelling using
martingale probabilities] just generally became the de facto way of doing it [derivatives
pricing]’.
The adoption of no-arbitrage modelling was encouraged by a rough homology
between it and financial practices in the derivatives departments of banks. The empha-
sis on hedging in the investment bank studied by Lépinay (2011) is consistent with our
interviews; despite the widespread impression of reckless risk-taking that the crisis cre-
ated, derivatives departments seek carefully to hedge their portfolios. Such hedging is
incentivized by how traders are paid (see the discussion in our companion article of
‘Day 1 P&L’ where ‘P&L’ is the acronym of profit and loss), and analyses of the expo-
sure of derivatives portfolios to the risks of changing prices, interest rates and so on are
part of a daily routine that several interviewees described. The necessary modelling is
very demanding computationally, even when grids of hundreds or thousands of inter-
connected computers are devoted to it. So the necessary risk-analysis programs are typi-
cally run overnight, while during the trading day no-arbitrage modelling is applied
primarily in pricing (see Lépinay (2011) on ‘pricers’, which are software programs that
run the necessary models). In pricing, all the complication of no-arbitrage modelling
and martingale probabilities reduces to a simple precept: ‘price is determined by hedg-
ing cost’ (McGinty et al., 2004: 20). It is the strategy of Black–Scholes modelling writ
large: find a perfect hedge, a continuously adjusted portfolio of more basic securities
that will have the same payoff as the derivative, whatever happens to the price of the
underlying asset or assets; use that portfolio to hedge the derivative; and use the cost of
the hedge as a guide to the price of the derivative. In actual practice, of course, few if
any hedges are actually perfect, and the price quoted to an external customer will be
greater than that hedging cost, the difference generating the bank’s profit and the trad-
er’s hoped-for Day 1 P&L.
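As a concrete instance of the ‘price is determined by hedging cost’ precept in its canonical setting, here is a minimal sketch (ours, with illustrative parameters) of the Black–Scholes call price together with its delta, the hedge ratio used in constructing and adjusting the replicating portfolio.

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf   # standard normal distribution function

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price and delta of a European call.

    S: price of the underlying asset; K: strike; r: riskless rate (continuously
    compounded); sigma: volatility of the asset; T: time to expiry in years.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = S * N(d1) - K * exp(-r * T) * N(d2)
    delta = N(d1)      # units of the underlying held in the replicating portfolio
    return price, delta

# Illustrative parameters only.
price, delta = black_scholes_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(f"call price = {price:.4f}, hedge ratio (delta) = {delta:.4f}")
```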
The crucial role of a no-arbitrage model as a guide to hedging generates for traders a
practical criterion of a good model. If they implement the hedges implied by the model,
the profitability of the resultant trading position should be ‘flat’ (constant), indicating
(e.g. to risk controllers) that the position is indeed hedged, that risks are fully controlled.
P&L should not ‘swing too much’, said an interviewee: ‘That is what it is always about’.
Nevertheless, not all of the preferences of quants are purely pragmatic. An approach that
can encompass the modelling of a huge range of complex derivatives but yet be boiled
down to the two simple theorems formulated by Harrison, Kreps and Pliska fits well with
the preferences for ‘elegance’ of many of those with advanced mathematical training.
‘The simplicity of it is alluring’, said an interviewee. In the middle of their textbook,
Baxter and Rennie, after recasting the derivation of the exemplary achievement, the
Black–Scholes model, in the more general framework of martingale theory, paused:
with a respectable stochastic model for the stock [geometric Brownian motion], we can replicate
any [derivative]. Not something we had any right to expect. … Something subtle and beautiful
really is going on under all the formalism. … Before we push on, stop and admire the view.
(Baxter and Rennie, 1996: 98)
By now, perhaps, the reader may feel our two articles are a badly telegraphed murder
mystery: the culprit in the financial crisis is surely this strange, abstract culture, with its
capacity to see things – martingale probabilities – of which others are unaware, and even
to appreciate them as beautiful. Not so. The Gaussian copula family of financial models
drew upon resources from that evaluation culture, but was never entirely of that culture.
Oldrich Vasicek had come to the United States as a refugee from the Soviet invasion of Czechoslovakia. He was hired in late 1968 by John McQuown,
head of the Management Science Department of Wells Fargo in San Francisco. McQuown
was a strong supporter of the new field of financial economics, hiring leading figures
such as Black and Scholes as consultants, and financing conferences at which members
of the bank’s staff such as Vasicek were ‘able to sit in and listen, wide-mouthed’ (Vasicek,
interview). Those conferences and his work for the bank introduced Vasicek to the
Black–Scholes model and to Merton’s use of stochastic calculus. In 1983, McQuown
persuaded Vasicek to join him in a new venture, a firm called Diversified Corporate
Loans. Banks’ loan portfolios are often ‘very ill-diversified’, as Vasicek puts it – heavily
concentrated in specific geographical regions or particular industries – and McQuown’s
idea was to enable banks to reduce these concentrations of risk by swapping ‘loans that
the bank has on its books for participation shares’ in a much larger pool of loans origi-
nated by many banks (Vasicek, interview).
‘[I]t didn’t work’, says Vasicek – banks did not take up the idea. However, his model-
ling for it gave birth to what is to our knowledge the first Gaussian copula model in
finance (although it was only retrospectively seen as a Gaussian copula). In order that the
terms of the swap could be negotiated, it was necessary to model the risks both of default
on a loan to one corporation and of multiple defaults in the bigger pool of loans. Financial
economists, especially Merton (1974), had tackled the first problem, but not the second.
It was immediately clear to Vasicek that defaults by different corporations could not
plausibly be treated as statistically independent events. As he put it in an unpublished
note, now in his private papers,
The changes in the value of assets among firms in the economy are correlated, that is, tend to
move together. There are factors common to all firms, such as their dependence on [the]
economy in general. Such common factors affect the asset values of all companies, and
consequently the loss experience on all loans in the portfolio. (Vasicek, 1984: 9)
The task Vasicek set himself, therefore, was to model the value of a pool of loans to
multiple corporations, taking account of the correlation between changes in the values of
different firms’ assets. There was almost no direct empirical data to guide his modelling
(even 20 years later, the econometric estimation of the relevant correlations was still
tricky; see MacKenzie (2011)). Given this absence of data, Vasicek simply reached for
the standard model of asset-value fluctuations, geometric Brownian motion, imposing
the mathematically most familiar correlation structure, that of a multivariate Gaussian
distribution, the analogue for multiple variables of the familiar, bell-shaped, univariate
normal distribution (see Appendix 1). Even with those simple choices, however, he could
not find a general ‘analytical’ solution to his model, a solution that would avoid recourse
to computer simulation. Indeed, no analytical solution has subsequently been found.
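A minimal sketch (ours, with made-up parameters) of the kind of calculation involved: draw correlated standardized ‘asset value’ shocks from a multivariate Gaussian with a common pairwise correlation, treat a loan as defaulting when its shock falls below the threshold implied by the default probability, and build up the loss distribution by simulation. The equicorrelated case sampled here can be generated from a single common factor; in the general case there is no closed form, and simulation of this sort is the obvious recourse.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

n_loans, n_sims = 200, 20_000
p, rho = 0.02, 0.1                   # default probability and pairwise asset correlation
threshold = norm.ppf(p)              # a loan defaults if its standardized shock is below this

# Equicorrelated multivariate Gaussian shocks, sampled via a single common factor:
# X_i = sqrt(rho)*M + sqrt(1-rho)*eps_i, so that corr(X_i, X_j) = rho for i != j.
M = rng.standard_normal((n_sims, 1))                # common factor, one per scenario
eps = rng.standard_normal((n_sims, n_loans))        # idiosyncratic shocks
X = np.sqrt(rho) * M + np.sqrt(1.0 - rho) * eps

loss = (X < threshold).mean(axis=1)  # fraction of the pool defaulting in each scenario
print("mean loss:", loss.mean())                         # close to p = 0.02
print("99th-percentile loss:", np.quantile(loss, 0.99))  # far above p when correlation is present
```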
Vasicek did, however, succeed in formulating an analytically solvable special case,
that of a pool of a large number of equally sized loans, all of which fall due simultane-
ously, have the same probability of default and have the same correlation between the
values of the assets of any pair of borrowers. This set of features leads to Vasicek’s spe-
cial case often being called the ‘large homogeneous pool’ model. He showed that as the
number of loans increases, the probability distribution of different levels of loss on the pool converges to a limiting distribution (equations 1 and 2 in Appendix 1); Figure 2 plots that distribution for two values of the asset correlation.
Figure 2. Probability distribution of losses on large portfolio of loans, each with default probability of 0.02, and identical pairwise asset correlations of 0.1 (upper graph) and 0.99 (lower graph).
probability that the husband will die at or before another age). This coupling results in
the ‘joint’ or ‘multivariate’ distribution function (see Appendix 2). Frees and his col-
leagues showed that taking into account the ‘correlation’ between the mortality of
spouses by using an appropriate copula function reduced the value of a joint annuity by
around 5 percent (Frees et al., 1996: 229).
Their work gave Li a way of linking his training in actuarial science and statistics
to the practical problems of pricing CDOs and similar credit derivatives on which he
was working. He treated a corporation’s default as analogous to a person’s death: the
risks of different corporations defaulting were correlated, just as the mortality risks
of spouses were, and copula functions could be used to model that correlation. This
approach enabled Li to escape the limitation to a single period of Vasicek’s special
case and CreditMetrics, while still retaining a connection to them. Viewed through the lens
of Li’s (1999, 2000) work, the model of correlation in them was a Gaussian copula,
or a copula function that couples marginal distributions to form a multivariate nor-
mal distribution function. Although other copula functions were discussed by Li and
by others who were also exploring the applicability of copula functions to insurance
and finance (such as a group of academic mathematicians in Zürich with strong links
to the financial sector),9 this connection to CreditMetrics – already a well-estab-
lished, widely used model – together with the simplicity and familiarity of the
Gaussian (and the ease of implementing it: commercially available programs facili-
tated the use of correlated, normally distributed random numbers in Monte Carlo
simulation) meant that as others took up copula functions, the Gaussian copula had
the single most salient role.
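A minimal sketch (ours, with made-up hazard rates and correlation) of the step Li’s copula approach made possible: give each corporation a marginal distribution for its time of default (here exponential, that is, a constant default intensity) and couple those marginals with a Gaussian copula, so that default times are correlated in the way the mortality of spouses is in the broken-heart studies.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

n_names, n_sims = 5, 100_000
hazard = 0.02     # constant default intensity: marginal P(default time <= t) = 1 - exp(-hazard*t)
rho = 0.3         # pairwise correlation of the Gaussian copula

# Correlated standard normals (equicorrelated, generated from one common factor).
M = rng.standard_normal((n_sims, 1))
Z = np.sqrt(rho) * M + np.sqrt(1.0 - rho) * rng.standard_normal((n_sims, n_names))

# The copula step: map each normal to a uniform through its distribution function ...
U = norm.cdf(Z)
# ... and each uniform to a default time through the inverse of the marginal distribution.
tau = -np.log(1.0 - U) / hazard

horizon = 5.0
defaulted = tau <= horizon
p_single = defaulted[:, 0].mean()                      # ~ 1 - exp(-0.1), about 0.095
p_joint = (defaulted[:, 0] & defaulted[:, 1]).mean()   # well above p_single**2 when rho > 0
print(f"P(one name defaults within 5 years) = {p_single:.3f}")
print(f"P(two given names both default within 5 years) = {p_joint:.4f}")
```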
Although it was also used to measure banks’ overall credit risks, the most conse-
quential modelling problem to which the Gaussian copula was applied was the evalu-
ation of CDOs, a new class of securities becoming increasingly popular in the late
1990s and 2000s (see Figure 3). The firm, typically a large investment bank, creating
a CDO would set up a legally separate ‘special-purpose vehicle’ (a trust or special-
purpose corporation) that would buy a pool of bonds or loans, raising the money to do
so by selling investors interest-bearing securities that were claims on the cash flow
generated from the pool. The lower ‘tranches’ of securities offered higher ‘spreads’
(interest payments were typically set as a given ‘spread’ or increment over a baseline
interest rate, normally Libor, the London Interbank Offered Rate), but also greater
risk of partial or complete loss on the investment if bonds or loans in the pool
defaulted. For instance, the lowest tranche absorbed the first losses, and only once
that tranche was ‘wiped out’ by these losses did they begin to affect the next-highest,
mezzanine, tranche. In a typical CDO, if correlation among the bonds or loans in the
pool was low, only the holders of the lowest tranche would be at substantial risk of
losing some or all of their investment. If, however, correlation was very high (as in
the 0.99 case in Figure 2), many of the bonds or loans might default, and losses could
affect even the holders of the most senior tranche. So modelling correlation was the
most crucial problem in CDO evaluation, and Gaussian copulas became – and still are
(MacKenzie and Spears, 2014) – the canonical way of doing this.
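The allocation of pool losses to tranches just described can be written as a simple ‘waterfall’. A minimal sketch (ours, with illustrative attachment points): a tranche that covers losses between an attachment point a and a detachment point d absorbs whatever part of the pool loss falls in that band.

```python
def tranche_loss(pool_loss, attachment, detachment):
    """Fraction of a tranche's notional lost, given the loss on the pool (all as
    fractions). The tranche absorbs pool losses between its attachment and
    detachment points: the equity tranche takes the first losses, and the
    mezzanine tranche starts to lose only once the equity tranche is wiped out."""
    absorbed = min(max(pool_loss - attachment, 0.0), detachment - attachment)
    return absorbed / (detachment - attachment)

# Illustrative tranching (attachment, detachment as fractions of the pool).
tranches = {"equity 0-3%": (0.00, 0.03),
            "mezzanine 3-6%": (0.03, 0.06),
            "senior 6-100%": (0.06, 1.00)}

for pool_loss in (0.01, 0.04, 0.20):
    losses = {name: tranche_loss(pool_loss, a, d) for name, (a, d) in tranches.items()}
    print(f"pool loss {pool_loss:.0%}:",
          ", ".join(f"{name} loses {frac:.0%}" for name, frac in losses.items()))
```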
As noted above, Li’s work freed the Gaussian copula family from the restriction of
earlier models to a single time period. It did not, however, free Gaussian models from the
[Figure 3. Schematic of a CDO: a pool of assets (asset 1, asset 2, …, asset n) backing a stack of tranches, including the mezzanine tranche or tranches and the equity tranche.]
Monte Carlo formulations, although our interviews do not contain detailed information
on practice at Moody’s in this respect.)
The situation in the other main context, investment banking, was quite different.
When CDOs first started to become a relatively large business, in the late 1990s, evaluat-
ing a CDO on a ‘once and for all’ basis, akin to practice at the rating agencies, was ade-
quate (typically, the risks of all but the equity tranche were sold on to external parties),
and CreditMetrics or similar one-period models were judged up to the job. In the early
2000s, however, new versions of CDOs became popular, of which the most important
were ‘synthetic’ single-tranche CDOs. Instead of consisting of a special-purpose legal
vehicle that bought a pool of debt instruments, these new CDOs were simply bilateral
contracts between an investment bank and a client (such as a more minor bank or other
institutional investor); such contracts mimicked the returns and risks of a CDO tranche.
They became popular because the CDO tranches in heaviest demand – mezzanine
tranches – formed only a small part of the structure of a traditional ‘cash’ CDO of the
kind shown in Figure 3, and were therefore in short supply.
For the client institution that bought a synthetic CDO tranche, the latter was a security
that paid a decent rate of interest with a modest risk of default. For the investment bank
that sold the tranche, it was a complex derivative. Because the bank did not own the pool
of loans or bonds underpinning the contract (the pool was hypothetical, simply a way of
calculating the interest payments the bank had to make to the client and the losses the lat-
ter might suffer), it had to find other means of hedging itself against losses throughout the
contract’s lifetime. This involved the use of credit default swaps on each of the corpora-
tions whose debts made up the pool. In a credit default swap contract on a corporation –
on Ford, for example – one financial institution pays set ‘insurance premiums’ to a second
financial institution in return for the right, if Ford defaults on its loans or bonds, to hand
over those loans or bonds to the second institution and receive their full face value.
Hedging a synthetic CDO tranche using credit default swaps was roughly analogous to
hedging an option with a position in the underlying shares, and the hedge ratios that were
needed were christened ‘deltas’, the term already used in the options market.
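To illustrate the ‘insurance premium’ structure of a credit default swap described above, here is a minimal sketch (ours, under deliberately crude assumptions: a flat default intensity, a fixed recovery rate, monthly premium payments in arrears and a flat discount rate). It values the two legs of the contract and finds the break-even spread; it is not any bank’s pricing model.

```python
from math import exp

def cds_breakeven_spread(hazard, recovery, r, maturity, steps_per_year=12):
    """Break-even CDS spread: the annual 'insurance premium', per unit of notional,
    at which the expected discounted premiums equal the expected discounted
    default payout of (1 - recovery). Flat default intensity and discount rate."""
    dt = 1.0 / steps_per_year
    protection = 0.0   # expected discounted payout to the protection buyer
    annuity = 0.0      # expected discounted value of paying 1 per year while no default
    for k in range(1, int(maturity * steps_per_year) + 1):
        t_prev, t = (k - 1) * dt, k * dt
        surv_prev, surv = exp(-hazard * t_prev), exp(-hazard * t)
        df = exp(-r * t)
        protection += (1.0 - recovery) * df * (surv_prev - surv)  # default occurs in (t_prev, t]
        annuity += df * surv * dt                                 # premium paid if still no default
    return protection / annuity

spread = cds_breakeven_spread(hazard=0.02, recovery=0.4, r=0.03, maturity=5.0)
print(f"break-even spread = {spread * 1e4:.0f} basis points per year")  # roughly (1 - 0.4) * 2%
```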
The need to adjust credit-default-swap hedges of this sort, often daily, throughout the
lifetime of a synthetic CDO – typically 5, 7 or 10 years – meant that Gaussian copula
models in investment banks had to satisfy demands quite different from those of the ‘one
off’ analyses conducted by credit-rating agencies. Single-period CDO models akin to
CreditMetrics were not well suited to the calculation of deltas, so following Li’s work,
there was rapid, sustained interest in investment banking in full-fledged copula formula-
tions. The need to recalculate deltas and other risk parameters frequently meant that the
computational demands of Monte Carlo simulation were a major problem for investment
banks, not the minor one they were for rating agencies; extracting reliable estimates of a
large set of partial derivatives such as deltas from a Monte Carlo copula model was
vastly more time-consuming than using the model to rate a CDO tranche. In a situation
in which the information technology (IT) departments of many big banks were strug-
gling to meet the computational demands of the overnight runs – ‘some days, everything
is finished at 8 in the morning, some days it’s finished at midday because it had to be
rerun’, an interviewee told MacKenzie in early 2007 – the huge added load of millions
of Monte Carlo scenarios was unwelcome. The requisite computer runs sometimes even
had to be done over weekends. An interviewee described one bank in which the Monte
Carlo calculation of deltas took over 40 hours. Such difficulties could be alleviated by
distributing the necessary computations over grids of hundreds of interconnected com-
puters, but there were often down-to-earth material and physical constraints on the size
of grid: the finite capacity of computer room air-conditioning systems to cope with the
resultant heat, and in some places (especially the City of London) constraints on electric-
ity supply.
The innovative efforts of investment-bank quants were therefore focused on devel-
oping what were christened ‘semi-analytical’ versions of the Gaussian and other copu-
las. These involved less radical simplifications than Vasicek’s model with its ‘analytical’
solution (equations 1 and 2 in Appendix 1), while being sufficiently tractable mathe-
matically that Monte Carlo simulation was not needed. Much faster computational
techniques such as numerical integration and recursion sufficed. A commonly used
simplification was introduced by, among others, Jon Gregory and Jean-Paul Laurent of
the French bank BNP Paribas, first in a confidential May 2001 BNP working paper and
then in Gregory and Laurent (2003) (see also Laurent and Gregory (2005)).10 The sim-
plification was to assume that the correlations among the asset values or default times
of the corporations in a CDO’s pool all arose simply from their common dependence
on one or more underlying factors. Most common of all was to assume a single under-
lying factor, which could be interpreted as ‘the state of the economy’. The advantage
of doing this was that given a particular value of the underlying factor, defaults by
different corporations could then be treated as statistically independent events, simpli-
fying the mathematics, avoiding Monte Carlo simulation and greatly reducing compu-
tation times.
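A minimal sketch (ours) of the conditional-independence trick for a homogeneous pool: conditional on the common factor, each name defaults independently with a probability that depends on the factor, so the conditional number of defaults is binomial, and the unconditional loss distribution is obtained by numerically integrating over the factor rather than by Monte Carlo simulation. The Gauss-Hermite quadrature used here simply stands in for whatever integration scheme a given bank employed.

```python
import numpy as np
from scipy.stats import norm, binom

def loss_distribution_one_factor(n_names, p, rho, n_nodes=100):
    """P(k defaults), k = 0..n_names, for a homogeneous pool under a single-factor
    Gaussian copula, by quadrature over the common factor. A sketch of the
    'semi-analytical' approach, not any bank's implementation."""
    m, w = np.polynomial.hermite_e.hermegauss(n_nodes)   # nodes/weights, weight exp(-m^2/2)
    w = w / np.sqrt(2.0 * np.pi)                         # normalize to the standard normal density
    # Default probability conditional on the factor taking the value m.
    p_cond = norm.cdf((norm.ppf(p) - np.sqrt(rho) * m) / np.sqrt(1.0 - rho))
    k = np.arange(n_names + 1)
    # Conditionally independent defaults: integrate the binomial probabilities over m.
    return (binom.pmf(k[:, None], n_names, p_cond[None, :]) * w[None, :]).sum(axis=1)

probs = loss_distribution_one_factor(n_names=125, p=0.02, rho=0.1)
print("P(no defaults) =", round(probs[0], 4))
print("expected number of defaults =", round((np.arange(126) * probs).sum(), 3))  # about 125 * 0.02
```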
‘Factor reduction’ (as this was sometimes called) and other techniques – such as, for
example, a recursion algorithm introduced by the Bank of America quants Leif Andersen,
Jakob Sidenius and Susanta Basu (2003) – made it possible for ‘single-factor’ Gaussian
copulas and other copula models to run fast enough to be embedded in the hedging and
risk-management practices of investment banks’ derivatives departments. Such tech-
niques quickly became common knowledge among quants, and single-factor, semi-ana-
lytical Gaussian copula models became pervasive in investment banking. This style of
modelling, and the associated hedging practices, helped make the creation and trading of
CDOs culturally familiar to derivatives specialists. As one of them told us in January
2009, it ‘had all the appearance of a derivative business … [models with] parameters that
they could look at and discuss, [which] had names they were familiar with like “correla-
tion”’. His choice of word – ‘appearance’ – is of course telling; as we discuss in our
companion article, he and others felt it was mere appearance. Nevertheless, in his view,
the resemblance ‘did give people some comfort’.
One final step was left in the creation of a de facto industry-standard modelling
approach. It was triggered in 2003–2004 by J.P. Morgan and other investment banks col-
lectively creating a set of tranched ‘index markets’, each of which traded what was in
effect a standardized synthetic CDO with a specified underlying pool of corporate debt
issuers (e.g. a given set of 125 investment-grade corporations domiciled in North
America). Being standardized rather than negotiated ad hoc between a client institution
and an investment bank, this CDO and its tranches could readily be bought and sold by
multiple participants, and so had credible market prices. Indeed, the credibility of those
prices was an important motivation for the creation of the index markets (MacKenzie and
Spears, 2014). Those market prices provided a new way of determining correlation, the
crucial parameter of a Gaussian copula model, turning it from a difficult econometric
task to an easy modelling job. One could simply assume a common level of correlation
among the corporations in the pool underlying a standardized index and run a Gaussian
copula model ‘backwards’ to discover the level of correlation consistent with market
prices, that is, with the traded ‘spreads’ of the tranches. Indeed, correlation itself started
to be reified. It was no longer just a parameter of a model, but something ‘correlation
traders’ (as they started to be called) could trade in the new markets.
A last mathematical snag remained. In the case of many standardized index tranches,
especially mezzanine tranches, running a Gaussian copula model backwards yielded not
a single correlation value consistent with the ‘spread’ on the tranche, but two values. In
other cases, the model would simply fail to calibrate: it could not reproduce market
prices, and no correlation value would be generated. Problems of this kind could be
avoided, a team at J.P. Morgan argued, if correlation modelling shifted to what they
called ‘base correlation’ (explained in Appendix 3). Others in the CDO market quickly
saw the advantages of doing so, and use of ‘base correlation’ rapidly became pervasive
in investment banking.
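The calibration problem just described can be reproduced in a toy setting. In the sketch below (ours), the large-homogeneous-pool distribution of Appendix 1 stands in for a full pricer and the expected tranche loss stands in for the traded spread: the expected loss of an equity tranche falls steadily as the correlation parameter rises, but that of a mezzanine tranche first rises and then falls, which is why running the model ‘backwards’ from a mezzanine spread can return two correlation values or none.

```python
import numpy as np
from scipy.stats import norm

def expected_tranche_loss(p, rho, attach, detach, n_grid=2000):
    """Expected loss, as a fraction of tranche notional, of an [attach, detach]
    tranche on a large homogeneous pool, using the limiting loss distribution of
    Appendix 1. A toy stand-in for a pricer, for illustration only."""
    x = np.clip(np.linspace(attach, detach, n_grid), 1e-9, 1 - 1e-9)
    # Equation (1) of Appendix 1: the distribution function of the pool loss.
    cdf = norm.cdf((np.sqrt(1 - rho) * norm.ppf(x) - norm.ppf(p)) / np.sqrt(rho))
    # E[tranche loss] = average of P(pool loss > x) over the tranche's band.
    return np.mean(1.0 - cdf)

for rho in (0.05, 0.2, 0.4, 0.7, 0.95):
    equity = expected_tranche_loss(0.02, rho, 0.00, 0.03)
    mezz = expected_tranche_loss(0.02, rho, 0.03, 0.06)
    print(f"rho = {rho:.2f}: equity E[loss] = {equity:.3f}, mezzanine E[loss] = {mezz:.3f}")
# For these parameters the equity figure falls monotonically in rho, while the
# mezzanine figure rises and then falls: a given mezzanine value can therefore
# correspond to two correlations, or to none at all.
```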
That 2004–2005 switch was the final stage in the construction of the canonical set of
models, at least in investment banking: Gaussian copula base correlation models. Unlike
in earlier years, when Gaussian copula models could be set against empirical data only
partially and with difficulty, the new standard-index markets provided a ready empirical
test of base correlation models: the output of the latter could be compared directly to the
traded ‘spreads’ in the index markets. At one level, the models passed this test unequivo-
cally. A Gaussian copula base correlation model ‘fits the market exactly’, as an inter-
viewee put it. Appropriately calibrated, the model could replicate precisely the market
spreads. On other levels, though, we found deep disquiet – even in our pre-crisis inter-
views with quants – about the newly emerged standard models, disquiet that forms the
starting point of our companion article.
Conclusion
Cultures borrow; modelling is bricolage; modelling articulates not just with data (which
has not played a large role in our story) but with ‘other goals, resources, and constraints’.
With the exception of Li, with his background in actuarial science, all the contributors to
the development of the Gaussian copula family of models whom we interviewed owed
some degree of allegiance to what we have called the culture of no-arbitrage modelling.
Indeed, as our companion article will describe, that allegiance was sufficiently strong
that in private some of them distanced themselves markedly from the very family of
models, Gaussian copulas, to which they had contributed. In their modelling practices,
however, they had embraced productive heterogeneity. The Gaussian copula was not a
no-arbitrage model, but they adopted and developed it nonetheless. They were bricole-
urs. For all their private qualms, they went for what ‘worked’, not for what was culturally
homogeneous, ‘pure’ or ‘beautiful’.
documents was in each case about to end in calamity. The role of the Gaussian copula in
economic disaster is one of the topics to which we turn in our companion article.
Acknowledgements
These are to be found at the end of our companion article in this journal (MacKenzie and Spears,
2014).
Funding
The research leading to these results has received funding from the European Research Council
under the European Union’s Seventh Framework Programme (FP7/2007-2013/ERC grant agreement no. 291733) and from the UK Economic and Social Research Council (RES-062-23-1958).
Spears’s work was supported by another grant from the latter source (RES-598-25-0054). The
pre-crisis pilot interviewing was supported by an earlier ESRC grant, RES-051-27-0062.
Notes
1. On the history of this model, see MacKenzie (2003) and Mehrling (2005); for an excellent
sample of historical work on financial modelling, see Poitras and Jovanovic (2007).
2. Broadly similar invocations of ‘culture’ in economic life include the ‘cultures of economic
calculation’ (Kalthoff, 2006) and ‘calculative cultures’ (Mikes, 2009).
3. A strong form of evaluation culture may also convey an identity. In other words, it might
involve not just what participants do but also whom they take themselves to be. That was
not an issue we set out to investigate in this research, but our interviewees did occasionally
employ formulations suggestive of identities, such as ‘you’re either a risk-neutral person [i.e.
one who works with risk-neutral or martingale probabilities: see below] or you’re not’.
4. Figure 1 is of course not to be interpreted too literally. Neither cultures nor, indeed, organiza-
tions have clear boundaries; see the delightful parable in Hines (1988).
5. Beunza and Stark (2004) and Lépinay (2011) discover large differences in practices within
the organizations they study, for example, differences among trading ‘desks’ (subgroups). In
our experience, intra-organizational differences of this kind are often manifestations of the
intersection of evaluation cultures and organizations. For example, the practices of deriva-
tives groups in a bank typically differ greatly from those of its asset-backed securities (ABS)
desk, while the practices of derivatives groups often quite closely resemble those of deriva-
tives groups in other banks.
6. A total of 101 of these interviews were conducted by MacKenzie, 10 by Spears, and 3 by our
colleague Iain Hardie. Other articles drawing on this wider corpus include MacKenzie (2011),
which focuses on credit rating, and MacKenzie (2012), which focuses on the subprime credit-
derivatives indices known as the ABX.
7. The textbook that best exemplifies the partial differential equation (PDE) approach, espe-
cially in its early editions, is Hull (2000).
8. First, Harrison, Kreps and Pliska showed that a market is free of arbitrage opportunities if and
only if there is an equivalent martingale measure, a way of assigning new, different probabili-
ties (‘martingale’ probabilities) to the path followed by the price of an underlying asset such
that the price of the asset (discounted back to the present at the riskless rate of interest) ‘drifts’
neither up nor down over time, and the price of the option or other ‘contingent claim’ on the
asset is simply the expected value of its payoff under these probabilities, discounted back
to the present. Second, that martingale measure is unique if and only if the market is com-
plete, in other words if the securities that are traded ‘span’ all possible outcomes, allowing all
contingent claims (contracts such as options whose payoffs depend on those outcomes) to be
hedged with a self-financing replicating portfolio of the type introduced by Black, Scholes
and Merton (Harrison and Kreps, 1979; Harrison and Pliska, 1981).
9. See Embrechts et al. (1999).
10. Broadly analogous factor models were also discussed, for example, by Philipp Schönbucher
(2001).
References
Andersen L, Sidenius J and Basu S (2003) All your hedges in one basket. Risk 16(11): 67–72.
Barnes B, Bloor D and Henry J (1996) Scientific Knowledge: A Sociological Analysis. Chicago,
IL: Chicago University Press.
Baxter M and Rennie A (1996) Financial Calculus: An Introduction to Derivative Pricing.
Cambridge: Cambridge University Press.
Bergman S (2001) CDO Evaluator Applies Correlation and Monte Carlo Simulation to Determine
Portfolio Quality. New York: Standard & Poor’s. Available at: http://www2.standardandpoors.com (accessed 5 May 2006).
Beunza D and Stark D (2004) Tools of the trade: The socio-technology of arbitrage in a Wall Street
trading room. Industrial and Corporate Change 13(2): 369–400.
Black F and Scholes M (1973) The pricing of options and corporate liabilities. Journal of Political
Economy 81(3): 637–654.
Boumans M (1999) Built-in justification. In: Morgan MS and Morrison M (eds) Models as
Mediators: Perspectives on Natural and Social Change. Cambridge: Cambridge University
Press, pp. 66–96.
Breslau D and Yonay Y (1999) Beyond metaphor: Mathematical models in economics as empiri-
cal research. Science in Context 12(2): 317–332.
Callon M (ed.) (1998) The Laws of the Markets. Oxford: Blackwell.
Callon M (2007) What does it mean to say that economics is performative? In: MacKenzie
D, Muniesa F and Siu L (eds) Do Economists Make Markets? On the Performativity of
Economics. Princeton, NJ: Princeton University Press, pp. 311–357.
Cartwright N (1983) How the Laws of Physics Lie. Oxford: Clarendon Press.
Collins H (2004) Gravity’s Shadow: The Search for Gravitational Waves. Chicago, IL: Chicago
University Press.
Dowling D (1999) Experimenting on theories. Science in Context 12(2): 261–273.
Embrechts P, McNeil A and Straumann D (1999) Correlation: Pitfalls and alternatives. Risk 12(5):
69–71.
Frees EW, Carriere J and Valdez E (1996) Annuity valuation with dependent mortality. Journal of
Risk and Insurance 63(2): 229–261.
Galison P (1997) Image and Logic: A Material Culture of Microphysics. Chicago, IL: Chicago
University Press.
Gregory J and Laurent J-P (2003) I will survive. Risk 16(6): 103–107.
Gupton GM, Finger CC and Bhatia M (1997) CreditMetrics: Technical Document. New York:
J.P. Morgan.
Harrison JM and Kreps DM (1979) Martingales and arbitrage in multiperiod securities markets.
Journal of Economic Theory 20(3): 381–408.
Harrison JM and Pliska SR (1981) Martingales and stochastic integrals in the theory of continuous
trading. Stochastic Processes and Their Applications 11(3): 215–260.
Hines RD (1988) Financial accounting: In communicating reality, we construct reality. Accounting,
Organizations and Society 13(3): 251–261.
Hull JC (2000) Options, Futures, and Other Derivatives. Upper Saddle River, NJ: Prentice
Hall.
Kalthoff H (2006) Cultures of economic calculation. In: Strulik T and Willke H (eds) Towards a
Cognitive Mode in Global Finance. Frankfurt; New York: Campus, pp. 156–179.
Keller EF (2002) Making Sense of Life: Explaining Biological Development with Models,
Metaphors, and Machines. Cambridge, MA: Harvard University Press.
Knorr Cetina K (1999) Epistemic Cultures: How the Sciences Make Knowledge. Cambridge, MA:
Harvard University Press.
Knuuttila T and Voutilainen A (2003) A parser as an epistemic artifact: A material view on mod-
els. Philosophy of Science 70(5): 1484–1495.
Kuhn TS (1970) The Structure of Scientific Revolutions, 2nd ed. Chicago, IL: Chicago University
Press.
Laurent J-P and Gregory J (2005) Basket default swaps, CDOs and factor copulas. Journal of Risk
7(4): 103–122.
Lépinay VA (2011) Codes of Finance: Engineering Derivatives in a Global Bank. Princeton, NJ:
Princeton University Press.
Li DX (1999) The valuation of basket credit derivatives. CreditMetrics Monitor 2: 34–50.
Li DX (2000) On default correlation: A copula function approach. Journal of Fixed Income 9(4):
43–54.
MacArthur RH and Wilson EO (1967) The Theory of Island Biogeography. Princeton, NJ:
Princeton University Press.
McGinty L, Beinstein E, Ahluwalia R and Watts M (2004) Credit Correlation: A Guide. London: J.P.
Morgan. Available at: http://www.scribd.com/doc/19606722/JP-Morgan-Credit-Correlation-
A-Guide (accessed 13 January 2014).
MacKenzie D (1981) Statistics in Britain, 1865–1930: The Social Construction of Scientific
Knowledge. Edinburgh: Edinburgh University Press.
MacKenzie D (2003) An equation and its worlds: Bricolage, exemplars, disunity and performativ-
ity in financial economics. Social Studies of Science 33(6): 831–868.
MacKenzie D (2006) An Engine, Not a Camera: How Financial Models Shape Markets.
Cambridge, MA: MIT Press.
MacKenzie D (2011) The credit crisis as a problem in the sociology of knowledge. American
Journal of Sociology 116(6): 1778–1841.
MacKenzie D (2012) Knowledge production in financial markets: Credit default swaps, the ABX
and the subprime crisis. Economy and Society 41(3): 335–359.
MacKenzie D and Spears T (2014) ‘A device for being able to book P&L’: The organizational
embedding of the Gaussian copula. Social Studies of Science 44(3): 418–440.
Mehrling P (2005) Fischer Black and the Revolutionary Idea of Finance. New York: Wiley.
Merton RC (1973) Theory of rational option pricing. Bell Journal of Economics and Management
Science 4(1): 141–183.
Merton RC (1974) On the pricing of corporate debt: The risk structure of interest rates. Journal of
Finance 29(2): 449–470.
Mikes A (2009) Risk management and calculative cultures. Management Accounting Research
20(1): 18–40.
Mirowski P (2010) Inherent vice: Minsky, Markomata, and the tendency of markets to undermine
themselves. Journal of Institutional Economics 6(4): 415–443.
Mol A (2002) The Body Multiple: Ontology in Medical Practice. Durham, NC: Duke University
Press.
Morgan MS (2005) Experiments versus models: New phenomena, interference, and surprise.
Journal of Economic Methodology 12(2): 317–329.
Morgan MS (2012) The World in the Model: How Economists Work and Think. Cambridge:
Cambridge University Press.
Morgan MS and Morrison M (eds) (1999) Models as Mediators: Perspectives on Natural and
Social Science. Cambridge: Cambridge University Press.
Muniesa F and Callon M (2007) Economic experiments and the construction of markets.
In: MacKenzie D, Muniesa F and Siu L (eds) Do Economists Make Markets? On the
Performativity of Economics. Princeton, NJ: Princeton University Press, pp. 163–189.
Poitras G and Jovanovic F (eds) (2007) Pioneers of Financial Economics. Cheltenham: Edward
Elgar Publishing.
Rebonato R (2004) Interest-rate term-structure pricing models: A review. Proceedings of the
Royal Society of London Series A: Mathematical, Physical and Engineering Sciences 460:
667–728.
Rheinberger H-J (1997) Toward a History of Epistemic Things: Synthesizing Proteins in the Test
Tube. Stanford, CA: Stanford University Press.
Salmon F (2009) Recipe for disaster: The formula that killed Wall Street. Wired, 23 February.
Available at: http://www.wired.com/print/techbiz/it/magazine/17-03/wp_quant (accessed 25
February 2009).
Schönbucher PJ (2001) Factor models: Portfolio credit risks when defaults are correlated. Journal
of Risk Finance 3(1): 45–56.
Sismondo S (1999) Models, simulations, and their objects. Science in Context 12(2): 247–260.
Sismondo S (2000) Island biogeography and the multiple domains of models. Biology and
Philosophy 15: 239–258.
Sklar A (1959) Fonctions de répartition à n dimensions et leurs marges. Publications de l’Institut
de Statistique de l’Université de Paris 8: 229–231.
Smith CW (1999) Success and Survival on Wall Street: Understanding the Mind of the Market.
Lanham, MD: Rowman & Littlefield.
Tett G and Larsen PT (2005) Market faith goes out the window as the ‘model monkeys’ lose track
of reality. Financial Times, 20 May, p. 23.
Vasicek O (1984) The philosophy of credit valuation (Privately circulated typescript). Personal
papers, 22 March.
Vasicek O (1987) Probability of loss on loan portfolio (privately circulated addendum to Vasicek
(1991)). KMV Corporation, San Francisco, CA, 12 February.
Vasicek O (1991) Limiting Loan Loss Probability Distribution (privately circulated). San
Francisco, CA: KMV Corporation. Available at: http://www.moodyskmv.com/research/
whitepaper/Limiting_Loan_Loss_Probability_Distribution.pdf (accessed 5 May 2008).
Vasicek O (2002) The distribution of loan portfolio value. Risk 15(12): 160–162.
Author biographies
Donald MacKenzie is a Professor of Sociology at the University of Edinburgh. His current
research is on the sociology of financial markets, and he is focusing in particular on how market
participants and technical systems evaluate (work out the economic worth of) financial securi-
ties. His books include Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance
(MIT Press, 1990) and An Engine, Not a Camera: How Financial Models Shape Markets (MIT
Press, 2006).
Taylor Spears is a Research Fellow in the School of Political Science at the University of Edinburgh.
His current research focuses on the community of derivatives quants and the development and
social shaping of the financial models they build and use. He was previously a Research Fellow at
the Science Policy Research Unit at the University of Sussex.
Appendix 1
Vasicek’s large homogeneous pool model
Vasicek applied to firms’ asset values what had become the standard geometric Brownian
motion model. Expressed as a stochastic differential equation
dA_i = µ_i A_i dt + σ_i A_i dz_i

where A_i is the value of the ith firm’s assets, µ_i and σ_i are the drift rate and volatility of that value, and z_i is a Wiener process or Brownian motion, that is, a random walk in continuous time in which the change over any finite time period is normally distributed with mean zero and variance equal to the length of the period, and changes in separate time periods are independent of each other.
Vasicek (1987, 1991) considers a portfolio of equally sized loans to n such firms, with
each loan falling due at the same time and each with the same probability of default p.
Making the assumption that the correlation, ρ, between the values of the assets of any
pair of firms was the same, Vasicek showed that in the limit in which n becomes very
large, the distribution function of L, the proportion of the loans that suffer default, is
P[L \le x] = N\left(\frac{\sqrt{1-\rho}\,N^{-1}(x) - N^{-1}(p)}{\sqrt{\rho}}\right) \qquad (1)
where N is the distribution function of a standardized normal distribution with zero mean
and unit standard deviation. The corresponding probability density function is
f(x) = \sqrt{\frac{1-\rho}{\rho}}\,\exp\left(-\frac{1}{2\rho}\left(\sqrt{1-\rho}\,N^{-1}(x) - N^{-1}(p)\right)^{2} + \frac{1}{2}\left(N^{-1}(x)\right)^{2}\right) \qquad (2)
(Figure 2 shows graphs of this function with p = 0.02 and two different values of ρ.)
Vasicek went on to show that the assumption of equally sized loans was not necessary,
and that this limit result still held so long as \sum_{i=1}^{n} w_i^{2} tended to zero as n became infinitely
large, where w_i is the proportion of the portfolio made up of loan i: ‘In other words, if the
portfolio contains a sufficiently large number of loans without it being dominated by few
loans much larger than the rest, the limiting distribution provides a good approximation
for the portfolio loss’ (Vasicek, 2002: 160–161).
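For concreteness, a minimal numerical sketch (ours) of equations (1) and (2): the two functions are evaluated directly, and the distribution function is cross-checked against a simulation of a large but finite pool in which the common factor is drawn first and the number of defaults given the factor is then binomial. Parameters match the 0.1-correlation case of Figure 2.

```python
import numpy as np
from scipy.stats import norm

def vasicek_cdf(x, p, rho):
    """Limiting distribution function of the pool loss fraction L, equation (1)."""
    return norm.cdf((np.sqrt(1 - rho) * norm.ppf(x) - norm.ppf(p)) / np.sqrt(rho))

def vasicek_pdf(x, p, rho):
    """Limiting probability density of the pool loss fraction L, equation (2);
    this is the function plotted in Figure 2."""
    y = norm.ppf(x)
    return np.sqrt((1 - rho) / rho) * np.exp(
        -(np.sqrt(1 - rho) * y - norm.ppf(p)) ** 2 / (2 * rho) + 0.5 * y ** 2)

# Cross-check equation (1) against a simulated pool of 10,000 equal loans.
rng = np.random.default_rng(0)
p, rho, n_loans, n_sims = 0.02, 0.1, 10_000, 100_000
M = rng.standard_normal(n_sims)                                      # common factor
p_given_M = norm.cdf((norm.ppf(p) - np.sqrt(rho) * M) / np.sqrt(1 - rho))
L = rng.binomial(n_loans, p_given_M) / n_loans                       # simulated loss fractions

for x in (0.01, 0.02, 0.05):
    print(f"P(L <= {x:.2f}): formula {vasicek_cdf(x, p, rho):.3f}, "
          f"simulation {(L <= x).mean():.3f};  f({x:.2f}) = {vasicek_pdf(x, p, rho):.1f}")
```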
Appendix 2
‘Broken-heart’ syndrome and a bivariate copula function
Let X be the age at death of a woman and Y the age at death of her husband. In the nota-
tion of Frees et al. (1996), let H(x,y) be the joint distribution function of X and Y: that is,
H(x,y) is the probability that the wife dies at or before age x, and that the husband dies at
or before age y. Let F_1(x) and F_2(y) be the corresponding marginal distribution functions; for example, F_1(x) is the probability simply that the wife dies at or before age x.
A copula function C ‘couples’ (Frees et al., 1996: 236) F_1 and F_2, the two marginal distributions, to form the joint distribution H. That is,

H(x, y) = C[F_1(x), F_2(y)]

If C, F_1 and F_2 are all known, then obviously H is known. What Sklar (1959) had shown was that a generalized version of the less obvious converse also held: ‘if H is known and if F_1 and F_2 are known and continuous, then C is uniquely determined’ (Frees et al., 1996: 236).
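A minimal numerical sketch (ours, with made-up marginal probabilities and correlation) of the coupling operation: H is built from the two marginals and a Gaussian copula C, and the joint probability moves away from the product of the marginals as soon as the copula correlation is non-zero.

```python
from scipy.stats import multivariate_normal, norm

def gaussian_copula(u, v, rho):
    """Bivariate Gaussian copula C(u, v): the joint distribution function of two
    standard normals with correlation rho, evaluated at their u- and v-quantiles."""
    return multivariate_normal(mean=[0.0, 0.0],
                               cov=[[1.0, rho], [rho, 1.0]]).cdf([norm.ppf(u), norm.ppf(v)])

# Made-up marginals: F1(x) = P(wife dies at or before age x) and F2(y) likewise
# for the husband, both evaluated at some chosen ages. Illustration only.
F1_x, F2_y = 0.55, 0.45

for rho in (0.0, 0.5):
    H = gaussian_copula(F1_x, F2_y, rho)     # H(x, y) = C[F1(x), F2(y)]
    print(f"rho = {rho}: H(x, y) = {H:.3f}   (product of marginals = {F1_x * F2_y:.3f})")
```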
Appendix 3
Index tranches and base correlation
The credit indices that make ‘correlation trading’ possible are, in effect, a set of standard-
ized, synthetic single-tranche collateralized debt obligations (CDOs). Consider, for
instance, the DJ Tranched TRAC-X Europe, set up by J.P. Morgan and Morgan Stanley,
the example of an index used in McGinty et al. (2004). Traders could buy and sell ‘pro-
tection’ against all losses caused by defaults or other ‘credit events’ suffered by the cor-
porations whose debts were referenced by the index, or against specific levels of loss:
0–3 percent, 3–6 percent, 6–9 percent, 9–12 percent and 12–22 percent. Instead of run-
ning a Gaussian copula ‘backwards’ to work out the implied correlation (the ‘compound’
correlation, in the terminology of the J.P. Morgan team) of each of these tranches, the J.P.
Morgan team recommended inferring from the ‘spreads’ (costs of ‘protection’) on the
tranches that were actually traded what the spreads would be on tranches of 0–6 percent,
0–9 percent, 0–12 percent and 0–22 percent. Running a Gaussian copula backwards on
the traded 0–3 percent tranche and on these untraded tranches generates the ‘base cor-
relations’ implied by the spreads in the index market.
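A minimal sketch (ours) of the ‘backwards’ run for base tranches, again using the large-homogeneous-pool distribution of Appendix 1 as a toy stand-in for a full pricer and expected tranche losses as stand-ins for traded spreads; the target values below are made up. Because the expected loss of a 0-to-x ‘base’ tranche is monotonically decreasing in the correlation parameter, each inversion has a unique solution, which is the practical attraction of base correlation over compound correlation.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def base_tranche_loss(rho, p, detach, n_grid=4000):
    """Expected loss fraction of a 0-to-'detach' (base) tranche on a large
    homogeneous pool, from the limiting distribution of Appendix 1. A toy
    stand-in for a pricer, for illustration only."""
    x = np.clip(np.linspace(0.0, detach, n_grid), 1e-9, 1 - 1e-9)
    cdf = norm.cdf((np.sqrt(1 - rho) * norm.ppf(x) - norm.ppf(p)) / np.sqrt(rho))
    return np.mean(1.0 - cdf)    # average of P(pool loss > x) over the base tranche

# Made-up 'market-implied' expected losses for the 0-3% and 0-6% base tranches;
# in practice the 0-6% figure would be inferred from the traded 0-3% and 3-6% spreads.
p = 0.02
targets = {0.03: 0.45, 0.06: 0.30}

for detach, target in targets.items():
    # Base-tranche expected loss decreases monotonically in rho, so the
    # 'backwards' run has one solution, unlike compound correlation.
    rho_base = brentq(lambda r: base_tranche_loss(r, p, detach) - target, 1e-4, 0.999)
    print(f"0-{detach:.0%} base correlation = {rho_base:.3f}")
```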