Physics and Financial Economics (1776-2014): Puzzles, Ising and Agent-Based Models
1 ETH Zurich – Department of Management, Technology and Economics, Scheuchzerstrasse 7, CH-8092 Zurich, Switzerland
2 Swiss Finance Institute, 40, Boulevard du Pont-d’Arve, Case Postale 3, 1211 Geneva 4, Switzerland
E-mail address: [email protected]
Abstract
This short review presents a selected history of the mutual fertilization between physics and economics, from Isaac Newton and Adam Smith to the present. The fundamentally different perspectives embraced in the theories developed in financial economics compared with physics are dissected with the examples of the volatility smile and of the excess volatility puzzle. The role of the Ising model of phase transitions in modeling social and financial systems is reviewed, with the concepts of random utilities and the logit model as the analog of the Boltzmann factor in statistical physics. Recent extensions in terms of quantum decision theory are also covered. A wealth of models that build on the Ising model and generalize it to account for the many stylized facts of financial markets is briefly discussed. A summary of the relevance of the Ising model and its extensions for understanding financial bubbles and crashes is provided. The review would be incomplete if it did not cover the dynamic field of agent-based models (ABMs), also known as computational economic models, of which the Ising-type models are just special implementations. We formulate the “Emerging Market Intelligence hypothesis” to reconcile the pervasive presence of “noise traders” with the near efficiency of financial markets. Finally, we note that evolutionary biology, more than physics, is now playing a growing role in inspiring models of financial markets.
1 Introduction
The world economy is an extremely complex system with hidden causali-
ties rooted in intensive social and technological developments. Critical events
in such systems caused by endogenous instabilities can lead to huge crises wiping out the wealth of whole nations. On the positive side, positive feedbacks of education and venture capital investing on entrepreneurship can weave a virtuous circle of great potential developments for future generations. Risks, on the downside as well as on the upside, permeate and control the outcome of all human activities and demand high priority.
Traditional economic theory is based on the assumptions of rationality of
economic agents and of their homogeneous beliefs, or equivalently that their
aggregate behaviors can be represented by a representative agent embodying
their effective collective preferences. However, many empirical studies provide strong evidence of the heterogeneity of market agents and of the complexity of market interactions. Interactions between individual market agents, for instance, drive the order book dynamics, which aggregate into rich statistical regularities at the macroscopic level. In finance, there is growing evidence
that equilibrium models and the efficient market hypothesis (EMH), see sec-
tion 7.3 for an extended presentation and generalisation, cannot provide a
fully reliable framework for explaining the stylized facts of price formation
(Fama, 1970). Doubts are further fuelled by studies in behavioral economics
demonstrating limits to the hypothesis of full rationality for real human be-
ings (as opposed to the homo economicus posited by standard economic the-
ory). We believe that a complex systems approach to research is crucial to
capture the inter-dependent and out-of-equilibrium nature of financial mar-
kets, whose total size amounts to at least 300% of the world GDP and of the
cumulative wealth of nations.
From the risk management point of view, it is now well established that
the Value-at-Risk measure, on which prudential Basel I and II recommen-
dations are based, constitutes a weak predictor of the losses during crises.
Realized and implied volatilities as well as inter-dependencies between assets
observed before the critical events are usually low, thus providing a com-
pletely misleading picture of the coming risks. New risk measures that are
sensitive to global deteriorating economic and market conditions are yet to
be fully developed for better risk management.
In today’s high-tech era, policy makers often use sophisticated computer
models to explore the best strategies to solve current political and economic
issues. However, these models are in general restricted to two classes: (i) em-
pirical statistical methods that are fitted to past data and can successfully be
extrapolated a few quarters into the future as long as no major changes oc-
cur; and (ii) dynamic stochastic general equilibrium models (DSGE), which
by construction assume a world always in equilibrium. The DSGE-models
are actively used by central banks, which in part rely on them to take im-
portant decisions such as fixing interest rates. Both of these methods as-
sume that the acting agents are fully rational and informed, and that their
actions will lead to stable equilibria. These models therefore do not encom-
pass out-of-equilibrium phenomena such as bubbles and subsequent crashes
(Kindleberger, 2000; Sornette, 2003), arising among other mechanisms from
herding among not fully rational traders (De Grauwe, 2010). Consequently,
policy makers such as central banks base their expectations on models and
processes that do not contain the full spectrum of possible outcomes and are
caught off guard when extreme events such as the financial crisis in 2008 oc-
cur (Colander et al., 2009). Indeed, during and following the financial crisis
of 2007-2008 in the US that cascaded to Europe in 2010 and to the world,
central bankers in top policy making positions, such as Trichet, Bernanke,
Turner and many others, have expressed significant dissatisfaction with economic theory in general and macroeconomic theory in particular, even suggesting their irrelevance in times of crisis.
Physics as well as other natural sciences, in particular evolutionary bi-
ology and environmental sciences, may provide inspiring paths to break the
stalemate. The analytical and computational concepts and tools developed
in physics in particular are starting to provide important frameworks for a
revolution that is in the making. We refer in particular to the computa-
tional framework using agent-based or computational economic models. In
this respect, let us quote Jean-Claude Trichet, then president of the European Central Bank, speaking in 2010: “First, we have to think about how to characterize the homo economicus at the heart of any model. The atomistic, optimizing agents underlying existing models do not capture behavior during a crisis period. We need to deal better with heterogeneity across agents and the interaction among those heterogeneous agents. We need to entertain alternative motivations for economic choices. Behavioral economics draws on psychology to explain decisions made in crisis circumstances. Agent-based modeling dispenses with the optimization assumption and allows for more complex interactions between agents. Such approaches are worthy of our attention.” And, as Alan Kirman (2012) stressed recently, computational or
algorithmic models have a long and distinguished tradition in economics.
The exciting result is that simple interactions at the micro level can generate
sophisticated structure at the macro level, exactly as observed in financial
time series. Moreover, such ABMs are not constrained to equilibrium condi-
tions. Out-of-equilibrium states can naturally arise as a consequence of the
agents’ behavior, as well as fast-changing external conditions and impacting shocks, and lead to dramatic regime shifts or tipping points. The fact that
such systemic phenomena can naturally arise in agent based models makes
this approach ideal to model extreme events in financial markets. The em-
phasis on ABMs and computational economics parallels a similar revolution
in Physics that developed over the last few decades. Nowadays, most physi-
cists would agree that Physics is based on three pillars: experiments, theory
and numerical simulations, defining the three inter-related disciplines of ex-
perimental physics, theoretical physics and computational physics. Many scientists have devoted their lives to just one of these three. In comparison, computational economics and agent-based models are still in their infancy, but with similarly promising futures.
Given the above-mentioned analogies and relationships between economics and physics, it is noteworthy that these two fields have been life-long companions during the development of the concepts and methods emerging in both. There has been much mutual enrichment and catalysis of cross-fertilization. Since the beginning of the formulation of the scientific approach
in the physical and natural sciences, economists have taken inspiration from
physics, in particular in its success in describing natural regularities and pro-
cesses. Reciprocally, physics has been inspired several times by observations
done in economics.
This review aims at providing some insights on this relationship, past,
present and future. In the next section, we present a selected history of
mutual fertilization between physics and economics. Section 3 attempts to
dissect the fundamentally different perspectives embraced in theories devel-
oped in financial economics compared with physics. For this, the excess
volatility puzzle is presented and analyzed in some depth. We explain the
meaning of puzzles and the difference between empirically founded science
and normative science. Section 4 reviews how the Ising model of phase tran-
sitions has developed to model social and financial systems. In particular, we
present the concept of random utilities and derive the logit model describing
decisions made by agents, as being the analog of the Boltzmann factor in sta-
tistical physics. The Ising model in its simplest form can then be derived as
the optimal strategy for boundedly rational investors facing discrete choices.
The section also summarises the recent developments on non-orthodox de-
cision theory, called quantum decision theory. Armed with these concepts,
section 5 reviews non-exhaustively a wealth of models that build on the Ising
model and generalize it to account for the many stylized facts of financial
markets and more, with a rich future ahead for enlarging the scope of these investigations. Section 6 briefly reviews our work on financial bubbles and crashes
and how the Ising model comes into play. Section 7 covers the literature on
agent-based models, of which the class of Ising models can be considered a
sub-branch. This section also presents the main challenges facing agent-based modelling before it can be widely adopted by economists and policy makers. We
also formulate the “Emerging Market Intelligence hypothesis”, to explain the
pervasive presence of “noise traders” together with the near efficiency of fi-
nancial markets. Section 8 concludes with advice on the need to combine
concepts and tools beyond physics and finance with evolutionary biology.
2.2 Equilibrium
In the second half of the 19th century, the microeconomists Francis Edgeworth and Alfred Marshall drew on the concept of macroequilibrium in gases, understood to be the result of the multitude of incessant micro-collisions of gas particles, which was developed by Clerk Maxwell and Ludwig Boltzmann. Edgeworth and Marshall thus developed the notion that the economy achieves an equilibrium state not unlike that described for a gas. In the same
way that the thermodynamic description of a gas at equilibrium produces a mean-field homogeneous representation that gets rid of the rich heterogeneity of the multitude of micro-states visited by all the particles, the dynamic stochastic general equilibrium (DSGE) models used by central banks for instance do not have agent heterogeneity and focus on a representative agent and a representative firm, in a way parallel to the Maxwell Garnett effective medium theory of dielectrics and effective medium approximations for conductivity and wave propagation in heterogeneous media. In DSGE, equilibrium refers to clearing markets, such that total consumption equals output, or total demand equals total supply, and this takes place between represen-
tative agents. This idea, which is now at the heart of economic modeling,
was not accepted easily by contemporary economists who believed that the
economic world is out-of-equilibrium, with heterogeneous agents who learn
and change their preferences as a function of circumstances. It is important
to emphasize that the concept of equilibrium, which has been much criticized in particular since the advent of the “great financial crisis” starting in 2007 and of the “great recession”, was the result of a long maturation process with
many fights within the economic profession. In fact, the general equilibrium
theory now at the core of mainstream economic modeling is nothing but a
formalization of the idea that “everything in the economy affects everything
else” (Krugman, 1996), reminiscent of mean-field theory or self-consistent ef-
fective medium methods in physics. However, economics has pushed further
than physics the role of equilibrium by ascribing to it a normative role, i.e.,
not really striving to describe economic systems as they are, but rather as
they should be (Farmer and Geanakoplos, 2009).
event sizes is intrinsic.
Such distributions have later been documented for many types of systems,
when describing the relative frequency of the sizes of events they generate,
for instance earthquakes, avalanches, landslides, storms, forest fires, solar
flares, commercial sales, war sizes, and so on (Mandelbrot, 1982; Bak, 1996;
Newman, 2005; Sornette, 2004). Notwithstanding the appeal for a univer-
sal power law description, the reader should be warned that many of the
purported power law distributions are actually spurious or only valid over a
rather limited range (see e.g. Sornette, 2004; Perline, 2005; Clauset et al.,
2009). Moreover, most data in finance show strong dependence, which inval-
idates simple statistical tests such as the Kolmogorov-Smirnov test (Clauset et al., 2009). A drastically different viewpoint is offered by multifractal pro-
cesses, such as the multifractal random walk (Bacry et al., 2001; 2013; Muzy
et al., 2001; 2006), in which the multiscale two-point correlation structure of
the volatility is the primary construction brick, from which derives the power
law property of the one-point statistics, i.e. the distribution of returns (Muzy
et al., 2006). Moreover, the power law regime may even be superseded by
a different “dragon-king” regime in the extreme right tail (Sornette, 2009;
Sornette and Ouillon, 2012).
tion pricing formula (Black and Scholes, 1973; Merton, 1973) and the Capital
Asset Pricing Model (Sharpe, 1964) and its generalized factor models of as-
set valuations (Fama and French, 1993; Carhart, 1997). Similarly, it is no exaggeration to state that much of physics is occupied with modeling fluctu-
ations of (interacting) particles undergoing some kind of correlated random
walk motion. As in physics, empirical analyses of financial fluctuations have
forced the introduction of a number of deviations from the pure naive random
walk model, in the form of power law distribution of log-price increments,
long-range dependence of their absolute values (intermittency and cluster-
ing) and absence of correlation of returns, multifractality of the absolute
value of returns (multi-scale description due to the existence of information
cascades) (Mandelbrot, 1997; Mandelbrot et al., 1997; Bacry et al., 2001)
and many others (Chakraborti et al., 2011). A profusion of models have
been introduced to account for these observations, which build on the GBM
model.
ever, the interest in stable Lévy laws faded as empirical evidence mounted
rapidly to show that the distributions of returns are becoming closer to the
Gaussian law at time scales larger than one month, in contradiction with
the self-similarity hypothesis associated with the Lévy laws (Campbell et
al., 1997; MacKenzie, 2006). In the late 1960s, Benoit Mandelbrot mostly
stopped his research in the field of financial economics. However, inspired by
his forays on the application of power laws to empirical data, he went on to
show that non-differentiable geometries (that he coined “fractal”), previously
developed by mathematicians (Weierstrass, Hölder, Hausdorff among others)
from the 1870s to the 1940s, could provide new ways to deal with the real
complexity of the world (Mandelbrot, 1982). This provided an inspiration
for the econophysicists’ enthusiasm starting in the 1990s to model the mul-
tifractal properties associated with the long-memory properties observed in
financial asset returns (Mandelbrot et al., 1997; Mandelbrot, 1997; Bacry et
al., 2001; 2013; Muzy et al., 2001; 2006; Sornette et al., 2003).
derlying mechanism(s). For instance, Gabaix et al. (2003) attribute the large
movements in stock market activity to the interplay between the power-law
distribution of the sizes of large financial institutions and the optimal trading
of such large institutions. Levy and Levy (2003) and Levy (2005) similarly
emphasize the importance of the Pareto wealth distribution in explaining
the distribution of stock returns, pointing out that the Pareto wealth dis-
tribution, market efficiency, and the power law distribution of stock returns
are closely linked and probably associated with stochastic multiplicative pro-
cesses (Sornette and Cont, 1997; Sornette, 1998a; Malevergne and Sornette,
2001; Huang and Solomon, 2002; Solomon and Richmond, 2002; Malcai et
al., 2002; Lux and Sornette, 2002; Saichev et al., 2010). However, another
strand of literature emphasizes that most large events happen at relatively high frequencies, and seem to be triggered by a sudden drop in liquidity rather than by an outsized order (Farmer et al., 2004; Weber and Rosenow,
2006; Gillemot et al., 2007; Joulin et al., 2008).
3 Thinking as an economist or as a physicist?
3.1 Puzzles and normative science
Economic modeling (and financial economics is just a branch following
the same principles) is based on the hunt for paradoxes or puzzles. The term
puzzle refers to problems posed by empirical observations that do not conform
to the predictions based on theory. Many puzzles have been unearthed by
financial economists. One of the most famous of these paradoxes is called
the excess volatility puzzle, which was discovered by Shiller (1981;1989) and
LeRoy and Porter (1981).
A puzzle emerges typically by the following procedure. A financial mod-
eler builds a model or a class of models based on a pillar of standard eco-
nomic thinking, such as efficient markets, rational expectations, represen-
tative agents, and so on. She then draws some prediction that is then
tested statistically, often using linear regressions on empirical data. A puz-
zle emerges when there is a strong divergence or disagreement between the
model prediction and the regressions, so that something seems at odds, liter-
ally “puzzling” when viewed from the interpreting lenses of the theory. But
rather than rejecting the model as the falsification process in physics dic-
tates (Dyson, 1988), the financial modeler is excited because she has hereby
identified a new “puzzle”: the puzzle is that the “horrible” reality (to quote
Huxley) does not conform to the beautiful and parsimonious (and norma-
tive) theoretical edifice of neo-classical economic thinking. This is a puzzle
because the theory should not be rejected, it cannot be rejected, and there-
fore the data has something wrong in it, or there are some hidden effects that
have to be taken into account that will allow the facts to confirm the theory
when properly treated. In the most generous interpretation, a puzzle points to improvements to be brought to the theory. But the remarkable thing remains
that the theory is not falsified. It is used as the deforming lens to view and
interpret empirical facts.
This rather critical account should be balanced with the benefits obtained
from studying “puzzles” in economics. Indeed, since it has the goal of for-
malising the behavior of individuals and of organisations striving to achieve
desired goals in the presence of scarce resources, economics has played and
is still playing a key role in helping policy makers shape their decisions when governing organisations and nations. To be concerned with how things should
be may be a good idea, especially with the goal of designing “better” sys-
tems. If and when reality deviates from the ideal, this signals to economists
the existence of some “friction” that needs to be considered and possibly
alleviated. Frictions are important within economics and, in fact, are often
modelled.
by accounting for non-Gaussian features of the distribution of returns, long-
range dependence in the volatility as well as other market imperfections that
are neglected in the standard Black-Scholes-Merton theory.
The implied volatility type of thinking is so much ingrained that all
traders and investors are trained in this way, think according to the risks
supposedly revealed by the implied volatility surface and develop correspond-
ingly their intuition and operational implementations. By their behaviors,
the traders actually justify the present use of the implied volatility surface
since, in finance, if everybody believes in something, their collective actions tend to make it happen, a phenomenon known as self-fulfilling prophecies. It is this behavioral bound-
edly rational feedback of traders’ perception on risk taking and hedging that
is neglected in the Black-Scholes-Merton theory. Actually, Potters et al.
(1998) showed, by studying in detail the market prices of options on liquid
markets, that the market has empirically corrected the simple, but inade-
quate Black-Scholes formula to account for the fat tails and the correlations
in the scale of fluctuations. These aspects, although not included in the pric-
ing models, are found very precisely reflected in the price fixed by the market
as a whole.
Sircar and Papanicolaou (1998) showed that a partial account of this
feedback of hedging in the Black-Scholes theory leads to increased volatil-
ity. Wyart and Bouchaud (2007) formulated a nice simple model for self-
referential behavior in financial markets where agents build strategies based
on their belief of the existence of correlation between some flow of informa-
tion and prices. Their belief, followed by action, makes the former become realized and may produce excess volatility and regime shifts that can be associated
with the concept of convention (Orléan, 1995).
that is, p(t) should be an approximation of p∗(t). Thus, we write
$$ p(t) = p^*(t) + \epsilon(t) \,, \qquad (1) $$
where ǫ(t) is the error of appreciation made by the market, and there is no excess volatility paradox. The large volatility of p(t) com-
pared with p∗(t) provides information on the price forming processes, and
in particular tells us that the dynamics of price formation is not optimal
from a fundamental valuation perspective. The corollary is that prices move
for other reasons than fundamental valuations and this opens the door to
investigating the mechanisms underlying price fluctuations.
In contrast, when thinking in equilibrium, the notion of causality or causation ceases to a large degree to play a role in finance. According to finance, the fact that the price should be the logical consequence of the fundamentals does not mean that it derives from them. Instead, the requirement of “rational expectations” (namely that agents’ expectations equal true statistical expected values) places a disproportionate faith in the market mechanism and in collective agent behavior, so that, by a process similar to Adam Smith’s invisible hand, the collective of agents, through the sum of their actions, converges to the right fundamental price with near certainty, in the same way that an empirical average converges to the mean with vanishing fluctuations in the large N limit. Thus, the observed price is the right price and the fundamental price is only approximately estimated because not all fundamentals are known with good precision. And here comes the excess volatility puzzle.
In order to understand all the fuss made in the name of the excess volatil-
ity puzzle, we need to go back to the definition of value. According to the efficient market hypothesis (Fama, 1970; 1991; Samuelson, 1965; 1973), the
observed price p(t) of a share (or of a portfolio of shares representing an
index) equals the mathematical expectation, conditional on all information
available at the time, of the present value p∗ (t) of actual subsequent divi-
dends accruing to that share (or portfolio of shares). This fundamental value
p∗ (t) is not known at time t, and has to be forecasted. The key point is
that the efficient market hypothesis holds that the observed price equals the
optimal forecast of it. Different forms of the efficient markets model differ
for instance in their choice of the discount rate used in the present value, but
the general efficient markets model can be written as
$$ p(t) = \mathrm{E}_t\left[ p^*(t) \right] \,, \qquad (2) $$
where E_t[·] denotes the mathematical expectation conditional on the information available at time t. Any unanticipated movement of the price must thus find its origin in news concerning the fundamental value p∗(t). It follows from the efficient markets model that
$$ p^*(t) = p(t) + \epsilon(t) \,, \qquad (3) $$
where ǫ(t) is a forecast error. The forecast error ǫ(t) must be uncorrelated
with any information variable available at time t, otherwise the forecast would
not be optimal; it would not be taking into account all information. Since
the price p(t) itself constitutes a piece of information at time t, p(t) and
ǫ(t) must be uncorrelated with each other. Since the variance of the sum of
two uncorrelated variables is the sum of their variances, it follows that the
variance of p∗ (t) must equal the variance of p(t) plus the variance of ǫ(t).
Hence, since the variance of ǫ(t) cannot be negative, one obtains that the
variance of p∗ (t) must be greater than or equal to that of p(t). This expresses
the fundamental principle of optimal forecasting, according to which the
forecast must be less variable than the variable forecasted.
Empirically, one observes that the volatility of the realized price p(t) is
much larger than the volatility of the fundamental price p∗ (t), as estimated
from all the sources of fluctuations of the variables entering in the definition
of p∗ (t). This is the opposite of the prediction resulting from expression (3).
This disagreement between theoretical prediction and empirical observation
is then referred to as the “excess volatility puzzle”. This puzzle is consid-
ered by many financial economists as perhaps the most important challenge
to the orthodoxy of efficient markets of neo-classical economics and many
researchers have written on its supposed resolution.
To a physicist, this puzzle is essentially non-existent. Rather than (3), a
physicist would indeed have written expression (1), that is, the observed price
is an approximation of the fundamental price, up to an error of appreciation
of the market. The difference between (3) and (1) is at the core of the
difference in the modeling strategies of economists, that can be called top-
down (or from rational expectations and efficient markets), compared with
the bottom-up or microscopic approach of physicists. According to equation
(1), the fact that the volatility of p(t) is larger than that of the fundamental
price p∗ (t) is not a problem; it simply expresses the existence of a large noise
component in the pricing mechanism.
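To make the difference between the two specifications concrete, the following minimal numerical sketch (in Python, using purely illustrative Gaussian toy processes that are our own assumptions rather than anything taken from the empirical literature) contrasts specification (3), in which the price is the optimal forecast of the fundamental value, with specification (1), in which the price is the fundamental value plus a pricing error:

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

p_star = rng.normal(0.0, 1.0, n)           # fundamental value p*(t) (demeaned), illustrative
signal = p_star + rng.normal(0.0, 1.0, n)  # partial information available at time t

# Specification (3): the price is the optimal forecast of p*; for jointly
# Gaussian variables the conditional expectation is the linear projection
beta = np.cov(p_star, signal)[0, 1] / signal.var()
p_eq3 = beta * signal                      # var(p) <= var(p*): no excess volatility possible
eps = p_star - p_eq3                       # forecast error, uncorrelated with p_eq3

# Specification (1): the price is the fundamental value plus a pricing error
p_eq1 = p_star + rng.normal(0.0, 1.0, n)   # var(p) >= var(p*): "excess volatility" is natural

print("var p*     :", round(float(p_star.var()), 3))
print("var p, (3) :", round(float(p_eq3.var()), 3),
      "  corr(p, eps):", round(float(np.corrcoef(p_eq3, eps)[0, 1]), 3))
print("var p, (1) :", round(float(p_eq1.var()), 3))

The first specification can only produce a price that is less volatile than the fundamental value, whereas the second naturally produces a price that is more volatile than the fundamental value, which is the empirical situation described above.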
Black (1985) himself introduced the notion of “noise traders”, embodying
the presence of traders who are less than fully rational and whose influence
can cause prices and risk levels to diverge from expected levels. Models built on the analogy with the Ising model to capture social influences between investors, which often provide explanations for the excess volatility puzzle, are reviewed in the next section. Let us mention in particular our own candidate
in terms of the “noise-induced volatility” phenomenon (Harras et al., 2012).
4 The Ising model and financial economics
4.1 Roots and sources
The Ising model, introduced initially as a mathematical model of ferro-
magnetism in statistical mechanics (Brush, 1967), is now part of the common
culture of physics, as the simplest representation of interacting elements with
a finite number of possible states. The model consists of a large number of
magnetic moments (or spins) connected by links within a graph, network
or grid. In the simplest version, the spins can only take two values (±1),
which represent the direction in which they point (up or down). Each spin
interacts with its direct neighbors, tending to align together in a common
direction, while the temperature tends to make the spin orientations random.
Due to the competition between the ordering alignment interaction and the disordering effect of temperature, the Ising model exhibits a non-trivial phase transition in systems of two dimensions and above. Beyond ferromagnetism, it has de-
veloped into different generalized forms that find interesting applications in
the physics of ill-condensed matter such as spin-glasses (Mezard et al., 1987)
and in neurobiology (Hopfield, 1982).
There is also a long tradition of using the Ising model and its extensions
to represent social interactions and organization (Weidlich, 1971; 1991; 2000;
Callen and Shapero, 1974; Montroll and Badger, 1974; Galam et al., 1982;
Orlean, 1995). Indeed, the analogy between magnetic polarization and opin-
ion polarization was presented in the early 1970s by Weidlich (1971), in the
framework of “Sociodynamics”, and later by Galam et al. (1982) in a mani-
festo for “Sociophysics”. In this decade, several efforts towards a quantitative
sociology developed (Schelling, 1971; 1978; Granovetter, 1978; 1983), based on models essentially indistinguishable from spin models.
A large set of economic models can be mapped onto various versions of
the Ising model to account for social influence in individual decisions (see
Phan et al. (2004) and references therein). The Ising model is indeed one of
the simplest models describing the competition between the ordering force of
imitation or contagion and the disordering impact of private information or
idiosyncratic noise, which leads already to the crucial concept of spontaneous
symmetry breaking and phase transitions (McCoy and Wu, 1973). It is
therefore not surprising to see it appearing in one guise or another in models
of social imitation (Galam and Moscovici, 1991) and of opinion polarization
(Galam, 2004; Sousa et al., 2005; Stauffer, 2005; Weidlich and Huebner,
2008).
The dynamical updating rules of the Ising model can be shown to describe
the formation of decisions of boundedly rational agents (Roehner and Sor-
nette, 2000) or to result from optimizing agents whose utilities incorporate
a social component (Phan et al., 2004).
An illuminating way to justify the use in social systems of the Ising model
(and of its many generalizations) together with a statistical physics approach
(in terms of the Boltzmann factor) derives from discrete choice models. Dis-
crete choice models consider as elementary entities the decision makers who
have to select one choice among a set of alternatives (Train, 2003). For in-
stance, the choice can be to vote for one of the candidates, or to find the right
mate, or to attend a university among several or to buy or sell a given finan-
cial asset. To develop the formalism of discrete choice models, the concept of
a random utility is introduced, which is used to derive the most prominent discrete choice model, the Logit model, which bears a strong resemblance to Boltzmann statistics. The formulation of a binary choice model of socially
interacting agents then allows one to obtain exactly an Ising model, which
establishes a connection between studies on Ising-like systems in physics and
collective behavior of social decision makers.
conscious. As ǫ(x) is unknown to the researcher, it will be assumed random, hence the name “random utility model”.
The probability for the decision maker to choose x over all other alterna-
tives Y = X − {x} is then given by
$$ P(x) = \mathrm{Prob}\left( U(x) > U(y) \,, \; \forall y \in Y \right) = \mathrm{Prob}\left( V(x) - V(y) > \epsilon(y) - \epsilon(x) \,, \; \forall y \in Y \right) . \qquad (5) $$
Holman and Marley (as cited in Luce and Suppes (1965)) showed that if
the unknown utility ǫ(x) is distributed according to the double exponential
distribution, also called the Gumbel distribution, which has a cumulative
distribution function (CDF) given by
$$ F_G(x) = \exp\!\left( - e^{-(x-\mu)/\gamma} \right) \qquad (6) $$
with positive constants µ and γ, then P (x) defined in expression (5) is given
by the logistic model, which obeys the axiom of independence from irrelevant
alternatives (Luce, 1959). This axiom, at the core of standard utility theory,
states that the probability of choosing one possibility against another from
a set of alternatives is not affected by the addition or removal of other alter-
natives, leading to the name “independence from irrelevant alternatives”.
Mathematically, it can be expressed as follows. Suppose that X represents
the complete set of possible choices and consider S ⊂ X, a subset of these
choices. If, for any element x ∈ X, there is a finite probability p_X(x) ∈ ]0, 1[ of being chosen, then Luce’s choice axiom is defined as
$$ p_X(x) = p_S(x) \cdot p_X(S) \,, \qquad (7) $$
where pX (S) is the probability of choosing any element in S from the set X.
Writing expression (7) for another element y ∈ X and taking the ratios term
by term leads to
$$ \frac{p_S(x)}{p_S(y)} = \frac{p_X(x)}{p_X(y)} \,, \qquad (8) $$
which is the mathematical expression of the axiom of independence from
irrelevant alternatives. The other direction was proven by McFadden (1974),
who showed that, if the probability satisfies the independence from irrelevant
alternatives condition, then the unknown utility ǫ(x) has to be distributed
according to the Gumbel distribution.
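As a small numerical illustration of the axiom of independence from irrelevant alternatives, the following sketch uses the closed-form Logit probabilities derived below in equation (10) (the deterministic utility values are invented for the example) and checks that the ratio of the choice probabilities of two alternatives is unchanged when further alternatives are added to the choice set:

import numpy as np

def logit_probs(V, gamma=1.0):
    # Logit choice probabilities P(x) = exp(V(x)/gamma) / sum_z exp(V(z)/gamma), cf. eq. (10)
    w = np.exp(np.asarray(V, dtype=float) / gamma)
    return w / w.sum()

V_small = {"x": 1.0, "y": 0.2}                          # two alternatives
V_large = {"x": 1.0, "y": 0.2, "z": 0.8, "w": -0.5}     # two "irrelevant" alternatives added

p_small = dict(zip(V_small, logit_probs(list(V_small.values()))))
p_large = dict(zip(V_large, logit_probs(list(V_large.values()))))

# The ratio p(x)/p(y) is the same in both choice sets, equal to exp((1.0 - 0.2)/gamma)
print(p_small["x"] / p_small["y"], p_large["x"] / p_large["y"])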
The derivation of the Logit model from expressions (5) and (6) is as
follows. In equation (5), P(x) is written
$$ P(x) = \mathrm{Prob}\left( V(x) - V(y) + \epsilon(x) > \epsilon(y) \,, \; \forall y \in Y \right) = \int_{-\infty}^{+\infty} \left( \prod_{y \in Y} e^{- e^{-(V(x)-V(y)+\epsilon(x))/\gamma}} \right) f_G(\epsilon(x)) \, d\epsilon(x) \,, \qquad (9) $$
where µ has been set to 0 with no loss of generality and f_G(x) = (1/γ) e^{−x/γ} e^{−e^{−x/γ}} is the probability density function (PDF) associated with the CDF (6). Performing the change of variable u = e^{−ǫ(x)/γ}, we have
$$ P(x) = \int_{0}^{+\infty} \left( \prod_{y \in Y} e^{-u\, e^{-(V(x)-V(y))/\gamma}} \right) e^{-u}\, du = \int_{0}^{+\infty} e^{-u \sum_{y \in Y} e^{-(V(x)-V(y))/\gamma}}\, e^{-u}\, du = \frac{1}{1 + e^{-V(x)/\gamma} \sum_{y \in Y} e^{V(y)/\gamma}} \,. \qquad (10) $$
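The equivalence between the random utility model with Gumbel-distributed ǫ(x) and the closed-form Logit probabilities (10) can also be checked by direct simulation. The following sketch (with illustrative utility values and noise scale) draws the random utilities, selects the alternative with the largest total utility U(x) = V(x) + ǫ(x), and compares the resulting choice frequencies with equation (10):

import numpy as np

rng = np.random.default_rng(42)
V = np.array([1.0, 0.5, 0.0])   # deterministic utilities V(x) of three alternatives (illustrative)
gamma = 0.7                     # noise scale, the analog of the temperature

# Closed-form Logit probabilities of equation (10) (softmax form)
logit = np.exp(V / gamma) / np.exp(V / gamma).sum()

# Monte Carlo: draw Gumbel noise with mu = 0 and scale gamma, pick the argmax of V + eps
n = 200_000
eps = rng.gumbel(loc=0.0, scale=gamma, size=(n, V.size))
choices = np.argmax(V + eps, axis=1)
mc = np.bincount(choices, minlength=V.size) / n

print("Logit, eq. (10):", np.round(logit, 4))
print("Monte Carlo    :", np.round(mc, 4))   # agrees with eq. (10) up to sampling error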
the mathematical theory of separable Hilbert spaces. We are not suggesting that the brain operates according to the rules of quantum physics. It is just that the mathematics of Hilbert spaces, used to formalize quantum mechanics, provides the simplest generalization of the probability theory axiomatized
by Kolmogorov, which allows for entanglement. This mathematical structure
captures the effect of superposition of composite prospects, including many
incorporated intentions, which allows one to describe a variety of interesting
fallacies and anomalies that have been reported to particularize the decision
making of real human beings. The theory characterizes entangled decision
making, non-commutativity of subsequent decisions, and intention interfer-
ence.
Two ideas form the foundation of the QDT developed by Yukalov and Sornette (2008-2013). First, our decisions may be intrinsically probabilistic, i.e., when confronted with the same set of choices (and having forgotten the previous occurrences), we may choose different alternatives. Second, the attraction to a given option (say choosing where to vacation among the following locations: Paris, New York, Rome, Hawaii or Corsica) will depend in significant part on the presentation
of the other options, reflecting a genuine “entanglement” of the propositions.
The consideration of composite prospects using the mathematical theory of
separable Hilbert spaces provides a natural and general foundation to capture
these effects. Yukalov and Sornette (2008-2013) demonstrated how the violation of Savage’s sure-thing principle (disjunction effect) can be explained
quantitatively as a result of the interference of intentions, when making deci-
sions under uncertainty. The sign and amplitude of the disjunction effects in
experiments are accurately predicted using a theorem on interference alter-
nation, which connects aversion-to-uncertainty to the appearance of negative
interference terms suppressing the probability of actions. The conjunction
fallacy is also explained by the presence of the interference terms. A series
of experiments have been analysed and shown to be in excellent agreement
with a priori evaluation of interference effects. The conjunction fallacy was
also shown to be a sufficient condition for the disjunction effect and novel
experiments testing the combined interplay between the two effects are sug-
gested.
Our approach is based on the von Neumann theory of quantum measure-
ments (von Neumann, 1955), but with an essential difference. In quantum
theory, measurements are done over passive systems, while in decision the-
ory, decisions are taken by active human beings. Each of the latter is characterized by their own strategic state of mind, specific to the given decision
maker. Therefore, expectation values in quantum decision theory are defined
with respect to the decision-maker strategic state. In contrast, in standard
measurement theory, expectation values are defined through an arbitrary
orthonormal basis.
In order to give a feeling of how QDT works in practice, let us delineate
its scheme. We refer to the published papers (Yukalov and Sornette, 2008,
2009a,b,c; 2010a,b; 2011) for more in-depth presentations and preliminary
tests. The first key idea of QDT is to consider the so-called prospects, which
are the targets of the decision maker. Let a set of prospects πj be given,
pertaining to a complete transitive lattice
L ≡ {πj : j = 1, 2, . . . , NL } . (12)
The aim of decision making is to find out which of the prospects is the most
favorable.
There can exist two types of setups. One is when a number of agents,
say N, choose between the given prospects. Another type is when a single
decision maker takes decisions in a repetitive manner, for instance taking
decisions N times. These two cases are treated similarly.
To each prospect π_j, we put into correspondence a vector |π_j⟩ in the Hilbert space M, called the mind space, and the prospect operator
$$ \hat{P}(\pi_j) \equiv | \pi_j \rangle \langle \pi_j | \,. $$
The expectation values of these prospect operators, taken with respect to the strategic state of the decision maker, define the prospect probabilities p(π_j), whose set defines a probability measure on the prospect lattice L, such that
$$ \sum_{\pi_j \in L} p(\pi_j) = 1 \,, \qquad 0 \le p(\pi_j) \le 1 \,. \qquad (14) $$
through the expected utility U(πj ) of prospects. The utilities are calculated
in the standard way accepted in classical utility theory. By this definition
$$ \sum_{\pi_j \in L} f(\pi_j) = 1 \,, \qquad 0 \le f(\pi_j) \le 1 \,. $$
The attraction factors are required to sum to zero over the prospect lattice L,
$$ \sum_{\pi_j \in L} q(\pi_j) = 0 \,. \qquad (16) $$
In addition, the average absolute value of the attraction factor is estimated by the quarter law
$$ \frac{1}{N_L} \sum_{\pi_j \in L} | q(\pi_j) | = \frac{1}{4} \,. \qquad (17) $$
These properties (16) and (17) allow us to quantitatively define the prospect
probabilities (13).
The prospect π1 is more useful than π2 , when f (π1 ) > f (π2 ). And π1
is more attractive than π2 , if q(π1 ) > q(π2 ). The comparison between the
attractiveness of prospects is done on the basis of the following criteria: more
certain gain, more uncertain loss, higher activity under certainty, and lower
activity under uncertainty and risk.
Finally, decision makers choose the preferable prospect, whose probability
(13) is the largest. Therefore, a prospect can be more useful, while being less
attractive, as a result of which the choice can be in favor of the less useful
prospect. For instance, the prospect π1 is preferable over π2 when
$$ f(\pi_1) - f(\pi_2) > q(\pi_2) - q(\pi_1) \,. \qquad (18) $$
This inequality illustrates the situation and explains why paradoxes appear in classical decision making, while in QDT such paradoxes never arise. The existence of the attraction factor is due to choice under risk and uncertainty. If the latter were absent, we would recover classical decision theory, based on the maximization of expected utility, together with its variety of known paradoxes.
The comparison with experimental data is done as follows. Let Nj agents
of the total number N choose the prospect πj . Then the aggregate probability
of this prospect is given (for a large number of agents) by the frequency
$$ p_{\exp}(\pi_j) = \frac{N_j}{N} \,. \qquad (19) $$
This experimental probability is to be compared with the theoretical prospect
probability (13), using the standard tools of statistical hypothesis testing. In
this way, QDT provides a practical scheme that can be applied to realistic
problems. The development of the scheme for its application to various kinds
of decision making in psychology, economics, and finance, including temporal
effects, provides interesting challenges.
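As a toy illustration of this scheme, the following sketch assumes that the prospect probability decomposes additively as p(π_j) = f(π_j) + q(π_j), which is consistent with inequality (18) and with properties (16) and (17); the numerical values of the utility factors, attraction factors and “experimental” counts are invented for the example:

import numpy as np

f = np.array([0.55, 0.45])    # utility factors: sum to 1 (classical expected-utility part)
q = np.array([-0.25, 0.25])   # attraction factors: sum to 0 (16), mean |q| = 1/4 (quarter law (17))

p = f + q                     # prospect probabilities, assumed additive decomposition of (13)
assert abs(p.sum() - 1.0) < 1e-12 and np.all((p >= 0.0) & (p <= 1.0))

choice = int(np.argmax(p))    # the decision maker selects the prospect with the largest probability
print("p =", p, "-> prospect", choice + 1, "is chosen, although prospect 1 is more useful (f1 > f2)")

# Comparison with hypothetical experimental frequencies, equation (19)
N_j = np.array([31, 69])
print("p_exp =", N_j / N_j.sum())

In this example the less useful but more attractive prospect is selected, the pattern that underlies the explanations of the disjunction effect and conjunction fallacy mentioned above.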
Recently, Yukalov and Sornette (2013a) have also been able to define
quantum probabilities of composite events, thus introducing for the first time
a rigorous and coherent generalisation of the probability of joint events. This
problem is actually of high importance for the theory of quantum measure-
ments and for quantum decision theory that is a part of measurement theory.
Yukalov and Sornette (2013a) showed that the Lüders probability of consecutive
measurements is a transition probability between two quantum states and
that this probability cannot be treated as a quantum extension of the classi-
cal conditional probability. Similarly, the Wigner distribution was shown to
be a weighted transition probability that cannot be accepted as a quantum
extension of the classical joint probability. Yukalov and Sornette (2013a) sug-
gested the definition of quantum joint probabilities by introducing composite
events in multichannel measurements. Based on the notion of measurements
under uncertainty, they demonstrated that the necessary condition for mode
interference is the entanglement of the composite prospect together with the
entanglement of the composite statistical state. Examples of applications
include quantum games and systems with multimode states, such as atoms,
molecules, quantum dots, or trapped Bose-condensed atoms with several co-
herent modes (Yukalov et al., 2013).
either buy or sell only one unit of the asset. This is quantified by the buy
state si = +1 or the sell state si = −1. Each agent can trade at time t − 1 at
the price p(t − 1) based on all previous information up to t − 1. We assume
that the asset price variation is determined by the following equation
$$ \frac{p(t) - p(t-1)}{p(t-1)} = F\!\left( \frac{\sum_{i=1}^{N} s_i(t-1)}{N} \right) + \sigma\, \eta(t) \,, \qquad (20) $$
where σ is the price volatility per unit time and η(t) is a white Gaussian noise
with unit variance that represents for instance the impact resulting from the
flow of exogenous economic news.
The first term in the r.h.s. of (20) is the price impact function describ-
ing the possible imbalance between buyers and sellers. We assume that the
function F (x) is such that F (0) = 0 and is monotonically increasing with
its argument. Kyle (1985) derived his famous linear price impact function
F (x) = λx within a dynamic model of a market with a single risk neutral in-
sider, random noise traders, and competitive risk neutral market makers with
sequential auctions. Huberman and Stanzl (2004) later showed that, when
the price impact of trades is permanent and time-independent, only linear
price-impact functions rule out quasi-arbitrage, the availability of trades that
generate infinite expected profits. We note however that this normative linear price impact has been challenged by physicists. Farmer et al.
(2013) report empirically that market impact is a concave function of the
size of large trading orders. They rationalize this observation as resulting
from the algorithmic execution of splitting large orders into small pieces and
executing incrementally. The approximate square-root impact function has
been earlier rationalized by Zhang (1999) with the argument that the time
needed to complete a trade of size L is proportional to L and that the unob-
servable price fluctuations obey a diffusion process during that time. Toth
et al. (2011) propose that the concave market impact function reflects the
fact that markets operate in a critical regime where liquidity vanishes at the
current price, in the sense that all buy orders at prices less than the current price have been satisfied, and all sell orders at prices larger than the current price
have also been satisfied. The studies (Bouchaud et al., 2009; Bouchaud,
2010), which distinguish between temporary and permanent components of
market impact, show important links between impact function, the distri-
bution of order sizes, optimization of strategies and dynamical equilibrium.
Kyle (private communication, 2012) and Gatheral and Schied (2013) point out that the issue is far from resolved, due to price manipulation, dark pools, predatory trading and the absence of well-behaved optimal order execution strategies.
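Zhang’s diffusion argument can be illustrated in a few lines: if executing an order of size L takes a time proportional to L while the price performs a random walk during the execution, the typical price change accumulated over the execution grows like the square root of L. The unit-variance Gaussian increments below are an illustrative assumption:

import numpy as np

rng = np.random.default_rng(7)
n_trials = 5000

for L in (10, 100, 1000):
    # price change accumulated over L unit time steps of a random walk
    dp = rng.normal(0.0, 1.0, size=(n_trials, L)).sum(axis=1)
    mean_abs = float(np.abs(dp).mean())
    print(f"L = {L:5d}   mean |impact| = {mean_abs:7.2f}   ratio to sqrt(L) = {mean_abs / np.sqrt(L):.3f}")

The last column is approximately constant (close to sqrt(2/pi) for Gaussian increments), which is the square-root impact law.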
Returning to the implication of expression (20), at time t − 1, just when
the price p(t−1) has been announced, the trader i defines her strategy si (t−1)
that she will hold from t−1 to t, thus realizing the profit (p(t)−p(t−1))si (t−
1). To define si (t − 1), the trader calculates her expected profit E[P &L],
given the past information and her position, and then chooses si (t − 1) such
that E[P &L] is maximal. Within the rational expectation model, all traders
have full knowledge of the fundamental equation (20) of their financial world.
However, they cannot poll the positions {sj } that all other traders will take,
which will determine the price drift according to expression (20). The next
best thing that trader i can do is to poll her N(i) “neighbors” and construct
her prediction for the price drift from this information. The trader needs additional information, namely the a priori probabilities P+ and P− for each trader to buy or sell. The probabilities P+ and P− are the only pieces
of information that she can use for all the traders that she does not poll
directly. From this, she can form her expectation of the price change. The
simplest case corresponds to a neutral market where P+ = P− = 1/2. To
allow for a simple discussion, we restrict the discussion to the linear impact
function F (x) = λx. The trader i thus expects the following price change
$$ \lambda\, \frac{\sum^{*}_{j=1,\dots,N(i)} s_j(t-1)}{N} + \sigma\, \hat{\eta}_i(t) \,, \qquad (21) $$
where the index j runs over the neighborhood of agent i and η̂i (t) represents
the idiosyncratic perception of the economic news as interpreted by agent
i. Notice that the sum is now restricted to the N(i) neighbors of trader
i because the sum over all other traders, whom she cannot poll directly,
averages out. This restricted sum is represented by the star symbol. Her
expected profit is thus
$$ \mathrm{E}[P\&L] = \left( \lambda\, \frac{\sum^{*}_{j=1,\dots,N(i)} s_j(t-1)}{N} + \sigma\, \hat{\eta}_i(t) \right) p(t-1)\, s_i(t-1) \,. \qquad (22) $$
Since p(t − 1) > 0, the expected profit is maximized by taking the position whose sign is that of the term in parentheses, namely
$$ s_i(t-1) = \mathrm{sign}\!\left( \lambda\, \frac{\sum^{*}_{j=1,\dots,N(i)} s_j(t-1)}{N} + \sigma\, \hat{\eta}_i(t) \right) . \qquad (23) $$
Equation (23) is nothing but the kinetic Ising model with Glauber dynamics
if the random innovations η̂i (t) are distributed with a Logistic distribution
(see demonstration in the Appendix of (Harras et al., 2012)).
This evolution equation (23) belongs to the class of stochastic dynamical
models of interacting particles (Liggett, 1995; 1997), which have been much
studied mathematically in the context of physics and biology. In this model
(23), the tendency towards imitation is governed by λ/N, which is called the
coupling strength; the tendency towards idiosyncratic behavior is governed
by σ. Thus the value of λ/N relative to σ determines the outcome of the
battle between order (imitation process) and disorder, and the development
of collective behavior. More generally, expression (23) provides a convenient
formulation to model imitation, contagion and herding and many generaliza-
tions have been studied that we now briefly review.
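Before turning to these generalizations, the basic model of equations (20) and (23) can be illustrated with a minimal simulation sketch. The implementation choices below, namely a linear impact function F(x) = λx, Gaussian idiosyncratic innovations η̂_i, a synchronous update of all traders and a ring network in which each trader polls k neighbors, as well as the parameter values, are our own illustrative assumptions and are not specified by the model above:

import numpy as np

rng = np.random.default_rng(0)
N, k, T = 200, 20, 5000
lam, sigma = 1.0, 0.08        # lam/N relative to sigma sets the order/disorder balance

s = rng.choice([-1, 1], size=N)                                    # buy/sell states s_i = +/-1
nbrs = np.array([[(i + d) % N for d in range(1, k + 1)] for i in range(N)])

returns = np.empty(T)
for t in range(T):
    field = lam * s[nbrs].sum(axis=1) / N                          # polled-neighbor drift term, eq. (21)
    s = np.where(field + sigma * rng.normal(size=N) >= 0, 1, -1)   # decision rule, eq. (23)
    returns[t] = lam * s.mean() + sigma * rng.normal()             # relative price change, eq. (20) with F(x) = lam*x

excess_kurtosis = float(((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3)
print("final imbalance |mean s| =", round(abs(float(s.mean())), 3),
      "  excess kurtosis of returns =", round(excess_kurtosis, 2))

Varying λ/N relative to σ moves such a simulation between a disordered regime with nearly Gaussian returns and an ordered herding regime, in line with the discussion above.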
social interactions exceed a particular threshold and decision making is non-
cooperative. As expected in the neighborhood of phase transitions, a large
susceptibility translates into the observation that small changes in private
utility lead to large equilibrium changes in average behavior. The originality
of Brock and Durlauf (2001) is to be able to introduce heterogeneity and
uncertainty into the microeconomic specification of decision making, as well
as to derive an implementable likelihood function that allows one to calibrate
the agent-based model onto empirical data.
Kaizoji (2000) used an infinite-range Ising model to embody the tendency
of traders to be influenced by the investment attitude of other traders, which
gives rise to regimes of bubbles and crashes interpreted as due to the collec-
tive behavior of the agents at the Ising phase transition and in the ordered
phase. A bias in the agents’ idiosyncratic preferences corresponds to the existence of an effective “magnetic field” in the language of physics. Because the so-
cial interactions compete with the biased preference, a first-order transition
exists, which is associated with the existence of crashes.
Bornholdt (2001) studied a simple spin model in which traders interact at
different scales with interactions that can be of opposite signs, thus leading
to “frustration”, and traders are also related to each other via their aggre-
gate impact on the price. The frustration causes metastable dynamics with
intermittency and phases of chaotic dynamics, including phases reminiscent
of financial bubbles and crashes. While the model exhibits phase transitions,
the dynamics deemed relevant to financial markets is sub-critical.
Krawiecki et al. (2002) used an Ising model with stochastic coupling
coefficients, which leads to volatility clustering and a power law distribution
of returns at a single fixed time scale.
Michard and Bouchaud (2005) have used the framework of the Random
Field Ising Model, interpreted as a threshold model for collective decisions
accounting both for agent heterogeneity and social imitation, to describe
imitation and social pressure found in data from three different sources: birth
rates, sales of cell phones and the drop of applause in concert halls.
Nadal et al. (2005) developed a simple market model with binary choices
and social influence (called “positive externality” in economics), where the
heterogeneity is either of the type represented by the Ising model at finite
temperature (known as annealed disorder) in a uniform external field (the
random utility models of Thurstone) or is fixed and corresponds to a particular case of the quenched disorder model known as a random field Ising model, at zero temperature (called the McFadden and Manski model). A
novel first-order transition, from a phase with a high price and a small number of buyers to another one with a low price and a large number of buyers, arises
when the social influence is strong enough. Gordon et al. (2009) further
extend this model to the case of socially interacting individuals that make
a binary choice in a context of positive additive endogenous externalities.
Specifically, the different possible equilibria depend on the distribution of id-
iosyncratic preferences, called here Idiosyncratic Willingnesses to Pay (IWP), and there are regimes where several equilibria coexist, associated with a non-monotonic demand function as a function of price. This model is again
strongly reminiscent of the random field Ising model studied in the physics
literature.
Grabowski and Kosinski (2006) modeled the process of opinion forma-
tion in the human population on a scale-free network, taking into account a hierarchical, two-level structure of interpersonal interactions, as well as the spatial localization of individuals. With Ising-like interactions together with
a coupling with a mass media “field”, they observed several transitions and
limit cycles, with non-standard “freezing of opinions by heating” and the re-
building of the opinions in the population by the influence of the mass media
at large annealed disorder levels (large temperature).
Sornette and Zhou (2006a) and Zhou and Sornette (2007) generalized a
stochastic dynamical formulation of the Ising model (Roehner and Sornette,
2000) to account for the fact that the imitation strength between agents may
evolve in time with a memory of how past news have explained realized mar-
ket returns. By comparing two versions of the model, which differ on how
the agents interpret the predictive power of news, they show that the stylized
facts of financial markets are reproduced only when agents are overconfident and misattribute the success of news in predicting returns to the existence of herding effects, thereby providing positive feedbacks leading to the model
functioning close to the critical point. Other stylized facts, such as a multi-
fractal structure characterized by a continuous spectrum of exponents of the
power law relaxation of endogenous bursts of volatility, are well reproduced
by this model of adaptation and learning of the imitation strength. Harras
et al. (2012) examined a different version of the Sornette-Zhou (2006a) formulation to study the influence of a rapidly varying external signal on the Ising collective dynamics for intermediate noise levels. They discovered the
phenomenon of “noise-induced volatility”, characterized by an increase of
the level of fluctuations in the collective dynamics of bistable units in the
presence of a rapidly varying external signal. Paradoxically, and different
from “stochastic resonance”, the response of the system becomes uncorre-
lated with the external driving force. Noise-induced volatility was proposed
to be a possible cause of the excess volatility in financial markets, of en-
hanced effective temperatures in a variety of out-of-equilibrium systems, and
of strong selective responses of immune systems of complex biological organ-
isms. Noise-induced volatility is robust to the existence of various network
topologies.
Horvath and Kuscsik (2007) considered a network with reconnection dy-
namics, with nodes representing decision makers modeled as (“intra-net”)
neural spin network with local and global inputs and feedback connections.
The coupling between the spin dynamics and the network rewiring produces
several of the stylized facts of standard financial markets, including the Zipf
law for wealth.
Biely et al. (2009) introduced an Ising model in which spins are dynam-
ically coupled by links in a dynamical network in order to represent agents
who are free to choose their interaction partners. Assuming that agents
(spins) strive to minimize an “energy”, the spins as well as the adjacency
matrix elements organize together, leading to an exactly soluble model with
reduced complexity compared with the standard fixed links Ising model.
Motivated by market dynamics, Vikram and Sinha (2011) extend the Ising
model by assuming that the interaction dynamics between individual com-
ponents is mediated by a global variable making the mean-field description
exact.
Harras and Sornette (2011) studied a simple agent-based model of bubbles and crashes to clarify how their proximate triggering factors relate to their fundamental mechanism. Taking into account three sources of informa-
tion, (i) public information, i.e. news, (ii) information from their “friendship”
network and (iii) private information, the boundedly rational agents continu-
ously adapt their trading strategy to the current market regime by weighting
each of these sources of information in their trading decision according to
its recent predicting performance. In this set-up, bubbles are found to originate from a random lucky streak of positive news, which, due to a feedback mechanism of this news on the agents’ strategies, develops into a transient collective herding regime. Paradoxically, it is the attempt of investors to
adapt to the current market regime that leads to a dramatic amplification
of the price volatility. A positive feedback loop is created by the two dom-
inating mechanisms (adaptation and imitation), which, by reinforcing each
other, result in bubbles and crashes. The model offers a simple reconciliation
of the two opposite (herding versus fundamental) proposals for the origin of
crashes within a single framework and justifies the existence of two popu-
lations in the distribution of returns, exemplifying the concept that crashes
are qualitatively different from the rest of the price moves (Johansen and
Sornette, 1998; 2001/2002; Sornette, 2009; Sornette and Ouillon, 2012).
Inspired by the bankruptcy of Lehman Brothers and its consequences on
the global financial system, Sieczka et al. (2011) developed a simple model
in which the credit rating grades of banks in a network of interdependen-
cies follow a kind of Ising dynamics of co-evolution with the credit ratings
of the other firms. The dynamics resembles the evolution of a Potts spin
glass with the external global field corresponding to a panic effect in the
economy. They find a global phase transition, between paramagnetic and
ferromagnetic phases, which explains the large susceptibility of the system
to negative shocks. This captures the impact of the Lehman default event,
quantified as having an almost immediate effect in worsening the credit wor-
thiness of all financial institutions in the economic network. The model is
amenable to testing different policies. For instance, bailing out the first few
defaulting firms does not solve the problem, but does have the effect of al-
leviating considerably the global shock, as measured by the fraction of firms
that are not defaulting as a consequence.
Kostanjcar and Jeren (2013) defined a generalized Ising model of financial
markets with a kind of minority-game payoff structure and strategies that
depend on order sizes. Because their agents focus on the change of their
wealth, they find that the macroscopic dynamics of the aggregated set of
orders (reflected into the market returns) remains stochastic even in the
thermodynamic limit of a very large number of agents.
Bouchaud (2013) proposed a general strategy for modeling collective
socio-economic phenomena with the Random Field Ising model (RFIM) and
variants, which is argued to provide a unifying framework to account for
the existence of sudden ruptures and crises. The variants of the RFIM cap-
ture destabilizing self-referential feedback loops, induced either by herding
or trending. An interesting insight is the determination of conditions under
which Adam Smith’s invisible hand can fail badly at solving simple coor-
dination problems. Moreover, Bouchaud (2013) stresses that most of these
models assume explicitly or implicitly the validity of the so-called “detailed-
balance” in decision rules, which is not a priori necessary to describe real
decision-making processes. The question of how robust the results obtained
under detailed balance are in models where it does not hold remains
largely open. Examples from physics suggest that much richer behaviors can
emerge.
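The flavour of these RFIM-based descriptions can be conveyed by a minimal zero-temperature sketch (illustrative only, not any specific model from the works cited above): each agent carries an idiosyncratic random field (its personal inclination) plus an imitation term proportional to the average opinion, and a common external field is slowly increased; weak imitation yields a smooth aggregate response, while strong imitation produces an abrupt collective jump, the sudden rupture alluded to above. All parameter values are assumptions.

```python
import numpy as np

def ascending_branch(J=1.5, sigma=1.0, N=1000, seed=0):
    """Zero-temperature random-field Ising dynamics on a fully connected
    graph: starting from everybody 'down', the common field F is slowly
    increased and agents flip up when their local field J*m + h_i + F
    becomes positive (monotone dynamics, so the relaxation terminates)."""
    rng = np.random.default_rng(seed)
    h = rng.normal(0.0, sigma, N)          # idiosyncratic inclinations
    s = -np.ones(N)
    curve = []
    for F in np.linspace(-4.0, 4.0, 161):
        while True:                         # flips can trigger further flips
            flip = (s < 0) & (J * s.mean() + h + F > 0)
            if not flip.any():
                break
            s[flip] = 1.0
        curve.append((F, s.mean()))
    return curve

if __name__ == "__main__":
    for J in (0.5, 1.5):                    # weak versus strong imitation
        c = ascending_branch(J=J)
        jump = max(c[i + 1][1] - c[i][1] for i in range(len(c) - 1))
        print(f"J={J}: largest single-step jump of the average opinion = {jump:.2f}")
```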
Kaizoji et al. (2013) introduced a model of financial bubbles with two
assets (risky and risk-free), in which rational investors and noise traders co-
exist. Rational investors form expectations on the return and risk of a risky
asset and maximize their expected utility with respect to their allocation on
the risky asset versus the risk-free asset. Noise traders are subjected to social
imitation (Ising like interactions) and follow momentum trading (leading to a
kind of time-varying magnetic field). Allowing for random time-varying herd-
ing propensity as in (Sornette, 1994; Stauffer and Sornette, 1999; Sornette
et al., 2002), this model reproduces the most important stylized facts of fi-
nancial markets such as a fat-tail distribution of returns, volatility clustering
as well as transient faster-than-exponential bubble growth with approximate
log-periodic behavior (Sornette, 1998b; 2003). The model accounts well for
the behavior of traders and for the price dynamics that developed during the
dotcom bubble in 1995-2000. Momentum strategies are shown to be tran-
siently profitable, which supports their role in enhancing herding behavior.
or sector that is successful, with strong fundamentals. Credit expands, and
money flows more easily. (Near the peak of Japan’s bubble in 1990, Japan’s
banks were lending money for real estate purchases at more than the value
of the property, expecting the value to rise quickly.) As more money is avail-
able, prices rise. More investors are drawn in, and expectations for quick
profits rise. The bubble expands, and then finally has to burst. In other
words, fuelled by initially well-founded economic fundamentals, investors de-
velop a self-fulfilling enthusiasm by an imitative process or crowd behavior
that leads to the building of castles in the air, to paraphrase Malkiel (2012).
Furthermore, the causes of the crashes on the US markets in 1929, 1987,
1998 and in 2000 belong to the same category, the difference being mainly
in which sector the bubble was created: in 1929, it was utilities; in 1987, the
bubble was supported by a general deregulation of the market with many
new private investors entering it with very high expectations with respect to
the profit they would make; in 1998, it was an enormous expectation with
respect to the investment opportunities in Russia that collapsed; before 2000,
it was extremely high expectations with respect to the Internet, telecommu-
nications, and so on, that fuelled the bubble. In 1929, 1987 and 2000, the
concept of a “new economy” was each time promoted as the rational origin
of the upsurge of the prices.
Several previous works in economics have suggested that bubbles and
crashes have endogenous origins, as we explain below. For instance, Irving
Fisher (1933) and Hyman Minsky (1992) both suggested that endogenous
feedback effects lead to financial instabilities, although their analysis did
not include formal models. Robert Shiller (2006) has been spearheading
the notion that markets, at times, exhibit “irrational exuberance”. While
the efficient market hypothesis provides a useful first-order representation of
financial markets in normal times, one can observe regimes where the an-
chor of a fundamental price is shaky and large uncertainties characterize the
future gains, which provides a fertile environment for the occurrence of bub-
bles. When a number of additional elements are present, markets go through
transient phases where they disconnect in specific dangerous ways from this
fuzzy concept of fundamental value. These are regimes where investors are
herding, following the flock and pushing the price up along an unsustainable
growth trajectory. Many other mechanisms have been studied to explain the
occurrence of financial bubbles, such as constraints on short selling and lack
of synchronisation of arbitrageurs due to heterogeneous beliefs on the exis-
tence of a bubble, see Brunnermeier and Oehmke (2012) and Xiong (2013)
for two excellent reviews.
6.2 The critical point analogy
Mathematically, we propose that large stock market crashes are the so-
cial analogues of so-called critical points studied in the statistical physics
community in relation to magnetism, melting, and other phase transforma-
tion of solids, liquids, gas and other phases of matter (Sornette, 2000). This
theory is based on the existence of a cooperative behavior of traders imi-
tating each other which leads to progressively increasing build-up of market
cooperativity, or effective interactions between investors, often translated
into accelerating ascent of the market price over months and years before
the crash. According to this theory, a crash occurs because the market has
entered an unstable phase and any small disturbance or process may have
triggered the instability. Think of a ruler held up vertically on your finger:
this very unstable position will lead eventually to its collapse, as a result of a
small (or the absence of an adequate) motion of your hand or any tiny whiff
of air. The collapse is fundamentally due to the unstable position; the in-
stantaneous cause of the collapse is secondary. In the same vein, the growth
of the sensitivity and the growing instability of the market close to such a
critical point might explain why attempts to unravel the local origin of the
crash have been so diverse. Essentially, anything would work once the system
is ripe. In this view, a crash has fundamentally an endogenous or internal
origin and exogenous or external shocks only serve as triggering factors.
As a consequence, the origin of crashes is much more subtle than often
thought, as it is constructed progressively by the market as a whole, as a self-
organizing process. In this sense, the true cause of a crash could be termed a
systemic instability. This leads to the possibility that the market anticipates
the crash in a subtle self-organized and cooperative fashion, hence releas-
ing precursory “fingerprints” observable in the stock market prices (Sornette
and Johansen, 2001; Sornette, 2003). These fingerprints have been modeled
by log-periodic power laws (LPPL) (Johansen et al., 1999; 2000), which are
beautiful mathematical patterns associated with the mathematical general-
ization of the notion of fractals to complex imaginary dimensions (Sornette,
1998). In the framework of Johansen, Ledoit and Sornette (1999; 2000),
an Ising-like stochastic dynamics is used to describe the time evolution of
imitation between noise traders, which controls the dynamics of the crash
hazard rate (see Sornette et al. (2013) for a recent update on the status of
the model).
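For reference, the first-order log-periodic power law signature used in this framework is commonly written (up to minor variations in parametrization across the cited papers) as

\[
\mathbb{E}[\ln p(t)] \;=\; A \;+\; B\,(t_c - t)^{m} \;+\; C\,(t_c - t)^{m}\cos\!\big(\omega \ln(t_c - t) - \phi\big),
\qquad t < t_c ,
\]

where $t_c$ is the critical time marking the end of the bubble, $0 < m < 1$ and $B < 0$ describe the faster-than-exponential acceleration, and the cosine term with angular log-frequency $\omega$ and phase $\phi$ encodes the discrete scale invariance (complex critical exponents) mentioned above.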
Our theory of collective behaviour predicts robust signatures of specula-
tive phases of financial markets, both in accelerating bubbles and decreasing
prices (see below). These precursory patterns have been documented for
essentially all crashes on developed as well as emergent stock markets. Ac-
cordingly, the crash of October 1987 is not unique but a representative of
an important class of market behaviour, underlying also the crash of Octo-
ber 1929 (Galbraith, 1997) and many others (Kindleberger, 2000; Sornette,
2003).
We refer to the book (Sornette, 2003) for a detailed description and the
review of many empirical tests and of several forward predictions. In partic-
ular, we predicted in January 1999 that Japan’s Nikkei index would rise 50
percent by the end of that year, at a time when other economic forecasters
expected the Nikkei to continue to fall, and when Japan’s economic indica-
tors were declining. The Nikkei rose more than 49 percent during that time.
We also successfully predicted several short-term changes of trends in the US
market and in the Nikkei and we have diagnosed ex-ante several other major
bubbles (see e.g. (Jiang et al., 2010) and references therein).
economists. One reason why predicting complex systems is difficult is that we
have to look at the forest rather than the trees, and almost nobody does that.
Our approach tries to avoid this trap. From the tulip mania, where tulips
worth tens of thousands of dollars in present U.S. dollars became worthless
a few months later, to the U.S. bubble in 2000, the same patterns occur over
the centuries. Today we have electronic commerce, but fear and greed remain
the same. Humans remain endowed with basically the same qualities (fear,
greed, hope, lust) today as they were in the 17th century.
feedbacks that lead to widespread endorsement and extraordinary commit-
ment by those involved, beyond what would be rationalized by a standard
cost-benefit analysis. It does not carry any value judgment, however, notwith-
standing the use of the term “bubble”. Rather, it identifies the types of
dynamics that shape scientific or technological endeavors. In other words,
we suggest that major projects often proceed via a social bubble mechanism
(Sornette, 2008; Gisler and Sornette, 2009; 2010; Gisler et al., 2011; 2013).
Thus, bubbles and crashes, the hallmark of humans, are perhaps our
most constructive collective process. But they may also undermine our quest
for stability. We thus have to be prepared and adapted to the systemic
instabilities that are part of us, part of our collective organization, ... and
which will no doubt recur again perhaps with even more violent effects in
the coming decade.
impact of repeated interactions leading to large-scale collective patterns. In
the physicist language, in addition to energy, entropy is an important and
often leading contribution to large scale pattern formation, and this under-
standing requires the typical statistical physics training that economists and
social scientists often lack.
adopted by the synthetic agents together with the aggregate market behav-
ior. However, the Santa Fe Institute Artificial Stock Market has been shown
to suffer from a number of defects, for instance the fact that the rate of ap-
pearance of new trading strategies is too fast to be realistic. Only recently
was it also realized that previous interpretations neglecting the emergence of
technical trading rules should be corrected (Ehrentreich, 2008).
Inspired by the El-Farol Bar problem (Arthur, 1994b) meant to emphasize
how inductive reasoning together with a minority payoff prevents agents from
converging to an equilibrium and forces them to continuously readjust their ex-
pectation, the Minority Game was introduced by Challet and Zhang (1997;
1998) to model prices in markets as reflecting competition among a finite
number of agents for a scarce resource (Marsili et al., 2000). Extensions in-
clude the Majority Game, the Dollar Game (a time-delayed version of the
Majority Game) and delayed versions of the Minority Game. In minority
games, which are part of first-entry games, no strategy can remain persis-
tently a winner; otherwise it will be progressively adopted by a growing
number of agents, bringing its demise by construction of the minority payoff.
This leads to the phenomenon of frustration and anti-persistence. Satinover
and Sornette (2007a,b; 2009) have shown that optimizing agents are actually
underperforming random agents, thus embodying the general notion of the
illusion of control. It can be shown more generally that learning and adaptive
agents will converge to the best dominating strategy, which turns out to be
the random choice strategy for minority or first-entry payoffs.
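A minimal, self-contained sketch of the basic minority game may help fix ideas (all parameter values are illustrative): each of N agents holds S fixed random strategies mapping the last m winning sides to an action, plays its currently best-scoring strategy, and the minority side wins each round; the variance of the attendance per agent, compared with the coin-toss benchmark of 1, is the standard diagnostic of cooperation or frustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, S, T = 101, 3, 2, 5000               # odd N avoids ties; values are illustrative
P = 2 ** m                                 # number of distinct histories
strategies = rng.choice([-1, 1], size=(N, S, P))     # fixed random lookup tables
scores = np.zeros((N, S))                  # virtual points of every strategy
history = int(rng.integers(P))             # current history encoded as an integer
attendance = []

for t in range(T):
    best = scores.argmax(axis=1)                       # each agent uses its best strategy
    actions = strategies[np.arange(N), best, history]
    A = int(actions.sum())                             # signed attendance
    attendance.append(A)
    winning = -1 if A > 0 else 1                       # the minority side wins
    scores += (strategies[:, :, history] == winning)   # reward all agreeing strategies
    history = ((history << 1) | (1 if winning > 0 else 0)) % P

att = np.array(attendance)
print("volatility per agent sigma^2/N:", round(att.var() / N, 2),
      " (random coin-toss benchmark: 1.0)")
```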
Evstigneev et al. (2009) review results obtained on evolutionary finance,
namely the field studying the dynamic interaction of investment strategies in
financial markets through ABM implementing Darwinian ideas and random
dynamical system theory. By studying the wealth distribution among agents
over the long-term, Evstigneev et al. are able to determine the type of strate-
gies that over-perform in the long term. They find that such strategies are
essentially derived from Kelly’s (1956) criterion of optimizing the expected
log-return. They also pave the road for the development of a generalization of
continuous-time finance with evolutionary and game theoretical components.
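For concreteness, the Kelly prescription amounts to choosing the invested fraction $f$ of wealth that maximizes the expected log-growth rate of wealth; in the textbook case of a repeated binary bet won with probability $p$ at odds $b$ (this standard special case is quoted here only as an illustration, not as the general setting of Evstigneev et al.),

\[
g(f) \;=\; p\,\ln(1 + b f) \;+\; (1-p)\,\ln(1 - f),
\qquad
f^{*} \;=\; \arg\max_{f} g(f) \;=\; \frac{p\,b - (1-p)}{b}.
\]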
Darley and Outkin (2007) describe the development of a Nasdaq ABM
market simulation, developed during the collaboration between the Bios
Group (a spin-off of the Santa Fe Institute) and Nasdaq Company to explore
new ways to better understand Nasdaq’s operating world. The artificial mar-
ket has opened the possibility to explore the impact of market microstructure
and market rules on the behavior of market makers and traders. One ob-
tained insight is that decreasing the tick size to very small values may hinder
the market’s ability to perform its price discovery process, while at the same
time the total volume traded can greatly increase with no apparent benefits
(and perhaps direct harm) to the investors’ average wealth.
In a similar spirit of using ABM for an understanding of real-life economic
developments, Geanakoplos et al. (2012) have developed an agent based
model to describe the dynamics that led to the housing bubble in the US
that peaked in 2006 (Zhou and Sornette, 2006). At every time step, the
agents have the choice to pay a monthly coupon or to pay off the remaining
balance (prepay). The conventional method makes a guess for the functional
form of the prepayments over time, which basically boils down to extrapolating
past patterns in the data into the future. In contrast, the ABM takes into
account the heterogeneity of the agents through a parameterization with two
variables that are specific to each agent: the cost of prepaying the mortgage
and the agent’s alertness to its own financial situation. A simulation of such agents acting
in the housing market is able to capture the run-up in housing prices and
the subsequent crash. The dominating factor driving this dynamic could be
identified as the leverage the agents get from easily accessible mortgages. The
conventional model entirely missed this dynamic and was therefore unable
to forecast the bust. Of course, this does not mean that non-ABM models
have not been able or would not be able to develop the insight about the
important role of the procyclicality of the leverage on real-estate prices and
vice-versa, a mechanism that has been well and repeatedly described in the
literature after the crisis in 2007-2008 erupted.
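A toy sketch of the agent-level rule described above (illustrative only, not the Geanakoplos et al. model): each borrower carries an individual prepayment cost and an alertness level, and prepays only when alert and when the interest saving from refinancing at the current market rate exceeds that cost, so that aggregate prepayment rates respond endogenously to rates rather than following an extrapolated functional form. All distributions and numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000
balance = rng.uniform(50_000, 500_000, N)      # outstanding balances (illustrative)
contract_rate = rng.uniform(0.05, 0.09, N)     # rate locked in at origination
cost = rng.uniform(500, 5_000, N)              # heterogeneous cost of prepaying
alertness = rng.uniform(0.0, 1.0, N)           # prob. of checking the market each month

def monthly_prepayment_rate(market_rate, horizon_years=5):
    """Fraction of borrowers prepaying in one month at a given market rate,
    using a crude horizon-based estimate of the interest saving."""
    saving = (contract_rate - market_rate) * balance * horizon_years
    alert = rng.random(N) < alertness
    return (alert & (saving > cost)).mean()

for r in (0.08, 0.06, 0.04):
    print(f"market rate {r:.0%}: prepayment rate {monthly_prepayment_rate(r):.1%}")
```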
Hommes (2006) provides an early survey of dynamic behavioral financial
and economic models with boundedly rational agents using
different heuristics. He emphasizes the class of relatively simple models for
which some tractability is obtained by using analytic methods in combina-
tion with computational tools. Nonlinear structures often lead to chaotic
dynamics, far from an equilibrium point, in which regime switching is the
natural occurrence associated with coexisting attractors in the presence of
stochasticity (Yukalov et al., 2009). By the aggregation of relatively sim-
ple interactions occurring at the micro level, quite sophisticated structure
at the macro level may emerge, providing explanations for observed stylized
facts in financial time series, such as excess volatility, high trading volume,
temporary bubbles and trend following, sudden crashes and mean reversion,
clustered volatility and fat tails in the returns distribution.
Chiarella et al. (2009) review another branch of investigation of bound-
edly rational heterogeneous agent models of financial markets, with particular
emphasis on the role of the market clearing mechanism, the utility function of
the investors, the interaction of price and wealth dynamics, portfolio implica-
tions, and the impact of stochastic elements on market dynamics. Chiarella et
al. find regimes with market instabilities and stochastic bifurcations, leading
to fat tails, volatility clustering, large excursions from the fundamental, and
bubbles, which are features of real markets that are not easily reconcilable
with the standard financial market paradigm.
Shiozawa et al. (2008) summarize the main properties and findings resulting
from the U-Mart project, which creates a virtual futures market on a stock
index using a computer or network in order to promote on-site training,
education, and economics research. In the U-Mart platform, human players
can interact with algorithms, providing a rich experimental platform.
Building on the insight that when past information is limited to a rolling
window of prior states of fixed length, the minority, majority and dollar
games may all be expressed in Markov-chain formulation (Marsili et al. 2000,
Hart et al. 2002, Satinover and Sornette 2007a,b), Satinover and Sornette
(2012a,b) have further shown how, for minority and majority games, a cycle
decomposition method allows one to quantify the inherently probabilistic nature
of a Markov chain underlying the dynamics of the models as an exact su-
perposition of deterministic cyclic sequences (Hamiltonian cycles on binary
graphs), extending ideas discussed by Jefferies et al. (2002). This provides
a novel technique to decompose the sign of the time-series they generate
(analogous to a market price time-series) into a superposition of weighted
Hamiltonian cycles on graphs. The cycle decomposition also provides a dis-
section of the internal dynamics of the games and a quantitative measure of
the degree of determinism. The performance of different classes of strategies
may be understood on a cycle-by-cycle basis. The decomposition offers a
new metric for comparing different game dynamics with real-world financial
time-series and a method for generating predictors. A cycle predictor applied
to a real-world market can generate significantly positive returns.
Feng et al. (2012) use an agent-based model that suggests a dominant
role for investors using technical strategies over those with fundamental
investment styles, and show that herding, which emerges from convergence
onto similar technical rules and trading styles, creates the well-known
excess volatility phenomenon (Shiller, 1981; LeRoy and Porter, 1981;
LeRoy, 2008). This suggests that there is more to price dynamics than just
exogeneity (e.g. the dynamics of dividends). Samanidou et al. (2007) review
several agent-based models of financial markets, which have been studied by
economists and physicists over the last decade: Kim-Markowitz, Levy-Levy-
Solomon (1994), Cont-Bouchaud, Solomon-Weisbuch, Lux-Marchesi (1999;
2000), Donangelo-Sneppen and Solomon-Levy-Huang. These ABM empha-
size the importance of heterogeneity, of noise traders (Black, 1986) or tech-
nical analysis based investment styles, and of herding. Lux (2009a) reviews
simple stochastic models of interacting traders, whose design is closer in spirit
to models of multiparticle interaction in physics than to traditional asset-
pricing models, reflecting the insight that emergent properties at the macroscopic
level are often independent of the microscopic details of the system. Hasan-
hodzic et al. (2011) provide a computational view of market efficiency by
implementing agent-based models in which agents with different resources
(e.g., memories) perform differently. This approach is very promising for un-
derstanding the relative nature of market efficiency (relative with respect to
resources such as super-computer power and intellectual capital) and pro-
vides a rationalization of the technological arms race of quantitative trading
firms.
generalization is not obvious. This makes it difficult to compare the different
ABMs found in the literature and gives an impression of lack of robustness
in the results, which are often sensitive to details of the modelers’ choices. The
situation is somewhat similar to that found with artificial neural networks,
the computational models inspired by animals’ central nervous systems that
are capable of machine learning and pattern recognition. While providing
interesting performance, artificial neural networks are black boxes: it is gen-
erally very difficult if not impossible to extract a qualitative understanding
of the reasons for their performance and ability to solve a specific task. We
can summarize this first difficulty as the micro-macro problem, namely un-
derstanding how micro-ingredients and rules transform into macro-behaviors
at the collective level when aggregated over many agents.
The second related problem is that of calibration and validation (Sornette
et al., 2007). Standard DSGE models of an economy, for instance, provide
specific regression relations that are relatively easy to calibrate to a cross-
sectional set of data. In contrast, the general problem of calibrating ABMs is
unsolved. By calibrating, we refer to the problem of determining the values
of the parameters (and their uncertainty intervals) that enter in the definition
of the ABM, which best corresponds to a given set of empirical data. Due to
the existence of nonlinear chaotic or more complex dynamical behaviors, the
likelihood function is in general very difficult if not impossible to determine
and standard statistical methods (maximum likelihood estimation (MLE))
cannot apply. Moreover, due to the large number of parameters present in
large scale ABMs, calibration suffers from the curse of dimensionality and of
ill-conditioning: small errors in the empirical data can be amplified into large
errors in the calibrated parameters. We think that it is not exaggerated to
state that the major obstacle for the general adoption of ABMs by economists
and policy makers is the absence of a solid theoretical foundation and of
efficient, reliable operational calibration methods.
This diagnostic does not mean that there have not been attempts, some-
times quite successful, at calibrating ABMs. Windrum et al. (2007) review the
advances and discuss the methodological problems arising in the empirical
calibration and validation of agent-based models in economics. They classify
the calibration methods into three broad classes: (i) the indirect calibration
approach, (ii) the Werker-Brenner approach, and (iii) the history-friendly approach.
They have also identified six main methodological and operational issues
with ABM calibration: (1) fitness does not necessarily imply that the true
generating process has been correctly identified; (2) the quest for feasible cal-
ibration influences the type of ABMs that are constructed; (3) the quality of
the available empirical data; (4) the possible non-ergodicity of the real world
generating process and the issue of representativeness of short historical time
series; (5) possible time-dependence of the micro and macro parameters.
Restricting our attention to financial markets, an early effort of ABM
calibration is that of Poggio et al. (2001), who constructed a computer simu-
lation of a repeated double-auction market. Using six different experimental
designs, the calibration was of the indirect type, with attempts to match the
price efficiency of the market, the speed at which prices converge to the
rational expectations equilibrium price, the dynamics of the distribution of
wealth among the different types of artificial intelligent agents, trading vol-
ume, bid/ask spreads, and other aspects of market dynamics. Among the
ABM studies touched upon above, that of Chiarella et al. (2009) includes an
implementation of the indirect calibration approach. Similarly, Bianchi et
al. (2007) develop a methodology to calibrate the Complex Adaptive Trivial
System (CATS) model proposed by Gallegati et al. (2005), again matching
several statistical outputs associated with different stylized facts of the ABM
to the empirical data. Fabretti (2013) uses a combination of mean and stan-
dard deviation, kurtosis, Kolmogorov-Smirnov statistics and Hurst exponent
for the statistical objects determined from the ABM developed by Farmer
and Joshi (2002), whose distance to the real statistics should be minimized.
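Generically, this indirect (moment-matching) calibration strategy can be sketched as follows: simulate the ABM for candidate parameter values, compute a vector of stylized-fact statistics (here, excess kurtosis and the first-lag autocorrelation of absolute returns), and minimize the distance to the same statistics computed on the empirical series. In the sketch below, a toy ABM merely stands in for whatever model is being calibrated, and the “empirical” target is generated synthetically to keep the example self-contained; every name and value is an assumption.

```python
import numpy as np

def toy_abm(herding, n=4000, seed=0):
    """Toy ABM: a noisy herding variable x amplifies the volatility of returns.
    It stands in for whatever ABM is actually being calibrated."""
    rng = np.random.default_rng(seed)
    x, returns = 0.0, []
    for _ in range(n):
        x += herding * x * (1 - abs(x)) * 0.1 + 0.1 * rng.normal() - 0.05 * x
        x = float(np.clip(x, -1, 1))
        returns.append((1.0 + 2.0 * abs(x)) * rng.normal())
    return np.array(returns)

def stylized_stats(r):
    """Vector of stylized-fact statistics: excess kurtosis and acf(1) of |r|."""
    r = (r - r.mean()) / r.std()
    kurt = (r ** 4).mean() - 3.0
    a = np.abs(r)
    acf1 = np.corrcoef(a[:-1], a[1:])[0, 1]
    return np.array([kurt, acf1])

# synthetic "empirical" target produced with a hidden parameter value
target = stylized_stats(toy_abm(herding=2.0, seed=123))

grid = np.linspace(0.0, 4.0, 41)
dists = [np.sum((stylized_stats(toy_abm(h, seed=1)) - target) ** 2) for h in grid]
best = grid[int(np.argmin(dists))]
print("calibrated herding parameter:", best, " (hidden target value was 2.0)")
```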
Alfarano et al. (2005) studied a very simple ABM that reproduces the
most studied stylized facts (fat tails, volatility clustering). The simplicity
of the model allows the authors to derive a closed form solution for the
distribution of returns and hence to develop a rigorous maximum likelihood
estimation (MLE) approach to the calibration of the ABM. The analytical
analysis provides an explicit link between the exponent of the unconditional
power law distribution of returns and some structural parameters, such as the
herding propensity and the autonomous switching tendency. This is a rare
example for which the calibration of the ABM is similar to more standard
problems of calibration in econometrics.
Andersen and Sornette (2005) introduced a direct history-friendly cal-
ibration method of the Minority game on time series of financial returns,
which exhibits statistically significant abnormal performance in detecting special pock-
ets of predictability associated with turning points. Roughly speaking, this
is done by calibrating many times the ABM to the data and by performing
meta-searches in the set of parameters and strategies, while imposing robust-
ness constraints to address the intrinsically ill-conditioned nature of the problem.
One of the advantages is to remove possible biases of the modeler (except for
the fact that the structure of the model itself reflects a view of what should
be the structure of the market). This work by Andersen and Sornette (2005)
was one of the first to establish the existence of pockets of predictability in
stock markets. A theoretical analysis showed that when a majority of agents
follows a decoupled strategy, namely the immediate future has no impact
on the longer-term choice of the agents, a transient predictable aggregate
move of the market occurs. It has been possible to estimate the frequency of
such prediction days if the strategies and histories were randomly chosen. A
statistical test, using the Nasdaq Composite Index as a proxy for the price
history, confirms that it is possible to find prediction days with a probability
much higher than chance.
Another interesting application is to use the ABM to issue forecasts that
are used to further refine the calibration as well as test the predictive power
of the model. To achieve this, the strategies of the agents become in a certain
sense a variable, which is optimized to obtain the best possible calibration of
the in-sample data. Once the optimal strategies are identified, the predictive
power of the simulation can be tested on the out-of-sample data. Statis-
tical tests have shown that the model performs significantly better than a
set of random strategies used as comparison (Andersen and Sornette, 2005;
Wiesinger et al., 2012). These results are highly relevant, because they show
that it seems possible to extract from time series information about the fu-
ture development of the series using the highly nonlinear structure of ABMs.
Applied to financial return time series, the calibration and subsequent fore-
cast show that the highly liquid financial markets (e.g. S&P500 index) have
progressively evolved towards better efficiency from the 1970s to present
(Wiesinger et al., 2012). Nevertheless, there seem to remain statistically
significant arbitrage opportunities (Zhang, 2013), which seems inconsistent
with the weak form of the efficient market hypothesis (EMH). This method
lays down the path to a novel class of statistical falsification of the EMH. As
the method is quite generic, it can virtually be applied on any time series
to check how well the EMH holds from the viewpoint offered by the ABM.
Furthermore, this approach has wide potential to reverse-engineer many more
stylized facts observed in financial markets.
Lillo et al. (2008) present results obtained in the rare favorable situa-
tion in which the empirical data is plentiful, with access to a comprehensive
description of the strategies followed by the firms that are members of the
Spanish Stock Exchange. This provides a rather unique opportunity for val-
idating the assumptions about agents’ preferred stylized strategies in ABMs.
The analysis indicates that three well-defined groups of agents (firms) char-
acterize the stock exchange.
Ter Ellen and Zwinkels (2010) have modeled oil price dynamics with
a heterogeneous agent model that, as in many other ABMs, incorporates two
types of investors, the fundamentalists and the chartists, and their relation
to the fundamental supply and demand. The fundamentalists, who expect
the oil price to move towards the fundamental price, have a stabilizing effect,
while the chartists have a destabilizing effect driving the oil price away from
its fundamental value. The ABM has been able to outperform in an out-
of-sample test both the random walk model and VAR models for the Brent
and WTI markets, providing a kind of partial history-friendly calibration ap-
proach.
imation, an asymptotic ideal construct that is never reached in practice. It
can be approached, but convergence towards it unleashes effective repelling
forces due to dwindling incentives. “The abnormal returns always exist to
compensate for the costs of gathering and processing information. These re-
turns are necessary to compensate investors for their information-gathering
and information-processing expenses, and are no longer abnormal when these
expenses are properly accounted for. The profits earned by the industrious
investors gathering information may be viewed as economic rents that ac-
crue to those willing to engage in such activities” (cited from Campbell et
al., 1997).
Let us push this reasoning in order to illuminate further the nature and
limits of the EMH and as a bonus clarify the nature and origin of “noise
traders” (Black, 1986). As illustrated by the short review of section 7.1, the
concept of “noise trader” is an essential constituent of most ABMs that aim
at explaining the excess volatility, fat-tailed distributions of asset returns,
as well as the astonishing occurrence of bubbles and crashes. It also solves
the problem of the no-trade theorem (Milgrom and Stokey, 1982), which
in essence shows that no investor will be willing to trade if the market is
in a state of efficient equilibrium and there are no noise traders or other
non-rational interferences with prices. Intuitively, if there is a well-defined
fundamental value, all well-informed rational traders agree on it, the market
price is the fundamental value and everybody holds the stock according to
their portfolio allocation strategy reflecting their risk profiles. No trade is
possible without the existence of exogenous shocks, changes of fundamental
values or taste alterations.
In reality, real financial markets are heavily traded, with at each tick an
exact balance between the total volume of buyers and of sellers (by defi-
nition of each realised trade), reflecting a generalised disagreement on the
opportunity to hold the corresponding stocks. These many investors who
agree to trade and who trade much more than would be warranted on the
basis of fundamental information are called noise traders. Noise traders are
loosely defined as the investors who make decisions regarding buy and sell
trades without much use of fundamental data, but rather on the basis of price
patterns, trends, and who react incorrectly to good and bad news. On one
side, traders exhibit over-reaction, which refers to the phenomenon that price
responses to news events are exaggerated. A proposed explanation is that
excess price pressure is applied by overconfident investors (Bondt and Thaler,
1985; Daniel et al., 1998) or momentum traders (Hong and Stein, 1999), re-
sulting in an over- or under-valued asset, which then increases the likelihood
of a rebound and thus creates a negative autocorrelation in returns. On the
other side, investors may under-react, resulting in a slow internalisation of
news into price. Due to such temporally spread-out impact of the news,
price dynamics exhibit momentum, i.e. positive return autocorrelation (Lo
and MacKinlay, 1988; Jegadeesh, 1990; Cutler, 1990; Jegadeesh and Titman,
1993).
In fact, most investors and portfolio managers are considered noise traders
(Malkiel, 2012)! In other words, after controlling for luck, there is a general
consensus in the financial academic literature that most fund managers do
not provide statistically significant positive returns above the market return
that would be obtained by just buying and holding for the long term (Barras
et al., 2010; Fama and French, 2010). This prevalence of noise traders is in
accord with the EMH. But are these investors really irrational and mindless?
This seems difficult to reconcile with the evidence that the banking and
investment industry has been able in the last decades to attract a significant
fraction of the best minds and most motivated persons on Earth. Many
have noticed and even complained that, in the years before the financial
crisis of 2008, the best and brightest college grads were heading for Wall
Street. At ETH Zurich, where I teach financial market risks and supervise master
theses, I have observed even after the financial crisis a growing flood of civil,
mechanical, electrical and other engineers choosing to defect from their field
and work in finance and banking.
Consequently, we propose that noise traders are actually highly intelligent,
motivated and capable investors. They appear as noise traders as a result of
the aggregation of the collective intelligence of all trading strategies, which
structures the price dynamics and makes each individual strategy look
“stupid”, like noise trading. The whole is more than the sum of its parts. In
other words, a universe of rational optimizing traders creates endogenously
a large fraction of rational traders who are effectively noise, because their
strategies are like noise, given the complexity or structure of financial and
economic markets that they collectively create. The continuous actions of
investors, which are aggregated in the prices, produce a “market intelligence”
more powerful than that of most of them. The “collective intelligence” of
the market transforms most (but not all) strategies into losing strategies,
just providing liquidity and transaction volume. We call this the “Emerging
Market Intelligence hypothesis” (EIMH). This phrasing stresses the collective
intelligence that dwarfs the individual ones, making them look like noise when
applied to the price structures resulting from the price formation process.
But for this EIMH to hold, the “noise traders” need a motivation to
continue trading, in the face of their collective dismal performances. In ad-
dition to the role of monetary incentives for rent-seeking that permeates
the banking industry (Freeman, 2010) and makes working in finance very
attractive notwithstanding the absence of genuine performance, there is a
well-documented fact in the field of psychology that human beings in gen-
eral and investors in particular (and especially traders who are (self-)selected
for their distinct abilities and psychological traits) tend to rate their skills
over-optimistically (Kruger and Dunning, 1999). And when by chance, some
performance emerges, we tend to attribute the positive outcome to our skills.
When a negative outcome occurs, this is bad luck. This is referred to in the
psychological literature as the “illusion of control” (Langer, 1975). In addition,
human beings have evolved the ability to attribute meaning and regularity
when there is none. In the psychological literature, this is related to the
fallacy of “hasty generalisation” (“law of small numbers”) and to “retrospec-
tive determinism”, which makes us look at historical events as part of an
unavoidable meaningful laminar flow. All these elements combine to gener-
ate a favourable environment to catalyse trading, by luring especially young
bright graduate students to finance in the belief that their intelligence and
technical skills will allow them to “beat the market”. Thus, building on
our cognitive biases and in particular on over-optimism, one could say that
the incentive structures of the financial industry provide the remuneration
for the individuals who commit themselves to arbitraging the financial mar-
kets, thereby producing an almost efficiently functioning machine. The noise
traders naturally emerge as a result of the emergent collective intelligence.
This concept is analogous to the sandpile model of self-organised criticality
(Bak, 1996), which consistently functions at the edge of chaos, driven towards its
instability but never completely reaching it because of the triggering of avalanches
(Scheinkman and Woodford, 1994). Similarly, the incentives of the financial
system create an army of highly motivated and skilled traders who push the
market towards efficiency without ever fully achieving it, which allows some of
them to win while making most of them look like noise.
8 Concluding remarks
While it is difficult to argue for a physics-based foundation of economics
and finance, physics has still a role to play as a unifying framework full
of concepts and tools to deal with complex dynamical out-of-equilibrium
systems. Moreover, the specific training of physicists explains the impressive
number of recruitments in investment and financial institutions, where their
data-driven approach coupled with a pragmatic sense of theorizing has made
physicists a most valuable commodity on Wall Street.
At present however, the most exciting progress seems to be unraveling at
the boundary between economics and the biological, cognitive and behavioral
sciences (Camerer et al., 2003; Shiller, 2003; Thaler 2005). A promising re-
cent trend is the enrichment of financial economics by concepts developed in
evolutionary biology. Several notable groups with very different backgrounds
have touched upon the concept that financial markets may be similar to ecolo-
gies filled by species that adapt and mutate. For instance, we mentioned
earlier that Potters et al. (1998) showed that the market has empirically
corrected and adapted the simple, but inadequate, Black-Scholes formula
to account for the fat tails and the correlations in the scale of fluctuations.
Doyne Farmer (2002) proposed a theory based on the interrelationships of
strategies, which views a market as a financial ecology. In this ecology, new
better adapted strategies exploit the inefficiencies of old strategies and the
evolution of the capital of a strategy is analogous to the evolution of the
population of a biological species. Cars Hommes (2001) also reviewed works
modeling financial markets as evolutionary systems constituted of different,
competing trading strategies. Strategies are again taken as the analog of
species. It is found that simple technical trading rules may survive evolution-
ary competition in a heterogeneous world where prices and beliefs coevolve
over time. Such evolutionary models can explain most of the stylized facts
of financial markets (Chakraborti et al., 2011).
Andrew Lo (2004; 2005; 2011) coined the term “adaptive market hypoth-
esis” in reaction to the “efficient market hypothesis” (Fama, 1970; 1991), to
propose an evolutionary perspective on market dynamics in which intelligent
but fallible investors learn from and adapt to changing environments, lead-
ing to a relationship between risk and expected return that is not constant
in time. In this view, markets are not always efficient but they are highly
competitive and adaptive, and can vary in their degree of efficiency as the eco-
nomic environment and investor population change over time. Lo emphasizes
that adaptation in investment strategies (Neely et al., 2009) is driven by the
“push for survival”. This is perhaps a correct assessment of Warren Buffett’s
own stated strategy: “We do not wish it only to be likely that we can meet our
obligations; we wish that to be certain. Thus we adhere to policies – both in
regard to debt and all other matters – that will allow us to achieve acceptable
long-term results under extraordinary adverse conditions, rather than opti-
mal results under a normal range of conditions” (Berkshire Hathaway Annual
Report 1987: http://www.berkshirehathaway.com/letters/1987.html).
But the analogy with evolutionary biology as well as many studies of the be-
havior of bankers and traders (e.g. Coates, 2012) suggest that most market
investors care for much more than just survival. They strive to maximize
their investment success measured as bonus and wealth, which can accrue
with luck on time scales of years. This is akin to maximizing transmission of
“genes” in a biological context (Dawkins, 1976). The focus on survival within
an evolutionary analogy is clearly insufficient to account for the extraordinarily
large death rate of business companies and in particular of financial
firms such as hedge-funds (Saichev et al., 2010; Malevergne et al., 2013 and
references therein).
But evolutionary biology itself is witnessing a revolution with genomics,
benefitting from computerized automation and artificial intelligence classifi-
cation (ENCODE Project Consortium, 2012). And (bio-)physics is bound to
continue playing a growing role to organize the wealth of data in models that
can be handled, playing on the triplet of experimental, computational and
theoretical research. On the question of what could be useful tools to help
understand, use, diagnose, predict and control financial markets (Cincotti
et al., 2012; de S. Cavalcante et al., 2013), we envision that both physics
and biology are going to play a growing role to inspire models of financial
markets, and the next significant advance will be obtained by marrying the
three fields.
References
[1] Alfarano, S., T. Lux and F. Wagner (2005) Estimation of Agent-Based
Models: The Case of an Asymmetric Herding Model, Computational Eco-
nomics 26, 19-49.
[2] Andersen, J. V., and D. Sornette (2005) A Mechanism for Pockets of
Predictability in Complex Adaptive Systems, Europhysics Letters (EPL)
70.5, 697-703.
[3] Anderson, P.W. (1958) Absence of diffusion in certain random lattices,
Phys. Rev. 109, 1492-1505.
[4] Anderson, P.W. (1972) More is different (Broken symmetry and the na-
ture of the hierarchical structure of science), Science 177, 393-396.
[5] Arneodo, A., J.-F. Muzy and D. Sornette (1998) Direct causal cascade in
the stock market, European Physical Journal B 2, 277-282.
[6] Arthur, W. B. (1994a) Increasing Returns and Path Dependence in the
Economy, University of Michigan Press, Ann Arbor.
[7] Arthur, W. B. (1994b) Inductive Reasoning and Bounded Rationality,
American Economic Review (Papers and Proceedings) 84, 406-411.
[8] Arthur, W. B. (1997) The Economy as an Evolving Complex System II,
edited with Steven Durlauf and David Lane, Addison-Wesley, Reading,
MA, Series in the Sciences of Complexity.
[9] Arthur, W. B. (2005) Out-of-Equilibrium Economics and Agent-Based
Modeling, in Handbook of Computational Economics, Vol. 2: Agent-Based
Computational Economics, K. Judd and L. Tesfatsion, eds. (Elsevier /
North Holland).
[13] Bacry, E., J. Delour and J.-F. Muzy (2001) Multifractal random walk,
Physical Review E 64, 026103.
[15] Bak, P. (1996) How Nature Works: the Science of Self-organized Criti-
cality (Copernicus, New York).
[16] Barras, L., O. Scaillet and R. Wermers (2010) False Discoveries in Mu-
tual Fund Performance: Measuring Luck in Estimated Alphas, The Jour-
nal of Finance 65 (1), 179-216.
[17] Bawa, V.S., E.J. Elton and M.J. Gruber (1979) Simple rules for optimal
portfolio selection in stable Paretian markets, Journal of Finance 34 (4),
1041-1047.
[18] Bianchi, C., P. Cirillo, M. Gallegati, and P.A. Vagliasindi (2007) Validat-
ing and Calibrating Agent-Based Models: A Case Study, Computational
Economics 30 (3), 245-264.
[19] Biely, C., R. Hanel and S. Thurner (2009) Socio-economical dynamics
as a solvable spin system on co-evolving networks, Eur. Phys. J. B 67,
285-289.
[20] Bikhchandani, S., Hirshleifer, D., and Welch, I. (1992) A Theory of
Fads, Fashion, Custom, and Cultural Change as Informational Cascades,
Journal of Political Economy 100 (5), 992-1026.
[21] Bishop, C.M. (2006) Pattern Recognition and Machine Learning,
Springer, 738pp.
[22] Black, F. (1986) Noise, Journal of Finance 41 (3), 529-543.
[23] Black, F. and M. Scholes (1973) The Pricing of Options and Corporate
Liabilities, Journal of Political Economy 81 (3), 637-654.
[24] Bonabeau, E. (2002) Agent-based modeling: Methods and techniques
for simulating human systems, Proc. Natl. Acad. Sci. USA 99 (3), 7280-
7287.
[25] Bondt, W.F. and R. Thaler (1985) Does the Stock Market Overreact?
The Journal of Finance 40 (3), 793-805.
[26] Bornholdt, S. (2001) Expectation bubbles in a spin model of market in-
termittency from frustration across scales, International Journal of Modern
Physics C 12 (5), 667-674.
[27] Bouchaud, J-P. (2001) Power laws in economics and finance: some ideas
from physics, Quantitative Finance 1 (1), 105-112.
[28] Bouchaud, J.-P. (2010) Price impact, in Encyclopedia of Quantitative
Finance, R. Cont, ed., John Wiley & Sons Ltd.
[29] Bouchaud, J.-P. (2013), Crises and Collective Socio-Economic Phenom-
ena: Simple Models and Challenges, J Stat Phys (2013) 151:567-606.
[30] Bouchaud, J.-P. and R. Cont (1998) A Langevin approach to stock mar-
ket fluctuations and crashes, Eur. Phys. J. B 6, 543-550.
[31] Bouchaud, J.-P., J.D. Farmer, and F. Lillo (2009) How markets slowly
digest changes in supply and demand, in Handbook of Financial Markets,
Elsevier.
[32] Bouchaud, J.-P. and M. Potters (2003) Theory of Financial Risk and
Derivative Pricing, From Statistical Physics to Risk Management, 2nd ed.
(Cambridge University Press).
[33] Bouchaud, J.-P. and D. Sornette (1994) The Black-Scholes option pric-
ing problem in mathematical finance : Generalization and extensions for
a large class of stochastic processes, J.Phys. I France 4, 863-881.
[34] Bouchaud, J.-P., D. Sornette, C. Walter and J.-P. Aguilar (1998) Taming
large events : Optimal portfolio theory for strongly fluctuating assets,
International Journal of Theoretical and Applied Finance 1, 25-41.
[35] Brock, W.A. and S.N. Durlauf (1999) A formal model of theory choice
in science, Economic Theory 14, 113-130.
[36] Brock, W.A. and S.N. Durlauf (2001) Discrete Choice with Social Interactions, Re-
view of Economic Studies 68, 235-260.
[39] Busemeyer, J.R. and P.D. Bruza (2012) Quantum Models of Cognition
and Decision, Cambridge University Press.
[40] Busemeyer, J.R., Z. Wang and J.T. Townsend, (2006) Quantum dy-
namics of human decision making, Journal of Mathematical Psychology,
50, 220-241.
[43] Calvet, L.E. and A.J. Fisher (2008) Multifractal volatility theory, fore-
casting, and pricing. Burlington, MA: Academic Press.
[45] Campbell, J.Y., A.W. Lo and A.C. MacKinlay (1997) The Econometrics
of Financial Markets. Princeton University Press, Princeton, NJ.
[47] Cardy, J.L. (1996) Scaling and Renormalization in Statistical Physics
(Cambridge University Press, Cambridge, UK).
[49] Challet, D., M. Marsili, Y.-C. Zhang (2005) Minority Games: Interact-
ing Agents in Financial Markets, Oxford University Press, Oxford.
[50] Challet, D. and Zhang, Y.C. (1997) Emergence of cooperation and or-
ganization in an evolutionary game. Physica A 246 (3), 407-418.
[51] Challet, D. and Zhang, Y.C. (1998) On the minority game: Analytical
and numerical studies. Physica A 256 (3/4), 514-532.
[52] Chiarella, C., R. Dieci and X.-Z. He (2009) Heterogeneity, Market Mech-
anisms, and Asset Price Dynamics, Handbook of Financial Markets, Dy-
namics and Evolution, North-Holland, Elsevier.
[55] Coates, J. (2012) The Hour Between Dog and Wolf: Risk Taking, Gut
Feelings and the Biology of Boom and Bust, Penguin Press.
[57] Cootner, P.H. (1964) The random character of stock market prices, MIT
Press (ISBN 978-0-262-03009-0).
[59] Cutler, D. M., J.M. Poterba and L.H. Summers (1990) Speculative Dy-
namics and the Role of Feedback Traders, The American Economic Review
80 (2), 63-68.
[60] Daniel, K., D. Hirshleifer and A. Subrahmanyam (1998) Investor Psy-
chology and Security Market Under- and Overreactions, The Journal of
Finance 53 (6), 1839-1885.
[61] Darley, V. and A.V. Outkin (2007) A Nasdaq market simulation (In-
sights on a major market from the science of complex adaptive systems),
Complex Systems and Interdisciplinary Science, World Scientific, Singa-
pore.
[62] Dawkins, R. (1976) The Selfish Gene, New York City: Oxford University
Press.
[72] Embrechts, P., C. Kluppelberg and T. Mikosch (1997) Modelling Ex-
tremal Events for Insurance and Finance (Springer, New York).
[77] Fama, E.F. (1970) Efficient capital markets: a review of theory and
empirical work, Journal of Finance 25, 383-417.
[78] Fama, E.F. (1991) Efficient capital markets: II, Journal of Finance 46
(5), 1575-1617.
[79] Fama, E.F. and K.R. French (1993) Common Risk Factors in the Re-
turns on Stocks and Bonds, Journal of Financial Economics 33 (1), 3-56.
[80] Fama, E.F. and K.R. French (2010) Luck versus Skill in
the Cross-Section of Mutual Fund Returns, The Journal of Finance 65 (5),
1915-1947.
[81] Farmer, J.D. (2002) Market forces, ecology and evolution, Industrial and
Corporate Change 11 (5), 895-953.
[82] Farmer, J. D. and J. Geanakoplos (2009) The virtues and vices of equi-
librium and the future of financial economics, Complexity 14 (3), 11-38.
[83] Farmer, J.D., A. Gerig, F. Lillo and H. Waelbroeck (2013) How efficiency
shapes market impact (http://arxiv.org/abs/1102.5457).
[84] Farmer, J. D., L. Gillemot, F. Lillo, S. Mike, S. and A. Sen (2004) What
really causes large price changes? Quant. Finance 4(4), 383-397.
[85] Farmer, J.D. and S. Joshi (2002) The price dynamics of common trading
strategies, J. Econ. Behav. Org. 49, 149-171.
[86] Farmer, J.D. and T. Lux, editors (2008) Applications of statistical
physics in economics and finance, Journal of Economic Dynamics and Con-
trol 32 (1), 1-320.
[87] Feng, L., B. Li, B. Podobnik, T. Preis and H.E. Stanley (2012) Linking
agent-based models and stochastic models of financial markets, Proceed-
ings of the National Academy of Sciences of the United States of America
109 (22), 8388-8393.
[94] Gabaix, X., P. Gopikrishnan, V. Plerou and H.E. Stanley (2003) A The-
ory of Power Law Distributions in Financial Market Fluctuations, Nature
423, 267-270.
[96] Galam, S., Y. Gefen and Y. Shapir (1982) Sociophysics: A new
approach of sociological collective behavior. 1. Mean-behavior description
of a strike, Math. J. Soc. 9, 1-13.
[98] Galla, T. and J.D. Farmer (2013) Complex Dynamics in Learning Com-
plicated Games, Proceedings of the National Academy of Sciences 110
(4),1232-1236.
[99] Gallegati, M., Delli Gatti, D., Di Guilmi, C., Gaffeo, E., Giulioni, G.
and Palestrini, A. (2005) A new approach to business fluctuations: Het-
erogeneous interacting agents, scaling laws and financial fragility. Journal
of Economic Behavior and Organization, 56, 489-512.
[102] Gillemot, L., J. D. Farmer, and F. Lillo (2007) There’s More to Volatility
than Volume, Quantitative Finance 6, 371-384.
[105] Gisler, M., D. Sornette and G. Grote (2013) Early dynamics of a major
scientific project: Testing the social bubble hypothesis, Science, Technol-
ogy and Innovation (submitted) (http://ssrn.com/abstract=2289226)
[108] Gopikrishnan, P., V. Plerou, L.A.N. Amaral, M. Meyer, and H.E. Stan-
ley (1999) Scaling of the distributions of fluctuations of financial market
indices, Physical Review E 60, 5305-5316.
[109] Gordon, M.B. J.-P. Nadal, D. Phan and V. Semeshenko (2009) Discrete
Choices under Social Influence: Generic Properties Mathematical Models
and Methods in Applied Sciences (M3AS) 19 (Suppl. 1), 1441-1481.
[110] Gould, S.J. (1996) Full House: The Spread of Excellence From Plato
to Darwin (New York: Harmony Books).
[114] Hardiman, S.J., N. Bercot and J.-P. Bouchaud (2013) Critical reflex-
ivity in financial markets: a Hawkes process analysis, Eur. Phys. J. B (in
press) (http://arxiv.org/abs/1302.1405)
[117] Hart, M.L., Jefferies, P. and Johnson, N.F. (2002) Dynamics of the
time horizon minority game. Physica A 311(1), 275-290.
[122] Hong, H. and J.C. Stein (1999) A Unified Theory of Underreaction,
Momentum Trading, and Overreaction in Asset Markets, The Journal of
Finance 54 (6), 2143-2184.
[123] Hopfield, J.J. (1982), Neural networks and physical systems with emer-
gent collective computational abilities, Proc. Natl. Acad. Sci. USA 79 (8),
2554-2558.
[128] Jefferies, P., Hart, M.L. and Johnson, N.F. (2002) Deterministic dy-
namics in the minority game. Phys. Rev. E 65, 016105-016113.
[132] Johansen, A. and D. Sornette (1998) Stock market crashes are outliers,
European Physical Journal B 1, 141-143.
[135] Johansen, A., D. Sornette and O. Ledoit (1999) Predicting Financial
Crashes using discrete scale invariance, Journal of Risk 1 (4), 5-32.
[148] Kruger, J. and D. Dunning (1999) Unskilled and unaware of it: how
difficulties in recognising one’s own incompetence lead to inflated self-
assessments, Journal of Personality and Social Psychology 77 (6), 1121-
1134.
[153] LeBaron, B., W. B. Arthur, and R. Palmer (1999) Time series prop-
erties of an artificial stock market, Journal of Economic Dynamics and
Control, 23 (9-10), 1487-1516.
[154] LeRoy, S. F. (2008) Excess Volatility Tests, The New Palgrave Dictio-
nary of Economics, 2nd ed.
[155] LeRoy, S.F. and R.D. Porter (1981) The present value relation: Tests
based on implied variance bounds, Econometrica 49, 555-574.
[157] Levy, M. (2005) Market Efficiency, the Pareto Wealth Distribution, and
the Lévy Distribution of Stock Returns, in The Economy as an Evolving
Complex System III, S. Durlauf and L. Blume (Eds.), Oxford University
Press.
[158] Levy, M. and H. Levy (2003) Investment Talent and the Pareto Wealth Dis-
tribution: Theoretical and experimental Analysis, Review of Economics
and Statistics 85 (3), 709-725.
[159] Levy, M., H. Levy, and S. Solomon (1994) A Microscopic Model of the
Stock Market: Cycles, Booms, and Crashes, Economics Letters 45 (1),
103-111.
[160] Liggett, T.M. (1995) Interacting particle systems (New York :
Springer-Verlag)
[162] Lillo, F., E. Moro, G. Vaglica, and R.N. Mantegna (2008) Specialization
and herding behavior of trading firms in a financial market, New Journal
of Physics 10, 043019 (15pp).
[165] Lo, A. (2011) Adaptive Markets and the New World Order (December
30, 2011). Available at SSRN: http://ssrn.com/abstract=1977721
[166] Lo, A. W. and A.C. MacKinlay (1988) Stock market prices do not
follow random walks: evidence from a simple specification test, Review of
Financial Studies 1 (1), 41-66.
[168] Luce, R.D. and P. Suppes (1965) Preference, Utility, and Subjective Prob-
ability, in Handbook of Mathematical Psychology, Vol. 3, 249-410.
[172] Lux, T., and M. Marchesi (2000) Volatility clustering in financial mar-
kets: A microsimulation of interacting agents, International Journal of
Theoretical and Applied Finance 3, 675-702.
[173] Lux, T. and D. Sornette (2002) On Rational Bubbles and Fat Tails,
Journal of Money, Credit and Banking, Part 1, vol. 34, No. 3, 589-610.
[176] Malcai, O., O. Biham, P. Richmond and S. Solomon (2002) The-
oretical analysis and simulations of the generalized Lotka-Volterra model,
Physical Review E 66 (3), 031102.
[177] Malevergne, Y., V.F. Pisarenko and D. Sornette (2005) Empirical Dis-
tributions of Log-Returns: between the Stretched Exponential and the
Power Law? Quantitative Finance 5 (4), 379-401.
[179] Malevergne, Y., A. Saichev and D. Sornette (2013) Zipf's law and max-
imum sustainable growth, Journal of Economic Dynamics and Control 37
(6), 1195-1212.
[182] Malkiel, B.G. (2012) A Random Walk Down Wall Street: The Time-
Tested Strategy for Successful Investing, Tenth Edition, W. W. Norton &
Company.
[184] Mandelbrot, B.B. (1982) The Fractal Geometry of Nature (W.H. Free-
man, San Francisco).
[186] Mandelbrot, B.B., A. Fisher and L. Calvet (1997) A Multifractal Model
of Asset Returns, Cowles Foundation Discussion Papers 1164, Cowles
Foundation, Yale University.
[187] Mantegna, R. N. and H.E. Stanley (1995) Scaling behavior in the dy-
namics of an economic index, Nature 376, 46-49.
[195] Mézard, M., G. Parisi and M. Virasoro (1987) Spin Glass Theory and
Beyond (World Scientific, Singapore).
[198] Minsky, H.P. (1992) The Financial Instability Hypothesis, The Jerome
Levy Economics Institute Working Paper No. 74. Available at SSRN:
http://ssrn.com/abstract=161024.
[199] Montroll, E. W. and Badger, W. W. (1974) Introduction to Quantita-
tive Aspects of Social Phenomena, Gordon and Breach, New York.
[200] Muzy, J.F., E. Bacry and A. Kozhemyak (2006) Extreme values and
fat tails of multifractal fluctuations, Phys. Rev. E 73, 066114.
[202] Neely, C.J., P.A. Weller and J.M. Ulrich (2009) The Adaptive Mar-
kets Hypothesis: Evidence from the Foreign Exchange Market, Journal of
Financial and Quantitative Analysis 44 (2), 467-488.
[203] Newman, M.E.J. (2005) Power laws, Pareto distributions and Zipf’s
law, Contemporary Physics 46, 323-351.
[204] Nadal, J.-P., D. Phan, M.B. Gordon and J. Vannimenus (2005) Multi-
ple equilibria in a monopoly market with heterogeneous agents and exter-
nalities, Quantitative Finance 5 (6), 557-568.
[206] Osborne, M.F.M. (1959) Brownian Motion in the Stock Market, Oper-
ations Research 7 (2), 145-173.
[207] Ouillon, G., E. Ribeiro and D. Sornette (2009) Multifractal Omori Law for
Earthquake Triggering: New Tests on the California, Japan and Worldwide
Catalogs, Geophysical Journal International 178, 215-243.
[212] Parisi, D.R., D. Sornette, and D. Helbing (2013) Financial Price Dy-
namics vs. Pedestrian Counterflows: A Comparison of Statistical Stylized
Facts, Physical Review E 87, 012804.
[213] Perline, R. (2005) Strong, weak and false inverse power laws, Statistical
Science 20 (1), 68-88.
[214] Phan, D., M.B. Gordon and J.-P. Nadal (2004) Social interac-
tions in economic theory: An insight from statistical mechanics, in Cognitive
Economics – An Interdisciplinary Approach, P. Bourgine and
J.-P. Nadal, eds., Springer, pp. 335-354.
[215] Poggio, T., A.W. Lo, B. LeBaron, and N.T. Chan (2001) Agent-Based
Models of Financial Markets: A Comparison with Experimental Markets
(October), MIT Sloan Working Paper No. 4195-01. Available at SSRN:
http://ssrn.com/abstract=290140.
[216] Pothos, E.M. and J.R. Busemeyer (2009) A Quantum Probability Ex-
planation for Violations of “Rational” Decision Theory, Proceedings of the
Royal Society B, 276 (1165), 2171-2178.
[217] Potters, M., R. Cont and J.-P. Bouchaud (1998) Financial markets as
adaptive systems, Europhys. Lett. 41 (3), 239-244.
[219] Roehner, B.M., D. Sornette and J.V. Andersen (2004) Response Func-
tions to Critical Shocks in Social Sciences: An Empirical and Numerical
Study, Int. J. Mod. Phys. C 15 (6), 809-834.
[224] Saichev, A. and D. Sornette (2006) Power law distribution of seismic
rates: theory and data, Eur. Phys. J. B 49, 377-401.
[232] ter Ellen, S. and R.C.J. Zwinkels (2010) Oil Price Dynamics: A Be-
havioral Finance Approach with Heterogeneous Agents, Energy Economics
32 (6), 1427-1434.
[235] Satinover, J.B. and D. Sornette (2009) Illusory versus Genuine Control
in Agent-Based Games, Eur. Phys. J. B 67, 357-367.
[236] Satinover, J.B. and D. Sornette (2012a) Cycles, determinism and per-
sistence in agent-based games and financial time-series I, Quantitative Fi-
nance 12 (7), 1051-1064.
[237] Satinover, J.B. and D. Sornette (2012b) Cycles, determinism and per-
sistence in agent-based games and financial time-series II, Quantitative
Finance 12 (7), 1065-1078.
[243] Shiller, R.J. (1981) Do stock prices move too much to be justified by
subsequent changes in dividends? American Economic Review 71, 421-
436.
[246] Shiller, R.J. (2003) From Efficient Markets Theory to Behavioral Fi-
nance, The Journal of Economic Perspectives 17 (1), 83-104.
[249] Sieczka, P., D. Sornette and J.A. Holyst (2011) The Lehman Brothers
Effect and Bankruptcy Cascades, European Physical Journal B 82 (3-4),
257-269.
[250] Silva, A.C., R.E. Prange, and V.M. Yakovenko (2004) Exponential dis-
tribution of financial returns at mesoscopic time lags: a new stylized fact,
Physica A 344, 227-235.
[257] Sornette, D. (2003) Why Stock Markets Crash: Critical Events in Com-
plex Financial Systems (Princeton University Press).
[259] Sornette, D. (2005) in Extreme Events in Nature and Society, eds Al-
beverio S, Jentsch V, Kantz H (Springer, Berlin), pp 95-119.
[262] Sornette, D. and R. Cont (1997) Convergent multiplicative processes
repelled from zero: power laws and truncated power laws, J. Phys. I France
7, 431-444.
[271] Sornette, D., R. Woodard, W. Yan and W.-X. Zhou (2013) Clarifica-
tions to Questions and Criticisms on the Johansen-Ledoit-Sornette Bubble
Model, Physica A 392 (19), 4417-4428.
[272] Sornette, D., R. Woodard and W.-X. Zhou (2009) The 2006-2008 Oil
Bubble: evidence of speculation, and prediction, Physica A 388, 1571-1576.
[273] Sornette, D. and W.-X. Zhou (2006a) Importance of Positive Feedbacks
and Over-confidence in a Self-Fulfilling Ising Model of Financial Markets,
Physica A 370 (2), 704-726.
[275] Sousa, A.O., K. Malarz and S. Galam (2005) Reshuffling spins with short
range interactions: When sociophysics produces physical results, Int. J.
Mod. Phys. C 16, 1507-1517.
[283] Tsai, C.-Y., G. Ouillon and D. Sornette (2012) New Empirical Tests
of the Multifractal Omori Law for Taiwan, Bulletin of the Seismological
Society of America 102 (5), 2128-2138.
[285] von Neumann, J. (1955) Mathematical Foundations of Quantum Me-
chanics, Princeton University Press, Princeton.
[286] Weber, P. and B. Rosenow (2006) Large stock price changes: volume
or liquidity?, Quant. Finance 6 (1), 7-14.
[292] Wilson, K.G. (1979) Problems in physics with many scales of length,
Scientific American, August.
[296] Yukalov, V.I. and D. Sornette (2008) Quantum decision theory as quan-
tum theory of measurement, Physics Letters A 372, 6867-6871.
[298] Yukalov, V.I. and D. Sornette (2009b) Physics of risk and uncertainty
in quantum decision making, Eur. Phys. J. B 71, 533-548.
[302] Yukalov, V.I. and D. Sornette (2011) Decision Theory with Prospect
Interference and Entanglement, Theory and Decision 70, 283-328.
[309] Yukalov, V.I., E.P. Yukalova and D. Sornette (2013) Mode interfer-
ence of quantum joint probabilities in multi-mode Bose-condensed systems,
Laser Phys. Lett. 10, 115502.
[310] Zhang, Q. (2013) Disentangling Financial Markets and Social Networks:
Models and Empirical Tests, PhD thesis, ETH Zurich, 25 February 2013
(http://www.er.ethz.ch/publications/PHD_QunzhiZhang_final_050213.pdf)
[313] Zhou, W.-X. and D. Sornette (2007) Self-fulfilling Ising Model of Fi-
nancial Markets, European Physical Journal B 55, 175-181.