Econophysics and
Financial Economics
An Emerging Dialogue
Franck Jovanovic
and
Christophe Schinckus
Oxford University Press is a department of the University of Oxford. It furthers
the University’s objective of excellence in research, scholarship, and education
by publishing worldwide. Oxford is a registered trade mark of Oxford University
Press in the UK and certain other countries.
CONTENTS
Acknowledgments vii
Introduction ix
Notes 167
References 185
Index 217
ACKNOWLEDGMENTS
This book owes a lot to discussions that we had with Anna Alexandrova, Marcel
Ausloos, Françoise Balibar, Jean-Philippe Bouchaud, Gigel Busca, John Davis, Xavier
Gabaix, Serge Galam, Nicolas Gaussel, Yves Gingras, Emmanuel Haven, Philippe Le
Gall, Annick Lesne, Thomas Lux, Elton McGoun, Adrian Pagan, Cyrille Piatecki,
Geoffrey Poitras, Jeroen Romboust, Eugene Stanley, and Richard Topol. We want to
thank them. We also thank Scott Parris. We also want to acknowledge the support of
the CIRST (Montréal, Canada), CEREC (University St-Louis, Belgium), GRANEM
(Université d’Angers, France), and LÉO (Université d’Orléans, France). We also thank
Annick Desmeules Paré, Élise Filotas, Kangrui Wang, and Steve Jones. Finally, we wish
to acknowledge the financial support of the Social Sciences and Humanities Research
Council of Canada, the Fonds québécois de recherche sur la société et la culture, and
TELUQ (Fonds Institutionnel de Recherche) for this research. We would like to
thank the anonymous referees for their helpful comments.
INTRODUCTION
Stock market prices exert considerable fascination over the large numbers of people
who scrutinize them daily, hoping to understand the mystery of their fluctuations.
Science was first called in to address this challenging problem 150 years ago. In 1863,
in a pioneering way, Jules Regnault, a French broker’s assistant, tried for the first time
to “tame” the market by creating a mathematical model called the “random walk” based
on the principles of social physics (chapter 1 in this book; Jovanovic 2016). Since then,
many authors have tried to use scientific models, methods, and tools for the same pur-
pose: to pin down this fluctuating reality. Their investigations have sustained a fruitful
dialogue between physics and finance. They have also fueled a common history. In
the mid-1990s, in the wake of some of the most recent advances in physics, a new ap-
proach to dealing with financial prices emerged. This approach is called econophysics.
Although the name suggests interdisciplinary research, its approach is in fact multi-
disciplinary. This field was created outside financial economics by statistical physicists
who study economic phenomena, and more specifically financial markets. They use
models, methods, and concepts imported from physics. From a financial point of view,
econophysics can be seen as the application to financial markets of models from par-
ticle physics (a subfield of statistical physics) that mainly use stable Lévy processes and
power laws. This new discipline is original in many respects and diverges from previous work. Although econophysicists concretized the project initiated in the 1960s by Mandelbrot, who sought to extend statistical physics to finance by modeling stock price variations through Lévy stable processes, they took a different path to get there. They therefore provide new perspectives, which this book investigates.
Over the past two decades, econophysics has carved out a place in the scientific
analysis of financial markets, providing new theoretical models, methods, and results.
The framework that econophysicists have developed describes the evolution of finan-
cial markets in a way very different from that used by the current standard financial
models. Today, although less visible than financial economics, econophysics influences
financial markets and practices. Many “quants” (quantitativists) trained in statistical
physics have carried their tools and methodology into the financial world. According
to several trading-room managers and directors, econophysicists’ phenomenological
approach has modified the practices and methods of analyzing financial data. Hitherto,
these practical changes have concerned certain domains of finance: hedging, portfolio
management, financial crash predictions, and software dedicated to finance. In the
coming decades, however, econophysics could contribute to profound changes in the
entire financial industry. Performance measures, risk management, and all financial
that this approach is an old issue in finance. Many examples of this situation can be
observed in the literature, with each community failing to venture beyond its own per-
spective. A key point is that the vast majority of econophysics publications are written
by econophysicists for physicists, with the result that the field is not easily accessible
to other scholars or readers. This context highlights the necessity to clarify the differ-
ences and similarities between the two disciplines.
The second cause is rooted in the way each discipline deals with its own scien-
tific knowledge. Contrary to what one might think, how science is done depends on
disciplinary processes. Consequently, the ways of producing knowledge are different
in econophysics and financial economics (chapter 4): econophysicists and financial
economists do not build their models in the same way; they do not test their models
and hypotheses with the same procedures; they do not face the same scientific con-
straints even though they use the same vocabulary (in a different manner), and so
on. The situation is simply due to the fact that econophysics remains in the shadow
of physics and, consequently, outside of financial economics. Of course there are
advantages and disadvantages in such an institutional situation (i.e., being outside
of financial economics) in terms of scientific innovations. A methodological study
is proposed in this book to clarify the dissimilarities between econophysics and fi-
nancial economics in terms of modeling. Our analysis also highlights some common
features regarding modeling (chapter 5) by stressing that the scientific criteria any
work must respect in order to be accepted as scientific are very different in these two
disciplines. The gaps in the way of doing science make reading literature from the
other discipline difficult, even for a trained scholar. These gaps underline the needs
for clear explanations of the main concepts and tools used in econophysics and how
they could be used on financial markets.
The third cause is the lack of a framework that could allow comparisons between
results provided by models developed in the two disciplines. For a long time, there
have been no formal statistical tests for validating (or invalidating) the occurrence of
a power law. In finance, satisfactory statistical tools and methods for testing power
laws do not yet exist (chapter 5). Although econophysics can potentially be useful
in trading rooms and although some recent developments propose interesting solu-
tions to existing issues in financial economics (chapter 5), importing econophysics
into finance is still difficult. The major reason goes back to the fact that econophysi-
cists mainly use visual techniques for testing the existence of a power law, while finan-
cial economists use classical statistical tests associated with the Gaussian framework.
This relative absence of statistical (analytical) tests dedicated to power laws in finance
makes any comparison between the models of econophysics and those of financial
economics complex. Moreover, the lack of a homogeneous framework creates difficul-
ties related to the criteria for choosing one model rather than another. These issues
highlight the need for the development of a common framework between these two
fields. Because econophysics literature proposes a large variety of models, the first step
is to identify a generic model unifying key econophysics models. In this perspective,
this book proposes a generalized model characterizing the way econophysicists statis-
tically describe the evolution of financial data. Thereafter, the minimal condition for
a theoretical integration in the financial mainstream is defined (chapter 6). The iden-
tification of such a unifying model will pave the way for its potential implementation
in financial economics.
Despite this difficult dialogue, a number of collaborations between financial econ-
omists and econophysicists have occurred, aimed at increasing exchanges between
the two communities.1 These collaborations have provided useful contributions.
However, they also underline the necessity for a better understanding of the discipli-
nary constraints specific to both fields in order to ease a fruitful association. For in-
stance, as the physicist Dietrich Stauffer explained, “Once we [the economist Thomas
Lux and Stauffer] discussed whether to do a Grassberger-Procaccia analysis of some
financial data … I realized that in this case he, the economist, would have to explain
to me, the physicist, how to apply this physics method” (Stauffer 2004, 3). In the same
vein, some practitioners are aware of the constraints and perspectives specific to each
discipline. The academic and quantitative analyst Emanuel Derman (2001, 2009) is
a notable example of this trend. He has pointed out differences in the role of models
within each discipline: while physicists implement causal (drawing causal inference)
or phenomenological (pragmatic analogies) models in their description of the phys-
ical world, financial economists use interpretative models to “transform intuitive
linear quantities into non-linear stable values” (Derman 2009, 30). These consider-
ations imply going beyond the comfort zone defined by the usual scientific frontiers
within which many authors stay.
This book seeks to make a contribution toward increasing dialogue between the
two disciplines. It will explore what econophysics is and who econophysicists are by
clarifying the position of econophysics in the development of financial economics.
This is a challenging issue. First, there is an extremely wide variety of work aiming to
apply physics to finance. However, some of this work remains outside the scope of
econophysics. In addition, as the econophysicist Marcel Ausloos (2013, 109) claims,
investigations are heading in too many directions, which does not serve the intended
research goal. In this fragmented context, some authors have reviewed existing econo-
physics works by distinguishing between those devoted to “empirical facts” and those
dealing with agent-based modeling (Chakraborti et al. 2011a, 2011b). Other authors
have proposed a categorization based on methodological aspects by differentiating be-
tween statistical tools and algorithmic tools (Schinckus 2012), while still others have
kept to a classical micro/macro opposition (Ausloos 2013). To clarify the approach
followed in this book, it is worth mentioning the historical importance of the Santa
Fe Institute in the creation of econophysics. This institution introduced two compu-
tational ways of describing complex systems that are relevant for econophysics: (1)
the emergence of macro statistical regularity characterizing the evolution of systems;
(2) the observation of a spontaneous order emerging from microinteractions be-
tween components of systems (Schinckus 2017). Methodologically speaking, stud-
ies focusing on the emergence of macro regularities consider the description of the
system as a whole as the target of the analysis, while works dealing with an emerging
spontaneous order seek to reproduce (algorithmically) microinteractions leading the
system to a specific configuration. These two approaches have led to a methodological
created. This book thus offers conceptual tools to surmount the disciplinary barriers
that currently limit the dialogue between these two disciplines. In accordance with
this purpose, the book gives econophysicists an opportunity to have a specific discipli-
nary (financial) perspective on their emerging field.
The book is divided into three parts.
The first part (chapters 1 and 2) focuses on financial economics. It highlights the
scientific constraints this discipline has to face in its study of financial markets. This
part investigates a series of key issues often addressed by econophysicists (but also
by scholars working outside financial economics): why financial economists cannot
easily drop the efficient-market hypothesis; why they could not follow Mandelbrot’s
program; why they consider visual tests unscientific; how they deal with extreme
values; and, finally, why the mathematics used in econophysics creates difficulties in
financial economics.
The second part (chapters 3 and 4) focuses on econophysics. It clarifies econo-
physics’ position in the development of financial economics. This part investigates
econophysicists’ scientific criteria, which are different from those of financial econo-
mists, implying that the scientific benchmark for acceptance differs in the two com-
munities. We explain why econophysicists have to deal with power laws and not with
other distributions; how they describe the problem of infinite variance; how they
model financial markets in comparison with the way financial economists do; why
and how they can introduce innovations in finance; and, finally, why econophysics and
financial economics can be looked on as similar.
The third part (chapters 5 and 6) investigates the potential development of a
common framework between econophysics and financial economics. This part aims at
clarifying some current issues about such a program: what the current uses of econo-
physics in trading rooms are; what recent developments in econophysics allow pos-
sible contributions to financial economics; how the lack of statistical tests for power
laws can be solved; what generative models can explain the appearance of power laws
in financial data; and, finally, how a common framework transcending the two fields by
integrating the best of the two disciplines could be created.
1
FOUNDATIONS OF FINANCIAL ECONOMICS
THE KEY ROLE OF THE GAUSSIAN DISTRIBUTION
as “the law of errors,” made it possible to determine errors of observation (i.e., dis-
crepancies) in relation to the true value of the observed object, represented by the
average. Quételet, like Regnault, applied the Gaussian distribution, which was con-
sidered as one of the most important scientific results founded on the central-limit
theorem (which explains the occurrence of the normal distribution in nature),6 to
social phenomena.
More precisely, the normal law allowed Regnault to determine the true value of a se-
curity that, according to the “law of errors,” is the security’s long-term mean value.
He contrasted this long-term determination with a short-term random walk that was
mainly due to the shortsightedness of agents. In Regnault’s view, short-term valua-
tions of a security are subjective and subject to error and are therefore distributed
in accordance with the normal law. As a result, short-term valuations fall into two
groups spread equally about a security’s value: the “upward” and the “downward.”
In the absence of new information, transactions cause the price to gravitate around
this value, leading Regnault to view short-term speculation as a “toss of a coin” game
(1863, 34).
In a particularly innovative manner, Regnault likened stock price variations to a
random walk, although that term was never employed.7 On account of the normal
distribution of short-term valuations, the price had an equal probability of lying
above or below the mean value. If these two probabilities were different, Regnault
pointed out, actors could resort to arbitrage8 by choosing to systematically follow
the movement having the highest probability (Regnault 1863, 41). Similarly, as in
the toss of a coin, rises and falls of stock market prices are independent of each other.
Consequently, since neither a rise nor a fall can anticipate the direction of future
variations (Regnault 1863, 38), Regnault explained, there could be no hope of short-
term gain. Lastly, he added, a security’s current price reflects all available public infor-
mation on which actors base their valuation of it (Regnault 1863, 29–30). Therefore,
with Regnault, we have a perfect representation of stock market variations using a
random-walk model.9
Another important contribution from Regnault is that he tested his hypothesis of
the random nature of short-term stock market variations by examining a mathemat-
ical property of this model, namely that deviations increase proportionately with the
square root of time. Regnault validated this property empirically using the monthly
prices from the French 3 percent bond, which was the main bond issued by the gov-
ernment and also the main security listed on the Paris Stock Exchange. It is worth
mentioning that at this time quoted prices and transactions on the official market of the Paris Stock Exchange were systematically recorded,10 allowing statistical tests. Such
an obligation did not exist in other countries. In all probability the inspiration for this
test was once again the work of Quételet, who had established the law on the increase
of deviations (1848, 43 and 48). Although the way Regnault tested his model was
different from the econometric tests used today ( Jovanovic 2016; Jovanovic and Le
Gall 2001; Le Gall 2006), the empirical determination of this law of the square root
of time thus constituted the first result to support the random nature of stock market
variations.
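The square-root-of-time property that Regnault tested can be illustrated with a minimal simulation. The sketch below (Python, with arbitrary parameters rather than Regnault's bond data) generates arithmetic random walks and checks that the mean absolute deviation after h steps grows like the square root of h.

```python
# Minimal sketch of the square-root-of-time law on a simulated random walk.
# Parameters are illustrative, not Regnault's 3 percent bond data.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 10_000, 256
steps = rng.normal(0.0, 1.0, size=(n_paths, n_steps))  # i.i.d. Gaussian "errors"
paths = np.cumsum(steps, axis=1)                        # arithmetic random walks

for h in (4, 16, 64, 256):
    dev = np.abs(paths[:, h - 1]).mean()                # mean absolute deviation after h steps
    print(f"horizon {h:3d}: mean |deviation| = {dev:6.2f}, ratio to sqrt(h) = {dev / np.sqrt(h):.3f}")
# The ratio stays roughly constant (about 0.8, i.e. sqrt(2/pi)): deviations grow like sqrt(t).
```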
$$p(z,t) = \int_{-\infty}^{+\infty} p(x,t_1)\, p(z-x,t_2)\, dx, \quad \text{with } t = t_1 + t_2, \qquad (1.1)$$
where $P_{z,t_1+t_2}$ designates the probability that price z will be quoted at time $t_1 + t_2$, knowing that price x was quoted at time $t_1$. Bachelier then established the probability of transition as $\sigma W_t$—where $W_t$ is a Brownian movement:14
$$p(x,t) = \frac{1}{2\pi k \sqrt{t}}\, e^{-\frac{x^2}{4\pi k^2 t}}, \qquad (1.2)$$
where t represents time, x a price of the security, and k a constant. Bachelier next ap-
plied his double-demonstration principle to the “two problems of the theory of speculation” that he proposed to resolve: the first establishes the probability of a given price being reached or exceeded at a given time—that is, the probability of a “prime,” which was an asset similar to a European option,15 being exercised, while the second seeks the probability of a given price being reached or exceeded before a given time (Bachelier 1900, 81)—which amounts to determining the probability of an American option being exercised.16
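As an illustration of equations (1.1) and (1.2), the short sketch below checks numerically that the Gaussian transition density satisfies the convolution property. The value of k, the horizons, and the integration grid are arbitrary choices for the example, not values taken from Bachelier.

```python
# Numerical check that Bachelier's transition density,
#   p(x, t) = exp(-x**2 / (4*pi*k**2*t)) / (2*pi*k*sqrt(t)),
# satisfies equation (1.1): p(z, t1+t2) = integral of p(x, t1) * p(z-x, t2) dx.
import numpy as np

def p(x, t, k=0.5):
    return np.exp(-x**2 / (4 * np.pi * k**2 * t)) / (2 * np.pi * k * np.sqrt(t))

t1, t2 = 1.0, 2.0
x = np.linspace(-30.0, 30.0, 4001)      # integration grid for the intermediate price
dx = x[1] - x[0]
z = np.array([-2.0, 0.0, 1.5, 5.0])     # a few price changes at which to compare

lhs = p(z, t1 + t2)                                                   # left-hand side of (1.1)
rhs = np.array([(p(x, t1) * p(zi - x, t2)).sum() * dx for zi in z])   # right-hand side

print(np.max(np.abs(lhs - rhs)))        # close to zero: the Gaussian density is consistent with (1.1)
```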
His 1901 article, “Théorie mathématique du jeu,” enabled him to generalize the
first results contained in his thesis by moving systematically from discrete time to
continuous time and by adopting what he called a “hyperasymptotic” point of view.
The “hyperasymptotic” was one of Bachelier’s central concerns and one of his major
contributions. “Whereas the asymptotic approach of Laplace deals with the Gaussian
limit, Bachelier’s hyperasymptotic approach deals with trajectories,” as Davis and
Etheridge point out (2006, 84). Bachelier was the first to apply the trajectories of
Brownian motion, making a break from the past and anticipating the mathematical
finance developed since the 1960s (Taqqu 2001). Bachelier was thus able to prove the
results in continuous time of a number of problems in the theory of gambling that the
calculation of probabilities had dealt with since its origins.
For Bachelier, as for Regnault, the choice of the normal distribution was not only dic-
tated by empirical data but mainly by mathematical considerations. Bachelier’s interest
was in the mathematical properties of the normal law (particularly the central-limit the-
orem) for the purpose of demonstrating the equivalence of results obtained using math-
ematics in continuous time and those obtained using mathematics in discrete time.
leads to the same result that we had arrived at when supposing the application of the
law of error [i.e., the normal law]” (Bronzin 1908, 195). In other words, Bronzin used
the normal law in the same way as Regnault, since it allowed him to determine the
probability of price fluctuations (Bronzin 1908 in Hafner and Zimmermann 2009,
188). In all these pioneering works, it appears that the Gaussian distribution and the
hypothesis of random character of stock market variations were closely linked with
the scientific tools available at the time (and particularly the central-limit theorem).
The works of Bachelier, Regnault, and Bronzin have continued to be used and
taught since their publication (Hafner and Zimmermann 2009; Jovanovic 2004,
2012, 2016). However, despite these writers’ desire to create a “science of the stock ex-
change,” no research movement emerged to explore the random nature of variations.
One of the reasons for this was the opposition of economists to the mathematization
of their discipline (Breton 1991; Ménard 1987). Another reason lay in the insufficient
development of what is called modern probability theory, which played a key role in
the creation of financial economics in the 1960s (we will detail this point later in this
chapter).
Development of continuous-time probability theory did not truly begin until
1931, before which the discipline was not fully recognized by the scientific community
(Von Plato 1994). However, a number of publications aimed at renewing this theory
emerged between 1900 and 1930.17 During this period, several authors were working
on random variables and on the generalization of the central-limit theorem, including
Sergei Natanovich Bernstein, Alexandre Liapounov, Georges Polya, Andrei Markov,18
and Paul Lévy. Louis Bachelier (Bachelier 1900, 1901, 1912), Albert Einstein (1905),
Marian von Smoluchowski (1906),19 and Norbert Wiener (1923)20 were the first to
propose continuous-time results, on Brownian motion in particular. However, up until
the 1920s, during which decade “a new and powerful international progression of the
mathematical theory of probabilities” emerged (due above all to the work of Russian
mathematicians such as Kolmogorov, Khintchine, Markov, and Bernstein), this work
remained known and accessible only to a few specialists (Cramer 1983, 8). For ex-
ample, the work of Wiener (1923) was difficult to read before the work of Kolmogorov
published during the 1930s, while Bachelier’s publications (1901, 1900, 1912) were
hardly readable, as witnessed by the error that Paul Lévy (one of the rare mathemati-
cians working in this field) believed he had detected.21 The 1920s were a period of very
intensive research into probability theory—and into continuous-time probabilities in
particular—that paved the way for the construction of modern probability theory.
Modern probability theory was properly created in the 1930s, in particular
through the work of Kolmogorov, who proposed its main founding concepts: he in-
troduced the concept of probability space, defined the concept of the random vari-
able as we know it today, and also dealt with conditional expectation in a totally new
manner (Cramer 1983, 9; Shafer and Vovk 2001, 39). Since his axiom system is the
basis of the current paradigm of the discipline, Kolmogorov can be seen as the father
of this branch of mathematics. Kolmogorov built on Bachelier’s work, which he con-
sidered the first study of stochastic processes in continuous time, and he generalized
on it in his 1931 article.22 From these beginnings in the 1930s, modern probability
theory became increasingly influential, although it was only after World War II that
Kolmogorov’s axioms became the dominant paradigm in the discipline (Shafer and
Vovk 2005, 54–55).
It was also after World War II that the American probability school was born.23 It
was led by Joseph Doob and William Feller, who had a major influence on the con-
struction of modern probability theory, particularly through their two main books,
published in the early 1950s (Doob 1953; Feller 1957), which proved, on the basis of
the framework laid down by Kolmogorov, all results obtained prior to the 1950s, ena-
bling their acceptance and integration into the discipline’s theoretical corpus (Meyer
2009; Shafer and Vovk 2005, 60).
In other words, modern probability theory was not accessible for analyzing stock
markets and finance until the 1950s. Consequently, it would have been exceedingly
difficult to create a research movement before that time, and this limitation made the
possibility of a new discipline such as financial economics prior to the 1960s unlikely.
However, with the emergence of econometrics in the United States in the 1930s, an
active research movement into the random nature of stock market variations and their
distribution did emerge, paving the way for financial econometrics.
Working (1934) started from the notion that the movements of price series “are
largely random and unpredictable” (1934, 12). He constructed a series of random re-
turns with random drawings generated by a Tippett table26 based on the normal distri-
bution. He assumed a Gaussian distribution because of “the superior generality of the
‘normal’ frequency distribution” (1934, 16). This position was common at this time
for authors who studied price fluctuations (Cover 1937; Bowley 1933): the normal
distribution was viewed as the starting point of any work in econometrics. This pre-
sumption was reinforced by the fact that all existing statistical tests were based on
the Gaussian framework. Working compared his random series graphically with the
real series, and noted that the artificially created price series took the same graphic
shapes as the real series. His methodology was similar to that used by Slutsky ([1927]
1937) in his econometric work, which aimed to demonstrate that business cycles
could be caused by an accumulation of random events (Armatte 1991; Hacking 1990;
Le Gall 1994; Morgan 1990).27 Slutsky proposed a graphical comparison between
a random series and an observed price series. Slutsky and Working considered that,
if price variations were random, they must be distributed according to the Gaussian
distribution.
The second researcher affiliated with the Cowles Commission, Cowles himself,
followed the same path: he tested the random character of returns (price variations),
and he postulated that these price variations were ruled by the normal distribution.
Cowles (1933), for his part, attempted to determine whether stock market profes-
sionals (financial services and chartists) were able to predict stock market variations,
and thus whether they could realize better performance than the market itself or than
random management. He compared the evolution of the market with the perfor-
mances of fictional portfolios based on the recommendations of 16 professionals.
He found that the average annual return of these portfolios was appreciably inferior
to the average market performance; and that the best performance could have been
attained by buying and selling stocks randomly. It is worth mentioning that the desire
to prove the unpredictability of stock market variations led authors occasionally to
make contestable interpretations in support of their thesis ( Jovanovic 2009b).28 In
addition, Cowles and Jones (1937), whose article sought to demonstrate that stock
price variations are random, compared the distribution of price variations with a
normal distribution because, for these authors, the normal distribution was the
means of characterizing chance in finance.29 Like Working, Cowles and Jones sought
to demonstrate the independence of stock price variations and made no assumption
about distribution.
The work of Cowles and Working was followed in 1953 by a statistical study by
the English statistician Maurice Kendall. Although his work used more technical sta-
tistical tools, reflecting the evolution of econometrics, the Gaussian distribution was
still viewed as the statistical framework describing the random character of time series,
and no other distribution was considered when using econometrics or statistical
tests. Kendall in turn considered the possibility of predicting financial-market prices.
Although he found weak autocorrelations in series and weak delayed correlations
between series, Kendall concluded that “a kind of economic Brownian motion” was
this context provides an understanding of some of the main theoretical and method-
ological foundations of contemporary financial economics. We will detail this point in
the next section when we study how the hard core of this discipline was constituted.
1959b, 808). In 1959, his observation that the distribution of prices did not follow
the normal distribution led him to perform a log-linear transformation to obtain the
normal distribution. According to Osborne, this distribution facilitated empirical tests
and linked with results obtained in other scientific disciplines. He also proposed con-
sidering the price-ratio logarithm, $\log \frac{P_{t+1}}{P_t}$, which constitutes a fair approximation
of returns for small deviations (Osborne 1959a, 149). He then showed that deviations
in the price-ratio logarithm are proportional to the square root of time, and validated
this result empirically. This change, which leads to consideration of the logarithmic returns of stocks rather than of their prices, was retained in later work, because it pro-
vides an assurance of the stationarity of the stochastic process. It is worth mention-
ing that such a transformation was already suggested by Bowley (1933) for the same
reasons: bringing the series back to the normal distribution, the only one allowing the use of statistical tests at this time. This transformation shows the importance of the mathematical properties that authors relied on in order to keep the normal distribution as the main descriptive framework.
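A small numerical illustration of this transformation (the prices are made-up values, purely for the example) shows why the log price ratio is treated as a fair approximation of the return when deviations are small.

```python
# Osborne's transformation: for small price moves, log(P_{t+1}/P_t) is close to the
# simple return (P_{t+1} - P_t)/P_t. Prices below are invented for illustration.
import numpy as np

prices = np.array([100.0, 101.2, 100.7, 102.0, 101.5])
log_returns = np.diff(np.log(prices))            # log(P_{t+1}) - log(P_t)
simple_returns = np.diff(prices) / prices[:-1]   # (P_{t+1} - P_t) / P_t

for lr, sr in zip(log_returns, simple_returns):
    print(f"log return {lr:+.5f}   simple return {sr:+.5f}")
# For moves of about one percent the two agree to within roughly 1e-4.
```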
The random processes used at that time have also been updated in the light of
more recent mathematics. Samuelson (1965a) and Mandelbrot (1966) criticized
the overly restrictive character of the random-walk (or Brownian-motion) model,
which was contradicted by the existence of empirical correlations in price move-
ments. This observation led them to replace it with a less restrictive model: the
martingale model. Let us remember that a series of random variables Pt adapted to
( Φ;0 ≤ n ≤ N ) is a martingale if E(Pt+1 Φ t ) = Pt , where E(. / Φt ) designates the condi-
tional expectation in relation to (Φt) which is a filtration.36 In financial terms, if one
considers a set of information Φt increasing over time with t representing time and
Pt ∈Φ t , then the best estimation—in line with the method of least squares—of the
price (Pt+1) at the time t + 1 is the price (Pt) in t. In accordance with this definition, a
random walk is therefore a martingale. However, the martingale is defined solely by
a conditional expectation, and it imposes no restriction of statistical independence
or stationarity on higher conditional moments—in particular the second moment
(i.e., the variance). In contrast, a random-walk model requires that all moments in
the series are independent37 and defined. In other terms, from a mathematical point
of view, the concept of a martingale offers a more generalized framework than the
original version of random walk for the use of stochastic processes as a description
of time series.
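The distinction between a martingale and a random walk can be made concrete with a short simulation. In the sketch below (an ARCH-style recursion with illustrative parameters, not a model taken from the text), the increments have zero conditional mean, so the price level is a martingale, but their variance depends on the past, so the process is not a random walk with independent and identically distributed increments.

```python
# A martingale that is not a random walk: increments have zero conditional mean,
# but their volatility depends on the previous increment (ARCH-like feedback).
# The parameters 0.2 and 0.5 are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
p = np.zeros(n)
for t in range(1, n):
    prev_inc = p[t - 1] - p[t - 2] if t >= 2 else 0.0
    sigma_t = np.sqrt(0.2 + 0.5 * prev_inc**2)        # conditional volatility
    p[t] = p[t - 1] + sigma_t * rng.normal()          # E[P_t | past] = P_{t-1}

inc = np.diff(p)
print("mean increment             :", round(inc.mean(), 4))                            # ~0 (fair game)
print("corr(inc_t, inc_{t-1})     :", round(np.corrcoef(inc[1:], inc[:-1])[0, 1], 3))  # ~0
print("corr(inc_t^2, inc_{t-1}^2) :", round(np.corrcoef(inc[1:]**2, inc[:-1]**2)[0, 1], 3))  # clearly positive
```

The price level satisfies the martingale condition stated above, yet squared increments are autocorrelated, which a random-walk model with independent increments rules out.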
Prior to the 1960s, finance in the United States was taught mainly in business
schools. The textbooks used were very practical, and few of them touched on what
became modern financial theory. The research work that formed the basis of modern
financial theory was carried out by isolated writers who were trained in economics
or were surrounded by economists, such as Working, Cowles, Kendall, Roy, and
Markowitz.38 No university community devoted to the new subjects and methods
existed prior to the 1960s. During the 1960s and 1970s, training in American busi-
ness schools changed radically, becoming more “rigorous.”39 They began to “acade-
micize” themselves, recruiting increasing numbers of economics professors who
taught in university economics departments, such as Merton H. Miller (Fama 2008).
Similarly, prior to offering their own doctoral programs, business schools recruited
PhD students who had been trained in university economics departments ( Jovanovic
2008; Fourcade and Khurana 2009). The members of this new scientific community
shared common tools, references, and problems thanks to new textbooks, seminars, and scientific journals. The two journals that had published articles in finance, the
Journal of Finance and the Journal of Business, changed their editorial policy during the
1960s: both started publishing articles based on modern probability theory and on
modeling (Bernstein 1992, 41–44, 129).
The recruitment of economists interested in questions of finance unsettled teach-
ing and research as hitherto practiced in business schools and inside the American
Finance Association. The new recruits brought with them their analysis frameworks,
methods, hypotheses, and concepts, and they were also familiar with the new math-
ematics that arose out of modern probability theory. These changes and their conse-
quences were substantial enough for the American Finance Association to devote part
of its annual meeting to them in two consecutive years, 1965 and 1966.
At the 1965 annual meeting of the American Finance Association an entire ses-
sion was devoted to the need to rethink courses in finance curricula. At the 1966
annual meeting, the new president of the American Finance Association, Paul Weston,
presented a paper titled “The State of the Finance Field,” in which he talked of the
changes being brought about by “the creators of the New Finance [who] become im-
patient with the slowness with which traditional materials and teaching techniques
move along” (Weston 1967, 539).40 Although these changes elicited many debates
( Jovanovic 2008; MacKenzie 2006; Whitley 1986a, 1986b; Poitras and Jovanovic
2007, 2010), none succeeded in challenging the global movement.
The antecedents of these new actors were a determining factor in the institution-
alization of modern financial theory. Their background in economics allowed them
to add theoretical content to the empirical results that had been accumulated since
the 1930s and to the mathematical formalisms that had arisen from modern prob-
ability theory. In other words, economics brought the theoretical content that was
missing and that had been underlined by Working and Roberts. Working (1961,
1958, 1956) and Roberts (1959) were the first authors to suggest a theoretical ex-
planation of the random character of stock market prices by using concepts and
theories from economics. Working (1956) established an explicit link between the
unpredictable arrival of information and the random character of stock market price
changes. However, this paper made no link with economic equilibrium and, prob-
ably for this reason, was not widely circulated. Instead it was Roberts (1959, 7) who
first suggested a link between economic concepts and the random-walk model by using the “arbitrage proof” argument that had been popularized by Modigliani and
Miller (1958). This argument is crucial in financial economics: it made it possible
to demonstrate the existence of equilibrium in uncertainty when there is no oppor-
tunity for arbitrage. Cowles (1960, 914–15) then made an important step forward by identifying a link between financial econometric results and economic equilibrium. Finally, two years later, Cootner (1962, 25) linked the random-walk model, information, and economic equilibrium, and set out the idea of the efficient-market hypothesis, although he did not use that expression. It was a University of Chicago scholar, Eugene Fama, who formulated the efficient-market hypothesis, giving it its
first theoretical account in his PhD thesis, defended in 1964 and published the next
year in the Journal of Business. Then, in his 1970 article, Fama set out the hypothesis of
efficient markets as we know it today (we return to this in detail in the next section).
Thus, at the start of the 1960s, the random nature of stock market variations began to
be associated both with the economic equilibrium of a free competitive market and
with the building of information into prices.
The second illustration of how economics brought theoretical content to mathe-
matical formalisms is the capital-asset pricing model (CAPM). In finance, the CAPM is used to determine a theoretically appropriate required rate of return for an asset, if the asset is to be added to an already well-diversified portfolio, given the asset’s nondiversifiable risk. The model takes into account the asset’s sensitivity to nondiversifiable risk (also known as systematic risk or market risk or beta), as well as the expected market return and the expected return of a theoretical risk-free asset. This model is
used for pricing an individual security or a portfolio. It has become the cornerstone of
modern finance (Fama and French 2004). The CAPM is also built using an approach
familiar to economists for three reasons. First, some sort of maximizing behavior on
the part of participants in a market is assumed;41 second, the equilibrium conditions
under which such markets will clear are investigated; and third, markets are assumed to be perfectly competitive. Consequently, the CAPM provided a standard financial theory for market
equilibrium under uncertainty.
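As a minimal sketch of the relation just described (numbers invented for illustration), the required return implied by the CAPM is the risk-free rate plus beta times the expected excess return of the market.

```python
# CAPM required return: E[R_i] = R_f + beta_i * (E[R_m] - R_f).
# All input values are made up for the example.
def capm_expected_return(risk_free: float, beta: float, expected_market: float) -> float:
    return risk_free + beta * (expected_market - risk_free)

# An asset with beta = 1.3 when the risk-free rate is 2% and the expected market return is 8%:
print(capm_expected_return(risk_free=0.02, beta=1.3, expected_market=0.08))  # 0.098, i.e. 9.8%
```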
In conclusion, this combination of economic developments with probability theory led to the creation of a truly homogeneous academic community whose actors
shared common problems, common tools, and a common language that contributed
to the emergence of a research movement.
Beginning in the 1950s, computers gradually found their way into financial institu-
tions and universities (Sprowls 1963, 91). However, owing to the costs of using them
and their limited calculation capacity, “It was during the next two decades, starting
in the early 1960s, as computers began to proliferate and programming languages
and facilities became generally available, that economists more widely became users”
(Renfro 2009, 60). The first econometric modeling languages began to be developed
during the 1960s and the 1970s (Renfro 2004, 147). From the 1960s on, computer
programs began to appear in increasing numbers of undergraduate, master’s, and doc-
toral theses. As computers came into more widespread use, easily accessible databases
were constituted, and stock market data could be processed in an entirely new way
thanks to, among other things, financial econometrics (Louçã 2007). Financial econ-
ometrics marked the start of a renewal of investigative studies on empirical data and
the development of econometric tests. With computers, calculations no longer had
to be performed by hand, and empirical study could become more systematic and
conducted on a larger scale. Attempts were made to test the random nature of stock
market variations in different ways. Markowitz’s hypotheses were used to develop spe-
cific computer programs to assist in making investment decisions.42
In addition, computers allowed the creation of databases on the evolution of stock
market prices. They were used as “bookkeeping machines” recording data on phe-
nomena. Chapter 2 will discuss the implications of these new data on the analysis of
the probability distribution. Of the databases created during the 1960s, one of the
most important was set up by the Graduate School of Business at the University of
Chicago, one of the key institutions in the development of financial economics. In
1960, two University of Chicago professors, James Lorie and Lawrence Fisher, started
an ambitious four-year program of research into security prices (Lorie 1965, 3). They
created the Center for Research in Security Prices (CRSP). Roberts worked with
them too. One of their goals was to build a huge computer database of stock prices
to determine the returns of different investments. The first version of this database,
which collected monthly prices from the New York Stock Exchange (NYSE) from
January 1926 through December 1960, greatly facilitated the emergence of empirical
studies. Apart from its exhaustiveness, it provided a history of stock market prices and
systematic updates.
The creation of empirical databases triggered a spectacular development of finan-
cial econometrics. This development also owed much to the scientific criteria pro-
pounded by the new community of researchers, who placed particular importance
on statistical tests. At the time, econometric studies revealed very divergent results
regarding the representation of stock market variations by a random-walk model with
the normal distribution. Economists linked to the CRSP and the Graduate School of
Business at the University of Chicago—such as Moore (1962) and King (1964)—
validated the random-walk hypothesis, as did Osborne (1959a, 1962), and Granger
and Morgenstern (1964, 1963). On the other hand, work conducted at MIT and
Harvard University established dependencies in stock market variations. For example,
Alexander (1961), Houthakker (1961), Cootner (1962), Weintraub (1963), Steiger
(1963), and Niederhoffer (1965) highlighted the presence of trends.43 Trends had
what the efficient-market hypothesis should be, but this hypothesis does not really
reach this goal.
To establish this link, Fama extended the early theoretical thinking of the 1960s
and transposed onto financial markets the concept of free competitive equilibrium on
which rational agents would act (1965b, 56). Such a market would be characterized
by the equalization of stock prices with their equilibrium value. This value is deter-
mined by a valuation model the choice of which is irrelevant for the efficient-market
hypothesis.45 The latter considers that the equilibrium model valued stocks using all
available information in accordance with the idea of competitive markets. Thus, on
an efficient market, equalization of the price with the equilibrium value meant that
all available information was included in prices.46 Consequently, that information is
of no value in predicting future price changes, and current and future prices are inde-
pendent of past prices. For this reason, Fama considered that, in an efficient market,
price variations should be random, like the arrival of new information, and that it is
impossible to achieve performance superior to that of the market (Fama 1965a, 35, 98).
A random-walk model thus made it possible to simulate dynamic evolution of prices
in a free competitive market that is in constant equilibrium.
For the purpose of demonstrating these properties, Fama assumed the existence
of two kinds of traders: the “sophisticated traders” and the normal ones. Fama’s key
assumption was the existence of “sophisticated traders” who, due to their skills, make
a better estimate of the intrinsic/fundamental value than other agents do by using all
available information. Moreover, Fama assumes that “although there are sometimes
discrepancies between actual prices and intrinsic values, sophisticated traders in ge-
neral feel that actual prices usually tend to move toward intrinsic values” (1965a, 38).
According to Fama’s hypothesis, “sophisticated traders” are better than other agents
at determining the equilibrium value of stocks, and since they share the same valu-
ation model for asset prices and since their financial abilities are superior to those of
other agents (Fama 1965a, 40), their transactions will help prices trend toward the
fundamental value that these sophisticated traders share. Fama added, using arbitrage
reasoning, that any new information is immediately reflected in prices and that the ar-
rival of information and the effects of new information on the fundamental value are
independent (1965a, 39). The independence of stock market fluctuations, the inde-
pendence of the arrival of new information, and the absence of profit made the direct
connection with the random-walk hypothesis possible. In other words, on the basis of
assumptions about the existence of these sophisticated traders’ having financial abili-
ties superior to those of other agents, Fama showed that the random nature of stock
market variations is synonymous with dynamic economic equilibrium in a free com-
petitive market.
But when the time came to demonstrate mathematically the intuition of the link
between information and the random (independent) nature of stock market varia-
tions, Fama became elusive. He explicitly attempted to link the efficient-market hypo-
thesis with the random nature of stock market variations in his 1970 article. Seeking
to generalize, he dropped all direct references to fundamental value. The question of
the number of “sophisticated traders” required to obtain efficiency (which Fama was
unable to answer) was resolved by being dropped. Consequently, all agents were as-
sumed to be perfectly rational and to have the same model for evaluating the price of
financial assets (i.e., the representative-agent hypothesis). Finally, he kept the general
hypothesis that “the conditions of market equilibrium can (somehow) be stated in
terms of expected returns” (1970, 384). He formalized this hypothesis by using the
definition of a martingale:
$$E(\tilde{P}_{j,t+1} \mid \Phi_t) = \left[1 + E(\tilde{r}_{j,t+1} \mid \Phi_t)\right] P_{j,t}, \quad \text{with } \tilde{r}_{j,t+1} = \frac{\tilde{P}_{j,t+1} - P_{j,t}}{P_{j,t}}, \qquad (1.3)$$
where the tilde indicates that the variable is random, $P_j$ and $r_j$ represent the price and the one-period return of asset j, $E(\cdot \mid \cdot)$ the conditional expectation operator, and $\Phi_t$ all information at the time t.
This equation implies that “the information Φt would be determined from the par-
ticular expected return theory at hand” (1970, 384). Fama added that “this is the sense
in which Φt is ‘fully reflected’ in the formation of the price Pj,t” (1970, 384). To test the hypothesis of informational efficiency, he suggested that from this equation one can
obtain the mathematical expression of a fair game, which is one of the characteristics
of a martingale model and a random-walk model. Demonstration of this link would
ensure that a martingale model or a random-walk model could test the double charac-
teristic of efficiency: total incorporation of information into prices and the nullity of
expected return.
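The step from equation (1.3) to a fair game can be sketched as follows (a reconstruction in the notation of the equation above, not a quotation from Fama). Define the excess return over its conditional expectation:
$$z_{j,t+1} = r_{j,t+1} - E(\tilde{r}_{j,t+1} \mid \Phi_t), \qquad \text{so that} \qquad E(\tilde{z}_{j,t+1} \mid \Phi_t) = 0.$$
The sequence $\{z_{j,t+1}\}$ is then a “fair game” with respect to the information sequence $\{\Phi_t\}$: whatever information in $\Phi_t$ is used, the expected excess return over what the equilibrium model predicts is zero.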
This is the best-known and most widely used formulation of the efficient-market hypo-
thesis. However, it is important to mention that the history of the efficient-market
hypothesis went beyond the Fama (1970) article. Indeed, in 1976, LeRoy showed
that Fama’s demonstration is tautological and that his hypothesis is not testable. Fama
answered by changing his definition and admitted that any test of the efficient-market
hypothesis is a test of both market efficiency and the model of equilibrium used by in-
vestors (Fama 1976). Moreover, he modified his mathematical formulation and made
his definition of efficiency more precise:
$$E_m(R_{j,t} \mid \Phi^m_{t-1}) = E(R_{j,t} \mid \Phi_{t-1}), \qquad (1.4)$$
where $E_m(R_{j,t} \mid \Phi^m_{t-1})$ is the equilibrium expected return on security j implied by the set of information used by the market at t − 1, $\Phi^m_{t-1}$, and $E(R_{j,t} \mid \Phi_{t-1})$ is the true expected return implied by the set of information available at t − 1, $\Phi_{t-1}$. From then on,
efficiency presupposes that, using Fama’s own terms, the market “correctly” evaluates
the “true” density function conditional on all available information. Thus, in an effi-
cient market, the true model for valuing the equilibrium price is available to agents.
To test efficiency, Fama reformulated the expected return by introducing a distinction
between price—defined by the true valuation model—and agents’ expectations. The
test consisted in verifying whether the return expected by the market based on the in-
formation used, $\Phi^m_{t-1}$, is equal to the expectation of true return obtained on the basis of all information available, $\Phi_{t-1}$. This true return is obtained by using the “true” model
for determining the equilibrium price. Fama proposed testing the efficiency in two
ways, both of which relied on the same process. The first test consisted in verifying
whether “trading rules with abnormal expected returns do not exist” (1976, 144). In
other words, this was a matter of checking that one could obtain the same return as that
provided by the true model of assessment of the equilibrium value on the one hand
and the set of available information on the other hand. The second test would look
more closely at the set of information. It was to verify that “there is no way to use the
information $\Phi_{t-1}$ available at t − 1 as the basis of a correct assessment of the expected
return on security j which is other than its equilibrium expected value” (1976, 145).
At the close of his 1976 article, Fama answered LeRoy’s criticisms: the new defini-
tion of efficiency was a priori testable (we will make this point more precise hereafter).
It should be noted, however, that the definition of efficiency had changed: it now re-
ferred to the true model for assessing the equilibrium value. For this reason, testing
efficiency required also testing that agents were using the true assessment model for
the equilibrium value of assets.47 The test would, then, consist of using a model for set-
ting the equilibrium value of assets—the simplest would be to take the model actually
used by operators—and determining the returns that the available information would
generate; then to use the same model with the information that agents use. If the same
result were obtained—that is, if equation (1.4) was verified—then all the other infor-
mation would indeed have been incorporated into prices. It is striking to note that this
test is independent of the random nature of stock market variations. This is because, in
this 1976 article, there is no more talk of random walk or martingale; no connection
with a random process is necessary to test efficiency. Despite this important conclu-
sion, Fama’s article (1976) is rarely cited. Almost all authors refer to the 1970 article
and keep the idea that to validate the random nature of stock market variations means
validating market efficiency.
The precise linkage proposed by Fama was, however, only one of many possible
linkages, as subsequent literature would demonstrate. LeRoy (1973) and Lucas
(1978) provided theoretical proofs that efficient markets and the martingale hypo-
thesis are two distinct ideas: a martingale is neither necessary nor sufficient for an
efficient market. In a similar way, Samuelson (1973), who gave a mathematical proof
that prices may be permanently equal to the intrinsic value and fluctuate randomly,
explained that the making of profits by some agents cannot be ruled out, contrary to
the original definition of the efficient-market hypothesis. De Meyer and Saley (2003)
showed that stock market prices follow a martingale even if all available information is
not reflected in the prices.
A proliferation of theoretical developments combined with the accumulation of
empirical work led to a confusing situation. Indeed, the definition of efficient markets
has changed depending on the emphasis placed by each author on a particular feature.
For instance, Fama et al. (1969, 1) defined an efficient market as “a market that adjusts
rapidly to new information”; Jensen (1978, 96) stated that “a market is efficient with
respect to information set θt if it is impossible to make economic profit by trading on
the basis of information set θt”; and according to Malkiel (1992), “The market is said
to be efficient with respect to some information set … if security prices would be
(Schinckus 2008, 2012; McGoun 1997; Macintosh 2003; Macintosh et al. 2000).
Third, the choice of the Gaussian framework is directly related to the need to de-
velop statistical tests, which, due to the state of science, cannot be separated from the
Gaussian distribution and the central-limit theorem.
Finally, the efficient-market hypothesis represents an essential result for financial economics, but one that is founded on a consensus that leads to acceptance of the hypothesis independent of the question of its validity (Gillet 1999, 10). The reason is easily understood: by linking financial facts with economic concepts, the efficient-market hypothesis enabled financial economics to become a proper subfield of economics and consequently to be recognized as a scientific field. Having provided this link, the efficient-market hypothesis became the founding hypothesis of the hard core
of financial economics.
influential paper by Samuelson (1965b) was missing from the edited volume, Cootner
(1964) did provide, along with other studies of option pricing, an English translation
of Bachelier’s 1900 thesis and a chapter by Case Sprenkle (1961) where the partial-
differential-equation-based solution procedure employed by Black and Scholes was
initially presented (MacKenzie 2003, 2007). With the aim of setting a price for op-
tions, Black and Scholes took the CAPM as their starting point, using this model of
equilibrium to construct a null-beta portfolio made up of one unit of the underlying
asset and a certain quantity of sold options.50
Black and Scholes (1973) marked the beginning of another scientific movement—
concerned with contingent securities pricing51—that was to be larger in practical
impact and substantially deeper in analytical complexity. The Black-Scholes-Merton
model is based on the creation of a replicating portfolio that, if the model is clearly
specified and its hypotheses tested, holds out the possibility of locally eliminating risk
in financial markets.52 From a theoretical point of view, this model allows for a partic-
ularly fruitful connection with the Arrow-Debreu general-equilibrium model, giving
it a degree of reality for the first time. Indeed, Arrow and Debreu (1954) and later
Debreu (1959) were able to model an uncertain economy and show the existence
of at least one competitive general equilibrium—which, moreover, had the property of
being Pareto-optimal if as many markets as contingent assets were opened. When a
market system is in equilibrium according to Arrow-Debreu’s framework, it is said to
be complete. Otherwise, it is said to be incomplete. Black-Scholes-Merton’s model
gave reality to this system of complete markets by allowing that any contingent claim
asset is replicable by basic assets.53 This model takes on special theoretical importance,
then, because it ties results from financial economics more closely to the concept of
equilibrium from economic science.
The theories of the hard core of financial economics have had a very strong
impact on the practices of the financial industry (MacKenzie and Millo 2009; Millo
and Schinckus 2016). The daily functioning of financial markets today rests, around the clock, on concepts, theories, and models that have been defined in financial economics (MacKenzie 2006). What operators on today’s financial markets do
is based on stochastic calculation, benchmarks, informational efficiency, and the ab-
sence of arbitrage opportunities. The theories and models of financial economics have
become tools indispensable for professional activities (portfolio management, risk
measurement, evaluation of derivatives, etc.). Hereafter we give some examples to il-
lustrate this influence.
According to the efficient-market hypothesis, it is impossible to outperform the
market. Together with the results of the CAPM, particularly regarding the possibility
of obtaining a portfolio lying close to the efficient frontier, this theory served as the
basis for the development, from 197354 on, of a new way of managing funds—passive,
as opposed to active, management. Funds managed this way create portfolios that
mirror the performance of an externally specified index. For example, the well-known
Vanguard 500 Index fund is invested in the 500 stocks of Standard & Poor’s 500
Index on a market-capitalization basis. “Two professional reports published in 1998
and 1999 [on portfolio management] stated that ‘the debate for and against indexing
generally hinged on the notion of the informational efficiency of markets’ and that
‘managers’ various offerings and product ranges [note: indexed and nonindexed prod-
ucts] often depended on their understanding of the informational efficiency of mar-
kets’ ” (Walter 2005, 114).55
Further examples of the changes brought about by the hard core of financial eco-
nomics are the development of options and new ways of managing risks. The Chicago
Board Options Exchange (CBOE), the first public options exchange, began trading
in April 1973, and since 1975, thousands of traders and investors have been using
the Black and Scholes formula every day (MacKenzie 2006; MacKenzie and Millo
2009) on the CBOE to price and hedge their option positions. By enabling a dis-
tinction to be made between risk takers and hedgers, the Black and Scholes model
directly influenced the organization of the CBOE by defining how market makers
can be associated with the second category, the hedgers (Millo and Schinckus 2016). Between 1974 and 2014, annual volumes of options exchanged on the CBOE rose from 5.6 million to 1.275 billion (in dollars, from $449.6 million to $579.7 billion) in Chicago alone. OTC derivatives notional amounts outstanding totaled
$630 trillion at the end of December 2014 (www.bis.org). In 1977 Texas Instruments
brought out a handheld calculator specially programmed to produce Black-Scholes
options prices and hedge ratios. Merton (1998) pointed out that the influence of
the Black-Scholes option theory on finance practice has not been limited to financial
options traded in markets or even to derivatives generally. It is also used to price and
evaluate risk in a wide array of applications, both financial and nonfinancial.
Moreover, the Black and Scholes model totally changed approaches to apprais-
ing risk since it allows risk to be individualized by giving a price to each insurance
guarantee rather than mutualizing it, as was done previously. This means that a
price can be put on any risk, such as the loss of the use of a famous singer’s voice,
which would clearly not be possible when risks are mutualized (Bouzoubaa and
Osseiran 2010).
Last, we would point out that financial-market regulations increasingly make
reference to concepts taken from financial economics, such as the efficiency of
markets, that directly influence US legislative policies (Dunbar and Heller 2006).56
As Hammer and Groeber (2007, 1) explain, the “efficient-market hypothesis is the
main theoretical basis for legal policies that impact both Fraud on the Market and
doctrines in security regulation litigation.” The efficient-market hypothesis was in-
voked as an additional justification for the existing doctrine of fraud on the market,
thereby strengthening the case for class actions in securities-fraud litigation. The
efficient-market hypothesis demonstrated that every fraudulent misrepresentation
was necessarily reflected in stock prices, and that every investor could rely solely
on those prices for transactions ( Jovanovic et al. 2016). Chane-Alune (2006)
emphasizes the incidence of the efficient-market hypothesis on accounting stand-
ardization, while Miburn (2008, 293) notes that the theory directly influences the
international practice of certified accountants: “It appears that arguments typically
put forward by the International Accounting Standards Board and the FASB for the
1.4. CONCLUSION
This chapter analyzed the theoretical and methodological foundations of financial ec-
onomics, which are embedded in the Gaussian framework. The historical, mathemat-
ical, and practical reasons justifying these foundations were investigated.
Since the first works in modern finance in the 1960s, the Gaussian distribution
has been considered to be the law ruling any random phenomenon. Indeed, these authors based their stochastic models on results deduced from the central-limit theorem,
which led to the systematic use of the Gaussian distribution. In this perspective, the
major objective of these developments was to “reveal” the Gaussian distribution in
the data. When observations did not fit with the normal distribution or showed some
extreme values, authors commonly used a log-linear transformation to obtain the normal distribution. However, it is worth remembering that, in the 1960s, prices were
recorded monthly or daily, implying a dilution of price volatility.
In this chapter, we explained that financial econometrics and statistical tests
became key scientific criteria in the development of financial economics. Given that
the vast majority of statistical tests have been developed in the Gaussian framework,
the latter was viewed as a necessary scientific tool for the treatment of financial data.
Finally, this chapter clarified the links between the Gaussian distribution and the efficient-market hypothesis. More precisely, the random character of stock market prices/returns must be separated from the efficient-market hypothesis. In other words, no stochastic process, the Gaussian one included, provides by itself an empirical validation of this hypothesis.
For all these reasons, the Gaussian distribution became a key element of financial
economics. The following chapter will study how financial economists have dealt with
extreme values given the scientific constraints dictated by the Gaussian framework.
2
EXTREME VALUES IN FINANCIAL ECONOMICS
FROM THEIR OBSERVATION TO THEIR INTEGRATION INTO THE GAUSSIAN FRAMEWORK
The previous chapter explained how the Gaussian framework played a key role in the
development of financial economics. It also pointed out how the choice for the normal
distribution was directly related to the kind of data available at that time. Given the
inability of the Gaussian law to capture the occurrence of extreme values, chapter 2
studies the techniques financial economists use to deal with extreme variations on
financial markets. This point is important for the general argument of this book be-
cause the existence of large variations in the stock prices/returns is often presented
by econophysicists as the major justification for the importation of their models into finance. While financial economists have mainly used stochastic processes with
Gaussian distribution to model stock variations, one must not assume that they have
ignored extreme variations. On the contrary, this chapter will show that the possi-
bility of modeling extreme variations has been sought since the creation of financial
economics in the early 1960s. From an econophysics viewpoint, this statement may
surprise: there are countless publications on extreme values in finance. However, few
econophysicists seem to be aware of them, since they usually ignore or misunderstand
the solutions that financial economists have implemented. Indeed, statistical analysis
of extreme variations is at the heart of econophysics, and the integration of these vari-
ations into stochastic processes is the main purpose of this discipline, as will be shown
in chapter 3. From this perspective, a key question is, how does financial economics
combine Gaussian distribution with other statistical frameworks in order to charac-
terize the occurrence of extreme values?
This chapter aims to investigate this question and the reasons that financial econ-
omists decided to keep the Gaussian distribution. First of all, this chapter will review
the first observations of extreme values made by economists. Afterward, we will an-
alyze their first attempts to model these observations by using stable Lévy processes.
The difficulties in using these processes will be detailed by emphasizing the reasons
that financial economists did not follow this path. We will then study the alterna-
tive paths that were developed to consider extreme values. Two major alternatives
will be considered here: the ARCH-type models and the jump-diffusion processes.
To sum up, this chapter shows that although financial economists have integrated
extreme variations into their models, they use different stochastic processes than
econophysicists.
Figure 2.1 Time plot of monthly log returns of IBM stock from March 1967 to December 2012
Source: Google Finance.
Extreme variations are also present in daily returns of the same stock (figure 2.2).
Figure 2.2 Time plot of daily log returns of IBM stock from July 1962 until December 1998
Source: Google Finance.
Similar observations exist for most stocks, currencies, and other commodities listed on
any financial market around the world. The challenge for theoreticians is therefore to find
the most appropriate statistical framework for describing variations as observed empirically.
From a statistical point of view, the occurrence of these extreme values is gener-
ally associated with what statisticians call the leptokurticity of empirical distribution.
Schematically, leptokurtic distributions (such as the dotted line in figure 2.3) have higher peaks around the mean and longer tails on both sides of the mean than does the normal distribution (the solid line in figure 2.3), which has short statistical tails.
Figure 2.3 Visual comparison between the Gaussian distribution (solid line) and a more leptokurtic distribution (dotted line) for an infinity of observations
The long tails observed in leptokurtic distributions correspond to the portion of the distribution in which a large number of occurrences lie far from the mean. For instance, with the
Gaussian distribution the probability of a fluctuation 2.6 times the standard deviation is
less than 1 percent. This implies that extreme values are extremely rare. By contrast, with
the more leptokurtic distribution represented in figure 2.3, such a fluctuation is more
probable and occurs more often (depending on the values of the parameters associated with the distribution), meaning that the possibility of large variations is greater. In other words, long tails in a distribution describe the behavior of a
variable for which extreme variations occur more often than in the case of the Gaussian
distribution. For illustrative purposes, consider that the occurrence of a financial crash is equivalent to a fluctuation of five times the standard deviation. In the Gaussian framework, such a crash has a probability of less than 0.000057 percent (that would mean a crash every 10,000 years, according to Mandelbrot 2004).1
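To make this order-of-magnitude difference concrete, here is a minimal Python sketch (our illustration only; the use of scipy and of a Student-t distribution with three degrees of freedom as a stand-in for a leptokurtic law are assumptions, not part of the original argument) comparing the two tail probabilities.

# Sketch: probability of a "crash" (a move beyond 5 standard deviations)
# under a Gaussian versus a heavier-tailed Student-t distribution.
from scipy import stats

k = 5  # threshold in units of standard deviation

# Two-sided tail probability for the standard normal distribution
p_gauss = 2 * stats.norm.sf(k)

# Student-t with nu = 3, rescaled so that its standard deviation is 1
nu = 3
scale = (nu / (nu - 2)) ** -0.5        # std of t(nu) is sqrt(nu / (nu - 2))
p_t = 2 * stats.t.sf(k / scale, df=nu)

print(f"Gaussian  P(|X| > 5 sd) = {p_gauss:.2e}")   # about 5.7e-07
print(f"Student-t P(|X| > 5 sd) = {p_t:.2e}")       # several orders of magnitude larger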
Although the Gaussian distribution has been widely used by financial economists,
mainly for its interesting statistical and mathematical properties (chapter 1), it does
not provide a full description of empirical data (figure 2.4a). In contrast, a leptokurtic
distribution implies that small changes are less frequent than in the Gaussian case, but
that extreme moves are more likely to occur and are potentially much larger than in the
Gaussian distribution, as shown in figure 2.4b.
[Figure 2.4: (a) simulated log returns from a Gaussian process; (b) simulated log returns from a Paretian process; (c) observed log returns]
In this figure we see that using a stochastic process with a Gaussian distribution
(figure 2.4a) does not allow extreme variations of stock prices or returns to be re-
produced (figure 2.4c). In contrast, a Paretian simulation opens room for the statistical characterization of extreme variations (figure 2.4b). Obviously, this inability
to capture the occurrence of extreme variations is a major limitation of the Gaussian
framework for reproducing stock variations and consequently for analyzing risk or
for theoretical models. In this challenging context, financial economists had to find
a way to capture the occurrence of extreme values in the evolution of stock prices/returns.
2003; Chancelier 2006b, 2006a; Morgan 1990). The leader was Harvard’s forecast-
ing barometer, created in 1916 by the Economic Office of Harvard and supervised by
Professor Warren Persons.6 Harvard’s barometer became famous for its three curves
(labeled A, B, and C), representing stock prices, the level of business, and the interest
rate on the money market. This approach spread to Europe in the 1920s, when French
statistician Lucien March constructed a French business barometer based on index
numbers that was seen as an equivalent to the Harvard barometer ( Jovanovic and Le
Gall 2001). With the development of barometers, economists—for example Fisher7
and Mitchell8 (Armatte 1992; Banzhaf 2001; Diewert 1993)—became increasingly in-
terested in the construction of indices, among them the stock price index. According to
barometer analysis, stock price movements were an important indicator of economic
activity, and this belief led economists to study stock price variations and their distribu-
tions. Economists interested in barometers were among the first to notice that stock
price variations were not necessarily distributed according to the normal distribution.
Among them, Mitchell (1915) seems to have been the first to empirically detect and
describe extreme variations in wholesale price time series. He found that distribution of
percentage changes in prices consistently deviated from normal distribution (Mitchell
1915, 19–21). Mills (1927, chap. 3) also stressed the leptokurtic character of finan-
cial distributions. However, two points must be underlined here: first, neither Mitchell
nor Mills studied stock prices per se (they were more interested in the direction of the
prices than in the volatility of the series); second, the distribution was not the core of
their analysis, which, like business barometers, focused on macroeconomic predictions.
Three other authors deserve our attention. Irving Fisher (1922) did some analysis
of stock market prices in his book on index numbers; he noted in particular the ab-
sence of correlation in stock prices (Dimand and Geanakoplos 2005; Dimand 2007).
Moore must also be mentioned. In his book, published in October 1917, he analyzed
the price of cotton quoted on the stock market and compared the actual distribution
of spot cotton prices to the theoretical normal distribution (Le Gall 1999, 2006). Even
though the histogram created from real price changes has high peaks and fat tails com-
pared to the Gaussian distribution (Moore 1917, 26), Moore ignored these observa-
tions and considered that the variations in cotton prices were normally distributed.
Later, Arthur Lyon Bowley (1933) discussed the distribution of prices, particularly
stock prices, taking the length of time studied into account. He found that for short
periods (e.g., one year), a normal distribution provides a good approximation of stock
prices, but that “when we take the longer period 1924 to 1930, resemblance to the
normal curve is not present” (Bowley 1933, 369). To circumvent this difficulty while
preserving the Gaussian distribution (and consequently its mathematical properties),
he proposed considering the logarithm of the price in some cases (Bowley 1933, 370).9
These were not the only authors of the period to identify the leptokurtic nature of the
distribution of stock market fluctuations. This finding was shared by a number of statisti-
cians and French engineers (known at the time as ingénieurs économistes). As explained
by Jovanovic and Le Gall (2001), these ingénieurs économistes developed an indisputable
talent for measurement in economics. Their involvement in measurement and more gen-
erally in statistics became particularly apparent around 1900: Statistique Générale de la
France (SGF), the main government statistical agency at that time, became something
of an engineers’ fortress. Although the SGF remained small in comparison with other
European statistical bureaus, its early-twentieth-century achievements—in data collection and in formulating explanations of what had been measured—remain impressive. A large part of its work can be attributed to Lucien March (Huber 1937). March
(1921) was the first author to analyze whether the probability calculus and the law of
large numbers were relevant for studying, in particular, whether relative prices are dis-
tributed around their mean according to Gaussian law. He pointed out that that distri-
bution did not conform to Gaussian law: occurrences far from the mean occurred too
frequently and were too large. In addition, during the 1910s, SGF statisticians under
Marcel Lenoir became interested in the effects of stock market fluctuations on economic
activity (Chaigneau and Le Gall 1998). In 1919 Lenoir developed the first long-term indices for the French stock market. These examples should not cause us to forget that
analysis of price distribution was still rare when another author, Maurice Olivier, pub-
lished his doctoral thesis in 1926, wherein he correctly pointed out that “a series of in-
dices should never be published unless the distribution of the data used to calculate the
index is published at the same time” (1926, 81). Olivier can be considered to have pro-
vided the first systematic analysis of price distribution. He obtained the same result as
March: the distribution of price variations did not fit with normal distribution. Several
decades later, Kendall (1953) studied a number of price series by analyzing their variations (see chapter 1) and pointed out that some of them were leptokurtic.
Although these authors noted the fact that distributions of stock price variations
were not strictly Gaussian, there was no systematic research into this topic at the time.
Distributions were only a matter of observation and were not mathematically mod-
eled by a specific statistical framework. Econometrics was in its beginnings and not yet
organized as a discipline (Le Gall 1994, 2006; Morgan 1990), and economists were
interested in empirical investigations into dynamic business cycles (i.e., barometers).
Moreover, even Vilfredo Pareto (1848–1923), who was widely known for his research
on the leptokurtic nature of distribution of wealth in Italy at the beginning of the twen-
tieth century (Barbut 2003), did not analyze the dynamics of price variations (he made
a static analysis). Only one exception to this pattern can be mentioned, since Luigi
Amoroso used a Paretian distribution in a dynamic analysis of incomes (Tusset 2010).
Kingdom and 1923 in the United States), they were available only for the wholesale
prices of a small number of foods.10 Finally, the quantity of data was limited, and con-
sequently the opportunity to observe extreme values or rare events was very limited.
The situation changed completely in the 1960s, as discussed in chapter 1. The
creation of financial economics as a scientific discipline, on the one hand, and the
building of the first comprehensive database (through the creation of the CRSP),
on the other, made the development of work in financial econometrics possible.
Moreover, with the creation of financial economics, empirical validation became
a prerequisite for publication of research. The introduction of computers into uni-
versity departments also facilitated changes. Thus, from the 1960s onward, financial
economists and especially MA and PhD students systematically tested the statistical
indicators of the random character of stock market fluctuations, such as the indepen-
dence, the distribution, and the stability of the process. Some of them also developed
new tests to validate the models used in “hard core” financial economics.11 However,
as Jovanovic (2008) has shown, empirical results indicating the random character
of stock market prices are inseparable from the underlying theoretical assumptions
guiding the gathering and interpretation of data, especially the assumption of per-
fect markets, defended at the Chicago Graduate School of Business, or of imperfect
markets, defended at MIT. Opinion on the randomness of stock market fluctuations
was far from unanimous: authors from the Chicago Graduate School of Business vali-
dated randomness empirically, while authors from MIT did not. The same was
true of the identification and analysis of extreme values: while econometric studies
established several important results, there was no universally shared result regarding
leptokurtic distributions.
Some authors pointed out extreme values in stock market fluctuations. For in-
stance, Larson (1960, 380) identified “an excessive number of extreme values” that did
not accord with a Gaussian process. Houthakker (1961, 168) concluded that “the dis-
tribution of day-to-day changes in the logarithms of prices does not conform to the
normal curve. It is not significantly skew, but highly leptokurtic… . The variance of
price changes does not seem to be constant over time… . The leptokurticity men-
tioned above may be related to the changing variance.” Studying a sample of 85,000
observations, Brada, Ernst, and van Tassel observed that “the distributions, in all cases,
are excessively peaked. However … there are no fat tails. At this point we suggest that
stock prices differenced across single transactions are not normally distributed, not be-
cause they have fat tails, but because they are too peaked” (1966, 337). Let us also men-
tion Sprenkle (1961, 1964), who also noted that the stock price distribution was not
Gaussian.
It is clear that with the creation of financial economics and the development of
stock price databases, the distribution of stock price variations, and their leptokurtic
character in particular, began to be apparent and widely analyzed by financial econo-
mists. Although each author presented his empirical results with confidence, these
results should primarily be considered as statistical indices, because empirical data
and tests were still new at that time. However, this field of study underwent a radical
change when the leptokurtic nature of changes in stock prices became the subject of
mathematical investigations from 1962 onward.
\log \Phi(t) =
\begin{cases}
-\gamma^{\alpha}\lvert t \rvert^{\alpha}\left[1 - i\beta\,\operatorname{sign}(t)\tan\dfrac{\pi\alpha}{2}\right] + i\delta t, & \alpha \neq 1 \\[4pt]
-\gamma\lvert t \rvert\left[1 + i\beta\,\operatorname{sign}(t)\dfrac{2}{\pi}\log\lvert t \rvert\right] + i\delta t, & \alpha = 1
\end{cases}
\qquad (2.1)
Finally, Lévy’s work made it possible to introduce a set of stable stochastic pro-
cesses that can be used to model stock price variations from an empirical estimate
of the four parameters (α, β, γ, δ). Here is a brief reminder of the interpretation of
these parameters. The parameter α (called the “characteristic exponent”) is the index
of stability of the distribution. The value of this exponent determines the shape of
the distribution: the smaller this exponent is, the fatter the tails are (extreme events
have a higher probability of occurring). In other words, the lower α is, the more often
extreme events are observed. In financial terms, this parameter is an indicator of risk,
since it describes how often substantial variations can occur. The parameter β, termed
“skewness,” provides information about the symmetry of the distribution. If β = 0, the
distribution is symmetric. If β < 0, it is skewed toward the left (totally asymmetric toward the left if β = −1), while β > 0 indicates a distribution skewed toward the right (totally asymmetric toward the right if β = 1). The parameter γ is the scale factor, which can be any positive number. It refers to the “random size,” that is, the size of the variations, whose regularity is given by the exponent α. Finally, the parameter δ is a localization
factor: it shifts the distribution right if δ > 0 and left if δ < 0.
Thanks to the value of these parameters, stable Lévy distribution provides a ge-
neral framework that makes it possible to characterize both the Gaussian distribution
(one that does not include extreme values) and a Paretian distribution (one that in-
cludes extreme values).
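For readers who wish to experiment with these parameters, the following minimal Python sketch (our illustration; it assumes scipy.stats.levy_stable, whose loc and scale arguments play the role of the localization and scale factors under scipy's own parameterization, and the numerical values are purely illustrative) shows how lowering α fattens the tails.

# Sketch: how the characteristic exponent alpha of a stable Levy
# distribution governs the frequency of extreme values.
import numpy as np
from scipy.stats import levy_stable

n = 100_000
threshold = 5.0                      # an "extreme" move, in units of the scale factor
for alpha in (2.0, 1.7, 1.4):        # alpha = 2 corresponds to the Gaussian case
    sample = levy_stable.rvs(alpha, 0.0, loc=0.0, scale=1.0,
                             size=n, random_state=42)
    freq = np.mean(np.abs(sample) > threshold)
    print(f"alpha = {alpha}: fraction of |x| > {threshold} is {freq:.4%}")
# The smaller alpha is, the fatter the tails and the more often extreme
# values are observed; this is why alpha can be read as an indicator of risk.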
United States permanently in 1958 and began working for IBM. From the beginning
of his research, Mandelbrot aimed to extend statistical physics to other fields, in-
cluding social sciences. In his 1957 report, he set out his “interdisciplinary” project: “It
appears that the author’s activity has by now been durably oriented along the lines of
a long-range program, having two aspects: the study of physical and non-physical ap-
plications of thermodynamics, and the study of the ‘universal’ foundations” (1957, 4).
His approach was “to distinguish the part of thermodynamics, which is so general
that it could also conceivably apply to systems more general than sets of molecules”
(1957, 2). Mandelbrot started his ambitious program of investigations (nonphysical
applications of thermodynamics) with the structure of language, texts, and commu-
nication, moving on to distributions of species, and then to income distributions. In
this perspective, he proposed to expand Pareto’s work on income distributions by representing these distributions with a Lévy distribution in the late 1950s.16 In 1961, he was
invited to present his work on income distribution at a Harvard University seminar.
On this occasion, Mandelbrot entered Hendrik Houthakker’s office and came face to
face with a graphical representation of the distribution of changes in cotton prices that
was similar to the results he had obtained with incomes: “Houthakker’s cotton chart
looked like my income chart. The math was the same” (Mandelbrot 1963, 394 n. *).
This parallelism led him to apply the stable Lévy process to cotton prices, which was
the longest, most complete time series with daily quotations available at the time, and
then to stock price movements. He published a first article on the topic in 1962 and
gave lectures at Harvard in 1962, 1963, and 1964.
Mandelbrot’s starting point was to postulate an infinite variance. It is clear that in-
finite variance cannot be observed in the real world:
the result of changes in prices of a given frequency (hour, day, week, etc.) remains the
same regardless of the length of time considered (one day, several weeks, one year) or,
in other words, whatever the number of observations. Thus, thanks to the stability of
probability distributions, a daily observation can be thought of as 24 hourly observa-
tions or 1,440 per-minute observations or even 86,400 per-second observations, and
so on. By conceptualizing the infinite divisibility of time, one can give real meaning to
the mathematical hypothesis of infinity.
In sum, drawing on Lévy’s work (1924) on the stability of probability distribu-
tions and that of Gnedenko and Kolmogorov (1954)18 on the generalization of the
central-limit theorem, Mandelbrot used a Pareto-Lévy distribution for two reasons
(Mandelbrot 1960, 80):
1. It is a possible limit distribution with infinite variance of the sums of random in-
dependent and identically distributed variables (i.e., generalized central-limit
theorem).
2. It is stable (i.e., it is time invariant) in its Pareto form (a numerical illustration of this property is sketched below).
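A minimal numerical illustration of the stability property invoked in point 2 (our sketch; the parameter values and the use of scipy are assumptions, not part of the original text): summing i.i.d. stable variables and rescaling the sum by n^(−1/α) reproduces the distribution of a single variable.

# Sketch: stability -- sums of i.i.d. stable variables, rescaled by
# n**(-1/alpha), keep the same distribution (same alpha).
import numpy as np
from scipy.stats import levy_stable

alpha, n_terms, n_samples = 1.5, 50, 20_000      # illustrative values
x = levy_stable.rvs(alpha, 0.0, size=(n_samples, n_terms), random_state=1)

single = x[:, 0]                                     # one stable variable
rescaled_sum = x.sum(axis=1) * n_terms ** (-1.0 / alpha)

# The rescaled sum should have (up to sampling error) the same distribution
# as a single variable -- compare a few empirical quantiles.
for q in (0.05, 0.25, 0.50, 0.75, 0.95):
    print(q, round(float(np.quantile(single, q)), 3),
          round(float(np.quantile(rescaled_sum, q)), 3))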
of his work because a clear computational definition of all parameters did not exist
(Fama 1965b, 405). However, in his article, Fama aimed to give an economic mean-
ing to diversification in a Pareto-Lévy framework, even if the parameter related to the
dispersion of the return on the portfolio could no longer be described by the vari-
ance. From this perspective, he observed that a general increase in diversification
has a direct impact on the evolution of the scale factor (γ) of the stable distribution.
More precisely, increasing diversification reduces the scale of the distribution of the
return on a portfolio (only when the characteristic exponent α > 1).20 Fama proposed,
therefore, to replace variance with the scale factor (γ) of a stable distribution in order
to approximate the dispersion of financial distributions. It is worth mentioning that
Fama’s work on the stable Lévy framework in the 1960s does not conflict with his
efficient-market hypothesis (which we discussed in chapter 1), because this hypo-
thesis assumes the perfect integration of information in financial prices with no partic-
ular hypothesis regarding the statistical process describing the evolution of financial
returns. Basically, the only necessary condition for a statistical process to be consistent
with the efficient-market hypothesis is the use of independent variables (and by exten-
sion i.i.d. variables), ensuring that past information is not necessary for forecasting the
future evolution of variables. This is the case for stable Lévy processes.
In the same vein, Samuelson (1967) provided an “efficient portfolio selection for
Pareto-Lévy investments” in which the scale factor (γ) was used as an approximation
of variance (because the scale parameter is proportional to the mean absolute devia-
tion). Samuelson presented the computation of the efficiency frontier as a problem of
nonlinear programming solvable by Kuhn-Tucker techniques. However, even though
he demonstrated the theoretical possibility of finding an optimal solution for a stable
Lévy distribution, Samuelson gave no example or application of his technique. Other
economists followed the path opened by Fama and Mandelbrot toward stable Lévy
processes: Fama and Roll (1968, 1971), Blattberg and Sargent (1971), Teichmoeller
(1971), Clark (1973), and Brenner (1974) were among them. Moreover, following
the publications by Mandelbrot and Fama, the hypothesis of the Pareto-Lévy pro-
cesses was frequently tested during the 1960s and the 1970s (Brada, Ernst, and van
Tassel 1966; Godfrey, Granger, and Morgenstern 1964; Officer 1972) as explained in
the following section.
According to Godfrey, Granger, and Morgenstern, “No evidence was found in any of
these series [of the NYSE prices] that the process by which they were generated be-
haved as if it possessed an infinite variance” (1964, 6). Brada, Ernst, and van Tassel
observed that “the distributions [of the stock price differences], in all cases, are ex-
cessively peaked. However, contrary to The Stable Paretian hypothesis, there are no
fat tails” (Brada, Ernst, and van Tassel 1966, 337). Officer (1972) wrote that stock
markets have some characteristics of a non-Gaussian process but also emphasized
that there was a tendency for sums of daily stock returns to become “thinner-tailed”
for large sums, even though he acknowledged that the normal distribution did not
approximate the distribution of his sample. Blattberg and Gonedes (1974) showed
that while stable Pareto-Lévy distributions had better descriptive properties than the
Gaussian one, the Student distribution21 fit the data better. Even Mandelbrot’s stu-
dent, Fama, reached a similar conclusion: “Distributions of monthly returns are closer
to normal than distributions of daily returns. This finding was first discussed in detail
in Officer 1971, and then in Blattberg and Gonedes 1974. This is inconsistent with the
hypothesis that return distributions are non-normal symmetric stable, which implies
that distributions of daily and monthly returns should have about the same degree of
leptokurtosis” (Fama 1976, 38).
Nor did stability of the distribution, which is one of the major statistical proper-
ties of stable Lévy processes, appear certain.22 In the 1950s, Kendall (1953, 15) had
already noticed that variance was not always stationary. Investigations carried out
in the 1960s and 1970s seemed to confirm this point. Officer (1972) explained that
the sum of independently distributed random variables from a stable process did not
give a stable distribution with the same characteristic exponent, as required by the
stability property.23 Financial data “have some but not all properties of a stable pro-
cess,” and since several “inconsistencies with the stable hypothesis were observed,”
the evolution of financial markets cannot be described through a stable Lévy process
(Officer 1972, 811). Even Fama seemed to share this view: “Contrary to the implica-
tions of the hypothesis that daily and monthly returns conform roughly to the same
type of stable non-normal distribution, monthly returns have distributions closer to
normal than daily returns” (Fama 1976, 33). Upton and Shannon (1979) also con-
firmed that stable processes were not appropriate for the analysis of empirical time series.24
The second limitation to the use of stable Lévy processes in finance was the fact
that, until the end of the 1970s, the theory of probability related to stable Lévy pro-
cesses was still unformed (Nolan 2009). Several developments were necessary in
order to apply these processes to the study of stock market fluctuations and to test
them adequately. While Mandelbrot opened up a stimulating avenue for research, the
formulation of the mathematical hypothesis (i.e., the Pareto-Lévy process) was not
immediately followed by the development of statistical tools. A direct result of this un-
formed knowledge on stable Lévy processes was the absence, at the time, of statistical
tests to check the robustness of results provided by these processes. Therefore, authors
focused on visual tests to validate the hypothesis of non-Gaussian distribution. This
situation is very similar to the first empirical work that suggested the random character
of stock prices before the creation of financial economics (chapter 1). However,
without appropriate statistical tests, it is simply impossible to evaluate to what extent
these outcomes are statistically reliable (chapters 4 and 5 will come back to this point).
Available statistical tests at the time either were constructed for the Gaussian distri-
bution or assumed normality: examples are the Pearson test (also known as the chi-
square test), which was used to test the compliance of the observed distribution with
a theoretical distribution; and Student’s t-test, which was used for comparing param-
eters such as the mean and for estimating the parameters of a population from data
on a sample. Similarly, as Mandelbrot (1962, 35) explained, methods based upon the
minimization of the sum of squares of sample deviations, or upon the minimization
of the expected value of the square of the extrapolation error, cannot be reasonably
used for non-Gaussian Lévy processes. This was true for the test of the distribution
as well as for the other components of the hard core of financial economics, as Fama
pointed out:
“… There are admittedly difficult problems involved in applying [portfolio models with
stable Paretian distributions] to practical situations. Most of these difficulties are due to the
fact that economic models involving stable Paretian generating processes have developed
more rapidly than the statistical theory of stable Paretian distributions.” (Fama 1965b, 418)
For this reason, authors such as Mandelbrot and Fama insisted on the need “to de-
velop more adequate statistical tools for dealing with stable Paretian distributions”
(Fama 1965b, 429). Even in 1976, 14 years after Mandelbrot’s first publication, Fama
considered that “statistical tools for handling data from non-normal stable distribu-
tions are [still] primitive relative to the tools that are available to handle data from
normal distribution” (Fama 1976, 26). It is worth keeping in mind the importance
that statistical tests hold for financial economists since they founded their approach
on the use of such tests, in opposition to the visual techniques used by chartists or the
accounting-based analysis adopted by fundamentalists.
The third reason for explaining the meager use of stable Lévy processes in finance
in the 1970s refers to the difficulty of estimating their parameters. Indeed, the use of
these processes requires the identification of the four parameters that define the stable
distribution. This is done by using parameterization techniques. Unfortunately, pa-
rameterization techniques were nonexistent in the 1960s and still in their infancy in
the 1970s (Borak, Hardle, and Weron 2005). As Fama explained,
The acceptability of the stable Paretian hypothesis will be improved not only by further
empirical documentation of its applicability but also by making the distributions them-
selves more tractable from a statistical point of view. At the moment very little is known
about the sampling behavior of procedures for estimating the parameters of these distribu-
tions. (1963, 428–29)
For instance, Fama (1965b, 414) emphasized that the application of stable Lévy pro-
cesses in practical situations is very complicated because “the difficulty at this stage
is that the scale parameter of a stable Paretian distribution is, for most values of α, a
theoretical concept.” That is, the mathematical statistics of stable Paretian distribution
was not yet sufficiently developed to give operational or computational meaning to γ
in all cases. The first estimation for symmetric stable Lévy processes was proposed by
Fama and Roll (1968), who, for simplicity, assumed symmetry of stable distributions,
meaning that the location and skewness parameters could be set to zero (μ = 0, β = 0) and the scale factor standardized, while α was
given by the following approximate formula:
\hat{\alpha} = 1 + n\left[\sum_{i=1}^{n} \ln\frac{x_i}{x_{\min}}\right]^{-1}, \qquad (2.3)
where xi are the quantiles-based measures and x min is the minimum value of x. This
formula is an approximation, because the parameterization of factors depends on the
size of the sample. It is worth mentioning that this parameterization technique25 was
the only one available at the end of the 1960s. Because results given by quantiles-
based methods depend directly on the size of the sample, Press (1972) proposed
a second method based on their characteristic function. Given that the generalized
central-limit theorem requires a huge (theoretically an infinite) amount of data, Press
combined observations from the empirical samples with extrapolations made from
these data. He was therefore able to generate a large quantity of data.26
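As a purely illustrative sketch of how an approximate formula of this type behaves (our reconstruction of equation (2.3); the simulated power-law data, the exponent, and the sample sizes are assumptions, not Fama and Roll's original material), the following Python lines show how the quality of the estimate depends on sample size.

# Sketch: applying the estimator of equation (2.3) to simulated power-law
# data drawn from a density proportional to x**(-a) above x_min, for which
# the estimator converges to a (how this exponent maps onto the characteristic
# exponent of a stable law depends on the convention used).
import numpy as np

rng = np.random.default_rng(0)
a, x_min = 2.7, 1.0        # power-law density exponent and lower cutoff (illustrative)

def estimate_exponent(x, x_min):
    # a_hat = 1 + n / sum(ln(x_i / x_min)), cf. equation (2.3)
    return 1.0 + len(x) / np.log(x / x_min).sum()

for n in (100, 1_000, 100_000):
    u = rng.random(n)
    x = x_min * u ** (-1.0 / (a - 1.0))   # inverse-transform sample with density ~ x**(-a)
    print(n, round(estimate_exponent(x, x_min), 3))
# The estimate approaches a only for large samples, echoing the text's point
# that these parameterization techniques depend directly on sample size.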
All the methods that emerged in the 1970s for parameterizing these stable Lévy
processes had one serious drawback: while quantiles-based (called nonparametric)
methods directly depended on the size of the sample, the characteristic-function-
based (called spectral) methods depended on the extrapolation technique used to
minimize the difference between empirical results given by the sample and theoretical
results. The impossibility of parameterizing stable Lévy processes explained why fi-
nancial economists were not inclined to use them.
Finally, the fourth barrier to the use of stable Lévy processes in finance concerns the
infinity of variance, which has no theoretical interpretation congruent with the theo-
retical framework of financial economics. As discussed in chapter 1, variance and the
expected mean are two of the main variables for the theoretical interpretations of finan-
cial economists, which associated risk with variance and return with the mean.27 From
this perspective, if variance is infinite, it is impossible to understand the notion of risk
as financial economists define it. In other words, the statistical characteristics of stable
Lévy processes could not be integrated into the theoretical framework of financial ec-
onomics, which provided significant operational results. Financial economists focused
on statistical solutions that were theoretically compatible with the Gaussian framework.
Fama and Roll (1971, 337) emphasized this difficulty of working with a non-Gaussian
framework in the social sciences, where “economists, psychologists and sociologists fre-
quently dismiss non-normal distributions as data models because they cannot believe
that processes generating prices, breakdowns, or riots, can fail to have second moments.”
It is worth mentioning that, in the beginning of the 1970s, financial economics
was a young field trying to assert its ability to provide scientific reasoning about finan-
cial reality, and financial economists did not necessarily want to deal with a scientific
puzzle (the interpretation of infinite variance) that could discredit the scientific rep-
utation of their emerging field. Knowing these limitations, and because the Gaussian
framework had allowed the development of financial economics while achieving re-
markable theoretical and practical results (chapter 1), several authors,28 including
Fama, proposed acceptance of the Gaussian framework as a good approximation:
Although the evidence also suggests that distributions of monthly returns are slightly lepto-
kurtic relative to normal distributions, let us tentatively accept the normal model as a work-
ing approximation for monthly returns…. If the model does well on this score [how well
it describes observed relationships between average returns and risk], we can live with the
small observed departures from normality in monthly returns, at least until better models
come along. (Fama 1976, 38)
Fama added:
Although most of the models of the theory of finance can be developed from the assumption of stable non-normal return distributions … the costs of rejecting normality for securities returns in favor of stable non-normal distributions are substantial. (Fama 1976, 26)
In other words, at the end of the 1970s, Mandelbrot’s hypothesis could not yet be
applied for modeling extreme values in financial markets. However, as Mandelbrot
(1962) had already pointed out, the four major limitations evoked above do not necessarily contradict the Pareto-Lévy framework; they simply made it impossible to
use this approach in the 1970s. This conclusion can also be drawn from the paper by
Officer (1972), who mentioned that his sample was not large enough to justify the use
of a nonnormal distribution, as observed by Borak et al. (2005, 13). Parameterization techniques also directly depend on the size of the sample used in the statistical analysis. The bigger the sample, the better the estimation of the parameters. Despite
such conclusions, research on integrating the leptokurtic character of distributions
was continued by financial economists, who developed alternative models in order to
describe the occurrence of extreme values.
[Figure: a leptokurtic distribution compared with the Gaussian distribution, with the leptokurtic curve decomposed into a Gaussian part and a jump part producing heavy tails]
[Figure 2.6: (a) simulated log returns from a Gaussian process; (b) simulated log returns from a normal–Poisson mixture; (c) observed log returns]
The first models, introduced by Press (1967), combined the normal distribution
with a Poisson law (figure 2.6b). Although the combination of these two distributions
does not describe the empirical data accurately (figure 2.6c), it offers at least a possibility of dealing with dynamics exhibiting high volatility (in contrast to the Gaussian distribution reproduced in figure 2.6a). Press was a PhD student at the Chicago Graduate School of Business at the same time as Fama, when the debates about Mandelbrot’s hypothesis (evoked in the previous section) were taking place. He wanted to solve the issue of leptokurticity while keeping
finite variance, in line with the data he observed: “Sample evidence cogently reported
by Fama (1965a) supports the hypothesis of non-zero mean, long-tailed, peaked, non-
Gaussian distributions for logged price changes. However, there is no need to conclude,
on the basis of this evidence, that the variance is infinite” (Press 1967, 319). Within
this framework, Press explained that the Gaussian distribution was not appropriate for
describing the empirical data.30 To address this shortfall, he built a model in which
the logged price changes are assumed to follow a distribution that is a Poisson mixture of
normal distributions. It is shown that the analytical characteristics of such a distribution
agree with what has been found empirically. That is, this distribution is in general skewed,
leptokurtic, [and] more peaked at its mean than the distribution of a comparable normal
variate. (Press 1967, 317)
From this perspective, the occurrence of extreme values is associated with jumps
whose probability follows a Poisson law:
X_t = C + \sum_{k=1}^{N(t)} Y_k + W_t, \qquad (2.4)
where C is the initial price X(0), (Y_1, …, Y_k) are independent random variables following a common law N(0, σ_t²), N(t) is a Poisson process counting the number of random events, and W_t is a classical Wiener process.
Some years later, Merton, who introduced the term “jump,” popularized this ap-
proach by applying it to the analysis of option pricing, which is a major element in the
hard core of financial economics (chapter 1). In his paper, Merton (1976) provided an
extension of Black, Scholes, and Merton’s 1973 option-pricing model with a two-block
model. The evolution of stock prices was therefore described by the following process:
S_t = S_0 e^{X_t}, \qquad (2.5)

X_t = \mu t + \sigma B_t + \sum_{i=1}^{N_t} Y_i, \qquad (2.6)
where μ is the mean (drift) of the process, σ² is the variance, and B_t is a Brownian motion with B_t − B_0 ~ N(0, t). {Y_i} refers to the jump block, modeled with a compound Poisson process with t ≥ 0. In his article, Merton explained the need to de-
velop a model “consistent with the general efficient market hypothesis of Fama (1970)
and Samuelson (1965b)” (Merton 1976, 128) by taking “the natural process for the
continuous component of the stock-price change: the Wiener process” (128). In other
words, Merton founded his reasoning on the Gaussian framework since the Wiener
process is also called Brownian motion (Doob 1953).
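A minimal simulation sketch of this normal–Poisson mixture (our illustration; all parameter values are assumptions, not Press's or Merton's) shows how adding a compound Poisson jump component to a Gaussian diffusion produces the leptokurtosis discussed above.

# Sketch: simulating log returns from a jump-diffusion of the Press/Merton
# type (equations 2.4-2.6): a Gaussian (Wiener) component plus a compound
# Poisson jump component.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n_days = 10_000
mu, sigma = 0.0002, 0.01         # drift and diffusion volatility (illustrative, daily)
lam, jump_sigma = 0.05, 0.04     # jump intensity and jump-size volatility (illustrative)

diffusion = mu + sigma * rng.standard_normal(n_days)
n_jumps = rng.poisson(lam, size=n_days)                         # increments of N(t)
jumps = np.array([rng.normal(0.0, jump_sigma, k).sum() for k in n_jumps])
returns = diffusion + jumps

# Excess kurtosis is close to 0 for the purely Gaussian component and
# clearly positive once the compound Poisson jumps are added.
print("excess kurtosis, diffusion only:", round(kurtosis(diffusion), 2))
print("excess kurtosis, with jumps    :", round(kurtosis(returns), 2))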
The methodology used by Press and Merton opened the path to a vast literature
on what are now called “jump-process models.” A large category of models was developed, using different kinds of statistical processes.31 From this category of models emerged another kind of model, called “pure-jump processes,” introduced by Cox and Ross in 1976. They do not describe the occurrence of jumps through a double-block model but rather through a single block in which a high rate of arrival of jumps of different sizes obviates the need to use a diffusion component.32 Over two decades, these processes have generated a prolific literature with a large number of models: the generalized hyperbolic distribution (Eberlein and Keller 1995), the variance
gamma model (Madan, Carr, and Chang 1998), and the CGMY process (named after
the authors of Carr, Geman, Madan, and Yor 2002).33
This category of models allowed financial economics to characterize extreme values in an improved Gaussian framework even though they are by definition nonstable. Indeed, all these pure-jump processes are related to Brownian motion because of its properties of normality and continuity (Geman 2002): they can be written as a Brownian motion evaluated in a random “operational time” (a subordinated process). In other terms, the randomness in operational time is supposed to generate randomness in the evolution of financial returns. By associating the process describing the evolution of financial returns with the evolution of volume characterized by Brownian motion, Clark (1973) showed that the variance characterizing the extreme values observed in the dynamics of financial returns is always finite. Consequently, pure-jump models offer very technical solutions for describing the occurrence of extreme values while maintaining an improved Gaussian framework.
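The idea of randomness in operational time can be made concrete with a minimal sketch (ours; the gamma-distributed clock, in the spirit of the variance-gamma model, and the parameter values are assumptions): a Brownian motion evaluated at a random clock produces fatter tails while keeping variance finite.

# Sketch: Brownian increments evaluated in a random "operational time"
# (a gamma clock with mean 1). Tails fatten while variance stays finite.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n, sigma, nu = 100_000, 0.01, 0.5      # volatility and clock variance (illustrative)

calendar = sigma * rng.standard_normal(n)              # Brownian increments, fixed clock
clock = rng.gamma(shape=1.0 / nu, scale=nu, size=n)    # random operational time, mean 1
subordinated = sigma * np.sqrt(clock) * rng.standard_normal(n)

print("variance, calendar time     :", round(float(calendar.var()), 6))
print("variance, operational time  :", round(float(subordinated.var()), 6))
print("excess kurtosis, calendar    :", round(kurtosis(calendar), 2))
print("excess kurtosis, subordinated:", round(kurtosis(subordinated), 2))
# Fatter tails appear even though the variance remains finite and essentially
# unchanged, in line with Clark's (1973) subordination argument.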
2.3.2.2. ARCH-Type Models
The second category of financial economic models that take into account extreme var-
iations is ARCH-type models.
As we have seen, some of the first empirical investigations during the 1960s and 1970s
pointed out that the variance of stock price fluctuations appeared not to be stable over
time and that some dependence in price variations seemed to exist. Mandelbrot (1963,
1962) had already discussed this empirical result and had suggested in 1966 (as did
Samuelson 1965) replacing Brownian motion with a martingale model (see chapter 1). This is because martingale models, unlike Gaussian processes, allow the variance of the process to depend on past information. The first exhaustive empirical study of this kind of depend-
ence was by McNees in 1979, who showed that variance “associated with different forecast
periods seems to vary widely over time” (1979, 52). To solve this problem, Engle (1982)
introduced ARCH-type models based on statistical processes whose variance directly de-
pends on past information. Within this framework, variance is considered a random var-
iable with its own distribution that can be estimated through a defined average of its past
values. Such models can therefore reproduce extreme variations, as figure 2.7 suggests.
[Figure 2.7: (a) simulated log returns from a Gaussian process; (b) simulated log returns from an ARCH-type process; (c) observed log returns]
X_t = N\!\left(\mu, \sigma_t^2\right) + \varepsilon_t, \qquad (2.7)
where μ is the mean, σ_t² is the variance, and ε_t is the statistical error. This statistical
equation can also be expressed as
ARCH-type models focus on the variance and statistical errors that they decompose
into an unconditional and conditional part. The first ARCH model was introduced by
Engle (1982), who characterized the variance as follows:
\sigma_t^2 = \alpha\sigma^2 + (1 - \alpha)\,\frac{1}{n}\sum_{i=1}^{n} R_i^2. \qquad (2.9)
GARCH, NGARCH, etc.).35 Moreover, notice that, in line with the literature on jump
processes, ARCH models can describe the occurrence of extreme variations within a
Gaussian framework.
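A minimal ARCH(1) sketch in the spirit of Engle (1982) (our illustration; the exact specification and the parameter values are assumptions and do not reproduce equation (2.9)) shows how conditionally Gaussian returns can nevertheless generate a leptokurtic unconditional distribution.

# Sketch: ARCH(1) -- conditionally Gaussian returns whose variance
# depends on past information.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n = 100_000
omega, a1 = 1e-5, 0.5          # unconditional part and reaction to past shocks (illustrative)

returns = np.zeros(n)
var_t = omega / (1.0 - a1)     # start from the unconditional variance
for t in range(1, n):
    var_t = omega + a1 * returns[t - 1] ** 2      # conditional variance depends on the past
    returns[t] = np.sqrt(var_t) * rng.standard_normal()

# Each conditional distribution is Gaussian, yet the unconditional
# distribution of returns is leptokurtic (positive excess kurtosis)
# and the series displays volatility clustering.
print("excess kurtosis:", round(kurtosis(returns), 2))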
2.4. CONCLUSION
This chapter showed that financial economists have considered extreme variations in
the analysis of stock prices/returns by using three statistical approaches: Pareto-Lévy, jump diffusion, and ARCH types. In the 1960s, the Pareto-Lévy model was the first
alternative proposed for characterizing the occurrence of extreme values in finance.
However, as explained, this path of research was not investigated further for tech-
nical and theoretical reasons. Consequently, since the beginning of the 1980s, only
the jump-diffusion and the ARCH-type models have been developed in financial ec-
onomics. As discussed, these two approaches are totally embedded in the Gaussian
framework. This point is important because chapter 1 showed that the latter defines the foundations of financial economics (modern portfolio theory, the capital-asset pricing model, the arbitrage pricing theory, the efficient-market hypothesis, and the
Black and Scholes model).
It is interesting to note that the Gaussian framework was also shared by alternative
approaches that emerged in financial economics. For instance, extreme-value analysis
was the starting point for one of the major theoretical alternatives developed in finan-
cial economics: behavioral finance. In 1981, Shiller showed that market volatility was
too high to be compatible with a Gaussian process and that, therefore, it could not de-
scribe the occurrence of extreme values. However, although behavioral finance offers
a psychological explanation for the occurrence of extreme values, it does not question
the Gaussian statistical framework (Schinckus 2009).
The first two chapters highlighted the foundations of financial economics and how
this field deals with extreme variations. Similarly, the two following chapters will
introduce econophysics and investigate its foundations.
3
NEW TOOLS FOR EXTREME-VALUE ANALYSIS
STATISTICAL PHYSICS GOES BEYOND ITS BORDERS
The previous chapter studied how financial economists deal with extreme values
given the constraints imposed by their theoretical and methodological framework, the
major constraint being variance as a measure of risk. In this context, we explained that
stable Lévy distributions, whose infinite variance considerably complicates their application in finance. Surprisingly, since the 1990s statistical physicists have found a way to apply stochastic models based on stable Lévy distributions to deal with extreme values in finance. Many financial economists have found this incursion of statistical physics into the study of finance difficult to accept. How can this reluctance
be explained? It appears that the major reason comes from the divergence in terms of
scientific criteria, and a way of “doing science” that differs from financial economists’
practices. Moreover, other differences can be mentioned, including approach, meth-
odology, and concepts used. In the same vein, the core mathematics used in econophysics models is still not easily accessible to noninitiated scholars.
The aim of this chapter is to outline the theoretical and methodological foundations
of statistical physics, which led some physicists to apply their approach and models to
finance. This clarification is a crucial step in the understanding of econophysics from
an economic point of view. This chapter will trace the major developments in statis-
tical physics, explaining why physicists extended their new methods out of their field.
This extension is justified by four key physical notions: critical points, universality
class, renormalization group theory, and the Ising model. These elements will be clari-
fied through an intuitive presentation. We will then explain how the combination of
these key notions leads statistical physicists to consider their framework as the most
appropriate one to describe the occurrence of extreme values in some phenomena. In
finance, econophysicists describe the observation of extreme variations with a power
law. This chapter will show the connections between these power laws and the current
knowledge in financial economics. As chapter 2 explained, financial economists aban-
doned stable Lévy processes due to a technical issue related to the infinity of variance.
This chapter will explain how physicists solve this problem by introducing truncation
techniques. Their original motivation was to make stable Lévy distributions physically
plausible, simply because all physical systems have finite parameters and therefore
finite measures. To sum up, this chapter studies the theoretical tools developed since
the 1970s in statistical physics that led to the creation of econophysics.
[Figure 3.1: pressure–temperature phase diagram showing the solid, liquid, and vapour phases, with the critical point defined by the critical pressure and critical temperature]
The transition from one state into another is due to the gradual change of an external variable (temperature or pressure); it is simply called “phase transition” in physics.
This transformation can be likened to the passage from one equilibrium (phase)7 to
another. When this passage occurs in a continuous way (for instance, a continuous
variation of temperature), the system passes through a critical point defined by a crit-
ical pressure and a critical temperature and at which neither of the two states is real-
ized (figure 3.1). This is a kind of nonstate situation with no real difference between
the two configurations of the phenomenon—both gas and liquid coexist in a homoge-
nous phase. Indeed, physicists have observed that at the critical point, the liquid water,
before becoming a gas, becomes opalescent and is made up of liquid water droplets,
made up of a myriad of bubbles of steam, themselves made up of a myriad of droplets
of liquid water, and so on. This is called critical opalescence. In other words, at the crit-
ical point, the system appears the same at all scales of analysis. This property is called
“scale invariance,” which means that no matter how closely one looks, one sees the
same properties. In contrast, when this passage occurs in a discontinuous way (i.e.,
the system “jumps” from one state to another), there is no critical point. Phenomena
for which this passage is continuous are called critical phenomena (in reference to the
critical points).
Since the 1970s, critical phenomena have captured the attention of physicists due to
several important conceptual advances in the characterization of scale invariance through
the theory of renormalization,8 on the one hand, and to the very interesting properties
that define them, on the other. Among these properties, the fact that the dynamics of crit-
ical states can be characterized by a power law deserved special attention, because this law
is a key element in econophysics’ literature (the next part of this chapter will come back to
this point). As Serge Galam (2004) explains in his personal testimony, the years 1975–80
appeared to be a buoyant period for statistical physics, which was blossoming with the
exact solution of the enigma of critical phenomena, one of the toughest problems in phys-
ics. The so-called modern theory of phase transitions, along with renormalization group
techniques, brought condensed-matter physics into its golden age, leading hundreds of
young physicists to enter the field with a great deal of excitement.
As previously mentioned, Wilson won the Nobel Prize for his method of renor-
malization, used to demonstrate mathematically how phase transitions occur in crit-
ical phenomena. His approach provides a conceptual framework explaining critical
phenomena in terms of phase transitions and enabling exact resolutions.
The renormalization group theory has been applied in order to describe critical phe-
nomena. As explained above, the latter are characterized by the existence of a crit-
ical state in which the phenomenon shows the same properties independently of the
scale of analysis. The major idea of the renormalization group theory is to describe
mathematically these common features through a power-law dependence. As we will
show, this theory played a key role in the extension of statistical physics to other social
sciences. Therefore, we will introduce this method in order to understand some of
the connections econophysicists make with finance.9 As mentioned above, the renor-
malization method deals with scale invariance. While the concept of invariance refers
to the observation of recurrent characteristics independently of the context, the
notion of scale invariance describes a particular property of a system/object or law
that does not change when scales of length, energy, or other variables are multiplied
by a common factor. In other words, this idea of scale invariance means that one re-
current feature (or more than one) can be found at every level of analysis. Concretely,
this means that a macroscopic configuration can be described without describing all
microscopic details. This aspect is a key point in the renormalization theory devel-
oped by Wilson, who extended Widom’s (1965a, 1965b) and Kadanoff ’s (1966)
discovery of “the importance of the notion of scale invariance which lies behind all
renormalisation methods” (Lesne 1998, 25). More precisely, his method considers
each scale separately and progressively connects contiguous scales to one another.
This makes it possible to establish a connection between the microscopic and the
macroscopic levels by decreasing the number of interacting parts at the microscopic
level until one obtains the macroscopic level (ideally a system with one part only). In
this perspective, the idea of scaling invariance allows physicists to capture the essence
of a complex phenomenon by identifying key features that are not dependent on the
scale of analysis.
Consider a phenomenon whose combination of microcomponents can be described by the sequence $X = X_1 + X_2 + \cdots + X_{k_n}$, composed of $k_n$ independent random variables identically distributed according to a stable Lévy distribution (i.e., a power law) such as that used in finance. The renormalization group method consists in grouping these variables into pairs (blocks) and treating each block as a single variable at the next, larger scale.
When applied several times, this pairing method allows modelers to “climb” the
scales by reducing the number of variables (kn) without losing key features of the phe-
nomena, which are captured in the scaling invariance of the process. In other words,
this technique allows us to group random variables into (n) blocks of variables in order
to reduce the initial complexity of the phenomenon. Roughly speaking, the technique
can be summarized by the equation
\[ S_n([X],\alpha) \;=\; n^{-\alpha} \sum_{j=1}^{n} X_j , \]
where $S_n$ is the sequence describing the phenomenon at a specific scale of analysis, while the $X_j$ are the variables used at that level of analysis. The quantity $\alpha$ is called
the “critical exponent” and describes the scale invariance of the phenomena. In other
terms, this exponent describes a universal property observed without regard for scale.
Considering the renormalization group method, the system at one scale is said to consist of self-similar copies of itself when viewed at a smaller scale, but with different parameters describing its components.
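To make this block-aggregation idea concrete, here is a minimal numerical sketch (our own illustration, not code from the econophysics literature). It uses the Cauchy distribution, which is α-stable with α = 1, so that block sums rescaled by n^(-1/α) = n^(-1) have the same distribution as the original variables—the scale invariance exploited by the renormalization group method.

import numpy as np

# Minimal sketch (our illustration): scale invariance under block aggregation
# for a stable law. The standard Cauchy distribution is alpha-stable with
# alpha = 1, so summing blocks of n variables and rescaling by n**(-1/alpha)
# returns a sample with the same distribution.
rng = np.random.default_rng(0)
n_blocks, block_size = 50_000, 16

x = rng.standard_cauchy(n_blocks * block_size)

# Group the variables into blocks, sum each block, and rescale.
renormalized = x.reshape(n_blocks, block_size).sum(axis=1) / block_size

# Quantiles of the original and renormalized samples should coincide (up to noise).
for q in (0.75, 0.90, 0.99):
    print(q, round(np.quantile(x, q), 2), round(np.quantile(renormalized, q), 2))

The same scale-invariance logic underlies the canonical physical example to which the discussion now turns: spins in the Ising model.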
There is no way to speed up or slow down the spin of an electron, but its direc-
tion can be changed, as modeled by the Ising model. The interesting element is
that the direction of one spin directly influences the direction of its neighbor spins
(figure 3.4).
[Figure: subcritical spin configuration, T < Tc]
But when the temperature has been increased to reach the critical point, the situation is completely different. The spins no longer point in the same direction because thermal energy influences the whole system, and the net magnetization of the spin system vanishes. In this critical situation, spins point in no specific direction and follow a stochastic distribution.
[Figure 3.6: spin configuration near the critical point, T ≈ Tc]
As we can see in figure 3.6, there are regions of spin up (black areas) and regions of
spin down (white areas), but all these regions are speckled with smaller regions of
the opposite type, and so on. In fact, at the critical point, each spin is influenced by
all other spins (not only a particle’s neighbors) whatever their distance. This situation
is a particular configuration in which the correlation length13 is very large (it is
considered to be infinite). At this critical state, the whole system appears to be in a
homogeneous configuration characterized by an infinite correlation length between
spins, making the system scale invariant. Consequently, the spin system has the same
physical properties whatever the scale length considered. The renormalization group
method can then be applied, and by performing successive transformations of scales
on the original system, one can reduce the number of interacting spins and therefore
determine a solution from a finite cluster of spins.
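For readers who want to see the critical behavior just described, the following is a minimal sketch of a two-dimensional Ising simulation (our own illustration; the lattice size, temperature, and number of sweeps are arbitrary choices). Near the critical temperature T_c ≈ 2.269 (in units where J = k_B = 1), the configuration develops clusters of aligned spins at every scale.

import numpy as np

# Minimal sketch (our illustration): Metropolis sampling of a 2D Ising model.
# Near T_c ~ 2.269 the spin configuration shows domains within domains,
# the scale-invariant structure discussed in the text.
rng = np.random.default_rng(0)

def ising_metropolis(L=32, T=2.269, sweeps=100):
    """Return an L x L spin configuration after `sweeps` Metropolis sweeps at temperature T."""
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps * L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbours (periodic boundary conditions).
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb          # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1              # accept the flip
    return spins

config = ising_metropolis()                # near-critical configuration
print("magnetization per spin:", config.mean())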
Beyond the ability to describe the spins’ movement, there is another point of in-
terest in the Ising model. Because of its very simple structure, it is not confined to the
study of ferromagnetism. In fact, “Proposed as a model of ferromagnetism, it ‘pos-
sesses no ferromagnetic properties’ ” (Hughes 1999, 104)! Its abstract and general
structure has enabled its use to be extended to the study of many other problems or
phenomena:
The Ising model is employed in a variety of ways in the study of critical point phenomena.
To recapitulate, Ising proposed it … as a model of ferromagnetism; subsequently it has been
used to model, for example, liquid-vapour transitions and the behaviour of binary alloys.
Each of these interpretations of the model is in terms of a specific example of critical point
behaviour… . [T]he model also casts light on critical point behaviour in general. Likewise,
the pictures generated by computer simulation of the model’s behaviour illustrate … the
whole field of scale-invariant properties. (Hughes 1999, 124–25)
For these reasons, statistical physicists consider the Ising model the perfect illus-
tration of the simplest unifying mathematical model. Their search for such models
is rooted in the scientific view of physicists, for whom “the assault on a problem of
interest traditionally begins (and sometimes ends) with an attempt to identify and
understand the simplest model exhibiting the same essential features as the phys-
ical problem in question” (Alastair and Wallace 1989, 237). The Ising model meets
this requirement perfectly. Its use is not restricted to statistical physics because “the
specification of the model has no specific physical content” (Hughes 1999, 99); its
content is mathematical. Therefore, this model is independent of the underlying phe-
nomenon studied, and it can be used to analyze any empirical data that share the same
characteristics.
With these new theoretical developments, statistical physicists had powerful math-
ematical models and methods that could solve crucial problems in physics. They were
able to establish the behavior of systems at their macroscopic level from hypotheses
about their microscopic level, but without analyzing this microscopic level. The com-
bination of the renormalization theory and the Ising model offers statistical physicists
a unified mathematical framework that can analogically be used for the study of phe-
nomena characterized by a large number of interacting microcomponents.
This unified framework is associated with the notion of a "universality class," describing the statistical features of a large variety of phenomena that have the same critical exponent. In other words, the idea of "universality" refers here to the fact that
these phenomena can be studied with the same mathematical model independently
of the context.
The use of critical-phenomena analysis, and its extension to the social sciences, reflects changes in scientific methodology that developed over the twentieth century. Giorgio
Israel (1996) identifies a major change in the way of doing science through “mathe-
matical analogies.” These are based on the existence of unifying mathematical simple
models that are not dedicated to the phenomena studied. Mathematical modeling then
uses mathematical analogies by means of which the same mathematical formalism is
able to account for heterogeneous phenomena. The latter are “only interconnected
by an analogy that is expressed in the form of a common mathematical description”
(Israel 1996, 41). The model then is an effective reproduction of reality without on-
tology, one that may provide an explanation of phenomena. The Ising model is a per-
fect illustration of these simple unifying mathematical models. Israel (1996) stressed
that such mathematical analogies strongly contribute to the increasing mathematiza-
tion of reality.
Mathematical analogies help explain why statistical physicists were tempted to extend their models for analyzing critical phenomena beyond physics. First,
they looked for phenomena with large numbers of interacting units whose micro-
scopic behaviors would not be observed directly but which can generate observable
macroscopic results. These results are consistent with the microscopic motions de-
fined by a set of mathematical assumptions (which characterize random motion).14
Therefore, modelers can look for statistical regularities often characterized by
power laws:
Since economic systems are in fact comprised of a large number of interacting units having
the potential of displaying power-law behaviour, it is perhaps not unreasonable to examine
economic phenomena within the conceptual framework of scaling and universality. (Stanley
and Plerou 2001, 563)
This search led some statistical physicists to create new fields that were called “so-
ciophysics” or “econophysics” depending on the topics to which their methods and
models were applied.
A first movement, sociophysics,15 emerged in the 1980s. One of the reasons that
physicists attempted to explain social phenomena stems from the mathematical power of
the new models borrowed from statistical physics:
During my research, I started to advocate the use of the modern theory of phase transitions to de-
scribe social, psychological, political and economical phenomena. My claim was motivated
by an analysis of some epistemological contradictions within physics. On the one hand, the
power of concepts and tools of statistical physics [was] enormous, and on the other hand,
I was expecting that physics would soon reach the limits of investigating inert matter. (Galam
2004, 50)
In the 1990s statistical physicists turned their attention to economic phenomena, and
particularly finance, giving rise to econophysics. As two of its leading authors, Stanley
and Mantegna, put it, econophysics is “a quantitative approach using ideas, models,
conceptual and computational methods of statistical physics” (2000, 2). As men-
tioned above, these models and methods allow the study of macroscopic behaviors
of any kind whose components behave randomly. This point is interesting because it
echoes the efficient-market hypothesis (the cornerstone of financial economics, as explained in chapter 1), in particular Fama's 1970 formulation based on the assumption of a representative agent. This version of the efficient-market hypothesis is perfectly compatible with the statistical-physics approach that makes no hypothesis about spe-
cific behaviors of investors. Moreover, the renormalization group method seems to be
an appropriate answer to finance, because it makes it possible to move from the micro-
scopic level (i.e., agents whose behaviors are reduced to mathematical hypotheses) to
the macroscopic level (i.e., the evolution of the financial prices empirically observed).
Thus, by analogy, statistical physicists could consider the evolution of financial mar-
kets as the statistical and macroscopic results of a very large number of interactions at
the microscopic level. The application of statistical physics to economics is not limited
to finance; it touches on a number of other economic subjects.16 However, an analysis
of the articles published by econophysicists indicates that research conducted in this
field mainly concerns the study of financial markets, marginalizing other fields. It is no
accident, then, that econophysics is often associated with themes dealing with finan-
cial data (Rickles 2007, 4).
[Figure 3.7(a): S&P 500 price returns, 10-minute data]
While the first time series shows data recorded at 10-minute intervals, the second one describes the evolution of the same data recorded at a monthly interval. These graphs show that larger time intervals reduce the volatility of the data, making them appear to match the Gaussian framework (simulated in the third time series) more closely. Nowadays, with the recording of "intraday data," all quoted prices and tens of thousands of transactions are preserved every single day (Engle and Russell 2004).
The increasing quantity of data and the computerization of financial markets led to
notable changes in techniques for detecting new phenomena. Intraday data brought
to light new phenomena that could not be detected or did not exist with monthly or
daily data. Among these are strategic behaviors that influence price variations.19 More
importantly, the new data have exhibited values more extreme than could be detected
before. Indeed, monthly or daily prices recorded were generally the last prices quoted
during a month or a day, or a mean of the prices quoted during a period, and therefore
extreme values in these data are generally smaller and less frequent than in intraday
data. As figure 3.7 suggests, intraday data have increased interest in research on
extreme values. They have brought new challenges in the analysis of stock price varia-
tions that have required the creation of new statistical tools to characterize them.
The increased quantity of statistical data on financial markets favors the grow-
ing interest in extending the methods and models of statistical physics into finance.
Indeed, as previously mentioned, most of the results obtained in statistical physics are
based on huge amounts of data, which makes this question crucial:
Economics systems differ from often-studied physical systems in that the number of subunits is
considerably smaller in contrast to macroscopic samples in physical systems that contain a huge
number of interacting subunits, as many as Avogadro's number, 6 × 10²³. In contrast, in an economic system, one initial work was limited to analysing time series comprising of order of magnitude 10³ terms, and nowadays with high-frequency data the standard, one may have 10⁸ terms.
(Stanley and Plerou 2001, 563–64)
The advent of intraday data has made it possible to build samples that are suffi-
ciently broad to provide evidence to support the application of power-law distribution
analysis to the evolution of prices and returns on financial markets. This explosion of
financial data—which has no equivalent in other social sciences fields—comes closer
to the standards to which statistical physicists generally work, making market finance
“a natural area for physicists” (Gallegati et al. 2006, 1).
Computers have also transformed scientific research on distributions of stock
market variations. Their ability to perform calculations more rapidly than humans
paved the way for an analysis of new phenomena and the study of old phenomena in
new ways. In this context, old issues debated in the 1970s by financial economics can
be updated and re-evaluated. This is particularly true for stable Lévy processes.20 As
explained in chapter 2, in general there are no closed-form formulas for Lévy α-stable distributions—except for the Gaussian, Cauchy, and Lévy (α = 1/2) distributions—which made
their use in finance difficult in the 1970s. This point is still true today: to work on such
distributions and the associated processes, one has to calculate the value of the param-
eters, and such complex calculations require numerous data. Of course, such calculations
cannot be done by hand. However, computer simulations have changed the situation
because they allow research when no analytic solution actually exists. Moreover, they
make it possible to chart step-by-step the evolution of a system whose dynamics are
governed by nonintegrable differential equations (i.e., there is no analytic solution).
They have also provided a more precise visual analysis of empirical data and of the re-
sults obtained by mathematical tools. By allowing simulations with the different param-
eters of the Lévy α-stable distributions, they have facilitated work with these distribu-
tions, making possible visual research that could have appeared vague before (Mardia
and Jupp 2000). Statistical and mathematical programs have been developed to com-
pute stable densities, cumulative distribution functions, and quantiles, resolving most
computational difficulties in using stable Lévy distributions in practical problems. In
conclusion, then, the increasing use of computers created a context for reconsidering
the use of stable Lévy distributions (and their power laws) in finance. The following
section will detail this point.
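As a small illustration of the kind of computation such programs make possible (our own sketch using SciPy's levy_stable implementation; the parameter values are arbitrary), one can evaluate the density, the cumulative distribution function, and tail probabilities of an α-stable law numerically, even though no closed form exists for general α:

from scipy.stats import levy_stable, norm

# Minimal sketch (our illustration): numerical evaluation of an alpha-stable
# law for which no closed-form density exists. alpha is the tail index and
# beta the skewness (0 = symmetric); both values are arbitrary choices here.
alpha, beta = 1.7, 0.0

print("density at x = 0, stable vs Gaussian:",
      levy_stable.pdf(0.0, alpha, beta), norm.pdf(0.0))

# The stable law puts far more mass in the tails than the Gaussian benchmark.
print("P(|X| > 5), stable:  ", 2 * (1 - levy_stable.cdf(5.0, alpha, beta)))
print("P(|X| > 5), Gaussian:", 2 * (1 - norm.cdf(5.0)))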
Why do physicists care about power laws so much? … The reason … is that we’re condi-
tioned to think they’re a sign of something interesting and complicated happening. The first
step is to convince ourselves that in boring situations, we don’t see power laws.23
Physicists are often fascinated by power laws. The reason for this is that complex, collective
phenomena give rise to power laws which are universal, that is, to a large degree independent
of the microscopic details of the phenomenon. These power laws emerge from collective
action and transcend individual specificities. As such, they are unforgeable signatures of a
collective mechanism.
These remarks underline the fascination of power laws for econophysicists and ex-
plain the numerous empirical studies of power laws (this point will be detailed in
section 3.2.3). As previously mentioned, the link between power laws and critical
phenomena occurs at three levels: correlation lengths, scale invariance, and univer-
sality classes.
The first level is correlation lengths. Section 3.1.1 presented the Ising model by intro-
ducing the concept of correlation length between spins (or interacting components) of a
system. In this context, critical phenomena evolve into a fragile configuration of critical
states in which the large correlation lengths that exist in the system at the critical point are
distributed like a power law. Traditionally, physicists have characterized the correlations be-
tween the constituents of the system they analyze (at a given temperature T) as decaying like an exponential law, $e^{-r/\xi(T)}$ (i.e., the correlation function), where r is the distance between two components and ξ(T) is the correlation length.24 With the purpose of characterizing the divergence observed at the critical point, physicists added a power-law prefactor to the exponential law, giving the correlation function the form $r^{-\alpha}\, e^{-r/\xi(T)}$. At the critical point, as the previous section explained, the correlation length becomes infinite, making the exponential factor $e^{-r/\xi(T)}$ equal to one. This situation means that the correlation function then follows only a power law,25 $r^{-\alpha}$. In other words, away from the critical point, the correlation between two constituents, x and y, decays exponentially,26 as $e^{-|x-y|/\xi(T)}$. But as the critical point is approached, the correlation length increases; right at the critical point the correlation length goes to infinity, and the correlation decays according to a power of the distance, $|x-y|^{-\alpha}$. As Shalizi explains intuitively in his notes, far from the critical point the microscopic dynamics mean that the central-limit theorem can be applied. In this case, the fluctuations are approximately Gaussian. As one approaches the critical point, however, giant, correlated fluctuations begin to appear. In this case, the fluctuations can deliver a non-Gaussian stationary distribution.27
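In compact form (our restatement of the relations just described, with G(r) denoting the correlation function):

\[
  G(r) \;\propto\; r^{-\alpha}\, e^{-r/\xi(T)},
  \qquad\text{and}\qquad
  \lim_{\xi(T)\to\infty} G(r) \;\propto\; r^{-\alpha}.
\]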
The second level of the link between power laws and critical phenomena is scale invariance. At their critical point, the phenomena become independent of the scale; there is, therefore, a scaling invariance. The lack of a characteristic scale implies that the microscopic details do not have to be considered in the analysis. Scaling invariance is the footprint of critical phenomena; and, statistically speaking, the power-law distribution is the sole distribution that has the scale-invariance property.28 Indeed, consider the
definition of scale invariance—no matter how closely you look, you see the same
thing. Thus, one must have the same probability distribution for price variations
on the interval [110, 121] when the price quoted is 110 as for price variations on
the interval [100, 110] when the price quoted is 100. Mathematically speaking,29
we have
\[ \Pr\!\left[\, Y \ge x \mid X \,\right] \;=\; \Pr\!\left[\, Y \ge \frac{x}{X} \,\right], \quad \text{with } x > X, \qquad (3.1) \]

which is equivalent to requiring that the distribution function F satisfies

\[ F\!\left( \frac{x_1}{x_2} \right) \;=\; \frac{F(x_1)}{F(x_2)} . \qquad (3.2) \]

A power function satisfies this condition, since

\[ \left( \frac{x_1}{x_2} \right)^{\alpha} \;=\; \frac{x_1^{\alpha}}{x_2^{\alpha}} . \qquad (3.3) \]

Power laws are also commonly written in rank-ordered (Zipf) form,

\[ k \;=\; c\, y_k^{-\alpha} , \qquad (3.4) \]

where k is (by definition) the rank of $y_k$, c is a fixed constant, and α is called the critical exponent or the scaling parameter.30 In the case of a power-law distribution, the tails
decay asymptotically according to α—the smaller the value of α, the slower the decay
and the heavier the tails. A more common use of power laws occurs in the context
of random variables and their distributions. That is, assuming an underlying proba-
bility model P for a nonnegative random variable X, let F(x) = P[X ≤ x] for x ≥ 0
denote the (cumulative) distribution function of X, and let $\bar{F}(x) = 1 - F(x)$ denote the complementary cumulative distribution function. In this stochastic context, a random variable X is said to follow a power-law (heavy-tailed) distribution if

\[ P[X > x] \;=\; 1 - F(x) \;\approx\; c\, x^{-\alpha} \qquad (3.5) \]
for some constant 0 < c < ∞ and a tail index α > 0. For 1 < α < 2, F has infinite variance
but finite mean, and for 0 < α ≤ 1, F has infinite variance and infinite mean. In general,
all moments of F of order β ≥ α are infinite.
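As a short sketch of the standard argument behind this last claim (our addition): for a nonnegative variable with tail $P[X > x] \approx c\,x^{-\alpha}$, the moment of order β can be written as an integral of the tail, which diverges whenever β ≥ α:

\[
  \mathbb{E}\!\left[X^{\beta}\right]
  \;=\; \int_{0}^{\infty} \beta\, x^{\beta-1}\, P[X > x]\, dx
  \;\approx\; \int_{x_{0}}^{\infty} \beta\, c\, x^{\beta-1-\alpha}\, dx
  \;=\; \infty
  \quad\text{for } \beta \ge \alpha .
\]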
The first connection between power laws and financial economics comes directly from the definition of logarithmic returns,

\[ r_t \;=\; \ln p_t - \ln p_{t-\Delta t} . \qquad (3.6) \]

Therefore, the probability of having a return r higher than a value x, P[r > x], can be written as $\ln P[r > x] = -\alpha \ln x + c$, which can be rewritten as a power-law expression by taking the exponential of both sides of the equation, $P[r > x] = c\, x^{-\alpha}$ (absorbing the constant). This way of in-
terpreting financial returns shows that power laws are an appropriate tool with which
to characterize the evolution of financial prices.
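The relation ln P[r > x] = −α ln x + c also suggests a simple way to estimate α in practice. The following is a minimal sketch (our own illustration, with synthetic Pareto data standing in for returns; the sample size, tail fraction, and seed are arbitrary choices):

import numpy as np

# Minimal sketch (our illustration): estimating the tail exponent alpha from
# ln P[r > x] = -alpha * ln x + c by ordinary least squares on log-log
# coordinates. Synthetic Pareto data (true alpha = 3) stand in for returns.
rng = np.random.default_rng(1)
returns = rng.pareto(3.0, size=100_000) + 1.0

# Empirical complementary cumulative distribution function.
x = np.sort(returns)
ccdf = 1.0 - np.arange(1, x.size + 1) / x.size

# Fit a straight line to the largest 10% of observations (excluding the last
# point, whose empirical CCDF is zero).
tail = slice(int(0.9 * x.size), -1)
slope, intercept = np.polyfit(np.log(x[tail]), np.log(ccdf[tail]), 1)
print("estimated alpha:", -slope)          # should be close to 3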
The second connection is that power laws are easily linked with stochastic pro-
cesses used in financial economics, which describe the evolution of a variable X (price,
return, volume, etc.) over time (t). Knowing that a power law is a specific relationship
between two variables that requires no particular statistical assumption, we may asso-
ciate the evolution between the variables X and t with a power law. In this case, this
evolution is said to be a stable Lévy process when
\[ P(x) \;\sim\; \frac{C}{\left| x \right|^{1+\mu}} \quad (\text{for } x \to \pm\infty), \qquad (3.7) \]
where C is a positive constant called the tail or scale parameter and the exponent μ is
between 0 and 2 (0 < μ ≤ 2). It is worth mentioning that among Lévy processes, only
stable Lévy processes can be associated with power laws31 because the stability pro-
perty is a statistical interpretation of the scaling property. As explained in chapter 2,
the stability of a random variable implies that there exists a linear combination of two
independent copies of that variable with the same statistical properties (distribution
form and location parameters—i.e., mean, variance). This property is very useful in fi-
nance because a monthly distribution can be seen as a linear combination of a weekly
or daily distribution, for example, meaning that statistical characteristics can easily be
estimated for every time horizon. While stochastic processes used in financial eco-
nomics are based on the Gaussian framework, as Mandelbrot (1963), Pagan (1996),
and Sornette (2006, 97) explained, taking a Brownian random walk (i.e., Gaussian
process) as a starting point, a non-normal diffusion law can be obtained by keeping
the independence and the stationarity of increments but by characterizing their dis-
tribution through a distribution exhibiting fat tails. A telling example of such a law is a
power law whose exponent μ will be between 0 and 2.
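A minimal simulation makes this comparison visible (our own sketch using SciPy; α, the number of steps, and the seed are arbitrary choices): keeping independent, stationary increments but drawing them from a symmetric α-stable law with α < 2 turns the Brownian random walk into a fat-tailed Lévy flight.

import numpy as np
from scipy.stats import levy_stable, norm

# Minimal sketch (our illustration): Brownian random walk versus Lévy flight.
# Both walks have independent, stationary increments; only the increment
# distribution differs (Gaussian versus symmetric alpha-stable, alpha = 1.5).
n_steps, alpha, seed = 10_000, 1.5, 7

gauss_steps = norm.rvs(size=n_steps, random_state=seed)
levy_steps = levy_stable.rvs(alpha, 0.0, size=n_steps, random_state=seed)

brownian_walk = np.cumsum(gauss_steps)
levy_flight = np.cumsum(levy_steps)

# The Lévy flight makes occasional huge jumps that the Brownian walk never makes.
print("largest Gaussian step:", np.abs(gauss_steps).max())
print("largest Lévy step:    ", np.abs(levy_steps).max())
print("final position, Brownian walk:", brownian_walk[-1])
print("final position, Lévy flight:  ", levy_flight[-1])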
The third and final connection between power laws and financial economics
refers to the work of Mandelbrot since the 1960s, which we discussed in chapter 2.
The link between power laws and Lévy processes allows econophysicists to anchor
their work in that of Mandelbrot, who tried to improve the first canonical models
of financial economics. Econophysicists have in a sense concretized Mandelbrot’s
project (and they refer systematically to his work). However, while Mandelbrot and
econophysicists arrive at the same result—modeling stock price variations using
Lévy stable processes—they do not take the same path to get there. Mandelbrot starts
his analysis from the stability of stochastic processes. His suggestion was to generalize Gaussian processes by using Lévy stable distributions. To do this, Mandelbrot
singled out the Gaussian framework for its properties of stability. He considered sta-
bility the most important hypothesis for a process because without it we cannot use
certain mathematical results (such as the central-limit theorem) to produce new in-
teresting results in finance:
It is widely believed that price series are not stationary, in the sense that the mechanism that
generates them does not remain the same during successive time periods. … Statisticians
tend to be unaware of the gravity of this conclusion. Indeed, so little is known of nonstation-
ary time series, that accepting nonstationarity amounts to giving up any hope of performing
a worthwhile statistical analysis. (Mandelbrot 1966b, 2)
Mandelbrot’s path was to use the generalized central-limit theorem, which is com-
patible with Lévy stable distributions. In contrast, econophysicists’ starting point
is critical phenomena and the results obtained from renormalization group meth-
ods that can demonstrate stability for non-Gaussian stable processes. More pre-
cisely, the property of being distributed according to a power law is conserved
under addition, multiplication, and polynomial transformation. When we combine
two power-law variables, the one with the fatter-tailed distribution (that is, the one with the smaller exponent) dominates: the tail exponent of the combination is the minimum of the tail exponents of the two combined distributions. Renormalization group methods focus on the scaling property of the process. According to Lesne and Laguës, such an approach led to a new kind of statistical law. In this case,
the attractor32 is no longer the Lévy stable distribution—as it was in Mandelbrot’s
approach—but the critical point. The latter is the attractor in the sense of the scal-
ing invariance.
Scaling laws are a new type of statistical law on the same level as the law of large numbers and
the central-limit theorem. As such they go far beyond the scope of physics and can be found
in other domains such as finance, traffic control and biology. They apply to global, macro-
scopic properties of systems containing a large number of elementary microscopic units.
(Lesne and Laguës 2011, 63)
This difference between Mandelbrot and econophysicists explains why the latter
start systematically from power-law distributions, while Mandelbrot starts systemati-
cally from a stable Lévy distribution. The stable Lévy distribution is a specific case of
the power-law distribution, since the stable Lévy distribution is associated with a power
law whose increments are independent. However, we must remember that when
Mandelbrot began his research on the extension of physical methods and models to
nonphysical phenomena, results from renormalization group theory had not yet been
established. While this connection between Mandelbrot and econophysics seems to
be an ad hoc creation, it does suggest that the roots of econophysics lie in an active research movement that emerged with the creation of financial economics (chapter 1).
Figure 3.8 The slope of the graph (log–log axes, Pr(X ≥ x) versus x) characterizing a power law depends directly on the characteristic exponent (curves shown for α = 1.5, 2, and 2.5)
A potential infinity of power laws exists. Therefore, the common way of identifying
power-law behavior consists in checking visually on a histogram whether the fre-
quency distribution approximately falls on a straight line, which therefore provides
an indication that the distribution may follow a power law (with a scaling parameter α
given by the absolute slope of the straight line).35 To illustrate this visual test, one can
consider, for instance, Gabaix et al. (2006). These authors combined on the same his-
togram the graph of a power law with ones describing empirical data from the Nikkei,
the Hang Seng Index, and the S&P 500 index (figure 3.9).
Figure 3.9 Empirical cumulative distribution function of the absolute value of the daily returns of
the Nikkei (1984–97), the Hang-Seng (1980–97) and the S&P 500 (1962–96)
Source: Gopikrishnan et al. 1999, 5311. Reproduced by permission of the authors.
In their identification of the power law, Gabaix et al. (2006) varied the scaling pa-
rameter α of the power laws until they obtained the best fit between the graph of the
power law and the ones describing the empirical data. In so doing, they approximately
estimate the critical exponent. After having observed that the distributions of returns
for three different country indices were similar, the authors concluded that the equa-
tion given by their power law, whose critical exponent was estimated at 3, appears to hold for the data studied independently of the context. Finally, it is worth men-
tioning that due to the asymptotic dimension of power laws, these laws make sense
only for a very large quantity of data.
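As a concrete illustration of this visual test (our own sketch with synthetic data rather than the indices of figure 3.9), one can plot the empirical complementary cumulative distribution of a sample on doubly logarithmic axes and look for an approximately straight tail:

import numpy as np
import matplotlib.pyplot as plt

# Minimal sketch (our illustration, synthetic data): the visual test for
# power-law behaviour. On log-log axes the empirical complementary cumulative
# distribution of a heavy-tailed sample falls approximately on a straight line,
# whose absolute slope estimates the scaling parameter alpha.
rng = np.random.default_rng(5)
sample = rng.pareto(3.0, size=50_000) + 1.0        # synthetic power-law data (alpha = 3)

x = np.sort(sample)
ccdf = 1.0 - np.arange(1, x.size + 1) / x.size

plt.loglog(x[:-1], ccdf[:-1])
plt.xlabel("normalized value")
plt.ylabel("P(X > x)")
plt.title("Empirical CCDF on log-log axes")
plt.show()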
This type of visual investigation has guided econophysicists’ empirical research on
power-law distribution. As mentioned above, this kind of approximation is nothing
new, since Pareto (1897) was the first author to identify such empirical linearity for
income distribution in the population—where many people seemed to have low in-
comes while very few seemed to have high incomes. Since then, this linearity has been
observed in a wide variety of phenomena. The physicist Felix Auerbach (1913) pointed
out that city sizes are also distributed according to a power law—there are many towns,
fewer large cities, and very few metropolises.36 The linguist Zipf (1949) refined this
observation, hence the term “Zipf ’s Law” frequently used to refer to the idea that city
sizes follow a Pareto distribution as a function of their rank. Interestingly, Auerbach
suggested that this law of city sizes was “only a special case of a much more general law
which can be applied to the most diverse problems in the natural sciences, geography,
statistics, economics, etc.” (Auerbach 1913, 76, in Rybski 2013, 1267).37 In the same
vein, Estoup (1916), and some years later Condon (1928) and then Zipf (1935),
observed this linear relationship in the occurrence of words in the vast majority of texts,
regardless of the language.38 In 1922, Willis and Yule (1922) discovered that the fre-
quency of the sizes of biological genera collected by Willis (1922) satisfies a power-law
distribution. Yule (1925) explained that the distribution of species among genera of
plants also follows a power law. Lotka (1926) observed this distribution for the fre-
quency of publications by scientists. Kleiber (1932) and Brody (1945) found that the
metabolic rate of various animals follows a power law as a function of their body mass.
We can find many other authors who have made similar observations in a variety
of fields—including earthquakes, DNA, and human memory retrieval. The number
of these observations has considerably increased with the spread of computerized
databases.39 Power laws have become more and more popular in science, as Clauset
explains:
In the mid-1990s, when large data sets on social, biological and technological systems were
first being put together and analyzed, power-law distributions seemed to be everywhere … .
There were dozens, possibly hundreds of quantities, that all seemed to follow the same pat-
tern: a power-law distribution. (2011, 7)
The result of this evolution was such that some “scientists are calling them more
normal than normal [distribution; therefore,] the presence of power-law distributions
in data … should be considered as the norm rather than the exception” (Willinger,
cited in Mitchell 2009, 269).
This kind of relationship was also observed in financial and economic phe-
nomena, in addition to Pareto’s observations on income distribution.40 As explained
in c hapter 2, Mandelbrot was the first to identify this linear relationship in stock
price variations. This observation led him to apply the stable Lévy process to stock
price movements in the early 1960s. Although financial economists did not follow
Mandelbrot’s research due to mathematical difficulties (chapter 2), economists have
always used power laws as a descriptive framework to characterize certain economic
phenomena. The relationship between the size of firms, cities, and organizations and
one of their characteristics (number of employees, inhabitants, growth, profits, etc.)
is an example. Recently, financial economists have shown a new interest in this dis-
tribution. For instance, Gabaix (2009) showed that the returns of the largest compa-
nies on the New York Stock Exchange exhibit the same visual linearity (figure 3.10).
Figure 3.10 Empirical cumulative distribution of the absolute values of the normalized 15-minute
returns of the 1,000 largest companies in the Trades and Quotes database for the two-year period
1994–95 (12 million observations)
Source: Gabaix et al. 2003, 268. Reproduced by permission of the authors.
of physical systems. However, while they are interested in power laws for the statis-
tical description of large fluctuations, some physicists, and particularly econophysi-
cists (Koponen 1995; Mantegna 1991; Mantegna and Stanley 1994) have emphasized
empirical difficulties in using these processes. They have pointed out oppositions
between the mathematical properties of these processes and their physical applica-
tions. With the objective of resolving these oppositions, these authors have developed
truncation techniques that have proved necessary for using power laws and associated
stochastic processes.
Consequently, at the end of the 1980s physicists were faced with a contradiction: while
power laws (stable Lévy processes) appeared to be supported by visual analysis of em-
pirical data, their major statistical feature (infinite variance) did not fit with a phenom-
enological description of physical systems, which all exhibit strictly finite measures.
Van der Vaart (2000) explained that two conceptual problems can emerge when
an asymptotic regime is used to describe behaviors of finite systems: first, the number
of empirical data is always finite, and second, the theoretical properties associated
with an asymptotic regime conflict with the observed data, which necessarily have finite variance due to the finiteness of the sample. To solve these problems, statisti-
cians have suggested truncation techniques. These techniques allow an adaptation of
the asymptotic regime to empirical finite samples. Therefore, they have made power
laws physically plausible.
The first reason for truncation is the need to fill the gap between the finiteness
of every sample of empirical data and the asymptotic framework defining the power-law regime. The truncation of a distribution is a statistical tech-
nique allowing modelers to transform an asymptotic and theoretical distribution
into a finite and empirical one.43 In other words, truncation is a conceptual bridge
between empirical data and asymptotic results. This idea of truncation is nothing
new in physics,44 but research into this topic has accelerated during the past two
decades. During this period, econophysicists have developed sophisticated sta-
tistical tools (truncation techniques) to escape the asymptotic regime and retain
power laws to describe empirical data. Again, this need to hold onto power laws
derives from their theoretical link with scaling invariance, exposed in the previous
section.
The second reason for truncating power laws associated with non-Gaussian
stable Lévy processes involves the need to transform these laws into a “physically
plausible description” of reality. As we know, physical systems refer to real phe-
nomena that can have neither asymptotic behavior nor an infinite parameter. Thus,
using non-Gaussian stable Lévy processes to describe a finite sample with finite pa-
rameters, although they are based on an asymptotic regime generating an infinite
variance, seems complicated. Truncation techniques allow modelers to transform
an asymptotic regime into a finite sample by cutting off the asymptotic part of the
distribution, thus providing finite variance. We will illustrate this process in the fol-
lowing sections.
from one to the other, econophysicists provided a first theory of this changeover.
Therefore, physicists applied asymptotic properties without misrepresenting
them since they provided a specific formulation of the gap between these proper-
ties and empirical results. In this way, physicists clarified the bridge between the
asymptotic and nonasymptotic regimes by making the switch physically plausible
and theoretically justified.
Econophysicists resolved the challenge of obtaining stable Lévy processes with
finite variance by using a “cutoff.” The first truncation technique was established by
Mantegna (1991), who developed his solution using financial data. He justified his
choice to use financial data because they offer time series that take into account the oc-
currence of extreme variations. Later, Mantegna and Stanley (1994) generalized this
approach with a truncated stable Lévy distribution with a cutoff at which the distri-
bution begins to deviate from the asymptotic region. This deviation can take different
forms, as shown in the equations below. With small amounts of data, that is, for a small
value of x, P(x) takes a value very close to what one would expect for a truncated stable
Lévy process. But for large values of observations, n, P(x) tends to the value predicted
for a nontruncated stable Lévy process. In this framework, the probability of taking a
step of size (x) at any time is defined by

\[ P(x) \;=\; g(x)\, L(x), \qquad (3.8) \]
where L(x) is a stable distribution and g(x) a truncation function. In the simplest
case (abrupt truncation), the truncation function g(x) is equal to a constant k and the
abrupt truncation process can be characterized by
\[ P(x) \;=\; g(x)\,L(x) \;=\;
   \begin{cases}
     k\,L(x) & \text{if } x \le I, \\
     0       & \text{if } x > I,
   \end{cases}
   \qquad (3.9) \]
where I is the value from which the distribution is truncated. If x is not large enough
(the physically plausible regime), a Lévy distribution behaves very much like a trun-
cated stable Lévy distribution since most values expected for x fall in the Lévy-like
region. When x is beyond the crossover value, we are in the nontruncated regime
to which the generalized central-limit theorem (i.e., asymptotic regime) can be
applied.
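As a rough numerical illustration of this abrupt cutoff (our own sketch; the stable index α and the cutoff I are hypothetical values), one can draw a sample from a symmetric stable law and simply discard steps larger than I. The variance of the truncated sample is then finite, whereas the variance of the raw stable sample is dominated by a few enormous steps and does not settle down as the sample grows:

import numpy as np
from scipy.stats import levy_stable

# Minimal sketch (our illustration, hypothetical parameter values): abrupt
# truncation of a symmetric alpha-stable sample. Steps with |x| > I are
# rejected, which makes the sample variance finite; the untruncated sample's
# variance is dominated by a handful of huge steps.
alpha, cutoff_I = 1.5, 10.0
raw = levy_stable.rvs(alpha, 0.0, size=100_000, random_state=3)

truncated = raw[np.abs(raw) <= cutoff_I]   # abrupt cutoff: keep only |x| <= I

print("variance, truncated sample:  ", truncated.var())
print("variance, untruncated sample:", raw.var())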
The goal is to reduce the fat tails of the non-Gaussian stable Lévy distribution
without deforming the central part of the distribution in order to decompose it into
a truncated stable part dedicated to the short and medium term and a nontruncated
part dedicated to the long term.47 This temporal decomposition is required because
in the (very) long term, we necessarily have an asymptotic regime. Therefore, thanks
to the truncation, in the short and medium term, we can leave the asymptotic regime.
In other words, the truncation makes it possible to decompose the stable distribution
into a physically plausible regime (truncated part) and an asymptotic one (nontrun-
cated part) (figure 3.11).
[Figure 3.11 schematic: truncated stable Lévy regime; nontruncated stable Lévy regime; the resulting truncated distribution (arrow 3)]
Figure 3.11 The idea of truncation for stable Lévy processes, where l is the selected cutoff and n the
number of data
Thanks to the truncated non-Gaussian stable Lévy process, physicists can have a finite
variance and hence a more physically plausible description of empirical systems. The
idea is to avoid the statistical properties that are in opposition to empirical observation
(example: an infinite variance) by keeping those that are perceived as useful (example:
stability). It is worth mentioning that a truncated Lévy distribution taken as a whole is
no longer stable since it is characterized by a statistical switch of regimes (the third arrow
in figure 3.11). However, the truncated part of the distribution (first arrow in figure
3.11) keeps the statistical property of stability by offering a finite variance (Nakao 2000).
This first truncation technique based on a “cutoff” parameter provides a solution to the
problem of infinite variance. However, this specific technique produces a distribution that
truncates abruptly. Some physicists have claimed that this kind of truncation is not physi-
cally plausible enough because the physical system rarely changes abruptly:48 “In general,
the probability of taking a step [a variation] should decrease gradually and not abruptly, in a
complex way with step size due to limited physical capacity” (Gupta and Campanha 1999,
232). This generalized empirical principle has led physicists to go beyond abruptly trun-
cated stable Lévy processes, which do not have a sufficient physical basis (Mantegna and
Stanley 2000). With this purpose, physicists have developed statistical techniques to solve
this constraint related to the physically plausible dimension of the truncation technique.
Among them were Gupta and Campanha (1999), who considered the truncation
with a cutoff that is a decreasing exponential function49 (also called an exponential
cutoff). The idea was to switch from a physically plausible regime to an asymptotic
one through a gradual or an exponential cutoff after a certain step size, which may be
due to the limited physical capacity of the systems under study. We can express the
exponential truncation function50 by using equation (3.10):
\[ g(x) \;=\;
   \begin{cases}
     1 & \text{if } x \le I, \\
     \exp\!\left[ -\left( \dfrac{x-I}{k} \right)^{\beta} \right] & \text{if } x > I,
   \end{cases}
   \qquad (3.10) \]

where I is the cutoff parameter at which the distribution begins to deviate from the Lévy distribution, $\exp[-((x-I)/k)^{\beta}]$ is a decreasing function, and k and β are constants
related to truncation.51 Using this truncation function, Gupta and Campanha (1999)
defined the probability of taking a step of size (x) at any time as being given by
\[ P(x) \;=\;
   \begin{cases}
     k\,L(x) & \text{if } x \le I, \\
     k\,L(x)\, \exp\!\left[ -\left( \dfrac{x-I}{k} \right)^{\beta} \right] & \text{if } x > I.
   \end{cases}
   \qquad (3.11) \]
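A gradual cutoff can be sketched in the same spirit (our own illustration; I, k, and β are hypothetical values): instead of rejecting every step beyond I, a step of size |x| > I is kept with probability exp[−((|x|−I)/k)^β], so the tail is damped smoothly rather than chopped off.

import numpy as np
from scipy.stats import levy_stable

# Minimal sketch (our illustration, hypothetical parameter values):
# exponentially truncated Lévy sampling by rejection. A candidate step x is
# always kept if |x| <= I, and kept with probability exp(-((|x| - I)/k)**beta)
# otherwise, mimicking the gradual cutoff function g(x).
alpha, I, k, beta = 1.5, 10.0, 5.0, 1.0
rng = np.random.default_rng(4)

candidates = levy_stable.rvs(alpha, 0.0, size=100_000, random_state=4)
excess = np.maximum(np.abs(candidates) - I, 0.0)
keep_prob = np.exp(-((excess / k) ** beta))   # equals 1 below the cutoff I
kept = candidates[rng.random(candidates.size) < keep_prob]

print("fraction of steps kept:      ", kept.size / candidates.size)
print("variance of truncated sample:", kept.var())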
3.4. CONCLUSION
The major contribution of this chapter is to present the theoretical and methodological foundations of econophysics. The chapter explained the theoretical origins of the power laws used in econophysics models. This objective is valuable simply because these foundations are usually not made explicit in the literature. It is
worth reminding ourselves that our analysis is based on a financial economist’s
viewpoint.
As detailed, the roots of econophysics lie in the research on critical phenomena and in the scientific changes observed in statistical physics in the 1970s. The key themes of this discipline were presented: renormalization group theory, the Ising model, and scaling properties. We also discussed scale invariance,
which is at the heart of these three themes. With the purpose of understanding
econophysics, we explained the connections between power laws, scale invari-
ance, and financial economics. Power laws make sense in relation to the definition
of financial returns; they can also describe the evolution of financial fluctuations
through stable Lévy processes. This is where stable Lévy processes play a key role
for econophysicists. Readers were reminded that power laws are not new in finan-
cial economics, since Mandelbrot—for different reasons—introduced them into
finance in the 1960s.
As this chapter showed, power laws are not perfectly adapted to describe phys-
ical (finite) systems, because they theoretically generate an infinite variance. To solve
this problem, physicists have developed truncation techniques. Econophysicists are
thereby able to obtain a finite variance while keeping the notion of scale invariance.
As mentioned, these truncation techniques are closely associated with the emergence
of econophysics.
The following chapter will define econophysics and explain how this discipline is
related to financial economics.
4
THE DISCIPLINARY POSITION OF ECONOPHYSICS
NEW OPPORTUNITIES FOR FINANCIAL INNOVATIONS
[Figure 4.1: Physics Ph.D.s per year, 1900–2000]
The rapid rise in the public funding in the 1980s for young physicists generated a form
of “déjà vu” since it resembled the first bubble observed during the “Sputnik era” of the
1960s (also visible in figure 4.1).2 The second bubble was mainly favorable to physicists
involved in condensed-matter physics because this area of knowledge benefited from the significant theoretical results obtained in the 1970s (chapter 3). Moreover, "[In this]
field of research the line between fundamental physics and its practical applications was
so close that it was often blurred” (Cassidy 2011, 131). This rhetoric was directly in line
with the political community’s expectations in the 1990s, leading to a higher number
of public-funding opportunities for projects developed in this area of knowledge. This trend was strengthened in the 1990s, when condensed-matter physics became the first choice of new PhD students in physics, generating the second bubble evoked above. In 2000, for instance, 41 percent of doctorates in physics were in condensed-matter physics (Cassidy 2011).
A point must be clarified here. In the previous chapter, we associated the founda-
tions of econophysics with statistical physics. It is worth mentioning that statistical
physics refers to a set of theoretical tools that can be applied to different physical
systems, while condensed matter is the branch of physics dealing with the physical
properties of the phases of matter. In other terms, condensed-matter physicists investigate the behavior of matter and how it can evolve toward different phases. In their study of matter, these physicists often use statistical methods (i.e., statistical physics). This context explains why several fathers of econophysics (e.g., McCauley and Stanley) and the vast majority of research centers providing PhD opportunities in this field are associated with condensed-matter physics. Telling examples are the Center for Polymer
Studies in Boston and the Santa Fe Institute, which have been important actors in the
organization and the diffusion of econophysics.
The following section will analyze the disciplinary position of econophysics in this
particular context. Did econophysicists produce their knowledge in physics or finan-
cial economics (or both)? Did they publish their works in financial economics jour-
nals? These are the kind of questions we will investigate hereafter.
This trend appears clearly in the journal Physica A (figure 4.2), one of the three jour-
nals that publish the great majority of articles on the subject (we will return to this
point in the next section).
Figure 4.2 Number of articles on econophysics published in Physica A between 1996 and 2010
Source: Web of Science.
It is worth mentioning that the nonlinearity of the trend shown in figure 4.2 is mainly
due to regularly published special issues devoted to econophysics. The trend observed
for Physica A is also found in the three other major journals that have published articles
in econophysics (International Journal of Modern Physics C, Physical Review E, and the
European Physical Journal B). This sustained growth in the number of articles published
each year earned econophysics official recognition as a discipline by the Physics and
Astronomy Classification Scheme (PACS) in 2003—10 years after the first articles.
The emerging editorial activity in econophysics has followed a relatively clear
line: at the beginning, econophysicists preferred to publish and gain acceptance in
journals devoted to a preexisting theoretical field in physics (statistical physics) rather
than create new journals outside a preexisting scientific space and structure. Moreover,
these journals are among the most prestigious in physics. This editorial orientation
results from the methodology used by econophysicists (derived from statistical phys-
ics, as was mentioned in the previous section) but also from the new community’s
hope, on the one hand, to quickly gain recognition from the existing scientific com-
munity and, on the other hand, to reach a larger audience. Next, econophysicists cre-
ated journals with a title directly associated with economics or financial economics.
One can consider, for instance, Quantitative Finance, created in 2001, a journal essen-
tially devoted to questions of econophysics. The Journal of Economic Interaction and
Coordination (JEIC), which started in 2006, is open to econophysics although its main
focus, the concept of interaction, is not a key theme in econophysics. Econophysicists
have also increased their influence in economic journals that already existed. Among
these is the Journal of Economic Dynamics and Control, created in 1979, which has been
open to papers related to econophysics since the publication in 2008 of a special issue
devoted to this theme.
The 1990s, then, stand as the decade in which econophysics emerged thanks
to growth in the number of publications. Textbooks on econophysics were not far
behind. The first textbook, Introduction to Econophysics by Mantegna and Stanley,
was published in 2000. Several more have appeared since—by Roehner (2002) and
McCauley (2004), for example. The publication of textbooks is a very important
step in the development of a new field. They do not have the same status as arti-
cles or collections of articles. The latter are frequently aimed at spreading know-
ledge in an approach to the subject matter that remains exploratory and not unified.
Textbooks, on the other hand, are more grounded in a unified analysis. Their con-
tents therefore require a period of homogenization of the discipline, which is why
they represent an additional step in the institutionalization process. They “contain
highly elaborate models of linguistic forms for students to follow” (Bazerman 1988,
155). Textbooks play a sociological and educational role for neophytes by defining
the patterns of learning and formulating statements appropriate to the community
they wish to join. Given that collections of articles are published before textbooks,
the interval between the publication of the former and that of the latter gives an indi-
cation of the discipline’s evolution ( Jovanovic 2008): econophysics appears, there-
fore, as a theoretical approach that is evolving relatively rapidly. Barely a decade was
required to see the appearance of the first textbooks presenting econophysics as a
unified and coherent field. The swiftness of the development of this discipline can
be gauged by noting that it took twice as long, that is, two decades, for the first text-
books devoted to another recent specialty in finance, behavioral finance, to appear
(Schinckus 2009).
Another way to spread knowledge related to a new field is to organize workshops
and colloquiums. The first conference devoted to econophysics took place at the
University of Budapest in 1997 and was organized by the Department of Physics. Two
years later, the European Association of Physicists officially endorsed the first con-
ference on applications, Applications of Physics in Financial Analysis (APFA), which
was held in Dublin. The APFA colloquium was entirely dedicated to econophysics and
was held annually until 2007. Today, several annual conferences dedicated to econo-
physics exist, such as the Nikkei Econophysics Research Workshop and Symposium
and the Econophysics Colloquium. Combined with publications of papers and text-
books, these events contribute to the stabilization and spread of a common scientific
culture among econophysicists.
Another important component of a truly institutionalized research field is the cre-
ation of academic courses and BA, MA, and PhD programs devoted solely to that field.
Here again, physics has served as the institutional basis. Courses in econophysics have
been offered exclusively in physics departments since 2002 (the universities of Ulm in
to 2008—allows an analysis of the evolution of the field since the changes that oc-
curred in its prehistory period and makes it possible to measure the impact of its
birth (in 1996). The objective of the following section is to trace the birth and the
beginning of econophysics in order to understand the context in which the field
emerged. From this perspective, we will investigate the early period of econophysics
between 1996 and 2008. Another reason for focusing our attention on this period
refers to the fact that econophysics in its infancy was clearly defined (statistical
physics applied to financial economics). As Chakraborti et al. (2011a, 2011b) and
Schinckus (2012, 2017) showed, the econophysics literature has become more and
more scattered since 2009–10.
These observations suggest that, although finance and economics journals did publish
articles on econophysics, the field did not hold a large place in financial economics. In
contrast, econophysics became more and more important in the discipline of physics.
This observation corroborates the fact that all academic programs dedicated to econo-
physics have developed outside economics.
The centrality of physics for econophysics (between 1996 and 2008) is clearly vis-
ible in figure 4.3, which maps the network of co-citations between journals cited in
papers citing our 242 source papers in econophysics. The dense core of the network
is composed of physics journals, while economics and finance journals are peripheral
(northwest of the map) and Quantitative Finance is in between.
[Figure 4.3 network nodes include, among others: Review of Financial Studies, Journal of Economics and Finance, Journal of Econometrics, American Economic Review, Journal of Empirical Finance, Journal of Business, Proceedings of the National Academy of Sciences USA, Nature, International Journal of Modern Physics B, Physical Review, and Physical Review A]
Figure 4.3 Most co-cited journals (and manuals) in papers citing our 242 source articles in econophysics (100 co-citations +)
Source: Gingras and Schinckus (2012). Reproduced by permission of the authors.
Another way to look at the centrality of physics journals is provided in table 4.2, which
shows that between 1996 and 2008 only 12 percent of the citations of the source
papers came from economics or finance journals. Interestingly, this trend was similar
in the previous period (1980–95), even though more than half of the papers had been
published in economics and finance journals.
A more precise investigation of the bibliometric data shows that econophysics has
not merely grown in the shadow of physics: it has progressively developed as an au-
tonomous subfield in the discipline. Indeed, although most of the publications citing
the source papers have appeared in journals of physics since 1996, we note that they
are concentrated in only two journals: Physica A (devoted to “statistical mechanics
and its applications”) and European Physical Journal B (devoted to condensed matter
and complex systems). Together these two journals account for 47.1 percent of
the publications (table 4.3). In addition, table 4.4 shows that Physica A published
by far the largest number of econophysics papers, with 41.5 percent of the total in
the second period (1996–2008). It has thus become the leading journal of this new
field. In second place is another physics journal, European Physical Journal B. This
observation must be compared to the relative unimportance of Physica A in the first
period: only 4 percent of the key papers were published in this journal between 1980
and 1996. With the European Physical Journal B, Physica A published 53.9 percent
of the key papers between 1996 and 2008. In addition to the two journals already
identified as the core publishing venues for econophysics, we find Physical Review E,
the major American physics journal devoted to research on “statistical, nonlinear and
soft-matter physics.”
This concentration in physics journals, and the fact that econophysicists have a
strong tendency to refer essentially to each other (Pieters and Hans 2002), suggest
that econophysics has now established its autonomy within physics.
This trend can also be observed in table 4.6, which lists the main journals cited in the
source papers. While economics journals (e.g., American Economic Review) were often
cited in the key papers written between 1980 and 1995, physics journals became the main
source of knowledge for the papers published after 1996. As the table shows, between
1996 and 2008, among the first 10 journals listed, physics journals represent 22.6 percent,
while economics and finance journals represent 10.7 percent of papers published.
Table 4.6 Main journals cited in the source papers (two citations or more)
Source: Web of Science.
Taken together, these data confirm that econophysics has developed on the existing
institutional structures of physics, rather than attempting to impose itself inside the
existing field of financial economics. As we have already pointed out, econophysics is
being promoted by “outsiders” to financial economics.
economists. While it is not hard to understand that this disciplinary structure makes
dialogue difficult, it has provided a surprisingly fruitful context for scientific innova-
tions. This section will analyze this fertility by investigating to what extent their po-
sition as outsiders has allowed econophysicists to innovate and to contribute to the
understanding of financial markets.
The Gaussian framework being a statistician’s best friend, often, when he must process data
that are obviously not normal, he begins by “normalizing” them … in the same way, it has
been very seriously suggested to me that I normalize price changes. (1997, 142)
We [econophysicists] do not “massage” the data. Data massaging is both dangerous and mis-
leading. (2006, 8)
a different “conceptual matrix”: while economists use data in the last step of their re-
search process as an empirical justification of their theoretical argument, econophysi-
cists use data as an epistemic starting point—suggesting the selection of an existing
theoretical framework. Because data are not perceived, used, and presented in the same
way in econophysics and financial economics, they can be a point of contention be-
tween the two communities.
The fourth and last obstacle to dialogue between the two communities concerns
methodology. In this domain, financial economists and econophysicists do not share
the same assumptions about readers’ expectations. Although the empirical dimension
is emphasized in both communities, financial economists take an apriorist approach
(axiomatically justified argumentation), while econophysicists develop an a posteriori
perspective (phenomenological data-driven models). These methodological diver-
gences are responsible for two main shortcomings of econophysics: a lack of theoret-
ical explanation and a lack of quantitative tests.
Although econophysicists have obtained numerous empirical observations
(chapter 3), there are no theoretical explanations to support them (Mitzenmacher
2005). Now, from the statistical physics viewpoint, one can consider that there is a
theoretical justification of sorts: the proof that the phenomenon studied is a critical
phenomenon, which justifies the use of the specific models and approach coming from
statistical physics. However, leaving this theoretical argument aside, econophysicists
have produced no theoretical justification to explain why the economic phenomena
studied are governed by a power law.18 The economist Steven Durlauf pointed this out
in 2005:
The empirical literature on scaling laws [i.e. power laws] is difficult to interpret because of the
absence of a compelling set of theoretical models to explain how the laws might come about.
This is very much the case if one examines the efforts by physicists to explain findings of scal-
ing laws in socioeconomic contexts. (Durlauf 2005, F235)
Consequently,
The econophysics approach to economic theory has generally failed to produce models that
are economically insightful. (Durlauf 2005, F236)
real business cycle (RBC) models have faced the same criticism.21 Some works fo-
cusing on the theoretical foundations of power laws have been initiated over the past
few years. Chapter 5 will analyze them in detail. However, these attempts were mar-
ginal or not sufficiently general to be adopted by all. Some econophysicists today have
a greater awareness of this dearth of theory:
One [problem with the efforts to explain all power laws using the things statistical physi-
cists know] is that (to mangle Kipling) there turn out to be nine and sixty ways of con-
structing power laws, and every single one of them is right, in that it does indeed produce a
power law. Power laws turn out to result from a kind of central limit theorem for multipli-
cative growth processes, an observation which apparently dates back to Herbert Simon,
and which has been rediscovered by a number of physicists (for instance, Sornette). Reed
and Hughes have established an even more deflating explanation… . Now, just because
these simple mechanisms exist, doesn’t mean they explain any particular case, but it does
mean that you can’t legitimately argue “My favorite mechanism produces a power law;
there is a power law here; it is very unlikely there would be a power law if my mechanism
were not at work; therefore, it is reasonable to believe my mechanism is at work here.”
(Deborah Mayo would say that finding a power law does not constitute a severe test of
your hypothesis.) You need to do “differential diagnosis,” by identifying other, non-power-
law consequences of your mechanism, which other possible explanations don’t share. This,
we hardly ever do.22
In addition to this lack of theoretical interpretation, there exists another issue re-
lated to techniques of validating statistical analysis. Chapter 3 explained that econo-
physicists base their empirical results on a visual technique for identifying a fit
between a phenomenon and a power law. As the next chapter will explain, until re-
cently there was no quantitative test on the power-law hypothesis. However, this visual
approach can be considered qualitative testing, and it is extremely problematic for fi-
nancial economists, who are strong defenders of quantitative tests. Durlauf pointed
out the insufficiency of such qualitative tests:
Literature on power and scaling laws has yet to move beyond the development of statistical
measures to the analyses of model comparison and evaluation. In other words, many of the
empirical claims in this literature concerning the presence of a particular law in some data set
fail to address the standard statistical issues of identification and statistical power adequately.
Hence, it is difficult to conclude that the findings in this literature can allow one to infer that
some economic environment is complex. (Durlauf 2005, F232)
In aiming to understand such criticism, we must bear in mind that financial econo-
mists systematically use econometrics for testing their models. Moreover, econo-
metric tests are a criterion of scientific acceptability on which financial economics was
built (chapter 1). Section 4.3.3 will come back to this point: suffice it to say here that,
at the time financial economists created their own discipline, they strongly rejected the
visual approach used by the chartists and defended quantitative tests.23 Quantitative
tests were a crucial point in financial economists’ argument that their approach was
based on scientific criteria, while chartism was presented as a practice with no sci-
entific foundation. Considering this methodological position, the visual tests used
by econophysicists are considered to lack credibility, and even scientific foundation.
The lack of quantitative tests makes econophysics literature unacceptable to financial
economists:
The power law/scaling literature has yet to develop formal statistical methodologies for
model comparison exercises; until such methods are developed, findings in the econophys-
ics literature are unlikely to persuade economists that scaling laws are empirically important.
(Durlauf 2005, F234)
Well-founded methods for analyzing power-law data have not yet taken root in all, or even
most, of these areas and in many cases hypothesised distributions are not tested rigorously
against the data. This naturally leaves open the possibility that apparent power-law behavior
is, in some cases at least, the result of wishful thinking … [T]he common practice of identi-
fying and quantifying power-law distributions by the approximately straight-line behavior of
a histogram on a doubly logarithmic plot should not be trusted: such straight-line behavior
is a necessary but by no means sufficient condition for true power-law behavior. (Clauset,
Shalizi, and Newman 2009, 691)
In fact, many of the power laws econophysicists have been trying to explain are not power
laws at all (Clauset, Shalizi, and Newman 2009). The next chapter will return to this
crucial assertion in detail.
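To make this point concrete, the following short sketch (ours, built on purely synthetic samples with arbitrary parameters; it reproduces the logic, not the exact procedure, of Clauset, Shalizi, and Newman 2009) compares a maximum-likelihood (Hill) estimate of the tail exponent with the "visual" slope of a log-log survival plot, for a genuine Pareto sample and for a lognormal sample that is not a power law at all.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic samples: a genuine power law (Pareto, survival exponent alpha = 3)
# and a lognormal, which is NOT a power law but often looks straight on a
# doubly logarithmic plot of the empirical survival function.
pareto = rng.uniform(size=50_000) ** (-1.0 / 3.0)       # P(X > x) = x**-3 for x >= 1
lognormal = np.exp(rng.normal(0.0, 1.5, size=50_000))

def hill_alpha(x, xmin):
    """Maximum-likelihood (Hill) estimate of the survival exponent alpha,
    assuming P(X > x) ~ (x / xmin)**(-alpha) for x >= xmin."""
    tail = x[x >= xmin]
    return tail.size / np.log(tail / xmin).sum()

def visual_slope(x, xmin):
    """'Visual' estimate criticized above: least-squares slope of the
    empirical survival function plotted on log-log axes."""
    tail = np.sort(x[x >= xmin])
    survival = 1.0 - np.arange(tail.size) / tail.size
    slope, _ = np.polyfit(np.log(tail), np.log(survival), 1)
    return -slope

for name, sample in [("pareto", pareto), ("lognormal", lognormal)]:
    xmin = np.quantile(sample, 0.90)        # keep only the upper decile
    print(f"{name:9s}  Hill alpha = {hill_alpha(sample, xmin):4.2f}  "
          f"log-log slope = {visual_slope(sample, xmin):4.2f}")
# Both samples return a seemingly sensible exponent; only a formal
# goodness-of-fit test (e.g., the KS-based procedure of Clauset, Shalizi,
# and Newman 2009) can tell the genuine power law from the lognormal.
```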
The dominant characteristic of the modern Western sciences as particular kinds of social or-
ganisation which generate new knowledge is their combination of novelty with conformity.
They reward intellectual innovation—only new knowledge is publishable—and yet contri-
butions have to conform to collective standards and priorities if they are to be regarded as
competent and scientific. Scientific fields can therefore be regarded as conservative novelty-
producing systems. (Whitley 1986b, 186–87)
To publish new ideas, authors have to share the criteria, standards, and objectives of
the scientific community of the specific field. Consequently, new ideas that are not
plainly compatible with the scientific framework will be more frequently developed
outside the field. This is particularly true in economics and physics, which are the
most controlled disciplines and in which scientists share a strong awareness of the
boundaries of their own discipline (Whitley 1986b, 193). Financial economics is an
institutionalized discipline with a strong control over its theoretical goals and models.
In this context, all new contributions have to conform to the criteria of conventional
acceptance—postulates, beliefs, standards, priorities, and so on—shared by financial
economists (Whitley 1986a, 1986b), and, consequently, there is a danger that novel-
ties will be kept in the periphery of the discipline. In addition, like economics, financial
economics “has a strong hierarchy of journals, so that researchers seeking the highest
reputations have to adhere to the standards and contribute to the goals of those con-
trolling the most prestigious outlets… . Because of this hierarchy, deviant economists
publishing in new journals can be ignored as not meeting the highest standards of the
discipline” (Whitley 1986b, 192–93). Because econophysics appears precisely to be a
theoretical innovation that is not in line with the ideals, goals, and standards of finan-
cial economics, its position in the shadow of physics and outside financial economics
has allowed econophysics to develop scientific innovations that would have very little
chance of being developed inside financial economics.
Being outside financial economics, econophysicists can ignore the theoretical con-
straints imposed by the foundations of financial economics. The institutional situation
of econophysics provides a significant advantage for easing scientific innovations. A tell-
ing example is option pricing. Econophysics models of option pricing generally ignore
one of the most important strengths of the Black-Scholes-Merton model: the replicat-
ing portfolio reasoning.26 Basically, this notion refers to the possibility of replicating
(in terms of cash flows) the payoff of an option with a portfolio that is constituted by a
combination of the risk-free asset and the underlying asset. The idea of replicating port-
folio is very important in finance since it leads, by arbitrage reasoning, to obtaining only
one price for the option (Cont and Tankov 2004). The practical usefulness of this ap-
proach is well known in finance (Derman 2009; Gaussel and Legras 1999).27 Because
econophysicists developed their knowledge outside of financial economics, they pro-
posed option-pricing models partially independent of arbitrage reasoning. Moreover,
the option-pricing models based on stable Lévy processes pose serious problems for
obtaining a replicating portfolio, because in this case the market is incomplete (Ivancevic
2010; Takayasu 2006; Cont and Tankov 2004; Miyahara 2012; Zhang and Han 2013).
The use of these processes is therefore problematic for financial economics since they
are in conflict with the discipline’s probabilistic framework, as defined in the work of
Harrison, Kreps, and Pliska (chapters 1 and 2). This divergence helps to explain the mar-
ginal use of these processes in financial economics. It is no coincidence that financial
economists and financial mathematicians have developed few models based on stable
Lévy processes since the 1990s (although there exist some generalized Lévy processes
in finance, as mentioned in chapter 2). One must conclude that it is precisely because
econophysicists have developed their work outside the theoretical framework of finan-
cial economics that they can apply such processes more freely. This liberty paved the way
to potential new developments, as we will illustrate in chapter 6.
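The replicating-portfolio argument that econophysics models typically set aside can be illustrated with a standard one-period binomial sketch (a textbook construction with arbitrary numbers, not a model taken from the econophysics literature): the call payoff is matched state by state with a position in the stock and the risk-free asset, and absence of arbitrage then forces a single option price.

```python
# One-period binomial replication of a European call (illustrative numbers only).
S0, up, down = 100.0, 1.2, 0.8          # current stock price and the two possible moves
r, K = 0.05, 100.0                      # one-period risk-free rate and strike

S_up, S_down = S0 * up, S0 * down
C_up, C_down = max(S_up - K, 0.0), max(S_down - K, 0.0)   # call payoff in each state

# Choose delta shares and a bond position b so that delta * S + b * (1 + r)
# reproduces the call payoff in BOTH states.
delta = (C_up - C_down) / (S_up - S_down)
b = (C_up - delta * S_up) / (1.0 + r)

# By no-arbitrage the call must cost exactly what the replicating portfolio costs.
call_price = delta * S0 + b
print(f"delta = {delta:.3f}, bond = {b:.2f}, call price = {call_price:.2f}")

# With stable Levy (incomplete-market) dynamics no such exact state-by-state
# replication exists, which is why econophysics option-pricing models cannot
# rely on this argument and the option price is no longer unique.
```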
Equilibrium is another example. There is a fundamental difference between fi-
nancial economics and econophysics concerning financial market equilibrium.
Equilibrium is a key concept in financial economics. While financial economics pro-
vides a less restrictive condition (a no-arbitrage condition) than the traditional eco-
nomic equilibrium, econophysicists have developed a technical framework without
having given any consideration to this aspect of no-arbitrage or to the theoretical as-
sumptions that could bring the market to its equilibrium. Actually, these notions do
not play a key role in econophysics; they instead appear for econophysicists as a priori
beliefs28 that provide a “standardised approach and a standardised language in which
to explain each conclusion” (Farmer and Geanakoplos 2009, 17). Specifically, econo-
physicists do not reject the concept of equilibrium, but they do not assume a conver-
gence toward such a state. In addition, equilibrium is considered merely a potential state
of the system because “there is no empirical evidence for equilibrium” seen as a final state
of the system (McCauley 2004, 6). Similarly, while they do not reject the condition of
no arbitrage, they are indifferent to this restriction. Such a position is possible because
econophysicists do not have to deal with the scientific framework that rules financial
economics.
A last illustration refers to the way econophysicists handle empirical data, lead-
ing them to introduce models with power laws in order to deal with extreme values
in finance. The vast majority of contributions from econophysicists take a phenom-
enological approach based on a determination of the probability distribution from
stock prices, which are directly observed (chapter 3). In other words, their models are
derived from empirical data. This approach, econophysicists believe, guarantees the
scientificity of their work by ensuring that the model obtained is as close as possible
to empirical data. From this perspective, econophysicists often present their discipline
as a data-driven field, echoing a naive form of empiricism in which “rough data” would
constitute the only empirical basis of science. This approach implicitly promotes
a “passivist theory of knowledge” (Lakatos 1978, 20) according to which scientists
merely have to describe phenomena as they appear to them. However, working with
rough data does not prevent one’s disciplinary background from influencing how phe-
nomena are perceived. In the case of econophysics, given that power laws are a key
framework in statistical physics (chapter 3), it is not surprising that econophysicists
systematically identify power laws in their visual analysis of data. Indeed, while econo-
physicists rarely say so, they are expressly looking for a power law, on the basis that
the phenomenon studied is complex and therefore ruled by such a law. This postulate
emerges clearly when one considers the visual tests econophysicists use to validate
observations with a power law (chapter 3). The power law is postulated first (chapter
5 will analyze this test in greater depth). In other words, it appears that, despite their
phenomenological claims, econophysicists turn the empirical world into their theo-
retical world in accordance with a “conservative activism” (Lakatos 1978, 20).
In this regard, the major objective of econophysics studies is not to “reveal” the
true statistical distribution that describes the rough data, but rather to determine the
value of the critical exponent of the (expected) power law by calibrating the param-
eters of the law required to fit the data. Such an approach is, therefore, phenomenolog-
ical in a very specific way: the implicit disciplinary assumption that econophysicists
have regarding the identification of statistical laws comes from the hypothesis of the
universality of power laws. To put it in other words, econophysics inductively expects
to identify a power law. Figure 4.4 summarizes this approach based on an observa-
tional postulate.
Figure 4.4 Econophysicists' approach: empirical data → calibration → parameters.
The first step of this approach is the disciplinary expectation (observational pos-
tulate) of a power law that would govern the empirical data. The second step is to
calibrate the parameters of this power law in order to fit the empirical data. Finally,
the third step is to identify the critical exponent of the power law that best allows
econophysicists to specify the category of models they can use to describe the data.
This identification is based on visual tests, as explained in chapter 3.
This approach draws little attention in financial economics simply because the re-
sults do not respect two of the major scientific criteria in financial economics: on the
one hand, the postulate that stock prices are not predictable or at least not predictable
enough to make a profit; and, on the other hand, the fact that empirical works must be
based on hypotheses that can be assessed by statistical tests. Because of these postu-
lates, financial economists proceed in a very different way than econophysicists. Their
first step is to postulate that financial markets are competitive, agents are rational, and
prices result from market information. These postulates lead economists to formulate
testable hypotheses. The best-known is the efficient-market hypothesis, according to
which, when a market is competitive, prices reflect all available (and relevant) infor-
mation and there is no arbitrage opportunity. The third step is to test this hypothesis
with statistical tests. Thus, financial economists will follow a hypothetico-deductive
method: they proceed by formulating a hypothesis from experiences. This hypothesis
can then be found to be valid or invalid by a statistical test on observable data.29 In
other words, in the hypothetico-deductive method, models are derived/deduced
from theory (figure 4.5).
Figure 4.5 The hypothetico-deductive approach of financial economics: postulates (perfect rationality of agents; competitive markets) → hypotheses (efficient-market hypothesis; equilibrium model) → statistical tests.
Figure 4.6 The phenomenological approach in financial economics: empirical data → disciplinary expectation (Gaussian law) → calibration → hypothesis → statistical tests.
The five steps portrayed in figure 4.6 describe the whole phenomenological approach
observed in the implementation of certain models in financial economics. When fi-
nancial economists try to describe the occurrence of extreme values in empirical data,
they combine an empirical analysis of these data based on the existence of a main
Gaussian trend with more phenomenological techniques relying on ARCH-type
models to capture the high volatility of this main trend. This technique is usually as-
sociated with a conditional distribution whose calibration allows financial economists
to identify clearly the models (GARCH, EGARCH, etc.) they will implement. The
identification of this conditional distribution remains compatible with the Gaussian
framework, making it possible to save the classical foundations of financial economics.
This conceptual machinery is finally validated through statistical tests.
A comparison between the two phenomenological approaches underlines a major
difference between the two fields. Because of the scientific constraints ruling each
discipline, when financial economists adopt a phenomenological approach, their
modeling differs from that in econophysics. Being outside of financial economics,
econophysicists were not theoretically constrained by the conceptual framework de-
veloped by financial economists. In this open context, they implemented a phenom-
enological approach based on their own disciplinary matrix by focusing on the use of
power laws to characterize the dynamics of financial markets. Because the use of such
tools was abandoned in financial economics in the 1970s (chapter 2), their reintro-
duction in finance is doubtless the major contribution of econophysics. Consequently,
econophysicists can freely investigate financial markets to innovate and develop a new
way of describing financial return/price fluctuations. The introduction of power laws
is the direct result of the absence of the constraints that rule financial economics.
In our view, the position of econophysics in the disciplinary space explains why
econophysicists positioned themselves in a theoretical niche that mathematicians and
economists have barely investigated, or not investigated at all, because of their theoret-
ical constraints. The three previous examples provide a telling illustration of this spe-
cific situation for introducing scientific innovations. As Kuhn (1989) put it, a scientific
framework provides a cognitive foundation for a specific community. That shared way
of thinking defines theoretical constraints without which organized research would
not be possible. An absence of theory would lead to a chaotic situation that would
undermine the confidence of the scientific community, and possibly lead to its disin-
tegration. In other words, the conceptual framework shared by a specific community
allows the development of structured research. However, that framework also defines
the conceptual horizons for that community. Because econophysics emerged in the
shadow of physics, it did not have to deal with the theoretical constraints that deter-
mine the structure of financial economics. That conceptual freedom favored scientific
innovation by paving the way for the development of a new perspective shaping fi-
nancial knowledge from outside the mainstream (Chen, Anderson, and Barker 2008).
This strategy of positioning itself in theoretical niches is not specific to econophysi-
cists. It was exactly the one used by financial economists to introduce innovations in
finance when they created their field in the 1960s and the 1970s, during the period of
high theory of finance.
specializing in finance at the time, the Journal of Business and the Journal of Finance.
The aim was to modify the content of published articles by imposing a more strongly
mathematical content and by using a particular structure: presenting the mathematical
model and then empirical tests. To reinforce the new orientation, these two journals
also published several special issues. Once control over these journals had been estab-
lished, the newcomers developed their own journals, such as the Journal of Financial
and Quantitative Analysis, created in 1965. Similarly, econophysicists chose to publish
and gain acceptance in journals devoted to an existing theoretical field in physics (sta-
tistical physics) rather than create new journals outside an existing scientific space and
hence structure. As explained in section 4.1, they took control of editorial boards (as
in the case of Physica A and the European Physical Journal B). Once control over these
journals had been established, econophysicists developed their own journals, such as
Quantitative Finance and the Journal of Economic Dynamics and Control. This “coloni-
zation strategy” allowed the adepts of the new approach to bypass the partisans of the
dominant paradigm (and hence of the so-called mainstream journals), who rejected
these new theoretical developments in which they were not yet proficient. Gradual
recognition of the new discipline subsequently allowed new specialist journals to be
created, making it possible to reach a wider readership (especially in economics).
In aiming to colonize finance, the partisans of the two disciplines used the same
discourse to justify the scientificity of the new approach. In each case, outsiders chal-
lenged the traditional approach by asking its adepts to prove that it was scientific. This
“confrontational” attitude is founded upon the challengers’ contention that the empir-
ical studies, the new mathematics and methodology they use guarantee a scientificity
(i.e., a way of doing science) absent from the traditional approach.31 The challengers
maintain that the scientificity of a theory or a model should determine whether it is
adopted or rejected. During the 1960s and 1970s, financial economists underlined
the importance of the empirical dimension of their research from their very first pub-
lications (Lorie 1965, 3). They saw the testability of their models and theories as a
guarantee of scientificity ( Jovanovic 2008). Consider Fama’s three articles (Fama
1965a, 1965b, 1970). All used the same structure: the first part dealt with theoretical
implications of the random-walk model and its links with the efficient-market hypo-
thesis, while the second part presented empirical results that validate the model. This
sequence—theory, then empirical results—is today familiar in financial economics.
It constitutes the hypothetico-deductive method, the scientific method that has been
defended in economics since the middle of the twentieth century. Indeed, in the
1960s, financial economists criticized the chartists for their inability to present their
works with “scientific” arguments, accusing them of using a pure rhetorical justifica-
tion rather than a strong theoretical demonstration of their findings.32 Financial econ-
omists then developed a confrontational approach in their opposition to the chartists.
As an example, James Lorie (1965, 17) taxed the chartists with not taking into account
the tools used in a scientific discipline such as economics. In this debate, financial
economists argued that their approach was based on scientific criteria, while chartism
was based on folklore and had no scientific foundation.33 Consequently, financial eco-
nomics should supplant previous “folkloric” practices, judged to be groundless.
Econophysicists have proceeded in like fashion. In their work, they belittle the meth-
odological framework of financial economics, using a familiar vocabulary. They describe
the theoretical developments of financial economics as “inconsistent … and appalling”
(Stanley et al. 1999, 288). Despite being an economist,34 Keen (2003, 108–9) discred-
its financial economics by highlighting the “superficially appealing” character of its key
concepts and by comparing it to a “tapestry of beliefs.” Marsili and Zhang (1998, 51)
describe financial economics as “anti-empirical,” while McCauley (2006, 17) does not
shrink from comparing the scientific value of the models of financial economics to that
of cartoons. The vocabulary used is designed to leave the reader in no doubt: “scientific,”
“folklore,” “deplorable,” “superficial,” “sceptical,” “superstition,” “mystic,” “challenge.” All
these wrangling words serve to dramatize a situation in which actors simply hold diver-
gent positions.
Another feature of this colonization strategy is the use of new mathematical tools
combined with the creation of new statistical data, both being considered as a guar-
antee of scientificity. Chapters 1 and 3 showed that the development of modern prob-
ability theory, on the one hand, and the evolution of financial markets, which are
increasingly quantitative (or digitized), on the other, contributed to the emergence
of financial economics and of econophysics. In each case these two factors triggered
the emergence of an alternative approach. In the 1960s, some financial economists
took up random processes at a time when mathematical developments had become
newly accessible to nonmathematicians (chapter 1). The use or nonuse of these new
tools—modern probability theory and work on statistical data—constituted the main
element setting the “new approach” against the “traditional approach” of the time.35
This mathematical evolution went hand in hand with technological developments as
the use of computers became widespread. Computers made it possible to perform
tests on empirical data to assess the methods proposed for earning money on financial
markets, particularly chartist analysis.36 The creation of empirical databases stimulated
the application of mathematical models taken from modern probability theory and
research into stock market variations.
The development of probability theory and finer quantification of financial mar-
kets (thanks to developments in computing) were also triggering factors in the emer-
gence of econophysics. Indeed, since the 1990s, electronic markets have ruled the
financial sphere, while the use of information technologies has grown consistently
in companies. Computerization allowed the use of “high-frequency data” offering a
more accurate study of the evolution of real-time data (chapter 3). Accumulated data
are then stored in the form of time series. While this type of data has been studied
by economists for several decades, the automation of markets has enabled the recording of
"intraday" data, providing "three orders of magnitude more data" (Stanley et al.
2000, 339). The quantity of data is an important factor at a statistical level because
the larger the sample, the more reliable the identification of probabilistic patterns. In
other words, the development of computers, both in the 1960s and in the 1990s, cre-
ated two favorable decades for the development of probability theory, and chapter 3
introduced the evolution of this theory in the 1990s (with the development of trun-
cation techniques). Quantified information and statistical data possess this property
of readily moving disciplinary boundaries and show the role of quantification in the
construction of an objective knowledge for an emerging field. Actors involved in the
development of financial economics and those who developed econophysics made
use of technological evolution to remain up-to-date with computational possibilities.
This parallel between these two disciplines is particularly interesting because this
institutional strategy occurred during the years of high theory of financial economics
(chapter 1). In both cases, discriminating against existing practices, challengers at-
tempted to introduce scientific innovations that could not be developed in the disci-
pline because they were incompatible with the existing theoretical framework.
4.4. CONCLUSION
This chapter showed that econophysics is nowadays a fully-fledged scientific field with
its own journals, conferences, and education, which standardize and promote the key
concepts associated with this new approach. Econophysics emerged in the shadow
of physics and, until recently, has stayed outside financial economics. Two general
consequences of this disciplinary position were studied. First, this background gener-
ates a difficult dialogue between econophysicists and financial economists. Second,
being outside financial economics, econophysicists have been able to develop scien-
tific innovations that have not been explored by financial economists. We consider
that this issue of innovation is crucial for a future dialogue between the two areas of
knowledge. This chapter pointed out that financial economists followed the same
path, introducing new ideas and models, when they were outside the mainstream of
finance in the 1960s—these innovations led to the creation of modern finance theory.
This similarity in the evolution of the two fields suggests that the outsider position
of econophysics might lead to the introduction of its innovations into financial ec-
onomics. This chapter emphasized that some scientific innovations are directly re-
lated to the phenomenological approach (implemented in both econophysics and
financial economics). However, the comparison between the techniques of modeling
implemented in the two fields showed that the current phenomenological approach
(bottom-up) provided by econophysicists is quite similar to the one implemented
by financial economists in the 1960s. We suggested that econophysics could evolve
toward a more deductive (top-down) method like the one financial economists use.
We can therefore expect that stable Lévy processes and other contributions pushed by
econophysicists will be integrated into a framework common to financial economics
and econophysics. This is the path we will explore in the two last chapters.
5
MAJOR CONTRIBUTIONS OF ECONOPHYSICS TO FINANCIAL ECONOMICS
a distribution. In the same vein, Lillo and Mantegna (2004) identified a power-law-
like relationship between financial prices and market capitalization, whereas Kaizoji
and Kaizoji (2004) used the same relation to describe the occurrence of calm-time
intervals in price changes. Mike and Farmer (2008) described the liquidity and the
volatility derived from empirical regularities in trading order flow by using a power
law. Financial autocorrelations have also been analyzed through the lens of this distri-
bution. Studies have shown that autocorrelations of order volume (Farmer and Lillo
2004), of signs of trading orders (Bouchaud, Mezard, and Potters 2002; Potters and
Bouchaud 2003; Ivanov et al. 2004; Farmer et al. 2005), and of liquidity at the best
bid and ask prices (Farmer and Lillo 2004) can all also be modeled with this pattern.
One could extend the list to many other publications. Basically, the point that all these
empirical works have in common is the conceptual role played by power laws, which
appears to be universally associated with different phenomena independently of their
microscopic details. The empirical studies have highlighted statistical properties re-
lated to financial data that financial economists traditionally do not take into account
in their work. As shown in chapter 4, economics and physics use different methods
of producing scientific knowledge and results. We must bear in mind that for statis-
tical physicists, working on empirical data is the first step in scientific methodology,
the second step being the construction of a theoretical framework.2 This phenome-
nological approach used by econophysicists is, per se, a first significant contribution
of econophysics to finance because it has identified statistical patterns in raw financial
data. This approach should not be considered separately from the fact that the mathe-
matics and statistics on which econophysics models are based are relatively recent and
are still in development.3 In this context, “Rather than investigating the underlying
forces responsible for the universal scaling laws of financial markets, a relatively large
part of the econophysics literature mainly adapts physics tools of analysis to more
practical issues in finance. This line of research is the academic counterpart to the work
of ‘quants’ in the financial industry who mostly have a physics background and are oc-
cupied in large numbers for developing quantitative tools for forecasting, trading and
risk management” (Lux 2009, 14). Moreover, one has to consider that the impact of
econophysics on financial practices is potentially substantial because it involves the
statistical description of financial distributions and thus the statistical characterization
of financial uncertainty. However, this potential impact must be considered by taking
into account several drawbacks that we will discuss in this section.
To date, one can consider five major practical contributions of econophysics to
finance: practical implementations; identification of the common critical exponent;
volatility considered as a whole; uses in a long-╉term perspective; and prediction of
financial crashes and their possible management.
portfolio management, and (3) software, and with one potential issue related to the
future of risk management.
The first issue deals with option pricing and hedging. It is well known that the
statistical description of financial distributions is the main information on which port-
folio managers can base their decisions. Two different statistical descriptions will nec-
essarily generate two different decision schemas. A non-Gaussian framing of financial
distributions will produce a pricing different from the one typically proposed by classical
Gaussian descriptions (McCauley 2004; Dionisio, Menezes, and Mendes 2006), and
consequently a different hedging strategy (Mantegna and Stanley 2000). While early
works in econophysics showed that an optimal hedging strategy appears impossible in
a non-Gaussian framework (Aurell et al. 1997; Bouchaud and Sornette 1994), more
recent articles have defined mathematical conditions for optimal hedging, in line with
what financial economists mean by “optimal hedging” (McCauley, Gunaratne, and
Bassler 2007; Bucsa et al. 2014). We will illustrate this point in the next chapter.
Although a hedging solution based on econophysics is in its infancy, the pricing
issue has already generated high interest among practitioners, and econophysics-based
portfolio management leaves room for a theoretical integration of a potential financial
crisis. The use of distributions in which extreme values can occur leads portfolio man-
agers to consider more appropriate track records in the case of large variations. This
probably explains why some banks and investment firms (BNP Paribas, Morningstar
Investment, Ibbotson Associates, etc.) have successfully used models from econophys-
ics (Casey 2013).4 Moreover, some econophysicists have created their own investment
companies that propose econophysics-based management.5 Although these firms do
not reveal which models they use, physics-based management is clearly part of their
strategy and advertising. For example, Capital Fund Management’s website states,
Research in the statistical properties of financial instruments and the development of sys-
tematic trading strategies are carried out at CFM by a team of Ph.D.’s, most of them former
physicists from prestigious international institutions.6
The last practical implementation concerns risk management. The most recent fi-
nancial crisis questioned the ability of risk managers to predict and limit financial catas-
trophes. In this challenging context, calls emerged for a new conceptual framework for
risk management. The objective of risk managers, portfolio managers, and regulators
is to limit the potential damage resulting from crises. Their ability to limit damage de-
pends directly on their understanding of financial uncertainty. By providing a different
statistical characterization of empirical distributions, econophysicists offer a new per-
spective on modeling financial uncertainty. From this perspective, econophysics could
raise questions about the models used in financial regulation, which are mainly based
on the concept of value-at-risk (VaR—introduced by JPMorgan in 1992). VaR, the
most widely used measure of market risk, “refers to the maximum potential loss over a
given period at a certain confidence level” (Bormetti et al. 2007). This measure is very
well known to practitioners since it can also be used to estimate the risk of individual
assets or portfolios. In practice, VaR models are easy to use since they refer to a high
quantile of the loss distribution of a portfolio over a certain time horizon. Therefore,
calculating VaR implies knowledge of the tail behavior of distribution returns. To pro-
mote the diffusion of its variance model, JPMorgan developed a computer program,
RiskMetrics, that is widely used in the financial industry. Some econophysicists (Wang,
Wang, and Si 2012; Bormetti et al. 2007) proposed an empirical comparison between
results produced by RiskMetrics and a VaR model based on non-Gaussian distribu-
tions (such as Lévy processes for instance), concluding that the latter provide better
forecasting of extreme variations in financial prices.11 Although in their infancy, these
reflections on non-╉Gaussian descriptions of financial distributions could have a direct
impact on the use of VaR (and consequently on the statistical models used) in financial
regulations and the prudent supervision of financial institutions.
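As a purely illustrative sketch of the comparison discussed above (synthetic data and arbitrary parameters; this is not the RiskMetrics methodology itself), the following code contrasts a Gaussian 99 percent VaR with a historical VaR computed directly from a fat-tailed sample of returns.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative daily returns with fat tails: Student-t with 3 degrees of
# freedom, rescaled to a standard deviation of about 1 percent per day.
nu = 3.0
returns = 0.01 * rng.standard_t(nu, size=100_000) / np.sqrt(nu / (nu - 2.0))

level = 0.99                                   # 99% one-day VaR
z99 = 2.326                                    # 99% quantile of the standard normal

# Gaussian VaR: fit mean and standard deviation, assume normally distributed losses.
gaussian_var = -(returns.mean() - z99 * returns.std())

# Historical (non-Gaussian) VaR: empirical quantile of the loss distribution.
historical_var = -np.quantile(returns, 1.0 - level)

print(f"Gaussian 99% VaR:   {gaussian_var:.4f}")
print(f"Historical 99% VaR: {historical_var:.4f}")
# The Gaussian figure is systematically smaller: assuming normal tails
# understates the loss threshold that is exceeded on 1% of days.
```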
These practices and practical reflections have been implemented at the same time
that academicians have provided practical results in four major domains, which we
will set out in the four next sections.
law are assumed to have the same statistical features. According to Stanley and Plerou
(2001, 563), the identification of a universality class for financial data was “vital” in
experimental research into critical phenomena in finance, because in this phenome-
nological perspective, identifying a critical exponent will provide a model that can be
tested (see chapter 4).
Nowadays, a consensus has emerged in econophysics literature on the value of the
critical exponent, since stock price variations follow a power law with an exponent close
to 3 (Stanley, Gabaix, and Plerou 2008; Plerou and Stanley 2008; Gopikrishnan et al.
1999; Malevergne, Sornette, and Pisarenko 2005; Plerou et al. 1999). This result also
concerns options, commodities, currencies, and interest rates (Bouchaud and Challet
2014). Figure 5.1 illustrates this result for US stocks, for the “at the money” volatility of
the corresponding options, and for credit default swaps on these same stocks.
Figure 5.1 Distributions for US stocks, the at-the-money volatility of the corresponding options (Vol), and credit default swaps (CDS) on the same stocks, plotted on doubly logarithmic axes against the inverse cubic law x^(-3).
financial prices in two ways. A first category of studies focuses on modeling volatility
distribution (only the tail part of a distribution is considered in this case) with a spe-
cific (conditional) distribution. A second category of studies use power laws to char-
acterize the entire distribution of financial prices/returns.
Econophysicists who use power laws to describe the evolution of volatility (i.e., de-
scription of fat tails only) implement the same methodology as financial economists
with the ARCH class of models, which are nowadays the principal econometric tools used
in financial economics to model financial returns. The key idea behind these models
is that the most recent volatility influences current volatility. As chapter 2 explained,
the ARCH class of models assumes that financial returns follow the Gaussian distri-
bution (called unconditional distribution in the ARCH class of models) but charac-
terize the dynamics of the volatility with another distribution, which is not necessarily
Gaussian. The distribution describing the evolution of volatility is called “conditional
distribution.” It is worth mentioning that the conditional and unconditional dimen-
sions are defined in reference to the way we consider the history of the data. An
unconditional study will consider that all past data have the same influence on the
current evolution of the variable; data are considered in their entirety, and no specific
weight is assigned to the various past fluctuations. In contrast, a conditional treatment
will consider recent fluctuations to have a stronger influence on the current evolution
of the variable, and so more weight will be given to recent fluctuations. These time-
dependent dynamics can be modeled through various potential statistical processes
(Kim et al. 2008), and this variety has generated a huge literature.12 To characterize
the occurrence of extreme values, financial economists use a conditional distribution
with the Gaussian distribution, in which large variations are associated with another
distribution that can take the form of a power law. In this situation, power laws are
considered a corrective tool for modeling important fluctuations that the Gaussian
framework does not capture: the Gaussian (unconditional) distribution can therefore
be combined with a power law (characterizing the conditional distribution) in order
to obtain a better fit with empirical results. The major advantage of a conditional ap-
proach is to capture the time-dependent dynamics observed in the variability of finan-
cial prices. However, this advantage is weakened by the assumed Gaussian dimension
of unconditional distribution, which underestimates risk even when associated with
a power law for describing the (conditional) evolution of volatility, as explained by
Farmer and Geanakoplos: when the model is matched with real data, the
tail exponent is much too large, that is, "The tails of an ARCH process are too thin to
explain the fat tails of prices” (Farmer and Geanakoplos 2009, 17).
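The conditional logic described above can be illustrated with a minimal GARCH(1,1) simulation (our own sketch, with illustrative parameter values): the innovations are conditionally Gaussian, yet the extra weight given to recent squared returns produces volatility clustering and an unconditional distribution with fatter tails than an i.i.d. Gaussian, though still thinner than a power law.

```python
import numpy as np

rng = np.random.default_rng(1)

# GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1},
# with Gaussian (conditional) innovations. Parameter values are illustrative.
omega, alpha, beta = 1e-6, 0.08, 0.90
T = 100_000
returns = np.empty(T)
sigma2 = omega / (1.0 - alpha - beta)          # start at the unconditional variance

for t in range(T):
    returns[t] = np.sqrt(sigma2) * rng.normal()
    # Recent fluctuations receive extra weight in tomorrow's variance:
    sigma2 = omega + alpha * returns[t] ** 2 + beta * sigma2

std = returns.std()
excess_kurtosis = ((returns / std) ** 4).mean() - 3.0
print(f"excess kurtosis of simulated returns: {excess_kurtosis:.2f}")   # > 0: fat tails
print(f"P(|r| > 4 std): {np.mean(np.abs(returns) > 4 * std):.5f} "
      f"vs roughly 0.00006 for an i.i.d. Gaussian")
# Conditionally Gaussian returns are unconditionally fat-tailed, but their
# tails still decay too fast to match the power-law tails documented above
# (Farmer and Geanakoplos 2009).
```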
Unlike this first category of works, the great majority of studies by econophysicists
do not break down statistical analysis into conditional and unconditional dimensions;
instead, they use power laws to characterize the entire distribution. However, keeping
the previous vocabulary, we can say that they mainly work on unconditional distribu-
tions by considering the empirical time series as a whole. Econophysicists often un-
derline the advantage of dealing with historical distributions by claiming that financial
distributions must be described as they appeared in the past and not as they should
be according to a preexisting theoretical framework (McCauley 2004).13 Beyond
The practical value of the power law for risk control is that it results in more efficient risk es-
timates than extrapolation methods […] [W]hen used to extrapolate risk levels that are not
contained in the sample [i.e., the use of unconditional distribution], they will consistently
underestimate risk. The power law, in contrast, is more parsimonious, and so is more efficient
with limited data. This can result in less biased estimates.
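One way to read this quotation is sketched below (hypothetical numbers; the tail beyond the chosen threshold is assumed to be exactly Pareto): once a tail exponent and a threshold have been estimated, the power law extrapolates the probability of losses larger than anything observed in the sample.

```python
# Extrapolating beyond the sample with a fitted power-law tail (illustrative).
# Assume the upper tail of daily losses beyond xmin is well described by
# P(X > x) = p_min * (x / xmin) ** (-alpha), with alpha estimated from data.
xmin, p_min = 0.03, 0.02        # assumption: 2% of days show a loss above 3%
alpha = 3.0                     # tail exponent close to the 'inverse cubic law'

for x in (0.05, 0.10, 0.20):    # loss levels possibly never seen in the sample
    prob = p_min * (x / xmin) ** (-alpha)
    print(f"P(loss > {x:.0%}) ~ {prob:.2e}  (about one day in {1 / prob:,.0f})")
# A Gaussian fitted to the bulk of the same data would assign an essentially
# zero probability to the 20% loss, which is the kind of underestimation the
# quotation refers to.
```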
Some authors (Bollerslev 1986; Engle 1982; Rachev et al. 2011) have claimed
that current financial prices are more sensitive to recent fluctuations of the market. In
other words, statistical description of financial returns should take into account this
short-term influence through a conditional specification of autocorrelations in
the second moment of the distribution that attributes a greater weight to more recent
returns (Bollerslev 1986). Current volatility is assumed to be more closely correlated
to its more recent variations than to its older ones. From this perspective, ARCH
models are particularly well suited to describing the evolution of short-run dynamics
of volatility since, because of their definition, these models assign more weight to the
influence of recent fluctuations in describing current volatility. This short-term per-
spective on financial stocks makes ARCH models useful for traders, whose investment
window is often shorter than three months. In this context, traders often characterize
the dynamics of variation through ARCH models by adjusting a three-month vola-
tility to the daily volatility of assets. Moreover, the fact that traders rarely work on a
sample older than four years does not favor the use of power laws in trading rooms.14
However, in some specific situations, financial management has to focus on long-
term issues. Stress tests, for example, are a well-known form of analysis used to determine the extent
to which a financial instrument (an institution or a portfolio) can deal with an ec-
onomic crisis or the occurrence of extreme situations that could generate a critical
loss. By generating a computerized simulation based on historical distributions, this
methodology gauges how robust a financial asset (institution, portfolio) is in a crisis
situation. In this situation, a long-term approach is usually implemented since a longer
sample will provide a better estimation of large fluctuations without diminishing the
magnitude of the maximum variation (Matei, Rovira, and Agell 2012). Large disrup-
tive events are not frequent, but they can be captured and approximated through a
long-term historical (unconditional) analysis (Buchanan 2004). Because extreme
variations involve the vulnerability of financial assets to a crisis situation, the statis-
tical characterization of fat tails is very important. In that perspective, power laws are
the most appropriate statistical framework for extreme value management, whose key
theory (extreme-value theory) can be seen as “a theoretical background for power law
behavior” (Alfarano and Lux 2010, 2).
The pertinence of power laws for estimating long-term risk was highlighted by the
dramatic case of Long-Term Capital Management (LTCM).15 Buchanan (2004, 5)
argued that the collapse of this hedge fund was partly due to inadequate estimation of
long-term risk, whose historical (unconditional) distribution took the form of a power law.
In Buchanan’s view,
Fund managers at LTCM were sophisticated enough to be aware that their bell-curve esti-
mates were probably low, yet they lacked methods for assessing the likelihood of more ex-
treme risks [in the long term]. In September of 1998, “unexpected” volatility in the markets,
set off by a default in Russia’s sovereign debt, led LTCM to lose more than 90 percent of its
value. … A power law is much better than the bell curve at establishing this long term risk.
It is also better at helping managers avoid the painful consequences of “unexpected” fluctua-
tions. (Buchanan 2004, 5)
Although LTCM managers knew their unconditional distributions were not nor-
mally distributed, they applied a mainstream way of managing the fund (Bernstein
2007) for which “they just add a large fudge factor at the end [as proposed by ARCH
class of models]. It would be more appropriate for the empirical reality of market
fluctuations—as captured by the power law—to be incorporated in the analysis of
risk” (Buchanan 2004, 6).
In the same vein, Buchanan (2004) also mentioned the importance of unconditional
power laws for estimating financial risk for pension funds and the insurance industry.
The former have to provide stable growth over the long term in order to ensure periodic
payments related to pensions. That financial objective makes pension funds particularly
sensitive to liquidity risk, leading them to deal with a long-term horizon to predict the
worst loss possible and avoid “liquidity black holes” (Franzen 2010). The insurance in-
dustry is faced with the challenge of integrating extreme variations into their portfolio
management, especially insurance companies involved in contracts related to natural
disasters. “The bell curve does the job with automobile insurance but it fails miserably
when assessing large catastrophic losses due to hurricanes and earthquakes” (Buchanan
2004, 6).
Moreover, there is empirical evidence supporting the existence of a long-
memory process in the volatility of prices, meaning that there is persistent temporal
dependence between observations widely separated in time. That long-memory
property is well known and well documented in specialized literature (Ding, Engle,
and Granger 1993; Andersen and Bollerslev 1997; Breidt, Crato, and de Lima
1998). The origin of the long-memory process lies in the work of Hurst (1951), a
hydrologist commissioned by the British government to study changes in the level
of the Nile. More specifically, he was in charge of designing dams to manage water
reserves guarding against risk of drought. This task required a deep understanding
of Nile floods, which Hurst described with a temporal series characterizing the
changes in water levels. In his research, he observed that the optimal storage ca-
pacity of water reservoirs, divided by the standard deviation of the flows (the rescaled
range), grows with the length of the observation period according to a power law with an exponent of between 0.5 and 1.0.
Moreover, Hurst noticed that the evolution of water levels was not independent in
time: a major recent flood would influence a great number of future floods, which
implies a long-memory process.
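A compact sketch of the rescaled-range reasoning behind the Hurst exponent is given below (a simplified version for illustration, not Hurst's original computation; window sizes and the random seed are arbitrary): for a series with long memory, the mean R/S statistic grows with the window length n roughly as n^H with H above 0.5, whereas an i.i.d. series stays near 0.5.

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic of one window: range of the cumulative deviations
    from the mean, divided by the standard deviation of the window."""
    y = np.cumsum(x - x.mean())
    return (y.max() - y.min()) / x.std()

def hurst_exponent(series, window_sizes=(64, 128, 256, 512, 1024, 2048)):
    """Estimate H from the slope of log(mean R/S) against log(n)."""
    log_n, log_rs = [], []
    for n in window_sizes:
        windows = series[: len(series) // n * n].reshape(-1, n)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean([rescaled_range(w) for w in windows])))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(3)
iid_noise = rng.normal(size=2**15)
print(f"H (i.i.d. Gaussian noise) ~ {hurst_exponent(iid_noise):.2f}")
# Roughly 0.5-0.6 here: no long memory. Applied to absolute or squared
# returns, the same estimator typically gives values well above this,
# the long-memory property of volatility discussed above.
```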
Several long-memory processes have been observed in finance: volatility for stocks
(Ding, Engle, and Granger 1993), exchange rates (Lu and Guegan 2011), and trading
volume (Lillo and Mantegna 2004).16 Financial economists acknowledge the exist-
ence of long-memory processes, which they have usually captured through a myriad
of ARCH-type models (IGARCH, EGARCH, GARCH, NGARCH, etc.) gradually
moving away from the use of power laws (Lux 2006). However, although ARCH
models provide a good approximation for short-time dependence, they fail to capture
this long-memory property (Ding, Engle, and Granger 1993) and, contrary to models
that explicitly take long memory into account, an ARCH model that fits on one time-
scale does not work well on a different timescale (Farmer and Geanakoplos 2009,
19). The necessity of capturing long-memory properties of time series, such as
long-term volatility clustering, together with forms of management mainly based on the long term, should
encourage the use of power laws in finance.
In order to pinpoint the differences between the ways in which financial econo-
mists and econophysicists deal with fat-tailed distributions, table 5.1 summarizes the
main methodological differences between them.
detected on the financial market are looked on as a passage from one phase to an-
other. This evolution can be statistically characterized by a power law. In so doing,
econophysicists use one property of the scaling hypothesis according to which there
“is a sort of data collapse, where under appropriate axis normalization, diverse data
‘collapse’ onto a single curve called a scaling function” (Preis and Stanley 2010, 4).
Sornette and Woodard (2010, 119) explained that
according to this “critical” point of view, the specific manner by which prices collapsed is not
the most important problem: a crash occurs because the market has entered an unstable phase
and any small disturbance or process may reveal the existence of the instability… . The col-
lapse is fundamentally due to the unstable position; the instantaneous cause of the collapse
is secondary. In the same vein, the growth of the sensitivity and the growing instability of the
market close to such a critical point might explain why attempts to unravel the proximal origin
of the crash have been so diverse. Essentially, anything would work once the system is ripe.
\ln[p(t)] = A + B\,(t_c - t)^{\beta}\left\{1 + C\cos\left[\omega \ln(t_c - t) + \phi\right]\right\}, \qquad (5.1)
where p(t) is the price index at time t, t_c is the most probable time of the crash (i.e., the crit-
ical time), β is the exponent of the power-law growth and quantifies the power-law
acceleration of prices, ω is the frequency of the fluctuations during the bubble, and the
remaining parameters (A, B, C, and φ) carry no structural interpretation.18
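A short numerical sketch of equation (5.1) is given below, with arbitrary illustrative parameter values rather than a calibration to any real index; it simply shows the faster-than-exponential growth and the accelerating oscillations of the log price as t approaches the critical time t_c.

```python
import numpy as np

def lppl_log_price(t, tc, A, B, beta, C, omega, phi):
    """Log-periodic power law of equation (5.1):
    ln p(t) = A + B*(tc - t)**beta * (1 + C*cos(omega*ln(tc - t) + phi))."""
    dt = tc - t
    return A + B * dt**beta * (1.0 + C * np.cos(omega * np.log(dt) + phi))

# Arbitrary illustrative parameters (no real market is being fitted here).
tc, A, B, beta, C, omega, phi = 1000.0, 7.0, -0.5, 0.35, 0.1, 6.5, 0.0

t = np.arange(0, 995)                      # trading days before the critical time
log_p = lppl_log_price(t, tc, A, B, beta, C, omega, phi)

# The bubble signature: accelerating growth with log-periodic oscillations
# whose frequency (in calendar time) increases as t approaches tc.
for day in (0, 500, 900, 990):
    print(f"t = {day:4d}   ln p(t) = {log_p[day]:.3f}")
```

In practice the parameters are calibrated to observed prices (typically by nonlinear least squares), and the fitted t_c is read as the most probable date of the regime change.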
The LPPL model echoes two concepts well known in financial economics: noise trad-
ers (Black 1986; Grossman 1976; Dow and Gorton 2008; Shleifer and Summers 1990;
Kyle 1985) and mimetic/herding behaviors (Orléan 1989; Keynes 1936, chap. 12; Orléan
1995; Banerjee 1993; Welch 1992).19 Indeed, according to this LPPL model, two catego-
ries of agents trade on financial markets: rational traders (who share the same characteris-
tics/preferences) and noise traders (who mimic their network of friends). In this context,
the decisions of the latter depend on the behavior of traders within their network. Financial
crashes occur when mimetic behaviors among noise traders become self-reinforcing, leading a large proportion of noise traders to share the same position. In this situation
the market becomes extremely sensitive to small global disturbances and a crash can occur.
As mentioned by Johansen et al. (1999, 7), “If the tendency for traders to ‘imitate’ their
‘friends’ increases up to a certain point called the ‘critical’ point, many traders may place the
same order (sell) at the same time, thus causing a crash.” This collective phenomenon takes
place when the market is close to its critical point. This situation can be detected through the size of the fluctuations: the closer the market is to its critical point, the greater the price fluctuations (see chapter 3 for the analogy with critical phenomena in statistical physics).
LPPL-type modeling has already provided some successful predictions, such as the fall of the Japanese Nikkei index in 1999, the 2007 crash, and some others (Jiang et al. 2010). The Financial Crisis Observatory (FCO), which was created by the
Econophysics Group at ETH Zurich, has developed an indicator of this
type with the aim of monitoring the risk of crises.
One often finds [in the literature of econophysics] a scolding of the carefully maintained
straw man image of traditional finance. In particular, ignoring decades of work in dozens
of finance journals, it is often claimed that “economists believe that the probability dis-
tribution of stock returns is Gaussian,” a claim that can easily be refuted by a random
consultation of any of the learned journals of this field. In fact, while the (erroneous)
juxtaposition of scaling (physics!) vs. Normality (economics!) might be interpreted as
an exaggeration for marketing purposes, some of the early econophysics papers even
gave the impression that what they attempted was a first quantitative analysis of financial
time series ever. If this was, then, performed on a level of rigor way below established
standards in economics (a revealing example is the analysis of supposed day-of-the-
week effects in high-frequency returns in Zhang, 1999) it clearly undermined the stand-
ing of econophysicists in the economics community. (Lux 2009, 15)
While numerous models that yield power law behavior have been suggested and, in fact, the
number of such models continues to grow rapidly, no general mechanisms or approaches
have been suggested that allow one to validate that a suggested model is appropriate… .
[W]e have beautiful frameworks, theory, and models—indeed, we have perhaps far too many
models—but we have been hesitant in moving to the next steps, which could transform this
promising beginning into a truly remarkable new area of science. (Mitzenmacher 2005, 526)
Mitzenmacher’s disclaimer was relevant for econophysics. Like other fields (geog-
raphy, biology, etc.) using power laws in their research, econophysics had not really
been able to go beyond the third step when Mitzenmacher published his article in
2005. Mitzenmacher’s argument is important because, on the one hand, it underlines
that the claims made by economists have an echo in other fields dealing with power
laws; and on the other hand, it paves the way for a potential research agenda that would
ease the collaboration between econophysics and financial economists.
This situation indicates that the approach used by econophysicists is not unknown
to financial economists. Indeed, the models of the ARCH class and of econophysics
deal with data in the same way: both make a calibration in order to estimate the param-
eters of the models (chapters 2 and 4). This similarity helps to explain why the lack of
interest in econophysics models is not fully comprehensible to econophysicists, and
by contrast why it is so comprehensible to financial economists. From a financial ec-
onomics’ viewpoint there are two nuances in these phenomenological approaches—
although these might appear very tenuous. First, the ARCH class of models uses statistical tests, while econophysics models use visual tests. Financial economists are
skeptical about such visual tests because they provide no results about the statistical
behavior of the quantities estimated. Second, the ARCH class of models is based on
the Gaussian framework and is associated with the efficient-market hypothesis by
testing the predictability of stock price/return variations, both of which are founding
hypotheses of financial economics (chapter 1). Therefore, most financial economists
using the ARCH class of models consider their models to have foundations from a
theoretical point of view (and not merely from a statistical one).
The next section will show that the situation concerning statistical tests for power
laws has changed very recently; this section will give some details concerning the the-
oretical foundations of the ARCH class of models. It seems that there is a misunder-
standing of the theoretical importance of the Gaussian framework: many econophysics
studies have associated the Gaussian random character of stock market prices/returns
with the efficient-market hypothesis, believing that it has been theoretically demon-
strated, as financial economics’ literature suggests. However, as explained in c hapters 1
and 2, this association is not theoretically demonstrated: efficient markets and the
random walk or the martingale hypothesis are two distinct ideas. A random walk or a
martingale is neither necessary nor sufficient for an efficient market; in addition, the
efficient-market hypothesis implies no particular stochastic process. Therefore, it must
be admitted that the methodology used with the ARCH class of models is similar to
that used in econophysics modeling: a purely statistical modeling approach without
theoretical interpretations.22 In addition, it must be clarified that ARCH models “do not
provide an avenue towards an explanation of the empirical regularities” (Lux 2006, 8).
In fact, “Few [economic] models are capable of generating the type of ARCH [class of
models] one sees in the data” (Pagan 1996, 92), leading Adrian Pagan to conclude his
survey on the econometrics of financial markets with the following remarks:
One cannot help but feel that these statistical approaches to the modeling of financial series
have possibly reached the limits of their usefulness… . Ultimately one must pay more at-
tention to whether … it is possible to construct economic models that might be useful in
explaining what is observed. … For many economists … it is desirable to be able to under-
stand the phenomena one is witnessing, and this is generally best done through theoretical
models. Of course this desire does not mean that the search for statistical models which fit
the data should be abandoned. . . . It is this interplay between statistics and theoretical work in
economics which needs to become dominant in financial econometrics in the next decade.
(Pagan 1996, 92)
Since 1996, very little progress has been made toward a greater interplay. As one can
see, the approaches defended in econophysics and in financial econometrics are very
close: they both describe financial data without explaining the emergence of the ob-
served patterns.23 Methodologically speaking, it appears the two fields are using the
same kind of phenomenological approach, leading us to complete our analysis initi-
ated in the previous chapter. Indeed, chapter 4 showed that econophysicists follow the same first three steps as the hypothetico-deductive approach used by financial economists. The last two steps (formulation of hypotheses and validation with statistical tests) do not really appear (yet) in the econophysics literature (figure 5.2).
(Figure 5.2: the calibration and hypothesis steps in econophysics and in financial economics.)
However, regarding the evolution of financial economics (chapters 1 and 2), econo-
physics appears to be at the same stage as financial economics was in the 1960s. At that
time, financial economists tried to identify the most appropriate statistical description
in accordance with the data available at the time and their disciplinary expectations
(i.e., the existence of a Gaussian trend—see chapter 1). In the same vein, since the
1990s, econophysicists have focused on statistical characterization of empirical data
through the data available and their disciplinary expectations (i.e., the existence of a
power law). After calibrating their statistical description, econophysicists can iden-
tify the class of models likely to explain the occurrence of extreme values (McCauley
2006). When econophysicists identify such a class of models, they can provide, on
the one hand, a potential explanation for the emergence of power laws and, on the
other hand, statistical tests to validate their use in finance. Chapter 6 will study this
agenda. In this analysis, econophysics could take the form of a hypothetico-deductive
approach: testing the class of models identified from empirical data (figure 5.3).
(Figure 5.3: a hypothesis related to the emergence of power laws, followed by statistical tests.)
What econophysicists will suggest to explain the emergence of power laws will act as
future hypotheses, which will have to be tested and validated through statistical tests.
Such an evolution could methodologically ease the development of future collabora-
tions between this field and financial economics.
Such a rapprochement between the two fields could be supported by general devel-
opments we observe in the sciences. Sciences are not static; over the past few decades
disciplinary borders have been changing, and new fields have emerged that are not mono-
disciplinary. If disciplinarity implies a monodiscipline describing a specialized scientific
field, multidisciplinarity (or pluridisciplinarity), interdisciplinarity, and transdisciplinarity
imply a variety of disciplines.24 “Researchers from different fields not only work closely to-
gether on a common problem over an extended period but also create a shared conceptual
model of the problem that integrates and transcends each of their separate disciplinary
perspectives” (Rosenfield 1992, 55). All participants then have common roles and try to
offer a holistic scheme that subordinates disciplines. The ecology of today can be looked
on as an illustration of these changes. Max-Neef (2005) explained that growth and envi-
ronment were frequently identified as opposites in conventional economics because they
were mainly based on anthropocentric reasoning. By taking into account different fields
(economics, demography, biophysics, etc.), a more biocentric ecology has recently been
developed. This ecology has proposed a new framework in order to solve the traditional
opposition between environment and growth. In this perspective, these opposite concepts
can now be seen as complementary in a unified development.
The emergence of econophysics is a clear illustration of this recent evolution.
But, as we have observed, to date no common conceptual scheme integrating and
transcending financial economics and econophysics has emerged. By staying within
their traditional boundaries, (econo)physicists and financial economists do not facil-
itate the creation of a common scientific literature that could be shared by the two
disciplines and could allow the creation of new models or knowledge. Morin (1994)
explained that “the big problem is to find the difficult path of the inter-relationship
(l’entre-articulation) between sciences that have not only their own language, but
basic concepts that cannot move from one language to another.” Although an inte-
grated approach requires that disciplines share common features, and in particular a
common conceptual scheme, the problem of language (concordances) must be also
considered. As Klein (1994) explained, “A ‘pidgin’ is an interim tongue, based [on]
partial agreement on the meaning of shared terms… . [An integrated approach] will
begin with a pidgin, with interim agreement on basic concepts and their meanings.”
The concept of a pidgin was introduced by Galison (1997), who called Kuhnian
incommensurability into question by explaining how people from different social
groups can communicate.25 From this perspective, a pidgin can be seen as a means of
communication between two (or more) groups that do not have a shared language.26
Galison (1997, 783) used the metaphor of “trading zone” (because in situations
of trade, groups speak languages other than that of their home country) to charac-
terize this process of communication between people who do not share the same
language. More specifically, “Two groups can agree on rules of exchange even if they
ascribe utterly different significance to the objects being exchanged” (Galison 1997,
783). As Chrisman (1999) pointed out, the emergence of a pidgin requires specific
conditions: regular contact between the language communities involved, the need
to communicate between these communities, and, last, the absence of a widespread
interlanguage.
All the conditions seem to be met for the emergence of a new pidgin in the case of
econophysics and financial economics. Some theoretical continuities exist between
these two disciplines because Gaussian processes are a particular case of Lévy pro-
cesses. These continuities do not imply a common language, but they do encourage
the potential emergence of a common conceptual scheme based on a new axiomatic,
which would make sense for financial economists and econophysicists as well. In ac-
cordance with the analysis of Morin and Klein, a more integrated approach must take
into account the constraints that both original disciplines have to face: for econophysi-
cists the need for statistical processes to be physically plausible (i.e., in line with the
properties of physical systems) combined with the fit between model and empirical
data; for financial economists, the existence of reliable statistical tests for distribu-
tion describing data on the one hand, and the compatibility between the model and
theoretical framework on the other. In other words, this potential language implying
a common conceptual scheme must result from a double movement: models from
physics must incorporate the theoretical framework from financial economics and, at
the same time, theories and concepts from financial economics must be modified so
that they encompass richer models from physics.
This double movement is a necessary step toward a more integrative approach in
econophysics. This adaptation implies the integration of theoretical constraints ob-
served in each discipline in such a way that the new shared conceptual framework will
make sense in each discipline. As Chrisman (1999, 5) has pointed out, the emergence
of a pidgin can be seen as a new integrated jargon between two disciplines, as op-
posed to a multidisciplinary approach that relies on what Chrisman called “boundary
objects,” which imply an agreement and an “awareness” between the groups involved
through which each can understand that the other may not see things in the same way.
The following section will present recent developments that could pave the way to a
progressive integration of econophysics models into financial economics.
The first two categories of models find the origin of power laws in the time series, while
the last two associate these macro laws with an emerging property resulting from a
statistical expression of a large number of microinteractions. It is worth mentioning
that the majority of these works (except the first category) have been developed by
theoreticians involved in econophysics. While the first category emerged from strictly
statistical development, the second category refers to a theory proposed by a financier
trained in mathematics (Xavier Gabaix) in collaboration with econophysicists (partic-
ularly Eugene Stanley). In the same vein, the third category of models combines works
proposed by economists with models provided by econophysicists. However, this cat-
egory is a direct application of the renormalization group theory developed in physics,
presented in chapter 3. The last set of models refers to a strictly physical perspective on power laws, since it is founded on what physicists call the self-organized criticality that we introduced in chapter 3.
Using the seminal model proposed by Yule (1925), Kesten (1973) showed that this
basic equation can generate a power law when the growth rate itself follows a power
law. More precisely, Kesten (1973) associated the probability, at time t + 1, of observing a population S^i larger than a specific size x with the distribution function D_{t+1}(x), which he formalized as follows:

D_{t+1}(x) = P(S^i_{t+1} > x) = P(γ^i_{t+1} S^i_t > x) = P(S^i_t > x / γ^i_{t+1}).  (5.5)

If growth rates in time are assumed to be i.i.d. random variables with a density f(γ), the previous equation can be rewritten as

D_{t+1}(x) = ∫_0^∞ D_t(x/γ) f(γ) dγ.  (5.6)

In the steady state, where D_{t+1} = D_t = D, this becomes

D(S) = ∫_0^∞ D(S/γ) f(γ) dγ.  (5.7)

A power-law distribution D(S) ∝ S^(−a) solves this equation provided that the exponent a satisfies

E[γ^a] = 1.  (5.8)
Reed and Hughes (2002) later supported this statistical analysis and showed that a power-law distribution can be obtained when things grow exponentially at random
times. The formalization presented here shows that the condition for which Dt de-
scribes a power law results from a technical constraint in the statistical description of
economic data. In other words, the origin of power laws is not in the observed phe-
nomenon but rather in its statistical characterization independently of the disciplinary
context in which this statistical investigation is made. However, a more economic ex-
planation of why the equation above generates a power law can be derived from Gibrat
(1931), who claimed that frictions preventing cities (or firms) from becoming too
small must be built into the model. Gibrat explained that, in the presence of frictions,
the pure random growth process modeled by Yule (1925) will follow not a power law
but the Gaussian law. In this perspective, the condition under which the above equa-
tion generates a power law requires the addition of a disturbing factor (a positive cost)
characterizing the existence of frictions. As Gabaix (2009) explained, this technical
point developed by Gibrat (1931) paved the way (which was not followed) for an
economic explanation for the emergence of power laws, since this idea of friction can
easily take the form of a positive probability of death or a high cost of expansion that
prevents cities (or firms) from growing.
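A minimal simulation sketch (our own illustration, not drawn from the works cited above) shows the mechanism at work: a multiplicative growth process with a small additive term playing the role of the friction discussed by Gibrat settles into a markedly fat-tailed cross-sectional distribution, with a tail exponent a pinned down by the condition E[γ^a] = 1 of equation (5.8).

```python
import numpy as np

rng = np.random.default_rng(0)

def kesten(n_steps, n_paths, b=1.0):
    """Kesten-type recursion S_{t+1} = gamma_{t+1} * S_t + b with i.i.d. gamma.
    The additive term b plays the role of the 'friction' discussed above."""
    s = np.ones(n_paths)
    for _ in range(n_steps):
        gamma = rng.lognormal(mean=-0.1, sigma=0.5, size=n_paths)
        s = gamma * s + b
    return s

# With these lognormal parameters, E[log gamma] < 0 and E[gamma**a] = 1 for a = 0.8,
# so the stationary distribution has a power-law tail P(S > x) ~ x**(-0.8).
s = kesten(n_steps=1000, n_paths=50_000)
for x in (10, 100, 1000, 10_000):
    print(x, float((s > x).mean()))   # survival probabilities decaying roughly as a power of x
```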
Gabaix (2009) also explained that the statistical constraint identified by Yule
(1925) opened the door to another unexpected potential explanation for the emer-
gence of power laws: they could result from a specific parameterization of ARCH-type
models since they imply random growth. And indeed, in line with what we mentioned
above, ARCH models can generate a power law if they are parameterized with a
random growth rate that follows a power law. Let us illustrate this idea with a classical
application of an ARCH model taking the form

σ_t^2 = α_0 + α_1 (ε_t σ_{t−1})^2 = α_0 + α_1 ε_t^2 σ_{t−1}^2,

where ε_t σ_{t−1} is the return, with ε_t independent of σ_t. Therefore, in line with the Yule-Kesten framework presented above, the variance σ_t^2 follows a random multiplicative growth process and can exhibit a power-law tail.
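A short simulation sketch (ours, with arbitrary parameter values) makes the point concrete: with Gaussian innovations, the variance recursion above is a Kesten-type random growth process, and the simulated returns display far heavier tails than the innovations that drive them.

```python
import numpy as np

rng = np.random.default_rng(1)

def arch1(n, a0=0.1, a1=0.9):
    """ARCH(1)-type recursion: r_t = eps_t * sigma_{t-1},
    sigma_t^2 = a0 + a1 * (eps_t * sigma_{t-1})^2 (a Kesten-type recursion)."""
    r = np.empty(n)
    sigma2 = a0 / (1.0 - a1)           # start near the unconditional variance
    for t in range(n):
        eps = rng.standard_normal()
        r[t] = eps * np.sqrt(sigma2)
        sigma2 = a0 + a1 * r[t] ** 2   # random multiplicative growth of the variance
    return r

r = arch1(200_000)
# For these parameters 3 * a1**2 > 1, so the theoretical fourth moment diverges:
# the sample excess kurtosis is large and keeps growing with the sample size.
print("sample excess kurtosis:", float(np.mean(r**4) / np.mean(r**2) ** 2 - 3.0))
for q in (0.99, 0.999, 0.9999):
    print(q, float(np.quantile(np.abs(r), q)))   # quantiles grow far faster than Gaussian ones
```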
This ARCH origin of power laws is conceptually troubling: ARCH models are sup-
posed to describe the evolution of financial returns through an improved Gaussian
description whose parameterization can generate a power law! While econophysi-
cists see a power law “in the observations,” financial economists create it through
statistical treatment of data. Indeed, this statistically generated power law appears to
be a simulacrum aimed at capturing the occurrence of extreme variations in line with
the methodological framework (based on the importance of the Gaussian uncondi-
tional distribution) used in finance. This first category of models generating power
laws does not really explain the origin of the macro laws, since they appear to result
from another power law characterizing the evolution of the growth rate. As strange
as this might be, we will find this kind of explanation again in the following section.
observation of a power-law distribution for trading volume] and (ii) [the observation of a
power law for the financial returns]. (Gabaix et al. 2006, 463)
The model’s starting point is observation of the distribution related to the investors’
size, which takes the form of a power law.29 The fat-tailed distribution means that we
can observe a big difference between large and small trading institutions, implying
an important heterogeneity of actors (in terms of size). This diversity results in a dis-
persal of trading power in which only bigger institutions will have a real impact on the
market.30 Starting from this conceptual framework, Gabaix et al. (2003, 2006) used
three initial empirical observations. First, the distribution of the size of investors (in
the United States) can be characterized through a power law (with an exponent of 1).
Second, the distribution of the trading value follows a power law (with an exponent
equal to 1.5). Third, the distribution of financial returns can be described with a power
law (whose exponent is equal to 3).
Given these observations, Gabaix et al. demonstrated that optimal trading be-
havior of large institutions (considering that they are the only elements influencing
market prices) can generate a power law in the distribution of trading volume and fi-
nancial returns. Gabaix et al. (2003, 2006) proposed a model describing a single risky
security with a fixed supply and with the price at time t denoted p(t). At time t, a
large fund gets a signal M about a mispricing (M < 0 is associated with a selling signal,
while M > 0 is a buying one). That signal leads the fund to negotiate with a market
supplier a potential buy on the market. Of course, there is a lag between the negotia-
tion, the transaction, and the impact on the market price. Basically, at t = 1 − 2 ε ( ε is
a small positive number), the fund buys a quantity V of shares at price p + r, where
r is the price concession (or the full price). Afterward, at t = 1 − ε , the transaction is
announced on the market. Consequently, the price jumps, at t = 1 , to p(1) = p + π (V ) ,
where π (V ) is called the permanent impact, while the difference ( ϕ = r − π ) is the
temporary price impact. In this modeling, the generalized evolution of price will take
a form driven by a standard Brownian motion B, with B(1) = 0. By using this generalized evolu-
tion of price, Gabaix et al. (2006) showed that a power law describing the evolution of
financial returns can result from an optimizing behavior in line with a classical meth-
odology used in financial economics. Specifically, the liquidity supplier and the large
fund will try to maximize their utility function formalized as
U = E[w] − λ [Var(w)]^{δ/2},  (5.11)

where λ is a positive factor describing the aversion to risk, and δ is the order of risk aversion. W(r) is the money earned during the trade, which directly depends on
the expected return E[ r ] generated by the transaction. In other words, the optimizing
process implies a maximization of the expected value of total return given by the trade
(r). This return must take into account a potential uncertainty (denoted C) on the
mispricing signal M. When the mispricing signal results only from noise, this uncer-
tainty parameter is equal to zero (C = 0); otherwise, C is positive when the mispricing
is real. If the fund is assumed to spend S dollars in assets that it buys for a volume of Vt
in a transaction in which it pays a price concession equal to R(Vt ) , the total return of
the fund’s portfolio can be summarized through the equation
r_t = V_t (C M_t − R(V_t) + u_t) / S,  (5.12)

where u_t is a mean-zero noise term. By simulating data with this model, Gabaix et al.
(2003, 2006) were able to replicate the power law observed for financial returns.
More formally, their model showed that the evolution of financial returns can generate a power law when the trading volume is used as an explanatory variable. To put it another way, a classical model maximizing a utility function based on wealth can generate a power law related to the trading volume under the condition that the size distribution of traders follows a power law, r ~ kV^α, k being a positive parameter. In this theory, power laws in the financial sphere would result from
another power law characterizing the heterogeneity of investors’ size. The origin
of power laws in financial data would be due to another (unexplained) power law
observed in the size distribution of investors. We find here the same kind of argu-
ment as that used in the previous section. Although this theory incorporates some
key optimizing elements used in the financial mainstream, it must be explained
“by a more complete theory of the economy along with the distribution of firm
sizes [investors]” (Lux 2009, 8). Moreover, the theoretical framework proposed by
Gabaix et al. (2003, 2006) appears to be incompatible with the observation made
by Aoki and Yoshikawa (2007) that financial and foreign exchange markets are dis-
connected from their underlying economic fundamentals, since the former usually demonstrate power-law behavior, while the latter are characterized by an exponential distribution.31
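For readers who wish to see how the three empirical exponents quoted above fit together, the following short derivation (our reconstruction of the arithmetic, assuming the square-root price impact r = kV^{1/2}, i.e., α = 1/2, commonly invoked in this literature) shows how a tail exponent of 3/2 for trading volume translates into a tail exponent of 3 for returns:

P(r > x) = P(kV^{1/2} > x) = P(V > (x/k)^2) ~ [(x/k)^2]^{−3/2} = k^3 x^{−3},

that is, a power law for returns with tail exponent 3, the third empirical observation used by Gabaix et al. (2003, 2006).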
First of all, we must remember that the microstructural behaviors of financial mar-
kets have been investigated in financial economics in theoretical frameworks other than
the one set out in chapters 1 and 2. In the 1980s there emerged two alternative theo-
retical approaches that took as their starting point a questioning of empirical anoma-
lies32 and of the main hypotheses of the dominant framework: behavioral finance and
financial market microstructure. Both directly called upon the informational efficiency
theory, which, as we have seen, was a crucial element in the birth of financial economics.
Although the theory of financial market microstructure has been developing since the 1980s,33 the first works appeared around 1970, beginning with an article by Demsetz (1968), which looked at how to match up buyers and sellers to find a price when orders do not
arrive synchronously. In 1971, Jack Treynor, editor-in-chief of Financial Analysts Journal
from 1969 to 1981, published a short article under the pseudonym of Walter Bagehot,
“The Only Game in Town,” in which he analyzed the consequences when traders have
different motivations for trading. Maureen O’Hara, one of the leading authors of this
theoretical trend, defined market microstructure as “the study of the process and out-
comes of exchanging assets under a specific set of rules” (1995). Financial market mi-
crostructure focuses on how specific trading mechanisms and strategic behaviors affect the price-formation process. This field deals with issues of market structure and
design, price formation and price discovery, transaction and timing cost, information
and disclosure, and market-maker and investor behavior. A central idea in the theory of
market microstructure is that asset prices do not fully reflect all available information
even if all participants are rational. Indeed, information may be unequally distributed
between, and differently interpreted by, market participants. This hypothesis stands
in total contradiction to the efficient-market hypothesis defended by the dominant
paradigm.34
The second alternative approach is behavioral finance. In 1985 Werner F. M. De
Bondt and Richard Thaler published “Does the Stock Market Overreact?,” effectively
marking the start of what has become known as behavioral finance. Behavioral fi-
nance studies the influence of psychology on the behavior of financial practitioners
and the subsequent effect on markets.35 Its theoretical framework is drawn mainly
from behavioral economics. Behavioral economics uses social, cognitive, and emo-
tional factors to understand the economic decisions of economic agents performing
economic functions, and their effects on market prices and resource allocation. It is
primarily concerned with the bounds of rationality in economic agents. The first im-
portant article came from Kahneman36 and Tversky (1979), who used cognitive psy-
chology to explain various divergences of economic decision-making from neoclas-
sical theory. There exists as yet no unified theory of behavioral finance.37 According to
Schinckus (2009b), however, it is possible to characterize this new school of thought
on the basis of three hypotheses common to all the literature: (1) the existence of
behavioral biases affecting investor behavior; (2) the existence of bias in investors’
perception of the environment that affects their decisions; and (3) the existence of system-
atic errors in the processing of information by individuals, which affects the market’s
informational efficiency. The markets are therefore presumed to be informationally
inefficient.
The third category of underlying models from econophysics for explaining power-
law behaviors of financial data is close to the microstructure behaviors that we find
in financial economics. The approach defended in this category of models can meth-
odologically be broken down into two steps: (1) designing the model by defining
specific rules governing agents’ interactions; (2) simulating the market in order to
replicate (hopefully) the data initially recorded on the financial markets. This method-
ology provides a microsimulation of economic phenomena. It was initiated by Stigler
(1964) and was mainly expanded in the 1990s with the development of what is known
as agent-based modeling.
Initiated by the Santa Fe Institute in the 1980s, agent-based modeling was gradu-
ally developed in the 1990s and has now become one of the most widely used tools
to describe dynamic (adaptive) economic systems (Arthur 2005). The methodology
designs sets of abstract algorithms intended to describe the “fundamental behavior”
of agents, formulating it in a computerized language in which agents’ behavioral char-
acteristics are inputs, while outputs are associated with the macro level resulting from
the computerized iterated microinteractions. The microscopic level is characterized
by hypotheses about agents’ behaviors. Agent-based modeling is also widely used in
economics literature, with authors using it to model many economic phenomena: the
opinion transmission mechanism (Amblard and Deffuant 2004; Guillaume 2004);
the development of industrial networks and the relationship between suppliers and
customers (Epstein 2006; Gilbert 2007; Brenner 2001); the addiction of consumers
to a brand ( Janssen and Jager 1999); the description of secondhand (car) markets
(Izquierdo and Izquierdo 2007), and so on.38 In this section, we mention only agent-
based models used in finance, and, more specifically, we focus on models in which a
large number of computerized iterations can generate power laws describing the evo-
lution of financial returns.39
Lux and Marchesi (1999, 2000) proposed a model simulating an “artificial finan-
cial market” in which fat tails characterizing large fluctuations (described through
a power law) would result from speculative periods leading to the emergence of a
common opinion among agents, who regularly tend to over- and underestimate finan-
cial assets. This agent-based approach integrating the over/underestimation bias as a
major interaction rule has been confirmed by Kou et al. (2008) and Kaizoji (2006).
In the same vein, Chen, Lux, and Marchesi (2001) and Lévy et al. (1995, 2000) proposed models in which traders can switch between a fundamentalist and a chartist strategy. In this framework, the authors showed that financial returns follow a power law only when financial prices differ significantly from the evolution of economic
fundamentals (in contrast with the efficient-market paradigm). Alfarano and Lux
(2005) proposed an agent-based model in which power laws emerged from interac-
tions between fundamentalists and chartists whose adaptive behaviors are based on
a variant of the herding mechanism. By using only elements from the microstruc-
ture, Wyart et al. (2008) proposed a model in which a power law can result from a
specific relationship between the bid-ask spread and the volatility. More specifically,
their model generates a power law when these variables are directly related through
the following form:
(Ask − Bid) / Price = k (σ / √N),  (5.13)
where σ is the daily volatility of the stock, N is the average number of trades for the
stock, and k is a constant. Bouchaud, Mezard, and Potters (2002) and Bouchaud,
Farmer, and Lillo (2009) also defined some conditions in the trade process that can
generate a power law in the evolution of financial returns.
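To make this modeling strategy more tangible, the sketch below offers a deliberately simplified toy construction in the spirit of these fundamentalist/chartist models (it is our own illustration, not the Lux-Marchesi model or any other published specification): agents imitate randomly chosen peers, and the resulting swings in the fraction of chartists produce intermittent bursts of volatility in the simulated returns. Whether such a toy actually generates a power law would, of course, have to be established with the statistical tests discussed below.

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_market(n_steps=20_000, n_agents=500, p_imitate=0.8, noise=0.05,
               phi_f=0.05, phi_c=0.95, fundamental=0.0):
    """Toy fundamentalist/chartist market with imitation (illustrative only)."""
    chartist = rng.random(n_agents) < 0.5          # strategy of each agent
    log_p, last_ret = 0.0, 0.0
    returns = np.empty(n_steps)
    for t in range(n_steps):
        frac_c = chartist.mean()
        demand_f = phi_f * (fundamental - log_p)   # fundamentalists bet on reversion
        demand_c = phi_c * last_ret                # chartists extrapolate the last move
        last_ret = (frac_c * demand_c + (1 - frac_c) * demand_f
                    + noise * rng.standard_normal())
        log_p += last_ret
        returns[t] = last_ret
        # Imitation: with probability p_imitate an agent copies a random peer,
        # otherwise it redraws its strategy at random.
        copy = rng.random(n_agents) < p_imitate
        peers = rng.integers(0, n_agents, n_agents)
        chartist = np.where(copy, chartist[peers], rng.random(n_agents) < 0.5)
    return returns

r = toy_market()
print("max |return| / std:", float(np.max(np.abs(r)) / np.std(r)))
```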
Unlike the models presented in the previous section, these models do not associate
the emergence of power laws in finance with the existence of another unexplained
power law. However, there are gaps in the explanation proposed by these models.
Although they provide particular conditions under which financial distributions can
take the form of a power law, they do not really explain how these conditions might
come about. Models referring to behavioral biases do not explain why the selected
bias would primarily shape the market, while models defining specific conditions in
terms of the relationship between variables involved in the exchange do not explain
why these conditions would exist. It is worth mentioning that the feedback effects
between micro and macro levels lead to complex behaviors that cannot be analytically
studied in terms of renormalization group theory (Sornette 2014).
Although work in this category avoids justifying the emergence of a power law through the existence of another power law, it raises another curiosity, since power laws appear as an emergent macroproperty resulting from
a large number of computerized iterations characterizing microinteractions (Lux
2009). By identifying the trading conditions in which a power law can emerge, this
category of models sheds new light on the occurrence of such laws. However, this idea
of a power law as an emergent property still generates debate, since its “microfounda-
tions remain unclear” (Gabaix 2009, 281). Theoreticians involved in self-organized criticality models, which are presented in the following section, have also used this
reference to an emerging mechanism.
Bak associated “self-organized criticality” with “slowly driven dynamic sys-
tems, with many degrees of freedom, [that] naturally self-organize into a critical state
obeying power-law statistics” (Bak 1994, 480). More specifically, some physical sys-
tems appear to be ruled by a single macroscopic power law describing the frequency
at which phase transitions occur (Newman 2005). Chapter 3 already introduced this
idea when the foundations of statistical physics were presented. The particularity of
self-organized criticality lies in the assumption that certain phenomena maintain themselves near a critical state. An illustration of such a situation is a stable heap
of sand to which the addition of one grain generates miniavalanches. At some point,
these minicascades stop, meaning that the heap has integrated the effect of this addi-
tional grain. The sand heap is said to have reached its self-organized critical state (be-
cause the addition of a new grain of sand would generate the same process). Physicists
talk about a “critical state” because the system organizes itself into a fragile config-
uration resting on a knife-edge (where the addition of a single sand grain would be
enough to modify the sand heap).
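The sand heap itself is easy to simulate. The sketch below implements the standard Bak-Tang-Wiesenfeld sandpile just described (grid size, toppling threshold, and number of grains are arbitrary choices of ours) and records the size of the avalanche triggered by each added grain; once the pile has organized itself into the critical state, avalanche sizes spread over several orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(3)

def sandpile_avalanches(size=30, n_grains=10_000, threshold=4):
    """Bak-Tang-Wiesenfeld sandpile on a size x size grid with open boundaries.
    Returns the number of topplings (avalanche size) caused by each added grain."""
    grid = np.zeros((size, size), dtype=int)
    sizes = np.zeros(n_grains, dtype=int)
    for g in range(n_grains):
        i, j = rng.integers(0, size, 2)
        grid[i, j] += 1
        while True:
            unstable = np.argwhere(grid >= threshold)
            if len(unstable) == 0:
                break
            for i2, j2 in unstable:
                grid[i2, j2] -= 4
                sizes[g] += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i2 + di, j2 + dj
                    if 0 <= ni < size and 0 <= nj < size:
                        grid[ni, nj] += 1   # grains pushed over the edge are lost
    return sizes

s = sandpile_avalanches()
s = s[s > 0]
for x in (1, 10, 100, 1000):
    print(x, float((s >= x).mean()))        # broad, heavy-tailed avalanche-size distribution
```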
This self-organized criticality has been extended in economics by Bak et al. (1993),
who proposed a model in which a shock in the supply chain (which acts as an ad-
ditional grain in the sand heap) generates economy-wide fluctuations (like miniava-
lanches in the sand) until the economy self-organizes critically (i.e., at a fragile state
that could easily be modified by an additional small shock). In their model, the au-
thors showed that the occurrence of large fluctuations in the economy can statistically
be described through a power law. This conceptual framework can also be applied in
finance. Ponzi and Aizawa (2000), Bartolozzi et al. (2004), and Dupoyet et al. (2011)
proposed models describing financial markets as self-organized critical systems in
which actors would tend to drive the market to a stable state. Once the market is close
to this stable level, profits become lower and lower until negative variations (returns)
are generated. These variations act as the extra grain of sand, leading to a cascade of
losses, putting some agents out of the market until a situation in which profit for the
remaining actors becomes possible again. In this context, new agents, attracted by the
profit, re-enter the market, which therefore will tend toward a stable state, and so on.
In this perspective, power laws are presented as fluctuations around the market stable
state, which is looked on as a critical state perennially set on a razor’s edge.
The idea that the phenomenon is always at its critical point allows physicists to give
a meaning to the emergence of power laws in their field. Indeed, for a large number of
self-organized critical systems, Bak et al. (1987) showed that many extreme variations
(i.e., critical modifications of the system) follow a power-law distribution.
In contrast with the first two categories of models, which were based on dynamical mechanisms for producing power laws that are stochastic processes in which noise was supplied by an external source, self-organized criticality locates the origin of power laws in the internal dynamics of the system itself.
In other words, the emergence of power laws results from the evolution of microcon-
figurations of the system itself and not from an external factor such as the existence of
another power law or frictions in the growth rate.
However, as with the previous category of works, the self-organized criticality
theory is silent on how the power laws would emerge. This framework associates
power laws with an emergent statistical macro result. Econophysicists have justi-
fied the existence of power laws in finance through two arguments: (1) microscopi-
cally, the market can be seen as a self-organized critical system (i.e., sand heap) and (2) macroscopically, the statistical characterization of the fluctuations in such a self-organized system appears to be “universally” associated with power laws since “there
are many situations, both in dynamical systems theory and in statistical mechanics, in
which many of the properties of the dynamics around critical points are independent
of the details of the underlying dynamical system” (Farmer and Geanakoplos 2008,
45–46).
Although the four categories of works presented in this section do not really ex-
plain the reasons why power laws emerge on the financial markets, they open concep-
tual doors for a potential economic interpretation of these macro laws. The following
section will deal with the second crucial step for the potential development of an inte-
grative collaboration between econophysicists and economics: the creation of statis-
tical tests validating the power laws.
et al. 2011; Goerlich 2013).43 Among these works, the most significant research is
probably by Clauset, Shalizi, and Newman (2009), wherein they presented a set
of statistical techniques that allow the validation of power laws and calculation of
their parameters. “Properly applied, these techniques can provide objective evi-
dence for or against the claim that a particular distribution follows a power law”
(Clauset, Shalizi, and Newman 2009, 692). As explained hereafter, their test follows
three steps.
The first step aims to estimate the two parameters of the power law by using the
method of maximum likelihood: the lower bound to the power law,44 xmin, and the
critical exponent, α. The necessity for estimating the lower bound comes from the fact
that, in practice, few empirical phenomena obey a power law for all values of x. More
often the power law applies only for values greater than some minimum xmin, leading to
the use of truncated distributions (as explained in chapter 3) for capturing statistical
characteristics associated with values < xmin. These authors suggest the following max-
imum likelihood estimator:
α ≅ 1 + n [ ∑_{i=1}^{n} ln( x_i / (x_min − 1/2) ) ]^{−1}.  (5.16)
The maximum likelihood estimators are only guaranteed to be unbiased in the asymp-
totic limit of large sample size, n → ∞. For finite data sets, biases are present, but decay
with the number of observations for any choice of xmin. Because the estimate of the critical exponent α depends on xmin, they provide a method that estimates xmin by minimizing the “distance” between the power-law model and the empirical data using the Kolmogorov-Smirnov
statistic, which is simply the maximum distance between the cumulative distribution
functions of the data and the fitted model. This first step makes it possible to fit a power-
law distribution to a given data set and to provide an estimation of the parameters
xmin and α.
The second step aims to determine whether the power law is a plausible fit to the
data. Clauset, Shalizi, and Newman propose a goodness-of-fit test, which generates
a p-value that quantifies this plausibility. Such tests are based on the Kolmogorov-
Smirnov statistic previously obtained, which enables measurement of the “distance” between the distribution of the empirical data and that of the hypothesized model.
The p-value is defined as a fraction of the synthetic distances that are larger than the
empirical distance. If the resulting p-value is greater than 0.1, the power law is a plau-
sible hypothesis for the data; otherwise it is rejected.
The third and last step aims to compare the power law with alternative hypotheses
via a likelihood-ratio test. Even if a power law fits the data well, it is still possible that
another distribution, such as an exponential or a log-normal distribution, might give
as good a fit or better. To eliminate this possibility, Clauset, Shalizi, and Newman sug-
gest using a goodness-of-fit test again, calculating a p-value for a fit to the competing
distribution and comparing it with the p-value for the power law. For each alternative,
if the calculated likelihood ratio is significantly different from zero, then its sign indi-
cates whether the alternative is favored over the power-law model or not.
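A compact sketch of the first two steps may help make the procedure concrete. It is a simplified reimplementation for continuous data (the bootstrap below resamples only the tail, whereas Clauset, Shalizi, and Newman use a semiparametric scheme for the full dataset), applied here to synthetic Pareto observations rather than to financial returns.

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_alpha(x, xmin):
    """Continuous maximum-likelihood exponent for the tail x >= xmin
    (the continuous analogue of equation (5.16))."""
    tail = x[x >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

def ks_distance(x, xmin, alpha):
    """Kolmogorov-Smirnov distance between the tail data and the fitted power law."""
    tail = np.sort(x[x >= xmin])
    model_cdf = 1.0 - (tail / xmin) ** (1.0 - alpha)
    empirical_cdf = np.arange(1, len(tail) + 1) / len(tail)
    return float(np.max(np.abs(empirical_cdf - model_cdf)))

def choose_xmin(x, candidates):
    """Step 1: pick the xmin that minimizes the KS distance, and report alpha."""
    return min(((ks_distance(x, xm, fit_alpha(x, xm)), xm, fit_alpha(x, xm))
                for xm in candidates), key=lambda t: t[0])

def gof_pvalue(x, xmin, alpha, d_emp, n_synth=200):
    """Step 2: p-value = fraction of synthetic power-law samples whose KS distance
    (after refitting alpha) exceeds the empirical distance."""
    n_tail = int(np.sum(x >= xmin))
    exceed = 0
    for _ in range(n_synth):
        synth = xmin * (1.0 - rng.random(n_tail)) ** (-1.0 / (alpha - 1.0))
        exceed += ks_distance(synth, xmin, fit_alpha(synth, xmin)) > d_emp
    return exceed / n_synth

# Illustration on synthetic Pareto data with density exponent 3 (the "cubic law" value):
data = (1.0 - rng.random(50_000)) ** (-0.5)           # P(X > x) = x**-2, density ~ x**-3
d, xmin, alpha = choose_xmin(data, np.quantile(data, [0.50, 0.75, 0.90, 0.95]))
print(xmin, alpha, gof_pvalue(data, xmin, alpha, d))  # expect alpha near 3 and a large p-value
```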
Goerlich (2013) extended the article by Clauset et al. (2009) by proposing an-
other kind of test derived from the Lagrange multiplier principle. He showed that
the statistical power of the Lagrange multiplier test is higher than the bootstrapped
Kolmogorov-Smirnov test and needs less computational time. However, according
to Gillespie (2014), Clauset, Shalizi and Newman’s method has three main draw-
backs: “First, by fitting xmin we are discarding all data below that cut-off. Second, it
is unclear how to compare distributions where each distribution has a different xmin.
Third, making future predictions based on the fitted model is not possible since values
less than xmin have not been directly modelled.”
These two kinds of tests provide an important contribution to the creation of a more integrated approach between financial economics and econophysics, because they pave the way for testing explanatory theoretical models based on the power-law behavior of financial prices/returns. However, it is worth emphasizing that the de-
velopment of such tests could directly contribute to the development of an integra-
tive approach between econophysicists and financial economists: this kind of test
can establish the scientific justification for using power laws while paving the way for
an appropriate comparison between these statistical laws and patterns usually used
in finance. This statistical approach is very recent and is not widely disseminated
among econophysicists. Moreover, it is worth mentioning that, to date, statistical tests
of power laws have not yet been used with financial data. (They have been used for wealth, income, city sizes, and firm sizes.) Despite their drawbacks and the fact that
further investigation is needed, we can consider that these tests have opened the door
to some research into statistical tests. We can add that while visual tests are the most
common in the econophysics literature for testing the power-law distribution, some
econophysicists use statistical tests. Among other works we may mention Redelico
et al. (2009) and Gligor and Ausloos (2007), who used Student’s t-test; Clippe and Ausloos (2012) and Mir et al. (2014), who used a chi-square test; and Queiros (2005), Zanin et al. (2012), Theiler et al. (1992), and Morales et al. (2013).
5.4. CONCLUSION
This chapter analyzed the potential uses of econophysics models by financial econo-
mists. Although econophysics offers a set of practical solutions, its theoretical develop-
ments remain outside of financial economics. In this challenging context, the question
of theoretical integration is crucial. First of all, this chapter studied how econophysics
can be useful in trading rooms and also the drawbacks in its broader use. Beyond these
practical implementations, the theoretical contributions of econophysics were also
investigated from the viewpoint of financial economists. The similarities between the
phenomenological approach used by financial economists and the one implemented
by econophysicists have been presented. Afterward, we discussed to what extent this
methodological situation can play a key role in a potential future rapprochement
between the two fields. In this context, we also explained how econophysics’ contri-
butions can become effective. Two conditions have then been highlighted: (1) the
elaboration of models explaining the emergence of power laws and (2) the creation of
statistical tests validating (or not) these macro laws. As explained, the latter calls for
further research in statistics.
6
TOWARD A COMMON FRAMEWORK
The two previous chapters identified what remains to be done (from an economist’s
point of view) for econophysicists to contribute significantly to financial economics
(and finance) and to develop a fruitful collaboration between these two fields.
Chapter 5 showed that recent developments in mathematics and statistics have cre-
ated an opportunity for an integration of results from econophysics and financial
economics. It pointed out that three paths are still waiting to be investigated: (1) de-
velopment of a common framework/vocabulary in order to better compare and in-
tegrate the two approaches; (2) creation of statistical tests to identify and to test
power laws or, at least, to provide statistical tests to compare results from econo-
physics models with those produced by financial models; and, finally, (3) work on
generative models for giving a theoretical explanation of the emergence of power
laws. These three axes of research could be considered as a possible research pro-
gram to be developed in the coming years by those who would like to contribute
to financial theory by developing “bridges” between econophysics and financial
economics.
Of the three paths mentioned above, the creation of statistical tests is the most
crucial step from the point of view of attracting the attention of financial economists
and practitioners. As explained in the previous chapters, financial economists have
founded the scientific dimension of their discipline on the use of statistical tests for
the characterization of empirical data. Consequently, such tests are a necessary condi-
tion for doing research in financial economics. Although several problems still present
obstacles to the development of statistical tests dedicated to power laws, significant re-
sults have emerged in recent years (chapter 5). Moreover, because of the rapid expan-
sion of statistical studies on this topic, new research may develop in the near future.
Developing such tests is a research program on its own and cannot be undertaken here
without leading us very far from the initial objectives of this book. However, from
our point of view, research of this kind is an important first step in the search for a
common theoretical framework between financial economics and econophysics. If
one wants to create statistical tests for comparing results produced by models from
the two fields, the first question is to know what is to be compared. Although econo-
physics is a vibrant research field that has given rise to an increasing number of models
that describe the evolution of financial returns, the vast majority focus on a specific
statistical sample without offering a generic description of the phenomenon. The lack
of a homogeneous perspective creates obvious difficulties related to the criteria for
choosing one model rather than another in a comparison with key models of financial
economics. Consequently, the existing literature offers a broad catalog of models but
no unified conceptual framework. The standardization of knowledge through such a
common framework is a necessary condition if econophysics is to become a strong
discipline (Kuhn 1962).
This chapter will propose the foundations of a unified framework for the major
models published in econophysics. To this end, it will provide a generalized formula
describing the evolution of financial returns. This formula will be derived from a meta-
analysis of models commonly used in econophysics. By proposing a generic formula
characterizing the statistical distributions usually used by econophysicists (Lévy, trun-
cated Lévy, or nonstable Lévy distributions), we are seeking to pave the way toward
a more unified framework for econophysics. This formula will be used to propose an
econophysics option-pricing model compatible with the basic concepts used in the
classical Black and Scholes framework.
The difference between [the model for the best estimate of the fundamental price from
physics] and [the model for the best estimate of the fundamental price from financial eco-
nomics, i.e. efficient-market theory] is at the core of the difference in the modelling strategies
of economists, [which] can be called top-down (or from rational expectations and efficient
markets), compared with the bottom-up or microscopic approach of physicists. (7)
The distinction is frequently found in econophysics literature.1 From the elements dis-
cussed in the previous chapters, we can claim that this distinction between the two
modeling “strategies” is false and that the authors who rely on it are confusing two
levels of analysis—the methodological aspect and the modeling dimension. More im-
portantly, this confusion provides a good illustration of the nature of the differences
between financial economics and econophysics by highlighting a difficulty in compar-
ing them. Let us explore this point.
In the literature of econophysics, it is commonly asserted that the “modelling strat-
egies of economists can be called top-down” because a “financial modeller builds a
model or a class of models based on a pillar of standard economic thinking, such as
efficient markets, rational expectations, representative agents” in order to draw “some
prediction that is then tested statistically, often using linear regressions on empirical
data” (Sornette 2014, 5). In this context, rational expectations and the idea that mar-
kets are efficient are assumed. So it appears from this formulation that, methodolog-
ically, economists usually start their analysis with assumptions that they implement
in modeling and from which they deduce conclusions; “top-down” modeling appears
therefore to be associated with the deductive approach mainly used in economics.
Econophysics is, in contrast, presented as a data-driven field founded on descriptive
models resulting from observations. First of all, it is worth mentioning that the belief
in “no a priori” is in itself an a priori since it refers to a form of positivism. Second, in
such a comparison, this inductive approach is implicitly considered to be a “bottom-
up” methodology because it starts with data related to a phenomenon rather than as-
sumptions about it. In this case then, the terms “top-down” and “bottom-up” refer to
methodological strategies rather than to modeling methods.
Although the difference between the two dimensions seems subtle, it is impor-
tant: methodology refers to the conceptual way of dealing with phenomena (i.e.,
quantitatively vs. qualitatively; empirically vs. theoretically, etc.), while modeling
methods concern the kind of computation (and data) used by scientists. Actually, by
contrasting the two fields in this way, authors focus on a very specific way of doing
econophysics and economics (i.e., on a very specific part of the literature related to
these two areas of knowledge). On the one hand, although a part of economics is well
known for its representative-agent hypothesis, there are works in financial economics
that do not necessarily implement top-down modeling. Agent-based modeling, which
has become a common practice in (financial) economics, is directly founded on a
micro-oriented modeling from which an aggregate (bottom-up) result is derived. On
the other hand, a large part of the literature of econophysics is dedicated to the phe-
nomenological macrodescription of the evolution of financial prices, making these
studies ineligible for consideration as “bottom-up” modeling.
One often finds [in the literature of econophysics] a scolding of the carefully maintained
straw man image of traditional finance. In particular, ignoring decades of work in dozens of
finance journals, it is often claimed that “economists believe that the probability distribution
of stock returns is Gaussian,” a claim that can easily be refuted by a random consultation of
any of the learned journals of this field.
values part of the system, whereas financial economists associate these extreme values
with a statistical distribution characterizing the error terms of a major trend. In other
words, financial economists break down stylized facts into two elements, a general
trend (assumed to be governed by a Brownian uncertainty) and a statistical error term
that follows a conditional distribution for which a calibration is required. For these
conditional distributions, it goes without saying that data-driven calibrations (i.e.,
without theoretical justification) are common in financial economics. As explained in
chapter 5, the major difference between the two fields appears in how they calibrate
their models: calibration in econophysics mainly results from data, whereas most
financial economists consider their models to have theoretical foundations from a fi-
nancial (and not merely from a statistical) point of view.2
In conclusion then, econophysicists and financial economists model in the same
way (calibrating models to fit data), but the latter combine the calibration step with a
specific theoretical framework,3 while the former claim to be more data-driven. Rather
than focusing on the differences, we see in these similarities a need to develop statis-
tical tools in order to compare the two ways of modeling and then to create a common
framework. However, before talking about a possible comparison, it is important to
know what has to be compared. While the key models of finance are clearly defined
in the literature, there is no agreement on what would be the core models in econo-
physics. The following section will propose a unified framework, generalizing several
important models identified in the literature of econophysics.
where C and d are constants that might have temporal variation. The analytical form of
f(x) is not always known for all the values of x, but it has a power law variation in the
limit of large x (x → ∞):
f(x) ≅ 1 / x^{b1 + a1α}  for x → ∞.  (6.2)
The parameters a1 and b1 (usually equal to 1) define the shape of the distribution at
large x, and α is the principal exponent of the power law. The function g(x) introduced
in equation (6.1) has the form

g(x) = e^{−(a2 h(x) + b2)^{c2}},  (6.3)

with two possible forms for h(x): x or log(x). As mentioned in chapter 1, the use of
the log-normal law in finance was introduced by Osborne (1959) in order to avoid
the theoretical possibility of negative prices. Moreover, this use is also based on the
assumption that rates of return (rather than price changes) are independent random
variables. In equation (6.3), a2, b2, and c2 are parameters that differ from one model to
another, defining the final shape of the distribution function. Finally, our generic prob-
ability distribution function can be expressed as
P(x) ≅ C (1 / x^{b1 + a1α}) e^{−(a2 x + b2)^{c2}} + d.  (6.4)
This metaformula makes it possible to rewrite and to compare the probability distribu-
tion function of price changes used in the main econophysics models identified in the
literature that deal with the statistical description of financial returns. Actually, these
models can be classified in three groups depending on the distribution used: stable
Lévy distribution, truncated stable Lévy distribution, and nonstable Lévy distribution.
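As a quick illustration of how the metaformula nests these families, the short Python sketch below evaluates equation (6.4) for a few parameter settings (a pure power-law tail, an exponentially truncated tail, and the Gaussian case discussed below). All numerical values are purely illustrative and are not taken from any of the studies surveyed here.

```python
import numpy as np

def generic_pdf(x, C=1.0, a1=1.0, b1=1.0, alpha=1.5,
                a2=0.0, b2=0.0, c2=1.0, d=0.0, h=lambda x: x):
    """Generic form (6.4): P(x) ~ C * x**-(b1 + a1*alpha) * exp(-(a2*h(x) + b2)**c2) + d."""
    return C * x ** -(b1 + a1 * alpha) * np.exp(-((a2 * h(x) + b2) ** c2)) + d

x = np.linspace(1.0, 50.0, 5)
print(generic_pdf(x))                  # pure power-law (stable Levy) tail, alpha = 1.5
print(generic_pdf(x, a2=0.1))          # exponentially truncated power-law tail
sigma = 1.0
print(generic_pdf(x,                   # Gaussian special case (see equation (6.6) below)
                  C=1 / np.sqrt(2 * np.pi * sigma**2),
                  a1=0.0, b1=0.0,
                  a2=1 / np.sqrt(2 * sigma**2), c2=2.0))
```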
As previous chapters explained, stable Lévy distributions were first proposed by
Mandelbrot (1962) and afterward used by the pioneers of econophysics to describe
the tail of the distribution of financial data. Most importantly, for large x values,
stable Lévy distributions are well approximated by a power law as described in
equation (6.2), with the exponent α having values between 1 and 2, generally around
1.5. Regarding equation (6.4), most authors4 use a1 = b1 = 1, with one special
case where a1 = −1 and b1 = 1 (Weron, Mercik, and Weron 1999). The parameter
a2 is nonzero only in three cases (Cont and Bouchaud 2000; Gupta and Campanha
2002; Louzoun and Solomon 2001), and Louzoun and Solomon (2001) set c2 = −1.
The other parameters of equation (6.4) are taken to be zero in three cases out of 20
models studied in our meta-analysis. The observed nonzero a2 involves the pres-
ence of an exponential term in the distribution function P(x), which is derived by
using models to explain the empirical data—for example, the generalized Lotka-
Volterra model (Louzoun and Solomon 2001) and the percolation model (Cont and
Bouchaud 2000; Gupta and Campanha 2002). But most often the authors focus
stocks (from 1926 to 1996); they concluded that in both cases these distributions can
be described with a power law in line with our equation (6.4), whose characteristic
exponent is asymptotically equal to 3—meaning that the greater the volume of data,
the higher the probability of approximating 3. In such a context, our generalized
probability distribution function could be simplified as follows:
P(x) \sim C\,\frac{1}{x^{1+3}} \sim C\,\frac{1}{x^{4}}. \qquad (6.5)
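Since this cubic-law exponent is typically estimated from the largest observations, the following sketch applies one standard estimator of the tail exponent (the Hill estimator) to synthetic data; it is an illustration, not the procedure used in the studies cited above, and the sample is simulated with a known exponent of 3.

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimate of the tail exponent alpha from the k largest absolute observations."""
    tail = np.sort(np.abs(sample))[-k:]
    return k / np.sum(np.log(tail / tail[0]))

rng = np.random.default_rng(1)
data = rng.pareto(3.0, size=100_000) + 1.0   # classical Pareto sample with tail exponent 3
print(hill_estimator(data, k=2_000))         # roughly 3, in line with the "cubic law"
```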
Similarly, the Gaussian framework itself can be recovered as a specific case of equation (6.4) by setting
C = \frac{1}{\sqrt{2\pi\sigma^{2}}}, \quad a_1 = b_1 = 0, \quad b_2 = d = 0, \quad c_2 = 2, \quad \text{and} \quad a_2 = \frac{1}{\sqrt{2\sigma^{2}}},
which yields
P(x) = \frac{1}{\sqrt{2\pi\sigma^{2}}}\, e^{-x^{2}/(2\sigma^{2})}. \qquad (6.6)
This point is important for two reasons: first, this generalization shows that a conceptual
bridge between econophysics and financial economics is possible, implying the need to
develop new non-Gaussian statistical tests; second, it opens the door to the identifica-
tion of potential statistical similarities that would go beyond the disciplinary perspec-
tives. In other words, our meta-analysis proposes a unified equation that can make sta-
tistical sense for both economists and econophysicists. This methodological implication
is in line with the “transdisciplinary analysis” proposed in our previous work (Jovanovic
and Schinckus 2013) in which we suggested that an integrated conceptual framework
could transcend the disciplinary perspectives as summarized in figure 6.1.
[Figure 6.1. An integrated conceptual framework linking econophysics (holistic perspective), financial economics, and mathematical finance.]
The key element in the Black and Scholes model was the application of the “ar-
bitrage argument,” which is an extension of the economic law of one price in perfect
capital markets.7 This law was popularized by Modigliani and Miller (1958)8 and is
used as the foundation of equilibrium in financial economics. In financial economics’
option-pricing model, the arbitrage argument ensures that the replicating portfolio
has a unique price. In econophysics’ models, this concern does not exist,9 something
completely unacceptable according to the fundamental intuitions of financial eco-
nomics, in which this arbitrage argument is a necessary condition for the uniqueness
of the solution for any option-pricing method.
The model presented in this section must be considered a first attempt to work
within an integrated framework between econophysics and financial economics. More
specifically, we will use our generic formula originating in econophysics to describe
the evolution of underlying returns by showing that this statistical description is also
compatible with the necessary condition for the use of an arbitrage argument (so im-
portant in financial economics). As explained below, we will use our generic formula
in its truncated form in order to avoid the problem of infinite variance (discussed in
chapter 3). Some authors have worked on this issue: Matacz (2000) and Boyarchenko
and Levendorskii (2002, 2000) offered interesting option-pricing models based on
an exponential truncated stable Lévy distribution. However, although these authors
gave a procedure for minimizing an estimated risk, they did not define the conditions
for a potential optimal risk hedge. In contrast, McCauley et al. (2007b), for example,
showed that a non-Gaussian option-pricing model can provide optimal risk hedging;
but their model focused on general conditions without defining an individual cutoff.
Significantly, this existing literature does not directly address the implications of trun-
cated exponential Lévy models for the optimal option-hedge ratio problem. However,
this is of interest in an econophysics context as the result of various contributions
stretching back to studies by Mantegna and Stanley in the mid-1990s that employed
truncation to deal with statistical limitations of “infinite variance.”
Our model aims to define the minimal condition required for an optimal risk hedg-
ing for a particular cutoff based on an exponentially truncated stable Lévy distribution.
In other words, our contribution is to show that a (nonconditional) exponential stable
Lévy description of a financial distribution (i.e., description of the whole distribution, as
usually proposed by econophysicists) is compatible with an admissible hedging strategy
in the sense defined by Harrison and Kreps (1979) and Harrison and Pliska (1981),
who showed that the existence of an equivalent martingale measure is a necessary con-
dition for optimal hedging (see Carr and Madan 2005 for further details on this theme).
It is worth bearing in mind that this probabilistic framework defined by Harrison, Kreps,
and Pliska has progressively become the mainstream in financial economics (chapter 2).
Section 6.2.1 presents our generalized model in line with the Black and Scholes
framework (based on a martingale measure), and section 6.2.2 defines a specific risk
measure for a truncated stable Lévy version of this model that will be presented in sec-
tion 6.2.3. In line with McCauley et al. (2007a), we use an exponentially truncated stable
Lévy distribution whose statistical conditions are defined to make this model viable
(meeting the necessary condition) in the sense suggested by Harrison and Kreps (1979).
where the first term is the value of the option at time T and the second term is the pre-
mium paid for the option at time t = 0. Assuming that one is operating in a no-arbitrage
market, the discounted stochastic price process must be a martingale (Harrison and
Kreps 1979). The option price is then the discounted expected payoff under this martingale measure:
C = e^{-rT}\,E\big[\max(S_T - K,\,0)\big] = e^{-rT}\int_{K}^{\infty}(S - K)\,f(S)\,dS,
with f(S) the density of the underlying price at maturity.
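The following hedged sketch illustrates this pricing principle in the simplest case: for a log-normal risk-neutral density, the discounted expected payoff computed by quadrature reproduces the Black-Scholes value. The numerical inputs are arbitrary, and this is not yet the truncated-Lévy model developed below.

```python
import numpy as np
from scipy import integrate, stats

S0, K, r, sigma, T = 100.0, 110.0, 0.02, 0.2, 1.0   # arbitrary inputs

# log-normal density of S_T under the martingale (risk-neutral) measure
mu = np.log(S0) + (r - 0.5 * sigma**2) * T
f = lambda s: stats.lognorm.pdf(s, s=sigma * np.sqrt(T), scale=np.exp(mu))

# option price as the discounted expected payoff
price_int, _ = integrate.quad(lambda s: np.exp(-r * T) * (s - K) * f(s), K, np.inf)

# closed-form Black-Scholes call for comparison
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
price_bs = S0 * stats.norm.cdf(d1) - K * np.exp(-r * T) * stats.norm.cdf(d2)
print(price_int, price_bs)   # the two values coincide (up to quadrature error)
```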
Due to the stochastic nature of the price process, risk is inherent in the financial evalu-
ation of options and stocks. Black and Scholes (1973) showed that for the log-normal
distribution, this risk can be hedged by using an appropriate hedging condition (the so-
called ϕ hedging) for the financing strategy. But for nonnormal models, the Black and
Scholes procedure for hedging risk no longer works.10 A measure of risk that was also used
in Bouchaud and Sornette (1994) and Aurell et al. (1997) is the variance of the value of
the portfolio V = C − φS . We make the supposition here that this variance is finite.11 Thus:
First of all, note that for uncorrelated assets, one has the following expression:
E[(φΔS)²] = φ²σ² = Σ_i φ_i²σ_i², where σ_i is the volatility. However, when there exists a
correlation between the assets, one can write E[(φΔS)²] = Σ_i φ_i²σ_i² + 2Σ_{i<j} φ_iφ_jσ_{ij}, where
σ_{ij} is the covariance matrix. In a sense, our conceptual model defined in equation (6.9)
is in line with the generalized call-option pricing formula defined by Tan (2005), in
which S is observed for non-Gaussian distributions. This approach is well known in
finance and requires a minimization of the risk with respect to the trading strategy:
\phi^{*} = \frac{1}{\sigma^{2}}\,E\big[(S_0 - S)\max(S - K,\,0)\big] = \frac{1}{\sigma^{2}}\int_{K}^{\infty}(S_0 - S)(S - K)\,f(S)\,dS. \qquad (6.10)
This equation is valid for a martingale process S with E[∆S] = 0 , ensuring the necessary
condition for an optimal hedging solution. If there is more than one uncorrelated asset
(stock), the above equation should be applied for each stock individually in order to
obtain the total optimal hedging strategy. The optimal strategy for the ith asset would
be written like the above equation with index i on all variables. For many correlated
assets, using E[(φΔS)²] = Σ_i φ_i²σ_i² + 2Σ_{i<j} φ_iφ_jσ_{ij}, one finds
\phi_i^{*} = \frac{1}{\sigma_i^{2}}\left(\int_{K}^{\infty}(S_{i0} - S_i)(S_i - K)\,f(S_i)\,dS_i - \sum_{j\neq i}\phi_j\,\sigma_{ij}\right). \qquad (6.11)
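As a minimal numerical illustration of equation (6.10), the sketch below computes the variance-minimizing hedge ratio by quadrature for a toy density with E[S] = S0 (so that E[ΔS] = 0). The inputs are arbitrary, and the negative sign simply follows the (S0 − S) convention used in the formula.

```python
import numpy as np
from scipy import integrate, stats

S0, K, sigma_S = 100.0, 105.0, 10.0                      # arbitrary toy inputs
f = lambda s: stats.norm.pdf(s, loc=S0, scale=sigma_S)   # density with E[S] = S0

num, _ = integrate.quad(lambda s: (S0 - s) * (s - K) * f(s), K, np.inf)
phi_star = num / sigma_S**2
print(phi_star)   # about -0.31 with these inputs
```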
In line with the central-limit theorem, a stable Lévy regime will converge12 toward
a Gaussian asymptotic distribution after a very high number of variables x. In other
words, there is a cross-value l after which the stable Lévy process is assumed to switch
into the asymptotic (Gaussian) regime. That particular evolution implies a Gaussian
asymptotic regime for x > l and a truncated stable regime for x < l. The former can also
be seen as a specific case of the generic formula (6.4) with the following parameters,
C = \frac{1}{\sqrt{2\pi\sigma^{2}}}, \; a_1 = b_1 = 0, \; b_2 = d = 0, \; c_2 = 2, \; \text{and} \; a_2 = \frac{1}{\sqrt{2\sigma^{2}}};
the latter can be described through the following specific case, where the distribution density function for the log returns of the underlying asset can take the form
f(x) = C\,\frac{e^{-\gamma x}}{x^{\alpha + 1}}, \qquad (6.12)
where x = log (S/S0), C > 0, γ ≥ 0, and 0 < α < 2, which is the necessary condition
for a stable Lévy distribution. C can be seen as a measure of the level of activity in
a case where all other parameters are constant (i.e., a parameter of scale). The pa-
rameter γ is the speed at which arrival rates decline with the size of the move (i.e.,
rate of exponential decay). This model accords with studies dealing with exponen-
tial truncation exposed in chapter 3, and the formula is a symmetric version of the
so-called CGMY model (named after its authors, Carr, Geman, Madan, and Yor
2002) and a generalization of the exponentially truncated stable Lévy models pro-
posed by Koponen (1995) and by Boyarchenko and Levendorskii (2002, 2000).
However, while Carr et al. (2002) applied this model in a time-changed framework,
Koponen (1995) did not apply his model to option pricing, while Boyarchenko
and Levendorskii (2002, 2000) did not seek conditions for a potential risk hedge.
Our objective here is to show that a stable Lévy regime (with no time-changed dis-
tribution) is theoretically compatible with a key assumption of the financial main-
stream. Consequently, the rest of this chapter will focus only on the stable Lévy
regime.
Because stable Lévy processes generate infinite variance, we use an exponential
truncation implying an exponential decay of the distribution. This restriction means
that the truncated distribution generates finite variations, making possible the esti-
mation of the variance (in the stable Lévy regime), which is given by the following
equation:
\sigma^{2} = \int_{-l}^{\,l} x^{2}\,f(x)\,dx. \qquad (6.13)
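A small numerical sketch of this point: with an exponential truncation (γ > 0), the second moment of the density in equation (6.12) is finite and can be computed by quadrature. The values are illustrative, the density is treated one-sided, and a small lower cutoff is used because (6.12) describes the tail and is not integrable at the origin.

```python
import numpy as np
from scipy import integrate

alpha, gam, x_min = 1.5, 2.0, 0.01             # illustrative values; x_min avoids the origin
f = lambda x: np.exp(-gam * x) / x ** (alpha + 1)

norm, _ = integrate.quad(f, x_min, np.inf)                         # normalization (C = 1/norm)
second_moment, _ = integrate.quad(lambda x: x**2 * f(x), x_min, np.inf)
print(second_moment / norm)   # finite variance thanks to the exponential truncation
```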
Using the general option-price equation above, we calculate the option price for this
model for the chosen portfolio, by considering the density distribution of stock returns:
C = e^{-rT}\int_{\ln(K/S_0)}^{\infty}\big(S_0 e^{x} - K\big)\,C\,\frac{e^{-\gamma x}}{x^{\alpha+1}}\,dx. \qquad (6.14)
Using the result \int_{x}^{\infty}\frac{e^{-u}}{u^{n}}\,du = \frac{E_{n}(x)}{x^{\,n-1}}, with E_{n}(x) = \int_{1}^{\infty}\frac{e^{-xt}}{t^{n}}\,dt, in equation (6.14), and expressing C as a function of the squared volatility, yields:
C = \frac{\sigma^{2}\, e^{-rT}}{2\,\gamma^{\alpha-2}\,\Gamma(2-\alpha)}\left(\ln\frac{K}{S_0}\right)^{-\alpha}\left[S_0\,E_{\alpha+1}\!\left((\gamma - 1)\ln\frac{K}{S_0}\right) - K\,E_{\alpha+1}\!\left(\gamma\ln\frac{K}{S_0}\right)\right]. \qquad (6.15)
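The following sketch is a numerical cross-check (ours, not the book's) that the closed form (6.15) agrees with a direct quadrature of (6.14). It assumes the variance–scale relation σ² = 2Cγ^(α−2)Γ(2−α) implied by the truncated density for a large cutoff, and it requires γ > 1 and K > S0 for the integrals to converge; all parameter values are arbitrary.

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma as Gamma

S0, K, r, T = 100.0, 120.0, 0.02, 1.0
alpha, gam, C_scale = 1.5, 3.0, 0.05      # C_scale plays the role of the constant C in (6.12)
a = np.log(K / S0)

def E(n, x):
    """Generalized exponential integral E_n(x) = integral_1^inf exp(-x*t) / t**n dt."""
    val, _ = integrate.quad(lambda t: np.exp(-x * t) / t**n, 1.0, np.inf)
    return val

# direct quadrature of (6.14)
direct, _ = integrate.quad(
    lambda x: (S0 * np.exp(x) - K) * C_scale * np.exp(-gam * x) / x**(alpha + 1), a, np.inf)
direct *= np.exp(-r * T)

# closed form (6.15), with sigma^2 = 2 * C * gam**(alpha-2) * Gamma(2-alpha) (our assumption)
sigma2 = 2.0 * C_scale * gam**(alpha - 2) * Gamma(2.0 - alpha)
closed = (sigma2 * np.exp(-r * T) / (2.0 * gam**(alpha - 2) * Gamma(2.0 - alpha))
          * a**(-alpha)
          * (S0 * E(alpha + 1, (gam - 1.0) * a) - K * E(alpha + 1, gam * a)))
print(direct, closed)   # the two values agree up to quadrature accuracy
```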
Given this result, we can estimate the hedging strategy that minimizes risk by using
equation (6.10):
\phi^{*} = \frac{C}{\sigma^{2}}\int_{\ln(K/S_0)}^{\infty}\big(S_0 - S_0 e^{x}\big)\big(S_0 e^{x} - K\big)\,\frac{e^{-\gamma x}}{x^{\alpha+1}}\,dx. \qquad (6.16)
Using the gamma function \Gamma(2-\alpha) = \int_{0}^{\infty} e^{-t}\,t^{\,1-\alpha}\,dt, we can detail equation (6.16) as
\phi^{*} = \frac{1}{2\,\gamma^{\alpha-2}\,l^{\,1-\alpha}\,\big(e^{-l} - 1\big)}\left(\ln\frac{K}{S_0}\right)^{-\alpha}\left[\big(S_0 K + S_0^{2}\big)\,E_{\alpha+1}\!\left((\gamma-1)\ln\frac{K}{S_0}\right) - S_0^{2}\,E_{\alpha+1}\!\left((\gamma-2)\ln\frac{K}{S_0}\right) - S_0 K\,E_{\alpha+1}\!\left(\gamma\ln\frac{K}{S_0}\right)\right].
Although Tan (2005) did not deal with infinite variance processes, we came to the same
conclusions as he did about non-Gaussian densities: φ* explicitly depends on
(1) higher partial derivatives of the call-option pricing function with respect to the price of the
underlying asset; and (2) the value of the cumulants (as they are used in the logarithm of
the characteristic function). Although equation (6.14) could then be further generalized
as proposed by McCauley et al. (2007b) and Tan (2005), generalization would require
specific statistical conditions (defined in this chapter) in order to offer a viable hedg-
ing solution in a stable Lévy framework. Our objective here is to show the theoretical
compatibility between an exponentially truncated stable Lévy framework and the nec-
essary condition for an optimal hedging in the Harrison, Kreps, and Pliska theoretical
framework.
6.3. CONCLUSION
The objective of this final chapter was to develop a unified framework generalizing
major models found in econophysics literature. The identification of such a framework
allowed us to work on the minimal condition under which it could be compatible with
the financial mainstream. This task suggests that a fruitful future collaboration between econophysicists and financial economists is possible.
The first step in the elaboration of a conceptual bridge between the two fields was
to show that, in the diversified econophysics literature dealing with extreme values in
econophysics, it was possible to deduce a unified technical framework. We proposed
such a unification in the first section of this chapter by showing that econophysics
models can be seen as a specific derivation of a generalized equation. The second
step was to show that the generalized equation elaborated to describe the key models
developed in econophysics could be compatible with a strictly Gaussian approach.
That was the aim of our second section. Equations (6.4) and (6.6) showed that the
Gaussian framework can be expressed as a specific case of the generalized equation.
This point is important, since it facilitates the potential development of a common
vocabulary between the two communities. While equation (6.4) highlighted the sta-
tistical parameters common both to models used in econophysics and to those used
in finance, the next step will be to give them an economic/financial meaning. In a
sense, proposing a generalized equation is a preliminary condition for this kind of
theoretical investigation, which requires a combination of theoretical and technical
knowledge of the key assumptions used by econophysicists and economists, because
the interpretation of the statistical parameters must make sense for both communi-
ties. After defining such a unified equation, we presented it in the light of the financial
mainstream. We used our generic formula originating in econophysics to describe
the evolution of underlying returns by showing that this statistical description is also
compatible with the necessary condition for the use of an arbitrage argument. The
model proposed here must be considered a basic and preliminary application whose
objective is to stress the feasibility of a conceptual framework common to econo-
physics and financial economics. Hopefully, this first attempt will generate further
research on the topic.
(Continuation of the preceding table.) Each entry gives the author formula and the parameters of the generic formula P(x) ~ C·x^{−(b1 + a1·α)}·e^{−(a2·h(x) + b2)^{c2}} + d for x → ∞.

2. Black and Scholes (1973). An option-pricing technique; the Gaussian (log-normal) distribution is one of the principal assumptions. Author formula: P(x) = (1/(x√(2πσ²))) exp(−[log(x/x₀) − (µ − σ²/2)]²/(2σ²)). Generic-formula parameters: C = 1/√(2πσ²), h(x) = log(x), a1 = 0, b1 = 0, a2 = 1/(2σ²)^{0.5}, b2 = 0, c2 = 2, d = 0.

3. Clark (1973). Mixture of Gaussian distributions. Author formula: P(x) = (1/√(2πσ²)) e^{−x²/(2σ²)}. Generic-formula parameters: C = 1/√(2πσ²), h(x) = x, a1 = 0, b1 = 0, a2 = 1/(2σ²)^{0.5}, b2 = 0, c2 = 2, d = 0.

4. Martin Schaden (2002). Author formula: P(s′, t_f; s, t_i) = (4πσ²T·s′s)^{−1} exp(−[log(s′/s) + BT]²/(2σ²T)), where s′ is the price at time t_f, s the price at time t_i, and T = t_f − t_i. Generic-formula parameters: h(x) = log(x), a1 = 0, b1 = −1, a2 = 1/(2σ²T)^{0.5}, b2 = [BT − log(s)]/(2σ²T)^{0.5}, c2 = 2, d = 0.
Table 6.2 Lévy stable (Paretian) distributions (continued). Each entry gives the author formula and the parameters of the generic formula P(x) ~ C·x^{−(b1 + a1·α)}·e^{−(a2·h(x) + b2)^{c2}} + d for x → ∞.

8. Malcai, Biham et al. (1999). Power law with α = 1.5 (articles 6, 7, and 8 are connected; same formula as article 7 above). Generic-formula parameters: a1 = 1, b1 = 1, a2 = b2 = c2 = 0, h(x) = x, d = 0, α ≅ 1.5.

9. Scalas and Kim (2007). Lévy α-stable distribution. The characteristic function for symmetric distributions is L_α(k) = exp(−a|k|^α), and the general form for Lévy distributions is L_α(x) ~ 1/|x|^{1+α} for x → ∞. The paper illustrates a procedure for fitting financial data with Lévy α-stable distributions; for example, α = 1.57, β = 0.159, γ = 6.76 × 10⁻³, δ = 3.5 × 10⁻⁴. Generic-formula parameters: a1 = 1, b1 = 1, a2 = b2 = c2 = 0, h(x) = x, d = 0, α ≅ 1.4.

12. Cont and Bouchaud (2000). Percolation theory: one obtains the distribution of the size of the clusters (of financial operators), identical to the distribution of price changes according to percolation theory; the probability that financial operators interact with one another is c/N, where N is the total number of operators. Author formula: P(S) ~ S^{−5/2} exp(−ε²S), with S the cluster size and ε = 1 − c; for c = 1, one has a pure power-law, symmetric Lévy distribution with α = 3/2. Generic-formula parameters: a1 = 1, b1 = 1, a2 = ε², b2 = 0, c2 = 1, h(x) = x, d = 0, α ≅ 1.5.

13. Bouchaud (2002). Author formula: P(S) ~ S^{−5/2} exp(−ε²S), with α = 3/2. Generic-formula parameters: a1 = 1, b1 = 1, a2 = ε², b2 = 0, c2 = 1, h(x) = x, d = 0, α ≅ 1.5.
Table 6.3 Lévy nonstable distributions. Each entry gives the author formula and the parameters of the generic formula P(x) ~ C·x^{−(b1 + a1·α)}·e^{−(a2·h(x) + b2)^{c2}} + d for x → ∞.

[Continuation of an entry whose beginning is missing:] volatility ν² = ⟨G²⟩_T − ⟨G⟩²_T; h(x) = x, d = 0; α ≅ 3 (α ≈ 3 for 3 < g < 50); α ≈ 1.6.

17. Alejandro-Quinones, Bassler, Field, McCauley, Nicol et al. (2006). Theoretical model that uses a Fokker-Planck equation (describing anomalous diffusion) to derive the probability function, with diffusion coefficient D = D₀(1 + ε|u|) and scaling variable u = x/√t, where x is the price return. Author formula: F(u) = C exp(−|u|/(D₀ε))·(ε|u| + 1)^{α−1}. Generic-formula parameters: a1 = −1, b1 = 1, a2 = 1/(D₀ε), b2 = 0, c2 = 1, h(x) = x, d = 0.

18. Clementi, Di Matteo, and Gallegati (2006). They estimate the power-law tail exponent (α) using the Hill estimator for personal income in Australia and Italy: P(i > x) ∝ 1/x^α, with α ~ 2.3 for Australia and α ~ 2.5 for Italy. Generic-formula parameters: a1 = 1, b1 = 1, a2 = b2 = c2 = 0, h(x) = x, d = 0, α ≅ 2.5.
Table 6.4 Lévy truncated distributions. Each entry gives the author formula and the parameters of the generic formula P(x) ~ C·x^{−(b1 + a1·α)}·e^{−(a2·h(x) + b2)^{c2}} + d for x → ∞.

20. Mariani and Liu (2007). Study of market indices from Brazil, Mexico, and Argentina; exponentially truncated Lévy distribution with α < 2. They use the characteristic function in simulations (probably the same kind of equation as in number 21). Generic-formula parameters: a1 = 1, b1 = 1, a2 = 1/k, b2 = −l/k, c2 = 1, h(x) = x, d = 0; α < 2 for |x| > l.

21. Gupta and Campanha (1999). They propose a gradually truncated Lévy distribution: P(x) = c·L_α(x, Δt) for −l_C ≤ x ≤ l_C, and P(x) = c·L_α(x, Δt)·exp{−[(|x| − l_C)/k]^β} for |x| > l_C, where k and β are constants. Generic-formula parameters: a1 = 1, b1 = 1, a2 = 1/k, b2 = −l/k, c2 = β ≈ 0.6, h(x) = x, d = 0; α = 1.2 for |x| > l (α ≅ 1.2 and β ≈ 0.6 for the S&P 500).

22. Cont and Bouchaud (2000). Percolation theory: one obtains the distribution of the size of the clusters (of financial operators), identical to the distribution of price changes; the probability that operators interact with one another is c/N, where N is the total number of operators. Author formula: P(S) ~ S^{−5/2} exp(−ε²S), with S the cluster size and ε = 1 − c; for c < 1, one has an exponentially truncated Lévy distribution. Generic-formula parameters: a1 = 1, b1 = 1, a2 = −ε², b2 = 0, c2 = 1, h(x) = x, d = 0, α ≅ 1.5.

24. Michael and Johnson (2003). Tsallis power-law distribution: P(x, t) = (1/Z(t))·{1 + β(t)(q − 1)[x − x̄(t)]²}^{1/(1−q)}, so that P(x, t) = x^{−2/(q−1)} = x^{−(α+1)} for large x. Here q is the Tsallis parameter (1.64 for the S&P 500, a good fit to the empirical data); a value of q smaller than 5/3 allows a finite variance. For q = 5/3 one gets a Gaussian; for 1 < q ≤ 5/3, a Lévy nonstable distribution; for 5/3 < q < 3, a Lévy stable distribution (hence this case can be classified as Lévy stable). Generic-formula parameters: a1 = 1, b1 = 1, a2 = b2 = c2 = 0, h(x) = x, d = 0, α ~ 2.15.

28. McCauley (2003). Empirical exponential distribution function for intraday trading of bonds and foreign exchange, written in terms of returns x = ln(p(t)/p(t₀)): f(x, t) = [ν²/(γ + ν)]·e^{−ν(x − RΔt)} for x > RΔt, and f(x, t) = [γ²/(γ + ν)]·e^{γ(x − RΔt)} for x < RΔt, with ν, γ ∝ Δt. Generic-formula parameters: a1 ≠ 0, b1 = 0, a2 = 0, b2 = 0, c2 = 0, h(x) = log(x), d = 0.

29. Gunaratne and McCauley (2005). Same empirical exponential distribution function as in number 28, for intraday trading of bonds and foreign exchange, written in terms of returns x = ln(p(t)/p(t₀)). Generic-formula parameters: a1 ≠ 0, b1 = 0, a2 = 0, b2 = 0, c2 = 0, h(x) = log(x), d = 0.
CONCLUSION
WHAT KIND OF FUTURE LIES IN STORE FOR ECONOPHYSICS?
This book has investigated the development of econophysics and its potential implica-
tions for financial economics. Our project was to analyze the issues and contributions
of these two disciplines, focusing mainly on extreme values in stock prices/returns and on a common vocabulary and perspective. As we have explained, in the context of
the difficult dialogue between the two communities, there is now a pressing need to
adopt a homogenous presentation. This opens a door to a real comparison between
the contributions of the two disciplines; it paves the way for a profitable dialogue be-
tween the two fields; and it also offers conceptual tools to surmount barriers that cur-
rently limit potential collaborations between the two communities. For the purpose of
providing a homogenous perspective, we have identified the disciplinary constraints
ruling each discipline and then suggested some paths for econophysicists and financial
economists to overcome the current limitations of their collaboration.
Throughout the book, we have studied econophysics’ contributions in the light
of the evolution of financial economics. In addition, all our analyses have taken the
standpoint of financial economics. In so doing, we have expressed the dissimilarities
in terms of vocabulary and methods of modeling between the two fields in a common
terminology. We have sought to provide financial economists with a clear introduc-
tion to econophysics, its current issues, its major challenges, and its possible future
developments in relation to financial economics. We have shown that econophysics
literature holds a twofold interest for financial economists: first, this discipline pro-
vides alternative tools for analytical characterization of financial uncertainty (through
a specific statistical treatment of long-term samples). Second, by defending a strict
phenomenological method, econophysics renews the analysis of empirical data and
practical implications. Beyond the interest for financial economists, this book also
concerns econophysicists. Indeed, it gives an opportunity on the one hand to under-
stand the reasons why financial economists have been unable to use econophysics
models in their current form; and on the other hand, to identify the current challenges
econophysics has to solve in order to be accepted in financial economics and to pro-
vide more efficient models for analyzing financial markets.
In taking stock of the current situation of econophysics and in identifying poten-
tial paths for further investigation, this book is a first step toward more integrative re-
search between econophysics and financial economics. Collaborative research will
infinite population in which frequencies are associated with probabilities. In this per-
spective, the question of comparison becomes inevitable, but econophysicists and fi-
nancial economists work in different statistical frameworks, and at present there are
no uniform statistical tests allowing one to choose between a model from a strictly
non-Gaussian framework and one developed in the Gaussian framework.
Given the current situation, two future research areas can be investigated: on the
one hand, development of Bayesian tests in order to compare Gaussian-based models
with non-Gaussian ones and, on the other hand, development of (frequentist) statis-
tical tests for identifying power laws. A Bayesian approach uses the Bayes factor for
comparing two models using the ratio of the marginal likelihood of data used by the
models. The advantage of this testing approach lies in the fact that it is independent of
the statistical distribution of data. In other words, it offers a statistical framework for
comparing Gaussian and non-Gaussian models. Conversely, the second possible area
of investigation involves developing a new frequentist testing approach that would make it possible to compare non-Gaussian models with the Gaussian one.
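A minimal sketch of what such a Bayesian comparison could look like is given below: the Bayes factor is approximated as a ratio of marginal likelihoods computed over coarse parameter grids, with a Student-t distribution standing in for a heavy-tailed (non-Gaussian) model. The priors, grids, and simulated data are illustrative assumptions, not a proposal taken from the literature discussed here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns = stats.t.rvs(df=3, scale=0.01, size=1000, random_state=rng)   # fat-tailed "data"

def log_marginal(loglik):
    """Log marginal likelihood under a discrete uniform prior over the parameter grid."""
    m = loglik.max()
    return m + np.log(np.mean(np.exp(loglik - m)))

sigmas = np.linspace(0.005, 0.03, 60)
dfs = np.linspace(2.1, 10.0, 40)

loglik_gauss = np.array([stats.norm.logpdf(returns, scale=s).sum() for s in sigmas])
loglik_t = np.array([stats.t.logpdf(returns, df=v, scale=s).sum()
                     for v in dfs for s in sigmas])

log_bayes_factor = log_marginal(loglik_t) - log_marginal(loglik_gauss)
print(log_bayes_factor)   # positive values favour the heavy-tailed model
```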
One of our reasons for developing a generalized formula was precisely to create
room for the development of statistical tests. Indeed, in proposing a generalized for-
mula, we have solved the problem of the lack of unified non-Gaussian statistical de-
scription of financial returns, thus justifying the need to develop non-Gaussian statis-
tical tests. Moreover, our formula shows that econophysics models, though varied, are
technically compatible and can be presented through a unified and coherent frame-
work. This can then be used as a theoretical benchmark for a comparison with tra-
ditional Gaussian descriptions (GARCH-type models) used in financial economics.
Statistically speaking, the fact that the Gaussian framework can be expressed as a
specific case of the generalized equation shows that a comparison between econo-
physics models and financial ones makes sense. In our opinion, a Bayesian comparison
could provide an interesting conceptual tool in order to go beyond the differences in
terms of modeling since it would compare models based on a conditional distribu-
tion (GARCH approach) and models based on an unconditional description of return
(econophysics perspective).
The last point on the research agenda is the development of generative econophys-
ics models to explain the emergence of power laws in financial data. The main idea of
these models is to go beyond the mere statistical description given by current power-law models. As we saw, Gabaix et al. (2003) showed that institutional investors’ trades
have an impact on the emergence of a power law in the evolution of financial prices.
Although some technical models explaining the emergence of a power law in statistical
data exist, this area of investigation is still in its infancy concerning the economic in-
terpretation of these factors. Future collaboration between econophysics and financial
economics must pay more attention to this question.
To conclude, although the suggested agenda raises a number of questions and chal-
lenges, it creates many research opportunities by improving collaboration between fi-
nancial economists and econophysicists.
NOTES
Introduction
1. The literature has expanded greatly since the early 2000s (Bouchaud, Mezard, and Potters
2002; Potters and Bouchaud 2003; McCauley 2009; Gabaix 2009; Lux 2009; McCauley,
Gunaratne, and Bassler 2007; Sornette 2014; Bouchaud 2002; McCauley 2006; Stanley and
Plerou 2001; Durlauf 2005, 2012; Keen 2003; Chen and Li 2012; Ausloos 2001; Chakrabarti
and Chakraborti 2010; Farmer and Lux 2008; Carbone, Kaniadakis, and Scarfone 2007;
Ausloos 2013; Jovanovic and Schinckus 2016; Schinckus 2010a, 2010b).
2. Those who are interested in such presentations can refer to the literature (Bouchaud and
Potters 2000; Cai, Lax, and Xu 2006; Chakrabarti, Chakraborti, and Chatterjee 2006;
Malevergne and Sornette 2005; Mantegna and Stanley 2000; Roehner 2002; Savoiu 2013;
Sornette 2003, 2006; Voit 2005; Bouchaud and Potters 2003; McCauley 2004; Richmond,
Mimkes, and Hutzler 2013; Takayasu 2002; Slanina 2014; Takayasu, Watanabe, and
Takayasu 2010; Paul and Baschnagel 2013).
Chapter 1
1. In this book, we use the term “stock market variations” to cover fluctuations in both the
prices and the returns of securities.
2. It is worth mentioning that this statement is true for any science. Physics, for instance, is
based on Euclidean geometry (or, since the beginning of the twentieth century, non-Euclidean geometries, as in quantum physics). Euclidean geometry is founded on five axioms or
postulates, that is, five propositions that are accepted without proof. One of these postulates,
for example, states that a straight line segment can be drawn joining any two points. By
changing these postulates, it has been possible to create non-Euclidean geometries, which
enable the creation of other mathematics.
3. Jules Regnault (1834–1894) came from modest beginnings but died a millionaire, a fact
probably not unconnected with the model he proposed for determining stock market
variations. A biography of Regnault can be found in Jovanovic 2004, 2006a, 2016. His work
is analyzed by Jovanovic (2006a, 2000, 2002, 2016) and by Jovanovic and Le Gall (2001).
4. On Quételet, see Hankins 1908; Porter 1986; or Donnelly 2015.
5. Regnault never explicitly named the normal law: the term only appeared in 1877 with
Wilhelm Lexis (Armatte 1991).
6. The central-limit theorem, along with the law of large numbers, is one of the most important
results of probability theory. This theorem is crucial because it states that the sum of many
independent random variables with finite variance will tend to the normal distribution.
This phenomenon was first observed by Gauss. Proof of the theorem was provided by de
Moivre and Laplace and published in 1738 in the second edition of The Doctrine of Chances
by Abraham de Moivre. It was subsequently generalized by Gnedenko and Kolmogorov in
1954. Let us remind readers that the central-limit theorem states that the average of many
independent and identically distributed random variables with finite variance tends toward
a normal distribution irrespective of the distribution followed by the original random
variables.
7. The term “random” first appeared in its statistical meaning in 1898 (Frankfurter and McGoun
1999, 166–67), and the term “random walk” was first used in 1905 by Karl Pearson (1905a,
1905b).
8. This term is explicitly used by Regnault (1863, 40).
9. Mathematically, his model is of the following type: P_{t+1} = P_t + ε_{t+1}, where ε = {ε_t, t ∈ ℕ} is white noise and P_t is the price of the bond at time t. As a result, the expectation of profit between two periods is nil, E(P_{t+1} − P_t) = 0.
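For illustration, a tiny simulation of this model (with an arbitrary white-noise scale):

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0.0, 1.0, size=250))   # P_{t+1} = P_t + eps_{t+1}
print(np.diff(prices).mean())                              # close to zero: E[P_{t+1} - P_t] = 0
```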
10. These recordings were a way for the French government to control government bond prices.
France looks like an exception, because in other countries, official price data recording
started later (Preda 2007, 48). For instance, “The Wall Street Journal began publishing
closing quotations only in 1868. The NYSE got an official quotation list on February 28,
1872. ‘Official’ did not mean, however, that the NYSE guaranteed price data. In the 1860s
in London, only the published quotations of consols were closing prices” (Preda 2006,
760). By contrast, continuous data began to be organized with the introduction in 1923
of the Trans-Lux Movie Ticker (Preda 2006). Of course, technology was not available for
recording these continuous flows of data.
11. It is worth mentioning that Regnault’s ideas were diffused and used both during his lifetime
and also after ( Jovanovic 2016).
12. Bachelier’s research program and his work are presented by Courtault et al. (2002),
Jovanovic (2000, 2012), Davis and Etheridge (2006), and Ben-El-Mechaiekh and Dimand
(2006, 2008).
13. Here we preserve Bachelier’s notation, which we have reproduced in contemporary
language.
14. We should point out that equation 1.2 is not, strictly speaking, the law of probability of
a Brownian movement, but that of a Brownian movement multiplied by the standard
deviation, σ, which here is equal to √(2π)k.
15. “Primes” (i.e., premiums) are derivative contracts similar to options but with some
differences, particularly in the evolution of their prices (Courtadon 1982; Cuoco and
Barone 1989; Jovanovic 2016). A “prime” is a conditional forward contract that allows the
holder to buy (or sell) an underlying asset at the strike price at the liquidation date of the
primes. To buy this contract, traders had to pay a fixed forfeit called “prime” (i.e. premium).
The expression “dont” served to indicate the amount to be forfeited. One traded Primes from
France to Central Europe (Weber 2009, 455).
16. Note that Bachelier did not evoke this second type of derivative contract, which did not exist
on the French stock market at the time. For this second type of contract, his calculations
were purely theoretical mathematical explorations with no empirical basis.
17. See Cramer 1983 for a presentation of the context and contributions of this period.
18. In 1906 Andrei Markov introduced Markov chains for the purpose of generalizing the law of
large numbers for a series of interdependent experiments.
19. Smoluchowski (1906) described Brownian motion as a limit of random walks.
20. Wiener (1923) carried out the first rigorous mathematical study of Brownian motion and
proved its existence.
21. One of the difficulties in reading Bachelier stemmed from the language he used, which was
not that of pure mathematics but that of mathematical physics: “In fact, the mathematicians
of the 30s who read Bachelier felt that his proofs are not rigorous and they are right, because
he uses the language of a physicist who shows the way and provides formulas. But again,
there is a difference between using that language and making mistakes. Bachelier’s arguments
and formulas are correct and often display extreme originality and mathematical richness”
(Taqqu 2001, 23).
22. One of the major contributions of Kolmogorov’s 1931 article was to make rigorous the
move from discrete to continuous schemes, a development that is a direct continuation
of Bachelier’s work. Moreover, Bachelier also influenced Khinchine (1933), with whom
Kolmogorov worked.
23. The two main schools studying probability theory prior to the 1940s were the French and the
Russian. From the 1940s onward, a number of important papers were developed in Japan,
influenced by Kiyosi Itô’s work in particular. On Japanese contributions, see Watanabe 2009.
24. On the Cowles Commission and its links with financial econometrics, see Morgan 1990;
Mirowski 2002; Christ 1994; and Dimand 2009. See also Hendry and Morgan 1995 on the
foundations of econometric analysis.
25. Working was professor of economics and statistics at Stanford University’s Food Research
Institute. He was never a member of the Cowles Commission, but took part in its summer
conferences.
26. A Tippett table is a “random number table,” used to produce series of random numbers.
There are several such tables, the first being created by the English statistician Leonard
Henry Caleb Tippett in 1927. It is made up of 10,400 four-digit numbers extracted from
nineteenth-century British census records.
27. Slutsky published his 1927 article in Russian; it was translated into English in 1937.
Barnett (2011) provides a detailed presentation of Slutsky’s contributions from the
perspective of the history of economics.
28. For example, Brown, Goetzmann, and Kumar (1998) redid Cowles’s calculations,
disregarding these ambiguous forecasts, and obtained the opposite result: William Peter
Hamilton’s recommendations made it possible to achieve a better performance than
the market. For further details on this point, see also Dimand 2009 and Dimand and
Veloce 2010.
29. Note that Cowles and Jones (1937) compared the observed distribution of stock prices with
the normal distribution to determine the possibility of making a profit.
30. In his article, Roberts used the same graphical methodology as Slutsky and Working
to demonstrate the random character of stock price variations. He did not analyze the
distribution of prices.
31. The work of Regnault and Bronzin seems unknown to American writers, and almost none
of the American writers working in financial econometrics prior to 1955 were aware of the
work of Bachelier. Two exceptions were Samuelson, who in the 1930s learned of Bachelier’s
work, and Arne Fisher, who in 1922 suggested applying Bachelier’s formulas to securities
( Jovanovic 2012).
32. On the emergence of financial economics see Jovanovic 2008, 2009a, 2009b; MacKenzie
2006; and Whitley 1986a.
33. For a retrospective on Markowitz, see Rubinstein 2002 and Markowitz 1999. On Roy’s
contribution, see Sullivan 2011.
34. As previously mentioned, diversification strategy was already used at the end of the nineteenth
century (Edlinger and Parent 2014; Rutterford and Sotiropoulos 2015). Moreover, the
relationship between risk and return had already been emphasized by Williams (1938),
although he did not provide an operational definition of this link.
35. This theorem can actually be thought of as an extension of the “separation theorem” originally
developed by Irving Fisher (1930). For an introduction to the work of Fisher, see Dimand
and Geanakoplos 2005. For a retrospective look at the Modigliani and Miller model, see
Miller 1988 and Rubinstein 2003.
36. Formally, these definitions take place in a probability space (Ω, F, P) with filtration (Φ_n)_{0≤n≤N}. A filtration is an ascending—or descending—family of tribes Φ_i: Φ_1 ⊂ Φ_2 ⊂ Φ_3 ⊂ … ⊂ Φ_{n−1} ⊂ Φ_n, a tribe being a family of parts of the set of all states of nature Ω, verifying stability hypotheses. The tribe Φ_t is a list of the events of which one can say at date t whether they have occurred or not; it translates all the information known at date t.
37. Note that if stock exchange prices follow a martingale, the expectation of profit, y, between
two consecutive periods is nil, E(y_{t+1}/Φ_t) = 0, taking into account information Φ_t. In other
words, this is a fair game, as is the random walk also. Following Samuelson’s and Mandelbrot’s
articles, random movements of stock market prices were represented using martingales.
38. Markowitz (1952, 1955), for instance, was the first scholar to apply the expected-utility
theory in financial (portfolio) management.
39. See Mackenzie 2006, 72–73; Whitley 1986a, 1986b; Fourcade and Khurana 2009; and
Bernstein 1992.
40. The same issues were raised in training sessions given by Financial Analysts Seminar, one of
the leading professional organizations connected with financial markets (Kennedy 1966).
41. It is worth mentioning that the CAPM is often presented as a logical extension of the
portfolio theory, developed by Markowitz (1952), based on expected-utility theory.
42. See, for instance, Cohen and Pogue 1967.
43. As explained by Jovanovic (2008), this polarization of results largely stems from the
theoretical frameworks propounded at MIT (Keynesian) and the Graduate School of
Business at the University of Chicago (monetarist).
44. Cowles and Jones (1937) had obtained a statistical dependence on monthly or weekly
averages of daily prices. Working (1960) explained that in this case it is possible to obtain
a degree of dependency because statistical analyses based on average prices can introduce
artificial correlations between consecutive variations.
45. In Fama’s thesis, this equilibrium value is the fundamental—or intrinsic—value of a security.
The signification of this value is unimportant: it may be the equilibrium value determined by
a general equilibrium model, or a convention shared by “sophisticated traders” (Fama 1965a,
36 n. 3). Fama later dropped the reference to a convention.
46. This is the most commonly accepted definition of efficiency, which Fama proposed in his
1970 paper: “A market in which prices always ‘fully reflect’ available information is called
‘efficient’ ” (1970, 383).
47. Fama acknowledged the difficulties involved in this joint test in a report on efficiency
published in 1991 (Fama 1991, 1575–76).
48. Nearly all direct contributors to this hard core have received the Nobel Prize in
Economics: Markowitz, Sharpe, and Miller were joint winners in 1990; Merton and Scholes
received the award jointly in 1997. Although the contributions of Black (1938–1995)
were explicitly recognized, he was not a named recipient, as the prize cannot be awarded
posthumously. Fama was awarded in 2013.
49. See Mehrling 2005 on Fischer Black, and MacKenzie 2006 on the influence of this model.
50. Merton (1973) would later show that use of the CAPM was unnecessary.
51. A security is said to be contingent if its realization depends on states of another factor (price
of another asset, climatic conditions, etc.).
52. Elimination of risk is a misnomer, because there is always a modeling risk related to, among
other things, the choice of the process or distribution to model stock market variations.
53. In an economy having T periods, the existence of a complete system of markets allows agents
from the initial moment to make intertemporal choices for all present and future prices at
determined prices that are known to all. The organization of such a system of markets appears
very complicated, since it would require a market for every good at every period. Arrow
(1953) showed that a complete system of markets can be replaced by a financial market
through which the assets exchanged allow agents to transfer their revenue independently
in each state (Arrow’s equivalency theorem). Subsequently Ross (1976b) showed that
options can complete incomplete markets and thus lend greater verisimilitude to Arrow-
Debreu general equilibrium. Bayeux-Besnainou and Rochet (1996) extended Ross’s work
to a multiperiod model. However, it was the work of Harrison, Kreps, and Pliska that would
provide the rigorous mathematical framework for this intuition.
54. John McQuown at Wells Fargo and Rex Sinquefield at American National Bank in
Chicago established the first Standard and Poor’s Composite Index Funds in 1973
(http://www.ifa.com).
55. These are the PriceWaterhouseCooper and BGI report (1998), 25 Years of Indexing, and the
PriceWaterhouseCooper and BDM Alliance report (1999), Investment Style and Its Growing
Role in Packaged Investment Products.
56. This influence is also found in the Basel II Accords (Pillar 3 explicitly refers to efficiency).
57. Specifically, these authors demonstrate two fundamental theorems. Th. 1: A market is
arbitrage-free (efficient) if there exists at least one martingale measure.
This means that in a market free of arbitrage the stochastic price process for financial
assets must have at least one martingale measure—under which the expected value of the
stochastic price does not change in time. Th. 2: A market is complete if there is a unique
martingale measure for the financial assets.
This theorem gives the conditions for a market to be complete: the stochastic price must
have the martingale property and there must be only one martingale measure. Thus in order
to price a financial asset (such as options) one must find the unique martingale probability
measure and then use the martingale property for the stochastic price.
58. In this case there is more than one probability measure that causes the stochastic price
process to be a martingale, and, consequently, this is an arbitrage-free market.
For an incomplete market it can be shown that the price of a contingent security in a
market that does not allow arbitrage is found in an interval of values, x_min ≤ E_Q[β_T X] ≤ x_max, with X the contingent security and β the discount process (β_T = e^{−rT}). The existence of many possible prices is equivalent to the existence of a risk imbalance between the buyer and
the seller of the option. In this case, to have a unique price for the options, we have to add a
minimization procedure for the risk that leads to some mathematical conditions in the final
option formula.
59. We might add that the continuity that characterizes Brownian motion is also a vital element
of the Black and Scholes model.
60. http://post.nyssa.org/nyssa-news/2010/10/in-defense-of-the-quant-ii-the-ups-and-downs-of-the-normal-distribution.html.
Chapter 2
1. The expression is P(x) = (1/√(2πσ²)) exp[−(x − x₀)²/(2σ²)], defined for −∞ < x < +∞. Therefore, deviations from the mean larger than a few standard deviations are rare for the Gaussian law, as one can see in figure 2.3.
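For illustration, the following lines compute how rare such deviations are under the Gaussian law:

```python
from scipy import stats

for k in (1, 2, 3, 5):
    print(k, 2 * stats.norm.sf(k))   # P(|X - mean| > k*sigma): ~0.32, 0.046, 0.0027, 6e-07
```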
2. Olivier pointed this out in his thesis (1926, 81), which was one of the first (if not the first) exhaustive analyses of time-series analysis applied to economic phenomena: “Regardless of the importance of the problem of the dispersion of prices and their distribution around their mean—and for my part I believe the problem to be fundamental—hitherto it has been only
very insufficiently studied. This is explained by the very small number of prices tracked by
financial and economic newspapers that publish price index numbers.”
3. Cowles and Jones (1937) were the only authors to compare the distribution and cumulative
frequency of observed series of stock price variations with those of the random series. They
used them in order to determine the distribution of the expected net gain (Cowles and Jones
1937, 293).
4. Armatte (1995), Boumans (2007), Friedman (2009), Le Gall (1994, 2006), Morgan
(1990, 2012), and Morgan and Klein (2001) provided careful analysis of this development.
However, we should point out that index numbers did not exist before that time, because
goods were not sufficiently standardized and quantifiable to guarantee reliable data.
5. Barometers being defined as “time-series representations of cyclical activity” (Morgan
1990, 57).
6. See, for instance, Friedman 2009 on this barometer. We can also mention Wesley Clair
Mitchell at Columbia and Irving Fisher at Yale, who also promoted rigorous empirical
approaches for understanding economic fluctuations.
7. Fisher (1911) published the first systematic research on the mathematical formula that can
be applied to price indices calculus.
8. Mitchell (1915) can be considered the first major study, because he provided a theoretical
and a practical analysis of index numbers.
9. As we explained in chapter 1, Osborne made the same observation in 1959 and also suggested
using the logarithm of the price: finding that the distribution of price changes was not normal,
he suggested the use of logarithms of stock prices, which did exhibit the normal distribution.
By proceeding in this way, these authors sought to return to the normal distribution so that
they could apply the results from statistics and the probability theory that we know today.
10. The cotton price time series was the most complete, and daily quotations were available.
11. We can mention, for instance, Eugene Fama, Benjamin King, and Arnold Moore at Chicago;
and Walter Barney, John Bauer, Sidney Levine, William Steiger, and Richard Kruizenga
at MIT.
12. For instance, if X1 and X2 are independent random variables that are normally distributed
with the respective parameters (µ₁, σ₁²) and (µ₂, σ₂²), then their sum, X1 + X2, is also a normal random variable with parameters (µ₁ + µ₂, σ₁² + σ₂²).
13. Since there is no closed-form formula for the densities, a Lévy distribution is often described by its characteristic function. See Nolan 2005 and Samorodnitsky and Taqqu 1994 for different equivalent
definitions of stable Lévy distributions.
14. Only three parameters appear in equation (2.1) because the fourth one, δ, is included in
the set ℜ.
15. See Nolan 2009 for the demonstration.
16. Paretian law was the first statistical regularity associated with leptokurtic distribution used
in science. Pareto used it in his Cours d’économie politique (1896–97) to characterize the
distribution of income and wealth.
17. “Scale invariance,” a term that Mandelbrot held dear, is in a sense the geometric translation
of the stability of the distribution of a stochastic process in probability theory.
18. The general central-limit theorem claims that a sum of many independent and identically
distributed random variables with power-law distributions decreasing through a Paretian
law, as 1/x^{α+1} where 0 < α < 2, will tend to be distributed according to a small attractor
distribution (i.e., an asymptotic limit for statistical distribution). When α = 2, we have the
Gaussian case of central-limit theorem where the variance is finite and for which the attractor
distribution is normal, whereas a 0 < α < 2 process converges instead toward stable Lévy
distribution, as we will explain in this chapter. In other words, the sum of random variables
according to a Lévy law, distributed independently and identically, converges toward a stable
Lévy law having the same parameters.
19. This speculative approach is identical, for example, to that which led to the discovery of the
Higgs boson, whose existence was first theoretically assumed (and demonstrated) before
being empirically observed.
20. When α = 1, there is no impact of an increasing diversification on the scale factor, and when
α < 1, the scale factor increases in case of increasing diversification (Fama 1965b, 412).
21. Student distribution arises when estimating the mean of a normally distributed population
in situations where the sample size is small and population standard deviation is unknown.
This distribution is symmetric and bell-shaped, like the normal distribution, but has heavier
tails. It approaches the normal distribution when the number of degrees of freedom grows.
22. Obviously, stability concerns Gaussian processes too.
23. Fama and Roll (1971) showed that a stable nonnormal process cannot be obtained from a
mixture of (stable) normal processes.
24. It is worth mentioning that these empirical series were based on relatively small samples
in comparison with the data available nowadays in finance. This point is quite important
because the length of price series can reduce the scope of the results. For instance, the
occurrence of a power law only makes sense on very large samples and working with a small
sample directly influences the statistical description that will emerge from the data.
25. Note that the technique used by Fama and Roll (1968) was generalized by McCulloch (1986),
who provided an estimation for all four parameters with no restriction on the symmetry of
distribution. Basically, McCulloch (1986) used the information given by the empirical
distributions’ quantiles, from which he computed the four parameters of the process.
26. The method based on iterated regressions developed by Press (1972) was generalized by Arad
(1980) and Koutrouvelis (1980). Also worthy of mention is the Paulson-Holcomb-Leitch
(1975) method, which minimizes the difference between the theoretical characteristic function
and a polynomial extrapolation of the empirical characteristic function. For further information
about these methods, see Embrechts, Klüppelberg, and Mikosch 1997b and Nolan 2009.
27. This relationship between risk and return was given shape in Markowitz’s seminal portfolio
theory (1952, 1959).
28. For instance, Blume (1968) and Officer (1972).
29. Since the only statistical condition for describing this jump is the provision of a finite mean
in order to ensure the finiteness of variability (in line with the mean variance approach).
30. “The random-walk model, however, even in modified logarithmic form, has been found
inadequate on the basis that the tails of the distribution of price changes (or their logarithms)
appear to be too long, based on sample evidence, to be accounted for by the appropriate
normal distribution. One effort at compensation engendered the variant of the random-walk
model due to Cootner (1962), which added reflecting barriers to the Markov (random-walk)
process. Mandelbrot (1962) proposed a model for the behaviour of security-price changes that
generated keen interest in the finance area, in the mathematics of stable laws” (Press 1967, 318).
31. See Cont and Tankov 2004 for an overview of this literature.
32. “One-block reasoning” is not the only difference between the jump-diffusion processes
presented above and pure-jump processes. Indeed, while the first can use stable Lévy processes
to characterize the jump part of the model, the latter deal only with nonstable Lévy processes
(i.e., with a characteristic exponent alpha > 2), allowing them to value all statistical moments.
33. See Cont and Tankov 2004 for further details on this literature.
34. This time-dependence dynamic is defined by the modeler.
35. See Francq and Zakoian 2010; Bauwens et al. 2006; Tim 2010; and Pagan 1996 for further
details on these categories of models.
Chapter 3
1. We have borrowed the term from Le Gall (2002, 5), who provides an excellent introduction
to the analysis of methodology transfer between physical sciences and economics.
2. Note that the explanations about statistical physics are borrowed from an article by Richard Fitzpatrick (2012) available on the Web (farside.ph.utexas.edu/teaching/sm1/statmech.pdf).
3. As Fitzpatrick (2012) noticed, to solve a system with 6 × 10²³ particles exactly, we would have to write down 10²⁴ coupled equations of motion, with the same number of initial
conditions, and then try to resolve the system.
4. In particular the central-limit theorem.
5. For instance, as Fitzpatrick (2012) commented, the familiar equation of state of an ideal
gas, PV = nRT, is actually a statistical result. In other words, it relates the average pressure
(P) and the average volume (V) to the average temperature (T) through the number (n)
of particles in the gas. “Actually, it is virtually impossible to measure the pressure, volume,
or temperature of a gas to such accuracy, so most people just forget about the fact that the
above expression is a statistical result, and treat it as a law of physics interrelating the actual
pressure, volume, and temperature of an ideal gas” (Fitzpatrick 2012, 6).
6. As Lesne and Laguës (2011, 3) point out, the study of critical phenomena was initiated
by Cagnard de Latour in 1822 and then boosted with the work of Thomas Andrews from
1867 onward. In 1869 Andrews observed a spectacular opalescence near the critical point
of carbon dioxide. However, “The 1960s saw the emergence of a new general approach to
critical phenomena, with the postulation of the so-called scaling laws, algebraic relations
holding between the critical exponents for a given system” (Hughes 1999, 111).
7. It is important to mention that the concept of equilibrium can be associated with the notion
of phase in an “all other things being equal” analogy. Indeed, in the case of continuous
variations of pressure/temperature, the phase is progressively moving toward a critical state,
implying that it cannot be associated with a static equilibrium.
8. Lesne and Laguës (2011) and Lesne (1998) provide an extremely clear and exhaustive
presentation of renormalization methods. These papers make a very good introduction to
intuitions and formalisms. Stanley (1999) provides a short presentation. See also Wilson
1993; Jona-Lasinio 2001; Calvo et al. 2010; and Stanley 1971 for further details.
9. Sornette (2006, chap. 2) provides a detailed presentation of this method.
10. For more details, see Samorodnitsky and Taqqu 1994; and Lesne 1998.
11. For instance, it exists in the work of Euclid and Galileo.
12. To understand the importance of this approach, one has to keep in mind that the macroscopic
level is directly observable—for instance, a table—but the microscopic level—the molecules
that constitute the table—is not directly observable (one needs a tool, as a microscope).
13. It is worth mentioning that large variation correlation length appears to be ruled by a power
law, as the next section will detail.
14. Proceeding by analogies is very common in sciences, and particularly in economics. For
instance, Henri Lefevre used analogies with the human body to analyze financial markets
( Jovanovic 2006). Cohen (1993) provides a good analysis of the use of analogy, homology,
and metaphor in interactions between the natural sciences and the social sciences.
15. The physicist Serge Galam initiated this discipline in papers published between 1980
and 1983, and proposed the term sociophysics in 1982 (Chakrabarti, Chakraborti, and
Chatterjee 2006; Stauffer 2005, 2007; Galam 2008). Săvoiu and Iorga-Simăn (2013) give
some historical perspective on sociophysics.
16. For instance, corporate revenue (Okuyama, Takayasu, and Takayasu 1999), the emergence
of money (Shinohara and Gunji 2001), and global demand (Donangelo and Sneppen 2000).
17. In 1977, the Toronto Stock Exchange became the first stock exchange to be fully automated.
Then, the Paris Stock Exchange (now Euronext) imported Toronto’s system and became fully
automated at the end of the 1980s. These changes occurred for NASDAQ between 1994 and
2004, and later for the NYSE in 2006 with the introduction of the NYSE hybrid market. The
Tokyo Stock Exchange switched to electronic trading for all transactions in 1999.
18. The term “high-frequency data” refers to the use of “intraday” data, meaning that price
changes can be recorded at every transaction on the market.
19. Today there is a large literature on this subject, in particular within the theory of financial market microstructure, which focuses on how specific trading mechanisms and strategic behaviors affect prices. Maureen O'Hara (1995), one of the leading lights of this theoretical trend, provides a good introduction to this field.
20. As explained in chapter 2, a process is said to be a Lévy process when it has (1) independent increments; (2) stationary increments; and (3) continuity in probability. Lévy processes include a large category of statistical distributions (Poisson, Gaussian, Gamma, etc.).
21. There are some exceptions, particularly in the most recent works (Nakao 2000). We will
come back to these recent works in chapter 5.
22. Let us mention the so-called Kosterlitz-Thouless transition, which is an exception characterizing a transformation from a disordered vortex-fluid state with equal numbers of vortices of each polarity to an ordered, molecule-like state composed of pairs of vortices with opposite polarities. We thank Marcel Ausloos for this clarification. For further details on this point, see Hadzibabic et al. 2006.
23. Shalizi’s notebook, http://bactra.org/notebooks/power-laws.html.
24. Precisely, ξ(T) ∝ |T − Tc|^(−ν).
25. Precisely, at the critical point T = Tc the correlation length diverges; there is no typical size. Hence, at the critical point: ξ(T) → ∞; r/ξ(T) → 0; e^(−r/ξ(T)) → 1; and (1/r^α) e^(−r/ξ(T)) → 1/r^α.
26. The correlation length is finite. The exponential term “wins” over the power-law term,
since it decreases more rapidly as the distance r increases. Hence the correlation function is
described by an exponential function.
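To summarize notes 24 through 26 compactly, writing G(r) for the correlation function at distance r (the symbol G is a notational convenience, not taken from the cited sources), one can use the standard form
\[
G(r) \;\propto\; \frac{e^{-r/\xi(T)}}{r^{\alpha}} .
\]
At the critical point, ξ(T) → ∞, the exponential factor tends to 1, and G(r) decays as the power law r^(−α); away from the critical point, ξ(T) is finite and the exponential decay dominates.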
27. Shalizi’s notebooks are available at http://bactra.org/notebooks/. One can also read Jona-
Lasinio 2001 for further details.
28. The first studies on statistical scale invariance came from Kolmogorov (1942, 1941), who studied
data related to phenomena associated with turbulence. His research on turbulence also led him to
introduce power laws (and their scaling property) into physics during the same period. These concepts
progressively became widespread in the discipline (Hughes 1999). In the 1960s, scholars such
as Domb and Hunter (1965), Widom (1965a, 1965b), Fisher (1967), and Kadanoff (1966)
established some of the most important theoretical results on scaling properties, which
contributed to the crucial developments that have occurred in physics since the 1970s.
29. We use here the simplest case of scaling invariance (i.e., a scaling factor of 1.1). For more information about more complex scaling properties, see Hartemink et al. 2001.
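To make the scaling property explicit: for a power law f(x) = C x^(−μ), rescaling the variable by any factor λ only rescales the function by a constant, since
\[
f(\lambda x) \;=\; C(\lambda x)^{-\mu} \;=\; \lambda^{-\mu} f(x) .
\]
With the scaling factor λ = 1.1 used in the text, the shape of the distribution is therefore left unchanged; only its overall scale is multiplied by the constant 1.1^(−μ).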
30. The negative sign is not really important, since it depends on the choice of axes for the histogram. We base our presentation on Li et al. 2005, which provides a clear mathematical presentation of power laws. Newman (2005) also provides a clear and exhaustive mathematical analysis (including the determination of the moments).
31. Stable Lévy processes can be characterized through a specific power law exhibiting independence of increments (a defining element of Lévy processes). While all power laws with 0 < μ ≤ 2 are said to be stable, only those with 0 < μ < 2 are associated with a distribution whose variance is infinite, since an exponent equal to 2 corresponds to a Gaussian distribution (and therefore a finite variance). In other words, a Gaussian process can be looked on as a specific case of stable Lévy processes or, to put it another way, a Gaussian distribution can mathematically be expressed as a stable power law with an exponent equal to 2 (Nolan 2009). It is worth mentioning that independence of increments is not a required condition for power laws, which can also describe dependent increments (the reason power laws are also used for characterizing long memory). In other words, statistical stability implies scaling properties, while the reverse is not true (Schoutens 2003).
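As a minimal numerical sketch of this point (assuming SciPy's levy_stable implementation; the sample size and the threshold of five scale units are purely illustrative), one can compare the tail weight of a stable law with exponent below 2 to the Gaussian case obtained when the exponent equals 2:

import numpy as np
from scipy.stats import levy_stable

n = 100_000
for mu in (1.5, 2.0):  # mu = 2 corresponds to the Gaussian case
    sample = levy_stable.rvs(mu, 0.0, size=n, random_state=0)
    # fraction of draws beyond five scale units: markedly heavier tails for mu < 2
    print(mu, np.mean(np.abs(sample) > 5.0))

The exceedance frequency for μ = 1.5 is substantially larger than for μ = 2, illustrating the infinite-variance regime discussed above.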
32. By attractor, we mean an asymptotically approached set of points in phase space, which is
invariant under dynamics.
33. Note that this phenomenological approach is much less common in economics. On the
phenomenological approach in physics, see Cartwright 1983.
34. Mitzenmacher (2004) and Simkin and Roychowdhury (2011) are the best-documented
articles.
35. In addition to the variation of the characteristic exponent (α), econophysicists also try to fit the data by adjusting the constant C in the equation ln P[r > x] = ln C − α ln x. The variation of this parameter C explains why the curves do not begin at the same point in figure 3.7. It is worth mentioning that, because power laws are scale invariant, the positive constant C, associated with the scale of analysis, does not play a key role in the statistical characterization of data.
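As an illustrative sketch of such a fit (a crude log-log regression on simulated Pareto data; the function and data below are hypothetical, and Clauset, Shalizi, and Newman 2009 recommend maximum-likelihood estimation over this kind of regression):

import numpy as np

def fit_tail(sample, x_min):
    # crude estimate of alpha and C from ln P[r > x] = ln C - alpha ln x
    x = np.sort(np.abs(sample))
    x = x[x >= x_min]
    survival = 1.0 - np.arange(x.size) / x.size  # empirical P[r > x]
    slope, intercept = np.polyfit(np.log(x), np.log(survival), 1)
    return -slope, np.exp(intercept)

rng = np.random.default_rng(0)
toy_returns = rng.pareto(3.0, size=50_000) + 1.0  # synthetic data with tail exponent 3
alpha_hat, C_hat = fit_tail(toy_returns, x_min=1.0)
print(alpha_hat, C_hat)

The estimated exponent should be close to the tail exponent of 3 used to generate the synthetic sample.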
36. It is worth mentioning that Vitanov et al. (2014), like many others, showed that this is not
always true.
37. According to Rybski (2013), Berry and Okulicz-Kozaryn (2012) distinguish three periods
in research on city-size distributions. The first period was initiated by Auerbach’s empirical
discovery. Then Zipf ’s work (1949) was the starting point of the second period that gave
rise to empirical studies for estimating the exponent for city distributions and cross-national
analyses. During this second period, an article by Simon (1955) proposed a statistical
process for explaining these observations. He suggested that the probability of a city growing
by a unit is proportional to its size, which is known today as preferential attachment. Finally,
the lack of an intuitive theoretical model led to the third phase. Despite a variety of early
modeling approaches from various disciplines, Berry attributes the first economic theory
addressing city-size distributions to Gabaix (1999).
38. In 2002 the journal Glottometrics dedicated a special issue edited by Gunther Altam to
Zipf ’s work.
39. See Dubkov et al. (2008) for an astonishing list of stable Lévy-type phenomena in physical,
biological, chemical, and social systems.
40. Lévy (2003) and Klass et al. (2006) confirmed Pareto’s results by showing that wealth
and income distribution can both statistically be characterized by a power law. Amaral
et al. (1997) explained that annual growth rates of US manufacturing companies can also
be described through a power law, while Axtell (2001), Luttmer (2007), and Gabaix and
Landier (2008) claimed that this statistical framework can also be used to characterize the
evolution of companies’ size as a variable of their assets, market capitalization, or number
of employees. These “size models” have since been applied for describing the evolution of
cities’ size (Cordoba 2008; Eeckhout 2004; Gabaix 1999; Krugman 1996). In the same vein,
Lux (1996) and Gabaix et al. (2007) observed that the large fluctuations on the financial
markets can be captured through a power law. We can also read Gabaix’s (2009) survey on
power laws in economics and finance.
41. The next two chapters will discuss this point and some limits of this approach.
42. See Ausloos 2014a for a detailed analysis of these analytical tools and Ausloos 2014b for an
application of these techniques.
43. See Ding 1983 for further details on the use of asymptotic (theoretical) statistics for
describing physical (empirical) systems.
44. This idea of truncation dates back to the 1700s, with the famous St. Petersburg paradox (Csorgor and Simons 1983). Indeed, many works have treated this idea as empirically grounded because all physical systems are finite.
45. The finite configuration of all samples implies that there is necessarily an upper bound
rank making the statistical description of these samples possible. However, as previously
mentioned, statistical physicists wanted to combine the asymptotic properties of power laws
with the finite dimension of empirical samples. That is the reason why some physicists went
beyond the use of ranks in the treatment of power laws (Mantegna 1991; Mantegna and
Stanley 1994). On this point, see also Ausloos 2014a.
46. It is worth noting that Mandelbrot (1963) had already mentioned the possibility of using truncation techniques, at low rank scales, in the 1960s.
47. See Figueiredo et al. 2007 for a statistical explanation of this slow convergence.
48. However, knowing that critical phenomena involve a continuous yet abrupt change, we can suppose that their choice was not independent of this consideration.
49. It is worth mentioning that the first exponential truncation technique was introduced in physics by Koponen (1995). However, this author did not justify it with a physically plausible argument, as became the practice in the literature after the critique made by Gupta and Campanha (1999).
50. The truncation technique based on a gradual cutoff can be considered a specific case of the
exponential technique (Matsushita, Rathie, and Da Silva 2003).
51. These constants are usually derived from physical theories (Gupta and Campanha 2002),
thus offering a variety of potential definitions for the truncation function.
52. It is worth mentioning that a variety of exponential truncation functions have been developed
in the specialized literature (Schinckus 2013). Two factors can explain the diversity of these
functions: first, the collection of physical theories that can be used to define the form of the
function; and second, the will to develop more data-driven truncations (Matsushita, Rathie,
and Da Silva 2003).
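One generic way of writing such an exponential truncation (a schematic form, not the specific function used in any one of the papers cited above) is
\[
p(x) \;\propto\; x^{-(1+\alpha)}\, e^{-x/\ell}, \qquad x \ge x_{0},
\]
so that the distribution behaves like a power law with exponent α up to the cutoff scale ℓ and decays exponentially beyond it, which guarantees finite moments for the truncated distribution.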
53. These techniques have been presented as a potential solution for power laws based on high-frequency data. For low-frequency items (i.e., not financial data), some physicists implemented alternative solutions by using other statistical laws (the Lavalette law, for instance). See Ausloos 2015; Cerqueti and Ausloos 2015; and Tripp and Feitelson 2001.
Chapter 4
1. Actually, Stanley was the first scholar to propose the term “econophysics” during a conference
dedicated to “physics of economics” organized in Kolkata (India) in 1995 (Chakrabarti and
Chakraborti 2010).
2. According to Standler (2009), the end of this kind of bubble can partly be explained by a
generational shift in the administration: senior officers close to retirement who favored funding
of specific scientific research are no longer able to insist on this generous financial support.
3. See http://phys.uh.edu/research/econophysics/index.php.
4. See http://www.tcd.ie/Physics/people/Peter.Richmond/Econophysics/Position.html and http://www.itp.phys.ethz.ch/research/comp/econophys for examples. For further information on these programs, see Kutner and Grech 2008 and the websites of these universities.
5. http://www3.unifr.ch/econophysics.
6. It earned its official recognition in the Physics and Astronomy Classification Scheme (PACS): since 2003, econophysics has been an official subcategory of physics under the code 89.65 Gh.
7. The sample is composed of Eugene Stanley, Rosario Mantegna, Joseph McCauley, Jean-
Philippe Bouchaud, Mauro Gallegati, Benoît Mandelbrot, Didier Sornette, Thomas Lux,
Bikas Chakrabarti, and Doyne Farmer. Moreover, given the usual practice of citations, other
important authors have been retrieved through the analysis of the cited references in these
papers as well as in the papers citing those source papers. A group of 242 source papers
covering the domain of econophysics and the papers that cite them over the period 1980–
2008 were identified in order to analyze the evolution of the field. Starting with these core
papers, which construct the population of researchers, 1,817 other papers that cited the
source articles have been identified.
8. These papers were mainly written by Thomas Lux and Mauro Gallegati and dealt with
macroeconomics (Gallegati 1990; Lux 1992a, 1992b; Gallegati 1994) or the history of
economics (Gallegati and Dardi 1992).
9. His research focuses partly on complexity in economics, a topic that may cause him to be
more open to the approach proposed by econophysicists.
10. The data on the cited journals come from the 2008 Journal Citation Reports published by Thomson Reuters as part of the Web of Knowledge.
11. The first is a physicist and the second an economist, and both were in our source authors.
12. Following Backhouse (2004, 265), we distinguish “orthodox dissenters” from “heterodox
dissenters”; the latter reject the mainstream theory and aim at profoundly changing
conventional ideas, while the former are critical but work within mainstream economics.
13. Chapter 2 explained that financial economists describe the occurrence of extreme values through the evolution of the Gaussian trend (unconditional distribution) corrected by a conditional distribution capturing the large variations.
14. Eugene Stanley, who is often presented as the father of econophysics, told us privately that after more than six years (!) he decided to withdraw his submission to the American Economic Review—although it is a top 10 journal in economics.
15. While economists use the JEL (Journal of Economic Literature) classification, physicists organize their knowledge through the PACS (Physics and Astronomy Classification Scheme), under which econophysics has its own code (89.65 Gh).
16. Economists usually employ stylistic conventions defined by the University of Chicago
Press or the Harvard citation style, where references are listed in alphabetical order, while
physicists adopt the conventions used by the American Institute of Physics, where references
are listed in the order in which they appear in the text.
17. They had to choose from five reasons for having been rejected and were invited to comment
on their choices: (1) the topic of the paper; (2) the assumptions used in the paper; (3) the
method used in the paper; (4) the results of the paper; or (5) another reason.
18. This situation is not specific to economics. It also existed in the other fields into which
statistical physicists have extended their models and methods (Mitzenmacher 2005).
19. This is one of the main reasons why the efficient-market theory displays such a weak connection with the random character of stock price variations, as we saw in chapter 1.
20. Mirowski (1989) gives a good overview on this controversy.
21. The VAR approach was developed by Christopher Sims (1980a, 1980b). VAR models are
a set of related linear difference equations in which each variable is in turn explained by its own
lagged values, plus current and past values of the remaining n − 1 variables (Christiano 2012;
Stock and Watson 2001). Numerous economists have critiqued these models because they
do not shed any light on the underlying structure of the phenomena or the economy studied
(Chari, Kehoe, and McGrattan 2008, 2009; Christiano 2012). Similarly, the RBC approach is
based on calibration. It was developed by Finn E. Kydland and Edward C. Prescott (1982).
These macroeconomic models study business cycle fluctuations as the result of exogenous
shocks. Their model is based on a calibration approach, and it is validated if the simulations
provided by the model fit with empirical observations. Numerous economists have critiqued
this method (Eichenbaum 1996; Gregory and Smith 1995; Hansen and Heckman 1996;
Hendry 1995; Hoover 1995; Quah 1995; Sims 1996; Wickens 1995). De Vroey and Malgrange
(2007) offer a presentation of this model and its influence in macroeconomics literature.
22. Shalizi’s notebook, http://bactra.org/notebooks/power-laws.html.
23. Chartism (also called "technical analysis") is a financial practice based on the observation of the historical evolution of asset prices. More precisely, actors try to identify visual patterns that help them give meaning to the financial reality.
24. Stanley and Plerou (2001) replied to LeBaron's critique. Although their reply provided a technical answer—particularly regarding the limited number of data points used by LeBaron—it underlines the differing expectations of the two communities.
25. Chapter 2 also emphasized that econophysicists can find this linearity by treating statistically
the potential inflection points that could appear in the visual analysis of data.
26. We can mention Bouchaud and Sornette (1994) and Bouchaud and Potters (2003, 2000),
who proposed a first approximation to the European call option formula that is equivalent to
the option-pricing formula obtained in financial economics under the risk-neutral approach.
However, their arguments are different from those used in financial economics.
27. It is worth mentioning that most practitioners have developed their own formulas that are far from the Black-Scholes-Merton model (Haug and Taleb 2011).
28. When econophysicists deal with equilibrium, they rather use a "statistical equilibrium" coming from statistical mechanics (i.e., a reconciliation between mechanics and thermodynamics). See Bouchaud 2002. See Schinckus 2010a, 2010c for further information about the importance of equilibrium in econophysics.
29. It is worth mentioning that the hypothetico-deductive method is considered the major scientific method by many authors in philosophy of science (for instance, Popper 1959). When (financial) economics emerged as a discipline, economists integrated hypothetico-deductive reasoning as a scientific foundation: the assumption of the perfectly rational agent allows economists to deduce implications in terms of individual behaviors, while the hypothesis of the representative agent offers them the ability to generalize microbehaviors at a macroeconomic level. Even recent developments such as behavioral finance retained this deductive method by generalizing the perfectly rational agent (Schinckus 2009).
30. ARCH-type models include ARCH, GARCH, NGARCH, EGARCH, and similar models.
31. See, for instance, Lorie 1966, 107.
32. In epistemological terms, this opposition between early financial economists and chartists mirrored the classical opposition between deduction (used by financial economists) and induction (used by chartists). See Jovanovic 2008 for further details on this opposition.
33. See, for instance, Cootner 1964, 1; Fama 1965b, 59; Fisher and Lorie 1964, 1–2; and Archer 1968, 231–32.
34. With Rosser (2006, 2008), Keen (2003) is one of a rare breed of economists who have engaged in a dialogue with econophysicists.
35. See, for instance, Weston 1967, 539.
36. In this respect, Rosenfeld (1957, 52) proved to be visionary when he suggested using
computers for testing theories on a large sample.
Chapter 5
1. One should point out that, more recently, econophysics models have less frequently used
power laws; this represents a new diversification of econophysics models that will be
discussed in this chapter.
2. As detailed in chapter 3, theoretical investigation in econophysics is considered only after patterns have been observed in the empirical results.
3. Mandelbrot (1962, 6) had already underlined the need for a phenomenological approach in
his early works applied to finance. From this perspective Mandelbrot’s agenda (building new
mathematical models, new statistical tools, etc.) was very ambitious for the 1960s.
4. By “success,” Casey (2013) meant that these models were able to detect an overestimation of
the market (in the credit default swap bubble, for example).
5. Examples are Jean-Philippe Bouchaud and Mark Potters, who created Capital Fund
Management, and Tobias Preis, who created Artemis Capital Asset Management.
6. https://www.cfm.fr/en/.
7. http://modeco-software.webs.com/econophysics.htm.
8. https://www.rmetrics.org/.
9. https://www.rmetrics.org/sites/default/files/2013-VorlesungSyllabus.pdf.
10. http://tuvalu.santafe.edu/~aaronc/powerlaws/.
11. Cornelis (2005) and Guégan and Zhao (2014) point out that extreme events lead to the
failure of VaR.
12. Pagan (1996) offers a very clear and useful perspective on econometric models applied to
financial markets. See also Bauwens et al. 2006 and Francq and Zakoian 2010 for further
details on these categories of models.
13. Considering the time series as a whole is the most common approach in natural sciences and
is associated with scaling laws.
14. We thank Nicolas Gaussel for helpful discussion on this topic.
15. LTCM was a hedge-fund management firm that utilized absolute-return trading strategies
combined with high financial leverage. The firm was founded in 1994 and collapsed in 1998.
Members of its board of directors included Myron S. Scholes and Robert C. Merton. Initially successful, with extremely high annualized returns in its first years (21 percent, 41 percent, and 43 percent after fees), in 1998 it lost $4.6 billion in less than four months following the Asian and Russian financial crises. See Dunbar 2000 and Lowenstein 2000.
16. Harrison (1998) showed that the characteristics of eighteenth-century financial-asset
returns are the same as those of the twentieth century: “The distribution of price changes
now and then both exhibit the same patterns or regularities. In particular, the distribution of
price changes is leptokurtic, and fluctuations in variance are persistent” (1998, 55). In other
words, these regularities are stable.
17. Fantazzini and Geraskin (2011) provide a clear presentation of LPPL models.
18. A > 0 is the value of ln p(tc) at the critical time, B < 0 is the increase in ln p(t) over the time unit before the crash if C were to be close to zero, C ≠ 0 is the proportional magnitude of the oscillations around the exponential growth, β (with 0 < β < 1) ensures a finite price at the critical time, and 0 < δ < 2π is a phase parameter.
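For reference, one standard way of writing the LPPL equation to which these parameters belong (parameterizations vary slightly across the literature surveyed by Fantazzini and Geraskin 2011; ω denotes the log-periodic angular frequency, which is not listed above) is
\[
\ln p(t) \;=\; A + B\,(t_c - t)^{\beta} + C\,(t_c - t)^{\beta}\cos\!\big(\omega \ln(t_c - t) + \delta\big),
\]
where tc is the critical time at which the crash becomes most likely.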
19. The opinions of other participants influence each participant. It is the well-known beauty
contest described by Keynes in chapter 12 of the General Theory, in which judges picked
whom they thought other judges would pick, rather than whom they considered to be the
most beautiful.
20. Durlauf set out his position more clearly in a later paper (2012, 14).
21. Let us mention, for instance, that Durlauf (Arthur, Durlauf, and Lane 1997; Blume and
Durlauf 2006) was involved in the meetings organized by the Santa Fe Institute dedicated to
the application of physics to economics, while Lux regularly published articles dealing with
econophysics (Lux 2009).
22. In the same vein as Mitzenmacher, a theoretical interpretation is considered here in the sense
of explaining the significance of a mathematical model.
23. However, as explained in chapter 2, the characterization of these statistical patterns is developed in different conceptual frameworks (i.e., the Gaussian framework for financial economists and the power-law perspective for econophysicists).
24. See Jovanovic and Schinckus 2013 for a detailed discussion of this point in connection with
econophysics and financial economics.
25. Galison (1997) explained how engineers collaborated with physicists to develop particle
detectors and radar.
26. A Creole (e.g., Chavacano in the Philippines, Krio in Sierra Leone, and Tok Pisin in Papua New Guinea) is often presented as an example of a pidgin because it results from a mix of regional languages; see Todd 1990.
27. Note also special issues of economic journals, such as the Journal of Economic Dynamics and
Control dedicated to the “Application of Physics to Economics and Finance” published in
2008, and the issue of the International Review of Financial Analysis titled “Contributions of
Econophysics to Finance,” published in 2016.
28. Brakman et al. (1999) extended Krugman’s (1991) model by introducing negative externalities.
29. In contrast, Farmer et al. (2004) have shown that large price changes in response to large
orders are very rare. See also Chiarella, Iori, and Perello 2009 for a more recent model
showing that large price changes are likely to be generated by the presence of large gaps in
the book of orders.
30. It is worth mentioning that this hypothesis is similar to that of Fama (1965) when he defined and demonstrated the efficiency of financial markets for the first time.
31. From an economic perspective, the difference observed between the distributions
characterizing the evolution of financial variables (returns, foreign exchange) and those
describing economic fundamentals could result from the higher liquidity of the former. See
Aoki and Yoshikawa 2007 for more information on this subject.
32. A number of empirical studies very soon contradicted the conclusions of the theoretical
framework built during the 1960s and the 1970s (see chapter 1). These empirical studies gave
birth to what is known as the “anomalies literature,” which has become extensive and well
organized since the 1980s. Schwert (2003) provides a fairly exhaustive review of anomalies.
33. The term “market microstructure” was coined by Mark Garman (1976), who studied
order flux dynamics (the dealer must set a price so as to not run out of stock or cash). For a
presentation of the discipline, see O’Hara 1995; Madhavan 2000; and Biais et al. 2005.
34. The first generation of market microstructure literature has shown that trades have both a
transitory and a permanent impact on prices (Biais, Glosten, and Spatt 2005). For instance,
Copeland and Galai (1983) showed that a dealer who cannot distinguish between informed
and uninformed investors will always set a positive spread to compensate for the expected
loss that he will incur if there is a positive probability of some investors being informed.
Kyle (1985) suggests that informed dealers can develop strategic behavior to profit from
their information by concealing their orders among those of noninformed dealers. While
informed dealers thus maximize their own profits on the basis of the information they hold,
their behavior restricts dissemination of the information. O’Hara (2003) presents another
example of results that contradict the dominant paradigm. In this article, she shows that
if information is asymmetrically distributed, and if those who do not have information
know that others know more, contrary to the suggestions of the CAPM, we will not get an
equilibrium where everyone holds the market portfolio.
35. See Schinckus 2009a, 2009b for a presentation of this school and its positioning vis-à-vis the
dominant paradigm.
36. In 2002 Daniel Kahneman received the Nobel Prize in Economics for his work on the
integration of psychology with economics.
37. Note that Shefrin (2002) made a first attempt to unify the theory.
38. Agent-based modeling is a computational method applied in so many fields (Epstein 2006) that it is not possible to enumerate them in this chapter. The agent-based approach appeared in the 1990s as a new tool for empirical research in many fields, including economics (Axtell 1999), voting behavior (Asselain 1985), military tactics (Ilachinski 2000), organizational behavior (Prietula, Carley, and Gasse 1998), epidemics (Epstein and Axtell 1996), and traffic-congestion patterns (Rasmussen and Nagel 1994). For a detailed literature review on the topic, see Epstein 2006 or, more recently, Cristelli 2014.
39. See LeBaron 2006 for details on agent-based modeling used in economics.
40. Note that this agent-based econophysics is not limited to financial issues, since Pickhardt and Seibold (2011), for example, explained that income-tax evasion dynamics can be modeled through an "agent-based econophysics model" based on the Ising model of ferromagnetism, while Donangelo and Sneppen (2000) and Shinohara and Gunji (2001) approached the emergence of money through studying the dynamics of exchange in a system composed of many interacting and learning agents. Focardi et al. (2002) and Chiarella and Iori (2002) also provided an Ising-type model with interactions between nearest neighbors.
41. Bak became an external member of the Santa Fe Institute, where he found the perfect
environment to promote his theory of criticality, which gradually spread to several
disciplinary contexts in the 1990s (Frigg 2003). The Santa Fe Institute was founded in 1984
to conduct theoretical research outside the traditional disciplinary boundaries by basing
it on interdisciplinarity. Its original mission was to disseminate complexity theory (also
called complex systems). This institute plays a key role in econophysics due to a fruitful
collaboration between economists and physicists, among them some Nobel laureates such
as Phil Anderson and Kenneth Arrow. For further details, see Schinckus 2016.
42. The visual tests make it difficult to distinguish between power-law, log-normal, and exponential distributions. See Clauset et al. 2009 for further details on this point.
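A minimal sketch of such a formal comparison (assuming the API of the powerlaw Python package of Alstott, Bullmore, and Plenz 2014; the simulated data are purely illustrative):

import numpy as np
import powerlaw

rng = np.random.default_rng(1)
data = rng.pareto(2.5, size=20_000) + 1.0  # toy heavy-tailed sample

fit = powerlaw.Fit(data)
print("estimated exponent:", fit.power_law.alpha, "xmin:", fit.power_law.xmin)

# log-likelihood ratio R and p-value for the comparison of candidate distributions
R, p = fit.distribution_compare("power_law", "lognormal")
print("R =", R, "p =", p)

A positive log-likelihood ratio R with a small p-value favors the power-law description over the log-normal one; a negative R favors the log-normal.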
43. The difficulty of distinguishing between log-normal and Pareto tails has been widely
commented on in the literature (Embrechts, Klüppelberg, and Mikosch 1997; Bee,
Riccaboni, and Schiavo 2011).
44. This bound comes from the size of the sample that leads the probability density to diverge
when x tends toward zero.
Chapter 6
1. Stanley et al. 1999, 157; Challet, Marsili, and Cheng Zhang 2005, 14; Bouchaud and Potters
2003; McCauley 2004; Bouchaud and Challet 2014; McCauley 2006. See also Rickles 2008
and Schinckus 2010, who discussed this point.
2. Although econophysicists (McCauley 2006, Sornette 2014) criticize this theoretical
dependence of the modeling step, it is worth mentioning that physics also provides telling
examples in which a theoretical framework is accepted while the empirical results are wholly
incompatible with this framework. One could mention the recent example of the Higgs
boson. The conceptual existence of the Higgs boson predated its observation, meaning that
its theoretical framework was assumed for a number of years without the particle being
observed. In the same vein, string theory is an elegant mathematical framework, empirical/
concrete evidence of which is still debated. These are not unique examples: “There are
plenty of physicists who appear to be unperturbed about working in a manner detached
from experiment: quantum gravity, for example. Here, the characteristic scales are utterly
inaccessible, there is no experimental basis, and yet the problem occupies the finest minds in
physics” (Rickles 2008, 14).
3. By being derived from a theoretical framework setting up the initial calibration, the “model
becomes an a priori hypothesis about real phenomena” (Haavelmo 1944, 8).
4. See, for instance, Mandelbrot 1963; Malcai, Biham, and Solomon 1999; Blank and
Solomon 2000; Skjeltorp 2000; Cont and Bouchaud 2000; Louzoun and Solomon 2001;
Gupta and Campanha 2002; and Scalas and Kim 2007.
5. Derived distributions like that obtained by Clementi, Di Matteo, and Gallegati (2006) are
in general not Lévy-like distributions, but they approach a Lévy distribution for a very large
number of data.
6. Bouchaud, Mezard, and Potters 2002; Potters and Bouchaud 2003; McCauley 2009; Gabaix
2009; Lux 2009; McCauley, Gunaratne, and Bassler 2007b; Sornette 2014; Bouchaud 2002;
McCauley 2006; Stanley and Plerou 2001; Durlauf 2005, 2012; Keen 2003; Chen and Li
2012; Ausloos 2001; Chakrabarti and Chakraborti 2010; Farmer and Lux 2008; Carbone,
Kaniadakis, and Scarfone 2007; Ausloos 2013; Jovanovic and Schinckus 2016.
7. The law of one price suggests that the forces of competition will ensure that any given
commodity will be sold at the same price.
8. Although Modigliani and Miller were not the first to apply the arbitrage proof in finance
(Rubinstein 2003), their article led to its popularity for two reasons: (1) their article
was one of the first to use modern probability theory to analyze a financial problem; and
(2) the authors were members of strong academic departments (MIT and the University of
Chicago).
9. Except in Bouchaud and Sornette 1994. They reach a first approximation to the European call option formula that is equivalent to the option-pricing formula obtained in mathematical finance under the risk-neutral approach, but they use arguments that are somewhat different from those used in mathematical finance.
10. The reason is the incompleteness of markets (Ivancevic 2010; Takayasu 2006; Cont and Tankov 2004; Miyahara 2012; Zhang and Han 2013). See also chapters 1 and 4.
11. Because we work unconditionally on the whole distribution (and not only on its fat-tail part), we need not define a short-term time dependence for the variance as is usually done in conditional methodology, such as ARCH-type models.
12. It is worth mentioning that this convergence of x to a Gaussian regime is extremely slow due to the stable property of the Lévy distribution (Mantegna and Stanley 1994). The crossover value Nc can be derived by using the Berry-Esseen theorem (Shlesinger 1995) or by using a method based on the probability of x returning to the origin (Mantegna and Stanley 1994)—both approaches provide a crossover value equal to Nc ~ c^(−α) l^(−α), where c is the scale factor and l the cross-value at which the regime will switch from a stable Lévy to a Gaussian one.
Conclusion
1. Although the Bayesian framework is also implemented in finance, it is not the statistical
approach used by the mainstream. For further details on this point, see Rachev et al. 2008.
REFERENCES
Abergel, Frédéric, Hideaki Aoyama, Bikas K. Chakrabarti, Anirban Chakraborti, and Asim
Ghosh, eds. 2014. Econophysics of Agent-Based Models. Cham: Springer.
Alastair, Bruce, and David Wallace. 1989. “Critical phenomena: Universal physics at large
length scales.” In The New Physics, edited by Paul Davies, 109–52. Cambridge: Cambridge
University Press.
Alejandro-Quiñones, Ángel L., Kevin E. Bassler, Michael Field, Joseph L. McCauley, Matthew
Nicol, Ilya Timofeyev, Andrew Török, and Gemunu H. Gunaratne. 2006. “A theory of fluc-
tuations in stock prices.” Physica A 363 (2): 383–92.
Alexander, Sidney S. 1961. “Price movements in speculative markets: Trends or random walk.”
Industrial Management Review 2: 7–26.
Alfarano, Simone, and Thomas Lux. 2005. “A noise trader model as a generator of apparent fi-
nancial power laws and long memory.” Macroeconomic Dynamics 11: 80–101.
Alfarano, Simone, and Thomas Lux. 2010. “Extreme value theory as a theoretical background
for power law behavior.” MPRA Paper No. 24718. August 30.
Alstott, Jeff, Ed Bullmore, and Dietmar Plenz. 2014. “Powerlaw: A Python package for analysis
of heavy-tailed distributions.” PloS One 9 (1): e85777.
Altam, Gunther, ed. 2002. Special issue dedicated to Zipf's law. Glottometrics 3.
Amaral, Luis A. Nunes, Sergey V. Buldyrev, Shlomo Havlin, Heiko Leschhorn, Philipp Maass,
Michael A. Salinger, H. Eugene Stanley, and Michael H. R. Stanley. 1997. “Scaling behavior
in economics: I. Empirical results for company growth.” Journal de Physique 7: 621–33.
Amblard, Frédéric, and Guillaume Deffuant. 2004. “The role of network topology on extremism
propagation with the relative agreement opinion dynamics.” Physica A 343: 725–38.
Anagnostidis, Panagiotis, and Christos J. Emmanouilides. 2015. “Nonlinearity in high-
frequency stock returns: Evidence from the Athens Stock Exchange.” Physica A: Statistical
Mechanics and Its Applications 421: 473–87.
Andersen, Torben G., and Tim Bollerslev. 1997. “Intraday Periodicity and Volatility Persistence
in Financial Markets.” Journal of Empirical Finance 4: 115–58.
Annaert, Jan, Frans Buelens, and Angelo Riva. 2015. “Financial history databases: Old data, new
issues, new insights.” Paper presented at Financial History Workshop, Judge Business School,
Cambridge, July 23–24.
Aoki, Masanao, and Hiroshi Yoshikawa. 2007. A Stochastic Approach to Macroeconomics and
Financial Markets. Cambridge: Cambridge University Press.
Aoyama, Hideaki, Yoshi Fujiwara, Yuichi Ikeda, Hiroshi Iyetomi, and Wataru Souma. 2011.
Econophysics and Companies: Statistical Life and Death in Complex Business Networks.
Cambridge: Cambridge University Press.
Arad, Ruth W. 1980. “Parameter estimation for symmetric stable distribution.” International
Economic Review 21 (1): 209–20.
Arbulu, Pedro. 1998. “La Bourse de Paris au XIXe siècle: L’exemple d’un marché émergent
devenu efficient.” Revue d’Economie Financière 49: 213–49.
Archer, Stephen H. 1968. “Introduction.” Journal of Financial and Quantitative Analysis 3
(3): 231–33.
Armatte, Michel. 1991. “Théorie des erreurs, moyenne et loi ‘normale.’” In Moyenne, milieu,
centre: Histoires et usages, edited by Jacqueline Feldman, Gérard Lagneau, and Benjamin
Matalon, 63–84. Paris: Editions EHESS.
Armatte, Michel. 1992. “Conjonctions, conjoncture, et conjecture: Les baromètres économiques
(1885–1930).” Histoire et Mesure 7 (1–2): 99–149.
Armatte, Michel. 1995. “Histoire du modèle linéaire: Formes et usages en statistique et en
econométrie jusqu’en 1945.” Doctoral dissertation, EHESS.
Armatte, Michel. 2003. “Cycles and Barometers: Historical insights into the relationship be-
tween an object and its measurement.” In Papers and Proceedings of the Colloquium on the
History of Business-Cycle Analysis, edited by Dominique Ladiray, 45–74. Luxembourg: Office
for Official Publication of the European Communities.
Arrow, Kenneth Joseph. 1953. Le rôle des valeurs boursières pour la répartition la meilleure des
risques: (The role of securities in the optimal allocation of risk bearing). Cowles Commission for
Research in Economics. Chicago: University of Chicago.
Arrow, Kenneth Joseph, and Gérard Debreu. 1954. “Existence of an equilibrium for a competi-
tive economy.” Econometrica 22 (3): 265–90.
Arthur, W. Brian. 2005. “Out-of-equilibrium economics and agent-based modeling.” Santa Fe
Institute Working Paper No. 2005-09-03.
Arthur, W. Brian, Steven N. Durlauf, and David A. Lane, eds. 1997. The Economy as an Evolving
Complex System II. Reading, MA: Addison-Wesley.
Asselain, Jean-Claude. 1985. Histoire économique: De la révolution industrielle à la première guerre
mondiale. Paris: Presses de la Fondation Nationale des Sciences Politiques & Dalloz.
Auerbach, Felix. 1913. “Das Gesetz der Bevölkerungskonzentration.” Petermanns Geographische
Mitteilungen 59: 73–76.
Aurell, Erik, Jean-Philippe Bouchaud, Marc Potters, and Karol Zyczkowski. 1997. “Option pric-
ing and hedging beyond Black and Scholes.” Journal de Physique IV 3 (3): 2–11.
Ausloos, Marcel. 2001. “Editorial of the Special issue on econophysics.” European Physical
Journal B 20 (4): 471.
Ausloos, Marcel. 2013. “Econophysics: Comments on a few applications, successes, methods
and models.” IIM Kozhikode Society & Management Review 2 (2): 101–15.
Ausloos, Marcel. 2014a. “Toward fits to scaling-like data, but with inflection points and general-
ized Lavalette function.” Journal of Applied Quantitative Methods 9 (2): 1.
Ausloos, Marcel. 2014b. “Two-exponent Lavalette function: A generalization for the case of ad-
herents to a religious movement.” Physical Review E: Statistical, Nonlinear, and Soft Matter
Physics 89 (6): 062803.
Ausloos, Marcel. 2015. “France new regions planning? Better order or more disorder?” Entropy
17 (8): 5695–710.
Ausloos, Marcel, Kristinka Ivanova, and Nicolas Vandewalle. 2002. “Crashes: Symptoms, diag-
noses and remedies.” In Empirical Sciences of Financial Fluctuations: The Advent of Econophysics,
edited by Hideki Takayasu, 62–76. Berlin: Springer Verlag.
Ausloos, Marcel, Franck Jovanovic, and Christophe Schinckus. 2016. On the “usual” misunder-
standing between econophysics and finance: Some clarifications on modelling approaches
and efficient market hypothesis. International Review of Financial Analysis 47: 7–14.
Axtell, Robert L. 1999. “The emergence of firms in a population of agents: Local increasing re-
turns, unstable Nash equilibria, and power law size distributions.” Santa Fe Institute Working
Paper No. 99-03-019.
Axtell, Robert L. 2001. "Zipf distribution of U.S. firm sizes." Science 293: 1818–20.
Bachelier, Louis. 1900. “Théorie de la spéculation.” Annales de l’Ecole Normale Supérieure, 3rd
ser. 17 ( January): 21–86. Reprint, 1995, J. Gabay, Paris.
Bachelier, Louis. 1901. “Théorie mathématique du jeu.” Annales de l’Ecole Normale Supérieure,
3rd ser. 18 ( January): 77–119. Reprint, 1995, J. Gabay, Paris.
Bachelier, Louis. 1910. “Les probabilités à plusieurs variables.” Annales Scientifiques de l’École
Normale Supérieure 339–60.
Bachelier, Louis. 1912. Calcul des probabilités. Paris: Gauthier-Villars.
Backhouse, Roger E. 2004. “A suggestion for clarifying the study of dissent in economics.”
Journal of the History of Economic Thought 26 (2): 261–71.
Bagehot, Walter (pseudonym used by Jack Treynor). 1971. “The only game in town.” Financial
Analysts Journal 8: 31–53.
Bak, Per. 1994. “Introduction to self-criticality.” In Complexity: Metaphors, Models, and Reality,
edited by G. Cowan, D. Pines, and D. Meltzer, 476–82. Santa Fe: Santa Fe Institute.
Bak, Per, Kan Chen, José Sheinkman, and Michael Woodford. 1993. “Aggregate fluctuations
from independent sectorial shocks: Self-organized criticality in a model of production and
inventory dynamics.” Ricerche Economische 47 (1): 3–30.
Bak, Per, Maya Paczuski, and Martin Shubik. 1997. “Price variations in a stock market with
many agents.” Physica A 246 (3–4): 430–53.
Bak, Per, Chao Tang, and Kurt Wiesenfeld. 1987. “Self-organized criticality: An explanation of
1/f noise.” Physical Review Letters 59 (4): 381–84.
Bak, Per, Chao Tang, and Kurt Wiesenfeld. 1988. “Self-organized criticality.” Physical Review A
38 (1): 364–74.
Banerjee, Abhijit V. 1993. “The economics of rumors.” Review of Economic Studies 60 (2):
309–27.
Banz, Rolf W. 1981. “The relationship between return and market value of common stocks.”
Journal of Financial Economics 9 (1): 3–18.
Banzhaf, H. Spencer. 2001. “Quantifying the qualitative: Quality-adjusted price indexes in
the United States, 1915–61.” In History of Political Economy, annual supplement, The Age
of Economic Measurement, edited by Judy L. Klein and Mary S. Morgan, 345–70. Durham,
NC: Duke University Press.
Barbut, Marc. 2003. “Homme moyen ou homme extrême? De Vilfredo Pareto (1896) à Paul
Lévy (1936) en passant par Maurice Fréchet et quelques autres.” Journal de la Société Française
de la Statistique 144 (1–2): 113–33.
Barnett, Vincent. 2011. E. E. Slutsky as Economist and Mathematician: Crossing the Limits of
Knowledge. London: Routledge.
Bartolozzi, Marco, and Anthony William Thomas. 2004. "Stochastic cellular automata model for stock market dynamics." Physical Review E 69 (4): 046112.
Bassler, Kevin E., Joseph L. McCauley, and Gemunu H. Gunaratne. 2007. “Nonstationary incre-
ments, scaling distributions, and variable diffusion processes in financial markets.” Proceedings
of the National Academy of Sciences of the United States of America 104 (44): 17287–290.
doi: 10.1073/pnas.0708664104.
Batterman, Robert W. 2002. The Devil in the Details: Asymptotic Reasoning in Explanation,
Reduction and Emergence. New York: Oxford University Press.
Bauwens, Luc, Sébastien Laurent, and Jeroen V. K. Rombouts. 2006. “Multivariate Garch
models: A survey.” Journal of Applied Econometrics 21 (1): 79–109.
Bayeux-Besnainou, Isabelle, and Jean-Charles Rochet. 1996. “Dynamic Spanning: Are Options
an Appropriate Instrument?” Mathematical Finance, 6 (1): 1–16.
Bazerman, Charles. 1988. Shaping Written Knowledge: The Genre and Activity of the Experimental
Articles in Science. Madison: University of Wisconsin Press.
Bee, Marco, Massimo Riccaboni, and Stefano Schiavo. 2011. “Pareto versus lognormal: A max-
imum entropy test.” Physical Review E 84 (2): 026104.
Ben-El-Mechaiekh, Hichem, and Robert W. Dimand. 2006. “Louis Bachelier.” In Pioneers of
Financial Economics, vol. 1: Contributions Prior to Irving Fisher, edited by Geoffrey Poitras,
225–37. Cheltenham: Edward Elgar.
Ben-El-Mechaiekh, Hichem, and Robert W. Dimand. 2008. “Louis Bachelier’s 1938 volume on
the calculus of speculation: Efficient markets and mathematical finance in Bachelier’s later
work.” Working paper.
Bernstein, Peter L. 1992. Capital Ideas: The Improbable Origins of Modern Wall Street.
New York: Free Press; Toronto: Maxwell Macmillan Canada.
Bernstein, Peter L. 2007. Capital Ideas Evolving. Hoboken, NJ: John Wiley & Sons.
Berry, Brian J. L., and Adam Okulicz-Kozaryn. 2012. “The city size distribution debate: Resolution
for US urban regions and megalopolitan areas.” Cities 29 (supplement 1): S17–S23.
Biais, Bruno, Larry Glosten, and Chester Spatt. 2005. “Market microstructure: A survey of micro-
foundations, empirical results, and policy implications.” Journal of Financial Markets 8 (2): 217–64.
Binney, James, N. J. Dowrick, Andrew J. Fisher, and Mark E. J. Newman. 1992. The Theory of
Critical Phenomena: An Introduction to the Renormalization Group. Oxford: Clarendon Press.
Black, Fischer. 1971a. “Toward a fully automated stock exchange (part 1).” Financial Analysts
Journal 27 (4): 28–35, 44.
Black, Fischer. 1971b. “Toward a fully automated stock exchange (part 2).” Financial Analysts
Journal 27 (6): 24–28, 86–87.
Black, Fischer. 1986. “Noise.” Journal of Finance 41 (3): 529–43.
Black, Fischer, and Myron Scholes. 1973. “The pricing of options and corporate liabilities.”
Journal of Political Economy 81 (3): 637–54.
Blank, Aharon, and Sorin Solomon. 2000. “Power laws in cities population, financial markets
and internet sites (scaling in systems with a variable number of components).” Physica A 287
(1–2): 279–88.
Blattberg, Robert, and Nicholas Gonedes. 1974. “A comparison of the stable and Student dis-
tributions as statistical models for stock prices: Reply.” Journal of Business 47 (2): 244–80.
Blattberg, Robert, and Thomas Sargent. 1971. “Regression with non-Gaussian stable distur-
bances: Some sampling results.” Econometrica 39 (3): 501–10.
Blume, Lawrence E., and Steven N. Durlauf, eds. 2006. The Economy as an Evolving Complex
System III: Current Perspectives and Future Directions. New York: Oxford University Press.
Blume, Marshall. 1968. “The assessment of portfolio performance: An application of portfolio
theory.” Doctoral dissertation, University of Chicago.
Bollerslev, Tim. 1986. “Generalized autoregressive conditional heteroskedasticity.” Journal of
Econometrics 31: 307–27.
Bollerslev, Tim. 2010. “Glossary to ARCH (GARCH).” In Volatility and Time Series
Econometrics: Essays in Honour of Robert F. Engle, edited by Tim Bollerslev, Jeffrey R. Russell,
and Mark W. Watson, 137–64. New York: Oxford University Press.
Borak, Szymon, Wolfgang Hardle, and Rafal Weron. 2005. “Stable distributions.” SFB Working
Paper, Humboldt University Berlin.
Bormetti, Giacomo, Enrica Cisana, Guido Montagna, and Oreste Nicrosini. 2007. “A non-
Gaussian approach to risk measures.” Physica A 376: 532–42.
Bouchaud, Jean-Philippe. 2001. “Power laws in economics and finance: Some ideas from phys-
ics.” Quantitative Finance 1 (1): 105–12.
Bouchaud, Jean-Philippe. 2002. "An introduction to statistical finance." Physica A 313
(1): 238–51.
Bouchaud, Jean-Philippe, and Damien Challet. 2014. “Behavioral finance and financial mar-
kets: Arbitrage techniques, exuberant behaviors and volatility.” Opinion et débats 7: 20–35.
Bouchaud, Jean-Philippe, Doyne Farmer, and Fabrizio Lillo. 2009. “How markets slowly digest
changes in supply and demand.” In Handbook of Financial Markets: Dynamics and Evolution,
edited by Thorsten Hens and Klaus Reiner Schenk-Hoppe, 57–160. Amsterdam: North-
Holland, Elsevier.
Bouchaud, Jean-Philippe, Marc Mezard, and Marc Potters. 2002. “Statistical properties of stock
order books: Empirical results and models.” Quantitative Finance 2: 251–56.
Bouchaud, Jean-Philippe, and Marc Potters. 2000. Theory of Financial Risks: From Statistical
Physics to Risk Management. Cambridge: Cambridge University Press.
Bouchaud, Jean-Philippe, and Marc Potters. 2003. Theory of Financial Risk and Derivate Pricing.
Cambridge: Cambridge University Press.
Bouchaud, Jean-Philippe, and Didier Sornette. 1994. "The Black-Scholes option pricing
problem in mathematical finance: Generalisation and extension to a large class of stochastic
processes.” Journal de Physique I 4: 863–81.
Boumans, Marcel, ed. 2007. Measurement in Economics: A Handbook. London: Elsevier.
Bouzoubaa, Mohamed, and Adel Osseiran. 2010. Exotic Options and Hybrids: A Guide to
Structuring, Pricing and Trading. London: John Wiley & Sons.
Bowley, Arthur Lyon. 1933. “The action of economic forces in producing frequency distri-
butions of income, prices, and other phenomena: A suggestion for study.” Econometrica 1
(4): 358–72.
Boyarchenko, Svetlana I., and Sergei Z. Levendorskiĭ. 2000. “Option pricing for truncated Lévy
processes.” International Journal of Theoretical and Applied Finance 3 (3): 549–52.
Boyarchenko, Svetlana I., and Sergei Z. Levendorskiĭ. 2002. “Barrier options and touch-and-out
options under regular Lévy processes of exponential type.” Annals of Applied Probability 12
(4): 1261–98.
Brada, Josef, Harry Ernst, and John van Tassel. 1966. “The distribution of stock price differ-
ences: Gaussian after all?” Operations Research 14 (2): 334–40.
Brakman, Steven, Harry Garretsen, Charles Van Marrewijk, and Marianne van den Berg. 1999.
“The return of Zipf: Towards a further understanding of the rank-size distribution.” Journal
of Regional Science 39: 183–213.
Breidt, F. Jay, Nuno Crato, and Pedro de Lima. 1998. “On the detection and estimation of long
memory in stochastic volatility.” Journal of Econometrics 83: 325–48.
Brenner, Menachem. 1974. “On the stability of distribution of the market component in stock
price changes.” Journal of Financial and Quantitative Analysis 9: 945–61.
Brenner, Thomas. 2001. “Self-organisation, local symbiosis of firms and the life cycle of lo-
calised industrial clusters.” Papers on Economics and Evolution, Max Planck Institute of
Economics, Jena.
Breton, Yves. 1991. “Les économistes français et les questions de méthode.” In L’Economie
Politique en France au XIXe Siècle, edited by Yves Breton and Michel Lutfalla, 389–419.
Paris: Economica.
Broda, Simon A., Markus Haas, Jochen Krause, Marc S. Paolella, and Sven C. Steude. 2013.
“Stable mixture GARCH models.” Journal of Econometrics 172 (2): 292–306.
Brody, Samuel. 1945. Bioenergetics and Growth. New York: Reinhold.
Bronzin, Vinzenz. 1908. "Theory of Premium Contracts." In Vinzenz Bronzin's Option Pricing Models: Exposition and Appraisal, edited by Wolfgang Hafner and Heinz Zimmermann, 113–200. Berlin: Springer, 2009.
Brown, Stephen J., William N. Goetzmann, and Alok Kumar. 1998. “The Dow theory: William
Peter Hamilton’s track record reconsidered.” Journal of Finance 53 (4): 1311–33.
Buchanan, Mark. 2004. “Power law and the new science of complexity management.” Strategy
+ Business 34: 104–7.
Bucsa, Gigel, Emmanuel Haven, Franck Jovanovic, and Christophe Schinckus. 2014. “The prac-
tical use of the general formula of the optimal hedge ratio in option pricing: An example of
the generalized model of an exponentially truncated Levy stable distribution.” Theoretical
Economics Letters 4 (9): 760–66.
Cai, Wei, Melvin Lax, and Min Xu. 2006. Random Processes in Physics and Finance. Oxford: Oxford
University Press.
Calvo, Iván, Juan C. Cuchí, José G. Esteve, and Fernando Falceto. 2010. “Generalized
central limit theorem and renormalization group.” Journal of Statistical Physics 141:
409–21.
Carbone, Anna, Giorgio Kaniadakis, and Antonio Maria Scarfone. 2007. “Where do we stand
on econophysics.” Physica A 382: 11–14.
Carr, Peter, Hélyette Geman, Dilip Madan, and Marc Yor. 2002. “The fine structure of asset re-
turns: An empirical investigation.” Journal of Business 75: 305–32.
Carr, Peter, and Dilip B. Madan. 2005. “A note on sufficient conditions for no arbitrage.” Finance
Research Letters 2 (3): 125–30.
Cartwright, Nancy. 1983. How the Laws of Physics Lie. Oxford: Oxford University Press.
Casey, Michael. 2013. “Move over economists, markets need physicists.” Market Watch
Magazine, July 10.
Cassidy, David. 2011. A Short History of Physics in the American Century. Cambridge: Harvard
University Press.
Cerqueti, Roy, and Marcel Ausloos. 2015. “Evidence of economic regularities and disparities of
Italian regions from aggregated tax income size data.” Physica A: Statistical Mechanics and Its
Applications 421: 187–207.
Chaigneau, Nicolas, and Philippe Le Gall. 1998. “The French connection: The pioneering e-
conometrics of Marcel Lenoir (1913).” In European Economists of the Early 20th Century, vol.
1: Studies of Neglected Thinkers of Belgium, France, the Netherlands and Scandinavia, edited by
Warren J. Samuels, 163–89. Cheltenham: Edward Elgar.
Chakrabarti, Bikas K., and Anirban Chakraborti. 2010. “Fifteen years of econophysics research.”
Science and Culture 76: 9–10.
Chakrabarti, Bikas K., Anirban Chakraborti, and Arnab Chatterjee. 2006. Econophysics and
Sociophysics: Trends and Perspectives. Weinheim: Wiley-VCH.
Chakraborti, Anirban, Ioane Muni Toke, Marco Patriarca, and Frédéric Abergel. 2011a.
“Econophysics review: I. Empirical facts.” Quantitative Finance 11 (7): 991–1012.
Chakraborti, Anirban, Ioane Muni Toke, Marco Patriarca, and Frédéric Abergel. 2011b.
“Econophysics review: II. Agent-based models.” Quantitative Finance 11 (7): 1013–41.
Challet, Damien, Mateo Marsili, and Yi Cheng Zhang. 2005. Minority Games: Interacting Agents
in Financial Markets. Oxford: Oxford University Press.
Challet, Damien, and Robin Stinchcombe. 2001. “Analyzing and modeling 1+1d markets.”
Physica A 300 (1–2): 285–99.
Champernowne, David Gawen. 1953. “A model of income distribution.” Economic Journal
63: 318–51.
Chancelier, Éric. 2006a. “L’analyse des baromètres économiques de Persons et
Wagemann: Instrument de prévision—instrument de théorisation, 1919–1932.” Revue
d’Économie Politique 116 (5): 613–32.
Chancelier, Éric. 2006b. "Les premiers baromètres économiques américains (1900–1919)."
Revue d’Histoire des Sciences Humaines 15 (2): 135–55.
Chane-Alune, Elena. 2006. “Accounting standardization and governance structures.” Working
Paper No. 0609, University of Liège.
Chari, V. V., Patrick J. Kehoe, and Ellen R. McGrattan. 2008. “Are structural VARs with long-run
restrictions useful in developing business cycle theory?” Journal of Monetary Economics 55
(8): 1337–52.
Chari, V. V., Patrick J. Kehoe, and Ellen R. McGrattan. 2009. “New Keynesian models: Not yet
useful for policy analysis.” American Economic Journal: Macroeconomics 1 (1): 242–66.
Chen, Shu-heng, and Sai-ping Li. 2012. “Econophysics: Bridges over a turbulent current.”
International Review of Financial Analysis 23: 1–10. doi: 10.1016/j.irfa.2011.07.001.
Chen, Shu-Heng, Thomas Lux, and Michele Marchesi. 2001. “Testing for non-linear structure in
an artificial financial market.” Journal of Economic Behavior and Organization 46 (3): 327–42.
Chen, X., H. Anderson, and P. Barker. 2008. “Kuhn’s theory of scientific revolutions and cogni-
tive psychology.” Philosophical Psychology 11 (1): 5–28.
Chiarella, Carl, and Giulia Iori. 2002. “A Simulation analysis of the microstructure of double
auction markets.” Quantitative Finance 2 (5): 346–53.
Chiarella, Carl, Giulia Iori, and Josep Perello. 2009. “The impact of heterogeneous trading rules on
the limit order book and order flows.” Journal of Economic Dynamics and Control 33: 525–37.
Chrisman, Nicholas. 1999. “Trading Zones or Boundary Objects: Understanding Incomplete
Translations of Technical Expertise.” San Diego.
Christ, Carl F. 1994. “The Cowles Commission’s contributions to econometrics at Chicago,
1939–1955.” Journal of Economic Literature 32 (March): 30–59.
Christiano, Lawrence J. 2012. “Christopher A. Sims and vector autoregressions.” Scandinavian
Journal of Economics 114 (4): 1082–104.
Clark, Peter K. 1973. “A Subordinated stochastic process with finite variance for speculative
prices.” Econometrica 41 (1): 135–55.
Clauset, Aaron. 2011. Inference, Models and Simulation for Complex Systems, Lecture 3. Santa Fe, http://tuvalu.santafe.edu/~aaronc/courses/7000/csci7000-001_2011_L3.pdf.
Clauset, Aaron, Cosma Rohilla Shalizi, and Mark Newman. 2009. “Power-law distributions in
empirical data.” SIAM Review 51 (4): 661–703.
Clementi, Fabio, Tiziana Di Matteo, and Mauro Gallegati. 2006. “The power-law tail exponent
of income distributions.” Physica A 370 (1): 49–53.
Clippe, Paulette, and Marcel Ausloos. 2012. “Benford’s law and Theil transform of financial
data.” Physica A: Statistical Mechanics and Its Applications 391 (24): 6556.
Cohen, I. Bernard. 1993. “Analogy, homology and metaphor in the interactions between
the natural sciences and the social sciences, especially economics.” In Non-natural Social
Science: Reflecting on the Enterprise of “More Heat Than Light”, edited by Neil De Marchi,
7–44. Durham, NC: Duke University Press.
Cohen, Kalman J., and Jerry A. Pogue. 1967. “An empirical evaluation of alternative portfolio-
selection models.” Journal of Business 40 (2): 166–93.
Condon, E. U. 1928. “Statistics of vocabulary.” Science 67 (1733): 300.
Cont, Rama, and Jean-Philippe Bouchaud. 2000. “Herd behavior and aggregate fluctuations in
financial markets.” Macroeconomic Dynamics 4 (2): 170–96.
Cont, Rama, Marc Potters, and Jean-Philippe Bouchaud. 1997. “Scaling in stock market
data: Stable laws and beyond.” In Scale Invariance and Beyond: Les Houches Workshop, March
10–14, 1997, edited by B. Dubrulle, F. Graner, and D. Sornette, 75–85. New York: Springer.
Cont, Rama, and Peter Tankov. 2004a. Financial Modelling with Jump Processes. London: Chapman
& Hall/CRC.
Cont, Rama, and Peter Tankov. 2004b. “Non-parametric calibration of jump-diffusion option
pricing models.” Journal of Computational Finance 7 (3): 1–49.
Cootner, Paul H. 1962. “Stock prices: Random vs. systematic changes.” Industrial Management
Review 3 (2): 24.
Cootner, Paul H. 1964. The Random Character of Stock Market Prices. Cambridge, MA:
MIT Press.
Copeland, Thomas E., and Dan Galai. 1983. “Information effects and the bid-ask spread.” Journal
of Finance 38 (5): 1457–69.
Cordoba, J. 2008. “On the distribution of city sizes.” Journal of Urban Economics 63: 177–97.
Los, Cornelis A. 2005. “Why VAR fails: Long memory and extreme events in financial markets.”
Journal of Financial Economics 3 (3): 19–36.
Courtadon, Georges. 1982. “A note on the premium market of the Paris stock exchange.” Journal
of Banking and Finance 6 (4): 561–65.
Courtault, Jean-Michel, Youri Kabanov, Bernard Bru, Pierre Crépel, Isabelle Lebon, and Arnaud
Le Marchand. 2002. “Louis Bachelier on the centenary of Théorie de la spéculation.” In Louis
Bachelier: Aux origines de la finance mathématique, edited by Jean-Michel Courtault and Youri
Kabanov, 5–86. Besançon: Presses Universitaires Franc-Comtoises.
Cover, John H. 1937. “Some investigations in the Sampling and Distribution of Retail Prices.”
Econometrica 5 (3): 263–79.
Cowles, Alfred. 1933. “Can stock market forecasters forecast?” Econometrica 1 (3): 309–24.
Cowles, Alfred. 1944. “Stock market forecasting.” Econometrica 12 (3/4): 206–14.
Cowles, Alfred. 1960. “A revision of previous conclusions regarding stock price behavior.”
Econometrica 28 (4): 909–15.
Cowles, Alfred, and Herbert E. Jones. 1937. “Some a posteriori probabilities in stock market
action.” Econometrica 5 (3): 280–94.
Cox, John C., and Stephen Ross. 1976. “The valuation of options for alternative stochastic pro-
cesses.” Journal of Financial Economics 3 (1-2): 145–66.
Cramér, Harald. 1983. “Probabilité mathématique et inférence statistique: Quelques souvenirs
personnels sur une importante étape du progrès scientifique.” Revue de Statistique Appliquée
31 (3): 5–15.
Cristelli, Matthieu. 2014. Complexity in Financial Markets: Modeling Psychological Behavior in
Agent-Based Models and Order Book Models. Milan: Springer.
Csörgő, S., and G. Simons. 1983. “On Steinhaus’s resolution of the St. Petersburg paradox.”
Probability and Mathematical Statistics 14: 157–71.
Cuoco, Domenico, and Eugenio Barone. 1989. “The Italian market for ‘premium’ contracts: An
application of option pricing theory.” Journal of Banking and Finance 13 (4): 709–45.
Daemen, Job. 2010. “Ketchup economics: What is finance about?” Presented at the 14th
Annual Conference of the European Society for the History of Economic Thought,
Amsterdam, March.
Davis, Mark, and Alison Etheridge. 2006. Louis Bachelier’s Theory of Speculation. Princeton,
NJ: Princeton University Press.
De Bondt, Werner F. M., and Richard Thaler. 1985. “Does the stock market overreact?” Journal
of Finance 40 (3): 793–805.
De Meyer, Bernard, and Hadiza Moussa Saley. 2003. “On the strategic origin of Brownian
motion in finance.” International Journal of Game Theory 31: 285–319.
De Vroey, Michel, and Pierre Malgrange. 2007. “Théorie et modélisation macro-économiques,
d’hier à aujourd’hui.” Revue Française D’économie 21 (3): 3–38.
Debreu, Gérard. 1959. Theory of Value. New Haven: Yale University Press.
Demsetz, Harold. 1968. “The cost of transacting.” Quarterly Journal of Economics 82: 33–53.
Derman, Emanuel. 2001. “A guide for the perplexed quant.” Quantitative Finance 1 (5): 476–80.
Derman, Emanuel. 2009. “Models.” Financial Analysts Journal 65 (1): 28–33.
Diewert, W. Erwin. 1993. “The early history of price index research.” In Essays in Index Number
Theory, vol. 1, edited by W. Erwin Diewert and Alice O. Nakamura, 33–65. Amsterdam: North
Holland.
Dimand, Robert W. 2007. “Irving Fisher and financial economics: The equity premium puzzle,
the predictability of stock prices, and intertemporal allocation under risk.” Journal of the
History of Economic Thought 29 (2): 153–66. doi: 10.1080/10427710701335885.
Dimand, Robert W. 2009. “The Cowles Commission and Foundation on the functioning of fi-
nancial markets from Irving Fisher and Alfred Cowles to Harry Markowitz and James Tobin.”
Revue d’Histoire des Sciences Humaines 20: 109–30.
Dimand, Robert W., and John Geanakoplos, eds. 2005. Celebrating Irving Fisher: The Legacy of a
Great Economist. Malden, MA: Blackwell.
Dimand, Robert W., and William Veloce. 2010. “Alfred Cowles and Robert Rhea on the predict-
ability of stock prices.” Journal of Business Inquiry 9 (1): 56–64.
Ding, E-Jiang. 1983. “Asymptotic properties of the Markovian master equations for multi-
stationary systems.” Physica A 119: 317–26.
Ding, Zhuanxin, Robert F. Engle, and Clive W. J. Granger. 1993. “A long memory property of
stock market returns and a new model.” Journal of Empirical Finance 1 (1): 83–106.
Dionisio, Andreia, Rui Menezes, and Diana A. Mendes. 2006. “An econophysics approach to
analyse uncertainty in financial markets: An application to the Portuguese stock market.”
European Physical Journal B 50 (1): 161–64.
Domb, Cyril, and D. Hunter. 1965. “On the critical behaviour of ferromagnets.” Proceedings of
the Physical Society 86: 1147.
Donangelo, Raul, and Kim Sneppen. 2000. “Self-organization of value and demand.” Physica A
276: 572–80.
Donnelly, Kevin. 2015. Adolphe Quetelet, Social Physics and the Average Men of Science, 1796–
1874. New York: Routledge.
Doob, Joseph L. 1953. Stochastic Process. New York: John Wiley & Sons.
Dow, James, and Gary Gorton. 2008. “Noise traders.” In The New Palgrave: A Dictionary of
Economics, 2nd ed., edited by Steven N. Durlauf and Lawrence E. Blume. New York: Palgrave
Macmillan.
Dubkov, Alexander A., Bernardo Spagnolo, and Vladimir V. Uchaikin. 2008. “Lévy flight
superdiffusion: An introduction.” International Journal of Bifurcation and Chaos 18
(9): 2649–72.
Dunbar, Frederick C., and Dana Heller. 2006. “Fraud on the market meets behavioral finance.”
Delaware Journal of Corporate Law 31 (2): 455–532.
Dunbar, Nicholas. 2000. Inventing Money: The Story of Long-Term Capital Management and the
Legends behind It. New York: Wiley.
Dupoyet, Brice, Rudolf H. Fiebig, and David P. Musgrove. 2011. “Replicating financial market
dynamics with a simple self-organized critical lattice model.” Physica A 390: 3120–35.
Durlauf, Steven N. 2005. “Complexity and empirical economics.” Economic Journal
115: F225–F243.
Durlauf, Steven N. 2012. “Complexity, economics, and public policy.” Politics Philosophy
Economics 11 (1): 45–75.
Eberlein, Ernst, and Ulrich Keller. 1995. “Hyperbolic distributions in finance.” Bernoulli
1: 281–99.
Edlinger, Cécile, and Antoine Parent. 2014. “The beginnings of a ‘common-sense’ approach to
portfolio theory by nineteenth-century French financial analysts Paul Leroy-Beaulieu and
Alfred Neymarck.” Journal of the History of Economic Thought 36 (1): 23–44.
Eeckhout, Jan. 2004. “Gibrat’s law for (all) cities.” American Economic Review 94 (5): 1429–51.
Eguíluz, Victor M., and Martin G. Zimmermann. 2000. “Transmission of information and herd
behavior: An application to financial markets.” Physical Review Letters 85 (26): 5659–62.
Eichenbaum, Martin. 1996. “Some comments on the role of econometrics in economic theory.”
Economic Perspectives 20 (1): 22.
Einstein, Albert. 1905. “Über die von der molekularkinetischen Theorie der Wärme geforderte
Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen.” Annalen der Physik
17: 549–60.
Embrechts, Paul, Claudia Klüppelberg, and Thomas Mikosch. 1997a. Modelling Extremal
Events: For Insurance and Finance. New York: Springer.
Embrechts, Paul, Claudia Klüppelberg, and Thomas Mikosch. 1997b. Modelling Extreme Values.
Berlin: Springer.
Engle, Robert F., and Jeffrey R. Russell. 2004. “Analysis of high frequency financial data.” In
Handbook of Financial Econometrics: Tools and Techniques, edited by Yacine Ait-Sahalia and
Lars Hansen, 383–426. Amsterdam: Elsevier.
Engle, Robert F. 1982. “Autoregressive conditional heteroskedasticity with estimates of the var-
iance of United Kingdom inflation.” Econometrica 50: 987–1007.
Epstein, Joshua M. 2006. Generative Social Science: Studies in Agent-Based Computational
Modeling. Princeton, NJ: Princeton University Press.
Epstein, Joshua M., and Robert L. Axtell. 1996. Growing Artificial Societies: Social Science from the
Bottom Up. Cambridge, MA: MIT Press.
Estoup, Jean-Baptiste. 1916. Gammes sténographiques. Paris: Institut Sténographique.
Fama, Eugene F. 1963. “Mandelbrot and the stable Paretian hypothesis.” Journal of Business 36
(4): 420–29.
Fama, Eugene F. 1965a. “The behavior of stock-market prices.” Journal of Business 38 (1): 34–105.
Fama, Eugene F. 1965b. “Portfolio analysis in a stable Paretian market.” Management Science
Series A: Sciences 11 (3): 404–19.
Fama, Eugene F. 1965c. “Random walks in stock-market prices.” Financial Analysts Journal 21
(5): 55–59.
Fama, Eugene F. 1970. “Efficient capital markets: A review of theory and empirical work.”
Journal of Finance 25 (2): 383–417.
Fama, Eugene F. 1976a. “Efficient capital markets: Reply.” Journal of Finance 31 (1): 143–45.
Fama, Eugene F. 1976b. Foundations of Finance: Portfolio Decisions and Securities Prices.
New York: Basic Books.
Fama, Eugene F. 1991. “Efficient capital markets: II.” Journal of Finance 46 (5): 1575–617.
Fama, Eugene F. 2008. “Interview with Professor Eugene Fama by Professor Richard Roll.”
August 15.
Fama, Eugene F., Lawrence Fisher, Michael C. Jensen, and Richard Roll. 1969. “The Adjustment
of Stock Prices to New Information.” International Economic Review 10 (1): 1–21.
Fama, Eugene F., and Kenneth French. 2004. “Capital asset pricing model: Theory and evi-
dence.” Journal of Economic Perspective 18 (3): 25–36.
Fama, Eugene F., and Richard Roll. 1968. “Some properties of symmetric stable distributions.”
Journal of the American Statistical Association 63 (323): 817–36.
Fama, Eugene F., and Richard Roll. 1971. “Parameter estimates for symmetric stable distribu-
tions.” Journal of the American Statistical Association 66 (334): 331–38.
Farmer, J. Doyne, and Duncan Foley. 2009. “The economy needs agent-based modelling.”
Nature 460: 685–86.
Farmer, J. Doyne, and John Geanakoplos. 2008. “Power laws in economics and elsewhere.”
Working paper.
Farmer, J. Doyne, and John Geanakoplos. 2009. “The Virtues and Vices of Equilibrium, and the
Future of Financial Economics.” Working paper, Cowles Foundation.
Farmer, J. Doyne, Laszlo Gillemot, Fabrizio Lillo, Szabolcs Mike, and Anindya Sen. 2004. “What
really causes large price changes?” Quantitative Finance 4: 383–97.
Farmer, J. Doyne, and Fabrizio Lillo. 2004. “On the origin of power law tails in price fluctua-
tions.” Quantitative Finance 4 (1): 7–11.
Farmer, J. Doyne, and Thomas Lux. 2008. “Introduction to special issue on ‘Applications of
Statistical Physics in Economics and Finance.’” Journal of Economic Dynamics and Control 32
(1): 1–6.
Farmer, J. Doyne, Paolo Patelli, Ilija I. Zovko, and Kenneth J. Arrow. 2005. “The predictive power
of zero intelligence in financial markets.” Proceedings of the National Academy of Sciences of the
United States of America 102 (6): 2254–59.
Fayyad, U. 1996. “Data mining and knowledge discovery: Making sense out of data.” Intelligent
Systems 11 (5): 20–25.
Feigenbaum, James A., and Peter G. O. Freund. 1996. “Discrete scale invariance in stock markets
before crashes.” International Journal of Modern Physics B 10 (27): 3737–45.
Feigenbaum, James A., and Peter G. O. Freund. 1998. “Discrete scale invariance and the ‘second
Black Monday.’” Modern Physics Letters B 12 (3): 57–60.
Feller, William. 1957. An Introduction to Probability Theory and Its Applications. Vol. 1.
New York: Wiley.
Feller, William. 1971. An Introduction to Probability Theory and Its Applications, Vol. 2.
New York: Wiley.
Figueiredo, Annibal, Raul Matsushita, Sergio Da Silva, Maurizio Serva, Gandhi Viswanathan, Cesar Nascimento, and Iram Gleria. 2007. “The Lévy sections theorem: An application to
econophysics.” Physica A 386 (2): 756–59.
Fisher, Irving. 1911. The Purchasing Power of Money. London: Macmillan.
Fisher, Irving. 1922. The Making of Index Numbers: A Study of Their Varieties, Tests, and Reliability.
Boston: Houghton Mifflin.
Fisher, Irving. 1930. The Theory of Interest as Determined by Impatience to Spend Income and
Opportunity to Invest It. New York: Macmillan.
Fisher, Lawrence, and James H. Lorie. 1964. “Rates of return on investments in common stocks.”
Journal of Business 37 (1): 1–21.
Fisher, Michael E. 1967. “The theory of equilibrium critical phenomena.” Reports on Progress in
Physics 30: 615–730.
Fitzpatrick, Richard. 2012. “Thermodynamics and statistical mechanics: An intermediate level
course.”
Focardi, Sergio, Silvano Cincotti, and Michele Marchesi. 2002. “Self-organization and market
crashes.” Journal of Economic Behavior and Organization 49 (2): 241–67.
Fourcade, Marion, and Rakesh Khurana. 2009. “From social control to financial economics: The
linked ecologies of economics and business in twentieth-century America.” Presented to the
Annual Meeting of the SASE, Paris.
Francq, Christian, and Jean-Michel Zakoian. 2010. GARCH Models: Structure, Statistical
Inference and Financial Applications. London: Wiley.
Frankfurter, George M., and Elton G. McGoun. 1999. “Ideology and the theory of financial ec-
onomics.” Journal of Economic Behavior and Organization 39: 159–77.
Franzen, Dorothee. 2010. “Managing investment risk in defined benefit pension funds.” OECD
Working Papers on Insurance and Private Pensions.
Friedman, Walter A. 2009. “The Harvard Economic Service and the problems of forecasting.”
History of Political Economy 41 (1): 57–88.
Frigg, R. 2003. “Self-organised criticality: What it is and what it isn’t.” Studies in the History and
Philosophy of Science Part A 34 (3): 613–32.
Fujimoto, Shouji, Atushi Ishikawa, Takayuki Mizuno, and Tsutomu Watanabe. 2011. “A new
method for measuring tail exponents of firm size distributions.” Economics 5 (2011–20): 1.
doi: 10.5018/economics-ejournal.ja.2011-20.
Gabaix, Xavier. 1999. “Zipf’s law for cities: An explanation.” Quarterly Journal of Economics
114: 739–67.
Gabaix, Xavier. 2009. “Power laws in economics and finance.” Annual Review of Economics 1: 255–93.
Gabaix, Xavier, Parameswaran Gopikrishnan, Vasiliki Plerou, and H. Eugene Stanley. 2000.
“Statistical properties of share volume traded in financial markets.” Physical Review E 62
(4): R4493–R4496.
Gabaix, Xavier, Parameswaran Gopikrishnan, Vasiliki Plerou, and H. Eugene Stanley. 2003. “A
theory of power law distributions in financial market fluctuations.” Nature 423: 267–70.
Gabaix, Xavier, Parameswaran Gopikrishnan, Vasiliki Plerou, and H. Eugene Stanley. 2006.
“Institutional investors and stock market volatility.” Quarterly Journal of Economics 121:
461–504.
Gabaix, Xavier, Parameswaran Gopikrishnan, Vasiliki Plerou, and H. Eugene Stanley. 2007.
“A unified econophysics explanation for the power-law exponents of stock market activity.”
Physica A 382: 81–88.
Gabaix, Xavier, and Rustam Ibragimov. 2011. “Rank − 1/2: A simple way to improve
the OLS estimation of tail exponents.” Journal of Business and Economic Statistics 29
(1): 24–39.
Gabaix, Xavier, and A. Landier. 2008. “Why has CEO pay increased so much?” Quarterly Journal
of Economics 123: 49–100.
Galam, Serge. 2004. “Sociophysics: A personal testimony.” Physica A 336 (2): 49–55.
Galam, Serge. 2008. “Sociophysics: A review of Galam models.” arXiv:0803.1800v1.
Galam, Serge, Yuval Gefen, and Yonathan Shapir. 1982. “Sociophysics: A mean behavior model
for the process of strike.” Journal of Mathematical Sociology 9 (2): 1–13.
Galison, Peter. 1997. Image and Logic: A Material Culture of Microphysics. Chicago: University of
Chicago Press.
Gallais-Hamonno, Georges, ed. 2007. Le marché financier français au XIXe siècle. Vol. 2.
Paris: Publications de la Sorbonne.
Gallegati, Mauro. 1990. “Financial instability, income distribution and the stock market.” Journal
of Post Keynesian Economics 12 (3): 356–74.
Gallegati, Mauro. 1994. “Composition effect and economic fluctuations.” Economics Letters 44
(1–2): 123–26.
Gallegati, Mauro, and Marco Dardi. 1992. “Alfred Marshall on speculation.” History of Political
Economy 24 (3): 571–94.
Gallegati, Mauro, Steve Keen, Thomas Lux, and Paul Ormerod. 2006. “Worrying trends in
econophysics.” Physica A 370 (1): 1–6.
Garman, Mark. 1976. “Market microstructure.” Journal of Financial Economics 3: 257–75.
Gaussel, Nicolas, and Jérôme Legras. 1999. “Black-Scholes ... what’s next?” Quants 35: 1–33.
Geman, Hélyette. 2002. “Pure jump Lévy processes for asset price modelling.” Journal of Banking
and Finance 26 (7): 1297–316.
Geraskin, Petr, and Dean Fantazzini. 2011. “Everything you always wanted to know about log-
periodic power laws for bubble modeling but were afraid to ask.” The European Journal of
Finance 19 (5): 1–26.
Gibrat, Robert. 1931. Les inégalités économiques. Paris: Librairie du Recueil.
Gilbert, G. Nigel. 2007. Agent Based Models. London: Sage.
Gilbert, G. Nigel, and Michael Mulkay. 1984. Opening Pandora’s Box. New York: Cambridge
University Press.
Gillespie, Colin S. 2014. “A complete data framework for fitting power law distributions.”
arXiv:1408.1554.
Gillet, Philippe. 1999. L’efficience des marchés financiers. Paris: Economica.
Gingras, Yves, and Christophe Schinckus. 2012. “Institutionalization of econophysics in the
shadow of physics.” Journal of the History of Economic Thought 34 (1): 109–30.
Gligor, Mircea, and Marcel Ausloos. 2007. “Cluster structure of EU-15 countries derived from
the correlation matrix analysis of macroeconomic index fluctuations.” European Physical
Journal B 57 (2): 139–46.
Gnedenko, Boris Vladimirovich, and Andrei N. Kolmogorov. 1954. Limit Distributions for Sums
of Independent Random Variables. Cambridge, MA: Addison-Wesley.
Godfrey, Michael D., Clive W. J. Granger, and Oskar Morgenstern. 1964. “The random-walk hy-
pothesis of stock market behavior.” Kyklos 17: 1–29.
Goerlich, Francisco J. 2013. “A simple and efficient test for the Pareto law.” Empirical Economics
45 (3): 1367–81.
Gopikrishnan, Parameswaran, Martin Meyer, Luis A. Nunes Amaral, and H. Eugene Stanley.
1998. “Inverse cubic law for the probability distribution of stock price variations.” European
Physical Journal B 3: 139–40.
Gopikrishnan, Parameswaran, Vasiliki Plerou, Xavier Gabaix, and H. Eugene Stanley. 2000.
“Statistical properties of share volume traded in financial markets.” Physical Review E
62: 4493–96.
Gopikrishnan, Parameswaran, Vasiliki Plerou, Yanhui Liu, Luis A. Nunes Amaral, Xavier
Gabaix, and H. Eugene Stanley. 2000. “Scaling and correlation in financial time series.”
Physica A: Statistical Mechanics and its Applications 287 (3): 362–73. doi: 10.1016/S0378-4371(00)00375-7.
Gopikrishnan, Parameswaran, Vasiliki Plerou, Luis A. Nunes Amaral, Martin Meyer, and H.
Eugene Stanley. 1999. “Scaling of the distribution of fluctuations of financial market indices.”
Physical Review E 60 (5): 5305–16.
Granger, Clive W. J., and Oskar Morgenstern. 1963. “Spectral analysis of New York Stock
Exchange prices.” Kyklos 16: 1–27.
Gregory, Allan W., and Gregor W. Smith. 1995. “Business cycle theory and econometrics.”
Economic Journal 105 (433): 1597–608.
Grossman, Sanford J. 1976. “On the efficiency of competitive stock markets where traders have
diverse information.” Journal of Finance 31 (2): 573–85.
Grossman, Sanford J., and Joseph E. Stiglitz. 1980. “The impossibility of informationally effi-
cient markets.” American Economic Review 70 (3): 393–407.
Guégan, Dominique, and Xin Zhao. 2014. “Alternative modeling for long term risk.” Quantitative
Finance 14 (12): 2237–53.
Deffuant, Guillaume. 2004. “Comparing extremism propagation patterns in continuous opinion
models.” Journal of Artificial Societies and Social Simulation 9 (3): 8–14.
Gunaratne, Gemunu H., and Joseph L. McCauley. 2005. “A theory for fluctuations in stock
prices and valuation of their options.” Proceedings of the SPIE 5848: 131.
Gupta, Hari, and José Campanha. 1999. “The gradually truncated Lévy flight for systems with
power-law distributions.” Physica A 268: 231–39.
Gupta, Hari, and José Campanha. 2002. “Tsallis statistics and gradually truncated Lévy
flight: Distribution of an economic index.” Physica A 309 (4): 381–87.
Haavelmo, Trygve. 1944. “The probability approach in econometrics.” Econometrica 12
(Supplement): iii–vi + 1–115.
Hacking, Ian. 1990. The Taming of Chance. New York: Cambridge University Press.
Hadzibabic, Zoran, Peter Krüger, Marc Cheneau, Baptiste Battelier, and Jean Dalibard.
2006. “Berezinskii-Kosterlitz-Thouless crossover in a trapped atomic gas.” Nature
441: 1118–21.
Hafner, Wolfgang, and Heinz Zimmermann, eds. 2009. Vinzenz Bronzin’s Option Pricing
Models: Exposition and Appraisal. Berlin: Springer.
Hammer, Howard M., and Ronald X. Groeber. 2007. “Efficient market hypothesis and class
action securities regulation.” International Journal of Business Research 1: 1–14.
Hand, D. 1998. “Data mining: Statistics and more?” American Statistician 52 (2): 112–18.
Hankins, Frank Hamilton. 1908. “Adolphe Quetelet as statistician.” Doctoral dissertation,
Columbia University.
Hansen, Lars Peter, and James J. Heckman. 1996. “The empirical foundations of calibration.”
Journal of Economic Perspectives 10 (1): 87–104.
Harrison, J. Michael, and David M. Kreps. 1979. “Martingales and arbitrage in multiperiod secu-
rities markets.” Journal of Economic Theory 20 (3): 381–408.
Harrison, J. Michael, and Stanley R. Pliska. 1981. “Martingales and stochastic integrals in the
theory of continuous trading.” Stochastic Processes and Their Applications 11 (3): 215–60.
Harrison, Paul. 1998. “Similarities in the distribution of stock market price changes between the
eighteenth and twentieth centuries.” Journal of Business 71 (1): 55–79. doi: 10.1086/209736.
Hartemink, Alexander J., David K. Gifford, Tommi S. Jaakkola, and Richard A. Young. 2001.
“Maximum likelihood estimation of optimal scaling factors for expression array normaliza-
tion.” Proceedings of SPIE 4266: 132–40.
Hastie, Trevor, Robert Tibshirani, Jerome Friedman, and James Franklin. 2005. “The elements
of statistical learning: Data mining, inference and prediction.” Mathematical Intelligencer 27
(2): 83–85.
Haug, Espen Gaarder, and Nassim Nicholas Taleb. 2011. “Option traders use very sophisticated
heuristics, never the Black-Scholes-Merton formula.” Journal of Economic Behaviour and
Organization 77 (2): 97–106.
Hautcœur, Pierre-Cyrille, and Angelo Riva. 2012. “The Paris financial market in the nineteenth
century: Complementarities and competition in microstructures.” Economic History Review
65 (4): 1326–53.
Hendry, David F. 1995. “Econometrics and business cycle empirics.” Economic Journal 105
(433): 1622–36.
Hendry, David F., and Mary S. Morgan, eds. 1995. The Foundations of Econometric Analysis.
New York: Cambridge University Press.
Hoover, Kevin D. 1995. “Facts and artifacts: Calibration and the empirical assessment of real business cycle models.” Oxford Economic Papers 47 (1): 24.
Houthakker, Hendrik S. 1961. “Systematic and random elements in short-term price move-
ments.” American Economic Review 51 (2): 164–72.
Huber, Michel. 1937. “Quarante années de la statistique générale de la France 1896–1936.”
Journal de la Société Statistique de Paris 78: 179–214.
Hughes, Robert I. G. 1999. “The Ising model, computer simulation, and universal physics.” In
Models as Mediators: Perspectives on Natural and Social Science, edited by Mary S. Morgan and
Margaret Morrison, 97–145. Cambridge: Cambridge University Press.
Hurst, Harold Edwin. 1951. “Long-term storage capacity of reservoirs.” Transactions of the
American Society of Civil Engineers 116: 770–99.
Ilachinski, Andrew. 2000. “Irreducible semi-autonomous adaptive combat (ISAAC): An
artificial-life approach to land warfare.” Military Operations Research 5 (3): 29–46.
Ingrao, Bruna, and Giorgio Israel. 1990. The Invisible Hand: Economic Equilibrium in the History
of Science. Cambridge, MA: MIT Press.
Israel, Giorgio. 1996. La mathématisation du réel: Essai sur la modélisation mathématique.
Paris: Le Seuil.
Ivancevic, Vladimir G. 2010. “Adaptive-wave alternative for the Black-Scholes option pricing
model.” Cognitive Computation 2 (1): 17–30.
Ivanov, Plamen Ch, Ainslie Yuen, Boris Podobnik, and Youngki Lee. 2004. “Common scaling
patterns in intertrade times of U. S. stocks.” Physical Review E: Statistical, Nonlinear, and Soft
Matter Physics 69 (5.2): 056107. doi: 10.1103/PhysRevE.69.056107.
Izquierdo, Segismundo S., and Luis R. Izquierdo. 2007. “The impact of quality uncertainty without
asymmetric information on market efficiency.” Journal of Business Research 60 (8): 858–67.
Jensen, Michael C., ed. 1972. Studies in the Theory of Capital Markets. New York: Praeger.
Jensen, Michael C. 1978. “Some anomalous evidence regarding market efficiency.” Journal of
Financial Economics 6 (2–3): 95–101.
Jiang, Zhi-Qiang, Wei-Xing Zhou, Didier Sornette, Ryan Woodard, Ken Bastiaensen, and Peter
Cauwels. 2010. “Bubble diagnosis and prediction of the 2005–2007 and 2008–2009 Chinese
stock market bubbles.” Journal of Economic Behavior and Organization 74 (3): 149–62.
Johansen, Anders, and Didier Sornette. 2000. “The Nasdaq crash of April 2000: Yet another ex-
ample of log-periodicity in a speculative bubble ending in a crash.” European Physical Journal
B 17 (2): 319–28.
Johansen, Anders, Didier Sornette, and Olivier Ledoit. 1999. “Predicting financial crashes using
discrete scale invariance.” Journal of Risk 1: 5–32.
Jona-Lasinio, Giovanni. 2001. “Renormalization group and probability theory.” Physics Reports
352: 439–58.
Jovanovic, Franck. 2000. “L’origine de la théorie financière: Une réévaluation de l’apport de
Louis Bachelier.” Revue d’Economie Politique 110 (3): 395–418.
Jovanovic, Franck. 2001. “Pourquoi l’hypothèse de marche aléatoire en théorie financière? Les
raisons historiques d’un choix éthique.” Revue d’Economie Financière 61: 203–11.
Jovanovic, Franck. 2002. “Le modèle de marche aléatoire dans la théorie financière quantita-
tive.” Doctoral dissertation, University of Paris 1, Panthéon-Sorbonne.
Jovanovic, Franck. 2004. “Eléments biographiques inédits sur Jules Regnault (1834–1894),
inventeur du modèle de marche aléatoire pour représenter les variations boursières.” Revue
d’Histoire des Sciences Humaines 11: 215–30.
Jovanovic, Franck. 2006a. “Economic instruments and theory in the construction of Henri
Lefèvre’s ‘science of the stock market’.” In Pioneers of Financial Economics, vol. 1, edited by
Geoffrey Poitras, 169–90. Cheltenham: Edward Elgar.
Jovanovic, Franck. 2006b. “A nineteenth-century random walk: Jules Regnault and the origins
of scientific financial economics.” In Pioneers of Financial Economics, vol. 1, edited by Geoffrey
Poitras, 191–222. Cheltenham: Edward Elgar.
Jovanovic, Franck. 2006c. “Was there a ‘vernacular science of financial markets’ in France during
the nineteenth century? A comment on Preda’s ‘Informative Prices, Rational Investors.’”
History of Political Economy 38 (3): 531–46.
Jovanovic, Franck. 2008. “The construction of the canonical history of financial economics.”
History of Political Economy 40 (2): 213–42.
Jovanovic, Franck, ed. 2009a. “L’institutionnalisation de l’économie financière: Perspectives his-
toriques.” Special issue of Revue d’Histoire des Sciences Humaines 20.
Jovanovic, Franck. 2009b. “Le modèle de marche aléatoire dans l’économie financière de 1863 à
1976.” Revue d’Histoire des Sciences Humaines 20: 81–108.
Jovanovic, Franck. 2012. “Bachelier: Not the forgotten forerunner he has been depicted as.”
European Journal for the History of Economic Thought 19 (3): 431–451.
Jovanovic, Franck. 2016. “An introduction to the calculation of the chance: Jules Regnault,
his book and its influence.” In The Calculation of Chance and Philosophy of the Stock Market,
edited by William Goetzmann, forthcoming. New Haven: Yale University Press.
Jovanovic, Franck, Stelios Andreadakis, and Christophe Schinckus. 2016. “Efficient market hy-
pothesis and fraud on the market theory: A new perspective for class actions.” Research in
International Business and Finance 38 (7): 177–90.
Jovanovic, Franck, and Philippe Le Gall. 2001a. “Does God practice a random walk? The ‘finan-
cial physics’ of a 19th century forerunner, Jules Regnault.” European Journal for the History of
Economic Thought 8 (3): 323–62.
Jovanovic, Franck, and Philippe Le Gall. 2001b. “March to numbers: The statistical style of
Lucien March.” In History of Political Economy, annual supplement: The Age of Economic
Measurement, edited by Judy L. Klein and Mary S. Morgan, 86–100.
Jovanovic, Franck, and Christophe Schinckus. 2013. “Towards a transdisciplinary econophys-
ics.” Journal of Economic Methodology 20 (2): 164–83.
Jovanovic, Franck, and Christophe Schinckus. 2016. “Breaking down the barriers between
econophysics and financial economics.” International Review of Financial Analysis. Online.
Kadanoff, Leo. 1966. “Scaling laws for Ising models near Tc.” Physics 2: 263–72.
Kahneman, Daniel, and Amos Tversky. 1979. “Prospect theory: An analysis of decision under
risk.” Econometrica 47 (2): 263–91.
Kaiser, David. 2012. “Booms, busts, and the world of ideas: Enrollment pressures and the chal-
lenge of specialization.” Osiris 27: 276–302.
Kaizoji, Taisei. 2006. “On Stock-Price Fluctuations in the Periods of Booms and Stagnations.”
In Proceedings of the Econophysics-Kolkata II Series, edited by Arnab Chatterjee and Bikas K.
Chakrabarti, 3–12. Tokyo: Springer.
Kaizoji, Taisei, and Michiyo Kaizoji. 2004. “Power law for the calm-time interval of price
changes.” Physica A 336 (3): 563–70.
Keen, Steve. 2003. “Standing on the toes of pygmies: Why econophysics must be careful of the
economic foundations on which it builds.” Physica A 324 (1): 108–16.
Kendall, Maurice George. 1953. “The analysis of economic time-series. Part I: Prices.” Journal of
the Royal Statistical Society 116: 11–25.
Kennedy, Robert E. 1966. “Financial analysts seminar.” Financial Analysts Journal 22 (6): 8–9.
Kesten, Harry. 1973. “Random difference equations and renewal theory for products of random
matrices.” Acta Mathematica 131 (1): 207–48.
Keynes, John Maynard. 1936. The General Theory of Employment, Interest and Money.
London: Macmillan.
Khinchine, A. Ya. 1933. Asymptotische Gesetze der Wahrscheinlichkeitsrechnung. Berlin: Springer.
Kim, Young, Svetlozar Rachev, Michele Bianchi, and Frank J. Fabozzi. 2008. “Financial market
models with Lévy processes and time-varying volatility.” Journal of Banking and Finance 32
(7): 1363–78.
King, Benjamin. 1964. “The latent statistical structure of security price changes.” Doctoral dis-
sertation, Graduate School of Business, University of Chicago.
Klass, Oren S., Ofer Biham, Moshe Levy, Ofer Malcai, and Sorin Solomon. 2006. “The Forbes
400 and the Pareto wealth distribution.” Economics Letters 90: 90–95.
Kleiber, Max. 1932. “Body size and metabolism.” Hilgardia 6: 315–51.
Klein, Julie Thompson. 1994. “Notes toward a social epistemology of transdisciplinarity.”
Paper presented to the First World Congress of Transdisciplinarity, Convento da Arrábida,
Portugal.
Knorr-Cetina, Karin. 1981. The Manufacture of Knowledge: An Essay on the Constructivist and
Contextual Nature of Science. Elmsford, NY: Pergamon.
Kolmogorov, Andrei N. 1931. “Über die analytischen Methoden in der Wahrscheinlich-
keitsrechnung.” Mathematische Annalen 104: 415–58.
Lesne, Annick, and Michel Laguës. 2011. Scale Invariance: From Phase Transitions to Turbulence.
Berlin: Springer.
Levy, M. 2003. “Are rich people smarter?” Journal of Economic Theory 110: 42–64.
Levy, Moshe H., Haim Levy, and Sorin Solomon. 1995. “Microscopic simulation of the stock
market: The effect of microscopic diversity.” Journal de Physique I 5 (8): 1087–107.
Levy, Moshe H., Haim Levy, and Sorin Solomon. 2000. Microscopic Simulation of Financial
Markets: From Investor Behaviour to Market Phenomena. San Diego: Academic Press.
Lévy, Paul. 1924. “Théorie des erreurs: La loi de Gauss et les lois exceptionnelles.” Bulletin de la Société Mathématique de France 52: 49–85.
Li, Lun, David Alderson, John C. Doyle, and Walter Willinger. 2005. “Towards a theory of scale-
free graphs: Definition, properties, and implications.” Internet Mathematics 2 (4): 431–523.
Li, Sai-Ping, and Shu-Heng Chen, eds. 2012. “Complexity and non-linearities in financial
markets: Perspectives from econophysics.” Special issue of International Review of Financial
Analysis 23.
Lillo, Fabrizio, and Rosario N. Mantegna. 2004. “Dynamics of a financial market index after a
crash.” Physica A 338 (1–2): 125–34.
Lintner, John. 1965a. “Security prices, risk and maximal gains from diversification.” Journal of
Finance 20 (4): 587–615.
Lintner, John. 1965b. “The valuation of risk assets and the selection of risky investments in stock
portfolios and capital budgets.” Review of Economic Statistics 47 (1): 13–37.
Lorie, James Hirsch. 1965. “Controversies on the stock market.” Selected Papers, Graduate
School of Business of the University of Chicago.
Lorie, James Hirsch. 1966. “Some comments on recent quantitative and formal research on the
stock market.” Journal of Business 39 (1.2: Supplement on Security Prices): 107–10.
Lotka, Alfred J. 1926. “The frequency distribution of scientific productivity.” Journal of the
Washington Academy of Sciences 16 (12): 317–24.
Louçã, Francisco. 2007. The Years of High Econometrics: A Short History of the Generation That Reinvented Economics. London: Routledge.
Louzoun, Yoram, and Sorin Solomon. 2001. “Volatility driven markets in a generalized Lotka-Volterra formalism.” Physica A 302 (1): 220–33.
Lowenstein, Roger. 2000. When Genius Failed: The Rise and Fall of Long- Term Capital
Management. New York: Random House.
Lu, Zhiping, and Dominique Guegan. 2011. “Testing unit roots and long range dependence of
foreign exchange.” Journal of Time Series Analysis 32 (6): 631–38.
Lucas, Robert E., Jr. 1978. “Asset Prices in an Exchange Economy.” Econometrica 46 (6): 1429–45.
Luttmer, E. 2007. “Selection, growth, and the size distribution of firms.” Quarterly Journal of
Economics 122: 1103–44.
Lux, Thomas. 1992a. “A note on the stability of endogenous cycles in Diamond’s model of
search and barter.” Journal of Economics 56 (2): 185–96.
Lux, Thomas. 1992b. “The sequential trading approach to disequilibrium dynamics.” Jahrbücher
für Nationaloekonomie und Statistik 209 (1): 47–59.
Lux, Thomas. 1996. “The stable Paretian hypothesis and the frequency of large returns: An ex-
amination of major German stocks.” Applied Financial Economics 6: 463–75.
Lux, Thomas. 2006. “Financial power laws: Empirical evidence, models, and mechanism.”
Working paper No12 from Christian-Albrechts-University of Kiel, Department of Economics.
Lux, Thomas. 2009. “Applications of statistical physics in finance and economics.” In Handbook
of Research on Complexity, edited by Barkley Rosser, 213–58. Cheltenham: Edward Elgar.
Lux, Thomas, and Michele Marchesi. 1999. “Scaling and criticality in a stochastic multi-agent
model of a financial market.” Nature 397: 498–500.
Lux, Thomas, and Michele Marchesi. 2000. “Volatility clustering in financial markets: A micro-
simulation of interacting agents.” International Journal of Theoretical and Applied Finance 3
(4): 675–702.
Maas, Harro. 2005. William Stanley Jevons and the Making of Modern Economics.
New York: Cambridge University Press.
Macintosh, Norman B. 2003. “From rationality to hyperreality: Paradigm poker.” International
Review of Financial Analysis 12 (4): 453–65.
Macintosh, Norman B., Teri Shearer, Daniel B. Thornton, and Michael Welker. 2000.
“Accounting as simulacrum and hyperreality: Perspectives on income and capital.”
Accounting, Organizations and Society 25 (1): 13–50.
MacKenzie, Donald A. 2003. “An equation and its worlds: Bricolage, exemplars, disu-
nity and performativity in financial economics.” Paper presented to “Inside Financial
Markets: Knowledge and Interaction Patterns in Global Markets,” Konstanz, May.
MacKenzie, Donald A. 2006. An Engine, Not a Camera: How Financial Models Shape Markets.
Cambridge, MA: MIT Press.
MacKenzie, Donald A. 2007. “The emergence of option pricing theory.” In Pioneers of Financial
Economics, vol. 2: Twentieth Century Contributions, edited by Geoffrey Poitras and Franck
Jovanovic, 170–91. Cheltenham: Edward Elgar.
MacKenzie, Donald A., and Yuval Millo. 2009. “The usefulness of inaccurate models: Towards
an understanding of the emergence of financial risk management.” Accounting, Organizations
and Society 34: 638–53.
Madan, Dilip B., Peter P. Carr, and Éric C. Chang. 1998. “The variance gamma process and
option pricing.” European Finance Review 2 (1): 79–105.
Madhavan, Ananth. 2000. “Market microstructure: A survey.” Journal of Financial Markets 3
(3): 205–58.
Malcai, Ofer, Ofer Biham, and Sorin Solomon. 1999. “Power-law distributions and Lévy-stable
intermittent fluctuations in stochastic systems of many autocatalytic elements.” Physical
Review E 60: 1299–303.
Malevergne, Yannick, Vladilen Pisarenko, and Didier Sornette. 2011. “Testing the Pareto against
the lognormal distributions with the uniformly most powerful unbiased test applied to the
distribution of cities.” Physical Review E 83 (3): 036111.
Malevergne, Yannick, and Didier Sornette. 2005. Extreme Financial Risks: From Dependence to
Risk Management. Vol. 1. Berlin: Springer.
Malevergne, Yannick, Didier Sornette, and Vladilen Pisarenko. 2005. “Empirical distributions of
stock returns: Between the stretched exponential and the power law?” Quantitative Finance
5 (4): 379–401.
Malkiel, Burton G. 1992. “Efficient market hypothesis.” In The New Palgrave Dictionary of Money
and Finance, edited by Peter Newman, Murray Milgate and John Eatwell. London: Macmillan.
Mandelbrot, Benoit. 1957. “Application of thermodynamical methods in communication
theory and in econometrics.” Institut Mathématique de l’Université de Lille.
Mandelbrot, Benoit. 1960. “The Pareto-Lévy law and the distribution of income.” International
Economic Review 1: 79–106.
Mandelbrot, Benoit. 1962a. “Sur certains prix spéculatifs: Faits empiriques et modèle basé sur
les processus stables additifs non gaussiens de Paul Lévy.” Comptes Rendus de l’Académie des
Sciences (Meeting of June 4): 3968–70.
Mandelbrot, Benoit. 1962b. “The variation of certain speculative prices.” IBM Report NC-87.
Mandelbrot, Benoit. 1963a. “New Methods in Statistical Economics.” Journal of Political
Economy 71 (5): 421–40.
Mandelbrot, Benoit. 1963b. “The variation of certain speculative prices.” Journal of Business 36
(4): 394–419.
Mandelbrot, Benoit. 1966a. “Forecasts of future prices, unbiased markets, and “martingale”
models.” Journal of Business 39 (1.2): 242–55.
Mandelbrot, Benoit. 1966b. “Seminar on the Analysis of Security Prices,” held November 12–13, 1966, at the Graduate School of Business of the University of Chicago.
Mandelbrot, Benoit. 1997. Fractals, hasard et finance. Paris: Flammarion.
Mandelbrot, Benoit, and Richard L. Hudson. 2004. The (Mis)Behavior of Markets: A Fractal
View of Risk, Ruin, and Reward. London: Profile Books.
Mantegna, Rosario N. 1991. “Lévy walks and enhanced diffusion in Milan stock exchange.”
Physica A 179 (1): 232–42.
Mantegna, Rosario N., and H. Eugene Stanley. 1994. “Stochastic process with ultra-slow con-
vergence to a Gaussian: The truncated Lévy flight.” Physical Review Letters 73 (22): 2946–49.
Mantegna, Rosario N., and H. Eugene Stanley. 2000. An Introduction to Econophysics: Correlations
and Complexity in Finance. New York: Cambridge University Press.
March, Lucien. 1921. “Les modes de mesure du mouvement général des prix.” Metron 1
(4): 57–91.
Mardia, Kanti V., and Peter E. Jupp. 2000. Directional Statistics. Chichester: Wiley.
Mariani, Maria Cristina, and Y. Liu. 2007. “Normalized truncated Lévy walks applied to the
study of financial indices.” Physica A 377: 590–98.
Markowitz, Harry M. 1952. “Portfolio selection.” Journal of Finance 7 (1): 77–91.
Markowitz, Harry M. 1955. “Portfolio selection.” MA thesis, University of Chicago.
Markowitz, Harry M. 1959. Portfolio Selection: Efficient Diversification of Investments.
New York: Wiley.
Markowitz, Harry M. 1999. “The early history of portfolio theory: 1600–1960.” Financial
Analysts Journal 55 (4): 5–16.
Marsili, Matteo, and Yi-Cheng Zhang. 1998. “Econophysics: What can physicists contribute to
economics?” Physical Review Letters 80 (1): 80–85.
Maslov, Sergei. 2000. “Simple model of a limit order-driven market.” Physica A 278 (3–4): 571–78.
Matacz, Andrew. 2000. “Financial modeling and option theory with the truncated Levy pro-
cess.” International Journal of Theoretical and Applied Finance 3 (1): 142–60.
Matei, Marius, Xari Rovira, and Nuria Agell. 2012. “Bivariate volatility modeling for stocks and
portfolios.” Working paper, ESADE Business School, Ramon Llull University.
Matsushita, Raul, Pushpa Rathie, and Sergio Da Silva. 2003. “Exponentially damped Lévy
flight.” Physica A 326 (1): 544–55.
Max-Neef, Manfred. 2005. “Foundations of transdisciplinarity.” Ecological Economics 53: 5–16.
McCauley, Joseph L. 2003. “Thermodynamics analogies in economics and finance: Instability of
markets.” Physica A 329: 199–212.
McCauley, Joseph L. 2004. Dynamics of Markets: Econophysics and Finance. Cambridge: Cambridge
University Press.
Mir, Tariq Ahmad, Marcel Ausloos, and Roy Cerqueti. 2014. “Benford’s law predicted digit dis-
tribution of aggregated income taxes: The surprising conformity of Italian cities and regions.”
European Physical Journal B 87 (11): 1–8.
Mirowski, Philip. 1989a. “The measurement without theory controversy.” Oeconomia 11: 65–87.
Mirowski, Philip. 1989b. More Heat Than Light: Economics as Social Physics, Physics as Nature’s
Economics. New York: Cambridge University Press.
Mirowski, Philip. 2002. Machine Dreams. Cambridge: Cambridge University Press.
Miskiewicz, Janusz. 2010. “Econophysics in Poland.” Science and Culture 76: 395–98.
Mitchell, Melanie. 2009. Complexity: A Guided Tour. New York: Oxford University Press.
Mitchell, Wesley C. 1915. “The making and using of index numbers.” Bulletin of the United States
Bureau of Labor Statistics 173: 5–114.
Mitzenmacher, Michael. 2004. “A brief history of generative models for power law and lognor-
mal distributions.” Internet Mathematics 1 (2): 226–51.
Mitzenmacher, Michael. 2005. “Editorial: The future of power law research.” Internet Mathematics
2 (4): 525–34.
Miyahara, Yoshio. 2012. Option Pricing in Incomplete Markets: Modeling Based on Geometric Lévy
Processes and Minimal Entropy Martingale Measures. London: Imperial College Press.
Modigliani, Franco, and Merton H. Miller. 1958. “The cost of capital, corporation finance and
the theory of investment.” American Economic Review 48 (3): 261–97.
Moore, Arnold B. 1964. “Some characteristics of changes in common stock prices.” In The
Random Character of Stock Market Prices, edited by Paul H. Cootner, 139–61. Cambridge,
MA: MIT Press.
Moore, Henry Ludwell. 1917. Forecasting the Yield and the Price of Cotton. New York:
Macmillan.
Morales, Raffaello, Tiziana Di Matteo, and Tomaso Aste. 2013. “Non-stationary multifractal-
ity in stock returns.” Physica A: Statistical Mechanics and Its Applications 392 (24): 6470.
doi: 10.1016/j.physa.2013.08.037.
Morgan, Mary S. 1990. The History of Econometric Ideas: Historical Perspectives on Modern
Economics. Cambridge: Cambridge University Press.
Morgan, Mary S., and Judy L. Klein, eds. 2001. The Age of Economic Measurement. Durham,
NC: Duke University Press.
Morgan, Mary S., and Tarja Knuuttila. 2012. “Models and modelling in economics.” In Handbook
of the Philosophy of Science, vol. 13: Philosophy of Economics, edited by Uskali Mäki, 49–87.
Amsterdam: Elsevier.
Morin, Edgar. 1994. “Sur l’interdisciplinarité.” Paper presented to the First World Congress of
Transdisciplinarity, Convento da Arrábida, Portugal.
Mossin, Jan. 1966. “Equilibrium in a Capital Asset Market.” Econometrica 34 (4): 768–83.
Nadeau, Robert. 2003. “Thomas Kuhn ou l’apogée de la philosophie historique des sciences.” In
Actes du colloque du Centre Culturel International de Cerisy-la-Salle sur “Cent ans de philosophie
américaine”, edited by Jean-Pierre Cometti and Claudine Tiercelin, 273–97. Pau: P.U. Pau et
Pays de l’Adour – Quad.
Nakao, Hiroya. 2000. “Multi-scaling properties of truncated Levy flights.” Physics Letters A
266: 282–89.
Newman, Mark. 2005. “Power laws, Pareto distributions and Zipf’s law.” Contemporary Physics
46 (5): 323–51.
Niederhoffer, Victor. 1965. “Clustering of stock prices.” Operations Research 13 (2): 258–65.
Nolan, John P. 2005. “Modeling financial data with stable distributions.” Working paper,
American University.
Nolan, John P. 2009. “Stable distributions: Models for heavy tailed data.” Working paper,
American University.
O’Hara, Maureen Patricia. 1995. Market Microstructure Theory. Cambridge, MA: Blackwell.
O’Hara, Maureen Patricia. 2003. “Presidential address: Liquidity and price discovery.” Journal
of Finance 58 (3): 1335–54.
Officer, Robert Rupert. 1972. “The distribution of stock returns.” Journal of the American Statistical Association 67 (340): 807–12.
Okuyama, K., Misako Takayasu, and Hideki Takayasu. 1999. “Zipf’s law in income distribution
of companies.” Physica A 269 (1): 125–31.
Olivier, Maurice. 1926. Les nombres indices de la variation des prix. Paris: Marcel Giard.
Orléan, André. 1989. “Mimetic contagion and speculative bubbles.” Theory and Decision 27
(1–2): 63–92.
Orléan, André. 1995. “Bayesian interactions and collective dynamics of opinion: Herd behavior
and mimetic contagion.” Journal of Economic Behavior and Organization 28 (2): 257–74.
Osborne, Maury F. M. 1959a. “Brownian motion in the stock market.” Operations Research 7
(2): 145–73.
Osborne, Maury F. M. 1959b. “Reply to ‘Comments on “Brownian Motion in the Stock
Market.’ ” Operations Research 7 (6): 807–11.
Osborne, Maury F. M. 1962. “Periodic structure in the Brownian motion of stock prices.”
Operations Research 10 (3): 345–79.
Pagan, Adrian R. 1996. “The econometrics of financial markets.” Journal of Empirical Finance 3
(1): 15–102.
Pandey, Ras B., and Dietrich Stauffer. 2000. “Search for log-periodic oscillations in stock market
simulations.” International Journal of Theoretical and Applied Finance 3 (3): 479–82.
Pareto, Vilfredo. 1897. Cours d’économie politique. Geneva: Librairie Droz.
Paul, Wolfgang, and Jörg Baschnagel. 2013. Stochastic Processes: From Physics to Finance.
Berlin: Springer.
Paulson, Alan, Edgar Holcomb, and Robert Leitch. 1975. “The estimation of the parameters of
the stable laws.” Biometrika 62: 163–70.
Pearson, Karl. 1905a. “The problem of the random walk.” Nature 72 (1865): 294.
Pearson, Karl. 1905b. “The problem of the random walk (answer).” Nature 72 (1867): 342.
Pickhardt, Michael, and Goetz Seibold. 2011. “Income tax evasion dynamics: Evidence from an
agent-based econophysics model.” Working paper, University of Cottbus.
Pieters, Riek, and Baumgartner Hans. 2002. “Who talks to whom? Intra-and interdisciplinary
communication of economics journals.” Journal of Economic Literature 40 (2): 483–509.
Plerou, Vasiliki, Parameswaran Gopikrishnan, L. A. Nunes Amaral, Martin Meyer, and H.
Eugene Stanley. 1999. “Scaling of the distribution of price fluctuations of individual compa-
nies.” Physical Review E: Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics
60 (6.A): 6519–29.
Plerou, Vasiliki, and H. Eugene Stanley. 2008. “Stock return distributions: Tests of scaling and
universality from three distinct stock markets.” Physical Review E: Statistical, Nonlinear, and
Soft Matter Physics 77 (3.2): 037101.
Poitras, Geoffrey. 2009. “From Antwerp to Chicago: The history of exchange traded derivative
security contracts.” Revue d’Histoire des Sciences Humaines 20: 11–50.
Poitras, Geoffrey, and Franck Jovanovic, eds. 2007. Pioneers of Financial Economics, vol.
2: Twentieth-Century Contributions. Northampton, MA: Edward Elgar.
Poitras, Geoffrey, and Franck Jovanovic. 2010. “Pioneers of Financial Economics: Das Adam
Smith Irrelevanzproblem?” History of Economics Review 51 (Winter): 43–64.
Ponzi, Adam, and Yoji Aizawa. 2000. “Evolutionary financial market models.” Physica A: 507–23.
Popper, Karl Raimund. 1959. The Logic of Scientific Discovery. New York: Basic Books.
Porter, Theodore M. 1986. The Rise of Statistical Thinking, 1820–1900. Princeton, NJ: Princeton
University Press.
Potters, Marc, and Jean-Philippe Bouchaud. 2003. “More statistical properties of stock order
books and price impact.” Physica A 324: 133–40.
Preda, Alex. 2001. “The rise of the popular investor: Financial knowledge and investing in
England and France, 1840–1880.” Sociological Quarterly 42 (2): 205–32.
Preda, Alex. 2004. “Informative prices, rational investors: The emergence of the random walk
hypothesis and the 19th century ‘science of financial investments.’” History of Political
Economy 36 (2): 351–86.
Preda, Alex. 2006. “Socio-technical agency in financial markets: The case of the stock ticker.”
Social Studies of Science 36 (5): 753–82.
Preda, Alex. 2007. “Where do analysts come from? The case of financial chartism.” Sociological
Review 55 (s2): 40–64.
Preis, Tobias, and H. Eugene Stanley. 2010. “Trend switching processes in financial markets.” In
Econophysics Approaches to Large-Scale Business Data and Financial Crisis, edited by Misako
Takayasu, Tsutomu Watanabe, and Hideki Takayasu, 3–26. Tokyo: Springer Japan.
Press, S. James. 1967. “A compound events model for security prices.” Journal of Business 40
(3): 317–35.
Press, S. James. 1972. “Estimation in univariate and multivariate stable distributions.” Journal of
American Statistical Association 67: 842–46.
Prietula, Michael, Kathleen Carley, and Les Gasser, eds. 1998. Simulating
Organizations: Computational Models of Institutions and Groups. Cambridge, MA: MIT Press.
Quah, Danny T. 1995. “Business cycle empirics: Calibration and estimation.” Economic Journal
105 (433): 1594–96.
Queiros, Silvio M. Duarte, and Constantino Tsallis. 2005. “Bridging the ARCH model for fi-
nance and nonextensive entropy.” Europhysics Letters 69: 893–99. doi: 10.1209/epl/i2004-10436-6.
Quételet, Adolphe. 1848. Du système social et des lois qui le régissent. Paris: Guillaumin et Cie.
Rachev, Svetlozar T., John S. J. Hsu, Biliana S. Bagasheva, and Frank J. Fabozzi. 2008.
“Bayesian methods in finance.” In The Oxford Handbook of Bayesian Econometrics, edited
by John Geweke, Gary Koop, and Herman van Dijk, 439–512. New York: Oxford
University Press.
Rachev, Svetlozar T., Young Shin Kim, Michele L. Bianchi, and Frank J. Fabozzi. 2011. Financial
Models with Levy Processes and Volatility Clustering. Hoboken, NJ: John Wiley & Sons.
Redelico, Francisco O., Araceli N. Proto, and Marcel Ausloos. 2009. “Hierarchical structures
in the gross domestic product per capita fluctuation in Latin American countries.” Physica
A: Statistical Mechanics and its Applications 388 (17): 3527–35.
Reed, William J., and Barry D. Hughes. 2002. “From gene families and genera to incomes
and internet file sizes: Why power laws are so common in nature.” Physical Review E
66: 067103-1–067103-4.
Regnault, Jules. 1863. Calcul des chances et philosophie de la bourse. Paris: Mallet-Bachelier and
Castel.
Renfro, Charles G. 2004. “The early development of econometric modeling languages.” In
Computational Econometrics: Its Impact on the Development of Quantitative Economics, edited
by Charles G. Renfro, 145–66. Amsterdam: IOS Press.
Renfro, Charles G. 2009. The Practice of Econometric Theory: An Examination of the Characteristics
of Econometric Computation. Heidelberg: Springer.
Richmond, P. 2001. “Power law distributions and dynamic behaviour of stock markets.”
European Physical Journal B 20 (4): 523–26.
Richmond, Peter, Jurgen Mimkes, and Stefan Hutzler. 2013. Econophysics and Physical Economics.
Oxford: Oxford University Press.
Rickles, Dean. 2007. “Econophysics for philosophers.” Studies in History and Philosophy of
Science 38 (4): 948–78.
Rickles, Dean. 2008. “Econophysics and the complexity of the financial markets.” In Handbook
of the Philosophy of Science, vol. 10: Philosophy of Complex Systems, edited by John Collier and
Cliff Hooker, 531–65. New York: North Holland Elsevier Editions.
Rimmer, Robert H., and John P. Nolan. 2005. “Stable distributions in Mathematica.” Mathematica
Journal 9 (4): 776–89.
Roberts, Harry V. 1959. “Stock-market ‘patterns’ and financial analysis: Methodological sugges-
tions.” Journal of Finance 14 (1): 1–10.
Shiller, Robert. 1981. “Do Stock Prices Move Too Much to Be Justified by Subsequent Changes in Dividends?” American Economic Review 71 (3): 421–36.
Roehner, Bertrand M. 2002. Patterns of Speculation: A Study in Observational Econophysics.
Cambridge: Cambridge University Press.
Rosenfeld, Lawrence. 1957. “Electronic computers and their place in securities analyses.”
Analysts Journal 13 (1): 51–53.
Rosenfield, Patricia L. 1992. “The potential of transdisciplinary research for sustaining and
extending linkages between the health and social sciences.” Social Science and Medicine 35
(11): 1343–57.
Ross, Stephen A. 1976a. “The arbitrage theory of capital asset pricing.” Journal of Economic
Theory 13 (3): 341–60.
Ross, Stephen A. 1976b. “Options and efficiency.” Quarterly Journal of Economics 90 (1): 75–89.
Ross, Stephen A. 1977. “Return, risk and arbitrage.” In Risk and Return in Finance, edited by
Irwin Friend and James L. Bicksler, 189–217. Cambridge: Ballinger.
Rosser, Barkley. 2006. “The nature and future of econophysics.” In Econophysics of Stock and Other
Markets, edited by Arnab Chatterjee and Bikas K. Chakrabarti, 225–34. Milan: Springer.
Rosser, Barkley. 2008. “Econophysics and economics complexity.” Advances in Complex Systems
11 (5): 745–60.
Rosser, Barkley. 2010. “Is a transdisciplinary perspective on economic complexity possible?”
Journal of Economic Behavior and Organization 75 (1): 3–11.
Roy, A. D. 1952. “Safety first and the holding of assets.” Econometrica 20 (3): 431–49.
Rozenfeld, Hernán D., Diego Rybski, Xavier Gabaix, and Hernán A. Makse. 2011. “The Area
and Population of Cities: New Insights from a Different Perspective on Cities.” American
Economic Review 101 (5): 2205–25.
Rubinstein, Mark. 2002. “Markowitz’s ‘Portfolio Selection’: A fifty-year retrospective.” Journal of Finance 57 (3): 1041–45.
Rubinstein, Mark. 2003. “Great moments in financial economics: II. Modigliani-Miller the-
orem.” Journal of Investment Management 1 (2): 7–13.
Rutterford, Janette, and Dimitris Sotiropoulos. 2015. “‘Like the horses in a race’: Financial diver-
sification before modern portfolio theory.” Paper presented to the 19th Annual Conference
of the European Society for the History of Economic Thought (ESHET), Rome.
Rybski, Diego. 2013. “Auerbach’s legacy.” Environment and Planning A 45 (6): 1266–68.
Samorodnitsky, Gennady, and Murad Taqqu. 1994. Stable Non-Gaussian Random Processes.
New York: Chapman and Hall.
Samuelson, Paul A. 1965a. “Proof that properly anticipated prices fluctuate randomly.” Industrial
Management Review 6 (2): 41–49.
Samuelson, Paul A. 1965b. “Rational theory of warrant pricing.” Industrial Management Review
6 (2): 13–40.
Samuelson, Paul A. 1967. “Efficient portfolio selection for Pareto-Lévy Investments.” Journal of
Financial and Quantitative Analysis 2 (2): 107–22.
Săvoiu, Gheorghe. 2013. Econophysics: Background and Applications in Economics, Finance, and Sociophysics. Boston: Academic Press.
Săvoiu, Gheorghe G., and Ion Iorga-Simăn. 2013. “Sociophysics: A new science or a new domain for physicists in a modern university?” In Econophysics: Background and Applications in Economics, Finance, and Sociophysics, edited by Gheorghe G. Săvoiu, 149–66. Oxford: Academic Press.
Scalas, Enrico, and Kyungsik Kim. 2007. “The art of fitting financial time series with Lévy stable
distributions.” Journal of the Korean Physical Society 50 (1): 105–11.
Schabas, Margaret. 1990. A World Ruled by Number: William Stanley Jevons and the Rise of
Mathematical Economics. Princeton, NJ: Princeton University Press.
Schaden, Martin. 2002. “Quantum finance.” Physica A 316: 511–38.
Schinckus, Christophe. 2008. “The financial simulacrum.” Journal of Socio-Economics 37 (3): 1076–89.
Schinckus, Christophe. 2009a. “La diversification théorique en finance de marché: Vers
de nouvelles perspectives de l’incertitude.” Doctoral dissertation, University of Paris 1,
Panthéon-Sorbonne.
Schinckus, Christophe. 2009b. “La finance comportementale ou le développement d’un nou-
veau paradigme.” Revue d’Histoire des Sciences Humaines 20: 131–57.
Schinckus, Christophe. 2010a. “Econophysics and economics: Sister disciplines?” American
Journal of Physics 78 (1): 325–27.
Schinckus, Christophe. 2010b. “Is econophysics a new discipline? A neo-positivist argument.”
Physica A 389: 3814–21.
Schinckus, Christophe. 2010c. “A reply to comment on econophysics and economics: Sister dis-
ciplines?” American Journal of Physics 78 (8): 789–91.
Schinckus, Christophe. 2012a. “Financial economics and non-representative art.” Journal of
Interdisciplinary Economics 24 (1): 77–97.
Schinckus, Christophe. 2012b. “Statistical econophysics and agent-based econophysics.” Quantitative Finance 12 (8): 1189–92.
Schinckus, Christophe. 2013. “How do econophysicists make stable Lévy processes physically plausible?” Brazilian Journal of Physics 43 (4): 281–93.
Schinckus, Christophe. 2017. “Econophysics and complexity studies.” Doctoral dissertation,
University of Cambridge.
Sornette, Didier, and Peter Cauwels. 2015. “Financial bubbles: Mechanisms and diagnostics.”
Review of Behavioral Economics 2 (3): 279–305.
Sornette, Didier, and Anders Johansen. 1997. “Large financial crashes.” Physica A 245
(1): 411–22.
Sornette, Didier, and Anders Johansen. 2001. “Significance of log-periodic precursors to finan-
cial crashes.” Quantitative Finance 1 (4): 452–71.
Sornette, Didier, Anders Johansen, and Jean-Philippe Bouchaud. 1995. “Stock market crashes,
precursors and replicas.” Journal de Physique I 6 (1): 167–75.
Sornette, Didier, and Ryan Woodard. 2010. “Financial bubbles, real estate bubbles, derivative
bubbles, and the financial and economic crisis.” In Econophysics Approaches to Large-Scale
Business Data and Financial Crisis, edited by Misako Takayasu, Tsutomu Watanabe, and
Hideki Takayasu, 101–48. Tokyo: Springer Japan.
Sprenkle, Case M. 1961. “Warrant prices as indicators of expectations and preferences.” Yale
Economic Essays 1 (2): 178–231.
Sprenkle, Case M. 1964. “Some evidence on the profitability of trading in put and call options.”
In The Random Character of Stock Market Prices, edited by Paul H. Cootner. Cambridge,
MA: MIT Press.
Sprowls, Clay. 1963. “Computer education in the business curriculum.” Journal of Business 36
(1): 91–96.
Standler, Ronald B. 2009. “Funding of basic research in physical sciences in the USA.”
Working paper.
Stanley, H. Eugene. 1971. Introduction to Phase Transitions and Critical Phenomena.
New York: Oxford University Press.
Stanley, H. Eugene. 1999. “Scaling, universality, and renormalization: Three pillars of modern
critical phenomena.” Reviews of Modern Physics 71 (2): S358–S366.
Stanley, H. Eugene, Viktor Afanasyev, Luis A. Nunes Amaral, Sergey Buldyrev, Ary Goldberger, Shlomo Havlin, Heiko Leschhorn, Philipp Maass, Rosario N. Mantegna, Chung-Kang Peng, Peter Prince, Andrew Salinger, and Gandhimohan Viswanathan. 1996. “Anomalous fluctuations in the dynamics of complex systems: From DNA and physiology to econophysics.” Physica A 224 (1): 302–21.
Stanley, H. Eugene, Luis A. Nunes Amaral, David Canning, Parameswaran Gopikrishnan, Youngki Lee, and Yanhui Liu. 1999. “Econophysics: Can physicists contribute to the science of economics?” Physica A 269 (1): 156–69.
Stanley, H. Eugene, Xavier Gabaix, and Vasiliki Plerou. 2008. “A statistical physics view of finan-
cial fluctuations: Evidence for scaling universality.” Physica A 387 (1): 3967–81.
Stanley, H. Eugene, Luis A. Nunes Amaral, Parameswaran Gopikrishnan, Yanhui Liu, Vasiliki Plerou, and Bernd Rosenow. 2000. “Econophysics: What can physicists contribute to economics?” International Journal of Theoretical and Applied Finance 3 (3): 335–46.
Stanley, H. Eugene, and Vasiliki Plerou. 2001. “Scaling and universality in economics: Empirical
results and theoretical interpretation.” Quantitative Finance 1 (6): 563–67.
Stauffer, Dietrich. 2004. “Introduction to statistical physics outside physics.” Physica A 336: 1–5.
Stauffer, Dietrich. 2005. “Sociophysics simulations II: Opinion dynamics.” arXiv:physics/0503115.
Stauffer, Dietrich. 2007. “Opinion dynamics and sociophysics.” arXiv:0705.0891.
Stauffer, Dietrich, and Didier Sornette. 1999. “Self-organized percolation model for stock
market fluctuations.” Physica A 271: 496–506.
Steiger, William Lee. 1963. “Non-randomness in the stock market: A new test on an existent
hypothesis.” MS thesis, School of Industrial Management, MIT.
Stigler, George J. 1964. “A theory of oligopoly.” Journal of Political Economy 72: 44–61.
Stock, James H., and Mark W. Watson. 2001. “Vector autoregressions.” Journal of Economic
Perspectives 15 (4): 101–15.
Stumpf, Michael P. H., and Mason A. Porter. 2012. “Critical truths about power laws.” Science
335 (6069): 665–66.
Sullivan, Edward J. 2011. “A. D. Roy: The forgotten father of portfolio theory.” In Research in the
History of Economic Thought and Methodology, edited by Jeff E. Biddle and Ross B. Emmett,
73–82. Greenwich, CT: JAI Press.
Takayasu, Hideki. 2002. Empirical Science of Financial Fluctuations: The Advent of Econophysics.
Tokyo: Springer Japan.
Takayasu, Hideki, ed. 2006. Practical Fruits of Econophysics. Tokyo: Springer Japan.
Takayasu, Misako, Tsutomu Watanabe, and Hideki Takayasu, eds. 2010. Econophysics Approaches
to Large-Scale Business Data and Financial Crisis. Tokyo: Springer Japan.
Tan, A. 2005. “Long memory stochastic volatility and a risk minimization approach for derivative
pricing and hedging.” Doctoral dissertation, School of Mathematics, University of Manchester.
Taqqu, Murad S. 2001. “Bachelier and his times: A conversation with Bernard Bru.” Finance and
Stochastics 5 (1): 3–32.
Teichmoeller, John. 1971. “A note on the distribution of stock price changes.” Journal of the
American Statistical Association 66: 282–84.
Theiler, James, Stephen Eubank, André Longtin, Bryan Galdrikian, and J. Doyne Farmer. 1992.
“Testing for nonlinearity in time series: The method of surrogate data.” Physica D: Nonlinear
Phenomena 58 (1): 77–94.
Todd, Loreto. 1990. Pidgins and Creoles. London: Routledge.
Treynor, Jack L. 1961. “Toward a theory of market value of risky assets.” Available at SSRN.
Tripp, Omer, and Dror Feitelson. 2001. “Zipf ’s Law Revisited.” Working paper, School of
Engineering and Computer Science, Jerusalem.
Tusset, Gianfranco. 2010. “Going back to the origins of econophysics: The Paretian conception
of heterogeneity.” Paper presented to the 14th ESHET Conference, Amsterdam.
Upton, David, and Donald Shannon. 1979. “The stable Paretian distribution, subordinated
stochastic processes, and asymptotic lognormality: An empirical investigation.” Journal of
Finance 34: 1031–39.
van der Vaart, A. W. 2000. Asymptotic Statistics. Cambridge: Cambridge University Press.
Vitanov, Nikolay K., and Zlatinka I. Dimitrova. 2014. Bulgarian Cities and the New Economic
Geography. Sofia: Vanio Nedkov.
Voit, Johannes. 2005. Statistical Mechanics of Financial Markets. 3rd ed. Berlin: Springer.
Von Plato, Jan. 1994. Creating Modern Probability: Its Mathematics, Physics, and Philosophy in
Historical Perspective. New York: Cambridge University Press.
von Smoluchowski, Marian. 1906. “Zur kinetischen Theorie der Brownschen Molekularbewegung und der Suspensionen.” Annalen der Physik 21: 756–80.
Walter, Christian. 2005. “La gestion indicielle et la théorie des moyennes.” Revue d’Économie
Financière 79: 113–36.
Wang, Jie, Chun-Xia Yang, Pei-Ling Zhou, Ying-Di Jin, Tao Zhou, and Bing-Hong Wang. 2005.
“Evolutionary percolation model of stock market with variable agent number.” Physica A
354: 505–17.
Wang, Yuling, Jing Wang, and Fengshan Si. 2012. “Conditional value at risk of stock return
under stable distribution.” Lecture Notes in Information Technology 19: 141–45.
Watanabe, Shinzo. 2009. “The Japanese contributions to martingales.” Journal Électronique d’Histoire des Probabilités et de la Statistique 5 (1): 1–13.
Weber, Ernst Juerg. 2009. “A short history of derivative security markets.” In Vinzenz Bronzin’s
Option Pricing Models, edited by Wolfgang Hafner and Heinz Zimmermann, 431–66.
Berlin: Springer.
Weintraub, Robert E. 1963. “On speculative prices and random walks: A denial.” Journal of
Finance 18 (1): 59–66.
Welch, Ivo. 1992. “Sequential sales, learning and cascades.” Journal of Finance 47 (2): 695–732.
Weron, Aleksander, Szymon Mercik, and Rafal Weron. 1999. “Origins of the scaling behaviour
in the dynamics of financial data.” Physica A 264: 562–69.
Weston, J. Fred. 1967. “The state of the finance field.” Journal of Finance 22 (4): 539–40.
Whitley, Richard Drummond. 1986a. “The rise of modern finance theory: Its characteristics
as a scientific field and connection to the changing structure of capital markets.” In Research
in the History of Economic Thought and Methodology, edited by Warren J. Samuels, 147–78.
Stanford, CA: JAI Press.
Whitley, Richard Drummond. 1986b. “The structure and context of economics as a scientific field.” In Research in the History of Economic Thought and Methodology, edited by Warren J. Samuels, 179–209. Stanford, CA: JAI Press.
Wickens, Michael R. 1995. “Real business cycle analysis: A needed revolution in macroecono-
metrics.” Economic Journal 105 (433): 1637–48.
Widom, Benjamin. 1965a. “Equation of state in the neighborhood of the critical point.” Journal
of Chemical Physics 43: 3898–905.
Widom, Benjamin. 1965b. “Surface tension and molecular correlations near the critical point.”
Journal of Chemical Physics 43: 3892–97.
Wiener, Norbert. 1923. “Differential-space.” Journal of Mathematics and Physics 2: 131–74.
Williams, John Burr. 1938. The Theory of Investment Value. Cambridge, MA: Harvard
University Press.
Willis, J. C. 1922. Age and Area: A Study in Geographical Distribution and Origin of Species. Cambridge: Cambridge University Press.
Willis, J. C., and G. Udny Yule. 1922. “Some statistics of evolution and geographical distribution
in plants and animals, and their significance.” Nature 109: 177–79.
Wilson, Kenneth G. 1993. “The renormalization group and critical phenomena.” In Nobel
Lectures in Physics (1981–1990), edited by Gösta Ekspong, 102–32. London: World
Scientific.
Working, Holbrook. 1934. “A random-difference series for use in the analysis of time series.”
Journal of the American Statistical Association 29: 11–24.
Working, Holbrook. 1949. “The investigation of economic expectations.” American Economic
Review 39 (3): 150–66.
Working, Holbrook. 1953. “Futures trading and hedging.” American Economic Review 43
(3): 314–43.
Working, Holbrook. 1956. “New ideas and methods for price research.” Journal of Farm
Economics 38: 1427–36.
Working, Holbrook. 1958. “A theory of anticipatory prices.” American Economic Review 48
(2): 188–99.
Working, Holbrook. 1960. “Note on the correlation of first differences of averages in a random
chain.” Econometrica 28 (4): 916–18.
Working, Holbrook. 1961. “New concepts concerning futures markets and prices.” American
Economic Review 51 (2): 160–63.
Wyart, Matthieu, Jean-Philippe Bouchaud, Julien Kockelkoren, Marc Potters, and Michele Vettorazzo. 2008. “Relation between bid-ask spread, impact and volatility in order-driven markets.” Quantitative Finance 8: 41–57.
Yule, G. Udny. 1925. “A mathematical theory of evolution, based on the conclusions of Dr. J. C.
Willis, F.R.S.” Philosophical Transactions of the Royal Society B 213 (402–10): 21–87.
Zanin, Massimiliano, Luciano Zunino, Osvaldo A. Rosso, and David Papo. 2012. “Permutation entropy and its main biomedical and econophysics applications: A review.” Entropy 14 (8): 1553–77. doi: 10.3390/e14081553.
Zhang, Qiang, and Jiguang Han. 2013. “Option pricing in incomplete markets.” Applied
Mathematics Letters 26 (10): 975.
Zipf, George K. 1935. The Psychobiology of Language. New York: Houghton Mifflin.
Zipf, George K. 1949. Human Behaviour and the Principle of Least Effort. Reading,
MA: Addison-Wesley.
INDEX