Computational Physics
Peter Hertel
Fachbereich Physik
Universität Osnabrück
1 Introduction
This series of lectures is about standard numerical techniques for computing
in physics. Although teachers prefer examples which can be solved by pencil
and paper, or by chalk on the blackboard, most problems cannot be solved
analytically. They require computations. Computational physics has a long
tradition. However, computational physics cannot be taught any more in
the traditional way.
In 1980, the first IBM personal computer could address 64 kB of memory,
its mass storage was a floppy of 360 kB, and it ran at 8 MHz with an
8 bit processor. At 15,000 EUR it was cheap compared with mainframes.
Today, an electronic computer, such as my laptop, costs 1500 EUR. Its 32 bit
processor runs at 1.2 GHz, is supported by 512 MB fast storage, 40 GB
mass storage, ethernet and modem controller on board, and so forth. This
amounts to a many-hundred-fold increase in computational power within
the last two decades.
Moreover, high quality software to run on these cheap and highly efficient
computers has become affordable as well. Just study the price list of the
MathWorks corporation for Matlab.
Today, a course on computational physics, in particular an introduction,
should concentrate on how to use available and cheap high quality software,
and not how to develop it.
I have decided on using Matlab for various reasons.
First, because it is relatively cheap.
Second, because it comes with an integrated development environment such
as a command window, a language sensitive editor, a debugger, a workspace
inspector, and so forth.
Third, because it includes the software treasures of the last 50 years. In
particular, the LINPACK and EISPACK packages of linear algebra, which form
the backbone of computational physics, are the core of Matlab. (Cleve
Moler is one of the authors of the LINPACK and EISPACK packages and their
documentation, and one of their main contributors. Today he is the chief
scientist of MathWorks.) In fact, Matlab was conceived as a meta-language
for using such FORTRAN code without getting lost in unnecessary details.
Fourth, Matlab now provides sparse matrix technology, which is essential
for partial differential equations.
Fifth, Matlab makes you think in terms of objects; it is not a procedural
language. After a short while, you will think of x as a variable, not as an
array of equally spaced representative values.
And last but not least, Matlab is a powerful graphics machine for visualizing
data, both two- and three-dimensional. In particular, Matlab allows for
the export of pictures in encapsulated PostScript format.
In these lectures on Computational Physics we cannot cover a number of
other important fields, such as data acquisition, graphical user interfaces or
fine points of graphics programming. Instead, we will restrict ourselves to
standard techniques of doing analysis on a computer.
This text addresses students of physics who are used to learning by carefully
chosen examples. It is neither systematic nor complete.
2 Matrices
The basic objects of Matlab are matrices. A number is a 1 × 1 matrix.
A row vector is a 1 × N matrix, a column vector is an N × 1 matrix. A
polynomial is represented by the row vector of its coefficients. A string is a
row vector of 2 byte numbers which represent characters. And so forth.
The most important operator is the pair of square brackets, used for aggregation. The
comma operator separates objects which are to be aggregated horizontally.
The semicolon operator separates objects which are to be aggregated ver-
tically. The semicolon operator, at the end of an expression, suppresses
echoing.
Here are a few examples:
>> A=[1,2,3;4,5,6];
>> B=[A;[7,8,9]];
The size function takes in a matrix and returns the numbers of rows and
columns. Try
>> [r,c]=size(A);
Note that Matlab distinguishes between lower and upper case.

>> B=eye(5);

generates a square 5 × 5 unit matrix. Check it by typing
>> B(2,1)
>> B(2,2)
The prime operator ' transposes. If x is a row vector, then

>> xc=x'

is the corresponding column vector.
Note that matrices are used just as functions. Arguments are in parentheses
and separated by commas. The first argument is a vector of row indices,
the second argument a vector of column indices.
Another often used function for creating vectors is linspace. This function
has three arguments: the lower value of an interval, the upper value, and
the number of representative points.
>> x=linspace(0,10,128);
produces a row vector of size 1 × 128. Its first element is 0, its last is 10,
and the elements in between follow by linear interpolation. The same vector
is produced by

>> x=[0:10/127:10];
Vectors may also serve as indices. If x were a 10 × 10 matrix, expressions like

>> y=x(1:2:10,1:2:10);

are allowed as well. Matlab's syntax rules, if strictly applied, would require
>> k=[[1],[4],[9:2:17],[31]];
They are, however, sensibly relaxed. In many cases the comma operator for
horizontal alignment may be omitted;
>> k=[1 4 9:2:17 31];
works as well. We shall try to avoid this kind of syntax relaxation although
you may encounter it in the help system.
3 Numbers
Matlab is rather strict with numbers. It adheres to the IEEE (Institute of
Electrical and Electronics Engineers) convention on representing real numbers
and on algebraic operations with them. It is obvious that real numbers must
be approximated. The approximation, however, should be reliable and as good
as possible.
There is a predefined number eps, the distance from 1 to the next larger
representable number: 1 and 1+eps/2 are identical in floating point
arithmetic. On my computer eps=2.2204e-016. Roughly speaking, the inherent
accuracy is 16 digits. This is sufficient for physics, where not more than
6 digits are meaningful.
Rounding errors are randomly distributed. With N operations, the error
grows proportionally to √N. Hence, about 10^20 operations on real numbers
are allowed before the statistical error exceeds the 6 digit limit. If your
computer runs at 1.0 GHz, approximately 10^11 seconds are required to perform
such a large number of operations. This is more than 1000 years. Put
otherwise: rounding errors, unless crucial, will not pose a problem.
Rounding errors become crucial if a result is the difference between two
large, but almost equal numbers. Try
>> x=1e-15;
>> y=((1+x)-1)/x
The result is 1.1102, but should be 1.
Assume you want to calculate f (x) = sin(x)/x for x ∈ [0, 2π]. The following
code does it:
>> x=linspace(0,2*pi,1024);
>> y=sin(x)./x;
Typing
>> y(1)
produces NaN, not a number. And this should be so. After all, we have
divided zero by zero, and the result is neither 1 nor infinity; it is not defined.
The IEEE standard of floating point representation, which is realized by
all Intel processors and supported by Matlab software, provides for a bit
pattern which is not a valid number. Any operation with NaN results in NaN.
NaNs tend to spread out. For example,
>> z=y/sum(y);
makes every component of z a NaN. The remedy is to treat the singular point
separately:
>> k=find(x~=0);
>> y(k)=sin(x(k))./x(k);
Alternatively, the singular point may be handled by an explicit loop in a
script file. Line numbers instead of the >> prompt indicate that we have
executed a script file, test.m in this case.

1 for k=1:1024
2 if x(k)==0
3 y(k)=1;
4 else
5 y(k)=sin(x(k))/x(k);
6 end
7 end
Although it works, this is a dirty trick. The problem is avoided, not solved.
4 Functions
Functions map one or more real or complex variables into real or complex
numbers. Functions are essential for physics: they describe relationships
between measurable quantities.
Let us study an example. The spectral intensity of black body radiation is
described by Planck’s famous formula
S(x) = (15/π^4) x^3/(e^x − 1) ,   (1)

where x = ħω/k_B T. Note that S is normalized: its integral over all x is 1.
[Figure 1: The black body radiation spectral intensity S(x) where x = ħω/k_B T]

Fig. 1 was included into this text by a line like
\FIG{planck}{The black body radiation spectral intensity $S(x)$
where $x=\hbar\omega/\kB T$.}
My \FIG macro is defined as follows:
1 \newcommand{\FIG}[2]{
2 \begin{figure}[!hbt]
3 \begin{center}
4 \begin{minipage}{0.85\textwidth}
5 \centering{\includegraphics[width=95mm]{#1}}
6 \caption{\label{#1}\small{#2}}
7 \end{minipage}
8 \end{center}
9 \end{figure}
10 }
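The listing of the planck function itself is missing from this copy. A
minimal sketch of such an m-file, implementing (1) and guarding against the
x = 0 division discussed in section 3, might read:

% hypothetical reconstruction of planck.m
function y=planck(x);
y=zeros(size(x)); % the x=0 limit of (1) is 0
k=find(x~=0);
y(k)=15/pi^4*x(k).^3./(exp(x(k))-1);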
>> quad(@planck,0,20)

results in 1.00000000003192.
The function, here planck, is referred to by its handle @planck.
>> quad(’planck’,0,20)
will work as well. Although there are subtle differences, we shall not discuss
them here.
quad and quadl are built-in functions which operate on functions. You may
inquire by typing
>> help funfun
fminbnd is such a function function. It finds out the position of the minimum
within prescribed bounds. However, we want to know about the maximum
position of our planck function. Therefore we must define the negative of
the planck probability distribution. For this we do not require a new m-file,
because there is the inline statement:
>> np=inline(’-planck(x)’);
Then

>> xmax=fminbnd(np,0,20)

delivers 2.8214.
We now may ask
>> elo=quad(@planck,0,xmax);
>> ehi=quad(@planck,xmax,20);
for finding the amount of energy below and above the spectral intensity
maximum. The two numbers should add up to 1. Do they?
Derivatives of higher than first order can always be eliminated: the second
derivative is the first derivative of the first derivative, and so forth.
Let us investigate one of the oldest physical problems, the motion of planets
in the sun’s gravitational field.
Recall that the conservation of angular momentum requires the planet to
move in a plane. The planet's coordinates x_1(t), x_2(t) obey the following
differential equation:

m ẍ_j(t) = − G m M x_j / (x_1^2 + x_2^2)^{3/2} .   (4)
Measuring x_j in units of a length a and t in units of τ = √(a^3/GM)
(2πτ is one year), we arrive at

ẍ_j(t) = − x_j / (x_1^2 + x_2^2)^{3/2} .   (5)
With y_1 = x_1, y_2 = ẋ_1, y_3 = x_2, y_4 = ẋ_2, the first order system

ẏ_1 = y_2   (6)
ẏ_2 = − y_1/r^3   (7)
ẏ_3 = y_4   (8)
ẏ_4 = − y_3/r^3   (9)

results, where r = √(y_1^2 + y_3^2).
We describe this set of four ordinary differential equations by the following
derivative field:
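The listing of this derivative field is missing from this copy. A minimal
sketch of such a function, consistent with (6)-(9) and with the calls below,
might read:

% hypothetical reconstruction of kepler.m
function yd=kepler(t,y);
r=sqrt(y(1)^2+y(3)^2);
yd=[y(2); -y(1)/r^3; y(4); -y(3)/r^3];

The orbit of Fig. 2 can then be produced by a call like
>> [t,y]=ode45(@kepler,[0:0.1:50],[1;0;0;0.8]);
with default tolerances.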
Although the trajectories are ellipses, they shrink more and more (Fig. 2).
The reason is not physics, but numerics. We substantiate this remark by
plotting the total energy in Fig. 3:
>> Ekin=0.5*(y(:,2).*y(:,2)+y(:,4).*y(:,4));
>> Epot=-1./sqrt(y(:,1).*y(:,1)+y(:,3).*y(:,3));
>> plot(t,Ekin+Epot);
[Figure 2: Planetary motion without explicit accuracy control]

[Figure 3: Total energy vs. time without accuracy control]

We do better by setting a lower relative tolerance (the default being 0.001):
>> tol=odeset(’RelTol’,1e-8);
>> [t,y]=ode45(@kepler,[0:0.1:50],[1;0;0;0.8],tol);
[Figure 4: Planetary motion with accuracy control]
ω = e^{−2πi/N} is the N-th root of 1. Because Ω/√N is unitary,

Σ_{k=0}^{N−1} Ω_{jk} (Ω†)_{kl} = N δ_{jl} ,   (12)

so that the transformation may be inverted:

g_k = (1/N) Σ_{j=0}^{N−1} e^{+2πijk/N} G_j .   (13)

With sampling times t_k = kτ and frequencies f_j = j/(Nτ) this reads

G_j = G(f_j) = Σ_{k=0}^{N−1} e^{−2πi f_j t_k} g_k   (14)

and

g_k = g(t_k) = (1/N) Σ_{j=0}^{N−1} e^{2πi f_j t_k} G_j .   (15)
Figure 5: Noisy cosine (50 Hz) vs. time sampled at millisecond steps
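The first ten lines of the script that generated Fig. 5 are missing from
this copy. A hypothetical reconstruction, consistent with the fragment
below (N samples, spacing tau, data vector g):

1 % this file is ncos2.m (lines 1-10 are a hypothetical reconstruction)
2 N=1000;
3 tau=0.001; % sampled at millisecond steps
4 t=tau*[0:N-1];
5 f0=50; % signal frequency in Hz
6 g=cos(2*pi*f0*t)+randn(size(t)); % cosine plus white noise
7 figure(1);
8 plot(t,g);
9 print -deps2 ncos1.eps;
10 figure(2);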
11 G=fft(g);
12 f=[0:N/2-1]/N/tau;
13 S=abs(G(1:N/2)).^2;
14 plot(f,S);
15 print -deps2 ncos2.eps;
In Fig. 6 we have plotted the spectral power S = |G(f)|^2 for positive
frequencies. The prominent peak at f = 50 Hz is evident. The remaining
spectral power is more or less constant, which is indicative of white noise.
How can one extract the signal so convincingly from a lot of noise? Not
by looking at the sampled data. They appear to be random. However,
by performing a Fourier analysis, the different harmonic contributions are
excited with their proper phases, so that they add up.
If you know that the signal is spoilt by white noise, you may remove it to a
large extent. You might say
>> H=G.*(abs(G)>150);
>> h=ifft(H);
[Figure 6: Spectral power of noisy cosine vs. frequency (Hz)]
The fast Fourier transform is an algorithm which exploits the fact that one
operation on 2N data points can be reduced to two operations on N data points.
This reduces the complexity from N^2 to N log N, which makes all the difference.
7 Fitting data
Measured data are to be compared with a theoretically justified model.
Discrepancies may arise for two different reasons: the data are not accurate,
or the model is inappropriate. Inaccurate data may arise because of systematic
or statistical errors.
Let us first formulate the standard task.
You have a set of data (y_1, x_1), (y_2, x_2), . . . , (y_N, x_N) and a model
y = f_p(x). Here p denotes a set of parameters which are to be determined.
For this we define the variance
v(p) = (1/(N−1)) Σ_{i=1}^{N} (f_p(x_i) − y_i)^2 .   (16)
y = y_0 + s e^{−a(x−x_0)^2} .   (18)
[Figure 7: Noisy quadratic relationship between x (abscissa) and y (ordinate)
and reconstruction by quadratic regression]
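The opening lines of the fitting script are missing from this copy. A
minimal sketch, assuming anonymous functions and the parameter order
suggested by tp=[3;1;4;2.5] below (offset, height, width, position),
might read:

1 peak=@(p,x) p(1)+p(2)*exp(-p(3)*(x-p(4)).^2); % the model, a Gaussian peak (18)
2 misfit=@(p,x,y) norm(peak(p,x)-y); % misfit of data x,y for parameters p
3 % (hypothetical reconstruction; the original lines 1-3 are missing)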
The first command defines the model, a Gaussian peak. It is the Matlab
equivalent of (18). The next line describes the misfit of a data set x,y to
the model, for a certain parameter set p. We next simulate data:
4 tp=[3;1;4;2.5];
5 x=linspace(0,5,1024);
6 ty=peak(tp,x);
7 ny=ty+0.5*randn(size(x));
8 fp=fminsearch(misfit,tp,[],x,ny);
The first argument to fminsearch is the misfit, or cost function. The second
argument is a parameter set from which to begin the search. Then comes
an options structure for which we specify nothing, i. e. []. In this case the
default values for fminsearch are used.
Normally the cost function has just one argument, namely the vector of vari-
ables over which to minimize. If the cost function requires more arguments
(which remain constant when searching for a minimum), these are to be
specified in the remainder of the argument list of fminsearch. In our case
it is the data set x,ny to be fitted.
Let us plot the result:
9 fy=peak(fp,x);
10 plot(x,ty,’-k’,x,ny,’.b’,x,fy,’-r’,’LineWidth’,2);
[Figure 8: A Gaussian peak on top of background: signal (black), signal plus
noise (blue dots) and reconstructed signal (red)]
However, what will be the result if the search for the best fit starts from
another parameter set which is not so close?
Let us try sp=[1;1;1;1] as the starting parameter set. This will work.
However, with sp=[0;0;0;0] the minimization procedure does not converge
to a sensible fit.
We might try to fiddle with the optimization options:
11 opt=optimset(’TolFun’,1e-8,’TolX’,1e-8,...
12 ’MaxFunEvals’,20000,’MaxIter’,10000);
13 fp=fminsearch(misfit,[0;0;0;0],opt,x,ny);
This helps occasionally (bear in mind that we solve a different problem each
time because of the random numbers in the data set), but more often not.
Obviously the minimization procedure runs into this or another shallow side
minimum and misses the absolute minimum.
You might program a coarse pre-search such as this:
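The pre-search listing is missing here. A minimal sketch of such a coarse
grid search, assuming the peak model and the data x,ny from above (the grid
choices are guesses):

% hypothetical coarse pre-search over a grid of starting parameters
best=Inf;
for a=[1 2 4 8] % trial peak widths
  for x0=linspace(0,5,11) % trial peak positions
    p=[mean(ny); max(ny)-mean(ny); a; x0]; % offset and height from the data
    m=misfit(p,x,ny);
    if m<best
      best=m;
      sp=p;
    end
  end
end
fp=fminsearch(misfit,sp,[],x,ny); % refine the best grid point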
8 Simulated Annealing
Minimization problems are among the most frequently encountered tasks of
computational physics. There is a set of parameters, and each parameter
set is weighted by a cost function. The problem to be solved is: find the
parameter set for which the costs are minimal.
We have already discussed such an optimization problem in the context of model
fitting. The Nelder-Mead simplex method (fminsearch) is never the best,
but always a good choice to tackle the problem. Remember: you have to
specify a starting point when searching for the minimum. In general, the
local minimum closest to the starting point is found.
If there are very many parameters, there are also very many local minima,
but only one global minimum. Therefore, a coarse search for a sensible
starting point is the first step. In many cases this first step is visual
inspection of data, or intuition, or the result of previous minimization
efforts.
There are however problems where there are so many local minima that a
direct search for the global minimum is appropriate.
Here we discuss a computational classic: the traveling salesman problem.
There are NC cities to be visited, their Cartesian coordinates are stored in
vectors XC and YC, respectively. The itinerary is described by a permutation
of indices from 1 to NC, a vector it. The cost function is the length of
a round trip, as calculated by
1 % this file is ts_length.m
2 function len=ts_length(it);
3 global NC XC YC
4 itt=it([2:NC,1]);
5 len=sum(sqrt((XC(itt)-XC(it)).^2+(YC(itt)-YC(it)).^2));
The global statement makes the listed variables visible to all functions
that declare them global as well.
We silently assume that the effort to travel between two cities is proportional
to their distance. This can easily be modified by introducing a table which
describes the effort to travel from one city to another.
The number of possible itineraries is finite, but prohibitively large. If
there are only 20 cities, we have to check almost 10^17 possibilities. If one
check lasts only 1 µs, this requires 10^11 s, almost 3000 years.
Here we describe the simulated annealing algorithm. A sample is heated
and then slowly cooled down. Thereby all sorts of defects may be mended.
Temperature allows for random fluctuations, and cooling will lead to the
state of lowest energy. Even if the system is close to a local energy minimum,
a fluctuation may send it to an even better minimum.
If a new configuration has a lower energy (cost), it is accepted straight away.
If its energy is higher, it shall be accepted with probability exp(−∆E/T )
where T is the temperature and ∆E the energy increase.
So we set up a sensible temperature, allow for random variations of the
itinerary, and let the system cool down.
Here is a not-yet optimized algorithm.
10 % temperature reduction. For each temperature, the system is annealed.
The relevant function returns the new best itinerary and the number of
successful trials to find a shorter path. The program stops if there are no
more improvements.
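Apart from the comment line quoted above, the main program is missing from
this copy. A minimal sketch of such a driver, following the stopping rule
just described (the names ts_temp0 and ts_anneal as well as the cooling
factor are guesses):

% hypothetical sketch of ts_problem.m
global NC XC YC
NC=20;
XC=rand(1,NC); YC=rand(1,NC); % random city coordinates
it=randperm(NC); % initial itinerary
T=ts_temp0(it); % initial temperature, see below
while 1
  [it,suc]=ts_anneal(it,T); % anneal at this temperature
  if suc==0
    break % no more improvements: stop
  end
  T=0.9*T; % temperature reduction
end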
Fig. 9 shows the solution. It was created by calling ts_problem (see above).
The anneal function lets the system perform random fluctuations. If a
modified itinerary is shorter, it is accepted right away. If it is longer, but
not too much, it will be accepted as well. How much is too much is ruled by
the temperature. The lower the temperature, the likelier a worsening will
be rejected.
Here is the code:
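The first nine lines of this function are missing. A hypothetical
reconstruction consistent with the fragment below (the function name and the
limits max_rep and max_suc are guesses):

1 % this file is ts_anneal.m (lines 1-9 are a hypothetical reconstruction)
2 function [it,suc]=ts_anneal(it,T);
3 global NC XC YC
4 max_rep=100*NC; % at most that many trials per temperature
5 max_suc=10*NC; % stop early after that many successes
6 min_len=ts_length(it);
7 suc=0;
8 % Metropolis sweep at temperature T
9 for rep=1:max_rep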
10 im=ts_modify(it);
11 len=ts_length(im);
12 if rand<exp(-(len-min_len)/T)
13 it=im;
14 end;
15 if len<min_len
16 min_len=len;
17 suc=suc+1;
18 end
19 if suc==max_suc
20 break
21 end
22 end
We try at most max_rep times. And if the number of successes is too large,
we stop as well and try a lower temperature. The decisive statement is

if rand<exp(-(len-min_len)/T)
The itinerary it is split into a randomly chosen sub-itinerary is, its leader
ib and its trailer ia. fliplr flips matrices in the left-right direction.
Note that this simple code does not guarantee that the original and the
modified itinerary differ.
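The listing of ts_modify is missing here. A hypothetical sketch matching the
description above (the way the cut points are drawn is a guess):

% hypothetical reconstruction of ts_modify.m
function im=ts_modify(it);
global NC
c=sort(ceil(NC*rand(1,2))); % two random cut positions
ib=it(1:c(1)); % leader
is=it(c(1)+1:c(2)); % sub-itinerary
ia=it(c(2)+1:NC); % trailer
im=[ib,fliplr(is),ia]; % reverse the sub-itinerary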
In order to be complete, we also show how the initial temperature was
chosen:
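The corresponding listing is also missing. A plausible sketch, setting the
temperature to the scale of typical length changes (the function name, the
factor 10 and the sample size are guesses):

% hypothetical choice of the initial temperature, ts_temp0.m
function T=ts_temp0(it);
len=ts_length(it);
for n=1:100 % sample random modifications
  d(n)=abs(ts_length(ts_modify(it))-len); % typical length change
end
T=10*mean(d); % hot enough to accept typical worsenings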
A first derivative may be approximated by a symmetric difference quotient,

f'(x) = lim_{h→0} [f(x + h/2) − f(x − h/2)]/h ,   (19)

and a second derivative by

f''(x) ≈ [f(x + h) − 2f(x) + f(x − h)]/h^2 .   (20)
Generalizations to different step widths h_x, h_y and to more than two
dimensions are obvious.
Our first example is a simple ordinary differential equation,

f''(x) + f(x) = 0 ,

with boundary values f(0) = 0 and f(π/2) = 1, the solution of which is
f(x) = sin(x).
The normal ordinary differential equation solvers require initial conditions,
such as f(0) = 0 and f'(0) = 1. We, however, are faced with boundary
conditions.
The following program realizes the finite difference method.
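The listing is missing from this copy. A minimal sketch, consistent with the
remarks below about diag, RS(1)=0, RS(N-1)=-1/h^2 and x=M\y (the number of
points follows the text, the rest is a guess):

% hypothetical reconstruction of fdm_sin.m
N=16; % 15 interior points, 17 points in total
h=(pi/2)/N;
x=[1:N-1]'*h; % interior points
e=ones(N-2,1);
M=(diag(e,1)-2*eye(N-1)+diag(e,-1))/h^2+eye(N-1); % f'' + f
RS=zeros(N-1,1);
RS(1)=0; % boundary value f(0)=0
RS(N-1)=-1/h^2; % boundary value f(pi/2)=1
f=M\RS;
plot([0;x;pi/2],sin([0;x;pi/2]),'-k',[0;x;pi/2],[0;f;1],'.r');
axis tight;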
Figure 10: Analytic solution (full line) and finite difference approximation
(red) of f'' + f = 0 with f(0) = 0 and f(π/2) = 1 (black)
We have plotted the result in Fig. 10. The maximum deviation is 2 × 10^−4
although only 15 representative points have been used. Well, better say
17, because the entries RS(1)=0 and RS(N-1)=-1/h^2 reflect the boundary
conditions. The second derivatives at the smallest and largest interior point
require information from outside.
Note the diag function, which allows one to set and to extract diagonals.
Also note that x=M\y is short for solving the system of linear equations y=M*x.
axis tight does what it says: the axis frame is fitted as tightly as possible
to the data.
In our second example we want to solve a two-dimensional boundary problem.
Let us reproduce the Matlab logo, which is a visualization of the lowest
order eigenmode of the following equation:
−∆u = Λu . (23)
with u = 0 on ∂Ω.
We shall not describe the most efficient solution but a program which is easy
to generalize.
We begin by describing the domain.
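The lines that set up the domain (script lines 1-7) are missing here. A
hypothetical sketch of an L-shaped domain matrix, framed by zeros so that the
laplace function below may safely inspect all neighbors (the sizes and the
value of h are guesses):

1 % this file is ml_logo.m (lines 1-7 are a hypothetical reconstruction)
2 h=0.05;
3 n=round(1/h);
4 Domain=zeros(2*n+3);
5 % L-shaped region of ones, embedded in a frame of zeros
6 Domain(2:2*n+2,2:n+1)=1;
7 Domain(2:n+1,2:2*n+2)=1;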
8 [L,J,K]=laplace(Domain,h);
9 [u,d]=eigs(-L,1,’sm’);
10 s=sign(sum(u));
11 field=zeros(size(Domain));
12 for a=1:size(u)
13 field(J(a),K(a))=s*u(a);
14 end;
15 mesh(field);
16 axis off;
17 print -depsc ml_logo.eps;
The laplace function, which calculates the Laplacian as a sparse matrix and
returns the indexing scheme, reads as follows:
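The first nine lines of laplace.m are missing. A hypothetical reconstruction
that fits the surviving fragment:

1 % this file is laplace.m (lines 1-9 are a hypothetical reconstruction)
2 function [L,J,K]=laplace(D,h);
3 [Nj,Nk]=size(D);
4 Nv=0; % number of domain points found so far
5 jj=[]; kk=[]; % grid coordinates of the domain points
6 aa=zeros(Nj,Nk); % maps grid position to point number
7 % first pass: number the domain points consecutively
8 for j=2:Nj-1
9 for k=2:Nk-1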
10 if D(j,k)==1
11 Nv=Nv+1;
12 jj(Nv)=j;
13 kk(Nv)=k;
14 aa(j,k)=Nv;
15 end
16 end
17 end
18 L=sparse(Nv,Nv);
19 for a=1:Nv
20 j=jj(a);
21 k=kk(a);
22 L(a,a)=-4/h^2;
23 if D(j+1,k)==1
24 L(a,aa(j+1,k))=1/h^2;
25 end
26 if D(j-1,k)==1
27 L(a,aa(j-1,k))=1/h^2;
28 end
29 if D(j,k+1)==1
30 L(a,aa(j,k+1))=1/h^2;
31 end
32 if D(j,k-1)==1
33 L(a,aa(j,k-1))=1/h^2;
34 end
35 end
36 J=jj(1:Nv);
37 K=kk(1:Nv);
10 Propagation
In the previous section we have studied a special class of partial differential
equations. The solution was fixed by values on a boundary. Here we address
another class of problems. A field is prescribed at time t = 0. How will it
propagate in the course of time?
Let us study a simple, but typical partial differential equation of this type
(heat transport, diffusion, etc.), namely

u̇ = ∆u ,   (25)
for simplicity in one spatial dimension, u = u(t, x). The dot denotes partial
differentiation with respect to the time argument t, the Laplacian is the
second derivative with respect to the space argument x. The spatial region
is an interval, 0 ≤ x ≤ 1, say. Time begins with t = 0. We look for a
solution of (25) subject to the following conditions: the initial profile
u(0, x) = u_0(x) is prescribed, as are the boundary values u(t, 0) = b_0(t)
and u(t, 1) = b_1(t). We discretize, u_{j,k} = u(jτ, kh),
for integers j and k and time and space steps τ and h, respectively.

The most natural approach is to approximate (25) by the explicit scheme

(u_{j+1} − u_j)/τ = L u_j ,   (28)

i.e.

u_{j+1} = (I + τ L) u_j ,   (29)

where L stands for the finite difference approximation (20) of the second
spatial derivative. It turns out that (28) is stable only if τ < h^2 (see the
section on Diffusive Initial Value Problems in Numerical Recipes, loc. cit.).
Hence, with h = 0.01, we must proceed in time steps of τ = 10^−4 or smaller.
This is not acceptable.

Another, fully implicit differencing scheme is

u_j = (I − τ L) u_{j+1} ,   (31)
which is solved by

u_{j+1} = (I − τ L)^{−1} u_j .   (32)
This one is stable for all time steps τ , but we have to solve a system of linear
equations for every propagation step. Both differencing schemes are biased:
they approximate, by (f(t + τ) − f(t))/τ, the time derivative at t or at
t + τ, respectively, while it should be at t + τ/2.
We therefore write
u_{j+1/2} = (I + τL/2) u_j = (I − τL/2) u_{j+1} ,   (33)

which amounts to

u_{j+1} = (I − τL/2)^{−1} (I + τL/2) u_j .   (34)
This so-called Crank-Nicolson scheme is stable for all time steps τ and one
order more accurate than (29) or (31).
The following Matlab function propagates the field u at time t by one time
step τ according to the Crank-Nicolson scheme. The spacing h and the
boundary values b_k(t + τ/2) (a two component vector; more precisely, the
mean of the boundary values before and after the time step) have to be
specified as well. Fields are to be represented as column vectors.
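The listing of cn_step is missing from this copy. A minimal sketch of such a
function (the treatment of the boundary terms follows the remark above; the
details are guesses):

% hypothetical reconstruction of cn_step.m
function u=cn_step(u,tau,h,b);
N=size(u,1);
e=ones(N,1);
L=spdiags([e,-2*e,e],[-1,0,1],N,N)/h^2; % second spatial derivative
r=(speye(N)+tau*L/2)*u;
r(1)=r(1)+tau*b(1)/h^2; % boundary contributions, mean values
r(N)=r(N)+tau*b(2)/h^2;
u=(speye(N)-tau*L/2)\r;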
Let us study a situation where u_0(x) = sin πx + sin 2πx, b_0(t) = 0 and
b_1(t) = 0.
1 % this file is cn_example.m
2 Nx=500;
3 x=linspace(0,1,Nx)’;
4 h=x(2)-x(1);
5 tau=0.0025;
6 u0=sin(pi*x)+sin(2*pi*x);
7 u=u0(2:Nx-1);
8 b=[0,0];
9 field=u;
10 for j=1:100
11 u=cn_step(u,tau,h,b);
12 field=[field,u];
13 end;
14 contour(field,32);
15 print -depsc cn_example.eps;
[Figure 12: Contour plot of u = u(t, x). Crank-Nicolson scheme, 100 time
propagation steps, t running from left to right. Initially,
u(0, x) = sin(πx) + sin(2πx)]
By the way, Jean Baptiste Joseph Fourier, who lived from 1768 to 1830,
invented the method of decomposing functions into harmonic contributions
while trying to solve the heat equation (25). In today's notation: the ansatz
u(t, x) = Σ_{n=1}^{∞} c_n(t) sin nπx   (35)

is solved by

c_n(t) = c_n(0) e^{−(nπ)^2 t} .   (36)
Thus, the sin(2*pi*x) contribution dies out much more rapidly than the
sin(pi*x) term, as can be read off Fig. 12.
A Figures Makefile
The figures (as .pdf-files) were produced with the aid of .m-files printed in
this document and by the following function. Note that ’!...’ invokes the
system command shell.
39 rand(’state’,0);
40 fit_regr;
41 eval(’!epstopdf fitr.eps’);
42 clear all;
43 fit_peak;
44 print -depsc fitp.eps
45 eval(’!epstopdf fitp.eps’);
46 clear all;
47
48 % make ts_problem
49 rand(’state’,1);
50 ts_problem;
51 plot(XC(it),YC(it),’r.’,[XC(it),XC(it(1))],[YC(it),YC(it(1))],...
52 ’-k’,’MarkerSize’,20);
53 axis equal;
54 axis off;
55 print -depsc ts_problem.eps;
56 eval(’!epstopdf ts_problem.eps’);
57 clear all;
58
59 % make fdm_sin
60 fdm_sin;
61 eval(’!epstopdf fdm_sin.eps’);
62 clear all
63
64 % make ml_logo
65 ml_logo;
66 eval(’!epstopdf ml_logo.eps’);
67 clear all;
68
69 % make cn_example
70 cn_example;
71 eval(’!epstopdf cn_example.eps’);
72 clear all;
List of Figures

1 The black body radiation spectral intensity S(x) where x = ħω/k_B T
2 Planetary motion without explicit accuracy control
3 Total energy vs. time without accuracy control
4 Planetary motion with accuracy control
5 Noisy cosine (50 Hz) vs. time sampled at millisecond steps
6 Spectral power of noisy cosine vs. frequency (Hz)
7 Noisy quadratic relationship between x (abscissa) and y (ordinate) and reconstruction by quadratic regression
8 A Gaussian peak on top of background: signal (black), signal plus noise (blue dots) and reconstructed signal (red)
9 The shortest itinerary found by the simulated annealing algorithm
10 Analytic solution (full line) and finite difference approximation (red) of f'' + f = 0 with f(0) = 0 and f(π/2) = 1 (black)
11 The lowest order eigenmode of −∆u = Λu on an L-shaped domain has become their logo
12 Contour plot of u = u(t, x). Crank-Nicolson scheme, 100 time propagation steps, t running from left to right. Initially, u(0, x) = sin(πx) + sin(2πx)