Monte Carlo Ray Tracing: Siggraph 2003 Course 44
Organizer
Henrik Wann Jensen
University of California, San Diego
Lecturers
James Arvo
University of California, Irvine
Phil Dutre
Katholieke Universiteit Leuven
Alexander Keller
Universitat Kaiserslautern
Henrik Wann Jensen
University of California, San Diego
Art Owen
Stanford University
Matt Pharr
NVIDIA
Peter Shirley
University of Utah
Abstract
This full-day course will provide a detailed overview of the state of the art in Monte
Carlo ray tracing. Recent advances in algorithms and available compute power
have made Monte Carlo ray tracing based methods widely used for simulating
global illumination. This course will review the fundamentals of Monte Carlo
methods, and provide a detailed description of the theory behind the latest techniques and algorithms used in realistic image synthesis. This includes path tracing,
bidirectional path tracing, Metropolis light transport, irradiance caching and photon mapping.
Course Syllabus
10:00 Break
10:30 Direct Illumination
Peter Shirley
Sampling of light sources
Special types of light sources
Efficient sampling of many light sources
12:00 Lunch
2:15 The Rendering Equation
Philip Dutre
The path integral formulation
Path tracing
Russian Roulette
Adjoint techniques
Bidirectional transport
Bidirectional path tracing
4:00 Break
4:15 Metropolis Sampling
Matt Pharr
One dimensional setting
Motion blur
Metropolis light transport
Chapter 1
Introduction
By Henrik Wann Jensen
The complexity of Monte Carlo ray tracing has empirically been found to be O(log N), where N is the number of scene elements. Compare this with O(N log N) for the fastest finite element methods [12].
In addition, Monte Carlo ray tracing methods can be very easy to implement. A basic path tracing algorithm, which has all of the above advantages, is a relatively straightforward extension to ray tracing.
The main problem with Monte Carlo ray tracing is variance seen as noise in the
rendered images. This noise can be eliminated by using more samples. Unfortunately the convergence of Monte Carlo methods is quite slow, and a large number
of samples can be necessary to reduce the variance to an acceptable level. Another
way of reducing variance is to try to be more clever; a large part of this course material is devoted to techniques and algorithms for making Monte Carlo ray tracing
more efficient.
1.1 Purpose of This Course
The purpose of this course is to impart upon the attendees a thorough understanding
of the principles of Monte Carlo ray tracing methods, as well as a detailed overview
of the most recently developed methods.
1.2
Prerequisites
The reader is expected to have a good working knowledge of ray tracing and to
know the basics of global illumination. This includes knowledge of radiometric
terms (such as radiance and flux) and knowledge of basic reflection models (such
as diffuse, specular and glossy).
1.3
Acknowledgements
Funding for the authors of these notes includes DARPA DABTB63-95-C0085 and an NSF Career Award (CCR9876332).
Chapter 2
Fundamentals of Monte Carlo Integration
This chapter discusses Monte Carlo integration, where random numbers are used to approximate integrals. First some basic concepts from probability are reviewed, and then they are applied to numerically estimate integrals. The problem of estimating the direct lighting at a point with arbitrary lighting and reflection properties is then discussed. The next chapter applies Monte Carlo integration to the direct lighting problem in ray tracing.
2.1.1 One-dimensional Continuous Probability Density Functions
The behavior of a one-dimensional continuous random variable x is entirely described by the distribution of values it takes. This distribution of values can be quantitatively described by the probability density function, p, associated with x (the relationship is denoted x ∼ p). The probability that x will take on a value in some interval [a, b] is given by the integral:

    Probability(x ∈ [a, b]) = ∫_a^b p(x) dx.    (2.1)

Loosely speaking, the probability density function p describes the relative likelihood of a random variable taking a certain value; if p(x1) = 6.0 and p(x2) = 3.0, then a random variable with density p is twice as likely to have a value near x1 than near x2. The density p has two characteristics:

    p(x) ≥ 0    (probability is nonnegative),    (2.2)

    ∫ p(x) dx = 1    (x must take on some value).    (2.3)
2.1.2 One-dimensional Expected Value
The average value that a real function f of a one-dimensional random variable with underlying pdf p will take on is called its expected value, E(f(x)) (sometimes written Ef(x)):

    E(f(x)) = ∫ f(x) p(x) dx.

The expected value of a one-dimensional random variable can be calculated by letting f(x) = x. The expected value has a surprising and useful property: the expected value of the sum of two random variables is the sum of the expected values of those variables:

    E(x + y) = E(x) + E(y),

for random variables x and y. Because functions of random variables are themselves random variables, this linearity of expectation applies to them as well:

    E(f(x) + g(y)) = E(f(x)) + E(g(y)).

An obvious question is whether this property holds if the random variables being summed are correlated (variables that are not correlated are called independent). This linearity property in fact does hold whether or not the variables are independent! This summation property is vital for most Monte Carlo applications.
2.1.3
The discussion of random variables and their expected values extends naturally to
multidimensional spaces. Most graphics problems will be in such higher-dimensional
spaces. For example, many lighting problems are phrased on the surface of the
hemisphere. Fortunately, if we define a measure on the space the random variables occupy, everything is very similar to the one-dimensional case. Suppose the
space S has associated measure , for example S is the surface of a sphere and
measures area. We can define a pdf p : S 7 R, and if x is a random variable with
x p, then the probability that x will take on a value in some region Si S is
given by the integral:
Z
Probability(x Si ) =
p(x)d
(2.4)
Si
Here Probability(event) is the probability that event is true, so the integral is the
probability that x takes on a value in the region Si .
In graphics S is often an area (d = dA = dxdy), or a set of directions (points
on a unit sphere: d = d = sin dd). As an example, a two dimensional
random variable is a uniformly distributed random variable on a disk of radius
R. Here uniformly means uniform with respect to area, e.g., the way a bad dart
players hits would be distributed on a dart board. Since it is uniform, we know that
p() is some constant. From Equation 2.3, and the fact that area is the appropriate
15
measure, we can deduce that p() = 1/(R2 ). This means that the probability
that is in a certain subset S1 of the disk is just:
Z
1
Probability( S1 ) =
dA.
2
R
S1
This is all very abstract. To actually use this information we need the integral in
a form we can evaluate. Suppose Si is the portion of the disk closer to the center
than the perimeter. If we convert to polar coordinates, then is represented as a
(r, ) pair, and S1 is where r < R/2. Note that just because is uniform does
not imply that theta or r are necessarily uniform (in fact, theta is, and r is not
uniform). The differential area dA becomes r dr d. This leads to:
Z 2 Z R
2
R
1
Probability(r < ) =
r dr d = 0.25.
2
2
R
0
0
The formula for expected value of a real function applies to the multidimensional case:
Z
E(f (x)) =
f (x)p(x)d,
S
2
=
3
Note that here f (x, y) = x.
2.1.4
Variance
The variance of a one-dimensional random variable is the expected value of the square of its deviation from the mean. The expression E([x − E(x)]²) is more useful for thinking intuitively about variance, while the algebraically equivalent expression E(x²) − [E(x)]² is usually more convenient for calculations. The variance of a sum of random variables is the sum of the variances if the variables are independent. This summation property of variance is one of the reasons it is frequently used in the analysis of probabilistic models. The square root of the variance is called the standard deviation, σ, which gives some indication of the expected absolute deviation from the expected value.
2.1.5
Estimated Means
Many problems involve sums of independent random variables xi, where the variables share a common density p. Such variables are said to be independent identically distributed (iid) random variables. When the sum is divided by the number of variables, we get an estimate of E(x):

    E(x) ≈ (1/N) Σ_{i=1}^{N} xi.
2.2 Monte Carlo Integration
In this section the basic Monte Carlo solution methods for definite integrals are
outlined. These techniques are then straightforwardly applied to certain integral
problems. All of the basic material of this section is also covered in several of
the classic Monte Carlo texts. This section differs by being geared toward classes
of problems that crop up in Computer Graphics. Readers interested in a broader
treatment of Monte Carlo techniques should consult one of the classic Monte Carlo
texts [27, 72, 26, 98].
Given N samples x1, ..., xN distributed according to p, the expected value can be estimated as

    E(f(x)) = ∫_{x∈S} f(x) p(x) dμ ≈ (1/N) Σ_{i=1}^{N} f(xi).    (2.5)

Because the expected value can be expressed as an integral, the integral is also approximated by the sum. The form of Equation 2.5 is a bit awkward; we would usually like to approximate an integral of a single function g rather than a product f p. We can get around this by substituting g = f p as the integrand:

    ∫_{x∈S} g(x) dμ ≈ (1/N) Σ_{i=1}^{N} g(xi)/p(xi).    (2.6)
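As a concrete illustration of Equation 2.6, the following C++ sketch (an illustration written for these notes' discussion, not code from the course) estimates ∫_0^1 √x dx = 2/3 using samples drawn by inversion from the assumed density p(x) = 2x; the function and density names are chosen for the example only.

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Monte Carlo estimate of an integral using Equation 2.6:
    //   integral of g(x) dx over [0,1]  ~=  (1/N) * sum_i g(x_i) / p(x_i),
    // where the x_i are distributed according to the density p.
    // Example: g(x) = sqrt(x) (exact integral 2/3), sampled with p(x) = 2x,
    // generated by inversion as x = sqrt(xi).
    int main() {
        std::mt19937 rng(1234);
        std::uniform_real_distribution<double> canonical(0.0, 1.0);

        const int N = 1000000;
        double sum = 0.0;
        for (int i = 0; i < N; ++i) {
            double xi = canonical(rng);       // canonical uniform random number
            double x  = std::sqrt(xi);        // x has density p(x) = 2x
            sum += std::sqrt(x) / (2.0 * x);  // g(x) / p(x)
        }
        std::printf("estimate = %f (exact 2/3)\n", sum / N);
        return 0;
    }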
It can be shown that the variance of stratified sampling is never higher than that of unstratified sampling if all strata have equal measure:

    ∫_{Si} p(x) dμ = (1/N) ∫_S p(x) dμ.

The most common example of stratified sampling in graphics is jittering for pixel sampling [14].
Table 2.1: Variance of Monte Carlo estimates of ∫_0^4 x dx using N samples.

    method        sampling function    variance
    importance    (6 − x)/16           56.8 N⁻¹
    importance    1/4                  21.3 N⁻¹
    importance    (x + 2)/16           6.3 N⁻¹
    importance    x/8                  0
    stratified    1/4                  21.3 N⁻³
The great impact of the shape of the function p on the variance of the N-sample estimates is shown in Table 2.1. Note that the variance is lessened when the shape of p is similar to the shape of g. The variance drops to zero if p = g/I, but I is not usually known or we would not have to resort to Monte Carlo. One important principle illustrated in Table 2.1 is that stratified sampling is often far superior to importance sampling. Although the variance of this stratified estimate of I is inversely proportional to the cube of the number of samples, there is no general result for the behavior of variance under stratification. There are some functions where stratification does no good. An example is a white noise function, where the variance is constant for all regions. On the other hand, most functions will benefit from stratified sampling because the variance in each subcell will usually be smaller than the variance of the entire domain.
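The following C++ sketch (illustrative only; the constants and sample counts are arbitrary choices, not from the notes) empirically compares two rows of Table 2.1 for ∫_0^4 x dx = 8: plain uniform sampling and stratified (jittered) uniform sampling.

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Compare uniform sampling (variance ~ 21.3/N) with stratified uniform
    // sampling, one jittered sample per stratum (variance ~ 21.3/N^3),
    // for the integral of x over [0,4] (exact value 8).
    int main() {
        std::mt19937 rng(7);
        std::uniform_real_distribution<double> canonical(0.0, 1.0);
        const int N = 16;            // samples per estimate
        const int trials = 100000;   // number of independent estimates

        double varUniform = 0.0, varStratified = 0.0;
        for (int t = 0; t < trials; ++t) {
            double u = 0.0, s = 0.0;
            for (int i = 0; i < N; ++i) {
                // Unstratified: x uniform on [0,4], p = 1/4, sample value g/p = 4x.
                u += 4.0 * (4.0 * canonical(rng));
                // Stratified: jitter one sample inside the i-th of N equal strata.
                double x = 4.0 * (i + canonical(rng)) / N;
                s += 4.0 * x;
            }
            u /= N;  s /= N;
            varUniform    += (u - 8.0) * (u - 8.0);
            varStratified += (s - 8.0) * (s - 8.0);
        }
        std::printf("uniform variance    = %g (theory %g)\n", varUniform / trials, 21.3 / N);
        std::printf("stratified variance = %g (theory %g)\n",
                    varStratified / trials, 21.3 / (double(N) * N * N));
        return 0;
    }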
2.2.1 Quasi-Monte Carlo Integration
Although distribution ray tracing is usually phrased as an application of Equation 2.6, many researchers replace the ξi with more evenly distributed (quasi-random) samples (e.g., [13, 53]). This approach can be shown to be sound by analyzing decreasing error in terms of some discrepancy measure [99, 97, 53, 67] rather than in terms of variance. However, it is often convenient to develop a sampling strategy using variance analysis on random samples, and then to turn around and use non-random, but equidistributed, samples in an implementation. This approach is almost certainly correct, but its justification and implications have yet to be explained.
For example, when evaluating a one-dimensional integral on [0, 1] we could use a set of N uniformly random sample points (x1, x2, ..., xN) on [0, 1] to get an approximation:

    ∫_0^1 f(x) dx ≈ (1/N) Σ_{i=1}^{N} f(xi).
2.2.2 Multidimensional Monte Carlo Integration
The estimator of Equation 2.6 extends directly to multidimensional integrals, where each xi is a two-dimensional point distributed according to a two-dimensional density p. We can convert to more explicit Cartesian coordinates and have a form we are probably more comfortable with:

    I = ∫∫ f(x, y) dx dy ≈ (1/N) Σ_{i=1}^{N} f(xi, yi)/p(xi, yi).

This is really no different than the form above, except that we see the explicit components of xi to be (xi, yi).
If our integral is over the disk of radius R, nothing really changes, except that the sample points must be distributed according to some density on the disk. This is why Monte Carlo integration is relatively easy: once the sample points are chosen, the application of the formula is always the same.
2.3 Choosing Random Points
2.3.1
Function inversion
If the density is one-dimensional, f(x), defined over the interval x ∈ [xmin, xmax], then we can generate random numbers xi that have density f from a set of uniform canonical random numbers ξi, where ξi ∈ [0, 1]. To do this we need the cumulative probability distribution function P(x), the probability that a sample with density f takes a value less than x:

    P(x) = ∫_{xmin}^{x} f(x') dx'.    (2.9)

To get xi we simply transform a canonical random number, xi = P⁻¹(ξi), which requires that P be invertible. For a two-dimensional density f(x, y), the analogous joint distribution function is

    F(x, y) = ∫_{xmin}^{x} ∫_{ymin}^{y} f(x', y') dy' dx'.    (2.10)

We first choose an xi using the marginal distribution F(x, ymax), and then choose yi according to F(xi, y)/F(xi, ymax). If f(x, y) is separable (expressible as g(x)h(y)), then the one-dimensional techniques can be used on each dimension.
For example, suppose we are sampling uniformly from the disk of radius R, so p(r, φ) = 1/(πR²). The two-dimensional distribution function is:

    Prob(r < r0 and φ < φ0) = F(r0, φ0) = ∫_0^{φ0} ∫_0^{r0} (r dr dφ)/(πR²) = φ0 r0² / (2π R²).

This means that a canonical pair (ξ1, ξ2) can be transformed to a uniform random point on the disk: the marginal distribution F(r, 2π) = r²/R² gives r = R√ξ1, and the conditional distribution in φ is uniform, giving φ = 2πξ2.
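A short C++ sketch of this inversion (written for illustration; the struct and function names are inventions of the example, not the notes):

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Uniform disk sampling by inverting the distribution function above:
    //   r = R * sqrt(xi1)  (marginal in r),  phi = 2*pi*xi2  (conditional in phi).
    struct Point2 { double x, y; };

    Point2 sampleDisk(double R, double xi1, double xi2) {
        const double pi = 3.14159265358979323846;
        double r   = R * std::sqrt(xi1);
        double phi = 2.0 * pi * xi2;
        return { r * std::cos(phi), r * std::sin(phi) };
    }

    int main() {
        std::mt19937 rng(3);
        std::uniform_real_distribution<double> canonical(0.0, 1.0);
        for (int i = 0; i < 4; ++i) {
            Point2 p = sampleDisk(1.0, canonical(rng), canonical(rng));
            std::printf("(%f, %f)\n", p.x, p.y);
        }
        return 0;
    }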
As a nontrivial example on the hemisphere, consider the density

    p(θ, φ) = ((n + 1)/(2π)) cosⁿ θ,    (2.11)

where n is a Phong-like exponent, θ is the angle from the surface normal with θ ∈ [0, π/2] (i.e., on the upper hemisphere), and φ is the azimuthal angle (φ ∈ [0, 2π]). The distribution function is:

    P(θ, φ) = ∫_0^{φ} ∫_0^{θ} p(θ', φ') sin θ' dθ' dφ'.    (2.12)

The sin θ' term arises because on the sphere dω = sin θ dθ dφ. When the marginal densities are found, p (as expected) is separable, and we find that a (ξ1, ξ2) pair of canonical random numbers can be transformed to a direction by

    θ = arccos((1 − ξ1)^{1/(n+1)}),   φ = 2πξ2,

or, as a unit vector,

    a = R (cos φ sin θ, sin φ sin θ, cos θ)ᵀ,   R = [u v w],

where R is the rotation matrix whose columns form a right-handed orthonormal basis (u, v, w) with w = N/‖N‖ aligned with the surface normal N.
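The inversion above can be coded directly; the following C++ sketch (illustrative, with hypothetical names) samples the cosine-power lobe in the local frame and leaves the rotation into the normal's frame as a separate standard step.

    #include <cmath>
    #include <cstdio>
    #include <random>

    struct Vec3 { double x, y, z; };

    // Sample a direction with density proportional to cos^n(theta) about the
    // local z-axis: cos(theta) = (1 - xi1)^(1/(n+1)), phi = 2*pi*xi2.
    Vec3 samplePhongLobe(double n, double xi1, double xi2) {
        const double pi = 3.14159265358979323846;
        double cosTheta = std::pow(1.0 - xi1, 1.0 / (n + 1.0));
        double sinTheta = std::sqrt(std::max(0.0, 1.0 - cosTheta * cosTheta));
        double phi = 2.0 * pi * xi2;
        return { std::cos(phi) * sinTheta, std::sin(phi) * sinTheta, cosTheta };
    }

    int main() {
        std::mt19937 rng(11);
        std::uniform_real_distribution<double> canonical(0.0, 1.0);
        Vec3 d = samplePhongLobe(20.0, canonical(rng), canonical(rng));
        std::printf("direction = (%f, %f, %f)\n", d.x, d.y, d.z);
        return 0;
    }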
2.3.2
Rejection
A rejection method chooses points according to some simple distribution and rejects some of them so that they are in a more complex distribution. There are
several scenarios where rejection is used, and we show several of these by example.
Suppose we want uniform random points within the unit circle. We can first choose uniform random points (x, y) ∈ [−1, 1]² and reject those outside the circle. If the function r() returns a canonical random number, then the procedure for this is:
done = false
while (not done)
x = -1 + 2*r()
y = -1 + 2*r()
if (x*x + y*y < 1)
done = true
end while
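A direct C++ transcription of this loop (an illustration; the original notes give only the pseudocode above):

    #include <cstdio>
    #include <random>

    // Rejection sampling of a uniform random point inside the unit circle:
    // propose uniform points in [-1,1]^2 and keep the first one inside the circle.
    int main() {
        std::mt19937 rng(17);
        std::uniform_real_distribution<double> canonical(0.0, 1.0);
        double x, y;
        do {
            x = -1.0 + 2.0 * canonical(rng);
            y = -1.0 + 2.0 * canonical(rng);
        } while (x * x + y * y >= 1.0);
        std::printf("accepted point: (%f, %f)\n", x, y);
        return 0;
    }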
2.3.3 Metropolis
The Metropolis method uses random mutations to produce a set of samples with a desired density. This concept is used extensively in the Metropolis Light Transport algorithm described later in Chapter 9. Suppose we have a random point x0 in a domain S. Further, suppose for any point x we have a way to generate random y ∼ px. We use the marginal notation px(y) ≡ p(x → y) to denote this density function. Now suppose we let x1 be a random point in S selected with underlying density px0. We would like the sequence of points xi produced by repeated mutation to eventually have a density proportional to some given nonnegative function f.
It turns out this can be forced by making sure the xi are stationary in some strong sense. If you visualize a huge collection of sample points x, you want the flow between two points to be the same in each direction. If we assume the densities of points near x and y are proportional to f(x) and f(y) respectively, then the flows in the two directions should be the same:

    flow(x → y) = k f(x) t(x → y) a(x → y),
    flow(y → x) = k f(y) t(y → x) a(y → x),

where k is some positive constant, t is the density of proposed mutations, and a is the probability of accepting a proposed mutation. Setting these two flows equal gives a constraint on a:

    a(y → x) / a(x → y) = [f(x) t(x → y)] / [f(y) t(y → x)].

Thus if either a(y → x) or a(x → y) is known, so is the other. Making them bigger improves the chance of acceptance, so the usual technique is to set the larger of the two to 1.
An awkward part of using the Metropolis sample generation technique is that it is hard to estimate how many points are needed before the set of points is "good". Things are accelerated if the first n points are discarded, although choosing n wisely is non-trivial. Weights can be added if a truly unbiased distribution is desired, as shown later in the context of Metropolis Light Transport.
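A minimal one-dimensional Metropolis sampler is sketched below in C++ (an illustration under simple assumptions: an arbitrary target f(x) = x² on [0,1] and a symmetric, uniform mutation strategy, so the acceptance ratio reduces to min(1, f(y)/f(x)); none of this is code from the notes).

    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(99);
        std::uniform_real_distribution<double> canonical(0.0, 1.0);

        auto f = [](double x) { return x * x; };   // target density ~ f (unnormalized)

        const int N = 200000, burnIn = 1000;
        double x = 0.5, sum = 0.0;
        int kept = 0;
        for (int i = 0; i < N; ++i) {
            double y = canonical(rng);                        // proposed mutation
            if (canonical(rng) < std::min(1.0, f(y) / f(x)))  // accept or reject
                x = y;
            if (i >= burnIn) { sum += x; ++kept; }            // discard the first n points
        }
        // Under the normalized density 3*x^2 on [0,1], the mean of x is 3/4.
        std::printf("sample mean = %f (expected 0.75)\n", sum / kept);
        return 0;
    }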
2.4 Monte Carlo Simulation
For some physical processes, we have statistical models of behavior at a microscopic level from which we attempt to derive an analytic model of macroscopic
behavior. For example, we often think of a luminaire (a light emitting object) as
emitting a very large number of random photons (really pseudo-photons that obey
geometric, rather than physical, optics) with certain probability density functions
controlling the wavelength and direction of the photons. From this a physicist
might use statistics to derive an analytic model to predict how the luminaire distributes its energy in terms of the directional properties of the probability density
functions. However, if we are not interested in forming a general model, but instead
want to know about the behavior of a particular luminaire in a particular environment, we can just numerically simulate the behavior of the luminaire. To do this
we computationally emit photons from the luminaire and keep track of where
the photons go. This simple method is from a family of techniques called Monte
Carlo Simulation and can be a very easy, though often slow, way to numerically
solve physics problems.
The first thing that you might try in generating a highly realistic image is to
actually track simulated photons until they hit some computational camera plane
or were absorbed. This would be very inefficient, but would certainly produce a
correct image, although not necessarily while you were alive. In practice, very
few Monte Carlo simulations model the full physical process. Instead, an analog
process is found that is easier to simulate, but retains all the important behavior of
the original physical process. One of the difficult parts of finding an analog process
is deciding what effects are important.
An analog process that is almost always employed in graphics is to replace photons of fixed wavelengths with power-carrying beams that have values across the entire spectrum. If photons are retained as an aspect of the model, then an obvious analog process is one where photons whose wavelengths are outside of the region of spectral sensitivity of the film simply do not exist.
Chapter 3
Direct Lighting via Monte Carlo Integration
In this chapter we apply Monte Carlo integration to compute the lighting at a point from an area light source in the presence of potential occluders. As the number of samples in the pixel becomes large, the number of samples on the light will become large as well. Thus the shadows will become smooth. Much of the material in this chapter is from the book Realistic Ray Tracing and is used with permission from the publisher AK Peters.
3.1
Mathematical framework
To calculate the direct light from one luminaire (light-emitting object) onto a diffuse surface, we solve the following equation:

    L(x) = Le(x) + (R(x)/π) ∫_{all ω'} Le(x, ω') cos θ dω',    (3.1)

where L(x) is the radiance (color) of x, Le(x) is the light emitted at x, R(x) is the reflectance of the point, ω' is the direction the light is incident from, and θ is the angle between the incident light and the surface normal. Suppose we wanted to restrict this integral to a domain of one luminaire. Instead of all ω' we would need to integrate over just the directions toward the luminaire. In practice this can be hard (the projection of a polygon onto the hemisphere is a spherical polygon, and the projection of a cylinder is stranger still). So we can change variables to integrate over just area (Figure 3.1). Note the differential relationship

    dω = dA cos θ' / ‖x' − x‖²,    (3.2)

where θ' is the angle between the luminaire normal at x' and the direction from x' to x. Substituting into Equation 3.1 gives an integral over the points x' of the luminaire:

    L(x) = Le(x) + (R(x)/π) ∫_{all x'} Le(x') cos θ (cos θ' dA) / ‖x' − x‖².

Figure 3.1: Integrating over the luminaire. Note that there is a direct correspondence between dA, the differential area on the luminaire, and dω, the area of its projection onto the unit sphere centered at x.
There is an important flaw in the equation above. It is possible that the points x and x' cannot see each other (there is a shadowing object between them). This can be encoded in a shadow function s(x, x') which is either one or zero depending on whether or not there is a clear line of sight between x and x'. This gives us the equation:

    L(x) = Le(x) + (R(x)/π) ∫_{all x'} Le(x') cos θ (s(x, x') cos θ' dA) / ‖x' − x‖².    (3.3)

If we are to sample Equation 3.3, we need to pick a random point x' on the surface of the luminaire with density function p (so x' ∼ p). Just plugging into the Monte Carlo equation with one sample gives:

    L(x) ≈ Le(x) + (R(x)/π) Le(x') cos θ (s(x, x') cos θ') / (p(x') ‖x' − x‖²).    (3.4)

If we pick a uniform random point on the luminaire, then p = 1/A, where A is the area of the luminaire. This gives:

    L(x) ≈ Le(x) + (R(x)/π) Le(x') cos θ (A s(x, x') cos θ') / ‖x' − x‖².    (3.5)
We can use Equation 3.5 to sample planar (e.g., rectangular) luminaires in a straightforward fashion. We simply pick a random point on each luminaire. The code for one luminaire would be:

    spectrum directLight( x, n )
        pick random point x' with normal vector n' on the light
        d = (x' − x)
        if ray x + t d hits at x' then
            return A Le(x') (n · d)(−n' · d) / ‖d‖⁴
        else
            return 0

The above code needs some extra tests, such as clamping the cosines to zero if they are negative. Note that the term ‖d‖⁴ comes from the distance squared term and the two cosines, e.g., n · d = ‖d‖ cos θ because d is not necessarily a unit vector. Several examples of soft shadows are shown in Figure 3.2.
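The following C++ sketch mirrors the pseudocode above for a rectangular luminaire. It is an illustration only: the Vec3 type, the visible() shadow-ray stand-in, and the scene parameters are hypothetical names introduced for the example, not part of the notes.

    #include <cmath>
    #include <cstdio>
    #include <random>

    struct Vec3 {
        double x, y, z;
        Vec3 operator-(const Vec3& b) const { return {x - b.x, y - b.y, z - b.z}; }
        Vec3 operator+(const Vec3& b) const { return {x + b.x, y + b.y, z + b.z}; }
        Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    };
    double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Stand-in for a shadow-ray query against the scene; always unoccluded here.
    bool visible(const Vec3&, const Vec3&) { return true; }

    // One-sample estimate of the direct-light term of Equation 3.5 from a
    // rectangular luminaire with corner 'corner', edge vectors e1 and e2,
    // normal nLight, emitted radiance Le, and area A.
    double directLight(const Vec3& x, const Vec3& n,
                       const Vec3& corner, const Vec3& e1, const Vec3& e2,
                       const Vec3& nLight, double Le, double A,
                       double xi1, double xi2) {
        Vec3 xp = corner + e1 * xi1 + e2 * xi2;   // uniform random point on the light
        Vec3 d  = xp - x;
        double cosX = dot(n, d);                  // ||d|| cos(theta)
        double cosL = -dot(nLight, d);            // ||d|| cos(theta')
        if (cosX <= 0.0 || cosL <= 0.0 || !visible(x, xp)) return 0.0;
        double d2 = dot(d, d);
        return A * Le * cosX * cosL / (d2 * d2);  // the ||d||^4 term
    }

    int main() {
        std::mt19937 rng(8);
        std::uniform_real_distribution<double> canonical(0.0, 1.0);
        Vec3 x{0, 0, 0}, n{0, 0, 1};                              // shaded point and normal
        Vec3 corner{-0.5, -0.5, 2}, e1{1, 0, 0}, e2{0, 1, 0}, nLight{0, 0, -1};
        double sum = 0.0;
        const int N = 10000;
        for (int i = 0; i < N; ++i)
            sum += directLight(x, n, corner, e1, e2, nLight, 1.0, 1.0,
                               canonical(rng), canonical(rng));
        std::printf("average direct light sample = %f\n", sum / N);
        return 0;
    }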
3.2 Sampling a Spherical Luminaire
Although a sphere with center c and radius r can be sampled using Equation 3.5, this will yield a very noisy image because many samples will be on the back of the sphere, and the cos θ' term varies so much. Instead we can use a more complex p(x') to reduce noise. The first nonuniform density we might try is p(x') ∝ cos θ'. This turns out to be just as complicated as sampling with p(x') ∝ cos θ'/‖x' − x‖², so we instead discuss that here. We observe that sampling on the luminaire this way is the same as using a constant density function q(ω') = const defined in the space of directions subtended by the luminaire as seen from x. We now use a coordinate system defined with x at the origin, and a right-handed orthonormal basis with w = (c − x)/‖c − x‖ and v = (w × n)/‖w × n‖ (see Figure 3.3). We also define (θ, φ) to be the polar and azimuthal angles with respect to the uvw coordinate system.
Figure 3.2: Various soft shadows on a backlit sphere with a square and a spherical light source. Top: one sample. Bottom: 100 samples. Note that the shape of the light source is less important than its size in determining shadow appearance.
32
luminaire
n
x
r
c
u
x
max
Within this cone of directions the constant density with respect to solid angle is

    q(ω) = 1 / ( 2π (1 − √(1 − (r/‖x − c‖)²)) ),

because the solid angle subtended by the luminaire is 2π(1 − cos θmax) with cos θmax = √(1 − (r/‖x − c‖)²). Applying the inversion method to this density, a canonical pair (ξ1, ξ2) yields

    cos θ = 1 − ξ1 + ξ1 √(1 − (r/‖x − c‖)²),   φ = 2πξ2.

This gives us the direction to x'. To find the actual point, we need to find the first point on the sphere in that direction. The ray in that direction is just (x + t a), where a is given by:

    a = [u v w] (cos φ sin θ, sin φ sin θ, cos θ)ᵀ.

Expressed on the luminaire itself (using Equation 3.2 to convert from solid angle to area), this sampling corresponds to

    p(x') ∝ cos θ' / ‖x' − x‖²,    (3.6)

with the explicit form

    p(x') = cos θ' / ( 2π ‖x' − x‖² (1 − √(1 − (r/‖x − c‖)²)) ).

A good debugging case for this is shown in Figure 3.4. For further details on sampling the sphere with p see the article by Wang [92].
3.3 Non-diffuse Luminaires
There is no reason the brightness of the luminaire cannot vary with both direction and position. It can vary with position if the luminaire is a television. It can vary with direction for car headlights and other directional sources. Nothing need change from the previous sections, except that Le(x') must change to Le(x', ω'). The simplest way to vary the intensity with direction is to use a Phong-like pattern with respect to the normal vector n'. To keep the total light output independent of the exponent, you can use the form:

    Le(x', ω') = ((n + 1) E(x') / (2π)) cos^{(n−1)} θ',

where E(x') is the radiant exitance (power per unit area) at point x', and n is the Phong exponent. You get a diffuse light for n = 1.
3.4 Direct Lighting from Many Luminaires
Suppose the direct lighting L is the sum of contributions L1 and L2 from two luminaires, and we sample the first with probability α. If ξ1 < α we sample the first luminaire with the pair (ξ1/α, ξ2); otherwise we use the pair ((ξ1 − α)/(1 − α), ξ2). This way a collection of stratified samples will remain stratified in some sense. Note that it is to our advantage to have ξ1 stratified in one dimension, as well as having the pair (ξ1, ξ2) stratified in two dimensions, so that the luminaire li we choose will be stratified over many (ξ1, ξ2) pairs, so some multijittered sampling method may be helpful (e.g., [7]).
This basic idea used to estimate L = (L1 + L2) can be extended to NL luminaires by mixing NL densities

    p(x') = α1 p1(x') + α2 p2(x') + ... + α_{NL} p_{NL}(x'),    (3.7)

where the αi's sum to one, and where each αi is positive if li contributes to the direct lighting. The value of αi is the probability of selecting a point on li, and pi is then used to determine which point on li is chosen. If li is chosen, then we estimate L with ei/αi, where ei is the single-sample estimate of Li. Given a pair (ξ1, ξ2), we choose li by enforcing the conditions

    Σ_{j=1}^{i−1} αj < ξ1 < Σ_{j=1}^{i} αj,

and to sample the light we can use the pair (ξ1', ξ2), where

    ξ1' = ( ξ1 − Σ_{j=1}^{i−1} αj ) / αi.

This basic process is shown in Figure 3.5. It cannot be overstressed that it is important to reuse the random samples in this way to keep the variance low, in the same way we use stratified sampling (jittering) instead of random sampling in the space of the pixel. To choose the point on the luminaire li given (ξ1', ξ2), we can use the same types of pi for luminaires as used in the last section. The question remaining is what to use for αi.
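The selection and remapping of ξ1 described above can be coded compactly; the C++ sketch below is an illustration with arbitrary αi values, not code from the notes.

    #include <cstdio>
    #include <vector>

    // Select luminaire i with probability alpha_i and remap xi1 so that it can be
    // reused (preserving stratification) to sample a point on that luminaire.
    struct Selection { int index; double xi1Remapped; };

    Selection selectLuminaire(const std::vector<double>& alpha, double xi1) {
        double cumulative = 0.0;
        for (int i = 0; i < static_cast<int>(alpha.size()); ++i) {
            if (xi1 < cumulative + alpha[i] || i + 1 == static_cast<int>(alpha.size()))
                return { i, (xi1 - cumulative) / alpha[i] };
            cumulative += alpha[i];
        }
        return { 0, xi1 };  // unreachable when the alphas sum to one
    }

    int main() {
        std::vector<double> alpha = { 0.45, 0.30, 0.25 };  // arbitrary; must sum to one
        for (double xi1 : { 0.10, 0.50, 0.90 }) {
            Selection s = selectLuminaire(alpha, xi1);
            std::printf("xi1 = %.2f -> luminaire %d, remapped xi1' = %.3f\n",
                        xi1, s.index, s.xi1Remapped);
        }
        return 0;
    }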
3.4.1 Constant αi
The simplest way to choose values for αi was proposed by Lange [45] (and this method is also implied in the figure on page 148 of [36]), where all weights are made equal: αi = 1/NL for all i. This would definitely make a valid estimator because the αi sum to one and none of them is zero. Unfortunately, in many scenes this estimate would produce a high variance (when the Li are very different, as occurs in most night walkthroughs).
Figure 3.5: Reusing ξ1: the unit interval is divided into segments of length αi; the segment containing ξ1 selects the luminaire li, and ξ1 is then rescaled within that segment to sample a point on li.
3.4.2 Linear αi
Suppose we had perfect pi defined for all the luminaires. A zero-variance solution would then result if we could set αi ∝ Li, where Li is the contribution from the i-th luminaire. If we can make αi approximately proportional to Li, then we should have a fairly good estimator. We call this the linear method of setting αi because the time used to choose one sample is linearly proportional to NL, the number of luminaires.
To obtain such αi we get an estimated contribution ei at x by approximating the rendering equation for li with the geometry term set to one. These ei's (from all luminaires) can be directly converted to αi by scaling them so their sum is one:

    αi = ei / (e1 + e2 + ... + e_{NL}).    (3.8)

This method of choosing αi will be valid because all potentially visible luminaires will end up with positive αi. We should expect the highest variance in areas where shadowing occurs, because this is where setting the geometry term to one causes ei to be a poor estimate of Li.
Implementing the linear αi method has several subtleties. If the entire luminaire is below the tangent plane at x, then the estimate for ei should be zero. An easy mistake to make is to set ei to zero if the center of the luminaire is below the horizon. This will make αi take the one value that is not allowed: an incorrect zero. Such a bug will become obvious in pictures of spheres illuminated by luminaires that subtend large solid angles, but for many scenes such errors are not noticeable (the figures in [68] had this bug, but it was not noticeable). To overcome this problem, we make sure that all of a polygonal luminaire's vertices are below the horizon before it is given a zero probability of being sampled. For spherical luminaires, we check that the center of the luminaire is a distance greater than the sphere radius below the horizon plane before it is given a zero probability of being sampled.
Chapter 4
Stratified Sampling of
2-Manifolds
By Jim Arvo
4.1
Introduction
In Monte Carlo integration, the distribution of the random variable should mimic the integrand as closely as possible. The closer the match, the smaller the variance of the random variable, and the more reliable (and efficient) the estimator. In the limit, when the samples are generated with a density that is exactly proportional to the (positive) integrand, the variance of the estimator is identically zero [64]. That is, a single sample delivers the exact answer with probability one.
Perhaps the most common form of integral arising in image synthesis is that expressing either irradiance or reflected radiance at a surface. In both cases, we must evaluate (or approximate) an integral over solid angle, which is of the form

    ∫_S f(ω) (ω · n) dω,    (4.1)

where S is a set of directions on the unit sphere and n is the surface normal.
Figure 4.1: A spherical triangle with uniform samples (left) and stratified samples (right).
Both sets of samples were generated using an area-preserving parametrization for spherical triangles, which we derive below.
All of the sampling algorithms that we construct are based on mappings from
the unit square, [0, 1] [0, 1], to the regions or surfaces in question that preserve
uniform sampling. That is, uniformly distributed samples in the unit square are
mapped to uniformly distributed samples in the range. Such mappings also preserve stratification, also known as jitter sampling [13], which means that uniform
partitionings of the unit square map to uniform partitionings of the range. The ability to apply stratified sampling over various domains is a great advantage, as it is
often a very effective variance reduction technique. Figure 4.1 shows the result of
applying such a mapping to a spherical triangle, both with and without stratified
(jittered) sampling. Figure 4.2 shows the result of applying such a mapping to a
projected spherical polygon, so that the samples on the original spherical polygon
are cosine-distributed rather than uniformly distributed.
All of the resulting algorithms depend upon a source of uniformly distributed
random numbers in the interval [0, 1], which we shall assume is available from
some unspecified source: perhaps drand48, or some other pseudo-random number generator.
4.2 A Recipe for Sampling Algorithms
Although there is a vast and mature literature on Monte Carlo methods, with many texts describing how to derive sampling algorithms for various geometries and density functions (see, for example, Kalos and Whitlock [37], Spanier and Gelbard [77], or Rubinstein [64]), these treatments do not provide step-by-step instructions for deriving the types of algorithms that we frequently require in computer graphics.
Figure 4.2: A projected spherical polygon with uniform samples (left) and stratified samples (right). Projection onto the plane results in more samples near the north pole of the sphere than near the equator. Both sets of samples were generated using an area-preserving parametrization for spherical polygons, which we derive below.
Figure 4.3: An arbitrary parametrization of a 2-manifold M from the unit square, and the warp of the unit square that makes the composite map area-preserving; Ms denotes the sub-manifold corresponding to the first coordinate being at most s.
In this section we present a detailed recipe for how to convert an arbitrary parametrization ψ : [0, 1]² → M, from the unit square to a 2-manifold, into an area-preserving parametrization, Φ : [0, 1]² → M. That is, a mapping with the property that

    area(A) = area(B)  ⟹  area(Φ[A]) = area(Φ[B]),    (4.3)

for all A, B ⊆ [0, 1] × [0, 1]. Such mappings are used routinely in image synthesis to sample surfaces of luminaires and reflectors. Note that Φ may in fact shrink or magnify areas, but that all areas undergo exactly the same scaling; hence, it is area-preserving in the strictest sense only when area(M) = 1. A parametrization with
this property will allow us to generate uniformly distributed and/or stratified samples over M by generating samples with the desired properties on the unit square
(which is trivial) and then mapping them onto M. We shall henceforth consider
area-preserving parametrizations to be synonymous with sampling algorithms.
Let M represent a shape that we wish to generate uniformly distributed samples on; in particular, M may be any 2-manifold with boundary in IRn , where n is
typically 2 or 3. The steps for deriving a sampling algorithm for M are summarized in Figure 4.4. These steps apply for all dimensions n ≥ 2; that is, M may be
a 2-manifold in any space.
Step 1 requires that we select a smooth bijection ψ from [0, 1] × [0, 1] to the 2-manifold M. Such a function is referred to as a parametrization and its inverse is called a coordinate chart, as it associates unique 2D coordinates with (almost) all points of M. In reality, we only require ψ to be a bijection almost everywhere; that is, on all but a set of measure zero, such as the boundaries of M or [0, 1] × [0, 1]. In theory, any smooth bijection will suffice, although it may be impractical or impossible to perform some of the subsequent steps in Figure 4.4 symbolically (particularly step 4) for all but the simplest functions.
Step 2 defines a function σ : [0, 1]² → IR that links the parametrization to the notion of surface area on M. More precisely, for any region A ⊆ [0, 1]², the function σ satisfies

    ∫_A σ = area(ψ[A]).    (4.10)

That is, the integral of σ over any region A in the 2D parameter space is the surface area of the corresponding subset of M under the mapping ψ. Equation (4.4) holds for all n ≥ 2. For the typical cases of n = 2 and n = 3, however, the function σ can be expressed more simply. For example, if M is a subset of IR² then

    σ(s, t) ≡ | det Dψ(s,t) |,    (4.11)

where Dψ(s,t) is the 2 × 2 Jacobian matrix of ψ at the point (s, t). On the other hand, if M is a subset of IR³, then

    σ(s, t) ≡ ‖ ψs(s, t) × ψt(s, t) ‖,    (4.12)

which is a convenient abbreviation for equation (4.4) that holds only when n = 3, as the two partial derivatives of ψ are vectors in IR³ in this case. Non-uniform sampling can also be accommodated by including a weighting function in the definition of σ in step 2.
Figure 4.4: The steps for deriving an area-preserving parametrization Φ : [0, 1]² → M.
1. Select a smooth parametrization ψ : [0, 1]² → M ⊆ IRⁿ.
2. Define σ(s, t), the local area scaling of ψ (equation (4.4)).
3. Form the cumulative distributions
       F(s) = ∫_0^s ∫_0^1 σ(u, v) du dv / ∫_0^1 ∫_0^1 σ(u, v) du dv,    (4.5)
       Gs(t) = ∫_0^t σ(s, v) dv / ∫_0^1 σ(s, v) dv.    (4.6)
4. Invert the two distributions: f ≡ F⁻¹ (4.7) and g(s, ·) ≡ Gs⁻¹ (4.8).
5. The area-preserving parametrization is
       Φ(z1, z2) ≡ ψ( f(z1), g(f(z1), z2) ).    (4.9)
Step 3 can often be carried out without the aid of an explicit expression for σ. For example, the cumulative distributions can often be found by reasoning directly about the geometry imposed by the parametrization rather than applying formulas (4.4), (4.5) and (4.6), which can be tedious. Let Ms denote the family of sub-manifolds of M defined by the first coordinate of ψ. That is,

    Ms = ψ[ [0, s] × [0, 1] ].    (4.13)

See Figure 4.3. It follows from the definition of F and equation (4.10) that

    F(s) = area(Ms) / area(M),    (4.14)

which merely requires that we find an expression for the surface area of Ms as a function of s. Similarly, by equation (4.7) we have

    s = f( area(Ms) / area(M) ).    (4.15)

Thus, f is the map that recovers the parameter s from the fractional area of the sub-manifold Ms. Equation (4.15) can be more convenient to work with than equation (4.14), as it avoids an explicit function inversion step. While Gs(·) and g(·, ·) do not admit equally intuitive interpretations, they can often be determined from the general form of σ, since many of the details vanish due to normalization. A good example of how this can be done is provided by the area-preserving parametrization derived for spherical triangles, which we discuss below.
Step 4 above is the only step that is not purely mechanical, as it involves function inversion. When this step can be carried out symbolically, the end result is a closed-form area-preserving transformation Φ from [0, 1]² to the manifold M. Closed-form expressions are usually advantageous, both in terms of simplicity and efficiency. Of the two inversions entailed in step 4, it is typically equation (4.5) that is the more troublesome, and frequently resists symbolic solution. In such a case, it is always possible to perform the inversion numerically using a root-finding method such as Newton's method; of course, one must always weigh the cost of drawing samples against the benefits conferred by the resulting importance sampling and stratification. When numerical inversion is involved, the area-preserving transformation is less likely to result in a net gain in efficiency.
The steps outlined in Figure 4.4 generalize very naturally to the construction of volume-preserving parametrizations for arbitrary k-manifolds, for any 2 ≤ k ≤ n.
4.3 Analytic Area-Preserving Parametrizations
In this section we will apply the recipe given in Figure 4.4 to derive a number
of useful area-preserving parametrizations. Each will be expressed in closed form
since the functions F and Gs will be invertible symbolically; however, in the case
of spherical triangles it will not be trivial to invert F .
4.3.1 Sampling Planar Triangles
As a first example of applying the steps in Figure 4.4, we shall derive an area-preserving parametrization for an arbitrary triangle ABC in the plane. We begin with an obvious parametrization from [0, 1] × [0, 1] to a given triangle in terms of barycentric coordinates. That is, let

    ψ(s, t) = (1 − s)A + s(1 − t)B + stC.    (4.16)

Computing the Jacobian of ψ, we obtain

    σ(s, t) = | det Dψ(s,t) | = 2cs,    (4.17)

where c is the area of the triangle. From equations (4.5) and (4.6) we obtain

    F(s) = s²   and   Gs(t) = t.    (4.18)

In both cases the constant c disappears due to normalization. These functions are trivial to invert, resulting in

    f(z) = √z   and   g(z1, z2) = z2.    (4.19)

The resulting area-preserving parametrization is

    Φ(z1, z2) = ψ(√z1, z2) = (1 − √z1)A + √z1(1 − z2)B + √z1 z2 C.    (4.20)

Figure 4.5 shows the final algorithm. If ξ1 and ξ2 are independent random variables, uniformly distributed over the interval [0, 1], then the resulting points will be uniformly distributed over the triangle.
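A compact C++ rendering of Equation 4.20 (an illustrative sketch; the Point2 type and function name are placeholders for the example):

    #include <cmath>
    #include <cstdio>
    #include <random>

    struct Point2 { double x, y; };

    // Area-preserving map of Equation 4.20: canonical (z1, z2) -> a point
    // uniformly distributed over the triangle ABC.
    Point2 sampleTriangle(const Point2& A, const Point2& B, const Point2& C,
                          double z1, double z2) {
        double s = std::sqrt(z1);
        double a = 1.0 - s, b = s * (1.0 - z2), c = s * z2;  // barycentric weights
        return { a * A.x + b * B.x + c * C.x,
                 a * A.y + b * B.y + c * C.y };
    }

    int main() {
        std::mt19937 rng(5);
        std::uniform_real_distribution<double> canonical(0.0, 1.0);
        Point2 A{0, 0}, B{1, 0}, C{0, 1};
        for (int i = 0; i < 4; ++i) {
            Point2 p = sampleTriangle(A, B, C, canonical(rng), canonical(rng));
            std::printf("(%f, %f)\n", p.x, p.y);
        }
        return 0;
    }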
4.3.2 Sampling the Unit Disk
Next consider the unit-radius disk in the plane, parameterized by

    ψ(s, t) = s cos(2πt) x̂ + s sin(2πt) ŷ,    (4.21)

where x̂ and ŷ are the orthogonal unit vectors in the plane. Again, ψ is a smooth mapping that is bijective except when s = 0. Computing the Jacobian of ψ, we obtain

    det(Dψ) = 2πs.    (4.22)

The remaining steps proceed precisely as in the case of the planar triangle; in fact, the distributions F and G turn out to be identical. Thus, we obtain

    Φ(z1, z2) = ψ(√z1, z2) = √z1 [ cos(2πz2) x̂ + sin(2πz2) ŷ ].    (4.23)
The resulting algorithm for sampling the unit disk is exactly analogous to the algorithm shown in Figure 4.5 for sampling planar triangles.
4.3.3 Sampling the Unit Hemisphere
As a first example of applying the steps in Figure 4.4 to a surface in IR³, we shall derive the well-known area-preserving parametrization for the unit-radius hemisphere centered at the origin. First, we define a parametrization using spherical coordinates:

    ψ(s, t) = ( sin(πs/2) cos(2πt),  sin(πs/2) sin(2πt),  cos(πs/2) ).    (4.24)

Here the parameter s defines the polar angle and t defines the azimuthal angle. Since the codomain of ψ is IR³, we can apply equation (4.12). Computing the cross product of the partial derivatives ψs and ψt, we obtain

    σ(s, t) = ‖ ψs(s, t) × ψt(s, t) ‖ = π² sin(πs/2).    (4.25)

From equations (4.5) and (4.6) it follows that

    F(s) = 1 − cos(πs/2)   and   Gs(t) = t,    (4.26)

which invert to

    f(z) = (2/π) cos⁻¹(1 − z)   and   g(z1, z2) = z2.

The resulting area-preserving parametrization is

    Φ(z1, z2) = ( √(1 − (1 − z1)²) cos(2πz2),  √(1 − (1 − z1)²) sin(2πz2),  1 − z1 ).

Here the s coordinate of the parametrization simply selects the z-plane, between z = 1 and z = 0, while the t coordinate parameterizes the resulting circle in the z-plane. The form of Φ can be simplified somewhat by substituting 1 − z1 for z1, which does not alter the distribution.
4.3.4 Sampling a Phong Lobe
Now suppose that we wish to sample the hemisphere according to a Phong distribution rather than uniformly; that is, with a density proportional to the cosine of the polar angle raised to a power. To do this we simply include a weighting function in the definition of σ given in equation (4.25). That is, we let

    σ(s, t) = π² sin(πs/2) cosᵏ(πs/2),    (4.27)

where k is the Phong exponent. It follows that

    F(s) = cos^{k+1}(πs/2)   and   Gs(t) = t,

which implies that

    f(z) = (2/π) cos⁻¹( z^{1/(k+1)} )   and   g(z1, z2) = z2.

It follows that

    Φ(z1, z2) = ( √(1 − z1^{2/(k+1)}) cos(2πz2),  √(1 − z1^{2/(k+1)}) sin(2πz2),  z1^{1/(k+1)} ).    (4.28)
4.3.5 Sampling Spherical Triangles

Figure 4.6: Parameter s controls the edge length bs, which determines the vertex Cs, and consequently the sub-triangle Ts. Parameter t then selects a point P along the arc between Cs and B. Not shown is the length of the arc AC, which is b.
The parametrization of the triangle can be expressed using interpolation along great arcs, in which the normalized component of a vector y orthogonal to a vector x,

    (I − xxᵀ) y / ‖(I − xxᵀ) y‖,    (4.31)

plays the role of a second basis vector. With this parametrization the function σ takes the form

    σ(s, t) = h(s) sin(t as)    (4.32)

for some function h, where as is the length of the moving edge BCs as a function of s. The exact nature of h is irrelevant, however, as it will not be needed to compute F, and it is eliminated from Gs by normalization. Thus, we have

    F(s) = area(Ts) / area(T),    (4.33)

    Gs(t) = (1 − cos(as t)) / (1 − cos as).    (4.34)
Before inverting F we record some spherical trigonometry. Let the spherical triangle T have vertices A, B, and C, internal angles α, β, and γ, opposite edge lengths a, b, and c, and area A. Then

    A = α + β + γ − π,    (4.36)

    sin α / sin a = sin β / sin b = sin γ / sin c,    (4.37)

    cos α = −cos β cos γ + sin β sin γ cos a,    (4.38)
    cos β = −cos γ cos α + sin γ sin α cos b,    (4.39)
    cos γ = −cos α cos β + sin α sin β cos c.    (4.40)

Each of these identities will be employed in deriving area-preserving parametrizations, either for spherical triangles or projected spherical polygons, which will be described in the following section. Equation (4.36) is known as Girard's formula, equation (4.37) is the spherical law of sines, and equations (4.38), (4.39), and (4.40) are spherical cosine laws for angles [6].
Our task will be to construct f : [0, 1] → IR such that f(As/A) = s, where the parameter s ∈ [0, 1] selects the sub-triangle Ts and consequently determines the area As. Specifically, the sub-triangle Ts is formed by choosing a new vertex Cs on the great arc between A and C, at an arc length of bs = sb along the arc from A, as shown in Figure 4.6. The point P is finally chosen on the arc between B and Cs, according to the parameter t.
    Use ξ1 and the fixed quantities of the triangle to find cos bs (equations (4.41)–(4.43)), and compute the warped s coordinate:
    s ← (1/b) cos⁻¹(cos bs) ;
    Compute the third vertex of the sub-triangle:
    Cs ← slerp(A, C, s) ;
    Compute the t coordinate using Cs and ξ2:
    t ← cos⁻¹( 1 − ξ2 (1 − Cs · B) ) / cos⁻¹( Cs · B ) ;
    Construct the corresponding point on the sphere:
    P ← slerp(B, Cs, t) ;
    return P ;
end

Figure 4.7: An area-preserving parametrization for an arbitrary spherical triangle ABC. This procedure can be easily optimized to remove the inverse cosines used to compute the warped coordinates s and t, since the slerp function uses the cosine of its scalar argument.
To find the parameter s that corresponds to the fractional area As/A, we first solve for cos bs in terms of As and various constants associated with the triangle; equations (4.36) and (4.39) lead to a closed-form expression for cos bs, given in equations (4.41)–(4.43).
Results of the algorithm are shown in Figure 4.1. On the left, the samples are
identically distributed, which produces a pattern equivalent to that obtained by rejection sampling; however, each sample is guaranteed to fall within the triangle.
The pattern on the right was generated by partitioning the unit square into a regular
grid and choosing one pair (1 , 2 ) uniformly from each grid cell, which corresponds to stratified sampling. The advantage of stratified sampling is evident in the
resulting pattern; the samples are more evenly distributed, which generally reduces
the variance of Monte Carlo estimates based on these samples. The sampling algorithm can be applied to spherical polygons by decomposing them into triangles and
performing stratified sampling on each component independently, which is analogous to the method for planar polygons described by Turk [84]. This is one means
of sampling the solid angle subtended by a polygon. We discuss another approach
in the following section.
4.4 Sampling Projected Spherical Polygons
In this section we will see an example in which the inversion of the F function cannot be done symbolically; consequently, we will resort to either approximate inversion, or inversion via a root finder.
The dot product ω · n appearing in equation (4.1) is the ubiquitous cosine factor that appears in nearly every illumination integral. Since it is often infeasible to construct a random variable that mimics the full integrand, we settle for absorbing the cosine term into the sampling distribution; this compromise is a useful special case of importance sampling. In this section we address the problem of generating stratified samples over the solid angle subtended by arbitrary polygons, while taking the cosine weighting into account, as shown in Figure 4.2. The combination of stratification and importance sampling, even in this relatively weak form, can significantly reduce the variance of the associated Monte Carlo estimator [3, 64].
We now describe a new technique for Monte Carlo sampling of spherical polygons with a density proportional to the cosine from a given axis which, by Nusselt's analogy, is equivalent to uniformly sampling the projection of the spherical polygon onto the z = 0 plane. The technique handles polygons directly, without first partitioning them into triangles, and is ideally suited for stratified sampling. The Jacobian of the bijection from the unit square to the polygon can be made arbitrarily close to the cosine density, making the statistical bias as close to zero as desired. After preprocessing a polygon with n vertices, which can be done in O(n² log n) time, samples can be drawn in O(n) time in the worst case, or O(log n) time for convex polygons.
Figure 4.8: (a) Spherical triangle T. We consider the projected area of the triangle T as a function of θ, keeping the vertices A and B and the angle at B fixed. (b) Partitioning a spherical polygon with vertices v1, ..., v4 by great circles passing through the poles and the vertices.
Let Γa, Γb, and Γc denote the outward unit normals for each edge of the triangle, as shown in Figure 4.8a. Then cos γ = −Γa · Γb, where Γb can be expressed as

    Γb = Γc cos θ + (Γc × A) sin θ.    (4.46)

Also, from equation (4.37) we have sin γ = sin c sin θ / sin a. Therefore,

    a(θ) = tan⁻¹( sin θ / (c1 cos θ − c2 sin θ) ),    (4.47)

where

    c1 = ( (Γa · Γc) cos β + 1 ) / ( sin β sin c ),   c2 = ( ((Γa × Γc) · A) cos β ) / ( sin β sin c ).    (4.48)

These constants depend only on the fixed features of the triangle, as the vectors Γa and Γc do not depend on θ. It is now straightforward to find cos b as a function of θ, which we shall denote by z(θ). Specifically,

    z(θ) = (B · N) cos a(θ) + (D · N) sin a(θ),    (4.49)

where D is a point on the sphere that is orthogonal to B, and on the great circle through B and C. That is,

    D = (I − BBᵀ) C.    (4.50)
4.4.1 The Cumulative Marginal Distribution
We break the problem of computing the bijection Φ : [0, 1]² → P into two parts. First, we define a sequence of sub-polygons of P in much the same way that we parameterized the triangle T above; that is, we define P(θ) to be the intersection of P with a lune¹ whose internal angle is θ, and with one edge passing through an extremal vertex of P. Next we define a cumulative marginal distribution F(θ) that gives the area of the polygon P(θ) projected onto the plane orthogonal to N, which is simply the cosine-weighted area of P(θ). Then F is a strictly monotonically increasing function of θ. By inverting this function we arrive at the first component of our sampling algorithm. That is, if ξ1 is a uniformly distributed random variable in [0, 1], and if θ is given by

    θ = F⁻¹(ξ1),    (4.51)

then θ has the desired marginal distribution of the cosine-weighted samples.

¹A lune is a spherical triangle with exactly two vertices, which are antipodal.
The projected (cosine-weighted) area Φ of a spherical triangle can be expressed in terms of its edge lengths and the outward normals Γa, Γb, and Γc of the triangle T (equation (4.52)), as shown in Figure 4.8a. If we now constrain T to be a polar triangle, with vertex A at the pole of the hemisphere (A = N), then Φ becomes a very simple function of θ. Specifically,

    Φ(θ) = a(θ) (Γa · N),    (4.53)

where Γa · N is fixed; this follows from the fact that both Γb and Γc are orthogonal to N. Equation (4.53) allows us to easily compute the function F(θ) for any collection of spherical polygons whose vertices all lie on the lune with vertices at A and −A as shown in Figure 4.10, where we restrict our attention to the positive or upper half of the lune. Thus,

    F(θ) = Σ_{i=1}^{k} λi ai(θ − θk),    (4.54)

for θ ∈ [θk, θk+1], where the constants λ1, λ2, ..., λk account for the slope and orientation of the edges; that is, edges that result in clockwise polar triangles are positive, while those forming counter-clockwise triangles are negative.
We now extend F(θ) to a general spherical polygon P by slicing P into lunes with the above property; that is, we partition P into smaller polygons by passing a great arc through each vertex, as shown in Figure 4.8b. Then for any spherical polygon, we can evaluate F(θ) exactly for any value of θ by virtue of equation (4.47). The resulting function F is a piecewise-continuous strictly monotonically increasing function with at most n − 2 discontinuities, where n is the number of vertices in the polygon. See Figure 4.9. This function is precisely the cumulative marginal distribution function that we must invert to perform the first stage of cosine-weighted sampling. Because it is monotonically increasing, its inverse is well-defined.
4.4.2 The Sampling Algorithm
Given two variables ξ1 and ξ2 in the interval [0, 1] we will compute the corresponding point P = Φ(ξ1, ξ2) in the polygon P. We use ξ1 to determine an angle θ̂, as described above, and ξ2 to select the height z according to the resulting conditional density defined along the intersection of the polygon P and the great circle at θ̂. To compute θ̂ using equation (4.51), we proceed in two steps. First, we find the lune from which θ̂ will be drawn. This corresponds to finding the integer k such that
Figure 4.9: The cumulative marginal distribution function F as a function of the angle θ. At each value of θi, the value is the form factor from the origin to the portion of the polygon within the range [θ1, θi]. This function is strictly monotonically increasing, with at most n − 2 derivative discontinuities, where n is the number of vertices in the polygon. The fluctuations in F have been greatly exaggerated for purposes of illustration.
Figure 4.10: On the left is an illustration of a single lune with a collection of arcs passing through it, and the points at which a great circle at θ̂ intersects them. On the right is a cross-section of the circle, showing the heights z̲i and z̄i corresponding to these intersection points.
    Φk/Φtot ≤ ξ1 ≤ Φk+1/Φtot,    (4.55)

where Φj denotes the cumulative projected area F(θj) and Φtot is the projected area of the entire polygon. Within the selected lune, the inverse of F cannot be expressed symboli-
cally in general, so we seek a numerical approximation. This is the only step in the algorithm which is not computed exactly; thus, any bias that is introduced in the sampling is a result of this step alone.
Approximate numerical inversion is greatly simplified by the nature of F within each lune. Since F is extremely smooth and strictly monotonic, we can approximate F⁻¹ directly to high accuracy with a low-order polynomial. For example, we may use

    F⁻¹(x) ≈ a + bx + cx² + dx³,    (4.56)

where we set

    (a, b, c, d)ᵀ = V⁻¹ (θk, θk + Δ, θk + 2Δ, θk+1)ᵀ,    (4.57)

with Δ = (θk+1 − θk)/3 and V the Vandermonde-style matrix built from the corresponding values of F, so that the cubic interpolates F⁻¹ across the lune.
Once θ̂ has been found, ξ2 selects a height z along the great circle at θ̂. Let z̲i and z̄i denote the lower and upper heights at which this circle crosses the arcs passing through the lune (Figure 4.10), and define

    Zj = Σ_{i=1}^{j} ( z̄i² − z̲i² ).    (4.58)

Then Zn is the normalization constant. The random variable ξ2 then selects the interval by finding 1 ≤ ℓ ≤ n such that

    Z_{ℓ−1}/Zn ≤ ξ2 ≤ Z_ℓ/Zn.    (4.59)
Figure 4.12: A non-convex spherical polygon with cosine-weighted and stratified samples
generated with the proposed mapping.
The height z and the resulting sample point are then given by

    z = √( ξ2 Zn − Z_{ℓ−1} + z̲ℓ² ),    (4.60)

    P = z N + ρ d(θ̂),    (4.61)

where ρ = √(1 − z²) and d(θ̂) is the unit direction orthogonal to N at azimuth θ̂.
The algorithm described above also works for spherical polygons P that surround the pole of the sphere. In this case, each lune has an odd number of segments crossing it, and z̄n = 1 must be added to the list of heights defined by each θ̂ in sampling from the conditional distribution.
The algorithm described above is somewhat more costly than the algorithm for uniform sampling of spherical triangles [3] for two reasons: 1) evaluating piecewise-continuous functions requires some searching, and 2) the cumulative marginal distribution cannot be inverted exactly. Furthermore, the sampling algorithm requires some preprocessing to make both of these operations efficient and accurate.
Pre-processing includes partitioning the polygon into lunes, computing the constants c1 and c2 defined in equation (4.48) for each resulting edge, and sorting the line segments within each lune into increasing order. In the worst case, there may be n − 2 lunes, with Θ(n) of them containing Θ(n) segments. Thus, creating them and sorting them requires O(n² log n) time in the worst case. For convex polygons, this drops to O(n), since there can be only two segments per lune.
Once the pre-processing is done, samples can be generated by searching for the appropriate [θk, θk+1] interval, which can be done in O(log n) time, and then sampling according to the conditional distribution, which can be done in O(n) time. The latter cost is the dominant one because all of the intervals must be formed for normalization. Therefore, in the worst case, the cost of drawing a sample is O(n); however, for convex polygons this drops to O(log n).
Figure 4.2 shows 900 samples in a spherical quadrilateral, distributed according
to the cosine distribution. Note that more of the samples are clustered near the pole
than the horizon. Stratification was performed by mapping jittered points from
the unit square onto the quadrilateral. Figures 4.11 and 4.12 show 900 samples
distributed according to the cosine density within a highly non-convex spherical
polygon. These samples were generated without first partitioning the polygon into
triangles. In both of the test cases, the cumulative marginal distribution function F
is very nearly piecewise linear, and its inverse can be computed to extremely high
accuracy with a piecewise cubic curve.
Chapter 5
Combining Sampling Strategies
5.1
Introduction
In this chapter we explore the idea of constructing effective random variables for
Monte Carlo integration by combining two or more simpler random variables. For
instance, suppose that we have at our disposal a convenient means of sampling the
solid angle subtended by a luminaire, and also a means of sampling a brdf; how
are these to be used in concert to estimate the reflected radiance from a surface?
While each sampling method can itself serve as the basis of an importance sampling scheme, in isolation neither can reliably predict the shape of the resulting
integrand. The problem is that the shape of the brdf may make some directions
important (i.e. likely to make a large contribution to the integral) while the luminaire, which is potentially orders of magnitude brighter than the indirect illumination, may make other directions important. The question that we shall address
is how to construct an importance sampling method that accounts for all such hot
spots by combining available sampling methods, but without introducing statistical bias. The following discussion closely parallels the work of Veach [90], who
was the first to systematically explore this idea in the context of global illumination.
To simplify the discussion, let us assume that we are attempting to approximate
some quantity I, which is given by the integral of an unknown and potentially ill-behaved function f over the domain D:

    I = ∫_D f(x) dx.    (5.1)
For instance, f may be the product of incident radiance (direct and indirect), a reflectance function, and a visibility factor, and D may be either a collection of surfaces or the hemisphere of incident directions; in cases such as these, I may represent reflected radiance. In traditional importance sampling, we select a probability density function (pdf) p over D and rewrite the integral as

    I = ∫_D ( f(x)/p(x) ) p(x) dx = ⟨ f(X)/p(X) ⟩,    (5.2)

where X denotes a random variable on the domain D distributed according to the pdf p, and ⟨·⟩ denotes the expected value of a random variable. The second equality in equation (5.2) is simply the definition of expected value. It follows immediately that the sample mean of the new random variable f(X)/p(X) is an estimator for I; that is, if

    E = (1/N) Σ_{i=1}^{N} f(Xi)/p(Xi),    (5.3)

where X1, ..., XN are iid samples distributed according to p, then ⟨E⟩ = I.
5.2 Multiple Importance Sampling
Now let us suppose that we have k distinct pdfs, p1 , p2 , . . . , pk , that each mimic
some potential hot spot in the integrand; that is, each concentrates samples in a
region of the domain where the integrand may be relatively large. For instance, p1
may sample according to the brdf, concentrating samples around specular directions of glossy surfaces, while p2 , . . . , pk sample various luminaires or potential
specular reflections. Let us further suppose that for each pi we draw Ni iid samples, Xi,1 , Xi,2 , . . . , Xi,Ni , distributed according to pi . Our goal is to combine them
into an estimator E that has several desirable properties. In particular, we wish to
ensure that
1. ⟨E⟩ = I, so the estimator is unbiased,
2. E is simple and inexpensive to compute, and
3. E has low variance.
A reasonable class of estimators to consider is of the form

    E ≡ Σ_{i=1}^{k} Σ_{j=1}^{Ni} ωi(Xi,j) f(Xi,j),    (5.4)
for some suitable choice of the functions \alpha_i. That is, let us allow a different function \alpha_i to be associated with the samples drawn from each pdf p_i, and also allow the weight of each sample to depend on the sample itself. Equation (5.4) is extremely general, and also reasonable, as we can immediately ensure that E is unbiased by constraining the functions \alpha_i to be of the form

    \alpha_i(x) = \frac{w_i(x)}{N_i \, p_i(x)},    (5.5)

where the weighting functions w_i satisfy

    w_i(x) \ge 0,    (5.6)
    w_1(x) + \cdots + w_k(x) = 1.    (5.7)
To see that the resulting estimator is unbiased, regardless of the choice of the weighting functions w_i, provided that they satisfy constraints (5.6) and (5.7), let us define E_w to be the estimator E where the \alpha_i are of the form shown in equation (5.5). Then

    \langle E_w \rangle = \left\langle \sum_{i=1}^{k} \sum_{j=1}^{N_i} \alpha_i(X_{i,j}) f(X_{i,j}) \right\rangle
                        = \sum_{i=1}^{k} \sum_{j=1}^{N_i} \langle \alpha_i(X_{i,j}) f(X_{i,j}) \rangle
                        = \sum_{i=1}^{k} \sum_{j=1}^{N_i} \int_D \alpha_i(x) f(x) \, p_i(x) \, dx
                        = \sum_{i=1}^{k} \int_D \frac{w_i(x)}{p_i(x)} f(x) \, p_i(x) \, dx
                        = \int_D f(x) \left[ \sum_{i=1}^{k} w_i(x) \right] dx
                        = \int_D f(x) \, dx
                        = I.
Thus, by considering only estimators of the form Ew , we may henceforth ignore
property 1 and concentrate strictly on selecting the weighting functions wi so as to
satisfy the other two properties.
5.3
In some sense the most obvious weighting functions to employ are given by

    w_i(x) = \frac{c_i \, p_i(x)}{q(x)},    (5.8)

where

    q(x) \equiv c_1 p_1(x) + \cdots + c_k p_k(x)    (5.9)

is a pdf obtained by taking a convex combination of the original pdfs; that is, the constants c_i satisfy c_i \ge 0 and c_1 + \cdots + c_k = 1. Clearly, these w_i are non-negative and sum to one at each x; therefore the resulting estimator is unbiased, as shown above.
above. This particular choice is obvious in the sense that it corresponds exactly
to classical importance sampling based on the pdf defined in equation (5.9), when
a very natural constraint is imposed on N1 , . . . , Nk . To see this, observe that
    E_w = \sum_{i=1}^{k} \sum_{j=1}^{N_i} \frac{w_i(X_{i,j})}{N_i \, p_i(X_{i,j})} f(X_{i,j})
        = \sum_{i=1}^{k} \frac{1}{N_i} \sum_{j=1}^{N_i} \frac{c_i \, f(X_{i,j})}{q(X_{i,j})}
        = \sum_{i=1}^{k} \frac{c_i}{N_i} \sum_{j=1}^{N_i} \frac{f(X_{i,j})}{q(X_{i,j})}.    (5.10)

If we now choose c_i = N_i / N, where N = N_1 + \cdots + N_k is the total number of samples, this becomes

    E_w = \frac{1}{N} \sum_{i=1}^{k} \sum_{j=1}^{N_i} \frac{f(X_{i,j})}{q(X_{i,j})}.    (5.11)
Note that in equation (5.11) all samples are handled in exactly the same manner;
that is, the weighting of the samples does not depend on i, which indicates the
pdfs they are distributed according to. This is precisely the formula we would
obtain if we began with q as our pdf for importance sampling. Adopting Veach's
terminology, we shall refer to this particular choice of weighting functions as the
balance heuristic [90]. Other possibilities for the weighting functions, which are
also based on convex combinations of the original pdfs, include
    w_i(x) = \begin{cases} 1 & \text{if } c_i p_i(x) = \max_j c_j p_j(x) \\ 0 & \text{otherwise,} \end{cases}    (5.12)

and

    w_i(x) = c_i \, p_i^m(x) \left[ \sum_{j=1}^{k} c_j \, p_j^m(x) \right]^{-1},    (5.13)

for some exponent m \ge 1. Again, we need only check that these weighting functions are non-negative and sum to one for all x to see that they give rise to unbiased estimators. Note, also, that each of these strategies is extremely simple to compute, thus satisfying property 2 noted earlier.
Veach [90] has also shown that the balance heuristic is close to optimal in the following sense. Let \hat{E}_w denote the estimator obtained with the balance heuristic, and let E_w be the estimator resulting from any other choice of weighting functions satisfying (5.6) and (5.7). Then

    \mathrm{Var}(\hat{E}_w) \le \mathrm{Var}(E_w) + \left( \frac{1}{N_{\min}} - \frac{1}{N} \right) I^2,    (5.14)

where N_{\min} = \min_i N_i and N = N_1 + \cdots + N_k. Inequality (5.14) indicates that the variance of the estimator \hat{E}_w compares favorably with the optimal strategy, which would be infeasible to determine in any case. In fact, as the number of samples of the least-sampled pdf approaches infinity, the balance heuristic approaches optimality.
Fortunately, the balance heuristic is also extremely easy to apply; it demands very little beyond the standard requirements of importance sampling, which include the ability to generate samples distributed according to each of the original pdfs p_i, and the ability to compute the density of a given point x with respect to each of the original pdfs [41]. This last requirement simply means that for each x \in D and 1 \le i \le k, we must be able to evaluate p_i(x). Thus, \hat{E}_w satisfies all three properties noted earlier, and is therefore a reasonable heuristic in itself for combining multiple sampling strategies.
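To make the balance heuristic concrete, here is a small Python sketch (an illustration added to these notes, not code from the course). It combines two sampling strategies for a one-dimensional integral with two hot spots; the integrand, the two pdfs, and the sample counts are made-up choices. Each sample is weighted as in equation (5.11), i.e. by f(x)/(N q(x)) with q the convex combination of the pdfs and c_i = N_i/N.

import math
import random

def f(x):
    # Illustrative integrand with a sharp peak near 0.1 and a broad bump near 0.7.
    return math.exp(-((x - 0.1) / 0.02) ** 2) + 0.5 * math.exp(-((x - 0.7) / 0.2) ** 2)

# Strategy 1: uniform on [0, 0.2), aimed at the sharp peak.
def sample_p1():
    return 0.2 * random.random()

def p1(x):
    return 5.0 if 0.0 <= x < 0.2 else 0.0

# Strategy 2: uniform on [0, 1), a baseline that covers the whole domain.
def sample_p2():
    return random.random()

def p2(x):
    return 1.0 if 0.0 <= x < 1.0 else 0.0

def balance_heuristic_estimate(n1, n2):
    """Combine the two strategies with the balance heuristic, equation (5.11)."""
    n = n1 + n2
    c1, c2 = n1 / n, n2 / n
    xs = [sample_p1() for _ in range(n1)] + [sample_p2() for _ in range(n2)]
    total = 0.0
    for x in xs:
        q = c1 * p1(x) + c2 * p2(x)   # combined pdf, equation (5.9)
        total += f(x) / q             # every sample gets the same weighting
    return total / n

if __name__ == "__main__":
    print("balance heuristic estimate:", balance_heuristic_estimate(500, 500))

Because every sample is weighted by the same combined density q, no single poorly matched pdf can blow up the variance of the combined estimator.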
Chapter 6
Quasi-Monte Carlo Sampling
In Monte Carlo (MC) sampling the sample averages of random quantities are used
to estimate the corresponding expectations. The justification is through the law of
large numbers. In quasi-Monte Carlo (QMC) sampling we are able to get a law
of large numbers with deterministic inputs instead of random ones. Naturally we
seek deterministic inputs that make the answer converge as quickly as possible. In
particular it is common for QMC to produce much more accurate answers than MC
does. Keller [39] was an early proponent of QMC methods for computer graphics.
We begin by reviewing Monte Carlo sampling and showing how many problems can be reduced to integrals over the unit cube [0, 1)d . Next we consider
how stratification methods, such as jittered sampling, can improve the accuracy
of Monte Carlo for favorable functions while doing no harm for unfavorable ones.
Methods of multiple stratification, such as Latin hypercube sampling (n-rooks), represent a significant improvement on stratified sampling. These stratification methods balance the sampling points with respect to a large number of hyperrectangular boxes. QMC may be thought of as an attempt to take this to the logical limit: how close can we get to balancing the sample points with respect to every box in [0, 1)^d at once? The answer, provided by the theory of discrepancy, is: surprisingly close, enough so that the result is a significant improvement compared to MC. This chapter
concludes with a presentation of digital nets, integration lattices and randomized
QMC.
6.1 Crude Monte Carlo

We are given an integration problem of the form

    I = \int_D f(x) \, q(x) \, dx.    (6.1)

The set D \subseteq \mathbb{R}^d is the domain of interest, perhaps a region on the unit sphere or in the unit cube. The function q is a probability density function on D; that is, q(x) \ge 0 and \int_D q(x) \, dx = 1. The function f gives the quantity whose expectation we seek: I is the expected value of f(x) for random x with density q on D.
In crude Monte Carlo sampling we generate n independent samples x_1, \ldots, x_n from the density q and estimate I by

    \hat{I} = \hat{I}_n = \frac{1}{n} \sum_{i=1}^{n} f(x_i).    (6.2)

By the strong law of large numbers,

    \Pr\left( \lim_{n \to \infty} \hat{I}_n = I \right) = 1.    (6.3)

That is, crude Monte Carlo always converges to the right answer as n increases without bound.
Now suppose that f has finite variance \sigma^2 = \mathrm{Var}(f(x)) \equiv \int_D (f(x) - I)^2 q(x) \, dx. Then E((\hat{I}_n - I)^2) = \sigma^2 / n, so the root mean square error (RMSE) of MC sampling is O(1/\sqrt{n}). This rate is slow compared to that of classical quadrature rules (Davis and Rabinowitz [16]) for smooth functions in low dimensions. Monte Carlo methods can improve on classical ones for problems in high dimensions or on discontinuous functions.
A given integration problem can be written in the form (6.1) in many different
ways. First, let p be a probability density on D such that p(x) > 0 whenever
q(x)|f (x)| > 0. Then
    I = \int_D f(x) \, q(x) \, dx = \int_D \frac{f(x) q(x)}{p(x)} \, p(x) \, dx,

and we could as well sample x_i \sim p(x) and estimate I by

    \hat{I}_p = \hat{I}_{n,p} = \frac{1}{n} \sum_{i=1}^{n} \frac{f(x_i) q(x_i)}{p(x_i)}.    (6.4)
The RMSE can be strongly affected, for better or worse, by this re-expression,
known as importance sampling. If we are able to find a good p that is nearly
proportional to f q then we can get much better estimates.
Making a good choice of density p is problem specific. Suppose, for instance, that one of the components of x describes the angle \theta = \theta(x) between a ray and a surface normal. The original version of f may include a factor of \cos^\eta(\theta) for some \eta > 0. Using a density p(x) \propto q(x) \cos^\eta(\theta) corresponds to moving the cosine power out of the integrand and into the sampling density.
We will suppose that a choice of p has already been made. There is also the
possibility of using a mixture of sampling densities pj as with the balance heuristic
of Veach and Guibas [88, 89]. This case can be incorporated by increasing the dimension of x by one, and using that variable to select j from a discrete distribution.
Monte Carlo sampling of x \sim p over D almost always uses points from a pseudo-random number generator simulating the uniform distribution on the interval from 0 to 1. We will take this to mean the uniform distribution on the half-open interval [0, 1). Suppose that it takes d^* uniform random variables to simulate a point in the d-dimensional domain D. Often d^* = d, but sometimes d^* = 2 variables from [0, 1) can be used to generate a point within a surface element in d = 3 dimensional space. In other problems we might use d^* > d random variables to generate a p-distributed point in D \subseteq \mathbb{R}^d. Chapter 4 describes general techniques for transforming [0, 1)^{d^*} into D and provides some specific examples of use in ray tracing. Devroye [17] is a comprehensive reference on techniques for transforming uniform random variables into one's desired random objects.
Suppose that a point x having the U[0, 1)^{d^*} distribution is transformed into a point \tau(x) having the density p on D. Then

    I = \int_D \frac{f(x) q(x)}{p(x)} \, p(x) \, dx = \int_{[0,1)^{d^*}} \frac{f(\tau(x)) \, q(\tau(x))}{p(\tau(x))} \, dx \equiv \int_{[0,1)^{d^*}} f^*(x) \, dx,    (6.5)

where f^* incorporates the transformation \tau and the density q. Then I is estimated by

    \hat{I} = \frac{1}{n} \sum_{i=1}^{n} f^*(x_i),    (6.6)

with x_1, \ldots, x_n \in [0, 1)^{d^*}.
6.2 Stratification
Stratified sampling is a technique for reducing the variance of a Monte Carlo integral. It was originally applied in survey sampling (see Cochran [10]) and has been adapted in Monte Carlo methods (Fishman [21]). In stratified sampling, the domain of x is written as a union of strata D = \bigcup_{h=1}^{H} D_h where D_j \cap D_k = \emptyset if j \ne k. An integral is estimated from within each stratum and then combined. Following the presentation in chapter 6.1, we suppose here that D = [0, 1)^d.
Figure 6.1 shows a random sample from the unit square along with 3 alternative
stratified samplings. The unit cube [0, 1)d is very easily partitioned into box shaped
strata like those shown. It is also easy to sample uniformly in such strata. Suppose
that a, c \in [0, 1)^d with a < c componentwise. Let U \sim U[0, 1)^d. Then a + (c - a)U, interpreted componentwise, is uniformly distributed on the box with lower left corner a and upper right corner c.
In the simplest form of stratified sampling, a Monte Carlo sample x_{h1}, \ldots, x_{hn_h} is taken from within stratum D_h. Each stratum is sampled independently, and the results are combined as

    \hat{I}_{\mathrm{STRAT}} = \hat{I}_{\mathrm{STRAT}}(f) = \sum_{h=1}^{H} \frac{|D_h|}{n_h} \sum_{i=1}^{n_h} f(x_{hi}),    (6.7)

where |D_h| is the volume of stratum D_h. The stratified estimate is unbiased, because

    E(\hat{I}_{\mathrm{STRAT}}) = \sum_{h=1}^{H} \frac{|D_h|}{n_h} \sum_{i=1}^{n_h} E(f(x_{hi})) = \sum_{h=1}^{H} |D_h| \, \mu_h = \sum_{h=1}^{H} \int_{D_h} f(x) \, dx = I,

where \mu_h is the average of f over stratum D_h.
Figure 6.1: The upper left figure is a simple random sample of 16 points in [0, 1)2 .
The other figures show stratified samples with 4 points from each of 4 strata.
The variance of the stratified estimate depends on the within-stratum variances. Let \sigma_h^2 denote the variance of f within stratum D_h, and let h(x) denote the stratum containing x. For x \sim U[0, 1)^d we have the conditional variance formula

    \sigma^2 = \mathrm{Var}(f(x)) = E(\mathrm{Var}(f(x) \mid h(x))) + \mathrm{Var}(E(f(x) \mid h(x)))    (6.10)
             = \sum_{h=1}^{H} |D_h| \, \sigma_h^2 + \sum_{h=1}^{H} |D_h| (\mu_h - I)^2.    (6.11)

With proportional allocation, n_h = n |D_h|, the variance of the stratified estimate is

    \mathrm{Var}(\hat{I}_{\mathrm{STRAT}}) = \sum_{h=1}^{H} \frac{|D_h|^2 \sigma_h^2}{n_h} = \frac{1}{n} \sum_{h=1}^{H} |D_h| \, \sigma_h^2 \le \frac{\sigma^2}{n},    (6.12)

using (6.10) and (6.11).
Equation (6.12) shows that stratified sampling with proportional allocation does not increase the variance. Proportional allocation is not usually optimal. Optimal allocations take n_h \propto |D_h| \sigma_h. If estimates of \sigma_h are available they can be used to set n_h, but poor estimates of \sigma_h could result in stratified sampling with larger variance than crude MC. We will assume proportional allocation.
A particular form of stratified sampling is well suited to the unit cube. Haber [24]
proposes to partition the unit cube [0, 1)d into H = md congruent cubical regions
and to take nh = 1 point from each of them. This stratification is known as jittered
sampling in graphics, following Cook, Porter and Carpenter [14].
Any function that is constant within strata is integrated without error by \hat{I}_{\mathrm{STRAT}}. If f is close to such a function, then f is integrated with a small error. Let \bar{f} be the function defined by \bar{f}(x) = \mu_{h(x)}, and define the residual f_{\mathrm{RES}}(x) = f(x) - \bar{f}(x). This decomposition is illustrated in Figure 6.2 for a function on [0, 1). The error \hat{I}_{\mathrm{STRAT}} - I reduces to the stratified sampling estimate of the mean of f_{\mathrm{RES}}. Stratified sampling reduces the Monte Carlo variance from \sigma^2(f)/n to \sigma^2(f_{\mathrm{RES}})/n.
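The following short Python sketch (an illustration added here, with an arbitrary smooth test integrand) contrasts crude MC with jittered sampling on [0,1)^2: one sample is placed in each of the m-by-m congruent square strata, and since every stratum has volume 1/m^2, equation (6.7) reduces to a plain average.

import random

def f(x, y):
    # Arbitrary smooth test integrand on [0,1)^2; exact integral is 1/3 + 1/2.
    return x * x + y

def crude_mc(n):
    return sum(f(random.random(), random.random()) for _ in range(n)) / n

def jittered(m):
    # One point per cell of an m-by-m grid (Haber's scheme / jittered sampling).
    total = 0.0
    for i in range(m):
        for j in range(m):
            total += f((i + random.random()) / m, (j + random.random()) / m)
    return total / (m * m)

if __name__ == "__main__":
    print("crude MC :", crude_mc(256))
    print("jittered :", jittered(16))      # also 256 samples
    print("exact    :", 1.0 / 3.0 + 0.5)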
6.3
Multiple Stratification
Suppose we can afford to sample 16 points in [0, 1)2 . Sampling one point from
each of 16 vertical strata would be a good strategy if the function f depended
primarily on the horizontal coordinate. Conversely if the vertical coordinate is the
more important one, then it would be better to take one point from each of 16
horizontal strata.
It is possible to stratify both ways with the same sample, in what is known as
Latin hypercube sampling (McKay, Beckman and W. J. Conover [51]) or n-rooks
Figure 6.2: The upper plot shows a piece-wise smooth function f on [0, 1). The step function is the best approximation \bar{f} to f, in mean square error, among functions constant over intervals [j/10, (j + 1)/10). The lower plot shows the difference f - \bar{f} using a vertical scale similar to the upper plot.
sampling (Shirley [67]). Figure 6.3 shows a set of 16 points in the square, that are
simultaneously stratified in each of 16 horizontal and vertical strata.
If the function f on [0, 1)2 is dominated by either the horizontal coordinate
or the vertical one, then we'll get an accurate answer, and we don't even need to know which is the dominant variable. Better yet, suppose that neither variable is dominant, but that f can be written as a sum of the form (6.13) below.
Figure 6.3: The left plot shows 16 points, one in each of 16 vertical strata. The
right plot shows the same 16 points. There is one in each of 16 horizontal strata.
These points form what is called a Latin hypercube sample, or an n-rooks pattern.
    f(x) = f_H(x) + f_V(x) + f_{\mathrm{RES}}(x),    (6.13)

where f_H depends only on the horizontal variable, f_V depends only on the vertical one, and the residual f_{\mathrm{RES}} is defined by subtraction. Latin hypercube sampling will give an error that is largely unaffected by the additive part f_H + f_V. Stein [78] showed that the variance in Latin hypercube sampling is approximately \sigma^2_{\mathrm{RES}} / n, where \sigma^2_{\mathrm{RES}} is the smallest variance of f_{\mathrm{RES}} for any decomposition of the form (6.13). His result is for general d, not just d = 2.
Stratification with proportional allocation is never worse than crude MC. The same is almost true for Latin hypercube sampling. Owen [58] shows that for all n \ge 2, d \ge 1 and square integrable f,

    \mathrm{Var}(\hat{I}_{\mathrm{LHS}}) \le \frac{\sigma^2}{n - 1}.

For the worst f, Latin hypercube sampling is like using crude MC with one observation less.
The construction of a Latin hypercube sample requires uniform random permutations. A uniform random permutation of 0 through n - 1 is one for which all n! possible orderings have the same probability. Devroye [17] gives algorithms for such random permutations. One choice is to have an array A_i = i for i = 0, \ldots, n - 1 and then for j = n - 1 down to 1 swap A_j with A_k where k is uniformly and randomly chosen from 0 through j.
For j = 1, \ldots, d, let \pi_j be independent uniform random permutations of 0, \ldots, n - 1. Let U_{ij} \sim U[0, 1) independently for i = 1, \ldots, n and j = 1, \ldots, d, and let X be a matrix with

    X_{ij} = \frac{\pi_j(i - 1) + U_{ij}}{n}.

Then the n rows of X form a Latin hypercube sample. That is, we may take x_i = (X_{i1}, \ldots, X_{id}). An integral estimate \hat{I} is the same whatever order the f(x_i) are summed. As a consequence we only need to permute d - 1 of the d input variables. We can take \pi_1(i - 1) = i - 1 to save the cost of one random permutation.
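As an illustration (added here, not from the original notes), the construction above takes only a few lines of Python; random.shuffle plays the role of the swap-based uniform permutation, and the permutation of the first coordinate could be skipped as noted in the text.

import random

def latin_hypercube(n, d):
    """n points in [0,1)^d with X[i][j] = (pi_j(i) + U_ij) / n."""
    columns = []
    for _ in range(d):
        perm = list(range(n))
        random.shuffle(perm)          # uniform random permutation of 0..n-1
        columns.append([(perm[i] + random.random()) / n for i in range(n)])
    return [tuple(col[i] for col in columns) for i in range(n)]

if __name__ == "__main__":
    for point in latin_hypercube(5, 2):
        print(point)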
Jittered sampling uses n = k 2 strata arranged in a k by k grid of squares
while n-rooks provides simultaneous stratification in both an n by 1 grid and a 1
by n grid. It is natural to wonder which method is better. The answer depends on
whether f is better approximated by a step function, constant within squares of a 1/k \times 1/k grid, or by an additive function with each term constant within narrower bins of width 1/n. Amazingly, we don't have to choose. It is possible to arrange n = k^2 points in an n-rooks arrangement that simultaneously has one point in each square of a k by k grid. A construction for this was proposed independently by Chiu, Shirley and Wang [8] and by Tang [81]. The former handle more general grids of n = k_1 \times k_2 points. The latter reference arranges points in [0, 1)^d with d \ge 2 in a Latin hypercube such that every two dimensional projection of x_i puts one point into each cell of a grid of strata.
6.4
Let a and c be points in [0, 1)^d for which a < c holds componentwise, and then let [a, c) denote the box of points x where a \le x < c holds componentwise. We use |[a, c)| to denote the d-dimensional volume of this box.
An infinite sequence of points x_1, x_2, \ldots \in [0, 1)^d is uniformly distributed if \lim_{n \to \infty} (1/n) \sum_{i=1}^{n} 1_{a \le x_i < c} = |[a, c)| holds for all boxes. This means that \hat{I}_n \to I for every function f(x) of the form 1_{a \le x < c}, and so for any finite linear combination of such indicators of boxes. Riemann integrable functions are well approximated by linear combinations of indicators of boxes; if the sequence (x_i) is uniformly distributed then \lim_{n \to \infty} |\hat{I}_n - I| = 0 for any function f that is Riemann integrable. Thus uniformly distributed sequences can be used to provide a deterministic law of large numbers.
To show that a sequence is uniformly distributed it is enough to show that \hat{I}_n \to I whenever f is the indicator of a b-ary box, that is, a box of the form

    \prod_{j=1}^{d} \left[ \frac{\ell_j}{b^{k_j}}, \frac{\ell_j + 1}{b^{k_j}} \right)    (6.14)

for integers k_j \ge 0 and 0 \le \ell_j < b^{k_j}. When b = 2 the box is called dyadic. An arbitrary box can be approximated by b-ary boxes. If \hat{I} \to I for all indicators of b-adic boxes then the sequence (x_i) is uniformly distributed. A mathematically more interesting result is the Weyl condition: the sequence (x_i) is uniformly distributed if and only if \hat{I}_n \to I for all trigonometric polynomials f(x) = e^{2\pi \sqrt{-1}\, k \cdot x}, where k \in \mathbb{Z}^d.
If xi are independent U [0, 1)d variables, then (xi ) is uniformly distributed with
probability one. Of course we hope to do better than random points. To that end,
we need a numerical measure of how uniformly distributed a sequence of points is.
These measures are called discrepancies, and there are a great many of them. One
of the simplest is the star discrepancy

    D_n^* = D_n^*(x_1, \ldots, x_n) = \sup_{a \in [0,1)^d} \left| \frac{1}{n} \sum_{i=1}^{n} 1_{0 \le x_i < a} - |[0, a)| \right|.    (6.15)
Figure 6.4 illustrates this discrepancy. It shows an anchored box [0, a) \subset [0, 1)^2 and a list of n = 20 points. The anchored box contains 5 of the 20 points, so (1/n) \sum_{i=1}^{n} 1_{0 \le x_i < a} =
Figure 6.4: Shown are 20 points in the unit square and an anchored box (shaded)
from (0, 0) to a = (.3, .7). The anchored box [0, a) has volume 0.21 and contains
a fraction 5/20 = 0.2 of the points.
0.20. The volume of the anchored box is 0.21, so the difference is |0.20 - 0.21| = 0.01. The star discrepancy D_n^* is found by maximizing this difference over all anchored boxes [0, a).
For x_i \sim U[0, 1)^d, Chung [9] showed that

    \limsup_{n \to \infty} \frac{\sqrt{2n}\, D_n^*}{\sqrt{\log \log n}} = 1,    (6.16)

so the star discrepancy of random points decays only slightly faster than n^{-1/2}. The QMC constructions described below achieve much lower discrepancy. The Koksma-Hlawka inequality bounds the integration error in terms of the star discrepancy:

    |\hat{I}_n - I| \le D_n^* \, V_{\mathrm{HK}}(f).    (6.17)
The factor VHK (f ) is the total variation of f in the sense of Hardy and Krause.
Niederreiter [56] gives the definition.
Equation (6.17) shows that a deterministic law of large numbers can be much
better than the random one, for large enough n and a function f with finite variation
VHK (f ). One often does see QMC methods performing much better than MC, but
equation (6.17) is not good for predicting when this will happen. The problem is
that Dn is hard to compute, VHK (f ) is harder still, and that the bound (6.17) can
grossly overestimate the error. In some cases VHK is infinite while QMC still beats
MC. Schlier [66] reports that even for QMC the variance of f is more strongly
related to the error than is the variation.
    ℓ    ℓ base 2    φ₂(ℓ) base 2    φ₂(ℓ)
    0    0.          0.000           0.000
    1    1.          0.100           0.500
    2    10.         0.010           0.250
    3    11.         0.110           0.750
    4    100.        0.001           0.125
    5    101.        0.101           0.625
    6    110.        0.011           0.375
    7    111.        0.111           0.875

Table 6.1: The first column shows integers \ell from 0 to 7. The second column shows \ell in base 2. The third column reflects the digits of \ell through the binary point to construct \phi_2(\ell). The final column is the decimal version of \phi_2(\ell).
Write the integer i \ge 0 in base b as i = \sum_{k \ge 1} n_k b^{k-1} with digits n_k \in \{0, \ldots, b - 1\}. The radical inverse function reflects these digits about the radix point,

    \phi_b(i) = \sum_{k \ge 1} n_k \, b^{-k} \in [0, 1).

A radical inverse sequence consists of \phi_b(i) for n consecutive values of i, conventionally 0 through n - 1.
Table 6.1 illustrates a radical inverse sequence, using b = 2 as van der Corput
did. Because consecutive integers alternate between even and odd, the van der
Corput sequence alternates between values in [0, 1/2) and [1/2, 1). Among any 4
consecutive van der Corput points there is exactly one in each interval [k/4, (k +
1)/4) for k = 0, 1, 2, 3. Similarly, any b^m consecutive points from the radical inverse sequence in base b are stratified with respect to b^m congruent intervals of length 1/b^m.
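A radical inverse is easy to compute digit by digit; the Python sketch below (added for illustration) reproduces the van der Corput values of Table 6.1 when called with b = 2.

def radical_inverse(i, b):
    """Reflect the base-b digits of the integer i >= 0 about the radix point."""
    inv, scale = 0.0, 1.0 / b
    while i > 0:
        inv += scale * (i % b)
        i //= b
        scale /= b
    return inv

if __name__ == "__main__":
    # First 8 van der Corput points; compare the last column of Table 6.1.
    print([radical_inverse(i, 2) for i in range(8)])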
If d > 1 then it would be a serious mistake to simply replace a stream of pseudo-random numbers by the van der Corput sequence. For example, with d = 2, taking points x_i = (\phi_2(2i - 2), \phi_2(2i - 1)) \in [0, 1)^2 we would find that all x_i lie on a diagonal line with slope 1 inside [0, 1/2) \times [1/2, 1).
For d > 1 we really need a stream of quasi-random d-vectors. There are several ways to generalize the van der Corput sequence to d \ge 1. The Halton [25] sequence in [0, 1)^d works with d relatively prime bases b_1, \ldots, b_d. Usually these are the first d prime numbers. Then for i \ge 1,

    x_i = (\phi_2(i - 1), \phi_3(i - 1), \phi_5(i - 1), \ldots, \phi_{b_d}(i - 1)) \in [0, 1)^d.
The Halton sequence has low discrepancy: D_n^* = O((\log n)^d / n).
The Halton sequence is extensible in both n and d. For small d the points
of the Halton sequence have a nearly uniform distribution. The left panel of Figure 6.5 shows a two dimensional portion of the Halton sequence using prime bases
Figure 6.5: The left panel shows the first 2^3 \cdot 3^2 = 72 points of the Halton sequence using bases 2 and 3. The middle panel shows the first 72 points for the 10th and 11th primes, 29 and 31 respectively. The right panel shows these 72 points after Faure's [20] permutation is applied.
2 and 3. The second panel shows the same points for bases 29 and 31 as would be
needed with d = 11. While they are nearly uniform in both one dimensional projections, their two dimensional uniformity is seriously lacking. When it is possible
to identify the more important components of x, these should be sampled using the
smaller prime bases.
The poorer distribution for larger primes can be mitigated using a permutation of Faure [20]. Let \pi be a permutation of \{0, \ldots, b - 1\}. Then the radical inverse function can be generalized to

    \phi_{b,\pi}(n) = \sum_{k \ge 1} \pi(n_k) \, b^{-k}.

It still holds that any consecutive b^m values of \phi_{b,\pi}(i) stratify into b^m boxes of length 1/b^m. Faure's transformation \sigma_b of 0, \ldots, b - 1 is particularly simple. Let \sigma_2 = (0, 1). For even b > 2 take \sigma_b = (2\sigma_{b/2}, 2\sigma_{b/2} + 1), so \sigma_4 = (0, 2, 1, 3). For odd b > 2 put k = (b - 1)/2 and \sigma = \sigma_{b-1}. Then add 1 to any member of \sigma greater than or equal to k. Then \sigma_b = (\sigma(0), \ldots, \sigma(k - 1), k, \sigma(k), \ldots, \sigma(b - 2)). For example, with b = 5 we get k = 2, and after the larger elements are incremented, \sigma = (0, 3, 1, 4). Finally \sigma_5 = (0, 3, 2, 1, 4). The third plot in Figure 6.5 shows the effect of Faure's permutations on the Halton sequence.
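The recursion for Faure's permutations translates directly into code; the following Python sketch (an added illustration) reproduces the examples \sigma_4 = (0, 2, 1, 3) and \sigma_5 = (0, 3, 2, 1, 4) given above.

def faure_permutation(b):
    """Faure's permutation sigma_b of {0, ..., b-1}."""
    if b == 2:
        return [0, 1]
    if b % 2 == 0:
        # even b: 2*sigma_{b/2} followed by 2*sigma_{b/2} + 1
        s = faure_permutation(b // 2)
        return [2 * v for v in s] + [2 * v + 1 for v in s]
    # odd b: bump elements >= k of sigma_{b-1} and insert k in the middle
    k = (b - 1) // 2
    s = [v + 1 if v >= k else v for v in faure_permutation(b - 1)]
    return s[:k] + [k] + s[k:]

if __name__ == "__main__":
    print(faure_permutation(4))   # [0, 2, 1, 3]
    print(faure_permutation(5))   # [0, 3, 2, 1, 4]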
Digital nets provide more satisfactory generalizations of radical inverse sequences to d \ge 2. Recall the b-ary boxes in (6.14). The box there has volume b^{-K} where K = k_1 + \cdots + k_d. Ideally we would like n b^{-K} points in every such box. Digital nets do this, at least for small enough K.
Figure 6.6: Shown are 81 points of a (0, 4)-net in base 3. Reference lines are included to make the 3-ary boxes more visible. There are 5 different shapes of 3-ary box balanced by these points. One box of each shape is highlighted.
Integration lattices are another family of QMC constructions. The simplest (rank-1) lattice with n points in [0, 1)^d takes

    x_i = \frac{(i - 1)(g_1, \ldots, g_d) \bmod n}{n}, \qquad i = 1, \ldots, n,    (6.18)

for an integer generator vector (g_1, \ldots, g_d).
Figure 6.7: Shown are the points of two integration lattices in the unit square. The
lattice on the right has much better uniformity, showing the importance of making
a good choice of lattice generator.
Integration lattices are not as widely used in computer graphics as digital nets.
Their periodic structure is likely to produce unwanted aliasing artifacts, at least in
some applications. Compared to digital nets, integration lattices are very good at
integrating smooth functions, especially smooth periodic functions.
6.7 Randomized Quasi-Monte Carlo

In randomized quasi-Monte Carlo (RQMC), the points a_1, \ldots, a_n of a QMC construction are randomized in such a way that each x_i individually has the U[0, 1)^d distribution, while collectively the points retain their low discrepancy. The average of f over such a point set is then an unbiased estimate of I, and by repeating the randomization independently R times, obtaining estimates \hat{I}_1, \ldots, \hat{I}_R with mean \bar{I}, an estimate of the RMSE of \bar{I} is

    \left[ \frac{1}{R(R - 1)} \sum_{r=1}^{R} (\hat{I}_r - \bar{I})^2 \right]^{1/2}.
Cranley and Patterson [15] proposed a rotation modulo one,

    x_i = a_i + U \bmod 1,

where U \sim U[0, 1)^d and both the addition and the remainder modulo one are interpreted componentwise. It is easy to see that each x_i \sim U[0, 1)^d. Cranley and Patterson proposed rotations of integration lattices. Tuffin [83] considered applying such rotations to digital nets. They don't remain nets, but they still look very uniform.
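A Cranley-Patterson rotation is only a few lines of code; in this added Python sketch the input point set is an arbitrary toy list standing in for a lattice or net.

import random

def cranley_patterson(points, d):
    """Shift every point by one random vector U, componentwise modulo one."""
    u = [random.random() for _ in range(d)]
    return [tuple((a[j] + u[j]) % 1.0 for j in range(d)) for a in points]

if __name__ == "__main__":
    toy_points = [(0.0, 0.0), (0.5, 1.0 / 3.0), (0.25, 2.0 / 3.0), (0.75, 1.0 / 9.0)]
    for p in cranley_patterson(toy_points, 2):
        print(p)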
Owen [57] proposes a scrambling of the base b digits of a_i. Suppose that a_i is the ith row of the matrix A with entries A_{ij} for j = 1, \ldots, d, and either i = 1, \ldots, n for a finite sequence or i \ge 1 for an infinite one. Let A_{ij} = \sum_{k \ge 1} b^{-k} a_{ijk} where a_{ijk} \in \{0, 1, \ldots, b - 1\}. Now let x_{ijk} = \pi_{j \cdot a_{ij1} \cdots a_{ij(k-1)}}(a_{ijk}), where \pi_{j \cdot a_{ij1} \cdots a_{ij(k-1)}} is a uniform random permutation of 0, \ldots, b - 1. All the permutations required are independent, and the permutation applied to the kth digits of A_{ij} depends on j and on the preceding k - 1 digits.
Applying this scrambling to any point a \in [0, 1)^d produces a point x \sim U[0, 1)^d. If (a_i) is a (t, m, d)-net in base b or a (t, d)-sequence in base b, then with probability 1, the same holds for the scrambled version (x_i). The scrambling described above requires a great many permutations. Random linear scrambling is a partial derandomization of scrambled nets, given by Matousek [49] and also in Hong and Hickernell [30]. Random linear scrambling significantly reduces the number of permutations required from O(d b^m) to O(d m^2).
For integration over a scrambled digital sequence we have \mathrm{Var}(\hat{I}) = o(1/n) for any f with \sigma^2 < \infty. Thus for large enough n a better than MC result will be obtained. For integration over a scrambled (0, m, d)-net, Owen [58] shows that

    \mathrm{Var}(\hat{I}) \le \left( \frac{b}{b - 1} \right)^{\min(d - 1, m)} \frac{\sigma^2}{n} \le 2.72 \, \frac{\sigma^2}{n}.

That is, scrambled (0, m, d)-nets cannot have more than e = \exp(1) \approx 2.72 times the Monte Carlo variance for finite n. For nets in base b = 2 and t \ge 0, Owen [60] shows that

    \mathrm{Var}(\hat{I}) \le 2^t \, 3^d \, \frac{\sigma^2}{n}.
Compared to QMC, we expect RQMC to do no harm. After all, the resulting x_i still have a QMC structure, and so the RMSE should be O(n^{-1} (\log n)^d). Some forms of RQMC reduce the RMSE to O(n^{-3/2} (\log n)^{(d-1)/2}) for smooth enough f. This can be understood as random errors cancelling where deterministic ones do not. Surveys of RQMC appear in Owen [61] and L'Ecuyer and Lemieux [46].
6.8
In some applications d is so large that it becomes problematic to construct a meaningful QMC sequence. For example the number of random vectors needed to follow a single light path in a scene with many reflective objects can be very large
and may not have an a priori bound. As another example, if acceptance-rejection
sampling (Devroye [17]) is used to generate a random variable then a large number
of random variables may need to be generated in order to produce that variable.
Padding is a simple expedient solution to the problem. One uses a QMC or RQMC sequence in dimension s for what one expects are the s most important input variables. Then one pads out the input with d - s independent U[0, 1) random variables. This technique was used in Spanier [76] for particle transport simulations. It is also possible to pad with a d - s dimensional Latin hypercube sample as described in Owen [59], even when d is conceptually infinite.
In Latin supercube sampling, the d input variables of x_i are partitioned into some number k of groups. The jth group has dimension d_j \ge 1 and of course \sum_{j=1}^{k} d_j = d. A QMC or RQMC method is applied in each of the k groups.
Just as the van der Corput sequence cannot simply be substituted for a pseudorandom generator, care has to be taken in using multiple (R)QMC methods within
the same problem. It would not work to take k independent randomizations of
the same QMC sequence. The fix is to randomize the run order of the k groups
relative to each other, just as Latin hypercube sampling randomizes the run order
of d stratified samples.
To describe LSS, for j = 1, \ldots, k and i = 1, \ldots, n let a_{ji} \in [0, 1)^{d_j}. Suppose that a_{j1}, \ldots, a_{jn} are an (R)QMC point set. For j = 1, \ldots, k, let \pi_j(i) be independent uniform permutations of 1, \ldots, n. Then let x_{ji} = a_{j\pi_j(i)}. The LSS has rows x_i comprised of x_{1i}, \ldots, x_{ki}. Owen [59] shows that in Latin supercube sampling
the function f can be written as a sum of two parts. One, from within groups
of variables, is integrated with an (R)QMC error rate, while the other part, from
between groups of variables, is integrated at the Monte Carlo rate. Thus a good
grouping of variables is important as is a good choice of (R)QMC within groups.
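The gluing step of Latin supercube sampling is simple to express in code. In the Python sketch below (added for illustration) each group is assumed to already hold n (R)QMC points of its own dimension; only the run order of each group is randomized before the rows are concatenated.

import random

def latin_supercube(groups):
    """groups[j] is a list of n points (tuples) in dimension d_j; returns n rows of
    dimension sum(d_j), with each group's run order scrambled independently."""
    n = len(groups[0])
    scrambled = []
    for group in groups:
        order = list(range(n))
        random.shuffle(order)                       # independent permutation per group
        scrambled.append([group[order[i]] for i in range(n)])
    return [sum((tuple(g[i]) for g in scrambled), ()) for i in range(n)]

if __name__ == "__main__":
    # Two toy "QMC" groups of n = 4 points: one 2-D, one 1-D.
    g1 = [(0.125, 0.625), (0.375, 0.875), (0.625, 0.125), (0.875, 0.375)]
    g2 = [(0.125,), (0.375,), (0.625,), (0.875,)]
    for row in latin_supercube([g1, g2]):
        print(row)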
Chapter 7
Monte Carlo Path Tracing
This chapter discusses Monte Carlo path tracing. Many of these ideas appeared in James Kajiya's original paper on the rendering equation. Other good original sources for this material are L. Carter and E. Cashwell's book "Particle-Transport Simulation with the Monte Carlo Method" and J. Spanier and E. Gelbard's book "Monte Carlo Principles and Neutron Transport Problems."
7.1
[Figure 7.1: Reflection geometry, showing the incident radiance L_i(x, \omega_i), the reflected radiance L_r(x, \omega_r), and the BRDF f_r(x, \omega_i, \omega_r).]
The direction from point \vec{x}_1 to point \vec{x}_2 is

    \vec{\omega}(\vec{x}_1, \vec{x}_2) = \vec{\omega}(\vec{x}_1 \to \vec{x}_2) = \frac{\vec{x}_2 - \vec{x}_1}{|\vec{x}_2 - \vec{x}_1|}.
Figure 7.2: Two-point geometry.
Point to point functions are useful since they are intuitive and often clarify the
geometry and physics. For example, if ~x sees ~x0 , then ~x0 sees ~x. This mutual visibility is represented as the two-point visibility function, V (~x, ~x0 ), which is defined
to be 1 if a line segment connecting ~x to ~x0 does not intersect any opaque object,
and 0 otherwise.
The reflection equation involves an integral over the upper hemisphere. This
integral may be converted to an integral over other surfaces by a change of variables from solid angles to surface areas. This is easily done by relating the solid
angle subtended by the source to its surface area.
The solid angle subtended by a surface element dA(\vec{x}') as seen from \vec{x} is

    d\omega_i = \frac{\cos \theta_o'}{|\vec{x} - \vec{x}'|^2} \, dA(\vec{x}'),

and the symmetric geometry factor between the two points is

    G(\vec{x}, \vec{x}') = G(\vec{x}', \vec{x}) = \frac{\cos \theta_i \cos \theta_o'}{|\vec{x} - \vec{x}'|^2}.

In these equations we are making a distinction between the parameters used to specify points on the surface (the \vec{x}'s) and the measure that we are using when performing the integral (the differential surface area dA(\vec{x})). Sometimes we will be less rigorous and just use dA or dA' when we mean dA(\vec{x}) and dA(\vec{x}'). The geometry factor G is related to the differential form factor by the following equation: F(\vec{x}, \vec{x}') \, dA' = \frac{G(\vec{x}, \vec{x}')}{\pi} \, dA'.
Performing this change of variables in the reflection equation leads to the following integral over the surfaces in the scene:

    L_r(\vec{x}, \vec{\omega}) = \int_A f_r(\vec{x}, \vec{\omega}(\vec{x}, \vec{x}'), \vec{\omega}) \, L_o(\vec{x}', \vec{\omega}(\vec{x}', \vec{x})) \, G(\vec{x}, \vec{x}') \, V(\vec{x}, \vec{x}') \, dA(\vec{x}').
For notational simplicity, we will drop the subscript o on the outgoing radiance.
The rendering equation couples radiance at the receiving surfaces (the lefthand side) to the radiances of other surfaces (inside the integrand). This equation
applies at all points on all surfaces in the environment. It is important to recognize
the knowns and the unknowns. The emission function Le and the BRDF fr are
knowns since they depend on the scene geometry, the material characteristics, and the light sources. The unknown is the radiance L on all surfaces. To compute the radiance we must solve this equation. This equation is an example of an integral equation, since the unknown L appears inside the integral. Solving this equation is the main goal of Monte Carlo path tracing.
The rendering equation is sometimes written more compactly in operator form.
An operator is a method for mapping a function to another function. In our case,
the function is the radiance.
    L = L_e + K \circ L
Repeatedly substituting the right-hand side into itself expands the solution as

    L = \sum_{i=0}^{\infty} K^i L_e,

noting that K^0 = I, where I is the identity operator. This infinite sum is called the Neumann series and represents the formal solution (not the computed solution) of the operator equation.
Another way to interpret the Neumann series is to draw the analogy between

    \frac{1}{1 - x} = (1 - x)^{-1} = 1 + x + x^2 + \cdots

and

    (I - K)^{-1} = I + K + K^2 + \cdots.

The rendering equation

    (I - K) \circ L = L_e

then has the following solution:

    L = (I - K)^{-1} \circ L_e.

Note that (I - K)^{-1} is just an operator acting on the emission function. This operator spreads the emitted light over all the surfaces.
[Figure 7.3: A light path with vertices \vec{x}_0, \ldots, \vec{x}_4. The contribution of the path is a product of alternating geometry factors G(\vec{x}_i, \vec{x}_{i+1}) and BRDFs f_r(\vec{x}_{i-1}, \vec{x}_i, \vec{x}_{i+1}), integrated against the differential areas dA(\vec{x}_i).]
Expanding the Neumann series term by term turns each K^i L_e into an integral over i surface points:

    \sum_{i=0}^{\infty} K^i L_e(\vec{x}_0, \vec{\omega}_0)
      = \sum_{n} \int_A \cdots \int_A L_e(\vec{x}_0, \vec{\omega}_0) \, G(\vec{x}_0, \vec{x}_1) f_r(\vec{x}_0, \vec{x}_1, \vec{x}_2) \cdots
        G(\vec{x}_{n-1}, \vec{x}_n) f_r(\vec{x}_{n-1}, \vec{x}_n, \vec{x}_{n+1}) \, dA_0 \, dA_1 \cdots dA(\vec{x}_n).
This integral has a very simple geometric and physical intuition. It represents
a family of light paths. Each path is characterized by the number of bounces or
length n. There are many possible paths of a given length. Paths themselves are
specified by a set of vertices. The first vertex is a point on the light source and subsequent vertices are points on reflecting surfaces. The total contribution due to all
paths of a given length is formed by integrating over all possible light and surface
positions. This involves doing n integrals over surface areas. However, we must
weight paths properly when performing the integral. An integrand for a particular path consists of an alternating sequence of geometry and reflection terms. Finally, the
final solution to the equation is the sum of paths of all lengths; or more simply, all
possible light paths.
Note that these are very high dimensional integrals. Specifically, for a path of
length n the integral is over a 2n dimensional space. The integral also involves
very complicated integrands that include visibility terms and complex reflection
functions defined over arbitrary shapes. It turns out this complexity is what makes Monte Carlo the method of choice for solving the rendering equation.
There is one more useful theoretical and practical step, and that is to relate the
solution of the rendering equation to the process of image formation in the camera.
The equation that governs this process is the Measurement Equation
    M = \int_A \int_\Omega \int R(\vec{x}, \vec{\omega}, t) \, L(\vec{x}, \vec{\omega}, t) \, dt \, d\omega \, dA.
reflection and refraction steps. Thus, Whitted's technique ignores paths such as the following: EDSDSL or E(D|G)*L. Distributed ray tracing and path tracing include multiple bounces involving non-specular scattering, such as E(D|G)*L. However, even these methods ignore paths of the form E(D|G)S*L; that is, multiple specular bounces from the light source, as in a caustic. Obviously, any technique that ignores whole classes of paths will not correctly compute the solution to the rendering equation.
Let's now describe the basic Monte Carlo path tracing algorithm:
Step 1. Choose a ray given (x,y,u,v,t)
weight = 1
Step 2. Trace ray to find point of intersection with the nearest surface.
Step 3. Randomly decide whether to compute emitted or reflected light.
Step 3A. If emitted,
return weight * Le
Step 3B. If reflected,
weight *= reflectance
Randomly scatter the ray according to the BRDF pdf
Go to Step 2.
This algorithm will terminate as long as a ray eventually hits a light source. For
simplicity, we assume all light sources are described by emission terms attached to
surfaces. Later we will discuss how to handle light sources better.
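The algorithm above can be sketched in a few lines of Python. Everything scene-related below (trace, emitted, reflectance, sample_brdf) is a hypothetical interface, not code from these notes; note also that dividing by the probabilities of the emit/reflect decision, which the pseudocode leaves implicit, is what keeps the estimator unbiased.

import random

def trace_path(ray, scene, p_emit=0.5):
    """One sample of the forward path tracing random walk (Steps 1-3 above).
    `scene` is a hypothetical object providing:
      trace(ray)        -> nearest hit point or None
      emitted(hit)      -> emitted radiance Le at the hit
      reflectance(hit)  -> surface reflectance at the hit
      sample_brdf(hit)  -> a new ray scattered according to the BRDF pdf
    """
    weight = 1.0
    while True:
        hit = scene.trace(ray)                      # Step 2
        if hit is None:
            return 0.0                              # ray escaped the scene
        if random.random() < p_emit:                # Step 3: emitted?
            return weight * scene.emitted(hit) / p_emit
        weight *= scene.reflectance(hit) / (1.0 - p_emit)   # Step 3B: reflected
        ray = scene.sample_brdf(hit)                # continue the walk (back to Step 2)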
A variation of this algorithm is to trace rays in the opposite direction, from light
sources to the camera. We will assume that reflective surfaces never absorb light,
and that the camera is a perfect absorber.
Step 1. Choose a light source according to the light source power distribution.
Generate a ray from that light source according to its intensity distribution.
weight = 1
Step 2. Trace ray to find point of intersection.
Step 3. Randomly decide whether to absorb or reflect the ray.
7.3
To understand more about why path tracing works, let's consider a simpler problem: a discrete random walk. Instead of a physical system with continuous variables, such as position and direction, consider a discrete physical system comprised
of n states. Path tracing as described above is an example of a random walk where
we move from sample to sample, or from point to point, where the samples are
drawn from a continuous probability distribution. In a discrete random walk, we
move from state to state, and the samples are drawn from a discrete probability
distribution.
A random walk is characterized by three distributions:
1. Let p^0_i be the probability of creating (starting) a particle in state i.
2. Let p_{i,j} be the probability of moving from state i to state j.
3. Let p^*_i be the probability of terminating in state i.

Let P^n_j denote the probability that a particle is in state j after n transitions. Since each state transition is independent of the previous
transitions, this probability may be computed using a simple recurrence
    P^0_j = p^0_j
    P^1_j = p^0_j + \sum_i p_{i,j} P^0_i
    \vdots
    P^n_j = p^0_j + \sum_i p_{i,j} P^{n-1}_i.

Defining a matrix M whose entries are M_{i,j} = p_{i,j}, the above process can be viewed as the following iterative product of a matrix times a vector:

    P^0 = p^0
    P^1 = p^0 + M P^0
    \vdots
    P^n = p^0 + M P^{n-1}.

And this procedure may be recognized as the iterative solution of the following matrix equation

    (I - M) P = p^0,

since then

    P = (I - M)^{-1} p^0 = p^0 + M (p^0 + M (p^0 + \cdots)) = \sum_{i=0}^{\infty} M^i p^0.
This process will always converge assuming the matrices are probability distributions. The basic reason for this is that probabilities are always less than one, and so a product of probabilities quickly tends towards zero. Thus, the random walk provides a means for solving linear systems of equations, assuming that the matrices are probability transition matrices. Note the similarity of this discrete iteration of matrices to the iterative application of the continuous operator when we solve the rendering equation using the Neumann series.
This method for solving matrix equations using discrete random walks may be
directly applied to the radiosity problem. In the radiosity formulation,
    B_i = E_i + \rho_i \sum_j F_{i,j} B_j
A particle traced through this random walk visits a random sequence of states, and we can define an estimator W that assigns a score W(\alpha) to each path \alpha. The expected value of the estimator is a sum over all possible paths,

    E[W] = \sum_{\alpha} p(\alpha) W(\alpha)
         = \sum_{k=1}^{\infty} \sum_{\alpha_k} p(\alpha_k) W(\alpha_k)
         = \sum_{k=1}^{\infty} \sum_{i_1} \cdots \sum_{i_k} p(\alpha_k) W(\alpha_k).

In the last line we group all paths of the same length together. The sums on each index i go from 1 to n, the number of states in the system. Thus, there are n^k paths of length k, and of course paths can have infinite length. There are a lot of paths to consider!
What is the probability p(\alpha_k) of a path \alpha_k ending in state i_k? Assuming the discrete random walk process described above, this is the probability that the particle is created in state i_1, times the probability of making the necessary transitions to arrive at state i_k, times the probability of being terminated in state i_k:

    p(\alpha_k) = p^0_{i_1} \, p_{i_1,i_2} \cdots p_{i_{k-1},i_k} \, p^*_{i_k}.

With these careful definitions, the expected value may be computed:

    E[W_j] = \sum_{k=1}^{\infty} \sum_{\alpha_k} p(\alpha_k) W_j(\alpha_k)
           = \sum_{k=1}^{\infty} \sum_{i_1} \cdots \sum_{i_k} p(\alpha_k) W_j(\alpha_k).

Recall that our estimator counts the number of particles that terminate in state j. Mathematically, we can describe this counting process with a delta function, W_j(\alpha_k) = \delta_{i_k,j} / p^*_j. This delta function only scores particles terminating in i_k = j. The expected value is then

    E[W_j] = \sum_{k=1}^{\infty} \sum_{i_1} \cdots \sum_{i_{k-1}} \sum_{i_k} p^0_{i_1} \, p_{i_1,i_2} \cdots p_{i_{k-1},i_k} \, p^*_{i_k} \, \frac{\delta_{i_k,j}}{p^*_j}
           = \sum_{k=1}^{\infty} \sum_{i_1} \cdots \sum_{i_{k-1}} p^0_{i_1} \, p_{i_1,i_2} \cdots p_{i_{k-1},j},

which is exactly the series solution P_j of the linear system above.
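The estimator just described is easy to simulate. The sketch below (an added Python illustration with made-up creation, transition and termination probabilities) estimates all components of P at once by tracing many particles and scoring 1/p*_j whenever a particle terminates in state j.

import random

# Toy 3-state system: rows of p are the transition probabilities p[i][j];
# the termination probability in state i is 1 - sum(p[i]).
p0     = [0.5, 0.3, 0.2]
p      = [[0.1, 0.4, 0.2],
          [0.3, 0.1, 0.3],
          [0.2, 0.2, 0.1]]
p_term = [1.0 - sum(row) for row in p]

def pick(weights):
    """Sample an index proportionally to the given (normalized) weights."""
    r, acc = random.random(), 0.0
    for idx, w in enumerate(weights):
        acc += w
        if r < acc:
            return idx
    return len(weights) - 1

def estimate_P(n_particles=200000):
    terminations = [0, 0, 0]
    for _ in range(n_particles):
        i = pick(p0)                                            # creation
        while random.random() >= p_term[i]:                     # survive this visit?
            i = pick([w / (1.0 - p_term[i]) for w in p[i]])     # transition
        terminations[i] += 1                                    # absorbed in state i
    # Scoring 1/p*_j per termination gives an unbiased estimate of P_j.
    return [terminations[j] / (p_term[j] * n_particles) for j in range(3)]

if __name__ == "__main__":
    print(estimate_P())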
7.4
Recall that the pixel response is equal to the sum over paths of length n of

    M_n = \int_A \cdots \int_A S(\vec{x}_0, \vec{x}_1) G(\vec{x}_0, \vec{x}_1) f_r(\vec{x}_0, \vec{x}_1, \vec{x}_2) G(\vec{x}_1, \vec{x}_2) \cdots
          f_r(\vec{x}_{n-2}, \vec{x}_{n-1}, \vec{x}_n) G(\vec{x}_{n-1}, \vec{x}_n) R(\vec{x}_{n-1}, \vec{x}_n) \, dA_0 \, dA_1 \cdots dA(\vec{x}_n),

where we have switched notation and written the source term as S(\vec{x}, \vec{x}') = L_e(\vec{x}, \vec{x}').
As noted above, this equation is symmetric under the interchange of lights and sensors. Switching L_e with R, we note that

    M_n = \int_A \cdots \int_A R(\vec{x}_0, \vec{x}_1) G(\vec{x}_0, \vec{x}_1) f_r(\vec{x}_0, \vec{x}_1, \vec{x}_2) G(\vec{x}_1, \vec{x}_2) \cdots
          f_r(\vec{x}_{n-2}, \vec{x}_{n-1}, \vec{x}_n) G(\vec{x}_{n-1}, \vec{x}_n) S(\vec{x}_n, \vec{x}_{n-1}) \, dA_0 \, dA_1 \cdots dA(\vec{x}_n)
        = \int_A \cdots \int_A S(\vec{x}_n, \vec{x}_{n-1}) G(\vec{x}_n, \vec{x}_{n-1}) f_r(\vec{x}_n, \vec{x}_{n-1}, \vec{x}_{n-2}) G(\vec{x}_{n-1}, \vec{x}_{n-2}) \cdots
          f_r(\vec{x}_2, \vec{x}_1, \vec{x}_0) G(\vec{x}_1, \vec{x}_0) R(\vec{x}_1, \vec{x}_0) \, dA_0 \, dA_1 \cdots dA(\vec{x}_n).

In the second step, we noted from the symmetry of the geometry that

    G(\vec{x}_i, \vec{x}_j) = G(\vec{x}_j, \vec{x}_i)

and, because of the reciprocity principle, the BRDF is also symmetric:

    f_r(\vec{x}_i, \vec{x}_j, \vec{x}_k) = f_r(\vec{x}_k, \vec{x}_j, \vec{x}_i).

These symmetries imply that we may ray trace from either the light or the eye; both methods will lead to the same integral.
Suppose now we break the path at some point k. The amount of light that makes it to \vec{x}_k is

    L_S(\vec{x}_k, \vec{x}_{k+1}) = \int_A \cdots \int_A S(\vec{x}_0, \vec{x}_1) G(\vec{x}_0, \vec{x}_1) f_r(\vec{x}_0, \vec{x}_1, \vec{x}_2) G(\vec{x}_1, \vec{x}_2) \cdots
                                     G(\vec{x}_{k-1}, \vec{x}_k) f_r(\vec{x}_{k-1}, \vec{x}_k, \vec{x}_{k+1}) \, dA_0 \cdots dA(\vec{x}_{k-1}).

In a similar way, treating the sensor as a virtual light source, we can compute the amount of light coming from the sensor that makes it to \vec{x}_k:

    L_R(\vec{x}_k, \vec{x}_{k+1}) = \int_A \cdots \int_A f_r(\vec{x}_k, \vec{x}_{k+1}, \vec{x}_{k+2}) G(\vec{x}_{k+1}, \vec{x}_{k+2}) \cdots
                                     f_r(\vec{x}_{n-2}, \vec{x}_{n-1}, \vec{x}_n) G(\vec{x}_{n-1}, \vec{x}_n) R(\vec{x}_{n-1}, \vec{x}_n) \, dA_{k+2} \cdots dA(\vec{x}_n).

The measured response is then

    M = \int_A \int_A L_S(\vec{x}_k, \vec{x}_{k+1}) \, G(\vec{x}_k, \vec{x}_{k+1}) \, L_R(\vec{x}_k, \vec{x}_{k+1}) \, dA_k \, dA_{k+1}.
Note the use of the notation LS and LR to indicate radiance cast from the source
vs. the receiver.
Figure 7.5: A path with both the forward and backward (adjoint) solution of the
transport equation. The forward solution is generated from the source term S and
the backward solution is generated from the received term R. For physical situations where the transport equation is invariant under path reversal, the forward and
backward equations are the same.
We make two observations about this equation. First, this equation can be considered the inner product of two radiance functions. If we consider radiance to be a function on rays r = (\vec{x}, \vec{\omega}), then if we have functions f(r) and g(r), the inner product of f and g is

    \langle f, g \rangle = \int f(r) \, g(r) \, d\mu(r),

where d\mu(r) is the appropriate measure on rays. The natural way to measure the rays between two surface elements A and A' is d\mu(r) = G(x, x') \, dA \, dA'. Equivalently, considering r to be parameterized by position \vec{x} and direction \vec{\omega}, d\mu(r) = \cos\theta \, d\vec{\omega} \, dA(\vec{x}).
Second, this integral naturally leads to a method for importance sampling a
path. Suppose we are tracing light and arrive at surface k. To compute the sensor
response, we need to integrate L against R. In this sense, R may be considered
an importance function for sampling the next directions, since we want a sampling
technique that is proportional to R to achieve low variance. But R is the solution
of the reversed transport equation that would be computed if we were to trace rays
from the sensor. R tells us how much light from the sensor would make it to this
point. Thus, the backward solution provides an importance function for the forward
solution, and vice versa. This is the key idea behind bidirectional ray tracing.
Manipulating adjoint equations is easy using operator notation. In terms of its kernel K(x, y), the operator K and its adjoint K^+ may be written as

    K f = \int K(x, y) f(y) \, dy

and

    K^+ f = \int K(x, y) f(x) \, dx.
One integral is over the first variable, the other is over the second variable. Of
course, if K(x, y) = K(y, x) these two integrals are the same, in which case
K + = K and the operator is said to be self-adjoint.
This notation provides a succinct way of proving that the forward estimate is
equal to the backward estimate of the rendering equation. Recall
    K L_S = S.

We can also write a symmetric equation in the other direction:

    K L_R = R.

Then,

    \langle R, L_S \rangle = \langle K L_R, L_S \rangle
                           = \langle L_R, K^+ L_S \rangle
                           = \langle L_R, K L_S \rangle
                           = \langle L_R, S \rangle.
This result holds even if the operator is not self-adjoint. We will leave the demonstration of that fact as an exercise.
This is a beautiful result, but what does it mean in practice? Adjoint equations
have lots of applications in all areas of mathematical physics. What they allow you
to do is create output sensitive algorithms. Normally, when you solve an equation
you solve for the answer everywhere. Think of radiosity; when using the finite
element method you solve for the radiosity on all the surfaces. The same applies to
light ray tracing or the classic discrete random walk; you solve for the probability
of a particle landing in any state. However, in many problems you only want to
find the solution at a few points. In the case of image synthesis, we only need to
compute the radiance that we see, or that falls on the film. Computing the radiance
at other locations only needs to be done if its effects are observable.
We can model the selection of a subset of the solution as the inner product
of the response function times the radiance. If we only want to observe a small
subset of the solution, we make the response function zero in the locations we
dont care about. Now consider the case when all the surfaces act as sources and
only the film plane contributed a non-zero response. Running a particle tracing
algorithm forward from the sources would be very inefficient, since only rarely
is a particle terminated on the film plane. However, running the algorithm in the
reverse direction is very efficient, since all particles will terminate on sources. Thus
each particle provides useful information. Reversing the problem has led to a much
more efficient algorithm.
The ability to solve for only a subset of the solution is a big advantage of
the Monte Carlo Technique. In fact, in the early days of the development of the
algorithm, Monte Carlo Techniques were used to solve linear systems of equations.
It turns out they are very efficient if you want to solve for only one variable. But
be wary: more conventional techniques like Gaussian elimination are much more
effective if you want to solve for the complete solution.
Chapter 8
This chapter gives various formulations of the rendering equation, and outlines
several strategies for computing radiance values in a scene.
8.1
The global illumination problem is in essence a transport problem. Energy is emitted by light sources and transported through the scene by means of reflections (and
refractions) at surfaces. One is interested in the energy equilibrium of the illumination in the environment.
The transport equation that describes global illumination transport is called
the rendering equation. It is the integral equation formulation of the definition
of the BRDF, and adds the self-emittance of surface points at light sources as an
initialization function. The self-emitted energy of light sources is necessary to
provide the environment with some starting energy. The radiance leaving some
point x, in direction \Theta, can be expressed as an integral over all hemispherical directions incident on the point x (figure 8.1):

    L(x \to \Theta) = L_e(x \to \Theta) + \int_{\Omega_x} f_r(x, \Psi \to \Theta) \, L(x \leftarrow \Psi) \cos(\Psi, N_x) \, d\omega_\Psi.
Figure 8.1: Rendering equation
One can transform the rendering equation from an integral over the hemisphere
to an integral over all surfaces in the scene. Also, radiance remains unchanged
along straight paths, so exitant radiance can be transformed to incident radiance and
vice-versa, thus obtaining new versions of the rendering equation. By combining
both options with a hemispheric or surface integration, four different formulations
of the rendering equation are obtained. All these formulations are mathematically
equivalent.
Exitant radiance, integration over the hemisphere:

    L(x \to \Theta) = L_e(x \to \Theta) + \int_{\Omega_x} f_r(x, \Psi \to \Theta) \, L(y \to -\Psi) \cos(\Psi, N_x) \, d\omega_\Psi

with y = r(x, \Psi). When designing an algorithm based on this formulation, integration over the hemisphere is needed, and as part of the function evaluation for each point in the integration domain, a ray has to be cast and the nearest intersection point located.

Exitant radiance, integration over surfaces:

    L(x \to \Theta) = L_e(x \to \Theta) + \int_A f_r(x, \Psi \to \Theta) \, L(y \to \vec{yx}) \, V(x, y) \, G(x, y) \, dA_y

with

    G(x, y) = \frac{\cos(N_x, \Psi) \cos(N_y, -\Psi)}{r_{xy}^2}.

Incident radiance, integration over the hemisphere:

    L(x \leftarrow \Theta) = L_e(x \leftarrow \Theta) + \int_{\Omega_y} f_r(y, \Psi \to -\Theta) \, L(y \leftarrow \Psi) \cos(\Psi, N_y) \, d\omega_\Psi

with y = r(x, \Theta).

Incident radiance, integration over surfaces:

    L(x \leftarrow \Theta) = L_e(x \leftarrow \Theta) + \int_A f_r(y, \vec{yz} \to -\Theta) \, L(y \leftarrow \vec{yz}) \, V(y, z) \, G(y, z) \, dA_z

with y = r(x, \Theta).
8.2
Importance function
In order to compute the average radiance value over the area of a pixel, one needs
to know the radiant flux over that pixel (and associated solid angle incident w.r.t.
the aperture of the camera). Radiant flux is expressed by integrating the radiance
distribution over all possible surface points and directions. Let S = A_p \times \Omega_p denote all surface points A_p and directions \Omega_p visible through the pixel. The flux \Phi(S) is written as:

    \Phi(S) = \int_{A_p} \int_{\Omega_p} L(x \to \Theta) \cos(N_x, \Theta) \, d\omega_\Theta \, dA_x.
When designing algorithms, it is often useful to express the flux as an integral over all possible points and directions in the scene. This can be achieved by introducing the initial importance function W_e(x \to \Theta):

    \Phi(S) = \int_A \int_{\Omega_x} L(x \to \Theta) \, W_e(x \to \Theta) \cos(N_x, \Theta) \, d\omega_\Theta \, dA_x,

with

    W_e(x \to \Theta) = \begin{cases} 1 & \text{if } (x, \Theta) \in S \\ 0 & \text{if } (x, \Theta) \notin S. \end{cases}
The importance function W(x \to \Theta) itself satisfies a transport equation similar to the one for radiance:

    W(x \to \Theta) = W_e(x \to \Theta) + \int_{\Omega_x} f_r(x, \Psi \to \Theta) \, W(x \leftarrow \Psi) \cos(N_x, \Psi) \, d\omega_\Psi,

with W(x \leftarrow \Psi) = W(z \to -\Psi) and z = r(x, \Psi).
An expression for the flux through every pixel, based on the importance function, can now be written. Only the importance of the light sources needs to be considered when computing the flux:

    \Phi(S) = \int_A \int_{\Omega_x} L_e(x \to \Theta) \, W(x \to \Theta) \cos(N_x, \Theta) \, d\omega_\Theta \, dA_x,

and also:

    \Phi(S) = \int_A \int_{\Omega_x} L(x \to \Theta) \, W_e(x \to \Theta) \cos(N_x, \Theta) \, d\omega_\Theta \, dA_x.
There are two approaches to solve the global illumination problem: The first
approach starts from the pixel, and the radiance values are computed by solving
one of the transport equations describing radiance. A second approach computes
the flux starting from the light sources, and computes for each light source the
corresponding importance value. If one looks at various algorithms in some more
detail:
113
Stochastic ray tracing propagates importance, the surface area visible through
each pixel being the source of importance. In a typical implementation, the
importance is never explicitly computed, but is handled implicitly by tracing
rays through the scene and picking up illumination values from the light
sources.
Light tracing is the dual algorithm of ray tracing. It propagates radiance
from the light sources, and computes the flux values at the surfaces visible
through each pixel.
Bidirectional ray tracing propagates both transport quantities at the same
time, and in an advanced form, computes a weighted average of all possible
inner products at all possible interactions.
8.3
Path formulation
In the path formulation, the flux is written as an integral over all possible paths \bar{x} of all lengths,

    \Phi = \int_{\Omega} f(\bar{x}) \, d\mu(\bar{x}),

where \Omega is the space of paths, \mu is a measure on that space, and f(\bar{x}) gives the contribution of a path. Algorithms can then be designed that sample paths directly, rather than building them up recursively from point and direction samples.
8.4
The integral is evaluated using MC integration, by generating N random directions \Psi_i over the hemisphere \Omega_x, according to some pdf p(\Psi). The estimator for L_r(x \to \Theta) is given by:

    \langle L_r(x \to \Theta) \rangle = \frac{1}{N} \sum_{i=1}^{N} \frac{L(x \leftarrow \Psi_i) \, f_r(x, \Psi_i \to \Theta) \cos(\Psi_i, N_x)}{p(\Psi_i)}.
8.5
Russian Roulette
The recursive path generator described above needs a stopping condition to prevent
the paths being of infinite length. We want to cut off the generation of paths, but at the same time, we have to be very careful about not introducing any bias into the image generation process. Russian Roulette addresses the problem of keeping the lengths of the paths manageable, but at the same time leaves room for exploring all possible paths of any length. Thus, an unbiased image can still be produced.
The idea of Russian Roulette can best be explained by a simple example: suppose one wants to compute a value V. The computation of V might be computationally very expensive, so we introduce a random variable r, which is uniformly distributed over the interval [0, 1]. If r is larger than some threshold value \alpha \in [0, 1], we proceed with computing V. However, if r \le \alpha, we do not compute V, and assume V = 0. Thus, we have a random experiment with an expected value of (1 - \alpha) V. By dividing this expected value by (1 - \alpha), an unbiased estimator for V is maintained.
If V requires recursive evaluations, one can use this mechanism to stop the recursion. \alpha is called the absorption probability. If \alpha is small, the recursion will continue many times, and the final computed value will be more accurate. If \alpha is large, the recursion will stop sooner, and the estimator will have a higher variance. In the context of our path tracing algorithm, this means that either accurate paths of a long length are generated, or very short paths which provide a less accurate estimate.
In principle any value for \alpha can be picked, thus controlling the recursive depth and execution time of the algorithm. 1 - \alpha is often set to be equal to the hemispherical reflectance of the material of the surface. Thus, dark surfaces will absorb the path more easily, while lighter surfaces have a higher chance of reflecting the path.
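The mechanism is easy to state in code. In this added Python sketch, expensive_value stands in for any costly (possibly recursive) evaluation; skipping it with probability alpha and dividing the surviving evaluations by 1 - alpha leaves the expectation unchanged.

import random

def roulette(expensive_value, alpha):
    """Russian Roulette: return 0 with probability alpha (absorption), otherwise
    evaluate and divide by (1 - alpha) so the estimator stays unbiased."""
    if random.random() < alpha:
        return 0.0
    return expensive_value() / (1.0 - alpha)

if __name__ == "__main__":
    V = lambda: 2.5                       # stand-in for an expensive computation
    n = 100000
    print(sum(roulette(V, 0.6) for _ in range(n)) / n)   # averages to about 2.5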
8.6
Indirect Illumination
In most path tracing algorithms, direct illumination is explicitly computed separately from all other forms of illumination (see previous chapter on direct illumination). This section outlines some strategies for computing the indirect illumination
in a scene. Computing the indirect illumination is usually a harder problem, since
one does not know where most important contributions are located. Indirect illumination consists of the light reaching a target point x after at least one reflection
at an intermediate surface between the light sources and x.
8.6.1
Hemisphere sampling
The rendering equation can be split in a direct and indirect illumination term. The
indirect illumination (i.e. not including any direct contributions from light sources
to the point x) contribution to L(x \to \Theta) is written as:

    L_{\mathrm{indirect}}(x \to \Theta) = \int_{\Omega_x} f_r(x, \Psi \to \Theta) \, L_r(r(x, \Psi) \to -\Psi) \cos(\Psi, N_x) \, d\omega_\Psi.
The integrand contains the reflected terms L_r from other points in the scene, which are themselves composed of a direct and indirect illumination part. In a closed environment, L_r(r(x, \Psi) \to -\Psi) usually has a non-zero value for all (x, \Psi) pairs. As a consequence, the entire hemisphere around x needs to be considered as the integration domain.
The most general MC procedure to evaluate indirect illumination is to use any hemispherical pdf p(\Psi) and generate N random directions \Psi_i. This produces the following estimator:

    \langle L_{\mathrm{indirect}}(x \to \Theta) \rangle = \frac{1}{N} \sum_{i=1}^{N} \frac{L_r(r(x, \Psi_i) \to -\Psi_i) \, f_r(x, \Psi_i \to \Theta) \cos(\Psi_i, N_x)}{p(\Psi_i)}.
In order to evaluate this estimator, for each generated direction \Psi_i, the BRDF and the cosine term are to be evaluated, a ray from x in the direction of \Psi_i needs to be traced, and the reflected radiance L_r(r(x, \Psi_i) \to -\Psi_i) at the closest intersection point r(x, \Psi_i) has to be evaluated. This last evaluation shows the recursive nature of indirect illumination, since this reflected radiance at r(x, \Psi_i) can be split again in a direct and indirect contribution.
The simplest choice for p(\Psi) is p(\Psi) = 1/2\pi, such that directions are sampled proportional to solid angle. Noise in the resulting picture will be caused by variations in the BRDF and cosine evaluations, and variations in the reflected radiance L_r at the distant points.
The recursive evaluation can again be stopped using Russian Roulette, in the same way as was done for simple stochastic ray tracing. Generally, the local hemispherical reflectance is used as the survival probability (so the absorption probability is one minus the reflectance). This choice can be explained intuitively: one only wants to spend work (i.e. tracing rays and evaluating L_{\mathrm{indirect}}(x)) proportional to the amount of energy present in different parts of the scene.
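As a sketch of how the pieces fit together (added here; the callbacks to the rest of a renderer are hypothetical), the Python estimator below uses cosine sampling, for which the cos/p ratio reduces to pi.

import math
import random

def sample_cosine_hemisphere():
    """Direction with pdf p(Psi) = cos(theta)/pi, in a local frame with the
    surface normal along the z axis."""
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def indirect_radiance(x, to_world, brdf, reflected_radiance, n=16):
    """One level of the indirect illumination estimator with cosine sampling.
    to_world rotates a local direction into world space; brdf(x, psi) and
    reflected_radiance(x, psi) are hypothetical hooks into the renderer
    (the latter traces the ray and shades the nearest hit)."""
    total = 0.0
    for _ in range(n):
        psi = to_world(sample_cosine_hemisphere())
        # cos(theta) / p(psi) = pi, so only the factor pi remains.
        total += math.pi * brdf(x, psi) * reflected_radiance(x, psi)
    return total / n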
8.6.2
Importance sampling
Uniform sampling over the hemisphere does not use any knowledge about the integrand in the indirect illumination integral. However, this is necessary to reduce
noise in the final image, and thus, some form of importance sampling is needed.
Hemispherical pdfs proportional (or approximately proportional) to any of the following factors can be constructed:
Cosine sampling
Sampling directions proportional to the cosine lobe around the normal N_x prevents directions from being sampled near the horizon of the hemisphere, where \cos(\Psi, N_x) yields a very low value and thus a possibly insignificant contribution to the computed radiance value.
BRDF sampling
BRDF sampling is a good noise-reducing technique when a glossy or highly
specular BRDF is present. It diminishes the probability that directions are sampled where the BRDF has a low or zero value. Only for a few selected BRDF
models, however, is it possible to sample exactly proportional to the BRDF. Even
better would be trying to sample proportional to the product of the BRDF and the
cosine term. Analytically, this is even more difficult to do, except in a few rare
cases where the BRDF model has been chosen carefully.
Incident radiance field sampling
A last technique that can be used to reduce variance when computing the indirect illumination is to sample a direction according to the incident radiance
values Lr (x ). Since this incident radiance is generally unknown, an adaptive
technique needs to be used, where an approximation of Lr (x ) is constructed
during the execution of the rendering algorithm.
8.6.3
Overview
It is now possible to build a full global illumination renderer using stochastic path
tracing. The efficiency, accuracy and overall performance of the complete algorithm will be determined by the choice of all of the following parameters. As is
usual in MC evaluations, the more samples or rays are generated, the less noisy the
final image will be.
Number of viewing rays per pixel: The number of viewing rays through each pixel
is responsible for effects such as aliasing at visible boundaries of objects or
shadows.
118
Direct Illumination:
The total number of shadow rays generated at each surface point x;
The selection of a single light source for each shadow ray;
The distribution of the shadow ray over the area of the selected light
source.
Indirect Illumination (hemisphere sampling):
Number of indirect illumination rays;
Exact distribution of these rays over the hemisphere (uniform, cosine,
...);
Absorption probabilities for Russian Roulette.
The better one makes use of importance sampling, the better the final image
and the less noise there will be. An interesting question is, given a maximum
amount of rays one can use per pixel, how should these rays best be distributed to
reach the highest possible accuracy for the full global illumination solution? This
is still an open problem. There are generally accepted default choices, but there
are no hard and fast rules. It is generally accepted that branching out equally at
all levels of the tree is less efficient. For indirect illumination, a branching factor
of 1 is often used after the first level. Many implementations even limit the indirect
rays to one per surface point, and compensate by generating more viewing rays.
Chapter 9
Metropolis Sampling
By Matt Pharr
A new approach to solving the light transport problem was recently developed by
Veach and Guibas, who applied the Metropolis sampling algorithm [91, 87] (first
introduced in Section 2.3.3 of these notes).1 The Metropolis algorithm generates a series of samples from a non-negative function f that are distributed proportionally to f's value [52]. Remarkably, it does this without requiring anything more than the ability to evaluate f; it's not necessary to be able to integrate f, normalize it, and
invert the resulting pdf. Metropolis sampling is thus applicable to a wider variety
of sampling problems than many other techniques. Veach and Guibas recognized
that Metropolis could be applied to the image synthesis problem after it was appropriately reformulated; they used it to develop a general and unbiased Monte Carlo
rendering algorithm which they named Metropolis Light Transport (MLT).
MLT is notable for its robustness: while it is about as efficient as other unbiased techniques (e.g. bidirectional ray tracing) for relatively straightforward lighting problems, it distinguishes itself in more difficult settings where most of the
light transport happens along a small fraction of all of the possible paths through
the scene. Such settings were difficult for previous algorithms unless they had specialized advance knowledge of the light transport paths in the scene (e.g. "a lot of light is coming through that doorway"); they thus suffered from noisy images due
1
We will refer to the Monte Carlo sampling algorithm as the Metropolis algorithm here. Other
commonly-used shorthands for it include M(RT)2 , for the initials of the authors of the original paper,
and Metropolis-Hastings, which gives a nod to Hastings, who generalized the technique [22]. It is
also commonly known as Markov Chain Monte Carlo.
121
to high variance, because most of the paths they generated would have a low contribution, but
when they randomly sampled an important path, there would be a large spike in the
contribution to the image. In contrast, the Metropolis method leads to algorithms
that naturally and automatically adapt to the subtleties of the particular transport
problem being solved.
The basic idea behind MLT is that a sequence of light-carrying paths through
the scene is computed, with each path generated by mutating the previous path
in some manner. These mutations are done in a way that ensures that the overall
distribution of sampled paths in the scene is proportional to the contribution these
paths make to the image being generated. This places relatively few restrictions on
the types of mutations that can be applied; in general, it is possible to invent unusual sampling techniques that couldn't be applied to other MC algorithms without
introducing bias.
MLT has some important advantages compared to previous unbiased approaches
to image synthesis:
Path re-use: because paths are often constructed using some of the segments
of the previous one, the incremental cost (i.e. number of rays that need to be
traced) for generating a new path is much less than the cost of generating a
path from scratch.
Local exploration: when paths that make large contributions to the final image are found, it's easy to sample other paths that are similar to that one by
making small perturbations to the path.
The first advantage increases overall efficiency by a relatively fixed amount
(and in fact, path re-use can be applied to some other light transport algorithms.)
The second advantage is the more crucial one: once an important transport path
has been found, paths that are similar to it (which are likely to be important) can
be sampled. When a function has a small value over most of its domain and a large
contribution in only a small subset of it, local exploration amortizes the expense
(in samples) of the search for the important region by letting us stay in that area for
a while.
In this chapter, we will introduce the Metropolis sampling algorithm and the
key ideas that it is built on. We will then show how it can be used for some low-dimensional sampling problems; this setting allows us to introduce some of the
important issues related to the full Metropolis Light Transport algorithm without
getting into all of the tricky details. We first show its use in one dimension. We
then demonstrate how it can be used to compute images of motion-blurred objects;
this pushes up the domain of the problem to three dimensions and also provides
a simpler setting in which to lay more groundwork. Finally, we will build on
this basis to make connections with and describe the complete MLT algorithm.
We will not attempt to describe every detail of MLT here; however, the full presentation in the MLT paper [91] and the MLT chapter in Veach's thesis [87] should be much more approachable with this groundwork.
9.1
Overview
9.1.1
Detailed Balance
Notation used in this chapter: f(x) is the function being sampled; x is a sample value in the state space Ω; x0 is the starting sample; xi is the i-th sample in the chain; p(x) is a probability density function; I(f) is the integral of f over Ω; fpdf is the normalized pdf proportional to f; T(x → x′) is the density of proposing a mutation from x to x′; a(x → x′) is the probability of accepting that mutation; hj(u, v) is the image filter function for pixel j; and Ij is the value of pixel j.
freedom left over!) We also define an acceptance probability a(x → x′) that gives the probability of accepting a proposed mutation from x to x′.
The key to the Metropolis algorithm is the definition of a(x → x′) such that
the distribution of samples is proportional to f (x). If the random walk is already
in equilibrium, the transition density between any two states must be equal:2

f(x) T(x → x′) a(x → x′) = f(x′) T(x′ → x) a(x′ → x).    (9.1)
This property is called detailed balance. Since f and T are set, Equation 9.1 tells
us how a must be defined. In particular, a definition of a that maximizes the rate at
which equilibrium is reached is
a(x → x′) = min( 1, [f(x′) T(x′ → x)] / [f(x) T(x → x′)] )    (9.2)
One thing to notice from Equation 9.2 is that if the transition probability density
is the same in both directions, the acceptance probability simplifies to
a(x → x′) = min( 1, f(x′) / f(x) )    (9.3)
2
See Kalos and Whitlock [38] or Veach's thesis [87] for a rigorous derivation.
x = x0
for i = 1 to n
    x' = mutate(x)
    a = accept(x, x')
    if (random() < a)
        x = x'
    record(x)
Figure 9.2: Pseudocode for the basic Metropolis sampling algorithm. We generate
n samples by mutating the previous sample and computing acceptance probabilities
as in Equation 9.2. Each sample xi is then recorded in a data structure.
For some of the basic mutations that we'll use, this condition on T will be met,
which simplifies the implementation.
Put together, this gives us the basic Metropolis sampling algorithm shown in
pseudo-code in Figure 9.2. We can apply the algorithm to estimating integrals such
as ∫ f(x) g(x) dΩ. The standard Monte Carlo estimator, Equation 2.6, says that

∫ f(x) g(x) dΩ ≈ (1/N) Σ_{i=1}^{N} f(xi) g(xi) / p(xi),    (9.4)

where the xi are sampled from a density function p(x). Thus, if we apply Metropolis sampling and generate a set of samples x1, . . . , xN from a density function fpdf(x) proportional to f(x), we have

∫ f(x) g(x) dΩ ≈ [ (1/N) Σ_{i=1}^{N} g(xi) ] · I(f),    (9.5)

since f(xi)/fpdf(xi) = I(f) for every sample.
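As a concrete illustration, here is a minimal Python sketch (not from the notes) of the loop in Figure 9.2 combined with the estimator of Equation 9.5. The function f1 is assumed here to be (x - 0.5)^2, which is consistent with the acceptance ratios quoted later in this section; start-up bias is ignored for simplicity, and I(f) is estimated by a crude uniform pre-pass.

import random

def metropolis_estimate(f, g, mutate, x0, n):
    """Estimate the integral of f*g over [0,1] using Metropolis samples
    distributed proportionally to f (Equation 9.5). Assumes the mutation
    is symmetric, so the acceptance probability is Equation 9.3."""
    x = x0
    g_sum = 0.0
    for _ in range(n):
        x_new = mutate(x)
        a = min(1.0, f(x_new) / f(x)) if f(x) > 0 else 1.0
        if random.random() < a:
            x = x_new
        g_sum += g(x)
    # Crude estimate of I(f) by plain Monte Carlo over [0,1].
    i_f = sum(f(random.random()) for _ in range(10000)) / 10000.0
    return (g_sum / n) * i_f

f1 = lambda x: (x - 0.5) ** 2          # assumed example function
print(metropolis_estimate(f1, lambda x: x, lambda x: random.random(), 0.3, 100000))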
9.1.2
Expected Values
Because the Metropolis algorithm naturally avoids parts of Ω where f(x)'s value is relatively low, few samples will be accumulated there. In order to get some information about f(x)'s behavior in such regions, the expected values technique can be used to enhance the basic Metropolis algorithm.
At each mutation step, we record a sample at both the current sample x and the proposed sample x′, regardless of which one is selected by the acceptance criterion.
x = x0
for i = 1 to n
    x' = mutate(x)
    a = accept(x, x')
    record(x, (1-a) * weight)
    record(x', a * weight)
    if (random() < a)
        x = x'
Figure 9.3: The basic Metropolis algorithm can be improved using expected values.
We still decide which state to transition into as before, but we record a sample at each of x and x′, weighted by the acceptance probability. This gives smoother results, particularly in areas where f's value is small, where otherwise few samples would be recorded.
Each of these recorded samples has a weight associated with it, where the weights are the probabilities (1 - a) for x and a for x′, where a is the acceptance probability. Comparing the pseudocode in Figures 9.2 and 9.3, we can see that in the limit, the same weight distribution will be accumulated for x and x′. Expected values gives us more information about the areas where f(x) is low more quickly, however.
Expected values doesn't change the way we decide which state, x or x′, to use at the next step; that part of the computation remains the same.
Figure 9.4: Graph of the function f1(x) over [0, 1].
9.2.1 Mutation Strategies
We will first describe two basic mutation strategies, each of which depends on a uniform random number ξ between zero and one. Our first mutation, mutate1, discards the current sample x and uniformly samples a new one, x′, from the entire state space [0, 1]. Mutations like this one that sample from scratch are important to make sure that we don't get stuck in one part of state space and never sample the
rest of it (an example of the problems that ensue when this happens will be shown
below.) The transition function for this mutation is straightforward. For mutate1 ,
since we are uniformly sampling over [0, 1], the probability density is uniform over
the entire domain; in this case, the density is just one everywhere. We have
mutate1(x) → ξ
T1(x → x′) = 1

The second mutation, mutate2, adds a random offset between ±.05 to the current sample x in an effort to sample repeatedly in the parts of f that make a high contribution to the overall distribution. The transition probability density is zero if x and x′ are far enough apart that mutate2 will never mutate from one to the other; otherwise the density is constant. Normalizing the density so that it integrates to one over its domain gives the value 1/0.1 = 10.
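In code, the two mutations and their transition densities might look like the sketch below (Python; names are illustrative). A proposal from mutate2 that lands outside [0, 1] simply has f = 0 there and is rejected by the acceptance test.

import random

def mutate1(x):
    # Discard x and sample a completely new value uniformly over [0, 1].
    return random.random()

def T1(x, x_new):
    return 1.0                         # uniform density over [0, 1]

def mutate2(x):
    # Offset the current sample by a uniform amount in [-0.05, +0.05].
    return x + 0.1 * random.random() - 0.05

def T2(x, x_new):
    # Constant density over a window of width 0.1, zero outside it.
    return 1.0 / 0.1 if abs(x_new - x) <= 0.05 else 0.0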
9.2.2
Start-up bias
Before we can go ahead and use the Metropolis algorithm, one other issue must
be addressed: start-up bias. The transition and acceptance methods above tell us
how to generate new samples xi+1 , but all presuppose that the current sample xi
has itself already been sampled with probability proportional to f . A commonly
used solution to this problem is to run the Metropolis sampling algorithm for some
number of iterations from an arbitrary starting state, discard the samples that are
generated, and then start the process for real, assuming that that has brought us to
an appropriately sampled x value. This is unsatisfying for two reasons: first, the
expense of taking the samples that were then discarded may be high, and second,
we can only guess at how many initial samples must be taken in order to remove
start-up bias.
Veach proposes another approach which is unbiased and straightforward. If an
alternative sampling method is available, we sample an initial value x0 using any density function p, x0 ∼ p(x). We start the Markov chain from the state x0, but we weight the contributions of all of the samples that we generate by the weight

w = f(x0) / p(x0).
This method eliminates start-up bias completely and does so in a predictable manner.
The only potential problem comes if f (x0 ) = 0 for the x0 we chose; in this
case, all samples will have a weight of zero, leading to a rather boring result. This
doesn't mean that the algorithm is biased, however; the expected value of the result
still converges to the correct distribution (see [87] for further discussion and for a
proof of the correctness of this technique.)
To reduce variance from this step, we can instead sample a set of N candidate
sample values, y1, . . . , yN, defining a weight for each by

wi = f(yi) / p(yi).    (9.7)
We then choose the starting x0 sample for the Metropolis algorithm from the yi
with probability proportional to their relative weights and compute a sample weight
w as the average of all of the wi weights. All subsequent samples xi that are
generated by the Metropolis algorithm are then weighted by the sample weight w.
For our particular f1 example above, we only need to take a single candidate sample with a uniform pdf over Ω, since f1(x) > 0 everywhere except for a single point, which has zero probability of being sampled:

x0 = ξ.

The sample weight w is then just f1(x0).
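A sketch of the candidate-based start-up procedure of Equation 9.7, written in Python with illustrative parameter names (p_sample draws from p, p_pdf evaluates it):

import random

def startup_sample(f, p_sample, p_pdf, n_candidates):
    """Pick the Metropolis starting state from weighted candidates (Eq. 9.7).
    Returns (x0, w): the chosen start and the sample weight applied to all
    subsequent Metropolis samples."""
    ys = [p_sample() for _ in range(n_candidates)]
    ws = [f(y) / p_pdf(y) for y in ys]
    w_sum = sum(ws)
    # Choose a candidate with probability proportional to its weight.
    u, acc = random.random() * w_sum, 0.0
    x0 = ys[-1]
    for y, w in zip(ys, ws):
        acc += w
        if u <= acc:
            x0 = y
            break
    return x0, w_sum / n_candidates    # w is the average of the wi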
9.2.3
Initial Results
We can now run the Metropolis algorithm and generate samples xi of f1. At each transition, we have two weighted samples to record (recall Figure 9.3). A simple approach for reconstructing an approximation to f1's probability distribution is just to store sums of the weights in a set of buckets of uniform width; each sample falls in a single bucket and contributes to it. Figure 9.5 shows some results. For
both graphs, we followed a chain of 10,000 mutations, storing the sample weights
in fifty buckets over [0, 1]. The weighting method for eliminating start-up bias was
used.
Figure 9.5: On the left, we always mutate by randomly selecting a completely new x value. Convergence is slow, but the algorithm is finding the right distribution. On the right, we perturb the current sample by ±.05 90% of the time, and pick a completely new x value the remaining 10%.
On the left graph, we used only mutate1 when a new x′ value was to be proposed. This alone isn't a very useful mutation, since it doesn't let us take advantage of the times when we find ourselves in a region of Ω where f has a relatively large value
and generate many samples in that neighborhood. However, the graph does suggest
that the algorithm is converging to the correct distribution.
On the right, we randomly chose between mutate1 and mutate2 with probabilities of 10% and 90%, respectively. We see that for the same number of samples
taken, we converge to f's distribution with less variance. This is because we are more effectively able to concentrate our work in areas where f's value is large, and propose fewer mutations to parts of state space where f's value is low. For example, if x = .8 and the second mutation proposes x′ = .75, this will be accepted f(.75)/f(.8) ≈ 69% of the time, while mutations from .75 to .8 will be accepted min(1, 1.44) = 100% of the time. Thus, we see how the algorithm naturally tends to avoid spending time sampling around the dip in the middle of the curve.
One important thing to note about these graphs is that the y axis has units that are different than those in Figure 9.4, where f1 is graphed. Recall that we just have a set of samples distributed according to the probability density fpdf proportional to f1; as such (for example), we would get the same sample distribution for another function g = 2 f1. If we wish to reconstruct an approximation to f1 directly, we must compute a normalization factor and use it to scale fpdf. We explain this process in Section 9.3.2.
9.2.4
Ergodicity
Figure 9.6 shows the surprising result of what happens if we only use mutate2 to
suggest sample values. On the left, we have taken 10,000 samples using just that
mutation. Clearly, things have gone awry: we didn't generate any samples xi > .5, and the result doesn't bear much resemblance to f1.
Thinking about the acceptance probability again, we can see that it would take
a large number of mutations, each with low probability of acceptance, to move xi
down close enough to .5 such that mutate2's short span would be enough to get us
to the other side. Since the Metropolis algorithm tends to keep us away from the
lower-valued regions of f (recall the comparison of probabilities for moving from
.8 to .75, versus moving from .75 to .8), this happens quite rarely. The right side
of Figure 9.6 shows what happens if we take 300,000 samples. This was enough
to make us jump from one side of .5 to the other a few times, but not enough to get
us close to the correct distribution.
This problem is an example of a more general issue that must be addressed
with Metropolis sampling: it's necessary that it be possible to reach all states x where f(x) > 0 with non-zero probability. In particular, it suffices that T(x → x′) > 0 for all x and x′ where f(x) > 0 and f(x′) > 0. Although the first condition is in fact met when we use only mutate2, many samples would be necessary in practice to converge to an accurate distribution. Periodically using mutate1 ensures sufficiently good coverage of Ω that this problem goes away.
9.2.5 Mutations via PDFs
If we have a pdf that is similar to some component of f , then we can use that
to derive one of our mutation strategies as well. Note that if we had a pdf that
was exactly proportional to f, all this Metropolis sampling wouldn't be necessary,
but lacking that we often can still find pdfs that approximate some part of the
function being sampled. Adding such an extra mutation strategy to the mix can
improve overall robustness of the Metropolis algorithm, by ensuring good coverage
of important parts of state space.
If we can generate random samples from a probability density function p, x ∼ p, we can use p directly as the basis of a mutation: propose a completely new state x′ drawn from p, so that the transition density is simply T(x → x′) = p(x′).
Figure 9.6: Two examples that show why it is important to periodically pick a completely new sample value. On the left, we ran 10,000 iterations using only mutate2, and on the right, 300,000 iterations. It is very unlikely that a series of mutations will be able to move from one side of the curve, across 0.5, to the other side, since mutations to areas where f1's value is low will usually be rejected. As such, the results are inaccurate for these numbers of iterations. (It's small solace that they would be correct in the limit.)
For sampling f1, we use the pdf

p1(x) = 1.2 : x ≤ 1/3
        0.6 : 1/3 < x ≤ 2/3
        1.2 : x > 2/3

Figure 9.7: Linear pdf used to sample f1. We can also develop mutation strategies based on pdfs that approximate some component of the function that we're sampling. Here, we're using a simple linear function that is roughly similar to the shape of f1.
Integrating and inverting the corresponding cumulative distribution function gives us:

x = (1/3)(ξ/.4)             : ξ ≤ .4
    1/3 + (1/3)(ξ - .4)/.2  : .4 < ξ ≤ .6
    2/3 + (1/3)(ξ - .6)/.4  : ξ > .6        (9.8)
Our third mutation strategy, mutate3, just generates a uniform random number ξ and proposes a mutation to a new state x′ according to Equation 9.8. Results
of using this mutation alone are shown in Figure 9.8; the graph is not particularly
better than the previous results, but in any case, it is helpful to have a variety of
methods with which to develop mutations.
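A sketch of mutate3 and its transition density in Python, written under the piecewise-constant reading of p1 given above:

import random

def mutate3(x):
    """Propose a new state by inverting the cdf of p1 (Equation 9.8);
    the current x is ignored, just as with mutate1."""
    xi = random.random()
    if xi <= 0.4:
        return (1.0 / 3.0) * (xi / 0.4)
    elif xi <= 0.6:
        return 1.0 / 3.0 + (1.0 / 3.0) * ((xi - 0.4) / 0.2)
    else:
        return 2.0 / 3.0 + (1.0 / 3.0) * ((xi - 0.6) / 0.4)

def T3(x, x_new):
    # The transition density equals p1 at the proposed point, independent of x.
    return 1.2 if (x_new <= 1.0 / 3.0 or x_new > 2.0 / 3.0) else 0.6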
9.3
Motion Blur
We will now show how Metropolis sampling can be used to solve a tricky problem with rendering motion-blurred objects. For objects that are moving at a high
speed across the image plane, the standard distribution ray tracing approach can be
inefficient; if a pixel is covered by an object for only a short fraction of the overall
time, most of the samples taken in that pixel will be black, so that a large number
are needed to accurately compute the pixel's color. We will show in this section
how Metropolis can be applied to solve this problem more efficiently. In addition to being an interesting application of Metropolis, this also gives us an opportunity to lay more groundwork for the complete MLT algorithm.
Figure 9.8: Results of sampling f1 using only mutate3.
The value of pixel j is given by

Ij = ∫ hj(u, v) L(u, v, t) du dv dt ≈ (1/N) Σ_{i=1}^{N} hj(xi) L(xi) / p(xi),    (9.9)

where hj is pixel j's image reconstruction filter and the samples xi = (ui, vi, ti) are drawn from a density p(x). If instead the xi are generated with Metropolis sampling, so that they are distributed proportionally to L, then (as in Equation 9.5)

Ij ≈ [ (1/N) Σ_{i=1}^{N} hj(xi) ] · ∫ L(x) dΩ.    (9.10)
9.3.1 Mutations
We will start with two basic mutation strategies for this problem. First, to ensure
ergodicity, we generate a completely new sample 10% of the time. This is just like
the one dimensional case of sampling f1. Here, we choose three random numbers
from the range of valid image and time sample values.
Our second mutation is a pixel and time perturbation that helps with local exploration of the space. Each time it is selected, we randomly move the pixel sample location by up to ±8 pixels in each direction (for a 512 by 512 image), and by up to ±.01 in time. If we propose a mutation that takes us off of the image or out of the valid time range, the transition is immediately rejected. The performance of the algorithm isn't too sensitive to these values, though see below for the results of some experiments where they were pushed to extremes.
The transition probabilities for each of these mutations are straightforward,
analogous to the one dimensional examples.
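In code, the pixel-and-time perturbation might look like the following Python sketch; the (u, v, t) state tuple, image size, and a unit time interval are assumptions of the sketch.

import random

WIDTH, HEIGHT = 512, 512

def perturb_pixel_time(state):
    """Offset the sample by up to +/-8 pixels and +/-0.01 in time.
    Returns None to signal an immediately rejected transition when the
    proposal leaves the image or the valid time range."""
    u, v, t = state
    u_new = u + random.uniform(-8.0, 8.0)
    v_new = v + random.uniform(-8.0, 8.0)
    t_new = t + random.uniform(-0.01, 0.01)
    if not (0.0 <= u_new < WIDTH and 0.0 <= v_new < HEIGHT and 0.0 <= t_new <= 1.0):
        return None
    return (u_new, v_new, t_new)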
Figure 9.9 shows some results. The top image was rendered with distribution ray tracing (with a stratified sampling pattern), and for the bottom image the Metropolis sampling approach with these two mutations was used. The same total number of samples was taken for each. Note that Metropolis does equally well regardless of
the velocity of the balls, while fast moving objects are difficult for distribution ray
tracing to handle well. Because Metropolis sampling can locally explore the path
space after it has found a sample that hits one of the balls, it is likely to find other
samples that hit them as well, and is thus more efficient; the small time perturbation is particularly effective for this. Note, however, that Metropolis doesn't do as
well with the ball that is barely moving at all, while this is a relatively easy case
for stratified sampling to handle well.
It's interesting to see the effect of varying the parameters to mutate2. First, we tried greatly increasing the distance we can move, up to ±80 pixels in each direction and ±.5 in time. Figure 9.10 (top) shows the result. Because we are no
longer doing a good job of local exploration of the space, the image is quite noisy.
(One would expect it to degenerate to something like distribution ray tracing, but
without the advantages of stratified sampling patterns that are easily applied in that
setting.)
We then dialed down the parameters, to ±.5 pixels of motion and ±.001 in time, for a single mutation; see Figure 9.10 (bottom). Here the artifacts in the image are more clumpy; this happens because we find a region of state space with a large contribution but then have trouble leaving it and finding other important regions. As such, we don't do a good job of sampling the entire image.
As a final experiment, we replaced mutate2 with a mutation that sampled the pixel and time deltas from an exponential distribution, rather than a uniform distribution. Given minimum and maximum pixel offsets rmin and rmax and time offsets tmin and tmax, we computed

r = rmax e^(-ξ log(rmax/rmin))
dt = tmax e^(-ξ log(tmax/tmin))

Given these offsets, a new pixel location was computed by uniformly sampling an angle θ = 2πξ and offsetting the old image coordinates by

(Δu, Δv) = (r sin θ, r cos θ).

The new time value was computed by offsetting the old one by dt, where addition and subtraction were chosen with equal probability.
We rendered an image using this mutation; the range of pixel offsets was
(.5, 40), and the range of time deltas was (.0001, .03). Figure 9.11 shows the result. The improvement is not major, but is noticeable. In particular, see how noise
is reduced along the edges of the fast-moving ball.
The advantage of sampling with an exponential distribution like this is that it
naturally tries a variety of mutation sizes. It preferentially makes small mutations,
Figure 9.9: Basic motion blur results. On the top, we have applied distribution
ray tracing with a stratified sampling pattern, and on the bottom, we have applied
Metropolis sampling. The images are 512x512 pixels, with an average of 9 samples
per pixel.
Figure 9.10: The effect of changing the parameters to the second motion blur mutation strategy. Top: if the mutations are too large, we get a noisy image. Bottom:
too small a mutation leads to clumpy artifacts, since we're not doing a good job of
exploring the entire state space.
close to the minimum magnitudes specified, which help locally explore the path
space in small areas of high contribution where large mutations would tend to be
rejected. On the other hand, because it also can make larger mutations, it also
avoids spending too much time in a small part of path space, in cases where larger
mutations have a good likelihood of acceptance.
9.3.2
Re-normalization
Recall that the set of samples generated by the Metropolis algorithm is from the
normalized distribution fpdf of the function that we apply it to. When we are
computing an image, as in the motion blur example, this means that the image's
pixel values need to be rescaled in order to be correct.
This problem can be solved with an additional short pre-processing step. We
take a small number of random samples (e.g. 10,000) of the function f and estimate
its integral over . After applying Metropolis sampling, we have an image of
sample densities. We then scale each pixel by the precomputed total image integral
divided by the total number of Metropolis samples taken. It can be shown that the
expected value of this scaled sample density is the original function f:

f(x) ≈ (I(f) / N) Σ_{i=1}^{N} δ(x - xi),

where the xi are the Metropolis samples and δ is a delta function.
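A sketch of the normalization step in Python: estimate the integral of f with a short random pre-pass, then scale the accumulated sample-density image. The array layout and the assumption that f is sampled over a unit-measure (u, v, t) domain are illustrative choices, not part of the notes.

import random

def normalize_image(weights, n_metropolis, f, n_prepass=10000):
    """weights: 2-D list of accumulated Metropolis sample weights per pixel.
    Scales it by I(f) / N so that expected pixel values match f."""
    i_f = sum(f(random.random(), random.random(), random.random())
              for _ in range(n_prepass)) / n_prepass
    scale = i_f / n_metropolis
    return [[w * scale for w in row] for row in weights]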
9.3.3 Two-Stage Metropolis
When the image function I has regions with very large values, Metropolis sampling
may not quite do what we want. For example, if the image has a very bright light
source directly visible in some pixels, most of the Metropolis samples will naturally
be clustered around those pixels. As a result, the other regions of the image will
have high variance due to under-sampling. Figure 9.12 (top) shows an example
of this problem. The bottom ball has been made a very bright red, 1,000 times
brighter than the others; most of the samples concentrate on it, so the other balls
are under-sampled.
Veach introduced two-stage Metropolis to deal with this problem [87]. Two-stage Metropolis attempts to keep the relative error at all pixels the same, rather than trying to just minimize absolute error. We do this by renormalizing the function L to get a new function L′:

L′(x) = L(x) / n(x),
Figure 9.11: Comparison of Metropolis sampling with the first two mutation strategies (top) versus Metropolis sampling where the second strategy is replaced by a mutation based on sampling pixel and time offsets from an exponential distribution
(bottom). Note how noise is reduced along the edges of the fast-moving ball.
where n is a normalization function that roughly approximates L(x)'s magnitude, such that the range of values taken on by L′(x) is much more limited. The Metropolis algorithm proceeds as usual, just evaluating L′(x) wherever it otherwise would have evaluated L(x). The result is that samples are distributed more uniformly, resulting in a better image. We correct for the normalization when accumulating weights in pixels; by multiplying each weight by n(x), the final image pixels have the correct magnitude.
For our example, we computed a normalization function by computing a low-resolution image (32 by 32 pixels) with distribution ray tracing and then blurring it. We then made sure that all pixels of this image had a non-zero value (we don't want to spend all of our sampling budget in areas where we inadvertently underestimated n(x), such that L′(x) = L(x)/n(x) is large) and so we also set pixels
in the normalization image with very low values to a fixed minimum. Applying
Metropolis as before, we computed the image on the bottom of Figure 9.12. Here
all of the balls have been sampled well, resulting in a visually more appealing result (even though absolute error is higher, due to the red ball being sampled with
fewer samples.)
9.3.4
Color
For scenes that aren't just black-and-white, the radiance function L(u, v, t) returns a spectral distribution in some form. This distribution must be converted to a single real value in order to compute acceptance probabilities (Equation 9.2). One option is to compute the luminance of the spectrum and use that; the resulting
image (which is still generated by storing spectral values) is still correct, and there
is the added advantage that the image is sampled according to its perceived visual
importance.
Figure 9.12: Two stage sampling helps when the image has a large variation in
pixel brightness. Top: the red ball is much brighter than the others, resulting in
too few samples being taken over the rest of the image. Bottom: by renormalizing
the image function I, we distribute samples more evenly and generate a less noisy
image.
Figure 9.13: A path with three edges through a simple scene. The first vertex, v0, is on a light source, and the last, v3, is at the eye.
9.4 Metropolis Light Transport
Using the groundwork of the last few sections, the Metropolis Light Transport
algorithm can now be described. We will not explain all of its subtleties or all of
the details of how to implement it efficiently; see Chapter 11 of Veach's thesis for
both of these in great detail [87]. Rather, we will try to give a flavor for how it
all works and will make connections between MLT and the sampling problems we
have described in the previous few sections.
In MLT, the samples x from Ω are now sequences v0 v1 . . . vk, k ≥ 1, of vertices on scene surfaces. The first vertex, v0, is on a light source, and the last, vk, is at the eye (see Figure 9.13). This marks a big change from our previous examples: the state space is now an infinite-dimensional space (paths with two vertices, paths with three vertices, ...). As long as there is non-zero probability of sampling any particular path length, however, this doesn't cause any problems theoretically, but it is another idea that needs to be juggled when studying the MLT algorithm.
As before, the basic strategy is to propose mutations x′ to paths x, accepting or rejecting mutations according to the detailed balance condition. The function f(x) represents the differential radiance contribution carried along the path x, and the set of paths sampled will be distributed according to the image contribution function of Equation 9.10. (See [87] for a more precise definition of f(x).) Expected values are also used as described previously to accumulate contributions at both the current path x's image location, as well as the image location for the proposed path x′.
A set of n starting paths xi is generated with bidirectional path tracing, in a manner that eliminates start-up bias. N candidate paths are sampled (recall Section 9.2.2), where n ≪ N, and we select n of them along with appropriate weights.
Figure 9.14: A bidirectional mutation applied to a path; part of the current path is replaced with newly sampled vertices (shown primed).
9.4.1
Path Mutations
paths, since they just cancel out when the acceptance probability is computed.
Computation of the proposed transition densities is more difficult; see [87, Section 11.4.2.1] for a full discussion. The basic issue is that it is necessary to consider
all of the possible ways that one could have sampled the path you ended up with,
given the starting path. (For example, for the path in Figure 9.14, we might have
generated the same path by sampling no new vertices from the light source, but
two new vertices along the eye path.) This computation is quite similar in spirit to
how importance sampling is applied to bidirectional path tracing.
The bidirectional mutation by itself is enough to ensure ergodicity; because
there is some probability of throwing away the entire current path, we are guaranteed to not get stuck in some subset of path space.
Bidirectional mutations can be ineffective when a very small part of path space is where the most important light transport is happening: almost all of the proposed mutations will cause it to leave the interesting set of paths (e.g. those causing
by adding perturbations to the mix; these perturbations try to offset some of the
vertices of the current path from their current location, while still leaving the path
mostly the same (e.g. preserving the mode of scattering, specular or non-specular, reflection or transmission, at each scattering event).
One such perturbation is the caustic perturbation (see Figure 9.15). If the current path hits one or more specular surfaces before hitting a single diffuse surface
and then the eye, then it's a caustic path. For such paths, we can make a slight
shift to the outgoing direction from the light source and then trace the resulting
path through the scene. If it hits all specular surfaces again and if the final diffuse
surface hit is visible to the eye, we have a new caustic sample at a different image location. The caustic perturbation thus amortizes the possibly high expense of
finding caustic paths.
Lens perturbations are based on a similar idea to caustic perturbations, where
the direction of the outgoing ray from the camera is shifted slightly, and then followed
through the same set of types of scattering at scene surfaces. This perturbation
is particularly nice since it keeps us moving over the image plane, and the more
differently-located image samples we have, the better the quality of the final image.
Figure 9.15: The caustic perturbation in action. Given an old path that leaves the
light source, hits a specular surface, and then hits a non-specular surface before
reaching the eye, we perturb the direction leaving the light source. We then trace
rays to generate a new path through the scene and to the eye. (The key here is that because the last surface is non-specular, we aren't required to pick a particular outgoing direction to the eye; we can pick whatever direction is needed.)
9.4.2
Pixel Stratification
Another problem with using Metropolis to generate images is that random mutations won't do a good job of ensuring that pixel samples are well-stratified over the image plane. In particular, it doesn't even ensure that all of the pixels have any
samples taken within them. While the resulting image is still unbiased, it may be
perceptually less pleasing than an image computed with alternative techniques.
Veach has suggested a lens subpath mutation to address this problem. A set
of pixel samples that must be taken is generated (e.g. via a Poisson-disk process,
a stratified sampling pattern, etc.) As such, each pixel has some number of required sample locations associated with it. The new mutation type first sees if the
pixel corresponding to the current sample path has any precomputed sample positions that haven't been used. If so, it mutates to that sample and traces a new path into the scene, following as many specular bounces as are found until a non-specular surface is found. This lens subpath is then connected with a path to a light source. If the randomly selected pixel already has its quota of lens subpath mutations, the other pixels are examined in a pseudo-random ordering until one
is found with remaining samples.
By ensuring a minimum set of well-distributed samples that are always taken,
overall image quality improves and we do a better job of (for example) anti-aliasing geometric edges than we would otherwise. Remember that this process just sets a minimum number of samples that are taken per pixel; if the number of samples
allocated to ensuring pixel stratification is 10% of the total number of samples
(for example), then most of our samples will still be taken in regions with high
contribution to the final image.
9.4.3
Direct Lighting
Veach notes that MLT (as described so far) often doesn't do as well with direct lighting as standard methods, such as those described in Chapter 3 of these notes, or in Shirley et al.'s TOG paper [70]. The root of the problem is that the Metropolis samples over the light sources aren't as well stratified as they can be with standard methods.
A relatively straightforward solution can be applied: when a lens subpath is
generated (recall that lens subpaths are paths from the eye that follow zero or more
specular bounces before hitting a diffuse surface), standard direct lighting techniques are used at the diffuse surface to compute a contribution to the image for
the lens subpath. Then, whenever an MLT mutation is proposed that includes direct
lighting, it is immediately rejected since direct lighting was already included in the
solution.
Veach notes, however, that this optimization may not always be more effective.
For example, if the direct lighting cannot be sampled efficiently by standard techniques (e.g. due to complex visibility, most of the light sources being completely
occluded, etc.), then MLT would probably be more effective.
9.4.4
Participating Media
Pauly et al. have described an extension of MLT to the case of participating media [62]. The state space and path measure are extended to include points in the
volume in addition to points on the surfaces. Each path vertex may be on a surface
or at a point in the scattering volume. The algorithm proceeds largely the same
way as standard MLT, but places some path vertices on scene surfaces and others
at points in the volume. As such, it robustly samples the space of all contributing
transport paths through the medium.
They also describe a new type of mutation, tailored toward sampling scattering in participating media: the propagation perturbation. This perturbation randomly
offsets a path vertex along one of the two incident edges (see Figure 9.16). Like
other perturbations, this mutation helps concentrate work in important regions of
the path space.
Figure 9.16: For path vertices that are in the scattering volume, rather than at
a surface, the scattering propagation perturbation moves a vertex (shown here as
a black circle) a random distance along one of the two path edges incident to the
vertex. Here we have chosen the dashed edge. If the connecting ray to the eye
is occluded, the mutation is immediately rejected; otherwise the usual acceptance
probability is computed.
Acknowledgements
Eric Veach was kind enough to read these notes through a series of drafts and offer
extensive comments and many helpful suggestions for clearer presentation.
Chapter 10
Biased Techniques
By Henrik Wann Jensen
In the previous chapters we have seen examples of several unbiased Monte Carlo
ray tracing (MCRT) techniques. These techniques use pure Monte Carlo sampling
to compute the various estimates of radiance, irradiance, etc. The only problem with pure MCRT is variance, seen as noise in the rendered images. The only
way to eliminate this noise (and still have an unbiased algorithm) is to sample
more efficiently and/or to use more samples.
In this chapter we will discuss several approaches for removing noise by introducing bias. We will discuss techniques that use interpolation of irradiance to exploit the smoothness of the irradiance field. This approach makes it possible to use more samples at selected locations in the model. We will also discuss photon mapping, which stores information about the flux in a scene and performs a local evaluation of the statistics of the stored flux in order to speed up the simulation of global illumination.
10.1 Biased vs. Unbiased
An unbiased Monte Carlo technique does not have any systematic error. It can be
stopped after any number of samples and the expected value of the estimator will
be the correct value. This does not mean that all biased methods give the wrong
result. A method can converge to the correct result as more samples are used and still be biased; such methods are called consistent.
In chapter 2 it was shown how the integral of a function g(x) can be expressed as an estimator of the form

Ψ = (1/N) Σ_{i=1}^{N} g(xi) / p(xi),    (10.1)

where p(x) is a p.d.f. for x such that p(x) > 0 when g(x) > 0. The expected value of Ψ is the value of the integral:

E{Ψ} = I = ∫_{x∈S} g(x) dμ.    (10.2)

A biased estimator instead has an expected value of

E{Ψ} = I + ε,    (10.3)

where ε is some error term. For a consistent estimator the error term will diminish as the number of samples, N, is increased:

lim_{N→∞} ε = 0.    (10.4)
Figure 10.1: A path traced box scene with 10 paths/pixel. On the left the box has
two diffuse spheres, and on the right the box has a mirror and a glass sphere. Note
how the simple change of the sphere material causes a significant increase in the
noise in the right image.
is fairly insensitive to slowly changing illumination. The trick is to avoid blurring
edges and sharp features in the image and to have control over the amount of blur.
Consider the simple box scene in Figure 10.1. The figure contains two images
of the box scene: one in which the box contains two diffuse spheres, and one in
which the box contains a glass and a mirror sphere. Both images have been rendered with 10 paths per pixel. Even though the illumination of the walls is almost
the same in the two images the introduction of the specular spheres is enough to
make the path tracing image very noisy. In the following sections we will look at a
number of techniques for eliminating this noise without using more samples.
Figure 10.2: Filtered versions of the image of the box with specular spheres. On
the left is the result of a 3x3 low-pass filter, and on the right the result of a 3x3
median filter.
rendered image. The effect of low-pass and median filtering on the box scene is
shown in Figure 10.2.
Several other more sophisticated filtering techniques have been used.
Jensen and Christensen [34] applied a median filter to the indirect illumination
on diffuse surfaces based on the assumption that most of the noise in path tracing is
found in this irradiance estimate. By filtering only this estimate before using it they
removed a large fraction of the noise without blurring the edges, and features such
as highlights and noisy textures. The problem with this approach is that it softens
the indirect illumination and therefore blurs features due to indirect lighting. Also
the technique is not energy-preserving.
Rushmeier and Ward [65] used an energy-preserving non-linear filter to remove
noisy pixels by distributing the extra energy in the pixel over several neighboring
pixels.
McCool [50] used an anisotropic diffusion filter which also preserves energy
and allows for additional input about edges and textures in order to preserve these
features.
Suykens and Willems [79] used information gathered during rendering (they
used bidirectional path tracing) about the probabilities of various events. This enabled them to predict the occurrence of spikes in the illumination. By distributing
the power of the spikes over a larger region in the image plane their method removes noise and preserves energy. The nice thing about this approach is that it
allows for progressive rendering with initially blurry results that converge to the correct result as more samples are used.
10.3
Adaptive Sampling
Another way of eliminating noise in Monte Carlo ray traced images is to use adaptive sampling. Here the idea is to use more samples for the problematic (i.e. noisy)
pixels.
One technique for doing this is to compute the variance of the estimate based
on the samples used for the pixel [48, 63]. Instead of using a fixed number of
samples per pixel each pixel is sampled until the variance is below a given threshold. An estimate, s2 , of the variance for a given set of samples can be found using
standard techniques from statistics:
s² = 1/(N - 1) [ (1/N) Σ_{i=1}^{N} Li² - ( (1/N) Σ_{i=1}^{N} Li )² ]    (10.5)
This estimate is based on the assumption that the samples Li are distributed according to a normal distribution. This assumption is often reasonable, but it fails
when for example the distribution within a pixel is bimodal (this happens when a light source edge passes through a pixel).
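A Python sketch of variance-based adaptive pixel sampling using Equation 10.5; the threshold and the minimum and maximum sample counts are illustrative parameters.

def sample_pixel_adaptive(sample_radiance, threshold, n_min=16, n_max=1024):
    """Keep sampling a pixel until the variance estimate of Eq. 10.5 drops
    below the threshold. sample_radiance() returns one radiance sample;
    as discussed below, stopping on the estimate itself introduces a
    (usually small) bias."""
    total, total_sq, n = 0.0, 0.0, 0
    while n < n_max:
        L = sample_radiance()
        total += L
        total_sq += L * L
        n += 1
        if n >= n_min:
            mean = total / n
            s2 = (total_sq / n - mean * mean) / (n - 1)   # Eq. 10.5
            if s2 < threshold:
                break
    return total / n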
Using the variance as a stopping criterion can be effective in reducing noise, but
it introduces bias as shown by Kirk and Arvo [42]. They suggested using a pilot
sample to estimate the number of samples necessary. To eliminate bias the pilot
sample must be thrown away.
The amount of bias introduced by adaptive sampling is, however, usually very
small as shown by Tamstorf and Jensen [80]. They used bootstrapping to estimate
the bias due to adaptive sampling and found that it is insignificant for most pixels
in the rendered image (a notable exception is the edges of light sources).
10.4
Irradiance Caching
Irradiance caching is a technique that exploits the fact that the irradiance field often is smooth [95]. Instead of just filtering the estimate the idea is to cache and
interpolate the irradiance estimates. This is done by examining the samples of the
estimate more carefully to, loosely speaking, compute the expected smoothness of
the irradiance around a given sample location. If the irradiance is determined to be
sufficiently smooth then the estimate is re-used for this region.
To understand in more detail how this works let us first consider the evaluation
of the irradiance, E, at a given location, x, using Monte Carlo ray tracing.
E(x) = (π / (MN)) Σ_{j=1}^{M} Σ_{i=1}^{N} Li,j(θj, φi),    (10.6)

where

θj = sin⁻¹( √((j - ξ1) / M) )   and   φi = 2π (i - ξ2) / N.    (10.7)
Here (θj, φi) specify a direction on the hemisphere above x in spherical coordinates, ξ1 ∈ [0, 1] and ξ2 ∈ [0, 1] are uniformly distributed random numbers, and M and N specify the subdivision of the hemisphere. Li,j(θj, φi) is evaluated by tracing a ray in the (θj, φi) direction. Note that the formula uses stratification of
the hemisphere to obtain a better estimate than pure random sampling.
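A sketch of the stratified irradiance gathering of Equations 10.6 and 10.7 in Python; trace_radiance is a hypothetical callable that traces a ray from x in the given hemispherical direction and returns the incoming radiance.

import math, random

def estimate_irradiance(x, trace_radiance, M=8, N=16):
    """Stratified Monte Carlo estimate of E(x), Equations 10.6-10.7."""
    total = 0.0
    for j in range(1, M + 1):
        for i in range(1, N + 1):
            theta = math.asin(math.sqrt((j - random.random()) / M))
            phi = 2.0 * math.pi * (i - random.random()) / N
            total += trace_radiance(x, theta, phi)
    return math.pi * total / (M * N)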
To estimate the smoothness of the local irradiance on the surface around the
sample location Ward et al. [95] looked at the distances to the surfaces intersected
by the rays as well as the local changes in the surface normal. This resulted in
an estimate of the local relative change, εi, in irradiance as the surface location is changed away from sample i:

εi(x, n) = ||x - xi|| / R0 + √(1 - n · ni).    (10.8)
Here xi is the original sample location and x is the new sample location (for which
we want to compute the change), R0 is the harmonic mean distance to the intersected
surfaces, ~ni is the sample normal, and ~n is the new normal.
Given this estimate of the local variation in irradiance, Ward et al. developed a caching method where previously stored samples are re-used whenever possible. All samples are stored in an octree; this structure makes it possible to quickly locate previous samples. When a new sample is requested, the octree is queried first for previous samples near the new location. For these nearby samples the change in irradiance, ε, is computed. If samples with a sufficiently low ε are found, then these samples are blended using weights inversely proportional to ε:
E(x, n) ≈ [ Σ_{i, wi > 1/a} wi(x, n) Ei(xi) ] / [ Σ_{i, wi > 1/a} wi(x, n) ],    (10.9)

where wi = 1/εi(x, n) and a is a user-specified accuracy threshold.
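A Python sketch of the cache lookup and blending of Equations 10.8 and 10.9; a flat list stands in for the octree, and the fields of a cache entry are assumptions of this sketch.

import math

def cache_lookup(x, n, cache, a):
    """Blend cached irradiance samples per Eqs. 10.8-10.9. Each cache entry
    is assumed to hold (position xi, normal ni, harmonic mean distance R0,
    irradiance Ei). Returns None if no stored sample is usable, in which
    case a new irradiance sample must be computed and added to the cache."""
    num, den = 0.0, 0.0
    for (xi, ni, R0, Ei) in cache:
        dist = math.sqrt(sum((x[k] - xi[k]) ** 2 for k in range(3)))
        dot = max(-1.0, min(1.0, sum(n[k] * ni[k] for k in range(3))))
        eps = dist / R0 + math.sqrt(max(0.0, 1.0 - dot))   # Eq. 10.8
        if eps < 1e-6:
            return Ei                 # essentially the same location and normal
        wi = 1.0 / eps
        if wi > 1.0 / a:              # Eq. 10.9 cutoff
            num += wi * Ei
            den += wi
    return num / den if den > 0.0 else None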
10.5 Photon Mapping

10.5.1 Pass 1: Photon Tracing
The first step is building the photon map. This is done by emitting photons from the
light sources and tracing them through the scene using photon tracing. The photons
are emitted according to the power distribution of the light source. As an example,
a diffuse point light emits photons with equal probability in all directions. When
a photon intersects a surface it is stored in the photon map (if the surface material
has a diffuse component). In addition the photon is either scattered or absorbed
based on the albedo of the material. For this purpose Russian roulette [4] is used to
decide if a photon path is terminated (i.e. the photon is absorbed), or if the photon
is scattered (reflected or transmitted). This is shown in Figure 10.4.
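A Python sketch of the store-then-Russian-roulette decision at a photon-surface interaction; the surface fields and photon map interface are hypothetical names used only for illustration.

import random

def interact(power, surface, photon_map, position, direction):
    """One photon-surface interaction during photon tracing. Returns the
    scattered photon's power and direction, or None if it is absorbed.
    surface.albedo, surface.is_diffuse, and surface.scatter are assumed,
    hypothetical fields/methods of a surface record."""
    if surface.is_diffuse:
        photon_map.store(position, direction, power)   # store incident flux
    # Russian roulette: survive with probability equal to the albedo.
    if random.random() >= surface.albedo:
        return None                                    # absorbed, path ends
    # The scattered photon keeps the incident power: dividing by the
    # survival probability cancels the albedo on average.
    new_direction = surface.scatter(direction)
    return power, new_direction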
The photon tracing algorithm is unbiased. No approximations or sources of systematic error are introduced in the photon tracing step. In contrast, many other light propagation algorithms, such as progressive refinement radiosity [11], introduce a systematic error at each light bounce (due to the approximate representation of the illumination).

Figure 10.4: The photon map is built using photon tracing. From [33].
For efficiency reasons several photon maps are constructed in the first pass. A
caustics photon map that stores only the photons that correspond to a caustic light
path, a global photon map that stores all photon hits at diffuse surfaces (including
the caustics), and a volume photon map that stores multiple scattered photons in
participating media. In the following we will ignore the case of participating media
(see [35, 33] for details).
10.5.2 The Radiance Estimate
The photon map represents incoming flux in the scene. For rendering purposes we
want radiance. Using the expression for reflected radiance we find that:
Lr(x, ω) = ∫_Ω fr(x, ω′, ω) Li(x, ω′) (nx · ω′) dω′    (10.10)
         = ∫_Ω fr(x, ω′, ω) d²Φi(x, ω′) / dAi .    (10.11)

Figure 10.5: The radiance estimate is computed from the nearest photons in the photon map. From [33].
Here we have used the relationship between radiance and flux to rewrite the incoming radiance as incoming flux instead. By using the nearest n photons around
x from the photon map to estimate the incoming flux, we get:
Lr(x, ω) ≈ Σ_{p=1}^{n} fr(x, ωp, ω) ΔΦp(x, ωp) / ΔA.    (10.12)
This procedure can be seen as expanding a sphere around x until it contains enough
photons. The last unknown piece of information is ΔA. This is the area covered by the photons, and it is used to compute the photon density (the flux density). A simple approximation for ΔA is the projected area of the sphere used to locate the photons. The radius of this sphere is r (where r is the distance to the nth nearest photon), and we get ΔA = πr². This is equivalent to a nearest neighbor density
estimate [73]. The radiance estimate is illustrated in Figure 10.5.
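A Python sketch of the radiance estimate of Equation 10.12; photon_map.locate and the brdf callable are hypothetical interfaces, with each photon carrying its power and incident direction.

import math

def radiance_estimate(photon_map, x, wo, brdf, n=50):
    """Estimate reflected radiance at x toward wo from the n nearest photons,
    Eq. 10.12. photon_map.locate(x, n) is assumed to return a list of
    (power, incident_direction) pairs and the distance r to the farthest one."""
    photons, r = photon_map.locate(x, n)
    if r == 0.0:
        return 0.0
    area = math.pi * r * r              # projected area of the search sphere
    return sum(brdf(x, wp, wo) * power for (power, wp) in photons) / area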
The radiance estimate is biased. There are two approximations in the estimate.
It assumes that the nearest photons represent the illumination at x, and it uses
a nearest neighbor density estimate. Both of these approximations can introduce
artifacts in the rendered image. In particular the nearest neighbor density estimate
is often somewhat blurry.
However, the radiance estimate is also consistent. As more photons are used in
the photon map as well as the estimate it will converge to the correct value.
A useful property of the radiance estimate is that the bias is purely local (the
error is a function of the local geometry and the local photon density).
10.5.3
Pass 2: Rendering
For rendering, the radiance through each pixel is computed by averaging the result of several sample rays. The radiance for each ray is computed using distribution ray tracing. This ray tracer uses the photon map both to guide the sampling (importance sampling) and to limit the recursion.
There are several strategies by which the photon map can be used for rendering.
One can visualize the radiance estimate directly at every diffuse surface intersected
by a ray. This approach will work (a very similar strategy is used by [69]), but it
requires a large number of photons in both the photon map as well as the radiance
estimate. To reduce the number of photons the two-pass photon mapping approach
uses a mix of several techniques to compute the various components of the reflected
radiance at a given surface location.
We distinguish between specular and diffuse reflection. Here specular means
perfect specular or highly glossy, and diffuse reflection is the remaining part of the
reflection (not only Lambertian).
For all specular surface components the two-pass method uses recursive ray
tracing to evaluate the incoming radiance from the reflected direction. Ray tracing
is pretty efficient at handling specular and highly glossy reflections.
Two different techniques are used for the diffuse surface component. The first
diffuse surface seen either directly through a pixel or via a few specular reflections
is evaluated accurately using Monte Carlo ray tracing. The direct illumination
is computed using standard ray tracing techniques and similarly the irradiance is
evaluated using Monte Carlo ray tracing or gathering (this sampling is improved
by using the information in the photon map to importance sample in the directions
where the local photons originated). Whenever a sample ray from the gathering
step reaches another diffuse surface, the estimate from the global photon map is used. The use of a gathering step means that the radiance estimate from the global photon map can be fairly coarse without affecting the quality of the final rendered image. The final component of the two-pass method is caustics; to reduce noise in the gathering step the caustics component is extracted, and caustics are instead rendered by visualizing the radiance estimate from the caustics photon map directly.
Figure 10.6: The rendering step uses a gathering step to compute the first diffuse
bounce more accurately. From [33].
Bibliography
[1] James Arvo. Analytic Methods for Simulated Light Transport. PhD thesis,
Yale University, December 1995.
[2] James Arvo. Applications of irradiance tensors to the simulation of non-Lambertian phenomena. In Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH, pages 335-342, August 1995.
[3] James Arvo. Stratified sampling of spherical triangles. In Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH, pages 437-438, August 1995.
[4] James Arvo and David B. Kirk. Particle transport and image synthesis. In Forest Baskett, editor, Computer Graphics (SIGGRAPH '90 Proceedings), volume 24, pages 63-66, August 1990.
[5] J. Beck and W. W. L. Chen. Irregularities of Distribution. Cambridge University Press, New York, 1987.
[6] Marcel Berger. Geometry, volume II. Springer-Verlag, New York, 1987.
Translated by M. Cole and S. Levy.
[7] Kenneth Chiu, Peter Shirley, and Changyaw Wang. Multi-jittered sampling. In Paul Heckbert, editor, Graphics Gems IV, pages 370-374. Academic Press, Boston, 1994.
[8] Kenneth Chiu, Peter Shirley, and Changyaw Wang. Multi-jittered sampling. In Paul Heckbert, editor, Graphics Gems IV, pages 370-374. Academic Press, Boston, 1994.
[9] K.-L. Chung. An estimate concerning the Kolmogoroff limit distribution. Transactions of the American Mathematical Society, 67:36-50, 1949.
[10] William G. Cochran. Sampling Techniques (3rd Ed). John Wiley & Sons,
1977.
[11] Michael F. Cohen, Shenchang Eric Chen, John R. Wallace, and Donald P. Greenberg. A progressive refinement approach to fast radiosity image generation. In John Dill, editor, Computer Graphics (SIGGRAPH '88 Proceedings), volume 22, pages 75-84, August 1988.
[12] Michael F. Cohen and John R. Wallace. Radiosity and Realistic Image Synthesis. Academic Press Professional, San Diego, CA, 1993. Excellent book
on radiosity algorithms.
[13] Robert L. Cook. Stochastic sampling in computer graphics. ACM Transactions on Graphics, 5(1):51-72, January 1986.
[14] Robert L. Cook, Thomas Porter, and Loren Carpenter. Distributed ray tracing. Computer Graphics, 18(4):165-174, July 1984. ACM Siggraph '84 Conference Proceedings.
[15] R. Cranley and T.N.L. Patterson. Randomization of number theoretic methods for multiple integration. SIAM Journal of Numerical Analysis, 13:904-914, 1976.
[16] P. J. Davis and P. Rabinowitz. Methods of Numerical Integration (2nd Ed.).
Academic Press, San Diego, 1984.
[17] Luc Devroye. Non-uniform Random Variate Generation. Springer, 1986.
[18] Kai-Tai Fang and Yuan Wang. Number Theoretic Methods in Statistics. Chapman and Hall, London, 1994.
[19] Henri Faure. Discrépance de suites associées à un système de numération (en dimension s). Acta Arithmetica, 41:337-351, 1982.
[20] Henri Faure. Good permutations for extreme discrepancy. Journal of Number Theory, 42:47-56, 1992.
[21] G. Fishman. Monte Carlo: Concepts, Algorithms, and Applications.
Springer-Verlag, 1995.
[22] George S. Fishman. Monte Carlo: concepts, algorithms, and applications.
Springer Verlag, New York, NY, 1996.
[23] Rafael C. Gonzalez and Paul Wintz. Digital Image Processing (2nd Ed.).
Addison-Wesley, Reading, MA, 1987.
[24] S. Haber. A modified Monte Carlo quadrature. Mathematics of Computation, 20:361-368, 1966.
[25] J.H. Halton. On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals. Numerische Mathematik, 2:84-90, 1960.
[26] John H. Halton. A retrospective and prospective of the Monte Carlo method. SIAM Review, 12(1):1-63, January 1970.
[27] J. M. Hammersley and D. C. Handscomb. Monte Carlo Methods. Wiley, New
York, N.Y., 1964.
[28] F. J. Hickernell, H. S. Hong, P. L'Ecuyer, and C. Lemieux. Extensible lattice sequences for quasi-Monte Carlo quadrature. SIAM Journal on Scientific Computing, 22(3):1117-1138, 2000.
[29] E. Hlawka. Funktionen von beschränkter Variation in der Theorie der Gleichverteilung. Annali di Matematica Pura ed Applicata, 54:325-333, 1961.
[30] H. S. Hong and F. J. Hickernell. Implementing scrambled digital sequences.
AMS Transactions on Mathematical Software, 2003. To appear.
[31] L.K. Hua and Y. Wang. Applications of number theory to numerical analysis.
Springer, Berlin, 1981.
[32] Henrik Wann Jensen. Global illumination using photon maps. In Xavier Pueyo and Peter Schröder, editors, Eurographics Rendering Workshop 1996, pages 21-30, New York City, NY, June 1996. Eurographics, Springer Wien. ISBN 3-211-82883-4.
[33] Henrik Wann Jensen. Realistic Image Synthesis using Photon Mapping. AK
Peters, 2001.
[34] Henrik Wann Jensen and Niels J. Christensen. Optimizing path tracing using
noise reduction filters. In Winter School of Computer Graphics 1995, February 1995. held at University of West Bohemia, Plzen, Czech Republic, 14-18
February 1995.
[35] Henrik Wann Jensen and Per H. Christensen. Efficient simulation of light transport in scenes with participating media using photon maps. In Michael Cohen, editor, SIGGRAPH '98 Conference Proceedings, Annual Conference Series, pages 311-320. ACM SIGGRAPH, Addison Wesley, July 1998. ISBN 0-89791-999-8.
[36] James T. Kajiya. The rendering equation. Computer Graphics, 20(4):143-150, August 1986. ACM Siggraph '86 Conference Proceedings.
[37] M. H. Kalos and Paula A. Whitlock. Monte Carlo Methods, volume I, Basics.
John Wiley & Sons, New York, 1986.
[38] Malvin H. Kalos and Paula A. Whitlock. Monte Carlo Methods: Volume I:
Basics. John Wiley & Sons, New York, 1986.
[39] Alexander Keller. A quasi-Monte Carlo algorithm for the global illumination problem in a radiosity setting. In Harald Niederreiter and Peter Jau-Shyong Shiue, editors, Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, pages 239–251, New York, 1995. Springer-Verlag.
[40] David Kirk and James Arvo. Unbiased sampling techniques for image synthesis. Computer Graphics, 25(4):153–156, July 1991.
[41] David Kirk and James Arvo. Unbiased variance reduction for global illumination. In Proceedings of the Second Eurographics Workshop on Rendering,
Barcelona, May 1991.
[42] David B. Kirk and James Arvo. Unbiased sampling techniques for image synthesis. In Thomas W. Sederberg, editor, Computer Graphics (SIGGRAPH 91 Proceedings), volume 25, pages 153–156, July 1991.
[43] N. M. Korobov. The approximate computation of multiple integrals. Dokl. Akad. Nauk SSSR, 124:1207–1210, 1959.
[44] L. Kuipers and H. Niederreiter. Uniform Distribution of Sequences. John Wiley & Sons, New York, 1976.
[45] Brigitta Lange. The simulation of radiant light transfer with stochastic raytracing. In Proceedings of the Second Eurographics Workshop on Rendering
(Barcelona, May 1991), 1991.
[78] Michael Stein. Large sample properties of simulations using Latin hypercube sampling. Technometrics, 29(2):143–151, 1987.
[79] Frank Suykens and Yves Willems. Adaptive filtering for progressive Monte Carlo image rendering. 2000.
[80] Rasmus Tamstorf and Henrik Wann Jensen. Adaptive sampling and bias estimation in path tracing. In Julie Dorsey and Philipp Slusallek, editors, Eurographics Rendering Workshop 1997, pages 285–296, New York City, NY, June 1997. Eurographics, Springer Wien. ISBN 3-211-83001-4.
[81] Boxin Tang. Orthogonal array-based Latin hypercubes. Journal of the American Statistical Association, 88:1392–1397, 1993.
[82] D. M. Titterington, A. F. M. Smith, and U. E. Makov. The Statistical Analysis
of Finite Mixture Distributions. John Wiley & Sons, New York, NY, 1985.
[83] Bruno Tuffin. On the use of low discrepancy sequences in Monte Carlo methods. Technical Report 1060, I.R.I.S.A., Rennes, France, 1996.
[84] Greg Turk. Generating random points in triangles. In Andrew S. Glassner, editor, Graphics Gems, pages 24–28. Academic Press, New York, 1990.
[85] J. G. van der Corput. Verteilungsfunktionen I. Nederl. Akad. Wetensch. Proc., 38:813–821, 1935.
[86] J. G. van der Corput. Verteilungsfunktionen II. Nederl. Akad. Wetensch. Proc., 38:1058–1066, 1935.
[87] Eric Veach. Robust Monte Carlo Methods for Light Transport Simulation.
PhD thesis, Stanford University, December 1997.
[88] Eric Veach and Leonidas Guibas. Bidirectional estimators for light transport. In 5th Annual Eurographics Workshop on Rendering, pages 147–162, June 13–15, 1994.
[89] Eric Veach and Leonidas Guibas. Optimally combining sampling techniques for Monte Carlo rendering. In SIGGRAPH 95 Conference Proceedings, pages 419–428. Addison-Wesley, August 1995.
[90] Eric Veach and Leonidas J. Guibas. Optimally combining sampling techniques for Monte Carlo rendering. In Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH, pages 419–428, August 1995.
[91] Eric Veach and Leonidas J. Guibas. Metropolis light transport. In Turner Whitted, editor, SIGGRAPH 97 Conference Proceedings, Annual Conference Series, pages 65–76. ACM SIGGRAPH, Addison Wesley, August 1997. ISBN 0-89791-896-7.
[92] Changyaw Wang. Physically correct direct lighting for distribution ray tracing. In David Kirk, editor, Graphics Gems 3. Academic Press, New York,
NY, 1992.
[93] Greg Ward. Adaptive shadow testing for ray tracing. In Proceedings of the
Second Eurographics Workshop on Rendering (Barcelona, May 1991), 1991.
[94] Gregory J. Ward and Paul Heckbert. Irradiance gradients. Third Eurographics Workshop on Rendering, pages 85–98, May 1992.
[95] Gregory J. Ward, Francis M. Rubinstein, and Robert D. Clear. A ray tracing solution for diffuse interreflection. In John Dill, editor, Computer Graphics (SIGGRAPH 88 Proceedings), volume 22, pages 85–92, August 1988.