With its intuitive yet rigorous approach to machine learning, this text provides students
with the fundamental knowledge and practical tools needed to conduct research and
build data-driven products. The authors prioritize geometric intuition and algorithmic
thinking, and include detail on all the essential mathematical prerequisites, to offer a
fresh and accessible way to learn. Practical applications are emphasized, with examples
from disciplines including computer vision, natural language processing, economics,
neuroscience, recommender systems, physics, and biology. Over 300 color illustrations are included and have been meticulously designed to enable an intuitive grasp of technical concepts, and over 100 in-depth coding exercises (in Python) provide a real understanding of crucial machine learning algorithms. A suite of online resources including sample code, data sets, interactive lecture slides, and a solutions manual is provided online, making this an ideal text both for graduate courses on machine learning and for individual reference and self-study.
Jeremy Watt received his PhD in Electrical Engineering from Northwestern University,
and is now a machine learning consultant and educator. He teaches machine learning,
deep learning, mathematical optimization, and reinforcement learning at Northwestern
University.
Aggelos K. Katsaggelos is a professor at Northwestern University, where he heads the Image and Video Processing Laboratory. He is a Fellow of IEEE, SPIE, EURASIP, and OSA and the recipient of the IEEE Third Millennium Medal (2000).
Machine Learning Refined
JEREMY WATT
Northwestern University, Illinois
REZA BORHANI
Northwestern University, Illinois
AGGELOS K. KATSAGGELOS
Northwestern University, Illinois
University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
79 Anson Road, #06–04/06, Singapore 079906
www.cambridge.org
Information on this title: www.cambridge.org/9781108480727
DOI: 10.1017/9781108690935
© Cambridge University Press 2020
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2020
Printed and bound in Great Britain by Clays Ltd, Elcograf S.p.A.
A catalogue record for this publication is available from the British Library.
ISBN 978-1-108-48072-7 Hardback
Additional resources for this publication at www.cambridge.org/watt2
Cambridge University Press has no responsibility for the persistence or accuracy
of URLs for external or third-party internet websites referred to in this publication
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
To our families:
Preface page xii
Acknowledgements xxii
1 Introduction to Machine Learning 1
1.1 Introduction 1
1.2 Distinguishing Cats from Dogs: a Machine Learning Approach 1
1.3 The Basic Taxonomy of Machine Learning Problems 6
1.4 Mathematical Optimization 16
1.5 Conclusion 18
Part I Mathematical Optimization 19
2 Zero-Order Optimization Techniques 21
2.1 Introduction 21
2.2 The Zero-Order Optimality Condition 23
2.3 Global Optimization Methods 24
2.4 Local Optimization Methods 27
2.5 Random Search 31
2.6 Coordinate Search and Descent 39
2.7 Conclusion 40
2.8 Exercises 42
3 First-Order Optimization Techniques 45
3.1 Introduction 45
3.2 The First-Order Optimality Condition 45
3.3 The Geometry of First-Order Taylor Series 52
3.4 Computing Gradients Efficiently 55
3.5 Gradient Descent 56
3.6 Two Natural Weaknesses of Gradient Descent 65
3.7 Conclusion 71
3.8 Exercises 71
4 Second-Order Optimization Techniques 75
4.1 The Second-Order Optimality Condition 75
11.4 Naive Cross-Validation 335
References 564
Index 569
Preface
For eons we humans have sought out rules or patterns that accurately describe how important systems in the world around us work, whether these systems be natural or man-made. We do this so that we can better understand a given system, predict its future behavior, and ultimately, control it. However, the process of finding the "right" rule that seems to govern a given system has historically been no easy task. For most of our history data (glimpses of a given system at work) has been an extremely scarce commodity. Moreover, our ability to compute, to try out various candidate rules on the data available, has been just as limited. These constraints narrowed the range of phenomena scientific pioneers of the past could investigate and inevitably forced them to use philosophical and/or visual approaches to rule-finding. Today, however, we live in a world awash in data, and have colossal computing power at our disposal, so that the heirs of those great pioneers can tackle a much wider array of problems and take a much more empirical approach to rule-finding. Machine learning, the topic of this textbook, is a term used to describe a broad (and growing) collection of such data-driven tools for rule-finding.
In the past decade the user base of machine learning has grown dramatically. No longer confined largely to computer science, engineering, and mathematics departments, the users of machine learning now include students and researchers from a wide range of quantitative disciplines. This text is the result of our effort to distill machine learning into its most fundamental components, and a curated presentation of the material that we believe will most benefit this broadening audience of learners. It contains fresh and intuitive yet rigorous descriptions of the most fundamental concepts in the field.
Book Overview
The second edition of this text is a complete revision of our first endeavor, with virtually every chapter of the original rewritten from the ground up and eight new chapters of material added, doubling the size of the first edition. Topics from the first edition, including One-versus-All classification and Principal Component Analysis, have been reworked and polished, and a swath of new topics has been added throughout the text. While heftier in size, the intent of our original attempt has remained unchanged.
Part I of the text covers the mathematical optimization techniques needed to perform not only the tuning of individual machine learning models (introduced in Part II) but the training of virtually every model in the text, with first- and second-order methods detailed in Chapters 3 and 4, respectively. More specifically, this part of the text covers zero-, first-, and second-order optimization techniques. Part II then describes linear supervised and unsupervised learning, and Part III turns to nonlinear supervised and unsupervised learning in Chapter 10, where we introduce the motivation for nonlinear modeling as well as the three main universal approximators of machine learning: fixed-shape kernels, neural networks, and trees, where we discuss the properties of each as a universal approximator. To get the most out of this part of the book we strongly recommend that Chapter 11 and the fundamental ideas therein are studied and understood before moving on to the chapters that follow it. Finally, the appendices review a range of subjects that the readers will need to understand in order to make full use of the text.
Appendix A describes momentum acceleration and normalized gradient schemes, which enhance the standard gradient descent scheme in various ways (producing, e.g., the RMSProp and Adam first-order methods). Appendix B reviews, in addition to the derivative/gradient, higher-order derivatives, the Hessian matrix, and the forward/backward mode of automatic differentiation, while a further appendix reviews fundamental elements of linear algebra, including vector/matrix arithmetic and the notions of spanning sets and orthogonality. These reviews are written both for readers encountering this material for the first time, as well as for more knowledgeable readers who yearn for a more intuitive and serviceable treatment than what is currently available today. To make full use of the text one needs only a basic understanding of vector algebra and computer programming. We also provide several recommended roadmaps for navigating the text based on a variety of learning outcomes and university course structures (touching on different subsets of topics), as described further under "Instructors: How to use this Book" below.
We believe that intuitive leaps precede intellectual ones, and to this end defer more formal treatments where possible in favor of a fresh and consistent geometric perspective throughout the text. We believe that this geometric viewpoint not only aids in the understanding of individual concepts in the text, but also that it helps establish revealing connections between ideas often regarded as fundamentally distinct (e.g., the logistic regression and Support Vector Machine classifiers, kernels and fully connected neural networks, etc.).
We also place significant emphasis on implementation, providing readers with many coding exercises, allowing them to "get their hands dirty" and "learn by doing," practicing
the concepts introduced in the body of the text. While in principle any program-
ming language can be used to complete the text’s coding exercises, we highly
recommend using Python for its ease of use and large support community. We
also recommend using the open-source Python libraries NumPy, autograd, and
matplotlib, as well as the Jupyter notebook editor to make implementing and
testing code easier. A complete set of installation instructions, datasets, as well as other accompanying resources can be found at
https://github.com/jermwatt/machine_learning_refined
This site also contains instructions for installing Python as well as a number
of other free packages that students will find useful in completing the text’s
exercises.
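To give a flavor of this recommended toolset, here is a minimal sketch (our own illustration, not code from the text or its repository) showing how autograd can automatically compute the gradient of a simple function written with NumPy, the basic operation underlying many of the optimization-oriented exercises:

```python
import autograd.numpy as np   # thinly wrapped NumPy whose operations autograd can trace
from autograd import grad

def g(w):
    # a simple cost function of a two-dimensional input w
    return np.sum(w**2) + np.cos(w[0])

gradient_of_g = grad(g)                      # autograd constructs the gradient function
print(gradient_of_g(np.array([1.0, 2.0])))   # evaluate the gradient at a sample point
```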
Instructors: How to Use this Book
This book has been used as a basis for a number of machine learning courses, ranging from introductory treatments of the essentials of machine learning to more advanced courses on mathematical optimization and deep learning for graduate students.
A course on the essentials of machine learning. With its treatment of the fundamentals of the field, the text is well suited to quarter-based programs and universities where a deep dive into the entirety of the book is not feasible due to time constraints. Topics for such a course are outlined in the roadmap shown in Figure 0.1.
A course on the full treatment of machine learning. A semester-long course based on this text expands on the essentials course outlined above both in terms of breadth and depth of topics covered, following the roadmap shown in Figure 0.2.
A course on mathematical optimization for machine learning and deep learning. Such a course draws on the optimization techniques from Part I of the text (as well as Appendix A). All students in general, and those taking an optimization for machine learning course in particular, benefit from seeing the role optimization plays in identifying the "right" nonlinearity via the processes of boosting and regularization. Special topics – like batch normalization and the forward/backward mode of automatic differentiation – can also be covered. A recommended roadmap for such a course – including appropriate chapters, sections, and topics to cover – is shown in Figure 0.3.
A course on introductory deep learning. Such a course is suitable for students who have had prior exposure to fundamental machine learning concepts, and can begin with a discussion of appropriate first-order optimization techniques; where needed, a refresher on the basics of machine learning may be given using selected portions of Part II of the text. A discussion of core deep learning topics – including backpropagation and the forward/backward mode of automatic differentiation, as well as special topics like batch normalization and early-stopping-based cross-validation – can then be made using Chapters 11, 13, and Appendices A and B of the text. Coverage of further topics in deep learning – like convolutional and recurrent networks – can be found by visiting the book's GitHub repository. A recommended roadmap for such a course is shown in Figure 0.4.
Figure 0.1 Recommended study roadmap for a course on the essentials of machine learning, including requisite chapters (left column), sections (middle column), and topics (right column) to cover. This roadmap is well suited to quarter-based programs and courses where machine learning is not the sole focus but a key component of some broader course of study. Note that chapters are grouped together visually based on text layout detailed under "Book Overview" in the Preface. See the section titled "Instructors: How to Use This Book" in the Preface for further details.
Figure 0.2 Recommended study roadmap for a full treatment of standard machine learning subjects, including chapters, sections, and topics to cover. This plan entails a more in-depth coverage of machine learning topics compared to the essentials roadmap given in Figure 0.1, and is best suited for senior undergraduate/early graduate students in semester-based programs. See the section titled "Instructors: How To Use This Book" in the Preface for further details.
Figure 0.3 Recommended study roadmap for a course on mathematical optimization for machine learning and deep learning, including chapters, sections, as well as topics to cover. See the section titled "Instructors: How To Use This Book" in the Preface for further details.
Figure 0.4 Recommended study roadmap for a course on introductory deep learning, including chapters, sections, as well as topics to cover. See the section titled "Instructors: How To Use This Book" in the Preface for further details.
Acknowledgements
This text could not have been written in anything close to its current form without the support of many colleagues, whose suggestions and new ideas included in the second edition of this text greatly improved it as a whole. We are also very grateful for the many students over the years that provided insightful feedback on the content of this text, with special thanks to Bowen, whose feedback greatly improved the work.
Finally, a big thanks to Mark McNess Rosengren and the entire Standing Passengers crew for helping us stay caffeinated during the writing of this text.
1 Introduction to Machine Learning
1.1 Introduction
Machine learning is a unified algorithmic framework designed to identify com-
putational models that accurately describe empirical data and the phenomena
underlying it, with little or no human involvement. While still a young dis-
cipline with much more awaiting discovery than is currently known, today machine learning powers a wide array of applications, from computer vision and speech recognition to predictive analytics (leveraged for sales and economic forecasting), to just name a few.
1.2 Distinguishing Cats from Dogs: a Machine Learning Approach
To get a sense of how machine learning works, we begin with a toy problem: teaching a computer to distinguish between pictures of cats and those of dogs. This will allow us to informally describe the terminology and procedures involved in solving the typical machine learning problem.
Do you recall how you first learned about the difference between cats and dogs, and how they are different animals? The answer is probably no, as most
humans learn to perform simple cognitive tasks like this very early on in the
course of their lives. One thing is certain, however: young children do not need
some kind of formal scientific training, or a zoological lecture on felis catus and
canis familiaris species, in order to be able to tell cats and dogs apart. Instead,
they learn by example. They are naturally presented with many images of
what they are told by a supervisor (a parent, a caregiver, etc.) are either cats
or dogs, until they fully grasp the two concepts. How do we know when a
child can successfully distinguish between cats and dogs? Intuitively, when
they encounter new (images of) cats and dogs, and can correctly identify each new example or, in other words, when they can generalize what they have learned to new, previously unseen, examples.
Like human beings, computers can be taught how to perform this sort of task as well. This sort of task, where we aim to teach a computer to distinguish between different types or classes of things (here cats and dogs), is referred to as a classification problem, and is solved through the series of steps detailed below.
1. Data collection. Like humans, a computer must be trained to recognize the difference between these two types of animals by learning from a batch of examples, typically referred to as a training set of data. Figure 1.1 shows such a training set consisting of a few images of different cats and dogs. Intuitively, the larger and more diverse the training set the better a computer (or human) can perform a learning task, since exposure to a wider breadth of examples gives the learner more experience.
Figure 1.1 A training set consisting of six images of cats (highlighted in blue) and six images of dogs (highlighted in red). This set is used to train a machine learning model that can distinguish between future images of cats and dogs. The images in this figure were taken from [1].
2. Feature design. Think for a moment about how we (humans) tell the difference between images containing cats from those containing dogs. We use color, size, the shape of the ears or nose, and/or some combination of these features in order
to distinguish between the two. In other words, we do not just look at an image
as simply a collection of many small square pixels. We pick out grosser details,
or features, from images like these in order to identify what it is that we are
looking at. This is true for computers as well. In order to successfully train a
computer to perform this task (and any machine learning task more generally)
we need to provide it with properly designed features or, ideally, have it find or learn such features itself.
Designing quality features is typically not a trivial task as it can be very ap-
plication dependent. For instance, a feature like color would be less helpful in
discriminating between cats and dogs (since many cats and dogs share similar
hair colors) than it would be in telling grizzly bears and polar bears apart! More-
over, extracting the features from a training dataset can also be challenging. For
example, if some of our training images were blurry or taken from a perspective
where we could not see the animal properly, the features we designed might not be accurately extracted in the first place.
However, for the sake of simplicity with our toy problem here, suppose we
can easily extract the following two features from each image in the training set:
size of nose relative to the size of the head, ranging from small to large, and shape of ears, ranging from pointy to round.
Figure 1.2 Feature space representation of the training set shown in Figure 1.1 where the horizontal and vertical axes represent the features nose size and ear shape, respectively. The fact that the cats and dogs from our training set lie in distinct regions of this feature space reflects a good choice of features.
Examining the training images shown in Figure 1.1, we can see that all cats have small noses and pointy ears, while dogs generally have large noses and round ears. Notice that with the current choice of features each image can now be represented by just two numbers: a number expressing the relative nose size, and another number capturing the pointiness or roundness of the ears. In other words, we can represent each image in our training set as a point in a two-dimensional feature space where the features nose size and ear shape are the horizontal and vertical coordinate axes, respectively, as illustrated in Figure 1.2.
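To make this representation concrete, the illustrative snippet below (with entirely made-up feature values, since the true measurements are not given in the text) stores the training set as an array of (nose size, ear shape) pairs together with a label for each image:

```python
import numpy as np

# Hypothetical feature values for the twelve training images in Figure 1.1.
# Each row is one image: [nose size (0 = small, 1 = large), ear shape (0 = pointy, 1 = round)].
x_train = np.array([
    [0.10, 0.20], [0.20, 0.10], [0.15, 0.30],   # cats: small noses, pointy ears
    [0.30, 0.20], [0.25, 0.15], [0.20, 0.25],
    [0.70, 0.80], [0.80, 0.90], [0.90, 0.70],   # dogs: large noses, round ears
    [0.75, 0.85], [0.85, 0.80], [0.80, 0.75],
])
y_train = np.array([+1] * 6 + [-1] * 6)         # +1 denotes cat, -1 denotes dog
```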
3. Model training. With our feature representation of the training data, the machine learning problem of distinguishing cats from dogs becomes a simple geometric one: have the machine find a line or a curve that separates the cats from the dogs in our carefully designed feature space. Supposing for simplicity that we use a line, we must find the right values for its two parameters – a slope and vertical intercept – that define the line's orientation in the feature space. The line found in this manner is often referred to as a model, and the tuning of such a set of parameters to a training set is referred to as the training of a model.
Figure 1.3 shows a trained linear model (in black) which divides the feature
space into cat and dog regions. This linear model provides a simple compu-
tational rule for distinguishing between cats and dogs: when the feature rep-
resentation of a future image lies above the line (in the blue region) it will be
considered a cat by the machine, and likewise any representation that falls below
Figure 1.3 A trained linear model (shown in black) provides a computational rule for
distinguishing between cats and dogs. Any new image received in the future will be
classified as a cat if its feature representation lies above this line (in the blue region), and
a dog if the feature representation lies below this line (in the red region).
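As a minimal sketch of what such training can look like in practice (using the hypothetical feature arrays x_train and y_train defined above, and a plain gradient descent on a log-loss cost, rather than any particular method from later chapters), the snippet below tunes an intercept and two slope-like weights so that the resulting line separates the two classes:

```python
import numpy as np

def train_linear_classifier(x, y, alpha=1.0, iters=2000):
    """Tune weights w so that w[0] + w[1]*nose + w[2]*ear separates labels +1 and -1."""
    x_aug = np.hstack([np.ones((x.shape[0], 1)), x])   # prepend a 1 for the intercept
    w = np.zeros(x_aug.shape[1])
    for _ in range(iters):
        margins = y * (x_aug @ w)
        # gradient of the average log-loss: mean over points of -y * x / (1 + exp(margin))
        grad = -(x_aug * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        w = w - alpha * grad
    return w

w = train_linear_classifier(x_train, y_train)
# points with w[0] + w[1]*nose + w[2]*ear > 0 lie on the "cat" side of the learned line
```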
Figure 1.4 A validation set of cat and dog images (also taken from [1]). Notice that the images in this set are not highlighted in red or blue (as was the case with the training set shown in Figure 1.1) indicating that the true identity of each image is not revealed to the learner. Notice that one of the dogs, the Boston terrier in the bottom right corner, has both a small nose and pointy ears. Because of our chosen feature representation this dog will be misclassified as a cat by our trained model (see Figure 1.5).
4. Model validation. To validate the efficacy of our trained learner we now show
the computer a batch of previously unseen images of cats and dogs, referred to
generally as a validation set of data, and see how well it can identify the animal
in each image. In Figure 1.4 we show a sample validation set for the problem at
hand, consisting of three new cat and dog images. To do this, we take each new
image, extract our designed features (i.e., nose size and ear shape), and simply
check which side of our line (or classifier) the feature representation falls on. In
this instance, as can be seen in Figure 1.5, all of the new cats and all but one dog
from the validation set have been identified correctly by our trained model.
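Continuing the hypothetical sketch above, checking which side of the trained line a validation image falls on amounts to checking the sign of the linear model evaluated at its feature representation (the feature values here are again invented for illustration):

```python
import numpy as np

def predict(features, w):
    """Return +1 (cat side of the line) or -1 (dog side) for [nose size, ear shape]."""
    return int(np.sign(w[0] + w[1] * features[0] + w[2] * features[1]))

# A made-up validation point resembling the Boston terrier: small nose, pointy ears.
print(predict([0.20, 0.20], w))   # should print +1, i.e., the dog is misclassified as a cat
```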
The misidentification of the single dog (a Boston terrier) is largely the result
of our choice of features, which we designed based on the training set in Figure
1.1, and to some extent our decision to use a linear model (instead of a nonlinear
one). This dog has been misidentified simply because its features, a small nose
and pointy ears, match those of the cats from our training set. Therefore, while
it first appeared that a combination of nose size and ear shape could indeed
distinguish cats from dogs, we now see through validation that our training set
was perhaps too small and not diverse enough for this choice of features to be completely effective. We can improve the learner in several ways. First, we should collect more data, forming a larger and more diverse training set. Second, we can consider designing/including more discriminating features (perhaps eye
color, tail shape, etc.) that further help distinguish cats from dogs using a linear
model. Finally, we can also try out (i.e., train and validate) an array of nonlinear
models with the hopes that a more complex rule might better distinguish be-
tween cats and dogs. Figure 1.6 compactly summarizes the four steps involved in solving our toy cat-versus-dog classification problem.
Figure 1.5 Identification of (the feature representation of) validation images using our
trained linear model. The Boston terrier (pointed to by an arrow) is misclassified as a cat
since it has pointy ears and a small nose, just like the cats in our training set.
Figure 1.6 The schematic pipeline of our toy cat-versus-dog classification problem. The
same general pipeline is used for essentially all machine learning problems.
Machine learning problems fall into two main categories called supervised and unsupervised learning, which
we discuss next.
1.3 The Basic Taxonomy of Machine Learning Problems
Supervised learning problems (like the prototypical problem outlined in Section 1.2) refer to the automatic learning of computational rules involving input/output relationships. Applicable to a wide array of situations and data types, this type of problem comes in two forms, called regression and classification, depending on the type of output being predicted.
Regression
Suppose we wanted to predict the share price of a company that is about to go public. To do so we first gather a training set of companies (preferably from the same domain) with known share prices. Next, we need to design feature(s)
that are thought to be relevant to the task at hand. The company’s revenue is one
such potential feature, as we can expect that the higher the revenue the more
expensive a share of stock should be. To connect the share price (output) to the
revenue (input) we can train a simple linear model or regression line using our
training data.
Figure 1.7 (top-left panel) A toy training dataset consisting of ten corporations' share price and revenue values. (top-right panel) A linear model is fit to the data. This trend line models the overall trajectory of the points and can be used for prediction in the future, as illustrated in the bottom panels.
The top panels of Figure 1.7 show a toy dataset comprising share price versus
revenue information for ten companies, as well as a linear model fit to this data.
Once the model is trained, the share price of a new company can be predicted
based on its revenue, as depicted in the bottom panels of this figure. Finally,
comparing the predicted price to the actual price for a validation set of data
we can test the performance of our linear regression model and apply changes
as needed, for example, designing new features (e.g., total assets, total equity,
number of employees, years active, etc.) and/or trying more complex nonlinear
models.
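To make the regression idea concrete, the following minimal sketch (with invented revenue and share-price numbers, not the data plotted in Figure 1.7) fits a least squares line to a small training set and uses it to predict the share price of a new company from its revenue:

```python
import numpy as np

# Invented (revenue, share price) training pairs for ten companies, illustration only.
revenue = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5])
price   = np.array([10., 13., 15., 18., 21., 22., 26., 28., 31., 33.])

# Fit a line: price is approximately slope * revenue + intercept, by least squares.
slope, intercept = np.polyfit(revenue, price, deg=1)

# Predict the share price of a new company based on its revenue.
new_revenue = 4.2
predicted_price = slope * new_revenue + intercept
print(round(predicted_price, 2))
```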
This sort of task, i.e., fitting a model to a set of training data so that predictions can be made about new, unseen inputs, is known as regression. We discuss regression in detail beginning with the linear case in Chapter 5, and move to nonlinear models starting in Chapter 10 and throughout the remainder of the text.
Example 1.1 The rise of student loan debt in the United States
Figure 1.8 (data taken from [2]) shows the total student loan debt (that is money
borrowed by students to pay for college tuition, room and board, etc.) held
by citizens of the United States from 2006 to 2014, measured quarterly. Over
the eight-year period reflected in this plot the student debt has nearly tripled,
totaling over one trillion dollars by the end of 2014. The regression line (in
black) fits this dataset quite well and, with its sharp positive slope, emphasizes
the point that student debt is rising dangerously fast. Moreover, if this trend
continues, we can use the regression line to predict that total student debt will
surpass two trillion dollars by the year 2026 (we revisit this problem later in
Exercise 5.1).
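As a rough, back-of-the-envelope version of this extrapolation (using approximate start and end values read off Figure 1.8 rather than the actual quarterly data), a straight-line trend already carries the debt past two trillion dollars by 2026:

```python
# Approximate values read off Figure 1.8, for illustration only:
# roughly $0.4 trillion of debt in 2006, rising to roughly $1.1 trillion by the end of 2014.
t0, d0 = 2006.0, 0.4
t1, d1 = 2014.0, 1.1

slope = (d1 - d0) / (t1 - t0)            # trillions of dollars per year
debt_2026 = d1 + slope * (2026.0 - t1)   # linear extrapolation of the trend line
print(round(debt_2026, 2))               # about 2.15, i.e., past the two-trillion-dollar mark
```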
Figure 1.8 Figure associated with Example 1.1, illustrating total student loan debt in the United States measured quarterly from 2006 to 2014 (horizontal axis: year; vertical axis: debt in trillions of dollars). The rapid increase rate of the debt, measured by the slope of the trend line fit to the data, confirms that student debt is rising dangerously fast.