Reinforcement Learning
With Open AI, TensorFlow and Keras Using Python
Abhishek Nandy
Manisha Biswas
Reinforcement Learning
Abhishek Nandy, Kolkata, West Bengal, India
Manisha Biswas, North 24 Parganas, West Bengal, India
ISBN-13 (pbk): 978-1-4842-3284-2 ISBN-13 (electronic): 978-1-4842-3285-9
https://doi.org/10.1007/978-1-4842-3285-9
Library of Congress Control Number: 2017962867
Copyright © 2018 by Abhishek Nandy and Manisha Biswas
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole
or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical
way, and transmission or information storage and retrieval, electronic adaptation, computer
software, or by similar or dissimilar methodology now known or hereafter developed.
Trademarked names, logos, and images may appear in this book. Rather than use a trademark
symbol with every occurrence of a trademarked name, logo, or image we use the names, logos,
and images only in an editorial fashion and to the benefit of the trademark owner, with no
intention of infringement of the trademark.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if
they are not identified as such, is not to be taken as an expression of opinion as to whether or not
they are subject to proprietary rights.
While the advice and information in this book are believed to be true and accurate at the
date of publication, neither the authors nor the editors nor the publisher can accept any legal
responsibility for any errors or omissions that may be made. The publisher makes no warranty,
express or implied, with respect to the material contained herein.
Cover image by Freepik (www.freepik.com)
Managing Director: Welmoed Spahr
Editorial Director: Todd Green
Acquisitions Editor: Celestin Suresh John
Development Editor: Matthew Moodie
Technical Reviewer: Avirup Basu
Coordinating Editor: Sanchita Mandal
Copy Editor: Kezia Endsley
Compositor: SPi Global
Indexer: SPi Global
Artist: SPi Global
Distributed to the book trade worldwide by Springer Science+Business Media New York,
233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax (201) 348-4505,
e-mail [email protected], or visit www.springeronline.com. Apress Media,
LLC is a California LLC and the sole member (owner) is Springer Science + Business Media
Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation.
For information on translations, please e-mail [email protected], or visit
http://www.apress.com/rights-permissions.
Apress titles may be purchased in bulk for academic, corporate, or promotional use. eBook
versions and licenses are also available for most titles. For more information, reference our
Print and eBook Bulk Sales web page at http://www.apress.com/bulk-sales.
Any source code or other supplementary material referenced by the author in this book is available to readers on GitHub via the book's product page, located at www.apress.com/978-1-4842-3284-2. For more detailed information, please visit http://www.apress.com/source-code.
Printed on acid-free paper
Contents
■■Chapter 1: Reinforcement Learning Basics .......................... 1
What Is Reinforcement Learning? .................................... 1
Faces of Reinforcement Learning .................................... 6
The Flow of Reinforcement Learning ................................. 7
Different Terms in Reinforcement Learning .......................... 9
Gamma .............................................................. 10
Lambda ............................................................. 10
Conclusion ......................................................... 18
■■Chapter 2: RL Theory and Algorithms .............................. 19
Theoretical Basis of Reinforcement Learning ........................ 19
Where Reinforcement Learning Is Used ............................... 21
Manufacturing ...................................................... 22
Inventory Management ............................................... 22
Delivery Management ................................................ 22
Finance Sector ..................................................... 23
What Is MDP? ....................................................... 47
The Markov Property ................................................ 48
The Markov Chain ................................................... 49
MDPs ............................................................... 53
SARSA .............................................................. 54
Temporal Difference Learning ....................................... 54
How SARSA Works .................................................... 56
Q Learning ......................................................... 56
What Is Q? ......................................................... 57
How to Use Q ....................................................... 57
SARSA Implementation in Python ..................................... 58
The Entire Reinforcement Logic in Python ........................... 64
OpenAI Universe .................................................... 84
Conclusion ......................................................... 87
■■Chapter 4: Applying Python to Reinforcement Learning ............. 89
Q Learning with Python ............................................. 89
The Maze Environment Python File ................................... 91
The RL_Brain Python File ........................................... 94
Updating the Function .............................................. 95
Conclusion ......................................................... 128
■■Chapter 5: Reinforcement Learning with Keras, TensorFlow, and ChainerRL .. 129
What Is Keras? ..................................................... 129
Using Keras for Reinforcement Learning ............................. 130
Using ChainerRL .................................................... 134
Installing ChainerRL ............................................... 134
Pipeline for Using ChainerRL ....................................... 137
Conclusion ......................................................... 153
Conclusion ......................................................... 163
Index .............................................................. 165
About the Authors
About the Technical Reviewer
Acknowledgments
I want to dedicate this book to my mom and dad. Thank you to my teachers and my
co-author, Abhishek Nandy. Thanks also to Abhishek Sur, who mentors me at work
and helps me adapt to new technologies. I would also like to dedicate this book to my
company, InSync Tech-Fin Solutions Ltd., where I started my career and have grown
professionally.
—Manisha Biswas
Introduction
CHAPTER 1
Reinforcement Learning Basics
This chapter is a brief introduction to Reinforcement Learning (RL) and includes some
key concepts associated with it.
In this chapter, we talk about Reinforcement Learning as a core concept and then define it further. We show a complete flow of how Reinforcement Learning works and discuss exactly where Reinforcement Learning fits into artificial intelligence (AI). After that, we define key terms related to Reinforcement Learning: we start with agents, then touch on environments, and finally talk about the connection between agents and environments.
In the maze, the central idea is to keep moving. The goal is to clear the maze and reach the end as quickly as possible.
The following Reinforcement Learning concepts, and how they map to this working scenario, are discussed later in this chapter.
• The agent is the intelligent program
• The environment is the maze
• The state is the place in the maze where the agent is
• The action is the move we take to move to the next state
• The reward is the points associated with reaching a particular
state. It can be positive, negative, or zero
We use the maze example to apply concepts of Reinforcement Learning. We will be
describing the following steps:
Reward predictions are made iteratively: we update the value of each state in the maze based on the value of the best subsequent state and the immediate reward obtained. This is called the update rule.
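To make the update rule concrete, here is a minimal sketch in Python. It is not the book's own code: the state names, rewards, and the discount factor gamma (introduced later in this chapter) are assumptions chosen purely for illustration.

```python
# A minimal sketch of the update rule on an assumed four-state maze.
gamma = 0.9  # discount factor (discussed later in this chapter)

# For each state, the actions available and the state each action leads to.
transitions = {
    "start": {"right": "middle"},
    "middle": {"right": "near_goal", "left": "start"},
    "near_goal": {"right": "goal", "left": "middle"},
    "goal": {},
}
# Immediate reward for entering each state.
rewards = {"start": 0, "middle": 0, "near_goal": 0, "goal": 10}

# Value of each state, updated iteratively.
values = {state: 0.0 for state in transitions}

for _ in range(50):  # repeat until the values settle
    for state, actions in transitions.items():
        if not actions:  # terminal state: nothing to update
            continue
        # Update rule: immediate reward plus discounted value
        # of the best subsequent state.
        values[state] = max(
            rewards[next_state] + gamma * values[next_state]
            for next_state in actions.values()
        )

print(values)  # states closer to the goal end up with higher values
```

Running this repeatedly propagates the goal reward backward through the maze, which is exactly the iterative prediction described above.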
The constant movement of the Reinforcement Learning process is driven by decision-making.
Reinforcement Learning works on a trial-and-error basis because it is very difficult to predict which action to take from a given state. From the maze problem itself, you can see that in order to get the optimal path for the next move, you have to weigh a lot of factors; the decision is always based on states, actions, and rewards. For the maze, we also have to compute and account for the probability of taking each step.
The maze also does not consider the reward of the previous step; it specifically considers the move to the next state. The concept is the same for all Reinforcement Learning processes.
Here are the steps of this process:
1. We have a problem.
2. We have to apply Reinforcement Learning.
3. We consider applying Reinforcement Learning as a
Reinforcement Learning box.
4. The Reinforcement Learning box contains all essential
components needed for applying the Reinforcement Learning
process.
5. The Reinforcement Learning box contains agents,
environments, rewards, punishments, and actions.
Reinforcement Learning works well with intelligent program agents that receive rewards and punishments when interacting with an environment.
The interaction happens between the agents and the environments, as shown in
Figure 1-4.
From Figure 1-4, you can see that there is a direct interaction between the agents and
its environments. This interaction is very important because through these exchanges,
the agent adapts to the environments. When a Machine Learning program, robot, or
Reinforcement Learning program starts working, the agents are exposed to known or
unknown environments and the Reinforcement Learning technique allows the agents to
interact and adapt according to the environment’s features.
Accordingly, the agents work and the Reinforcement Learning robot learns. In order
to get to a desired position, we assign rewards and punishments.
Now, the program has to work out the optimal path to get the maximum reward; if it fails, it takes punishments (that is, it receives negative points). In order to reach a new position, which is also known as a state, it must perform what we call an action.
To perform an action, we implement a function, also known as a policy. A policy is therefore a function that decides which action to take in a given state.
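To make the idea of a policy concrete, here is a minimal sketch (not taken from the book) in which the policy is literally a Python function mapping a state to an action; the state and action names are invented for illustration.

```python
# A policy is just a mapping from state to action.
# The states and actions below are hypothetical maze positions and moves.
def policy(state):
    """Return the action to take in the given state."""
    if state == "start":
        return "move_right"
    elif state == "middle":
        return "move_up"
    else:
        return "move_right"

action = policy("start")  # -> "move_right"
```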
The interaction happens from one state to another. The exact connection starts between an agent and the environment. Rewards are received on a regular basis, and we take appropriate actions to move from one state to another.
The key points of consideration after going through the details are the following:
• The Reinforcement Learning cycle works in an interconnected
manner.
• There is distinct communication between the agent and the
environment.
• The distinct communication happens with rewards in mind.
• The object or robot moves from one state to another.
• An action is taken to move from one state to another
An agent is always learning and finally makes a decision. An agent is a learner, which means it may explore different paths. When the agent starts training, it starts to adapt and intelligently learns from its surroundings.
The agent is also a decision maker because it tries to take an action that will get it the
maximum reward.
When the agent starts interacting with the environment, it can choose an action and
respond accordingly.
From then on, new scenes are created. When the agent changes from one place to
another in an environment, every change results in some kind of modification. These
changes are depicted as scenes. The transition that happens in each step helps the agent
solve the Reinforcement Learning problem more effectively.
Let’s look at another scenario of state transitioning, as shown in Figures 1-8 and 1-9.
At each state transition, the reward can take a different value, hence we describe the reward with varying values in each step, such as r0, r1, r2, etc. Gamma (γ) is called the discount factor, and it determines how future rewards are taken into account:
• A gamma value of 0 means the reward is associated with the
current state only
• A gamma value of 1 means that the reward is long-term
Gamma
Gamma is used in each state transition and is a constant value at each state change.
Gamma allows you to give information about the type of reward you will be getting in
every state. Generally, the values determine whether we are looking for reward values in
each state only (in which case, it’s 0) or if we are looking for long-term reward values (in
which case it’s 1).
Lambda
Lambda is generally used when we are dealing with temporal difference problems. It is
more involved with predictions in successive states.
Increasing the value of lambda in each state indicates that our algorithm is learning faster, and a faster-learning algorithm generally yields better results when using Reinforcement Learning techniques.
As you’ll learn later, temporal differences can be generalized to what we call
TD(Lambda). We discuss it in greater depth later.
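For readers who want an early feel for how lambda shows up in practice, here is a minimal TD(lambda) sketch with eligibility traces. It is a simplified illustration under assumed states, rewards, learning rate, and discount values, not the book's implementation; TD(lambda) itself is covered in greater depth later.

```python
# A minimal TD(lambda) sketch with eligibility traces on a tiny chain of states.
# States, rewards, alpha (learning rate), gamma, and lam are all assumed values.
states = ["s0", "s1", "s2", "terminal"]
V = {s: 0.0 for s in states}           # state-value estimates
alpha, gamma, lam = 0.1, 0.9, 0.8

for episode in range(100):
    traces = {s: 0.0 for s in states}  # eligibility traces reset each episode
    state = "s0"
    while state != "terminal":
        # Walk one step forward; reward only on reaching the terminal state.
        next_state = states[states.index(state) + 1]
        reward = 1.0 if next_state == "terminal" else 0.0

        # Temporal-difference error for this transition.
        delta = reward + gamma * V[next_state] - V[state]
        traces[state] += 1.0           # bump the trace of the visited state

        # Every state is updated in proportion to its (decaying) trace.
        for s in states:
            V[s] += alpha * delta * traces[s]
            traces[s] *= gamma * lam   # lambda controls how quickly credit fades

        state = next_state

print(V)  # states nearer the rewarding terminal state get higher values
```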
RL Characteristics
We talk about characteristics next. The characteristics are generally what the agent does
to move to the next state. The agent considers which approach works best to make the
next move.
The two characteristics are
• Trial and error search.
• Delayed reward.
As you probably have gathered, Reinforcement Learning works on three things combined:
(S, A, R)
That is, states, actions, and rewards.
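The (S, A, R) triple can be seen directly in the basic agent-environment interaction loop. The sketch below is a generic illustration with an invented environment and action set, not code from the book.

```python
import random

# A generic interaction loop built around the (state, action, reward) triple.
# The environment behaviour and action names are invented for illustration.
def step(state, action):
    """Hypothetical environment: returns (next_state, reward)."""
    next_state = state + (1 if action == "forward" else -1)
    reward = 1.0 if next_state == 5 else 0.0
    return next_state, reward

state = 0
history = []  # list of (S, A, R) triples
for _ in range(20):
    action = random.choice(["forward", "back"])   # trial-and-error choice
    next_state, reward = step(state, action)
    history.append((state, action, reward))       # the (S, A, R) triple
    state = next_state

print(history[:3])
```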
Agents
In terms of Reinforcement Learning, agents are the software programs that make
intelligent decisions. Agents should be able to perceive what is happening in the
environment. Here are the basic steps of the agents:
1. When the agent can perceive the environment, it can make
better decisions.
2. The decision the agents take results in an action.
3. The action that the agents perform must be the best, the
optimal, one.
Software agents might be autonomous or they might work together with other agents
or with people. Figure 1-14 shows how the agent works.
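Those three steps (perceive, decide, act) can be summarized in a tiny agent skeleton. This is a schematic sketch with invented method names and a simple greedy rule, not a class from the book or from any particular library.

```python
# A schematic agent skeleton mirroring the three steps above.
# Method names and the greedy decision rule are illustrative assumptions.
class Agent:
    def __init__(self, actions):
        self.actions = actions
        self.value_of = {}                 # learned estimate per (state, action)

    def perceive(self, observation):
        """Step 1: turn the raw observation into the agent's notion of state."""
        return tuple(observation)

    def decide(self, state):
        """Step 2: pick the action currently believed to be optimal."""
        return max(self.actions,
                   key=lambda a: self.value_of.get((state, a), 0.0))

    def act(self, state):
        """Step 3: perform the chosen action."""
        return self.decide(state)
```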
RL Environments
The environments in the Reinforcement Learning space are made up of certain factors that determine their impact on the Reinforcement Learning agent. The agent must adapt accordingly to the environment. These environments can be 2D worlds or grids, or even a 3D world.
Here are some important features of environments:
• Deterministic
• Observable
• Discrete or continuous
• Single or multiagent
Deterministic
If we can infer and predict what will happen with a certain scenario in the future, we say
the scenario is deterministic.
It is easier for RL problems to be deterministic because we don't rely on an uncertain decision-making process to change state; each state transition has an immediate, predictable effect when we move from one state to another. The life of a Reinforcement Learning problem becomes easier.
When we are dealing with RL, the state model we get will be either deterministic or non-deterministic. That means we need to understand the mechanisms behind how deterministic finite automata (DFA) and non-deterministic finite automata (NDFA) work.
We are showing a state transition from a start state to a final state with the help of a diagram. It is a simple depiction where we can say that, with an input value of 1 or 0, a state transition occurs. A self-loop is created when the state receives an input and stays in the same state.
The working principle of the state diagram in Figure 1-16 can be explained as follows. In an NDFA, the issue is that when we are transitioning from one state to another, there is more than one option available, as we can see in Figure 1-16. From state S0, after getting an input such as 0, we can stay in state S0 or move to state S1. There is decision-making involved here, so it becomes difficult to know which action to take.
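The difference between the two kinds of transition functions can be sketched in a few lines of Python. The states, inputs, and branching choices below are assumptions chosen only for illustration, not the book's own code.

```python
import random

# Deterministic transition: each (state, input) pair leads to exactly one state.
deterministic = {
    ("S0", 1): "S1",
    ("S0", 0): "S0",   # self-loop: this input keeps us in S0
    ("S1", 1): "S1",
    ("S1", 0): "S0",
}

# Non-deterministic transition: the same (state, input) pair can lead to
# more than one state, so an extra decision (or chance) is involved.
non_deterministic = {
    ("S0", 0): ["S0", "S1"],   # on input 0 we may stay in S0 or move to S1
    ("S0", 1): ["S1"],
    ("S1", 0): ["S0"],
    ("S1", 1): ["S1"],
}

print(deterministic[("S0", 0)])                      # always "S0"
print(random.choice(non_deterministic[("S0", 0)]))   # "S0" or "S1"
```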
Observable
If we can say that the environment around us is fully observable, we have a perfect
scenario for implementing Reinforcement Learning.
An example of perfect observability is a chess game. An example of partial
observability is a poker game, where some of the cards are unknown to any one player.
Discrete or Continuous
If there is an unlimited (infinite) number of choices for transitioning to the next state, that is a continuous scenario. When there is a limited (finite) number of choices, that's called a discrete scenario.
Figure 1-18 shows how multiagent systems work. There is an interaction between two agents in order to make a decision.
Conclusion
This chapter touched on the basics of Reinforcement Learning and covered some key
concepts. We covered states and environments and how the structure of Reinforcement
Learning looks.
We also touched on the different kinds of interactions and learned about single-
agent and multiagent solutions.
The next chapter covers algorithms and discusses the building blocks of
Reinforcement Learning.