Probability Theory: A Concise Course
About this ebook
The author begins with basic concepts and moves on to combination of events, dependent events, and random variables. He then covers Bernoulli trials and the De Moivre-Laplace theorem, which involve three important probability distributions (binomial, Poisson, and normal or Gaussian). The last three chapters are devoted to limit theorems, a detailed treatment of Markov chains, and continuous Markov processes. Also included are appendixes on information theory, game theory, branching processes, and problems of optimal control. Each of the eight chapters and four appendixes has been equipped with numerous relevant problems (150 of them), many with hints and answers.
This volume is another in the popular series of fine translations from the Russian by Richard A. Silverman. Dr. Silverman, a former member of the Courant Institute of Mathematical Sciences of New York University and the Lincoln Laboratory of the Massachusetts Institute of Technology, is himself the author of numerous papers on applied probability theory. He has heavily revised the English edition and added new material. The clear exposition, the ample illustrations and problems, the cross-references, index, and bibliography make this book useful for self-study or the classroom.
PROBABILITY
THEORY
A CONCISE COURSE
Y. A. ROZANOV
Revised English Edition
Translated and Edited by
Richard A. Silverman
DOVER PUBLICATIONS, INC.
NEW YORK
Copyright © 1969 by Richard A. Silverman.
All rights reserved.
This Dover edition, first published in 1977, is an unabridged and slightly corrected republication of the revised English edition published by Prentice-Hall Inc., Englewood Cliffs, N. J., in 1969 under the title Introductory Probability Theory.
International Standard Book Number: 0-486-63544-9
Library of Congress Catalog Card Number: 77-78592
Manufactured in the United States by Courier Corporation
www.doverpublications.com
EDITOR’S PREFACE
This book is a concise introduction to modern probability theory and certain of its ramifications. By deliberate succinctness of style and judicious selection of topics, it manages to be both fast-moving and self-contained.
The present edition differs from the Russian original (Moscow, 1968) in several respects:
1. It has been heavily restyled with the addition of some new material. Here I have drawn from my own background in probability theory, information theory, etc.
2. Each of the eight chapters and four appendices has been equipped with relevant problems, many accompanied by hints and answers. There are 150 of these problems, in large measure drawn from the excellent collection edited by A. A. Sveshnikov (Moscow, 1965).
3. At the end of the book I have added a brief Bibliography, containing suggestions for collateral and supplementary reading.
R. A. S.
CONTENTS
1 BASIC CONCEPTS
1. Probability and Relative Frequency
2. Rudiments of Combinatorial Analysis
Problems
2 COMBINATION OF EVENTS
3. Elementary Events. The Sample Space
4. The Addition Law for Probabilities
Problems
3 DEPENDENT EVENTS
5. Conditional Probability
6. Statistical Independence
Problems
4 RANDOM VARIABLES
7. Discrete and Continuous Random Variables. Distribution Functions
8. Mathematical Expectation
9. Chebyshev’s Inequality. The Variance and Correlation Coefficient
Problems
5 THREE IMPORTANT PROBABILITY DISTRIBUTIONS
10. Bernoulli Trials. The Binomial and Poisson Distributions
11. The De Moivre-Laplace Theorem. The Normal Distribution
Problems
6 SOME LIMIT THEOREMS
12. The Law of Large Numbers
13. Generating Functions. Weak Convergence of Probability Distributions
14. Characteristic Functions. The Central Limit Theorem
Problems
7 MARKOV CHAINS
15. Transition Probabilities
16. Persistent and Transient States
17. Limiting Probabilities. Stationary Distributions
Problems
8 CONTINUOUS MARKOV PROCESSES
18. Definitions. The Sojourn Time
19. The Kolmogorov Equations
20. More on Limiting Probabilities. Erlang’s Formula
Problems
APPENDIX 1 INFORMATION THEORY
APPENDIX 2 GAME THEORY
APPENDIX 3 BRANCHING PROCESSES
APPENDIX 4 PROBLEMS OF OPTIMAL CONTROL
BIBLIOGRAPHY
INDEX
1
BASIC CONCEPTS
1. Probability and Relative Frequency
Consider the simple experiment of tossing an unbiased coin. This experiment has two mutually exclusive outcomes, namely heads and tails. The various factors influencing the outcome of the experiment are too numerous to take into account, at least if the coin tossing is fair. Therefore the outcome of the experiment is said to be random.
Everyone would certainly agree that the probability of getting heads and the probability of getting tails both equal 1/2. Intuitively, this answer is based on the idea that the two outcomes are equally likely or equiprobable, because of the very nature of the experiment. But hardly anyone will bother at this point to clarify just what he means by probability.
Continuing in this vein and taking these ideas at face value, consider an experiment with a finite number of mutually exclusive outcomes which are equiprobable, i.e., equally likely because of the nature of the experiment.
Let A denote some event associated with the possible outcomes of the experiment. Then the probability P(A) of the event A is defined as the fraction of the outcomes in which A occurs. More exactly,

P(A) = N(A)/N,     (1.1)

where N is the total number of outcomes of the experiment and N(A) is the number of outcomes leading to the occurrence of the event A.
Example 1. In tossing a well-balanced coin, there are N = 2 mutually exclusive equiprobable outcomes (heads and tails). Let A be either of these two outcomes. Then N(A) = 1, and hence

P(A) = 1/2.
Example 2. In throwing a single unbiased die, there are N = 6 mutually exclusive equiprobable outcomes, namely getting a number of spots equal to each of the numbers 1 through 6. Let A be the event consisting of getting an even number of spots. Then there are N(A) = 3 outcomes leading to the occurrence of A (which ones?), and hence

P(A) = 3/6 = 1/2.
Example 3. In throwing a pair of dice, there are N = 36 mutually exclusive equiprobable outcomes, each represented by an ordered pair (a, b), where a is the number of spots showing on the first die and b the number showing on the second die. Let A be the event that both dice show the same number of spots. Then A occurs whenever a = b, i.e., N(A) = 6. Therefore

P(A) = 6/36 = 1/6.
Remark. Despite its seeming simplicity, formula (1.1) can lead to nontrivial calculations. In fact, before using (1.1) in a given problem, we must find all the equiprobable outcomes, and then identify all those leading to the occurrence of the event A in question.
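The program of the Remark — enumerate all equiprobable outcomes, then count those favorable to A — can be sketched directly in code. The following is a minimal Python illustration (not from the book; the helper name `classical_probability` is ours), reproducing Examples 2 and 3 via formula (1.1):

```python
from fractions import Fraction
from itertools import product

def classical_probability(outcomes, event):
    """Formula (1.1): P(A) = N(A)/N over equiprobable outcomes."""
    favorable = [o for o in outcomes if event(o)]
    return Fraction(len(favorable), len(outcomes))

die = list(range(1, 7))

# Example 2: an even number of spots on a single die
p_even = classical_probability(die, lambda s: s % 2 == 0)

# Example 3: both dice in a pair show the same number of spots
pairs = list(product(die, die))
p_same = classical_probability(pairs, lambda ab: ab[0] == ab[1])
```

Using exact fractions rather than floats keeps the answers in the form the text uses (3/6 = 1/2 and 6/36 = 1/6).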
The accumulated experience of innumerable observations reveals a remarkable regularity of behavior, allowing us to assign a precise meaning to the concept of probability not only in the case of experiments with equiprobable outcomes, but also in the most general case. Suppose the experiment under consideration can be repeated any number of times, so that, in principle at least, we can produce a whole series of independent trials under identical conditions,¹ in each of which, depending on chance, a particular event A of interest either occurs or does not occur. Let n be the total number of experiments in the whole series of trials, and let n(A) be the number of experiments in which A occurs. Then the ratio

n(A)/n

is called the relative frequency of the event A (in the given series of trials). It turns out that the relative frequencies n(A)/n observed in different series of trials are virtually the same for large n, clustering about some constant

n(A)/n ≈ P(A),     (1.2)

called the probability of the event A. More exactly, (1.2) means that

P(A) = lim (n→∞) n(A)/n.
Roughly speaking, the probability P(A) of the event A equals the fraction of experiments leading to the occurrence of A in a large series of trials.²
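The clustering of relative frequencies described by (1.2) is easy to observe by simulation. Here is a short Python sketch (our own illustration, not part of the text) that mimics Table 1 below: 100 series of n = 100 fair-coin tosses, with the relative frequency of heads computed for each series:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def relative_frequency(event_occurred, n):
    """n(A)/n: the fraction of n independent trials in which A occurs."""
    return sum(event_occurred() for _ in range(n)) / n

heads = lambda: random.random() < 0.5  # one toss of a fair coin

# 100 series of 100 tosses each, as in Table 1
freqs = [relative_frequency(heads, 100) for _ in range(100)]
mean_freq = sum(freqs) / len(freqs)
```

Each individual frequency fluctuates around 1/2, while their average over all 10,000 tosses lies much closer to 1/2, just as the text observes for longer series.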
Example 4. Table 1 shows the results of a series of 10,000 coin tosses,³ grouped into 100 different series of n = 100 tosses each. In every case, the table shows the number of tosses n(A) leading to the occurrence of a head. It is clear that the relative frequency of occurrence of heads in each set of 100 tosses differs only slightly from the probability P(A) = 1/2 found in Example 1. Note that the relative frequency of occurrence of heads is even closer to 1/2 if we group the tosses in series of 1000 tosses each.
Table 1. Number of heads in a series of coin tosses
Example 5 (De Méré’s paradox). As a result of extensive observation of dice games, the French gambler de Méré noticed that the total number of spots showing on three dice thrown simultaneously turns out to be 11 (the event A1) more often than it turns out to be 12 (the event A2), although from his point of view both events should occur equally often. De Méré reasoned as follows: A1 occurs in just six ways (6:4:1, 6:3:2, 5:5:1, 5:4:2, 5:3:3, 4:4:3), and A2 also occurs in just six ways (6:5:1, 6:4:2, 6:3:3, 5:5:2, 5:4:3, 4:4:4). Therefore A1 and A2 have the same probability P(A1) = P(A2).
The fallacy in this argument was found by Pascal, who showed that the outcomes listed by de Méré are not actually equiprobable. In fact, one must take account not only of the numbers of spots showing on the dice, but also of the particular dice on which the spots appear. For example, numbering the dice and writing the number of spots in the corresponding order, we find that there are six distinct outcomes leading to the combination 6:4:1, namely (6, 4, 1), (6, 1, 4), (4, 6, 1), (4, 1, 6), (1, 6, 4) and (1, 4, 6), whereas there is only one outcome leading to the combination 4:4:4, namely (4, 4, 4). The appropriate equiprobable outcomes are those described by triples of numbers (a, b, c), where a is the number of spots on the first die, b the number of spots on the second die, and c the number of spots on the third die. It is easy to see that there are then precisely N = 6³ = 216 equiprobable outcomes. Of these, N(A1) = 27 are favorable to the event A1 (in which the sum of all the spots equals 11), but only N(A2) = 25 are favorable to the event A2 (in which the sum of all the spots equals 12).⁵ This fact explains the tendency observed by de Méré for 11 spots to appear more often than 12.
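Pascal's resolution — count ordered triples (a, b, c), not unordered combinations — can be checked by brute-force enumeration. A minimal Python sketch (our illustration, not from the book):

```python
from itertools import product

# All N = 6**3 = 216 equiprobable ordered triples (a, b, c),
# where a, b, c are the spots on the first, second, and third die
totals = [a + b + c for (a, b, c) in product(range(1, 7), repeat=3)]

N = len(totals)          # 216 equiprobable outcomes
N_A1 = totals.count(11)  # outcomes favorable to A1 (sum = 11)
N_A2 = totals.count(12)  # outcomes favorable to A2 (sum = 12)

p_A1, p_A2 = N_A1 / N, N_A2 / N  # P(A1) = 27/216 > P(A2) = 25/216
```

The counts confirm the text: 27 of the 216 ordered outcomes sum to 11 but only 25 sum to 12, so 11 spots indeed appear more often than 12.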
2. Rudiments of Combinatorial Analysis
Combinatorial formulas are of great use in calculating probabilities. We now derive the most important of these formulas.
THEOREM 1.1. Given n1 elements a1, a2, …, an1 and n2 elements b1, b2, …, bn2, there are precisely n1n2 distinct ordered pairs (ai, bj) containing one element of each kind.
FIGURE 1.
Proof. Represent the elements of the first kind by points of the x-axis, and those of the second kind by points of the y-axis. Then the possible pairs (ai, bj) are points of a rectangular lattice in the xy-plane, as shown in Figure 1. The fact that there are just n1n2 such pairs is obvious from the figure. ⁶
More generally, we have
THEOREM 1.2. Given n1 elements a1, a2, …, an1, n2 elements b1, …, bn2, etc., up to nr elements x1, x2, …, xnr, there are precisely n1n2 … nr distinct ordered r-tuples (ai1, bi2, …, xir) containing one element of each kind.⁷
Proof. For r = 2, the theorem reduces to Theorem 1.1. Suppose the theorem holds for r – 1, so that in particular there are precisely n2 … nr (r – 1)-tuples (bi2, …, xir) containing one element of each kind. Then, regarding the (r – 1)-tuples as elements of a new kind, we note that each r-tuple (ai1, bi2, …, xir) can be regarded as made up of an (r – 1)-tuple (bi2, …, xir) and an element ai1. Hence, by Theorem 1.1, there are precisely
n1 (n2 … nr) = n1n2 … nr
r-tuples containing one element of each kind. The theorem now follows for all r by mathematical induction.
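Theorem 1.2 is exactly what Python's `itertools.product` computes: the ordered r-tuples drawn one element from each of r groups. A small sketch (our illustration; the group sizes n1 = 2, n2 = 3, n3 = 4 are arbitrary):

```python
from itertools import product

# Three kinds of elements, with n1 = 2, n2 = 3, n3 = 4 elements each
a = ["a1", "a2"]
b = ["b1", "b2", "b3"]
x = ["x1", "x2", "x3", "x4"]

# All distinct ordered triples containing one element of each kind
triples = list(product(a, b, x))
count = len(triples)  # Theorem 1.2 predicts n1 * n2 * n3 = 2 * 3 * 4
```

The count 2 · 3 · 4 = 24 agrees with the product formula n1n2 … nr.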
Example 1. What is the probability of getting three sixes in a throw of three dice?
Solution. Let a be the number of spots on the first die, b the number of spots on the second die, and c the number of spots on the third die. Then the result of throwing the dice is described by an ordered triple (a, b, c), where each element takes values from 1 to 6. Hence, by Theorem 1.2 with r = 3 and n1 = n2 = n3 = 6, there are precisely N = 6³