
PROBABILITY AND STATISTICS
IN ENGINEERING
FOURTH EDITION

WILEY STUDENT EDITION

WILLIAM W. HINES
DOUGLAS C. MONTGOMERY
DAVID M. GOLDSMAN
CONNIE M. BORROR
Probability and Statistics
in Engineering
Fourth Edition

William W. Hines
Professor Emeritus
School of Industrial and Systems Engineering
Georgia Institute of Technology

Douglas C. Montgomery
Professor of Engineering and Statistics
Department of Industrial Engineering
Arizona State University

David M. Goldsman
Professor
School of Industrial and Systems Engineering
Georgia Institute of Technology

Connie M. Borror
Senior Lecturer
Department of Industrial Engineering
Arizona State University
PROBABILITY AND STATISTICS IN ENGINEERING
Fourth Edition

Copyright © 2003, 2004, 2006 by John Wiley & Sons, Inc. All rights reserved.

Authorized reprint by Wiley India (P.) Ltd., 4435/7, Ansari Road, Daryaganj, New
Delhi 110 002.

All rights reserved. AUTHORIZED REPRINT OF THE EDITION PUBLISHED BY
JOHN WILEY & SONS INC., U.K. No part of this book may be reproduced in any
form without the written permission of the publisher.

Limits of Liability/Disclaimer of Warranty: The publisher and the author make no
representation or warranties with respect to the accuracy or completeness of the
contents of this work and specifically disclaim all warranties, including without
limitation warranties of fitness for a particular purpose. No warranty may be
created or extended by sales or promotional materials. The advice and strategies
contained herein may not be suitable for every situation. This work is sold with
the understanding that the publisher is not engaged in rendering legal,
accounting, or other professional services. If professional assistance is required,
the services of a competent professional person should be sought. Neither the
publisher nor the author shall be liable for damages arising herefrom. The fact
that an organization or Website is referred to in this work as a citation and/or a
potential source of further information does not mean that the author or the
publisher endorses the information the organization or Website may provide or
recommendations it may make. Further, readers should be aware that Internet
Websites listed in this work may have changed or disappeared between when
this work was written and when it is read.

Wiley also publishes its books in a variety of electronic formats. Some content
that appears in print may not be available in electronic books. For more
information about Wiley products, visit our website at www.wiley.com.

Reprint : 2012

Printed at : Sharda Offset Press, Delhi

ISBN: 978-81-265-1646-9
PREFACE to the 4th Edition

This book is written for a first course in applied probability and statistics for undergraduate
students in engineering, physical sciences, and management science curricula. We have
found that the text can be used effectively as a two-semester sophomore- or junior-level
course sequence, as well as a one-semester refresher course in probability and statistics for
first-year graduate students.
The text has undergone a major overhaul for the fourth edition, especially with regard
to many of the statistics chapters. The idea has been to make the book more accessible to a
wide audience by including more motivational examples, real-world applications, and useful
computer exercises. With the aim of making the course material easier to learn and easier
to teach, we have also provided a convenient set of course notes, available on the Web
site www.wiley.com/college/hines. For instructors adopting the text, the complete solutions
are also available on a password-protected portion of this Web site.
Structurally speaking, we start the book off with probability theory (Chapter 1) and
progress through random variables (Chapter 2), functions of random variables (Chapter 3),
joint random variables (Chapter 4), discrete and continuous distributions (Chapters 5 and
6), and normal distribution (Chapter 7). Then we introduce statistics and data description
techniques (Chapter 8). The statistics chapters follow the same rough outline as in the pre-
vious edition, namely, sampling distributions (Chapter 9), parameter estimation (Chapter
10), hypothesis testing (Chapter 11), single- and multifactor design of experiments (Chap-
ters 12 and 13), and simple and multiple regression (Chapters 14 and 15). Subsequent spe-
cial-topics chapters include nonparametric statistics (Chapter 16), quality control and
reliability engineering (Chapter 17), and stochastic processes and queueing theory (Chapter 18).
Finally, there is an entirely new chapter on statistical techniques for computer simulation
(Chapter 19), perhaps the first of its kind in this type of statistics text.
The chapters that have seen the most substantial evolution are Chapters 8-14. The discussion
in Chapter 8 on descriptive data analysis is greatly enhanced over that of the previous
edition. We also expanded the discussion on different types of interval estimation in
Chapter 10. In addition, an emphasis has been placed on real-life computer data analysis
examples. Throughout the book, we incorporated other structural changes. In all chapters,
we included new examples and exercises, including numerous computer-based exercises.
A few words on Chapters 18 and 19. Stochastic processes and queueing theory arise
naturally out of probability, and we feel that Chapter 18 serves as a good introduction to the
subject, normally taught in operations research, management science, and certain engineering
disciplines. Queueing theory has garnered a great deal of use in such diverse fields
as telecommunications, manufacturing, and production planning. Computer simulation, the
topic of Chapter 19, is perhaps the most widely used tool in operations research and management
science, as well as in a number of physical sciences. Simulation marries all the
tools of probability and statistics and is used in everything from financial analysis to factory
control and planning. Our text provides what amounts to a simulation minicourse, covering
the areas of Monte Carlo experimentation, random number and variate generation, and
simulation output data analysis.

We are grateful to the following individuals for their help during the process of com-
pleting the current revision of the text. Christos Alexopoulos (Georgia Institute of Tech-
nology), Michael Caramanis (Boston University), David R. Clark (Kettering University),
J. N. Hool (Auburn University), John S. Ramberg (University of Arizona), and Edward J.
Williams (University of Michigan-Dearborn) served as reviewers and provided a great
deal of valuable feedback. Beatriz Valdés (Argosy Publishing) did a wonderful job super-
vising the typesetting and page proofing of the text, and Jennifer Welter at Wiley provided
great leadership at every turn. Everyone was certainly a pleasure to work with. Of course,
we thank our families for their infinite patience and support throughout the endeavor.

Hines, Montgomery, Goldsman, and Borror


Contents

1. An Introduction to Probability
   1-1 Introduction
   1-2 A Review of Sets
   1-3 Experiments and Sample Spaces
   1-4 Events
   1-5 Probability Definition and Assignment
   1-6 Finite Sample Spaces and Enumeration
       1-6.1 Tree Diagram
       1-6.2 Multiplication Principle
       1-6.3 Permutations
       1-6.4 Combinations
       1-6.5 Permutations of Like Objects
   1-7 Conditional Probability
   1-8 Partitions, Total Probability, and Bayes' Theorem
   1-9 Summary
   1-10 Exercises

2. One-Dimensional Random Variables
   2-1 Introduction
   2-2 The Distribution Function
   2-3 Discrete Random Variables
   2-4 Continuous Random Variables
   2-5 Some Characteristics of Distributions
   2-6 Chebyshev's Inequality
   2-7 Summary
   2-8 Exercises

3. Functions of One Random Variable and Expectation
   3-1 Introduction
   3-2 Equivalent Events
   3-3 Functions of a Discrete Random Variable
   3-4 Continuous Functions of a Continuous Random Variable
   3-5 Expectation
   3-6 Approximations to E[H(X)] and V[H(X)]
   3-7 The Moment-Generating Function
   3-8 Summary
   3-9 Exercises

4. Joint Probability Distributions
   4-1 Introduction
   4-2 Joint Distribution for Two-Dimensional Random Variables
   4-3 Marginal Distributions
   4-4 Conditional Distributions
   4-5 Conditional Expectation
   4-6 Regression of the Mean
   4-7 Independence of Random Variables
   4-8 Covariance and Correlation
   4-9 The Distribution Function for Two-Dimensional Random Variables
   4-10 Functions of Two Random Variables
   4-11 Joint Distributions of Dimension n > 2
   4-12 Linear Combinations
   4-13 Moment-Generating Functions and Linear Combinations
   4-14 The Law of Large Numbers
   4-15 Summary
   4-16 Exercises

5. Some Important Discrete Distributions
   5-1 Introduction
   5-2 Bernoulli Trials and the Bernoulli Distribution
   5-3 The Binomial Distribution
       5-3.1 Mean and Variance of the Binomial Distribution
       5-3.2 The Cumulative Binomial Distribution
       5-3.3 An Application of the Binomial Distribution
   5-4 The Geometric Distribution
       5-4.1 Mean and Variance of the Geometric Distribution
   5-5 The Pascal Distribution
       5-5.1 Mean and Variance of the Pascal Distribution
   5-6 The Multinomial Distribution
   5-7 The Hypergeometric Distribution
       5-7.1 Mean and Variance of the Hypergeometric Distribution
   5-8 The Poisson Distribution
       5-8.1 Development from a Poisson Process
       5-8.2 Development of the Poisson Distribution from the Binomial
       5-8.3 Mean and Variance of the Poisson Distribution
   5-9 Some Approximations
   5-10 Generation of Realizations
   5-11 Summary
   5-12 Exercises

6. Some Important Continuous Distributions
   6-1 Introduction
   6-2 The Uniform Distribution
       6-2.1 Mean and Variance of the Uniform Distribution
   6-3 The Exponential Distribution
       6-3.1 The Relationship of the Exponential Distribution to the Poisson Distribution
       6-3.2 Mean and Variance of the Exponential Distribution
       6-3.3 Memoryless Property of the Exponential Distribution
   6-4 The Gamma Distribution
       6-4.1 The Gamma Function
       6-4.2 Definition of the Gamma Distribution
       6-4.3 Relationship Between the Gamma Distribution and the Exponential Distribution
       6-4.4 Mean and Variance of the Gamma Distribution
   6-5 The Weibull Distribution
       6-5.1 Mean and Variance of the Weibull Distribution
   6-6 Generation of Realizations
   6-7 Summary
   6-8 Exercises

7. The Normal Distribution
   7-1 Introduction
   7-2 The Normal Distribution
       7-2.1 Properties of the Normal Distribution
       7-2.2 Mean and Variance of the Normal Distribution
       7-2.3 The Normal Cumulative Distribution
       7-2.4 The Standard Normal Distribution
       7-2.5 Problem-Solving Procedure
   7-3 The Reproductive Property of the Normal Distribution
   7-4 The Central Limit Theorem
   7-5 The Normal Approximation to the Binomial Distribution
   7-6 The Lognormal Distribution
       7-6.1 Density Function
       7-6.2 Mean and Variance of the Lognormal Distribution
       7-6.3 Properties of the Lognormal Distribution
   7-7 The Bivariate Normal Distribution
   7-8 Generation of Normal Realizations
   7-9 Summary
   7-10 Exercises

8. Introduction to Statistics and Data Description
   8-1 The Field of Statistics
   8-2 Data
   8-3 Graphical Presentation of Data
       8-3.1 Numerical Data: Dot Plots and Scatter Plots
       8-3.2 Numerical Data: The Frequency Distribution and Histogram
       8-3.3 The Stem-and-Leaf Plot
       8-3.4 The Box Plot
       8-3.5 The Pareto Chart
       8-3.6 Time Plots
   8-4 Numerical Description of Data
       8-4.1 Measures of Central Tendency
       8-4.2 Measures of Dispersion
       8-4.3 Other Measures for One Variable
       8-4.4 Measuring Association
       8-4.5 Grouped Data
   8-5 Summary
   8-6 Exercises

9. Random Samples and Sampling Distributions
   9-1 Random Samples
       9-1.1 Simple Random Sampling from a Finite Universe
       9-1.2 Stratified Random Sampling of a Finite Universe
   9-2 Statistics and Sampling Distributions
       9-2.1 Sampling Distributions
       9-2.2 Finite Populations and Enumerative Studies
   9-3 The Chi-Square Distribution
   9-4 The t Distribution
   9-5 The F Distribution
   9-6 Summary
   9-7 Exercises

10. Parameter Estimation
   10-1 Point Estimation
       10-1.1 Properties of Estimators
       10-1.2 The Method of Maximum Likelihood
       10-1.3 The Method of Moments
       10-1.4 Bayesian Inference
       10-1.5 Applications to Estimation
       10-1.6 Precision of Estimation: The Standard Error
   10-2 Single-Sample Confidence Interval Estimation
       10-2.1 Confidence Interval on the Mean of a Normal Distribution, Variance Known
       10-2.2 Confidence Interval on the Mean of a Normal Distribution, Variance Unknown
       10-2.3 Confidence Interval on the Variance of a Normal Distribution
       10-2.4 Confidence Interval on a Proportion
   10-3 Two-Sample Confidence Interval Estimation
       10-3.1 Confidence Interval on the Difference between Means of Two Normal Distributions, Variances Known
       10-3.2 Confidence Interval on the Difference between Means of Two Normal Distributions, Variances Unknown
       10-3.3 Confidence Interval on the Difference in Means for Paired Observations
       10-3.4 Confidence Interval on the Ratio of Variances of Two Normal Distributions
       10-3.5 Confidence Interval on the Difference between Two Proportions
   10-4 Approximate Confidence Intervals in Maximum Likelihood Estimation
   10-5 Simultaneous Confidence Intervals
   10-6 Bayesian Confidence Intervals
   10-7 Bootstrap Confidence Intervals
   10-8 Other Interval Estimation Procedures
       10-8.1 Prediction Intervals
       10-8.2 Tolerance Intervals
   10-9 Summary
   10-10 Exercises

11. Tests of Hypotheses
   11-1 Introduction
       11-1.1 Statistical Hypotheses
       11-1.2 Type I and Type II Errors
       11-1.3 One-Sided and Two-Sided Hypotheses
   11-2 Tests of Hypotheses on a Single Sample
       11-2.1 Tests of Hypotheses on the Mean of a Normal Distribution, Variance Known
       11-2.2 Tests of Hypotheses on the Mean of a Normal Distribution, Variance Unknown
       11-2.3 Tests of Hypotheses on the Variance of a Normal Distribution
       11-2.4 Tests of Hypotheses on a Proportion
   11-3 Tests of Hypotheses on Two Samples
       11-3.1 Tests of Hypotheses on the Means of Two Normal Distributions, Variances Known
       11-3.2 Tests of Hypotheses on the Means of Two Normal Distributions, Variances Unknown
       11-3.3 The Paired t-Test
       11-3.4 Tests for the Equality of Two Variances
       11-3.5 Tests of Hypotheses on Two Proportions
   11-4 Testing for Goodness of Fit
   11-5 Contingency Table Tests
   11-6 Sample Computer Output
   11-7 Summary
   11-8 Exercises

12. Design and Analysis of Single-Factor Experiments: The Analysis of Variance
   12-1 The Completely Randomized Single-Factor Experiment
       12-1.1 An Example
       12-1.2 The Analysis of Variance
       12-1.3 Estimation of the Model Parameters
       12-1.4 Residual Analysis and Model Checking
       12-1.5 An Unbalanced Design
   12-2 Tests on Individual Treatment Means
       12-2.1 Orthogonal Contrasts
       12-2.2 Tukey's Test
   12-3 The Random-Effects Model
   12-4 The Randomized Block Design
       12-4.1 Design and Statistical Analysis
       12-4.2 Tests on Individual Treatment Means
       12-4.3 Residual Analysis and Model Checking
   12-5 Determining Sample Size in Single-Factor Experiments
   12-6 Sample Computer Output
   12-7 Summary
   12-8 Exercises

13. Design of Experiments with Several Factors
   13-1 Examples of Experimental Design Applications
   13-2 Factorial Experiments
   13-3 Two-Factor Factorial Experiments
       13-3.1 Statistical Analysis of the Fixed-Effects Model
       13-3.2 Model Adequacy Checking
       13-3.3 One Observation per Cell
       13-3.4 The Random-Effects Model
       13-3.5 The Mixed Model
   13-4 General Factorial Experiments
   13-5 The 2^k Factorial Design
       13-5.1 The 2^2 Design
       13-5.2 The 2^k Design for k >= 3 Factors
       13-5.3 A Single Replicate of the 2^k Design
   13-6 Confounding in the 2^k Design
   13-7 Fractional Replication of the 2^k Design
       13-7.1 The One-Half Fraction of the 2^k Design
       13-7.2 Smaller Fractions: The 2^(k-p) Fractional Factorial
   13-8 Sample Computer Output
   13-9 Summary
   13-10 Exercises

14. Simple Linear Regression and Correlation
   14-1 Simple Linear Regression
   14-2 Hypothesis Testing in Simple Linear Regression
   14-3 Interval Estimation in Simple Linear Regression
   14-4 Prediction of New Observations
   14-5 Measuring the Adequacy of the Regression Model
       14-5.1 Residual Analysis
       14-5.2 The Lack-of-Fit Test
       14-5.3 The Coefficient of Determination
   14-6 Transformations to a Straight Line
   14-7 Correlation
   14-8 Sample Computer Output
   14-9 Summary
   14-10 Exercises

15. Multiple Regression
   15-1 Multiple Regression Models
   15-2 Estimation of the Parameters
   15-3 Confidence Intervals in Multiple Linear Regression
   15-4 Prediction of New Observations
   15-5 Hypothesis Testing in Multiple Linear Regression
       15-5.1 Test for Significance of Regression
       15-5.2 Tests on Individual Regression Coefficients
   15-6 Measures of Model Adequacy
       15-6.1 The Coefficient of Multiple Determination
       15-6.2 Residual Analysis
   15-7 Polynomial Regression
   15-8 Indicator Variables
   15-9 The Correlation Matrix
   15-10 Problems in Multiple Regression
       15-10.1 Multicollinearity
       15-10.2 Influential Observations in Regression
       15-10.3 Autocorrelation
   15-11 Selection of Variables in Multiple Regression
       15-11.1 The Model-Building Problem
       15-11.2 Computational Procedures for Variable Selection
   15-12 Summary
   15-13 Exercises

16. Nonparametric Statistics
   16-1 Introduction
   16-2 The Sign Test
       16-2.1 A Description of the Sign Test
       16-2.2 The Sign Test for Paired Samples
       16-2.3 Type II Error (beta) for the Sign Test
       16-2.4 Comparison of the Sign Test and the t-Test
   16-3 The Wilcoxon Signed Rank Test
       16-3.1 A Description of the Test
       16-3.2 A Large-Sample Approximation
       16-3.3 Paired Observations
       16-3.4 Comparison with the t-Test
   16-4 The Wilcoxon Rank Sum Test
       16-4.1 A Description of the Test
       16-4.2 A Large-Sample Approximation
       16-4.3 Comparison with the t-Test
   16-5 Nonparametric Methods in the Analysis of Variance
       16-5.1 The Kruskal-Wallis Test
       16-5.2 The Rank Transformation
   16-6 Summary
   16-7 Exercises

17. Statistical Quality Control and Reliability Engineering
   17-1 Quality Improvement and Statistics
   17-2 Statistical Quality Control
   17-3 Statistical Process Control
       17-3.1 Introduction to Control Charts
       17-3.2 Control Charts for Measurements
       17-3.3 Control Charts for Individual Measurements
       17-3.4 Control Charts for Attributes
       17-3.5 CUSUM and EWMA Control Charts
       17-3.6 Average Run Length
       17-3.7 Other SPC Problem-Solving Tools
   17-4 Reliability Engineering
       17-4.1 Basic Reliability Definitions
       17-4.2 The Exponential Time-to-Failure Model
       17-4.3 Simple Serial Systems
       17-4.4 Simple Active Redundancy
       17-4.5 Standby Redundancy
       17-4.6 Life Testing
       17-4.7 Reliability Estimation with a Known Time-to-Failure Distribution
       17-4.8 Estimation with the Exponential Time-to-Failure Distribution
       17-4.9 Demonstration and Acceptance Testing
   17-5 Summary
   17-6 Exercises

18. Stochastic Processes and Queueing
   18-1 Introduction
   18-2 Discrete-Time Markov Chains
   18-3 Classification of States and Chains
   18-4 Continuous-Time Markov Chains
   18-5 The Birth-Death Process in Queueing
   18-6 Considerations in Queueing Models
   18-7 Basic Single-Server Model with Constant Rates
   18-8 Single Server with Limited Queue Length
   18-9 Multiple Servers with an Unlimited Queue
   18-10 Other Queueing Models
   18-11 Summary
   18-12 Exercises

19. Computer Simulation
   19-1 Motivational Examples
   19-2 Generation of Random Variables
       19-2.1 Generating Uniform (0,1) Random Variables
       19-2.2 Generating Nonuniform Random Variables
   19-3 Output Analysis
       19-3.1 Terminating Simulation Analysis
       19-3.2 Initialization Problems
       19-3.3 Steady State Simulation Analysis
   19-4 Comparison of Systems
       19-4.1 Classical Confidence Intervals
       19-4.2 Common Random Numbers
       19-4.3 Antithetic Random Numbers
       19-4.4 Selecting the Best System
   19-5 Summary
   19-6 Exercises

Appendix
   Table I     Cumulative Poisson Distribution
   Table II    Cumulative Standard Normal Distribution
   Table III   Percentage Points of the Chi-Square Distribution
   Table IV    Percentage Points of the t Distribution
   Table V     Percentage Points of the F Distribution
   Chart VI    Operating Characteristic Curves
   Chart VII   Operating Characteristic Curves for the Fixed-Effects Model Analysis of Variance
   Chart VIII  Operating Characteristic Curves for the Random-Effects Model Analysis of Variance
   Table IX    Critical Values for the Wilcoxon Two-Sample Test
   Table X     Critical Values for the Sign Test
   Table XI    Critical Values for the Wilcoxon Signed Rank Test
   Table XII   Percentage Points of the Studentized Range Statistic
   Table XIII  Factors for Quality-Control Charts
   Table XIV   k Values for One-Sided and Two-Sided Tolerance Intervals
   Table XV    Random Numbers

References

Answers to Selected Exercises

Index
Chapter 1

An Introduction to Probability

1-1 INTRODUCTION
Since professionals working in engineering and applied science are often engaged in both
the analysis and the design of systems where system component characteristics are nonde-
terministic, the understanding and utilization of probability is essential to the description,
design, and analysis of such systems. Examples reflecting probabilistic behavior are abun-
dant, and in fact, true deterministic behavior is rare. To illustrate, consider the description of
a variety of product quality or performance measurements: the operational lifespan of
mechanical and/or electronic systems; the pattern of equipment failures; the occurrence of
natural phenomena such as sun spots or tornados; particle counts from a radioactive source;
travel times in delivery operations; vehicle accident counts during a given day on a section of
freeway; or customer waiting times in line at a branch bank.
The term probability has come to be widely used in everyday life to quantify the degree
of belief in an event of interest. There are abundant examples, such as, the statements that
“there is a 0.2 probability of rain showers” and “the probability that brand X personal com-
puter will survive 10,000 hours of operation without repair is 0.75.” In this chapter we intro-
duce the basic structure, elementary concepts, and methods to support precise and
unambiguous statements like those above.
The formal study of probability theory apparently originated in the seventeenth and
eighteenth centuries in France and was motivated by the study of games of chance. With lit-
tle formal mathematical understructure, people viewed the field with some skepticism;
however, this view began to change in the nineteenth century, when a probabilistic model
(description) was developed for the behavior of molecules in a liquid. This became known
as Brownian motion, since it was Robert Brown, an English botanist, who first observed the
phenomenon in 1827. In 1905, Albert Einstein explained Brownian motion under the
hypothesis that particles are subject to the continual bombardment of molecules of the sur-
rounding medium. These results greatly stimulated interest in probability, as did the emer-
gence of the telephone system in the latter part of the nineteenth and early twentieth
centuries. Since a physical connecting system was necessary to allow for the interconnec-
tion of individual telephones, with call lengths and interdemand intervals displaying large
variation, a strong motivation emerged for developing probabilistic models to describe this
system’s behavior.
Although applications like these were rapidly expanding in the early twentieth century,
it is generally thought that it was not until the 1930s that a rigorous mathematical structure
for probability emerged. This chapter presents basic concepts leading to and including a
definition of probability as well as some results and methods useful for problem solution.
The emphasis throughout Chapters 1-7 is to encourage an understanding and appreciation
of the subject, with applications to a variety of problems in engineering and science. The
reader should recognize that there is a large, rich field of mathematics related to probabil-
ity that is beyond the scope of this book.

Indeed, our objectives in presenting the basic probability topics considered in the cur-
rent chapter are threefold. First, these concepts enhance and enrich our basic understanding
of the world in which we live. Second, many of the examples and exercises deal with the
use of probability concepts to model the behavior of real-world systems. Finally, the prob-
ability topics developed in Chapters 1-7 provide a foundation for the statistical methods
presented in Chapters 8-16 and beyond. These statistical methods deal with the analysis
and interpretation of data, drawing inference about populations based on a sample of units
selected from them, and with the design and analysis of experiments and experimental
data. A sound understanding of such methods will greatly enhance the professional capa-
bility of individuals working in the data-intensive areas commonly encountered in this
twenty-first century.

1-2 A REVIEW OF SETS


To present the basic concepts of probability theory, we will use some ideas from the theory
of sets. A set is an aggregate or collection of objects. Sets are usually designated by capital
letters, A, B, C, and so on. The members of the set A are called the elements of A. In gen-
eral, when x is an element of A we write x ∈ A, and if x is not an element of A we write
x ∉ A. In specifying membership we may resort either to enumeration or to a defining prop-
erty. These ideas are illustrated in the following examples. Braces are used to denote a set,
and the colon within the braces is shorthand for the term “such that.”

Example 1-1
The set whose elements are the integers 5, 6, 7, 8 is a finite set with four elements. We could denote
this by

A= {5, 6, 7, 8}.
Note that 5 € A and 9 ¢ A are both true.

Example 1-2
If we write V = {a, e, i, o, u} we have defined the set of vowels in the English alphabet. We may use
a defining property and write this using a symbol as

V= {*: * is a vowel in the English alphabet}.

Example 1-3
If we say that A is the set of all real numbers between 0 and 1 inclusive, we might also denote A by a
defining property as

A = {x: x ∈ R, 0 ≤ x ≤ 1},

where R is the set of all real numbers.


Example 1-4
The set B = {-3, +3} is the same set as

B = {x: x ∈ R, x² = 9},

where R is again the set of real numbers.

Example 1-5
In the real plane we can consider points (x, y) that lie on a given line A. Thus, the condition for inclusion
for A requires (x, y) to satisfy ax + by = c, so that

A = {(x, y): x ∈ R, y ∈ R, ax + by = c},

where R is the set of real numbers.

The universal set is the set of all objects under consideration, and it is generally
denoted by U. Another special set is the null set or empty set, usually denoted by ∅. To illustrate
this concept, consider a set

A = {x: x ∈ R, x² = -1}.

The universal set here is R, the set of real numbers. Obviously, set A is empty, since there
are no real numbers having the defining property x² = -1. We should point out that the set
{0} ≠ ∅.
If two sets are considered, say A and B, we call A a subset of B, denoted A ⊂ B, if each
element in A is also an element of B. The sets A and B are said to be equal (A = B) if and
only if A ⊂ B and B ⊂ A. As direct consequences of this we may show the following:

1. For any set A, ∅ ⊂ A.
2. For a given U, A considered in the context of U satisfies the relation A ⊂ U.
3. For a given set A, A ⊂ A (a reflexive relation).
4. If A ⊂ B and B ⊂ C, then A ⊂ C (a transitive relation).

An interesting consequence of set equality is that the order of element listing is immaterial.
To illustrate, let A = {a, b, c} and B = {c, a, b}. Obviously A = B by our definition.
Furthermore, when defining properties are used, the sets may be equal although the defining
properties are outwardly different. As an example of the second consequence, we let A =
{x: x ∈ R, where x is an even, prime number} and B = {x: x + 3 = 5}. Since the integer 2 is
the only even prime, A = B.
We now consider some operations on sets. Let A and B be any subsets of the universal
set U. Then the following hold:

1. The complement of A (with respect to U) is the set made up of the elements of U that
do not belong to A. We denote this complementary set as Ā. That is,

Ā = {x: x ∈ U, x ∉ A}.

2. The intersection of A and B is the set of elements that belong to both A and B. We
denote the intersection as A ∩ B. In other words,

A ∩ B = {x: x ∈ A and x ∈ B}.

We should also note that A ∩ B is a set, and we could give this set some designator,
such as C.

3. The union of A and B is the set of elements that belong to at least one of the sets A
and B. If D represents the union, then

D = A ∪ B = {x: x ∈ A or x ∈ B (or both)}.

These operations are illustrated in the following examples.



Example 1-6
Let U be the set of letters in the alphabet, that is, U = {*: * is a letter of the English alphabet}; and let
A = {*: * is a vowel} and B = {*: * is one of the letters a, b, c}. As a consequence of the definitions,

Ā = the set of consonants,
B̄ = {d, e, f, ..., x, y, z},
A ∪ B = {a, b, c, e, i, o, u},
A ∩ B = {a}.

Example 1-7
If the universal set is defined as U = {1, 2, 3, 4, 5, 6, 7}, and three subsets, A = {1, 2, 3}, B = {2, 4, 6},
C = {1, 3, 5, 7}, are defined, then we see immediately from the definitions that

Ā = {4, 5, 6, 7},           B̄ = {1, 3, 5, 7} = C,       C̄ = {2, 4, 6} = B,
A ∪ B = {1, 2, 3, 4, 6},    A ∪ C = {1, 2, 3, 5, 7},    B ∪ C = U,
A ∩ B = {2},                A ∩ C = {1, 3},             B ∩ C = ∅.
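The set computations in Example 1-7 can be checked mechanically with Python's built-in set type. The following is a minimal sketch; the variable names are ours, purely for illustration:

# Verify the set computations of Example 1-7 with Python's built-in sets.
U = {1, 2, 3, 4, 5, 6, 7}        # universal set
A = {1, 2, 3}
B = {2, 4, 6}
C = {1, 3, 5, 7}

assert U - A == {4, 5, 6, 7}     # complement of A
assert U - B == C                # complement of B equals C
assert U - C == B                # complement of C equals B
assert A | B == {1, 2, 3, 4, 6}  # unions
assert B | C == U
assert A & B == {2}              # intersections
assert A & C == {1, 3}
assert B & C == set()            # B and C have empty intersection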

The Venn diagram can be used to illustrate certain set operations. A rectangle is drawn
to represent the universal set U. A subset A of U is represented by the region within a cir-
cle drawn inside the rectangle. Then Ā will be represented by the area of the rectangle
outside of the circle, as illustrated in Fig. 1-1. Using this notation, the intersection and union
are illustrated in Fig. 1-2.
The operations of intersection and union may be extended in a straightforward manner
to accommodate any finite number of sets. In the case of three sets, say A, B, and C, A ∪ B
∪ C has the property that A ∪ (B ∪ C) = (A ∪ B) ∪ C, which obviously holds since both
sides have identical members. Similarly, we see that A ∩ B ∩ C = (A ∩ B) ∩ C = A ∩
(B ∩ C). Some important laws obeyed by sets relative to the operations previously defined
are listed below.

Identity laws:       A ∪ ∅ = A,  A ∩ U = A,
                     A ∪ U = U,  A ∩ ∅ = ∅.
De Morgan's laws:    complement of (A ∪ B) = Ā ∩ B̄,  complement of (A ∩ B) = Ā ∪ B̄.
Associative laws:    A ∪ (B ∪ C) = (A ∪ B) ∪ C,
                     A ∩ (B ∩ C) = (A ∩ B) ∩ C.
Distributive laws:   A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C),
                     A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C).

The reader is asked in Exercise 1-2 to illustrate some of these statements with Venn diagrams.
Formal proofs are usually more lengthy.

Figure 1-1 A set in a Venn diagram.



Figure 1-2 The intersection and union of two sets in a Venn diagram.
(a) The intersection shaded. (b) The union shaded.

In the case of more than three sets, we use a subscript to generalize. Thus, if n is a positive
integer, and B₁, B₂, ..., Bₙ are given sets, then B₁ ∩ B₂ ∩ ⋯ ∩ Bₙ is the set of elements
belonging to all of the sets, and B₁ ∪ B₂ ∪ ⋯ ∪ Bₙ is the set of elements that belong to at
least one of the given sets.
If A and B are sets, then the set of all ordered pairs (a, b) such that a ∈ A and b ∈ B is
called the Cartesian product set of A and B. The usual notation is A × B. We thus have

A × B = {(a, b): a ∈ A and b ∈ B}.

Let r be a positive integer greater than 1, and let A₁, ..., A_r represent sets. Then the Cartesian
product set is given by

A₁ × A₂ × ⋯ × A_r = {(a₁, a₂, ..., a_r): a_j ∈ A_j for j = 1, 2, ..., r}.

Frequently, the number of elements in a set is of some importance, and we denote by
n(A) the number of elements in set A. If the number is finite, we say we have a finite set.
Should the set be infinite, such that the elements can be put into a one-to-one correspondence
with the natural numbers, then the set is called a denumerably infinite set. The nondenumerable
set contains an infinite number of elements that cannot be enumerated. For
example, if a < b, then the set A = {x: x ∈ R, a ≤ x ≤ b} is a nondenumerable set.
A set of particular interest is called the power set. The elements of this set are the subsets
of a set A, and a common notation is {0, 1}^A. For example, if A = {1, 2, 3}, then

{0, 1}^A = {∅, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}}.
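For a finite set, the power set can also be generated directly, which gives a convenient check on the 2ⁿ subset count used later in Section 1-6. A minimal Python sketch (the function name power_set is ours):

from itertools import chain, combinations

def power_set(s):
    """Return all subsets of s, of every size from 0 to len(s)."""
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

subsets = power_set({1, 2, 3})
print(len(subsets))  # 8 = 2**3, matching the listing of {0, 1}^A above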

1-3 EXPERIMENTS AND SAMPLE SPACES


Probability theory has been motivated by real-life situations where an experiment is per-
formed and the experimenter observes an outcome. Furthermore, the outcome may not be
predicted with certainty. Such experiments are called random experiments. The concept of
a random experiment is considered mathematically to be a primitive notion and is thus not
otherwise defined; however, we can note that random experiments have some common
characteristics. First, while we cannot predict a particular outcome with certainty, we can
describe the set of possible outcomes. Second, from a conceptual point of view, the exper-
iment is one that could be repeated under conditions that remain unchanged, with the out-
comes appearing in a haphazard manner; however, as the number of repetitions increases,
certain patterns in the frequency of outcome occurrence emerge.
We will often consider idealized experiments. For example, we may rule out the out-
come of a coin toss when the coin lands on edge. This is more for convenience than out of
necessity. The set of possible outcomes is called the sample space, and these outcomes
define the particular idealized experiment. The symbols ℰ and 𝒮 are used to represent the
random experiment and the associated sample space.
Following the terminology employed in the review of sets and set operations, we will
classify sample spaces (and thus random experiments). A discrete sample space is one
in which there is a finite number of outcomes or a countably (denumerably) infinite
number of outcomes. Likewise, a continuous sample space has nondenumerable (uncountable)
outcomes. These might be real numbers on an interval or real pairs contained in
the product of intervals, where measurements are made on two variables following an
experiment.
To illustrate random experiments with an associated sample space, we consider several
examples.

Example 1-8
ℰ₁: Toss a true coin and observe the "up" face.
𝒮₁: {H, T}.

Example 1-9
ℰ₂: Toss a true coin three times and observe the sequence of heads and tails.
𝒮₂: {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}.

Example 1-10
ℰ₃: Toss a true coin three times and observe the total number of heads.
𝒮₃: {0, 1, 2, 3}.

Example 1-11
ℰ₄: Toss a pair of dice and observe the up faces.
𝒮₄: {(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6),
     (2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (2, 6),
     (3, 1), (3, 2), (3, 3), (3, 4), (3, 5), (3, 6),
     (4, 1), (4, 2), (4, 3), (4, 4), (4, 5), (4, 6),
     (5, 1), (5, 2), (5, 3), (5, 4), (5, 5), (5, 6),
     (6, 1), (6, 2), (6, 3), (6, 4), (6, 5), (6, 6)}.

Example 1-12
ℰ₅: An automobile door is assembled with a large number of spot welds. After assembly, each
weld is inspected, and the total number of defectives is counted.
𝒮₅: {0, 1, 2, ..., K}, where K = the total number of welds in the door.

Example 1-13
ℰ₆: A cathode ray tube is manufactured, put on life test, and aged to failure. The elapsed time
(in hours) at failure is recorded.
𝒮₆: {t: t ∈ R, t ≥ 0}.
This set is uncountable.

Example 1-14
ℰ₇: A monitor records the emission count from a radioactive source in one minute.
𝒮₇: {0, 1, 2, ...}.
This set is countably infinite.

Example 1-15
ℰ₈: Two key solder joints on a printed circuit board are inspected with a probe as well as visually,
and each joint is classified as good, G, or defective, D, requiring rework or scrap.
𝒮₈: {GG, GD, DG, DD}.

Example 1-16
ℰ₉: In a particular chemical plant the volume produced per day for a particular product ranges
between a minimum value, b, and a maximum value, c, which corresponds to capacity. A
day is randomly selected and the amount produced is observed.
𝒮₉: {x: x ∈ R, b ≤ x ≤ c}.

Example 1-17
ℰ₁₀: An extrusion plant is engaged in making up an order for pieces 20 feet long. Inasmuch as
the trim operation creates scrap at both ends, the extruded bar must exceed 20 feet.
Because of costs involved, the amount of scrap is critical. A bar is extruded, trimmed, and
finished, and the total length of scrap is measured.
𝒮₁₀: {x: x ∈ R, x > 0}.

Example 1-18
ℰ₁₁: In a missile launch, the three components of velocity are measured from the ground as a
function of time. At 1 minute after launch these are printed for a control unit.
𝒮₁₁: {(v_x, v_y, v_z): v_x, v_y, v_z are real numbers}.

Example 1-19
ℰ₁₂: In the preceding example, the velocity components are continuously recorded for 5
minutes.
𝒮₁₂: The space is complicated here, as we have all possible realizations of the functions v_x(t),
v_y(t), and v_z(t) for 0 ≤ t ≤ 5 minutes to consider.

All these examples have the characteristics required of random experiments. With the
exception of Example 1-19, the description of the sample space is straightforward, and
although repetition is not considered, ideally we could repeat the experiments. To illustrate
the phenomena of random occurrence, consider Example 1-8. Obviously, if ℰ₁ is repeated
indefinitely, we obtain a sequence of heads and tails. A pattern emerges as we continue the
experiment. Notice that since the coin is true, we should obtain heads approximately one-half
of the time. In recognizing the idealization in the model, we simply agree on a theoretically
possible set of outcomes. In ℰ₁, we ruled out the possibility of having the coin land
on edge, and in ℰ₆, where we recorded the elapsed time to failure, the idealized sample
space consisted of all nonnegative real numbers.

1-4 EVENTS
An event, say A, is associated with the sample space of the experiment. The sample space is
considered to be the universal set so that event A is simply a subset of 𝒮. Note that both ∅
and 𝒮 are subsets of 𝒮. As a general rule, a capital letter will be used to denote an event. For
a finite sample space, we note that the set of all subsets is the power set, and more generally,
we require that if A ⊂ 𝒮, then Ā ⊂ 𝒮, and if A₁, A₂, ... is a sequence of mutually exclusive
events in 𝒮, as defined below, then ∪_i A_i ⊂ 𝒮. The following events relate to
experiments ℰ₁, ℰ₂, ..., ℰ₁₀ described in the preceding section. These are provided for illustration
only; many other events could have been described for each case.

ℰ₁.  A: The coin toss yields a head, {H}.
ℰ₂.  A: All the coin tosses give the same face, {HHH, TTT}.
ℰ₃.  A: The total number of heads is two, {2}.
ℰ₄.  A: The sum of the "up" faces is seven, {(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)}.
ℰ₅.  A: The number of defective welds does not exceed 5, {0, 1, 2, 3, 4, 5}.
ℰ₆.  A: The time to failure is greater than 1000 hours, {t: t > 1000}.
ℰ₇.  A: The count is exactly two, {2}.
ℰ₈.  A: Neither weld is bad, {GG}.
ℰ₉.  A: The volume produced is between a (> b) and c, {x: x ∈ R, b < a < x < c}.
ℰ₁₀. A: The scrap does not exceed one foot, {x: x ∈ R, 0 < x ≤ 1}.

Since an event is a set, the set operations defined for events and the laws and properties
of Section 1-2 hold. If the intersections for all combinations of two or more events
among k events considered are empty, then the k events are said to be mutually exclusive (or
disjoint). If there are two events, A and B, then they are mutually exclusive if A ∩ B = ∅.
With k = 3, we would require A₁ ∩ A₂ = ∅, A₁ ∩ A₃ = ∅, A₂ ∩ A₃ = ∅, and A₁ ∩ A₂ ∩ A₃ = ∅,
and this case is illustrated in Fig. 1-3. We emphasize that these multiple events are associated
with one experiment.

1-5 PROBABILITY DEFINITION AND ASSIGNMENT


An axiomatic approach is taken to define probability as a set function where the elements
of the domain are sets and the elements of the range are real numbers between 0 and 1. If
event A is an element in the domain of this function, we use customary functional notation,
P(A), to designate the corresponding element in the range.

Figure 1-3 Three mutually exclusive events.

Definition
If an experiment ℰ has sample space 𝒮 and an event A is defined on 𝒮, then P(A) is a real
number called the probability of event A or the probability of A, and the function P(·) has
the following properties:

1. 0 ≤ P(A) ≤ 1 for each event A of 𝒮.
2. P(𝒮) = 1.
3. For any finite number k of mutually exclusive events defined on 𝒮,

   P(A₁ ∪ A₂ ∪ ⋯ ∪ A_k) = Σ_{i=1}^k P(A_i).

4. If A₁, A₂, A₃, ... is a denumerable sequence of mutually exclusive events defined on
𝒮, then

   P(∪_{i=1}^∞ A_i) = Σ_{i=1}^∞ P(A_i).
Note that the properties of the definition given do not tell the experimenter how to
assign probabilities; however, they do restrict the way in which the assignment may be
accomplished. In practice, probability is assigned on the basis of (1) estimates obtained
from previous experience or prior observations, (2) an analytical consideration of experi-
mental conditions, or (3) assumption.
To illustrate the assignment of probability based on experience, we consider the repe-
tition of the experiment and the relative frequency of the occurrence of the event of interest.
This notion of relative frequency has intuitive appeal, and it involves the conceptual
repetition of an experiment and a counting of both the number of repetitions and the num-
ber of times the event in question occurs. More precisely, ℰ is repeated m times and two
events are denoted A and B. We let m_A and m_B be the number of times A and B occur in the
m repetitions.

Definition
The value f_A = m_A/m is called the relative frequency of event A. It has the following
properties:

1. 0 ≤ f_A ≤ 1.
2. f_A = 0 if and only if A never occurs, and f_A = 1 if and only if A occurs on every
repetition.
3. If A and B are mutually exclusive events, then f_{A∪B} = f_A + f_B.

As m becomes large, f_A tends to stabilize. That is, as the number of repetitions of the
experiment increases, the relative frequency of event A will vary less and less (from repetition
to repetition). The concept of relative frequency and the tendency toward stability lead to one
method for assigning probability. If an experiment ℰ has sample space 𝒮 and an event A is
defined, and if the relative frequency f_A approaches some number p_A as the number of repetitions
increases, then the number p_A is ascribed to A as its probability; that is, as m → ∞,

P(A) = lim_{m→∞} (m_A/m) = p_A.    (1-1)

In practice, something less than infinite replication must obviously be accepted. As an


example, consider a simple coin-tossing experiment ℰ in which we sequentially toss a fair
coin and observe the outcome, either heads (H) or tails (T), arising from each trial. If the
observational process is considered as a random experiment such that for a particular repetition
the sample space is 𝒮 = {H, T}, we define the event A = {H}, where this event is
defined before the observations are made. Suppose that after m = 100 repetitions of ℰ, we
observe m_A = 43, resulting in a relative frequency of f_A = 0.43, a seemingly low value. Now
suppose that we instead conduct m = 10,000 repetitions of ℰ and this time we observe m_A
= 4924, so that the relative frequency is in this case f_A = 0.4924. Since we now have at our
disposal 10,000 observations instead of 100, everyone should be more comfortable in
assigning the updated relative frequency of 0.4924 to P(A). The stability of f_A as m gets
large is only an intuitive notion at this point; we will be able to be more precise later.
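The stabilization of f_A can also be observed by simulation. The sketch below is illustrative only; it uses Python's random module as a stand-in for physically tossing a fair coin, and the seed is fixed so the run is reproducible:

import random

random.seed(1)

def relative_frequency_of_heads(m):
    """Toss a fair coin m times and return f_A = m_A / m for A = {H}."""
    m_A = sum(1 for _ in range(m) if random.random() < 0.5)
    return m_A / m

for m in (100, 10_000, 1_000_000):
    print(m, relative_frequency_of_heads(m))

As m increases, the printed frequencies settle near p_A = 0.5, which is the limiting behavior behind equation 1-1.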
A method for computing the probability of an event A is as follows. Suppose the sample
space has a finite number, n, of elements, e₁, e₂, ..., eₙ, and the probability assigned to an outcome
is p_i = P(E_i), where E_i = {e_i} and

p_i ≥ 0,  i = 1, 2, ..., n,

while

p₁ + p₂ + ⋯ + pₙ = 1.

Then

P(A) = Σ_{i: e_i ∈ A} p_i.    (1-2)

This is a statement that the probability of event A is the sum of the probabilities associated
with the outcomes making up event A, and this result is simply a consequence of the
definition of probability. The practitioner is still faced with assigning probabilities to the
outcomes e_i. It is noted that the sample space will not be finite if, for example, the elements
e_i of 𝒮 are countably infinite in number. In this case, we note that

p_i ≥ 0,  i = 1, 2, ...,  and  Σ_{i=1}^∞ p_i = 1.

However, equation 1-2 may be used without modification.


If the sample space is finite and has n equally likely outcomes, so that p₁ = p₂ = ⋯ = pₙ
= 1/n, then

P(A) = n(A)/n,    (1-3)

where n(A) is the number of outcomes contained in A. Counting methods useful in determining n and n(A)
will be presented in Section 1-6.

Example 1-20
Suppose the coin in Example 1-9 is biased so that the outcomes of the sample space 𝒮 = {HHH, HHT,
HTH, HTT, THH, THT, TTH, TTT} have probabilities p₁ = 1/27, p₂ = 2/27, p₃ = 2/27, p₄ = 4/27,
p₅ = 2/27, p₆ = 4/27, p₇ = 4/27, p₈ = 8/27, where e₁ = HHH, e₂ = HHT, etc. If we let event A be the
event that all tosses yield the same face, then P(A) = p₁ + p₈ = 1/27 + 8/27 = 1/3.

Example 1-21
Suppose that in Example 1-14 we have prior knowledge that

p_i = e⁻² · 2^(i-1) / (i - 1)!,  i = 1, 2, ...,
    = 0,  otherwise,

where p_i is the probability that the monitor will record a count outcome of i - 1 during a 1-minute interval.
If we consider event A as the event containing the outcomes 0 and 1, then A = {0, 1}, and P(A) =
p₁ + p₂ = e⁻² + 2e⁻² = 3e⁻² ≈ 0.406.
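A quick numerical check of this computation is easy with the standard library; the following minimal sketch simply codes the formula above:

import math

# p_i = e**(-2) * 2**(i - 1) / (i - 1)! is the probability of a count of i - 1.
def p(i):
    return math.exp(-2.0) * 2.0 ** (i - 1) / math.factorial(i - 1)

P_A = p(1) + p(2)          # counts 0 and 1
print(P_A)                 # 0.40600..., i.e., 3 * e**(-2)
print(3 * math.exp(-2.0))  # same value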

Example 1-22
Consider Example 1-9, where a true coin is tossed three times, and consider event A, where all coins
show the same face. By equation 1-3,

P(A) = n(A)/n = 2/8 = 1/4,

since there are eight total outcomes and two are favorable to event A. The coin was assumed to be true,
so all eight possible outcomes are equally likely.

Example 1-23
Assume that the dice in Example 1-11 are true, and consider an event A where the sum of the up faces
is 7. Using the results of equation 1-3, we note that there are 36 outcomes, of which six are favorable
to the event in question, so that P(A) = 6/36 = 1/6.
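Equation 1-3 lends itself to direct enumeration. A minimal Python sketch of Example 1-23:

from itertools import product

# Enumerate all 36 equally likely outcomes of tossing two true dice.
outcomes = list(product(range(1, 7), repeat=2))
favorable = [(i, j) for (i, j) in outcomes if i + j == 7]
print(len(favorable), len(outcomes))   # 6 36
print(len(favorable) / len(outcomes))  # 0.1666... = 1/6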

Note that Examples 1-22 and 1-23 are extremely simple in two respects: the sample space
is of a highly restricted type, and the counting process is easy. Combinatorial methods fre-
quently become necessary as the counting becomes more involved. Basic counting methods
are reviewed in Section 1-6. Some important theorems regarding probability follow.

Theorem 1-1
If ∅ is the empty set, then P(∅) = 0.

Proof  Note that 𝒮 = 𝒮 ∪ ∅, and 𝒮 and ∅ are mutually exclusive. Then P(𝒮) = P(𝒮) + P(∅)
from property 4; therefore P(∅) = 0.

Theorem 1-2
P(Ā) = 1 - P(A).

Proof  Note that 𝒮 = A ∪ Ā, and A and Ā are mutually exclusive. Then P(𝒮) = P(A) + P(Ā)
from property 4, but from property 2, P(𝒮) = 1; therefore P(Ā) = 1 - P(A).

Theorem 1-3
P(A ∪ B) = P(A) + P(B) - P(A ∩ B).

Proof  Since A ∪ B = A ∪ (B ∩ Ā), where A and (B ∩ Ā) are mutually exclusive, and B =
(A ∩ B) ∪ (B ∩ Ā), where (A ∩ B) and (B ∩ Ā) are mutually exclusive, then P(A ∪ B) =
P(A) + P(B ∩ Ā), and P(B) = P(A ∩ B) + P(B ∩ Ā). Subtracting, P(A ∪ B) - P(B) = P(A) -
P(A ∩ B), and thus P(A ∪ B) = P(A) + P(B) - P(A ∩ B).

The Venn diagram shown in Fig. 1-4 is helpful in following the argument of the proof for
Theorem 1-3. We see that the “double counting” of the hatched region in the expression
P(A) + P(B) is corrected by subtraction of P(A ∩ B).

Theorem 1-4
P(A ∪ B ∪ C) = P(A) + P(B) + P(C) - P(A ∩ B) - P(A ∩ C) - P(B ∩ C) + P(A ∩ B ∩ C).

Proof  We may write A ∪ B ∪ C = (A ∪ B) ∪ C and use Theorem 1-3, since A ∪ B is an


event. The reader is asked to provide the details in Exercise 1-32.

Theorem 1-5

P(A₁ ∪ A₂ ∪ ⋯ ∪ A_k) = Σ_{i=1}^k P(A_i) - Σ_{i<j} P(A_i ∩ A_j) + Σ_{i<j<r} P(A_i ∩ A_j ∩ A_r)
                        - ⋯ + (-1)^(k-1) P(A₁ ∩ A₂ ∩ ⋯ ∩ A_k).

Proof  Refer to Exercise 1-33.

Theorem 1-6

If A ⊂ B, then P(A) ≤ P(B).

Proof
If A ⊂ B, then B = A ∪ (Ā ∩ B) and P(B) = P(A) + P(Ā ∩ B) ≥ P(A), since P(Ā ∩ B) ≥ 0.

Figure 1-4 Venn diagram for two events.



Example 1-24
If A and B are mutually exclusive events, and if it is known that P(A) = 0.20 while P(B) = 0.30, we
can evaluate several probabilities:

1. P(Ā) = 1 - P(A) = 0.80.
2. P(B̄) = 1 - P(B) = 0.70.
3. P(A ∪ B) = P(A) + P(B) = 0.2 + 0.3 = 0.5.
4. P(A ∩ B) = 0.
5. By De Morgan's law, Ā ∩ B̄ is the complement of A ∪ B, so
   P(Ā ∩ B̄) = 1 - P(A ∪ B) = 1 - [P(A) + P(B)] = 0.5.

Example 1-25
Suppose events A and B are not mutually exclusive and we know that P(A) = 0.20, P(B) = 0.30, and
P(A ∩ B) = 0.10. Then evaluating the same probabilities as before, we obtain

1. P(Ā) = 1 - P(A) = 0.80.
2. P(B̄) = 1 - P(B) = 0.70.
3. P(A ∪ B) = P(A) + P(B) - P(A ∩ B) = 0.2 + 0.3 - 0.1 = 0.4.
4. P(A ∩ B) = 0.1.
5. P(Ā ∩ B̄) = 1 - P(A ∪ B) = 1 - [P(A) + P(B) - P(A ∩ B)] = 0.6.

Example 1-26
Suppose that in a certain city 75% of the residents jog (J), 20% like ice cream (I), and 40% enjoy
music (M). Further, suppose that 15% jog and like ice cream, 30% jog and enjoy music, 10% like ice
cream and music, and 5% do all three types of activities. We can consolidate all of this information
in the simple Venn diagram in Fig. 1-5 by starting from the last piece of data, P(J ∩ I ∩ M) = 0.05,
and "working our way out" of the center.

1. Find the probability that a random resident will engage in at least one of the three activities.
By Theorem 1-4,

P(J ∪ I ∪ M) = P(J) + P(I) + P(M) - P(J ∩ I) - P(J ∩ M) - P(I ∩ M) + P(J ∩ I ∩ M)
             = 0.75 + 0.20 + 0.40 - 0.15 - 0.30 - 0.10 + 0.05 = 0.85.

This answer is also immediate by adding up the components of the Venn diagram.

2. Find the probability that a resident engages in precisely one type of activity. By the Venn diagram,
we see that the desired probability is

P(J ∩ Ī ∩ M̄) + P(J̄ ∩ I ∩ M̄) + P(J̄ ∩ Ī ∩ M) = 0.35 + 0 + 0.05 = 0.40.

Figure 1-5 Venn diagram for Example 1-26.
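Both parts of Example 1-26 reduce to arithmetic on the given probabilities, and coding that arithmetic is a useful cross-check on the Venn-diagram bookkeeping. A minimal Python sketch (the variable names are ours):

# Probabilities given in Example 1-26.
P_J, P_I, P_M = 0.75, 0.20, 0.40
P_JI, P_JM, P_IM = 0.15, 0.30, 0.10
P_JIM = 0.05

# Part 1: at least one activity, by Theorem 1-4 (inclusion-exclusion).
P_union = P_J + P_I + P_M - P_JI - P_JM - P_IM + P_JIM
print(P_union)  # 0.85

# Part 2: exactly one activity, working outward from the center of the diagram.
only_J = P_J - P_JI - P_JM + P_JIM  # 0.35
only_I = P_I - P_JI - P_IM + P_JIM  # 0.00
only_M = P_M - P_JM - P_IM + P_JIM  # 0.05
print(only_J + only_I + only_M)     # 0.40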



1-6 FINITE SAMPLE SPACES AND ENUMERATION


Experiments that give rise to a finite sample space have already been discussed, and the
methods for assigning probabilities to events associated with such experiments have been
presented. We can use equations 1-1, 1-2, and 1-3 and deal either with "equally likely" or
with "not equally likely" outcomes. In some situations we will have to resort to the relative
frequency concept and successive trials (experimentation) to estimate probabilities, as indicated
in equation 1-1, with some finite m. In this section, however, we deal with equally
likely outcomes and equation 1-3. Note that this equation represents a special case of equation
1-2, where p₁ = p₂ = ⋯ = pₙ = 1/n.
In order to assign probabilities, P(A) = n(A)/n, we must be able to determine both n, the
number of outcomes, and n(A), the number of outcomes favorable to event A. If there are n
outcomes in 𝒮, then there are 2ⁿ possible subsets that are the elements of the power set,
{0, 1}^𝒮.
The requirement for the n outcomes to be equally likely is an important one, and there
will be numerous applications where the experiment will specify that one (or more) item(s)
is (are) selected at random from a population group of N items without replacement.
If n represents the sample size (n < N) and the selection is random, then each possible
selection (sample) is equally likely. It will soon be seen that there are N!/[n!(N - n)!] such
samples, so the probability of getting a particular sample must be n!(N - n)!/N!. It should
be carefully noted that one sample differs from another if one (or more) item appears in one
sample and not the other. The population items must thus be identifiable. In order to illustrate,
suppose a population has four chips (N = 4) labeled a, b, c, and d. The sample size is
to be two (n = 2). The possible results of the selection, disregarding order, are elements of
𝒮 = {ab, ac, ad, bc, bd, cd}. If the sampling process is random, the probability of obtaining
each possible sample is 1/6. The mechanics of selecting random samples vary a great
deal, and devices such as pseudorandom number generators, random number tables, and
icosahedron dice are frequently used, as will be discussed at a later point.
It becomes obvious that we need enumeration methods for evaluating n and n(A) for
experiments yielding equally likely outcomes; the following sections, 1-6.1 through 1-6.5,
review basic enumeration techniques and results useful for this purpose.

1-6.1 Tree Diagram


In simple experiments, a tree diagram may be useful in the enumeration of the sample
space. Consider Example 1-9, where a true coin is tossed three times. The set of possible
outcomes could be found by taking all the paths in the tree diagram shown in Fig. 1-6. It
should be noted that there are 2 outcomes to each trial, 3 trials, and 2³ = 8 outcomes {HHH,
HHT, HTH, HTT, THH, THT, TTH, TTT}.

1-6.2 Multiplication Principle


If sets A₁, A₂, ..., A_k have, respectively, n₁, n₂, ..., n_k elements, then there are n₁ · n₂ · ⋯ · n_k
ways to select an element first from A₁, then from A₂, ..., and finally from A_k.
In the special case where n₁ = n₂ = ⋯ = n_k = n, there are n^k possible selections. This was
the situation encountered in the coin-tossing experiment of Example 1-9.
Suppose we consider some compound experiment ℰ consisting of k experiments, ℰ₁,
ℰ₂, ..., ℰ_k. If the sample spaces 𝒮₁, 𝒮₂, ..., 𝒮_k contain n₁, n₂, ..., n_k outcomes, respectively,
then there are n₁ · n₂ · ⋯ · n_k outcomes to ℰ. In addition, if the n_j outcomes of 𝒮_j are equally
likely for j = 1, 2, ..., k, then the n₁ · n₂ · ⋯ · n_k outcomes of ℰ are equally likely.

Figure 1-6 A tree diagram for tossing a true coin three times.

Example 1-27
Suppose we toss a true coin and cast a true die. Since the coin and the die are true, the two outcomes
of ℰ₁, 𝒮₁ = {H, T}, are equally likely, and the six outcomes of ℰ₂, 𝒮₂ = {1, 2, 3, 4, 5, 6}, are equally likely.
Since n₁ = 2 and n₂ = 6, there are 12 outcomes to the total experiment and all the outcomes are equally
likely. Because of the simplicity of the experiment in this case, a tree diagram permits an easy and
complete enumeration. Refer to Fig. 1-7.

Example 1-28
A manufacturing process is operated with very little “in-process inspection.” When items are com-
pleted they are transported to an inspection area, and four characteristics are inspected, each by a dif-
ferent inspector. The first inspector rates a characteristic according to one of four ratings. The second
inspector uses three ratings, and the third and fourth inspectors use two ratings each. Each inspector
marks the rating on the item identification tag. There would be a total of 4 · 3 · 2 · 2 = 48 ways in
which the item may be marked.

𝒮 = {(H, 1), (H, 2), (H, 3), (H, 4), (H, 5), (H, 6),
(T, 1), (T, 2), (T, 3), (T, 4), (T, 5), (T, 6)}

Figure 1-7 The tree diagram for Example 1-27.

1-6.3 Permutations

A permutation is an ordered arrangement of distinct objects. One permutation differs from
another if the order of arrangement differs or if the content differs. To illustrate, suppose we
again consider four distinct chips labeled a, b, c, and d. Suppose we wish to consider all
permutations of these chips taken one at a time. These would be
a
b
c
d
If we are to consider all permutations taken two at a time, these would be

ab bc
ba cb
ac bd
ca db
ad cd
da dc

Note that permutations ab and ba differ because of a difference in order of the objects,
while permutations ac and ab differ because of content differences. In order to generalize,
we consider the case where there are n distinct objects from which we plan to select per-
mutations of r objects (r ≤ n). The number of such permutations, Pⁿᵣ, is given by

Pⁿᵣ = n(n − 1)(n − 2)(n − 3) ⋯ (n − r + 1) = n!/(n − r)!.

This is a result of the fact that there are n ways to select the first object, (n − 1) ways to
select the second, ..., [n − (r − 1)] ways to select the rth, and the application of the multi-
plication principle. Note that Pⁿₙ = n! and 0! = 1.

Example 1-29
A major league baseball team typically has 25 players. A line-up consists of nine of these players in
a particular order. Thus, there are P²⁵₉ = 25!/16! ≈ 7.41 × 10¹¹ possible lineups.
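
For readers who want to check such counts by machine, here is a minimal Python sketch (our own illustration, assuming Python 3.8+, where math.perm is available):

```python
import math

# Example 1-29: ordered lineups of 9 players chosen from 25.
lineups = math.perm(25, 9)            # P(25, 9) = 25!/(25 - 9)!
print(lineups)                        # 741354768000, about 7.41 x 10^11

# The same count from first principles.
assert lineups == math.factorial(25) // math.factorial(16)
```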

1-6.4 Combinations

A combination is an arrangement of distinct objects where one combination differs from


another only if the content of the arrangement differs. Here order does not matter. In the
case of the four lettered chips a, b, c, d, the combinations of the chips, taken two at a time,
are
ab
ac
ad
bc
bd
cd
We are interested in determining the number of combinations when there are n distinct
objects to be selected r at a time. Since the number of permutations was the number of ways
to select r objects from the n and then permute the r objects, we note that

Pⁿᵣ = r!·C(n, r),   (1-4)

where C(n, r) represents the number of combinations. It follows that

C(n, r) = Pⁿᵣ/r! = n!/[r!(n − r)!].   (1-5)

In the illustration with the four chips where r = 2, the reader may readily verify that P⁴₂ = 12
and C(4, 2) = 6, as we found by complete enumeration.
For present purposes, C(n, r) is defined where n and r are integers such that 0 ≤ r ≤ n; how-
ever, the terms C(n, r) may be generally defined for real n and any nonnegative integer r. In this
case we write

C(n, r) = n(n − 1)(n − 2) ⋯ (n − r + 1)/r!.

The reader will recall the binomial theorem:

(a + b)ⁿ = Σᵣ₌₀ⁿ C(n, r) aʳ bⁿ⁻ʳ.   (1-6)

The numbers C(n, r) are thus called binomial coefficients.


Returning briefly to the definition of random sampling from a finite population with-
out replacement, there were N objects with n to be selected. There are thus C(N, n) different sam-
ples. If the sampling process is random, each possible outcome has probability 1/C(N, n) of
being the one selected.
Two identities that are often helpful in problem solutions are

C(n, r) = C(n, n − r)   (1-7)

and

C(n, r) = C(n − 1, r − 1) + C(n − 1, r).   (1-8)

To develop the result shown in equation 1-7, we note that

C(n, r) = n!/[r!(n − r)!] = n!/[(n − r)!r!] = C(n, n − r),

and to develop the result shown in equation 1-8, we expand the right-hand side and collect
terms.
To verify that a finite collection of n elements has 2ⁿ subsets, as indicated earlier, we
see that

2ⁿ = (1 + 1)ⁿ = Σᵣ₌₀ⁿ C(n, r)

from equation 1-6. The right side of this relationship gives the total number of subsets, since
C(n, 0) is the number of subsets with 0 elements, C(n, 1) is the number with one element, ..., and C(n, n)
is the number with n elements.

Example 1-30
A production lot of size 100 is known to be 5% defective. A random sample of 10 items is selected
without replacement. In order to determine the probability that there will be no defectives in the sam-
ple, we resort to counting both the number of possible samples and the number of samples favorable
to event A, where event A is taken to mean that there are no defectives. The number of possible sam-
ples is C(100, 10) = 100!/(10!90!). The number "favorable to A" is C(5, 0)·C(95, 10), so that

P(A) = C(5, 0)·C(95, 10)/C(100, 10) = [5!/(0!5!)]·[95!/(10!85!)]/[100!/(10!90!)] = 0.58375.

To generalize the preceding example, we consider the case where the population has N
items of which D belong to some class of interest (such as defective). A random sample of
size n is selected without replacement. If A denotes the event of obtaining exactly r items
from the class of interest in the sample, then

P(A) = C(D, r)·C(N − D, n − r)/C(N, n),   r = 0, 1, 2, ..., min(n, D).   (1-9)

Problems of this type are often referred to as hypergeometric sampling problems.
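
Equation 1-9 is straightforward to evaluate with integer arithmetic. The sketch below is our own illustration (the function name hypergeom_pmf is not from the text):

```python
from math import comb

def hypergeom_pmf(N, D, n, r):
    """Equation 1-9: probability of exactly r items from the class of
    interest in a sample of n, drawn without replacement from N items
    of which D belong to the class."""
    return comb(D, r) * comb(N - D, n - r) / comb(N, n)

# Example 1-30: lot of 100 with 5 defectives, sample of 10, no defectives.
print(hypergeom_pmf(100, 5, 10, 0))   # 0.58375 (to five decimals)
```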

Example 1-31
An NBA basketball team typically has 12 players. A starting team consists of five of these players in
no particular order. Thus, there are C(12, 5) = 792 possible starting teams.

Example 1-32
One obvious application of counting methods lies in calculating probabilities for poker hands. Before
proceeding, we remind the reader of some standard terminology. The rank of a particular card drawn
from a standard 52-card deck can be 2, 3, ..., Q, K, A, while the possible suits are ♣, ♦, ♥, ♠. In poker,
we draw five cards at random from the deck. The number of possible hands is C(52, 5) = 2,598,960.
1. We first calculate the probability of obtaining two pairs, for example A♥, A♣, 3♥, 3♦, 10♠.
We proceed as follows.
(a) Select two ranks (e.g., A, 3). We can do this C(13, 2) ways.
(b) Select two suits for the first pair (e.g., ♥, ♣). There are C(4, 2) ways.
(c) Select two suits for the second pair (e.g., ♥, ♦). There are C(4, 2) ways.
(d) Select the remaining card to complete the hand. There are 44 ways.

Thus, the number of ways to select two pairs is

n(2 pairs) = C(13, 2)·C(4, 2)·C(4, 2)·44 = 123,552,

and so

P(2 pairs) = 123,552/2,598,960 = 0.0475.

2. Here we calculate the probability of obtaining a full house (one pair, one three-of-a-kind), for
example A♥, A♣, 3♥, 3♦, 3♠.

(a) Select two ordered ranks (e.g., A, 3). There are P¹³₂ ways. Indeed, the ranks must be
ordered, since "three A's, two 3's" differs from "two A's, three 3's."
(b) Select two suits for the pair (e.g., ♥, ♣). There are C(4, 2) ways.
(c) Select three suits for the three-of-a-kind (e.g., ♥, ♦, ♠). There are C(4, 3) ways.

Thus, the number of ways to select a full house is

n(full house) = P¹³₂·C(4, 2)·C(4, 3) = 13·12·6·4 = 3744,

and so

P(full house) = 3744/2,598,960 = 0.00144.

3. Finally, we calculate the probability of a flush (all five cards from the same suit). How many
ways can we obtain this event?
(a) Select a suit. There are C(4, 1) ways.
(b) Select five cards from that suit. There are C(13, 5) ways.
Then we have

P(flush) = C(4, 1)·C(13, 5)/2,598,960 = 5148/2,598,960 = 0.00198.
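
The same counts can be verified directly with math.comb. The short sketch below reproduces the three poker probabilities; it is an illustrative check, not part of the original text.

```python
from math import comb

hands = comb(52, 5)                                   # 2,598,960 possible hands

two_pairs = comb(13, 2) * comb(4, 2) ** 2 * 44        # 123,552
full_house = 13 * 12 * comb(4, 2) * comb(4, 3)        # ordered ranks: P(13, 2)
flush = comb(4, 1) * comb(13, 5)                      # 5,148 (straight flushes included)

print(two_pairs / hands)     # about 0.0475
print(full_house / hands)    # about 0.00144
print(flush / hands)         # about 0.00198
```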

1-6.5 Permutations of Like Objects


In the event that there are k distinct classes of objects, and the objects within the classes are
not distinct, the following result is obtained, where n₁ is the number in the first class, n₂ is
the number in the second class, ..., nₖ is the number in the kth class, and n₁ + n₂ + ⋯ + nₖ
= n:

P = n!/(n₁!·n₂! ⋯ nₖ!).   (1-10)

Example 1-33
Consider the word "TENNESSEE." The number of ways to arrange the letters in this word is

n!/(n₁!n₂!n₃!n₄!) = 9!/(1!4!2!2!) = 3780,

where we use the obvious notation. Problems of this type are often referred to as multinomial sam-
pling problems.
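
Equation 1-10 can be evaluated mechanically by dividing n! by the factorials of the class counts. A minimal Python sketch (our own helper, not from the text):

```python
from collections import Counter
from math import factorial

def arrangements(word):
    """Equation 1-10: distinct permutations of the letters of `word`."""
    result = factorial(len(word))
    for count in Counter(word).values():
        result //= factorial(count)
    return result

print(arrangements("TENNESSEE"))   # 9!/(1! 4! 2! 2!) = 3780
```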

The counting methods presented in this section are primarily to support probability
assignment where there is a finite number of equally likely outcomes. It is important to
remember that this is a special case of the more general types of problems encountered in
probability applications.

1-7 CONDITIONAL PROBABILITY


As noted in Section 1-4, an event is associated with a sample space, and the event is repre-
sented by a subset of 𝒮. The probabilities discussed in Section 1-5 all relate to the entire
sample space. We have used the symbol P(A) to denote the probability of these events; how-
ever, we could have used the symbol P(A|𝒮), read as "the probability of A, given sample
space 𝒮." In this section we consider the probability of events where the event is condi-
tioned on some subset of the sample space.
Some illustrations of this idea should be helpful. Consider a group of 100 persons of
whom 40 are college graduates, 20 are self-employed, and 10 are both college graduates
and self-employed. Let B represent the set of college graduates and A represent the set of
self-employed, so that A ∩ B is the set of college graduates who are self-employed. From
the group of 100, one person is to be randomly selected. (Each person is given a number
from 1 to 100, and 100 chips with the same numbers are agitated, with one being selected
by a blindfolded outsider.) Then, P(A) = 0.2, P(B) = 0.4, and P(A ∩ B) = 0.1 if the entire
sample space is considered. As noted, it may be more instructive to write P(A|𝒮), P(B|𝒮),
and P(A ∩ B|𝒮) in such a case. Now suppose the following event is considered: self-
employed given that the person is a college graduate (A|B). Obviously the sample space is
reduced in that only college graduates are considered (Fig. 1-8). The probability P(A|B) is
thus given by

P(A|B) = n(A ∩ B)/n(B) = [n(A ∩ B)/n]/[n(B)/n] = P(A ∩ B)/P(B) = 0.1/0.4 = 0.25.

The reduced sample space consists of the set of all subsets of 𝒮 that belong to B. Of course,
A ∩ B satisfies the condition.
As a second illustration, consider the case where a sample of size 2 is randomly
selected from a lot of size 10. It is known that the lot has seven good and three bad items.
Let A be the event that the first item selected is good, and B be the event that the second item
selected is good. If the items are selected without replacement, that is, the first item is not
replaced before the second item is selected, then

P(A) = 7/10

and

P(B|A) = 6/9.

If the first item is replaced before the second item is selected, the conditional probability
P(B|A) = P(B) = 7/10, and the events A and B resulting from the two selection experiments
comprising ℰ are said to be independent. A formal definition of conditional probability

Figure 1-8 Conditional probability. (a) Initial sample space. (b) Reduced sample space.

P(A|B) will be given later, and independence will be discussed in detail. The following
examples will help to develop some intuitive feeling for conditional probability.

Example 1-34
Recall Example 1-11, where two dice are tossed, and assume that each die is true. The 36 possible out-
comes were enumerated. If we consider two events,

A = {(d₁, d₂): d₁ + d₂ = 4},
B = {(d₁, d₂): d₁ > d₂},

where d₁ is the value of the up face of the first die and d₂ is the value of the up face of the second die,
then P(A) = 3/36, P(B) = 15/36, P(A ∩ B) = 1/36, P(B|A) = 1/3, and P(A|B) = 1/15. The probabilities were
obtained from a direct consideration of the sample space and the counting of outcomes. Note that

P(A|B) = P(A ∩ B)/P(B) and P(B|A) = P(A ∩ B)/P(A).

Example 1-35
In World War II an early operations research effort in England was directed at establishing search pat-
terns for U-boats by patrol flights or sorties. For a time, there was a tendency to concentrate the flights
on in-shore areas, as it had been believed that more sightings took place in-shore. The research group
studied 1000 sortie records with the following result (the data are fictitious):

                 In-shore   Off-shore   Total
Sighting             80         20        100
No sighting         820         80        900
Total sorties       900        100       1000

Let S₁: There was a sighting.
S₂: There was no sighting.
B₁: In-shore sortie.
B₂: Off-shore sortie.

We see immediately that

P(S₁|B₁) = 80/900 ≈ 0.089 and P(S₁|B₂) = 20/100 = 0.20,

which indicates a search strategy counter to prior practice.

Definition
We may define the conditional probability of event A given event B as

P(A|B) = P(A ∩ B)/P(B),   P(B) > 0.   (1-11)
This definition results from the intuitive notion presented in the preceding discussion. The
conditional probability P(·|·) satisfies the properties required of probabilities. That is,

1. 0 ≤ P(A|B) ≤ 1.
2. P(𝒮|B) = 1.
3. P(A₁ ∪ A₂ ∪ ⋯ ∪ Aₖ | B) = Σᵢ₌₁ᵏ P(Aᵢ|B) if Aᵢ ∩ Aⱼ = ∅ for i ≠ j.
4. P(A₁ ∪ A₂ ∪ ⋯ | B) = Σᵢ₌₁^∞ P(Aᵢ|B) for A₁, A₂, A₃, ..., a denumerable sequence of disjoint events.
In practice we may solve problems by using equation 1-11 and calculating P(A ∩ B) and
P(B) with respect to the original sample space (as was illustrated in Example 1-35) or by
considering the probability of A with respect to the reduced sample space B (as was illus-
trated in Example 1-34).
A restatement of equation 1-11 leads to what is often called the multiplication rule,
that is,

P(A ∩ B) = P(B)·P(A|B),   P(B) > 0,

and

P(A ∩ B) = P(A)·P(B|A),   P(A) > 0.   (1-12)

The second statement is an obvious consequence of equation 1-11 with the conditioning on
event A rather than event B.
It should be noted that if A and B are mutually exclusive as indicated in Fig. 1-9, then
A ∩ B = ∅, so that P(A|B) = 0 and P(B|A) = 0.
In the other extreme, if B ⊂ A, as shown in Fig. 1-10, then P(A|B) = 1. In the first case,
A and B cannot occur simultaneously, so knowledge of the occurrence of B tells us that A
does not occur. In the second case, if B occurs, A must occur. On the other hand, there are
many cases where the events are totally unrelated, and knowledge of the occurrence of one
has no bearing on and yields no information about the other. Consider, for example, the
experiment where a true coin is tossed twice. Event A is the event that the first toss results
in a "heads," and event B is the event that the second toss results in a "heads." Note that
P(A) = 1/2, since the coin is true, and P(B|A) = 1/2, since the coin is true and it has no memory.
The occurrence of event A did not in any way affect the occurrence of B, and if we wanted
to find the probability of A and B occurring, that is, P(A ∩ B), we find that

P(A ∩ B) = P(A)·P(B|A) = (1/2)·(1/2) = 1/4.

We may observe that if we had no knowledge about the occurrence or nonoccurrence of A,
we have P(B) = P(B|A), as in this example.

Figure 1-9 Mutually exclusive events. Figure 1-10 Event B as a subset of A.

Informally speaking, two events are considered to be independent if the probability of


the occurrence of one is not affected by the occurrence or nonoccurrence of the other. This
leads to the following definition.

Definition

A and B are independent if and only if


P(A ∩ B) = P(A)·P(B).   (1-13)
An immediate consequence of this definition is that if A and B are independent events, then
from equation 1-12,

P(A|B) = P(A) and P(B|A) = P(B).   (1-14)


The following theorem is sometimes useful. The proof is given here only for the first part.

Theorem 1-7
If A and B are independent events, then the following holds:
1. A and B̄ are independent events.
2. Ā and B are independent events.
3. Ā and B̄ are independent events.

Proof (Part 1)

P(A ∩ B̄) = P(A)·P(B̄|A)
= P(A)·[1 − P(B|A)]
= P(A)·[1 − P(B)]
= P(A)·P(B̄).
In practice, there are many situations where it may not be easy to determine whether
two events are independent; however, there are numerous other cases where the require-
ments may be either justified or approximated from a physical consideration of the exper-
iment. A sampling experiment will serve to illustrate.

Example 1-36
Suppose a random sample of size 2 is to be selected from a lot of size 100, and it is known that 98 of
the 100 items are good. The sample is taken in such a manner that the first item is observed and
replaced before the second item is selected. If we let
A: First item observed is good,
B: Second item observed is good,
and if we want to determine the probability that both items are good, then

P(A ∩ B) = P(A)·P(B) = (98/100)·(98/100) = 0.9604.

If the sample is taken "without replacement," so that the first item is not replaced before the second
item is selected, then

P(A ∩ B) = P(A)·P(B|A) = (98/100)·(97/99) = 0.9602.

The results are obviously very close, and one common practice is to assume the events independent
when the sampling fraction (sample size/population size) is small, say less than 0.1.
Example 1-37
The field of reliability engineering has developed rapidly since the early 1960s. One type of problem
encountered is that of estimating system reliability given subsystem reliabilities. Reliability is defined
here as the probability of proper functioning for a stated period of time. Consider the structure of a
simple serial system, shown in Fig. 1-11. The system functions if and only if both subsystems func-
tion. If the subsystems survive independently, then

System reliability = Rₛ = R₁·R₂,

where R₁ and R₂ are the reliabilities for subsystems 1 and 2, respectively. For example, if R₁ = 0.90
and R₂ = 0.80, then Rₛ = 0.72.

Example 1-37 illustrates the need to generalize the concept of independence to more
than two events. Suppose the system consisted of three subsystems or perhaps 20 subsys-
tems. What conditions would be required in order to allow the analyst to obtain an estimate
of system reliability by obtaining the product of the subsystem reliabilities?

Definition
The k events A₁, A₂, ..., Aₖ are mutually independent if and only if the probability of the
intersection of any 2, 3, ..., k of these sets is the product of their respective probabilities.
Stated more precisely, we require that for r = 2, 3, ..., k, and any indices i₁ < i₂ < ⋯ < iᵣ,

P(Aᵢ₁ ∩ Aᵢ₂ ∩ ⋯ ∩ Aᵢᵣ) = P(Aᵢ₁)·P(Aᵢ₂) ⋯ P(Aᵢᵣ) = ∏ⱼ₌₁ʳ P(Aᵢⱼ).

In the case of serial system reliability calculations where mutual independence may
reasonably be assumed, the system reliability is a product of subsystem reliabilities:

Rₛ = R₁R₂ ⋯ Rₖ.   (1-15)
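
Equation 1-15 amounts to one product over the subsystem reliabilities. A minimal Python sketch (assuming Python 3.8+ for math.prod; the 20-subsystem example is our own):

```python
from math import prod

def series_reliability(reliabilities):
    """Equation 1-15: R_s = R1 * R2 * ... * Rk for mutually
    independent subsystems in series."""
    return prod(reliabilities)

print(series_reliability([0.90, 0.80]))   # 0.72, as in Example 1-37
print(series_reliability([0.99] * 20))    # about 0.818 for 20 subsystems
```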


In the foregoing definition there are 2ᵏ − k − 1 conditions to be satisfied. Consider three
events, A, B, and C. These are independent if and only if P(A ∩ B) = P(A)·P(B), P(A ∩ C)
= P(A)·P(C), P(B ∩ C) = P(B)·P(C), and P(A ∩ B ∩ C) = P(A)·P(B)·P(C). The fol-
lowing example illustrates a case where events are pairwise independent but not mutually
independent.

Figure 1-11 A simple serial system.

Example 1-38
Suppose the sample space, with equally likely outcomes, for a particular experiment is as follows:

𝒮 = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}.

Let A₀: First digit is zero.
A₁: First digit is one.
B₀: Second digit is zero.
B₁: Second digit is one.
C₀: Third digit is zero.
C₁: Third digit is one.

It follows that

P(A₀) = P(A₁) = P(B₀) = P(B₁) = P(C₀) = P(C₁) = 1/2,

and it is easily seen that

P(Aᵢ ∩ Bⱼ) = 1/4 = P(Aᵢ)·P(Bⱼ),   i = 0, 1,   j = 0, 1,
P(Aᵢ ∩ Cⱼ) = 1/4 = P(Aᵢ)·P(Cⱼ),   i = 0, 1,   j = 0, 1,
P(Bᵢ ∩ Cⱼ) = 1/4 = P(Bᵢ)·P(Cⱼ),   i = 0, 1,   j = 0, 1.

However, we note that

P(A₀ ∩ B₀ ∩ C₀) = 1/4 ≠ P(A₀)·P(B₀)·P(C₀),
P(A₀ ∩ B₀ ∩ C₁) = 0 ≠ P(A₀)·P(B₀)·P(C₁),

and there are other triplets to which this violation could be extended.

The concept of independent experiments is introduced to complete this section. If we con-
sider two experiments denoted ℰ₁ and ℰ₂ and let A₁ and A₂ be arbitrary events defined on
the respective sample spaces 𝒮₁ and 𝒮₂ of the two experiments, then the following defini-
tion can be given.

Definition
If P(A₁ ∩ A₂) = P(A₁)·P(A₂), then ℰ₁ and ℰ₂ are said to be independent experiments.

1-8 PARTITIONS, TOTAL PROBABILITY, AND BAYES’ THEOREM


A partition of the sample space may be defined as follows.

Definition
If B₁, B₂, ..., Bₖ are disjoint subsets of 𝒮 (mutually exclusive events), and if B₁ ∪ B₂ ∪ ⋯
∪ Bₖ = 𝒮, then these subsets are said to form a partition of 𝒮.

When the experiment is performed, one and only one of the events Bᵢ occurs if we
have a partition of 𝒮.

Example 1-39
A particular binary "word" consists of five "bits," b₁, b₂, b₃, b₄, b₅, where bᵢ = 0, 1, for i = 1, 2, 3, 4, 5. An
experiment consists of transmitting a "word," and it follows that there are 32 possible words. If the
events are
B₁ = {(0, 0, 0, 0, 0), (0, 0, 0, 0, 1)},
B₂ = {(0, 0, 0, 1, 0), (0, 0, 0, 1, 1), (0, 0, 1, 0, 0), (0, 0, 1, 0, 1), (0, 0, 1, 1, 0), (0, 0, 1, 1, 1)},
B₃ = {(0, 1, 0, 0, 0), (0, 1, 0, 0, 1), (0, 1, 0, 1, 0), (0, 1, 0, 1, 1), (0, 1, 1, 0, 0), (0, 1, 1, 0, 1),
(0, 1, 1, 1, 0), (0, 1, 1, 1, 1)},
B₄ = {(1, 0, 0, 0, 0), (1, 0, 0, 0, 1), (1, 0, 0, 1, 0), (1, 0, 0, 1, 1), (1, 0, 1, 0, 0), (1, 0, 1, 0, 1),
(1, 0, 1, 1, 0), (1, 0, 1, 1, 1)},
B₅ = {(1, 1, 0, 0, 0), (1, 1, 0, 0, 1), (1, 1, 0, 1, 0), (1, 1, 0, 1, 1), (1, 1, 1, 0, 0), (1, 1, 1, 0, 1),
(1, 1, 1, 1, 0)},
B₆ = {(1, 1, 1, 1, 1)},
then 𝒮 is partitioned by the events B₁, B₂, B₃, B₄, B₅, and B₆.

In general, if k events Bᵢ (i = 1, 2, ..., k) form a partition and A is an arbitrary event with
respect to 𝒮, then we may write

A = (A ∩ B₁) ∪ (A ∩ B₂) ∪ ⋯ ∪ (A ∩ Bₖ),

so that

P(A) = P(A ∩ B₁) + P(A ∩ B₂) + ⋯ + P(A ∩ Bₖ),

since the events (A ∩ Bᵢ) are pairwise mutually exclusive. (See Fig. 1-12 for k = 4.) It does
not matter that A ∩ Bᵢ = ∅ for some or all of the i, since P(∅) = 0.
Using the results of equation 1-12 we can state the following theorem.

Theorem 1-8
If B,, B,, ..., B, represents a partition of & and A is an arbitrary event on &, then the total
probability of A is given by
k
P(A) = P(B,): P(A| B,)+ P(A| B,) +:+++ P(B,): P(A B,) = > P(B)P(A |B;).
i=l
The result of Theorem 1-8, also known as the law of total probability, is very useful, as there
are numerous practical situations in which P(A) cannot be computed directly. However,

Figure 1-12 Partition of 𝒮.



with the information that Bᵢ has occurred, it is possible to evaluate P(A|Bᵢ) and thus deter-
mine P(A) when the values P(Bᵢ) are obtained.
Another important result of the total probability law is known as Bayes’ theorem.

Theorem 1-9
If B₁, B₂, ..., Bₖ constitute a partition of the sample space 𝒮 and A is an arbitrary event on
𝒮, then for r = 1, 2, ..., k,

P(Bᵣ|A) = P(Bᵣ)·P(A|Bᵣ) / Σᵢ₌₁ᵏ P(Bᵢ)·P(A|Bᵢ).   (1-16)

Proof

P(Bᵣ|A) = P(Bᵣ ∩ A)/P(A) = P(Bᵣ)·P(A|Bᵣ) / Σᵢ₌₁ᵏ P(Bᵢ)·P(A|Bᵢ).

The numerator is a result of equation 1-12 and the denominator is a result of Theorem 1-8.

Example 1-40
Three facilities supply microprocessors to a manufacturer of telemetry equipment. All are supposedly
made to the same specifications. However, the manufacturer has for several years tested the micro-
processors, and the records indicate the following:

Facility   Fraction Defective   Fraction Supplied
1          0.02                 0.15
2          0.01                 0.80
3          0.03                 0.05

The manufacturer has stopped testing because of the costs involved, and it may be reasonably
assumed that the fractions that are defective and the inventory mix are the same as during the period
of record keeping. The director of manufacturing randomly selects a microprocessor, takes it to the
test department, and finds that it is defective. If we let A be the event that an item is defective, and Bᵢ
be the event that the item came from facility i (i = 1, 2, 3), then we can evaluate P(Bᵢ|A). Suppose, for
instance, that we are interested in determining P(B₃|A). Then

P(B₃|A) = P(B₃)·P(A|B₃) / [P(B₁)·P(A|B₁) + P(B₂)·P(A|B₂) + P(B₃)·P(A|B₃)]
= (0.05)(0.03) / [(0.15)(0.02) + (0.80)(0.01) + (0.05)(0.03)] = 0.0015/0.0125 = 3/25 = 0.12.
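
Bayes' theorem (equation 1-16) is a one-line computation once the priors P(Bᵢ) and conditionals P(A|Bᵢ) are in hand. The sketch below, with our own function name, reproduces the posterior P(B₃|A) = 0.12 from the example above.

```python
def posterior(priors, likelihoods):
    """Equation 1-16: returns P(B_r | A) for every r, given the
    priors P(B_i) and the conditionals P(A | B_i)."""
    joint = [p * q for p, q in zip(priors, likelihoods)]
    total = sum(joint)              # P(A) by the law of total probability
    return [j / total for j in joint]

# Inventory mix P(B_i) and fraction defective P(A | B_i) from the example.
print(posterior([0.15, 0.80, 0.05], [0.02, 0.01, 0.03]))
# [0.24, 0.64, 0.12] -- the last entry is P(B3 | A)
```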

1-9 SUMMARY
This chapter has introduced the concept of random experiments, the sample space and
events, and has presented a formal definition of probability. This was followed by methods
to assign probability to events. Theorems 1-1 to 1-6 provide results for dealing with the
probability of special events. Finite sample spaces with their special properties were dis-
cussed, and enumeration methods were reviewed for use in assigning probability to events
in the case of equally likely experimental outcomes. Conditional probability was defined
and illustrated, along with the concept of independent events. In addition, we considered
partitions of the sample space, total probability, and Bayes' theorem. The concepts pre-
sented in this chapter form an important background for the rest of the book.

1-10 EXERCISES
1-1. Television sets are given a final inspection following assembly. Three types of defects are identified, critical, major, and minor defects, and are coded A, B, and C, respectively, by a mail-order house. Data are analyzed with the following results.

Sets having only critical defects 2%
Sets having only major defects 5%
Sets having only minor defects 7%
Sets having only critical and major defects 3%
Sets having only critical and minor defects 4%
Sets having only major and minor defects 3%
Sets having all three types of defects 1%

(a) What fraction of the sets has no defects?
(b) Sets with either critical defects or major defects (or both) get a complete rework. What fraction falls in this category?

1-2. Illustrate the following properties by shadings or colors on Venn diagrams.
(a) Associative laws: A ∪ (B ∪ C) = (A ∪ B) ∪ C, A ∩ (B ∩ C) = (A ∩ B) ∩ C.
(b) Distributive laws: A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C), A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C).
(c) If A ⊂ B, then A ∩ B = A.
(d) If A ⊂ B, then A ∪ B = B.
(e) If A ∩ B = ∅, then A ⊂ B̄.
(f) If A ⊂ B and B ⊂ C, then A ⊂ C.

1-3. Consider a universal set consisting of the integers 1 through 10, or U = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}. Let A = {2, 3, 4}, B = {3, 4, 5}, and C = {5, 6, 7}. By enumeration, list the membership of the following sets.
(a) A ∩ B.
(b) A ∪ B.
(c) Ā ∩ B.
(d) A ∩ (B ∩ C).
(e) Ā ∩ (B ∪ C).

1-4. A flexible circuit is selected at random from a production run of 1000 circuits. Manufacturing defects are classified into three different types, labeled A, B, and C. Type A defects occur 2% of the time, type B defects occur 1% of the time, and type C defects occur 1.5% of the time. Furthermore, it is known that 0.5% have both type A and B defects, 0.6% have both A and C defects, and 0.4% have B and C defects, while 0.2% have all three defects. What is the probability that the flexible circuit selected has at least one of the three types of defects?

1-5. In a human-factors laboratory, the reaction times of human subjects are measured as the elapsed time from the instant a position number is displayed on a digital display until the subject presses a button located at the position indicated. Two subjects are involved, and times are measured in seconds for each subject (t₁, t₂). What is the sample space for this experiment? Present the following events as subsets and mark them on a diagram: (t₁ + t₂)/2 ≤ 0.15, max(t₁, t₂) ≤ 0.15, |t₁ − t₂| ≤ 0.06.

1-6. During a 24-hour period, a computer is to be accessed at time X, used for some processing, and exited at time Y ≥ X. Take X and Y to be measured in hours on the time line with the beginning of the 24-hour period as the origin. The experiment is to observe X and Y.
(a) Describe the sample space 𝒮.
(b) Sketch the following events in the X, Y plane.
(i) The time of use is 1 hour or less.

(ii) The access is before t₁ and the exit is after t₂, where 0 ≤ t₁ < t₂ ≤ 24.
(iii) The time of use is less than 20% of the period.

1-7. Diodes from a batch are tested one at a time and marked either defective or nondefective. This is continued until either two defective items are found or five items have been tested. Describe the sample space for this experiment.

1-8. A set has four elements, A = {a, b, c, d}. Describe the power set {0, 1}^A.

1-9. Describe the sample space for each of the following experiments.
(a) A lot of 120 battery lids for pacemaker cells is known to contain a number of defectives because of a problem with the barrier material applied to the glassed-in feed-through. Three lids are randomly selected (without replacement) and are carefully inspected following a cut down.
(b) A pallet of 10 castings is known to contain one defective and nine good units. Four castings are randomly selected (without replacement) and inspected.

1-10. The production manager of a certain company is interested in testing a finished product, which is available in lots of size 50. She would like to rework a lot if she can be reasonably sure that 10% of the items are defective. She decides to select a random sample of 10 items without replacement and rework the lot if it contains one or more defective items. Does this procedure seem reasonable?

1-11. A trucking firm has a contract to ship a load of goods from city W to city Z. There are no direct routes connecting W to Z, but there are six roads from W to X and five roads from X to Z. How many total routes are there to be considered?

1-12. A state has one million registered vehicles and is considering using license plates with six symbols where the first three are letters and the last three are digits. Is this scheme feasible?

1-13. The manager of a small plant wishes to determine the number of ways he can assign workers to the first shift. He has 15 workers who can serve as operators of the production equipment, eight who can serve as maintenance personnel, and four who can be supervisors. If the shift requires six operators, two maintenance personnel, and one supervisor, how many ways can the first shift be manned?

1-14. A production lot has 100 units of which 20 are known to be defective. A random sample of four units is selected without replacement. What is the probability that the sample will contain no more than two defective units?

1-15. In inspecting incoming lots of merchandise, the following inspection rule is used where the lots contain 300 units. A random sample of 10 items is selected. If there is no more than one defective item in the sample, the lot is accepted. Otherwise it is returned to the vendor. If the fraction defective in the original lot is p′, determine the probability of accepting the lot as a function of p′.

1-16. In a plastics plant 12 pipes empty different chemicals into a mixing vat. Each pipe has a five-position gauge that measures the rate of flow into the vat. One day, while experimenting with various mixtures, a solution is obtained that emits a poisonous gas. The settings on the gauges were not recorded. What is the probability of obtaining this same solution when randomly experimenting again?

1-17. Eight equally skilled men and women are applying for two jobs. Because the two new employees must work closely together, their personalities should be compatible. To achieve this, the personnel manager has administered a test and must compare the scores for each possibility. How many comparisons must the manager make?

1-18. By accident, a chemist combined two laboratory substances that yielded a desirable product. Unfortunately, her assistant did not record the names of the ingredients. There are forty substances available in the lab. If the two in question must be located by successive trial-and-error experiments, what is the maximum number of tests that might be made?

1-19. Suppose, in the previous problem, a known catalyst was used in the first accidental reaction. Because of this, the order in which the ingredients are mixed is important. What is the maximum number of tests that might be made?

1-20. A company plans to build five additional warehouses at new locations. Ten locations are under consideration. How many total possible choices are there for the set of five locations?

1-21. Washing machines can have five kinds of major and five kinds of minor defects. In how many ways can one major and one minor defect occur? In how many ways can two major and two minor defects occur?

1-22. Consider the diagram at the top of the next page of an electronic system, which shows the probabilities of the system components operating properly. The entire system operates if assembly III and at least one of the components in each of assemblies I and II

operates. Assume that the components of each assembly operate independently and that the assemblies operate independently. What is the probability that the entire system operates?

1-23. How is the probability of system operation affected if, in the foregoing problem, the probability of successful operation for the component in assembly III changes from 0.99 to 0.9?

1-24. Consider the series-parallel assembly shown below. The values Rᵢ (i = 1, 2, 3, 4, 5) are the reliabilities for the five components shown, that is, Rᵢ = probability that unit i will function properly. The components operate (and fail) in a mutually independent manner and the assembly fails only when the path from A to B is broken. Express the assembly reliability as a function of R₁, R₂, R₃, R₄, and R₅.

1-25. A political prisoner is to be exiled to either Siberia or the Urals. The probabilities of being sent to these places are 0.6 and 0.4, respectively. It is also known that if a resident of Siberia is selected at random the probability is 0.5 that he will be wearing a fur coat, whereas the probability is 0.7 that a resident of the Urals will be wearing one. Upon arriving in exile, the first person the prisoner sees is not wearing a fur coat. What is the probability he is in Siberia?

1-26. A braking device designed to prevent automobile skids may be broken down into three series subsystems that operate independently: an electronics system, a hydraulic system, and a mechanical activator. On a particular braking, the reliabilities of these units are approximately 0.995, 0.993, and 0.994, respectively. Estimate the system reliability.

1-27. Two balls are drawn from an urn containing m balls numbered from 1 to m. The first ball is kept if it is numbered 1 and returned to the urn otherwise. What is the probability that the second ball drawn is numbered 2?

1-28. Two digits are selected at random from the digits 1 through 9 and the selection is without replacement (the same digit cannot be picked on both selections). If the sum of the two digits is even, find the probability that both digits are odd.

1-29. At a certain university, 20% of the men and 1% of the women are over 6 feet tall. Furthermore, 40% of the students are women. If a student is randomly picked and is observed to be over 6 feet tall, what is the probability that the student is a woman?

1-30. At a machine center there are four automatic screw machines. An analysis of past inspection records yields the following data.

Machine   Percent Production   Percent Defectives Produced
1         15                   4
2         30                   3
3         20                   5
4         35                   2

Machines 2 and 4 are newer and more production has been assigned to them than to machines 1 and 3. Assume that the current inventory mix reflects the production percentages indicated.
(a) If a screw is randomly picked from inventory, what is the probability that it will be defective?
(b) If a screw is picked and found to be defective, what is the probability that it was produced by machine 3?

1-31. A point is selected at random inside a circle. What is the probability that the point is closer to the center than to the circumference?

1-32. Complete the details of the proof for Theorem 1-4 in the text.

1-33. Prove Theorem 1-5.

1-34. Prove the second and third parts of Theorem 1-7.

1-35. Suppose there are n people in a room. If a list is made of all their birthdays (the specific month and day of the month), what is the probability that two or more persons have the same birthday? Assume there are 365 days in the year and that each day is equally likely to occur for any person's birthday. Let B be the event that two or more persons have the same birthday. Find P(B) and P(B̄) for n = 10, 20, 21, 22, 23, 24, 25, 30, 40, 50, and 60.

1-36. In a certain dice game, players continue to throw two dice until they either win or lose. The player wins on the first throw if the sum of the two upturned faces is either 7 or 11 and loses if the sum is 2, 3, or 12. Otherwise, the sum of the faces becomes the player's "point." The player continues to throw until the first succeeding throw on which he makes his point (in which case he wins), or until he throws a 7 (in which case he loses). What is the probability that the player with the dice will eventually win the game?

1-37. The industrial engineering department of the XYZ Company is performing a work sampling study on eight technicians. The engineer wishes to randomize the order in which he visits the technicians' work areas. In how many ways may he arrange these visits?

1-38. A hiker leaves point A shown in the figure below, choosing at random one path from AB, AC, AD, and AE. At each subsequent junction she chooses another path at random. What is the probability that she arrives at point X?

1-39. Three printers do work for the publications office of Georgia Tech. The publications office does not negotiate a contract penalty for late work, and the data below reflect a large amount of experience with these printers.

Printer i   Fraction of Contracts Held by Printer i   Fraction of Deliveries More than One Month Late
1           0.2                                       0.2
2           0.3                                       0.5
3           0.5                                       0.3

A department observes that its recruiting booklet is more than a month late. What is the probability that the contract is held by printer 3?

1-40. Following aircraft accidents, a detailed investigation is conducted. The probability that an accident due to structural failure is correctly identified is 0.9, and the probability that an accident that is not due to structural failure is identified incorrectly as due to structural failure is 0.2. If 25% of all aircraft accidents are due to structural failures, find the probability that an aircraft accident is due to structural failure given that it has been diagnosed as due to structural failure.
Chapter 2

One-Dimensional Random Variables

2-1 INTRODUCTION
The objectives of this chapter are to introduce the concept of random variables, to define
and illustrate probability distributions and cumulative distribution functions and to present
useful characterizations for random variables.
When describing the sample space of a random experiment, it is not necessary to spec-
ify that an individual outcome be a number. In several examples we observe this, such as in
Example 1-9, where a true coin is tossed three times and the sample space is 𝒮 = {HHH,
HHT, HTH, HTT, THH, THT, TTH, TTT}, or Example 1-15, where probes of solder joints
yield a sample space 𝒮 = {GG, GD, DG, DD}.
In most experimental situations, however, we are interested in numerical outcomes. For
example, in the illustration involving coin tossing we might assign some real number x to
every element of the sample space. In general, we want to assign a real number x to every out-
come e of the sample space 𝒮. A functional notation will be used initially, so that x = X(e),
where X is the function. The domain of X is 𝒮, and the numbers in the range are real numbers.
The function X is called a random variable. Figure 2-1 illustrates the nature of this function.

Definition
If ℰ is an experiment having sample space 𝒮, and X is a function that assigns a real num-
ber X(e) to every outcome e ∈ 𝒮, then X(e) is called a random variable.

Example 2-1
Consider the coin-tossing experiment discussed in the preceding paragraphs. If X is the number of
heads showing, then X(HHH) = 3, X(HHT) = 2, X(HTH) = 2, X(HTT) = 1, X(THH) = 2, X(THT) = 1,
X(TTH) = 1, and X(TTT) = 0. The range space R_X = {x: x = 0, 1, 2, 3} in this example (see Fig. 2-2).
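
The induced probabilities of Example 2-1 can be checked by brute force: enumerate the equally likely outcomes, apply the function X, and tally. A minimal Python sketch (our own, not from the text):

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# The eight equally likely outcomes of tossing a true coin three times.
sample_space = ["".join(e) for e in product("HT", repeat=3)]

def X(e):
    """The random variable: number of heads in outcome e."""
    return e.count("H")

# Induced probabilities on the range space R_X = {0, 1, 2, 3}.
counts = Counter(X(e) for e in sample_space)
pmf = {x: Fraction(counts[x], len(sample_space)) for x in sorted(counts)}
print(pmf)   # {0: 1/8, 1: 3/8, 2: 3/8, 3: 1/8}
```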

Figure 2-1 The concept of a random variable. (a) 𝒮: The sample space of ℰ. (b) R_X: The range space of X.


Figure 2-2 The number of heads in three coin tosses.

The reader should recall that for all functions and for every element in the domain,
there is exactly one value in the range. In the case of the random variable, for every outcome
e ∈ 𝒮 there corresponds exactly one value X(e). It should be noted that different values of
e may lead to the same x, as was the case where X(TTH) = 1, X(THT) = 1, and X(HTT) = 1
in the preceding example.
Where the outcome in 𝒮 is already the numerical characteristic desired, then X(e) = e,
the identity function. Example 1-13, in which a cathode ray tube was aged to failure, is a
good example. Recall that 𝒮 = {t: t ≥ 0}. If X is the time to failure, then X(t) = t. Some
authors call this type of sample space a numerical-valued phenomenon.
The range space R_X is made up of all possible values of X, and in subsequent work it
will not be necessary to indicate the functional nature of X. Here we are concerned with
events that are associated with R_X, and the random variable X will induce probabilities onto
these events. If we return again to the coin-tossing experiment for illustration and assume
the coin to be true, there are eight equally likely outcomes: HHH, HHT, HTH, HTT, THH,
THT, TTH, and TTT, each having probability 1/8. Now suppose A is the event "exactly two
heads" and, as previously, we let X represent the number of heads (see Fig. 2-2). The event
(X = 2) relates to R_X, not 𝒮; however, P_X(X = 2) = P(A) = 3/8, since A = {HHT, HTH,
THH} is the equivalent event in 𝒮, and probability was defined on events in the sample
space. The random variable X induced the probability of 3/8 to the event (X = 2). Note that
parentheses will be used to denote an event in the range of the random variable, and in gen-
eral we will write P_X(X = x).
In order to generalize this notion, consider the following definition.

Definition

If 𝒮 is the sample space of an experiment ℰ and a random variable X with range space R_X
is defined on 𝒮, and furthermore if event A is an event in 𝒮 while event B is an event in R_X,
then A and B are equivalent events if

A = {e ∈ 𝒮: X(e) ∈ B}.


Figure 2-3 illustrates this concept.
More simply, if event A in 𝒮 consists of all outcomes in 𝒮 for which X(e) ∈ B, then A
and B are equivalent events. Whenever A occurs, B occurs; and whenever B occurs, A
occurs. Note that A and B are associated with different spaces.

Figure 2-3 Equivalent events.

Definition

If A is an event in the sample space and B is an event in the range space R_X of the random
variable X, then we define the probability of B as

P_X(B) = P(A), where A = {e ∈ 𝒮: X(e) ∈ B}.

With this definition, we may assign probabilities to events in R_X in terms of probabili-
ties defined on events in 𝒮, and we will suppress the function X, so that P_X(X = 2) = 3/8 in
the familiar coin-tossing example means that there is an event A = {HHT, HTH, THH} = {e:
X(e) = 2} in the sample space with probability 3/8. In subsequent work, we will not deal with
the nature of the function X, since we are interested in the values of the range space and
their associated probabilities. While outcomes in the sample space may not be real num-
bers, it is noted again that all elements of the range of X are real numbers.
An alternative but similar approach makes use of the inverse of the function X. We
would simply define X⁻¹(B) as

X⁻¹(B) = {e ∈ 𝒮: X(e) ∈ B},

so that

P_X(B) = P(X⁻¹(B)) = P(A).

Table 2-1 Equivalent Events

Some Events in R_Y   Equivalent Events in 𝒮   Probability
Y = 2    {(1, 1)}    1/36
Y = 3    {(1, 2), (2, 1)}    2/36
Y = 4    {(1, 3), (2, 2), (3, 1)}    3/36
Y = 5    {(1, 4), (2, 3), (3, 2), (4, 1)}    4/36
Y = 6    {(1, 5), (2, 4), (3, 3), (4, 2), (5, 1)}    5/36
Y = 7    {(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)}    6/36
Y = 8    {(2, 6), (3, 5), (4, 4), (5, 3), (6, 2)}    5/36
Y = 9    {(3, 6), (4, 5), (5, 4), (6, 3)}    4/36
Y = 10   {(4, 6), (5, 5), (6, 4)}    3/36
Y = 11   {(5, 6), (6, 5)}    2/36
Y = 12   {(6, 6)}    1/36

The following examples illustrate the sample space-range space relationship, and the
concern with the range space rather than the sample space is evident, since numerical
results are of interest.

Example 2-2
Consider the tossing of two true dice as described in Example 1-11. (The sample space was described
in Chapter 1.) Suppose we define a random variable Y as the sum of the "up" faces. Then R_Y = {2, 3,
4, 5, 6, 7, 8, 9, 10, 11, 12}, and the probabilities are 1/36, 2/36, 3/36, 4/36, 5/36, 6/36, 5/36, 4/36,
3/36, 2/36, and 1/36, respectively. Table 2-1 shows equivalent events. The reader will recall that there
are 36 outcomes, which, since the dice are true, are equally likely.
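
Table 2-1 can be generated by enumerating the 36 equally likely outcomes and grouping them by the value of Y, as in this illustrative Python sketch (our own, not from the text):

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Y = sum of the up faces of two true dice; 36 equally likely outcomes.
counts = Counter(d1 + d2 for d1, d2 in product(range(1, 7), repeat=2))
for y in sorted(counts):
    print(y, Fraction(counts[y], 36))   # reproduces the last column of Table 2-1
```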

Example 2-3
One hundred cardiac pacemakers were placed on life test in a saline solution held as close to body
temperature as possible. The test is functional, with pacemaker output monitored by a system pro-
viding for output signal conversion to digital form for comparison against a design standard. The test
was initiated on July 1, 1997. When a pacer output varies from the standard by as much as 10%, this
is considered a failure and the computer records the date and the time of day (d, t). If X is the random
variable "time to failure," then 𝒮 = {(d, t): d = date, t = time} and R_X = {x: x ≥ 0}. The random vari-
able X is the total number of elapsed time units since the module went on test. We will deal directly
with X and its probability law. This concept will be discussed in the following sections.

2-2 THE DISTRIBUTION FUNCTION


As a convention we will use a lowercase version of the same letter to denote a particular value of a
random variable. Thus (X = x), (X ≤ x), (X < x) are events in the range space of the random
variable X, where x is a real number. The probability of the event (X ≤ x) may be expressed
as a function of x as

F_X(x) = P(X ≤ x).   (2-1)

This function F_X is called the distribution function or cumulative function or cumulative dis-
tribution function (CDF) of the random variable X.

Example 2-4

In the case of the coin-tossing experiment, the random variable X assumed four values, 0, 1, 2, 3, with
probabilities 1/8, 3/8, 3/8, 1/8. We can state F_X(x) as follows:

F_X(x) = 0,     x < 0,
       = 1/8,  0 ≤ x < 1,
       = 4/8,  1 ≤ x < 2,
       = 7/8,  2 ≤ x < 3,
       = 1,    x ≥ 3.

A graphical representation is as shown in Fig. 2-4.



Figure 2-4 A distribution function for the number of heads in tossing three true coins.

Example 2-5
Again recall Example 1-13, where a cathode ray tube is aged to failure. Now 𝒮 = {t: t ≥ 0}, and if we
let X represent the elapsed time in hours to failure, then the event (X ≤ x) is in the range space of X. A
mathematical model that assigns probability to (X ≤ x) is

F_X(x) = 0,            x ≤ 0,
       = 1 − e^{−λx},  x > 0,

where λ is a positive number called the failure rate (failures/hour). The use of this "exponential"
model in practice depends on certain assumptions about the failure process. These assumptions will
be presented in more detail later. A graphical representation of the cumulative distribution for the time
to failure for the CRT is shown in Fig. 2-5.

Example 2-6
A customer enters a bank where there is a common waiting line for all tellers, with the individual at
the head of the line going to the first teller that becomes available. Thus, as the customer enters, the
waiting time prior to moving to the teller is assigned to the random variable X. If there is no one in
line at the time of arrival and a teller is free, the waiting time is zero, but if others are waiting or all
tellers are busy, then the waiting time will assume some positive value. Although the mathematical
form of F_X depends on assumptions about this service system, a general graphical representation is as
illustrated in Fig. 2-6.

Cumulative distribution functions have the following properties, which follow directly
from the definition:
1. 0 ≤ F_X(x) ≤ 1, −∞ < x < ∞.

Figure 2-5 Distribution function of time to failure for CRT.

Figure 2-6 Waiting-time distribution function.

2. lim_{x→∞} F_X(x) = 1 and lim_{x→−∞} F_X(x) = 0.
3. The function is nondecreasing. That is, if x₁ ≤ x₂, then F_X(x₁) ≤ F_X(x₂).
4. The function is continuous from the right. That is, for all x and δ > 0,
lim_{δ→0}[F_X(x + δ) − F_X(x)] = 0.

In reviewing the last three examples, we note that in Example 2-4, the values of x for
which there is an increase in F_X(x) were integers, and where x is not an integer, then F_X(x)
has the value that it had at the nearest integer x to the left. In this case, F_X(x) has a saltus, or
jump, at the values 0, 1, 2, and 3 and proceeds from 0 to 1 in a series of such jumps. Exam-
ple 2-5 illustrates a different situation, where F_X(x) proceeds smoothly from 0 to 1 and is
continuous everywhere but not differentiable at x = 0. Finally, Example 2-6 illustrates a sit-
uation where there is a saltus at x = 0, and for x > 0, F_X(x) is continuous.
Using a simplified form of results from the Lebesgue decomposition theorem, it is noted
that we can represent F_X(x) as the sum of two component functions, say G_X(x) and H_X(x), or

F_X(x) = G_X(x) + H_X(x),   (2-2)

where G_X(x) is continuous and H_X(x) is a right-hand continuous step function with jumps
coinciding with those of F_X(x), and H_X(−∞) = 0. If G_X(x) = 0 for all x, then X is called a dis-
crete random variable, and if H_X(x) = 0, then X is called a continuous random variable.
Where neither situation holds, X is called a mixed type random variable, and although this
was illustrated in Example 2-6, this text will concentrate on purely discrete and continuous
random variables, since most of the engineering and management applications of statistics
and probability in this book relate either to counting or to simple measurement.

2-3 DISCRETE RANDOM VARIABLES


Although discrete random variables may result from a variety of experimental situations, in
engineering and applied science they are often associated with counting. If X is a discrete
random variable, then F_X(x) will have at most a countably infinite number of jumps, and
R_X = {x₁, x₂, ..., xₙ, ...}.

Example 2-7
Suppose that the number of working days in a particular year is 250 and that the records of employ-
ees are marked for each day they are absent from work. An experiment consists of randomly selecting

a record to observe the days marked absent. The random variable X is defined as the number of days
absent, so that R_X = {0, 1, 2, ..., 250}. This is an example of a discrete random variable with a finite
number of possible values.

Example 2-8
A Geiger counter is connected to a gas tube in such a way that it will record the background radiation
count for a selected time interval [0, t]. The random variable of interest is the count. If X denotes the
random variable, then R_X = {0, 1, 2, ..., k, ...}, and we have, at least conceptually, a countably infi-
nite range space (outcomes can be placed in a one-to-one correspondence with the natural numbers)
so that the random variable is discrete.

Definition
If X is a discrete random variable, we associate a number p_X(xᵢ) = P_X(X = xᵢ) with each out-
come xᵢ in R_X for i = 1, 2, ..., n, ..., where the numbers p_X(xᵢ) satisfy the following:
1. p_X(xᵢ) ≥ 0 for all i.
2. Σᵢ₌₁^∞ p_X(xᵢ) = 1.

We note immediately that

p_X(xᵢ) = F_X(xᵢ) − F_X(xᵢ₋₁)   (2-3)

and

F_X(xᵢ) = P_X(X ≤ xᵢ) = Σ_{xⱼ ≤ xᵢ} p_X(xⱼ).   (2-4)

The function p_X is called the probability function or probability mass function or prob-
ability law of the random variable, and the collection of pairs [(xᵢ, p_X(xᵢ)), i = 1, 2, ...] is
called the probability distribution of X. The function p_X is usually presented in either tabu-
lar, graphical, or mathematical form, as illustrated in the following examples.

Example 2-9
For the coin-tossing experiment of Example 1-9, where X = the number of heads, the probability dis-
tribution is given in both tabular and graphical form in Fig. 2-7. It will be recalled that R_X = {0, 1, 2, 3}.
The tabular presentation is

x     p_X(x)
0     1/8
1     3/8
2     3/8
3     1/8

Figure 2-7 Probability distribution for coin-tossing experiment.

Example 2-10
Suppose we have a random variable X with a probability distribution given by the relationship

p_X(x) = C(n, x) pˣ (1 − p)ⁿ⁻ˣ,   x = 0, 1, ..., n,
       = 0,   otherwise,   (2-5)

where n is a positive integer and 0 < p < 1. This relationship is known as the binomial distribution,
and it will be studied in more detail later. Although it would be possible to display this model in
graphical or tabular form for particular n and p by evaluating p_X(x) for x = 0, 1, 2, ..., n, this is seldom
done in practice.
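
A direct transcription of equation 2-5 (our own illustration, not part of the original text); note that the pmf sums to one:

```python
from math import comb

def binomial_pmf(x, n, p):
    """Equation 2-5: p_X(x) = C(n, x) p^x (1 - p)^(n - x)."""
    return comb(n, x) * p**x * (1 - p) ** (n - x)

# The probabilities sum to one over x = 0, 1, ..., n.
print(sum(binomial_pmf(x, 10, 0.3) for x in range(11)))   # 1.0 up to rounding
print(binomial_pmf(3, 10, 0.3))                           # about 0.2668
```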

Example 2-11
Recall the earlier discussion of random sampling from a finite population without replacement. Sup-
pose there are N objects of which D are defective. A random sample of size n is selected without
replacement, and if we let X represent the number of defectives in the sample, then

p_X(x) = C(D, x)·C(N − D, n − x)/C(N, n),   x = 0, 1, 2, ..., min(n, D),
       = 0,   otherwise.   (2-6)

This distribution is known as the hypergeometric distribution. In a particular case, suppose N = 100
items, D = 5 items, and n = 4; then

p_X(x) = C(5, x)·C(95, 4 − x)/C(100, 4),   x = 0, 1, 2, 3, 4,
       = 0,   otherwise.

In the event that either tabular or graphical presentation is desired, this would be as shown in Fig. 2-8;
however, unless there is some special reason to use these forms, we will use the mathematical relationship.

Figure 2-8 Some hypergeometric probabilities, N = 100, D = 5, n = 4.



Example 2-12
In Example 2-8, where the Geiger counter was prepared for detecting the background radiation count,
we might use the following relationship, which has been experimentally shown to be appropriate:

p_X(x) = e^{−λt}(λt)ˣ/x!,   x = 0, 1, 2, ...,
       = 0,   otherwise.   (2-7)

This is called the Poisson distribution, and at a later point it will be derived analytically. The param-
eter λ is the mean rate in "hits" per unit time, and x is the number of these "hits."
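
Equation 2-7 transcribes just as directly; here λt is the expected number of hits in the interval [0, t]. The function below and its example parameters are our own:

```python
from math import exp, factorial

def poisson_pmf(x, lam, t=1.0):
    """Equation 2-7: p_X(x) = e^(-lam*t) (lam*t)^x / x!, where lam is the
    mean rate of hits per unit time and t is the interval length."""
    m = lam * t
    return exp(-m) * m**x / factorial(x)

# Hypothetical background rate of 1.5 hits per unit time over t = 2.
print(poisson_pmf(0, 1.5, 2.0))   # e^(-3), about 0.0498
```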

These examples have illustrated some discrete probability distributions and alternate
means of presenting the pairs [(xᵢ, p_X(xᵢ)), i = 1, 2, ...]. In later sections a number of proba-
bility distributions will be developed, each from a set of postulates motivated from consid-
erations of real-world phenomena.
A general graphical presentation of the discrete distribution from Example 2-12 is
given in Fig. 2-9. This geometric interpretation is often useful in developing an intuitive
feeling for discrete distributions. There is a close analogy to mechanics if we consider the
probability distribution as a mass of one unit distributed over the real line in amounts p_X(xᵢ)
at points xᵢ, i = 1, 2, ..., n. Also, utilizing equation 2-3, we note the following useful result
where b ≥ a:

P_X(a < X ≤ b) = F_X(b) − F_X(a).   (2-8)

2-4 CONTINUOUS RANDOM VARIABLES


Recall from Section 2-2 that where H_X(x) = 0, X is called continuous. Then F_X(x) is contin-
uous, F_X(x) has derivative f_X(x) = (d/dx)F_X(x) for all x (with the exception of possibly a
countable number of values), and f_X(x) is piecewise continuous. Under these conditions, the
range space R_X will consist of one or more intervals.
An interesting difference from the case of discrete random variables is that for δ > 0,

P_X(X = x) = lim_{δ→0}[F_X(x) − F_X(x − δ)] = 0.   (2-9)

We define the probability density function f_X(x) as

f_X(x) = (d/dx)F_X(x),   (2-10)

Figure 2-9 Geometric interpretation of a probability distribution.



and it follows that

F_X(x) = ∫_{−∞}^{x} f_X(t) dt.   (2-11)

We also note the close correspondence in this form with equation 2-4, with an integral
replacing a summation symbol, and the following properties of f_X(x):
1. f_X(x) ≥ 0 for all x ∈ R_X.
2. ∫_{R_X} f_X(x) dx = 1.
3. f_X(x) is piecewise continuous.
4. f_X(x) = 0 if x is not in the range R_X.
These concepts are illustrated in Fig. 2-10. This definition of a density function stipu-
lates a function f_X defined on R_X such that

P{e ∈ 𝒮: a ≤ X(e) ≤ b} = P_X(a ≤ X ≤ b) = ∫_{a}^{b} f_X(x) dx,   (2-12)

where e is an outcome in the sample space. We are concerned only with R_X and f_X. It is
important to realize that f_X(x) does not represent the probability of anything, and that only
when the function is integrated between two points does it yield a probability.
Some comments about equation 2-9 may be useful, as this result may be counterintu-
itive. If we allow X to assume all values in some interval, then P_X(X = x₀) = 0 is not equiv-
alent to saying the event (X = x₀) in R_X is impossible. Recall that if A = ∅, then P_X(A) = 0;
however, although P_X(X = x₀) = 0, the fact that the set A = {x: x = x₀} is not empty clearly
indicates that the converse is not true.
An immediate result of this is that P_X(a < X < b) = P_X(a ≤ X < b) = P_X(a < X ≤ b) =
P_X(a ≤ X ≤ b), where X is continuous, all given by F_X(b) − F_X(a).

Example 2-13
The time to failure of the cathode ray tube described in Example 1-13 has the following probability
density function:

f_T(t) = λe^{−λt},   t > 0,
       = 0,   otherwise,

where λ > 0 is a constant known as the failure rate. This probability density function is called the
exponential density, and experimental evidence has indicated that it is appropriate to describe the time
Figure 2-10 Hypothetical probability density function.

to failure (a real-world occurrence) for some types of components. In this example suppose we want
to find P_T(T ≥ 100 hours). This is equivalent to stating P_T(100 ≤ T < ∞), and

P_T(T ≥ 100) = ∫_{100}^{∞} λe^{−λt} dt = e^{−100λ}.

We might again employ the concept of conditional probability and determine P_T(T ≥ 100 | T ≥ 99), the
probability that the tube lives at least 100 hours given that it has lived beyond 99 hours. From our earlier
work,

P_T(T ≥ 100 | T ≥ 99) = P_T(T ≥ 100 and T ≥ 99)/P_T(T ≥ 99)
= P_T(T ≥ 100)/P_T(T ≥ 99)
= ∫_{100}^{∞} λe^{−λt} dt / ∫_{99}^{∞} λe^{−λt} dt
= e^{−100λ}/e^{−99λ} = e^{−λ}.

A random variable X has the triangulor probability density function given below and shown graphi-
cally in Fig. 2-11:
SX) =x OStc—at:
=2-x, 1 Sipe.

= (); otherwise.

The following are calculated for illustration:


1/2
h B(-1<x<3]=[ odr+f xix=<
1 3/2
2. P(xs3)= [odes sar+| (2- x)dx
1
=0+—+!| 2x-—|
a) ey
=-—.
2 2 8

3. P(X <3)=1.
4. P(X 22.5) =0.
1 3 1 3/2
is B(+<x<3]= Pag (2-x)dx
is ese 27
=—
4+ — = —
ap +8 32

f(x)

0 1 2 x Figure 2-11 Triangular density function.


44 Chapter2 One-Dimensional Random Variables

In describing probability density functions, a mathematical model is usually employed.


A graphical or geometric presentation may also be useful. The area under the density func-
tion corresponds to probability, and the total area is one. Again the student familiar with
mechanics might consider the probability of one to be distributed over the real line accord-
ing to fy. In Fig. 2-12, the intervals (a, b) and (b, c) are of the same length; however, the
probability associated with (a, b) is greater.

5 SOME CHARACTERISTICS OF DISTRIBUTIONS


While a discrete distribution is completely specified by the pairs [(x, py(%))); i= 1, 2, ....m,
...], and a probability density function is likewise specified by [(x, f,(x)); x € Ry], it is often
convenient to work with some descriptive characteristics of the random variable. In this sec-
tion we introduce two widely used descriptive measures, as well as a general expression for
other similar measures. The first of these is the first moment about the origin. This is called
the mean of the random variable and is denoted by the Greek letter 1, where

= > x)py(x;,) for discrete X,

= | oy (x) for continuous X. (2-13)


This measure provides an indication of central tendency in the random variable.

Example 2-15
Returning to the coin-tossing experiment where X represents the number of heads and the probabil-
ity distribution is as shown in Fig. 2-7, the calculation of py yields

1=Sapals)=0(2)s1(2)e2(2)3(2]=3
4

i=l

as indicated in Fig. 2-13. In this particular example, because of symmetry, the value 1 could have been
easily determined from inspection. Note that the mean value in this example cannot be obtained as the
output from a single trial.

Example 2-16
In Example 2-14, a density f, was defined as

f(x) =x, WES AKI


=2-x, 1s 2;
=0; otherwise.

a b Cc x
Figure 2-12 A density function.
2-5 Some Characteristics of Distributions 45

w= 3/2

Figure 2-13 Calculation of the mean.

The mean is determined by


1 2
p= |xxde+[ x-(2-x)de
1
0 a5
+ [x-0dx+[x-Ode
—22 2
=1,
another result that we could have determined via symmetry.

Another measure describes the spread or dispersion of the probability associated with
elements in Ry. This measure is called the variance, denoted by o”, and is defined as
follows:

o? = ¥ (x; - wu) py(x;) for discrete X,

| (p (x- by fx (x)dx for continuous X. (2-14)

This is the second moment about the mean, and it corresponds to the moment of inertia in
mechanics. Consider Fig. 2-14, where two hypothetical discrete distributions are shown in
graphical form. Note that the mean is one in both cases. The variance for the discrete ran-
dom variable shown in Fig. 2-14a is
2 Beto 2 *] (4) |
o (0-1) (5) (1-1) (5 2-1"
=(0-1) -|—|4+0-1)
+} =j+(2-1) -|—|=—, [5 \=5
and the variance of the discrete random variable shown in Fig. 2-14b is

SMT ee
o” =(-1+1) 5+eOh 12 - mall
(0 1) te —1)-
1)
1
+(2-1) 240-1) = 2,
5
which is four times as great as the variance of the random variable shown in Fig. 2-14a.
If the units on the random variable are, say feet, then the units of the mean are the same,
but the units on the variance would be feet squared. Another measure of dispersion, called the
standard deviation, is defined as the positive square root of the variance and denoted o, where

o =Vo°. (2-15)
It is noted that the units of o are the same as those of the random variable, and a small value
for o indicates little dispersion whereas a large value indicates greater dispersion.
46 Chapter2 One-Dimensional Random Variables

Px (x) Py (x)
1/2

1/4 1/4

0 1 2 x
(a)w=1 ando?=0.5 (b) p=1 ando*=2
Figure 2-14 Some hypothetical distributions.

An alternate form of equation 2-14 is obtained by algebraic manipulation as

a? => x}px(x;)-w for discrete X,

= ip x? fy(x)dx — pr? for continuous X. (2-16)

This simply indicates that the second moment about the mean is equal to the second
moment about the origin less the square of the mean. The reader familiar with engineering
mechanics will recognize that the development leading to equation 2-16 is of the same
nature as that leading to the theorem of moments in mechanics.

Example 2-17
1. Coin tossing—Example 2-9. Recall that u= from Example 2-15 and
2 2
o -(0-3} F+(1-3| =
2) 38 2) 8

Using the alternate form,


P|
o* =|0? 2417242? 343.2]_(3) md

8 8 8 8 2 4

which is only slightly easier.


2. Binomial distribution—Example 2-10. From equation 2-13 we may show that p= np, and

or

which (after some algebra) simplifies to

o”? =np(1—p).

3. Exponential distribution—Example 2-13. Consider the density function f,Ax), where

f,Q@) = Qe wilt 24> 0,


=0 otherwise.
2-5 Some Characteristics of Distributions 47

Then, using integration by parts,

e 1
u = es : 2e ae ‘dx =—5

fy (x)

0 x

and

2 :
o =| x? dea —[| pede llieandag
De Ph se

4. Another density is g,(x), where

gx) = l6xe*, x 2.0,


=i) otherwise.

9x(*)

0 x

Then

i [* ‘l6xe**dx = e
0 2

and

o = ee t6xe**de-(5) = a
0 2

Note that the mean is the same for the densities in parts 3 and 4, with part 3 having a variance
twice that of part 4.

In the development of the mean and variance, we used the terminology “mean of the
random variable” and “variance of the random variable.” Some authors use the terminology
“mean of the distribution” and “variance of the distribution.” Either terminology is accept-
able. Also, where several random variables are being considered, it is often convenient to
use a subscript on / and o, for example, LU, and oy.
In addition to the mean and variance, other moments are also frequently used to
describe distributions. That is, the moments of a distribution describe that distribution,
measure its properties, and, in certain circumstances, specify it. Moments about the origin
are called origin moments and are denoted ju, for the kth origin moment, where
48 Chapter2 One-Dimensional Random Variables

i = >x Px(x;) for discrete X,

= Ion tea aven: for continuous X,


k= 01h con (2-17)
Moments about the mean are called central moments and are denoted ju,, where

1g u)' py(x;) for discrete X,

= ip (x- u)* fy (x) dx for continuous X,


k 30, bed, a. (2-18)
Note that the mean p= su and the variance is 0” = 11. Central moments may be expressed
in terms of origin moments by the relationship
k flew.
m= DI way a OR aoe
j=0

2-6 CHEBYSHEV’S INEQUALITY


In earlier sections of this chapter, it was pointed out that a smal! variance, 0, indicates that
large deviations from the mean, }, are improbable. Chebyshev’s inequality gives us a way
of understanding how the variance measures the probability of deviations about py.

Theorem 2-1
Let X be a random variable (discrete or continuous), and let k be some positive number.
Then
1
Py (|X - ub]2 ko) < ez (2-20)

Proof For continuous X and a constant K > 0, consider


K

oF = P(e ay filadae= f(a)? Salada °o


+VK
fo (x- My fx(a)ax.
% a uy Sade

Since
+VK 2
fr) - f(x) dx> 0,
it follows that

of(xmwy elec+f” (eH)? fala)


—co X\ H+VK xX )

Now, (x — 11)? > K if and only if lx — sl > VK; therefore,

o 24, >|tH-vK Kfy(x)dx + fia


ir
og Khelx)ax
= K[P,(X <u-VK)+,(X2n+VR)|
and
2-7 Summary 49

so that if k = VK/o, then

Py (|X - u|>ko) < =.


The proof for discrete X is quite similar.
An alternate form of this inequality,

Py (|X - ue]< ko) = ae1 (2-21)


or

Py(u-ko <X<pt+ko) 2 =<.

is often useful.
The usefulness of Chebyshev’s inequality stems from the fact that so little knowledge
about the distribution of X is required. Only and o7 must be known. However, Cheby-
shev’s inequality is a weak statement, and this detracts from its usefulness. For example,
P, (Ix- pl 2 0) S 1, which we knew before we started! If the precise form of f,(x) or py(x)
is known, then a more powerful statement can be made.

Example 2-18
From an analysis of company records, a materials control manager estimates that the mean and stan-
dard deviation of the “lead time” required in ordering a small valve are 8 days and 1.5 days, respec-
tively. He does not know the distribution of lead time, but he is willing to assume the estimates of the
mean and standard deviation to be absolutely correct. The manager would like to determine a time
interval such that the probability is at least 5 that the order will be received during that time. That is,

so that k= 3 and w+ ko gives 8 + 3(1.5) or [3.5 days to 12.5 days]. It is noted that this interval may
very well be too large to be of any value to the manager, in which case he may elect to learn more
about the distribution of lead times.

2-7 SUMMARY
This chapter has introduced the idea of random variables. In most engineering and man-
agement applications, these are either discrete or continuous; however, Section 2-2 illus-
trates a more general case. A vast majority of the discrete variables to be considered in this
book result from counting processes, whereas the continuous variables are employed to
model a variety of measurements. The mean and variance as measures of central tendency
and dispersion, and as characterizations of random variables, were presented along with
more general moments of higher order. The Chebyshev inequality is presented as a bound-
ing probability that a random variable lies between pf — ko and pl + ko.
50 Chapter2 One-Dimensional Random Variables

2-8 EXERCISES
2-1. A five-card poker hand may contain from zero to @
-20
20
x

four aces. If X is the random variable denoting the Px(x)= ’ XO


el Deere
x!
number of aces, enumerate the range space of X. What =0, otherwise.
are the probabilities associated with each possible
value of X? Find the probability that he will have suits left over at
2-2. A car rental agency has either 0, 1, 2, 3, 4, or 5 the season’s end.
cars returned each day, with probabilities a 4 - = 2-10. A random variable X has a CDF of the form
s and TE respectively. Find the mean and the vari- x+l
ance of the number of cars returned. Fe(x)=1-(5) 20
2-3. A random variable X has the probability density
=0, x <0.
function ce“. Find the proper value of c, assuming 0 $
X < co, Find the mean and the variance of X. (a) Find the probability function for X.
2-4. The cumulative distribution function that a tele- (b) Find P(0<X <8).
vision tube will fail in t hours is 1 — e™, where c is a
2-11. Consider the following probability density
parameter dependent on the manufacturer and f 2 0. function:
Find the probability density function of T, the life of
foo =k, 0<x<2,
the tube.
=k(4-x), 2<x<4,
2-5. Consider the three functions given below. Deter- =0, otherwise.
mine which functions are distribution functions
(a) Find the value of k for which f is a probability
(CDFs).
density function.
(a) Fyp(x)=1-e%, O<x<o.
(b) Find the mean and variance of X.
(b) Gy(x) =e, 0<x<o9,
(c) Find the cumulative distribution function.
=(s x0:
2-12. Rework the above problem, except let the prob-
(c) A(x) =e, —o<x<0,
ability density function be defined as
=I. sess)
F(X) = ke, O<x<a,
2-6. Refer to Exercise 2-5. Find the probability den-
= k(2a —x), asx<2a,
sity function corresponding to the functions given, if
==1()5 otherwise.
they are distribution functions.
2-13. The manager of a job shop does not know the
2-7. Which of the following functions are discrete
probability distribution of the time required to com-
probability distributions?
plete an order. However, from past performance she
@ pylx)=z, £=0, has been able to estimate the mean and variance as 14
days and 2\(days)’, respectively. Find an interval such
ee xz]
that the probability is at least 0.75 that an order is fin-
3 ished during that time.
es)! otherwise. 2-14. The continuous random variable T
has the prob-
5 x S-x
ability density function f(t) = kr for -1 <1 < 0. Find
(b) py (x)= (=) (=) , «=0,1,2,3,4,5, the following: °
XIN 3
(a) The appropriate value of k.
=0, otherwise.
(b) The mean and variance of T:
2-8. The demand for a product is—1, 0, +1, +2 per day
(c) The cumulative distribution function.
with probabilities “sJ 3° = respectively. A demand
of —1 implies a unit is returned. Find the expected 2-15. A discrete random variable X has probability
demand and the variance. Sketch the distribution function p,(x), where
function (CDF). Pylx) = K(1/2)", lee
2-9. The manager of a men’s clothing store is con-
=0, otherwise.
cerned over the inventory of suits, which is currently (a) Find k.
30 (all sizes). The number of suits sold from now to (b) Find the mean and variance of X.
the end of the season is distributed as (c) Find the cumulative distribution function F (x).
2-8 Exercises 51

2-16. The discrete random variable N (N = 0, 1, ...) 2-21. A continuous random variable X has a density
has probabilities of occurrence of kr’ (0 < r< 1). Find function
the appropriate value of k.
2-17. The postal service requires, on the average, 2 fla) ==, Oren <3!
days to deliver a letter across town. The variance is
estimated to be 0.4 (day)’. If a business executive =I() otherwise.
wants at least 99% of his letters delivered on time, (a) Develop the CDF for X.
how early should he mail them?
(b) Find the mean of X and the variance of X.
2-18. Two different real estate developers, A and B,
(c) Find y5.
own parcels of land being offered for sale. The proba-
(d) Find a value m such that P,(X 2m) = P(X < m).
bility distributions of selling prices per parcel are
This is called the median of X.
shown in the following table.
2-22. Suppose X takes on the values 5 and —5 with
Price probabilities 5. Plot the quantity P[|X — | > ko] asa
function of k (for k > 0). On the same set of axes, plot
$1000 $1050 $1100 $1150 $1200 $1350 the same probability determined by Chebyshev’s
A 0.2 0.3 0.1 0.3 0.05 0.05 inequality.
B 0.1 0.1 0.3 0.3 0.1 0.1 2-23. Find the cumulative distribution function asso-
ciated with

Assuming that A and B are operating independently, Sods


compute the following: fuls)=Hen 2,
x x=
> Oma10:
(a) The expected selling price of A and of B.
otherwise.
(b) The expected selling price of A given that the B
selling price is $1150. 2-24. Find the cumulative distribution function asso-
(c) The probability that A and B both have the same ciated with é
selling price.
2-19, Show that the probability function for the sum of baderimstt siclonezn an
fx (x)= On {1+{(x-u)'/o?} ,—- OSX <0,
values obtained in tossing two dice may be written as

Px(x) ee
x-1
36 23. 6,
2-25. Consider the probability density function
ee ty Rat) fy) = k sin y, 0 S$ y $ 7/2. What is the appropriate
36 value of k? Find the mean of the distribution.

2-20. Find the mean and variarce of the random vari- 2-26. Show that central moments can be expressed in
able whose probability function is defined in the pre- terms of origin moments by equation 2-19. Hint: See
vious problem. Chapter 3 of Kendall and Stuart (1963).
Chapter 3

Functions of One Random


Variable and Expectation

3-1 INTRODUCTION
Engineers and management scientists are frequently interested in the behavior of some
function, say H, of a random variable X. For example, suppose the circular cross-sectional
area of a copper wire is of interest. The relationship Y = 7X/4, where X is the diameter,
gives the cross-sectional area. Since X is a random variable, Y also is arandom variable, and
we would expect to be able to determine the probability distribution of Y = H(X) if the dis-
tribution of X is known. The first portion of this chapter will be concerned with problems
of this type. This is followed by the concept of expectation, a notion employed extensively
throughout the remaining chapters of this book. Approximations are developed for the
mean and variance of functions of random variables, and the moment-generating function,
a mathematical device for producing moments and describing distributions, is presented
with some example illustrations.

3-2.) EQUIVALENT EVENTS


Before presenting some specific methods used in determining the probability distribution of
a function of a random variable, the concepts involved should be more precisely formulated.
Consider an experiment @ with sample space S. The random variable X is defined on
F, assigning values to the outcomes e in FY, X(e) = x, where the values x are in the range
space R, of X. Now if Y = H(X) is defined so that the values y = H(x) in Ry; the range space
of Y, are real, then Y is a random variable, since for every outcome e € &, a value y of the
random variable Y is determined; that is, y = H[X(e)]. This notion is illustrated in Fig. 3-1.
If C is an event associated with R, and B is an event in R,, then B and C are equivalent
’ events if they occur together, that is, if B= {x © Ry: H(x) € C}. In addition, if A is an event asso-
ciated with Pand, furthermore, A and B are equivalent, then A and C are equivalent events.

Ss Ry Ry

Figure 3-1 A function of a random variable.

52
3-2 Equivalent Events 53

Definition
If X is a random variable (defined on /) having range space Ry, and if H is a real-valued
function, so that Y = H(X) is a random variable with range space Ry, then for any event
CC Ry, we define

PC) = Py({x € Ry: H(x) € C}). (3-1)


It is noted that these probabilities relate to probabilities in the sample space. We could
write

PKC) =P({ee SF H[X(e)] € C}).


However, equation 3-1 indicates the method to be used in problem solution. We find the
event B in Ry, that is equivalent to event C in Ry; then we find the probability of event B.

Example 3-1
In the case of the cross-sectional area Y of a wire, suppose we know that the diameter of a wire has
density function

f(x) = 200, 1.000 < x < 1.005,


= 05 otherwise.

Let Y = (7/4)X° be the cross-sectional area of the wire, and suppose we want to find PY < (1.0 1)q/4).
The equivalent event is determined. P)(Y $ (1.01)/4) = Py [(1/4)X? s (1.01)7/4] = Py(X| <+¥1.01).
The event {x € Ry:|x|< 1.01; is in the range space Ry, and since f,(x) = 0, for all x < 1.0,
we calculate

Py(|x < v.01)=F (1.05 xs v.01)= [° 2004


= 0.9975.

Example 3-2
In the case of the Geiger counter experiment of Example 2-12, we used the distribution given in equa-
tion 2-7:
py) i= CAA
ALY oe = Onl Qn nes,
=i(0); otherwise.

Recall that A, where A > 0, represents the mean “hit” rate and ¢ is the time interval for which the
counter is operated. Now suppose we wish to find PY < 5), where
Y=2X +2.

Proceeding as in the previous example,

Py(Y$5)=Pe(2K4255)=P, (X53)
=[px(0)+ px(t)]=[e™(42)° /0!]+[e™ (a2)! 1]
=e“ [1+A2].
The event {xe Ry: xS =} is in the range space of X, and we have the function py to work with in that
space.
54 Chapter3 Functions of One Random Variable and Expectation

3-3 FUNCTIONS OF A DISCRETE RANDOM VARIABLE


Suppose that both X and Y are discrete random variables, and letx;,x;,,..., X;.---» represent
the values of X such that H(x,) = y; for some set of index values, oe ia72 =1, 2, PAE
The probability distribution for Y is denoted by p(y) and is given by

Py(y;)=Py(Y =y;)= ¥ px(x;a (3-2)


JEeQ,;

For example, in Fig. 3-2, where s; = 4, the ‘Probability of y; is pAy;)=Py(%,) + Px%,) + pil)
+ py{%;,)-
In the special case where H is such that for each y there is exactly one x, then
Py) =pyx;), where y; = H(x,). To illustrate these concepts, consider the following examples.

In the coin-tossing experiment where X represented the number of heads, recall that X assumed four
values, 0, 1, 2,3, with probabilities 43, -+.1fY¥= -2X— 1, then the possible values of Y are ~ 1, 1
3, 5, and p(—- 1) = e pyl)= 4 P3) = 3 pS)= ¢. In this case, H is such that for each y there is
exactly one x.

X is as in the previous example; however, suppose now that Y = |X — 2|, so that the possible values of
Y are 0, 1, 2, as indicated in Fig. 3-3. In this case

Pr(0)= px(2)==.
Pr(l)=px(l)+Px)= 5
Py(2)= px(0)= 7

ey = A(x),
for
j=1,2, 3,4

Figure 3-2 Probabilities in Ry.


3-4 Continuous Functions of a Continuous Random Variable 55

Ry y=H(x) =|x-2| Ry

Figure 3-3 An example function H.

In the event that X is continuous but Y is discrete, the formulation for pyy;) is

Py(yi) = J,fx(x)dx, (3-3)

where the event B is the event in R, that is equivalent to the event (Y = y,) in Ry.

Suppose X has the exponential probability density function given by


fdxade™, x20,
=0 otherwise.

Furthermore, if

=10) for X < 1/A,


=) for
X > 1/A,

then
Va VA
py(0)=[0 0Ae™ de =-e*|" =1-e7!= 0.6321
and

py (1) =1- P,(0)=0.3679.

3-4 CONTINUOUS FUNCTIONS OF A RESULTS


RANDOM VARIABLE
If X is a continuous random variable with probability density function f,, and H is also con-
tinuous, then Y= H(X) is a continuous random variable. The probability density function for
the random variable Y will be denoted f, and it may be found by performing three steps.
1. Obtain the CDF of Y, F(y) = PAY <y), by finding the event B in R,, which is equiv-
alent to the event (Y < y) in Ry.
2. Differentiate F(y) with respect to y to obtain the probability density function f(y).
3. Find the range space of the new random variable.
56 Chapter3 Functions of One Random Variable and Expectation

‘Example 3-6,
Suppose that the random variable X has the following density function:

fx) =x/8, OSx4,


=0, otherwise.

If Y = H(X) is the random variable for which the density f, is desired, and H(x)= 2x + 8, as shown in
Fig. 3-4, then we proceed according to the steps given above.

1. Fy) = PAY Sy) = P(2X+8<y)


= P(X S$ (y - 8)/2)

. 1
2. f(y)=R(y)=>--.
3. Ifx=0,
y= 8, and if x = 4, y = 16, so then
we have

1
frly)= 35-4 8<y<l6,
=0, otherwise.

x= H"(y) = (y-8)/2
Figure 3-4 The function H(x) =2x + 8.

y= H(x)

H(x) = (x 2)?

Figure 3-5 The function H(x) = (x - 2).


3-4 Continuous Functions of a Continuous Random Variable 57

es i,
ue Seg

Consider the random variable X defined in Example 3-6, and suppose Y = H(X) = (X — 2)’, as shown
in Fig. 3-5. Proceeding as in Example 3-6, we find the following:
1. Fy) = PAY Sy) = Py((X ~ 2° Sy) =B,(-Jy <X-2< Jy)
= Py(2-Jy $X<2+4/y)
2+Jy x x? aay
shee
T6h

=—(4+4 y+y)-(4-4/y +y)]


1
“> y.

fry)=—=, O<ys4,

=0 otherwise.

In Example 3-6, the event in R, equivalent to (Y < y) in Ry was [X < (y — 8)/2]; and in
Example 3-7, the event in R, equivalent to (Y S y) in Ry was (2 — Vy < X<2+./y). In the
first example, the function H is a strictly increasing function of x, while in the second
xample this is not the case.

_ Theorem 3-1
If X is a continuous random variable with probability density function f, that satisfies f(x)
> 0 for a <x <b, and if y = H(x) is a continuous strictly increasing or strictly decreasing
function of x, then the random variable Y = H(X) has density function

fr) =
= fale) dx
fy(x)-—1 (3-4)

with x = H~'(y) expressed in terms of y. If H is increasing, then f(y) > 0 if H(a) < y < H(d),
and if H is decreasing, then f(y) > 0 if H(b) < y < H(a).

Proof (Given only for H increasing. A similar argument holds for H decreasing.)
Fy) = PAY Sy) = PH) Sy)
° =P,XS H7'0)]
= F,{H~'()].
fel) = Fy) = 22.
re ee by the chain rule,

= f(x) a where
x = H™ '(y).
58 Chapter3 Functions of One Random Variable and Expectation

Example 3-8
In Example 3-6, we had
fx) = x18, 0<x<4,
=i05 otherwise,

and H(x) = 2x + 8, which is a strictly increasing function. Using equation 3-4,

fy(y) = fx (*)-
head
dy
since x = (y — 8)/2. H(O) = 8 and H(4) = 16; therefore,

frly)=%-2, 8S S16,
=0) otherwise.

3-5 EXPECTATION
If X is arandom variable and Y = H(X) is a function of X, then the expected value of H(X)
is defined as follows:

E[H(X)]= ¥"H(x;): px(x;) forXdiscrete, (3-5)

, E|H(X)| = [> H(x)- f(x)dx forXcontinuous. (3-6).

In the case where X is continuous, we restrict H so that Y = H(X) is a continuous random


variable. In reality, we ought to regard equations 3-5 and 3-6 as theorems (not definitions),
and these results have come to be known as the law of the unconscious statistician.
The mean and variance, presented earlier, are special applications of equations 3-5 and
3-6. IfH(X) = X, we see that

E{H(X)] = E(X) = b. (3-7)


Therefore, the expected value of the random variable X is just the mean, JJ.
If H(X) = (X — p)’, then
E{H(X)] = E((X
-p)) = 0”. (3-8)
Thus, the variance of the random variable X may be defined in terms of expectation. Since
the variance is utilized extensively, it is customary to introduce a variance operator V that
is defined in terms of the expected value operator E:

VLA(X)] = LHX) - E(H(X))Y). (3-9)


Again, in the case where H(X) = X,

V(X) = E[(X - E(X))’].


= E(X*) - [EQO/, (3-10)
which is the variance of X, denoted 0”.
The origin moments and central moments discussed in the previous chapter may also
be expressed using the expected value operator as

Ly,= E(x") (3-11)


3-5 Expectation 59

and

My, = E[(X — E(X))').


There is a special linear function H that should be considered at this point. Suppose that
H(X) = aX + b, where a and b are constants. Then for discrete X, we have

E(aX +b) = ¥ (ax; + b)px(x;)

= ay X;Px (x;) usby Px(x;)


i i
SUE() +b, (3-12)
and the same result is obtained for continuous X; namely,

E(aX +b)= | (ax+b)fy(x)dx


= al x fy (x)dx + bf fx (x)dx
= aE(X)+b. (3-13)
Using equation 3-9,
V(aX + b) = E[(aX + b — E(aX + b))’]
= E[(aX + b — aE(X) — b)’]
= aE[(X — E(X))’]
=a’V(X). (3-14)
In the further special case where H(X) = b, a constant, the reader may readily verify that
E(b)=b (3-15)
and
V(b) = 0. (3-16)
These results show that a linear shift of size b only affects the expected value, not the variance.

Example 3-9
Suppose X is a random variable such that E(X) = 3 and V(X) = 5. In addition, let H(X) = 2X — 7. Then

E({H(X)] = E(2X-—7) = 2E(X) - 7 =-1


and
VIA(X)] = V(2X — 7) = 4V(X) = 20.

The next examples give additional illustrations of calculations involving expectation


and variance.

Example 3-10
Suppose a contractor is about to bid on a job requiring X days to complete, where X is a random vari-
able denoting the number of days for job completion. Her profit, P, depends on X; that is, P = H(X).
The probability distribution of X, (x, p(x), is as follows:
60 Chapter3 Functions of One Random Variable and Expectation

x Px (x)
3 y 3

as
; 8
otherwise, 0

Using the notion of expected value, we calculate the mean and variance of X as

BX\c8 ese
8 8

and

1 5 iss 23
=|3?.— #.345°.2)-(3) ==.
VEX) Suma & 8} (8
If the function H(X) is given as

H(x)

$10,000
2,500
WI]
—&
an
& —7,000

then the expected value of H(X) is

£{H(X)|=10,000 (3) 2500: (=|- 7000: (=)= $1062.50

and the contractor would view this as the average profit that she would obtain if she bid this job many,
many times (actually an infinite number of times), where H remained the same and the random vari-
able X behaved according to the probability function py. The variance of P = H(X) can readily be
calculated as

V[H(Xx)]= 20,000) +=+ (2500) 2+ (~7000)" 2] (0062.57


= $27.53-10°.

A well-known simple inventory problem is “the newsboy problem,” described as follows. A newsboy
buys papers for 15 cents each and sells them for 25 cents each, and he cannot return unsold papers.
Daily demand has the following distribution and each day’s demand is independent of the previous
day’s demand.

Number of customers x 23 24 25 26 27 28 29 30

Probability, py(x)_ 0.01 0.04 0.10 0,10 0.25 0.25 0.15 0.10
SR SSS SSSR SS SSS SS SS SSP [SFI REE)

If the newsboy stocks too many papers, he suffers a loss attributable to the excess supply. If he stocks
too few papers, he loses profit because of the excess demand. It seems reasonable for the newsboy to
3-5 Expectation 61

stock some number of papers so as to minimize the expected loss. If we let s represent the number of
papers stocked, X the daily demand, and L(X, s) the newsboy’s loss for a particular stock level s, then
the loss is simply

L(X, s) = 0.10(X
— s) ifX >,
=0.15(s — xX) ifX<s,

and for a given stock level, s, the expected loss is

E[L(X,s)] ze$50.156s3) Feb) SY)0.10(x-s): py (x)


x=23 x=s+l

and the E[L(X, s)] is evaluated for some different values of s.

For s = 26,

E[L(X, 26)] = 0.15((26 — 23)(0.01) + (26 — 24)(0.04) + (26 — 25)(0.10)


+ (26 — 26)(0.10)] + 0.10[(27 — 26)(0.25) + (28 — 26)(0.25)
+ (29 — 26)(0.15) + (30 — 26)(0.10)}
= $0.1915.
For s = 27,

E[L(X, 27)] = 0.15|(27 — 23)(0.01) + (27 — 24)(0.04) + (27 — 25)(0.10)


+ (27 — 26)(0.10) + (27 — 27)(0.25)} + 0.10[(28 — 27)(0.25)
+ (29 — 27)(0.15) + (30 — 27)(0.10)
= $0.1540.

For s = 28,

E[L(X, 28)] = 0.15{(28 — 23)(0.01) + (28 — 24)(0.04) + (28 - 25)(0.10)


+ (28 — 26)(0.10) + (28 — 27)(0.25) + (28 — 28)(0.25)]
+ 0.10{(29 — 28)(0.15) + (30 — 28)(0.10)]
= $0.1790
Thus, the newsboy’s policy should be to stock 27 papers if he desires to minimize his expected loss.

Example 3-12
Consider the redundant system shown in the diagram below.

At least one of the units must function, the redundancy is standby (meaning that the second unit does
not operate until the first fails), switching is perfect, and the system is nonmaintained. It can be
shown that under certain conditions, when the time to failure for each of the units of this system has
an exponential distribution, then the time to failure for the system has the following probability den-
sity function.
62 Chapter3 Functions of One Random Variable and Expectation

fl) = xe, = x > 0, A>0,


=0, otherwise,

where J is the “failure rate” parameter of the component exponential’ models. The mean time to fail-
ure (MTTF) for this system is

E(X)= ihex: Axe"dx = os

Hence, the redundancy doubles the expected life. The terms “‘mean time to failure” and “expected
life” are synonymous.

3-6 APPROXIMATIONS TO E[H(X)] AND V[H(X)]


In cases where H(X) is very complicated, the evaluation of the expectation and variance
may be difficult. Often, approximations to E[H(X)] and V[H(X)] may be obtained by uti-
lizing a Taylor series expansion. This technique is sometimes called the delta method. To
estimate the mean, we expand the function H to three terms, where the expansion is about
x=p If Y= H(X), then
2
Y = H(u)+(X-y)H"(u) X=»)
xXx-

2 “-H"(u)+R,
where R is the remainder. We use equations 3-12 through 3-16 to perform
E(Y) = E[H(u)]+ E[H’(u)(X - 4)|
+ A>H"(u)(X - u)|+ E(R)

= H(u)+ oH”(u)V(X) + E(R)


It H(p)+ :H”(y)o". (3-17)

Using only the first two terms and grouping the third into the remainder so that

Y=H(u)
+ (X-p)- HW) +R,
where

(X -Lt) -H”(u),

then an approximation for the variance of Y is determined as

VY) = VIA()) + VIX— ) - A’) + VR)


=0+ V(X): (Awl
= [H(w): 0°. (3-18)
If the variance of X, 0”, is large and the mean, J, is small, there may be a rather large error
in this approximation. .

Example3-13
The surface tension of a liquid is represented by T (dyne/centimeter), and under certain conditions,
T = 2(1 - 0.005X)'*, where X is the liquid temperature in degrees centigrade. If X has probability den-
sity function f,, where
3-6 Approximations to E[H(X)] and V[H(X)] 63

Fy) = 3000x, x 2 10,


=0 otherwise,
then

E(T)= | 2(1-0.005x)'? -3000x-* dx


and

V(T) = il
=
» 4(1=0.005x)" 24 -3000x-*dx 4 -(E(T))”.-
In order to determine these values, it is necessary to evaluate

{=(1—0.005x)!”
( 4
x)" oe and
{ (1-0.005x)**
r dx.
10 85 10 x

Since the evaluation is difficult, we use the approximations given by equations 3-17 and 3-18. Note
that

[= E(X)= hex:3000x-‘dx = -1500x |" =15°C


and

o* = V(X) = E(X?)-[E(x)] = ftix?-3000x~4dx - 15? = 75(°C)?.


Since

H(X) = 2(1 — 0.005X)'?,

then

H’(X) =— 0.012(1 — 0.005X)°?

and

H’(X) = 0.000012(1 — 0.005x)°°.


Thus,

H(15) =2[1 - 0.005(15)]!2 = IES2.

H’(15) = — 0.012,

and

JUSS AQ),

Using equations 3-17 and 3-18

E(T) ~H(I5)+5H"(15)-0° = 1.82

and

V(T) = [H’(15)}? - o? = [- 0.012]° - 75 = 0.0108.

An alternative approach to this sort of approximation utilizes digital simulation and


statistical methods to estimate E(Y) and V(Y). Simulation will be discussed in general in
Chapter 19, but the essence of this approach is as follows.
64 Chapter3 Functions of One Random Variable and Expectation

1. Produce n independent realizations of the random variable X, where X has proba-


bility distribution p, or fy. Call these realizations x,, x,, ..., x, (The notion of inde-
pendence will be discussed further in Chapter 4.)
2. Use x, to compute independent realizations of Y; namely, y, = H(x,), y. = HQ), «++»
Yn = H(x,).
3. Estimate E(Y) and V(Y) from the y,, y,, ..., y, values. For example, the natural esti-
mator for E(Y) is the sample mean y = 2, y; /n. (Such statistical estimation prob-
lems will be treated in detail in Chapters 9 and 10.)
As a preview of this approach, we give a taste of some of the details. First, how do we
generate the n realizations of X that are subsequently used to obtain y,, y2, ..., ¥,? The most
important technique, called the inverse transform method, relies on a remarkable result.

Theorem 3-2
Suppose that X is a random variable with CDF F,(x). Then the random variable F,(X) has
a uniform distribution on [0, 1]; that is, it has probability density function
fy) = 1, Osusil,.
=i() otherwise. (3-19)

Proof Suppose that X is a continuous random variable. (The discrete case is similar.)
Since X is continuous, its CDF F,(x) has a unique inverse. Thus, the CDF of the random
variable F(X) is
P(F AX) Su) = P(X S FY (u)) = FFX (u)) =u.
Taking the derivative with respect to u, we obtain the probability density function of F(X),
d
P(Fy(X) Su) =1,

which matches equation 3-19, and completes the proof.


We are now in a position to describe the inverse transform method for generating random
variables. According to Theorem 3-2, the random variable F,(X) has a uniform distribution
on [0,1]. Suppose that we can somehow generate such a uniform [0,1] random variate U. The
inverse transform method proceeds by setting F(X) = U and solving for X via the equation
X= FU). (3-20)
We can then generate independent realizations of Y by setting Y, = H(X,) = H(F = (U))),
i= 1,2, ...,n, where U;,\U,, ..., U, are independent realizations of the uniform [0,1] dis-
tribution. We defer the question of generating U until Chapter 19, when we discuss com-
puter simulation techniques. Suffice it for now to say that there are a variety of methods
available for this task, the most widely used actually being a pseudo-uniform generation
method—one that generates numbers that appeur to be independent and uniform on [0,1]
but that are actually calculated from a deterministic algorithm.

Example 3-14°
Suppose that U is uniform on [0,1]. We will show how to use the inverse transform method
to generate an exponential random variate with parameter A, that is, the continuous distribution
3-7 The Moment-Generating Function 65

having probability density function f,(x) = Ae, for x > 0. The CDF of the exponential random
variable is

F, (x)= [falar =1-e*.


Therefore, by the inverse transform theorem, we can set

Ff{xy=1-e"=U
and solve for X,

X='F,'(U)= -=In(1-U),

where we obtain the inverse after a bit of algebra. In other words, —(1/A) In(1 — U) yields an expo-
nential random variable with parameter /.

We remark that it is not always possible to find Fy in closed form, so the usefulness
of equation 3-20 is sometimes limited. Luckily, many other random variate generation
schemes are available, and taken together, they span all of the commonly used probability
distributions.

3-7 THE MOMENT-GENERATING FUNCTION


It is often convenient to utilize a special function in finding the moments of a probability dis-
tribution. This special function, called the moment-generating function, is defined as follows.

Definition
Given a random variable X, the moment-generating function M,(t) of its probability distri-
bution is the expected value of e%. Expressed mathematically,
M,(t) = Ele) (3-21)
= sS e*' py(x;) discrete X, (3-22)

= kee™ fy(x)dx continuous X. (3-23)

For certain probability distributions, the moment-generating function may not exist for
all real values of t. However, for’ the probability distributions treated in this book, the
moment-generating function always exists.
Expanding e™ as a power series in t we obtain

On taking expectations we see that


t2
M,(t)= Fle™|= 1+ E(X)-1+ E(X?) ++ + E(Xexh
Jae

so that
2 r
BGA Eyh tenHay
Bw pth irk Hep
meeiegecb noe: :
(3-24)
66 Chapter3 Functions of Ong Random Variable and Expectation

Thus, we see that when M,(t) is written as a power series int, the coefficient of t’/r! in the
expansion is the rth moment about the origin. One procedure, then, for using the moment-
generating function would be the following:
1. Find M,(t) analytically for the particular distribution.
2. Expand M,(?) as a power series in ¢ and obtain the coefficient of ’/r! as the rth origin
moment.
The main difficulty in using this procedure is the expansion of M,(t) as a power series in t.
If we are only interested in the first few moments of the distribution, then the process
of determining these moments is usually made easier by noting that the rth derivative of
M,(t), with respect to t, evaluated at t = 0, is just

= [a9 ¢ ae je
=
dt iP My (oh=0 E|X’e |, =0 = H,, G3 25)

assuming we can interchange the operations of differentiation and expectation. So, a sec-
ond procedure for using the moment-generating function is the following:
1. Determine M,(t) analytically for the particular distribution.
d’
2. Find 1; = ar xl!Ne 7

Moment- fede functions have many interesting and useful properties. Perhaps the
most important of these properties is that the moment-generating function is unique when
it exists, so that if we know the moment-generating function, we may be able to determine
the form of the distribution.
In cases where the moment-generating function does not exist, we may utilize the
characteristic function, C,(t), which is defined to be the expectation of e’“, where i= J-1.
There are several advantages to using the characteristic function rather than the moment-
generating function, but the principal one is that C,(t) always exists for all t. However, for
simplicity, we will use only the moment-generating function.

Example 3-15
Suppose that X has a binomial distribution, that is,

Px(x) -(")pa-o) : x=0,1,2,...,7,


n n-x

=0, otherwise,

where 0 < p < | and mis a positive integer. The moment-generating function M,(t) is

This last summation is recognized as the binomial expansior. of [pe' + (1 — p)]", so that

M,{t) = [pe'+ (1 - p)]".


Taking the derivatives, we obtain

M;(t) = npe'[] + p(e'- 1)"


3-8 Summary 67

and

My(t)= npe'(1 — p + npe')[1 + p(e' = 1)]".


Thus

My, =H=My()| 0= np
and

Hy = My), = np(l — p + np).


The second central moment may be obtained using 0” = w5 — p? = np(1 — p).

Example 3-16.
Assume X to have the following gamma distribution:
b
fx@)s = x le <x <~,a>0,b>0,

=0, otherwise,

where I'(b) = [ ey? dy is the gamma function.


The moment-generating function is

Mx(0)= icteulea) tale


which, if we let y = x(a — t), becomes

a
M,(t)=
- T(b)(a-

Since the integral on the right is just (b), we obtain

a? folie
M,(t)= , -(1-4) for
t<a.
(a-t) a

Now using the power series expansion for

we find

My(t)=1+b—2 ee r—
2! a

which gives the moments

3-8 SUMMARY
This chapter first introduced methods for determining the probability distribution of a ran-
dom vafiable that arises as a function of another random variable with known distribution.
That is, where, Y = H(X), and eitherX is discrete with known distribution p,(x) or X is con-
68 Chapter3 Functions of One Random Variable and Expectation

tinuous with known density f, (x), methods were presented for obtaining the probability dis-
tribution of Y.
The expected value operator was introduced in general terms for E[H(X)], and it was
shown that E(X) = , the mean, and E[(X — }1)’] = 0”, the variance. The variance operator V
was given as V(X) = E(X*) — [E(X)]’. Approximations were developed for E[H(X)] and
V[H(X)] that are useful when exact methods prove difficult. We also showed how to use the
inverse transform method to generate realizations of random variables and to estimate their
means.
The moment-generating function was presented and illustrated for the moments yy’ of
a probability distribution. It was noted that E(X’) = pr’.

3-9 EXERCISES
3-1. A robot positions 10 units in a chuck for machin- (b) If the profit per sale is $200 and the replacement
ing as the chuck is indexed. If the robot positions the of a picture tube costs $200, find the expected
unit improperly, the unit falls away and the chuck profit of the business.
position remains open, thus resulting in a cycle that 3-4, A contractor is going to bid a project, and the
produces fewer than 10 units. A study of the robot’s number of days, X, required for completion follows
past performance indicates that if X = number of open the probability distribution given as
positions,
Px(x) = 0.1, GO:
Py(x) = 0.6, 370), =) 5e x=11,
. 203. c= le = 0.4, x= 12,
=i) ale GSD) ='0.1;, x= 13,
=i0i0} otherwise. SOM, ay
=0, otherwise.
If the loss due to empty positions is given by ¥ = 20X”,
The contractor’s profit is Y= 2000(12 — X).
find the following:
(a) Find the probability distribution of Y.
(a) Py).
(b) Find E(X), V(X), E(Y), and V(Y).
(b) E(Y) and V(Y).
3-5. Assume that a continuous random variable X has
3-2. The content of magnesium in an alloy is a random probability density function
variable given by the following probability density
function: F(x) = 2xe7 at mes
=} otherwise.

ful) =o5 O<x<6,


Find the probability distribution of Z = X?.
=(0}, otherwise. 3-6. In developing a random digit generator, an impor-
tant property sought is that each digit D; follows the
The profit obtained from this alloy is P= 10 + 2X.
following discrete uniform distribution,
(a) Find the probability distribution of P.
1
(b) What is the expected profit? d)=—,
Pp,(4) 10 d=0,1)2;3,...,9;9
3-3. A manufacturer of color television sets offers a 1- =0, otherwise.
year warranty of free replacement if the picture tube
fails. He estimates the time to failure, 7, to be a ran- (a) Find E(D;) and V(D)).
dom variable with the following probability distribu- (b) If y=|D, - 4.5], where | | is the greatest integer
tion (in units of years): (“round down’) function, find py(y), E(Y), V(Y).
3-7. The percentage of a certain additive in gasoline
fr(t)=ze" t>0, determines the wholesale price. If A is a random vari-
able representing the percentage, then 0 <A < 1. If the
=—()i otherwise.
percentage of A is less than 0.70 the gasoline is low-
(a) What percentage of the sets will he have to test and sells for 92 cents per gallon. If the percentage
service? of A is greater than or equal to 0.70 the gasoline is
3-9: Exercises 69

high-test and sells for 98 cents per gallon. Find the Evaluate E(A) and V(A) by using the approximations
expected revenue per gallon where f,(a)=1,0<a<1; derived in this chapter.
otherwise, f,(a) = 0.
3-13. Suppose that X has the uniform probability
3-8. The probability function of the random variable X, density function

fdlx)= ces x2 B,@>0, fo) = 1, Sc:


=0, otherwise.
= (0) otherwise
(a) Find the probability density function of ¥ = H(X)
is known as the two-parameter exponential distribu- where H(x) = 4 — x’.
tion, Find the moment-generating function of X. Eval- (b) Find the probability density function of ¥Y = H(X)
uate E(X) and V(X) using the moment-generating where A(x) = e*.
function.
3-14, Suppose that X has the exponential probability
3-9. A random variable X has the following probabil- density function
ity density function:
fro) =e", ee!
fae, x>0, =0, otherwise.
=) otherwise.
Find the probability density function of Y= H(X) where
(a) Develop the density function for Y = 2 X?.
(b) Develop the density for V= X"”. 3
ae ices gs
(c) Develop the density for U = In X.
3-10. A two-sided rotating antenna receives signals. 3-15. A used-car salesman finds that he sells either 1,
The rotational position (angle) of the antenna is denoted 2, 3, 4, 5, OF 6 cars per week with equal probability.
X, and it may be assumed that this position at the time (a) Find the moment-generating function of X.
a signal is received is a random variable with the den-
(b) Using the moment-generating function, find E(X)
sity below. Actually, the randomness lies in the signal. and V(X).

' Osx<2z,
3-16. Let X be a random variable with probability
density function
otherwise. flo) = axe”,
seSa(0)
The signal can be received if Y > yo, where Y = tan X =0, otherwise.
For instance, y) = 1 corresponds to a <X< é and
(a) Evaluate the constant a.
& <X< a Find the density function for Y.
(b) Suppose a new function Y = 18X? is of interest.
3-11. The demand for antifreeze in a season is consid- Find an approximate value for E(Y) and for V(Y).
ered to be a uniform random variable X, with density
3-17. Assume that Y has the exponential probability
Mozk0-, 10-5152 x 10°, density function
=0, otherwise,
fry) =e", y>0,
where X is measured in liters. If the manufacturer = 0, otherwise.
makes a 50 cent profit on each liter she sells in the fall
Find the approximate values of E(X) and V(X) where
of the year, and if she must carry any excess over to
the next year at a cost of 25 cents per liter, find the
X=vY’? +36.
“optimum” stock level for a particular fall season.
3-12. The acidity of a certain product, measured on an 3-18. The concentration of reactant in a chemical
arbitrary scale, is given by the relationship process is a random variable having probability
distribution
A=(3+0.05G)’,
fxr) = Or(1 — 1), OSs
where G is the amount of one of the constituents hav-
=i0), otherwise.
ing probability distribution
The profit associated with the final product is P =
, Oses4, $1.00 + $3.00R. Find the expected value of P. What is
the probability distribution of P?
otherwise.
70 Chapter 3 Functions of One Random Variable and Expectation

3-19, The repair time (in hours) for a certain electron- (b) If Y=(X— 2)’, find the CDF for ¥.
icaliy controlled milling machine follows the density
3-24. The third moment about the mean is related to
function
the asymmetry, or skewness, of the distribution and is
flo) =4ne*, xe 0} defined as z
=i) otherwise.
Hy = E(X - ly’
Determine the moment-generating function for X and
Show that 1, = 3 — 3u5; + 2(u;)’. Show that for a
use this function to evaluate E(X) and V(X).
symmetric distribution, p11, = 0.
3-20. The cross-sectional diameter of a piece of bar
stock is circular with diameter X. It is known that 3-25. Let fbe a probability density function for which
E(X) = 2 cm and V(X) = 25 x 10° cm’. A cutoff tool the rth order moment ’ exists. Prove that all moments
cuts wafers that are exactly | cm thick, and this is con- of order less than 7 also exist.
stant. Find the expected volume of a wafer. 3-26. A set of constants k,, called cumulants, may be
3-21. If a random variable X has moment-generating used instead of moments to characterize a probability
function M,(t), prove that the random variable Y = aX distribution. If M,(t) is the moment-generating func-
+ b has moment-generating function e” M, (af). tion of a random variable X, then the cumulants are
defined by the generating function
3-22. Consider the beta distribution probability
density function W(t) = log M(t).

i @=kie aaa O0<xsl,a>0,b>0, Thus, the rth cumulant is given by


=\(); otherwise.
ie, d'Wy(t)
(a) Evaluate the constant k. dt’
1=0
(b) Find the mean.
(c) Find the variance. Find the cumulants of the normal distribution whose
density function is
3-23. The probability distribution of a random vari-
able X is given by
Pxix) = 1/2, 5K
f (x)=oO 2a
= 1/4, ooh
Sy SP 3-27. Using the inverse transform method, produce 20
=o Se r=) realizations of the variable X described by p,(x) in
Exercise 3-23.
exit)! otherwise.
(a) Determine the mean and variance of X from the 3-28. Using the inverse transform method, produce 10
moment-generating function. realizations of the random variable T in Exercise 3-3.
Chapter 4

Joint Probability Distributions

4-1 INTRODUCTION
In many situations we must deal with two or more random variables simultaneously. For
example, we might select fabricated sheet steel specimens and measure shear strength and
weld diameter of spot welds. Thus, both weld shear strength and weld diameter are the ran-
dom variables of interest. Or we may select people from a certain population and measure
their height and weight.
The objective of this chapter is to formulate joint probability distributions for two or
more random variables and to present methods for obtaining both marginal and conditional
distributions. Conditional expectation is defined as well as the regression of the mean. We
also present a definition of independence for random variables, and covariance and corre-
lation are defined. Functions of two or more random variables are presented, and a special
case of linear combinations is presented with its corresponding moment-generating func-
tion. Finally the law of large numbers is discussed.

Definition
If F is the sample space associated with an experiment @, and X,, X,, ..., X, are functions,
each assigning a real number X,(e), X,(e) ..., X,(e) to every outcome e, we call [X,, X), ...,
X,] a k-dimensional rundom vector (see Fig. 4-1).
The range space of the random vector [X,, X), ..., X,] is the set of all possible values of
the random vector. This may be represented as Ry, . yx xx, Where
Ry 855% 28% Ete450 A) d, € Ryx, € Ry, a, € Ry}

This is the Cartesian product of the range space sets for the components. In the case where
k =2, that is, where we have a two-dimensional random vector, as in the earlier illustrations,
R xX) XX is a subset of the Euclidean plane.

Figure 4-1 A k-dimensional random vector.

71
72 Chapter4 Joint Probability Distributions

4-2 JOINT DISTRIBUTION FOR TWO-DIMENSIONAL


RANDOM VARIABLES
In most of our considerations here, we will be concerned with two-dimensional random
vectors. Sometimes the equivalent term two-dimensional random variables will be tised.
If the possible values of [X,, X,] are either finite or countably infinite in number, then
[X,, X,] will be a two-dimensional discreve random vector. The possible values of [X,, X,]
are [Xx,;, X,],i=1,2,...,f=1,2,.... ,
If the possible values of [X,, X,] are some uncountable set in the Euclidean plane, then
[X,, X,] will be a two-dimensional continuous random vector. For example, if a $x, <b and
c $x, Sd, we would have R,, x, = ([t), x)]: aS x, Sb, cS x, Sd}.
It is also possible for one component to be discrete and the other continuous; however,
here we consider only the case where both are discrete or both are continuous.

Example 4-1
Consider the case where Weld shear strength and weld diameter are measured. If we let X, represent
diameter in inches and X, represent strength in pounds, and if we know 0 < x, < 0.25 inch while 0 <
X $ 2000 pounds, then the range space for [X,, X,] is the set {[x,, x,]: 0 < x, < 0.25, 0S x, $ 2000}.
This space is shown graphically in Fig. 4-2.

Example 4-2
A small pump is inspected for four quality-control characteristics. Each characteristic is classified as
good, minor defect (not affecting operation) or major defect (affecting operation). A pump is to be
selected and the defects counted. If X, = the number of minor defects and X; = the number of major
defects, we know that x, = 0, 1, 2, 3, 4 and %, 20,1, 1.5 4= x, because only four characteristics are
inspected. The range space for [X,, X,] is thus {[0, 0], [0, 1], [0, 2], [0, 3]; (0, 4], (1, 0], [1, 1], (1, 21,
[1, 3], (2, 0}, (2, 1], (2, 2], [3, 0}, [3, 1], [4, 0}. These possible outcomes are shown in Fig. 4-3.

Xp ae

2,006

Figure 4-2 The range space of [X,, X,], where X, is


0 0.25 x; (inches) weld diameter and X, is shear strength.

4
cm:
3

Ht
1 ¢——__»> Figure 4-3 The range space of [X,, X,], where X, is
the number of minor defects and X, is the number of
major defects. The range space is indicated by heavy
0 1 2 3 4 x dots.
4-2 Joint Distribution for Two-Dimensional Random Variables 73

In presenting the joint distributions in the definition that follows and throughout the
remaining sections of this chapter, where no ambiguity is introduced, we shall simplify the
notation by omitting the subscript on the symbols used to specify these joint distributions.
Thus, if X = [X,, Xp], px(x,, x.) = p(x, x) and fy(x,, x.) =f(x,, x).

Definition
Bivariate probability functions are as follows.
1. Discrete case. To each outcome [x,, x,] of [X,, X,], we associate a number,
P(X, X)) = P(X, =x, and X, =x),

where
P(X, x2) 2 9, for all x,, x,

and

ee) = (4-1)
x; %2

The values ([x,, x4], p(%,, x,)) for all i, j make up the probability distribution of
[X,, Xa).
2. Continuous case. If [X,, X,] is a continuous random vector with range space R in the
Euclidean plane, then f, the joint density function, has the following properties:
f(@,, x.) 2 0, for all (x,,x,)€ R

and

JJ, fle) dxydx, =1.


A probability statement is then of the form
by pb
Pla, $ X; $b,a) SX) Sb,)=] a,'f4a,' f(x.x2)dudry
(see Fig. 4-4).

f(x}, Xa)

ay b, x4

Figure 4-4 A bivariate density function, where P(a, <X, $b,, 4) $ X, <b,) is given by the shaded
volume.
74 Chapter4 Joint Probability Distributions

It should again be noted that f{x,, x,) does not represent the probability of anything, and
the convention that f(x,, x.) = 0 for (x,, x,) ¢ R will be employed so that the second prop-
erty may be written ‘

[LIE ena)ae dey =1


In the case where [X,, X,] is discrete, we might present the probability distribution of
[X,, X,] in tabular, graphical, or mathematical form. In the case where [X,, X,] is continu-
ous, we usually employ a mathematical relationship'to present the probability distribution;
however, a graphical presentation may occasionally be helpful.

Example 4-3
A hypothetical probability distribution is shown in both tabular and graphical form in Fig. 4-5 for the
random variables defined in Example 4-2.

Example 4-4
In the case of the weld diameters represented by X, and tensile strength represented by X,, we might
have a uniform distribution as by

Rep Pepa)
72/0 |v0[290[900|10|
a fe [80[a0[480[|
T= [m0[200[ae| |
Ed
si gon
P(X, Xe)

(b)
Figure 4-5 Tabular and graphical presentation of a bivariate probability distribution. (a) Tabulated
values are p(x,, x). (b) Graphical presentation of discrete bivariate distribution.
4-3 Marginal Distributions 75

1
fla.*2)= so, OS%.<0.25,0.<x5°<-2000,
=i), otherwise.

The range space was shown in Fig. 4-2, and if we add another dimension to display graphically
y =flx;, x2), then the distribution would appear as in Fig. 4-6. In the univariate case, area corresponded
to probability; in the bivariate case, volume under the surface represents the probability.
For example, suppose we wish to find P(0.1 < X, < 0.2, 100 < X, < 200). This probability would
be found by integrating f(x,, x.) over the region 0.1 < x, < 0.2, 100 < x, < 200. That is,

200 0.2 1
yd | | Sty dx, = SSS.
100 40.1 500 a0

4-3 MARGINAL DISTRIBUTIONS


Having defined the bivariate probability distribution, sometimes called the joint probabil-
ity distribution (or in the continuous case the joint density), a natural question arises as to
the distribution of X, or X, alone. These distributions are cailed marginal distributions. In
the discrete case, the marginal distribution of X;, is

Pi (x,) = yeP(x, ‘ xy) for all x, (4-2)

and the marginal distribution of X, is

P2(x2) = » Px +%) for all x. (4-3)


x)

Example 4-5
In Example 4-2 we considered the joint discrete distribution shown in Fig. 4-5. The marginal distri-
butions are shown in Fig. 4-7. We see that, [x,, p,(x,)] is a univariate distribrtion and it is the distri-
bution of X, (the number of minor defects) alone. Likewise [x,, p,(x,)] is a univariate distribution and
it is the distribution of X, (the number of major defects) alone.

If [X,, X,] is a continuous random vector, the marginal distribution of X; is

fila) = [- flx2) ax, (4-4)

Xp = 2,000

0.25 x,
Figure 4-6 A bivariate uniform density.
76 Chapter4 Joint Probability Distributions

EAxy
[sa

P4104)

Figure 4-7 Marginal distributions for discrete [X,, X,]. (a) Marginal distributions—tabular form. (b) Mar-
ginal distribution (x,, p,(x,)). (c) Marginal distribution (x, p,(x,)).

and the marginal distribution of X, is

fy(x2) = [ flpn) dx. (4-5)

The function f, is the probability density function for X, alone, and the function fy is the den-
sity function for X, alone.

Example 4-6
In Example 4-4, the joint density of [X,, X,] was given by

= Osx, <0.25,0< x, < 2000,

=); otherwise.

The marginal distributions of X, and X, are


2000 |
Alu)=[ Sp ah OS <0.25,
=0, otherwise.

and

Osa 1
filx2)=[ — dx, =——, 08x, $2000,

=i; otherwise.

These are shown graphically in Fig. 4-8.


a a I oe a ar se eens ES a ee eee eee
4-3 Marginal Distributions 77

fy(x4) f(Xo)

1/2000
0.25 x; 0 2,000 Xp
(a) (b)
Figure 4-8 Marginal distributions for bivariate uniform vector [X,, X,]. (a) Marginal distribution of
X,. (b) Marginal distribution of X,.

Sometimes the marginals do not come out so nicely. This is the case, for example,
when the range space of [X,, X,] is not rectangular.

Example 4-7
Suppose that the joint density of [X,, X,] is given by

S(X), Xp) = 6X, W<ne, bere IIe


=(, otherwise.

Then the marginal of X, is

= [6x,dx,
=rchielen for 0< x, <1,
and the marginal of X, is

Alx)=[_ f(x1,%2) dx,

=| 6x, dx,
=n, for 0< x, <1.

The expected values and variances of X, and X, are determined from the marginal dis-
tributions exactly as in the univariate case. Where [X,, X,] is discrete, we have

E(X,) == > Pil) = YY 1P(x1.%2),


% KS (4-6)
2
V(X,)=07 = (x) - 1) pil)

= xipi(x))- a

= PY x7p(x.a2)-
ue, (4-7)
My Vy
78 Chapter4 Joint Probability Distributions

and, similarly,

E(X,) = b= Y 22P2(22) = Yd oper ),


xX; Xo (4-8)

V(X>) = oO; = HEP ~ ty) py(x2)


x2

= "x3 P2(x2)- Hi
xX,

= 3p(122) - Hs. (4-9)

Example 4-8
In Example 4-5 and Fig. 4-7 marginal distributions for X, and X, were given. Working with the mar-
ginal distribution of X, shown in Fig. 4-7b, we may calculate:

EXP =, hh
7 .
30° S0E- “305 3025 0 5
and

The mean and variance of X, could also be determined using the marginal distribution of X,.

Equations 4-6 through 4-9 show that the mean and variance of X, and X,, respectively,
may be determined from the marginal distributions or directly from the joint distribution.
In practice, if the marginal distribution has already been determined, it is usually easier to
make use of it.
In the case where [X,, X,] is continuous, then

E(X,)= My; = IP x fi(2y dx, = i is x1 f(21,42 )dndx, (4-10)

V(X) =o; = (eG, 5 by) Ail dx,

= fail )ae a}
EPR froatiy (nsy hekegete = iy, (4-11)

and

E(X:)=tn =f mAlm)dy =f" [of (a.m)dndr,, (4-12)


V(X,)=03 = [7 (4-H)Alea)ay
=| i A(n)dx -u3
= J) ifm aed, Hb. (4-13)
4-4 Conditional Distributions 79

Again, in equations 4-10 through 4-13, observe that we may use either the marginal densi-
ties or the joint density in the calculations.

Example 4-9
In Example 4-4, the joint density of weld diameters, X,, and shear strength, X,, was given as

1
flax )= se 0<x, <0.25,0<x, < 2000,
iQ) otherwise.

and the marginal densities for X, and X, were given in Example 4-6 as

Fi 4) ee OS 0.25;
= (0), otherwise,

and

= 0, otherwise.

Working with the marginal densities, the mean and variance of X , are thus

0.25 ,
E(X,)=m, =| x, -4dx, = 2(0.25)’ = 0.125
and

0.25
V(X,)=07 = [ x? -4dx, —(0.125)° = =(0.25) (0.125)? = 5.21x 107.

4-4 CONDITIONAL DISTRIBUTIONS


When dealing with two jointly distributed random variables it may be of interest to find the
distribution of one of these variables, given a particular value of the other. That is, we may
wish to find the distribution of X, given that X, = x,. For example, what is the distribution
of a person’s weight given that he is a particular height? This probability distribution would
be called the conditional distribution of X, given that X, = x).
Suppose that the random vector [X,, X,] is discrete. From the definition of conditional
probability, it is easily seen that the conditional probability distributions are
;
_ P(%%2) for all x,, x, (4-14)
Px,|x, (x)= ete)

and

))
)= Pt for all x,, x, (4-15)
X:n
jig

where p,(x,) > 0 and p,(x,) > 0.


It should be noted that there are as many conditional distributions of X, for given X, as
there are values x, with p,(x,) > 0, and there are as many conditional distributions of X, for
given X, as there are values x, with p,(x,) > 0.
80 Chapter4 Joint Probability Distributions

Example 4-10
Consider the counting of minor and major defects of the small pumps in Example 4-2 and Fig. 4-7.
There will be five conditional distributions of X,, one for each value of X,. They are shown in Fig. 4-9.
The distribution pyjo(x,), for X, = 0, is shown in Fig. 4-9a. Figure 4-9b shows the distribution py, (5).
Other conditional distributions could likewise be determined for-X, = 2, 3, and 4, respectively. The
distribution of X, given that X, = 3 is
{30 Le
Pxs)= 739=@
Peioke
Py 3(1) = 4730 4°

Py 3(¥ )=0, otherwise.

If [X,, X,] is a continuous random vector, the conditional densities are

7 fla |
Fes |x, la) atarey”
(4-16)
and
mu f(x.x2)
; fa(x2) (4-17)
fy be; (x:) = i 7

where f,(x,) > 0 and f,(x,) > 0. ,

Example 4-11
Suppose the joint density of [X,, X,] is the function
f presented here and shown in Fig. 4-10:

ote per O<x sO, Ss2:

=10) otherwise.

Xp 0 1 2 3 4
Pe,)ol%s) = P10, x2)/py(0) P00) Pl 4) 10.2) p(0, 3) p(0, 4)
P,(0) P,(0) p(0) p,(0) p(0)

s 1700) 1/30) Al 1/30 =_ —1 WON 3/30 3


ushen 156 © 71 P y So eR T Son enTa eT ee TiO a
Q t —S = —_ = — — ——— = — — = ——

(a)

Xp 0 1 2 3 4

Px} 1(Xe) = p(1, Xp)/p,(1) p(1, 0) p(1, 1) p(1, 2) p(1, 3) p(1, 4)


P(1) P3(1) P,(1) P3(1) Px(1)
: 1/30 1 1/30 1 2/30 2 3/30 3 0
Quotient Sani San
730° 7 7907S| SAAS
7i900:7.q Se aS
bawo) 7a Se
eve”

(b)
Figure 4-9 Some examples of conditional distributions.
4-4 Conditional Distributions 81

f(X;, Xo)

1 x1 Figure 4-10 A bivariate density function.

The marginal densities are f,(x,) and f,(x,). These are determined as
2)
flx)= | (3?4 2122 Jae=2x? ae <p |,
0 3 3}
=10) otherwise,

and
1 2, XXz Jt 25)
fy(x2) = fl + —— 3 |dx, 3=—+—, 6 0<x, x2 $2,

=); : otherwise.

The marginal densities are shown in Fig. 4-11.


The conditional densities may be determined using equations 4-16 and 4-17 as

i) === O<x, $10sx, $2,

=0, otherwise,

and

5 Wx<<am
SIL Osos S74

= otherwise.

Note that for Frgx, 2)» there are an infinite number of these conditional densities, one for each value
0 <x, < 1. Two of these, fy 4;1;2)(%2) and f,,),(2), are shown in Fig. 4-12. Also, for Fx;x,%1), there are an

f(x) ie (x1)

0 : 1 x;
Figure 4-11 The marginal densities for Example 4-11.
82 Chapter4 Joint Probability Distributions

fol x, 4

3 ,0
typ} v2 (X2) = 10 + IA x lA ie)

hl 1(X2) oa

1/2 1 xX;
Figure 4-12 Two conditional densities fy,),. and fy,,, from Example 4-11.

3x?
fy,|2X1) aur ee+ xX;

2
fy, 4(X4) + 2x? + 3°

0 x
Figure 4-13 Three conditional densities fy.io, fy,)1, and fy,o, from Example 4-11.

infinite number of these conditional densities, one for each value 0 < x, < 2. Three of these are shown
in Fig. 4-13.

4-5 CONDITIONAL EXPECTATION


In this section we are interested in such questions as determining a person’s expected
weight given that he is a particular height. More generally we want to find the expected
value of X, given information about X,. If [X,, X,] is a discrete random vector, the condi-
tional expectations are

E(Xib2) = Daye, (%1) (4-18)


and

E(X, Ix) = dP x,y, (x2). (4-19)

Note that there will be an E(X,|x,) for each value of x,. The value of each E(X,|x,) will
depend on the value x,, which is in turn governed by the probability function. Similarly,
there will be as many values of E(X|x,) as there are values x, and the value of E(X,|x,) will
depend on the value x, determined by the probability function.
4-5 Conditional Expectation 83

Exampie 4-12
Consider the probability distribution of the discrete random vector [X,, X,], where X , represents the
number of orders for a large turbine in July ard X, represents the number of orders in August. The
joint distribution as well as the marginal distributions are given in Fig. 4-14. We consider the three
conditional distributions Px,jo» Px,j1» ANd py, and the conditional expected values of each:

1 1 1
LX ol)=— x2 =0, Pry (2) = 75° x2 =0, Pxo(42)= > x. =0

A 5 1
=-, x, =1, =—, t=. =— Gy =,
6 10 i 4 i
2 3 1
==, x, =2, =—, X, =2 a= Fs ee
6 10 4 z
1 1 = &
0 Xo = 3, SS xX, =3, aU ES
6 10 z ;
; \ =0 otherwise
=i0; otherwise, ==), othei wise,
1 2 5 e 1 1
E(X,|0)=0-—+1-= Eadie BD] Ons.
6 6 7 1 10 2 4
2 1 1
+2-=+3-—=15 Dee Ne att A trata)
— 31D
6 6 10 10 4

If [X,, X,] is a continuous random vector, the conditional expectations are

E(X,|x.) = Ie xf jy, (1)er | (4-20)


and

E(Xa|x,) = [2 Fy), (#2)42 (4-21)


and in each case there will be an infinite number of values that the expecte 1 value may take.
In equation 4-20, there will be one value of E(X,|x,) for each value x, and in equation 4-21,
there will be one value of E(X,|x,) for each value x,.

Example 4-13-
In Example 4-11, we considered a joint density f, where

f(%9) =%7 ae O<x,<10<x, $2,


=i) otherwise.

Figure 4-14 Joint and marginal distributions of [X,, X,]. Values in body of table are p(x), x3).
84 Chapter4 Joint Probability Distributions

The conditional densities were

x,)=— O<x, <10<x, <2,


Fri (2) Phe Seaveril . aor

and

Fapal)= 13x +%2)


te pay’
0<x,<$1,0<x, <2.

Then, using equation 4-21, the E(X,|x,) is determined as

2) Aa, +
E(Xa)x) = [[« ae ‘eee
7 dx
_ 9x, +4

9x, +3.

It should be noted that this is a function of x,. For the two conditional densities shown in Fig. 4-12,
where x,= $ and x,= 1, the corresponding expected values are E(X,}5 ) = 5 and E(X,|1)= Ti

Since E(X,|x,) is a function of x,, and x, is a realization of the random variable X,,
E(X,|X,) is a random variable, and we may consider the expected value of E(X,|X,), that is,
E[E(X,|X,)]. The inner operator is the expectation of X, given X, = x,, and the outer expec-
tation is with respect to the marginal density of X,. This suggests the following “double
expectation” result.

Theorem 4-1
E{E(X,|X,)] = E(X,)
= Wy (422)
and

E(E(X,|X,)] = E(X,) = uy, (4-23)

Proof Suppose that X, and X, are continuous random variables. (The discrete case is sim-
ilar.) Since the random variable E(X,|X,) is a function of X,, the law of the unconscious stat-
istician (see Section 3-5) says that

ELE(X2|X,)]=
[_E(Xalu (mn)
=f (Sheafigia (redder Jina
=| |xXSyle (2 )fi(m) dxdx,

fila )dxydxy
=| Jo»2 fants),

aege "2 (a f(x,y) dxydry

= [ic Glen
= E(X,),
which is equation 4-22. Equation 4-23 is derived similarly, and the proof is complete.
4-6 Regression of the Mean 85

Example 4-14
Consider yet again the joint probability density function from Example 4-11.

P(u%)=xi4?, O0<x, $1,0<x, <2,

=U, otherwise.
In that example, we derived expressions for the marginal densities f,(x,) and f,(x,). We also derived
E(X,|x,) = (9x, + 4)/(9x, + 3) in Example 4-13. Thus,

E[E(X, IX,)= [2% |x,\A(2; dx,


1
=| 9x,+4 (2xt+Z Jan

ol 9x, +3 3
_10oh
Note that this is also E(X,), since

E(X,)= Jehln )dx, = lee (+ 2) mato

The variance operator may be applied to conditional distributions exactly as in the uni-
variate case.

4-6 REGRESSION OF THE MEAN


it has been observed previously that E(X,|x,) is a value of the random variable E(X,|X,) for
a particular X, = x,, and it is a function of x,. The graph of this functior is called the regres-
sion of X, on X,. Alternatively, the function E(X,|x,) would be callea ine regression of X, on
X,. This is demonstrated in Fig. 4-15.

icanipte “is

In Example 4-13 we found E(X,|x,) for the bivariate density of Example 4-11, that is,

Fee Rae: = eat. SOS XS 2,


=0 otherwise.

E(X,| sy E(X,] x2)

x; X2
(a) (b)
Figure 4-15 Some regression curves. (a) Regression of X, on X,. (b) Regression of X, on Xo:
86 Chapter4 Joint Probability Distributions

The result was

In a like manner, we may find

ea 1
[es
3x2 4x, x

— 42249
a6x5 £125

Regression will be discussed further in Chapters 14 and 15.

4-7 INDEPENDENCE OF RANDOM VARIABLES


The notions of independence and independent random variables are very useful and impor-
tant statistical concepts. In Chapter 1 the idea of independent events was introduced and a
formal definition of this concept was presented. We are now concerned with defining inde-
pendent random variables. When the outcome of one variable, say X,, does not influence
the outcome of X,, and vice versa, we say the random variables X, and X, are independent.

Definition

1. If [X,, X,] is.a discrete random vector, then we say that X, and X, are independent
if and only if

P(X, X,) = p,(x,) : P7(X>) (4-24)

for all x, and x,.


2. If [X,, X,] is a continuous random vector, then we say that X, and X, are independ-
ent if and only if

AX; %) =F) -ACry (4-25)


forall x, and x,.
Utilizing this definition and the properties of conditional probability distributions we
may extend the concept of independence to a theorem.

Theorem 4-2
1. Let [X,, X,] be a discrete random vector. Then

Pyxx,%2) = P(X)
and

Px p%1) = p,(%,)

for all x, and x, if and only if X, and X, are independent.


2. Let [X,, X,] be a continuous random vector. Then

Fyaje,2) = fOr)
4-8 Covariance and Correlation 87

and

Fp) =f\)

for all x, and x, if and only if X, and X, are independent.

Proof We consider here only the continuous case. We see that


Fre; 2) = f,(x,) for all x, and x,
if and only if

(Assam)
a= f,(x,) for all x, and x,
f(x)
if and only if

IK, x2) =f, &)f,) for all x, and x,


if and only if X, and X, are independent, and we are done.
Note that the requirement for the joint distribution to be factorable into the respective
marginal distributions is somewhat similar to the requirement that, for independent events,
the probability of the intersection of events equals the product of the event probabilities.

Example 4-16
A city transit service receives calls from broken-down buses and a wrecker crew must haul the buses
in for service. The joint distribution of the number of calls received on Mondays and Tuesdays is
given in Fig. 4-16, along with the marginal distributions. The variable X, represents the number of
calls on Mondays and X, represents the number of calls on Tuesdays. A quick inspection will show
that X, and X, are independent, since the joint probabilities are the product of the appropriate marginal
probabilities.

4-8 COVARIANCE AND CORRELATION


We have noted that E(X,) =u, and V(X,) = o; are the mean and variance of X,. They may
be determined from the marginal distribution of X,. In a similar manner, j1, and o,are the
mean and variance of X,. Two measures used in describing the degree of association
between X, and X, are the covariance of [X,, X,] and the correlation coefficient.

Po(Xo)

Figure 4-16 Joint probabilities for wrecker calls.


88 Chapter4 Joint Probability Distributions

Definition
If (X,, X,] is a two-dimensional random variable, the covurjance, denoted 0}, is
Cov(X), X2) =Oy =E[(X, — E(X, Mae E(X,))] (4-26)
and the correlation coefficient, denoted 9, is

iis
C (xX x
| Mae Fs (4-27)
~ W(X)
IV(X;) V(X) 0,02

The covariance is measured in the units of X, times the units of X,. The correlation
coefficient is a dimensionless quantity that measures the linear association between two
random variables. By performing the multiplication operations in equation 4-26 before dis-
tributing the outside expected value operator across the resulting quantities, we obtain an
alternate form for the covariance as follows:
Cov(X,, X,) = E(X, -X,) - E(X,) - E(X,). (4-28)

Theorem 4-3
{f X, and X, are independent, then p = 0.

case. If X, and X, are independent,


Proof We again prove the result forthe continuous
+f (X4,.X2 )dxdx,
E(X; -X;) = [Jia

r ss ieX1f(%1)- X2.fr(%2 daar,


= ieee faJan|fieX2fx(x Jar
= E(X,)- E(X).
Thus Cov(X,, X,) = 0 from equation 4-28, and p= 0 from equation 4-27. A similar argument
would be used for [X,, X,] discrete.
The converse of the theorem is not necessarily true, and we may have p= 0 without the
variables being independent. If p = 0, the random variables are said to be uncorrelated.

Theorem 4-4
The value of p will be on the interval [—1, +1], that is,
-lsps+l.

Proof Consider the function Q defined below and illustrated in Fig. 4-17.

Olt) = E[(X, - E(X,)) + (X,- EX)?»


= E[X, — E(X,)) + 2tE[(X, — E(X,))(X, - E(X,))]
+ PE[X, - E(X,)/.
Since Q(t) 2 0, the discriminant of Q(#) must be < 0, so that

{2E[(X, — E(X,))(X, — E(X,))]}? - 4E[X, - E(X,)P EX, - E(X,)7 < 0.


4-8 Covariance and Correlation 89

Q(t)

t Figure 4-17 The quadratic Q(:).

It follows that
4[Cov(X,, X,)]? — 4V(X,) - V(X,) $0,
sO

[Cov(X;, iG i
1
V(X,)V(X2)
and

~1<ps+l. (4-29)
A correlation of p = 1 indicates a “high positive correlation,” such as one might find
between the stock prices of Ford and General Motors (two related companies). On the
other hand, a “high negative correlation,” p = —1, might exist between snowfall and tem-
perature.

Recall Example 4-7, which examined a continuous random vector [X,, X,] with the joint probability
density function

KX), Xz) = 6x, O0<x,<x,<1,


=0, otherwise.

The marginal of X, is
fi) = 6x,(1-x,), forO<x,<1,
=0, otherwise,
and the marginal of X, is
flx) =32, \ for0<x,<1,
=0, otherwise.

These facts yield the following results:

F(X,)= [oxi )dx, =1/2,


1

if
E(x?) = [i6x}(1—x,
)dx, = 3/10,
V(X,) = E(X?)-[E(X,)} =1/20,
90 Chapter4 Joint Probability Distributions

E(X,)= [3x3dx, =3/4,


E(X3)= if3x$dx, = 3/5,
V(X) = E(X3)-[E(X,)] = 0.39.
Further, we have

E(X,X,) = [FE nx flene)dndr,

1 px.
= [te6x2x, dx, dx, = 2/5.
This implies that
Cov(X,, X,) = E(X,X,) — E(X,)E(X,) = 1/40,

and then

= ORXa) 9179,
V(X,)-V(X2)

Example 4-18
A Continuous random vector [X,, X,] has density function f as given below:

Kx), x) = 1, =X <X,<+%,0<x,
<1,
=0, otherwise.

This function is shown in Fig. 4-18.


The marginal densities are
fi) =1-x, for 0 <x, <1,
=1+4x, for -1 <x, <0,
=0, otherwise,
and

F(x.) =2x,, O<x,<1,


=0, otherwise.

Figure 4-18 A joint density from Example 4-18.


4-9 The Distribution Function for Two-Dimensional Random Variables 91

Since f(x,, x2) # f,(x,) - f{Q), the variables are not independent. If we calculate the covariance, we
obtain
1 px.
Cov(X,.X)= [|eesX, +) +1 dx, dx, -0=0,

and thus p= 0, so the variables are uncorrelated although they are not independent.

Finally, it is noted that if X, is related to X, linearly, that is, X, =A + BX,, then p’ = 1.


If B > 0, then p= +1; and if B <0, p=-—1. Thus, as we observed earlier, the correlation coef-
ficient is a measure of linear association between two random variables.

4-9 THE DISTRIBUTION FUNCTION FOR TWO-DIMENSIONAL.


RANDOM VARIABLES
The distribution function of the random vector [X,, X,] is F, where
F(x,, X)) = P(X, S$ x,, X, $ x,). (4-30)
This is the probability over the shaded region in Fig. 4-19.
If [X,, X,] is discrete, then

F(x,%2)= )) Y(t), (4-31)


t) Sx, t, Sx,

and if [X,, X,] is continuous, then


x x
F(x,%2) = (ale Ff(t,t, )dtydty. (4-32)

Example 4-19
Suppose X, and X, have the following density:
X4,X))=24x,x%,
19 X2 1% x,1 >0,x,>0,x,
2 +2, <1,
==i()s otherwise.

Looking at the Euclidean plane shown in Fig. 4-20 we see several cases that must be considered.

1. x, $0, F(x,x) =0.


2. x, $0, F(x),
x) =0.

Figure 4-19 Domain of integration or sum-


mation for F(x,, x2).
92 Chapter4 Joint Probability Distributions

Figure 4-20 The domain of F, Example 4-19.

3. O<x, <1 andx,+x,<1,

F(x,,x2) = i}[-24t)t, dt, dt,

= 6x? x3.

4. O<x,<landl—-x,<x,S1,
I-x, px, x, pl-t
P(x,.22)= | [ 241,t, dt, dt, +f ih 24t,t, dt, dt,
0 —x

= 3x4-8x} +6x7 +3x3 -8x3 +6x5-1.

§. O<x,<1andx,>1,
x)
F(x,,x2)= ifibe24t,t dt, dt,
= 6x4-8x}+3x4.

6. OS x,< landx, 21,


F(x,, X>) = 6x; - 8x; + 325.

7. x,21andx,21,
F(x,, X)) = 1.

The function F has properties analogous to those discussed in the one-dimensional


case. We note that when X, and X, are continuous,

if the derivatives exist.

4-10 FUNCTIONS OF TWO RANDOM VARIABLES


Often we will be interested in functions of several random variables; however, at present,
this section will concentrate on functions of two random variables, say Y= H(X,, X,). Since
X, = X,(e) and X, = X,(e), we see that Y = H[X,(e), X,(e)] clearly depends on the outcome
of the original experiment and, thus, Y is a random variable with range space Ry.
The problem of finding the distribution of Y is somewhat more involved than in the
case of functions of one variable; however, if [X,, X,] is discrete, the procedure is straight-
forward if X, and X, take on a relatively small number of values.
4-10 Functions of Two Random Variables 93

If X, represents the number of defective units produced by machine No. | in 1 hour and X, represents
the number of defective units produced by machine No. 2 in the same hour, then the joint distribution
might be presented as:in Fig. 4-21. Furthermore, suppose the random variable Y = H(X,, X,), where
H(x,, X,) = 2x, + xp. It follows that Ry = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}. In order to determine, say, P(Y
= 0) = pO), we note that ¥ = 0 if and only if X, = 0 and X, = 0; therefore, p,(0) = 0.02.
We note that Y = 1 if and only if X, = 0 and X, = 1; therefore p,{1) = 0.06. We also note that Y=2
if and only if either X, = 0, X, =2 or X, = 1, X, = 0; so p,{2) = 0.10 + 0.03 = 0.13. Using similar logic,
we obtain the rest of the distribution, as follows:

Ji PyOid
0 0.02
1 0.06
2 0.13
8 0.11
4 0.19
5 0.15
6 0.21
i 0.07
8 0.05
9 0.01
otherwise 0O

In the case where the random vector is continuous with joint density function f(x,, x)
and H(x,, x.) is continuous, then Y = H(X,, X,) is a continuous, one-dimensional random
variable. The general procedure for the determination of the density function of Y is out-
lined below.
1. We are given Y= H,(X,, X,).
2. Introduce a second random variable Z = H,(X,, X,). The function H, is selected for
convenience, but we want to be able to solve y = A,(x,, x,) and z = H,(x,, x) for x,
and x, in terms of y and z.
3. Find x, = G,(y, z), and x, = G,(y, 2).
4. Find the following partial derivatives (we assume they exist and are continuous):

0% Ox 2k Ok
dy dz dy . dz.

Po(X2)

Figure 4-21 Joint distribution of defectives


SEE AN produced on two machines p(x,, x5).
94 Chapter4 Joint Probability Distributions

5. The joint density of [Y, Z], denoted €(), z), is found as follows:
€(y, 2) =f1G,0, 2), G0, 2) Vo, a, (4-33)
where J(y, z), called the Jacobian of the transformation, is given by the determinant

dx, /dy dx,/dz (4-34)


J(y, 2)= L
0.2) Ox, /dy odx,/dz

6. The density of Y, say gy, is then found as

sy(»)=f_ &y,2)dz. (4-35)

Example 4-21
Consider the continuous random vector [X,, X,] with the following density:

fin) =4e0! 255 ex >On > 0.


=0, otherwise.

Suppose we are interested in the distribution of Y = X,/X,. We will let y = x,/x, and choose z= x, + x,
so that x, = yz/(1 + y) and x, = z/(1 + y). It follows that

dx, i//dy=——
res
yy and ox i/az=——,
i+

1
Ox Oy and ox, /dz
= ——.
2/ (1+y)’ af l+y

Therefore,

Zz y
2 ae
Inzi=|
Oe ee
—. —_—_|
ele
l+yz
(I+y)
eez l+y
(1+y)° (1+y)
(I+y)° (1+y)
and

AGO, 2), Gy(, 2)] = dele +9).


=4e%,
Thus,

Wc) =tde** —
(1+y )

and

y= if4e722 [-/a+ y) Jaz


Lvoee)
y>0,
“(
(14
+ y)?
=0, otherwise.
ee
ee, ee

4-11 JOINT DISTRIBUTIONS OF DIMENSION n > 2


Should we have three or more random variables, the random vector will be denoted
[X,, X,, ..., X,], and extensions will follow from the two-dimensional case. We will assume
4-11 Joint Distributions of Dimensionn>2 95

that the variables are continuous; however, the results may readily be extended to the dis-
crete case by substituting the appropriate summation operations for integrals. We assume
the existence of a joint density f such that

Gee oe aed!) (4-36)


and

deeb xd | ARCA i -++dx5dx, =i

Thus,

Pay sXPs bp ay 2X, 5b) uh Gi sX, <b.)


_ phi pes pb,
=f5 is ale litnentateal Eis *dx,dx,. (4-37)

The marginal densities are determined as follows:

fla)=(—_f- se [ f(X1.2 34-000q) Oty dry,

fr(22) - [espe fh f(a 3.0 ty) a, *dx3,dx,,

Aja ety Cees, dr, dx dx.

The integration is over all variables having a subscript different from the one for which the
marginal density is required.

Definition
The variables [X,, X;,..., X,,] are independent random variables if and only if for all [x,, x,, ..., X,]

Kx, X>, tee x,,) =f\(x,) - f(x) ores Hees (4-38)

The expected value of, say, X, is

m= E(X%)= [ff flea, arid, de, (4-39)


and the variance is

)dtd, dey.
V(X) = [ed ke lie (0= Lh) P(X1 2900, (4-40)
We recognize these as the mean and variance, respectively, of the marginal distribution of X,.
In the two-dimensional case considered earlier, geometric interpretations were instruc-
tive; however, in dealing with n-dimensional random vectors, the range space is the Euclid-
ean n-space, and graphical presentations are thus not possible. The marginal distributions
are, however, in one dimension and the conditional distribution for one variable given val-
ues for the other variables is in one dimension. The conditional distribution of X, given val-
ues (X,, Xz, ..., X,) 1s denoted

tre) (4-41)

and the expected value of X, for given (Xx, ..., X,) is

E(X,|x21%3,.-1%n) = [ix PIE lpn: (x, dx). (4-42)


96 Chapter4 Joint Probability Distributions

The hypothetical graph of E(X,|x,, x;, ..., X,) aS a function of the vector [x,, x3, .... X,] is
called the regression of X, on (X3, X3, ..., X,).
»

4-12 LINEAR COMBINATIONS


The consideration of general functions of random variables, say X,, X, ..., X,, is beyond the
scope of this text. However, there is one particular function of the form Y = H(X,, ...,.X,,),
where
H(X,, X,, sees X,.) = ay + aX, +++ +4,X,, (4-43)

that is of interest. The a; are real constants for i= 0, 1, 2, ..., 2. This is called a linear com-
bination of the variables X,, X,, ..., X,. A special situation occurs when a, = 0 and a, = a,
=---=a,= 1, in which case we have a sum Y= X, + X,+-:: +X,

‘Example 4-22
Four resistors are connected in series as shown in Fig. 4-22. Each resistor has a resistance that is a ran-
dom variable. The resistance of the assembly may be denoted Y, where Y = X, + X, + X, + X,.

‘Example 4-23
Two parts are to be assembled as shown in Fig. 4-23. The clearance can be expressed as Y = X, — X,
or Y=(1)X, + (—1)X,. Of course, a negative clearance would mean interference. This is a linear com-
bination with a, = 0, a, = 1, anda, =-1.

Example 4-24
A sample of 10 items is randomly selected from the output of a process that 1.anufactures a small
snaft used in electric fan motors, and the diameters are to be measured with a value called the sample
mean, calculated as

X= o
0% +X, +--+X,).

The value X = WX, + 1X, tee + 0X19 is a linear combination with a) = 0 and a, = a, =--- =a,)= 1

WWW VN )
x; Xo Xz X4 Figure 4-22 Resistors in series.

Figure 4-23 A siinple assembly.


4-12 Linear Combinations 97

Let us next consider how to determine the mean and variance of linear combinations.
Consider the sum of two random variables,

Y=X,+X,. (4-44)
The mean of Yor ty = E(Y) is given as

E(Y) = E(X,) + E(X,). (4-45)


However, the variance calculation is not so obvious.

V(Y) = ELY - E(Y)}* = E(Y’) - [E(Y)/


= E[(X, + X,)"] — [E(X, + X,))
= E[X {+ 2X,X, + X35] - [E(X,) + E(X))
= E(X{) + 2E(X,X;) + E(X 3)— [E(X,)? — 2E(X,) - E(X;) — (E(X,)P
= {E(X}) — [E(X,)?} + {E(X3) — [E(X,)]°} + 2[E(X,X,) - E(X,) - E(X,)]
= V(X,) + V(X,) + 2Cov(X,, X;),

or
Oy = 0; +03 +20}. (4-46)
These results generalize to any linear combination
Y=a)+a,X, + a,X,+---+a,X, (4-47)

as follows:

E(Y) = ap + ¥ 4,8 (X))

=a) + Dia,tt;, (4-48)


i=l

where E(X,) = 1, and

V(Y) = ¥ @v(V(X 43, Yaa,0, Cov(X;,X;) (4-49)


i=) i=] j=l
it]

or

2S wot DY aa
i=1 i=1 j=l
i#j

If the variables are independent, the expression for the variance of Y is greatly simpli-
fied, as all the covariance terms are zero. In this situation the variance of Y is simply

V(Y) = ¥.a; -V(X;) (4-50)

or
98 Chapter4 Joint Probability Distributions

Example 4-25
In Example 4-22, four resistors were connected in series so that Y = X, + X, + X; + X, was the resist-
ance of the assembly, where X, was the resistance of the first resistor, and so on. The mean and vari-
ance of Y in terms of the means and variances of the components may -be easily calculated. If the
resistors are selected randomly for the assembly, it is reasonableto assume that X,, X,, X;, and X, are
independent,

My
= My + bh + Wy + My
and

0} =0) +03 +03 +04.


We have said nothing yet about the distribution of Y; however, given the mean and variance of X,,
X,, X;, and X,, we may readily calculate the mean and the variance of Y since the variables are
independent.

‘Example 4-26
In Example 4-23, where two components were to be assembled, suppose the joint distribution of
[X,, X,] is
fh La 8e Fiten} x, 20,x,20,
=0, otherwise.

Since f{x,, x) can be easily factored as


AX, X) = [2e"] « [4e%”]
=fi%) - fA),
X, and X, are independent. Furthermore, E(X,) = J, = 3,and E(X,) = [t, =z. We may calculate the vari-
ances

V(X,)=07 = [27-20 dr, -(2) =+


0 2 4
and

V(X,)=03 =|" x3-4e“" dx, -(


1y
: =o
0 4 16

We denoted the clearance Y=X, ~*% or in the example, so that E(Y)= w, — = $ -i=t and V(Y)
=(1)?- 0} +(-1)*- o3= iseiee=.

In Example 4-24, we might expect.the random variables X,, X,, ..., Xj. to be independent because of
the random sampling process. Furthermore, the distribution for each variable X, is identical. This is
shown in Fig. 4-24. In the earlier example, the linear combination of interest was the sample mean

X= 10 X, + Tragctk

It follows that
= = uz = 1 ElXi)
E(X) +55 ElXa) t+ E(Xio)
1
4-14 The Law of Large Numbers 99

of = 07

Hy =H x Hip =H X10
Figure 4-24 Some identical distributions.

Furthermore,

4-13 MOMENT-GENERATING FUNCTIONS


AND LINEAR COMBINATIONS
In the case where Y = aX, it is easy to show that

M(t) = M,(at). (4-51)


For sums of independent random variables Y = X, + X,+---+X,,n
M,{t) = My,(0) My.) «++» My (0). (4-52)
This property has considerable use in statistics. If the linear combination is of the general
form Y = a) + a,X, + --- +a,X, and the variables X,, ..., X, are independent, then
Mt) = e*'[My (a,t) - My,(ayt) - +++ > My (a,t)).
Linear combinations are to be of particular significance in later chapters, and we will dis-
cuss them again at greater length.

4-14 THE LAW OF LARGE NUMBERS


A special case arises in dealing with sums of independent random variables where each
variable may take only two values, 0 and 1. Consider the following formulation. An exper-
iment consists of n independent experiments (trials) €,, j= 1, 2, ...,n. There are only two
outcomes, success, {5}, and failure, {F'}, to each trial, so that the sample space F, = {S, F}.
The probabilities

P{S} =p
and
ith =l- p= q
remain constant for j = 1, 2, ..., n. We let

X.=
( ~ if the jth trial results ;in failure,
1 if the jth trial results in success,
100 Chapter 4 Joint Probability Distributions

and

Y=X,+X,+---+X,.

Thus Y represents the number of successes in n trials, and Y/n is an approximation (or esti-
mator) for the unknown probability p. For convenience, we will let p = Y/n. Note that this
value corresponds to the term f, used in the relative frequency definition of Chapter 1.
The law of large numbers states that

re l=
Pip a<el21- aes, (4-53)
ne
or equivalently

A er
Pip - p| 2 ¢|s al -?); (4-54)
ne
To indicate the proof, we note that

E(Y) =n - E(X)) =n[(O- q) + (1 p)] =np


and

VY) = nV(X,) = nf(O? - g) + (1? - p) - (@)*] = mp( = p).


Since p = Y/n, we have

il
E(b) =—- E(Y) = p (4-55)
and

F-aish
A
P l=n Back (4-56)
so if

e=k plteee)
n

then we obtain equation 4-53.


Thus for arbitrary e > 0, as n > ©,

P\lp -pl< elo 1.


Equation 4-53 may be rewritten, with an obvious notation, as

Pilp-pl<elzl1—-a (4-57)
We may now fix both € and a in equation 4-57 and determine the value of n required to sat-
isfy the probability statement as

ah Oe (4-58)
4-16 Exercises 101

Example 4-28
A manufacturing process operates so that there is a probability p that each item produced is defective,
and p is unknown. A random sample of n items is to be selected to estimate p. The estimator to be used
is p= Y/n, where

0, if the jth item is good,


if the jth item is defective,
and

Y=X,+X,+-:-+X,.
It is desired that the probability be at least 0.95 that the error, |p- pl, not exceed 0.01. In order to
determine the required value of n, we note that € = 0.01, and a@= 0.05; however, p is unknown. Equa-
tion 4-58 indicates that

ms Als P) at
(0.01) -(0.05)

Since p is unknown, the worst possible case must be assumed [note that p(1 — p) is maximum when
Pp =3. This yields

0.5)(0.5
(0.01)° (0.05)
a very large number indeed.

Example 4-28 demonstrates why the law of large numbers sometimes requires large
sample sizes. The requirements of €= 0.01 and a@=0.05 to give a probability of 0.95 of the
departure | — p| being less than 0.01 seem reasonable; however, the resulting sample size
is very large. In order to resolve problems of this nature we must know the distribution of
the random variables involved (f in this case). The next three chapters will consider in
detail a number of the more frequently encountered distributions.

4-15 SUMMARY
This chapter has presented a number of topics related to jointly distributed random variables
and functions of jointly distributed variables. The examples presented illustrated these top-
ics, and the exercises that follow will allow the student to reinforce these concepts.
A great many situations encountered in engineering, science, and management involve
situations where several related random variables simultaneously bear on the response
being observed. The approach presented in this chapter provides the structure for dealing
with several aspects of such problems.

4-16 EXERCISES

4-1. A refrigerator manufacturer subjects his finished table, where X represents the occurrence of finish
products to a final inspection. Of interest are two cat- defects and Y represents the occurrence of mechanical
egories of defects: scratches or flaws in the porcelain defects.
finish, and mechanical defects. The number of each (a) Find the marginal distributions of X and Y.
type of defect is a random variable. The results of (b) Find the probability distribution of mechanical
inspecting 50 refrigerators are shown in the following defects given that there are no finish defects.
102 Chapter 4 Joint Probability Distributions

rd |Ome. 45)
(a) Find the appropriate value of k.
cae (b) Calculate the probability that X, < 1, X, <3.
Pea 11/50 |4/50
a sro a0[neo[v0[a0|
(c) Calculate the probability that X, + X, <4.

a [aso 366[260 a0 |]
(d) Find the probability that X, < 1.5.

Iau a
(e) Find the marginal denéities of both X, and X,.

Mba =
4-5. Consider the density function
SW, x y, 2) = l6wxyz, O<w,x,y,z<1
=0, ee
(c) Find the probability distribution of finish defects (a) Compute the protiability that W < >
> and VES 5.
given that there are no mechanical defects. (b) Compute the probability that X < ie
> ZS +
4-2. An inventory manager has accumulated records (c) Find the marginal density of W.
of demand for her company’s product over the last
100 days. The random variable X represents the num- 4-6, Suppose the joint density of [X, Y] is
ber of orders received per day and the random variable 1
Y represents the number of units per order. Her data f(x,y) = go-2-y) O<x<2,2<y<4,
are shown in the table at the bottom of this page. =(0} otherwise.
(a) Find the marginal distributions of X and Y.
Find the conditional densities Fyy(x) and fy.().
(b) Find all conditional distributions for Y given X.
4-3. Let X, and X, be the scores on a géneral intelli- 4-7, For the data in Exercise 4-2 find the expected
gence test and an occupational preference test, respec- number of units per order given that there are three
tively. The probability density function of the random orders per day.
variables [X,, X,] is given by
4-8. Consider the probability distribution of the dis-
f(x1,%2) = ——, 0x, $100,0<x, $10, crete random vector [X,, X,], where X, represents the
number of orders for aspirin in August at the neigh-
otherwise. borhood drugstore and X, represents the number of
orders in September. The joint distribution is shown in
(a) Find the appropriate value of k.
the table on the next page.
(b) Find the marginal densities of X, and X,.
(a) Find the marginal distributions.
(c) Find an expression for the cumulative distribution
(b) Find the expected sales in September given that
function F(x,, x).
sales in August were either 51, 52, 53, 54, or 55.
4-4, Consider a situation in which the surface tension
and acidity of a chemical product are measured. These 4-9. Assume that X, and X, are coded scores on two
variables are coded such that surface tension is meas- intelligence tests, and the probability density function
ured on a scale 0 < X, $ 2, and acidity is measured on of [X,, X,] is given by
a scale 2 < X, <4. The probability density function of Kx, x) = 6x,X>, Osx, <1,0Sx,<1,
[X,, X,] is =0, otherwise.
AX), X)) =K(6—x, —x,), Osx, $2,2<sx, <4, Find the expected value of the score on test No. 2
= otherwise. given the score on test No. 1. Also, find the expected

ee a Ee ee
8/100 15/100 |3/100 |2/100 Toeaas

4
2/100 Pc a Pe
|7/100 VA00: bai| ta) foaled
ae0 0 ew
4-16 Exercises 103

to(z)=[_ g(uz)h(u) hud


Hint: Let Z= X/Y and U=Y, and find the Jacobian for
the transformation to the joint probability density
function of Z and U, say r(z, u). Then integrate r(z, 1)
with respect to u.
4-14. Suppose we have a simple electrical circuit in
which Ohm’s law V = JR holds. We wish to find the
probability distribution of resistance given that the
value of the score on test No. 1 given the score on test probability distributions of voltage (V) and current (J)
No. 2. are known 'to be .
4-10. Let gv) =e”, v20,

AO, X) = 4x 2,678 +, pee ea = 0, otherwise;


=0, otherwise. RGy=ser,, i2=0;
(a) Find the marginal distributions of X; and X,. =0, otherwise.
(b) Find the conditional probability distributions of Use the results of the previous problem, and assume
X, and X,. that V and J are independent random variables.
(c) Find expressions for the conditional expectations 4-15. Demand for a certain product is a random vari-
of X, and X,. able having a mean of 20 units per day and a variance
4-11. Assume that [X, Y] is a continuous random vec- of 9. We define the lead time to be the time that elapses
tor and that X and Y are independent such that fix, y) = between the placement of an order and its arrival. The
g(x)h(y). Define a new random variable Z = XY. Show lead time for the product is fixed at 4 days. Find the
that the probability density function of Z, , (z), is expected value and the variance of /ead time demand,
given by assuming demands to be independently distributed.

el)=[_ orl) lat. t


4-16. Prove the discrete case of Theorem 4-2.
4-17. Let X, and X, be random variables such that X,
= A + BX,. Show that
p’ = 1 and that p=-1 if B<0
Hint: Let Z = XY and T = X and find the Jacobian for
while p= +1 if B>0.
the transformation to the joint probability density
function of Z and T, say r(z, t). Then integrate r(z, t) 4-18. Let X, and X, be random variables such that X,
with respect to t. =A + BX,. Show that the moment-generating function
for X, is
4-12. Use the result of the previous problem to find
the probability density function of the area of a rec- M,(t) = eM (Bt).
tangle A = S,S,, where the sides are of random length. 4-19, Let X, and X, be distributed according to
Specifically, the sides are independent random vari-
AX, X%2) = 2, Osx,<x,
81,
ables such that
=()) otherwise.
85,(S,) = 25), 0ss,s1,
Find the correlation coefficient between X, and X,.
=) otherwise,
4-20. Let X, and X, be random variables with correla-
and
tion coefficient py, y,. Suppose we define two new ran-
hs,(62)= 352 OSs, <4, dom variables U =A + BX, and V= C + DX,, where A,
B, C, and D are constants. Show that pyy =
=0, otherwise.
(BD/BD I)Px,,Xr
Some care must be taken in determining the limits of 4.21. Consider the data shown in Exercise 4-1. Are
integration because the variable of integration cannot X and Y independent? Calculate the correlation
assume negative values. coefficient.
4-13. Assume that [X, Y] is a continuous random vec- 4-22. A couple wishes to sell their house. The mini-
tor and that X and Y are independent such that f(x, y) mum price that they are willing to accept is a random
= g(x)h(y). Define a new random variable Z = X/Y. variable, say X, where s, $ X < s,. A population of
Show that the probability density function of Z, ,(z), buyers is interested in the house. Let Y, where p, < Y
is given by S p,, denote the maximum price they are willing to
104 Chapter4 Joint Probability Distributions

pay. Y is also a random variable. Assume that the joint 4-28. For the bivariate distribution,
distribution of [X, Y] is f(x, y). k(1+x+y)
x,y CET BEET ETT EO OS x<0,
0S y<oo,
(a) Under what circumstances will a sale take place? f(xy) (1+ x), (I+ y)
(b) Write an expression for the probability of a sale = otherwise.
taking place.
(c) Write an expression for the expected price of the (a) Evaluate the constant k.
transaction. (b) Find the marginal distribution of X.

4-23. Let [X, Y] be uniformly distributed over the 4-29. For the bivariate distribution,
semicircle in the following diagram. Thus f(x, y) = 2/7
if [x, y] is in the semicircle. f(x,y) (l+ x+y)"
=— * ek 20) MaO hn,

(a) Find the marginal distributions of X and Y. =. otherwise.


(b) Find the conditional probability distributions.
(a) Evaluate the constant k.
(c) Find the conditional expectations.
(b) Find F(x, y).

y. 4-30. The manager of a small bank wishes to deter-


mine the proportion of the time a particular teller is
y= (1 = x22 busy. He decides to observe the teller at n randomly
spaced intervals. The estimator of the degree of gain-
ful employment is to be ¥/n, where
if on the ith observation, the teller is idle,
xe?
ie 1,
-1 0 1 x if on the ith observation, the teller is busy,

and Y= "Xi. It is desired to estimate p= P(X, = 1)


n

4-24, Let X and Y be independent random variables.


Prove that E(Xly) = E(X) and that E(¥|x) = E(Y). so that the error of the estimate does not exceed 0.05
with probability 0.95. Determine the necessary value
4-25. Show that, in the discrete’ case, of r
ELE(XIY)] = E(X), 4-31. Given the following joint distributions, deter-
E{E(Y|X)] = E(Y). mine whether X and Y are independent.
4-26. Consider the two independent random variables (a) g(x, y)=4xye"*”, x20, y20.
S and D, whose probability densities are (DAG) SICY.. OSX SVSel.
(c) flu y)=6(1+x+y)*, x20, y20.
Oe. 10<s< 40,
4-32. Let f(x, y, z) = hA(x)hQ)A(z), x 20, y 20, z 20.
=0 otherwise; Determine the probability that a point drawn at ran-
dom will have a coordinate (x, y, z) that does not sat-
1
p(d)=—,
8p( ) 0 10<d<30, isfy either x > y>zorx<y<z.

=0. otherwise. 4-33. Suppose that X and Y are random variables


denoting the fraction of a day that a request for mer-
Find the probability distribution of the new random chandise occurs and the receipt of a shipment occurs,
variable respectively. The joint probability density function is
W=S+D. Te yy =i O<xs1,0sy<l,
4-27. If =) otherwise.
Se yo =xt+y, 0<x<1,0<y<l, (a) What is the probability that both the request for
=0, otherwise,
merchandise and the receipt of an order occur
during the first half of the day?
find the following:
(b) What is the probability that a request for merchan-
(a) E[Xly). dise occurs after its receipt? Before its receipt?
(b) E[X].
4-34. Suppose that in Problem 4-33 the merchandise
(c) E[Y]. is highly perishable and must be requested during the
4-16 Exercises 105

4-day interval after it arrives. What is the probability (a) Z=a+bx.


that merchandise will not spoil? (be Z= Ux
4-35. Let X be a continuous random variable with (c) Z=In X,
probability density function f(x). Find a general
(d) Z=e.
expression for the new random variable Z, where
Chapter 5

Some Important Discrete


Distributions

5-1 INTRODUCTION
In this chapter we present several discrete probability distributions, developing their ana-
lytical form from certain basic assumptions about real-world phenomena. We also present
some examples of their application. The distributions presented have found extensive appli-
cation in engineering, operations research, and management science. Four of the distribu-
tions, the binomial, the geometric, the Pascal, and the negative binomial, stem from a
random process made up of sequential Bernoulli trials. The hypergeometric distribution,
the multinomial distribution, and the Poisson distribution will also be presented in this
chapter.
When we are dealing with one random variable and no ambiguity is introduced, the
symbol for the random variable will once again be omitted in the specification of the prob-
ability distributions and cumulative distribution function; thus, p,(x) = p(x) and F(x). =
F(x). This practice will be continued throughout the text.

5-2 BERNOULLI TRIALS AND THE BERNOULLI DISTRIBUTION


There are many problems in which the experiment consists of n trials or subexperiments.
Here we are concerned with an individual trial that has as its two possible outcomes suc-
cess, S, or failure, F. For each trial we thus have the following:
S - Perform an experiment (the jth) and observe the outcome.
SF: (S, F}.

For convenience, we will define a random variable X;= 1 if @, results in {S} and X; = 0 if
Gj results in F (see Fig. 5-1).
The n Bernoulli trials €,, €3, ..., €, are called a Bernoulli process if the trials are inde-
pendent, each trial has only two possible outcomes, say S or F, and the probability of suc-
cess remains constant from trial to trial. That is,

P(X, Xp5 +++ Xq) = Py) + Pa(Xq) + +++ -p,(%,)


and

Dp x; =1, Prep
p,(x;) = p(x,) = I-~p=q, x;=0, j=1,2,...,n, (5-1)
0 otherwise.

106
5-2 Bernoulli Trials and the Bernoulli Distribution 107

Figure 5-1 A Bernoulli trial.

For one trial, the distribution given in equation 5-1 and Fig. 5-2 is called the Bernoulli
distribution.
The mean and variance are

E(X))=(0-q)+(1-p)=p
and

V(X) = [(0° - q) + (1? - p)] - p? = p( - p) = pa. (5-2)


The moment-generating function may be shown to be

M,(t) = 4 + pe’. (3-3)

Example 5-1
Suppose we consider a manufacturing process in which a small steel part is produced by an auto-
matic machine. Furthermore, each part in a production run of 1000 parts may be classified as defec-
tive or good when inspected. We can think of the production of a part as a single trial that results
in success (say a defective) or failure (a good item). If we have reason to believe that the machine
is just as likely to produce a defective on one run as on another, and if the production of a defec-
tive on one run is neither more nor less likely because of the results on the previous runs, then it
would be quite reasonable to assume that the production run is a Bernoulli process with 1000 trials.
The probability, p, of a defective being produced on one trial is called the process average fraction
defective.

Note that in the preceding example the assumption of a Bernoulli process is a mathe-
matical idealization of the actual real-world situation. Effects of tool wear, machine adjust-
ment, and instrumentation difficulties were ignored. The real world was approximated by a
model that did not consider all factors, but nevertheless, the approximation is good enough
for useful results to be obtained.

p(x;))

0 1 x; Figure 5-2 The Bernoulli distribution.


108 Chapter5 Some Important Discvete Distributions

We are going to be primarily concerned wiih a series of Bernoulli trials. In this case the
experiment € is denoted {(€,, €,, ..., €,): €, are independent Bernoulli trials,j= 1, 2, ...,
n}. The sample space is »

PECs oe Xp Ho OF fede yaks Hts

Example 5-2

Suppose an experiment consists of three Bernoulli trials and the probability of success is p on each
3
trial (see Fig. 5-3). The random variable X is given by X = ¥
i= esr The distribution of X can be
determined as follows:

x p(x)

0 P{FFF}=q-q:q=q
1 P{FFS} + P{ FSF} + P{SFF} = 3pq’
2 P{ FSS} + P{SFS} + P{SSF} = 3p’q
3 P{SSS} =p"

5-3. THE BINOMIAL DISTRIBUTION


The random variable X that denotes the number of successes in n Bernoulli trials has a
binomial distribution given by p(x), where

n,
x ay x=0, 1, Deistas
nts) =("Jor0-

=0, otherwise. (5-4)


Example 5-2 illustrates a binomial distribution with n = 3. The parameters of the binomial
distribution are n and p, where n is a positive integer and 0 < p < 1. A simple derivation is
outlined below. Let

p(x) = P{“‘x successes in n trials”).

Figure 5-3 Three Bernoulli trials.


5-3 The Binomial Distribution 109

The probability of the particular outcome in & with Ss for the first x trials and Fs for the
last n — x trials is
:
Comdi

58 Dw ae, - Degas

n !
(where q = | — p), due to the independence of the trials. There are ( = a out-
comes having exactly x Ss and (n — x) Fs; therefore, " an x):

p(x) = Jota" eee 0,1) 2. an,


n _

x
= 0, otherwise.

Since q = 1 — p, this last expression is the binomial distribution.

5-3.1 Mean and Variance of the Binomial Distribution


The mean of the binomial distribution may be determined as

. n!
(x) =
EX) 2 yx: Esa Spe

=hp)‘ . oes ))!


x->-——_———— x=) n-x
;
2» (x—Din—x” 4
and jetting y=x- 1, :
n-1
(n ts 1}; Veni
E(X)=np :
Loin -—1-y)}

so that

E(X) = np. (5-5)

Using a similar approach, we can find

E(X(X-1)) = > x(x—1)n! pq’


Pa .

=n(n-1)p?>ies) “
(n—2): x-2 n-x

(n=)!
2)!
=n(n- 1p?’ ee
gun 2)!
=n(n-1)p*,

so that

V(X) = E(X?) - (E(X))’


= E(X(X — 1)) + E(X) - (E(X))’
= n(n — 1)p’ + np - (np)
= npq. (5-6)
110 Chapter5 Some Important Discrete Distributions

An easier approach to find the mean and variance is to consider X as a sum of n inde-
pendent Bernoulli random variables, each with mean p and variance pq, so that X= X, + X,
fine A en .

E(X)=pt+pt+-.-+p=np

and

V(X) =pq+pqt---+pq=npq.
The moment-generating function for the binomial distribution is

My(t) = (pe! + q)". (5-7)

Example 5-3
A production process represented schematically by Fig. 5-4 produces thousands of parts per day. On
the average, 1% of the parts are defective and this average does not vary with time. Every hour, a ran-
dom sample of 100 parts is selected from a conveyor and several characteristics are observed and
measured on each part; however, the inspector classifies the part as either good or defective. If we
consider the sampling as n = 100 Bernoulli trials with p = 0.01, the total number of defectives in the
sample, X, would have a binomial distribution
me -("”
Joor(oasy x=0,1,2,...,100,
xX

=0 otherwise.

SuppoSe the inspector has instructions to stop the process if the sample has more than two defectives.
Then, P(X > 2) = 1 — P(X < 2), and we may calculate
2
P(X <2) = d['P }oon*(oss"™
x=0

= (0.99)' +100(0.01)' (0.99)” + 4950(0.01)? (0.99)


= 0.92.

Thus, the probability of the inspector stopping the process is approximately 1 -- 0.92 = 0.08. The mean
number of defectives that would be found is E(X) = np = 100(0.01) = 1, and the variance is V(X) = npq
= 0.99.

5-3.2 The Cumulative Binomial Distribution


The cumulative binomial distribution ar the distribution function, F, is

F(x) = D( pe =pyae (5-8)


k=0\

Conveyor
Process Warehouse

aan }n= 100 Figure 5-4 A sampling situation with attrib-


ute measurement.
5-3 The Binomial Distribution 111

The function is readily calculated by such packages as Excel and Minitab. For example,
suppose that n = 10, p =0.6, and we are interested in calculating F(6) = Pr(X < 6). Then the
Excel function call BINOMDIST(6,10,0.5,TRUE) gives the result F(6) = 0.6177. In
Minitab, all we need do is go to Calc/Probability Distributions/Binomial, and click on
“Cumulative probability” to obtain the same result.

5-3.3 An Application of the Binomial Distribution


Another random variable, first noted in the law of large numbers, is frequently of interest.
It is the proportion of successes and is denoted by

p=xXM, (5-9)
where X has a binomial distribution with parameters n and p. The mean, variance, and
moment-generating function are

E(p) =~ E(X) = —np = p, (5-10)


n n
2 \2
v(i)=(- -v(x)=(4) npq = 22, (5-11)
n n n

m(r)=M,(+)=(pe" +4)" (5-12)


n

In order to evaluate, say, P(p < py), where py is some number between 0 and 1, we note that

Ze oe
P(PS po) = (= < Po)= P(X <npy).

Since np, is possibly not an integer,

Lnpo|
P(P S po) = P(X S npy) = 3 Oia (5-13)
x=0

where | | indicates the “greatest integer contained in” function.

‘Example 5-4.
From a flow of product on a conveyor belt between production operations J and J + 1, arandom sam-
ple of 200 units is taken every 2 hours (see Fig. 5-5). Past experience has indicated that if the unit is
not properly degreased, the painting operation will not be successful, and, furthermore, on the aver-
age 5% of the units are not properly degreased. The manufacturing manager has grown accustomed
to accepting the 5%, but he strongly feels that 6% is bad’performance and 7% is totally unacceptable.
He decides to plot the fraction defective in the samples, that is, p. If the process average stays at 5%,
he would know that E(p) = 0.05. Knowing enough about probability to understand that p will vary,
he asks the quality-control department to determine the P(p > 0.07 | p =0.05). This is done as follows:

Operation J Operation J+ 1 Operation J+ 2


degreasing painting packaging

n= 200 every two hours


Figure 5-5 Sequential production operations.
112 Chapter5 Some Important Discrete Distributions

P(p > 0.07|p = 0.05) = 1 — P(p $ 0.07|p = 0.05)


= | — P(X $< 200(0.07)|p = 0.05)

=f >| 1 }o.0s)'(095)
Bs 200
0.05)" (0.95)"
k
| * 900-k

= 1 -0.922 = 0.078.

Example 5-5
An industrial engineer is concerned about the excessive “avoidable delay” time that one machine
operator seems to have. The engineer considers two activities as “avoidable delay time” and “not
avoidable delay time.” She identifies a time-dependent variable as follows:

xX(o— avoidable delay,


=10) otherwise.

A particular realization of X(t) for 2 days (960 minutes) is shown in Fig. 5-6.
Rather than have a time study technician continuously analyze this operation, the engineer elects
to use “work sampling,” randomly selects n points on the 960-minute span, and estimates the fraction
of time the “avoidable delay” category exists. She lets X, = 1 if X(t) = 1 at the time of the ith obser-
vation and X; = 0 if X(t) = 0 at the time of the ith observation. The statistic

ee
Ding
n

is to be evaluated. Of course, Pis a random variable having a mean equal to p, variance equal to pq/n,
and a standard deviation equal to / pq/n. The procedure outlined is not necessarily the best way to
go about such a study, but it does illustrate one utilization of the random variable P.

In summary, analysts must be sure that the phenomenon they are studying may be rea-
sonably considered to be a series of Bernoulli trials in order to use the binomial distribution
to describe X, the number of successes in n trials. It is often useful to visualize the graphi-
cal presentation of the binomial distribution, as shown in Fig. 5-7. The values p(x) increase
to a point and then decrease. More precisely, p(x) > p(x — 1) for x < (n + 1)p, and p(x) <
p(x — 1) for x > (n+ 1)p. If (n + 1)p is an integer, say m, then p(m) = p(m — 1).

5-4 THE GEOMETRIC DISTRIBUTION


The geometric distribution is also related to a sequence of Bernoulli trials except that the
number of trials is not fixed, and, in fact, the random variable of interest, denoted X, is
defined to be the number of trials required to achieve the first success. The sample space

X(t)

0 480 min. 960 min. t


Figure 5-6 A realization of X(t), Example 5-5.
5-4 The Geometric Distribution 113

p(x)

0 1 2 3 n-2 n-1 n
Figure 5-7 The binomial distribution.

and range space for X are illustrated in Fig. 5-8. The range space for X is Ry = {1, 2, 3, ...},
and the distribution of X is given by.
DO=q sD, 3 = MN, Zovoone
=) otherwise. (5-14)

It is easy to verify that this is a probability distribution since

a I
Pa" =a =| |=
ue) k=0 —q
and
p(x) 20 for all x.

5-4.1 Mean and Variance of the Geometric Distribution


The mean and variance of the geometric distribution are easily found as follows:

or

(5-15)

Figure 5-8 Sample space and range space for X.


114 Chapter5 Some Important Discrete Distributions

or, after some algebra,

o° =q/p’. : (5-16)

The moment-generating function is


t

My(t)= + 5
e
: (5-17)

Example 5-6
A certain experiment is to be performed until a successful result is obtained. The trials are independ-
ent and the cost of performing the experiment is $25,000; however, if a failure results, it costs $5000
to “set up” for the next trial. The experimenter would like to determine the expected cost of the proj-
ect. If X is the number of trials required to obtain a successful experiment, then the cost function
would be

C(X) = $25,000X + $5000(X — 1)


= 30,000X — 5000.

Then

E[C(X)] = $30,000 - E(X) — E($5000)

= [20
000: 4— 5000.
p

If the probability of success on a single trial is, say, 0.25, then E[C(X)] = $30,000/0.25 - $5000 = $115,000. This may or may not be acceptable to the experimenter. It should also be recognized that it is possible to continue indefinitely without having a successful experiment. Suppose that the experimenter has a maximum of $500,000. He may wish to find the probability that the experimental work would cost more than this amount, that is,

P(C(X) > $500,000) = P($30,000X - $5000 > $500,000)
= P(X > 505,000/30,000)
= P(X > 16.833)
= 1 - P(X ≤ 16)
= 1 - Σ_{x=1}^{16} 0.25(0.75)^{x-1}
= 0.01.

The experimenter may not be at all willing to run the risk (probability 0.01) of spending the available
$500,000 without getting a successful run.
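A quick numerical check of Example 5-6, as a minimal Python sketch using only the figures given in the example:

```python
p = 0.25                      # probability of success on a single trial
exp_cost = 30000 / p - 5000   # E[C(X)] = $30,000 E(X) - $5000, with E(X) = 1/p
print(f"Expected cost: ${exp_cost:,.0f}")        # $115,000

# P(C(X) > $500,000) = P(X > 16) = 1 - sum of the geometric pmf for x = 1..16
q = 1 - p
prob = 1 - sum(p * q ** (x - 1) for x in range(1, 17))
print(f"P(cost > $500,000) = {prob:.3f}")        # about 0.01
```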

The geometric distribution decreases, that is, p(x) < p(x — 1) for x = 2, 3, .... This is
shown graphically in Fig. 5-9.
An interesting and useful property of the geometric distribution is that it has no mem-
ory, that is,

P(X > x + s | X > s) = P(X > x).   (5-18)


The geometric distribution is the only discrete distribution having this memoryless property.

Figure 5-9 The geometric distribution.

Example 5-7
Let X denote the number of tosses of a fair die until we observe a 6. Suppose we have already tossed
the die five times without seeing a 6. The probability that more than two additional tosses will be
required is

P(X > 7 | X > 5) = P(X > 2)
= 1 - P(X ≤ 2)
= 1 - [1/6 + (5/6)(1/6)]
= 25/36.

5-5 THE PASCAL DISTRIBUTION


The Pascal distribution also has its basis in Bernoulli trials. It is a logical extension of the
geometric distribution. In this case, the random variable X denotes the trial on which the rth
success occurs, where r is an integer. The probability mass function of X is
p(x) = \binom{x-1}{r-1} p^r q^{x-r},   x = r, r+1, r+2, ...,
     = 0,   otherwise.   (5-19)

The term p^r q^{x-r} arises from the probability associated with exactly one outcome in the sample space that has (x - r) F's (failures) and r S's (successes). In order for this outcome to occur, there must be r - 1 successes in the x - 1 repetitions before the last outcome, which is always a success. There are thus \binom{x-1}{r-1} arrangements satisfying this condition, and therefore the distribution is as shown in equation 5-19.
The development thus far has been for integer values of r. If we have arbitrary r > 0 and 0 < p < 1, the distribution of equation 5-19 is known as the negative binomial distribution.

5-5.1 Mean and Variance of the Pascal Distribution


If X has a Pascal distribution, as illustrated in Fig. 5-10, the mean, variance, and moment-
generating function are:
μ = r/p,   (5-20)

Figure 5-10 An example of the Pascal distribution.

σ² = rq/p²,   (5-21)


and
M_X(t) = [pe^t / (1 - qe^t)]^r.   (5-22)

Example 5-8
The president of a large corporation makes decisions by throwing darts at a board. The center section is marked "yes" and represents a success. The probability of his hitting a "yes" is 0.6, and this probability remains constant from throw to throw. The president continues to throw until he has three "hits." We denote X as the number of the trial on which he experiences the third hit. The mean is 3/0.6 = 5, meaning that on the average it will take five throws. The president's decision rule is simple. If he gets three hits on or before the fifth throw he decides in favor of the question. The probability that he will decide in favor is therefore

P(X ≤ 5) = p(3) + p(4) + p(5)
= \binom{2}{2}(0.6)³(0.4)⁰ + \binom{3}{2}(0.6)³(0.4)¹ + \binom{4}{2}(0.6)³(0.4)²
= 0.6826.
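This computation is easily verified numerically; a minimal Python sketch applying equation 5-19 to the values in Example 5-8:

```python
from math import comb

r, p = 3, 0.6

def pascal_pmf(x, r, p):
    # Probability that the r-th success occurs on trial x (equation 5-19).
    return comb(x - 1, r - 1) * p ** r * (1 - p) ** (x - r)

prob = sum(pascal_pmf(x, r, p) for x in range(3, 6))
print(f"P(X <= 5) = {prob:.4f}")   # 0.6826
```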

5-6 THE MULTINOMIAL DISTRIBUTION


An important and useful higher dimensional random variable has a distribution known as
the multinomial distribution. Assume an experiment ℰ with sample space 𝒮 is partitioned into k mutually exclusive events, say B_1, B_2, ..., B_k. We consider n independent repetitions of ℰ and let p_i = P(B_i) be constant from trial to trial, for i = 1, 2, ..., k. If k = 2, we have Bernoulli trials, as described earlier. The random vector [X_1, X_2, ..., X_k] has the following distribution, where X_i is the number of times B_i occurs in the n repetitions of ℰ, i = 1, 2, ..., k:

p(x_1, x_2, ..., x_k) = [n!/(x_1! x_2! ⋯ x_k!)] p_1^{x_1} p_2^{x_2} ⋯ p_k^{x_k},   (5-23)

for x_i = 0, 1, 2, ..., n, i = 1, 2, ..., k, and where Σ_{i=1}^{k} x_i = n.

It should be noted that X_1, X_2, ..., X_k are not independent random variables, since Σ_{i=1}^{k} X_i = n for any n repetitions.
It turns out that the mean and variance of X_i, a particular component, are

E(X_i) = np_i   (5-24)

and

V(X_i) = np_i(1 - p_i).   (5-25)

Example 5-9
Mechanical pencils are manufactured by a process involving a large amount of labor in the assembly
operations. This is highly repetitive work and incentive pay is involved. Final inspection has revealed
that 85% of the product is good, 10% is defective but may be reworked, and 5% is defective and must
be scrapped. These percentages remain constant over time. A random sample of 20 items is selected,
and if we let

X_1 = number of good items,
X_2 = number of defective but reworkable items,
X_3 = number of items to be scrapped,
then

p(x_1, x_2, x_3) = [20!/(x_1! x_2! x_3!)] (0.85)^{x_1} (0.10)^{x_2} (0.05)^{x_3}.

Suppose we want to evaluate this probability function for x_1 = 18, x_2 = 2, and x_3 = 0 (we must have x_1 + x_2 + x_3 = 20); then

p(18, 2, 0) = [20!/(18! 2! 0!)] (0.85)^{18} (0.10)² (0.05)⁰
            = 0.102.
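The same value can be checked directly from equation 5-23; a minimal Python sketch:

```python
from math import factorial

def multinomial_pmf(counts, probs):
    # Equation 5-23: [n!/(x1! x2! ... xk!)] p1^x1 p2^x2 ... pk^xk
    n = sum(counts)
    coef = factorial(n)
    for x in counts:
        coef //= factorial(x)
    prob = float(coef)
    for x, p in zip(counts, probs):
        prob *= p ** x
    return prob

print(round(multinomial_pmf([18, 2, 0], [0.85, 0.10, 0.05]), 3))  # 0.102
```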

5-7 THE HYPERGEOMETRIC DISTRIBUTION


In an earlier section an example presented the hypergeometric distribution. We will now formally develop this distribution and further illustrate its application. Suppose there is some finite population with N items. Some number D (D ≤ N) of the items fall into a class of interest. The particular class will, of course, depend on the situation under consideration. It might be defectives (vs. nondefectives) in the case of a production lot, or persons with blue eyes (vs. not blue eyed) in a classroom with N students. A random sample of size n is selected without replacement, and the random variable of interest, X, is the number of items in the sample that belong to the class of interest. The distribution of X is

p(x) = \binom{D}{x}\binom{N-D}{n-x} / \binom{N}{n},   x = 0, 1, 2, ..., min(n, D),
     = 0,   otherwise.   (5-26)

The hypergeometric's probability mass function is available in many popular software packages. For instance, suppose that N = 20, D = 8, n = 4, and x = 1. Then the Excel function

HYPGEOMDIST(x, n, D, N) = HYPGEOMDIST(1, 4, 8, 20) = 0.3633.
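Outside of Excel, equation 5-26 is nearly a one-liner in most languages; for instance, a minimal Python sketch:

```python
from math import comb

def hyper_pmf(x, n, D, N):
    # Equation 5-26: C(D, x) C(N - D, n - x) / C(N, n)
    return comb(D, x) * comb(N - D, n - x) / comb(N, n)

print(round(hyper_pmf(1, 4, 8, 20), 4))  # 0.3633, matching HYPGEOMDIST(1, 4, 8, 20)
```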

5-7.1 Mean and Variance of the Hypergeometric Distribution


The mean and variance of the hypergeometric distribution are

E(X) = n(D/N)   (5-27)

and

V(X) = n(D/N)(1 - D/N)[(N - n)/(N - 1)].   (5-28)

Example 5-10
In a receiving inspection department, lots of a pump shaft are periodically received. The lots contain 100 units and the following acceptance sampling plan is used. A random sample of 10 units is selected without replacement. The lot is accepted if the sample has no more than one defective. Suppose a lot is received that is p'(100) percent defective. What is the probability that it will be accepted?

P(accept lot) = P(X ≤ 1) = Σ_{x=0}^{1} \binom{100p'}{x}\binom{100(1-p')}{10-x} / \binom{100}{10}.

Obviously the probability of accepting the lot is a function of the lot quality, p'. If p' = 0.05, then

P(accept lot) = [\binom{5}{0}\binom{95}{10} + \binom{5}{1}\binom{95}{9}] / \binom{100}{10} = 0.923.

5-8 THE POISSON DISTRIBUTION


One of the most useful discrete distributions is the Poisson distribution. The Poisson dis-
tribution may be developed in two ways, and both are instructive insofar as they indicate the
circumstances where this random variable may be expected to apply in practice. The first
development involves the definition of a Poisson process. The second development shows
the Poisson distribution to be a limiting form of the binomial distribution.

5-8.1 Development from a Poisson Process


In defining the Poisson process, we initially consider a collection of arbitrary, time-oriented
occurrences, often called “arrivals” or “births” (see Fig. 5-11). The random variable of
interest, say X_t, is the number of arrivals that occur on the interval [0, t]. The range space is R_{X_t} = {0, 1, 2, ...}. In developing the distribution of X_t, it is necessary to make some assumptions, the plausibility of which is supported by considerable empirical evidence.

Figure 5-11 The time axis.

The first assumption is that the numbers of arrivals during nonoverlapping time intervals are independent random variables. Second, we make the assumption that there exists a positive quantity λ such that for any small time interval Δt, the following postulates are satisfied.

1. The probability that exactly one arrival will occur in an interval of width Δt is approximately λ · Δt. The approximation is in the sense that the probability is (λ · Δt) + o_1(Δt), where the function [o_1(Δt)/Δt] → 0 as Δt → 0.
2. The probability that exactly zero arrivals will occur in the interval is approximately 1 - (λ · Δt). Again this is in the sense that it is equal to 1 - (λ · Δt) + o_2(Δt) and [o_2(Δt)/Δt] → 0 as Δt → 0.
3. The probability that two or more arrivals occur in the interval is equal to a quantity o_3(Δt), where [o_3(Δt)/Δt] → 0 as Δt → 0.

The parameter λ is sometimes called the mean arrival rate or mean occurrence rate. In the development to follow, we let

p_x(t) = P(X_t = x),   x = 0, 1, 2, ....   (5-29)

We fix time at t and obtain

p_0(t + Δt) = [1 - λ · Δt] · p_0(t),

so that

[p_0(t + Δt) - p_0(t)] / Δt = -λ · p_0(t)

and

lim_{Δt→0} [p_0(t + Δt) - p_0(t)] / Δt = p_0'(t) = -λ · p_0(t).   (5-30)

For x > 0,

p_x(t + Δt) = λ · Δt · p_{x-1}(t) + [1 - λ · Δt] · p_x(t),

so that

[p_x(t + Δt) - p_x(t)] / Δt = λ · p_{x-1}(t) - λ · p_x(t)

and

lim_{Δt→0} [p_x(t + Δt) - p_x(t)] / Δt = p_x'(t) = λ · p_{x-1}(t) - λ · p_x(t).   (5-31)

Summarizing, we have a system of differential equations:

p_0'(t) = -λ p_0(t)   (5-32a)

and

p_x'(t) = λ p_{x-1}(t) - λ p_x(t),   x = 1, 2, ....   (5-32b)



The solution to these equations is

p_x(t) = e^{-λt}(λt)^x / x!,   x = 0, 1, 2, ....   (5-33)

Thus, for fixed t, we let c = λt and obtain the Poisson distribution as

p(x) = c^x e^{-c} / x!,   x = 0, 1, 2, ...,
     = 0,   otherwise.   (5-34)
Note that this distribution was developed as a consequence of certain assumptions; thus,
when the assumptions hold or approximately hold, the Poisson distribution is an appropriate
model. There are many real-world phenomena for which the Poisson model is appropriate.

5-8.2 Development of the Poisson Distribution from the Binomial


To show how the Poisson distribution may also be developed as a limiting form of the binomial distribution with c = np, we return to the binomial distribution

p(x) = [n!/(x!(n - x)!)] p^x (1 - p)^{n-x},   x = 0, 1, ..., n.

If we let np = c, so that p = c/n and 1 - p = 1 - c/n = (n - c)/n, and if we then replace terms involving p with the corresponding terms involving c, we obtain

p(x) = (c^x / x!)(1 - 1/n)(1 - 2/n) ⋯ (1 - (x - 1)/n)(1 - c/n)^{n-x}.   (5-35)

In letting n → ∞ and p → 0 in such a way that np = c remains fixed, the terms (1 - 1/n), (1 - 2/n), ..., (1 - (x - 1)/n) all approach 1, as does (1 - c/n)^{-x}. We know that (1 - c/n)^n → e^{-c} as n → ∞. Thus, the limiting form of equation 5-35 is p(x) = (c^x/x!) · e^{-c}, which is the Poisson distribution.

5-8.3 Mean and Variance of the Poisson Distribution

The mean of the Poisson distribution is c and the variance is also c, as seen below.

E(X) = Σ_{x=0}^{∞} x e^{-c} c^x / x! = c e^{-c} Σ_{x=1}^{∞} c^{x-1}/(x - 1)!
     = c e^{-c} [1 + c + c²/2! + ⋯]
     = c e^{-c} · e^c
     = c.   (5-36)

Similarly,

E(X²) = Σ_{x=0}^{∞} x² e^{-c} c^x / x! = c² + c,

so that

V(X) = E(X²) - [E(X)]²
     = c.   (5-37)

The moment-generating function is

M_X(t) = e^{c(e^t - 1)}.   (5-38)
The utility of this generating function is illustrated in the proof of the following theorem.

Theorem 5-1

If X_1, X_2, ..., X_k are independently distributed random variables, each having a Poisson distribution with parameter c_i, i = 1, 2, ..., k, and Y = X_1 + X_2 + ⋯ + X_k, then Y has a Poisson distribution with parameter

c = c_1 + c_2 + ⋯ + c_k.

Proof  The moment-generating function of X_i is

M_{X_i}(t) = e^{c_i(e^t - 1)},

and since M_Y(t) = M_{X_1}(t) · M_{X_2}(t) ⋯ M_{X_k}(t), then

M_Y(t) = e^{(c_1 + c_2 + ⋯ + c_k)(e^t - 1)},

which is recognized as the moment-generating function of a Poisson random variable with parameter c = c_1 + c_2 + ⋯ + c_k.
This reproductive property of the Poisson distribution is highly useful. Simply, it states
that sums of independent Poisson random variables are distributed according to the Poisson
distribution.
A brief tabulation for the Poisson distribution is given in Table I of the Appendix. Most
statistical software packages automatically calculate Poisson probabilities.

Example 5-11
Suppose a retailer determines that the number of orders for a certain home appliance in a particular
period has a Poisson distribution with parameter c. She would like to determine the stock level K for
the beginning of the period so that there will be a probability of at least 0.95 of supplying all cus-
tomers who order the appliance during the period. She does not wish to back-order merchandise or
resupply the warehouse during the period. If X represents the number of orders, the dealer wishes to
determine K such that
P(X ≤ K) ≥ 0.95

or

P(X > K) ≤ 0.05,

so that

Σ_{x=K+1}^{∞} c^x e^{-c}/x! ≤ 0.05.

The solution may be determined directly from tables of the Poisson distribution and is obviously a
function of c.
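The search for the smallest such K is easy to automate. A minimal Python sketch, with an assumed demand rate c = 10 purely for illustration (the example leaves c unspecified):

```python
from math import exp

def poisson_cdf(k, c):
    # P(X <= k) for a Poisson random variable with parameter c.
    term, total = exp(-c), exp(-c)
    for x in range(1, k + 1):
        term *= c / x
        total += term
    return total

c = 10.0   # assumed mean demand; the example leaves c as a parameter
K = 0
while poisson_cdf(K, c) < 0.95:
    K += 1
print(f"Smallest K with P(X <= K) >= 0.95: {K}")   # K = 15 when c = 10
```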

Example 5-12
The attainable sensitivity for electronic amplifiers and apparatus is limited by noise or spontaneous
current fluctuations. In vacuum tubes, one noise source is shot noise due to the random emission of electrons from the heated cathode. Assume that the potential difference between the anode and cathode is large enough to ensure that all electrons emitted by the cathode have high velocity, high enough to preclude space charge (accumulation of electrons between the cathode and anode). Under these conditions, and defining an arrival to be an emission of an electron from the cathode, Davenport and Root (1958) showed that the number of electrons, X, emitted from the cathode in time t has a Poisson distribution given by

p(x) = (λt)^x e^{-λt} / x!,   x = 0, 1, 2, ...,
     = 0,   otherwise.

The parameter λ is the mean rate of emission of electrons from the cathode.

5-9 SOME APPROXIMATIONS


It is often useful to approximate one distribution using another, particularly when the
approximation is easier to manipulate. The two approximations considered in this section
are as follows:
1. The binomial approximation to the hypergeometric distribution.
2. The Poisson approximation to the binomial distribution.
For the hypergeometric distribution, if the sampling fraction n/N is small, say less than
0.1, then the binomial distribution with parameters p = D/N and n provides a good approx-
imation. The smaller the ratio n/N, the better the approximation.

Example 5-13
A production lot of 200 units has eight defectives. A random sample of 10 units is selected, and we want to find the probability that the sample will contain exactly one defective. The true probability is

P(X = 1) = \binom{8}{1}\binom{192}{9} / \binom{200}{10} = 0.288.

Since n/N = 10/200 = 0.05 is small, we let p = D/N = 8/200 = 0.04 and use the binomial approximation

p(1) = \binom{10}{1}(0.04)¹(0.96)⁹ = 0.277.

In the case of the Poisson approximation to the binomial, we indicated earlier that for large
n and small p, the approximation is satisfactory. In utilizing this approximation we let c =
np. In general, p should be less than 0.1 in order to apply the approximation. The smaller p
and the larger n, the better the approximation.

Example 5-14
The probability that a particular rivet in the wing surface of a new aircraft is defective is 0.001. There
are 4000 rivets in the wing. What is the probability that not more than six defective rivets will be
installed?
P(X ≤ 6) = Σ_{x=0}^{6} \binom{4000}{x}(0.001)^x(0.999)^{4000-x}.

Using the Poisson approximation,

c = 4000(0.001) = 4

and

P(X ≤ 6) = Σ_{x=0}^{6} e^{-4} 4^x / x! = 0.889.
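The closeness of the approximation is easy to inspect directly; a minimal Python sketch comparing the exact binomial sum with the Poisson sum:

```python
from math import comb, exp, factorial

n, p = 4000, 0.001
c = n * p   # Poisson parameter, c = np = 4

exact = sum(comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(7))
approx = sum(exp(-c) * c ** x / factorial(x) for x in range(7))
print(f"exact binomial: {exact:.4f}, Poisson approximation: {approx:.4f}")
```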

5-10 GENERATION OF REALIZATIONS


Schemes exist for using random numbers, as is described in Section 3-6, to generate real-
izations of most common random variables.
With Bernoulli trials, we might first generate a value u_i as the ith realization of a uniform [0, 1] random variable U, where

f(u) = 1,   0 ≤ u ≤ 1,
     = 0,   otherwise,

and independence among the sequence U_i is maintained. Then if u_i ≤ p, we let X_i = 1, and if u_i > p, X_i = 0. Thus if Y = Σ_{i=1}^{n} X_i, Y will follow a binomial distribution with parameters n and p, and this entire process might be repeated to produce a series of values of Y, that is, realizations from the binomial distribution with parameters n and p.
Similarly, we could produce geometric variates by sequentially generating values u_i and counting the number of trials until u_i ≤ p. At the point this condition is met, the trial number is assigned to the random variable X, and the entire process is repeated to produce a series of realizations of a geometric random variable.
Also, a similar scheme may be used for Pascal random variable realizations, where we proceed testing u_i ≤ p until this condition has been satisfied r times, at which point the trial number is assigned to X, and once again, the entire process is repeated to obtain subsequent realizations.
Realizations from a Poisson distribution with parameter λt = c may be obtained by employing a technique based on the so-called acceptance-rejection method. The approach is to sequentially generate values u_i as described above until the product u_1 · u_2 ⋯ u_{k+1} < e^{-c} is obtained, at which point we assign X ← k, and once again, this process is repeated to obtain a sequence of realizations.
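These schemes translate almost line for line into code. The following is a minimal Python sketch (function names are ours, not from the text):

```python
import random
from math import exp

def bernoulli(p):
    # X_i = 1 if u_i <= p, else 0.
    return 1 if random.random() <= p else 0

def binomial(n, p):
    # Y = X_1 + ... + X_n follows a binomial distribution.
    return sum(bernoulli(p) for _ in range(n))

def geometric(p):
    # Count trials until the first u_i <= p.
    trials = 1
    while random.random() > p:
        trials += 1
    return trials

def pascal(r, p):
    # Trial number of the r-th success; equivalently a sum of r geometrics.
    return sum(geometric(p) for _ in range(r))

def poisson(c):
    # Multiply uniforms until the product u_1 u_2 ... u_{k+1} < e^{-c}; return k.
    k, prod = 0, random.random()
    while prod >= exp(-c):
        k += 1
        prod *= random.random()
    return k

print([binomial(8, 0.5) for _ in range(5)])
print([geometric(0.4) for _ in range(10)])
print([poisson(2.0) for _ in range(5)])
```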
See Chapter 19 for more details on the generation of discrete random variables.

5-11 SUMMARY
The distributions presented in this chapter have wide use in engineering, scientific, and
management applications. The selection of a specific discrete distribution will depend on
how well the assumptions underlying the distribution are met by the phenomenon to be
modeled. The distributions presented here were selected because of their wide applicability.
A summary of these distributions is presented in Table 5-1.

5-12 EXERCISES
5-1. An experiment consists of four independent Bernoulli trials with probability of success p on each trial. The random variable X is the number of successes. Enumerate the probability distribution of X.

5-2. Six independent space missions to the moon are planned. The estimated probability of success on each mission is 0.95. What is the probability that at least five of the planned missions will be successful?
Table 5-1  Summary of Discrete Distributions

Binomial. Parameters: n = 1, 2, ...; 0 ≤ p ≤ 1; q = 1 - p. Probability function: p(x) = \binom{n}{x} p^x q^{n-x}, x = 0, 1, ..., n; = 0 otherwise. Mean: np. Variance: npq. Moment-generating function: (pe^t + q)^n.

Geometric. Parameters: 0 ≤ p ≤ 1; q = 1 - p. Probability function: p(x) = q^{x-1}p, x = 1, 2, ...; = 0 otherwise. Mean: 1/p. Variance: q/p². Moment-generating function: pe^t/(1 - qe^t).

Pascal. Parameters: r = 1, 2, ...; 0 ≤ p ≤ 1; q = 1 - p. Probability function: p(x) = \binom{x-1}{r-1} p^r q^{x-r}, x = r, r+1, ...; = 0 otherwise. Mean: r/p. Variance: rq/p². Moment-generating function: [pe^t/(1 - qe^t)]^r.

Hypergeometric. Parameters: N = 1, 2, ...; D ≤ N; n ≤ N. Probability function: p(x) = \binom{D}{x}\binom{N-D}{n-x}/\binom{N}{n}, x = 0, 1, ..., min(n, D); = 0 otherwise. Mean: n(D/N). Variance: n(D/N)(1 - D/N)(N - n)/(N - 1). Moment-generating function: see Kendall and Stuart (1963).

Poisson. Parameter: c > 0. Probability function: p(x) = e^{-c}c^x/x!, x = 0, 1, 2, ...; = 0 otherwise. Mean: c. Variance: c. Moment-generating function: e^{c(e^t - 1)}.

5-3. The XYZ Company has planned sales presentations to a dozen important customers. The probability of receiving an order as a result of such a presentation is estimated to be 0.5. What is the probability of receiving four or more orders as the result of the meetings?

5-4. A stockbroker calls her 20 most important customers every morning. If the probability is one in three of making a transaction as the result of such a call, what are the chances of her handling 10 or more transactions?

5-5. A production process that manufactures transistors operates, on the average, at 2% fraction defective. Every 2 hours a random sample of size 50 is taken from the process. If the sample contains more than two defectives the process must be stopped. Determine the probability that the process will be stopped by the sampling scheme.

5-6. Find the mean and variance of the binomial distribution using the moment-generating function (see equation 5-7).

5-7. A production process manufacturing turn-indicator dash lights is known to produce lights that are 1% defective. Assume this value remains unchanged and assume a sample of 100 such lights is randomly selected. Find P(p̂ ≤ 0.03), where p̂ is the sample fraction defective.

5-8. Suppose a random sample of size 200 is taken from a process that is 0.07 fraction defective. What is the probability that p̂ will exceed the true fraction defective by one standard deviation? By two standard deviations? By three standard deviations?

5-9. Five cruise missiles have been built by an aerospace company. The probability of a successful firing is, on any one test, 0.95. Assuming independent firings, what is the probability that the first failure occurs on the fifth firing?

5-10. A real estate agent estimates his probability of selling a house to be 0.10. He has to see four clients today. If he is successful on the first three calls, what is the probability that his fourth call is unsuccessful?

5-11. Suppose five independent identical laboratory experiments are to be undertaken. Each experiment is extremely sensitive to environmental conditions, and there is only a probability p that it will be completed successfully. Plot, as a function of p, the probability that the fifth experiment is the first failure. Find mathematically the value of p that maximizes the probability of the fifth trial being the first unsuccessful experiment.

5-12. The XYZ Company plans to visit potential customers until a substantial sale is made. Each sales presentation costs $1000. It costs $4000 to travel to the next customer and set up a new presentation.
(a) What is the expected cost of making a sale if the probability of making a sale after any presentation is 0.10?
(b) If the expected profit from each sale is $15,000, should the trips be undertaken?
(c) If the budget for advertising is only $100,000, what is the probability that this sum will be spent without getting an order?

5-13. Find the mean and variance of the geometric distribution using the moment-generating function.

5-14. A submarine's probability of sinking an enemy ship with any one firing of its torpedoes is 0.8. If the firings are independent, determine the probability of a sinking within the first two firings. Within the first three.

5-15. In Atlanta the probability that a thunderstorm will occur on any day during the spring is 0.05. Assuming independence, what is the probability that the first thunderstorm occurs on April 25? Assume spring begins on March 21.

5-16. A potential customer enters an automobile dealership every hour. The probability of a salesperson concluding a transaction is 0.10. She is determined to keep working until she has sold three cars. What is the probability that she will have to work exactly 8 hours? More than 8 hours?

5-17. A personnel manager is interviewing potential employees in order to fill two jobs. The probability of an interviewee having the necessary qualifications and accepting an offer is 0.8. What is the probability that exactly four people must be interviewed? What is the probability that fewer than four people must be interviewed?

5-18. Show that the moment-generating function of the Pascal random variable is as given by equation 5-22. Use it to determine the mean and variance of the Pascal distribution.

5-19. The probability that an experiment has a successful outcome is 0.80. The experiment is to be repeated until five successful outcomes have occurred. What is the expected number of repetitions required? What is the variance?

5-20. A military commander wishes to destroy an enemy bridge. Each flight of planes he sends out has a probability of 0.8 of scoring a direct hit on the bridge. It takes four direct hits to completely destroy the bridge. If he can mount seven assaults before the

bridge becomes tactically unimportant, what is the probability that the bridge will be destroyed?

5-21. Three companies, X, Y, and Z, have probabilities of obtaining an order for a particular type of merchandise of 0.4, 0.3, and 0.3, respectively. Three orders are to be awarded independently. What is the probability that one company receives all the orders?

5-22. Four companies are interviewing five college students for positions after graduation. Assuming all five receive offers from each company and assuming the probabilities of the companies hiring a new employee are equal, what is the probability that one company gets all of the new employees? None of them?

5-23. We are interested in the weight of bags of feed. Specifically, we need to know if any of the four events below has occurred:

T_1 = (X ≤ 10),          p(T_1) = 0.2,
T_2 = (10 < X ≤ 11),     p(T_2) = 0.2,
T_3 = (11 < X ≤ 11.5),   p(T_3) = 0.2,
T_4 = (11.5 < X),        p(T_4) = 0.4.

If 10 bags are selected at random, what is the probability of four being less than or equal to 10 pounds, one being greater than 10 but less than or equal to 11 pounds, and two being greater than 11.5 pounds?

5-24. In Problem 5-23 what is the probability that all 10 bags weigh more than 11.5 pounds? What is the probability that five bags weigh more than 11.5 pounds and the remaining five weigh less than 10 pounds?

5-25. A lot of 25 color television tubes is subjected to an acceptance testing procedure. The procedure consists of drawing five tubes at random, without replacement, and testing them. If two or fewer tubes fail, the remaining ones are accepted. Otherwise the lot is rejected. Assume the lot contains four defective tubes.
(a) What is the exact probability of lot acceptance?
(b) What is the probability of lot acceptance computed from the binomial distribution with p = 4/25?

5-26. Suppose that in Exercise 5-25 the lot size had been 100. Would the binomial approximation be satisfactory in this case?

5-27. A purchaser receives small lots (N = 25) of a high-precision device. She wishes to reject the lot 95% of the time if it contains as many as seven defectives. Suppose she decides that the presence of one defective in the sample is sufficient to cause rejection. How large should her sample size be?

5-28. Show that the moment-generating function of the Poisson random variable is as given by equation 5-38.

5-29. The number of automobiles passing through a particular intersection per hour is estimated to be 25. Find the probability that fewer than 10 vehicles pass through during any 1-hour interval. Assume that the number of vehicles follows a Poisson distribution.

5-30. Calls arrive at a telephone switchboard such that the number of calls per hour follows a Poisson distribution with a mean of 10. The current equipment can handle up to 20 calls without becoming overloaded. What is the probability of such an overload occurring?

5-31. The number of red blood cells per square unit visible under a microscope follows a Poisson distribution with a mean of 4. Find the probability that more than five such blood cells are visible to the observer.

5-32. Let X_t be the number of vehicles passing through an intersection during a length of time t. The random variable X_t is Poisson distributed with a parameter λt. Suppose an automatic counter has been installed to count the number of passing vehicles. However, this counter is not functioning properly, and each passing vehicle has a probability p of not being counted. Let Y_t be the number of vehicles counted during t. Find the probability distribution of Y_t.

5-33. A large insurance company has discovered that 0.2% of the U.S. population is injured as a result of a particular type of accident. This company has 15,000 policyholders carrying coverage against such an accident. What is the probability that three or fewer claims will be filed against those policies next year? Five or more claims?

5-34. Maintenance crews arrive at a tool crib requesting a particular spare part according to a Poisson distribution with parameter λ = 2. Three of these spare parts are normally kept on hand. If more than three orders occur, the crews must journey a considerable distance to central stores.
(a) On a given day, what is the probability that such a journey must be made?
(b) What is the expected demand per day for spare parts?
(c) How many spare parts must be carried if the tool crib is to service all incoming crews 90% of the time?
(d) What is the expected number of crews serviced daily at the tool crib?
(e) What is the expected number of crews making the journey to central stores?

5-35. A loom experiences one yarn breakage approximately every 10 hours. A particular style of cloth is being produced that will take 25 hours on this loom. If three or more breaks are required to render the

product unsatisfactory, find the probability that this style of cloth is finished with acceptable quality.

5-36. The number of people boarding a bus at each stop follows a Poisson distribution with parameter λ. The bus company is surveying its usages for scheduling purposes and has installed an automatic counter on each bus. However, if more than 10 people board at any one stop, the counter cannot record the excess and merely registers 10. If X is the number of riders recorded, find the probability distribution of X.

5-37. A mathematics textbook has 200 pages on which typographical errors in the equations could occur. If there are in fact five errors randomly dispersed among these 200 pages, what is the probability that a random sample of 50 pages will contain at least one error? How large must the random sample be to assure that at least three errors will be found with 90% probability?

5-38. The probability of a vehicle having an accident at a particular intersection is 0.0001. Suppose that 10,000 vehicles per day travel through this intersection. What is the probability of no accidents occurring? What is the probability of two or more accidents?

5-39. If the probability of being involved in an auto accident is 0.01 during any year, what is the probability of having two or more accidents during any 10-year driving period?

5-40. Suppose that the number of accidents to employees working on high-explosive shells over a period of time (say 5 weeks) is taken to follow a Poisson distribution with parameter λ = 2.
(a) Find the probabilities of 1, 2, 3, 4, or 5 accidents.
(b) The Poisson distribution has been freely applied in the area of industrial accidents. However, it frequently provides a poor "fit" to actual historical data. Why might this be true? Hint: See Kendall and Stuart (1963), pp. 128-30.

5-41. Use either your favorite computer language or the random integers in Table XV in the Appendix (scaled by an appropriate negative power of 10 to get uniform [0, 1] random number realizations) to do the following:
(a) Produce five realizations of a binomial random variable with n = 8, p = 0.5.
(b) Produce ten realizations of a geometric distribution with p = 0.4.
(c) Produce five realizations of a Poisson random variable with c = 0.15.

5-42. If Y = X^{1/2} and X follows a geometric distribution with a mean of 6, use uniform [0, 1] random number realizations, and produce five realizations of Y.

5-43. With Exercise 5-42 above, use your computer to do the following:
(a) Produce 500 realizations of Y.
(b) Calculate ȳ = (y_1 + y_2 + ⋯ + y_500)/500, the mean from this sample.

5-44. Prove the memoryless property of the geometric distribution.
Chapter 6

Some Important
Continuous Distributions

6-1 INTRODUCTION
We will now study several important continuous probability distributions. They are the uni-
form, exponential, gamma, and Weibull distributions. In Chapter 7 the normal distribution,
and several other probability distributions closely related to it, will be presented. The nor-
mal distribution is perhaps the most important of all continuous distributions. The reason
for postponing its study is that the normal distribution is important enough to warrant a sep-
arate chapter.
It has been noted that the range space for a continuous random variable X consists of
an interval or a set of intervals. This was illustrated in an earlier chapter, and it was observed
that an idealization is involved. For example, if we are measuring the time to failure for an
electronic component or the time to process an order through an information system, the
measurement devices used are such that there are only a finite number of possible out-
comes; however, we will idealize and assume that time may take any value on some inter-
val. Once again we will simplify the notation where no ambiguity is introduced, and we let
f(x) = f_X(x) and F(x) = F_X(x).
6-2 THE UNIFORM DISTRIBUTION
The uniform density function is defined as

f(x) = 1/(β - α),   α ≤ x ≤ β,
     = 0,   otherwise,   (6-1)


where α and β are real constants with α < β. The density function is shown in Fig. 6-1.

Figure 6-1 A uniform density.

Since a uniformly distributed random variable has a probability density function that is

constant over some interval of definition, the constant must be the reciprocal of the length of the interval in order to satisfy the requirement that

∫_{-∞}^{∞} f(x) dx = 1.

A uniformly distributed random variable represents the continuous analog to equally likely outcomes in the sense that for any subinterval [a, b], where α ≤ a < b ≤ β, the probability P(a ≤ X ≤ b) depends only on the length b - a:

P(a ≤ X ≤ b) = ∫_a^b [1/(β - α)] dx = (b - a)/(β - α).

The statement that we choose a point at random on [α, β] simply means that the value chosen, say Y, is uniformly distributed on [α, β].

6-2.1 Mean and Variance of the Uniform Distribution


The mean and variance of the uniform distribution are

μ = E(X) = ∫_α^β [x/(β - α)] dx = (β + α)/2   (6-2)

(which is obvious by symmetry) and

σ² = V(X) = ∫_α^β [(x - μ)²/(β - α)] dx = (β - α)²/12.   (6-3)
The moment-generating function M_X(t) is found as follows:

M_X(t) = E(e^{tX}) = ∫_α^β [e^{tx}/(β - α)] dx = (e^{tβ} - e^{tα}) / [t(β - α)],   t ≠ 0.   (6-4)
For a uniformly distributed random variable, the distribution function F(x) = P(X ≤ x) is given by equation 6-5, and its graph is shown in Fig. 6-2.

F(x) = 0,   x < α,
     = (x - α)/(β - α),   α ≤ x < β,
     = 1,   x ≥ β.   (6-5)

Figure 6-2 Distribution function for the uniform random variable.

Example 6-1
A point is chosen at random on the interval [0, 10]. Suppose we wish to find the probability that the point lies between 5/3 and 23/3. The density of the random variable X is f(x) = 1/10, 0 ≤ x ≤ 10, and f(x) = 0 otherwise. Hence, P(5/3 ≤ X ≤ 23/3) = (23/3 - 5/3)/10 = 3/5.

Example 6-2
Numbers of the form NN.N are "rounded off" to the nearest integer. The round-off procedure is such that if the decimal part is less than 0.5, the round is "down" by simply dropping the decimal part; however, if the decimal part is greater than 0.5, the round is up; that is, the new number is ⌊NN.N⌋ + 1, where ⌊·⌋ is the "greatest integer contained in" function. If the decimal part is exactly 0.5, a coin is tossed to determine which way to round. The round-off error, X, is defined as the difference between the number before rounding and the number after rounding. These errors are commonly distributed according to the uniform distribution on the interval [-0.5, +0.5]. That is,

f(x) = 1,   -0.5 ≤ x ≤ +0.5,
     = 0,   otherwise.

Example 6-3.
One of the special features of many simulation languages is a simple automatic procedure for using the
uniform distribution. The user declares a mean and modifier (e.g., 500, 100). The compiler immediately
creates a routine to produce realizations of a random variable X uniformly distributed on [400, 600].
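A hypothetical sketch of this mean-and-modifier convention in Python (the actual simulation-language syntax varies by product, so this is only an illustration of the idea):

```python
import random

def uniform_mean_modifier(mean, modifier):
    # Uniform on [mean - modifier, mean + modifier], e.g. (500, 100) -> [400, 600].
    return random.uniform(mean - modifier, mean + modifier)

print([round(uniform_mean_modifier(500, 100), 1) for _ in range(3)])
```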

In the special case where α = 0, β = 1, the uniform variable is said to be uniform on [0, 1], and the symbol U is often used to describe this special variable. Using the results from equations 6-2 and 6-3, we note that E(U) = 1/2 and V(U) = 1/12. If U_1, U_2, ..., U_n is a sequence of such variables, where the variables are mutually independent, the values U_1, U_2, ..., U_n are called random numbers, and a realization u_1, u_2, ..., u_n is properly called a random number realization; however, in common usage, the term "random numbers" is often given to the realizations.

6-3 THE EXPONENTIAL DISTRIBUTION


The exponential distribution has density function
f(x) = λe^{-λx},   x ≥ 0,
     = 0,   otherwise,   (6-6)
where the parameter λ is a real, positive constant. A graph of the exponential density is
shown in Fig. 6-3.

Figure 6-3 The exponential density function.

6-3.1 The Relationship of the Exponential Distribution


to the Poisson Distribution
The exponential distribution is closely related to the Poisson distribution, and an explana-
tion of this relationship should help the reader develop an understanding of the kinds of sit-
uations for which the exponential density is appropriate.
In developing the Poisson distribution from the Poisson postulates and the Poisson
process, we fixed time at some value t, and we developed the distribution of the number of
occurrences in the interval [0, t]. We denoted this random variable X_t, and the distribution
was

p(x) = e^{-λt}(λt)^x / x!,   x = 0, 1, 2, ...,
     = 0,   otherwise.   (6-7)

Now consider p(0), which is the probability of no occurrences on [0, t]. This is given by

p(0) = e^{-λt}.   (6-8)

Recall that we originally fixed time at t. Another interpretation of p(0) = e^{-λt} is that this is the probability that the time to the first occurrence is greater than t. Considering this time as a random variable T, we note that

p(0) = P(T > t) = e^{-λt},   t ≥ 0.   (6-9)

If we now let time vary and consider the random variable T as the time to occurrence, then

F(t) = P(T ≤ t) = 1 - e^{-λt},   t ≥ 0.   (6-10)

And since f(t) = F'(t), we see that the density is

f(t) = λe^{-λt},   t ≥ 0,
     = 0,   otherwise.   (6-11)
This is the exponential density of equation 6-6. Thus, the relationship between the
exponential and Poisson distributions may be stated as follows: if the number of occur-
rences has a Poisson distribution as shown in equation 6-7, then the time between succes-
sive occurrences has an exponential distribution as shown in equation 6-11. For example, if
the number of orders for a certain item received per week has a Poisson distribution, then
the time between orders would have an exponential distribution. One variable is discrete
(the count) and the other (time) is continuous.
In order to verify that f is a density function, we note that f(x) ≥ 0 for all x and

∫_0^∞ λe^{-λx} dx = [-e^{-λx}]_0^∞ = 1.


6-3.2 Mean and Variance of the Exponential Distribution
The mean and variance of the exponential distribution are

E(X) = ∫_0^∞ x λe^{-λx} dx = [-xe^{-λx}]_0^∞ + ∫_0^∞ e^{-λx} dx = 1/λ   (6-12)

and

V(X) = ∫_0^∞ x² λe^{-λx} dx - (1/λ)²
     = [-x²e^{-λx}]_0^∞ + 2∫_0^∞ xe^{-λx} dx - (1/λ)² = 1/λ².   (6-13)



Figure 6-4 Distribution function for the exponential.

The standard deviation is 1/λ, and thus the mean and the standard deviation are equal.
The moment-generating function is

M_X(t) = (1 - t/λ)^{-1},   (6-14)

provided t < λ.
The cumulative distribution function F can be obtained by integrating equation 6-6 as
follows:
F(x) = 0,   x < 0,
     = ∫_0^x λe^{-λt} dt = 1 - e^{-λx},   x ≥ 0.   (6-15)
Figure 6-4 depicts the distribution function of equation 6-15.

Example 6-4
An electronic component is known to have a useful life represented by an exponential density with a failure rate of 10^{-5} failures per hour (i.e., λ = 10^{-5}). The mean time to failure, E(X), is thus 10^5 hours. Suppose we want to determine the fraction of such components that would fail before the mean life or expected life:

P(X ≤ 1/λ) = ∫_0^{1/λ} λe^{-λx} dx = [-e^{-λx}]_0^{1/λ} = 1 - e^{-1} = 0.63212.

This result holds for any value of λ greater than zero. In our example, 63.212% of the items would fail before 10^5 hours (see Fig. 6-5).

Figure 6-5 The mean of an exponential distribution.



Example 6-5
Suppose a designer is to make a decision between two manufacturing processes for the manufacture of a certain component. Process A costs C dollars per unit to manufacture a component. Process B costs k · C dollars per unit to manufacture a component, where k > 1. Components have an exponential time to failure density with a failure rate of 200^{-1} failures per hour for process A, while components from process B have a failure rate of 300^{-1} failures per hour. The mean lives are thus 200 hours and 300 hours, respectively, for the two processes. Because of a warranty clause, if a component lasts for fewer than 400 hours, the manufacturer must pay a penalty of K dollars. Let X be the time to failure of each component. Thus, the component costs are

C_A = C,       if X ≥ 400,
    = C + K,   if X < 400,

and

C_B = kC,       if X ≥ 400,
    = kC + K,   if X < 400.

The expected costs are


00 eo
E(C,)=(C+K) [ 2007177de+C [i200 161?
). c{-|
=(C+ Ky)21
=(C+ k)[1-e?]+Ce?]

=C+K(1-e7)
and

E(C,)= (ec+K)[™ 3007e"* de + Cf”300°1e 9a

=(kC+ k)l a et] + Kc|e*”


=kC+ K(1-e~*?),

Therefore, if k < 1 — K/C(e? —e~”), then E(C,,) > E(C,), and it is likely that the designer would select
process B.

6-3.3 Memoryless Property of the Exponential Distribution


The exponential distribution has an interesting and unique memoryless property for continuous variables; that is,

P(X > x + s | X > x) = P(X > x + s)/P(X > x) = e^{-λ(x+s)}/e^{-λx} = e^{-λs},

so that

P(X > x + s | X > x) = P(X > s).   (6-16)
For example, if a cathode ray tube has an exponential time to failure distribution and at time
x it is observed to be still functioning, then the remaining life has the same exponential fail-
ure distribution as the tube had at time zero.

6-4 THE GAMMA DISTRIBUTION


6-4.1 The Gamma Function

A function used in the definition of a gamma distribution is the gamma function, defined by

Γ(n) = ∫_0^∞ x^{n-1} e^{-x} dx,   n > 0.   (6-17)

An important recursive relationship that may easily be shown on integrating equation 6-17 by parts is

Γ(n) = (n - 1)Γ(n - 1).   (6-18)

If n is a positive integer, then

Γ(n) = (n - 1)!,   (6-19)

since Γ(1) = ∫_0^∞ e^{-x} dx = 1. Thus, the gamma function is a generalization of the factorial. The reader is asked in Exercise 6-17 to verify that

Γ(1/2) = ∫_0^∞ x^{-1/2} e^{-x} dx = √π.   (6-20)

6-4.2 Definition of the Gamma Distribution


With the use of the gamma function, we are now able to introduce the gamma probability density function as

f(x) = (λ/Γ(r))(λx)^{r-1} e^{-λx},   x > 0,
     = 0,   otherwise.   (6-21)

The parameters are r > 0 and λ > 0. The parameter r is usually called the shape parameter, and λ is called the scale parameter. Figure 6-6 shows several gamma distributions, for λ = 1 and various r. It should be noted that f(x) ≥ 0 for all x, and

∫_{-∞}^{∞} f(x) dx = ∫_0^∞ (λ/Γ(r))(λx)^{r-1} e^{-λx} dx = (1/Γ(r)) ∫_0^∞ y^{r-1} e^{-y} dy = 1,

on substituting y = λx.

The cumulative distribution function (CDF) of the gamma distribution is analytically


intractable but is readily obtained from software packages such as Excel and Minitab. In particular, the Excel function GAMMADIST(x, r, 1/λ, TRUE) gives the CDF F(x). For example, GAMMADIST(15, 5.5, 4.2, TRUE) returns a value of F(15) = 0.2126 for the case r = 5.5, λ = 1/4.2 = 0.238. The Excel function GAMMAINV gives the inverse of the CDF.

Figure 6-6 Gamma distribution for λ = 1.
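Readers working outside Excel can obtain the same value from other packages; for instance, assuming Python with SciPy is available, a minimal sketch:

```python
from scipy.stats import gamma

# Shape r = 5.5; rate lambda = 1/4.2, so SciPy's scale = 1/lambda = 4.2,
# matching the third argument of GAMMADIST.
print(round(gamma.cdf(15, a=5.5, scale=4.2), 4))   # 0.2126
```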

6-4.3 Relationship Between the Gamma Distribution


and the Exponential Distribution
There is a close relationship between the exponential distribution and the gamma distribution. Namely, if r = 1 the gamma distribution reduces to the exponential distribution. This follows from the general result that if the random variable X is the sum of r independent, exponentially distributed random variables, each with parameter λ, then X has a gamma density with parameters r and λ. That is to say, if

X = X_1 + X_2 + ⋯ + X_r,   (6-22)

where X_j has probability density function

f(x_j) = λe^{-λx_j},   x_j ≥ 0,
       = 0,   otherwise,

and where the X_j are mutually independent, then X has the density given in equation 6-21. In many applications of the gamma distribution that we will consider, r will be a positive integer, and we may use this knowledge to good advantage in developing the distribution function. Some authors refer to the special case in which r is a positive integer as the Erlang distribution.

6-4.4 Mean and Variance of the Gamma Distribution


We may show that the mean and variance of the gamma distribution are

E(X) = r/λ   (6-23)

and

V(X) = r/λ².   (6-24)


Equations 6-23 and 6-24 represent the mean and variance regardless of whether or not r is
an integer; however, when r is an integer and the interpretation given in equation 6-22 is
made, it is obvious that

E(X) = Σ_{j=1}^{r} E(X_j) = r(1/λ) = r/λ

and

V(X) = Σ_{j=1}^{r} V(X_j) = r(1/λ²) = r/λ²

from a direct application of the expected value and variance operators to the sum of inde-
pendent random variables.
The moment-generating function for the gamma distribution is
M_X(t) = (1 - t/λ)^{-r}.   (6-25)

Recalling that the moment-generating function for the exponential distribution was [1 - (t/λ)]^{-1}, this result is expected, since

M_X(t) = ∏_{j=1}^{r} M_{X_j}(t) = (1 - t/λ)^{-r}.   (6-26)
The distribution function, F, is

F(x) = ∫_0^x (λ/Γ(r))(λt)^{r-1} e^{-λt} dt,   x ≥ 0,
     = 0,   x < 0.   (6-27)

If r is a positive integer, then equation 6-27 may be integrated by parts, giving

F(x) = 1 - Σ_{k=0}^{r-1} e^{-λx}(λx)^k / k!,   x > 0,   (6-28)

which is the sum of Poisson terms with mean λx. Thus, tables of the cumulative Poisson may be used to evaluate the distribution function of the gamma.

Example 6-6
Consider the standby redundant system shown in Fig. 6-7: unit 1 is initially on line, with units 2 and 3 on standby. When unit 1 fails, the decision switch (DS) switches unit 2 on until it fails, and then unit 3 is switched on. The decision switch is assumed to be perfect, so that the system life X may be represented as the sum of the subsystem lives, X = X_1 + X_2 + X_3. If the subsystem lives are independent of one another, and if the subsystems each have a life X_j, j = 1, 2, 3, having density g(x_j) = (1/100)e^{-x_j/100}, x_j ≥ 0, then X will have a gamma density with r = 3 and λ = 0.01. That is,

f(x) = (0.01/2!)(0.01x)² e^{-0.01x},   x > 0,
     = 0,   otherwise.

The probability that the system will operate at least x hours is denoted R(x) and is called the reliability function. Here,

R(x) = 1 - F(x) = Σ_{k=0}^{2} e^{-0.01x}(0.01x)^k / k!
     = e^{-0.01x}[1 + 0.01x + (0.01x)²/2].

Figure 6-7 A standby redundant system.



Example 6-7
For a gamma distribution with λ = 1/2 and r = v/2, where v is a positive integer, the chi-square distribution with v degrees of freedom results:

f(x) = [1/(2^{v/2} Γ(v/2))] x^{(v/2)-1} e^{-x/2},   x > 0,
     = 0,   otherwise.

This distribution will be discussed further in Chapter 8.

6-5 THE WEIBULL DISTRIBUTION


The Weibull distribution has been widely applied to many random phenomena. The principal utility of the Weibull distribution is that it affords an excellent approximation to the probability law of many random variables. One important area of application has been as a model for time to failure in electrical and mechanical components and systems. This is discussed in Chapter 17. The density function is

f(x) = (β/δ)((x - γ)/δ)^{β-1} exp[-((x - γ)/δ)^β],   x ≥ γ,
     = 0,   otherwise.   (6-29)

Its parameters are γ (-∞ < γ < ∞), the location parameter; δ > 0, the scale parameter; and β > 0, the shape parameter. By appropriate selection of these parameters, this density function will closely approximate many observational phenomena.
Figure 6-8 shows some Weibull densities for γ = 0, δ = 1, and β = 1, 2, 3, 4. Note that when γ = 0 and β = 1, the Weibull distribution reduces to an exponential density with λ = 1/δ. Although the exponential distribution is a special case of both the gamma and Weibull distributions, the gamma and Weibull in general are noninterchangeable.

6-5.1 Mean and Variance of the Weibull Distribution


The mean and variance of the Weibull distribution can be shown to be

E(X) = γ + δ Γ(1 + 1/β)   (6-30)
Figure 6-8 Weibull densities for γ = 0, δ = 1, and β = 1, 2, 3, 4.

and

V(X) = δ²[Γ(1 + 2/β) - (Γ(1 + 1/β))²].   (6-31)

The distribution function has the relatively simple form

F(x) = 1 - exp[-((x - γ)/δ)^β],   x ≥ γ.   (6-32)

The Weibull CDF F(x) is conveniently provided by software packages such as Excel and Minitab. For the case γ = 0, the Excel function WEIBULL(x, β, δ, TRUE) returns F(x).

Example 6-8
The time-to-failure distribution for electronic subassemblies is known to have a Weibull density with γ = 0, β = 1/2, and δ = 100. The fraction expected to survive to, say, 400 hours is thus

1 - F(400) = e^{-(400/100)^{1/2}} = e^{-2} = 0.1353.

The same result could have been obtained in Excel via the function call 1 - WEIBULL(400, 1/2, 100, TRUE).
The mean time to failure is

E(X) = 0 + 100 Γ(3) = 200 hours.
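A quick check of this example using equations 6-30 and 6-32 directly (a minimal Python sketch):

```python
from math import exp, gamma

beta, delta, gamma_loc = 0.5, 100.0, 0.0   # shape, scale, location from the example

# Survival probability at 400 hours, from equation 6-32.
surv = exp(-(((400 - gamma_loc) / delta) ** beta))
print(f"P(X > 400) = {surv:.4f}")          # 0.1353

# Mean time to failure, from equation 6-30: gamma(1 + 1/beta) = gamma(3) = 2.
mttf = gamma_loc + delta * gamma(1 + 1 / beta)
print(f"E(X) = {mttf:.0f} hours")          # 200
```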

Example 6-9
Berrettoni (1964) presented a number of applications of the Weibull distribution. The following are examples of natural processes having a probability law closely approximated by the Weibull distribution. The random variable is denoted X in the examples.
1. Corrosion resistance of magnesium alloy plates.
   X: Corrosion weight loss of 10^{-2} mg/(cm²)(day) when magnesium alloy plates are immersed in an inhibited aqueous 20% solution of MgBr₂.
2. Return goods classified according to number of weeks after shipment.
   X: Length of period (10^{-1} weeks) until a customer returns the defective product after shipment.
3. Number of downtimes per shift.
   X: Number of downtimes per shift (times 10^{-1}) occurring in a continuous automatic and complicated assembly line.
4. Leakage failure in dry-cell batteries.
   X: Age (years) when leakage starts.
5. Reliability of capacitors.
   X: Life (hours) of 3.3-μF, 50-V, solid tantalum capacitors operating at an ambient temperature of 125°C, where the rated catalogue voltage is 33 V.

6-6 GENERATION OF REALIZATIONS


Suppose for now that U_1, U_2, ... are independent uniform [0, 1] random variables. We will
show how to use these uniforms to generate other random variables.

If we desire to produce realizations of a uniform random variable on [α, β], this is simply accomplished using

x_i = α + u_i(β - α),   i = 1, 2, ....   (6-33)

If we seek realizations of an exponential random variable with parameter λ, the inverse transform method yields

x_i = -(1/λ) ln u_i,   i = 1, 2, ....   (6-34)

Similarly, using the same method, realizations of a Weibull random variable with parameters γ, β, δ are obtained using

x_i = γ + δ(-ln u_i)^{1/β},   i = 1, 2, ....   (6-35)
The generation of gamma variate realizations usually employs a technique known as the acceptance-rejection method, and a variety of these methods have been used. If we wish to produce realizations from a gamma variable with parameters r > 1 and λ > 0, one approach, suggested by Cheng (1977), is as follows:

Step 1. Let a = (2r - 1)^{-1/2} and b = 2r - ln 4 + 1/a.
Step 2. Generate u_1, u_2 as uniform [0, 1] random number realizations.
Step 3. Let y = r[u_1/(1 - u_1)]^a.
Step 4a. If y > b - ln(u_1²u_2), reject y and return to Step 2.
Step 4b. If y ≤ b - ln(u_1²u_2), assign x ← y/λ.
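Equations 6-33 through 6-35 are one-liners in code. The sketch below implements them in Python; for the gamma case it uses the sum-of-exponentials property of Section 6-4.3 (valid for integer r) rather than the acceptance-rejection steps above, a deliberate simplification on our part:

```python
import random
from math import log

def uniform_ab(alpha, beta):
    # Equation 6-33.
    return alpha + random.random() * (beta - alpha)

def exponential(lam):
    # Equation 6-34; using 1 - u avoids log(0), since random() is in [0, 1).
    return -log(1.0 - random.random()) / lam

def weibull(gamma_loc, beta, delta):
    # Equation 6-35.
    return gamma_loc + delta * (-log(1.0 - random.random())) ** (1.0 / beta)

def gamma_integer_r(r, lam):
    # For integer r, a gamma variate is the sum of r independent exponentials.
    return sum(exponential(lam) for _ in range(r))

print(round(uniform_ab(10, 20), 2), round(exponential(2.0), 4),
      round(weibull(0, 0.5, 100), 1), round(gamma_integer_r(2, 4), 4))
```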

For more details on these and other random-variate generation techniques, see
Chapter 19.

6-7 SUMMARY
This chapter has presented four widely used density functions for continuous random vari-
ables. The uniform, exponential, gamma, and Weibull distributions were presented along
with underlying assumptions and example applications. Table 6-1 presents a summary of
these distributions.

6-8 EXERCISES
6-1. A point is chosen at random on the line segment [0, 4]. What is the probability that it lies between 1/4 and 1 3/4? Between 2 1/4 and 3 1/2?

6-2. The opening price of a particular stock is uniformly distributed on the interval [35 3/8, 44 7/8]. What is the probability that, on any given day, the opening price is less than 40? Between 40 and 42?

6-3. The random variable X is uniformly distributed on the interval [0, 2]. Find the distribution of the random variable Y = 5 + 2X.

6-4. A real estate broker charges a fixed fee of $50 plus a 6% commission on the landowner's profit. If this profit is uniformly distributed between $0 and $2000, find the probability distribution of the broker's total fees.

6-5. Use the moment-generating function for the uniform density (as given by equation 6-4) to generate the mean and variance.

6-6. Let X be uniformly distributed and symmetric about zero with variance 1. Find the appropriate values for α and β.

6-7. Show how the uniform density function can be used to generate variates from the empirical probability distribution described below:

y    p(y)
1    0.3
2    0.2
3    0.4
4    0.1

Hint: Apply the inverse transform method.
Table 6-1  Summary of Continuous Distributions

Uniform. Parameters: α and β, with α < β. Density: f(x) = 1/(β - α), α ≤ x ≤ β; = 0 otherwise. Mean: (α + β)/2. Variance: (β - α)²/12. Moment-generating function: (e^{tβ} - e^{tα})/[t(β - α)].

Exponential. Parameter: λ > 0. Density: f(x) = λe^{-λx}, x ≥ 0; = 0 otherwise. Mean: 1/λ. Variance: 1/λ². Moment-generating function: (1 - t/λ)^{-1}.

Gamma. Parameters: r > 0, λ > 0. Density: f(x) = (λ/Γ(r))(λx)^{r-1}e^{-λx}, x > 0; = 0 otherwise. Mean: r/λ. Variance: r/λ². Moment-generating function: (1 - t/λ)^{-r}.

Weibull. Parameters: γ, δ > 0, β > 0. Density: f(x) = (β/δ)((x - γ)/δ)^{β-1}exp[-((x - γ)/δ)^β], x ≥ γ; = 0 otherwise. Mean: γ + δΓ(1 + 1/β). Variance: δ²[Γ(1 + 2/β) - (Γ(1 + 1/β))²].

6-8. The random variable X is uniformly distributed over the interval [0, 4]. What is the probability that the roots of y² + 4Xy + X + 1 = 0 are real?

6-9. Verify that the moment-generating function of the exponential distribution is as given by equation 6-14. Use it to generate the mean and variance.

6-10. The engine and drive train of a new car is guaranteed for 1 year. The mean life of an engine and drive train is estimated to be 3 years, and the time to failure has an exponential density. The realized profit on a new car is $1000. Including costs of parts and labor, the dealer must pay $250 to repair each failure. What is the expected profit per car?

6-11. For the data in Exercise 6-10, what percentage of cars will experience failure in the engine and drive train during the first 6 months of use?

6-12. Let the length of time a machine will operate be an exponentially distributed random variable with probability density function f(t) = θe^{-θt}, t > 0. Suppose an operator for this machine must be hired for a predetermined and fixed length of time, say Y. She is paid d dollars per time period during this interval. The net profit from operating this machine, exclusive of labor costs, is r dollars per time period that it is operating. Find the value of Y that maximizes the expected total profit obtained.

6-13. The time to failure of a television tube is estimated to be exponentially distributed with a mean of 3 years. A company offers insurance on these tubes for the first year of usage. On what percentage of policies will they have to pay a claim?

6-14. Is there an exponential density that satisfies the following condition?

P{X ≤ 2} = (3/4) P{X ≤ 3}

If so, find the value of λ.

6-15. Two manufacturing processes are under consideration. The per-unit cost for process I is C, while for process II it is 3C. Products from both processes have exponential time-to-failure densities with mean rates of 25^{-1} failures per hour and 35^{-1} failures per hour from I and II, respectively. If a product fails before 15 hours it must be replaced at a cost of Z dollars. Which process would you recommend?

6-16. A transistor has an exponential time-to-failure distribution with a mean time to failure of 20,000 hours. The transistor has already lasted 20,000 hours in a particular application. What is the probability that the transistor fails by 30,000 hours?

6-17. Show that Γ(1/2) = √π.

6-18. Prove the gamma function properties given by equations 6-18 and 6-19.

6-19. A ferry boat will take its customers across a river when 10 cars are aboard. Experience shows that cars arrive at the ferry boat independently and at a mean rate of seven per hour. Find the probability that the time between consecutive trips will be at least 1 hour.

6-20. A box of candy contains 24 bars. The time between demands for these candy bars is exponentially distributed with a mean of 10 minutes. What is the probability that a box of candy bars opened at 8:00 A.M. will be empty by noon?

6-21. Use the moment-generating function of the gamma distribution (as given by equation 6-25) to find the mean and variance.

6-22. The life of an electronic system is Y = X_1 + X_2 + X_3 + X_4, the sum of the subsystem component lives. The subsystems are independent, each having exponential failure densities with a mean time between failures of 4 hours. What is the probability that the system will operate for at least 24 hours?

6-23. The replenishment time for a certain product is known to be gamma distributed with a mean of 40 and a variance of 400. Find the probability that an order is received within the first 20 days after it is ordered. Within the first 60 days.

6-24. Suppose a gamma distributed random variable is defined over the interval u ≤ x < ∞ with density function

f(x) = (λ/Γ(r))[λ(x - u)]^{r-1} e^{-λ(x-u)},   x ≥ u,   λ > 0, r > 0,
     = 0,   otherwise.

Find the mean of this three-parameter gamma distribution.

6-25. The beta probability distribution is defined by

f(x) = [Γ(λ + r)/(Γ(λ)Γ(r))] x^{λ-1}(1 - x)^{r-1},   0 ≤ x ≤ 1,   λ > 0, r > 0,
     = 0,   otherwise.

(a) Graph the distribution for λ > 1, r > 1.
(b) Graph the distribution for λ < 1, r < 1.
(c) Graph the distribution for λ < 1, r ≥ 1.
(d) Graph the distribution for λ ≥ 1, r < 1.
(e) Graph the distribution for λ = r.
process would you recommend?

6-26. Show that when λ = r = 1 the beta distribution reduces to the uniform distribution.

6-27. Show that when λ = 2, r = 1 or λ = 1, r = 2 the beta distribution reduces to a triangular probability distribution. Graph the density function.

6-28. Show that if λ = r = 2 the beta distribution reduces to a parabolic probability distribution. Graph the density function.

6-29. Find the mean and variance of the beta distribution.

6-30. Find the mean and variance of the Weibull distribution.

6-31. The diameter of steel shafts is Weibull distributed with parameters γ = 1.0 inches, β = 2, and δ = 0.5. Find the probability that a randomly selected shaft will not exceed 1.5 inches in diameter.

6-32. The time to failure of a certain transistor is known to be Weibull distributed with parameters γ = 0, β = 1/2, and δ = 400. Find the fraction expected to survive 600 hours.

6-33. The time to leakage failure in a certain type of dry-cell battery is expected to have a Weibull distribution with parameters γ = 0, β = 1/2, and δ = 400. What is the probability that a battery will survive beyond 800 hours of use?

6-34. Graph the Weibull distribution with γ = 0, δ = 1, and β = 1, 2, 3, and 4.

6-35. The time to failure density for a small computer system has a Weibull density with γ = 0, β = 1/2, and δ = 200.
(a) What fraction of these units will survive to 1000 hours?
(b) What is the mean time to failure?

6-36. A manufacturer of a commercial television monitor guarantees the picture tube for 1 year (8760 hours). The monitors are used in airport terminals for flight schedules, and they are in continuous use with power on. The mean life of the tubes is 20,000 hours, and they follow an exponential time-to-failure density. It costs the manufacturer $300 to make, sell, and deliver a monitor that will be sold for $400. It costs $150 to replace a failed tube, including materials and labor. The manufacturer has no replacement obligation beyond the first replacement. What is the manufacturer's expected profit?

6-37. The lead time for orders of diodes from a certain manufacturer is known to have a gamma distribution with a mean of 20 days and a standard deviation of 10 days. Determine the probability of receiving an order within 15 days of the placement date.

6-38. Use random numbers generated from your favorite computer language, or from scaling the random integers in Table XV of the Appendix by multiplying by the appropriate negative power of 10, and do the following:
(a) Produce 10 realizations of a variable that is uniform on [10, 20].
(b) Produce five realizations of an exponential random variable with a parameter of λ = 2 × 10^{-5}.
(c) Produce five realizations of a gamma variable with r = 2 and λ = 4.
(d) Produce 10 realizations of a Weibull variable with γ = 0, β = 1, δ = 1000.

6-39. Use the random number generation schemes suggested in Exercise 6-38, and do the following:
(a) Produce 10 realizations of Y = 2X^{1/2}, where X follows an exponential distribution with a mean of 10.
(b) Produce 10 realizations of Y = X_1/X_2, where X_1 is gamma with r = 2, λ = 4 and X_2 is uniform on [0, 1].
Chapter 7

The Normal Distribution

7-1 INTRODUCTION
In this chapter we consider the normal distribution. This distribution is very important in
both the theory and application of statistics. We also discuss the lognormal and bivariate
normal distributions.
The normal distribution was first studied in the eighteenth century, when the patterns
in errors of measurement were observed to follow a symmetrical, bell-shaped distribution.
It was first presented in mathematical form in 1733 by DeMoivre, who derived it as a lim-
iting form of the binomial distribution. The distribution was also known to Laplace no later
than 1775. Through historical error, it has been attributed to Gauss, whose first published
reference to it appeared in 1809, and the term Gaussian distribution is frequently employed.
Various attempts were made during the eighteenth and nineteenth centuries to establish this
distribution as the underlying probability law for all continuous random variables; thus, the
name normal came to be applied.

7-2 THE NORMAL DISTRIBUTION


The normal distribution is in many respects the cornerstone of statistics. A random variable X is said to have a normal distribution with mean μ (−∞ < μ < ∞) and variance σ² > 0 if it has the density function

f(x) = (1/(σ√(2π))) e^{−(1/2)[(x−μ)/σ]²},   −∞ < x < ∞.   (7-1)

The distribution is illustrated graphically in Fig. 7-1. The normal distribution is used so extensively that the shorthand notation X ~ N(μ, σ²) is often employed to indicate that the random variable X is normally distributed with mean μ and variance σ².

7-2.1 Properties of the Normal Distribution


The normal distribution has several important properties.

1. ∫_{−∞}^{∞} f(x) dx = 1, as required of all density functions.
2. f(x) ≥ 0 for all x.   (7-2)
3. lim_{x→∞} f(x) = 0 and lim_{x→−∞} f(x) = 0.
4. f(μ + x) = f(μ − x). The density is symmetric about μ.
5. The maximum value of f occurs at x = μ.
6. The points of inflection of f are at x = μ ± σ.


Figure 7-1 The normal distribution.

Property 1 may be demonstrated as follows. Let y = (x − μ)/σ in equation 7-1 and denote the integral by I. That is,

I = ∫_{−∞}^{∞} (1/√(2π)) e^{−(1/2)y²} dy.

Our proof that ∫_{−∞}^{∞} f(x) dx = 1 will consist of showing that I² = 1 and then inferring that I = 1, since f must be everywhere positive. Defining a second normally distributed variable, Z, we have

I² = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (1/(2π)) e^{−(1/2)(y² + z²)} dy dz.

On changing to polar coordinates with the transformation of variables y = r sin θ and z = r cos θ, the integral becomes

I² = (1/(2π)) ∫_{0}^{2π} ∫_{0}^{∞} r e^{−r²/2} dr dθ = 1,

completing the proof.

7-2.2 Mean and Variance of the Normal Distribution


The mean of the normal distribution may be determined easily. Since

E(X) = ∫_{−∞}^{∞} x (1/(σ√(2π))) e^{−(1/2)[(x−μ)/σ]²} dx,

and if we let z = (x − μ)/σ, we obtain

E(X) = ∫_{−∞}^{∞} (μ + σz) (1/√(2π)) e^{−z²/2} dz
     = μ ∫_{−∞}^{∞} (1/√(2π)) e^{−z²/2} dz + σ ∫_{−∞}^{∞} (1/√(2π)) z e^{−z²/2} dz.

Since the integrand of the first integral is that of a normal density with μ = 0 and σ² = 1, the value of the first integral is one. The second integral has value zero, that is,

∫_{−∞}^{∞} (1/√(2π)) z e^{−z²/2} dz = [−(1/√(2π)) e^{−z²/2}]_{−∞}^{∞} = 0,

and thus

E(X) = μ[1] + σ[0] = μ.   (7-3)
In retrospect, this result makes sense via a symmetry argument. To find the variance we must evaluate

V(X) = E[(X − μ)²] = ∫_{−∞}^{∞} (x − μ)² (1/(σ√(2π))) e^{−(1/2)[(x−μ)/σ]²} dx,

and letting z = (x − μ)/σ, we obtain

V(X) = ∫_{−∞}^{∞} σ²z² (1/√(2π)) e^{−z²/2} dz
     = σ² { [−z (1/√(2π)) e^{−z²/2}]_{−∞}^{∞} + ∫_{−∞}^{∞} (1/√(2π)) e^{−z²/2} dz }
     = σ²[0 + 1],

where the bracketed step follows from integration by parts, so that

V(X) = σ².   (7-4)

In summary, the mean and variance of the normal density given in equation 7-1 are μ and σ², respectively.
The moment-generating function for the normal distribution can be shown to be

M_X(t) = exp(tμ + t²σ²/2).   (7-5)

For the development of equation 7-5, see Exercise 7-10.

7-2.3 The Normal Cumulative Distribution Function


The distribution function F is

F(x) = P(X ≤ x) = ∫_{−∞}^{x} (1/(σ√(2π))) e^{−(1/2)[(u−μ)/σ]²} du.   (7-6)

It is impossible to evaluate this integral without resorting to numerical methods, and even then the evaluation would have to be accomplished for each pair (μ, σ²). However, a simple transformation of variables, z = (x − μ)/σ, allows the evaluation to be independent of μ and σ. That is,

F(x) = P(X ≤ x) = P(Z ≤ (x − μ)/σ) = ∫_{−∞}^{(x−μ)/σ} (1/√(2π)) e^{−z²/2} dz = Φ((x − μ)/σ).   (7-7)

7-2.4 The Standard Normal Distribution

The probability density function appearing in equation 7-7,

φ(z) = (1/√(2π)) e^{−z²/2},   −∞ < z < ∞,

is that of a normal distribution with mean 0 and variance 1; that is, Z ~ N(0, 1), and we say that Z has a standard normal distribution. A graph of the probability density function is shown in Fig. 7-2. The corresponding distribution function is Φ, where

Φ(z) = ∫_{−∞}^{z} (1/√(2π)) e^{−u²/2} du,   (7-8)

and this function has been well tabulated. A table of the integral in equation 7-8 has been provided in Table II of the Appendix. In fact, many software packages, such as Excel and Minitab, provide functions to evaluate Φ(z). For instance, the Excel call NORMSDIST(z) does just this task. As an example, we find that NORMSDIST(1.96) = 0.9750. The Excel function NORMSINV returns the inverse CDF. For example, NORMSINV(0.975) = 1.960. The functions NORMDIST(x, μ, σ, TRUE) and NORMINV give the CDF and inverse CDF of the N(μ, σ²) distribution.
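Outside of Excel and Minitab, the same evaluations are available in most computing environments. The short Python sketch below, an illustrative addition using SciPy's norm functions rather than anything from the original text, reproduces the lookups just described; note that SciPy's scale argument is the standard deviation σ, not the variance:

```python
from scipy.stats import norm

# Standard normal CDF Phi(z); analogous to Excel's NORMSDIST(z)
print(norm.cdf(1.96))      # 0.9750021048517795

# Inverse CDF (quantile function); analogous to NORMSINV(p)
print(norm.ppf(0.975))     # 1.959963984540054

# CDF and inverse CDF of a general N(mu, sigma^2) variable,
# analogous to NORMDIST / NORMINV
print(norm.cdf(104, loc=100, scale=2))   # 0.9772498680518208
print(norm.ppf(0.9772, loc=100, scale=2))  # approximately 104.0
```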

7-2.5 Problem-Solving Procedure


The procedure for solving practical problems involving the evaluation of cumulative normal probabilities is actually very simple. For example, suppose that X ~ N(100, 4) and we wish to find the probability that X is less than or equal to 104; that is, P(X ≤ 104) = F(104). Since the standard normal random variable is

Z = (X − μ)/σ,

we can standardize the point of interest x = 104 to obtain

z = (x − μ)/σ = (104 − 100)/2 = 2.

Now the probability that the standard normal random variable Z is less than or equal to 2 is equal to the probability that the original normal random variable X is less than or equal to 104. Expressed mathematically,

F(x) = Φ((x − μ)/σ) = Φ(2),

or

F(104) = Φ(2).

Figure 7-2 The standard normal distribution.



Appendix Table II contains cumulative standard normal probabilities for various values of z. From this table, we can read

Φ(2) = 0.9772.

Note that in the relationship z = (x − μ)/σ, the variable z measures the departure of x from the mean μ in standard deviation (σ) units. For instance, in the case just considered, F(104) = Φ(2), which indicates that 104 is two standard deviations (σ = 2) above the mean. In general, x = μ + σz. In solving problems, we sometimes need to use the symmetry property of Φ in addition to the tables. It is helpful to make a sketch if there is any confusion in determining exactly which probabilities are required, since the area under the curve and over the interval of interest is the probability that the random variable will lie in the interval.

Example 7-1
The breaking strength (in newtons) of a synthetic fabric is denoted X, and it is distributed as N(800, 144). The purchaser of the fabric requires the fabric to have a strength of at least 772 newtons. A fabric sample is randomly selected and tested. To find P(X ≥ 772), we first calculate

P(X < 772) = P(Z < (772 − 800)/12) = P(Z < −2.33) = Φ(−2.33) = 0.01.

Hence the desired probability, P(X ≥ 772), equals 0.99. Figure 7-3 shows the calculated probability relative to both X and Z. We have chosen to work with the random variable Z because its distribution function is tabulated.
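The same answer comes out of a two-line computational check (an illustrative sketch; the SciPy calls are an addition, and scale is the standard deviation √144 = 12):

```python
from scipy.stats import norm

# P(X >= 772) for X ~ N(800, 144), computed directly
p = 1 - norm.cdf(772, loc=800, scale=12)

# Same probability after standardizing: z = (772 - 800)/12 = -2.33
p_std = 1 - norm.cdf((772 - 800) / 12)

print(p, p_std)   # both approximately 0.990
```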

Example 7-2
The time required to repair an automatic loading machine in a complex food-packaging operation of a production process is X minutes. Studies have shown that the approximation X ~ N(120, 16) is quite good. A sketch is shown in Fig. 7-4. If the process is down for more than 125 minutes, all equipment must be cleaned, with the loss of all product in process. The total cost of product loss and cleaning associated with the long downtime is $10,000.

Figure 7-3 P(X < 772), where X ~ N(800, 144).
Figure 7-4 P(X > 125), where X ~ N(120, 16).

In order to determine the probability of this occurring, we proceed as follows:

P(X > 125) = P(Z > (125 − 120)/4) = P(Z > 1.25) = 1 − Φ(1.25) = 1 − 0.8944 = 0.1056.

Thus, given a breakdown of the packaging machine, the expected cost is E(C) = 0.1056(10,000 + C_R1) + 0.8944(C_R1), where C is the total cost and C_R1 is the repair cost. Simplified, E(C) = C_R1 + 1056. Suppose management can reduce the mean of the service time distribution to 115 minutes by adding more maintenance personnel. The new cost for repair will be C_R2 > C_R1; however,

P(X > 125) = P(Z > (125 − 115)/4) = P(Z > 2.5) = 1 − Φ(2.5) = 1 − 0.9938 = 0.0062,

so that the new expected cost would be C_R2 + 62, and one would logically make the decision to add to the maintenance crew if

C_R2 + 62 < C_R1 + 1056,

or

C_R2 − C_R1 < $994.

It is assumed that the frequency of breakdowns remains unchanged.

Example 7-3
The pitch diameter of the thread on a fitting is normally distributed with a mean of 0.4008 cm and a standard deviation of 0.0004 cm. The design specifications are 0.4000 ± 0.0010 cm. This is illustrated in Fig. 7-5. Notice that the process is operating with the mean not equal to the nominal specification. We desire to determine what fraction of product is within tolerance. Using the approach employed previously,

P(0.3990 ≤ X ≤ 0.4010) = P((0.3990 − 0.4008)/0.0004 ≤ Z ≤ (0.4010 − 0.4008)/0.0004)
= P(−4.5 ≤ Z ≤ 0.5)
= Φ(0.5) − Φ(−4.5)
= 0.6915 − 0.0000
= 0.6915.

Figure 7-5 Distribution of thread pitch diameters (lower specification limit 0.3990, upper specification limit 0.4010).

As process engineers study the results of such calculations, they decide to replace a worn cutting tool and adjust the machine producing the fittings so that the new mean falls directly at the nominal value of 0.4000. Then,

P(0.3990 ≤ X ≤ 0.4010) = P((0.3990 − 0.4000)/0.0004 ≤ Z ≤ (0.4010 − 0.4000)/0.0004)
= P(−2.5 ≤ Z ≤ +2.5)
= Φ(2.5) − Φ(−2.5)
= 0.9938 − 0.0062
= 0.9876.

We see that with the adjustments, 98.76% of the fittings will be within tolerance. The distribution of adjusted machine pitch diameters is shown in Fig. 7-6.

The previous example illustrates a concept important in quality engineering. Operating a process at the nominal level is generally superior to operating the process at some other level, if there are two-sided specification limits.

Example 7-4
Another type of problem involving the use of tables of the normal distribution sometimes arises. Suppose, for example, that X ~ N(50, 4). Furthermore, suppose we want to determine a value of X, say x, such that P(X > x) = 0.025. Then,

P(X > x) = P(Z > (x − 50)/2) = 0.025,

or

P(Z ≤ (x − 50)/2) = 0.975,

so that, reading the normal table “backward,” we obtain

(x − 50)/2 = 1.96 = Φ⁻¹(0.975),

and thus

x = 50 + 2(1.96) = 53.92.

There are several symmetric intervals that arise frequently. Their probabilities are

P(μ − 1.00σ ≤ X ≤ μ + 1.00σ) = 0.6826,
P(μ − 1.645σ ≤ X ≤ μ + 1.645σ) = 0.90,
P(μ − 1.96σ ≤ X ≤ μ + 1.96σ) = 0.95,
P(μ − 2.57σ ≤ X ≤ μ + 2.57σ) = 0.99,
P(μ − 3.00σ ≤ X ≤ μ + 3.00σ) = 0.9973.   (7-9)

Figure 7-6 Distribution of adjusted machine pitch diameters (μ = 0.4000, σ = 0.0004).

7-3 THE REPRODUCTIVE PROPERTY OF THE NORMAL DISTRIBUTION

Suppose we have n independent, normal random variables X₁, X₂, ..., Xₙ, where Xᵢ ~ N(μᵢ, σᵢ²), for i = 1, 2, ..., n. It was shown earlier that if

Y = X₁ + X₂ + ··· + Xₙ,   (7-10)

then

E(Y) = μ_Y = Σ_{i=1}^{n} μᵢ   (7-11)

and

V(Y) = σ_Y² = Σ_{i=1}^{n} σᵢ².

Using moment-generating functions, we see that

M_Y(t) = M_{X₁}(t) · M_{X₂}(t) ··· M_{Xₙ}(t).   (7-12)

Therefore,

M_Y(t) = exp[(μ₁ + μ₂ + ··· + μₙ)t + (σ₁² + σ₂² + ··· + σₙ²)t²/2],   (7-13)

which is the moment-generating function of a normally distributed random variable with mean μ₁ + μ₂ + ··· + μₙ and variance σ₁² + σ₂² + ··· + σₙ². Therefore, by the uniqueness property of the moment-generating function, we see that Y is normal with mean μ_Y and variance σ_Y².

Example 7-5
An assembly consists of three linkage components, as shown in Fig. 7-7. The properties of X₁, X₂, and X₃ are given below, with means in centimeters and variances in square centimeters:

X₁ ~ N(12, 0.02),
X₂ ~ N(24, 0.03),
X₃ ~ N(18, 0.04).

Links are produced by different machines and operators, so we have reason to assume that X₁, X₂, and X₃ are independent. Suppose we want to determine P(53.8 ≤ Y ≤ 54.2). Since Y = X₁ + X₂ + X₃, Y is distributed normally with mean μ_Y = 12 + 24 + 18 = 54 and variance σ_Y² = σ₁² + σ₂² + σ₃² = 0.02 + 0.03 + 0.04 = 0.09. Thus,

P(53.8 ≤ Y ≤ 54.2) = P((53.8 − 54)/0.3 ≤ Z ≤ (54.2 − 54)/0.3)
= P(−2/3 ≤ Z ≤ +2/3)
= Φ(0.667) − Φ(−0.667)
= 0.748 − 0.252
= 0.496.

Figure 7-7 A linkage assembly.

These results can be generalized to linear combinations of independent normal variables. Linear combinations of the form

Y = a₀ + a₁X₁ + ··· + aₙXₙ   (7-14)

were presented earlier, and we found that μ_Y = a₀ + a₁μ₁ + ··· + aₙμₙ. When the variables are independent, σ_Y² = a₁²σ₁² + a₂²σ₂² + ··· + aₙ²σₙ². Again, if X₁, X₂, ..., Xₙ are independent and normally distributed, then Y ~ N(μ_Y, σ_Y²).

Example 7-6
A shaft is to be assembled into a bearing, as shown in Fig. 7-8. The clearance is Y = X₁ − X₂. Suppose

X₁ ~ N(1.500, 0.0016)

and

X₂ ~ N(1.480, 0.0009).

Then,

μ_Y = a₁μ₁ + a₂μ₂ = (1)(1.500) + (−1)(1.480) = 0.02

and

σ_Y² = a₁²σ₁² + a₂²σ₂² = (1)²(0.0016) + (−1)²(0.0009) = 0.0025,

so that

σ_Y = 0.05.

When the parts are assembled, there will be interference if Y < 0, so

P(interference) = P(Y < 0) = P(Z < (0 − 0.02)/0.05) = Φ(−0.4) = 0.3446.

This indicates that 34.46% of all assemblies attempted would meet with failure. If the designer feels that the nominal clearance μ_Y = 0.02 is as large as it can be made for the assembly, then the only way to reduce the 34.46% figure is to reduce the variance of the distributions. In many cases, this can be accomplished by the overhaul of production equipment, better training of production operators, and so on.

Figure 7-8 An assembly.

7-4 THE CENTRAL LIMIT THEOREM


If a random variable Y is the sum of n independent random variables that satisfy certain
general conditions, then for sufficiently large n, Y is approximately normally distributed.
We state this as a theorem—the most important theorem in all of probability and statistics.

Theorem 7-1 Central Limit Theorem


If X₁, X₂, ..., Xₙ is a sequence of n independent random variables with E(Xᵢ) = μᵢ and V(Xᵢ) = σᵢ² (both finite) and Y = X₁ + X₂ + ··· + Xₙ, then under some general conditions

Zₙ = (Y − Σ_{i=1}^{n} μᵢ) / √(Σ_{i=1}^{n} σᵢ²)   (7-15)

has an approximate N(0, 1) distribution as n approaches infinity. If Fₙ is the distribution function of Zₙ, then

lim_{n→∞} Fₙ(z)/Φ(z) = 1,   for all z.   (7-16)

The “general conditions” mentioned in the theorem are informally summarized as follows. The terms Xᵢ, taken individually, contribute a negligible amount to the variance of the sum, and it is not likely that a single term makes a large contribution to the sum.
The proof of this theorem, as well as a rigorous discussion of the necessary assump-
tions, is beyond the scope of this presentation. There are, however, several observations that
should be made. The fact that Y is approximately normally distributed when the Xᵢ terms
may have essentially any distribution is the basic underlying reason for the importance of
the normal distribution. In numerous applications, the random variable being considered
may be represented as the sum of n independent random variables, some of which may be
measurement error, some due to physical considerations, and so on, and thus the normal
distribution provides a good approximation.
A special case of the central limit theorem arises when each of the components has the
same distribution.

Theorem 7-2
If X₁, X₂, ..., Xₙ is a sequence of n independent, identically distributed random variables with E(Xᵢ) = μ and V(Xᵢ) = σ², and Y = X₁ + X₂ + ··· + Xₙ, then

Zₙ = (Y − nμ)/(σ√n)   (7-17)

has an approximate N(0, 1) distribution in the same sense as equation 7-16.

Under the restriction that M_X(t) exists for real t, a straightforward proof may be presented for this form of the central limit theorem. Many mathematical statistics texts present such a proof.
The question immediately encountered in practice is the following: How large must n
be to get reasonable results using the normal distribution to approximate the distribution of
Y? This is not an easy question to answer, since the answer depends on the characteristics
of the distribution of the Xᵢ terms as well as the meaning of “reasonable results.” From a

practical standpoint, some very crude rules of thumb can be given where the distribution of the Xᵢ terms falls into one of three arbitrarily selected groups, as follows:

1. Well behaved—The distribution of Xᵢ does not radically depart from the normal distribution. There is a bell-shaped density that is nearly symmetric. For this case, practitioners in quality control and other areas of application have found that n should be at least 4. That is, n ≥ 4.
2. Reasonably behaved—The distribution of Xᵢ has no prominent mode, and it appears much as a uniform density. In this case, n = 12 is a commonly used rule.
3. Ill behaved—The distribution has most of its measure in the tails, as in Fig. 7-9. In this case it is most difficult to say; however, in many practical applications, n = 100 should be satisfactory.

Example 7-7
Small parts are packaged 250 to the crate. Part weights are independent random variables with a mean of 0.5 pound and a standard deviation of 0.10 pound. Twenty crates are loaded to a pallet. Suppose we wish to find the probability that the parts on a pallet will exceed 2510 pounds in weight. (Neglect both pallet and crate weight.) Let

Y = X₁ + X₂ + ··· + X₅₀₀₀

represent the total weight of the parts, so that

μ_Y = 5000(0.5) = 2500,
σ_Y² = 5000(0.01) = 50,

and

σ_Y = √50 = 7.071.

Then

P(Y > 2510) = P(Z > (2510 − 2500)/7.071) = 1 − Φ(1.41) = 0.08.

Note that we did not know the distribution of the individual part weights.
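Because the individual part-weight distribution is unspecified, a quick simulation illustrates how little it matters for the total. The sketch below assumes uniformly distributed weights with the stated mean and standard deviation, an assumption made purely for illustration; any reasonable choice gives nearly the same answer:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n_parts, n_pallets = 5000, 2000

# Uniform weights with mean 0.5 and sd 0.10: a Uniform(a, b) variable
# has sd (b - a)/sqrt(12), so take a, b = 0.5 -/+ 0.10*sqrt(3).
h = 0.10 * np.sqrt(3.0)
totals = rng.uniform(0.5 - h, 0.5 + h, size=(n_pallets, n_parts)).sum(axis=1)

p_sim = np.mean(totals > 2510)                     # simulated estimate
p_clt = 1 - norm.cdf((2510 - 2500) / np.sqrt(50))  # CLT approximation
print(p_sim, p_clt)                                # both close to 0.08
```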

Figure 7-9 Ill-behaved distributions.

Table 7-1 Activity Mean Times and Variances (in Weeks and Weeks²)

Activity   Mean   Variance      Activity   Mean   Variance
1          ?      1.0           9          3.1    ?
2          3.2    1.3           10         ?      0.8
3          4.6    1.0           11         3.6    1.6
4          2.1    1.2           12         0.5    0.2
5          3.6    0.8           13         ?      0.6
6          3.2    ?             14         ?      0.7
7          1.3    1.9           15         1.2    0.4
8          1.5    0.5           16         2.8    0.7

Example 7-8
In a construction project, a network of major activities has been constructed to serve as the basis for planning and scheduling. On a critical path there are 16 activities. The means and variances are given in Table 7-1.
The activity times may be considered independent, and the project time is the sum of the activity times on the critical path; that is, Y = X₁ + X₂ + ··· + X₁₆, where Y is the project time and Xᵢ is the time for the ith activity. Although the distributions of the Xᵢ are unknown, the distributions are fairly well behaved. The contractor would like to know (a) the expected completion time, and (b) a project time corresponding to a probability of 0.90 of having the project completed. Calculating μ_Y and σ_Y², we obtain

μ_Y = 49 weeks,
σ_Y² = 16 weeks².

The expected completion time for the project is thus 49 weeks. In determining the time y₀ such that the probability is 0.90 of having the project completed by that time, Fig. 7-10 may be helpful. We may calculate

P(Y ≤ y₀) = 0.90,

or

P(Z ≤ (y₀ − 49)/4) = 0.90,

so that

(y₀ − 49)/4 = 1.282 = Φ⁻¹(0.90)

and

y₀ = 49 + 1.282(4) = 54.128 weeks.

Figure 7-10 Distribution of project times (μ_Y = 49 weeks, σ_Y = 4 weeks).
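The 0.90 point can be read from an inverse CDF routine instead of the table; a one-line sketch with the numbers of this example:

```python
from scipy.stats import norm

# Project time Y ~ N(49, 16): mean 49 weeks, standard deviation 4 weeks
y0 = norm.ppf(0.90, loc=49, scale=4)
print(y0)    # 54.126..., i.e., 49 + 1.282(4)
```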

7-5 THE NORMAL APPROXIMATION TO THE BINOMIAL DISTRIBUTION

In Chapter 5, the binomial approximation to the hypergeometric distribution was presented, as was the Poisson approximation to the binomial distribution. In this section we consider the normal approximation to the binomial distribution. Since the binomial is a discrete probability distribution, this may seem to go against intuition; however, a limiting process is involved, keeping p of the binomial distribution fixed and letting n → ∞. The approximation is known as the DeMoivre-Laplace approximation.
We recall the binomial distribution as

p(x) = (n!/(x!(n − x)!)) pˣ q^{n−x},   x = 0, 1, 2, ..., n,
     = 0,   otherwise,

where q = 1 − p. Stirling's approximation to n! is

n! ≅ (2π)^{1/2} n^{n+(1/2)} e^{−n}.   (7-18)

The error

|n! − (2π)^{1/2} n^{n+(1/2)} e^{−n}| / n! → 0   (7-19)

as n → ∞. Using Stirling's formula to approximate the terms involving n! in the binomial model, we eventually find that, for large n,

p(x) ≈ (1/√(2πnp(1 − p))) e^{−(x−np)²/[2np(1−p)]},   (7-20)

so that

P(X ≤ x) ≈ ∫_{−∞}^{(x−np)/√(npq)} (1/√(2π)) e^{−z²/2} dz = Φ((x − np)/√(npq)).   (7-21)

This result makes sense in light of the central limit theorem and the fact that X is the sum of independent Bernoulli trials (so that E(X) = np and V(X) = npq). Thus, the quantity (X − np)/√(npq) approximately has a N(0, 1) distribution. If p is close to 1/2 and n > 10, the approximation is fairly good; however, for other values of p, the value of n must be larger. In general, experience indicates that the approximation is fairly good as long as np > 5 for p ≤ 1/2, or nq > 5 when p > 1/2.

Example 7-9
In sampling from a production process that produces items of which 20% are defective, a random sample of 100 items is selected each hour of each production shift. The number of defectives in a sample is denoted X. To find, say, P(X ≤ 15) we may use the normal approximation as follows:

P(X ≤ 15) ≈ Φ((15 − 100(0.2))/√(100(0.2)(0.8))) = Φ(−5/4) = Φ(−1.25) = 0.1056.

Since the binomial distribution is discrete and the normal distribution is continuous, it is common practice to use a half-interval correction or continuity correction. In fact, this is a necessity in calculating P(X = x). The usual procedure is to go a half-unit on either side of the integer x, depending on the interval of interest. Several cases are shown in Table 7-2.

Table 7-2 Continuity Corrections

Quantity Desired from the      With Continuity                 In Terms of the
Binomial Distribution          Correction                      Distribution Function Φ

P(X = x)                       P(x − 1/2 ≤ X ≤ x + 1/2)        Φ((x + 1/2 − np)/√(npq)) − Φ((x − 1/2 − np)/√(npq))
P(X ≤ x)                       P(X ≤ x + 1/2)                  Φ((x + 1/2 − np)/√(npq))
P(X < x) = P(X ≤ x − 1)        P(X ≤ x − 1/2)                  Φ((x − 1/2 − np)/√(npq))
P(X ≥ x)                       P(X ≥ x − 1/2)                  1 − Φ((x − 1/2 − np)/√(npq))
P(X > x) = P(X ≥ x + 1)        P(X ≥ x + 1/2)                  1 − Φ((x + 1/2 − np)/√(npq))
P(a ≤ X ≤ b)                   P(a − 1/2 ≤ X ≤ b + 1/2)        Φ((b + 1/2 − np)/√(npq)) − Φ((a − 1/2 − np)/√(npq))

Example 7-10
Using the data from Example 7-9, where we had n = 100 and p = 0.2, we evaluate P(X = 15), P(X ≤ 15), P(X < 18), P(X ≥ 22), and P(18 < X < 21).

1. P(X = 15) = P(14.5 ≤ X ≤ 15.5) ≈ Φ((15.5 − 20)/4) − Φ((14.5 − 20)/4)
   = Φ(−1.125) − Φ(−1.375) = 0.046.

2. P(X ≤ 15) ≈ Φ((15.5 − 20)/4) = 0.130.

3. P(X < 18) = P(X ≤ 17) ≈ Φ((17.5 − 20)/4) = 0.266.

4. P(X ≥ 22) = 1 − Φ((21.5 − 20)/4) ≈ 0.354.

5. P(18 < X < 21) = P(19 ≤ X ≤ 20) ≈ Φ((20.5 − 20)/4) − Φ((18.5 − 20)/4)
   = 0.550 − 0.354 = 0.196.
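The quality of the approximation is easy to check against the exact binomial probabilities; a sketch (an illustrative addition, with the 0.5 offsets following Table 7-2):

```python
from scipy.stats import binom, norm

n, p = 100, 0.2
mu, sd = n * p, (n * p * (1 - p)) ** 0.5   # 20 and 4

# P(X <= 15): exact binomial vs. corrected normal approximation
print(binom.cdf(15, n, p))                 # 0.1285...
print(norm.cdf((15 + 0.5 - mu) / sd))      # 0.1303...

# P(X = 15): exact vs. the half-interval approximation of Table 7-2
print(binom.pmf(15, n, p))                 # 0.0481...
print(norm.cdf((15.5 - mu) / sd) - norm.cdf((14.5 - mu) / sd))  # 0.0457...
```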

As discussed in Chapter 5, the random variable p̂ = X/n, where X has a binomial distribution with parameters p and n, is often of interest. Interest in this quantity stems primarily from sampling applications, where a random sample of n observations is made, with each observation classified success or failure, and where X is the number of successes in the sample. The quantity p̂ is simply the sample fraction of successes. Recall that we showed that

E(p̂) = p   (7-22)

and that the quantity

Z = (p̂ − p)/√(pq/n)   (7-23)

has an approximate N(0, 1) distribution. This result has proved useful in many applications, including those in the areas of quality control, work measurement, reliability engineering, and economics. The results are much more useful than those from the law of large numbers.

Example 7-11
Instead of timing the activity of a maintenance mechanic over the period of a week to determine the fraction of his time spent in an activity classification called “secondary but necessary,” a technician elects to use a work-sampling study, randomly picking 400 time points over the week, taking a flash observation at each, and classifying the activity of the maintenance mechanic. The value X will represent the number of times the mechanic was involved in a “secondary but necessary” activity and p̂ = X/400. If the true fraction of time that he is involved in this activity is 0.2, we determine the probability that p̂, the estimated fraction, falls between 0.15 and 0.25. That is,

P(0.15 ≤ p̂ ≤ 0.25) = Φ((0.25 − 0.2)/√((0.2)(0.8)/400)) − Φ((0.15 − 0.2)/√((0.2)(0.8)/400))
= Φ(2.5) − Φ(−2.5)
= 0.9876.
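A sketch of the same calculation, together with the exact binomial probability it approximates (p̂ between 0.15 and 0.25 corresponds to X between 60 and 100; the code is an illustrative addition):

```python
from math import sqrt
from scipy.stats import binom, norm

n, p = 400, 0.2
se = sqrt(p * (1 - p) / n)     # standard error of p-hat, 0.02

# Normal approximation to P(0.15 <= p-hat <= 0.25)
approx = norm.cdf((0.25 - p) / se) - norm.cdf((0.15 - p) / se)

# Exact: p-hat in [0.15, 0.25] means X in {60, 61, ..., 100}
exact = binom.cdf(100, n, p) - binom.cdf(59, n, p)
print(approx, exact)           # 0.9876 vs. roughly 0.99
```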

7-6 THE LOGNORMAL DISTRIBUTION


The lognormal distribution is the distribution of a random variable whose logarithm follows the normal distribution. Some practitioners hold that the lognormal distribution is as fundamental as the normal distribution. It arises from the combination of random terms by a multiplicative process.
The lognormal distribution has been applied in a wide variety of fields, including the
physical sciences, life sciences, social sciences, and engineering. In engineering applica-
tions, the lognormal distribution has been used to describe “time to failure” in reliability
engineering and “time to repair” in maintainability engineering.

7-6.1 Density Function


We consider a random variable X with range space R_X = {x: 0 < x < ∞}, where Y = ln X is normally distributed with mean μ_Y and variance σ_Y²; that is,

E(Y) = μ_Y and V(Y) = σ_Y².

The density function of X is

f(x) = (1/(xσ_Y√(2π))) e^{−(1/2)[(ln x − μ_Y)/σ_Y]²},   x > 0,
     = 0,   otherwise.   (7-24)

The lognormal distribution is shown in Fig. 7-11. Notice that, in general, the distribution is skewed, with a long tail to the right. The Excel functions LOGNORMDIST and LOGINV provide the CDF and inverse CDF, respectively, of the lognormal distribution.

7-6.2 Mean and Variance of the Lognormal Distribution


The mean and variance of the lognormal distribution are

E(X) = e^{μ_Y + σ_Y²/2}   (7-25)

and

V(X) = e^{2μ_Y + σ_Y²}(e^{σ_Y²} − 1).   (7-26)

In some applications of the lognormal distribution, it is important to know the values of the median and the mode. The median, which is the value x̃ such that P(X ≤ x̃) = 0.5, is

x̃ = e^{μ_Y}.   (7-27)

The mode is the value of x for which f(x) is maximum, and for the lognormal distribution, the mode is

MO = e^{μ_Y − σ_Y²}.   (7-28)

Figure 7-11 shows the relative location of the mean, median, and mode for the lognormal distribution. Since the distribution has a right skew, generally we will find that mode < median < mean.

Figure 7-11 The lognormal distribution.
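These formulas map directly onto SciPy's lognorm parameterization, in which s = σ_Y and scale = e^{μ_Y}. A sketch using μ_Y = 10 and σ_Y² = 4 (parameter values that anticipate Example 7-12; the code itself is an illustrative addition):

```python
import numpy as np
from scipy.stats import lognorm

mu_y, sigma_y = 10.0, 2.0                    # Y = ln X ~ N(10, 4)
X = lognorm(s=sigma_y, scale=np.exp(mu_y))

print(X.mean())                   # e^(mu + sigma^2/2) = e^12 = 162,754.79...
print(X.var())                    # e^(2*mu + sigma^2) * (e^(sigma^2) - 1)
print(X.median())                 # e^mu = e^10 = 22,026.46...
print(np.exp(mu_y - sigma_y**2))  # mode = e^(mu - sigma^2) = e^6 = 403.43
```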

7-6.3 Properties of the Lognormal Distribution


While the normal distribution has additive reproductive properties, the lognormal distribution has multiplicative reproductive properties. Some of the more important properties are as follows:

1. If X has a lognormal distribution with parameters μ_Y and σ_Y², and if a, b, and d are constants such that b = e^d, then W = bX^a has a lognormal distribution with parameters (d + aμ_Y) and (aσ_Y)².
2. If X₁ and X₂ are independent lognormal variables, with parameters (μ_Y₁, σ²_Y₁) and (μ_Y₂, σ²_Y₂), respectively, then W = X₁ · X₂ has a lognormal distribution with parameters [(μ_Y₁ + μ_Y₂), (σ²_Y₁ + σ²_Y₂)].
3. If X₁, X₂, ..., Xₙ is a sequence of n independent lognormal variates, with parameters (μ_Yⱼ, σ²_Yⱼ), j = 1, 2, ..., n, respectively, and {aⱼ} is a sequence of constants while b = e^d is a single constant, then the product

W = b ∏_{j=1}^{n} Xⱼ^{aⱼ}

has a lognormal distribution with parameters

d + Σ_{j=1}^{n} aⱼμ_Yⱼ and Σ_{j=1}^{n} aⱼ²σ²_Yⱼ.

4. If X₁, X₂, ..., Xₙ are independent lognormal variates, each with the same parameters (μ_Y, σ_Y²), then the geometric mean

(∏_{j=1}^{n} Xⱼ)^{1/n}

has a lognormal distribution with parameters μ_Y and σ_Y²/n.

Example 7-12
Suppose Y = ln X ~ N(10, 4), so that X has a lognormal distribution with mean and variance

E(X) = e^{10 + (1/2)4} = e^{12} = 162,754

and

V(X) = e^{2(10) + 4}(e^4 − 1) = e^{24}(e^4 − 1) = 53.598e^{24},

respectively. The mode and median are

mode = e^{10−4} = e^6 = 403.43

and

median = e^{10} = 22,026.

In order to determine a specific probability, say P(X ≤ 1000), we use the transform P(ln X ≤ ln 1000) = P(Y ≤ ln 1000):

P(Y ≤ ln 1000) = Φ((ln 1000 − 10)/2) = Φ(−1.55) = 0.0611.

Example 7-13
Suppose

Y₁ = ln X₁ ~ N(4, 1),
Y₂ = ln X₂ ~ N(3, 0.5),
Y₃ = ln X₃ ~ N(2, 0.4),
Y₄ = ln X₄ ~ N(1, 0.01),

and furthermore suppose X₁, X₂, X₃, and X₄ are independent random variables. The random variable W defined as follows represents a critical performance variable on a telemetry system:

W = e^{1.5} X₁^{2.5} X₂^{0.2} X₃^{0.7} X₄^{3.1}.

By reproductive property 3, W will have a lognormal distribution with parameters

1.5 + (2.5 · 4 + 0.2 · 3 + 0.7 · 2 + 3.1 · 1) = 16.6

and

(2.5)² · 1 + (0.2)² · 0.5 + (0.7)² · 0.4 + (3.1)² · (0.01) = 6.562,

respectively. That is to say, ln W ~ N(16.6, 6.562). If the specifications on W are, say, 20,000 to 600 · 10³, we could determine the probability that W would fall within specifications as follows:

P(20,000 < W < 600 · 10³) = P(ln(20,000) < ln W < ln(600 · 10³))
= Φ((ln(600 · 10³) − 16.6)/√6.562) − Φ((ln(20,000) − 16.6)/√6.562)
= Φ(−1.290) − Φ(−2.621)
= 0.0985 − 0.0044
= 0.0941.
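Since ln W is itself normal, the specification probability reduces to two evaluations of Φ; a sketch with the parameters just derived (an illustrative addition):

```python
import numpy as np
from scipy.stats import norm

mu_w, var_w = 16.6, 6.562          # parameters of ln W from property 3
sd_w = np.sqrt(var_w)

lower, upper = 20_000, 600_000
p = (norm.cdf((np.log(upper) - mu_w) / sd_w)
     - norm.cdf((np.log(lower) - mu_w) / sd_w))
print(p)                           # about 0.094
```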

7-7 THE BIVARIATE NORMAL DISTRIBUTION


Up to this point, all of the continuous random variables have been of one dimension. A very important two-dimensional probability law that is a generalization of the one-dimensional normal probability law is called the bivariate normal distribution. If [X₁, X₂] is a bivariate normal random vector, then the joint density function of [X₁, X₂] is

f(x₁, x₂) = [1/(2πσ₁σ₂√(1 − ρ²))] exp{−[1/(2(1 − ρ²))] [((x₁ − μ₁)/σ₁)² − 2ρ((x₁ − μ₁)/σ₁)((x₂ − μ₂)/σ₂) + ((x₂ − μ₂)/σ₂)²]}   (7-29)

for −∞ < x₁ < ∞ and −∞ < x₂ < ∞. The joint probability P(a₁ ≤ X₁ ≤ b₁, a₂ ≤ X₂ ≤ b₂) is defined as

∫_{a₂}^{b₂} ∫_{a₁}^{b₁} f(x₁, x₂) dx₁ dx₂   (7-30)

and is represented by the volume under the surface and over the region {(x₁, x₂): a₁ ≤ x₁ ≤ b₁, a₂ ≤ x₂ ≤ b₂}, as shown in Fig. 7-12. Owen (1962) has provided a table of probabilities.

Figure 7-12 The bivariate normal density.

The bivariate normal density has five parameters. These are μ₁, μ₂, σ₁, σ₂, and ρ, the correlation coefficient between X₁ and X₂, such that −∞ < μ₁ < ∞, −∞ < μ₂ < ∞, σ₁ > 0, σ₂ > 0, and −1 < ρ < 1.
The marginal densities f₁ and f₂ are given, respectively, as

f₁(x₁) = ∫_{−∞}^{∞} f(x₁, x₂) dx₂ = (1/(σ₁√(2π))) e^{−(1/2)[(x₁−μ₁)/σ₁]²}   (7-31)

for −∞ < x₁ < ∞, and

f₂(x₂) = ∫_{−∞}^{∞} f(x₁, x₂) dx₁ = (1/(σ₂√(2π))) e^{−(1/2)[(x₂−μ₂)/σ₂]²}   (7-32)

for −∞ < x₂ < ∞.

We note that these marginal densities are normal; that is,

X₁ ~ N(μ₁, σ₁²)   (7-33)

and

X₂ ~ N(μ₂, σ₂²),

so that

E(X₁) = μ₁,  E(X₂) = μ₂,  V(X₁) = σ₁²,  V(X₂) = σ₂².   (7-34)

The correlation coefficient ρ is the ratio of the covariance to σ₁ · σ₂. The covariance is

σ₁₂ = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x₁ − μ₁)(x₂ − μ₂) f(x₁, x₂) dx₁ dx₂.

Thus

ρ = σ₁₂/(σ₁σ₂).   (7-35)

The conditional distributions f_{X₂|x₁}(x₂) and f_{X₁|x₂}(x₁) are also important. These conditional densities are normal, as shown here:

f_{X₂|x₁}(x₂) = f(x₁, x₂)/f₁(x₁)
= (1/(σ₂√(2π(1 − ρ²)))) exp{−[1/(2σ₂²(1 − ρ²))] {x₂ − [μ₂ + ρ(σ₂/σ₁)(x₁ − μ₁)]}²}   (7-36)

for −∞ < x₂ < ∞, and

f_{X₁|x₂}(x₁) = f(x₁, x₂)/f₂(x₂)
= (1/(σ₁√(2π(1 − ρ²)))) exp{−[1/(2σ₁²(1 − ρ²))] {x₁ − [μ₁ + ρ(σ₁/σ₂)(x₂ − μ₂)]}²}   (7-37)

for −∞ < x₁ < ∞. Figure 7-13 illustrates some of these conditional densities.
We first consider the distribution f_{X₂|x₁}. The mean and variance are

E(X₂|x₁) = μ₂ + ρ(σ₂/σ₁)(x₁ − μ₁)   (7-38)
Figure 7-13 Some typical conditional distributions. (a) Some example conditional distributions of X₂ for a few values of x₁. (b) Some example conditional distributions of X₁ for a few values of x₂.

and

V(X₂|x₁) = σ₂²(1 − ρ²).   (7-39)

Furthermore, f_{X₂|x₁} is normal; that is,

X₂|x₁ ~ N[μ₂ + ρ(σ₂/σ₁)(x₁ − μ₁), σ₂²(1 − ρ²)].   (7-40)

The locus of expected values of X₂ for given x₁, as shown in equation 7-38, is called the regression of X₂ on X₁, and it is linear. Also, the variance in the conditional distributions is constant for all x₁.
In the case of the distribution f_{X₁|x₂}, the results are similar. That is,

E(X₁|x₂) = μ₁ + ρ(σ₁/σ₂)(x₂ − μ₂),   (7-41)

V(X₁|x₂) = σ₁²(1 − ρ²),   (7-42)

and

X₁|x₂ ~ N[μ₁ + ρ(σ₁/σ₂)(x₂ − μ₂), σ₁²(1 − ρ²)].   (7-43)

In the bivariate normal distribution we observe that if ρ = 0, the joint density may be factored into the product of the marginal densities, and so X₁ and X₂ are independent. Thus, for a bivariate normal density, zero correlation and independence are equivalent. If planes parallel to the x₁, x₂ plane are passed through the surface shown in Fig. 7-12, the contours cut from the bivariate normal surface are ellipses. The student may wish to show this property.

Example 7-14
In an attempt to substitute a nondestructive testing procedure for a destructive test, an extensive study was made of shear strength, X₂, and weld diameter, X₁, of spot welds, with the following findings.

1. [X₁, X₂] ~ bivariate normal.
2. μ₁ = 0.20 inch, μ₂ = 1100 pounds, σ₁² = 0.02 inch², σ₂² = 525 pounds², and ρ = 0.9.

The regression of X₂ on X₁ is thus

E(X₂|x₁) = μ₂ + ρ(σ₂/σ₁)(x₁ − μ₁) = 1100 + 0.9√(525/0.02) (x₁ − 0.2) = 145.8x₁ + 1070.84,

and the variance is

V(X₂|x₁) = σ₂²(1 − ρ²) = 525(0.19) = 99.75.

In studying these results, the manager of manufacturing notes that since ρ = 0.9, that is, close to 1, weld diameter is highly correlated with shear strength. The specification on shear strength calls for a value greater than 1080. If a weld has a diameter of 0.18, he asks: “What is the probability that the strength specification will be met?” The process engineer notes that E(X₂|0.18) = 1097.05; therefore,

P(X₂ ≥ 1080) = P(Z ≥ (1080 − 1097.05)/√99.75) = 1 − Φ(−1.71) = 0.9564,

and he recommends a policy such that if the weld diameter is not less than 0.18, the weld will be classified as satisfactory.
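Equations 7-38 and 7-39 translate directly into a small calculation; a sketch with the numbers of this example (an illustrative addition to the text):

```python
import numpy as np
from scipy.stats import norm

mu1, mu2 = 0.20, 1100.0      # weld diameter (inch) and strength (pounds)
var1, var2 = 0.02, 525.0
rho = 0.9

x1 = 0.18                    # observed weld diameter
cond_mean = mu2 + rho * np.sqrt(var2 / var1) * (x1 - mu1)   # eq. 7-38
cond_sd = np.sqrt(var2 * (1 - rho**2))                      # eq. 7-39

p = 1 - norm.cdf(1080, loc=cond_mean, scale=cond_sd)
print(cond_mean, cond_sd, p)   # about 1097.1, 9.99, 0.956
```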

Example 7-15
In developing an admissions policy for a large university, the office of student testing and evaluation has noted that X₁, the combined score on the college board examinations, and X₂, the student grade point average at the end of the freshman year, have a bivariate normal distribution. A grade point of 4.0 corresponds to A. A study indicates that

μ₁ = 1300,
μ₂ = 2.3,
σ₁² = 6400,
σ₂² = 0.25,
ρ = 0.6.

Any student with a grade point average less than 1.5 is automatically dropped at the end of the freshman year; however, an average of 2.0 is considered to be satisfactory.
An applicant takes the college board exams, receives a combined score of 900, and is not accepted. An irate parent argues that the student will do satisfactory work and, specifically, will have better than a 2.0 grade point average at the end of the freshman year. Considering only the probabilistic aspects of the problem, the director of admissions wants to determine P(X₂ > 2.0|x₁ = 900). Noting that

E(X₂|900) = 2.3 + (0.6)√(0.25/6400) (900 − 1300) = 0.8

and

V(X₂|900) = 0.25(1 − 0.36) = 0.16,

the director calculates

P(X₂ > 2.0|900) = 1 − Φ((2.0 − 0.8)/0.4) = 1 − Φ(3) = 0.0013,

which predicts only a very slim chance of the parent’s claim being valid.

7-8 GENERATION OF NORMAL REALIZATIONS


We will consider both direct and approximate methods for generating realizations of a standard normal variable Z, where Z ~ N(0, 1). Recall that X = μ + σZ, so realizations of X ~ N(μ, σ²) are easily obtained as x = μ + σz.
The direct method calls for generating uniform [0, 1] random number realizations in pairs: u₁ and u₂. Then, using the methods of Chapter 4, it turns out that

z₁ = (−2 ln u₁)^{1/2} cos(2πu₂),
z₂ = (−2 ln u₁)^{1/2} sin(2πu₂)   (7-44)

are realizations of independent N(0, 1) variables. The values x = μ + σz follow directly, and the process is repeated until the desired number of realizations of X is obtained.
An approximate method that makes use of the central limit theorem is as follows:

z = Σ_{i=1}^{12} uᵢ − 6.   (7-45)

With this procedure, we would begin by generating 12 uniform [0, 1] random number realizations, adding them, and subtracting 6. This entire process is repeated until the desired number of realizations is obtained.

Although the direct method is exact and usually preferable, approximate values are
sometimes acceptable.
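Both generators take only a few lines; a sketch implementing equation 7-44 (the direct, Box-Muller method) and equation 7-45 (the approximate method). The NumPy generator and function names are illustrative additions, not part of the original text:

```python
import numpy as np

rng = np.random.default_rng(7)   # any uniform [0, 1] random number source will do

def normal_direct(n, mu=0.0, sigma=1.0):
    """Direct method, equation 7-44: each uniform pair (u1, u2)
    yields two independent N(0, 1) realizations."""
    m = (n + 1) // 2
    u1, u2 = rng.uniform(size=m), rng.uniform(size=m)
    r = np.sqrt(-2.0 * np.log(u1))
    z = np.concatenate((r * np.cos(2 * np.pi * u2),
                        r * np.sin(2 * np.pi * u2)))[:n]
    return mu + sigma * z

def normal_approx(n, mu=0.0, sigma=1.0):
    """Approximate method, equation 7-45: sum 12 uniforms, subtract 6."""
    z = rng.uniform(size=(n, 12)).sum(axis=1) - 6.0
    return mu + sigma * z

print(normal_direct(6, mu=100, sigma=2))   # six N(100, 4) realizations
print(normal_approx(6, mu=100, sigma=2))
```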

7-9 SUMMARY
This chapter has presented the normal distribution with a number of example applications. The normal distribution, the related standard normal, and the lognormal distributions are univariate, while the bivariate normal gives the joint density of two related normal random variables.
The normal distribution forms the basis on which a great deal of the work in statistical inference rests. The wide application of the normal distribution makes it particularly important.

7-10 EXERCISES
7-1. Let Z be a standard normal random variable and calculate the following probabilities, using sketches where appropriate:
(a) P(0 ≤ Z ≤ 2).
(b) P(−1 ≤ Z ≤ +1).
(c) P(Z ≤ 1.65).
(d) P(Z ≥ −1.96).
(e) P(|Z| > 1.5).
(f) P(−1.9 ≤ Z ≤ 2).
(g) P(Z ≤ 1.37).
(h) P(|Z| ≤ 2.57).

7-2. Let X ~ N(10, 9). Find P(X ≤ 8), P(X ≥ 12), and P(2 ≤ X ≤ 10).

7-3. In each part below, find the value of c that makes the probability statement true.
(a) Φ(c) = 0.94062.
(b) P(|Z| ≤ c) = 0.95.
(c) P(|Z| ≤ c) = 0.99.
(d) P(Z ≤ c) = 0.05.

7-4. If P(Z ≥ z_α) = α, determine z_α for α = 0.025, α = 0.005, α = 0.05, and α = 0.0014.

7-5. If X ~ N(80, 10²), compute the following:
(a) P(X ≤ 100).
(b) P(X ≤ 80).
(c) P(75 ≤ X ≤ 100).
(d) P(75 ≤ X).
(e) P(|X − 80| ≤ 19.6).

7-6. The life of a particular type of dry-cell battery is normally distributed with a mean of 600 days and a standard deviation of 60 days. What fraction of these batteries would be expected to survive beyond 680 days? What fraction would be expected to fail before 560 days?

7-7. The personnel manager of a large company requires job applicants to take a certain test and achieve a score of 500. If the test scores are normally distributed with a mean of 485 and a standard deviation of 30, what percentage of the applicants pass the test?

7-8. Experience indicates that the development time for a photographic printing paper is distributed as X ~ N(30 seconds, 1.21 seconds²). Find the following: (a) The probability that X is at least 28.5 seconds. (b) The probability that X is at most 31 seconds. (c) The probability that X differs from its expected value μ by more than 2 seconds.

7-9. A certain type of light bulb has an output known to be normally distributed with a mean of 2500 footcandles and a standard deviation of 75 footcandles. Determine a lower specification limit such that only 5% of the manufactured bulbs will be defective.

7-10. Show that the moment-generating function for the normal distribution is as given by equation 7-5. Use it to generate the mean and variance.

7-11. If X ~ N(μ, σ²), show that Y = aX + b, where a and b are real constants, is also normally distributed. Use the methods outlined in Chapter 3.

7-12. The inside diameter of a piston ring is normally distributed with a mean of 12 cm and a standard deviation of 0.02 cm.
(a) What fraction of the piston rings will have diameters exceeding 12.05 cm?
(b) What inside diameter value c has a probability of 0.90 of being exceeded?
(c) What is the probability that the inside diameter will fall between 11.95 and 12.05?

7-13. A plant manager orders a process shutdown and setting readjustment whenever the pH of the final product falls above 7.20 or below 6.80. The sample

pH is normally distributed with unknown μ and a standard deviation of σ = 0.10. Determine the following probabilities:
(a) Of readjusting when the process is operating as intended with μ = 7.0.
(b) Of readjusting when the process is slightly off target with the mean pH μ = 7.05.
(c) Of failing to readjust when the process is too alkaline and the mean pH is μ = 7.25.
(d) Of failing to readjust when the process is too acidic and the mean pH is μ = 6.75.

7-14. The price being asked for a certain security is distributed normally with a mean of $50.00 and a standard deviation of $5.00. Buyers are willing to pay an amount that is also normally distributed with a mean of $45.00 and a standard deviation of $2.50. What is the probability that a transaction will take place?

7-15. The specifications for a capacitor are that its life must be between 1000 and 5000 hours. The life is known to be normally distributed with a mean of 3000 hours. The revenue realized from each capacitor is $9.00; however, a failed unit must be replaced at a cost of $3.00 to the company. Two manufacturing processes can produce capacitors having satisfactory mean lives. The standard deviation for process A is 1000 hours and for process B it is 500 hours. However, process A manufacturing costs are only half those for B. What value of process manufacturing cost is critical, so far as dictating the use of process A or B?

7-16. The diameter of a ball bearing is a normally distributed random variable with mean μ and a standard deviation of 1. Specifications for the diameter are 6 ≤ X ≤ 8, and a ball bearing within these limits yields a profit of C dollars. However, if X < 6, then the profit is −R₁ dollars, or if X > 8, the profit is −R₂ dollars. Find the value of μ that maximizes the expected profit.

7-17. In the preceding exercise, find the optimum value of μ if R₁ = R₂ = R.

7-18. Use the results of Exercise 7-16 with C = $8.00, R₁ = $2.00, and R₂ = $4.00. What is the value of μ that maximizes the expected profit?

7-19. The Rockwell hardness of a particular alloy is normally distributed with a mean of 70 and a standard deviation of 4.
(a) If a specimen is acceptable only if its hardness is between 62 and 72, what is the probability that a randomly chosen specimen has an acceptable hardness?
(b) If the acceptable range of hardness were (70 − c, 70 + c), for what value of c would 95% of all specimens have acceptable hardness?
(c) Where the acceptable range is as in (a) and the hardness of each of nine randomly selected specimens is independently determined, what is the expected number of acceptable specimens among the nine specimens?

7-20. Prove that E(Zₙ) = 0 and V(Zₙ) = 1, where Zₙ is as defined in Theorem 7-2.

7-21. Let Xᵢ (i = 1, 2, ..., n) be independent and identically distributed random variables with mean μ and variance σ². Consider the sample mean,

X̄ = (1/n)(X₁ + X₂ + ··· + Xₙ) = (1/n) Σ_{i=1}^{n} Xᵢ.

Show that E(X̄) = μ and V(X̄) = σ²/n.

7-22. A shaft with an outside diameter (O.D.) that is N(1.20, 0.0016) is inserted into a sleeve bearing having an inside diameter (I.D.) that is N(1.25, 0.0009). Determine the probability of interference.

7-23. An assembly consists of three components placed side by side. The length of each component is normally distributed with a mean of 2 inches and a standard deviation of 0.2 inch. Specifications require that all assemblies be between 5.7 and 6.3 inches long. How many assemblies will pass these requirements?

7-24. Find the mean and variance of the linear combination

Y = X₁ + 2X₂ + X₃ + X₄,

where X₁ ~ N(4, 3), X₂ ~ N(4, 4), X₃ ~ N(2, 4), and X₄ ~ N(3, 2). What is the probability that 15 < Y < 20?

7-25. Round-off error has a uniform distribution on [−0.5, +0.5] and round-off errors are independent. A sum of 50 numbers is calculated where each is rounded before adding. What is the probability that the total round-off error exceeds 5?

7-26. One hundred small bolts are packed in a box. Each bolt weighs 1 ounce, with a standard deviation of 0.01 ounce. Find the probability that a box weighs more than 102 ounces.

7-27. An automatic machine is used to fill boxes with soap powder. Specifications require that the boxes weigh between 11.8 and 12.2 ounces. The only data available about machine performance concern the average content of groups of nine boxes. It is known that the average content is 11.9 ounces with a standard deviation of 0.05 ounce. What fraction of the boxes produced is defective? Where should the mean be located in order to minimize this fraction defective? Assume the weight is normally distributed.

7-28. A bus travels between two cities, but visits six intermediate cities on the route. The means and standard deviations of the travel times are as follows:
City Pairs   Mean Time (hours)   Standard Deviation (hours)
1-2          3                   0.4
2-3          4                   0.6
3-4          3                   0.3
4-5          5                   1.2
5-6          7                   0.9
6-7          5                   0.4
7-8          3                   0.4

What is the probability that the bus completes its journey within 32 hours?

7-29. A production process produces items, of which 8% are defective. A random sample of 200 items is selected every day and the number of defective items, say X, is counted. Using the normal approximation to the binomial, find the following:
(a) P(X < 16).
(b) P(X = 15).
(c) P(12 ≤ X ≤ 20).
(d) P(X = 14).

7-30. In a work-sampling study it is often desired to find the necessary number of observations. Given that p = 0.1, find the necessary n such that P(0.05 ≤ p̂ ≤ 0.15) = 0.95.

7-31. Use random numbers generated from your favorite computer package, or from scaling the random integers in Table XV of the Appendix by multiplying by 10⁻⁵, to generate six realizations of a N(100, 4) variable, using the following:
(a) The direct method.
(b) The approximate method.

7-32. Consider a linear combination Y = 3X₁ − 2X₂, where X₁ is N(10, 3) and X₂ is uniformly distributed on [0, 20]. Generate six realizations of the random variable Y, where X₁ and X₂ are independent.

7-33. If Z ~ N(0, 1), generate five realizations of Z².

7-34. If Y = ln X and Y ~ N(μ_Y, σ_Y²), develop a procedure for generating realizations of X.

7-35. If Y = X₁²/X₂, where X₁ ~ N(μ₁, σ₁²) and X₂ ~ N(μ₂, σ₂²), and where X₁ and X₂ are independent, develop a generator for producing realizations of Y.

7-36. The brightness of light bulbs is normally distributed with a mean of 2500 footcandles and a standard deviation of 50 footcandles. The bulbs are tested and all those brighter than 2600 footcandles are placed in a special high-quality lot. What is the probability distribution of the remaining bulbs? What is their expected brightness?

7-37. The random variable Y = ln X has a N(50, 25) distribution. Find the mean, variance, mode, and median of X.

7-38. Suppose independent random variables Y₁, Y₂, Y₃ are such that

Y₁ = ln X₁ ~ N(4, 1),
Y₂ = ln X₂ ~ N(3, 1),
Y₃ = ln X₃ ~ N(2, 0.5).

Find the mean and variance of W = e²X₁X₂^{0.5}X₃². Determine a set of specifications L and R such that P(L ≤ W ≤ R) = 0.90.

7-39. Show that the density function for a lognormally distributed random variable X is given by equation 7-24.

7-40. Consider the bivariate normal density

f(x₁, x₂) = A exp{ ... },   −∞ < x₁ < ∞, −∞ < x₂ < ∞,

where A is chosen so that f is a probability distribution. Are the random variables X₁ and X₂ independent? Define two new random variables ... . Show that the two new random variables are independent.

7-41. The life of a tube (X₁) and the filament diameter (X₂) are distributed as a bivariate normal random variable with the parameters μ₁ = 2000 hours, μ₂ = 0.10 inch, σ₁² = 2500 hours², σ₂² = 0.01 inch², and ρ = 0.87. The quality-control manager wishes to determine the life of each tube by measuring the filament diameter. If a filament diameter is 0.098, what is the probability that the tube will last 1950 hours?

7-42. A college professor has noticed that grades on each of two quizzes have a bivariate normal distribution with the parameters μ₁ = 75, μ₂ = 83, σ₁² = 25, σ₂² = 16, and ρ = 0.8. If a student receives a grade of 80 on the first quiz, what is the probability that she will do better on the second one? How is the answer affected by making ρ = −0.8?

7-43. Consider the surface y = f(x₁, x₂), where f is the bivariate normal density function.
(a) Prove that y = constant cuts the surface in an ellipse.
(b) Prove that y = constant with ρ = 0 and σ₁ = σ₂ cuts the surface as a circle.

7-44. Let X₁ and X₂ be independent random variables, each following a normal density with a mean of zero and variance σ². Find the distribution of

R = (X₁² + X₂²)^{1/2}.

The resulting distribution is known as the Rayleigh distribution and is frequently used to model the distribution of radial error in a plane. Hint: Let X₁ = R cos θ and X₂ = R sin θ. Obtain the joint probability distribution of R and θ, then integrate out θ.

7-45. Using a method similar to that in Exercise 7-44, obtain the distribution of

R = (X₁² + X₂² + ··· + Xₙ²)^{1/2},

where Xᵢ ~ N(0, σ²) and the Xᵢ are independent.

7-46. Let the independent random variables Xᵢ ~ N(0, σ²) for i = 1, 2. Find the probability distribution of

C = X₁/X₂.

We say that C follows the Cauchy distribution. Try to compute E(C).

7-47. Let X ~ N(0, 1). Find the probability distribution of Y = X². Y is said to follow the chi-square distribution with one degree of freedom. It is an important distribution in statistical methodology.

7-48. Let the independent random variables Xᵢ ~ N(0, 1) for i = 1, 2, ..., n. Show that the probability distribution of Y = Σ_{i=1}^{n} Xᵢ² follows a chi-square distribution with n degrees of freedom.

7-49. Let X ~ N(0, 1). Define a new random variable Y = |X|. Then find the probability distribution of Y. This is often called the half-normal distribution.
Chapter 8

Introduction to Statistics
and Data Description

8-1 THE FIELD OF STATISTICS


Statistics deals with the collection, presentation, analysis, and use of data to solve problems,
make decisions, develop estimates, and design and develop both products and procedures.
An understanding of basic statistics and statistical methods would be useful to anyone in
this information age; however, since engineers, scientists, and those working in manage-
ment science are routinely engaged with data, a knowledge of statistics and basic statistical
methods is particularly vital. In this intensely competitive, high-tech world economy of the
first decade of the twenty-first century, the expectations of consumers regarding product
quality, performance, and reliability have increased significantly over expectations of the
recent past. Furthermore, we have come to expect high levels of performance from logisti-
cal systems at all levels, and the operation as well as the refinement of these systems is
largely dependent on the collection and use of data. While those who are employed in var-
ious “service industries” deal with somewhat different problems, at a basic level they, too,
are collecting data to be used in solving problems and improving the “service” so as to
become more competitive in attracting market share.
Statistical methods are used to present, describe, and understand variability. In observ-
ing a variable value or several variable values repeatedly, where these values are assigned
to units by a process, we note that these repeated observations tend to yield different results.
While this chapter will deal with data presentation and description issues, the following
chapters will utilize the probability concepts developed in prior chapters to model and
develop an understanding of variability and to utilize this understanding in presenting infer-
ential statistics topics and methods.
Virtually all real-world processes exhibit variability. For example, consider situations
where we select several castings from a manufacturing process and measure a critical
dimension (such as a vane opening) on each part. If the measuring instrument has sufficient
resolution, the vane openings will be different (there will be variability in the dimension).
Alternatively, if we count the number of defects on printed circuit boards, we will find vari-
ability in the counts, as some boards will have few defects and others will have many
defects. This variability extends to all environments. There is variability in the thickness of
oxide coatings on silicon wafers, the hourly yield of a chemical process, the number of
errors on purchase orders, the flow time required to assemble an aircraft engine, and the
therms of natural gas billed to the residential customers of a distributing utility in a given
month.


Almost all experimental activity reflects similar variability in the data observed, and in
Chapter 12 we will employ statistical methods not only to analyze experimental data but
also to construct effective experimental designs for the study of processes.
Why does variability occur? Generally, variability is the result of changes in the con-
ditions under which observations are made. In a manufacturing context, these changes may
be differences in specimens of material, differences in the way people do the work, differ-
ences in process variables, such as temperature, pressure, or holding time, and differences
in environmental factors, such as relative humidity. Variability also occurs because of the
measurement system. For example, the measurement obtained from a scale may depend on
where the test item is placed on the pan. The process of selecting units for observation may
also cause variability. For example, suppose that a lot of 1000 integrated circuit chips has exactly 100 defective chips. If we inspected all 1000 chips, and if our inspection process were perfect (no inspection or measurement error), we would find all 100 defective chips. However, suppose that we select 50 chips. Now some of the chips selected will likely be defective, and we would expect the sample to be about 10% defective, but it could be 0% or 2% or 12% defective, depending on the specific chips selected.
The field of statistics consists of methods for describing and modeling variability, and
for making decisions when variability is present. In inferential statistics, we usually want
to make a decision about some population. The term population refers to the collection of
measurements on all elements of a universe about which we wish to draw conclusions or
make decisions. In this text, we make a distinction between the universe and the popula-
tions, in that the universe is composed of the set of elementary units or simply units, while
a population is the set of numerical or categorical variable values for one variable associ-
ated with each of the universe units. Obviously, there may be several populations associated
with a given universe. An example is the universe consisting of the residential class cus-
tomers of an electric power company whose accounts are active during part of the month of
August 2003. Example populations might be the set of energy consumption (kilowatt-hour)
values billed to these customers in the August 2003 bill, the set of customer demands (kilo-
watts) at the instant the company experiences the August peak demand, and the set made up
of dwelling category, such as single-family unattached, apartments, mobile home, etc.
Another example is a universe which consists of all the power supplies for a personal
computer manufactured by an electronics company during a given period. Suppose that the
manufacturer is interested specifically in the output voltage of each power supply. We may
think of the output voltage levels in the power supplies as such a population. In this case,
each population value is a numerical measurement, such as 5.10 or 5.24. The data in this
case would be referred to as measurement data. On the other hand, the manufacturer may
be interested in whether or not each power supply produced an output voltage that conforms
to the requirements. We may then visualize the population as consisting of attribute data,
in which each power supply is assigned a value of one if the unit is nonconforming and a
value of zero if it conforms to requirements. Both measurement data and attribute data are
called numerical data. Furthermore, it is convenient to consider measurement data as being
either continuous data or discrete data depending on the nature of the process assigning
values to unit variables. Yet another type of data is called categorical data. Examples are
gender, day of the week when the observation is taken, make of automobile, etc. Finally, we
have unit identifying data, which are alphanumeric and used to identify the universe and
sample units. These data might neither exist nor have statistical interpretation; however, in
some situations they are essential identifiers for universe and sample units. Examples would
be social security numbers, account numbers in a bank, VIN numbers for automobiles, and
serial numbers of cardiac pacemakers. In this book we will present techniques for dealing
with both measurement and attribute data; however, categorical data are also considered.

In most applications of statistics, the available data result from a sample of units
selected from a universe of interest, and these data reflect measurement or classification of
one or more variables associated with the sampled units. The sample is thus a subset of the
units, and the measurement or classification values of these units are subsets of the respec-
tive universe populations.
Figure 8-1 presents an overview of the data acquisition activity. It is convenient to think
of a process that produces units and assigns values to the variables associated with the units.
An example would be the manufacturing process for the power supplies. The power sup-
plies in this case are the units, and the output voltage values (perhaps along with other vari-
able values) may be thought of as being assigned by the process. Oftentimes, a probabilistic
model or a model with some probabilistic components is assumed to represent the value
assignment process. As indicated earlier, the set of units is referred to as the universe, and
this set may have a finite or an infinite membership of units. Furthermore, in some cases this
set exists only in concept; however, we can describe the elements of the set without enu-
merating them, as was the case with the sample space, S, associated with a random exper-
iment presented in Chapter 1. The set of values assigned or that may be assigned to a
specific variable becomes the population for that variable.
Now, we illustrate with a few additional examples that reflect several universe struc-
tures and different aspects of observing processes and population data. First, continuing
with the power supplies, consider the production from one specific shift that consists of 300
units, each having some voltage output. To begin, consider a sample of 10 units selected
from these 300 units and tested with voltage measurements, as previously described for
sample units. The universe here is finite, and the set of voltage values for these universe
units (the population) is thus finite. If our interest is simply to describe the sample results,
we have done that by enumeration of the results, and both graphical and quantitative meth-
ods for further description are given in the following sections of this chapter. On the other

Figure 8-1 From process to data. (The schematic shows unit generation, the assignment of variable values to units, the universe units, and the sample units selected for observation.)

hand, if we wish to employ the sample results to draw statistical inference about the popu-
lation consisting of 300 voltage values, this is called an enumerative study, and careful
attention must be given to the method of sample selection if valid inference is to be made
about the population. The key to much of what may be accomplished in such a study
involves either simple or more complex forms of random sampling or at least probability
sampling. While these concepts will be developed further in the next chapter, it is noted at
this point that the application of simple random sampling from a finite universe results in
an equal inclusion probability for each unit of the universe. In the case of a finite universe,
where the sampling is done without replacement and the sample size, usually denoted n, is
the same as the universe size, N, then this sample is called a census.
Suppose our interest lies not in the voltage values for units produced in this specific
shift but rather in the process or process variable assignment model. Not only must great
care be given to sampling methods, we must also make assumptions about the stability of
the process and the structure of the process model during the period of sampling. In
essence, the universe unit variable values may be thought of as a realization of the process,
and a random sample from such a universe, measuring voltage values, is equivalent to a ran-
dom sample on the process or process model voltage variable or population. With our
example, we might assume that unit voltage, E, is distributed as N(μ, σ²) during sample
selection. This is often called an analytic study, and our objective might be, for example, to
estimate μ and/or σ². Once again, random sampling is to be rigorously defined in the fol-
lowing chapter.
Even though a universe sometimes exists only in concept, defined by a verbal descrip-
tion of the membership, it is generally useful to think carefully about the entire process of
observation activity and the conceptual universe. Consider an example where the ingredients
of concrete specimens have been specified to achieve high strength with early cure time. A
batch is formulated, five specimens are poured into cylindrical molds, and these are cured
according to test specifications. Following the cure, these test cylinders are subjected to lon-
gitudinal load until rupture, at which point the rupture strength is recorded. These test units,
with the resulting data, are considered sample data from the strength measurements associ-
ated with the universe units (each with an assigned population value) that might have been,
but were never actually, produced. Again, as in the prior example, inference often relates to
the process or process variable assignment model, and model assumptions may prove cru-
cial to attaining meaningful inference in this analytic study.
Finally, consider measurements taken on a fluid in order to measure some variable or
characteristic. Specimens are drawn from well-mixed (we hope) fluid, with each specimen
placed in a specimen container. Some difficulty arises in universe identification. A conven-
tion here is to again view the universe as made up of all possible specimens that might have
been selected. Similarly, an alternate view may be taken that the sample values represent a
realization of the process value assignment model. Getting meaningful results from such an
analytic study once again requires close attention to sampling methods and to process or
process model assumptions.
Descriptive statistics is the branch of statistics that deals with organization, summa-
rization, and presentation of data. Many of the techniques of descriptive statistics have been
in use for over 200 years, with origins in surveys and census activity. Modern computer
technology, particularly computer graphics, has greatly expanded the field of descriptive
statistics in recent years. The techniques of descriptive statistics can be applied either to
entire finite populations or to samples, and these methods and techniques are illustrated in
the following sections of this chapter. A wide selection of software is available, ranging in
focus, sophistication, and generality from simple spreadsheet functions such as those found
in Microsoft Excel®, to the more-comprehensive but “user friendly” Minitab®, to large,
comprehensive, flexible systems such as SAS. Many other options are also available.

In enumerative and analytic studies, the objective is to make a conclusion or draw
inference about a finite population or about a process or variable assignment model.
This activity is called inferential statistics, and most of the techniques and methods
employed have been developed within the past 90 years. Subsequent chapters will focus on
these topics.

8-2 DATA
Data are collected and stored electronically as well as by human observation using tradi-
tional records or files. Formats differ depending on the observer or observational process
and reflect individual preference and ease of recording. In large-scale studies, where unit
identification exists, data must often be obtained by stripping the required information
from several files and merging it into a file format suitable for statistical analysis.
A table or spreadsheet-type format is often convenient, and it is also compatible with
most software analysis systems. Rows are typically assigned to units observed, and columns
present numerical or categorical data on one or more variables. Furthermore, a column may
be assigned for a sequence or order index as well as for other unit identifying data. In the
case of the power supply voltage measurements, no order or sequence was intended so that
any permutation of the 10 data elements is the same. In other contexts, for example where
the index relates to the time of observation, the position of the element in the sequence may
be quite important. A common practice in both situations described is to employ an index,
say i = 1, 2, ..., n, to serve as a unit identifier where n units are observed. Also, an alpha-
betical character is usually employed to represent a given variable value; thus if e_i equals the
value of the ith voltage measurement in the example, e_1 = 5.10, e_2 = 5.24, e_3 = 5.14, ...,
e_10 = 5.11. In a table format for these data there would be 10 rows (11 if a headings row is
employed) and one or two columns, depending on whether the index is included and pre-
sented in a column.
Where several variables and/or categories are to be associated with each unit, each of
these may be assigned a column, as shown in Table 8-1 (from Montgomery and Runger,
2003) which presents measurements of pull strength, wire length, and die height made on
each of 25 sampled units in a semiconductor manufacturing facility. In situations where cat-
egorical classification is involved, a column is provided for each categorical variable and a
category code or identifier is recorded for the unit. An example is shift of manufacture with
identifiers D, N, G for day, night, and graveyard shifts, respectively.

8-3 GRAPHICAL PRESENTATION OF DATA


In this section we will present a few of the many graphical and tabular methods for sum-
marizing and displaying data. In recent years, the availability of computer graphics has
resulted in a rapid expansion of visual displays for observational data.

8-3.1 Numerical Data: Dot Plots and Scatter Plots


When we are concerned with one of the variables associated with the observed units, the
data are often called univariate data, and dot plots provide a simple, attractive display that
reflects spread, extremes, centering, and voids or gaps in the data. A horizontal line is
scaled so that the range of data values is accommodated. Each observation is then plotted
as a dot directly above this scaled line, and where multiple observations have the same
value, the dots are simply stacked vertically at that scale point. Where the number of units
is relatively small, say n < 30, or where there are relatively few distinct values represented

Table 8-1 Wire Bond Pull Strength Data

Observation Number   Pull Strength (y)   Wire Length (x1)   Die Height (x2)
 1     9.95    2    50
 2    24.45    8   110
 3    31.75   11   120
 4    35.00   10   550
 5    25.02    8   295
 6    16.86    4   200
 7    14.38    2   375
 8     9.60    2    52
 9    24.35    9   100
10    27.50    8   300
11    17.08    4   412
12    37.00   11   400
13    41.95   12   500
14    11.66    2   360
15    21.65    4   205
16    17.89    4   400
17    69.00   20   600
18    10.30    1   585
19    34.93   10   540
20    46.59   15   250
21    44.88   15   290
22    54.12   16   510
23    56.63   17   590
24    22.13    6   100
25    21.15    5   400

in the data set, dot plots are effective displays. Figure 8-2 shows the univariate dot plots or
marginal dot plots for each of the three variables with data presented in Table 8-1. These
plots were produced using Minitab®.
Where we wish to jointly display results for two variables, the bivariate equivalent of
the dot plot is called a scatter plot. We construct a simple rectangular coordinate graph,
assigning the horizontal axis to one of the variables and the vertical axis to the other. Each
observation is then plotted as a point in this plane. Figure 8-3 presents scatter plots for pull
strength vs. wire length and for pull strength vs. die height for the data in Table 8-1. In order
to accommodate data pairs that are identical, and thus that fall at the same point on the
plane, one convention is to employ alphabet characters as plot symbols; so A is the dis-
played plot point where one data point falls at a specific point on the plane, B is the dis-
played plot point if two fall at a specific point of the plane, etc. Another approach for this,
which is useful where values are close but not identical, is to assign a randomly generated,
small, positive or negative quantity sometimes called jitter to one or both variables in order
to make the plots better displays. While scatter plots show the region of the plane where the
data points fall, as well as the data density associated with this region, they also suggest
possible association between the variables. Finally, we note that the usefulness of these
plots is not limited to small data sets.
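As a sketch of the jitter idea, the following Python fragment (assuming the matplotlib plotting library is available; the jitter magnitude of ±0.15 and the seed are arbitrary choices) adds small random offsets to the wire length values of Table 8-1, which contain many repeats, before plotting them against pull strength:

import random
import matplotlib.pyplot as plt

# Wire length (x1) and pull strength (y) from Table 8-1.
wire_length = [2, 8, 11, 10, 8, 4, 2, 2, 9, 8, 4, 11, 12,
               2, 4, 4, 20, 1, 10, 15, 15, 16, 17, 6, 5]
pull_strength = [9.95, 24.45, 31.75, 35.00, 25.02, 16.86, 14.38,
                 9.60, 24.35, 27.50, 17.08, 37.00, 41.95, 11.66,
                 21.65, 17.89, 69.00, 10.30, 34.93, 46.59, 44.88,
                 54.12, 56.63, 22.13, 21.15]

random.seed(2)  # arbitrary seed
# Jitter separates coincident points without materially moving them.
jittered_x = [x + random.uniform(-0.15, 0.15) for x in wire_length]

plt.scatter(jittered_x, pull_strength)
plt.xlabel("Wire length (jittered)")
plt.ylabel("Pull strength")
plt.show()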

Figure 8-2 Dot plots for pull strength, wire length, and die height.

Figure 8-3 Scatter plots for pull strength vs. wire length and for pull strength vs. die height (from Minitab®).

To extend the dimensionality and graphically display the joint data pattern for three
variables, a three-dimensional scatter plot may be employed, as illustrated in Figure 8-4 for
the data in Table 8-1. Another option for a display, not illustrated here, is the bubble plot,
which is presented in two dimensions, with the third variable reflected in the dot (now
called bubble) diameter that is assigned to be proportional to the magnitude of the third
variable. As was the case with scatter plots, these plots also suggest possible associations
between the variables involved.

8-3.2 Numerical Data: The Frequency Distribution and Histogram


Consider the data in Table 8-2. These data are the strengths in pounds per square inch (psi)
of 100 glass, nonreturnable 1-liter soft drink bottles. These observations were obtained by
testing each bottle until failure occurred. The data were recorded in the order in which the
bottles were tested, and in this format they do not convey very much information about
bursting strength of the bottles. Questions such as “what is the average bursting strength?”

Figure 8-4 Three-dimensional scatter plot for pull strength, wire length, and die height.

Table 8-2 Bursting Strength in Pounds per Square Inch for 100 Glass, 1-Liter, Nonreturnable Soft Drink Bottles

265 197 346 280 265 200 221 265 261 278
205 286 317 242 254 235 176 262 248 250
263 274 242 260 281 246 248 271 260 265
307 243 258 321 294 328 263 245 274 270
220 231 276 228 223 296 231 301 337 298
268 267 300 250 260 276 334 280 250 257
260 281 208 299 308 264 280 274 278 210
234 265 187 258 235 269 265 253 254 280
299 214 264 267 283 235 272 287 274 269
215 318 271 293 277 290 283 258 275 251

or "what percentage of the bottles burst below 230 psi?" are not easy to answer when the
data are presented in this form.
A frequency distribution is a more useful summary of data than the simple enumera-
tion given in Table 8-2. To construct a frequency distribution, we must divide the range of
the data into intervals, which are usually called class intervals. If possible, the class inter-
vals should be of equal width, to enhance the visual information in the frequency distribu-
tion. Some judgment must be used in selecting the number of class intervals in order to give
a reasonable display. The number of class intervals used depends on the number of obser-
vations and the amount of scatter or dispersion in the data. A frequency distribution that
uses either too few or too many class intervals will not be very informative. We generally
find that between 5 and 20 intervals is satisfactory in most cases, and that the number of
class intervals should increase with n. Choosing a number of class intervals approximately
equal to the square root of the number of observations often works well in practice.
A frequency distribution for the bursting strength data in Table 8-2 is shown in Table
8-3. Since the data set contains 100 observations, we suspect that about √100 = 10 class
intervals will give a satisfactory frequency distribution. The largest and smallest data val-
ues are 346 and 176, respectively, so the class intervals must cover at least 346 — 176 = 170
psi units on the scale. If we want the lower limit for the first interval to begin slightly below
the smallest data value and the upper limit for the last cell to be slightly above the largest
data value, then we might start the frequency distribution at 170 and end it at 350. This is
an interval of 180 psi units. Nine class intervals, each of width 20 psi, gives a reasonable
frequency distribution, and the frequency distribution in Table 8-3 is thus based on nine
class intervals.

Table 8-3 Frequency Distribution for the Bursting Strength Data in Table 8-2

Class Interval (psi)   Tally                                Frequency   Relative Frequency   Cumulative Relative Frequency
170 ≤ x < 190          ||                                        2           0.02                 0.02
190 ≤ x < 210          ||||                                      4           0.04                 0.06
210 ≤ x < 230          |||| ||                                   7           0.07                 0.13
230 ≤ x < 250          |||| |||| |||                            13           0.13                 0.26
250 ≤ x < 270          |||| |||| |||| |||| |||| |||| ||         32           0.32                 0.58
270 ≤ x < 290          |||| |||| |||| |||| ||||                 24           0.24                 0.82
290 ≤ x < 310          |||| |||| |                              11           0.11                 0.93
310 ≤ x < 330          ||||                                      4           0.04                 0.97
330 ≤ x < 350          |||                                       3           0.03                 1.00
                                                                100           1.00

The fourth column in Table 8-3 contains the relative frequency distribution. The rela-
tive frequencies are found by dividing the observed frequency in each class interval by the
total number of observations. The last column in Table 8-3 expresses the relative frequen-
cies on a cumulative basis. Frequency distributions are often easier to interpret than tables
of data. For example, from Table 8-3 it is very easy to see that most of the bottles burst
between 230 and 290 psi, and that 13% of the bottles burst below 230 psi.
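A minimal Python sketch of this tallying procedure is shown below; for brevity only the first row of Table 8-2 is listed, but applying the function to all 100 observations reproduces the frequencies of Table 8-3:

def frequency_distribution(data, start, stop, width):
    # Tally observations into equal-width class intervals [lower, upper)
    # and report the frequency, relative frequency, and cumulative
    # relative frequency for each interval.
    n = len(data)
    cumulative = 0.0
    for lower in range(start, stop, width):
        upper = lower + width
        freq = sum(1 for x in data if lower <= x < upper)
        rel = freq / n
        cumulative += rel
        print(f"{lower} <= x < {upper}: {freq:3d}  {rel:.2f}  {cumulative:.2f}")

strengths = [265, 197, 346, 280, 265, 200, 221, 265, 261, 278]
frequency_distribution(strengths, 170, 350, 20)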
It is also helpful to present the frequency distribution in graphical form, as shown in
Fig. 8-5. Such a display is called a histogram. To draw a histogram, use the horizontal axis
to represent the measurement scale and draw the boundaries of the class intervals. The ver-
tical axis represents the frequency (or relative frequency) scale. If the class intervals are of
equal width, then the heights of the rectangles drawn on the histogram are proportional to
the frequencies. If the class intervals are of unequal width, then it is customary to draw rec-
tangles whose areas are proportional to the frequencies. In this case, the result is called a

Figure 8-5 Histogram of bursting strength for 100 glass, 1-liter, nonreturnable soft drink bottles.

density histogram. For a histogram displaying relative frequency on the vertical axis, the
rectangle heights are calculated as

$$\text{rectangle height} = \frac{\text{class relative frequency}}{\text{class width}}.$$
When we find there are multiple empty class intervals after grouping data into equal-width
intervals, one option is to merge the empty intervals with contiguous intervals, thus creat-
ing some wider intervals. The density histogram resulting from this may produce a more
attractive display. However, histograms are easier to interpret when the class intervals are
of equal width. The histogram provides a visual impression of the shape of the distribution
of the measurements, as well as information about centering and the scatter or dispersion
of the data.
In passing from the original data to either a frequency distribution or a histogram, a cer-
tain amount of information has been lost in that we no longer have the individual observa-
tions. On the other hand, this information loss is small compared to the ease of
interpretation gained in using the frequency distribution and histogram. In cases where the
data assume only a few distinct values, a dot plot is perhaps a better graphical display.
Where observed data are of a discrete nature, such as is found in counting processes,
then two choices are available for constructing a histogram. One option is to center the rec-
tangles on the integers reflected in the count data, and the other is to collapse the rectangle
into a vertical line placed directly over these integers. In both cases, the height of the rec-
tangle or the length of the line is either the frequency or relative frequency of the occurrence
of the value in question.
In summary, the histogram is a very useful graphic display. A histogram can give the
decision maker a good understanding of the data and is very useful in displaying the shape,
location, and variability of the data. However, the histogram does not allow individual data
points to be identified, because all observations falling in a cell are indistinguishable.

8-3.3 The Stem-and-Leaf Plot


Suppose that the data are represented by x_1, x_2, ..., x_n and that each number x_i consists of
at least two digits. To construct a stem-and-leaf plot, we divide each number x_i into two
parts: a stem, consisting of one or more of the leading digits, and a leaf, consisting of the
remaining digits. For example, if the data consist of percentage-defective values
between 0 and 100 on lots of semiconductor wafers, then we could divide the value 76 into
the stem 7 and the leaf 6. In general, we should choose relatively few stems in comparison
with the number of observations. It is usually best to choose between 5 and 20 stems. Once
a set of stems has been chosen, they are listed along the left-hand margin of the display, and
beside each stem all leaves corresponding to the observed data values are listed in the order
in which they are encountered in the data set.
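A minimal Python sketch of this construction, assuming two- or three-digit observations so that the leaf is the final digit, is given below (illustrated with the first row of Table 8-2):

from collections import defaultdict

def stem_and_leaf(data):
    # Stem = all but the last digit; leaf = the last digit.
    # Leaves are listed in the order encountered in the data set.
    plot = defaultdict(list)
    for x in data:
        plot[x // 10].append(x % 10)
    for stem in sorted(plot):
        leaves = ",".join(str(leaf) for leaf in plot[stem])
        print(f"{stem:3d} | {leaves}")

stem_and_leaf([265, 197, 346, 280, 265, 200, 221, 265, 261, 278])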

Example 8-1
To illustrate the construction of a stem-and-leaf plot, consider the bottle-bursting-strength data in
Table 8-2. To construct a stem-and-leaf plot, we select as stem values the numbers 17, 18, 19, ..., 34.
The resulting stem-and-leaf plot is presented in Fig. 8-6. Inspection of this display immediately
reveals that most of the bursting strengths lie between 220 and 330 psi, and that the central value is
somewhere between 260 and 270 psi. Furthermore, the bursting strengths are distributed approxi-
mately symmetrically about the central value. Therefore, the stem-and-leaf plot, like the histogram,
allows us to determine quickly some important features of the data that were not immediately obvi-
ous in the original display, Table 8-2. Note that here the original numbers are not lost, as occurs in a

17 | 6 | 1
18 | 7 | 1
19 | 7 | 1
20 | 0,5,8 | 3
21 | 0,4,5 | 3
22 | 1,0,8,3 | 4
23 | 5,1,1,4,5,5 | 6
24 | 2,8,2,6,8,3,5 | 7
25 | 4,0,8,0,0,7,8,3,4,8,1 | 11
26 | 5,5,5,1,2,3,0,0,5,3,8,7,0,0,4,5,9,5,4,7,9 | 21
27 | 8,4,1,4,0,6,6,4,8,2,4,1,7,5 | 14
28 | 0,6,1,0,1,0,0,3,7,3 | 10
29 | 4,6,8,9,9,3,0 | 7
30 | 7,1,0,8 | 4
31 | 7,8 | 2
32 | 1,8 | 2
33 | 7,4 | 2
34 | 6 | 1
Total: 100
Figure 8-6 Stem-and-leaf plot for the bottle-bursting-strength data in Table 8-2.

histogram. Sometimes, in order to assist in finding percentiles, we order the leaves by magnitude, pro-
ducing an ordered stem-and-leaf plot, as in Fig. 8-7. For instance, since n = 100 is an even number,
the median, or "middle" observation (see Section 8-4.1), is the average of the two observations with
ranks 50 and 51, or

x̃ = (265 + 265)/2 = 265.

The tenth percentile is the observation with rank (0.1)(100) + 0.5 = 10.5 (halfway between the 10th and
11th observations), or (220 + 221)/2 = 220.5. The first quartile is the observation with rank (0.25)(100)
+ 0.5 = 25.5 (halfway between the 25th and 26th observations), or (248 + 248)/2 = 248, and the third
quartile is the observation with rank (0.75)(100) + 0.5 = 75.5 (halfway between the 75th and 76th
observations), or (280 + 280)/2 = 280. The first and third quartiles are occasionally denoted by the sym-
bols Q1 and Q3, respectively, and the interquartile range IQR = Q3 − Q1 may be used as a measure of
variability. For the bottle-bursting-strength data, the interquartile range is IQR = Q3 − Q1 = 280 − 248
= 32. The stem-and-leaf displays in Figs. 8-6 and 8-7 are equivalent to a histogram with 18 class inter-
vals. In some situations, it may be desirable to provide more classes or stems. One way to do this would
be to modify the original stems as follows: divide stem 5 (say) into two new stems, 5* and 5•. Stem 5*
has leaves 0, 1, 2, 3, and 4, and stem 5• has leaves 5, 6, 7, 8, and 9. This will double the number of orig-
inal stems. We could increase the number of original stems by five by defining five new stems: 5* with
leaves 0 and 1, 5t (for twos and threes) with leaves 2 and 3, 5f (for fours and fives) with leaves 4 and
5, 5s (for sixes and sevens) with leaves 6 and 7, and 5e with leaves 8 and 9.
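The rank convention used in this example (rank = pn + 0.5, averaging the two neighboring order statistics when the rank falls halfway between them) can be sketched in Python as follows; the function name is ours, and the code assumes the computed rank lies within the data:

def percentile(sorted_data, p):
    # Rank convention of Example 8-1: rank = p*n + 0.5.  A whole-number
    # rank picks that order statistic; a half-integer rank averages the
    # two neighboring order statistics.
    n = len(sorted_data)
    rank = p * n + 0.5
    lower = int(rank)
    if rank == lower:
        return sorted_data[lower - 1]
    return (sorted_data[lower - 1] + sorted_data[lower]) / 2

data = sorted([265, 197, 346, 280, 265, 200, 221, 265, 261, 278])
print(percentile(data, 0.50))  # median: (265 + 265)/2 = 265.0
print(percentile(data, 0.25))  # first quartile of these 10 values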

8-3.4 The Box Plot


A box plot displays the three quartiles, the minimum, and the maximum of the data on a
rectangular box, aligned either horizontally or vertically. The box encloses the interquartile
range with the left (or lower) line at the first quartile Q1 and the right (or upper) line at the
third quartile Q3. A line is drawn through the box at the second quartile (which is the 50th

17 | 6 | 1
18 | 7 | 1
19 | 7 | 1
20 | 0,5,8 | 3
21 | 0,4,5 | 3
22 | 0,1,3,8 | 4
23 | 1,1,4,5,5,5 | 6
24 | 2,2,3,5,6,8,8 | 7
25 | 0,0,0,1,3,4,4,7,8,8,8 | 11
26 | 0,0,0,0,1,2,3,3,4,4,5,5,5,5,5,5,7,7,8,9,9 | 21
27 | 0,1,1,2,4,4,4,4,5,6,6,7,8,8 | 14
28 | 0,0,0,0,1,1,3,3,6,7 | 10
29 | 0,3,4,6,8,9,9 | 7
30 | 0,1,7,8 | 4
31 | 7,8 | 2
32 | 1,8 | 2
33 | 4,7 | 2
34 | 6 | 1
Total: 100

Figure 8-7 Ordered stem-and-leaf plot for the bottle-bursting-strength data.

percentile or the median) Q2 = x̃. A line at either end extends to the extreme values. These
lines, sometimes called whiskers, may extend only to the 10th and 90th percentiles or the
5th and 95th percentiles in large data sets. Some authors refer to the box plot as the box-
and-whisker plot. Figure 8-8 presents the box plot for the bottle-bursting-strength data. This
box plot indicates that the distribution of bursting strengths is fairly symmetric around the
central value, because the left and right whiskers and the lengths of the left and right boxes
around the median are about the same.
The box plot is useful in comparing two or more samples. To illustrate, consider the
data in Table 8-4. The data, taken from Messina (1987), represent viscosity readings on
three different mixtures of raw material used on a manufacturing line. One of the objectives
of the study that Messina discusses is to compare the three mixtures. Figure 8-9 presents the
box plots for the viscosity data. This display permits easy interpretation of the data. Mix-
ture 1 has higher viscosity than mixture 2, and mixture 2 has higher viscosity than mixture
3. The distribution of viscosity is not symmetric, and the maximum viscosity reading from
mixture 3 seems unusually large in comparison to the other readings. This observation may
be an outlier, and it possibly warrants further examination and analysis.

Figure 8-8 Box plot for the bottle-bursting-strength data (minimum 176, Q1 = 248, median 265, Q3 = 280, maximum 346).

Table 8-4 Viscosity Measurements for Three Mixtures

Mixture 1   Mixture 2   Mixture 3
22.02       21.49       20.33
23.83       22.67       21.67
26.67       24.62       24.67
25.38       24.18       22.45
25.49       22.78       22.28
23.50       22.56       21.95
25.90       24.46       20.49
24.98       23.79       21.81

Figure 8-9 Box plots for the mixture-viscosity data in Table 8-4.
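A sketch of this comparison in Python, assuming the matplotlib plotting library is available, is shown below:

import matplotlib.pyplot as plt

# Viscosity readings for the three mixtures (Table 8-4).
mixture1 = [22.02, 23.83, 26.67, 25.38, 25.49, 23.50, 25.90, 24.98]
mixture2 = [21.49, 22.67, 24.62, 24.18, 22.78, 22.56, 24.46, 23.79]
mixture3 = [20.33, 21.67, 24.67, 22.45, 22.28, 21.95, 20.49, 21.81]

plt.boxplot([mixture1, mixture2, mixture3])
plt.xticks([1, 2, 3], ["1", "2", "3"])
plt.xlabel("Mixture")
plt.ylabel("Viscosity (centipoise)")
plt.show()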

8-3.5 The Pareto Chart


A Pareto diagram is a bar graph for count data. It displays the frequency of each count on
the vertical axis and the category of classification on the horizontal axis. We always arrange
the categories in descending order of frequency of occurrence; that is, the most frequently
occurring is on the left, followed by the next most frequently occurring type, and so on.
Figure 8-10 presents a Pareto diagram for the production of transport aircraft by the
Boeing Commercial Airplane Company in the year 2000. Notice that the 737 was the most
popular model, followed by the 777, the 757, the 767, the 717, the 747, the MD-11, and the
MD-90. The line on the Pareto chart connects the cumulative percentages of the k most fre-
quently produced models (k = 1, 2, ..., 8). In this example, the two most frequently pro-
duced models account for approximately 69% of the total airplanes manufactured in 2000.
One feature of these charts is that the horizontal scale is not necessarily numeric. Usually,
categorical classifications are employed, as in the airplane production example.
The Pareto chart is named for an Italian economist who theorized that in certain
economies the majority of the wealth is held by a minority of the people. In count data, the
"Pareto principle" frequently occurs, hence the name for the chart.
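The arithmetic behind such a chart (sorting the categories by descending count and accumulating percentages) can be sketched in Python, here using the production counts from Figure 8-10:

# Airplanes produced in 2000, by model (Figure 8-10).
counts = {"737": 281, "777": 55, "757": 45, "767": 44,
          "717": 32, "747": 25, "MD-11": 4, "MD-90": 3}

total = sum(counts.values())
cumulative = 0.0
# Categories are arranged in descending order of frequency.
for model, count in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    pct = 100.0 * count / total
    cumulative += pct
    print(f"{model:6s} {count:4d} {pct:5.1f} {cumulative:6.1f}")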
Figure 8-10 Airplane production in 2000. (Source: Boeing Commercial Airplane Company.)

Model:    737   777   757   767   717   747  MD-11  MD-90
Count:    281    55    45    44    32    25      4      3
Percent: 57.5  11.2   9.2   9.0   6.5   5.1    0.8    0.6
Cum %:   57.5  68.7  77.9  86.9  93.5  98.6   99.4  100.0

Pareto charts are very useful in the analysis of defect data in manufacturing systems. Fig-
ure 8-11 presents a Pareto chart showing the frequency with which various types of defects

occur on metal parts used in a structural component of an automobile door frame. Notice how
the Pareto chart highlights the relatively few types of defects that are responsible for most of
the observed defects in the part. The Pareto chart is an important part of a quality-improve-
ment program because it allows management and engineering to focus attention on the most
critical defects in a product or process. Once these critical defects are identified, corrective
actions to reduce or eliminate these defects must be developed and implemented. This is eas-
ier to do, however, when we are sure that we are attacking a legitimate problem: it is much
" easier to reduce oreliminate frequently occurring defects than rare ones.

Figure 8-11 Pareto chart of defects in door structural elements.



8-3.6 Time Plots

Virtually everyone should be familiar with time plots, since we view them daily in media
presentations. Examples are historical temperature profiles for a given city, the closing
Dow Jones Industrials Index for each trading day, each month, each quarter, etc., and the
plot of the yearly Consumer Price Index for all urban consumers published by the Bureau
of Labor Statistics. Many other time-oriented data are routinely gathered to support infer-
ential statistical activity. Consider the electric power demand, measured in kilowatts, for a
given office building and presented as hourly data for each of the 24 hours of the day in
which the supplying utility experiences a summer peak demand from all customers.
Demand data such as these are gathered using a time of use or “load research” meter. In
concept, a kilowatt is a continuous variable. The meter sampling interval is very short, how-
ever, and the hourly data that are obtained are truly averages over the meter sampling inter-
vals contained within each hour.
Usually with time plots, time is represented on the horizontal axis, and the vertical
scale is calibrated to accommodate the range of values represented in the observational
results. The kilowatt hourly demand data are displayed in Fig. 8-12. Ordinarily, when series
like these display averages over a time interval, the variation displayed in the data is a func-
tion of the length of the averaging interval, with shorter intervals producing more variabil-
ity. For example, in the case of the kilowatt data, if we used a 15-minute interval, there
would be 96 points to plot, and the variation in data would appear much greater.

8-4 NUMERICAL DESCRIPTION OF DATA


Just as graphs can improve the display of data, numerical descriptions are also of value. In
this section, we present several important numerical measures for describing the character-
istics of data.

8-4.1 Measures of Central Tendency


The most common measure of central tendency, or location, of the data is the ordinary arith-
metic mean. Because we usually think of the data as being obtained from a sample of units,

Figure 8-12 Summer peak day hourly kilowatt (kW) demand data for an office building.

we will refer to the arithmetic mean as the sample mean. If the observations in a sample of
size n are x_1, x_2, ..., x_n, then the sample mean is

$$\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n} = \frac{1}{n}\sum_{i=1}^{n} x_i. \qquad \text{(8-1)}$$

For the bursting-strength data in Table 8-2, the sample mean is

$$\bar{x} = \frac{1}{100}\sum_{i=1}^{100} x_i = \frac{26{,}406}{100} = 264.06.$$
From examination of Fig. 8-5, it seems that the sample mean 264.06 psi is a “typical” value
of bursting strength, since it occurs near the middle of the data, where the observations are
concentrated. However, this impression can be misleading: Suppose that the histogram
looked like Fig. 8-13. The mean of these data is still a measure of central tendency, but it
does not necessarily imply that most of the observations are concentrated around it. In gen-
eral, if we think of the observations as having unit mass, the sample mean is just the center
of mass of the data. This implies that the histogram will just exactly balance if it is sup-
ported at the sample mean.
The sample mean x̄ represents the average value of all the observations in the sample.
We can also think of calculating the average value of all the observations in a finite popu-
lation. This average is called the population mean, and as we have seen in previous chap-
ters, it is denoted by the Greek letter μ. When there are a finite number of possible
observations (say N) in the population, then the population mean is

$$\mu = \frac{T_x}{N}, \qquad \text{(8-2)}$$

where T_x = Σ_{i=1}^{N} x_i is the finite population total for the population.
In the following chapters dealing with statistical inference, we will present methods for
making inferences about the population mean that are based on the sample mean. For exam-
ple, we will use the sample mean as a point estimate of μ.
Another measure of central tendency is the median, or the point at which the sample is
divided into two equal halves. Let x_(1), x_(2), ..., x_(n) denote a sample arranged in increasing
order of magnitude; that is, x_(1) denotes the smallest observation, x_(2) denotes the second
Figure 8-13 A histogram.



smallest observation, ..., and x_(n) denotes the largest observation. Then the median is
defined mathematically as

$$\tilde{x} = \begin{cases} x_{((n+1)/2)}, & n \text{ odd}, \\[4pt] \dfrac{x_{(n/2)} + x_{(n/2+1)}}{2}, & n \text{ even}. \end{cases} \qquad \text{(8-3)}$$

The median has the advantage that it is not influenced very much by extreme values. For
example, suppose that the sample observations are

1, 3, 4, 2, 7, 6, and 8.

The sample mean is 4.43, and the sample median is 4. Both quantities give a reasonable
measure of the central tendency of the data. Now suppose that the next-to-last observation
is changed, so that the data are

1, 3, 4, 2, 7, 2519, and 8.

For these data, the sample mean is 363.43. Clearly, in this case the sample mean does not
tell us very much about the central tendency of most of the data. The median, however, is
still 4, and this is probably a much more meaningful measure of central tendency for the
majority of the observations.
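A short Python sketch (the helper functions are ours) reproduces this behavior:

def mean(data):
    return sum(data) / len(data)

def median(data):
    s = sorted(data)
    n = len(s)
    if n % 2 == 1:
        return s[n // 2]
    return (s[n // 2 - 1] + s[n // 2]) / 2

sample = [1, 3, 4, 2, 7, 6, 8]
print(mean(sample), median(sample))   # approximately 4.43 and 4
sample[5] = 2519                      # change the next-to-last observation
print(mean(sample), median(sample))   # approximately 363.43, still 4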
Just as x̃ is the middle value in a sample, there is a middle value in the population. We
define μ̃ as the median of the population; that is, μ̃ is a value of the associated random vari-
able such that half the population lies below μ̃ and half lies above.
The mode is the observation that occurs most frequently in the sample. For example,
the mode of the sample data

2, 4, 6, 2, 5, 6, 2, 9, 4, 5, 2, and 1

is 2, since it occurs four times, and no other value occurs as often. There may be more than
one mode.
If the data are symmetric, then the mean and median coincide. If, in addition, the data
have only one mode (we say the data are unimodal), then the mean, median, and mode may
all coincide. If the data are skewed (asymmetric, with a long tail to one side), then the mean,
median, and mode will not coincide. Usually we find that mode < median < mean if the dis-
tribution is skewed to the right, while mode > median > mean if the distribution is skewed
to the left (see Fig. 8-14).
The distribution of the sample mean is well-known and relatively easy to work with.
Furthermore, the sample mean is usually more stable than the sample median, in the sense
that it does not vary as much from sample to sample. Consequently, many analytical statis-
tical techniques use the sample mean. However, the median and mode may also be helpful
descriptive measures.

Figure 8-14 The mean and median for symmetric and skewed distributions: (a) negative or left skew, (b) symmetric, (c) positive or right skew.

8-4.2 Measures of Dispersion


Central tendency does not necessarily provide enough information to describe data ade-
quately. For example, consider the bursting strengths obtained from two samples of six bot-
tles each:
Sample 1: 230 250 245 258 265 240
Sample 2: 190 228 305 240 265 260
The mean of both samples is 248 psi. However, note that the scatter or dispersion of Sam-
ple 2 is much greater than that of Sample 1 (see Fig. 8-15). In this section, we define sev-
eral widely used measures of dispersion.
The most important measure of dispersion is the sample variance. If x_1, x_2, ..., x_n is a
sample of n observations, then the sample variance is

$$s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1} = \frac{S_{xx}}{n-1}. \qquad \text{(8-4)}$$

Note that computation of s² requires calculation of x̄, n subtractions, and n squaring and
adding operations. The deviations x_i − x̄ may be rather tedious to work with, and several
decimals may have to be carried to ensure numerical accuracy. A more efficient (yet equiv-
alent) computational formula for calculating S_xx is

$$S_{xx} = \sum_{i=1}^{n} x_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} x_i\right)^2. \qquad \text{(8-5)}$$

The formula for S_xx presented in equation 8-5 requires only one computational pass; but
care must again be taken to keep enough decimals to prevent round-off error.
To see how the sample variance measures dispersion or variability, refer to Fig. 8-16,
which shows the deviations x_i − x̄ for the second sample of six bottle-bursting strengths. The
greater the amount of variability in the bursting-strength data, the larger in absolute mag-
nitude some of the deviations x_i − x̄ will be. Since the deviations x_i − x̄ will always sum to zero, we
must use a measure of variability that changes the negative deviations to nonnegative quan-
tities. Squaring the deviations is the approach used in the sample variance. Consequently,
if s² is small, then there is relatively little variability in the data, but if s² is large, the vari-
ability is relatively large.
The units of measurement for the sample variance are the square of the original units
of the variable. Thus, if x is measured in pounds per square inch (psi), the units for the sam-
ple variance are (psi)².

Example 8-2
We will calculate the sample variance of the bottle-bursting strengths for the second sample in Fig.
8-15. The deviations x_i − x̄ for this sample are shown in Fig. 8-16.

Figure 8-15 Bursting-strength data for the two samples (sample mean = 248).

Figure 8-16 How the sample variance measures variability through the deviations x_i − x̄.

Observation   x_i − x̄   (x_i − x̄)²
x_1 = 190      −58        3364
x_2 = 228      −20         400
x_3 = 305       57        3249
x_4 = 240       −8          64
x_5 = 265       17         289
x_6 = 260       12         144
x̄ = 248      Sum = 0    Sum = 7510

From equation 8-4,

$$s^2 = \frac{S_{xx}}{n-1} = \frac{\sum_{i=1}^{6} (x_i - \bar{x})^2}{5} = \frac{7510}{5} = 1502 \ (\text{psi})^2.$$

If we calculate the sample variance of the bursting strength for the Sample 1 values, we
find that s² = 158 (psi)². This is considerably smaller than the sample variance of Sample 2,
confirming our initial impression that Sample 1 has less variability than Sample 2.
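A minimal Python sketch of the one-pass form in equation 8-5, checked against the Sample 2 result above, is:

import math

def sample_variance(data):
    # One-pass computational form: Sxx = sum of x^2 - (sum of x)^2 / n,
    # then s^2 = Sxx / (n - 1)  (equations 8-4 and 8-5).
    n = len(data)
    sum_x = sum(data)
    sum_x2 = sum(x * x for x in data)
    sxx = sum_x2 - sum_x ** 2 / n
    return sxx / (n - 1)

sample2 = [190, 228, 305, 240, 265, 260]
s2 = sample_variance(sample2)
print(s2, math.sqrt(s2))  # 1502.0 and approximately 38.76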
Because s² is expressed in the square of the original units, it is not easy to interpret.
Furthermore, variability is a more difficult and unfamiliar concept than location or central
tendency. However, we can solve the "curse of dimensionality" by working with the (posi-
tive) square root of the variance, s, called the sample standard deviation. This gives a meas-
ure of dispersion expressed in the same units as the original variable.

Example 8-3
The sample standard deviation of the bottle-bursting strengths for the Sample 2 bottles in Example 8-2
and Fig. 8-15 is

s = √s² = √1502 = 38.76 psi.

For the Sample 1 bottles, the standard deviation of bursting strength is

s = √158 = 12.57 psi.

Example 8-4
Compute the sample variance and sample standard deviation of the bottle-bursting-strength data in
Table 8-2. Note that

Σ_{i=1}^{100} x_i² = 7,074,258.00 and Σ_{i=1}^{100} x_i = 26,406.

Consequently,

S_xx = 7,074,258.00 − (26,406)²/100 = 101,489.64 and s² = 101,489.64/99 = 1025.15 psi²,

so that the sample standard deviation is

s = √1025.15 = 32.02 psi.

When the population is finite and consists of N values, we may define the population vari-
ance as

$$\sigma^2 = \frac{\sum_{i=1}^{N} (x_i - \mu)^2}{N}, \qquad \text{(8-6)}$$

which is simply the average of the squared departures of the data values from the pop-
ulation mean. A closely related quantity, σ̃², is also sometimes called the population vari-
ance and is defined as

$$\tilde{\sigma}^2 = \frac{\sum_{i=1}^{N} (x_i - \mu)^2}{N-1}. \qquad \text{(8-7)}$$

Obviously, as N gets large, σ̃² → σ², and oftentimes the use of σ̃² simplifies some of the
algebraic formulation presented in Chapters 9 and 10. Where several populations are to be
observed, a subscript may be employed to identify the population characteristics and
descriptive measures, e.g., μ_x, σ_x², etc., if the x variable is being described.
We noted that the sample mean may be used to make inferences about the population
mean. Similarly, the sample variance may be used to make inferences about the population
variance. We observe that the divisor for the sample variance, s², is the sample size minus
1, (n − 1). If we actually knew the true value of the population mean μ, then we could define
the sample variance as the average squared deviation of the sample observations about μ.
In practice, the value of μ is almost never known, and so the sum of the squared deviations
about the sample average x̄ must be used instead. However, the observations x_i tend to be
closer to their average x̄ than to the population mean μ, so to compensate for this we use as
a divisor n − 1 rather than n.
Another way to think about this is to consider the sample variance s² as being based on
n − 1 degrees of freedom. The term degrees of freedom results from the fact that the n devi-
ations x_1 − x̄, x_2 − x̄, ..., x_n − x̄ always sum to zero, so specifying the values of any n − 1 of
these quantities automatically determines the remaining one. Thus, only n − 1 of the n devi-
ations x_i − x̄ are independent.
Another useful measure of dispersion is the sample range

$$R = \max(x_i) - \min(x_i). \qquad \text{(8-8)}$$

The sample range is very simple to compute, but it ignores all the information in the sam-
ple between the smallest and largest observations. For small sample sizes, say n ≤ 10, this
information loss is not too serious in some situations. The range traditionally has had wide-

spread application in statistical quality control, where sample sizes of 4 or 5 are common
and computational simplicity is a major consideration; however, that advantage has been
largely diminished by the widespread use of electronic measurement, and data storage and
analysis systems, and as we will later see, the sample variance (or standard deviation) pro-
vides a "better" measure of variability. We will briefly discuss the use of the range in sta-
tistical quality-control problems in Chapter 17.

Example 8-5
Calculate the ranges of the two samples of bottle-bursting-strength data from Section 8-4, shown in
Fig. 8-15. For the first sample, we find that

R_1 = 265 − 230 = 35,

whereas for the second sample

R_2 = 305 − 190 = 115.

Note that the range of the second sample is much larger than the range of the first, implying that the
second sample has greater variability than the first.

Occasionally, it is desirable to express variation as a fraction of the mean. A measure
of relative variation called the sample coefficient of variation is defined as

$$CV = \frac{s}{\bar{x}}. \qquad \text{(8-9)}$$

The coefficient of variation is useful when comparing the variability of two or more data
sets that differ considerably in the magnitude of the observations. For example, the coeffi-
cient of variation might be useful in comparing the variability of daily electricity usage
within samples of single-family residences in Atlanta, Georgia, and Butte, Montana, during
July.

8-4.3 Other Measures for One Variable


Two other measures, both dimensionless, are provided by spreadsheet or statistical software
systems and are called skewness and kurtosis estimates. The notion of skewness was graph-
ically illustrated in Fig. 8-14. These characteristics are population or population model
characteristics, and they are defined in terms of moments, μ_k, as described in Chapter 2,
Section 2-5:

$$\text{skewness } \beta_3 = \frac{\mu_3}{\mu_2^{3/2}} \quad \text{and} \quad \text{kurtosis } \beta_4 = \frac{\mu_4}{\mu_2^2} - 3, \qquad \text{(8-10)}$$

where, in the case of finite populations, the kth central moment is defined as

$$\mu_k = \frac{\sum_{i=1}^{N} (x_i - \mu)^k}{N}. \qquad \text{(8-11)}$$

As discussed earlier, skewness reflects the degree of symmetry about the mean; nega-
tive skew results from an asymmetric tail toward smaller values of the variable, while posi-
tive skew results from an asymmetric tail extending toward the larger values of the variable.

Symmetric variables, such as those described by the normal and uniform distributions, have
skewness equal to zero. The exponential distribution, for example, has skewness β_3 = 2.
Kurtosis describes the relative peakedness of a distribution as compared to a normal
distribution, where a negative value is associated with a relatively flat distribution and a
positive value is associated with relatively peaked distributions. For example, the kurtosis
measure for a uniform distribution is −1.2, while for a normal variable, the kurtosis is zero.
If the data being analyzed represent measurement of a variable made on sample units,
the sample estimates of β_3 and β_4 are as shown in equations 8-12 and 8-13. These values
may be calculated from Excel® worksheet functions using the SKEW and KURT functions:

$$\text{skewness } b_3 = \frac{n}{(n-1)(n-2)} \sum_{i=1}^{n} \left(\frac{x_i - \bar{x}}{s}\right)^3, \quad n > 2, \qquad \text{(8-12)}$$

$$\text{kurtosis } b_4 = \frac{n(n+1)}{(n-1)(n-2)(n-3)} \sum_{i=1}^{n} \left(\frac{x_i - \bar{x}}{s}\right)^4 - \frac{3(n-1)^2}{(n-2)(n-3)}, \quad n > 3. \qquad \text{(8-13)}$$

If the data represent measurements on all the units of a finite population or a census, then
equations 8-10 and 8-11 should be utilized directly to determine these measures.
For the pull strength data shown in Table 8-1, the worksheet functions return values of
0.865 and 0.161 for the skewness and kurtosis measures, respectively.
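A Python sketch of equations 8-12 and 8-13 (the function names are ours), applied to the pull strength data of Table 8-1, should return values close to those quoted above:

import math

def adjusted_skewness(data):
    # Equation 8-12 (the form computed by the Excel SKEW function).
    n = len(data)
    xbar = sum(data) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))
    return n / ((n - 1) * (n - 2)) * sum(((x - xbar) / s) ** 3 for x in data)

def adjusted_kurtosis(data):
    # Equation 8-13 (the form computed by the Excel KURT function).
    n = len(data)
    xbar = sum(data) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))
    total = sum(((x - xbar) / s) ** 4 for x in data)
    return (n * (n + 1)) / ((n - 1) * (n - 2) * (n - 3)) * total \
        - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3))

pull_strength = [9.95, 24.45, 31.75, 35.00, 25.02, 16.86, 14.38,
                 9.60, 24.35, 27.50, 17.08, 37.00, 41.95, 11.66,
                 21.65, 17.89, 69.00, 10.30, 34.93, 46.59, 44.88,
                 54.12, 56.63, 22.13, 21.15]
print(adjusted_skewness(pull_strength), adjusted_kurtosis(pull_strength))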

8-4.4 Measuring Association


One measure of association between two numerical variables in sample data is called the
Pearson or simple correlation coefficient, and it is usually denoted r. Where data sets con-
tain a number of variables and the correlation coefficient for only one pair of variables, des-
ignated x and y, is to be presented, a subscript notation such as r_xy may be used to designate
the simple correlation between variable x and variable y. The correlation coefficient is

$$r_{xy} = \frac{S_{xy}}{\sqrt{S_{xx} S_{yy}}}, \qquad \text{(8-14)}$$

where S_xx is as shown in equation 8-5 and S_yy is similarly defined for the y variable, replac-
ing x with y in equation 8-5, while

$$S_{xy} = \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) = \sum_{i=1}^{n} x_i y_i - \frac{1}{n}\left(\sum_{i=1}^{n} x_i\right)\left(\sum_{i=1}^{n} y_i\right). \qquad \text{(8-15)}$$

Now we will return to the wire bond pull strength data as presented in Table 8-1, with
the scatter plots shown in Fig. 8-3 for both pull strength (variable y) vs. wire length (vari-
able x_1) and for pull strength vs. die height (variable x_2). For the sake of distinguishing
between the two correlation coefficients, we let r_1 be used for y vs. x_1, and r_2 for y vs. x_2.
The correlation coefficient is a dimensionless measure which lies on the interval [−1, +1].
It is a measure of linear association between the two variables of the pair. As the strength
of linear association increases, |r| → 1. A positive association means that larger x_1 values
have larger y values associated with them, and the same is true for x_2 and y. In other sets of
data, where the larger x values have smaller y values associated with them, the correlation
coefficient is negative. When the data reflect no linear association, the correlation is zero.
In the case of the pull strength data, r_1 = 0.982, and r_2 = 0.493. The calculations were made

using Minitab®, selecting Stat>Basic Statistics>Correlation. Both are obviously positive. It
is important to note, however, that the above result does not imply causality. We cannot
claim that increasing x_1 or x_2 causes an increase in y. This important point is often missed,
with the potential for major misinterpretation, as it may well be that a fourth, unobserved
variable is the causative variable influencing all the observed variables.
In the case where the data represent measures on variables associated with an entire
finite universe, equations 8-14 and 8-15 may be employed after replacing x̄ by μ_x, the finite
population mean for the x population, and ȳ by μ_y, the finite population mean for the y pop-
ulation, while replacing the sample size n in the formulation by N, the universe size. In this
case, it is customary to use the symbol ρ_xy to represent this finite population correlation
coefficient between variables x and y.
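A minimal Python sketch of equations 8-14 and 8-15, applied to pull strength versus wire length, is shown below; it should return a value near r_1 = 0.982:

import math

def correlation(x, y):
    # r = Sxy / sqrt(Sxx * Syy), equations 8-14 and 8-15.
    n = len(x)
    sxx = sum(a * a for a in x) - sum(x) ** 2 / n
    syy = sum(b * b for b in y) - sum(y) ** 2 / n
    sxy = sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y) / n
    return sxy / math.sqrt(sxx * syy)

wire_length = [2, 8, 11, 10, 8, 4, 2, 2, 9, 8, 4, 11, 12,
               2, 4, 4, 20, 1, 10, 15, 15, 16, 17, 6, 5]
pull_strength = [9.95, 24.45, 31.75, 35.00, 25.02, 16.86, 14.38,
                 9.60, 24.35, 27.50, 17.08, 37.00, 41.95, 11.66,
                 21.65, 17.89, 69.00, 10.30, 34.93, 46.59, 44.88,
                 54.12, 56.63, 22.13, 21.15]
print(correlation(wire_length, pull_strength))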

8-4.5 Grouped Data


If the data are in a frequency distribution, it is necessary to modify the computing formu-
las for the measures of central tendency and dispersion given in Sections 8-4.1 and 8-4.2.
Suppose that for each of p distinct values of x, say x_1, x_2, ..., x_p, the observed frequency of
x_j is f_j. Then the sample mean and sample variance may be computed as

$$\bar{x} = \frac{\sum_{j=1}^{p} f_j x_j}{\sum_{j=1}^{p} f_j} = \frac{\sum_{j=1}^{p} f_j x_j}{n} \qquad \text{(8-16)}$$

and

$$s^2 = \frac{\sum_{j=1}^{p} f_j x_j^2 - \frac{1}{n}\left(\sum_{j=1}^{p} f_j x_j\right)^2}{n-1}, \qquad \text{(8-17)}$$

respectively.
A similar situation arises where original data have been either lost or destroyed but the
grouped data have been preserved. In such cases, we can approximate the important data
moments by using a convention which assigns each data value the value representing the
midpoint of the class interval into which the observation was classified, so that all sam-
ple values falling in a particular interval are assigned the same value. This may be done eas-
ily, and the resulting approximate mean and variance values are as shown in equations 8-18
and 8-19. If m_j denotes the midpoint of the jth class interval and there are c class intervals,
then the sample mean and sample variance are approximately

$$\bar{x} \simeq \frac{\sum_{j=1}^{c} f_j m_j}{n} \qquad \text{(8-18)}$$

and

$$s^2 \simeq \frac{\sum_{j=1}^{c} f_j m_j^2 - \frac{1}{n}\left(\sum_{j=1}^{c} f_j m_j\right)^2}{n-1}. \qquad \text{(8-19)}$$

Example 8-6
To illustrate the use of equations 8-18 and 8-19, we compute the mean and variance of bursting
strength for the data in the frequency distribution of Table 8-3. Note that there are c = 9 class inter-
vals, and that m_1 = 180, f_1 = 2, m_2 = 200, f_2 = 4, m_3 = 220, f_3 = 7, m_4 = 240, f_4 = 13, m_5 = 260, f_5 = 32,
m_6 = 280, f_6 = 24, m_7 = 300, f_7 = 11, m_8 = 320, f_8 = 4, m_9 = 340, and f_9 = 3. Thus,

x̄ ≈ (Σ_{j=1}^{9} f_j m_j)/100 = 26,460/100 = 264.60 psi

and

s² ≈ [Σ_{j=1}^{9} f_j m_j² − (Σ_{j=1}^{9} f_j m_j)²/100]/99 = [7,103,600 − (26,460)²/100]/99 = 1033.17 (psi)².

Notice that these are very close to the values obtained from the ungrouped data.
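A Python sketch of equations 8-18 and 8-19, using the midpoints and frequencies of Table 8-3, is:

# Class midpoints and frequencies from Table 8-3 (Example 8-6).
midpoints   = [180, 200, 220, 240, 260, 280, 300, 320, 340]
frequencies = [  2,   4,   7,  13,  32,  24,  11,   4,   3]

n = sum(frequencies)
sum_fm  = sum(f * m for f, m in zip(frequencies, midpoints))
sum_fm2 = sum(f * m * m for f, m in zip(frequencies, midpoints))

mean = sum_fm / n                                 # equation 8-18
variance = (sum_fm2 - sum_fm ** 2 / n) / (n - 1)  # equation 8-19
print(mean, variance)  # 264.6 and approximately 1033.2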

When the data are grouped in class intervals, it is also possible to approximate the
median and mode. The median is approximately

$$\tilde{x} \simeq L_M + \left(\frac{n/2 - T}{f_M}\right)\Delta, \qquad \text{(8-20)}$$

where L_M is the lower limit of the class interval containing the median (called the median
class), f_M is the frequency in the median class, T is the total of all frequencies in the class
intervals preceding the median class, and Δ is the width of the median class. The mode, say
MO, is approximately

$$MO \simeq L_{MO} + \left(\frac{a}{a+b}\right)\Delta, \qquad \text{(8-21)}$$

where L_MO is the lower limit of the modal class (the class interval with the greatest fre-
quency), a is the absolute value of the difference in frequency between the modal class and
the preceding class, b is the absolute value of the difference in frequency between the
modal class and the following class, and Δ is the width of the modal class.
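As a sketch, applying these two approximations to the grouped bursting-strength data of Table 8-3 (where the median class and the modal class are both 250 ≤ x < 270):

n, width = 100, 20

# Median class 250 <= x < 270: L_M = 250, f_M = 32, and T = 26
# observations lie in the preceding intervals.
L_M, f_M, T = 250, 32, 26
median = L_M + (n / 2 - T) / f_M * width
print(median)  # 265.0, in agreement with the ungrouped median

# Modal class 250 <= x < 270: a = |32 - 13| = 19, b = |32 - 24| = 8.
L_MO, a, b = 250, 19, 8
mode = L_MO + a / (a + b) * width
print(mode)  # approximately 264.1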

8-5 SUMMARY
This chapter has provided an introduction to the field of statistics, including the notions of
process, universe, population, sampling, and sample results called data. Furthermore, a
variety of commonly used displays have been described and illustrated. These were the dot
plot, frequency distribution, histogram, stem-and-leaf plot, Pareto chart, box plot, scatter
plot, and time plot.
We have also introduced quantitative measures for summarizing data. The mean,
median and mode describe central tendency, or location, while the variance, standard devi-
ation, range, and interquartile range describe dispersion or spread in the data. Furthermore,
measures of skew and kurtosis were presented to describe asymmetry and peakedness,
respectively. We also presented the correlation coefficient to describe the strength of linear
association between two variables. Subsequent chapters will focus on utilizing sample
results to draw inferences about the process or process model or about the universe.

8-6 EXERCISES
8-1. The shelf life of a high-speed photographic film is being investigated by the manufacturer. The following data (life in days) are available.

126 129 134 141
131 132 136 145
116 128 130 162
125 126 134 129
134 127 120 127
120 122 129 133
125 111 147 129
150 148 126 140
130 120 117 131
149 117 143 133

Construct a histogram and comment on the properties of the data.

8-2. The percentage of cotton in a material used to manufacture men's shirts is given below. Construct a histogram for the data. Comment on the properties of the data.

Ard S O50. 54.) ONO) 92.0). 35.0 34.6
oules4 e+. 95108 30.0 33.10 7357.0) 33.0
34.5 35.0 33.4 32.5 35.4 34.6 37.3 34.1
35.6 35.4 34.7 34.1 34.6 35.9 34.6 34.7
B40 Oe 35-5338 opSF) on35 ad,
S5el gO Saad).le SO:0un Sk le 153.0e052-8 © 30.8
34 Feel (635104537-.99) 34.0 9 032:9. 532.1. 4343
33:09 3555, 94.9) 36.4, 341 33:5) 34:5 282.7,

8-3. The following data represent the yield on 90 consecutive batches of ceramic substrate to which a metal coating has been applied by a vapor-deposition process. Construct a histogram for these data and comment on the properties of the data.

94.1 87.3 94.1 92.4 84.6 85.4
O52 84.1 pial 90.6 83.6 86.6
90.6 90.1 96.4 89.1 85.4 91.7
91.4 95.2 88.2 88.8 89.7 87.5
88.2 86.1 86.4 86.4 87.6 84.2
86.1 94.3 85.0 85.1 85.1 85.1
Rn 93.2 84.9 84.0 89.6 90.5
90.0 86.7 87.3 eB 90.0 95.6
92.4 83.0 89.6 87.7 90.1 88.3
87.3 95.3 90.3 90.6 94.3 84.1
86.6 94.1 ei 89.4 97.3 83.7
Oie2 97.8 94.6 88.6 96.8 82.9
86.1 93.1 96.3 84.1 94.4 87.3
90.4 86.4 94.7 82.6 96.1 86.4
89.1 87.6 91.1 83.1 98.0 84.5

8-4. An electronics company manufactures power supplies for a personal computer. They produce several hundred power supplies each shift, and each unit is subjected to a 12-hour burn-in test. The number of units failing during this 12-hour test each shift is shown below.
(a) Construct a frequency distribution and histogram.
(b) Find the sample mean, sample variance, and sample standard deviation.

3 6 4 7 6 7
4 7 8 2 1 4
2 9 4 6 4 8
5 10 10 9 13 7
6 14 14 10 12 3
10 13 8 7 10 6
5 10 12 9 D q
4 9 4 16 5 8
3 8 5 11 7 4
11 10 14 13 10 12
9 3 g) 3 4 6
2 2 8 13 2 17
7 4 6 3 2 5
8 6 10 7 6 10
4 4 8 3 4 8
2 10 6 2 10 9
6 8 4 9 8 11
5 7 6 4 14 7
4 14 15 13 6 2
3 13 4 3 4 8
2 12 a 6 4 10
8 5 5 5 8 7
10 4 3 10 7 4
9 6 2 6 9 3
11 5 6 f) 2 6

8-5. Consider the shelf-life data in Exercise 8-1. Compute the sample mean, sample variance, and sample standard deviation.

8-6. Consider the cotton percentage data in Exercise 8-2. Find the sample mean, sample variance, sample standard deviation, sample median, and sample mode.
8-7. Consider the yield data in Exercise 8-3. Calculate the sample mean, sample variance, and sample standard deviation.

8-8. An article in Computers and Industrial Engineering (2001, p. 51) describes the time-to-failure data (in hours) for jet engines. Some of the data are reproduced below.

Engine #   Failure Time   Engine #   Failure Time
1          150            14         171
2          291            15         197
3          93             16         200
4          53             17         262
5          2              18         259
6          65             19         286
7          183            20         206
8          144            21         179
9          223            22         232
10         197            23         165
11         187            24         155
12         197            25         203
13         MY)

(a) Construct a frequency distribution and histogram for these data.
(b) Calculate the sample mean, sample median, sample variance, and sample standard deviation.

8-9. For the time-to-failure data in Exercise 8-8, suppose the fifth observation (2 hours) is discarded. Construct a frequency distribution and a histogram for the remaining data, and calculate the sample mean, sample median, sample variance, and sample standard deviation. Compare the results with those obtained in Exercise 8-8. What impact has removal of this observation had on the summary statistics?

8-10. An article in Technometrics (Vol. 19, 1977, p. 425) presents the following data on motor fuel octane ratings of several blends of gasoline:

88.5, 87.7, 83.4, 86.7, 87.5, 91.5, 88.6, 100.3,
95.6, 93.3, 94.7, 91.1, 91.0, 94.2, 87.8, 89.9,
88.3, 87.6, 84.3, 86.7, 88.2, 90.8, 88.3, 98.8,
94.2, 92.7, 93.2, 91.0, 90.3, 93.4, 88.5, 90.1,
89.2, 88.3, 85.3, 87.9, 88.6, 90.9, 89.0, 96.1,
93.3, 91.8, 92.3, 90.4, 90.1, 93.0, 88.7, 89.9,
89.8, 89.6, 87.4, 88.4, 88.9, 91.2, 89.3, 94.4,
92.7, 91.8, 91.6, 90.4, 91.1, 92.6, 89.8, 90.6,
91.1, 90.4, 89.3, 89.7, 90.3, 91.6, 90.5, 93.7,
Pele VaPe SPDs Mery IOC PoP LOS 10)Tic

(a) Construct a stem-and-leaf plot.
(b) Construct a frequency distribution and histogram.
(c) Calculate the sample mean, sample variance, and sample standard deviation.
(d) Find the sample median and sample mode.
(e) Determine the skewness and kurtosis measures.

8-11. Consider the shelf-life data in Exercise 8-1. Construct a stem-and-leaf plot for these data. Construct an ordered stem-and-leaf plot. Use this plot to find the 65th and 95th percentiles.

8-12. Consider the cotton percentage data in Exercise 8-2.
(a) Construct a stem-and-leaf plot.
(b) Calculate the sample mean, sample variance, and sample standard deviation.
(c) Construct an ordered stem-and-leaf plot.
(d) Find the median and the first and third quartiles.
(e) Find the interquartile range.
(f) Determine the skewness and kurtosis measures.

8-13. Consider the yield data in Exercise 8-3.
(a) Construct an ordered stem-and-leaf plot.
(b) Find the median and the first and third quartiles.
(c) Calculate the interquartile range.

8-14. Construct a box plot for the shelf-life data in Exercise 8-1. Interpret the data using this plot.

8-15. Construct a box plot for the cotton percentage data in Exercise 8-2. Interpret the data using this plot.

8-16. Construct a box plot for the yield data in Exercise 8-3. Compare it to the histogram (Exercise 8-3) and the stem-and-leaf plot (Exercise 8-13). Interpret the data.

8-17. An article in the Electrical Manufacturing & Coil Winding Conference Proceedings (1995, p. 829) presents the results for the number of returned shipments for a record-of-the-month club. The company is interested in the reason for a returned shipment. The results are shown below. Construct a Pareto chart and interpret the data.

Reason            Number of Customers
Refused           195,000
Wrong selection   50,000
Wrong answer      68,000
Canceled          5,000
Other             15,000
8-18. The following table contains the frequency of occurrence of final letters in an article in the Atlanta Journal. Construct a histogram from these data. Do any of the numerical descriptors in this chapter have any meaning for these data?

a  12    n  19
b  11    o  13
c  11    p  1
d  20    q  0
e  25    r  15
f  13    s  18
g  12    t  20
h  12    u  0
i  8     v  0
j  0     w  41
k  2     x  0
l  11    y  15
m  12    z  0

8-19. Show the following:
(a) That Σ_{i=1}^{n} (x_i − x̄) = 0.
(b) That Σ_{i=1}^{n} (x_i − x̄)² = Σ_{i=1}^{n} x_i² − n x̄².

8-20. The weight of bearings produced by a forging process is being investigated. A sample of six bearings provided the weights 1.18, 1.21, 1.19, 1.17, 1.20, and 1.21 pounds. Find the sample mean, sample variance, sample standard deviation, and sample median.

8-21. The diameter of eight automotive piston rings is shown below. Calculate the sample mean, sample variance, and sample standard deviation.

74.001 mm    73.998 mm
74.005       74.000
74.003       74.006
74.001       74.002

8-22. The thickness of printed circuit boards is a very important characteristic. A sample of eight boards had the following thicknesses (in thousandths of an inch): 63, 61, 65, 62, 61, 64, 60, and 66. Calculate the sample mean, sample variance, and sample standard deviation. What are the units of measurement for each statistic?

8-23. Coding the Data. Consider the printed circuit board thickness data in Exercise 8-22.
(a) Suppose that we subtract a constant 63 from each number. How are the sample mean, sample variance, and sample standard deviation affected?
(b) Suppose that we multiply each number by 100. How are the sample mean, sample variance, and sample standard deviation affected?

8-24. Coding the Data. Let y_i = a + bx_i, i = 1, 2, ..., n, where a and b are nonzero constants. Find the relationship between x̄ and ȳ, and between s_x and s_y.

8-25. Consider the quantity Σ_{i=1}^{n} (x_i − a)². For what value of a is this quantity minimized?

8-26. The Trimmed Mean. Suppose that the data are arranged in increasing order, αN of the observations are removed from each end, and the sample mean of the remaining numbers is calculated. The resulting quantity is called a trimmed mean. The trimmed mean generally lies between the sample mean x̄ and the sample median x̃ (why?).
(a) Calculate the 10% trimmed mean for the yield data in Exercise 8-3.
(b) Calculate the 20% trimmed mean for the yield data in Exercise 8-3 and compare it with the quantity found in part (a).

8-27. The Trimmed Mean. Suppose that αN is not an integer. Develop a procedure for obtaining a trimmed mean.

8-28. Consider the shelf-life data in Exercise 8-1. Construct a frequency distribution and histogram using a class interval width of 2. Compute the approximate mean and standard deviation from the frequency distribution and compare them with the exact values found in Exercise 8-5.

8-29. Consider the following frequency distribution.
(a) Calculate the sample mean, variance, and standard deviation.
(b) Calculate the median and mode.

a LOn ee LS Re20M 2a2324
ie & © O MR Tey Wo 20, te dis le

8-30. Consider the following frequency distribution.
(a) Calculate the sample mean, variance, and standard deviation.
(b) Calculate the median and mode.

8-31. For the two sets of data in Exercises 8-29 and 8-30, compute the sample coefficients of variation.

8-32. Compute the approximate sample mean, sample variance, sample median, and sample mode from the data in the following frequency distribution:
Class Interval     Frequency
10 ≤ x < 20        121
20 ≤ x < 30        165
30 ≤ x < 40        184
40 ≤ x < 50        173
50 ≤ x < 60        142
60 ≤ x < 70        120
70 ≤ x < 80        118
80 ≤ x < 90        110
90 ≤ x < 100       90

8-33. Compute the approximate sample mean, sample variance, median, and mode from the data in the following frequency distribution:

Class Interval     Frequency
−10 ≤ x < 0        3
0 ≤ x < 10         8
10 ≤ x < 20        12
20 ≤ x < 30        16
30 ≤ x < 40        9
40 ≤ x < 50        4
50 ≤ x < 60        2

8-34. Compute the approximate sample mean, sample standard deviation, sample variance, median, and mode for the data in the following frequency distribution:

Class Interval       Frequency
600 ≤ x < 650        41
650 ≤ x < 700        46
700 ≤ x < 750        50
750 ≤ x < 800        oy
800 ≤ x < 850        60
850 ≤ x < 900        64
900 ≤ x < 950        65
950 ≤ x < 1000       70
1000 ≤ x < 1050      122

8-35. An article in the International Journal of Industrial Ergonomics (1999, p. 483) describes a study conducted to determine the relationship between exhaustion time and distance covered until exhaustion for several wheelchair exercises performed on a 400-m outdoor track. The time and distances for ten participants are as follows:

Exhaustion Time    Distance Covered until Exhaustion
(in seconds)       (in meters)
6105               1373
310                698
720                1440
990                2228
1820               4550
475                ls
890                2003
390                488
745                1118
885                1991

Determine the simple (Pearson) correlation between time and distance. Interpret your results.

8-36. An electric utility which serves 850,332 residential customers on May 1, 2002, selects a sample of 120 customers randomly and installs a time-of-use meter or "load research meter" at each selected residence. At the time of installation, the technician also records the residence size (sq. ft.). During the summer peak demand period, the company experienced peak demand at 5:31 P.M. on July 30, 2002. July bills for usage (kwh) were sent to all customers, including those in the sample group. Due to cyclical billing, the July bills do not reflect the same time-of-use period for all customers; however, each customer is assigned a usage for July billing. The time-of-use meters have memory and they record time-specific demand (kw) by time interval; so for the sampled customers, average 15-minute demand is available for the time interval 5:00-5:45 P.M. on July 30, 2002. The data file Loaddata contains the kwh, kw, and sq. ft. data for each of the 120 sampled residences. This file is available at www.wiley.com/college/hines. Using Minitab® or other software, do the following:
(a) Construct scatter plots of
1. kw vs. sq. ft. and
2. kw vs. kwh,
and comment on the observed displays, specifically in regard to the nature of the association in parts 1 and 2 above, and for each part, the general pattern of observed variation in kw across the range of sq. ft. and the range of kwh.
(b) Construct the three-dimensional plot of kw vs. kwh and sq. ft.
(c) Construct histograms for the kwh, kw, and sq. ft. data, and comment on the patterns observed.
(d) Construct a stem-and-leaf plot for the kwh data and compare to the histogram for the kwh data.
(e) Determine the sample mean, median, and mode for kwh, kw, and sq. ft.
(f) Determine the sample standard deviation for kwh, kw, and sq. ft.
(g) Determine the first and third quartiles for kwh, kw, and sq. ft.
(h) Determine the skewness and kurtosis measures for kwh, kw, and sq. ft., and compare to the measures for a normal distribution.
(i) Determine the simple (Pearson) correlation between kw and sq. ft. and between kw and kwh. Interpret.
Chapter 9

Random Samples and


Sampling Distributions

In this chapter, we begin our study of statistical inference. Recall that statistics is the sci-
ence of drawing conclusions about a population based on an analysis of sample data from
that population. There are many different ways to take a sample from a population. Fur-
thermore, the conclusions that we can draw about the population often depend on how the
sample is selected. Generally, we want the sample to be representative of the population.
One important method of selecting a sample is random sampling. Most of the statistical
techniques that we present in the book assume that the sample is a random sample. In this
chapter we will define a random sample and introduce several probability distributions use-
ful in analyzing the information in sample data.

9-1 RANDOM SAMPLES


To define a random sample, let X be a random variable with probability distribution f(x).
Then the set of n observations X_1, X_2, ..., X_n, taken on the random variable X, and having
numerical outcomes x_1, x_2, ..., x_n, is called a random sample if the observations are obtained
by observing X independently under unchanging conditions for n times. Note that the obser-
vations X_1, X_2, ..., X_n in a random sample are independent random variables with the same
probability distribution f(x). That is, the marginal distributions of X_1, X_2, ..., X_n are f(x_1),
f(x_2), ..., f(x_n), respectively, and by independence the joint probability distribution of the
random sample is

g(x_1, x_2, ..., x_n) = f(x_1) · f(x_2) ··· f(x_n). (9-1)

Definition

X_1, X_2, ..., X_n is a random sample of size n if (a) the X_i's are independent random variables,
and (b) every observation X_i has the same probability distribution.

To illustrate this definition, suppose that we are investigating the bursting strength of
glass 1-liter soft drink bottles, and that bursting strength in the population of bottles is nor-
mally distributed. Then we would expect each of the observations on bursting strength X_1,
X_2, ..., X_n in a random sample of n bottles to be independent random variables with exactly
the same normal distribution.
It is not always easy to obtain a random sample. Sometimes we may use tables of uni-
form random numbers. At other times, the engineer or scientist cannot easily use formal
procedures to help ensure randomness and must rely on other selection methods. A judg-
ment sample is one chosen from the population by the objective judgment of an individual.


Since the accuracy and statistical behavior of judgment samples cannot be described, they
should be avoided.

Example 9-1
Suppose we wish to take a random sample of five batches of raw material out of 25 available batches.
We may number the batches with the integers 1 to 25. Now, using Table XV of the Appendix, arbi-
trarily choose a row and column as a starting point. Read down the chosen column, obtaining two dig-
its at a time, until five acceptable numbers are found (an acceptable number lies between 1 and 25).
To illustrate, suppose the above process gives us a sequence of numbers that reads 37, 48, 55, 02, 17,
61, 70, 43, 21, 82, 73, 13, 60, 25. The bold numbers specify which batches of raw material are to be
chosen as the random sample.

We first present some specifics on sampling from finite universes. Subsequent sections will
describe various sampling distributions. Sections marked by an asterisk may be omitted
without loss of continuity.

*9-1.1 Simple Random Sampling from a Finite Universe


When sampling n items without replacement from a universe of size N, there are C(N, n) possi-
ble samples, and if the selection probabilities are π_k = 1/C(N, n), for k = 1, 2, ..., C(N, n), then this is
simple random sampling. Note that each universe unit appears in exactly C(N − 1, n − 1)
of the possible samples, so each unit has an inclusion probability of
C(N − 1, n − 1)/C(N, n) = n/N.

As will be seen later, sampling without replacement is more "efficient" than sampling
with replacement for estimating the finite population mean or total; however, we briefly dis-
cuss simple random sampling with replacement for a basis of comparison. In this case, there
are N^n possible samples, and we select each with probability π_k = 1/N^n, for k = 1, 2, ..., N^n.
In this situation, a universe unit may appear in no samples or in as many as n, so the notion
of inclusion probability is less meaningful; however, if we consider the probability that a
specific unit will be selected at least once, that is obviously 1 − (1 − 1/N)^n, since for each unit
the probability of selection on a given observation is 1/N, a constant, and the n selections
are independent, so that these observations may be considered Bernoulli trials.

Example 9-2
Consider a universe consisting of five units numbered 1, 2, 3, 4, 5. In sampling without replacement, we
will employ a sample of size two and enumerate the possible samples as

(1,2), (1,3), (1,4), (1,5), (2,3), (2,4), (2,5), (3,4), (3,5), (4,5).

Note that there are C(5, 2) = 10 possible samples. If we select one from these, where each has an equal
selection probability of 0.1 assigned, that is simple random sampling. Consider these possible samples
to be numbered 1, 2, ..., 0, where 0 represents number 10. Now, go to Table XV (in the Appendix),
showing random integers, and with eyes closed place a finger down. Read the first digit from the five-
digit integer presented. Suppose we pick the integer which is in row 7 of column 4. The first digit is
6, so the sample consists of units 2 and 4. An alternative to using the table is to cast a single icosahe-
dron die and pick the sample corresponding to the outcome. Notice also that each unit appears in
exactly four of the possible samples, and thus the inclusion probability for each unit is 0.4, which is
simply n/N = 2/5.
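The enumeration in Example 9-2 is easy to verify by machine. The short Python sketch below lists the C(5, 2) = 10 possible samples and confirms that every unit has inclusion probability n/N = 0.4; it is an illustration only, not part of the original example.

from itertools import combinations

units = [1, 2, 3, 4, 5]                        # the universe, N = 5
samples = list(combinations(units, 2))         # all samples of size n = 2
print(len(samples))                            # 10 = C(5, 2)

for u in units:
    count = sum(1 for s in samples if u in s)  # each unit appears in 4 samples
    print(u, count / len(samples))             # inclusion probability 0.4 = n/N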

*May be omitted on first reading.



It is usually not feasible to enumerate the set of possible samples. For example, if
N = 100 and n = 25, there would be more than 2.43 × 10^23 possible samples, so other selec-
tion procedures which maintain the properties described in the definition must be
employed. The most commonly used procedure is to first number the universe units from 1
to N, then use realizations from a random number process to sequentially pick numbers
from 1 to N, discarding duplicates until n units have been picked if we are sampling with-
out replacement, keeping duplicates if sampling is with replacement. Recall from Chapter
6 that the term random numbers is used to describe a sequence of mutually independent
variables U_1, U_2, ..., which are identically distributed as uniform on [0,1]. We employ a
realization u_1, u_2, ..., which in sequentially selecting units as members of the sample is
roughly outlined as

(Unit Number)_i = ⌊N · u_j⌋ + 1, j = 1, 2, ..., J, i = 1, 2, ..., n, i ≤ j,

where J is the trial number on which the nth, or final, unit is selected. In sampling without
replacement, J ≥ n, and ⌊·⌋ is the greatest integer contained function. When sampling with
replacement, J = n.
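The selection procedure just outlined can be sketched in a few lines of Python. The function below is a minimal illustration, assuming Python's standard random module as the source of the uniform realizations u_1, u_2, ...; the values N = 25 and n = 5 echo Example 9-1.

import random

def simple_random_sample(N, n):
    # Sequentially convert uniform realizations on [0, 1) into unit
    # numbers 1..N, discarding duplicates (sampling without replacement).
    chosen = []
    while len(chosen) < n:
        unit = int(N * random.random()) + 1   # (Unit Number) = [N * u_j] + 1
        if unit not in chosen:
            chosen.append(unit)
    return chosen

print(simple_random_sample(N=25, n=5))        # e.g., [7, 19, 2, 24, 11]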

*9-1.2 Stratified Random Sampling of a Finite Universe

In finite-universe sampling, sometimes an explanatory or auxiliary variable(s) is available
that has known value(s) for each unit of the universe. These may be either numerical or cat-
egorical variables or both. If we are able to use the auxiliary variables to establish criteria
for assigning each universe unit to exactly one of the resulting strata, before sampling
begins, then simple random samples may be selected within each stratum, and the sampling
is independent from stratum to stratum, which allows us to later combine, with appropriate
weights, stratum "statistics" such as the means and variances obtained from various strata.
In general, this is an "efficient" scheme in the sense of estimation of population means and
totals if, following the classification, the variance in the variable we seek to measure is
small within strata while the differences in the stratum mean values are large. If L strata are
formed, then the stratum sizes are N_1, N_2, ..., N_L, and N_1 + N_2 + ··· + N_L = N, while the sam-
ple sizes are n_1, n_2, ..., n_L, and n_1 + n_2 + ··· + n_L = n. It is noted that the inclusion probabil-
ities are constant within strata, as n_h/N_h for stratum h, but they may differ greatly across
strata. Two commonly used methods for allocating the overall sample to strata are propor-
tional allocation and Neyman optimal allocation, where proportional allocation is

n_h = n(N_h/N), h = 1, 2, ..., L, (9-2)

and Neyman allocation is

n_h = n · (N_h σ_h) / (Σ_{l=1}^{L} N_l σ_l), h = 1, 2, ..., L. (9-3)

The values σ_h are standard deviation values within strata, and when designing the sampling
study, they are usually unknown for the variables to be observed in the study; however, they
may often be calculated for an explanatory or auxiliary variable where at least one of these
is numeric, and if there is a "reasonable" correlation, |ρ| > 0.6, between the variable to be
measured and such an auxiliary variable, then these surrogate standard deviation values,
used as the σ_h in equation 9-3, will produce an allocation which will be reasonably close to optimal.
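A small Python sketch of equations 9-2 and 9-3 follows, using the stratum sizes and surrogate claim-amount standard deviations that appear in Table 9-1; the rounding to integer sample sizes is our own convention, so the results agree with the table only up to rounding.

# Stratum sizes and surrogate standard deviations, listed E, MA, W
# by claim-size class, as in Table 9-1.
N_h = [132365, 41321, 10635, 96422, 31869, 6163, 82332, 33793, 6457]
s_h = [42, 255, 6781, 31, 210, 5128, 57, 310, 7674]
n = 1000
N = sum(N_h)

proportional = [round(n * Nh / N) for Nh in N_h]                   # equation 9-2

total = sum(Nh * sh for Nh, sh in zip(N_h, s_h))
neyman = [round(n * Nh * sh / total) for Nh, sh in zip(N_h, s_h)]  # equation 9-3

print(proportional)   # near 300, 94, 24, 218, 72, 14, 187, 76, 15
print(neyman)         # near 29, 54, 371, 15, 35, 163, 24, 54, 255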

*May be omitted on first reading.



Example 9-3
In seeking to address growing consumer concerns regarding the quality of claim processing, a
national managed health care/health insurance company has identified several characteristics to be
monitored on a monthly basis. The most important of these to the company are, first, the size of the
mean "financial error," which is the absolute value of the error (overpay or underpay) for a claim and,
second, the fraction of correctly filed claims paid to providers within 14 days. Three processing cen-
ters are eastern, mid-America, and western. Together, these centers process about 450,000 claims per
month, and it has been observed that they differ in accuracy and timeliness. Furthermore, the corre-
lation between the total dollar amount of the claim and the financial error overall is historically about
0.65-0.80. If a sample of size 1000 claims is to be drawn monthly, strata might be formed using cen-
ters E, MA, and W and total claim sizes as $0-$200, $201-$900, $901-$max claim. Therefore there
would be nine strata. And if for a given year there were 441,357 claims, these may be easily assigned
to strata, since they are naturally grouped by center, and each center uses the same processing system
to record data by claim amount, thus allowing ease of classification by claim size. At this point, the
1000-unit planned sample is allocated to the nine strata as shown in Table 9-1. The allocation shown
first in each cell employs proportional allocation, while the second one uses "optimal" allocation. The
σ values represent the standard deviation in the claim-amount metric within the stratum, since the stan-
dard deviation in "financial error" is unknown. After deciding on the allocation to use, nine inde-
pendent, simple random samples, without replacement, would be selected, yielding 1000 claim forms
to be inspected. Stratum identifying subscripts are not shown in Table 9-1.

It is noted that proportional allocation results in an equal inclusion probability for all
units, not just within strata, while an optimal allocation draws larger samples from strata in
which the product of the internal variability as measured by standard deviation (often in an
auxiliary variable) and the number of units in the stratum is large. Thus, inclusion proba-
bilities are the same for all units assigned to a stratum but may differ greatly across strata.

9-2 STATISTICS AND SAMPLING DISTRIBUTIONS


A statistic is any function of the observations in a random sample that does not depend on
unknown parameters. The process of drawing conclusions about populations based on

Table 9-1 Data for Health Insurance Example 9-3

                          Claim Amount
Center   $0-$200           $201-$900          $901-$Max Claim    All
E        N = 132,365       N = 41,321         N = 10,635         184,321
         σ = $42           σ = $255           σ = $6781
         n = 300/29        n = 94/54          n = 24/371         418/454
MA       N = 96,422        N = 31,869         N = 6,163          134,454
         σ = $31           σ = $210           σ = $5128
         n = 218/15        n = 72/35          n = 14/163         304/213
W        N = 82,332        N = 33,793         N = 6,457          122,582
         σ = $57           σ = $310           σ = $7674
         n = 187/24        n = 76/54          n = 15/255         278/333
All      N = 311,119       N = 106,983        N = 23,255         441,357
         n = 705/68        n = 242/143        n = 53/789         1000/1000

sample data makes considerable use of statistics. The procedures require that we understand
the probabilistic behavior of certain statistics. In general, we call the probability distribu-
tion of a statistic a sampling distribution. There are several important sampling distributions
that will be used extensively in subsequent chapters. In this section, we define and briefly
illustrate these sampling distributions. First, we give some relevant definitions and addi-
tional motivation.
A statistic is now defined as a value determined by a function of the values observed
in a sample. For example, if X_1, X_2, ..., X_n represent values to be observed in a probability
sample of size n on a single random variable X, then X̄ and S², as described in Chapter 8 in
equations 8-1 and 8-4, are statistics. Furthermore, the same is true for the median, the
mode, the sample range, the sample skewness measure, and the sample kurtosis. Note that
capital letters are used here as this reference is to random variables, not specific numerical
results, as was the case in Chapter 8.

9-2.1 Sampling Distributions


Definition
The sampling distribution of a statistic is the density function or probability function that
describes the probabilistic behavior of the statistic in repeated sampling from the same uni-
verse or on the same process variable assignment model.
Examples have been presented earlier, in Chapters 5-7. Recall that random sampling
with sample size n on a process variable X provides results X_1, X_2, ..., X_n, which are mutu-
ally independent random variables, all with a common distribution function. Thus, the sam-
ple mean, X̄, is a linear combination of n independent variables. If E(X) = μ and V(X) = σ²,
then recall that E(X̄) = μ and V(X̄) = σ²/n. And if X is a measurement variable, the density
function of X̄ is the sampling distribution of this statistic, X̄. There are several important
sampling distributions that will be used extensively in subsequent chapters. In this section
we will describe and briefly illustrate these sampling distributions.

The form of a sampling distribution depends on the stability assumption as well as on
the form of the process variable model. In Chapter 7, we observed that if X ~ N(μ, σ²), then
X̄ ~ N(μ, σ²/n), and this is the sampling distribution of X̄ for n ≥ 1. Now, if the process vari-
able assignment model takes some form other than the normal model illustrated here (e.g.,
the exponential model), and if all stationarity assumptions hold, then mathematical analysis
may yield a closed form for the sampling distribution of X̄. In this example case, if we
employ an exponential process model for X, the resulting sampling distribution for X̄ is a
form of the gamma distribution. In Chapter 7, the important Central Limit Theorem was pre-
sented. Recall that if the moment-generating function M_X(t) exists for all t, and if

Z_n = (X̄ − μ)/(σ/√n), then lim_{n→∞} F_n(z) = Φ(z), (9-4)

where F_n(z) is the cumulative distribution function of Z_n, and Φ(z) is the CDF for the stan-
dard normal variable, Z. Simply stated, as n → ∞, Z_n → a N(0, 1) random variable, and this
result has enormous utility in applied statistics. However, in applied work, a question arises
regarding how large the sample size n must be to employ the N(0,1) model as the sampling
distribution of Z_n, or equivalently stated, to describe the sampling distribution of X̄ as
N(μ, σ²/n). This is an important question, as the exact form of the process variable assign-
ment model is usually unknown. Furthermore, any response must be conditioned even when
it is based on simulation evidence, experience, or "accepted practice." Assuming process sta-
bility, a general suggestion is that if the skewness measure is close to zero (implying that the
variable assignment model is symmetric or very nearly so), then X̄ approaches normality
quickly, say for n = 10, but this depends also on the standardized kurtosis of X. For exam-
ple, if β₁ = 0 and |β₂| < 0.75, then a sample size of n = 5 may be quite adequate for many
applications. However, it should be noted that the tail behavior of X̄ may deviate somewhat
from that predicted by a normal model.

Where there is considerable skew present, application of the Central Limit Theorem in
describing the sampling distribution must be interpreted with care. A "rule of thumb" that
has been successfully used in survey sampling, where such variable behavior is common
(β₁ > 0), is that

n > 25(β₁)². (9-5)

For instance, returning to an exponential process variable assignment model, this rule sug-
gests that a sample size of n > 100 is required for employing a normal distribution to
describe the behavior of X̄, since β₁ = 2.
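A quick simulation sketch (in Python, with NumPy assumed available) illustrates the rule in equation 9-5 for the exponential case, where β₁ = 2 and the rule suggests n ≥ 100: the standardized sample mean should then be close to N(0, 1).

import numpy as np

rng = np.random.default_rng(1)
n, reps = 100, 20000                          # n = 100 per the rule for beta_1 = 2

x = rng.exponential(scale=1.0, size=(reps, n))
z = (x.mean(axis=1) - 1.0) / (1.0 / np.sqrt(n))   # mu = sigma = 1 for this model

# If the normal approximation is adequate, Z_n should have mean near 0,
# standard deviation near 1, and small residual skewness.
print(z.mean(), z.std())
print(((z - z.mean()) ** 3).mean() / z.std() ** 3)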

Definition
The standard error of a statistic is the standard deviation of its sampling distribution. If the
standard error involves unknown parameters whose values can be estimated, substitution of
these estimates into the standard error results in an estimated standard error.

To illustrate this definition, suppose we are sampling from a normal distribution with
mean μ and variance σ². Now the distribution of X̄ is normal with mean μ and variance
σ²/n, and so the standard error of X̄ is

σ_X̄ = σ/√n.

If we did not know σ but substituted the sample standard deviation S into the above, then
the estimated standard error of X̄ is

σ̂_X̄ = S/√n.

Example 9-4
Suppose we collect data on the tension bond strength of a modified portland cement mortar. The ten
observations are

16.85, 16.40, 17.21, 16.35, 16.52, 17.04, 16.96, 17.15, 16.59, 16.57,

where tension bond strength is measured in units of kgf/cm². We assume that the tension bond
strength is well-described by a normal distribution. The sample average is

x̄ = 16.76 kgf/cm².

First, suppose we know (or are willing to assume) that the standard deviation of tension bond strength
is σ = 0.25 kgf/cm². Then the standard error of the sample average is

σ/√n = 0.25/√10 = 0.079 kgf/cm².

If we are unwilling to assume that σ = 0.25 kgf/cm², we could use the sample standard deviation s =
0.316 kgf/cm² to obtain the estimated standard error as follows:

s/√n = 0.316/√10 = 0.0999 kgf/cm².



*9-2.2 Finite Populations and Enumerative Studies

In the case where sampling may be conceptually repeated on the same finite universe of
units, the sampling distribution of X̄ is interpreted in a manner similar to that of sampling a
process, except the issue of process stability is not a concern. Any inference to be drawn is
to be about the specific collection of unit population values. Ordinarily, in such situations,
sampling is without replacement, as this is more efficient. The general notion of expecta-
tion is different in such studies in that the expected value of a statistic θ̂ is defined as

E°(θ̂) = Σ_k π_k θ̂_k, (9-6)

where the sum is over all possible samples and θ̂_k is the value of the statistic if possible sam-
ple k is selected. In simple random sampling, recall that π_k = 1/C(N, n).

Now in the case of the sample mean statistic, X̄, we have E°(X̄) = μ_x, where μ_x is the
finite population mean of the random variable X. Also, under simple random sampling
without replacement,

V°(X̄) = E°[X̄²] − μ_x² = (1 − n/N)(σ_x²/n), (9-7)

where σ_x² is as defined in equation 8-7. The ratio n/N is called the sampling fraction, and it
represents the fraction of the population measures to be included in the sample. Concise
proofs of the results shown in equations 9-6 and 9-7 are given by Cochran (1977, p. 22).
Oftentimes, in studies of this sort, the objective is to estimate the population total (see equa-
tion 8-2) as well as the mean. The "mean per unit," or mpu, estimate of the total is simply
τ̂_x = N · X̄, and the variance of this statistic is obviously V°(τ̂_x) = N² · V°(X̄). While necessary and
sufficient conditions for the distribution of X̄ to approach normality have been developed,
these are of little practical utility, and the rule given by equation 9-5 has been widely
employed.

In sampling with replacement, the quantities E°(X̄) = μ_x and V°(X̄) = σ_x²/n.

Where stratification has been employed in a finite population enumeration study, there
are two statistics of common interest. These are the aggregate sample mean and the estimate
of population total. The mean is given by

μ̂_x = Σ_{h=1}^{L} (N_h/N) X̄_h = Σ_{h=1}^{L} W_h X̄_h, (9-8)

and the estimate of total is τ̂_x, where

τ̂_x = N · μ̂_x. (9-9)

In these formulations, X̄_h is the sample mean for stratum h, and W_h, given by (N_h/N), is
called the stratum weight for stratum h. Note that both of these statistics are expressed as
simple linear combinations of the independent stratum statistics X̄_h, and both are mpu esti-
mators. The variance of these statistics is given by

V(μ̂_x) = Σ_{h=1}^{L} W_h² (σ_{x_h}²/n_h)(1 − n_h/N_h),

and

V(τ̂_x) = N² · V(μ̂_x). (9-10)


*May be omitted on first reading.

The within-stratum variance terms, σ_{x_h}², may be estimated by the sample variance terms,
s_{x_h}², for the respective strata. The sampling distributions of the aggregate mean and total
estimate statistics for such stratified, enumeration studies are stated only for situations
where sample sizes are large enough to employ the limiting normality indicated by the Cen-
tral Limit Theorem. Note that these statistics are linear combinations across observations
within strata and across strata. The result is that in such cases we take

μ̂_x ~ N(μ_x, V(μ̂_x)) (9-11)

and

τ̂_x ~ N(τ_x, V(τ̂_x)).

Example 9-5
Suppose the sampling plan in Example 9-3 utilizes the "optimal" allocation as shown in Table 9-1.
The stratum sample means and sample standard deviations on the financial error variable are calcu-
lated with the results shown in Table 9-2, where the units of measurement are error $/claim.

Then utilizing the results presented in equations 9-8 and 9-9, the aggregate statistics when eval-
uated are μ̂_x = $18.36 and τ̂_x = $8,103,315. Utilizing equations 9-10 and employing the within-stratum
sample variance values s_{x_h}² to estimate the within-stratum variances σ_{x_h}², the estimates for the variance
in the sampling distribution of the sample mean and the estimate of total are V̂(μ̂_x) = 0.574 and
V̂(τ̂_x) = 1.118 × 10¹¹, and the estimates of the respective standard errors are thus $0.785 and $334,408
for the mean and total estimator distributions.
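The computations in equations 9-8 through 9-10 are easy to script. The Python sketch below (NumPy assumed) applies them to the western-center strata alone, using the Table 9-2 values; it is an illustration of the formulas, not a reproduction of the full nine-stratum calculation.

import numpy as np

# Western-center strata from Table 9-2.
N_h = np.array([82332, 33793, 6457])          # stratum sizes
n_h = np.array([24, 54, 255])                 # stratum sample sizes
xbar_h = np.array([10.52, 46.28, 124.91])     # stratum sample means
s_h = np.array([9.86, 31.23, 109.42])         # stratum sample standard deviations

N = N_h.sum()
W_h = N_h / N                                 # stratum weights
mu_hat = (W_h * xbar_h).sum()                 # equation 9-8
tau_hat = N * mu_hat                          # equation 9-9
var_mu = (W_h ** 2 * (s_h ** 2 / n_h) * (1 - n_h / N_h)).sum()  # equation 9-10

print(mu_hat, tau_hat, var_mu, N ** 2 * var_mu)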

9-3 THE CHI-SQUARE DISTRIBUTION


Many other useful sampling distributions can be defined in terms of normal random vari-
ables. The chi-square distribution is defined below.

Table 9-2 Sample Statistics for Health Insurance Example

                          Claim Amount
Center   $0-$200           $201-$900          $901-$Max Claim    All
E        N = 132,365       N = 41,321         N = 10,635         184,321
         n = 29            n = 54             n = 371            454
         x̄ = —             x̄ = 34.10          x̄ = 91.65
         s = 4.30          s = 28.61          s = 81.97
MA       N = 96,422        N = 31,869         N = 6,163          134,454
         n = 15            n = 35             n = 163            213
         x̄ = 1.50          x̄ = 22.00          x̄ = 72.00
         s = 4.82          s = 16.39          s = 56.67
W        N = 82,332        N = 33,793         N = 6,457          122,582
         n = 24            n = 54             n = 255            333
         x̄ = 10.52         x̄ = 46.28          x̄ = 124.91
         s = 9.86          s = 31.23          s = 109.42
All      N = 311,119       N = 106,983        N = 23,255         441,357
         n = 68            n = 143            n = 789            n = 1,000

Theorem 9-1
Let Z_1, Z_2, ..., Z_k be normally and independently distributed random variables, with mean
μ = 0 and variance σ² = 1. Then the random variable

χ²_k = Z_1² + Z_2² + ··· + Z_k²

has the probability density function

f(u) = [1/(2^{k/2} Γ(k/2))] u^{(k/2)−1} e^{−u/2}, u > 0,
     = 0, otherwise, (9-12)

and is said to follow the chi-square distribution with k degrees of freedom, abbreviated χ²_k.
For the proof of Theorem 9-1, see Exercises 7-47 and 7-48.

The mean and variance of the χ²_k distribution are

μ = k (9-13)

and

σ² = 2k. (9-14)

Several chi-square distributions are shown in Fig. 9-1. Note that the chi-square random vari-
able is nonnegative, and that the probability distribution is skewed to the right. However, as
k increases, the distribution becomes more symmetric. As k → ∞, the limiting form of the
chi-square distribution is the normal distribution.

The percentage points of the χ²_k distribution are given in Table III of the Appendix.
Define χ²_{α,k} as the percentage point or value of the chi-square random variable with k
degrees of freedom such that the probability that χ²_k exceeds this value is α. That is,

P{χ²_k ≥ χ²_{α,k}} = ∫_{χ²_{α,k}}^∞ f(u) du = α.

Figure 9-1 Several χ² distributions.



This probability is shown as the shaded area in Fig. 9-2. To illustrate the use of Table III,
note that

P{χ²_{10} ≥ 18.31} = 0.05.

That is, the 5% point of the chi-square distribution with 10 degrees of freedom is
χ²_{0.05,10} = 18.31.
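Percentage points such as χ²_{0.05,10} can also be obtained from software rather than Table III. A minimal Python sketch (SciPy assumed): since chi2.ppf returns the quantile, the upper-α point with k degrees of freedom is chi2.ppf(1 − α, k).

from scipy.stats import chi2

print(chi2.ppf(0.95, 10))    # upper 5% point with 10 d.f.: about 18.31
print(chi2.sf(18.31, 10))    # tail probability above 18.31: about 0.05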
Like the normal distribution, the chi-square distribution has an important reproductive
property.

Theorem 9-2 Additivity Theorem of Chi-Square

Let χ²_{k_1}, χ²_{k_2}, ..., χ²_{k_p} be independent chi-square random variables with k_1, k_2, ..., k_p degrees of
freedom, respectively. Then the quantity

Y = χ²_{k_1} + χ²_{k_2} + ··· + χ²_{k_p}

follows the chi-square distribution with degrees of freedom equal to

k = Σ_{i=1}^{p} k_i.

Proof Note that each chi-square random variable χ²_{k_i} can be written as the sum of the
squares of k_i standard normal random variables, say

χ²_{k_i} = Σ_{j=1}^{k_i} Z_{ij}², i = 1, 2, ..., p.

Therefore,

Y = Σ_{i=1}^{p} χ²_{k_i} = Σ_{i=1}^{p} Σ_{j=1}^{k_i} Z_{ij}²,

and since all the random variables Z_{ij} are independent because the χ²_{k_i} are independent, Y is
just the sum of the squares of k = Σ_{i=1}^{p} k_i independent standard normal random variables.
From Theorem 9-1, it follows that Y is a chi-square random variable with k degrees of
freedom.

Figure 9-2 Percentage point χ²_{α,k} of the chi-square distribution.

Example 9-6
As an example of a statistic that follows the chi-square distribution, suppose that X_1, X_2, ..., X_n is a
random sample from a normal population, with mean μ and variance σ². The function of the sample
variance

(n − 1)S²/σ²

is distributed as χ²_{n−1}. We will use this random variable extensively in Chapters 10 and 11. We will see
in those chapters that because the distribution of this random variable is chi square, we can construct
confidence interval estimates and test statistical hypotheses about the variance of a normal population.
To illustrate heuristically why the distribution of the random variable in Example 9-6,
(n − 1)S²/σ², is chi-square, note that

(n − 1)S²/σ² = Σ_{i=1}^{n} (X_i − X̄)²/σ². (9-15)

If X̄ in equation 9-15 were replaced by μ, then the distribution of

Σ_{i=1}^{n} (X_i − μ)²/σ²

is χ²_n, because each term (X_i − μ)/σ is an independent standard normal random variable. Now consider
the following:

Σ_{i=1}^{n} (X_i − μ)² = Σ_{i=1}^{n} (X_i − X̄ + X̄ − μ)² = Σ_{i=1}^{n} (X_i − X̄)² + n(X̄ − μ)².

Therefore,

Σ_{i=1}^{n} (X_i − μ)²/σ² = Σ_{i=1}^{n} (X_i − X̄)²/σ² + n(X̄ − μ)²/σ²,

or

Σ_{i=1}^{n} (X_i − μ)²/σ² = (n − 1)S²/σ² + [(X̄ − μ)/(σ/√n)]². (9-16)

Since X̄ is normally distributed with mean μ and variance σ²/n, the quantity (X̄ − μ)²/(σ²/n) is dis-
tributed as χ²_1. Furthermore, it can be shown that the random variables X̄ and S² are independent.
Therefore, since Σ_{i=1}^{n} (X_i − μ)²/σ² is distributed as χ²_n, it seems logical to use the additivity prop-
erty of the chi-square distribution (Theorem 9-2) and conclude that the distribution of (n − 1)S²/σ² is
χ²_{n−1}.
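A brief simulation sketch (Python with NumPy assumed) supports this heuristic argument: for repeated normal samples, the statistic (n − 1)S²/σ² should have mean k = n − 1 and variance 2k, matching equations 9-13 and 9-14 for a chi-square with n − 1 degrees of freedom.

import numpy as np

rng = np.random.default_rng(7)
n, reps, sigma = 8, 50000, 2.0
x = rng.normal(loc=5.0, scale=sigma, size=(reps, n))

stat = (n - 1) * x.var(axis=1, ddof=1) / sigma ** 2   # (n - 1)S^2 / sigma^2

# Chi-square with k = n - 1 = 7 degrees of freedom has mean k and variance 2k.
print(stat.mean())    # near 7
print(stat.var())     # near 14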

9-4 THE t DISTRIBUTION

Another important sampling distribution is the t distribution, sometimes called the Student
t distribution.

Theorem 9-3
Let Z ~ N(0, 1) and V be a chi-square random variable with k degrees of freedom. If Z and
V are independent, then the random variable

T = Z/√(V/k)

has the probability density function

f(t) = Γ[(k + 1)/2] / [√(πk) Γ(k/2)] · [(t²/k) + 1]^{−(k+1)/2}, −∞ < t < ∞, (9-17)

and is said to follow the t distribution with k degrees of freedom, abbreviated t_k.

Proof Since Z and V are independent, their joint density function is

f(z, v) = [1/√(2π)] e^{−z²/2} · [v^{(k/2)−1}/(2^{k/2} Γ(k/2))] e^{−v/2}, −∞ < z < ∞, 0 < v < ∞.

Using the method of Section 4-10 we define a new random variable U = V. Thus, the
inverse solutions of

t = z/√(v/k)

and

u = v

are

z = t√(u/k)

and

v = u.

The Jacobian is

J = | √(u/k)   t/(2√(uk)) |
    |   0           1     |

Thus,

|J| = √(u/k),

and so the joint probability density function of T and U is

g(t, u) = [√(u/k)/(√(2π) 2^{k/2} Γ(k/2))] u^{(k/2)−1} e^{−(u/2)[(t²/k)+1]}. (9-18)

Now, since v > 0 we must require that u > 0, and since −∞ < z < ∞, then −∞ < t < ∞. On
rearranging equation 9-18 we have

g(t, u) = [1/(√(2πk) 2^{k/2} Γ(k/2))] u^{[(k+1)/2]−1} e^{−(u/2)[(t²/k)+1]}, 0 < u < ∞, −∞ < t < ∞,

and since f(t) = ∫₀^∞ g(t, u) du, we obtain



f(t) = [1/(√(2πk) 2^{k/2} Γ(k/2))] ∫₀^∞ u^{[(k+1)/2]−1} e^{−(u/2)[(t²/k)+1]} du

     = Γ[(k + 1)/2] / [√(πk) Γ(k/2)] · [(t²/k) + 1]^{−(k+1)/2}, −∞ < t < ∞,

which is the density given in equation 9-17, completing the proof.

Primarily because of historical usage, many authors make no distinction between the
random variable T and the symbol t. The mean and variance of the t distribution are μ = 0
and σ² = k/(k − 2) for k > 2, respectively. Several t distributions are shown in Fig. 9-3. The
general appearance of the t distribution is similar to the standard normal distribution, in that
both distributions are symmetric and unimodal, and the maximum ordinate value is reached
at the mean μ = 0. However, the t distribution has heavier tails than the normal; that is, it
has more probability further out. As the number of degrees of freedom k → ∞, the limiting
form of the t distribution is the standard normal distribution. In visualizing the t distribution
it is sometimes useful to know that the ordinate of the density at the mean μ = 0 is approx-
imately four to five times larger than the ordinate at the 5th and 95th percentiles. For exam-
ple, with 10 degrees of freedom for t this ratio is 4.8, with 20 degrees of freedom this factor
is 4.3, and with 30 degrees of freedom this factor is 4.1. By comparison, for the normal dis-
tribution, this factor is 3.9.

The percentage points of the t distribution are given in Table IV of the Appendix. Let t_{α,k}
be the percentage point or value of the t random variable with k degrees of freedom such that

P{T ≥ t_{α,k}} = ∫_{t_{α,k}}^∞ f(t) dt = α.

This percentage point is illustrated in Fig. 9-4. Note that since the t distribution is symmetric
about zero, we find t_{1−α,k} = −t_{α,k}. This relationship is useful, since Table IV gives only upper-tail
percentage points, that is, values of t_{α,k} for α ≤ 0.50. To illustrate the use of the table, note that

P{T ≥ t_{0.05,10}} = P{T ≥ 1.812} = 0.05.

Thus, the upper 5% point of the t distribution with 10 degrees of freedom is t_{0.05,10} = 1.812.
Similarly, the lower-tail point t_{0.95,10} = −t_{0.05,10} = −1.812.
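As with the chi-square table, these t percentage points can be checked in software. A minimal Python sketch (SciPy assumed), using symmetry for the lower-tail point:

from scipy.stats import t

print(t.ppf(0.95, 10))   # t_{0.05,10}: about 1.812
print(t.ppf(0.05, 10))   # t_{0.95,10} = -t_{0.05,10}: about -1.812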

Example 9-7
As an example of a random variable that follows the t distribution, suppose that X_1, X_2, ..., X_n is a ran-
dom sample from a normal distribution with mean μ and variance σ², and let X̄ and S² denote the sam-
ple mean and variance. Consider the statistic
Figure 9-3 Several t distributions.

Figure 9-4 Percentage points of the t distribution.

T = (X̄ − μ)/(S/√n). (9-19)

Dividing both the numerator and denominator of equation 9-19 by σ, we obtain

T = [(X̄ − μ)/(σ/√n)] / √(S²/σ²).

Since (X̄ − μ)/(σ/√n) ~ N(0, 1) and S²/σ² ~ χ²_{n−1}/(n − 1), and since X̄ and S² are independent, we see
from Theorem 9-3 that

T = (X̄ − μ)/(S/√n) (9-20)

follows a t distribution with v = n − 1 degrees of freedom. In Chapters 10 and 11 we will use the ran-
dom variable in equation 9-20 to construct confidence intervals and test hypotheses about the mean
of a normal distribution.

9-5 THE F DISTRIBUTION

A very useful sampling distribution is the F distribution.

Theorem 9-4
Let W and Y be independent chi-square random variables with u and v degrees of freedom,
respectively. Then the ratio

F = (W/u)/(Y/v)

has the probability density function

h(f) = { Γ[(u + v)/2] (u/v)^{u/2} f^{(u/2)−1} } / { Γ(u/2) Γ(v/2) [(u/v)f + 1]^{(u+v)/2} }, 0 < f < ∞, (9-21)

and is said to follow the F distribution with u degrees of freedom in the numerator and v
degrees of freedom in the denominator. It is usually abbreviated F_{u,v}.

Proof Since W and Y are independent, their joint probability density function is

f(w, y) = [w^{(u/2)−1} e^{−w/2}/(2^{u/2} Γ(u/2))] · [y^{(v/2)−1} e^{−y/2}/(2^{v/2} Γ(v/2))], 0 < w, y < ∞.

Proceeding as in Section 4-10, define the new random variable M = Y. The inverse solutions
of f = (w/u)/(y/v) and m = y are

w = (u/v)mf

and

y = m.

Therefore, the Jacobian

J = | (u/v)m   (u/v)f | = (u/v)m.
    |    0        1   |

Thus, the joint probability density function of F and M is given by

g(f, m) = { (u/v)^{u/2} f^{(u/2)−1} m^{[(u+v)/2]−1} e^{−(m/2)[(u/v)f+1]} } / { 2^{(u+v)/2} Γ(u/2) Γ(v/2) },
0 < f, m < ∞,

and since h(f) = ∫₀^∞ g(f, m) dm, we obtain equation 9-21, completing the proof.

The mean and variance of the F distribution are μ = v/(v − 2) for v > 2, and

σ² = [2v²(u + v − 2)] / [u(v − 2)²(v − 4)], v > 4.

Several F distributions are shown in Fig. 9-5. The F random variable is nonnegative and the
distribution is skewed to the right. The F distribution looks very similar to the chi-square
distribution in Fig. 9-1; however, the parameters u and v provide extra flexibility regarding
shape.

Figure 9-5 The F distribution.


The percentage points of the F distribution are given in Table V of the Appendix. Let
F_{α,u,v} be the percentage point of the F distribution with u and v degrees of freedom, such that
the probability that the random variable F exceeds this value is

P{F ≥ F_{α,u,v}} = ∫_{F_{α,u,v}}^∞ h(f) df = α.

This is illustrated in Fig. 9-6. For example, if u = 5 and v = 10, we find from Table V of the
Appendix that

P{F ≥ F_{0.05,5,10}} = P{F ≥ 3.33} = 0.05.

That is, the upper 5% point of F_{5,10} is F_{0.05,5,10} = 3.33. Table V contains only upper-tail per-
centage points (values of F_{α,u,v} for α ≤ 0.50). The lower-tail percentage points F_{1−α,u,v} can
be found as follows:

F_{1−α,u,v} = 1/F_{α,v,u}. (9-22)

For example, to find the lower-tail percentage point F_{0.95,5,10}, note that

F_{0.95,5,10} = 1/F_{0.05,10,5} = 1/4.74 = 0.211.
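Equation 9-22 is easy to verify numerically; the Python sketch below (SciPy assumed) reproduces both the upper-tail point F_{0.05,5,10} and the lower-tail point F_{0.95,5,10}.

from scipy.stats import f

upper = f.ppf(0.95, 5, 10)        # F_{0.05,5,10}: about 3.33
lower = f.ppf(0.05, 5, 10)        # F_{0.95,5,10}: about 0.211
print(upper, lower)
print(1 / f.ppf(0.95, 10, 5))     # equation 9-22: equals the lower-tail point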

Example 9-8
As an example of a statistic that follows the F distribution, suppose we have two normal populations
with variances σ₁² and σ₂². Let independent random samples of sizes n₁ and n₂ be taken from popula-
tions 1 and 2, respectively, and let S₁² and S₂² be the sample variances. Then the ratio

F = (S₁²/σ₁²)/(S₂²/σ₂²) (9-23)

has an F distribution with n₁ − 1 numerator degrees of freedom and n₂ − 1 denominator degrees of
freedom. This follows directly from the facts that (n₁ − 1)S₁²/σ₁² ~ χ²_{n₁−1} and (n₂ − 1)S₂²/σ₂² ~ χ²_{n₂−1},
and from Theorem 9-4. The random variable in equation 9-23 plays a key role in Chapters 10 and 11,
where we address the problems of confidence interval estimation and hypothesis testing about the
variances of two independent normal populations.

Figure 9-6 Upper and lower percentage points of the F distribution.



9-6 SUMMARY
This chapter has presented the concept of random sampling and introduced sampling dis-
tributions. In repeated sampling from a population, sample statistics of the sort discussed in
Chapter 8 vary from sample to sample, and the probability distribution of such statistics (or
functions of the statistics) is called the sampling distribution. The normal, chi-square, Stu-
dent t, and F distributions have been presented in this chapter and will be employed exten-
sively in later chapters to describe sampling variation.

9-7 EXERCISES
9-1. Suppose that a random variable is normally distributed with mean μ and variance σ². Draw a random sample of five observations. What is the joint density function of the sample?

9-2. Transistors have a life that is exponentially distributed with parameter λ. A random sample of n transistors is taken. What is the joint density function of the sample?

9-3. Suppose that X is uniformly distributed on the interval from 0 to 1. Consider a random sample of size 4 from X. What is the joint density function of the sample?

9-4. A lot consists of N transistors, and of these M (M ≤ N) are defective. We randomly select two transistors without replacement from this lot and determine whether they are defective or nondefective. The random variable

X_i = 1, if the ith transistor is nondefective,
    = 0, if the ith transistor is defective,    i = 1, 2.

Determine the joint probability function for X_1 and X_2. What are the marginal probability functions for X_1 and X_2? Are X_1 and X_2 independent random variables?

9-5. A population of power supplies for a personal computer has an output voltage that is normally distributed with a mean of 5.00 V and a standard deviation of 0.10 V. A random sample of eight power supplies is selected. Specify the sampling distribution of X̄.

9-6. Consider the power supply problem described in Exercise 9-5. What is the standard error of X̄?

9-7. Consider the power supply problem described in Exercise 9-5. Suppose that the population standard deviation is unknown. How would you obtain the estimated standard error?

9-8. A procurement specialist has purchased 25 resistors from vendor 1 and 30 resistors from vendor 2. Let X_{11}, X_{12}, ..., X_{1,25} represent the vendor 1 observed resistances, assumed normally and independently distributed with a mean of 100 Ω and a standard deviation of 1.5 Ω. Similarly, let X_{21}, X_{22}, ..., X_{2,30} represent the vendor 2 observed resistances, assumed normally and independently distributed with a mean of 105 Ω and a standard deviation of 2.0 Ω. What is the sampling distribution of X̄_1 − X̄_2?

9-9. Consider the resistor problem in Exercise 9-8. Find the standard error of X̄_1 − X̄_2.

9-10. Consider the resistor problem in Exercise 9-8. If we could not assume that resistance is normally distributed, what could be said about the sampling distribution of X̄_1 − X̄_2?

9-11. Suppose that independent random samples of sizes n_1 and n_2 are taken from two normal populations with means μ_1 and μ_2 and variances σ_1² and σ_2², respectively. If X̄_1 and X̄_2 are the sample means, find the sampling distribution of the statistic

[X̄_1 − X̄_2 − (μ_1 − μ_2)] / √[(σ_1²/n_1) + (σ_2²/n_2)].

9-12. A manufacturer of semiconductor devices takes a random sample of 100 chips and tests them, classifying each chip as defective or nondefective. Let X_i = 0(1) if the ith chip is nondefective (defective). The sample fraction defective is

p̂ = (X_1 + X_2 + ··· + X_100)/100.

What is the sampling distribution of p̂?

9-13. For the semiconductor problem in Exercise 9-12, find the standard error of p̂. Also find the estimated standard error of p̂.

9-14. Develop the moment-generating function of the chi-square distribution.

9-15. Derive the mean and variance of the chi-square random variable with u degrees of freedom.

9-16. Derive the mean and variance of the t distribution.

9-17. Derive the mean and variance of the F distribution.

9-18. Order Statistics. Let X_1, X_2, ..., X_n be a random sample of size n from X, a random variable having distribution function F(x). Rank the elements in order of increasing numerical magnitude, resulting in X_(1), X_(2), ..., X_(n), where X_(1) is the smallest sample element (X_(1) = min{X_1, X_2, ..., X_n}) and X_(n) is the largest sample element (X_(n) = max{X_1, X_2, ..., X_n}). X_(i) is called the ith order statistic. Often, the distribution of some of the order statistics is of interest, particularly the minimum and maximum sample values, X_(1) and X_(n), respectively. Prove that the distribution functions of X_(1) and X_(n), denoted respectively by F_{X_(1)}(t) and F_{X_(n)}(t), are

F_{X_(1)}(t) = 1 − [1 − F(t)]^n,
F_{X_(n)}(t) = [F(t)]^n.

Prove that if X is continuous with probability distribution f(x), then the probability distributions of X_(1) and X_(n) are

f_{X_(1)}(t) = n[1 − F(t)]^{n−1} f(t),
f_{X_(n)}(t) = n[F(t)]^{n−1} f(t).

9-19. Continuation of Exercise 9-18. Let X_1, X_2, ..., X_n be a random sample of a Bernoulli random variable with parameter p. Show that

P(X_(n) = 1) = 1 − (1 − p)^n,
P(X_(1) = 0) = 1 − p^n.

Use the results of Exercise 9-18.

9-20. Continuation of Exercise 9-18. Let X_1, X_2, ..., X_n be a random sample of a normal random variable with mean μ and variance σ². Using the results of Exercise 9-18, derive the density functions of X_(1) and X_(n).

9-21. Continuation of Exercise 9-18. Let X_1, X_2, ..., X_n be a random sample of an exponential random variable with parameter λ. Derive the distribution functions and probability distributions for X_(1) and X_(n). Use the results of Exercise 9-18.

9-22. Let X_1, X_2, ..., X_n be a random sample of a continuous random variable. Find

E[F(X_(n))]

and

E[F(X_(1))].

9-23. Using Table III of the Appendix, find the following values:
(a) XxHE E
(b) Da Vasous
(c) Xx—
(d) χ²_{α,10} such that P{χ²_{10} ≤ χ²_{α,10}} = 0.975.

9-24. Using Table IV of the Appendix, find the following values:
(a) t_{0.25,10}
(b) t_{0.25,20}
(c) t_{α,10} such that P{t_{10} ≤ t_{α,10}} = 0.95.

9-25. Using Table V of the Appendix, find the following values:
(a) F_{0.25,4,9}
(b) F_{0.05,15,10}
(c) Fos
(d) F_{0.90,24,24}

9-26. Let F_{1−α,u,v} denote a lower-tail point (α < 0.50) of the F_{u,v} distribution. Prove that F_{1−α,u,v} = 1/F_{α,v,u}.
Chapter 10

Parameter Estimation

Statistical inference is the process by which information from sample data is used to draw
conclusions about the population from which the sample was selected. The techniques of
statistical inference can be divided into two major areas: parameter estimation and hypoth-
esis testing. This chapter treats parameter estimation, and hypothesis testing is presented in
Chapter 11.
As an example of a parameter estimation problem, suppose that civil engineers are ana-
lyzing the compressive strength of concrete. There is a natural variability in the strength of
each individual concrete specimen. Consequently, the engineers are interested in estimating
the average strength for the population consisting of this type of concrete. They may also
be interested in estimating the variability of compressive strength in this population. We
present methods for obtaining point estimates of parameters such as the population mean
and variance and we also discuss methods for obtaining certain kinds of interval estimates
of parameters called confidence intervals.

10-1 POINT ESTIMATION


A point estimate of a population parameter is a single numerical value of a statistic that cor-
responds to that parameter. That is, the point estimate is a unique selection for the value of
an unknown parameter. More precisely, if X is a random variable with probability distribu-
tion f(x), characterized by the unknown parameter θ, and if X_1, X_2, ..., X_n is a random sam-
ple of size n from X, then the statistic θ̂ = h(X_1, X_2, ..., X_n) corresponding to θ is called the
estimator of θ. Note that the estimator θ̂ is a random variable, because it is a function of sam-
ple data. After the sample has been selected, θ̂ takes on a particular numerical value called
the point estimate of θ.

As an example, suppose that the random variable X is normally distributed with
unknown mean μ and known variance σ². The sample mean X̄ is a point estimator of the
unknown population mean μ. That is, μ̂ = X̄. After the sample has been selected, the numer-
ical value x̄ is the point estimate of μ. Thus, if x_1 = 2.5, x_2 = 3.1, x_3 = 2.8, and x_4 = 3.0, then
the point estimate of μ is

x̄ = (2.5 + 3.1 + 2.8 + 3.0)/4 = 2.85.

Similarly, if the population variance σ² is also unknown, a point estimator for σ² is the sam-
ple variance S², and the numerical value s² = 0.07 calculated from the sample data is the
point estimate of σ².
Estimation problems occur frequently in engineering. We often need to estimate the
following parameters:

• The mean μ of a single population
• The variance σ² (or standard deviation σ) of a single population
• The proportion p of items in a population that belong to a class of interest
• The difference between the means of two populations, μ₁ − μ₂
• The difference between two population proportions, p₁ − p₂

Reasonable point estimates of these parameters are as follows:

• For μ, the estimate is μ̂ = X̄, the sample mean.
• For σ², the estimate is σ̂² = S², the sample variance.
• For p, the estimate is p̂ = X/n, the sample proportion, where X is the number of items
in a random sample of size n that belong to the class of interest.
• For μ₁ − μ₂, the estimate is μ̂₁ − μ̂₂ = X̄₁ − X̄₂, the difference between the sample
means of two independent random samples.
• For p₁ − p₂, the estimate is p̂₁ − p̂₂, the difference between two sample proportions
computed from two independent random samples.
There may be several different potential point estimators for a parameter. For example,
if we wish to estimate the mean of a random variable, we might consider the sample mean,
the sample median, or perhaps the average of the smallest and largest observations in the
sample as point estimators. In order to decide which point estimator of a particular param-
eter is the best one to use, we need to examine their statistical properties and develop some
criteria for comparing estimators.

10-1.1 Properties of Estimators


A desirable property of an estimator is that it should be "close" in some sense to the true
value of the unknown parameter. Formally, we say that θ̂ is an unbiased estimator of the
parameter θ if

E(θ̂) = θ. (10-1)

That is, θ̂ is an unbiased estimator of θ if "on the average" its values are equal to θ. Note that
this is equivalent to requiring that the mean of the sampling distribution of θ̂ be equal to θ.

Example 10-1
Suppose that X is a random variable with mean μ and variance σ². Let X_1, X_2, ..., X_n be a random sam-
ple of size n from X. Show that the sample mean X̄ and sample variance S² are unbiased estimators of
μ and σ², respectively. Consider

E(X̄) = E[(1/n) Σ_{i=1}^{n} X_i] = (1/n) Σ_{i=1}^{n} E(X_i),

and since E(X_i) = μ for all i = 1, 2, ..., n,

E(X̄) = (1/n)(nμ) = μ.

Therefore, the sample mean X̄ is an unbiased estimator of the population mean μ. Now consider

E(S²) = E[ Σ_{i=1}^{n} (X_i − X̄)² / (n − 1) ]
      = [1/(n − 1)] E[ Σ_{i=1}^{n} (X_i² + X̄² − 2X̄X_i) ]
      = [1/(n − 1)] E[ Σ_{i=1}^{n} X_i² − nX̄² ].

However, since E(X_i²) = μ² + σ² and E(X̄²) = μ² + σ²/n, we have

E(S²) = [1/(n − 1)] [ Σ_{i=1}^{n} (μ² + σ²) − n(μ² + σ²/n) ]
      = [1/(n − 1)] (nμ² + nσ² − nμ² − σ²)
      = σ².

Therefore, the sample variance S² is an unbiased estimator of the population variance σ². However,
the sample standard deviation S is a biased estimator of the population standard deviation σ. For large
samples this bias is negligible.

The mean square error of an estimator θ̂ is defined as

MSE(θ̂) = E(θ̂ − θ)².   (10-2)

The mean square error can be rewritten as follows:

MSE(θ̂) = E[θ̂ − E(θ̂)]² + [θ − E(θ̂)]²
        = V(θ̂) + (bias)².   (10-3)

That is, the mean square error of θ̂ is equal to the variance of the estimator plus the squared
bias. If θ̂ is an unbiased estimator of θ, the mean square error of θ̂ is equal to the variance of θ̂.

The mean square error is an important criterion for comparing two estimators. Let θ̂₁
and θ̂₂ be two estimators of the parameter θ, and let MSE(θ̂₁) and MSE(θ̂₂) be the mean
square errors of θ̂₁ and θ̂₂. Then the relative efficiency of θ̂₂ to θ̂₁ is defined as

MSE(θ̂₁) / MSE(θ̂₂).

If this relative efficiency is less than one, we would conclude that θ̂₁ is a more efficient estimator
of θ than is θ̂₂, in the sense that it has smaller mean square error. For example,
suppose that we wish to estimate the mean μ of a population. We have a random sample of
n observations X₁, X₂, ..., Xₙ, and we wish to compare two possible estimators for μ: the
sample mean X̄ and a single observation from the sample, say Xᵢ. Note that both X̄ and Xᵢ
are unbiased estimators of μ; consequently, the mean square error of both estimators is simply
the variance. For the sample mean, we have MSE(X̄) = V(X̄) = σ²/n, where σ² is the
population variance; for an individual observation, we have MSE(Xᵢ) = V(Xᵢ) = σ². Therefore,
the relative efficiency of Xᵢ to X̄ is

MSE(X̄) / MSE(Xᵢ) = (σ²/n) / σ² = 1/n.

Since 1/n < 1 for sample sizes n ≥ 2, we would conclude that the sample mean is a better
estimator of μ than a single observation Xᵢ.
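This variance comparison is easy to verify by simulation. The following sketch is ours, not the text's; it assumes Python with NumPy, and the normal population and sample size are arbitrary choices. It estimates both mean square errors from repeated samples.

import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 10.0, 2.0, 5, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
mse_xbar = np.mean((samples.mean(axis=1) - mu) ** 2)  # should be near sigma^2/n
mse_x1 = np.mean((samples[:, 0] - mu) ** 2)           # should be near sigma^2

print(mse_xbar, mse_x1, mse_xbar / mse_x1)            # ratio near 1/n = 0.2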
Within the class of unbiased estimators, we would like to find the estimator that has the
smallest variance. Such an estimator is called a minimum variance unbiased estimator. Figure
10-1 shows the probability distributions of two unbiased estimators θ̂₁ and θ̂₂, with θ̂₁ having
smaller variance than θ̂₂. The estimator θ̂₁ is more likely than θ̂₂ to produce an estimate
that is close to the true value of the unknown parameter θ.
It is possible to obtain a lower bound on the variance of all unbiased estimators of θ.
Let θ̂ be an unbiased estimator of the parameter θ, based on a random sample of n observations,
and let f(x, θ) denote the probability distribution of the random variable X. Then a
lower bound on the variance of θ̂ is¹

V(θ̂) ≥ 1 / { n E[ (∂ ln f(X, θ)/∂θ)² ] }.   (10-4)

This inequality is called the Cramér–Rao lower bound. If an unbiased estimator θ̂ satisfies
equation 10-4 as an equality, it is the minimum variance unbiased estimator of θ.

Example 10-2
We will show that the sample mean X̄ is the minimum variance unbiased estimator of the mean of a
normal distribution with known variance.

Figure 10-1 The probability distributions of two unbiased estimators, θ̂₁ and θ̂₂.

¹Certain conditions on the function f(X, θ) are required for obtaining the Cramér–Rao inequality (for example,
see Tucker 1962). These conditions are satisfied by most of the standard probability distributions.
From Example 10-1 we observe that X̄ is an unbiased estimator of μ. Note that

ln f(X, μ) = ln[ (1/(σ√(2π))) e^{−(X−μ)²/(2σ²)} ]
           = −ln(σ√(2π)) − (X − μ)²/(2σ²),

so that

∂ ln f(X, μ)/∂μ = (X − μ)/σ².

Substituting into equation 10-4 we obtain

V(μ̂) ≥ 1 / { n E[(X − μ)²]/σ⁴ } = σ²/n.

Since we know that, in general, the variance of the sample mean is V(X̄) = σ²/n, we see that V(X̄) satisfies
the Cramér–Rao lower bound as an equality. Therefore X̄ is the minimum variance unbiased
estimator of μ for the normal distribution where σ² is known.

Sometimes we find that biased estimators are preferable to unbiased estimators
because they have smaller mean square error. That is, we can reduce the variance of the estimator
considerably by introducing a relatively small amount of bias. As long as the reduction
in variance is greater than the squared bias, an improved estimator in the mean square
error sense will result. For example, Fig. 10-2 shows the probability distribution of a biased
estimator θ̂₁ with smaller variance than the unbiased estimator θ̂₂. An estimate based on θ̂₁
would more likely be close to the true value of θ than would an estimate based on θ̂₂. We
will see an application of biased estimation in Chapter 15.

An estimator θ̂* that has a mean square error that is less than or equal to the mean
square error of any other estimator θ̂, for all values of the parameter θ, is called an optimal
estimator of θ.
Another way to define the closeness of an estimator θ̂ to the parameter θ is in terms of
consistency. If θ̂ₙ is an estimator of θ based on a random sample of size n, we say that θ̂ₙ is
consistent for θ if, for every ε > 0,

limₙ→∞ P(|θ̂ₙ − θ| ≤ ε) = 1.   (10-5)

Consistency is a large-sample property, since it describes the limiting behavior of the estimator
θ̂ₙ as the sample size tends to infinity. It is usually difficult to prove that an estimator
is consistent using the definition of equation 10-5. However, estimators whose mean square
error (or variance, if the estimator is unbiased) tends to zero as the sample size approaches
infinity are consistent. For example, X̄ is a consistent estimator of the mean of a normal distribution,
since X̄ is unbiased and limₙ→∞ V(X̄) = limₙ→∞ σ²/n = 0.

Figure 10-2 A biased estimator, θ̂₁, that has smaller variance than the unbiased estimator, θ̂₂.

10-1.2 The Method of Maximum Likelihood


One of the best methods for obtaining a point estimator is the method of maximum likelihood.
Suppose that X is a random variable with probability distribution f(x, θ), where θ is
a single unknown parameter. Let x₁, x₂, ..., xₙ be the observed values in a random sample
of size n. Then the likelihood function of the sample is

L(θ) = f(x₁, θ) · f(x₂, θ) ··· f(xₙ, θ).   (10-6)

Note that the likelihood function is now a function of only the unknown parameter θ. The
maximum likelihood estimator (MLE) of θ is the value of θ that maximizes the likelihood
function L(θ). Essentially, the maximum likelihood estimator is the value of θ that maximizes
the probability of occurrence of the sample results.

Example 10-3
Let X be a Bernoulli random variable. The probability mass function is

p(x) = pˣ(1 − p)¹⁻ˣ,  x = 0, 1,
     = 0,  otherwise,

where p is the parameter to be estimated. The likelihood function of a sample of size n would be

L(p) = p^{Σxᵢ} (1 − p)^{n − Σxᵢ}.

We observe that if p̂ maximizes L(p) then p̂ also maximizes ln L(p), since the logarithm is a monotonically
increasing function. Therefore,

ln L(p) = (Σᵢ₌₁ⁿ xᵢ) ln p + (n − Σᵢ₌₁ⁿ xᵢ) ln(1 − p).

Now

d ln L(p)/dp = (Σᵢ₌₁ⁿ xᵢ)/p − (n − Σᵢ₌₁ⁿ xᵢ)/(1 − p).

Equating this to zero and solving for p yields the MLE p̂ as

p̂ = (1/n) Σᵢ₌₁ⁿ xᵢ = x̄,

an intuitively pleasing answer. Of course, one should also perform a second-derivative test, but we
have forgone that here.
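The closed-form result p̂ = x̄ can also be checked numerically. The sketch below is an illustration of ours, assuming Python with NumPy and SciPy; it maximizes the Bernoulli log-likelihood directly and compares the maximizer with the sample mean.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
x = rng.binomial(1, 0.3, size=50)  # a Bernoulli(0.3) sample

def neg_log_lik(p):
    # negative of ln L(p) from the example above
    return -(x.sum() * np.log(p) + (len(x) - x.sum()) * np.log(1 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(res.x, x.mean())  # the two agree to optimizer tolerance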

Example 10-4
Let X be normally distributed with unknown mean μ and known variance σ². The likelihood function
of a sample of size n is

L(μ) = Πᵢ₌₁ⁿ (1/(σ√(2π))) e^{−(xᵢ−μ)²/(2σ²)} = (2πσ²)^{−n/2} e^{−(1/(2σ²)) Σᵢ₌₁ⁿ (xᵢ−μ)²}.

Now

ln L(μ) = −(n/2) ln(2πσ²) − (2σ²)⁻¹ Σᵢ₌₁ⁿ (xᵢ − μ)²

and

d ln L(μ)/dμ = (σ²)⁻¹ Σᵢ₌₁ⁿ (xᵢ − μ).

Equating this last result to zero and solving for μ yields

μ̂ = (1/n) Σᵢ₌₁ⁿ xᵢ = X̄

as the MLE of μ.

It may not always be possible to use calculus methods to determine the maximum of
L(θ). This is illustrated in the following example.

Example 10-5

Let X be uniformly distributed on the interval 0 to a. The likelihood function of a random sample X₁,
X₂, ..., Xₙ of size n is

L(a) = Πᵢ₌₁ⁿ (1/a) = 1/aⁿ,  0 ≤ xᵢ ≤ a.

Note that the slope of this function is not zero anywhere, so we cannot use calculus methods to find
the maximum likelihood estimator â. However, notice that the likelihood function increases as a
decreases. Therefore, we would maximize L(a) by setting â to the smallest value that it could reasonably
assume. Clearly, a can be no smaller than the largest sample value, so we would use the largest
observation as â. Thus, â = maxᵢ Xᵢ is the MLE for a.
The method of maximum likelihood can be used in situations where there are several
unknown parameters, say θ₁, θ₂, ..., θₖ, to estimate. In such cases, the likelihood function
is a function of the k unknown parameters θ₁, θ₂, ..., θₖ, and the maximum likelihood estimators
{θ̂ᵢ} would be found by equating the k first partial derivatives ∂L(θ₁, θ₂, ..., θₖ)/∂θᵢ,
i = 1, 2, ..., k, to zero and solving the resulting system of equations.

Example 10-6
Let X be normally distributed with mean μ and variance σ², where both μ and σ² are unknown. Find the
maximum likelihood estimators of μ and σ². The likelihood function for a random sample of size n is

L(μ, σ²) = Πᵢ₌₁ⁿ (1/(σ√(2π))) e^{−(xᵢ−μ)²/(2σ²)} = (2πσ²)^{−n/2} e^{−(1/(2σ²)) Σᵢ₌₁ⁿ (xᵢ−μ)²}

and

ln L(μ, σ²) = −(n/2) ln(2πσ²) − (2σ²)⁻¹ Σᵢ₌₁ⁿ (xᵢ − μ)².

Now

∂ ln L(μ, σ²)/∂μ = (σ²)⁻¹ Σᵢ₌₁ⁿ (xᵢ − μ) = 0,
∂ ln L(μ, σ²)/∂(σ²) = −n/(2σ²) + (2σ⁴)⁻¹ Σᵢ₌₁ⁿ (xᵢ − μ)² = 0.

The solutions to the above equations yield the maximum likelihood estimators

μ̂ = X̄

and

σ̂² = (1/n) Σᵢ₌₁ⁿ (Xᵢ − X̄)²,

which is closely related to the unbiased sample variance S². Namely, σ̂² = ((n − 1)/n)S².

Maximum likelihood estimators are not necessarily unbiased (see the maximum likelihood
estimator of σ² in Example 10-6), but they usually may be easily modified to make
them unbiased. Further, the bias approaches zero for large samples. In general, maximum
likelihood estimators have good large-sample or asymptotic properties. Specifically, they
are asymptotically normally distributed, unbiased, and have a variance that approaches the
Cramér–Rao lower bound for large n. More precisely, if θ̂ is the maximum likelihood estimator
for θ, then √n(θ̂ − θ) is normally distributed with mean zero and variance

1 / E[ (∂ ln f(X, θ)/∂θ)² ]

for large n. Maximum likelihood estimators are also consistent. In addition, they possess the
invariance property; that is, if θ̂ is the maximum likelihood estimator of θ and u(θ) is a function
of θ that has a single-valued inverse, then the maximum likelihood estimator of u(θ) is
u(θ̂).
It can be shown graphically that the maximum of the likelihood will occur at the value
of the maximum likelihood estimator. Consider a sample of size n = 10 from a normal
distribution:

14.15, 32.07, 32.30, 25.01, 21.86, 23.70, 23.92, 25.19, 22.59, 26.47.

Assume that the population variance is known to be 4. The MLE for the mean, μ, of a normal
distribution has been shown to be X̄. For this set of data, x̄ = 25. Figure 10-3 displays
the log-likelihood for various values of the mean. Notice that the maximum value of the
log-likelihood function occurs at approximately x̄ = 25. Sometimes, the likelihood function
is relatively flat in the region around the maximum. This may be due to the size of the sample
taken from the population. A small sample size can lead to a fairly flat log-likelihood,
implying less precision in the estimate of the parameter of interest.
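A plot such as Fig. 10-3 is straightforward to reproduce. The following sketch is ours, assuming Python with NumPy; it evaluates the normal log-likelihood with σ² = 4 over a grid of candidate means for the data above, and the grid maximizer lies at the sample mean up to grid resolution.

import numpy as np

x = np.array([14.15, 32.07, 32.30, 25.01, 21.86,
              23.70, 23.92, 25.19, 22.59, 26.47])
sigma2 = 4.0  # known variance, as in the text

def log_lik(mu):
    n = len(x)
    return -(n / 2) * np.log(2 * np.pi * sigma2) - np.sum((x - mu) ** 2) / (2 * sigma2)

grid = np.linspace(22, 28, 121)
values = [log_lik(m) for m in grid]
print(grid[np.argmax(values)], x.mean())  # grid maximizer is near the sample mean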

10-1.3 The Method of Moments


Suppose that X is either a continuous random variable with probability density f(x; θ₁, θ₂,
..., θₖ) or a discrete random variable with distribution p(x; θ₁, θ₂, ..., θₖ) characterized by k
unknown parameters. Let X₁, X₂, ..., Xₙ be a random sample of size n from X, and define the
first k sample moments about the origin as

m′ₜ = (1/n) Σᵢ₌₁ⁿ Xᵢᵗ,  t = 1, 2, ..., k.   (10-7)

The first k population moments about the origin are

μ′ₜ = E(Xᵗ) = ∫₋∞^∞ xᵗ f(x; θ₁, θ₂, ..., θₖ) dx,  t = 1, 2, ..., k,  X continuous,
    = Σ_{x∈Rₓ} xᵗ p(x; θ₁, θ₂, ..., θₖ),  t = 1, 2, ..., k,  X discrete.   (10-8)

Figure 10-3 Log-likelihood for various means.
The population moments {μ′ₜ} will, in general, be functions of the k unknown parameters
{θᵢ}. Equating sample moments and population moments will yield k simultaneous equations
in k unknowns (the θᵢ); that is,

μ′ₜ = m′ₜ,  t = 1, 2, ..., k.   (10-9)

The solution to equation 10-9, denoted θ̂₁, θ̂₂, ..., θ̂ₖ, yields the moment estimators of θ₁, θ₂,
..., θₖ.

Example 10-7
Let X ~ N(μ, σ²), where μ and σ² are unknown. To derive estimators for μ and σ² by the method of
moments, recall that for the normal distribution

μ′₁ = μ,
μ′₂ = σ² + μ².

The sample moments are m′₁ = (1/n) Σᵢ₌₁ⁿ Xᵢ and m′₂ = (1/n) Σᵢ₌₁ⁿ Xᵢ². From equation 10-9 we obtain

μ = (1/n) Σᵢ₌₁ⁿ Xᵢ,
σ² + μ² = (1/n) Σᵢ₌₁ⁿ Xᵢ²,

which have the solution

μ̂ = X̄,
σ̂² = (1/n) Σᵢ₌₁ⁿ Xᵢ² − X̄² = (1/n) Σᵢ₌₁ⁿ (Xᵢ − X̄)².

Example 10-8
Let X be uniformly distributed on the interval (0, a). To find an estimator of a by the method of
moments, we note that the first population moment about zero is

μ′₁ = ∫₀^a x (1/a) dx = a/2.

The first sample moment is just X̄. Therefore,

â = 2X̄,

or the moment estimator of a is just twice the sample mean.

The method of moments often yields estimators that are reasonably good. In Example
10-7, for instance, the moment estimators are identical to the maximum likelihood estimators.
In general, moment estimators are asymptotically normally distributed (approximately)
and consistent. However, their variance may be larger than the variance of
estimators derived by other methods, such as the method of maximum likelihood. Occasionally,
the method of moments yields estimators that are very poor, as in Example 10-8.
The estimator in that example does not always generate an estimate that is compatible with
our knowledge of the situation. For example, if our sample observations were x₁ = 60, x₂ =
10, and x₃ = 5, then â = 50, which is unreasonable, since we know that a ≥ 60.
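The comparison is easy to reproduce. The short sketch below is ours, assuming Python with NumPy; it computes both the moment estimator and the maximum likelihood estimator of a for these three observations.

import numpy as np

x = np.array([60.0, 10.0, 5.0])  # the three observations from the text

a_mom = 2 * x.mean()   # moment estimator: twice the sample mean
a_mle = x.max()        # maximum likelihood estimator: largest observation

print(a_mom, a_mle)    # 50.0 vs. 60.0; only the MLE is compatible with the data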

10-1.4 Bayesian Inference


In the preceding chapters we made an extensive study of the use of probability. Until now,
we have interpreted these probabilities in the frequency sense; that is, they refer to an
experiment that can be repeated an indefinite number of times, and if the probability of
occurrence of an event A is 0.6, then we would expect A to occur in about 60% of the experimental
trials. This frequency interpretation of probability is often called the objectivist or
classical viewpoint.
Bayesian inference requires a different interpretation of probability, called the subjec-
tive viewpoint. We often encounter subjective probabilistic statements, such as “There is a
30% chance of rain today.” Subjective statements measure a person’s “degree of belief”
concerning some event, rather than a frequency interpretation. Bayesian inference requires
us to make use of subjective probability to measure our degree of belief about a state of
nature. That is, we must specify a probability distribution to describe our degree of belief
about an unknown parameter. This procedure is totally unlike anything we have discussed
previously. Until now, parameters have been treated as unknown constants. Bayesian infer-
ence requires us to think of parameters as random variables.
Suppose we let f(θ) be the probability distribution of the parameter or state of nature
θ. The distribution f(θ) summarizes our objective information about θ prior to obtaining
sample information. Obviously, if we are reasonably certain about the value of θ, we will
choose f(θ) with a small variance, while if we are less certain about θ, f(θ) will be chosen
with a larger variance. We call f(θ) the prior distribution of θ.

Now consider the distribution of the random variable X. The distribution of X we denote
f(x|θ) to indicate that the distribution depends on the unknown parameter θ. Suppose we take
a random sample from X, say X₁, X₂, ..., Xₙ. The joint density, or likelihood, of the sample is

f(x₁, x₂, ..., xₙ|θ) = f(x₁|θ) f(x₂|θ) ··· f(xₙ|θ).

We define the posterior distribution of θ as the conditional distribution of θ, given the
sample results. This is just

f(θ|x₁, x₂, ..., xₙ) = f(x₁, x₂, ..., xₙ, θ) / f(x₁, x₂, ..., xₙ).   (10-10)

The joint distribution of the sample and θ in the numerator of equation 10-10 is the product
of the prior distribution of θ and the likelihood, or

f(x₁, x₂, ..., xₙ, θ) = f(θ) f(x₁, x₂, ..., xₙ|θ).

The denominator of equation 10-10, which is the marginal distribution of the sample, is just
a normalizing constant obtained by

f(x₁, x₂, ..., xₙ) = ∫₋∞^∞ f(θ) f(x₁, x₂, ..., xₙ|θ) dθ,  X continuous,
                   = Σ_θ f(θ) f(x₁, x₂, ..., xₙ|θ),  X discrete.   (10-11)

Consequently, we may write the posterior distribution of θ as

f(θ|x₁, x₂, ..., xₙ) = f(θ) f(x₁, x₂, ..., xₙ|θ) / f(x₁, x₂, ..., xₙ).   (10-12)

We note that Bayes' theorem has been used to transform or update the prior distribution
to the posterior distribution. The posterior distribution reflects our degree of belief about θ
given the sample information. Furthermore, the posterior distribution is proportional to the
product of the prior distribution and the likelihood, the constant of proportionality being the
normalizing constant f(x₁, x₂, ..., xₙ).
Thus, the posterior density for θ expresses our degree of belief about the value of θ
given the result of the sample.

Example 10-9
The time to failure of a transistor is known to be exponentially distributed with parameter λ. For a random
sample of n transistors, the joint density of the sample elements, given λ, is

f(x₁, x₂, ..., xₙ|λ) = λⁿ e^{−λ Σᵢ₌₁ⁿ xᵢ}.

Suppose we feel that the prior distribution for λ is also exponential,

f(λ) = k e^{−kλ},  λ > 0,
     = 0,  otherwise,

where k would be chosen depending on the exact knowledge or degree of belief we have about the
value of λ. The joint density of the sample and λ is

f(x₁, x₂, ..., xₙ, λ) = k λⁿ e^{−λ(Σᵢ₌₁ⁿ xᵢ + k)},

and the marginal density of the sample is

f(x₁, x₂, ..., xₙ) = ∫₀^∞ k λⁿ e^{−λ(Σᵢ₌₁ⁿ xᵢ + k)} dλ = k Γ(n+1) / (Σᵢ₌₁ⁿ xᵢ + k)^{n+1}.

Therefore, the posterior density for λ, by equation 10-12, is

f(λ|x₁, x₂, ..., xₙ) = (Σᵢ₌₁ⁿ xᵢ + k)^{n+1} λⁿ e^{−λ(Σᵢ₌₁ⁿ xᵢ + k)} / Γ(n+1),

and we see that the posterior density for λ is a gamma distribution with parameters n + 1 and Σᵢ₌₁ⁿ xᵢ + k.

10-1.5 Applications to Estimation


In this section, we discuss the application of Bayesian inference to the problem of estimating
an unknown parameter of a probability distribution. Let X₁, X₂, ..., Xₙ be a random sample
of the random variable X having density f(x|θ). We want to obtain a point estimate of θ.
Let f(θ) be the prior distribution for θ and let ℓ(θ̂; θ) be the loss function. The loss function
is a penalty function reflecting the "payment" we must make for misidentifying θ by a realization
of its point estimator θ̂. Common choices for ℓ(θ̂; θ) are (θ̂ − θ)² and |θ̂ − θ|. Generally,
the less accurate a realization of θ̂ is, the more we must pay. In conjunction with a
particular loss function, the risk is defined as the expected value of the loss function with
respect to the random variables X₁, X₂, ..., Xₙ comprising θ̂. In other words, the risk is

R(d; θ) = E[ℓ(θ̂; θ)]
        = ∫₋∞^∞ ··· ∫₋∞^∞ ℓ(d(x₁, x₂, ..., xₙ); θ) f(x₁, x₂, ..., xₙ|θ) dx₁ dx₂ ··· dxₙ,

where the function d(x₁, x₂, ..., xₙ), an alternative notation for the estimator θ̂, is simply a
function of the observations. Since θ is considered to be a random variable, the risk is itself
a random variable. We would like to find the function d that minimizes the expected risk.
We write the expected risk as
B(d) = E[R(d; θ)] = ∫₋∞^∞ R(d; θ) f(θ) dθ
     = ∫₋∞^∞ { ∫₋∞^∞ ··· ∫₋∞^∞ ℓ(d(x₁, ..., xₙ); θ) f(x₁, ..., xₙ|θ) dx₁ ··· dxₙ } f(θ) dθ.   (10-13)

We define the Bayes estimator of the parameter θ to be the function d of the sample X₁,
X₂, ..., Xₙ that minimizes the expected risk. On interchanging the order of integration in
equation 10-13 we obtain

B(d) = ∫₋∞^∞ ··· ∫₋∞^∞ { ∫₋∞^∞ ℓ(d(x₁, ..., xₙ); θ) f(x₁, ..., xₙ|θ) f(θ) dθ } dx₁ ··· dxₙ.   (10-14)

The function B will be minimized if we can find a function d that minimizes the quantity
within the large braces in equation 10-14 for every set of the x values. That is, the Bayes
estimator of θ is a function d of the xᵢ that minimizes

∫₋∞^∞ ℓ(d(x₁, ..., xₙ); θ) f(x₁, ..., xₙ|θ) f(θ) dθ
   = f(x₁, ..., xₙ) ∫₋∞^∞ ℓ(d(x₁, ..., xₙ); θ) f(θ|x₁, ..., xₙ) dθ.   (10-15)

Thus, the Bayes estimator of θ is the value θ̂ that minimizes

∫₋∞^∞ ℓ(θ̂; θ) f(θ|x₁, x₂, ..., xₙ) dθ.   (10-16)

If the loss function ℓ(θ̂; θ) is the squared-error loss (θ̂ − θ)², then we may show that the Bayes
estimator of θ, say θ̂, is the mean of the posterior density for θ (refer to Exercise 10-78).

Example 10-10
Consider the situation in Example 10-9, where it was shown that if the random variable X is exponentially
distributed with parameter λ, and if the prior distribution for λ is exponential with parameter
k, then the posterior distribution for λ is a gamma distribution with parameters n + 1 and
Σᵢ₌₁ⁿ xᵢ + k. Therefore, if a squared-error loss function is assumed, the Bayes estimator for λ is the
mean of this gamma distribution,

λ̂ = (n + 1) / (Σᵢ₌₁ⁿ xᵢ + k).

Suppose that in the time-to-failure problem in Example 10-9, a reasonable exponential prior distribution
for λ has parameter k = 140. This is equivalent to saying that the prior estimate for λ is
1/140 = 0.007142. A random sample of size n = 10 yields Σᵢ₌₁¹⁰ xᵢ = 1500. The Bayes estimate of λ is

λ̂ = (n + 1)/(Σᵢ₌₁ⁿ xᵢ + k) = 11/(1500 + 140) = 0.006707.

We may compare this with the results that would have been obtained by classical methods. The maximum
likelihood estimator of the parameter λ in an exponential distribution is

λ̂ = n / Σᵢ₌₁ⁿ xᵢ.

Consequently, the maximum likelihood estimate of λ, based on the foregoing sample data, is

λ̂ = 10/1500 = 0.006667.

Note that the results produced by the two methods differ somewhat. The Bayes estimate is slightly
closer to the prior estimate than is the maximum likelihood estimate.
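The arithmetic of this comparison is summarized in the short sketch below (ours, in plain Python), which computes the prior estimate, the Bayes estimate, and the maximum likelihood estimate from the quantities given above.

# Quantities from Example 10-10: n = 10, sum of failure times 1500,
# exponential prior with parameter k = 140.
n, sum_x, k = 10, 1500.0, 140.0

lam_prior = 1 / k                  # prior estimate of lambda
lam_bayes = (n + 1) / (sum_x + k)  # posterior (gamma) mean under squared-error loss
lam_mle = n / sum_x                # maximum likelihood estimate

print(lam_prior, lam_bayes, lam_mle)  # 0.007142..., 0.006707..., 0.006666...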

Example 10-11
Let X₁, X₂, ..., Xₙ be a random sample from the normal density with mean μ and variance 1, where μ
is unknown. Assume that the prior density for μ is normal with mean 0 and variance 1; that is,

f(μ) = (2π)^{−1/2} e^{−μ²/2},  −∞ < μ < ∞.

The joint conditional density of the sample, given μ, is

f(x₁, x₂, ..., xₙ|μ) = (2π)^{−n/2} e^{−(1/2) Σ(xᵢ−μ)²}
                     = (2π)^{−n/2} e^{−(1/2)(Σxᵢ² − 2μΣxᵢ + nμ²)}.

Thus, the joint density of the sample and μ is

f(x₁, x₂, ..., xₙ, μ) = (2π)^{−(n+1)/2} exp{−(1/2)[Σxᵢ² + (n+1)μ² − 2μnx̄]}.

The marginal density of the sample is

f(x₁, x₂, ..., xₙ) = (2π)^{−(n+1)/2} e^{−(1/2)Σxᵢ²} ∫₋∞^∞ exp{−(1/2)[(n+1)μ² − 2μnx̄]} dμ.

By completing the square in the exponent under the integral, we obtain

f(x₁, x₂, ..., xₙ) = (2π)^{−(n+1)/2} exp{−(1/2)[Σxᵢ² − (nx̄)²/(n+1)]} ∫₋∞^∞ exp{−((n+1)/2)[μ − nx̄/(n+1)]²} dμ
                   = (n+1)^{−1/2} (2π)^{−n/2} exp{−(1/2)[Σxᵢ² − (nx̄)²/(n+1)]},

using the fact that the integral is (2π)^{1/2}/(n+1)^{1/2} (since a normal density must integrate to 1). Now
the posterior density for μ is

f(μ|x₁, x₂, ..., xₙ) = f(x₁, x₂, ..., xₙ, μ) / f(x₁, x₂, ..., xₙ)
                     = [(n+1)^{1/2}/(2π)^{1/2}] exp{−((n+1)/2)[μ − nx̄/(n+1)]²}.

Therefore, the posterior density for μ is a normal density with mean nx̄/(n + 1) and variance (n + 1)⁻¹.
If the loss function ℓ(μ̂; μ) is squared error, the Bayes estimator of μ is

μ̂ = nX̄/(n + 1).

There is a relationship between the Bayes estimator for a parameter and the maximum
likelihood estimator of the same parameter. For large sample sizes the two are nearly
equivalent. In general, the difference between the two estimators is small compared to 1/√n.
In practical problems, a moderate sample size will produce approximately the same estimate
by either the Bayes or the maximum likelihood method, if the sample results are consistent
with the assumed prior information. If the sample results are inconsistent with the prior
assumptions, then the Bayes estimate may differ considerably from the maximum likelihood
estimate. In these circumstances, if the sample results are accepted as being correct, the prior
information must be incorrect. The maximum likelihood estimate would then be the better
estimate to use.

If the sample results do not agree with the prior information, the Bayes estimator will
tend to produce an estimate that is between the maximum likelihood estimate and the prior
assumptions. If there is more inconsistency between the prior information and the sample,
there will be a greater difference between the two estimates. For an illustration of this, refer
to Example 10-10.

10-1.6 Precision of Estimation: The Standard Error


When we report the value of a point estimate, it is usually necessary to give some idea of
its precision. The standard error is the usual measure of precision employed. If θ̂ is an estimator
of θ, then the standard error of θ̂ is just the standard deviation of θ̂, or

σ_θ̂ = √V(θ̂).   (10-17)

If σ_θ̂ involves any unknown parameters, then if we substitute estimates of these parameters
into equation 10-17, we obtain the estimated standard error of θ̂, say σ̂_θ̂. A small standard
error implies that a relatively precise estimate has been reported.

Example 10-12
An article in the Journal of Heat Transfer (Trans. ASME, Ser. C, 96, 1974, p. 59) describes a method
of measuring the thermal conductivity of Armco iron. Using a temperature of 100°F and a power input
of 550 W, the following 10 measurements of thermal conductivity (in Btu/hr-ft-°F) were obtained:

41.60, 41.48, 42.34, 41.95, 41.86,
42.18, 41.72, 42.26, 41.81, 42.04.

A point estimate of mean thermal conductivity at 100°F and 550 W is the sample mean, or

x̄ = 41.924 Btu/hr-ft-°F.

The standard error of the sample mean is σ_X̄ = σ/√n, and since σ is unknown, we may replace it with
the sample standard deviation s = 0.284 to obtain the estimated standard error of x̄,

σ̂_X̄ = s/√n = 0.284/√10 = 0.0898.

Notice that the standard error is about 0.2% of the sample mean, implying that we have obtained a relatively
precise point estimate of thermal conductivity.

When the distribution of θ̂ is unknown or complicated, the standard error of θ̂ may be
difficult to estimate using standard statistical theory. In this case, a computer-intensive
technique called the bootstrap can be used. Efron and Tibshirani (1993) provide an excellent
introduction to the bootstrap technique.

Suppose that the standard error of θ̂ is denoted σ_θ̂. Further, assume the population probability
density function is given by f(x; θ). A bootstrap estimate of σ_θ̂ can be easily
constructed as follows (a computational sketch appears after the list):

1. Given a random sample x₁, x₂, ..., xₙ from f(x; θ), estimate θ; denote the estimate by θ̂.
2. Using the estimate θ̂, generate a sample of size n from the distribution f(x; θ̂). This
   is the bootstrap sample.
3. Using the bootstrap sample, estimate θ. This estimate we denote θ̂*.
4. Generate B bootstrap samples to obtain bootstrap estimates θ̂ᵢ*, for i = 1, 2, ..., B (B
   = 100 or 200 is often used).
5. Let θ̄* = Σᵢ₌₁ᴮ θ̂ᵢ*/B represent the sample mean of the bootstrap estimates.
6. The bootstrap standard error of θ̂ is found with the usual standard deviation
   formula:

   s_θ̂ = √[ Σᵢ₌₁ᴮ (θ̂ᵢ* − θ̄*)² / (B − 1) ].

In the literature, B − 1 is often replaced by B; for large values of B, however, there is little
practical difference in the estimate obtained.
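The six-step procedure is easy to program. The following sketch is ours, not the Minitab session used in Example 10-13 below; it assumes Python with NumPy, and the function name is our own. It applies the parametric bootstrap to the exponential estimator λ̂ = 1/X̄.

import numpy as np

def bootstrap_se(x, B=200, seed=0):
    # Parametric bootstrap standard error for lambda-hat = 1/xbar (exponential model).
    rng = np.random.default_rng(seed)
    n = len(x)
    lam_hat = 1 / np.mean(x)                      # step 1: estimate from the data
    boot = np.empty(B)
    for i in range(B):
        sample = rng.exponential(1 / lam_hat, n)  # step 2: sample from f(x; lam_hat)
        boot[i] = 1 / np.mean(sample)             # steps 3-4: re-estimate lambda
    return boot.mean(), boot.std(ddof=1)          # steps 5-6: mean and standard error

x = np.array([195.2, 201.4, 183.0, 175.1, 205.1,
              191.7, 188.6, 173.5, 200.8, 210.0])
print(bootstrap_se(x))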

Example 10-13
The failure times X of an electronic component are known to follow an exponential distribution with
unknown parameter λ. A random sample of ten components resulted in the following failure times (in
hours):

195.2, 201.4, 183.0, 175.1, 205.1, 191.7, 188.6, 173.5, 200.8, 210.0.

The mean of the exponential distribution is given by E(X) = 1/λ, so a reasonable estimate for λ is
λ̂ = 1/X̄. From the sample data, we find x̄ = 192.44, resulting in λ̂ = 1/192.44 = 0.00520. B = 100
bootstrap samples of size n = 10 were generated using Minitab with f(x; 0.00520) = 0.00520e^{−0.00520x}.
Some of the bootstrap estimates are shown in Table 10-1.

The average of the bootstrap estimates is found to be λ̄* = Σᵢ₌₁¹⁰⁰ λ̂ᵢ*/100 = 0.00551. The standard
error of the estimate is

s_λ̂ = √[ Σᵢ₌₁¹⁰⁰ (λ̂ᵢ* − 0.00551)²/99 ] = 0.00169.
Table 10-1 Bootstrap Estimates for Example 10-13

Sample    Sample Mean, x̄ᵢ*    λ̂ᵢ*
1         243.407             0.00411
2         153.821             0.00650
3         126.554             0.00790
...       ...                 ...
100       204.390             0.00489

10-2 SINGLE-SAMPLE CONFIDENCE INTERVAL ESTIMATION

In many situations, a point estimate does not provide enough information about the parameter
of interest. For example, if we are interested in estimating the mean compression
strength of concrete, a single number may not be very meaningful. An interval estimate of
the form L ≤ μ ≤ U might be more useful. The end points of this interval will be random
variables, since they are functions of sample data.
In general, to construct an interval estimator of the unknown parameter θ, we must find
two statistics, L and U, such that

P{L ≤ θ ≤ U} = 1 − α.   (10-18)

The resulting interval

L ≤ θ ≤ U   (10-19)

is called a 100(1 − α)% confidence interval for the unknown parameter θ. L and U are called
the lower- and upper-confidence limits, respectively, and 1 − α is called the confidence coefficient.
The interpretation of a confidence interval is that if many random samples are collected
and a 100(1 − α)% confidence interval on θ is computed from each sample, then
100(1 − α)% of these intervals will contain the true value of θ. The situation is illustrated in
Fig. 10-4, which shows several 100(1 − α)% confidence intervals for the mean μ of a distribution.
The dots at the center of each interval indicate the point estimate of μ (in this case X̄).
Notice that one of the 15 intervals fails to contain the true value of μ. If this were a 95% confidence
level, in the long run, only 5% of the intervals would fail to contain μ.

Now in practice, we obtain only one random sample and calculate one confidence
interval. Since this interval either will or will not contain the true value of θ, it is not reasonable
to attach a probability level to this specific event. The appropriate statement would
be that θ lies in the observed interval [L, U] with confidence 100(1 − α). This statement has

Figure 10-4 Repeated construction of a confidence interval for μ.
a frequency interpretation; that is, we do not know if the statement is true for this specific
sample, but the method used to obtain the interval [L, U] yields correct statements
100(1 − α)% of the time.

The confidence interval in equation 10-19 might be more properly called a two-sided
confidence interval, as it specifies both a lower and an upper limit on θ. Occasionally, a one-sided
confidence interval might be more appropriate. A one-sided 100(1 − α)% lower-confidence
interval on θ is given by the interval

L ≤ θ,   (10-20)

where the lower-confidence limit L is chosen so that

P{L ≤ θ} = 1 − α.   (10-21)

Similarly, a one-sided 100(1 − α)% upper-confidence interval on θ is given by the interval

θ ≤ U,   (10-22)

where the upper-confidence limit U is chosen so that

P{θ ≤ U} = 1 − α.   (10-23)

The length of the observed two-sided confidence interval is an important measure of
the quality of the information obtained from the sample. The half-interval length θ − L or
U − θ is called the accuracy of the estimator. The longer the confidence interval, the more
confident we are that the interval actually contains the true value of θ. On the other hand,
the longer the interval, the less information we have about the true value of θ. In an ideal
situation, we obtain a relatively short interval with high confidence.

10-2.1 Confidence Interval on the Mean of a Normal Distribution,


Variance Known
Let X be a normal random variable with unknown mean μ and known variance σ², and suppose
that a random sample of size n, X₁, X₂, ..., Xₙ, is taken. A 100(1 − α)% confidence
interval on μ can be obtained by considering the sampling distribution of the sample mean
X̄. In Section 9-3 we noted that the sampling distribution of X̄ is normal if X is normal and
approximately normal if the conditions of the Central Limit Theorem are met. The mean of
X̄ is μ and the variance is σ²/n. Therefore, the distribution of the statistic

Z = (X̄ − μ)/(σ/√n)

is taken to be a standard normal distribution.

The distribution of Z = (X̄ − μ)/(σ/√n) is shown in Fig. 10-5. From examination of
this figure we see that

P{−Z_{α/2} ≤ Z ≤ Z_{α/2}} = 1 − α

Figure 10-5 The distribution of Z.


or

P{−Z_{α/2} ≤ (X̄ − μ)/(σ/√n) ≤ Z_{α/2}} = 1 − α.

This can be rearranged as

P{X̄ − Z_{α/2}σ/√n ≤ μ ≤ X̄ + Z_{α/2}σ/√n} = 1 − α.   (10-24)

Comparing equations 10-24 and 10-18, we see that the 100(1 − α)% two-sided confidence
interval on μ is

X̄ − Z_{α/2}σ/√n ≤ μ ≤ X̄ + Z_{α/2}σ/√n.   (10-25)

Example 10-14
Consider the thermal conductivity data in Example 10-12. Suppose that we want to find a 95% confidence
interval on the mean thermal conductivity of Armco iron. Suppose we know that the standard
deviation of thermal conductivity at 100°F and 550 W is σ = 0.10 Btu/hr-ft-°F. If we assume that
thermal conductivity is normally distributed (or that the conditions of the Central Limit Theorem are
met), then we can use equation 10-25 to construct the confidence interval. A 95% interval implies that
1 − α = 0.95, so α = 0.05, and from Table II in the Appendix Z_{α/2} = Z_{0.05/2} = Z_{0.025} = 1.96. The lower
confidence limit is

L = x̄ − Z_{α/2}σ/√n
  = 41.924 − 1.96(0.10)/√10
  = 41.924 − 0.062
  = 41.862,

and the upper confidence limit is

U = x̄ + Z_{α/2}σ/√n
  = 41.924 + 1.96(0.10)/√10
  = 41.924 + 0.062
  = 41.986.

Thus the 95% two-sided confidence interval is

41.862 ≤ μ ≤ 41.986.

This is our interval of reasonable values for mean thermal conductivity at 95% confidence.
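The same interval can be computed in a few lines. The sketch below is ours, assuming Python with SciPy; the percentage point Z_{0.025} is obtained from the standard normal quantile function rather than from Table II.

import math
from scipy.stats import norm

xbar, sigma, n, alpha = 41.924, 0.10, 10, 0.05

z = norm.ppf(1 - alpha / 2)                  # 1.9599... for alpha = 0.05
half_width = z * sigma / math.sqrt(n)
print(xbar - half_width, xbar + half_width)  # about (41.862, 41.986)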

Confidence Level and Precision of Estimation

Notice that in the previous example our choice of the 95% level of confidence was essentially
arbitrary. What would have happened if we had chosen a higher level of confidence,
say 99%? In fact, doesn't it seem reasonable that we would want the higher level of confidence?
At α = 0.01, we find Z_{α/2} = Z_{0.01/2} = Z_{0.005} = 2.58, while for α = 0.05, Z_{0.025} = 1.96.
Thus, the length of the 95% confidence interval is

2(1.96σ/√n) = 3.92σ/√n,

whereas the length of the 99% confidence interval is

2(2.58σ/√n) = 5.15σ/√n.

The 99% confidence interval is longer than the 95% confidence interval. This is why we
have a higher level of confidence in the 99% confidence interval. Generally, for a fixed sample
size n and standard deviation σ, the higher the confidence level, the longer the resulting
confidence interval.

Since the length of the confidence interval measures the precision of estimation, we see
that precision is inversely related to the confidence level. As noted earlier, it is highly desirable
to obtain a confidence interval that is short enough for decision-making purposes and
that also has adequate confidence. One way to achieve this is by choosing the sample size
n to be large enough to give a confidence interval of specified length with prescribed
confidence.

Choice of Sample Size


The accuracy of the confidence interval in equation 10-25 is Z_{α/2}σ/√n. This means that in
using x̄ to estimate μ, the error E = |x̄ − μ| is less than Z_{α/2}σ/√n with confidence 100(1 −
α). This is shown graphically in Fig. 10-6. In situations where the sample size can be controlled,
we can choose n to be 100(1 − α)% confident that the error in estimating μ is less
than a specified error E. The appropriate sample size is

n = (Z_{α/2}σ/E)².   (10-26)

If the right-hand side of equation 10-26 is not an integer, it must be rounded up. Notice that
2E is the length of the resulting confidence interval.

To illustrate the use of this procedure, suppose that we wanted the error in estimating
the mean thermal conductivity of Armco iron in Example 10-14 to be less than 0.05 Btu/hr-ft-°F,
with 95% confidence. Since σ = 0.10 and Z_{0.025} = 1.96, we may find the required
sample size from equation 10-26 to be

n = (Z_{α/2}σ/E)² = [(1.96)(0.10)/0.05]² = 15.37, which we round up to n = 16.
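A short computation (ours, assuming SciPy) reproduces this sample size, including the rounding-up step.

import math
from scipy.stats import norm

sigma, E, alpha = 0.10, 0.05, 0.05
n = math.ceil((norm.ppf(1 - alpha / 2) * sigma / E) ** 2)
print(n)  # 16, rounding 15.37 up to the next integer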
Notice how, in general, the sample size behaves as a function of the length of the confidence
interval 2E, the confidence level 100(1 − α)%, and the standard deviation σ as
follows:

• As the desired length of the interval 2E decreases, the required sample size n
  increases for a fixed value of σ and specified confidence.
• As σ increases, the required sample size n increases for a fixed length 2E and specified
  confidence.
• As the level of confidence increases, the required sample size n increases for fixed
  length 2E and standard deviation σ.

Figure 10-6 Error E = |x̄ − μ| in estimating μ with x̄; the interval extends from L = X̄ − Z_{α/2}σ/√n to U = X̄ + Z_{α/2}σ/√n.



One-Sided Confidence Intervals


It is also possible to obtain one-sided confidence intervals for μ by setting either L = −∞ or
U = ∞ and replacing Z_{α/2} by Z_α. The 100(1 − α)% upper-confidence interval for μ is

μ ≤ X̄ + Z_ασ/√n,   (10-27)

and the 100(1 − α)% lower-confidence interval for μ is

X̄ − Z_ασ/√n ≤ μ.   (10-28)

10-2.2 Confidence Interval on the Mean of a Normal Distribution,


Variance Unknown
Suppose that we wish to find a confidence interval on the mean of a distribution, but the variance
is unknown. Specifically, a random sample of size n, X₁, X₂, ..., Xₙ, is available, and X̄
and S² are the sample mean and sample variance, respectively. One possibility would be to
replace σ in the confidence interval formulas for μ with known variance (equations 10-25,
10-27, and 10-28) with the sample standard deviation s. If the sample size n is relatively
large, say n > 30, then this is an acceptable procedure. Consequently, we often call the confidence
intervals in Sections 10-2.1 and 10-2.2 large-sample confidence intervals, because
they are approximately valid even if the unknown population variances are replaced by the
corresponding sample variances.

When sample sizes are small, this approach will not work, and we must use another
procedure. To produce a valid confidence interval, we must make a stronger assumption
about the underlying population. The usual assumption is that the underlying population is
normally distributed. This leads to confidence intervals based on the t distribution. Specifically,
let X₁, X₂, ..., Xₙ be a random sample from a normal distribution with unknown mean μ
and unknown variance σ². In Section 9-4 we noted that the sampling distribution of the
statistic

t = (X̄ − μ)/(S/√n)

is the t distribution with n − 1 degrees of freedom. We now show how the confidence interval
on μ is obtained.

The distribution of t = (X̄ − μ)/(S/√n) is shown in Fig. 10-7. Letting t_{α/2,n−1} be the
upper α/2 percentage point of the t distribution with n − 1 degrees of freedom, we observe
from Fig. 10-7 that

P{−t_{α/2,n−1} ≤ t ≤ t_{α/2,n−1}} = 1 − α

Figure 10-7 The t distribution.


or

P{−t_{α/2,n−1} ≤ (X̄ − μ)/(S/√n) ≤ t_{α/2,n−1}} = 1 − α.

Rearranging this last equation yields

P{X̄ − t_{α/2,n−1}S/√n ≤ μ ≤ X̄ + t_{α/2,n−1}S/√n} = 1 − α.   (10-29)

Comparing equations 10-29 and 10-18, we see that a 100(1 − α)% two-sided confidence
interval on μ is

X̄ − t_{α/2,n−1}S/√n ≤ μ ≤ X̄ + t_{α/2,n−1}S/√n.   (10-30)

A 100(1 − α)% lower-confidence interval on μ is given by

X̄ − t_{α,n−1}S/√n ≤ μ,   (10-31)

and a 100(1 − α)% upper-confidence interval on μ is

μ ≤ X̄ + t_{α,n−1}S/√n.   (10-32)

Remember that these procedures assume that we are sampling from a normal population.
This assumption is important for small samples. Fortunately, the normality assumption
holds in many practical situations. When it does not, we must use distribution-free or nonparametric
confidence intervals. Nonparametric methods are discussed in Chapter 16. However,
when the population is normal, the t-distribution intervals are the shortest possible
100(1 − α)% confidence intervals, and are therefore superior to the nonparametric methods.

Selecting the sample size n required to give a confidence interval of required length is
not as easy as in the known σ case, because the length of the interval depends on the value
of σ (unknown before the data is collected) and on n. Furthermore, n enters the confidence
interval through both 1/√n and t_{α/2,n−1}. Consequently, the required n must be determined
through trial and error.

Example 10-15
An article in the Journal of Testing and Evaluation (Vol. 10, No. 4, 1982, p. 133) presents the
following 20 measurements on residual flame time (in seconds) of treated specimens of children's
nightwear:
hts, CER eh), Shi SMO


9.87, 9.67, 9.94, 9.85, 9.75,
9.83, 9.92, 9.74, 9.99, 9.88,
059592939992 ,89:89:
We wish to find a 95% confidence interval on the mean residual flame time. The sample mean
and standard deviation are

x̄ = 9.8475,
s = 0.0954.

From Table IV of the Appendix we find t_{0.025,19} = 2.093. The lower and upper 95% confidence limits
are

L = x̄ − t_{α/2,n−1}s/√n
  = 9.8475 − 2.093(0.0954)/√20
  = 9.8029 seconds
and

U = x̄ + t_{α/2,n−1}s/√n
  = 9.8475 + 2.093(0.0954)/√20
  = 9.8921 seconds.

Therefore the 95% confidence interval is

9.8029 sec ≤ μ ≤ 9.8921 sec.

We are 95% confident that the mean residual flame time is between 9.8029 and 9.8921 seconds.
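The computation can be reproduced as follows (our sketch, assuming SciPy); t_{0.025,19} is obtained from the t quantile function rather than from Table IV.

import math
from scipy.stats import t

xbar, s, n, alpha = 9.8475, 0.0954, 20, 0.05

t_crit = t.ppf(1 - alpha / 2, df=n - 1)      # 2.093 for 19 degrees of freedom
half_width = t_crit * s / math.sqrt(n)
print(xbar - half_width, xbar + half_width)  # about (9.8029, 9.8921)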

10-2.3 Confidence Interval on the Variance of a Normal Distribution


Suppose that X is normally distributed with unknown mean pt and unknown variance 0”. Let
X,, X,, ...,X, be arandom sample of size n, and let S? be the sample variance. It was shown
in Section 9-3 that the sampling distribution of
aS (n -1)S?
o

is chi-square with n — 1 degrees of freedom. This distribution is shown in Fig. 10-8.


To develop the confidence interval, we note from Fig. 10-8 that

1 RORE ES EMGET NSC)


or

n-1)S?
Pla = ae s Be| =l-a.

This last equation can be rearranged to yieid


_1)s? _1)¢2
Pe ge (10-33)
Xa/2,n-1 Xi-a/2,n-1
Comparing equations 10-33 and 10-18, we see that a 100(1 — @&)% two-sided confidence
interval for 0” is
We wa vee
ed <o< (n~1)S"_ (10-34)
7
Xa/2,n-1 X\-a/2,n-1

al2

0 xi She at Nae nt x?
Figure 10-8 The x’ distribution.
To find a 100(1 − α)% lower-confidence interval on σ², set U = ∞ and replace χ²_{α/2,n−1} with
χ²_{α,n−1}, giving

(n − 1)S²/χ²_{α,n−1} ≤ σ².   (10-35)

The 100(1 − α)% upper-confidence interval is found by setting L = 0 and replacing
χ²_{1−α/2,n−1} with χ²_{1−α,n−1}, resulting in

σ² ≤ (n − 1)S²/χ²_{1−α,n−1}.   (10-36)
Example 10-16
A manufacturer of soft drink beverages is interested in the uniformity of the machine used to fill cans.
Specifically, it is desirable that the standard deviation σ of the filling process be less than 0.2 fluid
ounces; otherwise there will be a higher than allowable percentage of cans that are underfilled. We
will assume that fill volume is approximately normally distributed. A random sample of 20 cans results
in a sample variance of s² = 0.0225 (fluid ounces)². A 95% upper-confidence interval is found from
equation 10-36 as follows:

σ² ≤ (n − 1)s²/χ²_{0.95,19},

or

σ² ≤ (19)0.0225/10.117 = 0.0423 (fluid ounces)².

This last statement may be converted into a confidence interval on the standard deviation σ by taking
the square root of both sides, resulting in

σ ≤ 0.21 fluid ounces.

Therefore, at the 95% level of confidence, the data do not support the claim that the process standard
deviation is less than 0.20 fluid ounces.
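The sketch below (ours, assuming SciPy) reproduces this upper limit. Note that SciPy's chi2.ppf takes a lower-tail probability, while the text's subscript denotes an upper-tail area.

from scipy.stats import chi2

s2, n, alpha = 0.0225, 20, 0.05

# The text's chi-squared value with upper-tail area 0.95 is chi2.ppf(0.05, 19).
upper = (n - 1) * s2 / chi2.ppf(alpha, df=n - 1)
print(upper, upper ** 0.5)  # about 0.0423 (fl oz)^2 and 0.21 fl oz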

10-2.4 Confidence Interval on a Proportion


It is often necessary to construct a 100(1 − α)% confidence interval on a proportion. For
example, suppose that a random sample of size n has been taken from a large (possibly infinite)
population, and X (≤ n) observations in this sample belong to a class of interest. Then
p̂ = X/n is the point estimator of the proportion of the population that belongs to this class.
Note that n and p are the parameters of a binomial distribution. Furthermore, in Section 7-5
we saw that the sampling distribution of p̂ is approximately normal with mean p and variance
p(1 − p)/n, if p is not too close to either 0 or 1, and if n is relatively large. Thus, the
distribution of

Z = (p̂ − p)/√(p(1 − p)/n)

is approximately standard normal.

To construct the confidence interval on p, note that
P{−Z_{α/2} ≤ Z ≤ Z_{α/2}} ≈ 1 − α,

or

P{p̂ − Z_{α/2}√(p(1 − p)/n) ≤ p ≤ p̂ + Z_{α/2}√(p(1 − p)/n)} ≈ 1 − α.   (10-37)

We recognize the quantity √(p(1 − p)/n) as the standard error of the point estimator p̂.
Unfortunately, the upper and lower limits of the confidence interval obtained from equation
10-37 would contain the unknown parameter p. However, a satisfactory solution is to
replace p by p̂ in the standard error, giving an estimated standard error. Therefore,

P{p̂ − Z_{α/2}√(p̂(1 − p̂)/n) ≤ p ≤ p̂ + Z_{α/2}√(p̂(1 − p̂)/n)} ≈ 1 − α,   (10-38)

and the approximate 100(1 − α)% two-sided confidence interval on p is

p̂ − Z_{α/2}√(p̂(1 − p̂)/n) ≤ p ≤ p̂ + Z_{α/2}√(p̂(1 − p̂)/n).   (10-39)

An approximate 100(1 − α)% lower-confidence interval is

p̂ − Z_α√(p̂(1 − p̂)/n) ≤ p,   (10-40)

and an approximate 100(1 − α)% upper-confidence interval is

p ≤ p̂ + Z_α√(p̂(1 − p̂)/n).   (10-41)

Example 10-17
In a random sample of 75 axle shafts, 12 have a surface finish that is rougher than the specifications
will allow. Therefore, a point estimate of the proportion p of shafts in the population that exceed the
roughness specifications is p̂ = x/n = 12/75 = 0.16. A 95% two-sided confidence interval for p is computed
from equation 10-39 as

p̂ − Z_{0.025}√(p̂(1 − p̂)/n) ≤ p ≤ p̂ + Z_{0.025}√(p̂(1 − p̂)/n),

or

0.16 − 1.96√(0.16(0.84)/75) ≤ p ≤ 0.16 + 1.96√(0.16(0.84)/75),

which simplifies to

0.08 ≤ p ≤ 0.24.
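The interval is reproduced by the following sketch (ours, assuming SciPy).

import math
from scipy.stats import norm

x, n, alpha = 12, 75, 0.05
p_hat = x / n

z = norm.ppf(1 - alpha / 2)
half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(p_hat - half_width, p_hat + half_width)  # about (0.08, 0.24)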


Define the error in estimating p by p̂ as E = |p − p̂|. Note that we are approximately
100(1 − α)% confident that this error is less than Z_{α/2}√(p(1 − p)/n). Therefore, in situations
where the sample size can be selected, we may choose n to be 100(1 − α)% confident that
the error is less than some specified value E. The appropriate sample size is

n = (Z_{α/2}/E)² p(1 − p).   (10-42)

An estimate of p is required to use equation 10-42. If an estimate p̂ from a previous sample
is available, it could be substituted for p in equation 10-42, or perhaps a subjective estimate
could be made. If these alternatives are unsatisfactory, a preliminary sample could be taken,
p̂ computed, and then equation 10-42 used to determine how many additional observations
are required to estimate p with the desired accuracy. The function p(1 − p) is relatively flat
from p = 0.3 to p = 0.7, and the sample size from equation 10-42 will always be a maximum for
p = 0.5 [that is, p(1 − p) = 0.25], which can be used to obtain an upper bound on n. In other
words, we are at least 100(1 − α)% confident that the error in estimating p by p̂ is less
than E if the sample size is

n = (Z_{α/2}/E)²(0.25).

In order to maintain at least a 100(1 − α)% level of confidence, the value for n is always
rounded up to the next integer.

Example 10-18
Consider the data in Example 10-17. How large a sample is required if we want to be 95% confident
that the error in using p̂ to estimate p is less than 0.05? Using p̂ = 0.16 as an initial estimate of p, we
find from equation 10-42 that the required sample size is

n = (Z_{0.025}/E)² p̂(1 − p̂) = (1.96/0.05)²(0.16)(0.84) ≈ 207.

We note that the procedures developed in this section depend on the normal approximation
to the binomial. In situations where this approximation is inappropriate, particularly
cases where n is small, other methods must be used. Tables of the binomial distribution
could be used to obtain a confidence interval for p. If n is large but p is small, then the Poisson
approximation to the binomial could be used to construct confidence intervals. These
procedures are illustrated by Duncan (1986).

Agresti and Coull (1998) present an alternative form of a confidence interval on the
population proportion, p, based on a large-sample hypothesis test on p (see Chapter 11
of this text). Agresti and Coull show that the upper and lower limits of an approximate
100(1 − α)% confidence interval on p are

[ p̂ + Z²_{α/2}/(2n) ± Z_{α/2}√(p̂(1 − p̂)/n + Z²_{α/2}/(4n²)) ] / (1 + Z²_{α/2}/n).

The authors refer to this as the score confidence interval. One-sided confidence intervals
can be constructed simply by replacing Z_{α/2} with Z_α.
To illustrate this confidence interval, reconsider Example 10-17, which discusses the
surface finish of axle shafts, with n = 75 and p̂ = 0.16. The lower and upper limits of a 95% confidence
interval using the approach of Agresti and Coull are

[ p̂ + Z²_{0.025}/(2n) ± Z_{0.025}√(p̂(1 − p̂)/n + Z²_{0.025}/(4n²)) ] / (1 + Z²_{0.025}/n)
  = [ 0.16 + (1.96)²/150 ± 1.96√(0.16(0.84)/75 + (1.96)²/22500) ] / (1 + (1.96)²/75)
  = (0.186 ± 0.087)/1.0512
  = 0.177 ± 0.083.

The resulting lower- and upper-confidence limits are 0.094 and 0.260, respectively.

Agresti and Coull argue that the more complicated confidence interval has several
advantages over the standard large-sample interval (given in equation 10-39). One advantage
is that their confidence interval tends to maintain the stated level of confidence better
than the standard large-sample interval. Another advantage is that the lower-confidence
limit will always be non-negative. The large-sample confidence interval can result in negative
lower-confidence limits, which the practitioner will generally then set to 0. A method
which can report a negative lower limit on a parameter that is inherently non-negative (such
as a proportion, p) is often considered an inferior method. Lastly, the requirements that p
not be close to 0 or 1 and that n be relatively large are not requirements for the approach suggested
by Agresti and Coull. In other words, their approach results in an appropriate confidence
interval for any combination of n and p.
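The score interval is only slightly more work to compute than the standard interval. The sketch below (ours, assuming SciPy) reproduces the Agresti–Coull limits for the axle-shaft data.

import math
from scipy.stats import norm

x, n, alpha = 12, 75, 0.05
p_hat = x / n
z = norm.ppf(1 - alpha / 2)

center = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
print(center - half, center + half)  # about (0.094, 0.260)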

10-3 TWO-SAMPLE CONFIDENCE INTERVAL ESTIMATION


10-3.1 Confidence Interval on the Difference between Means of Two
Normal Distributions, Variances Known
Consider two independent random variables, X₁ with unknown mean μ₁ and known variance
σ₁², and X₂ with unknown mean μ₂ and known variance σ₂². We wish to find a 100(1 − α)%
confidence interval on the difference in means μ₁ − μ₂. Let X₁₁, X₁₂, ..., X₁ₙ₁ be a random
sample of n₁ observations from X₁, and X₂₁, X₂₂, ..., X₂ₙ₂ be a random sample of n₂ observations
from X₂. If X̄₁ and X̄₂ are the sample means, the statistic

Z = [X̄₁ − X̄₂ − (μ₁ − μ₂)] / √(σ₁²/n₁ + σ₂²/n₂)

is standard normal if X₁ and X₂ are normal, or approximately standard normal if the conditions
of the Central Limit Theorem apply. From Fig. 10-5, this implies that

P{−Z_{α/2} ≤ Z ≤ Z_{α/2}} = 1 − α,

or

P{−Z_{α/2} ≤ [X̄₁ − X̄₂ − (μ₁ − μ₂)]/√(σ₁²/n₁ + σ₂²/n₂) ≤ Z_{α/2}} = 1 − α.
This can be rearranged as

P{X̄₁ − X̄₂ − Z_{α/2}√(σ₁²/n₁ + σ₂²/n₂) ≤ μ₁ − μ₂ ≤ X̄₁ − X̄₂ + Z_{α/2}√(σ₁²/n₁ + σ₂²/n₂)} = 1 − α.   (10-43)

Comparing equations 10-43 and 10-18, we note that the 100(1 − α)% confidence interval
for μ₁ − μ₂ is

X̄₁ − X̄₂ − Z_{α/2}√(σ₁²/n₁ + σ₂²/n₂) ≤ μ₁ − μ₂ ≤ X̄₁ − X̄₂ + Z_{α/2}√(σ₁²/n₁ + σ₂²/n₂).   (10-44)

One-sided confidence intervals on μ₁ − μ₂ may also be obtained. A 100(1 − α)% upper-confidence
interval on μ₁ − μ₂ is

μ₁ − μ₂ ≤ X̄₁ − X̄₂ + Z_α√(σ₁²/n₁ + σ₂²/n₂),   (10-45)

and a 100(1 − α)% lower-confidence interval is

X̄₁ − X̄₂ − Z_α√(σ₁²/n₁ + σ₂²/n₂) ≤ μ₁ − μ₂.   (10-46)

Example 10-19
Tensile strength tests were performed on two different grades of aluminum spars used in manufacturing
the wing of a commercial transport aircraft. From past experience with the spar manufacturing
process and the testing procedure, the standard deviations of tensile strengths are assumed to be
known. The data obtained are shown in Table 10-2.

Table 10-2 Tensile Strength Test Results for Aluminum Spars

Spar Grade    Sample Size    Sample Mean Tensile Strength (kg/mm²)    Standard Deviation (kg/mm²)
1             n₁ = 10        x̄₁ = 87.6                                σ₁ = 1.0
2             n₂ = 12        x̄₂ = 74.5                                σ₂ = 1.5

If μ₁ and μ₂ denote the true mean tensile strengths for the two grades of spars, then we may find
a 90% confidence interval on the difference in mean strength μ₁ − μ₂ as follows:

L = x̄₁ − x̄₂ − Z_{α/2}√(σ₁²/n₁ + σ₂²/n₂)
  = 87.6 − 74.5 − 1.645√((1.0)²/10 + (1.5)²/12)
  = 13.1 − 0.88
  = 12.22 kg/mm²,

U = x̄₁ − x̄₂ + Z_{α/2}√(σ₁²/n₁ + σ₂²/n₂)
  = 87.6 − 74.5 + 1.645√((1.0)²/10 + (1.5)²/12)
  = 13.1 + 0.88
  = 13.98 kg/mm².

Therefore the 90% confidence interval on the difference in mean tensile strength is

12.22 kg/mm² ≤ μ₁ − μ₂ ≤ 13.98 kg/mm².

We are 90% confident that the mean tensile strength of grade 1 aluminum exceeds that of grade 2 aluminum
by between 12.22 and 13.98 kg/mm².

If the standard deviations σ₁ and σ₂ are known (at least approximately), and if the sample
sizes n₁ and n₂ are equal (n₁ = n₂ = n, say), then we can determine the sample size
required so that the error in estimating μ₁ − μ₂ using X̄₁ − X̄₂ will be less than E at 100(1 −
α)% confidence. The required sample size from each population is

n = (Z_{α/2}/E)²(σ₁² + σ₂²).   (10-47)

Remember to round up if n is not an integer.

10-3.2 Confidence Interval on the Difference between


Means of Two Normal Distributions, Variances Unknown
We now extend the results of Section 10-2.2 to the case of two populations with unknown
means and variances, and we wish to find confidence intervals on the difference in means
μ₁ − μ₂. If the sample sizes n₁ and n₂ both exceed 30, then the known-variances normal-distribution
intervals in Section 10-3.1 can be used, with the sample variances replacing the
unknown population variances. However, when small samples are taken, we must assume
that the underlying populations are normally distributed and base the confidence intervals
on the t distribution.

Case I. σ₁² = σ₂² = σ²

Consider two independent normal random variables, say X₁ with mean μ₁ and variance σ₁²,
and X₂ with mean μ₂ and variance σ₂². Both the means μ₁ and μ₂ and the variances σ₁² and
σ₂² are unknown. However, suppose it is reasonable to assume that both variances are equal;
that is, σ₁² = σ₂² = σ². We wish to find a 100(1 − α)% confidence interval on the difference
in means μ₁ − μ₂.

Random samples of size n₁ and n₂ are taken on X₁ and X₂, respectively. Let the sample
means be denoted X̄₁ and X̄₂ and the sample variances be denoted S₁² and S₂². Since both S₁²
and S₂² are estimates of the common variance σ², we may obtain a combined (or "pooled")
estimator of σ²:

S_p² = [(n₁ − 1)S₁² + (n₂ − 1)S₂²] / (n₁ + n₂ − 2).   (10-48)
To develop the confidence interval for μ₁ − μ₂, note that the distribution of the
statistic

t = [X̄₁ − X̄₂ − (μ₁ − μ₂)] / [S_p √(1/n₁ + 1/n₂)]

is the t distribution with n₁ + n₂ − 2 degrees of freedom. Therefore,

P{−t_{α/2,n₁+n₂−2} ≤ t ≤ t_{α/2,n₁+n₂−2}} = 1 − α,

or

P{−t_{α/2,n₁+n₂−2} ≤ [X̄₁ − X̄₂ − (μ₁ − μ₂)]/[S_p√(1/n₁ + 1/n₂)] ≤ t_{α/2,n₁+n₂−2}} = 1 − α.

This may be rearranged as

P{X̄₁ − X̄₂ − t_{α/2,n₁+n₂−2} S_p√(1/n₁ + 1/n₂) ≤ μ₁ − μ₂
   ≤ X̄₁ − X̄₂ + t_{α/2,n₁+n₂−2} S_p√(1/n₁ + 1/n₂)} = 1 − α.   (10-49)

Therefore, a 100(1 − α)% two-sided confidence interval for the difference in means μ₁ − μ₂
is

X̄₁ − X̄₂ − t_{α/2,n₁+n₂−2} S_p√(1/n₁ + 1/n₂) ≤ μ₁ − μ₂
   ≤ X̄₁ − X̄₂ + t_{α/2,n₁+n₂−2} S_p√(1/n₁ + 1/n₂).   (10-50)

A one-sided 100(1 − α)% lower-confidence interval on μ₁ − μ₂ is

X̄₁ − X̄₂ − t_{α,n₁+n₂−2} S_p√(1/n₁ + 1/n₂) ≤ μ₁ − μ₂,   (10-51)

and a one-sided 100(1 − α)% upper-confidence interval on μ₁ − μ₂ is

μ₁ − μ₂ ≤ X̄₁ − X̄₂ + t_{α,n₁+n₂−2} S_p√(1/n₁ + 1/n₂).   (10-52)

Example 10-20
In a batch chemical process used for etching printed circuit boards, two different catalysts are being
compared to determine whether they require different immersion times for removal of identical quantities
of photoresist material. Twelve batches were run with catalyst 1, resulting in a sample mean
immersion time of x̄₁ = 24.6 minutes and a sample standard deviation of s₁ = 0.85 minutes. Fifteen
batches were run with catalyst 2, resulting in a mean immersion time of x̄₂ = 22.1 minutes and a standard
deviation of s₂ = 0.98 minutes. We will find a 95% confidence interval on the difference in means
μ₁ − μ₂, assuming that the standard deviations (or variances) of the two populations are equal. The
pooled estimate of the common variance is found using equation 10-48 as follows:

s_p² = [(n₁ − 1)s₁² + (n₂ − 1)s₂²]/(n₁ + n₂ − 2)
     = [11(0.85)² + 14(0.98)²]/(12 + 15 − 2)
     = 0.8557.

The pooled standard deviation is s_p = √0.8557 = 0.925. Since t_{α/2,n₁+n₂−2} = t_{0.025,25} = 2.060, we may
calculate the 95% lower and upper confidence limits as

L = x̄₁ − x̄₂ − t_{α/2,n₁+n₂−2} s_p√(1/n₁ + 1/n₂)
  = 24.6 − 22.1 − 2.060(0.925)√(1/12 + 1/15)
  = 1.76 minutes

and

U = x̄₁ − x̄₂ + t_{α/2,n₁+n₂−2} s_p√(1/n₁ + 1/n₂)
  = 24.6 − 22.1 + 2.060(0.925)√(1/12 + 1/15)
  = 3.24 minutes.

That is, the 95% confidence interval on the difference in mean immersion times is

1.76 minutes ≤ μ₁ − μ₂ ≤ 3.24 minutes.

We are 95% confident that catalyst 1 requires an immersion time that is between 1.76 minutes and 3.24
minutes longer than that required by catalyst 2.
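The pooled-variance interval is reproduced by the following sketch (ours, assuming SciPy).

import math
from scipy.stats import t

n1, xbar1, s1 = 12, 24.6, 0.85
n2, xbar2, s2 = 15, 22.1, 0.98
alpha = 0.05

sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)  # pooled variance
sp = math.sqrt(sp2)
t_crit = t.ppf(1 - alpha / 2, df=n1 + n2 - 2)                # 2.060 with 25 df
half = t_crit * sp * math.sqrt(1 / n1 + 1 / n2)
diff = xbar1 - xbar2
print(diff - half, diff + half)  # about (1.76, 3.24) minutes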

Case II. σ₁² ≠ σ₂²

In many situations it is not reasonable to assume that σ₁² = σ₂². When this assumption is
unwarranted, one may still find a 100(1 − α)% confidence interval for μ₁ − μ₂ using the fact
that the statistic

t* = [X̄₁ − X̄₂ − (μ₁ − μ₂)] / √(S₁²/n₁ + S₂²/n₂)

is distributed approximately as t with degrees of freedom given by

ν = (S₁²/n₁ + S₂²/n₂)² / [ (S₁²/n₁)²/(n₁ + 1) + (S₂²/n₂)²/(n₂ + 1) ] − 2.   (10-53)

Consequently, an approximate 100(1 − α)% two-sided confidence interval for μ₁ − μ₂,
when σ₁² ≠ σ₂², is

X̄₁ − X̄₂ − t_{α/2,ν}√(S₁²/n₁ + S₂²/n₂) ≤ μ₁ − μ₂ ≤ X̄₁ − X̄₂ + t_{α/2,ν}√(S₁²/n₁ + S₂²/n₂).   (10-54)

Upper (lower) one-sided confidence limits may be found by replacing the lower (upper)
confidence limit with −∞ (+∞) and changing α/2 to α.

10-3.3 Confidence Interval on μ₁ − μ₂ for Paired Observations


In Sections 10-3.1 and 10-3.2 we developed confidence intervals for the difference in means
where two independent random samples were selected from the two populations of inter-
est. That is, n, observations were selected at random from the first population and a com-
pletely independent sample of n, observations was selected at random from the second
population. There are also a number of experimental situations where there are only n dif-
ferent experimental units and the data are collected in pairs; that is, two observations are
made on each unit.
For example, the journal Human Factors (1962, p. 375) reports a study in which 14
subjects were asked to park two cars having substantially different wheelbases and turning
radii. The time in seconds was recorded for each car and subject, and the resulting data are
shown in Table 10-3. Notice that each subject is the “experimental unit” referred to earlier.
We wish to obtain a confidence interval on the difference in mean time to park the two cars, say μ₁ − μ₂.
In general, suppose that the data consist of n pairs (X₁₁, X₂₁), (X₁₂, X₂₂), ..., (X₁ₙ, X₂ₙ). Both X₁ and X₂ are assumed to be normally distributed with means μ₁ and μ₂, respectively. The random variables within different pairs are independent. However, because there are two measurements on the same experimental unit, the two measurements within the same pair may not be independent. Consider the n differences D₁ = X₁₁ − X₂₁, D₂ = X₁₂ − X₂₂, ..., Dₙ = X₁ₙ − X₂ₙ. Now the mean of the differences D, say μ_D, is

$$\mu_D = E(D) = E(X_1 - X_2) = E(X_1) - E(X_2) = \mu_1 - \mu_2,$$

because the expected value of X₁ − X₂ is the difference in expected values regardless of whether X₁ and X₂ are independent. Consequently, we can construct a confidence interval for μ₁ − μ₂ just by finding a confidence interval on μ_D. Since the differences Dᵢ are normally and independently distributed, we can use the t-distribution procedure described in Section

Table 10-3 Time in Seconds to Parallel Park Two Automobiles

Subject    Automobile 1    Automobile 2    Difference
1          37.0            17.8             19.2
2          25.8            20.2              5.6
3          16.2            16.8             -0.6
4          24.2            41.4            -17.2
5          22.0            21.4              0.6
6          33.4            38.4             -5.0
7          23.8            16.8              7.0
8          58.2            32.2             26.0
9          33.6            27.8              5.8
10         24.4            23.2              1.2
11         23.4            29.6             -6.2
12         21.2            20.6              0.6
13         36.2            32.2              4.0
14         29.8            53.8            -24.0

10-2.2 to find the confidence interval on μ_D. By analogy with equation 10-30, the 100(1 − α)% confidence interval on μ_D = μ₁ − μ₂ is

$$\bar{D} - t_{\alpha/2,n-1}\, S_D/\sqrt{n} \le \mu_D \le \bar{D} + t_{\alpha/2,n-1}\, S_D/\sqrt{n}, \qquad (10\text{-}55)$$

where D̄ and S_D are the sample mean and sample standard deviation of the differences Dᵢ, respectively. This confidence interval is valid for the case where σ₁² ≠ σ₂², because S_D² estimates σ_D² = V(X₁ − X₂). Also, for large samples (say n ≥ 30 pairs), the assumption of normality is unnecessary.

Example 10-21
We now return to the data in Table 10-3 concerning the time for n = 14 subjects to parallel park two cars. From the column of observed differences dⱼ we calculate d̄ = 1.21 and s_d = 12.68. The 90% confidence interval for μ_D = μ₁ − μ₂ is found from equation 10-55 as follows:

$$\bar{d} - t_{0.05,13}\, s_d/\sqrt{n} \le \mu_D \le \bar{d} + t_{0.05,13}\, s_d/\sqrt{n},$$

$$1.21 - 1.771(12.68)/\sqrt{14} \le \mu_D \le 1.21 + 1.771(12.68)/\sqrt{14},$$

$$-4.79 \le \mu_D \le 7.21.$$

Notice that the confidence interval on μ_D includes zero. This implies that, at the 90% level of confidence, the data do not support the claim that the two cars have different mean parking times μ₁ and μ₂. That is, the value μ_D = μ₁ − μ₂ = 0 is not inconsistent with the observed data.

Note that when pairing data, degrees of freedom are lost in comparison with the two-sample confidence intervals, but typically a gain in precision of estimation is achieved because S_D is smaller than S_p.
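A quick sketch (assuming Python with NumPy/SciPy) reproduces Example 10-21 from the raw differences in Table 10-3.

```python
import numpy as np
from scipy import stats

# Differences from Table 10-3 (automobile 1 minus automobile 2)
d = np.array([19.2, 5.6, -0.6, -17.2, 0.6, -5.0, 7.0,
              26.0, 5.8, 1.2, -6.2, 0.6, 4.0, -24.0])
n, dbar, sd = len(d), d.mean(), d.std(ddof=1)        # n = 14, 1.21, 12.68
half = stats.t.ppf(0.95, n - 1) * sd / np.sqrt(n)    # t_{0.05,13} = 1.771
print(dbar - half, dbar + half)                      # approximately (-4.79, 7.21)
```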

10-3.4 Confidence Interval on the Ratio of Variances


of Two Normal Distributions
Suppose that X₁ and X₂ are independent normal random variables with unknown means μ₁ and μ₂ and unknown variances σ₁² and σ₂², respectively. We wish to find a 100(1 − α)% confidence interval on the ratio σ₁²/σ₂². Let two random samples of sizes n₁ and n₂ be taken on X₁ and X₂, and let S₁² and S₂² denote the sample variances. To find the confidence interval, we note that the sampling distribution of

$$F = \frac{S_2^2/\sigma_2^2}{S_1^2/\sigma_1^2}$$

is F with n₂ − 1 and n₁ − 1 degrees of freedom. This distribution is shown in Fig. 10-9.
From Fig. 10-9, we see that

$$P\{F_{1-\alpha/2,\,n_2-1,\,n_1-1} \le F \le F_{\alpha/2,\,n_2-1,\,n_1-1}\} = 1 - \alpha,$$

or

$$P\!\left\{F_{1-\alpha/2,\,n_2-1,\,n_1-1} \le \frac{S_2^2/\sigma_2^2}{S_1^2/\sigma_1^2} \le F_{\alpha/2,\,n_2-1,\,n_1-1}\right\} = 1 - \alpha.$$

Hence

$$P\!\left\{\frac{S_1^2}{S_2^2}F_{1-\alpha/2,\,n_2-1,\,n_1-1} \le \frac{\sigma_1^2}{\sigma_2^2} \le \frac{S_1^2}{S_2^2}F_{\alpha/2,\,n_2-1,\,n_1-1}\right\} = 1 - \alpha. \qquad (10\text{-}56)$$

Figure 10-9 The distribution of $F_{n_2-1,\,n_1-1}$.

Comparing equations 10-56 and 10-18, we see that a 100(1 − α)% two-sided confidence interval for σ₁²/σ₂² is

$$\frac{S_1^2}{S_2^2}F_{1-\alpha/2,\,n_2-1,\,n_1-1} \le \frac{\sigma_1^2}{\sigma_2^2} \le \frac{S_1^2}{S_2^2}F_{\alpha/2,\,n_2-1,\,n_1-1}, \qquad (10\text{-}57)$$

where the lower 1 − α/2 tail point of the $F_{n_2-1,\,n_1-1}$ distribution is given by (see equation 9-22)

$$F_{1-\alpha/2,\,n_2-1,\,n_1-1} = \frac{1}{F_{\alpha/2,\,n_1-1,\,n_2-1}}. \qquad (10\text{-}58)$$

We may also construct one-sided confidence intervals. A 100(1 − α)% lower-confidence limit on σ₁²/σ₂² is

$$\frac{S_1^2}{S_2^2}F_{1-\alpha,\,n_2-1,\,n_1-1} \le \frac{\sigma_1^2}{\sigma_2^2}, \qquad (10\text{-}59)$$

while a 100(1 − α)% upper-confidence interval on σ₁²/σ₂² is

$$\frac{\sigma_1^2}{\sigma_2^2} \le \frac{S_1^2}{S_2^2}F_{\alpha,\,n_2-1,\,n_1-1}. \qquad (10\text{-}60)$$

Example 10-22.
Consider the batch chemical etching process described in Example 10-20. Recall that two catalysts are being compared to measure their effectiveness in reducing immersion times for printed circuit boards. Samples of n₁ = 12 batches were run with catalyst 1 and n₂ = 15 batches with catalyst 2, yielding s₁ = 0.85 minutes and s₂ = 0.98 minutes. We will find a 90% confidence interval on the ratio of variances σ₁²/σ₂². From equation 10-57, we find that

$$\frac{s_1^2}{s_2^2}F_{0.95,14,11} \le \frac{\sigma_1^2}{\sigma_2^2} \le \frac{s_1^2}{s_2^2}F_{0.05,14,11},$$

$$\frac{(0.85)^2}{(0.98)^2}(0.39) \le \frac{\sigma_1^2}{\sigma_2^2} \le \frac{(0.85)^2}{(0.98)^2}(2.74),$$

or

$$0.29 \le \frac{\sigma_1^2}{\sigma_2^2} \le 2.06,$$

using the fact that F₀.₉₅,₁₄,₁₁ = 1/F₀.₀₅,₁₁,₁₄ = 1/2.58 = 0.39. Since this confidence interval includes unity, we could not claim that the standard deviations of the immersion times for the two catalysts are different at the 90% level of confidence.
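The percentile bookkeeping for the F distribution is easy to get backwards, so a short check is worthwhile. This sketch (Python/SciPy assumed; note that SciPy's ppf takes lower-tail probabilities, while the text's F notation denotes upper-tail points) reproduces Example 10-22.

```python
from scipy import stats

def var_ratio_ci(s1, n1, s2, n2, alpha=0.10):
    """Two-sided 100(1 - alpha)% CI on sigma1^2/sigma2^2 (equation 10-57)."""
    ratio = s1**2 / s2**2
    f_lo = stats.f.ppf(alpha / 2, n2 - 1, n1 - 1)      # F_{1-alpha/2,n2-1,n1-1}
    f_hi = stats.f.ppf(1 - alpha / 2, n2 - 1, n1 - 1)  # F_{alpha/2,n2-1,n1-1}
    return ratio * f_lo, ratio * f_hi

print(var_ratio_ci(0.85, 12, 0.98, 15))                # approximately (0.29, 2.06)
```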

10-3.5 Confidence Interval on the Difference between Two Proportions

If there are two proportions of interest, say p₁ and p₂, it is possible to obtain a 100(1 − α)% confidence interval on their difference, p₁ − p₂. Suppose two independent samples of sizes n₁ and n₂ are taken from infinite populations, so that X₁ and X₂ are independent binomial random variables with parameters (n₁, p₁) and (n₂, p₂), respectively, where X₁ represents the number of sample observations from the first population that belong to a class of interest and X₂ represents the number of sample observations from the second population that belong to the class of interest. Then p̂₁ = X₁/n₁ and p̂₂ = X₂/n₂ are independent estimators of p₁ and p₂, respectively. Furthermore, under the assumption that the normal approximation to the binomial applies, the statistic

$$\frac{\hat{p}_1 - \hat{p}_2 - (p_1 - p_2)}{\sqrt{\dfrac{p_1(1-p_1)}{n_1} + \dfrac{p_2(1-p_2)}{n_2}}}$$

is distributed approximately as standard normal. Using an approach analogous to that of the previous section, it follows that an approximate 100(1 − α)% two-sided confidence interval for p₁ − p₂ is

$$\hat{p}_1 - \hat{p}_2 - Z_{\alpha/2}\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}} \le p_1 - p_2 \le \hat{p}_1 - \hat{p}_2 + Z_{\alpha/2}\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}. \qquad (10\text{-}61)$$

An approximate 100(1 − α)% lower-confidence interval for p₁ − p₂ is

$$\hat{p}_1 - \hat{p}_2 - Z_{\alpha}\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}} \le p_1 - p_2, \qquad (10\text{-}62)$$

and an approximate 100(1 − α)% upper-confidence interval for p₁ − p₂ is

$$p_1 - p_2 \le \hat{p}_1 - \hat{p}_2 + Z_{\alpha}\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}. \qquad (10\text{-}63)$$

Example 10-23
Consider the data in Example 10-17. Suppose that a modification is made in the surface finishing process and subsequently a second random sample of 85 axle shafts is obtained. The number of defective shafts in this second sample is 10. Therefore, since n₁ = 75, p̂₁ = 0.16, n₂ = 85, and p̂₂ = 10/85 = 0.12, we can obtain an approximate 95% confidence interval on the difference in the proportions of defectives produced under the two processes from equation 10-61 as

$$\hat{p}_1 - \hat{p}_2 - Z_{0.025}\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}} \le p_1 - p_2 \le \hat{p}_1 - \hat{p}_2 + Z_{0.025}\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}},$$

or

$$0.16 - 0.12 - 1.96\sqrt{\frac{0.16(0.84)}{75} + \frac{0.12(0.88)}{85}} \le p_1 - p_2 \le 0.16 - 0.12 + 1.96\sqrt{\frac{0.16(0.84)}{75} + \frac{0.12(0.88)}{85}}.$$

This simplifies to

$$-0.07 \le p_1 - p_2 \le 0.15.$$


This interval includes zero, so, based on the sample data, it seems unlikely that the changes made in
the surface finish process have reduced the proportion of defective axle shafts being produced.
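A sketch of the same calculation in Python (names ours) follows; it agrees with the hand computation up to rounding, since the text rounds p̂₂ = 10/85 to 0.12.

```python
import numpy as np
from scipy import stats

def two_prop_ci(x1, n1, x2, n2, alpha=0.05):
    """Approximate 100(1 - alpha)% CI on p1 - p2 (equation 10-61)."""
    p1, p2 = x1 / n1, x2 / n2
    z = stats.norm.ppf(1 - alpha / 2)
    half = z * np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) - half, (p1 - p2) + half

# Example 10-23: 12 defectives in 75 shafts vs. 10 defectives in 85 shafts
print(two_prop_ci(12, 75, 10, 85))        # approximately (-0.07, 0.15)
```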

10-4 APPROXIMATE CONFIDENCE INTERVALS


IN MAXIMUM LIKELIHOOD ESTIMATION
If the method of maximum likelihood is used for parameter estimation, the asymptotic properties of these estimators may be used to obtain approximate confidence intervals. Let θ̂ be the maximum likelihood estimator of θ. For large samples, θ̂ is approximately normally distributed with mean θ and variance V(θ̂) given by the Cramér–Rao lower bound (equation 10-4). Therefore, an approximate 100(1 − α)% confidence interval for θ is

$$\hat{\theta} - Z_{\alpha/2}\left[V(\hat{\theta})\right]^{1/2} \le \theta \le \hat{\theta} + Z_{\alpha/2}\left[V(\hat{\theta})\right]^{1/2}. \qquad (10\text{-}64)$$

Usually, V(θ̂) is a function of the unknown parameter θ. In these cases, replace θ with θ̂.

Example 10-24
Recall Example 10-3, where it was shown that the maximum likelihood estimator of the parameter p of a Bernoulli distribution is $\hat{p} = (1/n)\sum_{i=1}^{n} X_i = \bar{X}$. Using the Cramér–Rao lower bound, we may verify that the lower bound for the variance of p̂ is

$$V(\hat{p}) \ge \frac{1}{nE\left[\left(\dfrac{\partial \ln f(X)}{\partial p}\right)^2\right]} = \frac{1}{nE\left[\left(\dfrac{X}{p} - \dfrac{1-X}{1-p}\right)^2\right]}.$$

For the Bernoulli distribution, we observe that E(X) = p and E(X²) = p. Therefore, this last expression simplifies to

$$V(\hat{p}) \ge \frac{p(1-p)}{n}.$$

This result should not be surprising, since we know directly that for the Bernoulli distribution, $V(\bar{X}) = V(X_i)/n = p(1-p)/n$. In any case, replacing p in V(p̂) by p̂, the approximate 100(1 − α)% confidence interval for p is found from equation 10-64 to be

$$\hat{p} - Z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \le p \le \hat{p} + Z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}.$$
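As a minimal sketch of this interval in practice (Python assumed; the simulated data below are illustrative, not from the text), one might write:

```python
import numpy as np
from scipy import stats

def bernoulli_mle_ci(x, alpha=0.05):
    """Approximate CI for p based on the asymptotic normality of the MLE."""
    x = np.asarray(x)
    n, phat = len(x), x.mean()                    # phat is the MLE of p
    se = np.sqrt(phat * (1 - phat) / n)           # estimated sqrt of V(phat)
    z = stats.norm.ppf(1 - alpha / 2)
    return phat - z * se, phat + z * se

rng = np.random.default_rng(1)
print(bernoulli_mle_ci(rng.binomial(1, 0.3, size=200)))
```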

10-5 SIMULTANEOUS CONFIDENCE INTERVALS


Occasionally it is necessary to construct several confidence intervals on more than one parameter, and we wish the probability to be (1 − α) that all such confidence intervals simultaneously produce correct statements. For example, suppose that we are sampling from a normal population with unknown mean and variance, and we wish to construct confidence intervals for μ and σ² such that the probability is (1 − α) that both intervals simultaneously yield correct conclusions. Since X̄ and S² are independent, we could ensure this result by constructing 100(1 − α)^{1/2}% confidence intervals for each parameter separately, and both intervals would simultaneously produce correct conclusions with probability (1 − α)^{1/2}(1 − α)^{1/2} = (1 − α).
If the sample statistics on which the confidence intervals are based are not independent random variables, then the confidence intervals are not independent, and other methods must be used. In general, suppose that m confidence intervals are required. The Bonferroni inequality states that

$$P\{\text{all } m \text{ statements are simultaneously correct}\} = 1 - \alpha \ge 1 - \sum_{i=1}^{m}\alpha_i, \qquad (10\text{-}65)$$

where 1 − αᵢ is the confidence level used in the ith confidence interval. In practice, we select a value for the simultaneous confidence level 1 − α and then choose the individual αᵢ such that Σᵢ₌₁ᵐ αᵢ = α. Usually, we set αᵢ = α/m.
As an illustration, suppose we wished to construct two confidence intervals on the means of two normal distributions such that we are at least 90% confident that both statements are simultaneously correct. Therefore, since 1 − α = 0.90, we have α = 0.10, and since two confidence intervals are required, each of these should be constructed with αᵢ = α/2 = 0.10/2 = 0.05, i = 1, 2. That is, two individual 95% confidence intervals on μ₁ and μ₂ will simultaneously lead to correct statements with probability at least 0.90.
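The Bonferroni adjustment is mechanical enough to automate. The sketch below (Python; names ours) builds t intervals on the means of several samples, each at level αᵢ = α/m, so the joint confidence is at least 1 − α.

```python
import numpy as np
from scipy import stats

def bonferroni_t_intervals(samples, alpha=0.10):
    """t intervals on several means with simultaneous confidence >= 1 - alpha."""
    m = len(samples)
    intervals = []
    for x in samples:
        x = np.asarray(x)
        n = len(x)
        t = stats.t.ppf(1 - (alpha / m) / 2, n - 1)   # each interval uses alpha/m
        half = t * x.std(ddof=1) / np.sqrt(n)
        intervals.append((x.mean() - half, x.mean() + half))
    return intervals
```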

10-6 BAYESIAN CONFIDENCE INTERVALS


Previously, we presented Bayesian techniques for point estimation. In this section, we will present the Bayesian approach to constructing confidence intervals.
We may use Bayesian methods to construct interval estimates of parameters that are similar to confidence intervals. If the posterior density for θ has been obtained, we can construct an interval, usually centered at the posterior mean, that contains 100(1 − α)% of the posterior probability. Such an interval is called the 100(1 − α)% Bayes interval for the unknown parameter θ.
While in many cases the Bayes interval estimate for θ will be quite similar to a classical confidence interval with the same confidence coefficient, the interpretation of the two is very different. A confidence interval is an interval that, before the sample is taken, will include the unknown θ with probability 1 − α. That is, the classical confidence interval relates to the relative frequency of an interval including θ. On the other hand, a Bayes interval is an interval that contains 100(1 − α)% of the posterior probability for θ. Since the posterior probability density measures a degree of belief about θ given the sample results, the Bayes interval provides a subjective degree of belief about θ rather than a frequency interpretation. The Bayes interval estimate of θ is affected by the sample results but is not completely determined by them.

Example 10-25
Suppose that the random variable X is normally distributed with mean μ and variance 4. The value of μ is unknown, but a reasonable prior density would be normal with mean 2 and variance 1. That is,

$$f(x|\mu) = \frac{1}{2\sqrt{2\pi}}\, e^{-(x-\mu)^2/8}$$

and

$$f(\mu) = \frac{1}{\sqrt{2\pi}}\, e^{-(\mu-2)^2/2}.$$

Using the methods of Section 10-1.4, the posterior density for μ given x₁, x₂, ..., xₙ is found to be

$$f(\mu|x_1, x_2, \ldots, x_n) = \frac{1}{\sqrt{2\pi}}\left(\frac{n+4}{4}\right)^{1/2} \exp\left[-\frac{1}{2}\left(\frac{n+4}{4}\right)\left(\mu - \frac{n\bar{x}+8}{n+4}\right)^2\right].$$

Thus, the posterior distribution for μ is normal with mean (nx̄ + 8)/(n + 4) and variance 4/(n + 4). A 95% Bayes interval for μ, which is symmetric about the posterior mean, would be

$$\frac{n\bar{x}+8}{n+4} - Z_{0.025}\sqrt{\frac{4}{n+4}} \le \mu \le \frac{n\bar{x}+8}{n+4} + Z_{0.025}\sqrt{\frac{4}{n+4}}. \qquad (10\text{-}66)$$

If a random sample of size 16 is taken and we find that x̄ = 2.5, equation 10-66 reduces to

$$1.52 \le \mu \le 3.28.$$

If we ignore the prior information, the classical confidence interval for μ is

$$1.52 \le \mu \le 3.48.$$

We see that the Bayes interval is slightly shorter than the classical confidence interval, because the prior information is equivalent to a slight increase in the sample size if no prior knowledge was assumed.
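The normal-prior, normal-likelihood update in Example 10-25 generalizes to any prior mean μ₀ and prior variance τ² (our symbols); a sketch in Python:

```python
import numpy as np
from scipy import stats

def normal_bayes_interval(xbar, n, sigma2, mu0, tau2, alpha=0.05):
    """Bayes interval for mu: N(mu, sigma2) likelihood, N(mu0, tau2) prior."""
    post_var = 1.0 / (n / sigma2 + 1.0 / tau2)
    post_mean = post_var * (n * xbar / sigma2 + mu0 / tau2)
    z = stats.norm.ppf(1 - alpha / 2)
    return post_mean - z * np.sqrt(post_var), post_mean + z * np.sqrt(post_var)

# Example 10-25: variance 4, prior N(2, 1), n = 16, xbar = 2.5
print(normal_bayes_interval(2.5, 16, 4.0, 2.0, 1.0))   # approximately (1.52, 3.28)
```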

10-7 BOOTSTRAP CONFIDENCE INTERVALS


In Section 10-1.6 we introduced the bootstrap technique for estimating the standard error of a parameter estimate θ̂. The bootstrap technique can also be used to construct a confidence interval on θ.
For an arbitrary parameter θ, general 100(1 − α)% lower and upper limits are, respectively,

L = θ̂ − [100(1 − α/2) percentile of (θ̂* − θ̂)],
U = θ̂ − [100(α/2) percentile of (θ̂* − θ̂)].

Bootstrap samples can be generated to estimate the values of L and U.
Suppose B bootstrap samples are generated and the estimates θ̂₁*, θ̂₂*, ..., θ̂_B* and their average θ̄* are calculated. From these estimates, we then compute the differences θ̂₁* − θ̄*, θ̂₂* − θ̄*, ..., θ̂_B* − θ̄*, arrange the differences in increasing order, and find the necessary 100(1 − α/2) and 100(α/2) percentiles for L and U. For example, if B = 200 and a 90% confidence interval is desired, then the 100(1 − 0.10/2) = 95th percentile and the 100(0.10/2) = 5th percentile would be the 190th difference and the 10th difference, respectively.

Example 10-26
An electronic device consists of four components. The time to failure for each component follows an
exponential distribution and the components are identical to and independent of one another. The
electronic device will fail only after all four components have failed. The times to failure for the elec-
tronic components have been collected for 15 such devices. The total times to failure are

78.7778, 13.5260, 6.8291, 47.3746, 16.2033, 27.5387, 28.2515, 38.5826,


35.4363, 80.2757, 50.3861, 81.3155, 42.2532, 33.9970, 57.4312.

It is of interest to construct a 90% confidence interval on the exponential parameter λ. By definition, the sum of r independent and identically distributed exponential random variables follows a gamma distribution, here gamma(r, λ). Therefore, r = 4, but λ must be estimated. A bootstrap estimate for λ can be found using the technique given in Section 10-1.6. Using the time-to-failure data above, we find the average time to failure to be x̄ = 42.545. The mean of a gamma distribution is E(X) = r/λ, so an estimate of λ is calculated for each bootstrap sample. Running Minitab® for B = 100 bootstrap samples, we found the bootstrap estimate λ̄* = 0.0949. Using the bootstrap estimates for each sample, the differences can be calculated; some of the calculations are shown in Table 10-4.
When the 100 differences are arranged in increasing order, the 5th and 95th percentiles turn out to be −0.0205 and 0.0232, respectively. Therefore, the resulting confidence limits are

L = 0.0949 − 0.0232 = 0.0717,
U = 0.0949 − (−0.0205) = 0.1154.

We are approximately 90% confident that the true value of λ lies between 0.0717 and 0.1154. Figure 10-10 displays the histogram of the bootstrap estimates λ̂ᵢ*, while Fig. 10-11 depicts the differences λ̂ᵢ* − λ̄*. The bootstrap estimates are reasonable when the estimator is unbiased and the standard error is approximately constant.

Table 10-4 Bootstrap Estimates for Example 10-26

Sample    λ̂ᵢ*         λ̂ᵢ* − λ̄*
1         0.087316    −0.0075392
2         0.090689    −0.0041660
3         0.096664     0.0018094
⋮
100       0.090193    −0.0046623



Figure 10-10 Histogram of the bootstrap estimates λ̂ᵢ*.

Figure 10-11 Histogram of the differences λ̂ᵢ* − λ̄*.
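The bootstrap procedure of Example 10-26 is easy to replicate. The sketch below (Python; we resample the 15 device totals with replacement, one common bootstrap scheme) follows the percentile-of-differences recipe of this section; the exact limits depend on the random seed and on B.

```python
import numpy as np

rng = np.random.default_rng(42)
times = np.array([78.7778, 13.5260, 6.8291, 47.3746, 16.2033, 27.5387,
                  28.2515, 38.5826, 35.4363, 80.2757, 50.3861, 81.3155,
                  42.2532, 33.9970, 57.4312])
r, B = 4, 100                       # gamma(r, lambda) with r = 4; B resamples

# lambda-hat = r / xbar for each bootstrap resample of the device totals
lam_star = np.array([r / rng.choice(times, size=len(times)).mean()
                     for _ in range(B)])
lam_bar = lam_star.mean()           # bootstrap estimate of lambda
lo, hi = np.percentile(np.sort(lam_star - lam_bar), [5, 95])
print(lam_bar - hi, lam_bar - lo)   # 90% limits; near (0.07, 0.12), seed-dependent
```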

10-8 OTHER INTERVAL ESTIMATION PROBLEMS

10-8.1 Prediction Intervals


So far in this chapter we have presented interval estimators for population parameters, such as the mean μ. There are many situations where the practitioner would like to predict a single future observation of the random variable of interest instead of estimating the average of that random variable. A prediction interval can be constructed for any single observation at some future time.
Consider a random sample of size n, X₁, X₂, ..., Xₙ, from a normal population with mean μ and variance σ². Let the sample average be denoted by X̄. Suppose we wish to predict the future observation X_{n+1}. Since X̄ is the point predictor for this observation, the prediction error is given by X_{n+1} − X̄. The expected value and variance of the prediction error are

$$E(X_{n+1} - \bar{X}) = E(X_{n+1}) - E(\bar{X}) = \mu - \mu = 0$$



and

$$\mathrm{Var}(X_{n+1} - \bar{X}) = \mathrm{Var}(X_{n+1}) + \mathrm{Var}(\bar{X}) = \sigma^2 + \frac{\sigma^2}{n} = \sigma^2\left(1 + \frac{1}{n}\right).$$

Since X_{n+1} and X̄ are independent, normally distributed random variables, the prediction error is also normally distributed, and

$$Z = \frac{(X_{n+1} - \bar{X}) - 0}{\sigma\sqrt{1 + \dfrac{1}{n}}}$$

is standard normal. If σ² is unknown, it can be estimated by the sample variance, S², and then

$$T = \frac{X_{n+1} - \bar{X}}{S\sqrt{1 + \dfrac{1}{n}}}$$

follows the t distribution with n − 1 degrees of freedom.


Following the usual procedure for constructing confidence intervals, the two-sided 100(1 − α)% prediction interval is

$$-t_{\alpha/2,n-1} \le \frac{X_{n+1} - \bar{X}}{S\sqrt{1 + \dfrac{1}{n}}} \le t_{\alpha/2,n-1}.$$

By rearranging the inequality we obtain the final form for the two-sided 100(1 − α)% prediction interval:

$$\bar{X} - t_{\alpha/2,n-1}\sqrt{S^2\left(1+\frac{1}{n}\right)} \le X_{n+1} \le \bar{X} + t_{\alpha/2,n-1}\sqrt{S^2\left(1+\frac{1}{n}\right)}. \qquad (10\text{-}67)$$

The lower one-sided 100(1 − α)% prediction interval on X_{n+1} is given by

$$\bar{X} - t_{\alpha,n-1}\sqrt{S^2\left(1+\frac{1}{n}\right)} \le X_{n+1}. \qquad (10\text{-}68)$$

The upper one-sided 100(1 − α)% prediction interval on X_{n+1} is given by

$$X_{n+1} \le \bar{X} + t_{\alpha,n-1}\sqrt{S^2\left(1+\frac{1}{n}\right)}. \qquad (10\text{-}69)$$

Example 10-27
Maximum forces experienced by a transport aircraft for an airline on a particular route for 10 flights are (in units of gravity, g)

1.55, 1.23, 1.50, 1.69, 1.71, 1.83, …

The sample average and sample standard deviation are calculated to be x̄ = 1.666 and s = 0.273, respectively. It may be of importance to predict the next maximum force experienced by the aircraft. Since t₀.₀₂₅,₉ = 2.262, the 95% prediction interval on X₁₁ is

$$1.666 - 2.262\sqrt{(0.273)^2\left(1+\frac{1}{10}\right)} \le X_{11} \le 1.666 + 2.262\sqrt{(0.273)^2\left(1+\frac{1}{10}\right)},$$

$$1.018 \le X_{11} \le 2.314.$$
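A sketch of the prediction-interval arithmetic (Python; summary statistics are passed directly, since only x̄, s, and n are needed):

```python
import numpy as np
from scipy import stats

def prediction_interval(xbar, s, n, alpha=0.05):
    """Two-sided 100(1 - alpha)% prediction interval, equation 10-67."""
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    half = t * s * np.sqrt(1 + 1 / n)
    return xbar - half, xbar + half

print(prediction_interval(1.666, 0.273, 10))   # approximately (1.018, 2.314)
```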

10-8.2 Tolerance Intervals

As presented earlier in this chapter, confidence intervals are intervals in which we expect the true population parameter, such as μ, to lie. In contrast, tolerance intervals are intervals in which we expect a stated percentage of the population values to lie.
Suppose that X is a normally distributed random variable with mean μ and variance σ². We would expect approximately 90% of all values of X to be contained within the interval μ ± 1.645σ. But what if μ and σ are unknown and must be estimated? Using the point estimates x̄ and s for a sample of size n, we can construct the interval x̄ ± 1.645s. Unfortunately, due to the variability in estimating μ and σ, the resulting interval may contain less than 90% of the values. In this particular instance, a value larger than 1.645 will be needed to guarantee 90% coverage when using point estimates for the population parameters. We can construct an interval that will contain the stated percentage of population values and be relatively confident in the result. For example, we may want to be 90% confident that the resulting interval covers at least 95% of the population values. This type of interval is referred to as a tolerance interval and can be constructed easily for various confidence levels.
In general, for 0 < q < 100, the two-sided tolerance interval for covering at least q% of the values from a normal population with 100(1 − α)% confidence is x̄ ± ks. The value k is a constant tabulated for various combinations of q and 100(1 − α). Values of k are given in Table XIV of the Appendix for q = 90, 95, and 99 and for 100(1 − α) = 90, 95, and 99.
The lower one-sided tolerance interval for covering at least q% of the values from a normal population with 100(1 − α)% confidence is x̄ − ks. The upper one-sided tolerance interval for covering at least q% of the values from a normal population with 100(1 − α)% confidence is x̄ + ks. Various values of k for one-sided tolerance intervals were calculated using the technique given in Odeh and Owens (1980) and are provided in Table XIV of the Appendix.

Example 10-28
Reconsider the maximum forces for the transport aircraft in Example 10-27. A two-sided tolerance interval is desired that would cover 99% of all maximum forces with 95% confidence. From Table XIV (Appendix), with 1 − α = 0.95, q = 99, and n = 10, we find that k = 4.433. The sample average and sample standard deviation were calculated previously as x̄ = 1.666 and s = 0.273, respectively. The resulting tolerance interval is then

1.666 ± 4.433(0.273)

or

(0.456, 2.876).

Therefore, we conclude that we are 95% confident that at least 99% of all maximum forces would lie between 0.456 g and 2.876 g.

It is possible to construct nonparametric tolerance intervals that are based on the extreme values in a random sample of size n from any continuous population. If P is the minimum proportion of the population contained between the largest and smallest observations with confidence 1 − α, then it can be shown that

$$nP^{n-1} - (n-1)P^n = \alpha.$$

Further, the required n is approximately

$$n \approx \frac{1}{2} + \frac{1+P}{1-P}\cdot\frac{\chi^2_{\alpha,4}}{4}. \qquad (10\text{-}70)$$

Thus, in order to be 95% certain that at least 90% of the population will be included between the extreme values of the sample, we require a sample of size

$$n \approx \frac{1}{2} + \frac{1.90}{0.10}\cdot\frac{9.488}{4} \approx 46.$$

Note that there is a fundamental difference between confidence limits and tolerance limits. Confidence limits (and thus confidence intervals) are used to estimate a parameter of a population, while tolerance limits (and tolerance intervals) are used to indicate the limits between which we can expect to find a proportion of a population. As n approaches infinity, the length of a confidence interval approaches zero, while tolerance limits approach the corresponding quantiles for the population.
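Equation 10-70 is simple to evaluate; the following sketch (Python/SciPy) reproduces the n ≈ 46 figure above.

```python
from scipy import stats

def nonparametric_tolerance_n(P, alpha):
    """Approximate n so the sample extremes cover a proportion P
    of the population with confidence 1 - alpha (equation 10-70)."""
    chi2 = stats.chi2.ppf(1 - alpha, 4)          # upper alpha point, 4 df
    return 0.5 + ((1 + P) / (1 - P)) * chi2 / 4

print(nonparametric_tolerance_n(0.90, 0.05))     # about 45.6, so n = 46
```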

10-9 SUMMARY
This chapter has introduced the point and interval estimation of unknown parameters. A number of methods of obtaining point estimators were discussed, including the method of maximum likelihood and the method of moments. The method of maximum likelihood usually leads to estimators that have good statistical properties. Confidence intervals were derived for a variety of parameter estimation problems. These intervals have a frequency interpretation. The two-sided confidence intervals developed in Sections 10-2 and 10-3 are summarized in Table 10-5. In some instances, one-sided confidence intervals may be appropriate. These may be obtained by setting one confidence limit in the two-sided confidence interval equal to the lower (or upper) limit of a feasible region for the parameter and using α instead of α/2 as the probability level on the remaining upper (or lower) confidence limit. Confidence intervals using a bootstrapping technique were introduced. Tolerance intervals were also presented. Approximate confidence intervals in maximum likelihood estimation and simultaneous confidence intervals were also briefly introduced.
Table 10-5 Summary of Confidence Interval Procedures

Problem Type                                     Point Estimator    Two-Sided 100(1 − α)% Confidence Interval

Mean μ of a normal distribution,                 X̄                  X̄ − Z_{α/2}σ/√n ≤ μ ≤ X̄ + Z_{α/2}σ/√n
variance σ² known

Difference in means μ₁ − μ₂ of two normal        X̄₁ − X̄₂            X̄₁ − X̄₂ ± Z_{α/2}√(σ₁²/n₁ + σ₂²/n₂)
distributions, variances σ₁² and σ₂² known

Mean μ of a normal distribution,                 X̄                  X̄ ± t_{α/2,n−1}S/√n
variance σ² unknown

Difference in means μ₁ − μ₂ of two normal        X̄₁ − X̄₂            X̄₁ − X̄₂ ± t_{α/2,n₁+n₂−2}S_p√(1/n₁ + 1/n₂),
distributions, σ₁² = σ₂² unknown                                    where S_p² = [(n₁−1)S₁² + (n₂−1)S₂²]/(n₁ + n₂ − 2)

Difference in means of two normal                D̄                  D̄ ± t_{α/2,n−1}S_D/√n
distributions for paired samples,
μ_D = μ₁ − μ₂

Variance σ² of a normal distribution             S²                 (n−1)S²/χ²_{α/2,n−1} ≤ σ² ≤ (n−1)S²/χ²_{1−α/2,n−1}

Ratio of the variances σ₁²/σ₂² of two            S₁²/S₂²            (S₁²/S₂²)F_{1−α/2,n₂−1,n₁−1} ≤ σ₁²/σ₂² ≤ (S₁²/S₂²)F_{α/2,n₂−1,n₁−1}
normal distributions

Proportion p of a binomial distribution          p̂                  p̂ ± Z_{α/2}√(p̂(1−p̂)/n)

Difference in two proportions p₁ − p₂            p̂₁ − p̂₂            p̂₁ − p̂₂ ± Z_{α/2}√(p̂₁(1−p̂₁)/n₁ + p̂₂(1−p̂₂)/n₂)
of two binomial distributions

10-10 EXERCISES
10-1. Suppose we have a random sample of size 2n from a population denoted X, with E(X) = μ and V(X) = σ². Let

X̄₁ = (1/2n) Σᵢ₌₁²ⁿ Xᵢ  and  X̄₂ = (1/n) Σᵢ₌₁ⁿ Xᵢ

be two estimators of μ. Which is the better estimator of μ? Explain your choice.

10-2. Let X₁, X₂, ..., X₇ denote a random sample from a population having mean μ and variance σ². Consider the following estimators of μ:

θ̂₁ = (X₁ + X₂ + ⋯ + X₇)/7,
θ̂₂ = (2X₁ − X₆ + X₄)/2.

Is either estimator unbiased? Which estimator is "better"? In what sense is it better?

10-3. Suppose that θ̂₁ and θ̂₂ are estimators of the parameter θ. We know that E(θ̂₁) = θ, E(θ̂₂) = θ/2, V(θ̂₁) = 10, and V(θ̂₂) = 4. Which estimator is "better"? In what sense is it better?

10-4. Suppose that θ̂₁, θ̂₂, and θ̂₃ are estimators of θ. We know that E(θ̂₁) = E(θ̂₂) = θ, E(θ̂₃) ≠ θ, V(θ̂₁) = 12, V(θ̂₂) = 10, and E(θ̂₃ − θ)² = 6. Compare these three estimators. Which do you prefer? Why?

10-5. Let three random samples of sizes n₁ = 10, n₂ = 8, and n₃ = 6 be taken from a population with mean μ and variance σ². Let S₁², S₂², and S₃² be the sample variances. Show that

S² = (10S₁² + 8S₂² + 6S₃²)/24

is an unbiased estimator of σ².

10-6. Best Linear Unbiased Estimators. An estimator θ̂ is called a linear estimator if it is a linear combination of the observations in the sample. θ̂ is called a best linear unbiased estimator if, of all linear functions of the observations, it both is unbiased and has minimum variance. Show that the sample mean X̄ is the best linear unbiased estimator of the population mean μ.

10-7. Find the maximum likelihood estimator of the parameter c of the Poisson distribution, based on a random sample of size n.

10-8. Find the estimator of c in the Poisson distribution by the method of moments, based on a random sample of size n.

10-9. Find the maximum likelihood estimator of the parameter λ in the exponential distribution, based on a random sample of size n.

10-10. Find the estimator of λ in the exponential distribution by the method of moments, based on a random sample of size n.

10-11. Find moment estimators of the parameters r and λ of the gamma distribution, based on a random sample of size n.

10-12. Let X be a geometric random variable with parameter p. Find an estimator of p by the method of moments, based on a random sample of size n.

10-13. Let X be a geometric random variable with parameter p. Find the maximum likelihood estimator of p, based on a random sample of size n.

10-14. Let X be a Bernoulli random variable with parameter p. Find an estimator of p by the method of moments, based on a random sample of size n.

10-15. Let X be a binomial random variable with parameters n (known) and p. Find an estimator of p by the method of moments, based on a random sample of size N.

10-16. Let X be a binomial random variable with parameters n and p, both unknown. Find estimators of n and p by the method of moments, based on a random sample of size N.

10-17. Let X be a binomial random variable with parameters n (unknown) and p. Find the maximum likelihood estimator of p, based on a random sample of size N.

10-18. Set up the likelihood function for a random sample of size n from a Weibull distribution. What difficulties would be encountered in obtaining the maximum likelihood estimators of the three parameters of the Weibull distribution?

10-19. Prove that if θ̂ is an unbiased estimator of θ, and if lim_{n→∞} V(θ̂) = 0, then θ̂ is a consistent estimator of θ.

10-20. Let X be a random variable with mean μ and variance σ². Given two random samples of sizes n₁ and n₂ with sample means X̄₁ and X̄₂, respectively, show that

X̄ = aX̄₁ + (1 − a)X̄₂,  0 < a < 1,

is an unbiased estimator of μ. Assuming X̄₁ and X̄₂ to be independent, find the value of a that minimizes the variance of X̄.

10-21. Suppose that the random variable X has the probability distribution

f(x) = (γ + 1)x^γ,  0 < x < 1,
     = 0,  otherwise.

Let X₁, X₂, ..., Xₙ be a random sample of size n. Find the maximum likelihood estimator of γ.

10-22. Let X have the truncated (on the left at x₀) exponential distribution

f(x) = λ exp[−λ(x − x₀)],  x > x₀ > 0,
     = 0,  otherwise.

Let X₁, X₂, ..., Xₙ be a random sample of size n. Find the maximum likelihood estimator of λ.

10-23. Assume that λ in the previous exercise is known but x₀ is unknown. Obtain the maximum likelihood estimator of x₀.

10-24. Let X be a random variable with mean μ and variance σ², and let X₁, X₂, ..., Xₙ be a random sample of size n from X. Show that the estimator G = K Σᵢ₌₁ⁿ⁻¹ (X_{i+1} − X_i)² is unbiased for an appropriate choice for K. Find the appropriate value for K.

10-25. Let X be a normally distributed random variable with mean μ and variance σ². Assume that σ² is known and μ unknown. The prior density for μ is assumed to be normal with mean μ₀ and variance σ₀². Determine the posterior density for μ, given a random sample of size n from X.

10-26. Let X be normally distributed with known mean μ and unknown variance σ². Assume that the prior density for 1/σ² is a gamma distribution with parameters m + 1 and mσ₀². Determine the posterior density for 1/σ², given a random sample of size n from X.

10-27. Let X be a geometric random variable with parameter p. Suppose we assume a beta distribution with parameters a and b as the prior density for p. Determine the posterior density for p, given a random sample of size n from X.

10-28. Let X be a Bernoulli random variable with parameter p. If the prior density for p is a beta distribution with parameters a and b, determine the posterior density for p, given a random sample of size n from X.

10-29. Let X be a Poisson random variable with parameter λ. The prior density for λ is a gamma distribution with parameters m + 1 and (m + 1)/λ₀. Determine the posterior density for λ, given a random sample of size n from X.

10-30. Suppose that X ~ N(μ, 40), and let the prior density for μ be N(4, 8). For a random sample of size 25, the value x̄ = 4.85 is obtained. What is the Bayes estimate of μ, assuming a squared-error loss?

10-31. A process manufactures printed circuit boards. A locating notch is drilled a distance X from a component hole on the board. The distance is a random variable X ~ N(μ, 0.01). The prior density for μ is uniform between 0.98 and 1.20 inches. A random sample of size 4 produces the value x̄ = 1.05. Assuming a squared-error loss, determine the Bayes estimate of μ.

10-32. The time between failures of a milling machine is exponentially distributed with parameter λ. Suppose we assume an exponential prior on λ with a mean of 3000 hours. Two machines are observed and the average time between failures is x̄ = 3135 hours. Assuming a squared-error loss, determine the Bayes estimate of λ.

10-33. The weight of boxes of candy is normally distributed with mean μ and variance 1/10. It is reasonable to assume a prior density for μ that is normal with a mean of 10 pounds and a variance of 3. Determine the Bayes estimate of μ given that a sample of size 25 produces x̄ = 10.05 pounds. If boxes that weigh less than 9.95 pounds are defective, what is the probability that defective boxes will be produced?

10-34. The number of defects that occur on a silicon wafer used in integrated circuit manufacturing is known to be a Poisson random variable with parameter λ. Assume that the prior density for λ is exponential with a parameter of 0.25. A total of 45 defects were observed on 10 wafers. Set up an integral that defines a 95% Bayes interval for λ. What difficulties would you encounter in evaluating this integral?

10-35. The random variable X has a density function

f(x|θ) = 1/θ,  0 < x < θ,

and the prior density for θ is

f(θ) = 1,  0 < θ < 1.

(a) Find the posterior density for θ assuming n = 1.
(b) Find the Bayes estimator for θ assuming the loss function ℓ(θ̂; θ) = θ̂²(θ − θ̂)² and n = 1.

10-36. Let X follow the Bernoulli distribution with parameter p. Assume a reasonable prior density for p to be

f(p) = 6p(1 − p),  0 ≤ p ≤ 1,
     = 0,  otherwise.

If the loss function is squared error, find the Bayes estimator of p if one observation is available. If the loss function is

ℓ(p̂; p) = 2(p̂ − p)²,

find the Bayes estimator of p for n = 1.

10-37. Consider the confidence interval for μ with known standard deviation σ:

X̄ − Z_{α₁}σ/√n ≤ μ ≤ X̄ + Z_{α₂}σ/√n,

where α₁ + α₂ = α. Let α = 0.05 and find the interval for α₁ = α₂ = α/2 = 0.025. Now find the interval for the case α₁ = 0.01 and α₂ = 0.04. Which interval is shorter? Is there any advantage to a "symmetric" confidence interval?

10-38. When X₁, X₂, ..., Xₙ are independent Poisson random variables, each with parameter λ, and when n is relatively large, the sample mean X̄ is approximately normal with mean λ and variance λ/n.
(a) What is the distribution of the statistic (X̄ − λ)/√(λ/n)?
(b) Use the results of (a) to find a 100(1 − α)% confidence interval for λ.

10-39. A manufacturer produces piston rings for an automobile engine. It is known that ring diameter is approximately normally distributed and has standard deviation σ = 0.001 mm. A random sample of 15 rings has a mean diameter of x̄ = 74.036 mm.
(a) Construct a 99% two-sided confidence interval on the mean piston ring diameter.
(b) Construct a 95% lower-confidence limit on the mean piston ring diameter.

10-40. The life in hours of a 75-W light bulb is known to be approximately normally distributed, with a standard deviation σ = 25 hours. A random sample of 20 bulbs has a mean life of x̄ = 1014 hours.
(a) Construct a 95% two-sided confidence interval on the mean life.
(b) Construct a 95% lower-confidence interval on the mean life.

10-41. A civil engineer is analyzing the compressive strength of concrete. Compressive strength is approximately normally distributed with a variance σ² = 1000 (psi)². A random sample of 12 specimens has a mean compressive strength of x̄ = 3250 psi.
(a) Construct a 95% two-sided confidence interval on mean compressive strength.
(b) Construct a 99% two-sided confidence interval on mean compressive strength. Compare the width of this confidence interval with the width of the one found in part (a).

10-42. Suppose that in Exercise 10-40 we wanted to be 95% confident that the error in estimating the mean life is less than 5 hours. What sample size should be used?

10-43. Suppose that in Exercise 10-40 we wanted the total width of the confidence interval on mean life to be 8 hours. What sample size should be used?

10-44. Suppose that in Exercise 10-41 it is desired to estimate the compressive strength with an error that is less than 15 psi. What sample size is required?

10-45. Two machines are used to fill plastic bottles with dishwashing detergent. The standard deviations of fill volume are known to be σ₁ = 0.15 fluid ounces and σ₂ = 0.18 fluid ounces for the two machines, respectively. Two random samples of n₁ = 12 bottles from machine 1 and n₂ = 10 bottles from machine 2 are selected, and the sample mean fill volumes are x̄₁ = 30.87 fluid ounces and x̄₂ = 30.68 fluid ounces.
(a) Construct a 90% two-sided confidence interval on the mean difference in fill volume.
(b) Construct a 95% two-sided confidence interval on the mean difference in fill volume. Compare the width of this interval to the width of the interval in part (a).
(c) Construct a 95% upper-confidence interval on the mean difference in fill volume.

10-46. The burning rates of two different solid-fuel rocket propellants are being studied. It is known that both propellants have approximately the same standard deviation of burning rate; that is, σ₁ = σ₂ = 3 cm/s. Two random samples of n₁ = 20 and n₂ = 20 specimens are tested, and the sample mean burning rates are x̄₁ = 18 cm/s and x̄₂ = 24 cm/s. Construct a 99% confidence interval on the mean difference in burning rate.

10-47. Two different formulations of a lead-free gasoline are being tested to study their road octane numbers. The variance of road octane number for formulation 1 is σ₁² = 1.5 and for formulation 2 it is σ₂² = 1.2. Two random samples of size n₁ = 15 and n₂ = 20 are tested, and the mean road octane numbers observed are x̄₁ = 89.6 and x̄₂ = 92.5. Construct a 95% two-sided confidence interval on the difference in mean road octane number.

10-48. The compressive strength of concrete is being tested by a civil engineer. He tests 16 specimens and obtains the following data:

2216  2237  2249  2204
2225  2301  2281  2263
2318  2255  2275  2295
2250  2238  2300  2217

(a) Construct a 95% two-sided confidence interval on the mean strength.
(b) Construct a 95% lower-confidence interval on the mean strength.
(c) Construct a 95% two-sided confidence interval on the mean strength assuming that σ = 36. Compare this interval with the one from part (a).
(d) Construct a 95% two-sided prediction interval for a single compressive strength.
(e) Construct a two-sided tolerance interval that would cover 99% of all compressive strengths with 95% confidence.

10-49. An article in Annual Reviews Material Research (2001, p. 291) presents bond strengths for various energetic materials (explosives, propellants, and pyrotechnics). Bond strengths for 15 such materials are shown below. Construct a two-sided 95% confidence interval on the mean bond strength.

323, 312, 300, 284, 283, 261, 207, 183, 180, 179, 174, 167, 167, 157, 120.

10-50. The wall thickness of 25 glass 2-liter bottles was measured by a quality-control engineer. The sample mean was x̄ = 4.05 mm and the sample standard deviation was s = 0.08 mm. Find a 90% lower-confidence interval on the mean wall thickness.

10-51. An industrial engineer is interested in estimating the mean time required to assemble a printed circuit board. How large a sample is required if the engineer wishes to be 95% confident that the error in estimating the mean is less than 0.25 minutes? The standard deviation of assembly time is 0.45 minutes.

10-52. A random sample of size 15 from a normal population has mean x̄ = 550 and variance s² = 49. Find the following:
(a) A 95% two-sided confidence interval on μ.
(b) A 95% lower-confidence interval on μ.
(c) A 95% upper-confidence interval on μ.
(d) A 95% two-sided prediction interval for a single observation.
(e) A two-sided tolerance interval that would cover 90% of all observations with 99% confidence.

10-53. An article in Computers in Cardiology (1993, p. 317) presents the results of a heart stress test in which the stress is induced by a particular drug. The heart rates (in beats per minute) of nine male patients after the drug is administered are recorded. The average heart rate was found to be x̄ = 102.9 (bpm) with a sample standard deviation of s = 13.9 (bpm). Find a 90% confidence interval on the mean heart rate after the drug is administered.

10-54. Two independent random samples of sizes n₁ = 18 and n₂ = 20 are taken from two normal populations. The sample means are x̄₁ = 200 and x̄₂ = 190. We know that the variances are σ₁² = 15 and σ₂² = 12. Find the following:
(a) A 95% two-sided confidence interval on μ₁ − μ₂.
(b) A 95% lower-confidence interval on μ₁ − μ₂.
(c) A 95% upper-confidence interval on μ₁ − μ₂.

10-55. The output voltage from two different types of transformers is being investigated. Ten transformers of each type are selected at random and the voltage measured. The sample means are x̄₁ = 12.13 volts and x̄₂ = 12.05 volts. We know that the variances of output voltage for the two types of transformers are σ₁² = 0.7 and σ₂² = 0.8, respectively. Construct a 95% two-sided confidence interval on the difference in mean voltage.

10-56. Random samples of size 20 were drawn from two independent normal populations. The sample means and standard deviations were x̄₁ = 22.0, s₁ = 1.8, x̄₂ = 21.5, and s₂ = 1.5. Assuming that σ₁² = σ₂², find the following:
(a) A 95% two-sided confidence interval on μ₁ − μ₂.
(b) A 95% upper-confidence interval on μ₁ − μ₂.
(c) A 95% lower-confidence interval on μ₁ − μ₂.

10-57. The diameter of steel rods manufactured on two different extrusion machines is being investigated. Two random samples of sizes n₁ = 15 and n₂ = 18 are selected, and the sample means and sample variances are x̄₁ = 8.73, s₁² = 0.30, x̄₂ = 8.68, and s₂² = 0.34, respectively. Assuming that σ₁² = σ₂², construct a 95% two-sided confidence interval on the difference in mean rod diameter.

10-58. Random samples of sizes n₁ = 15 and n₂ = 10 are drawn from two independent normal populations. The sample means and variances are x̄₁ = 300, s₁² = 16, x̄₂ = 325, s₂² = 49. Assuming that σ₁² ≠ σ₂², construct a 95% two-sided confidence interval on μ₁ − μ₂.

10-59. Consider the data in Exercise 10-48. Construct the following:
(a) A 95% two-sided confidence interval on σ².
(b) A 95% lower-confidence interval on σ².
(c) A 95% upper-confidence interval on σ².

10-60. Consider the data in Exercise 10-49. Construct the following:
(a) A 99% two-sided confidence interval on σ².
(b) A 99% lower-confidence interval on σ².
(c) A 99% upper-confidence interval on σ².

10-61. Construct a 95% two-sided confidence interval on the variance of the wall thickness data in Exercise 10-50.

10-62. In a random sample of 100 light bulbs, the sample standard deviation of bulb life was found to be 12.6 hours. Compute a 90% upper-confidence interval on the variance of bulb life.

10-63. Consider the data in Exercise 10-56. Construct a 95% two-sided confidence interval on the ratio of the population variances σ₁²/σ₂².

10-64. Consider the data in Exercise 10-57. Construct the following:
(a) A 90% two-sided confidence interval on σ₁²/σ₂².
(b) A 95% two-sided confidence interval on σ₁²/σ₂². Compare the width of this interval with the width of the interval in part (a).
(c) A 90% lower-confidence interval on σ₁²/σ₂².
(d) A 90% upper-confidence interval on σ₁²/σ₂².

10-65. Construct a 95% two-sided confidence interval on the ratio of the variances σ₁²/σ₂² using the data in Exercise 10-58.

10-66. Of 400 randomly selected motorists, 48 were found to be uninsured. Construct a 95% two-sided confidence interval on the uninsured rate for motorists.

10-67. How large a sample would be required in Exercise 10-66 to be 95% confident that the error in estimating the uninsured rate for motorists is less than 0.03?

10-68. A manufacturer of electronic calculators is interested in estimating the fraction of defective units produced. A random sample of 8000 calculators contains 18 defectives. Compute a 99% upper-confidence interval on the fraction defective.

10-69. A study is to be conducted of the percentage of homeowners who own at least two television sets. How large a sample is required if we wish to be 99% confident that the error in estimating this quantity is less than 0.01?

10-70. A study is conducted to determine whether there is a significant difference in union membership based on sex of the person. A random sample of 5000 factory-employed men were polled, and of this group, 785 were members of a union. A random sample of 3000 factory-employed women were also polled, and of this group, 327 were members of a union. Construct a 99% confidence interval on the difference in proportions p₁ − p₂.

10-71. The fraction of defective product produced by two production lines is being analyzed. A random sample of 1000 units from line 1 has 10 defectives, while a random sample of 1200 units from line 2 has 25 defectives. Find a 99% confidence interval on the difference in fraction defective produced by the two lines.

10-72. The results of a study on powered wheelchair driving performance were presented in the Proceedings of the IEEE 24th Annual Northeast Bioengineering Conference (1998, p. 130). In this study, the effects of two types of joysticks, force sensing (FSJ) and position sensing (PSJ), on power wheelchair control were investigated. Each of 10 subjects was asked to test both joysticks. One response of interest is the time (in seconds) to complete a predetermined course. Data typical of this type of experiment are as follows:

Subject    PSJ     FSJ
1          25.9    33.4
2          30.2    37.4
3          33.4    48.0
4          27.6    30.5
5          33.5    27.8
6          34.6    27.5
7          33.1    36.9
8          30.6    …
9          30.5    …
10         25.4    38.0

Find a 95% confidence interval on the difference in mean completion times. Is there any indication that one joystick is preferable?

10-73. The manager of a fleet of automobiles is testing two brands of radial tires. He assigns one tire of each brand at random to the two rear wheels of eight cars and runs the cars until the tires wear out. The data (in kilometers) are shown below:

Car    Brand 1    Brand 2
1      36,925     34,318
2      45,300     42,280
3      36,240     35,500
4      32,100     31,950
5      37,210     38,015
6      48,360     47,800
7      38,200     37,810
8      33,500     …

Find a 95% confidence interval on the difference in mean mileage. Which brand do you prefer?

10-74. Consider the data in Exercise 10-50. Find confidence intervals on μ and σ² such that we are at least 90% confident that both intervals simultaneously lead to correct conclusions.

10-75. Consider the data in Exercise 10-56. Suppose that a random sample of size n₃ = 15 is obtained from a third normal population, with x̄₃ = 20.5 and s₃ = 1.2. Find two-sided confidence intervals on μ₁ − μ₂, μ₁ − μ₃, and μ₂ − μ₃ such that the probability is at least 0.95 that all three intervals simultaneously lead to correct conclusions.

10-76. A random variable X is normally distributed with mean μ and variance σ² = 10. The prior density for μ is uniform between 6 and 12. A random sample of size 16 yields x̄ = 8. Construct a 90% Bayes interval for μ. Could you reasonably accept the hypothesis that μ = 9?

10-77. Let X be a normally distributed random variable with mean μ = 5 and unknown variance σ². The prior density for 1/σ² is a gamma distribution with parameters r = 3 and λ = 1.0. Determine the posterior density for 1/σ². If a random sample of size 10 yields Σ(xᵢ − 5)² = 4.92, determine the Bayes estimate of 1/σ² assuming a squared-error loss. Set up an integral that defines a 90% Bayes interval for 1/σ².

10-78. Prove that if a squared-error loss function is used, the Bayes estimator of θ is the mean of the posterior distribution for θ.
Chapter 11

Tests of Hypotheses

Many problems require that we decide whether to accept or reject a statement about some
parameter. The statement is usually called a hypothesis, and the decision-making procedure
about the hypothesis is called hypothesis testing. This is one of the most useful aspects of
statistical inference, since many types of decision problems can be formulated as
hypothesis-testing problems. This chapter will develop hypothesis-testing procedures for
several important situations.

11-1 INTRODUCTION

11-1.1 Statistical Hypotheses


A statistical hypothesis is a statement about the probability distribution of a random variable. Statistical hypotheses often involve one or more parameters of this distribution. For example, suppose that we are interested in the mean compressive strength of a particular type of concrete. Specifically, we are interested in deciding whether or not the mean compressive strength (say μ) is 2500 psi. We may express this formally as

H₀: μ = 2500 psi,
H₁: μ ≠ 2500 psi.        (11-1)

The statement H₀: μ = 2500 psi in equation 11-1 is called the null hypothesis, and the statement H₁: μ ≠ 2500 psi is called the alternative hypothesis. Since the alternative hypothesis specifies values of μ that could be either greater than 2500 psi or less than 2500 psi, it is called a two-sided alternative hypothesis. In some situations, we may wish to formulate a one-sided alternative hypothesis, as in

H₀: μ = 2500 psi,
H₁: μ > 2500 psi.        (11-2)

It is important to remember that hypotheses are always statements about the population or distribution under study, not statements about the sample. The value of the population parameter specified in the null hypothesis (2500 psi in the above example) is usually determined in one of three ways.
mined in one of three ways. First, it may result from past experience or knowledge of the
process, or even from prior experimentation. The objective of hypothesis testing then is usu-
ally to determine whether the experimental situation has changed. Second, this value may
be determined from some theory or model regarding the process under study. Here the
objective of hypothesis testing is to verify the theory or model. A third situation arises when
the value of the population parameter results from external considerations, such as design
or engineering specifications, or from contractual obligations. In this situation, the usual
objective of hypothesis testing is conformance testing.

We are interested in making a decision about the truth or falsity of a hypothesis. A pro-
cedure leading to such a decision is called a test of a hypothesis. Hypothesis-testing proce-
dures rely on using the information in a random sample from the population of interest. If
this information is consistent with the hypothesis, then we would conclude that the hypoth-
esis is true; however, if this information is inconsistent with the hypothesis, we would con-
clude that the hypothesis is false.
To test a hypothesis, we must take a random sample, compute an appropriate test statistic from the sample data, and then use the information contained in this test statistic to make a decision. For example, in testing the null hypothesis concerning the mean compressive strength of concrete in equation 11-1, suppose that a random sample of 10 concrete specimens is tested and the sample mean x̄ is used as a test statistic. If x̄ > 2550 psi or if x̄ < 2450 psi, we would consider the mean compressive strength of this particular type of concrete to be different from 2500 psi. That is, we would reject the null hypothesis H₀: μ = 2500. Rejecting H₀ implies that the alternative hypothesis, H₁, is true. The set of all possible values of x̄ that are either greater than 2550 psi or less than 2450 psi is called the critical region or rejection region for the test. Alternatively, if 2450 psi ≤ x̄ ≤ 2550 psi, then we would accept the null hypothesis H₀: μ = 2500. Thus, the interval [2450 psi, 2550 psi] is called the acceptance region for the test. Note that the boundaries of the critical region, 2450 psi and 2550 psi (often called the critical values of the test statistic), have been determined somewhat arbitrarily. In subsequent sections we will show how to construct an appropriate test statistic to determine the critical region for several hypothesis-testing situations.

11-1.2 Type I and Type II Errors


The decision to accept or reject the null hypothesis is based on a test statistic computed
from the data in a random sample. When a decision is made using the information in a ran-
dom sample, this decision is subject to error. Two kinds of errors may be made when test-
ing hypotheses. If the null hypothesis is rejected when it is true, then a type I error has been
made. If the null hypothesis is accepted when it is false, then a type II error has been made.
The situation is described in Table 11-1.
The probabilities of occurrence of type I and type II errors are given special symbols:

α = P{type I error} = P{reject H₀ | H₀ is true},        (11-3)
β = P{type II error} = P{accept H₀ | H₀ is false}.      (11-4)

Sometimes it is more convenient to work with the power of the test, where

Power = 1 − β = P{reject H₀ | H₀ is false}.             (11-5)

Note that the power of the test is the probability that a false null hypothesis is correctly
rejected. Because the results of a test of a hypothesis are subject to error, we cannot “prove”
or “disprove” a statistical hypothesis. However, it is possible to design test procedures that
control the error probabilities α and β to suitably small values.

Table 11-1 Decisions in Hypothesis Testing

             H₀ is True      H₀ is False
Accept H₀    No error        Type II error
Reject H₀    Type I error    No error

The probability of type I error α is often called the significance level or size of the test. In the concrete-testing example, a type I error would occur if the sample mean x̄ > 2550 psi or if x̄ < 2450 psi when in fact the true mean compressive strength is μ = 2500 psi. Generally, the type I error probability is controlled by the location of the critical region. Thus, it is usually easy in practice for the analyst to set the type I error probability at (or near) any desired value. Since the probability of wrongly rejecting H₀ is directly controlled by the decision maker, rejection of H₀ is always a strong conclusion. Now suppose that the null hypothesis H₀: μ = 2500 psi is false. That is, the true mean compressive strength μ is some value other than 2500 psi. The probability of type II error is not a constant but depends on the true mean compressive strength of the concrete. If μ denotes the true mean compressive strength, then β(μ) denotes the type II error probability corresponding to μ. The function β(μ) is evaluated by finding the probability that the test statistic (in this case x̄) falls in the acceptance region given a particular value of μ. We define the operating characteristic curve (or OC curve) of a test as the plot of β(μ) against μ. An example of an operating characteristic curve for the concrete-testing problem is shown in Fig. 11-1. From this curve, we see that the type II error probability depends on the extent to which H₀: μ = 2500 psi is false. For example, note that β(2700) < β(2600). Thus we can think of the type II error probability as a measure of the ability of the test procedure to detect a particular deviation from the null hypothesis H₀. Small deviations are harder to detect than large ones. We also observe that since this is a two-sided alternative hypothesis, the operating characteristic curve is symmetric; that is, β(2400) = β(2600). Furthermore, when μ = 2500, the probability of type II error is β = 1 − α.
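Since the text has not yet attached a sample size or a standard deviation to the concrete example, the following sketch (Python; σ = 100 psi and n = 10 are purely illustrative assumptions) shows how α and a point on the OC curve would be computed for the acceptance region [2450, 2550].

```python
import numpy as np
from scipy import stats

# sigma and n below are illustrative assumptions, not values from the text
sigma, n = 100.0, 10
se = sigma / np.sqrt(n)                      # standard error of xbar

# Type I error: probability xbar falls outside [2450, 2550] when mu = 2500
alpha = stats.norm.cdf(2450, 2500, se) + stats.norm.sf(2550, 2500, se)

# Type II error at mu = 2600: probability xbar falls inside the acceptance region
beta_2600 = stats.norm.cdf(2550, 2600, se) - stats.norm.cdf(2450, 2600, se)
print(alpha, beta_2600)
```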
The probability of type II error is also a function of sample size, as illustrated in Fig. 11-2. From this figure, we see that for a given value of the type I error probability α and a given value of mean compressive strength, the type II error probability decreases as the sample size n increases. That is, a specified deviation of the true mean from the value specified in the null hypothesis is easier to detect for larger sample sizes than for smaller ones. The effect of the type I error probability α on the type II error probability β for a given sample size n is illustrated in Fig. 11-3. Decreasing α causes β to increase, and increasing α causes β to decrease.
Because the type II error probability β is a function of both the sample size and the extent to which the null hypothesis H₀ is false, it is customary to think of the decision to accept H₀ as a weak conclusion, unless we know that β is acceptably small. Therefore, rather than saying we "accept H₀," we prefer the terminology "fail to reject H₀." Failing to reject H₀ implies that we have not found sufficient evidence to reject H₀, that is, to make a strong statement. Thus failing to reject H₀ does not necessarily mean that there is a high
Figure 11-1 Operating characteristic curve for the concrete testing example.

Figure 11-2 The effect of sample size on the operating characteristic curve.

Figure 11-3 The effect of type I error on the operating characteristic curve.

probability that H₀ is true. It may imply that more data are required to reach a strong conclusion. This can have important implications for the formulation of hypotheses.

11-1.3 One-Sided and Two-Sided Hypotheses


Because rejecting H₀ is always a strong conclusion while failing to reject H₀ can be a weak
conclusion unless β is known to be small, we usually prefer to construct hypotheses such
that the statement about which a strong conclusion is desired is in the alternative hypothesis,
H₁. Problems for which a two-sided alternative hypothesis is appropriate do not really
present the analyst with a choice of formulation. That is, if we wish to test the hypothesis
that the mean of a distribution μ equals some arbitrary value, say μ₀, and if it is important
to detect values of the true mean μ that could be either greater than μ₀ or less than μ₀, then
one must use the two-sided alternative in

H₀: μ = μ₀,
H₁: μ ≠ μ₀.

Many hypothesis-testing problems naturally involve a one-sided alternative hypothesis.
For example, suppose that we want to reject H₀ only when the true value of the mean
exceeds μ₀. The hypotheses would be

H₀: μ = μ₀,
H₁: μ > μ₀.    (11-6)

This would imply that the critical region is located in the upper tail of the distribution of the
test statistic. That is, if the decision is to be based on the value of the sample mean x̄, then
we would reject H₀ in equation 11-6 if x̄ is too large. The operating characteristic curve for
the test for this hypothesis is shown in Fig. 11-4, along with the operating characteristic
curve for a two-sided test. We observe that when the true mean μ exceeds μ₀ (i.e., when the
alternative hypothesis H₁: μ > μ₀ is true), the one-sided test is superior to the two-sided test
in the sense that it has a steeper operating characteristic curve. When the true mean μ = μ₀,
both the one-sided and two-sided tests are equivalent. However, when the true mean μ is
less than μ₀, the two operating characteristic curves differ. If μ < μ₀, the two-sided test has
a higher probability of detecting this departure from μ₀ than the one-sided test. This is intu-
itively appealing, as the one-sided test is designed assuming either that μ cannot be less than
μ₀ or, if μ is less than μ₀, that it is desirable to accept the null hypothesis.
In effect there are two different models that can be used for the one-sided alternative
hypothesis. For the case where the alternative hypothesis is H₁: μ > μ₀, these two models are

H₀: μ = μ₀,
H₁: μ > μ₀,    (11-7)

and

H₀: μ ≤ μ₀,
H₁: μ > μ₀.    (11-8)

In equation 11-7, we are assuming that μ cannot be less than μ₀, and the operating characteristic
curve is undefined for values of μ < μ₀. In equation 11-8, we are assuming that μ can
be less than μ₀ and that in such a situation it would be desirable to accept H₀. Thus for equation
11-8 the operating characteristic curve is defined for all values of μ ≤ μ₀. Specifically,
if μ ≤ μ₀, we have β(μ) = 1 − α(μ), where α(μ) is the significance level as a function of μ.
For situations in which the model of equation 11-8 is appropriate, we define the significance
level of the test as the maximum value of the type I error probability α; that is, the value of
α at μ = μ₀. In situations where one-sided alternative hypotheses are appropriate, we will
usually write the null hypothesis with an equality; for example, H₀: μ = μ₀. This will be
interpreted as including the cases H₀: μ ≤ μ₀ or H₀: μ ≥ μ₀, as appropriate.
In problems where one-sided test procedures are indicated, analysts occasionally expe-
rience difficulty in choosing an appropriate formulation of the alternative hypothesis. For
example, suppose that a soft drink beverage bottler purchases 10-ounce nonreturnable bot-
tles from a glass company. The bottler wants to be sure that the bottles exceed the specifi-
cation on mean internal pressure or bursting strength, which for 10-ounce bottles is 200 psi.

[Figure: OC curves for a two-sided test and a one-sided test, with β plotted against the true mean μ]
Figure 11-4 Operating characteristic curves for two-sided and one-sided tests.

The bottler has decided to formulate the decision procedure for a specific lot of bottles as a
hypothesis problem. There are two possible formulations for this problem, either

H₀: μ ≤ 200 psi,
H₁: μ > 200 psi,    (11-9)

or

H₀: μ ≥ 200 psi,
H₁: μ < 200 psi.    (11-10)
Consider the formulation in equation 11-9. If the null hypothesis is rejected, the bottles
will be judged satisfactory; while if H₀ is not rejected, the implication is that the bottles
do not conform to specifications and should not be used. Because rejecting H₀ is a
strong conclusion, this formulation forces the bottle manufacturer to "demonstrate" that the
mean bursting strength of the bottles exceeds the specification. Now consider the formulation
in equation 11-10. In this situation, the bottles will be judged satisfactory unless H₀ is
rejected. That is, we would conclude that the bottles are satisfactory unless there is strong
evidence to the contrary.
Which formulation is correct, equation 11-9 or equation 11-10? The answer is "it
depends." For equation 11-9, there is some probability that H₀ will be accepted (i.e., we
would decide that the bottles are not satisfactory) even though the true mean is slightly
greater than 200 psi. This formulation implies that we want the bottle manufacturer to
demonstrate that the product meets or exceeds our specifications. Such a formulation could
be appropriate if the manufacturer has experienced difficulty in meeting specifications in the
past, or if product safety considerations force us to hold tightly to the 200 psi specification.
On the other hand, for the formulation of equation 11-10 there is some probability that H₀
will be accepted and the bottles judged satisfactory even though the true mean is slightly less
than 200 psi. We would conclude that the bottles are unsatisfactory only when there is strong
evidence that the mean does not exceed 200 psi; that is, when H₀: μ ≥ 200 psi is rejected. This
formulation assumes that we are relatively happy with the bottle manufacturer's past performance
and that small deviations from the specification μ ≥ 200 psi are not harmful.
In formulating one-sided alternative hypotheses, we should remember that rejecting H₀
is always a strong conclusion, and consequently, we should put the statement about which
it is important to make a strong conclusion in the alternative hypothesis. Often this will
depend on our point of view and experience with the situation.

11-2 TESTS OF HYPOTHESES ON A SINGLE SAMPLE


11-2.1 Tests of Hypotheses on the Mean of a Normal Distribution,
Variance Known
Statistical Analysis
Suppose that the random variable X represents some process or population of interest. We
assume that the distribution of X is either normal or that, if it is nonnormal, the conditions
of the Central Limit Theorem hold. In addition, we assume that the mean μ of X is unknown
but that the variance σ² is known. We are interested in testing the hypothesis

H₀: μ = μ₀,
H₁: μ ≠ μ₀,    (11-11)

where μ₀ is a specified constant.



A random sample of size n, X₁, X₂, ..., Xₙ, is available. Each observation in this sample
has unknown mean μ and known variance σ². The test procedure for H₀: μ = μ₀ uses the
test statistic

Z₀ = (X̄ − μ₀) / (σ/√n).    (11-12)

If the null hypothesis H₀: μ = μ₀ is true, then E(X̄) = μ₀, and it follows that the distribution
of Z₀ is N(0, 1). Consequently, if H₀: μ = μ₀ is true, the probability is 1 − α that a value of
the test statistic Z₀ falls between −Z_{α/2} and Z_{α/2}, where Z_{α/2} is the percentage point of the
standard normal distribution such that P{Z ≥ Z_{α/2}} = α/2 (i.e., Z_{α/2} is the 100(1 − α/2) percentage
point of the standard normal distribution). The situation is illustrated in Fig. 11-5.
Note that the probability is α that a value of the test statistic Z₀ would fall in the region Z₀
> Z_{α/2} or Z₀ < −Z_{α/2} when H₀: μ = μ₀ is true. Clearly, a sample producing a value of the test
statistic that falls in the tails of the distribution of Z₀ would be unusual if H₀: μ = μ₀ is true;
it is also an indication that H₀ is false. Thus, we should reject H₀ if either

Z₀ > Z_{α/2}    (11-13a)

or

Z₀ < −Z_{α/2},    (11-13b)

and fail to reject H₀ if

−Z_{α/2} ≤ Z₀ ≤ Z_{α/2}.    (11-14)

Equation 11-14 defines the acceptance region for H₀ and equation 11-13 defines the critical
region or rejection region. The type I error probability for this test procedure is α.

Example 11-1
The burning rate of a rocket propellant is being studied. Specifications require that the mean burning
rate must be 40 cm/s. Furthermore, suppose that we know that the standard deviation of the burning
rate is approximately 2 cm/s. The experimenter decides to specify a type I error probability α = 0.05,
and he will base the test on a random sample of size n = 25. The hypotheses we wish to test are

H₀: μ = 40 cm/s,
H₁: μ ≠ 40 cm/s.

Twenty-five specimens are tested, and the sample mean burning rate obtained is x̄ = 41.25 cm/s.
The value of the test statistic in equation 11-12 is

Z₀ = (x̄ − μ₀)/(σ/√n) = (41.25 − 40)/(2/√25) = 3.125.

[Figure: N(0, 1) density with acceptance region −Z_{α/2} ≤ Z₀ ≤ Z_{α/2} and critical regions in both tails]
Figure 11-5 The distribution of Z₀ when H₀: μ = μ₀ is true.

Since α = 0.05, the boundaries of the critical region are Z₀.₀₂₅ = 1.96 and −Z₀.₀₂₅ = −1.96, and we note
that Z₀ falls in the critical region. Therefore, H₀ is rejected, and we conclude that the mean burning
rate is not equal to 40 cm/s.
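The arithmetic of this test is easy to reproduce numerically. The following is a minimal sketch in Python, assuming the scipy library is available (scipy is not part of this text, and the variable names are ours):

# Two-sided z-test of H0: mu = 40 vs. H1: mu != 40 (Example 11-1 data)
from scipy.stats import norm

xbar, mu0, sigma, n, alpha = 41.25, 40.0, 2.0, 25, 0.05

z0 = (xbar - mu0) / (sigma / n ** 0.5)   # test statistic, equation 11-12
z_crit = norm.ppf(1 - alpha / 2)         # Z_{alpha/2} = 1.96

print(f"Z0 = {z0:.3f}, critical values = +/-{z_crit:.2f}")
if abs(z0) > z_crit:
    print("Reject H0: the mean burning rate differs from 40 cm/s.")
else:
    print("Fail to reject H0.")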

Now suppose that we wish to test the one-sided alternative, say

H₀: μ = μ₀,
H₁: μ > μ₀.    (11-15)

(Note that we could also write H₀: μ ≤ μ₀.) In defining the critical region for this test, we
observe that a negative value of the test statistic Z₀ would never lead us to conclude that H₀: μ
= μ₀ is false. Therefore, we would place the critical region in the upper tail of the N(0, 1) distribution
and reject H₀ on values of Z₀ that are too large. That is, we would reject H₀ if

Z₀ > Z_α.    (11-16)

Similarly, to test

H₀: μ = μ₀,
H₁: μ < μ₀,    (11-17)

we would calculate the test statistic Z₀ and reject H₀ on values of Z₀ that are too small. That
is, the critical region is in the lower tail of the N(0, 1) distribution, and we reject H₀ if

Z₀ < −Z_α.    (11-18)

Choice of Sample Size


In testing the hypotheses of equations 11-11, 11-15, and 11-17, the type I error probability
α is directly selected by the analyst. However, the probability of type II error β depends on
the choice of sample size. In this section, we will show how to select the sample size in
order to arrive at a specified value of β.
Consider the two-sided hypothesis

H₀: μ = μ₀,
H₁: μ ≠ μ₀.

Suppose that the null hypothesis is false and that the true value of the mean is μ = μ₀ + δ,
say, where δ > 0. Now since H₁ is true, the distribution of the test statistic Z₀ is

Z₀ ~ N(δ√n/σ, 1).    (11-19)

The distribution of the test statistic Z₀ under both the null hypothesis H₀ and the alternative
hypothesis H₁ is shown in Fig. 11-6. From examining this figure, we note that if H₁ is true,
a type II error will be made only if −Z_{α/2} ≤ Z₀ ≤ Z_{α/2}, where Z₀ ~ N(δ√n/σ, 1). That is, the
probability of the type II error β is the probability that Z₀ falls between −Z_{α/2} and Z_{α/2} given
that H₁ is true. This probability is shown as the shaded portion of Fig. 11-6. Expressed
mathematically, this probability is

β = Φ(Z_{α/2} − δ√n/σ) − Φ(−Z_{α/2} − δ√n/σ),    (11-20)

where Φ(z) denotes the probability to the left of z on the standard normal distribution. Note
that equation 11-20 was obtained by evaluating the probability that Z₀ falls in the interval

[Figure: densities of Z₀ under H₀: μ = μ₀ and under H₁: μ ≠ μ₀, the latter centered at δ√n/σ; the shaded area between −Z_{α/2} and Z_{α/2} under H₁ is β]
Figure 11-6 The distribution of Z₀ under H₀ and H₁.

[−Z_{α/2}, Z_{α/2}] on the distribution of Z₀ when H₁ is true. These two points were standardized
to produce equation 11-20. Furthermore, note that equation 11-20 also holds if δ < 0, due
to the symmetry of the normal distribution.
While equation 11-20 could be used to evaluate the type II error, it is more convenient
to use the operating characteristic curves in Charts VIa and VIb of the Appendix. These
curves plot β as calculated from equation 11-20 against a parameter d for various sample
sizes n. Curves are provided for both α = 0.05 and α = 0.01. The parameter d is defined as

d = |μ − μ₀|/σ = |δ|/σ.    (11-21)

We have chosen d so that one set of operating characteristic curves can be used for all problems
regardless of the values of μ₀ and σ. From examining the operating characteristic
curves or equation 11-20 and Fig. 11-6 we note the following:
1. The further the true value of the mean μ is from μ₀, the smaller the probability of type
II error β for a given n and α. That is, we see that for a specified sample size and α,
large differences in the mean are easier to detect than small ones.
2. For a given δ and α, the probability of type II error β decreases as n increases. That
is, to detect a specified difference in the mean δ, we may make the test more powerful
by increasing the sample size.

Example 11-2
Consider the rocket propellant problem in Example 11-1. Suppose that the analyst is concerned about
the probability of a type II error if the true mean burning rate is μ = 41 cm/s. We may use the operating
characteristic curves to find β. Note that δ = 41 − 40 = 1, n = 25, σ = 2, and α = 0.05. Then

d = |μ − μ₀|/σ = |δ|/σ = 1/2,

and from Chart VIa (Appendix), with n = 25, we find that β ≈ 0.30. That is, if the true mean burning
rate is μ = 41 cm/s, then there is approximately a 30% chance that this will not be detected by the test
with n = 25.
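Equation 11-20 gives this type II error probability directly, without the charts. A minimal Python sketch under the same assumptions (scipy assumed available):

# Type II error of the two-sided z-test, equation 11-20 (Example 11-2 data)
from scipy.stats import norm

mu0, sigma, n, alpha = 40.0, 2.0, 25, 0.05
delta = 41.0 - mu0                      # true mean minus hypothesized mean
z_a2 = norm.ppf(1 - alpha / 2)
shift = delta * n ** 0.5 / sigma        # delta * sqrt(n) / sigma = 2.5

beta = norm.cdf(z_a2 - shift) - norm.cdf(-z_a2 - shift)
print(f"beta = {beta:.4f}")             # about 0.295, matching the chart value of 0.30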

Example 11-3
Once again, consider the rocket propellant problem in Example 11-1. Suppose that the analyst would
like to design the test so that if the true mean burning rate differs from 40 cm/s by as much as 1 cm/s,
the test will detect this (i.e., reject H₀: μ = 40) with a high probability, say 0.90. The operating characteristic
curves can be used to find the sample size that will give such a test. Since d = |μ − μ₀|/σ =
1/2, α = 0.05, and β = 0.10, we find from Chart VIa (Appendix) that the required sample size is n =
40, approximately.

In general, the operating characteristic curves involve three parameters: β, δ, and n.
Given any two of these parameters, the value of the third can be determined. There are two
typical applications of these curves.
1. For a given n and δ, find β. This was illustrated in Example 11-2. This kind of problem
is often encountered when the analyst is concerned about the sensitivity of an
experiment already performed, or when sample size is restricted by economic or
other factors.
2. For a given β and δ, find n. This was illustrated in Example 11-3. This kind of problem
is usually encountered when the analyst has the opportunity to select the sample
size at the outset of the experiment.
Operating characteristic curves are given in Charts VIc and VId (Appendix) for the
one-sided alternatives. If the alternative hypothesis is H₁: μ > μ₀, then the abscissa scale on
these charts is

d = (μ − μ₀)/σ.    (11-22)

When the alternative hypothesis is H₁: μ < μ₀, the corresponding abscissa scale is

d = (μ₀ − μ)/σ.    (11-23)
It is also possible to derive formulas to determine the appropriate sample size to use to
obtain a particular value of β for a given δ and α. These formulas are alternatives to using
the operating characteristic curves. For the two-sided alternative hypothesis, we know from
equation 11-20 that

β = Φ(Z_{α/2} − δ√n/σ) − Φ(−Z_{α/2} − δ√n/σ),

or, if δ > 0,

β ≈ Φ(Z_{α/2} − δ√n/σ),    (11-24)

since Φ(−Z_{α/2} − δ√n/σ) ≈ 0 when δ is positive. From equation 11-24, we take normal
inverses to obtain

−Z_β ≈ Z_{α/2} − δ√n/σ,

or

n ≈ (Z_{α/2} + Z_β)² σ²/δ².    (11-25)

This approximation is good when Φ(−Z_{α/2} − δ√n/σ) is small compared to β. For either of
the one-sided alternative hypotheses in equation 11-15 or equation 11-17, the sample size
required to produce a specified type II error probability β given δ and α is

n ≈ (Z_α + Z_β)² σ²/δ².    (11-26)

Example 11-4
Returning to the rocket propellant problem of Example 11-3, we note that σ = 2, δ = 41 − 40 = 1, α =
0.05, and β = 0.10. Since Z_{α/2} = Z₀.₀₂₅ = 1.96 and Z_β = Z₀.₁₀ = 1.28, the sample size required to detect
this departure from H₀: μ = 40 is found from equation 11-25 to be

n ≈ (Z_{α/2} + Z_β)² σ²/δ² = (1.96 + 1.28)² 2²/1² ≈ 42,

which is in close agreement with the value determined from the operating characteristic curve. Note
that the approximation is good, since Φ(−Z_{α/2} − δ√n/σ) = Φ(−1.96 − (1)√42/2) = Φ(−5.20) ≈ 0,
which is small relative to β.
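The sample-size formula of equation 11-25 is equally direct to evaluate; a short sketch (Python, scipy assumed):

# Sample size for the two-sided z-test, equation 11-25 (Example 11-4 data)
from scipy.stats import norm

alpha, beta, sigma, delta = 0.05, 0.10, 2.0, 1.0
z_a2 = norm.ppf(1 - alpha / 2)   # 1.96
z_b = norm.ppf(1 - beta)         # 1.28

n = (z_a2 + z_b) ** 2 * sigma ** 2 / delta ** 2
print(f"n = {n:.1f}")            # about 42; round up to an integer in practice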

The Relationship Between Tests of Hypotheses and Confidence Intervals


There is a close relationship between the test of a hypothesis about a parameter θ and the
confidence interval for θ. If [L, U] is a 100(1 − α)% confidence interval for the parameter
θ, then the test of size α of the hypothesis

H₀: θ = θ₀,
H₁: θ ≠ θ₀

will lead to rejection of H₀ if and only if θ₀ is not in the interval [L, U]. As an illustration,
consider the rocket propellant problem in Example 11-1. The null hypothesis H₀: μ = 40
was rejected using α = 0.05. The 95% two-sided confidence interval on μ for these data may
be computed from equation 10-25 as 40.47 ≤ μ ≤ 42.03. That is, the interval [L, U] is [40.47,
42.03], and since μ₀ = 40 is not included in this interval, the null hypothesis H₀: μ = 40 is
rejected.

Large Sample Test with Unknown Variance


Although we have developed the test procedure for the null hypothesis H₀: μ = μ₀ assuming
that σ² is known, in many practical situations σ² will be unknown. In general, if n ≥ 30,
then the sample variance S² can be substituted for σ² in the test procedures with little
harmful effect. Thus, while we have given a test for known σ², it can be converted easily into
a large-sample test procedure for unknown σ². Exact treatment of the case where σ² is
unknown and n is small involves the use of the t distribution and is deferred until
Section 11-2.2.

P-Values

Computer software packages are frequently used for statistical hypothesis testing. Most of
these programs calculate and report the probability that the test statistic will take on a value
at least as extreme as the observed value of the statistic when H₀ is true. This probability is
usually called a P-value. It represents the smallest level of significance that would lead to
rejection of H₀. Thus, if P = 0.04 is reported in the computer output, the null hypothesis H₀
would be rejected at the level α = 0.05 but not at the level α = 0.01. Generally, if P is less
than or equal to α we would reject H₀, whereas if P exceeds α we would fail to reject H₀.
It is customary to call the test statistic (and the data) significant when the null hypothesis
H₀ is rejected, so we may think of the P-value as the smallest level α at which the data
are significant. Once the P-value is known, the decision maker can determine for himself
or herself how significant the data are without the data analyst formally imposing a preselected
level of significance.
It is not always easy to compute the exact P-value of a test. However, for the foregoing
normal distribution tests it is relatively easy. If Z₀ is the computed value of the test statistic,
then the P-value is

P = 2[1 − Φ(|Z₀|)]  for a two-tailed test,
P = 1 − Φ(Z₀)       for an upper-tail test,
P = Φ(Z₀)           for a lower-tail test.

To illustrate, consider the rocket propellant problem in Example 11-1. The computed value
of the test statistic is Z₀ = 3.125, and since the alternative hypothesis is two-tailed, the P-value is

P = 2[1 − Φ(3.125)] = 0.0018.

Thus, H₀: μ = 40 would be rejected at any level of significance α ≥ P = 0.0018. For
example, H₀ would be rejected if α = 0.01, but it would not be rejected if α = 0.001.
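These P-value expressions are simple to evaluate numerically; a minimal sketch (Python, scipy assumed):

# P-values for the z-tests of this section (Example 11-1: Z0 = 3.125)
from scipy.stats import norm

z0 = 3.125
p_two_tailed = 2 * (1 - norm.cdf(abs(z0)))   # two-tailed test
p_upper_tail = 1 - norm.cdf(z0)              # upper-tail test
p_lower_tail = norm.cdf(z0)                  # lower-tail test

print(f"two-tailed P = {p_two_tailed:.4f}")  # about 0.0018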

Practical Versus Statistical Significance


In Chapters 10 and 11, we present confidence intervals and tests of hypotheses for both sin-
gle-sample and two-sample problems. In hypothesis testing, we have discussed the statisti-
cal significance when the null hypothesis is rejected. What has not been discussed is the
practical significance of rejecting the null hypothesis. In hypothesis testing, the goal is to
make a decision about a claim or belief. The decision as to whether or not the null hypoth-
esis is rejected in favor of the alternative is based on a sample taken from the population of
interest. If the null hypothesis is rejected, we say there is statistically significant evidence
against the null hypothesis in favor of the alternative. Results that are statistically significant
(by rejection of the null hypothesis) do not necessarily imply practically significant results.
To illustrate, suppose that the average temperature on a single day throughout a particular
state is hypothesized to be μ = 63 degrees. Suppose that n = 50 locations within the
state had an average temperature of x̄ = 62 degrees and a standard deviation of 0.5 degrees.
If we were to test the hypothesis H₀: μ = 63 against H₁: μ ≠ 63, we would get a resulting
P-value of approximately 0, and we would reject the null hypothesis. Our conclusion would
be that the true average temperature is not 63 degrees. In other words, we have illustrated a
statistically significant difference between the hypothesized value and the sample average
obtained from the data. But is this a practical difference? That is, is 63 degrees different
from 62 degrees? Very few investigators would actually conclude that this difference is
practical. In other words, statistical significance does not imply practical significance.
The size of the sample under investigation has a direct influence on the power of the
test and the practical significance. As the sample size increases, even the smallest differ-
ences between the hypothesized value and the sample value may be detected by the

hypothesis test. Therefore, care must be taken when interpreting the results of a hypothesis
test when the sample sizes are large.

11-2.2 Tests of Hypotheses on the Mean of a Normal Distribution,


Variance Unknown
When testing hypotheses about the mean μ of a population when σ² is unknown, we can use
the test procedures discussed in Section 11-2.1 provided that the sample size is large (n ≥
30, say). These procedures are approximately valid regardless of whether or not the underlying
population is normal. However, when the sample size is small and σ² is unknown, we
must make an assumption about the form of the underlying distribution in order to obtain a
test procedure. A reasonable assumption in many cases is that the underlying distribution is
normal.
Many populations encountered in practice are quite well approximated by the normal
distribution, so this assumption will lead to a test procedure of wide applicability. In fact,
moderate departure from normality will have little effect on the test validity. When the
assumption is unreasonable, we can either specify another distribution (exponential,
Weibull, etc.) and use some general method of test construction to obtain a valid procedure,
or we could use one of the nonparametric tests that are valid for any underlying distribution
(see Chapter 16).

Statistical Analysis
Suppose that X is a normally distributed random variable with unknown mean μ and variance
σ². We wish to test the hypothesis that μ equals a constant μ₀. Note that this situation
is similar to that treated in Section 11-2.1, except that now both μ and σ² are unknown.
Assume that a random sample of size n, say X₁, X₂, ..., Xₙ, is available, and let X̄ and S² be
the sample mean and variance, respectively.
Suppose that we wish to test the two-sided alternative

H₀: μ = μ₀,
H₁: μ ≠ μ₀.    (11-27)

The test procedure is based on the statistic

t₀ = (X̄ − μ₀) / (S/√n),    (11-28)

which follows the t distribution with n − 1 degrees of freedom if the null hypothesis H₀: μ
= μ₀ is true. To test H₀: μ = μ₀ in equation 11-27, the test statistic t₀ in equation 11-28 is calculated,
and H₀ is rejected if either

t₀ > t_{α/2, n−1}    (11-29a)

or

t₀ < −t_{α/2, n−1},    (11-29b)

where t_{α/2, n−1} and −t_{α/2, n−1} are the upper and lower α/2 percentage points of the t distribution
with n − 1 degrees of freedom.
For the one-sided alternative hypothesis

H₀: μ = μ₀,
H₁: μ > μ₀,    (11-30)

we calculate the test statistic t₀ from equation 11-28 and reject H₀ if

t₀ > t_{α, n−1}.    (11-31)

For the other one-sided alternative,

H₀: μ = μ₀,
H₁: μ < μ₀,    (11-32)

we would reject H₀ if

t₀ < −t_{α, n−1}.    (11-33)

Example 11-5
The breaking strength of a textile fiber is a normally distributed random variable. Specifications
require that the mean breaking strength should equal 150 psi. The manufacturer would like to detect
any significant departure from this value. Thus, he wishes to test

H₀: μ = 150 psi,
H₁: μ ≠ 150 psi.

A random sample of 15 fiber specimens is selected and their breaking strengths determined. The sample
mean and variance are computed from the sample data as x̄ = 152.18 and s² = 16.63. Therefore,
the test statistic is

t₀ = (x̄ − μ₀)/(s/√n) = (152.18 − 150)/(√16.63/√15) = 2.07.

The type I error is specified as α = 0.05. Therefore t₀.₀₂₅,₁₄ = 2.145 and −t₀.₀₂₅,₁₄ = −2.145, and we
would conclude that there is not sufficient evidence to reject the hypothesis that μ = 150 psi.
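The same test can be reproduced from the summary statistics; a minimal Python sketch (scipy assumed; with the raw measurements, scipy.stats.ttest_1samp(x, 150) returns the statistic and its P-value directly):

# One-sample t-test of H0: mu = 150 (Example 11-5 summary statistics)
import math
from scipy.stats import t

xbar, s2, n, mu0, alpha = 152.18, 16.63, 15, 150.0, 0.05

t0 = (xbar - mu0) / math.sqrt(s2 / n)     # test statistic, equation 11-28
t_crit = t.ppf(1 - alpha / 2, df=n - 1)   # t_{0.025,14} = 2.145

print(f"t0 = {t0:.2f}, critical values = +/-{t_crit:.3f}")   # 2.07 < 2.145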

Choice of Sample Size


The type II error probability for tests on the mean of a normal distribution with unknown
variance depends on the distribution of the test statistic in equation 11-28 when the null
hypothesis H₀: μ = μ₀ is false. When the true value of the mean is μ = μ₀ + δ, note that the
test statistic can be written as

t₀ = (X̄ − μ₀)/(S/√n)
   = { [X̄ − (μ₀ + δ)]/(σ/√n) + δ√n/σ } / (S/σ)
   = (Z + δ√n/σ)/√W.    (11-34)

The distributions of Z and W in equation 11-34 are N(0, 1) and χ²_{n−1}/(n − 1), respectively,
and Z and W are independent random variables. However, δ√n/σ is a nonzero constant,
so that the numerator of equation 11-34 is a N(δ√n/σ, 1) random variable. The resulting
distribution is called the noncentral t distribution with n − 1 degrees of freedom and

noncentrality parameter 6 n/a. Note that if 5=0, then the noncentral rdistribution reduces
to the usual or central t distribution. In any case, the type II*error of the two-sided alterna-
tive (for example) would be

B= P{- te, n-1 $ ty Stan, n- 110% 0}

= P{= ty n-1 Sb Ste, n-1}5


where x denotes the noncentral t random variable. Finding the type II error for the ¢-test
involves finding the probability contained between two points on the noncentral ¢
distribution.
The operating characteristic curves in Charts VIe, VIf, VIg, and VIh (Appendix) plot β
against a parameter d for various sample sizes n. Curves are provided for both the two-sided
and one-sided alternatives and for α = 0.05 or α = 0.01. For the two-sided alternative in
equation 11-27, the abscissa scale factor d on Charts VIe and VIf is defined as

d = |μ − μ₀|/σ = |δ|/σ.    (11-35)

For the one-sided alternative with rejection desired when μ > μ₀, as in equation 11-30, we
use Charts VIg and VIh with

d = (μ − μ₀)/σ = δ/σ,    (11-36)

while if rejection is desired when μ < μ₀, as in equation 11-32,

d = (μ₀ − μ)/σ = −δ/σ.    (11-37)

We note that d depends on the unknown parameter σ². There are several ways to avoid this
difficulty. In some cases, we may use the results of a previous experiment or prior information
to make a rough initial estimate of σ². If we are interested in examining the operating
characteristic after the data have been collected, we could use the sample variance s² to
estimate σ². If analysts do not have any previous experience on which to draw in estimating
σ², they can define the difference in the mean δ that they wish to detect relative to σ.
For example, if one wishes to detect a small difference in the mean, one might use a value
of d = |δ|/σ ≤ 1 (say), whereas if one is interested in detecting only moderately large differences
in the mean, one might select d = |δ|/σ = 2 (say). That is, it is the value of the ratio
|δ|/σ that is important in determining sample size, and if it is possible to specify the relative
size of the difference in means that we are interested in detecting, then a proper value
of d can usually be selected.

Example 11-6

Consider the fiber-testing problem in Example 11-5. If the breaking strength of this fiber differs from
150 psi by as much as 2.5 psi, the analyst would like to reject the null hypothesis H₀: μ = 150 psi with
a probability of at least 0.90. Is the sample size n = 15 adequate to ensure that the test is this sensitive?
If we use the sample standard deviation s = √16.63 = 4.08 to estimate σ, then d = |δ|/σ = 2.5/4.08 =
0.61. By referring to the operating characteristic curves in Chart VIe, with d = 0.61 and n = 15, we
find β = 0.45. Thus, the probability of rejecting H₀: μ = 150 psi if the true mean differs from this value
by ±2.5 psi is 1 − β = 1 − 0.45 = 0.55, approximately, and we would conclude that a sample size of

n = 15 is not adequate. To find the sample size required to give the desired degree of protection, enter
the operating characteristic curves in Chart VIe with d = 0.61 and β = 0.10, and read the corresponding
sample size as n = 35, approximately.
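The chart value can be checked against the noncentral t distribution described above; a sketch in Python (scipy's nct distribution assumed; the charts themselves are only read to about two decimal places):

# Type II error of the two-sided t-test via the noncentral t distribution
# (Example 11-6: delta = 2.5, sigma estimated by s = 4.08, n = 15)
import math
from scipy.stats import t, nct

n, alpha, delta, sigma = 15, 0.05, 2.5, 4.08
lam = delta * math.sqrt(n) / sigma         # noncentrality parameter
t_crit = t.ppf(1 - alpha / 2, df=n - 1)

beta = nct.cdf(t_crit, n - 1, lam) - nct.cdf(-t_crit, n - 1, lam)
print(f"beta = {beta:.2f}")                # roughly the chart value of 0.45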

11-2.3 Tests of Hypotheses on the Variance of a Normal Distribution


There are occasions when tests assessing the variance or standard deviation of a population
are needed. In this section we present two procedures, one based on the assumption of nor-
mality and the other one a large-sample test.

Test Procedures for a Normal Population


Suppose that we wish to test the hypothesis that the variance σ² of a normal distribution
equals a specified value, say σ₀². Let X ~ N(μ, σ²), where μ and σ² are unknown, and let X₁,
X₂, ..., Xₙ be a random sample of n observations from this population. To test

H₀: σ² = σ₀²,
H₁: σ² ≠ σ₀²,    (11-38)

we use the test statistic

χ₀² = (n − 1)S²/σ₀²,    (11-39)

where S² is the sample variance. Now if H₀: σ² = σ₀² is true, then the test statistic χ₀² follows
the chi-square distribution with n − 1 degrees of freedom. Therefore, H₀: σ² = σ₀² would be
rejected if

χ₀² > χ²_{α/2, n−1}    (11-40a)

or if

χ₀² < χ²_{1−α/2, n−1},    (11-40b)

where χ²_{α/2, n−1} and χ²_{1−α/2, n−1} are the upper and lower α/2 percentage points of the chi-square
distribution with n − 1 degrees of freedom.
The same test statistic is used for the one-sided alternatives. For the one-sided
hypothesis

H₀: σ² = σ₀²,
H₁: σ² > σ₀²,    (11-41)

we would reject H₀ if

χ₀² > χ²_{α, n−1}.    (11-42)

For the other one-sided hypothesis,

H₀: σ² = σ₀²,
H₁: σ² < σ₀²,    (11-43)

we would reject H₀ if

χ₀² < χ²_{1−α, n−1}.    (11-44)

Example 11-7
Consider the machine described in Example 10-16, which is used to fill cans with a soft drink beverage.
If the variance of the fill volume exceeds 0.02 (fluid ounces)², then an unacceptably large percentage
of the cans will be underfilled. The bottler is interested in testing the hypothesis

H₀: σ² = 0.02,
H₁: σ² > 0.02.

A random sample of n = 20 cans yields a sample variance of s² = 0.0225. Thus, the test statistic is

χ₀² = (n − 1)s²/σ₀² = (19)(0.0225)/0.02 = 21.38.

If we choose α = 0.05, we find that χ²₀.₀₅,₁₉ = 30.14, and we would conclude that there is no strong evidence
that the variance of fill volume exceeds 0.02 (fluid ounces)².
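A minimal numerical check of this upper-tailed chi-square test (Python, scipy assumed):

# Upper-tailed chi-square test of H0: sigma^2 = 0.02 (Example 11-7 data)
from scipy.stats import chi2

n, s2, sigma0_sq, alpha = 20, 0.0225, 0.02, 0.05

chi2_0 = (n - 1) * s2 / sigma0_sq            # test statistic, equation 11-39
chi2_crit = chi2.ppf(1 - alpha, df=n - 1)    # chi^2_{0.05,19} = 30.14

print(f"chi2_0 = {chi2_0:.2f}, critical value = {chi2_crit:.2f}")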

Choice of Sample Size


Operating characteristic curves for the χ² tests are provided in Charts VIi through VIn
(Appendix) for α = 0.05 and α = 0.01. For the two-sided alternative hypothesis of equation
11-38, Charts VIi and VIj plot β against an abscissa parameter

λ = σ/σ₀,    (11-45)

for various sample sizes n, where σ denotes the true value of the standard deviation. Charts
VIk and VIl are for the one-sided alternative H₁: σ² > σ₀², while Charts VIm and VIn are for
the other one-sided alternative H₁: σ² < σ₀². In using these charts, we think of σ as the value
of the standard deviation that we want to detect.

Example 11-8
In Example 11-7, find the probability of rejecting H₀: σ² = 0.02 if the true variance is as large as σ² =
0.03. Since σ = √0.03 = 0.1732 and σ₀ = √0.02 = 0.1414, the abscissa parameter is

λ = σ/σ₀ = 0.1732/0.1414 = 1.23.

From Chart VIk, with λ = 1.23 and n = 20, we find that β ≈ 0.60. That is, there is only about a 40%
chance that H₀: σ² = 0.02 will be rejected if the variance is really as large as σ² = 0.03. To reduce β,
a larger sample size must be used. From the operating characteristic curve, we note that to reduce β
to 0.20 a sample size of 75 is necessary.

A Large-Sample Test Procedure

The chi-square test procedure prescribed above is rather sensitive to the normality assumption.
Consequently, it would be desirable to develop a procedure that does not require this
assumption. When the underlying population is not necessarily normal but n is large (say n
≥ 35 or 40), then we can use the following result: if X₁, X₂, ..., Xₙ is a random sample from
a population with variance σ², the sample standard deviation S is approximately normal
with mean E(S) ≃ σ and variance V(S) ≃ σ²/2n, if n is large.

Then the distribution of

Z = (S − σ) / √(σ²/2n)    (11-46)

is approximately standard normal.
To test

H₀: σ = σ₀,
H₁: σ ≠ σ₀,    (11-47)

substitute σ₀ for σ in equation 11-46. Thus, the test statistic is

Z₀ = (S − σ₀) / √(σ₀²/2n),    (11-48)

and we would reject H₀ if Z₀ > Z_{α/2} or if Z₀ < −Z_{α/2}. The same test statistic would be used
for the one-sided alternatives. If we are testing

H₀: σ = σ₀,
H₁: σ > σ₀,    (11-49)

we would reject H₀ if Z₀ > Z_α, while if we are testing

H₀: σ = σ₀,
H₁: σ < σ₀,    (11-50)

we would reject H₀ if Z₀ < −Z_α.

Example 11-9
An injection-molded plastic part is used in a graphics printer. Before agreeing to a long-term contract,
the printer manufacturer wants to be sure, using α = 0.01, that the supplier can produce parts with a
standard deviation of length of at most 0.025 mm. The hypotheses to be tested are

H₀: σ² = 6.25 × 10⁻⁴,
H₁: σ² < 6.25 × 10⁻⁴,

since (0.025)² = 0.000625. A random sample of n = 50 parts is obtained, and the sample standard deviation
is s = 0.021 mm. The test statistic is

Z₀ = (s − σ₀)/√(σ₀²/2n) = (0.021 − 0.025)/(0.025/√100) = −1.60.

Since −Z₀.₀₁ = −2.33 and the observed value of Z₀ is not smaller than this critical value, H₀ is not
rejected. That is, the evidence from the supplier's process is not strong enough to justify a long-term
contract.

11-2.4 Tests of Hypotheses on a Proportion


Statistical Analysis
In many engineering and management problems, we are concerned with a random variable
that follows the binomial distribution. For example, consider a production process that
manufactures items that are classified as either acceptable or defective. It is usually

reasonable to model the occurrence of defectives with the binomial distribution, where the
binomial parameter p represents the proportion of defective items produced.
We will consider testing

H₀: p = p₀,
H₁: p ≠ p₀.    (11-51)

An approximate test based on the normal approximation to the binomial will be given. This
approximate procedure will be valid as long as p is not extremely close to zero or 1, and if
the sample size is relatively large. Let X be the number of observations in a random sample
of size n that belongs to the class associated with p. Then, if the null hypothesis H₀: p = p₀
is true, we have X ~ N(np₀, np₀(1 − p₀)), approximately. To test H₀: p = p₀, calculate the test
statistic

Z₀ = (X − np₀) / √(np₀(1 − p₀))    (11-52)

and reject H₀: p = p₀ if

Z₀ > Z_{α/2} or Z₀ < −Z_{α/2}.    (11-53)

Critical regions for the one-sided alternative hypotheses would be located in the usual
manner.

Example 11-10
A semiconductor firm produces logic devices. The contract with their customer calls for a fraction
defective of no more than 0.05. They wish to test

H₀: p = 0.05,
H₁: p > 0.05.

A random sample of 200 devices yields six defectives. The test statistic is

Z₀ = (x − np₀)/√(np₀(1 − p₀)) = (6 − 200(0.05))/√(200(0.05)(0.95)) = −1.30.

Using α = 0.05, we find that Z₀.₀₅ = 1.645, and so we cannot reject the null hypothesis that p = 0.05.
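A minimal sketch of this normal-approximation test (Python, scipy assumed):

# Test of H0: p = 0.05 vs. H1: p > 0.05 by normal approximation (Example 11-10)
import math
from scipy.stats import norm

x, n, p0, alpha = 6, 200, 0.05, 0.05

z0 = (x - n * p0) / math.sqrt(n * p0 * (1 - p0))   # test statistic, equation 11-52
z_crit = norm.ppf(1 - alpha)                       # Z_{0.05} = 1.645

print(f"Z0 = {z0:.2f}, critical value = {z_crit:.3f}")   # -1.30 < 1.645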

Choice of Sample Size


It is possible to obtain closed-form equations for the β error for the tests in this section. The
β error for the two-sided alternative H₁: p ≠ p₀ is approximately

β ≈ Φ[ (p₀ − p + Z_{α/2}√(p₀(1 − p₀)/n)) / √(p(1 − p)/n) ] − Φ[ (p₀ − p − Z_{α/2}√(p₀(1 − p₀)/n)) / √(p(1 − p)/n) ].    (11-54)

If the alternative is H₁: p < p₀, then

β ≈ 1 − Φ[ (p₀ − p − Z_α√(p₀(1 − p₀)/n)) / √(p(1 − p)/n) ],    (11-55)

whereas if the alternative is H₁: p > p₀, then

β ≈ Φ[ (p₀ − p + Z_α√(p₀(1 − p₀)/n)) / √(p(1 − p)/n) ].    (11-56)

These equations can be solved to find the sample size n that gives a test of level α that
has a specified β risk. The sample size equations are

n = [ (Z_{α/2}√(p₀(1 − p₀)) + Z_β√(p(1 − p))) / (p − p₀) ]²    (11-57)

for the two-sided alternative and

n = [ (Z_α√(p₀(1 − p₀)) + Z_β√(p(1 − p))) / (p − p₀) ]²    (11-58)

for the one-sided alternatives.

Example 11-11
For the situation described in Example 11-10, suppose that we wish to find the β error of the test if p
= 0.07. Using equation 11-56, the β error is

β ≈ Φ[ (0.05 − 0.07 + 1.645√((0.05)(0.95)/200)) / √((0.07)(0.93)/200) ] = Φ(0.30) = 0.6179.

This type II error probability is not as small as one might like, but n = 200 is not particularly large and
0.07 is not very far from the null value p₀ = 0.05. Suppose that we want the β error to be no larger than
0.10 if the true value of the fraction defective is as large as p = 0.07. The required sample size would
be found from equation 11-58 as

n = [ (1.645√((0.05)(0.95)) + 1.28√((0.07)(0.93))) / (0.07 − 0.05) ]² ≈ 1174,

which is a very large sample size. However, notice that we are trying to detect a very small deviation
from the null value p₀ = 0.05.

11-3 TESTS OF HYPOTHESES ON TWO SAMPLES


11-3.1 Tests of Hypotheses on the Means of Two Normal Distributions,
Variances Known
Statistical Analysis
Suppose that there are two populations of interest, say X₁ and X₂. We assume that X₁ has
unknown mean μ₁ and known variance σ₁², and that X₂ has unknown mean μ₂ and known
variance σ₂². We will be concerned with testing the hypothesis that the means μ₁ and μ₂ are
equal. It is assumed either that the random variables X₁ and X₂ are normally distributed or,
if they are nonnormal, that the conditions of the Central Limit Theorem apply.
Consider first the two-sided alternative hypothesis

H₀: μ₁ = μ₂,
H₁: μ₁ ≠ μ₂.    (11-59)

Suppose that a random sample of size n₁ is drawn from X₁, say X₁₁, X₁₂, ..., X₁ₙ₁, and that a
second random sample of size n₂ is drawn from X₂, say X₂₁, X₂₂, ..., X₂ₙ₂. It is assumed that
the {X₁ⱼ} are independently distributed with mean μ₁ and variance σ₁², that the {X₂ⱼ} are
independently distributed with mean μ₂ and variance σ₂², and that the two samples {X₁ⱼ} and
{X₂ⱼ} are independent. The test procedure is based on the distribution of the difference in
sample means, say X̄₁ − X̄₂. In general, we know that

X̄₁ − X̄₂ ~ N(μ₁ − μ₂, σ₁²/n₁ + σ₂²/n₂).

Thus, if the null hypothesis H₀: μ₁ = μ₂ is true, the test statistic

Z₀ = (X̄₁ − X̄₂) / √(σ₁²/n₁ + σ₂²/n₂)    (11-60)

follows the N(0, 1) distribution. Therefore, the procedure for testing H₀: μ₁ = μ₂ is to calculate
the test statistic Z₀ in equation 11-60 and reject the null hypothesis if

Z₀ > Z_{α/2}    (11-61a)

or

Z₀ < −Z_{α/2}.    (11-61b)

The one-sided alternative hypotheses are analyzed similarly. To test

H₀: μ₁ = μ₂,
H₁: μ₁ > μ₂,    (11-62)

the test statistic Z₀ in equation 11-60 is calculated, and H₀: μ₁ = μ₂ is rejected if

Z₀ > Z_α.    (11-63)

To test the other one-sided alternative hypothesis,

H₀: μ₁ = μ₂,
H₁: μ₁ < μ₂,    (11-64)

use the test statistic Z₀ in equation 11-60 and reject H₀: μ₁ = μ₂ if

Z₀ < −Z_α.    (11-65)

Example 11-12
The plant manager of an orange juice canning facility is interested in comparing the performance of
two different production lines in her plant. As line number 1 is relatively new, she suspects that its output
in number of cases per day is greater than the number of cases produced by the older line 2. Ten
days of data are selected at random for each line, for which it is found that x̄₁ = 824.9 cases per day
and x̄₂ = 818.6 cases per day. From experience with operating this type of equipment it is known that
σ₁² = 40 and σ₂² = 50. We wish to test

H₀: μ₁ = μ₂,
H₁: μ₁ > μ₂.

The value of the test statistic is

Z₀ = (x̄₁ − x̄₂)/√(σ₁²/n₁ + σ₂²/n₂) = (824.9 − 818.6)/√(40/10 + 50/10) = 2.10.

Using α = 0.05, we find that Z₀.₀₅ = 1.645, and since Z₀ > Z₀.₀₅, we would reject H₀ and conclude that
the mean number of cases per day produced by the new production line is greater than the mean number
of cases per day produced by the old line.

Choice of Sample Size


The operating characteristic curves in Charts VIa, VIb, VIc, and VId (Appendix) may be
used to evaluate the type II error probability for the hypotheses in equations 11-59, 11-62,
and 11-64. These curves are also useful in sample size determination. Curves are provided
for α = 0.05 and α = 0.01. For the two-sided alternative hypothesis in equation 11-59, the
abscissa scale of the operating characteristic curves in Charts VIa and VIb is d, where

d = |μ₁ − μ₂| / √(σ₁² + σ₂²) = |δ| / √(σ₁² + σ₂²),    (11-66)

and one must choose equal sample sizes, say n = n₁ = n₂. The one-sided alternative hypotheses
require the use of Charts VIc and VId. For the one-sided alternative H₁: μ₁ > μ₂ in equation
11-62, the abscissa scale is

d = (μ₁ − μ₂) / √(σ₁² + σ₂²) = δ / √(σ₁² + σ₂²),    (11-67)

where n = n₁ = n₂. The other one-sided alternative hypothesis, H₁: μ₁ < μ₂, requires that d
be defined as

d = (μ₂ − μ₁) / √(σ₁² + σ₂²) = −δ / √(σ₁² + σ₂²),    (11-68)

and n = n₁ = n₂.
It is not unusual to encounter problems where the costs of collecting data differ substantially
between the two populations, or where one population variance is much greater
than the other. In those cases, one often uses unequal sample sizes. If n₁ ≠ n₂, the operating
characteristic curves may be entered with an equivalent value of n computed from

n = (σ₁² + σ₂²) / (σ₁²/n₁ + σ₂²/n₂).    (11-69)

If n₁ ≠ n₂ and their values are fixed in advance, then equation 11-69 is used directly to calculate
n, and the operating characteristic curves are entered with a specified d to obtain β.
If we are given d and it is necessary to determine n₁ and n₂ to obtain a specified β, say β*,
then one guesses at trial values of n₁ and n₂, calculates n from equation 11-69, enters the curves
with the specified value of d, and finds β. If β = β*, then the trial values of n₁ and n₂ are satisfactory.
If β ≠ β*, then adjustments to n₁ and n₂ are made and the process is repeated.

Example 11-13
Consider the orange juice production line problem in Example 11-12. If the true difference in mean
production rates were 10 cases per day, find the sample sizes required to detect this difference with a
probability of 0.90. The appropriate value of the abscissa parameter is

d = |μ₁ − μ₂| / √(σ₁² + σ₂²) = 10/√(40 + 50) = 1.05,

and since α = 0.05, we find from Chart VIc that n = n₁ = n₂ = 8.

It is also possible to derive formulas for the sample size required to obtain a specified
β for a given δ and α. These formulas occasionally are useful supplements to the operating
characteristic curves. For the two-sided alternative hypothesis, the sample size n₁ = n₂ = n
is

n ≈ (Z_{α/2} + Z_β)²(σ₁² + σ₂²) / δ².    (11-70)

This approximation is valid when Φ(−Z_{α/2} − δ√n/√(σ₁² + σ₂²)) is small compared to β.
For a one-sided alternative, we have n₁ = n₂ = n, where

n = (Z_α + Z_β)²(σ₁² + σ₂²) / δ².    (11-71)

The derivations of equations 11-70 and 11-71 closely follow the single-sample case in Section
11-2. To illustrate the use of these equations, consider the situation in Example 11-13.
We have a one-sided alternative with α = 0.05, δ = 10, σ₁² = 40, σ₂² = 50, and β = 0.10. Thus
Z_α = Z₀.₀₅ = 1.645, Z_β = Z₀.₁₀ = 1.28, and the required sample size is found from equation
11-71 to be

n = (Z_α + Z_β)²(σ₁² + σ₂²) / δ² = (1.645 + 1.28)²(40 + 50) / 10² ≈ 8,

which agrees with the results obtained in Example 11-13.

11-3.2 Tests of Hypotheses on the Means of Two Normal Distributions,
Variances Unknown

We now consider tests of hypotheses on the equality of the means μ₁ and μ₂ of two normal
distributions where the variances σ₁² and σ₂² are unknown. A t statistic will be used to test
these hypotheses. As noted in Section 11-2.2, the normality assumption is required to

develop the test procedure, but moderate departures from normality do not adversely affect
the procedure. There are two different situations that must be treated. In the first case, we
assume that the variances of the two normal distributions are unknown but equal; that is, σ₁²
= σ₂² = σ². In the second, we assume that σ₁² and σ₂² are unknown and not necessarily equal.

Case 1: σ₁² = σ₂² = σ²  Let X₁ and X₂ be two independent normal populations with
unknown means μ₁ and μ₂ and unknown but equal variances σ₁² = σ₂² = σ². We wish to test

H₀: μ₁ = μ₂,
H₁: μ₁ ≠ μ₂.    (11-72)

Suppose that X₁₁, X₁₂, ..., X₁ₙ₁ is a random sample of n₁ observations from X₁, and X₂₁, X₂₂,
..., X₂ₙ₂ is a random sample of n₂ observations from X₂. Let X̄₁, X̄₂, S₁², and S₂² be the sample
means and sample variances, respectively. Since both S₁² and S₂² estimate the common
variance σ², we may combine them to yield a single estimate, say

S_p² = [(n₁ − 1)S₁² + (n₂ − 1)S₂²] / (n₁ + n₂ − 2).    (11-73)

This combined or "pooled" estimator was introduced in Section 10-3.2. To test H₀: μ₁ = μ₂
in equation 11-72, compute the test statistic

t₀ = (X̄₁ − X̄₂) / (S_p √(1/n₁ + 1/n₂)).    (11-74)

If H₀: μ₁ = μ₂ is true, t₀ is distributed as t with n₁ + n₂ − 2 degrees of freedom. Therefore, if

t₀ > t_{α/2, n₁+n₂−2}    (11-75a)

or if

t₀ < −t_{α/2, n₁+n₂−2},    (11-75b)

we reject H₀: μ₁ = μ₂.
The one-sided alternatives are treated similarly. To test

H₀: μ₁ = μ₂,
H₁: μ₁ > μ₂,    (11-76)

compute the test statistic t₀ in equation 11-74 and reject H₀: μ₁ = μ₂ if

t₀ > t_{α, n₁+n₂−2}.    (11-77)

For the other one-sided alternative,

H₀: μ₁ = μ₂,
H₁: μ₁ < μ₂,    (11-78)

calculate the test statistic t₀ and reject H₀: μ₁ = μ₂ if

t₀ < −t_{α, n₁+n₂−2}.    (11-79)

The two-sample t-test given in this section is often called the pooled t-test, because the
sample variances are combined or pooled to estimate the common variance. It is also known
as the independent t-test, because the two normal populations are assumed to be independent.

Example 11-14
Two catalysts are being analyzed to determine how they affect the mean yield of a chemical process.
Specifically, catalyst 1 is currently in use, but catalyst 2 is acceptable. Since catalyst 2 is cheaper, if
it does not change the process yield, it should be adopted. Suppose we wish to test the hypotheses

H₀: μ₁ = μ₂,
H₁: μ₁ ≠ μ₂.

Pilot plant data yield n₁ = 8, x̄₁ = 91.73, s₁² = 3.89, n₂ = 8, x̄₂ = 93.75, and s₂² = 4.02. From equation
11-73, we find

s_p² = [(n₁ − 1)s₁² + (n₂ − 1)s₂²] / (n₁ + n₂ − 2) = [7(3.89) + 7(4.02)] / (8 + 8 − 2) = 3.96.

The test statistic is

t₀ = (x̄₁ − x̄₂) / (s_p √(1/n₁ + 1/n₂)) = (91.73 − 93.75) / (1.99 √(1/8 + 1/8)) = −2.03.

Using α = 0.05, we find that t₀.₀₂₅,₁₄ = 2.145 and −t₀.₀₂₅,₁₄ = −2.145, and, consequently, H₀: μ₁ = μ₂
cannot be rejected. That is, we do not have strong evidence to conclude that catalyst 2 results in a
mean yield that differs from the mean yield when catalyst 1 is used.

Case 2: σ₁² ≠ σ₂²  In some situations, we cannot reasonably assume that the unknown variances
σ₁² and σ₂² are equal. There is not an exact t statistic available for testing H₀: μ₁ = μ₂
in this case. However, the statistic

t₀* = (X̄₁ − X̄₂) / √(S₁²/n₁ + S₂²/n₂)    (11-80)

is distributed approximately as t with degrees of freedom given by

ν = (S₁²/n₁ + S₂²/n₂)² / [ (S₁²/n₁)²/(n₁ + 1) + (S₂²/n₂)²/(n₂ + 1) ] − 2    (11-81)

if the null hypothesis H₀: μ₁ = μ₂ is true. Therefore, if σ₁² ≠ σ₂², the hypotheses of equations
11-72, 11-76, and 11-78 are tested as before, except that t₀* is used as the test statistic and
n₁ + n₂ − 2 is replaced by ν in determining the degrees of freedom for the test. This general
problem is often called the Behrens–Fisher problem.

Example 11-15
A manufacturer of video display units is testing two microcircuit designs to determine whether they
produce equivalent current flow. Development engineering has obtained the following data:

Design 1: n₁ = 15, x̄₁ = 24.2, s₁² = 10
Design 2: n₂ = 10, x̄₂ = 23.9, s₂² = 20

We wish to test

H₀: μ₁ = μ₂,
H₁: μ₁ ≠ μ₂,

where both populations are assumed to be normal, but we are unwilling to assume that the unknown
variances σ₁² and σ₂² are equal. The test statistic is

t₀* = (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂) = (24.2 − 23.9) / √(10/15 + 20/10) = 0.18.

The degrees of freedom on t₀* are found from equation 11-81 as

ν = (10/15 + 20/10)² / [ (10/15)²/16 + (20/10)²/11 ] − 2 ≈ 16.

Using α = 0.10, we find that t_{α/2, ν} = t₀.₀₅,₁₆ = 1.746. Since |t₀*| < t₀.₀₅,₁₆, we cannot reject H₀: μ₁ = μ₂.
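A minimal sketch of this calculation from the summary statistics (Python; with raw data, scipy.stats.ttest_ind(x1, x2, equal_var=False) applies the closely related Welch-Satterthwaite approximation rather than equation 11-81):

# Approximate t-test with unequal variances (Example 11-15 summary statistics;
# degrees of freedom from equation 11-81)
import math

n1, xbar1, s1_sq = 15, 24.2, 10.0
n2, xbar2, s2_sq = 10, 23.9, 20.0

t0_star = (xbar1 - xbar2) / math.sqrt(s1_sq / n1 + s2_sq / n2)    # equation 11-80
a, b = s1_sq / n1, s2_sq / n2
nu = (a + b) ** 2 / (a ** 2 / (n1 + 1) + b ** 2 / (n2 + 1)) - 2   # equation 11-81

print(f"t0* = {t0_star:.2f}, nu = {nu:.0f}")   # 0.18 with about 16 degrees of freedom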

Choice of Sample Size


The operating characteristic curves in Charts VIe, VIf, VIg, and VIh (Appendix) are used to
evaluate the type II error for the case where σ₁² = σ₂² = σ². Unfortunately, when σ₁² ≠ σ₂², the
distribution of t₀* is unknown if the null hypothesis is false, and no operating characteristic
curves are available for this case.
For the two-sided alternative in equation 11-72, when σ₁² = σ₂² = σ² and n₁ = n₂ = n,
Charts VIe and VIf are used with

d = |μ₁ − μ₂| / (2σ) = |δ| / (2σ).    (11-82)

To use these curves, they must be entered with the sample size n* = 2n − 1. For the one-sided
alternative hypothesis of equation 11-76, we use Charts VIg and VIh and define

d = (μ₁ − μ₂) / (2σ) = δ / (2σ),    (11-83)

whereas for the other one-sided alternative hypothesis of equation 11-78, we use

d = (μ₂ − μ₁) / (2σ) = −δ / (2σ).    (11-84)

It is noted that the parameter d is a function of σ, which is unknown. As in the single-sample
t-test (Section 11-2.2), we may have to rely on a prior estimate of σ, or use a subjective
estimate. Alternatively, we could define the differences in the mean that we wish to detect
relative to σ.

Example 11-16
Consider the catalyst experiment in Example 11-14. Suppose that if catalyst 2 produces a yield that
differs from the yield of catalyst 1 by 3.0% we would like to reject the null hypothesis with a probability
of at least 0.85. What sample size is required? Using s_p = 1.99 as a rough estimate of the
common standard deviation σ, we have d = |δ|/(2σ) = |3.00|/[(2)(1.99)] = 0.75. From Chart VIe (Appendix)
with d = 0.75 and β = 0.15, we find n* = 20, approximately. Therefore, since n* = 2n − 1,

n = (n* + 1)/2 = (20 + 1)/2 = 10.5 ≈ 11,

and we would use sample sizes of n₁ = n₂ = n = 11.

11-3.3 The Paired t-Test


A special case of the two-sample t-tests occurs when the observations on the two populations
of interest are collected in pairs. Each pair of observations, say (X₁ⱼ, X₂ⱼ), is taken
under homogeneous conditions, but these conditions may change from one pair to another.
For example, suppose that we are interested in comparing two different types of tips for a
hardness-testing machine. This machine presses the tip into a metal specimen with a known
force. By measuring the depth of the depression caused by the tip, the hardness of the specimen
can be determined. If several specimens were selected at random, half tested with tip
1, half tested with tip 2, and the pooled or independent t-test in Section 11-3.2 applied, the
results of the test could be invalid. That is, the metal specimens could have been cut from
bar stock that was produced in different heats, or they may not be homogeneous, which is
another way hardness might be affected; then the observed differences between mean hardness
readings for the two tip types also include hardness differences between specimens.
The correct experimental procedure is to collect the data in pairs; that is, to take two
hardness readings of each specimen, one with each tip. The test procedure would then consist
of analyzing the differences between hardness readings of each specimen. If there is no
difference between tips, then the mean of the differences should be zero. This test procedure
is called the paired t-test.
Let (X₁₁, X₂₁), (X₁₂, X₂₂), ..., (X₁ₙ, X₂ₙ) be a set of n paired observations, where we
assume that X₁ ~ N(μ₁, σ₁²) and X₂ ~ N(μ₂, σ₂²). Define the differences between each pair of
observations as Dⱼ = X₁ⱼ − X₂ⱼ, j = 1, 2, ..., n.
The Dⱼ are normally distributed with mean

μ_D = E(X₁ − X₂) = E(X₁) − E(X₂) = μ₁ − μ₂,

so testing hypotheses about the equality of μ₁ and μ₂ can be accomplished by performing a
one-sample t-test on μ_D. Specifically, testing H₀: μ₁ = μ₂ against H₁: μ₁ ≠ μ₂ is equivalent
to testing

H₀: μ_D = 0,
H₁: μ_D ≠ 0.    (11-85)

The appropriate test statistic for equation 11-85 is

t₀ = D̄ / (S_D/√n),    (11-86)

where

D̄ = (1/n) Σⱼ₌₁ⁿ Dⱼ    (11-87)

and

S_D² = Σⱼ₌₁ⁿ (Dⱼ − D̄)² / (n − 1)    (11-88)

are the sample mean and variance of the differences. We would reject H₀: μ_D = 0 (implying
that μ₁ ≠ μ₂) if t₀ > t_{α/2, n−1} or if t₀ < −t_{α/2, n−1}. One-sided alternatives would be treated
similarly.

Example 11-17
An article in the Journal of Strain Analysis (Vol. 18, No. 2, 1983) compares several methods for predicting
the shear strength for steel plate girders. Data for two of these methods, the Karlsruhe and
Lehigh procedures, when applied to nine specific girders, are shown in Table 11-2. We wish to determine
if there is any difference (on the average) between the two methods.
The sample average and standard deviation of the differences dⱼ are d̄ = 0.2739 and s_d = 0.1351,
so the test statistic is

t₀ = d̄ / (s_d/√n) = 0.2739 / (0.1351/√9) = 6.08.

For the two-sided alternative H₁: μ_D ≠ 0 and α = 0.1, we would fail to reject only if |t₀| < t₀.₀₅,₈ = 1.86.
Since t₀ > t₀.₀₅,₈, we conclude that the two strength prediction methods yield different results. Specifically,
the Karlsruhe method produces, on average, higher strength predictions than does the Lehigh
method.
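With the raw pairs of Table 11-2, the whole test is a single call; a minimal sketch (Python, scipy assumed):

# Paired t-test on the nine girder strength ratios of Table 11-2
from scipy.stats import ttest_rel

karlsruhe = [1.186, 1.151, 1.322, 1.339, 1.200, 1.402, 1.365, 1.537, 1.559]
lehigh    = [1.061, 0.992, 1.063, 1.062, 1.065, 1.178, 1.037, 1.086, 1.052]

result = ttest_rel(karlsruhe, lehigh)   # tests H0: mu_D = 0 on the differences
print(f"t0 = {result.statistic:.2f}, P-value = {result.pvalue:.4f}")   # t0 = 6.08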

Paired Versus Unpaired Comparisons  Sometimes in performing a comparative experiment,
the investigator can choose between the paired analysis and the two-sample (or
unpaired) t-test. If n measurements are to be made on each population, the two-sample t statistic is

Table 11-2  Strength Predictions for Nine Steel Plate Girders (Predicted Load/Observed Load)

Girder    Karlsruhe Method    Lehigh Method    Difference dⱼ
S1/1      1.186               1.061            0.125
S2/1      1.151               0.992            0.159
S3/1      1.322               1.063            0.259
S4/1      1.339               1.062            0.277
S5/1      1.200               1.065            0.135
S2/1      1.402               1.178            0.224
S2/2      1.365               1.037            0.328
S2/3      1.537               1.086            0.451
S2/4      1.559               1.052            0.507

t₀ = (X̄₁ − X̄₂) / (S_p √(2/n)),

which is compared to t_{α/2, 2n−2}, and of course, the paired t statistic is

t₀ = D̄ / (S_D/√n),

which is compared to t_{α/2, n−1}. Notice that since

D̄ = (1/n) Σⱼ₌₁ⁿ Dⱼ = (1/n) Σⱼ₌₁ⁿ (X₁ⱼ − X₂ⱼ) = X̄₁ − X̄₂,

the numerators of both statistics are identical. However, the denominator of the two-sample
t-test is based on the assumption that X₁ and X₂ are independent. In many paired experiments,
there is a strong positive correlation between X₁ and X₂. That is,

V(D̄) = V(X̄₁ − X̄₂)
      = V(X̄₁) + V(X̄₂) − 2 Cov(X̄₁, X̄₂)
      = 2σ²(1 − ρ)/n,

assuming that both populations X₁ and X₂ have identical variances σ². Furthermore, S_D²/n estimates
the variance of D̄. Now, whenever there is positive correlation within the pairs, the
denominator for the paired t-test will be smaller than the denominator of the two-sample
t-test. This can cause the two-sample t-test to considerably understate the significance of the
data if it is incorrectly applied to paired samples.
Although pairing will often lead to a smaller value of the variance of X̄₁ − X̄₂, it does
have a disadvantage. Namely, the paired t-test leads to a loss of n − 1 degrees of freedom in
comparison to the two-sample t-test. Generally, we know that increasing the degrees of
freedom of a test increases the power against any fixed alternative values of the parameter.
So how do we decide to conduct the experiment: should we pair the observations or
not? Although there is no general answer to this question, we can give some guidelines
based on the above discussion. They are as follows:
1. If the experimental units are relatively homogeneous (small σ) and the correlation
between pairs is small, the gain in precision due to pairing will be offset by the loss
of degrees of freedom, so an independent-samples experiment should be used.
2. If the experimental units are relatively heterogeneous (large σ) and there is large
positive correlation between pairs, the paired experiment should be used.
The rules still require judgment in their implementation, because σ and ρ are usually
not known precisely. Furthermore, if the number of degrees of freedom is large (say 40 or
50), then the loss of n − 1 of them for pairing may not be serious. However, if the number
of degrees of freedom is small (say 10 or 20), then losing half of them is potentially serious
if not compensated for by an increased precision from pairing.

11-3.4 Tests for the Equality of Two Variances


We now present tests for comparing two variances. Following the approach in Section
11-2.3, we present tests for normal populations and large-sample tests that may be applied
to nonnormal populations.

Test Procedure for Normal Populations


Suppose that two independent populations are of interest, say X₁ ~ N(μ₁, σ₁²) and X₂ ~ N(μ₂,
σ₂²), where μ₁, σ₁², μ₂, and σ₂² are unknown. We wish to test hypotheses about the equality
of the two variances, say H₀: σ₁² = σ₂². Assume that two random samples of size n₁ from population
1 and of size n₂ from population 2 are available, and let S₁² and S₂² be the sample variances.
To test the two-sided alternative

H₀: σ₁² = σ₂²,
H₁: σ₁² ≠ σ₂²,    (11-89)

we use the fact that the statistic

F₀ = S₁² / S₂²    (11-90)

is distributed as F with n₁ − 1 and n₂ − 1 degrees of freedom if the null hypothesis H₀: σ₁² =
σ₂² is true. Therefore, we would reject H₀ if

F₀ > F_{α/2, n₁−1, n₂−1}    (11-91a)

or if

F₀ < F_{1−α/2, n₁−1, n₂−1},    (11-91b)

where F_{α/2, n₁−1, n₂−1} and F_{1−α/2, n₁−1, n₂−1} are the upper and lower α/2 percentage points of
the F distribution with n₁ − 1 and n₂ − 1 degrees of freedom. Table V (Appendix) gives only
the upper-tail points of F, so to find F_{1−α/2, n₁−1, n₂−1} we must use

F_{1−α/2, n₁−1, n₂−1} = 1 / F_{α/2, n₂−1, n₁−1}.    (11-92)

The same test statistic can be used to test one-sided alternative hypotheses. Since the
notation X₁ and X₂ is arbitrary, let X₁ denote the population that may have the largest variance.
Therefore, the one-sided alternative hypothesis is

H₀: σ₁² = σ₂²,
H₁: σ₁² > σ₂².    (11-93)

If

F₀ > F_{α, n₁−1, n₂−1},    (11-94)

we would reject H₀: σ₁² = σ₂².

Example 11-18
Chemical etching is used to remove copper from printed circuit boards. X₁ and X₂ represent process yields when two different concentrations are used. Suppose that we wish to test

$$H_0\colon \sigma_1^2 = \sigma_2^2, \qquad H_1\colon \sigma_1^2 \ne \sigma_2^2.$$

Two samples of sizes n₁ = n₂ = 8 yield s₁² = 3.89 and s₂² = 4.02, and

$$F_0 = \frac{s_1^2}{s_2^2} = \frac{3.89}{4.02} = 0.97.$$

If α = 0.05, we find that F₀.₀₂₅, ₇, ₇ = 4.99 and F₀.₉₇₅, ₇, ₇ = (F₀.₀₂₅, ₇, ₇)⁻¹ = (4.99)⁻¹ = 0.20. Therefore, we cannot reject H₀: σ₁² = σ₂², and we conclude that there is no strong evidence that the variance of the yield is affected by the concentration.
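As a quick check of this procedure, the sketch below (an illustration assuming scipy is available; not part of the original text) computes F₀ and both critical values for Example 11-18.

```python
# Two-sided F-test for equality of variances (Example 11-18 data).
from scipy import stats

s1_sq, s2_sq = 3.89, 4.02   # sample variances
n1, n2 = 8, 8
alpha = 0.05

f0 = s1_sq / s2_sq                                   # Equation 11-90
upper = stats.f.ppf(1 - alpha / 2, n1 - 1, n2 - 1)   # F_{0.025,7,7} = 4.99
lower = stats.f.ppf(alpha / 2, n1 - 1, n2 - 1)       # F_{0.975,7,7} = 0.20

print(f"F0 = {f0:.2f}; reject if F0 > {upper:.2f} or F0 < {lower:.2f}")
# F0 = 0.97 falls between 0.20 and 4.99, so H0 is not rejected.
```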
Choice of Sample Size
Charts VIo, VIp, VIq, and VIr (Appendix) provide operating characteristic curves for the F-test for α = 0.05 and α = 0.01, assuming that n₁ = n₂ = n. Charts VIo and VIp are used with the two-sided alternative of equation 11-89. They plot β against the abscissa parameter

$$\lambda = \frac{\sigma_1}{\sigma_2} \qquad (11\text{-}95)$$

for various n₁ = n₂ = n. Charts VIq and VIr are used for the one-sided alternative of equation 11-93.

Example 11-19

For the chemical process yield analysis problem in Example 11-18, suppose that one of the concentrations affected the variance of the yield, so that one of the variances was four times the other, and we wished to detect this with probability at least 0.80. What sample size should be used? Note that if one variance is four times the other, then

$$\lambda = \frac{\sigma_1}{\sigma_2} = 2.$$

By referring to Chart VIo, with β = 0.20 and λ = 2, we find that a sample size of n₁ = n₂ = 20, approximately, is necessary.

A Large-Sample Test Procedure


When both sample sizes n₁ and n₂ are large, a test procedure that does not require the normality assumption can be developed. The test is based on the result that the sample standard deviations S₁ and S₂ have approximate normal distributions with means σ₁ and σ₂, respectively, and variances σ₁²/2n₁ and σ₂²/2n₂, respectively. To test

$$H_0\colon \sigma_1 = \sigma_2, \qquad H_1\colon \sigma_1 \ne \sigma_2, \qquad (11\text{-}96)$$

we would use the test statistic

$$Z_0 = \frac{S_1 - S_2}{S_p\sqrt{\dfrac{1}{2n_1} + \dfrac{1}{2n_2}}}, \qquad (11\text{-}97)$$

where S_p is the pooled estimator of the common standard deviation σ. This statistic has an approximate standard normal distribution when σ₁ = σ₂. We would reject H₀ if Z₀ > Z_{α/2} or if Z₀ < −Z_{α/2}. Rejection regions for the one-sided alternatives have the same form as in other two-sample normal tests.

11-3.5 Tests of Hypotheses on Two Proportions


The tests of Section 11-2.4 can be extended to the case where there are two binomial parameters of interest, say p₁ and p₂, and we wish to test that they are equal. That is, we wish to test

$$H_0\colon p_1 = p_2, \qquad H_1\colon p_1 \ne p_2. \qquad (11\text{-}98)$$

We will present a large-sample procedure based on the normal approximation to the bino-
mial and then outline one possible approach for small sample sizes.

Large-Sample Test for H₀: p₁ = p₂


Suppose that two random samples of sizes n₁ and n₂ are taken from two populations, and let X₁ and X₂ represent the number of observations that belong to the class of interest in samples 1 and 2, respectively. Furthermore, suppose that the normal approximation to the binomial applies to each population, so that the estimators of the population proportions p̂₁ = X₁/n₁ and p̂₂ = X₂/n₂ have approximate normal distributions. Now, if the null hypothesis H₀: p₁ = p₂ is true, then using the fact that p₁ = p₂ = p, the random variable

$$Z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{p(1-p)\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}$$

is distributed approximately N(0, 1). An estimate of the common parameter p is

$$\hat{p} = \frac{X_1 + X_2}{n_1 + n_2}.$$

The test statistic for H₀: p₁ = p₂ is then

$$Z_0 = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1-\hat{p})\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}. \qquad (11\text{-}99)$$

If

$$Z_0 > Z_{\alpha/2} \quad \text{or} \quad Z_0 < -Z_{\alpha/2}, \qquad (11\text{-}100)$$

the null hypothesis is rejected.

Example 11-20
Two different types of fire control computers are being considered for use by the U.S. Army in six-gun 105-mm batteries. The two computer systems are subjected to an operational test in which the total number of hits of the target are counted. Computer system 1 gave 250 hits out of 300 rounds, while computer system 2 gave 178 hits out of 260 rounds. Is there reason to believe that the two computer systems differ? To answer this question, we test

$$H_0\colon p_1 = p_2, \qquad H_1\colon p_1 \ne p_2.$$

Note that p̂₁ = 250/300 = 0.8333, p̂₂ = 178/260 = 0.6846, and

$$\hat{p} = \frac{X_1 + X_2}{n_1 + n_2} = \frac{250 + 178}{300 + 260} = 0.7643.$$

The value of the test statistic is

$$Z_0 = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1-\hat{p})\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}} = \frac{0.8333 - 0.6846}{\sqrt{0.7643(0.2357)\left(\dfrac{1}{300} + \dfrac{1}{260}\right)}} = 4.13.$$

If we use α = 0.05, then Z₀.₀₂₅ = 1.96 and −Z₀.₀₂₅ = −1.96, and we would reject H₀, concluding that there is a significant difference in the two computer systems.
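For readers who want to verify the computation, here is a minimal Python sketch of the test statistic in equation 11-99 (not part of the original text), using scipy only for the normal cdf.

```python
# Large-sample two-proportion Z-test on the Example 11-20 data.
from math import sqrt
from scipy.stats import norm

x1, n1 = 250, 300
x2, n2 = 178, 260

p1_hat, p2_hat = x1 / n1, x2 / n2
p_hat = (x1 + x2) / (n1 + n2)          # pooled estimate of the common p

z0 = (p1_hat - p2_hat) / sqrt(p_hat * (1 - p_hat) * (1 / n1 + 1 / n2))
p_value = 2 * (1 - norm.cdf(abs(z0)))  # two-sided P-value

print(f"Z0 = {z0:.2f}, P-value = {p_value:.5f}")   # Z0 = 4.13, P << 0.05
```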

Choice of Sample Size

The computation of the β error for the foregoing test is somewhat more involved than in the single-sample case. The problem is that the denominator of Z₀ is an estimate of the standard deviation of p̂₁ − p̂₂ under the assumption that p₁ = p₂ = p. When H₀: p₁ = p₂ is false, the standard deviation of p̂₁ − p̂₂ is

$$\sigma_{\hat{p}_1 - \hat{p}_2} = \sqrt{\frac{p_1(1-p_1)}{n_1} + \frac{p_2(1-p_2)}{n_2}}. \qquad (11\text{-}101)$$

If the alternative hypothesis is two-sided, the β risk turns out to be approximately

$$\beta \simeq \Phi\!\left[\frac{Z_{\alpha/2}\sqrt{\bar{p}\bar{q}(1/n_1 + 1/n_2)} - (p_1 - p_2)}{\sigma_{\hat{p}_1 - \hat{p}_2}}\right] - \Phi\!\left[\frac{-Z_{\alpha/2}\sqrt{\bar{p}\bar{q}(1/n_1 + 1/n_2)} - (p_1 - p_2)}{\sigma_{\hat{p}_1 - \hat{p}_2}}\right], \qquad (11\text{-}102)$$

where

$$\bar{p} = \frac{n_1 p_1 + n_2 p_2}{n_1 + n_2}, \qquad \bar{q} = \frac{n_1(1-p_1) + n_2(1-p_2)}{n_1 + n_2},$$

and σ_{p̂₁−p̂₂} is given by equation 11-101. If the alternative hypothesis is H₁: p₁ > p₂, then

$$\beta \simeq \Phi\!\left[\frac{Z_{\alpha}\sqrt{\bar{p}\bar{q}(1/n_1 + 1/n_2)} - (p_1 - p_2)}{\sigma_{\hat{p}_1 - \hat{p}_2}}\right], \qquad (11\text{-}103)$$

and if the alternative hypothesis is H₁: p₁ < p₂, then

$$\beta = 1 - \Phi\!\left[\frac{-Z_{\alpha}\sqrt{\bar{p}\bar{q}(1/n_1 + 1/n_2)} - (p_1 - p_2)}{\sigma_{\hat{p}_1 - \hat{p}_2}}\right]. \qquad (11\text{-}104)$$

For a specified pair of values p₁ and p₂ we can find the sample sizes n₁ = n₂ = n required to give the test of size α that has specified type II error β. For the two-sided alternative the common sample size is approximately

$$n = \frac{\left[Z_{\alpha/2}\sqrt{(p_1 + p_2)(q_1 + q_2)/2} + Z_{\beta}\sqrt{p_1 q_1 + p_2 q_2}\right]^2}{(p_1 - p_2)^2}, \qquad (11\text{-}105)$$

where q₁ = 1 − p₁ and q₂ = 1 − p₂. For the one-sided alternatives, replace Z_{α/2} in equation 11-105 with Z_α.
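Equation 11-105 is easy to automate. The sketch below (a hypothetical helper, not from the text, assuming scipy for the normal quantiles) computes the common sample size; the planning values in the usage line are taken from Example 11-20.

```python
# Common sample size per group for the two-sided two-proportion test.
from math import ceil, sqrt
from scipy.stats import norm

def two_proportion_n(p1, p2, alpha=0.05, beta=0.10):
    q1, q2 = 1 - p1, 1 - p2
    z_a2 = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(1 - beta)
    num = z_a2 * sqrt((p1 + p2) * (q1 + q2) / 2) + z_b * sqrt(p1 * q1 + p2 * q2)
    return ceil(num ** 2 / (p1 - p2) ** 2)   # Equation 11-105, rounded up

# Using the Example 11-20 proportions as planning values:
print(two_proportion_n(0.8333, 0.6846))
```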

Small-Sample Test for H₀: p₁ = p₂


Most problems involving the comparison of proportions p₁ and p₂ have relatively large sample sizes, so the procedure based on the normal approximation to the binomial is widely used in practice. However, occasionally a small-sample-size problem is encountered. In such cases, the Z-tests are inappropriate and an alternative procedure is required. In this section we describe a procedure based on the hypergeometric distribution.

Suppose that X₁ and X₂ are the number of successes in two random samples of sizes n₁ and n₂, respectively. The test procedure requires that we view the total number of successes as fixed at the value X₁ + X₂ = Y. Now consider the hypotheses

$$H_0\colon p_1 = p_2, \qquad H_1\colon p_1 > p_2.$$

Given that X₁ + X₂ = Y, large values of X₁ support H₁, whereas small or moderate values of X₁ support H₀. Therefore, we will reject H₀ whenever X₁ is sufficiently large.

Since the combined sample of n₁ + n₂ observations contains X₁ + X₂ = Y total successes, if H₀: p₁ = p₂, the successes are no more likely to be concentrated in the first sample than in the second. That is, all the ways in which the n₁ + n₂ responses can be divided into one sample of n₁ responses and a second sample of n₂ responses are equally likely. The number of ways of selecting X₁ successes for the first sample, leaving Y − X₁ successes for the second, is

$$\binom{Y}{X_1}\binom{n_1 + n_2 - Y}{n_1 - X_1}.$$

Because outcomes are equally likely, the probability of there being exactly X₁ successes in sample 1 is determined by the ratio of the number of sample 1 outcomes having X₁ successes to the total number of outcomes, or

$$P(X_1 = x_1 \mid Y \text{ successes in } n_1 + n_2 \text{ responses}) = \frac{\dbinom{Y}{x_1}\dbinom{n_1 + n_2 - Y}{n_1 - x_1}}{\dbinom{n_1 + n_2}{n_1}}, \qquad (11\text{-}106)$$

given that H₀: p₁ = p₂ is true. We recognize equation 11-106 as a hypergeometric distribution.

To use equation 11-106 for hypothesis testing, we would compute the probability of finding a value of X₁ at least as extreme as the observed value of x₁. Note that this probability is a P-value. If this P-value is sufficiently small, then the null hypothesis is rejected. This approach could also be applied to lower-tailed and two-tailed alternatives.

Example 11-21
Insulating cloth used in printed circuit boards is manufactured in large rolls. The manufacturer is trying to improve the process yield, that is, the number of defect-free rolls produced. A sample of 10 rolls contains exactly four defect-free rolls. From analysis of the defect types, manufacturing engineering suggests several changes in the process. Following implementation of these changes, another sample of 10 rolls yields 8 defect-free rolls. Do the data support the claim that the new process is better than the old one, using α = 0.10?

To answer this question, we compute the P-value. In our example, n₁ = n₂ = 10, y = 8 + 4 = 12, and the observed value of x₁ = 8. The values of x₁ that are more extreme than 8 are 9 and 10. Therefore,

$$P(X_1 = 8 \mid 12 \text{ successes}) = \frac{\dbinom{12}{8}\dbinom{8}{2}}{\dbinom{20}{10}} = 0.0750,$$

$$P(X_1 = 9 \mid 12 \text{ successes}) = \frac{\dbinom{12}{9}\dbinom{8}{1}}{\dbinom{20}{10}} = 0.0095,$$

$$P(X_1 = 10 \mid 12 \text{ successes}) = \frac{\dbinom{12}{10}\dbinom{8}{0}}{\dbinom{20}{10}} = 0.0003.$$

The P-value is P = 0.0750 + 0.0095 + 0.0003 = 0.0848. Thus, at the level α = 0.10, the null hypothesis is rejected and we conclude that the engineering changes have improved the process yield.

This test procedure is sometimes called the Fisher-Irwin test. Because the test depends on the assumption that X₁ + X₂ is fixed at some value, some statisticians have argued against use of the test when X₁ + X₂ is not actually fixed. Clearly X₁ + X₂ is not fixed by the sampling procedure in our example. However, because there are no other better competing procedures, the Fisher-Irwin test is often used whether or not X₁ + X₂ is actually fixed in advance.
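The hypergeometric P-value of Example 11-21 can be checked with scipy's hypergeometric distribution, as in the following sketch (not part of the original text).

```python
# Fisher-Irwin P-value for Example 11-21 via the hypergeometric pmf.
from scipy.stats import hypergeom

n1, n2, y, x1 = 10, 10, 12, 8

# M = total responses, n = total successes, N = size of sample 1.
rv = hypergeom(M=n1 + n2, n=y, N=n1)

# P-value = P(X1 >= 8) = P(8) + P(9) + P(10).
p_value = sum(rv.pmf(k) for k in range(x1, min(y, n1) + 1))
print(f"P-value = {p_value:.4f}")   # about 0.085, matching the text's
                                    # 0.0848 up to term-by-term rounding
```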

11-4 TESTING FOR GOODNESS OF FIT


The hypothesis-testing procedures that we have discussed in previous sections are for prob-
lems in which the form of the density function of the random variable is known, and the
hypotheses involve the parameters of the distribution. Another kind of hypothesis is often
encountered: we do not know the probability distribution of the random variable under study,
say X, and we wish to test the hypothesis that X follows a particular probability distribution.
For example, we might wish to test the hypothesis that X follows the normal distribution.
In this section, we describe a formal goodness-of-fit test procedure based on the
chi-square distribution. We also describe a very useful graphical technique called probabil-
ity plotting. Finally, we give some guidelines useful in selecting the form of the population
distribution.

The Chi-Square Goodness-of-Fit Test


The test procedure requires a random sample of size n of the random variable X, whose probability density function is unknown. These n observations are arrayed in a frequency histogram having k class intervals. Let O_i be the observed frequency in the ith class interval. From the hypothesized probability distribution we compute the expected frequency in the ith class interval, denoted E_i. The test statistic is

$$\chi_0^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}. \qquad (11\text{-}107)$$

It can be shown that χ₀² approximately follows the chi-square distribution with k − p − 1 degrees of freedom, where p represents the number of parameters of the hypothesized

distribution estimated by sample statistics. This approximation improves as n increases. We would reject the hypothesis that X conforms to the hypothesized distribution if χ₀² > χ²_{α, k−p−1}.

One point to be noted in the application of this test procedure concerns the magnitude of the expected frequencies. If these expected frequencies are too small, then χ₀² will not reflect the departure of observed from expected, but only the smallness of the expected frequencies. There is no general agreement regarding the minimum value of expected frequencies, but values of 3, 4, and 5 are widely used as minimal. Should an expected frequency be too small, it can be combined with the expected frequency in an adjacent class interval. The corresponding observed frequencies would then be combined also, and k would be reduced by 1. Class intervals are not required to be of equal width.
We now give three examples of the test procedure.

Example 11-22
A Completely Specified Distribution  A computer scientist has developed an algorithm for generating pseudorandom integers over the interval 0-9. He codes the algorithm and generates 1000 pseudorandom digits. The data are shown in Table 11-3. Is there evidence that the random number generator is working correctly?

If the random number generator is working correctly, then the values 0-9 should follow the discrete uniform distribution, which implies that each of the integers should occur about 100 times. That is, the expected frequencies E_i = 100, for i = 0, 1, ..., 9. Since these expected frequencies can be determined without estimating any parameters from the sample data, the resulting chi-square goodness-of-fit test will have k − p − 1 = 10 − 0 − 1 = 9 degrees of freedom.

The observed value of the test statistic is

$$\chi_0^2 = \frac{(94-100)^2}{100} + \frac{(93-100)^2}{100} + \cdots + \frac{(94-100)^2}{100} = 3.72.$$

Since χ²₀.₀₅, ₉ = 16.92, we are unable to reject the hypothesis that the data come from a discrete uniform distribution. Therefore, the random number generator seems to be working satisfactorily.
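A minimal sketch of this calculation with scipy (an illustration, not part of the original text): since no parameters are estimated, the default degrees of freedom k − 1 = 9 are correct here.

```python
# Chi-square goodness-of-fit test for the pseudorandom digits of Table 11-3.
from scipy.stats import chisquare

observed = [94, 93, 112, 101, 104, 95, 100, 99, 108, 94]
expected = [100] * 10

chi2_0, p_value = chisquare(observed, expected)
print(f"chi2_0 = {chi2_0:.2f}, P-value = {p_value:.3f}")   # 3.72, P ~ 0.93
```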

Example 11-23
A Discrete Distribution  The number of defects in printed circuit boards is hypothesized to follow a Poisson distribution. A random sample of n = 60 printed boards has been collected, and the number of defects observed. The following data result:

Table 11-3  Data for Example 11-22

Value                        0    1    2    3    4    5    6    7    8    9   Total
Observed frequencies, O_i   94   93  112  101  104   95  100   99  108   94   1000
Expected frequencies, E_i  100  100  100  100  100  100  100  100  100  100   1000

Number of Defects    Observed Frequency
0                    32
1                    15
2                     9
3                     4
The mean of the assumed Poisson distribution in this example is unknown and must be estimated from the sample data. The estimate of the mean number of defects per board is the sample average; that is, (32·0 + 15·1 + 9·2 + 4·3)/60 = 0.75. From the cumulative Poisson distribution with parameter 0.75 we may compute the expected frequencies as E_i = np_i, where p_i is the theoretical, hypothesized probability associated with the ith class interval and n is the total number of observations. The appropriate hypotheses are

$$H_0\colon p(x) = \frac{e^{-0.75}(0.75)^x}{x!}, \quad x = 0, 1, 2, \ldots,$$
$$H_1\colon p(x) \text{ is not Poisson with } \lambda = 0.75.$$

We may compute the expected frequencies as follows:

Number of Failures    Probability    Expected Frequency
0                     0.472          28.32
1                     0.354          21.24
2                     0.133           7.98
≥3                    0.041           2.46

The expected frequencies are obtained by multiplying the sample size times the respective probabilities. Since the expected frequency in the last cell is less than 3, we combine the last two cells:

Number of Failures    Observed Frequency    Expected Frequency
0                     32                    28.32
1                     15                    21.24
≥2                    13                    10.44

The test statistic (which will have k − p − 1 = 3 − 1 − 1 = 1 degree of freedom) becomes

$$\chi_0^2 = \frac{(32-28.32)^2}{28.32} + \frac{(15-21.24)^2}{21.24} + \frac{(13-10.44)^2}{10.44} = 2.94,$$

and since χ²₀.₀₅, ₁ = 3.84, we cannot reject the hypothesis that the occurrence of defects follows a Poisson distribution with mean 0.75 defects per board.
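The same calculation can be scripted; the sketch below (assuming scipy; not part of the original text) uses the ddof argument so the reference distribution has k − p − 1 = 1 degree of freedom. Small differences from the text's 2.94 reflect the rounded cell probabilities above.

```python
# Chi-square goodness-of-fit test for a Poisson model with estimated mean.
from scipy.stats import poisson, chisquare

counts = {0: 32, 1: 15, 2: 9, 3: 4}
n = sum(counts.values())
lam = sum(k * v for k, v in counts.items()) / n    # 0.75

# Cells 0, 1, and >= 2 (the last two cells pooled, as in the text).
p = [poisson.pmf(0, lam), poisson.pmf(1, lam)]
p.append(1 - sum(p))                               # P(X >= 2)

observed = [32, 15, 13]
expected = [n * pi for pi in p]

# ddof=1 accounts for the one estimated parameter, giving df = 3 - 1 - 1 = 1.
chi2_0, p_value = chisquare(observed, expected, ddof=1)
print(f"chi2_0 = {chi2_0:.2f}, P-value = {p_value:.3f}")
```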

Example 11-24
A Continuous Distribution  A manufacturing engineer is testing a power supply used in a word processing workstation. He wishes to determine whether output voltage is adequately described by a normal distribution. From a random sample of n = 100 units he obtains sample estimates of the mean and standard deviation, x̄ = 12.04 V and s = 0.08 V.
A common practice in constructing the class intervals for the frequency distribution used in the chi-square goodness-of-fit test is to choose the cell boundaries so that the expected frequencies E_i = np_i are equal for all cells. To use this method, we want to choose the cell boundaries a₀, a₁, ..., a_k for the k cells so that all the probabilities

$$p_i = P(a_{i-1} \le X \le a_i) = \int_{a_{i-1}}^{a_i} f(x)\,dx$$

are equal. Suppose we decide to use k = 8 cells. For the standard normal distribution the intervals that divide the scale into eight equally likely segments are [0, 0.32), [0.32, 0.675), [0.675, 1.15), [1.15, ∞), and their four "mirror image" intervals on the other side of zero. Denoting these standard normal endpoints by a₀*, a₁*, ..., a₈*, it is a simple matter to calculate the endpoints that are necessary for the general normal problem at hand; namely, we define the new class interval endpoints by the transformation a_i = x̄ + s a_i*, i = 0, 1, ..., 8. For example, the sixth interval's right endpoint is

$$a_6 = \bar{x} + s a_6^* = 12.04 + (0.08)(0.675) = 12.094.$$

For each interval, p_i = 1/8 = 0.125, so the expected cell frequencies are E_i = np_i = 100(0.125) = 12.5. The complete table of observed and expected frequencies is given in Table 11-4.

The computed value of the chi-square statistic is

$$\chi_0^2 = \frac{(10-12.5)^2}{12.5} + \frac{(14-12.5)^2}{12.5} + \cdots + \frac{(14-12.5)^2}{12.5} = 1.12.$$

Since two parameters in the normal distribution have been estimated, we would compare χ₀² = 1.12 to a chi-square distribution with k − p − 1 = 8 − 2 − 1 = 5 degrees of freedom. Using α = 0.10, we see that χ²₀.₁₀, ₅ = 9.24, and so we conclude that there is no reason to believe that output voltage is not normally distributed.
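The equal-probability cell boundaries are conveniently generated from standard normal quantiles, as in this sketch (assuming numpy and scipy; not part of the original text).

```python
# Equal-probability cell boundaries for the normal goodness-of-fit test.
import numpy as np
from scipy.stats import norm

xbar, s, n, k = 12.04, 0.08, 100, 8

# Standard-normal endpoints a*_1, ..., a*_{k-1}; then a_i = xbar + s * a*_i.
z_cuts = norm.ppf(np.arange(1, k) / k)
boundaries = xbar + s * z_cuts
print(np.round(boundaries, 3))
# Close to the endpoints in Table 11-4 (the text rounds the normal quantiles).

# Each of the k cells then has expected frequency n / k = 12.5.
```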

Probability Plotting
Graphical methods are also useful when selecting a probability distribution to describe
data. Probability plotting is a graphical method for determining whether the data conform
to a hypothesized distribution based on a subjective visual examination of the data. The
general procedure is very simple and can be performed quickly. Probability plotting
requires special graph paper, known as probability paper, that has been designed for the
hypothesized distribution. Probability paper is widely available for the normal, lognormal,
Weibull, and various chi-square and gamma distributions. To construct a probability plot,
the observations in the sample are first ranked from smallest to largest. That is, the sample

Table 11-4  Observed and Expected Frequencies

Class Interval         Observed Frequency, O_i    Expected Frequency, E_i
x < 11.948             10                         12.5
11.948 ≤ x < 11.986    14                         12.5
11.986 ≤ x < 12.014    12                         12.5
12.014 ≤ x < 12.040    13                         12.5
12.040 ≤ x < 12.066    11                         12.5
12.066 ≤ x < 12.094    12                         12.5
12.094 ≤ x < 12.132    14                         12.5
12.132 ≤ x             14                         12.5
Total                  100                        100

X₁, X₂, ..., X_n is arranged as X₍₁₎, X₍₂₎, ..., X₍ₙ₎, where X₍ⱼ₎ ≤ X₍ⱼ₊₁₎. The ordered observations X₍ⱼ₎ are then plotted against their observed cumulative frequency (j − 0.5)/n on the appropriate probability paper. If the hypothesized distribution adequately describes the data, the plotted points will fall approximately along a straight line; if the plotted points deviate significantly from a straight line, then the hypothesized model is not appropriate. Usually, the determination of whether or not the data plot as a straight line is subjective.

Example 11-25
To illustrate probability plotting, consider the following data:

−0.314, 1.080, 0.863, 0.179, −1.390, −0.563, 1.436, 1.153, 0.504, −0.801.

We hypothesize that these data are adequately modeled by a normal distribution. The observations are arranged in ascending order and their cumulative frequencies (j − 0.5)/n calculated as follows:

j      x₍ⱼ₎       (j − 0.5)/n
1     −1.390     0.05
2     −0.801     0.15
3     −0.563     0.25
4     −0.314     0.35
5      0.179     0.45
6      0.504     0.55
7      0.863     0.65
8      1.080     0.75
9      1.153     0.85
10     1.436     0.95

The pairs of values x₍ⱼ₎ and (j − 0.5)/n are now plotted on normal probability paper. This plot is shown in Fig. 11-7. Most normal probability paper plots 100(j − 0.5)/n on the right vertical scale and 100[1 − (j − 0.5)/n] on the left vertical scale, with the variable value plotted on the horizontal scale. We have chosen to plot x₍ⱼ₎ versus 100(j − 0.5)/n on the right vertical in Fig. 11-7. A straight line,

[Figure 11-7  Normal probability plot: 100(j − 0.5)/n (right scale) and 100[1 − (j − 0.5)/n] (left scale) versus x₍ⱼ₎.]

chosen subjectively, has been drawn through the plotted points. In drawing the straight line, one should be influenced more by the points near the middle than by the extreme points. Since the points fall generally near the line, we conclude that a normal distribution describes the data.

We can obtain an estimate of the mean and standard deviation directly from the normal probability plot. We see from the straight line in Fig. 11-7 that the mean is estimated as the 50th percentile of the sample, or μ̂ = 0.10, approximately, and the standard deviation is estimated as the difference between the 84th and 50th percentiles, or σ̂ = 0.95 − 0.10 = 0.85, approximately.

A normal probability plot can also be constructed on ordinary graph paper by plotting the standardized normal scores Z_j against x₍ⱼ₎, where the standardized normal scores satisfy

$$\frac{j - 0.5}{n} = P(Z \le Z_j) = \Phi(Z_j).$$

For example, if (j − 0.5)/n = 0.05, then Φ(Z_j) = 0.05 implies that Z_j = −1.64. To illustrate, consider the data from Example 11-25. In the table below we have shown the standardized normal scores in the last column:

j      x₍ⱼ₎       (j − 0.5)/n    Z_j
1     −1.390     0.05          −1.64
2     −0.801     0.15          −1.04
3     −0.563     0.25          −0.67
4     −0.314     0.35          −0.39
5      0.179     0.45          −0.13
6      0.504     0.55           0.13
7      0.863     0.65           0.39
8      1.080     0.75           0.67
9      1.153     0.85           1.04
10     1.436     0.95           1.64

Figure 11-8 presents the plot of Z_j versus x₍ⱼ₎. This normal probability plot is equivalent to the one in Fig. 11-7. Many software packages will construct probability plots for various distributions. For a Minitab® example, see Section 11-6.

[Figure 11-8  Normal probability plot: Z_j versus x₍ⱼ₎.]
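The same plot is easy to produce in software; the sketch below (assuming numpy, scipy, and matplotlib are available; not part of the original text) computes the normal scores Z_j and plots them against the ordered data.

```python
# Normal probability plot via standardized normal scores.
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

x = np.sort([-0.314, 1.080, 0.863, 0.179, -1.390,
             -0.563, 1.436, 1.153, 0.504, -0.801])
n = len(x)
z = norm.ppf((np.arange(1, n + 1) - 0.5) / n)   # Z_j with Phi(Z_j)=(j-0.5)/n

plt.scatter(x, z)
plt.xlabel("x_(j)")
plt.ylabel("Z_j")
plt.show()   # roughly linear points support the normality hypothesis
```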

Selecting the Form of a Distribution


The choice of the distribution hypothesized to fit the data is important. Sometimes analysts can use their knowledge of the physical phenomena to choose a distribution to model the data. For example, in studying the circuit board defect data in Example 11-23, a Poisson distribution was hypothesized to describe the data, because failures are an "event per unit" phenomenon, and such phenomena are often well modeled by a Poisson distribution. Sometimes previous experience can suggest the choice of distribution.

In situations where there is no previous experience or theory to suggest a distribution that describes the data, analysts must rely on other methods. Inspection of a frequency histogram can often suggest an appropriate distribution. One may also use the display in Fig. 11-9 to assist in selecting a distribution that describes the data. When using Fig. 11-9, note that the β₂ axis increases downward. This figure shows the regions in the β₁, β₂ plane for several standard probability distributions, where

$$\beta_1 = \frac{\left[E(X-\mu)^3\right]^2}{\left(\sigma^2\right)^3}$$

[Figure 11-9: regions occupied by several standard distributions, with β₁ on the horizontal axis (0 to 4) and β₂ on the vertical axis increasing downward; the exponential distribution appears as a single point.]
Figure 11-9  Regions in the β₁, β₂ plane for various standard distributions. (Adapted from G. J. Hahn and S. S. Shapiro, Statistical Models in Engineering, John Wiley & Sons, New York, 1967; used with permission of the publisher and Professor E. S. Pearson, University of London.)

is a standardized measure of skewness and

$$\beta_2 = \frac{E(X-\mu)^4}{\sigma^4}$$

is a standardized measure of kurtosis (or peakedness). To use Fig. 11-9, calculate the sample estimates of β₁ and β₂, say

$$\hat{\beta}_1 = \frac{M_3^2}{M_2^3} \qquad \text{and} \qquad \hat{\beta}_2 = \frac{M_4}{M_2^2},$$

where

$$M_j = \frac{1}{n}\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^j, \quad j = 2, 3, 4,$$

and plot the point β̂₁, β̂₂. If this plotted point falls reasonably close to a point, line, or area that corresponds to one of the distributions given in the figure, then this distribution is a logical candidate to model the data.
From inspecting Fig. 11-9 we note that all normal distributions are represented by the point β₁ = 0 and β₂ = 3. This is reasonable, since all normal distributions have the same shape. Similarly, the exponential and uniform distributions are represented by a single point in the β₁, β₂ plane. The gamma and lognormal distributions are represented by lines, because their shapes depend on their parameter values. Note that these lines are close together, which may explain why some data sets are modeled equally well by either distribution. We also observe that there are regions of the β₁, β₂ plane for which none of the distributions in Fig. 11-9 is appropriate. Other, more general distributions, such as the Johnson or Pearson families of distributions, may be required in these cases. Procedures for fitting these families of distributions and figures similar to Fig. 11-9 are given in Hahn and Shapiro (1967).
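The sample moment ratios are simple to compute, as in the following sketch (assuming numpy; the function name beta_hats is ours, not from the text).

```python
# Sample moment ratios beta1_hat and beta2_hat for use with Fig. 11-9.
import numpy as np

def beta_hats(x):
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    m2, m3, m4 = (np.mean(d ** j) for j in (2, 3, 4))
    return m3 ** 2 / m2 ** 3, m4 / m2 ** 2     # (beta1_hat, beta2_hat)

# For example, a large normal sample should plot near (0, 3):
rng = np.random.default_rng(1)
print(beta_hats(rng.normal(size=5000)))
```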

11-5 CONTINGENCY TABLE TESTS


Many times, the n elements of a sample from a population may be classified according to two different criteria. It is then of interest to know whether the two methods of classification are statistically independent; for example, we may consider the population of graduating engineers and we may wish to determine whether starting salary is independent of academic discipline. Assume that the first method of classification has r levels and that the second method of classification has c levels. We will let O_ij be the observed frequency for level i of the first classification method and level j of the second classification method. The data would, in general, appear as in Table 11-5. Such a table is commonly called an r × c contingency table.

We are interested in testing the hypothesis that the row and column methods of classification are independent. If we reject this hypothesis, we conclude there is some interaction between the two criteria of classification. Exact test procedures are difficult to obtain, but an approximate test statistic is valid for large n. Assume the O_ij to be multinomial random variables and p_ij to be the probability that a randomly selected element falls in the ijth

Table 11-5  An r × c Contingency Table

                      Column
            1       2      ...      c
Row 1     O_11    O_12     ...    O_1c
Row 2     O_21    O_22     ...    O_2c
 ...       ...     ...     ...     ...
Row r     O_r1    O_r2     ...    O_rc

cell, given that the two classifications are independent. Then p_ij = u_i v_j, where u_i is the probability that a randomly selected element falls in row class i and v_j is the probability that a randomly selected element falls in column class j. Now, assuming independence, the maximum likelihood estimators of u_i and v_j are

$$\hat{u}_i = \frac{1}{n}\sum_{j=1}^{c} O_{ij}, \qquad \hat{v}_j = \frac{1}{n}\sum_{i=1}^{r} O_{ij}. \qquad (11\text{-}108)$$

Therefore, the expected frequency of each cell is

$$E_{ij} = n\hat{u}_i\hat{v}_j = \frac{1}{n}\sum_{j=1}^{c} O_{ij}\sum_{i=1}^{r} O_{ij}. \qquad (11\text{-}109)$$

Then, for large n, the statistic

$$\chi_0^2 = \sum_{i=1}^{r}\sum_{j=1}^{c} \frac{\left(O_{ij} - E_{ij}\right)^2}{E_{ij}} \sim \chi_{(r-1)(c-1)}^2, \qquad (11\text{-}110)$$

approximately, and we would reject the hypothesis of independence if χ₀² > χ²_{α, (r−1)(c−1)}.

Example 11-26
A company has to choose among three pension plans. Management wishes to know whether the preference for a plan is independent of job classification. The opinions of a random sample of 500 employees are shown in Table 11-6. We may compute û₁ = 340/500 = 0.68, û₂ = 160/500 = 0.32, v̂₁ = 200/500 = 0.40, v̂₂ = 200/500 = 0.40, and v̂₃ = 100/500 = 0.20. The expected frequencies may be computed from equation 11-109. For example, the expected number of salaried workers favoring pension plan 1 is

$$E_{11} = n\hat{u}_1\hat{v}_1 = 500(0.68)(0.40) = 136.$$

Table 11-6  Observed Data for Example 11-26

                      Pension Plan
                    1      2      3     Total
Salaried workers   160    140     40    340
Hourly workers      40     60     60    160
Totals             200    200    100    500

Table 11-7  Expected Frequencies for Example 11-26

                      Pension Plan
                    1      2      3     Total
Salaried workers   136    136     68    340
Hourly workers      64     64     32    160
Totals             200    200    100    500

The expected frequencies are shown in Table 11-7. The test statistic is computed from equation 11-110 as follows:

$$\chi_0^2 = \frac{(160-136)^2}{136} + \frac{(140-136)^2}{136} + \frac{(40-68)^2}{68} + \frac{(40-64)^2}{64} + \frac{(60-64)^2}{64} + \frac{(60-32)^2}{32} = 49.63.$$

Since χ²₀.₀₅, ₂ = 5.99, we reject the hypothesis of independence and conclude that the preference for pension plans is not independent of job classification.
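The whole computation of Example 11-26 is available in scipy as chi2_contingency, which reproduces the expected frequencies of equation 11-109 and the statistic of equation 11-110 (a sketch, not part of the original text):

```python
# Chi-square test of independence for the pension-plan data.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[160, 140, 40],    # salaried workers
                  [ 40,  60, 60]])   # hourly workers

chi2_0, p_value, df, expected = chi2_contingency(table)
print(f"chi2_0 = {chi2_0:.2f}, df = {df}, P-value = {p_value:.2e}")
print(expected)   # matches Table 11-7: [[136, 136, 68], [64, 64, 32]]
```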

Using the two-way contingency table to test independence between two variables of
classification in a sample from a single population of interest is only one application of con-
tingency table methods. Another common situation occurs when there are r populations of
interest and each population is divided into the same c categories. A sample is then taken
from the ith population and the counts entered in the appropriate columns of the ith row. In
this situation we want to investigate whether or not the proportions in the c categories are
the same for all populations. The null hypothesis in this problem states that the populations
are homogeneous with respect to the categories. For example, when there are only two cat-
egories, such as success and failure, defective and nondefective, and so on, then the test for
homogeneity is really a test of the equality of r binomial parameters. Calculation of
expected frequencies, determination of degrees of freedom, and computation of the
chi-square statistic for the test for homogeneity are identical to the test for independence.

11-6 SAMPLE COMPUTER OUTPUT


There are many statistical packages available that can be used to construct confidence inter-
vals, carry out tests of hypotheses, and determine sample size. In this section we present
results for several problems using Minitab®.

Example 11-27
A study was conducted on the tensile strength of a particular fiber under various temperatures. The
results of the study (given in MPa) are
226, 237, 272, 245, 428, 298, 345, 201, 327, 301, 317, 395, 332, 238, 367.

Suppose it is of interest to determine whether the mean tensile strength is greater than 250 MPa. That is, test

$$H_0\colon \mu = 250, \qquad H_1\colon \mu > 250.$$

A normal probability plot was constructed for the tensile strength and is given in Fig. 11-10. The normality assumption appears to be satisfied. The population variance for tensile strength is assumed to be unknown, and as a result, a single-sample t-test will be used for this problem.

The results from Minitab® for hypothesis testing and confidence interval on the mean are

Test of mu = 250 vs mu > 250

Variable     N     Mean    StDev    SE Mean
TS          15    301.9     65.9      17.10

Variable    95.0% Lower Bound        P
TS                      272.0    0.004

The P-value is reported as 0.004, leading us to reject the null hypothesis and conclude that the mean tensile strength is greater than 250 MPa. The lower one-sided 95% confidence bound on the mean is given as 272.0.
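A rough Python equivalent of this Minitab analysis (a sketch; the alternative keyword requires scipy 1.6 or later) is:

```python
# One-sample t-test of H0: mu = 250 vs H1: mu > 250 for the tensile data.
from scipy import stats

ts = [226, 237, 272, 245, 428, 298, 345, 201, 327, 301,
      317, 395, 332, 238, 367]

t0, p_value = stats.ttest_1samp(ts, popmean=250, alternative='greater')
print(f"t0 = {t0:.2f}, P-value = {p_value:.3f}")   # P = 0.004, as reported
```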

Example 11-28
Reconsider Example 11-17, comparing two methods for predicting the shear strength of steel plate girders. The Minitab® output for the paired t-test using α = 0.10 is

Paired T for Karlsruhe - Lehigh

              N      Mean     StDev    SE Mean
Karlsruhe     9    1.3401    0.1460     0.0487
Lehigh        9    1.0662    0.0494     0.0165
Difference    9    0.2739    0.1351     0.0450

90% CI for mean difference: (0.1901, 0.3576)
T-Test of mean difference = 0 (vs not = 0): T-Value = 6.08  P-Value = 0.000

[Figure 11-10  Normal probability plot of tensile strength for Example 11-27.]

The results of the Minitab® output are in agreement with the results found in Example 11-17. Minitab® also provides the appropriate confidence interval for the problem. Using α = 0.10, the level of confidence is 0.90; the 90% confidence interval on the difference between the two methods is (0.1901, 0.3576). Since the confidence interval does not contain zero, we also conclude that there is a significant difference between the two methods.

Example 11-29.
The number of airline flights canceled is recorded for all airlines for each day of service. The num-
ber of flights recorded and the number of these flights that were canceled on a single day in March
2001 are provided below for two major airlines.

Airline                   # of Flights    # of Canceled Flights    Proportion
American Airlines             2128                 115             p̂₁ = 0.054
America West Airlines          635                  49             p̂₂ = 0.077

Is there a significant difference in the proportion of canceled flights for the two airlines? The hypotheses of interest are H₀: p₁ = p₂ versus H₁: p₁ ≠ p₂. The Minitab® results for the two-sample test on proportions and a two-sided confidence interval on the difference in proportions are

Sample X N Sample p
1 115 2128 0.054041
2 49 635 0.077165

Estimate for p(1) - p(2): -0.0231240


95% CI for p(1) - p(2): (-0.0459949, -0.000253139)
Test for p(1) - p(2) = 0 (vs not = 0): Z = -1.98 P-Value = 0.048

The P-value is given as 0.048, indicating that there is a significant difference between the proportions of flights canceled for American and America West airlines at a 5% level of significance. The 95% confidence interval of (−0.0460, −0.0003) indicates that America West airlines had a statistically significantly higher proportion of canceled flights than American Airlines for the single day.

Example 11-30
The mean compressive strength for a particular high-strength concrete is hypothesized to be μ = 20 MPa. It is known that the standard deviation of compressive strength is σ = 1.3 MPa. A group of engineers wants to determine the number of concrete specimens that will be needed in the study to detect a decrease in the mean compressive strength of two standard deviations. If the average compressive strength is actually less than 20 MPa, they want to be confident of correctly detecting this significant difference. In other words, the test of interest would be H₀: μ = 20 versus H₁: μ < 20. For this study, the significance level is set at α = 0.05 and the power of the test is 1 − β = 0.99. What is the minimum number of concrete specimens that should be used in this study? For a difference of 2σ, or 2.6 MPa, α = 0.05, and 1 − β = 0.99, the minimum sample size can be found using Minitab®. The resulting output is

Testing mean = null (versus < null)


Calculating power for mean = null + difference
Alpha = 0.05 Sigma = 1.3

              Sample    Target    Actual
Difference      Size     Power     Power
      -2.6         6    0.9900    0.9936

The minimum number of specimens to be used in the study should be n = 6 in order to attain the
desired power and level of significance.

Example 11-31
A manufacturer of rubber belts wishes to inspect and control the number of nonconforming belts pro-
duced on line. The proportion of nonconforming belts that is acceptable is p = 0.01. For practical pur-
poses, if the proportion increases to p = 0.035 or greater, the manufacturer wants to detect this change.
That is, the test of interest would be H₀: p = 0.01 versus H₁: p > 0.01. If the acceptable level of significance is α = 0.05 and the power is 1 − β = 0.95, how many rubber belts should be selected for inspection? For α = 0.05 and 1 − β = 0.95, the appropriate sample size can be determined using Minitab®. The output is

Testing proportion = 0.01 (versus > 0.01)


Alpha = 0.05

Alternative    Sample    Target    Actual
 Proportion      Size     Power     Power
   3.50E-02       348    0.9500    0.9502

Therefore, to adequately detect a significant change in the proportion of nonconforming rubber belts,
random samples of at least n = 348 would be needed.

11-7 SUMMARY
This chapter has introduced hypothesis testing. Procedures for testing hypotheses on means and variances are summarized in Table 11-8. The chi-square goodness-of-fit test was introduced to test the hypothesis that an empirical distribution follows a particular probability law. Graphical methods are also useful in goodness-of-fit testing, particularly when sample sizes are small. Two-way contingency tables for testing the hypothesis that two methods of classification of a sample are independent were also introduced. Several computer examples were also presented.

11-8 EXERCISES
11-1. The breaking strength of a fiber used in manufacturing cloth is required to be at least 160 psi. Past experience has indicated that the standard deviation of breaking strength is 3 psi. A random sample of four specimens is tested and the average breaking strength is found to be 158 psi.
(a) Should the fiber be judged acceptable with α = 0.05?
(b) What is the probability of accepting H₀: μ ≤ 160 if the fiber has a true breaking strength of 165 psi?

11-2. The yield of a chemical process is being studied. The variance of yield is known from previous experience with this process to be 5 (units of σ² = percentage²). The past five days of plant operation have resulted in the following yields (in percentages): 91.6, 88.75, 90.8, 89.95, 91.3.
(a) Is there reason to believe the yield is less than 90%?
(b) What sample size would be required to detect a true mean yield of 85% with probability 0.95?

11-3. The diameters of bolts are known to have a standard deviation of 0.0001 inch. A random sample of 10 bolts yields an average diameter of 0.2546 inch.
(a) Test the hypothesis that the true mean diameter of bolts equals 0.255 inch, using α = 0.05.
(b) What size sample would be necessary to detect a true mean bolt diameter of 0.2552 inch with a probability of at least 0.90?

11-4. Consider the data in Exercise 10-39.
(a) Test the hypothesis that the mean piston ring diameter is 74.035 mm. Use α = 0.01.
(b) What sample size is required to detect a true mean diameter of 74.030 with a probability of at least 0.95?
Table 11-8  Summary of Hypothesis Testing Procedures on Means and Variances

H₀: μ = μ₀, σ² known.  Test statistic: Z₀ = (X̄ − μ₀)/(σ/√n).
    H₁: μ ≠ μ₀    reject if |Z₀| > Z_{α/2}    OC curve parameter d = |μ − μ₀|/σ
    H₁: μ > μ₀    reject if Z₀ > Z_α          d = (μ − μ₀)/σ
    H₁: μ < μ₀    reject if Z₀ < −Z_α         d = (μ₀ − μ)/σ

H₀: μ = μ₀, σ² unknown.  Test statistic: t₀ = (X̄ − μ₀)/(S/√n).
    H₁: μ ≠ μ₀    reject if |t₀| > t_{α/2, n−1}    d = |μ − μ₀|/σ
    H₁: μ > μ₀    reject if t₀ > t_{α, n−1}        d = (μ − μ₀)/σ
    H₁: μ < μ₀    reject if t₀ < −t_{α, n−1}       d = (μ₀ − μ)/σ

H₀: μ₁ = μ₂, σ₁² and σ₂² known.  Test statistic: Z₀ = (X̄₁ − X̄₂)/√(σ₁²/n₁ + σ₂²/n₂).
    H₁: μ₁ ≠ μ₂    reject if |Z₀| > Z_{α/2}    d = |μ₁ − μ₂|/√(σ₁² + σ₂²)
    H₁: μ₁ > μ₂    reject if Z₀ > Z_α          d = (μ₁ − μ₂)/√(σ₁² + σ₂²)
    H₁: μ₁ < μ₂    reject if Z₀ < −Z_α         d = (μ₂ − μ₁)/√(σ₁² + σ₂²)

H₀: μ₁ = μ₂, σ₁² = σ₂² = σ² unknown.  Test statistic: t₀ = (X̄₁ − X̄₂)/(S_p√(1/n₁ + 1/n₂)).
    H₁: μ₁ ≠ μ₂    reject if |t₀| > t_{α/2, n₁+n₂−2}    d = |μ₁ − μ₂|/2σ
    H₁: μ₁ > μ₂    reject if t₀ > t_{α, n₁+n₂−2}        d = (μ₁ − μ₂)/2σ
    H₁: μ₁ < μ₂    reject if t₀ < −t_{α, n₁+n₂−2}       d = (μ₂ − μ₁)/2σ

H₀: μ₁ = μ₂, σ₁² ≠ σ₂² unknown.  Test statistic: t₀ = (X̄₁ − X̄₂)/√(S₁²/n₁ + S₂²/n₂), with

$$\nu = \frac{\left(S_1^2/n_1 + S_2^2/n_2\right)^2}{\dfrac{\left(S_1^2/n_1\right)^2}{n_1+1} + \dfrac{\left(S_2^2/n_2\right)^2}{n_2+1}} - 2.$$

    H₁: μ₁ ≠ μ₂    reject if |t₀| > t_{α/2, ν}
    H₁: μ₁ > μ₂    reject if t₀ > t_{α, ν}
    H₁: μ₁ < μ₂    reject if t₀ < −t_{α, ν}

H₀: σ² = σ₀².  Test statistic: χ₀² = (n − 1)S²/σ₀².
    H₁: σ² ≠ σ₀²    reject if χ₀² > χ²_{α/2, n−1} or χ₀² < χ²_{1−α/2, n−1}    λ = σ/σ₀
    H₁: σ² > σ₀²    reject if χ₀² > χ²_{α, n−1}      λ = σ/σ₀
    H₁: σ² < σ₀²    reject if χ₀² < χ²_{1−α, n−1}    λ = σ/σ₀

H₀: σ₁² = σ₂².  Test statistic: F₀ = S₁²/S₂².
    H₁: σ₁² ≠ σ₂²    reject if F₀ > F_{α/2, n₁−1, n₂−1} or F₀ < F_{1−α/2, n₁−1, n₂−1}    λ = σ₁/σ₂
    H₁: σ₁² > σ₂²    reject if F₀ > F_{α, n₁−1, n₂−1}    λ = σ₁/σ₂

11-5. Consider the data in Exercise 10-40. Test the hypothesis that the mean life of the light bulbs is 1000 hours. Use α = 0.05.

11-6. Consider the data in Exercise 10-41. Test the hypothesis that mean compressive strength equals 3500 psi. Use α = 0.01.

11-7. Two machines are used for filling plastic bottles with a net volume of 16.0 ounces. The filling processes can be assumed normal, with standard deviations σ₁ = 0.015 and σ₂ = 0.018. Quality engineering suspects that both machines fill to the same net volume, whether or not this volume is 16.0 ounces. A random sample is taken from the output of each machine.

Machine 1:  16.03  16.01  16.04  15.96  16.05  15.98  16.05  16.02  16.02  15.99
Machine 2:  16.02  16.03  15.97  16.04  15.96  16.02  16.01  16.01  15.99  16.00

(a) Do you think that quality engineering is correct? at random from the current production. Assume that
Use a= 0.05. shelf life is normally distributed.
(b) Assuming equal sample sizes, what sample size 108 days 128 days
should be used to assure that = 0.05 if the true 134 163
difference in means is 0.075? Assume that 124 159
a=0.05.
116 134
(c) What is the power of the test in (a) for a true dif-
ference in means of 0.075? (a) Is there any evidence that the mean shelf life is
greater than or equal to 125 days?
11-8. The film development department of a local
store is considering the replacement of its current (b) If it is important to detect a ratio of W/o of 1.0
film-processing machine. The time in which it takes with a probability 0.90, is the sample size
the machine to completely process a roll of film is sufficient?
important. A random sample of 12 rolls of 24-expo- 11-14. The titanium content of an alloy is being stud-
sure color film is selected for processing by the cur- ied in the hope of ultimately increasing the tensile
rent machine. The average processing time is 8.1 strength. An analysis of six recent heats chosen at ran-
minutes, with a sample standard deviation of 1.4 min- dom produces the following titanium contents.
utes. A random sample of 10 rolls of the same type of 8.0% 7.7%
film is selected for testing in the new machine. The
O19 11.6
average processing time is 7.3 min=tes, with a sample
O19 14.6
standard deviation of 0.9 minutes. The local store will
not purchase the new machine unless the processing Is there any evidence that the mean titanium content is
time is more than 2 minutes shorter than the current greater than 9.5%?
machine. Based on this information, should they pur-
11-15, An article in the Journal of Construction
chase the new machine? Engineering and Management (1999, p. 39) presents
11-9. Consider the data in Exercise 10-45. Test the some data on the number of work hours lost per day
hypothesis that both machines fill to the same volume. on a construction project due to weather-related inci-
Use a= 0.10. dents. Over 11 workdays, the following lost work
11-10. Consider the data in Exercise 10-46. Test Hp: hours were recorded.
Hy = Uy against H,:L, > Ly, using a= 0.05. 8.8 8.8
11-11. Consider the gasoline road octane number 12,9 12.2
data in Exercise 10-47. If formulation 2 produces a 5.4 13.3
higher road octane number than formulation 1, the 12.8 6.9
manufacturer would like to detect this. Formulate and
Cell 2.2
test an appropriate hypothesis, using a= 0.05. _
14.7
11-12. The lateral deviation in yards of a certain type
of mortar shell is being investigated by the propellant Assuming work hours are normally distributed, is
manufacturer. The following data have been observed. there any evidence to conclude that the mean number
of work hours lost per day is greater than 8 hours?
11-16. The percentage of scrap produced in a metal
Round Deviation Round Deviation
finishing operation is hypothesized to be less than
1 11.28 6 —-9.48 7.5%. Several days were chosen at random and the
2 -10.42 7 6.25 percentages of scrap were calculated.
3 -8.51 8 10.11 5.51% 7.32%
4 1.95 9 -8.65 6.49 8.81
5 6.47 10 0.68 6.46 8.56
5:07, 7.46
Test the hypothesis that the mean lateral deviation of (a) In your opinion, is the true scrap rate less than
these mortar shells is zero. Assume that lateral devia- 7.5%?
tion is normally distributed. (b) If it is important to detect a ratio of 6/o= 1.5 with
11-13, The shelf life of a photographic film is of a probability of at least 0.90, what is the mini-
interest to the manufacturer. The manufacturer mum sample size that can be used?
observes the following shelf life for eight units chosen (c) For 6/a= 2.0, what is the power of the above test?

11-17. Suppose that we must test the hypotheses

$$H_0\colon \mu \ge 15, \qquad H_1\colon \mu < 15,$$

where it is known that σ² = 2.5. If α = 0.05 and the true mean is 12, what sample size is necessary to assure a type II error of 5%?

11-18. An engineer desires to test the hypothesis that the melting point of an alloy is 1000°C. If the true melting point differs from this by more than 20°C he must change the alloy's composition. If we assume that the melting point is a normally distributed random variable, α = 0.05, β = 0.10, and σ = 10°C, how many observations should be taken?

11-19. Two methods for producing gasoline from crude oil are being investigated. The yields of both processes are assumed to be normally distributed. The following yield data have been obtained from the pilot plant.

Process    Yields (%)
1          24.2  26.6  25.7  24.8  25.9  26.5
2          21.0  22.1  21.8  20.9  22.4  22.0

(a) Is there reason to believe that process 1 has a greater mean yield? Use α = 0.01. Assume that both variances are equal.
(b) Assuming that in order to adopt process 1 it must produce a mean yield that is at least 5% greater than that of process 2, what are your recommendations?
(c) Find the power of the test in part (a) if the mean yield of process 1 is 5% greater than that of process 2.
(d) What sample size is required for the test in part (a) to ensure that the null hypothesis will be rejected with a probability of 0.90 if the mean yield of process 1 exceeds the mean yield of process 2 by 5%?

11-20. An article that appeared in the Proceedings of the 1998 Winter Simulation Conference (1998, p. 1079) discusses the concept of validation for traffic simulation models. The stated purpose of the study is to design and modify the facilities (roadways and control devices) to optimize efficiency and safety of traffic flow. Part of the study compares speed observed at various intersections and speed simulated by a model being tested. The goal is to determine whether the simulation model is representative of the actual observed speed. Field data are collected at a particular location and then the simulation model is implemented. Fourteen speeds (ft/sec) are measured at a particular location. Fourteen observations are simulated using the proposed model. The data are:

Field              Model
53.33   57.14      47.40   58.20
53.33   57.14      49.80   59.00
53.33   61.54      51.90   60.10
55.17   61.54      52.20   63.40
55.17   61.54      54.50   65.80
55.17   69.57      55.70   71.30
57.14   69.57      56.70   75.40

Assuming the variances are equal, conduct a test of hypothesis to determine whether there is a significant difference between the field data and the model-simulated data. Use α = 0.05.

11-21. The following are the burning times (in minutes) of flares of two different types.

Type 1        Type 2
63    82      64    56
81    68      —     —
—     —       83    74
—     —       59    82
—     —       65    82

(a) Test the hypothesis that the two variances are equal. Use α = 0.05.
(b) Using the results of (a), test the hypothesis that the mean burning times are equal.

11-22. A new filtering device is installed in a chemical unit. Before its installation, a random sample yielded the following information about the percentage of impurity: x̄₁ = 12.5, s₁² = 101.17, and n₁ = 8. After installation, a random sample yielded x̄₂ = 10.2, s₂² = 94.73, n₂ = 9.
(a) Can you conclude that the two variances are equal?
(b) Has the filtering device reduced the percentage of impurity significantly?

11-23. Suppose that two random samples were drawn from normal populations with equal variances. The sample data yield x̄₁ = 20.0, n₁ = 19, Σ(x₁ᵢ − x̄₁)² = 1480, x̄₂ = 15.8, n₂ = 10, and Σ(x₂ᵢ − x̄₂)² = 1425.
(a) Test the hypothesis that the two means are equal. Use α = 0.01.
(b) Find the probability that the null hypothesis in (a) will be rejected if the true difference in means is 10.
(c) What sample size is required to detect a true difference in means of 5 with probability at least 0.60 if it is known at the start of the experiment that a rough estimate of the common variance is 150?

11-24. Consider the data in Exercise 10-56.
(a) Test the hypothesis that the means of the two normal distributions are equal. Use α = 0.05 and assume that σ₁² = σ₂².
(b) What sample size is required to detect a difference in means of 2.0 with a probability of at least 0.85?
(c) Test the hypothesis that the variances of the two distributions are equal. Use α = 0.05.
(d) Find the power of the test in (c) if the variance of one population is four times the other.

11-25. Consider the data in Exercise 10-57. Assuming that σ₁² = σ₂², test the hypothesis that the mean rod diameters do not differ. Use α = 0.05.

11-26. A chemical company produces a certain drug whose weight has a standard deviation of 4 mg. A new method of producing this drug has been proposed, although some additional cost is involved. Management will authorize a change in production technique only if the standard deviation of the weight in the new process is less than 4 mg. If the standard deviation of weight in the new process is as small as 3 mg, the company would like to switch production methods with a probability of at least 0.90. Assuming weight to be normally distributed and α = 0.05, how many observations should be taken? Suppose the researchers choose n = 10 and obtain the data below. Is this a good choice for n? What should be their decision?

16.628 grams    16.630 grams
16.622          16.631
16.627          16.624
16.623          16.622
16.618          16.626

11-27. A manufacturer of precision measuring instruments claims that the standard deviation in the use of the instrument is 0.00002 inch. An analyst, who is unaware of the claim, uses the instrument eight times and obtains a sample standard deviation of 0.00005 inch.
(a) Using α = 0.01, is the claim justified?
(b) Compute a 99% confidence interval for the true variance.
(c) What is the power of the test if the true standard deviation equals 0.00004?
(d) What is the smallest sample size that can be used to detect a true standard deviation of 0.00004 with a probability at least of 0.95? Use α = 0.01.

11-28. The standard deviation of measurements made by a special thermocouple is supposed to be 0.005 degree. If the standard deviation is as great as 0.010, we wish to detect it with a probability of at least 0.90. Use α = 0.01. What sample size should be used? If this sample size is used and the sample standard deviation s = 0.007, what is your conclusion, using α = 0.01? Construct a 95% upper-confidence interval for the true variance.

11-29. The manufacturer of a power supply is interested in the variability of output voltage. He has tested 12 units, chosen at random, with the following results:

5.34    5.65    4.76
5.00    5.55    5.54
5.07    5.35    5.44
5.25    5.35    4.61

(a) Test the hypothesis that σ² = 0.5. Use α = 0.05.
(b) If the true value of σ² = 1.0, what is the probability that the hypothesis in (a) will be rejected?

11-30. For the data in Exercise 11-7, test the hypothesis that the two variances are equal, using α = 0.01. Does the result of this test influence the manner in which a test on means would be conducted? What sample size is necessary to detect σ₁²/σ₂² = 2.5 with a probability of at least 0.90?

11-31. Consider the following two samples, drawn from two normal populations.

Sample 1    Sample 2
4.34        1.87
5.00        2.00
4.97        2.00
4.25        1.85
—           2.11
6.55        2.31
6.37        2.28
5.55        2.07
3.76        1.76
            1.91
            2.00

Is there evidence to conclude that the variance of population 1 is greater than the variance of population 2? Use α = 0.01. Find the probability of detecting σ₁²/σ₂² = 4.0.

11-32. Two machines produce metal parts. The variance of the weight of these parts is of interest. The following data have been collected.

Machine 1        Machine 2
n₁ = 25          n₂ = 30
x̄₁ = 0.984      x̄₂ = 0.907
s₁² = 13.46      s₂² = 9.65

(a) Test the hypothesis that the variances of the two machines are equal. Use α = 0.05.
(b) Test the hypothesis that the two machines produce parts having the same mean weight. Use α = 0.05.

11-33. In a hardness test, a steel ball is pressed into the material being tested at a standard load. The diameter of the indentation is measured, which is related to the hardness. Two types of steel balls are available, and their performance is compared on 10 specimens. Each specimen is tested twice, once with each ball. The results are given below:

Ball x    75  46  57  43  58  32  61  56  34  65
Ball y    52  41  43  47  32  49  52  44  57  60

Test the hypothesis that the two steel balls give the same expected hardness measurement. Use α = 0.05.

11-34. Two types of exercise equipment, A and B, for handicapped individuals are often used to determine the effect of the particular exercise on heart rate (in beats per minute). Seven subjects participated in a study to determine whether the two types of equipment have the same effect on heart rate. The results are given in the table below.

Subject    A      B
1          162    161
2          163    187
3          140    199
4          191    206
5          160    161
6          158    160
7          155    162

Conduct an appropriate test of hypothesis to determine whether there is a significant difference in heart rate due to the type of equipment used.

11-35. An aircraft designer has theoretical evidence that painting the airplane reduces its speed at a specified power and flap setting. He tests six consecutive airplanes from the assembly line before and after painting. The results are shown below.

              Top Speed (mph)
Airplane    Painted    Not Painted
1           286        289
2           285        286
3           279        283
4           283        288
5           281        283
6           286        289

Do the data support the designer's theory? Use α = 0.05.

11-36. An article in the International Journal of Fatigue (1998, p. 537) discusses the bending fatigue resistance of gear teeth when using a particular prestressing or presetting process. Presetting of a gear tooth is obtained by applying and then removing a single overload to the machine element. To determine significant differences in fatigue resistance due to presetting, fatigue data were paired. A "preset" tooth and a "nonpreset" tooth were paired if they were present on the same gear. Eleven pairs were formed and the fatigue life measured for each. (The final response of interest is ln[(fatigue life) × 10⁻⁶].)

Pair    Preset Tooth    Nonpreset Tooth
1       3.813           2.706
2       4.025           2.364
3       3.042           2.773
4       3.831           2.558
5       3.320           2.430
6       3.080           2.616
7       2.498           2.765
8       2.417           2.486
9       2.462           2.688
10      2.236           2.700
11      3.932           2.810

Conduct a test of hypothesis to determine whether presetting significantly increases the fatigue life of gear teeth. Use α = 0.10.

11-37. Consider the data in Exercise 10-66. Test the hypothesis that the uninsured rate is 10%. Use α = 0.05.

11-38. Consider the data in Exercise 10-68. Test the hypothesis that the fraction of defective calculators produced is 2.5%.

11-39. Suppose that we wish to test the hypothesis H₀: μ₁ = μ₂ against the alternative H₁: μ₁ ≠ μ₂, where both variances σ₁² and σ₂² are known. A total of n₁ + n₂ = N observations can be taken. How should these observations be allocated to the two populations to maximize the probability that H₀ will be rejected if H₁ is true and μ₁ − μ₂ = δ ≠ 0?

11-40. Consider the union membership study described in Exercise 10-70. Test the hypothesis that the proportion of men who belong to a union does not differ from the proportion of women who belong to a union. Use α = 0.05.

11-41. Using the data in Exercise 10-71, determine whether it is reasonable to conclude that production line 2 produced a higher fraction of defective product than line 1. Use α = 0.01.

11-42. Two different types of injection-molding machines are used to form plastic parts. A part is considered defective if it has excessive shrinkage or is discolored. Two random samples, each of size 500, are selected, and 32 defective parts are found in the sample from machine 1, while 21 defective parts are found in the sample from machine 2. Is it reasonable to conclude that both machines produce the same fraction of defective parts?

11-43. Suppose that we wish to test H₀: μ₁ = μ₂ against H₁: μ₁ ≠ μ₂, where σ₁² and σ₂² are known. The total sample size N is fixed, but the allocation of observations to the two populations such that n₁ + n₂ = N is to be made on the basis of cost. If the costs of sampling for populations 1 and 2 are C₁ and C₂, respectively, find the minimum cost sample sizes that provide a specified variance for the difference in sample means.

11-44. A manufacturer of a new pain relief tablet would like to demonstrate that her product works twice as fast as her competitor's product. Specifically, she would like to test

    H₀: μ₁ = 2μ₂,
    H₁: μ₁ > 2μ₂,

where μ₁ is the mean absorption time of the competitive product and μ₂ is the mean absorption time of the new product. Assuming that the variances σ₁² and σ₂² are known, suggest a procedure for testing this hypothesis.

11-45. Derive an expression similar to equation 11-20 for the β error for the test on the variance of a normal distribution. Assume that the two-sided alternative is specified.

11-46. Derive an expression similar to equation 11-20 for the β error for the test of the equality of the variances of two normal distributions. Assume that the two-sided alternative is specified.

11-47. The number of defective units found each day by an in-circuit functional tester in a printed circuit board assembly process is shown below.

Number of
Defectives per Day      Times Observed
      0-10                    6
     11-15                   11
     16-20                   16
     21-25                   28
     26-30                   22
     31-35                   19
     36-40                   11
     41-45                    4

(a) Is it reasonable to conclude that these data come from a normal distribution? Use a chi-square goodness-of-fit test.
(b) Plot the data on normal probability paper. Does an assumption of normality seem justified?

11-48. Defects on wafer surfaces in integrated circuit fabrication are unavoidable. In a particular process the following data were collected.

Number of        Number of Wafers
Defects i        with i Defects
    0                  4
    1                 13
    2                 34
    3                 56
    4                 70
    5                 70
    6                 58
    7                 42
    8                 25
    9                 15
   10                  9
   11                  3
   12                  1

Does the assumption of a Poisson distribution seem appropriate as a probability model for this process?

11-49. A pseudorandom number generator is designed so that integers 0 through 9 have an equal probability of occurrence. The first 10,000 numbers are as follows:

Digit      0     1     2     3     4     5     6     7     8     9
Freq     967  1008   975  1022  1003   989  1001   981  1043  1011

Does this generator seem to be working properly?

11-50. The cycle time of an automatic machine has been observed and recorded.

Sec    2.10  2.11  2.12  2.13  2.14  2.15  2.16  2.17  2.18  2.19  2.20
Freq     16    28    41    74   149   256   137    82    40    19    11

(a) Does the normal distribution seem to be a reasonable probability model for the cycle time? Use the chi-square goodness-of-fit test.
(b) Plot the data on normal probability paper. Does the assumption of normality seem reasonable?
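For readers who want to check exercises such as 11-48 through 11-50 by computer, the following minimal Python sketch (not part of the original text; it assumes SciPy is available) applies a chi-square goodness-of-fit test to the digit frequencies of Exercise 11-49.

    # Chi-square goodness-of-fit test for the digit counts of Exercise 11-49.
    # Under H0 every digit is equally likely, so each expected count is 1000.
    from scipy import stats

    observed = [967, 1008, 975, 1022, 1003, 989, 1001, 981, 1043, 1011]
    expected = [sum(observed) / len(observed)] * len(observed)   # 1000.0 each

    chi2_stat, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
    print(f"chi-square = {chi2_stat:.2f}, p-value = {p_value:.3f}")
    # Compare chi2_stat with the upper-alpha chi-square point on 9 degrees of freedom.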

11-51. A soft drink bottler is studying the internal pressure strength of 1-liter glass nonreturnable bottles. A random sample of 16 bottles is tested and the pressure strengths obtained. The data are shown below. Plot these data on normal probability paper. Does it seem reasonable to conclude that pressure strength is normally distributed?

    226.16 psi    211.14 psi
    202.20        203.62
    219.54        188.12
    193.73        224.39
    208.15        221.31
    195.45        204.55
    193.71        202.21
    200.81        201.63

11-52. A company operates four machines for three shifts each day. From production records, the following data on the number of breakdowns are collected.

               Machines
Shift      A     B     C     D
  1       41    20    12    16
  2       31    11     9    14
  3       15    17    16    10

Test the hypothesis that breakdowns are independent of the shift.

11-53. Patients in a hospital are classified as surgical or medical. A record is kept of the number of times patients require nursing service during the night and whether these patients are on Medicare or not. The data are as follows:

             Patient Category
Medicare    Surgical    Medical
  Yes          46          52
  No           36          43

Test the hypothesis that calls by surgical-medical patients are independent of whether the patients are receiving Medicare.

11-54. Grades in a statistics course and an operations research course taken simultaneously were as follows for a group of students.

Statistics        Operations Research Grade
  Grade          A      B      C      Other
    A           25      6     17       13
    B           17     16     15        6
    C           18      4     18       10
  Other         10      8     11       20

Are the grades in statistics and operations research related?

11-55. An experiment with artillery shells yields the following data on the characteristics of lateral deflections and ranges. Would you conclude that deflection and range are independent?

                      Lateral Deflection
Range (yards)      Left    Normal    Right
    0-1,999          6       14        8
2,000-5,999          9       11        4
6,000-11,999         8       17        6

11-56. A study is being made of the failures of an electronic component. There are four types of failures possible and two mounting positions for the device. The following data have been taken.

                         Failure Type
Mounting Position     A     B     C     D
        1            22    46    18     9
        2             4    17     6    12

Would you conclude that the type of failure is independent of the mounting position?

11-57. An article in Research in Nursing and Health (1999, p. 263) summarizes data collected from a previous study (Research in Nursing and Health, 1998, p. 285) on the relationship between physical activity and socio-economic status of 1507 Caucasian women. The data are given in the table below.

                           Physical Activity
Socio-economic Status    Inactive    Active
        Low                 216        245
        Medium              226        409
        High                114        297

Test the hypothesis that physical activity is independent of socio-economic status.

11-58. Fabric is graded into three classifications: A, B, and C. The results below were obtained from five looms. Is fabric classification independent of the loom?

         Number of Pieces of Fabric
          in Fabric Classification
Loom       A       B       C
  1       185      16      12
  2       190      24      21
  3       170      35      16
  4       158      22       7
  5       185      22      15
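The contingency-table exercises above can all be checked the same way in software. A minimal Python sketch (not part of the original text; it assumes SciPy is available), using the Exercise 11-53 data as an illustration:

    # Chi-square test of independence for the Exercise 11-53 data.
    import numpy as np
    from scipy import stats

    table = np.array([[46, 52],    # Medicare: yes  (surgical, medical)
                      [36, 43]])   # Medicare: no

    # correction=False gives the uncorrected statistic used in hand computation.
    chi2_stat, p_value, dof, expected = stats.chi2_contingency(table, correction=False)
    print(f"chi-square = {chi2_stat:.3f}, df = {dof}, p-value = {p_value:.3f}")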

11-59. An article in the Journal of Marketing Research (1970, p. 36) reports a study of the relationship between facility conditions at gasoline stations and the aggressiveness of their gasoline marketing policy. A sample of 441 gasoline stations was investigated with the results shown below obtained. Is there evidence that gasoline pricing strategy and facility conditions are independent?

                          Condition
Policy           Substandard    Standard    Modern
Aggressive            24            52         58
Neutral               15            73         86
Nonaggressive         17            80         36

11-60. Consider the injection molding process described in Exercise 11-42.
(a) Set up this problem as a 2 × 2 contingency table and perform the indicated statistical analysis.
(b) State clearly the hypothesis being tested. Are you testing homogeneity or independence?
(c) Is this procedure equivalent to the test procedure used in Exercise 11-42?
Chapter 12

Design and Analysis of Single-Factor Experiments: The Analysis of Variance

Experiments are a natural part of the engineering and management decision-making


process. For example, suppose that a civil engineer is investigating the effect of curing
methods on the mean compressive strength of concrete. The experiment would consist of
making up several test specimens of concrete using each of the proposed curing methods
and then testing the compressive strength of each specimen. The data from this experiment
could be used to determine which curing method should be used to provide maximum
compressive strength.
If there are only two curing methods of interest, the experiment could be designed and
analyzed using the methods discussed in Chapter 11. That is, the experimenter has a single
factor of interest—curing methods—and there are only two levels of the factor. If the exper-
imenter is interested in determining which curing method produces the maximum com-
pressive strength, then the number of specimens to test can be determined using the
operating characteristic curves in Chart VI (Appendix), and the t-test can be used to deter-
mine whether the two means differ.
Many single-factor experiments require more than two levels of the factor to be con-
sidered. For example, the civil engineer may have five different curing methods to investi-
gate. In this chapter we introduce the analysis of variance for dealing with more than two
levels of a single factor. In Chapter 13, we show how to design and. analyze experiments
with several factors.

12-1 THE COMPLETELY RANDOMIZED SINGLE-FACTOR EXPERIMENT
12-1.1 An Example
A manufacturer of paper used for making grocery bags is interested in improving the ten-
sile strength of the product. Product engineering thinks that tensile strength is a function of
the hardwood concentration in the pulp, and that the range of hardwood concentrations of
practical interest is between 5% and 20%. One of the engineers responsible for the study
decides to investigate four levels of hardwood concentration: 5%, 10%, 15%, and 20%. She
also decides to make up six test specimens at each concentration level, using a pilot plant.
All 24 specimens are tested on a laboratory tensile tester in random order. The data from
this experiment are shown in Table 12-1.
This is an example of a completely randomized single-factor experiment with four lev-
els of the factor. The levels of the factor are sometimes called treatments. Each treatment

Table 12-1 Tensile Strength of Paper (psi)

Hardwood                        Observations
Concentration (%)     1    2    3    4    5    6    Totals    Averages
        5             7    8   15   11    9   10      60       10.00
       10            12   17   13   18   19   15      94       15.67
       15            14   18   19   17   16   18     102       17.00
       20            19   25   22   23   18   20     127       21.17
                                                      383       15.96

has six observations, or replicates. The role of randomization in this experiment is


extremely important. By randomizing the order of the 24 runs, the effect of any nuisance
variable that may affect the observed tensile strength is approximately balanced out. For
example, suppose that there is a warm-up effect on the tensile tester; that is, the longer the
machine is on, the greater the observed tensile strength. If the 24 runs are made in order of
increasing hardwood concentration (i.e., all six 5% concentration specimens are tested first,
followed by all six 10% concentration specimens, etc.), then any observed differences due
to hardwood concentration could also be due to the warm-up effect.
It is important to graphically analyze the data from a designed experiment. Figure
12-1 presents box plots of tensile strength at the four hardwood concentration levels. This
plot indicates that changing the hardwood concentration has an effect on tensile strength;
specifically, higher hardwood concentrations produce higher observed tensile strength. Fur-
thermore, the distribution of tensile strength at a particular hardwood level is reasonably
symmetric, and the variability in tensile strength does not change dramatically as the hard-
wood concentration changes.

Figure 12-1 Box plots of hardwood concentration data (tensile strength, psi, versus hardwood concentration, %).

Graphical interpretation of the data is always a good idea. Box plots show the vari-
ability of the observations within a treatment (factor level) and the variability between treat-
ments. We now show how the data from a single-factor randomized experiment can be
analyzed statistically.

12-1.2 The Analysis of Variance


Suppose we have a different levels of a single factor (treatments) that we wish to compare. The observed response for each of the a treatments is a random variable. The data would appear as in Table 12-2. An entry in Table 12-2, say y_ij, represents the jth observation taken under treatment i. We initially consider the case where there is an equal number of observations, n, on each treatment.
We may describe the observations in Table 12-2 by the linear statistical model

    y_ij = μ + τ_i + ε_ij,    i = 1, 2, ..., a,   j = 1, 2, ..., n,        (12-1)

where y_ij is the (ij)th observation, μ is a parameter common to all treatments, called the overall mean, τ_i is a parameter associated with the ith treatment, called the ith treatment effect, and ε_ij is a random error component. Note that y_ij represents both the random variable and its realization. We would like to test certain hypotheses about the treatment effects and to estimate them. For hypothesis testing, the model errors are assumed to be normally and independently distributed random variables with mean zero and variance σ² [abbreviated NID(0, σ²)]. The variance σ² is assumed constant for all levels of the factor.
The model of equation 12-1 is called the one-way-classification analysis of variance, because only one factor is investigated. Furthermore, we will require that the observations be taken in random order so that the environment in which the treatments are used (often called the experimental units) is as uniform as possible. This is called a completely randomized experimental design. There are two different ways that the a factor levels in the experiment could have been chosen. First, the a treatments could have been specifically chosen by the experimenter. In this situation we wish to test hypotheses about the τ_i, and conclusions will apply only to the factor levels considered in the analysis. The conclusions cannot be extended to similar treatments that were not considered. Also, we may wish to estimate the τ_i. This is called the fixed effects model. Alternatively, the a treatments could be a random sample from a larger population of treatments. In this situation we would like to be able to extend the conclusions (which are based on the sample of treatments) to all treatments in the population, whether they were explicitly considered in the analysis or not. Here the τ_i are random variables, and knowledge about the particular ones investigated is relatively useless. Instead, we test hypotheses about the variability of the τ_i and try to estimate this variability. This is called the random effects, or components of variance, model.

Table 12-2 Typical Data for One-Way-Classification Analysis of Variance

Treatment            Observation                  Totals    Averages
    1        y_11   y_12   ···   y_1n              y_1.       ȳ_1.
    2        y_21   y_22   ···   y_2n              y_2.       ȳ_2.
    ⋮          ⋮      ⋮             ⋮                ⋮          ⋮
    a        y_a1   y_a2   ···   y_an              y_a.       ȳ_a.

In this section we will develop the analysis of variance for the fixed-effects model, one-way classification. In the fixed-effects model, the treatment effects τ_i are usually defined as deviations from the overall mean, so that

    Σ_{i=1}^{a} τ_i = 0.        (12-2)

Let y_i. represent the total of the observations under the ith treatment and ȳ_i. represent the average of the observations under the ith treatment. Similarly, let y.. represent the grand total of all observations and ȳ.. represent the grand mean of all observations. Expressed mathematically,

    y_i. = Σ_{j=1}^{n} y_ij,    ȳ_i. = y_i./n,    i = 1, 2, ..., a,
                                                                        (12-3)
    y.. = Σ_{i=1}^{a} Σ_{j=1}^{n} y_ij,    ȳ.. = y../N,

where N = an is the total number of observations. Thus the "dot" subscript notation implies summation over the subscript that it replaces.
We are interested in testing the equality of the a treatment effects. Using equation 12-2, the appropriate hypotheses are

    H₀: τ₁ = τ₂ = ··· = τ_a = 0,
                                                                        (12-4)
    H₁: τ_i ≠ 0 for at least one i.

That is, if the null hypothesis is true, then each observation is made up of the overall mean plus a realization of the random error ε_ij.
The test procedure for the hypotheses in equation 12-4 is called the analysis of variance. The name "analysis of variance" results from partitioning total variability in the data into its component parts. The total corrected sum of squares, which is a measure of total variability in the data, may be written as

    Σ_{i=1}^{a} Σ_{j=1}^{n} (y_ij − ȳ..)² = Σ_{i=1}^{a} Σ_{j=1}^{n} [(ȳ_i. − ȳ..) + (y_ij − ȳ_i.)]²        (12-5)

or

    Σ_{i=1}^{a} Σ_{j=1}^{n} (y_ij − ȳ..)² = n Σ_{i=1}^{a} (ȳ_i. − ȳ..)² + Σ_{i=1}^{a} Σ_{j=1}^{n} (y_ij − ȳ_i.)²
                                                                        (12-6)
        + 2 Σ_{i=1}^{a} Σ_{j=1}^{n} (ȳ_i. − ȳ..)(y_ij − ȳ_i.).

Note that the cross-product term in equation 12-6 is zero, since

    Σ_{j=1}^{n} (y_ij − ȳ_i.) = y_i. − nȳ_i. = y_i. − n(y_i./n) = 0.

Therefore, we have

    Σ_{i=1}^{a} Σ_{j=1}^{n} (y_ij − ȳ..)² = n Σ_{i=1}^{a} (ȳ_i. − ȳ..)² + Σ_{i=1}^{a} Σ_{j=1}^{n} (y_ij − ȳ_i.)².        (12-7)

Equation 12-7 shows that the total variability in the data, measured by the total corrected sum of squares, can be partitioned into a sum of squares of differences between treatment means and the grand mean and a sum of squares of differences of observations within treatments and the treatment mean. Differences between observed treatment means and the grand mean measure the differences between treatments, while differences of observations within a treatment from the treatment mean can be due only to random error. Therefore, we write equation 12-7 symbolically as

    SS_T = SS_treatments + SS_E,

where SS_T is the total sum of squares, SS_treatments is the sum of squares due to treatments (i.e., between treatments), and SS_E is the sum of squares due to error (i.e., within treatments). There are an = N total observations; thus SS_T has N − 1 degrees of freedom. There are a levels of the factor, so SS_treatments has a − 1 degrees of freedom. Finally, within any treatment there are n replicates providing n − 1 degrees of freedom with which to estimate the experimental error. Since there are a treatments, we have a(n − 1) = an − a = N − a degrees of freedom for error.
Now consider the distributional properties of these sums of squares. Since we have assumed that the errors ε_ij are NID(0, σ²), the observations y_ij are NID(μ + τ_i, σ²). Thus SS_T/σ² is distributed as chi-square with N − 1 degrees of freedom, since SS_T is a sum of squares in normal random variables. We may also show that SS_treatments/σ² is chi-square with a − 1 degrees of freedom, if H₀ is true, and SS_E/σ² is chi-square with N − a degrees of freedom. However, all three sums of squares are not independent, since SS_treatments and SS_E add up to SS_T. The following theorem, which is a special form of one due to Cochran, is useful in developing the test procedure.

Theorem 12-1 (Cochran)

Let Z_i be NID(0, 1) for i = 1, 2, ..., ν, and let

    Σ_{i=1}^{ν} Z_i² = Q₁ + Q₂ + ··· + Q_s,

where s ≤ ν and Q_i is chi-square with ν_i degrees of freedom (i = 1, 2, ..., s). Then Q₁, Q₂, ..., Q_s are independent chi-square random variables with ν₁, ν₂, ..., ν_s degrees of freedom, respectively, if and only if

    ν = ν₁ + ν₂ + ··· + ν_s.
Using this theorem, we note that the degrees of freedom for SS_treatments and SS_E add up to N − 1, so that SS_treatments/σ² and SS_E/σ² are independently distributed chi-square random variables. Therefore, under the null hypothesis, the statistic

    F₀ = [SS_treatments/(a − 1)] / [SS_E/(N − a)] = MS_treatments/MS_E        (12-8)

follows the F distribution with a − 1 and N − a degrees of freedom. The quantities MS_treatments and MS_E are mean squares.
The expected values of the mean squares are used to show that F₀ in equation 12-8 is an appropriate test statistic for H₀: τ_i = 0 and to determine the criterion for rejecting this null hypothesis. Consider

    E(MS_E) = E[SS_E/(N − a)]
            = (1/(N − a)) E[Σ_{i=1}^{a} Σ_{j=1}^{n} (y_ij − ȳ_i.)²]
            = (1/(N − a)) E[Σ_{i=1}^{a} Σ_{j=1}^{n} (y_ij² − 2y_ij ȳ_i. + ȳ_i.²)]
            = (1/(N − a)) E[Σ_{i=1}^{a} Σ_{j=1}^{n} y_ij² − n Σ_{i=1}^{a} ȳ_i.²]
            = (1/(N − a)) E[Σ_{i=1}^{a} Σ_{j=1}^{n} y_ij² − (1/n) Σ_{i=1}^{a} y_i.²].

Substituting the model, equation 12-1, into this equation we obtain

    E(MS_E) = (1/(N − a)) E[Σ_{i=1}^{a} Σ_{j=1}^{n} (μ + τ_i + ε_ij)² − (1/n) Σ_{i=1}^{a} (Σ_{j=1}^{n} (μ + τ_i + ε_ij))²].

Now on squaring and taking the expectation of the quantities within brackets, we see that terms involving ε_ij² and (Σ_{j=1}^{n} ε_ij)² are replaced by σ² and nσ², respectively, because E(ε_ij) = 0. Furthermore, all cross products involving ε_ij have zero expectation. Therefore, after squaring, taking expectation, and noting that Σ_{i=1}^{a} τ_i = 0, we have

    E(MS_E) = (1/(N − a)) [Nμ² + n Σ_{i=1}^{a} τ_i² + Nσ² − Nμ² − n Σ_{i=1}^{a} τ_i² − aσ²]

or

    E(MS_E) = σ².
Using a similar approach, we may show that

    E(MS_treatments) = σ² + n Σ_{i=1}^{a} τ_i² / (a − 1).

From the expected mean squares we see that MS_E is an unbiased estimator of σ². Also, under the null hypothesis, MS_treatments is an unbiased estimator of σ². However, if the null hypothesis is false, then the expected value of MS_treatments is greater than σ². Therefore, under the alternative hypothesis the expected value of the numerator of the test statistic (equation 12-8) is greater than the expected value of the denominator. Consequently, we should reject H₀ if the test statistic is large. This implies an upper-tail, one-tail critical region. Thus, we would reject H₀ if

    F₀ > F_{α,a−1,N−a},

where F₀ is computed from equation 12-8.


Efficient computational formulas for the sums of squares may be obtained by expanding and simplifying the definitions of SS_treatments and SS_T in equation 12-7. This yields

    SS_T = Σ_{i=1}^{a} Σ_{j=1}^{n} y_ij² − y..²/N        (12-9)

and

    SS_treatments = Σ_{i=1}^{a} y_i.²/n − y..²/N.        (12-10)



The error sum of squares is obtained by subtraction:

    SS_E = SS_T − SS_treatments.        (12-11)

The test procedure is summarized in Table 12-3. This is called an analysis-of-variance table.

Example 12-1
Consider the hardwood concentration experiment described in Section 12-1.1. We can use the analysis of variance to test the hypothesis that different hardwood concentrations do not affect the mean tensile strength of the paper. The sums of squares for the analysis of variance are computed from equations 12-9, 12-10, and 12-11 as follows:

    SS_T = Σ_{i=1}^{4} Σ_{j=1}^{6} y_ij² − y..²/24
         = (7)² + (8)² + ··· + (20)² − (383)²/24 = 512.96,

    SS_treatments = Σ_{i=1}^{4} y_i.²/6 − y..²/24
                  = [(60)² + (94)² + (102)² + (127)²]/6 − (383)²/24 = 382.79,

    SS_E = SS_T − SS_treatments = 512.96 − 382.79 = 130.17.

The analysis of variance is summarized in Table 12-4. Since F_{0.01,3,20} = 4.94, we reject H₀ and conclude that hardwood concentration in the pulp significantly affects the strength of the paper.
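For readers working the example in software, here is a minimal Python sketch (not part of the original text; it assumes SciPy is available) that reproduces the F statistic of Table 12-4 from the Table 12-1 data.

    # One-way ANOVA on the Table 12-1 tensile strength data.
    from scipy import stats

    conc_5  = [7, 8, 15, 11, 9, 10]
    conc_10 = [12, 17, 13, 18, 19, 15]
    conc_15 = [14, 18, 19, 17, 16, 18]
    conc_20 = [19, 25, 22, 23, 18, 20]

    f_stat, p_value = stats.f_oneway(conc_5, conc_10, conc_15, conc_20)
    print(f"F0 = {f_stat:.2f}, p-value = {p_value:.6f}")   # F0 = 19.61, as in Table 12-4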

12-1.3 Estimation of the Model Parameters

It is possible to derive estimators for the parameters in the one-way analysis-of-variance model

    y_ij = μ + τ_i + ε_ij.

Table 12-3 Analysis of Variance for the One-Way-Classification Fixed-Effects Model

Source of Variation            Sum of Squares     Degrees of Freedom    Mean Square       F₀
Between treatments             SS_treatments          a − 1             MS_treatments     MS_treatments/MS_E
Error (within treatments)      SS_E                   N − a             MS_E
Total                          SS_T                   N − 1

Table 12-4 Analysis of Variance for the Tensile Strength Data

Source of                  Sum of      Degrees of      Mean
Variation                  Squares     Freedom         Square       F₀
Hardwood concentration     382.79          3           127.60       19.61
Error                      130.17         20             6.51
Total                      512.96         23

An appropriate estimation criterion is to estimate μ and τ_i such that the sum of the squares of the errors or deviations ε_ij is a minimum. This method of parameter estimation is called the method of least squares. In estimating μ and τ_i by least squares, the normality assumption on the errors ε_ij is not needed. To find the least-squares estimators of μ and τ_i, we form the sum of squares of the errors

    L = Σ_{i=1}^{a} Σ_{j=1}^{n} ε_ij² = Σ_{i=1}^{a} Σ_{j=1}^{n} (y_ij − μ − τ_i)²        (12-12)

and find values of μ and τ_i, say μ̂ and τ̂_i, that minimize L. The values μ̂ and τ̂_i are the solutions to the a + 1 simultaneous equations

    ∂L/∂μ evaluated at (μ̂, τ̂_i) = 0

and

    ∂L/∂τ_i evaluated at (μ̂, τ̂_i) = 0,    i = 1, 2, ..., a.

After simplification these equations become

    Nμ̂ + nτ̂₁ + nτ̂₂ + ··· + nτ̂_a = y..,
    nμ̂ + nτ̂₁ = y₁.,
    nμ̂ + nτ̂₂ = y₂.,        (12-13)
       ⋮
    nμ̂ + nτ̂_a = y_a..


Equations 12-13 are called the least-squares normal equations. Notice that if we add the last a normal equations we obtain the first normal equation. Therefore, the normal equations are not linearly independent, and there are no unique estimates for μ, τ₁, τ₂, ..., τ_a. One way to overcome this difficulty is to impose a constraint on the solution to the normal equations. There are many ways to choose this constraint. Since we have defined the treatment effects as deviations from the overall mean, it seems reasonable to apply the constraint

    Σ_{i=1}^{a} τ̂_i = 0.        (12-14)

Using this constraint, we obtain as the solution to the normal equations

    μ̂ = ȳ..,
    τ̂_i = ȳ_i. − ȳ..,    i = 1, 2, ..., a.        (12-15)

This solution has considerable intuitive appeal, since the overall mean is estimated by the grand average of the observations and the estimate of any treatment effect is just the difference between the treatment average and the grand average.
This solution is obviously not unique, because it depends on the constraint (equation 12-14) that we have chosen. At first this may seem unfortunate, because two different experimenters could analyze the same data and obtain different results if they apply different constraints. However, certain functions of the model parameters are estimated uniquely, regardless of the constraint. Some examples are τ_i − τ_j, which would be estimated by τ̂_i − τ̂_j = ȳ_i. − ȳ_j., and μ + τ_i, which would be estimated by μ̂ + τ̂_i = ȳ_i.. Since we are usually interested in differences in the treatment effects rather than their actual values, it causes no concern that the τ_i cannot be estimated uniquely. In general, any function of the model parameters that is a linear combination of the left-hand side of the normal equations can be estimated uniquely. Functions that are uniquely estimated, regardless of which constraint is used, are called estimable functions.
Frequently, we would like to construct a confidence interval for the ith treatment mean. The mean of the ith treatment is

    μ_i = μ + τ_i,    i = 1, 2, ..., a.

A point estimator of μ_i would be μ̂_i = μ̂ + τ̂_i = ȳ_i.. Now, if we assume that the errors are normally distributed, each ȳ_i. is NID(μ_i, σ²/n). Thus, if σ² were known, we could use the normal distribution to construct a confidence interval for μ_i. Using MS_E as an estimator of σ², we can base the confidence interval on the t distribution. Therefore, a 100(1 − α)% confidence interval on the ith treatment mean μ_i is

    [ȳ_i. ± t_{α/2,N−a} √(MS_E/n)].        (12-16)

A 100(1 − α)% confidence interval on the difference between any two treatment means, say μ_i − μ_j, is

    [ȳ_i. − ȳ_j. ± t_{α/2,N−a} √(2MS_E/n)].        (12-17)

Example 12-2
We can use the results given previously to estimate the mean tensile strengths at different levels of hardwood concentration for the experiment in Section 12-1.1. The mean tensile strength estimates are

    ȳ₁. = μ̂_5% = 10.00 psi,
    ȳ₂. = μ̂_10% = 15.67 psi,
    ȳ₃. = μ̂_15% = 17.00 psi,
    ȳ₄. = μ̂_20% = 21.17 psi.

A 95% confidence interval on the mean tensile strength at 20% hardwood is found from equation 12-16 as follows:

    [ȳ₄. ± t_{0.025,20} √(MS_E/n)],
    [21.17 ± (2.086)√(6.51/6)],
    [21.17 ± 2.17].

The desired confidence interval is

    19.00 psi ≤ μ_20% ≤ 23.34 psi.

Visual examination of the data suggests that mean tensile strength at 10% and 15% hardwood is similar. A confidence interval on the difference in means μ_15% − μ_10% is

    [ȳ₃. − ȳ₂. ± t_{0.025,20} √(2MS_E/n)],
    [17.00 − 15.67 ± (2.086)√(2(6.51)/6)],
    [1.33 ± 3.07].

Thus, the confidence interval on μ_15% − μ_10% is

    −1.74 ≤ μ_15% − μ_10% ≤ 4.40.

Since the confidence interval includes zero, we would conclude that there is no difference in mean tensile strength at these two particular hardwood levels.
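The interval arithmetic of Example 12-2 is easy to check numerically. A minimal Python sketch (not part of the original text; it assumes SciPy is available) that reproduces both intervals:

    # Confidence intervals of Example 12-2 (equations 12-16 and 12-17).
    from math import sqrt
    from scipy import stats

    mse, n, df_error = 6.51, 6, 20
    t_crit = stats.t.ppf(0.975, df_error)                 # 2.086

    half = t_crit * sqrt(mse / n)                         # equation 12-16
    print(f"mean at 20% hardwood: 21.17 +/- {half:.2f}")  # 19.00 to 23.34 psi

    half_diff = t_crit * sqrt(2 * mse / n)                # equation 12-17
    print(f"difference 15% - 10%: {17.00 - 15.67:.2f} +/- {half_diff:.2f}")  # -1.74 to 4.40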

12-1.4 Residual Analysis and Model Checking

The one-way model analysis of variance assumes that the observations are normally and independently distributed, with the same variance in each treatment or factor level. These assumptions should be checked by examining the residuals. We define a residual as e_ij = y_ij − ȳ_i., that is, the difference between an observation and the corresponding treatment mean. The residuals for the hardwood percentage experiment are shown in Table 12-5.
The normality assumption can be checked by plotting the residuals on normal probability paper. To check the assumption of equal variances at each factor level, plot the residuals against the factor levels and compare the spread in the residuals. It is also useful to plot the residuals against ȳ_i. (sometimes called the fitted value); the variability in the residuals should not depend in any way on the value of ȳ_i.. When a pattern appears in these plots, it usually suggests the need for transformation, that is, analyzing the data in a different metric. For example, if the variability in the residuals increases with ȳ_i., then a transformation such as log y or √y should be considered. In some problems the dependency of residual scatter on ȳ_i. is very important information. It may be desirable to select the factor level that results in maximum y; however, this level may also cause more variation in y from run to run.
The independence assumption can be checked by plotting the residuals against the time or run order in which the experiment was performed. A pattern in this plot, such as sequences of positive and negative residuals, may indicate that the observations are not independent. This suggests that time or run order is important, or that variables that change over time are important and have not been included in the experimental design.
A normal probability plot of the residuals from the hardwood concentration experiment
is shown in Fig. 12-2. Figures 12-3 and 12-4 present the residuals plotted against the treat-

Table 12-5 Residuals for the Tensile Strength Experiment

Hardwood
Concentration                        Residuals
    5%        −3.00    −2.00     5.00     1.00    −1.00     0.00
   10%        −3.67     1.33    −2.67     2.33     3.33    −0.67
   15%        −3.00     1.00     2.00     0.00    −1.00     1.00
   20%        −2.17     3.83     0.83     1.83    −3.17    −1.17

ment number and the fitted value ȳ_i.. These plots do not reveal any model inadequacy or
unusual problem with the assumptions.
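These residual computations and plots are straightforward to automate. The following minimal Python sketch (not part of the original text; it assumes NumPy, SciPy, and matplotlib are available) reproduces the Table 12-5 residuals and the normal probability plot of Fig. 12-2.

    # Residuals e_ij = y_ij - ybar_i. and a normal probability plot (Fig. 12-2).
    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    data = np.array([[7, 8, 15, 11, 9, 10],
                     [12, 17, 13, 18, 19, 15],
                     [14, 18, 19, 17, 16, 18],
                     [19, 25, 22, 23, 18, 20]], dtype=float)

    residuals = data - data.mean(axis=1, keepdims=True)   # Table 12-5 values
    print(np.round(residuals, 2))

    stats.probplot(residuals.ravel(), plot=plt)           # normal probability plot
    plt.show()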

12-1.5 An Unbalanced Design


In some single-factor experiments the number of observations taken under each treatment
may be different. We then say that the design is unbalanced. The analysis of variance
described earlier is still valid, but slight modifications must be made in the sums of squares
formulas. Let n_i observations be taken under treatment i (i = 1, 2, ..., a), and let the total

Figure 12-2 Normal probability plot of residuals from the hardwood concentration experiment.

Figure 12-3 Plot of residuals vs. treatment.



Figure 12-4 Plot of residuals vs. ȳ_i..

number of observations N = Σ_{i=1}^{a} n_i. The computational formulas for SS_T and SS_treatments become

    SS_T = Σ_{i=1}^{a} Σ_{j=1}^{n_i} y_ij² − y..²/N

and

    SS_treatments = Σ_{i=1}^{a} y_i.²/n_i − y..²/N.

In solving the normal equations, the constraint Σ_{i=1}^{a} n_i τ̂_i = 0 is used. No other changes are required in the analysis of variance.
There are two important advantages in choosing a balanced design. First, the test statistic is relatively insensitive to small departures from the assumption of equality of variances if the sample sizes are equal. This is not the case for unequal sample sizes. Second, the power of the test is maximized if the samples are of equal size.

12-2 TESTS ON INDIVIDUAL TREATMENT MEANS

12-2.1 Orthogonal Contrasts

Rejecting the null hypothesis in the fixed-effects-model analysis of variance implies that there are differences between the a treatment means, but the exact nature of the differences is not specified. In this situation, further comparisons between groups of treatment means may be useful. The ith treatment mean is defined as μ_i = μ + τ_i, and μ_i is estimated by ȳ_i.. Comparisons between treatment means are usually made in terms of the treatment totals {y_i.}.
Consider the hardwood concentration experiment presented in Section 12-1.1. Since the hypothesis H₀: τ_i = 0 was rejected, we know that some hardwood concentrations produce tensile strengths different from others, but which ones actually cause this difference?

We might suspect at the outset of the experiment that hardwood concentrations 3 and 4 produce the same tensile strength, implying that we would like to test the hypothesis

    H₀: μ₃ = μ₄,
    H₁: μ₃ ≠ μ₄.

This hypothesis could be tested by using a linear combination of treatment totals, say

    y₃. − y₄. = 0.

If we had suspected that the average of hardwood concentrations 1 and 3 did not differ from the average of hardwood concentrations 2 and 4, then the hypothesis would have been

    H₀: μ₁ + μ₃ = μ₂ + μ₄,
    H₁: μ₁ + μ₃ ≠ μ₂ + μ₄,

which implies that the linear combination of treatment totals

    y₁. + y₃. − y₂. − y₄. = 0.
In general, the comparison of treatment means of interest will imply a linear combination of treatment totals such as

    C = Σ_{i=1}^{a} c_i y_i.,

with the restriction that Σ_{i=1}^{a} c_i = 0. These linear combinations are called contrasts. The sum of squares for any contrast is

    SS_C = (Σ_{i=1}^{a} c_i y_i.)² / (n Σ_{i=1}^{a} c_i²)        (12-18)

and has a single degree of freedom. If the design is unbalanced, then the comparison of treatment means requires that Σ_{i=1}^{a} n_i c_i = 0, and equation 12-18 becomes

    SS_C = (Σ_{i=1}^{a} c_i y_i.)² / (Σ_{i=1}^{a} n_i c_i²).        (12-19)

A contrast is tested by comparing its sum of squares to the mean square error. The resulting statistic is distributed as F with 1 and N − a degrees of freedom.
A very important special case of the above procedure is that of orthogonal contrasts. Two contrasts with coefficients {c_i} and {d_i} are orthogonal if

    Σ_{i=1}^{a} c_i d_i = 0

or, for an unbalanced design, if

    Σ_{i=1}^{a} n_i c_i d_i = 0.


For a treatments a set of a − 1 orthogonal contrasts will partition the sum of squares due to treatments into a − 1 independent single-degree-of-freedom components. Thus, tests performed on orthogonal contrasts are independent.
There are many ways to choose the orthogonal contrast coefficients for a set of treatments. Usually, something in the nature of the experiment should suggest which comparisons will be of interest. For example, if there are a = 3 treatments, with treatment 1 a "control" and treatments 2 and 3 actual levels of the factor of interest to the experimenter, then appropriate orthogonal contrasts might be as follows:

    Treatment        Orthogonal Contrasts
    1 (control)        −2         0
    2 (level 1)         1        −1
    3 (level 2)         1         1

Note that contrast 1 with c_i = −2, 1, 1 compares the average effect of the factor with the control, while contrast 2 with d_i = 0, −1, 1 compares the two levels of the factor of interest.
Contrast coefficients must be chosen prior to running the experiment, for if these comparisons are selected after examining the data, most experimenters would construct tests that compare large observed differences in means. These large differences could be due to the presence of real effects or they could be due to random error. If experimenters always pick the largest differences to compare, they will inflate the type I error of the test, since it is likely that in an unusually high percentage of the comparisons selected the observed differences will be due to error.

Example 12-3
Consider the hardwood concentration experiment. There are four levels of hardwood concentration, and the possible sets of comparisons between these means and the associated orthogonal contrasts are

    H₀: μ₁ + μ₄ = μ₂ + μ₃        C₁ = y₁. − y₂. − y₃. + y₄.,
    H₀: 3μ₁ + μ₂ = μ₃ + 3μ₄      C₂ = −3y₁. − y₂. + y₃. + 3y₄.,
    H₀: μ₁ + 3μ₃ = 3μ₂ + μ₄      C₃ = −y₁. + 3y₂. − 3y₃. + y₄..

Notice that the contrast constants are orthogonal. Using the data from Table 12-1, we find the numerical values of the contrasts and the sums of squares as follows:

    C₁ = 60 − 94 − 102 + 127 = −9,               SS_C₁ = (−9)²/[6(4)] = 3.38,
    C₂ = −3(60) − 94 + 102 + 3(127) = 209,       SS_C₂ = (209)²/[6(20)] = 364.00,
    C₃ = −60 + 3(94) − 3(102) + 127 = 43,        SS_C₃ = (43)²/[6(20)] = 15.41.

These contrast sums of squares completely partition the treatment sum of squares; that is, SS_treatments = SS_C₁ + SS_C₂ + SS_C₃ = 382.79. These tests on the contrasts are usually incorporated into the analysis of variance, such as shown in Table 12-6. From this analysis, we conclude that there are significant differences between hardwood concentrations 1, 2 vs. 3, 4, but that the average of 1 and 4 does not differ from the average of 2 and 3, nor does the average of 1 and 3 differ from the average of 2 and 4.

Table 12-6 Analysis of Variance for the Tensile Strength Data

Source of                      Sum of      Degrees of      Mean
Variation                      Squares     Freedom         Square      F₀
Hardwood concentration         382.79          3           127.60      19.61
  C₁ (1, 4 vs. 2, 3)            (3.38)         (1)           3.38       0.52
  C₂ (1, 2 vs. 3, 4)          (364.00)         (1)         364.00      55.91
  C₃ (1, 3 vs. 2, 4)           (15.41)         (1)          15.41       2.37
Error                          130.17         20             6.51
Total                          512.96         23
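Equation 12-18 is simple to evaluate for any set of contrast coefficients. A minimal Python sketch (not part of the original text; it assumes NumPy is available) that reproduces the Example 12-3 contrast sums of squares:

    # Contrast values and sums of squares for Example 12-3 (equation 12-18).
    import numpy as np

    totals = np.array([60.0, 94.0, 102.0, 127.0])    # treatment totals y_i.
    n = 6                                            # replicates per treatment
    contrasts = {"C1": [1, -1, -1, 1],
                 "C2": [-3, -1, 1, 3],
                 "C3": [-1, 3, -3, 1]}

    for name, c in contrasts.items():
        c = np.asarray(c, dtype=float)
        value = c @ totals
        ss = value**2 / (n * (c**2).sum())           # equation 12-18
        print(f"{name} = {value:+.0f}, SS = {ss:.2f}")
    # C1 = -9, SS = 3.38;  C2 = +209, SS = 364.00;  C3 = +43, SS = 15.41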

12-2.2 Tukey's Test

Frequently, analysts do not know in advance how to construct appropriate orthogonal contrasts, or they may wish to test more than a − 1 comparisons using the same data. For example, analysts may want to test all possible pairs of means. The null hypotheses would then be H₀: μ_i = μ_j for all i ≠ j. If we test all possible pairs of means using t-tests, the probability of committing a type I error for the entire set of comparisons can be greatly increased. There are several procedures available that avoid this problem. Among the more popular of these procedures are the Newman-Keuls test [Newman (1939); Keuls (1952)], Duncan's multiple range test [Duncan (1955)], and Tukey's test [Tukey (1953)]. Here we describe Tukey's test.
Tukey's procedure makes use of another distribution, called the Studentized range distribution. The Studentized range statistic is

    q = (ȳ_max − ȳ_min) / √(MS_E/n),

where ȳ_max is the largest sample mean and ȳ_min is the smallest sample mean out of p sample means. Let q_α(a, f) represent the upper α percentage point of q, where a is the number of treatments and f is the number of degrees of freedom for error. Two means, ȳ_i. and ȳ_j. (i ≠ j), are considered significantly different if

    |ȳ_i. − ȳ_j.| > T_α,

where

    T_α = q_α(a, f) √(MS_E/n).        (12-20)

Table XII (Appendix) contains values of q_α(a, f) for α = 0.05 and 0.01 and a selection of values for a and f. Tukey's procedure has the property that the overall significance level is exactly α for equal sample sizes and at most α for unequal sample sizes.

Example 12-4
We will apply Tukey's test to the hardwood concentration experiment. Recall that there are a = 4 means, n = 6, and MS_E = 6.51. The treatment means are

    ȳ₁. = 10.00 psi,  ȳ₂. = 15.67 psi,  ȳ₃. = 17.00 psi,  ȳ₄. = 21.17 psi.

From Table XII (Appendix), with α = 0.05, a = 4, and f = 20, we find q_{0.05}(4, 20) = 3.96.

Using equation 12-20,

    T_{0.05} = q_{0.05}(4, 20) √(MS_E/n) = 3.96 √(6.51/6) = 4.12.

Therefore, we would conclude that two means are significantly different if

    |ȳ_i. − ȳ_j.| > 4.12.

The differences in treatment averages are

    |ȳ₁. − ȳ₂.| = |10.00 − 15.67| = 5.67,
    |ȳ₁. − ȳ₃.| = |10.00 − 17.00| = 7.00,
    |ȳ₁. − ȳ₄.| = |10.00 − 21.17| = 11.17,
    |ȳ₂. − ȳ₃.| = |15.67 − 17.00| = 1.33,
    |ȳ₂. − ȳ₄.| = |15.67 − 21.17| = 5.50,
    |ȳ₃. − ȳ₄.| = |17.00 − 21.17| = 4.17.

From this analysis, we see significant differences between all pairs of means except 2 and 3. It may be of use to draw a graph of the treatment means, such as Fig. 12-5, with the means that are not different underlined.
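The studentized range point q_{0.05}(4, 20) = 3.96 can also be obtained in software rather than from Table XII. A minimal Python sketch (not part of the original text; it assumes SciPy 1.7 or later for the studentized_range distribution) that reproduces the Example 12-4 comparisons:

    # Tukey's test for Example 12-4 using the studentized range distribution.
    from math import sqrt
    from itertools import combinations
    from scipy.stats import studentized_range

    a, f, n, mse = 4, 20, 6, 6.51
    q_crit = studentized_range.ppf(0.95, a, f)     # about 3.96 (Table XII)
    t_alpha = q_crit * sqrt(mse / n)               # about 4.12 (equation 12-20)

    means = {"5%": 10.00, "10%": 15.67, "15%": 17.00, "20%": 21.17}
    for (li, mi), (lj, mj) in combinations(means.items(), 2):
        diff = abs(mi - mj)
        flag = "different" if diff > t_alpha else "not different"
        print(f"{li} vs {lj}: |diff| = {diff:5.2f}  ({flag})")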

Simultaneous confidence intervals can also be constructed on the differences in pairs of means using the Tukey approach. It can be shown that

    P[(ȳ_i. − ȳ_j.) − q_α(a, f)√(MS_E/n) ≤ μ_i − μ_j ≤ (ȳ_i. − ȳ_j.) + q_α(a, f)√(MS_E/n)] = 1 − α

when sample sizes are equal. This expression represents a 100(1 − α)% simultaneous confidence interval on all pairs of means μ_i − μ_j.
If the sample sizes are unequal, the 100(1 − α)% simultaneous confidence interval on all pairs of means μ_i − μ_j is given by

    (ȳ_i. − ȳ_j.) − (q_α(a, f)/√2)√(MS_E(1/n_i + 1/n_j)) ≤ μ_i − μ_j ≤ (ȳ_i. − ȳ_j.) + (q_α(a, f)/√2)√(MS_E(1/n_i + 1/n_j)).

Interpretation of the confidence intervals is straightforward. If zero is contained in the interval, then there is no significant difference between the two means at the α significance level.
It should be noted that the significance level, α, in Tukey's multiple comparison procedure represents an experimentwise error rate. With respect to confidence intervals, α represents the probability that one or more of the confidence intervals on the pairwise differences will not contain the true difference for equal sample sizes (when sample sizes are unequal, this probability becomes at most α).

Figure 12-5 Results of Tukey's test (treatment means ȳ₁., ȳ₂., ȳ₃., ȳ₄. plotted on a scale of 10.0 to 20.0 psi; means that are not significantly different are underlined).



12-3 THE RANDOM-EFFECTS MODEL

In many situations, the factor of interest has a large number of possible levels. The analyst is interested in drawing conclusions about the entire population of factor levels. If the experimenter randomly selects a of these levels from the population of factor levels, then we say that the factor is a random factor. Because the levels of the factor actually used in the experiment were chosen randomly, the conclusions reached will be valid about the entire population of factor levels. We will assume that the population of factor levels is either of infinite size or is large enough to be considered infinite.
The linear statistical model is

    y_ij = μ + τ_i + ε_ij,    i = 1, 2, ..., a,   j = 1, 2, ..., n,        (12-21)

where τ_i and ε_ij are independent random variables. Note that the model is identical in structure to the fixed-effects case, but the parameters have a different interpretation. If the variance of τ_i is σ_τ², then the variance of any observation is

    V(y_ij) = σ_τ² + σ².

The variances σ_τ² and σ² are called variance components, and the model, equation 12-21, is called the components-of-variance or the random-effects model. To test hypotheses using this model, we require that the {ε_ij} are NID(0, σ²), that the {τ_i} are NID(0, σ_τ²), and that τ_i and ε_ij are independent. The assumption that the {τ_i} are independent random variables implies that the usual assumption of Σ_{i=1}^{a} τ_i = 0 from the fixed-effects model does not apply to the random-effects model.
The sum of squares identity

    SS_T = SS_treatments + SS_E        (12-22)

still holds. That is, we partition the total variability in the observations into a component that measures variation between treatments (SS_treatments) and a component that measures variation within treatments (SS_E). However, instead of testing hypotheses about individual treatment effects, we test the hypotheses

    H₀: σ_τ² = 0,
    H₁: σ_τ² > 0.

If σ_τ² = 0, all treatments are identical, but if σ_τ² > 0, then there is variability between treatments. The quantity SS_E/σ² is distributed as chi-square with N − a degrees of freedom, and under the null hypothesis, SS_treatments/σ² is distributed as chi-square with a − 1 degrees of freedom. Further, the random variables are independent of each other. Thus, under the null hypothesis, the ratio

    F₀ = [SS_treatments/(a − 1)] / [SS_E/(N − a)] = MS_treatments/MS_E        (12-23)

is distributed as F with a − 1 and N − a degrees of freedom. By examining the expected mean squares we can determine the critical region for this statistic.
Consider

    E(MS_treatments) = (1/(a − 1)) E(SS_treatments) = (1/(a − 1)) E[(1/n) Σ_{i=1}^{a} y_i.² − y..²/N]

                     = (1/(a − 1)) E[(1/n) Σ_{i=1}^{a} (Σ_{j=1}^{n} (μ + τ_i + ε_ij))² − (1/N) (Σ_{i=1}^{a} Σ_{j=1}^{n} (μ + τ_i + ε_ij))²].

If we square and take the expectation of the quantities in brackets, we see that terms involving τ_i² are replaced by σ_τ², as E(τ_i) = 0. Also, terms involving (Σ_{j=1}^{n} ε_ij)², (Σ_{i=1}^{a} τ_i)², and (Σ_{i=1}^{a} Σ_{j=1}^{n} ε_ij)² are replaced by nσ², aσ_τ², and anσ², respectively. Furthermore, all cross-product terms involving τ_i and ε_ij have zero expectation. This leads to

    E(MS_treatments) = (1/(a − 1)) [Nμ² + Nσ_τ² + aσ² − Nμ² − nσ_τ² − σ²]

or

    E(MS_treatments) = σ² + nσ_τ².        (12-24)

A similar approach will show that

    E(MS_E) = σ².        (12-25)
From the expected mean squares, we see that if H₀ is true, both the numerator and the denominator of the test statistic, equation 12-23, are unbiased estimators of σ², whereas if H₁ is true, the expected value of the numerator is greater than the expected value of the denominator. Therefore, we should reject H₀ for values of F₀ that are too large. This implies an upper-tail, one-tail critical region, so we reject H₀ if F₀ > F_{α,a−1,N−a}.
The computational procedure and analysis-of-variance table for the random-effects model are identical to the fixed-effects case. The conclusions, however, are quite different because they apply to the entire population of treatments.
We usually need to estimate the variance components (σ² and σ_τ²) in the model. The procedure used to estimate σ² and σ_τ² is called the "analysis-of-variance method," because it uses the lines in the analysis-of-variance table. It does not require the normality assumption on the observations. The procedure consists of equating the expected mean squares to their observed values in the analysis-of-variance table and solving for the variance components. When equating observed and expected mean squares in the one-way-classification random-effects model, we obtain

    MS_treatments = σ̂² + nσ̂_τ²

and

    MS_E = σ̂².

Therefore, the estimators of the variance components are

    σ̂² = MS_E        (12-26)

and

    σ̂_τ² = (MS_treatments − MS_E)/n.        (12-27)

For unequal sample sizes, replace n in equation 12-27 with


    n₀ = (1/(a − 1)) [Σ_{i=1}^{a} n_i − (Σ_{i=1}^{a} n_i²)/(Σ_{i=1}^{a} n_i)].

Sometimes the analysis-of-variance method produces a negative estimate of a variance


component. Since variance components are by definition nonnegative, a negative estimate
of a variance component is unsettling. One course of action is to accept the estimate and use
it as evidence that the true value of the variance component is zero, assuming that sampling
variation led to the negative estimate. While this has intuitive appeal, it will disturb the sta-
tistical properties of other estimates. Another alternative is to reestimate the negative vari-
ance component with a method that always yields nonnegative estimates. Still another
possibility is to consider the negative estimate as evidence that the assumed linear model is
incorrect, requiring that a study of the model and its assumptions be made to find a more
appropriate model.

Example 12-5
In his book Design and Analysis of Experiments (2001), D. C. Montgomery describes a single-factor experiment involving the random-effects model. A textile manufacturing company weaves a fabric on a large number of looms. The company is interested in loom-to-loom variability in tensile strength. To investigate this, a manufacturing engineer selects four looms at random and makes four strength determinations on fabric samples chosen at random for each loom. The data are shown in Table 12-7, and the analysis of variance is summarized in Table 12-8.
From the analysis of variance, we conclude that the looms in the plant differ significantly in their ability to produce fabric of uniform strength. The variance components are estimated by σ̂² = 1.90 and

    σ̂_τ² = (29.73 − 1.90)/4 = 6.96.

Therefore, the variance of strength in the manufacturing process is estimated by

    V̂(y_ij) = σ̂_τ² + σ̂² = 6.96 + 1.90 = 8.86.

Most of this variability is attributable to differences between looms.

Table 12-7 Strength Data for Example 12-5

                Observations
Loom       1     2     3     4     Totals    Averages
  1       98    97    99    96      390        97.5
  2       91    90    93    92      366        91.5
  3       96    95    97    95      383        95.8
  4       95    96    99    98      388        97.0
                                   1527        95.4

Table 12-8 Analysis of Variance for the Strength Data

Source of      Sum of      Degrees of      Mean
Variation      Squares     Freedom         Square      F₀
Looms           89.19          3            29.73      15.68
Error           22.75         12             1.90
Total          111.94         15
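The analysis-of-variance method of equations 12-26 and 12-27 amounts to a few lines of arithmetic. A minimal Python sketch (not part of the original text; it assumes NumPy is available) that reproduces Tables 12-7 and 12-8 and the variance-component estimates:

    # Variance components for the loom data (equations 12-26 and 12-27).
    import numpy as np

    looms = np.array([[98, 97, 99, 96],
                      [91, 90, 93, 92],
                      [96, 95, 97, 95],
                      [95, 96, 99, 98]], dtype=float)
    a, n = looms.shape

    ss_treat = n * ((looms.mean(axis=1) - looms.mean())**2).sum()      # 89.19
    ss_error = ((looms - looms.mean(axis=1, keepdims=True))**2).sum()  # 22.75
    ms_treat = ss_treat / (a - 1)                                      # 29.73
    ms_error = ss_error / (a * (n - 1))                                # 1.90

    sigma2_hat = ms_error                         # equation 12-26
    sigma2_tau_hat = (ms_treat - ms_error) / n    # equation 12-27, about 6.96
    print(sigma2_hat, sigma2_tau_hat, sigma2_hat + sigma2_tau_hat)     # total about 8.86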

This example illustrates an important application of analysis of variance: the isolation of different sources of variability in a manufacturing process. Problems of excessive variability in critical functional parameters or properties frequently arise in quality-improvement programs. For example, in the previous fabric-strength example, the process mean is estimated by ȳ.. = 95.45 psi and the process standard deviation is estimated by σ̂_y = √V̂(y_ij) = √8.86 = 2.98 psi. If strength is approximately normally distributed, this would imply a distribution of strength in the outgoing product that looks like the normal distribution shown in Fig. 12-6a. If the lower specification limit (LSL) on strength is at 90 psi, then a substantial proportion of the process output is fallout; that is, scrap or defective material that must be sold as second quality, and so on. This fallout is directly related to the excess variability resulting from differences between looms. Variability in loom performance could be caused by faulty setup, poor maintenance, inadequate supervision, poorly trained operators, and so forth. The engineer or manager responsible for quality improvement must identify and remove these sources of variability from the process. If he can do this, then strength variability will be greatly reduced, perhaps as low as σ̂_y = σ̂ = √1.90 = 1.38 psi, as shown in Fig. 12-6b. In this improved process, reducing the variability in strength has greatly reduced the fallout. This will result in lower cost, higher quality, a more satisfied customer, and enhanced competitive position for the company.

Figure 12-6 The distribution of fabric strength (80 to 110 psi scale, LSL = 90 psi). (a) Current process. (b) Improved process.

12-4 THE RANDOMIZED BLOCK DESIGN


12-4.1 Design and Statistical Analysis
In many experimental problems it is necessary to design the experiment so that variability
arising from nuisance variables can be controlled. As an example, recall the situation in
Example 11-17, where two different procedures were used to predict the shear strength of
steel plate girders. Because each girder has potentially different strength, and because this
variability in strength was not of direct interest, we designed the experiment using the two
methods on each girder and compared the difference in average strength readings to zero
using the paired t-test. The paired t-test is a procedure for comparing two means when all
experimental runs cannot be made under homogeneous conditions. Thus, the paired t-test
reduces the noise in the experiment by blocking out a nuisance variable effect. The ran-
domized block design is an extension of the paired t-test that is used in situations where the
factor of interest has more than two levels.
As an example, suppose that we wish to compare the effect of four different chemicals
on the strength of a particular fabric. It is known that the effect of these chemicals varies
considerably from one fabric specimen to another. In this example, we have only one fac-
tor: chemical type. Therefore, we could select several pieces of fabric and compare all four
chemicals within the relatively homogeneous conditions provided by each piece of fabric.
This would remove any variation due to the fabric.
The general procedure for a randomized complete block design consists of selecting b
blocks and running a complete replicate of the experiment in each block. A randomized
complete block design for investigating a single factor with a levels would appear as in Fig.
12-7. There will be a observations (one per factor level) in each block, and the order in
which these observations are run is randomly assigned within the block.
We will now describe the statistical analysis for a randomized block design. Suppose that a single factor with a levels is of interest, and the experiment is run in b blocks, as shown in Fig. 12-7. The observations may be represented by the linear statistical model

    y_ij = μ + τ_i + β_j + ε_ij,    i = 1, 2, ..., a,   j = 1, 2, ..., b,        (12-28)

where μ is an overall mean, τ_i is the effect of the ith treatment, β_j is the effect of the jth block, and ε_ij is the usual NID(0, σ²) random error term. Treatments and blocks will be considered initially as fixed factors. Furthermore, the treatment and block effects are defined as deviations from the overall mean, so that Σ_{i=1}^{a} τ_i = 0 and Σ_{j=1}^{b} β_j = 0. We are interested in testing the equality of the treatment effects. That is,

    H₀: τ₁ = τ₂ = ··· = τ_a = 0,
    H₁: τ_i ≠ 0 for at least one i.

Block 1      Block 2      ···      Block b
 y_11         y_12        ···       y_1b
 y_21         y_22        ···       y_2b
 y_31         y_32        ···       y_3b
   ⋮            ⋮                     ⋮
 y_a1         y_a2        ···       y_ab

Figure 12-7 The randomized complete block design.



Let y_i. be the total of all observations taken under treatment i, let y.j be the total of all observations in block j, let y.. be the grand total of all observations, and let N = ab be the total number of observations. Similarly, ȳ_i. is the average of the observations taken under treatment i, ȳ.j is the average of the observations in block j, and ȳ.. is the grand average of all observations. The total corrected sum of squares is

    Σ_{i=1}^{a} Σ_{j=1}^{b} (y_ij − ȳ..)² = Σ_{i=1}^{a} Σ_{j=1}^{b} [(ȳ_i. − ȳ..) + (ȳ.j − ȳ..) + (y_ij − ȳ_i. − ȳ.j + ȳ..)]².        (12-29)

Expanding the right-hand side of equation 12-29 and applying algebraic elbow grease yields

    Σ_{i=1}^{a} Σ_{j=1}^{b} (y_ij − ȳ..)² = b Σ_{i=1}^{a} (ȳ_i. − ȳ..)² + a Σ_{j=1}^{b} (ȳ.j − ȳ..)²
                                                                        (12-30)
        + Σ_{i=1}^{a} Σ_{j=1}^{b} (y_ij − ȳ_i. − ȳ.j + ȳ..)²

or, symbolically,

    SS_T = SS_treatments + SS_blocks + SS_E.        (12-31)

The degrees-of-freedom breakdown corresponding to equation 12-31 is

    ab − 1 = (a − 1) + (b − 1) + (a − 1)(b − 1).        (12-32)

The null hypothesis of no treatment effects (H₀: τ_i = 0) is tested by the F ratio, MS_treatments/MS_E. The analysis of variance is summarized in Table 12-9. Computing formulas for the sums of squares are also shown in this table. The same test procedure is used in cases where treatments and/or blocks are random.

Example 12-6
An experiment was performed to determine the effect of four different chemicals on the strength of a
fabric. These chemicals are used as part of the permanent-press finishing process. Five fabric samples
were selected, and a randomized block design was run by testing each chemical type once in random
order on each fabric sample. The data are shown in Table 12-10.
The sums of squares for the analysis of variance are computed as follows:

Table 12-9 Analysis of Variance for Randomized Complete Block Design

Source of      Sum of                                             Degrees of         Mean
Variation      Squares                                            Freedom            Square                                  F₀
Treatments     SS_treatments = Σ_{i=1}^{a} y_i.²/b − y..²/ab       a − 1             MS_treatments = SS_treatments/(a − 1)   MS_treatments/MS_E
Blocks         SS_blocks = Σ_{j=1}^{b} y.j²/a − y..²/ab            b − 1             MS_blocks = SS_blocks/(b − 1)
Error          SS_E (by subtraction)                              (a − 1)(b − 1)     MS_E = SS_E/[(a − 1)(b − 1)]
Total          SS_T = Σ_{i=1}^{a} Σ_{j=1}^{b} y_ij² − y..²/ab      ab − 1

Table 12-10 Fabric Strength Data for the Randomized Block Design

                                                       Row       Row
Chemical              Fabric Sample                   Totals    Averages
Type            1      2      3      4      5          y_i.       ȳ_i.
  1            1.3    1.6    0.5    1.2    1.1          5.7       1.14
  2            2.2    2.4    0.4    2.0    1.8          8.8       1.76
  3            1.8    1.7    0.6    1.5    1.3          6.9       1.38
  4            3.9    4.4    2.0    4.1    3.4         17.8       3.56
Column
totals, y.j    9.2   10.1    3.5    8.8    7.6    39.2 (y..)
Column
averages, ȳ.j  2.30   2.53   0.88   2.20   1.90   1.96 (ȳ..)

    SS_T = Σ_{i=1}^{4} Σ_{j=1}^{5} y_ij² − y..²/ab
         = (1.3)² + (1.6)² + ··· + (3.4)² − (39.2)²/20 = 25.69,

    SS_treatments = Σ_{i=1}^{4} y_i.²/b − y..²/ab
                  = [(5.7)² + (8.8)² + (6.9)² + (17.8)²]/5 − (39.2)²/20 = 18.04,

    SS_blocks = Σ_{j=1}^{5} y.j²/a − y..²/ab
              = [(9.2)² + (10.1)² + (3.5)² + (8.8)² + (7.6)²]/4 − (39.2)²/20 = 6.69,

    SS_E = SS_T − SS_blocks − SS_treatments
         = 25.69 − 6.69 − 18.04 = 0.96.
The analysis of variance is summarized in Table 12-11. We would conclude that there is a significant difference in the chemical types as far as their effect on fabric strength is concerned.

Table 12-11 Analysis of Variance for the Randomized Block Experiment

Source of                      Sum of      Degrees of      Mean
Variation                      Squares     Freedom         Square      F₀
Chemical type (treatments)      18.04          3            6.01       75.13
Fabric sample (blocks)           6.69          4            1.67
Error                            0.96         12            0.08
Total                           25.69         19
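The Table 12-9 computing formulas translate directly into code. A minimal Python sketch (not part of the original text; it assumes NumPy is available) that reproduces the Example 12-6 sums of squares and F ratio:

    # Randomized block analysis of the Table 12-10 data (Table 12-9 formulas).
    import numpy as np

    y = np.array([[1.3, 1.6, 0.5, 1.2, 1.1],
                  [2.2, 2.4, 0.4, 2.0, 1.8],
                  [1.8, 1.7, 0.6, 1.5, 1.3],
                  [3.9, 4.4, 2.0, 4.1, 3.4]])
    a, b = y.shape
    correction = y.sum()**2 / (a * b)                        # y..^2 / ab

    ss_total = (y**2).sum() - correction                     # 25.69
    ss_treat = (y.sum(axis=1)**2).sum() / b - correction     # 18.04
    ss_block = (y.sum(axis=0)**2).sum() / a - correction     # 6.69
    ss_error = ss_total - ss_treat - ss_block                # 0.96

    f0 = (ss_treat / (a - 1)) / (ss_error / ((a - 1) * (b - 1)))
    print(f"F0 = {f0:.2f}")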

Suppose an experiment is conducted as a randomized block design, and blocking was not really necessary. There are ab observations and (a − 1)(b − 1) degrees of freedom for error. If the experiment had been run as a completely randomized single-factor design with b replicates, we would have had a(b − 1) degrees of freedom for error. So, blocking has cost a(b − 1) − (a − 1)(b − 1) = b − 1 degrees of freedom for error. Thus, since the loss in error degrees of freedom is usually small, if there is a reasonable chance that block effects may be important, the experimenter should use the randomized block design.
For example, consider the experiment described in Example 12-6 as a one-way-classification analysis of variance. We would have 16 degrees of freedom for error. In the randomized block design there are 12 degrees of freedom for error. Therefore, blocking has cost only 4 degrees of freedom, a very small loss considering the possible gain in information that would be achieved if block effects are really important. As a general rule, when in doubt as to the importance of block effects, the experimenter should block and gamble that the block effect does exist. If the experimenter is wrong, the slight loss in the degrees of freedom for error will have a negligible effect, unless the number of degrees of freedom is very small. The reader should compare this discussion to the one at the end of Section 11-3.3.

12-4.2 Tests on Individual Treatment Means

When the analysis of variance indicates that a difference exists between treatment means, we usually need to perform some follow-up tests to isolate the specific differences. Any multiple comparison method, such as Tukey's test, could be used to do this.
Tukey's test presented in Section 12-2.2 can be used to determine differences between treatment means when blocking is involved simply by replacing n with the number of blocks b in equation 12-20. Keep in mind that the degrees of freedom for error have now changed. For the randomized block design, f = (a − 1)(b − 1).
To illustrate this procedure, recall that the four chemical type means from Example 12-6 are

    ȳ₁. = 1.14,  ȳ₂. = 1.76,  ȳ₃. = 1.38,  ȳ₄. = 3.56.

Using equation 12-20 with b replacing n,

    T_{0.05} = q_{0.05}(4, 12) √(MS_E/b) = 4.20 √(0.08/5) = 0.53.

Therefore, we would conclude that two means are significantly different if

    |ȳ_i. − ȳ_j.| > 0.53.

The absolute values of the differences in treatment averages are

    |ȳ₁. − ȳ₂.| = |1.14 − 1.76| = 0.62,
    |ȳ₁. − ȳ₃.| = |1.14 − 1.38| = 0.24,
    |ȳ₁. − ȳ₄.| = |1.14 − 3.56| = 2.42,
    |ȳ₂. − ȳ₃.| = |1.76 − 1.38| = 0.38,
    |ȳ₂. − ȳ₄.| = |1.76 − 3.56| = 1.80,
    |ȳ₃. − ȳ₄.| = |1.38 − 3.56| = 2.18.

The results indicate chemical types 1 and 3 do not differ, and types 2 and 3 do not differ. Figure 12-8 represents the results graphically, where the underlined pairs do not differ.

Figure 12-8 Results of Tukey’s test.

12-4.3 Residual Analysis and Model Checking

In any designed experiment it is always important to examine the residuals and check for violations of basic assumptions that could invalidate the results. The residuals for the randomized block design are just the differences between the observed and fitted values,

    e_ij = y_ij − ŷ_ij,

where the fitted values are

    ŷ_ij = ȳ_i. + ȳ.j − ȳ...        (12-33)

The fitted value represents the estimate of the mean response when the ith treatment is run in the jth block. The residuals from the experiment from Example 12-6 are shown in Table 12-12.
Figures 12-9, 12-10, 12-11, and 12-12 present the important residual plots for the
experiment. There is some indication that fabric sample (block) 3 has greater variability in
strength when treated with the four chemicals than the other samples. Also, chemical type

Table 12-12 Residuals from the Randomized Block Design

Chemical                    Fabric Sample
Type           1         2         3         4         5
  1         −0.18     −0.10      0.44     −0.18      0.02
  2          0.10      0.07     −0.27      0.00      0.10
  3          0.08     −0.24      0.30      0.12     −0.02
  4          0.00      0.27     −0.48      0.30     −0.10

Figure 12-9 Normal probability plot of residuals from the randomized block design.

4, which provides the greatest strength, also has somewhat more variability in strength. Follow-up experiments may be necessary to confirm these findings if they are potentially important.

Figure 12-11 Residuals by block.

Figure 12-12 Residuals versus ŷ_ij.



12-5 DETERMINING SAMPLE SIZE IN SINGLE-FACTOR


EXPERIMENTS
In any experimental design problem the choice of the sample size or number of replicates
to use is important. Operating characteristic curves can be used to provide guidance in mak-
ing this selection. Recall that the operating characteristic curve is a plot of the type II (β)
error for various sample sizes against a measure of the difference in means that it is impor-
tant to detect. Thus, if the experimenter knows how large a difference in means is of poten-
tial importance, the operating characteristic curves can be used to determine how many
replicates are required to give adequate sensitivity.
We first consider sample size determination in a fixed-effects model for the case of
equal sample size in each treatment. The power (1 − β) of the test is

1 − β = P{Reject H₀ | H₀ is false}
      = P{F₀ > F_α,a−1,N−a | H₀ is false}.   (12-34)

To evaluate this probability statement, we need to know the distribution of the test statistic
F₀ if the null hypothesis is false. It can be shown that if H₀ is false, the statistic F₀ =
MS_Treatments/MS_E is distributed as a noncentral F random variable, with a − 1 and N − a
degrees of freedom and a noncentrality parameter δ. If δ = 0, then the noncentral F distri-
bution becomes the usual central F distribution.
The operating characteristic curves in Chart VII of the Appendix are used to calculate
the power of the test for the fixed-effects model. These curves plot the probability of type
II error (β) against Φ, where

Φ² = n Σᵢ₌₁ᵃ τᵢ² / (aσ²).   (12-35)

The parameter Φ is related to the noncentrality parameter δ. Curves are available for α =
0.05 and α = 0.01 and for several values of degrees of freedom for numerator and denom-
inator. In a completely randomized design, the symbol n in equation 12-35 is the number
of replicates. In a randomized block design, replace n by the number of blocks.
In using the operating characteristic curves, we must define the difference in means
that we wish to detect in terms of Σᵢ₌₁ᵃ τᵢ². Also, the error variance σ² is usually unknown.
In such cases, we must choose ratios of Σᵢ₌₁ᵃ τᵢ²/σ² that we wish to detect. Alternatively, if
an estimate of σ² is available, one may replace σ² with this estimate. For example, if we
were interested in the sensitivity of an experiment that has already been performed, we
might use MS_E as the estimate of σ².

Suppose that five means are being compared in a completely randomized experiment with α = 0.01.
The experimenter would like to know how many replicates to run if it is important to reject H₀ with a
probability of at least 0.90 if Σᵢ₌₁⁵ τᵢ²/σ² = 5.0. The parameter Φ² is, in this case,

Φ² = n Σᵢ₌₁ᵃ τᵢ² / (aσ²) = n(5.0)/5 = n,

and the operating characteristic curve for a − 1 = 5 − 1 = 4 and N − a = a(n − 1) = 5(n − 1) error degrees
of freedom is shown in Chart VII (Appendix). As a first guess, try n = 4 replicates. This yields Φ² = 4,
Φ = 2, and 5(3) = 15 error degrees of freedom. Consequently, from Chart VII, we find that β = 0.38.

Therefore, the power of the test is approximately 1 − β = 1 − 0.38 = 0.62, which is less than the
required 0.90, and so we conclude that n = 4 replicates are not sufficient. Continuing in a similar man-
ner, we can construct the following display.

n      Φ²      Φ       a(n − 1)      β       Power (1 − β)
4       4     2.00        15       0.38          0.62
5       5     2.24        20       0.18          0.82
6       6     2.45        25       0.06          0.94

Thus, at least n = 6 replicates must be run in order to obtain a test with the required power.
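The chart-based search above can also be carried out numerically, since the power is an upper-tail probability of the noncentral F distribution with noncentrality parameter nΣᵢτᵢ²/σ² (equal to aΦ²). A minimal sketch in Python, assuming SciPy is available; the variable names are illustrative.

from scipy.stats import f, ncf

a, alpha, ratio = 5, 0.01, 5.0     # treatments, significance level, assumed sum(tau_i^2)/sigma^2
for n in range(2, 11):             # candidate numbers of replicates
    df1, df2 = a - 1, a * (n - 1)
    lam = n * ratio                # noncentrality parameter, equal to a * Phi^2
    f_crit = f.ppf(1 - alpha, df1, df2)
    power = 1 - ncf.cdf(f_crit, df1, df2, lam)
    print(n, round(power, 2))      # power first reaches 0.90 near n = 6, as in the display above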

The power of the test for the random-effects model is

1 − β = P{Reject H₀ | H₀ is false}
      = P{F₀ > F_α,a−1,N−a | σ_τ² > 0}.   (12-36)

Once again the distribution of the test statistic F₀ under the alternative hypothesis is needed.
It can be shown that if H₁ is true (σ_τ² > 0), the ratio F₀/(1 + nσ_τ²/σ²) is distributed as a
central F random variable with a − 1 and N − a degrees of freedom.
Since the power of the random-effects model is based on the central F distribution, we
could use the tables of the F distribution in the Appendix to evaluate equation 12-36. How-
ever, it is much easier to evaluate the power of the test by using the operating characteris-
tic curves in Chart VIII of the Appendix. These curves plot the probability of the type II
error against λ, where

λ = √(1 + nσ_τ²/σ²).   (12-37)

In the randomized block design, replace n with b, the number of blocks. Since σ² is usually
unknown, we may either use a prior estimate or define the value of σ_τ² that we are interested
in detecting in terms of the ratio σ_τ²/σ².

Consider a completely randomized design with five treatments selected at random, with six observa-
tions per treatment and α = 0.05. We wish to determine the power of the test if σ_τ² is equal to σ². Since
a = 5, n = 6, and σ_τ² = σ², we may compute

λ = √(1 + 6(1)) = 2.646.

From the operating characteristic curve with a − 1 = 4, N − a = 25 degrees of freedom and α = 0.05,
we find that

β = 0.20.

Therefore, the power is approximately 0.80.
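Because only the central F distribution is involved here, the power can also be computed directly rather than read from Chart VIII. A short sketch, assuming SciPy is available and using the fact noted above that F₀/(1 + nσ_τ²/σ²) has a central F distribution under H₁:

from scipy.stats import f

a, n, alpha = 5, 6, 0.05
ratio = 1.0                               # assumed value of sigma_tau^2 / sigma^2
df1, df2 = a - 1, a * (n - 1)             # 4 and 25
scale = 1 + n * ratio                     # lambda^2 from equation 12-37
f_crit = f.ppf(1 - alpha, df1, df2)
power = 1 - f.cdf(f_crit / scale, df1, df2)
print(round(power, 2))                    # approximately 0.80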

12-6 SAMPLE COMPUTER OUTPUT


Many computer packages can be implemented to carry out the analysis of variance for the
situations presented in this chapter. In this section, computer output from Minitab® is
presented.

Computer Output for Hardwood Concentration Example

Reconsider Example 12-1, which investigates the effect of hardwood concentration on ten-
sile strength. Using ANOVA in Minitab® provides the following output.

Analysis of Variance for TS

Source     DF       SS       MS       F       P
Concen      3   382.79   127.60   19.61   0.000
Error      20   130.17     6.51
Total      23   512.96

Individual 95% CIs For Mean
Based on Pooled StDev

Level    N      Mean    StDev
 5       6    10.000    2.828
10       6    15.667    2.805
15       6    17.000    1.789
20       6    21.167    2.639

Pooled StDev = 2.551

The analysis of variance results are identical to those presented in Section 12-1.2. Minitab®
also provides 95% confidence intervals for the means of each level of hardwood concen-
tration using a pooled estimate of the standard deviation. Interpretation of the confidence
intervals is straightforward. Factor levels with confidence intervals that do not overlap are
said to be significantly different.
A better indicator of significant differences is provided by confidence intervals based
on Tukey’s test on pairwise differences, an option in Minitab®. The output provided is

Tukey’s pairwise comparisons

    Family error rate = 0.0500
Individual error rate = 0.0111

Critical value = 3.96

Intervals for (column level mean) - (row level mean)

              5          10          15

10       -9.791
         -1.542

15      -11.124      -5.458
         -2.876       2.791

20      -15.291      -9.624      -8.291
         -7.042      -1.376      -0.042

The (simultaneous) confidence intervals are easily interpreted. For example, the 95% con-
fidence interval for the difference in mean tensile strength between 5% hardwood concen-
tration and 10% hardwood concentration is (−9.791, −1.542). Since this confidence interval
does not contain the value 0, we conclude there is a significant difference between 5% and
10% hardwood concentrations. The remaining confidence intervals are interpreted simi-
larly. The results provided by Minitab® are identical to those found in Section 12-2.2.
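The critical value and the intervals above can be reproduced from the ANOVA quantities alone. A brief sketch, assuming SciPy 1.7 or later is available (for the studentized range distribution):

import math
from scipy.stats import studentized_range

q = studentized_range.ppf(0.95, 4, 20)    # critical value, about 3.96
half_width = q * math.sqrt(6.51 / 6)      # MS_E = 6.51 from the ANOVA, n = 6 per level
diff = 10.000 - 15.667                    # mean at 5% minus mean at 10% concentration
print(round(diff - half_width, 3), round(diff + half_width, 3))   # about (-9.79, -1.54)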

12-7 SUMMARY

This chapter has introduced design and analysis methods for experiments with a single fac-
tor. The importance of randomization in single-factor experiments was emphasized. In a
completely randomized experiment, all runs are made in random order to balance out the
effects of unknown nuisance variables. If a known nuisance variable can be controlled,
blocking can be used as a design alternative. The fixed-effects and random-effects models
of analysis of variance were presented. The primary difference between the two models is
the inference space. In the fixed-effects model inferences are valid only about the factor lev-
els specifically considered in the analysis, while in the random-effects model the conclu-
sions may be extended to the population of factor levels. Orthogonal contrasts and Tukey’s
test were suggested for making comparisons between factor level means in the fixed-effects
experiment. A procedure was also given for estimating the variance components in a ran-
dom-effects model. Residual analysis was introduced for checking the underlying assump-
tions of the analysis of variance.

12-8 EXERCISES

12-1. A study is conducted to determine the effect of cutting speed on the life (in hours) of a particular machine tool. Four levels of cutting speed are selected for the study, with the following results:

Cutting Speed               Tool Life
1               41    43    33    39    36    40
2               42    36    34    45    40    39
3               34    38    34    34    36    33
4               36    …

(a) Does cutting speed affect tool life? Draw comparative box plots and perform an analysis of variance.
(b) Plot average tool life against cutting speed and interpret the results.
(c) Use Tukey’s test to investigate differences between the individual levels of cutting speed. Interpret the results.
(d) Find the residuals and examine them for model inadequacy.

12-2. In “Orthogonal Design for Process Optimization and Its Application to Plasma Etching” (Solid State Technology, May 1987), G. Z. Yin and D. W. Jillie describe an experiment to determine the effect of C₂F₆ flow rate on the uniformity of the etch on a silicon wafer used in integrated circuit manufacturing. Three flow rates are used in the experiment, and the resulting uniformity (in percent) for six replicates is as follows:

                         Observations
C₂F₆ Flow      1      2      3      4      5      6
125           2.7    4.6    2.6    3.0    3.2    3.8
160           4.9    4.6    5.0    4.2    3.6    4.2
200           4.6    3.4    2.9    3.5    4.1    5.1

(a) Does C₂F₆ flow rate affect etch uniformity? Construct box plots to compare the factor levels and perform the analysis of variance.
(b) Do the residuals indicate any problems with the underlying assumptions?

12-3. The compressive strength of concrete is being studied. Four different mixing techniques are being investigated. The following data have been collected:

Mixing Technique      Compressive Strength (psi)
1              3129    3000    2865    2890
2              3200    3300    2975    3150
3              2800    2900    2985    3050
4              2600    2700    2600    2765

(a) Test the hypothesis that mixing techniques affect the strength of the concrete. Use α = 0.05.
(b) Use Tukey’s test to make comparisons between pairs of means. Estimate the treatment effects.

12-4. A textile mill has a large number of looms. Each loom is supposed to provide the same output of cloth per minute. To investigate this assumption, five looms are chosen at random and their output measured at different times. The following data are obtained:

Loom      Output (lb/min)
1      4.0    4.1    4.2    4.0    4.1
2      3.9    3.8    3.9    4.0    4.0
3      4.1    4.2    4.1    4.0    3.9
4      3.6    3.8    4.0    3.9    3.7
5      3.8    3.6    3.9    3.8    4.0

(a) Is this a fixed- or random-effects experiment? Are the looms similar in output?
(b) Estimate the variability between looms.
(c) Estimate the experimental error variance.
(d) What is the probability of accepting H₀ if σ_τ² is four times the experimental error variance?
(e) Analyze the residuals from this experiment and check for model inadequacy.

12-5. An experiment was run to determine whether four specific firing temperatures affect the density of a certain type of brick. The experiment led to the following data:

Temperature (°F)      Density
100      21.8    21.9    21.7    21.6    21.7    21.5    21.8
125      21.7    21.4    21.5    21.5
150      21.9    21.8    21.8    21.6    21.5
175      21.9    21.7    21.8    21.4    21.6    21.8

(a) Does the firing temperature affect the density of the bricks?
(b) Estimate the components in the model.
(c) Analyze the residuals from the experiment.

12-6. An electronics engineer is interested in the effect on tube conductivity of five different types of coating for cathode ray tubes used in a telecommunications system display device. The following conductivity data are obtained:

Coating Type      Conductivity
1      143    141    150    146
2      152    149    137    143
3      134    133    132    127
4      129    127    132    129
5      147    148    144    142

(a) Is there any difference in conductivity due to coating type? Use α = 0.05.
(b) Estimate the overall mean and the treatment effects.
(c) Compute a 95% interval estimate of the mean for coating type 1. Compute a 99% interval estimate of the mean difference between coating types 1 and 4.
(d) Test all pairs of means using Tukey’s test, with α = 0.05.
(e) Assuming that coating type 4 is currently in use, what are your recommendations to the manufacturer? We wish to minimize conductivity.

12-7. The response time in milliseconds was determined for three different types of circuits used in an electronic calculator. The results are recorded here:

Circuit Type      Response Time
1       9     12     10      8     15
2      20     21     23     17     30
3       6      5      8     16      7

(a) Test the hypothesis that the three circuit types have the same response time.
(b) Use Tukey’s test to compare pairs of treatment means.
(c) Construct a set of orthogonal contrasts, assuming that at the outset of the experiment you suspected the response time of circuit type 2 to be different from the other two.
(d) What is the power of this test for detecting Στᵢ²/σ² = 3.0?
(e) Analyze the residuals from this experiment.

12-8. In “The Effect of Nozzle Design on the Stability and Performance of Turbulent Water Jets” (Fire Safety Journal, Vol. 4, August 1981), C. Theobald describes an experiment in which a shape factor was determined for several different nozzle designs at different levels of jet efflux velocity. Interest in this experiment focuses primarily on nozzle design, and velocity is a nuisance factor. The data are shown below:

Nozzle           Jet Efflux Velocity (m/s)
Type      11.73   14.37   16.59   20.43   23.46   28.74
1          0.78    0.80    0.81    0.75    0.77    0.78
2          0.85    0.85    0.92    0.86    0.81    0.83
3          0.93    0.92    0.95    0.89    0.89    0.83
4          1.14    0.97    0.98    0.88    0.86    0.83
5          0.97    0.86    0.78    0.76    0.76    0.75

(a) Does nozzle type affect shape factor? Compare the nozzles using box plots and the analysis of variance.
(b) Use Tukey’s test to determine specific differences between the nozzles. Does a graph of the average (or standard deviation) of shape factor versus nozzle type assist with the conclusions?
(c) Analyze the residuals from this experiment.

12-9. In his book Design and Analysis of Experiments (2001), D. C. Montgomery describes an experiment to determine the effect of four chemical agents on the strength of a particular type of cloth. Due to possible variability from cloth to cloth, bolts of cloth are considered blocks. Five bolts are selected and all four chemicals in random order are applied to each bolt. The resulting tensile strengths are

                        Bolt
Chemical      1      2      3      4      5
1            1.3    1.6    0.5    1.2    1.1
2            2.2    2.4    0.4    2.0    1.8
3            1.8    1.7    0.6    1.5    1.3
4            3.9    4.4    2.0    4.1    3.4

(a) Is there any difference in tensile strength between the chemicals?
(b) Use Tukey’s test to investigate specific differences between the chemicals.
(c) Analyze the residuals from this experiment.

12-10. Suppose that four normal populations have common variance σ² = 25 and means μ₁ = 50, μ₂ = 60, μ₃ = 50, and μ₄ = 60. How many observations should be taken on each population so that the probability of rejecting the hypothesis of equality of means is at least 0.90? Use α = 0.05.

12-11. Suppose that five normal populations have common variance σ² = 100 and means μ₁ = 175, μ₂ = 190, μ₃ = 160, μ₄ = 200, and μ₅ = 215. How many observations per population must be taken so that the probability of rejecting the hypothesis of equality of means is at least 0.95? Use α = 0.01.

12-12. Consider testing the equality of the means of two normal populations where the variances are unknown but assumed equal. The appropriate test procedure is the two-sample t-test. Show that the two-sample t-test is equivalent to the one-way-classification analysis of variance.

12-13. Show that the variance of the linear combination Σᵢ₌₁ᵃ cᵢyᵢ. is σ² Σᵢ₌₁ᵃ nᵢcᵢ².

12-14. In a fixed-effects model, suppose that there are n observations for each of four treatments. Let Q₁², Q₂², and Q₃² be single-degree-of-freedom components for the orthogonal contrasts. Prove that SS_Treatments = Q₁² + Q₂² + Q₃².

12-15. Consider the data shown in Exercise 12-7.
(a) Write out the least squares normal equations for this problem, and solve them for μ̂ and τ̂ᵢ, making the usual constraint (Σᵢ₌₁³ τ̂ᵢ = 0). Estimate τ₁ − τ₂.
(b) Solve the equations in (a) using the constraint τ̂₃ = 0. Are the estimators τ̂ᵢ and μ̂ the same as you found in (a)? Why? Now estimate τ₁ − τ₂ and compare your answer with (a). What statement can you make about estimating contrasts in the τᵢ?
(c) Estimate μ + τ₁, 2τ₁ − τ₂ − τ₃, and μ + τ₁ + τ₂ using the two solutions to the normal equations. Compare the results obtained in each case.
Chapter 13

Design of Experiments
with Several Factors
An experiment is just a test or a series of tests. Experiments are performed in all scientific
and engineering disciplines and are a major part of the discovery and learning process. The
conclusions that can be drawn from an experiment will depend, in part, on how the exper-
iment was conducted and so the design of the experiment plays a major role in problem
solution. This chapter introduces experimental design techniques useful when several fac-
tors are involved.

13-1 EXAMPLES OF EXPERIMENTAL DESIGN APPLICATIONS

Example 13-1
A Characterization Experiment A development engineer is working on a new process for sol-
dering electronic components to printed circuit boards. Specifically, he is working with a new type of
flow solder machine that he hopes will reduce the number of defective solder joints. (A flow solder
machine preheats printed circuit boards and then moves them into contact with a wave of liquid sol-
der. This machine makes all the electrical and most of the mechanical connections of the components
to the printed circuit board. Solder defects require touchup or rework, which adds cost and often dam-
ages the boards.) The flow solder machine has several variables that the engineer can control. They
are as follows:
1. Solder temperature
2. Preheat temperature
3. Conveyor speed
4. Flux type
5. Flux specific gravity
6. Solder wave depth
7. Conveyor angle
In addition to these controllable factors, there are several factors that cannot be easily controlled
once the machine enters routine manufacturing, including the following:
1. Thickness of the printed circuit board
2. Types of components used on the board
3. Layout of the components of the board
4. Operator
5. Environmental factors
6. Production rate
Sometimes we call the uncontrollable factors noise factors. A schematic representation of the
process is shown in Fig. 13-1.


Controllable factors
x₁   x₂   ⋯   xₚ

Input (printed circuit boards)  →  Process (flow solder machine)  →  Output (defects, y)

z₁   z₂   ⋯   z_q
Uncontrollable (noise) factors

Figure 13-1 The flow solder experiment.

In this situation the engineer is interested in characterizing the flow solder machine; that is, he is inter-
ested in determining which factors (both controllable and uncontrollable) affect the occurrence of
defects on the printed circuit boards. To accomplish this he can design an experiment that will enable
him to estimate the magnitude and direction of the factor effects. Sometimes we call an experiment
such as this a screening experiment. The information from this characterization study or screening
experiment can be used to identify the critical factors, to determine the direction of adjustment for
these factors to reduce the number of defects, and to assist in determining which factors should be care-
fully controlled during manufacturing to prevent high defect levels and erratic process performance.

Example 13-2
An Optimization Experiment In a characterization experiment, we are interested in determining
which factors affect the response. A logical next step is to determine the region in the important fac-
tors that leads to an optimum response. For example, if the response is yield, we would look for a
region of maximum yield, and if the response is cost, we would look for a region of minimum cost.
As an illustration, suppose that the yield of a chemical process is influenced by the operating
temperature and the reaction time. We are currently operating the process at 155°F and 1.7 hours of
reaction time and experiencing yields around 75%. Figure 13-2 shows a view of the time-tempera-
ture space from above. In this graph we have connected points of constant yield with lines. These lines
are called contours, and we have shown the contours at 60%, 70%, 80%, 90%, and 95% yield. To
locate the optimum, it is necessary to design an experiment that varies reaction time and temperature
together. This design is illustrated in Fig. 13-2. The responses observed at the four points in the exper-
iment (145°F, 1.2 hr), (145°F, 2.2 hr), (165°F, 1.2 hr), and (165°F, 2.2 hr) indicate that we should
move in the general direction of increased temperature and lower reaction time to increase yield. A
few additional runs could be performed in this direction to locate the region of maximum yield.

These examples illustrate only two potential applications of experimental design meth-
ods. In the engineering environment, experimental design applications are numerous. Some
potential areas of use are as follows:
1. Process troubleshooting
2. Process development and optimization
3. Evaluation of material alternatives
4. Reliability and life testing
5. Performance testing


Figure 13-2 Contour plot of yield as a function of reaction time and reaction temperature, illustrat-
ing an optimization experiment.

6. Product design configuration


7. Component tolerance determination
Experimental design methods allow these problems to be solved efficiently during the
early stages of the product cycle. This has the potential to dramatically lower overall prod-
uct cost and reduce development lead time.

13-2 FACTORIAL EXPERIMENTS


When there are several factors of interest in an experiment, a factorial design should be
used. These are designs in which factors are varied together. Specifically, by a factorial
experiment we mean that in each complete trial or replicate of the experiment all possible
combinations of the levels of the factors are investigated. Thus, if there are two factors, A
and B, with a levels of factor A and b levels of factor B, then each replicate contains all ab
treatment combinations.
The effect of a factor is defined as the change in response produced by a change in the
level of the factor. This is called a main effect because it refers to the primary factors in the
study. For example, consider the data in Table 13-1. The main effect of factor A is the dif-
ference between the average response at the first level of A and the average response at the
second level of A, or
A = (30 + 40)/2 − (10 + 20)/2 = 20.

Table 13-1 A Factorial Experiment with Two Factors

                Factor B
Factor A      B₁       B₂
A₁            10       20
A₂            30       40

That is, changing factor A from level 1 to level 2 causes an average response increase of 20
units. Similarly, the main effect of B is

B = (20 + 40)/2 − (10 + 30)/2 = 10.
In some experiments, the difference in response between the levels of one factor is not
the same at all levels of the other factors. When this occurs, there is an interaction between
the factors. For example, consider the data in Table 13-2. At the first level of factor B, the
A effect is
A = 30 − 10 = 20,
and at the second level of factor B, the A effect is

A = 0 − 20 = −20.

Since the effect of A depends on the level chosen for factor B, there is interaction between
A and B.
When an interaction is large, the corresponding main effects have little meaning. For
example, by using the data in Table 13-2, we find the main effect of A to be

A = (30 + 0)/2 − (10 + 20)/2 = 0,

and we would be tempted to conclude that there is no A effect. However, when we exam-
ined the effects of A at different levels of factor B, we saw that this was not the case. The
effect of factor A depends on the levels of factor B. Thus, knowledge of the AB interaction
is more useful than knowledge of the main effects. A significant interaction can mask the
significance of main effects.
The concept of interaction can be illustrated graphically. Figure 13-3 plots the data in
Table 13-1 against the levels of A for both levels of B. Note that the B₁ and B₂ lines are
roughly parallel, indicating that factors A and B do not interact significantly. Figure 13-4
plots the data in Table 13-2. In this graph, the B₁ and B₂ lines are not parallel, indicating the
interaction between factors A and B. Such graphical displays are often useful in presenting
the results of experiments.
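These effect calculations are simple enough to script. A small Python sketch (illustrative only) that reproduces the main effects and the interaction, computed from the diagonal averages, for the data in Tables 13-1 and 13-2:

def effects(r):
    # r[i][j] is the response at level i+1 of factor A and level j+1 of factor B
    A = (r[1][0] + r[1][1]) / 2 - (r[0][0] + r[0][1]) / 2
    B = (r[0][1] + r[1][1]) / 2 - (r[0][0] + r[1][0]) / 2
    AB = (r[0][0] + r[1][1]) / 2 - (r[1][0] + r[0][1]) / 2
    return A, B, AB

print(effects([[10, 20], [30, 40]]))   # Table 13-1: (20.0, 10.0, 0.0), no interaction
print(effects([[10, 20], [30, 0]]))    # Table 13-2: (0.0, -10.0, -20.0), interaction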
An alternative to the factorial design that is (unfortunately) used in practice is to change
the factors one at a time rather than to vary them simultaneously. To illustrate this one-factor-
at-a-time procedure, consider the optimization experiment described in Example 13-2. The

Table 13-2 A Factorial Experiment with Interaction

                Factor B
Factor A      B₁       B₂
A₁            10       20
A₂            30        0

Figure 13-3 Factorial experiment, no interaction.

Figure 13-4 Factorial experiment, with interaction.

engineer is interested in finding the values of temperature and reaction time that maximize
yield. Suppose that we fix temperature at 155°F (the current operating level) and perform
five runs at different levels of time, say 0.5 hour, 1.0 hour, 1.5 hours, 2.0 hours, and 2.5
hours. The results of this series of runs are shown in Fig. 13-5. This figure indicates that
maximum yield is achieved at about 1.7 hours of reaction time. To optimize temperature, the
engineer fixes time at 1.7 hours (the apparent optimum) and performs five runs at different
temperatures, say 140°F, 150°F, 160°F, 170°F, and 180°F. The results of this set of runs are
plotted in Fig. 13-6. Maximum yield occurs at about 155°F. Therefore, we would conclude
that running the process at 155°F and 1.7 hours is the best set of operating conditions, result-
ing in yields around 75%.

Figure 13-5 Yield versus reaction time with temperature constant at 155°F.

Figure 13-6 Yield versus temperature with reaction time constant at 1.7 hr.

Figure 13-7 displays the contour plot of yield as a function of temperature and time
with the one-factor-at-a-time experiment shown on the contours. Clearly the one-factor-at-
a-time design has failed dramatically here, as the true optimum is at least 20 yield points
higher and occurs at much lower reaction times and higher temperatures. The failure to dis-
cover the shorter reaction times is particularly important as this could have significant
impact on production volume or capacity, production planning, manufacturing cost, and
total productivity.
The one-factor-at-a-time method has failed here because it fails to detect the interaction
between temperature and time. Factorial experiments are the only way to detect interactions.

Figure 13-7 Optimization experiment using the one-factor-at-a-time method.

Furthermore, the one-factor-at-a-time method is inefficient; it will require more
experimentation than a factorial, and as we have just seen, there is no assurance that it will
produce the correct results. The experiment shown in Fig. 13-2 that produced the informa-
tion pointing to the region of the optimum is a simple example of a factorial experiment.

13-3 TWO-FACTOR FACTORIAL EXPERIMENTS


The simplest type of factorial experiment involves only two factors, say A and B. There are
a levels of factor A and b levels of factor B. The two-factor factorial is shown in Table 13-3.
Note that there are n replicates of the experiment and that each replicate contains all ab
treatment combinations. The observation in the ijth cell in the kth replicate is denoted y_ijk.
In collecting the data, the abn observations would be run in random order. Thus, like the
single-factor experiment studied in Chapter 12, the two-factor factorial is a completely ran-
domized design.
The observations may be described by the linear statistical model

y_ijk = μ + τ_i + β_j + (τβ)_ij + ε_ijk,   i = 1, 2, ..., a;  j = 1, 2, ..., b;  k = 1, 2, ..., n,   (13-1)

where μ is the overall mean effect, τ_i is the effect of the ith level of factor A, β_j is the effect
of the jth level of factor B, (τβ)_ij is the effect of the interaction between A and B, and ε_ijk is
a NID(0, σ²) (normal and independently distributed) random error component. We are
interested in testing the hypotheses of no significant factor A effect, no significant factor B
effect, and no significant AB interaction. As with the single-factor experiments of Chapter
12, the analysis of variance will be used to test these hypotheses. Since there are two fac-
tors under study, the procedure used is called the two-way analysis of variance.

13-3.1 Statistical Analysis of the Fixed-Effects Model


Suppose that factors A and B are fixed. That is, the a levels of factor A and the b levels of
factor B are specifically chosen by the experimenter, and inferences are confined to these
levels only. In this model, it is customary to define the effects τ_i, β_j, and (τβ)_ij as deviations
from the mean, so that Σᵢ₌₁ᵃ τ_i = 0, Σⱼ₌₁ᵇ β_j = 0, Σᵢ₌₁ᵃ (τβ)_ij = 0, and Σⱼ₌₁ᵇ (τβ)_ij = 0.
Let y_i.. denote the total of the observations under the ith level of factor A, let y_.j. denote
the total of the observations under the jth level of factor B, let y_ij. denote the total of the
observations in the ijth cell of Table 13-3, and let y... denote the grand total of all the

Table 13-3 Data Arrangement for a Two-Factor Factorial Design

                                         Factor B
Factor A          1                          2                  ⋯           b
1        y_111, y_112, ..., y_11n   y_121, y_122, ..., y_12n    ⋯   y_1b1, y_1b2, ..., y_1bn
2        y_211, y_212, ..., y_21n   y_221, y_222, ..., y_22n    ⋯   y_2b1, y_2b2, ..., y_2bn
⋯
a        y_a11, y_a12, ..., y_a1n   y_a21, y_a22, ..., y_a2n    ⋯   y_ab1, y_ab2, ..., y_abn

observations. Define ȳ_i.., ȳ_.j., ȳ_ij., and ȳ... as the corresponding row, column, cell, and
grand averages. That is,

ȳ_i.. = y_i../(bn),    i = 1, 2, ..., a,
ȳ_.j. = y_.j./(an),    j = 1, 2, ..., b,
ȳ_ij. = y_ij./n,       i = 1, 2, ..., a,  j = 1, 2, ..., b,      (13-2)
ȳ... = y.../(abn).

The total corrected sum of squares may then be partitioned as

Σᵢ₌₁ᵃ Σⱼ₌₁ᵇ Σₖ₌₁ⁿ (y_ijk − ȳ...)² = bn Σᵢ₌₁ᵃ (ȳ_i.. − ȳ...)² + an Σⱼ₌₁ᵇ (ȳ_.j. − ȳ...)²
    + n Σᵢ₌₁ᵃ Σⱼ₌₁ᵇ (ȳ_ij. − ȳ_i.. − ȳ_.j. + ȳ...)² + Σᵢ₌₁ᵃ Σⱼ₌₁ᵇ Σₖ₌₁ⁿ (y_ijk − ȳ_ij.)².      (13-3)
Thus, the total sum of squares is partitioned into a sum of squares due to “rows,” or factor
A (SS_A), a sum of squares due to “columns,” or factor B (SS_B), a sum of squares due to the
interaction between A and B (SS_AB), and a sum of squares due to error (SS_E). Notice that
there must be at least two replicates to obtain a nonzero error sum of squares.
The sum of squares identity in equation 13-3 may be written symbolically as

SS_T = SS_A + SS_B + SS_AB + SS_E.      (13-4)

There are abn − 1 total degrees of freedom. The main effects A and B have a − 1 and
b − 1 degrees of freedom, while the interaction effect AB has (a − 1)(b − 1) degrees of free-
dom. Within each of the ab cells in Table 13-3, there are n − 1 degrees of freedom between
the n replicates, and observations in the same cell can differ only due to random error.
Therefore, there are ab(n − 1) degrees of freedom for error. The ratio of each sum of squares
on the right-hand side of equation 13-4 to its degrees of freedom is a mean square.
Assuming that factors A and B are fixed, the expected values of the mean squares are

E(MS_A) = E[SS_A/(a − 1)] = σ² + bn Σᵢ₌₁ᵃ τᵢ²/(a − 1),

E(MS_B) = E[SS_B/(b − 1)] = σ² + an Σⱼ₌₁ᵇ βⱼ²/(b − 1),

E(MS_AB) = E[SS_AB/((a − 1)(b − 1))] = σ² + n Σᵢ₌₁ᵃ Σⱼ₌₁ᵇ (τβ)ᵢⱼ²/[(a − 1)(b − 1)],

and

E(MS_E) = E[SS_E/(ab(n − 1))] = σ².

Therefore, to test H₀: τᵢ = 0 (no row factor effects), H₀: βⱼ = 0 (no column factor effects), and
H₀: (τβ)ᵢⱼ = 0 (no interaction effects), we would divide the corresponding mean square by
the mean square error. Each of these ratios will follow an F distribution, with numerator
degrees of freedom equal to the number of degrees of freedom for the numerator mean
square and ab(n − 1) denominator degrees of freedom, and the critical region will be located
in the upper tail. The test procedure is arranged in an analysis-of-variance table, such as is
shown in Table 13-4.
Computational formulas for the sums of squares in equation 13-4 are obtained easily.
The total sum of squares is computed from

SS_T = Σᵢ₌₁ᵃ Σⱼ₌₁ᵇ Σₖ₌₁ⁿ y²ᵢⱼₖ − y...²/(abn).      (13-5)

The sums of squares for main effects are

SS_A = Σᵢ₌₁ᵃ y_i..²/(bn) − y...²/(abn)      (13-6)

and

SS_B = Σⱼ₌₁ᵇ y_.j.²/(an) − y...²/(abn).      (13-7)

We usually calculate SS_AB in two steps. First, we compute the sum of squares between
the ab cell totals, called the sum of squares due to “subtotals”:

SS_subtotals = Σᵢ₌₁ᵃ Σⱼ₌₁ᵇ y_ij.²/n − y...²/(abn).

Table 13-4 The Analysis-of-Variance Table for the Two-Way-Classification Fixed-Effects Model

Source of       Sum of      Degrees of        Mean
Variation       Squares     Freedom           Square                            F₀
A treatments    SS_A        a − 1             MS_A = SS_A/(a − 1)               MS_A/MS_E
B treatments    SS_B        b − 1             MS_B = SS_B/(b − 1)               MS_B/MS_E
Interaction     SS_AB       (a − 1)(b − 1)    MS_AB = SS_AB/[(a − 1)(b − 1)]    MS_AB/MS_E
Error           SS_E        ab(n − 1)         MS_E = SS_E/[ab(n − 1)]
Total           SS_T        abn − 1

This sum of squares also contains SS_A and SS_B. Therefore, the second step is to compute
SS_AB as

SS_AB = SS_subtotals − SS_A − SS_B.      (13-8)

The error sum of squares is found by subtraction as either

SS_E = SS_T − SS_A − SS_B − SS_AB      (13-9a)

or

SS_E = SS_T − SS_subtotals.      (13-9b)

Example 13-3
Aircraft primer paints are applied to aluminum surfaces by two methods: dipping and spraying. The
purpose of the primer is to improve paint adhesion. Some parts can be primed using either applica-
tion method and engineering is interested in learning whether three different primers differ in their
adhesion properties. A factorial experiment is performed to investigate the effect of paint primer type
and application method on paint adhesion. Three specimens are painted with each primer using each
application method, a finish paint applied, and the adhesion force measured. The data from the
experiment are shown in Table 13-5. The cell totals y_ij. are shown in parentheses in the table. The sums
of squares required to perform the analysis of variance are computed as follows:

SS_T = Σᵢ₌₁ᵃ Σⱼ₌₁ᵇ Σₖ₌₁ⁿ y²ᵢⱼₖ − y...²/(abn)
     = (4.0)² + (4.5)² + ⋯ + (5.0)² − (89.8)²/18 = 10.72,

SS_types = Σᵢ₌₁ᵃ y_i..²/(bn) − y...²/(abn)
     = [(28.7)² + (34.1)² + (27.0)²]/6 − (89.8)²/18 = 4.58,

SS_methods = Σⱼ₌₁ᵇ y_.j.²/(an) − y...²/(abn)
     = [(40.2)² + (49.6)²]/9 − (89.8)²/18 = 4.91,

SS_interaction = Σᵢ₌₁ᵃ Σⱼ₌₁ᵇ y_ij.²/n − y...²/(abn) − SS_types − SS_methods
     = [(12.8)² + (15.9)² + ⋯ + (15.5)²]/3 − (89.8)²/18 − 4.58 − 4.91 = 0.24,

and

SS_E = SS_T − SS_types − SS_methods − SS_interaction
     = 10.72 − 4.58 − 4.91 − 0.24 = 0.99.

The analysis of variance is summarized in Table 13-6. Since F_0.05,2,12 = 3.89 and F_0.05,1,12 = 4.75, we
conclude that the main effects of primer type and application method affect adhesion force. Further-
more, since 1.47 < F_0.05,2,12, there is no indication of interaction between these factors.

Table 13-5 Adhesion Force Data for Example 13-3

                           Application Method
Primer Type       Dipping                 Spraying               y_i..
1           4.0, 4.5, 4.3  (12.8)   5.4, 4.9, 5.6  (15.9)        28.7
2           5.6, 4.9, 5.4  (15.9)   5.8, 6.1, 6.3  (18.2)        34.1
3           3.8, 3.7, 4.0  (11.5)   5.5, 5.0, 5.0  (15.5)        27.0
y_.j.            40.2                    49.6               89.8 = y...

Table 13-6 Analysis of Variance for Example 13-3

Source of Variation     Sum of Squares   Degrees of Freedom   Mean Square      F₀
Primer types                 4.581               2               2.291       27.86
Application methods          4.909               1               4.909       59.70
Interaction                  0.241               2               0.121        1.47
Error                        0.987              12               0.082
Total                       10.718              17

A graph of the cell adhesion force averages ȳ_ij. versus the levels of primer type for each application
method is shown in Fig. 13-8. The absence of interaction is evident from the parallelism of the two lines.
Furthermore, since a large response indicates greater adhesion force, we conclude that spraying is a
superior application method and that primer type 2 is most effective.
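The hand calculations in this example are easily verified by machine. A minimal sketch with numpy (illustrative, not part of the original analysis) that applies the computing formulas in equations 13-5 through 13-9 to the Table 13-5 data; the array is indexed as primer × method × replicate.

import numpy as np

y = np.array([[[4.0, 4.5, 4.3], [5.4, 4.9, 5.6]],
              [[5.6, 4.9, 5.4], [5.8, 6.1, 6.3]],
              [[3.8, 3.7, 4.0], [5.5, 5.0, 5.0]]])
a, b, n = y.shape
corr = y.sum() ** 2 / (a * b * n)                        # correction term (89.8)^2/18
SS_T = (y ** 2).sum() - corr                             # 10.72
SS_A = (y.sum(axis=(1, 2)) ** 2).sum() / (b * n) - corr  # primer types, 4.58
SS_B = (y.sum(axis=(0, 2)) ** 2).sum() / (a * n) - corr  # application methods, 4.91
SS_sub = (y.sum(axis=2) ** 2).sum() / n - corr           # cell "subtotals"
SS_AB = SS_sub - SS_A - SS_B                             # interaction, 0.24
SS_E = SS_T - SS_sub                                     # error, 0.99
MS_E = SS_E / (a * b * (n - 1))
print(round(SS_A / (a - 1) / MS_E, 2),                   # F0 for types, 27.86
      round(SS_B / (b - 1) / MS_E, 2),                   # F0 for methods, 59.70
      round(SS_AB / ((a - 1) * (b - 1)) / MS_E, 2))      # F0 for interaction, 1.47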

Tests on Individual Means When both factors are fixed, comparisons between the indi-
vidual means of either factor may be made using Tukey’s test. When there is no interaction,
these comparisons may be made using either the row averages ȳ_i.. or the column averages
ȳ_.j.. However, when interaction is significant, comparisons between the means of one factor
(say A) may be obscured by the AB interaction. In this case, we may apply Tukey’s test to
the means of factor A, with factor B set at a particular level.

Figure 13-8 Graph of average adhesion force versus primer type for Example 13-3.

13-3.2 Model Adequacy Checking


Just as in the single-factor experiments discussed in Chapter 12, the residuals from a facto-
rial experiment play an important role in assessing model adequacy. The residuals from a
two-factor factorial experiment are

e_ijk = y_ijk − ȳ_ij..

That is, the residuals are just the difference between the observations and the corresponding
cell averages.
Table 13-7 presents the residuals for the aircraft primer paint data in Example 13-3.
The normal probability plot of these residuals is shown in Fig. 13-9. This plot has tails that
do not fall exactly along a straight line passing through the center of the plot, indicating
some potential problems with the normality assumption, but the deviation from normality
does not appear severe. Figures 13-10 and 13-11 plot the residuals versus the levels of
primer types and application methods, respectively. There is some indication that primer
type 3 results in slightly lower variability in adhesion force than the other two primers. The
graph of residuals versus fitted values ŷ_ijk = ȳ_ij. in Fig. 13-12 reveals no unusual or diag-
nostic pattern.

13-3.3 One Observation per Cell


In some cases involving a two-factor factorial experiment, we may have only one replicate,
that is, only one observation per cell. In this situation there are exactly as many parameters

Table 13-7 Residuals for the Aircraft Primer Paint Experiment in Example 13-3

Application Method
Primer Type Dipping Spraying

1 -0.27, 0.23, 0.03 0.10, -0.40, 0.30


2 0.30, -0.40, 0.10 -0.27, 0.03, 0.23
3 -0.03, -0.13, 0.17 0.33, -0.17, -0.17

Figure 13-9 Normal probability plot of the residuals from Example 13-3.

Figure 13-10 Plot of residuals versus primer type.

Figure 13-11 Plot of residuals versus application method.

Figure 13-12 Plot of residuals versus predicted values ŷ_ijk = ȳ_ij..

in the analysis-of-variance model as there are observations, and the error degrees of free-
dom is zero. Thus, it is not possible to test hypotheses about the main effects and interac-
tions unless some additional assumptions are made. The usual assumption is to ignore the
interaction effect and use the interaction mean square as an error mean square. Thus the
analysis is equivalent to the analysis used in the randomized block design. This
no-interaction assumption can be dangerous, and the experimenter should carefully exam-
ine the data and the residuals for indications that there really is interaction present. For more
details, see Montgomery (2001).

13-3.4 The Random-Effects Model

So far we have considered the case where A and B are fixed factors. We now consider the
situation in which the levels of both factors are selected at random from larger populations
of factor levels, and we wish to extend our conclusions to the sampled population of factor
levels. The observations are represented by the model

y_ijk = μ + τ_i + β_j + (τβ)_ij + ε_ijk,   i = 1, 2, ..., a;  j = 1, 2, ..., b;  k = 1, 2, ..., n,   (13-10)

where the parameters τ_i, β_j, (τβ)_ij, and ε_ijk are random variables. Specifically, we assume
that τ_i is NID(0, σ_τ²), β_j is NID(0, σ_β²), (τβ)_ij is NID(0, σ_τβ²), and ε_ijk is NID(0, σ²). The vari-
ance of any observation is

V(y_ijk) = σ_τ² + σ_β² + σ_τβ² + σ²,

and σ_τ², σ_β², σ_τβ², and σ² are called variance components. The hypotheses that we are inter-
ested in testing are H₀: σ_τ² = 0, H₀: σ_β² = 0, and H₀: σ_τβ² = 0. Notice the similarity to the one-
way classification random-effects model.
The basic analysis of variance remains unchanged; that is, SS_A, SS_B, SS_AB, SS_T, and SS_E
are all calculated as in the fixed-effects case. To construct the test statistics, we must exam-
ine the expected mean squares. They are

E(MS_A) = σ² + nσ_τβ² + bnσ_τ²,

E(MS_B) = σ² + nσ_τβ² + anσ_β²,

E(MS_AB) = σ² + nσ_τβ²,      (13-11)

and

E(MS_E) = σ².

Note from the expected mean squares that the appropriate statistic for testing H₀:
σ_τβ² = 0 is

F₀ = MS_AB/MS_E,      (13-12)

since under H₀ both the numerator and denominator of F₀ have expectation σ², and only if
H₀ is false is E(MS_AB) greater than E(MS_E). The ratio F₀ is distributed as F_(a−1)(b−1), ab(n−1).
Similarly, for testing H₀: σ_τ² = 0, we would use

F₀ = MS_A/MS_AB,      (13-13)

which is distributed as F_a−1, (a−1)(b−1), and for testing H₀: σ_β² = 0, the statistic is

F₀ = MS_B/MS_AB,      (13-14)

which is distributed as F_b−1, (a−1)(b−1). These are all upper-tail, one-tail tests. Notice that these
test statistics are not the same as those used if both factors A and B are fixed. The expected
mean squares are always used as a guide to test statistic construction.

The variance components may be estimated by equating the observed mean squares to
their expected values and solving for the variance components. This yields

σ̂² = MS_E,

σ̂_τβ² = (MS_AB − MS_E)/n,

σ̂_β² = (MS_B − MS_AB)/(an),      (13-15)

σ̂_τ² = (MS_A − MS_AB)/(bn).

Example 13-4
Suppose that in Example 13-3, a large number of primers and several application methods could be
used. Three primers, say 1, 2, and 3, were selected at random, as were the two application methods.
The analysis of variance assuming the random-effects model is shown in Table 13-8.
Notice that the first four columns in the analysis of variance table are exactly as in Example 13-3.
Now, however, the F ratios are computed according to equations 13-12 through 13-14. Since
F_0.05,2,12 = 3.89, we conclude that interaction is not significant. Also, since F_0.05,2,2 = 19.0 and F_0.05,1,2 =
18.5, we conclude that both primer types and application methods significantly affect adhesion force,
although primer type is just barely significant at α = 0.05. The variance components may be estimated
using equation 13-15 as follows:

σ̂² = 0.08,

σ̂_τβ² = (0.12 − 0.08)/3 = 0.0133,

σ̂_τ² = (2.29 − 0.12)/6 = 0.36,

σ̂_β² = (4.91 − 0.12)/9 = 0.53.

Clearly, the two largest variance components are for primer types (σ̂_τ² = 0.36) and application meth-
ods (σ̂_β² = 0.53).
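A quick sketch in Python of the estimates from equation 13-15, using the mean squares in Table 13-8 (the variable names are illustrative):

a, b, n = 3, 2, 3                           # primers, methods, replicates
MS_A, MS_B, MS_AB, MS_E = 2.29, 4.91, 0.12, 0.08
print(MS_E)                                 # sigma^2 estimate, 0.08
print(round((MS_AB - MS_E) / n, 4))         # interaction component, 0.0133
print(round((MS_B - MS_AB) / (a * n), 2))   # application methods, 0.53
print(round((MS_A - MS_AB) / (b * n), 2))   # primer types, 0.36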

13-3.5 The Mixed Model

Now suppose that one of the factors, A, is fixed and the other, B, is random. This is called
the mixed model analysis of variance. The linear model is

y_ijk = μ + τ_i + β_j + (τβ)_ij + ε_ijk,   i = 1, 2, ..., a;  j = 1, 2, ..., b;  k = 1, 2, ..., n.   (13-16)

Table 13-8 Analysis of Variance for Example 13-4

Source of Variation     Sum of Squares   Degrees of Freedom   Mean Square      F₀
Primer types                 4.58                2                2.29       19.08
Application methods          4.91                1                4.91       40.92
Interaction                  0.24                2                0.12        1.5
Error                        0.99               12                0.08
Total                       10.72               17

In this model, τ_i is a fixed effect defined such that Σᵢ₌₁ᵃ τ_i = 0, β_j is a random effect, the inter-
action term (τβ)_ij is a random effect, and ε_ijk is a NID(0, σ²) random error. It is also cus-
tomary to assume that β_j is NID(0, σ_β²) and that the interaction elements (τβ)_ij are normal
random variables with mean zero and variance [(a − 1)/a]σ_τβ². The interaction elements are
not all independent.
The expected mean squares in this case are

E(MS_A) = σ² + nσ_τβ² + bn Σᵢ₌₁ᵃ τᵢ²/(a − 1),

E(MS_B) = σ² + anσ_β²,      (13-17)

E(MS_AB) = σ² + nσ_τβ²,

and

E(MS_E) = σ².

Therefore, the appropriate test statistic for testing H₀: τᵢ = 0 is

F₀ = MS_A/MS_AB,      (13-18)

which is distributed as F_a−1, (a−1)(b−1). For testing H₀: σ_β² = 0, the test statistic is

F₀ = MS_B/MS_E,      (13-19)

which is distributed as F_b−1, ab(n−1). Finally, for testing H₀: σ_τβ² = 0, we would use

F₀ = MS_AB/MS_E,      (13-20)

which is distributed as F_(a−1)(b−1), ab(n−1).
The variance components σ_β², σ_τβ², and σ² may be estimated by eliminating the first
equation from equation 13-17, leaving three equations in three unknowns, the solutions of
which are

σ̂_β² = (MS_B − MS_E)/(an),

σ̂_τβ² = (MS_AB − MS_E)/n,

and

σ̂² = MS_E.      (13-21)

This general approach can be used to estimate the variance components in any mixed
model. After eliminating the mean squares containing fixed factors, there will always be a
set of equations remaining that can be solved for the variance components. Table 13-9
summarizes the analysis of variance for the two-factor mixed model.

Table 13-9 Analysis of Variance for the Two-Factor Mixed Model

Source of      Sum of    Degrees of       Mean      Expected
Variation      Squares   Freedom          Square    Mean Square                         F₀
Rows (A)       SS_A      a − 1            MS_A      σ² + nσ_τβ² + bn Στᵢ²/(a − 1)       MS_A/MS_AB
Columns (B)    SS_B      b − 1            MS_B      σ² + anσ_β²                         MS_B/MS_E
Interaction    SS_AB     (a − 1)(b − 1)   MS_AB     σ² + nσ_τβ²                         MS_AB/MS_E
Error          SS_E      ab(n − 1)        MS_E      σ²
Total          SS_T      abn − 1

13-4 GENERAL FACTORIAL EXPERIMENTS


Many experiments involve more than two factors. In this section we introduce the case
where there are a levels of factor A, b levels of factor B, c levels of factor C, and so on,
arranged in a factorial experiment. In general, there will be abc ⋯ n total observations if
there are n replicates of the complete experiment.
For example, consider the three-factor experiment with underlying model

y_ijkl = μ + τ_i + β_j + γ_k + (τβ)_ij + (τγ)_ik + (βγ)_jk + (τβγ)_ijk + ε_ijkl,      (13-22)
    i = 1, 2, ..., a;   j = 1, 2, ..., b;   k = 1, 2, ..., c;   l = 1, 2, ..., n.

Assuming that A, B, and C are fixed, the analysis of variance is shown in Table 13-10. Note
that there must be at least two replicates (n ≥ 2) to compute an error sum of squares. The
F-tests on main effects and interactions follow directly from the expected mean squares.
Computing formulas for the sums of squares in Table 13-10 are easily obtained. The
total sum of squares is, using the obvious “dot” notation,

SS_T = Σᵢ₌₁ᵃ Σⱼ₌₁ᵇ Σₖ₌₁ᶜ Σₗ₌₁ⁿ y²ᵢⱼₖₗ − y....²/(abcn).      (13-23)

The sums of squares for the main effects are computed from the totals for factors A (y_i...),
B (y_.j..), and C (y_..k.) as follows:

SS_A = Σᵢ₌₁ᵃ y_i...²/(bcn) − y....²/(abcn),      (13-24)

SS_B = Σⱼ₌₁ᵇ y_.j..²/(acn) − y....²/(abcn),      (13-25)

SS_C = Σₖ₌₁ᶜ y_..k.²/(abn) − y....²/(abcn).      (13-26)

To compute the two-factor interaction sums of squares, the totals for the A × B, A × C, and
B × C cells are needed. It may be helpful to collapse the original data table into three two-
way tables in order to compute these totals. The sums of squares are

SS_AB = Σᵢ₌₁ᵃ Σⱼ₌₁ᵇ y_ij..²/(cn) − y....²/(abcn) − SS_A − SS_B
      = SS_subtotals(AB) − SS_A − SS_B,      (13-27)

SS_AC = Σᵢ₌₁ᵃ Σₖ₌₁ᶜ y_i.k.²/(bn) − y....²/(abcn) − SS_A − SS_C
      = SS_subtotals(AC) − SS_A − SS_C,      (13-28)

and

SS_BC = Σⱼ₌₁ᵇ Σₖ₌₁ᶜ y_.jk.²/(an) − y....²/(abcn) − SS_B − SS_C
      = SS_subtotals(BC) − SS_B − SS_C.      (13-29)

The three-factor interaction sum of squares is computed from the three-way cell totals y_ijk.
as

SS_ABC = Σᵢ₌₁ᵃ Σⱼ₌₁ᵇ Σₖ₌₁ᶜ y_ijk.²/n − y....²/(abcn) − SS_A − SS_B − SS_C − SS_AB − SS_AC − SS_BC      (13-30a)
       = SS_subtotals(ABC) − SS_A − SS_B − SS_C − SS_AB − SS_AC − SS_BC.      (13-30b)

Table 13-10 The Analysis-of-Variance Table for the Three-Factor Fixed-Effects Model

Source of    Sum of     Degrees of           Mean      Expected
Variation    Squares    Freedom              Square    Mean Squares                              F₀
A            SS_A       a − 1                MS_A      σ² + bcn Στᵢ²/(a − 1)                     MS_A/MS_E
B            SS_B       b − 1                MS_B      σ² + acn Σβⱼ²/(b − 1)                     MS_B/MS_E
C            SS_C       c − 1                MS_C      σ² + abn Σγₖ²/(c − 1)                     MS_C/MS_E
AB           SS_AB      (a − 1)(b − 1)       MS_AB     σ² + cn ΣΣ(τβ)ᵢⱼ²/[(a − 1)(b − 1)]        MS_AB/MS_E
AC           SS_AC      (a − 1)(c − 1)       MS_AC     σ² + bn ΣΣ(τγ)ᵢₖ²/[(a − 1)(c − 1)]        MS_AC/MS_E
BC           SS_BC      (b − 1)(c − 1)       MS_BC     σ² + an ΣΣ(βγ)ⱼₖ²/[(b − 1)(c − 1)]        MS_BC/MS_E
ABC          SS_ABC     (a−1)(b−1)(c−1)      MS_ABC    σ² + n ΣΣΣ(τβγ)ᵢⱼₖ²/[(a−1)(b−1)(c−1)]     MS_ABC/MS_E
Error        SS_E       abc(n − 1)           MS_E      σ²
Total        SS_T       abcn − 1

The error sum of squares may be found by subtracting the sum of squares for each main
effect and interaction from the total sum of squares, or by

SS_E = SS_T − SS_subtotals(ABC).      (13-31)

Example 13-5
A mechanical engineer is studying the surface roughness of a part produced in a metal-cutting oper-
ation. Three factors, feed rate (A), depth of cut (B), and tool angle (C), are of interest. All three fac-
tors have been assigned two levels, and two replicates of a factorial design are run. The coded data are
shown in Table 13-11. The three-way cell totals y_ijk. are shown in parentheses in this table.
The sums of squares are calculated as follows, using equations 13-23 to 13-31:

SS_T = Σᵢ Σⱼ Σₖ Σₗ y²ᵢⱼₖₗ − y....²/(abcn)
     = (9)² + (7)² + ⋯ + (14)² − (177)²/16 = 92.9375,

SS_A = Σᵢ y_i...²/(bcn) − y....²/(abcn)
     = [(75)² + (102)²]/8 − (177)²/16 = 45.5625,

SS_B = Σⱼ y_.j..²/(acn) − y....²/(abcn)
     = [(82)² + (95)²]/8 − (177)²/16 = 10.5625,

SS_C = Σₖ y_..k.²/(abn) − y....²/(abcn)
     = [(85)² + (92)²]/8 − (177)²/16 = 3.0625,

SS_AB = Σᵢ Σⱼ y_ij..²/(cn) − y....²/(abcn) − SS_A − SS_B
      = [(37)² + (38)² + (45)² + (57)²]/4 − (177)²/16 − 45.5625 − 10.5625 = 7.5625,

SS_AC = Σᵢ Σₖ y_i.k.²/(bn) − y....²/(abcn) − SS_A − SS_C
      = [(36)² + (39)² + (49)² + (53)²]/4 − (177)²/16 − 45.5625 − 3.0625 = 0.0625,

SS_BC = Σⱼ Σₖ y_.jk.²/(an) − y....²/(abcn) − SS_B − SS_C
      = [(38)² + (44)² + (47)² + (48)²]/4 − (177)²/16 − 10.5625 − 3.0625 = 1.5625,

SS_ABC = Σᵢ Σⱼ Σₖ y_ijk.²/n − y....²/(abcn) − SS_A − SS_B − SS_C − SS_AB − SS_AC − SS_BC
       = [(16)² + (21)² + ⋯ + (30)²]/2 − (177)²/16 − 45.5625 − 10.5625 − 3.0625 − 7.5625 − 0.0625 − 1.5625
       = 5.0625,

and

SS_E = SS_T − SS_subtotals(ABC)
     = 92.9375 − 73.4375 = 19.5000.

The analysis of variance is summarized in Table 13-12. Feed rate has a significant effect on surface
finish (α < 0.01), as does the depth of cut (0.05 < α < 0.10). There is some indication of a mild inter-
action between these factors, as the F-test for the AB interaction is just less than the 10% critical value.

Obviously factorial experiments with three or more factors are complicated and require
many runs, particularly if some of the factors have several (more than two) levels. This
leads us to consider a class of factorial designs with all factors at two levels. These designs
are extremely easy to set up and analyze, and as we will see, it is possible to greatly reduce
the number of experimental runs through the technique of fractional replication.

Table 13-11 Coded Surface Roughness Data for Example 13-5

                               Depth of Cut (B)
                     0.025 inch                0.040 inch
Feed Rate (A)      Tool Angle (C)            Tool Angle (C)
                    15°        25°            15°        25°       y_i...
20 in./min         9, 7      11, 10          9, 11      10, 8
                   (16)       (21)           (20)       (18)         75
30 in./min        10, 12     10, 13         12, 15     16, 14
                   (22)       (23)           (27)       (30)        102

B × C totals y_.jk.  38         44             47         48      177 = y....

A × B Totals y_ij..              A × C Totals y_i.k.
A/B      0.025    0.040          A/C       15      25
20         37       38           20        36      39
30         45       57           30        49      53
y_.j..     82       95           y_..k.    85      92

Table 13-12 Analysis of Variance for Example 13-5

Source of Variation    Sum of Squares   Degrees of Freedom   Mean Square      F₀
Feed rate (A)              45.5625              1              45.5625      18.69ᵃ
Depth of cut (B)           10.5625              1              10.5625       4.33ᵇ
Tool angle (C)              3.0625              1               3.0625       1.26
AB                          7.5625              1               7.5625       3.10
AC                          0.0625              1               0.0625       0.03
BC                          1.5625              1               1.5625       0.64
ABC                         5.0625              1               5.0625       2.08
Error                      19.5000              8               2.4375
Total                      92.9375             15

ᵃ Significant at 1%.
ᵇ Significant at 10%.
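A compact numpy sketch (illustrative only) that reproduces these sums of squares from the Table 13-11 data; the array axes are feed rate, depth of cut, tool angle, and replicate.

import numpy as np

y = np.array([[[[9, 7], [11, 10]],
               [[9, 11], [10, 8]]],
              [[[10, 12], [10, 13]],
               [[12, 15], [16, 14]]]], dtype=float)
corr = y.sum() ** 2 / y.size                                   # (177)^2/16
SS_T = (y ** 2).sum() - corr                                   # 92.9375
SS_A = (y.sum(axis=(1, 2, 3)) ** 2).sum() / 8 - corr           # 45.5625
SS_B = (y.sum(axis=(0, 2, 3)) ** 2).sum() / 8 - corr           # 10.5625
SS_C = (y.sum(axis=(0, 1, 3)) ** 2).sum() / 8 - corr           # 3.0625
SS_AB = (y.sum(axis=(2, 3)) ** 2).sum() / 4 - corr - SS_A - SS_B    # 7.5625
SS_AC = (y.sum(axis=(1, 3)) ** 2).sum() / 4 - corr - SS_A - SS_C    # 0.0625
SS_BC = (y.sum(axis=(0, 3)) ** 2).sum() / 4 - corr - SS_B - SS_C    # 1.5625
sub = (y.sum(axis=3) ** 2).sum() / 2 - corr                    # SS_subtotals(ABC), 73.4375
SS_ABC = sub - SS_A - SS_B - SS_C - SS_AB - SS_AC - SS_BC      # 5.0625
SS_E = SS_T - sub                                              # 19.5
print(SS_T, SS_A, SS_B, SS_C, SS_AB, SS_AC, SS_BC, SS_ABC, SS_E)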

13-5 THE 2^k FACTORIAL DESIGN

There are certain special types of factorial designs that are very useful. One of these is a fac-
torial design with k factors, each at two levels. Because each complete replicate of the
design has 2^k runs or treatment combinations, the arrangement is called a 2^k factorial design.
These designs have a greatly simplified statistical analysis, and they also form the basis of
many other useful designs.

13-5.1 The 2² Design


The simplest type of 2^k design is the 2², that is, two factors, A and B, each at two levels. We
usually think of these levels as the “low” and “high” levels of the factor. The 2² design is
shown in Fig. 13-13. Note that the design can be represented geometrically as a square, with
the 2² = 4 runs forming the corners of the square. A special notation is used to represent the
treatment combinations. In general a treatment combination is represented by a series of
lowercase letters. If a letter is present, then the corresponding factor is run at the high level
in that treatment combination; if it is absent, the factor is run at its low level. For example,
treatment combination a indicates that factor A is at the high level and factor B is at the low
level. The treatment combination with both factors at the low level is denoted (1). This nota-
tion is used throughout the 2^k design series. For example, the treatment combination in a 2⁴
design with A and C at the high level and B and D at the low level is denoted ac.

    High (+)    b --------- ab
  B
    Low (−)    (1) --------- a
               Low (−)     High (+)
                        A

Figure 13-13 The 2² factorial design.
The effects of interest in the 2² design are the main effects A and B and the two-factor
interaction AB. Let (1), a, b, and ab also represent the totals of all n observations taken at
these design points. It is easy to estimate the effects of these factors. To estimate the main
effect of A we would average the observations on the right side of the square, where A is at
the high level, and subtract from this the average of the observations on the left side of the
square, where A is at the low level, or

A = (a + ab)/(2n) − (b + (1))/(2n)
  = [a + ab − b − (1)]/(2n).      (13-32)
Similarly, the main effect of B is found by averaging the observations on the top of the
square, where B is at the high level, and subtracting the average of the observations on the
bottom of the square, where B is at the low level:

B = (b + ab)/(2n) − (a + (1))/(2n)
  = [b + ab − a − (1)]/(2n).      (13-33)

Finally, the AB interaction is estimated by taking the difference in the diagonal averages in
Fig. 13-13, or

AB = (ab + (1))/(2n) − (a + b)/(2n)
   = [ab + (1) − a − b]/(2n).      (13-34)
The quantities in brackets in equations 13-32, 13-33, and 13-34 are called contrasts.
For example, the A contrast is

Contrast_A = a + ab − b − (1).

In these equations, the contrast coefficients are always either +1 or −1. A table of plus and
minus signs, such as Table 13-13, can be used to determine the sign of each treatment com-
bination for a particular contrast. The column headings for Table 13-13 are the main effects
A and B, the AB interaction, and I, which represents the total. The row headings are the treat-
ment combinations. Note that the signs in the AB column are the products of signs from
columns A and B. To generate a contrast from this table, multiply the signs in the appropri-
ate column of Table 13-13 by the treatment combinations listed in the rows and add.

Table 13-13 Signs for Effects in the 2² Design

Treatment                Factorial Effect
Combination         I      A      B      AB
(1)                 +      −      −      +
a                   +      +      −      −
b                   +      −      +      −
ab                  +      +      +      +
To obtain the sums of squares for A, B, and AB, we can use equation 12-18, which
expresses the relationship between a single-degree-of-freedom contrast and its sum of
squares:

SS = (Contrast)² / [n Σ(contrast coefficients)²].      (13-35)

Therefore, the sums of squares for A, B, and AB are

SS_A = [a + ab − b − (1)]²/(4n),

SS_B = [b + ab − a − (1)]²/(4n),

SS_AB = [ab + (1) − a − b]²/(4n).

The analysis of variance is completed by computing the total sum of squares SS_T (with
4n − 1 degrees of freedom) as usual, and obtaining the error sum of squares SS_E [with
4(n − 1) degrees of freedom] by subtraction.

Example 13-6
An article in the AT&T Technical Journal (March/April, 1986, Vol. 65, p. 39) describes the applica-
tion of two-level experimental designs to integrated circuit manufacturing. A basic processing step in
this industry is to grow an epitaxial layer on polished silicon wafers. The wafers are mounted on a sus-
ceptor and positioned inside a bell jar. Chemical vapors are introduced through nozzles near the top
of the jar. The susceptor is rotated and heat is applied. These conditions are maintained until the epi-
taxial layer is thick enough.
Table 13-14 presents the results of a 2² factorial design with n = 4 replicates using the factors A =
deposition time and B = arsenic flow rate. The two levels of deposition time are − = short and + = long,
and the two levels of arsenic flow rate are − = 55% and + = 59%. The response variable is epitaxial
layer thickness (μm). We may find the estimates of the effects using equations 13-32, 13-33, and
13-34 as follows:

A = [a + ab − b − (1)]/(2n)
  = (1/8)[59.299 + 59.156 − 55.686 − 56.081] = 0.836,

Table 13-14 The 2² Design for the Epitaxial Process Experiment

Treatment      Design Factors
Combination    A     B     AB     Thickness (μm)                      Total     Average
(1)            −     −     +      14.037, 14.165, 13.972, 13.907      56.081    14.021
a              +     −     −      14.821, 14.757, 14.843, 14.878      59.299    14.825
b              −     +     −      13.880, 13.860, 14.032, 13.914      55.686    13.922
ab             +     +     +      14.888, 14.921, 14.415, 14.932      59.156    14.789

B==-[b+ab-a-()] .

= al + 59.156 — 59.299 — 56.081] = -0.067,

AB = ——[ab+(1)-a-8]

Mise [59.156 + 56.081 — 59.299 — 55.686] = 0.032.


~ 2(4)
The numerical estimates of the effects indicate that the effect of deposition time is large and has a pos-
itive direction (increasing deposition time increases thickness), since changing deposition time from
low to high changes the mean epitaxial layer thickness by 0.836 yum. The effects of arsenic flow rate
(B) and the AB interaction appear small.
The magnitude of these effects may be confirmed with the analysis of variance. The sums of
squares for A, B, and AB are computed using equation 13-35:

SS = (Contrast)^2 / (n · 4).

SS_A = [a + ab - b - (1)]^2 / 16
     = [6.688]^2 / 16
     = 2.7956,

SS_B = [b + ab - a - (1)]^2 / 16
     = [-0.538]^2 / 16
     = 0.0181,

SS_AB = [ab + (1) - a - b]^2 / 16
      = [0.256]^2 / 16
      = 0.0040.

The analysis of variance is summarized in Table 13-15. This confirms our conclusions obtained by
examining the magnitude and direction of the effects; deposition time affects epitaxial layer thickness,
and from the direction of the effect estimates we know that longer deposition times lead to thicker epi-
taxial layers.

Table 13-15 Analysis of Variance for the Epitaxial Process Experiment

Source of               Sum of      Degrees of      Mean
Variation               Squares     Freedom         Square      F_0
A (deposition time)     2.7956      1               2.7956      134.50
B (arsenic flow)        0.0181      1               0.0181      0.87
AB                      0.0040      1               0.0040      0.19
Error                   0.2495      12              0.0208
Total                   3.0672      15

Residual Analysis  It is easy to obtain the residuals from a 2^k design by fitting a regression
model to the data. For the epitaxial process experiment, the regression model is

y = β_0 + β_1 x_1 + ε,


since the only active variable is deposition time, which is represented by x_1. The low and
high levels of deposition time are assigned the values x_1 = -1 and x_1 = +1, respectively. The
fitted model is

ŷ = 14.389 + (0.836/2) x_1,

where the intercept β̂_0 is the grand average of all 16 observations (ȳ) and the slope β̂_1 is one-
half the effect estimate for deposition time. The regression coefficient is one-half the effect
estimate because regression coefficients measure the effect of a unit change in x_1 on the
mean of y, and the effect estimate is based on a two-unit change (from -1 to +1).
This model can be used to obtain the predicted values at the four points in the design.
For example, consider the point with low deposition time (x_1 = -1) and low arsenic flow
rate. The predicted value is

ŷ = 14.389 + (0.836/2)(-1) = 13.971 μm,

and the residuals would be

e_1 = 14.037 - 13.971 = 0.066,
e_2 = 14.165 - 13.971 = 0.194,
e_3 = 13.972 - 13.971 = 0.001,
e_4 = 13.907 - 13.971 = -0.064.
It is easy to verify that for low deposition time (x_1 = -1) and high arsenic flow rate, ŷ =
14.389 + (0.836/2)(-1) = 13.971 μm, the remaining predicted values and residuals are

e_5 = 13.880 - 13.971 = -0.091,
e_6 = 13.860 - 13.971 = -0.111,
e_7 = 14.032 - 13.971 = 0.061,
e_8 = 13.914 - 13.971 = -0.057,

that for high deposition time (x_1 = +1) and low arsenic flow rate, ŷ = 14.389 +
(0.836/2)(+1) = 14.807 μm, they are

e_9 = 14.821 - 14.807 = 0.014,
e_10 = 14.757 - 14.807 = -0.050,
e_11 = 14.843 - 14.807 = 0.036,
e_12 = 14.878 - 14.807 = 0.071,

and that for high deposition time (x_1 = +1) and high arsenic flow rate, ŷ = 14.389 +
(0.836/2)(+1) = 14.807 μm, they are

e_13 = 14.888 - 14.807 = 0.081,
e_14 = 14.921 - 14.807 = 0.114,
e_15 = 14.415 - 14.807 = -0.392,
e_16 = 14.932 - 14.807 = 0.125.
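These calculations are easy to automate. The sketch below (our own illustration, not the
authors' code) rebuilds the fitted model ŷ = 14.389 + (0.836/2)x_1 and reproduces the
sixteen residuals from the raw thicknesses in Table 13-14:

    b0, b1 = 14.389, 0.836 / 2      # grand average and one-half the A effect

    runs = [(-1, [14.037, 14.165, 13.972, 13.907]),   # (1)
            (-1, [13.880, 13.860, 14.032, 13.914]),   # b
            (+1, [14.821, 14.757, 14.843, 14.878]),   # a
            (+1, [14.888, 14.921, 14.415, 14.932])]   # ab

    for x1, ys in runs:
        y_hat = b0 + b1 * x1                          # 13.971 or 14.807
        print([round(y - y_hat, 3) for y in ys])      # residuals at this run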
A normal probability plot of these residuals is shown in Fig. 13-14. This plot indicates
that one residual, e_15 = -0.392, is an outlier. Examining the four runs with high deposition
time and high arsenic flow rate reveals that observation y_15 = 14.415 is considerably smaller
than the other three observations at that treatment combination. This adds some additional
evidence to the tentative conclusion that observation 15 is an outlier. Another possibility is
that there are some process variables that affect the variability in epitaxial layer thickness,
and if we could discover which variables produce this effect, then it might be possible to
adjust these variables to levels that would minimize the variability in epitaxial layer thick-
ness. This would have important implications in subsequent manufacturing stages. Figures
13-15 and 13-16 are plots of residuals versus deposition time and arsenic flow rate, respec-
tively. Apart from the unusually large residual associated with y_15, there is no strong evi-
dence that either deposition time or arsenic flow rate influences the variability in epitaxial
layer thickness.
Figure 13-17 shows the estimated standard deviation of epitaxial layer thickness at all
four runs in the 2^2 design. These standard deviations were calculated using the data in
Table 13-14. Notice that the standard deviation of the four observations with A and B at the
high level is considerably larger than the standard deviations at any of the other three design

Figure 13-14 Normal probability plot of residuals for the epitaxial process experiment.

Figure 13-15 Plot of residuals versus deposition time.


Figure 13-16 Plot of residuals versus arsenic flow rate.

Figure 13-17 The estimated standard deviations of epitaxial layer thickness at the four runs in the
2^2 design: s = 0.110 at (1), s = 0.051 at a, s = 0.077 at b, and s = 0.250 at ab.

points. Most of this difference is attributable to the unusually low thickness measurement
associated with y_15. The standard deviation of the four observations with A and B at the low
levels is also somewhat larger than the standard deviations at the remaining two runs. This
could be an indication that there are other process variables not included in this experiment
that affect the variability in epitaxial layer thickness. Another experiment to study this pos-
sibility, involving other process variables, could be designed and conducted (indeed, the
original paper shows that there are two additional factors, unconsidered in this example,
that affect process variability).
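The four standard deviations in Fig. 13-17 are straightforward to verify from the data in
Table 13-14. A minimal Python check (our own, using only the standard library) is

    from statistics import stdev

    thickness = {"(1)": [14.037, 14.165, 13.972, 13.907],
                 "a":   [14.821, 14.757, 14.843, 14.878],
                 "b":   [13.880, 13.860, 14.032, 13.914],
                 "ab":  [14.888, 14.921, 14.415, 14.932]}

    for run, ys in thickness.items():
        print(run, round(stdev(ys), 3))   # 0.110, 0.051, 0.077, 0.250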

13-5.2 The 2^k Design for k ≥ 3 Factors


The methods presented in the previous section for factorial designs with k = 2 factors, each
at two levels, can be easily extended to more than two factors. For example, consider k = 3
factors, each at two levels. This design is a 2^3 factorial design, and it has eight treatment
combinations. Geometrically, the design is a cube as shown in Fig. 13-18, with the eight
runs forming the corners of the cube. This design allows three main effects to be estimated
(A, B, and C) along with three two-factor interactions (AB, AC, and BC) and a three-factor
interaction (ABC).
The main effects can be estimated easily. Remember that (1), a, b, ab, c, ac, bc, and abc
represent the total of all n replicates at each of the eight treatment combinations in the
design. Referring to the cube in Fig. 13-18, we would estimate the main effect of A by aver-
aging the four treatment combinations on the right side of the cube, where A is at the high
Figure 13-18 The 2^3 design.

level, and subtracting from that quantity the average of the four treatment combinations on
the left side of the cube, where A is at the low level. This gives

A = (1/4n)[a + ab + ac + abc - b - c - bc - (1)].   (13-36)
In a similar manner the effect of B is the average difference of the four treatment combina-
tions in the back face of the cube and the four in the front, or

B = (1/4n)[b + ab + bc + abc - a - c - ac - (1)],   (13-37)
and the effect of C is the average difference between the four treatment combinations in the
top face of the cube and the four in the bottom, or

C = (1/4n)[c + ac + bc + abc - a - b - ab - (1)].   (13-38)

Now consider the two-factor interaction AB. When C is at the low level, AB is just the
average difference in the A effect at the two levels of B, or

AB(C low) = (1/2n)[ab - b] - (1/2n)[a - (1)].

Similarly, when C is at the high level, the AB interaction is

AB(C high) = (1/2n)[abc - bc] - (1/2n)[ac - c].

The AB interaction is just the average of these two components, or

AB = (1/4n)[ab + (1) + abc + c - b - a - bc - ac].   (13-39)
Using a similar approach, we can show that the AC and BC interaction effect estimates are
as follows:

AC = (1/4n)[ac + (1) + abc + b - a - c - ab - bc],   (13-40)

BC = (1/4n)[bc + (1) + abc + a - b - c - ab - ac].   (13-41)

The ABC interaction effect is the average difference between the AB interaction at the two
levels of C. Thus

ABC = (1/4n){[abc - bc] - [ac - c] - [ab - b] + [a - (1)]}
    = (1/4n)[abc - bc - ac + c - ab + b + a - (1)].   (13-42)

The quantities in brackets in equations 13-36 through 13-42 are contrasts in the eight
treatment combinations. These contrasts can be obtained from a table of plus and minus
signs for the 2^3 design, shown in Table 13-16. Signs for the main effects (columns A, B, and
C) are obtained by associating a plus with the high level of the factor and a minus with the
low level. Once the signs for the main effects have been established, the signs for the
remaining columns are found by multiplying the appropriate preceding columns row by
row. For example, the signs in column AB are the product of the signs in columns A and B.
Table 13-16 has several interesting properties.
1. Except for the identity column I, each column has an equal number of plus and
   minus signs.
2. The sum of the products of the signs in any two columns is zero; that is, the columns
   in the table are orthogonal.
3. Multiplying any column by column I leaves the column unchanged; that is, I is an
   identity element.
4. The product of any two columns yields a column in the table; for example, A × B =
   AB and AB × ABC = A^2 B^2 C = C, since any column multiplied by itself is the iden-
   tity column.
The estimate of any main effect or interaction is determined by multiplying the treat-
ment combinations in the first column of the table by the signs in the corresponding main
effect or interaction column, adding the result to produce a contrast, and then dividing the
contrast by one-half the total number of runs in the experiment. Expressed mathematically,

Effect = Contrast / (n 2^(k-1)).   (13-43)

The sum of squares for any effect is

SS = (Contrast)^2 / (n 2^k).   (13-44)

Table 13-16 Signs for Effects in the 2^3 Design

Treatment                       Factorial Effect
Combinations    I     A     B     AB    C     AC    BC    ABC
(1)             +     -     -     +     -     +     +     -
a               +     +     -     -     -     -     +     +
b               +     -     +     -     -     +     -     +
ab              +     +     +     +     -     -     -     -
c               +     -     -     +     +     -     -     +
ac              +     +     -     -     +     +     -     -
bc              +     -     +     -     +     -     +     -
abc             +     +     +     +     +     +     +     +
Example 13-7
Consider the surface-roughness experiment described originally in Example 13-5. This is a 2^3 facto-
rial design in the factors feed rate (A), depth of cut (B), and tool angle (C), with n = 2 replicates. Table
13-17 presents the observed surface-roughness data.
The main effects may be estimated using equations 13-36 through 13-42. The effect of A is, for
example,

A = (1/4n)[a + ab + ac + abc - b - c - bc - (1)]
  = [1/(4 · 2)][22 + 27 + 23 + 30 - 20 - 21 - 18 - 16]
  = (1/8)[27] = 3.375,

and the sum of squares for A is found using equation 13-44:

SS_A = (Contrast_A)^2 / (n 2^3)
     = (27)^2 / 16 = 45.5625.

It is easy to verify that the other effects are

B = 1.625,
C = 0.875,
AB = 1.375,
AC = 0.125,
BC = -0.625,
ABC = 1.125.
From examining the magnitude of the effects, clearly feed rate (factor A) is dominant, followed by
depth of cut (B) and the AB interaction, although the interaction effect is relatively small. The analy-
sis of variance is summarized in Table 13-18, and it confirms our interpretation of the effect estimates.

Table 13-17 Surface Roughness Data for Example 13-7

Treatment         Design Factors          Surface
Combinations      A      B      C         Roughness      Totals
(1)              -1     -1     -1         9, 7           16
a                 1     -1     -1         10, 12         22
b                -1      1     -1         9, 11          20
ab                1      1     -1         12, 15         27
c                -1     -1      1         11, 10         21
ac                1     -1      1         10, 13         23
bc               -1      1      1         10, 8          18
abc               1      1      1         16, 14         30

Table 13-18 Analysis of Variance for the Surface-Finish Experiment

Source of       Sum of      Degrees of      Mean
Variation       Squares     Freedom         Square       F_0
A               45.5625     1               45.5625      18.69
B               10.5625     1               10.5625      4.33
C               3.0625      1               3.0625       1.26
AB              7.5625      1               7.5625       3.10
AC              0.0625      1               0.0625       0.03
BC              1.5625      1               1.5625       0.64
ABC             5.0625      1               5.0625       2.08
Error           19.5000     8               2.4375
Total           92.9375     15
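Equations 13-43 and 13-44, together with the sign-table construction, generalize readily to
any k. The Python sketch below (an illustration of ours, with an assumed standard-order
input convention) generates the signs of Table 13-16 on the fly and returns every effect
estimate and sum of squares:

    def factorial_effects(totals, n, k):
        """totals: treatment totals in standard order (1), a, b, ab, c, ...;
        bit j of the run index i gives the level of the jth factor."""
        results = {}
        for mask in range(1, 2 ** k):                 # each nonempty set of factors
            name = "".join("ABCDEFG"[j] for j in range(k) if mask >> j & 1)
            contrast = 0.0
            for i, y in enumerate(totals):
                # minus sign when an odd number of the effect's factors are low
                lows = sum(1 for j in range(k) if (mask >> j & 1) and not (i >> j & 1))
                contrast += -y if lows % 2 else y
            results[name] = (contrast / (n * 2 ** (k - 1)),   # equation 13-43
                             contrast ** 2 / (n * 2 ** k))    # equation 13-44
        return results

    print(factorial_effects([16, 22, 20, 27, 21, 23, 18, 30], n=2, k=3)["A"])
    # (3.375, 45.5625), matching Example 13-7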

Other Methods for Judging Significance of Effects  The analysis of variance is a formal
way to determine which effects are nonzero. There are two other methods that are useful.
In the first method, we can calculate the standard errors of the effects and compare the mag-
nitudes of the effects to their standard errors. The second method uses normal probability
plots to assess the importance of the effects.
The standard error of an effect is easy to find. If we assume that there are n replicates
at each of the 2^k runs in the design, and if y_i1, y_i2, ..., y_in are the observations at the ith run
(design point), then

S_i^2 = [1/(n - 1)] Σ_{j=1}^{n} (y_ij - ȳ_i)^2

is an estimate of the variance at the ith run, where ȳ_i = (1/n) Σ_{j=1}^{n} y_ij is the sample mean of the
n observations. The 2^k variance estimates can be pooled to give an overall variance estimate

S^2 = [1/(2^k (n - 1))] Σ_{i=1}^{2^k} Σ_{j=1}^{n} (y_ij - ȳ_i)^2,   (13-45)

where we have obviously assumed equal variances for each design point. This is also the
variance estimate given by the mean square error from the analysis of variance procedure.
Each effect estimate has variance given by

V(Effect) = V[Contrast / (n 2^(k-1))]
          = [1/(n 2^(k-1))^2] V(Contrast).

Each contrast is a linear combination of 2^k treatment totals, and each total consists of n
observations. Therefore,

V(Contrast) = n 2^k σ^2,

and the variance of an effect is

V(Effect) = n 2^k σ^2 / (n 2^(k-1))^2
          = σ^2 / (n 2^(k-2)).   (13-46)

The estimated standard error of an effect is found by replacing σ^2 with its estimate
S^2 and taking the square root of equation 13-46.
To illustrate for the surface-roughness experiment, we find that S^2 = 2.4375 and the
standard error of each estimated effect is

s.e.(Effect) = sqrt[S^2 / (n 2^(k-2))]
             = sqrt[2.4375 / (2 · 2)]
             = 0.78.
Therefore two standard deviation limits on the effect estimates are

A:    3.375 ± 1.56,
B:    1.625 ± 1.56,
C:    0.875 ± 1.56,
AB:   1.375 ± 1.56,
AC:   0.125 ± 1.56,
BC:  -0.625 ± 1.56,
ABC:  1.125 ± 1.56.
These intervals are approximate 95% confidence intervals. They indicate that the two main
effects, A and B, are important, but that the other effects are not, since the intervals for all
effects except A and B include zero.
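The pooled variance and the standard error just used can be checked with a few lines of
Python (a sketch of ours; the observation layout is an assumption, taken from Table 13-17):

    from statistics import mean

    obs = [[9, 7], [10, 12], [9, 11], [12, 15],       # (1), a, b, ab
           [11, 10], [10, 13], [10, 8], [16, 14]]     # c, ac, bc, abc
    n, k = 2, 3

    ss_within = sum(sum((y - mean(ys)) ** 2 for y in ys) for ys in obs)
    s2 = ss_within / (2 ** k * (n - 1))               # equation 13-45: 2.4375
    se = (s2 / (n * 2 ** (k - 2))) ** 0.5             # from equation 13-46: about 0.78
    print(s2, se)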
Normal probability plots can also be used to judge the significance of effects. We will
illustrate that method in the next section.

Projection of 2^k Designs  Any 2^k design will collapse or project into another two-level
factorial design in fewer variables if one or more of the original factors are dropped. Some-
times this can provide additional insight into the remaining factors. For example, consider
the surface-roughness experiment. Since factor C and all its interactions are negligible, we
could eliminate factor C from the design. The result is to collapse the cube in Fig. 13-18
into a square in the A - B plane; however, each of the four runs in the new design has four
replicates. In general, if we delete h factors so that r = k - h factors remain, the original 2^k
design with n replicates will project into a 2^r design with n 2^h replicates.

Residual Analysis  We may obtain the residuals from a 2^k design by using the method
demonstrated earlier for the 2^2 design. As an example, consider the surface-roughness
experiment. The three largest effects are A, B, and the AB interaction. The regression model
used to obtain the predicted values is

ŷ = β̂_0 + β̂_1 x_1 + β̂_2 x_2 + β̂_12 x_1 x_2,

where x_1 represents factor A, x_2 represents factor B, and x_1 x_2 represents the AB interaction.
The regression coefficients β̂_1, β̂_2, and β̂_12 are estimated by one-half the corresponding
effect estimates, and β̂_0 is the grand average. Thus

ŷ = 11.0625 + (3.375/2) x_1 + (1.625/2) x_2 + (1.375/2) x_1 x_2,

and the predicted values would be obtained by substituting the low and high levels of A and
B into this equation. To illustrate, at the treatment combination where A, B, and C are all at
the low level, the predicted value is

ŷ = 11.0625 + (3.375/2)(-1) + (1.625/2)(-1) + (1.375/2)(-1)(-1)
  = 9.25.

The observed values at this run are 9 and 7, so the residuals are 9 - 9.25 = -0.25 and
7 - 9.25 = -2.25. Residuals for the other seven runs are obtained similarly.
A normal probability plot of the residuals is shown in Fig. 13-19. Since the residuals
lie approximately along a straight line, we do not suspect any severe nonnormality in the
data. There are no indications of severe outliers. It would also be helpful to plot the resid-
uals versus the predicted values and against each of the factors A, B, and C.

Yates' Algorithm for the 2^k  Instead of using the table of plus and minus signs to obtain the
contrasts for the effect estimates and the sums of squares, a simple tabular algorithm
devised by Yates can be employed. To use Yates' algorithm, construct a table with the treat-
ment combinations and the corresponding treatment totals recorded in standard order. By
standard order, we mean that each factor is introduced one at a time by combining it with
all factor levels above it. Thus for a 2^2, the standard order is (1), a, b, ab, while for a 2^3 it is
(1), a, b, ab, c, ac, bc, abc, and for a 2^4 it is (1), a, b, ab, c, ac, bc, abc, d, ad, bd, abd, cd,
acd, bcd, abcd. Then follow this four-step procedure:

1. Label the adjacent column [1]. Compute the entries in the top half of this column by
   adding the observations in adjacent pairs. Compute the entries in the bottom half of
   this column by changing the sign of the first entry in each pair of the original obser-
   vations and adding the adjacent pairs.
2. Label the adjacent column [2]. Construct column [2] using the entries in column
   [1]. Follow the same procedure employed to generate column [1]. Continue this
   process until k columns have been constructed. Column [k] contains the contrasts
   designated in the rows.
3. Calculate the sums of squares for the effects by squaring the entries in column [k]
   and dividing by n 2^k.
4. Calculate the effect estimates by dividing the entries in column [k] by n 2^(k-1).
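A direct transcription of these four steps into Python might look as follows (our sketch; it
assumes the treatment totals are supplied in standard order):

    def yates(totals, n, k):
        col = list(totals)
        for _ in range(k):                            # build columns [1] ... [k]
            sums  = [col[i] + col[i + 1] for i in range(0, len(col), 2)]
            diffs = [col[i + 1] - col[i] for i in range(0, len(col), 2)]
            col = sums + diffs
        effects = [c / (n * 2 ** (k - 1)) for c in col]     # step 4
        ss      = [c ** 2 / (n * 2 ** k) for c in col]      # step 3
        return col, effects, ss

    contrasts, effects, ss = yates([16, 22, 20, 27, 21, 23, 18, 30], n=2, k=3)
    print(contrasts)   # [177, 27, 13, 11, 7, 1, -5, 9], as in Table 13-19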

Figure 13-19 Normal probability plot of residuals from the surface-roughness experiment.

Table 13-19 Yates' Algorithm for the Surface-Roughness Experiment

Treatment                                             Sum of Squares    Effect Estimates
Combinations   Response   [1]   [2]   [3]   Effect    [3]^2/(n 2^3)     [3]/(n 2^2)
(1)            16         38    85    177   Total     —                 —
a              22         47    92    27    A         45.5625           3.375
b              20         44    13    13    B         10.5625           1.625
ab             27         48    14    11    AB        7.5625            1.375
c              21         6     9     7     C         3.0625            0.875
ac             23         7     4     1     AC        0.0625            0.125
bc             18         2     1     -5    BC        1.5625            -0.625
abc            30         12    10    9     ABC       5.0625            1.125

Example 13-8
Consider the surface-roughness experiment in Example 13-7. This is a 2^3 design with n = 2 replicates.
The analysis of this data using Yates’ algorithm is illustrated in Table 13-19. Note that the sums of
squares computed from Yates’ algorithm agree with the results obtained in Example 13-7.

13-5.3 A Single Replicate of the 2^k Design


As the number of factors in a factorial experiment grows, the number of effects that can be
estimated grows also. For example, a 2^4 experiment has 4 main effects, 6 two-factor inter-
actions, 4 three-factor interactions, and 1 four-factor interaction, while a 2^6 experiment has
6 main effects, 15 two-factor interactions, 20 three-factor interactions, 15 four-factor
interactions, 6 five-factor interactions, and 1 six-factor interaction. In most situations the
sparsity of effects principle applies; that is, the system is usually dominated by the main
effects and low-order interactions. Three-factor and higher interactions are usually negligi-
ble. Therefore, when the number of factors is moderately large, say k ≥ 4 or 5, a common
practice is to run only a single replicate of the 2^k design and then pool or combine the
higher-order interactions as an estimate of error.

Example 13-9
An article in Solid State Technology ("Orthogonal Design for Process Optimization and its Application
in Plasma Etching," May 1987, p. 127) describes the application of factorial designs in developing a
nitride etch process on a single-wafer plasma etcher. The process uses C2F6 as the reactant gas. It is pos-
sible to vary the gas flow, the power applied to the cathode, the pressure in the reactor chamber, and the
spacing between the anode and the cathode (gap). Several response variables would usually be of inter-
est in this process, but in this example we will concentrate on etch rate for silicon nitride.
We will use a single replicate of a 2^4 design to investigate this process. Since it is unlikely that
the three-factor and four-factor interactions are significant, we will tentatively plan to combine them
as an estimate of error. The factor levels used in the design are shown here:

                              Design Factor
            A           B             C              D
            Gap         Pressure      C2F6 Flow      Power
Level       (cm)        (mTorr)       (SCCM)         (W)
Low (-)     0.80        450           125            275
High (+)    1.20        550           200            325

Table 13-20 presents the data from the 16 runs of the 2^4 design. Table 13-21 is the table of plus
and minus signs for the 2^4 design. The signs in the columns of this table can be used to estimate the
factor effects. To illustrate, the estimate of factor A is

A = (1/8)[a + ab + ac + abc + ad + abd + acd + abcd - (1) - b - c - d - bc - bd - cd - bcd]
  = (1/8)[669 + 650 + 642 + 635 + 749 + 868 + 860 + 729 - 550 - 604 - 633
    - 601 - 1037 - 1052 - 1075 - 1063]
  = -101.625.

Thus the effect of increasing the gap between the anode and the cathode from 0.80 cm to 1.20 cm is
to decrease the etch rate by 101.625 Å/min.
It is easy to verify that the complete set of effect estimates is

A = -101.625,       D = 306.125,
B = -1.625,         AD = -153.625,
AB = -7.875,        BD = -0.625,
C = 7.375,          ABD = 4.125,
AC = -24.875,       CD = -2.125,
BC = -43.875,       ACD = 5.625,
ABC = -15.625,      BCD = -25.375,
                    ABCD = -40.125.

A very helpful method in judging the significance of factors in a 2^k experiment is to
construct a normal probability plot of the effect estimates. If none of the effects is

Table 13-20 The 2^4 Design for the Plasma Etch Experiment

A         B            C             D          Etch Rate
(Gap)     (Pressure)   (C2F6 Flow)   (Power)    (Å/min)
-1        -1           -1            -1         550
 1        -1           -1            -1         669
-1         1           -1            -1         604
 1         1           -1            -1         650
-1        -1            1            -1         633
 1        -1            1            -1         642
-1         1            1            -1         601
 1         1            1            -1         635
-1        -1           -1             1         1037
 1        -1           -1             1         749
-1         1           -1             1         1052
 1         1           -1             1         868
-1        -1            1             1         1075
 1        -1            1             1         860
-1         1            1             1         1063
 1         1            1             1         729

Table 13-21 Contrast Constants for the 2^4 Design

       A  B  AB  C  AC  BC  ABC  D  AD  BD  ABD  CD  ACD  BCD  ABCD
(1)    -  -  +   -  +   +   -    -  +   +   -    +   -    -    +
a      +  -  -   -  -   +   +    -  -   +   +    +   +    -    -
b      -  +  -   -  +   -   +    -  +   -   +    +   -    +    -
ab     +  +  +   -  -   -   -    -  -   -   -    +   +    +    +
c      -  -  +   +  -   -   +    -  +   +   -    -   +    +    -
ac     +  -  -   +  +   -   -    -  -   +   +    -   -    +    +
bc     -  +  -   +  -   +   -    -  +   -   +    -   +    -    +
abc    +  +  +   +  +   +   +    -  -   -   -    -   -    -    -
d      -  -  +   -  +   +   -    +  -   -   +    -   +    +    -
ad     +  -  -   -  -   +   +    +  +   -   -    -   -    +    +
bd     -  +  -   -  +   -   +    +  -   +   -    -   +    -    +
abd    +  +  +   -  -   -   -    +  +   +   +    -   -    -    -
cd     -  -  +   +  -   -   +    +  -   -   +    +   -    -    +
acd    +  -  -   +  +   -   -    +  +   -   -    +   +    -    -
bcd    -  +  -   +  -   +   -    +  -   +   -    +   -    +    -
abcd   +  +  +   +  +   +   +    +  +   +   +    +   +    +    +

significant, then the estimates will behave like a random sample drawn from a normal dis-
tribution with zero mean, and the plotted effects will lie approximately along a straight line.
Those effects that do not plot on the line are significant factors.
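For a single replicate (n = 1) the effect estimates needed for such a plot can be generated
quickly. A small Python sketch of ours, applied to the etch-rate data of Table 13-20
(standard order, with bit j of the run index giving the level of factor j), is

    etch = [550, 669, 604, 650, 633, 642, 601, 635,
            1037, 749, 1052, 868, 1075, 860, 1063, 729]

    effects = {}
    for mask in range(1, 16):                         # every nonempty factor subset
        name = "".join("ABCD"[j] for j in range(4) if mask >> j & 1)
        contrast = sum(y if bin(mask & ~i & 0b1111).count("1") % 2 == 0 else -y
                       for i, y in enumerate(etch))
        effects[name] = contrast / 8                  # contrast / (n 2^(k-1))

    for name, est in sorted(effects.items(), key=lambda kv: kv[1]):
        print(f"{name:>4}: {est:9.3f}")

Sorting the estimates this way is the first step in constructing the normal probability plot;
AD = -153.625, A = -101.625, and D = 306.125 stand apart from the other twelve effects.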
The normal probability plot of effect estimates from the plasma etch experiment is
shown in Fig. 13-20. Clearly the main effects of A and D and the AD interaction are signif-
icant, as they fall far from the line passing through the other points. The analysis of variance
summarized in Table 13-22 confirms these findings. Notice that in the analysis of variance
we have pooled the three- and four-factor interactions to form the error mean square. If the
normal probability plot had indicated that any of these interactions were important, they
then should not be included in the error term.
Since A = —101.625, the effect of increasing the gap between the cathode and anode is
to decrease the etch rate. However, D = 306.125, so applying higher power levels will
increase the etch rate. Figure 13-21 is a plot of the AD interaction. This plot indicates that
the effect of changing the gap width at low power settings is small, but that increasing the

Figure 13-20 Normal probability plot of effects from the plasma etch experiment.

Table 13-22 Analysis of Variance for the Plasma Etch Experiment

Source of       Sum of          Degrees of      Mean
Variation       Squares         Freedom         Square          F_0
A               41,310.563      1               41,310.563      20.28
B               10.563          1               10.563          <1
C               217.563         1               217.563         <1
D               374,850.063     1               374,850.063     183.99
AB              248.063         1               248.063         <1
AC              2,475.063       1               2,475.063       1.21
AD              94,402.563      1               94,402.563      46.34
BC              7,700.063       1               7,700.063       3.78
BD              1.563           1               1.563           <1
CD              18.063          1               18.063          <1
Error           10,186.815      5               2,037.363
Total           531,420.938     15

gap at high power settings dramatically reduces the etch rate. High etch rates are obtained
at high power settings and narrow gap widths.
The residuals from the experiment can be obtained from the regression model

ŷ = 776.0625 - (101.625/2) x_1 + (306.125/2) x_4 - (153.625/2) x_1 x_4.

For example, when A and D are both at the low level the predicted value is

ŷ = 776.0625 - (101.625/2)(-1) + (306.125/2)(-1) - (153.625/2)(-1)(-1)
  = 597,

and the four residuals at this treatment combination are

e_1 = 550 - 597 = -47,
e_2 = 604 - 597 = 7,
e_3 = 633 - 597 = 36,
e_4 = 601 - 597 = 4.

The residuals at the other three treatment combinations (A high, D low), (A low, D high),
and (A high, D high) are obtained similarly. A normal probability plot of the residuals is
shown in Fig. 13-22. The plot is satisfactory.

Figure 13-21 AD interaction from the plasma etch experiment: etch rate (Å/min) versus gap (A),
plotted at D(low) = 275 W and D(high) = 325 W.

Figure 13-22 Normal probability plot of residuals from the plasma etch experiment.

13-6 CONFOUNDING IN THE 2^k DESIGN

It is often impossible to run a complete replicate of a factorial design under homogeneous
experimental conditions. Confounding is a design technique for running a factorial experi-
ment in blocks, where the block size is smaller than the number of treatment combinations
in one complete replicate. The technique causes certain interaction effects to be indistin-
guishable from, or confounded with, blocks. We will illustrate confounding in the 2^k facto-
rial design in 2^p blocks, where p < k.
Consider a 2^2 design. Suppose that each of the 2^2 = 4 treatment combinations requires
four hours of laboratory analysis. Thus, two days are required to perform the experiment.
If days are considered as blocks, then we must assign two of the four treatment combina-
tions to each day.
Consider the design shown in Fig. 13-23. Notice that block 1 contains the treatment
combinations (1) and ab, and that block 2 contains a and b. The contrasts for estimating the
main effects A and B are

Contrast_A = ab + a - b - (1),
Contrast_B = ab + b - a - (1).

Note that these contrasts are unaffected by blocking since in each contrast there is one plus
and one minus treatment combination from each block. That is, any difference between
block 1 and block 2 will cancel out. The contrast for the AB interaction is

Contrast_AB = ab + (1) - a - b.

Since the two treatment combinations with the plus sign, ab and (1), are in block 1 and the
two with the minus sign, a and b, are in block 2, the block effect and the AB interaction are
identical. That is, AB is confounded with blocks.
Block 1: (1), ab        Block 2: a, b

Figure 13-23 The 2^2 design in two blocks, AB confounded.

Block 1: (1), ab, ac, bc        Block 2: a, b, c, abc

Figure 13-24 The 2^3 design in two blocks, ABC confounded.

The reason for this is apparent from the table of plus and minus signs for the 2^2 design
(Table 13-13). From this table, we see that all treatment combinations that have a plus sign
on AB are assigned to block 1, while all treatment combinations that have a minus sign on
AB are assigned to block 2.
This scheme can be used to confound any 2^k design in two blocks. As a second exam-
ple, consider a 2^3 design, run in two blocks. Suppose we wish to confound the three-factor
interaction ABC with blocks. From the table of plus and minus signs for the 2^3 design (Table
13-16), we assign the treatment combinations that are minus on ABC to block 1 and those
that are plus on ABC to block 2. The resulting design is shown in Fig. 13-24.
There is a more general method of constructing the blocks. The method employs a
defining contrast, say

L = α_1 x_1 + α_2 x_2 + ··· + α_k x_k,   (13-47)

where x_i is the level of the ith factor appearing in a treatment combination and α_i is the
exponent appearing on the ith factor in the effect to be confounded. For the 2^k system, we
have either α_i = 0 or 1, and either x_i = 0 (low level) or x_i = 1 (high level). Treatment combi-
nations that produce the same value of L (modulus 2) will be placed in the same block.
Since the only possible values of L (mod 2) are 0 and 1, this will assign the 2^k treatment
combinations to exactly two blocks.
As an example consider a 2^3 design with ABC confounded with blocks. Here x_1 corre-
sponds to A, x_2 to B, x_3 to C, and α_1 = α_2 = α_3 = 1. Thus, the defining contrast for ABC is

L = x_1 + x_2 + x_3.
To assign the treatment combinations to the two blocks, we substitute the treatment com-
binations into the defining contrast as follows:

(1):  L = 1(0) + 1(0) + 1(0) = 0 = 0 (mod 2),
a:    L = 1(1) + 1(0) + 1(0) = 1 = 1 (mod 2),
b:    L = 1(0) + 1(1) + 1(0) = 1 = 1 (mod 2),
ab:   L = 1(1) + 1(1) + 1(0) = 2 = 0 (mod 2),
c:    L = 1(0) + 1(0) + 1(1) = 1 = 1 (mod 2),
ac:   L = 1(1) + 1(0) + 1(1) = 2 = 0 (mod 2),
bc:   L = 1(0) + 1(1) + 1(1) = 2 = 0 (mod 2),
abc:  L = 1(1) + 1(1) + 1(1) = 3 = 1 (mod 2).

Therefore, (1), ab, ac, and bc are run in block 1, and a, b, c, and abc are run in block 2. This
is the same design shown in Fig. 13-24.
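The defining-contrast arithmetic is simple enough to script. In the Python sketch below
(ours; treatment combinations are written in the usual lowercase-letter notation), L is
evaluated modulo 2 for each run:

    def block_of(treatment, word):
        """word: lowercase letters of the confounded effect, e.g. 'abc' for ABC."""
        return sum(letter in treatment for letter in word) % 2

    runs = ["(1)", "a", "b", "ab", "c", "ac", "bc", "abc"]
    print({r: block_of(r, "abc") for r in runs})
    # {'(1)': 0, 'a': 1, 'b': 1, 'ab': 0, 'c': 1, 'ac': 0, 'bc': 0, 'abc': 1}

For designs in four blocks (discussed later in this section), the same function is applied
with two words, and the pair of values (L_1, L_2) identifies the block.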

A shortcut method is useful in constructing these designs. The block containing the
treatment combination (1) is called the principal block. Any element [except (1)] in the
principal block may be generated by multiplying two other elements in the principal block
modulus 2. For example, consider the principal block of the 2^3 design with ABC con-
founded, shown in Fig. 13-24. Note that

ab · ac = a^2 bc = bc,
ab · bc = ab^2 c = ac,
ac · bc = abc^2 = ab.

Treatment combinations in the other block (or blocks) may be generated by multiplying one
element in the new block by each element in the principal block modulus 2. For the 2^3 with
ABC confounded, since the principal block is (1), ab, ac, and bc, we know that b is in the
other block. Thus, the elements of this second block are

b · (1) = b,
b · ab = ab^2 = a,
b · ac = abc,
b · bc = b^2 c = c.

Example 13-10
An experiment is performed to investigate the effects of four factors on the terminal miss distance of
a shoulder-fired ground-to-air missile.
The four factors are target type (A), seeker type (B), target altitude (C), and target range (D).
Each factor may be conveniently run at two levels, and the optical tracking system will allow termi-
nal miss distance to be measured to the nearest foot. Two different gunners are used in the flight test,
and since there may be differences between individuals, it was decided to conduct the 2^4 design in two
blocks with ABCD confounded. Thus, the defining contrast is

L = x_1 + x_2 + x_3 + x_4.
The experimental design and the resulting data are

Block 1:  (1) = 3,  ab = 7,  ac = 6,  bc = 8,  ad = 10,  bd = 4,  cd = 8,  abcd = 9
Block 2:  a = 7,  b = 5,  c = 6,  abc = 6,  d = 4,  abd = 12,  acd = 9,  bcd = 7

The analysis of the design by Yates’ algorithm is shown in Table 13-23. A normal probability plot of
the effects would reveal A (target type), D (target range), and AD to have large effects. A confirming
analysis of variance, using three-factor interactions as error, is shown in Table 13-24.

It is possible to confound the 2^k design in four blocks of 2^(k-2) observations each. To con-
struct the design, two effects are chosen to confound with blocks and their defining con-
trasts obtained. A third effect, the generalized interaction of the two initially chosen, is also

Table 13-23 Yates' Algorithm for the 2^4 Design in Example 13-10

Treatment                                                      Sum of      Effect
Combinations   Response   [1]   [2]   [3]    [4]    Effect    Squares     Estimate
(1)            3          10    22    48     111    Total     —           —
a              7          12    26    63     21     A         27.5625     2.625
b              5          12    30    4      5      B         1.5625      0.625
ab             7          14    33    17     -1     AB        0.0625      -0.125
c              6          14    6     4      7      C         3.0625      0.875
ac             6          16    -2    1      -19    AC        22.5625     -2.375
bc             8          17    14    -4     -3     BC        0.5625      -0.375
abc            6          16    3     3      -1     ABC       0.0625      -0.125
d              4          4     2     4      15     D         14.0625     1.875
ad             10         2     2     3      13     AD        10.5625     1.625
bd             4          0     2     -8     -3     BD        0.5625      -0.375
abd            12         -2    -1    -11    7      ABD       3.0625      0.875
cd             8          6     -2    0      -1     CD        0.0625      -0.125
acd            9          8     -2    -3     -3     ACD       0.5625      -0.375
bcd            7          1     2     0      -3     BCD       0.5625      -0.375
abcd           9          2     1     -1     -1     ABCD      0.0625      -0.125

Table 13-24 Analysis of Variance for Example 13-10

Source of                          Sum of      Degrees of      Mean
Variation                          Squares     Freedom         Square      F_0
Blocks (ABCD)                      0.0625      1               0.0625      0.06
A                                  27.5625     1               27.5625     25.94
B                                  1.5625      1               1.5625      1.47
C                                  3.0625      1               3.0625      2.88
D                                  14.0625     1               14.0625     13.24
AB                                 0.0625      1               0.0625      0.06
AC                                 22.5625     1               22.5625     21.24
AD                                 10.5625     1               10.5625     9.94
BC                                 0.5625      1               0.5625      0.53
BD                                 0.5625      1               0.5625      0.53
CD                                 0.0625      1               0.0625      0.06
Error (ABC + ABD + ACD + BCD)      4.2500      4               1.0625
Total                              84.9375     15

confounded with blocks. The generalized interaction of two effects is found by multiplying
their respective columns.
For example, consider the 2^4 design in four blocks. If AC and BD are confounded with
blocks, their generalized interaction is (AC)(BD) = ABCD. The design is constructed by
using the defining contrasts for AC and BD:

L_1 = x_1 + x_3,
L_2 = x_2 + x_4.
It is easy to verify that the four blocks are

Block 1              Block 2              Block 3              Block 4
(L_1 = 0, L_2 = 0)   (L_1 = 1, L_2 = 0)   (L_1 = 0, L_2 = 1)   (L_1 = 1, L_2 = 1)
(1)                  a                    b                    ab
ac                   c                    abc                  bc
bd                   abd                  d                    ad
abcd                 bcd                  acd                  cd

This general procedure can be extended to confounding the 2^k design in 2^p blocks,
where p < k. Select p effects to be confounded, such that no effect chosen is a generalized
interaction of the others. The blocks can be constructed from the p defining contrasts L_1, L_2,
..., L_p associated with these effects. In addition, exactly 2^p - p - 1 other effects are con-
founded with blocks, these being the generalized interactions of the original p effects cho-
sen. Care should be taken so as not to confound effects of potential interest.
For more information on confounding refer to Montgomery (2001, Chapter 7). That
book contains guidelines for selecting factors to confound with blocks so that main effects
and low-order interactions are not confounded. In particular, the book contains a table of
suggested confounding schemes for designs with up to seven factors and a range of block
sizes, some as small as two runs.

13-7 FRACTIONAL REPLICATION OF THE 2^k DESIGN

As the number of factors in a 2^k increases, the number of runs required increases rapidly.
For example, a 2^5 requires 32 runs. In this design, only 5 degrees of freedom correspond to
main effects and 10 degrees of freedom correspond to two-factor interactions. If we can
assume that certain high-order interactions are negligible, then a fractional factorial design
involving fewer than the complete set of 2^k runs can be used to obtain information on the
main effects and low-order interactions. In this section, we will introduce fractional repli-
cation of the 2^k design. For a more complete treatment, see Montgomery (2001, Chapter 8).

13-7.1 The One-Half Fraction of the 2^k Design

A one-half fraction of the 2^k design contains 2^(k-1) runs and is often called a 2^(k-1) fractional
factorial design. As an example, consider the 2^(3-1) design, that is, a one-half fraction of the
2^3. The table of plus and minus signs for the 2^3 design is shown in Table 13-25. Suppose we
select the four treatment combinations a, b, c, and abc as our one-half fraction. These
Table 13-25 Plus and Minus Signs for the 2^3 Factorial Design

Treatment                    Factorial Effect
Combinations     I     A     B     C     AB    AC    BC    ABC
a                +     +     -     -     -     -     +     +
b                +     -     +     -     -     +     -     +
c                +     -     -     +     +     -     -     +
abc              +     +     +     +     +     +     +     +
ab               +     +     +     -     +     -     -     -
ac               +     +     -     +     -     +     -     -
bc               +     -     +     +     -     -     +     -
(1)              +     -     -     -     +     +     +     -

treatment combinations are shown in the top half of Table 13-25. We will use both the con-
ventional notation (a, b, c, ...) and the plus and minus notation for the treatment combina-
tions. The equivalence between the two notations is as follows:

Notation 1      Notation 2
a               + - -
b               - + -
c               - - +
abc             + + +

Notice that the 2^(3-1) design is formed by selecting only those treatment combinations
that yield a plus on the ABC effect. Thus ABC is called the generator of this particular frac-
tion. Furthermore, the identity element I is also plus for the four runs, so we call

I = ABC

the defining relation for the design.
The treatment combinations in the 2^(3-1) design yield three degrees of freedom associ-
ated with the main effects. From Table 13-25, we obtain the estimates of the main effects as

A = (1/2)[a - b - c + abc],

B = (1/2)[-a + b - c + abc],

C = (1/2)[-a - b + c + abc].

It is also easy to verify that the estimates of the two-factor interactions are

BC = (1/2)[a - b - c + abc],

AC = (1/2)[-a + b - c + abc],

AB = (1/2)[-a - b + c + abc].

Thus, the linear combination of observations in column A, say ℓ_A, estimates A + BC.
Similarly, ℓ_B estimates B + AC, and ℓ_C estimates C + AB. Two or more effects that have this
property are called aliases. In our 2^(3-1) design, A and BC are aliases, B and AC are aliases,
and C and AB are aliases. Aliasing is the direct result of fractional replication. In many prac-
tical situations, it will be possible to select the fraction so that the main effects and low-
order interactions of interest will be aliased with high-order interactions (which are
probably negligible).
The alias structure for this design is found by using the defining relation I = ABC. Mul-
tiplying any effect by the defining relation yields the aliases for that effect. In our example,
the alias of A is

A = A · ABC = A^2 BC = BC,

since A · I = A and A^2 = I. The aliases of B and C are

B = B · ABC = AB^2 C = AC
and

C = C · ABC = ABC^2 = AB.
Now suppose that we had chosen the other one-half fraction, that is, the treatment com-
binations in Table 13-25 associated with minus on ABC. The defining relation for this
design is I = -ABC. The aliases are A = -BC, B = -AC, and C = -AB. Thus the estimates of
A, B, and C with this fraction really estimate A - BC, B - AC, and C - AB. In practice, it usu-
ally does not matter which one-half fraction we select. The fraction with the plus sign in the
defining relation is usually called the principal fraction, and the other fraction is usually
called the alternate fraction.
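The multiplication rule behind these alias calculations reduces to a symmetric difference
of letter sets, since any squared letter cancels. A two-line Python sketch (ours) captures it:

    def alias(effect, word):
        """Alias of an effect under one word of the defining relation."""
        return "".join(sorted(set(effect) ^ set(word)))   # A^2 = I, so letters cancel

    print(alias("A", "ABC"))     # 'BC'  (I = ABC)
    print(alias("AB", "ABCD"))   # 'CD'  (I = ABCD)

With several words in the defining relation, applying alias once per word lists the
complete alias chain of an effect.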
Sometimes we use sequences of fractional factorial designs to estimate effects. For
example, suppose we had run the principal fraction of the 2^(3-1) design. From this design we
have the following effect estimates:

ℓ_A = A + BC,
ℓ_B = B + AC,
ℓ_C = C + AB.

Suppose that we are willing to assume at this point that the two-factor interactions are neg-
ligible. If they are, then the 2^(3-1) design has produced estimates of the three main effects, A,
B, and C. However, if after running the principal fraction we are uncertain about the inter-
actions, it is possible to estimate them by running the alternate fraction. The alternate frac-
tion produces the following effect estimates:

ℓ'_A = A - BC,
ℓ'_B = B - AC,
ℓ'_C = C - AB.
If we combine the estimates from the two fractions, we obtain the following:

Effect i     From (1/2)(ℓ_i + ℓ'_i)               From (1/2)(ℓ_i - ℓ'_i)
i = A        (1/2)(A + BC + A - BC) = A           (1/2)[A + BC - (A - BC)] = BC
i = B        (1/2)(B + AC + B - AC) = B           (1/2)[B + AC - (B - AC)] = AC
i = C        (1/2)(C + AB + C - AB) = C           (1/2)[C + AB - (C - AB)] = AB

Thus by combining a sequence of two fractional factorial designs we can isolate both the
main effects and the two-factor interactions. This property makes the fractional factorial
design highly useful in experimental problems, as we can run sequences of small, efficient
experiments, combine information across several experiments, and take advantage of learn-
ing about the process we are experimenting with as we go along.
A 2^(k-1) design may be constructed by writing down the treatment combinations for a
full factorial with k - 1 factors and then adding the kth factor by identifying its plus and
minus levels with the plus and minus signs of the highest-order interaction ±ABC···(K - 1).
Therefore, a 2^(3-1) fractional factorial is obtained by writing down the full 2^2 factorial and
then equating factor C to the ±AB interaction. Thus, to obtain the principal fraction, we
would use C = +AB as follows:
Full 2^2             2^(3-1), I = +ABC
A      B             A      B      C = AB
-      -             -      -      +
+      -             +      -      -
-      +             -      +      -
+      +             +      +      +

To obtain the alternate fraction we would equate the last column to C = -AB.
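This construction is easy to mechanize. The following Python sketch (ours) writes down
the principal fraction of the 2^(4-1) design of the next example by setting the signs of D
equal to the product of the signs of A, B, and C:

    from itertools import product

    for c, b, a in product([-1, +1], repeat=3):       # a varies fastest: standard order
        d = a * b * c                                 # D = ABC, i.e., generator I = ABCD
        name = "".join(l for l, s in zip("abcd", (a, b, c, d)) if s > 0) or "(1)"
        print(f"{a:+d} {b:+d} {c:+d} {d:+d}   {name}")

Replacing d = a * b * c with d = -a * b * c produces the alternate fraction.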

Example 13-11
To illustrate the use of a one-half fraction, consider the plasma etch experiment described in Exam-
ple 13-9. Suppose that we decide to use a 2^(4-1) design with I = ABCD to investigate the four factors,
gap (A), pressure (B), C2F6 flow rate (C), and power setting (D). This design would be constructed by
writing down a 2^3 in the factors A, B, and C and then setting D = ABC. The design and the resulting
etch rates are shown in Table 13-26.
In this design, the main effects are aliased with the three-factor interactions; note that the alias
of A is

A · I = A · ABCD,
A = A^2 BCD = BCD,

and similarly

B = ACD,
C = ABD,
D = ABC.
The two-factor interactions are aliased with each other. For example, the alias of AB is CD:

AB · I = AB · ABCD,
AB = A^2 B^2 CD = CD.
The other aliases are

Table 13-26 The 2^(4-1) Design with Defining Relation I = ABCD

                                Treatment       Etch
A      B      C      D = ABC    Combinations    Rate
-      -      -      -          (1)             550
+      -      -      +          ad              749
-      +      -      +          bd              1052
+      +      -      -          ab              650
-      -      +      +          cd              1075
+      -      +      -          ac              642
-      +      +      -          bc              601
+      +      +      +          abcd            729

AC= BD,
AD = BC.

The estimates of the main effects and their aliases are found using the four columns of signs in
Table 13-26. For example, from column A we obtain

ℓ_A = A + BCD = (1/4)(-550 + 749 - 1052 + 650 - 1075 + 642 - 601 + 729)
    = -127.00.
The other columns produce

ℓ_B = B + ACD = 4.00,
ℓ_C = C + ABD = 11.50,

and

ℓ_D = D + ABC = 290.50.

Clearly ℓ_A and ℓ_D are large, and if we believe that the three-factor interactions are negligible, then the
main effects A (gap) and D (power setting) significantly affect etch rate.
The interactions are estimated by forming the AB, AC, and AD columns and adding them to the
table. The signs in the AB column are +, -, -, +, +, -, -, +, and this column produces the estimate

ℓ_AB = AB + CD = (1/4)(550 - 749 - 1052 + 650 + 1075 - 642 - 601 + 729)
     = -10.00.

From the AC and AD columns we find

ℓ_AC = AC + BD = -25.50,
ℓ_AD = AD + BC = -197.50.

The ℓ_AD estimate is large; the most straightforward interpretation of the results is that this is the AD
interaction. Thus, the results obtained from the 2^(4-1) design agree with the full factorial results in
Example 13-9.

Normal Probability Plots and Residuals  The normal probability plot is very useful in
assessing the significance of effects from a fractional factorial. This is particularly true
when there are many effects to be estimated. Residuals can be obtained from a fractional
factorial by the regression model method shown previously. These residuals should be plot-
ted against the predicted values, against the levels of the factors, and on normal probability
paper, as we have discussed before, both to assess the validity of the underlying model
assumptions and to gain additional insight into the experimental situation.

Projection of the 2^(k-1) Design  If one or more factors from a one-half fraction of a 2^k can
be dropped, the design will project into a full factorial design. For example, Fig. 13-25 pres-
ents a 2^(3-1) design. Notice that this design will project into a full factorial in any two of the
three original factors. Thus, if we think that at most two of the three factors are important,
the 2^(3-1) design is an excellent design for identifying the significant factors. Sometimes we
call experiments to identify a relatively few significant factors from a larger number of fac-
tors screening experiments. This projection property is highly useful in factor screening, as
it allows negligible factors to be eliminated, resulting in a stronger experiment in the active
factors that remain.
Figure 13-25 Projection of a 2^(3-1) design into three 2^2 designs.

In the 2^(4-1) design used in the plasma etch experiment in Example 13-11, we found that
two of the four factors (B and C) could be dropped. If we eliminate these two factors, the
remaining columns in Table 13-26 form a 2^2 design in the factors A and D, with two repli-
cates. This design is shown in Fig. 13-26.

Design Resolution  The concept of design resolution is a useful way to catalog fractional
factorial designs according to the alias patterns they produce. Designs of resolution III, IV,
and V are particularly important. The definitions of these terms and an example of each
follow:

1. Resolution III Designs. These are designs in which no main effects are aliased with
   any other main effect, but main effects are aliased with two-factor interactions, and
   two-factor interactions may be aliased with each other. The 2^(3-1) design with I =
   ABC is of resolution III. We usually employ a subscript Roman numeral to indicate
   design resolution; thus this one-half fraction is a 2_III^(3-1) design.
2. Resolution IV Designs. These are designs in which no main effect is aliased with
   any other main effect or two-factor interaction, but two-factor interactions are
Figure 13-26 The 2^2 design obtained by dropping factors B and C from the plasma etch
experiment: etch rates (550, 601) at A low/D low, (650, 642) at A high/D low, (1052, 1075) at
A low/D high, and (749, 729) at A high/D high.

   aliased with each other. The 2^(4-1) design with I = ABCD used in Example 13-11 is
   of resolution IV (2_IV^(4-1)).
3. Resolution V Designs. These are designs in which no main effect or two-factor
   interaction is aliased with any other main effect or two-factor interaction, but two-
   factor interactions are aliased with three-factor interactions. A 2^(5-1) design with I =
   ABCDE is of resolution V (2_V^(5-1)).

Resolution III and IV designs are particularly useful in factor screening experiments.
A resolution IV design provides very good information about main effects and will provide
some information about two-factor interactions.

13-7.2 Smaller Fractions: The 2^(k-p) Fractional Factorial

Although the 2^(k-1) design is valuable in reducing the number of runs required for an experi-
ment, we frequently find that smaller fractions will provide almost as much useful informa-
tion at even greater economy. In general, a 2^k design may be run in a 1/2^p fraction called a
2^(k-p) fractional factorial design. Thus, a 1/4 fraction is called a 2^(k-2) fractional factorial
design, a 1/8 fraction is called a 2^(k-3) design, and so on.
To illustrate a 1/4 fraction, consider an experiment with six factors and suppose that the
engineer is interested primarily in main effects but would also like to get some information
about the two-factor interactions. A 2^(6-1) design would require 32 runs and would have 31
degrees of freedom for estimation of effects. Since there are only six main effects and 15
two-factor interactions, the one-half fraction is inefficient; it requires too many runs.
Suppose we consider a 1/4 fraction, or a 2^(6-2) design. This design contains 16 runs and, with
15 degrees of freedom, will allow estimation of all six main effects with some capability for
examination of the two-factor interactions. To generate this design we would write down a 2^4
design in the factors A, B, C, and D, and then add two columns for E and F. To find the new
columns we would select the two design generators I = ABCE and I = ACDF. Thus column E
would be found from E = ABC and column F would be F = ACD, and also columns ABCE
and ACDF are equal to the identity column. However, we know that the product of any two
columns in the table of plus and minus signs for a 2^k is just another column in the table; there-
fore, the product of ABCE and ACDF, or ABCE(ACDF) = A^2 BC^2 DEF = BDEF, is also an
identity column. Consequently, the complete defining relation for the 2^(6-2) design is

I = ABCE = ACDF = BDEF.
To find the alias of any effect, simply multiply the effect by each word in the foregoing
defining relation. The complete alias structure is

A = BCE = CDF = ABDEF,
B = ACE = DEF = ABCDF,
C = ABE = ADF = BCDEF,
D = ACF = BEF = ABCDE,
E = ABC = BDF = ACDEF,
F = ACD = BDE = ABCEF,
AB = CE = BCDF = ADEF,
AC = BE = DF = ABCDEF,
AD = CF = BCDE = ABEF,
AE = BC = CDEF = ABDF,
AF = CD = BCEF = ABDE,
BD = EF = ACDE = ABCF,
BF = DE = ABCD = ACEF,
ABF = CEF = BCD = ADE,
CDE = ABD = AEF = BCF.
Notice that this is a resolution IV design; main effects are aliased with three-factor and
higher interactions, and two-factor interactions are aliased with each other. This design
would provide very good information on the main effects and give some idea about the
strength of the two-factor interactions. For example, if the AD interaction appears signifi-
cant, either AD and/or CF is significant. If A and/or D are significant main effects, but C
and F are not, the experimenter may reasonably and tentatively attribute the significance to
the AD interaction. The construction of the design is shown in Table 13-27.
The same principles can be applied to obtain even smaller fractions. Suppose we wish
to investigate seven factors in 16 runs. This is a 2^(7-3) design (a 1/8 fraction). This design is
constructed by writing down a 2^4 design in the factors A, B, C, and D and then adding three
new columns. Reasonable choices for the three generators required are I = ABCE, I =
BCDF, and I = ACDG. Therefore, the new columns are formed by setting E = ABC, F =
BCD, and G = ACD. The complete defining relation is found by multiplying the generators
together two at a time and then three at a time, resulting in

I = ABCE = BCDF = ACDG = ADEF = BDEG = ABFG = CEFG.

Notice that every main effect in this design will be aliased with three-factor and higher
interactions and that two-factor interactions will be aliased with each other. Thus this is a
resolution IV design.
For seven factors, we can reduce the number of runs even further. The 2^(7-4) design is
an eight-run experiment accommodating seven variables. This is a 1/16 fraction and is
Table 13-27 Construction of the 2^(6-2) Design with Generators I = ABCE and I = ACDF

A     B     C     D     E = ABC     F = ACD
-     -     -     -     -           -           (1)
+     -     -     -     +           +           aef
-     +     -     -     +           -           be
+     +     -     -     -           +           abf
-     -     +     -     +           +           cef
+     -     +     -     -           -           ac
-     +     +     -     -           +           bcf
+     +     +     -     +           -           abce
-     -     -     +     -           +           df
+     -     -     +     +           -           ade
-     +     -     +     +           +           bdef
+     +     -     +     -           -           abd
-     -     +     +     +           -           cde
+     -     +     +     -           +           acdf
-     +     +     +     -           -           bcd
+     +     +     +     +           +           abcdef

obtained by first writing down a 2^3 design in the factors A, B, and C, and then forming the
four new columns from I = ABD, I = ACE, I = BCF, and I = ABCG. The design is shown
in Table 13-28.
The complete defining relation is found by multiplying the generators together two,
three, and finally four at a time, producing

I = ABD = ACE = BCF = ABCG = BCDE = ACDF = CDG = ABEF
  = BEG = AFG = DEF = ADEG = CEFG = BDFG = ABCDEFG.
The alias of any main effect is found by multiplying that effect through each term in the
defining relation. For example, the alias of A is

A = BD = CE = ABCF = BCG = ABCDE = CDF = ACDG
  = BEF = ABEG = FG = ADEF = DEG = ACEFG = ABDFG
  = BCDEFG.
This design is of resolution III, since the main effect is aliased with two-factor interactions.
If we assume that all three-factor and higher interactions are negligible, the aliases of the
seven main effects are

ℓ_A = A + BD + CE + FG,
ℓ_B = B + AD + CF + EG,
ℓ_C = C + AE + BF + DG,
ℓ_D = D + AB + CG + EF,
ℓ_E = E + AC + BG + DF,
ℓ_F = F + BC + AG + DE,
ℓ_G = G + CD + BE + AF.
This 2_III^(7-4) design is called a saturated fractional factorial, because all of the available
degrees of freedom are used to estimate main effects. It is possible to combine sequences
of these resolution III fractional factorials to separate the main effects from the two-factor
interactions. The procedure is illustrated in Montgomery (2001, Chapter 8).
In constructing a fractional factorial design it is important to select the best set of
design generators. Montgomery (2001) presents a table of optimum design generators for
2^(k-p) designs with up to 10 factors. The generators in this table will produce designs of max-
imum resolution for any specified combination of k and p. For more than 10 factors, a

Table 13-28 A 2_III^(7-4) Fractional Factorial Design

A     B     C     D (= AB)     E (= AC)     F (= BC)     G (= ABC)
-     -     -     +            +            +            -
+     -     -     -            -            +            +
-     +     -     -            +            -            +
+     +     -     +            -            -            -
-     -     +     +            -            -            +
+     -     +     -            +            -            -
-     +     +     -            -            +            -
+     +     +     +            +            +            +

resolution III design is recommended. These designs may be constructed by using the same
method illustrated earlier for the 2_III^(7-4) design. For example, to investigate up to 15 factors
in 16 runs, write down a 2^4 design in the factors A, B, C, and D, and then generate 11 new
columns by taking the products of the original four columns two at a time, three at a time,
and four at a time. The resulting design is a 2_III^(15-11) fractional factorial. These designs, along
with other useful fractional factorials, are discussed by Montgomery (2001, Chapter 8).

13-8 SAMPLE COMPUTER OUTPUT


We provide Minitab® output for some of the examples presented in this chapter.

Sample Computer Output for Example 13-3


Reconsider Example 13-3, dealing with aircraft primer paints. The Minitab® results of the
3 × 2 factorial design with three replicates are

Analysis of Variance for Force

Source            DF        SS        MS        F       P
Type               2    4.5811    2.2906    27.86   0.000
Applicat           1    4.9089    4.9089    59.70   0.000
Type*Applicat      2    0.2411    0.1206     1.47   0.269
Error             12    0.9867    0.0822
Total             17   10.7178

The Minitab® results are in agreement with the results given in Table 13-6.

Sample Output for Example 13-7


Reconsider Example 13-7, dealing with surface roughness. The Minitab® results for the 2^3
design with two replicates are

Term        Effect      Coef    SE Coef        T       P
Constant            11.0625     0.3903     28.34   0.000
A           3.3750    1.6875    0.3903      4.32   0.003
B           1.6250    0.8125    0.3903      2.08   0.071
C           0.8750    0.4375    0.3903      1.12   0.295
A*B         1.3750    0.6875    0.3903      1.76   0.116
A*C         0.1250    0.0625    0.3903      0.16   0.877
B*C        -0.6250   -0.3125    0.3903     -0.80   0.446
A*B*C       1.1250    0.5625    0.3903      1.44   0.188

Analysis of Variance

Source                  DF         SS         MS        F       P
Main Effects             3    59.1875    19.7292     8.09   0.008
2-Way Interactions       3     9.1875     3.0625     1.26   0.352
3-Way Interactions       1     5.0625     5.0625     2.08   0.188
Residual Error           8    19.5000     2.4375
Pure Error               8    19.5000     2.4375
Total                   15    92.9375
The output from Minitab® is slightly different from the results given in Example 13-7. t-tests
on the main effects and interactions are provided in addition to the analysis of variance on
the significance of main effects, two-factor interactions, and three-factor interactions. The
ANOVA results indicate that at least one of the main effects is significant, whereas no two-
factor or three-factor interaction is significant.

13-9 SUMMARY

This chapter has introduced the design and analysis of experiments with several factors,
concentrating on factorial designs and fractional factorials. Fixed, random, and mixed mod-
els were considered. The F-tests for main effects and interactions in these designs depend
on whether the factors are fixed or random.
The 2^k factorial designs were also introduced. These are very useful designs in which
all k factors appear at two levels. They have a greatly simplified method of statistical analy-
sis. In situations where the design cannot be run under homogeneous conditions, the 2^k
design can be confounded easily in 2^p blocks. This requires that certain interactions be con-
founded with blocks. The 2^k design also lends itself to fractional replication, in which only
a particular subset of the 2^k treatment combinations are run. In fractional replication, each
effect is aliased with one or more other effects. The general idea is to alias main effects and
low-order interactions with higher-order interactions. This chapter discussed methods for
construction of the 2^(k-p) fractional factorial designs, that is, a 1/2^p fraction of the 2^k
design. These designs are particularly useful in industrial experimentation.

13-10 EXERCISES
13-1. An article in the Journal of Materials Processing Technology (2000, p. 113) presents
results from an experiment involving tool wear estimation in milling. The objective is to
minimize tool wear. Two factors of interest in the study were cutting speed (m/min) and
depth of cut (mm). One response of interest is tool flank wear (mm). Three levels of each
factor were selected and a factorial experiment with three replicates is run. Analyze the
data and draw conclusions.

                         Depth of Cut
Cutting Speed        1          2          3
     12            0.170      0.198      0.217
                   0.185      0.210      0.241
                   0.110      0.232      0.223
     15            0.178      0.215      0.260
                   0.210      0.243      0.289
                   0.250      0.297      0.326
   18.75           0.212      0.250      0.285
                   0.238      0.282      0.325
                   0.267      0.321      0.354

13-2. An engineer suspects that the surface finish of a metal part is influenced by the type
of paint used and the drying time. He selects three drying times—20, 25, and 30 minutes—
and randomly chooses two types of paint from several that are available. He conducts an
experiment and obtains the data shown here. Analyze the data and draw conclusions. Esti-
mate the variance components.

           Drying Time (min)
Paint       20      25      30
  1         74      73      78
            64      61      85
            50      44      92
  2         92      98      66
            86      73      45
            68      88      85

13-3. Suppose that in Exercise 13-2 paint types were fixed effects. Compute a 95% interval
estimate of the mean difference between the responses for paint type 1 and paint type 2.
13-4. The factors that influence the breaking strength of cloth are being studied. Four
machines and three operators are chosen at random and an experiment is run using cloth
from the same one-yard segment. The results are as follows:

                     Machine
Operator       1      2      3      4
   A          109    110    108    110
              110    115    109    108
   B          110    110    111    114
              112    111    109    112
   C          116    112    114    120
              114    115    119    117

Test for interaction and main effects at the 5% level. Estimate the components of variance.

13-5. Suppose that in Exercise 13-4 the operators were chosen at random, but only four
machines were available for the test. Does this influence the analysis or your conclusions?

13-6. A company employs two time-study engineers. Their supervisor wishes to determine
whether the standards set by them are influenced by any interaction between engineers and
operators. She selects three operators at random and conducts an experiment in which the
engineers set standard times for the same job. She obtains the data shown here. Analyze
the data and draw conclusions.

                 Operator
Engineer      1       2       3
   1        2.59    2.38    2.40
            2.78    2.49    2.72
   2        2.15    2.85    2.66
            2.86    2.72    2.87

13-7. An article in Industrial Quality Control (1956, p. 5) describes an experiment to
investigate the effect of two factors (glass type and phosphor type) on the brightness of a
television tube. The response variable measured is the current necessary (in microamps) to
obtain a specified brightness level. The data are shown here. Analyze the data and draw
conclusions, assuming that both factors are fixed.

               Phosphor Type
Glass Type     1      2      3
    1         280    300    290
              290    310    285
              285    295    290
    2         230    260    220
              235    240    225
              240    235    230

13-8. Consider the tool wear data in Exercise 13-1. Plot the residuals from this experiment
against the levels of cutting speed and against the depth of cut. Comment on the graphs
obtained. What are the possible consequences of the information conveyed by the residual
plots?

13-9. The percentage of hardwood concentration in raw pulp and the freeness and cooking
time of pulp are being investigated for their effects on the strength of paper. Analyze the
data shown in the following table, assuming that all three factors are fixed.

                   Cooking Time 1.5 hours      Cooking Time 2.0 hours
Percentage of             Freeness                    Freeness
Hardwood
Concentration       400     500     650         400     500     650
     10            96.6    97.7    99.4        98.4    99.6   100.6
                   96.0    96.0    99.8        98.6   100.4   100.9
     15            98.5    96.0    98.4        97.5    98.7    99.6
                   97.2    96.9    97.6        98.1    98.0    99.0
     20            97.5    95.6    97.4        97.6    97.0    98.5
                   96.6    96.2    98.1        98.4    97.8    99.8

13-10. An article in Quality Engineering (1999, p. 357) presents the results of an experi-
ment conducted to determine the effects of three factors on warpage in an injection-mold-
ing process. Warpage is defined as the nonflatness property in the product manufactured.
This particular company manufactures plastic molded components for use in television
sets, washing machines, and automobiles. The three factors of interest (each at two levels)
are A = melt temperature, B = injection speed, and C = injection process. A complete 2^3
factorial design was carried out with replication. Two replicates are provided in the table
below. Analyze the data from this experiment.

 A     B     C        I       II
-1    -1    -1      1.35     1.40
 1    -1    -1      2.15     2.20
-1     1    -1      1.50     1.50
 1     1    -1      1.10     1.20
-1    -1     1      0.70     0.70
 1    -1     1      1.40     1.35
-1     1     1      1.20     1.35
 1     1     1      1.10     1.00

13-11. For the warpage experiment in Exercise 13-10, obtain the residuals and plot them
on normal probability paper. Also plot the residuals versus the predicted values. Comment
on these plots.

13-12. Four factors are thought to possibly influence the taste of a soft-drink beverage:
type of sweetener (A), ratio of syrup to water (B), carbonation level (C), and temperature
(D). Each factor can be run at two levels, producing a 2^4 design. At each run in the design,
samples of the beverage are given to a test panel consisting of 20 people. Each tester
assigns a point score from 1 to 10 to the beverage. Total score is the response variable, and
the objective is to find a formulation that maximizes total score. Two replicates of this
design are run, and the results are shown here. Analyze the data and draw conclusions.

 Treatment       Replicate       Treatment        Replicate
Combinations      I     II      Combinations       I     II
    (1)          190    193         d             198    195
     a           174    178         ad            172    176
     b           181    185         bd            187    183
     ab          183    180         abd           185    186
     c           177    178         cd            199    190
     ac          181    180         acd           179    175
     bc          188    182         bcd           187    184
     abc         173    170         abcd          180    180

13-13. Consider the experiment in Exercise 13-12. Plot the residuals against the levels of
factors A, B, C, and D. Also construct a normal probability plot of the residuals. Comment
on these plots.

13-14. Find the standard error of the effects for the experiment in Exercise 13-12. Using
the standard errors as a guide, what factors appear significant?

13-15. The data shown here represent a single replicate of a 2^5 design that is used in an
experiment to study the compressive strength of concrete. The factors are mix (A), time
(B), laboratory (C), temperature (D), and drying time (E). Analyze the data, assuming that
three-factor and higher interactions are negligible. Use a normal probability plot to assess
the effects.

(1) =  700    d   = 1000    e   =  800    de    = 1900
 a  =  900    ad  = 1100    ae  = 1200    ade   = 1500
 b  = 3400    bd  = 3000    be  = 3500    bde   = 4000
 ab = 5500    abd = 6100    abe = 6200    abde  = 6500
 c  =  600    cd  =  800    ce  =  600    cde   = 1500
 ac = 1000    acd = 1100    ace = 1200    acde  = 2000
 bc = 3000    bcd = 3300    bce = 3000    bcde  = 3400
abc = 5300    abcd = 6000   abce = 5500   abcde = 6300

13-16. An experiment described by M. G. Natrella in the National Bureau of Standards
Handbook of Experimental Statistics (No. 91, 1963) involves flame-testing fabrics after
applying fire-retardant treatments. There are four factors: type of fabric (A), type of fire-
retardant treatment (B), laundering condition (C—the low level is no laundering, the high
level is after one laundering), and the method of conducting the flame test (D). All factors
are run at two levels, and the response variable is the inches of fabric burned on a standard
size test sample. The data are

(1) = 42    d    = 40
 a  = 31    ad   = 30
 b  = 45    bd   = 50
 ab = 29    abd  = 25
 c  = 39    cd   = 40
 ac = 28    acd  = 25
 bc = 46    bcd  = 50
abc = 32    abcd = 23

(a) Estimate the effects and prepare a normal probability plot of the effects.
(b) Construct a normal probability plot of the residuals and comment on the results.
(c) Construct an analysis of variance table assuming that three- and four-factor interac-
tions are negligible.

13-17. Consider the data from the first replicate of Exercise 13-10. Suppose that these
observations could not all be run under the same conditions. Set up a design to run these
observations in two blocks of four observations each, with ABC confounded. Analyze the
data.

13-18. Consider the data from the first replicate of Exercise 13-12. Construct a design
with two blocks of eight observations each, with ABCD confounded. Analyze the data.

13-19. Repeat Exercise 13-18 assuming that four blocks are required. Confound ABD and
ABC (and consequently CD) with blocks.

13-20. Construct a 2^5 design in four blocks. Select the effects to be confounded so that we
confound the highest possible interactions with blocks.

13-21. An article in Industrial and Engineering Chemistry ("Factorial Experiments in
Pilot Plant Studies," 1951, p. 1300) reports on an experiment to investigate the effects of
temperature (A), gas throughput (B), and concentration (C) on the strength of product
solution in a recirculation unit. Two blocks were used with ABC confounded, and the
experiment was replicated twice. The data are as follows:

             Replicate 1               Replicate 2
         Block 1    Block 2       Block 1    Block 2
           (1)         a            (1)         a
           ab          b            ab          b
           ac          c            ac          c
           bc         abc           bc         abc

(a) Analyze the data from this experiment.
(b) Plot the residuals on normal probability paper and against the predicted values. Com-
ment on the plots obtained.
(c) Comment on the efficiency of this design. Note that we have replicated the experiment
twice, yet we have no information on the ABC interaction.
(d) Suggest a better design; specifically, one that would provide some information on all
interactions.

13-22. R. D. Snee ("Experimenting with a Large Number of Variables," in Experiments in
Industry: Design, Analysis and Interpretation of Results, by R. D. Snee, L. B. Hare, and J.
B. Trout, Editors, ASQC, 1985) describes an experiment in which a 2^{5-1} design with
I = ABCDE was used to investigate the effects of five factors on the color of a chemical
product. The factors are A = solvent/reactant, B = catalyst/reactant, C = temperature,
D = reactant purity, and E = reactant pH. The results obtained are as follows:

  e  = -0.63     d     = 6.79
  a  =  2.51     ade   = 6.47
  b  = -2.68     bde   = 3.45
 abe =  1.66     abd   = 5.68
  c  =  2.06     cde   = 5.22
 ace =  1.22     acd   = 4.38
 bce = -2.09     bcd   = 4.30
 abc =  1.93     abcde = 4.05

(a) Prepare a normal probability plot of the effects. Which factors are active?
(b) Calculate the residuals. Construct a normal probability plot of the residuals and plot
the residuals versus the fitted values. Comment on the plots.
(c) If any factors are negligible, collapse the 2^{5-1} design into a full factorial in the
active factors. Comment on the resulting design, and interpret the results.

13-23. An article in the Journal of Quality Technology (Vol. 17, 1985, p. 198) describes
the use of a replicated fractional factorial to investigate the effects of five factors on the
free height of leaf springs used in an automotive application. The factors are A = furnace
temperature, B = heating time, C = transfer time, D = hold down time, and E = quench oil
temperature. The data are shown below.

 A    B    C    D    E         Free Height
 -    -    -    -    -      7.78, 7.78, 7.81
 +    -    -    +    -      8.15, 8.18, 7.88
 -    +    -    +    -      7.50, 7.56, 7.50
 +    +    -    -    -      7.59, 7.56, 7.75
 -    -    +    +    -      7.54, 8.00, 7.88
 +    -    +    -    -      7.69, 8.09, 8.06
 -    +    +    -    -      7.56, 7.52, 7.44
 +    +    +    +    -      7.56, 7.81, 7.69
 -    -    -    -    +      7.50, 7.25, 7.12
 +    -    -    +    +      7.88, 7.88, 7.44
 -    +    -    +    +      7.50, 7.56, 7.50
 +    +    -    -    +      7.63, 7.75, 7.56
 -    -    +    +    +      7.32, 7.44, 7.44
 +    -    +    -    +      7.56, 7.69, 7.62
 -    +    +    -    +      7.18, 7.18, 7.25
 +    +    +    +    +      7.81, 7.50, 7.59

(a) What is the generator for this fraction? Write out the alias structure.
(b) Analyze the data. What factors influence mean free height?
(c) Calculate the range of free height for each run. Is there any indication that any of these
factors affects variability in free height?
(d) Analyze the residuals from this experiment and comment on your findings.

13-24. An article in Industrial and Engineering Chemistry ("More on Planning Experi-
ments to Increase Research Efficiency," 1970, p. 60) uses a 2^{5-2} design to investigate
the effects of A = condensation temperature, B = amount of material 1, C = solvent vol-
ume, D = condensation time, and E = amount of material 2 on yield. The results obtained
are as follows:

 e  = 23.2    ad = 16.9    cd  = 23.8    bde   = 16.8
 ab = 15.5    bc = 16.2    ace = 23.4    abcde = 18.1

(a) Verify that the design generators used were I = ACE and I = BDE.
(b) Write down the complete defining relation and the aliases from this design.
(c) Estimate the main effects.
(d) Prepare an analysis of variance table. Verify that the AB and AD interactions are avail-
able to use as error.
(e) Plot the residuals versus the fitted values. Also construct a normal probability plot of
the residuals. Comment on the results.

13-25. An article in Cement and Concrete Research (2001, p. 1213) describes an experi-
ment to investigate the effects of four metal oxides on several cement properties. The four
factors are all run at two levels, and one response of interest is the mean bulk density
(g/cm^3). The four factors and their levels are

                       Low Level    High Level
Factor                    (-1)         (+1)
A: % Fe2O3                  0            30
B: % ZnO                    0            15
C: % PbO                    0            2.5
D: % Cr2O3                  0            2.5

Typical results from this type of experiment are given in the following table:

Run     A      B      C      D     Density
 1     -1     -1     -1      1      2.001
 2      1     -1     -1     -1      2.062
 3     -1      1     -1     -1      2.019
 4      1      1     -1      1      2.059
 5     -1     -1      1     -1      1.990
 6      1     -1      1      1      2.076
 7     -1      1      1      1      2.038
 8      1      1      1     -1      2.118

(a) What is the generator for this fraction?
(b) Analyze the data. What factors influence mean bulk density?
(c) Analyze the residuals from this experiment and comment on your findings.

13-26. Consider the 2^{6-2} design in Table 13-27. Suppose that after analyzing the origi-
nal data, we find that factors C and E can be dropped. What type of 2^k design is left in the
remaining variables?

13-27. Consider the 2^{6-2} design in Table 13-27. Suppose that after the original data
analysis, we find that factors D and F can be dropped. What type of 2^k design is left in the
remaining variables? Compare the results with Exercise 13-26. Can you explain why the
answers are different?

13-28. Suppose that in Exercise 13-12 it was possible to run only a one-half fraction of
the 2^4 design. Construct the design and perform the statistical analysis, using the data
from replicate I.

13-29. Suppose that in Exercise 13-15 only a one-half fraction of the 2^5 design could be
run. Construct the design and perform the analysis.

13-30. Consider the data in Exercise 13-15. Suppose that only a one-quarter fraction of the
2^5 design could be run. Construct the design and analyze the data.

13-31. Construct a 2_III^{7-4} fractional factorial design. Write down the aliases, assum-
ing that only main effects and two-factor interactions are of interest.
Chapter 14

Simple Linear Regression and Correlation

In many problems there are two or more variables that are inherently related, and it is nec-
essary to explore the nature of this relationship. Regression analysis is a statistical technique
for modeling and investigating the relationship between two or more variables. For exam-
ple, in a chemical process, suppose that the yield of product is related to the process oper-
ating temperature. Regression analysis can be used to build a model that expresses yield as
a function of temperature. This model can then be used to predict yield at a given temper-
ature level. It could also be used for process optimization or process control purposes.
In general, suppose that there is a single dependent variable, or response, y, that is
related to k independent, or regressor, variables, say x₁, x₂, ..., xₖ. The response variable y
is a random variable, while the regressor variables x₁, x₂, ..., xₖ are measured with negligi-
ble error. The xᵢ are called mathematical variables and are frequently controlled by the
experimenter. Regression analysis can also be used in situations where y, x₁, x₂, ..., xₖ are
jointly distributed random variables, such as when the data are collected as different meas-
urements on a common experimental unit. The relationship between these variables is char-
acterized by a mathematical model called a regression equation. More precisely, we speak
of the regression of y on x₁, x₂, ..., xₖ. This regression model is fitted to a set of data. In some
instances, the experimenter will know the exact form of the true functional relationship
between y and x₁, x₂, ..., xₖ, say y = φ(x₁, x₂, ..., xₖ). However, in most cases, the true func-
tional relationship is unknown, and the experimenter will choose an appropriate function to
approximate φ. A polynomial model is usually employed as the approximating function.
In this chapter, we discuss the case where only a single regressor variable, x, is of inter-
est. Chapter 15 will present the case involving more than one regressor variable.

14-1 SIMPLE LINEAR REGRESSION


We wish to determine the relationship between a single regressor variable x and a response
variable y. The regressor variable x is assumed to be a continuous mathematical variable,
controllable by the experimenter. Suppose that the true relationship between y and x is a
straight line, and that the observation y at each level of x is a random variable. Now, the
expected value of y for each value of x is

    E(y|x) = β₀ + β₁x,                                              (14-1)

where the intercept β₀ and the slope β₁ are unknown constants. We assume that each obser-
vation, y, can be described by the model

    y = β₀ + β₁x + ε,                                               (14-2)


where ε is a random error with mean zero and variance σ². The {εᵢ} are also assumed to be
uncorrelated random variables. The regression model of equation 14-2 involving only a sin-
gle regressor variable x is often called the simple linear regression model.
Suppose that we have n pairs of observations, say (y₁, x₁), (y₂, x₂), ..., (yₙ, xₙ). These
data may be used to estimate the unknown parameters β₀ and β₁ in equation 14-2. Our esti-
mation procedure will be the method of least squares. That is, we will estimate β₀ and β₁ so
that the sum of squares of the deviations between the observations and the regression line
is a minimum. Now using equation 14-2, we may write

    yᵢ = β₀ + β₁xᵢ + εᵢ,    i = 1, 2, ..., n,                       (14-3)

and the sum of squares of the deviations of the observations from the true regression line is

    L = Σ_{i=1}^{n} εᵢ² = Σ_{i=1}^{n} (yᵢ - β₀ - β₁xᵢ)².            (14-4)

The least-squares estimators of β₀ and β₁, say β̂₀ and β̂₁, must satisfy

    ∂L/∂β₀ = -2 Σ_{i=1}^{n} (yᵢ - β̂₀ - β̂₁xᵢ) = 0,
                                                                    (14-5)
    ∂L/∂β₁ = -2 Σ_{i=1}^{n} (yᵢ - β̂₀ - β̂₁xᵢ)xᵢ = 0.

Simplifying these two equations yields

    nβ̂₀ + β̂₁ Σ_{i=1}^{n} xᵢ = Σ_{i=1}^{n} yᵢ,
                                                                    (14-6)
    β̂₀ Σ_{i=1}^{n} xᵢ + β̂₁ Σ_{i=1}^{n} xᵢ² = Σ_{i=1}^{n} yᵢxᵢ.

Equations 14-6 are called the least-squares normal equations. The solution to the normal
equations is

    β̂₀ = ȳ - β̂₁x̄,                                                 (14-7)

    β̂₁ = [Σ_{i=1}^{n} yᵢxᵢ - (Σ_{i=1}^{n} yᵢ)(Σ_{i=1}^{n} xᵢ)/n] / [Σ_{i=1}^{n} xᵢ² - (Σ_{i=1}^{n} xᵢ)²/n],    (14-8)

where ȳ = (1/n) Σ_{i=1}^{n} yᵢ and x̄ = (1/n) Σ_{i=1}^{n} xᵢ. Therefore, equations 14-7 and 14-8 are the least-
squares estimators of the intercept and slope, respectively. The fitted simple linear regres-
sion model is

    ŷ = β̂₀ + β̂₁x.                                                  (14-9)

Notationally, it is convenient to give special symbols to the numerator and denomina-
tor of equation 14-8. That is, let

    Sxx = Σ_{i=1}^{n} xᵢ(xᵢ - x̄) = Σ_{i=1}^{n} xᵢ² - (Σ_{i=1}^{n} xᵢ)²/n    (14-10)

and

    Sxy = Σ_{i=1}^{n} yᵢ(xᵢ - x̄) = Σ_{i=1}^{n} yᵢxᵢ - (Σ_{i=1}^{n} yᵢ)(Σ_{i=1}^{n} xᵢ)/n.    (14-11)

We call Sxx the corrected sum of squares of x and Sxy the corrected sum of cross products of
x and y. The extreme right-hand sides of equations 14-10 and 14-11 are the usual computa-
tional formulas. Using this new notation, the least-squares estimator of the slope is

    β̂₁ = Sxy/Sxx.                                                   (14-12)

Example 14-1
A chemical engineer is investigating the effect of process operating temperature on product yield. The
study results in the following data:

Temperature, °C (x)   100  110  120  130  140  150  160  170  180  190
Yield, % (y)           45   51   54   61   66   70   74   78   85   89

These pairs of points are plotted in Fig. 14-1. Such a display is called a scatter diagram. Examination
of this scatter diagram indicates that there is a strong relationship between yield and temperature, and
the tentative assumption of the straight-line model y = β₀ + β₁x + ε appears to be reasonable. The fol-
lowing quantities may be computed:

    n = 10,   Σxᵢ = 1450,   Σyᵢ = 673,   x̄ = 145,   ȳ = 67.3,
    Σxᵢ² = 218,500,   Σyᵢ² = 47,225,   Σxᵢyᵢ = 101,570.

From equations 14-10 and 14-11, we find

    Sxx = Σxᵢ² - (Σxᵢ)²/n = 218,500 - (1450)²/10 = 8250

and

    Sxy = Σxᵢyᵢ - (Σxᵢ)(Σyᵢ)/n = 101,570 - (1450)(673)/10 = 3985.

Therefore, the least-squares estimates of the slope and intercept are

    β̂₁ = Sxy/Sxx = 3985/8250 = 0.483

and

    β̂₀ = ȳ - β̂₁x̄ = 67.3 - (0.483)145 = -2.739.

The fitted simple linear regression model is

    ŷ = -2.739 + 0.483x.

Figure 14-1 Scatter diagram of yield versus temperature.
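For readers who want to verify these numbers, the fit can be reproduced in a few lines of
Python using only equations 14-7, 14-8, 14-10, and 14-11 and the data above; nothing is
assumed beyond those formulas.

    x = [100, 110, 120, 130, 140, 150, 160, 170, 180, 190]
    y = [45, 51, 54, 61, 66, 70, 74, 78, 85, 89]
    n = len(x)

    # Corrected sums of squares and cross products (equations 14-10 and 14-11)
    Sxx = sum(xi * xi for xi in x) - sum(x) ** 2 / n                   # 8250.0
    Sxy = sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y) / n   # 3985.0

    b1 = Sxy / Sxx                      # slope estimate, 0.483...
    b0 = sum(y) / n - b1 * sum(x) / n   # intercept estimate, -2.739...
    print(f"yhat = {b0:.3f} + {b1:.3f} x")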

Since we have only tentatively assumed the straight-line model to be appropriate, we
will want to investigate the adequacy of the model. The statistical properties of the least-
squares estimators β̂₀ and β̂₁ are useful in assessing model adequacy. The estimators β̂₀ and
β̂₁ are random variables, since they are just linear combinations of the yᵢ, and the yᵢ are ran-
dom variables. We will investigate the bias and variance properties of these estimators. Con-
sider first β̂₁. The expected value of β̂₁ is

    E(β̂₁) = E[(1/Sxx) Σ_{i=1}^{n} yᵢ(xᵢ - x̄)]
          = (1/Sxx) E[Σ_{i=1}^{n} (xᵢ - x̄)(β₀ + β₁xᵢ + εᵢ)]
          = (1/Sxx) [β₀ Σ_{i=1}^{n} (xᵢ - x̄) + β₁ Σ_{i=1}^{n} xᵢ(xᵢ - x̄)]
          = β₁,

since Σ_{i=1}^{n} (xᵢ - x̄) = 0, Σ_{i=1}^{n} xᵢ(xᵢ - x̄) = Sxx, and by assumption E(εᵢ) = 0. Thus, β̂₁ is an unbi-
ased estimator of the true slope β₁. Now consider the variance of β̂₁. Since we have
assumed that V(εᵢ) = σ², it follows that V(yᵢ) = σ², and

    V(β̂₁) = V[(1/Sxx) Σ_{i=1}^{n} yᵢ(xᵢ - x̄)] = (1/Sxx²) V[Σ_{i=1}^{n} yᵢ(xᵢ - x̄)].    (14-13)

The random variables {yᵢ} are uncorrelated because the {εᵢ} are uncorrelated. Therefore, the
variance of the sum in equation 14-13 is just the sum of the variances, and the variance of
each term in the sum, say V[yᵢ(xᵢ - x̄)], is σ²(xᵢ - x̄)². Thus,

    V(β̂₁) = (1/Sxx²) σ² Σ_{i=1}^{n} (xᵢ - x̄)² = σ²/Sxx.            (14-14)

Using a similar approach, we can show that

    E(β̂₀) = β₀   and   V(β̂₀) = σ² [1/n + x̄²/Sxx].                 (14-15)

Note that β̂₀ is an unbiased estimator of β₀. The covariance of β̂₀ and β̂₁ is not zero; in fact,
Cov(β̂₀, β̂₁) = -σ²x̄/Sxx.
It is usually necessary to obtain an estimate of σ². The difference between the obser-
vation yᵢ and the corresponding predicted value ŷᵢ, say eᵢ = yᵢ - ŷᵢ, is called a residual. The
sum of the squares of the residuals, or the error sum of squares, would be

    SS_E = Σ_{i=1}^{n} eᵢ² = Σ_{i=1}^{n} (yᵢ - ŷᵢ)².                (14-16)

A more convenient computing formula for SS_E may be found by substituting the fitted
model ŷᵢ = β̂₀ + β̂₁xᵢ into equation 14-16 and simplifying. The result is

    SS_E = Σ_{i=1}^{n} yᵢ² - nȳ² - β̂₁Sxy,

and if we let Syy = Σ_{i=1}^{n} yᵢ² - nȳ² = Σ_{i=1}^{n} (yᵢ - ȳ)², then we may write SS_E as

    SS_E = Syy - β̂₁Sxy.                                            (14-17)


The expected value of the error sum of squares SS_E is E(SS_E) = (n - 2)σ². Therefore,

    σ̂² = SS_E/(n - 2)                                              (14-18)

is an unbiased estimator of σ².
Regression analysis is widely used and frequently misused. There are several common
abuses of regression that should be briefly mentioned. Care should be taken in selecting vari-
ables with which to construct regression models and in determining the form of the approxi-
mating function. It is quite possible to develop statistical relationships among variables that
are completely unrelated in a practical sense. For example, one might attempt to relate the
shear strength of spot welds with the number of boxes of computer paper used by the data
processing department. A straight line may even appear to provide a good fit to the data, but
the relationship is an unreasonable one on which to rely. A strong observed association
between variables does not necessarily imply that a causal relationship exists between those
variables. Designed experiments are the only way to determine causal relationships.
Regression relationships are valid only for values of the independent variable within
the range of the original data. The linear relationship that we have tentatively assumed may
be valid over the original range of x, but it may be unlikely to remain so as we encounter x
values beyond that range. In other words, as we move beyond the range of values of x for
which data were collected, we become less certain about the validity of the assumed model.
Regression models are not necessarily valid for extrapolation purposes.
Finally, one occasionally feels that the model y = β₁x + ε is appropriate. The omission of
the intercept from this model implies, of course, that y = 0 when x = 0. This is a very strong
assumption that often is unjustified. Even when two variables, such as the height and weight
of men, would seem to qualify for the use of this model, we would usually obtain a better fit
by including the intercept, because of the limited range of data on the independent variable.

14-2 HYPOTHESIS TESTING IN SIMPLE LINEAR REGRESSION


An important part of assessing the adequacy of the simple linear regression model is test-
ing statistical hypotheses about the model parameters and constructing certain confidence
intervals. Hypothesis testing is discussed in this section, and Section 14-3 presents methods
for constructing confidence intervals. To test hypotheses about the slope and intercept of the
regression model, we must make the additional assumption that the error component εᵢ is
normally distributed. Thus, the complete assumptions are that the errors are NID(0, σ²)
(normal and independently distributed). Later we will discuss how these assumptions can
be checked through residual analysis.
Suppose we wish to test the hypothesis that the slope equals a constant, say β₁,₀. The
appropriate hypotheses are

    H₀: β₁ = β₁,₀,
    H₁: β₁ ≠ β₁,₀,                                                  (14-19)

where we have assumed a two-sided alternative. Now since the εᵢ are NID(0, σ²), it follows
directly that the observations yᵢ are NID(β₀ + β₁xᵢ, σ²). From equation 14-8 we observe that
β̂₁ is a linear combination of the observations yᵢ. Thus, β̂₁ is a linear combination of inde-
pendent normal random variables and, consequently, β̂₁ is N(β₁, σ²/Sxx), using the bias and
variance properties of β̂₁ from Section 14-1. Furthermore, β̂₁ is independent of MS_E. Then,
as a result of the normality assumption, the statistic

    t₀ = (β̂₁ - β₁,₀) / √(MS_E/Sxx)                                  (14-20)

follows the t distribution with n - 2 degrees of freedom under H₀: β₁ = β₁,₀. We would reject
H₀: β₁ = β₁,₀ if

    |t₀| > t_{α/2, n-2},                                            (14-21)

where t₀ is computed from equation 14-20.
A similar procedure can be used to test hypotheses about the intercept. To test

    H₀: β₀ = β₀,₀,
    H₁: β₀ ≠ β₀,₀,                                                  (14-22)

we would use the statistic

    t₀ = (β̂₀ - β₀,₀) / √[MS_E(1/n + x̄²/Sxx)]                       (14-23)

and reject the null hypothesis if |t₀| > t_{α/2, n-2}.
A very important special case of the hypothesis of equation 14-19 is

    H₀: β₁ = 0,
    H₁: β₁ ≠ 0.                                                     (14-24)

This hypothesis relates to the significance of regression. Failing to reject H₀: β₁ = 0 is
equivalent to concluding that there is no linear relationship between x and y. This situation
is illustrated in Fig. 14-2. Note that this may imply either that x is of little value in explain-
ing the variation in y and that the best estimator of y for any x is ŷ = ȳ (Fig. 14-2a) or that
the true relationship between x and y is not linear (Fig. 14-2b). Alternatively, if H₀: β₁ = 0
is rejected, this implies that x is of value in explaining the variability in y. This is illustrated
in Fig. 14-3. However, rejecting H₀: β₁ = 0 could mean either that the straight-line model is
adequate (Fig. 14-3a) or that even though there is a linear effect of x, better results could be
obtained with the addition of higher-order polynomial terms in x (Fig. 14-3b).
The test procedure for H₀: β₁ = 0 may be developed from two approaches. The first
approach starts with the following partitioning of the total corrected sum of squares for y:

    Syy = Σ_{i=1}^{n} (yᵢ - ȳ)² = Σ_{i=1}^{n} (ŷᵢ - ȳ)² + Σ_{i=1}^{n} (yᵢ - ŷᵢ)².    (14-25)

Figure 14-2 The hypothesis H₀: β₁ = 0 is not rejected.

Figure 14-3 The hypothesis H,: B, = 0 is rejected.

The two components of Syy measure, respectively, the amount of variability in the yᵢ
accounted for by the regression line, and the residual variation left unexplained by the
regression line. We usually call SS_E = Σ_{i=1}^{n} (yᵢ - ŷᵢ)² the error sum of squares and
SS_R = Σ_{i=1}^{n} (ŷᵢ - ȳ)² the regression sum of squares. Thus, equation 14-25 may be written

    Syy = SS_R + SS_E.                                              (14-26)

Comparing equation 14-26 with equation 14-17, we note that the regression sum of squares
SS_R is

    SS_R = β̂₁Sxy.                                                   (14-27)

Syy has n - 1 degrees of freedom, and SS_R and SS_E have 1 and n - 2 degrees of freedom,
respectively.
We may show that E[SS_E/(n - 2)] = σ² and E(SS_R) = σ² + β₁²Sxx, and that SS_E and SS_R
are independent. Thus, if H₀: β₁ = 0 is true, the statistic

    F₀ = [SS_R/1] / [SS_E/(n - 2)] = MS_R/MS_E                       (14-28)

follows the F distribution with 1 and n - 2 degrees of freedom, and we would reject H₀ if
F₀ > F_{α,1,n-2}. The test procedure is usually arranged in an analysis of variance table (or
ANOVA), such as Table 14-1.
The test for significance of regression may also be developed from equation 14-20 with
β₁,₀ = 0, say

    t₀ = β̂₁ / √(MS_E/Sxx).                                          (14-29)

Squaring both sides of equation 14-29, we obtain

    t₀² = β̂₁²Sxx/MS_E = β̂₁Sxy/MS_E = SS_R/MS_E.                     (14-30)

Table 14-1 Analysis of Variance for Testing Significance of Regression

Source of           Sum of                 Degrees of     Mean
Variation           Squares                Freedom        Square       F₀
Regression          SS_R = β̂₁Sxy              1            MS_R     MS_R/MS_E
Error or residual   SS_E = Syy - β̂₁Sxy      n - 2          MS_E
Total               Syy                     n - 1

Table 14-2 Testing for Significance of Regression, Example 14-2

Source of      Sum of      Degrees of      Mean
Variation      Squares     Freedom         Square        F₀
Regression     1924.87         1          1924.87     2138.74
Error             7.23         8             0.90
Total          1932.10         9

Note that t₀² in equation 14-30 is identical to F₀ in equation 14-28. It is true, in general, that
the square of a t random variable with f degrees of freedom is an F random variable, with
one and f degrees of freedom in the numerator and denominator, respectively. Thus, the test
using t₀ is equivalent to the test based on F₀.

Example 14-2
We will test the model developed in Example 14-1 for significance of regression. The fitted model is
ŷ = -2.739 + 0.483x, and Syy is computed as

    Syy = Σ_{i=1}^{n} yᵢ² - (Σ_{i=1}^{n} yᵢ)²/n = 47,225 - (673)²/10 = 1932.10.

The regression sum of squares is

    SS_R = β̂₁Sxy = (0.483)(3985) = 1924.87,

and the error sum of squares is

    SS_E = Syy - SS_R = 1932.10 - 1924.87 = 7.23.

The analysis of variance for testing H₀: β₁ = 0 is summarized in Table 14-2. Noting that F₀ = 2138.74
> F_{0.01,1,8} = 11.26, we reject H₀ and conclude that β₁ ≠ 0.
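The entries of Table 14-2 can be verified the same way as the fit itself; here is a short
Python continuation of the earlier sketch. The small difference in F₀ comes from carrying
more decimal places than the table's rounded mean square.

    b1 = 3985 / 8250                 # slope from Example 14-1
    Syy = 47225 - 673 ** 2 / 10      # 1932.1
    SSR = b1 * 3985                  # regression sum of squares, eq. 14-27
    SSE = Syy - SSR                  # error sum of squares, about 7.22
    F0 = SSR / (SSE / 8)             # about 2131.6; Table 14-2's 2138.74 uses MS_E = 0.90
    print(round(SSR, 2), round(SSE, 2), round(F0, 1))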

14-3 INTERVAL ESTIMATION IN SIMPLE LINEAR REGRESSION


In addition to point estimates of the slope and intercept, it is possible to obtain confidence
interval estimates of these parameters. The width of these confidence intervals is a measure
of the overall quality of the regression line. If the εᵢ are normally and independently dis-
tributed, then

    (β̂₁ - β₁)/√(MS_E/Sxx)   and   (β̂₀ - β₀)/√[MS_E(1/n + x̄²/Sxx)]

are both distributed as t with n - 2 degrees of freedom. Therefore, a 100(1 - α)% confidence
interval on the slope β₁ is given by

    β̂₁ - t_{α/2, n-2} √(MS_E/Sxx) ≤ β₁ ≤ β̂₁ + t_{α/2, n-2} √(MS_E/Sxx).    (14-31)

Similarly, a 100(1 - α)% confidence interval on the intercept β₀ is

    β̂₀ - t_{α/2, n-2} √[MS_E(1/n + x̄²/Sxx)] ≤ β₀ ≤ β̂₀ + t_{α/2, n-2} √[MS_E(1/n + x̄²/Sxx)].    (14-32)

Example 14-3
We will find a 95% confidence interval on the slope of the regression line using the data in Example
14-1. Recall that β̂₁ = 0.483, Sxx = 8250, and MS_E = 0.90 (see Table 14-2). Then, from equation 14-31
we find

    β̂₁ - t_{0.025,8} √(MS_E/Sxx) ≤ β₁ ≤ β̂₁ + t_{0.025,8} √(MS_E/Sxx),

or

    0.483 - 2.306 √(0.90/8250) ≤ β₁ ≤ 0.483 + 2.306 √(0.90/8250).

This simplifies to

    0.459 ≤ β₁ ≤ 0.507.
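The interval itself is a one-line computation; a small Python check of equation 14-31 with
the quantities quoted above:

    import math

    b1, MSE, Sxx = 0.483, 0.90, 8250.0
    t = 2.306                           # t_{0.025,8} from a t table
    half = t * math.sqrt(MSE / Sxx)     # half-width, about 0.024
    print(b1 - half, b1 + half)         # about (0.459, 0.507)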

A confidence interval may be constructed for the mean response at a specified x, say
x₀. This is a confidence interval about E(y|x₀) and is often called a confidence interval about
the regression line. Since E(y|x₀) = β₀ + β₁x₀, we may obtain a point estimate of E(y|x₀) from
the fitted model as

    Ê(y|x₀) = ŷ₀ = β̂₀ + β̂₁x₀.

Now ŷ₀ is an unbiased point estimator of E(y|x₀), since β̂₀ and β̂₁ are unbiased estimators of
β₀ and β₁. The variance of ŷ₀ is

    V(ŷ₀) = σ² [1/n + (x₀ - x̄)²/Sxx],

and ŷ₀ is normally distributed, as β̂₀ and β̂₁ are normally distributed. Therefore, a 100(1 -
α)% confidence interval about the true regression line at x = x₀ may be computed from

    ŷ₀ - t_{α/2, n-2} √[MS_E(1/n + (x₀ - x̄)²/Sxx)]
        ≤ E(y|x₀) ≤ ŷ₀ + t_{α/2, n-2} √[MS_E(1/n + (x₀ - x̄)²/Sxx)].    (14-33)

The width of the confidence interval for E(y|x₀) is a function of x₀. The interval width is a minimum
for x₀ = x̄ and widens as |x₀ - x̄| increases. This widening is one reason why using regression to extrap-
olate is ill-advised.

Example 14-4
We will construct a 95% confidence interval about the regression line for the data in Example 14-1.
The fitted model is ŷ₀ = -2.739 + 0.483x₀, and the 95% confidence interval on E(y|x₀) is found from
equation 14-33 as

    ŷ₀ ± 2.306 √[0.90(1/10 + (x₀ - 145)²/8250)].

The fitted values ŷ₀ and the corresponding 95% confidence limits for the points x₀ = xᵢ, i = 1, 2, ...,
10, are displayed in Table 14-3. To illustrate the use of this table, we may find the 95% confidence
interval on the true mean process yield at x₀ = 140°C (say) as

    64.88 - 0.71 ≤ E(y|x₀ = 140) ≤ 64.88 + 0.71

or

    64.17 ≤ E(y|x₀ = 140) ≤ 65.59.

The fitted model and the 95% confidence interval about the regression line are shown in Fig. 14-4.

Table 14-3 Confidence Interval about the Regression Line, Example 14-4

x₀             100    110    120    130    140    150    160    170    180    190
ŷ₀           45.56  50.39  55.22  60.05  64.88  69.72  74.55  79.38  84.21  89.04
95% confidence
limits       ±1.30  ±1.10  ±0.93  ±0.79  ±0.71  ±0.71  ±0.79  ±0.93  ±1.10  ±1.30

Figure 14-4 A 95% confidence interval about the regression line for Example 14-4.
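The half-widths in Table 14-3 follow directly from equation 14-33. Here is a short Python
sketch using the Example 14-1 quantities; tiny differences from the table (for instance,
±1.29 versus ±1.30 at x₀ = 100) are due to the rounding of MS_E.

    import math

    MSE, n, xbar, Sxx, t = 0.90, 10, 145.0, 8250.0, 2.306
    for x0 in range(100, 200, 10):
        half = t * math.sqrt(MSE * (1 / n + (x0 - xbar) ** 2 / Sxx))
        print(x0, round(half, 2))   # widest at the ends, narrowest at x0 = xbar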

14-4 PREDICTION OF NEW OBSERVATIONS

An important application of regression analysis is predicting new or future observations y
corresponding to a specified level of the regressor variable x. If x₀ is the value of the regres-
sor variable of interest, then

    ŷ₀ = β̂₀ + β̂₁x₀                                                 (14-34)

is the point estimate of the new or future value of the response y₀.
Now consider obtaining an interval estimate of this future observation y₀. This new
observation is independent of the observations used to develop the regression model. There-
fore, the confidence interval about the regression line, equation 14-33, is inappropriate,
since it is based only on the data used to fit the regression model. The confidence interval
about the regression line refers to the true mean response at x = x₀ (that is, a population
parameter), not to future observations.
Let y₀ be the future observation at x = x₀, and let ŷ₀ given by equation 14-34 be the esti-
mator of y₀. Note that the random variable

    ψ = y₀ - ŷ₀

is normally distributed with mean zero and variance

    V(ψ) = V(y₀ - ŷ₀) = σ² [1 + 1/n + (x₀ - x̄)²/Sxx],

because y₀ is independent of ŷ₀. Thus, the 100(1 - α)% prediction interval on a future obser-
vation at x₀ is

    ŷ₀ - t_{α/2, n-2} √[MS_E(1 + 1/n + (x₀ - x̄)²/Sxx)]
        ≤ y₀ ≤ ŷ₀ + t_{α/2, n-2} √[MS_E(1 + 1/n + (x₀ - x̄)²/Sxx)].    (14-35)

Notice that the prediction interval is of minimum width at x₀ = x̄ and widens as |x₀ - x̄|
increases. By comparing equation 14-35 with equation 14-33, we observe that the predic-
tion interval at x₀ is always wider than the confidence interval at x₀. This results because the
prediction interval depends on both the error from the estimated model and the error asso-
ciated with future observations (σ²).
We may also find a 100(1 - α)% prediction interval on the mean of k future observa-
tions on the response at x = x₀. Let ȳ₀ be the mean of k future observations at x = x₀. The
100(1 - α)% prediction interval on ȳ₀ is

    ŷ₀ - t_{α/2, n-2} √[MS_E(1/k + 1/n + (x₀ - x̄)²/Sxx)]
        ≤ ȳ₀ ≤ ŷ₀ + t_{α/2, n-2} √[MS_E(1/k + 1/n + (x₀ - x̄)²/Sxx)].    (14-36)

To illustrate the construction of a prediction interval, suppose we use the data in Exam-
ple 14-1 and find a 95% prediction interval on the next observation on the process yield at
x₀ = 160°C. Using equation 14-35, we find that the prediction interval is

    74.55 - 2.306 √[0.90(1 + 1/10 + (160 - 145)²/8250)]
        ≤ y₀ ≤ 74.55 + 2.306 √[0.90(1 + 1/10 + (160 - 145)²/8250)],

which simplifies to

    72.21 ≤ y₀ ≤ 76.89.
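A quick Python check of this prediction interval (equation 14-35), using the Example 14-1
quantities; the answer agrees with the interval above up to rounding.

    import math

    y0_hat = -2.739 + 0.483 * 160          # point prediction, about 74.54
    half = 2.306 * math.sqrt(0.90 * (1 + 1 / 10 + (160 - 145) ** 2 / 8250))
    print(y0_hat - half, y0_hat + half)    # about (72.2, 76.9)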

14-5 MEASURING THE ADEQUACY OF THE REGRESSION MODEL


Fitting a regression model requires several assumptions. Estimation of the model parame-
ters requires the assumption that the errors are uncorrelated random variables with mean
zero and constant variance. Tests of hypotheses and interval estimation require that the
errors be normally distributed. In addition, we assume that the order of the model is correct;
that is, if we fit a first-order polynomial, then we are assuming that the phenomenon actu-
ally behaves in a first-order manner.
The analyst should always consider the validity of these assumptions to be doubtful
and conduct analyses to examine the adequacy of the model that has been tentatively enter-
tained. In this section we discuss methods useful in this respect.

14-5.1 Residual Analysis


We define the residuals as eᵢ = yᵢ - ŷᵢ, i = 1, 2, ..., n, where yᵢ is an observation and ŷᵢ is the
corresponding estimated value from the regression model. Analysis of the residuals is fre-
quently helpful in checking the assumption that the errors are NID(0, σ²) and in determin-
ing whether additional terms in the model would be useful.
As an approximate check of normality, the experimenter can construct a frequency his-
togram of the residuals or plot them on normal probability paper. It requires judgment to
assess the nonnormality of such plots. One may also standardize the residuals by comput-
ing dᵢ = eᵢ/√MS_E, i = 1, 2, ..., n. If the errors are NID(0, σ²), then approximately 95% of
the standardized residuals should fall in the interval (-2, +2). Residuals far outside this
interval may indicate the presence of an outlier, that is, an observation that is atypical of the
rest of the data. Various rules have been proposed for discarding outliers. However, some-
times outliers provide important information about unusual circumstances of interest to the
experimenter and should not be discarded. Therefore a detected outlier should be investi-
gated first, then discarded if warranted. For further discussion of outliers, see Montgomery,
Peck, and Vining (2001).
It is frequently helpful to plot the residuals (1) in time sequence (if known), (2) against
the ŷᵢ, and (3) against the independent variable x. These graphs will usually look like one of
the four general patterns shown in Fig. 14-5. The pattern in Fig. 14-5a represents normal-
ity, while those in Figs. 14-5b, c, and d represent anomalies. If the residuals appear as in
Fig. 14-5b, then the variance of the observations may be increasing with time or with the
magnitude of the yᵢ or xᵢ. If a plot of the residuals against time has the appearance of
Fig. 14-5b, then the variance of the observations is increasing with time.

Figure 14-5 Patterns for residual plots. (a) Satisfactory, (b) funnel, (c) double bow, (d) nonlinear.
[Adapted from Montgomery, Peck, and Vining (2001).]

Plots against ŷᵢ and xᵢ that look like Fig. 14-5c also indicate inequality of variance. Residual
plots that look like Fig. 14-5d indicate model inadequacy; that is, higher-order terms should
be added to the model.

Example 14-5
The residuals for the regression model in Example 14-1 are computed as follows:

    e₁ = 45.00 - 45.56 = -0.56,      e₆ = 70.00 - 69.72 = 0.28,
    e₂ = 51.00 - 50.39 = 0.61,       e₇ = 74.00 - 74.55 = -0.55,
    e₃ = 54.00 - 55.22 = -1.22,      e₈ = 78.00 - 79.38 = -1.38,
    e₄ = 61.00 - 60.05 = 0.95,       e₉ = 85.00 - 84.21 = 0.79,
    e₅ = 66.00 - 64.88 = 1.12,       e₁₀ = 89.00 - 89.04 = -0.04.

These residuals are plotted on normal probability paper in Fig. 14-6. Since the residuals fall approx-
imately along a straight line in Fig. 14-6, we conclude that there is no severe departure from normal-
ity. The residuals are also plotted against ŷᵢ in Fig. 14-7a and against xᵢ in Fig. 14-7b. These plots
indicate no serious model inadequacies.
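Both the residuals and the standardized residuals dᵢ are easy to compute from the fitted
model; a minimal Python sketch for this example (the values agree with those above up to
rounding).

    import math

    x = [100, 110, 120, 130, 140, 150, 160, 170, 180, 190]
    y = [45, 51, 54, 61, 66, 70, 74, 78, 85, 89]
    e = [yi - (-2.739 + 0.483 * xi) for xi, yi in zip(x, y)]   # residuals
    d = [ei / math.sqrt(0.90) for ei in e]    # standardized residuals, d_i = e_i / sqrt(MS_E)
    print([round(ei, 2) for ei in e])         # -0.56, 0.61, -1.22, 0.95, ...
    print(all(abs(di) < 2 for di in d))       # True: no candidate outliers here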

14-5.2 The Lack-of-Fit Test


Regression models are often fit to data when the true functional relationship is unknown.
Naturally, we would like to know whether the order of the model tentatively assumed is cor-
rect. This section will describe a test for the validity of this assumption.

Figure 14-6 Normal probability plot of residuals.

Figure 14-7 Residual plots for Example 14-5. (a) Plot against ŷᵢ; (b) plot against xᵢ.

The danger of using a regression model that is a poor approximation of the true func-
tional relationship is illustrated in Fig. 14-8. Obviously, a polynomial of degree two or
greater should have been used in this situation.

Figure 14-8 A regression model displaying lack of fit.

We present a test for the "goodness of fit" of the regression model. Specifically, the
hypotheses we wish to test are

    H₀: The model adequately fits the data,
    H₁: The model does not fit the data.

The test involves partitioning the error or residual sum of squares into the two components

    SS_E = SS_PE + SS_LOF,

where SS_PE is the sum of squares attributable to "pure" error and SS_LOF is the sum of
squares attributable to the lack of fit of the model. To compute SS_PE we must have repeated
observations on y for at least one level of x. Suppose that we have n total observations such
that

    y₁₁, y₁₂, ..., y₁ₙ₁    repeated observations at x₁,
    y₂₁, y₂₂, ..., y₂ₙ₂    repeated observations at x₂,
    ⋮
    yₘ₁, yₘ₂, ..., yₘₙₘ    repeated observations at xₘ.

Note that there are m distinct levels of x. The contribution to the pure-error sum of squares
at x₁ (say) would be

    Σ_{u=1}^{n₁} (y₁ᵤ - ȳ₁)².                                      (14-37)

The total sum of squares for pure error would be obtained by summing equation 14-37 over
all levels of x as

    SS_PE = Σ_{i=1}^{m} Σ_{u=1}^{nᵢ} (yᵢᵤ - ȳᵢ)².                  (14-38)

There are nₑ = Σ_{i=1}^{m} (nᵢ - 1) = n - m degrees of freedom associated with the pure-error
sum of squares. The sum of squares for lack of fit is simply

    SS_LOF = SS_E - SS_PE,                                          (14-39)

with n - 2 - nₑ = m - 2 degrees of freedom. The test statistic for lack of fit would then be

    F₀ = [SS_LOF/(m - 2)] / [SS_PE/(n - m)] = MS_LOF/MS_PE,          (14-40)

and we would reject H₀ if F₀ > F_{α, m-2, n-m}.

This test procedure may be easily introduced into the analysis of variance conducted
for the significance of regression. If the null hypothesis of model adequacy is rejected, then
the model must be abandoned and attempts made to find a more appropriate model. If H₀ is
not rejected, then there is no apparent reason to doubt the adequacy of the model, and MS_PE
and MS_LOF are often combined to estimate σ².

Example 14-6
Suppose we have the following data:

x    1.0   1.0   2.0   3.3   3.3   4.0   4.0   4.0   4.7   5.0
y    2.3   1.8   2.8   1.8   3.7   2.6   2.6   2.2   3.2   2.0

x    5.6   5.6   5.6   6.0   6.0   6.5   6.9
y    3.5   2.8   2.1   3.4   3.2   3.4   5.0

We may compute Syy = 10.97, Sxy = 13.62, Sxx = 52.32, ȳ = 2.847, and x̄ = 4.382. The regression
model is ŷ = 1.708 + 0.260x, and the regression sum of squares is SS_R = β̂₁Sxy = (0.260)(13.62) =
3.541. The pure-error sum of squares is computed as follows:

Level of x      Σ(yᵢᵤ - ȳᵢ)²      Degrees of Freedom
   1.0             0.1250                1
   3.3             1.8050                1
   4.0             0.1066                2
   5.6             0.9800                2
   6.0             0.0200                1
Total:             3.0366                7

The analysis of variance is summarized in Table 14-4. Since the lack-of-fit statistic F₀ = 1.27 <
F_{0.25,8,7} = 1.70, we cannot reject the hypothesis that the tentative model adequately describes the
data. We will pool the lack-of-fit and pure-error mean squares to form the denominator mean square
in the test for significance of regression. Also, since F₀ = 7.15 > F_{0.05,1,15} = 4.54, we conclude
that β₁ ≠ 0.

Table 14-4 Analysis of Variance for Example 14-6

Source of        Sum of      Degrees of      Mean
Variation        Squares     Freedom         Square      F₀
Regression        3.541          1            3.541      7.15
Residual          7.429         15            0.495
(Lack of fit)     4.392          8            0.549      1.27
(Pure error)      3.037          7            0.434
Total            10.970         16
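The pure-error partition in equations 14-38 and 14-39 is mechanical enough to verify by
grouping the repeated x levels; here is a minimal Python sketch over the Example 14-6
data, assuming nothing beyond the data above.

    from collections import defaultdict

    x = [1.0, 1.0, 2.0, 3.3, 3.3, 4.0, 4.0, 4.0, 4.7, 5.0,
         5.6, 5.6, 5.6, 6.0, 6.0, 6.5, 6.9]
    y = [2.3, 1.8, 2.8, 1.8, 3.7, 2.6, 2.6, 2.2, 3.2, 2.0,
         3.5, 2.8, 2.1, 3.4, 3.2, 3.4, 5.0]

    groups = defaultdict(list)
    for xi, yi in zip(x, y):
        groups[xi].append(yi)

    # SS_PE sums squared deviations about the mean within each repeated x level.
    SS_PE = sum(sum((yi - sum(g) / len(g)) ** 2 for yi in g)
                for g in groups.values() if len(g) > 1)
    df_PE = sum(len(g) - 1 for g in groups.values())
    print(round(SS_PE, 4), df_PE)   # about 3.037 and 7, matching the table above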

In fitting a regression model to experimental data, a good practice is to use the lowest
degree model that adequately describes the data. The lack-of-fit test may be useful in this
respect. However, it is always possible to fit a polynomial of degree n - 1 to n data points,
and the experimenter should not consider using a model that is "saturated," that is, one that
has very nearly as many independent variables as observations on y.

14-5.3 The Coefficient of Determination


The quantity

    R² = SS_R/Syy = 1 - SS_E/Syy                                    (14-41)

is called the coefficient of determination and is often used to judge the adequacy of a
regression model. (We will see subsequently that in the case where x and y are jointly dis-
tributed random variables, R² is the square of the correlation coefficient between x and y.)
Clearly 0 ≤ R² ≤ 1. We often refer loosely to R² as the amount of variability in the data
explained or accounted for by the regression model. For the data in Example 14-1, we have
R² = SS_R/Syy = 1924.87/1932.10 = 0.9963; that is, 99.63% of the variability in the data is
accounted for by the model.
The statistic R² should be used with caution, since it is always possible to make R² unity
simply by adding enough terms to the model. For example, we can obtain a "perfect" fit to
n data points with a polynomial of degree n - 1. Also, R² will always increase if we add a
variable to the model, but this does not necessarily mean the new model is superior to the
old one. Unless the error sum of squares in the new model is reduced by an amount equal
to the original error mean square, the new model will have a larger error mean square than
the old one, because of the loss of one degree of freedom. Thus the new model will actu-
ally be worse than the old one.
There are several misconceptions about R². In general, R² does not measure the mag-
nitude of the slope of the regression line. A large value of R² does not imply a steep slope.
Furthermore, R² does not measure the appropriateness of the model, since it can be artifi-
cially inflated by adding higher-order polynomial terms. Even if y and x are related in a non-
linear fashion, R² will often be large. For example, R² for the regression equation in
Fig. 14-3b will be relatively large, even though the linear approximation is poor. Finally,
even though R² is large, this does not necessarily imply that the regression model will pro-
vide accurate predictions of future observations.

14-6 TRANSFORMATIONS TO A STRAIGHT LINE


We occasionally find that the straight-line regression model y = β₀ + β₁x + ε is inappropri-
ate because the true regression function is nonlinear. Sometimes this is visually determined
from the scatter diagram, and sometimes we know in advance that the model is nonlinear
because of prior experience or underlying theory. In some situations a nonlinear function
can be expressed as a straight line by using a suitable transformation. Such nonlinear mod-
els are called intrinsically linear.
As an example of a nonlinear model that is intrinsically linear, consider the exponen-
tial function

    y = β₀ e^{β₁x} ε.

This function is intrinsically linear, since it can be transformed to a straight line by a loga-
rithmic transformation:

    ln y = ln β₀ + β₁x + ln ε.

This transformation requires that the transformed error terms ln ε be normally and inde-
pendently distributed with mean 0 and variance σ².
Another intrinsically linear function is

    y = β₀ + β₁(1/x) + ε.

By using the reciprocal transformation z = 1/x, the model is linearized to

    y = β₀ + β₁z + ε.

Sometimes several transformations can be employed jointly to linearize a function. For
example, consider the function

    y = 1 / exp(β₀ + β₁x + ε).

Letting y* = 1/y, we have the linearized form

    ln y* = β₀ + β₁x + ε.

Several other examples of nonlinear models that are intrinsically linear are given by Daniel
and Wood (1980).
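As an illustration of the logarithmic transformation, the sketch below fits y = β₀e^{β₁x}ε
by regressing ln y on x and back-transforming the intercept. The data values are hypothet-
ical, chosen to lie near y = e^x so the recovered parameters are easy to judge.

    import math

    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [2.7, 7.4, 20.1, 54.6, 148.4]     # hypothetical, roughly e^x
    n = len(x)
    ly = [math.log(yi) for yi in y]       # work with ln y

    Sxx = sum(xi * xi for xi in x) - sum(x) ** 2 / n
    Sxy = sum(xi * li for xi, li in zip(x, ly)) - sum(x) * sum(ly) / n
    b1 = Sxy / Sxx
    b0 = math.exp(sum(ly) / n - b1 * sum(x) / n)   # undo the log on the intercept
    print(b0, b1)                                   # close to 1 and 1 here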

14-7 CORRELATION
Our development of regression analysis thus far has assumed that x is a mathematical vari-
able, measured with negligible error, and that y is a random variable. Many applications of
regression analysis involve situations where both x and y are random variables. In these sit-
uations, it is usually assumed that the observations (yᵢ, xᵢ), i = 1, 2, ..., n, are jointly dis-
tributed random variables obtained from the distribution f(y, x). For example, suppose we
wish to develop a regression model relating the shear strength of spot welds to the weld
diameter. In this example, weld diameter cannot be controlled. We would randomly select
n spot welds and observe a diameter (xᵢ) and a shear strength (yᵢ) for each. Therefore, (yᵢ, xᵢ)
are jointly distributed random variables.
We usually assume that the joint distribution of yᵢ and xᵢ is the bivariate normal distri-
bution. That is,

    f(y, x) = [1/(2πσ₁σ₂√(1 - ρ²))]
        × exp{ -[1/(2(1 - ρ²))] [((y - μ₁)/σ₁)² - 2ρ((y - μ₁)/σ₁)((x - μ₂)/σ₂) + ((x - μ₂)/σ₂)²] },    (14-42)

where μ₁ and σ₁² are the mean and variance of y, μ₂ and σ₂² are the mean and variance of x,
and ρ is the correlation coefficient between y and x. Recall from Chapter 4 that the corre-
lation coefficient is defined as

    ρ = σ₁₂/(σ₁σ₂),

where σ₁₂ is the covariance between y and x.


The conditional distribution of y for a given value of x is (see Chapter 7)

    f(y|x) = [1/(√(2π) σ₁.₂)] exp{ -½ [(y - β₀ - β₁x)/σ₁.₂]² },     (14-43)

where

    β₀ = μ₁ - μ₂ρ(σ₁/σ₂),                                           (14-44a)

    β₁ = (σ₁/σ₂)ρ,                                                  (14-44b)

and

    σ₁.₂² = σ₁²(1 - ρ²).                                            (14-44c)

That is, the conditional distribution of y given x is normal with mean

    E(y|x) = β₀ + β₁x                                               (14-45)

and variance σ₁.₂². Note that the mean of the conditional distribution of y given x is a
straight-line regression model. Furthermore, there is a relationship between the correlation
coefficient ρ and the slope β₁. From equation 14-44b we see that if ρ = 0, then β₁ = 0, which
implies that there is no regression of y on x. That is, knowledge of x does not assist us in pre-
dicting y.
The method of maximum likelihood may be used to estimate the parameters β₀ and β₁.
It may be shown that the maximum likelihood estimators of these parameters are

    β̂₀ = ȳ - β̂₁x̄                                                  (14-46a)

and

    β̂₁ = Σ_{i=1}^{n} yᵢ(xᵢ - x̄) / Σ_{i=1}^{n} (xᵢ - x̄)² = Sxy/Sxx.    (14-46b)

We note that the estimators of the intercept and slope in equations 14-46 are identical to
those given by the method of least squares in the case where x was assumed to be a mathe-
matical variable. That is, the regression model with y and x jointly normally distributed is
equivalent to the model with x considered as a mathematical variable. This follows because
the random variables y given x are independently and normally distributed with mean β₀ +
β₁x and constant variance σ₁.₂². These results will also hold for any joint distribution of y
and x such that the conditional distribution of y given x is normal.
It is possible to draw inferences about the correlation coefficient ρ in this model. The
estimator of ρ is the sample correlation coefficient

    r = Σ_{i=1}^{n} yᵢ(xᵢ - x̄) / [Σ_{i=1}^{n} (xᵢ - x̄)² Σ_{i=1}^{n} (yᵢ - ȳ)²]^{1/2} = Sxy/(SxxSyy)^{1/2}.    (14-47)

Note that

    β̂₁ = (Syy/Sxx)^{1/2} r,                                        (14-48)

so the slope β̂₁ is just the sample correlation coefficient r multiplied by a scale factor that
is the square root of the "spread" of the y values divided by the "spread" of the x values.
Thus β̂₁ and r are closely related, although they provide somewhat different information.
The sample correlation coefficient r measures the linear association between y and x, while
β̂₁ measures the predicted change in the mean of y for a unit change in x. In the case of a
mathematical variable x, r has no meaning because the magnitude of r depends on the
choice of spacing for x. We may also write, from equation 14-48,

    r² = β̂₁² (Sxx/Syy) = β̂₁Sxy/Syy = SS_R/Syy,

which we recognize from equation 14-41 as the coefficient of determination. That is, the
coefficient of determination R² is just the square of the sample correlation coefficient
between y and x.
It is often useful to test the hypothesis

    H₀: ρ = 0,
    H₁: ρ ≠ 0.                                                      (14-49)

The appropriate test statistic for this hypothesis is

    t₀ = r√(n - 2) / √(1 - r²),                                     (14-50)

which follows the t distribution with n - 2 degrees of freedom if H₀: ρ = 0 is true. There-
fore, we would reject the null hypothesis if |t₀| > t_{α/2, n-2}. This test is equivalent to the
test of the hypothesis H₀: β₁ = 0 given in Section 14-2. This equivalence follows directly
from equation 14-48.
The test procedure for the hypothesis

    H₀: ρ = ρ₀,
    H₁: ρ ≠ ρ₀,                                                     (14-51)

where ρ₀ ≠ 0, is somewhat more complicated. For moderately large samples (say n ≥ 25)
the statistic

    Z = arctanh r = ½ ln[(1 + r)/(1 - r)]                           (14-52)

is approximately normally distributed with mean

    μ_Z = arctanh ρ = ½ ln[(1 + ρ)/(1 - ρ)]

and variance

    σ_Z² = (n - 3)⁻¹.

Therefore, to test the hypothesis H₀: ρ = ρ₀, we may compute the statistic

    Z₀ = (arctanh r - arctanh ρ₀)(n - 3)^{1/2}                      (14-53)

and reject H₀: ρ = ρ₀ if |Z₀| > Z_{α/2}.
It is also possible to construct a 100(1 - α)% confidence interval for ρ using the trans-
formation in equation 14-52. The 100(1 - α)% confidence interval is

    tanh(arctanh r - Z_{α/2}/√(n - 3)) ≤ ρ ≤ tanh(arctanh r + Z_{α/2}/√(n - 3)),    (14-54)

where tanh u = (eᵘ - e⁻ᵘ)/(eᵘ + e⁻ᵘ).

Example 14-7
Montgomery, Peck, and Vining (2001) describe an application of regression analysis in which an
engineer at a soft-drink bottler is investigating the product distribution and route service operations
for vending machines. She suspects that the time required to load and service a machine is related to
the number of cases of product delivered. A random sample of 25 retail outlets having vending
machines is selected, and the in-outlet delivery time (in minutes) and volume of product delivered (in
cases) is observed for each outlet. The data are shown in Table 14-5. We assume that delivery time and
volume of product delivered are jointly normally distributed.
Using the data in Table 14-5, we may calculate

    Syy = 6105.9447,   Sxx = 698.5600,   and   Sxy = 2027.7132.

The regression model is

    ŷ = 5.1145 + 2.9027x.

The sample correlation coefficient between x and y is computed from equation 14-47 as

    r = Sxy/[SxxSyy]^{1/2} = 2027.7132/[(698.5600)(6105.9447)]^{1/2} = 0.9818.

Table 14-5 Data for Example 14-7

              Delivery    Number of                   Delivery    Number of
Observation   Time (y)    Cases (x)    Observation    Time (y)    Cases (x)
     1          9.95          2            14           11.66          2
     2         24.45          8            15           21.65          4
     3         31.75         11            16           17.89          4
     4         35.00         10            17           69.00         20
     5         25.02          8            18           10.30          1
     6         16.86          4            19           34.93         10
     7         14.38          2            20           46.59         15
     8          9.60          2            21           44.88         15
     9         24.35          9            22           54.12         16
    10         27.50          8            23           56.63         17
    11         17.08          4            24           22.13          6
    12         37.00         11            25           21.15          5
    13         41.95         12

Note that R² = (0.9818)² = 0.9640, or that approximately 96.40% of the variability in delivery time is
explained by the linear relationship with delivery volume. To test the hypothesis

    H₀: ρ = 0,
    H₁: ρ ≠ 0,

we can compute the test statistic of equation 14-50 as follows:

    t₀ = r√(n - 2)/√(1 - r²) = 0.9818√23/√(1 - 0.9640) = 24.80.

Since t_{0.025,23} = 2.069, we reject H₀ and conclude that the correlation coefficient ρ ≠ 0. Finally,
we may construct an approximate 95% confidence interval on ρ from equation 14-54. Since arctanh r
= arctanh 0.9818 = 2.3452, equation 14-54 becomes

    tanh(2.3452 - 1.96/√22) ≤ ρ ≤ tanh(2.3452 + 1.96/√22),

which reduces to

    0.9585 ≤ ρ ≤ 0.9921.
14-8 SAMPLE COMPUTER OUTPUT


Many of the procedures presented in this chapter can be implemented using statistical soft-
ware. In this section, we present the Minitab® output for the data in Example 14-1.
Recall that Example 14-1 provides data on the effect of process operating temperature
on product yield. The Minitab® output is

The regression equation is

Yield = - 2.74 + 0.483 Temp

Predictor        Coef     SE Coef        T        P
Constant       -2.739       1.546    -1.77    0.114
Temp          0.48303     0.01046    46.17    0.000

S = 0.9503     R-Sq = 99.6%     R-Sq(adj) = 99.6%

Analysis of Variance

Source             DF        SS        MS          F        P
Regression          1    1924.9    1924.9    2131.57    0.000
Residual Error      8       7.2       0.9
Total               9    1932.1

The regression equation is provided along with the results from the t-tests on the individual
coefficients. The P-values indicate that the intercept does not appear to be significant
(P-value = 0.114) while the regressor variable, temperature, is statistically significant
(P-value = 0). The analysis of variance also tests the hypothesis H₀: β₁ = 0, which can be
rejected (P-value = 0). Note also that T = 46.17 for temperature, and t² = (46.17)² =
2131.67 ≈ F. Aside from rounding, the computer results are in agreement with those found
earlier in the chapter.

14-9 SUMMARY
This chapter has introduced the simple linear regression model and shown how least-
squares estimates of the model parameters may be obtained. Hypothesis-testing procedures
432 Chapter 14 Simple Linear Regression and Correlation

and confidence interval estimates of the model parameters have also been developed. Tests
of hypotheses and confidence intervals require the assumption that the observations y are
normally and independently distributed random variables. Procedures for testing model
adequacy, including a lack-of-fit test and residual analysis, were presented. The correlation
model was also introduced to deal with the case where x and y are jointly normally distrib-
uted. The equivalence of the regression model parameter estimation problem for the case
where x and y are jointly normal to the case where x is a mathematical variable was also dis-
cussed. Procedures for obtaining point and interval estimates of the correlation coefficient
and for testing hypotheses about the correlation coefficient were developed.

14-10 EXERCISES

14-1. Montgomery, Peck, and Vining (2001) present data concerning the performance of the 28 National Football League teams in 1976. It is suspected that the number of games won (y) is related to the number of yards gained rushing by an opponent (x). The data are shown below.

Team                Games Won (y)   Yards Rushing by Opponent (x)
Washington               10                2205
Minnesota                11                2096
New England              11                1847
Oakland                  13                1903
Pittsburgh               10                1457
Baltimore                11                1848
Los Angeles              10                1564
Dallas                   11                1821
Atlanta                   4                2577
Buffalo                   2                2476
Chicago                   7                1984
Cincinnati               10                1917
Cleveland                 9                1761
Denver                    9                1709
Detroit                   6                1901
Green Bay                 5                2288
Houston                   5                2072
Kansas City               5                2861
Miami                     6                2411
New Orleans               4                2289
New York Giants           3                2203
New York Jets             3                2592
Philadelphia              4                2053
St. Louis                10                1979
San Diego                 6                2048
San Francisco             8                1786
Seattle                   2                2876
Tampa Bay                 0                2560

(a) Fit a linear regression model relating games won to yards gained by an opponent.
(b) Test for significance of regression.
(c) Find a 95% confidence interval for the slope.
(d) What percentage of total variability is explained by the model?
(e) Find the residuals and prepare appropriate residual plots.

14-2. Suppose we would like to use the model developed in Exercise 14-1 to predict the number of games a team will win if it can limit the opponents to 1800 yards rushing. Find a point estimate of the number of games won if the opponents gain only 1800 yards rushing. Find a 95% prediction interval on the number of games won.

14-3. Motor Trend magazine frequently presents performance data for automobiles. The table below presents data from the 1975 volume of Motor Trend concerning the gasoline mileage performance and the engine displacement for 15 automobiles.

Automobile       Miles/Gallon (y)   Displacement (Cubic Inches) (x)
Apollo               18.90               350
Omega                17.00               350
Nova                 20.00               250
Monarch              18.25               351
Duster               20.07               225
Jensen Conv.         11.20               440
Skyhawk              22.12               231
Monza                21.47               262
Corolla SR-5         30.40                96.9
Camaro               16.50               350
Eldorado             14.39               500
Trans Am             16.59               400
Charger SE           19.73               318
Cougar               13.90               351
Corvette             16.50               350

(a) Fit a regression model relating mileage performance to engine displacement.
(b) Test for significance of regression.
(c) What percentage of total variability in mileage is explained by the model?
(d) Find a 90% confidence interval on the mean mileage if the engine displacement is 275 cubic inches.

14-4. Suppose that we wish to predict the gasoline mileage from a car with a 275 cubic inch displacement engine. Find a point estimate, using the model developed in Exercise 14-3, and an appropriate 90% interval estimate. Compare this interval to the one obtained in Exercise 14-3d. Which one is wider, and why?

14-5. Find the residuals from the model in Exercise 14-3. Prepare appropriate residual plots and comment on model adequacy.

14-6. An article in Technometrics by S. C. Narula and J. F. Wellington ("Prediction, Linear Regression, and a Minimum Sum of Relative Errors," Vol. 19, 1977) presents data on the selling price and annual taxes for 27 houses. The data are shown below.

Sale Price / 1000    Taxes (Local, School, County) / 1000
    25.9                 4.9176
    29.5                 5.0208
    27.9                 4.5429
    25.9                 4.5573
    29.9                 5.0597
    29.9                 3.8910
    30.9                 5.8980
    28.9                 5.6039
    35.9                 5.8282
    31.5                 5.3003
    31.0                 6.2712
    30.9                 5.9592
    30.0                 5.0500
    36.9                 8.2464
    41.9                 6.6969
    40.5                 7.7841
    43.9                 9.0384
    37.5                 5.9894
    37.9                 7.5422
    44.5                 8.7951
    37.9                 6.0831
    38.9                 8.3607
    36.9                 8.1400
    45.8                 9.1416

(a) Fit a regression model relating sales price to taxes paid.
(b) Test for significance of regression.
(c) What percentage of the variability in selling price is explained by the taxes paid?
(d) Find the residuals for this model. Construct a normal probability plot for the residuals. Plot the residuals versus ŷ and versus x. Does the model seem satisfactory?

14-7. The strength of paper used in the manufacture of cardboard boxes (y) is related to the percentage of hardwood concentration in the original pulp (x). Under controlled conditions, a pilot plant manufactures 16 samples, each from a different batch of pulp, and measures the tensile strength. The data are shown here.

y   101.4   117.4   117.1   106.2   131.9   146.9   146.8   133.9
x     1.0     1.5     1.5     1.5     2.0     2.0     2.2     2.4

y   111.3   123.0   125.1   145.2   134.3   144.5   143.7   146.9
x     2.5     2.5     2.8     2.8     3.0     3.0     3.2     3.3

(a) Fit a simple linear regression model to the data.
(b) Test for lack of fit and significance of regression.
(c) Construct a 90% confidence interval on the slope β₁.
(d) Construct a 90% confidence interval on the intercept β₀.
(e) Construct a 95% confidence interval on the true regression line at x = 2.5.

14-8. Compute the residuals for the regression model in Exercise 14-7. Prepare appropriate residual plots and comment on model adequacy.

14-9. The number of pounds of steam used per month by a chemical plant is thought to be related to the average ambient temperature for that month. The past year's usage and temperatures are shown in the following table.

Month   Temp.   Usage / 1000
Jan.     21      185.79
Feb.     24      214.47
Mar.     32      288.03
Apr.     47      424.84
May      50      454.58
June     59      539.03
July     68      621.55
Aug.     74      675.06
Sept.    62      562.03
Oct.     50      452.93
Nov.     41      369.95
Dec.     30      273.98

(a) Fit a simple linear regression model to the data.
(b) Test for significance of regression.
(c) Test the hypothesis that the slope β₁ = 10.
(d) Construct a 99% confidence interval about the true regression line at x = 58.
(e) Construct a 99% prediction interval on the steam usage in the next month having a mean ambient temperature of 58°.

14-10. Compute the residuals for the regression model in Exercise 14-9. Prepare appropriate residual plots and comment on model adequacy.

14-11. The percentage of impurity in oxygen gas produced by a distilling process is thought to be related to the percentage of hydrocarbon in the main condenser of the processor. One month's operating data are available, as shown in the table at the bottom of this page.
(a) Fit a simple linear regression model to the data.
(b) Test for lack of fit and significance of regression.
(c) Calculate R² for this model.
(d) Calculate a 95% confidence interval for the slope β₁.

14-12. Compute the residuals for the data in Exercise 14-11.
(a) Plot the residuals on normal probability paper and draw appropriate conclusions.
(b) Plot the residuals against ŷ and x. Interpret these displays.

14-13. An article in Transportation Research (1999, p. 183) presents a study on world maritime employment. The purpose of the study was to determine a relationship between average manning level and the average size of the fleet. Manning level refers to the ratio of the number of posts that must be manned by a seaman per ship (posts/ship). Data collected for ships of the United Kingdom over a 16-year period are shown below.

Average Size    Level
   9154         20.27
   9277         19.98
   9221         20.28
   9198         19.65
   8705         18.81
   8530         18.20
   8544         18.05
   7964         16.81
   7440         15.56
   6432         13.98
   6032         14.51
   5125         10.99
   4418         12.83
   4327         11.85
   4133         11.33
   3765         10.25

(a) Fit a linear regression model relating average manning level to average ship size.
(b) Test for significance of regression.
(c) Find a 95% confidence interval on the slope.
(d) What percentage of total variability is explained by the model?
(e) Find the residuals and construct appropriate residual plots.

14-14. The final averages for 20 randomly selected students taking a course in engineering statistics and a course in operations research at Georgia Tech are shown here. Assume that the final averages are jointly normally distributed.

Statistics   86   75   69   75   90   94   83   86   71   65
OR           80   81   75   81   92   95   80   81   76   72

Statistics   84   71   62   90   83   75   71   76   84   97
OR           85   72   65   93   81   70   73   72   80   98

(a) Find the regression line relating the statistics final average to the OR final average.
(b) Estimate the correlation coefficient.
(c) Test the hypothesis that ρ = 0.
(d) Test the hypothesis that ρ = 0.5.
(e) Construct a 95% confidence interval estimate of the correlation coefficient.

Data for Exercise 14-11 (only partially legible):
Purity (%):        86.91   92.58   89.86   ...   96.73   90.56   ...
Hydrocarbon (%):    1.02    ...     1.46    ...

14-15. The weight and systolic blood pressure of 26 randomly selected males in the age group 25-30 are

shown in the following table. Assume that weight and blood pressure are jointly normally distributed.

Subject   Weight   Systolic BP
   1       165        130
   2       167        133
   3       180        150
   4       155        128
   5       212        151
   6       175        146
   7       190        150
   8       210        140
   9       200        148
  10       149        125
  11       158        133
  12       169        135
  13       170        150
  14       172        153
  15       159        128
  16       168        132
  17       174        149
  18       183        158
  19       215        150
  20       195        163
  21       180        156
  22       143        124
  23       240        170
  24       235        165
  25       192        160
  26       187        159

(a) Find a regression line relating systolic blood pressure to weight.
(b) Estimate the correlation coefficient.
(c) Test the hypothesis that ρ = 0.
(d) Test the hypothesis that ρ = 0.6.
(e) Construct a 95% confidence interval estimate of the correlation coefficient.

14-16. Consider the simple linear regression model y = β₀ + β₁x + ε. Show that E(MS_R) = σ² + β₁²S_xx.

14-17. Suppose that we have assumed the straight-line regression model

y = β₀ + β₁x₁ + ε,

but that the response is affected by a second variable, x₂, such that the true regression function is

E(y) = β₀ + β₁x₁ + β₂x₂.

Is the estimator of the slope in the simple linear regression model unbiased?

14-18. Suppose that we are fitting a straight line and we wish to make the variance of the slope β̂₁ as small as possible. Where should the observations xᵢ, i = 1, 2, ..., n, be taken so as to minimize V(β̂₁)? Discuss the practical implications of this allocation of the xᵢ.

14-19. Weighted Least Squares. Suppose that we are fitting the straight line y = β₀ + β₁x + ε but that the variance of the y values now depends on the level of x; that is,

V(yᵢ|xᵢ) = σᵢ² = σ²/wᵢ,   i = 1, 2, ..., n,

where the wᵢ are unknown constants, often called weights. Show that the resulting least-squares normal equations are

β̂₀ Σᵢ₌₁ⁿ wᵢ + β̂₁ Σᵢ₌₁ⁿ wᵢxᵢ = Σᵢ₌₁ⁿ wᵢyᵢ,

β̂₀ Σᵢ₌₁ⁿ wᵢxᵢ + β̂₁ Σᵢ₌₁ⁿ wᵢxᵢ² = Σᵢ₌₁ⁿ wᵢxᵢyᵢ.

14-20. Consider the data shown below. Suppose that the relationship between y and x is hypothesized to be y = (β₀ + β₁x + ε)⁻¹. Fit an appropriate model to the data. Does the assumed model form seem appropriate?

x   (values illegible)
y   0.17   0.13   0.09   0.15   0.20   0.21   0.18   0.24

14-21. Consider the weight and blood pressure data in Exercise 14-15. Fit a no-intercept model to the data, and compare it to the model obtained in Exercise 14-15. Which model is superior?

14-22. The following data, adapted from Montgomery, Peck, and Vining (2001), present the number of certified mental defectives per 10,000 of estimated population in the United Kingdom (y) and the number of radio receiver licenses issued (x) by the BBC (in millions) for the years 1924-1937. Fit a regression model relating y to x. Comment on the model. Specifically, does the existence of a strong correlation imply a cause-and-effect relationship?

Year    Certified Mental Defectives per 10,000 of Estimated U.K. Population (y)    Radio Receiver Licenses Issued in the U.K. (Millions) (x)

1924 8 1.350
1925 8 1.960
1926 9 2.270
1927 10 2.483
1928 11 2.730
1929 11 3.091
1930 12 3.674
1931 16 4.620
1932 18 5.497
1933 19 6.260
1934 20 7.012
1935 21 7.618
1936 22 8.131
1937 23 8.593
Chapter 15

Multiple Regression

Many regression problems involve more than one regressor variable. Such models are called multiple regression models. Multiple regression is one of the most widely used statistical techniques. This chapter presents the basic techniques of parameter estimation, confidence interval estimation, and model adequacy checking for multiple regression. We also introduce some of the special problems often encountered in the practical use of multiple regression, including model building and variable selection, autocorrelation in the errors, and multicollinearity or near-linear dependence among the regressors.

15-1 MULTIPLE REGRESSION MODELS


A regression model that involves more than one regressor variable is called a multiple regression model. As an example, suppose that the effective life of a cutting tool depends on the cutting speed and the tool angle. A multiple regression model that might describe this relationship is

y = β₀ + β₁x₁ + β₂x₂ + ε,    (15-1)

where y represents the tool life, x₁ represents the cutting speed, and x₂ represents the tool angle. This is a multiple linear regression model with two regressors. The term "linear" is used because equation 15-1 is a linear function of the unknown parameters β₀, β₁, and β₂. Note that the model describes a plane in the two-dimensional x₁, x₂ space. The parameter β₀ defines the intercept of the plane. We sometimes call β₁ and β₂ partial regression coefficients, because β₁ measures the expected change in y per unit change in x₁ when x₂ is held constant, and β₂ measures the expected change in y per unit change in x₂ when x₁ is held constant.
In general, the dependent variable or response y may be related to k independent variables. The model

y = β₀ + β₁x₁ + β₂x₂ + ··· + βₖxₖ + ε    (15-2)

is called a multiple linear regression model with k independent variables. The parameters βⱼ, j = 0, 1, ..., k, are called the regression coefficients. This model describes a hyperplane in the k-dimensional space of the regressor variables {xⱼ}. The parameter βⱼ represents the expected change in response y per unit change in xⱼ when all the remaining independent variables xᵢ (i ≠ j) are held constant. The parameters βⱼ, j = 1, 2, ..., k, are often called partial regression coefficients, because they describe the partial effect of one independent variable when the other independent variables in the model are held constant.
Multiple linear regression models are often used as approximating functions. That is, the true functional relationship between y and x₁, x₂, ..., xₖ is unknown, but over certain ranges of the independent variables the linear regression model is an adequate approximation.

Models that are more complex in appearance than equation 15-2 may often still be analyzed by multiple linear regression techniques. For example, consider the cubic polynomial model in one independent variable,

y = β₀ + β₁x + β₂x² + β₃x³ + ε.    (15-3)

If we let x₁ = x, x₂ = x², and x₃ = x³, then equation 15-3 can be written

y = β₀ + β₁x₁ + β₂x₂ + β₃x₃ + ε,    (15-4)

which is a multiple linear regression model with three regressor variables. Models that include interaction effects may also be analyzed by multiple linear regression methods. For example, suppose that the model is

y = β₀ + β₁x₁ + β₂x₂ + β₁₂x₁x₂ + ε.    (15-5)

If we let x₃ = x₁x₂ and β₃ = β₁₂, then equation 15-5 can be written

y = β₀ + β₁x₁ + β₂x₂ + β₃x₃ + ε,    (15-6)

which is a linear regression model. In general, any regression model that is linear in the parameters (the β's) is a linear regression model, regardless of the shape of the surface that it generates.
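As a small illustration of this point, the following Python sketch (with simulated data; the coefficient values are ours, not the text's) fits the cubic of equation 15-3 by treating the columns x, x², and x³ as three regressors in an ordinary multiple linear regression:

import numpy as np

# Simulated data for a cubic in one variable (equation 15-3)
rng = np.random.default_rng(1)
x = np.linspace(0, 2, 30)
y = 1.0 + 0.5 * x - 0.8 * x**2 + 0.3 * x**3 + rng.normal(0, 0.05, x.size)

# Design matrix: intercept, x, x^2, x^3 (equation 15-4)
X = np.column_stack([np.ones_like(x), x, x**2, x**3])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
print(beta_hat)   # estimates of (beta0, beta1, beta2, beta3)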

15-2 ESTIMATION OF THE PARAMETERS


The method of least squares may be used to estimate the regression coefficients in equation 15-2. Suppose that n > k observations are available, and let xᵢⱼ denote the ith observation or level of variable xⱼ. The data will appear as in Table 15-1. We assume that the error term ε in the model has E(ε) = 0 and V(ε) = σ², and that the {εᵢ} are uncorrelated random variables.

We may write the model, equation 15-2, in terms of the observations,

yᵢ = β₀ + β₁xᵢ₁ + β₂xᵢ₂ + ··· + βₖxᵢₖ + εᵢ
   = β₀ + Σⱼ₌₁ᵏ βⱼxᵢⱼ + εᵢ,   i = 1, 2, ..., n.    (15-7)

The least-squares function is

L = Σᵢ₌₁ⁿ εᵢ² = Σᵢ₌₁ⁿ (yᵢ − β₀ − Σⱼ₌₁ᵏ βⱼxᵢⱼ)².    (15-8)

Table 15-1 Data for Multiple Linear Regression

y      x₁      x₂      ···     xₖ
y₁     x₁₁     x₁₂     ···     x₁ₖ
y₂     x₂₁     x₂₂     ···     x₂ₖ
⋮      ⋮       ⋮               ⋮
yₙ     xₙ₁     xₙ₂     ···     xₙₖ

The function L is to be minimized with respect to β₀, β₁, ..., βₖ. The least-squares estimators of β₀, β₁, ..., βₖ must satisfy

∂L/∂β₀ = −2 Σᵢ₌₁ⁿ (yᵢ − β̂₀ − Σⱼ₌₁ᵏ β̂ⱼxᵢⱼ) = 0    (15-9a)

and

∂L/∂βⱼ = −2 Σᵢ₌₁ⁿ (yᵢ − β̂₀ − Σⱼ₌₁ᵏ β̂ⱼxᵢⱼ)xᵢⱼ = 0,   j = 1, 2, ..., k,    (15-9b)

where the derivatives are evaluated at (β̂₀, β̂₁, ..., β̂ₖ). Simplifying equation 15-9, we obtain the least-squares normal equations

nβ̂₀      + β̂₁ Σxᵢ₁     + β̂₂ Σxᵢ₂     + ··· + β̂ₖ Σxᵢₖ     = Σyᵢ,
β̂₀ Σxᵢ₁  + β̂₁ Σxᵢ₁²    + β̂₂ Σxᵢ₁xᵢ₂  + ··· + β̂ₖ Σxᵢ₁xᵢₖ  = Σxᵢ₁yᵢ,
  ⋮
β̂₀ Σxᵢₖ  + β̂₁ Σxᵢₖxᵢ₁  + β̂₂ Σxᵢₖxᵢ₂  + ··· + β̂ₖ Σxᵢₖ²    = Σxᵢₖyᵢ,    (15-10)

where all sums run over i = 1, 2, ..., n. Note that there are p = k + 1 normal equations, one for each of the unknown regression coefficients. The solution to the normal equations will be the least-squares estimators of the regression coefficients β̂₀, β̂₁, ..., β̂ₖ.

It is simpler to solve the normal equations if they are expressed in matrix notation. We now give a matrix development of the normal equations that parallels the development of equation 15-10. The model in terms of the observations, equation 15-7, may be written in matrix notation,

y = Xβ + ε,

where

y = [y₁, y₂, ..., yₙ]',   β = [β₀, β₁, ..., βₖ]',   ε = [ε₁, ε₂, ..., εₙ]',

and

X = [1  x₁₁  x₁₂  ···  x₁ₖ
     1  x₂₁  x₂₂  ···  x₂ₖ
     ⋮   ⋮    ⋮         ⋮
     1  xₙ₁  xₙ₂  ···  xₙₖ].

In general, y is an (n × 1) vector of the observations, X is an (n × p) matrix of the levels of the independent variables, β is a (p × 1) vector of the regression coefficients, and ε is an (n × 1) vector of random errors.

We wish to find the vector of least-squares estimators, β̂, that minimizes

L = Σᵢ₌₁ⁿ εᵢ² = ε'ε = (y − Xβ)'(y − Xβ).

Note that L may be expressed as

L = y'y − β'X'y − y'Xβ + β'X'Xβ
  = y'y − 2β'X'y + β'X'Xβ,    (15-11)

since β'X'y is a (1 × 1) matrix, hence a scalar, and its transpose (β'X'y)' = y'Xβ is the same scalar. The least-squares estimators must satisfy

∂L/∂β |β̂ = −2X'y + 2X'Xβ̂ = 0,

which simplifies to

X'Xβ̂ = X'y.    (15-12)

Equations 15-12 are the least-squares normal equations. They are identical to equations 15-10. To solve the normal equations, multiply both sides of equation 15-12 by the inverse of X'X. Thus, the least-squares estimator of β is

β̂ = (X'X)⁻¹X'y.    (15-13)
It is easy to see that the matrix form of the normal equations is identical to the scalar form. Writing out equation 15-12 in detail, we obtain

[n      Σxᵢ₁      Σxᵢ₂      ···  Σxᵢₖ   ] [β̂₀]   [Σyᵢ   ]
[Σxᵢ₁   Σxᵢ₁²     Σxᵢ₁xᵢ₂   ···  Σxᵢ₁xᵢₖ] [β̂₁] = [Σxᵢ₁yᵢ]
[ ⋮       ⋮         ⋮              ⋮    ] [⋮ ]   [  ⋮   ]
[Σxᵢₖ   Σxᵢₖxᵢ₁   Σxᵢₖxᵢ₂   ···  Σxᵢₖ²  ] [β̂ₖ]   [Σxᵢₖyᵢ].

If the indicated matrix multiplication is performed, the scalar form of the normal equations (that is, equation 15-10) will result. In this form it is easy to see that X'X is a (p × p) symmetric matrix and X'y is a (p × 1) column vector. Note the special structure of the X'X matrix. The diagonal elements of X'X are the sums of squares of the elements in the columns of X, and the off-diagonal elements are the sums of cross products of the elements in the columns of X. Furthermore, note that the elements of X'y are the sums of cross products of the columns of X and the observations {yᵢ}.

The fitted regression model is

ŷ = Xβ̂.    (15-14)

In scalar notation, the fitted model is

ŷᵢ = β̂₀ + Σⱼ₌₁ᵏ β̂ⱼxᵢⱼ,   i = 1, 2, ..., n.

The difference between the observation yᵢ and the fitted value ŷᵢ is a residual, say eᵢ = yᵢ − ŷᵢ. The (n × 1) vector of residuals is denoted

e = y − ŷ.    (15-15)
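A minimal computational sketch of equations 15-13 and 15-15 in Python (NumPy assumed; an illustration, not a prescribed implementation) is:

import numpy as np

def fit_mlr(regressors, y):
    """Least-squares fit of y = b0 + b1*x1 + ... + bk*xk.

    regressors: (n x k) array of regressor columns; an intercept
    column of ones is prepended to form the X matrix.
    """
    n = len(y)
    X = np.column_stack([np.ones(n), regressors])
    # Solve the normal equations X'X b = X'y (equation 15-12);
    # np.linalg.solve is numerically preferable to forming (X'X)^-1.
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    residuals = y - X @ beta_hat        # e = y - X b (equation 15-15)
    return beta_hat, residuals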

Example 15-1

An article in the Journal of Agricultural Engineering and Research (2001, p. 275) describes the use of a regression model to relate the damage susceptibility of peaches to the height at which they are dropped (drop height, measured in mm) and the density of the peach (measured in g/cm³). One goal of the analysis is to provide a predictive model for peach damage to serve as a guideline for harvesting and postharvesting operations. Data typical of this type of experiment are given in Table 15-2.

We will fit the multiple linear regression model

y = β₀ + β₁x₁ + β₂x₂ + ε

to these data. The X matrix and y vector for this model are formed from the columns of Table 15-2:

X = [1  303.7  0.90        y = [ 3.62
     1  366.7  1.04              7.27
     1  336.8  1.01              2.66
     ⋮    ⋮     ⋮                 ⋮
     1  311.4  0.91              0.15
     1  351.4  0.96],            5.23].

The X'X matrix is

X'X = [20        7767.8      19.93
       7767.8    3201646     7791.878
       19.93     7791.878    19.9077],

and the X'y vector is

X'y = [120.79
       51129.17
       122.70].

Table 15-2 Peach Damage Data for Example 15-1

Observation Number    Damage (mm), y    Drop Height (mm), x₁    Fruit Density (g/cm³), x₂
        1                  3.62               303.7                   0.90
        2                  7.27               366.7                   1.04
        3                  2.66               336.8                   1.01
        4                  1.53               304.5                   0.95
        5                  4.91               346.8                   0.98
        6                 10.36               600.0                   1.04
        7                  5.26               369.0                   0.96
        8                  6.09               418.0                   1.00
        9                  6.57               269.0                   1.01
       10                  4.24               323.0                   0.94
       11                  8.04               562.2                   1.01
       12                  3.46               284.2                   0.97
       13                  8.50               558.6                   1.03
       14                  9.34               415.0                   1.01
       15                  5.55               349.5                   1.04
       16                  8.11               462.8                   1.02
       17                  7.32               333.1                   1.05
       18                 12.58               502.1                   1.10
       19                  0.15               311.4                   0.91
       20                  5.23               351.4                   0.96

The least-squares estimators are found from equation 15-13 to be

β̂ = (X'X)⁻¹X'y,

or

[β̂₀]   [24.63666     0.005321    −26.74679] [120.79  ]   [−33.831]
[β̂₁] = [0.005321     0.0000077   −0.008353] [51129.17] = [0.01314]
[β̂₂]   [−26.74679    −0.008353   30.096389] [122.70  ]   [34.890 ].

Therefore, the fitted regression model is

ŷ = −33.831 + 0.01314x₁ + 34.890x₂.

Table 15-3 shows the fitted values of y and the residuals. The fitted values and residuals are calculated to the same accuracy as the original data.

Table 15-3 Observations, Fitted Values, and Residuals for Example 15-1

Observation Number     yᵢ        ŷᵢ       eᵢ = yᵢ − ŷᵢ
        1              3.62      1.56        2.06
        2              7.27      7.27        0.00
        3              2.66      5.83       −3.17
        4              1.53      3.31       −1.78
        5              4.91      4.92       −0.01
        6             10.36     10.34        0.02
        7              5.26      4.51        0.75
        8              6.09      6.55       −0.46
        9              6.57      4.94        1.63
       10              4.24      3.21        1.03
       11              8.04      8.79       −0.75
       12              3.46      3.75       −0.29
       13              8.50      9.44       −0.94
       14              9.34      6.86        2.48
       15              5.55      7.05       −1.50
       16              8.11      7.84        0.27
       17              7.32      7.18        0.14
       18             12.58     11.14        1.44
       19              0.15      2.01       −1.86
       20              5.23      4.28        0.95
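The fit of Example 15-1 can be reproduced with the following Python sketch (the arrays restate Table 15-2; results agree with the hand computation up to rounding). The sketch also computes the error sum of squares and σ̂² used in Example 15-2 below:

import numpy as np

# Peach damage data from Table 15-2: x1 = drop height (mm),
# x2 = fruit density (g/cm^3), y = damage (mm)
x1 = np.array([303.7, 366.7, 336.8, 304.5, 346.8, 600.0, 369.0, 418.0,
               269.0, 323.0, 562.2, 284.2, 558.6, 415.0, 349.5, 462.8,
               333.1, 502.1, 311.4, 351.4])
x2 = np.array([0.90, 1.04, 1.01, 0.95, 0.98, 1.04, 0.96, 1.00, 1.01, 0.94,
               1.01, 0.97, 1.03, 1.01, 1.04, 1.02, 1.05, 1.10, 0.91, 0.96])
y = np.array([3.62, 7.27, 2.66, 1.53, 4.91, 10.36, 5.26, 6.09, 6.57, 4.24,
              8.04, 3.46, 8.50, 9.34, 5.55, 8.11, 7.32, 12.58, 0.15, 5.23])

X = np.column_stack([np.ones_like(y), x1, x2])
n, p = X.shape
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # equation 15-13
e = y - X @ beta_hat                           # residuals, Table 15-3
SSE = y @ y - beta_hat @ (X.T @ y)             # equation 15-16
MSE = SSE / (n - p)                            # equations 15-17/15-18; approx 2.247
print(beta_hat)                                # approx [-33.831, 0.01314, 34.890]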

The statistical properties of the least-squares estimator β̂ may be easily demonstrated. Consider first bias:

E(β̂) = E[(X'X)⁻¹X'y]
     = E[(X'X)⁻¹X'(Xβ + ε)]
     = E[(X'X)⁻¹X'Xβ + (X'X)⁻¹X'ε]
     = β,

since E(ε) = 0 and (X'X)⁻¹X'X = I. Thus β̂ is an unbiased estimator of β. The variance property of β̂ is expressed by the covariance matrix

Cov(β̂) = E{[β̂ − E(β̂)][β̂ − E(β̂)]'}.

The covariance matrix of β̂ is a (p × p) symmetric matrix whose jjth element is the variance of β̂ⱼ and whose (i, j)th element is the covariance between β̂ᵢ and β̂ⱼ. The covariance matrix of β̂ is

Cov(β̂) = σ²(X'X)⁻¹.

It is usually necessary to estimate σ². To develop this estimator, consider the sum of squares of the residuals, say

SS_E = Σᵢ₌₁ⁿ (yᵢ − ŷᵢ)² = Σᵢ₌₁ⁿ eᵢ² = e'e.

Substituting e = y − ŷ = y − Xβ̂, we have

SS_E = (y − Xβ̂)'(y − Xβ̂)
     = y'y − β̂'X'y − y'Xβ̂ + β̂'X'Xβ̂
     = y'y − 2β̂'X'y + β̂'X'Xβ̂.

Since X'Xβ̂ = X'y, this last equation becomes

SS_E = y'y − β̂'X'y.    (15-16)

Equation 15-16 is called the error or residual sum of squares, and it has n − p degrees of freedom associated with it. The mean square for error is

MS_E = SS_E/(n − p).    (15-17)

It can be shown that the expected value of MS_E is σ²; thus an unbiased estimator of σ² is given by

σ̂² = MS_E.    (15-18)

Example 15-2

We will estimate the error variance σ² for the multiple regression problem in Example 15-1. Using the data in Table 15-2, we find

y'y = Σᵢ₌₁²⁰ yᵢ² = 904.60

and

β̂'X'y = [−33.831  0.01314  34.890] [120.79, 51129.17, 122.70]' = 866.39.

Therefore, the error sum of squares is

SS_E = y'y − β̂'X'y = 904.60 − 866.39 = 38.21.

The estimate of σ² is

σ̂² = SS_E/(n − p) = 38.21/(20 − 3) = 2.247.

15-3 CONFIDENCE INTERVALS IN MULTIPLE LINEAR REGRESSION

It is often necessary to construct confidence interval estimates for the regression coefficients {βⱼ}. The development of a procedure for obtaining these confidence intervals requires that we assume the errors {εᵢ} to be normally and independently distributed with mean zero and variance σ². Therefore, the observations {yᵢ} are normally and independently distributed with mean β₀ + Σⱼ₌₁ᵏ βⱼxᵢⱼ and variance σ². Since the least-squares estimator β̂ is a linear combination of the observations, it follows that β̂ is normally distributed with mean vector β and covariance matrix σ²(X'X)⁻¹. Then each of the quantities

(β̂ⱼ − βⱼ) / √(σ̂²Cⱼⱼ),   j = 0, 1, ..., k,    (15-19)

is distributed as t with n − p degrees of freedom, where Cⱼⱼ is the jjth element of the (X'X)⁻¹ matrix and σ̂² is the estimate of the error variance, obtained from equation 15-18. Therefore, a 100(1 − α)% confidence interval for the regression coefficient βⱼ, j = 0, 1, ..., k, is

β̂ⱼ − t_{α/2,n−p} √(σ̂²Cⱼⱼ) ≤ βⱼ ≤ β̂ⱼ + t_{α/2,n−p} √(σ̂²Cⱼⱼ).    (15-20)

Example 15-3

We will construct a 95% confidence interval on the parameter β₁ in Example 15-1. Note that the point estimate of β₁ is β̂₁ = 0.01314, and the diagonal element of (X'X)⁻¹ corresponding to β₁ is C₁₁ = 0.0000077. The estimate of σ² was obtained in Example 15-2 as 2.247, and t₀.₀₂₅,₁₇ = 2.110. Therefore, the 95% confidence interval on β₁ is computed from equation 15-20 as

0.01314 − (2.110)√((2.247)(0.0000077)) ≤ β₁ ≤ 0.01314 + (2.110)√((2.247)(0.0000077)),

which reduces to

0.00436 ≤ β₁ ≤ 0.0219.
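Continuing the Python sketch from Example 15-1 (reusing X, beta_hat, MSE, n, and p defined there), the interval of Example 15-3 can be computed approximately as:

import numpy as np
from scipy import stats

# 95% CI on beta1 (equation 15-20)
C = np.linalg.inv(X.T @ X)             # (X'X)^-1
j = 1                                  # index of beta1
se = np.sqrt(MSE * C[j, j])
t = stats.t.ppf(0.975, n - p)
print(beta_hat[j] - t * se, beta_hat[j] + t * se)   # approx (0.00436, 0.0219)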

We may also obtain a confidence interval on the mean response at a particular point, say (x₀₁, x₀₂, ..., x₀ₖ). To estimate the mean response at this point, define the vector

x₀ = [1, x₀₁, x₀₂, ..., x₀ₖ]'.

The estimated mean response at this point is

ŷ₀ = x₀'β̂.    (15-21)

This estimator is unbiased, since E(ŷ₀) = E(x₀'β̂) = x₀'β = E(y₀), and the variance of ŷ₀ is

V(ŷ₀) = σ²x₀'(X'X)⁻¹x₀.    (15-22)

Therefore, a 100(1 − α)% confidence interval on the mean response at the point (x₀₁, x₀₂, ..., x₀ₖ) is

ŷ₀ − t_{α/2,n−p} √(σ̂²x₀'(X'X)⁻¹x₀) ≤ E(y₀) ≤ ŷ₀ + t_{α/2,n−p} √(σ̂²x₀'(X'X)⁻¹x₀).    (15-23)

Equation 15-23 is a confidence interval about the regression hyperplane. It is the multiple regression generalization of equation 14-33.

Example 15-4

The scientists conducting the experiment on damaged peaches in Example 15-1 would like to construct a 95% confidence interval on the mean damage for a peach dropped from a height of x₁ = 325 mm if its density is x₂ = 0.98 g/cm³. Therefore,

x₀ = [1, 325, 0.98]'.

The estimated mean response at this point is found from equation 15-21 to be

ŷ₀ = x₀'β̂ = [1  325  0.98] [−33.831, 0.01314, 34.890]' = 4.63.

The variance of ŷ₀ is estimated by

σ̂²x₀'(X'X)⁻¹x₀ = 2.247(0.0718) = 0.1613,

where x₀'(X'X)⁻¹x₀ = 0.0718 is computed from the (X'X)⁻¹ matrix given in Example 15-1. Therefore, a 95% confidence interval on the mean damage at this point is found from equation 15-23 to be

4.63 − 2.110√0.1613 ≤ E(y₀) ≤ 4.63 + 2.110√0.1613,

which reduces to

3.78 ≤ E(y₀) ≤ 5.48.

15-4 PREDICTION OF NEW OBSERVATIONS

The regression model can be used to predict future observations on y corresponding to particular values of the independent variables, say x₀₁, x₀₂, ..., x₀ₖ. If x₀' = [1, x₀₁, x₀₂, ..., x₀ₖ], then a point estimate of the future observation y₀ at the point (x₀₁, x₀₂, ..., x₀ₖ) is

ŷ₀ = x₀'β̂.    (15-24)

A 100(1 − α)% prediction interval for this future observation is

ŷ₀ − t_{α/2,n−p} √(σ̂²(1 + x₀'(X'X)⁻¹x₀)) ≤ y₀ ≤ ŷ₀ + t_{α/2,n−p} √(σ̂²(1 + x₀'(X'X)⁻¹x₀)).    (15-25)

This prediction interval is a generalization of the prediction interval for a future observation in simple linear regression, equation 14-35.

In predicting new observations and in estimating the mean response at a given point (x₀₁, x₀₂, ..., x₀ₖ), one must be careful about extrapolating beyond the region containing the original observations. It is very possible that a model that fits well in the region of the original data will no longer fit well outside that region. In multiple regression it is often easy to inadvertently extrapolate, since the levels of the variables (xᵢ₁, xᵢ₂, ..., xᵢₖ), i = 1, 2, ..., n, jointly define the region containing the data. As an example, consider Fig. 15-1, which illustrates the region containing the observations for a two-variable regression model. Note that the point (x₀₁, x₀₂) lies within the ranges of both independent variables x₁ and x₂, but it is outside the region of the original observations. Thus, either predicting the value of a new observation or estimating the mean response at this point is an extrapolation of the original regression model.

Figure 15-1 An example of extrapolation in multiple regression.

Example 15-5

Suppose that the scientists in Example 15-1 wish to construct a 95% prediction interval on the damage to a peach that is dropped from a height of x₁ = 325 mm and has a density of x₂ = 0.98 g/cm³. Note that x₀' = [1  325  0.98], and the point estimate of the damage is ŷ₀ = x₀'β̂ = 4.63 mm. Also, in Example 15-4 we calculated x₀'(X'X)⁻¹x₀ = 0.0718. Therefore, from equation 15-25 we have

4.63 − 2.110√(2.247(1 + 0.0718)) ≤ y₀ ≤ 4.63 + 2.110√(2.247(1 + 0.0718)),

and the 95% prediction interval is

1.36 ≤ y₀ ≤ 7.90.
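Both intervals are easy to sketch numerically. Continuing the Example 15-1 sketch (reusing X, beta_hat, MSE, n, and p):

import numpy as np
from scipy import stats

# 95% CI on the mean response (eq. 15-23) and PI on a new
# observation (eq. 15-25) at x0 = (1, 325, 0.98)
x0 = np.array([1.0, 325.0, 0.98])
y0_hat = x0 @ beta_hat                              # equations 15-21 / 15-24
h00 = x0 @ np.linalg.inv(X.T @ X) @ x0              # approx 0.0718
t = stats.t.ppf(0.975, n - p)
ci = (y0_hat - t * np.sqrt(MSE * h00),
      y0_hat + t * np.sqrt(MSE * h00))              # approx (3.78, 5.48)
pi = (y0_hat - t * np.sqrt(MSE * (1 + h00)),
      y0_hat + t * np.sqrt(MSE * (1 + h00)))        # approx (1.36, 7.90)
print(ci, pi)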

15-5 HYPOTHESIS TESTING IN MULTIPLE LINEAR REGRESSION

In multiple linear regression problems, certain tests of hypotheses about the model parameters are useful in measuring model adequacy. In this section, we describe several important hypothesis-testing procedures. We continue to require the normality assumption on the errors, which was introduced in the previous section.

15-5.1 Test for Significance of Regression

The test for significance of regression is a test to determine whether there is a linear relationship between the dependent variable y and a subset of the independent variables x₁, x₂, ..., xₖ. The appropriate hypotheses are

H₀: β₁ = β₂ = ··· = βₖ = 0,
H₁: βⱼ ≠ 0 for at least one j.    (15-26)

Rejection of H₀ implies that at least one of the independent variables x₁, x₂, ..., xₖ contributes significantly to the model. The test procedure is a generalization of the procedure used in simple linear regression. The total sum of squares S_yy is partitioned into a sum of squares due to regression and a sum of squares due to error, say

S_yy = SS_R + SS_E,

and if H₀ is true, then SS_R/σ² ~ χ²ₖ, where the number of degrees of freedom for the χ² is equal to the number of regressor variables in the model. Also, we can show that SS_E/σ² ~ χ²ₙ₋ₖ₋₁, and that SS_E and SS_R are independent. The test procedure for H₀ is to compute

F₀ = (SS_R/k) / (SS_E/(n − k − 1)) = MS_R/MS_E    (15-27)

and to reject H₀ if F₀ > F_{α,k,n−k−1}. The procedure is usually summarized in an analysis of variance table such as Table 15-4.

A computational formula for SS_R may be found easily. We have derived a computational formula for SS_E in equation 15-16, that is,

SS_E = y'y − β̂'X'y.

Now, since S_yy = Σᵢ₌₁ⁿ yᵢ² − (Σᵢ₌₁ⁿ yᵢ)²/n = y'y − (Σᵢ₌₁ⁿ yᵢ)²/n, we may rewrite the foregoing equation as

SS_E = y'y − (Σᵢ₌₁ⁿ yᵢ)²/n − [β̂'X'y − (Σᵢ₌₁ⁿ yᵢ)²/n],

or

SS_E = S_yy − SS_R.

Therefore, the regression sum of squares is

SS_R = β̂'X'y − (Σᵢ₌₁ⁿ yᵢ)²/n,    (15-28)

the error sum of squares is

SS_E = y'y − β̂'X'y,    (15-29)

and the total sum of squares is

S_yy = y'y − (Σᵢ₌₁ⁿ yᵢ)²/n.    (15-30)

Table 15-4 Analysis of Variance for Significance of Regression in Multiple Regression

Source of Variation    Sum of Squares    Degrees of Freedom    Mean Square    F₀
Regression                 SS_R               k                    MS_R       MS_R/MS_E
Error or residual          SS_E               n − k − 1            MS_E
Total                      S_yy               n − 1

Example 15-6

We will test for significance of regression using the damaged peaches data from Example 15-1. Some of the numerical quantities required are calculated in Example 15-2. Note that

S_yy = y'y − (Σᵢ₌₁²⁰ yᵢ)²/20 = 904.60 − (120.79)²/20 = 175.09,

SS_R = β̂'X'y − (Σᵢ₌₁²⁰ yᵢ)²/20 = 866.39 − 729.51 = 136.88,

SS_E = S_yy − SS_R = y'y − β̂'X'y = 38.21.

The analysis of variance is shown in Table 15-5. To test H₀: β₁ = β₂ = 0, we calculate the statistic

F₀ = MS_R/MS_E = 68.44/2.247 = 30.46.

Since F₀ > F₀.₀₅,₂,₁₇ = 3.59, peach damage is related to drop height, fruit density, or both. However, we note that this does not necessarily imply that the relationship found is an appropriate one for predicting damage as a function of drop height or fruit density. Further tests of model adequacy are required.

Table 15-5 Test for Significance of Regression for Example 15-6

Source of Variation    Sum of Squares    Degrees of Freedom    Mean Square    F₀
Regression                 136.88              2                  68.44       30.46
Error                       38.21             17                   2.247
Total                      175.09             19
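The ANOVA quantities in Table 15-5 can be sketched as follows, again reusing the arrays (y, X, beta_hat, n) from the Example 15-1 sketch:

import numpy as np
from scipy import stats

# Significance-of-regression test (equation 15-27) for the peach model
k = X.shape[1] - 1                       # number of regressors
Syy = y @ y - y.sum()**2 / n
SSR = beta_hat @ (X.T @ y) - y.sum()**2 / n
SSE = Syy - SSR
F0 = (SSR / k) / (SSE / (n - k - 1))     # approx 30.46
p_value = stats.f.sf(F0, k, n - k - 1)
print(F0, p_value)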

15-5.2 Tests on Individual Regression Coefficients

We are frequently interested in testing hypotheses on the individual regression coefficients. Such tests would be useful in determining the value of each of the independent variables in the regression model. For example, the model might be more effective with the inclusion of additional variables, or perhaps with the deletion of one or more of the variables already in the model.

Adding a variable to a regression model always causes the sum of squares for regression to increase and the error sum of squares to decrease. We must decide whether the increase in the regression sum of squares is sufficient to warrant using the additional variable in the model. Furthermore, adding an unimportant variable to the model can actually increase the mean square error, thereby decreasing the usefulness of the model.

The hypotheses for testing the significance of any individual regression coefficient, say βⱼ, are

H₀: βⱼ = 0,
H₁: βⱼ ≠ 0.    (15-31)

If H₀: βⱼ = 0 is not rejected, then this indicates that xⱼ can possibly be deleted from the model. The test statistic for this hypothesis is

t₀ = β̂ⱼ / √(σ̂²Cⱼⱼ),    (15-32)

where Cⱼⱼ is the diagonal element of (X'X)⁻¹ corresponding to β̂ⱼ. The null hypothesis H₀: βⱼ = 0 is rejected if |t₀| > t_{α/2,n−k−1}. Note that this is really a partial or marginal test, because the regression coefficient β̂ⱼ depends on all the other regressor variables xᵢ (i ≠ j) that are in the model. To illustrate the use of this test, consider the data in Example 15-1, and suppose that we want to test

H₀: β₂ = 0,
H₁: β₂ ≠ 0.

The main diagonal element of (X'X)⁻¹ corresponding to β̂₂ is C₂₂ = 30.096, so the t statistic in equation 15-32 is

t₀ = β̂₂ / √(σ̂²C₂₂) = 34.890 / √((2.247)(30.096)) = 4.24.

Since t₀.₀₂₅,₁₇ = 2.110, we reject H₀: β₂ = 0 and conclude that the variable x₂ (density) contributes significantly to the model. Note that this test measures the marginal or partial contribution of x₂ given that x₁ is in the model.

We may also examine the contribution to the regression sum of squares of a variable, say xⱼ, given that other variables xᵢ (i ≠ j) are included in the model. The procedure used to do this is called the general regression significance test, or the "extra sum of squares" method. This procedure can also be used to investigate the contribution of a subset of the regressor variables to the model. Consider the regression model with k regressor variables

y = Xβ + ε,

where y is (n × 1), X is (n × p), β is (p × 1), ε is (n × 1), and p = k + 1. We would like to determine whether the subset of regressor variables x₁, x₂, ..., xᵣ (r < k) contributes

significantly to the regression model. Let the vector of regression coefficients be partitioned
as follows:

=F
LB,
where B, is (r x 1) and B, is [(p — r) x 1]. We wish to test the hypotheses

H,: B, =9,
H,: B, #9. ae
The model may be written

y=XB+e=X,B,+X,B, +, (15-34)
where X, represents the columns of X associated with B, and X, represents the columns of
X associated with B..
For the full model (including both B, and B,), we know that B = (X’X) 'X’y. Also, the
regression sum of squares for all variables including the intercept is
A
SS,(B) = B’X’y (p degrees of freedom)
and

Ms; = Y¥= Bix'y


n—p

SS,(B) is called the regression sum of squares due to B. To find the contribution of the terms
in f,, to the regression, fit the model assuming the null hypothesis H,: B, = 0 to be true. The
reduced model is found from equation 15-34 to be

y=X,B,+€. (15-35)
The least-squares estimator of B, is B, = (X,X,)"'X.y, and

S5,(B2) =B,X,y (p - r degrees of freedom). (15-36)


Tht regression sum of squares due to B, given that B, is already in the model is

SS(B,1B,) = SS,(B) - SS,(B,)- (15-37)


This sum of squares has r degrees of freedom. It is sometimes Called the “extra sum of
squares” due to B,. Note that SS,(B,|B.) is the increase in the regression sum of squares due
to including the variables x,, x,, ..., x, in the model. Now SS,(B,|B,) is independent of MS,,
and the null hypothesis B, = 0 may be tested by the statistic

m SSe(BilB2)/"_
Fo
MS, (15-38)

If Fy > F,,n-p We reject H,, concluding that at least one of the parameters in B, is not zero
and, consequently, at least one of the variables x,, x,, ..., x, in X, contributes significantly
to the regression model. Some authors call the test in equation 15-38 a partial F-test.
The partial F-test is very useful. We can use it to measure the contribution of x; as if it
were the last variable added to the model by computing

SS (BIB By, ---» Ber Biv ++» By:



This is the increase in the regression sum of squares due to adding xⱼ to a model that already includes x₁, ..., xⱼ₋₁, xⱼ₊₁, ..., xₖ. Note that the partial F-test on a single variable xⱼ is equivalent to the t-test in equation 15-32. However, the partial F-test is a more general procedure in that we can measure the effect of sets of variables. In Section 15-11 we will show how the partial F-test plays a major role in model building, that is, in searching for the best set of regressor variables to use in the model.

Example 15-7

Consider the damaged peaches data in Example 15-1. We will investigate the contribution of the variable x₂ (density) to the model. That is, we wish to test

H₀: β₂ = 0,
H₁: β₂ ≠ 0.

To test this hypothesis, we need the extra sum of squares due to β₂, or

SS_R(β₂|β₁, β₀) = SS_R(β₀, β₁, β₂) − SS_R(β₀, β₁)
               = SS_R(β₁, β₂|β₀) − SS_R(β₁|β₀).

In Example 15-6 we calculated

SS_R(β₁, β₂|β₀) = β̂'X'y − (Σᵢ₌₁²⁰ yᵢ)²/20 = 136.88   (2 degrees of freedom),

and if the model y = β₀ + β₁x₁ + ε is fit, we have

SS_R(β₁|β₀) = β̂₁S₁y = 96.21   (1 degree of freedom).

Therefore, we have

SS_R(β₂|β₁, β₀) = 136.88 − 96.21 = 40.67   (1 degree of freedom).

This is the increase in the regression sum of squares attributable to adding x₂ to a model already containing x₁. To test H₀: β₂ = 0, form the test statistic

F₀ = [SS_R(β₂|β₁, β₀)/1] / MS_E = 40.67/2.247 = 18.10.

Note that the MS_E from the full model, using both x₁ and x₂, is used in the denominator of the test statistic. Since F₀.₀₅,₁,₁₇ = 4.45, we reject H₀: β₂ = 0 and conclude that density (x₂) contributes significantly to the model.

Since this partial F-test involves a single variable, it is equivalent to the t-test. To see this, recall that the t-test on H₀: β₂ = 0 resulted in the test statistic t₀ = 4.24. Furthermore, recall that the square of a t random variable with ν degrees of freedom is an F random variable with one and ν degrees of freedom, and we note that t₀² = (4.24)² = 17.98 ≈ F₀.
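A sketch of this extra-sum-of-squares computation, reusing x1, X, beta_hat, MSE, n, and y from the Example 15-1 sketch:

import numpy as np
from scipy import stats

# Partial F-test for beta2 (equations 15-37 and 15-38)
X_red = np.column_stack([np.ones(n), x1])             # reduced model: x1 only
b_red = np.linalg.solve(X_red.T @ X_red, X_red.T @ y)
SSR_full = beta_hat @ (X.T @ y) - y.sum()**2 / n      # SSR(b1, b2 | b0)
SSR_red = b_red @ (X_red.T @ y) - y.sum()**2 / n      # SSR(b1 | b0)
F0 = (SSR_full - SSR_red) / 1 / MSE                   # approx 18.1
print(F0, stats.f.sf(F0, 1, n - 3))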

15-6 MEASURES OF MODEL ADEQUACY

A number of techniques can be used to measure the adequacy of a multiple regression model. This section will present several of these techniques. Model validation is an important part of the multiple regression model-building process. A good paper on this subject is Snee (1977) (see also Montgomery, Peck, and Vining, 2001).

15-6.1 The Coefficient of Multiple Determination

The coefficient of multiple determination R² is defined as

R² = SS_R/S_yy = 1 − SS_E/S_yy.    (15-39)

R² is a measure of the amount of reduction in the variability of y obtained by using the regressor variables x₁, x₂, ..., xₖ. As in the simple linear regression case, we must have 0 ≤ R² ≤ 1. However, as before, a large value of R² does not necessarily imply that the regression model is a good one. Adding a variable to the model will always increase R², regardless of whether the additional variable is statistically significant or not. Thus it is possible for models that have large values of R² to yield poor predictions of new observations or estimates of the mean response.

The positive square root of R² is the multiple correlation coefficient between y and the set of regressor variables x₁, x₂, ..., xₖ. That is, R is a measure of the linear association between y and x₁, x₂, ..., xₖ. When k = 1, this becomes the simple correlation between y and x.

Example 15-8

The coefficient of multiple determination for the regression model estimated in Example 15-1 is

R² = SS_R/S_yy = 136.88/175.09 = 0.7818.

That is, about 78.2% of the variability in damage y is explained when the two regressor variables, drop height (x₁) and fruit density (x₂), are used. If a model relating damage to drop height (x₁) only is fit, the value of R² for that model turns out to be R² = 0.549. Therefore, adding the variable x₂ to the model has increased R² from 0.549 to 0.782.

Adjusted R²

Some practitioners prefer to use the adjusted coefficient of multiple determination, adjusted R², defined as

R²_adj = 1 − [SS_E/(n − p)] / [S_yy/(n − 1)].    (15-40)

The value S_yy/(n − 1) will be constant regardless of the number of variables in the model. SS_E/(n − p) is the mean square for error, which will change with the addition or removal of terms (new regressor variables, interaction terms, higher-order terms) from the model. Therefore, R²_adj will increase only if the addition of a new term significantly reduces the mean square for error. In other words, R²_adj will penalize adding terms to the model that are not significant in modeling the response. Interpretation of the adjusted coefficient of multiple determination is identical to that of R².
Example 15-9

We can calculate R²_adj for the model fit in Example 15-1. From Example 15-6, we found that SS_E = 38.21 and S_yy = 175.09. The estimate R²_adj is then

R²_adj = 1 − [38.21/(20 − 3)] / [175.09/(20 − 1)] = 1 − 2.247/9.215 = 0.756.

The adjusted R² will play a significant role in variable selection and model building later in this chapter.
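Both statistics are one-liners given the sums of squares. Continuing the earlier sketch for the peach data (reusing y, n, p, and SSE; S_yy is recomputed here so the fragment stands on its own):

# R^2 and adjusted R^2 (equations 15-39 and 15-40) for the peach model
Syy = y @ y - y.sum()**2 / len(y)
R2 = 1 - SSE / Syy                                # approx 0.782
R2_adj = 1 - (SSE / (n - p)) / (Syy / (n - 1))    # approx 0.756
print(R2, R2_adj)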

15-6.2 Residual Analysis

The residuals from the estimated multiple regression model, defined by eᵢ = yᵢ − ŷᵢ, play an important role in judging model adequacy, just as they do in simple linear regression. As noted in Section 14-5.1, there are several residual plots that are often useful. These are illustrated in Example 15-10. It is also helpful to plot the residuals against variables not presently in the model that are possible candidates for inclusion. Patterns in these plots, similar to those in Fig. 14-5, indicate that the model may be improved by adding the candidate variable.

Example 15-10

The residuals for the model estimated in Example 15-1 are shown in Table 15-3. These residuals are plotted on a normal probability plot in Fig. 15-2. No severe deviations from normality are obvious, although the smallest residual (e₃ = −3.17) does not fall near the remaining residuals. The standardized residual, −3.17/√2.247 = −2.11, appears to be large and could indicate an unusual observation. The residuals are plotted against ŷ in Fig. 15-3, and against x₁ and x₂ in Figs. 15-4 and 15-5, respectively. In Fig. 15-4, there is some indication that the assumption of constant variance may not be satisfied. Removal of the unusual observation may improve the model fit, but there is no indication of error in data collection. Therefore, the point will be retained. We will see subsequently (Example 15-16) that two other regressor variables are required to adequately model these data.

Figure 15-2 Normal probability plot of residuals for Example 15-10.

Figure 15-3 Plot of residuals against ŷ for Example 15-10.

Figure 15-4 Plot of residuals against x₁ for Example 15-10.

Figure 15-5 Plot of residuals against x₂ for Example 15-10.



15-7 POLYNOMIAL REGRESSION

The linear model y = Xβ + ε is a general model that can be used to fit any relationship that is linear in the unknown parameters β. This includes the important class of polynomial regression models. For example, the second-degree polynomial in one variable,

y = β₀ + β₁x + β₁₁x² + ε,    (15-41)

and the second-degree polynomial in two variables,

y = β₀ + β₁x₁ + β₂x₂ + β₁₁x₁² + β₂₂x₂² + β₁₂x₁x₂ + ε,    (15-42)

are linear regression models.

Polynomial regression models are widely used in cases where the response is curvilinear, because the general principles of multiple regression can be applied. The following example illustrates some of the types of analyses that can be performed.

Example 15-11

Sidewall panels for the interior of an airplane are formed in a 1500-ton press. The unit manufacturing cost varies with the production lot size. The data shown below give the average cost per unit (in hundreds of dollars) for this product (y) and the production lot size (x). The scatter diagram, shown in Fig. 15-6, indicates that a second-order polynomial may be appropriate.

y   1.81   1.70   1.65   1.55   1.48   1.40   1.30   1.26   1.24   1.21   1.20   1.18
x    20     25     30     35     40     50     60     65     70     75     80     90

We will fit the model

y = β₀ + β₁x + β₁₁x² + ε.

Figure 15-6 Data for Example 15-11.

The y vector, X matrix, and β vector are as follows:

y = [1.81, 1.70, 1.65, 1.55, 1.48, 1.40, 1.30, 1.26, 1.24, 1.21, 1.20, 1.18]',

X = [1  20   400
     1  25   625
     1  30   900
     1  35  1225
     1  40  1600
     1  50  2500
     1  60  3600
     1  65  4225
     1  70  4900
     1  75  5625
     1  80  6400
     1  90  8100],

β = [β₀, β₁, β₁₁]'.

Solving the normal equations X'Xβ̂ = X'y gives the fitted model

ŷ = 2.1983 − 0.0225x + 0.0001251x².

The test for significance of regression is shown in Table 15-6. Since F₀ = 2171.07 is significant at 1%, we conclude that at least one of the parameters β₁ and β₁₁ is not zero. Furthermore, the standard tests for model adequacy reveal no unusual behavior.

In fitting polynomials, we generally like to use the lowest-degree model consistent with the data. In this example, it would seem logical to investigate dropping the quadratic term from the model. That is, we would like to test

H₀: β₁₁ = 0,
H₁: β₁₁ ≠ 0.

The general regression significance test can be used to test this hypothesis. We need to determine the "extra sum of squares" due to β₁₁, or

SS_R(β₁₁|β₁, β₀) = SS_R(β₁, β₁₁|β₀) − SS_R(β₁|β₀).

The sum of squares SS_R(β₁, β₁₁|β₀) = 0.5254, from Table 15-6. To find SS_R(β₁|β₀), we fit a simple linear regression model to the original data, yielding

ŷ = 1.9004 − 0.0091x.

It can be easily verified that the regression sum of squares for this model is

SS_R(β₁|β₀) = 0.4942.

Table 15-6 Test for Significance of Regression for the Second-Order Model in Example 15-11

Source of Variation    Sum of Squares    Degrees of Freedom    Mean Square    F₀
Regression                 0.5254               2                0.2627       2171.07
Error                      0.0011               9                0.000121
Total                      0.5265              11

Table 15-7 Analysis of Variance of Example 15-11 Showing the Test for H₀: β₁₁ = 0

Source of Variation    Sum of Squares                     Degrees of Freedom    Mean Square    F₀
Regression             SS_R(β₁, β₁₁|β₀) = 0.5254                 2                0.2627       2171.07
  Linear               SS_R(β₁|β₀) = 0.4942                      1                0.4942       4084.30
  Quadratic            SS_R(β₁₁|β₀, β₁) = 0.0312                 1                0.0312        257.85
Error                  0.0011                                    9                0.000121
Total                  0.5265                                   11

Therefore, the extra sum of squares due to β₁₁, given that β₁ and β₀ are in the model, is

SS_R(β₁₁|β₀, β₁) = SS_R(β₁, β₁₁|β₀) − SS_R(β₁|β₀)
                = 0.5254 − 0.4942
                = 0.0312.

The analysis of variance, with the test of H₀: β₁₁ = 0 incorporated into the procedure, is displayed in Table 15-7. Note that the quadratic term contributes significantly to the model.
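A sketch of this polynomial fit and of the partial F-test on the quadratic term, using the lot-size data of Example 15-11 (an illustration of the technique, not the book's own computation):

import numpy as np
from scipy import stats

x = np.array([20, 25, 30, 35, 40, 50, 60, 65, 70, 75, 80, 90], float)
y = np.array([1.81, 1.70, 1.65, 1.55, 1.48, 1.40,
              1.30, 1.26, 1.24, 1.21, 1.20, 1.18])

Xq = np.column_stack([np.ones_like(x), x, x**2])   # quadratic model
Xl = Xq[:, :2]                                     # linear model
bq = np.linalg.solve(Xq.T @ Xq, Xq.T @ y)
bl = np.linalg.solve(Xl.T @ Xl, Xl.T @ y)
SSR_q = bq @ (Xq.T @ y) - y.sum()**2 / len(y)
SSR_l = bl @ (Xl.T @ y) - y.sum()**2 / len(y)
MSE = (y @ y - bq @ (Xq.T @ y)) / (len(y) - 3)
F0 = (SSR_q - SSR_l) / MSE                         # approx 257.85
print(bq, F0, stats.f.sf(F0, 1, len(y) - 3))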

15-8 INDICATOR VARIABLES

The regression models presented in previous sections have been based on quantitative variables, that is, variables that are measured on a numerical scale. For example, variables such as temperature, pressure, distance, and age are quantitative variables. Occasionally, we need to incorporate qualitative variables in a regression model. For example, suppose that one of the variables in a regression model is the operator who is associated with each observation yᵢ. Assume that only two operators are involved. We may wish to assign different levels to the two operators to account for the possibility that each operator may have a different effect on the response.

The usual method of accounting for the different levels of a qualitative variable is by using indicator variables. For instance, to introduce the effect of two different operators into a regression model, we could define an indicator variable as follows:

x = 0 if the observation is from operator 1,
x = 1 if the observation is from operator 2.

In general, a qualitative variable with t levels is represented by t − 1 indicator variables, which are assigned values of either 0 or 1. Thus, if there were three operators, the different levels would be accounted for by two indicator variables defined as follows:

x₁    x₂
0     0     if the observation is from operator 1,
1     0     if the observation is from operator 2,
0     1     if the observation is from operator 3.

Indicator variables are also referred to as dummy variables. The following example illustrates some of the uses of indicator variables. For other applications, see Montgomery, Peck, and Vining (2001).

Example 15-12

(Adapted from Montgomery, Peck, and Vining, 2001.) A mechanical engineer is investigating the surface finish of metal parts produced on a lathe and its relationship to the speed (in RPM) of the lathe. The data are shown in Table 15-8. Note that the data have been collected using two different types of cutting tools. Since it is likely that the type of cutting tool affects the surface finish, we will fit the model

y = β₀ + β₁x₁ + β₂x₂ + ε,

where y is the surface finish, x₁ is the lathe speed in RPM, and x₂ is an indicator variable denoting the type of cutting tool used; that is,

x₂ = 0 for tool type 302,
x₂ = 1 for tool type 416.

The parameters in this model may be easily interpreted. If x₂ = 0, then the model becomes

y = β₀ + β₁x₁ + ε,

which is a straight-line model with slope β₁ and intercept β₀. However, if x₂ = 1, then the model becomes

y = β₀ + β₁x₁ + β₂(1) + ε = (β₀ + β₂) + β₁x₁ + ε,

which is a straight-line model with slope β₁ and intercept β₀ + β₂. Thus, the model y = β₀ + β₁x₁ + β₂x₂ + ε implies that surface finish is linearly related to lathe speed and that the slope β₁ does not depend on the type of cutting tool used. However, the type of cutting tool does affect the intercept, and β₂ indicates the change in the intercept associated with a change in tool type from 302 to 416.

Table 15-8 Surface Finish Data for Example 15-12

Observation Number, i    Surface Finish, yᵢ    RPM    Type of Cutting Tool
         1                    45.44            225          302
         2                    42.03            200          302
         3                    50.10            250          302
         4                    48.75            245          302
         5                    47.92            235          302
         6                    47.79            237          302
         7                    52.26            265          302
         8                    50.52            259          302
         9                    45.58            221          302
        10                    44.78            218          302
        11                    33.50            224          416
        12                    31.23            212          416
        13                    37.52            248          416
        14                    37.13            260          416
        15                    34.70            243          416
        16                    33.92            236          416
        17                    32.13            224          416
        18                    35.47            251          416
        19                    33.49            232          416
        20                    32.29            216          416

The X matrix and y vector for this problem are formed from Table 15-8 (with x₂ = 0 for tool type 302 and x₂ = 1 for tool type 416):

X = [1  225  0        y = [45.44
     1  200  0             42.03
     1  250  0             50.10
     ⋮    ⋮   ⋮              ⋮
     1  232  1             33.49
     1  216  1],           32.29].

The fitted model is

ŷ = 14.2762 + 0.1411x₁ − 13.2802x₂.

The analysis of variance for this model is shown in Table 15-9. Note that the hypothesis H₀: β₁ = β₂ = 0 (significance of regression) is rejected. This table also contains the sum of squares

SS_R = SS_R(β₁, β₂|β₀)
     = SS_R(β₁|β₀) + SS_R(β₂|β₁, β₀),

so a test of the hypothesis H₀: β₂ = 0 can be made. This hypothesis is also rejected, so we conclude that tool type has an effect on surface finish.

It is also possible to use indicator variables to investigate whether tool type affects both slope and intercept. Let the model be

y = β₀ + β₁x₁ + β₂x₂ + β₃x₁x₂ + ε,

Table 15-9 Analysis of Variance of Example 15-12

Source of Variation       Sum of Squares    Degrees of Freedom    Mean Square    F₀
Regression                  1012.0595              2               506.0297      1103.69ᵃ
  SS_R(β₁|β₀)               (130.6091)            (1)              130.6091       284.87ᵃ
  SS_R(β₂|β₁, β₀)           (881.4504)            (1)              881.4504      1922.52ᵃ
Error                          7.7943             17                 0.4585
Total                       1019.8538             19

ᵃ Significant at 1%.
where x₂ is the indicator variable. Now if tool type 302 is used, x₂ = 0, and the model is

y = β₀ + β₁x₁ + ε.

If tool type 416 is used, x₂ = 1, and the model becomes

y = β₀ + β₁x₁ + β₂ + β₃x₁ + ε
  = (β₀ + β₂) + (β₁ + β₃)x₁ + ε.

Note that β₂ is the change in the intercept, and β₃ is the change in slope produced by a change in tool type.

Another method of analyzing this data set is to fit separate regression models to the data for each tool type. However, the indicator variable approach has several advantages. First, only one regression model must be estimated. Second, by pooling the data on both tool types, more degrees of freedom for error are obtained. Third, tests of both hypotheses on the parameters β₂ and β₃ are just special cases of the general regression significance test.
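A sketch of the indicator-variable fit, using the data of Table 15-8 (the interaction column at the end shows how a tool-dependent slope would be added):

import numpy as np

# Surface finish data: x2 = 0 for tool type 302, x2 = 1 for tool type 416
rpm = np.array([225, 200, 250, 245, 235, 237, 265, 259, 221, 218,
                224, 212, 248, 260, 243, 236, 224, 251, 232, 216], float)
tool = np.array([0]*10 + [1]*10, float)
y = np.array([45.44, 42.03, 50.10, 48.75, 47.92, 47.79, 52.26, 50.52,
              45.58, 44.78, 33.50, 31.23, 37.52, 37.13, 34.70, 33.92,
              32.13, 35.47, 33.49, 32.29])

X = np.column_stack([np.ones_like(y), rpm, tool])
b = np.linalg.solve(X.T @ X, X.T @ y)
print(b)    # approx [14.276, 0.141, -13.280]

# Adding an rpm*tool column lets the slope also differ by tool type
X2 = np.column_stack([X, rpm * tool])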

15-9 THE CORRELATION MATRIX

Suppose we wish to estimate the parameters in the model

yᵢ = β₀ + β₁xᵢ₁ + β₂xᵢ₂ + εᵢ,   i = 1, 2, ..., n.    (15-43)

We may rewrite this model with a transformed intercept β₀* as

yᵢ = β₀* + β₁(xᵢ₁ − x̄₁) + β₂(xᵢ₂ − x̄₂) + εᵢ,    (15-44)

or, since β̂₀* = ȳ,

yᵢ − ȳ = β₁(xᵢ₁ − x̄₁) + β₂(xᵢ₂ − x̄₂) + εᵢ.    (15-45)

The X'X matrix for this model is

X'X = [S₁₁  S₁₂
       S₁₂  S₂₂],    (15-46)

where

Sᵢⱼ = Σᵤ₌₁ⁿ (xᵤᵢ − x̄ᵢ)(xᵤⱼ − x̄ⱼ),   i, j = 1, 2.    (15-47)

It is possible to express this X'X matrix in correlation form. Let

r₁₂ = S₁₂ / (S₁₁S₂₂)^(1/2),    (15-48)

and note that r₁₁ = r₂₂ = 1. Then the correlation form of the X'X matrix, equation 15-46, is

R = [1    r₁₂
     r₁₂  1  ].    (15-49)

The quantity r₁₂ is the sample correlation between x₁ and x₂. We may also define the sample correlation between xⱼ and y as

rⱼy = Sⱼy / (SⱼⱼS_yy)^(1/2),   j = 1, 2,    (15-50)

where

Sⱼy = Σᵤ₌₁ⁿ (xᵤⱼ − x̄ⱼ)(yᵤ − ȳ),   j = 1, 2,    (15-51)

is the corrected sum of cross products between xⱼ and y, and S_yy is the usual total corrected sum of squares of y.

These transformations result in a new regression model,

yᵤ* = b₁zᵤ₁ + b₂zᵤ₂ + εᵤ*,    (15-52)

where

yᵤ* = (yᵤ − ȳ) / S_yy^(1/2),
zᵤⱼ = (xᵤⱼ − x̄ⱼ) / Sⱼⱼ^(1/2),   j = 1, 2.

The relationship between the parameters b₁ and b₂ in the new model, equation 15-52, and the parameters β₀, β₁, and β₂ in the original model, equation 15-43, is as follows:

β₁ = b₁(S_yy/S₁₁)^(1/2),    (15-53)

β₂ = b₂(S_yy/S₂₂)^(1/2),    (15-54)

β₀ = ȳ − β₁x̄₁ − β₂x̄₂.    (15-55)

The least-squares normal equations for the transformed model, equation 15-52, are

[1    r₁₂] [b̂₁]   [r₁y]
[r₁₂  1  ] [b̂₂] = [r₂y],    (15-56)

or

b̂₁ = (r₁y − r₁₂r₂y) / (1 − r₁₂²),    (15-57a)

b̂₂ = (r₂y − r₁₂r₁y) / (1 − r₁₂²).    (15-57b)

The regression coefficients in equations 15-57 are usually called standardized regression coefficients. Many multiple regression computer programs use this transformation to reduce round-off errors in the (X'X)⁻¹ matrix. These round-off errors may be very serious if the

original variables differ considerably in magnitude. Some of these computer programs also display both the original regression coefficients and the standardized coefficients. The standardized regression coefficients are dimensionless, and this may make it easier to compare regression coefficients in situations where the original variables xⱼ differ considerably in their units of measurement. In interpreting these standardized regression coefficients, however, we must remember that they are still partial regression coefficients (i.e., bⱼ shows the effect of zⱼ given that the other zᵢ, i ≠ j, are in the model). Furthermore, the bⱼ are affected by the spacing of the levels of the xⱼ. Consequently, we should not use the magnitude of the bⱼ as a measure of the importance of the regressor variables.

While we have explicitly treated only the case of two regressor variables, the results generalize. If there are k regressor variables x₁, x₂, ..., xₖ, one may write the X'X matrix in correlation form,

R = [1    r₁₂  r₁₃  ···  r₁ₖ
     r₁₂  1    r₂₃  ···  r₂ₖ
     r₁₃  r₂₃  1    ···  r₃ₖ
     ⋮    ⋮    ⋮         ⋮
     r₁ₖ  r₂ₖ  r₃ₖ  ···  1  ],    (15-58)

where rᵢⱼ = Sᵢⱼ/(SᵢᵢSⱼⱼ)^(1/2) is the sample correlation between xᵢ and xⱼ, and Sᵢⱼ = Σᵤ₌₁ⁿ (xᵤᵢ − x̄ᵢ)(xᵤⱼ − x̄ⱼ). The correlations between xⱼ and y are given by the vector

g = [r₁y, r₂y, ..., rₖy]',    (15-59)

where rⱼy = Sⱼy/(SⱼⱼS_yy)^(1/2). The vector of standardized regression coefficients b̂' = [b̂₁, b̂₂, ..., b̂ₖ] is

b̂ = R⁻¹g.    (15-60)

The relationship between the standardized regression coefficients and the original regression coefficients is

β̂ⱼ = b̂ⱼ(S_yy/Sⱼⱼ)^(1/2),   j = 1, 2, ..., k.    (15-61)

Example 15-13
For the data in Example 15-1, we find
S,,,= 175.089, S,, = 184710.16,
ty = 4215.372, S,, = 0.047755,
S,, = 2.33, Si. = 51.2873.

Therefore,
‘pe 51.2873 i
Lipits
« (Sp5So0.)c ~ [(asario. 16)(0.047755)
464 Chapter 15 Multiple Regression

S, 4215.372
y
hy= - r 0.412,
(5:5) (184710.16)(175.089)
= SS _ — U7

and the correlation matrix for this problem is


1 0.5460
0.5460 ila

From equation 15-56, the normal equations in terms of the standardized regression coefficients are
$$\begin{bmatrix} 1 & 0.5460 \\ 0.5460 & 1 \end{bmatrix}\begin{bmatrix} \hat{b}_1 \\ \hat{b}_2 \end{bmatrix} = \begin{bmatrix} 0.7412 \\ 0.8060 \end{bmatrix}.$$
Consequently, the standardized regression coefficients are
$$\begin{bmatrix} \hat{b}_1 \\ \hat{b}_2 \end{bmatrix} = \begin{bmatrix} 1 & 0.5460 \\ 0.5460 & 1 \end{bmatrix}^{-1}\begin{bmatrix} 0.7412 \\ 0.8060 \end{bmatrix} = \begin{bmatrix} 1.424737 & -0.77791 \\ -0.77791 & 1.424737 \end{bmatrix}\begin{bmatrix} 0.7412 \\ 0.8060 \end{bmatrix} = \begin{bmatrix} 0.429022 \\ 0.571754 \end{bmatrix}.$$

These standardized regression coefficients could also have been computed directly from either equation 15-57 or equation 15-61. Note that although $\hat{b}_2 > \hat{b}_1$, we should be cautious about concluding that the fruit density ($x_2$) is more important than the drop height ($x_1$), since $\hat{b}_1$ and $\hat{b}_2$ are still partial regression coefficients.
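As a quick numerical check, the following minimal NumPy sketch solves equation 15-60 for the standardized coefficients and back-transforms them through equation 15-61; all numerical inputs are the quantities computed above, and the variable names are illustrative.

import numpy as np

# Correlation matrix R and correlations with y, from Example 15-13
R = np.array([[1.0, 0.5460],
              [0.5460, 1.0]])
g = np.array([0.7412, 0.8060])

# Standardized regression coefficients, b-hat = R^{-1} g (equation 15-60)
b = np.linalg.solve(R, g)
print(b)                       # approximately [0.4290, 0.5718]

# Back-transform to the original units, beta_j = b_j (S_yy / S_jj)^{1/2}
S_yy, S_11, S_22 = 175.089, 184710.16, 0.047755
beta1 = b[0] * np.sqrt(S_yy / S_11)
beta2 = b[1] * np.sqrt(S_yy / S_22)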

15-10 PROBLEMS IN MULTIPLE REGRESSION


There are a number of problems often encountered in the use of multiple regression. In this
section, we briefly discuss three of these problem areas: the effect of multicollinearity on
the regression model, the effect of outlying points in the x-space on the regression coeffi-
cients, and autocorrelation in the errors.

15-10.1 Multicollinearity
In most multiple regression problems, the independent or regressor variables $x_j$ are intercorrelated. In situations in which this intercorrelation is very large, we say that multicollinearity exists. Multicollinearity can have serious effects on the estimates of the regression coefficients and on the general applicability of the estimated model.
The effects of multicollinearity may be easily demonstrated. Consider a regression model with two regressor variables $x_1$ and $x_2$, and suppose that $x_1$ and $x_2$ have been "standardized," as in Section 15-9, so that the X'X matrix is in correlation form, as in equation 15-49. The model is
$$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \varepsilon_i, \qquad i = 1, 2, \ldots, n.$$



The $(\mathbf{X}'\mathbf{X})^{-1}$ matrix for this model is
$$(\mathbf{X}'\mathbf{X})^{-1} = \begin{bmatrix} \dfrac{1}{1-r_{12}^2} & \dfrac{-r_{12}}{1-r_{12}^2} \\[2mm] \dfrac{-r_{12}}{1-r_{12}^2} & \dfrac{1}{1-r_{12}^2} \end{bmatrix},$$
and the estimators of the parameters are
$$\hat{\beta}_1 = \frac{\mathbf{x}_1'\mathbf{y} - r_{12}\,\mathbf{x}_2'\mathbf{y}}{1-r_{12}^2}, \qquad \hat{\beta}_2 = \frac{\mathbf{x}_2'\mathbf{y} - r_{12}\,\mathbf{x}_1'\mathbf{y}}{1-r_{12}^2},$$

where $r_{12}$ is the sample correlation between $x_1$ and $x_2$, and $\mathbf{x}_1'\mathbf{y}$ and $\mathbf{x}_2'\mathbf{y}$ are the elements of the $\mathbf{X}'\mathbf{y}$ vector.
Now, if multicollinearity is present, $x_1$ and $x_2$ are highly correlated, and $|r_{12}| \to 1$. In such a situation, the variances and covariances of the regression coefficients become very large, since $V(\hat{\beta}_j) = C_{jj}\sigma^2 \to \infty$ as $|r_{12}| \to 1$, and $\mathrm{Cov}(\hat{\beta}_1, \hat{\beta}_2) = C_{12}\sigma^2 \to \pm\infty$ depending on whether $r_{12} \to \pm 1$. These large variances imply that the regression coefficients are very poorly estimated. Note that the effect of multicollinearity is to introduce a "near" linear dependency in the columns of the $\mathbf{X}$ matrix. As $r_{12} \to \pm 1$, this linear dependency becomes exact. Furthermore, if we assume that $\mathbf{x}_1'\mathbf{y} \approx \mathbf{x}_2'\mathbf{y}$ as $|r_{12}| \to 1$, then the estimates of the regression coefficients become equal in magnitude but opposite in sign; that is, $\hat{\beta}_1 = -\hat{\beta}_2$, regardless of the true values of $\beta_1$ and $\beta_2$.
Similar problems occur when multicollinearity is present and there are more than two regressor variables. In general, the diagonal elements of the matrix $\mathbf{C} = (\mathbf{X}'\mathbf{X})^{-1}$ can be written
$$C_{jj} = \frac{1}{1-R_j^2}, \qquad j = 1, 2, \ldots, k, \qquad (15\text{-}62)$$
where $R_j^2$ is the coefficient of multiple determination resulting from regressing $x_j$ on the other $k-1$ regressor variables. Clearly, the stronger the linear dependency of $x_j$ on the remaining regressor variables (and hence the stronger the multicollinearity), the larger the value of $R_j^2$ will be. We say that the variance of $\hat{\beta}_j$ is "inflated" by the quantity $(1-R_j^2)^{-1}$. Consequently, we usually call
$$\mathrm{VIF}(\hat{\beta}_j) = \frac{1}{1-R_j^2}, \qquad j = 1, 2, \ldots, k, \qquad (15\text{-}63)$$
the variance inflation factor for $\hat{\beta}_j$. Note that these factors are the main diagonal elements of the inverse of the correlation matrix. They are an important measure of the extent to which multicollinearity is present.
Although the estimates of the regression coefficients are very imprecise when multicollinearity is present, the estimated equation may still be useful. For example, suppose we wish to predict new observations. If these predictions are required in the region of the x-space where the multicollinearity is in effect, then often satisfactory results will be obtained, because while the individual $\beta_j$ may be poorly estimated, the function $\sum_{j=1}^{k}\beta_j x_{ij}$ may be estimated quite well. On the other hand, if the prediction of new observations requires extrapolation, then generally we would expect to obtain poor results. Extrapolation usually requires good estimates of the individual model parameters.

Multicollinearity arises for several reasons. It will occur when the analyst collects the data such that a constraint of the form $\sum_{j=1}^{k} a_j x_j = 0$ holds among the columns of the $\mathbf{X}$ matrix (the $a_j$ are constants, not all zero). For example, if four regressor variables are the components of a mixture, then such a constraint will always exist because the sum of the components is always constant. Usually, these constraints do not hold exactly, and the analyst does not know that they exist.
There are several ways to detect the presence of multicollinearity. Some of the more important of these are briefly discussed next, and a short computational sketch follows the list.

1. The variance inflation factors, defined in equation 15-63, are very useful measures
of multicollinearity. The larger the variance inflation factor, the more severe the
multicollinearity. Some authors have suggested that if any variance inflation factors
exceed 10, then multicollinearity is a problem. Other authors consider this value too
liberal and suggest that the variance inflation factors should not exceed 4 or 5.
2. The determinant of the correlation matrix may also be used as a measure of multi-
collinearity. The value of this determinant can range between 0 and 1. When the
value of the determinant is 1, the columns of the X matrix are orthogonal (i.e., there
is no intercorrelation between the regression variables), and when the value is 0,
there is an exact linear dependency among the columns of X. The smaller the value
of the determinant, the greater the degree of multicollinearity.
3. The eigenvalues or characteristic roots of the correlation matrix provide a measure of multicollinearity. If $\mathbf{X}'\mathbf{X}$ is in correlation form, then the eigenvalues of $\mathbf{X}'\mathbf{X}$ are the roots of the equation
$$|\mathbf{X}'\mathbf{X} - \lambda\mathbf{I}| = 0.$$
One or more eigenvalues near zero implies that multicollinearity is present. If $\lambda_{\max}$ and $\lambda_{\min}$ denote the largest and smallest eigenvalues of $\mathbf{X}'\mathbf{X}$, then the ratio $\lambda_{\max}/\lambda_{\min}$ can also be used as a measure of multicollinearity. The larger the value of this ratio, the greater the degree of multicollinearity. Generally, if the ratio $\lambda_{\max}/\lambda_{\min}$ is less than 10, there is little problem with multicollinearity.
4. Sometimes inspection of the individual elements of the correlation matrix can be helpful in detecting multicollinearity. If an element $|r_{ij}|$ is close to 1, then $x_i$ and $x_j$ may be strongly multicollinear. However, when more than two regressor variables are involved in a multicollinear fashion, the individual $r_{ij}$ are not necessarily large. Thus, this method will not always enable us to detect the presence of multicollinearity.
5. If the $F$-test for significance of regression is significant but the tests on the individual regression coefficients are not significant, then multicollinearity may be present.

Several remedial measures have been proposed for resolving the problem of multicollinearity. Augmenting the data with new observations specifically designed to break up the approximate linear dependencies that currently exist is often suggested. However, sometimes this is impossible for economic reasons, or because of the physical constraints that relate the $x_j$. Another possibility is to delete certain variables from the model. This suffers from the disadvantage that one must discard the information contained in the deleted variables.
Since multicollinearity primarily affects the stability of the regression coefficients, it would seem that estimating these parameters by some method that is less sensitive to multicollinearity than ordinary least squares would be helpful. Several methods have been suggested for this. Hoerl and Kennard (1970a, b) have proposed ridge regression as an

alternative to ordinary least squares. In ridge regression, the parameter estimates are obtained by solving
$$\hat{\boldsymbol{\beta}}^*(l) = (\mathbf{X}'\mathbf{X} + l\mathbf{I})^{-1}\mathbf{X}'\mathbf{y}, \qquad (15\text{-}64)$$
where $l > 0$ is a constant. Generally, values of $l$ in the interval $0 \le l \le 1$ are appropriate. The ridge estimator $\hat{\boldsymbol{\beta}}^*(l)$ is not an unbiased estimator of $\boldsymbol{\beta}$, as is the ordinary least-squares estimator $\hat{\boldsymbol{\beta}}$, but the mean square error of $\hat{\boldsymbol{\beta}}^*(l)$ will be smaller than the mean square error of $\hat{\boldsymbol{\beta}}$. Thus ridge regression seeks to find a set of regression coefficients that is more "stable," in the sense of having a small mean square error. Since multicollinearity usually results in ordinary least-squares estimators that may have extremely large variances, ridge regression is suitable for situations where the multicollinearity problem exists.
To obtain the ridge regression estimator from equation 15-64, one must specify a value for the constant $l$. Of course, there is an "optimum" $l$ for any problem, but the simplest approach is to solve equation 15-64 for several values of $l$ in the interval $0 \le l \le 1$. Then a plot of the values of $\hat{\boldsymbol{\beta}}^*(l)$ against $l$ is constructed. This display is called the ridge trace. The appropriate value of $l$ is chosen subjectively by inspection of the ridge trace. Typically, a value for $l$ is chosen such that relatively stable parameter estimates are obtained. In general, the variance of $\hat{\boldsymbol{\beta}}^*(l)$ is a decreasing function of $l$, while the squared bias $[\boldsymbol{\beta} - E(\hat{\boldsymbol{\beta}}^*(l))]^2$ is an increasing function of $l$. Choosing the value of $l$ involves trading off these two properties of $\hat{\boldsymbol{\beta}}^*(l)$.
A good discussion of the practical use of ridge regression is in Marquardt and Snee (1975). Also, several other biased estimation techniques have been proposed for dealing with multicollinearity; several of these are discussed in Montgomery, Peck, and Vining (2001).
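Computationally, the entire ridge trace amounts to solving equation 15-64 over a grid of $l$ values. The following is a minimal sketch, assuming NumPy and a design matrix X that has already been standardized (correlation form) with a centered response y; the names are illustrative.

import numpy as np

def ridge_trace(X, y, grid):
    """Solve (X'X + l I) beta = X'y for each l in grid (equation 15-64)."""
    k = X.shape[1]
    return np.array([np.linalg.solve(X.T @ X + l * np.eye(k), X.T @ y)
                     for l in grid])

# Plotting each column of ridge_trace(X, y, grid) against grid produces a
# display like Fig. 15-7; l is then chosen where the estimates stabilize.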

Example 15-14
(Based on an example in Hald, 1952.) The heat generated, in calories per gram, for a particular type of cement as a function of the quantities of four additives ($z_1$, $z_2$, $z_3$, and $z_4$) is shown in Table 15-10. We wish to fit a multiple linear regression model to these data.

Table 15-10 Data for Example 15-14

Observation Number      y     z1    z2    z3    z4
 1                    28.25   10    31     5    45
 2                    24.80   12    35     5    52
 3                    11.86    5    15     3    24
 4                    36.60   17    42     9    65
 5                    15.80    8     6     5    15
 6                    16.23    6    17     3    25
 7                    29.50   12    36     6    55
 8                    28.75   10    34     5    50
 9                    43.20   18    40    10    70
10                    38.47   23    50    10    80
11                    10.14   16    37     5    61
12                    38.92   20    40    11    70
13                    36.70   15    45     8    68
14                    15.31    7    22     2    30
15                     8.40    9    12     3    24
468 Chapter 15 Multiple Regression

The data will be coded by defining a new set of regressor variables,
$$x_{uj} = \frac{z_{uj} - \bar{z}_j}{S_{jj}^{1/2}}, \qquad j = 1, 2, 3, 4,$$
where $S_{jj} = \sum_{u=1}^{15}(z_{uj} - \bar{z}_j)^2$ is the corrected sum of squares of the levels of $z_j$. The coded data are shown in Table 15-11. This transformation makes the intercept orthogonal to the other regression coefficients, since the first column of the $\mathbf{X}$ matrix consists of ones. Therefore, the intercept in this model will always be estimated by $\bar{y}$. The $(4 \times 4)$ $\mathbf{X}'\mathbf{X}$ matrix for the four coded variables is the correlation matrix
$$\mathbf{X}'\mathbf{X} = \begin{bmatrix} 1.00000 & 0.84894 & 0.91412 & 0.93367 \\ 0.84894 & 1.00000 & 0.76899 & 0.97567 \\ 0.91412 & 0.76899 & 1.00000 & 0.86784 \\ 0.93367 & 0.97567 & 0.86784 & 1.00000 \end{bmatrix}.$$
This matrix contains several large correlation coefficients, and this may indicate significant multicollinearity. The inverse of $\mathbf{X}'\mathbf{X}$ is
$$(\mathbf{X}'\mathbf{X})^{-1} = \begin{bmatrix} 20.769 & 25.813 & -0.608 & -44.042 \\ 25.813 & 74.486 & 12.597 & -107.710 \\ -0.608 & 12.597 & 8.274 & -18.903 \\ -44.042 & -107.710 & -18.903 & 163.620 \end{bmatrix}.$$

The variance inflation factors are the main diagonal elements of this matrix. Note that three of the variance inflation factors exceed 10, a good indication that multicollinearity is present. The eigenvalues of $\mathbf{X}'\mathbf{X}$ are $\lambda_1 = 3.657$, $\lambda_2 = 0.2679$, $\lambda_3 = 0.07127$, and $\lambda_4 = 0.004014$. Two of the eigenvalues, $\lambda_3$ and $\lambda_4$, are relatively close to zero. Also, the ratio of the largest to the smallest eigenvalue is
$$\frac{\lambda_{\max}}{\lambda_{\min}} = \frac{3.657}{0.004014} = 911.06,$$
which is considerably larger than 10. Therefore, since examination of the variance inflation factors and the eigenvalues indicates potential problems with multicollinearity, we will use ridge regression to estimate the model parameters.

Table 15-11 Coded Data for Example 15-14

Observation Number      y        x1         x2         x3         x4
 1                    28.25   -0.12515    0.00405   -0.09206   -0.05538
 2                    24.80   -0.02635    0.08495   -0.09206    0.03692
 3                    11.86   -0.37217   -0.31957   -0.27617   -0.33226
 4                    36.60    0.22066    0.22653    0.27617    0.20832
 5                    15.80   -0.22396   -0.50161   -0.09206   -0.39819
 6                    16.23   -0.32276   -0.27912   -0.27617   -0.31907
 7                    29.50   -0.02635    0.10518    0.00000    0.07647
 8                    28.75   -0.12515    0.06472   -0.09206    0.01055
 9                    43.20    0.27007    0.18608    0.36823    0.27425
10                    38.47    0.51709    0.38834    0.36823    0.40609
11                    10.14    0.17126    0.12540   -0.09206    0.15558
12                    38.92    0.36887    0.18608    0.46029    0.27425
13                    36.70    0.12186    0.28721    0.18411    0.24788
14                    15.31   -0.27336   -0.17799   -0.36823   -0.25315
15                     8.40   -0.17456   -0.38025   -0.27617   -0.33226

We solved equation 15-64 for various values of $l$, and the results are summarized in Table 15-12. The ridge trace is shown in Fig. 15-7. The instability of the least-squares estimates $\hat{\boldsymbol{\beta}}^*(l = 0)$ is evident from inspection of the ridge trace. It is often difficult to choose a value of $l$ from the ridge trace that simultaneously stabilizes the estimates of all the regression coefficients. We will choose $l = 0.064$, which implies that the regression model is
$$\hat{y} = 25.53 - 18.0566x_1 + 17.2202x_2 + 36.0743x_3 + 4.7242x_4,$$
using $\hat{\beta}_0 = \bar{y} = 25.53$. Converting the model to the original variables $z_j$, we have
$$\hat{y} = 2.9913 - 0.8920z_1 + 0.3483z_2 + 3.3209z_3 + 0.0623z_4.$$

Table 15-12 Ridge Regression Estimates for Example 15-14

   l        beta1*(l)    beta2*(l)    beta3*(l)    beta4*(l)
0.000       -28.3318      65.9996      64.0479     -57.2491
0.001       -31.0360      57.0244      61.9645     -44.0901
0.002       -32.6441      50.9649      60.3899     -35.3088
0.004       -34.1071      43.2358      58.0266     -24.3241
0.008       -34.3195      35.1426      54.7018     -13.3348
0.016       -31.9710      27.9534      50.0949      -4.5489
0.032       -26.3451      22.0347      43.8309       1.2950
0.064       -18.0566      17.2202      36.0743       4.7242
0.128        -9.1786      13.4944      27.9363       6.5914
0.256        -1.9896      10.9160      20.8028       7.5076
0.512         2.4922       9.2014      15.3197       7.7224

Figure 15-7 Ridge trace for Example 15-14 (the four estimates $\hat{\beta}_j^*(l)$ plotted against $l$ for $0 \le l \le 0.55$).



15-10.2 Influential Observations in Regression

When using multiple regression we occasionally find that some small subset of the observations is unusually influential. Sometimes these influential observations are relatively far away from the vicinity where the rest of the data were collected. A hypothetical situation for two variables is depicted in Fig. 15-8, where one observation in x-space is remote from the rest of the data. The disposition of points in the x-space is important in determining the properties of the model. For example, the point $(x_{i1}, x_{i2})$ in Fig. 15-8 may be very influential in determining the estimates of the regression coefficients, the value of $R^2$, and the value of $MS_E$.
We would like to examine the data points used to build a regression model to determine if they control many model properties. If these influential points are "bad" points, or are erroneous in any way, then they should be eliminated. On the other hand, there may be nothing wrong with these points, but at least we would like to determine whether or not they produce results consistent with the rest of the data. In any event, even if an influential point is a valid one, if it controls important model properties, we would like to know this, since it could have an impact on the use of the model.
Montgomery, Peck, and Vining (2001) describe several methods for detecting influential observations. An excellent diagnostic is the Cook (1977, 1979) distance measure. This is a measure of the squared distance between the least-squares estimate of $\boldsymbol{\beta}$ based on all $n$ observations and the estimate $\hat{\boldsymbol{\beta}}_{(i)}$ based on removal of the $i$th point. The Cook distance measure is
$$D_i = \frac{\left(\hat{\boldsymbol{\beta}}_{(i)} - \hat{\boldsymbol{\beta}}\right)'\mathbf{X}'\mathbf{X}\left(\hat{\boldsymbol{\beta}}_{(i)} - \hat{\boldsymbol{\beta}}\right)}{p\,MS_E}, \qquad i = 1, 2, \ldots, n.$$
Clearly, if the $i$th point is influential, its removal will result in $\hat{\boldsymbol{\beta}}_{(i)}$ changing considerably from the value $\hat{\boldsymbol{\beta}}$. Thus a large value of $D_i$ implies that the $i$th point is influential. The statistic $D_i$ is actually computed using
$$D_i = \frac{r_i^2}{p}\cdot\frac{h_{ii}}{1-h_{ii}}, \qquad i = 1, 2, \ldots, n, \qquad (15\text{-}65)$$
where $r_i = e_i/\sqrt{MS_E(1-h_{ii})}$ and $h_{ii}$ is the $i$th diagonal element of the matrix
$$\mathbf{H} = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'.$$

Figure 15-8 A point that is remote in x-space (one point $(x_{i1}, x_{i2})$ lies far from the cluster containing all observations except the $i$th).



The $\mathbf{H}$ matrix is sometimes called the "hat" matrix, since
$$\hat{\mathbf{y}} = \mathbf{X}\hat{\boldsymbol{\beta}} = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y} = \mathbf{H}\mathbf{y}.$$
Thus $\mathbf{H}$ is a projection matrix that transforms the observed values of $\mathbf{y}$ into a set of fitted values $\hat{\mathbf{y}}$.
From equation 15-65 we note that $D_i$ is made up of a component that reflects how well the model fits the $i$th observation $y_i$ [$e_i/\sqrt{MS_E(1-h_{ii})}$ is called a Studentized residual, and it is a method of scaling residuals so that they have unit variance] and a component that measures how far that point is from the rest of the data [$h_{ii}/(1-h_{ii})$ is the distance of the $i$th point from the centroid of the remaining $n-1$ points]. A value of $D_i > 1$ would indicate that the point is influential. Either component of $D_i$ (or both) may contribute to a large value.
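The following minimal NumPy sketch computes the hat diagonals, Studentized residuals, and the $D_i$ of equation 15-65 from a design matrix X (including the column of ones) and response y; the function name is illustrative.

import numpy as np

def cooks_distances(X, y):
    n, p = X.shape                        # p counts the intercept column
    H = X @ np.linalg.inv(X.T @ X) @ X.T  # hat matrix
    h = np.diag(H)
    e = y - H @ y                         # residuals
    ms_e = e @ e / (n - p)
    r = e / np.sqrt(ms_e * (1.0 - h))     # Studentized residuals
    return (r**2 / p) * h / (1.0 - h)     # D_i, equation 15-65

Values of $D_i$ exceeding 1 would then be flagged as influential, as in the discussion of Table 15-13 below.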

Example 15-15
Table 15-13 lists the values of $D_i$ for the damaged peaches data in Example 15-1. To illustrate the calculations, consider the first observation:

Table 15-13 Influence Diagnostics for the Damaged Peaches Data in Example 15-15

Observation (i)    h_ii     Cook's Distance Measure (D_i)
 1                 0.249    0.277
 2                 0.126    0.000
 3                 0.088    0.156
 4                 0.104    0.061
 5                 0.060    0.000
 6                 0.299    0.000
 7                 0.081    0.008
 8                 0.055    0.002
 9                 0.193    0.116
10                 0.117    0.024
11                 0.250    0.037
12                 0.109    0.002
13                 0.213    0.045
14                 0.055    0.056
15                 0.147    0.067
16                 0.080    0.001
17                 0.209    0.001
18                 0.276    0.160
19                 0.210    0.171
20                 0.078    0.012

$$D_1 = \frac{r_1^2}{p}\cdot\frac{h_{11}}{1-h_{11}} = \frac{\left[2.06\big/\sqrt{2.247(1-0.249)}\right]^2}{3}\cdot\frac{0.249}{1-0.249} = 0.277.$$
The values in Table 15-13 were calculated using Minitab®. The Cook distance measure $D_i$ does not identify any potentially influential observations in the data, as no value of $D_i$ exceeds unity.

15-10.3 Autocorrelation
The regression models developed thus far have assumed that the model error components $\varepsilon_i$ are uncorrelated random variables. Many applications of regression analysis involve data for which this assumption may be inappropriate. In regression problems where the dependent and independent variables are time oriented or are time-series data, the assumption of uncorrelated errors is often untenable. For example, suppose we regressed the quarterly sales of a product against the quarterly point-of-sale advertising expenditures. Both variables are time series, and if they are positively correlated with other factors, such as disposable income and population size, that are not included in the model, then it is likely that the error terms in the regression model are positively correlated over time. Variables that exhibit correlation over time are referred to as autocorrelated variables. Many regression problems in economics, business, and agriculture involve autocorrelated errors.
The occurrence of positively autocorrelated errors has several potentially serious consequences. The ordinary least-squares estimators of the parameters are affected in that they are no longer minimum variance estimators, although they are still unbiased. Furthermore, the mean square error $MS_E$ may underestimate the error variance $\sigma^2$. Also, confidence intervals and tests of hypotheses, which are developed assuming uncorrelated errors, are not valid if autocorrelation is present.
There are several statistical procedures that can be used to determine whether the error terms in the model are uncorrelated. We will describe one of these, the Durbin-Watson test. This test assumes that the data are generated by the first-order autoregressive model
$$y_t = \beta_0 + \beta_1 x_t + \varepsilon_t, \qquad t = 1, 2, \ldots, n, \qquad (15\text{-}66)$$
where $t$ is the index of time and the error terms are generated according to the process
$$\varepsilon_t = \rho\varepsilon_{t-1} + a_t, \qquad (15\text{-}67)$$
where $|\rho| < 1$ is an unknown parameter and $a_t$ is a NID$(0, \sigma^2)$ random variable. Equation 15-66 is a simple linear regression model, except for the errors, which are generated from equation 15-67. The parameter $\rho$ in equation 15-67 is the autocorrelation coefficient. The Durbin-Watson test can be applied to the hypotheses
$$H_0\colon \rho = 0, \qquad H_1\colon \rho > 0. \qquad (15\text{-}68)$$
Note that if $H_0\colon \rho = 0$ is not rejected, we are implying that there is no autocorrelation in the errors, and the ordinary linear regression model is appropriate.

To test $H_0\colon \rho = 0$, first fit the regression model by ordinary least squares. Then calculate the Durbin-Watson test statistic
$$D = \frac{\displaystyle\sum_{t=2}^{n}(e_t - e_{t-1})^2}{\displaystyle\sum_{t=1}^{n}e_t^2}, \qquad (15\text{-}69)$$
where $e_t$ is the $t$th residual. For a suitable value of $\alpha$, obtain the critical values $D_{\alpha,U}$ and $D_{\alpha,L}$ from Table 15-14. If $D > D_{\alpha,U}$, do not reject $H_0\colon \rho = 0$; but if $D < D_{\alpha,L}$, reject $H_0\colon \rho = 0$ and conclude that the errors are positively autocorrelated. If $D_{\alpha,L} \le D \le D_{\alpha,U}$, the test is

Table 15-14 Critical Values of the Durbin-Watson Statistic

                Probability in
Sample          Lower Tail                  k = Number of Regressors (Excluding the Intercept)
Size            (Significance           1             2             3             4             5
                Level = alpha)      D_L   D_U     D_L   D_U     D_L   D_U     D_L   D_U     D_L   D_U
 15             0.01                0.81  1.07    0.70  1.25    0.59  1.46    0.49  1.70    0.39  1.96
                0.025               0.95  1.23    0.83  1.40    0.71  1.61    0.59  1.84    0.48  2.09
                0.05                1.08  1.36    0.95  1.54    0.82  1.75    0.69  1.97    0.56  2.21
 20             0.01                0.95  1.15    0.86  1.27    0.77  1.41    0.68  1.57    0.60  1.74
                0.025               1.08  1.28    0.99  1.41    0.89  1.55    0.79  1.70    0.70  1.87
                0.05                1.20  1.41    1.10  1.54    1.00  1.68    0.90  1.83    0.79  1.99
 25             0.01                1.05  1.21    0.98  1.30    0.90  1.41    0.83  1.52    0.75  1.65
                0.025               1.13  1.34    1.05  1.45    0.98  1.54    0.94  1.65    0.86  1.77
                0.05                1.29  1.45    1.21  1.55    1.12  1.66    1.04  1.77    0.95  1.89
 30             0.01                1.13  1.26    1.07  1.34    1.01  1.42    0.94  1.51    0.88  1.61
                0.025               1.25  1.38    1.18  1.46    1.12  1.54    1.05  1.63    0.98  1.73
                0.05                1.35  1.49    1.28  1.57    1.21  1.65    1.14  1.74    1.07  1.83
 40             0.01                1.25  1.34    1.20  1.40    1.15  1.46    1.10  1.52    1.05  1.58
                0.025               1.35  1.45    1.30  1.51    1.25  1.57    1.20  1.63    1.15  1.69
                0.05                1.44  1.54    1.39  1.60    1.34  1.66    1.29  1.72    1.23  1.79
 50             0.01                1.32  1.40    1.28  1.45    1.24  1.49    1.20  1.54    1.16  1.59
                0.025               1.42  1.50    1.38  1.54    1.34  1.59    1.30  1.64    1.26  1.69
                0.05                1.50  1.59    1.46  1.63    1.42  1.67    1.38  1.72    1.34  1.77
 60             0.01                1.38  1.45    1.35  1.48    1.32  1.52    1.28  1.56    1.25  1.60
                0.025               1.47  1.54    1.44  1.57    1.40  1.61    1.37  1.65    1.33  1.69
                0.05                1.55  1.62    1.51  1.65    1.48  1.69    1.44  1.73    1.41  1.77
 80             0.01                1.47  1.52    1.44  1.54    1.42  1.57    1.39  1.60    1.36  1.62
                0.025               1.54  1.59    1.52  1.62    1.49  1.65    1.47  1.67    1.44  1.70
                0.05                1.61  1.66    1.59  1.69    1.56  1.72    1.53  1.74    1.51  1.77
100             0.01                1.52  1.56    1.50  1.58    1.48  1.60    1.45  1.63    1.44  1.65
                0.025               1.59  1.63    1.57  1.65    1.55  1.67    1.53  1.70    1.51  1.72
                0.05                1.65  1.69    1.63  1.72    1.61  1.74    1.59  1.76    1.57  1.78

Source: Adapted from Econometrics, by R. J. Wonnacott and T. H. Wonnacott, John Wiley & Sons, New York, 1970, with permission of the publisher.



inconclusive. When the test is inconclusive, the implication is that more data must be collected. In many problems this is difficult to do.
To test for negative autocorrelation, that is, if the alternative hypothesis in equation 15-68 is $H_1\colon \rho < 0$, then use $D' = 4 - D$ as the test statistic, where $D$ is defined in equation 15-69. If a two-sided alternative is specified, then use both of the one-sided procedures, noting that the type I error for the two-sided test is $2\alpha$, where $\alpha$ is the type I error for the one-sided tests.
The only effective remedial measure when autocorrelation is present is to build a model that accounts explicitly for the autocorrelative structure of the errors. For an introductory treatment of these methods, refer to Montgomery, Peck, and Vining (2001).
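The statistic itself is simple to compute once the model has been fit. The following minimal NumPy sketch evaluates equation 15-69 from the least-squares residuals taken in time order.

import numpy as np

def durbin_watson(e):
    """Durbin-Watson statistic (equation 15-69); e = residuals in time order."""
    return np.sum(np.diff(e)**2) / np.sum(e**2)

# For the test against negative autocorrelation use 4 - durbin_watson(e);
# the conclusion still requires the tabulated critical values in Table 15-14.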

15-11 SELECTION OF VARIABLES IN MULTIPLE REGRESSION


15-11.1 The Model-Building Problem
An important problem in many applications of regression analysis is the selection of the set
of independent or regressor variables to be used in the model. Sometimes previous experi-
ence or underlying theoretical considerations can help the analyst specify the set of inde-
pendent variables. Usually, however, the problem consists of selecting an appropriate set of
regressors from a set that quite likely includes all the important variables, but we are sure
that not all these candidate variables are necessary to adequately model the response y.
In such a situation, we are interested in screening the candidate variables to obtain a
regression model that contains the “best” subset of regressor variables. We would like the
final model to contain enough regressor variables so that in the intended use of the model
(prediction, for example) it will perform satisfactorily. On the other hand, to keep model
maintenance costs to a minimum, we would like the model to use as few regressor variables
as possible. The compromise between these conflicting objectives is often called finding the
“best” regression equation. However, in most problems, there is no single regression model
that is “best” in terms of the various evaluation criteria that have been proposed. A great
deal of judgment and experience with the system being modeled is usually necessary to
select an appropriate set of independent variables for a regression equation.
No algorithm will always produce a good solution to the variable selection problem.
Most currently available procedures are search techniques. To perform satisfactorily, they
require interaction with and judgment by the analyst. We now briefly discuss some of the
more popular variable selection techniques.

15-11.2 Computational Procedures for Variable Selection


We assume that there are $k$ candidate variables, $x_1, x_2, \ldots, x_k$, and a single dependent variable $y$. All models will include an intercept term $\beta_0$, so that the model with all variables included would have $k+1$ terms. Furthermore, the functional form of each candidate variable (for example, $x_1 = 1/x$, $x_2 = \ln x$, etc.) is assumed to be correct.

All Possible Regressions This approach requires that the analyst fit all the regression equations involving one candidate variable, all regression equations involving two candidate variables, and so on. Then these equations are evaluated according to some suitable criteria to select the "best" regression model. If there are $k$ candidate variables, there are $2^k$ total equations to be examined. For example, if $k = 4$, there are $2^4 = 16$ possible regression equations, while if $k = 10$, there are $2^{10} = 1024$ possible regression equations. Hence, the number of equations to be examined increases rapidly as the number of candidate variables increases.

There are a number of criteria that may be used for evaluating and comparing the different regression models obtained. Perhaps the most commonly used criterion is based on the coefficient of multiple determination. Let $R_p^2$ denote the coefficient of determination for a regression model with $p$ terms, that is, $p-1$ candidate variables and an intercept term (note that $p \le k+1$). Computationally, we have
$$R_p^2 = \frac{SS_R(p)}{S_{yy}} = 1 - \frac{SS_E(p)}{S_{yy}}, \qquad (15\text{-}70)$$
where $SS_R(p)$ and $SS_E(p)$ denote the regression sum of squares and the error sum of squares, respectively, for the $p$-term equation. Now $R_p^2$ increases as $p$ increases and is a maximum when $p = k+1$. Therefore, the analyst uses this criterion by adding variables to the model up to the point where an additional variable is not useful in that it gives only a small increase in $R_p^2$. The general approach is illustrated in Fig. 15-9, which gives a hypothetical plot of $R_p^2$ against $p$. Typically, one examines a display such as this and chooses the number of variables in the model as the point at which the "knee" in the curve becomes apparent. Clearly, this requires judgment on the part of the analyst.
A second criterion is to consider the mean square error for the $p$-term equation, say $MS_E(p) = SS_E(p)/(n-p)$. Generally, $MS_E(p)$ decreases as $p$ increases, but this is not necessarily so. If the addition of a variable to the model with $p-1$ terms does not reduce the error sum of squares in the new $p$-term model by an amount equal to the error mean square in the old $(p-1)$-term model, $MS_E(p)$ will increase, because of the loss of one degree of freedom for error. Therefore, a logical criterion is to select $p$ as the value that minimizes $MS_E(p)$; or, since $MS_E(p)$ is usually relatively flat in the vicinity of the minimum, we could choose $p$ such that adding more variables to the model produces only very small reductions in $MS_E(p)$. The general procedure is illustrated in Fig. 15-10.
A third criterion is the $C_p$ statistic, which is a measure of the total mean square error for the regression model. We define the total standardized mean square error as
$$\Gamma_p = \frac{1}{\sigma^2}\sum_{i=1}^{n}\left\{\left[E(\hat{y}_i) - E(y_i)\right]^2 + V(\hat{y}_i)\right\} = \frac{1}{\sigma^2}\left[(\text{bias})^2 + \text{variance}\right].$$

Figure 15-9 Plot of $R_p^2$ against $p$ (the curve rises toward its maximum at $p = k+1$).


Figure 15-10 Plot of $MS_E(p)$ against $p$ (the curve attains its minimum before $p = k+1$).

We use the mean square error from the full $(k+1)$-term model as an estimate of $\sigma^2$; that is, $\hat{\sigma}^2 = MS_E(k+1)$. An estimator of $\Gamma_p$ is
$$C_p = \frac{SS_E(p)}{\hat{\sigma}^2} - n + 2p. \qquad (15\text{-}71)$$
If the $p$-term model has negligible bias, then it can be shown that
$$E(C_p \mid \text{zero bias}) = p.$$

Therefore, the values of $C_p$ for each regression model under consideration should be plotted against $p$. The regression equations that have negligible bias will have values of $C_p$ that fall near the line $C_p = p$, while those with significant bias will have values of $C_p$ that plot above this line. One then chooses as the "best" regression equation either a model with minimum $C_p$ or a model with a slightly larger $C_p$ that contains less bias than the minimum.
Another criterion is based on a modification of $R^2$ that accounts for the number of variables in the model. We presented this statistic in Section 15-6.1 as the adjusted $R^2$ for the model fit in Example 15-1. The adjusted $R^2$ is defined as
$$R^2_{\mathrm{adj}}(p) = 1 - \frac{n-1}{n-p}\left(1 - R_p^2\right). \qquad (15\text{-}72)$$
Note that $R^2_{\mathrm{adj}}(p)$ may decrease as $p$ increases if the decrease in $(1 - R_p^2)$ is not compensated for by the loss of a degree of freedom in $n - p$. The experimenter would usually select the regression model that has the maximum value of $R^2_{\mathrm{adj}}(p)$. However, note that this is equivalent to the model that minimizes $MS_E(p)$, since
$$R^2_{\mathrm{adj}}(p) = 1 - \frac{n-1}{n-p}\left(1 - R_p^2\right) = 1 - \frac{n-1}{S_{yy}}\cdot\frac{SS_E(p)}{n-p} = 1 - \left(\frac{n-1}{S_{yy}}\right)MS_E(p).$$
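Because all four criteria are simple functions of $SS_E(p)$, they can be computed together from summary quantities. The following Python sketch does so; the comment shows the values it reproduces for the four-variable model of Example 15-16 below.

def subset_criteria(ss_e_p, s_yy, sigma2_hat, n, p):
    """R_p^2, adjusted R^2, MS_E(p), and C_p from summary quantities."""
    r2 = 1.0 - ss_e_p / s_yy                           # equation 15-70
    r2_adj = 1.0 - (n - 1.0) / (n - p) * (1.0 - r2)    # equation 15-72
    ms_e = ss_e_p / (n - p)
    cp = ss_e_p / sigma2_hat - n + 2.0 * p             # equation 15-71
    return r2, r2_adj, ms_e, cp

# subset_criteria(18.29712, 175.08609, 1.13132, 20, 5)
# returns approximately (0.8955, 0.8676, 1.21981, 6.1732)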

Example 15-16
The data in Table 15-15 are an expanded set of data for the damaged peach data in Example 15-1. There are now five candidate variables: drop height ($x_1$), fruit density ($x_2$), fruit height at impact point ($x_3$), fruit pulp thickness ($x_4$), and potential energy of the fruit before the impact ($x_5$).
Table 15-16 presents the results of running all possible regressions (except the trivial model with only an intercept) on these data. The values of $R_p^2$, $R^2_{\mathrm{adj}}(p)$, $MS_E(p)$, and $C_p$ are given in the table. A plot of the maximum $R_p^2$ for each subset of size $p$ is shown in Fig. 15-11. Based on this plot there does not appear to be much gain in adding the fifth variable. The value of $R^2$ does not seem to increase significantly with the addition of $x_5$ over the four-variable model with the highest $R_p^2$ value. A plot of the minimum $MS_E(p)$ for each subset of size $p$ is shown in Fig. 15-12. The best two-variable model is either $(x_1, x_2)$ or $(x_2, x_3)$; the best three-variable model is $(x_1, x_2, x_3)$; the best four-variable model is either $(x_1, x_2, x_3, x_4)$ or $(x_1, x_2, x_3, x_5)$. There are several models with relatively small values of $MS_E(p)$, but either the three-variable model $(x_1, x_2, x_3)$ or the four-variable model $(x_1, x_2, x_3, x_4)$ would be superior to the other models based on the $MS_E(p)$ criterion. Further investigation will be necessary.
A $C_p$ plot is shown in Fig. 15-13. Only the five-variable model has a $C_p < p$ (specifically $C_p = 6.0$), but the $C_p$ value for the four-variable model $(x_1, x_2, x_3, x_4)$ is $C_p = 6.1732$. There appears to be insufficient gain in the $C_p$ value to justify including $x_5$. To illustrate the calculations, for this equation [for the model including $(x_1, x_2, x_3, x_4)$] we would find
$$C_p = \frac{SS_E(p)}{\hat{\sigma}^2} - n + 2p = \frac{18.29712}{1.13132} - 20 + 2(5) = 6.1732,$$

Table 15-15 Damaged Peach Data for Example 15-16

                           Drop       Fruit      Fruit      Fruit Pulp   Potential
Observation       y        Height,    Density,   Height,    Thickness,   Energy,
                           x1         x2         x3         x4           x5
 1               3.62      303.7      0.90       26.1       22.8         184.5
 2               7.27      366.7      1.04       18.0        …           185.2
 3               2.66      336.8      1.01       39.0       22.9         128.4
 4               1.53      304.5      0.95       48.5       20.4         173.0
 5               4.91      346.8      0.98       43.1       18.7         139.6
 6              10.36      600.0      1.04       21.0       17.0         146.5
 7               5.26      369.0      0.96       12.7       20.4          …
 8               6.09      418.0      1.00       46.0       18.1         129.2
 9               6.57      269.0      1.01        2.6       21.5         154.6
10               4.24      523.2      0.94        6.9       24.4         152.8
11               8.04      562.2      1.01        …          …           199.6
12               3.46      284.2      0.97       30.6       20.2          …
13               8.50      558.6      1.03       37.1       22.6         210.0
14               9.34      415.0      1.01       26.1       17.1         165.1
15               5.55      349.5      1.04       48.0        …           195.3
16               8.11      462.8      1.02       32.8        …           171.0
17               7.32      333.1      1.05        …          …           163.9
18              12.58      502.1      1.10        4.0       16.9         140.8
19               0.15      311.4      0.91       39.2       26.0         154.1
20               5.23      351.4      0.96       36.3        …           194.6

Figure 15-12 The $MS_E(p)$ plot for Example 15-16.

Figure 15-13 The $C_p$ plot for Example 15-16.

noting that $\hat{\sigma}^2 = 1.13132$ is obtained from the full equation $(x_1, x_2, x_3, x_4, x_5)$. Since all other models (with the exclusion of the five-variable model) contain substantial bias, we would conclude on the basis of the $C_p$ criterion that the best subset of the regressor variables is $(x_1, x_2, x_3, x_4)$. Since this model also results in a relatively small $MS_E(p)$ and a relatively high $R_p^2$, we would select it as the "best" regression equation. The final model is
$$\hat{y} = -19.9 + 0.0123x_1 + 27.3x_2 - 0.0655x_3 - 0.196x_4.$$
Keep in mind, though, that further analysis should be conducted on this model as well as on other possible candidate models. With additional investigation, it is possible to discover an even better-fitting model. We will discuss this in more detail later in this chapter.

The all-possible-regressions approach requires considerable computational effort, even when $k$ is moderately small. However, if the analyst is willing to look at something less than the estimated model and all its associated statistics, it is possible to devise algorithms for all possible regressions that produce less information about each model but which are more efficient computationally. For example, suppose that we could efficiently calculate only the $MS_E$ for each model. Since models with large $MS_E$ are not likely to be selected as the best regression equations, we would then have only to examine in detail the models with small values of $MS_E$. There are several approaches to developing a computationally efficient algorithm for all possible regressions (for example, see Furnival and Wilson, 1974). Both the Minitab® and SAS computer packages provide the Furnival and Wilson (1974) algorithm as an option. The SAS output is provided in Table 15-16.
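For moderate $k$, a brute-force pass is also easy to program directly. The following NumPy sketch, with illustrative names, enumerates every nonempty subset and ranks the fits by $C_p$, estimating $\sigma^2$ from the full model as in equation 15-71.

import itertools
import numpy as np

def all_possible_regressions(X, y):
    n, k = X.shape
    # sigma^2 estimated from the full (k+1)-term model
    full = np.column_stack([np.ones(n), X])
    b, *_ = np.linalg.lstsq(full, y, rcond=None)
    sigma2 = np.sum((y - full @ b)**2) / (n - k - 1)
    out = []
    for m in range(1, k + 1):
        for cols in itertools.combinations(range(k), m):
            A = np.column_stack([np.ones(n), X[:, cols]])
            b, *_ = np.linalg.lstsq(A, y, rcond=None)
            ss_e = np.sum((y - A @ b)**2)
            p = m + 1                                  # terms incl. intercept
            out.append((cols, ss_e / (n - p), ss_e / sigma2 - n + 2 * p))
    return sorted(out, key=lambda t: t[2])   # (subset, MS_E(p), C_p) by C_p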

Stepwise Regression This is probably the most widely used variable selection technique.
The procedure iteratively constructs a sequence of regression models by adding or

Table 15-16 All Possible Regressions for the Data in Example 15-16

Number in
Model (p-1)    R_p^2     R_adj^2(p)     C_p         MS_E(p)     Variables in Model
1              0.6530     0.6337        37.7047     3.37540     x2
1              0.5495     0.5245        53.7211     4.38205     x1
1              0.3553     0.3194        83.7824     6.27144     …
1              0.1980     0.1535       108.1144     7.80074     …
1              0.0021    -0.0534       138.4424     9.70689     …
2              0.7805     0.7547        19.9697     2.26063     x1, x2
2              0.7393     0.7086        26.3536     2.68546     x2, x3
2              0.7086     0.6744        31.0914     3.00076     …
2              0.7030     0.6681        31.9641     3.05883     …
2              0.6601     0.6201        38.6031     3.50064     …
2              0.6412     0.5990        41.5311     3.69550     …
2              0.5528     0.5002        55.2140     4.60607     …
2              0.4940     0.4345        64.3102     5.21141     …
2              0.4020     0.3316        78.5531     6.15925     …
2              0.2125     0.1199       107.8731     8.11045     …
3              0.8756     0.8523         7.2532     1.36135     x1, x2, x3
3              0.8049     0.7683        18.1949     2.13501     …
3              0.7898     0.7503        20.5368     2.30060     …
3              0.7807     0.7396        21.9371     2.39961     …
3              0.7721     0.7294        23.2681     2.49372     …
3              0.7568     0.7112        25.6336     2.66098     …
3              0.7337     0.6837        29.2199     2.91456     …
3              0.7032     0.6475        33.9410     3.24838     …
3              0.6448     0.5782        42.9705     3.88683     …
3              0.5666     0.4853        55.0797     4.74304     …
4              0.8955     0.8676         6.1732     1.21981     x1, x2, x3, x4
4              0.8795     0.8474         8.6459     1.40630     x1, x2, x3, x5
4              0.8316     0.7866        16.0687     1.96614     …
4              0.8103     0.7597        19.3611     2.21446     …
4              0.7854     0.7282        23.2090     2.50467     …
5              0.9095     0.8772         6.0000     1.13132     x1, x2, x3, x4, x5

removing variables at each step. The criterion for adding or removing a variable at any step is usually expressed in terms of a partial $F$-test. Let $F_{\mathrm{in}}$ be the value of the $F$ statistic for adding a variable to the model, and let $F_{\mathrm{out}}$ be the value of the $F$ statistic for removing a variable from the model. We must have $F_{\mathrm{in}} \ge F_{\mathrm{out}}$, and usually $F_{\mathrm{in}} = F_{\mathrm{out}}$.
Stepwise regression begins by forming a one-variable model using the regressor variable that has the highest correlation with the response variable $y$. This will also be the variable producing the largest $F$ statistic. If no $F$ statistic exceeds $F_{\mathrm{in}}$, the procedure terminates. For example, suppose that at this step $x_1$ is selected. At the second step the remaining $k-1$ candidate variables are examined, and the variable for which the statistic
$$F_j = \frac{SS_R(\beta_j \mid \beta_1, \beta_0)}{MS_E(x_j, x_1)} \qquad (15\text{-}73)$$
is a maximum is added to the equation, provided that $F_j > F_{\mathrm{in}}$. In equation 15-73, $MS_E(x_j, x_1)$ denotes the mean square for error for the model containing both $x_1$ and $x_j$. Suppose that this procedure now indicates that $x_2$ should be added to the model. Now the stepwise regression algorithm determines whether the variable $x_1$ added at the first step should be removed. This is done by calculating the $F$ statistic
$$F_1 = \frac{SS_R(\beta_1 \mid \beta_2, \beta_0)}{MS_E(x_1, x_2)}.$$
If $F_1 < F_{\mathrm{out}}$, the variable $x_1$ is removed.
In general, at each step the set of remaining candidate variables is examined, and the variable with the largest partial $F$ statistic is entered, provided that the observed value of $F$ exceeds $F_{\mathrm{in}}$. Then the partial $F$ statistic for each variable in the model is calculated, and the variable with the smallest observed value of $F$ is deleted if the observed $F < F_{\mathrm{out}}$. The procedure continues until no other variables can be added to or removed from the model.
Stepwise regression is usually performed using a computer program. The analyst exercises control over the procedure by the choice of $F_{\mathrm{in}}$ and $F_{\mathrm{out}}$. Some stepwise regression computer programs require that numerical values be specified for $F_{\mathrm{in}}$ and $F_{\mathrm{out}}$. Since the number of degrees of freedom on $MS_E$ depends on the number of variables in the model, which changes from step to step, a fixed value of $F_{\mathrm{in}}$ and $F_{\mathrm{out}}$ causes the type I and type II error rates to vary. Some computer programs allow the analyst to specify the type I error levels for $F_{\mathrm{in}}$ and $F_{\mathrm{out}}$. However, the "advertised" significance level is not the true level, because the variable selected is the one that maximizes the partial $F$ statistic at that stage. Sometimes it is useful to experiment with different values of $F_{\mathrm{in}}$ and $F_{\mathrm{out}}$ (or different advertised type I error rates) in several runs to see if this substantially affects the choice of the final model.
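In outline, the procedure is a loop of partial $F$ comparisons. The following Python sketch implements that logic, using SciPy $F$ quantiles for $F_{\mathrm{in}}$ and $F_{\mathrm{out}}$ at an advertised level $\alpha$; it illustrates the algorithm described above rather than reproducing any particular package.

import numpy as np
from scipy import stats

def _ss_e(cols, X, y):
    """Error sum of squares for the model with intercept plus X[:, cols]."""
    A = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ b
    return r @ r

def stepwise(X, y, alpha=0.10):
    n, k = X.shape
    active = []
    while True:
        # Entry step: largest partial F among the inactive candidates
        base = _ss_e(active, X, y)
        f_add = {}
        for j in range(k):
            if j not in active:
                full = _ss_e(active + [j], X, y)
                f_add[j] = (base - full) / (full / (n - len(active) - 2))
        f_in = stats.f.ppf(1 - alpha, 1, n - len(active) - 2)
        if not f_add or max(f_add.values()) <= f_in:
            return active
        active.append(max(f_add, key=f_add.get))
        # Removal step: smallest partial F among the active variables
        full = _ss_e(active, X, y)
        ms_e = full / (n - len(active) - 1)
        f_drop = {j: (_ss_e([i for i in active if i != j], X, y) - full) / ms_e
                  for j in active}
        f_out = stats.f.ppf(1 - alpha, 1, n - len(active) - 1)
        worst = min(f_drop, key=f_drop.get)
        if f_drop[worst] < f_out:
            active.remove(worst)

On the damaged peach data, this recipe would be expected to mimic the SAS stepwise run in Example 15-17, entering $x_2$, $x_1$, and $x_3$ in turn.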

Example 15-17
We will apply stepwise regression to the damaged peaches data in Table 15-15. Minitab® output is provided in Fig. 15-14. From this figure, we see that variables $x_1$, $x_2$, and $x_3$ are significant; this is because the last column contains entries for only $x_1$, $x_2$, and $x_3$. Figure 15-15 provides the SAS computer output that will support the computations to be calculated next. Instead of specifying numerical values of $F_{\mathrm{in}}$ and $F_{\mathrm{out}}$, we use an advertised type I error of $\alpha = 0.10$. The first step consists of building a simple linear regression model using the variable that gives the largest $F$ statistic. This is $x_2$, and since
$$F_2 = \frac{SS_R(\beta_2 \mid \beta_0)}{MS_E(x_2)} = \frac{114.32885}{3.37540} = 33.87 > F_{\mathrm{in}} = F_{0.10,1,18} = 3.01,$$
15-11 Selection of Variables in Multiple Regression 481
Alpha-to-Enter: On Alpha-to-Remove Oe
Response is y on 5 predictors, with N = 20
Step il 2 3}
Constant: -42.87 -33.83 -27.89
x2 49.1 34.9 30.7
T-Value Soe 4.23 4.71
P-Value 0.000 OnoOn 0.000

x1 : On Oaks ai 0.0136
T-Value 8) a 4.19
P-Value 0.006 0.001

x3 -0.067
T-Value -3.50
P-Value 0.003

Ss 1.84 GaSe) abe aby?


R-Sq 65.30 Bd OS 87.56
R-Sq (adj) 63.37 TAS CST 85.23
C-p eI Uf 20.0 tha8}

Figure 15-14 Minitab® output for stepwise regression in Example 15-17.

$x_2$ is entered into the model.
The second step begins by finding the variable $x_j$ that has the largest partial $F$ statistic, given that $x_2$ is in the model. This is $x_1$, and since
$$F_1 = \frac{SS_R(\beta_1 \mid \beta_2, \beta_0)}{MS_E(x_2, x_1)} = \frac{22.32656}{2.26063} = 9.88 > F_{\mathrm{in}} = F_{0.10,1,17} = 3.03,$$
$x_1$ is added to the model. Now the procedure evaluates whether or not $x_2$ should be retained, given that $x_1$ is in the model. This involves calculating
$$F_2 = \frac{SS_R(\beta_2 \mid \beta_1, \beta_0)}{MS_E(x_2, x_1)} = \frac{40.44627}{2.26063} = 17.89 > F_{\mathrm{out}} = F_{0.10,1,17} = 3.03.$$
Therefore $x_2$ should be retained. Step 2 terminates with both $x_1$ and $x_2$ in the model.
The third step finds the next variable for entry as $x_3$. Since
$$F_3 = \frac{SS_R(\beta_3 \mid \beta_2, \beta_1, \beta_0)}{MS_E(x_3, x_2, x_1)} = \frac{16.64910}{1.36135} = 12.23 > F_{\mathrm{in}} = F_{0.10,1,16} = 3.05,$$
$x_3$ is added to the model. Partial $F$-tests on $x_1$ (given $x_2$ and $x_3$) and $x_2$ (given $x_1$ and $x_3$) indicate that these variables should be retained. Therefore, the third step concludes with the variables $x_1$, $x_2$, and $x_3$ in the model.
At the fourth step, neither of the remaining terms, $x_4$ or $x_5$, is significant enough to be included in the model. Therefore, the stepwise procedure terminates.
The stepwise regression procedure would conclude that the best model includes $x_1$, $x_2$, and $x_3$. The usual checks of model adequacy, such as residual analysis and $C_p$ plots, should be applied to the equation. These results are similar to those found by all possible regressions, with the exception that $x_4$ was also considered a possibly significant variable by all possible regressions.

Forward Selection This variable selection procedure is based on the principle that variables should be added to the model one at a time until no remaining candidate variable produces a significant increase in the regression sum of squares. That is, variables are added one at a time as long as $F > F_{\mathrm{in}}$. Forward selection is a simplification of stepwise regression that omits the partial $F$-test for deleting variables from the model that have been added at previous steps. This is a potential weakness of forward selection; the procedure does not explore the effect that adding a variable at the current step has on variables added at earlier steps.

The REG Procedure
Dependent Variable: y

Forward Selection: Step 1   Variable x2 Entered: R-Square = 0.6530 and C(p) = 37.7047

Source             DF   Sum of Squares   Mean Square   F Value   Pr > F
Model               1        114.32885     114.32885     33.87   <.0001
Error              18         60.75725       3.37540
Corrected Total    19        175.08609

Variable    Parameter Estimate   Standard Error   Type II SS   F Value   Pr > F
Intercept            -42.87237          8.41429     87.62858     25.96   <.0001
x2                    49.08366          8.43377    114.32885     33.87   <.0001

Bounds on condition number: 1, 1

Forward Selection: Step 2   Variable x1 Entered: R-Square = 0.7805 and C(p) = 19.9697

Source             DF   Sum of Squares   Mean Square   F Value   Pr > F
Model               2        136.65541      68.32771     30.23   <.0001
Error              17         38.43068       2.26063
Corrected Total    19        175.08609

Variable    Parameter Estimate   Standard Error   Type II SS   F Value   Pr > F
Intercept            -33.83110          7.46286     46.45691     20.55   0.0003
x1                     0.01314          0.00418     22.32656      9.88   0.0059
x2                    34.88963          8.24844     40.44627     17.89   0.0006

Bounds on condition number: 1.4282, 5.7129

Forward Selection: Step 3   Variable x3 Entered: R-Square = 0.8756 and C(p) = 7.2532

Source             DF   Sum of Squares   Mean Square   F Value   Pr > F
Model               3        153.30451      51.10150     37.54   <.0001
Error              16         21.78159       1.36135
Corrected Total    19        175.08609

Variable    Parameter Estimate   Standard Error   Type II SS   F Value   Pr > F
Intercept            -27.89190          6.03518     29.07675     21.36   0.0003
x1                     0.01360          0.00325     23.87130     17.54   0.0007
x2                    30.68486          6.51286     30.21859     22.20   0.0002
x3                    -0.06701          0.01916     16.64910     12.23   0.0030

Bounds on condition number: 1.4786, 11.85

No other variable met the 0.1000 significance level for entry into the model.

Summary of Forward Selection

        Variable   Number    Partial     Model
Step    Entered    Vars In   R-Square    R-Square      C(p)   F Value   Pr > F
1       x2         1         0.6530      0.6530     37.7047     33.87   <.0001
2       x1         2         0.1275      0.7805     19.9697      9.88   0.0059
3       x3         3         0.0951      0.8756      7.2532     12.23   0.0030

Figure 15-16 SAS output for forward selection in Example 15-18.



Example 15-18
Application of the forward selection algorithm to the damaged peach data in Table 15-15 would begin by adding $x_2$ to the model. Then the variable that induces the largest partial $F$ statistic, given that $x_2$ is in the model, is added; this is variable $x_1$. The third step enters $x_3$, which produces the largest partial $F$ statistic given that $x_1$ and $x_2$ are in the model. Since the partial $F$ statistics for $x_4$ and $x_5$ are not significant, the procedure terminates. The SAS output for forward selection is given in Fig. 15-16. Note that forward selection leads to the same final model as stepwise regression. This is not always the case.

Backward Elimination This algorithm begins with all $k$ candidate variables in the model. Then the variable with the smallest partial $F$ statistic is deleted if this $F$ statistic is insignificant, that is, if $F < F_{\mathrm{out}}$. Next, the model with $k-1$ variables is estimated, and the next variable for potential elimination is found. The algorithm terminates when no further variables can be deleted.
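Backward elimination is the same partial $F$ machinery run in reverse, starting from the full model. A minimal sketch, reusing the _ss_e helper from the stepwise sketch above:

import numpy as np
from scipy import stats

def backward_elimination(X, y, alpha=0.10):
    n, k = X.shape
    active = list(range(k))
    while active:
        full = _ss_e(active, X, y)
        ms_e = full / (n - len(active) - 1)
        f = {j: (_ss_e([i for i in active if i != j], X, y) - full) / ms_e
             for j in active}
        worst = min(f, key=f.get)
        if f[worst] < stats.f.ppf(1 - alpha, 1, n - len(active) - 1):
            active.remove(worst)     # drop the least significant variable
        else:
            break
    return active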

Example 15-19
To apply backward elimination to the data in Table 15-15, we begin by estimating the full model in all five variables. This model is
$$\hat{y} = -20.89732 + 0.01102x_1 + 27.37046x_2 - 0.06929x_3 - 0.25695x_4 + 0.01668x_5.$$
The SAS computer output is given in Fig. 15-17. The partial $F$-tests for each variable are as follows:
$$F_1 = \frac{SS_R(\beta_1 \mid \beta_2, \beta_3, \beta_4, \beta_5, \beta_0)}{MS_E} = \frac{13.65360}{1.13132} = 12.07,$$
$$F_2 = \frac{SS_R(\beta_2 \mid \beta_1, \beta_3, \beta_4, \beta_5, \beta_0)}{MS_E} = \frac{21.73753}{1.13132} = 19.21,$$
$$F_3 = \frac{SS_R(\beta_3 \mid \beta_1, \beta_2, \beta_4, \beta_5, \beta_0)}{MS_E} = \frac{17.37834}{1.13132} = 15.36,$$
$$F_4 = \frac{SS_R(\beta_4 \mid \beta_1, \beta_2, \beta_3, \beta_5, \beta_0)}{MS_E} = \frac{5.25602}{1.13132} = 4.65,$$
$$F_5 = \frac{SS_R(\beta_5 \mid \beta_1, \beta_2, \beta_3, \beta_4, \beta_0)}{MS_E} = \frac{2.45862}{1.13132} = 2.17.$$
The variable $x_5$ has the smallest $F$ statistic, $F_5 = 2.17 < F_{\mathrm{out}} = F_{0.10,1,14} = 3.10$; therefore, $x_5$ is removed from the model at step 1. The model is now fit with only the four remaining variables. In step 2, the $F$ statistic for $x_4$ ($F_4 = 2.86$) is less than $F_{\mathrm{out}} = F_{0.10,1,15} = 3.07$; therefore, $x_4$ is removed from the model. No remaining variables have $F$ statistics less than the appropriate $F_{\mathrm{out}}$ values, and the procedure is terminated. The three-variable model $(x_1, x_2, x_3)$ has all variables significant according to the partial $F$-test criterion. Note that backward elimination has resulted in the same model that was found by forward selection and stepwise regression. This may not always happen.

Some Comments on Final Model Selection We have illustrated several different approaches to the selection of variables in multiple linear regression. The final model obtained from any model-building procedure should be subjected to the usual adequacy checks, such as residual analysis and examination of the effects of outlying points. The analyst may also consider augmenting the original set of candidate variables with cross products, polynomial terms, or other transformations of the original variables that might improve the model.

The REG Procedure
Dependent Variable: y

Backward Elimination: Step 0   All Variables Entered: R-Square = 0.9095 and C(p) = 6.0000

Source             DF   Sum of Squares   Mean Square   F Value   Pr > F
Model               5        159.24760      31.84952     28.15   <.0001
Error              14         15.83850       1.13132
Corrected Total    19        175.08609

Variable    Parameter Estimate   Standard Error   Type II SS   F Value   Pr > F
Intercept            -20.89732          7.16035      9.63604      8.52   0.0112
x1                     0.01102          0.00317     13.65360     12.07   0.0037
x2                    27.37046          6.24496     21.73753     19.21   0.0006
x3                    -0.06929          0.01768     17.37834     15.36   0.0015
x4                    -0.25695          0.11921      5.25602      4.65   0.0490
x5                     0.01668          0.01132      2.45862      2.17   0.1626

Bounds on condition number: 1.6438, 35.628

Backward Elimination: Step 1   Variable x5 Removed: R-Square = 0.8955 and C(p) = 6.1732

Source             DF   Sum of Squares   Mean Square   F Value   Pr > F
Model               4        156.78898      39.19724     32.13   <.0001
Error              15         18.29712       1.21981
Corrected Total    19        175.08609

Variable    Parameter Estimate   Standard Error   Type II SS   F Value   Pr > F
Intercept            -19.93175          7.40393      8.84010      7.25   0.0167
x1                     0.01233          0.00316     18.51245     15.18   0.0014
x2                    27.28797          6.48433     21.60246     17.71   0.0008
x3                    -0.06549          0.01816     15.86299     13.00   0.0026
x4                    -0.19641          0.11621      3.48447      2.86   0.1117

Bounds on condition number: 1.6358, 22.31

Backward Elimination: Step 2   Variable x4 Removed: R-Square = 0.8756 and C(p) = 7.2532

Source             DF   Sum of Squares   Mean Square   F Value   Pr > F
Model               3        153.30451      51.10150     37.54   <.0001
Error              16         21.78159       1.36135
Corrected Total    19        175.08609

Variable    Parameter Estimate   Standard Error   Type II SS   F Value   Pr > F
Intercept            -27.89190          6.03518     29.07675     21.36   0.0003
x1                     0.01360          0.00325     23.87130     17.54   0.0007
x2                    30.68486          6.51286     30.21859     22.20   0.0002
x3                    -0.06701          0.01916     16.64910     12.23   0.0030

Bounds on condition number: 1.4786, 11.85

All variables left in the model are significant at the 0.1000 level.

Summary of Backward Elimination

        Variable   Number    Partial     Model
Step    Removed    Vars In   R-Square    R-Square     C(p)   F Value   Pr > F
1       x5         4         0.0140      0.8955     6.1732      2.17   0.1626
2       x4         3         0.0199      0.8756     7.2532      2.86   0.1117

Figure 15-17 SAS output for backward elimination in Example 15-19.

The REG Procedure
Dependent Variable: y

Stepwise Selection: Step 1   Variable x2 Entered: R-Square = 0.6530 and C(p) = 37.7047

Source             DF   Sum of Squares   Mean Square   F Value   Pr > F
Model               1        114.32885     114.32885     33.87   <.0001
Error              18         60.75725       3.37540
Corrected Total    19        175.08609

Variable    Parameter Estimate   Standard Error   Type II SS   F Value   Pr > F
Intercept            -42.87237          8.41429     87.62858     25.96   <.0001
x2                    49.08366          8.43377    114.32885     33.87   <.0001

Bounds on condition number: 1, 1

Stepwise Selection: Step 2   Variable x1 Entered: R-Square = 0.7805 and C(p) = 19.9697

Source             DF   Sum of Squares   Mean Square   F Value   Pr > F
Model               2        136.65541      68.32771     30.23   <.0001
Error              17         38.43068       2.26063
Corrected Total    19        175.08609

Variable    Parameter Estimate   Standard Error   Type II SS   F Value   Pr > F
Intercept            -33.83110          7.46286     46.45691     20.55   0.0003
x1                     0.01314          0.00418     22.32656      9.88   0.0059
x2                    34.88963          8.24844     40.44627     17.89   0.0006

Bounds on condition number: 1.4282, 5.7129

Stepwise Selection: Step 3   Variable x3 Entered: R-Square = 0.8756 and C(p) = 7.2532

Source             DF   Sum of Squares   Mean Square   F Value   Pr > F
Model               3        153.30451      51.10150     37.54   <.0001
Error              16         21.78159       1.36135
Corrected Total    19        175.08609

Variable    Parameter Estimate   Standard Error   Type II SS   F Value   Pr > F
Intercept            -27.89190          6.03518     29.07675     21.36   0.0003
x1                     0.01360          0.00326     23.87130     17.54   0.0007
x2                    30.68486          6.51286     30.21859     22.20   0.0002
x3                    -0.06701          0.01916     16.64910     12.23   0.0030

Bounds on condition number: 1.4786, 11.85

All variables left in the model are significant at the 0.1000 level.
No other variable met the 0.1000 significance level for entry into the model.

Summary of Stepwise Selection

        Variable   Variable   Number    Partial     Model
Step    Entered    Removed    Vars In   R-Square    R-Square      C(p)   F Value   Pr > F
1       x2                    1         0.6530      0.6530     37.7047     33.87   <.0001
2       x1                    2         0.1275      0.7805     19.9697      9.88   0.0059
3       x3                    3         0.0951      0.8756      7.2532     12.23   0.0030

Figure 15-15 SAS output for stepwise regression in Example 15-17.

A major criticism of variable selection methods, such as stepwise regression, is that the analyst may conclude that there is one "best" regression equation. This generally is not the case, because there are often several equally good regression models that can be used. One way to avoid this problem is to use several different model-building techniques and see if different models result. For example, we have found the same model for the damaged peach data by using stepwise regression, forward selection, and backward elimination. This is a good indication that the three-variable model is the best regression equation. Furthermore, there are variable selection techniques that are designed to find the best one-variable model, the best two-variable model, and so forth. For a discussion of these methods, and the variable selection problem in general, see Montgomery, Peck, and Vining (2001).
If the number of candidate regressors is not too large, the all-possible-regressions method is recommended. It is not distorted by multicollinearity among the regressors, as stepwise-type methods are.

15-12 SUMMARY
This chapter has introduced multiple linear regression, including least-squares estimation of the parameters, interval estimation, prediction of new observations, and methods for hypothesis testing. Various tests of model adequacy, including residual plots, have been discussed. It was shown that polynomial regression models can be handled by the usual multiple linear regression methods. Indicator variables were introduced for dealing with qualitative variables. It also was observed that the problem of multicollinearity, or intercorrelation among the regressor variables, can seriously degrade the precision of the estimated regression coefficients and often leads to a regression model that may not predict new observations well. Several causes and remedial measures of this problem, including biased estimation techniques, were discussed. Finally, the variable selection problem in multiple regression was introduced. A number of model-building procedures, including all possible regressions, stepwise regression, forward selection, and backward elimination, were illustrated.

15-13 EXERCISES

15-1. Consider the damaged peach data in Table 15-15.
(a) Fit a regression model using $x_1$ (drop height) and $x_4$ (fruit pulp thickness) to these data.
(b) Test for significance of regression.
(c) Compute the residuals from this model. Analyze these residuals using the methods discussed in this chapter.
(d) How does this two-variable model compare with the two-variable model using $x_1$ and $x_2$ from Example 15-1?

15-2. Consider the damaged peach data in Table 15-15.
(a) Fit a regression model using $x_1$ (drop height), $x_2$ (fruit density), and $x_3$ (fruit height at impact point) to these data.
(b) Test for significance of regression.
(c) Compute the residuals from this model. Analyze these residuals using the methods discussed in this chapter.

15-3. Using the results of Exercise 15-1, find a 95% confidence interval on $\beta_1$.

15-4. Using the results of Exercise 15-2, find a 90% confidence interval on $\beta_2$.

15-5. The data in the table at the top of page 487 are the 1976 team performance statistics for the teams in the National Football League (Source: The Sporting News).
(a) Fit a multiple regression model relating the number of games won to the teams' passing yardage ($x_2$), the percentage of rushing plays ($x_7$), and the opponents' yards rushing ($x_8$).
(b) Construct the appropriate residual plots and comment on model adequacy.
(c) Test the significance of each variable to the model, using either the $t$-test or the partial $F$-test.

15-6. The table at the top of page 488 presents gasoline mileage performance for 25 automobiles (Source: Motor Trend, 1975).
(a) Fit a multiple regression model relating gasoline mileage to engine displacement ($x_1$) and the number of carburetor barrels ($x_6$).
(b) Analyze the residuals and comment on model adequacy.

National Football League 1976 Team Performance

Team               y    x1     x2     x3     x4     x5    x6    x7     x8     x9
Washington        10   2113   1945   34.9    …      …     …     …      …      …
Minnesota         11    …     2455   36.4    …      …     …     …      …      …
New England       11   2957   1737   40.4    …      …     …     …      …      …
Oakland           13   2245   2905   41.6    …      …     …     …      …      …
Pittsburgh        10   2971   1666   39.2    …      …     …     …      …      …
Baltimore         11   2309   2927   39.7    …      …     …     …      …      …
Los Angeles       10   2524    …     36.4    …      …     …     …      …      …
Dallas            11   2147   2737   37.0    …      …     …     …      …      …
Atlanta            4   1649   1414   42.1    …      …     …     …      …      …
Buffalo            2    …      …     42.1    …      …     …     …      …      …
Chicago            7   2363   1440   37.3    …      …     …     …      …      …
Cincinnati        10   2109   2191   39.5    …      …     …     …      …      …
Cleveland          9   2295   2229   37.4    …      …     …     …      …      …
Denver             9   1932   2204   35.1    …      …     …     …      …      …
Detroit            6   2213   2140   34.6    …      …     …     …      …      …
Green Bay          5   1722    …     36.6    …      …     …     …      …      …
Houston            5    …     2072   35.3    …      …     …     …      …      …
Kansas City        5   1473   2929   41.1    …      …     …     …      …      …
Miami              6   2118    …     36.2    …     +6    582   54.7    …     2670
New Orleans        4   1775   1943   39.3   78.3   +7     …     …      …     2202
New York Giants    3   1904   1792   39.7   36.1    …     …    61.9    …      …
New York Jets      3   1929    …     39.7   68.4   -21   627    …      …      …
Philadelphia       4   2060   1492   35.5   68.8   -8    722   57.8    …      …
St. Louis         10    …     2835   35.3   74.3    …    683   59.7   1979   2110
San Diego          6   2049    …     38.7   50.0    …     …     …      …      …
San Francisco      8    …      …     39.9   57.1    …     …    65.3    …     1776
Seattle            2   1416    …     37.4   56.3   -22    …    43.4   2876   2524
Tampa Bay          0   1503   1503   39.3   47.0    …    875   53.5    …     2741

y: Games won (per 14-game season).
x1: Rushing yards (season).
x2: Passing yards (season).
x3: Punting average (yds/punt).
x4: Field goal percentage (fgs made/fgs attempted).
x5: Turnover differential (turnovers acquired minus turnovers lost).
x6: Penalty yards (season).
x7: Percent rushing (rushing plays/total plays).
x8: Opponents' rushing yards (season).
x9: Opponents' passing yards (season).

(c) What is the value of adding x₆ to a model that already contains x₁?

15-7. The electric power consumed each month by a chemical plant is thought to be related to the average ambient temperature (x₁), the number of days in the month (x₂), the average product purity (x₃), and the tons of product produced (x₄). The past year's historical data are available and are presented in the table at the bottom of page 488.
488 Chapter 15 Multiple Regression

Gasoline Mileage Performance for 25 Automobiles

Automobile      y      x₁     x₂    x₃    x₄      x₅      x₆  x₇   x₈     x₉    x₁₀   x₁₁
Apollo        18.90   350    165   260   8.0:1   2.56:1   4   3   200.3  69.9  3910   A
Nova          20.00   250    105   185    —       —       —   —    —      —     —     —
Monarch       18.25   351    143   255   8.0:1   3.00:1   2   3   199.9  74.0  3890   A
Duster        20.07   225     95   170   8.4:1   2.76:1   1   3   194.1  71.8  3365   M
Jenson Conv.  11.20   440    215   330    —       —       —   3   184.5  69.0  4215   A
Skyhawk       22.12   231    110   175   8.0:1   2.56:1   2   3   179.3  65.4  3020   A
Scirocco      34.70    89.7   70    81   8.2:1   3.90:1   2   4   155.7  64.0  1905   M
Corolla SR-5  30.40    96.9   75    83   9.0:1   4.30:1   2   5   165.2   —    2320   M
Camaro        16.50   350    155   250   8.5:1   3.08:1   4   3   195.4  74.4  3885   A
Datsun B210   36.50    85.3   —     83    —       —       —   4   160.6  62.2  2009   M
Capri II      21.50   171    109   146   8.2:1    —       —   4   170.4  66.9  2655   M
Pacer         19.70   258    110   195   8.0:1   3.08:1   —   —    —      —     —     —
Granada       17.80   302    129   220   8.0:1   3.00:1   2   3   199.9  74.0  3890   A
Eldorado      14.39   500    190   360   8.5:1   2.73:1   4   3   224.1  79.8  5290   A
Imperial      14.89   440    215   330   8.2:1   2.71:1   4   3   231.0  79.7   —     A
Nova LN       17.80   350    155   250   8.5:1   3.08:1   —   3   196.7  72.2   —     A
Starfire      23.54   231    110   175   8.0:1   2.56:1   2   3   179.3  65.4  3050   A
Cordoba       21.47   360    180   290   8.4:1   2.45:1   2   3   214.2  76.3  4250   A
Trans Am      16.59   400    185   NA    7.6:1   3.08:1   —   3    —     73.0  3850   A
Corolla E-5   31.90    96.9   75    83   9.0:1   4.30:1   2   5   165.2  61.8  2275   M
Mark IV       13.27   460    223   366   8.0:1   3.00:1   4   3    —     79.8  5430   A
Celica GT     23.90   133.6   96   120   8.4:1    —       2   —    —      —     —     M
Charger SE    19.73   318    140   255    —       —       2   3   215.3  76.3   —     A
Cougar        13.90   351    148   243   8.0:1   3.25:1   2   3   215.5  78.5  4540   A
Corvette      16.50   350    165   255   8.5:1   2.73:1   4   3   185.2  69.0  3660   A

[Entries marked — are not legible in this reproduction; NA appears in the original.]

y:   Miles / gallon.
x₁:  Displacement (cubic in.).
x₂:  Horsepower.
x₃:  Torque (ft-lb).
x₄:  Compression ratio.
x₅:  Rear axle ratio.
x₆:  Carburetor (barrels).
x₇:  No. of transmission speeds.
x₈:  Overall length (in.).
x₉:  Width (in.).
x₁₀: Weight (lbs).
x₁₁: Type of transmission (A—automatic, M—manual).

y     x₁   x₂   x₃   x₄
240   25   24   91   100
236   31   21   90    95
290   45   24   88   110
274   60   25   87    88
301   65   25   91    94
316   72   26   94   109
300   80   25   87    97
296   84   25   86    96
267   75   24   88   110
276   60   25   91   105
288   50   25   90   100
261   38   23   89    98
15-13. Exercises 489

(a) Fit a multiple regression model to these data.
(b) Test for significance of regression.
(c) Use partial F statistics to test H₀: β₃ = 0 and H₀: β₄ = 0.
(d) Compute the residuals from this model. Analyze the residuals using the methods discussed in this chapter.

15-8. Hald (1952) reports data on the heat evolved in calories per gram of cement (y) for various amounts of four ingredients (x₁, x₂, x₃, x₄).

Observation
Number       y      x₁   x₂   x₃   x₄
 1          78.5     7   26    6   60
 2          74.3     1   29   15   52
 3         104.3    11   56    8   20
 4          87.6    11   31    8   47
 5          95.9     7   52    6   33
 6         109.2    11   55    9   22
 7         102.7     3   71   17    6
 8          72.5     1   31   22   44
 9          93.1     2   54   18   22
10         115.9    21   47    4   26
11          83.8     1   40   23   34
12         113.3    11   66    9   12
13         109.4    10   68    8   12

(a) Fit a multiple regression model to these data.
(b) Test for significance of regression.
(c) Test the hypothesis β₄ = 0 using the partial F-test.
(d) Compute the t statistics for each independent variable. What conclusions can you draw?
(e) Test the hypothesis β₂ = β₃ = β₄ = 0 using the partial F-test.
(f) Construct a 95% confidence interval estimate for β₁.

15-9. An article entitled "A Method for Improving the Accuracy of Polynomial Regression Analysis" in the Journal of Quality Technology (1971, p. 149) reported the following data on y = ultimate shear strength of a rubber compound (psi) and x = cure temperature (°F):

y   770  800  840  810  735  640  590  560
x   280  284  292  295  298  305  308  315

(a) Fit a second-order polynomial to this data.
(b) Test for significance of regression.
(c) Test the hypothesis that β₁₁ = 0.
(d) Compute the residuals and test for model adequacy.

15-10. Consider the following data, which result from an experiment to determine the effect of x = test time in hours at a particular temperature on y = change in oil viscosity:

y   −4.42  −1.39  −1.55  −1.89  −2.43  −3.15  −4.05  −5.15  −6.43  −7.89
x    0.25   0.50   0.75   1.00   1.25   1.50   1.75   2.00   2.25   2.50

(a) Fit a second-order polynomial to the data.
(b) Test for significance of regression.
(c) Test the hypothesis that β₁₁ = 0.
(d) Compute the residuals and check for model adequacy.

15-11. For many polynomial regression models we subtract x̄ from each x value to produce a "centered" regressor x′ = x − x̄. Using the data from Exercise 15-9, fit the model y = β₀′ + β₁′x′ + β₁₁′(x′)² + ε. Use the results to estimate the coefficients in the uncentered model y = β₀ + β₁x + β₁₁x² + ε.

15-12. Suppose that we use a standardized variable x′ = (x − x̄)/sₓ, where sₓ is the standard deviation of x, in constructing a polynomial regression model. Using the data in Exercise 15-9 and the standardized variable approach, fit the model y = β₀* + β₁*x′ + β₁₁*(x′)² + ε.
(a) What value of y do you predict when x = 285°F?
(b) Estimate the regression coefficients in the unstandardized model y = β₀ + β₁x + β₁₁x² + ε.
(c) What can you say about the relationship between SS_E and R² for the standardized and unstandardized models?
(d) Suppose that y′ = (y − ȳ)/s_y is used in the model along with x′. Fit the model and comment on the relationship between SS_E and R² in the standardized model and the unstandardized model.

15-13. The data shown below were collected during an experiment to determine the change in thrust efficiency (%) (y) as the divergence angle of a rocket nozzle (x) changes.

y   24.60  24.71  23.90  39.50  39.60  57.12  67.11  67.24  67.15  77.87  80.11  84.67
x    4.0    4.0    4.0    5.0    5.0    6.0    6.5    6.5     —      —     7.1     —

(— marks x values that are not legible in this reproduction.)

(a) Fit a second-order model to the data.
(b) Test for significance of regression and lack of fit.
(c) Test the hypothesis that β₁₁ = 0.

15-14. Discuss the hazards inherent in fitting polynomial models.

15-15. Consider the data in Example 15-12. Test the hypothesis that two different regression models (with
490 Chapter 15 Multiple Regression

different slopes and intercepts) are required to adequately model the data.

15-16. Piecewise Linear Regression (I). Suppose that y is piecewise linearly related to x. That is, different linear relationships are appropriate over the intervals −∞ < x < x* and x* ≤ x < ∞. Show how indicator variables can be used to fit such a piecewise linear regression model, assuming that the point x* is known.

15-17. Piecewise Linear Regression (II). Consider the piecewise linear regression model described in Exercise 15-16. Suppose that at the point x* a discontinuity occurs in the regression function. Show how indicator variables can be used to incorporate the discontinuity into the model.

15-18. Piecewise Linear Regression (III). Consider the piecewise linear regression model described in Exercise 15-16. Suppose that the point x* is not known with certainty and must be estimated. Develop an approach that could be used to fit the piecewise linear regression model.

15-19. Calculate the standardized regression coefficients for the regression model developed in Exercise 15-1.

15-20. Calculate the standardized regression coefficients for the regression model developed in Exercise 15-2.

15-21. Find the variance inflation factors for the regression model developed in Example 15-1. Do they indicate that multicollinearity is a problem in this model?

15-22. Use the National Football League Team Performance data in Exercise 15-5 to build regression models using the following techniques:
(a) All possible regressions.
(b) Stepwise regression.
(c) Forward selection.
(d) Backward elimination.
(e) Comment on the various models obtained.

15-23. Use the gasoline mileage data in Exercise 15-6 to build regression models using the following techniques:
(a) All possible regressions.
(b) Stepwise regression.
(c) Forward selection.
(d) Backward elimination.
(e) Comment on the various models obtained.

15-24. Consider the Hald cement data in Exercise 15-8. Build regression models for the data using the following techniques:
(a) All possible regressions.
(b) Stepwise regression.
(c) Forward selection.
(d) Backward elimination.

15-25. Consider the Hald cement data in Exercise 15-8. Fit a regression model involving all four regressors and find the variance inflation factors. Is multicollinearity a problem in this model? Use ridge regression to estimate the coefficients in this model. Compare the ridge model to the models obtained in Exercise 15-24 using variable selection methods.
Chapter 16

Nonparametric Statistics

16-1 INTRODUCTION
Most of the hypothesis testing and confidence interval procedures in previous chapters are
based on the assumption that we are working with random samples from normal popula-
tions. Fortunately, most of these procedures are relatively insensitive to slight departures
from normality. In general, the t- and F-tests and t confidence intervals will have actual lev-
els of significance or confidence levels that differ from the nominal or advertised levels
chosen by the experimenter, although the difference between the actual and advertised lev-
els is usually fairly small when the underlying population is not too different from the nor-
mal distribution. Traditionally, we have called these procedures parametric methods
because they are based on a particular parametric family of distributions—in this case, the
normal. Alternatively, sometimes we say that these procedures are not distribution free
because they depend on the assumption of normality.
In this chapter we describe procedures called nonparametric or distribution-free methods, which usually make no assumptions about the distribution of the underlying population other than that it is continuous. These procedures have actual level of significance α or confidence level 100(1 − α)% for many different types of distributions. These procedures also have considerable appeal. One of their advantages is that the data need not be quantitative; they could be categorical (such as yes or no, defective or nondefective, etc.) or rank data. Another advantage is that nonparametric procedures are usually very quick and easy to perform.
The procedures described in this chapter are competitors of the parametric t- and
F-procedures described earlier. Consequently, it is important to compare the performance
of both parametric and nonparametric methods under the assumptions of both normal and
nonnormal populations. In general, nonparametric procedures do not utilize all the infor-
mation provided by the sample, and as a result a nonparametric procedure will be less effi-
cient than the corresponding parametric procedure when the underlying population is
normal. This loss of efficiency usually is reflected by a requirement for a larger sample size
for the nonparametric procedure than would be required by the parametric procedure in
order to achieve the same probability of type II error. On the other hand, this loss of effi-
ciency is usually not large, and often the difference in sample size is very small. When the
underlying distributions are not normal, then nonparametric methods have much to offer.
They often provide considerable improvement over the normal-theory parametric methods.

16-2 THE SIGN TEST


16-2.1 A Description of the Sign Test
The sign test is used to test hypotheses about the median μ̃ of a continuous distribution.
Recall that the median of a distribution is a value of the random variable such that the prob-
ability is 0.5 that an observed value of X is less than or equal to the median, and the

491
492 Chapter 16 Nonparametric Statistics

probability is 0.5 that an observed value of X is greater than or equal to the median. That is,
P(X ≤ μ̃) = P(X ≥ μ̃) = 0.5.
Since the normal distribution is symmetric, the mean of a normal distribution equals
the median. Therefore the sign test can be used to test hypotheses about the mean of a nor-
mal distribution. This is the same problem for which we used the t-test in Chapter 11. We
will discuss the relative merits of the two procedures in Section 16-2.4. Note that while the
t-test was designed for samples from a normal distribution, the sign test is appropriate for
samples from any continuous distribution. Thus, the sign test is a nonparametric procedure.
Suppose that the hypotheses are

H₀: μ̃ = μ̃₀,
H₁: μ̃ ≠ μ̃₀.    (16-1)

The test procedure is as follows. Suppose that X₁, X₂, ..., Xₙ is a random sample of n observations from the population of interest. Form the differences (Xᵢ − μ̃₀), i = 1, 2, ..., n. Now if H₀: μ̃ = μ̃₀ is true, any difference Xᵢ − μ̃₀ is equally likely to be positive or negative. Therefore let R⁺ denote the number of these differences (Xᵢ − μ̃₀) that are positive and let R⁻ denote the number of these differences that are negative, and define R = min(R⁺, R⁻).
When the null hypothesis is true, R has a binomial distribution with parameters n and p = 0.5. Therefore, we would find a critical value, say R*_α, from the binomial distribution that ensures that P(type I error) = P(reject H₀ when H₀ is true) = α. A table of these critical values R*_α is given in the Appendix, Table X. If the test statistic R ≤ R*_α, then the null hypothesis H₀: μ̃ = μ̃₀ should be rejected.

Example 16-1
Montgomery, Peck, and Vining (2001) report on a study in which a rocket motor is formed by bind-
ing an igniter propellant and a sustainer propellant together inside a metal housing. The shear strength
of the bond between the two propellant types is an important characteristic. Results of testing 20 ran-
domly selected motors are shown in Table 16-1. We would like to test the hypothesis that the median
shear strength is 2000 psi.
The formal statement of the hypotheses of interest is

H₀: μ̃ = 2000,
H₁: μ̃ ≠ 2000.
The last two columns of Table 16-1 show the differences (Xᵢ − 2000) for i = 1, 2, ..., 20 and the corresponding signs. Note that R⁺ = 14 and R⁻ = 6. Therefore R = min(R⁺, R⁻) = min(14, 6) = 6. From the Appendix, Table X, with n = 20, we find that the critical value for α = 0.05 is R*_0.05 = 5. Therefore, since R = 6 is not less than or equal to the critical value R*_0.05 = 5, we cannot reject the null hypothesis that the median shear strength is 2000 psi.
We note that since R is a binomial random variable, we could test the hypothesis of interest by directly calculating a P-value from the binomial distribution. When H₀: μ̃ = 2000 is true, R has a binomial distribution with parameters n = 20 and p = 0.5. Thus the probability of observing six or fewer negative signs in a sample of 20 observations is

P(R ≤ 6) = Σ_{r=0}^{6} C(20, r)(0.5)^r (0.5)^{20−r} = 0.058,

where C(n, r) denotes the binomial coefficient. Since the P-value is not less than the desired level of significance, we cannot reject the null hypothesis of μ̃ = 2000 psi.
16-2 The Sign Test 493

Table 16-1 Propellant Shear Strength Data

Observation (i)    Shear Strength (Xᵢ)    Difference (Xᵢ − 2000)    Sign
 1                     2158.70                  +158.70              +
 2                     1678.15                  −321.85              −
 3                     2316.00                  +316.00              +
 4                     2061.30                   +61.30              +
 5                     2207.50                  +207.50              +
 6                     1708.30                  −291.70              −
 7                     1784.70                  −215.30              −
 8                     2575.10                  +575.10              +
 9                     2357.90                  +357.90              +
10                     2256.70                  +256.70              +
11                     2165.20                  +165.20              +
12                     2399.55                  +399.55              +
13                     1779.80                  −220.20              −
14                     2336.75                  +336.75              +
15                     1765.30                  −234.70              −
16                     2053.50                   +53.50              +
17                     2414.40                  +414.40              +
18                     2200.50                  +200.50              +
19                     2654.20                  +654.20              +
20                     1753.70                  −246.30              −
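The whole calculation is easy to script. Below is a minimal sketch of Example 16-1 in Python, assuming SciPy is available; it recomputes R and the binomial tail probability from the Table 16-1 data.

    # Sign test for H0: median = 2000 using the Table 16-1 shear strengths.
    from scipy.stats import binom

    shear = [2158.70, 1678.15, 2316.00, 2061.30, 2207.50, 1708.30, 1784.70,
             2575.10, 2357.90, 2256.70, 2165.20, 2399.55, 1779.80, 2336.75,
             1765.30, 2053.50, 2414.40, 2200.50, 2654.20, 1753.70]

    r_plus = sum(x > 2000 for x in shear)    # number of positive differences: 14
    r_minus = sum(x < 2000 for x in shear)   # number of negative differences: 6
    r = min(r_plus, r_minus)                 # R = min(R+, R-) = 6

    # Under H0 the number of negative signs is binomial(n = 20, p = 0.5).
    p_value = binom.cdf(r, 20, 0.5)          # P(R <= 6) = 0.0577, matching the text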

Exact Significance Levels When a test statistic has a discrete distribution, such as R does in the sign test, it may be impossible to choose a critical value R*_α that has a level of significance exactly equal to α. The usual approach is to choose R*_α to yield a level as close to the advertised level α as possible.

Ties in the Sign Test Since the underlying population is assumed to be continuous, it is theoretically impossible to find a "tie," that is, a value of Xᵢ exactly equal to μ̃₀. However, this may sometimes happen in practice because of the way the data are collected. When ties occur, they should be set aside and the sign test applied to the remaining data.

One-Sided Alternative Hypotheses We can also use the sign test when a one-sided alternative hypothesis is appropriate. If the alternative is H₁: μ̃ > μ̃₀, then reject H₀: μ̃ = μ̃₀ if R⁻ ≤ R*_α; if the alternative is H₁: μ̃ < μ̃₀, then reject H₀: μ̃ = μ̃₀ if R⁺ ≤ R*_α. The level of significance of a one-sided test is one-half the value shown in the Appendix, Table X. It is also possible to calculate a P-value from the binomial distribution for the one-sided case.

The Normal Approximation When p = 0.5, the binomial distribution is well approximated by a normal distribution when n is at least 10. Thus, since the mean of the binomial is np and the variance is np(1 − p), the distribution of R is approximately normal, with mean 0.5n and variance 0.25n, whenever n is moderately large. Therefore, in these cases the null hypothesis can be tested with the statistic

Z₀ = (R − 0.5n) / (0.5√n).    (16-2)
494 Chapter 16 Nonparametric Statistics

The two-sided alternative would be rejected if |Z₀| > Z_{α/2}, and the critical regions of the one-sided alternatives would be chosen to reflect the sense of the alternative (if the alternative is H₁: μ̃ > μ̃₀, reject H₀ if Z₀ > Z_α, for example).

16-2.2 The Sign Test for Paired Samples


The sign test can also be applied to paired observations drawn from continuous populations. Let (X₁ⱼ, X₂ⱼ), j = 1, 2, ..., n, be a collection of paired observations from two continuous populations, and let

Dⱼ = X₁ⱼ − X₂ⱼ,   j = 1, 2, ..., n,

be the paired differences. We wish to test the hypothesis that the two populations have a common median, that is, that μ̃₁ = μ̃₂. This is equivalent to testing that the median of the differences μ̃_D = 0. This can be done by applying the sign test to the n differences Dⱼ, as illustrated in the following example.

Example 16-2
An automotive engineer is investigating two different types of metering devices for an electronic fuel
injection system to determine if they differ in their fuel mileage performance. The system is installed
on 12 different cars, and a test is run with each metering system on each car. The observed fuel mileage
performance data, corresponding differences, and their signs are shown in Table 16-2. Note that R⁺ = 8 and R⁻ = 4. Therefore R = min(R⁺, R⁻) = min(8, 4) = 4. From the Appendix, Table X, with n = 12, we find the critical value for α = 0.05 is R*_0.05 = 2. Since R is not less than or equal to the critical value R*_0.05, we cannot reject the null hypothesis that the two metering devices produce the same fuel mileage performance.

16-2.3 Type II Error (8) for the Sign Test


The sign test will control the probability of type I error at an advertised level α for testing the null hypothesis H₀: μ̃ = μ̃₀ for any continuous distribution. As with any hypothesis-testing procedure, it is important to investigate the type II error, β. The test should be able to effectively detect departures from the null hypothesis, and a good measure of this effec-
Table 16-2 Performance of Flow Metering Devices

                  Metering Device
Car            1       2      Difference, Dⱼ    Sign
 1           17.6    16.8          0.8            +
 2           19.4    20.0         −0.6            −
 3           19.5    18.2          1.3            +
 4           17.1    16.4          0.7            +
 5           15.3    16.0         −0.7            −
 6           15.9    15.4          0.5            +
 7           16.3    16.5         −0.2            −
 8           18.4    18.0          0.4            +
 9           17.3    16.4          0.9            +
10           19.1    20.1         −1.0            −
11           17.8    16.7          1.1            +
12           18.2    17.9          0.3            +
16-2 The Sign Test 495

tiveness is the value of β for departures that are important. A small value of β implies an effective test procedure.
In determining β, it is important to realize that not only must a particular value of μ̃, say μ̃₀ + Δ, be used; the form of the underlying distribution will also affect the calculations. To illustrate, suppose that the underlying distribution is normal with σ = 1 and we are testing the hypothesis that μ̃ = 2 (since μ̃ = μ in the normal distribution, this is equivalent to testing that the mean equals 2). It is important to detect a departure from μ̃ = 2 to μ̃ = 3. The situation is illustrated graphically in Fig. 16-1a. When the alternative hypothesis is true (H₁: μ̃ = 3), the probability that the random variable X exceeds the value 2 is

p = P(X > 2) = P(Z > −1) = 1 − Φ(−1) = 0.8413.

Suppose we have taken a sample of size 12. At the α = 0.05 level, Appendix Table X indicates that we would reject H₀: μ̃ = 2 if R ≤ R*_0.05 = 2. Therefore, the β error is the probability that we do not reject H₀: μ̃ = 2 when in fact μ̃ = 3, or

β = 1 − Σ_{r=0}^{2} C(12, r)(0.1587)^r (0.8413)^{12−r} = 0.2944.


If the distribution of X had been exponential rather than normal, then the situation would be as shown in Fig. 16-1b, and the probability that the random variable X exceeds the value x = 2 when μ̃ = 3 (note that when the median of an exponential distribution is 3, the mean is 4.33) is

p = P(X > 2) = ∫₂^∞ (1/4.33) e^{−x/4.33} dx = 0.6301.

Figure 16-1 Calculation of β for the sign test: (a) normal distributions (σ = 1), under H₀: μ̃ = 2 and under H₁: μ̃ = 3; (b) exponential distributions, under H₀: μ̃ = 2 (μ = 2.89) and under H₁: μ̃ = 3 (μ = 4.33).
496 Chapter 16 Nonparametric Statistics

The β error in this case is

β = 1 − Σ_{r=0}^{2} C(12, r)(0.3699)^r (0.6301)^{12−r} = 0.8794.

Thus, the β error for the sign test depends not only on the alternative value of μ̃ but also on the area to the right of the value specified in the null hypothesis under the population probability distribution. This area is highly dependent on the shape of that particular probability distribution.
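Both β calculations can be reproduced directly from the binomial distribution. The sketch below, assuming SciPy, mirrors the two cases of Fig. 16-1.

    from math import exp, log
    from scipy.stats import binom, norm

    n, r_crit = 12, 2                        # sample size and critical value R*_0.05

    # Normal population (sigma = 1), true median 3: P(X > 2) = 0.8413
    p_normal = 1 - norm.cdf(-1)
    beta_normal = 1 - binom.cdf(r_crit, n, 1 - p_normal)   # 0.2944

    # Exponential population with median 3 (lambda = ln 2 / 3): P(X > 2) = 0.6301
    p_expon = exp(-2 * log(2) / 3)
    beta_expon = 1 - binom.cdf(r_crit, n, 1 - p_expon)     # about 0.88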

16-2.4 Comparison of the Sign Test and the ¢-Test


If the underlying population is normal, then either the sign test or the t-test could be used to test H₀: μ̃ = μ̃₀. The t-test is known to have the smallest value of β possible among all tests that have significance level α, so it is superior to the sign test in the normal distribution case. When the population distribution is symmetric and nonnormal (but with finite mean μ = μ̃), then the t-test will have a β error that is smaller than β for the sign test, unless the distribution has very heavy tails compared with the normal. Thus, the sign test is usually considered a test procedure for the median rather than a serious competitor for the t-test. The Wilcoxon signed rank test in the next section is preferable to the sign test and compares well with the t-test for symmetric distributions.

16-3 THE WILCOXON SIGNED RANK TEST


Suppose that we are willing to assume that the population of interest is continuous and symmetric. As in the previous section, our interest focuses on the median μ̃ (or equivalently, the mean μ, since μ̃ = μ for symmetric distributions). A disadvantage of the sign test in this situation is that it considers only the signs of the deviations Xᵢ − μ̃₀ and not their magnitudes. The Wilcoxon signed rank test is designed to overcome that disadvantage.

16-3.1 A Description of the Test


We are interested in testing H₀: μ = μ₀ against the usual alternatives. Assume that X₁, X₂, ..., Xₙ is a random sample from a continuous and symmetric distribution with mean (and median) μ. Compute the differences Xᵢ − μ₀, i = 1, 2, ..., n. Rank the absolute differences |Xᵢ − μ₀|, i = 1, 2, ..., n, in ascending order, and then give the ranks the signs of their corresponding differences. Let R⁺ be the sum of the positive ranks and R⁻ be the absolute value of the sum of the negative ranks, and let R = min(R⁺, R⁻). Appendix Table XI contains critical values of R, say R*_α. If the alternative hypothesis is H₁: μ ≠ μ₀, then if R ≤ R*_α the null hypothesis H₀: μ = μ₀ is rejected.
For one-sided tests, if the alternative is H₁: μ > μ₀, reject H₀: μ = μ₀ if R⁻ ≤ R*_α; and if the alternative is H₁: μ < μ₀, reject H₀: μ = μ₀ if R⁺ ≤ R*_α. The significance level for one-sided tests is one-half the advertised level in Appendix Table XI.

Example 16-3
To illustrate the Wilcoxon signed rank test, consider the propellant shear strength data presented in Table 16-1. The signed ranks are
16-3 The Wilcoxon Signed Rank Test 497

Observation    Difference, Xᵢ − 2000    Signed Rank
16                   +53.50                  +1
 4                   +61.30                  +2
 1                  +158.70                  +3
11                  +165.20                  +4
18                  +200.50                  +5
 5                  +207.50                  +6
 7                  −215.30                  −7
13                  −220.20                  −8
15                  −234.70                  −9
20                  −246.30                 −10
10                  +256.70                 +11
 6                  −291.70                 −12
 3                  +316.00                 +13
 2                  −321.85                 −14
14                  +336.75                 +15
 9                  +357.90                 +16
12                  +399.55                 +17
17                  +414.40                 +18
 8                  +575.10                 +19
19                  +654.20                 +20

The sum of the positive ranks is R⁺ = (1 + 2 + 3 + 4 + 5 + 6 + 11 + 13 + 15 + 16 + 17 + 18 + 19 + 20) = 150, and the sum of the negative ranks is R⁻ = (7 + 8 + 9 + 10 + 12 + 14) = 60. Therefore R = min(R⁺, R⁻) = min(150, 60) = 60. From Appendix Table XI, with n = 20 and α = 0.05, we find the critical value R*_0.05 = 52. Since R exceeds R*_0.05, we cannot reject the null hypothesis that the mean (or median, since the populations are assumed to be symmetric) shear strength is 2000 psi.

Ties in the Wilcoxon Signed Rank Test


Because the underlying population is continuous, ties are theoretically impossible, although
they will sometimes occur in practice. If several observations have the same absolute mag-
nitude, they are assigned the average of the ranks that they would receive if they differed
slightly from one another.

16-3.2 A Large-Sample Approximation


If the sample size is moderately large, say n > 20, then it can be shown that R has approximately a normal distribution with mean

μ_R = n(n + 1)/4

and variance

σ²_R = n(n + 1)(2n + 1)/24.
498 Chapter 16 Nonparametric Statistics

Therefore, a test of H₀: μ = μ₀ can be based on the statistic

Z₀ = (R − n(n + 1)/4) / √[n(n + 1)(2n + 1)/24].    (16-3)

An appropriate critical region can be chosen from a table of the standard normal distribution.
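A minimal sketch of this large-sample test, assuming nothing beyond the standard library:

    from math import sqrt

    def signed_rank_z(r, n):
        # Equation 16-3: standardize R by its null mean and variance
        mu_r = n * (n + 1) / 4
        sigma_r = sqrt(n * (n + 1) * (2 * n + 1) / 24)
        return (r - mu_r) / sigma_r

For the data of Example 16-3 (R = 60, n = 20), signed_rank_z(60, 20) is about −1.68, consistent with not rejecting H₀ at α = 0.05.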

16-3.3 Paired Observations


The Wilcoxon signed rank test can be applied to paired data. Let (X₁ⱼ, X₂ⱼ), j = 1, 2, ..., n, be a collection of paired observations from continuous distributions that differ only with respect to their means (it is not necessary that the distributions of X₁ and X₂ be symmetric). This assures that the distribution of the differences Dⱼ = X₁ⱼ − X₂ⱼ is continuous and symmetric.
To use the Wilcoxon signed rank test, the differences are first ranked in ascending order of their absolute values, and then the ranks are given the signs of the differences. Ties are assigned average ranks. Let R⁺ be the sum of the positive ranks and R⁻ be the absolute value of the sum of the negative ranks, and let R = min(R⁺, R⁻). We reject the hypothesis of equality of means if R ≤ R*_α, where R*_α is chosen from Appendix Table XI.
For one-sided tests, if the alternative is H₁: μ₁ > μ₂ (or H₁: μ_D > 0), reject H₀ if R⁻ ≤ R*_α; and if H₁: μ₁ < μ₂ (or H₁: μ_D < 0), reject H₀ if R⁺ ≤ R*_α. Note that the significance level for one-sided tests is one-half the value given in Table XI.

Example 16-4
Consider the fuel metering device data examined in Example 16-2. The signed ranks are shown
below.

Car    Difference    Signed Rank
 7       −0.2            −1
12        0.3            +2
 8        0.4            +3
 6        0.5            +4
 2       −0.6            −5
 4        0.7            +6.5
 5       −0.7            −6.5
 1        0.8            +8
 9        0.9            +9
10       −1.0           −10
11        1.1           +11
 3        1.3           +12

Note that R⁺ = 55.5 and R⁻ = 22.5; therefore, R = min(R⁺, R⁻) = min(55.5, 22.5) = 22.5. From Appendix Table XI, with n = 12 and α = 0.05, we find the critical value R*_0.05 = 13. Since R exceeds R*_0.05, we cannot reject the null hypothesis that the two metering devices produce the same mileage performance.
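The same result follows from applying SciPy's signed rank routine to the paired differences of Table 16-2 (a sketch, assuming SciPy's average-rank handling of the tied magnitudes matches the convention used here):

    from scipy.stats import wilcoxon

    d = [0.8, -0.6, 1.3, 0.7, -0.7, 0.5, -0.2, 0.4, 0.9, -1.0, 1.1, 0.3]
    stat, p_value = wilcoxon(d)   # stat = 22.5 = min(R+, R-) = R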
16-4 The Wilcoxon Rank-Sum Test 499

16-3.4 Comparison with the ¢-Test


When the underlying population is normal, either the t-test or the Wilcoxon signed rank test can be used to test hypotheses about μ. The t-test is the best test in such situations in the sense that it produces a minimum value of β for all tests with significance level α. However, since it is not always clear that the normal distribution is appropriate, and since there are many situations in which we know it to be inappropriate, it is of interest to compare the two procedures for both normal and nonnormal populations.
Unfortunately, such a comparison is not easy. The problem is that β for the Wilcoxon signed rank test is very difficult to obtain, and β for the t-test is difficult to obtain for nonnormal distributions. Because type II error comparisons are difficult, other measures of
comparison have been developed. One widely used measure is asymptotic relative effi-
ciency (ARE). The ARE of one test relative to another is the limiting ratio of the sample
sizes necessary to obtain identical error probabilities for the two procedures. For example,
if the ARE of one test relative to a competitor is 0.5, then when sample sizes are large, the
first test will require a sample twice as large as the second one to obtain similar error per-
formance. While this does not tell us anything for small sample sizes, we can say the
following:
1. For normal populations the ARE of the Wilcoxon signed rank test relative to the
t-test is approximately 0.95.
2. For nonnormal populations, the ARE is at least 0.86, and in many cases it will
exceed unity. When it exceeds unity, the Wilcoxon signed rank test requires a
smaller sample size than does the t-test.
Although these are large-sample results, we generally conclude that the Wilcoxon signed rank test will never be much worse than the t-test, and in many cases where the population is nonnormal it may be superior. Thus the Wilcoxon signed rank test is a useful alternative to the t-test.

16-4 THE WILCOXON RANK-SUM TEST


Suppose that we have two independent continuous populations X₁ and X₂ with means μ₁ and μ₂. The distributions of X₁ and X₂ have the same shape and spread and differ only (possibly) in their means. The Wilcoxon rank-sum test can be used to test the hypothesis H₀: μ₁ = μ₂. Sometimes this procedure is called the Mann-Whitney test, although the Mann-Whitney test statistic is usually expressed in a different form.

16-4.1 A Description of the Test


Let X₁₁, X₁₂, ..., X₁,n₁ and X₂₁, X₂₂, ..., X₂,n₂ be two independent random samples from the continuous populations X₁ and X₂ described earlier. We assume that n₁ ≤ n₂. Arrange all n₁ + n₂ observations in ascending order of magnitude and assign ranks to them. If two or more observations are tied (identical), then use the mean of the ranks that would have been assigned if the observations differed. Let R₁ be the sum of the ranks in the smaller X₁ sample, and define

R₂ = (n₁ + n₂)(n₁ + n₂ + 1)/2 − R₁.    (16-4)

Now if the two means do not differ, we would expect the sum of the ranks to be nearly equal
for both samples. Consequently, if the sums of the ranks differ greatly, we would conclude
that the means are not equal.
500 Chapter 16 Nonparametric Statistics

Appendix Table IX contains the critical values R*_α of the rank sums for α = 0.05 and α = 0.01. Refer to Appendix Table IX with the appropriate sample sizes n₁ and n₂. The null hypothesis H₀: μ₁ = μ₂ is rejected in favor of H₁: μ₁ ≠ μ₂ if either R₁ or R₂ is less than or equal to the tabulated critical value R*_α.
The procedure can also be used for one-sided alternatives. If the alternative is H₁: μ₁ < μ₂, then reject H₀ if R₁ ≤ R*_α; while for H₁: μ₁ > μ₂, reject H₀ if R₂ ≤ R*_α. For these one-sided tests the tabulated critical values R*_α correspond to levels of significance of α = 0.025 and α = 0.005.

Example 16-5
The mean axial stress in tensile members used in an aircraft structure is being studied. Two alloys are
being investigated. Alloy 1 is a traditional material and alloy 2 is a new aluminum-lithium alloy that
is much lighter than the standard material. Ten specimens of each alloy type are tested, and the axial
stress measured. The sample data are assembled in the following table:

Alloy 1 Alloy 2

3238 psi 3254 psi 3261 psi 3248 psi


3195 3229 3187 3215
3246 3225 3209 3226
3190 3217 3212 3240
3204 3241 3258 3234

The data are arranged in ascending order and ranked as follows:

Alloy Number    Axial Stress    Rank
2               3187 psi          1
1               3190              2
1               3195              3
1               3204              4
2               3209              5
2               3212              6
2               3215              7
1               3217              8
1               3225              9
2               3226             10
1               3229             11
2               3234             12
1               3238             13
2               3240             14
1               3241             15
1               3246             16
2               3248             17
1               3254             18
2               3258             19
2               3261             20
16-5 Nonparametric Methods in the Analysis of Variance 501

The sum of the ranks for alloy 1 is

R₁ = 2 + 3 + 4 + 8 + 9 + 11 + 13 + 15 + 16 + 18 = 99,

and for alloy 2,

R₂ = (n₁ + n₂)(n₁ + n₂ + 1)/2 − R₁ = 20(21)/2 − 99 = 111.

From Appendix Table IX, with n₁ = n₂ = 10 and α = 0.05, we find that R*_0.05 = 78. Since neither R₁ nor R₂ is less than or equal to R*_0.05, we cannot reject the hypothesis that both alloys exhibit the same mean axial stress.

16-4.2 A Large-Sample Approximation


When both n₁ and n₂ are moderately large, say greater than 8, the distribution of R₁ can be well approximated by the normal distribution with mean

μ_{R₁} = n₁(n₁ + n₂ + 1)/2

and variance

σ²_{R₁} = n₁n₂(n₁ + n₂ + 1)/12.

Therefore, for n₁ and n₂ > 8 we could use

Z₀ = (R₁ − μ_{R₁}) / σ_{R₁}    (16-5)

as a test statistic, and the appropriate critical region is |Z₀| > Z_{α/2}, Z₀ > Z_α, or Z₀ < −Z_α, depending on whether the test is a two-tailed, upper-tail, or lower-tail test.

16-4.3 Comparison with the ¢-Test


In Section 16-3.4 we discussed the comparison of the t-test with the Wilcoxon signed rank test. The results for the two-sample problem are identical to the one-sample case; that is, when the normality assumption is correct, the Wilcoxon rank-sum test is approximately 95% as efficient as the t-test in large samples. On the other hand, regardless of the form of the distributions, the Wilcoxon rank-sum test will always be at least 86% as efficient. The efficiency of the Wilcoxon test relative to the t-test is usually high if the underlying distribution has heavier tails than the normal, because the behavior of the t-test is very dependent on the sample mean, which is quite unstable in heavy-tailed distributions.

16-5 NONPARAMETRIC METHODS IN THE ANALYSIS OF VARIANCE

16-5.1 The Kruskal-Wallis Test

The single-factor analysis of variance model developed in Chapter 12 for comparing a population means is

yᵢⱼ = μ + τᵢ + εᵢⱼ,   i = 1, 2, ..., a,   j = 1, 2, ..., nᵢ.    (16-6)

502 Chapter 16 Nonparametric Statistics

In this model the error terms εᵢⱼ are assumed to be normally and independently distributed with mean zero and variance σ². The assumption of normality led directly to the F-test described in Chapter 12. The Kruskal-Wallis test is a nonparametric alternative to the F-test; it requires only that the εᵢⱼ have the same continuous distribution for all treatments i = 1, 2, ..., a.
Suppose that N = Σᵢ nᵢ is the total number of observations. Rank all N observations from smallest to largest and assign the smallest observation rank 1, the next smallest rank 2, ..., and the largest observation rank N. If the null hypothesis

H₀: μ₁ = μ₂ = ··· = μ_a

is true, the N observations come from the same distribution, and all possible assignments of the N ranks to the a samples are equally likely; then we would expect the ranks 1, 2, ..., N to be mixed throughout the a samples. If, however, the null hypothesis H₀ is false, then some samples will consist of observations having predominantly small ranks while other samples will consist of observations having predominantly large ranks. Let Rᵢⱼ be the rank of observation yᵢⱼ, and let Rᵢ. and R̄ᵢ. denote the total and average of the nᵢ ranks in the ith treatment. When the null hypothesis is true, then

E(Rᵢⱼ) = (N + 1)/2

and

E(R̄ᵢ.) = (1/nᵢ) Σⱼ E(Rᵢⱼ) = (N + 1)/2.

The Kruskal-Wallis test statistic measures the degree to which the actual observed average ranks R̄ᵢ. differ from their expected value (N + 1)/2. If this difference is large, then the null hypothesis H₀ is rejected. The test statistic is

K = [12 / (N(N + 1))] Σ_{i=1}^{a} nᵢ (R̄ᵢ. − (N + 1)/2)².    (16-7)

An alternative computing formula is

K = [12 / (N(N + 1))] Σ_{i=1}^{a} Rᵢ.²/nᵢ − 3(N + 1).    (16-8)

We would usually prefer equation 16-8 to equation 16-7, as it involves the rank totals rather than the averages.
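Equation 16-8 translates into a few lines of code. The sketch below, assuming SciPy only for its average-rank helper, computes K for a list of samples (with heavy ties, equation 16-9 should be used instead).

    from scipy.stats import rankdata

    def kruskal_wallis_k(samples):
        pooled = [x for s in samples for x in s]
        n_total = len(pooled)
        ranks = rankdata(pooled)          # assigns average ranks to ties
        total, start = 0.0, 0
        for s in samples:
            r_i = sum(ranks[start:start + len(s)])   # rank total R_i. for treatment i
            total += r_i ** 2 / len(s)
            start += len(s)
        return 12.0 / (n_total * (n_total + 1)) * total - 3 * (n_total + 1)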
The null hypothesis H₀ should be rejected if the sample data generate a large value for K. The null distribution for K has been obtained by using the fact that under H₀ each possible assignment of ranks to the a treatments is equally likely. Thus we could enumerate all possible assignments and count the number of times each value of K occurs. This has led to tables of the critical values of K, although most tables are restricted to small sample sizes nᵢ. In practice, we usually employ the following large-sample approximation: whenever H₀ is true and either

a = 3 and nᵢ ≥ 6 for i = 1, 2, 3

or

a > 3 and nᵢ ≥ 5 for i = 1, 2, ..., a,
16-5 Nonparametric Methods in the Analysis of Variance 503

then K has approximately a chi-square distribution with a − 1 degrees of freedom. Since large values of K imply that H₀ is false, we would reject H₀ if

K ≥ χ²_{α, a−1}.

The test has approximate significance level α.

Ties in the Kruskal-Wallis Test When observations are tied, assign an average rank to each of the tied observations. When there are ties, we should replace the test statistic in equation 16-8 with

K = (1/S²) [Σ_{i=1}^{a} Rᵢ.²/nᵢ − N(N + 1)²/4],    (16-9)

where nᵢ is the number of observations in the ith treatment, N is the total number of observations, and

S² = [1/(N − 1)] [Σ_{i=1}^{a} Σ_{j=1}^{nᵢ} Rᵢⱼ² − N(N + 1)²/4].    (16-10)

Note that S² is just the variance of the ranks. When the number of ties is moderate, there will be little difference between equations 16-8 and 16-9, and the simpler form (equation 16-8) may be used.

Example 16-6
In Design and Analysis of Experiments, 5th Edition (John Wiley & Sons, 2001), D. C. Montgomery presents data from an experiment in which five different levels of cotton content in a synthetic fiber were tested to determine if cotton content has any effect on fiber tensile strength. The sample data and ranks from this experiment are shown in Table 16-3. Since there is a fairly large number of ties, we use equation 16-9 as the test statistic. From equation 16-10 we find

S² = [1/(N − 1)] [Σᵢ Σⱼ Rᵢⱼ² − N(N + 1)²/4] = (1/24)[5497.79 − 25(26)²/4] = 53.03,

Table 16-3 Data and Ranks for the Tensile Testing Experiment

Percentage of Cotton
15    20    25    30    35
[The table entries are not legible in this reproduction.]
504 Chapter 16 Nonparametric Statistics

and the test statistic is computed from equation 16-9. Since K > χ²_{0.01,4} = 13.28, we would reject the null hypothesis and conclude that treatments differ. This is the same conclusion given by the usual analysis of variance F-test.

16-5.2. The Rank Transformation


The procedure used in the previous section of replacing the observations by their ranks is called the rank transformation. It is a very powerful and widely useful technique. If we were to apply the ordinary F-test to the ranks rather than to the original data, we would obtain

F₀ = [K/(a − 1)] / [(N − 1 − K)/(N − a)]

as the test statistic. Note that as the Kruskal-Wallis statistic K increases or decreases, F₀ also increases or decreases, so the Kruskal-Wallis test is nearly equivalent to applying the usual analysis of variance to the ranks.
The rank transformation has wide applicability in experimental design problems for
which no nonparametric alternative to the analysis of variance exists. If the data are ranked
and the ordinary F-test applied, an approximate procedure results, but one that has good sta-
tistical properties. When we are concerned about the normality assumption or the effect of
outliers or “wild” values, we recommend that the usual analysis of variance be performed
on both the original data and the ranks. When both procedures give similar results, the
analysis of variance assumptions are probably satisfied reasonably well, and the standard
analysis is satisfactory. When the two procedures differ, the rank transformation should be
preferred since it is less likely to be distorted by nonnormality and unusual observations. In
such cases, the experimenter may want to investigate the use of transformations for non-
normality and examine the data and the experimental procedure to determine whether out-
liers are present and if so, why they have occurred.
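The near-equivalence of the two procedures is easy to express as code; the sketch below converts a Kruskal-Wallis statistic K into the F statistic that the ordinary analysis of variance would produce on the ranks, for a treatments and N total observations.

    def rank_transform_f(k, a, n_total):
        # F0 = [K / (a - 1)] / [(N - 1 - K) / (N - a)]
        return (k / (a - 1)) / ((n_total - 1 - k) / (n_total - a))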

16-6 SUMMARY
This chapter has introduced nonparametric or distribution-free statistical methods. These
procedures are alternatives to the usual parametric t- and F-tests when the normality
assumption for the underlying population is not satisfied. The sign test can be used to test
hypotheses about the median of a continuous distribution. It can also be applied to paired
observations. The Wilcoxon signed rank test can be used to test hypotheses about the mean
of a symmetric continuous distribution. It can also be applied to paired observations. The
Wilcoxon signed rank test is a good alternative to the t-test. The two-sample hypothesis-
testing problem on means of continuous symmetric distributions is approached using the
Wilcoxon rank-sum test. This procedure compares very favorably with the two-sample
t-test. The Kruskal-Wallis test is a useful alternative to the F-test in the analysis of variance.
16-7 Exercises 505

16-7 EXERCISES
16-1. Ten samples were taken from a plating bath used in an electronics manufacturing process, and the bath pH determined. The sample pH values are shown below:

7.91, 7.85, 6.82, 8.01, 7.46, 6.95, 7.05, 7.35, 7.25, 7.42.

Manufacturing engineering believes that pH has a median value of 7.0. Do the sample data indicate that this statement is correct? Use the sign test to investigate this hypothesis.

16-2. The titanium content in an aircraft-grade alloy is an important determinant of strength. A sample of 20 test coupons reveals the following titanium contents (in percent):

8.32, 8.05, 8.93, 8.65, 8.25, 8.46, 8.52, 8.35, 8.36, 8.41, 8.42, 8.30, 8.71, 8.75, 8.60, 8.83, 8.50, 8.38, 8.29, 8.46.

The median titanium content should be 8.5%. Use the sign test to investigate this hypothesis.

16-3. The distribution of the time between arrivals in a telecommunication system is exponential, and the system manager wishes to test the hypothesis H₀: μ̃ = 3.5 min versus H₁: μ̃ > 3.5 min.
(a) What is the value of the mean of the exponential distribution under H₀: μ̃ = 3.5?
(b) Suppose that we have taken a sample of n = 10 observations and we observe R = 3. Would the sign test reject H₀ at α = 0.05?
(c) What is the type II error of this test if μ̃ = 4.5?

16-4. Suppose that we take a sample of n = 10 measurements from a normal distribution with σ = 1. We wish to test H₀: μ = 0 against H₁: μ > 0. The normal test statistic is Z₀ = (X̄ − μ₀)/(σ/√n), and we decide to use a critical region of 1.96 (that is, reject H₀ if Z₀ ≥ 1.96).
(a) What is α for this test?
(b) What is β for this test if μ = 1?
(c) If a sign test is used, specify the critical region that gives an α value consistent with α for the normal test.
(d) What is the β value for the sign test if μ = 1? Compare this with the result obtained in part (b).

16-5. Two different types of tips can be used in a Rockwell hardness tester. Eight coupons from test ingots of a nickel-based alloy are selected, and each coupon is tested twice, once with each tip. The Rockwell C-scale hardness readings are shown next. Use the sign test to determine whether or not the two tips produce equivalent hardness readings.

Coupon    Tip 1    Tip 2
1          63       60
2          52       51
3          58       56
4          60       59
5          55       58
6          57       54
7          53       52
8          59       61

16-6. Testing for Trends. A turbocharger wheel is manufactured using an investment casting process. The shaft fits into the wheel opening, and this wheel opening is a critical dimension. As wheel wax patterns are formed, the hard tool producing the wax patterns wears. This may cause growth in the wheel-opening dimension. Ten wheel-opening measurements, in time order of production, are shown below:

4.00 (mm), 4.02, 4.03, 4.01, 4.00, 4.03, 4.04, 4.02, 4.03, 4.03.

(a) Suppose that p is the probability that observation X_{i+5} exceeds observation Xᵢ. If there is no upward or downward trend, then X_{i+5} is no more or less likely to exceed Xᵢ than to lie below Xᵢ. What is the value of p?
(b) Let V be the number of values of i for which X_{i+5} > Xᵢ. If there is no upward or downward trend in the measurements, what is the probability distribution of V?
(c) Use the data above and the results of parts (a) and (b) to test H₀: there is no trend, versus H₁: there is upward trend. Use α = 0.05.
Note that this test is a modification of the sign test. It was developed by Cox and Stuart.

16-7. Consider the Wilcoxon signed rank test, and suppose that n = 5. Assume that H₀: μ = μ₀ is true.
(a) How many different sequences of signed ranks are possible? Enumerate these sequences.
(b) How many different values of R⁺ are there? Find the probability associated with each value of R⁺.
(c) Suppose that we define the critical region of the test to be R*_α such that we would reject if R⁺ > R*_α, and R*_α = 13. What is the approximate α level of this test?
(d) Can you see from this exercise how the critical values for the Wilcoxon signed rank test were developed? Explain.
506 Chapter 16 Nonparametric Statistics

16-8. Consider the data in Exercise 16-1, and assume that the distribution of pH is symmetric and continuous. Use the Wilcoxon signed rank test to test the hypothesis H₀: μ = 7 against H₁: μ ≠ 7.

16-9. Consider the data in Exercise 16-2. Suppose that the distribution of titanium content is symmetric and continuous. Use the Wilcoxon signed rank test to test the hypotheses H₀: μ = 8.5 versus H₁: μ ≠ 8.5.

16-10. Consider the data in Exercise 16-2. Use the large-sample approximation for the Wilcoxon signed rank test to test the hypotheses H₀: μ = 8.5 versus H₁: μ ≠ 8.5. Assume that the distribution of titanium content is continuous and symmetric.

16-11. For the large-sample approximation to the Wilcoxon signed rank test, derive the mean and standard deviation of the test statistic used in the procedure.

16-12. Consider the Rockwell hardness test data in Exercise 16-5. Assume that both distributions are continuous and use the Wilcoxon signed rank test to test that the mean difference in hardness readings between the two tips is zero.

16-13. An electrical engineer must design a circuit to deliver the maximum amount of current to a display tube to achieve sufficient image brightness. Within his allowable design constraints, he has developed two candidate circuits and tests prototypes of each. The resulting data (in microamperes) are shown below:

Circuit 1: 251, 255, 258, 257, 250, 251, 254, 250, 248
Circuit 2: 250, 253, 249, 256, 259, 252, 260, 251

Use the Wilcoxon rank-sum test to test H₀: μ₁ = μ₂ against the alternative H₁: μ₁ > μ₂.

16-14. A consultant frequently travels from Phoenix, Arizona, to Los Angeles, California. He will use one of two airlines, United or Southwest. The number of minutes that his flight arrived late for the last six trips on each airline is shown below. Is there evidence that either airline has superior on-time arrival performance?

United: [values illegible in this reproduction] (minutes late)
Southwest: 20, 48, 8, −3, 5, [one value illegible] (minutes late)

16-15. The manufacturer of a hot tub is interested in testing two different heating elements for his product. The element that produces the maximum heat gain after 15 minutes would be preferable. He obtains 10 samples of each heating unit and tests each one. The heat gain after 15 minutes (in °F) is shown below. Is there any reason to suspect that one unit is superior to the other?

Unit 1: 25, 27, 29, 31, 30, 26, 24, 32, 33, 38
Unit 2: 31, 33, 32, 35, 34, 29, 38, 35, 37, 30

16-16. In Design and Analysis of Experiments, 5th Edition (John Wiley & Sons, 2001), D. C. Montgomery presents the results of an experiment to compare four different mixing techniques on the tensile strength of portland cement. The results are shown below. Is there any indication that mixing technique affects the strength?

Mixing Technique    Tensile Strength (lb/in.²)
1                   3129   3000   2865   2890
2                   3200   3300   2975   3150
3                   2800   2900   2985   3050
4                   2600   2700   2600   2765

16-17. An article in the Quality Control Handbook, 3rd Edition (McGraw-Hill, 1962) presents the results of an experiment performed to investigate the effect of three different conditioning methods on the breaking strength of cement briquettes. The data are shown below. Is there any indication that conditioning method affects breaking strength?

Conditioning Method    Breaking Strength (lb/in.²)
1                      553, [remaining values illegible in this reproduction]
2                      [values illegible in this reproduction]
3                      [values illegible in this reproduction]

16-18. In Statistics for Research (John Wiley & Sons, 1983), S. Dowdy and S. Wearden present the results of an experiment to measure stress resulting from operating hand-held chain saws. The experimenters measured the kickback angle through which the saw is deflected when it begins to cut a 3-inch stock synthetic board. Shown below are deflection angles for five saws chosen at random from each of four different manufacturers. Is there any evidence that the manufacturers' products differ with respect to kickback angle?

Manufacturer    Kickback Angle
A               [values illegible in this reproduction]
B               28   50   44   32   41
C               57   45   48   41   54
D               29   40   22   34   30
Chapter 17

Statistical Quality Control and


Reliability Engineering

The quality of the products and services used by our society has become a major consumer
decision factor in many, if not most, businesses today. Regardless of whether the consumer
is an individual, a corporation, a military defense program, or a retail store, the consumer
is likely to consider quality of equal importance to cost and schedule. Consequently, qual-
ity improvement has become a major concern of many U.S. corporations. This chapter is
about statistical quality control and reliability engineering methods, two sets of tools that
are essential in quality-improvement activities.

17-1 QUALITY IMPROVEMENT AND STATISTICS


Quality means fitness for use. For example, we may purchase automobiles that we expect
to be free of manufacturing defects and that should provide reliable and economical trans-
portation, a retailer buys finished goods with the expectation that they are properly pack-
aged and arranged for easy storage and display, or a manufacturer buys raw material and
expects to process it with minimal rework or scrap. In other words, all consumers expect
that the products and services they buy will meet their requirements, and those requirements
define fitness for use.
Quality, or fitness for use, is determined through the interaction of quality of design and
quality of conformance. By quality of design we mean the different grades or levels of per-
formance, reliability, serviceability, and function that are the result of deliberate engineer-
ing and management decisions. By quality of conformance, we mean the systematic
reduction of variability and elimination of defects until every unit produced is identical and
defect free.
There is some confusion in our society about quality improvement; some people still
think that it means gold plating a product or spending more money to develop a product or
process. This thinking is wrong. Quality improvement means the systematic elimination of
waste. Examples of waste include scrap and rework in manufacturing, inspection and test,
errors on documents (such as engineering drawings, checks, purchase orders, and plans),
customer complaint hotlines, warranty costs, and the time required to do things over again
that could have been done right the first time. A successful quality-improvement effort can
eliminate much of this waste and lead to lower costs, higher productivity, increased cus-
tomer satisfaction, increased business reputation, higher market share, and ultimately
higher profits for the company.

507
508 Chapter 17 Statistical Quality Control and Reliability Engineering

Statistical methods play a vital role in quality improvement. Some applications include
the following:
1. In product design and development, statistical methods, including designed exper-
iments, can be used to compare different materials and different components or
ingredients, and to help in both system and component tolerance determination.
This can significantly lower development costs and reduce development time.
2. Statistical methods can be used to determine the capability of a manufacturing
process. Statistical process control can be used to systematically improve a process
by reduction of variability.
3. Experiment design methods can be used to investigate improvements in the process.
These improvements can lead to higher yields and lower manufacturing costs.
4. Life testing provides reliability and other performance data about the product. This
can lead to new and improved designs and products that have longer useful lives and
lower operating and maintenance costs.
Some of these applications have been illustrated in earlier chapters of this book. It is essen-
tial that engineers and managers have an in-depth understanding of these statistical tools in
any industry or business that wants to be the high-quality, low-cost producer. In this chap-
ter we give an introduction to the basic methods of statistical quality control and reliability
engineering that, along with experimental design, form the basis of a successful quality-
improvement effort.

17-2 STATISTICAL QUALITY CONTROL


The field of statistical quality control can be broadly defined as consisting of those statisti¢al
and engineering methods useful in the measurement, monitoring, control, and improvement
of quality. In this chapter, a somewhat more narrow definition is employed. We will define
Statistical quality control as the statistical and engineering methods for process control.
Statistical quality control is a relatively new field, dating back to the 1920s. Dr. Walter A. Shewhart of the Bell Telephone Laboratories was one of the early pioneers of the field. In 1924, he wrote a memorandum showing a modern control chart, one of the basic tools of statistical process control. Harold F. Dodge and Harry G. Romig, two other Bell System employees, provided much of the leadership in the development of statistically based sampling and inspection methods. The work of these three men forms the basis of the modern
field of statistical quality control. World War II saw the widespread introduction of these
methods to U.S. industry. Dr. W. Edwards Deming and Dr. Joseph M. Juran have been
instrumental in spreading statistical quality-control methods since World War II.
The Japanese have been particularly successful in deploying statistical quality-control
methods and have used statistical methods to gain significant advantage relative to their
competitors. In the 1970s American industry suffered extensively from Japanese (and other
foreign) competition, and that has led, in turn, to renewed interest in statistical quality-
control methods in the United States. Much of this interest focuses on statistical process
control and experimental design. Many U.S. companies have begun extensive programs to
implement these methods into their manufacturing, engineering, and other business
organizations.

17-3. STATISTICAL PROCESS CONTROL


It is impossible to inspect quality into a product; the product must be built right the first
time. This implies that the manufacturing process must be stable or repeatable and capable
17-3 Statistical Process Control 509

of operating with little variability around the target or nominal dimension. Online statisti-
cal process controls are powerful tools useful in achieving process stability and improving
capability through the reduction of variability.
It is customary to think of statistical process control (SPC) as a set of problem-solving
tools that may be applied to any process. The major tools of SPC are the following:
1. Histogram
2. Pareto chart
3. Cause-and-effect diagram
4. Defect-concentration diagram
5. Control chart
6. Scatter diagram
7. Check sheet
While these tools are an important part of SPC, they really constitute only the technical
aspect of the subject. SPC is an attitude—a desire of all individuals in the organization for
continuous improvement in quality and productivity by the systematic reduction of vari-
ability. The control chart is the most powerful of the SPC tools. We now give an introduc-
tion to several basic types of control charts.

17-3.1 Introduction to Control Charts


The basic theory of the control chart was developed by Walter Shewhart in the 1920s. To
understand how a control chart works, we must first understand Shewhart’s theory of vari-
ation. Shewhart theorized that all processes, however good, are characterized by a certain
amount of variation if we measure with an instrument of sufficient resolution. When this
variability is confined to random or chance variation only, the process is said to be in a state
of statistical control. However, another situation may exist in which the process variability
is also affected by some assignable cause, such as a faulty machine setting, operator error,
unsatisfactory raw material, worn machine components, and so on.¹ These assignable
causes of variation usually have an adverse effect on product quality, so it is important to
have some systematic technique for detecting serious departures from a state of statistical
control as soon after they occur as possible. Control charts are principally used for this pur-
pose.
The power of the control chart lies in its ability to distinguish assignable causes from
random variation. It is the job of the individual using the control chart to identify the under-
lying root cause responsible for the out-of-control condition, develop and implement an
appropriate corrective action, and then follow up to ensure that the assignable cause has
been eliminated from the process. There are three points to remember.
1. A state of statistical control is not a natural state for most processes.
2. The attentive use of control charts will result in the elimination of assignable causes,
yielding an in-control process and reduced process variability.
3. The control chart is ineffective without the system to develop and implement cor-
rective actions that attack the root causes of problems. Management and engineer-
ing involvement is usually necessary to accomplish this.

¹Sometimes common cause is used instead of “random” or “chance cause,” and special cause is used instead of
“assignable cause.”

We distinguish between control charts for measurements and control charts for attrib-
utes, depending on whether the observations on the quality characteristic are measurements
or enumeration data. For example, we may choose to measure the diameter of a shaft, say
with a micrometer, and utilize these data in conjunction with a control chart for measure-
ments. On the other hand, we may judge each unit of product as either defective or nonde-
fective and use the fraction of defective units found or the total number of defects in
conjunction with a control chart for attributes. Obviously, certain products and quality char-
acteristics lend themselves to analysis by either method, and a clear-cut choice between the
two methods may be difficult.
A control chart, whether for measurements or attributes, consists of a centerline, corre-
sponding to the average quality at which the process should perform when statistical control
is exhibited, and two control limits, called the upper and lower control limits (UCL and
LCL). A typical control chart is shown in Fig. 17-1. The control limits are chosen so that val-
ues falling between them can be attributed to chance variation, while values falling beyond
them can be taken to indicate a lack of statistical control. The general approach consists of
periodically taking a random sample from the process, computing some appropriate quan-
tity, and plotting that quantity on the control chart. When a sample value falls outside the
control limits, we search for some assignable cause of variation. However, even if a sample
value falls between the control limits, a trend or some other systematic pattern may indicate
that some action is necessary, usually to avoid more serious trouble. The samples should be
selected in such a way that each sample is as homogeneous as possible and at the same time
maximizes the opportunity for variation due to an assignable cause to be present. This is usu-
ally called the rational subgroup concept. Order of production and source (if more than one
source exists) are commonly used bases for obtaining rational subgroups.
The ability to interpret control charts accurately is usually acquired with experience. It
is necessary that the user be thoroughly familiar with both the statistical foundation of con-
trol charts and the nature of the production process itself.

17-3.2 Control Charts for Measurements


When dealing with a quality characteristic that can be expressed as a measurement, it is cus-
tomary to exercise control over both the average value of the quality characteristic and its
variability. Control over the average quality is exercised by the control chart for means, usu-
ally called the X chart. Process variability can be controlled by either a range (R) chart or a

Figure 17-1 A typical control chart, plotting the sample quality characteristic against sample number, with the centerline, upper control limit (UCL), and lower control limit (LCL) shown.



standard deviation chart, depending on how the population standard deviation is estimated.
We will discuss only the R chart.
Suppose that the process mean and standard deviation, say μ and σ, are known, and,
furthermore, that we can assume that the quality characteristic follows the normal distribu-
tion. Let X̄ be the sample mean based on a random sample of size n from this process. Then
the probability is 1 − α that the mean of such random samples will fall between
μ + Z_{α/2}(σ/√n) and μ − Z_{α/2}(σ/√n). Therefore, we could use these two values as the
upper and lower control limits, respectively. However, we usually do not know μ and σ, and
they must be estimated. In addition, we may not be able to make the normality assumption.
For these reasons, the probability limit 1 − α is seldom used in practice. Usually Z_{α/2} is
replaced by 3, and “3-sigma” control limits are used.
When μ and σ are unknown, we often estimate them on the basis of preliminary sam-
ples, taken when the process is thought to be in control. We recommend the use of at least
20-25 preliminary samples. Suppose k preliminary samples are available, each of size n.
Typically, n will be 4, 5, or 6; these relatively small sample sizes are widely used and often
arise from the construction of rational subgroups. Let the sample mean for the ith sample
be X̄ᵢ. Then we estimate the mean of the population, μ, by the grand mean

X̿ = (1/k) Σᵢ₌₁ᵏ X̄ᵢ.    (17-1)

Thus, we may take X̿ as the centerline on the X̄ control chart.


We may estimate σ from either the standard deviations or the ranges of the k samples.
Since it is more frequently used in practice, we confine our discussion to the range method.
The sample size is relatively small, so there is little loss in efficiency in estimating σ from
the sample ranges. The relationship between the range, R, of a sample from a normal pop-
ulation with known parameters and the standard deviation of that population is needed.
Since R is a random variable, the quantity W = R/σ, called the relative range, is also a ran-
dom variable. The parameters of the distribution of W have been determined for any sam-
ple size n. The mean of the distribution of W is called d₂, and a table of d₂ for various n is
given in Table XIII of the Appendix. Let Rᵢ be the range of the ith sample, and let

R̄ = (1/k) Σᵢ₌₁ᵏ Rᵢ    (17-2)

be the average range. Then an estimate of σ is

σ̂ = R̄/d₂.    (17-3)

Therefore, we may use as our upper and lower control limits for the X̄ chart

UCL = X̿ + (3/(d₂√n)) R̄,
LCL = X̿ − (3/(d₂√n)) R̄.    (17-4)

We note that the quantity

A₂ = 3/(d₂√n)



is a constant depending on the sample size, so it is possible to rewrite equations 17-4 as

UCL = X̿ + A₂R̄,
LCL = X̿ − A₂R̄.    (17-5)

The constant A₂ is tabulated for various sample sizes in Table XIII of the Appendix.
The parameters of the R chart may also be easily determined. The centerline will obvi-
ously be R̄. To determine the control limits, we need an estimate of σ_R, the standard devia-
tion of R. Once again, assuming the process is in control, the distribution of the relative
range, W, will be useful. The standard deviation of W, say σ_W, is a function of n, which has
been determined. Thus, since

R = Wσ,

we may obtain the standard deviation of R as

σ_R = σ_W σ.

As σ is unknown, we may estimate σ_R as

σ̂_R = σ_W (R̄/d₂),

and we would use as the upper and lower control limits on the R chart

UCL = R̄ + (3σ_W/d₂) R̄,
LCL = R̄ − (3σ_W/d₂) R̄.    (17-6)

Setting D₃ = 1 − 3σ_W/d₂ and D₄ = 1 + 3σ_W/d₂, we may rewrite equation 17-6 as

UCL = D₄R̄,
LCL = D₃R̄,    (17-7)

where D₃ and D₄ are tabulated in Table XIII of the Appendix.
When preliminary samples are used to construct limits for control charts, it is cus-
tomary to treat these limits as trial values. Therefore, the k sample means and ranges
should be plotted on the appropriate charts, and any points that exceed the control limits
should be investigated. If assignable causes for these points are discovered, they should be
eliminated and new limits for the control charts determined. In this way, the process may
eventually be brought into statistical control and its inherent capabilities assessed. Other
changes in process centering and dispersion may then be contemplated.
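The computations in equations 17-1 through 17-7 are easy to automate. The following short Python sketch is a minimal illustration (the function name and layout are ours, not part of the text), and it assumes subgroups of size n = 5 so that the constants A₂ = 0.577, D₃ = 0, and D₄ = 2.115 from Table XIII apply.

    # Trial control limits for X-bar and R charts (equations 17-5 and 17-7).
    # The constants below are the tabulated factors for subgroups of size n = 5.
    A2, D3, D4 = 0.577, 0.0, 2.115

    def xbar_r_limits(xbars, ranges):
        """Return (LCL, CL, UCL) tuples for the X-bar chart and the R chart."""
        xbarbar = sum(xbars) / len(xbars)    # grand mean, equation 17-1
        rbar = sum(ranges) / len(ranges)     # average range, equation 17-2
        x_chart = (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar)
        r_chart = (D3 * rbar, rbar, D4 * rbar)
        return x_chart, r_chart

Applied to the vane-opening data of Example 17-1 below (grand mean 33.33, average range 5.85), this reproduces the trial limits 29.96 and 36.70 for the X̄ chart.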

Example 17-1

A component part for a jet aircraft engine is manufactured by an investment casting process. The vane
opening on this casting is an important functional parameter of the part. We will illustrate the use of
X and R control charts to assess the statistical stability of this process. Table 17-1 presents 20 samples
of five parts each. The values given in the table have been coded by using the last three digits of the
dimension; that is, 31.6 should be 0.50316 inch.
The quantities X̿ = 33.33 and R̄ = 5.85 are shown at the foot of Table 17-1. Notice that even
though X̄, X̿, R, and R̄ are now realizations of random variables, we have still written them as

Table 17-1 Vane Opening Measurements

Sample Number   x1   x2   x3   x4   x5     X̄      R
 1              33   29   31   32   33    31.6    4
 2              35   33   31   37   31    33.2    6
 3              35   37   33   34   36    35.0    4
 4              30   31   33   34   33    32.2    4
 5              33   34   35   33   34    33.8    2
 6              38   37   39   40   38    38.4    3
 7              30   31   32   34   31    31.6    4
 8              29   39   38   39   39    36.8   10
 9              28   34   35   36   43    35.2   15
10              39   33   32   34   32    34.0    7
11              28   30   28   32   31    29.8    4
12              31   35   35   35   34    34.0    4
13              27   32   34   35   37    33.0   10
14              33   33   35   37   36    34.8    4
15              35   37   32   35   39    35.6    7
16              33   33   27   31   30    30.8    6
17              35   34   34   30   32    33.0    5
18              32   33   30   30   33    31.6    3
19              25   27   34   27   28    28.2    9
20              35   35   36   33   30    33.8    6

                                   X̿ = 33.33   R̄ = 5.85

uppercase letters. This is the usual convention in quality control, and it will always be clear from the
context what the notation implies. The trial control limits are, for the X̄ chart,

X̿ ± A₂R̄ = 33.33 ± (0.577)(5.85) = 33.33 ± 3.37,

or

UCL = 36.70,
LCL = 29.96.

For the R chart, the trial control limits are

UCL = D₄R̄ = (2.115)(5.85) = 12.37,
LCL = D₃R̄ = (0)(5.85) = 0.

The X̄ and R control charts with these trial control limits are shown in Fig. 17-2. Notice that samples
6, 8, 11, and 19 are out of control on the X̄ chart, and that sample 9 is out of control on the R chart.
Suppose that all of these assignable causes can be traced to a defective tool in the wax-molding area.
We should discard these five samples and recompute the limits for the X̄ and R charts. These new
revised limits are, for the X̄ chart,

UCL = X̿ + A₂R̄ = 32.90 + (0.577)(5.313) = 35.96,
LCL = X̿ − A₂R̄ = 32.90 − (0.577)(5.313) = 29.84,

and, for the R chart, they are

UCL = D₄R̄ = (2.115)(5.067) = 10.71,
LCL = D₃R̄ = (0)(5.067) = 0.

Figure 17-2 The X̄ and R control charts for vane opening. (X̄ chart: UCL = 36.70, mean = 33.33, LCL = 29.96. R chart: UCL = 12.37, R̄ = 5.85, LCL = 0.)

The revised control charts are shown in Fig. 17-3. Notice that we have treated the first 20 preliminary
samples as estimation data with which to establish control limits. These limits can now be used to
judge the statistical control of future production. As each new sample becomes available, the values
of X and R should be computed and plotted on the control charts. It may be desirable to revise the lim-
its periodically, even if the process remains stable. The limits should always be revised when process
improvements are made.

Estimating Process Capability

It is usually necessary to obtain some information about the capability of the process, that
is, about the performance of the process when it is operating in control. Two graphical tools,
the tolerance chart (or tier chart) and the histogram, are helpful in assessing process capa-
bility. The tolerance chart for all 20 samples from the vane manufacturing process is shown
in Fig. 17-4. The specifications on vane opening, 0.5030 ± 0.001 inch, are also shown on
the chart. In terms of the coded data, the upper specification limit is USL = 40 and the lower
specification limit is LSL = 20. The tolerance chart is useful in revealing patterns over time
in the individual measurements, or it may show that a particular value of X̄ or R was pro-
duced by one or two unusual observations in the sample. For example, note the two unusual
observations in sample 9 and the single unusual observation in sample 8. Note also that it

Figure 17-3 X̄ and R control charts for vane opening, revised limits. (X̄ chart: UCL = 35.96, mean = 32.90, LCL = 29.84. R chart: UCL = 10.71, R̄ = 5.067, LCL = 0. Points marked (*) were not used in computing the control limits.)

Figure 17-4 Tolerance diagram of vane openings, plotting the individual measurements in each of the 20 samples against the nominal dimension of 30.

is appropriate to plot the specification limits on the tolerance chart, since it is a chart of indi-
vidual measurements. It is never appropriate to plot specification limits on a control chart,
or to use the specifications in determining the control limits. Specification limits and con-
trol limits are unrelated. Finally, note from Fig. 17-4 that the process is running off center
from the nominal dimension of 0.5030 inch.
The histogram for the vane opening measurements is shown in Fig. 17-5. The obser-
vations from samples 6, 8, 9, 11, and 19 have been deleted from this histogram. The gen-
eral impression from examining this histogram is that the process is capable of meeting the
specifications, but that it is running off center.
Another way to express process capability is in terms of the process capability ratio
(PCR), defined as

PCR = (USL − LSL)/(6σ).    (17-8)

Notice that the 6σ spread (3σ on either side of the mean) is sometimes called the basic
capability of the process. The limits 3σ on either side of the process mean are sometimes
called natural tolerance limits, as these represent limits that an in-control process should
meet with most of the units produced. For the vane opening, we could estimate σ as

σ̂ = R̄/d₂ = R̄/2.326 = 2.06.

Therefore, an estimate of the PCR is

PCR = (USL − LSL)/(6σ̂)
    = (40 − 20)/(6(2.06))
    = 1.62.

Figure 17-5 Histogram for vane opening, with LSL = 20, the nominal dimension of 30, and USL = 40 marked on the measurement scale.



The PCR has a natural interpretation; (1/PCR)100 is just the percentage of the toler-
ance band used by the process. Thus, the vane opening process uses approximately
(1/1.62)100 = 61.7% of the tolerance band.
Figure 17-6a shows a process for which the PCR exceeds unity. Since the process nat-
ural tolerance limits lie inside the specifications, very few defective or nonconforming units
will be produced. If PCR = 1, as shown in Fig. 17-6b, more nonconforming units result. In
fact, for a normally distributed process, if PCR = 1, the fraction nonconforming is 0.27%,
or 2700 parts per million. Finally, when the PCR is less than unity, as in Fig. 17-6c, the
process is very yield sensitive and a large number of nonconforming units will be produced.
The definition of the PCR given in equation 17-8 implicitly assumes that the process
is centered at the nominal dimension. If the process is running off center, its actual capa-
bility will be less than that indicated by the PCR. It is convenient to think of PCR as a meas-
ure of potential capability, that is, capability with a centered process. If the process is not
centered, then a measure of actual capability is given by

PCRₖ = min[(USL − μ)/(3σ), (μ − LSL)/(3σ)].    (17-9)


Figure 17-6 Process fallout and the process capability ratio (PCR): (a) PCR > 1, with the 3σ natural tolerance limits inside LSL and USL; (b) PCR = 1, with the natural tolerance limits at the specifications and some nonconforming units in each tail; (c) PCR < 1, with nonconforming units produced in both tails.

In effect, PCRₖ is a one-sided process capability ratio that is calculated relative to the spec-
ification limit nearest to the process mean. For the vane opening process, we find that

PCRₖ = min[(USL − X̿)/(3σ̂), (X̿ − LSL)/(3σ̂)]
     = min[(40 − 33.19)/(3(2.06)) = 1.10, (33.19 − 20)/(3(2.06)) = 2.13]
     = 1.10.

Note that if PCR = PCRₖ, the process is centered at the nominal dimension. Since PCRₖ =
1.10 for the vane opening process, and PCR = 1.62, the process is obviously running off
center, as was first noted in Figs. 17-4 and 17-5. This off-center operation was ultimately
traced to an oversized wax tool. Changing the tooling resulted in a substantial improvement
in the process.
Montgomery (2001) provides guidelines on appropriate values of the PCR and a table
relating the fallout for a normally distributed process in statistical control as a function of the
PCR. Many U.S. companies use PCR = 1.33 as a minimum acceptable target and PCR = 1.66
as a minimum target for strength, safety, or critical characteristics. Also, some U.S. compa-
nies, particularly in the automobile industry, have adopted the Japanese terminology Cₚ = PCR
and Cₚₖ = PCRₖ. As Cₚ has another meaning in statistics (in multiple regression; see Chap-
ter 15), we prefer the traditional notation PCR and PCRₖ.
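As a quick numerical check of the capability calculations above, the following Python sketch (an illustration only; the variable names are ours) evaluates PCR and PCRₖ from the estimates quoted in the text.

    # Process capability ratios for the vane-opening process
    # (equations 17-8 and 17-9), using the estimates quoted in the text.
    usl, lsl = 40.0, 20.0
    mean_hat, sigma_hat = 33.19, 2.06

    pcr = (usl - lsl) / (6 * sigma_hat)                  # potential capability
    pcr_k = min((usl - mean_hat) / (3 * sigma_hat),
                (mean_hat - lsl) / (3 * sigma_hat))      # actual capability
    print(round(pcr, 2), round(pcr_k, 2))                # prints 1.62 1.1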

17-3.3 Control Charts for Individual Measurements

Many situations exist in which the sample consists of a single observation; that is, n = 1.
These situations occur when production is very slow or costly and it is impractical to allow
the sample size to be greater than one. Other cases include processes where every observa-
tion can be measured due to automated inspection, for example. The Shewhart control
chart for individual measurements is appropriate for this type of situation. We will see later
in this chapter that the exponentially weighted moving average control chart and the cumu-
lative sum control chart may be more informative than the individuals chart.
The Shewhart control chart uses the moving range, MR, of two successive observations
for estimating the process variability. The moving range is defined

MRᵢ = |xᵢ − xᵢ₋₁|.

For example, for m observations, m − 1 moving ranges are calculated as MR₂ = |x₂ − x₁|,
MR₃ = |x₃ − x₂|, ..., MRₘ = |xₘ − xₘ₋₁|. Simultaneous control charts can be established on
the individual observations and on the moving range.
The control limits for the individuals control chart are calculated as

UCL = x̄ + 3 M̅R̅/d₂,
Centerline = x̄,    (17-10)
LCL = x̄ − 3 M̅R̅/d₂,

where M̅R̅ is the sample mean of the MRᵢ.



If a moving range of size n = 2 is used, then d₂ = 1.128 from Table XIII of the Appen-
dix. The control limits for the moving range control chart are

UCL = D₄M̅R̅,
Centerline = M̅R̅,    (17-11)
LCL = D₃M̅R̅.

Example 17-2
Batches of a particular chemical product are selected from a process and the purity measured on each.
Data for 15 successive batches have been collected and are given in Table 17-2. The moving ranges
of size n = 2 are also displayed in Table 17-2.
To set up the control chart for individuals, we first need the sample average of the 15 purity
measurements. This average is found to be x̄ = 0.757. The average of the moving ranges of two obser-
vations is M̅R̅ = 0.046. The control limits for the individuals chart with moving ranges of size 2, using
the limits in equation 17-10, are

UCL = 0.757 + 3(0.046/1.128) = 0.879,
Centerline = 0.757,
LCL = 0.757 − 3(0.046/1.128) = 0.635.

The control limits for the moving range chart are found using the limits given in equation 17-11:

UCL = 3.267(0.046) = 0.150,
Centerline = 0.046,
LCL = 0(0.046) = 0.

Table 17-2 Purity of Chemical Product

Batch    Purity, x    Moving Range, MR
  1        0.77
  2        0.76          0.01
  3        0.77          0.01
  4        0.72          0.05
  5        0.73          0.01
  6        0.73          0.00
  7        0.85          0.12
  8        0.70          0.15
  9        0.75          0.05
 10        0.74          0.01
 11        0.75          0.01
 12        0.84          0.09
 13        0.79          0.05
 14        0.72          0.07
 15        0.74          0.02

          x̄ = 0.757    M̅R̅ = 0.046

Figure 17-7 Control charts for (a) the individual observations (UCL = 0.879, mean = 0.757, LCL = 0.635) and (b) the moving range (UCL = 0.150, centerline = 0.046, LCL = 0) on purity.

The control charts for individual observations and for the moving range are provided in Fig. 17-7.
Since there are no points beyond the control limits, the process appears to be in statistical control.
The individuals chart can be interpreted much like the X̄ control chart. An out-of-control situa-
tion would be indicated by either a point (or points) plotting beyond the control limits or a pattern
such as a run on one side of the centerline.
The moving range chart cannot be interpreted in the same way. Although a point (or points) plot-
ting beyond the control limits would likely indicate an out-of-control situation, a pattern or run on one
side of the centerline is not necessarily an indication that the process is out of control. This is due to
the fact that the moving ranges are correlated, and this correlation may naturally cause patterns or
trends on the chart.
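The limit calculations in this example are easily automated. The Python sketch below is a minimal illustration (function and variable names are ours) of equations 17-10 and 17-11 applied to the purity data.

    # Individuals and moving-range control limits (equations 17-10 and 17-11).
    d2, D3, D4 = 1.128, 0.0, 3.267   # constants for moving ranges of size n = 2

    def individuals_limits(x):
        mr = [abs(b - a) for a, b in zip(x, x[1:])]      # moving ranges MR_i
        xbar, mrbar = sum(x) / len(x), sum(mr) / len(mr)
        x_chart = (xbar - 3 * mrbar / d2, xbar, xbar + 3 * mrbar / d2)
        mr_chart = (D3 * mrbar, mrbar, D4 * mrbar)
        return x_chart, mr_chart

    purity = [0.77, 0.76, 0.77, 0.72, 0.73, 0.73, 0.85, 0.70,
              0.75, 0.74, 0.75, 0.84, 0.79, 0.72, 0.74]
    print(individuals_limits(purity))   # about (0.635, 0.757, 0.879) for x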

17-3.4 Control Charts for Attributes

The p Chart (Fraction Defective or Nonconforming)

Often it is desirable to classify a product as either defective or nondefective on the basis of


comparison with a standard. This is usually done to achieve economy and simplicity in the
inspection operation. For example, the diameter of a ball bearing may be checked by deter-
mining whether it will pass through a gauge consisting of circular holes cut in a template.
This would be much simpler than measuring the diameter with a micrometer. Control charts
for attributes are used in these situations. However, attribute control charts require a con-
siderably larger sample size than do their measurements counterparts. We will discuss the
fraction-defective chart, or p chart, and two charts for defects, the c and u charts. Note that
it is possible for a unit to contain many defects yet be classified as either defective or non-
defective; in some applications a unit with several defects may still be classified as nondefective.

Suppose D is the number of defective units in a random sample of size n. We assume that
D is a binomial random variable with unknown parameter p. Now the sample fraction
defective is an estimator of p, that is,

p̂ = D/n.    (17-12)

Furthermore, the variance of the statistic p̂ is

σ²_p̂ = p(1 − p)/n,

so we may estimate σ²_p̂ as

σ̂²_p̂ = p̂(1 − p̂)/n.    (17-13)

The centerline and control limits for the fraction-defective control chart may now be
easily determined. Suppose k preliminary samples are available, each of size n, and Dᵢ is the
number of defectives in the ith sample. Then we may take

p̄ = (1/(kn)) Σᵢ₌₁ᵏ Dᵢ    (17-14)

as the centerline and

UCL = p̄ + 3√(p̄(1 − p̄)/n),
LCL = p̄ − 3√(p̄(1 − p̄)/n)    (17-15)

as the upper and lower control limits, respectively. These control limits are based on the
normal approximation to the binomial distribution. When p is small, the normal approxi-
mation may not always be adequate. In such cases, it is best to use control limits obtained
directly from a table of binomial probabilities or, perhaps, from the Poisson approximation
to the binomial distribution. If p is small, the lower control limit may be a negative number.
If this should occur, it is customary to consider zero as the lower control limit.

Example 17-3
Suppose we wish to construct a fraction-defective control chart for a ceramic substrate pro-
duction line. We have 20 preliminary samples, each of size 100; the numbers of defectives
in each sample are shown in Table 17-3. Assume that the samples are numbered in the
sequence of production. Note that p̄ = 790/2000 = 0.395, and therefore the trial parameters
for the control chart are

Centerline = 0.395,

UCL = 0.395 + 3√((0.395)(0.605)/100) = 0.5417,

LCL = 0.395 − 3√((0.395)(0.605)/100) = 0.2483.

Table 17-3 Number of Defectives in Samples of 100 Ceramic Substrates

Sample   No. of Defectives   Sample   No. of Defectives
  1             44             11            44
  2             48             12            52
  3             32             13            35
  4             50             14            41
  5             29             15            42
  6             31             16            30
  7             46             17            46
  8             52             18            38
  9             44             19            26
 10             30             20            30

The control chart is shown in Fig. 17-8. All samples are in control. If they were not, we
would search for assignable causes of variation and revise the limits accordingly.
Although this process exhibits statistical control, its capability (p̄ = 0.395) is very
poor. We should take appropriate steps to investigate the process to determine why such a
large number of defective units are being produced. Defective units should be analyzed to
determine the specific types of defects present. Once the defect types are known, process
changes should be investigated to determine their impact on defect levels. Designed exper-
iments may be useful in this regard.
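A short Python sketch of the p chart construction follows; it is illustrative only (the helper name is ours) and applies equations 17-14 and 17-15 to defective counts such as those in Table 17-3.

    from math import sqrt

    # Trial limits for a fraction-defective (p) chart, equations 17-14 and 17-15.
    def p_chart_limits(defectives, n):
        pbar = sum(defectives) / (len(defectives) * n)    # equation 17-14
        half = 3 * sqrt(pbar * (1 - pbar) / n)
        return max(0.0, pbar - half), pbar, pbar + half   # LCL truncated at 0

With the 20 samples of size n = 100 in Table 17-3 (790 defectives in all), this returns approximately (0.2483, 0.395, 0.5417), matching the limits above.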

Example 17-4

Attributes Versus Measurements Control Charts

The advantage of measurement control charts relative to the p chart with respect to sample size may
be easily illustrated. Suppose that a normally distributed quality characteristic has a standard devia-
tion of 4 and specification limits of 52 and 68. The process is centered at 60, which results in a frac-
tion defective of 0.0454. Let the process mean shift to 56. Now the fraction defective is 0.1601. If the
Figure 17-8 The p chart for a ceramic substrate (UCL = 0.5417, p̄ = 0.395, LCL = 0.2483).



probability of detecting the shift on the first sample following the shift is to be 0.50, then the sample
size must be such that the lower 3-sigma limit will be at 56. This implies

60 − 3(4)/√n = 56,

whose solution is n = 9. For a p chart, using the normal approximation to the binomial, we must have

0.0454 + 3√((0.0454)(0.9546)/n) = 0.1601,

whose solution is n = 30. Thus, unless the cost of measurement inspection is more than three times as
costly as the attributes inspection, the measurement control chart is cheaper to operate.
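Both sample sizes in this example can be recovered numerically, as in the Python sketch below (a minimal illustration of this particular calculation, not a general design procedure).

    from math import ceil, sqrt

    # X-bar chart: solve 60 - 3(4)/sqrt(n) = 56 for n.
    sigma, shift = 4.0, 60.0 - 56.0
    n_xbar = ceil((3 * sigma / shift) ** 2)          # gives n = 9

    # p chart: smallest n for which the upper 3-sigma limit on p-hat at
    # p = 0.0454 reaches the shifted fraction defective 0.1601.
    p0, p1 = 0.0454, 0.1601
    n_p = next(n for n in range(1, 10000)
               if p0 + 3 * sqrt(p0 * (1 - p0) / n) <= p1)   # gives n = 30
    print(n_xbar, n_p)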

The c Chart (Defects)

In some situations it may be necessary to control the number of defects in a unit of product
rather than the fraction defective. In these situations we may use the control chart for
defects, or the c chart. Suppose that in the production of cloth it is necessary to control the
number of defects per yard, or that in assembling an aircraft wing the number of missing
rivets must be controlled. Many defects-per-unit situations can be modeled by the Poisson
distribution.
Let c be the number of defects in a unit, where c is a Poisson random variable with
parameter α. Now the mean and variance of this distribution are both α. Therefore, if k units
are available and cᵢ is the number of defects in unit i, the centerline of the control chart is

c̄ = (1/k) Σᵢ₌₁ᵏ cᵢ,    (17-16)

and

UCL = c̄ + 3√c̄,
LCL = c̄ − 3√c̄    (17-17)

are the upper and lower control limits, respectively.

Example 17-5
Printed circuit boards are assembled by a combination of manual assembly and automation. A flow
solder machine is used to make the mechanical and electrical connections of the leaded components
to the board. The boards are run through the flow solder process almost continuously, and every hour
five boards are selected and inspected for process-control purposes. The number of defects in each
sample of five boards is noted. Results for 20 samples are shown in Table 17-4. Now c̄ = 160/20 = 8,
and therefore

UCL = c̄ + 3√c̄ = 8 + 3√8 = 16.484,
LCL = c̄ − 3√c̄ = 8 − 3√8 < 0, set to 0.
From the control chart in Fig. 17-9, we see that the process is in control. However, eight defects per
group of five printed circuit boards is too many (about 8/5 = 1.6 defects/board), and the process needs
improvement. An investigation needs to be made of the specific types of defects found on the printed
circuit boards. This will usually suggest potential avenues for process improvement.

Table 17-4 Number of Defects in Samples of Five Printed Circuit Boards

Sample   No. of Defects   Sample   No. of Defects
  1             6            11            9
  2             4            12           15
  3             8            13            8
  4            10            14           10
  5             9            15            8
  6            12            16            2
  7            16            17            7
  8             2            18            1
  9             3            19            7
 10            10            20           13

Figure 17-9 The c chart for defects in samples of five printed circuit boards (UCL = 16.484, c̄ = 8, LCL = 0).

The u Chart (Defects per Unit)

In some processes it may be preferable to work with the number of defects per unit rather
than the total number of defects. Thus, if the sample consists of n units and there are c total
defects in the sample, then

u = c/n

is the average number of defects per unit. A u chart may be constructed for such data. If
there are k preliminary samples with u₁, u₂, ..., uₖ defects per unit, then the center-
line on the u chart is

ū = (1/k) Σᵢ₌₁ᵏ uᵢ,    (17-18)

and the control limits are given by

UCL = ū + 3√(ū/n),
LCL = ū − 3√(ū/n).    (17-19)

Example 17-6
A u chart may be constructed for the printed circuit board defect data in Example 17-5. Since each
sample contains n = 5 printed circuit boards, the values of u for each sample may be calculated as
shown in the following display:

Sample   Sample size, n   Number of defects, c   Defects per unit, u
  1            5                  6                     1.2
  2            5                  4                     0.8
  3            5                  8                     1.6
  4            5                 10                     2.0
  5            5                  9                     1.8
  6            5                 12                     2.4
  7            5                 16                     3.2
  8            5                  2                     0.4
  9            5                  3                     0.6
 10            5                 10                     2.0
 11            5                  9                     1.8
 12            5                 15                     3.0
 13            5                  8                     1.6
 14            5                 10                     2.0
 15            5                  8                     1.6
 16            5                  2                     0.4
 17            5                  7                     1.4
 18            5                  1                     0.2
 19            5                  7                     1.4
 20            5                 13                     2.6

The centerline for the u chart is

ū = (1/20) Σᵢ₌₁²⁰ uᵢ = 32/20 = 1.6,

and the upper and lower control limits are

UCL = ū + 3√(ū/n) = 1.6 + 3√(1.6/5) = 3.3,

LCL = ū − 3√(ū/n) = 1.6 − 3√(1.6/5) < 0, set to 0.
The control chart is plotted in Fig. 17-10. Notice that the u chart in this example is equivalent to the
c chart in Fig. 17-9. In some cases, particularly when the sample size is not constant, the u chart will
be preferable to the c chart. For a discussion of variable sample sizes on control charts, see Mont-
gomery (2001).
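The c and u chart limits of Examples 17-5 and 17-6 can be verified in a few lines of Python; the sketch below is illustrative only and uses the defect counts of Table 17-4.

    from math import sqrt

    counts = [6, 4, 8, 10, 9, 12, 16, 2, 3, 10,
              9, 15, 8, 10, 8, 2, 7, 1, 7, 13]    # defects per sample of 5 boards
    n = 5

    cbar = sum(counts) / len(counts)              # 8.0, c chart centerline
    c_chart = (max(0.0, cbar - 3 * sqrt(cbar)), cbar, cbar + 3 * sqrt(cbar))

    ubar = cbar / n                               # 1.6 defects per unit
    u_chart = (max(0.0, ubar - 3 * sqrt(ubar / n)), ubar, ubar + 3 * sqrt(ubar / n))
    print(c_chart, u_chart)                       # UCLs of about 16.49 and 3.30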

17-3.5 CUSUM and EWMA Control Charts


Up to this point in Chapter 17 we have presented the most basic of control charts, the
Shewhart control charts. A major disadvantage of these control charts is their insensitivity
to small shifts in the process (shifts often less than 1.5σ). This disadvantage is due to the
fact that the Shewhart charts use information only from the current observation.

Figure 17-10 The u chart of defects per unit on printed circuit boards, Example 17-6 (UCL = 3.3, ū = 1.6, LCL = 0).

Alternatives to Shewhart control charts include the cumulative sum control chart and
the exponentially weighted moving average control chart. These control charts are more
sensitive to small shifts in the process because they incorporate information from current
and recent past observations.

Tabular CUSUM Control Charts for the Process Mean

The cumulative sum (CUSUM) control chart was first introduced by Page (1954) and incor-
porates information from a sequence of sample observations. The chart plots the cumula-
tive sums of deviations of the observations from a target value. To illustrate, let x̄ⱼ represent
the jth sample mean, let μ₀ represent the target value for the process mean, and say the sam-
ple size is n ≥ 1. The CUSUM control chart plots the quantity

Cᵢ = Σⱼ₌₁ⁱ (x̄ⱼ − μ₀)    (17-20)

against the sample i. The quantity Cᵢ is the cumulative sum up to and including the ith sam-
ple. As long as the process is in control at the target value μ₀, then Cᵢ in equation 17-20 rep-
resents a random walk with a mean of zero. On the other hand, if the process shifts away
from the target mean, then either an upward or downward drift in Cᵢ will be evident. By
incorporating information from a sequence of observations, the CUSUM chart is able to
detect a small shift in the process more quickly than a standard Shewhart chart. The
CUSUM charts can be easily implemented for both subgroup data and individual observa-
tions. We will present the tabular CUSUM for individual observations.
The tabular CUSUM involves two statistics, Cᵢ⁺ and Cᵢ⁻, which are the accumulations of
deviations above and below the target mean, respectively. Cᵢ⁺ is called the one-sided upper
CUSUM and Cᵢ⁻ is called the one-sided lower CUSUM. The statistics are computed as follows:

Cᵢ⁺ = max[0, xᵢ − (μ₀ + K) + Cᵢ₋₁⁺],    (17-21)

Cᵢ⁻ = max[0, (μ₀ − K) − xᵢ + Cᵢ₋₁⁻],    (17-22)

with initial values C₀⁺ = C₀⁻ = 0. The constant K is referred to as the reference value and
is often chosen approximately halfway between the target mean, μ₀, and the out-of-control

mean that we are interested in detecting, denoted μ₁. In other words, K is half of the mag-
nitude of the shift from μ₀ to μ₁, or

K = |μ₁ − μ₀|/2.

The statistics given in equations 17-21 and 17-22 accumulate the deviations from target that
are larger than K and reset to zero when either quantity becomes negative. The CUSUM
control chart plots the values of Cᵢ⁺ and Cᵢ⁻ for each sample. If either statistic plots beyond
the decision interval, H, the process is considered out of control. We will discuss the choice
of H later in this chapter, but a good rule of thumb is often H = 5σ.

Example 17-7
A study presented in Food Control (2001, p. 119) gives the results of measuring the dry-matter content
in buttercream from a batch process. One goal of the study is to monitor the amount of dry matter from
batch to batch. Table 17-5 displays some data that may be typical of this type of process. The reported
values, xᵢ, are percentages of dry-matter content examined after mixing. The target amount of dry-mat-
ter content is 45%, and assume that σ = 0.84%. Let us also assume that we are interested in detecting a
shift in the process mean of at least 1σ; that is, μ₁ = μ₀ + 1σ = 45 + 1(0.84) = 45.84%. We will use the
Table 17-5 CUSUM Calculations for Example 17-7

Batch, i     xᵢ      xᵢ − 45.42    Cᵢ⁺     44.58 − xᵢ    Cᵢ⁻
   1       46.21       0.79       0.79      −1.63        0
   2       45.73       0.31       1.10      −1.15        0
   3       44.37      −1.05       0.05       0.21        0.21
   4       44.19      −1.23       0          0.39        0.60
   5       43.73      −1.69       0          0.85        1.45
   6       45.66       0.24       0.24      −1.08        0.37
   7       44.24      −1.18       0          0.34        0.71
   8       44.48      −0.94       0          0.10        0.81
   9       46.04       0.62       0.62      −1.46        0
  10       44.04      −1.38       0          0.54        0.54
  11       42.96      −2.46       0          1.62        2.16
  12       46.02       0.60       0.60      −1.44        0.72
  13       44.82      −0.60       0         −0.24        0.48
  14       45.02      −0.40       0         −0.44        0.04
  15       45.77       0.35       0.35      −1.19        0
  16       47.40       1.98       2.33      −2.82        0
  17       47.55       2.13       4.46      −2.97        0
  18       46.64       1.22       5.68      −2.06        0
  19       46.31       0.89       6.57      −1.73        0
  20       44.82      −0.60       5.97      −0.24        0
  21       45.39      −0.03       5.94      −0.81        0
  22       47.80       2.38       8.32      −3.22        0
  23       46.69       1.27       9.59      −2.11        0
  24       46.99       1.57      11.16      −2.41        0
  25       44.53      −0.89      10.27       0.05        0.05

recommended decision interval of H = 5σ = 5(0.84) = 4.2. The reference value, K, is found to be

K = |μ₁ − μ₀|/2 = |45.84 − 45|/2 = 0.42.

The values of Cᵢ⁺ and Cᵢ⁻ are given in Table 17-5. To illustrate the calculations, consider the first two
sample batches. Recall that C₀⁺ = C₀⁻ = 0, and using equations 17-21 and 17-22 with K = 0.42 and
μ₀ = 45, we have

C₁⁺ = max[0, x₁ − (45 + 0.42) + C₀⁺] = max[0, x₁ − 45.42 + C₀⁺]

and

C₁⁻ = max[0, (45 − 0.42) − x₁ + C₀⁻] = max[0, 44.58 − x₁ + C₀⁻].

For batch 1, x₁ = 46.21,

C₁⁺ = max[0, 46.21 − 45.42 + 0] = max[0, 0.79] = 0.79

and

C₁⁻ = max[0, 44.58 − 46.21 + 0] = max[0, −1.63] = 0.

For batch 2, x₂ = 45.73,

C₂⁺ = max[0, 45.73 − 45.42 + 0.79] = max[0, 1.10] = 1.10

and

C₂⁻ = max[0, 44.58 − 45.73 + 0] = max[0, −1.15] = 0.

The CUSUM calculations given in Table 17-5 indicate that the upper one-sided CUSUM at batch 17 is
C₁₇⁺ = 4.46, which exceeds the decision value of H = 4.2. Therefore, the process appears to have
shifted out of control. The CUSUM status chart created using Minitab® with H = 4.2 is given in Fig.
17-11. The out-of-control situation is also evident on this chart at batch 17.

The CUSUM control chart is a powerful quality tool for detecting a process that has
shifted from the target process mean. The correct choices of H and K can greatly improve
the sensitivity of the control chart while protecting against the occurrence of false alarms
(the process is actually in control, but the control chart signals out of control). Design rec-
ommendations for the CUSUM will be provided later in this chapter when the concept of
average run length is introduced.
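The tabular CUSUM is straightforward to program. The Python sketch below is a minimal illustration (names are ours) of equations 17-21 and 17-22 applied to a sequence of individual observations.

    # Tabular CUSUM (equations 17-21 and 17-22) for individual observations.
    def tabular_cusum(x, mu0, K, H):
        c_plus, c_minus, signals = 0.0, 0.0, []
        for i, xi in enumerate(x, start=1):
            c_plus = max(0.0, xi - (mu0 + K) + c_plus)
            c_minus = max(0.0, (mu0 - K) - xi + c_minus)
            if c_plus > H or c_minus > H:
                signals.append(i)     # the chart signals out of control here
        return signals

For the dry-matter data of Table 17-5, tabular_cusum(x, 45, 0.42, 4.2) first signals at batch 17, where C⁺ = 4.46 exceeds H = 4.2.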

Figure 17-11 CUSUM status chart for Example 17-7, plotting the upper and lower cumulative sums against subgroup number with decision interval ±4.2.

We have presented the upper and lower CUSUM control charts for situations in which
a shift in either direction away from the process target is of interest. There are many
instances when we may be interested in a shift in only one direction, either upward or
downward. One-sided CUSUM charts can be constructed for these situations. For a thor-
ough development of these charts and more details, see Montgomery (2001).

EWMA Control Charts

The exponentially weighted moving average (EWMA) control chart is also a good alterna-
tive to the Shewhart control chart when detecting a small shift in the process mean is of
interest. We will present the EWMA for individual measurements, although the procedure
can also be modified for subgroups of size n > 1.
The EWMA control chart was first introduced by Roberts (1959). The EWMA is
defined as

zᵢ = λxᵢ + (1 − λ)zᵢ₋₁,    (17-23)

where λ is a weight, 0 < λ ≤ 1. The procedure to be presented is initialized with z₀ = μ₀, the
process target mean. If a target mean is unknown, then the average of preliminary data, x̄,
is used as the initial value of the EWMA. The definition given in equation 17-23 demon-
strates that information from past observations is incorporated into the current value of zᵢ.
The value zᵢ is a weighted average of the current observation and all previous ones. To illustrate, we can
replace zᵢ₋₁ on the right-hand side of equation 17-23 to obtain

zᵢ = λxᵢ + (1 − λ)[λxᵢ₋₁ + (1 − λ)zᵢ₋₂]
   = λxᵢ + λ(1 − λ)xᵢ₋₁ + (1 − λ)²zᵢ₋₂.

By recursively replacing zᵢ₋ⱼ, j = 1, 2, ..., i, we find

zᵢ = λ Σⱼ₌₀ⁱ⁻¹ (1 − λ)ʲ xᵢ₋ⱼ + (1 − λ)ⁱ z₀.

The EWMA can be thought of as a weighted average of all past and current observations.
Note that the weights decrease geometrically with the age of the observation, giving less
weight to observations that occurred early in the process. The EWMA is often used in fore-
casting, but the EWMA control chart has been used extensively for monitoring many types
of processes.
If the observations are independent random variables with variance σ², the variance of
the EWMA, zᵢ, is

σ²_zᵢ = σ² (λ/(2 − λ)) [1 − (1 − λ)²ⁱ].

Given a target mean, μ₀, and the variance of the EWMA, the upper control limit, centerline,
and lower control limit for the EWMA control chart are

UCL = μ₀ + Lσ √((λ/(2 − λ))[1 − (1 − λ)²ⁱ]),

Centerline = μ₀,

LCL = μ₀ − Lσ √((λ/(2 − λ))[1 − (1 − λ)²ⁱ]),

where L is the width of the control limits. Note that the term 1 − (1 − λ)²ⁱ approaches 1 as i
increases. Therefore, as the process continues running, the control limits for the EWMA
approach the steady-state values

UCL = μ₀ + Lσ √(λ/(2 − λ)),
LCL = μ₀ − Lσ √(λ/(2 − λ)).    (17-24)

Although the control limits given in equation 17-24 provide good approximations, it is rec-
ommended that the exact limits be used for small values of i.

Example 17-8
We will now implement the EWMA control chart with λ = 0.2 and L = 2.7 for the dry-matter content
data provided in Table 17-5. Recall that the target mean is μ₀ = 45% and the process standard devia-
tion is assumed to be σ = 0.84%. The EWMA calculations are provided in Table 17-6. To demonstrate
some of the calculations, consider the first observation, x₁ = 46.21. We find

z₁ = λx₁ + (1 − λ)z₀ = (0.2)(46.21) + (0.80)(45) = 45.24.

The second EWMA value is then

z₂ = λx₂ + (1 − λ)z₁ = (0.2)(45.73) + (0.80)(45.24) = 45.34.

Table 17-6 EWMA Calculations for Example 17-8

Batch, i     xᵢ       zᵢ       UCL      LCL
   1       46.21    45.24    45.45    44.55
   2       45.73    45.34    45.58    44.42
   3       44.37    45.15    45.65    44.35
   4       44.19    44.95    45.69    44.31
   5       43.73    44.71    45.71    44.29
   6       45.66    44.90    45.73    44.27
   7       44.24    44.77    45.74    44.26
   8       44.48    44.71    45.75    44.25
   9       46.04    44.98    45.75    44.25
  10       44.04    44.79    45.75    44.25
  11       42.96    44.42    45.75    44.25
  12       46.02    44.74    45.75    44.25
  13       44.82    44.76    45.75    44.25
  14       45.02    44.81    45.76    44.24
  15       45.77    45.00    45.76    44.24
  16       47.40    45.48    45.76    44.24
  17       47.55    45.90    45.76    44.24
  18       46.64    46.04    45.76    44.24
  19       46.31    46.10    45.76    44.24
  20       44.82    45.84    45.76    44.24
  21       45.39    45.75    45.76    44.24
  22       47.80    46.16    45.76    44.24
  23       46.69    46.27    45.76    44.24
  24       46.99    46.41    45.76    44.24
  25       44.53    46.04    45.76    44.24

The EWMA values are plotted on a control chart along with the upper and lower control limits given by

UCL = μ₀ + Lσ √((λ/(2 − λ))[1 − (1 − λ)²ⁱ])
    = 45 + 2.7(0.84) √((0.2/(2 − 0.2))[1 − (1 − 0.2)²ⁱ]),

LCL = 45 − 2.7(0.84) √((0.2/(2 − 0.2))[1 − (1 − 0.2)²ⁱ]).

Therefore, for i = 1,

UCL = 45 + 2.7(0.84) √((0.2/1.8)[1 − (1 − 0.2)²⁽¹⁾]) = 45.45,

LCL = 45 − 2.7(0.84) √((0.2/1.8)[1 − (1 − 0.2)²⁽¹⁾]) = 44.55.

Figure 17-12 EWMA chart for Example 17-8 (steady-state limits UCL = 45.76, mean = 45, LCL = 44.24).

The remaining control limits are calculated similarly and plotted on the control chart given in Fig. 17-12.
The control limits tend to increase as i increases, but then tend to the steady-state values given by the
equations in 17-24:

UCL = μ₀ + Lσ √(λ/(2 − λ)) = 45 + 2.7(0.84) √(0.2/1.8) = 45.76,

LCL = μ₀ − Lσ √(λ/(2 − λ)) = 45 − 2.7(0.84) √(0.2/1.8) = 44.24.

The EWMA control chart signals at observation 17, indicating that the process is out of control.

The sensitivity of the EWMA control chart for a particular process will depend on the
choices of L and λ. Various choices of these parameters will be presented later in this chap-
ter, when the concept of the average run length is introduced. For more details and devel-
opments regarding the EWMA, see Crowder (1987), Lucas and Saccucci (1990), and
Montgomery (2001).
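A small Python sketch of the EWMA recursion and its exact limits follows (an illustration only; names are ours). With λ = 0.2, L = 2.7, μ₀ = 45, and σ = 0.84 it reproduces the values of Table 17-6.

    from math import sqrt

    # EWMA control chart (equation 17-23) with exact, time-varying limits.
    def ewma_chart(x, mu0, sigma, lam=0.2, L=2.7):
        z, rows = mu0, []
        for i, xi in enumerate(x, start=1):
            z = lam * xi + (1 - lam) * z
            half = L * sigma * sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
            rows.append((i, round(z, 2), round(mu0 - half, 2), round(mu0 + half, 2)))
        return rows

Batch 1 gives z₁ = 45.24 with limits (44.55, 45.45); by batch 17, z₁₇ = 45.90 exceeds the near-steady-state upper limit of 45.76.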

17-3.6 Average Run Length


In this chapter we have presented control-charting techniques for a variety of situations and
made some recommendations about the design of the control charts. In this section, we will
present the average run length (ARL) of a control chart. The ARL can be used to assess the
performance of the control chart or to determine the appropriate values of various parame-
ters for the control charts presented in this chapter.

The ARL is the expected number of samples taken before a control chart signals out of con-
trol. In general, the ARL is

ARL = 1/p,

where p is the probability of any point exceeding the control limits. If the process is in con-
trol and the control chart signals out of control, then we say that a false alarm has occurred.
To illustrate, consider the X̄ control chart with the standard 3σ limits. For this situation, p =
0.0027 is the probability that a single point falls outside the limits when the process is in
control. The in-control ARL for the X̄ control chart is

ARL = 1/p = 1/0.0027 ≈ 370.

In other words, even if the process remains in control we should expect, on the average, an
out-of-control signal (or false alarm) every 370 samples. In general, if the process is actu-
ally in control, then we desire a large value of the ARL. More formally, we can define the
in-control ARL as

ARL₀ = 1/α,

where α is the probability that a sample point plots beyond the control limits.
where @ is the probability that a sample point plots beyond the control limit.
If on the other hand the process is out of control, then a small ARL value is desirable.
A small value of the ARL indicates that the control chart will signal out of control soon after
the process has shifted. The out-of-control ARL is

ARL₁ = 1/(1 − β),

where β is the probability of not detecting a shift on the first sample after a shift has
occurred. To illustrate, consider the X̄ control chart with 3σ limits. Assume the target or in-
control mean is μ₀ and that the process has shifted to an out-of-control mean given by μ₁ =
μ₀ + kσ. The probability of not detecting this shift is given by

β = P[LCL ≤ X̄ ≤ UCL | μ = μ₁].

That is, β is the probability that the next sample mean plots in control, when in fact the
process has shifted out of control. Since X̄ ~ N(μ, σ²/n), LCL = μ₀ − Lσ/√n, and
UCL = μ₀ + Lσ/√n, we can rewrite β as

β = P[μ₀ − Lσ/√n ≤ X̄ ≤ μ₀ + Lσ/√n | μ = μ₁]

  = P[((μ₀ − Lσ/√n) − μ₁)/(σ/√n) ≤ (X̄ − μ₁)/(σ/√n) ≤ ((μ₀ + Lσ/√n) − μ₁)/(σ/√n)]

  = P[((μ₀ − Lσ/√n) − (μ₀ + kσ))/(σ/√n) ≤ Z ≤ ((μ₀ + Lσ/√n) − (μ₀ + kσ))/(σ/√n)]

  = P[−L − k√n ≤ Z ≤ L − k√n],

where Z is a standard normal random variable. If we let Φ denote the standard normal
cumulative distribution function, then

β = Φ(L − k√n) − Φ(−L − k√n).

From this, 1 − β is the probability that a shift in the process is detected on the first sample
after the shift has occurred. That is, the process has shifted and a point exceeds the control
limits, signaling that the process is out of control. Therefore, ARL₁ is the expected number of
samples observed before a shift is detected.
The ARLs have been used to evaluate and design control charts for variables and for
attributes. For more discussion on the use of ARLs for these charts, see Montgomery
(2001).
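These expressions are simple to evaluate numerically. The Python sketch below (illustrative, under the stated normality assumptions) computes β and the resulting ARL for an X̄ chart with L-sigma limits.

    from math import sqrt
    from statistics import NormalDist

    # ARL for an X-bar chart with L-sigma limits and a shift of k sigma.
    def arl(k, n, L=3.0):
        phi = NormalDist().cdf
        beta = phi(L - k * sqrt(n)) - phi(-L - k * sqrt(n))
        return 1.0 / (1.0 - beta)    # k = 0 gives the in-control ARL

    print(round(arl(0.0, 1)))        # about 370, matching 1/0.0027
    print(round(arl(1.0, 5), 1))     # 1-sigma shift, n = 5: about 4.5 samples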

ARLs for the CUSUM and EWMA Control Charts

Earlier in this chapter, we presented the CUSUM and EWMA control charts. The ARL can
be used to specify some of the parameter values needed to design these control charts.
To implement the tabular CUSUM control chart, values of the decision interval, H,
and the reference value, K, must be chosen. Recall that H and K are multiples of the
process standard deviation, specifically H = hσ and K = kσ, where k = 1/2 is often used as
a standard. The proper selection of these values is important. The ARL is one criterion that
can be used to determine the values of H and K. As stated previously, a large value of the
ARL when the process is in control is desirable. Therefore, we can set ARL₀ to an accept-
able level and determine h and k accordingly. In addition, we would want the control chart
to quickly detect a shift in the process mean. This would require values of h and k such that
the values of ARL₁ are quite small. To illustrate, Montgomery (2001) provides the ARLs
for a CUSUM control chart with h = 5 and k = 1/2. These values are given in Table 17-7.
The in-control average run length, ARL₀, is 465. If a small shift, say 0.50σ, is important
to detect, then with h = 5 and k = 1/2, we would expect to detect this shift within 38 sam-
ples (on the average) after the shift has occurred. Hawkins (1993) presents a table of h and
k values that will result in an in-control average run length of ARL₀ = 370. The values are
reproduced in Table 17-8.
Design of the EWMA control chart can also be based on the ARLs. Recall that the
design parameters of the EWMA control chart are the multiple of the standard deviation, L,
and the value of the weighting factor, λ. The values of these design parameters can be cho-
sen so that the ARL performance of the control chart is satisfactory.
Several authors discuss the ARL performance of the EWMA control chart, including
Crowder (1987) and Lucas and Saccucci (1990). Lucas and Saccucci (1990) provide the
ARL performance for several combinations of L and λ. The results are reproduced in Table
17-9. Again, it is desirable to have a large value of the in-control ARL and small values of
out-of-control ARLs. To illustrate, if L = 2.8 and λ = 0.10 are used, we would expect ARL₀
= 500 while the ARL for detecting a shift of 0.5σ is ARL₁ = 31.3. To detect smaller shifts

Table 17-7 Tabular CUSUM Performance with h = 5 and k = 1/2

Shift in Mean
(multiple of σ)   0     0.25   0.50   0.75   1.00   1.50   2.00   2.50   3.00   4.00
ARL              465    139    38.0   17.0   10.4   5.75   4.01   3.11   2.57   2.01

Table 17-8 Values of h and k Resulting in ARL₀ = 370 (Hawkins 1993)

k    0.25   0.50   0.75   1.0    1.25   1.5
h    8.01   4.77   3.34   2.52   1.99   1.61

Table 17-9 ARLs for Various EWMA Control Schemes (Lucas and Saccucci 1990)

Shift in Mean     L = 3.054   L = 2.998   L = 2.962   L = 2.814   L = 2.615
(multiple of σ)   λ = 0.40    λ = 0.25    λ = 0.20    λ = 0.10    λ = 0.05
0                   500         500         500         500         500
0.25                224         170         150         106         84.1
0.50                71.2        48.2        41.8        31.3        28.8
0.75                28.4        20.1        18.2        15.9        16.4
1.00                14.3        11.1        10.5        10.3        11.4
1.50                 5.9         5.5         5.5         6.1         7.1
2.00                 3.5         3.6         3.7         4.4         5.2
2.50                 2.5         2.7         2.9         3.4         4.2
3.00                 2.0         2.3         2.4         2.9         3.5
4.00                 1.4         1.7         1.9         2.2         2.7

in the process mean, it is found that small values of λ should be used. Note that for L = 3.0
and λ = 1.0, the EWMA reduces to the standard Shewhart control chart with 3-sigma lim-
its.

Cautions in the Use of ARLs

Although the ARL provides valuable information for designing and evaluating control
schemes, there are drawbacks to relying on the ARL as a design criterion. It should be noted
that the run length follows a geometric distribution, since it represents the number of samples
before a "success" occurs (a success being a point falling beyond the control limits). One
drawback is that the standard deviation of the run length is quite large. Second, because the dis-
tribution of the run length is geometric, the mean of the distribution
(the ARL) may not be a reliable estimate of the true run length.

17-3.7 Other SPC Problem-Solving Tools


While the control chart is a very powerful tool for investigating the causes of variation in
a process, it is most effective when used with other SPC problem-solving tools. In this
section we illustrate some of these tools, using the printed circuit board defect data in
Example 17-5.
Figure 17-9 shows a c chart for the number of defects in samples of five printed circuit
boards. The chart exhibits statistical control, but the number of defects must be reduced, as
the average number of defects per board is 8/5 = 1.6, and this level of defects would require
extensive rework.
The first step in solving this problem is to construct a Pareto diagram of the individual
defect types. The Pareto diagram, shown in Fig. 17-13, indicates that insufficient solder and
solder balls are the most frequently occurring defects, accounting for (109/160) 100 = 68%
of the observed defects. Furthermore, the first five defect categories on the Pareto chart are
all solder-related defects. This points to the flow solder process as a potential opportunity
for improvement.
To improve the flow solder process, a team consisting of the flow solder operator, the
shop supervisor, the manufacturing engineer responsible for the process, and a quality engi-
neer meets to study potential causes of solder defects. They conduct a brainstorming session
and produce the cause-and-effect diagram shown in Fig. 17-14. The cause-and-effect

Figure 17-13 Pareto diagram for printed circuit board defects, showing the number of defects by defect type.

diagram is widely used to clearly display the various potential causes of defects in products
and their interrelationships. It is useful in summarizing knowledge about the process.
As a result of the brainstorming session, the team tentatively identifies the following
variables as potentially influential in creating solder defects:

1. Flux specific gravity


2. Solder temperature
3. Conveyor speed
4. Conveyor angle
5. Solder wave height
6. Preheat temperature
7. Pallet loading method

A statistically designed experiment could be used to investigate the effect of these seven
variables on solder defects. Also, the team constructed a defect concentration diagram for
the product. A defect concentration diagram is just a sketch or drawing of the product, with
the most frequently occurring defects shown on the part. This diagram is used to determine
whether defects occur in the same location on the part. The defect concentration diagram
for the printed circuit board is shown in Fig. 17-15. This diagram indicates that most of the
insufficient solder defects are near the front edge of the board, where it makes initial con-
tact with the solder wave. Further investigation showed that one of the pallets used to carry
the boards across the wave was bent, causing the front edge of the board to make poor con-
tact with the solder wave.

Figure 17-14 Cause-and-effect diagram for the printed circuit board flow solder process. Potential causes listed include wave turbulence, wave height, wave fluidity, contact time, conveyor speed, conveyor angle, and maintenance of the solder machine; flux specific gravity, temperature, amount, and type; preheat temperature; and pallet alignment, orientation, loading, solderability, and contaminated leads, all feeding into solder defects.

Figure 17-15 Defect concentration diagram for a printed circuit board, showing the region of insufficient solder along the front edge of the board.

When the defective pallet was replaced, a designed experiment was used to investigate
the seven variables discussed earlier. The results of this experiment indicated that several of
these factors were influential and could be adjusted to reduce solder defects. After the
results of the experiment were implemented, the percentage of solder joints requiring
rework was reduced from 1% to under 100 parts per million (0.01%).

17-4 RELIABILITY ENGINEERING


One of the challenging endeavors of the past three decades has been the design and devel-
opment of large-scale systems for space exploration, new generations of commercial and
military aircraft, and complex electromechanical products such as office copiers and com-
puters. The performance of these systems, and the consequences of their failure, is of vital
concern. For example, the military community has historically placed strong emphasis on
equipment reliability. This emphasis stems largely from increasing ratios of maintenance
cost to procurement costs and the strategic and tactical implications of system failure. In the
538 Chapter 17 Statistical Quality Control and Reliability Engineering

area of consumer product manufacture, high reliability has come to be expected as much as
conformance to other important quality characteristics.
Reliability engineering encompasses several activities, one of which is reliability mod-
eling. Essentially, the system survival probability is expressed as a function of subsystem
or component reliabilities (survival probabilities). Usually, these models are time depend-
ent, but there are some situations where this is not the case. A second important activity is
that of life testing and reliability estimation.

17-4.1 Basic Reliability Definitions


Let us consider a component that has just been manufactured. It is to be operated at a stated
“stress level” or within some range of stress such as temperature, shock, and so on. The ran-
dom variable T will be defined as time to failure, and the reliability of the component (or
subsystem or system) at time t is R(t) = P[T > t]. R is called the reliability function. The fail-
ure process is usually complex, consisting of at least three types of failures: initial failures,
wear-out failures, and those that fail between these. A hypothetical composite distribution
of time to failure is shown in Fig. 17-16. This is a mixed distribution, and

p(0) + ∫₀^∞ g(t) dt = 1.    (17-25)


Since for many components (or systems) the initial failures or time zero failures are
removed during testing, the random variable T is conditioned on the event that T > 0, so that
the failure density is

f(t) = g(t)/[1 − p(0)],   t > 0,
     = 0,   otherwise.    (17-26)

Thus, in terms of f, the reliability function, R, is

R(t) = 1 − F(t) = ∫ₜ^∞ f(x) dx.    (17-27)


The term interval failure rate denotes the rate of failure on a particular interval of time
[t₁, t₂], and the terms failure rate, instantaneous failure rate, and hazard will be used syn-
onymously as a limiting form of the interval failure rate as t₂ → t₁. The interval failure rate
FR(t₁, t₂) is as follows:

FR(t₁, t₂) = [(R(t₁) − R(t₂))/R(t₁)] · [1/(t₂ − t₁)].    (17-28)

Figure 17-16 A composite failure distribution, with a discrete probability p(0) of failure at t = 0 and a continuous density g(t) for t > 0.

The first bracketed term is simply

P{failure during [t₁, t₂] | survival to time t₁}.    (17-29)

The second term carries the dimensional characteristic, so that we may express the condi-
tional probability of equation 17-29 on a per-unit-time basis.
We will develop the instantaneous failure rate (as a function of t). Let h(t) be the haz-
ard function. Then

h(t) = lim_{Δt→0} {[R(t) − R(t + Δt)]/Δt} · [1/R(t)]
     = −R′(t)/R(t),

so that

h(t) = f(t)/R(t),    (17-30)

since R(t) = 1 − F(t) and −R′(t) = f(t). A typical hazard function is shown in Fig. 17-17. Note
that h(t) dt might be thought of as the instantaneous probability of failure at t, given sur-
vival to t.
A useful result is that the reliability function R may be easily expressed in terms of h as

R(t) = exp[−∫₀ᵗ h(x) dx] = e^(−H(t)),    (17-31)

where

H(t) = ∫₀ᵗ h(x) dx.

Equation 17-31 results from the definition given in equation 17-30

Figure 17-17 A typical hazard function: early failures and random failures initially, random failures only during midlife, and wearout failures plus random failures late in life.



and the integration of both sides:

$\int_0^t h(x)\,dx = \int_0^t \dfrac{f(x)}{R(x)}\,dx = \int_0^t \dfrac{-R'(x)}{R(x)}\,dx = -\ln R(x)\Big|_0^t,$

so that

$\int_0^t h(x)\,dx = -\ln R(t) + \ln R(0).$

Since F(0) = 0, we see that ln R(0) = 0 and

$e^{-\int_0^t h(x)\,dx} = e^{\ln R(t)} = R(t).$
The mean time to failure (MTTF) is

$E[T] = \int_0^\infty t f(t)\,dt.$

A useful alternative form is

$E[T] = \int_0^\infty R(t)\,dt.$  (17-32)


Most complex system modeling assumes that only random component failures need be
considered. This is equivalent to stating that the time-to-failure distribution is exponential,
that is,

$f(t) = \lambda e^{-\lambda t}, \quad t \ge 0,$
$\;\;= 0,$ otherwise,

so that

$h(t) = \dfrac{f(t)}{R(t)} = \dfrac{\lambda e^{-\lambda t}}{e^{-\lambda t}} = \lambda$

is a constant. When all early-age failures have been removed by burn-in, and the time to
occurrence of wearout failures is very great (as with electronic parts), this assumption
is reasonable.
The normal distribution is most generally used to model wearout failure or stress fail-
ure (where the random variable under study is stress level). In situations where most failures
are due to wear, the normal distribution may very well be appropriate.
The lognormal distribution has been found to be applicable in describing time to fail-
ure for some types of components, and the literature seems to indicate an increased
utilization of this density for this purpose.
The Weibull distribution has been extensively used to represent time to failure, and its
nature is such that it may be made to approximate closely the observed phenomena. When
a system is composed of a number of components and failure is due to the most serious of
a large number of defects or possible defects, the Weibull distribution seems to do particu-
larly well as a model.
The gamma distribution frequently results from modeling standby redundancy where
components have an exponential time-to-failure distribution. We will investigate standby
redundancy in Section 17-4.5.

17-4.2 The Exponential Time-to-Failure Model


In this section we assume that the time-to-failure distribution is exponential; that is, only
"random failures" are considered. The density, reliability function, and hazard function are
given in equations 17-33 through 17-35 and are shown in Fig. 17-18:

$f(t) = \lambda e^{-\lambda t}, \quad t \ge 0,$
$\;\;= 0,$ otherwise,  (17-33)

$R(t) = P[T > t] = e^{-\lambda t}, \quad t \ge 0,$
$\;\;= 0,$ otherwise,  (17-34)

$h(t) = \dfrac{f(t)}{R(t)} = \lambda, \quad t \ge 0,$
$\;\;= 0,$ otherwise.  (17-35)

The constant hazard function is interpreted to mean that the failure process has no
memory; that is,

$P\{t < T \le t+\Delta t \mid T > t\} = \dfrac{e^{-\lambda t} - e^{-\lambda(t+\Delta t)}}{e^{-\lambda t}} = 1 - e^{-\lambda\,\Delta t},$  (17-36)

Figure 17-18 Density, reliability function, and hazard function for the exponential failure model: (a) density function, (b) reliability function, (c) hazard function.

a quantity that is independent of t. Thus if a component is functioning at time t, it is as good
as new; that is, the remaining life has the same density as f.

Example 17-9
A diode used on a printed circuit board has a rated failure rate of 2.3 × 10⁻⁸ failures per hour. However,
under an increased temperature stress, it is felt that the rate is about 1.5 × 10⁻⁵ failures per hour.
The time to failure is exponentially distributed, so that we have

$f(t) = (1.5\times10^{-5})\,e^{-(1.5\times10^{-5})t}, \quad t \ge 0,$
$\;\;= 0,$ otherwise,

$R(t) = e^{-(1.5\times10^{-5})t}, \quad t \ge 0,$
$\;\;= 0,$ otherwise,

and

$h(t) = 1.5\times10^{-5}, \quad t \ge 0,$
$\;\;= 0,$ otherwise.

To determine the reliability at t = 10⁴ and t = 10⁵, we evaluate $R(10^4) = e^{-0.15} \approx 0.86$ and
$R(10^5) = e^{-1.5} \approx 0.22$.
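These calculations are one-liners once the failure rate is fixed. The following minimal Python sketch (the function name is ours, not the text's) reproduces the numbers above using the stressed failure rate from the example:

```python
import math

def exp_reliability(lam, t):
    """R(t) = exp(-lambda*t) for the exponential failure model (eq. 17-34)."""
    return math.exp(-lam * t)

lam = 1.5e-5                                      # failures per hour
for t in (1.0e4, 1.0e5):
    print(t, round(exp_reliability(lam, t), 3))   # 0.861 and 0.223
print("MTTF =", 1 / lam, "hours")                 # E[T] = 1/lambda
```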

17-4.3 Simple Serial Systems


A simple serial system is shown in Fig. 17-19. In order for the system to function, all components
must function, and it is assumed that the components function independently. We
let $T_j$ be the time to failure for component $c_j$, for j = 1, 2, ..., n, and let T be the system time to
failure. The reliability model is thus

$R(t) = P[T > t] = P(T_1 > t)\cdot P(T_2 > t)\cdots P(T_n > t),$

or

$R(t) = R_1(t)\cdot R_2(t)\cdots R_n(t),$  (17-37)

where

$P[T_j > t] = R_j(t).$

Example 17-10
Three components must all function for a simple system to function. The random variables $T_1$, $T_2$, and
$T_3$, representing the times to failure for the components, are independent with the following distributions:

$T_1 \sim N(2\times10^3,\ 4\times10^4),$
$T_2 \sim \text{Weibull}\,(\gamma=0,\ \delta=81,\ \beta=\tfrac{1}{3}),$
$T_3 \sim \text{lognormal}\,(\mu=10,\ \sigma^2=4).$

Figure 17-19 A simple serial system.

It follows that

$R_1(t) = 1 - \Phi\!\left(\dfrac{t-2\times10^3}{200}\right),$
$R_2(t) = e^{-(t/81)^{1/3}},$
$R_3(t) = 1 - \Phi\!\left(\dfrac{\ln t - 10}{2}\right),$

so that

$R(t) = R_1(t)\,R_2(t)\,R_3(t).$

For example, if t = 2187 hours, then

$R(2187) = [1-\Phi(0.935)][e^{-3}][1-\Phi(-1.154)]$
$= [0.175][0.0498][0.876]$
$= 0.0076.$
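A direct way to check this calculation is to code each component reliability and multiply, per equation 17-37. The sketch below assumes SciPy is available for the standard normal CDF Φ; the parameter values are those of the example:

```python
import math
from scipy.stats import norm

def R1(t):   # T1 ~ N(2e3, 4e4), so sigma = 200
    return 1.0 - norm.cdf((t - 2.0e3) / 200.0)

def R2(t):   # T2 ~ Weibull(gamma = 0, delta = 81, beta = 1/3)
    return math.exp(-((t / 81.0) ** (1.0 / 3.0)))

def R3(t):   # T3 ~ lognormal(mu = 10, sigma^2 = 4), so sigma = 2
    return 1.0 - norm.cdf((math.log(t) - 10.0) / 2.0)

t = 2187.0
print(R1(t) * R2(t) * R3(t))   # approximately 0.0076, per equation 17-37
```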

For the simple serial system, system reliability may be calculated as the product of
the component reliability functions, as demonstrated; however, when all components have
an exponential distribution, the calculations are greatly simplified, since

$R(t) = e^{-\lambda_1 t}\cdot e^{-\lambda_2 t}\cdots e^{-\lambda_n t} = e^{-t\sum_{j=1}^{n}\lambda_j},$

or

$R(t) = e^{-\lambda_s t},$  (17-38)

where $\lambda_s = \sum_{j=1}^{n}\lambda_j$ represents the system failure rate. We also note that the system reliability
function is of the same form as the component reliability functions. The system failure rate
is simply the sum of the component failure rates, and this makes application very easy.

Example 17-11
Consider an electronic circuit with three integrated circuit devices, 12 silicon diodes, 8 ceramic capacitors,
and 15 composition resistors. Suppose that, under given stress levels of temperature, shock, and so
on, each component type has the failure rate shown in the following table, and that the component failures
are independent.

Component            Failures per Hour
Integrated circuits  1.3 × 10⁻⁹
Diodes               1.7 × 10⁻⁷
Capacitors           1.2 × 10⁻⁷
Resistors            6.1 × 10⁻⁸

Therefore,

$\lambda_s = 3(0.013\times10^{-7}) + 12(1.7\times10^{-7}) + 8(1.2\times10^{-7}) + 15(0.61\times10^{-7})$
$= 3.9189\times10^{-6},$

and

$R(t) = e^{-(3.9189\times10^{-6})t}.$

The circuit mean time to failure is

$E[T] = \dfrac{1}{\lambda_s} = \dfrac{10^6}{3.9189} = 2.55\times10^5 \text{ hours}.$

If we wish to determine, say, R(10⁴), we get $R(10^4) = e^{-0.039189} \approx 0.96$.
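Since the system failure rate in equation 17-38 is just the sum of the component rates, this example reduces to a weighted sum; a minimal sketch:

```python
import math

counts = [3, 12, 8, 15]                      # ICs, diodes, capacitors, resistors
rates  = [1.3e-9, 1.7e-7, 1.2e-7, 6.1e-8]    # failures per hour, from the table
lam_s = sum(n * r for n, r in zip(counts, rates))

print(lam_s)                     # about 3.9189e-06 per hour
print(1.0 / lam_s)               # MTTF, about 2.55e5 hours
print(math.exp(-lam_s * 1.0e4))  # R(10^4), about 0.96
```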

17-4.4 Simple Active Redundancy


An active redundant configuration is shown in Fig. 17-20. The assembly functions if k or
more of the components function (k ≤ n). All components begin operation at time zero; thus
the term "active" is used to describe the redundancy. Again, independence is assumed.
A general formulation is not convenient to work with, and in most cases it is unnecessary.
When all components have the same reliability function, as is the case when the components
are of the same type, we let $R_j(t) = r(t)$ for j = 1, 2, ..., n, so that

$R(t) = \sum_{x=k}^{n}\binom{n}{x}[r(t)]^x[1-r(t)]^{n-x}$  (17-39)

$= 1 - \sum_{x=0}^{k-1}\binom{n}{x}[r(t)]^x[1-r(t)]^{n-x}.$

Equation 17-39 follows directly from the definition of reliability.

Example 17-12
Three identical components are arranged in active redundancy, operating independently. In order for
the assembly to function, at least two of the components must function (k = 2). The reliability function
for the system is thus

$R(t) = \sum_{x=2}^{3}\binom{3}{x}[r(t)]^x[1-r(t)]^{3-x}$

$= 3[r(t)]^2[1-r(t)] + [r(t)]^3$

$= [r(t)]^2[3-2r(t)].$

It is noted that R is a function of time, t.

Figure 17-20 An active redundant configuration.



When only one of the n components is required, as is often the case, and the components
are not identical, we obtain

$R(t) = 1 - \prod_{j=1}^{n}\left[1-R_j(t)\right].$  (17-40)

The product is the probability that all components fail and, obviously, if they do not all fail, the
system survives. When the components are identical and only one is required, equation
17-40 reduces to

$R(t) = 1 - [1-r(t)]^n,$  (17-41)

where $r(t) = R_j(t)$, j = 1, 2, ..., n.
When the components have exponential failure laws, we will consider two cases. First,
when the components are identical with failure rate λ and at least k components are required
for the assembly to operate, equation 17-39 becomes

$R(t) = \sum_{x=k}^{n}\binom{n}{x}e^{-\lambda x t}\left(1-e^{-\lambda t}\right)^{n-x}.$  (17-42)

The second case is considered for the situation where the components have identical exponential
failure densities and where only one component must function for the assembly to
function. Using equation 17-41, we get

$R(t) = 1 - \left(1-e^{-\lambda t}\right)^n.$  (17-43)

Example 17-13
In Example 17-12, where three identical components were arranged in an active redundancy, and at
least two were required for system operation, we found

$R(t) = [r(t)]^2[3-2r(t)].$

If the component reliability functions are

$r_j(t) = e^{-\lambda t},$

then

$R(t) = e^{-2\lambda t}\left[3-2e^{-\lambda t}\right] = 3e^{-2\lambda t} - 2e^{-3\lambda t}.$

If two components are arranged in an active redundancy as described, and only one must function for
the assembly to function, and, furthermore, if the time-to-failure densities are exponential with failure
rate λ, then from equation 17-43 we obtain

$R(t) = 1 - \left[1-e^{-\lambda t}\right]^2 = 2e^{-\lambda t} - e^{-2\lambda t}.$
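Equation 17-39 is a binomial upper-tail sum in the component reliability r(t), so k-out-of-n reliabilities are easily scripted. The sketch below checks the 2-out-of-3 closed form derived above; the failure rate and mission time are illustrative values, not from the text:

```python
import math

def k_of_n_reliability(k, n, r):
    """P(at least k of n independent components survive); r is component reliability."""
    return sum(math.comb(n, x) * r**x * (1.0 - r)**(n - x) for x in range(k, n + 1))

lam, t = 1.0e-3, 500.0               # illustrative failure rate and mission time
r = math.exp(-lam * t)               # component reliability at time t
print(k_of_n_reliability(2, 3, r))                          # 2-out-of-3 assembly
print(3 * math.exp(-2*lam*t) - 2 * math.exp(-3*lam*t))      # closed form; same value
```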

17-4.5 Standby Redundancy


A common form of redundancy, called standby redundancy, is shown in Fig. 17-21. The
unit labeled DS is a decision switch that we will assume has reliability of 1 for all t. The
operating rules are as follows. Component 1 is initially “online,” and when this component
fails, the decision switch switches in component 2, which remains online until it fails.

Figure 17-21 Standby redundancy.

Standby units are not subject to failure until activated. The time to failure for the assembly is

$T = T_1 + T_2 + \cdots + T_n,$

where $T_i$ is the time to failure for the ith component and $T_1, T_2, \ldots, T_n$ are independent random
variables. The most common value for n in practice is two, so the Central Limit Theorem
is of little value. However, we know from the properties of linear combinations that

$E[T] = \sum_{i=1}^{n} E(T_i)$

and

$V[T] = \sum_{i=1}^{n} V(T_i).$
We must know the distributions of the random variables $T_i$ in order to find the distribution
of T. The most common case occurs when the components are identical and the time-to-failure
distributions are assumed to be exponential. In this case, T has a gamma distribution

$f(t) = \dfrac{\lambda(\lambda t)^{n-1}e^{-\lambda t}}{(n-1)!}, \quad t > 0,$
$\;\;= 0$ otherwise,

so that the reliability function is

$R(t) = \sum_{k=0}^{n-1}\dfrac{e^{-\lambda t}(\lambda t)^k}{k!}, \quad t \ge 0.$  (17-44)

The parameter λ is the component failure rate; that is, $E(T_i) = 1/\lambda$. The mean time to failure
and variance are

$\text{MTTF} = E[T] = n/\lambda$  (17-45)

and

$V[T] = n/\lambda^2,$  (17-46)

respectively.

Example 17-14
Two identical components are assembled in a standby redundant configuration with perfect switching.
The component lives are identically distributed, independent random variables having an exponential
distribution with failure rate 100⁻¹. The mean time to failure is

$\text{MTTF} = 2/100^{-1} = 200$

and the variance is

$V[T] = 2/\left(100^{-1}\right)^2 = 20{,}000.$

The reliability function R is

$R(t) = \sum_{k=0}^{1} e^{-t/100}\,(t/100)^k/k!,$

or

$R(t) = e^{-t/100}\left(1 + t/100\right).$

17-4.6 Life Testing


Life tests are conducted for different purposes. Sometimes, n units are placed on test and
aged until all or most units have failed; the purpose is to test a hypothesis about the form of
the time-to-failure density with certain parameters. Both formal statistical tests and probability
plotting are widely used in life testing.
A second objective in life testing is to estimate reliability. Suppose, for example, that
a manufacturer is interested in estimating R(1000) for a particular component or system.
One approach to this problem would be to place n units on test and count the number of failures,
r, occurring before 1000 hours of operation. Failed units are not to be replaced in this
example. An estimate of unreliability is $\hat{p} = r/n$, and an estimate of reliability is

$\hat{R}(1000) = 1 - \dfrac{r}{n}.$  (17-47)

A 100(1 − α)% lower-confidence limit on R(1000) is given by [1 − upper limit on p], where
p is the unreliability. This upper limit on p may be determined using a table of the binomial
distribution. In the case where n is large, an estimate of the upper limit on p is

$\hat{p} + Z_{1-\alpha}\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}}.$  (17-48)

Example 17-15
One hundred units are placed on life test, and the test is run for 1000 hours. There are two failures during
the test, so $\hat{p} = 0.02$ and $\hat{R}(1000) = 0.98$. Using a table of the binomial distribution, a 95% upper confidence
limit on p is 0.06, so that a lower limit on R(1000) is given by 0.94.
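Where the example reads the upper limit on p from a binomial table, an equivalent exact (Clopper-Pearson) limit can be computed from a beta quantile; this is our substitution for the table lookup, and SciPy is assumed:

```python
from scipy.stats import beta

n, r, alpha = 100, 2, 0.05
p_hat = r / n                                   # 0.02
print(1 - p_hat)                                # R_hat(1000) = 0.98
p_upper = beta.ppf(1 - alpha, r + 1, n - r)     # exact upper 95% limit on p, ~0.06
print(1 - p_upper)                              # lower limit on R(1000), ~0.94
```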

In recent years, there has been much work on the analysis of failure-time data, includ-
ing plotting methods for identification of appropriate failure-time models and parameter
estimation. For a good summary of this work, refer to Elsayed (1996).

17-4.7 Reliability Estimation with a Known Time-to-Failure Distribution


In the case where the form of the reliability function is assumed known and there is only
one parameter, the maximum likelihood estimator for R(t) is $\hat{R}(t)$, which is formed by substituting
$\hat{\theta}$ for the parameter θ in the expression for R(t), where $\hat{\theta}$ is the maximum likelihood
estimator of θ. For more details and results for specific time-to-failure distributions, refer
to Elsayed (1996).

17-4.8 Estimation with the Exponential Time-to-Failure Distribution


The most common case for the one-parameter situation is where the time-to-failure distribution
is exponential, $R(t) = e^{-t/\theta}$. The parameter θ = E[T] is called the mean time to failure,
and the estimator for R is $\hat{R}(t)$, where

$\hat{R}(t) = e^{-t/\hat{\theta}}$

and $\hat{\theta}$ is the maximum likelihood estimator of θ.
Epstein (1960) developed the maximum likelihood estimators for θ under a number of
different conditions and, furthermore, showed that a 100(1 − α)% confidence interval on
R(t) is given by

$\left[e^{-t/\theta_L},\ e^{-t/\theta_U}\right]$  (17-49)

for the two-sided case, or

$\left[e^{-t/\theta_L},\ 1\right]$  (17-50)

for the lower, one-sided interval. In these cases, the values $\theta_L$ and $\theta_U$ are the lower- and
upper-confidence limits on θ.
The following symbols will be used:

n = number of units placed on test at t = 0.
Q = total test time in unit hours.
t* = time at which the test is terminated.
r = number of failures accumulated at time t*.
r* = preassigned number of failures.
1 − α = confidence level.
$\chi^2_{\alpha,k}$ = the upper α percentage point of the chi-square distribution with k degrees of freedom.

There are four situations to consider, according to whether the test is stopped after a preassigned
time or after a preassigned number of failures, and whether failed items are replaced
or not replaced during the test.
For the replacement test, the total test time in unit hours is Q = nt*, and for the nonreplacement
test

$Q = \sum_{i=1}^{r} t_i + (n-r)t^*.$  (17-51)

If items are censored (withdrawn items that have not failed), and if failures are replaced
while censored items are not replaced, then

$Q = \sum_{j=1}^{c} t_j + (n-c)t^*,$  (17-52)

where c represents the number of censored items and $t_j$ is the time of the jth censorship. If
neither censored items nor failed items are replaced, then

$Q = \sum_{i=1}^{r} t_i + \sum_{j=1}^{c} t_j + (n-r-c)t^*.$  (17-53)
The development of the maximum likelihood estimators for θ is rather straightforward.
In the case where the test is without replacement and is discontinued after a fixed number
of items have failed, the likelihood function is

$L(\theta) = \prod_{i=1}^{r} f(t_i)\cdot\prod_{i=r+1}^{n} R(t^*) = \prod_{i=1}^{r}\frac{1}{\theta}e^{-t_i/\theta}\cdot\left[e^{-t^*/\theta}\right]^{n-r} = \theta^{-r}e^{-\left[\sum_{i=1}^{r}t_i + (n-r)t^*\right]/\theta}.$  (17-54)

Then

$\ell = \ln L = -r\ln\theta - \left[\sum_{i=1}^{r}t_i + (n-r)t^*\right]\Big/\theta,$

and solving $\partial\ell/\partial\theta = 0$ yields the estimator

$\hat{\theta} = \dfrac{\sum_{i=1}^{r}t_i + (n-r)t^*}{r}.$  (17-55)

It turns out that

$\hat{\theta} = Q/r$  (17-56)

is the maximum likelihood estimator of θ for all cases considered for the test design and
operation.
The quantity $2r\hat{\theta}/\theta$ has a chi-square distribution with 2r degrees of freedom in the case
where the test is terminated after a fixed number of failures. For a fixed termination time t*,
the degrees of freedom become 2r + 2.
Since the expression $2r\hat{\theta}/\theta = 2Q/\theta$, confidence limits on θ may be expressed as indicated
in Table 17-10. The results presented in the table may be used directly with equations
17-49 and 17-50 to establish confidence limits on R(t). It should be noted that this testing
procedure does not require that the test be run for the time at which a reliability estimate is
required. For example, 100 units may be placed on a nonreplacement test for 200 hours, the
parameter θ estimated, and R(1000) calculated. In the case of the binomial testing mentioned
earlier, it would have been necessary to run the test for 1000 hours.

Table 17-10 Confidence Limits on θ

Nature of Limit          Fixed Number of Failures, r*                                        Fixed Termination Time, t*
Two-sided limits         $\left[\dfrac{2Q}{\chi^2_{\alpha/2,2r}},\ \dfrac{2Q}{\chi^2_{1-\alpha/2,2r}}\right]$    $\left[\dfrac{2Q}{\chi^2_{\alpha/2,2r+2}},\ \dfrac{2Q}{\chi^2_{1-\alpha/2,2r+2}}\right]$
Lower, one-sided limit   $\dfrac{2Q}{\chi^2_{\alpha,2r}}$                                    $\dfrac{2Q}{\chi^2_{\alpha,2r+2}}$

The results are, however, dependent on the assumption that the distribution is exponential.
It is sometimes necessary to estimate the time $t_R$ for which the reliability will be R. For
the exponential model, this estimate is

$\hat{t}_R = \hat{\theta}\ln(1/R),$  (17-57)

and confidence limits on $t_R$ are given in Table 17-11.

Example 17-16
Twenty items are placed on a replacement test that is to be operated until 10 failures occur. The tenth
failure occurs at 80 hours, and the reliability engineer wishes to estimate the mean time to failure,
95% two-sided limits on θ, R(100), and 95% two-sided limits on R(100). Finally, she wishes to estimate
the time for which the reliability will be 0.8, with point and 95% two-sided confidence interval
estimates.
According to equation 17-56 and the results presented in Tables 17-10 and 17-11,

$Q = nt^* = 20(80) = 1600$ unit hours,

$\hat{\theta} = \dfrac{Q}{r} = \dfrac{1600}{10} = 160$ hours,

$\left[\dfrac{2Q}{\chi^2_{0.025,20}},\ \dfrac{2Q}{\chi^2_{0.975,20}}\right] = \left[\dfrac{3200}{34.17},\ \dfrac{3200}{9.591}\right] = [93.65,\ 333.65],$

$\hat{R}(100) = e^{-100/\hat{\theta}} = e^{-100/160} = 0.535.$

According to equation 17-49, the confidence interval on R(100) is

$\left[e^{-100/93.65},\ e^{-100/333.65}\right] = [0.344,\ 0.741].$

Also,

$\hat{t}_{0.8} = \hat{\theta}\ln\dfrac{1}{0.8} = 160(0.223) = 35.70$ hours.

The two-sided 95% confidence limits on $t_{0.8}$ are determined from Table 17-11 as

$\left[\dfrac{3200(0.223)}{34.17},\ \dfrac{3200(0.223)}{9.591}\right] = [20.9,\ 74.45].$
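Every quantity in this example follows from Q, r, and two chi-square percentage points. A sketch using SciPy (note that the upper α percentage point $\chi^2_{\alpha,k}$ of Table 17-10 corresponds to chi2.ppf(1 − α, k)):

```python
import math
from scipy.stats import chi2

n, r, t_star, alpha = 20, 10, 80.0, 0.05
Q = n * t_star                                      # replacement test: Q = n*t* = 1600
theta_hat = Q / r                                   # 160 hours

theta_L = 2 * Q / chi2.ppf(1 - alpha / 2, 2 * r)    # 3200/34.17 = 93.65
theta_U = 2 * Q / chi2.ppf(alpha / 2, 2 * r)        # 3200/9.591 = 333.65
print(theta_hat, theta_L, theta_U)

t = 100.0
print(math.exp(-t / theta_hat))                     # R_hat(100) = 0.535
print(math.exp(-t / theta_L), math.exp(-t / theta_U))         # (0.344, 0.741)

R = 0.8
print(theta_hat * math.log(1 / R))                  # t_hat_0.8 = 35.7 hours
print(theta_L * math.log(1 / R), theta_U * math.log(1 / R))   # (20.9, 74.45)
```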

Table 17-11 Confidence Limits on $t_R$

Nature of Limit          Fixed Number of Failures, r*                                                              Fixed Termination Time, t*
Two-sided limits         $\left[\dfrac{2Q\ln(1/R)}{\chi^2_{\alpha/2,2r}},\ \dfrac{2Q\ln(1/R)}{\chi^2_{1-\alpha/2,2r}}\right]$    $\left[\dfrac{2Q\ln(1/R)}{\chi^2_{\alpha/2,2r+2}},\ \dfrac{2Q\ln(1/R)}{\chi^2_{1-\alpha/2,2r+2}}\right]$
Lower, one-sided limit   $\dfrac{2Q\ln(1/R)}{\chi^2_{\alpha,2r}}$                                                  $\dfrac{2Q\ln(1/R)}{\chi^2_{\alpha,2r+2}}$

17-4.9 Demonstration and Acceptance Testing


It is not uncommon for a purchaser to test incoming products to assure that the vendor is
conforming to reliability specifications. These tests are destructive tests and, in the case of
attribute measurement, the test design follows that of acceptance sampling discussed ear-
lier in this chapter.
A special set of sampling plans that assumes an exponential time-to-failure distribution
has been presented in a Department of Defense handbook (DOD H-108), and these plans
are in wide use.

17-5 SUMMARY
This chapter has presented several widely used methods for statistical quality control. Con-
trol charts were introduced and their use as process surveillance devices discussed. The X
and R control charts are used for measurement data. When the quality characteristic is an
attribute, either the p chart for fraction defective or the c or u chart for defects may be used.
The use of probability as a modeling technique in reliability analysis was also dis-
cussed. The exponential distribution is widely used as the distribution of time to failure,
although other plausible models include the normal, lognormal, Weibull, and gamma dis-
tributions. System reliability analysis methods were presented for serial systems, as well as
for systems having active or standby redundancy. Life testing and reliability estimation
were also briefly introduced.

17-6 EXERCISES
17-1. An extrusion die is used to produce aluminum rods. The diameter of the rods is a critical quality characteristic. Below are shown X̄ and R values for 20 samples of five rods each. Specifications on the rods are 0.5035 ± 0.0010 inch. The values given are the last three digits of the measurements; that is, 34.2 is read as 0.50342.

Sample   X̄     R    Sample   X̄     R
1       34.2    5    11      35.4    8
2       31.6    4    12      34.0    6
3       31.8    4    13      36.0    4
4       33.4    5    14      37.2    7
5       35.0    4    15      35.2    3
6       32.1    2    16      33.4   10
7       32.6    7    17      35.0    4
8       33.8    9    18      34.4    7
9       34.8   10    19      33.9    8
10      38.6    4    20      34.0    4

(a) Set up the X̄ and R charts, revising the trial control limits if necessary, assuming assignable causes can be found.
(b) Calculate PCR and PCR_k. Interpret these ratios.
(c) What percentage of defectives is being produced by this process?

17-2. Suppose a process is in control, and 3-sigma control limits are in use on the X̄ chart. Let the mean shift by 1.5σ. What is the probability that this shift will remain undetected for three consecutive samples? What would this probability be if 2-sigma control limits were used? The sample size is 4.

17-3. Suppose that an X̄ chart is used to control a normally distributed process, and that samples of size n are taken every h hours and plotted on the chart, which has k-sigma limits.
(a) Find the expected number of samples that will be taken until a false action signal is generated. This is called the in-control average run length (ARL).
(b) Suppose that the process shifts to an out-of-control state. Find the expected number of samples that will be taken until an action signal is generated. This is the out-of-control ARL.
(c) Evaluate the in-control ARL for k = 3. How does this change if k = 2? What do you think about the use of 2-sigma limits in practice?
(d) Evaluate the out-of-control ARL for a shift of one sigma, given that n = 5.

17-4. Twenty-five samples of size 5 are drawn from a process at regular intervals, and the following data are obtained:

$\sum_{i=1}^{25}\bar{X}_i = 362.75, \qquad \sum_{i=1}^{25}R_i = 8.60.$

(a) Compute the control limits for the X̄ and R charts.
(b) Assuming the process is in control and specification limits are 14.50 ± 0.50, what conclusions can you draw about the ability of the process to operate within these limits? Estimate the percentage of defective items that will be produced.
(c) Calculate PCR and PCR_k. Interpret these ratios.

17-5. Suppose an X̄ chart for a process is in control with 3-sigma limits. Samples of size 5 are drawn every 15 minutes, on the quarter hour. Now suppose the process mean shifts out of control by 1.5σ 10 minutes after the hour. If D is the expected number of defectives produced per quarter hour in this out-of-control state, find the expected loss (in terms of defective units) that results from this control procedure.

17-6. The overall length of a cigar lighter body used in an automobile application is controlled using X̄ and R charts. The following table gives lengths for 20 samples of size 4 (measurements are coded from 5.00 mm; that is, 15 is 5.15 mm).

                Observation
Sample    1    2    3    4
1        15   10    8    9
2        14   14   10    6
3         9   10    9   11
4         8    6    9   13
5        14    8    9   12
6         9   10    7   13
7        15   10   12    2
8        14   16    7   10
9        11    1   16   10
10       11   14   11   12
11       13    8    9    5
12       10   15    9   11
13        8   12   14    5
14       15   12   14    6
15       13   16    9    5
16       14    8    8   12
17        8   10   16    9
18        8   14   10    9
19       13   15   10    8
20        9    7   15    8

(a) Set up the X̄ and R charts. Is the process in statistical control?
(b) Specifications are 5.10 ± 0.05 mm. What can you say about process capability?

17-7. Montgomery (2001) presents 30 observations of oxide thickness of individual silicon wafers. The data are

Wafer   Oxide Thickness   Wafer   Oxide Thickness
1       45.4              16      58.4
2       48.6              17      51.0
3       49.5              18      41.2
4       44.0              19      47.1
5       50.9              20      45.7
6       55.2              21      60.6
7       45.5              22      51.0
8       52.8              23      53.0
9       45.3              24      56.0
10      46.3              25      47.2
11      53.9              26      48.0
12      49.8              27      55.9
13      46.9              28      50.0
14      49.8              29      47.9
15      45.1              30      53.4

(a) Construct a normal probability plot of the data. Does the normality assumption seem reasonable?
(b) Set up an individuals control chart for oxide thickness. Interpret the chart.

17-8. A machine is used to fill bottles with a particular brand of vegetable oil. A single bottle is randomly selected every half hour and the weight of the bottle recorded. Experience with the process indicates that the variability is quite stable, with σ = 0.07 oz. The process target is 32 oz. Twenty-four samples have been recorded in a 12-hour time period, with the results given below.

Sample   Weight (oz)   Sample   Weight (oz)
1        32.03         13       31.97
2        31.98         14       32.91
3        32.02         15       31.93
4        31.85         16       32.09
5        31.91         17       31.96
6        32.09         18       31.88
7        31.98         19       31.82
8        32.03         20       31.92
9        —             21       —
10       —             22       —
11       32.01         23       31.97
12       32.12         24       31.94

(a) Construct a normal probability plot of the data. Does the normality assumption appear to be satisfied?
(b) Set up an individuals control chart for the weights. Interpret the results.

17-9. The following are the numbers of defective solder joints found during successive samples of 500 solder joints.

Day   No. of Defectives   Day   No. of Defectives
1     106                 11    42
2     116                 12    37
3     164                 13    25
4      89                 14    88
5      99                 15    101
6      40                 16    64
7     112                 17    51
8      36                 18    74
9      69                 19    71
10     74                 20    43
                          21    80

Construct a fraction-defective control chart. Is the process in control?

17-10. A process is controlled by a p chart using samples of size 100. The centerline on the chart is 0.05. What is the probability that the control chart detects a shift to 0.08 on the first sample following the shift? What is the probability that the shift is detected by at least the third sample following the shift?

17-11. Suppose a p chart with centerline at p̄ with k-sigma limits is used to control a process. There is a critical fraction defective p₁ that must be detected with probability 0.50 on the first sample following the shift to this state. Derive a general formula for the sample size that should be used on this chart.

17-12. A normally distributed process uses 66.7% of the specification band. It is centered at the nominal dimension, located halfway between the upper and lower specification limits.
(a) What is the process capability ratio PCR?
(b) What fallout level (fraction defective) is produced?
(c) Suppose the mean shifts to a distance exactly 3 standard deviations below the upper specification limit. What is the value of PCR_k? How has PCR changed?
(d) What is the actual fallout experienced after the shift in the mean?

17-13. Consider a process where specifications on a quality characteristic are 100 ± 15. We know that the standard deviation of this quality characteristic is 5. Where should we center the process to minimize the fraction defective produced? Now suppose the mean shifts to 105 and we are using a sample size of 4 on an X̄ chart. What is the probability that such a shift will be detected on the first sample following the shift? What sample size would be needed on a p chart to obtain a similar degree of protection?

17-14. Suppose the following fractions defective had been found in successive samples of size 100 (read down):

0.09   0.03   0.12
0.10   0.05   0.14
0.13   0.13   0.06
0.08   0.10   0.05
0.14   0.14   0.14
0.09   0.07   0.11
0.10   0.06   0.09
0.15   0.09   0.13
0.13   0.08   0.12
0.06   0.11   0.09

Is the process in control with respect to its fraction defective?

17-15. The following represent the number of solder defects observed on 24 samples of five printed circuit boards: 7, 6, 8, 10, 24, 6, 5, 4, 8, 11, 15, 8, 4, 16, 11, 12, 8, 6, 5, 9, 7, 14, 8, 21. Can we conclude that the process is in control using a c chart? If not, assume assignable causes can be found and revise the control limits.

17-16. The following represent the number of defects per 1000 feet in rubber-covered wire: 1, 1, 3, 7, 8, 10, ..., 18, 10, 6, 4, 0, 9, 7, 3, 1, 8, 12. Do the data come from a controlled process?

17-17. Suppose the number of defects in a unit is known to be 8. If the number of defects in a unit shifts to 16, what is the probability that it will be detected by the c chart on the first sample following the shift?

17-18. Suppose we are inspecting disk drives for defects per unit, and it is known that there is an average of two defects per unit. We decide to make our inspection unit for the c chart five disk drives, and we control the total number of defects per inspection unit. Describe the new control chart.

17-19. Consider the data in Exercise 17-15. Set up a u chart for this process. Compare it to the c chart in Exercise 17-15.

17-20. Consider the oxide thickness data given in Exercise 17-7. Set up an EWMA control chart with λ = 0.20 and L = 2.962. Interpret the chart.

17-21. Consider the oxide thickness data given in Exercise 17-7. Construct a CUSUM control chart with k = 0.75 and h = 3.34 if the target thickness is 50. Interpret the chart.

17-22. Consider the weights provided in Exercise 17-8. Set up an EWMA control chart with λ = 0.10 and L = 2.7. Interpret the chart.

17-23. Consider the weights provided in Exercise 17-8. Set up a CUSUM control chart with k = 0.50 and h = 4.0. Interpret the chart.

17-24. A time-to-failure distribution is given by a uniform distribution:

$f(t) = \dfrac{1}{\beta-\alpha}, \quad \alpha \le t \le \beta,$
$\;\;= 0$ otherwise.

(a) Determine the reliability function.
(b) Show that

$\int_0^\infty t f(t)\,dt = \int_0^\infty R(t)\,dt.$

(c) Determine the hazard function.
(d) Show that

$R(t) = e^{-H(t)},$

where H is defined as in equation 17-31.

17-25. Three units that operate and fail independently form a series configuration, as shown in the accompanying figure. The time-to-failure distribution for each unit is exponential, with the failure rates indicated.
(a) Find R(60) for the system.
(b) What is the mean time to failure (MTTF) for this system?

17-26. Five identical units are arranged in an active redundancy to form a subsystem. Unit failure is independent, and at least two of the units must survive 1000 hours for the subsystem to perform its mission.
(a) If the units have exponential time-to-failure distributions with failure rate 0.002, what is the subsystem reliability?
(b) What is the reliability if only one unit is required?

17-27. If the units described in the previous exercise are operated in a standby redundancy with a perfect decision switch, and only one unit is required for subsystem survival, determine the subsystem reliability.

17-28. One hundred units are placed on test and aged until all units have failed. The following results are obtained, and a mean life of t̄ = 160 hours is calculated from the sample data.

Time Interval     Number of Failures
0-100             50
100-200           18
200-300           17
300-400            8
400-500            4
After 500 hours    3

Use the chi-square goodness-of-fit test to determine whether you consider the exponential distribution to represent a reasonable time-to-failure model for these data.

17-29. Fifty units are placed on a life test for 1000 hours. Eight units fail during the period. Estimate R(1000) for these units. Determine a lower 95% confidence interval on R(1000).

17-30. In Section 17-4.7 it was noted that for one-parameter reliability functions R(t; θ), $\hat{R}(t;\theta) = R(t;\hat{\theta})$, where $\hat{\theta}$ and $\hat{R}$ are the maximum likelihood estimators. Prove this statement for the case

$R(t;\theta) = e^{-t/\theta}, \quad t > 0,$
$\;\;= 0$ otherwise.

Hint: Express the density function f in terms of R.

17-31. For a nonreplacement test that is terminated after 200 hours of operation, it is noted that failures occur at the following times: 9, 21, 40, 55, and 85 hours. The units are assumed to have an exponential time-to-failure distribution, and 100 units were on test initially.
(a) Estimate the mean time to failure.
(b) Construct a 95% lower-confidence limit on the mean time to failure.

17-32. Use the data in Exercise 17-31.
(a) Estimate R(300) and construct a 95% lower-confidence limit on R(300).
(b) Estimate the time for which the reliability will be 0.9, and construct a 95% lower limit on $t_{0.9}$.

Figure for Exercise 17-25: three units in series with failure rates λ₁ = 3 × 10⁻³, λ₂ = 6 × 10⁻⁴, and λ₃ = 4 × 10⁻⁵ per hour.
Chapter 18

Stochastic Processes and Queueing

18-1 INTRODUCTION
The term stochastic process is frequently used in connection with observations from a
time-oriented, physical process that is controlled by a random mechanism. More precisely,
a stochastic process is a sequence of random variables $\{X_t\}$, where t ∈ T is a time or
sequence index. The range space for $X_t$ may be discrete or continuous; however, in this
chapter we will consider only the case where at a particular time t the process is in exactly
one of m + 1 mutually exclusive and exhaustive states. The states are labeled 0, 1, 2, ..., m.
The variables $X_1, X_2, \ldots$ might represent the number of customers awaiting service at
a ticket booth at times 1 minute, 2 minutes, and so on, after the booth opens. Another
example would be daily demands for a certain product on successive days. $X_0$ represents the
initial state of the process.
The chapter will introduce a special type of stochastic process called a Markov process.
We will also discuss the Chapman-Kolmogorov equations, various special properties of
Markov chains, the birth-death equations, and some applications to waiting-line, or queueing,
and interference problems.
In the study of stochastic processes, certain assumptions are required about the joint
probability distribution of the random variables $X_0, X_1, \ldots$. In the case of Bernoulli trials,
presented in Chapter 5, recall that these variables were defined to be independent and that
the range space (state space) consisted of two values (0, 1). Here we will first consider discrete-time
Markov chains, the case where time is discrete and the independence assumption
is relaxed to allow for a one-stage dependence.

18-2 DISCRETE-TIME MARKOV CHAINS


A stochastic process exhibits the Markovian property if

$P\{X_{t+1}=j \mid X_t=i\} = P\{X_{t+1}=j \mid X_t=i,\ X_{t-1}=i_{t-1},\ \ldots,\ X_1=i_1,\ X_0=i_0\}$  (18-1)

for t = 0, 1, 2, ..., and every sequence of states j, i, $i_{t-1}, \ldots, i_1, i_0$. This is equivalent to stating that the probability
of an event at time t + 1 given only the outcome at time t is equal to the probability
of the event at time t + 1 given the entire state history of the system. In other words, the
probability of the event at t + 1 is not dependent on the state history prior to time t.
The conditional probabilities

$P\{X_{t+1}=j \mid X_t=i\} = p_{ij}$  (18-2)

are called one-step transition probabilities, and they are said to be stationary if

$P\{X_{t+1}=j \mid X_t=i\} = P\{X_1=j \mid X_0=i\}, \quad t=0,1,2,\ldots,$  (18-3)

so that the transition probabilities remain unchanged through time. These values may be
displayed in a matrix $P = [p_{ij}]$, called the one-step transition matrix. The matrix P has m +
1 rows and m + 1 columns, and

$0 \le p_{ij} \le 1,$

while

$\sum_{j=0}^{m} p_{ij} = 1, \quad i=0,1,2,\ldots,m.$

That is, each element of the P matrix is a probability, and each row of the matrix sums to one.
The existence of the one-step, stationary transition probabilities implies that

$p_{ij}^{(n)} = P\{X_{t+n}=j \mid X_t=i\} = P\{X_n=j \mid X_0=i\}$  (18-4)

for all t = 0, 1, 2, .... The values $p_{ij}^{(n)}$ are called n-step transition probabilities, and they may
be displayed in an n-step transition matrix

$P^{(n)} = \left[p_{ij}^{(n)}\right],$

where

$0 \le p_{ij}^{(n)} \le 1, \quad i=0,1,2,\ldots,m, \quad j=0,1,2,\ldots,m, \quad n=0,1,2,\ldots,$

and

$\sum_{j=0}^{m} p_{ij}^{(n)} = 1, \quad i=0,1,2,\ldots,m.$
The 0-step transition matrix is the identity matrix.


A finite-state Markov chain is defined as a stochastic process having a finite number of
states, the Markovian property, stationary transition probabilities, and an initial set of probabilities
$A^{(0)} = \left[a_0^{(0)},\ a_1^{(0)},\ \ldots,\ a_m^{(0)}\right]$, where $a_i^{(0)} = P\{X_0=i\}$.
The Chapman-Kolmogorov equations are useful in computing n-step transition probabilities.
These equations are

$p_{ij}^{(n)} = \sum_{l=0}^{m} p_{il}^{(v)}\,p_{lj}^{(n-v)}, \quad i=0,1,2,\ldots,m, \quad j=0,1,2,\ldots,m, \quad 0<v<n,$  (18-5)

and they indicate that in passing from state i to state j in n steps the process will be in some
state, say l, after exactly v steps (v < n). Therefore $p_{il}^{(v)}\,p_{lj}^{(n-v)}$ is the conditional probability
that, given state i as the starting state, the process goes to state l in v steps and from l to j in
(n − v) steps. When summed over l, the sum of the products yields $p_{ij}^{(n)}$.
By setting v = 1 or v = n − 1, we obtain

$p_{ij}^{(n)} = \sum_{l=0}^{m} p_{il}\,p_{lj}^{(n-1)} = \sum_{l=0}^{m} p_{il}^{(n-1)}\,p_{lj}, \quad i=0,1,2,\ldots,m, \quad j=0,1,2,\ldots,m.$

It follows that the n-step transition probabilities, $P^{(n)}$, may be obtained from the one-step
probabilities, and

$P^{(n)} = P^n.$  (18-6)



The unconditional probability of being in state j at time t = n is

$a_j^{(n)} = P\{X_n=j\} = \sum_{i=0}^{m} a_i^{(0)}\,p_{ij}^{(n)}, \quad j=0,1,2,\ldots,m.$  (18-7)

Thus, $A^{(n)} = A^{(0)}\cdot P^n$. Further, we note that the rule for matrix multiplication expresses the total
probability law of Theorem 1-8, so that $A^{(n)} = A^{(n-1)}\cdot P$.

Example 18-1
In a computing system, the probability of an error on each cycle depends on whether or not it was preceded
by an error. We will define 0 as the error state and 1 as the nonerror state. Suppose the probability
of an error if preceded by an error is 0.75, the probability of an error if preceded by a nonerror
is 0.50, the probability of a nonerror if preceded by an error is 0.25, and the probability of a nonerror
if preceded by a nonerror is 0.50. Thus,

$P = \begin{bmatrix} 0.75 & 0.25 \\ 0.50 & 0.50 \end{bmatrix}.$

The two-step through seven-step transition matrices are shown below:

$P^2 = \begin{bmatrix} 0.688 & 0.312 \\ 0.625 & 0.375 \end{bmatrix}, \quad P^3 = \begin{bmatrix} 0.672 & 0.328 \\ 0.656 & 0.344 \end{bmatrix},$

$P^4 = \begin{bmatrix} 0.668 & 0.332 \\ 0.664 & 0.336 \end{bmatrix}, \quad P^5 = \begin{bmatrix} 0.667 & 0.333 \\ 0.666 & 0.334 \end{bmatrix},$

$P^6 = \begin{bmatrix} 0.667 & 0.333 \\ 0.667 & 0.333 \end{bmatrix}, \quad P^7 = \begin{bmatrix} 0.667 & 0.333 \\ 0.667 & 0.333 \end{bmatrix}.$

If we know that initially the system is in the nonerror state, then $a_1^{(0)} = 1$, $a_0^{(0)} = 0$, and $A^{(n)} = \left[a_j^{(n)}\right] =
A^{(0)}\cdot P^n$. Thus, for example, $A^{(7)} = [0.667,\ 0.333]$.
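The matrix powers in Example 18-1 are conveniently generated with NumPy; watching the rows of Pⁿ converge toward agreement is a quick numerical check that the chain is approaching its limiting distribution. A minimal sketch:

```python
import numpy as np

P = np.array([[0.75, 0.25],
              [0.50, 0.50]])
A0 = np.array([0.0, 1.0])        # initially in the nonerror state

Pn = np.eye(2)
for n in range(1, 8):
    Pn = Pn @ P                  # P^n
    print(n, np.round(Pn, 3), np.round(A0 @ Pn, 3))
```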

18-3 CLASSIFICATION OF STATES AND CHAINS


We will first consider the notion of first passage times. The length of time (number of steps
in discrete-time systems) for the process to go from state i to state j for the first time is
called the first passage time. If i = j, then this is the number of steps needed for the process
to return to state i for the first time, and this is termed the first return time or recurrence time
for state i.
First passage times under certain conditions are random variables with an associated
probability distribution. We let $f_{ij}^{(n)}$ denote the probability that the first passage time from
state i to j is equal to n, where it can be shown directly from Theorem 1-5 that

$f_{ij}^{(1)} = p_{ij}^{(1)} = p_{ij},$

$f_{ij}^{(2)} = p_{ij}^{(2)} - f_{ij}^{(1)}\,p_{jj},$

$\;\;\vdots$

$f_{ij}^{(n)} = p_{ij}^{(n)} - f_{ij}^{(1)}\,p_{jj}^{(n-1)} - f_{ij}^{(2)}\,p_{jj}^{(n-2)} - \cdots - f_{ij}^{(n-1)}\,p_{jj}.$  (18-8)

Thus, recursive computation from the one-step transition probabilities yields the probabil-
ity that the first passage time is n for given i,j.

Example 18-2
Using the one-step transition probabilities presented in Example 18-1, the distribution of the first passage
time index n for i = 0, j = 1 is determined as

$f_{01}^{(1)} = p_{01} = 0.250,$

$f_{01}^{(2)} = p_{01}^{(2)} - f_{01}^{(1)}\,p_{11} = (0.312) - (0.25)(0.5) = 0.187,$

$f_{01}^{(3)} = (0.328) - (0.25)(0.375) - (0.187)(0.5) = 0.141,$

$f_{01}^{(4)} = (0.332) - (0.25)(0.344) - (0.187)(0.375) - (0.141)(0.5) = 0.105.$

There are four such distributions, corresponding to the i, j values (0, 0), (0, 1), (1, 0), and (1, 1).
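The recursion in equation 18-8 is straightforward to program once the n-step matrices are available. The sketch below reproduces the $f_{01}^{(n)}$ values of Example 18-2 (tiny differences from the text come from rounding in the displayed matrices):

```python
import numpy as np

P = np.array([[0.75, 0.25],
              [0.50, 0.50]])

def first_passage(P, i, j, N):
    """f_ij^(n), n = 1..N, via the recursion of equation 18-8."""
    powers, Pn = [], np.eye(len(P))
    for _ in range(N):
        Pn = Pn @ P
        powers.append(Pn.copy())          # powers[k] = P^(k+1)
    f = []
    for n in range(1, N + 1):
        val = powers[n - 1][i, j] - sum(
            f[m - 1] * powers[n - m - 1][j, j] for m in range(1, n))
        f.append(val)
    return f

print(np.round(first_passage(P, 0, 1, 4), 3))   # [0.25, 0.188, 0.141, 0.105]
```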

If i and j are fixed, then $\sum_{n=1}^{\infty} f_{ij}^{(n)} \le 1$. When the sum is equal to one, the values $f_{ij}^{(n)}$, for
n = 1, 2, ..., represent the probability distribution of the first passage time for the specific i, j. In
the case where a process in state i may never reach state j, $\sum_{n=1}^{\infty} f_{ij}^{(n)} < 1$.
Where i = j and $\sum_{n=1}^{\infty} f_{ii}^{(n)} = 1$, the state i is termed a recurrent state, since given that the
process is in state i it will always eventually return to i.
If $p_{ii} = 1$ for some state i, then that state is called an absorbing state, and the process
will never leave it after it is entered.
The state i is called a transient state if

$\sum_{n=1}^{\infty} f_{ii}^{(n)} < 1,$

since there is a positive probability that, given the process is in state i, it will never return to
this state. It is not always easy to classify a state as transient or recurrent, since it is sometimes
difficult to calculate the first passage time probabilities $f_{ij}^{(n)}$ for all n, as was the case in
Example 18-2. Nevertheless, the expected first passage time is

$\mu_{ij} = \begin{cases} \infty & \text{if } \sum_{n=1}^{\infty} f_{ij}^{(n)} < 1, \\[4pt] \sum_{n=1}^{\infty} n\,f_{ij}^{(n)} & \text{if } \sum_{n=1}^{\infty} f_{ij}^{(n)} = 1, \end{cases}$  (18-9)

and if $\sum_{n=1}^{\infty} f_{ij}^{(n)} = 1$, a simple conditioning argument shows that

$\mu_{ij} = 1 + \sum_{l \ne j} p_{il}\,\mu_{lj}.$  (18-10)

If we take i = j, the expected first passage time is called the expected recurrence time. If $\mu_{ii} =
\infty$ for a recurrent state, it is called null; if $\mu_{ii} < \infty$, it is called nonnull or positive recurrent.
There are no null recurrent states in a finite-state Markov chain. All of the states in such
chains are either positive recurrent or transient.

A state is called periodic with period τ > 1 if a return is possible only in τ, 2τ, 3τ, ..., steps;
so $p_{ii}^{(n)} = 0$ for all values of n that are not divisible by τ > 1, and τ is the largest integer having
this property.
A state j is termed accessible from state i if $p_{ij}^{(n)} > 0$ for some n = 1, 2, .... In our example
of the computing system, each state, 0 and 1, is accessible from the other, since $p_{ij}^{(n)} > 0$
for all i, j and all n. If state j is accessible from i and state i is accessible from j, then the
states are said to communicate. This is the case in Example 18-1. We note that any state
communicates with itself. If state i communicates with j, j also communicates with i. Also,
if i communicates with l and l communicates with j, then i also communicates with j.
If the state space is partitioned into disjoint sets (called equivalence classes) of states,
where communicating states belong to the same class, then the Markov chain may consist
of one or more classes. If there is only one class, so that all states communicate, the Markov
chain is said to be irreducible. The chain represented by Example 18-1 is thus also irreducible.
For finite-state Markov chains, the states of a class are either all positive recurrent
or all transient. In many applications, the states will all communicate. This is the case if
there is a value of n for which $p_{ij}^{(n)} > 0$ for all values of i and j.
If state i in a class is aperiodic (not periodic), and if the state is also positive recurrent,
then the state is said to be ergodic. An irreducible Markov chain is ergodic if all of its states
are ergodic. In the case of such Markov chains the distribution

$A^{(n)} = A^{(0)}\,P^n$

converges as n → ∞, and the limiting distribution is independent of the initial probabilities,
$A^{(0)}$. In Example 18-1, this was clearly observed to be the case; after five steps (n ≥ 5),
$P\{X_n = 0\} = 0.667$ and $P\{X_n = 1\} = 0.333$ when three significant figures are used.
In general, for irreducible, ergodic Markov chains,

$\lim_{n\to\infty} p_{ij}^{(n)} = \lim_{n\to\infty} a_j^{(n)} = p_j,$

and, furthermore, these values $p_j$ are independent of i. These "steady state" probabilities, $p_j$,
satisfy the following state equations:

$p_j > 0,$  (18-11a)

$\sum_{j=0}^{m} p_j = 1,$  (18-11b)

$p_j = \sum_{i=0}^{m} p_i\,p_{ij}, \quad j=0,1,2,\ldots,m.$  (18-11c)

Since there are m + 2 equations in 18-11b and 18-11c, and since there are m + 1 unknowns,
one of the equations is redundant. Therefore, we will use m of the m + 1 equations in equation
18-11c together with equation 18-11b.

Example 18-3
In the case of the computing system presented in Example 18-1, we have from equations 18-11b and
18-11c,

$1 = p_0 + p_1,$

$p_0 = p_0(0.75) + p_1(0.50),$

or

$p_0 = 2/3 \quad\text{and}\quad p_1 = 1/3,$

which agrees with the emerging result for n ≥ 5 in Example 18-1.

The steady state probabilities and the mean recurrence times for irreducible, ergodic
Markov chains have a reciprocal relationship,

$\mu_{jj} = \dfrac{1}{p_j}, \quad j=0,1,2,\ldots,m.$  (18-12)

In Example 18-3, note that $\mu_{00} = 1/p_0 = 1.5$ and $\mu_{11} = 1/p_1 = 3$.

Example 18-4
The mood of a corporate president is observed over a period of time by a psychologist in the operations
research department. Being inclined toward mathematical modeling, the psychologist classifies
mood into three states as follows:

0: Good (cheerful)
1: Fair (so-so)
2: Poor (glum and depressed)

The psychologist observes that mood changes occur only overnight; thus, the data allow estimation
of the transition probabilities

$P = \begin{bmatrix} 0.6 & 0.2 & 0.2 \\ 0.3 & 0.4 & 0.3 \\ 0 & 0.3 & 0.7 \end{bmatrix}.$

The equations

$p_0 = 0.6p_0 + 0.3p_1 + 0p_2,$

$p_1 = 0.2p_0 + 0.4p_1 + 0.3p_2,$

$1 = p_0 + p_1 + p_2$

are solved simultaneously for the steady state probabilities

$p_0 = 3/13,$
$p_1 = 4/13,$
$p_2 = 6/13.$

Given that the president is in a bad mood, that is, state 2, the mean time required to return to that state
is $\mu_{22}$, where

$\mu_{22} = \dfrac{1}{p_2} = \dfrac{13}{6}\ \text{days}.$
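Equations 18-11b and 18-11c form a linear system once one redundant balance equation is replaced by the normalization condition, so the steady state probabilities can be found with a direct solve. The sketch below reproduces Example 18-4:

```python
import numpy as np

P = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3],
              [0.0, 0.3, 0.7]])

m = len(P)
A = np.vstack([(P.T - np.eye(m))[:-1], np.ones(m)])  # m-1 balance eqs + sum-to-1
b = np.zeros(m)
b[-1] = 1.0
p = np.linalg.solve(A, b)
print(p)             # [0.2308, 0.3077, 0.4615] = [3/13, 4/13, 6/13]
print(1.0 / p[2])    # mean recurrence time for state 2: 13/6 days
```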

As noted earlier, if $p_{kk} = 1$, state k is called an absorbing state, and the process remains
in state k once that state is reached. In this case, $b_{ik}$ is called the absorption probability,
which is the conditional probability of absorption into state k given state i. Mathematically,
we have

$b_{ik} = \sum_{j=0}^{m} p_{ij}\,b_{jk}, \quad i=0,1,2,\ldots,m,$  (18-13)

where

$b_{kk} = 1$

and

$b_{ik} = 0 \quad\text{for } i \text{ recurrent},\ i \ne k.$
18-4 CONTINUOUS-TIME MARKOV CHAINS


If the time parameter is continuous rather than a discrete index, as assumed in the previous
sections, the Markov chain is called a continuous-parameter chain. It is customary to use a
slightly different notation for continuous-parameter Markov chains, namely $X(t) = X_t$, where
$\{X(t)\}$, t ≥ 0, will be considered to have states 0, 1, ..., m. The discrete nature of the state
space [the range space for X(t)] is thus maintained, and

$p_{ij}(t) = P[X(t+s)=j \mid X(s)=i], \quad i,j=0,1,\ldots,m, \quad s \ge 0,\ t \ge 0,$

is the stationary transition probability function. It is noted that these probabilities do not
depend on s but only on t for a specified i, j pair of states. Furthermore, at time t = 0, the
function is continuous with

$\lim_{t\to 0} p_{ij}(t) = \begin{cases} 1, & i=j, \\ 0, & i \ne j. \end{cases}$

There is a direct correspondence between the discrete-time and continuous-time models.
The Chapman-Kolmogorov equations become

$p_{ij}(t) = \sum_{l=0}^{m} p_{il}(v)\,p_{lj}(t-v)$  (18-14)

for 0 ≤ v ≤ t, and for the specified state pair i, j and time t. If there are times $t_1$ and $t_2$ such
that $p_{ij}(t_1) > 0$ and $p_{ji}(t_2) > 0$, then states i and j are said to communicate. Once again, states
that communicate form an equivalence class, and where the chain is irreducible (all states
form a single class)

$p_{ij}(t) > 0, \quad t > 0,$

for each state pair i, j.
We also have the property that

$\lim_{t\to\infty} p_{ij}(t) = p_j,$

where $p_j$ exists and is independent of the initial state probability vector $A^{(0)}$. The values $p_j$ are
again called the steady state probabilities, and they satisfy

$p_j > 0, \quad j=0,1,\ldots,m,$

$\sum_{j=0}^{m} p_j = 1,$

$p_j = \sum_{i=0}^{m} p_i\,p_{ij}(t), \quad t > 0.$

The intensity of transition, given that the state is j, is defined as

$u_j = \lim_{\Delta t\to 0}\dfrac{1 - p_{jj}(\Delta t)}{\Delta t} = -\dfrac{d}{dt}\,p_{jj}(t)\Big|_{t=0},$  (18-15)

where the limit exists and is finite. Likewise, the intensity of passage from state i to state j,
given that the system is in state i, is

$u_{ij} = \lim_{\Delta t\to 0}\dfrac{p_{ij}(\Delta t)}{\Delta t} = \dfrac{d}{dt}\,p_{ij}(t)\Big|_{t=0}, \quad i \ne j,$  (18-16)

again where the limit exists and is finite. The interpretation of the intensities is that they represent
an instantaneous rate of transition from state i to j. For a small Δt, $p_{ij}(\Delta t) = u_{ij}\,\Delta t +
o(\Delta t)$, where o(Δt)/Δt → 0 as Δt → 0, so that $u_{ij}$ is a proportionality constant by which
$p_{ij}(\Delta t)$ is proportional to Δt as Δt → 0. The transition intensities also satisfy the balance
equations

$u_j\,p_j = \sum_{i \ne j} u_{ij}\,p_i, \quad j=0,1,2,\ldots,m.$  (18-17)

These equations indicate that in steady state, the rate of transition out of state j is equal to
the rate of transition into j.

Example 18-5
An electronic control mechanism for a chemical process is constructed with two identical modules,
operating as a parallel, active redundant pair. The function of at least one module is necessary for the
mechanism to operate. The maintenance shop has two identical repair stations for these modules and,
furthermore, when a module fails and enters the shop, other work is moved aside and repair work is
immediately initiated. The "system" here consists of the mechanism and the repair facility, and the states
are as follows:

0: Both modules operating
1: One unit operating and one unit in repair
2: Two units in repair (mechanism down)

The random variable representing time to failure for a module has an exponential density, say

$f(t) = \lambda e^{-\lambda t}, \quad t \ge 0,$
$\;\;= 0, \quad t < 0,$

and the random variable describing repair time at a repair station also has an exponential density, say

$r(t) = \mu e^{-\mu t}, \quad t \ge 0,$
$\;\;= 0, \quad t < 0.$

Interfailure and interrepair times are independent, and {X(t)} can be shown to be a continuous-parameter,
irreducible Markov chain with transitions only from a state to its neighboring states: 0 → 1, 1 → 0,
1 → 2, 2 → 1. Of course, there may be no state change.
The transition intensities are

$u_0 = 2\lambda, \quad u_1 = \lambda+\mu, \quad u_2 = 2\mu,$

$u_{01} = 2\lambda, \quad u_{10} = \mu, \quad u_{12} = \lambda, \quad u_{21} = 2\mu,$

with $u_{02} = u_{20} = 0$. Using equation 18-17,

$2\lambda p_0 = \mu p_1,$

$(\lambda+\mu)\,p_1 = 2\lambda p_0 + 2\mu p_2,$

$2\mu p_2 = \lambda p_1,$

and since $p_0 + p_1 + p_2 = 1$, some algebra gives

$p_0 = \dfrac{\mu^2}{(\lambda+\mu)^2},$

$p_1 = \dfrac{2\lambda\mu}{(\lambda+\mu)^2},$

$p_2 = \dfrac{\lambda^2}{(\lambda+\mu)^2}.$

The system availability (probability that the mechanism is up) in the steady state condition is thus

$\text{Availability} = 1 - \dfrac{\lambda^2}{(\lambda+\mu)^2}.$
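The closed-form probabilities of Example 18-5 are easy to sanity-check numerically; the rates in the sketch below are illustrative, not from the text:

```python
lam, mu = 0.01, 0.10          # illustrative failure and repair rates per hour
denom = (lam + mu) ** 2
p0, p1, p2 = mu**2 / denom, 2*lam*mu / denom, lam**2 / denom

print(p0 + p1 + p2)                       # 1.0
print("availability:", 1 - p2)            # 1 - lambda^2/(lambda + mu)^2
# balance check (equation 18-17) for state 0: rate out equals rate in
print(abs(2*lam*p0 - mu*p1) < 1e-15)
```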

The matrix of transition probabilities for a time increment Δt may be expressed as

$P = [p_{ij}(\Delta t)]$

$= \begin{bmatrix} 1-u_0\Delta t & u_{01}\Delta t & \cdots & u_{0j}\Delta t & \cdots & u_{0m}\Delta t \\ u_{10}\Delta t & 1-u_1\Delta t & \cdots & u_{1j}\Delta t & \cdots & u_{1m}\Delta t \\ \vdots & & & & & \vdots \\ u_{m0}\Delta t & u_{m1}\Delta t & \cdots & u_{mj}\Delta t & \cdots & 1-u_m\Delta t \end{bmatrix},$  (18-18)

and

$p_j(t+\Delta t) = \sum_{i=0}^{m} p_i(t)\,p_{ij}(\Delta t), \quad j=0,1,2,\ldots,m,$  (18-19)

where

$p_j(t) = P[X(t)=j].$

From the jth equation in the m + 1 equations of equation 18-19,

$p_j(t+\Delta t) = p_0(t)\,u_{0j}\Delta t + \cdots + p_j(t)\left[1-u_j\Delta t\right] + \cdots + p_m(t)\,u_{mj}\Delta t,$

which may be rewritten as

$p_j'(t) = \lim_{\Delta t\to 0}\left[\dfrac{p_j(t+\Delta t)-p_j(t)}{\Delta t}\right] = -u_j\,p_j(t) + \sum_{i\ne j} u_{ij}\,p_i(t).$  (18-20)

The resulting system of differential equations is

$p_j'(t) = -u_j\,p_j(t) + \sum_{i\ne j} u_{ij}\,p_i(t), \quad j=0,1,2,\ldots,m,$  (18-21)

which may be solved when m is finite, given initial conditions (probabilities) $A^{(0)}$ and using
the result that $\sum_{j=0}^{m} p_j(t) = 1$. The solution

$[p_0(t),\ p_1(t),\ \ldots,\ p_m(t)] = P(t)$  (18-22)

presents the state probabilities as a function of time, in the same manner that $A^{(n)}$ presented
state probabilities as a function of the number of transitions, n, given an initial condition
vector $A^{(0)}$ in the discrete-time model. The solution to equations 18-21 may be somewhat difficult
to obtain, and in general practice, transformation techniques are employed.

18-5 THE BIRTH-DEATH PROCESS IN QUEUEING


The major application of the so-called birth-death process that we will study is in queue-
ing or waiting-line theory. Here birth will refer to an arrival and death to a departure from
a physical system, as shown in Fig. 18-1.
Queueing theory is the mathematical study of queues or waiting lines. These waiting
lines occur in a variety of problem environments. There is an input process or “calling pop-
ulation,” from which arrivals are drawn, and a queueing system, which in Fig. 18-1 consists
of the queue and service facility. The calling population may be finite or infinite. Arrivals
occur in a probabilistic manner. A common assumption is that the interarrival times are
exponentially distributed. The queue is generally classified according to whether its capac-
ity is infinite or finite, and the service discipline refers to the order in which the customers
in the queue are served. The service mechanism consists of one or more servers, and the
elapsed service time is commonly called the holding time.
The following notation will be employed:

X(t) = number of customers in the system at time t.
s = number of servers.
$p_j(t) = P\{X(t)=j \mid A^{(0)}\}.$
$p_j = \lim_{t\to\infty} p_j(t).$
$\lambda_n$ = arrival rate given that n customers are in the system.
$\mu_n$ = service rate given that n customers are in the system.
The birth-death process can be used to describe how X(t) changes through time. It will
be assumed here that when X(t) = j, the probability distribution of the time to the next birth
(arrival) is exponential with parameter $\lambda_j$, j = 0, 1, 2, .... Furthermore, given X(t) = j, the
remaining time to the next service completion is taken to be exponential with parameter $\mu_j$,
j = 1, 2, .... Poisson-type postulates are assumed to hold, so that the probability of more
than one birth or death at the same instant is zero.

Figure 18-1 A simple queueing system: an input process generates arrivals to the queue and service facility, from which departures occur.

A transition diagram is shown in Fig. 18-2. The transition matrix corresponding to
equation 18-18 is

$P = \begin{bmatrix} 1-\lambda_0\Delta t & \lambda_0\Delta t & 0 & 0 & \cdots \\ \mu_1\Delta t & 1-(\lambda_1+\mu_1)\Delta t & \lambda_1\Delta t & 0 & \cdots \\ 0 & \mu_2\Delta t & 1-(\lambda_2+\mu_2)\Delta t & \lambda_2\Delta t & \cdots \\ 0 & 0 & \mu_3\Delta t & 1-(\lambda_3+\mu_3)\Delta t & \cdots \\ \vdots & & & & \ddots \end{bmatrix}.$

We note that $p_{ij}(\Delta t) \approx 0$ for j < i − 1 or j > i + 1. Furthermore, the transition intensities and
intensities of passage appearing in equation 18-17 are

$u_0 = \lambda_0,$

$u_j = \lambda_j + \mu_j, \quad j=1,2,\ldots,$

$u_{ij} = \lambda_i \quad\text{for } j=i+1,$
$\;\;\;= \mu_i \quad\text{for } j=i-1,$
$\;\;\;= 0 \quad\text{for } j<i-1,\ j>i+1.$

The fact that the transition intensities and intensities of passage are constant with time
is important in the development of this model. The nature of the transitions can be viewed as
specified by assumption, or it may be considered a consequence of the prior assumptions about
the distributions of the times between occurrences (births and deaths).

Figure 18-2 Transition diagram for the birth-death process.



The assumptions of independent, exponentially distributed service times and independent,
exponentially distributed interarrival times yield transition intensities that are constant
in time. This was also observed in the development of the Poisson and exponential
distributions in Chapters 5 and 6.
The methods used in equations 18-19 through 18-21 may be used to formulate an infinite
set of differential state equations from the transition matrix of equation 18-22. Thus, the
time-dependent behavior is described in the following equations:

$p_0'(t) = -\lambda_0\,p_0(t) + \mu_1\,p_1(t),$  (18-23)

$p_j'(t) = -(\lambda_j+\mu_j)\,p_j(t) + \lambda_{j-1}\,p_{j-1}(t) + \mu_{j+1}\,p_{j+1}(t), \quad j=1,2,\ldots,$  (18-24)

where $\sum_{j=0}^{\infty} p_j(t) = 1$.

In the steady state (t → ∞), we have $p_j'(t) = 0$, so the steady state equations are obtained
from equations 18-23 and 18-24:

$\mu_1 p_1 = \lambda_0 p_0,$

$\lambda_0 p_0 + \mu_2 p_2 = (\lambda_1+\mu_1)\,p_1,$

$\lambda_1 p_1 + \mu_3 p_3 = (\lambda_2+\mu_2)\,p_2,$

$\;\;\vdots$

$\lambda_{j-2}\,p_{j-2} + \mu_j\,p_j = (\lambda_{j-1}+\mu_{j-1})\,p_{j-1},$  (18-25)

$\lambda_{j-1}\,p_{j-1} + \mu_{j+1}\,p_{j+1} = (\lambda_j+\mu_j)\,p_j,$

$\;\;\vdots$

and $\sum_{j=0}^{\infty} p_j = 1$.
Equations 18-25 could also have been determined by direct application of equation
18-17, which provides a "rate balance" or "intensity balance." Solving equations 18-25, we
obtain

$p_1 = \dfrac{\lambda_0}{\mu_1}\,p_0,$

$p_2 = \dfrac{\lambda_1}{\mu_2}\,p_1 = \dfrac{\lambda_1\lambda_0}{\mu_2\mu_1}\,p_0,$

$p_3 = \dfrac{\lambda_2}{\mu_3}\,p_2 = \dfrac{\lambda_2\lambda_1\lambda_0}{\mu_3\mu_2\mu_1}\,p_0,$

$\;\;\vdots$

$p_j = \dfrac{\lambda_{j-1}}{\mu_j}\,p_{j-1} = \dfrac{\lambda_{j-1}\lambda_{j-2}\cdots\lambda_0}{\mu_j\mu_{j-1}\cdots\mu_1}\,p_0,$

$p_{j+1} = \dfrac{\lambda_j}{\mu_{j+1}}\,p_j = \dfrac{\lambda_j\lambda_{j-1}\cdots\lambda_0}{\mu_{j+1}\mu_j\cdots\mu_1}\,p_0.$

If we let

$C_j = \dfrac{\lambda_{j-1}\lambda_{j-2}\cdots\lambda_0}{\mu_j\mu_{j-1}\cdots\mu_1}, \quad j=1,2,\ldots,$  (18-26)

then

$p_j = C_j\,p_0, \quad j=1,2,\ldots,$

and since

$\sum_{j=0}^{\infty} p_j = 1, \quad\text{or}\quad p_0\left[1+\sum_{j=1}^{\infty} C_j\right] = 1,$

we obtain

$p_0 = \dfrac{1}{1+\sum_{j=1}^{\infty} C_j}.$  (18-27)

These steady state results assume that the $\lambda_j$, $\mu_j$ values are such that a steady state can
be reached. This will be true if $\lambda_j = 0$ for j > k, so that there are a finite number of states. It
is also true if $\rho = \lambda/(s\mu) < 1$, where λ and μ are constant rates and s denotes the number of servers.
The steady state will not be reached if $\sum_{j=1}^{\infty} C_j = \infty$.

18-6 CONSIDERATIONS IN QUEUEING MODELS


When the arrival rate $\lambda_j$ is constant for all j, the constant is denoted λ. Similarly, when the
service rate per busy server is constant, it will be denoted μ, so that $\mu_j = s\mu$ if j ≥ s and $\mu_j =
j\mu$ if j < s. The exponential interarrival-time density $\lambda e^{-\lambda t}$, t ≥ 0, and service-time density
$\mu e^{-\mu t}$, t ≥ 0, for a busy channel produce rates λ and μ that are
constant. The mean interarrival time is 1/λ, and the mean time for a busy channel to complete
a service is 1/μ.
A special set of notation has been widely employed in the steady state analysis of
queueing systems. This notation is given in the following list:

$L = \sum_{j=0}^{\infty} j\,p_j$ = expected number of customers in the queueing system.
$L_q = \sum_{j=s}^{\infty} (j-s)\,p_j$ = expected queue length.
W = expected time in the system (including service time).
$W_q$ = expected waiting time in the queue (excluding service time).

If $\lambda_j$ is constant for all j, then it has been shown that

$L = \lambda W$  (18-28)

and

$L_q = \lambda W_q.$

(These results are special cases of what is known as Little's law.) If the $\lambda_j$ are not all equal, $\bar{\lambda}$
replaces λ, where

$\bar{\lambda} = \sum_{j=0}^{\infty} \lambda_j\,p_j.$  (18-29)

The system utilization coefficient $\rho = \lambda/(s\mu)$ is the fraction of time that the servers are busy.
In the case where the mean service time is 1/μ for all j ≥ 1,

$W = W_q + \dfrac{1}{\mu}.$  (18-30)
The birth-death process rates $\lambda_0, \lambda_1, \ldots, \lambda_j, \ldots$ and $\mu_1, \mu_2, \ldots, \mu_j, \ldots$ may be assigned
any positive values as long as the assignment leads to a steady state solution. This allows
considerable flexibility in using the results given in equation 18-27. The specific models
subsequently presented differ in the manner in which $\lambda_j$ and $\mu_j$ vary as functions of j.

18-7 BASIC SINGLE-SERVER MODEL WITH CONSTANT RATES


We will now consider the case where s = 1, that is, a single server. We will also assume an
unlimited potential queue length, with exponential interarrival times having constant parameter
λ, so that $\lambda_0 = \lambda_1 = \cdots = \lambda$. Furthermore, service times will be assumed to be independent
and exponentially distributed, with $\mu_1 = \mu_2 = \cdots = \mu$. We will assume λ < μ. As a result
of equation 18-26, we have

$C_j = \left(\dfrac{\lambda}{\mu}\right)^j = \rho^j, \quad j=1,2,\ldots,$  (18-31)

and from equation 18-27,

$p_j = \rho^j p_0, \quad j=1,2,3,\ldots,$

$p_0 = \dfrac{1}{1+\sum_{j=1}^{\infty}\rho^j} = 1-\rho.$  (18-32)

Thus, the steady state equations are

$p_j = (1-\rho)\rho^j, \quad j=0,1,2,\ldots.$  (18-33)

Note that the probability that there are j customers in the system, $p_j$, is given by a geometric
distribution with parameter ρ. The mean number of customers in the system, L, is determined
as

$L = \sum_{j=0}^{\infty} j(1-\rho)\rho^j = \dfrac{\rho}{1-\rho} = \dfrac{\lambda}{\mu-\lambda}.$  (18-34)

And the expected queue length is

$L_q = \sum_{j=1}^{\infty} (j-1)\,p_j = L - (1-p_0) = \dfrac{\rho^2}{1-\rho}.$  (18-35)

Using equations 18-28 and 18-34, we find that the expected waiting time in the system is

$W = \dfrac{L}{\lambda} = \dfrac{\rho}{\lambda(1-\rho)} = \dfrac{1}{\mu-\lambda},$  (18-36)

and the expected waiting time in the queue is

$W_q = \dfrac{L_q}{\lambda} = W - \dfrac{1}{\mu} = \dfrac{\lambda}{\mu(\mu-\lambda)}.$  (18-37)
These results could have been developed directly from the distributions of time in the sys-
tem and time in the queue, respectively. Since the exponential distribution reflects a mem-
oryless process, an arrival finding j units in the system will wait through j + 1 services,
including its own, and thus its waiting time T_{j+1} is the sum of j + 1 independent, exponen-
tially distributed random variables. This random variable was shown in Chapter 6 to have a
gamma distribution. This is a conditional density given that the arrival finds j units in the
system. Thus, if S represents time in the system,

P(S > w) = Σ_{j=0}^∞ p_j · P(T_{j+1} > w)
         = Σ_{j=0}^∞ (1 − ρ)ρ^j ∫_w^∞ [μ(μt)^j e^(−μt)/j!] dt
         = ∫_w^∞ (1 − ρ)μ e^(−(1−ρ)μt) dt
         = e^(−μ(1−ρ)w),  w ≥ 0,
         = 1,  w < 0,    (18-38)

which is seen to be the complement of the distribution function for an exponential random
variable with parameter μ(1 − ρ). The mean value W = 1/[μ(1 − ρ)] = 1/(μ − λ) follows
directly.
If we let S_q represent time in the queue, excluding service time, then

P(S_q = 0) = p_0 = 1 − ρ.

If we take T_j as the sum of j service times, T_j will again have a gamma distribution. Then,
as in the previous manipulations,

P(S_q > w_q) = Σ_{j=1}^∞ p_j · P(T_j > w_q)
             = Σ_{j=1}^∞ (1 − ρ)ρ^j · P(T_j > w_q)    (18-39)
             = ρ e^(−μ(1−ρ)w_q),  w_q ≥ 0,

and we find the density of time in the queue, g(w_q), for w_q > 0, to be

g(w_q) = ρ · μ(1 − ρ)e^(−μ(1−ρ)w_q) = λ(1 − ρ)e^(−(μ−λ)w_q),  w_q > 0.

Thus, the probability distribution is

g(w_q) = 1 − ρ,  w_q = 0,
       = λ(1 − ρ)e^(−(μ−λ)w_q),  w_q > 0,    (18-40)


which was noted in Section 2-2 as being for a mixed-type random variable (in equation 2-2,
G ≠ 0 and H ≠ 0). The expected waiting time in the queue, W_q, could be determined directly
from this distribution as

W_q = (1 − ρ)·0 + ∫_0^∞ w_q λ(1 − ρ)e^(−(μ−λ)w_q) dw_q = λ/[μ(μ − λ)].    (18-41)

When λ ≥ μ, the summation of the terms ρ^j in equation 18-32 diverges. In this case,
there is no steady state solution since the steady state is never reached. That is, the queue
would grow without bound.
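These measures are simple to compute. As a quick numerical illustration (ours, not the text's), the following Python sketch evaluates equations 18-33 through 18-37 for given rates; the helper name mm1_measures is hypothetical.

    def mm1_measures(lam, mu):
        """Steady state measures for the single-server model, eqs. 18-34 to 18-37."""
        assert lam < mu, "a steady state requires lambda < mu"
        rho = lam / mu                 # utilization coefficient
        L = rho / (1 - rho)            # mean number in system (18-34)
        Lq = rho**2 / (1 - rho)        # mean queue length (18-35)
        W = 1 / (mu - lam)             # mean time in system (18-36)
        Wq = lam / (mu * (mu - lam))   # mean time in queue (18-37)
        return L, Lq, W, Wq

    # lambda = 2, mu = 3 gives L = 2, Lq = 4/3, W = 1, Wq = 2/3
    print(mm1_measures(2.0, 3.0))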

18-8 SINGLE SERVER WITH LIMITED QUEUE LENGTH


If the queue is limited so that at most N units can be in the system, and if the exponential
service times and exponential interarrival times are retained from the prior model, we have
λ_0 = λ_1 = ··· = λ_{N−1} = λ,

λ_j = 0,  j ≥ N,

and

μ_1 = μ_2 = ··· = μ_N = μ.

It follows from equation 18-26 that

C_j = (λ/μ)^j = ρ^j,  j = 1, 2, ..., N,
    = 0,  j > N.    (18-42)

Thus,

p_j = ρ^j p_0,  j = 1, 2, ..., N,

so that

p_0 Σ_{j=0}^N ρ^j = 1

and

p_0 = 1 / Σ_{j=0}^N ρ^j = (1 − ρ)/(1 − ρ^(N+1)).    (18-43)

As a result, the steady state equations are given by

p_j = (1 − ρ)ρ^j/(1 − ρ^(N+1)),  j = 0, 1, 2, ..., N,    (18-44)

and the mean number of customers in the system is

L = ρ[1 − (N + 1)ρ^N + Nρ^(N+1)] / [(1 − ρ)(1 − ρ^(N+1))].    (18-45)

The mean number of customers in the queue is


L_q = Σ_{j=1}^N (j − 1)·p_j
    = Σ_{j=0}^N j·p_j − Σ_{j=1}^N p_j = L − (1 − p_0).    (18-46)

The mean time in the system is found as

W = L/λ̄,    (18-47)

and the mean time in the queue is

W_q = L_q/λ̄ = (L − 1 + p_0)/λ̄,    (18-48)

where L is given by equation 18-45 and λ̄ = λ(1 − p_N) is the effective arrival rate of
equation 18-29.



18-9 MULTIPLE SERVERS WITH AN UNLIMITED QUEUE


We now consider the case where there are multiple servers. We also assume that the queue
is unlimited and that exponential assumptions hold for interarrival times and service times.
In this case, we have

λ_0 = λ_1 = ··· = λ_j = ··· = λ    (18-49)

and

μ_j = jμ  for j ≤ s,
    = sμ  for j > s.

Thus, defining φ = λ/μ, we have

C_j = φ^j/j!,            j ≤ s,
    = φ^j/(s! s^(j−s)),  j > s.    (18-50)

It follows from equation 18-27 that the state equations are developed as

p_j = (φ^j/j!)p_0,            j ≤ s,
    = [φ^j/(s! s^(j−s))]p_0,  j > s,

p_0 = [Σ_{j=0}^s φ^j/j! + Σ_{j=s+1}^∞ φ^j/(s! s^(j−s))]^(−1)
    = [Σ_{j=0}^{s−1} φ^j/j! + (φ^s/s!)·1/(1 − ρ)]^(−1),    (18-51)

where ρ = λ/sμ = φ/s is the utilization coefficient, assuming ρ < 1.


The value L,, representing the mean number of units in the queue, is developed as
follows:

L_q = Σ_{j=s}^∞ (j − s)·p_j = (φ^s p_0/s!) Σ_{k=0}^∞ kρ^k.    (18-52)

Then

L_q = φ^s ρ p_0 / [s!(1 − ρ)²],    (18-53)

and

W_q = L_q/λ,  W = W_q + 1/μ,    (18-54)

so that

L = λW = λ(W_q + 1/μ) = L_q + φ.    (18-55)
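As a computational convenience, the closed forms in equations 18-51 through 18-55 can be packaged as in the Python sketch below (ours, not the text's; the helper name mms_measures is hypothetical).

    from math import factorial

    def mms_measures(lam, mu, s):
        """Steady state measures for s servers and an unlimited queue,
        equations 18-51 to 18-55; requires rho = lam/(s*mu) < 1."""
        phi = lam / mu
        rho = phi / s
        assert rho < 1, "a steady state requires lambda < s*mu"
        p0 = 1.0 / (sum(phi**j / factorial(j) for j in range(s))
                    + phi**s / (factorial(s) * (1 - rho)))
        Lq = phi**s * rho * p0 / (factorial(s) * (1 - rho)**2)   # (18-53)
        Wq = Lq / lam                                            # (18-54)
        W = Wq + 1 / mu
        L = Lq + phi                                             # (18-55)
        return L, Lq, W, Wq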

18-10 OTHER QUEUEING MODELS

There are numerous other queueing models that can be developed from the birth—death
process. In addition, it is also possible to develop queueing models for situations involving
nonexponential distributions. One useful result, given without proof, is for a single-server
system having exponential interarrivals and an arbitrary service time distribution with mean 1/μ
and variance σ². If ρ = λ/μ < 1, then steady state measures are given by equations 18-56:

p_0 = 1 − ρ,
L_q = (λ²σ² + ρ²)/[2(1 − ρ)],
L = ρ + L_q,    (18-56)
W_q = L_q/λ,
W = W_q + 1/μ.

In the case where service times are constant at 1/μ, the foregoing relationships yield the
measures of system performance by taking the variance σ² = 0.
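The constant-service case is a handy check on equations 18-56. The short Python sketch below (ours; the name mg1_measures is hypothetical) evaluates the formulas: setting σ² = 1/μ² recovers the exponential-service results of Section 18-7, while σ² = 0 halves the queueing term.

    def mg1_measures(lam, mu, sigma2):
        """Steady state measures for exponential arrivals and an arbitrary
        service distribution with mean 1/mu and variance sigma2 (eqs. 18-56)."""
        rho = lam / mu
        assert rho < 1
        Lq = (lam**2 * sigma2 + rho**2) / (2 * (1 - rho))
        L = rho + Lq
        Wq = Lq / lam
        W = Wq + 1 / mu
        return L, Lq, W, Wq

    print(mg1_measures(2.0, 3.0, 1/9))   # exponential service: matches M/M/1
    print(mg1_measures(2.0, 3.0, 0.0))   # constant service: Lq is halved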

18-11 SUMMARY
This chapter introduced the notion of discrete-state space stochastic processes for discrete-
time and continuous-time orientations. The Markov process was developed along with the
presentation of state properties and characteristics. This was followed by a presentation of
the birth-death process and several important applications to queueing models for the
description of waiting-time phenomena.

18-12 EXERCISES
18-1. A shoe repair shop in a suburban mall has one shoesmith. Shoes are brought in for repair and arrive according to a Poisson process with a constant arrival rate of two pairs per hour. The repair time distribution is exponential with a mean of 20 minutes, and there is independence between the repair and arrival processes. Consider a pair of shoes to be the unit to be served, and do the following:
(a) In the steady state, find the probability that the number of pairs of shoes in the system exceeds 5.
(b) Find the mean number of pairs in the shop and the mean number of pairs waiting for service.
(c) Find the mean turnaround time for a pair of shoes (time in the shop waiting plus repair, but excluding time waiting to be picked up).

18-2. Weather data are analyzed for a particular locality, and a Markov chain is employed as a model for weather change as follows. The conditional probability of change from rain to clear weather in one day is 0.3. Likewise, the conditional probability of transition from clear to rain in one day is 0.1. The model is to be a discrete-time model, with transitions occurring only between days.
(a) Determine the matrix P of one-step transition probabilities.
(b) Find the steady state probabilities.
(c) If today is clear, find the probability that it will be clear exactly 3 days hence.
(d) Find the probability that the first passage from a clear day to a rainy day occurs in exactly 2 days, given a clear day is the initial state.
(e) What is the mean recurrence time for the rainy day state?

18-3. A communication link transmits binary characters, (0, 1). There is a probability p that a transmitted character will be received correctly by a receiver, which then transmits to another link, etc. If X_0 is the initial character and X_1 is the character received after the first transmission, X_2 after the second, etc., then with independence {X_n} is a Markov chain. Find the one-step and steady state transition matrices.

18-4. Consider a two-component active redundancy where the components are identical and the time-to-failure distributions are exponential. When both units are operating, each carries load L/2 and each has failure rate λ. However, when one unit fails, the load carried by the other component is L, and its failure rate under this load is (1.5)λ. There is only one repair facility available, and repair time is exponentially distributed with mean 1/μ. The system is considered failed when both components are in the failed state. Both components are initially operating. Assume that μ > (1.5)λ. Let the states be as follows:
0: No components are failed.
1: One component is failed and is in repair.
2: Two components are failed, one is in repair, one is waiting, and the system is in the failed condition.
(a) Determine the matrix P of transition probabilities associated with interval Δt.
(b) Determine the steady state probabilities.
(c) Write the system of differential equations that present the transient or time-dependent relationships for transition.

18-5. A communication satellite is launched via a booster system that has a discrete-time guidance control system. Course correction signals form a sequence {X_n}, where the state space for X is as follows:
0: No correction required.
1: Minor correction required.
2: Major correction required.
3: Abort and system destruct.
If {X_n} can be modeled as a Markov chain with one-step transition matrix

    | 1    0    0    0   |
    | 2/3  1/6  1/6  0   |
    | 0    2/3  1/6  1/6 |
    | 0    0    0    1   |,

do the following:
(a) Show that states 0 and 3 are absorbing states.
(b) If the initial state is state 1, compute the steady state probability that the system is in state 0.
(c) If the initial probabilities are (0, 1/2, 1/2, 0), compute the steady state probability p_0.
(d) Repeat (c) with A = (1/4, 1/4, 1/4, 1/4).

18-6. A gambler bets $1 on each hand of blackjack. The probability of winning on any hand is p, and the probability of losing is 1 − p = q. The gambler will continue to play until either $Y has been accumulated, or he has no money left. Let X_t denote the accumulated winnings on hand t. Note that X_{t+1} = X_t + 1 with probability p, X_{t+1} = X_t − 1 with probability q, and X_{t+1} = X_t if X_t = 0 or X_t = Y. The stochastic process X_t is a Markov chain.
(a) Find the one-step transition matrix P.
(b) For Y = 4 and p = 0.3, find the absorption probabilities b_{10}, b_{14}, b_{30}, and b_{34}.

18-7. An object moves between four points on a circle, which are labeled 1, 2, 3, and 4. The probability of moving one unit to the right is p, and the probability of moving one unit to the left is 1 − p = q. Assume that the object starts at 1, and let X_n denote the location on the circle after n steps.
(a) Find the one-step transition matrix P.
(b) Find an expression for the steady state probabilities p_j.
(c) Evaluate the probabilities p_j for p = 0.5 and p = 0.8.

18-8. For the single-server queueing model presented in Section 18-7, sketch the graphs of the following quantities as a function of ρ = λ/μ, for 0 < ρ < 1.
(a) Probability of no units in the system.
(b) Mean time in the system.
(c) Mean time in the queue.

18-9. Interarrival times at a telephone booth are exponential, with an average time of 10 minutes. The length of a phone call is assumed to be exponentially distributed with a mean of 3 minutes.
(a) What is the probability that a person arriving at the booth will have to wait?
(b) What is the average queue length?
(c) The telephone company will install a second booth when an arrival would expect to have to wait 3 minutes or more for the phone. By how much must the rate of arrivals be increased in order to justify a second booth?
(d) What is the probability that an arrival will have to wait more than 10 minutes for the phone?
(e) What is the probability that it will take a person more than 10 minutes altogether, to wait for the phone and to complete the call?
(f) Estimate the fraction of a day that the phone will be in use.

18-10. Automobiles arrive at a service station in a random manner at a mean rate of 15 per hour. This station has only one service position, with a mean servicing rate of 27 customers per hour. Service times are exponentially distributed. There is space for only the automobile being served and two waiting. If all three spaces are filled, an arriving automobile will go on to another station.
(a) What is the average number of units in the station?
(b) What fraction of customers will be lost?
(c) Why is L_q ≠ L − 1?

18-11. An engineering school has three secretaries in its general office. Professors with jobs for the secretaries arrive at random, at an average rate of 20 per 8-hour day. The amount of time that a secretary spends on a job has an exponential distribution with a mean of 40 minutes.
(a) What fraction of the time are the secretaries busy?
(b) How much time does it take, on average, for a professor to get his or her jobs completed?
(c) If an economy drive reduced the secretarial force to two secretaries, what will be the new answers to (a) and (b)?

18-12. The mean frequency of arrivals at an airport is 18 planes per hour, and the mean time that a runway is tied up with an arrival is 2 minutes. How many runways will have to be provided so that the probability of a plane having to wait is 0.20? Ignore finite population effects and make the assumption of exponential interarrival and service times.

18-13. A hotel reservations facility uses inward WATS lines to service customer requests. The mean number of calls that arrive per hour is 50, and the mean service time for a call is 3 minutes. Assume that interarrival and service times are exponentially distributed. Calls that arrive when all lines are busy obtain a busy signal and are lost from the system.
(a) Find the steady state equations for this system.
(b) How many WATS lines must be provided to ensure that the probability of a customer obtaining a busy signal is 0.05?
(c) What fraction of the time are all WATS lines busy?
(d) Suppose that during the evening hours call arrivals occur at a mean rate of 10 per hour. How does this affect the WATS line utilization?
(e) Suppose the estimated mean service time (3 minutes) is in error, and the true mean service time is really 5 minutes. What effect will this have on the probability of a customer finding all lines busy if the number of lines in (b) are used?
Chapter 19

Computer Simulation

One of the most widespread applications of probability and statistics lies in the use of com-
puter simulation methods. A simulation is simply an imitation of the operation of a real-
world system for purposes of evaluating that system. Over the past 20 years, computer
simulation has enjoyed a great deal of popularity in the manufacturing, production, logis-
tics, service, and financial industries, to name just a few areas of application. Simulations
are often used to analyze systems that are too complicated to attack via analytic methods
such as queueing theory. We are primarily interested in simulations that are:

1. Dynamic—that is, the system state changes over time.


2. Discrete—that is, the system state changes as the result of discrete events such as
customer arrivals or departures.
3. Stochastic (as opposed to deterministic).

The stochastic nature of simulation prompts the ensuing discussion in the text.
This chapter is organized as follows. It begins in Section 19-1 with some simple moti-
vational examples designed to show how one can apply simulation to answer interesting
questions about stochastic systems. These examples invariably involve the generation of
random variables to drive the simulation, for example customer interarrival times and serv-
ice times. The subject of Section 19-2 is the development of techniques to generate random
variables. Some of these techniques have already been alluded to in previous chapters, but
we will give a more complete and self-contained presentation here. After a simulation run
is completed, one must conduct a rigorous analysis of the resulting output, a task made dif-
ficult because simulation output, for example customer waiting times, is almost never inde-
pendent or identically distributed. The problem of output analysis is studied in Section 19-3.
A particularly attractive feature of computer simulation is its ability to allow the experi-
menter to analyze and compare certain scenarios quickly and efficiently. Section 19-4 dis-
cusses methods for reducing the variance of estimators arising from a single scenario, thus
resulting in more-precise statements about system performance, at no additional cost in
simulation run time. We also extend this work by mentioning methods for selecting the best
of a number of competing scenarios. We point out here that excellent general references for
the topic of stochastic simulation are Banks, Carson, Nelson, and Nicol (2001) and Law and
Kelton (2000).

19-1 MOTIVATIONAL EXAMPLES


This section illustrates the use of simulation through a series of simple, motivational exam-
ples. The goal is to show how one uses random variables within a simulation to answer
questions about the underlying stochastic system.


Example 19-1  Coin Flipping
We are interested in simulating independent flips of a fair coin. Of course, this is a trivial sequence of
Bernoulli trials with success probability p = 1/2, but this example serves to show how one can use sim-
ulation to analyze such a system. First of all, we need to generate realizations of heads (H) and tails
(T), each with probability 1/2. Assuming that the simulation can somehow produce a sequence of
independent uniform (0,1) random numbers, U_1, U_2, ..., we will arbitrarily designate flip i as H if we
observe U_i < 0.5, and as T if we observe U_i > 0.5. How one generates independent uniforms is
the subject of Section 19-2. In any case, suppose that the following uniforms are observed:

0.25  0.4  0.065  0.935  0.82  0.49  0.21  0.77  0.71  0.08.


This sequence of uniforms corresponds to the outcomes HHHTTHHTTH. The reader is asked to
study this example in various ways in Exercise 19-1. This type of “static” simulation, in which we
simply repeat the same type of trials over and over, has come to be known as Monte Carlo simulation,
in honor of the European city-state, where gambling is a popular recreational activity.
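In practice one lets the computer do the flipping. A minimal Python sketch (ours, not the text's) that mimics this experiment:

    import random

    random.seed(12345)   # any fixed seed makes the run repeatable
    flips = ['H' if random.random() < 0.5 else 'T' for _ in range(10)]
    print(''.join(flips))   # one simulated sequence of 10 fair-coin flips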

Example 19-2  Estimate π

In this example, we will estimate π using Monte Carlo simulation in conjunction with a simple geo-
metric relation. Referring to Fig. 19-1, consider a unit square with an inscribed circle, both centered
at (1/2, 1/2). If one were to throw darts randomly at the square, the probability that a particular dart
will land in the circle is π/4, the ratio of the circle's area to that of the square. How can we use this
simple fact to estimate π? We shall use Monte Carlo simulation to throw many darts at the square.
Specifically, generate independent pairs of independent uniform (0,1) random variables, (U_11, U_12),
(U_21, U_22), .... These pairs will fall randomly on the square. If, for pair i, it happens that

(U_i1 − 1/2)² + (U_i2 − 1/2)² ≤ 1/4,    (19-1)

then that pair will also fall within the circle. Suppose we run the experiment for n pairs (darts). Let
X_i = 1 if pair i satisfies inequality 19-1, that is, if the ith dart falls in the circle; otherwise, let X_i = 0.
Now count up the number of darts X = Σ_{i=1}^n X_i falling in the circle. Clearly, X has the binomial
distribution with parameters n and p = π/4. Then the proportion p̂ = X/n is the maximum likelihood
estimate for p = π/4, and so the maximum likelihood estimator for π is just π̂ = 4p̂. If, for instance,

Figure 19-1 Throwing darts to estimate π.

we conducted n = 1000 trials and observed X = 753 darts in the circle, our estimate would be π̂ = 3.012.
We will encounter this estimation technique again in Exercise 19-2.
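A sketch of the dart-throwing estimator in Python (our code; estimate_pi is a hypothetical name):

    import random

    def estimate_pi(n, seed=0):
        """Estimate pi by counting darts that satisfy inequality 19-1."""
        random.seed(seed)
        hits = 0
        for _ in range(n):
            u1, u2 = random.random(), random.random()
            if (u1 - 0.5)**2 + (u2 - 0.5)**2 <= 0.25:   # dart falls in circle
                hits += 1
        return 4 * hits / n   # pi-hat = 4 * (proportion of hits)

    print(estimate_pi(100000))   # typically within a few hundredths of pi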

Example 19-3  Monte Carlo Integration


Another interesting use of computer simulation involves Monte Carlo integration. Usually, the
method becomes efficacious only for high-dimensional integrals, but we will fall back to the basic
one-dimensional case for ease of exposition. To this end, consider the integral

I = ∫_a^b f(x) dx = (b − a) ∫_0^1 f(a + (b − a)u) du.    (19-2)

As described in Fig. 19-2, we shall estimate the value of this integral by summing up n rectangles,
each of width 1/n centered randomly at point U_i on [0,1], and of height f(a + (b − a)U_i). Then an esti-
mate for I is

Î_n = [(b − a)/n] Σ_{i=1}^n f(a + (b − a)U_i).    (19-3)

One can show (see Exercise 19-3) that Î_n is an unbiased estimator for I, that is, E[Î_n] = I for all n. This
makes Î_n an intuitive and attractive estimator.
To illustrate, suppose that we want to estimate the integral

I = ∫_0^1 [1 + cos(πx)] dx

and the following n = 4 numbers are a uniform (0,1) sample:

0.419  0.109  0.732  0.893.

Figure 19-2 Monte Carlo integration.



Plugging into equation 19-3, we obtain

Î_4 = [(1 − 0)/4] Σ_{i=1}^4 [1 + cos(π(0 + (1 − 0)U_i))] = 0.896,

which is close to the actual answer of 1. See Exercise 19-4 for additional Monte Carlo integration
examples.
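Equation 19-3 translates directly into code. A minimal Python sketch (ours; mc_integral is a hypothetical name):

    import math
    import random

    def mc_integral(f, a, b, n, seed=0):
        """Monte Carlo estimate of the integral of f over [a, b], equation 19-3."""
        random.seed(seed)
        total = sum(f(a + (b - a) * random.random()) for _ in range(n))
        return (b - a) * total / n

    # The integral of 1 + cos(pi*x) over [0, 1]; the exact answer is 1.
    print(mc_integral(lambda x: 1 + math.cos(math.pi * x), 0.0, 1.0, 10000))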

Example 19-4  A Single-Server Queue
Now the goal is to simulate the behavior of a single-server queueing system. Suppose that six cus-
tomers arrive at a bank at the following times, which have been generated from some appropriate
probability distribution:

3 4 6 10 15 20.

Upon arrival, customers queue up in front of a single teller and are processed sequentially, in a first-
come-first-served manner. The service times corresponding to the arriving customers are

7 6 4 6 1 2.
For this example, we assume that the bank opens at time 0 and closes its doors at time 20 (just after
customer 6 arrives), serving any remaining customers.
Table 19-1 and Fig. 19-3 trace the evolution of the system as time progresses. The table keeps
track of the times at which customers arrive, begin service, and leave. Figure 19-3 graphs the status
of the queue as a function of time; in particular, it graphs L(t), the number of customers in the system
(queue + service) at time t.
Note that customer i can begin service only at time max(A_i, D_{i−1}), that is, the maximum of his
arrival time and the previous customer's departure time. The table and figure are quite easy to inter-
pret. For instance, the system is empty until time 3, when customer 1 arrives. At time 4, customer 2
arrives, but must wait in line until customer 1 finishes service at time 10. We see from the figure that
between times 20 and 26, customer 4 is in service, while customers 5 and 6 wait in the queue. From
the table, the average waiting time for the six customers is Σ_{i=1}^6 W_i/6 = 44/6. Further, the average num-
ber of customers in the system is ∫_0^29 L(t) dt/29 = 70/29, where we have computed the integral by
adding up the rectangles in Fig. 19-3. Exercise 19-5 looks at extensions of the single-server queue.
Many simulation software packages provide simple ways to model and analyze more-complicated
queueing networks.

Table 19-1 Bank Customers in Single-Server Queueing System

i, customer   A_i, arrival time   B_i, begin service   S_i, service time   D_i, depart time   W_i, wait
1             3                   3                    7                   10                 0
2             4                   10                   6                   16                 6
3             6                   16                   4                   20                 10
4             10                  20                   6                   26                 10
5             15                  26                   1                   27                 11
6             20                  27                   2                   29                 7


Figure 19-3 Number of customers L(t) in single-server queueing system.
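The bookkeeping in Table 19-1 follows the recursion B_i = max(A_i, D_{i−1}), D_i = B_i + S_i, W_i = B_i − A_i, which is easy to automate. A Python sketch (ours, not the text's):

    def simulate_fifo_queue(arrivals, services):
        """Trace a single-server first-come-first-served queue."""
        begin, depart, wait = [], [], []
        last_depart = 0.0
        for a, s in zip(arrivals, services):
            b = max(a, last_depart)   # service starts at arrival or when the server frees up
            d = b + s
            begin.append(b); depart.append(d); wait.append(b - a)
            last_depart = d
        return begin, depart, wait

    A = [3, 4, 6, 10, 15, 20]   # arrival times from Example 19-4
    S = [7, 6, 4, 6, 1, 2]      # service times
    B, D, W = simulate_fifo_queue(A, S)
    print(W, sum(W) / len(W))   # waits [0, 6, 10, 10, 11, 7], average 44/6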

Example 19-5  (s, S) Inventory Policy


Customer orders for a particular good arrive at a store every day. During a certain one-week period,
the quantities ordered are

10 6 11 3 20 6 8.
The store starts the week off with an initial stock of 20. If the stock falls to 5 or below, the owner
orders enough from a central warehouse to replenish the stock to 20. Such replenishment orders are
placed only at the end of the day and are received before the store opens the next day. There are no
customer back orders, so any customer orders that are not filled immediately are lost. This is called
an (s, S) inventory system, where the inventory is replenished to S = 20 whenever it hits level s = 5.
The following is a history for this system:

Day   Initial Stock   Customer Order   End Stock   Reorder?   Lost Orders

1     20              10               10          No         0
2     10              6                4           Yes        0
3     20              11               9           No         0
4     9               3                6           No         0
5     6               20               0           Yes        14
6     20              6                14          No         0
7     14              8                6           No         0

We see that at the end of days 2 and 5, replenishment orders were made. In particular, on day 5, the
store ran out of stock and lost 14 orders as a result. See Exercise 19-6.

19-2 GENERATION OF RANDOM VARIABLES


All the examples described in Section 19-1 required random variables to drive the simula-
tion. In Examples 19-1 through 19-3, we needed uniform (0,1) random variables; Examples
19-4 and 19-5 used more-complicated random variables to model customer arrivals, serv-

ice times, and order quantities. This section discusses methods to generate such random
variables automatically. The generation of uniform (0,1) random variables is a good place
to start, especially since it turns out that uniform (0,1) generation forms the basis for the
generation of all other random variables.

19-2.1 Generating Uniform (0,1) Random Variables


There are a variety of methods for generating uniform (0,1) random variables, among them
are the following:
1. Sampling from certain physical devices, such as an atomic clock.
2. Looking up predetermined random numbers from a table.
3. Generating pseudorandom numbers (PRNs) from a deterministic algorithm.
The most widely used techniques in practice all employ the latter strategy of generating
PRNs from a deterministic algorithm. Although, by definition, PRNs are not truly random,
there are many algorithms available that produce PRNs that appear to be perfectly random.
Further, these algorithms have the advantages of being computationally fast and
repeatable—speed is a good property to have for the obvious reasons, while repeatability is
desirable for experimenters who want to be able to replicate their simulation results when
the runs are conducted under identical conditions.
Perhaps the most popular method for obtaining PRNs is the linear congruential gener-
ator (LCG). Here, we start with a nonnegative "seed" integer, X_0, use the seed to generate
a sequence of nonnegative integers, X_1, X_2, ..., and then convert the X_i to PRNs, U_1, U_2, ....
The algorithm is simple.
1. Specify a nonnegative seed integer, X_0.
2. For i = 1, 2, ..., let X_i = (aX_{i−1} + c) mod (m), where a, c, and m are appropriately cho-
sen integer constants, and where "mod" denotes the modulus function, for example,
17 mod (5) = 2 and −1 mod (5) = 4.
3. For i = 1, 2, ..., let U_i = X_i/m.

Example 19-6
Consider the "toy" generator X_i = (5X_{i−1} + 1) mod (8), with seed X_0 = 0. This produces the integer
sequence X_1 = 1, X_2 = 6, X_3 = 7, X_4 = 4, X_5 = 5, X_6 = 2, X_7 = 3, X_8 = 0, whereupon things start repeat-
ing, or "cycling." The PRNs corresponding to the sequence starting with seed X_0 = 0 are therefore U_1 =
1/8, U_2 = 6/8, U_3 = 7/8, U_4 = 4/8, U_5 = 5/8, U_6 = 2/8, U_7 = 3/8, U_8 = 0. Since any seed eventually pro-
duces all integers 0, 1, ..., 7, we say that this is a full-cycle (or full-period) generator.
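A few lines of Python (ours; the function name lcg is hypothetical) reproduce the sequence of Example 19-6:

    def lcg(seed, a, c, m, n):
        """Return n PRNs U_i = X_i/m from X_i = (a*X_{i-1} + c) mod m."""
        x, us = seed, []
        for _ in range(n):
            x = (a * x + c) % m
            us.append(x / m)
        return us

    print(lcg(0, 5, 1, 8, 8))   # [0.125, 0.75, 0.875, 0.5, 0.625, 0.25, 0.375, 0.0]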

Example 19-7
Not all generators are full period. Consider another "toy" generator X_i = (3X_{i−1} + 1) mod (7), with seed
X_0 = 0. This produces the integer sequence X_1 = 1, X_2 = 4, X_3 = 6, X_4 = 5, X_5 = 2, X_6 = 0, whereupon
cycling ensues. Further, notice that for this generator, a seed of X_0 = 3 produces the sequence X_1 = 3 =
X_2 = X_3 = ···, not very random looking!

The cycle length of the generator from Example 19-7 obviously depends on the seed
chosen, which is a disadvantage. Full-period generators, such as that studied in Example
19-6, obviously avoid this problem. A full-period generator with a long cycle length is given
in the following example.

Example 19-8

The generator X_i = 16807X_{i−1} mod (2^31 − 1) is full period. Since c = 0, this generator is termed a mul-
tiplicative LCG and must be initialized with a seed X_0 ≠ 0. This generator is used in many real-world appli-
cations and passes most statistical tests for uniformity and randomness. In order to avoid integer
overflow and real-arithmetic round-off problems, Bratley, Fox, and Schrage (1987) offer the follow-
ing Fortran implementation scheme for this algorithm.

      FUNCTION UNIF(IX)
      K1 = IX/127773
      IX = 16807*(IX - K1*127773) - K1*2836
      IF (IX.LT.0) IX = IX + 2147483647
      UNIF = IX * 4.656612875E-10
      RETURN
      END

In the above program, we input an integer seed IX and receive a PRN UNIF. The seed IX is auto-
matically updated for the next call. Note that in Fortran, integer division results in truncation, for
example 15/4 = 3; thus K1 is an integer.

19-2.2 Generating Nonuniform Random Variables


The goal now is to generate random variables from distributions other than the uniform. The
methods we will use to do so always start with a PRN and then apply an appropriate trans-
formation to the PRN that gives the desired nonuniform random variable. Such nonuniform
random variables are important in simulation for a number of reasons: for example, cus-
tomer arrivals to a service facility often follow a Poisson process; service times may be nor-
mal; and routing decisions are usually characterized by Bernoulli random variables.

Inverse Transform Method for Random Variate Generation

The most basic technique for generating random variables from a uniform PRN relies on
the remarkable Inverse Transform Theorem.

Theorem 19-1
If X is arandom variable with cumulative distribution function (CDF) F(x), then the random
variable Y = F(X) has the uniform (0,1) distribution.

Proof
For ease of exposition, suppose that X is a continuous random variable. Then the CDF of Y is

G(y) = Pr(Y ≤ y)
     = Pr(F(X) ≤ y)
     = Pr(X ≤ F^(−1)(y))   (the inverse exists since F(x) is continuous)
     = F(F^(−1)(y))
     = y.

Since G(y) = y is the CDF of the uniform (0,1) distribution, we are done.
With Theorem 19-1 in hand, it is easy to generate certain random variables. All one has
to do is the following:
1. Find the CDF of X, say F(x).

2. Set F(X) = U, where U is a uniform (0,1) PRN.


3. Solve for X = F^(−1)(U).
We illustrate this technique with a series of examples, for both continuous and discrete
distributions.

Example 19-9
Here we generate an exponential random variable with rate λ, following the recipe outlined above.
1. The CDF is F(x) = 1 − e^(−λx), x ≥ 0.
2. Set F(X) = 1 − e^(−λX) = U.
3. Solving for X, we obtain X = F^(−1)(U) = −[ln(1 − U)]/λ.
Thus, if one supplies a uniform (0,1) PRN U, we see that X = −[ln(1 − U)]/λ is an exponential ran-
dom variable with parameter λ.

Example 19-10
Now we try to generate a standard normal random variable, call it Z. Using the special notation Φ(·) for
the standard normal (0,1) CDF, we set Φ(Z) = U, so that Z = Φ^(−1)(U). Unfortunately, the inverse CDF
does not exist in closed form, so one must resort to the use of standard normal tables (or other approxi-
mations). For instance, if we have U = 0.72, then Table II (Appendix) yields Z = Φ^(−1)(0.72) = 0.583.

Example 19-11
We can extend the previous example to generate any normal random variable, that is, one with arbi-
trary mean and variance. This follows easily, since if Z is standard normal, then X = μ + σZ is normal
with mean μ and variance σ². For instance, suppose we are interested in generating a normal variate
X with mean μ = 3 and variance σ² = 4. Then if, as in the previous example, U = 0.72, we obtain Z =
0.583, and, as a consequence, X = 3 + 2(0.583) = 4.166.

Example 19-12
We can also use the ideas from Theorem 19-1 to generate realizations from discrete random variables.
Suppose that the discrete random variable X has probability function

p(x) = 0.3  if x = −1,
       0.6  if x = 2.3,
       0.1  if x = 7,
       0    otherwise.

To generate variates from this distribution, we set up the following table, where F(x) is the associated
CDF and U denotes the set of uniform (0,1) PRNs corresponding to each x-value:

x      p(x)   F(x)   U
−1     0.3    0.3    [0, 0.3)
2.3    0.6    0.9    [0.3, 0.9)
7      0.1    1.0    [0.9, 1.0)

To generate a realization of X, we first generate a PRN U and then read the corresponding x-value
from the table. For instance, if U = 0.43, then X = 2.3.
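Both the continuous recipe of Example 19-9 and the table lookup of Example 19-12 take only a few lines of code. A Python sketch (ours; the function names are hypothetical):

    import math

    def exp_variate(u, lam):
        """Inverse transform for an exponential(lam) variate (Example 19-9)."""
        return -math.log(1 - u) / lam

    def discrete_variate(u):
        """Table lookup for the discrete distribution of Example 19-12."""
        if u < 0.3:
            return -1.0
        elif u < 0.9:
            return 2.3
        else:
            return 7.0

    print(exp_variate(0.5, 2.0))    # about 0.347
    print(discrete_variate(0.43))   # 2.3, as in the example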

Other Random Variate Generation Methods

Although the inverse transform method is intuitively pleasing to use, it may sometimes be
difficult to apply in practice. For instance, closed-form expressions for
the inverse CDF, F^(−1)(U), might not exist, as is the case for the normal distribution, or appli-
cation of the method might be unnecessarily tedious. We now present a small potpourri of
interesting methods to generate a variety of random variables.

Box-Muller Method
The Box-Muller (1958) method is an exact technique for generating independent and iden-
tically distributed (IID) standard normal (0,1) random variables. The appropriate theorem,
stated without proof, is

Theorem 19-2

Suppose that U_1 and U_2 are IID uniform (0,1) random variables. Then

Z_1 = √(−2 ln(U_1)) cos(2πU_2)

and

Z_2 = √(−2 ln(U_1)) sin(2πU_2)

are IID standard normal random variates.
Note that the sine and cosine evaluations must be carried out in radians.

Example 19-13
Suppose that U_1 = 0.35 and U_2 = 0.65 are two IID PRNs. Using the Box-Muller method to generate
two normal (0,1) random variates, we obtain

Z_1 = √(−2 ln(0.35)) cos(2π(0.65)) = −0.852

and

Z_2 = √(−2 ln(0.35)) sin(2π(0.65)) = −1.172.
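In code, the transformation is direct. A Python sketch (ours; box_muller is a hypothetical name):

    import math

    def box_muller(u1, u2):
        """Turn two IID uniform(0,1) PRNs into two IID N(0,1) variates."""
        r = math.sqrt(-2 * math.log(u1))
        return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

    print(box_muller(0.35, 0.65))   # about (-0.852, -1.172), as in Example 19-13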

Central Limit Theorem

One can also use the Central Limit Theorem (CLT) to generate "quick-and-dirty" random
variables that are approximately normal. Suppose that U_1, U_2, ..., U_n are IID PRNs. Then
for large enough n, the CLT says that

[Σ_{i=1}^n U_i − E(Σ_{i=1}^n U_i)] / √(Var(Σ_{i=1}^n U_i)) = [Σ_{i=1}^n U_i − n/2] / √(n/12) ≈ N(0,1).

In particular, the choice n = 12 (which turns out to be "large enough") yields the conven-
ient approximation

Σ_{i=1}^{12} U_i − 6 ≈ N(0,1).

Example 19-14
Suppose we have the following PRNs:

0.28  0.87  0.44  0.49  0.10  0.76  0.65  0.98  0.24  0.29  0.77  0.90.

Then

Σ_{i=1}^{12} U_i − 6 = 0.77

is a realization from a distribution that is approximately standard normal.

Convolution
Another popular trick involves the generation of random variables via convolution, indi-
cating that some sort of sum is involved.

Example 19-15

Suppose that X_1, X_2, ..., X_n are IID exponential random variables with rate λ. Then Y = Σ_{i=1}^n X_i is said
to have an Erlang distribution with parameters n and λ. It turns out that this distribution has proba-
bility density function

f(y) = λ^n y^(n−1) e^(−λy)/(n − 1)!  if y > 0,
     = 0                            otherwise,    (19-4)

which readers may recognize as a special case of the gamma distribution (see Exercise 19-16).
This distribution's CDF is too difficult to invert directly. One way that comes to mind to gener-
ate a realization from the Erlang is simply to generate and then add up n IID exponential(λ) random
variables. The following scheme is an efficient way to do precisely that. Suppose that U_1, U_2, ..., U_n
are IID PRNs. From Example 19-9, we know that X_i = −(1/λ)ln(1 − U_i), i = 1, 2, ..., n, are IID exponen-
tial(λ) random variables. Therefore, we can write

Y = Σ_{i=1}^n X_i
  = Σ_{i=1}^n [−(1/λ)ln(1 − U_i)]
  = −(1/λ)ln[Π_{i=1}^n (1 − U_i)].

This implementation is quite efficient, since it requires only one execution of a natural log operation.
In fact, we can even do slightly better from an efficiency point of view: simply note that both U_i and
(1 − U_i) are uniform (0,1). Then

Y = −(1/λ)ln[Π_{i=1}^n U_i]

is also Erlang.

To illustrate, suppose that we have three IID PRNs at our disposal, U_1 = 0.23, U_2 = 0.97, and U_3 =
0.48. To generate an Erlang realization with parameters n = 3 and λ = 2, we simply take

Y = −(1/2)ln(U_1U_2U_3) = −(1/2)ln((0.23)(0.97)(0.48)) ≈ 1.117.
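The one-logarithm trick is equally brief in code. A Python sketch (ours; erlang_variate is a hypothetical name):

    import math
    import random

    def erlang_variate(n, lam):
        """Erlang(n, lam) variate as -(1/lam) * ln(product of n uniforms)."""
        prod = 1.0
        for _ in range(n):
            prod *= random.random()
        return -math.log(prod) / lam

    # Reproducing the worked illustration with fixed uniforms:
    print(-math.log(0.23 * 0.97 * 0.48) / 2)   # about 1.117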

Acceptance—Rejection
One of the most popular classes of random variate generation procedures proceeds by sam-
pling PRNs until some appropriate “acceptance” criterion is met.

Example 19-16

An easy example of the acceptance-rejection technique involves the generation of a geometric ran-
dom variable with success probability p. To this end, consider a sequence of PRNs U_1, U_2, .... Our
aim is to generate a geometric realization X, that is, one that has probability function

p(x) = (1 − p)^(x−1) p,  x = 1, 2, ...,
     = 0,                otherwise.

In words, X represents the number of Bernoulli trials until the first success is observed. This English
characterization immediately suggests an elementary acceptance-rejection algorithm.

1. Initialize i ← 0.
2. Let i ← i + 1.
3. Take a Bernoulli(p) observation,

Y_i = 1  if U_i < p,
    = 0  otherwise.

4. If Y_i = 1, then we have our first success and we stop, in which case we accept X = i. Other-
wise, if Y_i = 0, then we reject and go back to step 2.

To illustrate, let us generate a geometric variate having success probability p = 0.3. Suppose we
have at our disposal the following PRNs:

0.38  0.67  0.24  0.89  0.10  0.71.

Since U_1 = 0.38 ≥ p, we have Y_1 = 0, and so we reject X = 1. Since U_2 = 0.67 ≥ p, we have Y_2 = 0, and
so we reject X = 2. Since U_3 = 0.24 < p, we have Y_3 = 1, and so we accept X = 3.
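The algorithm is a short loop in any language. A Python sketch (ours; geometric_variate is a hypothetical name):

    import random

    def geometric_variate(p):
        """Acceptance-rejection for a geometric(p) variate: count Bernoulli
        trials until the first success (steps 1-4 above)."""
        i = 0
        while True:
            i += 1
            if random.random() < p:   # Bernoulli(p) success on trial i
                return i

    random.seed(2023)
    print(geometric_variate(0.3))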

19-3 OUTPUT ANALYSIS


Simulation output analysis is one of the most important aspects of any proper and complete
simulation study. Since the input processes driving a simulation are usually random vari-
ables (e.g., interarrival times, service times, and breakdown times), we must also regard the
output from the simulation as random. Thus, runs of the simulation only yield estimates of
measures of system performance (e.g., the mean customer waiting time). These estimators
are themselves random variables and are therefore subject to sampling error—and sampling
error must be taken into account to make valid inferences concerning system performance.

The problem is that simulations almost never produce convenient raw output that is IID
normal data. For example, consecutive customer waiting times from a queueing system
* are not independent—typically, they are serially correlated; if one customer at the
post office waits in line a long time, then the next customer is also likely to wait a
long time.
* are not identically distributed; customers showing up early in the morning might
have a much shorter wait than those who show up just before closing time.
* are not normally distributed—they are usually skewed to the right (and are certainly
never less than zero).
The point is that it is difficult to apply “classical” statistical techniques to the analysis
of simulation output. Our purpose here is to give methods to perform statistical analysis of
output from discrete-event computer simulations. To facilitate the presentation, we identify
two types of simulations with respect to ee analysis: Terminating and steady state
simulations.
1. Terminating (or transient) simulations. Here, the nature of the problem explicitly
defines the length of the simulation run. For instance, we might be interested in sim-
ulating a bank that closes at a specific time each day.
2. Nonterminating (steady state) simulations. Here, the long-run behavior of the sys-
tem is studied. Presumably this "steady state" behavior is independent of the simula-
tion’s initial conditions. An example is that of a continuously running production line
for which the experimenter is interested in some long-run performance measure.
Techniques to analyze output from terminating simulations are based on the method of
_ independent replications, discussed in Section 19-3.1. Additional problems arise for steady
state simulations. For instance, we must now worry about the problem of starting the
simulation—how should it be initialized at time zero, and how long must it be run before
data representative of steady state can be collected? Initialization problems are considered
in Section 19-3.2. Finally, Section 19-3.3 deals with point and confidence interval estima-
tion for steady state simulation performance parameters.

19-3.1 Terminating Simulation Analysis


Here we are interested in simulating some system of interest over a finite time horizon. For
now, assume we obtain discrete simulation output Y_1, Y_2, ..., Y_m, where the number of
observations m can be a constant or a random variable. For example, the experimenter can
specify the number m of customer waiting times Y_1, Y_2, ..., Y_m to be taken from a queueing
simulation. Or m could denote the random number of customers observed during a speci-
fied time period (0, T].
Alternatively, we might observe continuous simulation output {Y(t) | 0 ≤ t ≤ T} over a
specified interval (0, T]. For instance, if we are interested in estimating the time-averaged
number of customers waiting in a queue during [0, T], the quantity Y(t) would be the num-
ber of customers in the queue at time t.
The easiest goal is to estimate the expected value of the sample mean of the
observations,
θ = E[Ȳ_m],

where the sample mean in the discrete case is

Ȳ_m = (1/m) Σ_{j=1}^m Y_j

(with a similar expression for the continuous case). For instance, we might be interested in
estimating the expected average waiting time of all customers at a shopping center during
the period 10 a.m. to 2 p.m.
Although Ȳ_m is an unbiased estimator for θ, a proper statistical analysis requires that we
also provide an estimate of Var(Ȳ_m). Since the Y_j are not necessarily IID random variables,
it may be that Var(Ȳ_m) ≠ Var(Y_j)/m for any j, a case not covered in elementary statistics
courses.
For this reason, the familiar sample variance

S² = [1/(m − 1)] Σ_{j=1}^m (Y_j − Ȳ_m)²

is likely to be highly biased as an estimator of mVar(Ȳ_m). Thus, one should not use S²/m to
estimate Var(Ȳ_m).
The way around the problem is via the method of independent replications (IR). IR
estimates Var(Y,,) by conducting b independent simulation runs (replications) of the system
under study, where each replication consists of m observations. It is easy to make the repli-
cations independent—simply reinitialize each replication with a different pseudorandom
number seed.
To proceed, denote the sample mean from replication i by

Z_i = (1/m) Σ_{j=1}^m Y_{i,j},

where Y_{i,j} is observation j from replication i, for i = 1, 2, ..., b and j = 1, 2, ..., m.
If each run is started under the same operating conditions (e.g., all queues empty and
idle), then the replication sample means Z_1, Z_2, ..., Z_b are IID random variables. Then the
obvious point estimator for Var(Ȳ_m) = Var(Z_i) is

V_R = [1/(b − 1)] Σ_{i=1}^b (Z_i − Z̄_b)²,

where the grand mean is defined as

Z̄_b = (1/b) Σ_{i=1}^b Z_i.

Notice how closely the forms of V_R and S²/m resemble each other. But since the replicate
sample means are IID, V_R is usually much less biased for Var(Ȳ_m) than is S²/m.
In light of the above discussion, we see that V_R/b is a reasonable estimator for Var(Z̄_b).
Further, if the number of observations per replication, m, is large enough, the Central Limit
Theorem tells us that the replicate sample means are approximately IID normal.
Then basic statistics (Chapter 10) yields an approximate 100(1 − α)% two-sided con-
fidence interval (CI) for θ,

θ ∈ Z̄_b ± t_{α/2,b−1} √(V_R/b),    (19-5)

where t_{α/2,b−1} is the 1 − α/2 percentage point of the t distribution with b − 1 degrees of freedom.

Example 19-17
Suppose we want to estimate the expected average waiting time for the first 5000 customers in a cer-
tain queueing system. We will make five independent replications of the system, with each run ini-
tialized empty and idle and consisting of 5000 waiting times. The resulting replicate means are

i     1     2     3     4     5
Z_i   3.2   4.3   5.1   4.2   4.6

Then Z̄_5 = 4.28 and V_R = 0.487. For level α = 0.05, we have t_{0.025,4} = 2.78, and equation 19-5 gives
(3.41, 5.15) as a 95% CI for the expected average waiting time for the first 5000 customers.
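The arithmetic of equation 19-5 is easily scripted. A Python sketch (ours; replication_ci is a hypothetical name) that reproduces the interval above:

    import math
    from statistics import mean, variance

    def replication_ci(z, t_crit):
        """CI of equation 19-5 from IID replicate means z, where t_crit
        is the t percentage point t_{alpha/2, b-1}."""
        b = len(z)
        zbar = mean(z)
        vr = variance(z)   # sample variance of the replicate means, V_R
        half = t_crit * math.sqrt(vr / b)
        return zbar - half, zbar + half

    print(replication_ci([3.2, 4.3, 5.1, 4.2, 4.6], 2.78))   # about (3.41, 5.15)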

Independent replications can be used to calculate variance estimates for statistics other
than sample means. Then the method can be used to obtain CIs for quantities other than
E[Ȳ_m], for example quantiles. See any of the standard simulation texts for additional uses
of independent replications.

19-3.2 Initialization Problems


Before a simulation can be run, one must provide initial values for all of the simulation’s
state variables. Since the experimenter may not know what initial values are appropriate for
the state variables, these values might be chosen somewhat arbitrarily. For instance, we
might decide that it is “most convenient” to initialize a queue as empty and idle. Such a
choice of initial conditions can have a significant but unrecognized impact on the simula-
tion run’s outcome. Thus, the initialization bias problem can lead to errors, particularly in
steady state output analysis.
Some examples of problems concerning simulation initialization are as follows.

1. Visual detection of initialization effects is sometimes difficult—especially in the


case of stochastic processes having high intrinsic variance, such as queueing
systems.
2. How should the simulation be initialized? Suppose that a machine shop closes at a
certain time each day, even if there are jobs waiting to be served. One must there-
fore be careful to start each day with a demand that depends on the number of jobs
remaining from the previous day.
3. Initialization bias can lead to point estimators for steady state parameters having
high mean squared error, as well as for CIs having poor coverage.

Since initialization bias raises important concerns, how do we detect and deal with it?
We first list methods to detect it.

1. Attempt to detect the bias visually by scanning a realization of the simulated


process. This might not be easy, since visual analysis can miss bias. Further, a
visual scan can be tedious. To make the visual analysis more efficient, one might
transform the data (e.g., take logs or square roots), smooth it, average it across sev-
eral independent replications, or construct moving average plots.
590 Chapter 19 Computer Simulation

2. Conduct statistical tests for initialization bias. Kelton and Law (1983) give an intu-
itively appealing sequential procedure to detect bias. Various other tests check to see
whether the initial portion of the simulation output contains more variation than lat-
ter portions.
If initialization bias is detected, one may want to do something about it. There are two
simple methods for dealing with bias. One is to truncate the output by allowing the simu-
lation to “warm up” before data are retained for analysis. The experimenter would then
hope that the remaining data are representative of the steady state system. Output truncation
is probably the most popular method for dealing with initialization bias, and all of the
major simulation languages have built-in truncation functions. But how can one find a good
truncation point? If the output is truncated “too early,” significant bias might still exist in the
remaining data. If it is truncated "too late," then good observations might be wasted. Unfor-
tunately, no simple rule for determining the truncation point performs well in general. A
common practice is to average observations across several replications and then visually
choose a truncation point based on the averaged run. See Welch (1983) for a good
visual/graphical approach.
The second method is to make a very long run to overwhelm the effects of initializa-
tion bias. This method of bias control is conceptually simple to carry out and may yield
point estimators having lower mean squared errors than the analogous estimators from
truncated data (see, e.g., Fishman 1978). However, a problem with this approach is that it
can be wasteful with observations; for some systems, an excessive run length might be
required before the initialization effects are rendered negligible.

19-3.3 Steady State Simulation Analysis


Now assume that we have on hand stationary (steady state) simulation output, Y_1, Y_2, ..., Y_n.
Our goal is to estimate some parameter of interest, possibly the mean customer waiting time
or the expected profit produced by a certain factory configuration. As in the case of termi-
nating simulations, a good statistical analysis must accompany the value of any point esti-
mator with a measure of its variance.
A number of methodologies have been proposed in the literature for conducting steady
state output analysis: batch means, independent replications, standardized time series, spec-
tral analysis, regeneration, time series modeling, as well as a host of others. We will examine
the two most popular: batch means and independent replications. (Recall: As discussed ear-
lier, confidence intervals for terminating simulations usually use independent replications.)

Batch Means

The method of batch means is often used to estimate Var(Ȳ_n) or calculate CIs for the steady
state process mean μ. The idea is to divide one long simulation run into a number of con-
tiguous batches, and then to appeal to a Central Limit Theorem to assume that the resulting
batch sample means are approximately IID normal.
In particular, suppose that we partition Y_1, Y_2, ..., Y_n into b nonoverlapping, contiguous
batches, each consisting of m observations (assume that n = bm). Thus, the ith batch, i = 1,
2, ..., b, consists of the random variables

Y_{(i−1)m+1}, Y_{(i−1)m+2}, ..., Y_{im}.

The ith batch mean is simply the sample mean of the m observations from batch i, i = 1, 2, ..., b,

Z_i = (1/m) Σ_{j=1}^m Y_{(i−1)m+j}.

Similar to independent replications, we define the batch means estimator for Var(Z_i) as

V_B = [1/(b − 1)] Σ_{i=1}^b (Z_i − Z̄_b)²,

where

Z̄_b = (1/b) Σ_{i=1}^b Z_i

is the grand sample mean. If m is large, then the batch means are approximately IID nor-
mal, and (as for IR) we obtain an approximate 100(1 − α)% CI for μ,

μ ∈ Z̄_b ± t_{α/2,b−1} √(V_B/b).
This equation is very similar to equation 19-5. Of course, the difference here is that
batch means divides one long run into a number of batches, whereas independent replica-
tions uses a number of independent shorter runs. Indeed, consider the old IR example from
Section 19-3.1 with the understanding that the Z_i must now be regarded as batch means
(instead of replicate means); then the same numbers carry through the example.
The technique of batch means is intuitively appealing and easy to understand. But
problems can come up if the Y_i are not stationary (e.g., if significant initialization bias is
present), if the batch means are not normal, or if the batch means are not independent. If any
of these assumption violations exist, poor confidence interval coverage may result—unbe-
knownst to the analyst.
To ameliorate the initialization bias problem, the user can truncate some of the data or
make a long run, as discussed in Section 19-3.2. In addition, the lack of independence or
normality of the batch means can be countered by increasing the batch size m.
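For a concrete view of the mechanics, here is a Python sketch (ours; batch_means_ci is a hypothetical name) that batches one long output series and forms the CI above:

    import math
    from statistics import mean, variance

    def batch_means_ci(y, b, t_crit):
        """Split y into b contiguous batches of equal size, compute the
        batch means, and return the approximate CI for the process mean."""
        m = len(y) // b
        z = [mean(y[i * m:(i + 1) * m]) for i in range(b)]   # batch means Z_i
        zbar = mean(z)                                       # grand mean
        vb = variance(z)                                     # V_B
        half = t_crit * math.sqrt(vb / b)
        return zbar - half, zbar + half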

Independent Replications
Of the difficulties encountered when using batch means, the possibility of correlation
among the batch means might be the most troublesome. This problem is explicitly avoided
by the method of IR, described in the context of terminating simulations in Section 19-3.1.
In fact, the replicate means are independent by their construction. Unfortunately, since
each of the b replications has to be started properly, initialization bias presents more trou-
ble when using IR than when using batch means. The usual recommendation, in the context
of steady state analysis, is to use batch means over IR because of the possible initialization
bias in each of the replications.

19-4 COMPARISON OF SYSTEMS


One of the most important uses of simulation output analysis regards the comparison of
competing systems or alternative system configurations. For example, suppose we wish to
evaluate two different "restart" strategies that an airline can invoke following a major traf-
fic disruption, such as a snowstorm in the Northeast. Which policy minimizes a certain cost
function associated with the restart? Simulation is uniquely equipped to help the experi-
menter conduct this type of comparison analysis.
There are many techniques available for comparing systems, among them (i) classical
statistical CIs, (ii) common random numbers, (iii) antithetic variates, and (iv) ranking and
selection procedures.
19-4.1 Classical Confidence Intervals
With our airline example in mind, let Z_{i,j} be the cost from the jth simulation replication of
strategy i, i = 1, 2, j = 1, 2, ..., b. Assume that Z_{i,1}, Z_{i,2}, ..., Z_{i,b} are IID normal with
unknown mean μ_i and unknown variance, i = 1, 2, an assumption that can be justified by
arguing that we can do the following:
1. Get independent data by controlling the random numbers between replications.
2. Get identically distributed costs between replications by performing the replications
under identical conditions.
3. Get approximately normal data by adding up (or averaging) many subcosts to obtain
overall costs for both strategies.
The goal here is to calculate a 100(1 − α)% CI for the difference μ_1 − μ_2. To this end,
suppose that the Z_{1,j} are independent of the Z_{2,j} and define

Z̄_{i,b} = (1/b) Σ_{j=1}^b Z_{i,j}

and

S_i² = [1/(b − 1)] Σ_{j=1}^b (Z_{i,j} − Z̄_{i,b})²,  i = 1, 2.

An approximate 100(1 − α)% CI is

μ_1 − μ_2 ∈ Z̄_{1,b} − Z̄_{2,b} ± t_{α/2,ν} √(S_1²/b + S_2²/b),

where the approximate degrees of freedom ν (a function of the sample variances) is given
in Chapter 10.
Suppose (as in the airline example) that small cost is good. If the interval lies entirely to
the left [right] of zero, then system 1 [2] is better; if the interval contains zero, then the two
systems must be regarded, in a statistical sense, as about the same.
An alternative classical strategy is to use a CI that is analogous to a paired t-test. Here
we take b replications from both strategies and set the differences D_j = Z_{1,j} − Z_{2,j} for j = 1,
2, ..., b. Then we calculate the sample mean and variance of the differences:

D̄_b = (1/b) Σ_{j=1}^b D_j  and  S_D² = [1/(b − 1)] Σ_{j=1}^b (D_j − D̄_b)².

The resulting 100(1 − α)% CI is

μ_1 − μ_2 ∈ D̄_b ± t_{α/2,b−1} √(S_D²/b).
These paired-t intervals are very efficient if Corr(Z_{1,j}, Z_{2,j}) > 0, j = 1, 2, ..., b (where we
still assume that Z_{1,1}, Z_{1,2}, ..., Z_{1,b} are IID and Z_{2,1}, Z_{2,2}, ..., Z_{2,b} are IID). In that case, it turns
out that

V(D_j) < V(Z_{1,j}) + V(Z_{2,j}).

If Z_{1,j} and Z_{2,j} had been simulated independently, then we would have equality in the above
expression. Thus, the trick may result in relatively small S_D and, hence, small CI length. So
how do we invoke the trick?

19-4.2 Common Random Numbers


The idea behind the above trick is to use common random numbers, that is, to use the same
pseudorandom numbers in exactly the same ways for corresponding runs of each of the
competing systems. For example, we might use precisely the same customer arrival times
when simulating different proposed configurations of a job shop. By subjecting the alter-
native systems to identical experimental conditions, we hope to make it easy to distinguish
which systems are best even though the respective estimators are subject to sampling error.
Consider the case in which we compare two queueing systems, A and B, on the basis
of their expected customer transit times, θ_A and θ_B, where the smaller θ-value corresponds
to the better system. Suppose we have estimators θ̂_A and θ̂_B for θ_A and θ_B, respectively. We
will declare A as the better system if θ̂_A < θ̂_B. If θ̂_A and θ̂_B are simulated independently, then
the variance of their difference,

V(θ̂_A − θ̂_B) = V(θ̂_A) + V(θ̂_B),

could be very large, in which case our declaration might lack conviction. If we could reduce
V(θ̂_A − θ̂_B), then we could be much more confident about our declaration.
CRN sometimes induces a high positive correlation between the point estimators θ̂_A
and θ̂_B. Then we have

V(θ̂_A − θ̂_B) = V(θ̂_A) + V(θ̂_B) − 2Cov(θ̂_A, θ̂_B)
            < V(θ̂_A) + V(θ̂_B),

and we obtain a savings in variance.

19-4.3 Antithetic Random Numbers


Alternatively, if we can induce negative correlation between two unbiased estimators, θ̂_1 and
θ̂_2, for some parameter θ, then the unbiased estimator (θ̂_1 + θ̂_2)/2 might have low variance.
Most simulation texts give advice on how to run the simulations of the competing sys-
tems so as to induce positive or negative correlation between them. The consensus is that if
conducted properly, common and antithetic random numbers can lead to tremendous vari-
ance reductions.

19-4.4 Selecting the Best System


Ranking, selection, and multiple comparisons methods form another class of statistical
techniques used to compare alternative systems. Here, the experimenter is interested in
selecting the best of a number of competing processes. Typically, one specifies the desired
probability of correctly selecting the best process, especially if the best process is signifi-
cantly better than its competitors. These methods are simple to use, fairly general, and intu-
itively appealing. See Bechhofer, Santner, and Goldsman (1995) for a synopsis of the most
popular procedures.

19-5 SUMMARY
This chapter began with some simple motivational examples illustrating various simulation
concepts. After this, the discussion turned to the generation of pseudorandom numbers, that
is, numbers that appear to be IID uniform (0,1). PRNs are important because they drive the
generation of a number of other important random variables, for example normal, expo-
nential, and Erlang. We also spent a great deal of discussion on simulation output analy-
sis—simulation output is almost never IID, so special care must be taken if we are to make

statistically valid conclusions about the simulation's results. We concentrated on output
analysis for both terminating and steady state simulations.

19-6 EXERCISES
19-1. Extension of Example 19-1.
(a) Flip a coin 100 times. How many heads do you observe?
(b) How many times do you observe two heads in a row? Three in a row? Four? Five?
(c) Find 10 friends and repeat (a) and (b) based on a total of 1000 flips.
(d) Now simulate coin flips via a spreadsheet program. Flip the simulated coin 10,000 times and answer (a) and (b).

19-2. Extension of Example 19-2. Throw n darts randomly at a unit square containing an inscribed circle. Use the results of your tosses to estimate π. Let n = 2^k for k = 1, 2, ..., 15, and graph your estimates as a function of k.

19-3. Extension of Example 19-3. Show that Î_n, defined in Equation 19-3, is unbiased for the integral I, defined in Equation 19-2.

19-4. Other extensions of Example 19-3.
(a) Use Monte Carlo integration with n = 10 observations to estimate ∫₀² (1/√(2π)) e^(−x²/2) dx. Now use n = 1000. Compare to the answer that you can obtain via normal tables.
(b) What would you do if you had to estimate ∫₀¹⁰ (1/√(2π)) e^(−x²/2) dx?
(c) Use Monte Carlo integration with n = 10 observations to estimate ∫₀¹ cos(2πx) dx. Now use n = 1000. Compare to the actual answer.

19-5. Extension of Example 19-4. Suppose that 10 customers arrive at a post office at the following times:

rd SO lS 4D 20h 2S 8 30

Upon arrival, customers queue up in front of a single clerk and are processed in a first-come-first-served manner. The service times corresponding to the arriving customers are as follows:

GOT 5:594:0 1:0 82759200270 254 02S

Assume that the post office opens at time 0, and closes its doors at time 30 (just after customer 10 arrives), serving any remaining customers.
(a) When does the last customer finally leave the system?
(b) What is the average waiting time for the 10 customers?
(c) What is the maximum number of customers in the system? When is this maximum achieved?
(d) What is the average number of customers in line during the first 30 minutes?
(e) Now repeat parts (a)-(d) assuming that the services are performed last-in-first-out.

19-6. Repeat Example 19-5, which deals with an (s, S) inventory policy, except now use order level s = 6.

19-7. Consider the pseudorandom number generator X_i = (5X_{i−1} + 1) mod (16), with seed X_0 = 0.
(a) Calculate X_1 and X_2, along with the corresponding PRNs U_1 and U_2.
(b) Is this a full-period generator?
(c) What is X_150?

19-8. Consider the "recommended" pseudorandom number generator X_i = 16807 X_{i−1} mod (2^31 − 1), with seed X_0 = 1234567.
(a) Calculate X_1 and X_2, along with the corresponding PRNs U_1 and U_2.
(b) What is X_500,000?

19-9. Show how to use the inverse transform method to generate an exponential random variable with rate λ = 2. Demonstrate your technique using the PRN U = 0.75.

19-10. Consider the inverse transform method to generate a standard normal (0,1) random variable.
(a) Demonstrate your technique using the PRN U = 0.25.
(b) Using your answer in (a), generate an N(1,9) random variable.

19-11. Suppose that X has probability density function f(x), −2 < x < 2.
(a) Develop an inverse transform technique to generate a realization of X.
(b) Demonstrate your technique using U = 0.6.
(c) Sketch out f(x) and see if you can come up with another method to generate X.

19-12. Suppose that the discrete random variable X has probability function

p(x) = 0.35 if x = −2.5,
       0.25 if x = 1.0,
       0.40 if x = 10.5,
       0    otherwise.

As in Example 19-12, set up a table to generate realizations from this distribution. Illustrate your technique with the PRN U = 0.86.

19-13. The Weibull(α, β) distribution, popular in reliability theory and other applied statistics disciplines, has CDF

F(x) = 1 − e^(−(x/α)^β) if x > 0,
F(x) = 0 otherwise.

(a) Show how to use the inverse transform method to generate a realization from the Weibull distribution.
(b) Demonstrate your technique for a Weibull(1.5, 2.0) random variable using the PRN U = 0.66.

19-14. Suppose that U_1 = 0.45 and U_2 = 0.12 are two IID PRNs. Use the Box–Müller method to generate two N(0,1) variates.

19-15. Consider the following PRNs:

0.88 0.87 0.33 0.69 0.20 0.79 0.21 0.96 0.11 0.42 0.91 0.70

Use the Central Limit Theorem method to generate a realization that is approximately standard normal.

19-16. Prove Equation 19-4 from the text. This shows that the sum of n IID exponential random variables is Erlang. Hint: Find the moment-generating function of Y, and compare it to that of the gamma distribution.

19-17. Using two PRNs, U_1 = 0.73 and U_2 = 0.11, generate a realization from an Erlang distribution with n = 2 and λ = 3.

19-18. Suppose that U_1, U_2, ..., U_n are PRNs.
(a) Suggest an easy inverse transform method to generate a sequence of IID Bernoulli random variables, each with success parameter p.
(b) Show how to use your answer to (a) to generate a binomial random variate with parameters n and p.

19-19. Use the acceptance-rejection technique to generate a geometric random variable with success probability 0.25. Use as many of the PRNs from Exercise 19-15 as necessary.

19-20. Suppose that Z_1 = 3, Z_2 = 5, and Z_3 = 4 are three batch means resulting from a long simulation run. Find a 90% two-sided confidence interval for the mean.

19-21. Suppose that μ ∈ [−2.5, 3.5] is a 90% confidence interval for the mean cost incurred by a certain inventory policy. Further suppose that this interval was based on five independent replications of the underlying inventory system. Unfortunately, the boss has decided that she wants a 95% confidence interval. Can you supply it?

19-22. The yearly unemployment rates for Andorra during the past 15 years are as follows:

CIO MSSiesS Sagiili4 Sel Sel) 10.6 11.0
Oar OQ 1203 ASD) 29, 2ABI2 = 8:9

Use the method of batch means on the above data to obtain a two-sided 95% confidence interval for the mean unemployment. Use five batches, each consisting of three years' data.

19-23. Suppose that we are interested in steady-state confidence intervals for the mean of simulation output X_1, X_2, ..., X_10000. (You can pretend that these are waiting times.) We have conveniently divided the run up into five batches, each of size 2000; suppose that the resulting batch means are as follows:

100 80 90 110 120

Use the method of batch means on the above data to obtain a two-sided 95% confidence interval for the mean.

19-24. The yearly total snowfall figures for Siberacuse, NY, during the past 15 years are as follows:

100 103 88 72 98 121 106 110 99 162 123 139 92 142 169

(a) Use the method of batch means on the above data to obtain a two-sided 95% confidence interval for the mean yearly snowfall. Use five batches, each consisting of three years' data.
(b) The corresponding yearly total snowfall figures for Buffoonalo, NY (which is down the road from Siberacuse), are as follows:

OO M95) 2a OS a 95s 110 2 eeOOR 7,5
144 110 123 81 130 145

How does Buffoonalo's snowfall compare to Siberacuse's? Just give an eyeball answer.
(c) Now find a 95% confidence interval for the difference in means between the two cities. Hint: Think common random numbers.

19-25. Antithetic variates. Suppose that X_1, X_2, ..., X_n are IID with mean μ and variance σ². Further suppose that Y_1, Y_2, ..., Y_n are also IID with mean μ and variance σ². The interesting trick here is that we will also assume that Cov(X_i, Y_i) < 0 for all i. So, in other words, the observations within each of the two sequences are IID, but they are negatively correlated between sequences.
(a) Here is an example showing how we can end up with the above scenario using simulations. Let X_i = −ln(U_i) and Y_i = −ln(1 − U_i), where the U_i are the usual IID uniform (0,1) random variables.
   i. What is the distribution of X_i? Of Y_i?
   ii. What is Cov(U_i, 1 − U_i)?
   iii. Would you expect that Cov(X_i, Y_i) < 0? Answer: Yes.
(b) Let X̄_n and Ȳ_n denote the sample means of the X_i and Y_i, respectively, each based on n observations. Without actually calculating Cov(X̄_n, Ȳ_n), state how V((X̄_n + Ȳ_n)/2) compares to V(X̄_2n). In other words, should we do two negatively correlated runs, each consisting of n observations, or just one run consisting of 2n observations?
(c) What if you tried to use this trick when using Monte Carlo simulation to estimate ∫₀¹ sin(πx) dx?

19-26. Another variance reduction technique. Suppose that our goal is to estimate the mean μ of some steady-state simulation output process, X_1, X_2, ..., X_n. Suppose we somehow know the expected value of some other RV Y, and we also know that Cov(X̄, Y) > 0, where X̄ is the sample mean. Obviously, X̄ is the "usual" estimator for μ. Let us look at another estimator for μ, namely, the control-variate estimator,

C = X̄ − k(Y − E[Y]),

where k is some constant.
(a) Show that C is unbiased for μ.
(b) Find an expression for V(C). Comments?
(c) Minimize V(C) with respect to k.

19-27. A miscellaneous computer exercise. Make a histogram of X_i = −ln(U_i), for i = 1, 2, ..., 20,000, where the U_i are IID uniform (0,1). What kind of distribution does it look like?

19-28. Another miscellaneous computer exercise. Let us see if the Central Limit Theorem works. In Exercise 19-27, you generated 20,000 exponential(1) observations. Now form 1000 averages of 20 observations each from the original 20,000. More precisely, let

Ȳ_j = (1/20) Σ_{i=1}^{20} X_{20(j−1)+i},   j = 1, 2, ..., 1000.

Make a histogram of the Ȳ_j. Do they look approximately normal?

19-29. Yet another miscellaneous computer exercise. Let us generate some normal observations via the Box–Müller method. To do so, first generate 1000 pairs of IID uniform (0,1) random numbers, (U_{1,1}, U_{2,1}), (U_{1,2}, U_{2,2}), ..., (U_{1,1000}, U_{2,1000}). Set

X_i = √(−2 ln(U_{1,i})) cos(2π U_{2,i})

and

Y_i = √(−2 ln(U_{1,i})) sin(2π U_{2,i})

for i = 1, 2, ..., 1000. Make a histogram of the resulting X_i. [The X_i's are N(0,1).] Now graph X_i vs. Y_i. Any comments?
Appendix

Table I Cumulative Poisson Distribution

Table II Cumulative Standard Normal Distribution

Table III Percentage Points of the x? Distribution


Table IV Percentage Points of the t Distribution

Table V Percentage Points of the F Distribution

Chart VI Operating Characteristic Curves

Chart VII Operating Characteristic Curves for the Fixed-Effects


Model Analysis of Variance

Chart VIII Operating Characteristic Curves for the Random-Effects


Model Analysis of Variance

Table IX Critical Values for the Wilcoxon Two-Sample Test

Table X Critical Values for the Sign Test

Table XI Critical Values for the Wilcoxon Signed Rank Test

Table XII Percentage Points of the Studentized Range Statistic

Table XIII Factors for Quality-Control Charts

Table XIV k Values for One-Sided and Two-Sided Tolerance Intervals

Table XV Random Numbers



Table I Cumulative Poisson Distribution*


c = λt
x     0.01    0.05    0.10    0.20    0.30    0.40    0.50    0.60

0 0.990 0.951 0.904 0.818 0.740 0.670 0.606 0.548

1 0.999 0.998 0.995 0.982 0.963 0.938 0.909 0.878


2     0.999   0.999   0.999   0.998   0.996   0.992   0.985   0.976
3 0.999 0.999 0.999 0.998 0.996
4 0.999 0.999 0.999 0.999
5 0.999 0.999

c = λt
x 0.70 0.80 0.90 1.00 1.10 1.20 1.30 1.40

0 0.496 0.449 0.406 0.367 0.332 0.301 0.272 0.246

1 0.844 0.808 0.772 0.735 0.699 0.662 0.626 0.591


2 0.965 0.952 0.937 0.919 0.900 0.879 0.857 0.833
3 0.994 0.990 0.986 0.981 0.974 0.966 0.956 0.946
4 0.999 0.998 0.997 0.996 0.994 0.992 0.989 0.985
5 0.999 0.999 0.999 0.999 0.999 0.998 0.997 0.996

6 0.999 0.999 0.999 0.999 0.999 0.999 0.999


7 0.999 0.999 0.999 0.999 0.999
8 0.999 0.999

c = λt
x 1.50 1.60 1.70 1.80 1.90 2.00 2.10 2.20

0 0.223 0.201 0.182 0.165 0.149 0.135 0.122 0.110

1 0.557 0.524 0.493 0.462 0.433 0.406 0.379 0.354


2 0.808 0.783 0.757 0.730 0.703 0.676 0.649 0.622
3 0.934 0.921 0.906 0.891 0.874 0.857 0.838 0.819
4 0.981 0.976 0.970 0.963 0.955 0.947 0.937 0.927
5) 0.995 0.993 0.992 0.989 0.986 0.983 0.979 0.975

6 0.999 0.998 0.998 0.997 0.996 0.995 0.994 0.992


i 0.999 0.999 0.999 0.999 0.999 0.998 0.998 0.998
8 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999
9 0.999 0.999 0.999 0.999 0.999 0.999

Table I Cumulative Poisson Distribution (continued)


c = λt
x 2.30 2.40 2.50 2.60 2.70 2.80 2.90 3.00

0 0.100 0.090 0.082 0.074 0.067 0.060 0.055 0.049

1 0.330 0.308 0.287 0.267 0.248 0.231 0.214 0.199


2 0.596 0.569 0.543 0.518 0.493 0.469 0.445 0.422
3 0.799 0.778 0.757 0.736 0.714 0.691 0.669 0.647
4 0.916 0.904 0.891 0.877 0.862 0.847 0.831 0.815
5 0.970 0.964 0.957 0.950 0.943 0.934 0.925 0.916

6 0.990 0.988 0.985 0.982 0.979 0.975 0.971 0.966


7 0.997 0.996 0.995 0.994 0.993 0.991 0.990 0.988
8 0.999 0.999 0.998 0.998 0.998 0.997 0.996 0.996
9 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.998
10 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999
11 0.999 0.999 0.999 0.999 0.999 0.999
12 0.999 0.999

c = λt
x 3.50 4.00 4.50 5.00 5.50 6.00 6.50 7.00

0 0.030 0.018 0.011 0.006 0.004 0.002 0.001 0.000

1 0.135 0.091 0.061 0.040 0.026 0.017 0.011 0.007


2 0.320 0.238 0.173 0.124 0.088 0.061 0.043 0.029
3 0.536 0.433 0.342 0.265 0.201 0.151 0.111 0.081
4 0.725 0.628 0.532 0.440 0.357 0.285 0.223 0.172
5 0.857 0.785 0.702 0.615 0.528 0.445 0.369 0.300

6 0.934 0.889 0.831 0.762 0.686 0.606 0.526 0.449


7 0.973 0.948 0.913 0.866 0.809 0.743 0.672 0.598
8 0.990 0.978 0.959 0.931 0.894 0.847 0.791 0.729
9 0.996 0.991 0.982 0.968 0.946 0.916 0.877 0.830
10 0.998 0.997 0.993 0.986 0.974 0.957 0.933 0.901

11 0.999 0.999 0.997 0.994 0.989 0.979 0.966 0.946


12 0.999 0.999 0.999 0.997 0.995 0.991 0.983 0.973
13 0.999 0.999 0.999 0.999 0.998 0.996 0.992 0.987
14 0.999 0.999 0.999 0.999 0.998 0.997 0.994
15 0.999 0.999 0.999 0.999 0.998 0.997

16 0.999 0.999 0.999 0.999 0.999


17 0.999 0.999 0.999 0.999
18 0.999 0.999 0.999

19 0.999 0.999

20 0.999

(continues)

Table I Cumulative Poisson Distribution (continued)

c = λt
x 7.50 8.00 8.50 9.00 9.50 10.0 15.0 20.0

0 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000

1 0.004 0.003 0.001 0.001 0.000 0.000 0.000 0.000


2 0.020 0.013 0.009 0.006 0.004 0.002 0.000 0.000
3 0.059 0.042 0.030 0.021 0.014 0.010 0.000 0.000
4 0.132 0.099 0.074 0.054 0.040 0.029 0.000 0.000
5 0.241 0.191 0.149 0.115 0.088 0.067 0.002 0.000
6 0.378 0.313 0.256 0.206 0.164 0.130 0.007 0.000
7 0.524 0.452 0.385 0.323 0.268 0.220 0.018 0.000
8 0.661 0.592 0.523 0.455 0.391 0.332 0.037 0.002
9 0.776 0.716 0.652 0.587 0.521 0.457 0.069 0.005
10 0.862 0.815 0.763 0.705 0.645 0.583 0.118 0.010
11 0.920 0.888 0.848 0.803 0.751 0.696 0.184 0.021
12 0.957 0.936 0.909 0.875 0.836 0.791 0.267 0.039
13 0.978 0.965 0.948 0.926 0.898 0.864 0.363 0.066
14 0.989 0.982 0.972 0.958 0.940 0.916 0.465 0.104
15 0.995 0.991 0.986 0.977 0.966 0.951 0.568 0.156
16 0.998 0.996 0.993 0.988 0.982 0.972 0.664 0.221
17 0.999 0.998 0.997 0.994 0.991 0.985 0.748 0.297
18 0.999 0.999 0.998 0.997 0.995 0.992 0.819 0.381
19 0.999 0.999 0.999 0.998 0.998 0.996 0.875 0.470
20 0.999 0.999 0.999 0.999 0.999 0.998 0.917 0.559
21 0.999 0.999 0.999 0.999 0.999 0.999 0.946 0.643
22 0.999 0.999 0.999 0.999 0.999 0.967 0.720
23 0.999 0.999 0.999 0.999 0.980 0.787
24 0.999 0.999 0.988 0.843
25 0.999 0.993 0.887
26 0.996 0.922
27 0.998 0.947
28 0.999 0.965
29 0.999 0.978
30 0.999 0.986
31 0.999 0.991
32 0.999 0.995
33 0.999 0.997
34 0.998

Entries in the table are values of F(x) = P(X ≤ x) = Σ_{i=0}^{x} e^(−c) c^i / i!. Blank spaces below the last entry in any
column may be read as 1.0.
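The entries can be reproduced directly from this formula; a short sketch using only the Python standard library:

    # Sketch: cumulative Poisson F(x) = sum_{i=0}^{x} e^(-c) c^i / i!
    import math

    def poisson_cdf(x, c):
        return sum(math.exp(-c) * c**i / math.factorial(i) for i in range(x + 1))

    for x in range(7):
        print(x, poisson_cdf(x, 2.0))   # compare with the c = 2.00 column

The computed values agree with the printed column (the table appears to truncate, rather than round, to three decimals).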

Table II Cumulative Standard Normal Distribution


Φ(z) = ∫_{−∞}^{z} (1/√(2π)) e^(−u²/2) du

z      0.00       0.01       0.02       0.03       0.04       z

0.0 0.500 00 0.503 99 0.507 98 0.51197 0.51595 0.0


0.1 0.539 83 0.543 79 0.547 76 0.551 72 0.555 67 0.1
0.2 0.579 26 0.583 17 0.587 06 0.590 95 0.59483 02
0.3 0.617 91 0.621 72 0.625 51 0.629 30 0.63307 03
0.4 0.655 42 0.659 10 0.662 76 0.666 40 0.67003 0.4
0.5 0.691 46 0.694 97 0.698 47 0.701 94 0.70540 0.5
0.6 0.725 75 0.729 07 0.732 37 0.735 65 0.73891 0.6
0.7 0.758 03 0.761 15 0.764 24 0.767 30 0.77035 0.7
0.8 0.788 14 0.791 03 0.793 89 0.796 73 0.79954 08
0.9 0.815 94 0.818 59 0.821 21 0.823 81 0.82639 09
1.0 0.841 34 0.843 75 0.846 13 0.848 49 0.85083 1.0
1.1 0.864 33 0.866 50 0.868 64 0.870 76 0.872 85 1.1
1.2 0.884 93 0.886 86 0.888 77 0.890 65 0.892 51 1.2
1.3 0.903 20 0.904 90 0.906 58 0.908 24 0.909 88 1.3
1.4 0.919 24 0.920 73 0.922 19 0.923 64 0.92506 14
1.5 0.933 19 0.934 48 0.935 74 0.936 99 0.93822 1.5
1.6 0.945 20 0.946 30 0.947 38 0.948 45 0.94950 1.6
1.7 0.955 43 0.956 37 0.957 28 0.958 18 0.959 07 1.7
1.8 0.964 07 0.964 85 0.965 62 0.966 37 0.96711 18
1.9 0.971 28 0.971 93 0.972 57 0.973 20 0.97381 1.9
2.0 0.977 25 0.977 78 0.978 31 0.978 82 0.97932 2.0
2.1 0.982 14 0.982 57 0.983 00 0.983 41 0.983 82 2.1
2.2 0.986 10 0.986 45 0.986 79 0.987 13 0.987 45 2.2
2.3 0.989 28 0.989 56 0.989 83 0.990 10 0.99036 2.3
2.4 0.991 80 0.992 02 0.992 24 0.992 45 0.99266 2.4
2.5 0.993 79 0.993 96 0.994 13 0.994 30 0.99446 2.5
2.6 0.995 34 0.995 47 0.995 60 0.995 73 0.995 85 2.6
27 0.996 53 0.996 64 0.996 74 0.996 83 0.99693 2.7
2.8 0.997 44 0.997 52 0.997 60 0.997 67 0.99774 28
2.9 0.998 13 0.998 19 0.998 25 0.998 31 0.99836 2.9
3.0 0.998 65 0.998 69 0.998 74 0.998 78 0.99882 3.0
3.1 0.999 03 0.999 06 0.999 10 0.999 13 0.99916 3.1
3.2 0.999 31 0.999 34 0.999 36 0.999 38 0.999 40 3.2
3.3 0.999 52 0.999 53 0.999 55 0.999 57 0.99958 3.3
3.4 0.999 66 0.999 68 0.999 69 0.999 70 0.99971 3.4
3.5 0.999 77 0.999 78 0.999 78 0.999 79 0.99980 3.5
3.6 0.999 84 0.999 85 0.999 85 0.999 86 0.99986 3.6
3.7 0.999 89 0.999 90 0.999 90 0.999 90 0.99991 3.7
3.8 0.999 93 0.999 93 0.999 93 0.999 94 0.99994 3.8
3.9 0.999 95 0.999 95 0.999 96 0.999 96 0.99996 3.9
(continues)

Table II Cumulative Standard Normal Distribution (continued)

Φ(z) = ∫_{−∞}^{z} (1/√(2π)) e^(−u²/2) du

z      0.05       0.06       0.07       0.08       0.09       z


0.0 0.519 94 0.523 92 0.52790 0.531 88 0.53586 0.0
0.1 0.559 62 0.563 56 0.567 49 0.571 42 0.57534 0.1
0.2 0.598 71 0.602 57 0.606 42 0.610 26 0.61409 0.2
0.3 0.636 83 0.640 58 0.644 31 0.648 03 0.65173 0.3
0.4 0.673 64 0.677 24 0.680 82 0.684 38 0.68793 0.4
0.5 0.708 84 0.712 26 0.715 66 0.719 04 0.72240 0.5
0.6 0.742 15 0.745 37 0.748 57 0.751 75 0.75490 0.6
0.7 0.773 37 0.776 37 0.779 35 0.782 30 0.78523 0.7
0.8 0.802 34 0.805 10 0.807 85 0.810 57 0.81327 0.8
0.9 0.828 94 0.831 47 0.833 97 0.836 46 0.83891 09
1.0 0.853 14 0.855 43 0.857 69 0.859 93 0.86214 1.0
1.1 0.874 93 0.876 97 0.879 00 0.881 00 0.882 97 1.1
12 0.894 35 0.896 16 0.897 96 0.899 73 0.90147 1.2
1.3 0.911 49 0.913 08 0.914 65 0.916 21 0.91773 13
14 0.926 47 0.927 85 0.92922 0.930 56 0.93189 14
1.5 0.939 43 0.940 62 0.941 79 0.942 95 0.94408 15
1.6 0.950 53 0.951 54 0.952 54 0.953 52 0.95448 16
1.7 0.959 94 0.960 80 0.961 64 0.962 46 0.963 27 1.7
1.8 0.967 84 0.968 56 0.969 26 0.969 95 0.97062 18
1.9 0.974 41 0.975 00 0.975 58 0.976 15 0.97670 19
2.0 0.979 82 0.980 30 0.980 77 0.981 24 0.98169 2.0
2.1 0.984 22 0.984 61 0.985 00 0.985 37 0.98574 2.1
2.2 0.987 78 0.988 09 0.988 40 0.988 70 0.98899 2.2
2.3 0.990 61 0.990 86 0.991 11 0.991 34 0.99158 23
2.4 0.992 86 0.993 05 0.993 24 0.993 43 0.99361 24
2.5 0.994 61 0.99477 . 0.99492 0.995 06 0.99520 2.5
2.6 0.995 98 0.996 09 0.996 21 0.996 32 0.996 43 2.6
2.7 0.997 02 0.997 11 0.997 20 0.997 28 0.99736 2.7
2.8 0.997 81 0.997 88 0.997 95 0.998 01 0.99807 28
2.9 0.998 41 0.998 46 0.998 51 0.998 56 0.99861 2.9
3.0 0.998 86 0.998 89 0.998 93 0.998 97 0.99900 3.0
3.1 0.999 18 0.999 21 0.999 24 0.999 26 0.99929 3.1
3.2 0.999 42 0.999 44 0.999 46 0.999 48 0.99950 3.2
3.3 0.999 60 0.999 61 0.999 62 0.999 64 0.99965 33
3.4 0.999 72 0.999 73 0.999 74 0.999 75 0.99976 3.4
3.5 0.999 81 0.999 81 0.999 82 0.999 83 0.99983 3.5
3.6 0.999 87 0.999 87 0.999 88 0.999 88 0.99989 3.6
3.7 0.999 91 0.999 92 0.999 92 0.999 92 0.99992 3.7
3.8 0.999 94 0.999 94 0.999 95 0.999 95 0.99995 3.8
3.9 0.999 96 0.999 96 0.999 96 0.999 97 0.999 97 39
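These Φ(z) values are easy to check in software; a sketch using the standard library's error function, via the identity Φ(z) = [1 + erf(z/√2)]/2:

    # Sketch: Phi(z) = (1 + erf(z / sqrt(2))) / 2.
    import math

    def phi(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    for z in (0.0, 0.5, 1.0, 1.5, 1.96, 2.0, 3.0):
        print(z, round(phi(z), 5))   # e.g., Phi(1.96) = 0.975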
Table III Percentage Points of the χ² Distribution*

ν\α   0.995   0.990   0.975   0.950   0.900   0.500   0.100   0.050   0.025   0.010   0.005
1     0.00+   0.00+   0.00+   0.00+   0.02    0.45    2.71    3.84    5.02    6.63    7.88
2     0.01    0.02    0.05    0.10    0.21    1.39    4.61    5.99    7.38    9.21    10.60
3     0.07    0.11    0.22    0.35    0.58    2.37    6.25    7.81    9.35    11.34   12.84
4     0.21    0.30    0.48    0.71    1.06    3.36    7.78    9.49    11.14   13.28   14.86
5     0.41    0.55    0.83    1.15    1.61    4.35    9.24    11.07   12.83   15.09   16.75
6     0.68    0.87    1.24    1.64    2.20    5.35    10.64   12.59   14.45   16.81   18.55
7     0.99    1.24    1.69    2.17    2.83    6.35    12.02   14.07   16.01   18.48   20.28
8     1.34    1.65    2.18    2.73    3.49    7.34    13.36   15.51   17.53   20.09   21.96
9     1.73    2.09    2.70    3.33    4.17    8.34    14.68   16.92   19.02   21.67   23.59
10    2.16    2.56    3.25    3.94    4.87    9.34    15.99   18.31   20.48   23.21   25.19
11    2.60    3.05    3.82    4.57    5.58    10.34   17.28   19.68   21.92   24.72   26.76
12    3.07    3.57    4.40    5.23    6.30    11.34   18.55   21.03   23.34   26.22   28.30
13    3.57    4.11    5.01    5.89    7.04    12.34   19.81   22.36   24.74   27.69   29.82
14    4.07    4.66    5.63    6.57    7.79    13.34   21.06   23.68   26.12   29.14   31.32
15    4.60    5.23    6.27    7.26    8.55    14.34   22.31   25.00   27.49   30.58   32.80
16    5.14    5.81    6.91    7.96    9.31    15.34   23.54   26.30   28.85   32.00   34.27
17    5.70    6.41    7.56    8.67    10.09   16.34   24.77   27.59   30.19   33.41   35.72
18    6.26    7.01    8.23    9.39    10.86   17.34   25.99   28.87   31.53   34.81   37.16
19    6.84    7.63    8.91    10.12   11.65   18.34   27.20   30.14   32.85   36.19   38.58
20    7.43    8.26    9.59    10.85   12.44   19.34   28.41   31.41   34.17   37.57   40.00
21    8.03    8.90    10.28   11.59   13.24   20.34   29.62   32.67   35.48   38.93   41.40
22    8.64    9.54    10.98   12.34   14.04   21.34   30.81   33.92   36.78   40.29   42.80
23    9.26    10.20   11.69   13.09   14.85   22.34   32.01   35.17   38.08   41.64   44.18
24    9.89    10.86   12.40   13.85   15.66   23.34   33.20   36.42   39.36   42.98   45.56
25    10.52   11.52   13.12   14.61   16.47   24.34   34.28   37.65   40.65   44.31   46.93
26    11.16   12.20   13.84   15.38   17.29   25.34   35.56   38.89   41.92   45.64   48.29
27    11.81   12.88   14.57   16.15   18.11   26.34   36.74   40.11   43.19   46.96   49.65
28    12.46   13.57   15.31   16.93   18.94   27.34   37.92   41.34   44.46   48.28   50.99
29    13.12   14.26   16.05   17.71   19.77   28.34   39.09   42.56   45.72   49.59   52.34
30    13.79   14.95   16.79   18.49   20.60   29.34   40.26   43.77   46.98   50.89   53.67
40    20.71   22.16   24.43   26.51   29.05   39.34   51.81   55.76   59.34   63.69   66.77
50    27.99   29.71   32.36   34.76   37.69   49.33   63.17   67.50   71.42   76.15   79.49
60    35.53   37.48   40.48   43.19   46.46   59.33   74.40   79.08   83.30   88.38   91.95
70    43.28   45.44   48.76   51.74   55.33   69.33   85.53   90.53   95.02   100.42  104.22
80    51.17   53.54   57.15   60.39   64.28   79.33   96.58   101.88  106.63  112.33  116.32
90    59.20   61.75   65.65   69.13   73.29   89.33   107.57  113.14  118.14  124.12  128.30
100   67.33   70.06   74.22   77.93   82.36   99.33   118.50  124.34  129.56  135.81  140.17

*ν = degrees of freedom.



Table IV Percentage Points of the t Distribution

ν\α   0.40    0.25    0.10    0.05    0.025   0.01    0.005   0.0025  0.001   0.0005
1     0.325   1.000   3.078   6.314   12.706  31.821  63.657  127.32  318.31  636.62
2     0.289   0.816   1.886   2.920   4.303   6.965   9.925   14.089  23.326  31.598
3     0.277   0.765   1.638   2.353   3.182   4.541   5.841   7.453   10.213  12.924
4     0.271   0.741   1.533   2.132   2.776   3.747   4.604   5.598   7.173   8.610
5     0.267   0.727   1.476   2.015   2.571   3.365   4.032   4.773   5.893   6.869
6     0.265   0.718   1.440   1.943   2.447   3.143   3.707   4.317   5.208   5.959
7     0.263   0.711   1.415   1.895   2.365   2.998   3.499   4.029   4.785   5.408
8     0.262   0.706   1.397   1.860   2.306   2.896   3.355   3.833   4.501   5.041
9     0.261   0.703   1.383   1.833   2.262   2.821   3.250   3.690   4.297   4.781
10    0.260   0.700   1.372   1.812   2.228   2.764   3.169   3.581   4.144   4.587
11    0.260   0.697   1.363   1.796   2.201   2.718   3.106   3.497   4.025   4.437
12    0.259   0.695   1.356   1.782   2.179   2.681   3.055   3.428   3.930   4.318
13    0.259   0.694   1.350   1.771   2.160   2.650   3.012   3.372   3.852   4.221
14    0.258   0.692   1.345   1.761   2.145   2.624   2.977   3.326   3.787   4.140
15    0.258   0.691   1.341   1.753   2.131   2.602   2.947   3.286   3.733   4.073
16    0.258   0.690   1.337   1.746   2.120   2.583   2.921   3.252   3.686   4.015
17    0.257   0.689   1.333   1.740   2.110   2.567   2.898   3.222   3.646   3.965
18    0.257   0.688   1.330   1.734   2.101   2.552   2.878   3.197   3.610   3.922
19    0.257   0.688   1.328   1.729   2.093   2.539   2.861   3.174   3.579   3.883
20    0.257   0.687   1.325   1.725   2.086   2.528   2.845   3.153   3.552   3.850
21    0.257   0.686   1.323   1.721   2.080   2.518   2.831   3.135   3.527   3.819
22    0.256   0.686   1.321   1.717   2.074   2.508   2.819   3.119   3.505   3.792
23    0.256   0.685   1.319   1.714   2.069   2.500   2.807   3.104   3.485   3.767
24    0.256   0.685   1.318   1.711   2.064   2.492   2.797   3.091   3.467   3.745
25    0.256   0.684   1.316   1.708   2.060   2.485   2.787   3.078   3.450   3.725
26    0.256   0.684   1.315   1.706   2.056   2.479   2.779   3.067   3.435   3.707
27    0.256   0.684   1.314   1.703   2.052   2.473   2.771   3.057   3.421   3.690
28    0.256   0.683   1.313   1.701   2.048   2.467   2.763   3.047   3.408   3.674
29    0.256   0.683   1.311   1.699   2.045   2.462   2.756   3.038   3.396   3.659
30    0.256   0.683   1.310   1.697   2.042   2.457   2.750   3.030   3.385   3.646
40    0.255   0.681   1.303   1.684   2.021   2.423   2.704   2.971   3.307   3.551
60    0.254   0.679   1.296   1.671   2.000   2.390   2.660   2.915   3.232   3.460
120   0.254   0.677   1.289   1.658   1.980   2.358   2.617   2.860   3.160   3.373
∞     0.253   0.674   1.282   1.645   1.960   2.326   2.576   2.807   3.090   3.291
Source: This table is adapted from Biometrika Tables for Statisticians, Vol. 1, 3rd edition, 1966, by permission of the
Biometrika Trustees.
Table V Percentage Points of the F Distribution

(Entries are the percentage points F_{α,ν1,ν2} for α = 0.25, 0.10, 0.05, 0.025, and 0.01, where ν1 = degrees of freedom for the numerator and ν2 = degrees of freedom for the denominator.)

Source: Adapted with permission from Biometrika Tables for Statisticians, Vol. 1, 3rd edition, by E. S. Pearson and H. O. Hartley, Cambridge University Press, Cambridge, 1966.
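Where statistical software is available, the percentage points in Tables III, IV, and V can be computed rather than read from print. A sketch, assuming the SciPy library is installed:

    # Sketch: upper-alpha percentage points, i.e., the (1 - alpha) quantiles.
    from scipy import stats

    print(stats.chi2.ppf(0.95, df=10))         # chi-square, alpha = 0.05: 18.31
    print(stats.t.ppf(0.975, df=15))           # t, alpha = 0.025: 2.131
    print(stats.f.ppf(0.95, dfn=4, dfd=20))    # F, alpha = 0.05: 2.87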

Chart VI Operating Characteristic Curves

(Each panel plots the probability of accepting H0 against the parameter d, or λ for the F-tests.)

(a) OC curves for different values of n for the two-sided normal test for a level of significance α = 0.05.
(b) OC curves for different values of n for the two-sided normal test for a level of significance α = 0.01.
(c) OC curves for different values of n for the one-sided normal test for a level of significance α = 0.05.
(d) OC curves for different values of n for the one-sided normal test for a level of significance α = 0.01.
(e) OC curves for different values of n for the two-sided t test for a level of significance α = 0.05.
(f) OC curves for different values of n for the two-sided t test for a level of significance α = 0.01.
(g) OC curves for different values of n for the one-sided t test for a level of significance α = 0.05.
(h) OC curves for different values of n for the one-sided t test for a level of significance α = 0.01.
(i) OC curves for different values of n for the two-sided chi-square test for a level of significance α = 0.05.
(j) OC curves for different values of n for the two-sided chi-square test for a level of significance α = 0.01.
(k) OC curves for different values of n for the one-sided (upper tail) chi-square test for a level of significance α = 0.05.
(l) OC curves for different values of n for the one-sided (upper tail) chi-square test for a level of significance α = 0.01.
(m) OC curves for different values of n for the one-sided (lower tail) chi-square test for a level of significance α = 0.05.
(n) OC curves for different values of n for the one-sided (lower tail) chi-square test for a level of significance α = 0.01.
(o) OC curves for different values of n for the two-sided F-test for a level of significance α = 0.05.
(p) OC curves for different values of n for the two-sided F-test for a level of significance α = 0.01.
(q) OC curves for different values of n for the one-sided F-test for a level of significance α = 0.05.
(r) OC curves for different values of n for the one-sided F-test for a level of significance α = 0.01.

Source: Charts VIa, e, f, k, m, and q are reproduced with permission from "Operating Characteristics for the Common Statistical Tests of Significance," by C. L. Ferris, F. E. Grubbs, and C. L. Weaver, Annals of Mathematical Statistics, June 1946. Charts VIb, c, d, g, h, i, j, l, n, o, p, and r are reproduced with permission from Engineering Statistics, 2nd edition, by A. H. Bowker and G. J. Lieberman, Prentice-Hall, Englewood Cliffs, NJ, 1972.
Chart VII Operating Characteristic Curves for the Fixed-Effects Model Analysis of Variance

(Each panel plots the probability of accepting the hypothesis against the parameter φ, for α = 0.05 and α = 0.01; ν1 = numerator degrees of freedom, ν2 = denominator degrees of freedom.)

Source: Chart VII is adapted with permission from Biometrika Tables for Statisticians, Vol. 2, by E. S. Pearson and H. O. Hartley, Cambridge University Press, Cambridge, 1972.

Chart VIII Operating Characteristic Curves for the Random-Effects Model Analysis of Variance

(Each panel plots the probability of accepting the hypothesis against the parameter λ, for α = 0.05 and α = 0.01; ν1 = numerator degrees of freedom, ν2 = denominator degrees of freedom.)

Source: Reproduced with permission from Engineering Statistics, 2nd edition, by A. H. Bowker and G. J. Lieberman, Prentice-Hall, Englewood Cliffs, NJ, 1972.
Table IX Critical Values for the Wilcoxon Two-Sample Test*

(Entries are critical values of R for α = 0.05 and α = 0.01, indexed by the sample sizes n1 and n2.)

Source: Reproduced with permission from "The Use of Ranks in a Test of Significance for Comparing Two Treatments," by C. White, Biometrics, 1952, Vol. 8, p. 37.
*For large n1 and n2, R is approximately normally distributed, with mean n1(n1 + n2 + 1)/2 and variance n1n2(n1 + n2 + 1)/12.
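For sample sizes beyond the table, the footnote's normal approximation can be applied directly, as in this sketch (the one-sided critical value shown is illustrative):

    # Sketch: normal approximation to the Wilcoxon two-sample statistic R,
    # using the mean and variance given in the footnote.
    import math

    def approx_lower_critical(n1, n2, z_alpha):
        mean = n1 * (n1 + n2 + 1) / 2.0
        var = n1 * n2 * (n1 + n2 + 1) / 12.0
        return mean - z_alpha * math.sqrt(var)

    print(approx_lower_critical(15, 15, 1.645))   # one-sided, alpha = 0.05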

Table X Critical Values for the Sign Test*

(Entries are critical values of R for the sign test at several significance levels, indexed by the sample size n.)

*For n > 40, R is approximately normally distributed, with mean n/2 and variance n/4.

Table XI Critical Values for the Wilcoxon Signed Rank Test*

(Table entries omitted.)

Source: Adapted with permission from "Extended Tables of the Wilcoxon Matched Pair Signed Rank Statistic" by Robert L. McCornack, Journal of the American Statistical Association, Vol. 60, September, 1965.
*If n > 50, R is approximately normally distributed, with mean n(n + 1)/4 and variance n(n + 1)(2n + 1)/24.
Table XII Percentage Points of the Studentized Range Statistic

(Entries are q_{0.05}(p, f) and q_{0.01}(p, f), where f = degrees of freedom.)

Source: Adapted from "Extended and Corrected Tables of the Upper Percentage Points of the Studentized Range," by J. M. May, Biometrika, Vol. 39, pp. 192-193, 1952. Reproduced by permission of the Trustees of Biometrika.

Table XIII Factors for Quality-Control Charts


                X̄ Chart                          R Chart
      Factors for Control Limits    Factors for Central Line    Factors for Control Limits
n         A1          A2                d2                          D3          D4
2 3.760 1.880 1.128 0 3.267
3 2.394 1.023 1.693 0 2.575
4 1.880 0.729 2.059 0 2.282
5 1.596 0.577 2.326 0 2.115
6 1.410 0.483 2.534 0 2.004
7 1.277 0.419 2.704 0.076 1.924
8 1.175 0.373 2.847 0.136 1.864
9 1.094 0.337 2.970 0.184 1.816
10 1.028 0.308 3.078 0.223 1.777
11 0.973 0.285 3.173 0.256 1.744
12 0.925 0.266 3.258 0.284 1.716
13 0.884 0.249 3.336 0.308 1.692
14 0.848 0.235 3.407 0.329 1.671
15 0.816 0.223 3.472 0.348 1.652
16 0.788 0.212 3.532 0.364 1.636
17 0.762 0.203 3.588 0.379 1.621
18 0.738 0.194 3.640 0.392 1.608
19 0.717 0.187 3.689 0.404 1.596
20 0.697 0.180 3.735 0.414 1.586
21 0.679 0.173 3.778 0.425 1.575
22 0.662 0.167 3.819 0.434 1.566
23 0.647 0.162 3.858 0.443 1.557
24 0.632 0.157 3.895 0.452 1.548
25 0.619 0.153 3.931 0.459 1.541
For n > 25, A1 = 3/√n. n = number of observations in sample.
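Several of these factors are simple functions of d2; for example, A2 = 3/(d2 √n). A sketch that regenerates the A2 column from the tabulated d2 values:

    # Sketch: regenerate the A2 column from d2, using A2 = 3 / (d2 * sqrt(n)).
    import math

    d2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326, 6: 2.534}
    for n, d in d2.items():
        print(n, round(3.0 / (d * math.sqrt(n)), 3))

The printed values match the A2 column to within last-digit rounding.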

Table XIV k Values for One-Sided and Two-Sided Tolerance Intervals


One-Sided Tolerance Intervals

Confidence
Level 0.90 0.95 0.99

Percent
Coverage 0.90 0.95 0.99 0.90 0.95 0.99 0.90 0.95 0.99

2 10.253 13.090 18.500 20.581 26.260 37.094 103.029 131.426 185.617


3 4.258 5.311 7.340 6.155) -0D6ae LO953 13.995 17.370 23.896
4 3.183) 3:95) a. 459 4.162 5.144 7.042 7.380; 9.083 12.387
5 2.742 3.400 4.666 3.407 4.203 5.741 5502 6.91 Se O.959
6 2.494 3.092 4.243 3.006 3.708 5.062 4.411 5.406 7.335
7 PHB) PO SSY ENO? 2.755 3.399 4.642 3.859 4.728 6.412
8 PEPIN PRAY BARS 2.582 3.187 4.354 3.497 4285 5.812
9 2AS3- 2650-041 2.454 3.031 4.143 3.240 3.972 5.389
10 2,066. > 2568.5 93.532) 23355: 2.911 S08 3.048 3.738 5.074
11 Ot, 2503.6 3-443 Dela 2.815) i852 2.898 3.556 4.829
12 1.966 2.448 3.371 ae21D~.. 2736. » Bf44 2.777 3.410 4.633
13 1.928 2.402 3.309 2155, 2:67), yo 35659 2.677 3.290 4.472
14 1.895 2.363. .3.257 2.109 2.614 3.585 2.593,, 3.189 4.337
15 L867, G2 529 ell 2,068 2.566 - 3.520 2.521 3.102 4.222
16 1.842 2.299 3.172 2.033 2.524 3.464 2.459 3.028 4.123
17 S102 202 LON, 2.002 2.486 3.414 2.405 2.963 4.037
18 1.800 2.249 3.105 1.974 ~ 2.453 43370 2.357 9 2.905 3.960.
19 ETERS PPE BUTT 1.949 2.423 3.331 2.314 2.854 3.892
20 EOS) m2.028— 55.052 15926; 22396) 931295 2.216% *2.808%) 3.832
21 Eio05 72.1905 3,028 9095 2.371 | 32263 2.241, 2.766 3.777
22 IS 2 43 0077 1,886 2.349 3.233 2.209 #5 2.729 ae 372i
23 L242 LOS e298, 1809) 42.320 2206 2.180 . 2.694 3.681
24 172 2145 2,969 1.853' 2.309. 3x81 2.154 2.662 3.640
25 1.702. 2A32--92:952 ES3Spee? 2923158 229 2:039. 5 SOU!
30 1.657 2.080 2.884 1.777 = =2.220 = 3.064 2.030 \ 2.515 3.447
40 1.598 2.010 2.793 1:697 Dole e204 1.902 2.364 3.249
50 1559 S655 2135 1.646 2.065 2.862 1.821 2269 S125
60 15327) 51/9338) 2:694 1.609 2.022 2.807 1.764 2.202 3.038
70 1514 1.909 2.662 1.581 19905 23765 722 = 203 ee 2974
80 1.495 1.890 2.638 1.559 1.964 2.733 1.688 2.114 2.924
90 1.481 1.874 2.618 » 1,542 1.944 2.706 1.661 2.082 2.883
100 1.470 1.861 2.601 12 eel O28 1.639 2.056 2.850

Table XIV k Values for One-Sided and Two-Sided Tolerance Intervals (continued )
Two-Sided Tolerance Intervals

Confidence
Level 0.90 0.95 0.99
Percent
Coverage 0.90 0.95 0.99 0.90 0.95 0.99 0.90 0.95 0.99

  2       15.978  18.800  24.167     32.019  37.674  48.430    160.193  188.491  242.300
  3        5.847   6.919   8.974      8.380   9.916  12.861     18.930   22.401   29.055
  4        4.166   4.943   6.440      5.369   6.370   8.299      9.398   11.150   14.527
  5        3.494   4.152   5.423      4.275   5.079   6.634      6.612    7.855   10.260
  6        3.131   3.723   4.870      3.712   4.414   5.775      5.337    6.345    8.301
  7        2.902   3.452   4.521      3.369   4.007   5.248      4.613    5.488    7.187
  8        2.743   3.264   4.278      3.136   3.732   4.891      4.147    4.936    6.468
  9        2.626   3.125   4.098      2.967   3.532   4.631      3.822    4.550    5.966
 10        2.535   3.018   3.959      2.839   3.379   4.433      3.582    4.265    5.594
 11        2.463   2.933   3.849      2.737   3.259   4.277      3.397    4.045    5.308
 12        2.404   2.863   3.758      2.655   3.162   4.150      3.250    3.870    5.079
 13        2.355   2.805   3.682      2.587   3.081   4.044      3.130    3.727    4.893
 14        2.314   2.756   3.618      2.529   3.012   3.955      3.029    3.608    4.737
 15        2.278   2.713   3.562      2.480   2.954   3.878      2.945    3.507    4.605
 16        2.246   2.676   3.514      2.437   2.903   3.812      2.872    3.422    4.492
 17        2.219   2.643   3.471      2.400   2.858   3.754      2.808    3.345    4.393
 18        2.194   2.614   3.433      2.366   2.819   3.702      2.753    3.279    4.307
 19        2.172   2.588   3.399      2.337   2.784   3.656      2.703    3.221    4.230
 20        2.152   2.564   3.368      2.310   2.752   3.615      2.659    3.168    4.161
 21        2.135   2.543   3.340      2.286   2.723   3.577      2.620    3.121    4.100
 22        2.118   2.524   3.315      2.264   2.697   3.543      2.584    3.078    4.044
 23        2.103   2.506   3.292      2.244   2.673   3.512      2.551    3.040    3.993
 24        2.089   2.489   3.270      2.225   2.651   3.483      2.522    3.004    3.947
 25        2.077   2.474   3.251      2.208   2.631   3.457      2.494    2.972    3.904
 30        2.025   2.413   3.170      2.140   2.529   3.350      2.385    2.841    3.733
 40        1.959   2.334   3.066      2.052   2.445   3.213      2.247    2.677    3.518
 50        1.916   2.284   3.001      1.996   2.379   3.126      2.162    2.576    3.385
 60        1.887   2.248   2.955      1.958   2.333   3.066      2.103    2.506    3.293
 70        1.865   2.222   2.920      1.929   2.299   3.021      2.060    2.454    3.225
 80        1.848   2.202   2.894      1.907   2.272   2.986      2.026    2.414    3.173
 90        1.834   2.185   2.872      1.889   2.251   2.958      1.999    2.382    3.130
100        1.822   2.172   2.854      1.874   2.233   2.934      1.977    2.355    3.096
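Factors of this kind can be reproduced numerically. The one-sided factor is an exact noncentral-t quantile, k = t′(confidence; n − 1, z_P√n)/√n, and the two-sided factor is commonly computed with Howe's approximation. The sketch below is an illustration using SciPy under those assumptions, not necessarily the procedure used to build this table; Howe's approximation can differ from exact tabled values in the last digit.

    from math import sqrt
    from scipy.stats import norm, nct, chi2

    def k_one_sided(n, coverage, confidence):
        # Exact one-sided factor via the noncentral t quantile:
        # k = t'_{confidence}(df = n - 1, nc = z_coverage * sqrt(n)) / sqrt(n)
        z_p = norm.ppf(coverage)
        return nct.ppf(confidence, df=n - 1, nc=z_p * sqrt(n)) / sqrt(n)

    def k_two_sided(n, coverage, confidence):
        # Howe's approximation for the two-sided factor.
        z = norm.ppf((1 + coverage) / 2)
        chi = chi2.ppf(1 - confidence, df=n - 1)
        return z * sqrt((n - 1) * (1 + 1 / n) / chi)

    # Compare with the n = 10, 95% confidence, 90% coverage entries above:
    print(round(k_one_sided(10, 0.90, 0.95), 3))  # about 2.355
    print(round(k_two_sided(10, 0.90, 0.95), 3))  # about 2.839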

Table XV Random Numbers

10480 15011 01536


22368 46573 25595
24130 48360 22527
42167 93093 06243
37570 39975 81837
77921 06907 11008
99562 72905 56420
96301 91977 05463
89579 14342 63661
85475 36857 53342
28918 69578 88231
63553 40961 48235
09429 93969 52636
10365 61129 87529
07119 97336 71048
51085 12765 51821
02368 21382 52404
01011 54092 33362
52162 53916 46369
07056 97628 33787
48663 91245 85828
54164 58492 22421
32639 32363 05597
29334 27001 87637
02488 33062 28834
81525 72295 04839
29676 20591 68086
00742 57392 39064
05366 04213 25669
91921 26418 64117
00582 04711 87917
00725 69884 62797
69011 65795 95876
25976 57948 29888
09763 83473 73577
91567 42595 27958
17955 56349 90999
46503 18584 18845
92157 89634 94824
14577 62765 35605
98427 07523 33362
34914 63976 88720
70060 28277 39475
53976 54914 06990
76072 7A ENS 40980
90725 52210 83974
64364 67412 33339
08962 00358 31662
95012 68379 93526
15664 10493 20492
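Printed tables of random digits such as Table XV predate cheap computing; in Chapter 19, uniform (0, 1) pseudorandom numbers come instead from algorithms such as the linear congruential generator indexed at the back of the book. A minimal sketch follows (the multiplier, increment, and modulus are illustrative glibc-style choices, not values from the text).

    def lcg(seed, a=1103515245, c=12345, m=2**31):
        # Linear congruential generator: x_{i+1} = (a * x_i + c) mod m.
        # Dividing each state by m yields a uniform (0, 1) pseudorandom number.
        x = seed
        while True:
            x = (a * x + c) % m
            yield x / m

    stream = lcg(seed=10480)  # seed picked arbitrarily from the first entry above
    print([round(next(stream), 4) for _ in range(5)])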
References

Agresti, A., and B. Coull (1998), "Approximate is Better than 'Exact' for Interval Estimation of Binomial Proportions," The American Statistician, Vol. 52, No. 2, pp. 119-126.
Anderson, V. L., and R. A. McLean (1974), Design of Experiments: A Realistic Approach, Marcel Dekker, New York.
Banks, J., J. S. Carson, B. L. Nelson, and D. M. Nicol (2001), Discrete-Event System Simulation, 3rd edition, Prentice-Hall, Upper Saddle River, NJ.
Bartlett, M. S. (1947), "The Use of Transformations," Biometrics, Vol. 3, pp. 39-52.
Bechhofer, R. E., T. J. Santner, and D. Goldsman (1995), Design and Analysis of Experiments for Statistical Selection, Screening and Multiple Comparisons, John Wiley & Sons, New York.
Belsley, D. A., E. Kuh, and R. E. Welsch (1980), Regression Diagnostics, John Wiley & Sons, New York.
Berrettoni, J. M. (1964), "Practical Applications of the Weibull Distribution," Industrial Quality Control, Vol. 21, No. 2, pp. 71-79.
Box, G. E. P., and D. R. Cox (1964), "An Analysis of Transformations," Journal of the Royal Statistical Society, B, Vol. 26, pp. 211-252.
Box, G. E. P., and M. E. Muller (1958), "A Note on the Generation of Normal Random Deviates," Annals of Mathematical Statistics, Vol. 29, pp. 610-611.
Bratley, P., B. L. Fox, and L. E. Schrage (1987), A Guide to Simulation, 2nd edition, Springer-Verlag, New York.
Cheng, R. C. (1977), "The Generation of Gamma Variables with Nonintegral Shape Parameters," Applied Statistics, Vol. 26, No. 1, pp. 71-75.
Cochran, W. G. (1947), "Some Consequences When the Assumptions for the Analysis of Variance Are Not Satisfied," Biometrics, Vol. 3, pp. 22-38.
Cochran, W. G. (1977), Sampling Techniques, 3rd edition, John Wiley & Sons, New York.
Cochran, W. G., and G. M. Cox (1957), Experimental Designs, John Wiley & Sons, New York.
Cook, R. D. (1977), "Detection of Influential Observations in Linear Regression," Technometrics, Vol. 19, pp. 15-18.
Cook, R. D. (1979), "Influential Observations in Linear Regression," Journal of the American Statistical Association, Vol. 74, pp. 169-174.
Crowder, S. (1987), "A Simple Method for Studying Run-Length Distributions of Exponentially Weighted Moving Average Charts," Technometrics, Vol. 29, pp. 401-407.
Daniel, C., and F. S. Wood (1980), Fitting Equations to Data, 2nd edition, John Wiley & Sons, New York.
Davenport, W. B., and W. L. Root (1958), An Introduction to the Theory of Random Signals and Noise, McGraw-Hill, New York.
Draper, N. R., and W. G. Hunter (1969), "Transformations: Some Examples Revisited," Technometrics, Vol. 11, pp. 23-40.
Draper, N. R., and H. Smith (1998), Applied Regression Analysis, 3rd edition, John Wiley & Sons, New York.
Duncan, A. J. (1986), Quality Control and Industrial Statistics, 5th edition, Richard D. Irwin, Homewood, IL.
Duncan, D. B. (1955), "Multiple Range and Multiple F Tests," Biometrics, Vol. 11, pp. 1-42.
Efron, B., and R. Tibshirani (1993), An Introduction to the Bootstrap, Chapman and Hall, New York.
Elsayed, E. (1996), Reliability Engineering, Addison Wesley Longman, Reading, MA.
Epstein, B. (1960), "Estimation from Life Test Data," IRE Transactions on Reliability, Vol. RQC-9.
Feller, W. (1968), An Introduction to Probability Theory and Its Applications, 3rd edition, John Wiley & Sons, New York.
Fishman, G. S. (1978), Principles of Discrete Event Simulation, John Wiley & Sons, New York.
Furnival, G. M., and R. W. Wilson, Jr. (1974), "Regression by Leaps and Bounds," Technometrics, Vol. 16, pp. 499-512.
Hahn, G., and S. Shapiro (1967), Statistical Models in Engineering, John Wiley & Sons, New York.
Hald, A. (1952), Statistical Theory with Engineering Applications, John Wiley & Sons, New York.
Hawkins, S. (1993), "Cumulative Sum Control Charting: An Underutilized SPC Tool," Quality Engineering, Vol. 5, pp. 463-477.
Hocking, R. R. (1976), "The Analysis and Selection of Variables in Linear Regression," Biometrics, Vol. 32, pp. 1-49.
Hocking, R. R., F. M. Speed, and M. J. Lynn (1976), "A Class of Biased Estimators in Linear Regression," Technometrics, Vol. 18, pp. 425-437.
Hoerl, A. E., and R. W. Kennard (1970a), "Ridge Regression: Biased Estimation for Non-Orthogonal Problems," Technometrics, Vol. 12, pp. 55-67.
Hoerl, A. E., and R. W. Kennard (1970b), "Ridge Regression: Application to Non-Orthogonal Problems," Technometrics, Vol. 12, pp. 69-82.
Kelton, W. D., and A. M. Law (1983), "A New Approach for Dealing with the Startup Problem in Discrete Event Simulation," Naval Research Logistics Quarterly, Vol. 30, pp. 641-658.
Kendall, M. G., and A. Stuart (1963), The Advanced Theory of Statistics, Hafner Publishing Company, New York.
Keuls, M. (1952), "The Use of the Studentized Range in Connection with an Analysis of Variance," Euphytica, Vol. 1, p. 112.
Law, A. M., and W. D. Kelton (2000), Simulation Modeling and Analysis, 3rd edition, McGraw-Hill, New York.
Lloyd, D. K., and M. Lipow (1972), Reliability: Management, Methods, and Mathematics, Prentice-Hall, Englewood Cliffs, NJ.
Lucas, J., and M. Saccucci (1990), "Exponentially Weighted Moving Average Control Schemes: Properties and Enhancements," Technometrics, Vol. 32, pp. 1-12.
Marquardt, D. W., and R. D. Snee (1975), "Ridge Regression in Practice," The American Statistician, Vol. 29, pp. 3-20.
Montgomery, D. C. (2001), Design and Analysis of Experiments, 5th edition, John Wiley & Sons, New York.
Montgomery, D. C. (2001), Introduction to Statistical Quality Control, 4th edition, John Wiley & Sons, New York.
Montgomery, D. C., E. A. Peck, and G. G. Vining (2001), Introduction to Linear Regression Analysis, 3rd edition, John Wiley & Sons, New York.
Montgomery, D. C., and G. C. Runger (2003), Applied Statistics and Probability for Engineers, 3rd edition, John Wiley & Sons, New York.
Mood, A. M., F. A. Graybill, and D. C. Boes (1974), Introduction to the Theory of Statistics, 3rd edition, McGraw-Hill, New York.
Neter, J., M. Kutner, C. Nachtsheim, and W. Wasserman (1996), Applied Linear Statistical Models, 4th edition, Irwin Press, Homewood, IL.
Newman, D. (1939), "The Distribution of the Range in Samples from a Normal Population Expressed in Terms of an Independent Estimate of Standard Deviation," Biometrika, Vol. 31, p. 20.
Odeh, R., and D. Owens (1980), Tables for Normal Tolerance Limits, Sampling Plans, and Screening, Marcel Dekker, New York.
Owen, D. B. (1962), Handbook of Statistical Tables, Addison-Wesley Publishing Company, Reading, MA.
Page, E. S. (1954), "Continuous Inspection Schemes," Biometrika, Vol. 41, pp. 100-115.
Roberts, S. (1959), "Control Chart Tests Based on Geometric Moving Averages," Technometrics, Vol. 1, pp. 239-250.
Scheffé, H. (1953), "A Method for Judging All Contrasts in the Analysis of Variance," Biometrika, Vol. 40, pp. 87-104.
Snee, R. D. (1977), "Validation of Regression Models: Methods and Examples," Technometrics, Vol. 19, No. 4, pp. 415-428.
Tucker, H. G. (1962), An Introduction to Probability and Mathematical Statistics, Academic Press, New York.
Tukey, J. W. (1953), "The Problem of Multiple Comparisons," unpublished notes, Princeton University.
Tukey, J. W. (1977), Exploratory Data Analysis, Addison-Wesley, Reading, MA.
United States Department of Defense (1957), Military Standard Sampling Procedures and Tables for Inspection by Variables for Percent Defective (MIL-STD-414), Government Printing Office, Washington, DC.
Welch, P. D. (1983), "The Statistical Analysis of Simulation Results," in The Computer Performance Modeling Handbook (ed. S. Lavenberg), Academic Press, Orlando, FL.
Answers to Selected Exercises

Chapter 1
1-1. (a) 0.75. (b) 0.18.
1-3. (a) A ∩ B = {5}. (b) A ∪ B = {1, 3, 4, 5, 6, 7, 8, 9, 10}. (c) A′ ∪ B = {2, 3, 4, 5}.
(d) U = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}. (e) A ∪ (B ∪ C)′ = {1, 2, 5, 6, 7, 8, 9, 10}.
1-5. S = {(t₁, t₂): t₁ ≥ 0, t₂ ≥ 0}.
A = {(t₁, t₂): t₁ ≥ 0, t₂ ≥ 0, (t₁ + t₂)/2 ≤ 0.15}.
B = {(t₁, t₂): t₁ ≥ 0, t₂ ≥ 0, max(t₁, t₂) ≤ 0.15}.
C = {(t₁, t₂): t₁ ≥ 0, t₂ ≥ 0, |t₁ − t₂|/2 ≤ 0.06}.
1-7. S = {NNNNN, NNNND, NNNDN, NNNDD, NNDNN, NNDND, NNDD, NDNNN, NDNND, NDND, NDD, DNNNN, DNNND, DNND, DND, DD}.
1-9. (a) N = not defective, D = defective.
S = {NNN, NND, NDN, NDD, DNN, DND, DDN, DDD}.
(b) S = {NNNN, NNND, NNDN, NDNN, DNNN}.
1-11. 30 routes. 1-13. 560,560 ways.
1-15. P(Accept | p′) = Σ_{x=0}^{1} (300 choose x)(p′)ˣ(1 − p′)³⁰⁰⁻ˣ.
1-17. 28 comparisons. 1-19. (40)(39) = 1560 tests.


1-21. (a) (*)() = 25 ways. (b) (3)(3)= 100 ways.
1-23. Rₛ = [1 − (0.2)(0.1)(0.1)][1 − (0.2)(0.1)](0.9) = 0.880.
1-25. S = Siberia, U = Ural; P(S) = 0.6, P(U) = 0.4, P(F|S) = 0.5, P(F|U) = 0.3;
P(S|F) = (0.6)(0.5)/[(0.6)(0.5) + (0.4)(0.3)] = 0.714.
2
yay, ELL
met m amtl 1.39. Powomeni6’) = 0.03226. 1-31. 1/4,
m-1 m m m_— m‘(m-1)

1-35. P(B) = 1 − (365)(364)⋯(365 − n + 1)/365ⁿ.
n 10 20 21 22 23 24 25 30 40 50 60
P(B) |0.117 | 0.411 | 0.444 | 0.476 | 0.507 | 0.538 | 0.569 | 0.706 | 0.891 | 0.970 | 0.994
1-37. 8! = 40320. 1-39. 0.441.

Chapter 2

2-1. c = 1, μ = 1, σ² = 1.


2-5. (a) Yes. (b) No. (c) Yes. 2-7. a, b. 2-9. P(X < 29) = 0.978.
2-11. (a) k = 1/4. (b) μ = 2, σ² = 2/3.
(c) Fₓ(x) = 0, x < 0,
 = x²/8, 0 ≤ x < 2,
 = −x²/8 + x − 1, 2 ≤ x < 4,
 = 1, x ≥ 4.
2-13. k= 2, [1422V2014 OV21,


2-15. (a) k=7. (b) W=7,0°=5.
2

(c) Fy(x)=0,x< 1,
eel es
=,28x<3,
Silersh,

2-17. k = 10 and 2 + k√0.4 ≈ 8.3 days.


2-21. (a) F(x) = 0, x < 0,
 = x²/9, 0 ≤ x < 3,
 = 1, x ≥ 3.
(b) μ = 2, σ² = 1/2. (c) μ₃′ = 54/5. (d) m = 3/√2.


Hu; 2

2-23, Fy(x)=1-e°'", x20, 2-25. k=1,patl


=0,x<0.

Chapter3
3-1. (a)       y     p_Y(y)          (b) E(Y) = 14, V(Y) = 564.
               0       0.6
              20       0.3
              80       0.1
       otherwise       0.0

3-3. (a) 0.221. (b) $155.80.


3-5. f_Z(z) = e⁻ᶻ, z ≥ 0,
 = 0, otherwise.
3-7. 93.8 ¢/gal.
3-9. (a) fly) = aay? - 0" y >0,
= 0, otherwise.
(b) f_V(v) = 2v e^(−v²), v > 0,
 = 0, otherwise.
(c) f_U(u) = e⁻ᵘ, u > 0,
 = 0, otherwise.
3-11. s = (5/3) x 10°.
3-13. (a) f,(y)=2(4- yy, 0S y 3,
= 0, otherwise.
(b) fry)= neSy Sey.
= 0, otherwise.

3-15. Mₓ(t) = Σₓ pₓ(x)eᵗˣ,
E(X) = M′ₓ(0) = 4,
V(X) = M″ₓ(0) − [M′ₓ(0)]².
3-17. E(Y) = 1, V(Y) = 1, E(X) = 6.16, V(X) = 0.027.
3-19. Mₓ(t) = (1 − t/2)⁻², E(X) = M′ₓ(0) = 1, V(X) = M″ₓ(0) − [M′ₓ(0)]² = 1/2.
3-21. M_Y(t) = E(e^(tY)) = E(e^(t(aX+b))) = e^(tb) E(e^((ta)X)) = e^(tb) Mₓ(at).
3-23. (a) My(t) =1+4e'+5e% + 5e", E(X)=M,(0)=%, W(X) =M%(0)-()? = 3.
(b) Fy(y)=0, y <0,
=;,0Sy<1,
= = l<y<4,
=l,y>4.

Chapter 4
4-1. (a)
11/50 ; 6/50 | 3/50 | 2/50
Lt 2 3 4

(b) 0 1 2 Son) 4

(c) 0 1 2 3 4
Pyo(x) |11/20 | 4/20 | 2/20 | 1/20 | 1/20
4-3. (a) k = 1/1000.
(b) f_X₁(x₁) = 1/100, 0 ≤ x₁ ≤ 100,
 = 0, otherwise;
 f_X₂(x₂) = 1/10, 0 ≤ x₂ ≤ 10,
 = 0, otherwise.
(c) F_X₁,X₂(x₁, x₂) = 0, if x₁ < 0 or x₂ < 0,
 = x₁x₂/1000, 0 ≤ x₁ < 100, 0 ≤ x₂ < 10,
 = x₁/100, 0 ≤ x₁ < 100, x₂ ≥ 10,
 = x₂/10, x₁ ≥ 100, 0 ≤ x₂ < 10,
 = 1, x₁ ≥ 100, x₂ ≥ 10.
4-5. (a) 5. (b) x.
(c) f_W(w) = 2w, 0 ≤ w ≤ 1,
 = 0, otherwise.

4-7. …. 4-9. E(X|x) = …, E(X) = …. 4-15. E(Y) = 80, V(Y) = 36.


4-19. ρ = −0.135. 4-21. X and Y are not independent.

4-23. (a) fₓ(x) = (2/π)√(1 − x²), −1 ≤ x ≤ 1,
 = 0, otherwise;
 f_Y(y) = (4/π)√(1 − y²), 0 ≤ y ≤ 1,
 = 0, otherwise.
(b) f_{X|y}(x) = 1/(2√(1 − y²)), −√(1 − y²) ≤ x ≤ √(1 − y²),
 = 0, otherwise;
 f_{Y|x}(y) = 1/√(1 − x²), 0 ≤ y ≤ √(1 − x²),
 = 0, otherwise.
(c) E(X|y) = 0,
 E(Y|x) = ½√(1 − x²).
4-27. (a) E(Xly)=—7,
3(1 + 2y)
O<y<l. ) EM=% © EY)=t
4-29. (a) k = (n − 1)(n − 2). (b) F(x, y) = 1 − (1 + x)^(2−n) − (1 + y)^(2−n) + (1 + x + y)^(2−n),
x > 0, y > 0.
4-31. (a) Independent. (b) Not independent. (c) Not independent.
4-33. (a) 3 (b) 3
4-35. (a) F_Z(z) = Fₓ[(z − a)/b]. (b) F_Z(z) = 1 − Fₓ(1/z). (c) F_Z(z) = Fₓ(eᶻ). (d) F_Z(z) = Fₓ(ln z).
Chapter 5
SLA P(X= x)= (ele Opie x = 0.1203,
3
5-3. Assuming independence, P(W ≥ 4) = 1 − Σ_{w=0}^{3} p_W(w) = 0.927.

5-5. P(X > 2) = 1 − Σ_{x=0}^{2} (50 choose x)(0.02)ˣ(0.98)⁵⁰⁻ˣ = 0.078.

5-7. P(p̂ ≤ 0.03) = 0.98. 5-9. P(X = 5) = 0.0407. 5-11. p = 0.8.

5-13. E(X) = M′ₓ(0) = 1/p, E(X²) = M″ₓ(0) = (2 − p)/p², V(X) = E(X²) − (E(X))² = (1 − p)/p².
5-15. P(X = 36) = 0.0083.
5-17. P(X = 4) = 0.077, P(X < 4) = 0.896. 5-19. E(X) = 6.25, V(X) = 1.5625.
5-21. p(3, 0, 0) + p(0, 3, 0) + p(0, 0, 3) = 0.118. 5-23. p(4, 1, 3, 2) = 0.005.
5-25. P(X <2) = 0.98. Binomial approx., P(X < 2) = 0.97.
5-27. P(X ≥ 1) = 0.95; binomial model ⇒ n = 9.

5-29. P(X ≤ 10) = …. 5-31. P(X > 5) = 0.215.
5-33. Poisson model, c = 30.
P(X ≤ 3) = Σ_{x=0}^{3} e⁻³⁰ 30ˣ/x!,
P(X ≥ 5) = 1 − Σ_{x=0}^{4} e⁻³⁰ 30ˣ/x!.
5-35. Poisson model, c = 2.5. P(X ≤ 2) = 0.544.


5-37. X = number of errors on n pages ~ Bin(5, n/200). P(X ≥ 1) = 0.763.
(a) n = 50. (b) P(X ≥ 3) ≥ 0.90 if n = 151.
5-39. P(X > 2) = 0.0047.

Chapter 6
6-1. mn.
6-3. f_Y(y) = 1/4, 5 ≤ y ≤ 9,
 = 0, otherwise.

6-5. E(X) = M′ₓ(0) = (β + α)/2, V(X) = M″ₓ(0) − [M′ₓ(0)]² = (β − α)²/12.
6-7. y F\(y)
y<l 0
l<y<2 0.3
2<y<3 0.5
3<y<4 0.9
y>4 1.0

Generate realizations uᵢ ~ uniform [0, 1] as random numbers, as described in Section 6-6; use these
in the inverse as yᵢ = F_Y⁻¹(uᵢ), i = 1, 2, ….
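For the tabled CDF this inverse reduces to finding the smallest y with F_Y(y) ≥ u. A minimal Python sketch of that step follows (illustrative code, not from the text).

    import random

    # CDF values from the table above: F(1) = 0.3, F(2) = 0.5, F(3) = 0.9, F(4) = 1.0
    CDF = [(1, 0.3), (2, 0.5), (3, 0.9), (4, 1.0)]

    def draw_y():
        # Inverse transform for a discrete CDF: return the smallest y with F(y) >= u.
        u = random.random()
        for y, f in CDF:
            if u <= f:
                return y

    print([draw_y() for _ in range(10)])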
6-9. E(X) = M′ₓ(0) = 1/λ, V(X) = M″ₓ(0) − [M′ₓ(0)]² = 1/λ².
6-11. 1 − e^(−1/6) = 0.154. 6-13. 1 − e^(−1/3) = 0.283.
6-15. C_I = C if X > 15, C_I = C + Z if X ≤ 15; C_II = 3C if X > 15, C_II = 3C + Z if X ≤ 15.
E(C_I) = Ce^(−3/5) + (C + Z)[1 − e^(−3/5)] = C + (0.4512)Z;
E(C_II) = 3Ce^(−3/7) + (3C + Z)[1 − e^(−3/7)] = 3C + (0.3486)Z;
⇒ Process I, if C > (0.0513)Z.
6-19. 0.8305. 6-23. 0.8488.

6-27. For λ = 1, r = 2, fₓ(x) = [Γ(3)/(Γ(1)Γ(2))] x⁰(1 − x)¹ = 2(1 − x), 0 < x < 1,
 = 0, otherwise.
6-31. 1 − e⁻¹ = 0.63. 6-33. ≈ 0.24. 6-35. (a) ≈ 0.22. (b) 4800. 6-37. ≈ 0.3528.

Chapter 7
7-1. (a) 0.4772. (b) 0.6827. (c) 0.9505. (d) 0.9750.
(e) 0.1336. (f) 0.9485. (g) 0.9147. (h) 0.9898.
7-3. (a) c=1.56. (b) c=1.96. (c) c=2.57. (d) c=-1.645.
7-5. (a) 0.9772. (b) 0.50. (c) 0.6687. (d) 0.6915. (e) 0.95.
7-7. 30.85%. 7-9. 2376.63 fc.
7-13. (a) 0.0455. (b) 0.0730. (c) 0.3085. (d) 0.3085.
7-15. B, if cost of A < 0.1368. 7-17. μ = 7.
7-19. (a) 0.6687. (b) 7.84. (c) 6.018.
7-23. 0.616. 7-25. 0.00714.
7-27. (a) 0.276. (b) At μ = 12.0.
7-29. (a) 0.552. (b) 0.100. (c) 0.758. (d) 0.09.
7-30. n=139. 7-36. 2497.24.
7-37. E(X) = e^(μ+σ²/2), V(X) = e^(2μ+σ²)(e^(σ²) − 1), MED = e^μ, MODE = e^(μ−σ²). 7-41. 0.9788. 7-42. 0.4681.

Chapter 8
8-1. x̄ = 131.30, s² = 113.85, s = 10.67. 8-21. x̄ = 74.002, s² = 6.875 × 10⁻⁶, s = 0.0026.
8-23. (a) The sample average will be reduced by 63.

(b) The sample mean and standard deviation will be 100 times larger. The sample variance will be
10,000 times larger.
8-25. â = x̄. 8-29. (a) x̄ = 120.22, s² = 5.66, s = 2.38. (b) x̃ = 120, mode = 121.
8-31. For 8-29, cv = 0.0198; for 8-30, cv = 9.72. 8-33. x̄ = 22.41, s² = 208.25, x̃ = 22.81, mode = 23.64.

Chapter 9 ;
9-1. f(x₁, x₂, …, x₅) = (2πσ²)^(−5/2) exp[−(1/2σ²) Σᵢ₌₁⁵ (xᵢ − μ)²].
9-3. …. 9-5. N(5.00, 0.00125).
9-7. Use S/√n.
9-9. The standard error of X̄₁ − X̄₂ is √(σ₁²/n₁ + σ₂²/n₂) = 0.47, with n₁ = 15, n₂ = 20. 9-11. N(0, 1).
9-13. se(p̂) = √(p(1 − p)/n), ŝe(p̂) = √(p̂(1 − p̂)/n). 9-15. μ = u, σ² = 2u.
9-17. For F_{m,n} we have μ = n/(n − 2) for n > 2 and σ² = [2n²(m + n − 2)]/[m(n − 2)²(n − 4)] for n > 4.
9-21. Fₓ(x) = 1 − e^(−λx); F_{X(n)}(x) = (1 − e^(−λx))ⁿ.
9-23. (a) 2.73. (b) 11.34. (ce) 34.17. (d) 20.48.
9-25. (a) 1.63. (b) 2.85. (c) 0.241. (d) 0.588.

Chapter 10
10-1. Both estimators are unbiased. Now, V(X̄₁) = σ²/2n, whereas V(X̄₂) = σ²/n. Since V(X̄₁) < V(X̄₂), X̄₁ is
a more efficient estimator than X̄₂.
10-3. 6,, because it would have a smaller MSE.

10-7. a=y * =X, 10-9, (7).


i=) MW

10-11. A= s/|amyXe | r= [wy WS r|

10-13. 1/X̄. 10-15. ΣXᵢ/n. 10-17. X/n.

10-21. 1-8] Sx, 10-23. X,.


i=l
é 2
10-25. (M1 x, .x9-..%,) = CY 2(20) ) exp ao (nese bedi :
Mh 2 clo” 06%

where C = ae + uy
Of (oF
10-27. The posterior density for r is a beta distribution with parameters a + n and b + Σᵢ xᵢ − n.
10-29. The posterior density for λ is gamma with parameters r = m + Σᵢ xᵢ + 1 and δ = n + (m + 1)/λ₀.
10-31. 0.967. 10-33. 0.3783.

10-35. (a) f(@ [xy)= sat = PEST (b) @= 1/2. 10-37. Q, = O, = O/2 is shorter.

10-39. (a) 74.03533 ≤ μ ≤ 74.03666. (b) 74.0356 ≤ μ.


10-41. (a) 3232.11 ≤ μ ≤ 3267.89. (b) 1004.80 ≤ μ. 10-43. 150 or 151.
10-45. (a) 0.0723 ≤ μ₁ − μ₂ ≤ …. (b) 0.0499 ≤ μ₁ − μ₂ ≤ 0.33. (c) μ₁ − μ₂ ≤ 0.3076.

10-47. −3.68 ≤ μ₁ − μ₂ ≤ −2.12. 10-49. 183.0 ≤ μ ≤ 256.6. 10-51. 13.
10-53. 94.282 ≤ μ ≤ 111.518. 10-55. −0.839 ≤ μ₁ − μ₂ ≤ −0.679. 10-57. 0.355 ≤ μ₁ − μ₂ ≤ 0.455.
10-59. (a) 649.60 ≤ σ² ≤ 2853.69. (b) 714.56 ≤ σ². (c) σ² ≤ 2460.62. 10-61. 0.0039 ≤ σ² ≤ 0.0124.
10-63. 0.574 ≤ σ² ≤ 3.614. 10-65. 0.11 ≤ σ₁²/σ₂² ≤ 0.86. 10-67. 0.088 ≤ p ≤ 0.152. 10-69. 16577.
10-71. −0.0244 ≤ p₁ − p₂ ≤ 0.0024. 10-73. −2038 ≤ μ₁ − μ₂ ≤ 3774.8.
10-75. −3.1529 ≤ μ₁ − μ₂ ≤ 0.1529; −1.9015 ≤ μ₁ − μ₃ ≤ 0.9015; 0.1775 ≤ μ₂ − μ₃ ≤ 2.1775.
Chapter 11
11-1. (a) z₀ = −1.333, do not reject H₀. (b) 0.05. 11-3. (a) z₀ = −12.65, reject H₀. (b) 3.
11-5. z₀ = 2.50, reject H₀. 11-7. (a) z₀ = 1.349, do not reject H₀. (b) 2. (c) 1.
11-9. z₀ = 2.656, reject H₀. 11-11. z₀ = −7.25, reject H₀. 11-13. t₀ = 1.842, do not reject H₀.
11-15. t₀ = 1.47, do not reject H₀ at α = 0.05. 11-17. 3.
11-19. (a) t₀ = 8.49, reject H₀. (b) t₀ = −2.35, do not reject H₀. (c) 1. (d) 5.
11-21. F₀ = 0.8832, do not reject H₀. 11-23. (a) F₀ = 1.07, do not reject H₀. (b) 0.15. (c) 75.
11-25. t₀ = 0.56, do not reject H₀.
11-27. (a) χ₀² = 43.75, reject H₀. (b) 0.3078 × 10⁻⁴. (c) 0.30. (d) 17.
11-29. (a) χ₀² = 2.28, reject H₀. (b) 0.58. 11-31. F₀ = 30.69, reject H₀; β = 0.65.
11-33. t₀ = 2.4465, do not reject H₀. 11-35. t₀ = 5.21, reject H₀. 11-37. z₀ = 1.333, do not reject H₀.
11-41. z₀ = −2.023, do not reject H₀. 11-47. χ₀² = 2.915, do not reject H₀.
11-49. χ₀² = 4.724, do not reject H₀. 11-53. χ₀² = 0.0331, do not reject H₀.
11-55. χ₀² = 2.465, do not reject H₀. 11-57. χ₀² = 34.896, reject H₀. 11-59. χ₀² = 22.06, reject H₀.
Chapter 12
12-1. (a) F₀ = 3.17. 12-3. (a) F₀ = 12.73. (b) Mixing technique 4 is different from 1, 2, and 3.
12-5. (a) F₀ = 2.62. (b) μ̂ = 21.70, τ̂₁ = 0.023, τ̂₂ = −0.166, τ̂₃ = 0.029, τ̂₄ = 0.059.
12-7. (a) F₀ = 4.01. (b) Mean 3 differs from 2. (c) SS = 246.33. (d) 0.88.
12-9. (a) F₀ = 2.38. (b) None. 12-11. n = 3.
12-15. (a) μ̂ = 20.47, τ̂₁ = 0.33, τ̂₂ = 1.73, τ̂₃ = 2.07. (b) τ̂₁ − τ̂₂ = −1.40.

Chapter 13
13-1. Source          DF        SS         MS         F        P
cs 2 0.0317805 0.0158903 15.94 0.000
DC 2 0.0271654 0.0135927 13.64 0.000
CS*DC 4 0.0006873 0.0001718 0.17 0.950
Error 18 0.0179413 0.0009967
Total 26 0.0775945
Main effects are significant; interaction is not significant.
13-3. −23.93 ≤ μ₁ − μ₂ ≤ 5.15. 13-5. No change in conclusions.
13-7. Source          DF        SS         MS         F        P

glass                  1   14450.0    14450.0    273.79    0.000
phos                   2     933.3      466.7      8.84    0.004
glass*phos             2     133.3       66.7      1.26    0.318
Error                 12     633.3       52.8
Total                 17   16150.0

Significant main effects.



13-9. Source          DF        SS         MS         F        P
Conc 2 7.7639 3.8819 10.62 0.001
Freeness 2 19.3739 9.6869 26.50 0.000
Time 1 20.2500 20.2500 55.40 0.000
Conc*Freeness 4 6.0911 1.5228 4.17 0.015
Conc*Time 2 2.0817 1.0408 2.85 0.084
Freeness*Time          2    2.1950     1.0975      3.00    0.075
Conc*Freeness*Time     4    1.9733     0.4933      1.35    0.290
Error 18 6.5800 0.3656
Total 35 66.3089
Concentration, Time, Freeness, and the interaction Time*Freeness are significant at 0.05.

13-15. Main effects A,B, D, E and the interaction AB are significant.


13-17. Block 1: (1), ab, ac, bc, Block 2: a, b, c, abc.
13-19. Block 1: (1), ab, bcd, acd; Block 2: a, b, cd, abcd; Block 3: c, abc, bd, ad; Block 4: d, abd, bc, ac.
13-21. A and C are significant. 13-25. (a) D=ABC. (b) A is significant.
13-27. 2³ with two replicates. 13-29. 2³⁻¹ design. Estimates for A, B, and AB are large.

Chapter 14
14-1. (a) ŷ = 10.4397 − 0.00156x. (b) F₀ = 2.052. (c) −0.0038 ≤ β₁ ≤ 0.00068. (d) 7.316%.
14-3. (a) ŷ = 31.656 − 0.041x. (b) F₀ = 57.639. (c) 81.59%. (d) (19.374, 21.388).
14-7. (a) ŷ = 93.3399 + 15.6485x. (b) Lack of fit not significant, regression significant.
(c) 7.997 ≤ β₁ ≤ 23.299. (d) 74.828 ≤ β₀ ≤ 111.852. (e) (126.012, 138.910).
14-9. (a) ŷ = −6.3378 + 9.20836x. (b) Regression is significant. (c) t₀ = −23.41, reject H₀.
(d) (525.58, 529.91). (e) (521.22, 534.28).
14-11. (a) ŷ = 77.7895 + 11.8634x. (b) Lack of fit not significant, regression is significant.
(c) 0.3933. (d) (4.5661, 19.1607).
14-13. (a) ŷ = 3.96 + 0.00169x. (b) Regression is significant.
(c) (0.0015, 0.0019). (d) 95.2%.
14-15. (a) ŷ = 69.1044 + 0.4194x. (b) 77.35%. (c) t₀ = 5.85, reject H₀.
(d) Z₀ = 1.61. (e) (0.5513, 0.8932).

Chapter 15
15-1. (a) ŷ = 7.30 + 0.0183x₁ − 0.399x₂. (b) F₀ = 15.19.
15-3. (−0.8024, 0.0044). 15-5. ŷ = −1.808372 + 0.003598x₁ + 0.193936x₂ − 0.004815x₃.
15-7. (a) ŷ = −102.713 + 0.605x₁ + 8.924x₂ + 1.437x₃ + 0.014x₄.
(b) F₀ = 5.106. (c) β₃: F₀ = 0.361; β₄: F₀ = 0.0004.
15-9. (a) ŷ = −13729 + 105.02x − 0.18954x².
15-13. (a) ŷ = 4.459 + 1.384x + 1.467x². (b) Significant lack of fit. (c) F₀ = 16.68.
15-15. t₀ = 1.7898. 15-21. VIF₁ = VIF₂ = 1.4.

Chapter 16
16-1. R = 2. 16-5. R = 2. 16-9. R = 885. 16-13. R₁ = 75.
16-15. Z₀ = −2.117. 16-17. K = 4.835.

Chapter 17
17-1. (a) X̿ = 34.32, R̄ = 5.65. (b) PCRₖ = 1.228. (c) 0.205%.
17-5. D/2. 17-7. LCL = 34.55, CL = 49.85, UCL = 65.14. 17-9. Process is not in control.
17-13. 0.1587, n = 6 or 7. 17-15. Revised control limits: LCL = 0, UCL = 17.32.
17-17. UCL = 16.485; 0.434. 17-19. LCL = 0.282, UCL = 4.378.

Chapter 18
18-1. (a) ≈ 0.088. (b) L = 2, L_q = 1.33. (c) W = 1 h.

18-3, =|? a
L—pa sp
een t/2 24/3
: io ia
0 Pp 0 t1-p

18-7. (a) P = …. (b) p₁ = p₂ = p₃ = p₄ = 1/4. (c) p₁ = p₂ = p₃ = p₄ = 1/4.
A = [1 0 0 0].
18-9, (a) 7p. (b) Fp (©) 3. () 0.03. -(@) 0.10. () +.
18-11. (a) 0.555. (b) 56.378 min. (c) 244.18 min.
18-13. (a) pⱼ = [(λ/μ)ʲ/j!] p₀, j = 0, 1, 2, …, s,
 = 0, otherwise;
 p₀ = [Σ_{j=0}^{s} (λ/μ)ʲ/j!]⁻¹.
(b) s = 6, ρ = 0.417. (c) p₀ = 0.354. (d) From 41.6% to 8.33%. (e) λ = 4.17, p₀ = 0.377.

Chapter 19

19-1. I = ∫ₐᵇ f(x) dx = (b − a)∫₀¹ f(a + (b − a)u) du.
19-5. (a) 35. (b) 4.75. (c) 5 (at time 14).
19-7. (a) X₁ = 1, U₁ = 1/16, X₂ = 6, U₂ = 6/16. (b) Yes. (c) X₁₅₀ = 2.
19-9. (a) X = −(1/λ) ln(1 − U). (b) 0.693.
19-11. (a) X = −2√(1 − 2U), if 0 ≤ U ≤ 1/2,
 X = 2√(2U − 1), if 1/2 < U ≤ 1. (b) X = 0.894.
19-13. (a) X = δ[−ln(1 − U)]^(1/β). (b) 1.558.
19-15. Σᵢ₌₁¹² Uᵢ − 6 = 1.07. 19-17. X = −(1/λ) ln(U₁U₂) = 0.841.
19-19. X = 5 trials. 19-21. [−3.41, 4.41].
19-23. [80.4, 119.6]. 19-25. Exponential with parameter 1; Cov(U₁, 1 − U₁) = −V(U₁) = −1/12.
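Several of the Chapter 19 answers above are inverse-transform recipes. A minimal sketch ties 19-9 (exponential generation) to 19-25 (antithetic pairing); the 0.693 = ln 2 value in 19-9(b) corresponds to u = 0.5 (illustrative code, not from the text).

    import math
    import random

    def exp_inverse_transform(u, lam=1.0):
        # 19-9: X = -(1/lambda) * ln(1 - U) is exponential(lambda) for U ~ U(0, 1).
        return -math.log(1.0 - u) / lam

    print(exp_inverse_transform(0.5))  # ln 2 = 0.693, as in 19-9(b)

    # 19-25: U and 1 - U are antithetic, with Cov(U, 1 - U) = -V(U) = -1/12,
    # so averaging the paired draws below reduces the variance of the estimate.
    u = random.random()
    x1, x2 = exp_inverse_transform(u), exp_inverse_transform(1.0 - u)
    print((x1 + x2) / 2)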
: wit OC) « 4025 Dep eaeee is”
be ete. fe
Index

2² factorial design, 373 normal approximation to Bootstrap confidence intervals,


2³ factorial design, 379 binomial, 155 259
2ᵏ factorial designs, 373, 379 Assignable cause of variation, Bootstrap standard error, 231
2ᵏ⁻¹ fractional factorial design, 509 Box plot, 179
394 Asymptotic properties of the Box-Muller method of
2ᵏ⁻ᵖ fractional factorial design, maximum likelihood generating normal random
400 estimator, 223 numbers, 584
3-sigma control limits, 511 Asymptotic relative efficiency, Burn-in, 540
499, 501
A Attribute data, 170 c
Absorbing state for a Markov Autocorrelation in regression, c chart, 523
chain, 558, 560 472 Cartesian product, 5, 71
Acceptance-rejection method of Average run length (ARL), 532, Cauchy distribution, 168
random number generation, 534, 535 Cause-and-effect diagram, 509,
586 | $35
Active redundancy, 544 B Censored life test, 548, 549
Actual process capability, 517 Backward elimination of Census, 172
Additivity theorem of chi- variables in regression, Centerline on control chart,
square, 207 483 510
Adjusted R’, 453, 476 Batch means, 590 Central limit theorem, 152, 202,
Aliasing of effects in a fractional Bayes’ estimator, 228, 230 233, 546, 584, 588
factorial, 395, 400 Bayes’ theorem, 27, 226 Central moments, 48, 58, 189
All possible regressions, 474 Bayesian confidence intervals, Chance cause of variation, 509
Alternate fraction, 396 252 Chapman-Kolmogorov
Alternative hypothesis, 266 Bayesian inference, 226, 227, equations, 555, 556, 561
Analysis of variance, 321, 323, 252 Characteristic function, 66
324, 331, 342 Bernoulli distribution, 106, 124 Chebyshev’s inequality, 48
Analysis of variance model, 323, Bernoulli process, 106 Check sheet, 509
328, 337, 341, 359, 366, 367 Bernoulli random variable, 582 Chi-square distribution, 137,
Analysis of variance tests in Bernoulli trials, 106, 555 168, 206, 238, 300, 549
regression, 416, 448 Best linear unbiased estimator, mean and variance of, 206
Analytic study, 172 260 additivity theorem of, 207
ANOVA, see analysis of Beta distribution, 141 percentage points of, 603
variance Biased estimator, 220, 467, 588, Chi-square goodness-of-fit test,
Antithetic variates, 591, 593, 589 300
595 Binomial approximation to Class intervals for histogram,
Approximate confidence hypergeometric, 122 176
intervals in maximum Binomial distribution, 40, 46, 66, Cochran’s Theorem, 325
likelihood estimation, 251 106, 108, 111, 124, 492 Coefficient of determination,
Approximation to the mean and mean and variance of, 109 426
variance,62 cumulative binomial Coefficient of multiple
Approximations to distributions, distribution, 110 determination, 453
122; Birth-death equations, 555, 564 Combinations, 16
binomial approximation to Bivariate normal distribution, Common random numbers in
hypergeometric, 122 160, 427 simulation, 591, 593
Poisson approximation to Blocking principle, 341, 390 Comparison of sign test and
binomial, 122 Bootstrap, 231 t-test, 496

649

Comparison of Wilcoxon rank- Confidence limits, 232 Cycle length of a random


sum test and t-test, 501 Confounding, 390, 394 number generator, 581
Comparison of Wilcoxon signed Consistent estimator, 220, 223
rank test and t-test, 499 Contingency table, 307 D
Complement (set complement), Continuity corrections, 155 Data, 173
3 Continuous data, 170 Decision interval for the
Completely randomized design, Continuous function of a CUSUM, 527
359 continuous random Defect concentration diagram
Components of variance model, variable, 55 536
323 Continuous random variable, 41, Defects, 523
Conditional distributions, 71, 79, 55 Defining contrasts, 391
80, 128, 162 Continuous sample space, 6 Defining relation for a design,
Conditional expectation, 71, 82, Continuous simulation output, 395, 400
84 $87 Degrees of freedom, 188
Conditional probability, 19, 21 Continuous uniform distribution, Delta method, 62
Confidence coefficient, 232 128, 140 Demonstration and acceptance
Confidence interval, 232, 233 mean and variance of, 129 testing, 551
Confidence interval on the Continuous-time Markov chain, Descriptive statistics, 172
difference in means for 561 Design generator, 395, 400
paired observations, 247 Contour plot of response, 355, Design of experiments, 353, 354
Confidence interval on the 358 Design resolution, 399, 402
difference in means of two Contrast, 374 Designed experiment, 321, 322,
normal distributions, Control chart, 509 536
variances known, 242 Control charts for attributes, 520, Deterministic versus
Confidence interval on the 522, 523, 524 Deterministic versus
difference in means of two Control chart for individuals, Discrete data, 170
normal distributions, 518 Discrete distributions, 106
variances unknown, 244, Control charts for measurements, Discrete random variable, 38, 54
246 510, 518, 522 Discrete sample space, 6
Confidence interval on the Control limits, 510, 511 Discrete simulation output, 587
difference in two Convolution, 585 Discrete-évent simulation, 576
proportions, 250 Convolution method of random Distribution free statistical
Confidence interval on mean number generation, 585 methods, see nonparametric
response in regression, 418 Cook’s distance, 470 methods
Confidence interval on the mean Correlation coefficient, 190 Distribution function 36, 37, 38,
of a normal distribution, Correlation matrix of regressor 91,110
variance known, 233, 236 variables, 461 Dot plots, 173, 175
Confidence interval on the mean Correlation, 71, 87, 88, 161, 409, Durbin- Watson test, 472
of a normal distribution, 427 Dynamic simulation, 576
variance unknown, 236 Covariance, 71, 87, 88, 161
Confidence interval on a Covariance matrix of regression E
proportion, 239, 24] coefficients, 443 Enumerative study, 172, 204
Confidence interval on the ratio C, statistic in regression, 475 Equally likely outcomes, 10
of variances of two normal Cramer-Rao lower bound, 219, Equivalent events, 34, 52
distributions, 248 223; 251 Ergodic property of a Markov
Confidence intervals on Critical region, 267 chain, 559
regression coefficients, 417, Critical values of a test statistic, Erlang distribution, 585
444 267 Error sum of squares, 325, 413
Confidence interval on Cumulative distribution function Estimable functions, 329
simulation output, 588, 591, (CDF), see distribution Estimated standard error, 203,
592 function 230, 240
Confidence interval on the Cumulative normal distribution, Estimation of o” in regression,
variance of a normal 145 414, 444
distribution, 238 Cumulative sum (CUSUM) Estimation of variance
Confidence level, 234, 235 control chart, 525, 526, 534 components, 338, 367, 368

Events, 8 Function of a random variable, Hypothesis tests on the mean of


Expectation, 58 52, 53 a normal distribution,
Expected life, 62 Functions of two random variance known, 271
Expected mean squares, 326, variables, 92 Hypothesis tests on the mean of
338, 360, 361, 366, 368 a normal distribution,
Expected recurrence time in a G variance unknown, 278
Markov chain, 558 Gamma distribution, 67, 134, Hypothesis tests on the means of
Expected value of a random 140, 540, 546 two normal distributions,
variable, 58, 77 relationship of gamma and variances known, 286
Experimental design, 508 exponential, 135 Hypothesis tests on the means of
Exponential distribution, 42, 46, mean and variance of, 135 two normal distributions,
130, 140, 540, 541, 548, relationship to chi-square variances unknown, 288,
546, 564, 583 distribution, 137 290
relationship of exponential three-parameter gamma, Hypothesis tests on the variance
and Poisson, 131 141 of a normal distribution,
mean and variance of 131 Gamma function, 134 281
memoryless property, 133 General regression significance Hypothesis tests on two
Exponentially weighted tests, 450 proportions, 297, 299
moving average (EWMA) Generalized interaction, 393
control chart, 525, 526, Generation of random variables, I
529, 534 580, 582 Idealized experiments, 5
Extra sum of squares method, Generation of realizations of Identity element, 381
450 random variables, 123, 138, In-control process, 509
Extrapolation, 447 164 Independent events, 20, 23
Geometric distribution, 106, 112, Independent experiments, 25
F 124, 535, 586 Independent random variables,
Factor effects, 356 _ mean and variance of, 113 71, 86, 88, 95
Factorial experiments, 355, 359, memoryless property, 114 Indicator variables, 458
369 Inferential statistics, 170 _
Failure rate, 62, also see hazard H Influential observations in
function Half-interval corrections, 155 regression, 470
F-distribution, 211 Half-normal distribution, 168 Initialization bias in simulation,
mean and variance of, 212 Hat matrix in regression, 471 589
percentage points of, 605, Hazard function, 538, 539, Instantaneous failure rate, see
606, 607, 608, 609 541 hazard function
Finite population, 172, 204 Histogram, 175, 177, 509, 514 Intensity of passage, 562
Finite sample spaces, 14 Hypergeometric distribution, 40, Intensity of transition, 562
Finite-state Markov chain, 556 106, 117, 124 Interaction, 356, 374, 438
First passage time, 557 mean and variance of, 118 Interaction term in a regression
First-order autoregressive model, Hypothesis testing, 216, 266, ' model, 438
472 267 Intersection (set intersection), 3
Fixed effects ANOVA, 323 Hypothesis tests in the Interval failure rate, 538
Fixed effects model, 359 correlation model, 429 Intrinsically linear regression
Forward selection of variables in Hypothesis tests in multiple model, 426
regression, 482 linear regression, 447 Invariance property of the
Fraction defective or Hypothesis tests in simple linear maximum likelihood
nonconforming, 520 regression, 414 estimatur, 223
Fracticnal factorial design, Hypothesis tests on a proportion, Inventory system, 580
394 283 Inverse transform method for
Frequency distribution, 175, Hypothesis tests on individual generating random
176 coefficients in multiple numbers, 582
Full-cycle random number regression, 450 Inverse transform theorem, 582
generator, 581 Hypothesis tests on the equality Inverse transformation method,
Function of a discrete random of two variances, 295, 64
variable, 54 296 Irreducible Markov chain, 559

J Mean squares, 325, 342, 360 * Nonparametric ANOVA, see


Jitter, 174 Mean time to failure (MTTF), Kruskal-Wallis test
Joint probability distributions, 62, 540 x Nonparametric confidence
71, 72, 73, 94 Measurement data, 170 interval, 237
Judgment sample, 198 Median of the population, 185 Nonparametric methods, 237,
Median, 158, 492 491
K Memoryless property of the Nonterminating (steady state)
Kruskal-Wallis test, 501, 503, exponential distribution, simulations, 587, 590
504 133, 541 Normal approximation for the
Kurtosis, 189, 307 Memoryless property of the sign test, 493
geometric distribution, Normal approximation for the
i 114 Wilcoxon rank-sum test,
Lack of fit test in regression, 422 Method of least squares, see 501
Large-sample confidence least squares estimation Normal approximation for the
interval, 236 Method of maximum likelihood, Wilcoxon signed rank test,
Law of large numbers, 71, 99, see maximum likelihood 497
101 estimator Normal approximation to the
Law of the unconscious Minimum variance unbiased binomial, 155, 241
statistician, 58 estimator, 219 Normal distribution, 143, 583,
Least squares estimation, 328, Mixed model, 367 mean and variance of,
410, 438 Mode, 158 144
Least squares normal equations, Model adequacy checking, 330, cumulative distribution, 145
328, 410, 439, 440 345, 364, 384, 414, 421, reproductive property of,
Life testing, 547 452, 454 150
Likelihood function, 221, 226 Moment estimator, 224 Normal probability plot, 304,
Linear combinations of random Moment generating function, 65, 305
variables, 96, 99, 151 99, 107, 110, 114, 116, 121, Normal probability plot of
Linear congruential random 129, 132, 135, 145, 150, effects, 387, 398
number generator (LCG), 202 Normal probability plot of
581, 582 Moments, 44, 47, 48, 58, 65 residuals, 330
Linear regression model, 409, Monte-Carlo integration, 578 Null hypothesis, 266
437 Monte-Carlo simulation, 577
Little’s law, 568 Moving range, 518 O
Lognormal distribution, 157, 159 mpu, see mean per unit estimator One factor at a time experiment,
mean and variance of, 158 Multicollinearity, 464 356
Loss function, 227 Multicollinéarity diagnostics, One-half fraction, 394
Lower control limit, 510 466 One-sided alternative hypothesis,
Multinomial distribution, 106, 266, 269-271
M 116 One-sided confidence interval,
Main effects, 355, 374 Multiple comparison procedures, 233, 236
Mann-Whitney test, see 593 One-step transition matrix for a
Wilcoxon rank-sum test Multiple regression model, 437 Markov chain, 556, 563
Marginal distribution, 71, 75, 76, Multiplication principle, 14, 22 One-way classification ANOVA,
161 Multiplicative linear 323
Markov chain, 555, 558, 559, congruential random Operating characteristic (OC)
560, 561 number generator, 582 curve, 274, 275, 280, 347
Markov process, 555, 573 Mutually exclusive, 22, 24 charts for, 610-626
Markov property, 555 Optimal estimator, 220
Maximum likelihood estimator, N Order statistics, 214
221, 223, 230, 251, 428 Natural tolerance limits of a Origin moments, 47, 58, 224
Mean of a random variable, 44, process, 516 Orthogonal contrasts, 332
58 Negative binomial distribution, Orthogonal design, 381
Mean per unit estimate, 204 106, 124 Outlier, 180, 421
Mean square error of an Noncentral F distribution, 347 Output analysis of simulation
estimator, 218 Noncentral t-distribution, 279 models, 586, 587, 591

P Probability plotting, 303 Regression of the mean, 71, 85,


p chart, 520 Probability sampling, 172 163
Paired data, 292, 494, 498 Probability, 1, 8, 9, 10, 11, 12, Regression sum of squares, 416
Paired t-test, 292 13 Regressor variable, 409
Parameter estimation, 216 Process capability, 514, 516 Rejection region, see critical
Pareto chart, 181, 509, 535 Process capability ratio, 516, 518 region .
Partial regression coefficients, Projection of 2* designs, 384 Relationship between hypothesis
437 Projection of 2‘! designs, 398 tests and confidence
Partition of the sample space, 25 Properties of estimated intervals, 276
Pascal distribution, 106, 115, regression coefficients, 412, Relationship of exponential and
124 413, 443 Poisson random variables,
mean and variance of, 115 Properties of probability, 9 131
Pearson correlation coefficient, Pseudorandom numbers (PRNs), Relationship of gamma and
see correlation coefficient 581 exponential random
Permutations, 15, 19 variables, 135
Piecewise linear regression, 490 Q Relative efficiency of an
Point estimate, 216 Qualitative regressor variables, estimator, 218
Point estimator, 216 458 Relative frequency, 9, 10
Poisson approximation to Quality improvement, 507 Relative range, 511
binomial, 122 Quality of conformance, 507 Reliability, 507, 538
Poisson distribution, 41, 106, Quality of design, 507 Reliability engineering, 24, 507,
118, 120, 124, 523 Queuing, 555, 564, 568, 570, ey)
mean and variance of, 120 572, 573, 579 Reliability estimation, 548
cumulative probabilities for, Reliability function, 538, 539
598, 599, 600 R Reliability of serial systems, 542
Poisson process, 119, 582 R chart, 510, 512 Replication, 322
Polynomial regression model, R’, 426, 429, 453, 475 Reproductive property of the
438, 456 R’,4, See adjusted R? normal distribution, 150,
Pooled estimator of variance, Random effects ANOVA, 323, 151
289 337 Residual analysis, 330, 345, 364,
Pooled t-test, 289 Random effects model, 366 384, 414, 421, 454
Population, 170, 171 Random experiments, 5 Residuals, 330, 364, 377, 421
Population mean, 184 Random number, see Resolution III design, 399
Population mode, 185 pseudorandom number Resolution IV design, 399
Positive recurrent Markov chain, Random sample, 198, 199 Resolution V design, 400
558 Random sampling, 172 Response variable, 409
Posterior distribution, 226 Random variable, 33, 38 Ridge regression, 466, 467
Potential process capability, 517 Random vector, 71 Risk, 227
Power of a statistical test, 267, Randomization, 322, 359
347 Randomized block ANOVA, S
Practical versus statistical 342 Sample correlation coefficient,
significance in hypothesis Randomized block design, 341 ‘428
tests, 277 Rank transformation in ANOVA, Sample mean, 184
Precision of estimation, 234, 235 504 Sample median, 184, 185
Prediction in regression, 420, Ranking and selection Sample mode, 185
446 procedures, 591, 593 Sample range, 188
Prediction interval, 255 Ranks, 496, 498, 499, 502, 504 Sample size for ANOVA, 347
Prediction interval in regression, Rational subgroup, 510 Sample size for confidence
420, 446 Rayleigh distribution, 168 intervals, 235, 237, 241,
Principal block, 392 Recurrence state for a Markov 244
Principal fraction, 396 chain, 558 Sample size for hypothesis tests,
Prior distribution, 226 Recurrence time, 557 273, 279, 282, 284, 287,
Probability density function, 42 Redundant systems, 544, 545 291, 296, 298
Probability distribution, 39, 42 Regression analysis, 409 Sample spaces, 5, 6, 8
Probability mass function, 39 Regression model, 437 Sample standard deviation, 187

Sample variance, 186 Standardizing, 146 »Total Probability Law, 26


Sampling distribution, 201, 202 Standby redundancy, 545 Transformation in regression,
Sampling fraction, 204 State equations for a Markov 426
Sampling with replacement, 199, chain, 559 Transformation of the response,
204 Statistic, 201, 216 330
Sampling without replacement, Statistical control, 509 Transient state for a Markov
172, 199, 204 Statistical hypotheses, 266 chain, 558
Saturated fractional factorial, Statistical inference, 216 Transition probabilities, 555,
402 Statistical process control (SPC), $56, 563
Scatter diagram, see scatter plot 508 Treatment, 321
Scatter plot, 173, 174, 175, 411, Statistical quality control, 507 Treatment sum of squares, 325
412, 509 Statistically based sampling Tree diagram, 14
Seed for a random number plans, 508 Trial control limits, 512
generator, 581 Statistics, field of, 169 Triangular distribution, 43
Selecting the form of a Stem and leaf plot, 178 Trimmed mean, 195
distribution, 306 Stepwise regression, 479 t-tests on individual regression
Set operations, 4 Stirling’s formula, 145 coefficients, 450
Sets, 2 Stochastic process, 555 Tukey’s test, 335, 336, 344, 363
Shewhart control charts, 509, Stochastic simulation, 576 Two-factor factorial, 359
525 Strata, 200, 204 Two-factor random effects
Sign test, 491, 492, 493, 494, Stratified random sample, 200, ANOVA, 366
496 204 Two-factor mixed model
critical values for, 629, Strong versus weak conclusions ANOVA, 367
Sign test for paired samples, 494 in hypothesis testing, 268 Two-sided alternative
Significance level of a statistical Studentized residual, 471 hypothesis, 266, 269-271
test, 267 Sum of Poisson random Two-sided confidence interval,
Significance levels in the sign variables, 121 233
test, 493 Summary statistics for grouped Type I error, 267
Significance of regression, 415, data, 191 Type II error, 267, 494
447 System failure rate, 543 “ Type Il error for the sign test,
Simple correlation coefficient, 494
see correlation coefficient AE
Simple linear regression model, Tabular form of the CUSUM, U
409 526 u chart, 524
Simulation, 576 Taylor series, 62 Unbalanced design, 331
Simultaneous confidence t-distribution, 208, 236 Unbiased estimator, 217, 219,
intervals, 252 percentage points of, 604 220
Single replicate of a factorial Terminating (transient) Uncorrelated random variables,
experiment, 364, 386 simulations, 587 88
Skewness, 189, 203, 306, 307 Three factor analysis of variance, Uniform (0, 1) random numbers,
Sparsity of effects principle, 386 366 581, 582
Specification limits, 514 Threé factor factorial, 369 Union of sets, 3
Standard deviation, 45 Three-dimensional scatter plot, Universal set, 3
Standard error, 203 174, 176 Universe, 170
Standard error of a point Tier chart, see tolerance chart Upper control limit, 510
estimator, 230 Ties in the Kruskal-Wallis test,
Standard error of factor effect, 503 V

383 Ties in the sign test, 493 Variability, 169


Standard normal distribution, Ties in the Wilcoxon signed rank Variable seiection in regression,
145, 152, 233 test, 497 474, 479, 482, 483
cumulative probabilities for, Time plots, 183 Variance components, 337, 366,
601, 602 Time-to-failure distribution, 538, 368
Standardized regression 540, 541 Variance inflation factors, 464
coefficients, 462 Tolerance chart, 514 Variance of a random variable,
Standardized residual, 421 Tolerance interval, 257 45,58

Variance of an estimator, 218 Weibull distribution, 137, 140, Wilcoxon signed rank test, 496
Variance reduction methods in 540 critical values for, 630
simulation, 591, 593, 596 mean and variance of, 137
Venn diagram, 4 Weighted least squares, 435 x
Wilcoxon rank-sum test, 499 X chart, 510, 511
W critical values for, 627, 628
Waiting-line theory, see Wilcoxon signed rank test for Y
queuing paired samples, 498 Yates' algorithm for the 2ᵏ, 385
This Wiley Student Edition is part of a continuing program of
paperbound textbooks especially designed for students in
developing countries at a reduced price.

FOR SALE ONLY IN

WSE  Wiley India Pvt. Ltd.


4435-36/7, Ansari Road, Daryaganj,
New Delhi-110 002.
Tel: 91-11-43630000
Fax: 91-11-23275895
E-mail: [email protected]
WILEY  Website: www.wileyindia.com
