
Simulation

Using ProModel
4th Edition

Biman Ghosh, Royce Bowden,


Bruce Gladwin, Charles Harrell

SAN DIEGO
Bassim Hamadeh, CEO and Publisher
Mieka Portier, Senior Acquisitions Editor
Tony Paese, Project Editor
Abbey Hastings, Production Editor
Emely Villavicencio, Senior Graphic Designer
Greg Isales, Licensing Coordinator
Jaye Pratt, Interior Designer
Natalie Piccotti, Director of Marketing
Kassie Graves, Vice President, Editorial
Jamie Giganti, Director of Academic Publishing

Copyright © 2022 by Cognella, Inc. All rights reserved. No part of this publication may be reprinted,
reproduced, transmitted, or utilized in any form or by any electronic, mechanical, or other means,
now known or hereafter invented, including photocopying, microfilming, and recording, or in any
information retrieval system without the written permission of Cognella, Inc. For inquiries regarding
permissions, translations, foreign rights, audio rights, and any other forms of reproduction,
please contact the Cognella Licensing Department at [email protected].

Trademark Notice: Product or corporate names may be trademarks or registered trademarks and
are used only for identification and explanation without intent to infringe.

Cover images: Copyright © 2018 iStockphoto LP/romaset.


Copyright © 2019 iStockphoto LP/Wavebreakmedia.
Copyright © 2020 iStockphoto LP/3alexd.

Images and screenshots from ProModel software Copyright © by ProModel Corporation. Reprinted
with permission.

Screenshots from Microsoft Excel Copyright © by Microsoft.

Printed in the United States of America.

3970 Sorrento Valley Blvd., Ste. 500, San Diego, CA 92121


Contents

Foreword xxv

Acknowledgments xxvii

Part 1 STUDY CHAPTERS 1

1 Introduction to Simulation 3

1.1 Introduction 3

1.2 What Is Simulation? 5

1.3 Why Simulate? 6

1.4 Doing Simulation 8

1.5 Use of Simulation 9

1.6 When Simulation Is Appropriate 11

1.7 Qualifications for Doing Simulation 12

1.8 Conducting a Simulation Study 14

1.8.1 Defining the Objective 15

1.8.2 Planning the Study 16

1.9 Economic Justification of Simulation 17

1.10 Sources of Information on Simulation 20

1.11 How to Use This Book 21

1.12 Summary 21

1.13 Review Questions 22

1.14 Case Studies 23

References 31

Further Reading 32
2 System Dynamics 33

2.1 Introduction 33

2.2 System Definition 34

2.3 System Elements 35

2.3.1 Entities 36

2.3.2 Activities 36

2.3.3 Resources 36

2.3.4 Controls 37

2.4 System Complexity 37

2.4.1 Interdependencies 38

2.4.2 Variability 39

2.5 System Performance Metrics 40

2.6 System Variables 43

2.6.1 Decision Variables 43

2.6.2 Response Variables 43

2.6.3 State Variables 43

2.7 System Optimization 44

2.8 The Systems Approach 45

2.8.1 Identifying Problems and Opportunities 46

2.8.2 Developing Alternative Solutions 47

2.8.3 Evaluating the Solutions 47

2.8.4 Selecting and Implementing the Best Solution 47

2.9 Systems Analysis Techniques 47

2.9.1 Hand Calculations 49

2.9.2 Spreadsheets 49

2.9.3 Operations Research Techniques 50

2.9.4 Special Computerized Tools 53

2.10 Summary 54
2.11 Review Questions 54

References 56
3 Simulation Basics 57

3.1 Introduction 57

3.2 Types of Simulation 57

3.2.1 Static versus Dynamic Simulation 58

3.2.2 Stochastic versus Deterministic Simulation 58

3.2.3 Discrete-Event versus Continuous Simulation 59

3.3 Random Behavior 61

3.4 Simulating Random Behavior 63

3.4.1 Generating Random Numbers 63

3.4.2 Generating Random Variates 68

3.4.3 Generating Random Variates from Common


Continuous Distributions 71

3.4.4 Generating Random Variates from Common


Discrete Distributions 73

3.5 Simple Spreadsheet Simulation 75

3.5.1 Simulating Random Variates 76

3.5.2 Simulating Dynamic, Stochastic Systems 80

3.5.3 Simulation Replications and Output Analysis 83

3.6 Summary 84

3.7 Review Questions 84

References 86
4 Discrete-Event Simulation 87

4.1 Introduction 87

4.2 How Discrete-Event Simulation Works 88

4.3 A Manual Discrete-Event Simulation Example 89

4.3.1 Simulation Model Assumptions 91

4.3.2 Setting Up the Simulation 91


4.3.3 Running the Simulation 93

4.3.4 Calculating Results 99

4.3.5 Issues 101

4.4 Commercial Simulation Software 102

4.4.1 Modeling Interface Module 102

4.4.2 Model Processor 103

4.4.3 Simulation Interface Module 103

4.4.4 Simulation Processor 104

4.4.5 Animation Processor 104

4.4.6 Output Processor 105

4.4.7 Output Interface Module 105

4.5 Simulation Using ProModel 105

4.5.1 Building a Model 105

4.5.2 Running the Simulation 106

4.5.3 Output Analysis 107

4.6 Languages versus Simulators 108

4.7 Future of Simulation 110

4.8 Summary 111

4.9 Review Questions 112

References 113
5 Data Collection and Analysis 115

5.1 Introduction 115

5.2 Guidelines for Data Gathering 116

5.3 Determining Data Requirements 118

5.3.1 Structural Data 118

5.3.2 Operational Data 118

5.3.3 Numerical Data 119

5.3.4 Use of a Questionnaire 119

5.4 Identifying Data Sources 120


5.5 Collecting the Data 121

5.5.1 Defining the Entity Flow 121

5.5.2 Developing a Description of Operation 122

5.5.3 Defining Incidental Details and Refining Data Values 123

5.6 Making Assumptions 124

5.7 Statistical Analysis of Numerical Data 125

5.7.1 Tests for Independence 127

5.7.2 Tests for Identically Distributed Data 132

5.8 Distribution Fitting 135

5.8.1 Frequency Distributions 136

5.8.2 Theoretical Distributions 138

5.8.3 Fitting Theoretical Distributions to Data 142

5.9 Selecting a Distribution in the Absence of Data 148

5.9.1 Most Likely or Mean Value 149

5.9.2 Minimum and Maximum Values 149

5.9.3 Minimum, Most Likely, and Maximum Values 149

5.10 Bounded versus Boundless Distributions 150

5.11 Modeling Discrete Probabilities Using


Continuous Distributions 151

5.12 Data Documentation and Approval 152

5.12.1 Data Documentation Example 152

5.13 Summary 155

5.14 Review Questions 155

5.15 Case Study 158

References 159

For Further Reading 159


6 Model Building 161

6.1 Introduction 161

6.2 Converting a Conceptual Model to a Simulation Model 162


6.2.1 Modeling Paradigms 162

6.2.2 Model Definition 163

6.3 Structural Elements 164

6.3.1 Entities 165

6.3.2 Locations 166

6.3.3 Resources 168

6.3.4 Paths 170

6.4 Operational Elements 170

6.4.1 Routings 170

6.4.2 Entity Operations 171

6.4.3 Entity Arrivals 174

6.4.4 Entity and Resource Movement 176

6.4.5 Accessing Locations and Resources 176

6.4.6 Resource Scheduling 178

6.4.7 Downtimes and Repairs 179

6.4.8 Use of Programming Logic 183

6.5 Miscellaneous Modeling Issues 186

6.5.1 Modeling Rare Occurrences 186

6.5.2 Large-Scale Modeling 186

6.5.3 Cost Modeling 187

6.6 Summary 188

6.7 Review Questions 188

References 190
7 Model Verification and Validation 191

7.1 Introduction 191

7.2 Importance of Model Verification and Validation 192

7.2.1 Reasons for Neglect 192

7.2.2 Practices That Facilitate Verification and Validation 193


7.3 Model Verification 194

7.3.1 Preventive Measures 195

7.3.2 Establishing a Standard for Comparison 196

7.3.3 Verification Techniques 196

7.4 Model Validation 199

7.4.1 Determining Model Validity 200

7.4.2 Maintaining Validation 203

7.4.3 Validation Examples 203

7.5 Summary 206

7.6 Review Questions 206

References 207
8 Simulation Output Analysis 209

8.1 Introduction 209

8.2 Statistical Analysis of Simulation Output 210

8.2.1 Simulation Replications 211

8.2.2 Performance Estimation 212

8.2.3 Number of Replications (Sample Size) 216

8.2.4 Real-World Experiments versus Simulation Experiments 220

8.3 Statistical Issues with Simulation Output 221

8.4 Terminating and Nonterminating Simulations 224

8.4.1 Terminating Simulations 224

8.4.2 Nonterminating Simulations 225

8.5 Experimenting with Terminating Simulations 226

8.5.1 Selecting the Initial Model State 227

8.5.2 Selecting a Terminating Event to Control Run Length 227

8.5.3 Determining the Number of Replications 227

8.6 Experimenting with Nonterminating Simulations 228

8.6.1 Determining the Warm-up Period 228

8.6.2 Obtaining Sample Observations 232


8.6.3 Determining Run Length 239

8.7 Summary 240

8.8 Review Questions 240

References 241
9 Comparing Systems 243

9.1 Introduction 243

9.2 Hypothesis Testing 244

9.3 Comparing Two Alternative System Designs 247

9.3.1 Welch Confidence Interval for Comparing Two Systems 249

9.3.2 Paired-t Confidence Interval for Comparing Two Systems 250

9.3.3 Welch versus the Paired-t Confidence Interval 253

9.4 Comparing More Than Two Alternative System Designs 253

9.4.1 The Bonferroni Approach for Comparing More


Than Two Alternative Systems 254

9.4.2 Advanced Statistical Models for Comparing


More Than Two Alternative Systems 259

9.4.3 Design of Experiments and Optimization 266

9.5 Variance Reduction Techniques 267

9.5.1 Common Random Numbers 267

9.5.2 Example Use of Common Random Numbers 269

9.5.3 Why Common Random Numbers Work 271

9.6 Summary 272

9.7 Review Questions 273

References 274
10 Simulation Optimization 275

10.1 Introduction 275

10.2 In Search of the Optimum 277

10.3 Combining Direct Search Techniques with Simulation 278

10.4 Evolutionary Algorithms 279


10.4.1 Combining Evolutionary Algorithms with Simulation 280

10.4.2 Illustration of an Evolutionary Algorithm’s Search of


a Response Surface 281

10.5 Strategic and Tactical Issues of Simulation Optimization 283

10.5.1 Operational Efficiency 283

10.5.2 Statistical Efficiency 284

10.5.3 General Optimization Procedure 284

10.6 Formulating an Example Optimization Problem 286

10.6.1 Problem Description 287

10.6.2 Demonstration of the General Optimization Procedure 288

10.7 Real-World Simulation Optimization Project 291

10.7.1 Problem Description 291

10.7.2 Simulation Model and Performance Measure 292

10.7.3 Toyota Solution Technique 293

10.7.4 Simulation Optimization Technique 294

10.7.5 Comparison of Results 294

10.8 Summary 296

10.9 Review Questions 296

References 297
11 Design of Simulation Experiments 301

11.1 Experiments 301

Runs 304

Replication 304

11.2 Full Factorial Design 306

11.3 Fraction Factorial Design 307

11.3.1 Half Fraction Factorial Design


(Three Factors, Two Levels Each) 308

11.3.2 Half Fraction Factorial Design


(Four Factors, Two Levels Each) 309
11.4 Analysis of Factorial Experiments 310

11.4.1 Prediction Equation 310

11.4.2 Analysis of Variance 311

11.5 Review Questions 315

References 317
12 Modeling Manufacturing Systems 319

12.1 Introduction 319

12.2 Characteristics of Manufacturing Systems 320

12.3 Manufacturing Terminology 321

12.4 Use of Simulation in Manufacturing 323

12.5 Applications of Simulation in Manufacturing 325

12.5.1 Methods Analysis 326

12.5.2 Plant Layout 327

12.5.3 Batch Sizing 329

12.5.4 Production Control 330

12.5.5 Inventory Control 332

12.5.6 Supply Chain Management 334

12.5.7 Production Scheduling 334

12.5.8 Real-Time Control 336

12.5.9 Emulation 336

12.6 Manufacturing Modeling Techniques 336

12.6.1 Modeling Machine Setup 336

12.6.2 Modeling Machine Load and Unload Time 337

12.6.3 Modeling Rework and Scrap 337

12.6.4 Modeling Transfer Machines 338

12.6.5 Continuous Process Systems 339

12.7 Summary 340

12.8 Review Questions 340

References 340

For Further Reading 341


13 Modeling Material Handling Systems 343

13.1 Introduction 343

13.2 Material Handling Principles 343

13.3 Material Handling Classification 344

13.4 Conveyors 345

13.4.1 Conveyor Types 345

13.4.2 Operational Characteristics 347

13.4.3 Modeling Conveyor Systems 347

13.4.4 Modeling Single-Section Conveyors 349

13.4.5 Modeling Conveyor Networks 349

13.5 Industrial Vehicles 350

13.5.1 Modeling Industrial Vehicles 350

13.6 Automated Storage/Retrieval Systems 351

13.6.1 Configuring an AS/RS 352

13.6.2 Modeling AS/RSs 353

13.7 Carousels 355

13.7.1 Carousel Configurations 355

13.7.2 Modeling Carousels 355

13.8 Automatic Guided Vehicle Systems 355

13.8.1 Designing an AGVS 356

13.8.2 Controlling an AGVS 358

13.8.3 Modeling an AGVS 358

13.9 Cranes and Hoists 359

13.9.1 Crane Management 360

13.9.2 Modeling Bridge Cranes 360

13.10 Robots 361

13.10.1 Robot Control 361

13.10.2 Modeling Robots 362

13.11 Summary 362


13.12 Review Questions 363

References 363

For Further Reading 364


14 Modeling Service Systems 365

14.1 Introduction 365

14.2 Characteristics of Service Systems 366

14.3 Performance Measures 367

14.4 Use of Simulation in Service Systems 368

14.5 Applications of Simulation in Service Industries 370

14.5.1 Process Design 370

14.5.2 Method Selection 370

14.5.3 System Layout 371

14.5.4 Staff Planning 371

14.5.5 Flow Control 372

14.6 Types of Service Systems 372

14.6.1 Service Factory 372

14.6.2 Pure Service Shop 373

14.6.3 Retail Service Store 373

14.6.4 Professional Service 374

14.6.5 Telephonic Service 374

14.6.6 Delivery Service 375

14.6.7 Transportation Service 375

14.6.8 Online Service 375

14.7 Simulation Example: A Help Desk Operation 376

14.7.1 Background 376

14.7.2 Model Description 377

14.7.3 Results 380

14.8 Summary 381

14.9 Review Questions 381

References 381
Part 2 LABS 383

1 Introduction to ProModel 385

L1.1 ProModel Opening Screen 386

L1.2 ProModel Ribbon Bar 386

L1.3 Run-Time Menus and Controls 390

L1.5 Simulation in Decision Making 391

L1.5.1 California Cellular 391

L1.5.2 ATM System 394

L1.6 Exercises 398


2 Building Your First Model 401

L2.1 Building Your First Simulation Model 401

L2.2 Building the Bank of USA ATM Model 408

L2.3 Locations, Entities, Processing, and Arrivals 413

L2.4 Add Location 416

L2.5 Effect of Variability on Model Performance 417

L2.6 Blocking 418

L2.7 Effect of Traffic Intensity on System Performance 421

L2.8 Exercises 423


3 ProModel Output Viewer 425

L3.1 The Output Viewer 425

L3.2 Using the Output Viewer Ribbon 427

L3.3 File 428

L3.4 Charts 428

L3.5 Tables 429

L3.6 Column Charts 431

L3.7 Utilization Charts 431

L3.8 State Charts 432

L3.9 Time Series Charts 434

L3.10 Dynamic Plots 435

L3.11 Exercises 438


4 Basic Modeling Concepts 441

L4.1 Multiple Locations, Multiple Entity Types 442

L4.2 Multiple Parallel Identical Locations 443

L4.3 Resources 446

L4.4 Routing Rules 448

L4.5 Variables 451

L4.6 Uncertainty in Routing—Track Defects and Rework 455

L4.7 Batching Multiple Entities of Similar Type 456

L4.7.1 Temporary Batching 456

L4.7.2 Permanent Batching 459

L4.8 Attaching One or More Entities to Another Entity 461

L4.8.1 Permanent Attachment 461

L4.8.2 Temporary Attachment 463

L4.9 Accumulation of Entities 466

L4.10 Splitting of One Entity into Multiple Entities 467

L4.11 Decision Statements 468

L4.11.1 IF-THEN-ELSE Statement 469

L4.11.2 WHILE…DO Loop 471

L4.11.3 DO…WHILE Loop 473

L4.11.4 DO…UNTIL Statement 474

L4.11.5 GOTO Statement 475

L4.11.6 WAIT UNTIL Statement 476

L4.12 Periodic System Shutdown 477

L4.13 Exercises 479


5 Fitting Statistical Distributions to Input Data 491

L5.1 An Introduction to Stat::Fit 491

L5.2 Fitting Statistical Distributions to Continuous Data 499

L5.3 Fitting Statistical Distributions to Discrete Data 504

L5.4 User Distribution 507

L5.5 Exercises 510


6 Intermediate Model Building 515

L6.1 Attributes 515

L6.1.1 Using Attributes to Track Customer Types 517

L6.1.2 Using Attributes and Local Variables 519

L6.2 Cycle Time 521

L6.3 Sampling Inspection and Rework 523

L6.4 Preventive Maintenance and Machine Breakdowns 524

L6.4.1 Downtime Using MTBF and MTTR Data 525

L6.4.2 Downtime Using MTTF and MTTR Data 526

L6.5 Shift Working Schedule 528

L6.6 Job Shop 531

L6.7 Modeling Priorities 533

L6.7.1 Selecting among Upstream Processes 533

L6.8 Modeling a Pull System 535

L6.8.1 Pull Based on Downstream Demand 536

L6.8.2 Kanban System 538

L6.9 Tracking Cost 540

L6.10 Importing a Background 543

L6.11 Defining and Displaying Views 546

L6.12 Creating a Model Package 549

L6.13 Exercises 552


7 Model Verification and Validation 569

L7.1 Verification of an Inspection and Rework Model 569

L7.2 Verification by Tracing the Simulation Model 571

L7.3 Debugging the Simulation Model 573

L7.3.1 Debugging ProModel Logic 573

L7.3.2 Basic Debugger Options 575

L7.3.3 Advanced Debugger Options 576

L7.4 Validating the Model 577


L7.4.1 An Informal Intuitive Approach to Validation 577

L7.5 Exercises 578

Reference 579
8 Simulation Output Analysis 581

L8.1 Terminating versus Nonterminating Simulations 581

L8.2 Terminating Simulation 582

L8.2.1 Starting and Terminating Conditions (Run Length) 583

L8.2.2 Replications 584

L8.2.3 Required Number of Replications 587

L8.2.4 Simulation Output Assumptions 588

L8.3 Nonterminating Simulation 590

L8.3.1 Warm-up Time and Run Length 592

L8.3.2 Replications or Batch Intervals 597

L8.3.3 Required Batch Interval Length 599

L8.4 Exercises 601


9 Comparing Alternative Systems 603

L9.1 Overview of Statistical Methods 603

L9.2 Three Alternative Systems 604

L9.3 Common Random Numbers 607

L9.4 Bonferroni Approach with Paired-t Confidence Intervals 608

L9.5 Formal Test of Hypotheses for Model Validation 612

L9.6 Exercises 614


10 Simulation Optimization with SimRunner 617

L10.1 Introduction to SimRunner 617

L10.2 SimRunner Projects 619

L10.2.1 Single-Term Objective Functions 621

L10.2.2 Multiterm Objective Functions 629

L10.2.3 Target Range Objective Functions 632

L10.3 Conclusions 634


L10.4 Exercises 636
11 Simulation Analysis of Designed Experiments Using ProModel 641

L11.1 Introduction 641

L11.2 Full Factorial Design Simulation Experiment 642

L11.3 Fraction Factorial Design Simulation Experiment 646

L11.4 Exercises 648


12 Modeling Manufacturing Systems 651

L12.1 Macros and Runtime Interface 651

Scenario Parameters 653

L12.2 Generating Scenarios 656

L12.3 External Files 658

L12.4 Arrays 663

L12.5 Subroutines 668

L12.6 Random Number Streams 672

L12.7 Merging a Submodel 674

L12.8 Exercises 675


13 Modeling Material Handling Concepts 677

L13.1 Conveyors 677

L13.1.1 Single Conveyor 678

L13.1.2 Multiple Conveyors 679

L13.1.3 Multiple Merging Conveyors 682

L13.1.4 Recirculating Conveyor 684

L13.2 Resources, Path Networks, and Interfaces 685

L13.2.1 Dynamic Resource as Material Handler 686

L13.2.2 Resource as an Operator cum Material Handler 689

L13.3 Crane Systems 691

L13.4 Exercises 693

Reference 706
14 Modeling Service Systems 707

L14.1 Balking of Customers 707

L14.2 Table Functions 709

L14.3 Arrival Cycles 711

L14.4 User Distribution 715

L14.5 Modeling a University Cafeteria 717

L14.6 Modeling a Call Center—Outsource2US 721

L14.7 Modeling a Triage—Los Angeles County Hospital 725

L14.8 Modeling an Office (DMV) 727

L14.9 Exercises 733

Appendix A Continuous and Discrete Distributions in ProModel 745


Appendix B Critical Values for Student’s t Distribution and Standard Normal Distribution 757
Appendix C F Distribution for α = 0.05 759
Appendix D Critical Values for Chi-Square Distribution 761
Appendix E ProModel Statements and Functions 762

Index 804

About the Authors 810


Foreword

Simulation is a computer modeling and analysis technique used to evaluate and improve
dynamic systems of all types. Imagine being in a highly competitive industry that is burdened
by outdated technologies and inefficient management practices. In order to stay competitive,
you know that changes must be made, but you are not exactly sure what changes would work
best, or if certain changes will work at all. You would like to be able to try out a few different
ideas, but you recognize that this would be very time-consuming, expensive, and disruptive to
the current operation. Now, suppose that there was a way you could make a duplicate of your
system and have unlimited freedom to rearrange activities, reallocate resources, or change
any operating procedures. What if you could even try out completely new technologies and
radical new innovations all within just a matter of minutes or hours? Suppose, further, that
all of this experimentation could be done in compressed time with automatic tracking and
reporting of key performance measures. Not only would you discover ways to improve your
operation, but it could all be achieved risk free—without committing any capital, wasting any
time, or disrupting the current system. This is precisely the kind of capability that simulation
provides. Simulation lets you experiment with a computer model of your system in compressed
time, giving you decision-making capability that is unattainable in any other way.
This text is geared toward simulation courses taught at either an undergraduate or a grad-
uate level. It contains an ideal blend of theory and practice and covers the use of simulation in
both manufacturing and service systems. This makes it well suited for use in courses in either
an engineering or a business curriculum. It is also suitable for simulation courses taught in
statistics and computer science programs. The strong focus on the practical aspects of simulation
also makes it a book that any practitioner of simulation would want to have on hand.
This text is designed to be used in conjunction with ProModel simulation software, which
may or may not accompany the book, depending on how the book was purchased. ProModel
is one of the most powerful and popular simulation packages used today for its ease of use
and flexibility. ProModel was the first fully commercial, Windows-based simulation package
and the first to introduce simulation optimization to maximize the performance of a system.
ProModel is used in organizations and taught in universities and colleges throughout the world.
While many teaching aids have been developed to train individuals in the use of ProModel,
this is the only full-fledged textbook written for teaching simulation using ProModel.

Simulation is a learn-by-doing activity. The goal of this text is not simply to introduce
students to the topic of simulation, but to develop competence in the use of simulation. To
this end, the book contains plenty of real-life examples, case studies, and lab exercises to
give students actual experience in the use of simulation. Simulation texts often place too
much emphasis on the theory behind simulation and not enough emphasis on how it is used
in actual problem-solving situations. In simulation courses we have taught over the years, the
strongest feedback we have received from students is that they wish they had more hands-on
time with simulation beginning from the very first week of the semester. The book expressly
addresses this feedback.
This text is divided into two parts: a section on the general science and practice of simulation,
and a lab section to educate readers in the use of simulation with ProModel. Additionally,
numerous supplemental materials are available on the Cognella website. While the book is
intended for use with ProModel, the division of the book into two parts permits a modular
use of the book, allowing either part to be used independently of the other part.
Part I consists of study chapters covering the science and technology of simulation. The
first four chapters introduce the topic of simulation, its application to system design and
improvement, and how simulation works. Chapters 5 through 11 provide both the practical
and theoretical aspects of conducting a simulation project, including applying simulation
optimization. Chapters 12 through 14 cover specific applications of simulation to manufacturing,
material handling, and service systems.
Part II is the lab portion of the book containing exercises for developing simulation skills
using ProModel. The labs are correlated with the study chapters in Part I so that Lab 1 should
be completed along with Chapter 1 and so on. There are 14 chapters and 14 labs. The labs are
designed for hands-on learning by doing. Readers are taken through the steps of modeling a
situation and then are given exercises to complete on their own.
This text focuses on the use of simulation to solve problems in the two most common types
of systems today: manufacturing and service systems. Manufacturing and service systems
share much in common. They both consist of activities, resources, and controls for processing
incoming entities. The performance objectives in both instances relate to quality, efficiency,
cost reduction, process time reduction, and customer satisfaction. In addition to having
common elements and objectives, they are also often interrelated. Manufacturing systems are
supported by service activities such as product design, order management, or maintenance.
Service systems receive support from production activities such as food production, check
processing, or printing. Regardless of the industry in which one ends up, an understanding of
the modeling issues underlying both systems will be helpful.

Acknowledgments

No work of this magnitude is performed in a vacuum, independently of the help and assistance
of others. We are indebted to many colleagues, associates, and other individuals
who had a hand in this project. John Mauer (Geer Mountain Software) provided valuable
information on input modeling and the use of Stat::Fit. Dr. John D. Hall (APT Research, Inc.)
helped to develop and refine the ANOVA material in Chapter 10. Kerim Tumay (Kiran Analytics)
provided valuable input on the issues associated with service system simulation.
We are grateful to all the reviewers of past editions not only for their helpful feedback, but
also for their generous contributions and insights. For their work in preparation of this fourth
edition, we particularly want to thank: Krishna Krishnan, Wichita State University; Robert
H. Seidman, Southern New Hampshire University; Lee Tichenor, Western Illinois University;
Hongyi Chen, University of Minnesota, Duluth; Anne Henriksen, James Madison University;
Leonid Shnayder, Stevens Institute of Technology; Bob Kolvoord, James Madison University;
Dave Keranen, University of Minnesota, Duluth; Wade H. Shaw, Florida Institute of Technology;
and Marwa Hassan, Louisiana State University.
Many individuals were motivational and even inspirational in taking on this project: Lou
Keller, the late Rob Bateman, Richard Wysk, Dennis Pegden, and Joyce Kupsh, to name a few.
We would especially like to thank our families for their encouragement and for so generously
tolerating the disruption of normal life caused by this project.
Thanks to all of the students who provided valuable feedback on the first, second and
third editions of the text. It is for the primary purpose of making simulation interesting and
worthwhile for students that we have written this book.
We are especially indebted to all the wonderful people at ProModel Corporation who
have been so cooperative in providing software and documentation, especially Christine
Bunker-Crawford. Were it not for the excellent software tools and accommodating support
staff at ProModel, this book would not have been written.
Finally, we thank the editorial and production staff at Cognella Publishing: Mieka Portier,
Rose Tawy, Tony Paese, and Abbey Hastings. They have been great to work with.

Part I

Study Chapters
1 Introduction to Simulation

2 System Dynamics

3 Simulation Basics

4 Discrete-Event Simulation

5 Data Collection and Analysis

6 Model Building

7 Model Verification and Validation

8 Simulation Output Analysis

9 Comparing Systems

10 Simulation Optimization

11 Design of Simulation Experiments

12 Modeling Manufacturing Systems

13 Modeling Material Handling Systems

14 Modeling Service Systems

Chapter 1

Introduction to Simulation

“Man is a tool-using animal. Without tools he is nothing, with tools he is all.”


—Thomas Carlyle

1.1 INTRODUCTION
On March 19, 1999, the following story appeared in the Wall Street Journal, p. A1:

Captain Chet Rivers knew that his 747-400 was loaded to the limit. The giant plane,
weighing almost 450,000 pounds by itself, was carrying a full load of passengers and
baggage, plus 400,000 pounds of fuel for the long flight from San Francisco to Australia.
As he revved his four engines for takeoff, Capt. Rivers noticed that San Francisco’s famous
fog was creeping in, obscuring the hills to the north and west of the airport.
At full throttle, the plane began to roll ponderously down the runway, slowly at first
but building up to flight speed well within normal limits. Capt. Rivers pulled the throttle
back and the airplane took to the air, heading northwest across the San Francisco Peninsula
towards the ocean. It looked like the start of another routine flight. Suddenly the
plane began to shudder violently. Several loud explosions shook the craft and smoke
and flames, easily visible in the midnight sky, illuminated the right wing. Although the
plane was shaking so violently that it was hard to read the instruments, Capt. Rivers
was able to tell that the right inboard engine was malfunctioning, backfiring violently.
He immediately shut down the engine, stopping the explosions and shaking.
However, this introduced a new problem. With two engines on the left wing at
full power and only one on the right, the plane was pushed into a right turn, bringing
it directly towards San Bruno Mountain, located a few miles northwest of the airport.
Capt. Rivers instinctively turned his control wheel to the left to bring the plane back on

William M. Carley, Selection from “United 747's Near Miss Initiates A Widespread Review of Pilot Skills,” The Wall
Street Journal. Copyright © 1999 by Dow Jones & Company, Inc. Reprinted with permission.
course. That action extended the ailerons—control surfaces on the trailing edges of the
wings—to tilt the plane back to the left. However, it also extended the spoilers—panels
on the tops of the wings—increasing drag and lowering lift. With the nose still pointed
up, the heavy jet began to slow. As the plane neared stall speed, the control stick began
to shake to warn the pilot to bring the nose down to gain air speed. Capt. Rivers immediately
did so, removing that danger, but now San Bruno Mountain was directly ahead.
Capt. Rivers was unable to see the mountain due to the thick fog that had rolled in,
but the plane’s ground proximity sensor sounded an automatic warning, calling “terrain,
terrain, pull up, pull up.” Rivers frantically pulled back on the stick to clear the peak, but
with the spoilers up and the plane still in a skidding right turn, it was too late. The plane
and its full load of 100 tons of fuel crashed with a sickening explosion into the hillside
just above a densely populated housing area.
“Hey Chet, that could ruin your whole day,” said Capt. Rivers’ supervisor, who was
sitting beside him watching the whole thing. “Let’s rewind the tape and see what you
did wrong.” “Sure Mel,” replied Chet as the two men stood up and stepped outside the
747 cockpit simulator. “I think I know my mistake already. I should have used my rudder,
not my wheel, to bring the plane back on course. Say, I need a breather after that expe-
rience. I’m just glad that this wasn’t the real thing.”
The incident above was never reported in the nation’s newspapers, even though it
would have been one of the most tragic disasters in aviation history, because it never really
happened. It took place in a cockpit simulator, a device which uses computer technology
to predict and recreate an airplane’s behavior with gut-wrenching realism.

The relief you undoubtedly felt to discover that this disastrous incident was just a simulation
gives you a sense of the impact that simulation can have in averting real-world catastrophes.
This story illustrates just one of the many ways simulation is being used to help minimize the
risk of making costly and sometimes fatal mistakes in real life. Simulation technology is finding
its way into an increasing number of applications ranging from training for aircraft pilots to the
testing of new product prototypes. The one thing that these applications have in common is
that they all provide a virtual environment that helps prepare for real-life situations, resulting
in significant savings in time, money, and even lives.
One area where simulation is finding increased application is in manufacturing and service
system design and improvement. Its unique ability to accurately predict the performance of
complex systems makes it ideally suited for systems planning. Just as a flight simulator reduces
the risk of making costly errors in actual flight, system simulation reduces the risk of having
systems that operate inefficiently or that fail to meet minimum performance requirements.
While this may not be life-threatening to an individual, it certainly places a company (not to
mention careers) in jeopardy.

4 Simulation Using ProModel


In this chapter we introduce the topic of simulation and answer the following questions:

• What is simulation?
• Why is simulation used?
• How is simulation performed?
• When and where should simulation be used?
• What are the qualifications for doing simulation?
• How is simulation economically justified?

The purpose of this chapter is to create an awareness of how simulation is used to visualize,
analyze, and improve the performance of manufacturing and service systems.

1.2 WHAT IS SIMULATION?


The Oxford American Dictionary (1980) defines simulation as a way “to reproduce the conditions
of a situation, as by means of a model, for study or testing or training, etc.” For our purposes,
we are interested in reproducing the operational behavior of dynamic systems. The model that
we will be using is a computer model. Simulation in this context can be defined as the imitation
of a dynamic system using a computer model to evaluate and improve system performance.
According to Schriber (1987), simulation is “the modeling of a process or system in such a way
that the model mimics the response of the actual system to events that take place over time.” By
studying the behavior of the model, we can gain insights about the behavior of the actual system.

Simulation is the imitation of a dynamic system using a computer model in order to
evaluate and improve system performance.

In practice, simulation is usually performed using commercial simulation software like
ProModel that has modeling constructs specifically designed for capturing the dynamic behavior
of systems. Performance statistics are gathered during the simulation and automatically sum-
marized for analysis. Modern simulation software provides a realistic, graphical animation of
the system being modeled. During the simulation, the user can interactively adjust the ani-
mation speed and change model parameter values to do “what if” analysis on the fly.
State-of-the-art simulation technology even provides optimization capability—not that
simulation itself optimizes, but scenarios that satisfy defined feasibility constraints can be
automatically run and analyzed using special goal-seeking algorithms.

Chapter 1: Introduction to Simulation   5

Figure 1.1   Simulation provides animation capability.
This book focuses primarily on discrete-event simulation, which models the effects of the
events in a system as they occur over time. Discrete-event simulation employs statistical meth-
ods for generating random behavior and estimating model performance. These methods are
sometimes referred to as Monte Carlo methods because of their similarity to the probabilistic
outcomes found in games of chance, and because Monte Carlo, a tourist resort in Monaco,
was such a popular center for gambling.
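The Monte Carlo idea can be illustrated outside of any simulation package. The following is a hypothetical Python sketch (not ProModel code; the two activities and their distributions are invented for illustration) that draws random activity times and estimates performance measures by sheer repetition:

```python
import random

random.seed(42)  # reproducible stream of pseudo-random numbers

def sample_cycle_time():
    # Hypothetical process: two sequential activities with random durations
    # (a uniform and a triangular distribution, in minutes).
    return random.uniform(2, 4) + random.triangular(1, 5, 2)

n = 100_000
samples = [sample_cycle_time() for _ in range(n)]
mean_time = sum(samples) / n
p_late = sum(t > 8 for t in samples) / n  # fraction of cycles exceeding 8 minutes

print(f"estimated mean cycle time: {mean_time:.2f} min")
print(f"estimated P(cycle time > 8 min): {p_late:.4f}")
```

Repeating the random experiment many times is exactly the "game of chance" analogy behind the Monte Carlo label: each run is one roll of the dice, and the estimates sharpen as the rolls accumulate.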

1.3 WHY SIMULATE?


Rather than leave design decisions to chance, simulation provides a way to validate whether
the best decisions are being made. Simulation avoids the expensive, time-consuming, and
disruptive nature of traditional trial-and-error techniques.

Trial-and-error approaches are expensive, time-consuming, and disruptive.



With the emphasis today on time-based competition, traditional trial-and-error methods
of decision making are no longer adequate. Regarding the shortcoming of trial-and-error
approaches in designing manufacturing systems, Solberg (1988) notes,

The ability to apply trial-and-error learning to tune the performance of manufacturing
systems becomes almost useless in an environment in which changes occur faster than the
lessons can be learned. There is now a greater need for formal predictive methodology
based on understanding of cause and effect.

The power of simulation lies in the fact that it provides a method of analysis that is not
only formal and predictive but is capable of accurately predicting the performance of even
the most complex systems. Deming (1989) states, “Management of a system is action based
on prediction. Rational prediction requires systematic learning and comparisons of predictions
of short-term and long-term results from possible alternative courses of action.” The key to
sound management decisions lies in the ability to accurately predict the outcomes of alternative
courses of action. Simulation provides precisely that kind of foresight. By simulating alternative
production schedules, operating policies, staffing levels, job priorities, decision rules, and the
like, a manager can more accurately predict outcomes and therefore make more informed
and effective management decisions. With the importance in today’s competitive market of
“getting it right the first time,” the lesson is becoming clear: if at first you do not succeed, you
probably should have simulated it.
By using a computer to model a system before it is built or to test operating policies before
they are implemented, many of the pitfalls that are often encountered in the start-up of a new
system or the modification of an existing system can be avoided. Improvements that tradi-
tionally took months and even years of fine-tuning to achieve can be attained in a matter of
days or even hours. Because simulation runs in compressed time, weeks of system operation
can be simulated in only a few minutes or even seconds. The characteristics of simulation
that make it such a powerful planning and decision-making tool can be summarized as follows:

• Captures system interdependencies.
• Accounts for variability in the system.
• Is versatile enough to model any system.
• Shows behavior over time.
• Is less costly, time consuming, and disruptive than experimenting on the actual system.
• Provides information on multiple performance measures.
• Is visually appealing and engages people’s interest.
• Provides results that are easy to understand and communicate.
• Runs in compressed, real, or even delayed time.
• Forces attention to detail in a design.
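The "compressed time" point in the list above comes from the way discrete-event simulators advance the clock: rather than ticking second by second, the clock jumps directly to the time of the next scheduled event, so weeks of simulated operation cost only as much computation as the events themselves. A minimal, hypothetical Python sketch of next-event time advance (not how ProModel is implemented internally):

```python
import heapq

def run(events, horizon):
    """Next-event time advance: the clock jumps straight to each event,
    so long stretches of idle simulated time cost nothing to process."""
    clock = 0.0
    log = []
    heapq.heapify(events)                # (time, name) pairs, earliest first
    while events and events[0][0] <= horizon:
        clock, name = heapq.heappop(events)
        log.append((clock, name))        # handle the event at its scheduled time
    return clock, log

# Hypothetical schedule: three events spread over 10,000 simulated minutes.
clock, log = run([(9500.0, "machine repaired"), (3.2, "part arrives"),
                  (120.0, "shift change")], horizon=10_000)
print(f"final clock: {clock} simulated minutes after {len(log)} events")
```

Three loop iterations carry the clock across almost a week of simulated time, which is why long planning horizons can be evaluated in seconds.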

Because simulation accounts for interdependencies and variation, it provides insights into
the complex dynamics of a system that cannot be obtained using other analysis techniques.



Simulation gives systems planners unlimited freedom to try out different ideas for improve-
ment, risk free—with virtually no cost, no waste of time, and no disruption to the current
system. Furthermore, the results are both visual and quantitative with performance statistics
automatically reported on all measures of interest.
Even if no problems are found when analyzing the output of simulation, the exercise of
developing a model is beneficial in that it forces one to think through the operational details
of the process. Simulation can work with inaccurate information, but it cannot work with
incomplete information. Often solutions present themselves as the model is built—before any
simulation run is made. It is a human tendency to ignore the operational details of a design
or plan until the implementation phase, when it is too late for decisions to have a significant
impact. As the philosopher Alfred North Whitehead observed, “We think in generalities; we
live detail” (Auden and Kronenberger 1964). System planners often gloss over the details of
how a system will operate and then get tripped up during implementation by all the loose
ends. The expression “the devil is in the details” has definite application to systems planning.
Simulation forces decisions on critical details so they are not left to chance or to the last
minute when it may be too late.
Simulation promotes a try-it-and-see attitude that stimulates innovation and encourages
thinking “outside the box.” It helps one get into the system with sticks and beat the bushes to
flush out problems and find solutions. It also puts an end to fruitless debates over what solution
will work best and by how much. Simulation takes the emotion out of the decision-making
process by providing objective evidence that is difficult to refute.

1.4 DOING SIMULATION


Simulation is nearly always performed as part of a larger process of system design or process
improvement. A design problem presents itself or a need for improvement exists. Alternative
solutions are generated and evaluated, and the best solution is selected and implemented.
Simulation comes into play during the evaluation phase. First, a model is developed for an
alternative solution. As the model is run, it is put into operation for the period of interest.
Performance statistics (utilization, processing time, and so on) are gathered and reported at
the end of the run. Usually several replications (independent runs) of the simulation are made.
Averages and variances across the replications are calculated to provide statistical estimates
of model performance. Through an iterative process of modeling, simulation, and analysis,
alternative configurations and operating policies can be tested to determine which solution
works the best.
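The replication procedure just described can be sketched in a few lines. The hypothetical Python example below (the single-server queue logic and all parameter values are illustrative assumptions, not taken from the text) makes ten independent runs and summarizes average waiting time across them:

```python
import random
import statistics

def replicate(seed, n_customers=1000):
    """One independent replication of a single-server queue sketch."""
    rng = random.Random(seed)            # separate random stream per replication
    clock = server_free = 0.0
    waits = []
    for _ in range(n_customers):
        clock += rng.expovariate(1.0)    # arrivals: about 1 per minute
        start = max(clock, server_free)  # wait whenever the server is busy
        waits.append(start - clock)
        server_free = start + rng.expovariate(1.25)  # service: 0.8 min average
    return statistics.mean(waits)        # performance statistic for this run

results = [replicate(seed) for seed in range(10)]  # 10 independent replications
avg = statistics.mean(results)
var = statistics.variance(results)       # sample variance across replications
print(f"mean waiting time across replications: {avg:.2f} min (variance {var:.3f})")
```

The average and variance across the replications are what support statistical statements about model performance, rather than the outcome of any single run.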
Simulation is essentially an experimentation tool in which a computer model of a new or
existing system is created for the purpose of conducting experiments. The model acts as a
surrogate for the actual or real-world system. Knowledge gained from experimenting on the
model can be transferred to the real system. When we speak of doing simulation, we are talking
about “the process of designing a model of a real system and conducting experiments with



this model” (Shannon 1998). Conducting experiments on a model reduces the time, cost, and
disruption of experimenting on the actual system. In this respect, simulation can be thought
of as a virtual prototyping tool for demonstrating proof of concept.
The procedure for doing simulation follows the scientific method of (1) formulating a
hypothesis, (2) setting up an experiment, (3) testing the hypothesis through experimentation,
and (4) drawing conclusions about the validity of the hypothesis. In simulation, we formulate
a hypothesis about what design or operating policies work best.
We then set up an experiment in the form of a simulation model to test the hypothesis.
With the model, we conduct multiple replications of the experiment or simulation. Finally,
we analyze the simulation results and draw conclusions about our hypothesis. If our
hypothesis was correct, we can confidently move ahead in making the design or operational
changes (assuming time and other implementation constraints are satisfied). As shown in
Figure 1.2, this process is repeated until we are satisfied with the results.

By now it should be obvious that simulation itself is not a solution tool but rather an
evaluation tool. It describes how a defined system will behave; it does not prescribe how
it should be designed. Simulation does not compensate for one’s ignorance of how a system
is supposed to operate. Neither does it excuse one from being careful and responsible in
the handling of input data and the interpretation of output results. Rather than being
perceived as a substitute for thinking, simulation should be viewed as an extension of the
mind that enables one to understand the complex dynamics of a system.

Figure 1.2   The process of simulation experimentation. (Flowchart: Start → Formulate a
hypothesis → Develop a simulation model → Run simulation experiment → Hypothesis correct?
No: return to Formulate a hypothesis; Yes: End.)

1.5 USE OF SIMULATION


Simulation began to be used in commercial applications in the 1960s. Initial models were
usually programmed in FORTRAN and often consisted of thousands of lines of code. Not only
was model building an arduous task, but extensive debugging was required before models ran
correctly. Models frequently took a year or more to build and debug so that, unfortunately,
useful results were not obtained until after a decision and monetary commitment had already
been made. Lengthy simulations were run in batch mode on expensive mainframe computers
where CPU time was at a premium. Long development cycles prohibited major changes from
being made once a model was built.
Only in the last couple of decades has simulation gained popularity as a decision-making
tool in manufacturing and service industries. Much of the growth in the use of simulation is
due to the increased availability and ease of use of simulation software that runs on standard



PCs. For many companies, simulation has become a standard practice when a new facility is
being planned or a process change is being evaluated. Simulation is to systems planners what
spreadsheet software has become to financial planners.
The primary use of simulation continues to be in manufacturing and logistics, which include
warehousing and distribution systems. These areas tend to have clearly defined relationships
and formalized procedures that are well suited to simulation modeling. They are also the
systems that stand to benefit the most from such an analysis tool since capital investments
are so high and changes are so disruptive. In the service sector, healthcare systems are also a
prime candidate for simulation. Recent trends to standardize and systematize other business
processes such as order processing, invoicing, and customer support are boosting the applica-
tion of simulation in these areas as well. It has been observed that 80 percent of all business
processes are repetitive and can benefit from the same analysis techniques used to improve
manufacturing systems (Harrington 1991). With this being the case, the use of simulation in
designing and improving business processes of every kind will likely continue to grow.
While the primary use of simulation is in decision support, it is by no means limited to
applications requiring a decision. An increasing use of simulation is in communication and
visualization. Modern simulation software incorporates visual animation that stimulates interest
in the model and effectively communicates complex system dynamics. A proposal for a new
system design can be sold much easier if it can be shown how it will operate.
On a smaller scale, simulation is being used to provide interactive, computer-based training
in which a management trainee is given the opportunity to practice decision-making skills
by interacting with the model during the simulation. It is also being used in real-time control
applications where the model interacts with the real system to monitor progress and provide
master control. The power of simulation to capture system dynamics both visually and func-
tionally opens numerous opportunities for its use in an integrated environment.
Since the primary use of simulation is in decision support, most of our discussion will focus
on the use of simulation to make system design and operational decisions. As a decision support
tool, simulation has been used to help plan and make improvements in many areas of both
manufacturing and service industries. Typical applications of simulation include:

• Work-flow planning
• Capacity planning
• Cycle time reduction
• Staff and resource planning
• Work prioritization
• Bottleneck analysis
• Quality improvement
• Cost reduction
• Inventory reduction
• Throughput analysis
• Productivity improvement
• Layout analysis
• Line balancing
• Batch size optimization
• Production scheduling
• Resource scheduling
• Maintenance scheduling
• Control system design



1.6 WHEN SIMULATION IS APPROPRIATE
Not all system problems that could be solved with the aid of simulation should be solved using
simulation. It is important to select the right tool for the task. For some problems, simulation
may be overkill—like using a shotgun to kill a fly. Simulation has certain limitations that one
should be aware of before making a decision to apply it in a given situation. It is not a pana-
cea for all system-related problems and should be used only if it is appropriate. As a general
guideline, simulation is appropriate if the following criteria hold true:

• An operational (logical or quantitative) decision is being made.
• The process being analyzed is well defined and repetitive.
• Activities and events are interdependent and variable.
• The cost impact of the decision is greater than the cost of doing the simulation.
• The cost to experiment on the actual system is greater than the cost of simulation.

Decisions should be of an operational nature. Perhaps the most significant limitation of sim-
ulation is its restriction to the operational issues associated with systems planning in which
a logical or quantitative solution is being sought. It is not very useful in solving qualitative
problems such as those involving technical or sociological issues. For example, it cannot tell
you how to improve machine reliability or how to motivate workers to do a better job (although
it can assess the impact that a given level of reliability or personal performance can have on
overall system performance). Qualitative issues such as these are better addressed using other
engineering and behavioral science techniques.
Processes should be well defined and repetitive. Simulation is useful only if the process being
modeled is well structured and repetitive. If the process does not follow a logical sequence
and adhere to defined rules, it may be difficult to model. Simulation applies only if you can
describe how the process operates. This does not mean that there can be no uncertainty in the
system. If random behavior can be described using probability expressions and distributions,
they can be simulated. It is only when it is not even possible to make reasonable assumptions
of how a system operates (because either no information is available, or behavior is totally
erratic) that simulation (or any other analysis tool for that matter) becomes useless. Likewise,
one-time projects or processes that are never repeated the same way twice are poor candi-
dates for simulation. If the scenario you are modeling is likely never going to happen again, it
is of little benefit to do a simulation.
Activities and events should be interdependent and variable. A system may have lots of activ-
ities, but if they never interfere with each other or are deterministic (that is, they have no
variation), then using simulation is probably unnecessary. It is not the number of activities that
makes a system difficult to analyze. It is the number of interdependent, random activities. The
effect of simple interdependencies is easy to predict if there is no variability in the activities.
Determining the flow rate for a system consisting of ten processing activities is very straight-
forward if all activity times are constant and activities are never interrupted. Likewise, random
activities that operate independently of each other are usually easy to analyze. For example,



ten machines operating in isolation from each other can be expected to produce at a rate that
is based on the average cycle time of each machine less any anticipated downtime. It is the
combination of interdependencies and random behavior that really produces the unpredict-
able results. Simpler analytical methods such as mathematical calculations and spreadsheet
software become less adequate as the number of activities that are both interdependent and
random increases. For this reason, simulation is primarily suited to systems involving both
interdependencies and variability.
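This interaction between interdependence and variability is easy to demonstrate numerically. In the hypothetical Python sketch below (an illustrative model, not from the text), two stations in series with no buffer between them process parts whose times average one minute in both scenarios. With constant times the line produces about one part per minute; random times with the very same mean cause blocking and starving that noticeably cut throughput:

```python
import random

def line_throughput(sample_time, n_parts=20_000, seed=7):
    """Two stations in series with no intermediate buffer: a finished part
    stays on station 1 (blocking it) until station 2 is free to take it."""
    rng = random.Random(seed)
    handoff = 0.0  # time the current part moves from station 1 to station 2
    f2 = 0.0       # time station 2 finishes its current part
    for _ in range(n_parts):
        f1 = handoff + sample_time(rng)  # station 1 starts after its last handoff
        handoff = max(f1, f2)            # part waits until station 2 is free
        f2 = handoff + sample_time(rng)
    return n_parts / f2                  # long-run output rate (parts per minute)

det = line_throughput(lambda rng: 1.0)                   # constant 1-minute tasks
var = line_throughput(lambda rng: rng.expovariate(1.0))  # random, same 1-minute mean
print(f"deterministic: {det:.3f} parts/min, variable: {var:.3f} parts/min")
```

The deterministic case is trivially predictable by hand; the variable case is not, because each station's random delays propagate to the other through blocking. That coupling is precisely what simulation captures and simpler calculations miss.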
The cost impact of the decision should be greater than the cost of doing the simulation. Some-
times the impact of the decision itself is so insignificant that it does not warrant the time and
effort to conduct a simulation. Suppose, for example, you are trying to decide whether a worker
should repair rejects as they occur or wait until four or five accumulate before making repairs.
If you are certain that the next downstream activity is relatively insensitive to whether repairs
are done sooner rather than later, the decision becomes inconsequential, and simulation is a
wasted effort.
The cost to experiment on the actual system should be greater than the cost of simulation. While
simulation avoids the time delay and cost associated with experimenting on the real system,
in some situations it may be quicker and more economical to experiment on the real system.
For example, the decision in a customer mailing process of whether to seal envelopes before
or after they are addressed can easily be made by simply trying each method and compar-
ing the results. The rule of thumb here is that if a question can be answered through direct
experimentation quickly, inexpensively, and with minimal impact to the current operation,
then do not use simulation. Experimenting on the actual system also eliminates some of the
drawbacks associated with simulation, such as proving model validity.
There may be other situations where simulation is appropriate independent of the criteria
just listed (see Banks and Gibson 1997). This is certainly true in the case of models built purely
for visualization purposes. If you are trying to sell a system design or simply communicate
how a system works, a realistic animation created using simulation can be very useful, even
though nonbeneficial from an analysis point of view.

1.7 QUALIFICATIONS FOR DOING SIMULATION


Many individuals are reluctant to use simulation because they feel unqualified. Certainly, some
training is required to use simulation, but it does not mean that only statisticians or operations
research specialists can learn how to use it. Decision support tools are always more effective
when they involve the decision maker, especially when the decision maker is also the domain
expert or person who is most familiar with the design and operation of the system. The
process owner or manager, for example, is usually intimately familiar with the intricacies and
idiosyncrasies of the system and is in the best position to know what elements to include in the
model and be able to recommend alternative design solutions. When performing a simulation,
often improvements suggest themselves in the very activity of building the model that the



decision maker might never discover if someone else is doing the modeling. This reinforces
the argument that the decision maker should be heavily involved in, if not actually conducting,
the simulation project.
To make simulation more accessible to nonsimulation experts, products have been devel-
oped that can be used at a basic level with very little training. Unfortunately, there is always
a potential danger that a tool will be used in a way that exceeds one’s skill level. While simu-
lation continues to become more user-friendly, this does not absolve the user from acquiring
the needed skills to make intelligent use of it. Many aspects of simulation will continue to
require some training. Hoover and Perry (1989) note, “The subtleties and nuances of model
validation and output analysis have not yet been reduced to such a level of rote that they can
be completely embodied in simulation software.”
Modelers should be aware of their own inabilities in dealing with the statistical issues
associated with simulation. Such awareness, however, should not prevent one from using
simulation within the realm of one’s expertise. There are both a basic as well as an advanced
level at which simulation can be beneficially used. Rough-cut modeling to gain fundamental
insights, for example, can be achieved with only a rudimentary understanding of simulation.
One need not have extensive simulation training to go after the low-hanging fruit. Simulation
follows the 80–20 rule, where 80 percent of the benefit can be obtained from knowing only
20 percent of the science involved (just make sure you know the right 20 percent). It is not
until more precise analysis is required that additional statistical training and knowledge of
experimental design are needed.
To reap the greatest benefits from simulation, a certain degree of knowledge and skill in
the following areas is useful:

• Project management
• Communication
• Systems engineering
• Statistical analysis and design of experiments
• Modeling principles and concepts
• Basic programming and computer skills
• Training on one or more simulation products
• Familiarity with the system being investigated

Experience has shown that some people learn simulation more rapidly and become more
adept at it than others. People who are good abstract thinkers yet also pay close attention to
detail seem to be the best suited for doing simulation. Such individuals can see the forest while
keeping an eye on the trees (these are people who tend to be good at putting together one-
thousand-piece puzzles). They can quickly scope a project, gather the pertinent data, and get
a useful model up and running without lots of starts and stops. A good modeler is somewhat
of a sleuth, eager yet methodical and discriminating in piecing together all the evidence that
will help put the model pieces together.



If short on time, talent, resources, or interest, the decision maker need not despair. Plenty
of consultants who are professionally trained and experienced can provide simulation ser-
vices. A competitive bid will help get the best price, but one should be sure that the individual
assigned to the project has good credentials. If the use of simulation is only occasional, relying
on a consultant may be the preferred approach.

1.8 CONDUCTING A SIMULATION STUDY


Once a suitable application has been selected and appropriate tools and trained personnel
are in place, the simulation study can begin. Simulation is much more than building and running
a model of the process. Successful simulation projects are well planned and coordinated. While
there are no strict rules on how to conduct a simulation project, the following steps are gen-
erally recommended:

Step 1: Define objective and plan the study. Define the purpose of the simulation
project and what the scope of the project will be. A project plan needs to be developed
to determine the resources, time, and budget requirements for carrying out the project.

Step 2: Collect and analyze system data. Identify, gather, and analyze the data defining
the system to be modeled. This step results in a conceptual model and a data document
on which all can agree.
Step 3: Build the model. Develop a simulation model of the system.

Step 4: Validate the model. Debug the model and make sure it is a credible representation
of the real system.

Step 5: Conduct experiments. Run the simulation for each of the scenarios to be evaluated
and analyze the results.

Step 6: Present the results. Present the findings and make recommendations so that an
informed decision can be made.

Each step need not be completed in its entirety before moving to the next step. The
procedure for doing a simulation is an iterative one in which activities are refined and
sometimes redefined with each iteration. The decision to push toward further refinement
should be dictated by the objectives and constraints of the study as well as by sensitivity
analysis, which determines whether additional refinement will yield meaningful results.
Even after the results are presented, there are often requests to conduct additional
experiments. Figure 1.3 illustrates this iterative process.

Figure 1.3   Iterative nature of simulation. (Flowchart: Define objective and plan study →
Collect and analyze system data → Build model → Validate model → Conduct experiments →
Present results.)



Here we will briefly look at defining the objective and planning the study, which is the
first step in a simulation study. The remaining steps will be discussed at length in subsequent
chapters.

1.8.1 Defining the Objective


The objective of a simulation defines the purpose or reason for conducting the simulation
study. It should be realistic and achievable, given the time and resource constraints of the
study. Simulation objectives can be grouped into the following general categories:

• Performance analysis—What is the all-around performance of the system in terms of
resource utilization, flow time, output rate, and so on?
• Capacity/constraint analysis—When pushed to the maximum, what is the processing or
production capacity of the system and where are the bottlenecks?
• Configuration comparison—How well does one system or operational configuration meet
performance objectives compared to another?
• Optimization—What settings for each decision variable best achieve desired perfor-
mance goals?
• Sensitivity analysis—Which decision variables are the most influential on performance
measures, and how influential are they?
• Visualization—How can the system operation be most effectively visualized?

Following is a list of sample design and operational questions that simulation can help
answer. They are intended as examples of specific objectives that might be defined for a
simulation study.

1. How many operating personnel are needed to meet required production or service
levels?
2. What level of automation is the most cost-effective?
3. How many machines, tools, fixtures, or containers are needed to meet throughput
requirements?
4. What is the least-cost method of material handling or transportation that meets pro-
cessing requirements?
5. What are the optimum number and size of waiting areas, storage areas, queues, and
buffers?
6. Where are the bottlenecks in the system, and how can they be eliminated?
7. What is the best way to route material, customers, or calls through the system?
8. What is the best way to allocate personnel to specific tasks?
9. How much raw material and work-in-process inventory should be maintained?
10. What is the best production control method (Kanban, JIT, Lean, etc.)?

When the goal is to analyze some aspect of system performance, the tendency is to think
in terms of the mean or expected value of the performance metric. For example, we are



frequently interested in the average contents of a queue or the average utilization of a resource.
There are other metrics that may have equal or even greater meaning that can be obtained
from a simulation study. For example, we might be interested in variation as a metric, such as
the standard deviation in waiting times. Extreme values can also be informative, such as the
minimum and maximum number of contents in a storage area. We might also be interested
in a percentile such as the percentage of time that the utilization of a machine is less than a
particular value, say, 80 percent. While frequently we speak of designing systems to be able
to handle peak periods, it often makes more sense to design for a value above which values
only occur less than 5 or 10 percent of the time. It is more economical, for example, to design
a staging area on a shipping dock based on 90 percent of peak time usage rather than based
on the highest usage during peak time. Sometimes a single measure is not as descriptive as a
trend or pattern of performance. Perhaps a measure has increasing and decreasing periods,
such as the activity in a restaurant. In these situations, a detailed time series report would be
the most meaningful.
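
All of these metrics are straightforward to extract from simulation output. A minimal sketch, using fabricated waiting times as a stand-in for data collected from a model:

```python
import random
import statistics

# Fabricated stand-in for waiting times (in minutes) collected from a simulation.
rng = random.Random(7)
waits = [rng.expovariate(1 / 4.0) for _ in range(10_000)]  # mean of about 4 minutes

mean_wait = statistics.mean(waits)
std_wait = statistics.stdev(waits)            # variation, not just the average
lo, hi = min(waits), max(waits)               # extreme values
p90 = statistics.quantiles(waits, n=10)[-1]   # 90th percentile: a design target
frac_under_8 = sum(w < 8 for w in waits) / len(waits)  # share of waits under 8 minutes

print(f"mean={mean_wait:.2f}  sd={std_wait:.2f}  min={lo:.2f}  max={hi:.2f}")
print(f"90th percentile={p90:.2f}  fraction under 8 min={frac_under_8:.2%}")
```

Designing for 90 percent of peak usage rather than the absolute peak, as suggested above, corresponds to sizing against a value like `p90` rather than the maximum.
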
While well-defined and clearly stated objectives are important to guide the simulation
effort, they should not restrict the simulation or inhibit creativity. Michael Schrage (1999)
observes that “the real value of a model or simulation may stem less from its ability to test
a hypothesis than from its power to generate useful surprise. Louis Pasteur once remarked
that ‘chance favors the prepared mind.’ It holds equally true that chance favors the prepared
prototype: models and simulations can and should be media to create and capture surprise
and serendipity … The challenge is to devise transparent models that also make people shake
their heads and say ‘Wow!’” The right “experts” can be “hyper vulnerable to surprise but well
situated to turn surprise to their advantage. That is why Alexander Fleming recognized the
importance of a mold on an agar plate and discovered penicillin.” Finally, he says, “A prototype
should be an invitation to play. You know you have a successful prototype when people who
see it make useful suggestions about how it can be improved.”

1.8.2 Planning the Study


With a realistic, meaningful, and well-defined objective established, a scope of work and
schedule can be developed for achieving the stated objective. The scope of work is important
for guiding the study as well as providing a specification of the work to be done upon which
all can agree. The scope is essentially a project specification that helps set expectations by
clarifying to others exactly what the simulation will include and exclude. Such a specification
is especially important if an outside consultant is performing the simulation so that there is
mutual understanding of the deliverables required.
An important part of the scope is a specification of the models that will be built. When
evaluating improvements to an existing system, it is often desirable to model the current system
first. This is called an “as-is” model. Results from the as-is model are statistically compared with
outputs of the real-world system to validate the simulation model. This as-is model can then be
used as a benchmark or baseline to compare the results of "to-be" models. For reengineering
or process improvement studies, this two-phase modeling approach is recommended. For
entirely new facilities or processes, there will be no as-is model. There may, however, be
several to-be models to compare.
To ensure that the scope is complete and the schedule is realistic, a determination should
be made of:

• The models to be built and experiments to be made.
• The software tools and personnel that will be used to build the models.
• Who will be responsible for gathering the data for building the models.
• How the models will be verified and validated.
• How the results will be presented.

Once these issues have been settled, a project schedule can be developed showing each
of the tasks to be performed and the time to complete each task. Remember to include
sufficient time for documenting the model and adding any final touches to the animation for
presentation purposes. Any additional resources, activities (travel, etc.) and their associated
costs should also be identified for budgeting purposes.

1.9 ECONOMIC JUSTIFICATION OF SIMULATION


Cost is always an important issue when considering the use of any software tool, and simulation
is no exception. Simulation should not be used if the cost exceeds the expected benefits. This
means that both the costs and the benefits should be carefully assessed. The use of simula-
tion is often prematurely dismissed due to the failure to recognize the potential benefits and
savings it can produce. Much of the reluctance in using simulation stems from the mistaken
notion that simulation is costly and very time-consuming. This perception is shortsighted and
ignores the fact that in the long run simulation usually saves much more time and cost than
it consumes. It is true that the initial investment, including training and startup costs, may be
between $10,000 and $30,000 (simulation products themselves generally range between
$1,000 and $20,000). However, this cost is often recovered after the first one or two projects.
The ongoing expense of using simulation for individual projects is estimated to be between
1 and 3 percent of the total project cost (Glenney and Mackulak 1985). With respect to the
time commitment involved in doing simulation, much of the effort that goes into building the
model is in arriving at a clear definition of how the system operates, which needs to be done
anyway. With the advanced modeling tools that are now available, the actual model develop-
ment and running of simulations take only a small fraction (often less than 5 percent) of the
overall system design time.
Savings from simulation are realized by identifying and eliminating problems and inefficien-
cies that would have gone unnoticed until system implementation. Cost is also reduced by
eliminating overdesign and removing excessive safety factors that are added when performance
projections are uncertain. By identifying and eliminating unnecessary capital investments, and
discovering and correcting operating inefficiencies, it is not uncommon for companies to report
hundreds of thousands of dollars in savings on a single project using simulation. The return on
investment (ROI) for simulation often exceeds 1,000 percent, with payback periods frequently
being only a few months or the time it takes to complete a simulation project.
One of the difficulties in developing an economic justification for simulation is that the
savings it will produce are usually not known until after it has been used. Most
applications in which simulation has been used have resulted in savings that, had the savings
been known in advance, would have looked very good in an ROI or payback analysis.
One way to assess in advance the economic benefit of simulation is to assess the risk of
making poor design and operational decisions. One need only ask what the potential cost
would be if a misjudgment in systems planning were to occur. Suppose, for example, that a
decision is made to add another machine to solve a capacity problem in a production or service
system. What are the cost and probability associated with this being the wrong decision? If the
cost associated with a wrong decision is $100,000 and the decision maker is only 70 percent
confident that the decision is correct, then there is a 30 percent chance of incurring a cost of
$100,000. This results in a probable cost of $30,000 (.3 × $100,000). Using this approach,
many decision makers recognize that they cannot afford not to use simulation because the
risk associated with making the wrong decision is too high.
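
This risk assessment is a simple expected-value calculation, sketched below with the figures from the example:

```python
def expected_loss(cost_if_wrong, confidence):
    """Probable cost of a decision: the probability of being wrong
    times the cost incurred if the decision turns out to be wrong."""
    return (1 - confidence) * cost_if_wrong

# 70 percent confident in a decision whose misjudgment would cost $100,000.
risk = expected_loss(100_000, 0.70)
print(f"probable cost: ${risk:,.0f}")   # prints: probable cost: $30,000
```

If a simulation study costs less than this probable loss and raises the decision maker's confidence, the study pays for itself in reduced risk.
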
Tying the benefits of simulation to management and organizational goals also provides
justification for its use. For example, a company committed to continuous improvement or,
more specifically, to lead time or cost reduction can be sold on simulation if it can be shown
to be historically effective in these areas. Simulation has gained the reputation as a best prac-
tice for helping companies achieve organizational goals. Companies that profess to be serious
about performance improvement will invest in simulation if they believe it can help them
achieve their goals.
The real savings from simulation come from allowing designers to make mistakes and work
out design errors on the model rather than on the actual system. The concept of reducing
costs through working out problems in the design phase rather than after a system has been
implemented is best illustrated by the rule of tens. This principle states that the cost to correct
a problem increases by a factor of ten for every design stage through which it passes without
being detected (see Figure 1.4).
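
The rule of tens translates directly into numbers. In this toy illustration the $100 base cost is an assumption; only the tenfold escalation per stage comes from the rule itself:

```python
# Rule of tens: correcting a problem costs ten times more at each
# later stage of system development in which it is detected.
stages = ["concept", "design", "installation", "operation"]
base_cost = 100  # assumed cost to correct the problem at the concept stage

for i, stage in enumerate(stages):
    print(f"{stage:>12}: ${base_cost * 10 ** i:,}")
```

A problem that costs $100 to fix on the drawing board thus costs $100,000 to fix in a running system, which is the economic case for working out design errors on a model first.
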
Figure 1.4 Cost of making changes at subsequent stages of system development.

Simulation helps avoid many of the downstream costs associated with poor
decisions that are made up front. Figure 1.5 illustrates how the cumulative cost resulting
from systems designed using simulation can compare with the cost of designing and operating
systems without the use of simulation. Note that while the short-term cost may be slightly
higher due to the added labor and software costs associated with simulation, the long-term
costs associated with capital investments and system operation are considerably lower due to
better efficiencies realized through simulation. Dismissing the use of simulation based on sticker
price is myopic and shows a lack of understanding of the long-term savings that come from
having well-designed, efficiently operating systems.

Figure 1.5 Comparison of cumulative system costs with and without simulation.
Many examples can be cited to show how simulation has been used to avoid costly errors in
the startup of a new system. Simulation prevented an unnecessary expenditure when a Fortune
500 company was designing a facility for producing and storing subassemblies and needed
to determine the number of containers required for holding the subassemblies. It was initially
felt that 3,000 containers were needed until a simulation study showed that throughput did
not improve significantly when the number of containers was increased from 2,250 to 3,000.
By purchasing 2,250 containers instead of 3,000, a savings of $528,375 was expected in the
first year, with annual savings thereafter of over $200,000 due to the savings in floor space
and storage resulting from having 750 fewer containers (Law and McComas 1988).
Even if dramatic savings are not realized each time a model is built, simulation at least
inspires confidence that a particular system design is capable of meeting required performance
objectives and thus minimizes the risk often associated with new startups. The economic
benefit associated with instilling confidence was evidenced when an entrepreneur, who was
attempting to secure bank financing to start a blanket factory, used a simulation model to show
the feasibility of the proposed factory. Based on the processing times and equipment lists
supplied by industry experts, the model showed that the output projections in the business
plan were well within the capability of the proposed facility. Although unfamiliar with the
blanket business, bank officials felt more secure in agreeing to support the venture (Bateman
et al. 1997).
Often simulation can help improve productivity by exposing ways of making better use of
existing assets. By looking at a system holistically, long-standing problems such as bottlenecks,
redundancies, and inefficiencies that previously went unnoticed start to become more apparent
and can be eliminated. “The trick is to find waste, or muda,” advises Shingo (1992); “after all,
the most damaging kind of waste is the waste we do not recognize.” Consider the following
actual examples where simulation helped uncover and eliminate wasteful practices:

• GE Nuclear Energy was seeking ways to improve productivity without investing large
amounts of capital. Using simulation, the company was able to increase the output of
highly specialized reactor parts by 80 percent. The cycle time required for production
of each part was reduced by an average of 50 percent. These results were obtained by
running a series of models, each one solving production problems highlighted by the
previous model (Bateman et al. 1997).
• A large manufacturing company with stamping plants located throughout the world
produced stamped aluminum and brass parts on order according to customer specifica-
tions. Each plant had from 20 to 50 stamping presses that were utilized anywhere from
20 to 85 percent. A simulation study was conducted to experiment with possible ways
of increasing capacity utilization. As a result of the study, machine utilization improved
from an average of 37 to 60 percent (Hancock, Dissen, and Merten 1977).
• A diagnostic radiology department in a community hospital was modeled to evaluate
patient and staff scheduling, and to assist in expansion planning over the next five years.
Analysis using the simulation model enabled improvements to be discovered in operating
procedures that precluded the necessity for any major expansions in department size
(Perry and Baum 1976).

In each of these examples, significant productivity improvements were realized without the
need for making major investments. The improvements came through finding ways to operate
more efficiently and utilize existing resources more effectively. These capacity improvement
opportunities were brought to light using simulation.

1.10 SOURCES OF INFORMATION ON SIMULATION


Simulation is a rapidly growing technology. While the basic science and theory remain the
same, new and better software is continually being developed to make simulation more pow-
erful and easier to use. It will require ongoing education for those using simulation to stay
abreast of these new developments. There are many sources of information to which one
can turn to learn the latest developments in simulation technology. Some of the sources that
are available include:

• Conferences and workshops sponsored by vendors and professional societies (such as
the Winter Simulation Conference and the IIE Conference).
• Professional magazines and journals (IIE Solutions, International Journal of Modeling and
Simulation, etc.).
• Websites of vendors and professional societies (www.promodel.com, www.scs.org, etc.).
• Demonstrations and tutorials provided by vendors.
• Textbooks (like this one).

1.11 HOW TO USE THIS BOOK


This book is divided into two parts. Part I contains chapters describing the science and practice
of simulation. The emphasis is deliberately oriented more toward the practice than the science.
Simulation is a powerful decision support tool that has a broad range of applications. While a
fundamental understanding of how simulation works is presented, the aim has been to focus
more on how to use simulation to solve real-world problems. Review questions at the end of
each chapter help reinforce the concepts presented.
Part II contains ProModel lab exercises that help develop simulation skills. ProModel is a
simulation package designed specifically for ease of use, yet it provides the flexibility to model
any discrete event or continuous flow process. It is like other simulation products in that it
provides a set of basic modeling constructs and a language for defining the logical decisions
that are made in a system. Basic modeling objects in ProModel include entities (the objects
being processed), locations (the places where processing occurs), resources (the agents used
to process the entities), and paths (the course of travel for entities and resources in moving
between locations such as aisles or conveyors). Logical behavior such as the way entities
arrive and their routings can be defined with little, if any, programming using the data entry
tables that are provided. ProModel is used by thousands of professionals in manufacturing
and service-related industries and is taught in hundreds of institutions of higher learning.
It is recommended that students be assigned at least one simulation project during the
course. Preferably this is a project performed for a nearby company or institution so it will
be meaningful. Student projects should be selected early in the course so that data gathering
can begin and the project can be completed within the allotted time. The chapters in Part I
are sequenced to parallel an actual simulation project.

1.12 SUMMARY

Businesses today face the challenge of quickly designing and implementing complex produc-
tion and service systems that are capable of meeting growing demands for quality, delivery,
affordability, and service. With recent advances in computing and software technology, simu-
lation tools are now available to help meet this challenge. Simulation is a powerful technology
that is being used with increasing frequency to improve system performance by providing a
way to make better design and management decisions. When used properly, simulation can
reduce the risks associated with starting up a new operation or making improvements to
existing operations.

Because simulation accounts for interdependencies and variability, it provides insights that
cannot be obtained any other way. Where important system decisions of an operational or
design nature are being made, simulation is an invaluable decision-making tool. Its usefulness
increases as
variability and interdependency increase and the importance of the decision becomes greater.
Lastly, simulation makes designing systems fun! Not only can a designer try out new design
concepts to see what works best, but the visualization gives the model a realism akin to
watching an actual system in operation. Through simulation, decision makers can play what-if
games with a new system or modified process before it gets implemented. This engaging
process stimulates creative thinking and leads to good design decisions.

1.13 REVIEW QUESTIONS

1. Define simulation.
2. What reasons are there for the increased popularity of computer simulation?
3. What are two specific questions that simulation might help answer in a bank? In a
manufacturing facility? In a dental office?
4. What are three advantages that simulation has over alternative approaches to systems
design?
5. Does simulation itself optimize a system design? Explain.
6. How does simulation follow the scientific method?
7. A restaurant gets extremely busy during lunch (11:00 A.M. to 2:00 P.M.) and manage-
ment is trying to decide whether it should increase the number of servers from two to
three. What considerations would you look at to determine whether simulation should
be used to make this decision?
8. How would you develop an economic justification for using simulation?
9. Is a simulation exercise wasted if it exposes no problems in a system design? Explain.
10. A simulation run was made showing that a modeled factory could produce 130 parts
per hour. What information would you want to know about the simulation study
before placing any confidence in the results?
11. A PC board manufacturer has high work-in-process (WIP) inventories, yet machines
and equipment seem underutilized. How could simulation help solve this problem?
12. How important is a statistical background for doing simulation?
13. How can a programming background be useful in doing simulation?
14. Why are good project management and communication skills important in simulation?
15. Why should the process owner be heavily involved in a simulation project?
16. For which of the following problems would simulation likely be useful?
a. Increasing the throughput of a production line
b. Increasing the pace of a worker on an assembly line
c. Decreasing the time that patrons at an amusement park spend waiting in line
d. Determining the percentage defective from a particular machine
e. Determining where to place inspection points in a process
f. Finding the most efficient way to fill out an order form
17. Why is it important to have clearly defined objectives for a simulation that everyone
understands and can agree on?
18. For each of the following systems, define one possible simulation objective:
a. A manufacturing cell
b. A fast-food restaurant
c. An emergency room of a hospital
d. A tire distribution center
e. A public bus transportation system
f. A post office
g. A bank of elevators in a high-rise office building
h. A time-shared, multitasking computer system consisting of a server, many slave
terminals, a high-speed disk, and a processor
i. A rental car agency at a major airport

1.14 CASE STUDIES

Case Study A: United Space Alliance: Space Shuttle Thruster Motors, Supply Chain Simulation

Situation:
United Space Alliance (USA), a prime contractor to the National Aeronautics and Space Admin-
istration (NASA), was responsible for all space shuttle fleet processing operations. After each
flight, all 44 thruster motors on a returning shuttle were inspected for signs of wear, damage,
or performance issues. Suspect thrusters were removed following inspection and sent to an
offsite repair facility for cleaning, servicing, and repair. USA Logistics was required to maintain
a sufficient onsite supply of ready-to-use spare parts including thruster motors that might
be needed for upcoming shuttle flights from the Kennedy Space Center (KSC). The supply
of thruster motors at KSC was steadily declining while the offsite facility repair times for
removed thrusters grew longer. Since manufacturing new thruster motors was not feasible,
serious concerns were raised about the shrinking supply. Would the supply of
ready-to-use thruster motors be able to meet the demand of the remaining shuttle flights?
With only eighteen shuttle missions remaining, senior management at USA commissioned a
study using simulation modeling to determine whether the thruster motor supplies were likely
to cause a shuttle launch delay. If delays were likely, then management also wanted
recommendations to improve the supply chain.

Objectives:
• Predict the potential impact of thruster motor supplies on the shuttle flight manifest.
• Reduce the probability of a shuttle launch delay by improving the supply of thruster
motors.

Solution and Results:


A baseline model (Figure 1.6) was created for the remaining eighteen shuttle missions utilizing:
past failure rates for inspected thrusters, current thruster supplies and their locations, repair
facility work backlog, and historical repair times. The model was run for one hundred
replications, and the results were analyzed and summarized. The baseline model outputs
predicted that several shuttle missions were likely to be delayed if no action was taken on
the thruster supply chain. Furthermore, extended shuttle launch delays could
result in costly efforts to cannibalize parts from other shuttle orbiters if needed. Therefore,
additional scenarios were run to show the supply chain effects from implementing changes to
repair turnaround times, repair resources, and inspection criteria. As a result of the simulation
modeling predictions, the thruster motors supply chain was vastly improved within sixty days
of the presentation of model results to NASA.
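
The replication procedure described here, running the model many times and summarizing the distribution of outcomes, is standard simulation practice. The sketch below illustrates only that pattern; the spare count, failure rate, and repair logic are invented stand-ins, not the actual USA model:

```python
import random
import statistics

def one_replication(seed):
    """One run of a toy spares model: returns True if any of the
    remaining 18 missions is delayed by a thruster shortage."""
    rng = random.Random(seed)
    spares = 10                         # assumed ready-to-use spare thrusters
    for _ in range(18):                 # remaining shuttle missions
        failed = sum(rng.random() < 0.05 for _ in range(44))  # assumed failure rate
        spares -= failed                # suspect thrusters removed for repair
        spares += rng.randint(0, 2)     # assumed repairs returned between flights
        if spares < 0:
            return True                 # not enough ready thrusters: launch delay
    return False

delays = [one_replication(seed) for seed in range(100)]  # one hundred replications
p_delay = statistics.mean(delays)
print(f"Estimated probability of at least one delayed mission: {p_delay:.0%}")
```

With these assumed inputs the shortage compounds over the manifest, so most replications show a delay, mirroring the baseline conclusion that action on the supply chain was needed.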

Figure 1.6 Simulation model of space shuttle thruster motor supply chain.

Figure 1.7 Nose cap of space shuttle Atlantis.

Figure 1.7 illustrates the nose cap of the space shuttle Atlantis OV-104, showing one F3
thruster motor that failed inspection (in red) following a mission during the model run. The three
surrounding thrusters in yellow must also be removed since all four thrusters are serviced
from the same manifold. Figures 1.8 and 1.9 provide additional details on thrusters. Shuttles
Atlantis and Discovery are shown in Figures 1.10 and 1.11.

Figure 1.8 Thruster motor locations within the sides and top of a shuttle orbiter nose cap.

Figure 1.9 A Vernier thruster motor removed from a shuttle orbiter.
Figure 1.10 Space shuttle Atlantis landing at Kennedy Space
Center, October 2002.

Figure 1.11 Space shuttle Discovery launch from Kennedy Space Center, October 2007.

Questions

1. What were the objectives for using simulation at USA?


2. Why was simulation better than the other methods USA might have used to achieve
these objectives?
3. What common-sense solution was proved by using simulation?
4. What insights on the use of simulation did you gain from this case study?

Case Study B: AltaMed Health Services—Increased Exam Room Utilization Saves $250,000

Situation
One of the largest federally qualified healthcare centers in the United States, AltaMed Health
Services, has community clinics spanning both Los Angeles and Orange counties. AltaMed
employs a workforce of over sixteen hundred, offering a full continuum of care to patients in
an area with one of the highest population densities in the United States.
Image 1.3

To keep up with the growing demand for healthcare and meet the requirements of the Afford-
able Care Act, AltaMed decided to examine facility expansions, as well as the addition of
numerous new clinics to its network. It also needed to make sure that its proposed layouts
would fully support the added growth expected. However, it first wanted to determine if it
could increase its current facility capacity by better understanding patient flow. In the past,
it was common for AltaMed to simply convert existing administration space into exam rooms
to meet its growth, but this was no longer an option and the staff wanted to find better ways
of utilizing current space or provide justification for constructing new facilities.

Objectives
1. Simulate approximately thirty different clinics using one easily configurable model
template.
2. Create suitable data sets for each clinic so a reasonable representation of each clinic
could be tested.
3. Analyze room utilization rates under various potential policy changes.
4. Increase efficiencies by identifying and eliminating waste.
5. Optimize provider/patient interaction.

Solution
The AltaMed team chose its Garden Grove clinic to begin the examination of patient flow
before expanding research across the whole system. Garden Grove was in the process of
requesting the construction of additional exam rooms to relieve current patient flow
bottlenecks and accommodate future demand. AltaMed wanted to contrast the current
seventeen exam rooms with the requested twenty-four rooms to better understand the
effect on patient flow and the justification for expansion.

Figure 1.12 Baseline average room utilization for each of the seventeen exam rooms at the
Garden Grove clinic, broken down into percent of time with a patient and percent of time dirty.

Analysis
A simulation model was built and used as an analysis tool to test assumptions and system
improvement recommendations against baseline data (Figure 1.12) of current system behavior.
The model setup allowed AltaMed to simultaneously view time, system performance, and
room utilization changes by volume.
After running several scenarios on the Garden Grove facility simulation model (Figure 1.13),
the model showed that rooms were not at 100 percent capacity, with room utilization only
around 60 percent. This confirmed the team’s assumption that room space was not being
properly utilized and that the system could be reconfigured to accommodate current and
future increases in volume.

Figure 1.13 Garden Grove facility simulation model.

Results
Figure 1.14 shows the result of a scenario in which the seventeen exam rooms were unas-
signed as opposed to the company’s normal policy of preassigning exam rooms to specific
physicians. Point 1 on the chart represents the baseline situation of 60 percent room utili-
zation, allowing them to treat about 2,418 patients during the course of a year, with each
patient spending about 1.23 hours in the clinic. Point 2 on the chart shows that they could
see about 40 percent more patients (3,448) before patient length-of-stay began to increase.

Figure 1.14 Scenario results from the Garden Grove facility simulation model (seventeen exam
rooms): patients treated per year and room utilization versus patient time in system (hours).

Conclusions
The simulation model allowed the AltaMed team to see inefficiencies in its system and to work
on standardizing spaces to improve workflow. These system reconfigurations would also help
improve patient flow and overall patient satisfaction and create a more cost-efficient system
design.
AltaMed was able to save $250,000 at the Garden Grove facility by increasing room utili-
zation and eliminating the need for additional exam rooms. With this result, all other facilities
within the AltaMed system were to be tested in the same manner, creating a potential savings
of millions of dollars.

Questions

1. Why was this a good application for simulation?


2. What key elements of the study made the project successful?
3. What specific decisions were made because of the simulation study?
4. What economic benefit was able to be shown from the project?
5. What insights did you gain from this case study about the way simulation is used?

REFERENCES

Auden, Wystan Hugh, and L. Kronenberger. The Faber Book of Aphorisms. London: Faber and
Faber, 1964.
Banks, Jerry, and Randall R. Gibson. “Selecting Simulation Software.” IIE Solutions, May 1997,
pp. 29–32.
Banks, Jerry, and Randall R. Gibson. "10 Rules for Determining When Simulation Is Not
Appropriate." IIE Solutions 29, no. 9 (September 1997): 30–32.
Bateman, Robert E., Royce O. Bowden, Thomas J. Gogg, Charles R. Harrell, and Jack R. A.
Mott. System Improvement Using Simulation. Orem, UT: ProModel Corp., 1997.
Deming, W. E. Foundation for Management of Quality in the Western World. Paper read at a
meeting of the Institute of Management Sciences, Osaka, Japan, July 24, 1989.
Glenney, Neil E., and Gerald T. Mackulak. “Modeling & Simulation Provide Key to CIM Imple-
mentation Philosophy.” Industrial Engineering, May 1985, 76–94.
Hancock, Walton, R. Dissen, and A. Merten. “An Example of Simulation to Improve Plant
Productivity.” AIIE Transactions 9 (March 1977): 2–10.
Harrington, H. James. Business Process Improvement: The Breakthrough Strategy for Total Quality,
Productivity, and Competitiveness. New York: McGraw-Hill, 1991.
Hoover, Stewart V., and Ronald F. Perry. Simulation: A Problem-Solving Approach. Reading, MA:
Addison-Wesley, 1989.
Law, A. M., and M. G. McComas. “How Simulation Pays Off.” Manufacturing Engineering 100,
no. 2 (February 1988): 37–45.
Oxford American Dictionary. New York: Oxford University Press, 1980.
Perry, R. F., and R. F. Baum. “Resource Allocation and Scheduling for a Radiology Department.”
In Cost Control in Hospitals by Walton M. Hancock, Fred C. Munson, and John R. Griffith.
Ann Arbor, MI: Health Administration Press, 1976.
Schrage, Michael. Serious Play: How the World’s Best Companies Simulate to Innovate. Cambridge,
MA: Harvard Business School Press, 1999.
Schriber, T. J. “The Nature and Role of Simulation in the Design of Manufacturing Systems.” In
Simulation in CIM and Artificial Intelligence Techniques, edited by J. Retti and K. E. Wichmann,
5–8. San Diego, CA: Society for Computer Simulation, 1987.
Shannon, Robert E. “Introduction to the Art and Science of Simulation.” In Proceedings of the
1998 Winter Simulation Conference, edited by D. J. Medeiros, E. F. Watson, J. S. Carson, and
M. S. Manivannan, 7–14. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1998.
Shingo, Shigeo. The Shingo Production Management System: Improving Process Functions. Trans-
lated by Andrew P. Dillon. Cambridge, MA: Productivity Press, 1992.
Solberg, James. "Design and Analysis of Integrated Manufacturing Systems." In W. Dale Compton,
ed., 1–6. Washington, DC: National Academy Press, 1988, p. 4.
"United 747's Near Miss Sparks a Widespread Review of Pilot Skills." Wall Street Journal,
March 19, 1999, p. A1.

FURTHER READING

Harrell, Charles R., and Donald Hicks. “Simulation Software Component Architecture for
Simulation-Based Enterprise Applications.” In Proceedings of the 1998 Winter Simulation
Conference, edited by D. J. Medeiros, E. F. Watson, J. S. Carson, and M. S. Manivannan,
1717–21. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1998.
Kelton, W. D. “Statistical Issues in Simulation.” In Proceedings of the 1996 Winter Simulation
Conference, edited by J. Charnes, D. Morrice, D. Brunner, and J. Swain, 47–54. New York:
Association for Computing Machinery, 1996.
Mott, Jack, and Kerim Tumay. “Developing a Strategy for Justifying Simulation.” Industrial
Engineering, July 1992, pp. 38–42.
Rohrer, Matt, and Jerry Banks. “Required Skills of a Simulation Analyst.” IIE Solutions 30, no.
5 (May 1998): 7–23.

Figure Credits
IMG 1.1a: Copyright © 2014 Depositphotos/in8finity.
IMG 1.2a: Copyright © 2015 Depositphotos/maxkabakov.
Fig. 1.6: Copyright © by United Space Alliance.
Fig. 1.7: Copyright © by United Space Alliance.
Fig. 1.8: Copyright © by United Space Alliance.
Fig. 1.9: Copyright © by United Space Alliance.
Fig. 1.10: Source: https://science.ksc.nasa.gov/shuttle/missions/sts-112/images/medium/
KSC-02PD-1580.jpg.
Fig. 1.11: Source: https://commons.wikimedia.org/wiki/File:STS120LaunchHiRes-edit1.jpg.
IMG 1.3: Copyright © 2016 Depositphotos/K3star.
