(Ebook) Organizational Cognition: Computation and Interpretation by Theresa K. Lant, Zur Shapira ISBN 9780805833331, 0805833331 Online Version
Organizational Cognition
Computation and Interpretation
LEA's Organization and Management Series
Arthur Brief and James P. Walsh, Series Editors
Ashforth Role Transitions in Organizational Life: An Identity-Based Perspective
Beach Image Theory: Theoretical and Empirical Foundations
Garud/Karnoe Path Dependence and Creation
Lant/Shapira Organizational Cognition: Computation and Interpretation
Thompson/Levine/Messick Shared Cognition in Organizations: The Management of Knowledge
Organizational Cognition
Computation and Interpretation
Edited by
Theresa K. Lant
and
Zur Shapira
Copyright © 2001 by Lawrence Erlbaum Associates, Inc.
All rights reserved. No part of this book may be reproduced in any form, by photostat, microfilm, retrieval system, or any other means,
without prior written permission of the publisher.
Lawrence Erlbaum Associates, Inc., Publishers
10 Industrial Avenue
Mahwah, NJ 07430
CONTENTS
Part II
Knowledge and Learning in Organizational Settings

5 Shared and Unshared Transactive Knowledge in Complex Organizations: An Exploratory Study 83
Diane Rulke and Srilata Zaheer

6 Internal Dissemination of Learning from Loan Loss Crises 101
Catherine Paul-Chowdhury

7 Adapting Aspirations to Feedback: The Role of Success and Failure 125
Patrice R. Murphy, Stephen J. Mezias, and Ya Ru Chen

Commentary Motivational Preconditions and Intraorganizational Barriers to Learning in Organizational Settings 147
Kathleen M. Sutcliffe

Part III
Cognition at Work: Managerial Thinking and Decision Making

8 Defining Away Dangers: A Study in the Influences of Managerial Cognition on Information Systems 157
Michal Tamuz

9 185

Commentary More Is Not Always Better: Limits to Managerial Thinking and Decision Making 265
Janet M. Dukerich

Part IV
Social and Strategic Aspects of Cognition

12 Classifying Competition: An Empirical Study of the Cognitive Social Structure of Strategic Groups 273
Vincenza Odorici and Alessandro Lomi

13 Stakeholder Analysis as Decision Support for Project Risk Management 305
Agi Oldfield

14 Understanding Managerial Cognition: A Structurational Approach 327
Laurence Brooks, Chris Kimble, and Paul Hildreth

Commentary Strategy Is Social Cognition 345
Dennis A. Gioia

Part V
The Past and Future of Organizational Cognition Research

15 Is Janus the God of Understanding? 351
William H. Starbuck

16 New Research Directions on Organizational Cognition 367
Theresa K. Lant and Zur Shapira

Author Index 377
Subject Index 385
PREFACE
The genesis of this book was a conference held at New York University in the spring of 1998. The conference attracted a wide range of
young and established scholars from around the world. We chose a subset of the best papers, whose authors come from the United States, Canada, England, France, Italy, and India. Authors range from eminent scholars such as Jim March to doctoral students. The type
of work includes theoretical models and empirical studies, both qualitative and quantitative.
We would like to thank a number of people who helped make the conference a success and helped us with the editing of the book. First,
we are very grateful to Greg Udell at the Berkley Center for Entrepreneurial Studies for sponsoring the conference. The center's
administrative staff, Loretta Poole and Patricia Edwards, helped immensely with the organization and running of the conference, and we
are deeply indebted to them for making it such a pleasant experience. We would also like to thank the Management Department at New
York University's Stern School of Business for cosponsoring the conference. Several staff members have helped us pull the book together,
namely Berna Sifonte and Karen Angelilo. We are also most grateful to our work-study student, Emily Fernandez, who has put in endless
hours formatting, proofreading, and creating the index for the book. Of course, the book might never have been conceived without the
encouragement of Anne Duffy from LEA. We also appreciate the encouragement, editorial feedback, and friendship of the LEA
Management series editors, Art Brief and Jim Walsh.
Chapter 1
Introduction:
Foundations of Research on Cognition in Organizations
Theresa K. Lant
Zur Shapira
New York University
Research on managerial and organizational cognition has increased dramatically since the 1980s. The importance of cognition research to
management theorists is indicated by the increase in articles on cognition in the major management journals as well as the creation and
institutionalization of a cognition interest group in the major professional association of management scholars, The Academy of
Management. This increased interest in cognitive phenomena might have been influenced by research in other fields such as information systems and biology; indeed, former President George Bush proclaimed the 1990s ''the decade of the brain.'' One may be tempted to
think that this line of inquiry in management is a recent phenomenon. However, research on cognition in management goes back at least 50
years to the time Herbert Simon published the first edition of Administrative Behavior (Simon, 1947). One of the most influential
publications in the field of management, March and Simon's (1958) Organizations set the tone in arguing that decision making is a major
explanatory variable in organization theory. Reflecting on some 35 years of research in the field, March and Simon (1993) commented:
The central unifying construct of the present book is not hierarchy but decision making, and the flow of information within
organizations that instructs, informs, and supports decision making processes. The idea of "decision" can also be elusive, of course.
Defining what a decision is, when it is made, and who makes it have all, at times, turned out to be problematic. Nevertheless, the
concept seems to have served us reasonably well. (p. 3)
March and Simon (1958) viewed organizations as information processing systems consisting of embedded routines through which
information is stored and enacted. Some researchers have taken this to mean that organizations are systems that process and code
information in a computational manner. That is, the problem that organizations face is one of searching and processing relevant
information when such search is costly and decision makers are boundedly rational. Other researchers interpreted March and Simon to
mean that organizations are social entities that enact their world. Some see in these words the elements of collective mind (Garud & Porac,
1999; Sandelands & Stablein, 1987). These two views separated in the last decade into two distinct branches of cognition research in
organizations: the computational approach and the interpretive approach. The computational stream of research examines the processes by
which managers and organizations process information and make decisions. The interpretive approach investigates how meaning is
created around information in a social context.
At the individual level of analysis, a long duel has been raging between those researchers who advocate rational models and those who advocate
behavioral models to describe human decision making. Psychologists have criticized the rational model (cf. Von Neumann & Morgenstern,
1944) for not accurately describing how individuals make judgments. In contrast, they have developed a behavioral perspective on
judgment and choice. The original work initiated in the 1950s by Edwards (1954) on probability estimation, Meehl (1954) on information
integration, and Simon (1955) on heuristic search led to the emergence of a new field called behavioral decision theory. This field has
developed descriptive models of human information processing (see Kahneman, Slovic, & Tversky, 1982) and choice under risk
(Kahneman & Tversky, 1979). Although this field studies how human actors process information and make choices, it is equally
concerned with how actors form preferences and how the context or framing of situations influences their choices.
The early 20th century also saw the development of behaviorism, which focused on the relationship between stimuli and behavior and
criticized any reference to consciousness, introspection, or cognitive processes (Skinner, 1953; Watson, 1913). Many researchers since
the 1950s have questioned the generalizability of behaviorism. In response, these researchers suggested that cognition mediates the stimulus-response relationship (e.g., Neisser, 1967). Since then, a flood of work has ensued on how humans process information and how information
processing guides behavior (Fiske & Taylor, 1984; Lord & Maher, 1991; Nisbett & Ross, 1980). Subsequently, researchers found
evidence that information processing involves categorization processes, in which information is filtered by existing knowledge structures
and schemas. These knowledge structures have been called cognitive maps. These maps have been shown to influence how individuals
interpret information and make decisions.
Bruner (1990) gave an illuminating account of the relations between the computational and the interpretive approaches, in which he tells
the story of the cognitive revolution in psychology. In his words, "The cognitive revolution as originally conceived virtually required that
psychology join forces with anthropology and linguistics, philosophy and history, even with the discipline of law" (p. 3). He commented
that the efforts "were not to 'reform' behaviorism but to replace it." However, it appears that the computational approach was making
strong headway into cognitive psychology. As Bruner commented,
Very early on, for example, emphasis began shifting from meaning to information, from the construction of meaning to the
processing of information. These are profoundly different matters. The key factor in the shift was the introduction of computation
as the ruling metaphor. (p. 4)
For Bruner (1990), this was the wrong turn of events because he equated cognition with the construction of meaning and claimed that
"Information is indifferent with regard to meaning" (p. 4).
At a higher level of analysis, however, the question of how collectives process and store information and the concept of cognitive maps
proved to be problematic (Walsh, 1995). Daft and Weick (1984) viewed organizations as interpretation systems that "encode cues from
the environment, make sense of these cues using existing stocks of knowledge, and incorporate the resulting interpretations into
organizational practices and routines." Weick and Roberts (1993) suggested that interpretation goes on in the interactions among actors.
As Garud and Porac (1999) pointed out, this view of organizational cognition involves all of an organization's systems and structures. Walsh
(1995) noted that collectives can serve as a repository of organized knowledge that acts as a template for interpretation and action.
Once the question was raised about how collectives of individuals think, the concept of collective mind followed (Sandelands & Stablein,
1987). The notion of collective mind was not new. Durkheim (1895) wrote about the social origins of individual behavior. John Dewey's
(1938) words echo the same sentiment:
Experience does not go on simply inside a person. We live from birth to death in a world of persons and things which is in large
measure what it is because of what has been done and transmitted from previous human activities. (p. 39)
The essentials of the argument are that human thought, cognition, or knowledge is situated within a cultural system, including artifacts and
practices, which is itself made up of prior thoughts and knowledge. Knowledge is
embedded in these systems, which reach out across time and space, and our own thoughts are enabled and constrained by this embedded
knowledge (Pea, 1993; Vygotsky, 1929; Wundt, 1921). For the most part, however, these ideas have been soundly rejected by most
psychologists and even organization theorists. As early as 1924, Allport claimed that discussing such a concept as collective mind
leaves one in a "state of mystical confusion." Douglas (1986), in her book about how institutions think, dismissed the idea as "repugnant."
Computation or Interpretation?
Are the computational and the interpretive approaches branches of the same tree describing human cognition, or do they make assumptions that cannot be reconciled? There are some fundamental assumptions that push the two approaches away from each other. One major issue is the way reality is conceived: If reality is only socially constructed, as some interpretivists claim, then no single true reality exists. If researchers take this ontological assumption to its extreme, it would suggest that there are no real criteria against which human information processing can be compared. There would be no way of assessing the accuracy or efficacy of decisions. The emphasis in computational research on error and accuracy would be meaningless to interpretivists who believe that all criteria are socially constructed and that, therefore, none is more real or more important than any other. When we asked an interpretivist colleague for examples of criticism of the interpretivist approach, he answered, "The interpretivist approach cannot be criticized."
Researchers in the computational camp soundly disagree with this assumption. At the extreme, these researchers take the view that
cognition research should focus entirely on the "processing structures of the brain and the symbolic representations of the mind" (Norman,
1993, p. 3). Simon and others argued that all thought and behavior can be represented in symbolic models (cf. Vera & Simon, 1993).
This perspective suggests that down to the most complex social interaction, symbolic models can represent the real nature of this
interaction. Other researchers who take a situated cognition perspective argue that by attributing all cognitive activity to symbolic
processing, the "question of how people use symbols to create and communicate meaning seems to have disappeared" (Greeno & Moore,
1993, p. 51).
We believe there is a common ground that can be discovered between the two approaches. We argue that extreme positions on either
side are unlikely to produce progress in our understanding of cognitive processes in organizations. We suggest that both information
processing and meaning making are simultaneous, ongoing processes in organizations. With this book, we hope to discover some of the boundary conditions under which one
process is more prevalent than the other, as well as those instances in which they complement each other.
Our goal in editing this volume is to foster a dialogue and cross-pollination among researchers who may see their work as falling into one
or the other camp. We believe that at this point in the history of cognition research as applied to organizations, such integration is
necessary for this field to progress. To that end, we assembled chapters that illustrate both perspectives and some that have roots in both
camps. We have woven these chapters together in an attempt to allow the readers to consider the different perspectives and form their
own judgment about the overlapping versus the distinctive features of each approach.
methods can produce models that promote human dialog and exploration that does not otherwise occur in routine organizational activity.
However, Dhar does not ignore or negate the importance of interpretation. Rather, he examines the actor-machine interface in organizations and suggests a set of boundary conditions that specify when computational approaches are feasible and desirable enough to replace human judgment and when they can merely serve as support systems.
Ocasio's chapter (chap. 3, this volume) develops a theoretical perspective on how organizations think. He articulates the elements and
processes of cognition at the individual, social, and organizational level. He shows how both the computational and the interpretive
perspectives are crucial to an understanding of cognition at the organizational level of analysis. Organizations are viewed as dynamic social
systems that structure and regulate the cognition of organizational participants.
March's chapter (chap. 4, this volume) provides a rich and compelling narrative about the pursuit of intelligence in organizations. He
outlines the two critical problems in this pursuit. The first, ignorance, is essentially a problem of computation. Intelligent action requires
information and prediction. In a world where information is difficult and costly to obtain and future states are uncertain, intelligent action is
problematic. However, as Dhar (chap. 2) points out, given sufficient data, theories about cause and effect, and a well defined payoff
matrix associated with uncertain outcomes, this problem boils down to one of computation. The second problem, ambiguity, is a problem
of interpretation. To assess intelligence, one has to know what outcomes are desired and know when outcomes have been achieved. The
definition of preferences turns out to be a very sticky problem and one that, in organizations, is played out in a social domain. To make
progress in the pursuit of organizational intelligence, it is necessary to develop our knowledge about both computation and interpretation
and to work on an integration of the two perspectives. Sitkin's commentary provides a discussion of the chapters in light of the balancing
act that organizations undertake with respect to managing ignorance and ambiguity.
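March's point that, given sufficient data and a well-defined payoff matrix, the ignorance problem "boils down to one of computation" can be made concrete with a small sketch. The action names, states, probabilities, and payoffs below are invented for illustration; nothing in the chapter specifies them:

```python
# Hypothetical illustration (not from the book): once outcome probabilities
# and a payoff matrix are specified, the problem of acting intelligently
# under ignorance reduces to computing expected payoffs and picking the
# action that maximizes them.

def expected_payoffs(payoffs, probs):
    """payoffs[action][state] -> payoff; probs[state] -> probability."""
    return {
        action: sum(p * probs[state] for state, p in states.items())
        for action, states in payoffs.items()
    }

# Invented numbers for two actions under two market states.
payoffs = {
    "invest": {"boom": 100.0, "bust": -40.0},
    "hold":   {"boom": 10.0,  "bust": 5.0},
}
probs = {"boom": 0.6, "bust": 0.4}

ev = expected_payoffs(payoffs, probs)
best = max(ev, key=ev.get)  # the action with the highest expected payoff
```

The hard parts March identifies lie outside this computation: obtaining the data that justify `probs`, and agreeing on the preferences encoded in `payoffs`, which is the interpretive problem of ambiguity.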
Part II is devoted to empirical examinations of knowledge, learning, and framing in organizational settings. These studies concern the
influence of organizational structures, routines, and processes on the distribution, transmission, and interpretation of information in
organizations. Rulke and Zaheer (chap. 5, this volume) explore the cognitive maps of managers across subunits of an organization to
assess the use of learning channels and the degree and use of transactive knowledge. The study concerns the location of an organization's
self-knowledge (knowing what you know) and resource knowledge (knowing who knows what) throughout an organization and
how and through what means such knowledge is disseminated. The issue of where knowledge is located is essentially the organizational
equivalent of mental representations that guide information retrieval and processing. They find that both types of knowledge matter to
performance, thus speaking to the question of how organizations can reduce their ignorance, in March's (chap. 4, this volume) terms. Paul-
Chowdhury (chap. 6, this volume) also tackles the issue of the location and dissemination of knowledge in an organization. She reports on
an exploratory field study in three major banks where she finds that the transfer of lessons learned from performance feedback is inhibited
due to barriers such as organizational structures, promotion and reward systems, and pressures toward conformity. Leadership of bank
executives was an important mechanism for disseminating lessons through the organization.
The Murphy, Mezias, and Chen chapter (chap. 7, this volume) tackles the ambiguity issue raised by March (chap. 4) as opposed to the
ignorance question. These authors argue that the setting of goals is a decision that frames performance feedback as either positive or
negative, thus influencing the interpretation of performance information. They use data from the quarterly performance reports of a large
American corporation to examine whether framing performance feedback as success or failure relative to previous aspirations affects the adaptation of aspirations over time. They show how simple decision rules, such as how
aspirations adapt in response to feedback, can have a powerful effect on the goals that are set by organizations. Sutcliffe provides an
integrative summary of these three empirical chapters. She raises questions regarding four issues that are central to advancing theory in this
area: culture, performance pressures, learning costs, and attribution framing.
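A common form of the simple aspiration-adaptation rule the Murphy, Mezias, and Chen chapter discusses (the exact specification estimated in chap. 7 may differ) sets the new aspiration to a weighted average of the old aspiration and the latest performance feedback:

```python
# A standard aspiration-adaptation rule of the kind the chapter describes:
# the new aspiration is a weighted average of the prior aspiration and the
# most recent performance. The weight and the performance series below are
# invented for illustration.

def update_aspiration(aspiration, performance, weight=0.5):
    """Return the adapted aspiration; `weight` is the inertia on the past."""
    return weight * aspiration + (1 - weight) * performance

asp = 100.0
# Performance above the current aspiration is framed as success and pulls
# the aspiration up; performance below it is failure and pulls it down.
for perf in [120.0, 90.0, 110.0]:
    asp = update_aspiration(asp, perf)
```

Even this one-parameter rule shows the framing effect: the same absolute performance can register as success or failure depending on the aspiration level the rule has produced so far.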
Part III provides four empirical studies on managerial thinking and decision-making processes. Tamuz (chap. 8, this volume) examines
how the managerial cognition of ambiguous events, encoded in varying definitions of aircraft near accidents, influences the processes of
filtering, interpretation, and knowledge generation. She shows how the categorization of ambiguous events (an interpretation process)
influences if and how knowledge about these events is generated. Thus, interpretations influence the computational process of gathering
information. She finds that who the decision makers are affects how they will interpret and categorize ambiguous events. Categorization
differs depending on past experience, goals, and position.
Bennett and Anthony (chap. 9, this volume) investigate the way cognitive and experiential differences between inside and outside members
of boards of directors may impact the nature of their personal contributions to board deliberations. Like Tamuz (chap. 8), they find that
who the directors
are influences how they categorize and interpret events. Nair (chap. 10, this volume) also examines how managers deal with ambiguous,
complex situations. He presents an empirical study on the relation between the structure of cognitive maps of managers and their
effectiveness in solving complex problems.
Vidaillet (chap. 11, this volume) provides a fine-grained view of the decision-making processes surrounding a highly complex and ill-structured problem: a hazardous material incident that occurred in France. She finds that before information about the incident could be processed, the actors needed to frame and categorize the incident. Each set of actors constructed a representation of the crisis and then processed information and acted accordingly. The context of the actors (their background, job, and physical location) influenced their framing of the incident. These four chapters illustrate how important the interpretive processes of framing and categorization are to actors faced with ambiguous issues. In all these examples, this interpretive process is a precursor to information processing. The interpretations of managers are influenced by their context: their prior experiences, their social context, and their location in time and space. If we only examine decision making after goals, preferences, and problem definitions have been set, we will see decision making as largely a computational, information-processing exercise. To do so, however, misses half of the story. Defining problems and determining preferred outcomes are interpretive processes that serve to reduce ambiguity and allow information search, retrieval, and processing to ensue. Dukerich (this volume) discusses the key issues that arise when managers deal with ambiguous, unstructured problems. She shows how in these four papers, ill-structured problems allow for multiple interpretations of the problem, of the relevance of information, and of the appropriateness of various actions.
formation for predicting negative outcomes is available within the project team; however, the available knowledge is not shared effectively.
This occurs because the interests of certain stakeholders are routinely excluded, resulting in an overly narrow definition of problems.
Brooks, Kimble, and Hildreth construct an interpretive framework based on structuration theory (Giddens, 1986) to explore the
processes by which an information technology is created, used, and institutionalized within an organization. They show how technologies
are not just used by actors in an organization but, rather, are created and institutionalized by the process of actors using technologies. At all three levels of analysis, we can see the role of the interaction of actors in creating the interpretations of situations that guide action. In all three cases, we see how features of organizational life that on one hand seem to lend themselves especially well to computational-representational approaches (the definition of strategic groups, the assessment of risk, and the application of technology) are all guided by interpretive processes. Gioia (this volume) joyfully summarizes these studies by arguing that strategy essentially boils down to social cognition.
Starbuck's chapter (chapter 15, this volume) opens the final section with a historical account of the development of cognitive research in
psychology. In comparing the behaviorist and cognitive approaches, he concludes that behaviorist theories can explain phenomena that
cognitive theories cannot, and cognitive theories can explain phenomena that behaviorist theories cannot. He argues that although an
integration of the two perspectives is the correct long-term goal, there is value in facilitating debate that clarifies the concepts and
assumptions of the perspectives. We have a similar goal for this book. Although we seek integration of the computational and interpretive
perspectives in the long run, we encourage variance, contrast, and debate at this stage of the game. In our final chapter, Lant and Shapira (chap. 16, this volume) look back at the research reported in this volume, summarize the major trajectories, comment on methodological issues, and provide some conjectures for future research in managerial and organizational cognition. We hope you find our approach useful in your own endeavors toward understanding cognition in organizations.
References
Allport, F. H. (1924). Social psychology. Boston: Houghton Mifflin.
Berger, P., & Luckmann, T. (1966). The social construction of reality. Garden City, NY: Doubleday.
Bruner, J. (1990). Acts of meaning. Cambridge, MA: Harvard University Press.
Daft, R. L., & Weick, K. E. (1984). Toward a model of organizations as interpretation systems. Academy of Management Review, 9, 284-295.
Dewey, J. (1938). Experience and education. New York: Macmillan.
Douglas, M. (1986). How institutions think. Syracuse, NY: Syracuse University Press.
Dreyfus, H. (1965). Alchemy and artificial intelligence. RAND Corporation Paper P-3244.
Dreyfus, H. (1972). What computers cannot do: A critique of artificial reason. New York: Harper & Row.
Dreyfus, H., & Dreyfus, S. (1986). Mind over machine: The power of human intuition and expertise in the era of the computer. Cambridge, MA: Basil Blackwell.
Durkheim, E. (1895). The rules of sociological method. New York: The Free Press.
Edwards, W. (1954). The theory of decision making. Psychological Bulletin, 51, 380-417.
Fiske, S. T., & Taylor, S. E. (1984). Social cognition. Reading, MA: Addison-Wesley.
Garud, R., & Porac, J. F. (1999). Kognition. In R. Garud & J. F. Porac (Eds.), Advances in managerial cognition and organizational information processing (Vol. 6, pp. ix-xxi). Greenwich, CT: JAI.
Giddens, A. (1986). Central problems in social theory: Action, structure, and contradiction in social analysis. Berkeley, CA: University of California Press.
Greeno, J. G., & Moore, J. L. (1993). Situativity and symbols: Response to Vera and Simon. Cognitive Science, 17, 49-60.
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263-291.
Lord, R. G., & Maher, K. J. (1991). Cognitive theory in industrial and organizational psychology. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (Vol. 2, 2nd ed., pp. 1-62). Palo Alto, CA: Consulting Psychologists Press.
March, J. G., & Simon, H. (1958). Organizations. New York: Wiley.
March, J. G., & Simon, H. (1993). Organizations (2nd ed.). Cambridge, MA: Basil Blackwell.
Meehl, P. (1954). Clinical versus statistical prediction. Minneapolis, MN: University of Minnesota Press.
Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.
Neisser, U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts.
Newell, A., Shaw, J., & Simon, H. (1958). Elements of a theory of human problem solving. Psychological Review, 65, 151-166.
Nisbett, R., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.
Norman, D. (1993). Cognition in the head and in the world: An introduction to the special issue on situated action. Cognitive Science, 17, 1-6.
Pea, R. D. (1993). Practices of distributed intelligence and designs for education. In G. Salomon (Ed.), Distributed cognitions: Psychological and educational considerations (pp. 47-87). Cambridge, England: Cambridge University Press.
Sandelands, L. E., & Stablein, R. E. (1987). The concept of organization mind. In Research in the sociology of organizations (Vol. 5, pp. 135-161). Greenwich, CT: JAI.
Simon, H. A. (1947). Administrative behavior. New York: The Free Press.
Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69, 99-118.
Skinner, B. F. (1953). Science and human behavior. New York: The Free Press.
Thagard, P. (1996). Mind: Introduction to cognitive science. Cambridge, MA: MIT Press.
Vera, A. H., & Simon, H. A. (1993). Situated action: A symbolic interpretation. Cognitive Science, 17, 7-48.
Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton, NJ: Princeton University Press.
Vygotsky, L. S. (1929). The problem of cultural development of the child. Journal of Genetic Psychology, 36, 415–434.
Walsh, J. P. (1995). Managerial and organizational cognition: Notes from a trip down memory lane. Organization Science, 6, 280–319.
Watson, J. B. (1913). Psychology as the behaviorist views it. Psychological Review, 20, 158–177.
Weick, K. (1969). The social psychology of organizing. Reading, MA: Addison-Wesley.
Weick, K., & Roberts, K. H. (1993). Collective mind in organizations: Heedful interrelating on flight decks. Administrative Science Quarterly, 38, 357–381.
Wundt, W. (1921). Elements of folk psychology. London: Allen & Unwin.
PART I
THEORETICAL FOUNDATIONS
Chapter 2
The Role of Machine Learning in Organizational Learning
Vasant Dhar
New York University
The individual human being is unpredictable, but the reaction of human mobs, Seldon found, could be treated statistically. The
larger the mob, the greater the accuracy that could be achieved.
Isaac Asimov, The Foundation Series
There is extensive literature on organizational learning and on machine learning but virtually no connection between the two. There are
several reasons for this. First, the disciplinary foundations of the two areas are different. But perhaps more importantly, the lack of a
connection is due to the fact that until recently, information technologies have played a predominantly transaction processing role. The
primary purpose of information technologies has been to support accounting and audit, as opposed to facilitating core business processes
such as customer-supplier interactions, supply chain management, and so on. This situation has changed rapidly in the last few years with
the maturation of information technologies and their increasing importance as enablers of business processes.
Figure 2.1 shows roughly how core information technologies such as networks, databases, interfaces, and artificial intelligence have
matured in the last few years. The figure also illustrates to some extent why some early predictions of intelligent systems in organizations
proved to be false, or at least premature: The enabling technologies were simply not mature enough to
Fig. 2.1
From Seven Methods for Transforming Corporate Data into Business
Intelligence, by V. Dhar and R. Stein, Prentice Hall, 1997.
support information-intensive commerce in a reliable and scalable manner. The maturation of these enabling technologies has been
particularly dramatic in the last two years with the explosion of the Internet, which has fueled the power of desktops, search engines, and
databases. It is estimated that since the late 1990s, the data in electronic databases has been roughly doubling each year. This trend is
likely to accelerate.
Most organizations struggle to extract meaningful information from their deluge of data. There are several reasons for this, two of which are
readily observable in organizations. The first has to do with not recording history carefully. In most organizations, much of the data comes
in from one end and goes out of the other. Apart from recording transactions, the focus is on current events such as news, economic
indicators, last week's sales, and so on. Hardly any of these data, what I call history, are retained at all, let alone recorded systematically.
The further back in time we go, the more rapidly the availability and reliability of information decline. The lack of accurate historical information
makes it impossible to generate and test hypotheses about past behaviors of events, customers, and markets, and to compare them with
the present. This effectively limits human deliberation and the ability of organizations to learn from their history.
The second reason is that the speed with which new insights are derived from exploration is usually much too slow. The human effort and expense are incurred now, whereas the benefits, at best, are spread out over time.
There is little motivation to expend resources to explore the data. The net result is that data continue to accumulate in archives while little value is realized.
The first observation implies that the yield that organizations derive from data diminishes dramatically with time. For example, in the
financial industry, all participants have roughly equal access to current data (and often react to it similarly, sometimes as a herd) but their
abilities to extract meaningful information from historical data vary dramatically. It is not surprising that organizations are now beginning to
deploy significant resources toward their data resource, specifically, toward enterprise-wide data warehousing. A major motivation is to
get away from developing expensive one-time applications to gather and analyze data each time a business hypothesis needs to be verified.
The second observation implies that even if organizations collect and warehouse their data meticulously, there is still the obstacle of speed,
of generating useful insights quickly enough to satisfy managers. The challenge is to compress the period required to generate useful results
to the point that managers realize that their ideas can be discussed, tested, refined, and implemented within days or weeks instead of
months or years.
This chapter describes how organizations can dramatically increase their ability to better understand and exploit history. I draw on my
personal experience as a manager of an advanced technology group in a large financial organization that used machine learning methods to
enable the organization to better harness its data resource and improve customer management, sales, and investment decisions. Several
decision support systems were developed as a result of this work that have been in regular use for almost 3 years. It is reasonable to point
to them at least anecdotally as successes of data-driven learning. My objective in this chapter is to highlight the major lessons learned and,
more broadly, to claim that in this information-intensive age, data-driven information systems are a critical part of organizational learning.
I shall use three small vignettes from core problems in the financial industry as running examples. The first involves learning from transaction
data, where a transaction is a trade that specifies a product was bought or sold by a customer on a particular date and time through a
particular salesperson. Thousands of such transactions are conducted daily, providing a rich source of data that link customers, products,
and the sales process. The organizational objective is to learn about how to serve existing customers better, to improve the sales process,
and to increase overall profitability.
The second vignette involves the assessment of credit risk for retail customers, namely, the likelihood of a customer defaulting on
borrowed funds.
Retail banks are exposed to significant levels of risk in this area, and cutting down losses through fraud prevention and better customer
management is a major business objective.
The third vignette involves learning about financial equity markets, specifically, how prices of equities are affected by various types of data
that flow into the market on an ongoing basis. The data consist of earnings announcements (or surprises), analyst projections or revisions
about earnings relating to companies or industry sectors, quarterly balance sheet or income statement disclosures by companies, and so
on. The objective is to learn about the relations among these variables and incorporate these insights into profitable trading and risk
management. The ability to trade intelligently is one of the most important problems facing securities firms. This problem is a particularly
challenging one because many of the relationships among problem variables are ephemeral or difficult even for an expert to specify
precisely. Whether markets are efficient or not, they are all about information, and investment professionals devote considerable effort to
gathering and interpreting data.
Organizational Learning:
Mechanisms and Impediments
Levitt and March (1988) provided the following definition of organizational learning: "organizations are seen as learning by encoding inferences from history into routines that guide behavior . . ." This is a descriptive view of learning, viewing it as a process that guides behavior. If we were to take a normative view, we would replace the term guide with improve. March (1991) characterized organizational activity broadly as exploration and exploitation. Developing new knowledge requires exploration. Using previously acquired knowledge is exploitation.
Another view of learning that is somewhat more normative is that of Argyris and Schon (1978), which emphasized that learning takes
place through a questioning of existing beliefs, norms, and evaluation criteria. Argyris refers to this as "double loop" learning. The "inner"
loop is simply a comparison of goals with actuals using some evaluation criteria to make decisions or to modify the inputs. This is similar to
the notion of feedback in the cybernetic sense. The "outer" loop, which occurs less often, is about questioning and modifying the evaluation
criteria themselves (Lant & Mezias, 1992).
In the remainder of this section, the two major approaches to learning and the impediments associated with them are discussed. This is
followed by a discussion of machine learning methods and how they can be used to address these impediments. My intention is not to cover all the nuances of organizational learning because these have been discussed at
length by a number of authors such as Argyris and Schon (1978), March (1999), and many others. Accordingly, I keep this section as
brief as possible.
Learning by Doing
There is considerable agreement in the literature on the various mechanisms through which organizations learn as well as the impediments
to learning (Levitt & March, 1988). The first mechanism is "learning by doing," that is, from direct experience.
For example, a sales manager in the asset gathering division of an investment bank discovered the following relationship over time. His
commission revenues tended to be much higher when he focused on executing large transactions for a small number of clients instead of
pursuing a large number of clients at the same time. He explained this success by saying that the former required him to spend a lot less
time on the phone, which gave him more time to read research reports and better understand his products in terms of performance and
risk characteristics. This in turn led to large sales and a high hit rate. He was interested in verifying his hypothesis and institutionalizing this
learning if appropriate.
It turned out that the sales manager's hypothesis was in fact correct, but only for specific account types, namely institutional as opposed to
retail customers. This was, in effect, a nonlinear relationship, a point returned to shortly. This affirmed piece of knowledge was used
subsequently in mentoring the sales force and, more importantly, to monitor and analyze other possible exceptions to the rule. Significant
counterexamples to the rule might suggest other nonlinearities, or that the rule was not particularly robust after all, or that the marketplace
was changing.
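The subgroup effect that surfaced here can be sketched with a few lines of code. The account types, strategies, and commission figures below are invented for illustration; the point is only that an aggregate comparison can hide a relationship that holds for one customer segment and not another:

```python
import statistics

# Hypothetical transaction records: (account_type, strategy, commission).
# "focused" = few large clients; "broad" = many small clients.
records = [
    ("institutional", "focused", 120), ("institutional", "focused", 135),
    ("institutional", "broad", 80),    ("institutional", "broad", 75),
    ("retail", "focused", 40),         ("retail", "focused", 42),
    ("retail", "broad", 45),           ("retail", "broad", 43),
]

def mean_commission(account_type, strategy):
    """Average commission for one account type under one strategy."""
    vals = [c for a, s, c in records if a == account_type and s == strategy]
    return statistics.mean(vals)

# The focused strategy helps institutional accounts but not retail ones:
for acct in ("institutional", "retail"):
    lift = mean_commission(acct, "focused") - mean_commission(acct, "broad")
    print(acct, round(lift, 1))
```

Grouping by account type before comparing strategies is what exposes the nonlinearity; pooling all accounts together would wash it out.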
There are several limitations to this type of learning that have been noted in the literature. The first arises due to the selective ways that
histories play out and hence data get recorded. Histories are usually produced only for those actions that are taken and not for ones that
are not. Alternatives that are rejected or not even considered do not get a chance to play out. No information is generated about them.
Over time, organizations are therefore more likely to learn by rejecting bad alternatives than by discovering good ones unless the perceived worth of making an effort to explore is significant. If the perceived costs of exploration are high, which is often the case, there is a
systematic bias against exploration. In effect, there is a self-correcting bias toward Type I errors (the selection of bad alternatives) and not
toward Type II errors (the rejection of good alternatives) (March, 1991).
Another potential side effect of the selective recording of routine activity is that there may be little memory of the decision process,
including partial solutions or rejected alternatives. For example, organizations spend large amounts of money redeveloping commonly used
pieces of software. Large software projects cry out for some coherent form of historical capture of software development, especially
because of the high turnover of programmers. A major research effort at the Microelectronics and Computer Technology Corporation in the early 1980s (Conklin & Begeman, 1988; Curtis, Krasner, & Iscoe, 1988; Ramesh & Dhar, 1992) focused on tools for the systematic capture and retrieval of memory. Other similar efforts have been reported by some major Japanese corporations (Cusumano, 1991; Nonaka, 1994). Andersen Consulting also spent considerable effort building libraries (video and text) of past engagements (Slator & Riesbeck, 1991) for
purposes of knowledge capture and sharing. It is probably fair to say that these efforts met with mixed success. The common problems
reported at the time were high cost, lack of incentives for people entering information into a system, the lack of good business processes,
and high task complexity that made it difficult to specify good reusable components.
More recently, however, there have been some notable successes in creating and accessing memory at the organizational level, particularly
in organizations that have service-intensive operations such as Compaq, Dell, and Microsoft. Compaq, for example, pioneered the use of
"case-based reasoning" techniques (Schank, 1986) in capturing information from customers and making this knowledge, suitably cleaned
up, accessible to its entire sales force (Dhar & Stein, 1997). The major reason for this success was the fact that the problem complexity is
low, and there is a solid business process in place for dealing with customers. The information flowing into the system about customers'
problems does not depend on individuals' memories but on a structured interaction with the customer. By maintaining a highly structured
case base of ongoing customer interactions and having easy access to the knowledge contained in it, the organization continues to learn
about problems with its products. Compaq is now beginning to use this knowledge to improve its manufacturing and testing processes.
Finally, a limitation of learning by doing is simply the limited generation of experience of decision makers. People become competent at
things that happen as a by-product of this limited experience. If a learned strategy was successful in the past, it is likely to be exploited,
shutting out exploration of other, possibly better routines. In terms of March's (Levinthal & March, 1993; Levitt & March, 1988) notion of exploration and exploitation, pure exploitation in the absence of exploration leads to "competency traps" and obsolescence. These traps might also arise because of sunk costs in the existing way of doing things, where changes to an existing process might be too hard to implement and the benefits hard to evaluate with certainty.
A large body of literature on organizational learning (Senge, 1994) exhorts managers to question their measures and processes periodically, arguing that true learning occurs in this manner, that is, as double loop learning.
Sunk costs.
Structural impediments.
Interpretation biases.
Transitions in volatility take place smoothly, like changes in seasons. Price trends, on the other hand, tend to be more sporadic and occur less often, but they do occasionally persist for varying lengths of time. Finding relations among these variables is difficult. Understanding them is even harder. Explaining them to users is harder still.
Discovery is also expensive in terms of cost and time. For a senior manager, finding answers to simple questions such as which customers
were in the top 20 in terms of commission revenue for 3 years in a row is cumbersome. Harder still are questions such as what is common
or different about customers who were in the top 20 for 3 years in a row. Such requirements are typically passed on to information
technology professionals or controllers, and the answers can take weeks or months. The more the intermediaries, the more the structural
impediments, and the less the exploration and discovery.
Time pressure also limits how much "doing" can occur realistically in organizations and what can be learned from it. This phenomenon is
illustrated using the well-known bandit problem, where a subject armed with a certain amount of money is exposed to an array of slot
machines with different payoffs and the goal is to maximize winnings within a specific duration (March, 1998). The shorter the duration, the
sooner the subject must commit to some subset of slot machines to exploit, and the more limited the search. In reality, the payoff function
changes over time, making it even more difficult to balance the effort required in gathering information versus exploiting it. But
organizational reality is characterized by yet another undesirable version of the bandit problem: the information about the goodness of
different slot machines, the outcomes, is not available instantly! In other words, there is usually a significant time lag between the time an
experiment is conceptualized (such as the payoff function of a slot machine) and the time when results can be observed. This is a major
inhibitor against exploration.
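The bandit setup is easy to simulate. The sketch below plays an epsilon-greedy strategy against three hypothetical machines; the payoff probabilities, epsilon, and horizon are all assumptions chosen for illustration:

```python
import random

def run_bandit(payoffs, horizon, epsilon=0.1, seed=0):
    """Epsilon-greedy play of a multi-armed bandit: explore with probability
    epsilon, otherwise exploit the arm with the best observed mean payoff."""
    rng = random.Random(seed)
    counts = [0] * len(payoffs)
    totals = [0.0] * len(payoffs)
    for _ in range(horizon):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(len(payoffs))   # explore
        else:
            means = [t / c for t, c in zip(totals, counts)]
            arm = means.index(max(means))       # exploit
        reward = 1.0 if rng.random() < payoffs[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
    return counts

# Three slot machines with hidden success probabilities.
pulls = run_bandit([0.2, 0.5, 0.8], horizon=2000)
print(pulls)
```

With a long horizon, pulls concentrate on the best machine; shrinking the horizon forces commitment before the estimates are trustworthy, which is exactly the limited-search effect described above.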
Fourthly, Type I errors are self-correcting because we recognize when bad decisions turn out to be bad. There is no such information about good decisions that were not recognized or exercised because of the time and cost of discovery and experimentation. In other words, the impediments to discovery are doubly penalizing: not only are managers reticent to pose hypotheses, but the answers they do have are restricted to what they already do.
A fifth factor inhibiting discovery is interpretation bias. Humans exhibit considerable biases in interpreting statistical phenomena and a wide
variance in their judgment of and adoption of risk depending on their objectives. Shapira (1995), for example, described how managers
may under- or overestimate true risk depending on whether they are trying to achieve relatively easy objectives or to break out to new levels of performance. Fischhoff (1975) showed how interpretation of data is shaped by
individuals' values and frames of reference.
In the next section, I introduce the basic concept and capability of machine learning and demonstrate through focused real-world examples
why it is a powerful facilitator of organizational learning. I specifically consider how and to what extent it helps us in dealing with the
impediments in Table 2.1. My primary goal is to address the issue of why knowledge is not discovered as opposed to why it is ignored.
Accordingly, I limit my discussion to issues of discovery.
Because many hypotheses are consistent with the observation, the trick is to generate the more plausible ones and refine them. Machine learning algorithms help in
achieving this objective. By automating hypothesis generation and testing, they iterate through this process to get to the more interesting
distributions of outcomes quickly. With current technology, large-scale data analyses can be done in days or weeks rather than months or
years.
Figure 2.2 shows where machine learning fits into the larger cycle of learning. It characterizes learning as occurring in two loops. The inner
loop is the machine learning cycle. The outer loop is reflection, dialog, and an agenda definition for the inner loop. This is where the results
from the inner loop are analyzed, interpreted, and discussed. The process is similar in spirit to Argyris' double loop learning.
Placing an upper bound on the elapsed time between iterations of the outer loop has important implications for learning. In most ongoing projects where senior managers are actively probing into some aspect of the business, unless they see interesting new results on a regular basis, their memories about previous dialogs are hazy and cumulative learning is low. Although there is no hard evidence about the optimal frequency for reviewing intermediate results, my experience with data mining projects has been that the elapsed time between reviews in the outer loop should not exceed 2 or 3 weeks. This upper bound of 2 to 3 weeks places a heavy burden on the inner loop. In fact, this is what makes
machine learning a practical necessity. It speeds up hypothesis generation and testing by enabling analysis to proceed with an initially rough
specification of the problem without requiring any assumptions about the form of the relation among the variables. Indeed, the relation
emerges after several cycles of the outer loop.
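The inner loop's generate-and-test cycle can be sketched as follows. The data are synthetic, with a hidden income threshold below which defaults are more likely, and the candidate hypotheses are simple threshold rules; a real system would search a far richer hypothesis space:

```python
import random

rng = random.Random(1)
# Synthetic history: customers default more often below a hidden income level.
history = []
for _ in range(500):
    income = rng.uniform(0, 100)
    defaulted = rng.random() < (0.6 if income < 30 else 0.1)
    history.append((income, defaulted))

def backtest(threshold):
    """Score the rule 'flag customers with income below threshold' by its
    accuracy against the recorded outcomes."""
    correct = sum((income < threshold) == defaulted
                  for income, defaulted in history)
    return correct / len(history)

# Inner loop: generate candidate rules, test each, keep the best.
candidates = range(5, 100, 5)
best = max(candidates, key=backtest)
print(best, round(backtest(best), 2))
```

Note that no functional form was assumed up front; the threshold emerges from testing candidates against the data, which is the sense in which the relation "emerges" over cycles.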
What do we mean by the form of the relationship? In traditional hypothesis testing, the null hypothesis is stated as a statistical relation (or the lack of one) among problem variables. For example, that income and default are inversely correlated. This relation might be stated as d = k1 - k2*i, where d is the default rate, i is income, and k1 and k2 are constants usually determined by a regression.

Fig. 2.2
Machine Learning Forms a Loop Within the Organizational Learning Cycle.
But what if the relation is not linear? For example, default usually decreases at a decreasing rate with rising income. Also, after a certain
level of income, the probability of default flattens out. Although nonlinear, the relation is monotonic: Default does not increase at any point
with increasing income. Economic theory is full of such relations. For such problems, linear regression methods do a fairly good job of
capturing the relation in certain ranges of the independent variables.
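The range limitation is easy to demonstrate. The sketch below fits an ordinary least-squares line to a hypothetical default curve that decreases at a decreasing rate; the line tracks the curve mid-range but predicts negative default rates at high incomes, where the true curve has flattened near zero:

```python
import math

# A monotonic but nonlinear default curve: decreasing at a decreasing rate.
incomes = list(range(0, 101, 5))
defaults = [0.4 * math.exp(-i / 20) for i in incomes]

# Ordinary least-squares line d = a + b*i fitted over the whole range.
n = len(incomes)
mi = sum(incomes) / n
md = sum(defaults) / n
b = sum((i - mi) * (d - md) for i, d in zip(incomes, defaults)) / \
    sum((i - mi) ** 2 for i in incomes)
a = md - b * mi

# Linear prediction at income 100 versus the true (flattened) default rate.
print(round(a + b * 100, 3), round(defaults[-1], 3))
```

The fitted slope is negative, as theory predicts, but extrapolating the line yields an impossible negative default rate at the upper end of the income range.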
Financial markets are full of nonlinearities and they are often not monotonic or continuous. In the sales example, the sales strategy
hypothesized by the managers held only for certain types of customers. In other words, it was nonlinear. As another example from the
investment arena, the maxim the trend is your friend is commonly used by momentum-based investors. There is indeed evidence that
trending situations develop and persist. However, suppose that the longer a trend persists, the greater the hazard that it will end. This is a U-shaped relation for a momentum-based investor. It isn't monotonic.
In addition, nonlinearities can also arise as discontinuities. The sales example demonstrated one type of discontinuity, where the
relationship between two variables changed dramatically when the customer type changed from retail to institutional. Another type of
nonlinearity arises when the relation holds only in certain ranges, such as at the tails of distributions. For example, a commonly tested null
hypothesis in the accounting arena and the investment management community is the following: Positive earnings surprise is associated with
an increase in price.
Suppose that this effect occurs only in the tails of the distribution of earnings surprise, that is, it holds only when the surprise is more than, say, one standard deviation from the consensus mean earnings estimate; otherwise, the effect is nonexistent. What this says, in effect, is that most of the cases, those within one standard deviation, are essentially noise, whereas the signal is located in selected areas, namely, the tails. Figure 2.3 shows this phenomenon.
We can as easily construct examples where the effect might occur everywhere except toward the tails and so on (the default vs. income
example above being one such example). The point is that the signal is located only in certain selected areas and the challenge is to find
these areas.
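A conditional analysis of this kind can be sketched directly. The data below are synthetic, constructed so that the return effect exists only beyond one standard deviation of surprise:

```python
import random
import statistics

rng = random.Random(7)
# Synthetic (surprise, next-period return) pairs: the price effect is present
# only when the surprise is more than one standard deviation out.
data = []
for _ in range(2000):
    s = rng.gauss(0, 1)                  # surprise, in standard deviations
    effect = 0.02 if s > 1 else (-0.02 if s < -1 else 0.0)
    data.append((s, effect + rng.gauss(0, 0.01)))

tail = [r for s, r in data if s > 1]
middle = [r for s, r in data if abs(s) <= 1]
# Averaging over all cases would dilute the signal with the noisy middle;
# conditioning on the tail isolates it.
print(round(statistics.mean(tail), 3), round(statistics.mean(middle), 3))
```

The mean return in the positive tail is clearly positive while the middle of the distribution averages out to roughly zero, which is why the challenge is one of locating the regions where the signal lives.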
The effect can also occur due to the interaction of two or more variables. For example, the earnings surprise effect might hold only when the fundamentals
Fig. 2.3
Earnings surprises more than + X standard deviations from the consensus mean lead
Fig. 2.4
Monthly Returns of an Investment Strategy Learned Through Machine Learning Compared
to the S&P 500.
index. As it turns out, for the period under consideration, this was a difficult benchmark to beat.2
The revenue impacts of taking hypothesized actions on credit card customers can be modeled similarly. For example, we could test the
revenue impact of denying further credit to customers who develop a certain profile characterized by specific levels of accelerating debt
and deteriorating payment history. This strategy could be compared with a baseline (such as no action) to assess its effectiveness. More
complex interventions such as working with the customer rather than denying credit completely are harder to simulate but can still be
approximated, especially if data or reasonable expert estimates on these interventions are available. In general, however, as the precision
of the evaluation function declines, so does the quality of the hypotheses and, consequently, the chances of developing a robust theory
about the problem domain.
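Such a backtest can be sketched as follows. The customer profiles, default rates, and revenue figures below are invented; the structure, comparing a candidate intervention against a no-action baseline on the same history, is the point:

```python
import random

rng = random.Random(3)
# Hypothetical card holders: (risky_profile, revenue).  Risky profiles
# (accelerating debt, deteriorating payments) sometimes default, which
# shows up as a large negative revenue.
customers = []
for _ in range(1000):
    risky = rng.random() < 0.2
    if risky and rng.random() < 0.5:
        revenue = -500.0                 # default: write-off
    else:
        revenue = rng.uniform(50, 150)   # ordinary interest and fees
    customers.append((risky, revenue))

def backtest(deny_risky):
    """Total revenue under a policy; denying credit forgoes that customer."""
    return sum(0.0 if (deny_risky and risky) else rev
               for risky, rev in customers)

baseline = backtest(deny_risky=False)
intervention = backtest(deny_risky=True)
print(round(baseline), round(intervention))
```

With these invented numbers the intervention beats the baseline; the quality of such a comparison degrades, as the text notes, with the precision of the evaluation function.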
Automating the generate and test activity via backtesting addresses a number of impediments to learning. For some problems, it addresses
the small sample size problem by creating data points. It also addresses to some degree the Type II error problem because the outcomes
of a large number of actions are actually evaluated in the backtesting. In this sense, the process is an adaptive simulation, where the search
has the intelligence to figure out which actions are worth simulating based on results of prior experiments.
The outputs of machine learning also help provide transparency into the problem. When the outputs are easy to interpret, such as rules or
conditional distributions of outcomes, they can be used to test prior assumptions or to flesh them out. In the earlier example involving the
interaction of earnings surprise and company fundamentals (i.e., value orientation), the results might show an earnings-growth-driven
gested that removing these customers from the analysis would provide different results. When the machine learning uncovered the fact that
the relationship held more strongly for institutional customers, some managers pondered whether doing one's research before approaching
prospects was more important for sophisticated (equated with institutional) customers.
In all of these cases, the results from the machine learning exercise nudge further human dialog along a number of useful directions. In terms of Fig. 2.2, the dialog about sales effectiveness generated at least three hypotheses. These hypotheses, in turn, focused the machine learning loop. That is, the outer loop provides the basis for setting up the next series of experiments, for testing the credibility of alternative hypotheses that did not exist in the initial experiment. In this way, the process of learning is distributed cyclically between
humans and the machine, where the human dialog defines the search space for the machine, and the machine, in turn, drives the human
dialog into more relevant areas. The net effect is that machine learning speeds up and focuses the process of exploration and assumption
surfacing, thereby providing actionable results in a realistic time frame.
Finally, organizations struggle with the memory problem, of not keeping an adequate memory of events. Because experiences are not
recorded systematically, they are difficult to group into useful clusters or patterns, to retrieve, and to use. We noted in an earlier section how service-oriented organizations such as Compaq are beginning to exploit case-based search methods for finding relevant
information about past experiences. Other organizations such as Morgan Stanley are combining numerically oriented exception-based
analysis with qualitative methods for finding historical information relating to customers and salespeople. For example, a tree induction
algorithm reveals groupings of customers that are exceptions along some criteria such as revenues. A sales manager can then zoom in on
this grouping and find all the call reports that salespeople from across the organization might have put into the system about this set of
customers. Differences between this grouping and those of the lower revenue customers can lead to more effective sales strategies or
product offerings.
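A minimal version of exception-based grouping can be sketched with summary statistics alone; the customer identifiers and revenue figures below are invented, and a real tree-induction algorithm would of course split on many attributes rather than one:

```python
import statistics

# Hypothetical customer revenues.
revenues = {
    "C01": 520, "C02": 480, "C03": 510, "C04": 495,
    "C05": 505, "C06": 1450,   # an exception worth zooming in on
    "C07": 490, "C08": 515,
}

vals = list(revenues.values())
mean = statistics.mean(vals)
sd = statistics.stdev(vals)

# Flag customers whose revenue deviates by more than two standard deviations;
# a manager can then pull the call reports for just this grouping.
exceptions = [c for c, v in revenues.items() if abs(v - mean) > 2 * sd]
print(exceptions)
```

Once the exceptional grouping is isolated, the qualitative records (call reports) attached to it become the object of study, which is the combination of numeric and qualitative search described above.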
Useful groupings or patterns of this type are an important element of the learning process. By maintaining a memory of past events and
providing powerful access mechanisms, systems are able to overcome a major impediment to learning at the level of the organization.
Table 2.2 summarizes the discussion thus far. It lists some of the impediments discussed above and how machine learning methods contribute toward addressing them. The contributions should be interpreted literally, as contributions, and not as complete solutions. For example, automated hypothesis generation and backtesting will not compensate for bad data or the lack of a good evaluation function. It is also worth noting that such methods do not address several of the impediments to learning, such as human biases in data interpretation. If anything, the need to interpret the outputs of machine learning algorithms correctly is critical, especially with regard to the confidence intervals associated with the outcome distributions and the sensitivity of these results to minor changes in inputs.

TABLE 2.2
Impediments to Organizational Learning Addressable Through Machine Learning

Impediment                                 Contribution of Machine Learning Methods
Nonlinearities; task complexity.           Provide better signal-to-noise ratio.
Information gathering or hypothesis        Speed up pattern discovery through automated
testing is too slow.                       generate and test.
Type II error handling.                    Automated hypothesize-and-backtest surfaces
                                           otherwise overlooked actions.
Frames, assumptions, etc.                  Provide explainable models (transparency) capable
                                           of challenging or confirming assumptions.
Inadequate organizational memory.          Data warehousing and recording of cases; better
                                           access methods to data and case bases.
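The caution about confidence intervals can be illustrated with a percentile bootstrap on a backtested return series; the returns below are synthetic, and the interval width is the thing to watch:

```python
import random
import statistics

rng = random.Random(11)
# Hypothetical monthly returns from a backtested strategy.
returns = [rng.gauss(0.01, 0.04) for _ in range(36)]

def bootstrap_ci(xs, n_boot=2000, alpha=0.05):
    """Percentile bootstrap interval for the mean.  A wide interval is a
    warning that an apparently good backtest may not be robust."""
    means = []
    for _ in range(n_boot):
        sample = [rng.choice(xs) for _ in xs]
        means.append(statistics.mean(sample))
    means.sort()
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2))]
    return lo, hi

lo, hi = bootstrap_ci(returns)
print(round(lo, 4), round(hi, 4))
```

If the interval comfortably spans zero, a positive mean return in the backtest should not be taken at face value, which is precisely the interpretive discipline the text calls for.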
TABLE 2.3
Conditions Under Which Decisions Should Be Made by Computer Versus Human

                              Payoff Well Defined?
                              Yes          No
Theory formation?   Yes       Automate     Support
                    No        Support      Support
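The table's rule can be stated compactly in code; this is simply a restatement of Table 2.3, not an addition to it:

```python
def decision_mode(payoff_well_defined: bool, theory_formed: bool) -> str:
    """Table 2.3 as a rule: automate only when the payoff is well defined
    and a theory of the problem has been formed; otherwise support the
    human decision maker."""
    if payoff_well_defined and theory_formed:
        return "automate"
    return "support"

print(decision_mode(True, True),
      decision_mode(True, False),
      decision_mode(False, True))
```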
did not mature rapidly enough and that the conjecture will in fact materialize in time. Another is that middle managers do more than make structured decisions and exercise judgment that cannot be automated. Yet another is that a lot of middle management consists of authorization and verification, a function that is risky to eliminate. Finally, we can view the insights from the man-machine interaction as an endless learning loop, in which we are seldom comfortable enough that we have learned enough to abdicate decision making to the machine. Other explanations, from social and political perspectives on organizations, can be provided.
Regardless of the accuracy of the conjecture, Simon's classification of decision making provided the basis for several other frameworks
describing the role of information systems in decision making (Gorry & Scott-Morton, 1971; Keen, 1978). The basic reasoning was that
problem complexity determines whether decision making can be automated. Factories are automated because the decisions involved can
be specified algorithmically. Management decisions require judgment and are therefore not automated. This is the raison d'être for decision
support systems: They provide the analytics for support of decision making where human judgment is ultimately required. This reasoning
makes intuitive sense. After all, it is usually more cost effective or reliable, or both, to automate routine decisions and too risky to automate
the complex ones.
However, consider the following alternative reasoning. Exploration is a theory-building exercise. Building theory requires testing, and a
theory driven ground-up from the data must be tested particularly rigorously. Specifically, if the learned model involves decision making,
then the model must be tested without human intervention; otherwise, it is impossible to separate the contribution of the human from that
of the machine-learning-based model.
This point poses an interesting paradox. If we're using machine learning methods to uncover patterns in complex nonlinear situations where
the output of the model is a decision, then we must be particularly careful in ensuring that it is the learned model that is separating the
noise from the signal and not the decision maker. Otherwise, we don't know whether
the learned model captures the true structure of the underlying problem or whether human judgment is compensating for a poor model.
Consider the earnings surprise example. The top part of Fig. 2.4 shows the returns assuming a completely systematic or automated
implementation of a particular model. The part to the left of the heavy vertical bar shows the performance of the automated implementation
based on historical data. If the model is any good, or alternatively, if it truly captures the underlying structure of the problem that is
reflected in the data, we would expect the performance of the model to be good on data it has never seen before. The part to the right of
the bar shows the performance of the model in reality, that is, out of sample. These results can be compared to benchmarks and critiqued
by experts to evaluate the model. But if the results were achieved through human intervention, we would not know whether our learned
model captured a real effect in the data or whether the decision maker steered the model right when it made bad decisions or in the wrong
direction when it made good ones.
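A minimal sketch of this in-sample/out-of-sample discipline, with invented signal-and-return data standing in for the earnings surprise strategy: a threshold rule is learned on the historical portion and then evaluated, fully automatically, on data it has never seen.

```python
# Hypothetical (signal, next-period return) pairs; the first portion is
# "left of the bar" (history used to learn the model), the rest is
# "right of the bar" (out of sample, never seen during learning).
data = [(0.9, 1.2), (0.1, -0.3), (0.8, 0.7), (0.2, 0.1),
        (0.7, 0.9), (0.3, -0.2), (0.95, 1.1), (0.15, -0.4)]
history, out_of_sample = data[:5], data[5:]

def profit(threshold, sample):
    # Trade whenever the signal exceeds the threshold; sum the returns.
    return sum(r for s, r in sample if s > threshold)

# "Learn" a decision rule from history: pick the threshold that
# maximizes in-sample profit.
in_sample_profit, threshold = max(
    (profit(t / 10, history), t / 10) for t in range(10))

# Fully automated out-of-sample evaluation: no human steering, so any
# profit (or loss) is attributable to the learned rule alone.
oos_profit = profit(threshold, out_of_sample)
print(f"threshold={threshold}, in-sample={in_sample_profit:.1f}, "
      f"out-of-sample={oos_profit:.1f}")
```

If the out-of-sample performance holds up, the rule captured structure in the data; if it collapses, the in-sample profit was likely overfit noise, and no human intervention has muddied that verdict.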
The counterintuitive point is that even highly complex, nonprogrammed decision situations may require completely automated solutions to
test the model. In the example mentioned previously, where a theory about market behavior is being tested, the precision with which the
payoff function is defined has more of an impact on whether a decision is automated or supported than problem complexity. Although the
decision is a complex one, the payoff function is precise: the profit or loss on a trade. Automation provides a clean and rigorous way to
test the theory.
The same reasoning applies to the example of credit risk estimation for a retail bank. The impact of denying or continuing to provide credit
can also be specified quite precisely in terms of the monetary amount lost as a consequence of failing to detect signs of delinquency. Fraud
problems are also similar: The degree of loss can be computed as a function of the relative proportion and costs of Type I and Type II
errors. All of these problems are complex, requiring human judgment. Yet, if our objective is to form a rigorous theory of behavior, it
makes sense to automate decision making to be able to test the learned model.
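As a sketch, the loss function for such problems can be written down directly; the case counts, error rates, and unit costs below are invented for illustration.

```python
# Hypothetical unit costs: a Type I error (false alarm) triggers an
# unnecessary investigation; a Type II error (missed fraud or missed
# delinquency signal) loses the transaction amount.
COST_TYPE_I = 50.0      # cost of investigating a legitimate case
COST_TYPE_II = 2000.0   # average loss from an undetected fraud

def expected_loss(n_cases, fraud_rate, false_alarm_rate, miss_rate):
    """Expected loss as a function of the relative proportion and
    costs of Type I and Type II errors."""
    frauds = n_cases * fraud_rate
    legit = n_cases - frauds
    return (legit * false_alarm_rate * COST_TYPE_I +
            frauds * miss_rate * COST_TYPE_II)

# A stricter model flags more cases: fewer misses, more false alarms.
lenient = expected_loss(10000, 0.01, 0.02, 0.30)
strict = expected_loss(10000, 0.01, 0.10, 0.05)
print(f"lenient: {lenient:.0f}, strict: {strict:.0f}")
```

Because the payoff is this precise, an automated implementation can be scored objectively, which is what makes rigorous testing of the learned model possible.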
Ironically, however, the flip side of the coin is that simple problems may not be easy or practical to automate. In the example of the sales
manager's conjecture about profitable sales strategies, coming up with a precise function relating actions and payoffs is difficult. The
discovered relationship, and the conjecture that better prepared salespeople make bigger sales, is more of an after-the-fact explanation,
where several alternative reasons are
conjectured, including pure chance, and discussed to explain the data. The conjectures are difficult to test because of the lack of adequate
data, both historical and future: historical in the sense that data on the time salespeople spend researching their products before
approaching customers are unlikely to be available, and future in the sense that it is not practical to obtain such data going forward.
Table 2.3 summarizes the discussion in this section, indicating automation or support depending on the precision of the payoff function and
the theoretical objectives. The upper left quadrant, exemplified by the trading example, is one where automation makes sense. In this
quadrant, the relevant data for theory formation are available, and the payoff function is well defined. When either condition does not hold,
as in the other three quadrants, only support is feasible. This more commonly occurring situation explains the relative preponderance of
systems that support rather than automate much of managerial decision making.
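The quadrants of Table 2.3 reduce to a simple rule, sketched below; the function name and boolean encoding are mine, not the chapter's.

```python
def decision_mode(payoff_well_defined: bool, theory_formation: bool) -> str:
    """Table 2.3 as a rule: automation makes sense only in the upper
    left quadrant, where the payoff function is precise and the relevant
    data for theory formation are available; otherwise, support."""
    if payoff_well_defined and theory_formation:
        return "Automate"
    return "Support"

# The trading example: precise payoff (profit or loss on a trade) and
# data available for theory formation.
print(decision_mode(True, True))   # Automate
# The sales-strategy conjecture: imprecise payoff function.
print(decision_mode(False, True))  # Support
```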
Concluding Remarks
A quantitative analyst at a large bank once remarked that he was appalled by the paucity of information that went into decisions that have
multimillion dollar impacts. One would expect that decision makers would learn from such decisions and improve over time, especially
when they are repetitive. However, the reality is that it is difficult for organizations to learn from prior decisions. Some of the reasons have
to do with the poorly understood link between actions and outcomes. Other impediments have to do with organizational structure and
interpretation biases.
In this chapter, I argue that machine learning can enable organizations to better understand the link between potential actions and
outcomes. The motivation for this argument is the observation that organizations are beginning to collect vast amounts of data as a by-
product of electronic commerce. These data represent collections of large numbers of events, making it possible to test assumptions and
hypotheses statistically for many types of problems, especially those where decisions are repetitive. I assert that for such problems,
organizations underestimate the wealth of knowledge they can uncover directly or indirectly from their databases. They can learn from their
data. In particular, they can use machine learning methods to mitigate Type II errors by better relating potential actions to outcomes (Dhar,
1998).
But uncovering useful knowledge is an exploratory exercise. Given the systematic bias against exploration, it is not surprising that the data
resource often remains largely untapped for purposes of decision making. The purpose of this chapter is to show how some of the major
impediments to organizational learning can be addressed when machine learning is an integral part of a business process. By asking the question, "What
patterns emerge from the data?", machine learning provides managers with a powerful attention directing and learning mechanism.
The incorporation of machine learning into managerial decision making amounts to much more than additional analysis or more rational decision making.
At times, it is about rigorous hypothesis testing and theory building. At other times, it is an efficient attention directing mechanism for
identifying exceptions or tail-oriented phenomena and acting on them. Sometimes it is about validating deeply held beliefs or refuting them.
The upshot is that it leads to dialog among decision makers that would not occur otherwise, which leads to raising better questions,
collecting better data, and obtaining better insight into the problem domain.
Whereas some of the literature on organizational learning points to structural impediments, I have also observed that machine learning
methods are capable of producing systems that can eliminate some of these impediments. For example, the decision support system that
resulted from analysis of the sales data enabled the sales manager to circumvent his usual process of obtaining information. Instead of
relying on assistants and controllers and waiting several weeks for information, he could test his hypotheses instantly, a change that resulted in
the elimination of the intermediary functions. In other words, not only did the machine learning exercise help him better relate actions to
outcomes but perhaps even more significantly, it made the process of information gathering more efficient. The cost savings were
significant.
One simple way of summarizing the impact of information technology in general, and machine learning in particular, on organizations is that
it increases the knowledge yield from an organization's information systems. But realizing this benefit requires a process whereby senior
managers periodically question their assumptions and business objectives and determine the extent to which these assumptions and
objectives can be substantiated, refined, refuted, or fleshed out by the data. My objective in this chapter is to show why and how this can
be done by integrating machine learning into organizational learning, enabling organizations to achieve higher levels of knowledge about
themselves.
References
Argyris, C., & Schön, D. (1978). Organizational learning. Reading, MA: Addison-Wesley.
Conklin, J., & Begeman, M. (1988). gIBIS: A hypertext tool for exploratory policy discussion. ACM Transactions on Office
Information Systems, 6(4).
Curtis, B., Krasner, H., & Iscoe, N. (1988). A field study of the software design process for large systems. Communications of the
ACM, 31(11).
Cusumano, M. (1991). Japan's software factories: A challenge to U.S. management. New York: Oxford University Press.
Dhar, V. (1998). Data mining in finance: Using counterfactuals to generate knowledge from organizational information systems.
Information Systems, 23(7).