Chap 5 Dashboards
Dashboards
1. Introduction
As we saw in the general introduction to this book, most of the books which present
management tools adopt a descriptive approach and focus on ways of building such
tools. They deal with the “how” of the matter and describe methods of construction. All
too often this ignores the question of “why”, i.e. identifying and clarifying the
objectives and principles that guide the choice of operational solutions. We shall
attempt to clarify these fundamental points here.
[1] Existing descriptions of these methods are neither sufficiently conceptualised nor sufficiently detailed; any reference to either of them is in fact an interpretation.
1.1. Objectives and principles generally assigned to dashboards
1.1.1. Objectives
A dashboard is basically defined as a set of indicators which are not exclusively
financial in nature (KPIs). It can take various forms, but is presented generically in the
form of a list of indicators with various values shown for these indicators (see table 5.1
and the examples at the end of the chapter).
Table 5.1
• Volume (activity)
• Lead time
• Quality index
• Customer satisfaction index
• Turnover
• Etc.
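A dashboard of this generic form can be sketched as a simple data structure. The indicator names follow table 5.1; the values and units below are purely illustrative assumptions, not figures from the text:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One line of a generic dashboard: a label and its current value."""
    name: str
    value: float
    unit: str = ""

# A dashboard, generically, is just an ordered list of such indicators.
dashboard = [
    Indicator("Volume (activity)", 12500, "units"),
    Indicator("Lead time", 4.2, "days"),
    Indicator("Quality index", 98.1, "%"),
    Indicator("Customer satisfaction index", 4.3, "/5"),
    Indicator("Turnover", 1.8, "M EUR"),
]

for ind in dashboard:
    print(f"{ind.name}: {ind.value} {ind.unit}")
```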
It is hard to date the origin of this tool precisely because the answer will depend on
whether we are interested in company practices or normative descriptions of its
construction principles. The latter began to be developed in the 1980s in response to two
types of criticism:
• A theoretical criticism pointing out the shortcomings of financial indicators in
terms of relevance [2] (Johnson and Kaplan, 1987): their short-term orientation,
their focus on performance only for shareholders, and their limited capacity
for explanation (see chapter 3).
• Observations of the insufficiency of dashboard practices in companies,
emphasising other problems of relevance. The dashboard was seen as often
overloaded, slow, hard to read and therefore of limited use for managers.
Improving the relevance and usefulness of performance measurement gives rise to three
general objectives for building “good” dashboards:
• Do not limit measurement to a single performance dimension. Instead, provide
a more “balanced” representation of performance (objective 1).
• Orient measures to a longer-term representation of performance (objective 2).
• Facilitate decision making by managers (objective 3). [3]
[2] A measurement system is said to be relevant when it provides a manager with information that is aligned with the requirements of managing the performance of his business.
[3] We will see in the conclusion to the first part of the book, at the end of chapter 6, that when dashboards are designed by applying these objectives they have their own limitations with respect to financial measurement systems.
1.1.2. Principles
These objectives are translated into a set of commonly accepted principles for building
“relevant and useful” dashboards:
• KPIs must comprehensively cover the organisation’s entire set of objectives
and goals.
• They must focus on outcomes as well as on the performance levers that can
be used to achieve these outcomes.
• They must include both financial and non-financial indicators.
• The indicators chosen must be consistent with the organisation’s strategy.
• They must be limited in number.
• They must deliver their information quickly.
• They must be presented in a legible and vivid way.
parts in a manufacturing line), staff training initiatives (for example the average number
of training hours per employee) and the amount of investment (CAPEX, [4] for example).
This is the second type of balance sought in a dashboard (objective 1): the balance
between results metrics and indicators associated with performance levers.
This principle also improves managerial decision-making (objective 3). The variety of
performance levers offers managers a richer view of performance which enables them to
produce a better quality analysis, as well as a more operational view. In the event of
poor results, a manager can determine whether the origin of the problem is in the area of
product quality or production time. These indicators also allow him to target his actions
more effectively on the causes of the problem; financial results cannot be directly acted
on.
Nevertheless, this principle also complements the first two principles. We might
imagine, for instance, though this scenario is not very realistic, that we are able to
measure performance vis-à-vis different stakeholders with exclusively financial
indicators. For example, a profitability ratio to measure performance with regard to
shareholders, sales volumes or marketing costs to measure performance with regard to
customers, and average salary as a performance indicator with regard to employees.
Likewise, an indicator for a performance lever can be financial. This is the case, for
example, of export sales figures when a company has chosen to focus on this
performance area with the aim of increasing its total sales figures.
This leads us to distinguish between two meanings of the word “financial”: an indicator
can be financial because it reflects a “financial point of view” or because it is expressed
in accounting values. The principle whereby non-financial indicators are included in a
dashboard aims not only to balance different points of view, but also to strengthen the
long-term vision. If we take another look at the example of export sales figures, it
certainly constitutes a lever which can be used to increase overall sales figures, but it
remains very general. If we take a look at the performance levers that might be used to
influence export sales figures, we may consider, for example, increasing the number of
overseas sales people. The corresponding indicator will then have lost its financial
character. Thus the further we move up the chain of causality, towards the causes, the
less financial the indicators become and the more performance tends to be captured in
the long term, in keeping with the previous principle.
The non-financial character of indicators is also beneficial because it separates the
performance measure from the accounting metric, which always takes a certain amount
of time to produce. Certain non-financial indicators will be obtained more quickly,
which also contributes to objective 3.
[4] CAPEX: Capital Expenditure.
On the other hand, the same results presented graphically with colour-coded information
enable the manager to immediately visualise positive areas and problematic areas. They
are more vivid and meaningful. The visual design of the dashboard can therefore
contribute to managerial decision making (objective 3).
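As a minimal sketch of this colour-coding idea, an indicator's variance against its target can be mapped to a red/amber/green status. The thresholds used here are assumptions for illustration, not values from the text; for indicators where lower is better (lead time, for instance) the sign of the variance would be flipped:

```python
def rag_status(actual: float, target: float, tolerance: float = 0.05) -> str:
    """Classify an indicator as green/amber/red from its relative
    variance against target (illustrative thresholds)."""
    if target == 0:
        raise ValueError("target must be non-zero")
    variance = (actual - target) / abs(target)
    if variance >= -tolerance:
        return "green"   # on or above target, within tolerance
    if variance >= -2 * tolerance:
        return "amber"   # moderate slippage: keep an eye on it
    return "red"         # significant slippage: act

print(rag_status(102, 100))  # green
print(rag_status(93, 100))   # amber
print(rag_status(80, 100))   # red
```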
Table 5.2 summarises the connections between objectives and the principles generally
assigned to dashboards.
These principles constitute the first level in explaining how to build a good dashboard.
They do however have certain limitations.
orientations, on priorities. Managers should select certain indicators because
they are more relevant than others. We will see in the following section that
“strategy” is a vague term and that there are several types of “particular
orientation”.
In our view, these two sets of principles reveal two distinct purposes – with
contradictory consequences – that are assigned to dashboards: to control all the
parameters of the business and to “energise” the organisation to achieve certain
priorities. Indeed, it seems rather difficult to keep an entity under control if only twenty
or so indicators are monitored, as certain methods (particularly the BSC method, see
below) would suggest. [5]
Moreover, the empirical literature shows that putting these principles into practice is not
a straightforward matter:
• The number of non-financial measures remains very high in company
dashboards (Mendoza & Bescos, 2000). This observation is consistent with
the difficulties encountered by controllers when they try to carry out the
instruction to redesign their dashboards to reduce the number of indicators.
They observe that although limiting the number of indicators to roughly
twenty does strongly focus the attention of managers, it is accompanied by
the risk of losing track of key parameters.
• It is observed, in practice, that management review meetings – even when
based on the use of dashboards including non-financial indicators – struggle
to reach decisions.
[5] Unless we limit ourselves to strictly financial measures of performance with their inherent shortcomings, particularly the fact that only the final result is controlled.
positioning of an organisation determines key performance levers and weighs
their relative importance. For example, cost control has more importance in a
low-cost strategy than in a strategy of differentiation through innovation.
There can be different strategic positions at different levels of the
organisation, even within a Strategic Business Unit (SBU), to take a specific
market into consideration for instance. The strategic positioning of different
geographical entities may be partially different to adapt to a specific
competitive environment or different customer expectations in different
countries.
• Finally, orientations may stem from priorities relating to temporary
circumstances, without any direct link to either the business model or the
current strategic position. They may be linked to:
- changes in strategic positioning: the company is then temporarily focused on
the transition to be carried out;
- specific economic or cyclical circumstances: during certain periods financial
markets are particularly attentive to financial structure, which may prompt the
company to focus on debt reduction. Or events in certain raw materials
markets may spur the company to diversify its suppliers;
- specific internal difficulties: if a decline in quality is observed, for instance, the
organisation may focus on turning this situation around;
- particularly high ambitions for certain objectives: it is the magnitude of the
target and not its nature which places a certain urgency on the priority here. [6]
Reducing the time-to-market for new products may be an ongoing objective
that is defined by the company’s strategic positioning, but if the target is to
halve the time and this target is considered particularly ambitious, it may
become an immediate priority.
Different types of orientation do not have the same degree of stability. Some are quite
stable (organisational goals, business model, strategic positioning); they characterise the
ongoing dimensions of the organisation’s performance. Others, on the contrary,
correspond to phases of crisis, transition and change; these are the more ephemeral
dimensions of performance, but also points of heightened vigilance.
[6] See the definitions of the terms “objective”, “target” and “targeted objective” in chapter 1.
selectivity does not necessarily apply to the phase of preparing these monthly
reviews or other occasions and places of analysis.
• Performance measurement may be used by different parties: controllers, entity
managers, operational managers in the entity and at other levels in the
organisation. They do not all have the same role and are not necessarily
involved at the same stages in the decision-making process.
The question of how measurement systems are used has been examined by Robert
Simons (1995) who distinguishes a diagnostic use from an interactive use (see below).
He includes the question of measurement system users in his typology, but not the other
two points that we have raised. Significantly, he considers that there is no link between
the construction of a measurement system and the way it is used. According to Simons,
every performance measurement system can be used either diagnostically or
interactively.
We believe, on the contrary, that the way measurement systems are built depends on
how they are to be used, as we will explain in the rest of this chapter.
1.2.4. Conclusion
This discussion highlights the fact that measurement systems and consequently
dashboards serve two different and complementary purposes:
• To “energise” and focus the organisation on a few priorities. This goal can be
linked to the objective of “powering up” the organisation.
• To assist managers in controlling the organisation’s activities.
Depending on their purpose, dashboards will be built according to different principles
(what makes an indicator relevant and useful depends on its purpose) and will be used
in different ways. We propose therefore to distinguish between two types of dashboard
which we will call “performance management dashboards” and “panoramic
dashboards”.
ongoing performance levers in order to keep them under control. The choice
of indicators is guided by the goals of the organisation, its strategic
positioning and its performance model. Building these dashboards entails the
identification of the most important stakeholders (not only shareholders and
customers) and their objectives. Intervention is done “by exception” – when a
variance or significant slippage is identified.
In the following pages we will describe these two types of dashboard in detail as well as
the associated ways of using them.
[7] And not, as is sometimes contended, owing to the analytical capacities of senior executives.
achieved. It is therefore possible to set the agenda for monthly performance reviews in
advance, according to the frequency and time horizons deemed relevant for each
objective.
The monitoring of each objective can be assigned to a member of the executive team. It
will then be his responsibility to prepare the points to be discussed in the performance
review meeting and submit proposals for decisions to the executive team. The objective
of the meeting will clearly be to make these decisions.
It should be noted that with this system of monitoring and managing performance,
setting a target to reach for each objective is not fundamental because the management
reaction is not triggered by the comparison of progress with preset targets (contrary to
the panoramic dashboard, see below). In some cases, it is not even necessary to have
indicators to track the progress of action plans; simply focusing the executive team’s
attention on a given topic will result in the launch of action plans.
Given the temporary nature of these priorities, it is not always necessary to incorporate
the indicators in the organisation’s information systems to automatically generate
information; it depends on the associated IT costs compared to what it would cost to
compute the indicator manually for a temporary period.
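The trade-off described above can be reduced to a simple break-even comparison. The function and all cost figures below are hypothetical illustrations, not values from the text:

```python
def automate_indicator(monthly_manual_cost: float,
                       one_off_it_cost: float,
                       monthly_run_cost: float,
                       months_tracked: int) -> bool:
    """Return True if building the indicator into the information system
    is cheaper than computing it by hand over the expected tracking
    period (all inputs are hypothetical)."""
    manual_total = monthly_manual_cost * months_tracked
    it_total = one_off_it_cost + monthly_run_cost * months_tracked
    return it_total < manual_total

# A temporary priority expected to be tracked for only 6 months:
print(automate_indicator(800, 10000, 50, 6))   # -> False: manual wins
# The same indicator tracked for 3 years:
print(automate_indicator(800, 10000, 50, 36))  # -> True: automation pays off
```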
held monthly, it may happen that an action plan has already been launched. In this case,
the review meeting merely serves to provide participants with information about the
action plan. Otherwise, an action plan may either be drawn up during the meeting or an
ad hoc meeting can be scheduled for the people concerned to discuss such a possibility.
Given the unpredictable nature of the topics to be covered and the short preparation
time, this second scenario is more common.
The purpose of the performance review meeting is therefore to make a diagnosis,
pinpointing significant unresolved problems. The number of items dealt with may be
greater than for a performance management dashboard. Furthermore, the topics
discussed in meetings are only an extract of those being monitored in the panoramic
dashboard, which can therefore contain a greater number of indicators than the
performance management dashboard. Still, as there are no empirical studies on this
question, [8] it is hard to determine the number of indicators that can be tracked in this
way and which generate a relevant number of topics to be discussed in review meetings.
The number of topics may be limited by distinguishing between the indicators that have
to be analysed monthly and those which may be dealt with less frequently.
Targets play an essential role in panoramic dashboards because part of the analysis is
based on variances with respect to targets. However, given the fact that the key
parameters of performance do not change significantly from one year to the next, it is
not necessary to modify the corresponding indicators as frequently as in performance
management dashboards.
1.3.3. Summary
The main characteristics of the two types of dashboard are summarised in table 5.3:
[8] The literature has focused on the question of how many indicators can be monitored by an operational manager, i.e. in the context of what we have called a performance management dashboard.
Evaluation of results: systematic evaluation of the achievement of objectives and of progress on action plans (performance management dashboard); comparison with preset targets or analysis of trends (panoramic dashboard).
In both cases the objective ultimately is to make useful decisions. But the locus and
means of making decisions are different, which results in two types of indicator that we
feel ought to be placed in two different dashboards.
This typology has not been validated academically. It is somewhat similar to the
typology of R. Simons who, based on an empirical study of the way performance
measurement systems are used by managers in contexts of major strategic change,
distinguishes two modes of using performance measurement systems: one use that he
calls “diagnostic” and another that he calls “interactive” (Simons, 1995).
Certain elements of Simons’ typology are the same as in the typology that we have just
presented, notably the respective roles of managers and controllers and whether or not
indicators are monitored systematically. However, in our opinion, the performance
management dashboard cannot be equated with an interactive use of dashboards as
defined by Simons. The interactive use is not aimed at mobilising people on priorities
with a view to energising the organisation, but rather at fostering dialogue all along the
hierarchical line on key strategic uncertainties, with a view to organisational knowledge
sharing and learning. Simons illustrates this kind of dialogue by the fact that a chief
executive can call up an operational manager who is located several hierarchical levels
away and ask him to explain an evolving situation (for example, he can call a sales
manager and ask him for his analysis of the sales growth of a given product to a certain
type of customer in his area, if the strategic uncertainty concerns changes in a specific
market). Moreover, according to Simons, any performance management system can be
used interactively. The time constraints of managers do not lead to a limit being placed
on the number of objectives and indicators (in order for an executive to call an
operational manager about one of his figures, he must have access to data with the
appropriate level of detail). Instead, a single subject is selected which corresponds to a
major strategic uncertainty. Contrary to the performance management dashboard, the
selection of priorities is not done collectively at the different levels; instead, the key
uncertainty is determined by the chief executive for the entire entity. Consequently,
detailed information is exchanged all along the hierarchical line, whereas for the
performance management dashboard, the corresponding objectives and indicators are
transformed from one level to another (cf. chapter 6).
Thus, in our view, the interactive use of performance management systems will lead to a
third type of dashboard: “the interactive dashboard”. As this type of use does not
influence dashboard construction, which is the subject of this chapter, we will not
develop this type of dashboard further here. [9]
The two main methods [10] for constructing dashboards are the BSC (Balanced Scorecard)
method and the OVAR method (in French: Objectifs, Variables d’Action,
Responsables). [11]
[9] We will nevertheless briefly touch on this type of dashboard again in chapter 6.
[10] There are other methods, notably in the field of quality management, but they are not designed for measuring the entire set of performance dimensions.
Although they have been the subject of various publications, [12] they have not been
defined in a very precise manner and may be subject to interpretation, even by their
developers, who in fact provide very little empirical information on their use. The
interpretation we will advance here is based on these publications, on the observation of
some practices and on the distinction between performance management dashboards
and panoramic dashboards that we have just presented. It is underpinned by the
following analyses: [13]
• The BSC method leads to the identification of around fifteen objectives. This
limit stems from the fact that in this method the objectives are represented in
the form of a “strategy map”. Given the small number of objectives, the BSC
method cannot be used to build a panoramic dashboard. Moreover, its authors
clearly emphasise the necessity of focusing on a few priorities. In our
opinion, therefore, it is more suited to the construction of performance
management dashboards. This does not contradict another of the method’s
recommendations, namely the organisation of objectives in four perspectives
(see below). Indeed, by focusing attention on certain performance levers, the
performance management dashboard moves away from a purely financial
measurement of performance. The idea of defining objectives in four
perspectives, which is only one element of the BSC method, can also be used
to clarify the construction of panoramic dashboards (see below).
• The OVAR method can be used to build both performance management
dashboards and panoramic dashboards. However, this leads to very different
definitions of the elements that make up this method (objectives and critical
performance variables) and rules for selecting them. In order to avoid
confusion, we will reserve the term OVAR for the construction of
performance management dashboards and then borrow elements from this
method and adapt them for the construction of panoramic dashboards.
In the remainder of this chapter, we will describe how the BSC and OVAR methods are
applied to the construction of performance management dashboards (section 2) and
present a method for building panoramic dashboards that we will call OOAA, which is
the fruit of borrowings from and adaptation of the BSC and OVAR methods (section 3).
As stated above, we will present here our interpretation of the BSC and OVAR
methods.
[11] These three terms will be translated here as: objectives (O), critical performance variables (CPV) and managers responsible (R).
[12] Kaplan and Norton, 1998, 2001, 2004 for the BSC; Fiol et al., 2004 for OVAR.
[13] This preamble is mainly addressed to readers who already have some knowledge of these methods. We advise others to go directly to the description of the methods and to come back to this introduction only if the relation between method and type of dashboard seems problematic.
[14] In this chapter we will confine ourselves to the construction of dashboards for an entity, without looking at how dashboards are coordinated between hierarchical levels. That question will be explored in chapter 6.
2.1. Applying the BSC method to the construction of performance
management dashboards
The BSC method conceived by Robert Kaplan and David Norton is based on the
construction of a strategy map that represents the key objectives of the company. They
are organised into four perspectives that are linked to each other through cause-and-effect
relationships. The objectives are then translated into indicators. We will first
examine how the strategy map is drawn and then how the indicators are determined.
the entity’s value chain and identify the key objectives of each link in the chain. [15]
• The “learning and growth” perspective concerns the key resources that are
needed to attain the objectives of the other three perspectives. According to
Kaplan and Norton, these resources can be categorised in three groups:
human capital (competencies, training, knowledge), information capital
(systems, databases, networks) and organisation capital (culture, leadership,
teamwork). Kaplan and Norton also propose another classification: staff
motivation, information and training; strategic competencies; strategic
technologies; and climate for action. In this perspective we find indicators
such as employee turnover, the number of suggestions made by employees,
training hours, the availability of information, etc.
It should be emphasised that this representation in four perspectives can be adapted by
the company, either by adding other perspectives (for example a perspective relating to
environmental performance), or by modifying the perspectives to take into account the
nature of the organisation’s stakeholders. The financial perspective can be enlarged to
encompass stakeholders to which the company is accountable. For a municipality, for
example, we can replace the financial perspective with a perspective that corresponds to
the question: “How are we perceived by voters?” The customer perspective could be
replaced by a perspective to include the constraints in terms of financial resources that
the municipality has to cope with. [16]
[15] Note that Kaplan and Norton propose generic representations of the value chain.
[16] This perspective will be of a financial nature as it concerns financial resources, but not in the traditional sense of the BSC financial perspective.
Figure 5.1 Generic representation of a strategy map
[Figure: a generic strategy map organised by perspective (Finance, Customers, Learning & growth), with placeholder objectives such as “Grow…”, “Reduce…”, “Improve…” and “Develop…” linked across the perspectives.]
Drawing the strategy map is a challenge because a choice has to be made between the
necessity of drawing a large number of arrows in order to truly understand the model –
which may make it hard to read – and an objective of communication which requires
simplifying it, sometimes to the point of removing all the arrows. In practice, companies
sometimes produce several versions of the same strategy map, depending on the
intended audience and the communication objectives.
Boxed text 5.2
Example
At an IT services company, the short formulation of an objective for big
accounts and companies (customer perspective) might be the following: “I want
to reduce my costs, the complexity of my information systems and my risk.”
The following detailed definition may be added to the title of the objective: “I
want an IS provider who helps me to increase my financial results, who
guarantees a quick return on investment and who reduces both my operational
costs and my IT costs. I want an IT provider who offers end-to-end solutions
and fast implementation, leading to a reduction in the complexity of my
information systems and my risks so that I can concentrate on my core business.”
Figure 5.2 Fictitious strategy map of an airline from a former Soviet bloc country
[Figure: at the financial level, “Increase operating margins”, “Increase turnover by widening the customer base” and “Increase turnover by building customer loyalty”, supported by customer retention; at the customer level, business customers’ expectations (“I want on-time flights”, “I want safe and reliable flights”, “I want new and innovative services”); at the internal process level, “Extend geographic coverage (code sharing with alliance)”, “Speed up service at the terminal”, “Develop yield management systems”, “Deepen customer intimacy” and “Develop effective maintenance processes”; and, in the learning and growth perspective, “Develop a customer-oriented culture”, “Improve employees’ professional competencies”, “Increase motivation through performance-based incentive schemes” and “Develop information systems for operations”.]
BSC and modelling
The BSC method is thus a way of modelling performance based on a generic causal
model that links different dimensions of performance. This model is adapted to each
organisation to create its own specific model, represented by the strategy map.
The different perspectives of the generic causal model are linked by cause-and-effect
relationships, where final performance is the satisfaction of shareholder expectations
(cf. figure 5.3).
It is not therefore a totally open model that would assign equal weight to the different
stakeholders. [18]
[18] If used for organisations that are not companies or for functions within a company, the generic model has to be adapted.
Method for designing an indicator
In practice, the first five criteria are the most important in the search for suitable
indicators. One method consists in gathering a group of four or five people and
brainstorming to make a list of possible indicators for each objective. A score of 1 to 5
(the best score being 5) is assigned to each indicator for each of the five criteria, a total
score is calculated by summing up these scores and the indicator(s) with the highest
scores are selected.
Example row: “Number of new services compared to the previous installation” scores 2, 3, 4, 3 and 1 on the five criteria, for a total score of 13.
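The scoring procedure described above can be sketched in a few lines. The helper name and the two extra candidate indicators are invented for illustration; the first row reproduces the example scores:

```python
def select_indicators(scores: dict, top_n: int = 1):
    """Total the 1-to-5 scores given to each candidate indicator on the
    five criteria and return the top-scoring candidate(s)."""
    totals = {name: sum(marks) for name, marks in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Illustrative candidates for one objective (scores per the five criteria).
candidates = {
    "Number of new services compared to the previous installation": [2, 3, 4, 3, 1],
    "Customer-reported incidents per month": [4, 4, 3, 5, 4],
    "Share of projects delivered on time": [5, 3, 4, 4, 3],
}

print(select_indicators(candidates, top_n=1))
# -> [('Customer-reported incidents per month', 20)]
```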
2.2. Applying the OVAR method to the construction of performance
management dashboards
As stated above (1.4), the OVAR method can be used in building both performance
management dashboards and panoramic dashboards, though it takes a very different
form in the two cases. For the sake of clarity, we will only use the term OVAR for the
construction of performance management dashboards.
The acronym OVAR stands for Objectifs (O), Variables d’Action (VA) and
Responsables (R) which we will translate as objectives (O), critical performance
variables (CPV) and managers responsible (R). The method is based on the construction
of an “O/CPV grid” where objectives intersect with critical performance variables, and
then a more complete grid which integrates the determination of the managers
responsible (R). The second grid is useful for coordinating a system of dashboards from
different hierarchical levels. We will examine this second grid in chapter 6.
[19] For the construction of panoramic dashboards we will use the notion of action areas. CPVs are fewer in number than action areas because they concentrate on priorities.
Table 5.5 Generic O/CPV grid
Objective 1 Objective 2 Objective 3 Objective 4
CPV 1 x x
CPV 2 x x
CPV 3 x
CPV 4 x x
CPV 5 x x
CPV 6 x
CPV 7 x x
CPV 8 x x x
CPV 9 x x
CPV 10 x
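An O/CPV grid such as table 5.5 can be held as a simple mapping from CPVs to the objectives they act on, which also makes it easy to check that no objective is left without a lever. The grid below is a smaller illustrative sketch, since the exact positions of the x's in table 5.5 are not reproduced here:

```python
# Illustrative O/CPV grid: each CPV maps to the objectives it acts on
# (the x's of the grid). Names are kept generic, as in the table.
grid = {
    "CPV 1": {"Objective 1", "Objective 3"},
    "CPV 2": {"Objective 2", "Objective 4"},
    "CPV 3": {"Objective 1"},
    "CPV 4": {"Objective 3", "Objective 4"},
}

objectives = {"Objective 1", "Objective 2", "Objective 3", "Objective 4"}

# Consistency checks: every objective should be supported by at least one
# CPV, and every CPV should contribute to at least one objective.
covered = set().union(*grid.values())
uncovered_objectives = objectives - covered
idle_cpvs = [cpv for cpv, objs in grid.items() if not objs]

print("uncovered objectives:", sorted(uncovered_objectives))
print("idle CPVs:", idle_cpvs)
```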
2.2.2. Objectives
The following guidelines are recommended for defining objectives, some of which are
also found in the BSC method (Fiol, 2008; Fiol & Jordan, 2008): [20]
• An objective must contain a verb in the infinitive.
• An objective has to be expressed very precisely so that it will be understood in the
same way by all the different stakeholders.
• An objective must incorporate the notion of progress and hence a suitable verb such
as “grow”, “reduce”, “develop”, etc. “Maintain the number of customers” is not an
acceptable objective unless maintaining it constitutes a challenge given the current
situation. It is also suggested that the verb “optimise” be avoided, even though it is
sometimes convenient.
• A performance dimension constitutes an objective if, when one considers its purpose
(the “why” question), it cannot be linked to another performance dimension. If it can
be linked to another performance dimension then it will be a CPV. For example
“increase sales” can be an objective if this is one of the entity’s goals. However, it will
be a CPV if the entity’s objective is profitability and “increasing sales” is simply one
lever for achieving this objective. [21]
• An objective has to be aligned with the strategy of the entity. Specifically, using the
terms defined at the beginning of this chapter (1.2.2), an objective must correspond to
a temporary priority linked to a change in strategic positioning, to the current business
context, to specific internal difficulties or to particularly ambitious objectives of
strategic positioning (and not to ongoing elements relating to the specific goals of the
entity, the performance model of the sector or the organisation’s strategic positioning).
[20] Fiol, M. & Jordan, H. (2008), Formuler les objectifs d’une grille OVAR, teaching material document, HEC; Fiol, M. (2008), La démarche OVAR au service de l’élaboration d’un projet commun au sein d’une équipe, teaching material document, HEC.
[21] We will see in chapter 6 that an objective of an entity can be connected to another performance dimension of the organisation, but at a higher hierarchical level. For example, increasing sales figures may be an objective for the sales department linked to a profitability objective at the level of the company.
• The timeframe considered in setting these priorities is usually one year.
• The objective must be formulated in such a way that it is possible to measure its
achievement.
• It is often useful to set a target value in specifying the objective. Different CPVs may
be required for different target values. For example, if the objective is to increase
market share, the CPVs will not be the same if the target is to increase market share by
10% or to double it.
• The following guidelines are used in selecting from among all the objectives that meet
the preceding criteria:
• The number of objectives should be between four and six.
• If the objectives are too numerous, the choice should correspond to strategic priorities,
on one hand, and to those whose achievement is considered particularly difficult for
the time horizon under consideration, on the other.
• Objectives which partially or fully overlap must be avoided.
• Example:
- O1: increase operating income,
- O2: reduce overheads.
Either objective O1 is too broad for the entity and it is preferable to keep
objective O2 which is more focused, or O2 is actually a CPV of objective O1.
• Following the above guidelines often leads to no objective or CPV being chosen for
human resources. If these resources are considered strategic, it may be advisable to insist
that one objective be dedicated to human resources.
• In general, it is important to make sure that all of the entity’s stakeholders have been
taken into consideration (including employees), even if ultimately only some of them are
retained, since with four to six objectives it is not always possible to cover all the
stakeholders.
Since the time horizon used for determining objectives is annual, some CPVs may
resemble action plans, i.e. a set of organised and time-bound actions, which are
therefore more precise than the CPV (for example, to train 100% of the sales team in
new technologies every twelve months). We will see that the notions of CPV and action
plan are more distinct in panoramic dashboards owing to the ongoing nature of the
performance variables (which we will call “action areas” in that framework).
CPV 1 x
CPV 2 x
CPV 3 x
CPV 4 x
CPV 5 x
CPV 6 x
CPV 7 x
CPV 8 x
CPV 9 x
CPV 10 x
• Two objectives with Xs that overlap (cf. table 5.6). In this case, if objective 1 is
achieved, then objective 3 will be achieved automatically. The two objectives
are not independent. Either objective 3 is in fact a CPV of objective 1 or the
thinking on CPVs is incomplete and a CPV that is specific to objective 3 has
yet to be found.
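These grid checks lend themselves to a simple mechanical sketch. The short Python example below is our own illustration, not part of the OVAR method; the grid contents and objective names are invented. It represents a grid as a mapping from each CPV to the objectives it carries an x for, and flags an objective with no CPV as well as two objectives whose columns of x's coincide (and which are therefore not independent).

```python
# Illustrative sketch only (not part of the OVAR method): mechanically
# checking an O/CPV grid for the anomalies discussed in the text.
# Grid contents and objective names below are invented.

def grid_anomalies(grid, objectives):
    """grid maps each CPV to the set of objectives it carries an x for."""
    problems = []
    # An objective with no CPV acting on it: the thinking on CPVs is incomplete.
    for obj in objectives:
        if not any(obj in marked for marked in grid.values()):
            problems.append(f"no CPV acts on {obj}")
    # Two objectives marked by exactly the same CPVs are not independent:
    # one of them may in fact be a CPV of the other.
    for i, a in enumerate(objectives):
        for b in objectives[i + 1:]:
            col_a = {cpv for cpv, marked in grid.items() if a in marked}
            col_b = {cpv for cpv, marked in grid.items() if b in marked}
            if col_a and col_a == col_b:
                problems.append(f"{a} and {b} are covered by the same CPVs")
    return problems

# Hypothetical grid in which O1 and O3 overlap completely and O4 is uncovered.
grid = {
    "CPV 1": {"O1", "O3"},
    "CPV 2": {"O2"},
    "CPV 3": {"O1", "O3"},
}
print(grid_anomalies(grid, ["O1", "O2", "O3", "O4"]))
```

In practice, of course, the value of the exercise lies in the discussion that the grid provokes, not in the mechanical check itself.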
CPV 1 x
…
CPV 3 x x
CPV 4 x
…
CPV 7 x x
CPV 8
in-store services in order to enable one-stop shopping, etc.
In addition, the company has expanded its store formats with: Monop,
Dailymonop, Beauty Monop, etc.
Current development needs to be continued and consolidated. Having convened
to examine the priorities for next year, the executive team points out that
although store formats and points of sale have developed as planned, there have
been some setbacks:
• There is a significant delay in attaining return on investment objectives, mainly
owing to new stores opening too slowly. The objective of “increasing ROCE
from X% to Y%” has therefore been decided.
• Given the past positioning of the company on low-cost products, the upmarket
urban positioning of the flagship brand still remains to be consolidated so as to
assert its differentiation with respect to competitors. This objective therefore
remains one of the company’s priorities.
• The company is not perceived as being at the leading edge in terms of
sustainable development, in spite of numerous campaigns carried out over the
past two years. The executive team has translated this into the objective of
“improving the credibility of the company’s sustainable development actions”.
• Finally, the executive team perceives increasing difficulties in opening new
retail locations. Consequently it insists on the priority of “continuing the
diversification of store formats at the same pace”.
These objectives were then used to identify CPVs and build the following
O/CPV grid:
Table 5.8 O/CPV grid for a company inspired by Monoprix

Objectives (columns):
- O1: Increase ROCE from X% to Y%
- O2: Finalise the upmarket urban positioning of the Monoprix brand
- O3: Improve the credibility of the company’s sustainable development actions
- O4: Continue the diversification of store types at the same pace

CPVs (rows), with an X for each objective on which they act:
Accelerate the achievement of profitability objectives for the new stores        X
Find and convert new locations                                                   X X
Develop domestic help services22                                                 X X
Improve perceptions of the retail brand                                          X X
Reposition the Beauty Monop brand toward the high end of the market              X X
Develop fast-food products                                                       X
Improve the quality of fruit and vegetables and develop the organic food range   X X
Integrate the warehouses of the various store formats                            X X
Train cashiers in customer reception and relations                               X X
Train buyers and department managers in sustainable development                  X X
Communicate about actions to promote diversity and equal opportunities           X
2.2.5. From objectives to indicators
Indicators have to be determined for each objective and each CPV. Indicators are
determined according to the same principles as outlined for the BSC.
When CPVs are similar to action plans, the determination of indicators is generally
straightforward. It is not always necessary to monitor these action plans with an
indicator that is computed automatically by the IT system. For example, if the CPV is
“carry out a quarterly audit of the quantitative and qualitative composition of inventory”
the indicator is “quarterly audit carried out” and it has two values, “yes” or “no”. This
indicator can be monitored without the help of the IT system.
As we have presented them, the two methods share the objective of building
performance management dashboards. The main differences are the following:24
• The tool for ensuring the coherence of the performance model is different: a
grid for the OVAR method and a strategy map for the BSC method.
• These two tools lead to different representations in terms of the form of the
strategic priorities.
• The BSC method provides a more structured framework for achieving the
objective of balancing the indicators.
• The preferred time horizon for building a BSC is the medium term. It is
therefore particularly suited to dynamising the organisation on priorities
relating to strategic change. The preferred time horizon for the OVAR
method is annual. It is therefore particularly suited to dynamising the
organisation on current priorities.
We believe that, with a few modifications, the OVAR method of setting out objectives,
i.e. by building a grid of intersecting objectives and CPVs, can be used to build
panoramic dashboards. These alterations are mainly linked to the fact that the purpose
and the way of using this dashboard allow for a greater number of indicators than for the
performance management dashboard.
In addition, taking inspiration from the BSC method, we recommend that objectives be
related to the expectations of shareholders and customers and that action areas
correspond to key internal processes (which can be determined using a representation of
the value chain) and to key resources from the human and information systems point of
view. Thus, the panoramic dashboard construction proposed is based on a combination
of the BSC and OVAR methods.
The result is the following method which we will call OOAA for Ongoing Objectives
and Action Areas:
• They can be greater in number.
• They correspond to the more perennial performance levers: key elements in the
business model or important performance factors with respect to the strategic
position adopted (cf. 1.2.2).
• Action areas (AA) are areas where it is considered important to make improvements
in order to reach objectives, but they are also areas where it is important that there
be no deviation without organising specific corrective action or setting an
improvement objective.
• For action areas that concern processes, one can use a representation of the
value chain with the double objective of identifying key elements in the chain
for achieving objectives and not overlooking important elements in the
business plan.
• In chapter 6 we will see that it is possible to limit the number of action areas by
delegating the monitoring of some of them to a lower level in the
organisation.
The method described in section 2.1 also applies to the indicators of panoramic
dashboards. The choice of indicators is even more critical here because dashboard
monitoring is done by exception and because the formulation of the objective or action
area that led to the choice of an indicator is less visible. While formulations and
indicators are largely inseparable in performance management dashboards, in panoramic
dashboards what is analysed are the values produced for the indicators.
It should be kept in mind that setting targets is essential because the principle at work
here is control by exception. Unlike with performance management dashboards, targets
are not set during the process of building the grid, because these dashboards have a
longer lifespan. In general, targets are set annually during the budgeting process. It is
not always necessary however to set targets using figures; in some cases it is enough to
monitor trends. Moreover, targets do not necessarily correspond to objectives that are
negotiated annually (and linked with incentive schemes). Instead they may be alert
levels that are set for longer timeframes (which are nevertheless still validated when the
budget is being built).
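As a rough illustration of control by exception (our own sketch, not a method from the book; indicator names and alert levels are invented), a panoramic dashboard can be filtered so that only indicators whose values cross their alert levels are brought to the manager's attention:

```python
# Illustrative sketch of control by exception (not from the book):
# only indicators whose values cross their alert levels are surfaced.
# Indicator names and alert levels are invented for the example.

def exceptions(indicators):
    """Return (name, value, alert_level) for each indicator in breach."""
    alerts = []
    for name, value, alert_level, direction in indicators:
        # "min": the value should stay above the alert level;
        # "max": the value should stay below it.
        breached = value < alert_level if direction == "min" else value > alert_level
        if breached:
            alerts.append((name, value, alert_level))
    return alerts

panoramic = [
    ("customer satisfaction index", 7.9, 7.5, "min"),
    ("inventory shortage rate (%)", 4.2, 3.0, "max"),
    ("energy consumption (kWh/m2)", 180, 200, "max"),
]
for name, value, level in exceptions(panoramic):
    print(f"ALERT {name}: {value} (alert level {level})")
```

Targets expressed as trends rather than figures would need a different comparison, but the exception principle is the same.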
Boxed text 5.6
Example
We will now return to our illustration of boxed text 5.5. Based on the elements
already provided, it is possible to build the following OO/AA grid.
OO/AA grid for a company inspired by Monoprix

Ongoing objectives (columns):
- Increase the return on investment
- Increase market share
- Reduce debt
- Build a “sustainable development” image
- Adapt the offering of services, store formats and brands so that it matches strategic positioning
- etc.

Action areas (rows, grouped by domain), with an × for each objective on which they act:

Attract customers and build customer loyalty
  Increase the number of services                                          × ×
  etc.

Organisation of points of sale
  Adjust staffing to customer flows                                        ×
  Reduce inventory shortages (shrinkage)                                   ×
  Reduce energy consumption                                                × ×
  Adjust checkout opening to customer flows                                ×
  etc.

Purchasing and logistics
  Lengthen supplier payment periods                                        ×
  Ensure the freshness of perishable foods                                 × ×
  Reduce the energy consumption of transport                               ×
  Ensure the traceability of products                                      ×
  Reduce the number of suppliers                                           ×
  Integrate the purchasing and logistics functions of the different
  store formats and brands                                                 × ×
  etc.

Human resources management
  Define and implement a suitable training plan                            × × ×
  Launch actions to promote diversity and equal opportunities and
  communicate about them                                                   ×
  Develop a job rotation policy                                            × ×
  etc.

New stores
  Find new locations                                                       × × ×
  Reduce time taken to open new locations                                  × × ×
  Ensure that the positioning of the new …                                 × × ×
This example illustrates the formulation of objectives and action areas and the
differences with the O/CPV grid. For a textbook example it is more difficult than with
the O/CPV grid to build an exhaustive grid and to show that action areas stem from
choices, but both points are important in practice.
4. Further issues
The principle of coherence between indicators and strategy is not contested. Its practical
implementation, however, is more problematic. Publications on the BSC and OVAR
methods assert that using these methods ensures this coherence in several ways:
• by providing a framework for translating strategy into indicators;
• by providing a structure to ensure coherence between indicators and strategy;
• by making it possible to represent strategy.
We will discuss these different assertions one by one.
Moreover, we will show that the most recent book by Kaplan and Norton moves away
from the BSC to talk about strategy, using the vision in four perspectives to represent
generic strategy types.
organisational strategy into objectives, performance levers and indicators. However, in
practice, determining indicators is just as much a way of formulating strategy as it is a
translation of it, whether we are dealing with a performance management dashboard or a
panoramic dashboard.
Strategy is often expressed in the form of a vague idea which allows a great deal of
room for interpretation. The clear expression of priority objectives and the causal links
between them is therefore a useful way of building a more precise representation of
strategy.
Some consultants ask each member of a unit’s executive management to construct his
or her own strategy map separately, providing them with methodological support. In
general, the results differ considerably from one manager to another, which reveals
inconsistencies in representations of strategy. The map is therefore a useful tool for bringing
these representations into line with each other – a way to reach consensus on strategy.
It is therefore advisable to use dashboard construction methods to get different
managers to discuss and work toward building consensus on a common representation.
In the other methods this role is played by the O/CPV or OO/AA grids.
The link between strategy and performance management is therefore not sequential.
Dashboards are not a translation or manifestation of strategy. It seems more appropriate
to consider the construction of dashboards as participating in the formulation of
strategy.
Consequently, it is not necessary to have an explicit strategy in order to make use of the
methods presented in this chapter.
the priority objectives that senior management will concentrate on over a one-year
timeframe. As CPVs are a means of reaching objectives in the relatively short term, they
do not constitute elements of company strategy.
In the BSC method (performance management dashboard), the strategy map emphasises
certain strategic priorities. It cannot express the entire customer value proposition and
elements of the business model which constitute the performance levers of this
proposition. Given the fact that objectives are expressed in several perspectives and that
the timeframe is implicitly longer than in the OVAR method, the objective is to
highlight the strategic coherence of these priorities, i.e. the entire set of objectives and
indicators. Still, it is only an extract of the organisation’s strategy: elements of strategic
change and not stable elements.
In the method for building a panoramic dashboard, the objective is to control the
attainment of strategic positioning and the business model. Combining the OO/AA grid
with the conclusions of a strategic analysis should enable the company to select relevant
objectives and action areas from a strategic point of view and validate the coherence of
this strategy.
Figure 5.4 Customer value propositions and generic strategy types

                          Product/service attributes   Related services   Image
Operational excellence    ≠ ≠ ≠ ≠                      √ √                ≠
Customer intimacy         √ √ √ √                      ≠ ≠                ≠
Product superiority       √ √ ≠ ≠                      √ √                ≠
In the same way, they associate each generic strategy type with elements in the value
chain for which the level of performance demanded is standard and those for which
performance must be high (cf. figure 5.5).
This presentation of generic strategy types cannot be equated with the BSC method because
it is not aimed at building dashboards. It uses the analytical framework of the BSC as a tool
for explicating strategies. On the other hand, it can structure the process of dashboard
building by serving as a guide for determining objectives. The BSC method can also be
applied without any reference to these generic strategy types. In our opinion, the association
that Kaplan and Norton make between BSC and strategy actually increases confusion and
wrongly reinforces the idea that the BSC method is a method for translating strategy into
indicators.
4.1.5. Summary
Ultimately, these methods are frameworks for building performance models rather than
tools for translating strategy into indicators. They respond to a key challenge that is
poorly handled in practice: the modelling of performance in order to ensure coherence
between indicators and the company’s overall goals. Indeed, Ittner and Larcker (2003)
have shown that most of the indicators monitored in companies are not based on the
identification of causal links.
The representation of strategy as objectives linked by cause-and-effect relationships is
but a tool that must be combined with the results of strategic thinking. A precise and
coherent representation can emerge from this combination, but the way this combination
ought to be organised in practice remains to be defined.
We stated in the introduction to section 2 that the BSC and OVAR methods were
imprecisely defined. We will illustrate this viewpoint here and draw some conclusions
about their implementation.
Concerning the BSC method, several elements support this assertion:
• Kaplan and Norton’s books give numerous examples of strategy maps and
indicators, but they do not give any indications about the process of building
these maps (people concerned, stages, etc.);
• moreover, they are very vague about the way the BSC should be used: the
operational managers concerned, the places where it should be used, the types
of decision envisaged, timeframes, etc.;
• the recommendations that we have made in this chapter concerning the
formulation of objectives in the “customer perspective”, taking the point of
view of a customer, correspond to actual practices that we have observed.
Kaplan and Norton’s recommendations are not as precise;
• the importance of the causal links between objectives and the way of
representing them vary significantly from one book to the next without any
explanation offered for these changes;
• Kaplan and Norton attribute several functions to the method without specifying
that certain elements of the method have to be different depending on the
objective pursued. For example, the goal of the method may be to align a
management team on priorities and communicate these priorities throughout
the organisation (Kaplan and Norton speak of communicating the strategy).
Experience shows that this is not without consequences in terms of the way
the strategy map is presented. In fact, while it is possible to share a strategy
map made up of objectives written in text bubbles that are connected with
arrows, such a map is not comprehensible for those who are not familiar with
this type of representation. In practice, it is common to find a detailed
strategy map as well as a simplified strategy map without any arrows. It
would be useful to have two different names and different sets of construction
principles for these two types of representation – something which Kaplan and
Norton do not provide.
Concerning the OVAR method, the designers of the method use the O/CPV grid for
purposes other than building dashboards (Fiol, 2008) – for example, to build cohesion
within a management team around shared strategic objectives. In this case they recommend:
• that the executive team jointly build the O/CPV grid;
• that a three-year time horizon be adopted in determining priority objectives;
• that the “manager responsible” (R) part be used to involve the different
members of management in the pursuit of the ambitions defined by objectives
and CPVs: once these are determined, each participant is asked which CPVs
he or she wants to take responsibility for.
This use differs from that presented in point 2.2, notably in the time horizon adopted
which results in different objectives being selected. In addition, it does not necessarily
lead to the construction of a dashboard: the construction and monitoring of indicators
can be delegated to different managers.
Equating this particular use to the OVAR method adds to the fuzziness surrounding the
purpose of this method.
These illustrations show that the methods are incomplete. In a certain way, this
incompleteness supports the idea we subscribe to, namely that there is no single “good
method”. Moreover, it should facilitate the adaptation of methods to a particular
objective or context and, ultimately, the “appropriation” of dashboards by their users.
Unfortunately, though, this is not how they are presented. In the case of the BSC, in
addition to the fact that the construction method is unknown, the tool is presented as a
universal solution for improving performance. This runs counter to the idea that the tool
needs to be adapted. Moreover, if some more precise objectives are formulated, such as
communication and the deployment of strategy, the relationship between the tool and
the achievement of these objectives is not explained. The lack of comprehension of
these connections prevents users from appropriating the method: they have little choice
but to take its effects on trust and to hire a consultant to implement the method in their
company.
As for OVAR, there is a definite effort on the part of the authors to describe the
different uses and aims of the method, though using the same name for all the variants.
As we have seen, each aim requires some adaptation of the method, both in its
principles and in the definition of terms. The lack of precision in terminology
contributes to the persistence of conceptual approximations on the question of
dashboards, their construction and their use, which is not conducive to the appropriation
of the different methods that come under the “OVAR” label. It should be noted,
moreover, that there is a lack of literature on how this method is used in practice.
In conclusion, we have to be cautious concerning the application of these methods
because knowledge on these subjects is very meagre. To advance in this area, we feel
that it is necessary both to pursue the work of conceptualisation and to increase the
number of empirical studies on dashboards actually in use in companies, which for the
moment are rather scant.
Conclusion
In this chapter we have presented the principles assigned to dashboards and methods
that can be used to build them for an entity taken separately from its organisation. In an
area where knowledge has not been stabilised and methods are often presented as
universal solutions, we have attempted to provide elements to improve the possibility of
organisational members appropriating the concepts and methods associated with
dashboards, their construction and their use. This led us to introduce two goals for
dashboards (dynamisation and control of the organisation), to distinguish between two
types of dashboard (performance management and panoramic) and associated methods
and to further define the methods presented in the literature (BSC and OVAR). The
following chapter will deal with the question of the coordination of the dashboards of
different entities in an organisation.