
Chapter 5

Dashboards

Table of contents

1. Objectives and general principles of dashboards
   1.1. Objectives and principles generally assigned to dashboards
   1.2. Analysis of these objectives and principles
   1.3. Two types of dashboard: performance management dashboards and panoramic dashboards
   1.4. From principles to methods
2. Building performance management dashboards
   2.1. Applying the BSC method to the construction of performance management dashboards
   2.2. Applying the OVAR method to the construction of performance management dashboards
   2.3. Comparison of the BSC and OVAR methods
3. Building “panoramic” dashboards
   3.1. Determining ongoing objectives (OO)
   3.2. Determining action areas
   3.3. Determining indicators
   3.4. Setting targets
4. Further issues
   4.1. The relationship between dashboards and strategy
   4.2. Lack of precision in the methods
Conclusion

Introduction

In the preceding chapters we presented methods for building performance measurement systems based on financial indicators. It is, in large part, the shortcomings of these indicators that spurred the development of non-financial indicators and the grouping together of both types of Key Performance Indicators (KPIs) in dashboards.
In this chapter we will focus on dashboards at the level of the organisation as a whole,
but the principles for building them can generally be applied to any entity of the
organisation taken separately. This does not mean however that the same types of
indicators will be chosen at every level of the organisation: organisation-level and local-
level dashboards have to be differentiated and coordinated. In chapter 6 we will deal
more specifically with the question of the coordination of different dashboards, in other
words the construction of a coherent system of dashboards.
There are several methods for constructing dashboards, the best known being the
Balanced Scorecard (BSC) and the OVAR methods. Unfortunately, they contain
significant grey areas which make them difficult to implement as they stand; some
clarifications and alterations are therefore necessary.
The first section of this chapter attempts to clarify the objectives of and principles for
building dashboards. It presents the arguments which are generally made in favour of
their development, identifying the objectives they are assigned and their associated
construction principles. Our analysis of these general orientations will allow us to
highlight areas of imprecision or even contradiction, as well as two implicit goals:
controlling the organisation and focusing behaviours on the achievement of targeted
priorities. As these two goals give rise to appreciably different principles for building
dashboards, we propose to distinguish between two types of tool: the panoramic
dashboard and the performance management dashboard.
In the second section we will propose methods for building a performance management
dashboard that are based on the BSC and OVAR [1] methods. The third section puts
forward a method for building a panoramic dashboard.
The fourth and last section deals with further issues: the relationship between
dashboards and strategy, how precisely the methods are defined, indicators relating to
the economic and business context, and the impact of representing performance through
indicators.

1. Objectives and general principles of dashboards

As we saw in the general introduction to this book, most of the books which present
management tools adopt a descriptive approach and focus on ways of building such
tools. They deal with the “how” of the matter and describe methods of construction. All
too often this ignores the question of “why”, i.e. identifying and clarifying the
objectives and principles that guide the choice of operational solutions. We shall
attempt to clarify these fundamental points here.

[1] Existing descriptions of these methods are neither sufficiently conceptualised nor sufficiently detailed; any reference to either of them is in fact an interpretation.

1.1. Objectives and principles generally assigned to dashboards

1.1.1. Objectives
A dashboard is basically defined as a set of indicators which are not exclusively
financial in nature (KPIs). It can take various forms, but is presented generically in the
form of a list of indicators with various values shown for these indicators (see table 5.1
and the examples at the end of the chapter).

Table 5.1 Generic example of a dashboard


Indicator | Actual n-1 | Planned | Actual n | Variance / n-1 | Variance / planned
Volume (activity) | | | | |
Lead time | | | | |
Quality index | | | | |
Customer satisfaction index | | | | |
Turnover | | | | |
Etc. | | | | |
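The variance columns in Table 5.1 amount to two simple subtractions per indicator. A minimal sketch in Python of how such a dashboard row could be computed; the indicator names and figures below are hypothetical, not taken from any company:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One dashboard row: an indicator with its reference and actual values."""
    name: str
    actual_prev: float   # Actual n-1
    planned: float       # Planned
    actual: float        # Actual n

    def variance_vs_prev(self) -> float:
        """Variance against the previous period (Actual n minus Actual n-1)."""
        return self.actual - self.actual_prev

    def variance_vs_plan(self) -> float:
        """Variance against the plan (Actual n minus Planned)."""
        return self.actual - self.planned

# A hypothetical dashboard using the kinds of KPIs listed in Table 5.1
dashboard = [
    Indicator("Volume (units)",            actual_prev=10_500, planned=11_000, actual=11_200),
    Indicator("Lead time (days)",          actual_prev=12.0,   planned=10.0,   actual=11.5),
    Indicator("Customer satisfaction (%)", actual_prev=78.0,   planned=82.0,   actual=80.0),
]

for ind in dashboard:
    print(f"{ind.name:28} vs n-1 = {ind.variance_vs_prev():+8.1f}  vs plan = {ind.variance_vs_plan():+8.1f}")
```

In practice the interpretation of the sign depends on the indicator: a positive variance is favourable for volume but unfavourable for lead time.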

It is hard to date the origin of this tool precisely because the answer will depend on
whether we are interested in company practices or normative descriptions of its
construction principles. The latter began to be developed in the 1980s in response to two
types of criticism:
• A theoretical criticism pointing out the shortcomings of financial indicators in
terms of relevance [2] (Johnson and Kaplan, 1987): their short-term orientation,
their focus on performance only for shareholders, and their limited capacity
for explanation (see chapter 3).
• Observations of the shortcomings of dashboard practices in companies,
emphasising other problems of relevance. The dashboard was seen as often
overloaded, slow, hard to read and therefore of limited use for managers.

Improving the relevance and usefulness of performance measurement gives rise to three
general objectives for building “good” dashboards:
• Do not limit measurement to a single performance dimension. Instead, provide
a more “balanced” representation of performance (objective 1).
• Orient measures to a longer-term representation of performance (objective 2).
• Facilitate decision making by managers (objective 3). [3]

[2] A measurement system is said to be relevant when it provides a manager with information that is aligned with the requirements of managing the performance of his business.
[3] We will see in the conclusion to the first part of the book, at the end of chapter 6, that when dashboards are designed by applying these objectives they have their own limitations with respect to financial measurement systems.

1.1.2. Principles
These objectives are translated into a set of commonly accepted principles for building
“relevant and useful” dashboards:
• KPIs must comprehensively cover the organisation’s entire set of objectives
and goals.
• They must focus both on outcomes as well as the performance levers that can
be used to achieve these outcomes.
• They must include both financial and non-financial indicators.
• The indicators chosen must be consistent with the organisation’s strategy.
• They must be limited in number.
• They must deliver their information quickly.
• They must be presented in a legible and vivid way.

Covering the organisation’s entire set of goals and objectives


In chapter 2 we saw that the performance of an organisation is usually multidimensional
because it can be defined with respect to several stakeholders who have different
expectations. Either the organisation’s goals are intrinsically multiple (as is the case for
a public hospital for example), or the final performance is mono-dimensional but
pursuing it requires the organisation to satisfy other stakeholders in the shorter term.
For a dashboard to be “balanced” (objective 1), the first requirement is therefore that the
set of indicators it contains must encompass the diversity of the organisation’s
objectives.

Indicators focusing both on desired outcomes and on performance levers


As we have seen in previous chapters, financial indicators are not only focused on a
single dimension of performance but also on desired outcomes. As aggregate measures,
they bring together numerous accounting items (different types of cost, revenues, assets,
liabilities) and gather them into a final indicator, which may be net income, operating
income, profitability, etc. depending on the situation. When used for future planning,
they characterise the “financial goals” that the company is striving for – the results that
it hopes to achieve.
However, if performance measurement only focuses on targeted results, it generates a
risk of “short-term bias”, as we saw in chapter 3. Decisions may have been taken that
are at odds with improving the company’s profitability (for example, a decision to
manufacture low-quality products) though the consequences of these decisions will not
yet have made an impact on financial results. Inversely, beneficial decisions may result
in a decline in financial results in the short term, as their impact will only be fully felt in
the long term and in a relatively diffuse way. This is the case, for example, of staff
training expenses and investment decisions.
Adding measures that focus on the “determinants” of these results, i.e. on the
performance levers which enable the company to achieve these results, allows the
performance horizon to be lengthened (objective 2). In our examples, indicators can be
designed for the quality of products manufactured (for example, the number of defective

parts in a manufacturing line), staff training initiatives (for example the average number
of training hours per employee) and the amount of investment (CAPEX [4], for example).
This is the second type of balance sought in a dashboard (objective 1): the balance
between results metrics and indicators associated with performance levers.
This principle also improves managerial decision-making (objective 3). The variety of
performance levers offers managers a richer view of performance which enables them to
produce a better quality analysis, as well as a more operational view. In the event of
poor results, a manager can determine whether the origin of the problem is in the area of
product quality or production time. These indicators also allow him to target his actions
more effectively on the causes of the problem; financial results cannot be directly acted
on.

Using indicators that are not exclusively financial


This recommendation is sometimes incorporated into the very definition of a dashboard,
owing to the great emphasis that has been placed on the necessity of balancing financial
indicators. In reality it is a construction principle that derives from the first two: once
we are no longer focusing exclusively on performance for shareholders, indicators may
lose their financial character (for example if we measure customer satisfaction by the
number of complaints). And as soon as we direct our attention to performance levers
and not only results, there again the financial dimension may fade away (as shown by
our example of the number of defective products).

Nevertheless, this principle also complements the first two principles. We might
imagine, for instance, though this scenario is not very realistic, that we are able to
measure performance vis-à-vis different stakeholders with exclusively financial
indicators. For example, a profitability ratio to measure performance with regard to
shareholders, sales volumes or marketing costs to measure performance with regard to
customers, and average salary as a performance indicator with regard to employees.
Likewise, an indicator for a performance lever can be financial. This is the case, for
example, of export sales figures when a company has chosen to focus on this
performance area with the aim of increasing its total sales figures.
This leads us to distinguish between two meanings of the word “financial”: an indicator
can be financial because it reflects a “financial point of view” or because it is expressed
in accounting values. The principle whereby non-financial indicators are included in a
dashboard aims not only to balance different points of view, but also to strengthen the
long-term vision. If we take another look at the example of export sales figures, it
certainly constitutes a lever which can be used to increase overall sales figures, but it
remains very general. If we take a look at the performance levers that might be used to
influence export sales figures, we may consider, for example, increasing the number of
overseas sales people. The corresponding indicator will then have lost its financial
character. Thus the further we move up the chain of causality towards upstream performance levers, the less financial the indicators become and the more performance tends to be captured in the long term, in keeping with the previous principle.
The non-financial character of indicators is also beneficial because it separates the
performance measure from the accounting metric which always takes a certain amount

[4] CAPEX: Capital Expenditure.

of time to produce. Certain non-financial indicators will be obtained more quickly,
which also contributes to objective 3.

Indicators that are consistent with organisational strategy


All of the methods emphasise the importance of strategy in building dashboards. We
will see that in the BSC method strategy is situated alongside the company’s vision at
the heart of the various performance measures. Strategy also underpins the OVAR
method.
A company that chooses to follow a low-cost strategy, for instance, will have
performance indicators that are substantially different from those of a company that has
chosen a differentiation strategy, because their performance levers are very different, or
at least the relative importance of each of them will be different in the two cases.
We will see later on that the concept of strategy can cover different ideas, which we will
have to distinguish as their scope is quite different.

A limited number of indicators


This widely accepted principle has two origins. As we pointed out above, an
examination of the dashboards that are actually used in companies often leads to a
diagnosis of “too many indicators”. On a more theoretical level, the methods of
analysing results based on accounting and financial models (see chapter 6), which at one
time used to predominate in company practices, based their legitimacy partly on their
pursuit of exhaustiveness. Thus even when a budget variance did not show a significant
difference, people were nevertheless told to calculate the sub-variances “anyway” just to
make sure that there were no phenomena of compensation. Financial variance analyses
can thus very quickly become bloated.
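The compensation phenomenon mentioned above can be made concrete with a small, hypothetical example: a total sales variance of zero may hide offsetting volume and price sub-variances. The decomposition convention used here (volume effect valued at budget price, price effect valued at actual quantity) is one common textbook choice, not the only possible one:

```python
def sales_subvariances(actual_qty, actual_price, budget_qty, budget_price):
    """Split a total sales variance into volume and price sub-variances.

    Convention: volume effect valued at budget price,
    price effect valued at actual quantity.
    """
    volume_var = (actual_qty - budget_qty) * budget_price
    price_var = (actual_price - budget_price) * actual_qty
    total_var = actual_qty * actual_price - budget_qty * budget_price
    # Sanity check: the sub-variances must sum back to the total
    assert abs(total_var - (volume_var + price_var)) < 1e-9
    return total_var, volume_var, price_var

# Hypothetical figures: the total variance is zero, yet the two
# sub-variances reveal offsetting volume and price effects.
total, vol, price = sales_subvariances(
    actual_qty=1_100, actual_price=10.0,   # sold more units...
    budget_qty=1_000, budget_price=11.0,   # ...but at a lower price
)
print(total, vol, price)  # 0.0 1100.0 -1100.0
```

This is exactly the situation that motivated the instruction to compute sub-variances "anyway": the aggregate figure alone would suggest that nothing happened.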
These observations of voluminous monitoring systems, combined with the time and
cognitive constraints of managers, gave rise to the principle that a good dashboard
should contain a limited number of indicators.
We will nevertheless see that this principle is open to discussion, especially when one
abandons the idea that all indicators are constantly followed by managers. It also
depends to a large extent on the managerial use for which the dashboard is intended.

Quick access to results


It is often emphasised that a manager needs to get information quickly. However,
financial indicators – derived from accounting figures – often arrive too late. One
objective for dashboards, therefore, is that they must be produced quickly. Some
companies have implemented “flashes” – highly concentrated dashboards which are
produced in the first few days following the end of a given month, thus long before the
accounting results are available.

Presenting indicators in a legible and vivid way


Just as the number of indicators can slow down a manager’s analysis and decision-
making processes, so can the way the information is presented in the dashboard. Tables
full of figures, however relevant their content may be, can sometimes be impenetrable
and do not facilitate the understanding of results.

On the other hand, the same results presented graphically with colour-coded information
enable the manager to immediately visualise positive areas and problematic areas. They
are more vivid and meaningful. The visual design of the dashboard can therefore
contribute to managerial decision making (objective 3).
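Such colour-coding is often implemented as a simple traffic-light rule. The sketch below is illustrative only; the `traffic_light` helper and its thresholds are assumptions, not part of any of the methods discussed in this chapter:

```python
def traffic_light(actual: float, target: float, tolerance: float = 0.05,
                  higher_is_better: bool = True) -> str:
    """Map an indicator to a colour code for visual dashboard display.

    green: on or beyond target; amber: within `tolerance` (relative) of it;
    red: worse than target by more than the tolerance.
    """
    # Normalise so that a positive gap always means "good"
    gap = (actual - target) / target if higher_is_better else (target - actual) / target
    if gap >= 0:
        return "green"
    return "amber" if gap >= -tolerance else "red"

print(traffic_light(95, 100))                         # 5% below a sales target
print(traffic_light(80, 100))                         # far below target
print(traffic_light(12, 10, higher_is_better=False))  # lead time 20% over target
```

The design choice worth noting is the `higher_is_better` flag: a dashboard mixes indicators where "more" is good (volume, satisfaction) with indicators where "less" is good (lead time, defects), and the colour rule must handle both.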
Table 5.2 summarises the connections between objectives and the principles generally
assigned to dashboards.

Table 5.2 Objectives and principles generally assigned to dashboards


Principle | Offer a balanced representation of performance | Orient towards a longer-term representation of performance | Facilitate decision making by managers
Cover the organisation’s entire set of goals and objectives | x | |
Focus BOTH on results and performance levers | x | x | x
Include BOTH financial and non-financial indicators | x | x | x
Contain indicators consistent with organisation strategy | x | x |
Quick access to results | | | x
Presented in a legible and vivid way | | | x

1.2. Analysis of these objectives and principles

These principles constitute the first level in explaining how to build a good dashboard.
They do however have certain limitations.

1.2.1. A contradiction between the multiplication and limitation of the number of indicators
An analysis of these principles reveals a contradiction between principles that tend to
increase the number of indicators and the recommendation to limit their number:
• Some of the principles (to cover the organisation’s entire set of goals, to
include indicators for results as well as for performance levers, to include
both financial and non-financial indicators) which are aimed at diversifying
the indicators, also tend, mechanically, to increase their number. The idea
implicit in these principles is that “everything must be controlled” and the
danger to be avoided is that of overlooking something, of a “gap” in
performance measurement, of non-exhaustiveness, or the incompleteness of
the indicators. As it would certainly be rather difficult in practice to formulate
an objective in terms of “exhaustiveness”, it is formulated more moderately
in terms of “balance”. But, fundamentally, the pitfall to avoid is that of
serious omissions (such as focusing only on the financial dimension).
• Other principles (limiting the number of indicators, alignment with strategy),
on the contrary, urge managers to focus the dashboard on particular

orientations, on priorities. Managers should select certain indicators because
they are more relevant than others. We will see in the following section that
“strategy” is a vague term and that there are several types of “particular
orientation”.
In our view, these two sets of principles reveal two distinct purposes – with
contradictory consequences – that are assigned to dashboards: to control all the
parameters of the business and to “energise” the organisation to achieve certain
priorities. Indeed, it seems rather difficult to keep an entity under control if only twenty
or so indicators are monitored, as certain methods (particularly the BSC method, see
below) would suggest. [5]
Moreover, the empirical literature shows that putting these principles into practice is not
a straightforward matter:
• The number of non-financial measures remains very high in company
dashboards (Mendoza & Bescos, 2000). This observation is consistent with
the difficulties encountered by controllers when they try to carry out the
instruction to redesign their dashboards to reduce the number of indicators.
They observe that although limiting the number of indicators to roughly
twenty does strongly focus the attention of managers, it is accompanied by
the risk of losing track of key parameters.
• It is observed, in practice, that management review meetings – even when
based on the use of dashboards including non-financial indicators – struggle
to reach decisions.

1.2.2. A vague concept of organisational strategy


The priorities that dashboards must focus on are meant to be aligned with “company
strategy”. In reality, a company may have particular strategic orientations for various
and heterogeneous reasons:
• Some of them may stem from the choice of the organisation’s goals (which can
be different from those of another organisation and hence particular). Two
companies running similar businesses may have different stakeholder
configurations. For example, two companies that produce and sell
champagne, one set up as a public limited company and the other as a
cooperative; or a private clinic and a public hospital.
• Other particularities correspond to the critical success factors (CSF) associated
with the business sector and therefore to the business model and to generic
performance levers in a given line of business – regardless of the strategic
positioning of the company. One CSF associated with the consulting business
model, for example, is the workload of the consultants. This performance
lever is independent of the specific strategies of a given consulting firm.
• Orientations may stem from the choice of a strategic position. Generic
strategies are examples of positioning: cost leadership, differentiation or the
generic types of strategy defined by Kaplan and Norton (cf. 4.1): customer
intimacy, operational excellence and product superiority. The strategic

[5] Unless we limit ourselves to strictly financial measures of performance, with their inherent shortcomings, particularly the fact that only the final result is controlled.

positioning of an organisation determines key performance levers and weighs
their relative importance. For example, cost control has more importance in a
low-cost strategy than in a strategy of differentiation through innovation.
There can be different strategic positions at different levels of the
organisation, even within a Strategic Business Unit (SBU), to take a specific
market into consideration for instance. The strategic positioning of different
geographical entities may be partially different to adapt to a specific
competitive environment or different customer expectations in different
countries.
• Finally, orientations may stem from priorities relating to temporary
circumstances, without any direct link to either the business model or the
current strategic position. They may be linked to:
• changes in strategic positioning: the company is then temporarily focused on
the transition to be carried out;
• specific economic or cyclical circumstances: during certain periods financial
markets are particularly attentive to financial structure, which may prompt the
company to focus on debt reduction. Or events in certain raw materials
markets may spur the company to diversify its suppliers;
• specific internal difficulties: if a decline in quality is observed, for instance, the
organisation may focus on turning this situation around;
• particularly high ambitions for certain objectives: it is the magnitude of the
target and not its nature which places a certain urgency on the priority here. [6]
Reducing the time-to-market for new products may be an ongoing objective
that is defined by the company’s strategic positioning, but if the target is to
halve the time and this target is considered particularly ambitious, it may
become an immediate priority.
Different types of orientation do not have the same degree of stability. Some of them are
quite stable (organisational goals, business model, strategic positioning), they
characterise the ongoing dimensions of the organisation’s performance. Others, on the
contrary, correspond to phases of crisis, transitions, changes; these are the more
ephemeral dimensions of performance, but also points of heightened vigilance.

1.2.3. Ill-defined guidelines for using performance measurement systems


The objectives and principles presented above only concern the choice of indicators and
do not deal with the process of using these measurement systems:
• Although the ultimate goal is to make decisions, this does not flow directly
from the measurement system. Decision making follows a process made up
of several stages: the observation of a problem, analysis, discussion and
decision. But the same information is not necessarily needed at each of these
different stages. In particular, given that the discussion stage brings together
people with busy schedules in a particularly short timeframe – that of
monthly performance reviews – it is a particularly dense moment which
requires great selectivity in the topics to be dealt with. On the other hand, this

[6] See the definitions of the terms “objective”, “target” and “targeted objective” in chapter 1.

selectivity does not necessarily apply to the phase of preparing these monthly
reviews or other occasions and places of analysis.
• Performance measurement may be used by different parties: controllers, entity
managers, operational managers in the entity and at other levels in the
organisation. They do not all have the same role and are not necessarily
involved at the same stages in the decision-making process.
The question of how measurement systems are used has been examined by Robert
Simons (1995) who distinguishes a diagnostic use from an interactive use (see below).
He includes the question of measurement system users in his typology, but not the other
two points that we have raised. Significantly, he considers that there is no link between
the construction of a measurement system and the way it is used. According to Simons,
every performance measurement system can be used either diagnostically or
interactively.
We believe, on the contrary, that the way measurement systems are built depends on how they are to be used, as we will explain in the rest of this chapter.

1.2.4. Conclusion
This discussion highlights the fact that measurement systems and consequently
dashboards serve two different and complementary purposes:
• To “energise” and focus the organisation on a few priorities. This goal can be
linked to the objective of “powering up” the organisation.
• To assist managers in controlling the organisation’s activities.
Depending on their purpose, dashboards will be built according to different principles
(what makes an indicator relevant and useful depends on its purpose) and will be used
in different ways. We propose therefore to distinguish between two types of dashboard
which we will call “performance management dashboards” and “panoramic
dashboards”.

1.3. Two types of dashboard: performance management dashboards and panoramic dashboards

Broadly speaking, it is possible to distinguish between these two types of dashboard in the following way:
• Performance management dashboards serve as a support to focus the attention
of senior executives and managers on a “few” priorities linked to company
strategy or associated with the current economic and business situation. They
structure the representations of these priorities and promote the convergence
of these representations among the different members of the executive team.
They are therefore built collectively and are reviewed quite frequently as
priorities evolve. The purpose of building these dashboards is to reach
agreement on priorities and on the corresponding targeted objectives and may
even extend to the definition of action plans. Monitoring these priorities is
done systematically and concerns both the capacity to attain targets and, if
need be, the implementation of action plans.
• Panoramic dashboards are based on a “complete” modelling of how
performance is formed in the organisation, sweeping across all the important

ongoing performance levers in order to keep them under control. The choice
of indicators is guided by the goals of the organisation, its strategic
positioning and its performance model. Building these dashboards entails the
identification of the most important stakeholders (not only shareholders and
customers) and their objectives. Intervention is done “by exception” – when a
variance or significant slippage is identified.

In the following pages we will describe these two types of dashboard in detail as well as
the associated ways of using them.

1.3.1. Performance management dashboards


The purpose of performance management dashboards is to energise and focus an entity
on a few priorities. For this, the executive team has to be closely involved in making
decisions and directing action concerning these priorities which consequently must be
examined regularly in monthly management meetings or performance reviews. This
explains why the number of topics dealt with (and hence ultimately the number of
indicators) must be limited. [7]
As these performance review meetings deal with the topics (or objectives) that are
deemed to be priorities as well as the review of the main variances in indicators of the
most frequently recurring elements of strategy or the business model (see below), we
estimate that between four and six topics will be examined during each meeting. Given
the fact that most areas do not require monthly tracking, the total number of objectives
monitored may be around fifteen and the number of corresponding indicators around
thirty.
Given these constraints, the choice of objectives has to be very selective and cannot
cover the company’s entire strategy but rather must focus on immediate current
priorities as we defined them in point 1.3: changes in strategic positioning, priorities
relating to the current economic and business context, specific internal difficulties, and
particularly ambitious elements of strategic positioning. Once the objectives have been
chosen, senior management will regularly get involved in following them, at the time of
periodic review meetings.
The performance management dashboard is used at two levels:
• Determining priorities during the process of building the dashboard: as
priorities change frequently, the performance management dashboard must be
modified substantially every year.
• Tracking the achievement of objectives for these priorities during monthly
performance reviews.
The objectives that structure the dashboard and clearly describe priorities must be built
collectively by the executive team with the goal of heightening the involvement of the
various participants and ensuring good coordination. The task of selecting indicators for
tracking the achievement of objectives can however be delegated.
Since objectives are priorities, they should be monitored systematically, along with
indicators and associated action plans, regardless of whether or not preset targets are
achieved. It is therefore possible to set the agenda for monthly performance reviews in
advance, according to the frequency and time horizons deemed relevant for each
objective.

7 And not, as is sometimes contended, owing to the analytical capacities of senior executives.
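As an illustration, setting the agenda in advance can be sketched in a few lines (the objectives and review frequencies below are invented for the example, not taken from the text):

```python
# Each priority objective carries a review frequency, expressed in months
# (1 = every monthly meeting, 3 = quarterly, 6 = twice a year).
objectives = {
    "Reduce structural costs": 1,
    "Increase market share": 3,
    "Enter market X": 6,
}

def agenda(month, objectives):
    """Return, in alphabetical order, the objectives due for review
    in a given month (1-12)."""
    return sorted(name for name, every in objectives.items() if month % every == 0)

print(agenda(3, objectives))  # monthly and quarterly items
print(agenda(6, objectives))  # all three objectives are due
```

The same schedule, built once a year when the dashboard is revised, fixes in advance which objectives each monthly review will cover.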
The monitoring of each objective can be assigned to a member of the executive team. It
will then be his responsibility to prepare the points to be discussed in the performance
review meeting and submit proposals for decisions to the executive team. The objective
of the meeting will clearly be to make these decisions.
It should be noted that with this system of monitoring and managing performance,
setting a target to reach for each objective is not fundamental because the management
reaction is not triggered by the comparison of progress with preset targets (contrary to
the panoramic dashboard, see below). In some cases, it is not even necessary to have
indicators to track the progress of action plans; simply focusing the executive team’s
attention on a given topic will result in the launch of action plans.
Given the temporary nature of these priorities, it is not always necessary to incorporate
the indicators in the organisation’s information systems to automatically generate
information; it depends on the associated IT costs compared to what it would cost to
compute the indicator manually for a temporary period.

1.3.2. Panoramic dashboards


The purpose of panoramic dashboards is to control key performance parameters. The
number of objectives is not as limited because subjects will only be dealt with “by
exception”.
Indicators enable managers to cover all of the organisation’s goals as well as the critical
success factors engendered by the organisation’s strategic positioning and business
model. Designing these indicators involves fewer choices and can therefore be done by
the controller upon approval of the executive team. The manager in charge of the entity
will be involved during the initial selection of indicators and again when targets are set
for certain indicators.
One of the factors allowing the number of indicators to be limited to a certain level is
the delegation of monitoring to the controller of an entity at a lower level. We will come
back to this point in chapter 6.
The analysis of variances between targets and results is prepared by the controller in
order to lighten the workload of the operational manager, who cannot study a large
number of figures in the short time available. From this analysis, the
controller suggests topics to be put on the agenda of the performance review meeting.
There are two ways to do this:
• analyse trends in indicators and examine any major discontinuities (a sudden
increase in an indicator that has historically been stable, for example);
• analyse variances with respect to a target or a standard and select the most
significant variances.
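These two selection rules can be sketched as a small screening routine (an illustrative sketch only; the threshold values are arbitrary assumptions, not figures from the text):

```python
def flag_for_agenda(history, target=None, jump_threshold=0.20, variance_threshold=0.10):
    """Screen one indicator and return the reasons (if any) for putting it
    on the agenda of the performance review meeting.

    history: monthly values, oldest first; target: optional preset target.
    """
    reasons = []
    # Rule 1: major discontinuity -- compare the latest value with the
    # average of the preceding months of a historically stable series.
    if len(history) >= 3:
        baseline = sum(history[:-1]) / len(history[:-1])
        if baseline != 0 and abs(history[-1] - baseline) / abs(baseline) > jump_threshold:
            reasons.append("discontinuity vs. historical trend")
    # Rule 2: significant variance with respect to a target or standard.
    if target is not None and target != 0:
        if abs(history[-1] - target) / abs(target) > variance_threshold:
            reasons.append("significant variance vs. target")
    return reasons

# A sudden jump in a stable series is flagged on both counts;
# an on-target, stable indicator stays off the agenda.
print(flag_for_agenda([100, 102, 99, 101, 140], target=100))
print(flag_for_agenda([100, 102, 99, 101, 100], target=100))
```

In practice the controller would run such a screen across the whole panoramic dashboard and hand only the short list of flagged topics to the review meeting.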
These topics may be discussed after those relating to the performance management
dashboard. The main difference is that, as they relate to variances, items on the agenda
are not known in advance and so there is less preparation for the performance review.
The purpose of the meeting is therefore different. The point is to decide jointly whether
a given variance justifies implementing an action plan. Given that these meetings are
held monthly, it may happen that an action plan has already been launched. In this case,
the review meeting merely serves to provide participants with information about the
action plan. Otherwise, an action plan may either be drawn up during the meeting or an
ad hoc meeting can be scheduled for the people concerned to discuss such a possibility.
Given the unpredictable nature of the topics to be covered and the short preparation
time, this second scenario is more common.
The purpose of the performance review meeting is therefore to make a diagnosis,
pinpointing significant unresolved problems. The number of items dealt with may be
greater than for a performance management dashboard. Furthermore, the topics
discussed in meetings are only an extract of those being monitored in the panoramic
dashboard, which can therefore contain a greater number of indicators than the
performance management dashboard. Still, as there are no empirical studies on this
question,8 it is hard to determine the number of indicators that can be tracked in this
way while still generating a relevant number of topics for discussion in review meetings.
The number of topics may be limited by distinguishing between the indicators that have
to be analysed monthly and those which may be dealt with less frequently.
Targets play an essential role in panoramic dashboards because part of the analysis is
based on variances with respect to targets. However, given the fact that the key
parameters of performance do not change significantly from one year to the next, it is
not necessary to modify the corresponding indicators as frequently as in performance
management dashboards.

1.3.3. Summary
The main characteristics of the two types of dashboard are summarised in table 5.3:

Table 5.3 Characteristics differentiating performance management dashboards and panoramic dashboards

Purpose
• Performance management dashboard: energise and focus action on priorities.
• Panoramic dashboard: control the entire set of key performance parameters.

Structure of indicators
• Performance management dashboard: selective, focused on current priorities and on corresponding action goals.
• Panoramic dashboard: balanced, taking into account all the different goals and objectives of the organisation; consistent with CSFs linked to the business model and to the organisation’s strategic positioning.

Who builds the indicators
• Performance management dashboard: the executive team determines objectives collectively; the selection of indicators may be delegated.
• Panoramic dashboard: the controller directs the construction of objectives and indicators; the executive team approves them.

Computation of indicators
• Performance management dashboard: may be done manually.
• Panoramic dashboard: done automatically by the information system.

Evaluation of results
• Performance management dashboard: systematic evaluation of the achievement of objectives and of progress on action plans.
• Panoramic dashboard: comparison with preset targets, or analysis of trends.

Processes and people involved in monitoring
• Performance management dashboard: all priorities are included on the agenda of review meetings (but not necessarily every month); review meetings are prepared by the operational managers in charge of action plans; the role of review meetings is discussion between the manager and operational team leaders on all of the objectives and action plans, and decision making.
• Panoramic dashboard: only variances and significant discontinuities are put on the agenda of review meetings; controllers play a key role in highlighting significant variances; the role of review meetings is to assess the seriousness of variances and to decide whether corrective action is advisable.

Number of indicators
• Performance management dashboard: reduced, in order to focus the attention of managers on current priorities.
• Panoramic dashboard: fairly large, in order to ensure balanced oversight of the activity.

Frequency of measurement
• Performance management dashboard: ad hoc, depending on the timeframe of action plans.
• Panoramic dashboard: regular; the frequency can differ depending on the nature of the indicators.

Changeability of indicators
• Performance management dashboard: high.
• Panoramic dashboard: low (depending on the changeability of activities and strategic positioning).

8 The literature has focused on the question of how many indicators can be monitored by an operational manager, i.e. in the context of what we have called a performance management dashboard.

In both cases the objective ultimately is to make useful decisions. But the locus and
means of making decisions are different, which results in two types of indicator that we
feel ought to be placed in two different dashboards.
This typology has not been validated academically. It is somewhat similar to the
typology of R. Simons who, based on an empirical study of the way performance
measurement systems are used by managers in contexts of major strategic change,
distinguishes two modes of using performance measurement systems: one use that he
calls “diagnostic” and another that he calls “interactive” (Simons, 1995).
Certain elements of Simons’ typology are the same as in the typology that we have just
presented, notably the respective roles of managers and controllers and whether or not
indicators are monitored systematically. However, in our opinion, the performance
management dashboard cannot be equated to an interactive use of dashboards such as
defined by Simons. The interactive use is not aimed at mobilising people on priorities
with a view to energising the organisation, but rather to foster dialogue all along the
hierarchical line on key strategic uncertainties with a view to organisational knowledge
sharing and learning. Simons illustrates this kind of dialogue by the fact that a chief
executive can call up an operational manager who is located several hierarchical levels
away and ask him to explain an evolving situation (for example, he can call a sales
manager and ask him for his analysis of the sales growth of a given product to a certain
type of customer in his area, if the strategic uncertainty concerns changes in a specific
market). Moreover, according to Simons, any performance management system can be
used interactively. The time constraints of managers do not lead to a limit being placed
on the number of objectives and indicators (in order for an executive to call an
operational manager about one of his figures, he must have access to data with the
appropriate level of detail). Instead, a single subject is selected which corresponds to a
major strategic uncertainty. Contrary to the performance management dashboard, the
selection of priorities is not done collectively at the different levels; instead, the key
uncertainty is determined by the chief executive for the entire entity. Consequently,
detailed information is exchanged all along the hierarchical line, whereas for the
performance management dashboard, the corresponding objectives and indicators are
transformed from one level to another (cf. chapter 6).
Thus, in our view, the interactive use of performance management systems will lead to a
third type of dashboard: “the interactive dashboard”. As this type of use does not
influence dashboard construction, which is the subject of this chapter, we will not
develop this type of dashboard further here.9

Boxed text 5.1


Panoramic dashboards and information systems
Is the aim of the panoramic dashboard to monitor all of the indicators found in
the databases of the management information systems? In other words, is it
useful to build a panoramic dashboard or should one simply give the controller
access to these databases and have him perform an analysis of them?
As there is no empirical literature available on this subject, we propose the
following points for consideration:
• the information contained in the databases has multiple uses and
users. There is, therefore, a risk that the analyst will be overwhelmed
with information when he has to prepare the performance review
meeting;
• the very act of selecting information to build a panoramic dashboard
constitutes a useful exercise which helps align the representations of
the various participants on the goals of the entity, the operational
translation of its strategic positioning and the key elements of its
performance model.
We feel that it is preferable to build a panoramic dashboard to ensure that all the
key parameters are being controlled, which does not exclude using the databases
to perform specific analyses.

1.4. From principles to methods

The two main methods10 for constructing dashboards are the BSC (Balanced Scorecard)
method and the OVAR method (in French: Objectifs, Variables d’Action,
Responsables).11

9 We will nevertheless briefly touch on this type of dashboard again in chapter 6.
10 There are other methods, notably in the field of quality management, but they are not designed for measuring the entire set of performance dimensions.
Although they have been the subject of various publications,12 they have not been
defined in a very precise manner and may be subject to interpretation, even by their
developers, who in fact provide very little empirical information on their use. The
interpretation we will advance here is based on these publications, on the observation of
some practices and on the distinction between performance management dashboards
and panoramic dashboards that we have just presented. It is underpinned by the
following analyses:13
• The BSC method leads to the identification of around fifteen objectives. This
limit stems from the fact that in this method the objectives are represented in
the form of a “strategy map”. Given the small number of objectives, the BSC
method cannot be used to build a panoramic dashboard. Moreover, its authors
clearly emphasise the necessity of focusing on a few priorities. In our
opinion, therefore, it is more suited to the construction of performance
management dashboards. This does not contradict another of the method’s
recommendations, namely the organisation of objectives in four perspectives
(see below). Indeed, by focusing attention on certain performance levers, the
performance management dashboard moves away from a purely financial
measurement of performance. The idea of defining objectives in four
perspectives, which is only one element of the BSC method, can also be used
to clarify the construction of panoramic dashboards (see below).
• The OVAR method can be used to build both performance management
dashboards and panoramic dashboards. However, this leads to very different
definitions of the elements that make up this method (objectives and critical
performance variables) and rules for selecting them. In order to avoid
confusion, we will reserve the term OVAR for the construction of
performance management dashboards and then borrow elements from this
method and adapt them for the construction of panoramic dashboards.
In the remainder of this chapter, we will describe how the BSC and OVAR methods are
applied to the construction of performance management dashboards (section 2) and
present a method for building panoramic dashboards that we will call OOAA, which is
the fruit of borrowings from and adaptation of the BSC and OVAR methods (section 3).

2. Building performance management dashboards14

As stated above, we will present here our interpretation of the BSC and OVAR
methods.

11 These three terms will be translated here as: objectives (O), critical performance variables (CPV) and managers responsible (R).
12 Kaplan and Norton, 1998, 2001, 2004 for BSC; Fiol et al. 2004 for OVAR.
13 This preamble is mainly addressed to readers who already have some knowledge of these methods. We advise others to go directly to the description of the methods and to come back to this introduction only if the relation between method and type of dashboard seems problematic.
14 In this chapter we will confine ourselves to the construction of dashboards for an entity, without looking at how dashboards are coordinated between hierarchical levels. That question will be explored in chapter 6.
2.1. Applying the BSC method to the construction of performance
management dashboards

The BSC method conceived by Robert Kaplan and David Norton is based on the
construction of a strategy map that represents the key objectives of the company. They
are organised into four perspectives that are linked to each other through cause-and-
effect relationships. The objectives are then translated into indicators. We will first
examine how the strategy map is drawn and then how the indicators are determined.

2.1.1. Guidelines for drawing the strategy map


The following are the main guidelines given by Kaplan and Norton for drawing the
strategy map:
• Identify objectives along four perspectives: financial, customer, internal
business processes, learning and growth.
• Choose objectives that are consistent with the organisation’s strategy.
• Connect the objectives to each other through causality linkages.
We will now present the four perspectives, the identification of objectives and the
construction of the strategy map. In section 4 we will come back to the question of
alignment with strategy.

The four perspectives


Let’s examine in detail the four perspectives, objectives and corresponding types of
indicator:
• The financial perspective contains objectives linked to the expectations of the
company’s shareholders. Traditionally these expectations are expressed in
terms of growth, return on investment and financial structure, and are
translated into financial performance indicators such as sales growth, ROCE,
EBITDA and financial structure ratios such as leverage ratios.
• The second perspective focuses on customer expectations. To formulate these
objectives we adopt the customers’ point of view and identify their desires
vis-à-vis the product or service, customer relations and the brand. For
example: “I want flights that leave and arrive on time” or “I want a brand that
has a reputation for reliability.” The corresponding indicators concern either
the products and services themselves, or the customer’s perception of them.
Indicators of business results such as market share or customer retention rates
do not correspond to customer expectations. Rather, they correspond to
objectives in the financial perspective.
• The third perspective corresponds to objectives of improving internal
processes. Key internal processes are selected which will enable the company
to achieve objectives in the customer and financial perspectives: production
processes, new product development processes, logistic processes, etc. Here
we find indicators of quality and productivity (for example, cost indicators),
lead time, etc. To determine these objectives we can use a representation of
the entity’s value chain and identify the key objectives of each link in the
chain.15
• The “learning and growth” perspective concerns the key resources that are
needed to attain the objectives of the other three perspectives. According to
Kaplan and Norton, these resources can be categorised in three groups:
human capital (competencies, training, knowledge), information capital
(systems, databases, networks) and organisation capital (culture, leadership,
teamwork). Kaplan and Norton also propose another classification: staff
motivation, information and training; strategic competencies; strategic
technologies; and climate for action. In this perspective we find indicators
such as employee turnover, the number of suggestions made by employees,
training hours, the availability of information, etc.
It should be emphasised that this representation in four perspectives can be adapted by
the company, either by adding other perspectives (for example a perspective relating to
environmental performance), or by modifying the perspectives to take into account the
nature of the organisation’s stakeholders. The financial perspective can be enlarged to
encompass stakeholders to which the company is accountable. For a municipality, for
example, we can replace the financial perspective with a perspective that corresponds to
the question: “How are we perceived by voters?” The customer perspective could be
replaced by a perspective that takes into account the constraints in terms of financial
resources that the municipality has to cope with.16

The strategy map


The BSC method is based on a representation of the causal links between the objectives
in the different perspectives that is known as a strategy map. Figure 5.1 provides a
generic representation of a strategy map.
The strategy map is both a tool for presenting the strategic priorities of an entity in a
structured way and also a methodological tool for making choices between different
priorities. The process of thinking about and discussing cause-and-effect relationships is
what enables managers to make sure that objectives are relevant: when envisaging an
objective in one perspective, one also has to consider the objectives in other
perspectives to which it will contribute.
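This discipline of questioning each objective’s contribution can be sketched as a tiny graph model (all objective names here are invented for the illustration):

```python
# Rank of each perspective in the causal chain, bottom to top.
PERSPECTIVE_RANK = {"learning": 0, "processes": 1, "customer": 2, "financial": 3}

# objective -> (its perspective, objectives it is meant to contribute to)
objectives = {
    "Train maintenance staff": ("learning", ["Reduce turnaround time"]),
    "Reduce turnaround time": ("processes", ["I want on-time flights"]),
    "I want on-time flights": ("customer", ["Grow sales"]),
    "Grow sales": ("financial", []),
    "Orphan objective": ("processes", []),
}

def unconnected(objectives):
    """Non-financial objectives that contribute to nothing higher up --
    candidates to be questioned when drawing the strategy map."""
    loose = []
    for name, (perspective, targets) in objectives.items():
        if perspective != "financial" and not any(
            PERSPECTIVE_RANK[objectives[t][0]] > PERSPECTIVE_RANK[perspective]
            for t in targets
        ):
            loose.append(name)
    return loose

print(unconnected(objectives))  # → ['Orphan objective']
```

Any objective flagged in this way either needs an explicit causal link to a higher perspective or should be dropped from the map.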

15 Note that Kaplan and Norton propose generic representations of the value chain.
16 This perspective will be of a financial nature as it concerns financial resources, but not in the traditional sense of the BSC financial perspective.
Figure 5.1 Generic representation of a strategy map

[Figure: a generic strategy map organised in four bands linked by upward cause-and-effect arrows.
Finance: Grow…; Grow…; Reduce…
Customers: I want…; I want…; I want…; I want…
Internal processes: Respond to 100% of…; Improve…; Reduce…; Grow…; Develop…
Learning & growth: Develop…; Attract…; Train…]

Drawing the strategy map is a challenge because a choice has to be made between the
necessity of drawing a large number of arrows in order to truly understand the model –
which may make it hard to read – and an objective of communication which requires
simplifying it, sometimes to the point of removing all the arrows. In practice, it
sometimes happens that companies make several versions of the same strategy map
depending on who they are intended for and the objectives.

Formulation of objectives: dynamic and precise


Objectives have to be expressed using a verb to translate the dynamic inherent in the
performance management dashboard. For example, the objective may be: “to reduce
structural costs” or “to increase market share”. It is recommended that verbs like “to
optimise” and “to maintain” be avoided (except in the case where it is clear that
maintaining performance at its current level is definitely an ambitious objective).
Objectives have to be formulated in a precise way so they will be understood equally by
all. The idea is to facilitate comprehension of strategic priorities and to align the various
people involved by designing the strategy map – in general, members of the executive
team. Experience shows that without detailed definitions the subjects for discussion are
vague and agreement on priorities remains superficial.
The need for a very precise formulation often obliges the company to make two
versions of objectives: a long version which provides precision and detail, but which
cannot appear on the strategy map for reasons of space and legibility, and a short
version, only showing the titles of the objectives, which will be used for presenting the
strategy map.

Boxed text 5.2
Example
At an IT services company, the short formulation of an objective for big
accounts and companies (customer perspective) might be the following: “I want
to reduce my costs, the complexity of my information systems and my risk.”
The following detailed definition may be added to the title of the objective: “I
want an IS provider who helps me to increase my financial results, who
guarantees a quick return on investment and who reduces both my operational
costs and my IT costs. I want an IT provider who offers end-to-end solutions
and fast implementation, that leads to a reduction in the complexity of my
information systems and my risks so I can concentrate on my core business.”

Boxed text 5.3


Sample strategy map
The following figure is a fictitious strategy map for an airline in a former Soviet
bloc country. The purpose of the different types of line (solid, dotted, etc.) is to
make the map easier to read by highlighting coherent sub-assemblies.

Figure 5.2 Fictitious strategy map of an airline from a former Soviet bloc country

[Figure: a strategy map in four bands with upward cause-and-effect links.
Finance: attain a return higher than the sector average (profitability); increase turnover by widening the customer base (customer acquisition); increase turnover by building customer loyalty (customer retention); increase operating margins.
Customers: I want more “value for money” (all kinds of customers); I want new and innovative services (business customers); I want on-time flights; I want safe and reliable flights.
Internal business processes: develop a service offering that is larger than the international average; extend geographic coverage (code sharing with alliance); develop yield management systems; speed up service at the terminal; deepen customer intimacy; maximise the use of capacities; maximise availability of aircraft; develop effective maintenance processes.
Learning and growth: improve employees’ professional competencies; develop a customer-oriented culture; develop information systems for operations; increase motivation through performance-based incentive schemes.]
BSC and modelling
The BSC method is thus a way of modelling performance based on a generic causal
model that links different dimensions of performance. This model is adapted to each
organisation to create its own specific model, represented by the strategy map.
The different perspectives of the generic causal model are linked by cause-and-effect
relationships, where final performance is that of shareholder expectations (cf. figure
5.3).

Figure 5.3 Generic causal model of the BSC

LEARNING AND GROWTH → INTERNAL PROCESSES → CUSTOMER EXPECTATIONS → SHAREHOLDER EXPECTATIONS

It is not therefore a totally open model that would assign equal weight to the different
stakeholders.18

2.1.2. From the model to the indicators


In the next stage, the objectives from the four perspectives are translated into indicators.

Criteria for the quality of an indicator


The criteria for assessing the quality of an indicator are given in chapter 1 (boxed text:
“The properties of a good measure”):
• Validity: does the indicator faithfully translate the intention that this objective
is aiming for? For example, for the objective “I want innovative solutions”,
an indicator of “innovative image” produced by a survey will only
imperfectly reflect the intention and would be more in step with the objective
“I want to be able to say that I buy innovative products.” It is much easier to
check whether the indicator matches the intention when the objective has
been defined precisely.
• Relevance: will the indicator selected lead to decisions that are aligned with the
intention?
• Ability to set targets: can a target be set for the indicator?
• Cost of producing the indicator.
• Reliability of the information: what level of reliability of information is it
possible to have concerning this indicator?
• Time: how long does it take to produce the indicator?
• Legibility.

18 If used for organisations that are not companies or for functions within a company, the generic model has to be adapted.
Method for designing an indicator
In practice, the first five criteria are the most important in the search for suitable
indicators. One method consists in gathering a group of four or five people and
brainstorming to make a list of possible indicators for each objective. A score of 1 to 5
(the best score being 5) is assigned to each indicator for each of the five criteria, a total
score is calculated by summing up these scores and the indicator(s) with the highest
scores are selected.
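As a minimal sketch, this scoring and selection step can be reproduced as follows (the candidate names and scores are hypothetical; as in the method, the five criteria are summed with equal weight):

```python
# 1-5 scores (5 = best) on: validity, relevance, ability to set targets,
# cost of production, reliability.
candidates = {
    "innovative-image survey": [3, 2, 5, 1, 4],
    "% of recent products in installation cost": [5, 5, 5, 3, 3],
    "number of new services vs. previous installation": [2, 3, 4, 3, 1],
}

def rank(candidates):
    """Total each candidate's scores and return (name, total), best first."""
    totals = {name: sum(scores) for name, scores in candidates.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for name, total in rank(candidates):
    print(f"{total:2d}  {name}")
```

The group then keeps the one or two highest-scoring indicators per objective.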

Boxed text 5.4


Example
Consider a company that installs telecommunications equipment and the objective “I want
innovative solutions”, specified in the following way: customers want to be convinced that the
solutions installed are truly state-of-the-art technology. The following table illustrates the
method of searching for an indicator for this objective.
Table 5.4 Determining indicators for the objective “I want innovative solutions”

Indicator | Matches the intention | Effect on behaviours | Targets can be set | Cost | Reliability | Total
Measure innovative image with a survey | 3 | 2 | 5 | 1 | 4 | 15
Number of products less than x months old in installations | 3 | 3 | 5 | 3 | 3 | 17
% of products less than x months old in the cost of the installations | 5 | 5 | 5 | 3 | 3 | 21
Number of products less than x months old in offers | 3 | 4 | 5 | 3 | 3 | 18
% of products less than x months old in the cost of the offers | 4 | 5 | 5 | 3 | 3 | 20
% of products less than x months | 2 | 1 | 5 | 5 | 3 | 16
Number of new services compared to the previous installation | 2 | 3 | 4 | 3 | 1 | 13
2.2. Applying the OVAR method to the construction of performance
management dashboards

As stated above (1.4), the OVAR method can be used in building both performance
management dashboards and panoramic dashboards, though it takes a very different
form in the two cases. For the sake of clarity, we will only use the term OVAR for the
construction of performance management dashboards.
The acronym OVAR stands for Objectifs (O), Variables d’Action (VA) and
Responsables (R) which we will translate as objectives (O), critical performance
variables (CPV) and managers responsible (R). The method is based on the construction
of an “O/CPV grid” where objectives intersect with critical performance variables, and
then a more complete grid which integrates the determination of the managers
responsible (R). The second grid is useful for coordinating a system of dashboards from
different hierarchical levels. We will examine this second grid in chapter 6.

2.2.1. The O/CPV grid


The O/CPV grid contains three parts:
• Identification of objectives for the entity whose dashboard we are trying to
build (O). The idea here is to show “where” the entity wants to go in the form
of the results to be attained (cf. details below). In general there are between
four and six objectives.
• Identification of critical performance variables, i.e. a selection made from
among the entity’s performance levers; those that are considered priorities for
action in order to achieve objectives. They answer the question of “how” to
attain objectives. There are generally two to three times as many CPV as
objectives.19
• A representation in grid form that shows the cause-and-effect relationships
between CPV and objectives graphically and which also serves as a tool to
check that CPV and objectives are coherent with each other (cf. table 5.5).

19 For the construction of panoramic dashboards we will use the notion of action areas. CPV are fewer in number than action areas because they concentrate on priorities.
Table 5.5 Generic O/CPV grid
Objective 1 Objective 2 Objective 3 Objective 4

CPV 1 x x
CPV 2 x x
CPV 3 x
CPV 4 x x
CPV 5 x x
CPV 6 x
CPV 7 x x
CPV 8 x x x
CPV 9 x x
CPV 10 x
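The coherence check that the grid supports can also be sketched programmatically (a hypothetical grid, not a reproduction of the table above):

```python
# CPV -> set of objectives it is expected to influence (the "x" marks).
grid = {
    "CPV 1": {"Objective 1", "Objective 3"},
    "CPV 2": {"Objective 2"},
    "CPV 3": set(),  # a CPV tied to no objective is suspect
}
objectives = {"Objective 1", "Objective 2", "Objective 3", "Objective 4"}

def coherence_issues(grid, objectives):
    """Flag CPV linked to no objective, and objectives served by no CPV."""
    covered = set().union(*grid.values())
    return {
        "cpv_without_objective": sorted(c for c, targets in grid.items() if not targets),
        "objective_without_cpv": sorted(objectives - covered),
    }

print(coherence_issues(grid, objectives))
```

An empty row (a CPV influencing no objective) or an empty column (an objective with no action lever) signals that the grid needs rework before indicators are chosen.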

2.2.2. Objectives
The following guidelines are recommended for defining objectives, some of which are
also found in the BSC method (Fiol, 2008; Fiol & Jordan, 2008):20
• An objective must contain a verb in the infinitive.
• An objective has to be expressed very precisely so that it will be understood in the
same way by all the different stakeholders.
• An objective must incorporate the notion of progress and hence a suitable verb such
as “grow”, “reduce”, “develop”, etc. “Maintain” the number of customers is not an
acceptable objective unless this constitutes a challenge given the current situation. It is
also suggested that the verb “optimise” be avoided, even though it is sometimes
convenient.
• A performance dimension constitutes an objective if, when one considers its purpose
(the “why” question), it cannot be linked to another performance dimension. If it can
be linked to another performance dimension then it will be a CPV. For example
“increase sales” can be an objective if this is one of the entity’s goals. However, it will
be a CPV if the entity’s objective is profitability and “increasing sales” is simply one
lever for achieving this objective.21
• An objective has to be aligned with the strategy of the entity. Specifically, using the
terms defined at the beginning of this chapter (1.2.2), an objective must correspond to
a temporary priority linked to a change in strategic positioning, to the current business
context, to specific internal difficulties or to particularly ambitious objectives of

strategic positioning (and not to ongoing elements relating to the specific goals of the
entity, the performance model of the sector or the organisation’s strategic positioning).

20. Fiol, M. & Jordan, H. (2008), Formuler les objectifs d’une grille OVAR, teaching material document, HEC; Fiol, M. (2008),
La démarche OVAR au service de l’élaboration d’un projet commun au sein d’une équipe, teaching material document, HEC.
21. We will see in chapter 6 that an objective of an entity can be connected to another performance dimension of the organisation,
but at a higher hierarchical level. For example, increasing sales figures may be an objective for the sales department linked to a
profitability objective at the level of the company.
• The timeframe considered in setting these priorities is usually one year.
• The objective must be formulated in such a way that it is possible to measure its
achievement.
• It is often useful to set a target value in specifying the objective. Different CPVs may
be required for different target values. For example, if the objective is to increase
market share, the CPVs will not be the same if the target is to increase market share
by 10% or to double it.
• The following guidelines are used in selecting from among all the objectives that meet
the preceding criteria:
   - The number of objectives should be between four and six.
   - If the objectives are too numerous, the choice should correspond to strategic
     priorities, on one hand, and to those whose achievement is considered particularly
     difficult for the time horizon under consideration, on the other.
   - Objectives which partially or fully overlap must be avoided. For example:
       O1: increase operating income,
       O2: reduce overheads.
     Either objective O1 is too broad for the entity and it is preferable to keep
     objective O2, which is more focused, or O2 is actually a CPV of objective O1.
• Following the above guidelines often leads to no objective or CPV being chosen for
human resources. If these resources are considered strategic, it may be advisable to insist
that one objective be dedicated to human resources.
• In general, it is important to make sure that all of the entity’s stakeholders have been
taken into consideration (including employees), even if ultimately only some of them are
retained, since with four to six objectives it is not always possible to cover all the
stakeholders.

2.2.3 Critical performance variables


CPVs are a selection of performance levers considered to be priorities for achieving
objectives. CPVs must meet conditions which partly overlap with those relating to
objectives:
• A CPV must contain a verb in the infinitive.
• The “why” question points to one or more of the objectives selected.
• A CPV has to be expressed very precisely so that it will be understood in the
same way by all the different stakeholders.
• It must be possible to observe and measure the CPV.
An example of a CPV could therefore be: “expand the technical competencies of sales
staff in new products and new technologies”.

Since the time horizon used for determining objectives is annual, some CPVs may
resemble action plans, i.e. a set of organised and time-bound actions, which are
therefore more precise than the CPV (for example, to train 100% of the sales team in
new technologies every twelve months). We will see that the notions of CPV and action
plan are more distinct in panoramic dashboards owing to the ongoing nature of the
performance variables (which we will call “action areas” in that framework).

2.2.4. Validation of the O/CPV grid


The grid is a tool used to ensure a good balance between objectives and between CPVs.
The following problems must be avoided:
• An objective without a CPV. This is a sign of a weak or irresolute strategy.
• An objective column with a single X. In this case there is some confusion
between the objective and the CPV. Either the objective is really a CPV or
other CPVs corresponding to the objective have been overlooked.
• A CPV without an objective. This is a sign that actions are not oriented
towards priority performance objectives.
• Too many CPVs for the same objective. This is either a sign that the objective
is too broad and must be broken down into several objectives or that too
much importance is being accorded to this objective.
• Imbalance in the number of CPVs per objective. This indicates a poor choice of
objectives or a poor choice of CPVs.
• Xs forming diagonal lines (cf. table 5.6). This is an indication of overly
narrow thinking about the causal links between CPVs and objectives, and a sign
that some CPVs which “cut across” objectives have been overlooked (see the
example of Monoprix).

Table 5.6 Situation to be avoided #1: diagonal Xs

          Objective 1   Objective 2   Objective 3   Objective 4
CPV 1          x
CPV 2          x
CPV 3          x
CPV 4                        x
CPV 5                        x
CPV 6                                      x
CPV 7                                      x
CPV 8                                      x
CPV 9                                                    x
CPV 10                                                   x

• Two objectives with Xs that overlap (cf. table 5.7). In this case, if objective 1 is
achieved, then objective 3 will be achieved automatically. The two objectives
are not independent. Either objective 3 is in fact a CPV of objective 1 or the
thinking on CPVs is incomplete and a CPV that is specific to objective 3 has
yet to be found.

Table 5.7 Situation to be avoided #2: two objectives with overlapping Xs

          Objective 1   Objective 2   Objective 3
CPV 1          x
CPV 3          x                           x
CPV 4                        x
CPV 7          x                           x
CPV 8
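These coherence checks lend themselves to a mechanical verification. The sketch below is our own illustration, not part of the OVAR method itself; all names are hypothetical. It represents a grid as a mapping from each CPV to the set of objectives it serves and flags some of the situations listed above:

```python
# Illustrative sketch: an O/CPV grid held as {CPV -> set of objectives served},
# with some of the coherence checks of section 2.2.4. Names are hypothetical.

def validate_grid(objectives, grid):
    """Return a list of warnings for the 'situations to be avoided'."""
    warnings = []

    # Invert the grid: objective -> set of CPVs that serve it (its column of Xs)
    coverage = {o: {c for c, objs in grid.items() if o in objs} for o in objectives}

    for o, cpvs in coverage.items():
        if not cpvs:
            warnings.append(f"'{o}': objective without a CPV (weak strategy)")
        elif len(cpvs) == 1:
            warnings.append(f"'{o}': column with a single X (objective/CPV confusion?)")

    for c, objs in grid.items():
        if not objs:
            warnings.append(f"'{c}': CPV without an objective (unfocused action)")

    # Two objectives with overlapping Xs: one column's CPVs are a strict
    # subset of another's, so the objectives are not independent
    for o1 in objectives:
        for o2 in objectives:
            if o1 != o2 and coverage[o2] and coverage[o2] < coverage[o1]:
                warnings.append(f"'{o2}' overlaps '{o1}' (not independent)")

    return warnings

grid = {
    "CPV 1": {"Objective 1"},
    "CPV 2": {"Objective 1", "Objective 3"},
    "CPV 3": {"Objective 1", "Objective 3"},
    "CPV 4": set(),
}
for w in validate_grid(["Objective 1", "Objective 2", "Objective 3"], grid):
    print("-", w)
```

Run on this deliberately flawed grid, the sketch reports an objective without a CPV, a CPV without an objective, and two overlapping objective columns.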

Boxed text 5.5


Example
We will illustrate the construction of an O/CPV grid by applying it to the
context of a company that is freely based on Monoprix – a French retail chain
(supermarket, clothing, etc.) that operates in city centres. We will use this
example again in the following section for the construction of a panoramic
dashboard.
The main organisational goals of the company correspond to three stakeholders:
• shareholders, with a return on investment objective;
• customers, in a particular market segment: city dwellers;
• the environment and sustainable development have been incorporated into the
strategic priorities of the company: “Our company spirit is also expressed
through a commitment to sustainable business. We think, buy and sell
responsibly. We strive to respect the environment and to promote fairness. We
hope to share this spirit with all of our partners, at every level of our
organisation.”
The business model is that of a large supermarket retailer in which food sales
represent a significant proportion of the business. The key features of this model
are: market share, capacity to attract customers, a particular financing structure
with negative working capital, staffing adapted to busy/slack periods, proximity
to customers, product freshness, etc.
The specific strategic positioning of the company is upmarket (“Doing your
shopping at Monoprix costs 15% more than at Leclerc...”) aimed at working
urban people with relatively high purchasing power, living in towns of more
than 50 000 inhabitants. This means having a fairly balanced, quality product
range, adapting to the pace of life of active people in terms of opening hours and
accessibility, developing product-related services such as a delivery service and
in-store services in order to enable one-stop shopping, etc.
In addition, the company has expanded its store formats with: Monop,
Dailymonop, Beauty Monop, etc.
Current development needs to be continued and consolidated. Having convened
to examine the priorities for next year, the executive team points out that
although store formats and points of sale have developed as planned, there have
been some setbacks:
• There is a significant delay in attaining return on investment objectives, mainly
owing to new stores opening too slowly. The objective of “increasing ROCE
from X% to Y%” has therefore been decided.
• Given the past positioning of the company on low-cost products, the upmarket
urban positioning of the flagship brand still remains to be consolidated so as to
assert its differentiation with respect to competitors. This objective therefore
remains one of the company’s priorities.
• The company is not perceived as being at the leading edge in terms of
sustainable development, in spite of numerous campaigns carried out over the
past two years. The executive team has translated this into the objective of
“improving the credibility of the company’s sustainable development actions”.
• Finally, the executive team perceives increasing difficulties in opening new
retail locations. Consequently it insists on the priority of “continuing the
diversification of store formats at the same pace”.
These objectives were then used to identify CPVs and build the following
O/CPV grid:
Table 5.8 O/CPV grid for a company inspired by Monoprix

Objectives (columns):
  O1: Increase ROCE from X% to Y%
  O2: Finalise the upmarket urban positioning of the Monoprix brand
  O3: Improve the credibility of the company’s sustainable development actions
  O4: Continue the diversification of store formats at the same pace

                                                     O1    O2    O3    O4
Accelerate the achievement of profitability
objectives for the new stores                         x
Find and convert new locations                        x                 x
Develop domestic help services22                      x     x
Improve perceptions of the retail brand                     x     x
Reposition the Beauty Monop brand toward
the high end of the market                                  x           x
Develop fast-food products                                  x
Improve the quality of fruit and vegetables
and develop the organic food range                          x     x
Integrate the warehouses of the various
store formats                                         x                 x
Train cashiers in customer reception
and relations                                         x     x
Train buyers and department managers
in sustainable development                                  x     x
Communicate about actions to promote
diversity and equal opportunities                                 x
2.2.5. From objectives to indicators
Indicators have to be determined for each objective and each CPV. Indicators are
determined according to the same principles as outlined for the BSC.
When CPVs are similar to action plans, the determination of indicators is generally
straightforward. It is not always necessary to monitor these action plans with an
indicator that is computed automatically by the IT system. For example, if the CPV is
“carry out a quarterly audit of the quantitative and qualitative composition of inventory”
the indicator is “quarterly audit carried out” and it has two values, “yes” or “no”. This
indicator can be monitored without the help of the IT system.
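To make this concrete, here is a minimal sketch (our own, not prescribed by the chapter; all names and figures are hypothetical) in which an indicator is either computed from the information system or tracked by hand, as with the yes/no audit indicator:

```python
# Illustrative sketch: some indicators are computed automatically by the IT
# system, others (e.g. a yes/no action-plan indicator) are tracked manually.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Indicator:
    name: str
    compute: Optional[Callable[[], object]] = None  # None => tracked manually
    manual_value: Optional[object] = None

    def value(self):
        # Prefer the automatic computation when one is available
        return self.compute() if self.compute else self.manual_value


# Computed from the information system (hypothetical data source):
sales = Indicator("Quarterly sales growth (%)", compute=lambda: 4.2)

# Monitored without the IT system, as in the inventory-audit example:
audit = Indicator("Quarterly inventory audit carried out", manual_value="yes")

print(sales.value())  # 4.2
print(audit.value())  # yes
```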

2.3. Comparison of the BSC and OVAR methods

As we have presented them, the two methods share the objective of building
performance management dashboards. The main differences are the following:24
• The tool for ensuring the coherence of the performance model is different: a
grid for the OVAR method and a strategy map for the BSC method.
• These two tools lead to different representations in terms of the form of the
strategic priorities.
• The BSC method provides a more structured framework for achieving the
objective of balancing the indicators.
• The preferred time horizon for building a BSC is the medium term. It is
therefore particularly suited to dynamising the organisation on priorities
relating to strategic change. The preferred time horizon for the OVAR
method is annual. It is therefore particularly suited to dynamising the
organisation on current priorities.

3. Building “panoramic” dashboards

As stated above, there is no academic literature on this question because a distinction is
not generally drawn between performance management and panoramic dashboards. The
following recommendations are therefore offered as a proposal.
The function of the panoramic dashboard is to keep track of the key control parameters
of the entity. This time, objectives will be defined in keeping with:
• the strategic positioning of the company;
• its business model;
• the goals set by the organisation to satisfy its various stakeholders.
The objectives considered in the panoramic dashboard are therefore more likely to be
ongoing than those of the performance management dashboard which is focused on a
few important priorities. However, these are not standard objectives or universal
parameters that might be used to monitor the activity of any company. They mirror the
particularities of the sector and the general strategic choices made by the company’s
directors.

We believe that, with a few modifications, the OVAR method of setting out objectives,
i.e. by building a grid of intersecting objectives and CPV, can be used to build
panoramic dashboards. These alterations are mainly linked to the fact that the purpose
and the way of using this dashboard allow for a greater number of indicators than for the
performance management dashboard.
In addition, taking inspiration from the BSC method, we recommend that objectives be
related to the expectations of shareholders and customers and that action areas
correspond to key internal processes (which can be determined using a representation of
the value chain) and to key resources from the human and information systems point of
view. Thus, the panoramic dashboard construction proposed is based on a combination
of the BSC and OVAR methods.
The result is the following method, which we will call OOAA, for Ongoing Objectives
and Action Areas:

3.1. Determining ongoing objectives (OO)

• The number of objectives is not limited to six.


• Objectives correspond to the expectations of shareholders or customers. They
must be consistent with the company’s strategic positioning.25
• Objectives do not necessarily have to correspond to progress. The parameters
to be controlled can also include elements for which the concern is simply
that performance does not decline. If progress is sought, it is not expressed in
the objective itself, but rather in the target value attached to it (see below). On
the other hand, as we explained above, it is better if the objective expresses
the scale of the ambition, because this can influence the kind of performance
levers that are chosen (for example, maintain market share, increase market
share or increase market share dramatically).
• In planning objectives, the time horizon under consideration is three years
rather than one year. The idea is not to set a reduced number of priorities but
to determine ongoing parameters that must be controlled.
• It is not a matter of trying to identify what will cause problems next year, but
rather determining the key expectations of shareholders and customers in a
more structural way.

3.2. Determining action areas

There is no longer a restricted choice of performance levers to be used to achieve
objectives. Instead, the performance levers that are considered important to control are
chosen. We will call them “action areas” (AA).
While objectives express the expectations of shareholders and customers that are
considered structural for the entity, key action areas correspond to internal elements.
The differences with the CPVs of the performance management dashboard are as
follows:

• They can be greater in number.
• They correspond to the more perennial performance levers: key elements in the
business model or important performance factors with respect to the strategic
position adopted (cf. 1.2.2).
• AA are areas where it is considered important to make improvements to reach
objectives, but they are also areas where it is important that there be no
deviation without organising specific corrective action or setting an
improvement objective.
• For action areas that concern processes, one can use a representation of the
value chain with the double objective of identifying key elements in the chain
for achieving objectives and not overlooking important elements in the
business plan.
• In chapter 6 we will see that it is possible to limit the number of action areas by
delegating the monitoring of some of them to a lower level in the
organisation.

3.3. Determining indicators

The method described in section 2.1 also applies to the indicators of panoramic
dashboards. The choice of indicators is even more critical here because dashboard
monitoring is done by exception and because the formulation of the objective or action
area that led to the choice of an indicator is less visible. While formulations and
indicators are largely inseparable in performance management dashboards, in panoramic
dashboards what is analysed are the values produced for the indicators.

3.4. Setting targets

It should be kept in mind that setting targets is essential because the principle at work
here is control by exception. Contrary to performance management dashboards, targets
are not set during the process of building the grid, because these dashboards have a
longer lifespan. In general, targets are set annually during the budgeting process. It is
not always necessary however to set targets using figures; in some cases it is enough to
monitor trends. Moreover, targets do not necessarily correspond to objectives that are
negotiated annually (and linked with incentive schemes). Instead they may be alert
levels that are set for longer timeframes (which are nevertheless still validated when the
budget is being built).
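A minimal sketch of such control by exception (our own illustration; the rule names and data are hypothetical) might compare each indicator to a target figure, an alert level or simply its trend, and report only the deviations:

```python
# Illustrative sketch: control by exception over panoramic-dashboard
# indicators. Each reading carries one control rule: an annual target,
# a longer-term alert level, or trend-only monitoring.

def exceptions(readings):
    """readings: list of dicts with 'name', 'value' and one control rule."""
    alerts = []
    for r in readings:
        if "target" in r and r["value"] < r["target"]:
            alerts.append(f"{r['name']}: {r['value']} below target {r['target']}")
        elif "alert_below" in r and r["value"] < r["alert_below"]:
            alerts.append(f"{r['name']}: {r['value']} below alert level {r['alert_below']}")
        elif "previous" in r and r["value"] < r["previous"]:
            alerts.append(f"{r['name']}: declining trend ({r['previous']} -> {r['value']})")
    return alerts


readings = [
    {"name": "Market share (%)", "value": 11.8, "target": 12.0},          # annual target
    {"name": "Loyalty cards issued", "value": 9500, "alert_below": 8000},  # alert level
    {"name": "Freshness score", "value": 3.9, "previous": 4.1},            # trend only
]
for a in exceptions(readings):
    print(a)
```

Only the market-share shortfall and the declining freshness trend surface; the loyalty-card indicator stays silent because it is above its alert level.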

Boxed text 5.6
Example
We will now return to our illustration of boxed text 5.5. Based on the elements
already provided, it is possible to build the following OO/AA grid.

Table 5.9 OO/AA grid for a company inspired by Monoprix

Ongoing objectives (columns of the grid): increase return on investment; reduce
debt; increase market share; have a product offering that matches strategic
positioning; adapt the services offering; balance the portfolio of store formats
and brands; build a “sustainable development” image; etc.

Action areas (rows of the grid), grouped by theme; the crosses show how many of
the objectives each action area serves:

Attract customers and build customer loyalty
  - Increase the number of customer loyalty cards (× ×)
  - Adapt promotions to strategic positioning (× × ×)
  - Optimise the management of shelf space (× × ×)
  - Professionalise merchandising (× ×)
  - Renew product ranges to match strategic positioning (× × ×)
  - Develop domestic services (× × ×)
  - etc.

Organisation of points of sale
  - Adjust staffing to customer flows (×)
  - Reduce inventory shortages (shrinkage) (×)
  - Reduce energy consumption (× ×)
  - Adjust checkout opening to customer flows (×)
  - etc.

Purchasing and logistics
  - Lengthen supplier payment periods (×)
  - Ensure the freshness of perishable foods (× ×)
  - Reduce the energy consumption of transport (×)
  - Ensure the traceability of products (×)
  - Reduce the number of suppliers (×)
  - Integrate the purchasing and logistics functions of the different store
    formats and brands (× ×)
  - etc.

Human resources management
  - Define and implement a suitable training plan (× × ×)
  - Launch actions to promote diversity and equal opportunities and
    communicate about them (×)
  - Develop a job rotation policy (× ×)
  - etc.

New stores
  - Find new locations (× × ×)
  - Reduce time taken to open new locations (× × ×)
  - Ensure that the positioning of the new store formats and brands is
    consistent with the image of the company (× × ×)
  - etc.

Etc.

This example illustrates the formulation of objectives and action areas and the
differences with the O/CPV grid. For a textbook example it is more difficult than
with the O/CPV grid to build an exhaustive grid and to show that the action areas
stem from choices, but both are important in practice.

4. Further issues

4.1. The relationship between dashboards and strategy

The principle of coherence between indicators and strategy is not contested. Its practical
implementation, however, is more problematic. Publications on the BSC and OVAR
methods assert that using these methods ensures this coherence in several ways:
• by providing a framework for translating strategy into indicators;
• by providing a structure to ensure coherence between indicators and strategy;
• by making it possible to represent strategy.
We will discuss these different assertions one by one. Moreover, we will show that
the most recent book by Kaplan and Norton moves away from the BSC to talk about
strategy, using the vision in four perspectives to represent generic strategy types.

4.1.1. Dashboards and strategy: translation or clarification?


In most of these publications, it is assumed that the organisation’s strategy exists before
the determination of objectives. The role of the methods in this case is to “translate”
organisational strategy into objectives, performance levers and indicators. However, in
practice, determining indicators is just as much a way of formulating strategy as it is a
translation of it, whether we are dealing with a performance management dashboard or a
panoramic dashboard.
Strategy is often expressed in the form of a vague idea which allows a great deal of
room for interpretation. The clear expression of priority objectives and the causal links
between them is therefore a useful way of building a more precise representation of
strategy.
Some consultants will ask every member of a unit’s executive management to construct
his own strategy map separately, providing them with methodological support. In
general, the results are very different from one to the other, which reveals
inconsistencies in representations of strategy. The map is therefore a useful tool to bring
these representations into line with each other – a way to reach consensus on strategy.
It is therefore advisable to use dashboard construction methods to get different
managers to discuss and work toward building consensus on a common representation.
In the OVAR and OOAA methods, this role is played by the O/CPV and OO/AA grids.
The link between strategy and performance management is therefore not sequential.
Dashboards are not a translation or manifestation of strategy. It seems more appropriate
to consider the construction of dashboards as participating in the formulation of
strategy.
Consequently, it is not necessary to have an explicit strategy in order to make use of the
methods presented in this chapter.

4.1.2. Coherence between indicators and strategy


While dashboards do not constitute a translation of strategy, it is nevertheless important
that the indicators that are finally selected be consistent from a strategic point of view.
Although the methods emphasise this need for consistency, they do not provide any
tools for ensuring it.
In reality, the methods for building dashboards cannot replace strategic analysis
instruments. These are effective tools for identifying what Kaplan and Norton call a
coherent “customer value proposition”, the key success factors associated with this
proposition and the structuring elements of the business model, as well as ensuring that
the value proposition enables the company to reach its objectives for shareholders. For
example, an analysis of the strengths and weaknesses of the company and the
competitive environment can spur the company to favour a strategic position based on
innovation and brand reputation.
Coherence between indicators and strategy is achieved by combining the conclusions of
this process of strategic reflection (based on strategic analysis methods) and a
representation of the performance model (priority objectives and causal links, supported
by the dashboard building methods presented in this chapter).
Remember that the nature of this coherence is not the same for performance
management and panoramic dashboards. The purpose of the performance management
dashboard is to focus on a few strategic or current priorities, while the panoramic
dashboard aims to control the pursuit of strategic positioning and the business model.
Consequently, in the OVAR method (performance management dashboard building),
strategic positioning essentially guides the choice of objectives: the aim is to determine
the priority objectives that senior management will concentrate on over a one-year
timeframe. As CPVs are a means of reaching objectives in the relatively short term, they
do not constitute elements of company strategy.
In the BSC method (performance management dashboard), the strategy map emphasises
certain strategic priorities. It cannot express the entire customer value proposition and
elements of the business model which constitute the performance levers of this
proposition. Given the fact that objectives are expressed in several perspectives and that
the timeframe is implicitly longer than in the OVAR method, the objective is to
highlight the strategic coherence of these priorities, i.e. the entire set of objectives and
indicators. Still, it is only an extract of the organisation’s strategy: elements of strategic
change and not stable elements.
In the method for building a panoramic dashboard, the objective is to control the
attainment of strategic positioning and the business model. Combining the OO/AA grid
with the conclusions of a strategic analysis should enable the company to select relevant
objectives and action areas from a strategic point of view and validate the coherence of
this strategy.

4.1.3. Representation of strategy


Kaplan and Norton’s books and articles deal with the representation and communication
of strategy. Although the strategy map may be a representation that can bring about a
certain degree of consensus on priorities, it remains a summary of this strategy. In fact,
it is more a summary of strategic priorities that does not express strategy so much as
desired strategic changes. This is consistent with the idea that the BSC method can be
used to build a performance management dashboard but not a panoramic dashboard.
Moreover, communicating the strategy map outside the executive team requires both the
formulation of objectives and the representation of causal links to be simplified. While
the detailed representation of objectives and causal links is easily understood by the
executive team, this mode of representation is far from straightforward for people who
have not participated in drawing the map. The map that is disseminated to personnel
must therefore be simplified. It is no longer a representation of strategy but rather a
visual aid for communicating a few priorities.

4.1.4. BSC and generic strategy types


In their most recent books, Kaplan and Norton use their representation in four
perspectives to define generic strategy types: operational excellence, customer intimacy
and product superiority. They use their generic representation of the customer and
internal business process perspectives to describe these three generic strategy types.26
For each of these strategies, Kaplan and Norton define the key elements of the customer
value proposition using their generic description of this proposition in three categories
and seven dimensions (cf. figure 5.4; the other aspects of performance must not be
neglected, but demands relating to them are less stringent).27

Figure 5.4 Customer value propositions and generic strategy types

                          Product/service attributes           Related services       Image
                        Price  Quality  Time  Selection/      Service  Relationship   Brand
                                              functionality
Operational excellence    ≠       ≠      ≠         ≠             √          √           ≠
Customer intimacy         √       √      √         √             ≠          ≠           ≠
Product superiority       √       √      ≠         ≠             √          √           ≠

≠ Differentiator    √ Basic requirements

In the same way, they associate each generic strategy type with elements in the value
chain for which the level of performance demanded is standard and those for which
performance must be high (cf. figure 5.5).

Figure 5.5 Internal process perspective and generic strategy types

                        Innovation             Customer management         Operational processes

Operational excellence      √                          √                   • Supply chain management
                                                                           • Efficiency (cost, quality,
                                                                             cycle time)
                                                                           • Capacity management

Customer intimacy           √                  • Solution development              √
                                               • Customer service
                                               • Relationship management
                                               • Advisory services

Product superiority     • Invention                    √                           √
                        • Product development
                        • Speed to market

√ Meets basic requirements

This presentation of generic strategy types cannot be equated with the BSC method because
it is not aimed at building dashboards. It uses the analytical framework of the BSC as a tool
for explicating strategies. On the other hand, it can structure the process of dashboard
building by serving as a guide for determining objectives. The BSC method can also be
applied without any reference to these generic strategy types. In our opinion, the association
that Kaplan and Norton make between BSC and strategy actually increases confusion and
wrongly reinforces the idea that the BSC method is a method for translating strategy into
indicators.

4.1.5. Summary
Ultimately, these methods are frameworks for building performance models rather than
tools for translating strategy into indicators. They respond to a key challenge that is
poorly handled in practice: the modelling of performance in order to ensure coherence
between indicators and the company’s overall goals. Indeed, Ittner and Larcker (2003)
have shown that most of the indicators monitored in companies are not based on the
identification of causal links.
The representation of strategy as objectives linked by cause-and-effect relationships is
but a tool that must be combined with the results of strategic thinking. A precise and
coherent representation can emerge from this combination, but the way this combination
ought to be organised in practice remains to be defined.

4.2. Lack of precision in the methods

We stated in the introduction to section 2 that the BSC and OVAR methods were
imprecisely defined. We will illustrate this viewpoint here and draw some conclusions
about their implementation.
Concerning the BSC method, several elements support this assertion:
• Kaplan and Norton’s books give numerous examples of strategy maps and
indicators, but they do not give any indications about the process of building
these maps (people concerned, stages, etc.);
• moreover, they are very vague about the way the BSC should be used: the
operational managers concerned, the places where it should be used, the types
of decision envisaged, timeframes, etc.;
• the recommendations that we have made in this chapter concerning the
formulation of objectives in the “customer perspective”, taking the point of
view of a customer, correspond to actual practices that we have observed.
Kaplan and Norton’s recommendations are not as precise;
• the importance of the causal links between objectives and the way of
representing them vary significantly from one book to the next without any
explanation offered for these changes;
• Kaplan and Norton attribute several functions to the method without specifying
that certain elements of the method have to be different depending on the
objective pursued. For example, the goal of the method may be to align a
management team on priorities and communicate these priorities throughout
the organisation (Kaplan and Norton speak of communicating the strategy).
Experience shows that this is not without consequences in terms of the way
the strategy map is presented. In fact, while it is possible to share a strategy
map made up of objectives written in text bubbles that are connected with
arrows, such a map is not comprehensible for those who are not familiar with
this type of representation. In practice, it is common to find a detailed
strategy map as well as a simplified strategy map without any arrows. It
would be useful to have two different names and different sets of construction
principles for these two types of representation – something which Kaplan and
Norton do not provide.
Concerning the OVAR method, its designers use the O/CPV grid for purposes
other than building dashboards (Fiol, 2008), for example to build cohesion
within a management team around shared strategic objectives. In this case they recommend:
• that the executive team jointly build the O/CPV grid;
• that a three-year time horizon be adopted in determining priority objectives;
• that the “manager responsible” (R) part be used to involve the different
members of management in the pursuit of the ambitions defined by the
objectives and CPVs: once the grid has been determined, each participant
is asked which CPV he or she wishes to take responsibility for.
This use differs from that presented in point 2.2, notably in the time horizon adopted
which results in different objectives being selected. In addition, it does not necessarily
lead to the construction of a dashboard: the construction and monitoring of indicators
can be delegated to different managers.
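The grid logic described above can be sketched as a simple data structure. The class and all the names below are illustrative assumptions, not part of the OVAR method itself: the sketch only shows objectives crossed with CPVs, with each CPV assignable to a responsible manager.

```python
from dataclasses import dataclass, field

@dataclass
class OCPVGrid:
    """Illustrative sketch of an O/CPV grid: objectives (O) crossed with
    critical performance variables (CPV), each CPV assignable to a
    responsible manager (R). All names are hypothetical."""
    objectives: list = field(default_factory=list)
    cpvs: list = field(default_factory=list)
    # impact[(objective, cpv)] is True when the CPV contributes to the objective
    impact: dict = field(default_factory=dict)
    # responsible[cpv] = the manager who volunteered to take charge of it
    responsible: dict = field(default_factory=dict)

    def link(self, objective, cpv):
        self.impact[(objective, cpv)] = True

    def assign(self, cpv, manager):
        if cpv not in self.cpvs:
            raise ValueError(f"unknown CPV: {cpv}")
        self.responsible[cpv] = manager

    def unassigned(self):
        """CPVs for which no participant has yet taken responsibility."""
        return [c for c in self.cpvs if c not in self.responsible]

# Hypothetical usage: a team fills the grid, then members volunteer per CPV.
grid = OCPVGrid(
    objectives=["Increase customer retention"],
    cpvs=["Delivery reliability", "After-sales responsiveness"],
)
grid.link("Increase customer retention", "Delivery reliability")
grid.assign("Delivery reliability", "Operations manager")
print(grid.unassigned())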
Equating this particular use with the OVAR method adds to the fuzziness surrounding the
purpose of the method.
These illustrations show that the methods are incomplete. In a certain way, this
incompleteness supports the idea we subscribe to, namely that there is no single “good
method”. Moreover, it should facilitate the adaptation of methods to a particular
objective or context and, ultimately, the “appropriation” of dashboards by their users.
Unfortunately, though, this is not how they are presented. In the case of the BSC, in
addition to the fact that the construction method is left unspecified, the tool is presented as a
universal solution for improving performance. This runs counter to the idea that the tool
needs to be adapted. Moreover, if some more precise objectives are formulated, such as
communication and the deployment of strategy, the relationship between the tool and
the achievement of these objectives is not explained. This lack of comprehension of
these connections prevents users from appropriating the method: they are left to
believe in its effects (or not) and to hire a consultant to implement the method in
their company.
As for OVAR, the authors make a definite effort to describe the method's
different uses and aims, though they use the same name for all the variants.
As we have seen, each aim requires some adaptation of the method, both in its
principles and in the definition of terms. The lack of precision in terminology
contributes to the persistence of conceptual approximations on the question of
dashboards, their construction and their use, which is not conducive to the appropriation
of the different methods that come under the “OVAR” label. It should be noted,
moreover, that there is a lack of literature on how this method is used in practice.
In conclusion, we have to be cautious concerning the application of these methods
because knowledge on these subjects is very meagre. To advance in this area, we feel
that it is necessary both to pursue the work of conceptualisation and to increase the
number of empirical studies on dashboards actually in use in companies, which for the
moment are rather scant.
Conclusion

In this chapter we have presented the principles assigned to dashboards and the methods
that can be used to build them for an entity considered separately from the rest of its
organisation. In an area where knowledge has not yet stabilised and methods are often
presented as universal solutions, we have attempted to provide elements that help
organisational members appropriate the concepts and methods associated with
dashboards, their construction and their use. This led us to introduce two goals for
dashboards (dynamisation and control of the organisation), to distinguish between two
types of dashboard (performance management and panoramic) and associated methods
and to further define the methods presented in the literature (BSC and OVAR). The
following chapter will deal with the question of the coordination of the dashboards of
different entities in an organisation.