Metrics for Service Management:
About this ebook

This title is the sister book to the global best-seller Metrics for IT Service Management. Taking the basic steps described there, this new title sets them in the context of the ITIL 2011 Lifecycle approach. More than that, it looks at the overall goal of metrics, which is to achieve Value. The overall delivery of Business Value is driven by Corporate Strategy and Governance, from which Requirements are developed and Risks identified. These Requirements drive the design of Services, Processes and Metrics. Metrics both enable design and govern the delivery of value through the whole lifecycle.
The book shows the reader how to achieve this Value objective by extending the ITIL Service Lifecycle approach to meet business requirements.
Language: English
Publisher: Van Haren Publishing
Release date: Jun 10, 2020
ISBN: 9789401805643
    Book preview

    Metrics for Service Management: - Jan Schilt

    1   Introduction

    This book is designed to be practical; it avoids diagrams, process flows and detailed definitions where these are obvious. What managers need is a view of the goals and objectives of a project or program, and then an understanding of what methods, tools, resources, processes and so on are required to get it working. This book is primarily about design – the design of metrics for Service Management, which includes designing end-to-end service metrics. To measure services end-to-end, it is necessary to design process metrics, including Service Management process, technical and other supporting metrics.

    1.1   Background knowledge

    Ideally the reader of this book will already be familiar with Service Management, ITIL®, and ISO/IEC 20000 as well as, perhaps, PRINCE2, M_o_R, ISO/IEC 15504 (SPICE), CMMI, Six Sigma, COBIT and other relevant areas. The book only includes the smallest possible top-level introduction to any of the above for those readers who might not be familiar with a particular area. Anybody intending to achieve a level of maturity in Service Management is advised to read the books recommended in the bibliography - in particular, the five ITIL® lifecycle books - and to take a structured approach to professional development.

    If you have not worked with metrics before, then it would be worthwhile reading the first chapters of this book to avoid some of the more common and dangerous pitfalls. Even if you have worked with metrics, it is probably wise to review these, as mistakes can be subtle and difficult to rectify later. Designing metrics is not simple or quick – if it seems so, then the metrics are likely to be at best inadequate and, at worst, dangerous and counter-productive.

    Many organizations have suffered from severe unexpected consequences - directly as a result of applying metrics that were easy to measure and control, but not actually in line with business requirements. A well known example was the use of one metric ‘waiting list time’ to define improvements to the UK National Health Service – the result was that everybody met the metric, but the actual waiting lists remained, or, in fact, became longer and less fair. The overall result was a reduced quality of service and increased dissatisfaction even as the metric was reported as a success.

    1.2   How to use this book

    Mostly this book is designed to be used as a practical tool during workshops:

    •   Where services, business and technical, with their processes, are designed.

    •   Where organizational improvement must be addressed urgently.

    •   Where a merged, or re-organized, service delivery team must decide which measures will enable results to be achieved quickly, support longer-term improvement, and allow deviations to be measured accurately.

    Use this as a tool, for guidance. If a suggestion suits you, use it. If you need to modify it for your own situation, go ahead; this is not supposed to be a stone tablet! If you’ve got a tricky issue to discuss, take it along, so you can explore some possible metrics – at least that should provide a common starting point for discussion and, maybe, some ideas for ways forward.

    The layout is uncluttered, designed to be easy to navigate quickly – to find an idea, for example, during a meeting. Where possible, repetition is avoided.

    Each metric described includes a paragraph giving some context. This is a reminder that metrics do not stand in isolation. Often this context will include warnings of possible misinterpretation, and suggestions for refining the metric. With any luck, in the heat of the moment, these will be some of the most helpful parts. They’re better read when actually designing a metric, rather than all the way through.

    All metrics should include, for example, a RACI (Responsible, Accountable, Consulted, Informed) matrix to allow proper design of the metric to include the people accountable for it being achieved, those responsible for measuring and managing it and those consulted about its design, improvement or interpretation as well as those informed, through reports, dashboards, alerts or other means.
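    As an illustration, the RACI assignments for a metric could be recorded as a simple structure. This is a sketch only; the role names and field layout below are illustrative assumptions, not the book's own template:

```python
from dataclasses import dataclass, field

@dataclass
class MetricRACI:
    """RACI assignments for a single metric (roles are illustrative)."""
    accountable: str                                       # accountable for the metric being achieved
    responsible: list[str] = field(default_factory=list)   # measure and manage the metric
    consulted: list[str] = field(default_factory=list)     # design, improvement or interpretation
    informed: list[str] = field(default_factory=list)      # reports, dashboards, alerts

# Hypothetical example: an availability metric
availability_raci = MetricRACI(
    accountable="Service Owner",
    responsible=["Service Level Manager"],
    consulted=["Capacity Manager", "Business Relationship Manager"],
    informed=["Service Desk", "IT Steering Group"],
)
```

Recording the four role lists per metric makes it easy to check, at design time, that every metric has exactly one accountable party.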

    The Appendix contains the full form for recording a metric. Space does not permit the inclusion of all this detail for all metrics, so only the main descriptors of each metric appear in a table at the end of each chapter. The full set of metrics is best used electronically, as an on-line Metrics Register that links to your Requirements, Continual Service Improvement (CSI) and Risk Registers and to the relevant Service Design Packages.

    The flow of the book is contained in Figure 1.1 below. Notice that Design forms a major part of the book.

    The book is organized as follows:

    –   Introduction (this chapter), explaining the purpose and structure of this book

    –   Managing, metrics and perspectives: key principles of metrics

    –   Governance: the metrics required for effective governance

    –   Service Strategy: the metrics required for the first phase of the service lifecycle

    –   Service Design: the metrics required for the second phase of the service lifecycle

    –   Chapters exploring Service Design-related topics in more detail:

    o   Classifications of metrics

    o   Outsourcing and emerging technologies

    o   Cultural and technical considerations

    o   Tools and tool selection

    Figure 1.1 Metrics book topic flow

    –   Service Transition: the metrics required for the third phase of the service lifecycle

    –   Chapter exploring Service Transition-related topic in more detail: Service Transition and management of change

    –   Service Operation: the metrics required for the fourth phase of the service lifecycle

    –   Continual Service Improvement: the metrics required for the final ongoing phase of the service lifecycle

    –   Appendices

    The ultimate aim of Service Management is to produce Value; this is delivered during Service Operation, and measurements facilitate the definition and scoping of Continual Service Improvement. Corporate Strategy and Governance give rise to new requirements that drive the design of Services, Processes and Metrics. The design of metrics is critical to assuring the efficacy of the Service Lifecycle processes and in governing the delivery of value.

    2   Managing, metrics and perspective

    2.1   Managing

    As with a lot of folklore, there are wise sayings on both sides of the question about how to use metrics as part of management:

    ‘You can’t manage what you can’t measure’ [attributed to Tom DeMarco, developer of Structured Analysis]

    ‘A fool with a tool is still a fool’ [attributed to Grady Booch, developer of the Unified Modeling Language]

    Both of these are true. Managing requires good decision-making, and good decision-making requires good knowledge of what is to be decided. ITIL®’s concept of Knowledge Management is designed to avoid this pitfall.

    2.2   Perspective

    Relying simply on numbers given by metrics, with no context or perspective, can be worse than having no information at all, apart from ‘gut feel’. Metrics must be properly designed, properly understood and properly collected, otherwise they can be very dangerous. Metrics must always be interpreted in terms of the context in which they are measured in order to give a perspective on what they are likely to mean.

    To give an example: a Service Manager might find that the proportion of emergency changes to normal changes has doubled. With just that information, most people would agree that something has gone wrong – why are there suddenly so many more emergency changes? This could be correct, but here are some alternative explanations of why this is the case:

    •   If the change process is new, this may reflect the number of emergency changes that the organization actually requires more accurately. Previously these changes might have been handled as ordinary changes without proper recognition of the risk.

    •   In a mature organization, a major economic crisis might have intensified the risk of a number of previously low-risk activities. It would be the proper approach for the Service Manager, recognizing changes related to these, to make them emergency changes.

    •   The change management process might have been improved substantially in the current quarter, so much so that the number of ordinary changes that have been turned into standard changes has led to a halving of the number of normal changes. The number of emergency changes has stayed exactly the same, but the ratio is higher because of the tremendous improvement in the change process.

    Even a very simple and apparently uncontroversial metric can mean very different things. As with most management, there is no ‘silver bullet’. Metrics must be properly understood, within context, in order to be useful tools. To ensure that they are understood, metrics must be designed. For best results, service metrics should be designed when the Service itself is designed, as part of the Service Design Package, which is why the ‘Design’ section in this book is the largest.

    The metric template used in this book includes the field ‘Context’ specifically to allow each metric to be properly documented so that, when it is designed, the proper interpretation and possible problems with it can be taken into account. The design of a metric is not simply the measure and how it is taken; it must also make it clear how it will be reported and how management will be able to keep a perspective on what it means for the business - particularly its contribution to measuring value.

    This is also a reason why the number of metrics deployed must be kept as small as possible (but not, as Einstein put it, ‘smaller’!). Metrics must also be designed to complement each other. In the example above, the ratio between emergency and normal changes is an important and useful one to measure, but it could be balanced by measuring the number of standard changes, the business criticality of changes and, perhaps, the cost of changes.

    These would all help to embed the metric into a context that allows proper interpretation.
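    The emergency-change example can be sketched numerically. The figures below are invented for illustration: emergency changes stay constant while process improvement converts half of the normal changes into standard changes, doubling the headline ratio even though nothing has gone wrong.

```python
def change_ratios(emergency, normal, standard):
    """Return the emergency:normal ratio plus the context counts needed to interpret it."""
    return {
        "emergency_to_normal": emergency / normal,
        "emergency": emergency,
        "normal": normal,
        "standard": standard,
    }

# Quarter 1: baseline. Quarter 2: half of the normal changes become standard changes.
q1 = change_ratios(emergency=10, normal=200, standard=50)
q2 = change_ratios(emergency=10, normal=100, standard=150)

# The headline ratio doubles (0.05 -> 0.10), but the unchanged emergency count
# and the rising standard-change count reveal an improvement, not a problem.
```

Reporting the complementary counts alongside the ratio is exactly the kind of context embedding the text describes.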

    2.2.1   Metrics for improvement and performance

    Metrics are needed not only to identify areas needing improvement, but also to guide the improvement activities. For this reason, metrics in this book are often not single numbers, but allow discrimination between, for example, Mean Time To Repair (MTTR) for Services, Components, Individuals and third parties – while also distinguishing between low priority incidents and (high priority) critical incidents. The headline rate shows overall whether things are improving, but these component measures make it possible to produce specific, directed improvement targets based on where or what is causing the issue.
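    A minimal sketch of such a discriminating MTTR calculation, assuming incident records with hypothetical scope, priority and repair-time fields:

```python
from collections import defaultdict
from statistics import mean

def mttr_breakdown(incidents):
    """Headline MTTR plus a per-(scope, priority) breakdown.

    `incidents` is a list of dicts with assumed keys:
    scope (service, component, individual or third party), priority, repair_hours.
    """
    buckets = defaultdict(list)
    for inc in incidents:
        buckets[(inc["scope"], inc["priority"])].append(inc["repair_hours"])
    breakdown = {key: mean(times) for key, times in buckets.items()}
    headline = mean(inc["repair_hours"] for inc in incidents)
    return headline, breakdown

# Invented sample data
incidents = [
    {"scope": "Email Service", "priority": "critical", "repair_hours": 2.0},
    {"scope": "Email Service", "priority": "low", "repair_hours": 8.0},
    {"scope": "Network", "priority": "critical", "repair_hours": 1.0},
]
headline, breakdown = mttr_breakdown(incidents)
```

The headline figure shows the overall trend, while the per-bucket means point improvement effort at the scope and priority actually causing the issue.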

    Metrics are often used to measure, manage and reward individual performance. This has to be handled with great care. Individual contributions that are significant to the organization may be difficult to measure. Some organizations use time sheets to try to understand where staff are spending their time, and thus how their work contributes to the value delivered. These tend to be highly flawed sources of information. Very few individuals see much value in filling in timesheets accurately, and even those who find them useful regard them as inadequate records of busy, multi-tasking days.

    There is a less subjective method – that of capturing the contribution of individuals and teams as documents in the Service Knowledge Management System (SKMS). For this to work, a good document management system with a sound audit trail is required, along with software that will identify what type of documents have been read, used (as in used as a template or used as a Change Model), updated (as a Checklist will be updated after a project or change review) or created (as in a new Service Design Package (SDP) or entry in the Service Catalogue). Each type of document update can be given a weight, reflecting the value to the organization (a new SDP that moves to the state ‘chartered’ is a major contribution, while an existing Request for Change (RFC) that is updated to add more information on the risk of the change would be a minor contribution).

    Properly managed, such a scheme can give a very accurate and detailed picture of where in the Service Lifecycle work is being done, so missing areas (for example, maybe there are not enough Change Models being created) can be highlighted and the increased weighting communicated to the organization. If these measures are properly audited they can be used as incentives for inter-team competition as well as for finding the individuals worth singling out for recognition and reward. Being an objective system this form of reward, based on the actual contribution to value delivered, can be highly valued, even by very technical and senior staff, as well as being an incentive (and measure of progress) for new or junior staff.
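    A weighted contribution score of this kind could be computed from an SKMS audit trail roughly as follows. The document types, actions and weights here are assumptions for illustration, not values from the book:

```python
# Hypothetical weights per (document type, action), reflecting value to the organization.
WEIGHTS = {
    ("SDP", "created"): 20,        # new Service Design Package is a major contribution
    ("Change Model", "used"): 5,
    ("Checklist", "updated"): 3,
    ("RFC", "updated"): 1,         # minor: extra risk information on an existing RFC
}

def contribution_score(audit_trail):
    """Sum weighted document actions from an SKMS audit trail, per person."""
    scores = {}
    for person, doc_type, action in audit_trail:
        scores[person] = scores.get(person, 0) + WEIGHTS.get((doc_type, action), 0)
    return scores

# Invented audit-trail entries: (person, document type, action)
trail = [
    ("alice", "SDP", "created"),
    ("alice", "RFC", "updated"),
    ("bob", "Checklist", "updated"),
]
scores = contribution_score(trail)   # alice: 21, bob: 3
```

Adjusting a weight (for example, raising Change Model creation) is how the increased emphasis on a missing area would be communicated to the organization.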

    In certain circumstances, external contracts particularly, penalty clauses may be required. Ideally these should be set so that they are not triggered by minor deviations that can swiftly be remedied. Also, ideally, positive incentives should cover most of the relationship, with penalty clauses kept as a last resort. If penalty clauses are invoked frequently, the business relationship is likely, eventually, to break down; before this happens, it would be wise to change supplier, or to fundamentally re-evaluate and renegotiate the contract.

    2.2.2   Metrics from the top downwards

    Metrics can be understood to work from the top downwards. Business measures (such as profit, turnover, market share, share price, price/earnings ratio) are the ultimate measures of success and all other metrics should, ultimately, contribute to the success of these metrics. Service Management identifies services; some deliver business results directly, some contribute indirectly. These can be measured by Service Metrics. Business services and internal services often depend on processes for their proper operation, and these can be measured by Process Metrics. Services and Processes rely on the underlying technologies that deliver them, and these can be measured by Technology Metrics. Ideally, the sequence is Business Measure <- Service Measure <- Process Measure <- Technology Measure. Some metrics have value outside this direct relationship, but, where possible, metrics should be evaluated for how well they contribute to this value chain.

    For the above to work, metrics, of whatever sort, must be designed as an integrated part of the design of any Service, Process, or Technology.

    2.3   Full metric description

    Useful metrics are more than just measures. A well-defined metric should also have these attributes (Description, Dependencies, Data):

    •   Be under Change Control in the Metrics Register

    •   Have a name/ID

    •   Have a unique reference

    •   Have an owner

    •   Have a version number

    •   Have a category, e.g.:

    –   Business Metric

    –   Service Metric

    –   Process Metric

    –   Technology Metric

    •   Show status (with transition dates)
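    Taken together, these attributes suggest a Metrics Register entry along the following lines. This is a sketch only; the field values and status vocabulary are invented:

```python
from dataclasses import dataclass
from enum import Enum

class MetricCategory(Enum):
    BUSINESS = "Business Metric"
    SERVICE = "Service Metric"
    PROCESS = "Process Metric"
    TECHNOLOGY = "Technology Metric"

@dataclass
class MetricRecord:
    """One Metrics Register entry covering the attributes listed above."""
    name: str
    reference: str            # unique reference within the register
    owner: str
    version: str              # maintained under Change Control
    category: MetricCategory
    status: str               # assumed vocabulary, e.g. draft / approved / retired
    status_date: str          # transition date for the current status

# Hypothetical entry
rec = MetricRecord(
    name="Change success rate",
    reference="MET-0042",
    owner="Change Manager",
    version="1.0",
    category=MetricCategory.PROCESS,
    status="approved",
    status_date="2020-06-10",
)
```

Keeping the version and status transition date on the record is what allows the register itself to sit under Change Control.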
