Why Should Maintenance Analysis Be Kept Simple

Why should maintenance be kept simple?

----- Original Message -----
From: "Steve Turner" <[email protected]>
To: <[email protected]>
Sent: Wednesday, May 08, 2002 9:53 PM
Subject: Re: [plantmaint] I Vote for Simple Analysis Methods List

Obviously, Manou would like an explanation of this statement that I wrote in the paper under discussion at http://www.pmoptimisation.com.au/downloads/comparison_rcm_pmo.pdf

My statement was as follows: "Empirical methods can be easily applied without using computers whereas most statistical methods require software packages to run them".

My main contention here is that for some reason, we engineers seem to take great
delight in turning something which is fundamentally simple into something that is very
complex. By doing this:

A. We introduce a number of assumptions, many of which may be (and often are) flawed,
B. We purchase expensive software which:

1. Takes the focus off the task of analysis, and off RCM/PMO training and implementation, and puts it onto software implementation and software training,

2. Takes the onus of analysis away from the very people who understand the plant (the operators and tradesmen), as these people are often not sufficiently computer literate to run the complex software and almost never statistically competent. Hence we lose buy-in, and with it the enthusiasm to create a living program that is simple to understand and simple to change in a logical way. The PM program ends up being owned by the engineer in the back room with the computer rather than the folk who do the work and know what works and what does not.

3. Takes far more time to do the analysis than it should.

4. Leaves us with a program that is easily discredited on the basis of the assumptions and the “dodgy” data.

I am certainly not against computers and modern technology; however, I believe in "horses for courses". For example, some of the best planning systems I have seen are done on a series of whiteboards, and some of the best reporting and log book systems are paper based: carbon-copy books with three tear-out strips that go out to the right people and are filed where they can be easily searched; books kept on the line so that anyone passing by can flick through the log and get a clear picture of how things are running, and how they have run for the past day or week for that matter. No logging on to a computer, just simple systems working effectively (by the way, the data does get entered into a database for obvious reasons).
Now you may be wondering how I come to the conclusion that the analysis issues are not complex. By empirical methods I mean asking the right people the right questions, primarily to determine the appropriate PM task and interval. In the majority of cases, condition monitoring is chosen as the means of managing the failure. It is widely accepted that the intervals of inspection for condition monitoring are primarily driven by the rates of decay of assets. The point is that the rates of decay of an asset at failure mode level are rarely measured or collected with any degree of rigor (if at all); hence there is rarely data available to support anything other than an empirical approach.

So point one is that if I ask the right questions of the right people, I will get the best
assessment of the rates of decay very quickly, whereas if I were to rely on statistical
methods, I may never get the information in sufficient quantity to make reasonably
confident predictions.
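
To give a feel for how light the arithmetic on the empirical route is, here is a minimal sketch in Python of the common P-F interval rule of thumb: ask the people who know the asset how long a developing failure stays detectable before it becomes a functional failure, then inspect at some fraction of that window so there is more than one chance to catch it. The function name and all the numbers are made up purely for illustration.

# Minimal sketch: setting a condition-monitoring interval from an
# empirically estimated P-F interval (all figures are hypothetical).

def inspection_interval(pf_interval_days, looks_inside_window=2):
    """Inspection interval as a fraction of the P-F interval.

    pf_interval_days    -- how long the failure remains detectable before it
                           becomes a functional failure (asked of the people
                           who know the asset, not mined from a database)
    looks_inside_window -- how many inspection opportunities we want inside
                           that window; 2 (i.e. half the P-F interval) is the
                           common rule of thumb
    """
    if pf_interval_days <= 0 or looks_inside_window < 1:
        raise ValueError("P-F interval and number of looks must be positive")
    return pf_interval_days / looks_inside_window

# Example: the fitter reckons a bearing runs audibly rough for about six
# weeks before it lets go, and we want two chances to catch it.
print(inspection_interval(pf_interval_days=42, looks_inside_window=2))  # 21.0 days

That is the whole calculation; the hard part, and the valuable part, is the conversation with the fitter, not the division.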

The second approach to preventive maintenance is what we may call "Hard Time" or
Scheduled discard / refurbishment tasks. The intervals of these tasks rely on some
information regarding the failure patterns and the consequence of failure. Again, in most
industrial applications, there are two “in a sense contradictory" situations.

If the component has a dominant failure mode which is age related and has a high
frequency, then the maintainers will usually know what that is because they change the
component regularly. They do not need a sophisticated database for this.

On the other hand, if the failure is age related and low frequency, then it will take a long
time to get any statistically significant data unless there are lots of these components in
the same service. However, in collecting the data, one could easily suggest that
maintenance has failed, as its primary task is to remove the failures before they occur.
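
To put a rough number on how slowly that statistical confidence arrives, here is a small sketch assuming the kindest possible case: an exponential (constant hazard) model, a failure-truncated test and the textbook chi-square confidence interval for mean life. The fleet hours and failure count are invented, and scipy is assumed to be available.

# Sketch: how wide the uncertainty on mean life remains when only a handful
# of failures have been observed (exponential model, failure-truncated test).
from scipy.stats import chi2

def mean_life_ci(total_time_on_test, failures, confidence=0.90):
    """Two-sided chi-square confidence interval for the mean life (MTTF) of an
    exponential model, given `failures` failures over `total_time_on_test`
    accumulated operating hours."""
    alpha = 1.0 - confidence
    lower = 2.0 * total_time_on_test / chi2.ppf(1.0 - alpha / 2.0, 2 * failures)
    upper = 2.0 * total_time_on_test / chi2.ppf(alpha / 2.0, 2 * failures)
    return lower, upper

# Example: 3 failures across a fleet that has accumulated 60,000 operating hours.
low, high = mean_life_ci(total_time_on_test=60_000, failures=3)
print(f"point estimate ~ {60_000 / 3:.0f} h, 90% interval {low:.0f} to {high:.0f} h")

With three failures the 90% interval spans roughly a factor of eight, which is of little practical use for setting a discard interval, and the age-related (Weibull) case is worse still because a shape parameter has to be estimated as well.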

So, in order to get the information that is so desperately needed to succeed, we must first set out to fail. This makes no sense. So once again, in an industrial application, we are forced either:

1) To make sweeping assumptions about means and distributions and stuff the numbers into simulation algorithms (I personally have a lot of difficulty accepting the validity of these programs, as you may have gathered by now; a sketch of what I mean follows this list), or

2) To take the empirical approach, which relies on intuition, engineering training and the sound judgment of experienced people who have probably dealt with similar failure modes many times before.
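
For what it is worth, the sketch below is a textbook age-replacement cost-rate model in Python, with entirely made-up Weibull parameters and cost figures. It is not any particular vendor's algorithm; it is only there to show how much the "optimal" answer moves when the assumed shape of the failure distribution changes, which is precisely the sort of sweeping assumption I mean.

# Textbook age-replacement cost model (made-up numbers), to show how
# sensitive the "optimal" interval is to the assumed failure distribution.
import numpy as np

def optimal_replacement_age(beta, eta, cost_planned, cost_failure,
                            horizon=None, steps=4000):
    """Replacement age T that minimises the expected cost per unit time
    g(T) = [Cp*R(T) + Cf*F(T)] / integral_0^T R(t) dt,
    with a Weibull survival function R(t) = exp(-(t/eta)**beta)."""
    if horizon is None:
        horizon = 3.0 * eta
    t = np.linspace(1e-6, horizon, steps)
    dt = t[1] - t[0]
    R = np.exp(-(t / eta) ** beta)       # probability the item survives to age t
    F = 1.0 - R                          # probability it has failed by age t
    cycle_length = np.cumsum(R) * dt     # expected length of a replacement cycle
    cost_rate = (cost_planned * R + cost_failure * F) / cycle_length
    i = int(np.argmin(cost_rate))
    return t[i], cost_rate[i]

# Same characteristic life and the same cost ratio; only the assumed
# Weibull shape parameter changes between the two runs.
for beta in (1.2, 3.0):
    age, rate = optimal_replacement_age(beta, eta=1000.0,
                                        cost_planned=1.0, cost_failure=10.0)
    print(f"assumed shape {beta}: replace at ~{age:.0f} h, cost rate ~{rate:.5f}/h")

Nothing in the plant changes between the two runs; only the assumed distribution does, yet the recommended age roughly doubles and the achievable cost per hour differs by more than a factor of two. If the shape parameter is itself a guess, the output is a guess dressed up as an optimum.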

No doubt, Manou, there are instances where statistical applications can work wonderfully well and where computer simulation packages stand head and shoulders above the rest. However, in my patch we are almost always short of the data we want. We therefore need to provide simple thought processes that help people to make good, educated assessments. If I were to rely on three or four guesses jammed into a software tool (that perhaps only people of your statistical background really understand), then I would be most uncomfortable about turning in a decent result for a client. The trouble is that a bad result rarely surfaces immediately, but that is another subject.
Regardless of anything else though, the bottom line is implementation. The more ownership the program has on the "shop floor", the more successful it will be (in my experience). Doing a bunch of elegant statistical analysis, or any analysis for that matter, is a cost to the business; it remains a cost until the moment it is implemented. Since implementation is often the tough bit, we focus heavily on implementation rather than on trying to clean up data and support assumptions, an effort which may deliver a solution no better than the initial assessment. I love statistics, but I also understand its limitations.

Anyone wanting to read more on this subject can download a paper called "Understanding the Downsides of Statistical Maintenance Analysis Methods" from our website at www.pmoptimisation.com.au. Readers should be aware that this paper discusses only one particular type of approach, a "cost minimisation" approach. Readers may nonetheless recognise some of the points raised in packages that use different algorithms.

I hope this helps to at least present my opinions. If you agree, then great; if you don't, well, I hope I have at least challenged your thinking.

Regards to all

Steve Turner

Director
OMCS International
[email protected]
ph 61 419397035
www.reliabilityassurance.com
