EEM Notes - Unit 1 - Extra Note
STATIC CHARACTERISTICS
1) Accuracy
Accuracy is the closeness with which an instrument reading approaches the true value of the quantity being measured. For example, if the accuracy of an instrument is specified as ±5% for the range 0 to 200 ºC on the temperature scale, the reading may lie within ±10 ºC of the true value.
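As a quick illustration of this calculation, the Python sketch below converts a percent-of-full-scale accuracy specification into an absolute error band; the function name and values are illustrative only.

```python
# Sketch: converting a +/-% of full-scale accuracy specification into an
# absolute error band (values here match the 0-200 degC, +/-5% example).
def error_band(full_scale, accuracy_percent):
    """Return the absolute error limit implied by a % of full-scale spec."""
    return full_scale * accuracy_percent / 100.0

limit = error_band(full_scale=200.0, accuracy_percent=5.0)
print(f"Reading may lie within +/-{limit:.1f} degC of the true value")  # +/-10.0 degC
```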
2) Precision
Precision is a measure of the reproducibility of the measurements, i.e. the closeness of agreement among a number of repeated readings of the same quantity taken under the same conditions.
3) Bias
Bias is a quantitative term describing the difference between the average of repeated readings made on the same instrument and the true value. (It is the characteristic of a measuring instrument of giving indications of a measured quantity whose average value differs from the true value.)
4) Sensitivity
Sensitivity is defined as the ratio of change in output signal (response) to the change in
input signal (measurand). It is the relationship indicating how much output changes when input
changes.
If the sensitivity is constant, the system is said to be a linear system. If the sensitivity is variable, the system is said to be a non-linear system.
Definition of sensitivity for (a) Linear and (b) Non linear instrument
When the calibration curve is linear, as in the figure, the sensitivity of the instrument can be defined as the slope of the calibration curve. In this case the sensitivity is constant over the entire range of the instrument. If the curve is not a straight line (a non-linear instrument), the sensitivity varies with the input, or varies from one range to another, as in the figure.
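The slope calculation described above can be sketched in code. The Python fragment below estimates sensitivity as the least-squares slope of a calibration curve; the input/output data and units are hypothetical.

```python
# Sketch: estimating sensitivity as the slope of a calibration curve.
# The input/output pairs below are hypothetical calibration data.
inputs  = [0.0, 10.0, 20.0, 30.0, 40.0]    # measurand, e.g. degC
outputs = [0.0, 2.0, 4.1, 6.0, 8.0]        # response, e.g. mV

n = len(inputs)
mean_x = sum(inputs) / n
mean_y = sum(outputs) / n

# Least-squares slope = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2)
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(inputs, outputs))
den = sum((x - mean_x) ** 2 for x in inputs)
sensitivity = num / den
print(f"Sensitivity ~ {sensitivity:.3f} mV per degC")
```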
5) Linearity
Linearity is one of the most desirable characteristics of an instrument or measurement system. Linearity of the instrument means that the output is linearly or directly proportional to the input over the entire range of the instrument. The degree of straight-line relationship between the output and the input is called the linearity of the instrument.
Nonlinearity: the maximum difference or deviation of the output curve from the specified idealized straight line, as shown in the figure. Independent nonlinearity may be defined as the maximum deviation of the output from the idealized straight line, expressed as a percentage of the full-scale reading or of the actual reading.
6) Resolution
Resolution or discrimination is the smallest change in the input value that is required to cause an appreciable change in the output. (The smallest increment of input, or input change, which can be detected by an instrument is called its resolution or discrimination.)
7) Hysteresis
Hysteresis is the non-coincidence of the loading and unloading curves of the output. The hysteresis effect shows up in many physical, chemical or electrical phenomena.
When the input increases, the output also increases and a calibration curve can be drawn. If the input is then decreased from its maximum value, the output also decreases but does not follow the same curve; there is a residual output when the input reaches zero. This phenomenon is called hysteresis. The difference between the output values for increasing and decreasing input is known as the hysteresis error, as shown in the figure.
(Different outputs are obtained for the same value of the quantity being measured, depending on whether that value is reached by a continuously increasing change or a continuously decreasing change.)
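A rough sketch of how the hysteresis error could be evaluated from calibration data is given below; the loading/unloading readings are hypothetical and the error is taken as the maximum gap between the two curves.

```python
# Sketch: hysteresis error as the maximum difference between the output
# recorded for increasing input and for decreasing input at the same points.
# The calibration data below are hypothetical.
inputs      = [0, 25, 50, 75, 100]          # % of range
output_up   = [0.0, 2.4, 4.9, 7.4, 10.0]    # loading (increasing input)
output_down = [0.3, 2.7, 5.2, 7.6, 10.0]    # unloading (decreasing input)

hysteresis_error = max(abs(u - d) for u, d in zip(output_up, output_down))
print(f"Maximum hysteresis error = {hysteresis_error:.2f} output units")
```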
8) Dead Zone
Dead zone or dead band is defined as the largest change of the input quantity for which there is no output from the instrument, due to factors such as friction, backlash and hysteresis within the system. (The region up to which the instrument does not respond to an input change is called the dead zone.)
Dead time is the time required by an instrument to begin to respond to a change in the input quantity.
9) Backlash
The maximum distance through which one part of the instrument can be moved without disturbing the other part is called backlash. (Backlash may be defined as the maximum distance or angle through which any part of the instrument can be moved without causing any motion of the next part of the system.)
Reasons for the presence of backlash in an instrument include allowances for lubrication, manufacturing errors, deflection under load, and thermal expansion.
10) Drift
Drift is an undesirable change in output over a period of time that is unrelated to changes in the input or operating conditions. Drift occurs in instruments due to internal temperature variations, ageing effects, high stress, etc.
Zero drift refers to the changes that occur in the output when the input is zero. It is expressed as a percentage of full-range output.
11) Threshold
The minimum value of input necessary to activate an instrument to produce an output is termed its threshold, as shown in the figure. (Threshold is the minimum value of the input required to cause the pointer to move from the zero position.)
Threshold effect
12) Input Impedance
The magnitude of the impedance of the element connected across the signal source is called the input impedance. The figure shows a voltage signal source and an input device connected across it.
From the above two expressions it is clear that a low input impedance device connected across the voltage signal source draws more current and more power from the signal source than a high input impedance device.
The loading effect is the inability of the system to faithfully measure, record or control the input signal in accurate form.
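The loading effect described above can be illustrated numerically. The Python sketch below assumes a 10 V source behind a 100 kohm source resistance and two hypothetical voltmeters with different input resistances.

```python
# Sketch: loading effect of a voltmeter with finite input resistance.
# A 10 V source behind a 100 kohm source resistance is read by two
# hypothetical voltmeters: 1 Mohm and 100 Mohm input resistance.
def indicated_voltage(v_source, r_source, r_meter):
    """Voltage across the meter: divider of source resistance and meter input resistance."""
    return v_source * r_meter / (r_source + r_meter)

v_true = 10.0
for r_meter in (1e6, 100e6):
    v_read = indicated_voltage(v_true, r_source=100e3, r_meter=r_meter)
    print(f"R_in = {r_meter/1e6:.0f} Mohm -> reads {v_read:.3f} V "
          f"(error {100*(v_true - v_read)/v_true:.2f} %)")
```

The lower the meter's input impedance relative to the source impedance, the larger the current drawn and the larger the error in the indicated voltage, which is exactly the loading effect described above.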
13) Repeatability
Repeatability is defined as the ability of an instrument to give the same output for repeated applications of the same input value under the same environmental conditions.
14) Reproducibility
Reproducibility is defined as the ability of an instrument to reproduce the same output for repeated applications of the same input value under different environmental conditions.
DYNAMIC CHARACTERISTICS
The dynamic behaviour of an instrument is studied by subjecting its primary element to standard test inputs, such as the following.
1. Step Input: a step change, in which the primary element is subjected to an instantaneous and finite change in the measured variable.
2. Linear Input: a linear change, in which the primary element follows a measured variable changing linearly with time.
(i) Speed of Response
It is the rapidity with which an instrument responds to changes in the measured quantity.
(ii) Fidelity
It is the degree to which an instrument indicates the changes in the measured variable
without dynamic error (the faithful reproduction or fidelity of an instrument is its ability to reproduce an input signal faithfully (truly)).
(iii) Lag
Measuring lag is the retardation or delay in the response of an instrument to changes in the measured variable. Lags are of two types:
Retardation type: the response of the instrument begins immediately after a change in the measured variable occurs.
Time delay type: the response of the instrument begins after a dead time following the application of the input quantity.
(iv) Dynamic Error
It is the difference between the true value of a quantity changing with time and the value indicated by the instrument, assuming zero static error. It is also called the measurement error.
When measurement problems are concerned with rapidly varying quantities, the dynamic relations between the instrument's input and output are generally described by means of differential equations.
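As a simple illustration of dynamic error and measuring lag, the sketch below assumes a first-order instrument with a hypothetical time constant responding to a step input; the exponential model is a common idealization, not a statement about any particular instrument.

```python
# Sketch: dynamic error and measurement lag of a hypothetical first-order
# instrument (time constant tau) responding to a step change in the input.
import math

tau = 2.0           # assumed time constant, seconds
step_value = 100.0  # true value after the step

for t in (0.0, 1.0, 2.0, 5.0, 10.0):
    indicated = step_value * (1.0 - math.exp(-t / tau))   # first-order response
    dynamic_error = step_value - indicated                # true value minus indication
    print(f"t = {t:4.1f} s  indicated = {indicated:6.2f}  dynamic error = {dynamic_error:6.2f}")
```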
Errors
The difference between the measured value of a quantity and its true value (reference value) is called an error.
Classification of Errors
No measurement can be made with perfect accuracy (a degree of error must always be assumed); in reality, no measurement can ever be made with 100% accuracy. It is therefore important to find the actual accuracy and the different types of errors that can occur in measuring instruments. Errors may arise from different sources and are usually classified as follows.
1. Gross Errors
2. Systematic Errors
a) Instrumental errors
b) Environmental errors
c) Observational errors
3. Random Errors
1. Gross Errors
The main source of gross errors is human mistakes in reading or using instruments and in recording and calculating the measured quantity. As long as human beings are involved, they may grossly misread the scale, and some gross errors will certainly occur in the measured values.
Example: Due to an oversight, an experimenter may read the temperature as 22.7 ºC while the actual reading is 32.7 ºC. He may also transpose the reading while recording; for example, he may read 16.7 ºC and record 27.6 ºC instead.
The complete elimination of gross errors is probably impossible; one should try to anticipate and correct them. Some gross errors are easily identified while others may be very difficult to detect. Gross errors can be avoided in the following two ways.
Great care should be taken in reading and recording the data.
Two, three or even more readings should be taken of the quantity being measured, by different experimenters and at different reading points (different environmental conditions of the instrument), to avoid repeating the same error. It is therefore advisable to take a large number of readings, as close agreement between readings assures that no gross error has occurred in the measured values.
2. Systematic Errors
Systematic errors are divided into the following three categories.
i. Instrumental Errors
ii. Environmental Errors
iii. Observational Errors
a) Instrumental Errors
These errors arise due to the following three reasons (sources of error).
i) Inherent shortcomings of instruments
These errors are inherent in instruments because of their mechanical structure and arise from the construction, calibration or operation of the instruments or measuring devices. These errors may cause the instrument to read too low or too high. They can be reduced as follows:
The procedure of measurement must be carefully planned; substitution methods or calibration against standards may be used for the purpose.
Correction factors should be applied after determining the instrumental errors.
ii) Misuse of Instruments
In some cases errors occur in measurement due to the fault of the operator rather than that of the instrument. A good instrument used in an unintelligent way may give wrong results.
iii) Loading Effects
These errors are committed through loading effects arising from improper use of an instrument for measurement work. In a measurement system, loading effects should be identified and corrections made, or more suitable instruments should be used.
For example, a well calibrated voltmeter may give a misleading (false) voltage reading when connected across a high resistance circuit. The same voltmeter, when connected across a low resistance circuit, may give a more reliable reading (a dependable, steady or true value).
In this example, the voltmeter has a loading effect on the circuit, altering the actual circuit conditions by the measurement process. Errors caused by the loading effect of meters can therefore be avoided by using them intelligently.
b) Environmental Errors
These errors are due to conditions external to the measuring device, such as the surrounding temperature, pressure, humidity, dust, vibrations and external magnetic or electrostatic fields. They can be reduced by the following methods.
Example: variations in resistance with temperature can be minimized by using resistance material with a very low temperature coefficient of resistance.
Employing techniques which eliminate the effects of these disturbances: for example, the effect of humidity, dust, etc. can be entirely eliminated by tightly sealing the equipment.
Applying computed corrections: efforts are normally made to avoid the use of computed corrections, but where such corrections are necessary, they are incorporated in the computation of the results.
c) Observational Errors
There are many sources of observational errors. As an example, the pointer of a voltmeter may rest slightly above the surface of the scale. Thus an error on account of PARALLAX will be incurred unless the line of vision of the observer is exactly above the pointer. To minimize parallax errors, highly accurate meters are provided with mirrored scales, as shown in the figure.
We can eliminate this parallax error by having the pointer and the scale in the same plane, as shown in the figure.
Observational errors also occur due to the involvement of human factors. For example, there are observational errors in measurements involving the timing of an event. Different observers may produce different results, especially when sound and light measurements are involved. The complete elimination of this error can be achieved by using a digital display of the output.
3. Random Errors
These errors occur due to unknown causes and are observed when the magnitude and polarity of a measurement fluctuate in a changeable (random) manner.
The quantity being measured is affected by many happenings or disturbances and ambient influences about which we are unaware; these are lumped together and called random or residual effects. The errors caused by these disturbances are called random errors. Since these errors remain even after the systematic errors have been taken care of, they are called residual (random) errors.
Random errors cannot normally be predicted or corrected, but they can be minimized by a skilled observer using a well-maintained, quality instrument.
True value
The true value of the quantity being measured is defined as the average of an infinite number of measured values when the average deviation due to the various contributing factors tends to zero. This is an ideal situation, and it is not possible to determine the true value of a quantity by experimental means. Normally, an experimenter would never know whether the quantity obtained experimentally is the true value of the quantity or not.
In practice, the true value is determined by a "standard method", that is, a method agreed upon by experts as being sufficiently accurate.
Static Error
Static error is defined as the difference between the measured value and the true value of the quantity being measured. It is expressed as follows:
δA = Am – At ... (1)
Where, δA = error, Am = measured value of the quantity and At = true value of the quantity.
δA is also called the absolute static error of quantity A and is expressed as follows:
ε0 = δA ... (2)
Where, ε0 = absolute static error of quantity A under measurement.
The absolute value of δA does not exactly specify the accuracy of measurement, so the quality of measurement is expressed by the relative static error.
Relative static error is defined as the ratio of the absolute static error to the true value of the quantity being measured:
εr = δA / At = ε0 / At
When the absolute static error ε0 = δA is small, the difference between the measured and true values is negligible (Am – At ≈ 0), so Am ≈ At (that is, εr << 1).
Static Error Correction (Method of Correction):
Static correction is the difference between the true value and the measured value of the quantity.
δC = At – Am ... (5)
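The relations (1), (2) and (5) above can be collected into a small worked example; the measured and true values below are hypothetical.

```python
# Sketch: absolute static error, relative error and static correction,
# following equations (1)-(5) above, with hypothetical readings.
measured = 99.2   # Am
true     = 100.0  # At

abs_error  = measured - true   # delta A = Am - At   ... (1)
rel_error  = abs_error / true  # relative static error
correction = true - measured   # delta C = At - Am   ... (5)

print(f"Absolute error = {abs_error:+.2f}")
print(f"Relative error = {100*rel_error:+.2f} %")
print(f"Correction     = {correction:+.2f}")
```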
SOURCES OF ERRORS
The sources of error, other than the inability of a piece of hardware to provide a true measurement, are listed below:
1) Insufficient knowledge of process parameters and design conditions.
2) Poor design.
3) Change in process parameters, irregularities, upsets (disturbances), etc.
4) Poor maintenance.
5) Errors caused by the people who operate the instrument or equipment.
6) Certain design limitations.
Multi Sample Test: In a multi-sample test, repeated measurement data are acquired by different instruments, different methods of measurement and different observers.
Arithmetic Mean
The most probable value of a measured quantity is the arithmetic mean of the readings taken. The best approximation is obtained when the number of readings of the same quantity is very large. The arithmetic mean or average X of a measured variable is calculated by taking the sum of all readings and dividing by the number of readings.
The average is given by,
X = (x1 + x2 + x3 + ... + xn) / n = Σx / n
The deviation is the departure of an observed reading from the arithmetic mean of the group of readings. Let the deviation of reading x1 be d1 and that of x2 be d2, etc.
d1 = x1 – X
d2 = x2 – X
...
dn = xn – X
The algebraic sum of the deviations is zero (d1 + d2 + .... + dn = 0).
Average Deviation:
Average deviation is defined as the average of the modulus (without regard to sign) of the individual deviations, and is given by,
D = (|d1| + |d2| + |d3| + ... + |dn|) / n = Σ|d| / n
Where, D= Average Deviation.
The average deviation is used to indicate the precision of the instruments used in making the measurements. Highly precise instruments will give a low average deviation between readings.
Standard Deviation
The standard deviation is the root mean square of the deviations of the readings from the mean value; for a finite (small) number of readings n it is given by σ = √(Σd² / (n – 1)).
Variance
The variance is the mean square deviation, which is the same as the standard deviation except for the square root; the variance is simply the square of the standard deviation.
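A compact numerical sketch of the statistics discussed above (mean, deviations, average deviation, variance and standard deviation) is given below, using a small set of hypothetical readings and the (n − 1) divisor for a finite sample.

```python
# Sketch: arithmetic mean, deviations, average deviation, standard
# deviation and variance for a small set of hypothetical readings.
import math

readings = [99.8, 100.1, 100.0, 99.9, 100.2]
n = len(readings)

mean = sum(readings) / n
deviations = [x - mean for x in readings]            # d_i = x_i - X (sum ~ 0)
avg_deviation = sum(abs(d) for d in deviations) / n  # D = sum(|d|) / n
variance = sum(d * d for d in deviations) / (n - 1)  # (n - 1) for a small sample
std_dev = math.sqrt(variance)

print(f"Mean = {mean:.3f}, average deviation = {avg_deviation:.3f}")
print(f"Variance = {variance:.5f}, standard deviation = {std_dev:.4f}")
```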
Histogram:
Example: The following table 3.1 shows a set of 50 readings of a length measurement. The most probable or central value of the length is 100 mm, as represented in the histogram figure.
Table 3.1
Length (mm)    Number of observed readings (frequency of occurrence)
99.7 1
99.8 4
99.9 12
100.0 19
100.1 10
100.2 3
100.3 1
Total number of readings =50
Histogram
This histogram indicates the number of occurrences of each particular value. The central value of 100 mm occurs 19 times, with readings recorded to the nearest 0.1 mm, as shown in figure 3.3. The bell-shaped dotted curve is called the normal or Gaussian curve.
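The frequency table of Table 3.1 can be built programmatically; the sketch below uses only a handful of illustrative readings rather than the full set of 50.

```python
# Sketch: building the frequency table (histogram counts) of Table 3.1
# from a list of readings; a few example readings stand in for all 50.
from collections import Counter

readings = [100.0, 99.9, 100.1, 100.0, 99.8, 100.0, 100.2, 99.9]  # sample only
counts = Counter(round(r, 1) for r in readings)   # group to the nearest 0.1 mm

for value in sorted(counts):
    print(f"{value:6.1f} mm : {'*' * counts[value]} ({counts[value]})")
```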
The property which denotes the extent to which the values are dispersed about the central
value is termed as dispersion. The other name of dispersion is spread or scatter.
Example: The figure shows two sets of data; curve 1 varies from x1 to x2 and curve 2 varies from x3 to x4. Curve 1 has a smaller dispersion about the central value than curve 2; therefore curve 1 has greater precision than curve 2.
Curves showing different ranges and precision index
Range
The simplest possible measure of dispersion is the range which is the difference between
greatest and least values of measured data.
Example: In the figure, the range of curve 1 is (x2 – x1) and the range of curve 2 is (x4 – x3).
Relative (Fractional) Limiting Error
The relative limiting error is defined as the ratio of the error to the specified (nominal)
magnitude of the quantity.
In limiting errors, the nominal value An is taken as the true value, and the quantity which has the maximum deviation from An is taken as the erroneous (actual) quantity Aa. The relative limiting error is then εr = δA / An.
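A small worked example of the limiting and relative limiting error is sketched below; the meter range, guaranteed accuracy and reading are hypothetical.

```python
# Sketch: limiting error and relative limiting error. The meter below is a
# hypothetical 0-150 V voltmeter guaranteed to within +/-1 % of full scale.
full_scale   = 150.0   # V
accuracy_pct = 1.0     # guaranteed accuracy, % of full scale
reading      = 75.0    # nominal value An taken as the indicated reading

limiting_error = full_scale * accuracy_pct / 100.0   # delta A = +/-1.5 V
relative_limiting_error = limiting_error / reading   # delta A / An

print(f"Limiting error = +/-{limiting_error:.2f} V")
print(f"Relative limiting error at {reading:.0f} V = {100*relative_limiting_error:.1f} %")
```

Note how the relative limiting error grows as the reading falls below full scale, even though the limiting error itself stays fixed.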
Probable Error
The most probable or best value of a Gaussian distribution is obtained by taking the arithmetic mean of the various values of the variate. A convenient measure of precision is given by the quantity r, called the probable error or P.E. It is expressed as follows,
Probable Error = P.E. = r = 0.4769 / h
Where r = probable error and h = a constant called the precision index.
The Gaussian distribution and histogram are used to estimate the probable error of any measurement.
Histogram – refer to the topic of statistical analysis.
The normal or Gaussian law of errors is the basis for the major part of the study of random errors. The law of probability states that the normal occurrence of deviations from the average value of an infinite number of measurements can be expressed by
y = (h/√π) e^(–h²x²)
where y is the probability of occurrence of deviation x and h is the precision index.
The normal or Gaussian probability curve is shown in the figure. In this curve, r is the measure of precision (probable error = r). The ordinates at –r and +r bound the region containing half of the total area under the Gaussian curve.
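The relation r = 0.4769/h can be checked numerically against the equivalent form r ≈ 0.6745σ; the standard deviation assumed below is hypothetical.

```python
# Sketch: probable error of a Gaussian error distribution. For a normal
# distribution, r = 0.4769 / h, and since the standard deviation
# sigma = 1 / (h * sqrt(2)), this is equivalent to r ~ 0.6745 * sigma.
import math

sigma = 0.05                       # assumed standard deviation of readings
h = 1.0 / (sigma * math.sqrt(2))   # precision index
r = 0.4769 / h                     # probable error

print(f"Precision index h = {h:.2f}")
print(f"Probable error r  = {r:.4f}  (check: 0.6745*sigma = {0.6745*sigma:.4f})")
```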
Calibration
Calibration is the process of comparing the output of an instrument against a standard of known accuracy and applying the necessary corrections. Calibration is best carried out under predetermined environmental conditions. All industrial-grade instruments can be checked for accuracy in the laboratory by using a working standard.
Process of Calibration
Choose, as best as one can, which of the inputs will be significant in the application for
which the instrument is to be calibrated.
Secure apparatus that will allow all significant inputs to vary over the ranges considered necessary.
By holding some inputs constant, varying others and recording the output, develop the desired static input-output relations.
If the instrument is calibrated against a secondary standard instrument, the calibration is called secondary calibration. This method is used for further calibration of other devices of lesser accuracy. Secondary calibration instruments are used in laboratory practice and also in industry because they are practical calibration sources.
Test meter (the instrument to be calibrated) compared with a secondary standard meter or instrument
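A direct-comparison secondary calibration can be tabulated as sketched below; the test-meter and standard readings are hypothetical, and the correction is taken as (standard reading − test reading).

```python
# Sketch: secondary calibration by direct comparison. A test meter is read
# against a secondary-standard meter at several points; the correction to
# be applied to the test meter is (standard reading - test reading).
# All readings below are hypothetical.
standard_readings = [0.0, 25.0, 50.0, 75.0, 100.0]
test_readings     = [0.2, 25.4, 50.1, 74.6, 99.5]

for std, test in zip(standard_readings, test_readings):
    correction = std - test
    print(f"standard = {std:6.1f}  test meter = {test:6.1f}  correction = {correction:+.2f}")
```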
Secondary calibration can be further classified into two types:
i) Direct comparison method of calibration
ii) Indirect comparison method of calibration
The procedure of the indirect method of calibration is based on the equivalence of two different devices using the same comparison concept.
Standards of measurement: