
EEM Notes - Unit 1 - Extra Note

The document discusses the characteristics of measuring instruments, categorizing them into static and dynamic characteristics, with a focus on accuracy, precision, bias, sensitivity, and various types of errors. It explains the importance of understanding these characteristics for selecting appropriate instruments and highlights the classification of errors into gross, systematic, and random errors. Additionally, it emphasizes the need for careful measurement practices to minimize errors and improve the reliability of measurements.

computers. Computers are used for the control and analysis of the measured data of a measurement system. This final stage of a measurement system is known as the terminating stage.

CHARACTERISTICS OF MEASURING INSTRUMENTS

The performance characteristics of an instrument are very important in its selection.

Static Characteristics: Static characteristics of an instrument are considered for instruments which are used to measure an unvarying process condition. Performance criteria based upon static relations represent the static characteristics. (The static characteristics are the values or performance given after the steady state condition has been reached.)

Dynamic Characteristics: Dynamic characteristics of an instrument are considered for instruments which are used to measure a varying process condition. Performance criteria based upon dynamic relations represent the dynamic characteristics.

STATIC CHARACTERISTICS

1) Accuracy

Accuracy is defined as the degree of closeness with which an instrument reading approaches the true value of the quantity being measured. It determines the closeness of the instrument reading to the true value.

Accuracy is represented as a percentage of full scale reading, or in terms of inaccuracy, or in terms of error value.

Example: the accuracy of a temperature measuring instrument might be specified as ±3 °C. This means the temperature reading might deviate by up to ±3 °C from the true value.

An accuracy specified as ±5% of full scale for the range 0 to 200 °C means the reading might be within ±10 °C of the true value.
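The arithmetic behind a percent-of-full-scale specification can be sketched in a few lines (the function name and values are illustrative, not from any instrument datasheet):

```python
# Sketch: converting a percent-of-full-scale accuracy spec
# into an absolute error band, as in the +/-5% example above.

def error_band(accuracy_percent, full_scale):
    """Return the +/- deviation implied by a percent-of-full-scale spec."""
    return accuracy_percent / 100.0 * full_scale

# +/-5% of a 0-200 degC range gives a +/-10 degC band.
print(error_band(5.0, 200.0))  # 10.0
```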

2) Precision

Precision is the degree of repeatability of a series of measurements. Precision is a measure of the degree of closeness of agreement within a group of measurements that are repeatedly made under the prescribed conditions.

Precision is used in measurements to describe the stability, reliability, or reproducibility of results.

Comparison between accuracy and precision.

S.No  Accuracy                                            Precision
1.    It refers to the degree of closeness of the         It refers to the degree of agreement
      measured value to the true value.                   among a group of readings.
2.    Accuracy gives the maximum error, that is, the      Precision of a measuring system gives its
      maximum departure of the final result from its      capability to reproduce a certain reading
      true value.                                         with a given accuracy.

3) Bias

Bias is a quantitative term describing the difference between the average of measured readings made on the same instrument and its true value. (It is a characteristic of measuring instruments to give indications of a measured quantity whose average value differs from the true value.)

4) Sensitivity
Sensitivity is defined as the ratio of the change in output signal (response) to the change in input signal (measurand). It indicates how much the output changes when the input changes.

If the sensitivity is constant, the system is said to be linear. If the sensitivity is variable, the system is said to be nonlinear.

Definition of sensitivity for (a) Linear and (b) Non linear instrument

When the calibration curve is linear, as in the figure, the sensitivity of the instrument can be defined as the slope of the calibration curve. In this case the sensitivity is constant over the entire range of the instrument. If the curve is not a straight line (a nonlinear instrument), the sensitivity varies with the input, or varies from one range to another, as in the figure.
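The slope interpretation above can be sketched numerically (a minimal illustration; the function and data are made up, not from the notes):

```python
# Sketch: estimating static sensitivity as the slope of a calibration
# curve. For a linear instrument the slope is constant; for a nonlinear
# one it varies from point to point.

def sensitivity(inputs, outputs):
    """Local sensitivities: delta-output / delta-input between points."""
    return [(o2 - o1) / (i2 - i1)
            for (i1, o1), (i2, o2) in zip(zip(inputs, outputs),
                                          zip(inputs[1:], outputs[1:]))]

# Linear instrument: constant sensitivity (2 output units per input unit).
print(sensitivity([0, 1, 2, 3], [0, 2, 4, 6]))  # [2.0, 2.0, 2.0]
```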

4) Linearity
Linearity is a highly desirable characteristic of an instrument or measurement system. Linearity of an instrument means that the output is linearly or directly proportional to the input over the entire range of the instrument. The degree of linear (straight line) relationship between the output and the input is called the linearity of the instrument.

Representation of Linearity and Non-Linearity of an Instrument

Nonlinearity: the maximum difference or deviation of the output curve from the specified idealized straight line, as shown in the figure. Independent nonlinearity may be defined as the maximum deviation from a best-fit straight line, expressed as a percentage of full scale output.

5) Resolution
Resolution or discrimination is the smallest change in the input value that is required to cause an appreciable change in the output. (The smallest increment in input, or input change, which can be detected by an instrument is called its resolution or discrimination.)
6) Hysteresis
Hysteresis is the non-coincidence of the loading and unloading curves of the output. The hysteresis effect shows up in many physical, chemical, and electrical phenomena.
When the input increases, the output also increases, and a calibration curve can be drawn. If the input is then decreased from its maximum value, the output also decreases but does not follow the same curve; there is a residual output when the input is zero. This phenomenon is called hysteresis. The difference between the output values for increasing and decreasing input is known as the hysteresis error, as shown in the figure.
(Different outputs for the same value of the measured quantity are reached by a continuously increasing change or a continuously decreasing change.)

Hysteresis Error of an instrument

7) Dead Zone

Dead zone or dead band is defined as the largest change of input quantity for which there is no output from the instrument, due to factors such as friction, backlash, and hysteresis within the system. (The region up to which the instrument does not respond to an input change is called the dead zone.)

Dead time is the time required by an instrument to begin to respond to a change in the input quantity.

8) Backlash

The maximum distance through which one part of the instrument can be moved without disturbing the other part is called backlash. (Backlash may be defined as the maximum distance or angle through which any part of the instrument can be moved without causing any motion of the next part of the system.)

Threshold because of backlash

Reasons for the presence of backlash in an instrument include allowing for lubrication,
manufacturing errors, deflection under load, and thermal expansion.

9) Drift
Drift is an undesirable change in output over a period of time that is unrelated to changes in input or operating conditions. Drift occurs in instruments due to internal temperature variations, ageing effects, high stress, etc.

Zero drift refers to the changes that occur in the output when the input is zero. It is expressed as a percentage of full range output.

10) Threshold
The minimum value of input necessary to activate an instrument to produce an output is termed its threshold, as shown in the figure. (Threshold is the minimum value of the input required to cause the pointer to move from the zero position.)

Threshold effect
11) Input Impedance

The magnitude of the impedance of the element connected across the signal source is called the input impedance. The figure shows a voltage signal source and an input device connected across it.

voltage source and input device

The magnitude of the input impedance is given by Z = E/I, where E is the source voltage and I is the current drawn by the input device.

The power extracted by the input device from the signal source is P = E²/Z.

From these two expressions it is clear that a low input impedance device connected across the voltage signal source draws more current and more power from the signal source than a high input impedance device.
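A small numeric sketch of this comparison, using the idealized relation P = E²/Z from above (source impedance is neglected; the values are illustrative):

```python
# Sketch: why a low input impedance draws more power from a
# voltage source. P = E^2 / Z for a device of impedance Z
# connected across an ideal source of EMF E.

def power_drawn(e_volts, z_ohms):
    """Power extracted from the source by a device of impedance Z."""
    return e_volts ** 2 / z_ohms

low_z = power_drawn(10.0, 100.0)       # low input impedance: 1 W
high_z = power_drawn(10.0, 1_000_000)  # high input impedance: 0.1 mW
assert low_z > high_z  # the low-impedance device loads the source more
```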

12) Loading Effect

Loading effect is the inability of the system to faithfully measure, record, or control the input signal in accurate form.

13) Repeatability

Repeatability is defined as the ability of an instrument to give the same output for repeated applications of the same input value under the same environmental conditions.

14) Reproducibility
Reproducibility is defined as the ability of an instrument to reproduce the same output for repeated applications of the same input value under different environmental conditions.

In the case of perfect reproducibility, the instrument satisfies the no-drift condition.

DYNAMIC CHARACTERISTICS

The dynamic behaviour of an instrument is determined by applying some standard form of known and predetermined input to its primary element (sensing element) and then studying the output. Generally, dynamic behaviour is determined by applying the following three types of inputs.

1. Step Input: a step change, in which the primary element is subjected to an instantaneous and finite change in the measured variable.

2. Linear Input: a linear change, in which the primary element follows a measured variable changing linearly with time.

3. Sinusoidal Input: a sinusoidal change, in which the primary element follows a measured variable whose magnitude changes in accordance with a sinusoidal function of constant amplitude.

The dynamic characteristics of an instrument are


(i) Speed of response,
(ii) Fidelity,
(iii) Lag, and
(iv) Dynamic error.

(i) Speed of Response

It is the rapidity with which an instrument responds to changes in the measured quantity.

(ii) Fidelity

It is the degree to which an instrument indicates changes in the measured variable without dynamic error. (Faithful reproduction, or fidelity, of an instrument is its ability to reproduce an input signal faithfully (truly).)

(iii) Lag

It is the retardation or delay in the response of an instrument to changes in the measured variable. Measuring lags are of two types:

Retardation type: the response of the instrument begins immediately after a change in the measured variable occurs.

Time delay type: the response of the instrument begins after a dead time following the application of the input quantity.

(iv) Dynamic Error

It is the difference between the true value of a quantity changing with time and the value indicated by the instrument, assuming zero static error. It is also called measurement error.

When measurement problems are concerned with rapidly varying quantities, the dynamic relations between the instrument's input and output are generally defined by the use of differential equations.

Errors

The difference between the measured value of a quantity and the true value (reference value) of the quantity is called the error.

Error = Measured value − True value

δA = Am − At

Classification of Errors

No measurement can be made with perfect accuracy; a degree of error must always be assumed. It is important to find the actual accuracy and the different types of errors that can occur in measuring instruments. Errors may arise from different sources and are usually classified as follows.
Classification of Errors:
1. Gross Errors

2. Systematic Errors

a) Instrumental errors

i) Inherent shortcomings of instruments


ii) Misuse of instruments
iii) Loading effects
b) Environmental errors

c) Observational errors
3. Random Errors

1. Gross Errors

The main source of gross errors is human mistakes in reading or using instruments and in recording and calculating measured quantities. As long as human beings are involved, they may grossly misread a scale, and some gross errors will occur in the measured value.

Example: due to an oversight, an experimenter may read the temperature as 22.7 °C while the actual reading is 32.7 °C. He may also transpose the reading while recording; for example, he may read 16.7 °C but record 27.6 °C instead.

Although the complete elimination of gross errors may be impossible, one should try to predict and correct them. Some gross errors are easily identified, while others may be very difficult to detect. Gross errors can be avoided in the following two ways.

Great care should be taken in reading and recording the data.

Two, three, or even more readings should be taken of the quantity being measured, by different experimenters and at different reading points (different environmental conditions of the instrument), to avoid repeating a reading with the same error. It is advisable to take a large number of readings, as close agreement between readings assures that no gross error has occurred in the measured values.

2. Systematic Errors
Systematic errors are divided into the following three categories.

i. Instrumental Errors
ii. Environmental Errors
iii. Observational Errors

a) Instrumental Errors
These errors arise due to the following three reasons (sources of error).

i) Due to inherent shortcomings of the instrument,
ii) Due to misuse of the instruments, and
iii) Due to loading effects of instruments.

i) Inherent Shortcomings of instruments

These errors are inherent in instruments because of their mechanical structure, arising from the construction, calibration, or operation of the instruments or measuring devices. These errors may cause the instrument to read too low or too high.

Example: if the spring (used for producing the controlling torque) of a permanent magnet instrument has become weak, the instrument will always read high.

Errors may also be caused by friction, hysteresis, or even gear backlash.

Methods of eliminating or reducing these errors:

o The instrument may be recalibrated carefully.
o The procedure of measurement must be carefully planned. Substitution methods or calibration against standards may be used for the purpose.
o Correction factors should be applied after determining the instrumental errors.
ii) Misuse of Instruments

In some cases, errors in measurement are caused by the fault of the operator rather than the instrument. A good instrument used in an unintelligent way may give wrong results.

Examples: misuse of instruments may include failure to perform the zero adjustment of the instrument, poor initial adjustments, using leads of too high a resistance, and use of the instrument beyond the manufacturer's instructions and specifications, etc.

iii) Loading Effects

Errors are committed through loading effects due to improper use of an instrument for measurement work. In a measurement system, loading effects should be identified and corrections made, or more suitable instruments used.

Example: a well calibrated voltmeter may give a misleading (false) voltage reading when connected across a high resistance circuit. The same voltmeter, when connected across a low resistance circuit, may give a more reliable (dependable, steady, or true) reading.

In this example, the voltmeter has a loading effect on the circuit, altering the actual circuit conditions through the measurement process. Errors caused by the loading effect of meters can be avoided by using them intelligently.

b) Environmental Error

Environmental errors occur due to the external environmental conditions of the instrument, such as the effects of temperature, pressure, humidity, dust, vibration, or external magnetic or electrostatic fields.

Methods of eliminating or reducing these undesirable errors:

Arrangements should be made to keep the conditions as nearly constant as possible. For example, temperature can be kept constant by keeping the instrument in a temperature-controlled enclosure.

Devices which are immune to these environmental effects may be used. For example, variations in resistance with temperature can be minimized by using resistive materials with a very low temperature coefficient of resistance.

Techniques which eliminate the effects of these disturbances may be employed. For example, the effect of humidity, dust, etc., can be entirely eliminated by tightly sealing the equipment.

External magnetic or electrostatic effects can be eliminated by using magnetic or electrostatic shields on the instrument.

Applying computed corrections: efforts are normally made to avoid the use of computed corrections, but where such corrections are needed and necessary, they are incorporated in the computation of the results.

c) Observational Errors

There are many sources of observational errors. As an example, the pointer of a voltmeter rests slightly above the surface of the scale. Thus an error on account of parallax will be incurred unless the line of vision of the observer is exactly above the pointer. To minimize parallax errors, highly accurate meters are provided with mirrored scales, as shown in the figure.

Correct reading 250V

Errors due to parallax


When the pointer's image appears hidden by the pointer, the observer's eye is directly in line with the pointer. Although a mirrored scale minimizes parallax error, some error is necessarily present, though it may be very small.

This parallax error can be eliminated entirely by having the pointer and the scale in the same plane, as shown in the figure.

Arrangements showing scale and pointer in the same plane

Observational errors also occur due to the involvement of human factors. For example, there are observational errors in measurements involving the timing of an event. Different observers may produce different results, especially when sound and light measurements are involved.

The complete elimination of this error can be achieved by using a digital display for the output.

3. Random Errors

These errors occur due to unknown causes and are observed when the magnitude and polarity of a measurement fluctuate in a random manner.

The quantity being measured is affected by many happenings, disturbances, and ambient influences about which we are unaware; these are lumped together and called random or residual. The errors caused by these disturbances are called random errors. Since these errors remain even after the systematic errors have been taken care of, they are also called residual (random) errors.

Random errors cannot normally be predicted or corrected, but they can be minimized by a skilled observer using a well maintained, quality instrument.

Errors in Measuring Instruments

No measurement is free from error in reality. An intelligent skill in taking measurements is the ability to interpret results in terms of possible errors. If the precision of the instrument is sufficient, no matter what its accuracy, a difference will always be observed between two measured results. So an understanding and careful evaluation of errors is necessary in measuring instruments. The accuracy of an instrument is measured in terms of its errors.

True value

The true value of quantity being measured is defined as the average of an infinite number of
measured values when the average deviation due to the various contributing factors tends to zero.

Such an ideal situation cannot be realized, and it is not possible to determine the true value of a quantity experimentally. Normally an experimenter would never know whether the quantity being measured experimentally is the true value of the quantity or not.

In practice, the true value is determined by a "standard method", that is, a method agreed upon by experts as being sufficiently accurate.

Static Error

Static error is defined as the difference between the measured value and the true value of the quantity being measured. It is expressed as follows:
δA = Am − At (1)
where δA = error, Am = measured value of the quantity, and At = true value of the quantity.
δA is also called the absolute static error of quantity A, and is expressed as:
ε0 = δA (2)
where ε0 = absolute static error of quantity A under measurement.

The absolute value of δA does not specify exactly the accuracy of measurement, so the quality of measurement is expressed by the relative static error.

Relative static error

Relative static error is defined as the ratio of the absolute static error to the true value of the quantity being measured:

εr = δA / At = (Am − At) / At

When ε0 = δA is small, the difference between the measured and true values is very small (Am − At is negligible), so Am ≈ At (that is, εr << 1).
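The absolute and relative static error definitions can be sketched directly (a minimal illustration; the measured and true values are made up):

```python
# Sketch of the static-error definitions: absolute error
# dA = Am - At and relative error er = dA / At.

def absolute_error(measured, true):
    """dA = Am - At."""
    return measured - true

def relative_error(measured, true):
    """er = (Am - At) / At."""
    return (measured - true) / true

Am, At = 102.0, 100.0
print(absolute_error(Am, At))  # 2.0
print(relative_error(Am, At))  # 0.02, i.e. 2%: small, so Am is close to At
```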

Static Error Correction (method of correction):

It is the difference between the true value and the measured value of the quantity.
δC = At − Am (5)

* For detailed error correction (rectification, elimination, or reduction) methods for all categories of errors, refer to the topic on classification of errors.

SOURCES OF ERRORS
The sources of error, other than the inability of a piece of hardware to provide a true measurement, are listed below:
1) Insufficient knowledge of process parameters and design conditions
2) Poor design
3) Changes in process parameters, irregularities, upsets (disturbances), etc.
4) Poor maintenance
5) Errors caused by people who operate the instrument or equipment
6) Certain design limitations

STATISTICAL EVALUATION OF MEASUREMENT DATA

Statistical evaluation of measured data is carried out using the two methods of tests shown below.

Multi Sample Test: In a multi sample test, repeated measured data are acquired by different instruments, different methods of measurement, and different observers.

Single Sample Test: Measured data are acquired under identical conditions (same instrument, method, and observer) at different times.

Statistical evaluation methods give the most probable true value of the measured quantity. The mathematical background of statistical evaluation methods comprises the arithmetic mean, deviation, average deviation, standard deviation, and variance.

Arithmetic Mean

The most probable value of a measured quantity is the arithmetic mean of the readings taken. The best approximation is obtained when the number of readings of the same quantity is very large. The arithmetic mean or average of a measured variable X is calculated by taking the sum of all readings and dividing by the number of readings.

The average is given by,

X̄ = (x1 + x2 + ... + xn) / n = Σx / n

where X̄ = arithmetic mean, x1, x2, ..., xn = readings, and n = number of readings.

Deviation (Deviation from the Average value)

The deviation is the departure of an observed reading from the arithmetic mean of the group of readings. Let the deviation of reading x1 be d1, that of x2 be d2, etc. Then:

d1 = x1 − X̄
d2 = x2 − X̄
...
dn = xn − X̄

The algebraic sum of the deviations is zero (d1 + d2 + ... + dn = 0).

Average Deviation:

The average deviation is defined as the average of the moduli (without regard to sign) of the individual deviations, and is given by:

D = (|d1| + |d2| + |d3| + ... + |dn|) / n = Σ|d| / n

where D = average deviation.

The average deviation is used to indicate the precision of the instrument used in making the measurements. A highly precise instrument gives a low average deviation between readings.

Standard Deviation

Standard deviation is used to analyze the random errors occurring in a measurement. The standard deviation of an infinite number of data is defined as the square root of the sum of the individual deviations squared, divided by the number of readings (n):

σ = √( Σd² / n )

Variance

The variance is the mean square deviation, which is the same as the standard deviation except without the square root: the variance is just the squared standard deviation, V = σ².
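The statistics above can be pulled together in a short sketch for one set of readings (illustrative values; population forms with divide-by-n, matching the definitions above):

```python
# Sketch: arithmetic mean, deviations, average deviation,
# standard deviation, and variance for a set of readings.
import math

readings = [100.2, 100.0, 99.8, 100.1, 99.9]
n = len(readings)

mean = sum(readings) / n                 # arithmetic mean X-bar
devs = [x - mean for x in readings]      # deviations d_i = x_i - X-bar
avg_dev = sum(abs(d) for d in devs) / n  # average deviation D
variance = sum(d * d for d in devs) / n  # mean square deviation
std_dev = math.sqrt(variance)            # standard deviation sigma

assert abs(sum(devs)) < 1e-9  # algebraic sum of deviations is zero
```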

Histogram:

When a number of multisample observations are taken experimentally, there is a scatter of the data about some central value. One way of presenting such results is in the form of a histogram. A histogram is also called a frequency distribution curve.

Example: Following table3.1 shows a set of 50 readings of length measurement. The most
probable or central value of length is 100mm represented as shown in figure Histogram.

Table 3.1
Length (mm) Number of observed readings
(frequency or occurrence)
99.7 1
99.8 4
99.9 12
100.0 19
100.1 10
100.2 3
100.3 1
Total number of readings =50

Histogram

The histogram indicates the number of occurrences of each particular value. The central value of 100 mm occurs 19 times; readings are recorded to the nearest 0.1 mm, as shown in figure 3.3. The bell-shaped dotted curve is called the normal or Gaussian curve.
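The central value can be checked from the frequency data of table 3.1 by taking the weighted mean Σ(f·x)/Σf (a sketch; this calculation is not part of the original notes):

```python
# Sketch: recovering the mean length from the frequency table 3.1.
# Keys are lengths in mm, values are observed frequencies.

table = {99.7: 1, 99.8: 4, 99.9: 12, 100.0: 19,
         100.1: 10, 100.2: 3, 100.3: 1}

n = sum(table.values())                        # total readings: 50
mean = sum(x * f for x, f in table.items()) / n
print(n, round(mean, 3))  # 50 99.992 -- close to the central value 100 mm
```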

Measure of Dispersion from the Mean

The property which denotes the extent to which the values are dispersed about the central value is termed dispersion. Dispersion is also called spread or scatter.

The measure of dispersion from the central value is an indication of the degree of consistency (precision) and regularity of the data.

Example: the figure shows two sets of data; curve 1 varies from x1 to x2 and curve 2 varies from x3 to x4. Curve 1 has smaller dispersion from the central value than curve 2; therefore curve 1 has greater precision than curve 2.

Curves showing different ranges and precision index

Range

The simplest possible measure of dispersion is the range, which is the difference between the greatest and least values of the measured data.
Example: in the figure, the range of curve 1 is (x2 − x1) and the range of curve 2 is (x4 − x3).

Limiting Errors (Guarantee Errors or Limits of Errors):

In most instruments the accuracy is guaranteed to be within a certain percentage of the full scale reading. The manufacturer has to specify the deviations from the nominal value of a particular quantity. The limits of these deviations from the specified value are called limiting errors or guarantee errors.

The magnitude of the limiting error = accuracy × full scale reading. In general the actual value of the quantity is determined as follows:

Actual value of quantity = Nominal value ± Limiting error

Aa = An ± δA
where Aa = actual value of quantity, An = nominal value of quantity, and ±δA = limiting error.

Example: the nominal magnitude of a resistor is 1000 Ω with a limiting error of ±100 Ω. Determine the actual magnitude of the resistance.
Actual value of quantity Aa = 1000 ± 100 Ω, i.e. Aa ≥ 900 Ω and Aa ≤ 1100 Ω.
Therefore the manufacturer guarantees that the resistance of the resistor lies between 900 Ω and 1100 Ω.
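The resistor example can be sketched in code (a minimal illustration of Aa = An ± δA; the function name is made up):

```python
# Sketch of the limiting-error example: a 1000 ohm resistor
# with a +/-100 ohm guarantee, giving Aa = An +/- dA.

def limits(nominal, limiting_error):
    """Return the guaranteed (lower, upper) bounds of the actual value."""
    return nominal - limiting_error, nominal + limiting_error

lo, hi = limits(1000.0, 100.0)
print(lo, hi)  # 900.0 1100.0
```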

Relative (Fractional) Limiting Error

The relative limiting error is defined as the ratio of the limiting error to the specified (nominal) magnitude of the quantity:

Relative limiting error εr = δA / An

The limiting values are then calculated as follows. We know that
Aa = An ± δA = An ± εr·An = An(1 ± εr)
Percentage limiting error %εr = εr × 100

In limiting errors, the nominal value An is taken as the true value of the quantity, and the quantity which has the maximum deviation from Aa is taken as the incorrect quantity.

Probable error

The most probable or best value of a Gaussian distribution is obtained by taking the arithmetic mean of the various values of the variate. A convenient measure of precision is given by the quantity r, called the probable error or P.E. It is expressed as follows:

Probable error = P.E. = r = 0.4769 / h

where r = probable error and h = a constant called the precision index.

The Gaussian distribution and the histogram are used to estimate the probable error of any measurement.

Histogram – Refer the topic of statistical analysis.

Normal or Gaussian curve of errors

The normal or Gaussian law of errors is the basis for the major part of the study of random errors. The law of probability states that the normal occurrence of deviations from the average value of an infinite number of measurements can be expressed by

y = (h/√π) · e^(−h²x²)

where x = magnitude of the deviation from the mean, y = number of readings at any deviation x (the probability of occurrence of deviation x), and h = a constant called the precision index.

The normal or Gaussian probability curve is shown in the figure. In this curve r is the measure of precision (probable error = r). The points −r and +r bound the central area under the Gaussian curve.

The Normal or Gaussian probability curve


At x = 0, y = h/√π, so the maximum value of y depends upon h: the larger h (and hence y at the peak), the greater the precision of the corresponding curve. The probable error is then determined using the expression r = 0.4769/h.
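A numerical sketch of the Gaussian error law and the probable error (the value of h used here is illustrative):

```python
# Sketch: Gaussian error law y = (h/sqrt(pi)) * exp(-h^2 x^2)
# and the probable error r = 0.4769 / h, with h the precision index.
import math

def gaussian(x, h):
    """Number of readings (probability density) at deviation x."""
    return (h / math.sqrt(math.pi)) * math.exp(-(h * x) ** 2)

def probable_error(h):
    """r = 0.4769 / h."""
    return 0.4769 / h

h = 1.0
peak = gaussian(0.0, h)  # maximum at x = 0 is h/sqrt(pi)
r = probable_error(h)    # 0.4769 for h = 1
assert abs(peak - h / math.sqrt(math.pi)) < 1e-12
```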

Calibration

Calibration is the process of checking the accuracy of an instrument by comparing the instrument reading with a standard, or against a similar meter of known accuracy. Calibration is thus used to find the errors and accuracy of a measurement system or instrument.

Calibration is an essential process to be undertaken regularly for each instrument and measuring system. Instruments which are actually used for measurement work must be calibrated against reference instruments of higher accuracy. Reference instruments must in turn be calibrated against instruments of still higher accuracy, or against a primary standard, or against other standards of known accuracy.

Calibration is best carried out under predetermined environmental conditions. All industrial grade instruments can be checked for accuracy in the laboratory by using a working standard.

Certification of an instrument manufactured by industry is undertaken by the National Physical Laboratory and other authorized laboratories where the secondary standards and working standards are kept.

Process of Calibration

The procedure involved in calibration is called the process of calibration. The calibration procedure involves the comparison of the particular instrument with either:

a primary standard,

a secondary standard of higher accuracy than the instrument to be calibrated, or

an instrument of known accuracy.

The procedure of calibration is as follows.

Study the construction of the instrument, and identify and list all the possible inputs.

Choose, as best as one can, which of the inputs will be significant in the application for which the instrument is to be calibrated.

Secure apparatus that will allow all significant inputs to vary over the ranges considered necessary.

By holding some inputs constant, varying others, and recording the output, develop the desired static input-output relations.

Theory and Principles of Calibration Methods

Calibration methods are classified into the following two types,
1) Primary or Absolute method of calibration

2) Secondary or Comparison method of calibration

i. Direct comparison method of calibration

ii. Indirect comparison method of calibration

1) Primary or Absolute method of calibration

If the particular test instrument (the instrument to be calibrated) is calibrated against a primary standard, the calibration is called primary or absolute calibration. After primary calibration, the instrument can be used as a secondary calibration instrument.

[Test meter (the instrument to be calibrated) → Primary standard meter or instrument]

Representation of Primary Calibration

2) Secondary or Comparison calibration method

If the instrument is calibrated against a secondary standard instrument, the calibration is called secondary calibration. This method is used for further calibration of other devices of lesser accuracy. Secondary calibration instruments are used in laboratory practice and also in industry because they are practical calibration sources.

[Test meter (the instrument to be calibrated) → Secondary standard meter or instrument]

Representation of Secondary Calibration

Secondary calibration can be further classified into two types.

i) Direct comparison method of Calibration

In the direct comparison method, the instrument is calibrated against a known input source with the same order of accuracy as in primary calibration. The instrument which is calibrated directly can also be used as a secondary calibration instrument.

[Test instrument (the voltmeter to be calibrated) → Standard instrument]

Representation of Direct method of Calibration

ii) Indirect comparison method of Calibration

The procedure of the indirect method of calibration is based on the equivalence of two different devices using the same comparison concept.

[Source → Test instrument (Device 1) → Reading; Source → Standard instrument (Device 2) → Reading]

Representation of indirect method of Calibration

Standards for calibration

Refer to the classification of standards (4 types).

Standards of measurement:

A standard is a physical representation of a unit of measurement. A known, accurate measure of a physical quantity is termed a standard. These standards are used to determine the accuracy of other physical quantities by the comparison method.
