B. Fundamental Concepts of Measurement

The document outlines a course on measurement techniques, covering fundamental concepts, instrument selection, types, and characteristics. It emphasizes the importance of accurate measurement in industrial applications and details various static and dynamic characteristics of instruments, including accuracy, precision, sensitivity, and calibration. The course aims to equip engineers with the knowledge to select appropriate instruments for specific measurement tasks.


Course Outline

 Introduction
 Fundamental Concepts Of Measurement
 Signal Conditioning and Sensors
 Basics of Mechatronics
 Accuracy And Error Analysis
 Measurement Statistics
 Data Presentation And Curve Fitting
Slide Content
 Introduction
 Measurement Units
 Instrument Selection
 Instrument Types
 Static Characteristics of Instruments
 Dynamic Characteristics of Instruments
 Calibration
INTRODUCTION
 Measurement techniques have been of immense importance ever since the
start of human civilization.

 Measurement of a given quantity is essentially an act of comparison
between the quantity (whose magnitude is unknown) and a predetermined
(predefined) standard.

 Measurements were first needed in barter trade in order to ensure that
exchanges were fair.
Contd….
 In the 19th century, the industrial revolution brought about a
rapid development of new instruments and measurement
techniques to satisfy the needs of industrialized production
techniques.

 Since then, there has been a large and rapid growth in new
industrial technology.

 This growth has been particularly evident during the last part of the
20th century because of the many developments in electronics in general
and computers in particular.

 The application of computers to industrial monitoring and process
control tasks has greatly expanded the requirement for instruments to
measure, record, and control process variables.
Contd ….
 The requirement for accurate and inexpensive instruments for
the production process stems from the need to reduce production
costs and employ modern production techniques (dictating
working with tighter accuracy limits).

 This latter problem is at the focal point of the research and
developmental efforts of all instrument manufacturers.

 In the past few years, the most cost-effective means of improving
instrument accuracy has been found in many cases to be the inclusion of
digital computing power within the instruments themselves.

 These intelligent instruments therefore feature prominently in current
instrument manufacturers’ catalogues.
Fundamental Quantities and Fundamental Units

Fundamental Quantities:
Physical quantities that do not depend on any other physical quantities
for their measurement. Also known as base quantities.

Fundamental Units:
Units used to measure fundamental quantities.
Fundamental units do not depend on any other unit.

There are seven fundamental (basic) physical quantities:

 Length, mass, time, temperature, electric current, luminous intensity
and amount of substance.

 Their units are fundamental units.

 The fundamental quantities, together with their units and unit symbols,
are listed below.
MEASUREMENT UNITS
Definitions of Standard Units
MEASUREMENT UNITS
Fundamental SI Units
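The SI tables on these slides are images; for reference, the seven SI base quantities with their units and unit symbols are:

Quantity                     Unit       Symbol
Length                       metre      m
Mass                         kilogram   kg
Time                         second     s
Electric current             ampere     A
Thermodynamic temperature    kelvin     K
Amount of substance          mole       mol
Luminous intensity           candela    cd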
MEASUREMENT UNITS
Derived SI Units
INSTRUMENT SELECTION
Engineers can generally agree on the high importance of the adage
“pick the right tool for the job.” Using the wrong tool can waste time
and compromise quality, whereas the right tool can deliver the
correct result in a fraction of the time.

To carry out this selection properly, the instrument engineer must have:

• a wide knowledge of the range of instruments available for measuring
particular physical quantities, and

• a deep understanding of how instrument characteristics are affected by
particular measurement situations and operating conditions.
INSTRUMENT SELECTION
Instrument selection is a compromise among:
 Performance characteristics such as:
- desired measurement accuracy,
- resolution,
- sensitivity,
- dynamic performance.

 Ruggedness and durability.

 Maintenance requirements.

 Environmental conditions in which the instrument will operate.

 Purchase cost.
INSTRUMENT TYPES
Instruments can be subdivided into separate classes according to
several criteria. These sub-classifications are useful in broadly
establishing several attributes of particular instruments such as:
- accuracy,
- cost, and
- general applications.

1. Active and Passive Instruments

Fig. 1: Passive pressure gauge
Fig. 2: Petrol-tank level indicator


INSTRUMENT TYPES
2. Null-Type and Deflection-Type Instruments

Fig. 3: Dead-weight pressure gauge
Fig. 4: Deflection pressure gauge


3. Analogue and Digital Instruments
INSTRUMENT TYPES
4. Indicating Instruments and Instruments with a Signal Output
 Indicating instruments normally include all null-type instruments and
most passive ones.

 Indicators can also be further divided into those that have an
analogue output and those that have a digital display.

 A common analogue indicator is the liquid-in-glass thermometer.

 Another common indicating device, which exists in both analogue and
digital forms, is the bathroom scale.

 Instruments that have a signal-type output are used commonly as part
of automatic control systems. In other circumstances, they can also be
found in measurement systems where the output measurement signal is
recorded in some way for later use.
5. Non-smart and Smart Instruments
The advent of microprocessor technology has created a new division in
instruments between those that incorporate a microprocessor (smart) and
those that do not.
Non-smart measuring devices Smart measuring devices
STATIC CHARACTERISTICS OF INSTRUMENTS
Among the various static characteristics of measurement equipment are
measurement Accuracy, Error and Uncertainty:

 Accuracy
The Accuracy of an instrument is a measure of the closeness of an
instrument’s output reading to the true value.

 Error
Measurement Error (observational error) is the difference between a
measured quantity and its true value.

 Uncertainty
- Uncertainty describes an interval about the measured value within
which we suspect that the true value must fall with a stated probability.

- It is sometimes quoted as a percentage of the full-scale (f.s.) reading of
an instrument. For example, a ±1% f.s. uncertainty on a 0–10 bar pressure
gauge means that any reading may be in error by up to ±0.1 bar.

Uncertainty in Physical Measurement
Uncertainty in Digital Measurement Equipment
In Summary ………………………
STATIC CHARACTERISTICS OF INSTRUMENTS
Example
Solution
STATIC CHARACTERISTICS OF INSTRUMENTS
2. Precision/Repeatability/Reproducibility
 Precision describes an instrument’s degree of freedom from random
errors, i.e. the closeness of agreement between repeated output readings.

 Precision is often, although incorrectly, confused with accuracy.

 High precision does not amount to high accuracy.

 A high-precision instrument may have low accuracy.

 Low accuracy measurements from a high-precision instrument are
normally caused by a bias in the measurements, which is removable by
recalibration.
STATIC CHARACTERISTICS OF INSTRUMENTS
Repeatability: Closeness of output readings for the same input over a short
period of time, with the same:
- instrument,
- observer,
- location, and
- conditions of use.

Reproducibility: Closeness of output readings for the same input at
different times, when there are changes in the:
- instrument,
- observer,
- location, and
- conditions of use.

In summary, both terms describe the spread of output readings for the same
input. This spread is referred to as repeatability if the measurement
conditions are constant, and as reproducibility if the measurement
conditions vary.
STATIC CHARACTERISTICS OF INSTRUMENTS

3. Tolerance
Tolerance is a term closely related to accuracy; it defines the maximum
error that is to be expected in some value.

 Example: A packet of resistors bought in an electronics component shop
gives the nominal resistance value as 1000 Ω and the manufacturing
tolerance as 5%. If one resistor is chosen at random from the packet,
what are the possible minimum and maximum resistance values that this
particular resistor may have?

 Solution: Minimum likely value is 1000 Ω - 5% = 950 Ω.
Maximum likely value is 1000 Ω + 5% = 1050 Ω.
Static Characteristics of Instruments
4. Range or Span
The range or span of an instrument defines the minimum and maximum
values of a quantity that the instrument is designed to measure.
Example: a laboratory thermometer (0–100 °C).

5. Linearity
It is normally desirable that the
output reading of an instrument is
linearly proportional to the
quantity being measured.

The x marks in the figure on the right show a plot of typical output
readings of an instrument when a sequence of input quantities is
applied to it.
Static Characteristics of Instruments
6. Sensitivity of Measurement
The sensitivity of an instrument is a measure of the change in instrument
output that occurs when the quantity being measured changes by a given
amount. Thus, sensitivity is the ratio:
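The ratio itself appears as an image in the slides; in standard form it reads

\[
\text{sensitivity} = \frac{\text{change in instrument output reading}}{\text{change in quantity being measured}}
\]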

The sensitivity of measurement is therefore the slope of the straight
line drawn in the previous figure.

 If, for example, a pressure of 2 bar produces a deflection of 10
degrees in a pressure transducer, the sensitivity of the instrument is
5 degrees/bar (assuming the deflection is zero with zero pressure
applied).
Static Characteristics of Instruments
Example:
The following resistance values of a platinum resistance thermometer
were measured through a range of temperatures. Determine the
measurement sensitivity (in ohms/°C) of the instrument.

Resistance (Ω)   Temperature (°C)
307              200
314              230
321              260
328              290

Solution:
Plotting the values on a graph gives a straight-line relationship. The
resistance rises by 7 Ω for every 30 °C rise in temperature, so the
measurement sensitivity is 7/30 ≈ 0.233 Ω/°C.
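A quick numerical check of the slope (a minimal sketch; numpy is assumed to be available and is not part of the slides):

# Estimate the measurement sensitivity (Ω per °C) by fitting a straight
# line to the calibration data above.
import numpy as np

temperature_c = np.array([200, 230, 260, 290])    # input quantity
resistance_ohm = np.array([307, 314, 321, 328])   # output reading

# Least-squares straight-line fit: resistance = slope * temperature + offset
slope, offset = np.polyfit(temperature_c, resistance_ohm, 1)
print(f"Sensitivity ≈ {slope:.3f} Ω/°C")          # ≈ 0.233 Ω/°C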
Static Characteristics of Instruments
7. Threshold
The minimum value of an instrument’s input that will result in a change
in the instrument’s output reading.

This minimum level of input, known as the threshold of the instrument, is
sometimes specified as:
- an absolute value, or
- a percentage of full-scale deflection.

Example
As an illustration, a car speedometer typically has a threshold of about 15
km/h.
This means that, if the vehicle starts from rest and accelerates, no output
reading is observed on the speedometer until the speed reaches 15 km/h.
Static Characteristics of Instruments
8. Resolution
Lower limit of input value that produces an observable change in
instrument’s output reading.

Like threshold, resolution is sometimes specified as:
- an absolute value, or
- a percentage of full-scale deflection.

Example
Using a car speedometer as an example again, this has subdivisions
of typically 5 km/h.

This means that when the needle is between the scale markings, we
cannot estimate speed more accurately than to the nearest 5 km/h.
This value of 5 km/h is thus the resolution of the instrument.
Static Characteristics of Instruments
9. Sensitivity to Disturbance
Variations in ambient temperature can influence certain static instrument
characteristics.

Sensitivity to disturbance is a measure of the magnitude of this change.

 Such environmental changes affect instruments in two main ways:
- zero drift, and
- sensitivity drift.

 Zero drift is sometimes referred to as “bias”.

10. Zero Drift/Bias:

 A mechanical bathroom scale is a common example of an instrument prone
to zero drift. It is quite usual to find that there is a reading of
perhaps 1 kg with no one on the scale.

If someone of known weight 70 kg were to get on the scale, the reading
would be 71 kg, and if someone of known weight 100 kg were to get on the
scale, the reading would be 101 kg. Zero drift is normally removable by
calibration.
Static Characteristics of Instruments
11. Sensitivity drift (also known as scale factor drift): defines the
amount by which an instrument’s sensitivity of measurement varies as
ambient conditions change.

 Example: A spring balance is calibrated in an environment at a
temperature of 20 °C and has the following deflection/load
characteristic:

Load (kg)        0    1    2    3
Deflection (mm)  0   20   40   60

 It is then used in an environment at a temperature of 30 °C,
and the following deflection/load characteristic is obtained:

Load (kg)        0    1    2    3
Deflection (mm)  5   27   49   71
Static Characteristics of Instruments
With respect to changes in ambient temperature, determine the
a. zero drift per °C, and
b. sensitivity drift per °C.

Solution:
@ 20 °C, the deflection/load characteristic is a straight line, with sensitivity = 20 mm/kg.
@ 30 °C, the deflection/load characteristic is still a straight line, with sensitivity = 22 mm/kg.

 Zero drift (bias) = 5 mm (the no-load deflection)
 Sensitivity drift = (22 – 20) mm/kg = 2 mm/kg

 Zero drift/°C = 5 mm/(30 – 20) °C = 5 mm/10 °C = 0.5 mm/°C.

 Sensitivity drift/°C = 2 (mm/kg)/(30 – 20) °C = 2 (mm/kg)/10 °C
                        = 0.2 (mm/kg)/°C.
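The same calculation written as a short script (a sketch only; variable names are illustrative):

# Compute zero drift and sensitivity drift per °C from the two
# deflection/load characteristics of the spring balance example.
load_kg = [0, 1, 2, 3]
deflection_20C_mm = [0, 20, 40, 60]   # calibration environment, 20 °C
deflection_30C_mm = [5, 27, 49, 71]   # operating environment, 30 °C

sens_20 = (deflection_20C_mm[-1] - deflection_20C_mm[0]) / (load_kg[-1] - load_kg[0])   # 20 mm/kg
sens_30 = (deflection_30C_mm[-1] - deflection_30C_mm[0]) / (load_kg[-1] - load_kg[0])   # 22 mm/kg

zero_drift = deflection_30C_mm[0] - deflection_20C_mm[0]    # 5 mm
sensitivity_drift = sens_30 - sens_20                       # 2 mm/kg
delta_T = 30 - 20                                           # °C

print(f"Zero drift per °C       : {zero_drift / delta_T} mm/°C")               # 0.5
print(f"Sensitivity drift per °C: {sensitivity_drift / delta_T} (mm/kg)/°C")   # 0.2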
Static Characteristics of Instruments
12. Hysteresis Effects
Figure 9 illustrates the output characteristics of an instrument that
exhibits hysteresis.

 If the input to the instrument is increased steadily from a negative
value, the output reading varies in the manner shown in curve A.

 If the input variable is then decreased steadily, the output varies in
the manner shown in curve B.

Fig. 9: Instrument characteristic with hysteresis

The non-coincidence between these loading and unloading curves is known
as hysteresis.

 Hysteresis is found most commonly in instruments that contain springs,
such as a passive pressure gauge and Prony brakes (used for measuring
torque).
Static Characteristics of Instruments
13. Dead Space
Dead space is defined as the range of input values over which there is
no change in output value.

 Any instrument that exhibits hysteresis also displays dead space, as
marked in Figure 10.

Fig. 10: Instrument characteristic with dead space.
Static Characteristics of Instruments
However, some instruments that do not suffer from any significant
hysteresis can still exhibit a dead space in their output
characteristics.

 Backlash in gears is a typical cause of dead space and results in the
sort of instrument output characteristic shown in Figure 10.

 Backlash is commonly experienced in gear sets used to convert
translational to rotational motion (which is a common technique used to
measure translational velocity).
In summary…….

 The static characteristics of measuring instruments are concerned
only with the steady-state reading that the instrument settles down to
after it experiences an input signal.
Dynamic Characteristics of Instruments
The dynamic characteristics of a measuring instrument describe its behaviour
between the time a measured quantity changes value and the time when the
instrument output attains a steady value in response.

 Whether for static or dynamic characteristics, any value quoted for a
measuring instrument applies only to the specific environmental
conditions stated in its datasheet.

 Outside these calibration conditions, some variation in the dynamic
parameters can be expected.

 In any linear, time-invariant measuring system, the following general
relation can be written between input and output for time (t) > 0:
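The relation itself appears as an image in the slides; the standard form found in common instrumentation texts, with measured quantity q_i and output reading q_o, is:

\[
a_n \frac{d^n q_o}{dt^n} + a_{n-1} \frac{d^{n-1} q_o}{dt^{n-1}} + \dots + a_1 \frac{dq_o}{dt} + a_0 q_o
= b_m \frac{d^m q_i}{dt^m} + b_{m-1} \frac{d^{m-1} q_i}{dt^{m-1}} + \dots + b_1 \frac{dq_i}{dt} + b_0 q_i
\quad \text{(Equation 1)}
\]

where a_0 … a_n and b_0 … b_m are constants.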
Dynamic Characteristics of Instruments
If we limit consideration to step changes in the measured quantity
only, then Equation (1) reduces to

Equation (2)
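The reduced equation is also an image on the slide; in the standard form (for a step input, all b coefficients except b_0 vanish):

\[
a_n \frac{d^n q_o}{dt^n} + \dots + a_1 \frac{dq_o}{dt} + a_0 q_o = b_0 q_i
\quad \text{(Equation 2)}
\]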

 Further simplification can be made by taking certain special cases of
Equation (2), which collectively apply to nearly all measurement
systems.

1. Zero-Order Instruments
A typical example of a zero-order system is a potentiometer.

 A potentiometer is an instrument for measuring voltage by comparison
of an unknown voltage with a known reference voltage.

 It can also serve as an adjustable variable resistor with 3 terminals.

 The output voltage changes instantaneously as the slider is displaced
along the potentiometer track.
Fig. 11: Zero-Order instrument
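The zero-order equation is shown as an image on the slide; it follows from Equation (2) when all coefficients other than a_0 and b_0 are zero:

\[
a_0 q_o = b_0 q_i \quad \Rightarrow \quad q_o = \frac{b_0}{a_0}\, q_i = K q_i
\]

where K is the instrument sensitivity. Because the output follows the input instantaneously, the potentiometer above behaves as a zero-order instrument.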
Dynamic Characteristics of Instruments
2. First order Instruments
If all coefficients a2 . . . an other than a0 and a1 are assumed zero in
Equation (2), then any instrument that behaves according to the equation
below is known as a first-order instrument.

 Replacing d/dt with the D operator gives the instrument characteristic
in operator form.

 Defining K as the static sensitivity = b0/a0,
and τ as the time constant of the system = a1/a0,

 the equation above can be rewritten as:
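The slide shows these equations as images; reconstructed in the standard form implied by the definitions above:

\[
a_1 \frac{dq_o}{dt} + a_0 q_o = b_0 q_i
\]

With the D operator (D ≡ d/dt):

\[
(a_1 D + a_0)\, q_o = b_0 q_i
\quad \Rightarrow \quad
\frac{q_o}{q_i} = \frac{b_0/a_0}{1 + (a_1/a_0)D} = \frac{K}{1 + \tau D}
\]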


Dynamic Characteristics of Instruments
The thermocouple is a good example of a first-order instrument.

 It is well known that if a thermocouple at room temperature is plunged
into boiling water, the output does not rise instantaneously to a level
indicating 100 °C.

 Instead it approaches a reading indicating 100 °C in a manner similar
to that shown in Figure 12.

A cup anemometer for measuring wind speed is another example of a first order instrument.

Teaser!!
What is the time constant for a zero-order instrument?

Fig. 12: First-order instrument characteristic


Dynamic Characteristics of Instruments
3. Second-order Instruments
If coefficients a3 . . . an other than a0, a1, and a2 in Equation (2)
are assumed zero, then we get

Eq. 3
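Reconstructed in the standard form (the slide shows the equation as an image):

\[
a_2 \frac{d^2 q_o}{dt^2} + a_1 \frac{dq_o}{dt} + a_0 q_o = b_0 q_i
\quad \text{(Eq. 3)}
\]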
Dynamic Characteristics of Instruments
Second-order Instruments continued ….

Assignment!!
Go through the derivation process and find out the definitions of the
coefficients of the second-order equation.

In a more complex form we have….

Eq. 5
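The slide shows Eq. 5 as an image; in standard instrumentation texts this "more complex form" is written as

\[
\frac{q_o}{q_i} = \frac{K}{D^2/\omega^2 + 2\xi D/\omega + 1}
\quad \text{(Eq. 5)}
\]

where K is the static sensitivity, ω the undamped natural frequency and ξ the damping ratio; expressing these three parameters in terms of a0, a1, a2 and b0 is exactly what the assignment above asks for.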
Dynamic Characteristics of Instruments
Response of a second order instrument…….

Assignment!!
Search for three examples of a second-order measuring instrument.
CALIBRATION
Necessity for Calibration
Calibration in measurement technology and metrology is the comparison
of measurement values delivered by a device under test with those of a
calibration standard of known accuracy.

 Such a standard could be:
- another measurement device of known accuracy,
- a device generating the measured quantity, such as a voltage or sound tone, or
- a physical artifact (such as a meter rule).

 With regard to the static and dynamic characteristics of measuring
instruments, conformance is achieved only after calibration has been
carried out.

It can be assumed that new instruments will have been calibrated when
obtained from an instrument manufacturer and will therefore initially
behave according to the characteristics stated in the specifications.
The Calibration Process
By comparing the actual input value with the output indication of the
system, the overall effect of the systematic errors can be observed.

Errors at those calibration points are then made zero by trimming a few
adjustable components, or by using calibration charts or software for
corrections.

 Strictly speaking, calibration involves the comparison of measured
values with the outputs of standard instruments (primarily kept at
standards laboratories).

For example, the calibration of a pressure-sensing device would not only
require a standard pressure-measuring device, but also a test bench
where different desired pressure values can be generated.

 The calibration process of an acceleration-measuring device is more
difficult, since the desired acceleration must be generated on a body,
the measuring device has to be mounted on it, and the actual value of
the generated acceleration has to be measured in some indirect way.
Software calibration.
Calibration can be carried out at all points of the measuring range; during
actual measurement, the true value is then obtained from a look-up table
prepared and stored beforehand, or from a fitting curve generated by
software.

This type of calibration is often referred to as software calibration.
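A minimal sketch of the idea (illustrative only; the calibration points and the use of numpy are assumptions, not part of the slides):

# Software calibration: correct raw instrument readings using a
# look-up table of (raw reading, true value) calibration points.
import numpy as np

# Hypothetical calibration table recorded against a reference standard.
raw_points = np.array([0.02, 1.05, 2.11, 3.08, 4.15])   # instrument output
true_points = np.array([0.00, 1.00, 2.00, 3.00, 4.00])  # standard's value

def calibrated(raw_reading: float) -> float:
    """Return the corrected value by linear interpolation in the table."""
    return float(np.interp(raw_reading, raw_points, true_points))

print(calibrated(2.60))  # corrected estimate of the true value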

Single-point, two-point and three-point calibrations

Alternatively, a more popular way is to calibrate the instrument at:
- one,
- two, or
- three points of measurement,

and then, through independent adjustments, trim the instrument so that
the error at those points is zero or close to zero.

It is thus expected that the error over the whole range of measurement
will be minimal.

Typical input-output characteristics of a measuring device under these
three calibrations are shown in the figures overleaf.

Single-point calibration:
This is often referred to as offset adjustment, where the output of the
system is forced to be zero under a zero-input condition.

 This is shown in the figure on the right.

 For electronic instruments, it is often done automatically, and the
process is known as auto-zero calibration.
Two-point calibration:
For most field instruments, calibration is done at two points: one at
zero input and the other at full-scale input.

 Two independent adjustments are made, known as:
- zero adjustment, and
- span adjustment (see the sketch below).
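A small sketch of the arithmetic behind these two adjustments (illustrative values, not from the slides):

# Two-point (zero/span) calibration of a field instrument.
# Calibration readings taken against a reference standard:
raw_at_zero = 0.12        # instrument output with zero input
raw_at_full_scale = 9.85  # instrument output at full-scale input
true_full_scale = 10.0    # known full-scale value of the input

# Zero adjustment removes the offset; span adjustment scales the gain.
zero_offset = raw_at_zero
span_factor = true_full_scale / (raw_at_full_scale - raw_at_zero)

def corrected(raw_reading: float) -> float:
    """Apply zero and span corrections to a raw reading."""
    return (raw_reading - zero_offset) * span_factor

print(corrected(raw_at_zero))        # -> 0.0
print(corrected(raw_at_full_scale))  # -> 10.0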
3-point calibration:
This is common in industries that want or need documentation of
accurate readings across a larger range of measurement.

 Typically a high, middle, and low calibration check across the
measurement spectrum of the sensor or probe is carried out.

 This can be achieved through the use of a curve-fitting tool
(software), as sketched below.
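A minimal sketch of such a fit, assuming three hypothetical calibration points and numpy:

# Three-point calibration: fit a quadratic correction curve through
# low, middle and high calibration checks.
import numpy as np

raw_checks = np.array([1.02, 5.10, 9.93])    # instrument readings (low, mid, high)
true_checks = np.array([1.00, 5.00, 10.00])  # reference standard values

# A quadratic passes exactly through three points.
correction_curve = np.polyfit(raw_checks, true_checks, 2)

def corrected(raw_reading: float) -> float:
    """Map a raw reading onto the fitted correction curve."""
    return float(np.polyval(correction_curve, raw_reading))

print(corrected(7.5))  # corrected mid-range reading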
One important point needs to be mentioned at this juncture.
Instrument characteristics change with time.

 So even if the instrument is calibrated once, the output may deviate
from the calibrated points with changes in time, temperature and other
environmental conditions.

 Thus, the calibration process has to be repeated at regular intervals
if one wants the instrument to give accurate values of the measurand
every time and anytime.
ASSIGNMENT
ASSIGNMENT Contd……
