Calibrate Instrumentation and Control Devices

Calibration is the process of adjusting instruments to ensure accurate measurements by comparing them against a standard of higher accuracy. It involves defining the calibration range, zero and span values, and understanding the characteristics such as accuracy, tolerance, and reliability. Regular calibration is essential to detect errors caused by various factors and to maintain the integrity of measurement instruments in industrial applications.


WHAT IS CALIBRATION?
Cont…………………..

• The definition includes the capability to adjust the instrument to zero & to set the desired span.
• An interpretation of the definition would say
that a calibration is a comparison of
measuring equipment against a standard
instrument of higher accuracy to detect,
correlate, adjust, rectify & document the
accuracy of the instrument being compared.
Cont………………
• In The Automation, Systems, and Instrumentation Dictionary, the word calibration is defined as "a test during which known values of the measurand are applied to the transducer and corresponding output readings are recorded under specified conditions."
Cont………………

• Typically, calibration of an instrument is checked at several points throughout the calibration range of the instrument.
• The calibration range is defined as “the region
between the limits within which a quantity is
measured, received or transmitted, expressed
by stating the lower and upper range values.”
• The limits are defined by the zero and span
values.
Cont……………..
• The zero value is the lower end of the range.
• Span is defined as the algebraic difference between the
upper & lower range values.
• The calibration range may differ from the instrument range, which refers to the capability of the instrument. E.g.,
an electronic pressure transmitter may have a nameplate instrument range of 0 to 750 pounds per square inch gauge (psig)
and an output of 4-to-20 milliamps (mA).
• However, the engineer has determined the instrument will be calibrated for 0-to-300 psig = 4-to-20 mA.
• Therefore, the calibration range would be specified as 0-to-300 psig = 4-to-20 mA.
Cont………………………..

• In this example, the zero input value is 0 psig and the zero output value is 4 mA.
• The input span is 300 psig and the output span
is 16 mA.
• Different terms may be used at your facility.
Just be careful not to confuse the range the
instrument is capable of with the range for
which the instrument has been calibrated.
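As a quick illustration (not from the original slides; the function name and values are assumptions for this sketch), here is a minimal Python example that converts an applied pressure into the expected 4-to-20 mA output for the 0-to-300 psig calibration range described above:

def expected_output_ma(pressure_psig, lrv=0.0, urv=300.0,
                       out_lrv=4.0, out_urv=20.0):
    """Ideal transmitter output (mA) for a given input pressure (psig)."""
    input_span = urv - lrv            # 300 psig
    output_span = out_urv - out_lrv   # 16 mA
    return out_lrv + (pressure_psig - lrv) / input_span * output_span

# 0 psig -> 4 mA, 150 psig -> 12 mA, 300 psig -> 20 mA
for p in (0, 150, 300):
    print(p, "psig ->", expected_output_ma(p), "mA")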
WHAT ARE THE CHARACTERISTICS OF A CALIBRATION?

Calibration Tolerance: Every calibration should be performed to a specified tolerance.
The terms tolerance and accuracy are often used
incorrectly.
In the Automation, Systems, & Instrumentation
Dictionary, the definitions for each are as follows:
Accuracy: The ratio of the error to the full scale output
or the ratio of the error to the output, expressed in
percent span or percent reading, respectively.
Tolerance: Permissible deviation from a specified value;
may be expressed in measurement units, percent of
span, or percent of reading.
Cont……………………..

• By specifying an actual value, mistakes caused by calculating percentages of span or reading are eliminated. Also, tolerances should be specified in the units measured for the calibration.
• For example, you are assigned to perform the
calibration of the previously mentioned 0-to-
300 psig pressure transmitter with a specified
calibration tolerance of ±2 psig.
The output tolerance would be: (2 psig / 300 psig) × 16 mA = 0.1067 mA
Cont…………………….
• The calculated tolerance is rounded down to
0.10 mA, because rounding to 0.11 mA would
exceed the calculated tolerance.
• It is recommended that both ±2 psig and ±0.10
mA tolerances appear on the calibration data
sheet if the remote indications and output
milliamp signal are recorded.
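For illustration only (the variable names are assumptions, not from the slides), the same tolerance conversion and rounding decision can be written as a short Python sketch:

import math

input_tolerance_psig = 2.0
input_span_psig = 300.0
output_span_ma = 16.0

output_tolerance_ma = input_tolerance_psig / input_span_psig * output_span_ma
print(round(output_tolerance_ma, 4))   # 0.1067 mA

# Round *down* so the working tolerance never exceeds the calculated one
working_tolerance_ma = math.floor(output_tolerance_ma * 100) / 100
print(working_tolerance_ma)            # 0.10 mA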
Cont………………..

• Note the manufacturer's specified accuracy for this instrument may be 0.25% full scale (FS).
• Calibration tolerances should not be assigned
based on the manufacturer’s specification only.
• Calibration tolerances should be determined from
a combination of factors. These factors include:
1. Requirements of the process
2. Capability of available test equipment
3. Consistency/ uniformity with similar instruments
at your facility
4. Manufacturer’s specified tolerance
WHY IS CALIBRATION REQUIRED?

• It makes sense that calibration is required for a new instrument.
• We want to make sure the instrument is
providing accurate indication or output signal
when it is installed.
• But why can’t we just leave it alone as long as
the instrument is operating properly and
continues to provide the indication we expect?
Cont………………………..

• Instrument error can occur due to a variety of factors: drift, environment, electrical supply, addition of components to the output loop, process changes, etc.
• Since a calibration is performed by comparing
or applying a known signal to the instrument
under test, errors are detected by performing a
calibration.
• An error is the algebraic difference between the
indication and the actual value of the measured
variable.
Typical errors that occur include:
A . INSTRUMENT ERRORS
Any given instrument is prone to errors either due to
aging or due to manufacturing tolerances.
Here are some of the common terms used when
describing the performance of an instrument.
1. RANGE
The range of an instrument is usually regarded as the
difference between the maximum & minimum reading.
For example, a thermometer that has a scale from 20°C to 100°C has a range of 80°C.
This is also called the FULL SCALE DEFLECTION (f.s.d.).
Cont………………………

2 . ACCURACY
The accuracy of an instrument is often stated as a % of
the range or full scale deflection. For example a
pressure gauge with a range 0 to 500 kPa and an
accuracy of plus or minus 2% f.s.d. could have an error
of plus or minus 10 kPa.

3. REPEATABILITY
If an accurate signal is applied and removed repeatedly
to the system and it is found that the indicated reading
is different each time, the instrument has poor
repeatability. This is often caused by friction or some
other erratic/changeable fault in the system.
Cont………………………….

4 . STABILITY
Instability is most likely to occur in instruments
involving electronic processing with a high
degree of amplification.
A common cause of this is adverse environmental factors such as temperature & vibration. E.g.,
in extreme cases the displayed value may jump about. This, for example, may be caused by a
poor electrical connection affected by vibration.
5. TIME LAG ERROR

In any instrument system, it must take time for a change in the input to show up on the indicated output.
This time may be very small or very large
depending upon the system.
This is known as the response time of the system.
If the indicated output is incorrect, because it has
not yet responded to the change, then we have
time lag error.
Cont……………..

• A good example of time lag error is an ordinary glass thermometer.
• If you plunge it into hot water, it will take some
time before the mercury reaches the correct level.
• If you read the thermometer before it settled
down, then you would have time lag error.
• When a signal changes a lot and quite quickly,
(speedometer for example), the person reading
the dial would have great difficulty determining
the correct value as the dial may be still going up
when in reality the signal is going down again.
Cont…………………..

6 . RELIABILITY
Most forms of equipment have a predicted life span.
The more reliable it is, the less chance it has of
going wrong during its expected life span. The
reliability is hence a probability ranging from zero
(it will definitely fail) to 1.0 (it will definitely not
fail).
7 . DRIFT
This occurs when the input to the system is constant, but the output tends to change slowly.
For example when switched on, the system may drift
due to the temperature change as it warms up.
NIST Traceability
• Experts at the NIST (National Institute of Standards
and Technology) work to ensure we have means of
tracing measurement accuracy back to intrinsic
standards, which are quantities inherently fixed (as far
as anyone knows).
• The machinery necessary to replicate intrinsic standards for practical use is quite expensive and usually delicate.
• This means the average metrologist (let alone the average industrial instrument technician) simply will never have access to one.
Cont………………….

• In order for these intrinsic standards to be useful within the industrial world, we use them to calibrate other instruments, which are used to calibrate other instruments, and so on until we arrive at the instrument we intend to calibrate for field service in a process.
• So long as this "chain" of instruments is calibrated against each other regularly enough to ensure good accuracy at the end-point, we may calibrate our field instruments with confidence.
• The documented confidence is known as NIST traceability: the accuracy of the field instrument we calibrate is ultimately ensured by a trail of documentation leading to intrinsic standards maintained by the NIST.
Calibration and re-ranging
• In analog instruments, re-ranging could
(usually) only be accomplished by re-
calibration, since the same adjustments were
used to achieve both purposes.
• In digital instruments, calibration and ranging
are typically separate adjustments (i.e. it is
possible to re-range a digital transmitter
without having to perform a complete
recalibration), so it is important to understand
the difference.
Zero & span Adjustments (Analog Transmitters)

• The purpose of calibration is to ensure the input and output of an instrument correspond to one another predictably throughout the entire range of operation.
• We may express this expectation in the form of
a graph, showing how the input and output of
an instrument should relate:
Cont………………….
Cont………………

• This graph shows how any given percentage of input should correspond to the same percentage of output, all the way from 0% to 100%.
• Things become more complicated when the input and
output axes are represented by units of measurement
other than “percent.”
• Take for instance a pressure transmitter, a device
designed to sense a fluid pressure and output an
electronic signal corresponding to that pressure.
• Here is a graph for a pressure transmitter with an input range of 0 to 100 pounds per square inch (PSI) and an electronic output signal range of 4 to 20 milliamps (mA) electric current:
Cont……………….
Cont………………

• Although the graph is still linear, zero pressure does not equate to zero current.
• This is called a live zero, because the 0% point of measurement (0 PSI fluid pressure) corresponds to a non-zero ("live") electronic signal.
• 0 PSI pressure may be the LRV (Lower Range
Value) of the transmitter’s input, but the LRV
of the transmitter’s output is 4 mA, not 0 mA.
Cont………………………….

Any linear, mathematical function may be expressed in "slope-intercept" equation form:
y = mx + b
Where,
y = Vertical position on graph
x = Horizontal position on graph
m = Slope of line
b = Point of intersection between the line and the vertical (y)
axis
This instrument’s calibration is no different. If we let x represent
the input pressure in units of PSI and y represent the output
current in units of milliamps, we may write an equation for this
instrument as follows:
y = 0.16x + 4
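For illustration only (the function name is an assumption, not from the slides), this calibration equation can be evaluated with a short Python sketch; the slope is the output span divided by the input span (16 mA / 100 PSI = 0.16) and the intercept is the 4 mA live zero:

def ideal_output_ma(pressure_psi):
    """y = m*x + b with m = 0.16 mA/PSI and b = 4 mA (live zero)."""
    m = (20.0 - 4.0) / (100.0 - 0.0)   # slope: output span / input span
    b = 4.0                            # y-intercept: output at 0 PSI
    return m * pressure_psi + b

print(ideal_output_ma(0))     # 4.0 mA
print(ideal_output_ma(50))    # 12.0 mA
print(ideal_output_ma(100))   # 20.0 mA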
Cont…………

• On the actual instrument (the pressure transmitter), there are two adjustments which let us match the instrument's behavior to the ideal equation.
• One adjustment is called the zero while the other is
called the span.
• These two adjustments correspond exactly to the b
and m terms of the linear function, respectively: the
“zero” adjustment shifts the instrument’s function
vertically on the graph, while the “span” adjustment
changes the slope of the function on the graph.
• By adjusting both zero and span, we may set the
instrument for any range of measurement within the
manufacturer’s limits.
Cont………………

• It should be noted that for most analog instruments, these two adjustments are interactive. That is, adjusting one has an effect on the other.
• Specifically, changes made to the span
adjustment almost always alter the instrument’s
zero point.
• An instrument with interactive zero and span adjustments requires much more effort to accurately calibrate, as one must switch back and forth between the lower- and upper-range points repeatedly to adjust for accuracy.
Calibration procedures

• As described earlier in this chapter, calibration refers to the adjustment of an instrument so its output accurately corresponds to its input throughout a specified range.
• This definition specifies the outcome of a
calibration process, but not the procedure.
• It is the purpose of this section to describe
procedures for efficiently calibrating different
types of instruments.
Linear instruments
The simplest calibration procedure for a linear instrument is
the so-called zero-and-span method.
The method is as follows:
1. Apply the lower-range value stimulus to the instrument,
wait for it to stabilize
2. Move the “zero” adjustment until the instrument registers
accurately at this point
3. Apply the upper-range value stimulus to the instrument,
wait for it to stabilize
4. Move the “span” adjustment until the instrument
registers accurately at this point
5. Repeat steps 1 through 4 as necessary to achieve good
accuracy at both ends of the range
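As an illustration only (the class, numbers, and interaction model are invented for this sketch, not taken from the text), the following Python code simulates the zero-and-span method on a toy transmitter whose span adjustment slightly disturbs its zero, showing why step 5 (repeating the passes) is needed:

class ToyTransmitter:
    """Toy 0-300 psig / 4-20 mA transmitter with interacting zero and span."""
    def __init__(self):
        self.offset = 3.7   # mA, slightly mis-set zero
        self.gain = 16.5    # mA, slightly mis-set span

    def read(self, psi):
        return self.offset + self.gain * psi / 300.0

    def adjust_zero(self, delta_ma):
        self.offset += delta_ma

    def adjust_span(self, delta_ma):
        self.gain += delta_ma
        self.offset -= 0.1 * delta_ma   # interaction: span trim nudges the zero


xmtr = ToyTransmitter()
for n in range(10):
    xmtr.adjust_zero(4.0 - xmtr.read(0))      # steps 1-2: trim zero at 0 psig
    xmtr.adjust_span(20.0 - xmtr.read(300))   # steps 3-4: trim span at 300 psig
    if abs(xmtr.read(0) - 4.0) < 0.01 and abs(xmtr.read(300) - 20.0) < 0.01:
        print("within tolerance after", n + 1, "passes")   # step 5 satisfied
        break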
Cont………………………

• Some linear instruments provide a means to adjust linearity.
• The linearity adjustment of an instrument should be
changed only if the required accuracy cannot be
achieved across the full range of the instrument.
• Otherwise, it is advisable to adjust the zero and span
controls to “split” the error between the highest and
lowest points on the scale, and leave linearity alone.
Nonlinear instruments
• The calibration of inherently nonlinear
instruments is much more challenging than for
linear instruments.
• No longer are two adjustments (zero and span)
sufficient, because more than two points are
necessary to define a curve.
• Examples of nonlinear instruments include
expanded-scale electrical meters, square root
characterizers, and position-characterized
control valves.
Cont…………..
• Every nonlinear instrument will have its own
recommended calibration procedure, so I will defer
you to the manufacturer’s literature for your specific
instrument.
• I will, however, offer one piece of advice. When calibrating a nonlinear instrument, document all the adjustments you make (e.g. how many turns on each calibration screw) just in case you find the need to "re-set" the instrument back to its original condition.
Discrete instruments
• The word “discrete” means individual or
distinct.
• In engineering, a “discrete” variable or
measurement refers to a true-or-false
condition.
• Thus, a discrete sensor is one that is only able
to indicate whether the measured variable is
above or below a specified setpoint.
Cont……………………………

• Examples of discrete instruments are process switches designed to turn on and off at certain values.
• A pressure switch, for example, used to turn
an air compressor on if the air pressure ever
falls below 85 PSI, is an example of a discrete
instrument.
• Discrete instruments need regular calibration
just like continuous instruments.
Cont………

• Most discrete instruments have but one calibration adjustment: the set-point or trip-point.
• Some process switches have two adjustments: the set-point as well as a deadband adjustment.
• The purpose of a deadband adjustment is to provide an adjustable buffer range that must be traversed before the switch changes state.
• To use our 85 PSI low air pressure switch as an example, the set-point would be 85 PSI, but if the deadband were 5 PSI it would mean the switch would not change state until the pressure rose above 90 PSI (85 PSI + 5 PSI).
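A minimal Python sketch of this behavior (the class name and values are invented for illustration, not from the slides):

class LowPressureSwitch:
    """Discrete low-pressure switch: trips below the set-point, resets only
    after the pressure rises above set-point + deadband."""
    def __init__(self, setpoint_psi=85.0, deadband_psi=5.0):
        self.setpoint = setpoint_psi
        self.deadband = deadband_psi
        self.tripped = False

    def update(self, pressure_psi):
        if pressure_psi < self.setpoint:
            self.tripped = True                       # falls below 85 PSI: trip
        elif pressure_psi > self.setpoint + self.deadband:
            self.tripped = False                      # must exceed 90 PSI to reset
        return self.tripped


switch = LowPressureSwitch()
for p in (100, 90, 84, 87, 89, 91):
    print(p, "PSI ->", "tripped" if switch.update(p) else "normal")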
Cont………………….

• When calibrating a discrete instrument, you must be sure to check the accuracy of the set-point in the proper direction of stimulus change.
• For our air pressure switch example, this would mean checking to see that the switch changed states at 85 PSI falling, not 85 PSI rising.
• A procedure to efficiently calibrate a discrete instrument without too many trial-and-error attempts is to set the stimulus at the desired value (e.g. 85 PSI for the low-pressure switch) & then move the set-point adjustment in the opposite direction as the intended direction of the stimulus (in this case, increasing the set-point value until the switch changes states).
LRV and URV settings, digital trim (digital
transmitters)

• The advent of "smart" field instruments containing microprocessors has been a great advance for industrial instrumentation.
• These devices have built-in diagnostic ability,
greater accuracy (due to digital compensation
of sensor nonlinearities), and the ability to
communicate digitally with host devices for
reporting of various parameters.
• A simplified block diagram of a “smart”
pressure transmitter looks something like this:
Cont………………
Cont………….

• It is important to note all the adjustments within this device, and how this compares to the relative simplicity of an all-analog pressure transmitter:
Cont………………..

• Note how the only calibration adjustments available in the analog transmitter are the "zero" and "span" settings.
• This is clearly not the case with smart
transmitters.
• Not only can we set lower and upper-range values (LRV & URV) in a smart transmitter, but it is also possible to calibrate the analog-to-digital and digital-to-analog converter circuits independently of each other.
Cont………………….

• What this means for the calibration technician is that a full calibration procedure on a smart transmitter potentially requires more work and a greater number of adjustments than an all-analog transmitter.
• A common mistake made among students and
experienced technicians alike is to confuse the range
settings (LRV and URV) for actual calibration
adjustments.
• Just because you digitally set the LRV of a pressure transmitter to 0.00 PSI and the URV to 100.00 PSI does not necessarily mean it will register accurately at points within that range! The following example will illustrate this fallacy.
Cont……………………

• Suppose we have a smart pressure transmitter ranged for 0 to 100 PSI with an analog output range of 4 to 20 mA, but this transmitter's pressure sensor is fatigued from years of use, such that an actual applied pressure of 100 PSI generates a signal that the analog-to-digital converter interprets as only 96 PSI.
• Assuming everything else in the transmitter is in
perfect condition, with perfect calibration, the
output signal will still be in error:
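To make the error concrete, here is a small Python sketch using the numbers of this example (the function name is an assumption):

def output_ma(indicated_psi, lrv=0.0, urv=100.0):
    """Output current computed by the (otherwise perfect) smart transmitter."""
    return 4.0 + 16.0 * (indicated_psi - lrv) / (urv - lrv)

applied_psi = 100.0    # true pressure applied to the sensor
indicated_psi = 96.0   # what the fatigued sensor / ADC reports
print(output_ma(indicated_psi))   # 19.36 mA actually transmitted
print(output_ma(applied_psi))     # 20.00 mA that should be transmitted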
Cont………………………
Cont……………………

• Digitally setting a smart instrument's LRV and URV points does not constitute a legitimate calibration of the instrument.
• For this reason, smart instruments always
provide a means to perform what is called a
digital trim on both the ADC and DAC circuits,
to ensure the microprocessor “sees” the
correct representation of the applied stimulus
and to ensure the microprocessor’s output
signal gets accurately converted into a DC
current, respectively.
Practical calibration standards

• Within the context of a calibration shop environment, where accurate calibrations are important yet intrinsic standards are not readily accessible, we must do what we can to maintain a workable degree of accuracy in the calibration equipment used to calibrate field instruments.
• It is important that the degree of uncertainty in
the accuracy of a test instrument is significantly
less than the degree of uncertainty we hope to
achieve in the instruments we calibrate.
Cont………………………

• Otherwise, calibration becomes an exercise in futility.
• This ratio of uncertainties is called the Test
Uncertainty Ratio, or TUR.
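As a simple illustration (the figures are assumed, not from the text), the Test Uncertainty Ratio compares the tolerance being verified to the uncertainty of the standard used to verify it:

instrument_tolerance_psig = 2.0    # tolerance of the unit under test
standard_uncertainty_psig = 0.5    # uncertainty of the shop's test standard

tur = instrument_tolerance_psig / standard_uncertainty_psig
print("TUR =", tur)   # 4.0; a TUR of at least 4:1 is a common rule of thumb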
• The next few subsections describe various
standards used in instrument shops to
calibrate industrial instruments.
Electrical standards
Electrical calibration equipment – used to
calibrate instruments measuring voltage, current,
and resistance – must be periodically calibrated
against higher-tier standards maintained by
outside laboratories.
• In years past, instrument shops would often
maintain their own standard cell batteries
(often called Weston cells) as a primary voltage
reference.
Cont………………

Now, electronic voltage references have all but displaced standard cells in calibration shops and laboratories, but these references must be checked and adjusted for drift in order to maintain their NIST traceability.
One enormous benefit of electronic calibration
references is that they are able to generate
accurate currents and resistances in addition to
voltage (and not just voltage at one fixed value,
either!).
Cont………………

• Modern electronic references are digitally controlled as well, which makes them well suited to automated testing in assembly-line environments, and/or programmed multi-point calibrations with automatic documentation of as-found and as-left calibration data.
• If a shop cannot afford one of these versatile
references for bench top calibration use, an
acceptable alternative in some cases is to
purchase a high-accuracy multimeter and equip
the calibration bench with adjustable voltage,
current, and resistance sources.
Cont………………………
Cont………………

• It should be noted that the variable voltage source shown in this test arrangement need not be sophisticated.
• It simply needs to be variable (to allow precise
adjustment until the high-accuracy voltmeter
registers the desired voltage value) and stable
(so the adjustment will not drift appreciably
over time).
Temperature standards

• The most common technologies for industrial temperature measurement are electronic in nature: RTDs and thermocouples.
• As such, the standards used to calibrate such devices are the same standards used to calibrate electrical instruments such as digital multimeters (DMMs).
• However, there are some temperature-
measuring instruments that are not electrical in
nature.
Cont………………….

In order to calibrate these types of instruments, we must accurately create the calibration temperatures in the instrument shop.
• A time-honored standard for low-temperature
industrial calibrations is water, specifically the
freezing and boiling points of water.
• Pure water at sea level (full atmospheric pressure)
freezes at 32 degrees Fahrenheit (0 degrees Celsius) &
boils at 212 degrees Fahrenheit (100 degrees Celsius).
• In fact, the Celsius temperature scale is defined by
these two points of phase change for water at sea
level.
Cont………………………

• To use water as a temperature calibration standard, simply prepare a vessel for one of two conditions: thermal equilibrium at freezing or thermal equilibrium at boiling.
• “Thermal equilibrium” in this context simply means equal
temperature throughout the mixed-phase sample.
• In the case of freezing, this means a well-mixed sample
of solid ice and liquid water.
• In the case of boiling, this means a pot of water at a
steady boil (vaporous steam and liquid water in direct
contact).
• What you are trying to achieve here is ample contact between the two phases (either solid & liquid, or liquid & vapor) to eliminate hot or cold spots.
Pressure standards

• In order to accurately calibrate a pressure instrument in a shop environment, we must create fluid pressures of known magnitude against which we compare the instrument being calibrated.
• As with other types of physical calibrations, our
choices of instruments falls into two broad
categories: devices that inherently produce
known pressures versus devices that accurately
measure pressures created by some (other)
adjustable source.
Cont……………….

• A deadweight tester (sometimes referred to as a dead-test calibrator) is an example in the former category.
• These devices create accurately known pressures by
means of precise masses and pistons of precise area:
Cont………………

• After connecting the gauge (or other pressure instrument) to be calibrated, the technician adjusts the secondary piston to cause the primary piston to lift off its resting position and be suspended by oil pressure alone.
• So long as the mass placed on the primary
piston is precisely known, Earth’s gravitational
field is constant, and the piston is perfectly
vertical, the fluid pressure applied to the
instrument under test must be equal to the
value described by the following equation:
Cont………………

P = F/A
Where,
P = Fluid pressure
F = Force exerted by the action of gravity on the mass (Fweight = mg)
A = Area of piston
• The primary piston area, of course, is precisely set at the time of the deadweight tester's manufacture and does not change appreciably throughout the life of the device.
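A minimal sketch of the P = F/A calculation (the mass and piston size are invented for illustration):

import math

mass_kg = 10.0                   # precisely known mass on the primary piston
g = 9.80665                      # standard gravity, m/s^2
piston_diameter_m = 0.0112838    # gives a piston area of about 1 cm^2

area_m2 = math.pi * (piston_diameter_m / 2) ** 2
force_n = mass_kg * g            # F = m*g (the weight of the mass)
pressure_pa = force_n / area_m2  # P = F/A
print(round(pressure_pa / 1000, 1), "kPa")   # about 980.7 kPa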
Oscilloscope
• It is an electronic device which is used to measure the change in voltage over time.
• To measure an electrical voltage you would use a voltmeter.
• But what happens if the electrical voltage you want to measure is varying rapidly in time? The voltmeter display may oscillate rapidly, preventing you from making a good reading, or it may display some average of the time-varying voltage.
• In this case, an oscilloscope can be used to observe, and measure, the entire time-varying voltage, or "signal".
• The oscilloscope places an image of the time-
varying signal on the screen of a cathode ray
tube (CRT) allowing us to observe the shape of
the signal and measure the voltage at different
times.
• If the signal is periodic (it repeats itself over
and over) as is often the case, we can also
measure the frequency, the rate of repeating,
of the signal.
What the Oscilloscope Does

The oscilloscope plots voltage as a function of time.
• There are two types of voltages AC and DC.
• AC (derived from ALTERNATING CURRENT) indicates a voltage,
the magnitude of which varies as a function of time.
• An AC signal is shown in Figure 1. In contrast, DC (derived from
DIRECT CURRENT) indicates a voltage whose magnitude is
constant in time.
• The voltage is on the vertical (y) axis and the time is on the
horizontal (x) axis.
• A constant voltage (DC) shows up as a flat horizontal line. The
scope has controls to make the x and y scales larger or smaller.
• These act like the controls for magnification on a microscope.
• They don’t change the actual voltage any more than
magnification makes a cell on the microscope slide bigger;
they just let us see small details more easily.
• There are also controls to shift the center
points of the voltage scales.
• These “offset” knobs are like the controls to
move the stage of the microscope to look at
different parts of a sample.
• You will learn about other adjustments in the
course of the lab.
Components of oscilloscope
• Controls: There are several controls on the scope.
They include: the vertical grid (or scale) control (Volts/Div), vertical position control, the horizontal scale control (Timebase), intensity control, Trigger Level, Trigger Source, etc.
• There are vertical controls for each Y Input
supported by the scope.
• The intensity control controls the brightness of the
trace and the vertical position control is used to set
the zero voltage value of the signal along the Y-axis
(e.g., at the center of the grid).
• The basic scope controls are the vertical (Volts/Div)
and horizontal (Timebase) controls.
1.Volts/Div

• The vertical scale control is used to set how one reads the voltage values from the scope's Y axis grid.
• This is called the Volts/Div.
• The figure following shows a sine wave with an amplitude of 1 volt and the Volts/Div set to 0.5 volts/division. (I am using a 4x10 grid for the display.)
In the following figure, the same sine wave is displayed; however, the Volts/Div is set to 1 volt/division.
2. Timebase

(Assume that Volts/Div is set to 500 millivolts/division = 0.5 volts/division.) The Timebase controls how the horizontal (X-axis) is read.
In the following figure, a sine wave with frequency 1
Hz is displayed.
In this case, the frequency is 1Hz & its period is 1
complete cycle in 1 second (recall Hz = cycles per
second).
Since the Timebase is set to 0.1 seconds (or 100
milliseconds/division) and there are 10 divisions on
the horizontal axis, a second of time spans the full
X-axis and, therefore, a full cycle will be displayed.
Setting the Timebase to 200 milliseconds/division
X 10 divisions = 2 seconds will yield a display of 2
cycles.
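A quick Python sketch of this relationship (the function name is an assumption for illustration):

def cycles_displayed(freq_hz, timebase_s_per_div, divisions=10):
    """Full signal cycles visible across the horizontal axis."""
    screen_time_s = timebase_s_per_div * divisions
    return freq_hz * screen_time_s

print(cycles_displayed(1.0, 0.100))   # 1.0 cycle  (100 ms/div x 10 div = 1 s)
print(cycles_displayed(1.0, 0.200))   # 2.0 cycles (200 ms/div x 10 div = 2 s)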
How to adjust the Timebase and Volts/Div

• When an unknown signal (both in voltage and frequency) is applied, one may have to adjust both the Timebase & Volts/Div in a sequential manner until a clear signal is discerned.
• For example, if the unknown signal races
(moves slowly) across the screen, lower (raise)
the Timebase until a stationary signal is seen.
• If the amplitude is low (high), raise (lower) the Volts/Div.
• This may have to be iteratively performed until
an acceptable signal is obtained.
Trigger Level and Trigger Source

• The trigger is used to determine where (in time) the displayed signal starts.
• This feature is used to view the displayed signal in relation to a secondary signal and is used for timing purposes.
• For example, one may want to know the relationship of the input
signal to the output signal.
• First, the Trigger Source selects which signal is to be used as the
trigger.
• Typically, the INT is used to select the displayed signal for the
trigger (self-trigger),
• LINE is used to select the 60 Hz line voltage, and
• EXT is used to select an external signal.
• The Trigger Level is used to adjust the voltage level of the trigger
signal.
• This is also useful for synchronizing a signal whose frequency is not an exact multiple of the Timebase.
OSCILLOSCOPE OPERATION:
OBJECTIVE:
To familiarize oneself with the use of the Oscilloscope in conjunction
with the Function Generator
1st. Set the function generator to generate a sine wave, set the frequency to 50 Hz, and using a voltmeter set the output voltage to 1 volt peak.
2nd. Connect the output of the function generator to one of the inputs of the Agilent Oscilloscope.
3rd. Set the Oscilloscope to a Timebase of 10 ms and the vertical scale to 0.5 volts/division.
4th. How many cycles of the sine wave appear? How many peak-to-peak divisions does the sine wave fill? Are you seeing a 1 volt peak 50 Hz sine wave? If so, continue.
If not, check your function generator & oscilloscope settings and repeat this step until you obtain the proper signal display.
5th. Change the Timebase to 1 ms. What happens?
6th. Change the vertical scale to 0.1 volts/division. What happens?
7th. Describe what happens as one decreases and increases the Timebase.
8th. Describe what happens as one decreases and increases the vertical scale.
9th. Repeat steps 1-8 using a square wave and a triangular wave generated by the Function Generator.
10th. When satisfied, turn off both the function generator and the oscilloscope.
The Oscilloscope Experiment

• In this experiment you will familiarize yourself with the use of an oscilloscope.
• Using a signal generator you will produce various time varying
voltages (signals) which you will input into the oscilloscope for
analysis.
• There are two main quantities which can be measured with the
aid of an oscilloscope that characterize any periodic AC signal.
• The first is the peak-to-peak voltage (Vpp), which is defined as the
voltage difference between the time-varying signal’s highest and
lowest voltage .
• The second is the frequency of the time-varying signal (f), defined by f = 1/T, with the frequency f in Hz and the period T in seconds (the period is also shown in Figure below).
• Sometimes the angular frequency ω in rad/sec is used instead of the frequency f in Hz; they are related by ω = 2πf.
• The amplitude V0 is related to the peak-to-peak voltage Vpp of the signal by V0 = Vpp/2.
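As a small worked illustration (the measured values below are assumed), these quantities can be computed from a period and a peak-to-peak voltage read off the screen:

import math

period_s = 0.020    # measured period T
vpp = 2.0           # measured peak-to-peak voltage

freq_hz = 1.0 / period_s           # f = 1/T = 50 Hz
omega = 2.0 * math.pi * freq_hz    # angular frequency, rad/s
v0 = vpp / 2.0                     # amplitude = half the peak-to-peak voltage
print(freq_hz, round(omega, 1), v0)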
How the Oscilloscope Works

• An oscilloscope contains a cathode ray tube (CRT), in which the deflection of an electron beam that falls onto a phosphor screen is directly proportional to the voltage applied across a pair of parallel deflection plates.
• A measurement of this deflection yields a measurement of the applied voltage.
• The oscilloscope can be used to display and measure rapidly
varying electrical phenomena.
• The internal subsystems of the oscilloscope are shown in Figure a below, and the front panel of the oscilloscope is shown in Figure b.
• To see the front panel of the oscilloscope in more detail, open
the pdf file for this lab on the course web page and use
Adobe’s magnify option.
• A vertical amplifier is connected to the y-axis deflection plates.
• It serves to amplify the input signal to the y-plates so that the
CRT can show an appreciable vertical displacement for a small
signal. The horizontal amplifier serves the same purpose for the
x-axis plates and the horizontal display.
• Although an external input signal can be applied to the x-axis
input, this function of the oscilloscope is not used in this course.
• Instead, a sweep generator internal to the oscilloscope is used to
control the horizontal display.
• The sweep generator makes a beam move in the x-direction at a
constant, but adjustable speed.
• The beam’s speed is adjusted using the time base (TB) control
knob.
• This allows the oscilloscope to display the external y-input signal as a function of time.
• The sweep generator functions as follows. A saw-tooth voltage is
applied to the horizontal deflection plates.
• A saw-tooth voltage is a time-varying periodic voltage. The voltage first increases linearly with time and then suddenly drops to zero.
• As the voltage increases the beam is deflected more and more to
the right of the CRT screen.
• The result is the beam spot sweeps across the screen with the
same frequency as the saw-tooth signal.
• The horizontal position of the beam spot is therefore proportional to the time elapsed since the start of the sweep.
• Note: the time it takes the beam spot to move across the screen
(sweep time) is equal to the period of the saw-tooth signal.
• The rate at which the beam spot sweeps across the screen is
selected by using the time base (TB) selector knob and is
calibrated in time/cm.
• Because both the phosphor screen and the human eye have some finite retention time, the beam spot looks like a continuous line at frequencies higher than about 15 Hz.
The Signal Generator

• To investigate how the oscilloscope works in this first experiment, we will need to give it a test input signal.
• To accomplish this, we will be using a signal generator like the one pictured below.
The digital read out (upper left) displays the frequency that
the signal generator is currently set to.
This readout is in Hertz (Hz).
• The RANGE buttons (to the right of the display) will move
the decimal in the read out left or right.
This means that by pressing the button once, we can change
the frequency by a factor of ten.
In the example pictured, one press of the button would change the frequency from over 999 Hz to over 9,999 Hz or over 99 Hz, depending on which direction we move the decimal.
This will allow us to generate a large number of different
frequencies quickly and easily.
This only moves the decimal; it does not change the numbers
that are displayed.
• If we wish to make a different numerical value, we need to turn the knob immediately next to the RANGE buttons, marked ADJUST.
• This adjustment works in a rather unique way.
• If the knob is turned quickly the numbers change quickly.
• If we turn the knob slowly, the digits change slowly.
• So, with our frequency set at 999.99 Hz, as in the example
above, if we wish to set it to 999.48 Hz, we could turn the
knob slowly.
• If we wanted to set it to 188.34 Hz, we could turn the knob the same amount, but just turn faster and the digits change faster.
• It may seem a little bit awkward at first, but it gives us
quick access to a large range of frequencies.
• At the top is a setting labeled WAVEFORM. By changing this setting, we can
create smooth sine curves, square waves or triangular waves.
• The LED will light up next to the type of wave selected.
• Below the waveform setting is a knob labeled AMPLITUDE.
• By rotating this knob, we can change the amplitude or height of
our wave. This amplitude will be measured using the
oscilloscope.
• The far right hand side is the OUTPUT of the signal generator.
• This is where we connect the cables to take the signal to an
oscilloscope or an external circuit.
• We will use the two banana jacks at the bottom (the red and
black ones) to connect banana plugs to a cable that has a BNC
connector on the other end (the BNC connector is the round
metal one that will connect to the “input” on the oscilloscope).
• The cable with the banana plugs and a BNC connector is shown
in Figure below.
Reflective Exercise
Calculate frequency and period
We used 0.5 V on volts/div, which means each division represents 0.5 V. Again, 2 ms on time/div means each square is 2 milliseconds.
Now, if you want to calculate the period, then you have to check
how many divisions or squares it takes horizontally for a full wave
cycle to form;
1. Period: say you found it takes 9 divisions to form a full cycle; then the period is the multiplication of the time/div setting and the number of divisions, so in this case it is 2 ms × 9 = 0.018 seconds.
2. Frequency: now, according to f = 1/T, the frequency is f = 1/T = 1/0.018 s ≈ 55.6 Hz.
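A small Python check of this reflective exercise (using the same assumed readings):

time_per_div_s = 0.002     # 2 ms per horizontal division
divisions_per_cycle = 9    # divisions spanned by one full cycle

period_s = time_per_div_s * divisions_per_cycle   # 0.018 s
frequency_hz = 1.0 / period_s                     # about 55.6 Hz
print(period_s, round(frequency_hz, 1))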
