Optical triangulation (1D),
Structured light (2D).
Submitted by
Hridya N
M2 RAA
TVE23ECRA08
Optical triangulation and structured light are both techniques used in range
sensors for measuring distances or creating depth maps in different dimensions.
Optical triangulation (1D)
Optical Triangulation:
• Optical triangulation is a method of determining the distance to an object by measuring angles.
• In 1D optical triangulation, a light source typically emits a beam of light toward the object, and a
sensor detects the reflected light.
• More generally, it involves projecting a light pattern onto the object and observing its deformation or
displacement with one or more cameras; by analyzing the displacement of the pattern in the camera
images, the distance or shape of the object can be calculated using trigonometry.
• By measuring the angle between the emitted and reflected light, and knowing the baseline distance
between the light source and the sensor, the distance to the object can be calculated using trigonometry.
• It is effective for measuring distances along a single axis, providing a linear measurement of distance.
1D Optical Triangulation:
Laser triangulation is a type of 1D optical
triangulation.
In laser triangulation, a laser beam is projected onto
the surface of an object, and the position of the laser
spot on the object's surface is observed using a
sensor (such as a photodiode or a camera).
By measuring the position of the reflected laser spot
along a single dimension (hence 1D), the distance to
the object can be calculated.
Principle of 1D laser triangulation:
1D Optical Triangulation:
Principle: 1D Optical triangulation measures distances using the principles of trigonometry.
It involves projecting a light beam onto an object and detecting the reflected light to calculate
the distance based on the angle of reflection.
Geometry: By measuring the angle between the emitted and reflected light rays and knowing
the baseline distance between the light source and the sensor, the distance to the object can be
calculated.
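As a minimal sketch of this geometry (plain Python; it assumes the light source and the sensor sit at the two ends of the baseline, with the emission and observation angles both measured from the baseline), the perpendicular distance to the illuminated point can be computed as:

```python
import math

def triangulated_distance(baseline, alpha, beta):
    """Perpendicular distance from the baseline to the illuminated point,
    given the emission angle alpha and the observation angle beta (radians),
    both measured from the baseline: d = b / (cot(alpha) + cot(beta))."""
    return baseline / (1.0 / math.tan(alpha) + 1.0 / math.tan(beta))

# Symmetric case: with both angles at 45 degrees, the point sits at half
# the baseline length away from the baseline.
d = triangulated_distance(1.0, math.radians(45), math.radians(45))  # 0.5
```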
1D Optical Triangulation:
What does it do? Measures distance with accuracy from a few microns to a few millimetres,
over a range of a few millimetres to tens of metres, at rates of 100 to 60,000 measurements per second.
Why use this technique? Single-point optical triangulation instruments provide distance
information quickly and easily without touching the object being measured.
Type : Optical triangulation sensors are typically categorized based on the dimensionality of
the measurements they provide.
• There are 1D, 2D, and 3D sensors.
• 1D optical triangulation sensors are commonly used for linear distance measurements along
a single axis.
1D Optical Triangulation:
Working:
1. Light Emission: A light source, often a laser diode, emits a beam of light towards the target
object.
2. Reflection: The light beam reflects off the object's surface and is captured by a sensor,
usually a photodiode or a photodetector.
3. Angle Measurement: The angle of reflection is measured relative to the baseline distance
between the light source and the sensor.
4. Distance Calculation: Using trigonometric calculations, the distance to the object along the
sensor's axis can be determined.
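Steps 1 to 4 can be sketched with the common lens-based formulation (a hypothetical numeric example, assuming a pinhole-style lens of focal length f mounted at baseline b from the laser, with the spot imaged at offset x on the detector):

```python
def distance_from_spot(focal_length_mm, baseline_mm, spot_offset_mm):
    """Lens-based 1D triangulation: the target distance is inversely
    proportional to the laser spot's image offset on the detector,
    z = f * b / x (similar triangles)."""
    return focal_length_mm * baseline_mm / spot_offset_mm

# e.g. an 8 mm lens and a 100 mm baseline, with the spot imaged
# 0.4 mm off-axis, gives a target distance of 2000 mm.
z = distance_from_spot(8.0, 100.0, 0.4)  # 2000.0
```

Note the inverse relationship: halving the spot offset doubles the computed distance, which is the source of the non-linearity discussed later for this type of instrument.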
1D Optical Triangulation:
A single point optical triangulation system uses a laser light source, a lens and a linear light
sensitive sensor. The geometry of an optical triangulation system is illustrated in figure 1.
Fig. Geometry of Optical Triangulation
1D Optical Triangulation:
A light source (typically a laser or LED) illuminates a point on an
object, and an image of this light spot is formed on the sensor
surface. As the object moves, the image moves along the sensor;
by measuring the location of the light-spot image, the distance of
the object from the instrument can be determined, provided the
baseline length and the angles are known.
The most important component in an optical triangulation system
is the sensor. There are two types: the Position Sensitive Detector
(PSD) and the Charge-Coupled Device (CCD). The PSD is often
chosen for devices measuring over a small range, providing an
analogue output that is ideal for pass-fail applications. The CCD
sensor has the advantage of better geometric stability and produces
a signal well suited to providing a digital output.
Principle of Operation
• Optical triangulation position sensors use reflected waves to pinpoint position and
displacement.
• They are noncontact height or range measurement devices.
• These sensors use an optical transmitter to project light on the target. The reflection of that light
is focused via an optical lens on a light sensitive receiver.
• The distance is calculated from a reference point by determining where the reflected light falls
on a detector.
• If the target changes position, the reflected light changes as well. Conditioning electronics
provide an output signal proportional to target position.
• A measure of the received signal strength (more = stronger), or of the amount of time required
to achieve a desired signal strength (less = better), is an indication that the correct type of sensor
has been chosen.
Optical Triangulation:
• Resolution is defined as the smallest amount of distance change that can be reliably measured.
When properly designed, laser triangulation sensors offer extremely high resolution and
stability.
• Optical triangulation sensors have the ability to detect small motions and have been
successfully used in many demanding, high-precision measurement applications.
• Resolution can be determined by looking at the system's electrical noise.
• If the distance between the sensor and target is constant, the output will still fluctuate slightly
due to the white noise of the system.
• Most resolution values are presented based on the peak-to-peak value of noise and can be
represented by the following formula:
Resolution = Sensitivity x Noise
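A minimal numeric sketch of this formula (the sensitivity and noise values below are illustrative, not taken from any particular sensor):

```python
def resolution_mm(sensitivity_mm_per_volt, noise_pp_volts):
    """Resolution = Sensitivity x Noise, using the peak-to-peak
    electrical noise of the system."""
    return sensitivity_mm_per_volt * noise_pp_volts

# e.g. 0.5 mm/V sensitivity with 2 mV peak-to-peak noise -> 1 micron
r = resolution_mm(0.5, 0.002)  # 0.001 mm
```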
1D Optical Triangulation:
Advantages:
• High Accuracy: Optical triangulation sensors can provide accurate distance
measurements.
• Fast Response Time: They offer rapid response times, making them suitable for real-
time applications.
• Long-range Operation: Can operate over considerable distances with high precision.
• Low cost and high-speed measurement
• Can build up information from a single point to 2D or 3D
• Better quality control and a good choice of instruments
• Good selection of interfacing methods and good protection from the environment
1D Optical Triangulation:
Disadvantages:
• Limited to 1D Measurements: Optical triangulation is restricted to measuring
distances along a single axis.
• Environmental Sensitivity: Errors can occur due to factors like reflective surfaces,
ambient light interference, and changes in environmental conditions.
• PSD Sensors: PSD sensors cannot distinguish between multiple bright spots and exhibit
some systematic error characteristics.
• Eye Safety: Uses light to measure with, which may not be eye-safe.
• Occlusion: The sensor's view of the light spot can be occluded.
• Directionality: Some directionality issues arise from the handed design.
1D Optical Triangulation:
Applications:
• Industrial Automation: Used for precise position sensing in manufacturing processes,
assembly lines, and robotics.
• Proximity Sensing: Found in various consumer electronics, such as smartphones and tablets,
for touchless gesture recognition and object detection.
• Liquid Level Sensing: Utilized in tanks and containers for measuring fluid levels accurately.
Triangulation sensors are ideally suited for measuring distances, position, and displacement of
targets at long ranges with high accuracy.
Due to their versatility, laser triangulation sensors are used in numerous applications and industries
for non-contact displacement measurement.
From automated process control and R&D test and measurement to OEM integration, many
industries benefit from laser sensors.
Optical Triangulation:
Some design issues for optical triangulation systems are:
• If the object is moved in equal increments then the position of the image on the sensor will not move in
equal increments - the instrument is inherently non-linear
• If the object is moved over a relatively small range an approximately linear instrument can be produced.
• For a given range - if the base-line is extended the instrument will be more linear but will also become
large and possibly awkward to use. If the base line is shortened then a smaller instrument can be made
but it may not be possible to get the accuracy required at the long range due to the increased non-
linearity
• As the object is moved over the range of the instrument with a typical camera, the image of the light
spot will go in and out of focus. This is usually solved by positioning the sensor to comply with the
Scheimpflug condition - it will then be in focus over the whole range
• If an object occludes the view of the light spot, or stops the light source from illuminating the object,
then measurement will not be possible
Optical Triangulation:
• Instability of the light source direction will cause errors which may be a problem to long
range systems
• If the light source impinges on an uneven surface texture or colour, measurement accuracy
will be degraded
• If the configuration of an optical triangulation sensor is altered, for example, through
temperature changes or shock, then the instrument will give erroneous results but the user
may not be aware of this.
Structured Light (2D)
The gradual shift from 2D to 3D imaging in various fields is a good indicator of the growing
demand for 3D sensing. 3D imaging techniques fall into two main groups: active and passive.
Active algorithms are based on different types of interaction with the measured object, which
makes them more precise and robust than passive ones.
There are three major technologies used in modern 3D cameras: time-of-flight, active
stereoscopy, and structured light.
Structured Light
• Structured light is well suited to short (and very short) active scanning ranges. The
algorithm has impressive accuracy, reaching 100 μm indoors. The technology cannot be
used for dark or transparent objects, or for objects with high reflectivity, and it struggles
under variable lighting conditions.
• Structured light is a specially designed 2D pattern that can be represented as an image. There
are many pattern types, and the choice depends on the application. The most basic approach is
to use Moiré fringe patterns — multiple images of sine patterns shifted by different phases.
• The simplest setup of a structured light system is shown below. The pattern is projected on the
object, the superposition of which is captured by the camera. The key step of the algorithm is
to recover the 3D shape of the object from camera images. The algorithm is not limited to the
light spectrum, though infrared or visible light is preferable.
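As a sketch of the pattern generation step (plain Python, producing 1D intensity profiles rather than full 2D images; the width, frequency, and shift values are illustrative), the phase-shifted sinusoidal fringe patterns described above can be generated as:

```python
import math

def fringe_profile(width, cycles, phase_shift, offset=0.5, amplitude=0.5):
    """One row of a sinusoidal fringe pattern:
    I(x) = offset + amplitude * cos(2*pi*cycles*x/width + phase_shift)."""
    return [offset + amplitude * math.cos(2 * math.pi * cycles * x / width + phase_shift)
            for x in range(width)]

# Four patterns shifted by pi/2 each, as used in 4-step phase shifting.
patterns = [fringe_profile(640, 16, k * math.pi / 2) for k in range(4)]
```

A real projector would repeat each profile over every row to form the 2D pattern image.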
Approach
• The structured-light approach inspects the surface or sides of an object. The goal is to acquire an
object's full shape by capturing it from different perspectives and assembling the whole picture
accordingly.
• We start with images generated using the Unity engine and build the whole 3D recognition
pipeline. The process can be mapped to real-world applications easily and quickly. The
method consists of six main steps:
Structured Light
1. Calibrate camera-projector pair
2. Project multiple Moiré patterns onto the object from different views (Fig. 2a)
3. Compute depth information of the object for each view (Fig. 2b)
4. Remove background and extract point cloud of object parts
5. Register point clouds from multiple views into one (Fig. 2c)
6. Convert point cloud into a mesh (Fig. 2d)
Structured Light
Fig.: Images from the structured light pipeline: top (2c) point cloud
of the rabbit, and bottom (2d) mesh of the rabbit
Fig.: The structured light pipeline
Structured Light
Principle: Structured light in 2D involves projecting a known pattern onto an object's surface
and analyzing the deformation of the pattern to extract depth information. By analyzing how
the projected pattern deforms on the object's surface, depth information can be inferred.
Structured Light
Working:
1. Pattern Projection: A structured pattern, such as grids, stripes, or coded sequences, is
projected onto the object's surface using a projector or laser.
2. Image Capture: A camera captures images of the pattern as it deforms on the object's
surface.
3. Depth Extraction: Depth information is extracted by analyzing how the projected pattern
deforms. This can be achieved using techniques such as stereo vision or phase shifting.
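For the phase-shifting variant mentioned above, the core per-pixel computation can be sketched as follows (a single-pixel simulation, assuming four captures under pattern shifts of 0, π/2, π, and 3π/2):

```python
import math

def wrapped_phase(i0, i1, i2, i3):
    """4-step phase shifting: four intensities captured under pattern
    shifts of 0, pi/2, pi, 3*pi/2 yield the wrapped phase at a pixel,
    phi = atan2(I3 - I1, I0 - I2)."""
    return math.atan2(i3 - i1, i0 - i2)

# Simulate one pixel: I_k = A + B*cos(phi + k*pi/2) for a known phase phi.
A, B, phi = 0.5, 0.3, 0.7
intensities = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = wrapped_phase(*intensities)  # recovers phi = 0.7
```

In a full pipeline the wrapped phase is then unwrapped and converted to depth using the camera-projector calibration.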
Structured Light
Advantages:
• High Resolution: Structured light sensors can capture detailed depth information with
high resolution and accuracy.
• Versatility: Suitable for a wide range of applications across different industries due to
their ability to handle complex shapes and surfaces.
• Real-time Performance: Structured light systems can provide depth information in
real-time, making them suitable for interactive applications.
Disadvantages:
• Calibration Requirements: Structured light systems require precise calibration and
alignment to ensure accurate depth measurement.
Structured Light
• Sensitivity to Environmental Factors: They can be affected by ambient lighting
conditions, shadows, and specular reflections, which may impact accuracy.
• Computational Complexity: Processing captured images to extract depth information
can be computationally intensive, requiring powerful hardware and algorithms.
Use cases:
• With high-quality, real-time 3D scanning, real-world objects can be quickly captured and
accurately reconstructed; 3D imagery only widens the range of applications for such
imaging technologies. Imaging technologies are used ubiquitously, from cameras in
interface devices for video chats to doctors using endoscopes for observing and collecting
human cells and tissue.
Structured Light
Applications:
• 3D Scanning and Modeling: Used in various industries for creating accurate 3D models
of objects and environments, such as in manufacturing, architecture, and entertainment.
• Biometric Applications: Employed in facial recognition systems, hand gesture tracking,
and other biometric applications for depth sensing and object recognition.
• Augmented Reality: Utilized for accurate depth sensing in augmented reality
applications for precise object placement and interaction.
• Medicine: facial anthropometry, cardiac mechanics analysis, and 3D endoscopy imaging
• Security: biometric identification, manufacturing defect detection, and motion-activated
security cameras
Structured Light
• Entertainment and HCI: hand gesture detection, expression detection, gaming, and 3D
avatars
• Communication and collaboration: remote object transmission and 3D video
conferencing
• Manufacturing: 3D optical scanning, reverse engineering of objects to produce CAD
data, volume measurement of intricate engineering parts, and more
• Retail: skin surface measurement for cosmetics research and development, and wrinkle
measurement on various fabrics
• Automotive: obstacle detection systems
• Guidance for the industrial robots
Structured Light
Fig. : Illustration of a structured light system containing one projector, one camera, and an object to be captured.
Structured Light
3D Ranging: Short
Light Wavelength: NIR and visible
Applications: Short-range sensing, Face ID/authentication
Strengths:
• Accurate over short distances
• Only one image sensor needed
• Simple object detection and categorization
Weaknesses:
• Very dependent on ambient light
• Processor-intensive distance measurement
• Privacy concerns over captured image data
• Requires calibration
Conclusion
• Optical triangulation in 1D provides accurate distance measurements along a single axis,
while Structured light in 2D offers detailed depth information across two dimensions
Reference
• Structured Light Techniques and Applications - Bell - Major Reference Works - Wiley Online Library
• TechBrief_SinglePtOpticalTriangulation.pdf (optical-metrology-centre.com)
Thank you