LiDAR Techniques in FRST 443 Lab

Lab 5 of FRST 443 focuses on Light Detection and Ranging (LiDAR) technology, teaching students how to utilize ENVI software for processing and visualizing LiDAR data. The lab covers LiDAR fundamentals, visualization techniques, data processing methods, and tree segmentation, emphasizing the importance of accurate data handling and analysis. Students are required to submit their findings on CANVAS, ensuring originality in their responses to avoid plagiarism.


FRST 443: Remote Sensing for Ecosystem Management

LAB 5: Light Detection and Ranging

Instructor: Prof. Nicholas Coops

[Link]@[Link]

TAs: Tommaso Trotto, Brianne Boufford, Leanna Stackhouse

Office hours:
Posted in the Zoom section on Canvas (weeks of 10th March and 17th March)

Contact Emails:
[Link]@[Link]
[Link]@[Link]
[Link]@[Link]

For this assignment you will:


• Learn about Light Detection and Ranging (LiDAR)
• Understand how LiDAR works and what it's used for
• Use ENVI to:
  • Load raw LiDAR data points
  • Display a LiDAR point cloud
• Learn how to extract terrain and tree-level information from a LiDAR point cloud

Submit your answers on CANVAS. Each lab can be found under the
‘Submissions’ module (28th March)

When answering these questions, you do not need to cite your references, but
you do need to put things into your own words. Plagiarism will result in an
immediate fail.

List of sections

Section 1: LiDAR fundamentals – Provides an overview of LiDAR, how it works, and introduces the lab
Section 2: LiDAR visualization – ENVI tips and tricks on how to visualize a point cloud
Section 3: LiDAR processing – Guided tour of ENVI for some basic LiDAR processing
Section 4: Tree segmentation – Methods to extract individual trees from point clouds
Section 5: Your turn! – Full LiDAR processing pipeline in ENVI based on what you’ve
learned so far

Section 1: LiDAR fundamentals

Light Detection and Ranging (LiDAR) is an active remote sensing technique (i.e. it emits its own energy source). First used to measure altitude for the Apollo 15 mission in 1971, it has become prolific in construction, forestry, geology, and even health care applications. LiDAR is composed of 5 fundamental components: an emitter, a receiver, a very precise clock, a GPS unit, and an inertial measurement unit (IMU). The emitter and receiver emit and receive light pulses from a laser, while the precise clock, the GPS, and the IMU register the time and location of the emitted laser pulse. So, we know when and where a light pulse is emitted and when a reflection of that same pulse is returned. Since the speed of light is a constant (c = 3 × 10^8 m/s), we can use the simple distance formula (distance = time × velocity / 2) to calculate the distance a light pulse traveled before it was reflected, with an accuracy of around 2–15 cm depending on the sensor.
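The ranging arithmetic above can be sketched in a few lines of Python. The timing value below is illustrative only, not from a real sensor:

```python
# Sketch: LiDAR time-of-flight ranging, as described above.

C = 3.0e8  # speed of light, m/s

def pulse_range(round_trip_time_s: float) -> float:
    """Distance to target = (time * velocity) / 2.

    The division by 2 accounts for the pulse travelling
    to the target and back to the receiver.
    """
    return round_trip_time_s * C / 2.0

# A return received 6.67 microseconds after emission
# corresponds to a target roughly 1 km away.
print(pulse_range(6.67e-6))  # 1000.5 (meters)
```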

There are two types of LiDAR sensors based on how they record the returns of an emitted pulse: (1) discrete or (2) full waveform. As the name suggests, discrete sensors record a discrete number of returns per emitted pulse (1 to 5 depending on the system). Full waveform systems record a return every nanosecond, which allows them to produce a dense, near-continuous vertical profile. From both LiDAR systems we produce a 3D representation of the returns sensed by the LiDAR receiver, called a point cloud for discrete LiDAR and a waveform for full-waveform LiDAR. For this lab, we'll use a discrete return LiDAR point cloud.

The original output of a LiDAR mission requires pre-processing before we can gain useful information during analysis. Generally, pre-processing involves filtering out 'noisy' returns and classifying the return type. When you classify returns, you delineate between returns that represent the ground, vegetation, water, buildings, and canopy. In forestry, we use these classifications to 'normalize' the LiDAR point cloud, where we remove the influence of topography by flattening the ground to a common interpolated surface. This step is done after classification.
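As a rough illustration of what normalization does (not how any particular software implements it), the sketch below subtracts the elevation of the nearest classified ground point from each return. Real workflows interpolate a continuous ground surface (e.g. a TIN) instead; all coordinates here are made up:

```python
# Minimal sketch of point cloud height normalization, assuming the
# cloud has already been classified so ground returns are known.
import math

def normalize(points, ground):
    """points, ground: lists of (x, y, z) tuples.
    Returns points with z replaced by height above the nearest ground point."""
    out = []
    for x, y, z in points:
        # Nearest classified ground return in the XY plane.
        gx, gy, gz = min(ground, key=lambda g: math.hypot(g[0] - x, g[1] - y))
        out.append((x, y, z - gz))
    return out

ground = [(0, 0, 550.0), (10, 0, 552.0)]   # bare-earth returns (m)
canopy = [(1, 0, 565.0), (9, 0, 570.0)]    # vegetation returns (m)
print(normalize(canopy, ground))  # heights above ground: 15.0 and 18.0
```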

In this lab, we will use ENVI to do some simple LiDAR point cloud manipulation using its
integrated ‘LiDAR’ module. We’ll (1) import a point cloud file, (2) extract a DSM (digital
surface model) and DEM (digital elevation model). These can later be used to create a
Canopy Height Model (CHM) to estimate tree height.
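The CHM mentioned above is just the cell-wise difference DSM − DEM. A minimal Python sketch on small made-up grids (the lab rasters are of course much larger):

```python
# Canopy Height Model = DSM - DEM, computed cell by cell.
dsm = [[570.0, 575.0], [560.0, 580.0]]  # top-of-canopy elevations (m)
dem = [[550.0, 551.0], [552.0, 553.0]]  # bare-earth elevations (m)

chm = [[s - e for s, e in zip(srow, erow)] for srow, erow in zip(dsm, dem)]
print(chm)  # [[20.0, 24.0], [8.0, 27.0]] -- tree heights in meters
```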

Figure 1: Discrete-return point cloud colored by return height.

Section 2: LiDAR visualization

Now, let’s start playing around with some LiDAR data inside ENVI. First, search and run
ENVI LiDAR 6.1 (or 6). If you can’t find it, launch ENVI first and on the right toolbar
expand the LiDAR folder and double click on Launch ENVI LiDAR. This will open a new
interface dedicated to basic LiDAR manipulation.

To import a LiDAR file, click on the little directory icon in the top-left corner. Look for
the [Link] file included in your lab 5 data. LiDAR data are normally distributed in the
open-source LAS/LAZ format. Check the Projection information pop-up when
prompted to see what CRS (coordinate reference system) is associated with the LiDAR
file (in this case it is EPSG:2949). By default, ENVI creates a folder associated with your
LiDAR file (in this case called demo) to store extra bits of information used to properly
display the point cloud. This folder also includes a DSM of the point cloud that is produced
automatically when you import the file in the software and will store additional products
produced during the analysis. Finally, note that you’re working with a LAZ file, which is a
compressed form of a LAS file.

IF, AT THIS POINT, ENVI CRASHES, RELAUNCH THE APPLICATION AND LOAD
THE [Link] FILE INSIDE THE DEMO FOLDER

To reduce memory consumption and offer a smoother user experience, ENVI crops the
LiDAR file into a smaller tile solely for visualization purposes. To change the view, you can navigate around your point cloud using the left menu.

Here, you have an indication of the XY coordinates of the tile center and its size (250m x 250m). For precise control, you can jump in the X or Y direction by the amount specified in Jump (m) by clicking the arrows next to it. For example, if we wanted to jump 50 meters up on the Y axis, we would set 50 in Jump (m) and click the north arrow. We can also zoom in and out by clicking the lenses on the left, though we can't set how much to zoom in or out as we did for the XY jump. For coarser view changes, we can drag a new shape on the small overview available, simply by pointing, clicking, and dragging with the pointer. The classic mouse wheel also works to zoom the current view in and out. If you instead left-click and move your mouse, you can navigate along the XYZ axes.

Note that we also have control over the density of points we can display. This is a useful setting in case we are loading a very dense point cloud that would take a long time to visualize. This value is currently set to 50 points per m², even though our input point cloud has an average point density of 7 points per m². Let's try setting this value to 1 and click Set. You may notice a change in your view depending on how zoomed in or out you are. By reducing the number of points displayed, it's easier to navigate around.
We can also reset the view or get a view from above by choosing either of these tools. ENVI LiDAR has some basic filtering functionality that helps you select only the points you want to work with. This can be useful to remove high-backscatter noise above a certain elevation; however, it is used exclusively for visualization purposes. For example, keep all points between 550m and 580m to display the mid-valley sides of the hills. Make sure Filter points is checked. You may also need to select a larger area to see a change. For more control over the minimum and maximum height values, check out the last section.
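Conceptually, this elevation filter just keeps returns whose z value falls inside a window. A minimal Python sketch using the same 550–580 m window (the point tuples are made up):

```python
# Sketch of an elevation filter like the one ENVI applies for display.

def filter_by_elevation(points, z_min, z_max):
    """points: iterable of (x, y, z) tuples. Returns the in-range subset."""
    return [p for p in points if z_min <= p[2] <= z_max]

pts = [(0, 0, 545.0), (1, 1, 560.0), (2, 2, 575.0), (3, 3, 600.0)]
print(filter_by_elevation(pts, 550, 580))  # keeps only the 560 m and 575 m returns
```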

We can also extract a transect from the point cloud using the transect tool. Simply select the tool, click the starting point of the transect, then click on the end point, and a pop-up window with the resulting transect will appear. If you move your pointer in the main window or in the pop-up, you can change the thickness of the transect. Otherwise, for finer control, change the Thickness, Movement, and Angle values in the pop-up.
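Under the hood, a transect is simply the set of points lying within half the chosen thickness of a line segment. A minimal Python sketch of that geometry (coordinates are illustrative, not from the lab file):

```python
# Sketch: extract points within a transect of given thickness
# running from (x0, y0) to (x1, y1).
import math

def transect(points, x0, y0, x1, y1, thickness):
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    out = []
    for x, y, z in points:
        # Position along the segment (0..1) and perpendicular distance to it.
        t = ((x - x0) * dx + (y - y0) * dy) / (length ** 2)
        dist = abs((x - x0) * dy - (y - y0) * dx) / length
        if 0.0 <= t <= 1.0 and dist <= thickness / 2.0:
            out.append((x, y, z))
    return out

pts = [(5.0, 0.5, 560.0), (5.0, 3.0, 570.0)]
# 2 m thick transect along the X axis: only the point 0.5 m off the line survives.
print(transect(pts, 0, 0, 10, 0, 2.0))  # [(5.0, 0.5, 560.0)]
```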

Finally, we have control over the color scheme and active color field, which can be switched between Height, Classification, Intensity, or RGB. Play around with these tools to see how the coloring affects what you perceive. The color picker allows you to select the color scheme or the stretching of the color ramp.

For question 12 of the lab submission, take screenshots of both the following windows together:
1. Display only points between 550 and 600 meters by height;
2. Extract a 2-meter-thick transect or an area of your point cloud (make sure that Show Frame is ticked);
Section 3: LiDAR processing

Data processing in ENVI LiDAR is carried out by the Process module. Access it directly via Process > Process Data… or click the Process icon. The Process module offers a wide range of processing techniques to extract a DEM, a DSM, individual features like buildings or trees, and terrain interpolations, among others.

If you explore the pop-up window, you'll notice three main tabs that give you ample control over the processing routine: the actual processing outputs, the area of interest, and the production parameters like DEM resolution, tree height, and the computing power dedicated to processing.

First, we are interested in extracting a DSM and DEM from the point cloud. To do so,
make sure you’re in the Outputs tab and select only Produce DSM and Produce DEM.
Each parameter requires a filename and an extension. Keep everything else the same
and move to the Area Definition tab.

Here, we can specify what area of the point cloud we want to process. While for the
moment we don’t have any precise requirements, it could be useful to work on a subset
of the study area first for faster computation when you need to tweak your processing
parameters or have a quick visualization of what your output looks like. In this case, the
point cloud we are working with is small, so click on Entire Area to make sure that the
full extent of the point cloud is covered. Next, move to the Production Parameters tab.

This is where the fun is! You get to change the parameters involved in the production of,
in this case, the DEM and DSM. Here, select a DEM Grid Resolution of 5m using a
Rural Area Filtering, with a Near Terrain Classification of 30cm. Don’t worry about the
contour line spacing parameter as it’s used to produce a secondary product out of your
DEM, which we'll see later on. Leave the other options as they are. Now, change the
DSM parameters to a Grid Resolution of 5m and uncheck Use Power-Lines Points
because we have none at the moment. Leave everything else as it is and click Start
Processing.
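To build some intuition for what the DSM step computes (ENVI's own algorithm is proprietary, so this is only the general idea), one simple approach is to keep the highest return in each grid cell, with the cell size playing the role of the Grid Resolution parameter:

```python
# Sketch: grid a point cloud into a DSM by taking the maximum z per cell.

def grid_dsm(points, cell=5.0):
    """points: (x, y, z) tuples. Returns {(col, row): highest z in cell}."""
    cells = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in cells or z > cells[key]:
            cells[key] = z
    return cells

pts = [(1, 1, 560.0), (2, 3, 572.0), (7, 1, 555.0)]
print(grid_dsm(pts, cell=5.0))  # {(0, 0): 572.0, (1, 0): 555.0}
```

A DEM would instead be interpolated from the Terrain-classified returns only, which is why classification matters for the DEM but not for the DSM.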

This may take a bit of time depending on the computer resources available to ENVI. You
can keep track of what’s happening in the background by looking at the minimal console
outputs at the bottom, which would look something like:

We see that production starts for the entire extent of the point cloud, and ENVI is working
out the terrain to produce the DEM, skips the buildings because we didn’t ask for those,
and builds the DSM. All these outputs can be found inside the demo folder that ENVI
creates when you first import your point cloud.

Once the processing is finished, ENVI enters QA Mode (Quality Assessment Mode),
where you can explore the resulting DSM and the classified point cloud. This looks
something like this:

On the left side, you have a breakdown of how ENVI classified the point cloud into
Unprocessed, Unclassified, Terrain, Near Terrain. The Terrain and Near Terrain points
are used to build the DEM. The unclassified points have no use now because we only
care about the terrain, whereas the DSM simply uses the top surface of the points, which
doesn’t require any particular classification. Unprocessed points may be on the edges or
represent noise in the point cloud. Hence, when working with multiple files, it’s
fundamental to consider buffering to avoid such edge artifacts.

Further, you can have a look at the DSM output by toggling on and off the layers. For a
more interactive view of the output, explore the content of the demo folder and open the
DAT files inside ENVI (not ENVI LiDAR). They should look something like this:

DEM

DSM

As you may notice, there are some edge artifacts, especially gaps on the east side. Again, it is important to account for these and apply a buffer during processing. Unfortunately, ENVI doesn't support this functionality at the moment. To exit QA Mode, click the QA Mode button, which is shaded in blue to indicate that it's active. To go back, click it again.

Note that in ENVI LiDAR the DSM is represented as a 3D surface. This is equivalent to laying a tablecloth on top of the point cloud. Once the result is exported to a .dat file, the 3rd dimension is flattened onto a 2D array of height values.

3D DEM visualization in ENVI LiDAR

By now, you probably know that LiDAR is not expected to have the exact same point density over the entire acquisition. We can explore where the lowest point density is located, which is where we may expect the greatest acquisition error. To do so, click Process > Generate Density Map… and select ENVI elevation format as the output format. The pop-up window that appears gives you a view of where high- and low-density areas are located. You can hover the cursor over it to see the point density change on the right side. The output file is now in the same demo folder for further investigation.
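A density map is conceptually just a per-cell count of returns divided by the cell area. A Python sketch (the point lists are illustrative, chosen to echo the ~7 points per m² average mentioned in Section 2):

```python
# Sketch: point-density map as returns per square meter, per grid cell.

def density_map(points, cell=10.0):
    """points: (x, y, z) tuples. Returns {(col, row): points per m^2}."""
    counts = {}
    for x, y, _ in points:
        key = (int(x // cell), int(y // cell))
        counts[key] = counts.get(key, 0) + 1
    return {k: n / (cell * cell) for k, n in counts.items()}

# 700 returns in one 10 m x 10 m cell, 70 in its neighbour.
pts = [(1, 1, 0)] * 700 + [(15, 1, 0)] * 70
print(density_map(pts, cell=10.0))  # {(0, 0): 7.0, (1, 0): 0.7}
```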

In conclusion, note that ENVI also generated an extra layer called Terrain. What is it?
This is what ENVI LiDAR calls Orthophoto. Despite not being a “real” orthophoto, this
layer is based on the LiDAR return intensity and is supposed to simulate what the LiDAR
tool “saw” when it hit a target. This could be an RGB image if the sensor collected RGB
information. Not widely used at the moment, but cool to know.

Section 4: Tree Segmentation

Now, let’s process some trees! There are many in this point cloud, so let’s see how ENVI
performs. Go to Process > Process Data... and toggle Produce Trees on, toggle
everything else off. Save the output with a SHP extension and go to Area Definition. We
want to work with a smaller area to reduce computation time and see how well ENVI
works. Manually draw a small rectangle on the point cloud view simply by using your
pointer. Note that there are many trees, so use a small area to start with. You can check
the Size in the right panel. Aim for something around 50m x 50m (doesn’t have to be
precise). You could also import a previously-created polygon to delimit your area of
interesting with Load New Layer…

Next, go to Production Parameters and look at the Trees section. Set the Max height
to 1500cm and change Radius to a maximum of 350cm. This will produce a collection of
shapes ideally corresponding to the segmented trees using the parameters we just set.
Start Processing and wait for ENVI to finish. Note that during production we can also
set the minimum and maximum height for the points we want to process in the General
section under the Production Parameters tab.

ENVI now enters QA Mode, where you can explore the resulting segmented trees. This
is what you would expect to see:

Here, ENVI found 52 trees, identified by the green balloons. You can see these shapes
in GIS by opening the associated shapefile. Note that the tapering of the shapes is an
artifact to help you visualize the trees. In reality, the same file contains the center point of
the crown and information on tree height and radius. Do you see more trees than what
ENVI returned? What about the edges of the point cloud, could there be trees there that
were not segmented? Why?

For example, all the dark green points were classified as Trees; however, only 52 shapes were identified, and there are many points that were trees but don't belong to any shape (top-right and top-left corners). ENVI is not as developed as some other LiDAR software, and the algorithms used to compute the products aren't open-source, so you have to settle for what ENVI returns.

We may now use this output for other analyses outside ENVI LiDAR to extract tree-level
information, such as crown area, generate full height profiles for each segmented canopy,
extract tree volume information, etc.
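ENVI's segmentation algorithm is closed-source, but a common open alternative detects tree tops as local maxima in a Canopy Height Model. A minimal Python sketch of that general idea (the tiny CHM and the 2 m minimum height are arbitrary illustrative values, not ENVI's parameters):

```python
# Sketch: local-maxima tree top detection on a CHM grid.

def tree_tops(chm, min_height=2.0):
    """chm: 2D list of canopy heights (m). Returns (row, col) of cells
    that are strictly taller than all of their 8 neighbours."""
    tops = []
    rows, cols = len(chm), len(chm[0])
    for r in range(rows):
        for c in range(cols):
            h = chm[r][c]
            if h < min_height:
                continue  # too short to be a tree
            neighbours = [chm[rr][cc]
                          for rr in range(max(0, r - 1), min(rows, r + 2))
                          for cc in range(max(0, c - 1), min(cols, c + 2))
                          if (rr, cc) != (r, c)]
            if all(h > n for n in neighbours):
                tops.append((r, c))
    return tops

chm = [[0.0, 1.0, 0.0],
       [1.0, 15.0, 2.0],
       [0.0, 2.0, 0.0]]
print(tree_tops(chm))  # [(1, 1)] -- a single 15 m tree top
```

A crown would then be grown outward from each detected top, which is roughly what the Radius parameters constrain in ENVI.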

Section 5: Full LiDAR routine

Now that you’ve made it this far, it’s your turn to conduct a new full LiDAR processing
routine using ENVI. For this task, use the same [Link] file and generate the following
products:

1. Generate a DEM and DSM using a 3m grid resolution. Use a near terrain
classification of 30cm for the DEM and toggle off Use Power-Line Points using
only points between 550m and 600m. Choose the TIF file extension. Append a
single screenshot of both the DEM and DSM in QA mode of the entire area on
Canvas under Question 13;
2. Segment trees from a subset of the point cloud based on the following coordinates
(you can write them down in the Area Definition tab):
a. X Min: 349820
b. X Max: 349920
c. Y Min: 5477490
d. Y Max: 5477590
With a minimum height of 130cm and maximum height of 2500cm, and a minimum
radius of 100cm and maximum radius of 400cm. Do not clip the point height for
this step. Append a screenshot of all the resulting trees on Canvas under Question
14;

QUESTIONS: check Canvas for full description.


1. Briefly describe how LiDAR works and what the main components are (150 words max).
2. What are the main differences between discrete and full waveform LiDAR?
3. (True/False) LiDAR uses a mid-wave infrared band as emitter energy source.
4. Which of the following applications can LiDAR be used for?
5. Match the acronym with its purpose. (see Canvas)
6. Order the sequence of events that make up a basic LiDAR manipulation routine.
7. List 3 situations in a forestry context where LiDAR would not be your primary
technological choice. Describe why.
8. Consider you have been tasked to fly a LiDAR sensor on a remotely piloted aircraft
under foggy conditions at night. However, you are very concerned with data quality
and are wondering whether you should fly the LiDAR or not. Ultimately, you decide the mission is aborted. Why?
9. What is point cloud normalization and why is it important?
10. Select all the correct options: (see Canvas).
11. Describe in your own words why LiDAR is a breakthrough technology for forest
structural analyses and how it complements passive remote sensing systems.
Concisely answer with no more than 70 words.
12. Copy and paste a screenshot showing the point cloud view and transect from
the file [Link] according to the lab instructions.
13. Copy and paste a screenshot showing the DSM you generated from the file
[Link] according to the instructions in section 5.
14. Copy and paste a screenshot showing the segmented trees you generated from
the file [Link] according to the instructions in section 5.
