
Degree Project in Technology

First cycle, 15 credits

LiDAR robot: Mapping of rooms in 2D
JENNIFER LUNDSTRÖM
THOMAS NORÉN

Stockholm, Sweden, 2024


LiDAR robot: Mapping of rooms in 2D

JENNIFER LUNDSTRÖM
THOMAS NORÉN

Bachelor’s Programme in Technology


Date: June 4, 2024

Supervisor: Daniel Frede


Examiner: Daniel Frede
School of Industrial Engineering and Management
Swedish title: LiDAR robot: Kartläggning av rum i 2D
© 2024 Jennifer Lundström and Thomas Norén

Abstract
This project developed a 2D room-mapping robot equipped with Light De-
tection and Ranging (LiDAR) technology, designed to navigate environments
either autonomously via a wall-following algorithm or manually through
keyboard inputs. The goal was to assess and compare the accuracy of the
generated room maps with actual floor blueprints and evaluate the navigation
methods’ precision and practical applicability. Constructed with dual Direct
Current (DC) motors, a motor driver, an inertial measurement unit, two
ultrasonic sensors and optical encoders, an RPLIDAR A1, and a Raspberry
Pi 3B+, the robot was designed and fabricated using Solid Edge, laser cutting,
and 3D printing. Results indicated that manual control achieved more accurate
mappings than the autonomous navigation, though both struggled with data
acquisition, suggesting areas for future improvement in sensor accuracy and
processing.

Keywords
Mechatronics, LiDAR, Raspberry Pi

Sammanfattning
Detta projekt utvecklade en robot för kartläggning av rum i 2D, utrustad med
LiDAR-teknologi, designad för att navigera miljöer antingen autonomt via en
väggföljnings-algoritm eller manuellt via tangentbordsinmatningar. Målet var
att bedöma och jämföra noggrannheten hos de genererade rumskartorna med
faktiska byggnadsritningar och utvärdera navigationsmetodernas precision
och praktiska tillämpbarhet. Konstruerad med två likströmsmotorer, en motor
driver, ett gyroskop, två ultraljudssensorer och optiska encoders, en RPLIDAR
A1 och en Raspberry Pi 3B+, designades och tillverkades roboten med
Solid Edge, laserskärning och 3D-skrivning. Resultaten visade att manuell
kontroll uppnådde mer exakta kartläggningar än autonom navigering, även om
båda hade problem med datainsamling, vilket tyder på områden för framtida
förbättringar inom sensorers noggrannhet och bearbetning av datan.

Nyckelord
Mekatronik, LiDAR, Raspberry Pi

Acknowledgments
We would first like to thank our supervisor Daniel Frede for all the help he
has provided and for providing us with the RPLIDAR A1M8 and all the other
components that made this project possible. We would also like to thank the
teaching assistants for their excellent help with setting up the communication
between our robot and computer.

Stockholm, June 2024


Jennifer Lundström Thomas Norén

Contents

1 Introduction 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2 Theory 3
2.1 Microcontroller . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Ultrasonic sensor . . . . . . . . . . . . . . . . . . . . . . . . 4
2.3 Motors and Encoders . . . . . . . . . . . . . . . . . . . . . . 5
2.4 Motor driver . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.5 LiDAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.6 Inertial measurement unit . . . . . . . . . . . . . . . . . . . . 7
2.7 Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

3 Method 8
3.1 Creating a driveable robot . . . . . . . . . . . . . . . . . . . 8
3.2 Raspberry Pi 3B+ . . . . . . . . . . . . . . . . . . . . . . . . 8
3.3 Ultrasonic sensors . . . . . . . . . . . . . . . . . . . . . . . . 9
3.4 Calculating distance with encoders . . . . . . . . . . . . . . . 10
3.5 RPLIDAR A1 . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.6 Designing the robot . . . . . . . . . . . . . . . . . . . . . . . 11
3.6.1 Design of structural components . . . . . . . . . . . . 11
3.6.2 3D printed parts . . . . . . . . . . . . . . . . . . . . 11
3.6.3 Laser cut parts . . . . . . . . . . . . . . . . . . . . . 13
3.7 Wall-following algorithm . . . . . . . . . . . . . . . . . . . . 13
3.8 Manual steering of the robot . . . . . . . . . . . . . . . . . . 14
3.9 Uploading RPLIDAR data, coordinates and angles to a Google
Spreadsheet . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.10 Downloading and visualizing the RPLIDAR data, coordinates
and angles from a Google Spreadsheet . . . . . . . . . . . 15
3.11 Measurements of a miniature room . . . . . . . . . . . . . . . 15
3.12 Calculating the coverage ratio of the scans . . . . . . . . . . . 16

4 Results and Analysis 18


4.1 Manual measurements . . . . . . . . . . . . . . . . . . . . . 18
4.2 Autonomous measurements . . . . . . . . . . . . . . . . . . . 20
4.3 Time taken for scans . . . . . . . . . . . . . . . . . . . . . . 21
4.4 Complex environment scan . . . . . . . . . . . . . . . . . . . 22
4.5 Coverage ratio . . . . . . . . . . . . . . . . . . . . . . . . . . 23

5 Discussion and Conclusions 27


5.1 Wall-following algorithm . . . . . . . . . . . . . . . . . . . . 27
5.2 Manual driving . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.3 Answering the research questions . . . . . . . . . . . . . . . . 28
5.3.1 How does the LiDAR sampling frequency affect the
quality of the generated room scan? . . . . . . . . . . 28
5.3.2 How is the quality of the scan and time taken to scan
affected by using a simple wall-following algorithm
vs controlling the robot via keyboard input? . . . . . . 29
5.3.3 Interpreting the coverage rate results . . . . . . . . . . 29

6 Future work 30

References 33

A Supporting materials 35
A.1 Ranges for the scans . . . . . . . . . . . . . . . . . . . . . . . 35
A.2 Additional results . . . . . . . . . . . . . . . . . . . . . . . . 36
A.2.1 Additional manual results . . . . . . . . . . . . . . . 36
A.2.2 Additional autonomous results . . . . . . . . . . . . . 38
A.3 Supporting Materials . . . . . . . . . . . . . . . . . . . . . . 40
A.3.1 Circuit and schematic of robot . . . . . . . . . . . . . 40
A.4 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
A.4.1 Laser cut parts . . . . . . . . . . . . . . . . . . . . . 41
A.5 Python Code . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
A.5.1 Get rotations from optical encoder . . . . . . . . . . . 42
A.5.2 Uploading data to Google Spreadsheet . . . . . . . . . 44
A.6 Full program for manual input . . . . . . . . . . . . . . . . . 48

A.7 Full program for wall-following algorithm . . . . . . . . . . . 58


A.8 Program to retrieve data from Google Spreadsheet and
visualize the data as a composite of multiple 2D scans of the
environment. . . . . . . . . . . . . . . . . . . . . . . . . . . 69
A.9 Script to download specific test run data from Google Sheets
and calculate the coverage ratio for that test run. . . . . . . . . 75

List of Figures

2.1 Picture of Raspberry Pi 3B+ . . . . . . . . . . . . . . . . . . 4


2.2 HC-SR04 Ultrasonic sensor . . . . . . . . . . . . . . . . . . . 4
2.3 DC Motor and optical encoder created in SolidEdge . . . . . . 5
2.4 A picture of a L298N Motor driver. . . . . . . . . . . . . . . . 6
2.5 Picture of RPLIDAR A1 . . . . . . . . . . . . . . . . . . . . 6
2.6 Picture of MPU-6050 . . . . . . . . . . . . . . . . . . . . . . 7

3.1 Picture of ultrasonic sensor with voltage divider . . . . . . . . 9


3.2 Picture of components in RPLiDAR A1M8 kit taken from [7] . 10
3.3 Exploded render of the ball transfer unit with marble in Solid
Edge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.4 Render of DC Motor mount in Solid Edge. . . . . . . . . . . . 12
3.5 Render of LiDAR mount in Solid Edge. . . . . . . . . . . . . 12
3.6 Render of LiDAR mount in Solid Edge. . . . . . . . . . . . . 12
3.7 Exploded render of the entire chassis in Solid Edge. . . . . . . 13
3.8 Flowchart of wall-following algorithm made in draw.io . . . . 14
3.9 Floorplan created in Affinity Photo 2 . . . . . . . . . . . . . . 16
3.10 Example of a combination of five measurements of raw
LiDAR data points in a scatter plot. . . . . . . . . . . . . . . . 16

4.1 Figure showing the position of the robot in the miniature room
when doing a three-scan measurement. . . . . . . . . . . . . . 18
4.2 The acquired data points from the three-scan measurement
scattered in a plot. . . . . . . . . . . . . . . . . . . . . . . . . 18
4.3 Figure showing the position of the robot in the miniature room
when doing a four-scan measurement. . . . . . . . . . . . . . 19
4.4 The acquired data points from the four-scan measurement
scattered in a plot. . . . . . . . . . . . . . . . . . . . . . . . . 19
4.5 Figure showing the position of the robot in the miniature room
when doing a seven-scan measurement. . . . . . . . . . . . . 19
4.6 The acquired data points from the seven-scan measurement
scattered in a plot. . . . . . . . . . . . . . . . . . . . 19
4.7 The acquired data points using the wall-following algorithm
from the three-scan measurement scattered in a plot. . . . . . . 20
4.8 The acquired data points using the wall-following algorithm
from the four-scan measurement scattered in a plot. . . . . . . 20
4.9 The acquired data points using the wall-following algorithm
from the seven-scan measurement scattered in a plot. . . . . . 21
4.10 Floorplan of more complex environment . . . . . . . . . . . . 22
4.11 The measured data points from a run in a more complex environment. 22
4.12 Plot showing the room’s outline (red rectangle), LiDAR data
points from three manual measurements and the perimeter
cells that have data points inside (blue squares). . . . . . . . . 23
4.13 Plot showing the room’s outline (red rectangle), LiDAR data
points from four manual measurements and the perimeter cells
that have data points inside (blue squares). . . . . . . . . . . . 23
4.14 Plot showing the room’s outline (red rectangle), LiDAR data
points from seven manual measurements and the perimeter
cells that have data points inside (blue squares). . . . . . . . . 24
4.15 Plot showing the room’s outline (red rectangle), LiDAR
data points from three autonomous measurements and the
perimeter cells that have data points inside (blue squares). . . . 24
4.16 Plot showing the room’s outline (red rectangle), LiDAR data
points from four autonomous measurements and the perimeter
cells that have data points inside (blue squares). . . . . . . . . 25
4.17 Plot showing the room’s outline (red rectangle), LiDAR
data points from seven autonomous measurements and the
perimeter cells that have data points inside (blue squares). . . . 25
4.18 Plot showing the room’s outline (red rectangle), LiDAR
data points from seven autonomous measurements and the
perimeter cells that have data points inside (blue squares). . . . 26

A.1 Figure showing the position of the robot in the miniature room
when doing a one-scan measurement. . . . . . . . . . . . . . 36
A.2 The acquired data points from the one-scan measurement
scattered in a plot. . . . . . . . . . . . . . . . . . . . . . . . . 36
A.3 Figure showing the position of the robot in the miniature room
when doing a two-scan measurement. . . . . . . . . . . . . . 37
A.4 The acquired data points from the two-scan measurement
scattered in a plot. . . . . . . . . . . . . . . . . . . . 37
A.5 Figure showing the position of the robot in the miniature room
when doing a five-scan measurement. . . . . . . . . . . . . . 37
A.6 The acquired data points from the five-scan measurement
scattered in a plot. . . . . . . . . . . . . . . . . . . . . . . . . 37
A.7 The acquired data points using the wall-following algorithm
from a two-scan measurement scattered in a plot. . . . . . . . 38
A.8 The acquired data points using the wall-following algorithm
from a five-scan measurement scattered in a plot. . . . . . . . 38
A.9 The acquired data points using the wall-following algorithm
from a ten-scan measurement scattered in a plot. . . . . . . . . 39
A.10 Schematic of robot created in fritzing . . . . . . . . . . . . . 40
A.11 Circuit of robot created in fritzing . . . . . . . . . . . . . . . 41
A.12 Solid Edge Draft Document of the bottom part of the chassis. . 41
A.13 Solid Edge Draft Document of one of the struts for the chassis. 42
A.14 Solid Edge Draft Document of the top part of the chassis. . . . 42

List of Tables

4.1 Time taken for whole run, number of scans for that run and
time per scan for manual tests 1-12, including average time
per scan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2 Time taken for whole run, number of scans for that run and
time per scan for autonomous tests 1-12, including average
time per scan. . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.3 Number of scans, coverage ratio, number of covered cells
and coverage ratio per covered cells for six manual and six
autonomous runs with different numbers of measurements. . . 26

A.1 Ranges for manual tests 1-3. . . . . . . . . . . . . . . . . . . 35


A.2 Ranges for manual tests 4-6. . . . . . . . . . . . . . . . . . . 35

List of acronyms and abbreviations

API Application Programming Interface

CPU Central Processing Unit

DC Direct Current

GPIO General Purpose Input Output

I2C Inter-Integrated Circuit


IMU Inertial Measurement Unit

KTH KTH Royal Institute of Technology

LiDAR Light Detection and Ranging

MCU Microcontroller Unit


MEMS Micro-Electromechanical Systems

PWM Pulse Width Modulation

SPI Serial Peripheral Interface


SSH Secure Shell

UART Universal Asynchronous Receiver-Transmitter


USB Universal Serial Bus

WLAN Wireless Local Area Network



Chapter 1

Introduction

This report outlines the development of a Light Detection and Ranging
(LiDAR)-equipped robot designed for 2D room mapping, blending theoretical
concepts with practical application. It examines the integration of various
technologies, including a LiDAR sensor, data transfer through Google
Sheets, and Pulse Width Modulation (PWM)-controlled motors, while also
discussing prototyping challenges and corresponding solutions. It will
investigate research questions related to the quality of the 2D-scan based on
the sampling frequency of the LiDAR and the method used for the scan, i.e.
manual control with keyboard input vs. a wall-following algorithm.

1.1 Background
As vehicles are developed into autonomous vehicles, one key component of this
transition is LiDAR. Innovation in this area could improve day-to-day tasks and
simplify modern societal problems. With this technology, distances to different
objects can be measured, making it a key component for collecting information
regarding obstacles and the environment. By utilizing LiDAR on a moving robot it
is possible to scan a room and create a 2D layout. However, LiDAR is rarely used
alone; to create a functioning robot many components need to cooperate. To
better understand how different components work together and can be used to
create solutions for modern-day problems, a study with the purpose of building
a robot capable of mapping rooms in 2D with LiDAR is conducted.

1.2 Purpose
The purpose of this thesis is to present the prototyping process of a LiDAR-
equipped robot capable of mapping rooms in 2D. The robot will navigate
through a room and simultaneously collect LiDAR-data and coordinates along
the way. The data and coordinates are then sent to a computer that will process
the data and create a 2D layout of the room. The theory behind the different
technologies that make up the whole project and the methods used to get
the different technologies to work together will be presented. The research
questions being addressed are outlined as follows:

• How does the LiDAR sampling frequency affect the quality of the
generated room scan?

• How is the quality of the scan and time taken to scan affected by using
a simple wall-following algorithm vs controlling the robot via keyboard
input?

1.3 Scope
The project was allocated a timeline of approximately five months and
had a budget set at SEK 1000, not accounting for commonly available
components such as resistors, Direct Current (DC) motors, breadboards, and
Microcontroller Units (MCUs), which were supplied by KTH Royal Institute
of Technology (KTH). As such, the robot will be limited to a basic prototype.

1.4 Method
The project is divided into two main phases with the first one being a
theoretical investigation. In this phase information about different components
is gathered and a theoretical setup of the robot is created. The second part is
the experimental part where the components are tested and assembled into a
functioning robot.

Chapter 2

Theory
This section provides the theory behind all the components incorporated into
this project, as well as those that were considered for potential use. See section
3 for how the components were used.

2.1 Microcontroller
A MCU works like a small computer and consists of one or more Central
Processing Units (CPUs) along with memory and programmable input/output
ports. To be able to control a device such as a LiDAR-robot a microcontroller
such as the Raspberry Pi can be used.
By sending voltage to a port the micro controller reads it as a one and by
sending 0V we can communicate a zero [1]. Since computers use binary code
consisting of zeros and ones this allows us to communicate with the micro
controller.
The micro-controller used in this project is the Raspberry Pi 3B+, see
2.1. All Raspberry Pi models have two serial ports (Universal Asynchronous
Receiver-Transmitter (UART) pins) which will be needed for communication
with the LiDAR [2]. The micro-controller has built-in WiFi which allows data
to be sent from the micro-controller to a computer for processing. The Raspberry
Pi 3B+ contains a total of 40 General Purpose Input Output (GPIO) pins
where some only function as input/output pins while others have additional
functions such as PWM, Serial Peripheral Interface (SPI), Inter-Integrated
Circuit (I2C) and UART [3]. The micro-controller can be programmed by
using the programming language Python.

Figure 2.1: Picture of Raspberry Pi 3B+

2.2 Ultrasonic sensor


An ultrasonic sensor, see 2.2, is used to measure distance between the sensor
and an object. To calculate distance with the sensor, a 10 µs pulse is generated
by sending voltage to the sensor for 10 µs; this pulse will travel until it hits an
object and reflects back. The time it takes for the pulse to return, the duration
of the pulse, is measured. To calculate the distance the following formula is
used:

d = (t · c) / 2    (2.1)

where t [s] is the duration, c [m/s] the speed of sound and d [m] the distance.
To get the correct distance to an object the duration has to be divided by two
since the pulse travels both to the object and back.
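To make equation 2.1 concrete, the short Python sketch below converts a measured pulse duration into a distance. It only illustrates the formula; the function name and the value used for the speed of sound are our own choices for this example and are not taken from the project code.

# Minimal sketch of equation 2.1: distance from a measured echo time.
SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at room temperature

def pulse_duration_to_distance(duration_s):
    """Return the one-way distance in meters for a measured pulse duration."""
    # Divide by two because the pulse travels to the object and back.
    return duration_s * SPEED_OF_SOUND / 2

# Example: a round trip of 5.8 ms corresponds to roughly one meter.
print(pulse_duration_to_distance(0.0058))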

Figure 2.2: HC-SR04 Ultrasonic sensor



2.3 Motors and Encoders


Motors convert electrical energy, such as from a battery, to mechanical energy,
which allows control of anything connected to them; in this case wheels, as the
aim is to control and drive a small robot.
Encoders are used to measure the rotation of the motorshaft and either
incremental or absolute encoders can be used. Absolute encoders measure
the exact position of the shaft and incremental encoders are used to measure
the change in position and not the actual position. In this project incremental
encoders are used and these can either be optical or magnetic. Both work by
placing some sort of disc on the motorshaft so that it will rotate and then using
a sensor to measure the change in rotation. Optical encoders typically offer
higher accuracy than magnetic encoders, making them the preferred choice
for this project.
The optical encoder works by utilizing a disc with, in this case, 20 slits.
The disc is placed on the motorshaft between an IR emitter and optical sensor
module, see 2.3, and as the disc rotates it either blocks or allows the IR beam
to hit the optical sensor which causes state changes [4]. As there are 20 slits
the beam will hit the sensor 20 times but also be blocked 20 times resulting in a
total of 40 state changes in one rotation of the motorshaft. The distance a wheel
has traveled can then be calculated with the equation for the circumference of
a circle
distance = 2 · π · r · n (2.2)
where r [m] is the radius of the wheel and n the number of rotations.
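As a brief illustration of equation 2.2, the sketch below converts a number of counted state changes into a traveled distance. The wheel radius and the helper name are assumed values chosen only for this example.

import math

STATES_PER_ROTATION = 40   # 20 slits give 40 state changes per rotation
WHEEL_RADIUS_M = 0.03      # assumed wheel radius of 30 mm

def distance_from_state_changes(state_changes):
    """Distance traveled by the wheel according to equation 2.2."""
    rotations = state_changes / STATES_PER_ROTATION
    return 2 * math.pi * WHEEL_RADIUS_M * rotations

# Example: 120 state changes correspond to three full wheel rotations.
print(distance_from_state_changes(120))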

Figure 2.3: DC Motor and optical encoder created in SolidEdge

2.4 Motor driver


A motor driver, as illustrated in 2.4, is required to reverse the direction of a
DC motor’s rotation and control its speed. The motor driver achieves this by
implementing an H-bridge circuit, which consists of four switches (typically
transistors) that can be turned on or off to control the direction of the current
flow through the DC motor. By reversing the polarity of the current, the H-
bridge allows the motor to change its rotational direction.
In addition to direct control, the motor driver also manages the DC motor’s
speed through PWM. PWM controls the speed by varying the duty cycle of
the voltage supplied to the DC motor. By adjusting the proportion of time the
voltage is on versus off, the effective power delivered to the motor is changed,
thereby controlling its speed [1].
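To illustrate how an H-bridge driver is typically commanded from a Raspberry Pi, the sketch below drives one motor forward at half speed using PWM from the RPi.GPIO library. The pin numbers are placeholders chosen for the example and do not correspond to the wiring used in this project; see appendix A.3.1 for the actual circuit.

import time
import RPi.GPIO as GPIO

FWD_PIN, REV_PIN, ENA_PIN = 8, 7, 1   # example BCM pins, not the project wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup([FWD_PIN, REV_PIN, ENA_PIN], GPIO.OUT)

pwm = GPIO.PWM(ENA_PIN, 100)   # 100 Hz PWM on the enable pin
pwm.start(0)

# Forward direction: one H-bridge input high, the other low.
GPIO.output(FWD_PIN, GPIO.HIGH)
GPIO.output(REV_PIN, GPIO.LOW)
pwm.ChangeDutyCycle(50)        # 50 % duty cycle, roughly half speed

time.sleep(2)
pwm.stop()
GPIO.cleanup()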

Figure 2.4: A picture of a L298N Motor driver.

2.5 LiDAR
The LiDAR used is the RPLIDAR A1 which measures reflected infrared light
in the wavelengths 775 − 785 nm [2] to calculate time traveled to surfaces
within the interval 0.12 − 12 m. Similarly to ultrasonic sensors, the distance
to objects can be calculated with equation 2.1. The RPLiDAR A1 performs a
360 degree scan as it has a built-in motor, see figure 2.5, which differs from an
ultrasonic sensor, which only measures a single point.

Figure 2.5: Picture of RPLIDAR A1



2.6 Inertial measurement unit


An inertial measurement unit is an electronic component that measures
acceleration, orientation and angular rates of the unit. An MPU6050, see 2.6,
was used as it is a cheap and popular alternative. The MPU6050 is a Micro-
Electromechanical Systems (MEMS) device which has 3 accelerometers measuring
linear acceleration along the x-, y- and z-axis in meters per second squared as well as 3
gyroscopes measuring angular velocity in pitch, yaw and roll in degrees
per second [5].

Figure 2.6: Picture of MPU-6050

2.7 Localization
If the current position (x_i, y_i) of the robot is known then the next coordinate
(x_{i+1}, y_{i+1}) can be calculated with the following equations:

x_{i+1} = x_i + d · cos(θ)    (2.3)

y_{i+1} = y_i + d · sin(θ)    (2.4)

where d is the distance the robot has traveled and θ its heading angle.
The localization is done by using two wheel encoders, one on the left wheel
and one on the right wheel, and one Inertial Measurement Unit (IMU). The
IMU gives the rotational angle of the robot and the wheel encoders give the
distances d_left and d_right that the wheels have traveled. The total distance d is then
calculated with the following equation:

d = (d_left + d_right) / 2    (2.5)
By setting the start coordinates as (x_i, y_i) = (0, 0) we can use the method
mentioned above to calculate position.
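The dead-reckoning update in equations 2.3-2.5 can be summarized in a short Python function. This is only a sketch of the calculation; the function and variable names are our own, and the angle is assumed to be given in degrees as it is read from the IMU.

import math

def update_position(x, y, d_left, d_right, theta_deg):
    """Return the next (x, y) position using equations 2.3-2.5."""
    d = (d_left + d_right) / 2          # equation 2.5
    theta = math.radians(theta_deg)     # IMU angle assumed to be in degrees
    x_next = x + d * math.cos(theta)    # equation 2.3
    y_next = y + d * math.sin(theta)    # equation 2.4
    return x_next, y_next

# Example: starting at the origin, both wheels travel 0.2 m at a 90 degree heading.
print(update_position(0.0, 0.0, 0.2, 0.2, 90.0))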

Chapter 3

Method
This chapter will showcase the experimental part of the project; the design of
the robot complete with circuit diagrams, components and how they were used
along with CAD models and software.

3.1 Creating a driveable robot


To allow the robot to move, two DC motors with wheels connected to them
are used. The DC motors are then connected to an L298N motor driver which
allows for speed and directional control and accepts an input voltage up to 12V.
The motor driver is then connected to the Raspberry Pi and also a 9V battery.
To follow a wall two ultrasonic sensors are also connected to the Raspberry Pi,
see section 3.3 for details. Two optical encoders with corresponding sensors
and a MPU-6050 are connected to the Raspberry Pi to calculate coordinates,
see full circuit and schematic in section A.3.1.

3.2 Raspberry Pi 3B+


The microprocessor Raspberry Pi 3B+ was used which has a 5GHz Wireless
Local Area Network (WLAN) which allows for simple data transfer between
the Raspberry Pi and a stationary computer for processing of data. The
Raspberry Pi has a recommended input voltage of 5V/2.5A DC via micro
USB connector [6]. For this project the Raspberry Pi will be powered with
a 10 000 mAh powerbank via the micro USB connector. The Raspberry Pi
has an output voltage of 5V and 3.3V.

3.3 Ultrasonic sensors


Two ultrasonic sensors of type HC-SR04 were used. One is placed to the front
and one to the right side of the robot to allow the robot to follow a wall, for
details see section 3.7. The HC-SR04 sensor has an input voltage of 5V and
will also output a pulse of 5V. The duration of this 5V pulse needs to be read by
GPIO pins on the Raspberry Pi to calculate the distance measured according
to section 2.2. As the GPIO pins on the Raspberry Pi operate at 3.3V [3] the
ultrasonic sensors need to be used together with a voltage divider. To calculate
the appropriate resistance of the voltage divider the system in figure 3.1 is used

Figure 3.1: Picture of ultrasonic sensor with voltage divider

where Vin = 5V and Vout = 3.3V. The following equation for voltage dividers
is used [1]:

Vout = Vin · R2 / (R1 + R2)    (3.1)

This equation can then be written as:

R2 = R1 · Vout / (Vin − Vout)    (3.2)
A common resistor value is 680Ω, so R1 is set to this value and
R2 is calculated with equation 3.2 to 1320Ω. As no resistors of value 1320Ω
were available a resistor R2 = 1200Ω was chosen. This gives Vout = 3.19V
which is still close to 3.3V and is inside the range accepted by the Raspberry
Pi 3B+ GPIO pins. The Raspberry Pi 3B+ accepts any voltages above Vout =
1.6V as a high logic level and therefore the voltage level of Vout = 3.19V is
sufficient [3].
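As a small numerical check of equations 3.1 and 3.2, the sketch below computes the ideal R2 for a 680Ω R1 and the resulting output voltage with the 1200Ω resistor that was actually used. The helper functions are written only for this illustration.

def ideal_r2(r1, v_in, v_out):
    """Equation 3.2: R2 needed to divide v_in down to v_out."""
    return r1 * v_out / (v_in - v_out)

def divider_output(v_in, r1, r2):
    """Equation 3.1: output voltage of the divider."""
    return v_in * r2 / (r1 + r2)

print(ideal_r2(680, 5.0, 3.3))         # about 1320 ohm
print(divider_output(5.0, 680, 1200))  # about 3.19 V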

3.4 Calculating distance with encoders


Two optical encoder discs were used and assembled to rotate along the motor
shaft, see figure 2.3. To read the state changes the sensors need an input voltage
of 5V but 3.3V is also sufficient and is the voltage level used in this project. If
the sensors were to receive 5V another voltage divider would have to be used to
allow the state changes to be read by GPIO pins on the Raspberry Pi. Equation
2.2 in section 2.3 was then used to get the distance each wheel has driven. A
code demonstrating how to receive the distance from the optical encoder can
be found in section A.5.1.

3.5 RPLIDAR A1
The RPLIDAR A1 is connected to the Raspberry Pi via the communication
cable, which is then inserted into the Universal Serial Bus (USB) adapter, see
figure 3.2 [7]. A micro-USB to USB-A cable was then inserted into the USB-A
port of the Raspberry Pi 3B+. The USB adapter converts the internal UART
serial interface to a USB interface via a CP2102 chip [7][8]. To connect the
RPLIDAR to a PC a CP2102 driver needs to be installed; this driver allows
a CP2102-based device, such as the RPLIDAR, to appear on the PC as an
additional COM port [8]. When connecting to a Raspberry Pi, there is no
need to download an additional driver since the RPLIDAR will automatically
appear in the device directory with a name such as ttyUSB0 or ttyUSB1,
depending on the number of USB devices already connected to the Raspberry
Pi. By knowing the path to the RPLIDAR a python script is written, see A.5.2,
to collect the LiDAR data points using the existing library RPLiDAR from
Roboticia [9] [10] and also the python libraries numpy and matplotlib.
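A minimal sketch of how LiDAR points can be read with the RPLidar library mentioned above is shown below. It assumes the device appears as /dev/ttyUSB0, and it is not the full collection code used in the project; see A.5.2 and A.6 for that.

from rplidar import RPLidar

lidar = RPLidar('/dev/ttyUSB0')   # device path assumed; may be ttyUSB1 etc.

try:
    # iter_scans() yields complete sweeps as lists of
    # (quality, angle in degrees, distance in mm) tuples.
    for i, scan in enumerate(lidar.iter_scans()):
        print('scan %d: %d points' % (i, len(scan)))
        if i >= 4:   # stop after five sweeps
            break
finally:
    lidar.stop()
    lidar.stop_motor()
    lidar.disconnect()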

Figure 3.2: Picture of components in RPLiDAR A1M8 kit taken from [7]

3.6 Designing the robot


This section details the design and manufacturing of our mobile robot,
focusing on the combination of 3D printed mounts and laser cut chassis parts.
The inspiration behind the design came from robot vacuum cleaners such as
the Roomba® with a round base, which allows for symmetry of the robot and
space for all components needed. This design would incorporate a symmetric
circular base with two DC motor powered wheels along with a ball transfer
unit to create stability and allow for turning along its center of mass.

3.6.1 Design of structural components


For the structural components of the robot, two primary manufacturing
methods were employed: laser cutting and 3D printing. Since the chassis had
to be relatively big to fit all of the needed components, with a diameter of 30
cm, it was determined that laser cutting the chassis parts would be ideal.
The smaller, more intricate mounts required a method that allows for more
complex geometries and customization, making 3D printing a viable choice.

3.6.2 3D printed parts


The process of creating the 3D printed parts began with measuring the required
dimensions, followed by creating a 3D model of the part in Solid Edge.
The model was then exported as an .stl file and prepared for printing using
Ultimaker Cura software.

Ball transfer unit: Designed to create stability and allow for turning along the
robot's center of gravity, this unit was modeled and printed to house a standard
15 - 17 mm diameter glass marble, see figure 3.3.

Figure 3.3: Exploded render of the ball transfer unit with marble in Solid Edge.

DC Motor Mounts: The DC motor mounts, see 3.4, were needed to fasten the
DC motors to the chassis. These parts were precisely modeled
in Solid Edge, converted to .stl files, and prepared for printing using Ultimaker
Cura software. This process ensures that each mount fits its component
perfectly, as confirmed by datasheet specifications and direct measurements
with calipers. After printing, holes were drilled in the front and back of the
mounts to better fit the DC motors.

Figure 3.4: Render of DC Motor mount in Solid Edge.

Raspberry Pi and LiDAR Mount: The mounting of the Raspberry Pi 3B+,
see 3.6, was necessitated by spatial constraints on the chassis, where the ball
transfer unit had already been installed beneath, resulting in protruding screw
heads that prevented direct attachment. For the LiDAR, a mount was created,
see 3.5, to facilitate straightforward attachment and detachment, while also
providing vertical elevation on the robot’s roof.

Figure 3.5: Render of LiDAR mount in Solid Edge.
Figure 3.6: Render of LiDAR mount in Solid Edge.

3.6.3 Laser cut parts


The process of creating the laser cut parts began with laying out all the
needed components in a pattern which would minimize surface area, while
also allowing for all the necessary cables, screws, nuts and bolts to be installed.
Then, the parts were modelled in Solid Edge, which were then exported as .dft
files and then converted into .dwg files which were subsequently imported into
Adobe Illustrator where they were converted into laser cutting instructions for
an Epilog Laser Fusion M2 (35W). The Chassis is composed of three main
parts, the bottom, four middle struts for elevation, and the top. These were all
cut from 3 mm thick wood composite and then assembled by hand to create
the chassis, see 3.7.

Figure 3.7: Exploded render of the entire chassis in Solid Edge.

3.7 Wall-following algorithm


The purpose of the wall-following algorithm is, as it sounds, to allow a robot
to follow a wall in a room. The wall-following algorithm uses two distance
sensors, see section 2.2, one at the front of the robot and one to the right. The
robot has three possible states - drive forward, turn right, turn left - depending
on if the sensors detect a wall or not according to the flowchart in figure 3.8.
For the wall-following algorithm to work the robot needs to be placed by a
wall.

Figure 3.8: Flowchart of wall-following algorithm made in draw.io
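To make the flowchart in figure 3.8 concrete, the sketch below shows one way the three states could be chosen from the two ultrasonic readings. The threshold values and the function name are illustrative assumptions and not the exact logic of the program in A.7.

FRONT_LIMIT_CM = 15   # assumed minimum clearance straight ahead
RIGHT_MIN_CM = 5      # assumed band for keeping the wall on the right
RIGHT_MAX_CM = 20

def choose_state(front_cm, right_cm):
    """Return 'turn_left', 'turn_right' or 'forward' from two distances."""
    if front_cm < FRONT_LIMIT_CM:
        return 'turn_left'    # wall ahead: turn away from it
    if right_cm > RIGHT_MAX_CM:
        return 'turn_right'   # lost the wall: steer back towards it
    if right_cm < RIGHT_MIN_CM:
        return 'turn_left'    # too close to the wall: steer away
    return 'forward'

print(choose_state(50, 12))   # -> 'forward'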

3.8 Manual steering of the robot


In order to manually steer the robot with the arrow keys via a computer
connected with Secure Shell (SSH), the library Curses was used for its
keyboard-handling. When an arrow key is toggled, the robot's motors will be
activated to drive the robot in the direction of the arrow key. Into this program,
the code to calculate the distance traveled with the optical encoders, see 3.4, as
well as the code to measure the current angle, see 2.6, was added, together with
code to send the current time and date, LiDAR data, current calculated
coordinates and angle to the Google Spreadsheet that was used to temporarily
store the data.
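A minimal sketch of the Curses-based keyboard handling is shown below. It only prints the chosen direction instead of activating the motors, and it is not the full program found in A.6.

import curses

def main(stdscr):
    stdscr.nodelay(False)   # wait for a key press
    stdscr.keypad(True)     # interpret arrow keys
    stdscr.addstr(0, 0, "Arrow keys to steer, q to quit")
    while True:
        key = stdscr.getch()
        if key == curses.KEY_UP:
            stdscr.addstr(1, 0, "forward ")
        elif key == curses.KEY_DOWN:
            stdscr.addstr(1, 0, "reverse ")
        elif key == curses.KEY_LEFT:
            stdscr.addstr(1, 0, "left    ")
        elif key == curses.KEY_RIGHT:
            stdscr.addstr(1, 0, "right   ")
        elif key == ord('q'):
            break

curses.wrapper(main)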

3.9 Uploading RPLIDAR data, coordinates and angles to a Google Spreadsheet
To upload data from the Raspberry Pi to Google Spreadsheets the gspread
and oauth2client libraries need to be installed. To allow the Raspberry
Pi to edit a Spreadsheet the necessary Application Programming Interfaces
(APIs), in this case Google Drive API and Google Sheets API, need to be
enabled which is done in Google Workspace. After that credentials to allow
the Raspberry Pi to access the spreadsheet need to be created [11]. A code
using the two different navigation methods to drive the robot and collect data
to send to the Google Spreadsheet is then written, see A.6 and A.7, based on
example code [12].
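The sketch below illustrates the kind of gspread and oauth2client calls that are involved; the spreadsheet name, credentials file and row layout are placeholders for this example and not those used in the project code in A.6 and A.7.

import gspread
from oauth2client.service_account import ServiceAccountCredentials

SCOPE = ['https://spreadsheets.google.com/feeds',
         'https://www.googleapis.com/auth/drive']

creds = ServiceAccountCredentials.from_json_keyfile_name('credentials.json', SCOPE)
client = gspread.authorize(creds)
sheet = client.open('lidar-data').sheet1   # spreadsheet name is a placeholder

# Append one row: timestamp, x, y, angle and the raw LiDAR data as a string.
sheet.append_row(['2024-05-01 12:00:00', 0.0, 0.0, 90, '[(15.0, 270.1, 812.0)]'])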

3.10 Downloading and visualizing the RPLIDAR data, coordinates and angles from a Google Spreadsheet
The data that was uploaded to a Google Spreadsheet was then retrieved using
the code in A.9. It authenticates and connects to the Google Spreadsheets
through the Google Sheets API and then retrieves the data that was uploaded
earlier by the robot. After that, the data is parsed from strings into lists of
dictionaries for the LiDAR data and the current coordinates of the robot, and a
list of integers for the current angle that the IMU has measured for each point.
For the LiDAR data, the data is in polar coordinates which is converted into
Cartesian coordinates for each point. Along with the positional values of the
robot, the environment can be plotted in a scatter plot, keeping in mind to normalize
the coordinates of each scan.
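The core of the visualization is the conversion of each LiDAR point from polar to Cartesian coordinates, offset by the robot's position at the time of the scan. The sketch below shows that conversion in isolation, assuming distances in millimeters and angles in degrees; it is not the full plotting program in A.8.

import math
import matplotlib.pyplot as plt

def to_cartesian(points, robot_x, robot_y, robot_angle_deg):
    """Convert (angle_deg, distance_mm) pairs to x/y offset by the robot pose."""
    xs, ys = [], []
    for angle_deg, distance_mm in points:
        a = math.radians(angle_deg + robot_angle_deg)
        xs.append(robot_x + distance_mm * math.cos(a))
        ys.append(robot_y + distance_mm * math.sin(a))
    return xs, ys

# Example: two points measured from a robot standing at (100, 200) mm.
xs, ys = to_cartesian([(0, 500), (90, 1000)], 100, 200, 0)
plt.scatter(xs, ys)
plt.show()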

3.11 Measurements of a miniature room


In order to test that all the hardware and software of the robot had been
integrated properly, a number of tests using the two navigation methods were
carried out. The tests were carried out by building a miniature room by placing
boxes in a rectangular shape roughly 1.72x2.5 meters, see figure 3.9 for the
floorplan. For each navigation method a total of three tests were carried out:
the first using three scans, the second using four scans and the last with a
total of seven scans of the room.
First, a series of manual tests were carried out. For the results of these tests,
see chapter 4. For unknown reasons, the LiDAR would only scan a range of
about 90 − 100◦ , see Table A.1 and Table A.2. Therefore, to theoretically scan
the entire miniature room, four scans would have to be combined.
The manual algorithm was tested by picking scanning positions and
controlling the robot to those positions. The manual algorithm was tested in
the following manner: For the first test with three scans, two measurements
were made in the front and back and then one test near the middle of the room.
For the tests that had four or more scans, the miniature room was traversed
corner to corner in a counter-clockwise manner, making sure to capture a scan
in each corner.
For the autonomous tests, the robot was placed in the lower right corner,
see figure 3.9 to see the room layout. The time interval between each scan was
then varied to perform different numbers of scans and therefore test how scan
frequency affects the plot. The captured LiDAR data points were then, along
with the collected optical wheel encoder- and IMU-data, used to plot a visual
representation of the scanned environment for all the tests. See 3.10 for an
example of what these scans look like.

Figure 3.9: Floorplan created in Affinity Photo 2
Figure 3.10: Example of a combination of five measurements of raw LiDAR data points in a scatter plot.

3.12 Calculating the coverage ratio of the scans
To assess the quality of the LiDAR scans, a method to calculate the coverage
ratio was developed. The coverage ratio is defined as the proportion of the
bounding box perimeter that is effectively covered by the LiDAR data points.
The bounding box perimeter is a representation of the actual environment that
the robot was traversing, here based on the floorplan of the miniature room in
3.9. The bounding box perimeter is then divided into perimeter cells that can
be altered in size; for this project a cell size of 100x100 mm was chosen to be
sufficient. The coverage ratio was then calculated by dividing the number of
perimeter cells containing one or more LiDAR points by the total number of
perimeter cells. By evaluating the coverage ratio, we can quantify how well
the LiDAR scans cover the defined area. A script in Python was written to
automate this calculation for multiple test runs comprising multiple scans,
but some manual adjustments were needed to let the program know which
lines to download from Google Sheets and to add or remove misplaced cells.
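The calculation itself reduces to counting which perimeter cells contain at least one LiDAR point. The sketch below shows a simplified version for an axis-aligned rectangular room; the cell size follows the value stated above, while the room dimensions, the tolerance and the helper names are assumptions made for this example. The full script is the one in A.9.

CELL_MM = 100
ROOM_W_MM, ROOM_H_MM = 2500, 1720   # approximate miniature room from section 3.11

def covered_perimeter_cells(points, tol_mm=50):
    """Set of (col, row) perimeter cells that contain at least one LiDAR point."""
    cells = set()
    for x, y in points:
        near_edge = (abs(x) < tol_mm or abs(x - ROOM_W_MM) < tol_mm or
                     abs(y) < tol_mm or abs(y - ROOM_H_MM) < tol_mm)
        if near_edge:
            cells.add((int(x // CELL_MM), int(y // CELL_MM)))
    return cells

def coverage_ratio(points, total_perimeter_cells=84):
    return len(covered_perimeter_cells(points)) / total_perimeter_cells

# Example with three points, two of which lie on the room perimeter.
print(coverage_ratio([(10, 500), (1200, 1700), (1300, 900)]))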

Chapter 4

Results and Analysis


In this section results consisting of images and plots will be showcased for
the autonomous wall-following algorithm and the manual algorithm using
keyboard input.

4.1 Manual measurements


This section will show the results of the manual measurements for the three-,
four- and seven-measurement scans. See section A.2 for even more results.
The scanning positions of the robot were picked, and the robot was driven to
them remotely by the operator. The legend in the plots shows the coordinates
for each scan.

Figure 4.1: Figure showing the position of the robot in the miniature room when doing a three-scan measurement.
Figure 4.2: The acquired data points from the three-scan measurement scattered in a plot.

Figure 4.3: Figure showing the position of the robot in the miniature room when doing a four-scan measurement.
Figure 4.4: The acquired data points from the four-scan measurement scattered in a plot.

Figure 4.5: Figure showing the position of the robot in the miniature room when doing a seven-scan measurement.
Figure 4.6: The acquired data points from the seven-scan measurement scattered in a plot.

4.2 Autonomous measurements


This section will showcase the results from the measurements that were
completely autonomous, in which the robot was using the wall-following
algorithm to navigate the miniature room while doing a scan with the LiDAR.
Where the robot takes measurements depends on a time factor which means
some parts may be skipped if the robot moves through that part quickly. Scan
frequency was tested with three-, four- and seven-measurement scans. See
section A.2 for additional results.

Figure 4.7: The acquired data points using the wall-following algorithm from
the three-scan measurement scattered in a plot.

Figure 4.8: The acquired data points using the wall-following algorithm from
the four-scan measurement scattered in a plot.

Figure 4.9: The acquired data points using the wall-following algorithm from
the seven-scan measurement scattered in a plot.

4.3 Time taken for scans


Below are tables showing the times taken in seconds for each full run, the
number of scans and the seconds taken per scan for each of the manual and
autonomous test runs. For these tests the number of scans made varied from
one up to 13 scans in the miniature room.

Table 4.1: Time taken for whole run, number of scans for that run and time
per scan for manual tests 1-12, including average time per scan.

Run Time [s] Scans [-] Time per scan [s]


1 0 1 N/A
2 9 2 4.5
3 26 3 8.7
4 33 4 8.3
5 61 5 12.2
6 183 7 26.1
7 150 13 11.5
8 61 6 10.2
9 69 7 9.9
10 73 9 8.1
11 80 9 8.9
12 71 9 7.9
Average 68 12.5 10.6

Table 4.2: Time taken for whole run, number of scans for that run and time
per scan for autonomous tests 1-12, including average time per scan.

Run Time [s] Scans [-] Time per scan [s]


1 54 5 10.8
2 31 4 7.8
3 147 13 11.3
4 235 11 21.4
5 103 10 10.3
6 81 7 11.6
7 83 9 9.2
8 124 11 11.3
9 50 4 12.5
10 30 3 10
11 35 4 8.8
12 19 3 6.3
Average 82.7 7 10.9

4.4 Complex environment scan


This section shows a run of the manual program consisting of nine scans of a
more complex environment than a rectangular room.

Figure 4.10: Floorplan of more complex environment
Figure 4.11: The measured data points from a run in a more complex environment.

4.5 Coverage ratio


This section will show the results of the coverage ratio for six of the manual
and six of the autonomous test runs based on the miniature room’s floorplan.
For these tests a square cell with side lengths 100 mm was used, which means
that the total number of cells was 84 for all tests. The coverage ratio was then
calculated by dividing the number of covered cells by the number of total cells.

Figure 4.12: Plot showing the room’s outline (red rectangle), LiDAR data
points from three manual measurements and the perimeter cells that have data
points inside (blue squares).

Figure 4.13: Plot showing the room’s outline (red rectangle), LiDAR data
points from four manual measurements and the perimeter cells that have data
points inside (blue squares).

Figure 4.14: Plot showing the room’s outline (red rectangle), LiDAR data
points from seven manual measurements and the perimeter cells that have data
points inside (blue squares).

Figure 4.15: Plot showing the room’s outline (red rectangle), LiDAR data
points from three autonomous measurements and the perimeter cells that have
data points inside (blue squares).

Figure 4.16: Plot showing the room’s outline (red rectangle), LiDAR data
points from four autonomous measurements and the perimeter cells that have
data points inside (blue squares).

Figure 4.17: Plot showing the room’s outline (red rectangle), LiDAR data
points from seven autonomous measurements and the perimeter cells that have
data points inside (blue squares).

Figure 4.18: Plot showing the room’s outline (red rectangle), LiDAR data
points from seven autonomous measurements and the perimeter cells that have
data points inside (blue squares).

Table 4.3: Number of scans, coverage ratio, number of covered cells and
coverage ratio per covered cells for six manual and six autonomous runs with
different numbers of measurements.

Run Number of Scans Coverage ratio Covered cells Coverage ratio per covered cell
Manual runs:
1 1 0.27 23 0.0117
2 2 0.26 22 0.0123
3 3 0.25 21 0.0119
4 4 0.15 13 0.0115
5 5 0.17 14 0.0121
6 7 0.38 32 0.0119
Average 3.67 0.25 20.8 0.0119
Autonomous runs:
1 3 0.10 8 0.0125
2 2 0.10 8 0.0125
3 3 0.10 8 0.0125
4 4 0.17 14 0.0121
5 7 0.25 21 0.0119
6 9 0.26 22 0.0118
Average 4.67 0.16 13.5 0.0122

Chapter 5

Discussion and Conclusions


In this section the two navigation methods along with the results in section 4
will be discussed and the research questions in section 1.2 answered.

5.1 Wall-following algorithm


A wall-following algorithm was used in this project as we believed
autonomous path finding to be advantageous in applications where the
layout of the room is unknown, such as in search and rescue operations,
military contexts, disaster response scenarios, and exploration of hazardous
environments. In this case autonomous driving is preferable as driving it
manually increases the risk of getting the robot stuck when it isn’t possible
to see where the robot is going. A solution to this is to add a camera to the
robot to allow for manual steering.
A problem that arose with the wall-following algorithm is the fact that the
robot needs to take many small turns to adjust its trajectory which causes issues
with the encoders that calculate the distance traveled. The turning mechanism
is designed to turn with both wheels, i.e., one wheel turning clockwise and the
other wheel turning counter-clockwise as to stay in the same position while
turning. Since the sensors used with the optical encoders only measure the
state changes they do not detect if the wheel starts to rotate in the opposite
direction. This is an issue because a turn that in reality did not move the robot
is read as the robot having traveled forward by the total amount the wheels
turned. To solve this issue the encoders are therefore paused when
the robot is turning. This solution works well if the robot drives generally
straight, but the constant adjusting during the wall-following algorithm causes
small errors, see figure 4.9 for example.

5.2 Manual driving


The manual driving of the robot allows for less adjusting of the trajectory,
which results in a smaller deviation from the true position of the robot when
plotting the acquired data. This is mostly because of the problems that arose
with the way the wall-following algorithm calculated distance traveled when
turning, as discussed in section 5.1. Manual driving and scanning were
shown to be an easier way to capture an environment when it and the robot
were observable, but in environments with limited vision or limited WiFi
communication better alternatives are needed.

5.3 Answering the research questions


This section delves into the core research questions of the study, exploring how
different technological and methodological approaches impact the precision
and efficiency of LiDAR-based room scanning.

5.3.1 How does the LiDAR sampling frequency affect the quality of the generated room scan?
When comparing the three-scan measurements, see figures 4.2 and 4.7, with
the seven-scan measurements, see figures 4.6 and 4.9, it is clear that a lower
sampling frequency results in a less accurate floor blueprint since information
is excluded from the plot with the lower sampling frequency. This is expected,
of course, but the higher accuracy comes at the cost of more complex
maneuvering of the robot, more scans and a longer run time.
However, the plots using even more scans for the
wall-following algorithm, see figure A.9, show that too many scans may
give a worse result than one with a lower sampling frequency, as the risk of
error is greater the more scans that are taken. The manual algorithm does not
encounter this problem, see figure 4.11.

5.3.2 How is the quality of the scan and time taken to scan affected by using a simple wall-following algorithm vs controlling the robot via keyboard input?
When comparing the results from the manual algorithm, see figures 4.2, 4.4
and 4.6, with the figures from the autonomous results, see figures 4.7, 4.8
and 4.9 it shows the manual steering results in the highest quality, the result
most similar to the floorplan in figure 3.9. This is most likely due to the
error discussed in section 5.1. The total average time taken for autonomous
runs, see table 4.2, was higher than for manual driving, see table 4.1, with
autonomous runs averaging 82.7 seconds compared to 68 seconds for manual
runs. However, the average number of scans was greater in manual runs,
at 12.5, compared to 7 for autonomous runs. Despite these differences, the
average time per scan was quite similar between the two, with manual runs
averaging 10.6 seconds and autonomous runs averaging 10.9 seconds.

5.3.3 Interpreting the coverage rate results


The coverage ratio was higher on average for the manual test runs, even though
they had fewer scans on average. Meanwhile, we can see that the coverage ratio
per covered cell was higher for the autonomous runs. This might have to do
with the fact that there is a finite number of cells in the perimeter, 84 in fact,
and that each time a scan is made there is a lower chance of hitting a unique
cell along the perimeter. So the coverage ratio per covered cell is expected to
be higher for the set of test runs with the larger average number of scans. We
can also see the coverage ratio increasing with the number of scans, which is
expected. The problem with the coverage ratio is that it doesn’t tell us anything
about the superfluous data points that are being collected. By increasing the
number of measurements that are made we increase the coverage ratio, but at
the same time we increase unnecessary collected data.

Chapter 6

Future work
As this is a basic prototype created under limited time, one area that could
be further developed is the localization algorithm. A very simple algorithm
utilizing encoders and angles was used, which is prone to inaccuracies.
Other methods utilizing not only position and angle but their derivatives could
possibly give a more accurate position. There is also the possibility of using
a beacon and tracking the position of the robot relative to this beacon.
The LiDAR that was used only scanned/sent data points within the range
of (250◦, 350◦). This is most likely due to how the RPLIDAR library was used
and adjusting it so that the LiDAR could use the full 360◦ range would have
led to higher quality scans and fewer needed scans to capture its environment.
Autonomous adaptive sampling that lets the robot adjust its LiDAR
scanning frequency based on the complexity of the environment could
optimise both data collection quality and process efficiency.
Filtering unnecessary data can lead to clearer plots and reduced
computation times. A k-means clustering algorithm was tested and effectively
filtered out redundant data points while preserving the overall shape of the
measured environments. This approach was particularly useful given the
high volume of data collected during both manual and autonomous test
runs, which resulted in cluttered visual outputs. Typically, plotting becomes
computationally intensive with an increase in data points, with our laptop
showing noticeable slowdowns when handling around 5000 points in a single
graph. By employing k-means clustering, not only was the visual clarity of
the room’s representation enhanced, but the computation time required for
displaying graphs and executing a convex hull algorithm, an algorithm that
was used to outline the scatter’s perimeter, was significantly reduced. This
method simplifies the plot’s appearance by effectively managing large, dense
clusters of data points. But as it wasn’t deemed necessary, the filtering method
and the outlining of the perimeter of the scatters were omitted.
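As a sketch of the kind of k-means filtering described above, the snippet below reduces a large point cloud to its cluster centers with scikit-learn. The number of clusters is an arbitrary example value, and this is not the exact code that was tested in the project.

import numpy as np
from sklearn.cluster import KMeans

points = np.random.rand(5000, 2) * 1000   # stand-in for raw LiDAR points in mm

kmeans = KMeans(n_clusters=200, n_init=10).fit(points)
reduced = kmeans.cluster_centers_          # 200 representative points

print(points.shape, '->', reduced.shape)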

An interesting analysis to do regarding the coverage ratio would have been
to measure how much of all the collected data was unnecessary. This could be
done by using the same approach as was used when calculating the coverage
ratio: by quantizing the red rectangle representing the miniature room in 4.15
into a discrete number of cells, as was done with the perimeter of the room,
and then dividing the number of covered cells along the perimeter of the room
by the total number of cells composing the room. This was not done, though,
because of time constraints.

References

[1] H. Johansson, Elektroteknik. Sweden: KTH, 2013. [Pages 3, 6, and 9.]

[2] L. Shanghai Slamtec Co., RPLIDAR A1M8 Datasheet v3.0. Shanghai, China: Slamtec, 2020. [Online]. Available: https://www.slamtec.ai/wp-content/uploads/2023/11/LD108_SLAMTEC_rplidar_datasheet_A1M8_v3.0_en.pdf [Pages 3 and 6.]

[3] “Raspberry pi hardware,” 2024. [Online]. Available: https://www.raspberrypi.com/documentation/computers/raspberry-pi.html [Pages 3 and 9.]

[4] M. Mesganaw and I. Lara, “Differences between optical and magnetic incremental encoders,” 2022, accessed: 15/04/24. [Online]. Available: https://www.ti.com/lit/ab/slya061/slya061.pdf?ts=1713170990906 [Page 5.]

[5] I. Inc., “Mpu-6000 and mpu-6050 product specification,” 2013, accessed: 2023-04-26. [Online]. Available: https://components101.com/sites/default/files/component_datasheet/MPU6050-DataSheet.pdf [Page 7.]

[6] R. P. Ltd, “Raspberry pi 3 model b+,” 2023, accessed: 13/04/24. [Online]. Available: https://datasheets.raspberrypi.com/rpi3/raspberry-pi-3-b-plus-product-brief.pdf [Page 8.]

[7] L. Shanghai Slamtec Co., RPLIDAR A1M8 User Manual. Shanghai, China: Slamtec, 2020. [Online]. Available: https://bucket-download.slamtec.com/af084741a46129dfcf2b516110be558561d55767/LM108_SLAMTEC_rplidarkit_usermanual_A1M8_v2.2_en.pdf [Pages viii and 10.]

[8] S. Laboratories, “Cp2102 datasheet,” 2004. [Online]. Available: https://www.sparkfun.com/datasheets/IC/cp2102.pdf [Page 10.]

[9] Roboticia, “Rplidar,” accessed: 21/03/24. [Online]. Available: https://github.com/Roboticia/RPLidar [Page 10.]

[10] A. Pavlov, “Welcome to rplidar’s documentation,” 2016, accessed: 21/03/24. [Online]. Available: https://rplidar.readthedocs.io/en/latest/ [Page 10.]

[11] G. LLC, “Develop on google workspace,” 2024, accessed: 13/04/24. [Online]. Available: https://developers.google.com/workspace/guides/get-started [Page 15.]

[12] T. DiCola, “googlespreadsheet.py,” 2014, accessed: 22/03/24. [Online]. Available: https://github.com/adafruit/Adafruit_Python_DHT/blob/master/examples/google_spreadsheet.py [Page 15.]

Appendix A

Supporting materials

A.1 Ranges for the scans


r1 through r7 refer to the ranges of the different scans in each test. Test 1
consists of only one scan with a range of (253, 351) degrees, while test 2 consists
of a total of two scans, where the first scan had a range of (262, 351) degrees and
the second scan a range of (262, 349) degrees.

Table A.1: Ranges for manual tests 1-3.

Test 1 Test 2 Test 3


r1 : (253, 351)◦ r1 : (262, 351)◦ r1 : (270, 357)◦
r2 : (262, 349)◦ r2 : (264, 351)◦
r3 : (239, 351)◦

Table A.2: Ranges for manual tests 4-6.

Test 4 Test 5 Test 6


r1 : (241, 353)◦ r1 : (229, 353)◦ r1 : (245, 354)◦
r2 : (237, 342)◦ r2 : (248, 351)◦ r2 : (253, 351)◦
r3 : (225, 349)◦ r3 : (240, 350)◦ r3 : (243, 350)◦
r4 : (219, 350)◦ r4 : (245, 343)◦ r4 : (238, 350)◦
r5 : (241, 351)◦ r5 : (240, 333)◦
r6 : (236, 326)◦
r7 : (237, 351)◦

A.2 Additional results


This section provides more results from scans taken either by the manual or
the wall-following algorithm. For manual tests the position may be included, while
for autonomous tests the exact position could not be calculated nor predicted due to
the nature of the code written, see A.7.

A.2.1 Additional manual results

Figure A.1: The position of the robot in the miniature room when doing a one-scan measurement.

Figure A.2: The acquired data points from the one-scan measurement scattered in a plot.

Figure A.3: The position of the robot in the miniature room when doing a two-scan measurement.

Figure A.4: The acquired data points from the two-scan measurement scattered in a plot.

Figure A.5: The position of the robot in the miniature room when doing a five-scan measurement.

Figure A.6: The acquired data points from the five-scan measurement scattered in a plot.

A.2.2 Additional autonomous results

Figure A.7: The acquired data points using the wall-following algorithm from a two-scan measurement scattered in a plot.

Figure A.8: The acquired data points using the wall-following algorithm from a five-scan measurement scattered in a plot.

Figure A.9: The acquired data points using the wall-following algorithm from a ten-scan measurement scattered in a plot.

A.3 Supporting Materials


A.3.1 Circuit and schematic of robot

Figure A.10: Schematic of the robot, created in Fritzing.



Figure A.11: Circuit of the robot, created in Fritzing.

A.4 Models
A.4.1 Laser cut parts

Figure A.12: Solid Edge Draft Document of the bottom part of the chassis.

Figure A.13: Solid Edge Draft Document of one of the struts for the chassis.

Figure A.14: Solid Edge Draft Document of the top part of the chassis.

A.5 Python Code


A.5.1 Get rotations from optical encoder
""" Program to test an encoder and print the rotations made.

The program uses a Raspberry Pi, a DC motor connected to the Raspberry Pi
and a power source, an optical encoder disc, an IR emitter and an optical
sensor. The sensor VCC pin is connected to 3.3 V, GND to GND and the
signal pin is connected to GPIO 26 on the Raspberry Pi.
"""
import time

import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)

# Defines signal pin


enPin = 26
GPIO.setup(enPin, GPIO.IN)

# ---- Variables ----


statesPerRotation = 40
rotationCount = 0
stateCount = 0
stateCountTot = 0

# ---- MOTORS ---


rev_pin_l = 7
fwd_pin_l = 8
ena_pin_l = 1

# Percentage of speed. 100 = max speed, 50 = half max etc.


speed_l = 25

def start_motors():
# Left motor
GPIO.setup(rev_pin_l, GPIO.OUT)
GPIO.setup(fwd_pin_l, GPIO.OUT)
GPIO.setup(ena_pin_l, GPIO.OUT)
pwm_l = GPIO.PWM(ena_pin_l, 100)

pwm_l.start(0)
    print("Motors work..")
return pwm_l

def stop_motors():
GPIO.output(fwd_pin_l, GPIO.LOW)
GPIO.output(rev_pin_l, GPIO.LOW)
time.sleep(2)

def drive_fwd(speed_l, pwm_l):
    try:
        # To drive forward we set the fwd pins to high
        GPIO.output(fwd_pin_l, GPIO.HIGH)
        GPIO.output(rev_pin_l, GPIO.LOW)
        pwm_l.ChangeDutyCycle(speed_l)
    except KeyboardInterrupt:
        stop_motors()
        GPIO.cleanup()

def encoder_states(stateCount, stateCountTot, rotationCount,
                   statesPerRotation):
    """ Algorithm for calculating the rotations an encoder has made.

    Reads the signal pin (enPin) and notes all state changes.
    If statesPerRotation (40) state changes have been made, then one
    rotation is completed.
    """
try:
stateLast = GPIO.input(enPin)
while True:
stateCurrent = GPIO.input(enPin)
if stateCurrent != stateLast:
stateLast = stateCurrent
stateCount += 1
stateCountTot += 1
if stateCount == statesPerRotation:
rotationCount += 1
stateCount = 0
print('rotations: ' + str(rotationCount))
except KeyboardInterrupt: #ctrl c
GPIO.cleanup()

if __name__ == '__main__':
    try:
        pwm_l = start_motors()
        drive_fwd(speed_l, pwm_l)
        encoder_states(stateCount, stateCountTot, rotationCount,
                       statesPerRotation)
    except KeyboardInterrupt:
        stop_motors()
        GPIO.cleanup()
        print('shutting down...')
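The conversion from encoder state changes to travelled distance that the later programs rely on can be sanity-checked by hand. Below is a minimal sketch (not part of the project code; the helper name states_to_distance_mm is ours), assuming the 40 states per rotation defined above and the 66 mm wheel diameter used in Appendix A.6:

from math import pi

STATES_PER_ROTATION = 40   # from the program above
WHEEL_DIAMETER_MM = 66     # wheel diameter used in Appendix A.6

def states_to_distance_mm(state_changes):
    """Convert encoder state changes to travelled wheel distance in mm."""
    rotations = state_changes / STATES_PER_ROTATION
    return rotations * pi * WHEEL_DIAMETER_MM

# One full rotation (40 state changes) corresponds to roughly 207 mm,
# so each state change resolves about 5.2 mm of wheel travel.
print(states_to_distance_mm(40))   # ~207.3
print(states_to_distance_mm(1))    # ~5.2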

A.5.2 Uploading data to Google Spreadsheet



# Copyright (c) 2014 Adafruit Industries


# Author: Tony DiCola

# Permission is hereby granted, free of charge, to any person


# obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the
# Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute
# , sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so,
# subject to the following conditions:

# The above copyright notice and this permission notice shall


# be included in all copies or substantial portions of the
# Software.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY


# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import sys
import time
import datetime
from rplidar import RPLidar
import gspread
from oauth2client.service_account import ServiceAccountCredentials

GDOCS_OAUTH_JSON = 'google-auth.json'

# Google Docs spreadsheet name.


GDOCS_SPREADSHEET_NAME = 'SensorData'

# How long to wait (in seconds) between measurements.


FREQUENCY_SECONDS = 0

# This function is a simple example that prints out scan data


def scan_and_send(lidar):
    print("Starting to gather data...")
    try:
        for scan in lidar.iter_scans(max_buf_meas=500):
            data = [(angles, distances) for quality, distances, angles in scan]
            data = str(data)
            return data
    except KeyboardInterrupt:
        print("Stopping the scan...")
    except Exception as e:
        print(e)
    finally:
        lidar.stop()
        lidar.stop_motor()
        lidar.disconnect()

def login_open_sheet(oauth_key_file, spreadsheet):
    """Connect to Google Docs spreadsheet and return the first worksheet."""
    try:
        scope = ['https://spreadsheets.google.com/feeds',
                 'https://www.googleapis.com/auth/drive']
        credentials = ServiceAccountCredentials.from_json_keyfile_name(
            oauth_key_file, scope)
        gc = gspread.authorize(credentials)
        worksheet = gc.open(spreadsheet).sheet1
        return worksheet
    except Exception as ex:
        print('Unable to login and get spreadsheet. Check OAuth credentials, '
              'spreadsheet name, and make sure spreadsheet is shared to the '
              'client_email address in the OAuth .json file!')
        print('Google sheet login failed with error:', ex)
        sys.exit(1)

def send_data(data, worksheet):
    try:
        # Login if necessary.
        if worksheet is None:
            worksheet = login_open_sheet(GDOCS_OAUTH_JSON,
                                         GDOCS_SPREADSHEET_NAME)

        # Skip to the next reading if a valid measurement
        # couldn't be taken. This might happen if the CPU
        # is under a lot of load and the sensor can't be
        # reliably read (timing is critical to read the
        # sensor).
        if data is None:
            time.sleep(2)

        # Append the data in the spreadsheet, including a
        # timestamp
        try:
            worksheet.append_row((datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
                                  data))
            # worksheet.append_row(data)
        except:
            # Error appending data, most likely because
            # credentials are stale. Null out the worksheet so
            # a login is performed at the top of the loop.
            print('Append error, logging in again')
            worksheet = None
        time.sleep(FREQUENCY_SECONDS)
    finally:
        # Wait before continuing
        print('Wrote a row to {0}'.format(GDOCS_SPREADSHEET_NAME))
        time.sleep(FREQUENCY_SECONDS)
        return worksheet

if __name__ == '__main__':
    worksheet = None
    while True:
        # This may vary depending on where your RPLIDAR is connected
        PORT_NAME = '/dev/ttyUSB0'
        lidar = RPLidar(PORT_NAME)
        data = scan_and_send(lidar)
        worksheet = send_data(data, worksheet)
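Each appended row stores the stringified list of tuples returned by scan_and_send. As a minimal sketch of how such a cell can be read back (the cell contents below are hypothetical; the parsing mirrors what the plotting script in Appendix A.8 does with ast.literal_eval, which interprets the tuples as (distance, angle)):

import ast

# Hypothetical cell contents, shortened for illustration:
# a stringified list of (distance [mm], angle [deg]) tuples.
cell = "[(1532.0, 12.4), (1535.5, 13.1), (1540.2, 13.8)]"

points = ast.literal_eval(cell)       # back to a list of tuples
distances = [p[0] for p in points]    # millimetres
angles = [p[1] for p in points]       # degrees
print(distances, angles)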

A.6 Full program for manual input


""" Full program for manual steering via keyboard input.

The program uses a Raspberry Pi, two DC motors connected to a
voltage source, two optical encoder discs, two optical sensors,
two ultrasonic sensors and an RPLIDAR A1.

If the program does not work (starts but gets stuck), solve the
problem by pinging google.com:
    ping -c 4 google.com

To exit/stop the program press: ctrl+c
"""
# NOTE: The program works fully when the encoders are not used;
# with the encoders enabled it only partly works.
from multiprocessing import Process, Queue, Event
from math import pi, radians, cos, sin
import RPi.GPIO as GPIO
import time
import sys
import datetime
from rplidar import RPLidar
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import mpu6050
import curses

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)

# ---- ENCODERS ----


# Pin numbers for encoders signal pin
enPinL = 26
GPIO.setup(enPinL, GPIO.IN)
enPinR = 19
GPIO.setup(enPinR, GPIO.IN)

# ---- Variables ----


statesPerRotation = 40
rotationCountL = 0
stateCountL = 0
stateCountTotL = 0
rotationCountR = 0
stateCountR = 0

stateCountTotR = 0

# ---- Pin numbers ----


trig_pin_r = 17
echo_pin_r = 11
trig_pin_f = 27
echo_pin_f = 0
rev_pin_l = 7
fwd_pin_l = 8
ena_pin_l = 1
rev_pin_r = 25
fwd_pin_r = 24
ena_pin_r = 23

# Percentage of speed. 100 = max speed, 50 = half max etc.


# As the right wheel is slower we need to add a factor
speed_l = 65
speed_r = 65*1.25
# To make sure the robot isn't too close to the wall we use
# the following variable
too_close = 10
too_far = 20

# Authorization document
GDOCS_OAUTH_JSON = 'google-auth.json'

# Google Docs spreadsheet name.


GDOCS_SPREADSHEET_NAME = 'SensorData'

# How long to wait (in seconds) between measurements.


FREQUENCY_SECONDS = 0

# Create a new MPU6050 object


sensor = mpu6050.mpu6050(0x68)

# Setup GPIO pins for PWM first


GPIO.setup(ena_pin_l, GPIO.OUT)
GPIO.setup(ena_pin_r, GPIO.OUT)

# Initialize PWM controllers


pwm_l = GPIO.PWM(ena_pin_l, 100)
pwm_r = GPIO.PWM(ena_pin_r, 100)

def start_ultrasonic():
GPIO.setup(trig_pin_r, GPIO.OUT)
GPIO.setup(echo_pin_r, GPIO.IN)
GPIO.setup(trig_pin_f, GPIO.OUT)

GPIO.setup(echo_pin_f, GPIO.IN)
GPIO.output(trig_pin_r, 0)
GPIO.output(trig_pin_f, 0)

def start_motors():
GPIO.setup(rev_pin_l, GPIO.OUT)
GPIO.setup(fwd_pin_l, GPIO.OUT)
GPIO.setup(rev_pin_r, GPIO.OUT)
GPIO.setup(fwd_pin_r, GPIO.OUT)
pwm_l.start(0)
pwm_r.start(0)

def drive_fwd():
print('fwd')
GPIO.output(fwd_pin_l, 1)
GPIO.output(fwd_pin_r, 1)
GPIO.output(rev_pin_l, 0)
GPIO.output(rev_pin_r, 0)
pwm_l.ChangeDutyCycle(speed_l)
pwm_r.ChangeDutyCycle(speed_r)

def drive_reverse():
print('rev')
GPIO.output(fwd_pin_l, 0)
GPIO.output(fwd_pin_r, 0)
GPIO.output(rev_pin_l, 1)
GPIO.output(rev_pin_r, 1)
pwm_l.ChangeDutyCycle(speed_l)
pwm_r.ChangeDutyCycle(speed_r)

def turn_right():
print('right')
GPIO.output(fwd_pin_l, 1)
GPIO.output(fwd_pin_r, 0)
GPIO.output(rev_pin_l, 0)
GPIO.output(rev_pin_r, 0)
pwm_l.ChangeDutyCycle(speed_l-10)
pwm_r.ChangeDutyCycle(speed_r-10)

def turn_left():
print('left')
GPIO.output(fwd_pin_l, 0)
GPIO.output(fwd_pin_r, 1)
GPIO.output(rev_pin_l, 0)
GPIO.output(rev_pin_r, 0)
pwm_l.ChangeDutyCycle(speed_l-10)
pwm_r.ChangeDutyCycle(speed_r-10)

def robot_control(window, xcord, ycord, thetaQ, worksheet):
    """ Function to control the robot via keyboard input.

    To be able to access encoder data continuously, the Python
    multiprocessing library is used. To minimize errors in the
    collection of the encoder data, the encoder processes are
    paused when the robot is turning.
    """
stateQR = Queue()
stateQL = Queue()
pauseEvent = Event()

# Start encoder process


    encoderR = Process(target=encoder_states_right,
                       args=(stateCountR, stateCountTotR, rotationCountR,
                             statesPerRotation, stateQR, pauseEvent))
    encoderL = Process(target=encoder_states_left,
                       args=(stateCountL, stateCountTotL, rotationCountL,
                             statesPerRotation, stateQL, pauseEvent))
encoderR.start()
encoderL.start()

# Start Lidar
PORT_NAME = '/dev/ttyUSB0'
lidar = RPLidar(PORT_NAME)
print('lidar started...')
while True:
key = window.getch()
if key == curses.KEY_UP:
pauseEvent.set()
drive_fwd()
elif key == curses.KEY_DOWN:
pauseEvent.set()
drive_reverse()
elif key == curses.KEY_LEFT:
pauseEvent.clear()
turn_left()
elif key == curses.KEY_RIGHT:
pauseEvent.clear()
turn_right()
        elif key == ord(' '):  # Space == stop key
            stop_motors()
            dataList = []
            for i in range(3):
                PORT_NAME = '/dev/ttyUSB0'
                lidar = RPLidar(PORT_NAME)
                data = scan_and_send(lidar)
                dataList.append(data)
            dataList = str(dataList)
            data = dataList.replace("'", '')
            print('data acquired')
            xcord, ycord, lastTheta = get_coordinates(stateQR, stateQL,
                                                      thetaQ, xcord, ycord)
            print('coordinates acquired')
            worksheet = send_data(data, worksheet, xcord, ycord, lastTheta)
            encoderR.terminate()
            encoderL.terminate()
            encoderR = Process(target=encoder_states_right,
                               args=(stateCountR, stateCountTotR,
                                     rotationCountR, statesPerRotation,
                                     stateQR, pauseEvent))
            encoderL = Process(target=encoder_states_left,
                               args=(stateCountL, stateCountTotL,
                                     rotationCountL, statesPerRotation,
                                     stateQL, pauseEvent))
            encoderR.start()
            encoderL.start()
            start_ultrasonic()
            start_motors()
            return xcord, ycord
        elif key == ord('q'):  # Quit by pressing q
            # Instead press ctrl+c to shut down the whole program
            break

def stop_motors():
GPIO.output(fwd_pin_l, 0)
GPIO.output(rev_pin_l, 0)
GPIO.output(fwd_pin_r, 0)
GPIO.output(rev_pin_r, 0)

pwm_l.ChangeDutyCycle(0)
pwm_r.ChangeDutyCycle(0)
    print("Motors stopped")

def calibrate_gyro(samples=50):
gx_bias = 0.0
for _ in range(samples):
gyro_data = sensor.get_gyro_data()
gx_bias += gyro_data['x']
time.sleep(0.01) # short delay between reads
gx_bias /= samples
return gx_bias

def read_sensor_data(gx_bias):
# Read the gyroscope values
gyroscope_data = sensor.get_gyro_data()
# Correct bias
gyroscope_data['x'] -= gx_bias
return gyroscope_data

def convert_dps_to_degrees(thetaQ, gx_bias):


try:
# Initialize variables to store the cumulative angle
angle_x = 0.0
last_time = time.time()
# Start a while loop to continuously read the data
count = 0
while True:
if count % 10 == 0:
count = 0
angle = (f'{angle_x:.2f}')
thetaQ.put(angle)
gyroscope_data = read_sensor_data(gx_bias)

# Current time and elapsed time since last read


current_time = time.time()
elapsed_time = current_time - last_time
last_time = current_time

            # Gyroscope data is given in dps; integrate to get the angle
            angle_x += gyroscope_data['x'] * elapsed_time

            # Wait briefly before the next read
            count += 1
            time.sleep(0.1)
except KeyboardInterrupt:
GPIO.cleanup()

def encoder_states_right(stateCountR, stateCountTotR, rotationCountR,
                         statesPerRotation, stateQR, pauseEvent):
    """ Algorithm for calculating the rotations an encoder has made.

    Reads the signal pin (enPinR) and notes all state changes.
    If 40 state changes have been made, then one rotation is
    completed.
    """
try:
# Checks first state by reading the signal pin
stateLast = GPIO.input(enPinR)
# Loop to continue to read the states
while True:
stateCurrent = GPIO.input(enPinR)
if stateCurrent != stateLast:
stateLast = stateCurrent
stateCountR += 1
stateCountTotR += 1
stateQR.put(rotationCountR+stateCountR/40)

# Check if pause is set


pauseEvent.wait()

            # If there have been as many state changes as there are
            # states per rotation, then one rotation has been completed
if stateCountR == statesPerRotation:
rotationCountR += 1
stateCountR = 0
except KeyboardInterrupt:
GPIO.cleanup()

def encoder_states_left(stateCountL, stateCountTotL,


rotationCountL,
statesPerRotation, stateQL,
pauseEvent):
try:
stateLast = GPIO.input(enPinL)
while True:

stateCurrent = GPIO.input(enPinL)
if stateCurrent != stateLast:
stateLast = stateCurrent
stateCountL += 1
stateCountTotL += 1
stateQL.put(rotationCountL+stateCountL/40)

pauseEvent.wait()
if stateCountL == statesPerRotation:
rotationCountL += 1
stateCountL = 0
except KeyboardInterrupt:
GPIO.cleanup()

def get_coordinates(stateQR, stateQL, thetaQ, xcord, ycord):


try:
lastItemR = 0
lastItemL = 0
lastTheta = 0
while not stateQR.empty():
lastItemR = stateQR.get()
while not stateQL.empty():
lastItemL = stateQL.get()
while not thetaQ.empty():
lastTheta = float(thetaQ.get())

distanceR = pi*lastItemR*66*0.001*1000
distanceL = pi*lastItemL*66*0.001*1000
distance = (distanceR+distanceL)/2
xcord = xcord + distance*cos(radians(lastTheta))
ycord = ycord + distance*sin(radians(lastTheta))
return xcord, ycord, lastTheta
except KeyboardInterrupt:
return xcord, ycord, lastTheta
except Exception as e:
        print(f"Error in get_coordinates: {e}")
return xcord, ycord, lastTheta

def scan_and_send(lidar):
    print("Starting to gather data...")
    try:
        for scan in lidar.iter_scans(max_buf_meas=600):
            data = [(angles, distances) for distances, angles in scan]
            data = str(data)
            data = data.replace('[', '')
            data = data.replace(']', '')
            return data
    except KeyboardInterrupt:
        print("Stopping the scan...")
    except Exception as e:
        print(e)
        lidar.clean_input()
    finally:
        lidar.stop()
        lidar.stop_motor()
        lidar.disconnect()

def login_open_sheet(oauth_key_file, spreadsheet):
    """ Program to connect to Google Docs spreadsheet """
    try:
        scope = ['https://spreadsheets.google.com/feeds',
                 'https://www.googleapis.com/auth/drive']
        credentials = ServiceAccountCredentials.from_json_keyfile_name(
            oauth_key_file, scope)
        gc = gspread.authorize(credentials)
        worksheet = gc.open(spreadsheet).sheet1
        # print('not here')
        return worksheet
    except Exception as ex:
        print('Unable to login and get spreadsheet. Check OAuth credentials, '
              'spreadsheet name, and make sure spreadsheet is shared to the '
              'client_email address in the OAuth .json file!')
        print('Google sheet login failed with error:', ex)
        sys.exit(1)

def send_data(data, worksheet, xcord, ycord, lastTheta):
    try:
        if data is None:
            time.sleep(2)

        coordinates = str([(xcord, ycord)])
        data = data.replace('None, ', '')
        data = data.replace(', None', '')
        lastTheta = str(int(lastTheta))
        lastTheta = lastTheta.replace("'", '')
        try:
            worksheet.append_row((datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
                                  data, coordinates, lastTheta))
        except:
            print('Append error, logging in again')
            worksheet = login_open_sheet(GDOCS_OAUTH_JSON,
                                         GDOCS_SPREADSHEET_NAME)
        time.sleep(FREQUENCY_SECONDS)
    finally:
        print('Wrote a row to {0}'.format(GDOCS_SPREADSHEET_NAME))
        return worksheet

def main(stdscr, xcord, ycord, thetaQ, worksheet):
    try:
        while True:
            start_ultrasonic()
            start_motors()
            xcord, ycord = robot_control(stdscr, xcord, ycord, thetaQ,
                                         worksheet)
            return xcord, ycord
    except KeyboardInterrupt:
        print('not working')
    # After robot_control returns, perform any cleanup or
    # additional actions
    stop_motors()
    pwm_l.stop()
    pwm_r.stop()
    GPIO.cleanup()

# Run the main function using curses.wrapper
if __name__ == "__main__":
    worksheet = login_open_sheet(GDOCS_OAUTH_JSON,
                                 GDOCS_SPREADSHEET_NAME)
    print('log in successful...')
    xcord = 0
    ycord = 0
    thetaQ = Queue()
    gx_bias = calibrate_gyro()
    theta = Process(target=convert_dps_to_degrees, args=(thetaQ, gx_bias))
    theta.start()
    try:
        while True:
            xcord, ycord = curses.wrapper(main, xcord, ycord, thetaQ,
                                          worksheet)
    except KeyboardInterrupt:
        stop_motors()
        pwm_l.stop()
        pwm_r.stop()
        GPIO.cleanup()
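For reference, the pose update in get_coordinates is a plain dead-reckoning step: each wheel's travelled distance is its rotation count times the wheel circumference (π · 66 mm), the robot's travel is the mean of the two, and the gyroscope angle gives the direction. A minimal sketch with hypothetical sample values (the helper name dead_reckoning_step is ours):

from math import pi, cos, sin, radians

WHEEL_DIAMETER_MM = 66  # wheel diameter used in the program above

def dead_reckoning_step(x, y, rot_left, rot_right, theta_deg):
    """One odometry update, as done in get_coordinates."""
    dist_left = pi * WHEEL_DIAMETER_MM * rot_left    # mm
    dist_right = pi * WHEEL_DIAMETER_MM * rot_right  # mm
    dist = (dist_left + dist_right) / 2
    return (x + dist * cos(radians(theta_deg)),
            y + dist * sin(radians(theta_deg)))

# Hypothetical example: 2.5 wheel rotations on both wheels at a 30 degree heading
print(dead_reckoning_step(0.0, 0.0, 2.5, 2.5, 30.0))
# -> roughly (449, 259) mm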

A.7 Full program for wall-following algorithm
""" Full program for the wall-following algorithm.

The program uses a Raspberry Pi, two DC motors connected to a
voltage source, two optical encoder discs, two optical sensors,
two ultrasonic sensors and an RPLIDAR A1.

If the program does not work (starts but gets stuck), solve the
problem by pinging google.com:
    ping -c 4 google.com

To exit/stop the program press: ctrl+c
"""
from multiprocessing import Process, Queue, Event
from math import pi, radians, cos, sin
import RPi.GPIO as GPIO
import time
import sys
import datetime
from rplidar import RPLidar
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import mpu6050
import curses

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)

# ---- ENCODERS ----



enPinL = 26
GPIO.setup(enPinL, GPIO.IN)
enPinR = 19
GPIO.setup(enPinR, GPIO.IN)

# ---- Variables ----


statesPerRotation = 40
rotationCountL = 0
stateCountL = 0
stateCountTotL = 0
rotationCountR = 0
stateCountR = 0
stateCountTotR = 0

# ---- Pin numbers ----


trig_pin_r = 17
echo_pin_r = 11
trig_pin_f = 27
echo_pin_f = 0
rev_pin_l = 7
fwd_pin_l = 8
ena_pin_l = 1
rev_pin_r = 25
fwd_pin_r = 24
ena_pin_r = 23
#------------------------------------------------------------
# Percentage of speed. 100 = max speed, 50 = half max etc.
speed_l = 80
speed_r = 80*1.23
# To make sure the robot isn't too close to the wall we use
# the following variable
too_close = 10
too_far = 25

# Authorization document
GDOCS_OAUTH_JSON = 'google-auth.json'

# Google Docs spreadsheet name.


GDOCS_SPREADSHEET_NAME = 'SensorData'

# How long to wait (in seconds) between measurements.


FREQUENCY_SECONDS = 0

# Create a new MPU6050 object


sensor = mpu6050.mpu6050(0x68)

# Setup GPIO pins for PWM first



GPIO.setup(ena_pin_l, GPIO.OUT)
GPIO.setup(ena_pin_r, GPIO.OUT)

# Initialize PWM controllers


pwm_l = GPIO.PWM(ena_pin_l, 100)
pwm_r = GPIO.PWM(ena_pin_r, 100)

def start_ultrasonic():
GPIO.setup(trig_pin_r, GPIO.OUT)
GPIO.setup(echo_pin_r, GPIO.IN)
GPIO.setup(trig_pin_f, GPIO.OUT)
GPIO.setup(echo_pin_f, GPIO.IN)
GPIO.output(trig_pin_r, 0)
GPIO.output(trig_pin_f, 0)
print('Ultrasonic sensors ready...')

def read_distance(trig_pin, echo_pin):


GPIO.output(trig_pin, True)
time.sleep(0.00001) # 10 microsecond pulse
GPIO.output(trig_pin, False)

start_time = time.time()
stop_time = time.time()

# Save StartTime
while GPIO.input(echo_pin) == 0:
start_time = time.time()

# Save time of arrival


while GPIO.input(echo_pin) == 1:
stop_time = time.time()

# Time difference between start and arrival


time_elapsed = stop_time - start_time
# Multiply with the sonic speed (34300 cm/s)
# and divide by 2, because there and back
distance = (time_elapsed * 34300) / 2

return distance

def follow_right_wall():
xcord = 0
ycord = 0
# Open google sheet
worksheet = None
    worksheet = login_open_sheet(GDOCS_OAUTH_JSON,
                                 GDOCS_SPREADSHEET_NAME)
    print('log in successful...')
stateQR = Queue()
stateQL = Queue()
pauseEvent = Event()

    # Start encoder process
    encoderR = Process(target=encoder_states_right,
                       args=(stateCountR, stateCountTotR, rotationCountR,
                             statesPerRotation, stateQR, pauseEvent))
    encoderL = Process(target=encoder_states_left,
                       args=(stateCountL, stateCountTotL, rotationCountL,
                             statesPerRotation, stateQL, pauseEvent))
    encoderR.start()
    encoderL.start()

    thetaQ = Queue()
    gx_bias = calibrate_gyro()
    theta = Process(target=convert_dps_to_degrees, args=(thetaQ, gx_bias))
    theta.start()
    try:
        num = 20
        while True:
            if num % 20 == 0:
                pwm_l.stop()
                pwm_r.stop()
                dataList = []
                for i in range(3):
                    PORT_NAME = '/dev/ttyUSB0'
                    lidar = RPLidar(PORT_NAME)
                    data = scan_and_send(lidar)
                    dataList.append(data)
                dataList = str(dataList)
                data = dataList.replace("'", '')
                print('data acquired')
                xcord, ycord, lastTheta = get_coordinates(stateQR, stateQL,
                                                          thetaQ, xcord, ycord)
                print('coordinates acquired')
                worksheet = send_data(data, worksheet, xcord, ycord, lastTheta)
                encoderR.terminate()
                encoderL.terminate()
                encoderR = Process(target=encoder_states_right,
                                   args=(stateCountR, stateCountTotR,
                                         rotationCountR, statesPerRotation,
                                         stateQR, pauseEvent))
                encoderL = Process(target=encoder_states_left,
                                   args=(stateCountL, stateCountTotL,
                                         rotationCountL, statesPerRotation,
                                         stateQL, pauseEvent))
                encoderR.start()
                encoderL.start()
                start_ultrasonic()
                start_motors()
                num = 1
            front_distance = read_distance(trig_pin_f, echo_pin_f)
            right_distance = read_distance(trig_pin_r, echo_pin_r)
            print(f'Front: {front_distance} cm, Right: {right_distance} cm')
            if front_distance > too_far and right_distance > too_far:
                # pauseEvent.set()
                drive_fwd()
                time.sleep(0.3)
                stop_motors()
                # pauseEvent.clear()
                turn_right()
            elif front_distance > too_far and too_close < right_distance < too_far:
                pauseEvent.set()
                drive_fwd()
            elif front_distance < too_far:
                pauseEvent.clear()
                turn_left()
                time.sleep(0.3)
                pauseEvent.set()
                # stop_motors()
                drive_reverse()
            elif right_distance < too_close or (right_distance > too_far and
                                                front_distance < too_far):
                pauseEvent.clear()
                try:
                    turn_left()
                    print('in left')
                    time.sleep(0.3)
                    pauseEvent.set()
                    drive_reverse()
                    time.sleep(0.1)
                    drive_fwd()
                except:
                    pauseEvent.set()
                    drive_reverse()
                    time.sleep(0.2)
            time.sleep(0.1)
            num += 1
    except KeyboardInterrupt:
        stop_motors()

def start_motors():
GPIO.setup(rev_pin_l, GPIO.OUT)
GPIO.setup(fwd_pin_l, GPIO.OUT)
GPIO.setup(rev_pin_r, GPIO.OUT)
GPIO.setup(fwd_pin_r, GPIO.OUT)
pwm_l.start(0)
pwm_r.start(0)
    print("Motors work...")

def drive_fwd():
GPIO.output(fwd_pin_l, 1)
GPIO.output(fwd_pin_r, 1)

GPIO.output(rev_pin_l, 0)
GPIO.output(rev_pin_r, 0)
pwm_l.ChangeDutyCycle(speed_l)
pwm_r.ChangeDutyCycle(speed_r)

def drive_reverse():
GPIO.output(fwd_pin_l, 0)
GPIO.output(fwd_pin_r, 0)
GPIO.output(rev_pin_l, 1)
GPIO.output(rev_pin_r, 1)
pwm_l.ChangeDutyCycle(speed_l)
pwm_r.ChangeDutyCycle(speed_r)

def turn_right():
GPIO.output(fwd_pin_l, 1)
GPIO.output(fwd_pin_r, 0)
GPIO.output(rev_pin_l, 0)
GPIO.output(rev_pin_r, 1)
pwm_l.ChangeDutyCycle(speed_l)
pwm_r.ChangeDutyCycle(speed_r)

def turn_left():
GPIO.output(fwd_pin_l, 0)
GPIO.output(fwd_pin_r, 1)
GPIO.output(rev_pin_l, 1)
GPIO.output(rev_pin_r, 0)
pwm_l.ChangeDutyCycle(speed_l)
pwm_r.ChangeDutyCycle(speed_r)

def stop_motors():
GPIO.output(fwd_pin_l, 0)
GPIO.output(rev_pin_l, 0)
GPIO.output(fwd_pin_r, 0)
GPIO.output(rev_pin_r, 0)
pwm_l.ChangeDutyCycle(0)
pwm_r.ChangeDutyCycle(0)

def calibrate_gyro(samples=50):
    print("Calibrating gyroscope...")
gx_bias = 0.0
for _ in range(samples):
gyro_data = sensor.get_gyro_data()
gx_bias += gyro_data['x']
time.sleep(0.01)
gx_bias /= samples
return gx_bias

def read_sensor_data(gx_bias):
# Read the gyroscope values
gyroscope_data = sensor.get_gyro_data()
# Correct bias
gyroscope_data['x'] -= gx_bias
return gyroscope_data

def convert_dps_to_degrees(thetaQ, gx_bias):


try:
angle_x = 0.0
last_time = time.time()
count = 0
while True:
if count % 10 == 0:
count = 0
angle = (f'{angle_x:.2f}')
thetaQ.put(angle)
gyroscope_data = read_sensor_data(gx_bias)

# Current time and elapsed time since last read


current_time = time.time()
elapsed_time = current_time - last_time
last_time = current_time

# Gyroscope data is given in dps; integrate to get the angle


angle_x += gyroscope_data['x'] * elapsed_time

count += 1
time.sleep(0.1)
except KeyboardInterrupt:
GPIO.cleanup()

def encoder_states_right(stateCountR, stateCountTotR, rotationCountR,
                         statesPerRotation, stateQR, pauseEvent):
    """ Algorithm for calculating the rotations an encoder has made.

    Reads the signal pin (enPinR) and notes all state changes.
    If 40 state changes have been made, then one rotation is
    completed.
    """
    try:
        # Checks first state by reading the signal pin
        stateLast = GPIO.input(enPinR)
# Loop to continue to read the states
while True:
stateCurrent = GPIO.input(enPinR)
if stateCurrent != stateLast:
stateLast = stateCurrent
stateCountR += 1
stateCountTotR += 1
stateQR.put(rotationCountR+stateCountR/40)

pauseEvent.wait()
# If there have been as many state changes as there are
# states per rotation then a rotation has been completed
if stateCountR == statesPerRotation:
rotationCountR += 1
stateCountR = 0
except KeyboardInterrupt:
GPIO.cleanup()

def encoder_states_left(stateCountL, stateCountTotL,


rotationCountL,
statesPerRotation, stateQL,
pauseEvent):
try:
stateLast = GPIO.input(enPinL)
while True:
stateCurrent = GPIO.input(enPinL)
if stateCurrent != stateLast:
stateLast = stateCurrent
stateCountL += 1
stateCountTotL += 1
stateQL.put(rotationCountL+stateCountL/40)

pauseEvent.wait()
if stateCountL == statesPerRotation:
rotationCountL += 1
stateCountL = 0
except KeyboardInterrupt:
GPIO.cleanup()

def get_coordinates(stateQR, stateQL, thetaQ, xcord, ycord):


try:
lastItemR = 0
lastItemL = 0
lastTheta = 0
while not stateQR.empty():

lastItemR = stateQR.get()
while not stateQL.empty():
lastItemL = stateQL.get()
while not thetaQ.empty():
lastTheta = float(thetaQ.get())

distanceR = pi*lastItemR*66*0.001*1000
distanceL = pi*lastItemL*66*0.001*1000
distance = (distanceR+distanceL)/2
xcord = xcord + distance*cos(radians(lastTheta))
ycord = ycord + distance*sin(radians(lastTheta))
return xcord, ycord, lastTheta
except KeyboardInterrupt:
return xcord, ycord, lastTheta
except Exception as e:
        print(f"Error in get_coordinates: {e}")
return xcord, ycord, lastTheta

def scan_and_send(lidar):
    try:
        for scan in lidar.iter_scans(max_buf_meas=600):
            data = [(angles, distances) for distances, angles in scan]
            data = str(data)
            data = data.replace('[', '')
            data = data.replace(']', '')
            return data
    except KeyboardInterrupt:
        print("Stopping the scan...")
    except Exception as e:
        print(e)
    finally:
        lidar.stop()
        lidar.stop_motor()
        lidar.disconnect()

def login_open_sheet(oauth_key_file, spreadsheet):
    """Connect to Google Docs spreadsheet and return the first worksheet."""
    try:
        scope = ['https://spreadsheets.google.com/feeds',
                 'https://www.googleapis.com/auth/drive']
        credentials = ServiceAccountCredentials.from_json_keyfile_name(
            oauth_key_file, scope)
        gc = gspread.authorize(credentials)
        worksheet = gc.open(spreadsheet).sheet1
        return worksheet
    except Exception as ex:
        print('Unable to login and get spreadsheet. Check OAuth credentials, '
              'spreadsheet name, and make sure spreadsheet is shared to the '
              'client_email address in the OAuth .json file!')
        print('Google sheet login failed with error:', ex)
        sys.exit(1)

def send_data(data, worksheet, xcord, ycord, lastTheta):
    try:
        if data is None:
            time.sleep(2)

        coordinates = str([(xcord, ycord)])
        data = data.replace('None, ', '')
        data = data.replace(', None', '')
        lastTheta = str(int(lastTheta))
        lastTheta = lastTheta.replace("'", '')
        try:
            worksheet.append_row((datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
                                  data, coordinates, lastTheta))
        except:
            print('Append error, logging in again')
            worksheet = login_open_sheet(GDOCS_OAUTH_JSON,
                                         GDOCS_SPREADSHEET_NAME)
        time.sleep(FREQUENCY_SECONDS)
    finally:
        print('Wrote a row to {0}'.format(GDOCS_SPREADSHEET_NAME))
        return worksheet

def main():
    try:
        start_ultrasonic()
        start_motors()
        follow_right_wall()
        pwm_l.stop()
        pwm_r.stop()
        GPIO.cleanup()
    except KeyboardInterrupt:
        pwm_l.stop()
        pwm_r.stop()
        GPIO.cleanup()

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        GPIO.cleanup()
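To summarise the control logic of follow_right_wall above, the sketch below restates the reachable sensor conditions as a pure decision function. The helper name wall_follow_action and the returned labels are ours; the thresholds are the too_close = 10 cm and too_far = 25 cm values set at the top of the program:

TOO_CLOSE = 10  # cm, from the program above
TOO_FAR = 25    # cm, from the program above

def wall_follow_action(front, right):
    """Return the manoeuvre chosen for one pair of ultrasonic readings."""
    if front > TOO_FAR and right > TOO_FAR:
        return "forward, then turn right"    # lost the wall: curve back towards it
    if front > TOO_FAR and TOO_CLOSE < right < TOO_FAR:
        return "forward"                     # wall at a good distance: follow it
    if front < TOO_FAR:
        return "turn left, then reverse"     # obstacle ahead: turn away from it
    if right < TOO_CLOSE:
        return "turn left, reverse, forward" # too close to the wall: move away
    return "forward"

print(wall_follow_action(front=50.0, right=15.0))  # -> "forward"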

A.8 Program to retrieve data from Google Spreadsheet and visualize the data as a composite of multiple 2D scans of the environment
# Copyright (c) 2014 Adafruit Industries
# Author: Tony DiCola

# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import sys
import time
import datetime

import gspread
from oauth2client.service_account import ServiceAccountCredentials
import numpy as np
import matplotlib.pyplot as plt
import math
import ast

GDOCS_OAUTH_JSON = 'google-auth.json'

# Google Docs spreadsheet name.


GDOCS_SPREADSHEET_NAME = 'SensorData'

# How long to wait (in seconds) between measurements.


FREQUENCY_SECONDS = 0

def login_open_sheet(oauth_key_file, spreadsheet):
    """Connect to Google Docs spreadsheet and return the first worksheet."""
    try:
        scope = ['https://spreadsheets.google.com/feeds',
                 'https://www.googleapis.com/auth/drive']
        credentials = ServiceAccountCredentials.from_json_keyfile_name(
            oauth_key_file, scope)
        gc = gspread.authorize(credentials)
        worksheet = gc.open(spreadsheet).sheet1
        return worksheet
    except Exception as ex:
        print('Unable to login and get spreadsheet. Check OAuth credentials, '
              'spreadsheet name, and make sure spreadsheet is shared to the '
              'client_email address in the OAuth .json file!')
        print('Google sheet login failed with error:', ex)
        sys.exit(1)

def get_data(worksheet):
    try:
        # Login if necessary.
        if worksheet is None:
            worksheet = login_open_sheet(GDOCS_OAUTH_JSON,
                                         GDOCS_SPREADSHEET_NAME)

        # Retrieve all values from the spreadsheet
        try:
            array = np.array(worksheet.get_all_values())
        except:
            # Error retrieving data, most likely because credentials are
            # stale. Null out the worksheet so a login is performed at the
            # top of the loop.
            print('Retrieve error, logging in again')
            worksheet = None
        time.sleep(FREQUENCY_SECONDS)
    finally:
        print('Retrieved rows from {0}'.format(GDOCS_SPREADSHEET_NAME))
        time.sleep(FREQUENCY_SECONDS)
        return worksheet, array

def parse_lidar_data(lidar_data_array, theta_array):
    results = []  # To store the parsed data for each entry
    try:
        # Check if the input is a NumPy array
        if isinstance(lidar_data_array, np.ndarray):
            for ii in range(0, len(lidar_data_array)):
                lidar_points = ast.literal_eval(lidar_data_array[ii])
                distances = [point[0] for point in lidar_points]
                angles = [point[1] - int(theta_array[ii]) for point in lidar_points]
                results.append({'distances': distances, 'angles': angles})
        else:
            # Handle case where the input is a single string
            lidar_points = ast.literal_eval(lidar_data_array)
            distances = [point[0] for point in lidar_points]
            angles = [point[1] for point in lidar_points]
            results.append({'distances': distances, 'angles': angles})

        return results
    except ValueError as e:
        print(f"Error parsing LiDAR data: {e}")
        return None
    except Exception as e:
        print(f"Unexpected error: {e}")
        return None

def parse_measuring_point_coordinates(coordinates_data_array):
    results = []  # To store the parsed data for each entry
    try:
        # Check if the input is a NumPy array
        if isinstance(coordinates_data_array, np.ndarray):
            for item in coordinates_data_array:  # Process each item in the array
                coordinates = ast.literal_eval(item)
                x_coords = [point[0] for point in coordinates]
                y_coords = [point[1] for point in coordinates]
                results.append({'x_coords': x_coords, 'y_coords': y_coords})
        else:
            # Handle case where the input is a single string
            coordinates = ast.literal_eval(coordinates_data_array)
            x_coords = [point[0] for point in coordinates]
            y_coords = [point[1] for point in coordinates]
            results.append({'x_coords': x_coords, 'y_coords': y_coords})

        return results
    except ValueError as e:
        print(f"Error parsing measuring point coordinates: {e}")
        return None
    except Exception as e:
        print(f"Unexpected error: {e}")
        return None

def polar_to_cartesian(distances, angles):
    """ Convert polar coordinates to Cartesian. """
    x_coords = [r * math.cos(math.radians(theta))
                for r, theta in zip(distances, angles)]
    y_coords = [-r * math.sin(math.radians(theta))
                for r, theta in zip(distances, angles)]
    return x_coords, y_coords

def normalize_points(x_coords, y_coords, base_x, base_y):
    """Normalize points by translating them to a new base point."""
    normalized_x = [x - base_x for x in x_coords]
    normalized_y = [y - base_y for y in y_coords]
    return normalized_x, normalized_y

def plot_lidar_data(lidar_data, measuring_points):
    plt.figure(figsize=(7, 7))

    for data, m_point in zip(lidar_data, measuring_points):
        # Convert polar to Cartesian coordinates
        x_coords, y_coords = polar_to_cartesian(data['distances'], data['angles'])

        # Normalize to the measuring point coordinates
        # (assume one measuring point per set)
        base_x, base_y = m_point['x_coords'][0], m_point['y_coords'][0]
        normalized_x, normalized_y = normalize_points(x_coords, y_coords,
                                                      base_x, base_y)

        # Plot
        plt.scatter(normalized_x, normalized_y, marker='o',
                    label=f'Base @ ({base_x:.2f}, {base_y:.2f})', s=10)

    # plt.title('LiDAR Data Points (unfiltered)')
    plt.xlabel('X Coordinate')
    plt.ylabel('Y Coordinate')
    plt.axhline(0, color='black', linewidth=0.5)
    plt.axvline(0, color='black', linewidth=0.5)
    plt.legend()
    plt.grid(True)
    plt.axis('equal')  # Keep the scale equal to make it look proportional
    plt.show()

if __name__ == '__main__':
    worksheet = None
    worksheet, array = get_data(worksheet)
    dates_str = array[:, 0]
    lidar_data_str = array[:, 1]
    measuring_point_coordinates_str = array[:, 2]
    theta_array = array[:, 3]

    start_row = 5
    end_row = 7

    start_row -= 1

    theta_array = theta_array[start_row:end_row]

    test_lidar_data_str = lidar_data_str[start_row:end_row]
    test_measuring_point_coordinates_str = \
        measuring_point_coordinates_str[start_row:end_row]

    plot_lidar_data(parse_lidar_data(test_lidar_data_str, theta_array),
                    parse_measuring_point_coordinates(
                        test_measuring_point_coordinates_str))
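As a small numeric check of the conversion and normalisation steps performed above (the point and base coordinates are hypothetical, chosen only for illustration):

import math

# One hypothetical LiDAR point: 1000 mm at 60 degrees,
# taken from a measuring point at (200, -150) mm.
r, theta = 1000.0, 60.0
base_x, base_y = 200.0, -150.0

x = r * math.cos(math.radians(theta))    # ~500.0
y = -r * math.sin(math.radians(theta))   # ~-866.0 (y is mirrored, as in the script)
norm_x, norm_y = x - base_x, y - base_y  # ~(300.0, -716.0)
print(norm_x, norm_y)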

A.9 Script to download specific test run data from Google Sheets and calculate the coverage ratio for that test run
import sys
import time
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.transforms as transforms
import math
import ast

GDOCS_OAUTH_JSON = 'KEX/google-auth.json'
GDOCS_SPREADSHEET_NAME = 'Saved_SensorData'
FREQUENCY_SECONDS = 0

def login_open_sheet(oauth_key_file, spreadsheet):
    try:
        scope = ['https://spreadsheets.google.com/feeds',
                 'https://www.googleapis.com/auth/drive']
        credentials = ServiceAccountCredentials.from_json_keyfile_name(
            oauth_key_file, scope)
        gc = gspread.authorize(credentials)
        worksheet = gc.open(spreadsheet).sheet1
        return worksheet
    except Exception as ex:
        print('Unable to login and get spreadsheet. Check OAuth credentials, '
              'spreadsheet name, and make sure spreadsheet is shared to the '
              'client_email address in the OAuth .json file!')
        print('Google sheet login failed with error:', ex)
        sys.exit(1)

def get_data(worksheet):
    try:
        if worksheet is None:
            worksheet = login_open_sheet(GDOCS_OAUTH_JSON,
                                         GDOCS_SPREADSHEET_NAME)
        try:
            array = np.array(worksheet.get_all_values())
        except:
            print('Retrieve error, logging in again')
            worksheet = None
        time.sleep(FREQUENCY_SECONDS)
    finally:
        print('Retrieved rows from {0}'.format(GDOCS_SPREADSHEET_NAME))
        time.sleep(FREQUENCY_SECONDS)
        return worksheet, array

def parse_lidar_data(lidar_data_array, theta_array):
    results = []
    try:
        if isinstance(lidar_data_array, np.ndarray):
            for ii in range(0, len(lidar_data_array)):
                lidar_points = ast.literal_eval(lidar_data_array[ii])
                distances = [point[0] for point in lidar_points]
                angles = [point[1] - int(theta_array[ii]) for point in lidar_points]
                results.append({'distances': distances, 'angles': angles})
        else:
            lidar_points = ast.literal_eval(lidar_data_array)
            distances = [point[0] for point in lidar_points]
            angles = [point[1] for point in lidar_points]
            results.append({'distances': distances, 'angles': angles})
        return results
    except ValueError as e:
        print(f"Error parsing LiDAR data: {e}")
        return None
    except Exception as e:
        print(f"Unexpected error: {e}")
        return None

def parse_measuring_point_coordinates(coordinates_data_array):
    results = []
    try:
        if isinstance(coordinates_data_array, np.ndarray):
            for item in coordinates_data_array:
                coordinates = ast.literal_eval(item)
                x_coords = [point[0] for point in coordinates]
                y_coords = [point[1] for point in coordinates]
                results.append({'x_coords': x_coords, 'y_coords': y_coords})
        else:
            coordinates = ast.literal_eval(coordinates_data_array)
            x_coords = [point[0] for point in coordinates]
            y_coords = [point[1] for point in coordinates]
            results.append({'x_coords': x_coords, 'y_coords': y_coords})
        return results
    except ValueError as e:
        print(f"Error parsing measuring point coordinates: {e}")
        return None
    except Exception as e:
        print(f"Unexpected error: {e}")
        return None

def polar_to_cartesian(distances, angles):
    x_coords = [r * math.cos(math.radians(theta))
                for r, theta in zip(distances, angles)]
    y_coords = [-r * math.sin(math.radians(theta))
                for r, theta in zip(distances, angles)]
    return x_coords, y_coords

def normalize_points(x_coords, y_coords, base_x, base_y):


normalized_x = [x - base_x for x in x_coords]
normalized_y = [y - base_y for y in y_coords]
return normalized_x, normalized_y

def rotate_points(x_coords, y_coords, angle):
    angle_rad = math.radians(angle)
    rotated_x = [x * math.cos(angle_rad) - y * math.sin(angle_rad)
                 for x, y in zip(x_coords, y_coords)]
    rotated_y = [x * math.sin(angle_rad) + y * math.cos(angle_rad)
                 for x, y in zip(x_coords, y_coords)]
    return rotated_x, rotated_y

def translate_points(x_coords, y_coords, offset_x, offset_y):


translated_x = [x + offset_x for x in x_coords]
translated_y = [y + offset_y for y in y_coords]
return translated_x, translated_y

def prepare_lidar_data(lidar_data, measuring_points, offset_angle,
                       offset_x, offset_y):
    all_translated_x = []
    all_translated_y = []
    all_labels = []

    for data, m_point in zip(lidar_data, measuring_points):
        x_coords, y_coords = polar_to_cartesian(data['distances'], data['angles'])
        base_x, base_y = m_point['x_coords'][0], m_point['y_coords'][0]
        normalized_x, normalized_y = normalize_points(x_coords, y_coords,
                                                      base_x, base_y)
        rotated_x, rotated_y = rotate_points(normalized_x, normalized_y,
                                             offset_angle)
        translated_x, translated_y = translate_points(rotated_x, rotated_y,
                                                      offset_x, offset_y)

        all_translated_x.append(translated_x)
        all_translated_y.append(translated_y)
        all_labels.append(f'Base @ ({base_x:.2f}, {base_y:.2f})')

    return all_translated_x, all_translated_y, all_labels

def plot_lidar_data(lidar_x_sets, lidar_y_sets, labels, rect_x, rect_y,
                    rect_width, rect_height, rect_angle, covered_cells=None,
                    cell_width=None, cell_height=None):
    plt.figure(figsize=(7, 7))

    for lidar_x, lidar_y, label in zip(lidar_x_sets, lidar_y_sets, labels):
        plt.scatter(lidar_x, lidar_y, marker='o', s=10, label=label)

    trans = transforms.Affine2D().rotate_deg_around(
        rect_x + rect_width / 2, rect_y + rect_height / 2,
        rect_angle) + plt.gca().transData
    rect = plt.Rectangle((rect_x, rect_y), rect_width, rect_height,
                         linewidth=1, edgecolor='red', facecolor='none',
                         transform=trans)
    plt.gca().add_patch(rect)

    # Plot the cells
    if covered_cells is not None and cell_width is not None and cell_height is not None:
        for (cell_x, cell_y) in covered_cells:
            cell_center_x = rect_x + cell_x * cell_width - cell_width / 2
            cell_center_y = rect_y + cell_y * cell_height - cell_height / 2
            cell_rect = plt.Rectangle((cell_center_x, cell_center_y),
                                      cell_width, cell_height, linewidth=1,
                                      edgecolor='blue', facecolor='none',
                                      linestyle='--', transform=trans)
            plt.gca().add_patch(cell_rect)

    plt.xlabel('X Coordinate')
    plt.ylabel('Y Coordinate')
    plt.axhline(0, color='black', linewidth=0.5)
    plt.axvline(0, color='black', linewidth=0.5)
    plt.legend()
    plt.grid(True)
    plt.axis('equal')
    plt.show()

def add_cell(covered_cells, cell_x, cell_y):


covered_cells.add((cell_x, cell_y))

def remove_cell(covered_cells, cell_x, cell_y):


covered_cells.discard((cell_x, cell_y))

def coverage_ratio(lidar_x_sets, lidar_y_sets, rect_x, rect_y, rect_width,
                   rect_height, rect_angle, threshold=100/2):
    cell_size = 100  # Size of the cells in mm
    num_cells_width = int(rect_width / cell_size)
    num_cells_height = int(rect_height / cell_size)

    covered_cells = set()

    # Define the perimeter cells
    edges = []
    for i in range(num_cells_width):
        edges.append((i * cell_size + cell_size / 2, 0))            # bottom edge
        edges.append((i * cell_size + cell_size / 2, rect_height))  # top edge
    for j in range(num_cells_height):
        edges.append((0, j * cell_size + cell_size / 2))            # left edge
        edges.append((rect_width, j * cell_size + cell_size / 2))   # right edge

    trans = transforms.Affine2D().rotate_deg_around(
        rect_x + rect_width / 2, rect_y + rect_height / 2, rect_angle)

    for lidar_x, lidar_y in zip(lidar_x_sets, lidar_y_sets):
        for x, y in zip(lidar_x, lidar_y):
            for ex, ey in edges:
                ex_rot, ey_rot = trans.transform_point((rect_x + ex, rect_y + ey))
                if distance_to_rectangle(x, y, ex_rot, ey_rot, 0, 0, 0) <= threshold:
                    cell_x = int(ex // cell_size)
                    cell_y = int(ey // cell_size)
                    covered_cells.add((cell_x, cell_y))

    total_cells = len(edges)
    # coverage = len(covered_cells) / total_cells
    return total_cells, covered_cells, cell_size, cell_size

def distance_to_rectangle(x, y, rect_x, rect_y, rect_width, rect_height,
                          rect_angle):
    rect_center_x = rect_x + rect_width / 2
    rect_center_y = rect_y + rect_height / 2
    angle_rad = -math.radians(rect_angle)
    cos_angle = math.cos(angle_rad)
    sin_angle = math.sin(angle_rad)
    translated_x = (cos_angle * (x - rect_center_x)
                    - sin_angle * (y - rect_center_y) + rect_center_x)
    translated_y = (sin_angle * (x - rect_center_x)
                    + cos_angle * (y - rect_center_y) + rect_center_y)
    dx = max(rect_x - translated_x, 0, translated_x - (rect_x + rect_width))
    dy = max(rect_y - translated_y, 0, translated_y - (rect_y + rect_height))
    return math.hypot(dx, dy)

if __name__ == '__main__':
    worksheet = None
    worksheet, array = get_data(worksheet)
    lidar_data_str = array[:, 1]
    measuring_point_coordinates_str = array[:, 2]
    theta_array = array[:, 3]

    start_row = 158
    end_row = 161

    offset_angle = 90
    offset_x = 0
    offset_y = 0

    rect_x = -315 - 190 + 40   # + (move right)
    rect_y = -2165 + 150 - 40  # + (move up)
    rect_angle = -0.75

    rect_width = 1720   # hardcoded
    rect_height = 2500  # hardcoded

    start_row -= 1

    theta_array = theta_array[start_row:end_row]

    test_lidar_data_str = lidar_data_str[start_row:end_row]
    test_measuring_point_coordinates_str = \
        measuring_point_coordinates_str[start_row:end_row]

    parsed_lidar_data = parse_lidar_data(test_lidar_data_str, theta_array)
    parsed_measuring_points = parse_measuring_point_coordinates(
        test_measuring_point_coordinates_str)

    lidar_x_sets, lidar_y_sets, labels = prepare_lidar_data(
        parsed_lidar_data, parsed_measuring_points, offset_angle,
        offset_x, offset_y)

    total_cells, covered_cells, cell_width, cell_height = coverage_ratio(
        lidar_x_sets, lidar_y_sets, rect_x, rect_y, rect_width, rect_height,
        rect_angle)

    # Manually add or remove cells if needed
    remove_cell(covered_cells, 1, 0)
    add_cell(covered_cells, 2, 0)
    remove_cell(covered_cells, 0, 17)

    coverage = len(covered_cells) / total_cells

    print(f"Covered cells: {len(covered_cells)}")
    print(f"Total cells: {total_cells}")
    print(f"Coverage ratio: {coverage:.2f}")

    # Plot with the covered cells visualized
    plot_lidar_data(lidar_x_sets, lidar_y_sets, labels, rect_x, rect_y,
                    rect_width, rect_height, rect_angle, covered_cells,
                    cell_width, cell_height)
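For the hard-coded 1720 mm × 2500 mm room above with 100 mm cells, coverage_ratio builds a perimeter grid of 2 · 17 + 2 · 25 = 84 cells, so the reported coverage ratio can be sanity-checked as follows (the 63 covered cells below are a hypothetical value):

# Perimeter cell count for the hard-coded 1720 mm x 2500 mm room
# with 100 mm cells, as built by coverage_ratio above.
num_cells_width = 1720 // 100    # 17
num_cells_height = 2500 // 100   # 25
total_cells = 2 * num_cells_width + 2 * num_cells_height
print(total_cells)               # 84

# A hypothetical run in which 63 perimeter cells receive at least one
# LiDAR point within the 50 mm threshold would then give:
print(63 / total_cells)          # 0.75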
TRITA-ITM-EX-2024:53

www.kth.se
