JENNIFER LUNDSTRÖM
THOMAS NORÉN
Abstract
This project developed a 2D room-mapping robot equipped with Light De-
tection and Ranging (LiDAR) technology, designed to navigate environments
either autonomously via a wall-following algorithm or manually through
keyboard inputs. The goal was to assess and compare the accuracy of the
generated room maps with actual floor blueprints and evaluate the navigation
methods’ precision and practical applicability. Constructed with dual Direct
Current (DC) motors, a motor driver, an inertial measurement unit, two
ultrasonic sensors and optical encoders, an RPLIDAR A1, and a Raspberry
Pi 3B+, the robot was designed and fabricated using Solid Edge, laser cutting,
and 3D printing. Results indicated that manual control achieved more accurate
mappings than the autonomous navigation, though both struggled with data
acquisition, suggesting areas for future improvement in sensor accuracy and
processing.
Keywords
Mechatronics, LiDAR, Raspberry Pi
Sammanfattning
This project developed a robot for mapping rooms in 2D, equipped with LiDAR technology and designed to navigate environments either autonomously via a wall-following algorithm or manually via keyboard input. The goal was to assess and compare the accuracy of the generated room maps against actual building drawings and to evaluate the precision and practical applicability of the navigation methods. Built with two DC motors, a motor driver, a gyroscope, two ultrasonic sensors and optical encoders, an RPLIDAR A1 and a Raspberry Pi 3B+, the robot was designed and manufactured using Solid Edge, laser cutting and 3D printing. The results showed that manual control achieved more accurate mappings than autonomous navigation, although both had problems with data acquisition, which points to areas for future improvement in sensor accuracy and data processing.
Nyckelord
Mechatronics, LiDAR, Raspberry Pi
Acknowledgments
We would first like to thank our supervisor Daniel Frede for all the help he
has provided and for providing us with the RPLIDAR A1M8 and all the other
components that made this project possible. We would also like to thank the
teaching assistants for their excellent help with setting up the communication
between our robot and computer.
Contents

1 Introduction 1
  1.1 Background 1
  1.2 Purpose 2
  1.3 Scope 2
  1.4 Method 2
2 Theory 3
  2.1 Microcontroller 3
  2.2 Ultrasonic sensor 4
  2.3 Motors and Encoders 5
  2.4 Motor driver 5
  2.5 LiDAR 6
  2.6 Inertial measurement unit 7
  2.7 Localization 7
3 Method 8
  3.1 Creating a driveable robot 8
  3.2 Raspberry Pi 3B+ 8
  3.3 Ultrasonic sensors 9
  3.4 Calculating distance with encoders 10
  3.5 RPLIDAR A1 10
  3.6 Designing the robot 11
    3.6.1 Design of structural components 11
    3.6.2 3D printed parts 11
    3.6.3 Laser cut parts 13
  3.7 Wall-following algorithm 13
  3.8 Manual steering of the robot 14
  3.9 Uploading RPLIDAR data, coordinates and angles to a Google Spreadsheet 14
6 Future work 30
References 33
A Supporting materials 35
  A.1 Ranges for the scans 35
  A.2 Additional results 36
    A.2.1 Additional manual results 36
    A.2.2 Additional autonomous results 38
  A.3 Supporting Materials 40
    A.3.1 Circuit and schematic of robot 40
  A.4 Models 41
    A.4.1 Laser cut parts 41
  A.5 Python Code 42
    A.5.1 Get rotations from optical encoder 42
    A.5.2 Uploading data to Google Spreadsheet 44
  A.6 Full program for manual input 48
List of Figures

4.1 Figure showing the position of the robot in the miniature room when doing a three-scan measurement. 18
4.2 The acquired data points from the three-scan measurement scattered in a plot. 18
4.3 Figure showing the position of the robot in the miniature room when doing a four-scan measurement. 19
4.4 The acquired data points from the four-scan measurement scattered in a plot. 19
4.5 Figure showing the position of the robot in the miniature room when doing a seven-scan measurement. 19
A.1 Figure showing the position of the robot in the miniature room when doing a one-scan measurement. 36
A.2 The acquired data points from the one-scan measurement scattered in a plot. 36
A.3 Figure showing the position of the robot in the miniature room when doing a two-scan measurement. 37
List of Tables

4.1 Time taken for whole run, number of scans for that run and time per scan for manual tests 1-12, including average time per scan. 21
4.2 Time taken for whole run, number of scans for that run and time per scan for autonomous tests 1-12, including average time per scan. 22
4.3 Number of scans, coverage ratio, number of covered cells and coverage ratio per covered cells for six manual and six autonomous runs with different numbers of measurements. 26
Chapter 1
Introduction
1.1 Background
As vehicles are developed toward autonomy, one key component of this transition is LiDAR. Innovation in this area could improve day-to-day tasks and help address modern societal problems. With this technology, distances to different objects can be measured, making it a key component for collecting information about obstacles and the environment. By mounting a LiDAR on a moving robot, it is possible to scan a room and create a 2D layout. LiDAR is, however, rarely used alone; many components need to cooperate to create a functioning robot. To better understand how different components work together and can be used to create solutions for modern problems, a study was conducted with the purpose of building a robot capable of mapping rooms in 2D with LiDAR.
1.2 Purpose
The purpose of this thesis is to present the prototyping process of a LiDAR-equipped robot capable of mapping rooms in 2D. The robot will navigate through a room and simultaneously collect LiDAR data and coordinates along the way. The data and coordinates are then sent to a computer that will process the data and create a 2D layout of the room. The theory behind the different technologies that make up the project, and the methods used to get these technologies to work together, are presented. The research questions being addressed are as follows:
• How does the LiDAR sampling frequency affect the quality of the generated room scan?
• How are the quality of the scan and the time taken to scan affected by using a simple wall-following algorithm versus controlling the robot via keyboard input?
1.3 Scope
The project was allocated a timeline of approximately five months and a budget of SEK 1000, not accounting for commonly available components such as resistors, Direct Current (DC) motors, breadboards, and Microcontroller Units (MCUs), which were supplied by KTH Royal Institute of Technology (KTH). As such, the robot is limited to a basic prototype.
1.4 Method
The project is divided into two main phases, the first being a theoretical investigation. In this phase, information about the different components is gathered and a theoretical setup of the robot is created. The second is the experimental phase, in which the components are tested and assembled into a functioning robot.
Chapter 2
Theory
This chapter provides the theory behind all the components incorporated into this project, as well as those that were considered for potential use. See chapter 3 for how the components were used.
2.1 Microcontroller
An MCU works like a small computer and consists of one or more Central Processing Units (CPUs) along with memory and programmable input/output ports. To control a device such as a LiDAR robot, a microcontroller such as the Raspberry Pi can be used.
By applying a voltage to a pin, the microcontroller reads a one, and by applying 0 V it reads a zero [1]. Since computers use binary code consisting of zeros and ones, this allows us to communicate with the microcontroller.
The microcontroller used in this project is the Raspberry Pi 3B+, see figure 2.1. All Raspberry Pi models have two serial ports (Universal Asynchronous Receiver-Transmitter (UART) pins), which are needed for communication with the LiDAR [2]. The microcontroller has built-in WiFi, which allows data to be sent from the microcontroller to a computer for processing. The Raspberry Pi 3B+ contains a total of 40 General Purpose Input Output (GPIO) pins, where some only function as input/output pins while others have additional functions such as Pulse Width Modulation (PWM), Serial Peripheral Interface (SPI), Inter-Integrated Circuit (I2C) and UART [3]. The microcontroller can be programmed using the programming language Python.
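As a minimal illustration of this high/low signalling, the sketch below toggles a single output pin using the RPi.GPIO library that is also used in the appendix code; the pin number 18 is an arbitrary example, not a pin used on the actual robot.

import time
import RPi.GPIO as GPIO

PIN = 18  # example pin (BCM numbering), chosen for illustration only

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN, GPIO.OUT)

try:
    for _ in range(5):
        GPIO.output(PIN, GPIO.HIGH)  # ~3.3 V on the pin, read as a one
        time.sleep(0.5)
        GPIO.output(PIN, GPIO.LOW)   # 0 V on the pin, read as a zero
        time.sleep(0.5)
finally:
    GPIO.cleanup()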
2.5 LiDAR
The LiDAR used is the RPLIDAR A1, which measures reflected infrared light at wavelengths of 775-785 nm [2] to calculate the light's travel time to surfaces within the interval 0.12-12 m. Similarly to ultrasonic sensors, the distance to objects can then be calculated with equation 2.1. The RPLIDAR A1 performs a 360 degree scan as it has a built-in motor, see figure 2.5, which differs from an ultrasonic sensor, which only measures a single point.
2.7 Localization
If the current position (x_i, y_i) of the robot is known, then the next coordinate (x_{i+1}, y_{i+1}) can be calculated with the following equations:

x_{i+1} = x_i + d · cos(θ)    (2.3)

y_{i+1} = y_i + d · sin(θ)    (2.4)

d = (d_left + d_right) / 2    (2.5)

where d_left and d_right are the distances travelled by the left and right wheels. By setting the start coordinates as (x_i, y_i) = (0, 0), we can use the method above to calculate the position.
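A minimal sketch of this dead-reckoning update is shown below; the function and argument names are illustrative, with wheel distances in millimetres and the heading θ in degrees, matching equations 2.3-2.5.

from math import cos, sin, radians

def update_position(x, y, d_left, d_right, theta_deg):
    # Average of the two wheel distances, equation 2.5
    d = (d_left + d_right) / 2
    # Advance the position along the current heading, equations 2.3 and 2.4
    x_next = x + d * cos(radians(theta_deg))
    y_next = y + d * sin(radians(theta_deg))
    return x_next, y_next

# Starting from the origin (0, 0), as described above
x, y = update_position(0.0, 0.0, d_left=105.0, d_right=95.0, theta_deg=30.0)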
Chapter 3
Method
This chapter showcases the experimental part of the project: the design of the robot, complete with circuit diagrams, the components and how they were used, along with CAD models and software.
where V_in = 5 V and V_out = 3.3 V. The following equation for voltage dividers is used [1]:

V_out = V_in · R2 / (R1 + R2)    (3.1)

This equation can then be rewritten as:

R2 = R1 · V_out / (V_in − V_out)    (3.2)

A common resistor value is 680 Ω, so R1 is set to this value and R2 is calculated with equation 3.2 to 1320 Ω. As no resistors of value 1320 Ω were available, a resistor R2 = 1200 Ω was chosen, which gives V_out = 3.19 V. This is still close to 3.3 V and inside the range accepted by the Raspberry Pi 3B+ GPIO pins: the Raspberry Pi 3B+ accepts any voltage above 1.6 V as a high logic level, and therefore the voltage level of V_out = 3.19 V is sufficient [3].
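As a quick sanity check of these values, the short sketch below reproduces the calculation; the variable names simply mirror equations 3.1 and 3.2.

V_IN = 5.0           # echo signal voltage (V)
V_OUT_TARGET = 3.3   # desired GPIO input voltage (V)
R1 = 680.0           # chosen standard resistor (ohm)

# Ideal R2 from equation 3.2
r2_ideal = R1 * V_OUT_TARGET / (V_IN - V_OUT_TARGET)   # = 1320 ohm

# Actual output voltage with the available 1200 ohm resistor, equation 3.1
R2 = 1200.0
v_out = V_IN * R2 / (R1 + R2)                          # = 3.19 V
print(r2_ideal, v_out)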
3.5 RPLIDAR A1
The RPLIDAR A1 is connected to the Raspberry Pi via the communication cable, which is then inserted into the Universal Serial Bus (USB) adapter, see figure 3.2 [7]. A micro-USB to USB-A cable was then inserted into the USB-A port of the Raspberry Pi 3B+. The USB adapter converts the internal UART serial interface to a USB interface via a CP2102 chip [7][8]. To connect the RPLIDAR to a PC, a CP2102 driver needs to be installed; this driver allows a CP2102-based device, such as the RPLIDAR, to appear on the PC as an additional COM port [8]. When connecting to a Raspberry Pi there is no need to install an additional driver, since the RPLIDAR will automatically appear in the device directory with a name such as ttyUSB0 or ttyUSB1, depending on the number of USB devices already connected to the Raspberry Pi. Knowing the path to the RPLIDAR, a Python script was written, see A.5.2, to collect the LiDAR data points using the existing RPLidar library from Roboticia [9][10], together with the Python libraries numpy and matplotlib.
Figure 3.2: Picture of components in RPLiDAR A1M8 kit taken from [7]
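A minimal sketch of how such a script can read measurements with the RPLidar library is shown below, assuming the device appears as /dev/ttyUSB0 as described above; only the number of measurements per rotation is printed here.

from rplidar import RPLidar  # RPLidar library from Roboticia [9][10]

PORT_NAME = '/dev/ttyUSB0'   # device path on the Raspberry Pi

lidar = RPLidar(PORT_NAME)
try:
    # iter_scans() yields one complete rotation at a time; each measurement
    # is a (quality, angle in degrees, distance in mm) tuple
    for i, scan in enumerate(lidar.iter_scans()):
        print('scan %d: %d measurements' % (i, len(scan)))
        if i >= 2:   # stop after a few rotations
            break
finally:
    lidar.stop()
    lidar.stop_motor()
    lidar.disconnect()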
Ball transfer unit: Designed to create stability and to allow turning around the robot's center of gravity, this unit was modeled and printed to house a standard 15-17 mm diameter glass marble, see figure 3.3.
Figure 3.3: Exploded render of the ball transfer unit with marble in Solid Edge.
DC Motor Mounts: The DC motor mounts, see figure 3.4, were needed to fasten the DC motors to the chassis. These parts were modeled in Solid Edge, converted to .stl files, and prepared for printing using the Ultimaker Cura software. This process ensured that each mount fit its component, as confirmed by datasheet specifications and direct measurements with calipers. After printing, holes were drilled in the front and back of the mounts to better fit the DC motors.
Figure 3.5: Render of LiDAR mount in Solid Edge.
Figure 3.6: Render of LiDAR mount in Solid Edge.
controlling the robot to those positions. The manual algorithm was tested in the following manner: for the first test with three scans, two measurements were made at the front and back of the room, and one scan was made near the middle of the room. For the tests with four or more scans, the miniature room was traversed corner to corner in a counter-clockwise manner, making sure to capture a scan in each corner.
For the autonomous tests, the robot was placed in the lower right corner; see figure 3.9 for the room layout. The time interval between scans was then varied to perform different numbers of scans and thereby test how the scan frequency affects the plot. The captured LiDAR data points were then, along with the collected optical wheel encoder and IMU data, used to plot a visual representation of the scanned environment for all tests. See figure 3.10 for an example of what these scans look like.
3.9. The bounding box perimeter is then divided into perimeter cells whose size can be altered; for this project a cell size of 100 x 100 mm was deemed sufficient. The coverage ratio was then calculated by dividing the number of perimeter cells containing one or more LiDAR points by the total number of perimeter cells. By evaluating the coverage ratio, we can quantify how well the LiDAR scans cover the defined area. A Python script was written to automate this calculation for multiple test runs comprising multiple scans, but some manual adjustments were needed to tell the program which rows to download from Google Sheets and to add or remove misplaced cells.
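The sketch below illustrates this coverage calculation under simplifying assumptions: an axis-aligned room rectangle with its origin in one corner, a fixed cell size, and a point counted as covering a perimeter cell if it lies within a threshold of the room outline. The function and variable names are illustrative and do not reproduce the exact script that was used.

import numpy as np

def coverage_ratio(points, rect_w, rect_h, cell_size=100.0, threshold=50.0):
    """points: Nx2 array of LiDAR points (mm) in the room frame, origin at a corner."""
    # Build the set of perimeter cells along the rectangle's outline
    nx, ny = int(np.ceil(rect_w / cell_size)), int(np.ceil(rect_h / cell_size))
    perimeter_cells = {(i, j) for i in range(nx) for j in range(ny)
                       if i in (0, nx - 1) or j in (0, ny - 1)}

    covered = set()
    for x, y in points:
        # Distance from the point to the nearest wall of the rectangle
        dist_to_outline = min(abs(x), abs(rect_w - x), abs(y), abs(rect_h - y))
        if dist_to_outline <= threshold:
            cell = (int(x // cell_size), int(y // cell_size))
            if cell in perimeter_cells:
                covered.add(cell)

    return len(covered) / len(perimeter_cells), len(covered)

# Example: three points measured in a hypothetical 1200 x 800 mm miniature room
pts = np.array([[10.0, 400.0], [1190.0, 50.0], [600.0, 795.0]])
ratio, n_cells = coverage_ratio(pts, rect_w=1200.0, rect_h=800.0)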
Chapter 4
Results and Analysis
Figure 4.7: The acquired data points using the wall-following algorithm from
the three-scan measurement scattered in a plot.
Figure 4.8: The acquired data points using the wall-following algorithm from
the four-scan measurement scattered in a plot.
Figure 4.9: The acquired data points using the wall-following algorithm from
the seven-scan measurement scattered in a plot.
Table 4.1: Time taken for whole run, number of scans for that run and time
per scan for manual tests 1-12, including average time per scan.
Table 4.2: Time taken for whole run, number of scans for that run and time
per scan for autonomous tests 1-12, including average time per scan.
Figure 4.12: Plot showing the room’s outline (red rectangle), LiDAR data
points from three manual measurements and the perimeter cells that have data
points inside (blue squares).
Figure 4.13: Plot showing the room’s outline (red rectangle), LiDAR data
points from four manual measurements and the perimeter cells that have data
points inside (blue squares).
Figure 4.14: Plot showing the room’s outline (red rectangle), LiDAR data
points from seven manual measurements and the perimeter cells that have data
points inside (blue squares).
Figure 4.15: Plot showing the room’s outline (red rectangle), LiDAR data
points from three autonomous measurements and the perimeter cells that have
data points inside (blue squares).
Figure 4.16: Plot showing the room’s outline (red rectangle), LiDAR data
points from four autonomous measurements and the perimeter cells that have
data points inside (blue squares).
Figure 4.17: Plot showing the room’s outline (red rectangle), LiDAR data
points from seven autonomous measurements and the perimeter cells that have
data points inside (blue squares).
Figure 4.18: Plot showing the room’s outline (red rectangle), LiDAR data
points from seven autonomous measurements and the perimeter cells that have
data points inside (blue squares).
Table 4.3: Number of scans, coverage ratio, number of covered cells and
coverage ratio per covered cells for six manual and six autonomous runs with
different numbers of measurements.
Manual runs
Run      Number of scans   Coverage ratio   Covered cells   Coverage ratio per covered cell
1        1                 0.27             23              0.0117
2        2                 0.26             22              0.0123
3        3                 0.25             21              0.0119
4        4                 0.15             13              0.0115
5        5                 0.17             14              0.0121
6        7                 0.38             32              0.0119
Average  3.67              0.25             20.8            0.0119

Autonomous runs
Run      Number of scans   Coverage ratio   Covered cells   Coverage ratio per covered cell
1        3                 0.10             8               0.0125
2        2                 0.10             8               0.0125
3        3                 0.10             8               0.0125
4        4                 0.17             14              0.0121
5        7                 0.25             21              0.0119
6        9                 0.26             22              0.0118
Average  4.67              0.16             13.5            0.0122
Chapter 5
Discussion and Conclusions
Chapter 6
Future work
As this is a basic prototype created under limited time, one area that could be further developed is the localization algorithm. A very simple algorithm utilizing encoders and angles was used, which is prone to inaccuracies. Other methods that use not only position and angle but also their derivatives could possibly give a more accurate position. There is also the possibility of using a beacon and tracking the position of the robot relative to it.
The LiDAR that was used only scanned and sent data points within the range (250°, 350°). This is most likely due to how the RPLIDAR library was used, and adjusting it so that the LiDAR could use the full 360° range would have led to higher quality scans and fewer scans needed to capture the environment.
Autonomous adaptive sampling that lets the robot adjust its LiDAR
scanning frequency based on the complexity of the environment could
optimise both data collection quality and process efficiency.
Filtering unnecessary data can lead to clearer plots and reduced computation times. A k-means clustering algorithm was tested and effectively filtered out redundant data points while preserving the overall shape of the measured environments. This approach was particularly useful given the high volume of data collected during both manual and autonomous test runs, which resulted in cluttered visual outputs. Plotting typically becomes computationally intensive as the number of data points increases, with our laptop showing noticeable slowdowns when handling around 5000 points in a single graph. By employing k-means clustering, not only was the visual clarity of the room's representation enhanced, but the computation time required for displaying graphs and for executing a convex hull algorithm, which was used to outline the scatter's perimeter, was significantly reduced. This method simplifies the plot's appearance by effectively managing large, dense clusters of data points. As it was not deemed necessary, however, the filtering method and the outlining of the scatter's perimeter were omitted.
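A sketch of this kind of filtering is shown below, assuming scikit-learn's KMeans and SciPy's ConvexHull; the exact implementation that was tested is not included in this report.

import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial import ConvexHull

def downsample_points(points, n_clusters=500):
    """Replace a dense Nx2 point cloud with its k-means cluster centres."""
    k = min(n_clusters, len(points))
    km = KMeans(n_clusters=k, n_init=10).fit(points)
    return km.cluster_centers_

# Example with synthetic points standing in for the LiDAR data
rng = np.random.default_rng(0)
points = rng.uniform(0, 1000, size=(5000, 2))
reduced = downsample_points(points)   # ~500 representative points
hull = ConvexHull(reduced)            # outline of the reduced scatter
outline = reduced[hull.vertices]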
References
Appendix A
Supporting materials
Figure A.7: The acquired data points using the wall-following algorithm from
a two-scan measurement scattered in a plot.
Figure A.8: The acquired data points using the wall-following algorithm from
a five-scan measurement scattered in a plot.
Figure A.9: The acquired data points using the wall-following algorithm from
a ten-scan measurement scattered in a plot.
A.4 Models
A.4.1 Laser cut parts
Figure A.12: Solid Edge Draft Document of the bottom part of the chassis.
Figure A.13: Solid Edge Draft Document of one of the struts for the chassis.
Figure A.14: Solid Edge Draft Document of the top part of the chassis.
A.5 Python Code

A.5.1 Get rotations from optical encoder

import time
import RPi.GPIO as GPIO

# Pin numbers, speed and counters below are placeholders; the original
# definitions are not included in this excerpt.
fwd_pin_l = 17          # left motor forward pin (BCM)
rev_pin_l = 27          # left motor reverse pin (BCM)
ena_pin_l = 22          # left motor enable (PWM) pin (BCM)
enPin = 26              # optical encoder signal pin (BCM)
speed_l = 50            # PWM duty cycle in percent
stateCount = 0
stateCountTot = 0
rotationCount = 0
statesPerRotation = 8   # encoder state changes per wheel rotation

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(enPin, GPIO.IN)

def start_motors():
    # Left motor
    GPIO.setup(rev_pin_l, GPIO.OUT)
    GPIO.setup(fwd_pin_l, GPIO.OUT)
    GPIO.setup(ena_pin_l, GPIO.OUT)
    pwm_l = GPIO.PWM(ena_pin_l, 100)
    pwm_l.start(0)
    print("Motors work..")
    return pwm_l

def stop_motors():
    GPIO.output(fwd_pin_l, GPIO.LOW)
    GPIO.output(rev_pin_l, GPIO.LOW)
    time.sleep(2)
    GPIO.output(rev_pin_l, GPIO.LOW)

def drive_fwd(speed_l, pwm_l):
    # Def line reconstructed from the call in __main__; not shown in the excerpt.
    try:
        GPIO.output(fwd_pin_l, GPIO.HIGH)
        GPIO.output(rev_pin_l, GPIO.LOW)
        pwm_l.ChangeDutyCycle(speed_l)
    except KeyboardInterrupt:
        stop_motors()
        GPIO.cleanup()

def encoder_states(stateCount, stateCountTot, rotationCount, statesPerRotation):
    """
    Reads the signal pin (enPin) and notes all state changes.
    If 8 state changes have been made then one rotation is completed.
    """
    try:
        stateLast = GPIO.input(enPin)
        while True:
            stateCurrent = GPIO.input(enPin)
            if stateCurrent != stateLast:
                stateLast = stateCurrent
                stateCount += 1
                stateCountTot += 1
            if stateCount == statesPerRotation:
                rotationCount += 1
                stateCount = 0
                print('rotations: ' + str(rotationCount))
    except KeyboardInterrupt:  # ctrl c
        GPIO.cleanup()

if __name__ == '__main__':
    try:
        pwm_l = start_motors()
        drive_fwd(speed_l, pwm_l)
        encoder_states(stateCount, stateCountTot, rotationCount, statesPerRotation)
    except KeyboardInterrupt:
        stop_motors()
        GPIO.cleanup()
        print('shutting down...')
# Excerpt from the upload script (A.5.2); the helper functions scan_and_send()
# and send_data() as well as the imports appear elsewhere in the full listing.
GDOCS_OAUTH_JSON = 'google-auth.json'

if __name__ == '__main__':
    worksheet = None
    while True:
        PORT_NAME = '/dev/ttyUSB0'  # This may vary depending on where your RPLIDAR is connected
        lidar = RPLidar(PORT_NAME)
        data = scan_and_send(lidar)
        worksheet = send_data(data, worksheet)
A.6 Full program for manual input

"""
If the program does not work (starts but gets stuck), solve the
problem by pinging google.com:
    ping -c 4 google.com
"""
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)

stateCountTotR = 0

# Authorization document
GDOCS_OAUTH_JSON = 'google-auth.json'

def start_ultrasonic():
    GPIO.setup(trig_pin_r, GPIO.OUT)
    GPIO.setup(echo_pin_r, GPIO.IN)
    GPIO.setup(trig_pin_f, GPIO.OUT)
    GPIO.setup(echo_pin_f, GPIO.IN)
    GPIO.output(trig_pin_r, 0)
    GPIO.output(trig_pin_f, 0)

def start_motors():
    GPIO.setup(rev_pin_l, GPIO.OUT)
    GPIO.setup(fwd_pin_l, GPIO.OUT)
    GPIO.setup(rev_pin_r, GPIO.OUT)
    GPIO.setup(fwd_pin_r, GPIO.OUT)
    pwm_l.start(0)
    pwm_r.start(0)

def drive_fwd():
    print('fwd')
    GPIO.output(fwd_pin_l, 1)
    GPIO.output(fwd_pin_r, 1)
    GPIO.output(rev_pin_l, 0)
    GPIO.output(rev_pin_r, 0)
    pwm_l.ChangeDutyCycle(speed_l)
    pwm_r.ChangeDutyCycle(speed_r)

def drive_reverse():
    print('rev')
    GPIO.output(fwd_pin_l, 0)
    GPIO.output(fwd_pin_r, 0)
    GPIO.output(rev_pin_l, 1)
    GPIO.output(rev_pin_r, 1)
    pwm_l.ChangeDutyCycle(speed_l)
    pwm_r.ChangeDutyCycle(speed_r)

def turn_right():
    print('right')
    GPIO.output(fwd_pin_l, 1)
    GPIO.output(fwd_pin_r, 0)
    GPIO.output(rev_pin_l, 0)
    GPIO.output(rev_pin_r, 0)
    pwm_l.ChangeDutyCycle(speed_l - 10)
    pwm_r.ChangeDutyCycle(speed_r - 10)

def turn_left():
    print('left')
    GPIO.output(fwd_pin_l, 0)
    GPIO.output(fwd_pin_r, 1)
    GPIO.output(rev_pin_l, 0)
    GPIO.output(rev_pin_r, 0)
    pwm_l.ChangeDutyCycle(speed_l - 10)
    pwm_r.ChangeDutyCycle(speed_r - 10)
# Start Lidar
PORT_NAME = '/dev/ttyUSB0'
lidar = RPLidar(PORT_NAME)
print('lidar started...')

while True:
    key = window.getch()
    if key == curses.KEY_UP:
        pauseEvent.set()
        drive_fwd()
    elif key == curses.KEY_DOWN:
        pauseEvent.set()
        drive_reverse()
    elif key == curses.KEY_LEFT:
        pauseEvent.clear()
        turn_left()
    elif key == curses.KEY_RIGHT:
        pauseEvent.clear()
        turn_right()

def stop_motors():
    GPIO.output(fwd_pin_l, 0)
    GPIO.output(rev_pin_l, 0)
    GPIO.output(fwd_pin_r, 0)
    GPIO.output(rev_pin_r, 0)
    pwm_l.ChangeDutyCycle(0)
    pwm_r.ChangeDutyCycle(0)
    print("Motors stopped")
def calibrate_gyro(samples=50):
    gx_bias = 0.0
    for _ in range(samples):
        gyro_data = sensor.get_gyro_data()
        gx_bias += gyro_data['x']
        time.sleep(0.01)  # short delay between reads
    gx_bias /= samples
    return gx_bias

def read_sensor_data(gx_bias):
    # Read the gyroscope values
    gyroscope_data = sensor.get_gyro_data()
    # Correct bias
    gyroscope_data['x'] -= gx_bias
    return gyroscope_data

        time.sleep(0.1)
    except KeyboardInterrupt:
        GPIO.cleanup()

    """
    Reads the signal pin (enPin) and notes all state changes.
    If 40 state changes have been made then one rotation is completed.
    """
    try:
        # Checks first state by reading the signal pin
        stateLast = GPIO.input(enPinR)
        # Loop to continue to read the states
        while True:
            stateCurrent = GPIO.input(enPinR)
            if stateCurrent != stateLast:
                stateLast = stateCurrent
                stateCountR += 1
                stateCountTotR += 1
                stateQR.put(rotationCountR + stateCountR / 40)
            stateCurrent = GPIO.input(enPinL)
            if stateCurrent != stateLast:
                stateLast = stateCurrent
                stateCountL += 1
                stateCountTotL += 1
                stateQL.put(rotationCountL + stateCountL / 40)
            pauseEvent.wait()
            if stateCountL == statesPerRotation:
                rotationCountL += 1
                stateCountL = 0
    except KeyboardInterrupt:
        GPIO.cleanup()

        # Wheel travel from rotations: circumference (pi * 66 mm) times rotations
        distanceR = pi * lastItemR * 66 * 0.001 * 1000
        distanceL = pi * lastItemL * 66 * 0.001 * 1000
        distance = (distanceR + distanceL) / 2
        xcord = xcord + distance * cos(radians(lastTheta))
        ycord = ycord + distance * sin(radians(lastTheta))
        return xcord, ycord, lastTheta
    except KeyboardInterrupt:
        return xcord, ycord, lastTheta
    except Exception as e:
        print(f"Error in get_coordinates: {e}")
        return xcord, ycord, lastTheta
def scan_and_send(lidar):
    print("Starting to gather data...")
    try:
        for scan in lidar.iter_scans(max_buf_meas=600):
            data = [(angles, distances) for distances, angles in scan]
            data = str(data)
            data = data.replace('[', '')
            data = data.replace(']', '')
            return data
    except KeyboardInterrupt:
        print("Stopping the scan...")
    except Exception as e:
        print(e)
        lidar.clean_input()
    finally:
        lidar.stop()
        lidar.stop_motor()
        lidar.disconnect()

    coordinates = str([(xcord, ycord)])
    data = data.replace('None, ', '')
    data = data.replace(', None', '')
    lastTheta = str(int(lastTheta))
    lastTheta = lastTheta.replace("'", '')
    try:
        worksheet.append_row((datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
                              data, coordinates, lastTheta))
    except:
        print('Append error, logging in again')
        worksheet = login_open_sheet(GDOCS_OAUTH_JSON, GDOCS_SPREADSHEET_NAME)
        time.sleep(FREQUENCY_SECONDS)
    finally:
        print('Wrote a row to {0}'.format(GDOCS_SPREADSHEET_NAME))
    return worksheet

theta.start()
try:
    while True:
        xcord, ycord = curses.wrapper(main, xcord, ycord, thetaQ, worksheet)
except KeyboardInterrupt:
    stop_motors()
    pwm_l.stop()
    pwm_r.stop()
    GPIO.cleanup()
"""
If the program does not work (starts but gets stuck), solve the
problem by pinging google.com:
    ping -c 4 google.com
"""

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)

enPinL = 26
GPIO.setup(enPinL, GPIO.IN)
enPinR = 19
GPIO.setup(enPinR, GPIO.IN)

# Authorization document
GDOCS_OAUTH_JSON = 'google-auth.json'

GPIO.setup(ena_pin_l, GPIO.OUT)
GPIO.setup(ena_pin_r, GPIO.OUT)

def start_ultrasonic():
    GPIO.setup(trig_pin_r, GPIO.OUT)
    GPIO.setup(echo_pin_r, GPIO.IN)
    GPIO.setup(trig_pin_f, GPIO.OUT)
    GPIO.setup(echo_pin_f, GPIO.IN)
    GPIO.output(trig_pin_r, 0)
    GPIO.output(trig_pin_f, 0)
    print('Ultrasonic sensors ready...')

    start_time = time.time()
    stop_time = time.time()
    # Save StartTime
    while GPIO.input(echo_pin) == 0:
        start_time = time.time()

    return distance
def follow_right_wall():
    xcord = 0
    ycord = 0
    # Open google sheet
    worksheet = None
    worksheet = login_open_sheet(GDOCS_OAUTH_JSON, GDOCS_SPREADSHEET_NAME)
    print('log in successful...')
    stateQR = Queue()
    stateQL = Queue()
    pauseEvent = Event()
    thetaQ = Queue()
    gx_bias = calibrate_gyro()
    theta = Process(target=convert_dps_to_degrees, args=(thetaQ, gx_bias))
    theta.start()
    try:
        num = 20
        while True:
            if num % 20 == 0:
                pwm_l.stop()
                pwm_r.stop()
                dataList = []
                for i in range(3):
                    PORT_NAME = '/dev/ttyUSB0'
                    lidar = RPLidar(PORT_NAME)
                    data = scan_and_send(lidar)
                    dataList.append(data)
                dataList = str(dataList)
                data = dataList.replace("'", '')
                print('data acquired')
                xcord, ycord, lastTheta = get_coordinates(stateQR, stateQL, thetaQ, xcord, ycord)
                print('coordinates acquired')
def start_motors():
    GPIO.setup(rev_pin_l, GPIO.OUT)
    GPIO.setup(fwd_pin_l, GPIO.OUT)
    GPIO.setup(rev_pin_r, GPIO.OUT)
    GPIO.setup(fwd_pin_r, GPIO.OUT)
    pwm_l.start(0)
    pwm_r.start(0)
    print("Motors work...")

def drive_fwd():
    GPIO.output(fwd_pin_l, 1)
    GPIO.output(fwd_pin_r, 1)
    GPIO.output(rev_pin_l, 0)
    GPIO.output(rev_pin_r, 0)
    pwm_l.ChangeDutyCycle(speed_l)
    pwm_r.ChangeDutyCycle(speed_r)

def drive_reverse():
    GPIO.output(fwd_pin_l, 0)
    GPIO.output(fwd_pin_r, 0)
    GPIO.output(rev_pin_l, 1)
    GPIO.output(rev_pin_r, 1)
    pwm_l.ChangeDutyCycle(speed_l)
    pwm_r.ChangeDutyCycle(speed_r)

def turn_right():
    GPIO.output(fwd_pin_l, 1)
    GPIO.output(fwd_pin_r, 0)
    GPIO.output(rev_pin_l, 0)
    GPIO.output(rev_pin_r, 1)
    pwm_l.ChangeDutyCycle(speed_l)
    pwm_r.ChangeDutyCycle(speed_r)

def turn_left():
    GPIO.output(fwd_pin_l, 0)
    GPIO.output(fwd_pin_r, 1)
    GPIO.output(rev_pin_l, 1)
    GPIO.output(rev_pin_r, 0)
    pwm_l.ChangeDutyCycle(speed_l)
    pwm_r.ChangeDutyCycle(speed_r)

def stop_motors():
    GPIO.output(fwd_pin_l, 0)
    GPIO.output(rev_pin_l, 0)
    GPIO.output(fwd_pin_r, 0)
    GPIO.output(rev_pin_r, 0)
    pwm_l.ChangeDutyCycle(0)
    pwm_r.ChangeDutyCycle(0)
def calibrate_gyro(samples=50):
    print("Calibrating gyroscope...")
    gx_bias = 0.0
    for _ in range(samples):
        gyro_data = sensor.get_gyro_data()
        gx_bias += gyro_data['x']
        time.sleep(0.01)
    gx_bias /= samples
    return gx_bias

def read_sensor_data(gx_bias):
    # Read the gyroscope values
    gyroscope_data = sensor.get_gyro_data()
    # Correct bias
    gyroscope_data['x'] -= gx_bias
    return gyroscope_data

        count += 1
        time.sleep(0.1)
    except KeyboardInterrupt:
        GPIO.cleanup()

    """
    Reads the signal pin (en_pin) and notes all state changes.
    If 8 state changes have been made then one rotation is completed.
    """
    try:
        pauseEvent.wait()
        # If there have been as many state changes as there are
        # states per rotation then a rotation has been completed
        if stateCountR == statesPerRotation:
            rotationCountR += 1
            stateCountR = 0
    except KeyboardInterrupt:
        GPIO.cleanup()

        pauseEvent.wait()
        if stateCountL == statesPerRotation:
            rotationCountL += 1
            stateCountL = 0
    except KeyboardInterrupt:
        GPIO.cleanup()
            lastItemR = stateQR.get()
        while not stateQL.empty():
            lastItemL = stateQL.get()
        while not thetaQ.empty():
            lastTheta = float(thetaQ.get())
        # Wheel travel from rotations: circumference (pi * 66 mm) times rotations
        distanceR = pi * lastItemR * 66 * 0.001 * 1000
        distanceL = pi * lastItemL * 66 * 0.001 * 1000
        distance = (distanceR + distanceL) / 2
        xcord = xcord + distance * cos(radians(lastTheta))
        ycord = ycord + distance * sin(radians(lastTheta))
        return xcord, ycord, lastTheta
    except KeyboardInterrupt:
        return xcord, ycord, lastTheta
    except Exception as e:
        print(f"Error in get_coordinates: {e}")
        return xcord, ycord, lastTheta

def scan_and_send(lidar):
    try:
        for scan in lidar.iter_scans(max_buf_meas=600):
            data = [(angles, distances) for distances, angles in scan]
            data = str(data)
            data = data.replace('[', '')
            data = data.replace(']', '')
            return data
    except KeyboardInterrupt:
        print("Stopping the scan...")
    except Exception as e:
        print(e)
    finally:
        lidar.stop()
        lidar.stop_motor()
        lidar.disconnect()
        )
        gc = gspread.authorize(credentials)
        worksheet = gc.open(spreadsheet).sheet1
        return worksheet
    except Exception as ex:
        print('Unable to login and get spreadsheet. Check OAuth credentials, '
              'spreadsheet name, and make sure spreadsheet is shared to the '
              'client_email address in the OAuth .json file!')
        print('Google sheet login failed with error:', ex)
        sys.exit(1)

    coordinates = str([(xcord, ycord)])
    data = data.replace('None, ', '')
    data = data.replace(', None', '')
    lastTheta = str(int(lastTheta))
    lastTheta = lastTheta.replace("'", '')
    try:
        worksheet.append_row((datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
                              data, coordinates, lastTheta))
    except:
        print('Append error, logging in again')
        worksheet = login_open_sheet(GDOCS_OAUTH_JSON, GDOCS_SPREADSHEET_NAME)
        time.sleep(FREQUENCY_SECONDS)
    finally:
        print('Wrote a row to {0}'.format(GDOCS_SPREADSHEET_NAME))
    return worksheet
def main():
    try:
        start_ultrasonic()
        start_motors()
        follow_right_wall()
        pwm_l.stop()
        pwm_r.stop()
        GPIO.cleanup()
    except KeyboardInterrupt:
        pwm_l.stop()
        pwm_r.stop()
        GPIO.cleanup()

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        GPIO.cleanup()
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import numpy as np
import matplotlib.pyplot as plt
import math
import ast
GDOCS_OAUTH_JSON = 'google-auth.json'

def get_data(worksheet):
    try:
        # Login if necessary.
        if worksheet is None:
            worksheet = login_open_sheet(GDOCS_OAUTH_JSON, GDOCS_SPREADSHEET_NAME)

    try:
        # Check if the input is a NumPy array
        if isinstance(lidar_data_array, np.ndarray):
            for ii in range(0, len(lidar_data_array)):
                lidar_points = ast.literal_eval(lidar_data_array[ii])
                distances = [point[0] for point in lidar_points]
                angles = [point[1] - int(theta_array[ii]) for point in lidar_points]
                results.append({'distances': distances, 'angles': angles})
        else:
            # Handle case where the input is a single string
            lidar_points = ast.literal_eval(lidar_data_array)
            distances = [point[0] for point in lidar_points]
            angles = [point[1] for point in lidar_points]
            results.append({'distances': distances, 'angles': angles})
        return results
    except ValueError as e:
        print(f"Error parsing LiDAR data: {e}")
        return None
    except Exception as e:
        print(f"Unexpected error: {e}")
        return None
def parse_measuring_point_coordinates(coordinates_data_array):
    results = []  # To store the parsed data for each entry
    try:
        # Check if the input is a NumPy array
        if isinstance(coordinates_data_array, np.ndarray):
            for item in coordinates_data_array:  # Process each item in the array
                coordinates = ast.literal_eval(item)
                x_coords = [point[0] for point in coordinates]
                y_coords = [point[1] for point in coordinates]
                results.append({'x_coords': x_coords, 'y_coords': y_coords})
        else:
            # Handle case where the input is a single string
            coordinates = ast.literal_eval(coordinates_data_array)
            x_coords = [point[0] for point in coordinates]
            y_coords = [point[1] for point in coordinates]
            results.append({'x_coords': x_coords, 'y_coords': y_coords})
        return results
    except ValueError as e:
        print(f"Error parsing measuring point coordinates: {e}")
        return None
    except Exception as e:
        print(f"Unexpected error: {e}")
        return None
    # Plot
    plt.scatter(normalized_x, normalized_y, marker='o',
                label=f'Base @ ({base_x:.2f}, {base_y:.2f})', s=10)

if __name__ == '__main__':
    worksheet = None
    worksheet, array = get_data(worksheet)
    dates_str = array[:, 0]
    lidar_data_str = array[:, 1]
    measuring_point_coordinates_str = array[:, 2]
    theta_array = array[:, 3]
    start_row = 5
    end_row = 7
    start_row -= 1
    theta_array = theta_array[start_row:end_row]
    test_lidar_data_str = lidar_data_str[start_row:end_row]
    test_measuring_point_coordinates_str = measuring_point_coordinates_str[start_row:end_row]
    plot_lidar_data(parse_lidar_data(test_lidar_data_str, theta_array),
                    parse_measuring_point_coordinates(test_measuring_point_coordinates_str))
GDOCS_OAUTH_JSON = 'KEX/google-auth.json'
GDOCS_SPREADSHEET_NAME = 'Saved_SensorData'
FREQUENCY_SECONDS = 0

def get_data(worksheet):
    try:
        if worksheet is None:
            worksheet = login_open_sheet(GDOCS_OAUTH_JSON, GDOCS_SPREADSHEET_NAME)
        try:
            array = np.array(worksheet.get_all_values())
        except:
            print('Retrieve error, logging in again')
            worksheet = None
            time.sleep(FREQUENCY_SECONDS)
        finally:
            print('Retrieved rows from {0}'.format(GDOCS_SPREADSHEET_NAME))
            time.sleep(FREQUENCY_SECONDS)
        return worksheet, array
    except ValueError as e:
        print(f"Error parsing LiDAR data: {e}")
        return None
    except Exception as e:
        print(f"Unexpected error: {e}")
        return None
def parse_measuring_point_coordinates(coordinates_data_array):
    results = []
    try:
        if isinstance(coordinates_data_array, np.ndarray):
            for item in coordinates_data_array:
                coordinates = ast.literal_eval(item)
                x_coords = [point[0] for point in coordinates]
                y_coords = [point[1] for point in coordinates]
                results.append({'x_coords': x_coords, 'y_coords': y_coords})
        else:
            coordinates = ast.literal_eval(coordinates_data_array)
            x_coords = [point[0] for point in coordinates]
            y_coords = [point[1] for point in coordinates]
            results.append({'x_coords': x_coords, 'y_coords': y_coords})
        return results
    except ValueError as e:
        print(f"Error parsing measuring point coordinates: {e}")
        return None
    except Exception as e:
        print(f"Unexpected error: {e}")
        return None
        all_translated_x.append(translated_x)
        all_translated_y.append(translated_y)
        all_labels.append(f'Base @ ({base_x:.2f}, {base_y:.2f})')

    trans = transforms.Affine2D().rotate_deg_around(rect_x + rect_width / 2,
                                                    rect_y + rect_height / 2,
                                                    rect_angle) + plt.gca().transData
    rect = plt.Rectangle((rect_x, rect_y), rect_width, rect_height, linewidth=1,
                         edgecolor='red', facecolor='none', transform=trans)
    plt.gca().add_patch(rect)
    plt.xlabel('X Coordinate')
    plt.ylabel('Y Coordinate')
    plt.axhline(0, color='black', linewidth=0.5)
    plt.axvline(0, color='black', linewidth=0.5)
    plt.legend()
    plt.grid(True)
    plt.axis('equal')
    plt.show()
    covered_cells = set()
    trans = transforms.Affine2D().rotate_deg_around(rect_x + rect_width / 2,
                                                    rect_y + rect_height / 2,
                                                    rect_angle)
                rect_y + ey))
            if distance_to_rectangle(x, y, ex_rot, ey_rot, 0, 0, 0) <= threshold:
                cell_x = int(ex // cell_size)
                cell_y = int(ey // cell_size)
                covered_cells.add((cell_x, cell_y))
    total_cells = len(edges)
    # coverage = len(covered_cells) / total_cells
    return total_cells, covered_cells, cell_size, cell_size
if __name__ == '__main__':
    worksheet = None
    worksheet, array = get_data(worksheet)
    lidar_data_str = array[:, 1]
    measuring_point_coordinates_str = array[:, 2]
    theta_array = array[:, 3]
    start_row = 158
    end_row = 161
    offset_angle = 90
    offset_x = 0
    offset_y = 0
    start_row -= 1
    theta_array = theta_array[start_row:end_row]
    test_lidar_data_str = lidar_data_str[start_row:end_row]
    test_measuring_point_coordinates_str = measuring_point_coordinates_str[start_row:end_row]
    parsed_lidar_data = parse_lidar_data(test_lidar_data_str, theta_array)
    parsed_measuring_points = parse_measuring_point_coordinates(test_measuring_point_coordinates_str)
    remove_cell(covered_cells, 0, 17)