Driverless subsystem: Recruitment Test 2025
General Instructions:
● Write all the answers with a clear expression of your thoughts and ideas.
● Since you have access to the internet, you are allowed to browse (as if you wouldn't
in either case :P).
● The most unique answers will help you fetch an interview call. Plagiarism of any kind
will not be appreciated.
● You are welcome to use LLMs to understand concepts. Any answers that are copy-pasted
from generative AI will be rejected straight away.
● Submit the answers in PDF format with the name as “paper name_your name”, for
example “Driverless_NAME.pdf”. Answers may be either hand-written or typed.
● Attempt the paper in good spirit and give enough time to each question. You may
not be able to answer all of them, which is fine. Answer as many as you can.
● Submission deadline: Sunday, 26th January 2025, 11:59 pm.
Best of Luck!
Mention the following piece of information on your first page:
➢ Name
➢ Roll Number
➢ Department
➢ Phone number
➢ Email ID (active Gmail)
Subsystem       Maximum Points
Perception      25
SLAM            25
PPC             25
AISD            25
Total           100
Remember there is no perfect solution, so feel free to write your approach for every question
even if you aren't sure it solves the problem.
Perception
Now enjoy being the eyes of the car. :)
Question 1: [7 marks]
Computer vision: The artificial eyes
Perception deals with identifying the surroundings by detecting objects and classifying them into
different categories/classes such as ball, cat, human, traffic light, or cone (and, if a cone, its
colour: green, blue, red, yellow, etc.). Since a computer is responsible for this job in a driverless
car, the task comes under the purview of what is known as ‘computer vision’.
Computer vision involves interpreting and analyzing data from images and live video feeds, and
then processing that data to reveal useful information such as the location of objects in an image
(whether the object is at the bottom right or bottom left, or, if a more precise location is required,
at a specific pixel such as (234, 199)) and much more.
An image is made up of pixels. A computer stores an image in the form of a 3D matrix. Each
element of the matrix represents the intensity of a point in the image in one of the three
colours: red, green and blue.
As we can see in the above image, the element in row 1, column 1 (I(1,1)) represents the intensity
of the pixel at the top-left corner of the robot's image.
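To make this concrete, here is a minimal sketch (all sizes and values are hypothetical) of how such a 3D matrix looks in code, using NumPy:

```python
import numpy as np

# A hypothetical 480x640 RGB image: rows x columns x colour channels.
image = np.zeros((480, 640, 3), dtype=np.uint8)   # all intensities start at 0 (black)

# Set the pixel mentioned in the text, read here as (column 234, row 199),
# to pure red: intensities (R, G, B) = (255, 0, 0).
image[199, 234] = [255, 0, 0]

print(image.shape)        # (480, 640, 3): the 3D matrix described above
print(image[199, 234])    # [255   0   0]: R, G, B intensities of that pixel
```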
Which algorithm/method do you think is in use in the image above for cone detection? Mention its
data format and the information that can be of most use in determining the depth of the cones.
Given that the above image is cropped from the main image, explain how the cone can
be separated from it. Furthermore, the images captured by the camera sometimes
contain noise. What different methods can be applied to solve this
issue? Explain the approach in detail with proper calculations. (Hint: refer to
convolutions.)
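To make the convolution hint concrete, here is a minimal sketch (the patch values and the choice of a 3x3 mean filter are assumptions; Gaussian or median filters are common alternatives) of how noisy pixels are smoothed by averaging them with their neighbours:

```python
import numpy as np

# Hypothetical noisy 5x5 grayscale patch (values 0-255).
patch = np.array([
    [10, 12, 11, 13, 12],
    [11, 90, 12, 11, 13],   # 90 is a noisy outlier
    [12, 11, 13, 12, 11],
    [13, 12, 11, 90, 12],   # another outlier
    [11, 13, 12, 11, 13],
], dtype=float)

kernel = np.ones((3, 3)) / 9.0   # 3x3 mean (box) filter

# "Valid" convolution: slide the kernel over every 3x3 neighbourhood and replace
# the centre pixel by the weighted sum of its neighbours. (With a symmetric
# kernel like this one, correlation and convolution give the same result.)
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = np.sum(patch[i:i+3, j:j+3] * kernel)

print(np.round(out, 1))  # the 90s get averaged down towards their neighbours
```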
Question 2: [8 marks]
Perception’s role is to find the range, theta (or X, Y), and class of the cones. There are mainly two
types of cones we are concerned with: blue and yellow. Blue cones have white stripes in
the middle and yellow cones have black stripes in the middle (refer to the image above). We know
the dimensions of each cone: height, radius, stripe width and stripe height from the top. For now,
let’s use LiDAR only for detecting the range, theta, and colour of the cones. How would you detect
and filter out the cones, and then go on to classify them? Explain in detail the data format and the
relevant information needed to solve the problem of detecting and classifying cones using LiDAR
as the only sensor.
While testing, it was found that the intensity of a point hitting a
● Yellow surface ranges from 20 to 40
● Blue surface ranges from 10 to 30
● Black surface ranges from 0 to 15
● White surface ranges from 35 to 50
Because there is always noise in the data provided by sensors, you can’t directly conclude from a
single point in a cluster that an intensity of 0–15 means it’s a yellow cone (black stripe) or that
35–50 means it’s a blue cone (white stripe). Try to think of a robust method to solve this issue.
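For illustration only (this is one possible baseline under stated assumptions, not the expected answer): instead of trusting single points, aggregate intensities over an already-clustered cone, split the cluster into a stripe band and a body band using the known stripe height, and compare the medians of the two bands. The function below is a hypothetical sketch of that idea:

```python
import numpy as np

def classify_cone(points, intensities, stripe_bottom, stripe_top):
    """Hypothetical classifier for one cone cluster.

    points        : (N, 3) array of LiDAR points belonging to the cluster
    intensities   : (N,) array of per-point intensities
    stripe_bottom, stripe_top : height limits of the stripe band (m above ground),
                                assumed known from the cone dimensions
    """
    z = points[:, 2]
    stripe = (z >= stripe_bottom) & (z <= stripe_top)

    body_med = np.median(intensities[~stripe])    # cone body
    stripe_med = np.median(intensities[stripe])   # stripe region

    # Use the *contrast* between the two bands rather than absolute values:
    # medians over many points are far more robust to per-point noise.
    if stripe_med < body_med:
        return "yellow"   # black stripe is darker than the yellow body
    else:
        return "blue"     # white stripe is brighter than the blue body
```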
Question 3: [10 marks]
The car relies on its LiDAR to detect 3D points in the environment and its camera to capture a 2D
visual representation of the track. However, the LiDAR and camera are not calibrated, meaning their
data do not align properly. This misalignment causes the car to misperceive objects, which
could lead to errors.
Your task is to help calibrate the lidar and camera to work harmoniously, ensuring that the car
correctly maps the cones and navigates the track efficiently.
Describe in detail how you would calibrate the sensors. Write down the steps and their importance.
Specify the data required for each part of the process and how it can be acquired. Provide all
necessary calculations, make reasonable assumptions where necessary, and be sure to state
these assumptions explicitly.
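As one concrete illustration of a single step in such a pipeline (a sketch, not the full procedure the question asks for): assuming you already have 3D–2D correspondences from a calibration target whose corners you can locate both in the LiDAR point cloud and in the image, and assuming the camera intrinsics K are known from a prior intrinsic calibration, the LiDAR-to-camera extrinsics can be estimated with a PnP solver such as OpenCV's solvePnP. The data below is synthetic just to keep the sketch runnable:

```python
import numpy as np
import cv2

# --- Synthetic setup (placeholder numbers, only to make the sketch runnable) ---
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])   # assumed camera intrinsics
dist = np.zeros(5)                     # assume negligible lens distortion

# "True" LiDAR->camera extrinsics that we pretend not to know.
rvec_true = np.array([[0.02], [-0.01], [0.03]])
tvec_true = np.array([[0.10], [-0.05], [0.30]])

# 3D corners of a calibration board expressed in the LiDAR frame (metres).
lidar_pts = np.random.uniform([-1, -1, 3], [1, 1, 6], size=(20, 3))

# Project them into the image to stand in for the 2D detections you would
# normally get from a corner/marker detector.
image_pts, _ = cv2.projectPoints(lidar_pts, rvec_true, tvec_true, K, dist)

# --- The calibration step: recover R, t from the 3D-2D correspondences ---
ok, rvec, tvec = cv2.solvePnP(lidar_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
print("Rotation LiDAR->camera:\n", R)
print("Translation LiDAR->camera:", tvec.ravel())
```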
Bonus Question:
What are the different sensors that can be used as "eyes" for a robot, and what makes each type
useful? Are there any drawbacks to consider? Think of it like choosing the ultimate pair of
high-tech shades for a robot – combining functionality with innovation. Provide a detailed
breakdown of these sensor types, their advantages and disadvantages, and how they can assist
in tasks such as object detection and depth estimation.
SLAM
Question 4: [4 marks]
A food delivery robot needs to move from Aroma Dhaba (origin) to Hostel 3 in a 1-dimensional
setting. The goal is to estimate the robot’s position accurately on the map despite noisy sensors
and ensure the parcel is delivered successfully.
System Description:
1. Start Position: The robot starts at position x=0 (Aroma Dhaba).
2. Movement: The robot moves along a straight line with a given speed command.
3. Sensor Detection: The robot’s position sensor accurately detects proximity to hostels
(Hostel 1, Hostel 2, Hostel 3).
Goal:
Estimate the robot’s position continuously and determine when it reaches Hostel 3.
i) Define the 1D map (you may suitably assume the locations of the hostels), which is also known to
the bot. (2)
ii) Define the robot’s state variables. (2)
Question 5: [11 marks]
i) How would you continuously estimate the robot's position as it moves along the 1D map under
the assumption that: (4)
● The speed command is perfect (no noise).
● The hostel detection sensor is perfect (no noise). This sensor will beep only 3 times in
the entire run, when it has EXACTLY reached a particular hostel.
Describe the approach or algorithm you would use to ensure accurate position estimation.
Now let’s be realistic: since the world we live in is not perfect, consider a noisy environment
where the robot’s sensors are prone to errors.
1. Movement Noise:
The speed command is noisy, leading to deviations in the robot's actual movement.
2. Sensor Noise:
This sensor will beep only 3 times in the entire run, when it THINKS* it has reached a
particular hostel.
*The sensor is noisy, so even if it beeps, you cannot be 100 per cent sure that
you are exactly at a hostel.
ii) Define the type and characteristics of the noise for the sensors. (2)
iii) a) Describe the motion model for the robot’s movement. (1)
b) Describe the measurement model for detecting a “Hostel” and its associated noise. (1)
iv) Propose a method to combine the noisy sensor readings to accurately estimate the robot’s
position. (3)
IF NEEDED, MAKE SUITABLE ASSUMPTIONS AND STATE THEM CLEARLY
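Purely as an illustration of what combining a noisy speed command with noisy beeps can look like (one common baseline, not the only acceptable answer), here is a minimal 1D Kalman-filter sketch; the hostel positions, noise levels and speed are hypothetical, and both noise sources are assumed Gaussian:

```python
import numpy as np

# Hypothetical 1D map (metres from Aroma Dhaba), assumed known to the robot.
HOSTELS = {"H1": 50.0, "H2": 120.0, "H3": 200.0}

dt = 1.0          # time step (s)
q = 0.5 ** 2      # process (movement) noise variance
r = 2.0 ** 2      # measurement (beep position) noise variance

x, p = 0.0, 1.0   # position estimate and its variance

def predict(x, p, v_cmd):
    """Predict step: dead-reckon with the (noisy) speed command."""
    return x + v_cmd * dt, p + q

def update(x, p, z):
    """Update step: fuse a beep, interpreted as 'position = assumed hostel location'."""
    k = p / (p + r)                    # Kalman gain
    return x + k * (z - x), (1 - k) * p

# Example run: constant 2 m/s command, then a beep believed to be Hostel 1.
for _ in range(30):
    x, p = predict(x, p, v_cmd=2.0)
x, p = update(x, p, z=HOSTELS["H1"])
print(round(x, 1), round(p, 2))        # estimate pulled back towards 50 m
```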
Question 6: [10 marks]
Scenario:
Imagine you’re at a large library during exam season, where the shelves are packed with
identical-looking books organized by subject. Your task now is to return the book in your hand to
its exact spot on the shelf.
Challenges arise because:
● Similar Appearance: The books and shelves look almost identical, with many books having
similar sizes and colours.
● Stationary Environment: While the books and shelves don’t move, your initial point of view
when taking the book may not match your perspective when returning it.
● Disorientation: After exploring other shelves, you might forget the exact location of the
book’s spot, especially in a crowded section.
To find the correct spot for the book, you use strategies like:
1. Proximity Awareness: Checking nearby shelves based on where you think the book was
originally located.
2. Label Matching: Carefully examining labels on the shelves or books to verify whether a spot
matches the one where the book belongs.
3. Reference Points: Using fixed library features (e.g., aisle numbers or end-of-row markers) to
orient yourself and narrow down the possible locations.
Despite these strategies, you might place the book in the wrong spot if labels are missing,
obscured, or confusingly similar.
Task:
i) Reflecting on the scenario above, identify what analogous problem this represents in SLAM.
Explain how the challenges described in the library scenario are mirrored in SLAM. (3)
ii) Using the strategies mentioned (proximity awareness, label matching, reference points), explain
how they could inspire approaches to solve this problem in SLAM. Do you think all of these
strategies can be directly applied, or would some need to be adapted or replaced entirely for the
context of SLAM? Provide reasons for your answer. (7)
HINT: Think along the lines of how visual data affects the mapping process
Path Planning and Controls
Question 7. [10 marks]
Say you are given discrete points you need to follow: (4, 3), (1, 7), (3.5, 8), (8, 5), (7, 4) and
(4, 3), but you need a continuous path. Now the problem is that you can't use an ordinary
function y = f(x), because a function assigns to each element of X exactly one element of Y, and
our path is supposed to be a closed loop. You should be able to generate a path like the one given
above.
You obtain discrete waypoints from a path-planning algorithm for an autonomous racing
car. However, during testing, you observe oscillations in the car's movement as it tracks
the waypoints. These oscillations are suspected to arise from the discretized nature of
the waypoints, which causes abrupt changes in steering and acceleration inputs. Suggest
solution(s) to the above problem, with justification and any other benefits. State any
assumptions made for the implementation.
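As a sketch of one possible representation (assuming a periodic parametric spline is an acceptable answer; SciPy's splprep/splev are used here purely for convenience), the closed loop through the given points can be expressed as x(u), y(u) and sampled densely:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Waypoints from the question; the first point is repeated to close the loop.
x = [4.0, 1.0, 3.5, 8.0, 7.0, 4.0]
y = [3.0, 7.0, 8.0, 5.0, 4.0, 3.0]

# Fit a periodic (closed) parametric cubic spline x(u), y(u). Because both x and
# y are functions of the parameter u, the "one output per input" limitation of
# an ordinary function y = f(x) no longer applies.
tck, u = splprep([x, y], s=0, per=True)

# Sample a dense, continuous path; smoother geometry also means gentler changes
# in heading, which helps against the steering/acceleration oscillations.
u_fine = np.linspace(0, 1, 200)
path_x, path_y = splev(u_fine, tck)
```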
Question 8. [8 marks]
You are working on an autonomous racecar that uses cone positions (blue cones on the
left and yellow cones on the right) to generate waypoints for navigation along a track.
However, during a sharp turn, the car loses visibility of the yellow cones on the right side
of the track. In this case, the car can only detect the blue cones on the left side of the
track and must still navigate the sharp turn using only the available blue cone positions.
Task:
Write a pseudo-code to compute the waypoints along the track using only the blue cone
coordinates when the yellow cone coordinates are no longer visible. You must ensure the
car can follow the track effectively around sharp turns.
Consider the following constraints and assumptions:
● The blue cones represent the left boundary of the track, and the yellow cones
represent the right boundary, forming the edges of the track.
● You must mention any assumptions about the track and make inferences from the
previously seen cone positions, which are separately available.
Requirements:
1. Using the available blue cones, compute the path that the car should follow by
generating waypoints that approximate the original path between the blue and
yellow cones.
2. Explain your approach: Discuss how you handle sharp turns and how your
algorithm adjusts to ensure the car maintains its position on the track.
3. The final output of your code/pseudo code should be a set of waypoints which the
car can follow for navigation on such a turn.
Question 9. [7 marks]
Consider a car on a racetrack that must follow a curved section of the track. The car’s
position on the curve is measured using sensors, and you always know the lateral error
e from the ideal racing line. The car experiences forces that push it away from the desired
line, such as tyre slip and inertia during turns.
a) How should you control the car’s steering angle to minimize lateral error e while
maintaining stability and preventing overshooting or oscillations? Discuss your
approach to tuning the response based on factors such as the car’s speed, track
curvature, and sensor feedback.
b) Additionally, explain how changes in the responsiveness of your steering control
could affect the car’s behaviour during sharp and mild turns. Consider external
disturbances like uneven track surfaces or sudden wind gusts. (Hint: Think of
systems that correct deviations, like a spring restoring force or a pendulum
returning to equilibrium)
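Purely to make the spring/pendulum hint concrete (an illustrative sketch with hypothetical gains, not the expected full answer): a proportional term acts like a spring's restoring force on the lateral error e, while a derivative term adds damping that limits overshoot and oscillation; in practice the gains would be tuned with speed, curvature and sensor noise in mind.

```python
def steering_command(e, e_prev, dt, kp=0.8, kd=0.3, max_steer=0.4):
    """PD-style steering correction from lateral error e (m).

    kp acts like a spring stiffness pulling the car back to the racing line;
    kd acts like damping, resisting rapid growth of the error and so limiting
    overshoot. max_steer (rad) models the physical steering limit.
    """
    de = (e - e_prev) / dt
    delta = -(kp * e + kd * de)          # steer against the error
    return max(-max_steer, min(max_steer, delta))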
Bonus Question:
What additional parameter do you think needs to be accounted for by the algorithm while
deciding the steering angle? Explain geometrically.
Autonomous Integration and Simulation Development
Question 10. [10 marks]
Since you have come so far into the paper, you must already know about the other 3 subsystems:
Perception, PPC and SLAM. The functioning of an autonomous racecar relies on a well-structured
data flow between all these subsystems. Imagine you are part of a team working on an
autonomous racecar system.
1. Suggest which sensors would be required by each of the subsystems and justify your
choices.
2. Draw a flow diagram representing the data flow in the autonomous system, showing the
relationship between the stages mentioned above. Use arrows to indicate data movement.
Your answer will be evaluated based on your understanding of the system’s structure, logical
reasoning, and creativity in presenting your thoughts.
Question 11. [5 marks]
Imagine you are assigned to find or build a simulator for testing driverless systems in Formula
Student race cars. Analyze and discuss the different types of simulators available. Some of the
simulators we use are FSDS, CarMaker, and EUFS*. The simulator should support Linux, as the
majority of our work is done on Ubuntu 20.04.
1. Read about simulators and discuss why they are important. Why do all autonomous
vehicle corporations spend so much time developing such heavy software when they
already have prototypes and real vehicles to test their software stacks?
2. Compare and contrast hardware-in-the-loop (HIL) and software-in-the-loop (SIL). How do
they contribute to the testing process, and what are their specific advantages and
limitations? Suggest and explain a scenario where both are being used for testing.
* You are encouraged to look more into these simulators:
EUFS - https://gitlab.com/eufs/eufs_sim
CarMaker -
https://www.youtube.com/watch?v=XB3Yk-qraV4&ab_channel=IPGAutomotiveGmbH
FSDS - https://fs-driverless.github.io/Formula-Student-Driverless-Simulator/v2.2.0/
Question 12. [10 marks]
You are tasked with designing and building a do-it-yourself (DIY) driverless bot with a budget
constraint of ₹1 lakh. The bot must include the essential sensors, motors, tools, wires, and boards
for autonomous functionality. Your objective is to create a functional prototype that showcases the
principles of autonomous navigation within the given financial limit. Every problem you solved in
the previous subsystems will contribute to this one.
1. Choose and justify the selection of sensors, protocols and other tools critical for
autonomous navigation. Mention the reasons why you chose not to include any specific
component you found being used in a driverless vehicle.
2. Discuss the limitations and downsides of your DIY driverless bot compared to a
fully-fledged driverless race car. Highlight the trade-offs made due to budget constraints
and suggest potential improvements for a more advanced implementation. Why do you
think this kind of problem might be useful to us?
Bonus Question:
You are one of the lucky few who were able to go to the UK for the FSUK competition. You have
written the complete code on your laptop and it works flawlessly; all the simulation tests run
smoothly. Now you try to deploy the code on the main computer (Jetson) of the car. You try to build
the code and it doesn't even compile properly there. What are the possible steps you are going to
take to debug the error? The only thing you have is the error output provided by the
compiler. Mention all the possible types of reasons, hardware or software, that it could be.
Finally, GitHub is a very important part of being a software engineer, so if you have not made
an account, please consider making one. :)
Non-technical Questions (Compulsory)
1. How did you come to know about the team and why are you interested in joining us? How
will Autonomous Vehicles (AVs) impact transportation jobs like taxi drivers and truckers?
Can we re-skill and transition these workers into new roles?
2. Do you follow Formula 1? If yes, who is your favourite Formula-1 driver and constructor?
3. In unavoidable crash scenarios, who or what does the AV prioritize? Should it minimize
harm to itself, passengers, or pedestrians?
4. We all watch movies and TV shows, so who is your favourite actress/actor? :P
(Please answer carefully, this is a make-it-or-break-it kind of question.) We caught you
copying the questions.
Here is an easter egg https://youtu.be/xvFZjo5PgG0?si=rwiHNSH_KBD4HIXp