
Software for Embedded Systems

9. Threads:
Creation, Lifecycle, Mutual
exclusion, Semaphores, and
Thread priorities
1/11
9. Threads: Creation, Lifecycle, Mutual exclusion, Semaphores,
and Thread priorities

What is a Thread?

 A thread exists within a process.
 A process is a running program.
 In a multithreading environment, threads share the same memory space, file
descriptors, and other system resources.
 However, each thread has its own call stack (for storing local variables),
which it uses to execute shared code and to call and return from functions in
that code.

2/11
9. Threads: Creation, Lifecycle, Mutual exclusion, Semaphores,
and Thread priorities

Thread Creation

 Each thread has a thread ID that identifies it.
 Upon creation, a thread executes a thread function. This function returns a
value of type void * and takes a single argument of type void *.
 The thread exits when the function returns.
 The pthread_create function is used to create a thread. It takes the
following arguments (see the sketch below):
a. A pointer to a pthread_t variable, in which the thread ID is stored.
b. A pointer to a thread attribute object.
c. A pointer to the thread function.
d. A thread argument value of type void *.
 Another function, pthread_join, can be used to ensure that the creating
thread waits for the created thread to finish.
3/11
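A minimal sketch of creating and joining a thread with the Pthreads API; the
thread function, its argument, and the message text are illustrative only.
Compile with gcc -pthread.

#include <pthread.h>
#include <stdio.h>

/* Illustrative thread function: receives its argument as void *,
   prints it, and returns a void * result. */
static void *print_message(void *arg)
{
    const char *msg = (const char *)arg;
    printf("thread says: %s\n", msg);
    return NULL;
}

int main(void)
{
    pthread_t tid;                 /* a. the thread ID is stored here        */
    const char *arg = "hello";     /* d. argument passed to the function     */

    /* b. NULL means default attributes; c. the thread function. */
    if (pthread_create(&tid, NULL, print_message, (void *)arg) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }

    /* The creating thread waits here until the created thread finishes. */
    pthread_join(tid, NULL);
    return 0;
}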
9. Threads: Creation, Lifecycle, Mutual exclusion, Semaphores,
and Thread priorities
Thread Life Cycle

1. READY: The thread resides in the queue waiting for execution. This is the
phase of life of a newly created thread that has yet to run.
2. RUNNING: The thread is executing on the CPU at that instant; it has been
allotted running time by the scheduler.
3. WAITING: The thread is not executing because it is waiting for a signaled
condition. When the condition is fulfilled, the scheduler moves the thread
back toward the running state.
4. SLEEP: A thread is in this state when it calls a function with a timeout
parameter; it remains asleep until the timeout expires or a notification is
received.
5. BLOCKED: The thread is unable to run until some external event occurs. This
happens when the thread tries to enter a critical section of code that is
locked. When the lock is released, the scheduler unblocks only one of the
blocked threads.
6. TERMINATED/DEAD: A thread terminates for one of the following reasons:
a. Its thread function exits normally, or
b. An erroneous event occurred, such as a segmentation fault or an unhandled
exception.
4/11
9. Threads: Creation, Lifecycle, Mutual exclusion, Semaphores,
and Thread priorities
SYNCHRONIZATION
 A concurrent program running multiple threads is prone to bugs because it is
difficult to predict the order in which the scheduler runs the threads.
 The ultimate source of bugs is that these threads may be accessing the same
resources, creating what is called a race condition.
 A race condition occurs when threads race one another to change the same data
structure.
 To eliminate this, the operations the threads perform on shared data should be
made uninterruptible: once an operation starts, it is not paused or interrupted
until it completes, and no other operation on that data takes place meanwhile.
A sketch of the problem follows this slide.
 In short, each thread should be able to complete its operation on the shared
data structure before another thread touches it.
 This is where mutexes, monitor classes, and semaphores come in.

5/11
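A minimal sketch of a race condition, assuming two threads increment a shared
counter with no locking; the counter name and iteration count are illustrative:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;   /* shared data, no protection */

/* counter++ is a read-modify-write sequence, so updates from the
   two threads can interleave and some increments are lost. */
static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 2000000, but the printed value is usually smaller. */
    printf("counter = %ld\n", counter);
    return 0;
}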
9. Threads: Creation, Lifecycle, Mutual exclusion, Semaphores,
and Thread priorities

Mutexes (MUTual EXclusion locks):

 A mutex is a special lock that only one thread may hold at a time.
 If a thread locks a mutex and a second thread tries to lock the same mutex,
the second thread is blocked (put on hold) and is unblocked only when the
first thread unlocks the mutex.
 Linux guarantees that race conditions do not occur among threads attempting
to lock a mutex: only one thread obtains the lock at a time, while the others
are blocked.
 To create a mutex, a variable of type pthread_mutex_t is declared and then
initialized either with the pthread_mutex_init function or with the
PTHREAD_MUTEX_INITIALIZER macro (see the sketch below).

6/11
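A minimal sketch that protects the shared counter from the earlier
race-condition example with a mutex; the variable names are illustrative:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

/* Only one thread at a time can execute the section between lock
   and unlock, so increments are no longer lost. */
static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&counter_lock);
        counter++;
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* now reliably 2000000 */
    return 0;
}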
9. Threads: Creation, Lifecycle, Mutual exclusion, Semaphores,
and Thread priorities
Monitor Classes

 A monitor is a combination of a mutex object and one or more condition
variables.
 A condition variable enables the implementation of more complex conditions
under which threads execute.
 It lets us express a condition under which a thread runs and a condition
under which the thread is blocked.
 Basically, the mutex lock is acquired first, and the condition variable is
waited on or signaled afterward (see the sketch below).

7/11
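A minimal sketch of a mutex combined with a condition variable, assuming one
thread waits for a flag that another thread sets; the flag and function names
are illustrative:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int data_ready = 0;   /* the condition the waiting thread checks */

static void *waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    /* Wait in a loop: pthread_cond_wait releases the mutex while
       blocked and re-acquires it before returning. */
    while (!data_ready)
        pthread_cond_wait(&ready, &lock);
    printf("waiter: data is ready\n");
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *producer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    data_ready = 1;                /* change the condition...          */
    pthread_cond_signal(&ready);   /* ...then signal one waiting thread */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t w, p;
    pthread_create(&w, NULL, waiter, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(w, NULL);
    pthread_join(p, NULL);
    return 0;
}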
9. Threads: Creation, Lifecycle, Mutual exclusion, Semaphores,
and Thread priorities
Semaphores
 A semaphore is a non-negative integer counter that can be used to synchronize
multiple threads and their access to shared resources.
 Linux prevents race conditions on the counter itself, so threads can safely
modify the value of a semaphore.
 A semaphore supports two basic operations:
a. Wait: decrements the value of the semaphore by 1. If the value is already
0, a thread attempting to decrement it is blocked until the value is
positive again.
b. Post: increments the semaphore by 1. If the value was previously zero,
one of the threads blocked in a wait operation is unblocked; the rest
remain blocked.
 POSIX semaphores are provided by the <semaphore.h> header.
 To wait on a semaphore, use sem_wait; to post to a semaphore, use sem_post
(see the sketch below).
 You can also get the current value of a semaphore with sem_getvalue.
8/11
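A minimal sketch using an unnamed POSIX semaphore as a counting gate that lets
at most two worker threads proceed at once; the initial value, thread count,
and names are illustrative:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t slots;   /* counts how many threads may proceed at once */

static void *worker(void *arg)
{
    long id = (long)arg;
    sem_wait(&slots);                 /* decrement; blocks while the value is 0 */
    printf("worker %ld inside\n", id);
    sem_post(&slots);                 /* increment; may unblock a waiting thread */
    return NULL;
}

int main(void)
{
    pthread_t t[4];

    /* Second argument 0 = shared between threads of this process;
       third argument 2 = initial value of the counter. */
    sem_init(&slots, 0, 2);

    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);

    sem_destroy(&slots);
    return 0;
}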
9. Threads: Creation, Lifecycle, Mutual exclusion, Semaphores,
and Thread priorities
Thread Priorities
 Apart from synchronization as a means of controlling thread operations,
another form of control is through thread priorities.
 Thread priorities control how often a thread gets the CPU, whereas
synchronization determines the order in which threads access shared resources.
 The scheduler is the component that decides which runnable thread will be
executed by the CPU next.
 Each thread has an associated scheduling policy and a static scheduling
priority, sched_priority, that is used by the scheduler.
 A thread can inherit its scheduling attributes from the creating thread.
 The real-time policies are SCHED_FIFO and SCHED_RR; they use sched_priority
values in the range 1 (low) to 99 (high).

9/11
9. Threads: Creation, Lifecycle, Mutual exclusion, Semaphores,
and Thread priorities
Thread Priorities Contd.

 The scheduler maintains a list of runnable threads for each possible
sched_priority value and selects the thread at the head of the non-empty list
with the highest static priority.
 A thread's scheduling policy determines where it is inserted into the list of
threads with equal sched_priority and how it moves within that list.
 In short, if a thread with a higher static priority becomes ready to run, the
currently running thread is preempted and returned to the wait list for its
static priority level.
 SCHED_FIFO is a simple scheduling algorithm without time slicing, while
SCHED_RR allows each thread to run only for a maximum time quantum (see the
sketch below).

10/11
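A minimal sketch of creating a thread with an explicit real-time policy and
priority through a thread attribute object; the chosen policy, the priority
value 50, and the thread function are illustrative, and on Linux this normally
requires root privileges or the CAP_SYS_NICE capability:

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *rt_work(void *arg)
{
    (void)arg;
    printf("running under a real-time policy\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param param = { .sched_priority = 50 };  /* 1 (low) .. 99 (high) */

    pthread_attr_init(&attr);
    /* Do not inherit the creator's scheduling attributes; use the ones set here. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);   /* or SCHED_RR */
    pthread_attr_setschedparam(&attr, &param);

    int err = pthread_create(&tid, &attr, rt_work, NULL);
    if (err != 0) {
        /* Typically EPERM when run without sufficient privileges. */
        fprintf(stderr, "pthread_create failed: %d\n", err);
        return 1;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}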
9. Threads: Creation, Lifecycle, Mutual exclusion, Semaphores,
and Thread priorities

END

Thank you for your attention!

11/11
