Thread scheduling is the mechanism that decides which thread should execute or be granted a resource in the operating system. Scheduling of threads involves two boundaries of scheduling:
- Scheduling of user-level threads (ULT) to kernel-level threads (KLT) via lightweight processes (LWP), controlled by the application developer.
- Scheduling of kernel-level threads by the system scheduler to perform different OS functions.
Lightweight Process (LWP)
Lightweight processes are threads in the user space that act as an interface for the ULTs to access the physical CPU resources. The thread library schedules which thread of a process runs on which LWP, and for how long. The number of LWPs created by the thread library depends on the type of application. In the case of an I/O-bound application, the number of LWPs depends on the number of user-level threads: when an LWP is blocked on an I/O operation, the thread library must create and schedule another LWP to run the other ULTs. Thus, in an I/O-bound application, the number of LWPs equals the number of ULTs. In the case of a CPU-bound application, it depends only on the application. Each LWP is attached to a separate kernel-level thread.

In practice, the first boundary of thread scheduling goes beyond specifying the scheduling policy and the priority. It requires two controls to be specified for user-level threads: contention scope and allocation domain. These are explained below.
Contention Scope
The word contention here refers to the competition among user-level threads for access to the kernel resources. This control defines the extent to which that contention takes place, and it is set by the application developer using the thread library.
Depending upon the extent of contention, it is classified as follows:
- Process Contention Scope (PCS):
Contention takes place among threads within the same process. The thread library schedules the highest-priority PCS thread to access the resources via the available LWPs (priority as specified by the application developer during thread creation).
- System Contention Scope (SCS):
Contention takes place among all threads in the system. In this case, every SCS thread is associated with a separate LWP by the thread library, and these LWPs are scheduled by the system scheduler to access the kernel resources.
In Linux and UNIX operating systems, the POSIX Pthreads library provides the function pthread_attr_setscope to define the type of contention scope for a thread during its creation.
int pthread_attr_setscope(pthread_attr_t *attr, int scope);
The first parameter is a pointer to the attribute object of the thread whose contention scope is being defined.
The second parameter defines the contention scope for that thread. It takes one of two values:
PTHREAD_SCOPE_SYSTEM
PTHREAD_SCOPE_PROCESS
If the scope value specified is not supported by the system, then the function returns ENOTSUP.
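A minimal sketch of its use, assuming a POSIX system (Linux, for example, supports only PTHREAD_SCOPE_SYSTEM, so requesting process scope there returns ENOTSUP):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *worker(void *arg) {
    (void)arg;              /* thread body omitted for brevity */
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;
    int rc;

    pthread_attr_init(&attr);

    /* Request system contention scope (SCS): the new thread competes
       with every thread in the system and gets its own LWP/KLT. */
    rc = pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    if (rc == ENOTSUP)
        fprintf(stderr, "system scope not supported\n");

    /* PTHREAD_SCOPE_PROCESS would request process contention scope
       (PCS) instead; Linux, for example, rejects it with ENOTSUP. */

    rc = pthread_create(&tid, &attr, worker, NULL);
    if (rc == 0)
        pthread_join(tid, NULL);
    else
        fprintf(stderr, "pthread_create: %s\n", strerror(rc));

    pthread_attr_destroy(&attr);
    return 0;
}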
Allocation Domain
The allocation domain is a set of one or more resources for which a thread competes. In a multicore system, there may be one or more allocation domains, each consisting of one or more cores. One ULT can be part of one or more allocation domains. Because of the high complexity of dealing with hardware and software architectural interfaces, this control is usually not specified explicitly; by default, the multicore system provides an interface that determines the allocation domain of a thread.
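Mainstream kernels do not expose allocation domains as a directly settable control, but CPU affinity on Linux gives a comparable effect. The sketch below uses pthread_setaffinity_np, a GNU extension rather than portable POSIX; the core numbers are chosen purely for illustration:

#define _GNU_SOURCE             /* for pthread_setaffinity_np */
#include <pthread.h>
#include <sched.h>

/* Restrict the calling thread to cores 0 and 1, i.e. make a
   two-core set the only "domain" the scheduler may run it on. */
int restrict_to_two_cores(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    CPU_SET(1, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}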
Consider a scenario: an operating system with three processes P1, P2, and P3, and ten user-level threads (T1 to T10), all within a single allocation domain. 100% of the CPU resources will be distributed among the three processes. The amount of CPU resources allocated to each process and to each thread depends on the contention scope, the scheduling policy, and the priority of each thread as defined by the application developer using the thread library, and also on the system scheduler. These user-level threads have different contention scopes: P1 holds PCS threads T1, T2, and T3; P2 holds SCS threads T4 and T5; P3 holds PCS threads T6 and T7 along with SCS threads T8, T9, and T10.

In this case, contention for the allocation domain takes place as follows:
Process P1
All PCS threads T1, T2, and T3 of process P1 compete among themselves. PCS threads of the same process can share one or more LWPs: here T1 and T2 share an LWP, while T3 is allocated a separate LWP. Between T1 and T2, the allocation of kernel resources via the LWP is based on preemptive priority scheduling by the thread library; a thread with high priority preempts low-priority threads. However, thread T1 of process P1 cannot preempt a thread of another process (say, PCS thread T6 of process P3), even if the priority of T1 is greater. If the priorities are equal, then the allocation of ULTs to the available LWPs is based on the threads' scheduling policy, applied by the system scheduler (not by the thread library, in this case).
Process P2
Both SCS threads T4 and T5 of process P2 compete with process P1 as a whole and with the SCS threads T8, T9, and T10 of process P3. The system scheduler schedules the kernel resources among P1, T4, T5, T8, T9, T10, and the PCS threads (T6, T7) of process P3, considering each as a separate process. Here, the thread library has no control over scheduling the ULTs onto the kernel resources.
Process P3
Process P3 holds a combination of PCS and SCS threads. Suppose the system scheduler allocates 50% of the CPU resources to process P3; then 25% of the resources go to the process-scoped threads and the remaining 25% to the system-scoped threads. The PCS threads T6 and T7 are granted access to their 25% of the resources based on priority, by the thread library. The SCS threads T8, T9, and T10 divide the other 25% among themselves and access the kernel resources via separate LWPs and KLTs; their scheduling is done by the system scheduler.
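To make the P3 mix concrete, the sketch below creates two process-scoped and three system-scoped threads with the Pthreads API shown earlier. It assumes an implementation that supports both scopes; on Linux, for example, the PTHREAD_SCOPE_PROCESS requests would fail with ENOTSUP and the threads would fall back to the default scope.

#include <pthread.h>

static void *task(void *arg) {
    (void)arg;              /* thread work omitted for brevity */
    return NULL;
}

int main(void) {
    pthread_attr_t pcs_attr, scs_attr;
    pthread_t t[5];

    pthread_attr_init(&pcs_attr);
    pthread_attr_setscope(&pcs_attr, PTHREAD_SCOPE_PROCESS);
    pthread_attr_init(&scs_attr);
    pthread_attr_setscope(&scs_attr, PTHREAD_SCOPE_SYSTEM);

    /* T6, T7: process-scoped, scheduled by the thread library. */
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], &pcs_attr, task, NULL);
    /* T8, T9, T10: system-scoped, each on its own LWP/KLT. */
    for (int i = 2; i < 5; i++)
        pthread_create(&t[i], &scs_attr, task, NULL);

    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);

    pthread_attr_destroy(&pcs_attr);
    pthread_attr_destroy(&scs_attr);
    return 0;
}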
Note:
For every system call to access the kernel resources, a kernel-level thread is created
and associated with a separate LWP by the system scheduler.
Number of Kernel Level Threads = Total Number of LWP
Total Number of LWP = Number of LWP for SCS + Number of LWP for PCS
Number of LWP for SCS = Number of SCS threads
Number of LWP for PCS = Depends on application developer
Here,
Number of SCS threads = 5
Number of LWP for SCS = 5
Number of LWP for PCS = 3
Total Number of LWP = 8 (= 5 + 3)
Number of Kernel Level Threads = 8
Advantages of PCS over SCS
- If all threads are PCS, then context switching, synchronization, and scheduling all take place within the user space. This reduces system calls and achieves better performance.
- PCS is cheaper than SCS.
- PCS threads share one or more of the available LWPs, whereas every SCS thread is associated with a separate LWP, and for every system call a separate KLT is created.
- The number of KLTs and LWPs created depends heavily on the number of SCS threads, which increases the kernel's complexity in handling scheduling and synchronization. This results in a practical limit on SCS thread creation: the number of SCS threads should be kept smaller than the number of PCS threads.
- If the system has more than one allocation domain, scheduling and synchronization of resources become more tedious. Issues arise when an SCS thread is part of more than one allocation domain, since the system then has to handle multiple interfaces.
The second boundary of thread scheduling involves CPU scheduling by the system scheduler. The scheduler considers each kernel-level thread as a separate process and provides access to the kernel resources.
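As a sketch of how an application hands a thread to this second boundary, assuming a POSIX system and sufficient privileges (for example, CAP_SYS_NICE on Linux for real-time policies), a system-scoped thread can be given an explicit kernel scheduling policy and priority, which the system scheduler then enforces. The SCHED_FIFO policy and the priority value 10 are illustrative choices, not prescribed above:

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *rt_worker(void *arg) {
    (void)arg;              /* real-time work omitted for brevity */
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    struct sched_param param;
    pthread_t tid;
    int rc;

    pthread_attr_init(&attr);
    /* System scope: let the kernel's scheduler manage this thread. */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    /* Use the attributes below instead of inheriting the creator's. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    param.sched_priority = 10;  /* illustrative priority value */
    pthread_attr_setschedparam(&attr, &param);

    rc = pthread_create(&tid, &attr, rt_worker, NULL);
    if (rc == 0)
        pthread_join(tid, NULL);
    else
        fprintf(stderr, "pthread_create: %s\n", strerror(rc));

    pthread_attr_destroy(&attr);
    return 0;
}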