!!!Chapter 5 Process Scheduling

5.1 Basic Concepts

5.1.1 CPU–I/O Burst Cycle

The success of CPU scheduling depends on an observed property of processes: process execution consists of a cycle of CPU execution and I/O wait.

5.1.2 CPU Scheduler

When the CPU becomes idle, the short-term scheduler must select one of the processes in the ready queue to be executed.

The ready queue is not necessarily a FIFO queue.

All processes in the ready queue are lined up waiting for a chance to run on the CPU. The records in the queues are generally process control blocks of the processes.

5.1.3 Preemptive Scheduling

CPU-scheduling decisions take place under the following circumstances:

1. When a process switches from the running state to the waiting state.

2. When a process switches from the running state to the ready state.

3. When a process switches from the waiting state to the ready state.

4. When a process terminates.

For situations 1 and 4, there is no choice in terms of scheduling: a new process must be selected for execution.

When scheduling takes place only under situations 1 and 4, the scheduling scheme is nonpreemptive (or cooperative); otherwise, it is preemptive.

Preemptive scheduling incurs a cost associated with access to shared data.

5.1.4 Dispatcher (分派程序)

The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves:

1. Switching context

2. Switching to user mode

3. Jumping to the proper location in the user program to restart the program

The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.

5.2 Scheduling Criteria

In choosing which CPU-scheduling algorithm to use, we must consider certain criteria:

CPU utilization

Throughput: the number of processes that are completed per time unit

Turnaround time: the interval from the time of submission of a process to the time of its completion

Waiting time: the total amount of time that a process spends waiting in the ready queue

Response time: the time from the submission of a request until the first response is produced. This is the time it takes to start responding, not the time it takes to output the response. (A small sketch relating these metrics is given after this list.)
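As a quick illustration of how these criteria relate to one another, here is a minimal sketch that computes turnaround, waiting, and response time for a single process. The timestamps and field names are made up for illustration:

```c
#include <stdio.h>

/* Illustrative per-process timing record; field names are hypothetical. */
struct proc_times {
    int arrival;     /* time the process was submitted */
    int first_run;   /* time it first got the CPU      */
    int completion;  /* time it finished               */
    int cpu_burst;   /* total CPU time it actually used */
};

int main(void) {
    struct proc_times p = { .arrival = 0, .first_run = 3,
                            .completion = 12, .cpu_burst = 5 };

    int turnaround = p.completion - p.arrival;   /* submission -> completion          */
    int waiting    = turnaround - p.cpu_burst;   /* time in the ready queue, assuming
                                                    the process never blocks for I/O  */
    int response   = p.first_run - p.arrival;    /* submission -> first response      */

    printf("turnaround=%d waiting=%d response=%d\n", turnaround, waiting, response);
    return 0;
}
```

For this hypothetical process, turnaround = 12, waiting = 7, and response = 3.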

5.3 Scheduling Algorithms

5.3.1 First-Come, First-Served Scheduling

The first-come, first-served (FCFS) scheduling algorithm is the simplest CPU-scheduling algorithm.

- the average waiting time under the FCFS policy is often quite long.

FCFS is nonpreemptive.
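As a minimal sketch of why FCFS waiting times can be long, the code below computes the average waiting time for three hypothetical processes that all arrive at time 0 (burst lengths are made up): a long job arriving first makes every later job wait behind it (the convoy effect).

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical next-CPU-burst lengths, in arrival order; all arrive at t = 0. */
    int burst[] = { 24, 3, 3 };
    int n = sizeof burst / sizeof burst[0];

    int clock = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += clock;   /* process i waits for everything ahead of it to finish */
        clock += burst[i];     /* then runs to completion (FCFS is nonpreemptive)      */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);  /* 17.00 */
    return 0;
}
```

Serving the same bursts shortest-first (3, 3, 24) would give (0 + 3 + 6)/3 = 3 instead, which is the point of the next algorithm.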

5.3.2 Shortest-Job-First Scheduling

The shortest-job-first (SJF) scheduling algorithm associates with each process the length of the process's next CPU burst. The process that has the smallest next CPU burst is executed first. If two processes have the same next CPU burst length, FCFS scheduling is used to break the tie.

SJF is used frequently in long-term scheduling, where users are responsible for estimating the process time limit.

+ it gives the minimum average waiting time for a given set of processes.

- short-term scheduling cannot know the length of the next CPU burst => we can use an exponential average of the measured lengths of previous CPU bursts to predict it (see the formula below).
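The exponential average mentioned above is usually written as follows, where t_n is the measured length of the nth CPU burst, τ_n is the previous prediction, and α (between 0 and 1, commonly 1/2) controls how much weight recent history gets:

$$\tau_{n+1} = \alpha\, t_n + (1 - \alpha)\, \tau_n$$

With α = 0 recent history has no effect (τ_{n+1} = τ_n); with α = 1 only the most recent burst counts.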

SJF can be preemptive or nonpreemptive.

Preemptive SJF is also called shortest-remaining-time-first scheduling.

5.3.3 Priority Scheduling

The priority scheduling algorithm associates a priority with each process. Equal-priority processes are scheduled in FCFS order. SJF is a special case of priority scheduling (the priority is the inverse of the predicted next CPU burst).

Priority scheduling can be either preemptive or nonpreemptive.

- indefinite blocking (starvation): a low-priority process may wait forever. => aging: gradually increase the priority of processes that wait in the system for a long time (see the sketch below).
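A minimal sketch of aging, assuming a PCB with a numeric priority field (lower number = higher priority) and a tick counter; the threshold and field names are illustrative, not from the text:

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical PCB fields, used only for this sketch. */
struct pcb {
    int priority;       /* lower number = higher priority (an assumption) */
    int waiting_ticks;  /* time spent in the ready queue since last run   */
};

#define AGING_THRESHOLD 15  /* illustrative: bump after 15 ticks of waiting */
#define MIN_PRIORITY     0  /* numerically smallest = highest priority      */

/* Called periodically over the ready queue: any process that has waited
 * too long has its priority improved one step, so it cannot starve forever. */
void age_ready_queue(struct pcb *ready[], size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (ready[i]->waiting_ticks >= AGING_THRESHOLD &&
            ready[i]->priority > MIN_PRIORITY) {
            ready[i]->priority--;        /* move one step toward the highest priority */
            ready[i]->waiting_ticks = 0; /* restart its aging clock                   */
        }
    }
}

int main(void) {
    struct pcb a = { .priority = 7, .waiting_ticks = 20 };
    struct pcb *ready[] = { &a };
    age_ready_queue(ready, 1);
    printf("new priority = %d\n", a.priority);   /* 6 */
    return 0;
}
```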

5.3.4 Round-Robin Scheduling

The round-robin(RR) scheduling algorithm is designed especially for time-sharing systems.

A small unit of time, called a time quantum, is defined. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to one time quantum. New processes are added to the tail of the FIFO ready queue.

The RR scheduling algorithm is preemptive.

If the time quantum is very large, RR degenerates to FCFS. If the time quantum is very small, RR approaches processor sharing.

Gantt chart: a bar chart that illustrates a particular schedule, including the start and finish times of each of the participating processes.
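The sketch below ties the two ideas together: it simulates RR with a quantum of 4 over three hypothetical processes that all arrive at time 0 (bursts 24, 3, 3), prints each dispatched slice in Gantt-chart order, and reports the average waiting time (about 5.67 here, versus 17 for FCFS on the same workload).

```c
#include <stdio.h>

int main(void) {
    int burst[]  = { 24, 3, 3 };   /* hypothetical bursts, all arriving at t = 0 */
    int remain[] = { 24, 3, 3 };
    int finish[3] = { 0 };
    int n = 3, quantum = 4, clock = 0, left = n;

    /* Simple index rotation; with simultaneous arrivals this is equivalent
     * to cycling through a FIFO ready queue. */
    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remain[i] == 0) continue;
            int run = remain[i] < quantum ? remain[i] : quantum;
            printf("| P%d %d-%d ", i + 1, clock, clock + run);   /* one Gantt slice */
            clock += run;
            remain[i] -= run;
            if (remain[i] == 0) { finish[i] = clock; left--; }
        }
    }
    printf("|\n");

    int total_wait = 0;
    for (int i = 0; i < n; i++)
        total_wait += finish[i] - burst[i];   /* waiting = turnaround - burst (arrival 0) */
    printf("average waiting time = %.2f\n", (double)total_wait / n);   /* 5.67 */
    return 0;
}
```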

5.3.5 Multilevel Queue Scheduling

A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. Each queue has its own scheduling algorithm.

To schedule among the queues, we can:

1. use fixed-priority preemptive scheduling: no process in a lower-priority queue runs while a higher-priority queue is non-empty (see the sketch after this list).

2. time-slice among the queues: for example, a higher-priority queue gets 80 percent of the CPU time while a lower-priority queue gets 20 percent.
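A minimal sketch of option 1, assuming an array of queues ordered from highest to lowest priority, each with its own scheduling algorithm behind a pick function pointer; the structure is invented for illustration:

```c
#include <stddef.h>

struct process;   /* opaque PCB type; its details don't matter for the sketch */

struct queue {
    int empty;                                 /* nonzero if the queue is empty */
    struct process *(*pick)(struct queue *);   /* that queue's own scheduler    */
};

/* Option 1: fixed-priority preemptive selection among queues.
 * queues[0] is the highest-priority queue; a lower queue is consulted only
 * when every queue above it is empty. */
struct process *select_next(struct queue queues[], size_t nqueues) {
    for (size_t i = 0; i < nqueues; i++)
        if (!queues[i].empty)
            return queues[i].pick(&queues[i]);
    return NULL;   /* nothing runnable anywhere */
}
```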

5.3.6 Multilevel Feedback Queue Scheduling

The multilevel feedback queue scheduling algorithm allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it is moved to a lower-priority queue.

The scheme leaves I/O-bound and interactive processes in the higher-priority queue.

The multilevel feedback queue scheduler is the most general CPU-scheduling algorithm (p. 199).
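A minimal sketch of the feedback rule, with invented parameters (three levels with quanta of 8, 16, and 32 ms): a process that consumes its whole quantum is demoted one level, while one that gives up the CPU early (typically to wait for I/O) keeps its level, which is what keeps I/O-bound and interactive processes near the top.

```c
/* Hypothetical MLFQ demotion rule; the number of levels and the quanta are invented. */
enum { NUM_LEVELS = 3 };                                  /* level 0 = highest priority */
static const int quantum_ms[NUM_LEVELS] = { 8, 16, 32 };  /* per-level time quanta      */

/* Called when a process leaves the CPU at `level`; returns the queue level
 * it should be placed in next. */
int next_level(int level, int used_full_quantum) {
    if (used_full_quantum && level < NUM_LEVELS - 1)
        return level + 1;   /* looks CPU-bound: demote to a lower-priority queue    */
    return level;           /* gave up the CPU early (e.g. for I/O): keep its level */
}
```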

5.4 Thread Scheduling

On operating systems that support user-level and kernel-level threads, it is kernel-level threads --- not processes --- that are being scheduled by the operating system.

User-level threads are managed by a thread library; the kernel is unaware of them. To run on a CPU, a user-level thread must ultimately be mapped to an associated kernel-level thread.

5.4.1 Contention Scope

One distinction between user-level and kernel-level threads lies in how they are scheduled.

On systems implementing the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP, a scheme known as process-contention scope (PCS), since competition for the CPU takes place among threads belonging to the same process.

To decide which kernel thread to schedule onto a CPU, the kernel uses system-contention scope (SCS). Competition for the CPU with SCS scheduling takes place among all threads in the system.

Systems using the one-to-one model schedule threads using only SCS.

Typically, PCS is done according to priority --- the scheduler selects the runnable thread with the highest priority to run.
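With POSIX Pthreads, the desired contention scope can be set on a thread-attributes object before creating a thread: PTHREAD_SCOPE_PROCESS requests PCS and PTHREAD_SCOPE_SYSTEM requests SCS (some systems, such as Linux, accept only the latter). A minimal sketch:

```c
#include <pthread.h>
#include <stdio.h>

static void *runner(void *arg) {
    (void)arg;
    printf("hello from an SCS-scheduled thread\n");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;
    int scope;

    pthread_attr_init(&attr);

    /* Report the default contention scope for this system. */
    if (pthread_attr_getscope(&attr, &scope) == 0)
        printf("default scope: %s\n",
               scope == PTHREAD_SCOPE_SYSTEM ? "SCS (system)" : "PCS (process)");

    /* Request SCS; asking for PTHREAD_SCOPE_PROCESS instead may fail with
     * ENOTSUP on systems (such as Linux) that support only SCS. */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
        fprintf(stderr, "could not set contention scope\n");

    pthread_create(&tid, &attr, runner, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```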
