
HANOI UNIVERSITY OF SCIENCE AND TECHNOLOGY

SCHOOL OF ELECTRICAL AND ELECTRONIC ENGINEERING

PROJECT REPORT

Advanced Scheduler
Reporting Memory Usage
NGUYỄN BÁ THÀNH 20224291
BÙI MINH NGỌC 20224328

Course: Operating Systems


Course Code: ET4291E

Supervisor: Dr. Pham Van Tien

Hanoi, 12/2024
PREFACE

In today’s world, computers and electronic devices are advancing at an incredible
speed, becoming an essential part of our daily lives.
Modern computers are designed to be user-friendly, making it easy for anyone to
learn how to use them. This is largely due to the familiar and intuitive operating systems
like Windows or Mac OS.
To deepen our understanding of how an operating system functions and to support
our learning in the Operating Systems course taught by Dr. Pham Van Tien, our group
chose to conduct research on this topic through Pintos. Our focus is specifically on
exploring the Advanced Scheduler.
TABLE OF CONTENTS

DUTY ROSTER 7

CHAPTER 1. PINTOS AND OVERVIEW OF SCHEDULING ALGORITHMS 1


1.1 PINTOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 What is Pintos? . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Pintos Project Structure . . . . . . . . . . . . . . . . . . . . . . 1
1.1.3 Pintos Execution Environment . . . . . . . . . . . . . . . . . . 1
1.1.4 Pintos Source Code Directory Tree . . . . . . . . . . . . . . . 1
1.1.5 Role of the Threads Project . . . . . . . . . . . . . . . . . . . . 2
1.2 OVERVIEW OF SCHEDULING ALGORITHMS . . . . . . . . . . . . 3
1.2.1 Round Robin Algorithm . . . . . . . . . . . . . . . . . . . . . 3
1.2.2 Priority Scheduling Algorithm . . . . . . . . . . . . . . . . . . 5
1.2.3 Multi-Level Feedback Queue (MLFQ) . . . . . . . . . . . . . . 10

CHAPTER 2. Analysis and Design 15


2.1 ALARM CLOCK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.1 Data Structure and Functionality . . . . . . . . . . . . . . . . . 15
2.1.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.3 Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.4 Design Rationale . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.5 Implementation Steps . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.6 Advantages of This Design . . . . . . . . . . . . . . . . . . . . 18
2.1.7 Limitations and Potential Improvements . . . . . . . . . . . . . 18
2.1.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2 Priority Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.1 Data structures and functions . . . . . . . . . . . . . . . . . . . 19
2.2.2 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.3 Advanced Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.1 Data Structures and Functions . . . . . . . . . . . . . . . . . . 25
2.3.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3.3 Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . 26

CHAPTER 3. Programming Process 27


3.1 Timer Sleep Implementation . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Priority Scheduler Implementation . . . . . . . . . . . . . . . . . . . . 29
3.2.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2.2 Relevant Files . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2.3 Detailed Explanation of Each Line of Code . . . . . . . . . . . 29
3.2.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.3 Multi-level Feedback Queue Scheduler (MLFQS) . . . . . . . . . . . . 34
3.3.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.3.2 Relevant Files . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.3.3 Detailed Explanation of Each Line of Code . . . . . . . . . . . 34

CHAPTER 4. RESULTS AND TESTING 38


4.1 Executing the predefined Tests of PINTOS . . . . . . . . . . . . . . . . 38
4.1.1 Priority Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.1.2 MLFQS (Multilevel Feedback Queue Scheduler) Tests . . . . . 39
4.2 Self-created Test . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
DUTY ROSTER

No. Job Name


1 Chapter 1: Pintos Nguyễn Bá Thành and Bùi Minh Ngọc
2 Chapter 2: Alarm Clock Bùi Minh Ngọc
3 Chapter 2: Priority Scheduling - MLFQS Nguyễn Bá Thành
4 Chapter 3: Timer Sleep - Priority Scheduling Bùi Minh Ngọc
5 Chapter 3: Priority Scheduling - MLFQS Nguyễn Bá Thành
6 Chapter 4: Pintos Tests Bùi Minh Ngọc
7 Chapter 4: Self-created Test Nguyễn Bá Thành

CHAPTER 1. PINTOS AND OVERVIEW OF SCHEDULING
ALGORITHMS

1.1 PINTOS
1.1.1 What is Pintos?
Pintos is a simple operating system framework designed for educational purposes,
supporting the 80x86 architecture. It provides basic features such as thread management,
loading and executing user programs, and a file system. However, these mechanisms are
implemented at a basic level to give students the opportunity to explore and enhance the
functionality, optimizing the operating system.
Pintos is commonly used in operating systems courses at universities as a plat-
form for practicing core concepts. It runs on hardware emulators like QEMU or Bochs,
facilitating testing and debugging without direct interaction with physical hardware.

1.1.2 Pintos Project Structure


Stanford has designed four projects about Pintos to tackle core operating system
issues:

• Project 1: Threads - Thread management and scheduling mechanisms.

• Project 2: User Programs - Support for running user programs.

• Project 3: Virtual Memory - Implementing virtual memory.

• Project 4: File Systems - Enhancing the file system.

These projects help improve Pintos’ performance and increase its ability to effi-
ciently utilize limited resources such as processing time, memory, and energy.

1.1.3 Pintos Execution Environment


Pintos is implemented and tested in an emulated environment simulating an 80x86
microprocessor architecture along with peripheral devices. Popular emulators include
Bochs and QEMU. In this report, QEMU is used as the main emulation tool.

1.1.4 Pintos Source Code Directory Tree


The Pintos source code structure is organized as follows:

• threads/: Contains the Pintos kernel, including the bootloader, basic interrupt
handling, memory allocation, and CPU scheduling. Most of the code related to the
Threads project is located in this directory.

• userprog/: Manages user programs, including the page table, page fault handling,
and program loader. This directory is primarily used in the User Programs project.

• filesys/: The Pintos file system, to be improved in the File Systems project.

• devices/: Source code for interfacing with peripheral devices such as keyboards,
timers, and disks.

• lib/: The standard C library used both in the Pintos kernel and user programs

• lib/kernel/: A library used exclusively in the kernel, providing data structures


like linked lists and hash tables.

• lib/user/: A library dedicated to user programs.

• tests/:Test cases for each project.

• examples/: Examples of user programs that can run on Pintos.

• misc/, utils/: Auxiliary files for running Pintos.

1.1.5 Role of the Threads Project


Threads is the first project in the Pintos project series, focusing on extending the
thread management system. The main objectives include:

• Implementing an efficient thread scheduling mechanism.

• Enhancing multitasking capabilities by optimizing the timer and priority schedul-


ing.

• Addressing synchronization issues, such as priority inversion.

The Threads project lays a critical foundation for building advanced features in
subsequent projects. An optimized thread system ensures that the operating system runs
smoothly, efficiently, and reliably.

1.2 OVERVIEW OF SCHEDULING ALGORITHMS
1.2.1 Round Robin Algorithm
Basic Concept

Round Robin is a widely used scheduling algorithm in operating systems
and computer networks. It is a preemptive scheduling algorithm: a running process is
interrupted when its time quantum expires, even if it has not yet finished.
The key feature of Round Robin is that it fairly shares CPU resources (or other resources)
among all waiting processes. Each process is allocated a fixed amount of CPU time,
called a time quantum or time slice.

Principle of Operation

The Round Robin algorithm operates as follows:

Figure 1.1 Round Robin Scheduling

1. Queue: Processes waiting for CPU time are placed in a queue in the order they
arrive (first-in, first-out - FIFO).

2. Select: The scheduler selects the process at the front of the queue to allocate CPU
time.

3. Allocate CPU: The process runs for a time equal to the time quantum.

4. End of quantum: Once the time quantum ends, the process is preempted and
moved to the end of the queue.

5. Repeat: The scheduler repeats the steps above until all processes are completed.

Scheduler: The component of the operating system responsible for selecting a process to
run on the CPU. In Round Robin, it selects the first process in the queue.

1.2.1.1 Main Components

• Queue:

– A place where processes waiting for CPU time are stored.
– Typically implemented using a FIFO data structure.

• Time Quantum:

– The fixed amount of time a process is allowed to run on the CPU.
– Its size strongly affects the performance of the algorithm.
– The unit is usually milliseconds (ms).

1.2.1.2 Advantages of Round Robin

• Fairness: Ensures all processes get CPU time fairly, with no starvation.

• Easy to implement: The Round Robin algorithm is relatively simple to implement


and does not require complex logic.

• Suitable for interactive systems: Ideal for applications that require quick responses,
such as graphical user interface (GUI) applications or real-time systems.

1.2.1.3 Disadvantages of Round Robin

• Impact of time quantum:

– If the time quantum is too small, the number of context switches increases,
causing high overhead and reducing overall performance.
– If the time quantum is too large, Round Robin may behave similarly to FCFS
(First-Come, First-Served), causing long waiting times for short processes.

• No priority: It does not prioritize more important processes.

• Inefficient with long processes: If there are many long processes, the waiting time
can still be large.

1.2.1.4 Example Illustration

Suppose we have 3 processes:

P1 : Execution time 24ms

P2 : Execution time 3ms

P3 : Execution time 3ms

Time quantum is 4ms.

Time (ms)   Running Process   Queue     Notes
0           P1                P2, P3    P1 starts running
4           P2                P3, P1    P1 ran 4ms, 20ms left
7           P3                P1        P2 completes (3ms)
10          P1                          P3 completes (3ms)
14          P1                          P1 ran 4ms, 16ms left
18          P1                          P1 ran 4ms, 12ms left
22          P1                          P1 ran 4ms, 8ms left
26          P1                          P1 ran 4ms, 4ms left
30          P1                          P1 completes

Table 1.1 Example illustrating the Round Robin algorithm

In this example, we see that the processes are executed in turn, each in quantum-sized slices.
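
To make the rotation concrete, here is a minimal, self-contained C sketch (illustrative only, not Pintos code) that replays the schedule of Table 1.1; the array stands in for the FIFO queue, and advancing the index models preemption at each quantum:

#include <stdio.h>

#define QUANTUM 4   /* time quantum in ms, as in the example */

struct proc { const char *name; int remaining; };

int
main (void)
{
  struct proc rr[] = { {"P1", 24}, {"P2", 3}, {"P3", 3} };
  int n = 3, time = 0, done = 0, i = 0;

  while (done < n)
    {
      struct proc *p = &rr[i % n];
      if (p->remaining > 0)
        {
          int slice = p->remaining < QUANTUM ? p->remaining : QUANTUM;
          printf ("t=%2d: %s runs for %dms\n", time, p->name, slice);
          time += slice;
          p->remaining -= slice;
          if (p->remaining == 0)
            done++;            /* the process completes and leaves the queue */
        }
      i++;                     /* quantum expired or process finished: rotate */
    }
  printf ("All processes finish at t=%dms\n", time);
  return 0;
}

Running it prints the same sequence of dispatch decisions as the table, finishing at t=30ms.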

1.2.1.5 Conclusion

Round Robin is a simple but effective scheduling algorithm, especially in systems
requiring fairness and quick response. The choice of an appropriate time quantum is
crucial to achieving optimal performance. Despite some drawbacks, Round Robin
remains one of the most widely used scheduling algorithms.

1.2.2 Priority Scheduling Algorithm


1.2.2.1 Basic Concept

Priority Scheduling is a scheduling algorithm in operating systems where each process
is assigned a priority level. The scheduler selects the process with the highest priority
(or lowest, depending on the definition) to run on the CPU.

• Preemptive: A running process can be interrupted if a new process with a higher
priority arrives.

• Non-Preemptive: A running process will continue until it completes or voluntarily
yields the CPU.

Figure 1.2 Flowchart of Non-Preemptive Scheduling

1.2.2.2 Non-Preemptive Priority Scheduling

In the non-preemptive case, once a process has been allocated the CPU, it continues to
run until it completes or voluntarily yields the CPU, regardless of whether there are
higher-priority processes waiting.

1. Assign Priority: Each process is given a static or dynamic priority level.

2. Ready Queue: Processes waiting to execute are placed in the ready queue.

3. Process Selection: The scheduler selects the process with the highest priority.

4. Execution: The selected process runs until completion or voluntarily yields the
CPU.

5. Completion or Yield: A process will be removed from the system when completed
or returned to the queue after yielding the CPU.

6. Repeat: The process repeats from step 3.

In summary, in the non-preemptive mode, a running process will not be interrupted by
other processes, even if they have higher priority, until it completes or yields the CPU.

1.2.2.3 Preemptive Priority Scheduling

In the preemptive case, a running process can be interrupted by a new process with a
higher priority.

Figure 1.3 Flowchart of Preemptive Scheduling

1. Assign Priority: Each process is assigned a static or dynamic priority level.

2. Ready Queue: Processes are placed in the ready queue.

3. Initial Selection: The scheduler initially selects the process with the highest prior-
ity from the ready queue.

4. Execution: The selected process starts executing on the CPU.

5. Check Priority When a New Process Arrives: When a new process arrives, the
scheduler compares its priority with that of the currently running process.

• If the new process has a higher priority: The running process is interrupted
and moved back to the ready queue, and the new process is allocated the CPU.
• If the new process has equal or lower priority: It is placed in the ready queue
without interrupting the running process.

6. Completion or Yield: When a process completes or yields the CPU, the scheduler
selects the highest-priority process from the ready queue.

7. Repeat: The scheduler continues to monitor new arriving processes, check their
priorities, and preempt when necessary.

In summary, in the preemptive mode, a running process can be interrupted by a
higher-priority process, ensuring that more important processes get immediate access to
the CPU.

1.2.2.4 Comparison Between Preemptive and Non-Preemptive Priority Scheduling

• Preemption: Non-preemptive does not allow preemption; preemptive allows it.

• Suitability: Non-preemptive is suitable for short, low-priority processes; preemptive
is ideal for real-time systems requiring fast responses.

• Complexity: Non-preemptive is simpler; preemptive is more complex due to context
switching.

• Starvation: Starvation is more likely with non-preemptive; less likely with
preemptive, but still possible.

• Management Cost: Non-preemptive has lower management costs, while preemptive
has higher management costs due to context switching.

1.2.2.5 Key Components

• Priority:

– A numerical value representing the importance of a process.
– Can be static or dynamic.

• Queue: Contains processes waiting to be allocated CPU time, which may be
sorted by priority.

• Scheduler: Selects the process with the highest priority to run and handles
preemption events if necessary.

1.2.2.6 Types of Priority

• Static Priority:

– The priority is assigned when the process is created and does not change.
– Simple, but may lead to starvation.

• Dynamic Priority:

– The priority can change during execution.
– More flexible; helps solve starvation issues.
– Factors affecting priority changes: waiting time, CPU time used, resources
required.

1.2.2.7 Advantages

• Prioritizes important processes.

• Flexible for various types of applications.

• Easy to adjust priority levels.

1.2.2.8 Disadvantages

• Starvation issue.

• Difficulty in assigning appropriate priority levels.

• Priority inversion.

• Overhead in managing priorities.

1.2.2.9 Solutions to Starvation Issues

• Aging: Gradually increase the priority of long-waiting processes.

• Dynamic Priority Adjustment: Change priority based on waiting time.

• Priority Inheritance: Temporarily increase the priority of lower-priority processes.

• Priority Ceiling: Assign a maximum priority level for resources.

1.2.2.10 Example Illustration

Assume there are 5 processes P1, P2, P3, P4, P5 with execution times and priority levels
as follows (a lower priority number means a higher priority):

Process   Execution Time   Priority
P1        10               3
P2        1                1
P3        2                4
P4        1                5
P5        5                2

Table 1.2 Priority Scheduling Process Information

Non-Preemptive:

P2 (highest priority) is selected and executed for 1 time unit.

P5 is selected and executed for 5 time units.

P1 is selected and executed for 10 time units.

P3 is selected and executed for 2 time units.

P4 is selected and executed for 1 time unit.
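
Assuming all five processes arrive at time 0, this order gives waiting times of 0, 1, 6, 16, and 18 time units for P2, P5, P1, P3, and P4 respectively, for an average waiting time of (0 + 1 + 6 + 16 + 18) / 5 = 8.2 time units.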

Preemptive

P2 is selected and executed for 1 time unit.

P5 is selected and executed for 1 time unit.

P5 continues execution (it has a higher priority than the remaining processes).

After that, P1, P3, and P4 are selected in sequence.

1.2.2.11 Conclusion

Priority Scheduling is a powerful algorithm that allows prioritization of important
processes. However, issues like starvation and priority inversion must be addressed to
ensure the system operates efficiently.

1.2.3 Multi-Level Feedback Queue (MLFQ)


1.2.3.1 Basic Concept

Multi-Level Feedback Queue (MLFQ) is a flexible and complex scheduling algorithm
designed to improve system performance by combining multiple queues and feedback
mechanisms. This algorithm is widely used in many modern operating systems.
The key feature of MLFQ is the use of multiple priority queues, each with its own
scheduling mechanism. Processes can move between queues based on their behavior,
such as CPU usage.

1.2.3.2 Operational Principles

MLFQ operates based on the following principles:

Figure 1.4 Multilevel Feedback Queue Scheduling (MLFQ)

1. Multiple Queues: The system has multiple queues, each with a different priority
level.

2. Scheduling Mechanism in Each Queue: Each queue uses its own scheduling
algorithm, typically Round Robin (RR), but First-Come-First-Served (FCFS) or other
algorithms can also be used.

3. Movement Between Queues:

• Processes that use too much CPU or exceed their time quantum will be moved
to a lower-priority queue.
• Processes that have been waiting for a long time without being allocated CPU
time may be promoted to a higher-priority queue.

4. Fairness: The feedback mechanism helps prevent starvation for processes in
lower-priority queues.

5. Initialization: When a new process arrives, it is typically placed in the
highest-priority queue.

1.2.3.3 Key Components

• Queues:

– Multiple queues with different priority levels.
– Sorted from the highest to the lowest priority queue.
– Each queue has its own time quantum.

• Scheduling Algorithm:

– The scheduling algorithm applied within each queue; Round Robin (RR) is the
most common due to its fairness.

• Feedback Mechanism:

– Defines how processes move between queues.
– Based on their behavior, such as CPU usage or waiting time.

• Time Quantum:

– The amount of time a process is allowed to run continuously on the CPU.
– The time quantum may differ between queues.

1.2.3.4 Principles of Movement Between Queues

Movement between queues is a critical element of MLFQ. Common principles include
the following (sketched in code after this list):

• New processes: Always placed in the highest-priority queue.

• Time quantum expiration: A process that exhausts its time quantum in one queue
is moved to a lower-priority queue.

• Voluntary CPU release: If a process voluntarily releases the CPU (e.g., to perform
I/O), it may be returned to the appropriate queue after completing the I/O.

• Long waiting: A process that has been waiting too long in a lower-priority queue
may be promoted to a higher-priority queue.
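
The movement rules can be made concrete with a minimal C sketch (illustrative assumptions only, not Pintos code; the quanta follow Table 1.3 below, and AGING_LIMIT is a hypothetical tuning parameter):

#include <stdio.h>

#define NQUEUES 3
#define AGING_LIMIT 100        /* hypothetical wait bound (ms) before promotion */

static const int quantum[NQUEUES] = { 4, 8, 16 };   /* Q1, Q2, Q3 quanta (ms) */

struct task { int level; int ran_in_slice; int waited_ms; };

/* Called when a task's slice ends: demote CPU hogs, keep the level on a
   voluntary (I/O) yield. */
static void
on_slice_end (struct task *t, int voluntary_yield)
{
  if (!voluntary_yield && t->ran_in_slice >= quantum[t->level]
      && t->level < NQUEUES - 1)
    t->level++;                /* exhausted its quantum: move down one queue */
  t->ran_in_slice = 0;
}

/* Called periodically for every waiting task: promote long waiters. */
static void
on_wait_tick (struct task *t, int elapsed_ms)
{
  t->waited_ms += elapsed_ms;
  if (t->waited_ms >= AGING_LIMIT && t->level > 0)
    {
      t->level--;              /* aging: move up to prevent starvation */
      t->waited_ms = 0;
    }
}

int
main (void)
{
  struct task cpu_hog = { 0, 4, 0 };           /* used its full 4ms quantum */
  on_slice_end (&cpu_hog, 0);
  printf ("CPU hog demoted to Q%d\n", cpu_hog.level + 1);   /* prints Q2 */

  on_wait_tick (&cpu_hog, 100);                /* waited long enough */
  printf ("After aging, back in Q%d\n", cpu_hog.level + 1); /* prints Q1 */
  return 0;
}

Here the demotion path models the time-quantum-expiration rule, keeping the level on a voluntary yield models the CPU-release rule, and the aging path models the long-waiting rule.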

1.2.3.5 Advantages of MLFQ

• Adapts to many types of processes: Efficiently handles both I/O-bound (I/O-heavy)
and CPU-bound (CPU-intensive) processes.

• Fairness: The feedback mechanism prevents starvation.

• Low latency: Short or interactive processes can be prioritized to reduce latency.

• Flexibility: The number of queues, time quanta, and movement mechanisms can
be customized to fit the system.

• Good responsiveness: Short processes tend to complete quickly.

1.2.3.6 Disadvantages of MLFQ

• Complexity: Difficult to implement and configure.

• Hard to determine optimal parameters: The number of queues, time quanta,
and movement mechanisms need to be carefully tuned.

• Overhead: Managing multiple queues may introduce overhead for the system.

• Potential for starvation: In some cases, processes may be starved if the configu-
ration is not optimal.

1.2.3.7 Example Illustration

Assume the system uses 3 priority queues, described in the table below:

Queue   Priority Level   Time Quantum (ms)
Q1      Highest          4
Q2      Medium           8
Q3      Lowest           16

Table 1.3 Information about queues in the MLFQ system.

The process execution is as follows:

• Process P1 (CPU-bound):

– P1 is placed in queue Q1 and starts running.
– P1 runs for 4ms (the quantum of Q1) and does not complete its task. It is moved
to Q2.
– In Q2, P1 runs for 8ms (the quantum of Q2). If still not finished, it is moved
to Q3.

• Process P2 (I/O-bound):

– P2 is placed in queue Q1 and starts running.
– P2 runs for 2ms and then performs I/O. It voluntarily releases the CPU and is
not moved to a lower queue.
– After completing I/O, P2 returns to Q1.

After the process handling, the results are as follows:

• I/O-bound processes like P2 are handled quickly in higher-priority queues, ensuring
quick response times.

• CPU-bound processes like P1 are moved to lower-priority queues when using too
much CPU, creating fairness in the system.

This example illustrates how MLFQ prioritizes short or quick-response processes
while effectively handling resource-intensive processes without affecting overall system
performance.

1.2.3.8 Conclusion

MLFQ is a powerful scheduling algorithm that balances performance and fairness in
multitasking systems. Although complex, it is highly flexible and adaptable to various
types of processes. Proper configuration and tuning are essential for achieving the best
performance.

CHAPTER 2. Analysis and Design

2.1 ALARM CLOCK


2.1.1 Data Structure and Functionality
In thread.h, add two new attributes to represent the number of ticks remaining
before waking up a thread:

• sleepelem: List element for sleeping threads.

• remaining_time_to_wake_up: Remaining ticks before waking up, initialized to 0.

struct thread
{
    ...
    struct list_elem sleepelem;
    int64_t remaining_time_to_wake_up;
};

2.1.2 Algorithm
The main limitation of the original implementation is that all threads (sleeping and
active) share a single storage space, as shown in Figure 2.1. 2.1.

Figure 2.1 Shared storage space for all threads.

Instead of keeping sleeping threads alongside the ready_list, a separate list called
sleeping_list is organized to store the threads that are sleeping, as illustrated in
Figure 2.2.

The algorithm runs as follows:

1. Set remaining_time_to_wake_up to the specified sleep duration (in ticks) and
add the current thread to the sleeping_list.

2. Call the thread_block() function to block the current thread (putting it to sleep).
At this point, another thread from the top of the ready_list replaces the current
thread.

3. On each tick, the timer device sends an interrupt, which calls the timer_interrupt()
function. In this handler:


Figure 2.2 Separate storage for sleeping threads.

• Decrement the remaining_time_to_wake_up value of every thread in the
sleeping_list by 1.

• If the remaining_time_to_wake_up value of any thread reaches 0, wake it up
and move it to the ready_list.

2.1.3 Synchronization
1. How do we avoid race conditions when multiple threads call timer_sleep()
simultaneously?
As long as timer_sleep() is not called in an interrupt context, multiple threads
cannot execute this function concurrently, so operations on the sleeping_list
are safe.

2. How do we avoid race conditions with timer interrupts while timer_sleep() is
running?
By disabling interrupts for the duration of the thread's operations, the function
effectively achieves atomicity:

enum intr_level old_level = intr_disable ();


...
intr_set_level (old_level);

2.1.4 Design Rationale
Using all_list instead of sleeping_list was considered because all living
threads naturally reside in all_list, eliminating the need for a separate sleeping_list.
However, this approach is not always safe, as demonstrated by the thread_foreach()
function:

/* Iterates through all threads and applies 'func', passing 'aux' as an argument.
   Must be called with interrupts disabled. */
void thread_foreach (thread_action_func *func, void *aux);

Additionally, in practice, the all_list may contain a large number of threads.
Iterating through this entire list is a waste of time if the majority of threads are
not sleeping.

2.1.5 Implementation Steps


To implement an efficient alarm clock as described, follow these steps:

1. Modify the thread structure: Add the sleepelem and remaining_time_to_wake_up
fields to the thread structure in thread.h.

2. Create a sleeping list: Initialize a global sleeping_list in the timer or thread
module to store sleeping threads.

3. Update the timer_sleep() function: Modify timer_sleep() to:

• Set the current thread's remaining_time_to_wake_up.
• Add the current thread to sleeping_list.
• Call the thread_block() function to block the current thread.

4. Update the timer interrupt handler: In timer_interrupt(), iterate through
sleeping_list, decrement each thread's remaining_time_to_wake_up, and wake
threads whose value reaches 0.

5. Synchronization: Ensure all operations on sleeping_list are safe by disabling
interrupts during related actions.

6. Test the implementation: Test with various threads and sleep durations to ensure
accuracy and efficiency under different conditions.

2.1.6 Advantages of This Design
• Clear separation: Using a separate sleeping_list helps manage ready and sleeping
threads independently, reducing complexity and potential errors.

• Increased performance: Iterating through a smaller sleeping_list is more efficient
than iterating through the entire all_list, especially when few threads are sleeping.

• Atomic operations: Disabling interrupts during critical actions avoids race
conditions and ensures accuracy.

• Scalability: This design effectively handles large numbers of threads by keeping
the sleeping_list compact and updating only relevant threads during each timer
interrupt.

2.1.7 Limitations and Potential Improvements


• Interrupt overhead: Frequent timer interrupts may introduce processing overhead,
particularly in systems with many threads. This can be mitigated by reducing the
interrupt frequency or batching operations.

• Priority handling: Currently, all sleeping threads are treated equally. Adding
priority handling during wake-up could optimize real-time scenarios.

• Alternative data structures: Using a priority queue for the sleeping_list (sorted by
remaining_time_to_wake_up) could eliminate the need to iterate through the entire
list on each interrupt.

• Edge case testing: Scenarios like rapid context switching or a very large number
of threads need to be tested to ensure stability.

2.1.8 Conclusion
This design improves the efficiency of the alarm clock by isolating sleeping threads
into a separate list and ensuring atomic operations. With proper synchronization and
thorough testing, this approach enhances efficiency and maintainability in high-concurrency
systems.

2.2 Priority Scheduling


Implementation of the priority scheduling mechanism in the Pintos operating sys-
tem. This mechanism allows threads to have different priority levels, ensuring that threads
with higher priorities are always executed first.

2.2.1 Data structures and functions
In thread.h:
Add three attributes to the thread structure:

• real_priority: stores the thread's actual priority while the priority attribute
is temporarily overridden by donation.

• locks_held: used to perform priority donation.

• current_lock: used for recursive (nested) donation.

The thread structure will look like this:

struct thread
{
    ...
    int real_priority;
    struct list locks_held;
    struct lock *current_lock;
};

In synch.h:
Add two attributes to the lock structure:

• elem: acts as the node of a linked list.

• max_priority: the highest priority among the threads waiting on the lock's
semaphore.

The lock structure will look like this:

struct lock
{
    ...
    struct list_elem elem;
    int max_priority;
};

In synch.c:
Add one attribute to the semaphore_elem structure:

• priority: the highest priority among the threads waiting on the semaphore.

The semaphore_elem structure will look like this:

struct semaphore_elem
{
    ...
    int priority;
};

Simulating Nested Priority Donation

This section traces how the data structures above track a series of nested priority
donations. We simulate the priority-donate-nest test with the following steps:

1. The initial thread creates two locks, a and b.

2. The initial thread acquires lock a.

3. A medium-priority thread with priority 32 is created. After calling thread_yield(),
since the medium thread has a higher priority than the initial thread, the CPU is
yielded. The medium thread then acquires lock b and tries to acquire lock a, but
since a is held by the initial thread, priority donation is performed to the initial
thread.

4. Returning to the initial thread, a high-priority thread with priority 33 is created.
After calling thread_yield(), since the high-priority thread has a higher priority
than the initial thread, the CPU is yielded. The high-priority thread tries to acquire
lock b but fails and is blocked. The medium and initial threads continue performing
priority donation.

5. Returning to the initial thread, it releases lock a, and medium is moved back to the
ready_list. The initial thread's priority returns to its actual priority. After calling
thread_yield(), medium becomes the next thread to run.

6. Returning to medium, it now holds lock a.

7. Medium releases lock a; after calling thread_yield(), the next thread to execute
is still medium. It then releases lock b, and medium's priority returns to its real
priority. After calling thread_yield(), high becomes the next thread to execute.

8. Returning to high, it now holds lock b.

9. High releases lock b, then high finishes. The next thread to execute is medium,
and after that medium finishes. Finally, the initial thread executes and finishes.

2.2.2 Algorithms
Choosing the next thread to run
The schedule() function calls next_thread_to_run() to get the next thread
from ready_list. The ready_list is always sorted in ascending order based on the
priority of the threads within it. When a thread is set to THREAD_READY, it is inserted
into the ready_list using the list_insert_ordered() function. The next thread to
execute will be found at the end of the ready_list.
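
The report does not list next_thread_to_run() itself; under the design just described (ascending sort, highest priority at the end), a minimal sketch of it would be:

static struct thread *
next_thread_to_run (void)
{
  if (list_empty (&ready_list))
    return idle_thread;
  /* The ready_list is sorted in ascending priority order, so the
     highest-priority thread sits at the back. */
  return list_entry (list_pop_back (&ready_list), struct thread, elem);
}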

Acquiring a Lock
When calling lock_acquire(),the first thing to do is perform priority donation.
Before calling sema_down(), you can check if the lock is held by checking the value of
the holder, If the holder is NULL the lock is not held.
If the priority of the lock holder is lower than the current thread’s, we update the
"donated" priority of the lock and the lock holder. If the lock holder is currently holding
other locks, the process continues recursively.
After calling sema_down(), the current thread will hold the lock. The lock is added
to the locks_held list of the thread and the priority of the lock and the current thread
is updated. If the next thread in the ready_list has a higher priority than the current
thread, we yield the CPU to the thread with the higher priority.

Releasing a Lock
When releasing a lock, we update the holder of the lock to NULL and remove
the lock from the locks_held list of the current thread. At the same time, we update
the thread’s priority. If this is the last lock the current thread holds, we restore its real
priority.
Then, we call sema_up() to release the semaphore and allow other threads to con-
tinue execution.

Computing the effective priority

While a thread holds a lock, its actual priority is not necessarily stored in priority,
because the "donated" priority may differ from the actual one. Whenever a lock is
released, priority is set back to the value of real_priority to reflect the
thread's real priority.
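
The recomputation just described can be sketched as follows; this is a minimal reconstruction based on the data structures of Section 2.2.1 (the helper name thread_update_priority matches the one referenced in Chapter 3, but this body is our sketch, not verbatim project code):

void
thread_update_priority (struct thread *t)
{
  /* Start from the thread's own priority, then accept the highest
     donation recorded on any lock it currently holds. */
  int new_priority = t->real_priority;
  struct list_elem *e;

  for (e = list_begin (&t->locks_held); e != list_end (&t->locks_held);
       e = list_next (e))
    {
      struct lock *l = list_entry (e, struct lock, elem);
      if (l->max_priority > new_priority)
        new_priority = l->max_priority;
    }
  t->priority = new_priority;
}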

Priority scheduling for semaphores and locks

To ensure that the highest-priority thread waiting for a lock or semaphore is released
first, we always use the list_max function when selecting the next thread from the
semaphore's waiters list (locks are semaphore-based). Additionally, we use the
comparison function compare_threads_by_priority() to ensure that the thread with
the highest priority is selected.

Priority scheduling for condition variables

Similarly to semaphores and locks, the waiters list of a condition variable is also
sorted based on the priority of the waiting threads.

Changing a thread's priority

For Pintos developers, the only way to change a thread's priority is through
thread_set_priority(). This function can only change the priority of the current
thread. The new priority is assigned to real_priority.
If the locks_held list of the current thread is empty, meaning there is no priority
donation, priority is set to the new value. If the locks_held list is not empty but
the new priority is higher than the thread's previous priority, priority is updated
as well.

2.3 Advanced Scheduler
2.3.1 Data Structures and Functions
1. In thread.h:

• Add two attributes to the thread structure:

– nice: Determines how "nice" the thread is in relation to other threads.
– recent_cpu: Measures the amount of CPU time the thread has recently used.

Update the thread structure:

struct thread {
    ...
    int nice;
    fixed_t recent_cpu;
};

2. In thread.c:

• Add a global variable load_avg, used to estimate the average number of threads
ready to run over the past minute:
static fixed_t load_avg;

2.3.2 Algorithm
Advanced Scheduler: The MLFQS scheduler does not include priority donation,
because the priority of a thread is calculated automatically by the operating system
rather than being set by the user.
System Load Average: The system load average, denoted load_avg, is a weighted
moving average of the number of threads in the ready_list plus the current thread
(excluding the idle_thread). When the system starts, load_avg is initialized to 0.
This value is updated every second using the following formula:

load_avg = (59/60) × load_avg + (1/60) × ready_threads
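
As a worked example with assumed values: if load_avg is currently 0.5 and 3 threads are ready (including the running one), the update gives load_avg = (59/60) × 0.5 + (1/60) × 3 ≈ 0.492 + 0.050 = 0.542.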
The recent_cpu and nice Variables: Each thread has two attributes:

• recent_cpu: A weighted moving average of the CPU time the thread has recently
received. On each timer interrupt, recent_cpu is incremented by 1 for the currently
running thread. Additionally, recent_cpu for all threads is recalculated every second
using the following formula:

recent_cpu = (2 × load_avg) / (2 × load_avg + 1) × recent_cpu + nice

• nice: A fixed value set by the user, ranging from -20 to 20, with the default value
being 0.

Priority Calculation: The priority of a thread is calculated as:

priority = PRI_MAX − (recent_cpu / 4) − (2 × nice)

Every 4 timer interrupts, the priority of the current thread is recalculated. If any
waiting thread has a higher priority, the thread_yield() function is called.

2.3.3 Synchronization
To avoid conflicts when accessing global variables like load_avg or thread-specific
variables:

• Disable interrupts when accessing or modifying these variables:

enum intr_level old_level = intr_disable ();
...
intr_set_level (old_level);

CHAPTER 3. Programming Process

3.1 Timer Sleep Implementation


Objectives
Allow threads to pause execution (sleep) for a specified period, measured in ticks.
Specifically, when a thread calls the function timer_sleep(int64_t ticks), the
system puts that thread to sleep and automatically wakes it up after the specified
number of ticks.

Related Files
thread.c, thread.h

Implemented Code
1. Add Data Structure in thread.h

struct thread
{
    // ... (other fields) ...

    /* Owned by thread.c. */
    struct list_elem sleepelem;          /* List element for sleeping threads. */
    int64_t remaining_time_to_wake_up;   /* Ticks remaining before waking up. */

    // ... (other fields) ...
};

• struct list_elem sleepelem;

– Acts as a "hook" to link the struct thread into the doubly linked list sleeping_list.

• int64_t remaining_time_to_wake_up;

– Stores the number of ticks left until the thread is awakened.

2. Declare sleeping_list in thread.c

/* List of sleeping processes. */
static struct list sleeping_list;

3. Initialize sleeping_list in thread_init()

void
thread_init (void)
{
  // ... (other initialization code) ...
  list_init (&sleeping_list);
  // ... (other initialization code) ...
}

4. Function thread_set_sleeping(int64_t ticks)

void
thread_set_sleeping (int64_t ticks)
{
  struct thread *cur = thread_current ();
  cur->remaining_time_to_wake_up = ticks;
  list_push_back (&sleeping_list, &cur->sleepelem);
  thread_block ();   /* requires interrupts to be disabled by the caller */
}

• Sets the sleep duration and adds the current thread to the sleeping_list.
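
For reference, a minimal sketch of how timer_sleep() in devices/timer.c could call this helper (the report does not list this glue code, so this is our assumption; the interrupt handling mirrors the synchronization strategy from Section 2.1.3):

void
timer_sleep (int64_t ticks)
{
  if (ticks <= 0)                  /* covers alarm-zero and alarm-negative */
    return;

  enum intr_level old_level = intr_disable ();   /* protect sleeping_list */
  thread_set_sleeping (ticks);                   /* enqueue and block */
  intr_set_level (old_level);
}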

5. Modify thread_tick()

void
thread_tick (void)
{
  struct list_elem *e = list_begin (&sleeping_list);
  struct list_elem *temp;

  while (e != list_end (&sleeping_list))
    {
      struct thread *t = list_entry (e, struct thread, sleepelem);
      temp = e;
      e = list_next (e);

      ASSERT (t->status == THREAD_BLOCKED);

      if (t->remaining_time_to_wake_up > 0)
        {
          t->remaining_time_to_wake_up--;
          if (t->remaining_time_to_wake_up <= 0)
            {
              thread_unblock (t);
              list_remove (temp);
            }
        }
    }
}

• Iterates through sleeping_list, decreases the value of remaining_time_to_wake_up.

• Wakes up threads with remaining_time_to_wake_up equal to 0.

Conclusion
The system is now capable of managing threads in a sleeping state and waking
them up after the specified time. Using the doubly linked list sleeping_list ensures
efficient and maintainable sleep state management.

3.2 Priority Scheduler Implementation


3.2.1 Objective
Implement a priority-based scheduler, ensuring that threads with higher priority
are executed preferentially. When multiple threads are ready, the thread with the high-
est priority will be chosen to run. Additionally, this section implements the Priority
Donation mechanism to address the Priority Inversion problem.

3.2.2 Relevant Files


thread.c, thread.h, synch.c, synch.h

3.2.3 Detailed Explanation of Each Line of Code


1. Data Structure Additions in thread.h

struct thread
{
    // ... (other fields) ...

    int priority;                 /* Current (effective) priority. */

    // ... (other fields) ...

    int real_priority;            /* Real priority while priority is donated. */
    struct list locks_held;       /* All locks held by this thread. */
    struct lock *current_lock;    /* The lock the thread is waiting for. */
};

• int priority;:

– Stores the current priority of the thread.
– Can be temporarily changed by the Priority Donation mechanism.

• int real_priority;:

– Stores the original priority of the thread.
– Not affected by Priority Donation.

• struct list locks_held;:

– A list of locks currently held by the thread.

• struct lock *current_lock;:

– Pointer to the lock that the thread is waiting for.

2. Data Structure Additions in synch.h

struct lock
{
    // ... (other fields) ...
    struct list_elem elem;        /* List element in LOCKS_HELD. */
    int max_priority;             /* Maximum priority among waiting threads. */
};

• struct list_elem elem;:

– Allows a struct lock to be linked into a list (e.g., locks_held).

• int max_priority;:

– Stores the highest priority among all threads waiting for the lock.

3. Modifications to thread_create()

tid_t
thread_create (const char *name, int priority,
               thread_func *function, void *aux)
{
  // ... (other code) ...
  init_thread (t, name, priority);
  // ... (other code) ...
  thread_unblock (t);
  try_thread_yield ();
  return tid;
}

• Calls init_thread() to initialize the new thread with its priority and name.

• Adds the new thread to the ready_list and yields the CPU if a higher-priority
thread is available (see the try_thread_yield() sketch below).
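
try_thread_yield() is referenced here and again in thread_set_nice() but is not listed in the report; a minimal sketch of what it presumably does is:

/* Yield the CPU only if some ready thread now outranks the running one.
   With ready_list sorted in ascending priority order, the strongest
   candidate sits at the back of the list. */
void
try_thread_yield (void)
{
  enum intr_level old_level = intr_disable ();
  bool should_yield =
    !list_empty (&ready_list)
    && list_entry (list_back (&ready_list), struct thread, elem)->priority
       > thread_current ()->priority;
  intr_set_level (old_level);

  if (should_yield)
    thread_yield ();
}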

4. init_thread() Function

static void
init_thread (struct thread *t, const char *name, int priority)
{
  t->priority = priority;
  t->real_priority = priority;
  t->current_lock = NULL;
  list_init (&t->locks_held);
}

• Initializes the thread’s priority, real priority, and lock bookkeeping.

5. Modifications to thread_unblock()

void
thread_unblock (struct thread *t)
{
  list_insert_ordered (&ready_list, &t->elem,
                       compare_threads_by_priority, NULL);
}

• Inserts the thread into ready_list, maintaining priority order.

6. compare_threads_by_priority() Function

bool
compare_threads_by_priority (const struct list_elem *a,
                             const struct list_elem *b,
                             void *aux UNUSED)
{
  return list_entry (a, struct thread, elem)->priority <=
         list_entry (b, struct thread, elem)->priority;
}

• Compares threads by priority, keeping the ready_list sorted in ascending order
so the highest-priority thread sits at the back.

7. Modifications to thread_yield()

void
thread_yield (void)
{
  struct thread *cur = thread_current ();
  // ... (other code) ...
  if (cur != idle_thread)
    list_insert_ordered (&ready_list, &cur->elem,
                         compare_threads_by_priority, NULL);
  // ... (other code) ...
}

• Ensures that ready_list remains sorted by priority when yielding.

8. Priority Donation in lock_acquire()

void
lock_acquire (struct lock *lock)
{
  if (lock->holder != NULL)
    {
      thread_current ()->current_lock = lock;
      int current_priority = thread_get_priority ();
      struct lock *temp_lock = lock;
      struct thread *temp_holder = lock->holder;

      /* Walk the chain of lock holders, donating the current thread's
         priority as long as it exceeds each lock's recorded max_priority. */
      while (temp_lock->max_priority < current_priority)
        {
          temp_lock->max_priority = current_priority;
          thread_update_priority (temp_holder);
          temp_lock = temp_holder->current_lock;
          if (temp_lock == NULL)
            break;
          else
            temp_holder = temp_lock->holder;
        }
    }
  sema_down (&lock->semaphore);
  list_push_back (&thread_current ()->locks_held, &lock->elem);
  lock->holder = thread_current ();
}

• Implements recursive Priority Donation by updating the max_priority of locks
and the priority of threads along the waiting chain.

• Updates each holder’s priority using thread_update_priority().

9. Priority Restoration in lock_release()

void
lock_release (struct lock *lock)
{
  list_remove (&lock->elem);                 /* drop from holder's locks_held */
  if (!thread_mlfqs)
    thread_update_priority (lock->holder);   /* recompute effective priority */
  lock->holder = NULL;
  sema_up (&lock->semaphore);
}

• Restores the thread’s priority after releasing the lock.

3.2.4 Conclusion
The implementation of a Priority Scheduler and Priority Donation ensures efficient
scheduling and addresses the Priority Inversion problem. By modifying data structures
and introducing new mechanisms, the system can prioritize threads effectively and
maintain proper thread synchronization.

3.3 Multi-level Feedback Queue Scheduler (MLFQS)


3.3.1 Objective
Implement the Multi-level Feedback Queue (MLFQS) scheduler, a more advanced
scheduling algorithm that combines priority scheduling and round-robin scheduling.
MLFQS automatically adjusts thread priorities based on past CPU usage behavior, pri-
oritizing I/O-bound processes and distributing fairly to CPU-bound processes.

3.3.2 Relevant Files


thread.c, thread.h, fixed_point.h

3.3.3 Detailed Explanation of Each Line of Code


1. Data Structure Additions in thread.h

struct thread
{
    // ... (other fields) ...

    int nice;               /* Determines how "nice" the thread should be
                               to other threads. */
    fixed_t recent_cpu;     /* Recently used CPU time. */
};

/* If false (default), use round-robin scheduler.
   If true, use multi-level feedback queue scheduler.
   Controlled by kernel command-line option "-o mlfqs". */
extern bool thread_mlfqs;

• int nice;

– Represents how "nice" a thread is to others, ranging from -20 to 20.
– Higher values mean the thread is more "nice" (lower priority).
– The default value is 0.

• fixed_t recent_cpu;

– Tracks recent CPU usage; updated periodically and decays over time.
– Used in priority calculations.

• extern bool thread_mlfqs;

– Enables or disables MLFQS mode based on the kernel command-line option


"-o mlfqs".

2. Fixed-point Arithmetic Definitions in fixed_point.h

/* Basic definitions of fixed point. */
typedef int fixed_t;
/* 15 LSB used for fractional part. */
#define FP_SHIFT_AMOUNT 15
/* Convert a value to a fixed-point value. */
#define FP_CONST(A) ((fixed_t) (A << FP_SHIFT_AMOUNT))
/* Add two fixed-point values. */
#define FP_ADD(A, B) (A + B)
/* Add a fixed-point value A and an int value B. */
#define FP_ADD_MIX(A, B) (A + (B << FP_SHIFT_AMOUNT))
/* Multiply two fixed-point values. */
#define FP_MULT(A, B) ((fixed_t) (((int64_t) A) * B >> FP_SHIFT_AMOUNT))
/* Multiply a fixed-point value A by an int value B (used below). */
#define FP_MULT_MIX(A, B) (A * B)
/* Divide two fixed-point values. */
#define FP_DIV(A, B) ((fixed_t) ((((int64_t) A) << FP_SHIFT_AMOUNT) / B))
/* Divide a fixed-point value A by an int value B (used below). */
#define FP_DIV_MIX(A, B) (A / B)
/* Get the rounded integer of a fixed-point value. */
#define FP_ROUND(A) (A >= 0 ? ((A + (1 << (FP_SHIFT_AMOUNT - 1))) >> FP_SHIFT_AMOUNT) \
                            : ((A - (1 << (FP_SHIFT_AMOUNT - 1))) >> FP_SHIFT_AMOUNT))

• Defines fixed_t as a 32-bit integer with 15 bits for the fractional part.

• Provides macros for fixed-point arithmetic operations:

– FP_CONST(A): Converts an integer to fixed-point format.
– FP_ADD(A,B), FP_MULT(A,B), FP_DIV(A,B): Perform addition, multiplication,
and division of fixed-point values; the _MIX variants combine a fixed-point
value with a plain integer.
– FP_ROUND(A): Rounds a fixed-point value to the nearest integer.

3. Global Variable load_avg Declaration in thread.c

/* The system load average. */
static fixed_t load_avg;

• Tracks the system load average, which reflects the number of threads ready to run.

• Initialized to 0 in thread_init().

4. thread_set_nice() Function

void
thread_set_nice (int nice)
{
  thread_current ()->nice = nice;
  thread_update_priority_mlfqs (thread_current ());
  try_thread_yield ();
}

• Updates the nice value of the current thread and recalculates its priority.

• If a higher-priority thread exists, the current thread yields the CPU.

5. thread_update_recent_cpu() Function

void
thread_update_recent_cpu (struct thread *t, void *aux UNUSED)
{
  t->recent_cpu = FP_ADD_MIX (FP_DIV (FP_MULT (FP_MULT_MIX (load_avg, 2),
                                               t->recent_cpu),
                                      FP_ADD_MIX (FP_MULT_MIX (load_avg, 2),
                                                  1)),
                              t->nice);
  thread_update_priority_mlfqs (t);
}

• Updates recent_cpu based on the formula:

recent_cpu = (2 · load_avg) / (2 · load_avg + 1) · recent_cpu + nice

• Calls thread_update_priority_mlfqs() to recalculate the thread’s priority.
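
thread_update_priority_mlfqs() itself is not listed in the report; a minimal sketch consistent with the priority formula of Section 2.3.2 (the name matches the call above, the body is our reconstruction) is:

void
thread_update_priority_mlfqs (struct thread *t)
{
  if (t == idle_thread)
    return;

  /* priority = PRI_MAX - recent_cpu / 4 - 2 * nice, where recent_cpu
     is fixed-point and the result is converted back to an integer. */
  int priority = PRI_MAX
                 - FP_ROUND (FP_DIV_MIX (t->recent_cpu, 4))
                 - 2 * t->nice;

  /* Clamp to the valid range [PRI_MIN, PRI_MAX]. */
  if (priority > PRI_MAX)
    priority = PRI_MAX;
  if (priority < PRI_MIN)
    priority = PRI_MIN;

  t->priority = priority;
}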

6. thread_tick_one_second() Function

void
thread_tick_one_second (void)
{
  enum intr_level old_level = intr_disable ();

  /* Update the system load average. */
  int num_of_waiting_threads = list_size (&ready_list) +
                               ((thread_current () != idle_thread) ? 1 : 0);
  load_avg = FP_ADD (FP_DIV_MIX (FP_MULT_MIX (load_avg, 59), 60),
                     FP_DIV_MIX (FP_CONST (num_of_waiting_threads), 60));

  /* Update the recent_cpu of all threads. */
  thread_foreach (thread_update_recent_cpu, NULL);

  intr_set_level (old_level);
}

• Updates the system load average and the recent_cpu of all threads every second.

• Uses the formula:

load_avg = (59/60) · load_avg + (1/60) · num_of_ready_threads

Conclusion
The MLFQS scheduler effectively balances CPU-bound and I/O-bound processes by
adjusting priorities dynamically. It leverages fixed-point arithmetic for accurate
calculations and ensures fair distribution of CPU resources.

CHAPTER 4. RESULTS AND TESTING

4.1 Executing the predefined Tests of PINTOS


These are predefined tests that verify the features and behavior of the scheduler and
priority mechanisms in the system. Below is an explanation of the purpose of each test:

Alarm Tests

1. alarm-single Purpose: Tests the functionality of timer_sleep() with a single
thread. Ensures the thread "sleeps" for the correct duration.

2. alarm-multiple Purpose: Tests timer_sleep() with multiple threads. Ensures
all threads wake up at the correct time.

3. alarm-simultaneous Purpose: Tests timer_sleep() with multiple threads
having the same sleep time. Verifies that all threads wake up simultaneously.

4. alarm-priority Purpose: Tests the relationship between timer_sleep() and
priority. Ensures threads wake up in the correct priority order.

5. alarm-zero Purpose: Tests timer_sleep() when the sleep time is 0. The
thread should not be blocked.

6. alarm-negative Purpose: Tests timer_sleep() with a negative sleep time.
Ensures the system handles this case correctly and does not cause errors.

4.1.1 Priority Tests


7. priority-change Purpose: Tests changing a thread’s priority while it is
running. Ensures the scheduler correctly adjusts priorities.

8. priority-donate-one Purpose: Tests priority donation with a single thread.
Ensures the priority is correctly transferred.

9. priority-donate-multiple Purpose: Tests priority donation with multiple
threads and complex dependency chains.

10. priority-donate-multiple2 Purpose: Tests advanced scenarios of priority
donation, where multiple threads need to receive and transfer priorities across
multiple levels.

11. priority-donate-nest Purpose: Tests priority donation with nested threads.
Ensures the system handles dependencies accurately.

12. priority-donate-sema Purpose: Tests priority donation with semaphores.
Ensures priorities are managed correctly when using semaphores.

13. priority-donate-lower Purpose: Tests priority donation scenarios where
a lower-priority thread must yield resources to a higher-priority thread.

14. priority-fifo Purpose: Ensures that threads with the same priority are
executed in FIFO (First In, First Out) order.

15. priority-preempt Purpose: Tests the ability to preempt (take over the CPU)
when a higher-priority thread appears.

16. priority-sema Purpose: Tests priority management while using semaphores.

17. priority-condvar Purpose: Tests priority management when using condition
variables.

18. priority-donate-chain Purpose: Tests a chain of priority donations with
threads dependent on each other in a long chain.

4.1.2 MLFQS (Multilevel Feedback Queue Scheduler) Tests


19. mlfqs-load-1 Purpose: Tests MLFQS with only one thread running under
low load.

20. mlfqs-load-60 Purpose: Tests MLFQS under high load, with many threads
competing for the CPU.

21. mlfqs-load-avg Purpose: Tests the calculation of load_avg in MLFQS.
Ensures this value is updated accurately.

22. mlfqs-recent-1 Purpose: Tests the calculation of recent_cpu for a single
thread. Ensures this value changes correctly based on load.

23. mlfqs-fair-2 Purpose: Tests fairness in MLFQS with two threads having the
same priority.

24. mlfqs-fair-20 Purpose: Tests fairness in MLFQS when multiple threads have
different priorities.

25. mlfqs-nice-2 Purpose: Tests the effect of the nice value with two threads.
Ensures the thread with a higher nice value is given lower priority.

26. mlfqs-nice-10 Purpose: Tests the effect of the nice value in a multi-threaded
scenario.

27. mlfqs-block Purpose: Tests MLFQS when a thread transitions to the BLOCKED
state.

4.2 Self-created Test

In addition to the predefined tests, we created another test to evaluate the MLFQS
mechanism, named mlfqs-heavy-load. To add it, we placed two files,
mlfqs-heavy-load.c and mlfqs-heavy-load.ck, in the directory containing the
tests, and then edited the files Make.tests, tests.c, and tests.h.
mlfqs-heavy-load.c
/* mlfqs-heavy-load.c

   A more demanding test for MLFQS in Pintos.

   Creates many threads with different priorities, nice values, and
   behaviors (CPU-bound, I/O-bound). Runs for a longer time to stress
   the scheduler. */

#include <stdio.h>
#include "tests/threads/tests.h"
#include "threads/init.h"
#include "threads/malloc.h"
#include "threads/synch.h"
#include "threads/thread.h"
#include "devices/timer.h"

#define THREAD_CNT 50          /* Total number of threads. */
#define RUN_TIME 60            /* Run time in seconds. */
#define CPU_BOUND_THREADS 20   /* Number of CPU-bound threads. */
#define IO_BOUND_THREADS 20    /* Number of I/O-bound threads. */
#define MIXED_THREADS 10       /* Number of threads with mixed behavior. */
#define PRINT_INTERVAL 5       /* Interval (s) for printing the load average. */

static int64_t start_time;

static void cpu_bound_thread (void *aux);
static void io_bound_thread (void *aux);
static void mixed_thread (void *aux);

void
test_mlfqs_heavy_load (void)
{
  int i;

  ASSERT (thread_mlfqs);

  start_time = timer_ticks ();

  msg ("Starting heavy load test for MLFQS...");

  /* Create CPU-bound threads. */
  for (i = 0; i < CPU_BOUND_THREADS; i++) {
    char name[30];
    int nice_value = (i % 5) - 2;   /* Nice values from -2 to 2. */
    snprintf (name, sizeof name, "cpu_bound_%d", i);
    thread_create (name, PRI_DEFAULT + i % 5, cpu_bound_thread,
                   (void *) (intptr_t) nice_value);
  }

  /* Create I/O-bound threads. */
  for (i = 0; i < IO_BOUND_THREADS; i++) {
    char name[30];
    int nice_value = (i % 5) + 3;   /* Nice values from 3 to 7. */
    snprintf (name, sizeof name, "io_bound_%d", i);
    thread_create (name, PRI_DEFAULT - i % 5, io_bound_thread,
                   (void *) (intptr_t) nice_value);
  }

  /* Create mixed-behavior threads. */
  for (i = 0; i < MIXED_THREADS; i++) {
    char name[30];
    int nice_value = (i % 10) - 5;  /* Nice values from -5 to 4. */
    snprintf (name, sizeof name, "mixed_%d", i);
    thread_create (name, PRI_DEFAULT, mixed_thread,
                   (void *) (intptr_t) nice_value);
  }

  /* Print the load average periodically. */
  for (i = 0; i < RUN_TIME / PRINT_INTERVAL; i++) {
    int64_t sleep_until = start_time + TIMER_FREQ * (i + 1) * PRINT_INTERVAL;
    timer_sleep (sleep_until - timer_ticks ());
    int load_avg = thread_get_load_avg ();
    msg ("After %d seconds, load average=%d.%02d.",
         (i + 1) * PRINT_INTERVAL, load_avg / 100, load_avg % 100);
  }

  msg ("Heavy load test finished.");
  pass ();
}

static void
cpu_bound_thread (void *arg)
{
  int nice_value = (int) (intptr_t) arg;
  thread_set_nice (nice_value);

  while (timer_elapsed (start_time) < RUN_TIME * TIMER_FREQ) {
    /* Spin in a tight loop to simulate CPU-bound work. */
    for (int i = 0; i < 1000000; i++);
  }
}

static void
io_bound_thread (void *arg)
{
  int nice_value = (int) (intptr_t) arg;
  thread_set_nice (nice_value);

  while (timer_elapsed (start_time) < RUN_TIME * TIMER_FREQ) {
    timer_sleep (5);   /* Simulate an I/O operation. */
    /* Do some short computation. */
    for (int i = 0; i < 100000; i++);
  }
}

static void
mixed_thread (void *arg)
{
  int nice_value = (int) (intptr_t) arg;
  thread_set_nice (nice_value);

  while (timer_elapsed (start_time) < RUN_TIME * TIMER_FREQ) {
    /* Alternate between CPU-bound and I/O-bound work. */
    if (timer_ticks () % (TIMER_FREQ * 2) == 0)
      timer_sleep (10);
    else
      for (int i = 0; i < 500000; i++);
  }
}

mlfqs-heavy-load.ck
# -*- perl -*-
use strict;
use warnings;
use tests::tests;
use tests::threads::mlfqs;

our ($test);

my (@output) = read_text_file ("$test.output");

common_checks ("run", @output);
@output = get_core_output ("run", @output);

# Check for load average values (simplified check).
my $load_avg_count = 0;
foreach (@output) {
  if (/After \d+ seconds, load average=(\d+\.\d+)\./) {
    $load_avg_count++;
  }
}

if ($load_avg_count < 5) {
  fail ("Not enough load average values were printed.");
}

pass;

A detailed explanation of each part of the test follows:

1. File mlfqs-heavy-load.c:

• #define:

– THREAD_CNT 50: total number of threads created (50).
– RUN_TIME 60: test running time, in seconds (60 seconds).
– CPU_BOUND_THREADS 20: number of CPU-bound threads (20).
– IO_BOUND_THREADS 20: number of I/O-bound threads (20).
– MIXED_THREADS 10: number of mixed-behavior threads (10).
– PRINT_INTERVAL 5: interval between load-average printouts, in seconds (5 seconds).

• start_time: global variable storing the moment the test starts (in ticks).

• The functions cpu_bound_thread, io_bound_thread, and mixed_thread are the thread entry functions:

– cpu_bound_thread: simulates a thread that uses the CPU continuously (CPU-bound). It runs a large for loop to consume CPU time.
– io_bound_thread: simulates a thread that performs many I/O operations (I/O-bound). It calls timer_sleep to pause for a short interval, then performs a small amount of computation.
– mixed_thread: simulates a thread with mixed behavior, alternating between CPU-bound and I/O-bound phases. It checks timer_ticks() % (TIMER_FREQ * 2) == 0 to decide whether to call timer_sleep (I/O-bound) or run the for loop (CPU-bound).
– Each of these functions receives an argument arg carrying the thread's nice value and calls thread_set_nice to set that value for itself.

• test_mlfqs_heavy_load: the main function of the test.

– ASSERT (thread_mlfqs): checks whether the MLFQS scheduler is enabled. If thread_mlfqs is false, the assertion fails and the test stops.
– Initializing start_time: records the moment the test starts.
– Thread creation:

* Creates CPU_BOUND_THREADS (20) CPU-bound threads with nice values from -2 to 2 and priorities from PRI_DEFAULT to PRI_DEFAULT + 4.
* Creates IO_BOUND_THREADS (20) I/O-bound threads with nice values from 3 to 7 and priorities from PRI_DEFAULT down to PRI_DEFAULT - 4.
* Creates MIXED_THREADS (10) mixed-behavior threads with nice values from -5 to 4 and the default priority PRI_DEFAULT.

– Periodic load-average printing:

* The loop runs RUN_TIME / PRINT_INTERVAL (60 / 5 = 12) times.
* On each iteration, it sleeps until the moment start_time + TIMER_FREQ * (i + 1) * PRINT_INTERVAL.
* It calls thread_get_load_avg() to obtain the load-average value and prints it.

– Completion: prints the message "Heavy load test finished." and calls pass() to report that the test passed.

2. File mlfqs-heavy-load.ck:

• use strict;, use warnings;: enable strict mode and warnings in Perl.
• use tests::tests;, use tests::threads::mlfqs;: import the required modules.
• my (@output) = read_text_file ("$test.output");: read the contents of the output file.
• common_checks ("run", @output);: perform the common checks.
• @output = get_core_output ("run", @output);: extract the core part of the output.
• Load-average check:

– Counts the occurrences of lines matching "After \d+ seconds, load average=\d+.\d+." in the output.
– If there are fewer than 5 occurrences, the test fails.

Purpose of the test:

• Generate heavy load: create many threads with different behaviors to put the scheduler under heavy load.
• Check stability: run for a long time (60 seconds) to verify that the scheduler remains stable under heavy load.
• Evaluate the load average: track the load-average value to see how the scheduler responds to load. The load average rises when many threads are ready to run and falls as threads finish or move to the blocked state (due to I/O); see the update rule below.
• Check the effect of nice values: the threads use different nice values to verify that the scheduler favors threads with lower nice values (i.e., higher priority).
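
For reference, the printed values follow the standard Pintos MLFQS load-average update rule, recomputed once per second (every TIMER_FREQ ticks):

load_avg = (59/60) × load_avg + (1/60) × ready_threads

where ready_threads is the number of threads that are running or ready to run at the moment of the update (excluding the idle thread). thread_get_load_avg() returns 100 times the current load average, rounded to the nearest integer, which is why the test prints load_avg / 100 and load_avg % 100.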

4.3 Results
During testing, the priority scheduling system was evaluated through 27 different tests, including tests of the priority donation mechanism and scenarios involving complex resource contention. The system passed all of these tests, meeting the required correctness and performance criteria. This demonstrates that the changes implemented in Pintos, particularly the addition of the priority donation mechanism, function correctly, avoid priority inversion, and ensure fair CPU allocation among threads with different priority levels.

The system also passed the custom mlfqs-heavy-load test without difficulty.

CHAPTER 5. MEMORY USAGE

5.1 Memory Usage in Pintos


Objective: Evaluate the memory usage of the Pintos system when running the priority-mem-usage and mlfqs-mem-usage tests, focusing on two scheduling mechanisms: Priority Scheduling and the Multi-Level Feedback Queue (MLFQS).
Methodology:

• Use two tests: priority-mem-usage and mlfqs-mem-usage.

• Both tests create 30 threads.

• Each thread repeats the following actions 100 times: allocate 16KB of memory
using malloc, write data to the allocated memory, and release the memory using
free.

• Monitor and record memory-related parameters after all threads complete.

• Run the tests on a Pintos environment configured to use each scheduling mechanism (Priority and MLFQS).

Monitored Parameters:

• Total pages allocated: Total number of memory pages allocated (including allocations for both the kernel and user).

• Pages currently in use: Number of memory pages currently in use.

• Bytes allocated by malloc: Total bytes of memory allocated by the malloc function (this value may not be entirely accurate due to reported tracking issues).

Note: This report is based on the provided results and acknowledges that the Bytes
allocated by malloc value might not be completely accurate due to tracking errors.
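
To illustrate why such a byte-level metric is fragile, here is a minimal, hypothetical tracking wrapper (not the code used for the report's measurements): because free() takes no size argument, the tracker must stash a size header next to each block, and bookkeeping of exactly this kind is where tracking errors can creep in.

#include <stddef.h>
#include <stdint.h>
#include "threads/malloc.h"

static long long bytes_outstanding;   /* "Bytes allocated by malloc". */

static void *
tracked_malloc (size_t size)
{
  /* Allocate room for a size header plus the caller's block. */
  size_t *p = malloc (size + sizeof (size_t));
  if (p == NULL)
    return NULL;
  *p = size;                          /* Remember the requested size. */
  bytes_outstanding += size;
  return p + 1;                       /* Hand back the block body. */
}

static void
tracked_free (void *block)
{
  if (block != NULL)
    {
      size_t *p = (size_t *) block - 1;
      bytes_outstanding -= *p;
      free (p);
    }
}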

5.2 mem-usage Test


Both the priority-mem-usage and mlfqs-mem-usage tests share a similar testing scenario:
Objective: Test the system’s dynamic memory allocation and deallocation capability under concurrent thread access.
Procedure:

1. Create 30 threads.

2. Each thread performs a loop 100 times, where in each iteration:

• The malloc function allocates a 16KB memory block.

• Data is written to the allocated memory (simulating memory-usage operations).

• The free function releases the allocated memory block.

3. Threads run for 10 seconds (using timer_sleep).

4. After all threads complete, memory usage metrics are printed.
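
The report does not reproduce the test source, so the following is only a minimal sketch of the per-thread loop described above, assuming the kernel malloc/free from threads/malloc.h; the names mem_thread, BLOCK_SIZE, and ITERATIONS are ours.

#include <stdint.h>
#include <debug.h>
#include "threads/malloc.h"

#define BLOCK_SIZE (16 * 1024)    /* 16 KB per allocation. */
#define ITERATIONS 100            /* Allocations per thread. */

/* Hypothetical worker body: each of the 30 test threads runs this loop. */
static void
mem_thread (void *aux UNUSED)
{
  int i, j;

  for (i = 0; i < ITERATIONS; i++)
    {
      uint8_t *block = malloc (BLOCK_SIZE);
      if (block == NULL)
        break;                    /* Stop early on allocation failure. */
      for (j = 0; j < BLOCK_SIZE; j++)
        block[j] = (uint8_t) j;   /* Touch every byte of the block. */
      free (block);
    }
}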

Differences Between Tests:

• priority-mem-usage: Uses Priority Scheduling, creating threads with priorities varying from PRI_MIN to PRI_MAX.

• mlfqs-mem-usage: Uses MLFQS, creating threads with the default priority but a nice value of 10.
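
A sketch of how the two tests could differ only in how they create threads; the loop bounds, thread names, helper functions, and the nice-setting wrapper below are illustrative assumptions, not the actual test source.

#include "threads/thread.h"

/* Hypothetical wrapper so each MLFQS thread sets its own nice value
   (thread_set_nice affects only the calling thread). */
static void
mem_thread_nice10 (void *aux)
{
  thread_set_nice (10);
  mem_thread (aux);
}

/* priority-mem-usage: priorities spread across [PRI_MIN, PRI_MAX]. */
static void
create_priority_threads (void)
{
  int i;
  for (i = 0; i < 30; i++)
    thread_create ("prio_mem", PRI_MIN + i * (PRI_MAX - PRI_MIN) / 29,
                   mem_thread, NULL);
}

/* mlfqs-mem-usage: the priority argument is recomputed by the scheduler
   under MLFQS; the nice value of 10 comes from the wrapper above. */
static void
create_mlfqs_threads (void)
{
  int i;
  for (i = 0; i < 30; i++)
    thread_create ("mlfqs_mem", PRI_DEFAULT, mem_thread_nice10, NULL);
}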

5.2.1 Results from the Priority Scheduling Mechanism


Results:
(priority-mem-usage) Total pages allocated: 15065
(priority-mem-usage) Pages currently in use: 8
(priority-mem-usage) Bytes allocated by malloc: 4

Analysis:

• Total pages allocated: 15065: This figure is the total number of memory pages allocated during the test, including pages allocated for the kernel, system processes, and the test threads. With 30 threads, each looping 100 times and allocating 16KB (equivalent to 4 pages) per iteration, the threads alone account for approximately 12,000 pages (see the worked figure after this list). The remaining 3,065 pages can reasonably be attributed to the kernel and other processes.

• Pages currently in use: 8: After all threads have completed and released memory, only 8 pages remain in use. This shows that most memory has been successfully deallocated. The small number of pages still in use might be due to system resources not being immediately released or to memory management structures in Pintos.

• Bytes allocated by malloc: 4: This indicates that 4 bytes of memory may not have been released. However, this value might not be accurate due to the reported tracking errors.
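
As a quick sanity check on the thread-allocation estimate above (Pintos pages are 4 KB, so each 16 KB allocation spans 4 pages):

30 threads × 100 iterations × 4 pages/iteration = 12,000 pages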

Remarks:

• The Priority Scheduling mechanism performed stably during this test. Threads with
varying priorities executed and completed without significant memory allocation
errors.

• The system successfully handled memory allocation and deallocation requests from
threads.

• The Bytes allocated by malloc metric suggests a small amount of memory may remain unreleased, but given the metric's potential inaccuracy, this result is acceptable.

5.2.2 Results from the MLFQS Mechanism


Results:
(mlfqs-mem-usage) Total pages allocated: 15033
(mlfqs-mem-usage) Pages currently in use: 3
(mlfqs-mem-usage) Bytes allocated by malloc: 0

Analysis:

• Total pages allocated: 15033: Similar to Priority Scheduling, the total number of allocated pages is reasonable and shows no significant difference.

• Pages currently in use: 3: This result is better than Priority Scheduling (3 pages versus 8 pages). It suggests that MLFQS may manage and reclaim memory slightly more efficiently.

• Bytes allocated by malloc: 0: This indicates that all memory allocated by malloc was fully deallocated after the threads completed. This is a positive outcome, suggesting no memory leaks occurred during the test.

Remarks:

• The MLFQS mechanism performed well in the test, with threads able to allocate
and deallocate memory without errors.

• The results for Bytes allocated by malloc: 0 and Pages currently in use:
3 suggest that MLFQS managed memory more effectively and avoided memory
leaks.

5.3 Comparison Between the Two Mechanisms

Remarks:

• Total pages allocated: Both mechanisms allocated a similar number of pages, with no significant difference.

• Pages currently in use: MLFQS demonstrated slightly better memory management and reclamation (3 pages versus 8 pages).

• Bytes allocated by malloc: Both mechanisms achieved results close to 0. However, as noted, this value might not be entirely accurate. Based on the output, MLFQS appears to have been more thorough in deallocating memory.

• Execution Time: There was no significant difference in execution time between the two mechanisms for this test.

Conclusion: Based on the results of these tests, MLFQS appears to demonstrate slightly better memory management than Priority Scheduling in scenarios involving continuous memory allocation and deallocation. MLFQS used fewer memory pages after the threads completed and fully deallocated memory (based on the Bytes allocated by malloc value, despite its potential inaccuracy).
