02AS1M AdvancedScheduler Report 20224291 20224328
PROJECT REPORT
Advanced Scheduler
Reporting Memory Usage
NGUYỄN BÁ THÀNH 20224291
BÙI MINH NGỌC 20224328
Hanoi, 12/2024
PREFACE
DUTY ROSTER 7
2.3 Advanced Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3.1 Data Structures and Functions . . . . . . . . . . . . . . . . . . 25
2.3.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3.3 Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . 26
CHAPTER 1. PINTOS AND OVERVIEW OF SCHEDULING
ALGORITHMS
1.1 PINTOS
1.1.1 What is Pintos?
Pintos is a simple operating system framework designed for educational purposes,
supporting the 80x86 architecture. It provides basic features such as thread management,
loading and executing user programs, and a file system. However, these mechanisms are
implemented at a basic level to give students the opportunity to explore and enhance the
functionality, optimizing the operating system.
Pintos is commonly used in operating systems courses at universities as a platform
for practicing core concepts. It runs on hardware emulators like QEMU or Bochs,
facilitating testing and debugging without direct interaction with physical hardware.
These projects help improve Pintos’ performance and increase its ability to
efficiently utilize limited resources such as processing time, memory, and energy.
• threads/: Contains the Pintos kernel, including the bootloader, basic interrupt
handling, memory allocation, and CPU scheduling. Most of the code related to the
Threads project is located in this directory.
• userprog/: Manages user programs, including the page table, page fault handling,
and program loader. This directory is primarily used in the User Programs project.
• filesys/: The Pintos file system, to be improved in the File Systems project.
• devices/: Source code for interfacing with peripheral devices such as keyboards,
timers, and disks.
• lib/: The standard C library used both in the Pintos kernel and user programs.
The Threads project lays a critical foundation for building advanced features in
subsequent projects. An optimized thread system ensures that the operating system runs
smoothly, efficiently, and reliably.
1.2 OVERVIEW OF SCHEDULING ALGORITHMS
1.2.1 Round Robin Algorithm
Basic Concept: Round Robin is a widely used scheduling algorithm in operating systems
and computer networks. It is a preemptive scheduling algorithm: a running process is
interrupted when its time quantum expires, even if it has not yet completed.
The key feature of Round Robin is that it fairly shares CPU resources (or other resources)
among all waiting processes. Each process is allocated a fixed amount of CPU time,
called a time quantum or time slice.
1. Queue: Processes waiting for CPU time are placed in a queue in the order they
arrive (first-in, first-out - FIFO).
2. Selection: The scheduler selects the process at the front of the queue to allocate CPU
time.
3. Allocate CPU: The process runs for a time equal to the time quantum.
4. End of quantum: Once the time quantum ends, the process is preempted and
moved to the end of the queue.
5. Repeat: The scheduler repeats the steps above until all processes are completed.
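The five steps above can be sketched in C. This is an illustrative simulation, not Pintos code: the process table, queue representation, and function name are invented for the example.

```c
#include <assert.h>
#include <string.h>

/* Simulates the Round Robin loop: a FIFO queue of process indices,
   each run for at most one quantum; preempted processes are re-queued
   at the tail.  Records completion order and returns how many
   processes finished. */
static int
round_robin (const int *burst, int n, int quantum, int *finish_order)
{
  int remaining[16];
  int queue[256];               /* FIFO ready queue of indices. */
  int head = 0, tail = 0, finished = 0;

  memcpy (remaining, burst, n * sizeof *burst);
  for (int i = 0; i < n; i++)
    queue[tail++] = i;          /* Step 1: enqueue in arrival order. */

  while (head < tail)
    {
      int p = queue[head++];    /* Step 2: take the front of the queue. */
      int run = remaining[p] < quantum ? remaining[p] : quantum;
      remaining[p] -= run;      /* Step 3: run for up to one quantum. */
      if (remaining[p] > 0)
        queue[tail++] = p;      /* Step 4: preempted, moved to the tail. */
      else
        finish_order[finished++] = p;
    }                           /* Step 5: repeat until all are done. */
  return finished;
}
```

For example, with burst times {3, 5, 2} and a quantum of 2, the short process (index 2) finishes first, then index 0, then index 1.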
Scheduler: The component of the operating system responsible for selecting a process to
run on the CPU. In Round Robin, it selects the first process in the queue.
• Queue: A FIFO list holding the processes that are ready to run.
• Time Quantum: The fixed slice of CPU time each process receives per turn.
• Fairness: Ensures all processes get CPU time fairly, with no starvation.
• Suitable for interactive systems: Ideal for applications that require quick responses,
such as graphical user interface (GUI) applications or real-time systems.
– If the time quantum is too small, the number of context switches increases,
causing high overhead and reducing overall performance.
– If the time quantum is too large, Round Robin may behave similarly to FCFS
(First-Come, First-Served), causing long waiting times for short processes.
• Inefficient with long processes: If there are many long processes, the waiting time
can still be large.
1.2.1.4 Example Illustration: Suppose we have 3 processes:
Figure 1.2 Flowchart of Non-Preemptive Scheduling
2. Ready Queue: Processes waiting to execute are placed in the ready queue.
3. Process Selection: The scheduler selects the process with the highest priority.
4. Execution: The selected process runs until completion or voluntarily yields the
CPU.
5. Completion or Yield: A process will be removed from the system when completed
or returned to the queue after yielding the CPU.
1.2.2.3 Preemptive Priority Scheduling: In the preemptive case, a running process can
be interrupted by a new process with a higher priority.
Figure 1.3 Flowchart of Preemptive Scheduling
3. Initial Selection: The scheduler initially selects the process with the highest priority
from the ready queue.
5. Check Priority When a New Process Arrives: When a new process arrives, the
scheduler compares its priority with the currently running process.
• If the new process has a higher priority: The running process is interrupted
and moved back to the ready queue, and the new process is allocated the CPU.
• If the new process has equal or lower priority: It is placed in the ready queue
without interrupting the running process.
7. Repeat: The scheduler continues to monitor new arriving processes, check their
priorities, and preempt when necessary.
In summary, in the preemptive mode, a running process can be interrupted by a
higher-priority process, ensuring that more important processes get immediate access to
the CPU.
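The arrival-time decision described above can be condensed into two small helpers. This is an illustrative sketch, not scheduler code; as in the example of Section 1.2.2, a lower number means a higher priority.

```c
#include <stdbool.h>

/* A newly arrived process preempts the running one only if its
   priority is strictly higher (a smaller number here). */
static bool
should_preempt (int running_priority, int new_priority)
{
  return new_priority < running_priority;
}

/* Selects the highest-priority ready process; ties go to the
   lower index, i.e., the earlier arrival. */
static int
select_highest (const int *priority, int n)
{
  int best = 0;
  for (int i = 1; i < n; i++)
    if (priority[i] < priority[best])
      best = i;
  return best;
}
```

An arrival with equal priority does not preempt, which is exactly the "placed in the ready queue without interrupting" case above.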
• Starvation: More likely with non-preemptive scheduling; less likely with preemptive,
but still possible.
• Priority: A number indicating each process's importance; in this section, a lower
number means a higher priority.
• Scheduler: Selects the process with the highest priority to run and handles
preemption events if necessary.
• Static Priority:
– The priority is assigned when the process is created and does not change.
– Simple but may lead to starvation.
• Dynamic Priority:
– The priority can change during execution, e.g., it is raised the longer a
process waits (aging).
– More complex, but mitigates starvation.
1.2.2.7 Advantages
1.2.2.8 Disadvantages
• Starvation issue.
• Priority inversion.
1.2.2.10 Example Illustration: Assume there are 5 processes P1, P2, P3, P4, P5 with
execution times and priority levels as follows (a lower priority number means a higher
priority):
Non-Preemptive:
P2 (highest priority) is selected and executed for 1 time unit.
Preemptive
Figure 1.4 Multilevel Feedback Queue Scheduling (MLFQ)
1. Multiple Queues: The system has multiple queues, each with a different priority
level; processes in a higher-level queue are always scheduled before those in lower-level ones.
2. Scheduling Mechanism in Each Queue: Each queue uses its own scheduling
algorithm, typically Round Robin (RR), but First-Come-First-Served (FCFS) or other
algorithms can also be used.
• Processes that use too much CPU or exceed their time quantum will be moved
to a lower-priority queue.
• Processes that have been waiting for a long time without being allocated CPU
time may be promoted to a higher-priority queue.
4. Fairness: The feedback mechanism helps prevent starvation for processes in lower-
priority queues.
• Queues: The set of ready queues, ordered by priority level.
• Scheduling Algorithm: The algorithm applied within each queue; Round Robin
(RR) is the most common due to its fairness.
• Feedback Mechanism: The rules, described below, that move processes between
queues based on their behavior.
• Time Quantum: Each queue has its own time quantum; higher-priority queues
typically use shorter quanta.
• Time quantum expiration: A process that exhausts its time quantum in one queue
is moved to a lower-priority queue.
• Voluntary CPU release: If a process voluntarily releases the CPU (e.g., performing
I/O), it may be returned to the appropriate queue after completing the I/O.
• Long waiting: A process that has been waiting too long in a lower-priority queue
may be promoted to a higher-priority queue.
• Flexibility: The number of queues, time quantum, and movement mechanisms can
be customized to fit the system.
1.2.3.6 Disadvantages of MLFQ
• Overhead: Managing multiple queues may introduce overhead for the system.
• Potential for starvation: In some cases, processes may be starved if the configu-
ration is not optimal.
• Process P1 (CPU-bound):
• Process P2 (I/O-bound):
• I/O-bound processes like P2 are handled quickly in higher-priority queues, ensuring
quick response times.
• CPU-bound processes like P1 are moved to lower-priority queues when using too
much CPU, creating fairness in the system.
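The two movement rules just described can be sketched as follows. This is illustrative, not Pintos code; queue 0 is taken to be the highest-priority level and NUM_QUEUES is an assumed configuration constant.

```c
#define NUM_QUEUES 3

/* A process that exhausts its quantum is demoted one level,
   down to the lowest queue (CPU-bound behavior). */
static int
demote (int level)
{
  return level < NUM_QUEUES - 1 ? level + 1 : level;
}

/* A process that has waited too long is promoted one level,
   up to the highest queue; this is what prevents starvation. */
static int
promote (int level)
{
  return level > 0 ? level - 1 : level;
}
```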
CHAPTER 2. ANALYSIS AND DESIGN
struct thread
  {
    ...
    struct list_elem sleepelem;           /* Element in sleeping_list. */
    int64_t remaining_time_to_wake_up;    /* Ticks left until wake-up. */
  };
2.1.2 Algorithm
The main limitation of the original implementation is that all threads (sleeping and
active) share a single storage space, as shown in Figure 2.1.
1. Call the thread_block() function to block the current thread (putting it to sleep). At this
stage, another thread from the top of the ready_list replaces the current thread.
3. On each tick, the timer device sends an interrupt and calls the timer_interrupt()
function. In this handler:
2.1.3 Synchronization
1. How can race conditions be avoided when multiple threads call timer_sleep()
simultaneously?
As long as timer_sleep() is not called in an interrupt context, multiple threads
cannot execute this function concurrently. Operations on the sleeping_list
are therefore safe.
2. How can race conditions with the timer interrupt be avoided when timer_sleep() is called?
By disabling interrupts for the duration of the thread's operation, the function becomes
effectively atomic:
2.1.4 Design Rationale
Using all_list instead of sleeping_list was considered because all living
threads naturally reside in all_list, eliminating the need for a separate sleeping_list.
However, this approach is not always safe, as demonstrated by the thread_foreach()
function:
/* Iterates through all threads and applies 'func', passing 'aux' as an argument.
   Must be called with interrupts disabled. */
void thread_foreach (thread_action_func *func, void *aux);
6. Test the implementation: Test with various threads and sleep durations to ensure
accuracy and efficiency under different conditions.
2.1.6 Advantages of This Design
• Clear separation: Using a separate sleeping_list helps manage ready and sleep-
ing threads independently, reducing complexity and potential errors.
• Atomic operations: Disabling interrupts during critical actions avoids race condi-
tions and ensures accuracy.
• Priority handling: Currently, all sleeping threads are treated equally. Adding
priority handling during wake-up could optimize real-time scenarios.
• Edge case testing: Scenarios like rapid context switching or a very large number
of threads need to be tested to ensure stability.
2.1.8 Conclusion
This design improves the efficiency of the alarm clock by isolating sleeping threads
into a separate list and ensuring atomic operations. With proper synchronization and
thorough testing, this approach enhances efficiency and maintainability in high-concurrency
systems.
2.2.1 Data structures and functions
In thread.h:
Add three attributes to the thread structure:
• real_priority: used to store the actual priority while the priority attribute
is temporarily raised by priority donation.
struct thread
  {
    ...
    int real_priority;          /* Original priority before donation. */
    struct list locks_held;     /* Locks this thread currently holds. */
    struct lock *current_lock;  /* Lock this thread is waiting to acquire. */
  };
In synch.h:
Add two attributes to the lock structure:
In synch.c:
Add one attribute to the semaphore_elem structure:
• priority: the highest priority among the threads in the semaphore's waiters list.
The semaphore_elem structure will look like this:
struct semaphore_elem
  {
    ...
    int priority;   /* Highest priority among the waiting threads. */
  };
3. A medium-priority thread with priority 32 is created. After calling thread_yield(),
since the medium thread has a higher priority than the initial thread, the CPU is yielded.
The medium thread then acquires lock b and tries to acquire lock a, but since a
is held by the initial thread, priority donation is performed to the initial thread.
than the initial thread, the CPU is yielded. The high-priority thread tries to acquire
lock b but fails and is blocked. Medium and the initial thread continue
performing priority donation:
5. Returning to the initial thread, it releases lock a, and medium is moved back to the
ready_list. The initial thread’s priority returns to the actual priority. After calling
thread_yield(), medium becomes the next thread to run:
7. Medium releases lock a; after calling thread_yield(), the next thread to execute
is still medium. It then releases lock b, and medium's priority returns to its real priority.
After calling thread_yield(), high becomes the next thread to execute:
9. High releases lock b, then finishes. The next thread to execute is medium, which
then finishes. Finally, the initial thread executes and finishes:
2.2.2 Algorithms
Choosing the next thread to run
The schedule() function calls next_thread_to_run() to get the next thread
from ready_list. The ready_list is always sorted in ascending order based on the
priority of the threads within it. When a thread is set to THREAD_READY, it is inserted
into the ready_list using the list_insert_ordered() function. The next thread to
execute will be found at the end of the ready_list.
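The effect of inserting with a "<=" comparison can be sketched with a plain array (illustrative only; the real code uses the Pintos list_insert_ordered). A new element of equal priority lands before the existing ones, so taking threads from the end of the list yields the highest priority first and FIFO order among equal priorities.

```c
/* Insert (prio, id) keeping 'arr' sorted ascending by priority.
   With '<=', a new entry goes before existing entries of the same
   priority, mirroring compare_threads_by_priority. */
static void
insert_ordered (int *arr, int *ids, int *n, int prio, int id)
{
  int i = 0;
  while (i < *n && !(prio <= arr[i]))   /* find first arr[i] >= prio */
    i++;
  for (int j = *n; j > i; j--)
    {
      arr[j] = arr[j - 1];
      ids[j] = ids[j - 1];
    }
  arr[i] = prio;
  ids[i] = id;
  (*n)++;
}
```

Inserting two priority-31 threads (ids 1 then 2) and one priority-40 thread (id 3) leaves id 3 at the end, and among the equal-priority pair the earlier arrival (id 1) sits closer to the end, so it runs first.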
Acquiring a Lock
When calling lock_acquire(), the first thing to do is perform priority donation.
Before calling sema_down(), we can check whether the lock is held by examining its
holder field: if holder is NULL, the lock is not held.
If the priority of the lock holder is lower than the current thread’s, we update the
"donated" priority of the lock and the lock holder. If the lock holder is currently holding
other locks, the process continues recursively.
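The recursive step can be sketched as an iterative walk up the chain of lock holders. This is a hypothetical sketch using simplified stand-in structs; the field names mirror the report's additions, but it is not the verbatim implementation.

```c
#include <stddef.h>

struct lock_s;
struct thread_s
  {
    int priority;
    struct lock_s *current_lock;   /* Lock this thread is waiting on. */
  };
struct lock_s
  {
    struct thread_s *holder;
    int max_priority;              /* Highest donated priority. */
  };

/* Propagate the donor's priority along holder -> current_lock ->
   holder ... until a holder already has an equal or higher priority. */
static void
donate_priority (struct thread_s *donor, struct lock_s *lock)
{
  while (lock != NULL && lock->holder != NULL
         && lock->holder->priority < donor->priority)
    {
      lock->max_priority = donor->priority;     /* Update the lock. */
      lock->holder->priority = donor->priority; /* Donate to the holder. */
      lock = lock->holder->current_lock;        /* Continue up the chain. */
    }
}
```

In the walkthrough above, high (40) waiting on lock b donates through medium (32) down to the initial thread (31) holding lock a.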
After calling sema_down(), the current thread will hold the lock. The lock is added
to the locks_held list of the thread and the priority of the lock and the current thread
is updated. If the next thread in the ready_list has a higher priority than the current
thread, we yield the CPU to the thread with the higher priority.
Releasing a Lock
When releasing a lock, we update the holder of the lock to NULL and remove
the lock from the locks_held list of the current thread. At the same time, we update
the thread’s priority. If this is the last lock the current thread holds, we restore its real
priority.
Then, we call sema_up() to release the semaphore and allow other threads to continue
execution.
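The priority-restore step can be expressed as a pure function. This is illustrative: the report's implementation walks the locks_held list, represented here as an array of each remaining lock's max_priority value.

```c
/* After releasing a lock, the thread's new priority is the maximum of
   its real priority and the highest donated priority among the locks
   it still holds; with no locks left, the real priority is restored. */
static int
restored_priority (int real_priority, const int *held_max, int n_held)
{
  int p = real_priority;
  for (int i = 0; i < n_held; i++)
    if (held_max[i] > p)
      p = held_max[i];
  return p;
}
```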
To ensure that the thread with the highest priority waiting for a lock or semaphore
is released first, we always use the list_max function when selecting the next thread
from the semaphore's waiters list (locks are built on semaphores). In addition, the
comparison function compare_threads_by_priority() guarantees that the thread with
the highest priority is selected.
2.3 Advanced Scheduler
2.3.1 Data Structures and Functions
1. In thread.h:
2. In thread.c:
• Add a global variable load_avg, used to estimate the average number of threads
ready to run over the past minute:
static fixed_t load_avg;
2.3.2 Algorithm
Advanced Scheduler: The MLFQ scheduler does not include "borrowing" of pri-
ority because the priority of a thread is calculated automatically by the operating system
rather than being set by the user.
System Load Average: The system load average, denoted load_avg, is a weighted
moving average of the number of threads in the ready_list plus the current thread
(excluding the idle_thread). When the system starts, load_avg is initialized to 0.
The value is updated once per second using the following formula:

load_avg = (59/60) × load_avg + (1/60) × ready_threads

where ready_threads is the number of threads that are ready or running.
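The update can be checked numerically in the 32-bit, 15-fraction-bit fixed-point format described in Chapter 3. The FP_* definitions below are assumptions for illustration, since the report's fixed-point header is not shown.

```c
#include <stdint.h>

typedef int32_t fixed_t;
#define FP_SHIFT 15                                    /* fraction bits */
#define FP_CONST(n)       ((fixed_t) ((n) << FP_SHIFT))
#define FP_ADD(x, y)      ((x) + (y))
#define FP_MULT_MIX(x, n) ((x) * (n))                  /* fixed * int */
#define FP_DIV_MIX(x, n)  ((x) / (n))                  /* fixed / int */
/* Round to nearest integer (for x >= 0). */
#define FP_ROUND(x)       (((x) + (1 << (FP_SHIFT - 1))) >> FP_SHIFT)

/* load_avg = (59/60) * load_avg + (1/60) * ready_threads */
static fixed_t
update_load_avg (fixed_t load_avg, int ready_threads)
{
  return FP_ADD (FP_DIV_MIX (FP_MULT_MIX (load_avg, 59), 60),
                 FP_DIV_MIX (FP_CONST (ready_threads), 60));
}
```

Starting from load_avg = 0 with 60 ready threads, a single update yields exactly 1.0, as the formula predicts.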
The recent_cpu and nice Variables: Each thread has two attributes:
2.3.3 Synchronization
To avoid conflicts when accessing global variables like load_avg or thread-specific
variables such as recent_cpu, these values are updated with interrupts disabled (in
the timer interrupt path), so the updates cannot interleave with other kernel code:
CHAPTER 3. PROGRAMMING PROCESS
Related Files
thread.c, thread.h
Implemented Code
1. Add Data Structure in thread.h
struct thread
  {
    // ... (other fields) ...

    /* Owned by thread.c. */
    struct list_elem sleepelem;           /* List element for sleeping threads. */
    int64_t remaining_time_to_wake_up;    /* Ticks remaining before waking up. */
  };
– Acts as a "hook" to link the struct thread into the doubly linked list sleeping_list.
• int64_t remaining_time_to_wake_up;
– The number of timer ticks remaining before the thread is woken up.
/* List of sleeping processes. */
static struct list sleeping_list;
• Sets the sleep duration and adds the current thread to the sleeping_list.
5. Modify thread_tick()
void
thread_tick (void)
{
  struct list_elem *e = list_begin (&sleeping_list);
  struct list_elem *temp;

  while (e != list_end (&sleeping_list))
    {
      struct thread *t = list_entry (e, struct thread, sleepelem);
      temp = e;
      e = list_next (e);        /* Advance before a possible removal. */
      if (t->remaining_time_to_wake_up > 0)
        {
          t->remaining_time_to_wake_up--;
          if (t->remaining_time_to_wake_up <= 0)
            {
              thread_unblock (t);
              list_remove (temp);
            }
        }
    }
}
Conclusion
The system is now capable of managing threads in a sleeping state and waking
them up after the specified time. Using the doubly linked list sleeping_list ensures
efficient and maintainable sleep state management.
int priority;            /* Priority. */
• int priority;: The thread's effective priority, which may be temporarily raised
by priority donation.
• int real_priority;: The thread's original priority, restored once all donated
priorities are released.
• int max_priority;:
– Stores the highest priority among all threads waiting for the lock.
3. Modifications to thread_create()
tid_t
thread_create (const char *name, int priority,
               thread_func *function, void *aux)
{
  // ... (other code) ...
  init_thread (t, name, priority);
  // ... (other code) ...
  thread_unblock (t);
  try_thread_yield ();
  return tid;
}
• Calls init_thread() to initialize the new thread with its priority and name.
• Adds the new thread to the ready_list and yields the CPU if a higher-priority
thread is available.
4. init_thread() Function
static void
init_thread (struct thread *t, const char *name, int priority)
{
  t->priority = priority;
  t->real_priority = priority;
  t->current_lock = NULL;
  list_init (&t->locks_held);
}
5. Modifications to thread_unblock()
void
thread_unblock (struct thread *t)
{
  list_insert_ordered (&ready_list, &t->elem,
                       compare_threads_by_priority, NULL);
}
6. compare_threads_by_priority() Function
bool
compare_threads_by_priority (const struct list_elem *a,
                             const struct list_elem *b,
                             void *aux UNUSED)
{
  return list_entry (a, struct thread, elem)->priority <=
         list_entry (b, struct thread, elem)->priority;
}
7. Modifications to thread_yield()
void
thread_yield (void)
{
  struct thread *cur = thread_current ();
  enum intr_level old_level = intr_disable ();

  if (cur != idle_thread)
    list_insert_ordered (&ready_list, &cur->elem,
                         compare_threads_by_priority,
                         NULL);
  cur->status = THREAD_READY;
  schedule ();
  intr_set_level (old_level);
}
struct thread *temp_holder = lock->holder;
3.2.4 Conclusion
The implementation of a Priority Scheduler and Priority Donation ensures efficient
scheduling and addresses the Priority Inversion problem. By modifying data structures
and introducing new mechanisms, the system can prioritize threads effectively and
maintain proper thread synchronization.
• int nice;
– Represents how "nice" a thread is to others, with a range from -20 to 20.
– Higher values mean the thread is more "nice" (lower priority).
– Default value is 0.
• fixed_t recent_cpu;
– Tracks recent CPU usage, updated periodically and decays over time.
– Used in priority calculations.
• Defines fixed_t as a 32-bit integer with 15 bits for the fractional part.
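Two operations in this format need a 64-bit intermediate result. The definitions below are an assumed sketch, since the report's fixed-point header is not shown; lowercase function names are used to avoid conflating them with the report's FP_* macros.

```c
#include <stdint.h>

typedef int32_t fixed_t;        /* 32-bit value, 15 fraction bits. */
#define FP_SHIFT 15

/* fixed * fixed: the raw product carries 2*FP_SHIFT fraction bits,
   so widen to 64 bits and shift one FP_SHIFT back out. */
static fixed_t
fp_mult (fixed_t x, fixed_t y)
{
  return (fixed_t) (((int64_t) x * y) >> FP_SHIFT);
}

/* fixed / fixed: pre-shift the dividend so the quotient keeps its
   fraction bits. */
static fixed_t
fp_div (fixed_t x, fixed_t y)
{
  return (fixed_t) (((int64_t) x << FP_SHIFT) / y);
}
```

Without the 64-bit widening, a product such as 2.0 × 3.0 would overflow the 32-bit representation before the correcting shift.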
/* The system load average. */
static fixed_t load_avg;
• Tracks the system load average, which reflects the number of threads ready to run.
• Initialized to 0 in thread_init.
4. thread_set_nice() Function
void
thread_set_nice (int nice)
{
  thread_current ()->nice = nice;
  thread_update_priority_mlfqs (thread_current ());
  try_thread_yield ();
}
• Updates the nice value of the current thread and recalculates its priority.
5. thread_update_recent_cpu() Function
void
thread_update_recent_cpu (struct thread *t, void *aux UNUSED)
{
  /* recent_cpu = (2*load_avg)/(2*load_avg + 1) * recent_cpu + nice */
  t->recent_cpu = FP_ADD_MIX (
      FP_DIV (FP_MULT (FP_MULT_MIX (load_avg, 2), t->recent_cpu),
              FP_ADD_MIX (FP_MULT_MIX (load_avg, 2), 1)),
      t->nice);
  thread_update_priority_mlfqs (t);
}
6. thread_tick_one_second() Function
void
thread_tick_one_second (void)
{
  enum intr_level old_level = intr_disable ();
  /* ... recompute load_avg, then refresh every thread's recent_cpu,
     e.g., via thread_foreach (thread_update_recent_cpu, NULL) ... */
  intr_set_level (old_level);
}
• Updates the system load average and recent_cpu of all threads every second.
Conclusion
The MLFQS scheduler effectively balances CPU-bound and I/O-bound processes by
adjusting priorities dynamically. It leverages fixed-point arithmetic for accurate
calculations and ensures fair distribution of CPU resources.
CHAPTER 4. TESTING RESULTS
10. priority-donate-multiple2 Purpose: Tests advanced scenarios of priority
donation, where multiple threads need to receive and transfer priorities across multiple
levels.
14. priority-fifo Purpose: Ensures that threads with the same priority are
executed in FIFO (First In, First Out) order.
20. mlfqs-load-60 Purpose: Tests MLFQS under high load, with many threads
competing for the CPU.
22. mlfqs-recent-1 Purpose: Tests the calculation of recent_cpu for a single
thread. Ensures this value changes correctly based on load.
23. mlfqs-fair-2 Purpose: Tests fairness in MLFQS with two threads having the
same priority.
24. mlfqs-fair-20 Purpose: Tests fairness in MLFQS when multiple threads have
different priorities.
25. mlfqs-nice-2 Purpose: Tests the effect of the nice value with two threads.
Ensures the thread with a higher nice value is given lower priority.
26. mlfqs-nice-10 Purpose: Tests the effect of the nice value in a multi-threaded
scenario.
#include "threads/synch.h"
#include "threads/thread.h"
#include "devices/timer.h"

void
test_mlfqs_heavy_load (void)
{
  int i;
  char name[16];

  ASSERT (thread_mlfqs);
  start_time = timer_ticks ();
  msg ("Starting heavy load test for MLFQS...");

  for (i = 0; i < IO_BOUND_THREADS; i++)
    {
      int nice_value = (i % 5) + 3;          /* Nice values from 3 to 7. */
      snprintf (name, sizeof name, "io_bound_%d", i);
      thread_create (name, PRI_DEFAULT - i % 5, io_bound_thread,
                     (void *) (intptr_t) nice_value);
    }
static void
cpu_bound_thread (void *arg)
{
  int nice_value = (int) (intptr_t) arg;
  thread_set_nice (nice_value);
  // ...

static void
io_bound_thread (void *arg)
{
  int nice_value = (int) (intptr_t) arg;
  thread_set_nice (nice_value);
  // ...

static void
mixed_thread (void *arg)
{
  int nice_value = (int) (intptr_t) arg;
  thread_set_nice (nice_value);
  // ...
mlfqs-heavy-load.ck
# -*- perl -*-
use strict;
use warnings;
use tests::tests;
use tests::threads::mlfqs;
our ($test);

# Check for load average values (simplified check)
my $load_avg_count = 0;
foreach (@output) {
    if (/After \d+ seconds, load average=(\d+\.\d+)\./) {
        $load_avg_count++;
    }
}
if ($load_avg_count < 5) {
    fail ("Not enough load average values were printed.");
}
pass;
Below is a detailed explanation of the parts of the test:
1. File mlfqs-heavy-load.c:
• #define:
• start_time: Global variable storing the moment the test starts (in ticks).
– mixed_thread: Simulates a thread with mixed behavior, alternating between
CPU-bound and I/O-bound. It checks timer_ticks() % (TIMER_FREQ * 2) ==
0 to decide whether to perform timer_sleep (I/O-bound) or run a for loop
(CPU-bound).
– Each of these functions receives an arg parameter containing the thread's nice
value and calls thread_set_nice to set the nice value for itself.
– ASSERT (thread_mlfqs): Checks whether the MLFQS scheduler is enabled. If
thread_mlfqs is false, the assertion fails and the test stops.
– Initialize start_time: Records the test's start time.
– Thread creation:
* Creates CPU_BOUND_THREADS (20) CPU-bound threads with nice values from -2
to 2 and priorities from PRI_DEFAULT to PRI_DEFAULT + 4.
* Creates IO_BOUND_THREADS (20) I/O-bound threads with nice values from 3 to
7 and priorities from PRI_DEFAULT to PRI_DEFAULT - 4.
* Creates MIXED_THREADS (10) mixed-behavior threads with nice values from
-5 to 4 and the default priority PRI_DEFAULT.
– Finish: Prints "Heavy load test finished." and calls pass() to report that
the test passed.
2. File mlfqs-heavy-load.ck:
• use strict;, use warnings;: Enable strict mode and warnings in Perl.
• use tests::tests;, use tests::threads::mlfqs;: Import the required
modules.
• my (@output) = read_text_file ("$test.output");: Reads the contents of the
output file.
• common_checks ("run", @output);: Performs the common checks.
• @output = get_core_output ("run", @output);: Extracts the core part of the output.
• Load average check:
– Counts the occurrences of lines matching "After \d+ seconds, load average=\d+.\d+."
in the output.
– If there are fewer than 5 occurrences, the test fails.
• Heavy load generation: Creates many threads with different behaviors to place a
heavy load on the scheduler.
• Stability check: Runs for a long time (60 seconds) to verify that the scheduler
operates stably under heavy load.
• Load average evaluation: Monitors the load average value to see how the scheduler
reacts to load. The load average rises when many threads are ready to run
and falls when threads finish or become blocked (on I/O).
• Effect of nice values: The threads have different nice values, to check whether
the scheduler favors threads with lower nice values (i.e., higher priority).
4.3 Results
During testing, the priority scheduling system was evaluated through 27 different
tests, including tests for the priority donation mechanism and scenarios that involve
complex resource contention. The results show that the system successfully passed all
of these tests, ensuring correctness and performance as required. This demonstrates that
the changes implemented in Pintos, particularly the addition of the priority donation
mechanism, function correctly, avoid issues related to priority inversion, and ensure
fairness in CPU allocation for threads with different priority levels.
The system also passed the custom-made mlfqs-heavy-load test with ease.
CHAPTER 5. MEMORY USAGE
• Each thread repeats the following actions 100 times: allocate 16KB of memory
using malloc, write data to the allocated memory, and release the memory using
free.
Monitored Parameters:
Note: This report is based on the provided results and acknowledges that the Bytes
allocated by malloc value might not be completely accurate due to tracking errors.
1. Create 30 threads.
Analysis:
• Total pages allocated: 15065: This figure indicates the total number of mem-
ory pages allocated during the test. It includes pages allocated for the kernel, system
processes, and the test threads. With 30 threads, each looping 100 times and allo-
cating 16KB (equivalent to 4 pages) per iteration, the estimated total allocation for
threads is approximately 12,000 pages. The remaining 3,065 pages can reasonably
account for the kernel and other processes.
• Pages currently in use: 8: After all threads have completed and released mem-
ory, only 8 pages remain in use. This shows that most memory has been successfully
deallocated. The small number of pages still in use might be due to system resources
not being immediately released or memory management structures in Pintos.
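The estimate in the analysis above can be checked mechanically; the constants come from the test setup described in this chapter (30 threads, 100 iterations, 16 KB per allocation, 4 KB pages).

```c
/* Estimated pages allocated by the test threads:
   threads x iterations x (allocation size / page size). */
static int
estimated_thread_pages (int threads, int iterations,
                        int alloc_kb, int page_kb)
{
  return threads * iterations * (alloc_kb / page_kb);
}
```

With the reported total of 15,065 pages, this estimate of 12,000 thread pages leaves about 3,065 pages attributable to the kernel and other processes.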
Remarks:
• The Priority Scheduling mechanism performed stably during this test. Threads with
varying priorities executed and completed without significant memory allocation
errors.
• The system successfully handled memory allocation and deallocation requests from
threads.
• While the Bytes allocated by malloc metric suggests a small amount of mem-
ory may remain unreleased, considering its potential inaccuracy, this result is ac-
ceptable.
Analysis:
• Total pages allocated: 15033: Similar to Priority Scheduling, the total num-
ber of allocated pages is reasonable and shows no significant difference.
Remarks:
• The MLFQS mechanism performed well in the test, with threads able to allocate
and deallocate memory without errors.
• The results for Bytes allocated by malloc: 0 and Pages currently in use:
3 suggest that MLFQS managed memory more effectively and avoided memory
leaks.
5.3 Comparison Between the Two Mechanisms
Remarks: