OS
CPU scheduling is essential for the system’s performance and ensures that
processes are executed correctly and on time. Different CPU scheduling algorithms
have different properties, and the choice of a particular algorithm depends on various
factors. Many criteria have been suggested for comparing CPU scheduling
algorithms.
What is CPU scheduling?
CPU scheduling is the process of allowing one process to use the CPU while another
process is delayed because a resource it needs, such as I/O, is unavailable, thus
making full use of the CPU. In short, CPU scheduling decides the order and priority
of the processes to run and allocates CPU time based on various parameters
such as CPU usage, throughput, turnaround time, waiting time, and response time. The
purpose of CPU Scheduling is to make the system more efficient, faster, and fairer.
Criteria of CPU Scheduling
CPU scheduling criteria, such as turnaround time, waiting time, and throughput, are
essential metrics used to evaluate the efficiency of scheduling algorithms.
1. CPU utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as
possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real
system it typically varies from 40 to 90 percent depending on the load on the system.
2. Throughput
A measure of the work done by the CPU is the number of processes being executed
and completed per unit of time. This is called throughput. The throughput may vary
depending on the length or duration of the processes.
3. Turnaround Time
For a particular process, an important criterion is how long it takes to execute that
process. The time elapsed from the time of submission of a process to the time of
completion is known as the turnaround time. Turnaround time is the sum of the times
spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and
waiting for I/O.
A scheduling algorithm does not affect the time required to complete the process
once it starts execution. It only affects the waiting time of a process i.e. time spent by
a process waiting in the ready queue.
4. Response Time
In an interactive system, turnaround time is not the best criterion. A process may
produce some output fairly early and continue computing new results while previous
results are being output to the user. Thus another criterion is the time taken from
the submission of a request until the first response is produced. This measure is
called response time.
Response Time = Time of First CPU Allocation – Arrival Time
5. Completion Time
The completion time is the time when the process stops executing, which means
that the process has completed its burst time and is completely executed.
6. Predictability
A given process should always run in about the same amount of time under a similar
system load.
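To make these criteria concrete, here is a minimal sketch (not from the text) that assumes a simple non-preemptive FCFS schedule with no I/O and hypothetical arrival and burst times, and computes completion, turnaround, waiting, and response time for each process.

# Minimal FCFS illustration (hypothetical data): under non-preemptive FCFS with
# no I/O, response time equals waiting time, and turnaround = completion - arrival.
def fcfs_metrics(processes):
    """processes: list of (name, arrival_time, burst_time), assumed sorted by arrival."""
    time, results = 0, []
    for name, arrival, burst in processes:
        start = max(time, arrival)          # CPU allocated for the first time
        completion = start + burst          # process finishes its burst
        turnaround = completion - arrival   # completion time - arrival time
        waiting = turnaround - burst        # time spent in the ready queue
        response = start - arrival          # first allocation - arrival
        results.append((name, completion, turnaround, waiting, response))
        time = completion
    return results

for row in fcfs_metrics([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]):
    print(row)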
Importance of Selecting the Right CPU Scheduling Algorithm for Specific
Situations
The Round Robin scheduling algorithm works well in a time-sharing system where tasks
have to be completed in a short period of time. The SJF scheduling algorithm works best
in a batch processing system where shorter jobs have to be completed first in order
to increase throughput. The Priority scheduling algorithm works better in a real-time
system where certain tasks have to be prioritized so that they can be completed in a
timely manner.
Factors Influencing CPU Scheduling Algorithms
There are many factors that influence the choice of CPU scheduling algorithm, such as
the number of processes in the system.
Selecting the correct algorithm will ensure that the system will use system resources
efficiently, increase productivity, and improve user satisfaction.
CPU Scheduling Algorithms
There are several CPU scheduling algorithms, including those discussed above:
(A) FIFO (First In, First Out), also called First Come, First Served (FCFS)
(B) SJF (Shortest Job First)
(C) Round Robin
(D) Priority Scheduling
What is CPU utilization in the context of scheduling algorithms?
CPU utilization indicates the efficiency of utilizing the CPU’s processing power.
Scheduling algorithms aim to keep the CPU as busy as possible to achieve high
utilization. However, excessively high CPU utilization can lead to poor system
responsiveness and potential resource contention.
What is throughput in the context of scheduling algorithms?
Throughput refers to the number of processes that are completed and leave the
system within a given time frame. Scheduling algorithms that maximize throughput
often prioritize short processes or those that require minimal CPU time, allowing
more processes to be completed in a given period.
Why is turnaround time an important criterion for scheduling algorithms?
Turnaround time measures the overall time a process takes to complete, from
submission to termination. Lower turnaround time indicates efficient process
execution. Scheduling algorithms that minimize turnaround time generally prioritize
processes with shorter burst times or high priority.
What trade-offs are involved in selecting a scheduling algorithm?
One trade-off is that improving throughput may lead to an increase in waiting time.
Deadlock System model
Overview :
A deadlock occurs when a set of processes is stalled because each process is
holding a resource and waiting to acquire a resource held by another process. For
example, Process 1 holds Resource 1 and waits for Resource 2, while Process 2
holds Resource 2 and waits for Resource 1.
System Model :
Memory, printers, CPUs, open files, tape drives, CD-ROMs, and other
resources are examples of resource categories.
By definition, all resources within a category are equivalent, and any of the
resources within that category can equally satisfy a request from that
category. If this is not the case (i.e. if there is some difference between the
resources within a category), then that category must be subdivided further.
For example, the term “printers” may need to be subdivided into “laser
printers” and “colour inkjet printers.”
The kernel keeps track of which resources are free and which are allocated,
to which process they are allocated, and a queue of processes waiting for this
resource to become available, for all kernel-managed resources. Application-managed
resources (e.g. binary or counting semaphores) can be controlled with mutexes or
wait() and signal() calls.
3. No pre-emption –
Once a process holds a resource (i.e. after its request is granted), that
resource cannot be taken away from that process until the process voluntarily
releases it.
4. Circular Wait –
There must be a set of processes {P0, P1, ..., PN} such that every P[i] is
waiting for P[(i + 1) % (N + 1)]. (It is important to note that this condition
implies the hold-and-wait condition, but dealing with the four conditions is
easier if they are considered separately.)
Methods for Handling Deadlocks :
In general, there are three approaches to dealing with deadlocks, as follows.
1. Deadlock prevention or avoidance: do not allow the system to enter a deadlocked
state.
2. Deadlock detection and recovery: when deadlocks are detected, abort a process or
preempt some resources.
3. Ignore the problem entirely. If deadlocks are not avoided or detected, the system
will gradually slow down as more processes become stuck waiting for resources held
by the deadlocked processes and by other waiting processes. Unfortunately, when the
computing requirements of a real-time process are high, this slowdown can be
confused with a general system slowdown.
Deadlock Prevention :
Deadlocks can be prevented by eliminating at least one of the four necessary
conditions, as follows.
Hold and Wait:
Make it a requirement that all processes request all resources at the same
time. This can be a waste of system resources if a process requires one
resource early in its execution but does not require another until much later.
Processes that hold resources must release them prior to requesting new
ones, and then re-acquire the released resources alongside the new ones in a
single new request. This can be a problem if a process uses a resource to
partially complete an operation and then fails to re-allocate it after it is
released.
Circular Wait:
To avoid circular waits, number all resources and insist that processes request
resources in strictly increasing (or decreasing) order.
To put it another way, before requesting resource Rj, a process must first
release all Ri such that i >= j.
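As a rough illustration of this ordering rule (not taken from the text), the sketch below assigns each lock a made-up number and always acquires locks in increasing order of that number, so a circular wait cannot form among threads that follow the rule; the resource names and numbers are invented for the example.

import threading

# Hypothetical global ordering of resources: lower number must be acquired first.
LOCK_ORDER = {"disk": 1, "printer": 2, "network": 3}
LOCKS = {name: threading.Lock() for name in LOCK_ORDER}

def acquire_in_order(names):
    """Acquire the named locks in increasing order of their assigned numbers,
    which rules out circular wait among threads that use this helper."""
    ordered = sorted(names, key=lambda n: LOCK_ORDER[n])
    for n in ordered:
        LOCKS[n].acquire()
    return ordered

def release(names_in_acquired_order):
    for n in reversed(names_in_acquired_order):
        LOCKS[n].release()

held = acquire_in_order(["printer", "disk"])   # always disk (1) before printer (2)
# ... use the resources ...
release(held)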
Deadlock Avoidance:
This necessitates more information about each process and results in low
device utilization. (This is a conservative approach.)
The scheduler only needs to know the maximum number of each resource
that a process could potentially use in some algorithms. In more complex
algorithms, the scheduler can also use the schedule to determine which
resources are required and in what order.
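The avoidance idea alluded to here, where the scheduler knows each process's maximum possible claims, is typically realized by something like the Banker's algorithm. Below is a minimal sketch of its safety check with a made-up resource snapshot; it is an illustration under those assumptions, not an algorithm spelled out in the text.

def is_safe(available, allocation, max_need):
    """Return True if some ordering lets every process finish.
    available: free units per resource type.
    allocation / max_need: per-process lists of the same length."""
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_need, allocation)]
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Pretend process i runs to completion and releases its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Hypothetical snapshot: 3 processes, 2 resource types.
print(is_safe([3, 2], [[1, 0], [2, 1], [0, 1]], [[3, 2], [2, 2], [1, 1]]))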
Deadlock Recovery (Process Termination):
1. Stop all processes that are involved in the deadlock. This does break the
deadlock, but at the expense of terminating more processes than are
absolutely necessary.
2. Processes should be terminated one at a time until the deadlock is broken.
This method is more conservative, but it necessitates performing deadlock
detection after each step.
In the latter case, many factors can influence which processes are terminated next
as follows.
1. How many and what kind of resources does the process hold? (Are they
simple to preempt and restore?)
2. How many more resources are required for the process to be completed?
Memory Management
INTRODUCTION:
Paged Segmentation and Segmented Paging are two different memory management
techniques that combine the benefits of paging and segmentation.
However, both techniques can also introduce additional complexity and overhead in
the memory management process. The choice between Paged Segmentation and
Segmented Paging depends on the specific requirements and constraints of a
system, and often requires trade-offs between flexibility, performance, and overhead.
Major Limitation of Single Level Paging
A big challenge with single level paging is that if the logical address space is large,
then the page table may take up a lot of space in main memory. For instance,
consider a 32-bit logical address and a page size of 4 KB: the number of pages
will be 2^20. The page table, without additional bits, will be of size 20 bits *
2^20, which is about 2.5 MB. Since each process has its own page table, a lot of memory is
consumed when single-level paging is used. For a system with a 64-bit logical address,
even the page table of a single process will not fit in main memory. For a process with a
large logical address space, a lot of its page table entries are invalid as a lot of the
logical address space goes unused.
Page table with invalid entries
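The figures quoted above follow from a few lines of arithmetic. The sketch below assumes the values used in the example (32-bit logical address, 4 KB pages, 20-bit page table entries) and reproduces the 2^20 entries and roughly 2.5 MB table size.

# Worked version of the example above (assumed sizes, not a general rule).
logical_address_bits = 32
page_size = 4 * 1024                                     # 4 KB pages
offset_bits = page_size.bit_length() - 1                 # 12 offset bits
page_number_bits = logical_address_bits - offset_bits    # 20 bits of page number
entries = 2 ** page_number_bits                          # 2^20 page table entries
entry_bits = 20                                          # bits per entry, as in the text
table_bytes = entries * entry_bits / 8
print(entries, table_bytes / (1024 * 1024), "MB")        # 1048576 entries, 2.5 MB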
Segmented Paging
A solution to the problem is to use segmentation along with paging to reduce the size
of page table. Traditionally, a program is divided into four segments, namely code
segment, data segment, stack segment and heap segment.
Segments of a process
The size of the page table can be reduced by creating a page table for each
segment. To accomplish this, hardware support is required. The address provided by the
CPU will now be partitioned into a segment number, a page number, and an offset.
The memory management unit (MMU) will use the segment table which will contain
the address of page table(base) and limit. The page table will point to the page
frames of the segments in main memory.
Segmented Paging
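To make the translation path concrete, here is a small hypothetical sketch (the page size, table contents, and frame numbers are invented): the logical address is split into segment number, page number, and offset; the segment table supplies a per-segment page table and a limit; and the page table supplies the frame.

PAGE_OFFSET_BITS = 12      # assumed 4 KB pages

# Hypothetical segment table: segment no -> (page table, limit in pages)
SEGMENT_TABLE = {
    0: ([5, 9, 2], 3),     # code segment: 3 pages mapped to frames 5, 9, 2
    1: ([7, 4], 2),        # data segment: 2 pages mapped to frames 7, 4
}

def translate(segment_no, page_no, offset):
    page_table, limit = SEGMENT_TABLE[segment_no]
    if page_no >= limit:                       # protection check against the limit
        raise MemoryError("segment overflow trap")
    frame = page_table[page_no]
    return frame * (1 << PAGE_OFFSET_BITS) + offset

print(translate(1, 1, 100))   # frame 4 -> physical address 4*4096 + 100 = 16484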
Advantages of Segmented Paging
1. The page table size is reduced as pages are present only for data of
segments, hence reducing the memory requirements.
2. Gives a programmers view along with the advantages of paging.
3. Since the entire segment need not be swapped out at once, swapping out to
virtual memory becomes easier.
Disadvantages of Segmented Paging
1. In segmented paging, not every process has the same number of segments
and the segment tables can be large in size which will cause external
fragmentation due to the varying segment table sizes. To solve this problem,
we use paged segmentation which requires the segment table to be paged.
The logical address generated by the CPU will now consist of page no #1,
segment no, page no #2 and offset.
2. The page table even with segmented paging can have a lot of invalid pages.
Instead of using multi level paging along with segmented paging, the problem
of larger page table can be solved by directly applying multi level paging
instead of segmented paging.
Paged Segmentation
Advantages of Paged Segmentation
1. No external fragmentation
Disadvantages of Paged Segmentation
1. The extra level of paging at the first stage adds to the delay in memory access.
2. Increased code size: paged segmentation can result in increased code size,
as the additional code required to manage the multiple page tables can take
up valuable memory space.
There are various constraints to the strategies for the allocation of frames:
You cannot allocate more than the total number of available frames.
At least a minimum number of frames should be allocated to each process.
This constraint is supported by two reasons. The first reason is that, as fewer
frames are allocated, the page fault rate increases,
decreasing the performance of the execution of the process. Secondly, there
should be enough frames to hold all the different pages that any single
instruction can reference.
Frame allocation algorithms –
The two algorithms commonly used to allocate frames to a process are:
1. Equal allocation: In a system with x frames and y processes, each process
gets an equal number of frames, i.e. x/y (rounded down). For instance, if the system has 48
frames and 9 processes, each process will get 5 frames. The three frames
which are not allocated to any process can be used as a free-frame buffer
pool.
Disadvantage: In systems with processes of varying sizes, it does not
make much sense to give each process equal frames. Allocation of a
large number of frames to a small process will eventually lead to the
wastage of a large number of allocated unused frames.
2. Proportional allocation: Frames are allocated to each process according to
the process size.
For a process pi of size si, the number of allocated frames is ai = (si/S)*m,
where S is the sum of the sizes of all the processes and m is the number of
frames in the system. For instance, in a system with 62 frames, if there is a
process of 10KB and another process of 127KB, then the first process will be
allocated (10/137)*62 = 4 frames and the other process will get (127/137)*62
= 57 frames.
Advantage: All the processes share the available frames according to
their needs, rather than equally.
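The proportional-allocation arithmetic in the example above can be written out directly. This is a minimal sketch of the formula a_i = (s_i / S) * m, using the same 62-frame, 10 KB / 127 KB figures from the text; integer truncation gives 4 and 57 frames.

def proportional_allocation(sizes, total_frames):
    """a_i = (s_i / S) * m, truncated to an integer for each process."""
    S = sum(sizes)
    return [(size * total_frames) // S for size in sizes]

print(proportional_allocation([10, 127], 62))   # [4, 57], matching the example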
Global vs Local Allocation –
The number of frames allocated to a process can also dynamically change
depending on whether you have used global replacement or local replacement for
replacing pages in case of a page fault.
1. Local replacement: When a process needs a page which is not in the
memory, it can bring in the new page and allocate it a frame from its own set
of allocated frames only.
Advantage: The pages in memory for a particular process and the
page fault ratio is affected by the paging behavior of only that process.
Disadvantage: A low priority process may hinder a high priority
process by not making its frames available to the high priority process.
2. Global replacement: When a process needs a page which is not in the
memory, it can bring in the new page and allocate it a frame from the set of all
frames, even if that frame is currently allocated to some other process; that is,
one process can take a frame from another.
Advantage: Does not hinder the performance of processes and hence
results in greater system throughput.
Disadvantage: The page fault ratio of a process can not be solely
controlled by the process itself. The pages in memory for a process
depends on the paging behaviour of other processes as well.
Cache Memory
Cache memory is a special type of high-speed memory located close to the CPU in a
computer. It stores frequently used data and instructions, so that the CPU can
access them quickly, improving the overall speed and efficiency of the computer.
Data in primary memory can be accessed faster than data in secondary memory, but the
access times of primary memory are still far slower than the speed at which the
CPU performs operations (nanoseconds). Because of this time lag between accessing
data and acting on it, the performance of the system decreases, as the CPU is not
utilized properly and may remain idle for some time. To minimize this time gap, a new
segment of memory was introduced, known as cache memory. Its key characteristics are:
1. Speed: Faster than the main memory (RAM), which helps the CPU retrieve
data more quickly.
2. Proximity: Located very close to the CPU, often on the CPU chip itself,
reducing data access time.
3. Function: Temporarily holds data and instructions that the CPU is likely to
use again soon, minimizing the need to access the slower main memory.
Role of Cache Memory
Whenever the CPU needs data, it first searches for the corresponding data in the cache (a fast
process). If the data is found, it processes the data according to the instructions; however, if the
data is not found in the cache, the CPU searches for it in primary memory (a slower
process) and loads it into the cache. This ensures that frequently accessed data is
usually found in the cache and hence minimizes the time required to access the
data.
How does Cache Memory Improve CPU Performance?
Cache memory improves CPU performance by reducing the time it takes for the
CPU to access data. By storing frequently accessed data closer to the CPU, it
minimizes the need for the CPU to fetch data from the slower main memory.
What is a Cache Hit and a Cache Miss?
Cache Hit: When the CPU finds the required data in the cache memory, allowing for
quick access. If the data is found on searching the cache, a cache hit has occurred.
Cache Miss: When the required data is not found in the cache, forcing the CPU to
retrieve it from the slower main memory. If the data is not found on searching the
cache, a cache miss has occurred.
Although cache and RAM are both used to increase the performance of the system,
there are many differences in how they operate to increase the efficiency of the
system.
In conclusion, cache memory plays an important role in enhancing the speed and
efficiency of computer systems. By storing frequently accessed data and instructions
close to the CPU, cache memory minimizes the time required for the CPU to access
information, thereby reducing latency and improving overall system performance.
Frequently Asked Questions on Cache Memory – FAQ’s
Why is cache memory faster than main memory?
Cache memory is faster than main memory because it uses high-speed static RAM
(SRAM) rather than the slower dynamic RAM (DRAM) used in main memory. Its
proximity to the CPU also reduces the time needed to access data.
Can cache memory be upgraded?
Cache memory is typically built into the CPU and cannot be upgraded separately.
Upgrading the CPU can increase the amount and speed of cache memory available.
What happens if the cache memory is full?
When the cache memory is full, it uses algorithms like Least Recently Used (LRU) to
replace old data with new data. The least recently accessed data is removed to
make space for the new data.
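As a rough illustration of the LRU policy mentioned in the answer (a software analogy, not a description of real CPU-cache hardware), the sketch below keeps a fixed number of entries in an OrderedDict and evicts the least recently used one when the cache is full; the capacity and keys are made up.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None                      # cache miss
        self.data.move_to_end(key)           # mark as most recently used
        return self.data[key]                # cache hit

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)    # evict the least recently used entry
        self.data[key] = value

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")                               # "a" becomes most recently used
cache.put("c", 3)                            # evicts "b"
print(list(cache.data.keys()))               # ['a', 'c']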
Why is multi-level cache used in modern CPUs?
Multi-level caches are used to balance speed and cost. The L1 cache is the fastest and
most expensive per byte, so it’s small. L2 and L3 caches are progressively larger
and slower, providing a larger total cache size while managing costs and maintaining
reasonable speed.
The main memory is central to the operation of a Modern Computer. Main Memory is
a large array of words or bytes, ranging in size from hundreds of thousands to
billions. Main memory is a repository of rapidly available information shared by
the CPU and I/O devices. Main memory is the place where programs and
information are kept when the processor is effectively utilizing them. Main memory is
associated with the processor, so moving instructions and information into and out of
the processor is extremely fast. Main memory is also known as RAM (Random
Access Memory). This memory is volatile. RAM loses its data when a power
interruption occurs.
Main Memory
What is Memory Management?
Memory management is the operating-system function that keeps track of every memory
location, decides how memory is shared among processes, and reclaims it when it is no
longer needed. Next, we discuss the concepts of Logical Address Space and Physical
Address Space.
Logical and Physical Address Space
Loading a process into the main memory is done by a loader. There are two different
types of loading :
Static Loading: Static Loading is basically loading the entire program into a
fixed address. It requires more memory space.
Dynamic Loading: Ordinarily, the entire program and all data of a process must be in
physical memory for the process to execute, so the size of a process is
limited to the size of physical memory. To gain proper memory utilization,
dynamic loading is used. In dynamic loading, a routine is not loaded until it is
called. All routines reside on disk in a relocatable load format. One of the
advantages of dynamic loading is that a routine that is never used is never loaded.
This kind of loading is useful when a large amount of code is needed only to handle
infrequently occurring cases.
Static and Dynamic Linking
To perform a linking task a linker is used. A linker is a program that takes one or
more object files generated by a compiler and combines them into a single
executable file.
Static Linking: In static linking, the linker combines all necessary program
modules into a single executable program. So there is no runtime
dependency. Some operating systems support only static linking, in which
system language libraries are treated like any other object module.
Dynamic Linking: The basic concept of dynamic linking is similar to dynamic
loading. In dynamic linking, “Stub” is included for each appropriate library
routine reference. A stub is a small piece of code. When the stub is executed,
it checks whether the needed routine is already in memory or not. If not
available then the program loads the routine into memory.
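The stub idea can be mimicked at the application level. The sketch below is only an analogy under invented names (the routine name and loader are hypothetical): the stub checks whether the routine has already been loaded, loads it on first use, and then calls it.

_loaded_routines = {}          # routines currently "in memory"

def load_routine(name):
    # Stand-in for reading a relocatable module from disk and relocating it.
    print(f"loading {name} from disk")
    return lambda *args: f"{name} called with {args}"

def stub(name):
    """Stub: check whether the routine is already loaded; if not, load it first."""
    def call(*args):
        if name not in _loaded_routines:
            _loaded_routines[name] = load_routine(name)
        return _loaded_routines[name](*args)
    return call

sqrt_routine = stub("sqrt_routine")
print(sqrt_routine(2))    # first call loads the routine
print(sqrt_routine(3))    # subsequent calls reuse the loaded copy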
Swapping
This is the simplest memory management approach: the memory is divided into two
sections:
One part for the operating system
The other part for the user program
Fence Register
In this approach, the operating system keeps track of the first and last location
available for the allocation of the user program
Interrupt vectors are often loaded in low memory; therefore, it makes sense to
load the operating system in low memory
Sharing of data and code does not make much sense in a single process
environment
The Operating system can be protected from user programs with the help of a
fence register.
Multiprogramming with Fixed Partitions (Without Swapping)
Figure: memory divided into fixed partitions, with the operating system in its own partition and processes p1, p2, p3, and p4 in the others
Partition Table
Once partitions are defined, the operating system keeps track of the status of the memory
partitions; this is done through a data structure called a partition table.
Sample Partition Table
0K – 200K: allocated
When it is time to load a process into the main memory and if there is more than one
free block of memory of sufficient size then the OS decides which free block to
allocate.
1. First Fit
2. Best Fit
3. Worst Fit
4. Next Fit
Paging
Segmentation
Virtual memory provides several benefits, including the ability to run larger programs
than the available physical memory, increased security through memory isolation,
and simplified memory management for applications. However, using virtual memory
also introduces additional overhead due to page table lookups and potential
performance degradation if excessive swapping occurs.
Real Storage
Explanation
Internal and external storage: Computers have two types of physical
storage: internal and external.
Volatile and non-volatile storage: Volatile storage loses its contents when
the power is removed, while non-volatile storage does not.
Primary storage: This is where documents, images, and other files are
stored on a hard drive or external drive.
Mass storage: This includes devices with removable and non-removable
media, such as smartphones, computers, enterprise servers, and data
centres.
Optical storage: This includes compact disks (CDs) and digital versatile disks
(DVDs).
A bare machine is the underlying hardware used to execute a program on the
processor without an operating system. So far, we have assumed that we
cannot execute any process without an operating system; with a bare
machine, however, we can.
Initially, before operating systems were developed, instructions were executed
directly on the hardware without any intervening software. The drawback was that
a bare machine accepts instructions only in machine language, so only people with
sufficient knowledge of the computer field were able to operate a computer. For this
reason, after the development of operating systems, the bare machine came to be
regarded as inefficient.
What is Resident Monitor?
The resident monitor is code that runs on a bare machine. The resident monitor
works like an operating system: it controls the instructions and performs all
necessary functions. It also works like a job sequencer, because it sequences the
jobs and sends them to the processor.
After scheduling the jobs, the resident monitor loads the programs one by one into
main memory according to their sequence. One important point about the
resident monitor is that there is no gap between program executions, so
processing is faster.
Parts of Resident Monitor
The resident monitor is divided into four parts:
1. Control Language Interpreter
2. Loader
3. Device Driver
4. Interrupt Processing
Control Language Interpreter: The first part of the Resident monitor is
control language interpreter which is used to read and carry out the instruction
from one level to the next level.
Loader: The second part of the Resident monitor which is the main part of the
Resident Monitor is Loader which Loads all the necessary system and
application programs into the main memory.
Device Driver: The third part of the resident monitor is the device driver, which
is used to manage the input-output devices connected to the system. It is
basically the interface between the user’s requests and the system’s responses:
the user makes a request, and the device driver sees that the system produces the
response needed to fulfill it.
Interrupt Processing: The fourth part, as the name suggests, processes
all interrupts that occur in the system.
Conclusion
In conclusion, a Bare machine is a basic model of a computer that focuses on its
essential components and operations, like memory and simple instructions. It helps
us understand fundamental computing concepts without any extra features. On the
other hand, a resident monitor is a more advanced system that manages the
computer’s resources and allows multiple programs to run efficiently. It handles tasks
like memory allocation and input/output operations, making it easier for users to
interact with the computer.
Frequently Asked Questions on Bare Machine and Resident Monitor – FAQs
Why is the Bare Machine Important?
The bare machine is mostly a theoretical concept for learning, while resident
monitors are used in operating systems to help manage computer functions and
resources.
Can I Run Programs on a Bare Machine?
No, a bare machine doesn’t support running complex programs directly. It only
shows basic operations. In contrast, a resident monitor allows you to run and
manage multiple programs effectively.
Fixed partitioning, also known as static partitioning, is one of the earliest memory
management techniques used in operating systems. In this method, the main
memory is divided into a fixed number of partitions at system startup, and each
partition is allocated to a process. These partitions remain unchanged throughout
system operation, ensuring a simple, predictable memory allocation process. Despite
its simplicity, fixed partitioning has several limitations, such as internal fragmentation
and inflexible handling of varying process sizes. This article delves into the
advantages, disadvantages, and applications of fixed partitioning in modern
operating systems.
What is Fixed (or static) Partitioning in the Operating System?
Fixed (or static) partitioning is one of the earliest and simplest memory management
techniques used in operating systems. It involves dividing the main memory into a
fixed number of partitions at system startup, with each partition being assigned to a
process. These partitions remain unchanged throughout the system’s operation,
providing each process with a designated memory space. This method was widely
used in early operating systems and remains relevant in specific contexts like
embedded systems and real-time applications. However, while fixed partitioning is
simple to implement, it has significant limitations, including inefficiencies caused by
internal fragmentation.
In fixed partitioning, the memory is divided into fixed-size chunks, with each
chunk being reserved for a specific process. When a process requests
memory, the operating system assigns it to the appropriate partition. The
partitions are defined at system boot time, and their sizes do not change
thereafter.
Memory allocation techniques are of two types:
1. Contiguous
2. Non-Contiguous
Contiguous Memory Allocation:
In contiguous memory allocation, each process is assigned a single continuous block
of memory in the main memory. The entire process is loaded into one contiguous
memory region.
Fixed Partitioning:
This is the oldest and simplest technique used to put more than one process in the
main memory. In this partitioning, the number of partitions (non-overlapping)
in RAM is fixed but the size of each partition may or may not be the same. As it is
a contiguous allocation, no spanning is allowed. Here partitions are made
before execution or during system configuration.
As illustrated in the figure above, the first process consumes only 1 MB of its 4 MB partition in the
main memory.
Hence, the internal fragmentation in the first block is (4 - 1) = 3 MB.
Sum of Internal Fragmentation in every block = (4-1)+(8-7)+(8-7)+(16-14)= 3+1+1+2
= 7MB.
Suppose process P5 of size 7MB comes. But this process cannot be accommodated
in spite of available free space because of contiguous allocation (as spanning is not
allowed). Hence, 7MB becomes part of External Fragmentation.
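The fragmentation figures above follow from simple subtraction. This sketch just reproduces that arithmetic for the partition and process sizes quoted in the example (4, 8, 8, 16 MB partitions holding 1, 7, 7, 14 MB processes).

partitions = [4, 8, 8, 16]        # partition sizes in MB, as in the example
processes  = [1, 7, 7, 14]        # sizes of the processes placed in them

internal = [p - q for p, q in zip(partitions, processes)]
print(internal, "total internal fragmentation:", sum(internal), "MB")   # 7 MB

# A 7 MB process cannot be placed even though 7 MB is free in total,
# because no single partition has 7 MB of contiguous free space.
fits_somewhere = any(free >= 7 for free in internal)
print("7 MB process fits:", fits_somewhere)    # False -> external fragmentation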
Advantages of Fixed Partitioning
Easy to implement: The algorithms required are simple and straightforward.
Disadvantages of Fixed Partitioning
1. Limit process size: A process larger than the size of any partition in
main memory cannot be accommodated. The partition size cannot be varied
according to the size of the incoming process. Hence, a process of size
32 MB in the above-stated example is invalid.
2. Internal fragmentation: Internal fragmentation occurs when a process’s size is smaller than the allocated
partition. The unused memory within the partition is wasted, leading to inefficient
memory usage.
Why is Fixed Partitioning not widely used in modern systems?
Due to its inflexibility in handling varying process sizes and the inefficiency caused
by internal fragmentation, fixed partitioning is not suitable for modern systems with
dynamic workloads.
How is internal fragmentation different from external fragmentation in memory
management?
Internal fragmentation occurs when allocated memory exceeds the memory required
by a process, leaving unused space within a partition. External fragmentation, on the
other hand, refers to the wasted space outside of allocated partitions, which is not
applicable in fixed partitioning since partitions are predetermined and cannot span.
Can processes larger than the partition size be accommodated in Fixed
Partitioning?
No, in fixed partitioning, a process larger than the partition size cannot be
accommodated because partitions are of fixed size and cannot dynamically adjust to
a process’s memory requirements.
What are the alternatives to Fixed Partitioning?
Alternatives include dynamic partitioning, where partition sizes can adjust to process
requirements, and non-contiguous memory allocation methods like paging and
segmentation, which allow processes to be loaded into non-contiguous memory
blocks, reducing fragmentation.
Contiguous
Non-Contiguous
In the contiguous technique, the executing process must be loaded entirely into
main memory. The contiguous technique can be divided into:
1. Fixed (static) partitioning
2. Variable (dynamic) partitioning
Initially, RAM is empty and partitions are made during the run-time according
to the process’s need instead of partitioning during system configuration.
The partition size varies according to the need of the process so that internal
fragmentation can be avoided to ensure efficient utilization of RAM.
The number of partitions in RAM is not fixed and depends on the number of
incoming processes and the Main Memory’s size.
Advantages of Variable (Dynamic) Partitioning
No Internal Fragmentation
The operating system maintains a table of free memory blocks or holes, each
of which represents a potential partition. When a process requests memory,
the operating system searches the table for a suitable hole that can
accommodate the requested amount of memory.
What is data partitioning?
The process of dividing up data among several tables, drives, or locations in order to
enhance database manageability or query processing speed is known as data
partitioning.
1. First Fit
2. Best Fit
3. Worst Fit
4. Next Fit
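Under the assumption that the free-memory table is simply a list of hole sizes, the sketch below shows how the first-fit, best-fit, and worst-fit strategies listed above would pick a hole for a request (the hole sizes and request are invented; next fit, which also remembers where the last search ended, is omitted for brevity).

def pick_hole(holes, request, strategy="first"):
    """Return the index of the chosen hole, or None if the request cannot be satisfied."""
    candidates = [(i, size) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][0]                          # first hole large enough
    if strategy == "best":
        return min(candidates, key=lambda c: c[1])[0]    # smallest hole that fits
    if strategy == "worst":
        return max(candidates, key=lambda c: c[1])[0]    # largest hole
    raise ValueError("unknown strategy")

holes = [100, 500, 200, 300, 600]        # hypothetical free holes (in KB)
for s in ("first", "best", "worst"):
    print(s, "->", pick_hole(holes, 212, s))
# first -> 1 (500 KB), best -> 3 (300 KB), worst -> 4 (600 KB)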
Non-contiguous memory allocation can be categorized in many ways:
1. Paging
2. Multilevel paging
3. Inverted paging
4. Segmentation
5. Segmented paging
MMU(Memory Management Unit) :
The run time mapping between Virtual address and Physical Address is done by a
hardware device known as MMU.
In memory management, the operating system handles the processes and moves
them between disk and memory for execution. It keeps track of available
and used memory.
MMU scheme :
CPU → MMU → Memory
Dynamic relocation using a relocation register:
1. The CPU generates a logical address, e.g. 346.
2. The MMU holds a relocation register (base register), e.g. 14000.
The value in the relocation register is added to every address generated by a user
process at the time the address is sent to memory. The user program never sees the
real physical addresses. The program can create a pointer to location 346, store it in
memory, manipulate it, and compare it with other addresses, all as the number
346.
The user program generates only logical addresses. However, these logical
addresses must be mapped to physical addresses before they are used.
Address binding :
Address binding is the process of mapping from one address space to another
address space. Logical address is an address generated by the CPU during
execution, whereas a physical address refers to the location in the memory unit (the
one that is loaded into memory). The logical address undergoes translation by the
MMU, or the address translation unit in particular. The output of this process is the
appropriate physical address, i.e. the location of the code/data in RAM.
The logical address generated by the CPU is first checked against the limit register. If the
value of the logical address is less than the value of the limit register, the
base address stored in the relocation register is added to the logical address to get
the physical address of the memory location.
If the logical address value is greater than the limit register, the CPU traps to
the OS, and the OS terminates the program with a fatal addressing error.
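Putting the last two paragraphs together, a minimal sketch of the check described above might look like this. The base value 14000 comes from the earlier example; the limit value is an assumption made for illustration.

RELOCATION_REGISTER = 14000     # base register, as in the example above
LIMIT_REGISTER = 4000           # assumed size of the process's address space

def mmu_translate(logical_address):
    """Compare against the limit register, then add the relocation register."""
    if logical_address >= LIMIT_REGISTER:
        raise MemoryError("trap to OS: addressing error (fatal)")
    return logical_address + RELOCATION_REGISTER

print(mmu_translate(346))       # 14346: logical address 346 relocated by the base
try:
    mmu_translate(5000)         # greater than the limit register
except MemoryError as err:
    print(err)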