Uploaded by Ramesh Pandey

Q.1 Explain Memory Management Requirements?

Answer Memory management keeps track of the status of each memory location, whether it is allocated or free. It allocates memory dynamically to programs on request and frees it for reuse when it is no longer needed. Memory management is meant to satisfy certain requirements that we should keep in mind.
The requirements of memory management are:
1. Relocation – In a multiprogramming system, the available memory is generally shared among a number of processes, so it is not possible to know in advance which other programs will be resident in main memory when a given program executes. Swapping active processes in and out of main memory enables the operating system to keep a larger pool of ready-to-execute processes.

When a program is swapped out to disk, it will not necessarily be swapped back into the same main-memory location, since that location may now be occupied by another process. We may need to relocate the process to a different area of memory. Thus a program may be moved around in main memory as a result of swapping.

2. Protection – When multiple programs are resident at the same time, there is always a danger that one program may write into the address space of another. Every process must therefore be protected against unwanted interference, whether accidental or intentional, when another process tries to write into its memory. There is a trade-off between the relocation and protection requirements: satisfying the relocation requirement increases the difficulty of satisfying the protection requirement.

Because the location of a program in main memory cannot be predicted, it is impossible to check absolute addresses at compile time to assure protection. Moreover, most programming languages allow addresses to be calculated dynamically at run time. The memory protection requirement must therefore be satisfied by the processor (hardware) rather than by the operating system, because the operating system can hardly control a process while that process occupies the processor; only the processor can check the validity of memory references as they occur.
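The combined relocation-and-protection check described above can be sketched with a pair of base and limit registers. This is a minimal illustration, not part of the original notes; the register values are invented example numbers:

```python
# Sketch: hardware base/limit registers give both relocation (add the
# base) and protection (trap when the reference exceeds the limit).

def translate(logical_addr, base, limit):
    """Relocate a logical address and check it against the limit.

    Raising an error here models the hardware trap that occurs on an
    out-of-bounds memory reference.
    """
    if logical_addr < 0 or logical_addr >= limit:
        raise MemoryError(f"protection fault: address {logical_addr} out of bounds")
    return base + logical_addr  # relocation: add the base register

# A process loaded at physical address 4000 with a 1000-byte region:
print(translate(250, base=4000, limit=1000))   # -> 4250
```

Because the base register is reloaded whenever the process is placed in memory, the same logical addresses keep working after the process is swapped back in at a different location.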
3. Sharing – A protection mechanism must be flexible enough to allow several processes to access the same portion of main memory. Allowing every process access to a single copy of a program, rather than giving each its own separate copy, is an advantage.

For example, multiple processes may use the same system file, and it is natural to load one copy of the file into main memory and let those processes share it. It is the task of memory management to allow controlled access to shared areas of memory without compromising protection. The mechanisms used to support relocation also support sharing.
4. Logical organization – Main memory is organized as a linear, one-dimensional address space consisting of a sequence of bytes or words. Most programs, however, are organized into modules, some of which are unmodifiable (read-only or execute-only) and some of which contain data that can be modified. To deal effectively with user programs, the operating system and computer hardware must support a basic module structure that provides the required protection and sharing. This has the following advantages:

1. Modules can be written and compiled independently, with all references from one module to another resolved by the system at run time.
2. Different modules can be given different degrees of protection.
3. Modules can be shared among processes. Sharing can be provided at the module level, letting the user specify the sharing that is desired.
5. Physical organization – Computer memory is structured in two levels, referred to as main memory and secondary memory. Main memory is relatively fast and costly compared to secondary memory, and it is volatile. Secondary memory is therefore provided for long-term storage of data, while main memory holds the programs currently in use. The major system concern is the flow of information between main memory and secondary memory, and it is impractical to leave this to programmers for two reasons:

a. When the main memory available for a program and its data is insufficient, the programmer must engage in a practice known as overlaying, which allows different modules to be assigned to the same region of memory at different times. Overlaying is very time-consuming for the programmer.
b. In a multiprogramming environment, the programmer cannot know at coding time how much space will be available or where that space will be located in memory.

Q.2 Explain memory partitioning?

Answer Memory partitioning is the scheme by which the memory of a computer system is divided into sections for use by the resident programs. These divisions are known as partitions. Memory can be partitioned in different ways: fixed, variable, and dynamic partitioning.

Fixed Partitioning
Fixed partitioning divides memory into non-overlapping partitions whose sizes are fixed and static. A process may be loaded into a partition of equal or greater size and is confined to its allocated partition.
If the processes are small relative to the fixed partition sizes, this poses a big problem: all partitions become occupied while large amounts of space inside them remain unused. This unused space is a form of fragmentation; in the fixed-partition context it is known as internal fragmentation (IF), because the wasted space lies inside (internal to) the partition allocated to a process.
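The effect can be made concrete with a short sketch. The partition and process sizes below are assumed for illustration, not taken from the notes:

```python
# Sketch: small processes placed into fixed partitions, measuring the
# internal fragmentation (unused space inside each occupied partition).

def fixed_partition_fit(partitions, processes):
    """First-fit each process into a free fixed partition of sufficient
    size; return the total internal fragmentation in MB."""
    free = list(partitions)
    internal = 0
    for size in processes:
        for i, part in enumerate(free):
            if part >= size:
                internal += part - size  # space wasted inside the partition
                free.pop(i)
                break
    return internal

# Four 8MB partitions holding processes of 2, 3 and 5 MB:
print(fixed_partition_fit([8, 8, 8, 8], [2, 3, 5]))  # -> 14 MB wasted
```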
Variable Partitioning
Variable partitioning divides memory into non-overlapping partitions of varying sizes. This is more flexible than the fixed configuration, but still not ideal: small processes are placed in small partitions and large processes in larger ones, yet processes do not necessarily fit their partitions exactly, and two large processes of the same size may have to compete for the single available partition that can hold either of them, even while other partitions stand unoccupied.
The flexibility offered by variable partitioning still does not completely solve our problems.

Dynamic Partitioning
In dynamic partitioning, partitions are created at load time so that each process receives exactly as much memory as it needs. Consider a typical 64MB memory in which the first 8MB is reserved for the operating system: the remaining 56MB is carved up as processes arrive, and as processes terminate they leave behind holes of varying sizes, which over time leads to external fragmentation.
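A small sketch of this scheme follows; the process names and sizes are invented for the 64MB/8MB scenario above:

```python
# Sketch of dynamic partitioning: partitions are created to exactly fit
# each process, so freeing a process leaves a hole of that exact size.

class DynamicMemory:
    def __init__(self, total_mb, os_mb):
        # one free hole covering everything after the OS region
        self.holes = [(os_mb, total_mb - os_mb)]  # (start, size) in MB
        self.allocated = {}

    def load(self, name, size):
        for i, (start, hole) in enumerate(self.holes):
            if hole >= size:                      # first fit
                self.allocated[name] = (start, size)
                if hole == size:
                    self.holes.pop(i)
                else:
                    self.holes[i] = (start + size, hole - size)
                return True
        return False                              # no hole is big enough

    def unload(self, name):
        start, size = self.allocated.pop(name)
        self.holes.append((start, size))          # hole (no coalescing here)

mem = DynamicMemory(total_mb=64, os_mb=8)
mem.load("P1", 20); mem.load("P2", 14); mem.load("P3", 18)
mem.unload("P2")        # frees a 14MB hole between P1 and P3
print(mem.holes)        # -> [(60, 4), (28, 14)]
```

A 16MB process now cannot be loaded even though 18MB is free in total, because the free space is split across two non-adjacent holes.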

Q.3 Explain the different types of memory fragmentation?

Answer Fragmentation
Fragmentation is an unwanted problem in which memory blocks remain unused because they are too small to be allocated to processes. Put another way, as processes are loaded into and removed from memory they leave behind free spaces, or holes, and these small blocks cannot be allocated to newly arriving processes, resulting in inefficient use of memory. There are basically two types of fragmentation:

Internal Fragmentation
Here the process is allocated a memory block larger than the process itself. Some part of the block is left unused, and this causes internal fragmentation.

Example: Suppose fixed partitioning (i.e. memory blocks of fixed sizes) is used for memory allocation in RAM, with block sizes of 2MB, 4MB, 4MB, and 8MB, and some part of the RAM occupied by the operating system (OS).

How to remove internal fragmentation?

This problem occurs because the sizes of the memory blocks are fixed. It can be removed by using dynamic partitioning to allocate space: in dynamic partitioning a process is allocated only as much space as it requires, so there is no internal fragmentation.

External Fragmentation
Here, although the total free space available is as large as a process needs, we are still unable to place the process in memory because that space is not contiguous. This is called external fragmentation.

Example: Continuing the example above, suppose three new processes P2, P3, and P4 arrive with sizes 2MB, 3MB, and 6MB respectively. They are allocated memory blocks of 2MB, 4MB, and 8MB respectively.
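The allocations in this example can be checked with a short first-fit sketch (the block sizes and process sizes are those given above):

```python
# Sketch of the example: fixed blocks of 2, 4, 4 and 8 MB, and processes
# P2, P3, P4 of 2, 3 and 6 MB placed by first fit.

blocks = [2, 4, 4, 8]             # MB, the part of RAM left after the OS
procs = {"P2": 2, "P3": 3, "P4": 6}

free = list(blocks)
waste = {}
for name, size in procs.items():
    for i, b in enumerate(free):
        if b >= size:
            waste[name] = b - size     # internal fragmentation in this block
            free.pop(i)
            break

print(waste)                           # -> {'P2': 0, 'P3': 1, 'P4': 2}
print(sum(waste.values()), "MB lost to internal fragmentation")   # 3 MB
print(free)                            # -> [4]  (one 4MB block still free)
```

The leftover 4MB block plus the 3MB wasted inside occupied blocks illustrate the problem: a 5MB process could not be loaded even though 7MB is free in total.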

How to remove external fragmentation?

This problem occurs because memory is allocated to processes contiguously. If we remove that condition, external fragmentation can be reduced. This is what is done in paging and segmentation, the non-contiguous memory allocation techniques, where memory is allocated to processes non-contiguously.

Q.4 Define Swapping?

Answer Swapping is a mechanism in which a process can be temporarily moved out of main memory to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage into main memory.

Though performance is usually affected by the swapping process, it helps in running multiple large processes in parallel, and for that reason swapping is also known as a technique for memory compaction.

The total time taken by swapping includes the time to move the entire process out to secondary disk and the time to copy it back into memory, as well as the time the process takes to regain main memory.

Let us assume a user process of size 2048KB and a standard hard disk with a data transfer rate of about 1MB (1024KB) per second. The actual transfer of the 2048KB process to or from memory takes 2048KB / 1024KB per second = 2 seconds, so a complete swap-out followed by a swap-in takes about 4 seconds of transfer time.
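The swap-time arithmetic can be written out as a quick sketch (assuming, as above, a 2048KB process and a 1024KB/s transfer rate):

```python
# Sketch of the swap-time calculation: transfer time one way is
# process size divided by the disk's data transfer rate.

process_kb = 2048
rate_kb_per_s = 1024              # 1 MB per second

one_way = process_kb / rate_kb_per_s     # swap out OR swap back in
total = 2 * one_way                      # swap out AND swap back in
print(one_way, "seconds each way,", total, "seconds total")  # 2.0, 4.0
```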
Q.5 List and explain memory allocation strategies

Answer Memory allocation is the act of assigning physical or virtual memory address space to a process (its instructions and data). The two fundamental methods of memory allocation are static and dynamic memory allocation.

Static Memory Allocation

Static memory allocation is performed when the compiler compiles the program and generates object files, the linker merges these object files into a single executable file, and the loader loads this executable into main memory for execution. In static memory allocation, the size of the data required by the process must be known before its execution begins.

If the data sizes are not known before the process executes, they have to be guessed. If the guessed size is larger than required, memory is wasted; if it is smaller, the process cannot execute properly.

Static memory allocation needs no allocation operations during the execution of the process, since all of them are completed before execution starts; this leads to faster execution. Static memory allocation is thus more efficient than dynamic memory allocation.

Dynamic Memory Allocation

Dynamic memory allocation is performed while the program is executing: memory is allocated to the entities of the program when they are first used at run time.

Because the actual size of the data required is known at run time, exactly the needed memory space is allocated, reducing memory wastage.

Dynamic memory allocation gives the execution of the program flexibility, since the amount of memory required can be decided as the program runs. For a large program, dynamic allocation can be performed on just the parts of the program currently in use, which reduces memory wastage and improves system performance.

On the other hand, allocating memory dynamically creates overhead. Some allocation operations are performed repeatedly during program execution, creating further overhead and slowing execution.

Dynamic memory allocation does not require special support from the operating system; it is the programmer's responsibility to design the program to take advantage of it.

Thus dynamic memory allocation is flexible but slower than static memory allocation.

Q.7 Draw and explain segmentation with paging?

Answer Segmented Paging

Pure segmentation is not very popular and is not used in many operating systems. However, segmentation can be combined with paging to get the best features of both techniques.

In segmented paging, a process's logical address space is divided into variable-size segments, which are further divided into fixed-size pages.

1. Pages are smaller than segments.

2. Each segment has its own page table, which means every program has multiple page tables.

3. The logical address is represented as a segment number, a page number, and a page offset.

Segment Number → indexes the appropriate entry in the segment table.

Page Number → points to the exact page within the segment.

Page Offset → used as an offset within the page frame.

Each page table contains information about every page of its segment, and the segment table contains information about every segment. Each segment table entry points to that segment's page table, and every page table entry maps to one of the frames holding a page of the segment.
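The two-level lookup described above can be sketched as follows. The table contents and the 256-byte page size are invented for illustration:

```python
# Sketch of segmented-paging address translation:
# segment table -> per-segment page table -> physical frame.

PAGE_SIZE = 256

# segment number -> that segment's page table;
# each page table maps page number -> physical frame number.
segment_table = {
    0: {0: 5, 1: 9},     # segment 0 has two pages, in frames 5 and 9
    1: {0: 2},           # segment 1 has one page, in frame 2
}

def translate(seg, page, offset):
    page_table = segment_table[seg]      # 1st lookup: segment table entry
    frame = page_table[page]             # 2nd lookup: segment's page table
    return frame * PAGE_SIZE + offset    # frame base + page offset

print(translate(0, 1, 17))   # frame 9 -> 9*256 + 17 = 2321
```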
Q.8 Explain the difference between paging and segmentation?

Answer
1. Paging divides a process into fixed-size pages; segmentation divides it into variable-size segments.
2. Paging is invisible to the programmer, while segments correspond to logical units of the program (code, data, stack) that the programmer is aware of.
3. Paging suffers from internal fragmentation; segmentation suffers from external fragmentation.
4. In paging the logical address is a page number plus an offset; in segmentation it is a segment number plus an offset that is checked against the segment's length.
5. Paging uses a page table per process; segmentation uses a segment table per process.
Q.9 Write short notes
1) Virtual Memory: - Virtual memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory. The addresses a program uses to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program-generated addresses are translated automatically to the corresponding machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main storage locations.
Virtual memory is a technique implemented using both hardware and software. It maps the memory addresses used by a program, called virtual addresses, onto physical addresses in computer memory.
1. All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time. This means a process can be swapped in and out of main memory, occupying different places in main memory at different times during its execution.
2. A process may be broken into a number of pieces, and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
2) Demand Paging: - Loading a page into memory on demand (whenever a page fault occurs) is known as demand paging. The process includes the following steps:
1. If the CPU tries to refer to a page that is not currently in main memory, it generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process into a blocked state; for execution to proceed, the OS must bring the required page into memory.
3. The OS locates the required page on secondary storage (the backing store).
4. The required page is brought into physical memory; if no frame is free, a page replacement algorithm decides which resident page to replace.
5. The page table is updated accordingly.
6. A signal is sent to the CPU to continue program execution, and the process is placed back into the ready state.
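The steps above can be sketched in miniature. The page table, backing store, and frame numbers below are simplified stand-ins, and replacement (step 4's hard case) is omitted:

```python
# Sketch of demand paging: pages are loaded only when first referenced.

page_table = {}               # page -> frame, for resident pages only
backing_store = {0, 1, 2, 3}  # pages that exist on disk
free_frames = [10, 11]
faults = 0

def access(page):
    """Return the frame holding `page`, loading it on a page fault."""
    global faults
    if page in page_table:            # resident: no fault (step 1 passes)
        return page_table[page]
    faults += 1                       # page fault: the process would block
    assert page in backing_store      # steps 3-4: fetch page from disk
    frame = free_frames.pop()         # free frame (replacement omitted)
    page_table[page] = frame          # step 5: update the page table
    return frame                      # step 6: the process resumes

access(0); access(1); access(0)
print(faults)    # -> 2 (the repeat access to page 0 is a hit)
```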

Q.10 Explain Page Replacement Strategies?


1) LRU :- In this algorithm, the page replaced is the one that was least recently used.
Example - Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 arrive they are allocated to the empty slots —> 4 page faults.
0 is already there —> 0 page faults.
When 3 comes it takes the place of 7, because 7 is the least recently used —> 1 page fault.
0 is already in memory —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
The remaining references cause 0 page faults because those pages are already in memory, giving 6 page faults in total.
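The walkthrough above can be reproduced with a short LRU simulation:

```python
# Sketch of LRU page replacement: on a miss with full frames, evict the
# page whose last use is furthest in the past.

def lru_faults(refs, nframes):
    frames = []                  # ordered least- to most-recently used
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)  # hit: refresh this page's recency
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)    # evict the least recently used page
        frames.append(page)      # the reference makes it most recent
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 4))       # -> 6 page faults
```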

2) FIFO :- First In First Out (FIFO) is the simplest page replacement algorithm. The operating system keeps all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.

Example-1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.

Initially all slots are empty, so when 1, 3, 0 arrive they are allocated to the empty slots —> 3 page faults.
When 3 comes, it is already in memory —> 0 page faults.
Then 5 comes; it is not in memory, so it replaces the oldest page, 1 —> 1 page fault.
6 comes; it is also not in memory, so it replaces the oldest page, 3 —> 1 page fault.
Finally, when 3 comes again it is no longer in memory, so it replaces 0 —> 1 page fault, giving 6 page faults in total.
Belady’s anomaly – Belady’s anomaly shows that it is possible to get more page faults when increasing the number of page frames under the First In First Out (FIFO) page replacement algorithm. For example, for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 we get 9 page faults with 3 frames, but 10 page faults when we increase to 4 frames.
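Both the FIFO example and Belady's anomaly can be checked with a small simulation:

```python
# Sketch of FIFO page replacement: evict the page that entered memory
# earliest; hits do not change the queue order.
from collections import deque

def fifo_faults(refs, nframes):
    queue, faults = deque(), 0
    for page in refs:
        if page not in queue:
            faults += 1
            if len(queue) == nframes:
                queue.popleft()       # evict the oldest resident page
            queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))   # -> 6 page faults

belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))   # -> 9 10
```

The last line reproduces the anomaly: adding a fourth frame raises the fault count from 9 to 10.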

3) Optimal Page Replacement :- In this algorithm, the page replaced is the one that will not be used for the longest duration of time in the future.
Example-2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 arrive they are allocated to the empty slots —> 4 page faults.
0 is already there —> 0 page faults.
When 3 comes it takes the place of 7, because 7 is not used for the longest duration of time in the future —> 1 page fault.
0 is already there —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
The remaining references cause 0 page faults because those pages are already in memory, giving 6 page faults in total.
Optimal page replacement is perfect but not possible in practice, as the operating system cannot know future requests. Its use is to set up a benchmark against which other replacement algorithms can be analyzed.
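For completeness, the optimal (MIN) policy from the example can also be simulated, which is how it is actually used as a benchmark:

```python
# Sketch of optimal page replacement: on a miss with full frames, evict
# the resident page whose next use lies furthest in the future.

def optimal_faults(refs, nframes):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                     # hit
        faults += 1
        if len(frames) < nframes:
            frames.append(page)          # a frame is still free
            continue
        future = refs[i + 1:]
        # pages never used again count as infinitely far in the future
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else len(future))
        frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(optimal_faults(refs, 4))   # -> 6 page faults
```

Comparing this fault count against LRU or FIFO on the same reference string shows how close a practical policy comes to the unattainable optimum.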
