Memory Management: Address Binding

Memory management is the process of controlling and coordinating computer memory by assigning portions called blocks to running programs to optimize performance. It involves address binding, which maps logical or virtual addresses generated by programs to corresponding physical memory addresses using a memory management unit. Address binding can occur at compile time, load time, or execution time. Logical addresses are generated by CPUs and refer to locations in a program's logical address space, while physical addresses identify actual locations in physical memory. The memory management unit maps logical addresses to physical addresses so programs can access memory locations indirectly using logical addresses.


Memory management

Memory management is the process of controlling and coordinating computer memory, assigning portions called blocks to various running programs to optimize overall system performance. Memory management resides in hardware, in the OS (operating system), and in programs and applications.

ADDRESS BINDING
Address binding is the process of mapping from one address space to another: the program's logical (virtual) addresses are mapped to the corresponding physical (main memory) addresses. In other words, a given logical address is mapped by the MMU (Memory Management Unit) to a physical address.

A logical address is an address generated by the CPU during execution, whereas a physical address refers to a location in the memory unit (the one that is actually loaded into memory). Note that the user deals only with logical (virtual) addresses. The logical address undergoes translation by the MMU, the address translation unit in particular. The output of this process is the corresponding physical address, i.e. the location of the code/data in RAM.

Address binding can be done in three different ways:

Compile time – If it is known at compile time where the process will reside in memory, absolute addresses are generated, i.e. physical addresses are embedded into the executable of the program during compilation. Loading such an executable into memory is very fast, but if the generated address space is already occupied by another process, the program cannot run and must be recompiled to change its address space.

Load time – If it is not known at compile time where the process will reside, relocatable addresses are generated. The loader translates the relocatable addresses to absolute addresses: the base address of the process in main memory is added to all logical addresses by the loader. If the base address of the process changes, the process must be reloaded. A minimal sketch of this relocation step follows below.

Execution time – The instructions are already in memory and are being processed by the CPU; binding is delayed until run time, so additional memory may be allocated and/or deallocated, and the process can be moved from one memory area to another during execution (e.g. compaction; this is also what makes dynamic linking, i.e. linking done at load or run time, possible).
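To make load-time binding concrete, here is a minimal sketch in C, assuming a toy executable image whose relocation table lists the offsets of every address field the loader must patch; the image layout, the table, and the numbers are purely illustrative, not a real object-file format.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative only: a toy "executable" image whose slots 2 and 5
 * hold relocatable (base-relative) addresses. */
static uint32_t image[8] = { 0, 0, 0x0010, 0, 0, 0x0024, 0, 0 };

static const int reloc_table[] = { 2, 5 };  /* offsets needing patching */
static const int reloc_count = 2;

/* Load-time binding: the loader adds the base address at which the
 * process is placed to every relocatable address field. */
static void relocate(uint32_t base)
{
    for (int i = 0; i < reloc_count; i++)
        image[reloc_table[i]] += base;
}

int main(void)
{
    uint32_t base = 0x4000;   /* where the OS decided to load the process */
    relocate(base);
    printf("patched addresses: 0x%04x 0x%04x\n", image[2], image[5]);
    return 0;
}
```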

Logical and Physical Address in Operating System


Logical Address

A logical address is generated by the CPU while a program is running. Because it does not exist physically, it is also known as a virtual address. The CPU uses this address as a reference to access the physical memory location. The term Logical Address Space is used for the set of all logical addresses generated from a program's perspective. The hardware device called the Memory-Management Unit (MMU) maps each logical address to its corresponding physical address.

Physical Address

A physical address identifies the physical location of the required data in memory. The user never deals directly with a physical address; it is reached through the corresponding logical address. The user program generates logical addresses and behaves as if it runs in this logical address space, but the program needs physical memory for its execution, so logical addresses must be mapped to physical addresses by the MMU before they are used. The term Physical Address Space is used for the set of all physical addresses corresponding to the logical addresses in a logical address space.

(Figure: mapping virtual addresses to physical addresses)

Differences Between Logical and Physical Address in Operating System

1. The basic difference is that a logical address is generated by the CPU from the program's perspective, whereas a physical address is a location that exists in the memory unit.
2. Logical Address Space is the set of all logical addresses generated by the CPU for a program, whereas the set of all physical addresses mapped to the corresponding logical addresses is called the Physical Address Space.
3. A logical address does not exist physically in memory, whereas a physical address is a location in memory that can be accessed physically.
4. Compile-time and load-time address binding produce logical and physical addresses that are identical, whereas they differ from each other under execution-time (run-time) binding.
5. The logical address is generated by the CPU while the program is running, whereas the physical address is computed by the Memory Management Unit (MMU).

Comparison Chart:

Basic: A logical address is generated by the CPU; a physical address is a location in a memory unit.

Address Space: Logical Address Space is the set of all logical addresses generated by the CPU with reference to a program; Physical Address Space is the set of all physical addresses mapped to the corresponding logical addresses.

Visibility: The user can view the logical address of a program, but can never view its physical address.

Generation: The logical address is generated by the CPU; the physical address is computed by the MMU.

Access: The user uses the logical address to access the physical address; the physical address can be reached only indirectly, never directly.

DYNAMIC LINKING

1. When one program depends on some other program, rather than loading all the dependent programs in advance, the system links the dependent program to the main executing program only when it is required. This mechanism is known as dynamic linking.
2. Dynamic linking refers to linking that is done during load or run time, not when the executable is created.
3. With dynamic linking, the linker does minimal work while creating the executable. For the dynamic linker to work it actually has to load the libraries too, hence it is also called a linking loader.

DYNAMIC LOADING
All programs are loaded into main memory for execution. Sometimes the complete program is loaded into memory, but sometimes a certain part or routine of the program is loaded into main memory only when it is called by the program; this mechanism is called dynamic loading, and it improves performance.
Dynamic loading therefore means loading a library (or any other binary, for that matter) into memory during load or run time.
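As a concrete illustration of dynamic loading on a POSIX system, the standard dlopen/dlsym interface loads a shared object into the running process and resolves a symbol from it at run time; the library name libm.so.6 is typical on Linux but may differ on other systems.

```c
/* Dynamic loading with the POSIX dlopen API.
 * Build with:  cc demo.c -ldl   (the -ldl flag may be optional on newer glibc) */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* Load the math library into this process at run time. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve the symbol "cos" and call it through a function pointer. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }
    printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);   /* unload the library when no longer needed */
    return 0;
}
```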

Differences between Linking and Loading:

1. The key difference between linking and loading is that linking generates the executable file of a program, whereas loading brings the executable file obtained from linking into main memory for execution.
2. Linking takes as input the object modules of a program generated by the assembler or compiler, whereas loading takes as input the executable module generated by linking.
3. Linking combines all object modules of a program to generate the executable module, and it also links the library functions used in the object modules to the built-in libraries of the high-level programming language. Loading, on the other hand, allocates space to the executable module in main memory.

Loading and linking are further categorized into two types, static and dynamic:

Static: Loading the entire program into main memory before the program starts executing is called static loading.
Dynamic: Loading the program into main memory on demand is called dynamic loading.

Static: Memory is utilized inefficiently, because the entire program is brought into main memory whether it is required or not.
Dynamic: Memory is utilized efficiently.

Static: Program execution will be faster.
Dynamic: Program execution will be slower.

Static: A statically linked program takes a constant load time every time it is loaded into memory for execution.
Dynamic: Dynamic linking is performed at run time by the operating system.

Static: If static loading is used, then static linking is applied accordingly.
Dynamic: If dynamic loading is used, then dynamic linking is applied accordingly.

Static: Static linking is performed by programs called linkers as the last step in compiling a program. Linkers are also called link editors.
Dynamic: In dynamic linking this is not the case: individual shared modules can be updated and recompiled separately. This is one of the greatest advantages dynamic linking offers.

Static: In static linking, if any of the external programs has changed, they have to be recompiled and re-linked again, otherwise the changes will not be reflected in the existing executable file.
Dynamic: In dynamic linking, load time may be reduced if the shared library code is already present in memory.

SHARED LIBRARIES
Shared libraries are libraries that are linked dynamically. Shared libraries allow common OS code to be bundled up and used by any application software on the system without loading multiple copies into memory: all applications on the system can use it without consuming additional memory.

Shared libraries are useful for sharing code that is common across many applications. For example, it is more economical to pack all the code related to the TCP/IP implementation in one shared library. Data, however, cannot be shared in this way, because every application needs its own set of data. Applications such as browsers, ftp, telnet, etc. make use of the shared 'network' library to obtain that functionality.

Every operating system has its own representation of, and tool-set for creating, shared libraries, but the concepts are more or less the same.

OVERLAY
Overlaying means "the process of transferring a block of program code or other data into internal memory, replacing what is already stored". Overlaying is a technique that allows programs to be larger than the computer's main memory. An embedded system would normally use overlays because of its limited physical memory (the internal memory of a system-on-chip) and the lack of virtual memory facilities.
Overlaying requires the programmer to split the object code into multiple completely independent sections; the overlay manager linked into the code loads the required overlay dynamically and swaps overlays when necessary.

Advantages –

1. Reduced memory requirement
2. Reduced time requirement

Disadvantages –

1. The overlay map must be specified by the programmer
2. The programmer must know the memory requirements
3. Overlapped modules must be completely disjoint
4. The design of an overlay structure is complex and not possible in all cases

Swapping
Swapping is a mechanism in which a process can be temporarily swapped out of main memory (moved) to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage into main memory.
Although performance is usually affected by the swapping process, it helps in running multiple large processes in parallel, and for that reason swapping is also known as a technique for memory compaction.
The total time taken by the swapping process includes the time it takes to move the entire process out to secondary disk and to copy it back into memory, as well as the time the process takes to regain main memory.

Contiguous Memory Allocation

In contiguous memory allocation each process is contained in a single contiguous block of memory. Memory is divided into several fixed-size partitions, and each partition contains exactly one process. When a partition is free, a process is selected from the input queue and loaded into it. The free blocks of memory are known as holes; the set of holes is searched to determine which hole is best to allocate.
In contiguous memory allocation, the memory given to a process stays together in one place; it is not scattered here and there across the whole memory space.

In contiguous memory allocation, both the operating system and the user process must reside in main memory. Main memory is divided into two portions: one portion is for the operating system and the other is for the user program.
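A minimal sketch of how hardware can protect such a contiguous partition, assuming one relocation (base) register and one limit register per process; the register values below are made up for illustration.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Per-process relocation and limit registers, loaded by the dispatcher. */
static uint32_t base_reg  = 0x30000;   /* start of this process's partition */
static uint32_t limit_reg = 0x08000;   /* size of the partition */

/* Every logical address is checked against the limit, then relocated. */
static bool translate(uint32_t logical, uint32_t *physical)
{
    if (logical >= limit_reg)
        return false;                  /* trap: addressing outside the partition */
    *physical = base_reg + logical;
    return true;
}

int main(void)
{
    uint32_t phys;
    if (translate(0x1234, &phys))
        printf("logical 0x1234 -> physical 0x%05x\n", phys);
    if (!translate(0x9000, &phys))
        printf("logical 0x9000 -> protection fault\n");
    return 0;
}
```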
Paging
Paging is a memory management technique in which the process address space is broken into blocks of the same size called pages (the size is a power of 2, typically between 512 bytes and 8192 bytes). The size of a process is measured in the number of pages.
Similarly, main memory is divided into small fixed-size blocks of (physical) memory called frames, and the size of a frame is kept the same as that of a page to obtain optimum utilization of main memory and to avoid external fragmentation.

Address Translation
A page address is called a logical address and is represented by a page number and the offset.

Logical Address = page number + page offset

A frame address is called a physical address and is represented by a frame number and the offset.

Physical Address = frame number + page offset

A data structure called the page map table is used to keep track of the relation between a page of a process and a frame in physical memory.

When the system allocates a frame to a page, it translates the logical address into a physical address and creates an entry in the page table that is used throughout the execution of the program.
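A minimal sketch of this translation in C, assuming 4 KB pages and a tiny one-level page table whose frame numbers are made up for illustration.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                      /* 4 KB pages and frames */

/* Toy page table: page_table[p] holds the frame number for page p. */
static uint32_t page_table[] = { 5, 9, 2, 7 };

/* Split the logical address, look up the frame, recombine with the offset. */
static uint32_t translate(uint32_t logical)
{
    uint32_t page   = logical / PAGE_SIZE;   /* page number */
    uint32_t offset = logical % PAGE_SIZE;   /* page offset */
    uint32_t frame  = page_table[page];      /* page-table lookup */
    return frame * PAGE_SIZE + offset;       /* physical address */
}

int main(void)
{
    uint32_t logical = 2 * PAGE_SIZE + 100;  /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}
```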

Advantages and Disadvantages of Paging

Here is a list of advantages and disadvantages of paging −

1. Paging reduces external fragmentation, but still suffers from internal fragmentation.
2. Paging is simple to implement and is considered an efficient memory management technique.
3. Due to the equal size of pages and frames, swapping becomes very easy.
4. The page table requires extra memory space, so paging may not be good for a system with small RAM.

Segmentation
Segmentation is a memory management technique in which each job is divided into several segments of different sizes, one for each module, containing pieces that perform related functions. Each segment is actually a different logical address space of the program.
When a process is to be executed, its segments are loaded into non-contiguous memory, though every segment is loaded into a contiguous block of available memory.
Segmentation works very similarly to paging, but here the segments are of variable length, whereas in paging the pages are of fixed size.
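For comparison with paging, here is a sketch of segment-based translation, assuming a small segment table in which each entry holds a base and a limit; the table contents are illustrative.

```c
#include <stdio.h>
#include <stdint.h>

struct segment { uint32_t base; uint32_t limit; };

/* Toy segment table: code, data and stack segments of different sizes. */
static struct segment seg_table[] = {
    { 0x10000, 0x4000 },   /* segment 0: code  */
    { 0x30000, 0x1000 },   /* segment 1: data  */
    { 0x50000, 0x2000 },   /* segment 2: stack */
};

/* A logical address is a (segment, offset) pair; the offset is checked
 * against the segment's limit and then added to its base. */
static int translate(uint32_t seg, uint32_t offset, uint32_t *physical)
{
    if (offset >= seg_table[seg].limit)
        return -1;                        /* trap: offset out of bounds */
    *physical = seg_table[seg].base + offset;
    return 0;
}

int main(void)
{
    uint32_t phys;
    if (translate(1, 0x0200, &phys) == 0)
        printf("(segment 1, offset 0x0200) -> 0x%05x\n", phys);
    if (translate(1, 0x2000, &phys) != 0)
        printf("(segment 1, offset 0x2000) -> addressing error\n");
    return 0;
}
```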

Virtual Memory
A computer can address more memory than the amount physically installed on the system. This extra memory is called virtual memory, and it is a section of a hard disk that is set up to emulate the computer's RAM.

The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory serves two purposes. First, it allows us to extend the use of physical memory by using disk. Second, it allows us to have memory protection, because each virtual address is translated to a physical address.

The following are situations in which the entire program is not required to be fully loaded in main memory:

1. User-written error handling routines are used only when an error occurs in the data or computation.
2. Certain options and features of a program may be used rarely.
3. Many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used.

The ability to execute a program that is only partially in memory would therefore bring many benefits:

1. Less I/O would be needed to load or swap each user program into memory.
2. A program would no longer be constrained by the amount of physical memory that is available.
3. Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.

In modern microprocessors intended for general-purpose use, a memory management unit, or MMU, is built into the hardware. The MMU's job is to translate virtual addresses into physical addresses.
Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system, and demand segmentation can likewise be used to provide virtual memory.

Demand Paging
A demand paging system is quite similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program's pages out to disk or any of the new program's pages into main memory; instead, it just begins executing the new program after loading its first page and fetches that program's pages as they are referenced.
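A rough sketch of the idea, assuming a page table with a valid bit: the first reference to a page causes a page fault, the page is then "loaded" and the access succeeds. All names and sizes here are illustrative.

```c
#include <stdio.h>
#include <stdbool.h>

#define NUM_PAGES 8

/* Each page-table entry records whether the page is in memory and where. */
struct pte { bool valid; int frame; };
static struct pte page_table[NUM_PAGES];        /* all invalid at start */
static int next_free_frame = 0;
static int page_faults = 0;

/* Demand paging: a page is brought in only the first time it is referenced. */
static int access_page(int page)
{
    if (!page_table[page].valid) {
        page_faults++;                           /* page fault */
        page_table[page].frame = next_free_frame++;  /* "load" from disk */
        page_table[page].valid = true;
    }
    return page_table[page].frame;
}

int main(void)
{
    int refs[] = { 0, 2, 0, 3, 2, 0 };           /* reference string */
    for (int i = 0; i < 6; i++)
        printf("page %d -> frame %d\n", refs[i], access_page(refs[i]));
    printf("page faults: %d\n", page_faults);    /* only first touches fault */
    return 0;
}
```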
Advantages
The following are the advantages of demand paging −

1. Large virtual memory.
2. More efficient use of memory.
3. There is no limit on the degree of multiprogramming.

Disadvantages
1. The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques.

Page Replacement Algorithm

Page replacement algorithms are the techniques by which an operating system decides which memory pages to swap out (write to disk) when a page of memory needs to be allocated. Page replacement happens whenever a page fault occurs and a free page cannot be used for the allocation, either because no free page is available or because the number of free pages is lower than required.
First In First Out (FIFO) algorithm
1. The oldest page in main memory is the one selected for replacement.
2. Easy to implement: keep a list, replace pages from the tail, and add new pages at the head.

Optimal Page algorithm
1. An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. Such an algorithm exists and has been called OPT or MIN.
2. Replace the page that will not be used for the longest period of time; this requires knowing the future time at which each page will next be used.
Least Recently Used (LRU) algorithm
1. The page that has not been used for the longest time in main memory is the one selected for replacement.
2. Easy to implement: keep a list and replace pages by looking back in time. A small simulation contrasting FIFO and LRU follows below.
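To make the difference concrete, here is a small simulation that counts page faults for FIFO and LRU on the same reference string, assuming three frames; the reference string is a classic illustrative one, not taken from the text above.

```c
#include <stdio.h>
#include <string.h>

#define FRAMES 3                             /* number of physical frames */

/* Choose a victim frame: an empty frame if one exists,
 * otherwise the frame with the oldest timestamp. */
static int pick_victim(const int *frames, const int *stamp)
{
    for (int f = 0; f < FRAMES; f++)
        if (frames[f] == -1)
            return f;
    int victim = 0;
    for (int f = 1; f < FRAMES; f++)
        if (stamp[f] < stamp[victim])
            victim = f;
    return victim;
}

/* Count page faults over a reference string.  For FIFO the timestamp is the
 * load time only; for LRU it is also refreshed on every hit. */
static int simulate(const int *refs, int n, int lru)
{
    int frames[FRAMES], stamp[FRAMES], faults = 0;
    memset(frames, -1, sizeof frames);       /* frames start empty */
    memset(stamp, 0, sizeof stamp);

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int f = 0; f < FRAMES; f++)
            if (frames[f] == refs[t])
                hit = f;

        if (hit < 0) {                       /* page fault: bring the page in */
            int victim = pick_victim(frames, stamp);
            frames[victim] = refs[t];
            stamp[victim] = t;
            faults++;
        } else if (lru) {
            stamp[hit] = t;                  /* LRU: mark as recently used */
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    int n = sizeof refs / sizeof refs[0];
    printf("FIFO page faults: %d\n", simulate(refs, n, 0));
    printf("LRU  page faults: %d\n", simulate(refs, n, 1));
    return 0;
}
```

With three frames this particular string produces 9 faults under FIFO and 10 under LRU, a reminder that no single policy wins on every workload.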

Page Buffering algorithm

1. To let a process start quickly, keep a pool of free frames.
2. On a page fault, select a page to be replaced.
3. Write the new page into a frame from the free pool, mark the page table, and restart the process.
4. Later, write the dirty page out to disk and place the frame holding the replaced page into the free pool.
Least Frequently Used (LFU) algorithm
1. The page with the smallest reference count is the one selected for replacement.
2. This algorithm suffers in the situation where a page is used heavily during the initial phase of a process but is never used again.

Most Frequently Used (MFU) algorithm
1. This algorithm is based on the argument that the page with the smallest count was probably just brought in and has yet to be used, so the page with the largest count is the one selected for replacement.
Thrashing in OS

If page faults and the resulting swapping happen very frequently, at a high rate, the operating system has to spend more of its time swapping pages than doing useful work. This state is called thrashing. Because of this, CPU utilization is reduced.

Effect of Thrashing

Whenever thrashing starts, the operating system tries to apply either a global page replacement algorithm or a local page replacement algorithm.

Global Page Replacement

Since global page replacement can take a frame from any process, it tries to bring in more pages whenever thrashing is found. But what actually happens is that, as a result, no process gets enough frames, and consequently the thrashing increases more and more. So a global page replacement algorithm is not suitable when thrashing happens.

Local Page Replacement

Unlike a global page replacement algorithm, local page replacement selects only pages that belong to the faulting process, so there is a chance of reducing the thrashing. However, local page replacement has many disadvantages of its own, so it is only an alternative to global page replacement in a thrashing scenario, not a complete remedy.
