Css 316 Power Point Lecture Notes2

The course CSS 316 on Computer Architecture and Organization introduces fundamental concepts of computer systems, focusing on internal operations and the interface between hardware and software. It covers memory types, architecture design, fault tolerance, and optimization techniques, aiming to equip students with essential programming and security skills. Key topics include main and auxiliary memory systems, access methods, and data transfer modes.


COMPUTER ARCHITECTURE AND ORGANIZATION

CSS 316

BY:

DR. NOEL M.D


CYBER SECURITY SCIENCE DEPT., SICT, FUTMINNA
2024/2025 SESSION
Summary of the Course

• Introduction to Computer Organization, as the title implies, introduces you to the fundamental concepts of how the computer system operates internally to perform the basic tasks required of it by the end-users. In this course, therefore, you should acquire a basic knowledge of the internal workings of the components of the computer system.

• The content of the course material was planned and written to ensure that you acquire the proper knowledge and skills to programme the computer to do your bidding, and to know how to secure each component of the computer. The essence is to equip you with the necessary knowledge, competence and tools.
Course Introduction
• Computer Organization is concerned with the structure and behavior of a
computer system as seen by the user. It acts as the interface between hardware
and software.

• Computer architecture refers to those attributes of a system visible to a programmer, or, put another way, those attributes that have a direct impact on the logical execution of a program. Computer organization refers to the operational units and their interconnections that realize the architecture specification.
• Examples of architectural attributes include the instruction set, the number of bits used to represent various data types (e.g., numbers and characters), I/O mechanisms, and techniques for addressing memory.

• Examples of organizational attributes include those hardware details transparent to the programmer, such as control signals, interfaces between the computer and peripherals, and the memory technology used.
INTENDED LEARNING OUTCOMES (ILOs)
• At the end of this module, the student should be able to discuss in detail:
• Memory types and their functionalities
• The history of memory devices
• Modes to augment processing
• Access methods
• Pipelining
• Certain objectives have been set out for the achievement of the course aims. Apart from the course objectives, each unit of this course has its own objectives. Upon completion of this course, you should be able to:

I. Describe how computer memories function and how they can be optimized
II. Explain the major functions and techniques involved in architecture design and study
III. Explain methods to tolerate faults in computer architectures
IV. Explain methods to optimize control in computer systems
MODULE ONE

UNIT ONE: Memory System

1.1 Main Memories
1.2 Auxiliary Memories
1.3 Memory Access Methods
1.4 Memory Mapping and Virtual Memories
1.5 Replacement Algorithms
1.6 Data Transfer Modes
1.0 INTRODUCTION

• The memory unit is an essential component in any digital computer, since it is needed for storing programs and data.

• A very small computer with a limited application may be able to fulfill its intended task without the need for additional storage capacity. Most general-purpose computers, however, would run more efficiently if they were equipped with additional storage beyond the capacity of the main memory.

• There is just not enough space in one memory unit to accommodate all the programs used in a typical computer.
• The memory unit that communicates directly with the CPU is called
the main memory. Devices that provide backup storage are called
auxiliary memory.

• The most common auxiliary memory devices used in computer systems are magnetic disks and tapes. They are used for storing system programs, large data files, and other backup information. Only programs and data currently needed by the processor reside in main memory.
• A special very-high speed memory called a Cache is sometimes used
to increase the speed of processing by making current programs and
data available to the CPU at a rapid rate. The cache memory is
employed in computer systems to compensate for the speed
differential between main memory access time and processor logic.

• The complex subject of computer memory is made more manageable if we classify memory systems according to their key characteristics. Internal memory is often equated with main memory, but there are other forms of internal memory. The processor requires its own local memory, in the form of registers. The control unit portion of the processor may also require its own internal memory.
Cache is another form of internal memory. External memory consists of peripheral storage devices, such as disk and tape, that are accessible to the processor via I/O. An obvious characteristic of memory is its capacity.

For internal memory, capacity is typically expressed in terms of bytes (1 byte = 8 bits) or words. Common word lengths are 8, 16, and 32 bits. External memory capacity is typically expressed in terms of bytes.
• A related concept is the unit of transfer. For internal memory, the unit of transfer is equal to the number of data lines into and out of the memory module. This may be equal to the word length, but it is often larger, such as 64, 128, or 256 bits.

• From a user's point of view, the two most important characteristics of memory are capacity and performance.

• Memory cycle time may be greater than access time. This additional time may be required for transients to die out on signal lines or to regenerate data if they are read destructively. Note that memory cycle time is concerned with the system bus, not the processor.
• Transfer rate: This is the rate at which data can be transferred into or out of a memory unit.

• Access time (latency): For random-access memory, this is the time it takes to perform a read or write operation, that is, the time from the instant that an address is presented to the memory to the instant that data have been stored or made available for use. For non-random-access memory, access time is the time it takes to position the read/write mechanism at the desired location.

• Memory cycle time: This concept is primarily applied to random-access memory and consists of the access time plus any additional time required before a second access can commence.
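As a worked example (illustrative numbers, not taken from any real device), the definitions above combine to give the total time to read a block from a non-random-access device: the access time to position the read/write mechanism, plus the block size divided by the transfer rate.

```python
# Illustrative sketch of the access-time and transfer-rate definitions above.
# The numbers are hypothetical, chosen only for the example.

access_time_s = 0.005           # 5 ms to position the read/write mechanism
transfer_rate_Bps = 50_000_000  # 50 MB/s sustained transfer rate
block_size_B = 4096             # 4 KB block

total_s = access_time_s + block_size_B / transfer_rate_Bps
print(f"{total_s * 1000:.3f} ms")  # dominated by the access time: 5.082 ms
```

Note how, for small transfers, the positioning latency dominates the total time; this is why sequential devices favour large transfers.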
1.1 MAIN MEMORIES

• The main memory is the central storage unit in a computer system. It is a relatively large and fast memory used to store programs and data during computer operation. The principal technology used for the main memory is based on semiconductor integrated circuits.

• Integrated circuit RAM chips are available in two possible operating modes, static and dynamic. The static RAM consists essentially of internal flip-flops that store the binary information. The stored information remains valid as long as power is applied to the unit.
• The dynamic RAM stores the binary information in the form of electric charges applied to capacitors. The capacitors are provided inside the chip by MOS transistors. The stored charge on the capacitors tends to discharge with time, so the capacitors must be periodically recharged by refreshing the dynamic memory.

• Refreshing is done by cycling through the words every few milliseconds to restore the decaying charge. The dynamic RAM offers reduced power consumption and larger storage capacity in a single memory chip. The static RAM is easier to use and has shorter read and write cycles.

• Most of the main memory in a general-purpose computer is made up of RAM integrated circuit chips, but a portion of the memory may be constructed with ROM chips.
• RAM was used to refer to a random-access memory, but now it is used to designate a read/write memory, to distinguish it from a read-only memory, although ROM is also random access. RAM is used for storing the bulk of the programs and data that are subject to change.

• ROM is used for storing programs that are permanently resident in the computer and for tables of constants that do not change in value once the production of the computer is completed. Among other things, the ROM portion of main memory is needed for storing an initial program called a bootstrap loader.

• The bootstrap loader is a program whose function is to start the computer software operating when power is turned on. Since RAM is volatile, its contents are destroyed when power is turned off. The contents of ROM remain unchanged after power is turned off and on again.
• The startup of a computer consists of turning the power on and starting the
execution of an initial program. Thus when power is turned on, the hardware of
the computer sets the program counter to the first address of the bootstrap
loader.
• The bootstrap program loads a portion of the operating system from disk to
main memory and control is then transferred to the operating system, which
prepares the computer for general use.

1.2 Auxiliary Memory
• The primary types of Auxiliary Storage Devices are:
1. Magnetic tape
2. Magnetic Disks
3. Floppy Disks
4. Hard Disks and Drives

• Main memory devices are high-speed storage devices and are very expensive; hence the cost per bit of storage is also very high. Moreover, the storage capacity of the main memory is very limited, while it is often necessary to store hundreds of millions of bytes of data for the CPU to process.

• Therefore, additional memory is required in all computer systems. This memory is called auxiliary memory or secondary storage. In this type of memory, the cost per bit of storage is low.

• However, the operating speed is slower than that of the primary memory. The most widely used secondary storage devices are magnetic tapes, magnetic disks and floppy disks.
• Secondary storage is not directly accessible by the CPU.
• A computer usually uses its input/output channels to access secondary storage and transfers the desired data using an intermediate area in primary storage.

1.2.1 Magnetic Tapes
• Magnetic tape is a medium for magnetic recording, made of a thin, magnetisable coating on a long, narrow strip of plastic film. It was developed in Germany, based on magnetic wire recording. Devices that record and play back audio and video using magnetic tape are tape recorders and video tape recorders. Magnetic tape is an information storage medium consisting of a magnetic coating on a flexible backing in tape form. Data is recorded by magnetic encoding of tracks on the coating according to a particular tape format.
• Characteristics of Magnetic Tapes
• No direct access, but very fast sequential access. Resistant to different environmental conditions. Easy to transport and store; cheaper than disk.
• Formerly, it was widely used to store application data; nowadays, it is mostly used for backups or archives (tertiary storage).
• Magnetic tape is wound on reels (or spools). These may be used on their
own, as open-reel tape, or they may be contained in some sort of magnetic tape
cartridge for protection and ease of handling.

• Early computers used open-reel tape, and this is still sometimes used on large
computer systems although it has been widely superseded by cartridge tape. On
smaller systems, if tape is used at all it is normally cartridge tape.

• Magnetic tape has been used for offline data storage, backup, archiving, data
interchange, and software distribution, and in the early days (before disk storage
was available) also as online backing store. For many of these purposes it has
been superseded by magnetic or optical disk or by online communications.

• For example, although tape is a non-volatile medium, it tends to deteriorate in
long-term storage and so needs regular attention (typically an annual
rewinding and inspection) as well as a controlled environment. It is therefore
being superseded for archival purposes by optical disk.

• Magnetic tape is still extensively used for backup; for this purpose, interchange
standards are of minor importance, so proprietary cartridge-tape formats are
widely used.

• Magnetic tapes are used for large computers like mainframe computers where
large volume of data is stored for a longer time. In PCs also you can use tapes
in the form of cassettes.

• The cost of storing data on tape is low. Tapes consist of magnetic materials that store data permanently. A tape can be a 12.5 mm to 25 mm wide plastic film, 500 m to 1200 m long, coated with magnetic material. The tape deck is connected to the central processor, and information is fed into or read from the tape through the processor. It is similar to a cassette tape recorder.

Advantages of Magnetic Tape
• Compact: A 10-inch diameter reel of tape is 2400 feet long and is able to hold 800, 1600 or 6250 characters in each inch of its length. The maximum capacity of such a tape is about 180 million characters. Thus data are stored much more compactly on tape.

• Economical: The cost of storing characters on tape is much lower than on other storage devices.
• Fast: Copying of data is easy and fast.
• Long-term Storage and Re-usability: Magnetic tapes can be used for long-term storage, and a tape can be used repeatedly without loss of data.
1.2.2 Magnetic Disks
• You might have seen the gramophone record, which is circular like a disk and coated with magnetic material. Magnetic disks used in computers are made on the same principle. A disk rotates at very high speed inside the disk drive, and data are stored on both surfaces of the disk.

• Magnetic disks are the most popular medium for direct-access storage. Each disk consists of a number of invisible concentric circles called tracks. Information is recorded on the tracks of a disk surface in the form of tiny magnetic spots.

• The presence of a magnetic spot represents a one bit (1) and its absence represents a zero bit (0). The information stored on a disk can be read many times without affecting the stored data, so the reading operation is non-destructive. But if you want to write new data, the existing data are erased from the disk and the new data are recorded.

[Figure: Magnetic Disks]
1.2.3 Floppy Disks
• These are small removable disks that are plastic-coated with magnetic recording material. Floppy disks are typically 3.5″ in diameter and can hold 1.44 MB of data. This portable storage device is a rewritable medium and can be reused a number of times.

• Floppy disks are commonly used to move files between different computers. The main disadvantage of floppy disks is that they can be damaged easily and, therefore, are not very reliable. The floppy disk is similar to a magnetic disk.

• It is 3.5 inches in diameter, and the capacity of a 3.5-inch floppy is 1.44 megabytes. It is cheaper than other storage devices and is portable. The floppy is a low-cost device particularly suitable for personal computer systems.
Read/Write head:
A floppy disk drive normally has two read/write heads, making modern floppy disk drives double-sided drives. A head exists for each side of the disk, and both heads are used for reading and writing on the respective disk side.

Head 0 and Head 1:
Many people do not realize that the first head (head 0) is the bottom one and the top head is head 1. The top head is located either four or eight tracks inward from the bottom head, depending upon the drive type.

Head Movement:
A motor called the head actuator moves the head mechanism. The heads can move in and out over the surface of the disk in a straight line to position themselves over various tracks. The heads move in and out tangentially to the tracks that they record on the disk.

• Head: The heads are made of a soft ferrous (iron) compound with electromagnetic coils. Each head is a composite design, with a read/write head centered within two tunnel-erasure heads in the same physical assembly.

• PC-compatible floppy disk drives spin at 300 or 360 r.p.m. The two heads are spring-loaded and physically grip the disk with small pressure; this pressure does not present excessive friction.
1.2.4 Hard Disks and Drives
• A desktop hard drive consists of the following components: the head actuator, read/write actuator arm, read/write head, spindle, and platter.

• On the back of a hard drive is a circuit board called the disk controller or interface board; it is what allows the hard drive to communicate with the computer.

External and Internal Hard Drives
• Although most hard drives are internal, there are also stand-alone devices called external hard drives, which can back up data on computers and expand the available disk space.

• External drives are often held in an enclosure that helps protect the drive and allows it to interface with the computer, usually over USB or eSATA.
Compact Disk Read-Only Memory (CD-ROM)
• CD-ROM disks are made of reflective metals. A CD-ROM is written during the manufacturing process by a high-power laser beam. Here the storage density is very high, the storage cost is very low, and the access time is relatively fast. Each disk is approximately 4½ inches in diameter and can hold over 600 MB of data. As the CD-ROM can only be read, we cannot write to or change the data contained on it.

Characteristics of the CD-ROM
• In PCs, the most commonly used optical storage technology is called Compact Disk Read-Only Memory (CD-ROM).
• A standard CD-ROM disk can store up to 650 MB of data, or about 70 minutes of audio.
• Once data is written to a standard CD-ROM disk, the data cannot be altered or overwritten.
CD-ROM SPEEDS AND USES
• One CD can store about 600 to 700 MB (600,000 to 700,000 KB). For comparison, a common A4 sheet of paper holds an amount of information, in the form of printed characters, that would require about 2.0 KB of space on a computer. So one CD can store about the same amount of text as 300,000 such A4 sheets.
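The sheet-count claim above can be checked with one line of arithmetic, using only the figures quoted in the slide:

```python
# Rough check of the slide's estimate: CD capacity divided by
# the ~2.0 KB of text on one printed A4 sheet.
cd_capacity_kb = 600_000   # lower bound quoted above, in KB
sheet_kb = 2.0             # approx. text on one A4 sheet, in KB

sheets = cd_capacity_kb / sheet_kb
print(sheets)  # 300000.0 sheets, matching the figure in the slide
```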

• The basic technology of CD-ROM remains the same as that for CD audio, but
CD-ROM requires greater data integrity, because a corrupt bit that is not
noticeable during audio playback becomes intolerable with computer data.
• So CD-ROM (Yellow Book) dedicates more bits to error detection and
correction than CD audio (Red Book).

• Data is laid out in a format known as ISO 9660. Advantages of CD-ROM in comparison with other information carriers are:

a) The information density is high.
b) The cost of information storage per information unit is low.
c) The disks are easy to store, to transport and to mail.
d) Random access to information is possible.
1.2.4.1 Other Optical Devices
• An optical disk is made up of a rotating disk coated with a thin reflective metal. To record data on the optical disk, a laser beam is focused on the surface of the spinning disk. The laser beam is turned on and off at varying rates; due to this, tiny holes (pits) are burnt into the metal coating along the tracks. When data stored on the optical disk are to be read, a less powerful laser beam is focused on the disk surface.

• The storage capacity of these devices is tremendous, and the optical disk access time is relatively fast. The biggest drawback of the basic optical disk is that it is a permanent storage device: data once written cannot be erased, so it is a read-only storage medium. A typical example of the optical disk is the CD-ROM. Other forms of optical discs include:
• Read-only memory (ROM) disks, like the audio CD, are used for the distribution
of standard program and data files.
• Re-writeable, write-many read-many (WMRM) disks, just like the magnetic storage disks, allow information to be recorded and erased many times. These devices are also called Direct Read-after-Write (DRAW) disks.

• WORM (write once, read many) is a data storage technology that allows information to be written to a disc a single time and prevents the drive from erasing the data. The discs are intentionally not rewritable, because they are especially intended to store data that the user does not want to erase accidentally.
• Erasable Optical Disk: An erasable optical disk is one which can be erased and then loaded with new data content all over again. These generally come with an RW label. They are based on a technology popularly known as Magneto-Optical, which involves applying heat to a precise point on the disk surface and magnetizing it using a laser.

• Touchscreen Optical Device: A touchscreen is an input and output device normally layered on top of an electronic visual display of an information processing system. A user can give input or control the information processing system through simple or multi-touch gestures by touching the screen with a special stylus or one or more fingers.
1.3 MEMORY ACCESS METHODS
• Data need to be accessed from the memory for various purposes. There are several methods to access memory, as listed below:

Sequential access
Direct access
Random access
Associative access
Sequential Access Method
• In the sequential memory access method, the memory is accessed in a linear, sequential way. The time to access data with this method depends on the location of the data.

Direct Access Method
• The direct access method can be seen as a combination of the sequential access method and the random access method. Magnetic hard disks contain many rotating storage tracks. A track can be reached directly by positioning the read/write head over it, but access within each track is sequential.
Random Access Method
• In the random access method, data at any location of the memory can be accessed directly. The access time for any location is not related to its physical position and is independent of other locations; there is a separate access mechanism for each location.

Associative Access Method
• The associative access method is a special type of random access method. It enables comparison of desired bit locations within a word against a specific match pattern, and does this for all words simultaneously. Thus a word is retrieved based on a portion of its content rather than its address. Example of associative access: cache memory uses the associative access method.
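A minimal sketch of the contrast between address-based and associative lookup (illustrative Python, not tied to any particular hardware): a random-access memory is indexed by address, while an associative memory matches every stored entry against part of its content, the way a cache tag lookup does.

```python
# Random access: the address alone selects the word.
ram = ["data_a", "data_b", "data_c"]
word = ram[2]                       # fetch by address -> "data_c"

# Associative access: every stored word is compared against a
# partial-content key; matching words are returned regardless of
# where they sit. Hardware does all comparisons in parallel.
cam = [("tag_17", "block_x"), ("tag_42", "block_y"), ("tag_99", "block_z")]
hits = [data for tag, data in cam if tag == "tag_42"]
print(word, hits)  # data_c ['block_y']
```

The tag names and data values here are made up; the point is only the lookup style, by address in the first case and by content in the second.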
1.4 MEMORY MAPPING AND VIRTUAL MEMORIES
• Memory-mapping is a mechanism that maps a portion of a file, or an entire file, on disk to a range of addresses within an application's address space. The application can then access files on disk in the same way it accesses dynamic memory. This makes file reads and writes faster in comparison with using functions such as fread and fwrite.

Benefits of Memory-Mapping
• The principal benefits of memory-mapping are efficiency, faster file access, the ability to share memory between applications, and more efficient coding.
• Accessing files via a memory map is faster than using I/O functions such as fread and fwrite. Data are read and written using the virtual memory capabilities that are built into the operating system, rather than having to allocate, copy into, and then de-allocate data buffers owned by the process. Moreover, the system does not access data from the disk when the map is first constructed.

• It only reads or writes the file on disk when a specified part of the memory map is accessed, and then it only reads that specific part. This provides faster random access to the mapped data.
Efficiency
• Mapping a file into memory allows access to data in the file as if that data had been read into an array in the application's address space. Initially, MATLAB only allocates address space for the array; it does not actually read data from the file until you access the mapped region.

• As a result, memory-mapped files provide a mechanism by which applications can access data segments in an extremely large file without having to read the entire file into memory first.

Efficient Coding Style
• Memory-mapping in your MATLAB application enables you to access file data using standard MATLAB indexing operations.
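The same idea is available outside MATLAB; Python, for instance, exposes the operating system's memory-mapping facility through its `mmap` module. The sketch below (the file name is made up for the example; the file is created, mapped and removed within the snippet) reads and writes file bytes through ordinary indexing instead of explicit read/write calls:

```python
import mmap
import os

# Create a small file to map (hypothetical name, for illustration only).
with open("demo.bin", "wb") as f:
    f.write(b"hello memory-mapped world")

with open("demo.bin", "r+b") as f:
    # Map the whole file into the process's address space (length 0 = whole file).
    with mmap.mmap(f.fileno(), 0) as mm:
        first = mm[0:5]        # read through ordinary indexing -> b"hello"
        mm[0:5] = b"HELLO"     # a write through the map reaches the file

with open("demo.bin", "rb") as f:
    final = f.read()

os.remove("demo.bin")
print(first, final)  # b'hello' b'HELLO memory-mapped world'
```

Note that the slice assignment must keep the mapped length unchanged; memory maps give a window onto existing file bytes, not a way to grow the file.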
1.4.1 VIRTUAL MEMORIES
• Processes in a system share the CPU and main memory with other processes. However, sharing the main memory poses some special challenges.

• As demand on the CPU increases, processes slow down in some reasonably smooth way. But if too many processes need too much memory, then some of them will simply not be able to run.
• Memory is also vulnerable to corruption. If some process inadvertently writes
to the memory used by another process, that process might fail in some
bewildering fashion totally unrelated to the program logic.

• In order to manage memory more efficiently and with fewer errors, modern
systems provide an abstraction of main memory known as virtual memory
(VM). Virtual memory is an elegant interaction of hardware exceptions,
hardware address translation, main memory, disk files, and kernel software
that provides each process with a large, uniform, and private address space.

1.4.2 Page Tables
• As with any cache, the VM system must have some way to determine if a virtual page is cached somewhere in DRAM. If so, the system must determine which physical page it is cached in. If there is a miss, the system must determine where the virtual page is stored on disk, select a victim page in physical memory, and copy the virtual page from disk to DRAM, replacing the victim page.

• These capabilities are provided by a combination of operating system software, address translation hardware in the MMU (Memory Management Unit), and a data structure stored in physical memory known as a page table that maps virtual pages to physical pages. The address translation hardware reads the page table each time it converts a virtual address to a physical address. The operating system is responsible for maintaining the contents of the page table and transferring pages back and forth between disk and DRAM.
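A toy model of the translation just described (illustrative page size and mappings, not any real MMU): the virtual address splits into a virtual page number and an offset, the page table maps the page number to a physical page, and a missing entry corresponds to a page fault.

```python
PAGE_SIZE = 4096  # bytes; an illustrative 4 KB page

# Hypothetical page table: virtual page number -> physical page number.
# Absent entries mean the page is not resident in DRAM (a page fault).
page_table = {0: 5, 1: 2, 7: 9}

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)   # split address into VPN + offset
    if vpn not in page_table:
        raise LookupError(f"page fault on virtual page {vpn}")
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # VPN 1, offset 0x234 -> physical 0x2234
```

On a real machine this lookup is done by the MMU hardware, and the fault path invokes the operating system to bring the page in from disk, as described above.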
1.5 Replacement Algorithms
• When a page fault occurs, the operating system has to choose a page to
remove from memory to make room for the page that has to be brought in. If the
page to be removed has been modified while in memory, it must be rewritten to
the disk to bring the disk copy up to date. If, however, the page has not been
changed (e.g., it contains program text), the disk copy is already up to date, so no
rewrite is needed. The page to be read in just overwrites the page being evicted.

• While it would be possible to pick a random page to evict at each page fault,
system performance is much better if a page that is not heavily used is chosen.
If a heavily used page is removed, it will probably have to be brought back in
quickly, resulting in extra overhead. Much work has been done on the subject
of page replacement algorithms, both theoretical and experimental. Below are
some of the most important algorithms.
1. Optimal page replacement algorithm
2. Not recently used page replacement
3. First-In, First-Out page replacement
4. Second chance page replacement
5. Clock page replacement
6. Least recently used page replacement

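As an illustration of the last algorithm in the list, here is a minimal least-recently-used (LRU) page replacement sketch (illustrative Python; the frame count and reference string are made up). On a fault with all frames full, the page whose last use lies furthest in the past is evicted:

```python
from collections import OrderedDict

def lru_faults(references, num_frames):
    """Simulate LRU replacement; return the number of page faults."""
    frames = OrderedDict()  # page -> None, ordered from least to most recent
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)        # hit: mark page as most recent
        else:
            faults += 1                      # miss: page fault
            if len(frames) == num_frames:
                frames.popitem(last=False)   # evict the least recently used
            frames[page] = None
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10
```

Substituting a different eviction rule into the `popitem` line (oldest arrival for FIFO, a clock hand for the clock algorithm) turns the same loop into the other algorithms listed above.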
1.6 DATA TRANSFER MODES
• The Direct Memory Access (DMA) mode of data transfer reduces the CPU's overhead in handling I/O operations. It also allows parallelism between CPU and I/O operations. Such parallelism is necessary to avoid wasting valuable CPU time while handling I/O devices whose speeds are much slower than that of the CPU.

• The concept of DMA operation can be extended to relieve the CPU further from involvement in the execution of I/O operations. This gives rise to the development of a special-purpose processor called the Input-Output Processor (IOP) or I/O channel. The IOP is just like a CPU that handles the details of I/O operations, and it is more equipped with facilities than those available in a typical DMA controller.
• The IOP can fetch and execute its own instructions, which are specifically designed to characterize I/O transfers. In addition to I/O-related tasks, it can perform other processing tasks such as arithmetic, logic, branching and code translation.

• The main memory unit takes the pivotal role. It communicates with the processor by means of DMA. The Input Output Processor is a specialized processor which loads and stores data into memory along with the execution of I/O instructions. It acts as an interface between the system and devices. It carries out a sequence of events to execute I/O operations and then stores the results into memory.
1.6.1 Modes of Transfer
• The binary information that is received from an external device is usually stored in the memory unit. Information transferred from the CPU to an external device also originates from the memory unit. The CPU merely processes the information, but the source and destination are always the memory unit. Data transfer between the CPU and the I/O devices may be done in different modes.
• Data transfer to and from the peripherals may be done in any of three possible ways:

a. Programmed I/O: It is the result of I/O instructions written in the computer program.

b. Interrupt-initiated I/O: This uses the interrupt facility and special commands to inform the interface to issue an interrupt request signal whenever data is available from any device.
c. Direct memory access (DMA): Data transfer between a fast storage medium such as a magnetic disk and the memory unit is limited by the speed of the CPU. Thus we can allow the peripherals to communicate directly with memory over the memory buses, removing the intervention of the CPU. This type of data transfer technique is known as DMA or direct memory access.
1.7 Pipelining
• Pipelining owes its origin to car assembly lines. The idea is to have more than
one instruction being processed by the processor at the same time.
• Pipelining refers to the technique in which a given task is divided into a number
of subtasks that need to be performed in sequence. Each subtask is performed
by a given functional unit.

• The units are connected in a serial fashion and all of them operate
simultaneously. The use of pipelining improves the performance of a computer
compared to the traditional sequential execution of tasks.

• There exist two basic techniques to increase the instruction execution rate of a
processor. These are to increase the clock rate, thus decreasing the instruction
execution time, or alternatively to increase the number of instructions that can
be executed simultaneously.

• Similar to the assembly line, the success of a pipeline depends upon dividing the
execution of an instruction among a number of subunits (stages), each
performing part of the required operations.

• A possible division is to consider instruction fetch (F), instruction decode (D),
operand fetch (oF), instruction execution (E), and store of results (S) as the
subtasks needed for the execution of an instruction. In this case, it is possible to
have up to five instructions in the pipeline at the same time, thus reducing
instruction execution latency.
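The timing benefit of the five-stage division above can be sketched with a short calculation. On a k-stage pipeline, the first instruction takes k cycles to drain, after which one instruction completes per cycle (ignoring hazards and stalls, which are discussed below):

```python
def pipeline_cycles(n_instructions, n_stages):
    """Cycles to finish n instructions on a k-stage pipeline:
    k cycles to fill the pipe, then one completion per cycle."""
    return n_stages + (n_instructions - 1)

def sequential_cycles(n_instructions, n_stages):
    """Without pipelining, each instruction occupies all k stages in turn."""
    return n_instructions * n_stages

# Five-stage pipeline (F, D, oF, E, S) running 100 instructions:
piped = pipeline_cycles(100, 5)       # 5 + 99 = 104 cycles
serial = sequential_cycles(100, 5)    # 100 * 5 = 500 cycles
speedup = serial / piped
assert piped == 104
assert round(speedup, 2) == 4.81      # approaches 5x as n grows
```

As the number of instructions grows, the speedup approaches the number of stages, which is why deeper pipelines promise higher throughput.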

• Types of Pipeline:
• It is divided into 2 categories:

59
• Arithmetic Pipeline- Arithmetic pipelines are usually found in most of the
computers. They are used for floating point operations, multiplication of fixed
point numbers etc.

• Instruction Pipeline- In this, a stream of instructions can be executed by


overlapping fetch, decode and execute phases of an instruction cycle. This type
of technique is used to increase the throughput of the computer system. An
instruction pipeline reads instruction from the memory while previous
instructions are being executed in other segments of the pipeline.
• Thus we can execute multiple instructions simultaneously. The pipeline will be
more efficient if the instruction cycle is divided into segments of equal duration.

60
Pipeline Conflicts
• There are some factors that cause the pipeline to deviate from its normal
performance. Some of these factors are given below:

• Timing Variations: Not all stages take the same amount of time. This problem
generally occurs in instruction processing, where different instructions have
different operand requirements and thus different processing times.

• Data Hazards: When several instructions are in partial execution, a problem
arises if they reference the same data. We must ensure that the next instruction
does not attempt to access data before the current instruction has finished with
it, because this would lead to incorrect results.

• Branching: In order to fetch and execute the next instruction, we must know what
that instruction is. If the present instruction is a conditional branch, and its
result determines the next instruction, then the next instruction may not be
known until the current one is processed.
61
• Interrupts: Interrupts insert unwanted instructions into the instruction stream
and affect the execution of instructions.

• Data Dependency: It arises when an instruction depends upon the result of a


previous instruction but this result is not yet available.
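A data dependency of the kind just described (often called a read-after-write, or RAW, hazard) can be detected with a simple check. In this toy sketch an instruction is modelled as a pair of (destination register, source registers); the register names and instruction encoding are made up for illustration:

```python
# Detect a RAW hazard: the second instruction reads a register that the
# first instruction has not yet finished writing.

def has_raw_hazard(first, second):
    dest, _ = first
    _, sources = second
    return dest in sources

add = ("R1", ("R2", "R3"))   # ADD R1, R2, R3
sub = ("R4", ("R1", "R5"))   # SUB R4, R1, R5 -- reads R1 before it is ready
mul = ("R6", ("R7", "R8"))   # MUL R6, R7, R8 -- independent of ADD

assert has_raw_hazard(add, sub) is True
assert has_raw_hazard(add, mul) is False
```

A real pipeline would resolve such a hazard by stalling or by forwarding the result between stages; the check above only shows how the dependency is recognized.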

• Advantages of Pipelining
1). The cycle time of the processor is reduced.
2). It increases the throughput of the system
3). It makes the system reliable.

• Disadvantages of Pipelining
1). The design of pipelined processor is complex and costly to manufacture.
62
2). Instruction latency increases.
CONCLUSION
• Computer memory is central to the operation of a modern computer system; it
stores data or program instructions on a temporary or permanent basis for use
in a computer. However, there is an increasing gap between the speed of
memory and the speed of microprocessors.

• In this module, we study the memory system of a computer, starting with the
organisation of its main memory, which, in some simple systems, is the only form
of data storage to the understanding of more complex systems and the additional
components they carry. Cache systems, which aim at speeding up access to the
primary storage were also studied, and there was a greater focus on virtual
memory systems, which make possible the transparent use of secondary storage
as if it was main memory, by the processor.
63
MODULE TWO

MEMORY ADDRESSING AND HIERARCHY SYSTEMS

64
2.1 INTRODUCTION
• A memory address is a unique identifier used by a device or CPU for data
tracking. This binary address is defined by an ordered and finite sequence
allowing the CPU to track the location of each memory byte. Addressing modes
are an aspect of the instruction set architecture in most central processing unit
(CPU) designs.

• The various addressing modes that are defined in a given instruction set
architecture define how the machine language instructions in that architecture
identify the operand(s) of each instruction. An addressing mode specifies how to
calculate the effective memory address of an operand by using information held
in registers and/or constants contained within a machine instruction or
elsewhere.

65
• This module is divided into three units. The first unit explains memory
addressing and the various modes available. Unit two explains the elements of
memory hierarchy while the last unit takes on virtual memory control systems.
All these are given below:

UNIT ONE: Memory Addressing


UNIT TWO: Elements of Memory Hierarchy
UNIT THREE: Virtual Memory Control System

66
UNIT 1. MEMORY ADDRESSING
• In computing, a memory address is a reference to a specific memory location
used at various levels by software and hardware. Memory addresses are
fixed-length sequences of digits conventionally displayed and manipulated as
unsigned integers.
• Such numerical semantics is based on features of the CPU, as well as on the
use of memory as an array, a model adopted by various programming languages.
There are many ways to locate data and instructions in primary memory, and
these methods are called “memory address modes”.

• Memory address modes determine the method used within the program to
access data either from the Cache or the RAM.

67
1.1 What is memory addressing mode?
• Memory addressing mode is the method by which an instruction operand is
specified. One of the functions of a microprocessor is to execute a sequence of
instructions or programs stored in a computer memory (register) in order to
perform a particular task.

• The way the operands are chosen during program execution is dependent on the
addressing mode of the instruction. The addressing mode specifies a rule for
interpreting or modifying the address field of the instruction before the operand
is actually referenced.

• This technique is used by computers to give programming versatility to the
user by providing facilities such as pointers to memory, counters for loop
control, indexing of data, and program relocation, as well as to reduce the
number of bits in the addressing field of the instruction.
68
• However, there are basic requirements for an operation to take effect.
• First, there must be an operator to indicate what action to take, and secondly,
there must be operands that provide the data to be operated on.
• For instance, if the numbers 5 and 2 are to be added to produce a result, this
could be expressed numerically as 5 + 2.

• In this expression, our operator is (+), addition, and the numbers 5 and 2
are our operands. It is important to tell the machine in a microprocessor how to
get the operands to perform the task. The data stored alongside the operation
code is the operand value or the result.

69
• A word that defines the address of an operand that is stored in memory is the
effective address. The availability of the addressing modes gives the
experienced assembly language programmer flexibility for writing programs
that are more efficient with respect to the number of instructions and
execution time.

Modes of addressing
• There are many methods for defining or obtaining the effective address of an
operand directly from the register. Such approaches are known as modes of
addressing. Programs are usually written in a high-level language, as it is a
simple way for the programmer to describe the variables and the operations to be
performed on them. The following are the modes of addressing:

70
1). Immediate Addressing Mode
•The operand is directly specified in the instruction.
•Example: MOV AX, 5 (Move the value 5 directly into
register AX)

2). Register Addressing Mode


•The operand is stored in a register.
•Example: ADD AX, BX (Add the contents of BX to AX)

71
3). Direct Addressing Mode
•The instruction specifies the memory address of the operand.
•Example: MOV AX, [1000H] (Move data from memory address 1000H to
AX)
4). Indirect Addressing Mode
•The address of the operand is stored in a register.
•Example: MOV AX, [BX] (Move data from the address stored in BX to AX)
5). Indexed Addressing Mode
•The operand’s address is obtained by adding an index register to a base address.
•Example: MOV AX, [SI + 1000H] (Move data from address SI + 1000H to AX)
72
6). Base-Indexed Addressing Mode
•Uses both a base register and an index register to calculate the effective address.
•Example: MOV AX, [BX + SI] (Move data from address BX + SI to AX)

7). Relative Addressing Mode


•The operand’s address is determined relative to the current instruction pointer (IP).
•Example: JMP LABEL (Jump to the instruction at LABEL, relative to the current
instruction)

8). Stack Addressing Mode


•Uses a stack for storing and retrieving data.
•Example: PUSH AX (Push the value of AX onto the stack)
73
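The first four modes above can be sketched as a tiny operand-fetch routine. This is a toy model, not a real x86 implementation: the register file, memory contents, and the `operand` helper are all made up for illustration.

```python
# Toy machine state (hypothetical values):
registers = {"AX": 0, "BX": 0x1000}
memory = {0x1000: 42, 0x2000: 7}

def operand(mode, value):
    """Fetch an operand according to the addressing mode."""
    if mode == "immediate":      # MOV AX, 5       -> operand is the value 5
        return value
    if mode == "register":       # ADD AX, BX      -> operand held in BX
        return registers[value]
    if mode == "direct":         # MOV AX, [2000H] -> operand at memory[0x2000]
        return memory[value]
    if mode == "indirect":       # MOV AX, [BX]    -> operand at memory[BX]
        return memory[registers[value]]
    raise ValueError(f"unknown mode: {mode}")

assert operand("immediate", 5) == 5
assert operand("register", "BX") == 0x1000
assert operand("direct", 0x2000) == 7
assert operand("indirect", "BX") == 42
```

Note how immediate mode needs no memory access at all, register mode needs one register read, and indirect mode needs a register read followed by a memory read, which is why the modes differ in speed as well as flexibility.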
Advantages of addressing modes
• The advantages of using the addressing mode are:

1). To provide the user with programming flexibility by offering such facilities
as memory pointers, loop control counters, data indexing, and program
relocation.

2). To decrease the number of bits in the addressing field of the instruction.

74
Unit Two
ELEMENTS OF MEMORY HIERARCHY

• In the design of a computer system, a processor, as well as a large number of
memory devices, is used. However, the main problem is that these parts are
expensive. So the memory organization of the system is done as a memory
hierarchy.

• The computer system has several levels of memory with different performance
rates, but all serve the same purpose: to reduce the effective access time. The
memory hierarchy was developed based on the behavior of programs.

75
What is Memory Hierarchy?

• Memory is one of the important units in any computer system. It serves as
storage for all the processed and unprocessed data and programs in a
computer system. However, because most computer users often store large
amounts of files on their computer memory devices, the use of a single
memory device in a computer system has become inefficient and
unsatisfactory.

• This is because one memory alone cannot contain all the files needed by the
computer users, and when the content of the memory is large, it decreases the
speed of the processor and the general performance of the computer system.

76
• Therefore, to address these challenges, the memory unit must be divided into
smaller memories for more storage, speedy program execution and
enhanced processor performance. The recently accessed files or
programs must be placed in the fastest memory.

• Memory with large capacity is cheap and slow, while memory with smaller
capacity is fast and costly. The organization of smaller memories to hold the
recently accessed files or programs closer to the CPU is termed the memory
hierarchy. These memories are successively larger as they move away from the CPU.

77
• The memory hierarchy system encompasses all the storage devices used in a
computer system. It ranges from the cache memory, which is smaller in size but
faster in speed, to the auxiliary memory, which is larger in size but slower
in speed. The smaller the size of the memory, the costlier it becomes.

• The basic elements of the memory hierarchy includes:


a). Cache memory,
b). Main memory and
c). Auxiliary memory

78
• The cache memory is the fastest and smallest memory. It is easily accessible by
the CPU because it is closer to the CPU. Cache memory is very costly compared to
the main memory and the auxiliary memory.
• The main memory, also known as primary memory, communicates directly with
the CPU. It also communicates with the auxiliary memory through the I/O
processor. During program execution, the files that are not currently needed by
the CPU are often moved to the auxiliary storage devices in order to create
space in the main memory for the currently needed files. The main
memory is made up of Random Access Memory (RAM) and Read Only Memory
(ROM).

79
Memory Hierarchy Diagram

• The memory hierarchy system encompasses all the storage devices used in a
computer system. It ranges from the fastest but smallest (cache memory) through
the relatively fast and larger (main memory) to the slowest but largest
(auxiliary memory). The cache memory is the smallest and fastest storage
device; it is placed closer to the CPU for easy access by the processor logic.
Moreover, cache memory helps to enhance the processing speed of the system by
making currently needed programs and data available to the CPU at a very high
speed. It stores segments of programs currently processed by the CPU as well as
the temporary data frequently needed in the current calculations. The main
memory communicates directly with the CPU.

80
• It is also fast in speed and small in size compared to the auxiliary memory. It
communicates with the auxiliary memories through the input/output processor.
The main memory provides a communication link between the other storage
devices. It contains the currently accessed data or programs. The unwanted data
are transferred to the auxiliary memories to create more space in the main memory
for the currently needed data.

• If the CPU needs a program that is outside the main memory, the main
memory will call in the program from the auxiliary memories via the
input/output processor. The main difference between the cache and main
memories is access time: the processor logic is often faster than the main
memory access time.

81
• The auxiliary memory is made up of the magnetic tape and the magnetic disk.
They are employed in the system to store and back up large volumes of data or
programs that are not currently needed by the processor. In summary, the
essence of dividing memory into different levels of a memory hierarchy is to
make storage more efficient, reliable and economical for the users.

• As the storage capacity of the memory increases, the cost per bit for storing
binary information decreases and the access time of the memory becomes
longer. Another important memory is the register. It is usually a static RAM or
SRAM in the computer's processor that holds the data word, normally 64 or 128
bits. The essential register is the program counter register, which is found in all
processors.

• The diagram of a memory hierarchy in presented in Figure 2.1


82
Figure 2.1 Memory Hierarchy Design

83
3. Main Memory: This is the memory unit that communicates directly with the CPU.
It is the primary storage unit in a computer system. The main memory stores
data or programs currently used by the CPU during operation. It is very fast in
terms of access time and it is made up of RAM and ROM.

84
• 4. Magnetic Disks: A magnetic disk is a circular plate made of plastic or
metal and coated with magnetizable material. Frequently, both faces of the disk
are used, and several disks may be stacked on one spindle with read/write heads
available on each surface. All the disks rotate together at high speed. Bits are
stored on the magnetized surface in spots along concentric circles called
tracks. The tracks are usually divided into sections called sectors.

• 5. Magnetic Tape: Magnetic tape is a conventional magnetic recording medium,
consisting of a thin magnetizable coating on a long, narrow strip of plastic
film. It is mainly used to back up huge volumes of data. Whenever the computer
needs to access a tape, it first mounts the tape to access the data; once access
is complete, the tape is unmounted. The access time of magnetic tape is slow,
and it can take a few minutes to access a tape.

85
Characteristics of Memory Hierarchy
• There are a number of parameters that characterize the memory hierarchy.
• They stand as the principles on which all the levels of the memory hierarchy
operate. These characteristics are:

a). Access time,


b). Capacity,
c). Cycle time,
d). Latency,
e). Bandwidth, and
f). Cost

86
• Access Time: Refers to the time taken for a read or write operation to
physically take place. When data or a program is moved from the top of the
memory hierarchy to the bottom, the access time increases. Hence, the interval
of time between a request to read or write data and the completion of that
access is called the access time.

• Capacity: The capacity of a memory level often increases as data is moved
from the top of the memory hierarchy to the bottom. The capacity of a memory
level is the total amount of data it can store, and it is usually measured in
bytes.

• Cycle time: It is defined as the time elapsed from the start of a read operation to
the start of a subsequent read.

87
• Latency: is defined as the time interval between the request for information
and the access to the first bit of that information.
• Bandwidth: this measures the number of bits that can be accessed per second.
• Cost: the cost of a memory level is usually specified as dollars per megabyte.
When data is moved from the bottom of the memory hierarchy to the top, the cost
per bit increases. This means that internal memory is expensive compared to
external memory.

88
Advantages of Memory Hierarchy
• The advantages of memory hierarchy include the following:

a). Memory distributing is simple and economical


b). Removes external fragmentation
c). Data can be spread all over
d). Permits demand paging & pre-paging
e). Swapping will be more proficient

89
3.0 MEMORY MANAGEMENT SYSTEMS
3.1 Virtual Memory Control Systems
3.1.1 Paging
3.1.2 Address mapping using Paging
3.1.3 Address Mapping using Segments
3.1.4 Address Mapping using Segmented Paging
3.2 Multi-Programming
3.3 Virtual Machines/Memory and Protection
3.4 Hierarchical Memory Systems
3.5 Drawbacks that occur in Virtual Memories

90
Introduction
• In a multiprogramming system, there is a need for a high-capacity memory. This
is because many programs are stored in the memory at once. The programs must be
moved around the memory to change the memory space used by a particular
program, and a program must be prevented from altering other programs during
reads and writes. Hence, a memory management system becomes necessary. The
movement of these programs from one level of the memory hierarchy to another is
known as memory management.

• Memory management system encompasses both the hardware and the


software in its operations. It is the collection of hardware and software
procedures for managing all the programs stored in the memory. The memory
management software is part of the main operating system available in many
computers.
91
Components of Memory Management System:

• The principal components of the memory management system are:

a). A facility for dynamic storage relocation that maps logical memory
references into physical memory addresses.

b). A provision for sharing common programs stored in memory by different


users.
c). Protection of information against unauthorized access between users and
preventing users from changing operating system functions.
• The dynamic storage relocation hardware performs a mapping process similar to
the paging system.

92
3.1 VIRTUAL MEMORY CONTROL SYSTEMS
• Virtual memory is a memory management technique where secondary
memory can be used as if it were a part of the main memory. Virtual memory is a
common technique used in a computer's operating system (OS).

• Virtual memory uses both hardware and software to enable a computer to


compensate for physical memory shortages, temporarily transferring data from
random access memory (RAM) to disk storage. Mapping chunks of memory to
disk files enables a computer to treat secondary memory as though it were main
memory.

93
3.1.1 Paging

• Paging is a memory management scheme that eliminates the need for


contiguous memory allocation by dividing the physical memory into fixed-sized
blocks called frames and the logical memory (process memory) into blocks of
the same size called pages.

• It allows processes to be loaded into non-contiguous memory locations,


reducing fragmentation and improving efficient memory utilization.

94
How Paging Works
a). Logical Address Translation
• A process generates logical addresses (virtual memory).
• Each logical address consists of a page number and an offset within that page.

b). Mapping to Physical Memory


• The page table, maintained by the operating system, maps logical pages to physical frames.
• When a process accesses memory, the page number is translated to a frame number using
the page table.

c). Physical Address Calculation


• The physical address is determined using the formula:

Physical Address = (Frame Number × Page Size) + Offset
95
3.1.2 Address Mapping using Paging

• Address mapping using paging is a memory management scheme that


eliminates the need for contiguous allocation of physical memory and thus
eliminates the problems of fitting varying sized memory chunks onto the
physical memory.

• Here's how you can understand and calculate address mapping using paging in
a practical way:

96
Key Concepts
• Logical Address (Virtual Address): The address generated by the CPU during a
program's execution.
• Physical Address: The actual address in the main memory (RAM).
• Page: A fixed-length block of logical memory.
• Frame: A fixed-length block of physical memory, corresponding to a page.
• Page Table: A data structure used by the operating system to keep track of the
mapping between virtual pages and physical frames.

97
Steps for Address Mapping using Paging

Step 1: Breakdown the Virtual Address


• The virtual address is divided into two parts:
• Page number (p): Refers to which page the address belongs to.
• Page offset (d): The position within the page.

Step 2: Calculate Page Number and Offset


• Page Size (P): The size of each page, typically a power of 2 (e.g., 4 KB, 8 KB).
• Frame Size (F): Same size as the page size.

98
• If you have the virtual address, the page number p and the page offset d
can be calculated as:

Page number p = ⌊Virtual Address / Page Size⌋

Offset d = Virtual Address mod Page Size

99
100
Example
• Let's walk through an example where:

• Page Size (P) = 4 KB (4096 bytes)


• Frame Size (F) = 4 KB
• Virtual Address (VA) = 0x1234 (in hexadecimal)
• We assume the page table already maps the virtual page to the physical frame.

Breakdown the Virtual Address


• Convert the virtual address to binary (for example, assume 16-bit addresses).

101
Virtual Address = 0x1234 = 0001 0010 0011 0100 in binary

• Since the page size is 4 KB, which is 2^12 bytes, we need the last 12 bits for
the offset (because 4096 = 2^12), and the remaining bits represent the page number.

Virtual address = 0001 0010 0011 0100


• Page Number: First 4 bits = 0001 (1 in decimal)
• Offset: Last 12 bits = 0010 0011 0100 (0x234 in hexadecimal)

102
Translate the Page Number to a Frame
• Let’s assume the page table maps virtual page 1 to physical frame 3. (In real
cases, this would be retrieved from the page table.)

Calculate the Physical Address


• Now, combine the frame number (3) with the offset:
Physical Address = Frame Number × Frame Size + Offset
That is, Physical Address = 3 × 4096 + 0x234
Physical Address = 12288 + 564
• Physical Address = 12852 (0x3234 in hexadecimal)

103
• So, the physical address corresponding to the virtual address 0x1234 is equal
to: 0x3234.

Summary
Virtual Address (0x1234) → Page Number 1, Offset 0x234
• Page Table maps Page 1 to Frame 3

Physical Address = 0x3234


• This is a practical example of how address mapping works using paging.

104
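The whole translation can be reproduced in a few lines of code. This is a sketch of the mechanism, using the same numbers as the worked example (page size 4 KB, page table mapping virtual page 1 to physical frame 3):

```python
PAGE_SIZE = 4096                        # 2**12 -> 12-bit offset

def translate(virtual_address, page_table):
    """Translate a virtual address via a page table."""
    page_number, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[page_number]     # a real OS raises a page fault if missing
    return frame * PAGE_SIZE + offset

page_table = {1: 3}                     # virtual page 1 -> physical frame 3
physical = translate(0x1234, page_table)
assert physical == 0x3234               # 3 * 4096 + 0x234 = 12852
```

`divmod` performs the split into page number and offset in one step because the page size is a power of two; hardware does the same thing by slicing the address bits.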
3.1.3 Address Mapping using Segments
Address mapping using segmentation is another memory management
technique that divides a process’s address space into logical segments, each
representing a different type of data or code. Unlike paging, where memory is
divided into fixed-size blocks, segmentation allows for variable-sized blocks. The
goal is to provide a more flexible and logical way to organize memory, often
based on the structure of a program or process.

Here is how it works..

105
Key Concepts in Segmentation:
Segments:
• A segment is a logical unit of a program or data structure, which could represent code,
data, stack, heap, etc. Each segment has a name and a starting address.
• Segments can vary in size. For example, the code segment might be relatively small,
while the data segment could be large, depending on the program's requirements.

Segment Table:
• The operating system maintains a segment table that maps each segment to a
location in physical memory. Each entry in the segment table contains:
i). Base Address: The starting address of the segment in physical memory.
ii). Limit: The length or size of the segment. This defines the valid range of addresses
within the segment.

106
Logical vs Physical Address
Logical (or virtual) address: A logical address refers to a specific location within a
segment in the process's address space.

Physical address: A physical address refers to a location in physical memory.


In segmentation, a logical address is usually represented by two
parts:
Segment Number: Identifies which segment (code, data, stack, etc.) the address
belongs to.
Offset: The offset or index within the segment.

107
Address Translation:
• When a process generates a logical address, the operating system first looks up
the segment number in the segment table. Then, the operating system checks
the segment's base address and adds the offset to it to compute the physical
address.
• The physical address will be calculated as:

Physical Address = Base Address + Offset

• However, if the offset exceeds the segment's limit, an exception (often a


segmentation fault or memory violation) will be triggered, indicating that the
address is out of bounds.

108
Example:
• Let’s assume a system with 3 segments for a process:
• Segment 0: Code (size: 4 KB)
• Segment 1: Data (size: 6 KB)
• Segment 2: Stack (size: 2 KB)

• Now, the segment table might look like this:

109
• When a program wants to access a memory address:
• The program provides a logical address (e.g., Segment 1, offset 3 KB).
• The operating system looks up Segment 1 in the segment table, retrieves
the base address (0x2000), and adds the offset (3 KB) to compute the
physical address (0x2000 + 0x0C00 = 0x2C00).

110
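The segment lookup above, including the limit check described earlier, can be sketched as follows. The base and limit for Segment 1 come from the example; the bases shown for Segments 0 and 2 are assumptions added for illustration:

```python
segment_table = {
    0: {"base": 0x0000, "limit": 4 * 1024},   # code  (base assumed)
    1: {"base": 0x2000, "limit": 6 * 1024},   # data  (from the example)
    2: {"base": 0x8000, "limit": 2 * 1024},   # stack (base assumed)
}

def translate(segment, offset):
    """Translate (segment, offset) to a physical address with a limit check."""
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        raise MemoryError("segmentation fault: offset out of bounds")
    return entry["base"] + offset

assert translate(1, 3 * 1024) == 0x2C00       # 0x2000 + 0x0C00
try:
    translate(2, 4096)                        # beyond the 2 KB stack limit
except MemoryError:
    pass                                      # out-of-bounds access rejected
else:
    raise AssertionError("limit check failed")
```

The limit check is what turns a stray offset into a segmentation fault instead of silently reading another segment's memory.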
3.1.4 Address Mapping using Segmented Paging

• It is a hybrid memory management scheme that combines the benefits of both


segmentation and paging.

• It addresses the limitations of pure segmentation and paging by using


segments to provide logical division of memory, while also using paging to
handle physical memory efficiently and avoid external fragmentation.

• In segmented paging, the memory address is split into multiple parts, and the
logical address space is first divided into segments. Each segment is then
further divided into pages, so this method combines the flexibility of variable-
sized segments with the efficiency of fixed-sized pages.
111
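The two-level lookup just described can be sketched as follows. The table contents, segment numbers, and page size here are hypothetical, chosen only to show the segment-then-page translation order:

```python
PAGE_SIZE = 4096

# The segment table points to a per-segment page table,
# and each page table maps page numbers to physical frames.
segment_table = {
    0: {0: 5, 1: 9},   # segment 0: page 0 -> frame 5, page 1 -> frame 9
}

def translate(segment, page, offset):
    """Segmented paging: segment table -> page table -> frame -> address."""
    page_table = segment_table[segment]   # step 1: find segment's page table
    frame = page_table[page]              # step 2: page number -> frame
    return frame * PAGE_SIZE + offset     # step 3: add offset within the page

# Address (segment 0, page 1, offset 0x10) lands in frame 9:
assert translate(0, 1, 0x10) == 9 * PAGE_SIZE + 0x10
```

The extra table lookup per access is the overhead mentioned below; real hardware hides much of it with a translation lookaside buffer (TLB).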
Advantages of Segmented Paging:
i). Eliminates External Fragmentation: Paging prevents external
fragmentation by allocating memory in fixed-size frames, even if the segments
themselves are of variable size.
ii). Efficient Memory Use: Paging allows for efficient utilization of physical
memory, while segmentation provides a logical structure that matches the
program’s organization.
iii). Protection and Isolation: Each segment can be protected individually (e.g.,
read-only code, read/write data), while paging allows the pages of a segment to
be placed anywhere in physical memory, with each page being mapped independently.
iv). Flexibility: Segmented paging allows for dynamic memory allocation with
the benefits of both segmentation (logical structure) and paging (efficient
memory management).

112
Disadvantages of Segmented Paging:

i). Complex Address Translation: Address translation involves multiple steps


—first translating the segment number, then the page number within the
segment, and finally adding the offset. This can introduce overhead.
ii). Overhead of Managing Two Tables: The operating system must maintain
both a segment table and page tables, which increases memory overhead and
complexity.
iii). Internal Fragmentation: Although paging eliminates external
fragmentation, it can lead to internal fragmentation if the segments are not
perfectly aligned with page boundaries.

113
Advantages of Segmentation:
i). Logical Organization: Segmentation allows a more logical and flexible memory layout
since it directly corresponds to the structure of the program (e.g., code, stack, heap).
ii). Protection: Different segments can be assigned different protection levels (e.g., code can
be read-only, while stack can be read/write). This helps prevent errors and security
vulnerabilities.
iii). Dynamic Size: Unlike paging, which uses fixed-sized blocks, segmentation allows for
segments of varying sizes, which can be more efficient in some cases.

Disadvantages of Segmentation:
i). External Fragmentation: Since segments can vary in size, the physical memory may
become fragmented over time. This fragmentation happens when there are small gaps between
allocated segments, preventing efficient use of memory.
ii). Complexity: The operating system must manage multiple segments and keep track of
their locations and sizes, which can increase overhead.
iii). Address Translation Overhead: Address translation in segmentation requires checking
114
the segment table and performing calculations for each access, which adds overhead.
General Advantages and Disadvantages of Paging

Advantages
• The following are the advantages of using Paging method

a). No need for external Fragmentation


b). Swapping is easy between equal-sized pages and page frames.
c). Easy to use memory management algorithm

Disadvantages
• The following are the disadvantages of using Paging method

a). May cause Internal fragmentation


b). Page tables consume additional memory.
c). Multi-level paging may lead to memory reference overhead. 115
3.2 Multiprogramming

• Multiprogramming is a memory management technique that allows multiple


processes to be loaded into the main memory and executed concurrently by
the CPU.

• The goal is to maximize CPU utilization by keeping it busy with processes, even
if some processes are waiting for I/O operations to complete.

116
Advantages of Multiprogramming
i). Improved CPU Utilization:
• By running multiple processes, the CPU can stay busy. If one process is waiting for I/O, the
CPU can switch to another process, increasing overall system throughput.
ii). Faster Process Execution (Context Switching):
• When one process is waiting for an I/O operation, the system can switch to another process,
keeping the CPU active. This can lead to faster execution of multiple processes compared to
running one process at a time.
iii). Better Resource Utilization:
• Memory, CPU, and I/O devices are used more efficiently since idle periods are reduced. If
one process is waiting for I/O, others can be processed simultaneously, reducing downtime.
iv). Increased System Throughput:
• More tasks are completed in a given period because multiple processes are handled
concurrently, especially in systems with a mix of CPU-bound and I/O-bound tasks.
v). Improved System Responsiveness:
• User requests or time-sensitive tasks can be handled quickly because other processes may
not monopolize the system’s resources. 117
Disadvantages of Multiprogramming
i). Complexity in Process Management:
• Managing multiple processes requires complex scheduling algorithms, context switching,
and memory management, which can introduce overhead and require additional system
resources.
ii). Overhead of Context Switching:
• Switching between processes (context switching) takes time and CPU cycles. This
overhead can slow down the system if done too frequently, especially with many
processes.
iii). Potential for Increased CPU Starvation:
• If there are too many processes or if the scheduling is not managed properly, some
processes may experience CPU starvation, meaning they are not given enough time to
execute.

118
iv). Memory Fragmentation:
• When multiple processes are loaded in memory, fragmentation (both internal and
external) can occur, potentially wasting memory space.

v). Inefficiency with I/O-Bound Processes:


• While multiprogramming works well for CPU-bound processes, it may not be as efficient
for I/O-bound processes if there is insufficient CPU time or memory to handle all tasks
effectively.

vi). Synchronization Issues:


• As multiple processes execute concurrently, synchronizing access to shared resources
becomes a challenge, and race conditions, deadlocks, or data inconsistency may arise if
not properly managed.

119
120