ALL CSC 417 NOTE

Memory systems are essential for storing and retrieving information in computing devices, structured hierarchically to balance performance, cost, and capacity. They include various types such as volatile primary memory (e.g., RAM, cache) and non-volatile secondary memory (e.g., HDDs, SSDs), along with advanced technologies like virtual memory and emerging memory types. Key characteristics of memory operations include access time, bandwidth, volatility, and efficiency, which are crucial for optimizing performance in applications like high-performance computing and AI.


Memory systems

• Memory systems are critical components of computing devices, facilitating the storage and retrieval of information.
• They encompass a hierarchical structure designed to optimize
performance, cost, and capacity.
The general memory structure
• a. Components of Memory
• Primary Memory (Volatile)
• Registers: Small, fast memory within the CPU used for immediate data processing.
• Cache Memory: High-speed memory located close to the CPU to store frequently accessed data and instructions.
• RAM (Random Access Memory): Temporary storage used for active tasks and processes.
• Secondary Memory (Non-Volatile)
• Hard Drives (HDDs): Magnetic storage devices offering high capacity at lower speeds.
• Solid-State Drives (SSDs): Faster non-volatile storage with no moving parts.
• Tertiary Memory: Includes backup devices like optical drives or magnetic tape systems used for long-term data storage.
• Virtual Memory
• Simulates additional memory by using a portion of the hard drive to supplement RAM.
Memory Hierarchy
• The memory hierarchy is structured to optimize the trade-offs
between speed, cost, and capacity. Here's a deeper look at the levels
of memory:
• Registers (Fastest, smallest capacity)
• ↓Cache
• ↓RAM
• ↓Secondary Storage
• ↓Tertiary Storage (Slowest, largest capacity)
• a. Registers
• Location: Inside the CPU.
• Purpose: Stores data for immediate processing by the CPU, such as intermediate calculations or instruction operands.
• Features: Smallest capacity (a few bytes or words); fastest memory type (directly accessible by the processor).
• Examples: Program Counter (PC), Accumulator (AC).

• b. Cache Memory
• Levels:
• L1 Cache: Closest to the CPU, smallest in size, but fastest.
• L2 Cache: Larger than L1, slightly slower.
• L3 Cache: Shared among multiple CPU cores, larger and slower than L2.
• Purpose: Reduces latency by storing frequently accessed instructions/data.
• Features:
• Faster than RAM but more expensive.
• Operates using principles like spatial locality (access data close to recently accessed data) and temporal
locality (reuse recently accessed data).
• c. Primary Memory (RAM)
• Dynamic RAM (DRAM): Used in main memory. Slower but cheaper than SRAM; needs periodic refreshing to retain data.
• Static RAM (SRAM): Used in cache. Faster and more expensive than DRAM.

• d. Secondary Memory
• Purpose: Provides non-volatile, large-scale storage for data and programs.
• Types:
• HDDs: Mechanical storage; slower read/write speeds.
• SSDs: Flash-based, faster than HDDs, but costlier.
• Hybrid Drives: Combine HDD and SSD for balance.
• e. Tertiary Memory
• Purpose: Used for archival storage and backups.
• Examples: Optical drives and magnetic tapes.
• f. Virtual Memory
• Purpose: Expands the apparent memory available to applications by using the hard disk as an extension of RAM.
• Implementation: Uses paging to divide memory into fixed-size blocks and maps logical addresses to physical addresses.
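To make the paging mechanism concrete, here is a minimal Python sketch of logical-to-physical address translation; the 4 KB page size and the page-table contents are illustrative assumptions, not values from these notes.

```python
# A minimal sketch of paged address translation (illustrative values).
PAGE_SIZE = 4096  # assumed page size in bytes

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_address: int) -> int:
    """Map a virtual address to a physical address via the page table."""
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[vpn]  # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # page 1 maps to frame 2, so this prints 0x2234
```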
Characteristics of Memory Operations
• Memory operations determine how efficiently data is stored, accessed, and
manipulated.
• a. Key Characteristics
• a. Access Time
• Time taken to access data from memory.
• Registers: Few nanoseconds.
• RAM: 10-100 nanoseconds.
• HDD: 10 milliseconds.
• b. Memory Bandwidth
• The rate at which data is read or written, measured in GB/s.
• Important for high-performance systems like gaming PCs or data-intensive servers.
• c. Volatility
• Volatile: Requires power to retain data (e.g., RAM).
• Non-Volatile: Retains data without power (e.g., SSDs, HDDs, NVRAM).
• d. Latency vs Throughput
• Latency: Time delay between request and delivery.
• Throughput: Amount of data processed in a given time.
• e. Power Efficiency
• Power consumption varies by memory type.
• Registers consume the least energy.
• HDDs consume more energy during spinning and seek operations.
Key Memory Operations
• a. Read Operation: Fetches data stored at a specific address. Sequential access reads data in sequence (e.g., tapes); random access reaches any location directly (e.g., RAM).
• b. Write Operation: Saves new data to memory. May overwrite existing data in writable memory (e.g., RAM, SSD).
• c. Erase Operation: Applies to non-volatile memory like flash. Flash memory requires data blocks to be erased before rewriting.
• d. Memory Mapping: Used by CPUs to map logical addresses to physical memory locations. Performed by the Memory Management Unit (MMU).
Performance Enhancements
• a. Caching Techniques
1. Write-Through Cache:
1. Data written to both cache and main memory.
2. Pros: Data consistency.
3. Cons: Slower write performance.
2. Write-Back Cache:
1. Data written only to cache initially and main memory updated later.
2. Pros: Faster writes.
3. Cons: Risk of data loss in cache failures.
• b. Prefetching
• Predictively loading data into cache based on anticipated future needs.
• c. Parallelism in Memory
• Multithreading to allow simultaneous memory access.
Emerging Memory Technologies
• a. 3D NAND Structure: Stacks memory cells vertically.
• Benefits: Increased density. Lower cost per GB. Higher durability for SSDs.
• b. Phase Change Memory (PCM) - Uses heat-induced phase changes in
materials to store data.
Advantages: Faster than flash. High endurance.
• c. Magnetoresistive RAM (MRAM) - Uses magnetic states to store data.
Non-volatile and offers near-RAM speeds.
• d. Neuromorphic Memory - Mimics biological neural networks for AI and
machine learning applications.
• e. Quantum Memory - Uses quantum mechanics principles to store data in
quantum states. Still in research but promises revolutionary capacity and
speed.
Advanced Applications of Memory Systems
• a. High-Performance Computing (HPC)
• Requires large, fast memory systems for simulations and data analysis.
• b. Cloud Computing
• Relies on distributed memory systems for scalable storage and
performance.
• c. Internet of Things (IoT)
• Embedded memory systems with low power consumption are critical for
IoT devices.
• d. AI and Machine Learning
• Demand high-speed memory with massive bandwidth for real-time data
processing.
Cache Memory
Characteristics
• Location
• Capacity
• Unit of transfer
• Access method
• Performance
• Physical type
• Physical characteristics
• Organisation
Location
• CPU
• Internal
• External
Capacity
• Word size
— The natural unit of organisation
• Number of words
— or Bytes
Unit of Transfer
• Internal
— Usually governed by data bus width
• External
— Usually a block which is much larger than a
word
• Addressable unit
— Smallest location which can be uniquely
addressed
— Word internally
— Cluster on Microsoft disks
Access Methods (1)
• Sequential
— Start at the beginning and read through in
order
— Access time depends on location of data and
previous location
— e.g. tape
• Direct
— Individual blocks have unique address
— Access is by jumping to vicinity plus sequential
search
— Access time depends on location and previous
location
— e.g. disk
Access Methods (2)
• Random
— Individual addresses identify locations exactly
— Access time is independent of location or
previous access
— e.g. RAM
• Associative
— Data is located by a comparison with contents
of a portion of the store
— Access time is independent of location or
previous access
— e.g. cache
Memory Hierarchy
• Registers
— In CPU
• Internal or Main memory
— May include one or more levels of cache
— “RAM”
• External memory
— Backing store
Memory Hierarchy - Diagram
Performance
• Access time
— Time between presenting the address and
getting the valid data
• Memory Cycle time
— Time may be required for the memory to
“recover” before next access
— Cycle time is access + recovery
• Transfer Rate
— Rate at which data can be moved
Physical Types
• Semiconductor
— RAM
• Magnetic
— Disk & Tape
• Optical
— CD & DVD
• Others
— Bubble
— Hologram
Physical Characteristics
• Decay
• Volatility
• Erasable
• Power consumption
Organisation
• Physical arrangement of bits into words
• Not always obvious
• e.g. interleaved
The Bottom Line
• How much?
— Capacity
• How fast?
— Time is money
• How expensive?
Hierarchy List
• Registers
• L1 Cache
• L2 Cache
• Main memory
• Disk cache
• Disk
• Optical
• Tape
So you want fast?
• It is possible to build a computer which
uses only static RAM (see later)
• This would be very fast
• This would need no cache
— How can you cache cache?
• This would cost a very large amount
Locality of Reference
• During the course of the execution of a
program, memory references tend to
cluster
• e.g. loops
Cache
• Small amount of fast memory
• Sits between normal main memory and
CPU
• May be located on CPU chip or module
Cache and Main Memory
• The use of multiple levels of cache is depicted in part (b) of the figure above.
• The L2 cache is slower and typically larger
than the L1 cache, and the L3 cache is
slower and typically larger than the L2
cache.
Cache/Main Memory Structure
• The structure of a cache/main-memory system is shown below.
• Main memory consists of up to 2^n addressable words, with each word having a unique n-bit address.
• For mapping purposes, this memory is considered to consist of a number of fixed-length blocks of K words each. That is, there are M = 2^n / K blocks in main memory. The cache consists of m blocks, called lines. Each line contains K words plus a tag of a few bits.
Cache/Main Memory Structure
• Each line also includes control bits (not
shown), such as a bit to indicate whether
the line has been modified since being
loaded into the cache.
• The length of a line, not including tag and
control bits, is the line size. The line size
may be as small as 32 bits, with each
“word” being a single byte; in this case
the line size is 4 bytes.
• If a word in a block of memory is read,
that block is transferred to one of the lines
of the cache. Because there are more
blocks than lines, an individual line cannot
be uniquely and permanently dedicated to
a particular block.
• Thus, each line includes a tag that
identifies which particular block is
currently being stored.
Cache operation – overview
• CPU requests contents of memory location
• Check cache for this data
• If present, get from cache (fast)
• If not present, read required block from
main memory to cache
• Then deliver from cache to CPU
• Cache includes tags to identify which
block of main memory is in each cache
slot
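The overview above can be summarized in a short sketch. This is a simplified model (one fully associative cache, whole blocks fetched on a miss, no replacement), not a description of real hardware:

```python
# A minimal sketch of the cache read flow: check the cache first, and on
# a miss fetch the whole containing block from main memory (simplified).
BLOCK_SIZE = 4                     # words per block (assumption)
main_memory = list(range(64))      # 16 blocks of 4 words each
cache = {}                         # block number (tag) -> block contents

def read(address: int) -> int:
    block, offset = divmod(address, BLOCK_SIZE)
    if block not in cache:                      # miss: load block into cache
        start = block * BLOCK_SIZE
        cache[block] = main_memory[start:start + BLOCK_SIZE]
    return cache[block][offset]                 # hit path: deliver from cache

print(read(10), read(11))  # the second read hits the block just loaded
```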
Cache Read Operation - Flowchart
• The processor generates the read address (RA)
of a word to be read. If the word is contained in
the cache, it is delivered to the processor.
Otherwise, the block containing that word is
loaded into the cache, and the word is delivered
to the processor.
• The figure below shows a typical contemporary cache organization. In this organization, the
cache connects to the processor via data, control,
and address lines. The data and address lines
also attach to data and address buffers, which
attach to a system bus from which main memory
is reached.
Typical Cache Organization
• When a cache hit occurs, the data and
address buffers are disabled and
communication is only between processor
and cache, with no system bus traffic.
When a cache miss occurs, the desired
address is loaded onto the system bus
and the data are returned through the
data buffer to both the cache and the
processor.
Cache Design
• Addressing
• Size
• Mapping Function
• Replacement Algorithm
• Write Policy
• Block Size
• Number of Caches
Cache Addressing
• Where does cache sit?
— Between processor and virtual memory management
unit
— Between MMU and main memory
• Logical cache (virtual cache) stores data using
virtual addresses
— Processor accesses cache directly, without going through the MMU
— Cache access is faster, occurring before MMU address translation
— Virtual addresses use the same address space for different applications
– Must flush cache on each context switch
• Physical cache stores data using main memory
physical addresses
• Almost all non-embedded processors, and many embedded processors, support virtual memory. In essence, virtual memory is a facility that allows programs to address memory from a logical point of view, without regard to the amount of main memory physically available. When virtual memory is used, the address fields of machine instructions contain virtual addresses.
• For reads from and writes to main memory, a hardware memory management unit (MMU) translates each virtual address into a physical address in main memory.
• When virtual addresses are used, the
system designer may choose to place the
cache between the processor and the
MMU or between the MMU and main
memory as shown below.
Logical and Physical Caches
• A logical cache, also known as a virtual
cache, stores data using virtual addresses.
The processor accesses the cache directly,
without going through the MMU. A
physical cache stores data using main
memory physical addresses

• Question
What are the advantages and disadvantages
of logical cache over physical cache?
Cache Size
• Cost
— More cache is expensive
• Speed
— More cache is faster (up to a point)
— Checking cache for data takes time
• The size of the cache should be small enough that the overall average cost per bit is close to that of main memory alone, and large enough that the overall average access time is close to that of the cache alone.

• There are several other motivations for minimizing cache size. The larger the cache, the larger the number of gates involved in addressing the cache.
• The result is that large caches tend to be
slightly slower than small ones— even
when built with the same integrated
circuit technology and put in the same
place on chip and circuit board.

• The available chip and board area also limit cache size. Because the performance of the cache is very sensitive to the nature of the workload, it is impossible to arrive at a single "optimum" cache size.
Comparison of Cache Sizes
Processor        Type                           Year introduced  L1 cache       L2 cache        L3 cache
IBM 360/85       Mainframe                      1968             16 to 32 KB    —               —
PDP-11/70        Minicomputer                   1975             1 KB           —               —
VAX 11/780       Minicomputer                   1978             16 KB          —               —
IBM 3033         Mainframe                      1978             64 KB          —               —
IBM 3090         Mainframe                      1985             128 to 256 KB  —               —
Intel 80486      PC                             1989             8 KB           —               —
Pentium          PC                             1993             8 KB/8 KB      256 to 512 KB   —
PowerPC 601      PC                             1993             32 KB          —               —
PowerPC 620      PC                             1996             32 KB/32 KB    —               —
PowerPC G4       PC/server                      1999             32 KB/32 KB    256 KB to 1 MB  2 MB
IBM S/390 G4     Mainframe                      1997             32 KB          256 KB          2 MB
IBM S/390 G6     Mainframe                      1999             256 KB         8 MB            —
Pentium 4        PC/server                      2000             8 KB/8 KB      256 KB          —
IBM SP           High-end server/supercomputer  2000             64 KB/32 KB    8 MB            —
CRAY MTA         Supercomputer                  2000             8 KB           2 MB            —
Itanium          PC/server                      2001             16 KB/16 KB    96 KB           4 MB
SGI Origin 2001  High-end server                2001             32 KB/32 KB    4 MB            —
Itanium 2        PC/server                      2002             32 KB          256 KB          6 MB
IBM POWER5       High-end server                2003             64 KB          1.9 MB          36 MB
CRAY XD-1        Supercomputer                  2004             64 KB/64 KB    1 MB            —
Mapping Function
• Due to fewer cache lines than main
memory blocks, an algorithm is needed
for mapping main memory blocks into
cache lines.
• A means is needed for determining which main memory block currently occupies a cache line.
• The choice of the mapping function dictates how the cache is organized.
• Three techniques can be used: direct, associative, and set-associative.
Direct Mapping
• Each block of main memory maps to only
one cache line
— i.e. if a block is in cache, it must be in one
specific place

• The mapping is expressed as
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
• The figure below shows the mapping for
the first m blocks of main memory. Each
block of main memory maps into one
unique line of the cache.
• The next m blocks of main memory map into the cache in the same fashion; that is, block B_m of main memory maps into line L_0 of the cache, block B_(m+1) maps into line L_1, and so on.
Direct Mapping from Cache to Main Memory
• Address is in two parts
• Least Significant w bits identify unique
word
• Most Significant s bits specify one memory
block
• The MSBs are split into a cache line field r
and a tag of s-r (most significant)
The mapping function is easily implemented using
the main memory address. Figure below illustrates
the general mechanism.
For purposes of cache access, each main memory
address can be viewed as consisting of three fields.
The least significant w bits identify a unique word
or byte within a block of main memory,
. The remaining s bits specify one of the 2s blocks
of main memory.
The cache logic interprets these s bits as a tag of s
- r bits (most significant portion) and a line field of
r bits.
This latter field identifies one of the m = 2r lines of
the cache. To summarize
Direct Mapping
Address Structure

Tag (s-r bits) | Line or Slot (r bits) | Word (w bits)
      8        |          14           |       2

• 24 bit address
• 2 bit word identifier (4 byte block)
• 22 bit block identifier
— 8 bit tag (=22-14)
— 14 bit slot or line
• No two blocks in the same line have the same Tag field
• Check contents of cache by finding line and checking Tag
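A small sketch of this address split may help; the 8/14/2 field widths are the ones from the example above, and the helper itself is just illustrative:

```python
# A minimal sketch of splitting the 24-bit address of the example into
# its tag (8 bits), line (14 bits), and word (2 bits) fields.
TAG_BITS, LINE_BITS, WORD_BITS = 8, 14, 2

def split_address(addr: int):
    word = addr & ((1 << WORD_BITS) - 1)
    line = (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)
    tag = addr >> (WORD_BITS + LINE_BITS)
    return tag, line, word

tag, line, word = split_address(0xFFFFFC)
print(f"tag={tag:02X} line={line:04X} word={word}")  # tag=FF line=3FFF word=0
# A hit occurs when cache line `line` currently holds a block with tag `tag`.
```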
Direct Mapping
Cache Line Table

Cache line    Main memory blocks held
0             0, m, 2m, 3m, ..., 2^s - m
1             1, m+1, 2m+1, ..., 2^s - m + 1
...
m-1           m-1, 2m-1, 3m-1, ..., 2^s - 1
Direct Mapping Cache Organization
Direct Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in cache = m = 2^r
• Size of tag = (s - r) bits
Direct Mapping pros & cons
• Simple
• Inexpensive
• Fixed location for given block
— If a program accesses 2 blocks that map to the
same line repeatedly, cache misses are very
high
— Its main disadvantage is that there is a fixed
cache location for any given block. Thus, if a
program happens to reference words
repeatedly from two different blocks that map
into the same line, then the blocks will be
continually swapped in the cache, and the hit
ratio will be low (a phenomenon known as
thrashing).
Associative Mapping
• Associative mapping overcomes the
disadvantage of direct mapping by
permitting each main memory block to be
loaded into any line of the cache.
• A main memory block can load into any
line of cache
• Memory address is interpreted as tag and
word
• Tag uniquely identifies block of memory
• Every line’s tag is examined for a match
• Cache searching gets expensive
• The cache control logic interprets a memory address simply as a Tag and a Word field.
• The Tag field uniquely identifies a block of
main memory. To determine whether a
block is in the cache, the cache control
logic must simultaneously examine every
line’s tag for a match
Associative Mapping from
Cache to Main Memory
Fully Associative Cache Organization
Associative Mapping
Address Structure

Tag (22 bits) | Word (2 bits)
• 22 bit tag stored with each 32 bit block of data
• Compare tag field with tag entry in cache to
check for hit
• Least significant 2 bits of address identify which byte is required from the 32-bit data block
• e.g.
— Address: FFFFFC   Tag: 3FFFFF   Data: 24682468   Cache line: 3FFF
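A minimal sketch of the associative lookup, using the 22-bit tag / 2-bit word split above (a dict stands in for the parallel tag comparison done in hardware):

```python
# A minimal sketch of fully associative lookup: the tag is the upper
# 22 bits of the address, and every stored tag is compared on lookup.
WORD_BITS = 2
cache = {}  # tag -> 32-bit data block

def lookup(addr: int):
    tag = addr >> WORD_BITS        # 22-bit tag uniquely identifies the block
    return cache.get(tag)          # hardware examines all tags in parallel

cache[0xFFFFFC >> WORD_BITS] = 0x24682468  # store the example block (tag 3FFFFF)
print(hex(lookup(0xFFFFFC)))               # -> 0x24682468
```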
Associative Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in cache = undetermined
• Size of tag = s bits
• With associative mapping, there is
flexibility as to which block to replace
when a new block is read into the cache.
• The principal disadvantage of associative
mapping is the complex circuitry required
to examine the tags of all cache lines in
parallel.
Set Associative Mapping
• Set- associative mapping is a compromise
that exhibits the strengths of both the
direct and associative approaches while
reducing their disadvantages.
• Cache is divided into a number of sets
• Each set contains a number of lines
• A given block maps to any line in a given
set
— e.g. Block B can be in any line of set i
• e.g. 2 lines per set
— 2 way associative mapping
— A given block can be in one of 2 lines in only
one set
• In this case, the cache consists of a number of sets, each of which consists of a number of lines.
• This is referred to as k-way
set-associative mapping.
Set Associative Mapping
Example
• 13 bit set number
• Block number in main memory is modulo 2^13
Mapping From Main Memory to Cache:
v Associative
Mapping From Main Memory to Cache:
k-way Associative
K-Way Set Associative Cache
Organization
Set Associative Mapping
Address Structure

Tag (9 bits) | Set (13 bits) | Word (2 bits)

• Use the set field to determine the cache set to look in
• Compare the tag field to see if we have a hit
• e.g.
— Address 1FF 7FFC: Tag = 1FF, Data = 12345678, Set number = 1FFF
— Address 001 7FFC: Tag = 001, Data = 11223344, Set number = 1FFF
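A minimal sketch of the 9/13/2 split used in this example; the two example addresses land in the same set with different tags, which is exactly what set-associative mapping allows:

```python
# A minimal sketch of the set-associative address split (9-bit tag,
# 13-bit set, 2-bit word) used in the example above.
TAG_BITS, SET_BITS, WORD_BITS = 9, 13, 2

def split(addr: int):
    word = addr & ((1 << WORD_BITS) - 1)
    set_index = (addr >> WORD_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (WORD_BITS + SET_BITS)
    return tag, set_index, word

for addr in (0xFFFFFC, 0x00FFFC):   # "1FF 7FFC" and "001 7FFC"
    tag, set_index, word = split(addr)
    print(f"addr={addr:06X}: tag={tag:03X} set={set_index:04X} word={word}")
# Both addresses map to set 1FFF; a k-way cache can hold k such blocks per set.
```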
Set Associative Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^s
• Number of lines in set = k
• Number of sets = v = 2^d
• Number of lines in cache = kv = k * 2^d
• Size of tag = (s - d) bits
Direct and Set Associative Cache
Performance Differences
• Significant up to at least 64kB for 2-way
• Difference between 2-way and 4-way at
4kB much less than 4kB to 8kB
• Cache complexity increases with
associativity
• Not justified against increasing cache to
8kB or 16kB
• Above 32kB gives no improvement
• (simulation results)
Figure 4.16
Varying Associativity over Cache Size
Replacement Algorithms
• Once the cache has been filled, when a new block is brought into the cache, one of the existing blocks must be replaced.
• To achieve high speed, such an algorithm must be implemented in hardware.
Replacement Algorithms (1)
Direct mapping
• No choice
• Each block only maps to one line
• Replace that line
Replacement Algorithms (2)
Associative & Set Associative
• Hardware implemented algorithm (speed)
• Least Recently used (LRU): Replace that block in the set
that has been in the cache longest with no reference to it.
• e.g. in 2-way set associative
— Which of the 2 blocks is LRU?
• First in first out (FIFO)
— replace block that has been in cache longest
• Least frequently used
— replace block which has had fewest hits
• Random: A technique not based on usage (i.e., not LRU,
LFU, FIFO, or some variant) is to pick a line at random
from among the candidate lines. Simulation studies have
shown that random replacement provides only slightly
inferior performance to an algorithm based on usage
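As a rough software model of LRU (hardware tracks recency with USE bits instead), here is a minimal sketch of one k-way set; the class and its names are illustrative only:

```python
# A minimal sketch of LRU replacement within a single k-way cache set.
from collections import OrderedDict

class LRUSet:
    def __init__(self, k: int):
        self.k = k                          # set associativity (lines per set)
        self.lines = OrderedDict()          # tag -> block, least recent first

    def access(self, tag, block) -> str:
        if tag in self.lines:
            self.lines.move_to_end(tag)     # hit: mark as most recently used
            return "hit"
        if len(self.lines) >= self.k:
            self.lines.popitem(last=False)  # evict the least recently used line
        self.lines[tag] = block
        return "miss"

s = LRUSet(k=2)
print(s.access(1, "A"), s.access(2, "B"), s.access(1, "A"), s.access(3, "C"))
# miss miss hit miss -- tag 2 is evicted, having been least recently used
```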
Write Policy
• When a block that is resident in the cache is to
be replaced, there are two cases to consider.
• If the old block in the cache has not been altered,
then it may be overwritten with a new block
without first writing out the old block.
• If at least one write operation has been
performed on a word in that line of the cache,
then main memory must be updated by writing
the line of cache out to the block of memory
before bringing in the new block.
• Must not overwrite a cache block unless main
memory is up to date
• Multiple CPUs may have individual caches
• I/O may address main memory directly
Write through
• All writes go to main memory as well as
cache
• Multiple CPUs can monitor main memory
traffic to keep local (to CPU) cache up to
date
• Lots of traffic
• Slows down writes
Write back
• Updates initially made in cache only
• Update bit for cache slot is set when
update occurs
• If block is to be replaced, write to main
memory only if update bit is set.
• Minimizes memory writes. With write back, updates are made only in the cache. When an update occurs, a dirty bit, or use bit, associated with the line is set. Then, when a block is replaced, it is written back to main memory if and only if the dirty bit is set.
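The two policies can be contrasted in a small sketch; the dict-based "memory" and "cache" and the dirty-bit set are illustrative stand-ins, not a real controller:

```python
# A minimal sketch contrasting write-through and write-back policies.
memory, cache, dirty = {}, {}, set()

def write_through(addr, value):
    cache[addr] = value
    memory[addr] = value       # every write also updates main memory

def write_back(addr, value):
    cache[addr] = value
    dirty.add(addr)            # only the dirty (update) bit is set for now

def evict(addr):
    if addr in dirty:          # write back only if the line was modified
        memory[addr] = cache[addr]
        dirty.discard(addr)
    cache.pop(addr, None)

write_back(0x10, 42)
print(0x10 in memory)          # False: main memory not yet updated
evict(0x10)
print(memory[0x10])            # 42: written back on replacement
```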
Internal Memory
Introduction

• Semiconductor memory subsystems, including ROM, DRAM, and SDRAM memories.
• Memory cell - the basic element of semiconductor memory.
• Has three functional terminals (select, control, read/write), each carrying an electrical signal.
• Error control techniques are used to enhance memory reliability.
Semiconductor main memory
• Organization
• The basic element of semiconductor memory is the memory cell.
• Semiconductor memory cells have the following properties:
 They exhibit two stable (or semi-stable)
states, which can be used to represent binary
1 and 0.
 They are capable of being written into at least
once, to set the state
 They are capable of being read to sense the
state

Memory cell operation

Select – selects the cell for a read or write operation
Control – indicates whether the operation is a read or a write
• The figure above depicts the operation of a memory cell. Most commonly, the cell has three functional terminals capable of carrying an electrical signal.
 The select terminal: selects a memory cell for a read or write operation
 The control terminal: indicates read or write
 The other terminal is used for writing and reading
 For writing, it provides an electrical signal that sets the state of the cell to 1 or 0
 For reading, it is used for output of the cell's state
Semiconductor Memory Types
Semiconductor Memory
• RAM (Random Access Memory) characteristics:
—Read/Write
—Volatile
—Temporary storage
—Static or dynamic
RAM
• Read data from memory and write new
data into the memory easily and rapidly.
• Accomplished through the use of electrical
signals.
• Volatile – needs a constant power supply to avoid data loss.
• Temporary storage.
• Two traditional RAM
—Dynamic RAM (DRAM)
—Static RAM (SRAM).

Note: "dynamic" or "static" refers to how the RAM retains its data.

Dynamic RAM
• Bits stored as charge in capacitors. The presence or
absence of charge in the capacitor is interpreted as a
binary 1 or 0
• Because capacitors have a natural tendency to discharge,
dynamic RAMs require periodic charge refreshing to
maintain data storage.
• Charges leak away, even with power continuously applied.
• Need refreshing even when powered
• Simpler construction
• Smaller per bit
• Less expensive
• Need refresh circuits
• Slower
• Main memory
• Essentially analogue
— Level of charge determines value of 1 or 0
Dynamic RAM Structure
DRAM Operation
• Address line active when bit value from cell is to
be read or written
— Transistor acts as a switch: closed (current flows) if a voltage is applied to the address line, and open (no current flows) if no voltage is present on the address line.
• Write
— Voltage to bit line
– High for 1 low for 0
— Then a signal is applied to the address line, allowing a
charge to be transferred to the capacitor.
• Read
— For the read operation, when the address line is
selected, the transistor turns on and the charge stored
on the capacitor is fed out onto a bit line and to a sense
amplifier.
Static RAM
• A static RAM (SRAM) is a digital device that uses the same logic elements used in the processor. In an SRAM, binary values are stored using traditional flip-flop logic-gate configurations.
• A static RAM will hold its data as long as
power is supplied to it
• Bits stored as on/off switches
• No charges to leak
• No refreshing needed when powered
• More complex construction
• Larger per bit
• More expensive
• Does not need refresh circuits
• Faster
• Cache
• Digital
—Uses flip-flops
Static RAM Structure
• The above diagram is a typical SRAM structure
for an individual cell. Four transistors (T1, T2,
T3, T4) are cross connected in an arrangement
that produces a stable logic state.
• In logic state 1, point C1 is high and point C2 is low; in this state, T1 and T4 are off and T2 and T3 are on.
• In logic state 0, point C1 is low and point C2 is
high; in this state, T1 and T4 are on and T2 and
T3 are off. Both states are stable as long as the
direct current (dc) voltage is applied. Unlike the
DRAM, no refresh is needed to retain data
Static RAM Operation
• Transistor arrangement gives stable logic state
• State 1
—C1 high, C2 low
—T1 T4 off, T2 T3 on
• State 0
—C2 high, C1 low
—T2 T3 off, T1 T4 on
• Address line transistors T5 and T6 are used to open or close a switch
• When a signal is applied to this line, the two transistors are switched on, allowing a read or write operation. For a write operation, the desired bit value is applied to line B, while its complement is applied to line B̄. This forces the four transistors (T1, T2, T3, T4) into the proper state.
SRAM v DRAM
• Both volatile
—Power needed to preserve data
• Dynamic cell
—Simpler to build, smaller
—More dense
—Less expensive
—Needs refresh
—Larger memory units
• Static
—Faster
—Cache
Read Only Memory (ROM)
• A read-only memory (ROM) contains a
permanent pattern of data that cannot be
changed. A ROM is non-volatile; that is,
no power source is required to maintain
the bit values in memory. While it is
possible to read a ROM, it is not possible
to write new data into it.
• The advantage of ROM is that the data or
program is permanently in main memory
and need never be loaded from a
secondary storage device.
• A ROM is created like any other
integrated circuit chip, with the data
actually wired into the chip as part of the
fabrication process. This presents two
problems:
• ■ The data insertion step includes a
relatively large fixed cost, whether one or
thousands of copies of a particular ROM
are fabricated.
• ■ There is no room for error. If one bit is
wrong, the whole batch of ROMs must be
thrown out.
Read Only Memory (ROM)
• Permanent storage
—Nonvolatile
• Applications of ROM
—Microprogramming
—Library subroutines
—System programs (BIOS: Basic Input Output
System)
—Function tables
ROM at Work

• While RAM uses transistors to turn on or off access to a capacitor at each intersection, ROM uses a diode to connect the lines if the value is 1. If the value is 0, then the lines are not connected at all.

Figure: BIOS uses Flash memory, a type of ROM.


Types of ROM
• Written during manufacture
—Very expensive for small runs
• Programmable (once)
—PROM
—Needs special equipment to program
• Read “mostly”
—Erasable Programmable (EPROM)
– Erased by UV
—Electrically Erasable (EEPROM)
– Takes much longer to write than read
—Flash memory
– Erase whole memory electrically
Chip Logic
• Semiconductor memory comes in packaged chips. Each chip contains an array of memory cells.
• As in the memory hierarchy, trade-offs also exist when we consider the organization of memory cells and functional logic on a chip.
• For semiconductor memories, one of the
key design issues is the number of bits of
data that may be read/written at a time.
Chip Packaging

• For 1M words, a total of 20 address pins (2^20 = 1M), i.e. A0–A19
• D0–D7: 8 lines for data read out
• Vcc: power supply
• Vss: ground pin
• CE (Chip Enable) pin: if there is more than one chip, CE indicates which chip is meant to pick up the address on the bus
• Vpp: program voltage, supplied during a write operation

Figure: 8 Mbit EPROM
Chip Packaging (2)

• For 1M words, a total of 20 pins (2^20 = 1M)
• D0–D7: 8 lines for data read out
• Vcc: power supply
• Vss: ground pin
• CE: chip enable pin; if there is more than one chip, CE indicates which chip is meant to pick up the address on the bus
• Vpp: program voltage supplied
Interleaved Memory

• Main memory is composed of a collection of DRAM memory chips.
• A number of chips can be grouped
together to form a memory bank. It is
possible to organize the memory banks in
a way known as interleaved memory.
• Each bank is independently able to service a memory read or write request, so that a system with K banks can service K requests simultaneously, increasing memory read or write rates by a factor of K.
• Collection of DRAM chips
• Grouped into memory bank
• Banks independently service read or write
requests
• K banks can service K requests simultaneously
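A minimal sketch of low-order interleaving follows; the bank count K = 4 is an assumption for illustration:

```python
# A minimal sketch of low-order memory interleaving: consecutive addresses
# rotate across K banks, so sequential accesses can proceed in parallel.
K = 4  # number of banks (assumption)

def bank_of(addr: int) -> int:
    return addr % K           # low-order bits select the bank

def offset_in_bank(addr: int) -> int:
    return addr // K          # remaining bits select the word within the bank

for addr in range(8):
    print(f"addr {addr} -> bank {bank_of(addr)}, offset {offset_in_bank(addr)}")
# Addresses 0..3 fall in banks 0..3, so a burst of reads is serviced in parallel.
```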
Error Correction
• A semiconductor memory system is
subject to errors. These can be
categorized as hard failures and soft
errors.
• Hard Failure
—Permanent defect
—A hard failure is a permanent physical defect
so that the memory cell or cells affected
cannot reliably store data but become stuck at
0 or 1 or switch erratically between 0 and 1.
Hard errors can be caused by harsh
environmental abuse, manufacturing defects,
and wear.
Soft Error
—Random, non-destructive
—No permanent damage to memory
A soft error is a random, non-destructive event
that alters the contents of one or more memory
cells without damaging the memory. Soft errors
can be caused by power supply problems or
alpha particles. These particles result from
radioactive decay and are distressingly common
because radioactive nuclei are found in small
quantities in nearly all materials.
Both hard and soft errors are clearly
undesirable, and most modern main memory
systems include logic for both detecting and
correcting errors
• Detected using Hamming error correcting
code
• The simplest of the error-correcting codes
is the Hamming code devised by Richard
Hamming at Bell Laboratories.
• The use of this code can be illustrated with Venn diagrams.
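As a concrete illustration, here is a minimal sketch of a Hamming (7,4) code, which protects 4 data bits with 3 parity bits and corrects any single-bit error; this is the standard textbook construction, not code from these notes:

```python
# A minimal sketch of Hamming (7,4): 3 parity bits guard 4 data bits, and
# the syndrome gives the 1-based position of any single flipped bit.
def encode(d):  # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]          # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]          # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
         | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1
         | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2)
    if s:                            # nonzero syndrome = position of the error
        c[s - 1] ^= 1
    return c

word = encode([1, 0, 1, 1])
word[4] ^= 1                         # inject a single-bit soft error
print(correct(word) == encode([1, 0, 1, 1]))   # True: the error was corrected
```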
Flash memory
• Another form of semiconductor memory is flash
memory.
• Flash memory is used both for internal memory
and external memory applications.
• Flash memory is intermediate between EPROM and EEPROM in both cost and functionality. Like EEPROM, flash memory uses an electrical erasing technology.
• An entire flash memory can be erased in one or
a few seconds, which is much faster
than EPROM.
• It is possible to erase just blocks of memory rather than an entire chip.
• An important characteristic of flash
memory is that it is persistent memory,
which means that it retains data when
there is no power applied to the memory.
Thus, it is useful for secondary (external)
storage, and as an alternative to random
access memory in computers.
• There are two distinctive types of flash
memory, designated as NOR and NAND
flash memory.
External memory in
Computer Organization
• External memory is also known as secondary memory or backing store.
• It is used to store huge amounts of data because it has a large capacity; at present, capacities are measured in hundreds of megabytes or even in gigabytes.
• The important property of external memory is that when the computer switches off, the stored information is not lost.
• External memory can be categorized into four parts:
• Magnetic disk
• RAID
• Optical memory
• Magnetic tape
• Magnetic Disks
• A disk is a circular platter constructed of a nonmagnetic material known as the substrate.
• It is covered with a magnetic coating used to hold the information. The substrate has traditionally been constructed of aluminium or an aluminium alloy, but recently another material, known as the glass substrate, has been introduced.
• The benefits of glass substrates are as follows:
• It can increase disk reliability by improving the uniformity of the magnetic film surface.
• It reduces read-write errors through a significant reduction in overall surface defects.
• It has better stiffness, which helps to reduce disk dynamics, and it can better withstand shock and damage.
• Magnetic Read and Write Mechanisms
• The most important component of external memory is still the magnetic disk.
• Many systems, such as supercomputers, personal computers, and mainframe computers, contain both removable and fixed hard disks.
• A lot of systems contain two heads: a read head and a write head.
• During read and write operations, the platter rotates while the head is stationary.
Data Organization and Formatting
• The head is a small device able to read from or write to the portion of the platter rotating beneath it.
• The width of each track is the same as that of the head. There are thousands of tracks per surface.
• Gaps are used to separate adjacent tracks.
• This prevents, or minimizes, errors caused by interference of magnetic fields or misalignment of the head.
• Sectors are used to transfer data to and from the disks.
• Physical characteristics
• A disk drive may permanently contain a non-removable disk. For example, the hard disk in a personal computer can never be removed; it is a non-removable disk. A removable disk is a disk that can be removed and replaced with another disk. For most disks, both sides of the platter carry the magnetizable coating; these are referred to as double-sided disks. Single-sided disks are used in some less expensive disk systems.
• A movable head is employed by multiple-platter disks, with one read-write head per platter surface. All the heads are the same distance from the centre of the disk and move together, because they are mechanically fixed.
• The set of all tracks in the same relative position on the platters is known as a cylinder.
• This mechanism is mostly used in the floppy disk, which is the least expensive type of disk, is small, and contains a flexible platter.
• Workstations and personal computers commonly contain a built-in disk known as a Winchester disk, also referred to as a hard disk.
• On a movable-head system, the seek time is the time taken to position the head at the track.
• There is also a rotational latency or rotational delay: the time taken for the start of the sector to reach the head. The time it takes to get into a position to read or write is known as the access time, which is equal to the sum of the rotational delay and the seek time, if any.
• Once the head is in position, the system can perform the read or write operation as the sector moves under the head. This is the data transfer portion of the operation, and the time taken to transfer the data is known as the transfer time.
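A small worked example of the access-time relation above; the 7200 rpm spindle speed and 4 ms average seek time are assumed figures for illustration only:

```python
# access time = seek time + average rotational delay (plus transfer time).
rpm = 7200                              # assumed spindle speed
seek_ms = 4.0                           # assumed average seek time (ms)
rotation_ms = 60_000 / rpm              # one full revolution: ~8.33 ms
avg_rotational_delay = rotation_ms / 2  # on average, half a revolution
print(f"average access time = {seek_ms + avg_rotational_delay:.2f} ms")  # 8.17
```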
RAID
• RAID is also known as a redundant array of independent disks.
• It is a data storage virtualization technology used to combine multiple disks into a logical unit so as to improve performance or create redundancy.
• With multiple disks/drives, it allows the employment of various techniques such as disk mirroring, parity, and disk striping.
• We cannot consider RAID a replacement for data backup: critical data held on RAID should still be backed up to a logical set of disks or to other physical disks.
• When we work with RAID, we normally use the following terms:
• Striping: data is divided between more than one disk.
• Mirroring: data is mirrored between more than one disk.
• Parity: also called a checksum; a calculated value used to mathematically rebuild the data.
• RAID commonly has 7 levels, of which levels 0, 1, and 3 are used for high transfer rates, and levels 4, 5, and 6 are used for high transaction rates.
• All the levels of RAID are described as follows:
• RAID 0
• RAID 0 can also be called disk striping. In the RAID 0 technique, data is evenly divided across two or more storage devices such as HDDs or SSDs.
• In this technique, the data is organized in such a manner that users can read and write files faster, which speeds up performance.
• If we have a large number of applications and enormous data, the best solution is disk striping.
• The setup of RAID 0 is very easy. It can also be called the most affordable type of redundant disk organization.
• However, this type of arrangement is unable to handle faults, and we cannot use it for critical data. This is because it writes the first block to the first disk, the second to the next disk, and so on, repeating the process until it has hit all of the disks and then coming back to the first disk. This means all the disks work in parallel, so we see the full performance of our disks.
• On the downside, there is no redundancy, meaning that if any disk fails, we lose all our data across all disks. So RAID 0 provides high performance and expanded storage, but it is actually less reliable than a single disk.
• Advantages of RAID 0
• It provides great performance in read and write operations.
• There is no overhead, because RAID 0 uses all the storage capacity.
• RAID 0 is easy to implement.

• Disadvantages of RAID 0
• RAID 0 cannot be used in critical systems because it is unable to tolerate faults.
• If one disk fails in RAID 0, the data on all the other disks is also lost.
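A minimal sketch of the round-robin block placement described above; three list-based "disks" stand in for real drives:

```python
# A minimal sketch of RAID 0 striping: blocks rotate across the disks.
disks = [[], [], []]   # three disks (assumption)

def write_striped(blocks):
    for i, block in enumerate(blocks):
        disks[i % len(disks)].append(block)   # round-robin placement

write_striped([f"B{i}" for i in range(7)])
for n, d in enumerate(disks):
    print(f"disk {n}: {d}")
# disk 0: ['B0', 'B3', 'B6'] ... losing any one disk loses part of every file.
```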
• RAID 1
• RAID 1 can also be called mirroring. It takes all data from one disk and writes it to a second disk in parallel with the first. In RAID 1 there is very high redundancy, because each disk contains an exact copy of the data on the other disk.
• It needs a minimum of two disks to work. The RAID 1 setup provides protection against data loss; that is, it has fault tolerance capacity.
• If one disk fails, the copy of that disk provides the required data.
• The system can read data from both disks simultaneously. This feature also speeds up performance and availability.
• Write performance, however, is unaffected. A write takes more time than a read because RAID 1 has two disks writing in parallel: the writing operation only delivers the capacity of one disk, since the same data has to be written twice.
• The downside of RAID 1 is high cost, because one must build twice the capacity that is actually needed at this level.
• Advantages of RAID 1
• Compared to a single disk, RAID 1 provides excellent read and write speed.
• It has fault tolerance. If one disk fails, we don't need to rebuild the data; we simply copy the data to the replacement disk.
• It is a very simple technology, and the implementation of RAID 1 is also very simple.
• Disadvantages of RAID 1
• In RAID 1, the data has to be written twice. Effective storage capacity is therefore only half of the total disk capacity, which is the main disadvantage of RAID 1.
• RAID 1 is more expensive than RAID 0 because it needs twice the disks to mirror the data.
• Hot-swapping of failed disks is not always allowed by software RAID 1: the failed disk can only be replaced after powering down the computer to which it is attached.
• Many people use the servers simultaneously, and this power-down process may not be acceptable to them. That is why such systems typically use hardware controllers, which support hot-swapping.
• RAID 2
• When data is written to the disks, the ECC code (error correction code) for the data is evaluated on the fly. After that, the data bits are striped to the data disks, and finally the ECC code is written to the redundancy disks. When data is read from the disks, the corresponding ECC code is read from the redundancy disks to verify whether the data is consistent; if needed, appropriate corrections are performed on the fly.
• This process uses many disks and can be configured in various disk configurations. Nowadays RAID 2 is not useful anymore, because it is costly and implementing RAID 2 in a RAID controller is difficult. The separate ECC is also now redundant, because hard disks are capable of doing the ECC work themselves.
• Advantages of RAID 2
• RAID 2 uses a Hamming code for error correction.
• It can store the parity with the help of one designated drive.
• Disadvantages of RAID 2
• RAID 2 needs an extra drive for error detection.
• Because it contains an extra drive, it is expensive and has a complex structure.
• RAID 3
• RAID 3 can also be called byte-level striping. RAID 3 works like RAID 0 in that it uses byte-level striping, but it also needs an extra disk in the array. RAID 3 uses a dedicated disk to support the parity code calculations; this disk can be called the 'parity disk'. In RAID 3 we stripe bytes across the disks in place of striping blocks across the disks. At this level, we require multiple data disks and a dedicated disk to store the parity. In the configuration process of RAID 3, the data is divided into individual bytes, which are then saved on the disks. For each row of data, the parity is determined and saved on the designated parity disk. If there is a failure, the data can be recovered with the help of the corresponding parity bytes and the appropriate calculation over the remaining bytes.
• Although this level is rarely used in practice, it has benefits: if any disk in the arrangement is damaged, the array can resist the failure, and it has a very high read speed. Unfortunately, RAID 3 also has drawbacks. First, compared to the read speed, the write speed is very slow because of the necessary checksum calculation (RAID hardware controllers are also unable to solve this problem). Second, if there is a disk failure, the whole system works very slowly. RAID 3 can resist a breakdown, meaning that if any disk in the array fails, the damaged disk can be replaced, but the replacement process is very costly. Third, the disk used for calculating checksums is the bottleneck of the entire array's performance.
• Given the description above, RAID 3 does not offer a good, reliable, and cheap solution, which is why it is rarely used in practice. Systems based on RAID 3 are mostly used in deployments where very large files are referenced by a small number of users.
• Advantages of RAID 3
• RAID 3 provides high throughput for transferring huge data.
• It solves RAID 2's main disadvantage, meaning it is resistant to disk failure and breakdown.
• Disadvantages of RAID 3
• If we only need to transfer a small file, the configuration may be too much.
• A disk failure significantly decreases the throughput.
• RAID 4
• RAID 4 can be known as block-level striping. RAID 4 works like RAID 3; the main difference between them is how the data is shared. The data is divided into blocks of, for example, 16, 32, 64, or 128 KB. As in RAID 0, the blocks are written across the disks, and for each row of written data a parity disk is used to record a parity block. That is, this level uses block-level striping of data in place of byte-level striping. RAID 5 and RAID 4 have a lot of similarities, but RAID 4 confines all parity data to a single disk; it does not use distributed parity.
• RAID 4 can be fully implemented and configured with a minimum of three disks. RAID 4 also requires hardware support to perform the parity calculations, through which we are able to recover data with the help of appropriate mathematical operations.
• Advantages of RAID 4
• RAID 4 allows block-level striping, which provides the facility to send I/O requests simultaneously.
• It provides low storage overhead, which becomes even lower as more disks are added.
• This level does not need a synchronized controller or spindles.
• Disadvantages of RAID 4
• It contains a parity drive, which may lead to a bottleneck.
• If we try to perform write operations simultaneously, they will be slower, because the parity information is written to one disk.
• RAID 5
• RAID 5 can be called striping with parity. It uses block-level data striping and also uses distributed parity. RAID 5 needs a minimum of three disks but can work with up to 16 disks. It is the most secure RAID level. Parity is a type of raw binary data; the RAID system calculates the parity values and uses them to create a parity block. If any disk fails in the RAID system, the parity block is used to recover the striped data. Mostly, a RAID system with a parity function uses the array to store the parity blocks on the disks. At this level, data blocks are striped across the drives, and the parity checksum of all the data blocks in a row is written to one drive. The parity checksums do not use a fixed drive but are spread across all the drives. If the data of any data block is no longer available, the computer can recalculate it with the help of the parity data. That means that on a single drive failure, RAID 5 can withstand the failure of any disk in the array without losing data or losing access to data.
• Advantages of RAID 5
• In RAID 5, write transactions are slow because of the parity calculation, while read transactions are very fast.
• If there is a disk failure in RAID 5, we can still access all the data while the failed drive is being replaced and the data is rebuilt by the storage controller on a new disk.
• Disadvantages of RAID 5
• A disk failure affects the throughput, but the degradation is still acceptable.
• RAID 5 is a complex technology. Suppose a 4 TB disk in an array of various disks fails: replacing it and restoring the data of the failed disk may take a day or more, depending on the speed of the controller and the load on the array. If another disk goes bad during that time, data will be lost forever.
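The parity idea behind RAID 5 is just XOR, as this minimal sketch shows; the two-byte "blocks" are illustrative:

```python
# A minimal sketch of RAID-style parity: parity = XOR of the data blocks,
# so any one lost block is rebuilt by XOR-ing the survivors with parity.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

d0, d1, d2 = b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"
parity = xor_blocks([d0, d1, d2])

# Suppose the disk holding d1 fails; rebuild it from the rest plus parity.
rebuilt = xor_blocks([d0, d2, parity])
print(rebuilt == d1)   # True
```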
• RAID 6
• RAID 6 can also be called striping with double parity. RAID 6 works like RAID 5; the difference between them is that in RAID 6 the system stores an additional parity block on each disk. This enables a configuration in which two disks may fail before the array becomes unavailable. It needs two different sets of parity calculations, and it can rebuild an array even if two drives fail simultaneously. RAID 6 needs a minimum of four disks and can withstand two disks dying simultaneously: two disks are used for the data, and the remaining two disks are used for the parity information. If the number of disks rises, the chance of multiple failures increases, as does the complexity of rebuilding the disk set.
• Compared to RAID 5, it offers higher redundancy and also increases read performance. Under intensive write operations, this level suffers the same server performance overhead; the impact depends on the architecture of the RAID system (software or hardware), i.e. whether the system performs the high-performance parity calculation in included processing software and whether it is located in the firmware.
• In RAID 6, the chance of two disks failing at the same time is very small. In a RAID 5 system, if any disk fails, it takes hours, days, or more to replace it with a new disk; if another disk fails at that time, we lose all of our data forever. But in RAID 6, the RAID array will survive even the second failure.
• Advantages of RAID 6
• In RAID 6, read transactions are very fast, just like RAID 5.
• It is more secure than RAID 5, because if two disks fail, we are still able to access all our data while the failed disks are being replaced.
• Disadvantages of RAID 6
• In RAID 6, the additional parity has to be calculated. Write transactions in RAID 6 are therefore slower than in RAID 5; they can be slower by 20% compared to RAID 5.
• A disk failure affects the throughput, but the degradation is still acceptable.
• RAID 6 is a complex technology. If a disk fails in a RAID array, rebuilding the array can take a long time.
• Optical Memory
• Optical memory was released in 1982, and Sony and Philips developed it. These memories perform their operations with the help of light beams, and an optical drive is needed for the operations. We can use optical memory to store backups, audio, and video, and also for carrying data. Its read/write speed is slower than that of a flash drive or a hard drive. Examples of optical memory are the Compact Disk (CD), Blu-ray Disk (BD), and Digital Versatile Disk (DVD).
• Compact Disk (CD)
• It is a digital audio system used to store data. It is composed of a circular plastic platter, one side of which is coated with an aluminium alloy used to store the data. It also contains an additional thin plastic covering used to protect the data. A CD performs its operations with the help of a CD drive. The compact disk can be called a non-erasable disk; a laser beam is used to imprint the data on the disk. Initially, CDs were used to hold 60 to 75 minutes of audio information and could store about 700 MB of data, a single side storing 60 minutes of audio information. Many devices have since been developed with lower cost and higher capacity than the CD.
• Types of Compact Disk
• CD-ROM:
• CD-ROM is also known as CD read-only memory. It is mainly used to store computer data. As noted earlier, compact disks were first used to store video and audio data, but since they store data in digital form, compact disks can also be used to store computer data.
• If there is some error in audio or video data, the appliance ignores it, and the error is not reflected in the produced video or audio. But if computer data contains an error, the CD-ROM cannot tolerate it, and the error is reflected in the produced data. When indenting pits on compact disks, it is impossible to prevent physical imperfections, so in order to detect and correct errors, some extra bits have to be added.
• The compact disk (CD) and compact disk read-only memory (CD-ROM) contain one spiral track, beginning at the centre of the disk and spiralling out towards the outer edge. CD-ROM uses blocks, or sectors, to store the data.
• CD-R:
• CD-R is also known as CD-Recordable. It is a write-once-read-many format; that is, it allows a single recording onto a disk. It is used in applications that require one or a small number of copies of a set of data. A CD-Recordable is composed of a polycarbonate plastic substrate, a thin reflective metal coating, and a protective outer coating.
• CD-RW:
• CD-RW is also known as CD-Rewritable. It is a compact disk format that allows us to record on a disk repeatedly. CD-Rewritable and CD-Recordable are composed of the same materials: a polycarbonate plastic substrate, a thin reflective metal coating, and a protective outer coating. In the CD-RW, the dye is replaced by an alloy.
• Digital Versatile Disk (DVD)
• DVD (digital versatile disk) technology was first launched in 1996. The CD (compact disk) and the DVD (digital versatile disk) have the same appearance; the main difference between them is storage size, the storage size of a DVD being much larger than that of a CD. Several changes were made to the DVD's design to make its storage larger.
• Blu-Ray DVD
• A Blu-ray disk is a high-capacity optical disk medium used to store huge amounts of data and to record and play back high-definition video. Blu-ray was designed to supersede the DVD. While a CD can store 700 MB of data and a DVD can store 4.7 GB of data, a single Blu-ray disk can store up to 25 GB of data. Dual-layer Blu-ray disks can hold 50 GB of data, an amount of storage equivalent to about 4 hours of HDTV. There is also a commonly used double-sided dual-layer DVD, which can store 17 GB of data.
• A Blu-ray disk uses blue lasers, which help it hold more information than other optical media. The laser is actually 'blue-violet', but the developers rolled 'blue-violet-ray' off the tongue a little earlier as 'Blu-ray'. CDs and DVDs use a red laser, whose wavelength (650 nm) is greater than that of the blue-violet laser (405 nm). With a smaller wavelength, the laser can focus on a smaller area, so a Blu-ray disk of the same size as a CD or DVD can store a larger amount of data. Blu-ray also provides much higher resolution than DVD: in standard definition, a DVD provides 720x480 pixels, while Blu-ray high definition provides 1920x1080 pixel resolution.
• Magnetic Tape
• Reading and writing techniques in a tape system are the same as in a disk system. Here the medium is a flexible polyester tape coated with a magnetizable material. The tape's data can be structured as a number of parallel tracks running lengthwise; in this form, the recording of data is called parallel recording. Instead of parallel recording, most modern systems use serial recording, which lays the data out as a sequence of bits along each track, as is done with magnetic disks. In serial recording, the physical record on the tape is the data read and written in contiguous blocks.
• A tape drive is accessed as a sequential-access device. If the current position of the head is beyond the desired record, we have to rewind the tape a certain distance and start reading forward. The tape is in motion only during read and write operations. The difference between a tape drive and a disk drive is that the disk drive is a direct-access device: it can reach the desired sector without sequentially reading all sectors on the disk. It only has to wait until the intervening sectors within one track have arrived, and after that it can access any track in turn.

The magnetic tape can also be considered a type of secondary memory. It serves as the slowest-speed and lowest-cost member of the memory hierarchy.
Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as part of the main memory.

Virtual memory is a common technique used in a computer's operating system (OS).

Virtual memory uses both hardware and software to enable a computer to compensate for physical memory shortages, temporarily transferring data from random access memory (RAM) to disk storage. Mapping chunks of memory to disk files enables a computer to treat secondary memory as though it were main memory.

The size of virtual storage is limited by the addressing scheme of the computer system and the amount of secondary memory available, not by the actual number of main storage locations. The technique is implemented using both hardware and software: it maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.
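To make the virtual-to-physical mapping concrete, here is a minimal C sketch of a single-level page-table lookup with demand loading on a miss. The page size, table size, and fault handling are illustrative assumptions, not the scheme of any particular OS.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u   /* assumed page size: 4 KiB      */
#define NUM_PAGES 256u    /* assumed virtual pages covered */

/* One page-table entry: which physical frame holds the page,
 * and whether the page is currently resident in RAM.         */
typedef struct {
    uint32_t frame;
    bool     valid;
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Translate a virtual address into a physical address.
 * A miss (valid == false) is a page fault: a real OS would pick a
 * victim frame, read the page in from disk, and update the table. */
static uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr / PAGE_SIZE;  /* virtual page number */
    uint32_t offset = vaddr % PAGE_SIZE;  /* offset within page  */

    if (vpn >= NUM_PAGES)
        return 0;  /* out of range for this small sketch */

    if (!page_table[vpn].valid) {
        printf("page fault at 0x%08x -> demand-load from disk\n", vaddr);
        page_table[vpn].frame = 0;  /* stand-in for the frame the OS
                                       would actually have chosen    */
        page_table[vpn].valid = true;
    }
    return page_table[vpn].frame * PAGE_SIZE + offset;
}

int main(void) {
    printf("virtual 0x1234 -> physical 0x%08x\n", translate(0x1234));
    return 0;
}
```

Note how the virtual address splits into a page number (used to index the table) and an offset (carried through unchanged); this split is what lets the OS place any virtual page in any physical frame.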
Advantages of Virtual Memory

• More processes may be maintained in the main memory: Because we load only some of the pages of any particular process, there is room for more processes. This leads to more efficient utilization of the processor, because it is more likely that at least one of the more numerous processes will be in the ready state at any particular time.
• A process may be larger than all of the main memory: One of the most fundamental restrictions in programming is lifted. A process larger than the main memory can be executed because of demand paging; the OS itself loads pages of a process into the main memory as required.
• It allows greater multiprogramming levels by using less of the available (primary) memory for each process.
• It has twice the capacity for addresses as main memory.
• It makes it possible to run more applications at once.
• Users are spared from having to add memory modules when RAM space runs out, and applications are liberated from shared memory management.
• Speed is increased when only a portion of a program is required for execution.
• Memory isolation gives increased security.
• It makes it possible for several larger applications to run at once.
• Memory allocation is comparatively cheap.
• It does not suffer from external fragmentation.
• It is efficient to manage logical partition workloads using the CPU.
• Automatic data movement is possible.
• Pages in the original process can be shared during a fork system call operation that creates a copy of itself.

In addition to these benefits, in a virtualized computing environment, administrators can use virtual memory management techniques to allocate additional memory to a virtual machine (VM) that has run out of resources. Such virtualization management tactics can improve VM performance and management flexibility.
Disadvantages of Virtual Memory

• It can slow down system performance: applications run slower when they run from virtual memory, as data needs to be constantly transferred between the physical memory and the hard disk.
• Data must be mapped between virtual and physical memory, which requires extra hardware support for address translations, slowing the computer down further.
• It can increase the risk of data loss or corruption: data can be lost if the hard disk fails, or if there is a power outage while data is being transferred to or from the hard disk.
• It can increase the complexity of the memory management system, as the operating system needs to manage both physical and virtual memory.
• The size of virtual storage is limited by the amount of secondary storage, as well as by the addressing scheme of the computer system.
• Thrashing can occur if there is not enough RAM, which will make the computer perform slower.
• It may take time to switch between applications using virtual memory.
• It lessens the amount of available hard drive space.
Virtual memory (virtual RAM) vs. physical memory (RAM)

When talking about the differences between virtual and physical memory, the biggest distinction commonly made is speed. RAM is considerably faster than virtual memory, but RAM tends to be more expensive. When a computer requires storage, RAM is used first; virtual memory, which is slower, is used only when the RAM is filled.
Feature      | Virtual Memory                                                          | Physical Memory (RAM)
Definition   | An abstraction that extends the available memory by using disk storage | The actual hardware (RAM) that stores data and instructions currently being used by the CPU
Location     | On the hard drive or SSD                                                | On the computer's motherboard
Speed        | Slower (due to disk I/O operations)                                     | Faster (accessed directly by the CPU)
Capacity     | Larger, limited by disk space                                           | Smaller, limited by the amount of RAM installed
Cost         | Lower (cost of additional disk storage)                                 | Higher (cost of RAM modules)
Data Access  | Indirect (via paging and swapping)                                      | Direct (CPU can access data directly)
Volatility   | Non-volatile (data persists on disk)                                    | Volatile (data is lost when power is off)
INTRODUCTION TO
FAULT-TOLERANT
COMPUTING
• INTRODUCTION:
• What is fault tolerance?
• Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of some of its components. If operating quality decreases at all, the decrease is proportional to the severity of the failure, whereas in a naively designed system even a small failure can cause total breakdown.
• Fault tolerance is particularly sought in high-availability or life-critical systems. It is the art and science of building computing systems that continue to operate satisfactorily in the presence of faults.
• Fault Tolerance Requirements: The basic characteristics of fault tolerance are:
• 1. No single point of failure
• 2. No single point of repair
• 3. Fault isolation to the failing component
• 4. Fault containment to prevent propagation of the failure
• A system fails if it behaves in a way which is not consistent with its specification. Such a failure is the result of a fault in a system component.
• Systems are fault-tolerant if they behave in a predictable manner, according to their specification, in the presence of faults
• ⇒ there are no failures in a fault-tolerant system.
• Several application areas need systems that maintain a correct (predictable) functionality in the presence of faults: banking systems, control systems, manufacturing systems.
• What does correct functionality in the presence of faults mean?
• The answer depends on the particular application (on the specification of the system):
• The system stops and doesn't produce any erroneous (dangerous) result/behaviour.
• The system stops and restarts after a while without loss of information.
• The system keeps functioning without any interruption and (possibly) with unchanged performance.
• A fault can be:
• 1. Hardware fault: malfunction of a hardware component (processor, communication line, switch, etc.).
• 2. Software fault: malfunction due to a software bug.
• A fault can be the result of:
• 1. Mistakes in specification or design: such mistakes are at the origin of all software faults and of some of the hardware faults.
• 2. Defects in components: hardware faults can be produced by manufacturing defects or by defects caused by deterioration in the course of time.
• 3. Operating environment: hardware faults can be the result of stress produced by an adverse environment: temperature, radiation, vibration, etc.
• Fault types according to their temporal behavior:
• 1. Permanent fault: the fault remains until it is repaired or the affected unit is replaced.
• 2. Intermittent fault: the fault vanishes and reappears (e.g. caused by a loose wire).
• 3. Transient fault: the fault dies away after some time (caused by environmental effects).
• Fault tolerance
• A system or a component fails due to a fault.
• Fault tolerance means that the system continues to provide its services in the presence of faults.
• A system may experience, and should recover also from, partial failures.
• Fault categories in time:
• Transient: occurs once and disappears
• Intermittent: occurs many times in an irregular way
• Permanent
• Fault-tolerant computing is the art and science of building computing systems that continue to operate satisfactorily in the presence of faults.
• A fault-tolerant system may be able to tolerate one or more fault types, including
• Transient, intermittent or permanent hardware faults,
• Software and hardware design errors,
• Operator errors, or
• Externally induced upsets or physical damage.
Techniques of Fault Tolerant Systems
• There are four main techniques:
• Hardware (HW) redundancy, which is done by using more than one unit of the component to be tolerated.
• The second technique is software (SW) redundancy; this is done by using additional programs and subprograms.
• The third technique is time redundancy, which is useful in the case of soft errors.
• The last technique is information redundancy, which relies on coding theory.
Computer Architecture and Organization

Topic: Input/Output

Reading: Stallings, Chapter 7
General Description of I/O
Wide variety of peripherals
• Delivering different amounts of data
• At different speeds
• In different formats (bit depth, etc.)
Closing the Gap
• Need I/O modules to act as a bridge between the processor/memory bus and the peripherals
[Diagram: Processor ↔ Bus ↔ I/O Module ↔ Device Interfaces ↔ External devices (sensors and controls)]
External Devices
• External devices are needed as a means of communication to the outside world (both input and output – I/O)
• Types
• Human readable – communication with the user (monitor, printer, keyboard, mouse)
• Machine readable – communication with equipment (hard drive, CD-ROM, sensors, and actuators)
• Communication – communication with remote computers/devices (can be any of the first two, or a network interface card or modem)
Generic Device Interface Configuration
Device Interface Components
• The control logic is the I/O module's interface to the device
• The data channel passes the collected data from, or the data to be output to, the device. On the opposite end is the I/O module, but eventually it is the processor.
• The transducer acts as a converter between the digital data of the I/O module and the signals of the outside world.
• Keyboard converts motion of a key into data representing the key pressed or released
• Temperature sensor converts amount of heat into a digital value
• Disk drive converts data to electronic signals for controlling the read/write head
I/O Module Functions
• Control & Timing
• Processor Communication
• Device Communication
• Data Buffering
• Error Detection
I/O Module: Control and Timing
• Required because of multiple devices all communicating on the same channel
• Example
• CPU checks I/O module device status
• I/O module returns status
• If ready, CPU requests data transfer
• I/O module gets data from device
• I/O module transfers data to CPU
• Variations for output, DMA, etc.
I/O Module: Processor Communication
• Commands from processor – Examples: READ SECTOR, WRITE SECTOR, SEEK track number, and SCAN record ID.
• Data – passed back and forth over the data bus
• Status reporting – request from the processor for the I/O module's status. May be as simple as BUSY and READY
• Address recognition – each I/O device is set up as a block of one or more addresses unique to itself
Other I/O Module Functions
• Device Communication – specific to each device
• Data Buffering – due to the differences in speed (the device is usually orders of magnitude slower), the I/O module needs to buffer data to keep from tying up the CPU's bus with slow reads or writes
• Error Detection – simply distributes the need for watching for errors to the module. Errors may include:
• Malfunctions by device (paper jam)
• Data errors (parity checking at the device level)
• Internal errors to the I/O module such as buffer overruns
I/O Module Structure
I/O Module Level of Operation
• How much control will the CPU be required to handle?
• How much will the CPU be allowed to handle?
• What will the interface look like, e.g., Unix treats everything like a file
• Support multiple or single device
• Will additional control be needed for multiple devices on a single port (e.g., serial port versus USB)
Input/Output Techniques
• Programmed I/O – poll and response
• Interrupt driven – module calls for CPU when needed
• Direct Memory Access (DMA) – module has direct access to a specified block of memory
Programmed I/O –
CPU has direct control over I/O
• Processor requests operation with commands sent to the I/O module
• Control – telling a peripheral what to do
• Test – used to check the condition of the I/O module or device
• Read – obtains data from the peripheral so the processor can read it from the data bus
• Write – sends data using the data bus to the peripheral
• I/O module performs the operation
• When completed, I/O module updates its status registers
• Sensing status – involves polling the I/O module's status registers
Programmed I/O (continued)
• I/O module does not inform CPU directly
• CPU may wait, or do something and come back later
• Wastes CPU time because typically the processor is much faster than I/O
• CPU acts as a bridge for moving data between I/O module and main memory, i.e., every piece of data goes through the CPU
• CPU waits for I/O module to complete operation
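A minimal C sketch of programmed I/O, assuming hypothetical memory-mapped status and data registers (the addresses and the READY bit are invented for illustration). The busy-wait loop is exactly the wasted CPU time described above.

```c
#include <stdint.h>

/* Hypothetical memory-mapped device registers; the addresses and
 * the READY bit layout are invented for illustration only.       */
#define STATUS_REG   (*(volatile uint8_t *)0x40000000u)
#define DATA_REG     (*(volatile uint8_t *)0x40000004u)
#define STATUS_READY 0x01u

/* Programmed I/O read: the CPU polls the status register and then
 * moves every byte itself, so it is busy for the whole transfer.  */
void pio_read(uint8_t *buf, unsigned n) {
    for (unsigned i = 0; i < n; i++) {
        while (!(STATUS_REG & STATUS_READY))
            ;                    /* spin: sensing status by polling   */
        buf[i] = DATA_REG;       /* each byte passes through the CPU  */
    }
}
```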
Interrupt Driven I/O
• Overcomes CPU waiting
• Requires setup code and an interrupt service routine
• No repeated CPU checking of device
• I/O module interrupts CPU when ready
• Still requires CPU to act as go-between for moving data between I/O module and main memory
Basic Interrupt I/O Operation
• CPU initializes the process
• I/O module gets data from peripheral while CPU does other work
• I/O module interrupts CPU
• CPU requests data
• I/O module transfers data
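The same transfer, sketched as interrupt-driven I/O in C. The register address, the ISR name, and how the ISR gets wired to the interrupt line are assumptions; the point is that the CPU only touches the device when the module interrupts.

```c
#include <stdbool.h>
#include <stdint.h>

#define DATA_REG (*(volatile uint8_t *)0x40000004u)  /* hypothetical */

static volatile uint8_t  rx_buf[256];
static volatile unsigned rx_head = 0;
static volatile bool     data_ready = false;

/* Interrupt service routine: invoked only when the I/O module
 * raises its interrupt, so there is no polling loop at all.    */
void device_isr(void) {
    rx_buf[rx_head++ % 256u] = DATA_REG;  /* grab the ready byte */
    data_ready = true;
}

int main(void) {
    /* setup code would enable the device interrupt here */
    for (;;) {
        /* ... CPU does other useful work ... */
        if (data_ready) {        /* a completed transfer is waiting */
            data_ready = false;
            /* consume rx_buf here */
        }
    }
}
```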
Design Issues
• Resolution of multiple interrupts – how do you identify the module issuing the interrupt?
• Priority – how do you deal with multiple interrupts at the same time, or interrupting in the middle of an interrupt?
Identifying Interrupting Module
• Different interrupt line for each module
• Limits number of devices
• Even with this method, there are often multiple interrupts still on a single interrupt line
• Priority is set by hardware
Software poll
• Single interrupt line – when an interrupt occurs, the CPU then goes out to check who needs attention
• Slow
• Priority is set by the order in which the CPU polls devices
Daisy Chain or Hardware poll
• Interrupt Acknowledge sent down a chain
• Module responsible places a unique vector on the bus
• CPU uses the vector to identify the handler routine
• Priority is set by the order in which the interrupt acknowledge gets to the I/O modules, i.e., the order of devices on the chain
Bus Arbitration
• Allow multiple modules to control the bus
• I/O module must claim the bus before it can raise an interrupt
• Can do this with:
• Bus controller/arbiter
• Distributed control among devices
• Must be one master, either the processor or another device
• Device that "wins" places a vector on the bus uniquely identifying its interrupt
• Priority is set by priority in arbitration, i.e., whoever is currently in control of the bus
Direct Memory Access (DMA)
• Impetus behind DMA – interrupt-driven and programmed I/O require active CPU intervention (all data must pass through the CPU)
• Transfer rate is limited by the processor's ability to service the device
• CPU is tied up managing I/O transfer
DMA (continued)
• Additional module (hardware) on the bus
• DMA controller takes over the bus from the CPU for I/O
• Waiting for a time when the processor doesn't need the bus
• Cycle stealing – seizing the bus from the CPU (more common)
DMA Operation
• CPU tells DMA controller:
• whether it will be a read or write operation
• the address of the device to transfer data from
• the starting address of the memory block for the data transfer
• the amount of data to be transferred
• DMA performs the transfer while the CPU does other processes
• DMA sends an interrupt when completed
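A C sketch of the four-parameter setup described above, assuming a hypothetical memory-mapped DMA controller (register addresses and layout are invented for illustration).

```c
#include <stdint.h>

/* Hypothetical DMA controller registers (addresses invented). */
#define DMA_DIR   (*(volatile uint32_t *)0x50000000u)  /* 0 = read, 1 = write */
#define DMA_DEV   (*(volatile uint32_t *)0x50000004u)  /* device address      */
#define DMA_MEM   (*(volatile uint32_t *)0x50000008u)  /* memory block start  */
#define DMA_COUNT (*(volatile uint32_t *)0x5000000Cu)  /* bytes to transfer   */
#define DMA_GO    (*(volatile uint32_t *)0x50000010u)  /* start the transfer  */

/* The CPU only programs the four parameters listed above, starts the
 * controller, and goes off to do other work; completion is signalled
 * later by a DMA interrupt.                                           */
void dma_start_read(uint32_t dev, void *mem, uint32_t nbytes) {
    DMA_DIR   = 0;                         /* read from device          */
    DMA_DEV   = dev;                       /* which device to read from */
    DMA_MEM   = (uint32_t)(uintptr_t)mem;  /* destination memory block  */
    DMA_COUNT = nbytes;                    /* amount of data            */
    DMA_GO    = 1;                         /* transfer now proceeds
                                              without the CPU           */
}
```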
Evolution of I/O Methods
Growth of more sophisticated I/O devices:
1. Processor directly controls device
2. Processor uses Programmed I/O
3. Processor uses Interrupts
4. Processor uses DMA
5. Some processing moved to processors in the I/O module that access programs in memory and execute them on their own without CPU intervention (I/O module referred to as an I/O Channel)
6. Distributed processing where the I/O module is a computer in its own right (I/O module referred to as an I/O Processor)
Computer Organization
Asynchronous input output
synchronization
• Asynchronous input/output (I/O) synchronization is a technique used in computer organization to manage the transfer of data between the central processing unit (CPU) and external devices. In asynchronous I/O synchronization, data transfer occurs at an unpredictable rate, with no fixed timing or synchronization between the CPU and external devices.
• This approach differs from synchronous I/O synchronization, which uses a clock signal to synchronize the transfer of data between the CPU and external devices and ensures that data is transferred at a fixed rate.
• Asynchronous I/O synchronization is typically used in situations where data transfer rates are variable or unpredictable, such as in serial communication or with slow devices. In these cases, the use of a clock signal to synchronize data transfer can waste resources or slow down the transfer of data.
• To manage the transfer of data in asynchronous I/O synchronization, the CPU typically uses interrupt-driven I/O, where it waits for an interrupt signal from the device to indicate that data is ready for transfer. The CPU can then initiate the transfer of data, and the device will send data back in an asynchronous manner.
• Asynchronous input/output is a form of I/O processing that allows other devices to proceed before the transmission or data transfer is done. Problem faced in asynchronous input/output synchronization – there is no guarantee that the data on the data bus is fresh, since there is no fixed time slot for sending or receiving data. This problem is solved by the following mechanisms:
• Strobe
• Handshaking
• Data is transferred from source to destination through the data bus in between.
• 1. Strobe Mechanism:
• Source-initiated strobe – when the source initiates the process of data transfer. The strobe is just a signal.
• (i) First, the source puts data on the data bus and turns the strobe signal ON. (ii) The destination, on seeing the strobe ON, reads the data from the data bus. (iii) After the destination has read the data, the strobe goes OFF. The signals can be seen as follows:
• It shows that first the data is put on the data bus and then the strobe signal becomes active.
• Destination-initiated strobe – when the destination initiates the process of data transfer.
• (i) First, the destination turns the strobe signal ON to ask the source to put fresh data on the data bus. (ii) The source, on seeing the ON signal, puts fresh data on the data bus. (iii) The destination reads the data from the data bus, and the strobe goes OFF. The signals can be seen as follows:
• It shows that first the strobe signal becomes active and then data is put on the data bus.
• Problems faced in strobe-based asynchronous input/output –
• In source-initiated strobe, it is assumed that the destination has read the data from the data bus, but there is no surety.
• In destination-initiated strobe, it is assumed that the source has put the data on the data bus, but there is no surety.
• These problems are overcome by handshaking.
• 2. Handshaking Mechanism:
• Source-initiated handshaking – when the source initiates the data transfer process. It consists of two signals. DATA VALID: if ON, tells that the data on the data bus is valid, otherwise invalid. DATA ACCEPTED: if ON, tells that the data has been accepted, otherwise not accepted.
• (i) The source places data on the data bus and enables the Data Valid signal. (ii) The destination accepts the data from the data bus and enables the Data Accepted signal. (iii) After this, the Data Valid signal is disabled, meaning the data on the data bus is now invalid. (iv) The Data Accepted signal is disabled and the process ends. Now there is surety that the destination has read the data from the data bus, through the Data Accepted signal. The signals can be seen as follows:
• It shows that first data is put on the data bus, then the Data Valid signal becomes active, and then the Data Accepted signal becomes active. After the data is accepted, first the Data Valid signal goes off, then the Data Accepted signal goes off.
• Destination-initiated handshaking – when the destination initiates the process of data transfer. REQUEST FOR DATA: if ON, requests that data be put on the data bus. DATA VALID: if ON, tells that the data on the data bus is valid, otherwise invalid.
• (i) When the destination is ready to receive data, the Request for Data signal is activated. (ii) The source, in response, puts data on the data bus and enables the Data Valid signal. (iii) The destination then accepts the data from the data bus and, after accepting it, disables the Request for Data signal. (iv) At last, the Data Valid signal is disabled, meaning the data on the data bus is no longer valid. Now there is surety that the source has put the data on the data bus, through the Data Valid signal. The signals can be seen as follows:
• It shows that first the Request for Data signal becomes active, then data is put on the data bus, then the Data Valid signal becomes active. After the data is read, first the Request for Data signal goes off, then the Data Valid signal goes off.
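The handshake can be mimicked in software. The C sketch below runs the source-initiated handshake between two threads, with volatile flags standing in for the DATA VALID and DATA ACCEPTED lines (production code would use C11 atomics; the busy-wait flags here are only meant to mirror the hardware signals).

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Shared flags standing in for the two handshake lines. */
static volatile bool data_valid    = false;  /* DATA VALID line    */
static volatile bool data_accepted = false;  /* DATA ACCEPTED line */
static volatile int  data_bus;

static void *source(void *arg) {
    (void)arg;
    data_bus   = 42;             /* (i) put data on the bus        */
    data_valid = true;           /*     raise DATA VALID           */
    while (!data_accepted) ;     /* wait for the destination       */
    data_valid = false;          /* (iii) bus contents now invalid */
    return NULL;
}

static void *destination(void *arg) {
    (void)arg;
    while (!data_valid) ;        /* wait for fresh data            */
    int value = data_bus;        /* (ii) read the bus              */
    data_accepted = true;        /*     raise DATA ACCEPTED        */
    while (data_valid) ;         /* source has seen the accept     */
    data_accepted = false;       /* (iv) handshake complete        */
    printf("destination received %d\n", value);
    return NULL;
}

int main(void) {
    pthread_t s, d;
    pthread_create(&d, NULL, destination, NULL);
    pthread_create(&s, NULL, source, NULL);
    pthread_join(s, NULL);
    pthread_join(d, NULL);
    return 0;
}
```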
• Features:
• Callback functions: A callback function is a function that is called by the operating system or device driver when a data transfer operation is completed. The CPU can continue with other tasks while the device is performing the data transfer operation. Once the operation is complete, the device driver calls the callback function, which can be used to notify the CPU that the data transfer has finished.
• Interrupts: Interrupts are signals sent by devices to the CPU to indicate that an event has occurred. In the case of asynchronous I/O, interrupts can be used to signal the CPU that a data transfer operation has completed. When an interrupt occurs, the CPU stops executing its current task and transfers control to an interrupt service routine (ISR) that is responsible for handling the interrupt.
• Polling: Polling is a technique used to check the status of a device periodically to determine if it has completed a data transfer operation. With asynchronous I/O, the CPU can poll the device periodically to check if the data transfer has finished. If the transfer is complete, the CPU can then retrieve the data from the device.
• Select function: The select function is a system call used to monitor multiple file descriptors for input or output readiness. With asynchronous I/O, the select function can be used to monitor the status of a device and notify the CPU when a data transfer operation has completed.
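For example, the POSIX select() call can wait for a descriptor to become readable, with a timeout. The C program below monitors standard input, but a device descriptor could be watched the same way.

```c
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

/* Wait up to 5 seconds for standard input to become readable.
 * select() lets one thread watch many descriptors at once.    */
int main(void) {
    fd_set readfds;
    struct timeval timeout = { .tv_sec = 5, .tv_usec = 0 };

    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);

    int n = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &timeout);
    if (n > 0 && FD_ISSET(STDIN_FILENO, &readfds))
        printf("data is ready to read\n");
    else if (n == 0)
        printf("timed out, no data\n");
    else
        perror("select");
    return 0;
}
```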
• Advantages of Asynchronous input/output synchronization:
• Some advantages of asynchronous input/output (I/O) synchronization include:
• Flexibility: Asynchronous I/O synchronization allows for flexible data transfer rates and can adapt to varying transfer speeds without the need for synchronization. This is particularly useful when dealing with slow or intermittent devices.
• Resource efficiency: Because data transfer is not synchronized to a clock signal, asynchronous I/O synchronization can be more resource-efficient than synchronous I/O synchronization. It can reduce the overhead of synchronization and improve the utilization of system resources.
• Reduced latency: Asynchronous I/O synchronization can help reduce latency, or the delay between the initiation of a data transfer and its completion. This can improve the responsiveness and overall performance of the system.
• Better error handling: Asynchronous I/O synchronization can provide better error handling, as it allows for the detection and handling of errors during data transfer. This can help ensure that data is transferred accurately and reliably.
• Compatibility: Asynchronous I/O synchronization is compatible with a wide range of devices and systems, making it a flexible and widely used technique for managing data transfer.
• Disadvantages of Asynchronous input/output synchronization:
• Some disadvantages of asynchronous input/output (I/O) synchronization include:
• Complexity: Asynchronous I/O synchronization can be more complex to implement than synchronous I/O synchronization, as it requires interrupt-driven I/O and other techniques to manage data transfer.
• Overhead: Asynchronous I/O synchronization can result in higher overhead than synchronous I/O synchronization, as the CPU must constantly monitor for interrupt signals and initiate data transfer when necessary.
• Latency: Although asynchronous I/O synchronization can help reduce latency in some cases, it can also introduce additional latency when waiting for interrupt signals from devices.
• Synchronization issues: Asynchronous I/O synchronization can introduce synchronization issues, particularly when dealing with multiple devices or large data transfers. It can be difficult to ensure that data is transferred in the correct order and that all devices are properly synchronized.
• Compatibility issues: Asynchronous I/O synchronization may not be compatible with all devices and systems, particularly those that require fixed data transfer rates or specific synchronization protocols.