
COMPUTER ORGANIZATION

1) Explain bus, list its types, and explain the single bus structure.

• A bus is a communication pathway used to transfer data between the components of a computer.

• It carries information in the form of electrical signals between the CPU, memory, and I/O devices.

• Buses consist of data, address, and control lines.

• Types of Buses:

1. Data Bus – Transfers actual data.

2. Address Bus – Carries memory or I/O addresses.

3. Control Bus – Carries control signals like Read/Write.

• Single Bus Structure:

o All components share a single common bus.

o CPU, memory, and I/O devices communicate over one bus.

o Advantage: Simple and cost-effective.

o Disadvantage: Only one operation at a time (limited speed).

2) Explain the following:

a) Memory Address Register (MAR):

• Holds the address of the memory location to be read from or written to.

• Connected to the address bus.

b) Memory Data Register (MDR):

• Stores data being transferred to or from memory.

• Connected to the data bus.

c) Device Drivers:

• Software programs that control specific hardware devices.

• Provide a communication interface between OS and devices.


d) BIOS (Basic Input Output System):

• Firmware that initializes hardware during the booting process.

• Provides runtime services for the OS and programs.

e) Processor Time:

• The amount of time the CPU spends executing a program.

• Measured in CPU cycles or seconds, crucial for performance analysis.

3) Explain number representation in sign magnitude, 1's and 2's complement.

• Sign Magnitude:

o MSB is sign bit (0 = positive, 1 = negative).

o Remaining bits represent magnitude.

o Example: +5 = 0101, −5 = 1101 (in 4-bit).

• 1’s Complement:

o Invert all bits of positive number to get its negative.

o Example: +5 = 0101, −5 = 1010.

o Has two representations for 0 (+0 and −0).

• 2’s Complement:

o Invert all bits and add 1 to the LSB.

o Example: +5 = 0101, −5 = 1011.

o Only one representation for 0; most widely used.
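
The three encodings can be checked with a small Python sketch (helper names are illustrative):

def sign_magnitude(n, bits=4):
    sign = 1 if n < 0 else 0              # MSB is the sign bit
    return (sign << (bits - 1)) | abs(n)

def ones_complement(n, bits=4):
    mask = (1 << bits) - 1
    return n & mask if n >= 0 else ~abs(n) & mask   # invert all bits

def twos_complement(n, bits=4):
    return n & ((1 << bits) - 1)          # invert all bits and add 1

for f in (sign_magnitude, ones_complement, twos_complement):
    print(f.__name__, format(f(-5), "04b"))
# sign_magnitude 1101, ones_complement 1010, twos_complement 1011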

4) Booth’s Multiplication for -7 × 3 (in 4-bit)

• -7 = 1001, 3 = 0011, use 4-bit representation

• M = -7 = 1001, Q = 3 = 0011, Q-1 = 0, A = 0000

• Steps (A, Q, Q-1):

1. Cycle 1: Q0 Q-1 = 1 0 → A = A − M = 0000 − 1001 = 0111; right shift → A = 0011, Q = 1001, Q-1 = 1

2. Cycle 2: Q0 Q-1 = 1 1 → no operation; right shift → A = 0001, Q = 1100, Q-1 = 1

3. Cycle 3: Q0 Q-1 = 0 1 → A = A + M = 0001 + 1001 = 1010; right shift → A = 1101, Q = 0110, Q-1 = 0

4. Cycle 4: Q0 Q-1 = 0 0 → no operation; right shift → A = 1110, Q = 1011, Q-1 = 0

• Result = (A,Q) = 11101011 = decimal −21

5) Booth’s Multiplication for -5 × -3 (4-bit)

• -5 = 1011, -3 = 1101

• M = -5 = 1011, Q = -3 = 1101, Q-1 = 0, A = 0000

• Perform similar steps using Booth's algorithm

• Final binary result = 00001111 → Decimal +15

6) Booth’s Multiplication for 7 × 3

• 7 = 0111, 3 = 0011

• M = 0111, Q = 0011, Q-1 = 0, A = 0000

• Perform Booth’s steps

• Final result = (A,Q) = 00010101 = decimal +21

7) Perform 22 ÷ 7 using Restoring Division (5-bit)

• Dividend = 10110 (22), Divisor = 00111 (7)

• Initialize A = 00000, Q = 10110

• Perform 5 steps:

1. Shift A and Q left

2. Subtract M from A

3. If result is negative, restore and set Q0 = 0

4. If positive, keep and set Q0 = 1

• Final Quotient = 00011 (3), Remainder = 00001 (1)


8) Perform 13 ÷ 5 using Restoring Division

• 13 = 01101, 5 = 00101

• A = 00000, Q = 01101

• Follow restoring division steps:

• Final Quotient = 00010 (2), Remainder = 00011 (3)

9) Restoring Division Algorithm (Unsigned Numbers)

1. Load dividend in Q and divisor in M.

2. Initialize A = 0.

3. For n bits:

o Left shift A and Q.

o A = A - M.

o If A < 0: restore A (A = A + M), set Q0 = 0.

o Else set Q0 = 1.

4. Result: Q = quotient, A = remainder.

Example: 13 ÷ 5

• Q = 01101, M = 00101

• After steps, Quotient = 00010, Remainder = 00011
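
A minimal Python sketch of these steps (the function name is illustrative):

def restoring_divide(dividend, divisor, bits=5):
    a, q, m = 0, dividend, divisor               # A = 0, Q = dividend, M = divisor
    for _ in range(bits):
        a = (a << 1) | ((q >> (bits - 1)) & 1)   # left shift (A, Q) as one register
        q = (q << 1) & ((1 << bits) - 1)
        a -= m                                   # trial subtraction: A = A - M
        if a < 0:
            a += m                               # restore A, set Q0 = 0
        else:
            q |= 1                               # keep result, set Q0 = 1
    return q, a                                  # quotient, remainder

print(restoring_divide(13, 5))                   # (2, 3)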

10) Compare between Multicomputer and Multiprocessor

Feature | Multiprocessor | Multicomputer
Architecture | Shared memory | Distributed memory
Communication | Shared bus or cache | Message passing
Cost | More expensive | Less expensive
Performance | Faster communication | Slower due to network delays
Example usage | Servers, high-performance CPUs | Distributed systems, clusters

11) Booth’s Multiplication Algorithm (Steps)

1. Use A (accumulator), Q (multiplier), M (multiplicand), and Q-1.

2. Initialize A = 0, Q-1 = 0.

3. Check last two bits: Q0 and Q-1

o 10 → A = A - M

o 01 → A = A + M

o 00 or 11 → No operation

4. Perform arithmetic right shift on (A, Q, Q-1).

5. Repeat for the number of bits (n times).

6. Result = the combined (A, Q) register pair (A holds the high half, Q the low half).
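
A runnable Python sketch of the algorithm, assuming 4-bit two's-complement operands (names are illustrative); it reproduces the results of questions 4-6:

def booth_multiply(m, q, bits=4):
    mask, msb = (1 << bits) - 1, 1 << (bits - 1)
    a, q, q_1 = 0, q & mask, 0                # A = 0, Q-1 = 0
    m &= mask
    for _ in range(bits):
        if (q & 1, q_1) == (1, 0):
            a = (a - m) & mask                # 10 -> A = A - M
        elif (q & 1, q_1) == (0, 1):
            a = (a + m) & mask                # 01 -> A = A + M
        q_1 = q & 1                           # arithmetic right shift (A, Q, Q-1)
        q = (q >> 1) | ((a & 1) << (bits - 1))
        a = (a >> 1) | (a & msb)              # A keeps its sign bit
    product = (a << bits) | q                 # 2n-bit result
    return product - (1 << 2 * bits) if product & (1 << (2 * bits - 1)) else product

print(booth_multiply(-7, 3), booth_multiply(-5, -3), booth_multiply(7, 3))
# -21 15 21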

12) Perform 27 ÷ 6 using Restoring Division

• 27 = 11011, 6 = 00110

• A = 00000, Q = 11011

• Perform 5 cycles:

o Left shift (A,Q)

o Subtract M from A

o Set Q0 based on sign of A

• Final Quotient = 00100 (4), Remainder = 00011 (3)

13) Condition Codes and Their Significance

1. Zero (Z):
o Set if result of operation is zero.

o Used in comparisons and loops.

2. Sign (S):

o Indicates if result is negative.

o Set if MSB = 1.

3. Carry (C):

o Set if there is a carry out of MSB in addition or borrow in subtraction.

o Useful in multi-byte arithmetic.

4. Overflow (V):

o Set if signed overflow occurs.

o Indicates result out of range for signed data.

5. Parity (P):

o Set if number of set bits in result is even.

o Used in error checking.
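
A short Python sketch (hypothetical helper) showing how the five flags could be computed for an 8-bit addition:

def add_flags(x, y, bits=8):
    mask = (1 << bits) - 1
    xm, ym = x & mask, y & mask
    raw = xm + ym
    r = raw & mask
    return {
        "Z": int(r == 0),                              # Zero: result is 0
        "S": (r >> (bits - 1)) & 1,                    # Sign: MSB of result
        "C": int(raw > mask),                          # Carry out of the MSB
        "V": ((xm ^ r) & (ym ^ r)) >> (bits - 1) & 1,  # Signed overflow
        "P": int(bin(r).count("1") % 2 == 0),          # Even parity
    }

print(add_flags(100, 100))  # {'Z': 0, 'S': 1, 'C': 0, 'V': 1, 'P': 0}
# 100 + 100 = 200 overflows the signed 8-bit range (max +127), so V = 1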

14) Effective Address Calculation

Given:

• R1 = 1200

• R2 = 4600

a) Load 20(R1), R5
Effective Address (EA) = 20 + contents of R1 = 20 + 1200 = 1220

b) Move #3000, R5
#3000 is an immediate value, so EA is not used. Operand is 3000

c) Store R5, 30(R1,R2)


EA = 30 + R1 + R2 = 30 + 1200 + 4600 = 5830

d) Add -(R2), R5
Pre-decrement: R2 = R2 − 1 = 4599 (assuming byte addressing)
EA = 4599

e) Subtract (R1)+, R5
Post-increment: EA = contents of R1 = 1200; R1 then becomes 1201 (assuming byte addressing)
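
The five calculations restated as a short Python sketch (plain variables stand in for the registers; byte addressing assumed):

R1, R2 = 1200, 4600

ea_a = 20 + R1                 # a) Load 20(R1), R5      -> EA = 1220
operand_b = 3000               # b) Move #3000, R5       -> immediate, no EA
ea_c = 30 + R1 + R2            # c) Store R5, 30(R1,R2)  -> EA = 5830
R2 -= 1                        # d) Add -(R2), R5        -> pre-decrement,
ea_d = R2                      #                            EA = 4599
ea_e = R1                      # e) Subtract (R1)+, R5   -> EA = 1200,
R1 += 1                        #                            then R1 = 1201

print(ea_a, operand_b, ea_c, ea_d, ea_e)   # 1220 3000 5830 4599 1200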

15) Evaluate E = (A + B) * (C + D)

Three-address instructions:

ADD R1, A, B ; R1 = A + B

ADD R2, C, D ; R2 = C + D

MUL R3, R1, R2 ; R3 = R1 * R2 (E)

Two-address instructions:

MOV R1, A ; R1 = A

ADD R1, B ; R1 = A + B

MOV R2, C ; R2 = C

ADD R2, D ; R2 = C + D

MUL R1, R2 ; R1 = R1 * R2 (E)

16) Shift and Rotate Instructions

• Shift Instruction: Moves bits left or right, filling with 0 (logical) or sign bit (arithmetic)

o Example: SHL R1 (Shift left) – multiplies R1 by 2

• Rotate Instruction: Circulates bits around

o Example: ROL R1 (Rotate left) – MSB becomes LSB
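
A Python sketch of 8-bit shift-left and rotate-left (helper names are illustrative):

def shl(x, bits=8):
    return (x << 1) & ((1 << bits) - 1)    # logical shift left: 0 enters the LSB

def rol(x, bits=8):
    msb = (x >> (bits - 1)) & 1            # old MSB re-enters at the LSB
    return ((x << 1) | msb) & ((1 << bits) - 1)

print(format(shl(0b10010011), "08b"))      # 00100110 (MSB lost)
print(format(rol(0b10010011), "08b"))      # 00100111 (MSB wrapped around)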

17) Definitions

a) Assembler: Converts assembly language into machine code


b) Object code: Machine-readable binary output from assembler or compiler
c) (Duplicate) Assembler: Same as (a)
d) Debugging: The process of identifying and fixing code errors
e) Stack: LIFO data structure used in memory for storing return addresses, variables, etc.

18) Stacks vs Queues


Feature | Stack (LIFO) | Queue (FIFO)
Access | Last-in-first-out | First-in-first-out
Use cases | Function calls, Undo | Task scheduling, Buffers
Operations | push(), pop() | enqueue(), dequeue()
Example | Web browser history | Print queue
Structure | Top pointer | Front & rear pointers

19) Stacks in Function Calls

• Used to store return addresses, parameters, and local variables

• Function call: Push return address to stack

• Function return: Pop return address from stack

• Helps implement recursive functions

20) Push and Pop Process

• Push (Insert):

o Check for overflow

o Increment top

o Store value at new top

• Pop (Delete):

o Check for underflow

o Retrieve value from top

o Decrement top

Used in expression evaluation, recursion, and memory management
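
A minimal array-based Python sketch of these checks (class and method names are illustrative):

class Stack:
    def __init__(self, max_size):
        self.data = [None] * max_size
        self.top = -1                      # empty stack

    def push(self, value):
        if self.top + 1 >= len(self.data): # overflow check
            raise OverflowError("stack overflow")
        self.top += 1                      # increment top
        self.data[self.top] = value        # store value at new top

    def pop(self):
        if self.top < 0:                   # underflow check
            raise IndexError("stack underflow")
        value = self.data[self.top]        # retrieve value from top
        self.top -= 1                      # decrement top
        return value

s = Stack(4)
s.push(10); s.push(20)
print(s.pop(), s.pop())                    # 20 10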

21) Subroutines in Programming

• A subroutine is a self-contained block of code that performs a specific task

• Advantages:
1. Code reusability

2. Easier debugging and maintenance

3. Improved modularity

4. Shorter programs

5. Allows recursion and nesting

22) Stack vs Queue

Feature | Stack | Queue
Method | LIFO | FIFO
Main Operations | push(), pop() | enqueue(), dequeue()
Direction | One end (top) | Two ends (front, rear)
Used in | Recursion, Expression Eval | Scheduling, Buffers
Storage Order | Last element first | First element first

23) Addressing Modes

1. Immediate: Operand is part of the instruction
MOV R1, #5

2. Direct: Address given directly
MOV R1, [1000]

3. Indirect: Address stored in a register
MOV R1, [R2]

4. Register: Operand is in a register
ADD R1, R2

5. Indexed: Address = base + index
MOV R1, 100(R2)

6. Relative: PC + offset
JMP 20(PC)

24) Safe Push and Safe Pop (Assembly Example)

; Safe push
PUSH:
CMP SP, MAX_SIZE     ; stack full?
JGE STACK_OVERFLOW   ; SP >= MAX_SIZE -> overflow handler
MOV [SP], R1         ; store value at the current top
INC SP               ; raise the top pointer

; Safe pop
POP:
CMP SP, 0            ; stack empty?
JLE STACK_UNDERFLOW  ; SP <= 0 -> underflow handler
DEC SP               ; lower the top pointer
MOV R1, [SP]         ; retrieve the value from the top

Includes overflow and underflow checks; SP points to the next free slot.

25) Vectored Interrupts and Nesting

• Vectored Interrupts: Device sends interrupt and address (vector) of ISR

• Faster because no polling

• Interrupt Nesting: Higher priority interrupts can interrupt lower ones

o Handled using interrupt priority levels and enable/disable logic

26) Simultaneous Interrupt Requests

• When multiple devices send interrupts at the same time:


o Priority-based handling: Highest priority gets served

o Polling or Daisy-chaining used for resolving priority

o Remaining requests are queued or masked until serviced

27) DMA Controller in Data Transfer

• DMA (Direct Memory Access) controller allows data transfer without CPU
intervention

• Used in high-speed I/O like disk, network

• Process:

1. CPU initializes DMA

2. DMA transfers data from I/O to memory

3. CPU is free for other tasks

• Enhances speed and performance

28) Centralized Bus Arbitration

• Single bus arbiter controls access to the system bus

• Devices send requests to the arbiter

• Arbiter grants bus based on priority

• Simple and easy to implement

• Limitation: Single point of failure

29) Distributed Bus Arbitration

• All devices participate in arbitration process

• Each device has arbitration logic

• No central controller

• More robust and scalable

• Devices resolve conflicts themselves


30) Centralized vs Distributed Bus Arbitration

Feature | Centralized | Distributed
Control | One central bus arbiter | All devices have arbitration logic
Complexity | Simple | More complex
Reliability | Less (single point of failure) | More reliable
Cost | Low | High
Speed | Slower for large systems | Faster in distributed systems

31) Five Functions of I/O Interface Circuits

1. Data Transfer Coordination


The interface manages the timing and control signals between the processor and I/O
device, ensuring both ends are ready before data transfer begins.

2. Address Decoding
Each I/O device is assigned a unique address. The interface decodes the address sent
by the CPU and ensures that only the addressed device responds.

3. Data Formatting
Since some devices work with serial data and others with parallel, the interface
converts data into the proper format for the specific device (e.g., parallel-to-serial).

4. Status Monitoring
The interface tracks device status using status registers. It indicates conditions like
“Device Ready”, “Busy”, or “Error”, which the CPU checks before initiating a
transaction.

5. Interrupt Handling
It detects when an I/O device needs CPU attention and sends an interrupt signal. The
interface can prioritize and handle multiple interrupts using control logic.
32) Parallel Port Architecture for Processor-to-Printer Connection

1. 8-bit Data Lines


A parallel port typically has 8 data lines that carry 8 bits (1 byte) of data at once from
the CPU to the printer.

2. Control Lines
Control lines like Strobe signal the printer that data is ready. Other signals may
include commands for form feed or printer reset.

3. Status Lines
These lines send signals back from the printer indicating its status — e.g., Busy,
Paper Out, or Error, allowing the CPU to respond accordingly.

4. Handshaking Protocol
The port uses a handshake mechanism where control and status lines work together
to ensure data is transferred only when both devices are ready.

5. Unidirectional or Bidirectional Communication


Older ports were unidirectional (CPU to printer only), but enhanced versions like
ECP/EPP support bidirectional data flow.

33) Standard I/O Interface Circuits for Processor to PCI Bus Communication

1. Bridge Chip Function


A chipset component acts as a bridge between the CPU bus and PCI bus, enabling
communication between two systems with different protocols.

2. Bus Arbitration
PCI uses a central arbiter to manage which device can use the bus, ensuring fair and
efficient access among multiple I/O devices.

3. Address Mapping
PCI devices are assigned unique memory or I/O address ranges, and the interface
circuit translates CPU-generated addresses accordingly.

4. Control Signal Management


It translates and transmits control signals like read/write enable, device select, etc.,
conforming to PCI timing and protocols.

5. Plug and Play Support


PCI supports automatic configuration of hardware resources, making it easier to
install and manage new devices without manual settings.
34) Comparison: Synchronous vs Asynchronous Bus

Aspect | Synchronous Bus | Asynchronous Bus
Clock Signal | All operations are synchronized with a common clock. | Operates without a clock; uses handshaking signals.
Timing | Timing is predefined and fixed. | Timing depends on communication between sender and receiver.
Hardware Design | Simpler to design and implement due to synchronized timing. | Requires additional control logic for handshaking.
Speed | High performance due to predictable timing. | Slower because of wait states and handshaking delays.
Flexibility | Less flexible for device variation. | Can accommodate devices with different speeds.

35) Characteristics of the PCI Bus

1. Plug and Play


PCI devices are auto-detectable and configured dynamically by the system BIOS or OS
without needing manual setup.

2. Bus Mastering
Devices on the PCI bus can take control of the bus and communicate with memory
directly, reducing CPU load.

3. Multiplexed Address/Data Lines


PCI uses the same lines to carry both address and data at different times, reducing
the number of physical pins required.

4. High Transfer Speed


PCI supports 32-bit or 64-bit data buses and clock speeds of 33 MHz or 66 MHz,
allowing transfer rates up to 533 MB/s.

5. Compatibility
PCI is designed to be processor-independent, making it adaptable to a wide range of
systems and architectures.
36) Daisy Chain Interrupt Technique

1. Linear Connection
Devices are connected in series like links of a chain. The first device is closest to the
CPU, and others follow.

2. Priority Handling
When multiple devices raise interrupts, the one nearest the CPU (first in chain) is
serviced first, giving it the highest priority.

3. Shared Interrupt Line
All devices use the same interrupt request (IRQ) line. The CPU identifies the
interrupting device as the acknowledge signal propagates down the chain (a hardware
form of polling).

4. Interrupt Acknowledge Propagation


The CPU sends an interrupt acknowledge signal that propagates down the chain until
it reaches the interrupting device.

5. Limitation
Though simple to implement, this method has a fixed priority and can delay lower-
priority device response.

37) How Cache Memory Improves Performance

1. Reduces Memory Access Time


Cache stores frequently used data close to the CPU, making access faster than
fetching from main memory.

2. Stores Temporal and Spatial Data


Programs often use the same data repeatedly (temporal locality) or data located
nearby (spatial locality), which cache effectively stores.

3. Minimizes CPU Stalls


Without cache, the CPU might stall waiting for slower memory. Cache reduces such
idle times.

4. Improves Instruction Execution


Cache stores frequently executed instructions, improving execution rate and
throughput.

5. Hierarchical Cache Levels


Modern CPUs use L1, L2, and L3 caches for better performance balance between
speed and capacity.
38) Logical vs Physical Addresses

1. Logical Address
The address generated by the CPU during program execution. Also called virtual
address.

2. Physical Address
Actual location in physical memory (RAM) where the data resides, mapped by the
memory management unit (MMU).

3. Address Translation
MMU translates logical addresses to physical ones using page tables, allowing flexible
memory management.

4. User Perspective
Programs use logical addresses; they don't directly interact with physical addresses.

5. Purpose
Helps in memory protection, sharing, and implementing virtual memory systems.

39) Virtual Memory: Working & Benefits

1. Virtual Address Space


Each process gets its own large virtual memory space, abstracted from physical RAM
limits.

2. Page-Based Management
Memory is divided into equal-sized pages; only required pages are loaded into RAM
when needed.

3. Page Table Usage


The operating system maintains page tables to map virtual pages to physical frames.

4. Demand Paging
Pages are brought into RAM only when required, reducing RAM usage.

5. Benefits

o Executes programs larger than RAM.

o Enables multitasking.

o Isolates memory across processes for safety.


40) Cache Mapping Techniques

1. Direct Mapped Cache

o Each block of memory maps to exactly one cache line.

o Fast lookup but high chances of collisions.

2. Set-Associative Mapping

o Memory block maps to a set, and can be placed in any line within that set.

o Balances performance and flexibility.

3. Fully Associative Mapping

o Any memory block can go into any cache block.

o Flexible but requires complex hardware for searching entire cache.
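
A sketch of how a direct-mapped cache splits an address into tag, index, and offset; the 16-byte line and 64-line cache sizes are assumed for illustration:

LINE_SIZE = 16    # bytes per cache line (assumed)
NUM_LINES = 64    # lines in the cache (assumed)

def split_address(addr):
    offset = addr % LINE_SIZE                    # byte within the line
    index = (addr // LINE_SIZE) % NUM_LINES      # which cache line
    tag = addr // (LINE_SIZE * NUM_LINES)        # identifies the memory block
    return tag, index, offset

print(split_address(0x1234))                     # (4, 35, 4)
print(split_address(0x1234 + LINE_SIZE * NUM_LINES))
# (5, 35, 4) -- same index, different tag: the two blocks collide on one line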

41) Memory Hierarchy in Computer Systems

1. Speed vs Size
Fastest and smallest memory (registers) at the top, slowest and largest (hard disk) at
the bottom.

2. Levels
Includes registers, cache, main memory (RAM), and secondary storage (HDD/SSD).

3. Cost Efficiency
Higher levels are expensive per byte, so used in small amounts. Lower levels are
cheaper and larger.

4. Performance Optimization
Frequently used data is moved to faster memory to reduce average access time.

5. Principle of Locality
Memory hierarchy is effective because programs tend to access the same memory
locations frequently.
42) Set-Associative vs Fully Associative Cache

Feature | Set-Associative | Fully Associative
Placement | Block goes into a specific set | Block can go anywhere in cache
Flexibility | Moderate (limited to set) | Maximum (no restriction)
Complexity | Less complex than fully associative | Most complex (requires entire cache search)
Search Time | Faster than fully associative | Slow due to complete cache search
Cost | Less expensive than fully associative | Costlier to implement

43) Replacement Algorithm in Memory Systems

1. Purpose
When cache or memory is full, these algorithms decide which block to remove to
make room for new data.

2. LRU (Least Recently Used)


Replaces the block that hasn’t been used for the longest time.

3. FIFO (First-In, First-Out)


The oldest loaded block is replaced, regardless of how frequently it’s used.

4. Random Replacement
Chooses any block randomly, used when simplicity is preferred over performance.

5. Impact
A good replacement algorithm reduces cache misses and improves system
performance.
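
A minimal LRU sketch in Python, illustrative rather than tied to any particular cache hardware:

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)     # hit: now most recently used
            return "hit"
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)    # evict the least recently used
        self.blocks[block] = True              # miss: load the block
        return "miss"

c = LRUCache(2)
print([c.access(b) for b in "ABACB"])
# ['miss', 'miss', 'hit', 'miss', 'miss'] -- C evicts B, then B evicts A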

44) Virtual Memory Organisation in a Computer System


1. Page Table Mapping
Logical memory is divided into pages, physical memory into frames. Page tables map
these.

2. Translation via MMU


The Memory Management Unit (MMU) uses the page table to convert logical
addresses to physical ones.

3. Demand Paging
Pages are loaded into RAM only when required, improving memory usage.

4. Page Replacement
If RAM is full, a replacement algorithm (like LRU) is used to make space.

5. Benefits

o Increases multitasking capability

o Allows larger programs

o Improves memory isolation

45) Comparison: Direct Mapped vs Set Associative Cache

Feature | Direct Mapped | Set Associative
Mapping | One memory block → one cache line | One memory block → one set, multiple lines
Performance | Faster lookup, more conflicts | Balanced speed and lower conflict rate
Flexibility | Low (fixed location) | Medium (any line in set)
Complexity | Simple hardware | Slightly complex hardware
Use Case | Low-cost, low-conflict systems | Performance-sensitive applications

46) Memory Interleaving and Its Role in Improving Access Speed

1. Definition
Memory interleaving is a technique of dividing memory into multiple modules to
allow simultaneous access to different memory locations.
2. Working
Instead of storing data sequentially in one module, it distributes data across modules
(e.g., address 0 in module 0, address 1 in module 1, and so on).

3. Parallel Access
CPU can fetch data from multiple modules simultaneously, reducing wait time and
improving throughput.

4. Types

o Low-order interleaving: Lower bits decide the module.

o High-order interleaving: Higher bits decide the module.

5. Performance Benefit
Reduces memory latency and improves access speed, especially in pipelined
processors and multi-core systems.
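
A toy Python sketch contrasting the two interleaving types with 4 modules (module size is assumed):

NUM_MODULES = 4
MODULE_SIZE = 64                   # words per module (assumed)

def low_order(addr):               # lower bits select the module
    return addr % NUM_MODULES, addr // NUM_MODULES

def high_order(addr):              # higher bits select the module
    return addr // MODULE_SIZE, addr % MODULE_SIZE

for a in range(4):
    print(a, low_order(a), high_order(a))
# low-order: addresses 0..3 fall in modules 0,1,2,3 -> parallel access
# high-order: addresses 0..3 all fall in module 0 -> sequential access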

47) Difference Between RAM and ROM

Feature | RAM (Random Access Memory) | ROM (Read Only Memory)
Mutability | Read and write possible | Read-only or limited write
Volatility | Volatile (data lost on power off) | Non-volatile (retains data)
Use | Temporary data for running programs | Stores firmware or boot instructions
Speed | Faster access | Slower than RAM
Types | DRAM, SRAM | PROM, EPROM, EEPROM

48) PROM, EPROM, and EEPROM Differences

Feature | PROM | EPROM | EEPROM
Full Form | Programmable ROM | Erasable Programmable ROM | Electrically Erasable PROM
Programmability | Programmed once only | Can be erased using UV light | Can be erased using electricity
Reusability | Not reusable | Reusable with special equipment | Reusable with electric signals
Erasure Method | Not erasable | Requires UV light | Erased electronically
Use | Permanent firmware | Firmware, BIOS | Flash memory, modern BIOS

49) Single Bus Organization of Datapath

1. Structure
All components (ALU, registers, memory) are connected via one common bus.

2. Data Movement
Only one data transfer can occur at a time due to single bus restriction.

3. Control Unit
Requires more control signals and clock cycles to manage sequential data movement.

4. Cost-Effective
Reduces hardware complexity and cost but impacts speed.

5. Limitation
Slower instruction execution due to sequential data transfer.

50) Three Bus Organization of Datapath

1. Structure
Uses three buses: two for reading (Bus A, Bus B) and one for writing (Bus C).

2. Parallel Transfer
Allows reading two operands simultaneously and writing back the result in the same
cycle.

3. Registers Access
ALU gets inputs from Bus A and B, and outputs to Bus C.

4. Fast Execution
Ideal for pipelining and parallelism, improving CPU performance.

5. More Hardware
Requires more wiring and multiplexers, increasing complexity and cost.
51) CISC vs RISC Architecture

Feature | CISC (Complex Instruction Set) | RISC (Reduced Instruction Set)
Instruction Set | Large and complex | Small and simple
Execution Time | Multi-cycle per instruction | One instruction per cycle
Hardware | Complex hardware | Simpler hardware
Pipelining | Difficult to implement | Easy to implement
Examples | Intel x86 | ARM, MIPS

52) Types of Data and Instruction Hazards

1. Data Hazards

o Occur when instructions depend on the results of previous instructions.

o Types:

▪ RAW (Read After Write)

▪ WAR (Write After Read)

▪ WAW (Write After Write)

2. Instruction Hazards

o Also called control hazards.

o Occur due to branching or jumping.

o Delay arises because next instruction address is uncertain.

3. Structural Hazards

o Occur when two instructions need the same hardware resource simultaneously.

4. Resolution Techniques

o Forwarding, pipeline stalls, branch prediction.


5. Impact

o Hazards reduce the performance of pipelined processors.

53) Working Principle of 5-Stage Pipelining

1. IF (Instruction Fetch)
Fetch instruction from memory using Program Counter (PC).

2. ID (Instruction Decode)
Decode the fetched instruction and read operands from registers.

3. EX (Execute)
Perform the arithmetic or logical operation in the ALU.

4. MEM (Memory Access)


Access memory if it’s a load/store instruction.

5. WB (Write Back)
Write the result back to the destination register.

Result: While one instruction is in MEM, another can be in EX, leading to faster execution.

54) Control Sequence for Mul (R3), R1

Assuming: R1 ← R1 × M[R3]

1. T1: MAR ← R3 (Memory Address Register gets address from R3)

2. T2: MDR ← M[MAR] (Memory fetches operand)

3. T3: Temp ← MDR (Store in temporary register)

4. T4: R1 ← R1 × Temp (Multiply and store result in R1)

Explanation: Memory operand at R3 is fetched, then multiplied with R1 content.

55) Control Sequence for Sub (R3), R1

Assuming: R1 ← R1 − M[R3]

1. T1: MAR ← R3

2. T2: MDR ← M[MAR]


3. T3: Temp ← MDR

4. T4: R1 ← R1 − Temp

Explanation: The value from memory pointed by R3 is subtracted from R1 and result is
stored back in R1.

56) Control Sequence for Add R4, R5, R6 using Three-Bus Organization

Assuming: R4 ← R5 + R6

1. T1: Bus A ← R5, Bus B ← R6

2. T2: ALU performs R5 + R6

3. T3: Result transferred to Bus C

4. T4: R4 ← Bus C

Explanation: Inputs from R5 and R6 go to ALU in parallel and result is written to R4 using
separate buses.

57) Types of Resource and Instruction Hazards

1. Resource Hazards

o Occur when hardware resources (like ALU or memory) are needed by multiple
instructions simultaneously.

2. Data Hazards

o Arise due to operand dependencies between instructions (RAW, WAR, WAW).

3. Control Hazards

o Happen during branching or jumping instructions when the next instruction is uncertain.

4. Impact

o Causes delays or incorrect execution if not handled properly.

5. Handling

o Forwarding, stalling, hazard detection units, and branch prediction are used.
58) Single vs Multiple Bus Organization

Feature | Single Bus | Multiple Bus
Data Transfer | One at a time | Parallel transfers possible
Speed | Slower | Faster due to concurrency
Hardware Cost | Less | More expensive due to extra wiring
Control Logic | Simple | More complex
Performance | Less efficient in pipelined systems | Efficient in advanced CPUs

59) Execution of a Complete Instruction (Example: ADD R1, R2, R3)

1. Fetch:

o IR ← M[PC] (Fetch instruction)

o PC ← PC + 1

2. Decode:

o Decode the instruction, identify operation and registers

3. Operand Fetch:

o Read R2 and R3 from register file

4. Execution:

o ALU ← R2 + R3
5. Write Back:

o R1 ← ALU result

Result: R1 stores the sum of R2 and R3.

60) Detailed Explanation: CISC vs RISC

Feature | CISC (Complex Instruction Set) | RISC (Reduced Instruction Set)
Instruction Complexity | Many complex instructions with multiple cycles | Simple instructions with one clock cycle
Code Length | Shorter due to complex instructions | Longer due to multiple simple instructions
Execution | Slower per instruction but fewer instructions | Fast execution per instruction but more instructions
Pipelining | Difficult due to variable-length instructions | Easier due to fixed-length instructions
Examples | Intel x86, AMD | ARM, MIPS, SPARC
