1-Module 1-12-12-2024

The document outlines the evolution of computer organization and architecture, detailing the transition from vacuum tubes to transistors, integrated circuits, and modern microprocessors. It explains key concepts such as the Von Neumann architecture, RISC vs. CISC, and floating-point representation. Additionally, it covers performance assessment metrics like clock speed and MIPS, as well as the differences between computer architecture and organization.

Computer Organization and Architecture (ECE-2002)
Evolution of computers
• First Generation: Vacuum Tubes
• Second Generation: Transistors
• Third Generation: Integrated circuits
• Later generations: LSI and VLSI

17-01-2023
ENIAC was originally built to compute artillery firing tables, i.e. the correct firing angle for a given target.
First Gen: Vacuum tubes

⮚ENIAC (Electronic Numerical Integrator and Computer)


⮚Built for the WW-II effort (construction began in 1943; completed in 1945)
⮚Weighed 30 tons, occupying 1500 square feet of floor
space, and containing more than 18,000 vacuum tubes

⮚Consumed 140 kilowatts of power


⮚Faster than any electromechanical computer, capable of 5000 additions per second

(Figure: an IBM computer vacuum tube assembly from the 1950s)
Electronic Numerical Integrator and Computer (ENIAC)

⮚It was a decimal computer, rather than a binary one

⮚Its memory consisted of 20 “accumulators,” each capable of holding a 10-digit decimal number

⮚A ring of 10 vacuum tubes represented each digit

⮚The major drawback is that it had to be programmed manually by setting switches and plugging and unplugging cables
Von Neumann Machine
Stored-Program Concept:
• Both program instructions and data are stored in the same memory.
• This allows the computer to be reprogrammed by simply loading a new set of
instructions, unlike earlier computers (e.g., ENIAC) that required manual rewiring.
Single Memory for Data and Instructions:
• Memory is used for storing both data and instructions, enabling flexible program
execution.
• Instructions are fetched one-by-one and executed in sequence.
Binary Arithmetic:
• It uses binary (base-2) arithmetic instead of decimal (base-10), making computations
faster and more efficient.
Sequential Execution:
• Instructions are executed one at a time in the order they are stored in memory
(sequential processing).
Registers and Control Unit:
• The control unit fetches, decodes, and executes instructions, while registers
temporarily hold data during processing.
Institute for Advanced Study
(IAS) Computer-USA

⮚A main memory, which stores both data and instructions


⮚An arithmetic and logic unit (ALU) capable of operating on binary data
⮚A control unit, which interprets the instructions in memory and causes them
to be executed
⮚Input and output (I/O) equipment operated by the control unit
Working of IAS
⮚The memory of the IAS consists of 1000 storage locations, called words, of 40
binary digits (bits) each.

⮚Both data and instructions are stored there.

⮚Numbers are represented in binary form, and each instruction is a binary code.

⮚Each number is represented by a sign bit and a 39-bit value.


Working of IAS
⮚A word may also contain two 20-bit instructions, with each instruction
consisting of,
• An 8-bit operation code (opcode) specifying the operation to be performed and
• A 12-bit address designating one of the words in memory (numbered from 0 to 999).

Instruction 1 (20 bits) | Instruction 2 (20 bits)
[ 8-bit Opcode | 12-bit Address ] | [ 8-bit Opcode | 12-bit Address ]

Example:
Opcode: 00101001 (binary) → ADD operation
Address: 000011010110 (binary) → Memory location 214
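The word layout above can be checked with a short sketch. This is an illustration only: the ADD opcode value and the address come from the slide's example, not from the actual IAS opcode table.

```python
# Sketch: packing and unpacking two 20-bit IAS-style instructions
# ([8-bit opcode | 12-bit address]) into one 40-bit word.

def make_instruction(opcode: int, address: int) -> int:
    """Build a 20-bit instruction word: opcode in the high 8 bits."""
    assert 0 <= opcode < 2**8 and 0 <= address < 2**12
    return (opcode << 12) | address

def pack_word(left: int, right: int) -> int:
    """Pack two 20-bit instructions into a 40-bit word (left = high half)."""
    return (left << 20) | right

def unpack_word(word: int):
    """Split a 40-bit word into its (left, right) 20-bit instructions."""
    return (word >> 20) & 0xFFFFF, word & 0xFFFFF

# The slide's example: opcode 00101001 with address 214.
instr = make_instruction(0b00101001, 214)
word = pack_word(instr, instr)
left, right = unpack_word(word)
print(left >> 12, left & 0xFFF)  # 41 214 (opcode and address recovered)
```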
Expanded View of IAS
IAS Computer
• Memory addresses: 0 to 999 (1000 memory locations)
• Program Counter (PC, 12 bits): Holds the address of the next instruction-pair to be fetched
• Memory Address Register (MAR, 12 bits): Holds the address of the memory location to be accessed next (loaded from the program counter)
• Memory Buffer Register (MBR, 40 bits): Holds the word just read from, or about to be written to, memory
• Instruction Register (IR, 8 bits): Holds the opcode of the instruction currently being executed
• Instruction Buffer Register (IBR, 20 bits): Holds the right-hand instruction of a word until it is needed
• Accumulator (AC, 40 bits): Holds temporary operands or results of ALU operations
• Multiplier-Quotient register (MQ): Holds operands and results for multiply and divide operations
Registers in IAS
⮚Memory buffer register (MBR): Contains a word to be stored in memory or sent to the I/O unit, or is
used to receive a word from memory or from the I/O unit.

⮚Memory address register (MAR): Specifies the address in memory of the word to be written from or
read into the MBR.

⮚Instruction register (IR): Contains the 8-bit opcode of the instruction being executed.

⮚Instruction buffer register (IBR): Employed to hold temporarily the right hand instruction from a word
in memory.

⮚Program counter (PC): Contains the address of the next instruction-pair to be fetched from memory.

⮚Accumulator (AC) and multiplier quotient (MQ): Employed to hold temporarily operands and results
of ALU operations.
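The register roles above can be tied together with a minimal fetch sketch. This is a simplified illustration, not a full IAS simulator; the opcodes and addresses placed in memory are invented for the example.

```python
# Sketch: one fetch step of an IAS-style machine using PC, MAR, MBR, IR, IBR.

memory = [0] * 1000  # 1000 words of 40 bits each

# Word 0 holds two made-up instructions:
# left = opcode 1, address 100; right = opcode 5, address 101
memory[0] = (0b00000001 << 32) | (100 << 20) | (0b00000101 << 12) | 101

PC = 0

# Fetch: copy PC into MAR, read the addressed word into MBR.
MAR = PC
MBR = memory[MAR]

# Decode the left-hand instruction; save the right-hand one in IBR.
IR = (MBR >> 32) & 0xFF          # opcode of the left instruction
left_addr = (MBR >> 20) & 0xFFF  # its 12-bit address (would be sent to MAR)
IBR = MBR & 0xFFFFF              # right-hand instruction held for later

PC += 1  # advance to the next instruction-pair
print(IR, left_addr, IBR >> 12, IBR & 0xFFF)  # 1 100 5 101
```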
Second GEN: Transistors
⮚Invented at Bell Labs in 1947 by William Shockley, John Bardeen, and Walter Brattain.
⮚A small electronic device that acts as a switch or amplifier, replacing bulky and
inefficient vacuum tubes.
⮚Advantages of Transistors
Smaller and Compact Design:
• Transistors are much smaller than vacuum tubes, enabling more compact
computers.
Cheaper:
• Mass production of transistors significantly reduced the cost of computers.
Less Heat Dissipation:
• Unlike vacuum tubes, transistors generate far less heat, improving reliability and
efficiency.
Durability and Longevity:
• Transistors are more robust and have a longer lifespan compared to vacuum
tubes.
Third gen: Integrated circuits
• What is an Integrated Circuit (IC)?
An IC is a tiny piece of silicon (a semiconductor) that integrates multiple components like
transistors, resistors, and capacitors into a single chip.
• Advantages of ICs Over Discrete Components:
• Eliminated the need for assembling individual components.
• Allowed multiple transistors to be produced simultaneously on a single wafer of silicon.
• Simplified manufacturing, reduced costs, and improved reliability.
• What is Moore’s Law?
• Proposed by Gordon Moore (co-founder of Intel) in 1965:
• The number of transistors on an IC doubles approximately every year.
• Later revised to a doubling roughly every 18 months to two years.
Later generations
• Large-Scale Integration (LSI): Allowed 1,000+ components to be integrated into a single chip.
• Very-Large-Scale Integration (VLSI): Increased the number of components on a chip to over 10,000.
• Ultra-Large-Scale Integration (ULSI): Modern ULSI chips can integrate millions of components, allowing for incredible computing power in small, portable devices like smartphones and laptops.
• Led to the birth of the microprocessor, i.e. a single chip containing all the components of a CPU (ALU, control unit, registers). E.g. Intel 4004 (1971), Intel 8008 (1972), Intel 8086 (1978), Intel 80386 (1985).
• Led to advancements in memory technology, i.e. from magnetic-core memory to semiconductor memory.
Performance Assessment
 Performance is one of the key parameters to consider, along with cost, size, security, reliability, and, in some cases, power consumption.
 Application performance depends not just on the raw speed of the processor, but on the instruction set, the choice of implementation language, the efficiency of the compiler, and the skill of the programmer.
 Clock Speed
 The System Clock: The speed of a processor is dictated by the clock frequency, measured
in Hertz (Hz) (cycles per second).
 Clock signals are generated by a quartz crystal
 The clock rate determines how many pulses the processor receives per second.
 For example, a 1 GHz processor receives 1 billion clock pulses per second
Millions of instructions per second (MIPS) or MIPS rate
A common throughput measure is the MIPS rate:

MIPS rate = f / (CPI x 10^6)

where,
CPI: average cycles per instruction
f: clock frequency (number of cycles per second)
Performance Assessment
 Suppose a CPU has a clock frequency of 3.2 GHz and executes 2 instructions per clock cycle (i.e. CPI = 1/2). The CPU speed would be?
Clock frequency: 3.2 GHz = 3.2 x 10^9 cycles per second (CPS)
Cycles per instruction (CPI): 1/2

Instructions per second = CPS/CPI = (3.2 x 10^9)/(1/2) = 6.4 x 10^9 instructions per second
Performance Assessment
Simple Addition: A CPU executes a simple addition instruction (e.g., A = B + C) in 1 clock cycle.
In this case, the CPI is 1.
Clock frequency (CPS): 2.5 GHz, i.e. CPS = 2.5 x 10^9
CPU speed (IPS) = CPS/CPI = 2.5 billion instructions per second

Complex Arithmetic: A CPU executes a complex arithmetic instruction (e.g., A = B × C + D) in 2 clock cycles.
However, the CPU's pipeline architecture allows it to execute 2 instructions per clock cycle.
In this case, clock frequency (CPS): 3.0 GHz
Clock cycles per instruction (CPI): 1/2
CPU speed (IPS) = CPS/CPI = (3 x 10^9)/(1/2) = 6 billion instructions per second
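Both calculations above follow the same relation, IPS = clock rate / CPI; a minimal sketch using the figures from the examples:

```python
# Sketch: instructions per second from clock frequency and CPI.

def instructions_per_second(clock_hz: float, cpi: float) -> float:
    return clock_hz / cpi

print(instructions_per_second(2.5e9, 1))    # 2.5 billion IPS (CPI = 1)
print(instructions_per_second(3.0e9, 0.5))  # 6.0 billion IPS (CPI = 1/2)
```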
Performance Assessment
 Example: Consider a program of 2 million (2 x 10^6) instructions running on a 400 MHz processor. The program consists of three major types of instructions: ALU-related, load/store, and branching. These instructions require 1, 2, and 4 CPI, with an instruction mix of 60%, 30%, and 10% respectively.
Estimate the MIPS rate of the processor.
Solution: Average CPI = 0.6x1 + 0.3x2 + 0.1x4 = 1.6
MIPS = (400 x 10^6)/(1.6 x 10^6) = 250 MIPS
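The same calculation can be sketched in code (the helper name is made up; the figures are those of the example above):

```python
# Sketch: weighted-average CPI and MIPS rate for an instruction mix.

def mips_rate(clock_hz: float, mix):
    """mix is a list of (fraction, cpi) pairs; returns (avg_cpi, MIPS)."""
    avg_cpi = sum(frac * cpi for frac, cpi in mix)
    return avg_cpi, clock_hz / (avg_cpi * 1e6)

avg_cpi, rate = mips_rate(400e6, [(0.60, 1), (0.30, 2), (0.10, 4)])
print(round(avg_cpi, 3), round(rate, 3))  # 1.6 250.0
```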
Architecture vs organization
⮚Computer architecture refers to those attributes of a system visible to a
programmer
• Those attributes that have a direct impact on the logical execution of a program.
• Examples of architectural attributes include the instruction set, the number of bits used to
represent various data types (e.g., numbers, characters), I/O mechanisms, and techniques
for addressing memory.
⮚Computer organization refers to the operational units and their
interconnections that realize the architectural specifications.
⮚Organizational attributes include those hardware details transparent to the
programmer, such as control signals; interfaces between the computer and
peripherals; and the memory technology used.
Architecture vs organization

⮚For example, it is an architectural design issue whether a computer will have a


multiply instruction.

⮚It is an organizational issue whether that instruction will be implemented by a


special multiply unit or by a mechanism that makes repeated use of the add
unit of the system.
Computer Architecture | Computer Organization

1. Architecture describes what the computer does. | Organization describes how it does it.
2. Computer Architecture deals with the functional behavior of computer systems. | Computer Organization deals with structural relationships.
3. As a programmer, you can view architecture as a series of instructions, addressing modes, and registers. | The implementation of the architecture is called organization.
4. For designing a computer, its architecture is fixed first. | For designing a computer, the organization is decided after its architecture.
5. Computer Architecture comprises logical functions such as instruction sets, registers, data types, and addressing modes. | Computer Organization consists of physical units like circuit designs, peripherals, and adders.
6. The different architectural categories found in our computer systems are: 1. Von Neumann architecture, 2. Harvard architecture, 3. Instruction Set Architecture, 4. Micro-architecture, 5. System design. | CPU organization is classified into three categories based on the number of address fields: 1. single-accumulator organization, 2. general-register organization, 3. stack organization.
RISC vs. CISC
RISC (Reduced Instruction Set Computer) and CISC (Complex Instruction Set Computer) are
two different approaches to processor design.
RISC (Reduced Instruction Set Computer): Uses a small, highly optimized set of instructions.
• Each instruction is simple and typically executes in a single clock cycle.
• Example: ARM.
CISC (Complex Instruction Set Computer): Uses a large set of instructions, including complex operations.
• Instructions can perform multiple tasks in a single command.
• Example: x86 processors (Intel, AMD).
RISC vs. CISC Differences
Aspect | RISC | CISC
Instruction Set | Small and simple | Large and complex
Instruction Execution | One instruction per clock cycle | Multiple cycles per instruction
Hardware Complexity | Simple control unit | Complex control unit
Performance | Optimized for speed | Optimized for functionality
Code Size | Larger code size (more instructions) | Smaller code size (fewer instructions)
Power Efficiency | High (good for mobile devices) | Lower (more suited for desktops/servers)
Examples | ARM, MIPS, SPARC | x86 (Intel, AMD)
Fixed point numbers

• This representation has a fixed number of bits for the integer part and for the fractional part.
• There are three parts to a fixed-point number representation: the sign field, the integer field, and the fractional field.
Fixed point numbers

Assume a number uses a 32-bit format which reserves:

• 1 bit for the sign,
• 15 bits for the integer part, and
• 16 bits for the fractional part.

Then, -43.625 is represented as:

1 | 000000000101011 | 1010000000000000

where 0 is used to represent + and 1 is used to represent -,
000000000101011 is the 15-bit binary value for decimal 43, and
1010000000000000 is the 16-bit binary value for the fraction 0.625.
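A sketch of this encoding, assuming the 1/15/16 split described above (the helper name is made up):

```python
# Sketch: encode a value in a sign | 15-bit integer | 16-bit fraction format.

def to_fixed_1_15_16(value: float) -> str:
    sign = '1' if value < 0 else '0'
    magnitude = abs(value)
    integer = int(magnitude)                         # whole part
    fraction = round((magnitude - integer) * 2**16)  # fraction scaled by 2^16
    return f"{sign} {integer:015b} {fraction:016b}"

print(to_fixed_1_15_16(-43.625))  # 1 000000000101011 1010000000000000
```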
Fixed point numbers

Disadvantages:
• Limited range of values that can be represented.
• It does not allow enough range or precision for many applications.
• In this format, the smallest positive number is 2^-16 ≈ 0.000015, and
• the largest positive number is about 2^15 = 32768.
IEEE Floating Point Representation
• In the decimal system, a decimal point (radix point) separates the
whole numbers from the fractional part
• Examples:
37.25 ( Integer = 37, fraction = 0.25)

• We have no way to store the radix point itself!
• Storing very large or very small numbers this way would take too much space
• The IEEE standards committee came up with a way to store floating point numbers (numbers that have a radix point)
A floating-point number is said to be normalized if the most significant digit of the binary representation is 1.

Normalized form: 1.mantissa x 2^(true exponent)

Example: 1.0010101 x 2^5

32-bit floating point representation:
Sign bit (1 bit) | Biased exponent (8 bits) | Mantissa (23 bits)
Floating Point Representation: Normalization

Every binary number, except the one corresponding to the number zero, can be normalized by choosing the exponent so that the radix point falls to the right of the leftmost 1 bit.

37.25₁₀ = 100101.01₂ = 1.0010101 x 2^5 => Biased exponent = 127 + 5 = 132
7.625₁₀ = 111.101₂ = 1.11101 x 2^2 => Biased exponent = 127 + 2 = 129
0.3125₁₀ = 0.0101₂ = 1.01 x 2^-2 => Biased exponent = 127 - 2 = 125
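These biased exponents can be double-checked with Python's `math.frexp`, a minimal sketch:

```python
import math

# Sketch: recover the true and biased exponents for the three examples above.
# math.frexp(x) gives x = m * 2**e with 0.5 <= m < 1, so the exponent of the
# normalized 1.xxx form is e - 1.

for x in [37.25, 7.625, 0.3125]:
    _, e = math.frexp(x)
    true_exp = e - 1
    print(x, true_exp, 127 + true_exp)
# 37.25 5 132
# 7.625 2 129
# 0.3125 -2 125
```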
IEEE Floating Point Representation
• Suppose the number uses a 32-bit format: 1 sign bit, 8 bits for the biased exponent, and 23 bits for the fractional part (mantissa).
• Floating point numbers can be stored in 32 bits by dividing the bits into three parts: the sign, the biased exponent, and the mantissa.

Example: 37.25₁₀ = 100101.01₂ = 1.0010101 x 2^5

Biased exponent = 127 + 5 = 132 = 10000100₂
Mantissa = 0010101

SB | Exponent (8 bits) | Mantissa (23 bits)
0 | 1 0 0 0 0 1 0 0 | 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
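The hand encoding above can be cross-checked against Python's IEEE-754 single-precision packing via the `struct` module:

```python
import struct

# Sketch: pack 37.25 as an IEEE-754 single and print its three bit fields.

bits = struct.unpack('>I', struct.pack('>f', 37.25))[0]
s = format(bits, '032b')
print(s[0], s[1:9], s[9:])  # sign, biased exponent, mantissa
# 0 10000100 00101010000000000000000
```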
Example: Find the IEEE FP representation of -24.75

Step 1. Compute the binary equivalent of the whole part and the fractional part.
24₁₀ => 11000₂
.75₁₀ => .11₂
So: -24.75₁₀ = -11000.11₂

Step 2. Normalize the number by moving the binary point to the right of the leftmost one.
-11000.11 = -1.100011 x 2^4
So, Mantissa = 100011, True exponent = 4

Step 3. Convert the exponent to a biased exponent:
127 + 4 = 131 => 131₁₀ = 10000011₂

Step 4. Store the results from steps 1-3:

SB | Exponent (8 bits) | Mantissa (23 bits)
1 | 1 0 0 0 0 0 1 1 | 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
IEEE standard to Decimal Floating Point Conversion

Ex 1: Convert the following 32-bit binary number to its decimal floating point equivalent:
SB | Exponent (8 bits) | Mantissa (23 bits)
1 | 0 1 1 1 1 1 0 1 | 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Step 1: Extract the biased exponent and unbias it
Biased exponent = 01111101₂ = 125₁₀
True (unbiased) exponent: 125 - 127 = -2

Step 2: Write the number in normalized form: -1.01₂ x 2^-2

Step 3: Denormalize the binary number from step 2 (i.e. move the binary point and get rid of the x 2^n part):
-0.0101₂ (negative exponent, so move the point left)

Step 4: Convert the binary number to its decimal equivalent (i.e. add all column values with 1s in them):
-0.0101₂ = -(0.25 + 0.0625) = -0.3125₁₀
Ex 2: Convert the following 32-bit binary number to its decimal floating point equivalent:

Sign | Exponent | Mantissa
0 | 10000011 | 10011000..0

Step 1: Extract the biased exponent and unbias it
Biased exponent = 10000011₂ = 131₁₀
Unbiased exponent: 131 - 127 = 4

Step 2: Write the number in normalized form: 1.10011₂ x 2^4

Step 3: Denormalize the binary number from step 2 (i.e. move the binary point and get rid of the x 2^n part):
11001.1₂ (positive exponent, so move the point right)

Step 4: Convert the binary number to its decimal equivalent (i.e. add all column values with 1s in them):
11001.1₂ = 16 + 8 + 1 + 0.5 = 25.5₁₀
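Both decodings follow the same rule, value = (-1)^sign x 1.mantissa x 2^(biased - 127); a sketch (the helper name is made up):

```python
# Sketch: decode IEEE-754 single-precision fields back to a decimal value.

def decode_ieee754(sign: int, exponent_bits: str, mantissa_bits: str) -> float:
    mantissa = 1 + int(mantissa_bits, 2) / 2**23  # restore the implicit leading 1
    exponent = int(exponent_bits, 2) - 127        # remove the bias
    return (-1) ** sign * mantissa * 2 ** exponent

# Ex 1: sign 1, exponent 01111101, mantissa 0100...0
print(decode_ieee754(1, '01111101', '0100000'.ljust(23, '0')))  # -0.3125
# Ex 2: sign 0, exponent 10000011, mantissa 1001100...0
print(decode_ieee754(0, '10000011', '1001100'.ljust(23, '0')))  # 25.5
```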
