
Unit-3

PROCESSOR AND CONTROL UNIT
Pipelining
● Pipelining is widely used in modern processors.
● Pipelining improves system performance in terms of throughput.
● A pipelined organization requires sophisticated compilation techniques.
Making the Execution of Programs Faster
● Use faster circuit technology to build the processor and the main memory.
● Arrange the hardware so that more than one operation can be performed at the same time.
● However, the elapsed time needed to perform any one operation is not changed; pipelining improves the rate at which operations complete.
Use the Idea of Pipelining in a Computer
● An instruction is processed in four stages: Fetch + Decode + Execute + Write.
Role of Cache Memory
● Each pipeline stage is expected to complete in one clock cycle.
● The clock period must therefore be long enough for the slowest pipeline stage to complete.
● Faster stages simply wait for the slowest one to complete.
● Main memory is very slow compared with instruction execution; if each instruction had to be fetched from main memory, the pipeline would be almost useless.
● Cache memory solves this speed mismatch: a cache hit completes in about one cycle, while a main-memory access takes many cycles.
Pipeline Performance
● The potential increase in performance
resulting from pipelining is proportional to
the number of pipeline stages.
● However, this increase would be achieved
only if all pipeline stages require the same
time to complete, and there is no
interruption throughout program execution.
● Unfortunately, this is not true.
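● As a rough model (an illustrative sketch, not from the original slides): with k equal-length stages of period τ and n instructions, an ideal stall-free pipeline finishes in
T(pipelined) = (k + n − 1) × τ
versus T(sequential) = n × k × τ without pipelining, so the speedup n·k / (k + n − 1) approaches k for large n, which is why the potential gain is proportional to the number of stages.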
Pipeline Performance
[Timing diagram: instructions I1–I5 flowing through the F, D, E, W stages in successive clock cycles]
Figure 8.3. Effect of an execution operation taking more than one clock cycle.
Pipeline Performance
● Any condition that causes a pipeline to stall is called a hazard.
● Data hazard – any condition in which either the source or the destination operands of an instruction are not available at the time expected in the pipeline. Some operation has to be delayed, and the pipeline stalls.
● Instruction (control) hazard – a delay in the availability of an instruction causes the pipeline to stall.
● Structural hazard – the situation when two instructions require the use of a given hardware resource at the same time.
Pipeline Performance
● Instruction hazard example: a cache miss stretches the fetch of I2 (F2) over several cycles, leaving the Decode, Execute, and Write stages idle. These idle periods are called stalls, or bubbles.
[Timing diagram: (a) instruction execution steps in successive clock cycles; (b) function performed by each processor stage in successive clock cycles]
Figure 8.4. Pipeline stall caused by a cache miss in F2.


Pipeline Performance
● Structural hazard example: the instruction Load X(R1), R2 requires an extra pipeline step (M2) for the memory access, delaying the instructions that follow it.
[Timing diagram: I2 (Load) passes through F2 D2 E2 M2 W2 while I1, I3, I4, I5 follow the normal four stages]
Figure 8.5. Effect of a Load instruction on pipeline timing.


Pipeline Performance
● Again, pipelining does not result in individual
instructions being executed faster; rather, it is
the throughput that increases.
● Throughput is measured by the rate at
which instruction execution is completed.
● Pipeline stall causes degradation in
pipeline performance.
● We need to identify all hazards that may cause
the pipeline to stall and to find ways to minimize
their impact.
Data Hazards
● We must ensure that the results obtained when instructions are executed in a pipelined processor are identical to those obtained when the same instructions are executed sequentially.
● Hazard occurs:
A ← 3 + A
B ← 4 × A
(the second operation needs the new value of A)
● No hazard:
A ← 5 × C
B ← 20 + C
● When two operations depend on each other, they must be executed sequentially in the correct order.
● Another example: Mul R2, R3, R4 followed by Add R5, R4, R6, where Add reads R4, the register written by Mul.
Data Hazards
[Timing diagram: I2 (Add) is held in its Decode step until I1 (Mul) completes its Write step W1]
Figure 8.6. Pipeline stalled by data dependency between D2 and W1.
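● The dependency above can be detected mechanically. The following is a minimal Python sketch (not from the slides; the text format and the helper names parse and raw_hazard are assumptions) that flags a read-after-write hazard between two consecutive instructions, using this text's convention that the destination register is the last operand:

def parse(instr):
    # "Mul R2, R3, R4" -> operation, destination, source registers
    # (the destination is written last in this convention)
    op, rest = instr.split(None, 1)
    regs = [r.strip() for r in rest.split(",")]
    return op, regs[-1], regs[:-1]

def raw_hazard(first, second):
    # RAW hazard: the second instruction reads a register that
    # the first instruction has not yet written back.
    _, dest, _ = parse(first)
    _, _, sources = parse(second)
    return dest in sources

print(raw_hazard("Mul R2, R3, R4", "Add R5, R4, R6"))  # True: Add reads R4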
Operand Forwarding
● Instead of reading the data from the register file, the second instruction can take it directly from the output of the ALU after the previous instruction completes.
● A special arrangement needs to be made to “forward” the output of the ALU to the input of the ALU.
[Datapath diagram: (a) the register file supplies SRC1 and SRC2 to the ALU, whose result goes to RSLT and then to the destination register; (b) the SRC1, SRC2 and RSLT registers sit between the E (Execute/ALU) and W (Write/register file) stages, with a forwarding path from RSLT back to the ALU inputs]
Figure 8.7. Operand forwarding in a pipelined processor.
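● A minimal sketch of the forwarding decision itself (illustrative Python, not the processor's actual logic; the structures and names are assumptions): when the Execute stage needs a register that the instruction ahead of it is about to write, take the value from the ALU result (RSLT) instead of the register file:

def read_operand(needed_reg, wb_dest, wb_value, register_file):
    # Forwarding path: if the register we need is the destination of
    # the instruction now in the Write stage, take the ALU result
    # directly instead of waiting for the register-file update.
    if needed_reg == wb_dest:
        return wb_value
    return register_file[needed_reg]

regs = {"R2": 10, "R3": 4, "R4": 0}           # R4 not yet updated by Mul
mul_result = regs["R2"] * regs["R3"]          # Mul R2, R3, R4 in flight
print(read_operand("R4", "R4", mul_result, regs))  # 40: forwarded, no stall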


Handling Data Hazards in Software
● Let the compiler detect and handle the hazard:
I1: Mul R2, R3, R4
    NOP
    NOP
I2: Add R5, R4, R6
● The compiler can reorder the instructions to perform some useful work during the NOP slots; a sketch of the NOP-insertion step follows.
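● Here is a minimal Python sketch of the NOP-insertion step (illustrative only; the instruction format, the two-slot hazard window, and the helper names are assumptions made for the example):

def dest_of(instr):
    # Destination register is the last operand in this convention.
    return instr.replace(",", " ").split()[-1]

def sources_of(instr):
    fields = instr.replace(",", " ").split()
    return fields[1:-1]

def insert_nops(program, window=2):
    out = []
    for instr in program:
        # Pad with NOPs until nothing in the hazard window writes
        # a register that this instruction reads.
        while any(prev != "NOP" and dest_of(prev) in sources_of(instr)
                  for prev in out[-window:]):
            out.append("NOP")
        out.append(instr)
    return out

print(insert_nops(["Mul R2, R3, R4", "Add R5, R4, R6"]))
# ['Mul R2, R3, R4', 'NOP', 'NOP', 'Add R5, R4, R6']

A reordering pass would then try to replace those NOPs with independent instructions from elsewhere in the program.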
Side Effects
● The previous example is explicit and easily detected.
● Sometimes an instruction changes the contents of a register other than the one named as the destination.
● When a location other than one explicitly named in an instruction as a destination operand is affected, the instruction is said to have a side effect.
● Example: condition code flags:
Add R1, R3
AddWithCarry R2, R4
(AddWithCarry implicitly reads the carry flag that Add implicitly sets.)
● Instructions designed for execution on pipelined hardware should have few side effects.
Instruction Hazards
Overview
● Whenever the stream of instructions supplied by the instruction fetch unit is interrupted, the pipeline stalls.
● Causes of instruction hazards:
● Cache miss
● Branch instruction execution


Branch Timing
[Timing diagram: (a) branch address computed in the Execute stage: I2 is a branch, the already-fetched I3 and I4 are discarded (X), and fetching resumes at the branch target Ik, a two-cycle branch penalty]
● Branch penalty: the clock cycles lost because instructions fetched after a branch must be discarded.
● Reducing the penalty: compute the branch address in the Decode stage instead, which reduces the branch penalty to one cycle.
Instruction Queue and Prefetching
[Hardware diagram: the instruction fetch unit (F: fetch instruction) places instructions into an instruction queue; a dispatch/decode unit (D) draws from the queue, followed by E (execute instruction) and W (write results)]
Use of an instruction queue in the hardware organization.
Conditional Branches
● A conditional branch instruction introduces
the added hazard caused by the dependency
of the branch condition on the result of a
preceding instruction.
● The decision to branch cannot be made until
the execution of that instruction has been
completed.
● Branch instructions represent about 20% of
the dynamic instruction count of most
programs.
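● A back-of-the-envelope calculation (with an assumed penalty, for illustration) shows why this matters: with branches at 20% of instructions and a one-cycle penalty on each,
effective CPI ≈ 1 + 0.20 × 1 = 1.2 (about 20% slower),
and a two-cycle penalty gives 1 + 0.20 × 2 = 1.4. Reducing the branch penalty therefore has a large payoff.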
Delayed Branch
● The instructions in the delay slots are
always fetched. Therefore, we would like to
arrange for them to be fully executed
whether or not the branch is taken.
● The objective is to place useful instructions
in these slots.
● The effectiveness of the delayed branch
approach depends on how often it is
possible to reorder instructions.
Delayed Branch
LOOP Shift_left R1
Decrement R2
Branch=0 LOOP
NEXT Add R1,R3

Original program loop

LOOP Decrement R2
Branch=0 LOOP
Shift_left R1
NEXT Add R1,R3

Reordering of instructions for a delayed branch.


Delayed Branch
[Timing diagram: in each pass through the loop, Decrement, Branch, and Shift (delay slot) are fetched and executed in sequence while the branch is taken; on the final pass the branch is not taken and Add follows the delay slot]
Execution timing showing the delay slot being filled during the last two passes through the loop.
Branch Prediction
● The idea is to predict whether or not a particular branch will be taken.
● Simplest form: assume the branch will not take place and continue to fetch instructions in sequential address order.
● Until the branch is evaluated, instruction execution along the predicted path must be done on a speculative basis.
● Speculative execution: instructions are executed before the processor is certain that they are in the correct execution sequence.
● Care is needed to ensure that no processor registers or memory locations are updated until it is confirmed that these instructions should indeed be executed.
Incorrectly Predicted Branch
[Timing diagram: I1 (Compare) sets the condition codes, I2 (Branch>0) makes its prediction in the Decode step (D2/P2) but is resolved in E2, so the speculatively fetched I3 and I4 are discarded (X) and fetching restarts at Ik]
Timing when a branch decision has been incorrectly predicted as not taken.
Branch Prediction
● Better performance can be achieved if we arrange
for some branch instructions to be predicted as
taken and others as not taken.
● Use hardware to observe whether the target
address is lower or higher than that of the branch
instruction.
● Let compiler include a branch prediction bit.
● So far the branch prediction decision is always
the same every time a given instruction is
executed – static branch prediction.
Dynamic Branch Prediction
● 2-state algorithm. States: LT – likely to be taken; LNT – likely not to be taken. The state changes according to the actual outcome: BT – branch taken; BNT – branch not taken.
Dynamic Branch Prediction
● 4-state algorithm. Adds two stronger states to the scheme above: ST – strongly likely to be taken; SNT – strongly likely not to be taken (used together with LT and LNT). A sketch of this predictor follows.
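● A minimal Python sketch of the 4-state scheme as a 2-bit saturating counter (an illustrative model, not from the slides; the state encoding and initial state are assumptions):

class TwoBitPredictor:
    # States: 0 = SNT, 1 = LNT, 2 = LT, 3 = ST; predict taken when >= 2.
    def __init__(self):
        self.state = 1  # start in LNT (assumed initial state)

    def predict(self):
        return self.state >= 2  # True means "predict taken"

    def update(self, taken):
        # Move one step toward the actual outcome, saturating at 0 and 3.
        self.state = min(self.state + 1, 3) if taken else max(self.state - 1, 0)

p = TwoBitPredictor()
for outcome in [True, True, False, True]:  # a typical loop-branch pattern
    print(p.predict(), outcome)
    p.update(outcome)

Once in ST, a single not-taken outcome only moves the predictor to LT, so it still predicts taken on the next encounter; this is the advantage over the 2-state scheme, where one misprediction flips the prediction immediately.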
