
D) Congestion Control

The Internet can be viewed as a queue of packets: transmitting nodes are constantly adding packets to the queue, while receiving nodes remove them.

Now consider a situation where too many packets are present in this queue (in the whole internet or a part of it), so that transmitting nodes are pouring packets in at a higher rate than receiving nodes can remove them. This degrades performance, and such a situation is termed Congestion.

The main reason for congestion is that more packets are sent into the network than it can handle.
 When the number of packets dumped into the network is within its carrying capacity, they are all delivered, except a few that have to be rejected due to transmission errors.

 As traffic increases too far, the routers are no longer able to cope, and they begin to lose packets. This tends to make matters worse.

 At very high traffic, performance collapses completely, and almost no packets are delivered.
Congestion prevention policies in different layers

LAYER       CONGESTION PREVENTION POLICIES
Transport   - retransmission policy
            - acknowledgement policy
            - flow control policy
            - timeout determination
Network     - virtual circuits versus datagram inside the subnet
            - packet queuing and service policy
            - packet discard policy
            - routing algorithm
            - packet lifetime management
Data link   - retransmission policy
            - out-of-order caching policy
            - acknowledgement policy
            - flow control policy
CONTENTS
• INTRODUCTION TO CONGESTION
• GENERAL PRINCIPLE OF CONGESTION CONTROL
• OPEN LOOP CONGESTION CONTROL
• CLOSED LOOP CONGESTION CONTROL
• CONGESTION CONTROL ALGORITHMS
– LEAKY BUCKET ALGORITHM
– TOKEN BUCKET ALGORITHM
– CHOKE PACKETS
– HOP BY HOP CHOKE PACKETS
– LOAD SHEDDING

• TCP TIMER MANAGEMENT

INTRODUCTION
WHAT IS CONGESTION?
Congestion is a situation in a communication network in which too many packets are present in a part of the subnet or are contending for the same link, so that:
– the queue overflows
– packets get dropped
– the network is congested!
[Figure: a router whose input and output buffers are full, illustrating congestion]
Factors that Cause Congestion

 Congestion can occur for several reasons. For example, if a stream of packets suddenly arrives on several input lines and all of them need to go out on the same output line, a long queue builds up for that output. If there is insufficient memory to hold these packets, packets will be lost (dropped).
 Even if routers have an infinite amount of memory, congestion gets worse rather than better, because by the time packets reach the head of the queue and are dispatched to the output line, they have already timed out.

 Such packets are still forwarded to the next router and onward towards the destination, all the way only increasing the load on the network more and more.

 Finally, when the packet arrives at the destination, it is discarded because it has timed out. So, instead of being dropped at an intermediate router (as it would be if memory were restricted), the packet travels all the way to the destination, increasing the network load throughout, and is finally dropped there.

 Slow processors also cause congestion: if the router's CPU is slow at performing its tasks, queues build up even though the line capacity is not exhausted.
Costs of congestion

• Large queuing delays are experienced as the packet arrival rate nears the link capacity.
• The sender performs unneeded retransmissions.
• When a packet is dropped along a path, the transmission capacity used at the upstream routers has been wasted.

Effects of Congestion
• Congestion affects two vital parameters of network performance:
1. Throughput
2. Delay
Effects of Congestion
 Initially, throughput increases linearly with offered load, because utilization of the network increases. However, as the offered load increases beyond a certain limit, say 60% of the capacity of the network, the throughput drops.

 If the offered load increases further, a point is reached when not a single packet is delivered to any destination; this is commonly known as a deadlock situation.
Three throughput curves can be distinguished:

The ideal one corresponds to the situation in which all the packets introduced are delivered to their destination, up to the maximum capacity of the network.

The second one corresponds to the situation in which there is no congestion control.

The third one is the case in which some congestion control technique is used. This prevents the throughput collapse, but provides less throughput than the ideal case due to the overhead of the congestion control technique.
The delay also increases with offered load, as shown in the figure. No matter what technique is used for congestion control, the delay grows without bound as the load approaches the capacity of the system.
It may be noted that initially there is a slightly longer delay when a congestion control policy is applied. However, the network without any congestion control saturates at a lower offered load.
GENERAL PRINCIPLE OF CONGESTION
CONTROL
Open loop congestion control
• In this method, policies are used to prevent congestion before it happens.
• Congestion control is handled either by the source or by the destination.
1. Retransmission Policy
• The sender retransmits a packet if it feels that the packet it has sent is lost or corrupted.
• The retransmission policy and the retransmission timers need to be designed to optimize efficiency and, at the same time, prevent congestion.
2. Window Policy
• To implement the window policy, the selective reject (selective repeat) window method is used for congestion control, in which the sender resends only the specific lost or damaged packets.
3. Acknowledgement Policy
• If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
4. Discarding Policy
• A router may discard less sensitive packets when congestion is likely to happen.
5. Admission Policy
• A router can deny establishing a virtual circuit connection if there is congestion in the network or if there is a possibility of future congestion.

CLOSED LOOP CONGESTION CONTROL
• Closed loop congestion control mechanisms try to remove the
congestion after it happens.
1. Backpressure method

• Backpressure is a node-to-node congestion control technique that starts at a congested node and propagates in the direction opposite to the data flow: the congested node stops receiving data from its immediate upstream node(s), which may in turn become congested and apply the same pressure further upstream.
Hop-by-Hop Choke Packets

 Depicts the functioning of hop-by-hop choke packets: (a) heavy traffic between nodes P and Q, (b) node Q sends a choke packet towards P, (c) the choke packet reaches R and the flow between R and Q is curtailed; the choke packet then reaches P, and P reduces the flow out.
2. Choke Packet

• In the choke packet method, the congested router sends a warning directly to the source station, i.e. the intermediate routers through which the packet has traveled are not warned.
 Choke Packet Technique

 Depicts the functioning of choke packets: (a) heavy traffic between nodes P and Q, (b) node Q sends a choke packet to P, (c) the choke packet reaches P, (d) P reduces the flow and sends a reduced flow out, (e) the reduced flow reaches node Q.
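As a rough illustration of the trigger (the function name, addresses and the threshold value here are assumptions made for this sketch, not something specified in the slides), a congested router could decide to choke the source of an arriving packet like this:

CHOKE_THRESHOLD = 0.8          # fraction of queue capacity treated as "congested"

def on_packet_arrival(source_addr, queue_len, queue_capacity):
    """Return the address a choke packet should be sent to, or None."""
    if queue_len / queue_capacity > CHOKE_THRESHOLD:
        return source_addr     # warn the original sender directly
    return None                # no congestion, forward normally

print(on_packet_arrival("10.0.0.5", 9, 10))   # '10.0.0.5' -> choke the source
print(on_packet_arrival("10.0.0.5", 3, 10))   # None -> no warning needed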
3. Implicit Signaling
• The source guesses that there is congestion somewhere in
the network when it does not receive any acknowledgment.
Therefore the delay in receiving an acknowledgment is
interpreted as congestion in the network and the source
slows down.
- This policy is used in TCP (Transmission Control Protocol).
4. Explicit Signaling
• In this method, the congested nodes explicitly send a
signal to the source or destination to inform about the
congestion.
• Explicit signaling is different from the choke packet method: in the choke packet method, a separate packet is used for this purpose, whereas in explicit signaling the signal is included in the packets that carry data.
Traffic Shaping Algorithms
• LEAKY BUCKET ALGORITHM
• TOKEN BUCKET ALGORITHM
• Traffic shaping deals with the concepts of classification, queue disciplines, enforcing policies, congestion management, quality of service (QoS), and fairness.
LEAKY BUCKET ALGORITHM
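The slide figure for the leaky bucket is not reproduced here, so the following minimal Python sketch (class and parameter names are assumptions for illustration) shows the idea: arriving packets fill a fixed-size bucket and are drained onto the output line at a constant rate, so bursty input becomes a steady output, and packets that arrive when the bucket is full are discarded.

from collections import deque

class LeakyBucket:
    def __init__(self, capacity, out_rate):
        self.capacity = capacity      # maximum packets the bucket can hold
        self.out_rate = out_rate      # packets drained per clock tick
        self.queue = deque()

    def arrive(self, packet):
        """Accept a packet if there is room, otherwise drop it."""
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True
        return False                  # bucket full -> packet discarded

    def tick(self):
        """Each tick, send at most out_rate packets at a constant rate."""
        sent = []
        for _ in range(min(self.out_rate, len(self.queue))):
            sent.append(self.queue.popleft())
        return sent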

TOKEN BUCKET ALGORITHM

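The token bucket figures (Fig. 1 and Fig. 2) are likewise not reproduced; this sketch (names are assumptions) highlights the contrast with the leaky bucket: instead of forcing a constant output rate, the host saves up tokens while idle and may later send a burst, as long as tokens are available.

class TokenBucket:
    def __init__(self, bucket_size, token_rate):
        self.bucket_size = bucket_size  # maximum tokens that can be saved up
        self.token_rate = token_rate    # tokens added per clock tick
        self.tokens = bucket_size

    def tick(self):
        """Add tokens for one tick, never exceeding the bucket size."""
        self.tokens = min(self.bucket_size, self.tokens + self.token_rate)

    def try_send(self, n_packets):
        """Send up to n_packets, limited by the tokens available."""
        sendable = min(n_packets, self.tokens)
        self.tokens -= sendable
        return sendable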
TCP CONGESTION CONTROL
TCP has three congestion-control methods:
1. Slow start
2. Additive increase (AIMD)
3. Fast retransmit
Slow start method

[Figure 6.8 Additive Increase (Computer Networks: TCP Congestion Control): source-destination timeline in which one packet is added to the window each RTT.]
• The slow start method increases the congestion window (CWND) exponentially.
• The source starts with cwnd = 1 packet.
• Every time an ACK arrives, cwnd is incremented by one.
• After the first ACK, the source sets CWND to 2 packets.
• On receiving the 2 ACKs for those packets, TCP sets CWND to 4.
• Thus the number of packets in flight doubles every RTT, as the sketch below illustrates.
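A minimal sketch of this doubling behaviour, counting the window in whole packets as the slides do (real TCP counts bytes/MSS, so this is a simplification):

def slow_start(rounds):
    cwnd = 1                      # source starts with a window of 1 packet
    history = [cwnd]
    for _ in range(rounds):
        # every packet in flight is ACKed this RTT; each ACK adds 1 to cwnd,
        # so the window doubles once per round-trip time
        cwnd += cwnd
        history.append(cwnd)
    return history

print(slow_start(4))              # [1, 2, 4, 8, 16]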


AIMD (Additive Increase, Multiplicative Decrease) Control
• The approach taken is to increase the transmission rate
(window size), probing for usable bandwidth, until loss
occurs. The policy of additive increase may, for instance,
increase the congestion window by a fixed amount every
round trip time. When congestion is detected, the
transmitter decreases the transmission rate by a
multiplicative factor; for example, cut the congestion window
in half after loss.
• AIMD requires a binary signal of congestion. Most frequently,
packet loss serves as the signal; the multiplicative decrease is
triggered when a timeout or acknowledgement message
indicates a packet was lost. It is also possible for in-network
mechanisms to mark congestion (without discarding packets)
as in Explicit Congestion Notification (ECN).
• Mathematical formula
• Let w(t) be the sending rate (e.g. the congestion window) during time slot t, let a > 0 be the additive increase parameter, and let 0 < b < 1 be the multiplicative decrease factor. Then:

w(t+1) = w(t) + a    if congestion is not detected
w(t+1) = w(t) × b    if congestion is detected
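Written directly as code, the rule looks like the sketch below (the parameter values are illustrative assumptions; TCP in effect uses a = 1 segment per RTT and b = 1/2):

def aimd_step(w, congested, a=1.0, b=0.5):
    """Return the next window: additive increase, multiplicative decrease."""
    return w * b if congested else w + a

w = 10.0
for congested in [False, False, False, True, False]:
    w = aimd_step(w, congested)
    print(w)   # 11.0, 12.0, 13.0, 6.5, 7.5 -> the familiar sawtooth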
Fast retransmit
[Figure: fast retransmit. The sender transmits packets 1-6; packet 3 is lost, so the receiver keeps returning ACK 2. After three duplicate ACK 2s the sender retransmits packet 3 without waiting for a timeout, and the receiver then returns ACK 6.]
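A minimal sketch of the trigger (the function and threshold names are assumptions; real TCP tracks duplicate ACKs in per-connection state rather than over a list):

def should_fast_retransmit(acks, dup_threshold=3):
    """Return the packet number to retransmit once enough duplicate ACKs arrive."""
    dup_count = 0
    last_ack = None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count >= dup_threshold:
                return ack + 1        # retransmit the segment after the ACKed one
        else:
            last_ack, dup_count = ack, 0
    return None

# ACK 1, then four ACK 2s (packet 3 was lost) -> retransmit packet 3
print(should_fast_retransmit([1, 2, 2, 2, 2]))   # 3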
TIMERS IN TCP
1. Persistence timer
• Used to solve the deadlock in which the sender and the receiver could otherwise wait for each other forever (the sender waiting for a window update, the receiver waiting for data).
• When the timer goes off, the sender transmits a probe to the receiver.
• The receiver sends its current window size in response to this probe.
• If the window size is still 0, the persistence timer is set again; if the window size is not 0, the sender can send data.
2. Keepalive timer
• Used when the connection has been idle for a long time: when the timer goes off, each side checks whether the other side is still alive. If not, the connection is terminated.
3. TIME-WAIT timer
• Used in the TIMED WAIT state while closing a connection. The timer is set to twice the maximum packet lifetime to ensure that, after the connection is closed, all packets created by it have died off.
Jacoban’s Algo
• Used by TCP
• The RTT ( Round trip time ) for each connection of
TCP is variable.
• When a segment is sent , timer is started. There is
to measure the time required to receive ACK & to
trigger retransmission if ACK takes too long to
come.
• If ACK returns back before timer goes out , then
TCP measures the time taken by ACK & adjust RTT
to a new value using following equation:
• RTT = α RTT + (1 – α )M
Jacoban’s Algo
• Where α is smoothing factor. Has value 7/8.
• Jacobian proposed a new smoothing factor D
which is given by :

• D = αD + (1-α ) І RTT – M І
• And time out is calculated by
• time-out = RTT + 4D
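A small sketch of the timer update using the equations above (the initial values and the sample measurements are assumptions made for illustration):

ALPHA = 7 / 8

def update_rto(rtt, dev, m):
    """Fold a new RTT measurement m into the smoothed RTT and deviation."""
    rtt = ALPHA * rtt + (1 - ALPHA) * m              # smoothed round-trip time
    dev = ALPHA * dev + (1 - ALPHA) * abs(rtt - m)   # smoothed deviation
    return rtt, dev, rtt + 4 * dev                   # retransmission timeout

rtt, dev = 100.0, 0.0
for m in [110, 95, 130, 105]:                        # sample RTT measurements (ms)
    rtt, dev, rto = update_rto(rtt, dev, m)
    print(round(rtt, 1), round(dev, 1), round(rto, 1))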
