tcp-rev2
[Figure: sample RTT measurements over time; x-axis: time (seconds), y-axis: RTT (milliseconds)]
(typically, β = 0.25)
Then set timeout interval:
TimeoutInterval = EstimatedRTT + 4*DevRTT

TCP sender (simplified):
loop (forever) {
  switch(event)
    ...
}
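The RTT estimator above can be sketched in a few lines. This is a minimal sketch, not a full implementation; the α weight and the RFC 6298-style update order (deviation updated from the old smoothed estimate) are assumptions beyond what the slide shows.

```python
ALPHA = 0.125  # weight for EstimatedRTT (assumed, per common practice)
BETA = 0.25    # weight for DevRTT (typically 0.25, as above)

def update_timeout(estimated_rtt, dev_rtt, sample_rtt):
    """Fold one SampleRTT into the smoothed estimates and
    return (EstimatedRTT, DevRTT, TimeoutInterval)."""
    # deviation is updated against the old EstimatedRTT
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
    estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    timeout = estimated_rtt + 4 * dev_rtt
    return estimated_rtt, dev_rtt, timeout
```

For a steady SampleRTT of 100 ms the estimate stays at 100 ms while the deviation (and thus the safety margin in the timeout) decays toward zero.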
[Figure: TCP retransmission scenarios.
Lost ACK scenario: host sends Seq=92, 8 bytes data; ACK=100 is lost; after a timeout the segment is retransmitted and ACK=100 arrives; SendBase = 100.
Premature timeout: host sends Seq=92 (8 bytes) and Seq=100 (20 bytes); ACK=100 and ACK=120 are delayed; Seq=92 times out and is retransmitted; SendBase advances from 100 to 120.
Cumulative ACK scenario: host sends Seq=92 (8 bytes) and Seq=100 (20 bytes); ACK=100 is lost, but the later cumulative ACK=120 acknowledges both segments, so no retransmission is needed; SendBase = 120.]
TCP ACK generation [RFC 1122, RFC 2581]
Fast Retransmit
Time-out period often relatively long: long delay before resending lost packet.
Detect lost segments via duplicate ACKs: sender often sends many segments back-to-back; if a segment is lost, there will likely be many duplicate ACKs.
If sender receives 3 ACKs for the same data, it supposes that the segment after the ACKed data was lost: fast retransmit: resend segment before timer expires.
Fast retransmit algorithm:
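The duplicate-ACK counting can be sketched as follows; this is an illustrative sketch, with `resend` standing in for the sender's retransmission routine and `state` for its per-connection bookkeeping (both names are assumptions, not from the slide).

```python
def on_ack(ack_no, state, resend):
    """Process one ACK. state: dict with 'send_base' and 'dup_acks'."""
    if ack_no > state['send_base']:       # new data ACKed
        state['send_base'] = ack_no
        state['dup_acks'] = 0
    else:                                 # duplicate ACK for already-ACKed data
        state['dup_acks'] += 1
        if state['dup_acks'] == 3:        # 3 dup ACKs => fast retransmit
            resend(state['send_base'])    # resend before the timer expires
```

Three ACKs carrying the same number trigger exactly one retransmission of the presumed-lost segment.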
flow control as a speed-matching service: matching the send rate to the receiving app's drain rate.
app process may be slow at reading from buffer.
TCP Flow control: how it works
(Suppose TCP receiver discards out-of-order segments.)
spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
Rcvr advertises spare room by including value of RcvWindow in segments.
Sender limits unACKed data to RcvWindow; this guarantees the receive buffer doesn't overflow.
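The receiver's bookkeeping is just the arithmetic above; a one-function sketch, with parameter names mirroring the slide's variables (RcvBuffer, LastByteRcvd, LastByteRead):

```python
def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    """Spare room the receiver advertises:
    RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]."""
    return rcv_buffer - (last_byte_rcvd - last_byte_read)
```

When the app has read nothing and the buffer is full, the advertised window drops to 0 and the sender must stop.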
Step 2: server receives FIN, replies with ACK. Closes connection, sends FIN.
[Figure: closing handshake; after ACKing the server's FIN the client enters timed wait, then both sides are closed]
TCP Connection Management (cont.)
Note: with small modification, can handle simultaneous FINs.
[Figure: TCP server lifecycle and TCP client lifecycle state diagrams]
Principles of Congestion Control
Congestion:
informally: “too many sources sending too much
data too fast for network to handle”
different from flow control!
manifestations:
lost packets (buffer overflow at routers)
long delays (queueing in router buffers)
a top-10 problem!
Causes/costs of congestion: scenario 1
one router, infinite buffers
no retransmission
large delays when congested
maximum achievable throughput
Causes/costs of congestion: scenario 2
one router, finite buffers
sender retransmission of lost packet
unneeded retransmissions: link carries multiple copies of pkt
[Figure: Host A offers load λin (original data); throughput λout]
Causes/costs of congestion
[Figure: Host A and Host B offered load (λin) vs. throughput (λout); congestion-window trace in 8-Kbyte units over time]
Slow start (until first loss event):
double CongWin every RTT
done by incrementing CongWin for every ACK received
Summary: initial rate is slow but ramps up exponentially fast
[Figure: sender transmits one, then two, then four segments in successive RTTs]
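The per-ACK increment really does double the window each round trip, which the following sketch makes concrete (units are MSS for simplicity; the function name is illustrative):

```python
def slow_start_rtts(congwin, rtts):
    """Return CongWin (in MSS) after `rtts` round trips of pure slow start."""
    for _ in range(rtts):
        acks_this_rtt = congwin        # one ACK comes back per segment sent
        for _ in range(acks_this_rtt):
            congwin += 1               # +1 MSS per ACK => doubles every RTT
    return congwin
```

Starting at 1 MSS, three round trips yield 8 MSS: exponential growth from a slow start.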
Refinement
After 3 dup ACKs: CongWin is cut in half; window then grows linearly.
But after timeout event: CongWin instead set to 1 MSS; window then grows exponentially to a threshold, then grows linearly.
Philosophy:
• 3 dup ACKs indicates network capable of delivering some segments
• timeout before 3 dup ACKs is "more alarming"
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout.
Implementation: Variable Threshold
At loss event, Threshold is set to 1/2 of CongWin just before loss event
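The two loss reactions above can be sketched in one function (a sketch of the Reno-style rule described on these slides; window and threshold are in MSS units and integer halving is an assumption for simplicity):

```python
def on_loss(congwin, threshold, timeout):
    """Return (new_congwin, new_threshold) after a loss event."""
    threshold = congwin // 2      # Threshold = 1/2 of CongWin at the loss
    if timeout:
        congwin = 1               # timeout: back to 1 MSS, redo slow start
    else:
        congwin = threshold       # 3 dup ACKs: cut window in half
    return congwin, threshold
```

Either way the threshold records half the pre-loss window, so subsequent exponential growth switches to linear at that point.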
Notification is implicit
just drop the packet (TCP will timeout)
could make explicit by marking the packet
Early random drop
rather than wait for queue to become full, drop each arriving packet with some drop probability whenever the queue length exceeds some drop level
RED Details
Compute average queue length
AvgLen = (1 - Weight) * AvgLen + Weight * SampleLen
0 < Weight < 1 (usually 0.002)
SampleLen is the queue length measured each time a packet arrives
[Figure: RED queue with MinThreshold and MaxThreshold on AvgLen; drop probability rises linearly from 0 at MinThresh to MaxP at MaxThresh, then jumps to 1.0 above MaxThresh]
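The averaging rule and the drop curve sketched above fit in a few lines; a sketch of the basic RED idea as stated here, with the thresholds and MaxP as illustrative parameters (real RED also counts packets since the last drop, which is omitted):

```python
WEIGHT = 0.002  # usual EWMA weight, per the slide

def red_avg(avg_len, sample_len, weight=WEIGHT):
    """AvgLen = (1 - Weight) * AvgLen + Weight * SampleLen."""
    return (1 - weight) * avg_len + weight * sample_len

def drop_probability(avg_len, min_th, max_th, max_p):
    """Basic RED drop curve over the average queue length."""
    if avg_len < min_th:
        return 0.0                 # never drop below MinThreshold
    if avg_len >= max_th:
        return 1.0                 # always drop above MaxThreshold
    # ramp linearly from 0 up to MaxP between the two thresholds
    return max_p * (avg_len - min_th) / (max_th - min_th)
```

The small weight makes AvgLen track the long-term queue occupancy rather than transient bursts.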
TCP Vegas
Idea: source watches for some sign that a router's queue is building up and congestion will happen soon; e.g., RTT grows
TCP Vegas
Let BaseRTT be the minimum of all measured RTTs
(commonly the RTT of the first packet)
If not overflowing the connection, then
ExpectRate = CongestionWindow/BaseRTT
Source calculates sending rate (ActualRate) once per RTT
Source compares ActualRate with ExpectRate
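The comparison drives Vegas's window adjustment; a sketch under the usual Vegas formulation, where the rate difference is converted to segments of queued data and compared against two thresholds (ALPHA and BETA here are illustrative values, not from the slide):

```python
ALPHA, BETA = 1, 3   # allowed extra segments queued in the network (assumed)

def vegas_adjust(congwin, base_rtt, actual_rate):
    """One per-RTT Vegas decision; rates in segments per time unit."""
    expect_rate = congwin / base_rtt          # ExpectRate = CongWin / BaseRTT
    diff = (expect_rate - actual_rate) * base_rtt  # extra segments in queue
    if diff < ALPHA:
        return congwin + 1                    # queue looks empty: speed up
    if diff > BETA:
        return congwin - 1                    # queue building up: slow down
    return congwin                            # in the sweet spot: hold steady
```

Unlike loss-based TCP, this reacts to a growing RTT before any packet is dropped.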
TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
In practice this does not happen in TCP, as connections with lower RTT are able to grab the available link bandwidth more quickly.
[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router link of capacity R]
Fairness (more)
Fairness and UDP:
Multimedia apps often do not use TCP: they do not want their rate throttled by congestion control.
Instead they use UDP: pump audio/video at constant rate, tolerate packet loss.
Research area: TCP friendly.
Fairness and parallel TCP connections:
nothing prevents an app from opening parallel connections between 2 hosts.
Web browsers do this.
Example: link of rate R supporting 9 connections; new app asks for 1 TCP, gets rate R/10; new app asks for 11 TCPs, gets R/2 !
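The arithmetic in the example is easy to check: with per-connection fair sharing, an app's aggregate share is proportional to how many connections it opens (the function name is illustrative).

```python
def app_share(r, existing_conns, app_conns):
    """Aggregate bandwidth the new app gets on a link of rate R,
    assuming equal per-connection sharing."""
    total = existing_conns + app_conns
    return app_conns * r / total

# 9 existing connections:
#   1 new connection   -> R/10
#   11 new connections -> 11R/20, roughly R/2
```

So by opening 11 parallel connections an app claims about half the link, which is why per-connection fairness is a weak notion of fairness.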
[Figure: TCP header fields, including the 32-bit SequenceNum and 16-bit AdvertisedWindow]
TCP Extensions