Announcement:
- Project 2 finally ready on Tlab
- Homework 2 due next Monday night
- Midterm next Thursday in class
[Figure: TCP retransmission scenarios - a lost ACK scenario (Seq=92 times out and is retransmitted; SendBase = 100, then 120) and a premature timeout scenario (Seq=92 timer expires before its ACK arrives; SendBase = 100, then 120)]
Outline
- Flow control
- Connection management
- Congestion control
TCP Flow Control
- receive side of TCP connection has a receive buffer
- app process may be slow at reading from the buffer
- flow control: sender won't overflow the receiver's buffer by transmitting too much, too fast
- speed-matching service: matching the send rate to the receiving app's drain rate
TCP Flow Control: how it works
- receiver advertises spare room by including the value of RcvWindow in segments
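The advertised-window mechanism can be sketched in a few lines. This is a toy model with assumed names (`Receiver`, `Sender`, `rcv_window`, `can_send` are illustrative, not a real TCP stack): the receiver reports its spare buffer room, and the sender keeps unACKed data within that limit.

```python
# Toy sketch of TCP-style flow control (assumed class/method names).
# The receiver advertises spare buffer room (RcvWindow); the sender keeps
# the amount of in-flight (unACKed) data below that advertised value.

class Receiver:
    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.buffered = 0          # bytes received but not yet read by the app

    def rcv_window(self):
        # Spare room advertised back to the sender in every segment.
        return self.buffer_size - self.buffered

class Sender:
    def __init__(self):
        self.last_byte_sent = 0
        self.last_byte_acked = 0

    def can_send(self, nbytes, rcv_window):
        # Flow-control rule: unACKed data plus this send must fit the window.
        in_flight = self.last_byte_sent - self.last_byte_acked
        return in_flight + nbytes <= rcv_window

rx = Receiver(buffer_size=4096)
tx = Sender()
rx.buffered = 3000                 # slow app: 3000 bytes still unread
print(rx.rcv_window())             # 1096 bytes of spare room
print(tx.can_send(2000, rx.rcv_window()))  # False: would overflow the buffer
```

If the app drains the buffer, `rcv_window()` grows and the same send becomes permissible, which is exactly the speed-matching behavior described above.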
- Step 2: server receives FIN, replies with ACK; closes connection, sends FIN.
- Step 3: client receives FIN, replies with ACK; enters timed wait, during which it will respond with ACK to any received FINs.
- Step 4: server receives ACK; connection closed.
[Figure: TCP client lifecycle and TCP server lifecycle state diagrams, with the client passing through closing, timed wait, and closed states and the server through closing and closed]
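The close handshake above can be walked through as an event trace. This is an illustration only (the function name and tuple layout are assumptions), with state names following the usual FIN_WAIT / TIME_WAIT / CLOSED convention; step 1, the client sending its FIN, precedes the steps listed above.

```python
# Toy walk-through of the TCP close handshake (illustration only).
# Each event: (which side acts, what happens, resulting state).

def close_handshake():
    events = []
    # Step 1: client sends FIN, enters FIN_WAIT_1.
    events.append(("client", "send FIN", "FIN_WAIT_1"))
    # Step 2: server receives FIN, replies with ACK, then sends its own FIN.
    events.append(("server", "recv FIN, send ACK", "CLOSE_WAIT"))
    events.append(("server", "send FIN", "LAST_ACK"))
    # Step 3: client receives FIN, replies with ACK, enters timed wait.
    events.append(("client", "recv FIN, send ACK", "TIME_WAIT"))
    # Step 4: server receives ACK; its side of the connection is closed.
    events.append(("server", "recv ACK", "CLOSED"))
    # After the timed wait expires, the client closes too.
    events.append(("client", "timed wait expires", "CLOSED"))
    return events

for side, action, state in close_handshake():
    print(f"{side:6} {action:25} -> {state}")
```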
Outline
- Flow control
- Connection management
- Congestion control
Principles of Congestion Control
- congestion, informally: too many sources sending too much data too fast for the network to handle
- different from flow control!
- manifestations: lost packets (buffer overflow at routers), long delays (queueing in router buffers)
- reasons: limited bandwidth and queues; unneeded retransmissions of data and ACKs
Approaches towards congestion control
Two broad approaches towards congestion control:
- end-to-end congestion control: no explicit feedback from the network; congestion inferred by end systems from observed loss and delay (the approach taken by TCP)
- network-assisted congestion control: routers provide feedback to end systems, e.g. a bit indicating congestion
[Figure: CongWin doubling from 8 Kbytes to 16 Kbytes over one RTT]
Slow start:
- double CongWin every RTT
- done by incrementing CongWin for every ACK received
Q: when should the exponential increase switch to linear?
A: when CongWin gets to 1/2 of its value before timeout
Implementation:
- variable Threshold
- at a loss event, Threshold is set to 1/2 of CongWin just before the loss
[Figure: congestion window (segments) vs. transmission round (1-15), growing exponentially up to the threshold, then linearly]
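The window evolution described above can be sketched as a short simulation. This is a minimal model (function name, round granularity, and the loss schedule are assumptions made for illustration): CongWin doubles per RTT while below Threshold, grows by one segment per RTT above it, and on a timeout Threshold drops to half of CongWin while CongWin restarts at 1.

```python
# Minimal sketch of CongWin evolution, in segments, one value per
# transmission round (RTT). Loss rounds are chosen arbitrarily here.

def congwin_trace(rounds, loss_rounds, threshold=8):
    congwin = 1
    trace = []
    for r in range(rounds):
        trace.append(congwin)
        if r in loss_rounds:               # timeout detected this round
            threshold = max(congwin // 2, 1)
            congwin = 1                    # back to slow start
        elif congwin < threshold:
            congwin *= 2                   # slow start: double per RTT
        else:
            congwin += 1                   # congestion avoidance: +1 per RTT
    return trace

print(congwin_trace(10, loss_rounds={6}))
# exponential growth (1, 2, 4, 8), then linear, then restart after the loss
```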
[Figure: two TCP connections sharing a single bottleneck router of capacity R]
Why is TCP fair?
Two competing sessions:
- additive increase gives slope of 1 as throughput increases
- multiplicative decrease reduces throughput proportionally
[Figure: connection 1 throughput vs. connection 2 throughput, with the trajectory converging toward the equal bandwidth share line at R/2 each]
Fairness (more)
Fairness and UDP:
- multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
- instead use UDP: pump audio/video at a constant rate, tolerate packet loss
- research area: TCP-friendly rate control

Fairness and parallel TCP connections:
- nothing prevents an app from opening parallel connections between 2 hosts; Web browsers do this
- example: link of rate R supporting 9 connections; a new app asking for 1 TCP gets rate R/10, but a new app asking for 11 TCPs gets R/2!
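The parallel-connection example works out numerically as follows, assuming the link's rate R is split equally per connection (the function name here is illustrative).

```python
# The parallel-TCP fairness example, worked out under the assumption
# that a link of rate R is shared equally among active connections.

R = 1.0                          # normalize link rate to 1

def app_share(existing, new_conns):
    # Fraction of R a new app gets by opening `new_conns` connections
    # alongside `existing` already-active connections.
    total = existing + new_conns
    return new_conns * (R / total)

print(app_share(9, 1))    # 1 connection among 10  -> R/10
print(app_share(9, 11))   # 11 connections among 20 -> 11R/20, just over R/2
```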
Delay modeling
Q: How long does it take to receive an object from a Web server after sending a request?
Ignoring congestion, delay is influenced by:
- TCP connection establishment
- data transmission delay
- slow start

Notation, assumptions:
- assume one link between client and server of rate R
- S: MSS (bits)
- O: object size (bits)
- no retransmissions (no loss, no corruption)

Window size:
- first assume: fixed congestion window, W segments
- then dynamic window, modeling slow start
Fixed congestion window (1)
First case:
- WS/R > RTT + S/R: the ACK for the first segment in the window returns before a window's worth of data has been sent
Second case:
- WS/R < RTT + S/R: the sender must wait for an ACK after sending a window's worth of data
TCP delay modeling: slow start
- P = min{Q, K-1}: the number of times the server stalls, where Q is the number of stalls for an infinitely large object and K is the number of windows that cover the object
[Figure: slow-start timing diagram; O/R to transmit the object, first window takes S/R]
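The fixed-window cases above can be turned into a small calculator. The formulas below are the standard ones for this no-loss, single-link model (reconstructed from the notation on the slides, so treat them as an assumption): in the first case delay = 2RTT + O/R, and in the second the server additionally idles K-1 times for S/R + RTT - WS/R each, where K is the number of windows that cover the object. The example numbers are invented for illustration.

```python
# Fixed-congestion-window delay model (standard no-loss, single-link
# formulas; the function name and example numbers are illustrative).
#   case 1 (WS/R >= RTT + S/R): delay = 2*RTT + O/R
#   case 2 (WS/R <  RTT + S/R): delay = 2*RTT + O/R
#                                + (K-1) * (S/R + RTT - W*S/R)
import math

def fixed_window_delay(O, S, R, RTT, W):
    K = math.ceil(O / (W * S))           # number of windows covering the object
    base = 2 * RTT + O / R               # setup + request RTTs, plus transmission
    stall = S / R + RTT - W * S / R      # per-window idle time, if positive
    if stall <= 0:                       # case 1: pipeline never stalls
        return base
    return base + (K - 1) * stall        # case 2: server idles K-1 times

# Assumed example: 100 KB object, 1 KB segments, 1 Mbps link, 100 ms RTT.
O, S, R, RTT = 100_000 * 8, 1_000 * 8, 1_000_000, 0.1
print(fixed_window_delay(O, S, R, RTT, W=4))    # small window: stalls add delay
print(fixed_window_delay(O, S, R, RTT, W=20))   # large window: just 2RTT + O/R
```

Note how a large enough window (WS/R ≥ RTT + S/R) removes all stalls, leaving only the 2RTT handshake/request cost plus the O/R transmission time.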
[Figure: response time (seconds) vs. link rate (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps), comparing non-persistent, parallel non-persistent, and persistent connections]
For larger RTT, response time is dominated by TCP establishment and slow-start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.