Cns Mod 4
THE TRANSPORT LAYER
• Nesting of segments, packets, and frames
• Connection establishment and connection release
• The C/S Application
– Server: SOCKET/BIND/LISTEN/ACCEPT
– Client: SOCKET/CONNECT
– C/S: SEND/RECEIVE
– Client/Server: DISCONNECT
• Delayed duplicates
Possible solutions for delayed duplicates
• To give each connection a connection identifier (i.e.,
a sequence number incremented for each connection
established) chosen by the initiating party and put in
each TPDU, including the one requesting the
connection.
• To use throwaway transport addresses. Each
time a transport address is needed, a new one is
generated. When a connection is released, the
address is discarded and never used again.
• To use sequence numbers and age
– To limit packet lifetime.
– To use the sequence number
• Packet lifetime can be restricted to a known
maximum using one of the following techniques
– Restricted subnet design.
– Putting a hop counter in each packet.
– Timestamping each packet (router synchronization problem).
– Three-way handshake
– …
– N-way handshake
– No protocol exists that works.
• Substitute "disconnect" for "attack". If neither side is prepared to
disconnect until it is convinced that the other side is prepared to
disconnect too, the disconnection will never happen.
(a) Normal case of three-way handshake
(b) Final ACK lost
(c) Response lost
(d) Response lost and subsequent DRs Lost
• Automatic disconnect rule
– If no TPDUs have arrived for a certain number
of seconds, the connection is automatically
disconnected.
– Thus, if one side ever disconnects, the other side
will detect the lack of activity and also disconnect.
• Conclusion: releasing a connection without data loss
is not nearly as simple as it first appears.
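The automatic disconnect rule above can be sketched as a simple idle timer. The class, method names, and timeout value below are illustrative, not from the slides:

```python
import time

# Sketch of the automatic disconnect rule: if no TPDUs arrive within
# the idle timeout, the connection is torn down. Once one side stops
# sending, the other side's timer eventually fires too.
class Connection:
    def __init__(self, idle_timeout=60.0):
        self.idle_timeout = idle_timeout
        self.last_tpdu = time.monotonic()
        self.open = True

    def on_tpdu(self):
        # Any arriving TPDU (data or dummy keepalive) resets the timer.
        self.last_tpdu = time.monotonic()

    def poll(self, now=None):
        # Called periodically; disconnects after idle_timeout of silence.
        now = time.monotonic() if now is None else now
        if self.open and now - self.last_tpdu > self.idle_timeout:
            self.open = False
        return self.open
```

In practice each side would also send periodic dummy TPDUs to keep an intentionally idle connection alive.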
Error control in data link layer and transport layer
• Error control is ensuring that the data is delivered with the
desired level of reliability, usually that all of the data is
delivered without any errors.
• Similarity: In both layers, error control has to be performed.
• Difference: The link layer checksum protects a frame while
it crosses a single link. The transport layer checksum
protects a segment while it crosses an entire network path. It
is an end-to-end check, which is not the same as having a
check on every link.
Flow control in data link layer and transport layer
• Similarity: In both layers a sliding window or other scheme
is needed on each connection to keep a fast transmitter
from overrunning a slow receiver.
• Difference: A router usually has relatively few lines, whereas
a host may have numerous connections.
Buffering
• The sender: The sender must buffer all TPDUs sent if the
network service is unreliable. The sender must buffer all
TPDUs sent if the receiver cannot guarantee that every
incoming TPDU will be accepted.
• The receiver: If the receiver has agreed to do the buffering,
there still remains the question of the buffer size.
Flow control and buffering: receiver-side buffering
Buffer sizes
(a) Chained fixed-size buffers.
• Wireless Issues
• Efficiency and Power
(a) Goodput and (b) delay as a function of offered load
• Power
– will initially rise with offered load, as delay remains small
and roughly constant,
– but will reach a maximum and
– fall as delay grows rapidly.
• The load with the highest power represents an
efficient load for the transport entity to place on the
network.
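The rise-then-fall shape of power described above can be illustrated with a toy queueing model. The M/M/1-style delay formula below (delay grows as 1/(1 − load)) is an assumption for illustration, not from the slides:

```python
# Illustrative sketch: with delay(rho) = 1/(1 - rho), the power metric
# power = load / delay = rho * (1 - rho) rises with offered load while
# delay stays small, peaks, then falls as delay grows rapidly.
def delay(rho):
    assert 0 <= rho < 1          # load as a fraction of capacity
    return 1.0 / (1.0 - rho)

def power(rho):
    return rho / delay(rho)      # equals rho * (1 - rho)

# Scan offered loads to find the most efficient load to place on
# the network under this toy model.
loads = [i / 100 for i in range(100)]
best = max(loads, key=power)     # peaks at rho = 0.5 for this model
```

The exact peak location depends on the delay model; the point is only that power has a single maximum between an underloaded and an overloaded network.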
Max-Min Fairness
• How to divide bandwidth between different transport senders:
– The first consideration is to ask what this problem has to
do with congestion control.
– A second consideration is what a fair portion means for
flows in a network. The form of fairness that is often desired
for network usage is max-min fairness.
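A common way to compute a max-min fair allocation on a single bottleneck link is progressive filling: repeatedly give every unsatisfied flow an equal share of the remaining capacity. The function below is an illustrative sketch; the name and loop structure are mine, not from the slides:

```python
# Max-min fair allocation of one link's capacity among flows with
# given demands: flows wanting less than the equal share get exactly
# their demand; the leftover is split equally among the rest.
def max_min_fair(capacity, demands):
    alloc = [0.0] * len(demands)
    remaining = capacity
    unsatisfied = sorted(range(len(demands)), key=lambda i: demands[i])
    while unsatisfied:
        share = remaining / len(unsatisfied)
        i = unsatisfied[0]           # smallest remaining demand
        if demands[i] <= share:
            alloc[i] = demands[i]
            remaining -= demands[i]
            unsatisfied.pop(0)
        else:
            # Every remaining flow wants at least `share`: split equally.
            for j in unsatisfied:
                alloc[j] = share
            unsatisfied = []
    return alloc
```

For example, demands of 2, 8, and 10 on a capacity-10 link yield 2, 4, and 4: no flow can get more without taking from a flow that already has less.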
Desirable Bandwidth Allocation
• Convergence: A final criterion is that the
congestion control algorithm converge quickly to a
fair and efficient allocation of bandwidth.
(a) A fast network feeding a low-capacity
receiver.
(b) A slow network feeding a high-capacity
receiver.
• Packet loss as a congestion signal
– Used in TCP
• Analyses by Padhye et al. (1998) show that the throughput
goes up as the inverse square-root of the packet loss rate.
• What this means in practice is that the loss rate for fast TCP
connections is very small; 1% is a moderate loss rate, and by
the time the loss rate reaches 10% the connection has
effectively stopped working.
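The inverse square-root relationship can be illustrated with the well-known simplified TCP throughput model, throughput ≈ (MSS/RTT) · C/√p with C ≈ √(3/2). The constant and function below come from that simplified model and are used here only to show the scaling, not taken from the slides:

```python
import math

# Simplified "square-root" TCP throughput model: throughput scales as
# the inverse square root of the packet loss rate p.
def tcp_throughput(mss_bytes, rtt_s, loss_rate, c=math.sqrt(1.5)):
    return (mss_bytes / rtt_s) * c / math.sqrt(loss_rate)

# Cutting the loss rate by a factor of 100 raises throughput 10x.
t1 = tcp_throughput(1460, 0.1, 0.01)      # 1% loss
t2 = tcp_throughput(1460, 0.1, 0.0001)    # 0.01% loss
```

This is why a 10% loss rate, routine on a wireless link, is catastrophic for a loss-signaled sender: the model predicts roughly a third of the throughput seen at 1% loss.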
• However, for wireless networks such as 802.11 LANs,
frame loss rates of at least 10% are common.
• This difference means that, absent protective measures,
congestion control schemes that use packet loss as a signal will
unnecessarily throttle connections that run over wireless links
to very low rates.
• There are two aspects to note. First, the sender does not necessarily know that
the path includes a wireless link, since all it sees is the wired link to which it
is attached.
THE INTERNET TRANSPORT
PROTOCOLS: UDP, RPC, RTP
• Two main transport protocols in the
Internet
– Connectionless protocol (UDP)
– Connection-oriented protocol (TCP)
• UDP (User Datagram protocol)
• RPC (Remote Procedure Call)
• RTP (Real-time Transport Protocol)
The Internet Transport Protocols: UDP
• UDP (RFC768) provides a way for applications to
– send encapsulated IP datagrams and
– send them without having to establish a connection.
• UDP transmits segments consisting of an 8-byte
header followed by the payload.
■ UDP
◆ Multiplexing and demultiplexing using ports.
◆ No flow control, error control or retransmission.
◆ Applications: RPC, RTP, DNS (Domain Name
System)
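The fixed 8-byte UDP header consists of four 16-bit big-endian fields: source port, destination port, length (header plus payload), and checksum. It can be packed and parsed in a few lines; the helper names below are illustrative:

```python
import struct

# Build an 8-byte UDP header. "!HHHH" = four unsigned 16-bit fields
# in network (big-endian) byte order.
def build_udp_header(src_port, dst_port, payload_len, checksum=0):
    return struct.pack("!HHHH", src_port, dst_port, 8 + payload_len, checksum)

# Parse the header back into its four fields.
def parse_udp_header(header):
    src, dst, length, checksum = struct.unpack("!HHHH", header[:8])
    return {"src": src, "dst": dst, "length": length, "checksum": checksum}
```

The port fields are what UDP's multiplexing and demultiplexing key on; everything else (flow control, error recovery) is left to the application.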
• RPC (Remote Procedure Call) allows programs to
call procedures located on remote hosts.
– When a process on machine 1 calls a procedure on
machine 2, the calling process on 1 is suspended
and execution of the called procedure takes place on
2
– Information can be transported from the caller to
the callee in the parameters and can come back in
the procedure result.
– No message passing is visible to the programmer.
• The idea behind RPC is to make a remote
procedure call look as much as possible like a local
one.
Client Proc ↔ Client Stub ↔ Server Stub ↔ Server Proc
The steps in making an RPC
• Step 1 is the client calling the client stub.
• Step 2 is the client stub packing the parameters into a
message (marshaling) and making a system call to
send the message.
• Step 3 is the kernel sending the message from the
client machine to the server machine.
• Step 4 is the kernel passing the incoming packet to
the server stub and unpacking the packet to extract the
parameters (unmarshaling).
• Step 5 is the server stub calling the server procedure
with the unmarshaled parameters.
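Steps 1 through 5 can be mimicked in-process with a toy marshaling sketch. There is no real kernel or network here; `json` stands in for the marshaling format, and all function names are illustrative, not from any RPC library:

```python
import json

def server_procedure(a, b):          # the remote procedure itself (step 5)
    return a + b

def server_stub(message_bytes):      # step 4: unmarshal, then call
    call = json.loads(message_bytes.decode())
    result = server_procedure(*call["params"])
    return json.dumps({"result": result}).encode()

def client_stub(*params):            # steps 1-2: marshal the parameters
    message = json.dumps({"proc": "server_procedure", "params": params})
    reply = server_stub(message.encode())   # step 3: the "network" hop
    return json.loads(reply.decode())["result"]

# To the caller this looks like an ordinary local call:
total = client_stub(2, 3)
```

No message passing is visible at the call site, which is exactly the illusion RPC aims for; a real system replaces the direct `server_stub` call with a kernel send to another machine.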
• 6.17
A client sends a 128-byte request to a server located 100 km away
over a 1-gigabit optical fiber. What is the efficiency of the line
during the remote procedure call?
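A worked sketch of this calculation, assuming a propagation speed of 2×10^8 m/s in fiber (about 2/3 of c) and ignoring the reply's transmission time:

```python
# Problem 6.17 sketch: the line is only busy while the 128-byte request
# is being clocked out; the rest of the RPC is propagation delay.
bit_rate = 1e9                  # 1 Gbps
request_bits = 128 * 8          # 128-byte request
distance_m = 100e3              # 100 km
speed = 2e8                     # m/s in fiber (assumption)

transmit = request_bits / bit_rate        # about 1.024 microseconds
rtt = 2 * distance_m / speed              # about 1 millisecond round trip
efficiency = transmit / (transmit + rtt)  # roughly 0.1%
```

The propagation delay dwarfs the transmission time, so the line sits idle almost the entire call.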
• 6.23
Datagram fragmentation and reassembly are handled by IP and are
invisible to TCP. Does this mean that TCP does not have to worry
about data arriving in the wrong order?
• 6.28
The maximum payload of a TCP segment is 65,495 bytes. Why
was such a strange number chosen?
• 6.32
If the TCP round-trip time, RTT, is currently 30 msec and the
following acknowledgements come in after 26, 32, and 24 msec,
respectively, what is the new RTT estimate using the Jacobson
algorithm? Use α=0.9.
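A worked sketch of this estimate using Jacobson's smoothing formula, SRTT = α·SRTT + (1 − α)·M, applied once per measurement:

```python
# Problem 6.32 sketch: exponentially weighted moving average of RTT.
alpha = 0.9
srtt = 30.0                     # current estimate, msec
for m in (26, 32, 24):          # the three new measurements, msec
    srtt = alpha * srtt + (1 - alpha) * m
# successive estimates: 29.6, 29.84, then about 29.256 msec
```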
• 6.36
In a network whose max segment is 128 bytes, max segment lifetime
is 30 sec, and has 8-bit sequence numbers, what is the maximum
data rate per connection?
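A worked sketch, assuming the constraint is that all 2^8 sequence numbers may be consumed at most once per maximum segment lifetime (otherwise the numbers would wrap onto still-live old segments):

```python
# Problem 6.36 sketch: at most 2**8 segments of 128 bytes each may be
# sent per 30-second maximum segment lifetime.
seq_space = 2 ** 8              # 256 distinct sequence numbers
segment_bytes = 128
lifetime_s = 30

max_rate = seq_space * segment_bytes / lifetime_s   # bytes per second
```

That works out to 32,768 bytes per 30 s, or roughly 1092 bytes/sec per connection.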
• 6.39
To get around the problem of sequence numbers wrapping around
while old packets still exist, one could use 64-bit sequence numbers.
• 6.4
In both parts of Fig. 6-6 there is a comment that the value of
SERVERPORT must be the same in both client and server. Why is this
so important?