
Unit - 4

Transport layer
Process to Process Communication
User Datagram Protocol (UDP)
Transmission Control Protocol (TCP)
SCTP
Congestion Control
Quality of Service
QoS improving techniques:
Leaky Bucket and Token Bucket algorithm
Transport Layer
• The transport Layer is the second layer in the TCP/IP model and
the fourth layer in the OSI model.
• It is an end-to-end layer used to deliver messages to a host. It is
termed an end-to-end layer because it provides a point-to-point
connection rather than hop-to-hop, between the source host and
destination host to deliver the services reliably.
• The unit of data encapsulation in the Transport Layer is a
segment.
Transport Layer working:
The transport layer takes services from the Application layer and
provides services to the Network layer
• At the sender’s side: The transport layer receives data
(message) from the Application layer and then performs
Segmentation, divides the actual message into segments, adds
the source and destination’s port numbers into the header of
the segment, and transfers the message to the Network layer.
• At the receiver’s side: The transport layer receives data from
the Network layer, reassembles the segmented data, reads its
header, identifies the port number, and forwards the message to
the appropriate port in the Application layer.
PROCESS-TO-PROCESS COMMUNICATION/DELIVERY

• A process is an application-layer entity.
• The transport layer is responsible for delivery of the message to the appropriate process.

Topics discussed in this section:
Client/Server Paradigm
Multiplexing and Demultiplexing
Connectionless Versus Connection-Oriented Service
Reliable Versus Unreliable
Three Protocols
oThe Data link layer is responsible for delivery of frames between two nodes over a link (node-to-node delivery), using a MAC address to choose one node among several.
oThe Network layer is responsible for delivery of datagrams between two hosts (host-to-host delivery), using an IP address to choose one host among millions.
oReal communication takes place between two processes (application programs). We need process-to-process delivery.
oWe need a mechanism to deliver data from one of the processes running on the source host to the corresponding process running on the destination host.
oThe Transport layer is responsible for process-to-process delivery. We need a port number to choose among multiple processes running on the destination host.
Figure 23.1 Types of data deliveries

Client/Server Paradigm
A process on the local host, called a client, needs services from a process
usually on the remote host, called a server.
• Both processes (client and server) have the same name.
• For example, to get the day and time from a remote machine,
we need a Daytime client process running on the local host and a
Daytime server process running on a remote machine.
PORT NUMBER:-
oIn the Internet model, the port numbers are 16-bit integers between 0 and 65,535.
oThe client program defines itself with a port number, chosen randomly by the transport layer software running on the client host.
oThe server process must also define itself with a port number. This port number, however, cannot be chosen randomly.
oThe Internet uses universal port numbers for servers, called well-known port numbers.
oEvery client process knows the well-known port number of the corresponding server process.
Figure 23.2 Example Port numbers

For example, while the Daytime client process, can use an ephemeral (temporary) port
number 52,000 to identify itself, the Daytime server process must use the well-known
(permanent) port number 13.

IANA ranges

IP ADDRESSES VERSUS PORT NUMBERS:-
IP addresses and port numbers play different roles in selecting the final destination of data.
The destination IP address defines the host among the different hosts.
After the host has been selected, the port number defines one of the processes on this particular host.
SOCKET ADDRESSES
o Process-to-process delivery needs two identifiers, IP address and the port
number, at each end to make a connection.
oThe combination of an IP address and a port number is called a socket
address.
oA transport layer protocol needs a pair of socket addresses: the client socket
address and the server socket address.
oThese four pieces of information are part of the IP header and the
transport layer protocol header.
• The IP header contains the IP addresses; the UDP or TCP header
contains the port numbers.
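A client/server socket address pair as described above can be observed directly with Python's socket API. This is an illustrative sketch, not part of any real protocol stack; 127.0.0.1 is just the loopback address, and binding to port 0 lets the OS pick a free port:

```python
import socket

# A UDP "server": binding to port 0 asks the OS for a free port;
# getsockname() then reveals the full socket address (IP address, port).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server_addr = server.getsockname()

# The client's own port is chosen automatically (an ephemeral port).
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hi", server_addr)

data, client_addr = server.recvfrom(1024)
# client_addr is the client's socket address: (IP address, ephemeral port)
print(data, client_addr)
server.close()
client.close()
```

Each endpoint of the exchange is thus identified by exactly the (IP address, port number) pair the text calls a socket address.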
Multiplexing and demultiplexing
Sender: multiplexing of UDP datagrams. UDP datagrams are received from multiple application programs; a single sequence of UDP datagrams is passed to the IP layer.
Receiver: demultiplexing of UDP datagrams. A single sequence of UDP datagrams is received from the IP layer; each UDP datagram received is passed to the appropriate application.
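Demultiplexing at the receiver amounts to a lookup on the destination port. A minimal sketch (the handler table and port numbers are illustrative; only ports 13 and 53 happen to be real well-known ports):

```python
# Toy demultiplexer: the transport layer uses the destination port
# in each datagram to pick the receiving application process.
handlers = {
    13: lambda payload: f"Daytime got {payload!r}",
    53: lambda payload: f"DNS got {payload!r}",
}

def demultiplex(dst_port: int, payload: bytes) -> str:
    handler = handlers.get(dst_port)
    if handler is None:
        # A real stack would report "port unreachable" (via ICMP) here.
        return "no process on this port"
    return handler(payload)

print(demultiplex(13, b"time?"))   # routed to the Daytime handler
print(demultiplex(99, b"x"))       # no process on this port
```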
CONNECTIONLESS VERSUS CONNECTION-ORIENTED SERVICE:

oA transport layer protocol can either be connectionless or connection-oriented.
oConnectionless Service
⮚ In a connectionless service, the packets are sent from one party to another with no need for connection establishment or connection release.
⮚ The packets are not numbered; they may be delayed or lost or may arrive out of sequence.
⮚ There is no acknowledgment.
oConnection-Oriented Service
⮚ In a connection-oriented service, a connection is first established between the sender and the receiver.
⮚ Data are transferred.
⮚ At the end, the connection is released. (This is a virtual connection, not a physical connection.)
RELIABLE VERSUS UNRELIABLE
oThe transport layer service can be reliable or
unreliable.
oIf the application layer program needs reliability, we use
a reliable transport layer protocol by implementing flow
and error control at the transport layer. This means a
slower and more complex service.
oOn the other hand, if the application program does
not need reliability then an unreliable protocol can
be used.
Note
oUDP is connectionless and unreliable;
oTCP and SCTP are connection oriented and reliable.
Figure 23.7 Error control

Transport layer protocols
Figure 23.8 Position of UDP, TCP, and SCTP in TCP/IP
suite

23-2 USER DATAGRAM PROTOCOL (UDP)

The User Datagram Protocol (UDP) is called a connectionless, unreliable transport protocol. It does not add anything to the services of IP except to provide process-to-process communication instead of host-to-host communication.

Topics discussed in this section:
Well-Known Ports for UDP
User Datagram
Checksum
UDP Operation
Use of UDP
Table 23.1 Well-known ports used with UDP
User datagram format:-

User Datagram:-
The UDP header consists of four fields, each 2 bytes in length:
Source Port (UDP packets from a client use this as a service access point (SAP) to indicate the session on the local client that originated the packet. UDP packets from a server carry the server SAP in this field.)
Destination Port (UDP packets from a client use this as a service access point (SAP) to indicate the service required from the remote server. UDP packets from a server carry the client SAP in this field.)
UDP Length (the number of bytes comprising the combined UDP header information and payload data)
UDP Checksum (a checksum to verify that the end-to-end data has not been corrupted by routers or other intermediate nodes)
Note

UDP length = IP length – IP header's length
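The note above is a one-line computation over IP header fields; the 1500-byte datagram and IHL of 5 below are just illustrative values:

```python
def udp_length(ip_total_length: int, ihl_words: int) -> int:
    """UDP length = IP datagram length - IP header length.
    IHL (Internet Header Length) is given in 4-byte words in the IP header."""
    return ip_total_length - ihl_words * 4

# e.g. a 1500-byte IP datagram with the minimum 20-byte (IHL = 5) header
print(udp_length(1500, 5))  # 1480 bytes of UDP header + payload
```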
UDP Checksum

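The checksum used by UDP (and TCP and IP) is the 16-bit one's-complement Internet checksum. A hedged sketch of the core computation; note that a real UDP checksum also covers a pseudo-header containing the IP addresses, which this simplified version omits:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words, complemented.
    (A real UDP checksum also covers a pseudo-header with the source
    and destination IP addresses; that part is omitted in this sketch.)"""
    if len(data) % 2:                  # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

data = b"\x12\x34\x56\x78"
ck = internet_checksum(data)
# The receiver re-sums data plus checksum: a result of 0 means no corruption.
check = internet_checksum(data + ck.to_bytes(2, "big"))
print(hex(ck), check)
```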
Queues in UDP

UDP Characteristics:-
End-to-end: an application sends/receives data to/from another application.
Connectionless: an application does not need to pre-establish communication before sending data, and does not need to terminate communication when finished.
Message-oriented: an application sends/receives individual messages (UDP datagrams), not packets.
Best-effort: same best-effort delivery semantics as IP.
Arbitrary interaction: an application can communicate with one or many other applications.
Operating system independent: identifying an application does not depend on the O/S.
Applications of UDP:
•Used for simple request-response communication when the size of data is small, so there is less concern about flow and error control.
•It is a suitable protocol for multicasting, since it is connectionless.
•UDP is used by some routing update protocols like RIP (Routing Information Protocol).
•Normally used for real-time applications which cannot tolerate uneven delays between sections of a received message.
•The following implementations use UDP as a transport layer protocol:
• NTP (Network Time Protocol)
• DNS (Domain Name System)
• BOOTP, DHCP
• NNP (Network News Protocol)
• Quote of the Day protocol
• TFTP, RTSP, RIP, OSPF
•The application layer can do some of the tasks through UDP:
• Trace Route
• Record Route
• Timestamp
•UDP simply adds its small header to the data and passes it on, with no connection setup or state to maintain. So it works fast.
TCP

TCP is a connection-oriented protocol; it creates a virtual connection between two TCPs to send data. In addition, TCP uses flow and error control mechanisms at the transport level.

Topics discussed in this section:
TCP Services
TCP Features
Segment
A TCP Connection
Flow Control
Error Control
Congestion Control
Table 23.2 Well-known ports used by TCP
TCP Services:-
•Stream Delivery Service.
•Sending and Receiving Buffers.
•Bytes and Segments.
•Full Duplex Service.
•Connection Oriented Service.
•Reliable Service.

• Stream Delivery Service:-
TCP is a stream-oriented protocol. It enables the sending process to deliver data as a stream of bytes and the receiving process to acquire data as a stream of bytes.
Sending and Receiving Buffers:-
The sending and receiving processes may not produce and consume data at the same speed. Hence, TCP needs buffers for storage.
There are two buffers used in each direction:
•Sending Buffer
•Receiving Buffer
A buffer can be implemented by using a circular array of 1-byte locations.
Bytes and Segments:-
Buffering handles the difference between the speed of data production and data consumption, but buffering alone is not enough. Before handing data to the Internet Protocol (IP) layer, TCP must group bytes into packets, because IP transmits packets, not a stream of bytes.
Full-Duplex Service:-
TCP offers a full-duplex service where the data can flow in
both directions simultaneously. Each TCP will then have a
sending buffer and receiving buffer. The TCP segments are
sent in both directions.
Connection-Oriented Service:-
We are already aware that TCP is a connection-oriented protocol. When a process (process-1) wants to communicate (send and receive) with another process (process-2), the sequence of operations is as follows:
•TCP of process-1 informs TCP of process-2 and gets its approval.
•TCP of process-1 and TCP of process-2 exchange data in both directions.
•After completing the data exchange, when the buffers on both sides are empty, the two TCPs destroy their buffers.
Reliable Service:-
TCP is a reliable transport protocol. It uses an acknowledgment mechanism to check the safe and sound arrival of data.
TCP Features:-
•TCP is a reliable protocol: the receiver sends either a positive or negative acknowledgement about each data packet, so the sender always knows whether a packet has reached the destination or needs to be resent.
•TCP ensures that the data reaches the intended destination in the same order it was sent.
•TCP is connection oriented. TCP requires that a connection between two remote points be established before sending actual data.
•TCP provides error-checking and recovery mechanisms.
•TCP provides end-to-end communication.
•TCP provides flow control and quality of service.
•TCP operates in client/server point-to-point mode.
•TCP provides full-duplex service, i.e. it can perform the roles of both receiver and sender at the same time.
TCP Segment:-

The segment consists of a 20- to 60-byte header.
Source port address:
This 16-bit field defines the port number of the application program in the host that is sending the segment.
Destination port address:
This 16-bit field defines the port number of the application program in the host that is receiving the segment.
Sequence number: This 32-bit field defines the number assigned to the first byte of data contained in this segment.
Acknowledgment number: This 32-bit field defines the number of the next byte a party expects to receive.
Header length: A 4-bit field that indicates the number of 4-byte words in the TCP header. The length of the header can be between 20 and 60 bytes; therefore, the value of this field can be between 5 (5 x 4 = 20) and 15 (15 x 4 = 60).
Reserved: This is a 6-bit field reserved for future use.

Control: This field defines 6 different control bits or flags. One or more of these bits can be set at a time. These bits enable flow control, connection establishment and termination, connection abortion, and the mode of data transfer in TCP.
Table 23.3 Description of flags in the control field

23.36
Window size: This field defines the size of the window, in bytes, that the other party must maintain. The length of this field is 16 bits, which means that the maximum size of the window is 65,535 bytes. This value is normally referred to as the receiving window (rwnd) and is determined by the receiver. The sender must obey the dictation of the receiver in this case.

Checksum: This 16-bit field contains the checksum. The inclusion of the checksum for TCP is mandatory.

Options: There can be up to 40 bytes of optional information in the TCP header.
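The fixed fields described above can be packed and decoded with Python's struct module. A sketch for illustration; the port, sequence, and window values below are made up, and only the 20-byte fixed part (no options) is handled:

```python
import struct

def parse_tcp_header(raw: bytes) -> dict:
    """Decode the fixed 20-byte part of a TCP header."""
    (src, dst, seq, ack, off_flags,
     window, checksum, urgent) = struct.unpack("!HHIIHHHH", raw[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "header_len": (off_flags >> 12) * 4,  # 4-bit field, in 4-byte words
        "flags": off_flags & 0x3F,            # the 6 control bits (URG..FIN)
        "window": window,
    }

# Build a sample header: data offset 5 (= 20 bytes), SYN flag (0x02) set.
hdr = struct.pack("!HHIIHHHH", 52000, 80, 100, 0,
                  (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(hdr))
```

Note how the header-length value 5 decodes to 20 bytes, exactly as the text's 5 x 4 = 20 arithmetic states.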
A TCP Connection:-
A Connection-oriented transport protocol establishes a virtual path between the
source and destination.
In TCP, connection-oriented transmission requires three phases:
1. connection establishment
2. data transfer
3. connection termination

1. Connection Establishment
TCP transmits data in full-duplex mode.
🢝 When two TCPs in two machines are connected, they are able to send segments to each other simultaneously.
🢝 Each party must initialize communication and get approval from the other party before any data are transferred.
🢝 The connection establishment in TCP is called three-way handshaking.
Three way handshaking:-
1.The client sends the first segment, a SYN segment, in which only the SYN flag is set.
🢝 This segment is for synchronization of sequence numbers. It consumes one sequence
number.
🢝 When the data transfer starts, the sequence number is incremented by 1.
🢝 The SYN segment carries no real data

2. The server sends the second segment, a SYN+ACK segment, with two flag bits set: SYN and ACK.
🢝 This segment has a dual purpose. It is a SYN segment for communication in the other
direction and serves as the acknowledgment for the SYN segment.
🢝 It consumes one sequence number.

3.The client sends the third segment. This is just an ACK segment.
🢝 It acknowledges the receipt of the second segment with the ACK flag

and acknowledgment number field.

🢝 The sequence number in this segment is the same as the one in the SYN segment.
🢝 The ACK segment does not consume any sequence numbers.
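The sequence-number accounting in the three steps above can be traced in a short sketch. The initial sequence numbers 100 and 300 are arbitrary examples, and the trace follows the slide's rules (a SYN consumes one sequence number; a pure ACK consumes none):

```python
def three_way_handshake(client_isn: int, server_isn: int):
    """Trace the three segments of TCP connection establishment as
    (segment type, sequence number, acknowledgment number) tuples."""
    trace = []
    # 1. Client -> Server: SYN, seq = client ISN (consumes one seq number).
    trace.append(("SYN", client_isn, None))
    # 2. Server -> Client: SYN+ACK, seq = server ISN, ack = client ISN + 1.
    trace.append(("SYN+ACK", server_isn, client_isn + 1))
    # 3. Client -> Server: pure ACK; seq is the same as in the SYN segment
    #    (per the text, the ACK consumes no sequence number); ack = server ISN + 1.
    trace.append(("ACK", client_isn, server_isn + 1))
    return trace

for segment in three_way_handshake(100, 300):
    print(segment)
```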
Figure 23.18 Connection establishment using three-way handshaking
2. Data Transfer:-
• After connection is established,
bidirectional data transfer can take
place
• The client and server can both send
data and acknowledgments
• The acknowledgment is piggybacked
with the data.

3. Connection Termination:-
1. The client TCP, after receiving a close command from the client process, sends the first segment, a FIN segment in which the FIN flag is set.
2. The server TCP, after receiving the FIN segment, sends the second segment, a FIN+ACK segment, to confirm the receipt of the FIN segment from the client and at the same time to announce the closing of the connection in the other direction.
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN segment from the TCP server.
Half-close
Flow Control
TCP uses a sliding window to handle flow control. The TCP sliding window is of variable size: the window can be opened, closed, or shrunk.
• 🢝 Opening a window means moving the right wall to the right.
• 🢝 Closing the window means moving the left wall to the right.
• 🢝 Shrinking the window means moving the right wall to the left.
The size of the window at one end is determined by the lesser of two values: receiver window
(rwnd) or congestion window (cwnd).
Some points about TCP sliding windows:
❏ The size of the window is the lesser of rwnd and cwnd.
❏ The source does not have to send a full window’s worth of data.
❏ The window can be opened or closed by the receiver, but should not be
shrunk.
❏ The destination can send an acknowledgment at any time as long as it does not
result in a shrinking window.
❏ The receiver can temporarily shut down the window; the sender, however, can
always send a segment of 1 byte after the window is shut down.
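The open/close/shrink vocabulary above can be captured in a tiny model. The byte positions and window sizes below are arbitrary illustration values:

```python
class SlidingWindow:
    """Toy model of the receiver-advertised TCP window.
    left = first unacknowledged byte; right = left + advertised window."""
    def __init__(self, left: int, size: int):
        self.left, self.right = left, left + size

    def ack(self, acked: int, new_rwnd: int):
        new_right = acked + new_rwnd
        # The window may open (right wall moves right) or close
        # (left wall moves right), but it should never shrink:
        if new_right < self.right:
            raise ValueError("window must not shrink")
        self.left, self.right = acked, new_right

    def size(self) -> int:
        return self.right - self.left

w = SlidingWindow(left=200, size=9)        # bytes 200..208 may be sent
w.ack(acked=203, new_rwnd=10)              # closes by 3 on the left, opens to 213
print(w.left, w.right, w.size())           # 203 213 10
```

An acknowledgment that would pull the right wall leftward raises an error, mirroring the rule that the window should not be shrunk.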

Figure 23.23 Example


Congestion Control in TCP:-
• TCP assumes that the cause of a lost segment is due to congestion in the
network.
• If the cause of the lost segment is congestion, retransmission of the segment
does not remove the cause—it aggravates it.
• The sender has two pieces of information: the receiver-advertised window
size and the congestion window size
• TCP Congestion window
🢝 Actual window size = minimum (rwnd, cwnd)
(where rwnd = receiver window size, cwnd = congestion window size)
• TCP Congestion Policy
Based on three phases: 1. slow start, 2. congestion avoidance, and 3.
congestion detection
1. Slow Start: Exponential Increase
🢝 The sender keeps track of a variable named ssthresh (slow-start threshold).
🢝 When the size of the window in bytes reaches this threshold, slow start stops and the next phase (the additive phase) begins.
🢝 In most implementations the value of ssthresh is 65,535 bytes.
🢝 In the slow-start algorithm, the size of the congestion window increases exponentially until it reaches the threshold.
2. Congestion Avoidance: Additive Increase
• The size of the congestion window increases additively until congestion is detected.
3. Congestion Detection: Multiplicative Decrease
An implementation reacts to congestion detection in one of two ways:
• If detection is by time-out, a new slow start phase starts
• If detection is by three ACKs, a new congestion avoidance phase starts
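The three phases above can be simulated in a few lines. This sketch models only the timeout case of congestion detection, counts the window in segments (MSS units) rather than bytes, and uses an illustrative ssthresh and loss schedule:

```python
def simulate_cwnd(rounds, ssthresh=16, losses=()):
    """Sketch of the TCP congestion policy: slow start doubles cwnd each
    RTT, congestion avoidance adds 1 per RTT, and a timeout halves
    ssthresh (multiplicative decrease) and restarts slow start at 1."""
    cwnd, history = 1, []
    for rtt in range(rounds):
        history.append(cwnd)
        if rtt in losses:                  # congestion detected (timeout)
            ssthresh = max(cwnd // 2, 1)   # multiplicative decrease
            cwnd = 1                       # a new slow start phase begins
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start: exponential increase
        else:
            cwnd += 1                      # congestion avoidance: additive
    return history

# Exponential growth to ssthresh, additive growth, then a loss at RTT 6.
print(simulate_cwnd(10, ssthresh=8, losses={6}))
```

The printed history shows the classic sawtooth: 1, 2, 4, 8 (slow start), 9, 10, 11 (additive), then back to 1 after the loss.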
23-4 SCTP

Stream Control Transmission Protocol (SCTP) is a new reliable, message-oriented transport layer protocol. SCTP is designed for recent Internet applications that need a more sophisticated service than TCP can provide.

Topics discussed in this section:
SCTP Services and Features
Packet Format
An SCTP Association
Flow Control and Error Control
Note

SCTP is a message-oriented, reliable protocol that combines the best features of UDP and TCP.
Table 23.4 Some SCTP applications

SCTP services:-
The services provided by the SCTP are as follows −
•Process-to-Process Communication − SCTP uses all ports
in the TCP space.
•Multiple Streams − SCTP allows multi stream service in
every connection, which is called association in SCTP
terminology. If any one of the streams is blocked, then the other
streams can deliver their data.
•Multihoming − The sending and receiving hosts can define multiple IP addresses at each end for an association. In this approach, when one path fails, another interface is ready to deliver without interruption. This fault tolerance is useful when we are sending and receiving real-time payloads such as Internet telephony.

•Full-duplex Communication − Data can flow in both directions at the same time.
Features of SCTP:-
• An SCTP association allows multiple IP addresses for each end.
• In SCTP, a data chunk is numbered using a TSN (Transmission Sequence Number).
• SI (Stream Identifier): to distinguish between different streams, SCTP uses an SI.
• SSN (Stream Sequence Number): to distinguish between different data chunks belonging to the same stream, SCTP uses SSNs.
• TCP has segments; SCTP has packets.
Comparison between a TCP segment and an SCTP packet

Note

In SCTP, control information and data information are carried in separate chunks.
Figure 23.30 Packet, data chunks, and streams
Table 23.5 Chunks
A connection in SCTP is called an association.

Association Establishment:-
Association Termination:-
Congestion Control and Quality of Service
24-1 DATA TRAFFIC

The main focus of congestion control and quality of service is data traffic. In congestion control we try to avoid traffic congestion. In quality of service, we try to create an appropriate environment for the traffic. So, before talking about congestion control and quality of service, we discuss the data traffic itself.

Topics discussed in this section:
Traffic Descriptor
Traffic Profiles
Figure 24.1 Traffic descriptors
Figure 24.2 Three traffic profiles
CONGESTION

Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets a network can handle). Congestion control refers to the mechanisms and techniques to control the congestion and keep the load below the capacity.
Congestion: the load on the network is greater than the capacity of the network.
Congestion control: the mechanisms to control the congestion and keep the load below the capacity.
Congestion occurs because routers and switches have queues (buffers) that hold the packets before and after processing.
If the packet arrival rate is higher than the packet processing rate 🡪 the input queue gets longer.
If the packet departure rate is less than the packet processing rate 🡪 the output queue gets longer.
CONGESTION CONTROL

Congestion control refers to techniques and mechanisms that can either prevent congestion, before it happens, or remove congestion, after it has happened. In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).

Topics discussed in this section:
Open-Loop Congestion Control
Closed-Loop Congestion Control

Figure 24.5 Congestion control categories
Open-loop:-
Open-loop congestion control mechanisms try to prevent congestion before it happens; the policies are applied by either the source or the destination.
Closed-loop:-
Closed-loop congestion control mechanisms try to alleviate congestion after it happens. Several mechanisms have been used by different protocols:

1. Backpressure: inform the previous upstream router to reduce the rate of outgoing packets if congested.

2. Choke packet: a packet sent by a router to the source to inform it of congestion, similar to ICMP's source-quench packet.
3. Implicit signaling: there is no communication between the congested node or nodes and the source. The source guesses that there is congestion somewhere in the network from other symptoms.

4. Explicit signaling: the node that experiences congestion can explicitly send a signal to the source or destination (backward signaling / forward signaling).
🢝 Backward Signaling A bit can be set in a packet
moving in the direction opposite to the congestion.
This bit can warn the source that there is congestion
and that it needs to slow down to avoid the
discarding of packets.

🢝 Forward Signaling: A bit can be set in a packet moving in the direction of the congestion. This bit can warn the destination that there is congestion. The receiver in this case can use policies, such as slowing down the acknowledgments, to alleviate the congestion.
QUALITY OF SERVICE

Quality of service (QoS) is an internetworking issue that has been discussed more than defined. We can informally define quality of service as something a flow seeks to attain.

Topics discussed in this section:
Flow Characteristics
Flow Classes

Figure 24.15 Flow characteristics
TECHNIQUES TO IMPROVE QoS

In Section 24.5 we tried to define QoS in terms of its characteristics. In this section, we discuss some techniques that can be used to improve the quality of service. We briefly discuss four common methods: scheduling, traffic shaping, admission control, and resource reservation.

Topics discussed in this section:
Scheduling
Traffic Shaping
Resource Reservation
Admission Control
Figure 24.16 FIFO queue
Figure 24.17 Priority queuing
Figure 24.18 Weighted fair queuing
LEAKY BUCKET:-
Leaky bucket implementation
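The leaky bucket figure is not reproduced here, so as a hedged sketch of the algorithm: arriving packets fill a finite queue (the bucket) and leave at a constant rate, smoothing bursty input into a steady output; packets that would overflow the bucket are discarded. The capacity and output rate below are arbitrary illustration values:

```python
from collections import deque

def leaky_bucket(arrivals, capacity, out_rate):
    """arrivals[i] = packets arriving in tick i. The bucket holds at most
    `capacity` packets; each tick it leaks at most `out_rate` packets.
    Returns (packets sent per tick, packets dropped per tick)."""
    bucket, sent, dropped = deque(), [], []
    for n in arrivals:
        lost = max(0, len(bucket) + n - capacity)   # overflow is discarded
        dropped.append(lost)
        for _ in range(n - lost):
            bucket.append(1)                        # admit what fits
        out = min(out_rate, len(bucket))
        for _ in range(out):
            bucket.popleft()                        # constant-rate output
        sent.append(out)
    return sent, dropped

# A burst of 5 packets is smoothed to at most 2 packets per tick.
print(leaky_bucket([5, 0, 0, 0], capacity=4, out_rate=2))
```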
TOKEN BUCKET:-
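The token bucket figure is likewise not reproduced, so as a hedged sketch of the algorithm: tokens accumulate at a fixed rate up to the bucket capacity, and each transmitted packet spends one token, so an idle host can bank tokens and later send a burst (unlike the leaky bucket, which forbids bursts). The capacity and token rate below are arbitrary illustration values:

```python
def token_bucket(arrivals, capacity, rate):
    """Tokens accumulate at `rate` per tick, up to `capacity`.
    A packet is sent only if a token is available; unsent packets are
    assumed dropped in this sketch. Returns packets sent per tick."""
    tokens, sent = 0, []
    for n in arrivals:
        tokens = min(capacity, tokens + rate)   # earn this tick's tokens
        out = min(n, tokens)                    # spend one token per packet
        tokens -= out
        sent.append(out)
    return sent

# Two idle ticks bank tokens, permitting a burst of 3 in tick 2.
print(token_bucket([0, 0, 5, 5], capacity=3, rate=1))  # [0, 0, 3, 1]
```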
