UNIT – IV
TRANSPORT LAYER
TOPIC – 1 PROCESS TO PROCESS COMMUNICATION
Whether on a single computer or across vast networks, programs need to
talk to each other. This exchange of information, known as inter-process
communication (IPC) or, in networking, process-to-process communication,
is the foundation of any system in which multiple programs work together.
Real communication takes place between two processes (application
programs), so we need process-to-process delivery. The transport layer is
responsible for process-to-process delivery: the delivery of a packet, part of
a message, from one process to another. Figure 4.1 shows the three types
of delivery and their domains: node-to-node delivery at the data link layer,
host-to-host delivery at the network layer, and process-to-process delivery
at the transport layer.
1. Client/Server Paradigm
Although there are several ways to achieve process-to-process communication,
the most common one is through the client/server paradigm. A process on the
local host, called a client, needs services from a process usually on the remote
host, called a server. Both processes (client and server) have the same name. For
example, to get the day and time from a remote machine, we need a Daytime
client process running on the local host and a Daytime server process running on
a remote machine. For communication, we must define the following:
1. Local host
2. Local process
3. Remote host
4. Remote process
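As a concrete sketch of the Daytime example, the Python client below connects to
the well-known Daytime port (13) and prints whatever the server sends. The host
name is a placeholder, since many machines no longer run this service; the local
host, local process, remote host, and remote process are exactly the four
identifiers listed above.

    import socket

    HOST = "time.example.com"   # remote host (placeholder name)
    PORT = 13                   # well-known Daytime port = the remote process

    # The OS supplies the local host's IP address and an ephemeral port
    # for the local process, completing the four identifiers.
    with socket.create_connection((HOST, PORT), timeout=5) as s:
        print(s.recv(1024).decode(errors="replace").strip())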
2. Addressing
Whenever we need to deliver something to one specific destination among many,
we need an address. At the data link layer, we need a MAC address to choose one
node among several nodes if the connection is not point-to-point. A frame in the
data link layer needs a Destination MAC address for delivery and a source address
for the next node's reply.
Figure 4.2 shows this concept.
3. IANA Ranges
IANA (the Internet Assigned Numbers Authority) has divided the port numbers
into three ranges: well known, registered, and dynamic (or private), as shown in
Figure 4.4.
Well-known ports. The ports ranging from 0 to 1023 are assigned and
controlled by IANA. These are the well-known ports.
Registered ports. The ports ranging from 1024 to 49,151 are not assigned
or controlled by IANA. They can only be registered with IANA to prevent
duplication.
Dynamic ports. The ports ranging from 49,152 to 65,535 are neither
controlled nor registered. They can be used by any process. These are the
ephemeral ports.
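A small sketch of the dynamic range in practice: binding a socket to port 0 asks
the operating system to assign an ephemeral port. (Which range the OS actually
draws from is system-dependent; Linux, for instance, defaults to a range that
differs from the IANA one.)

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", 0))          # port 0 means "assign me an ephemeral port"
    print("ephemeral port:", s.getsockname()[1])
    s.close()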
4. Socket Addresses
Process-to-process delivery needs two identifiers, the IP address and the port
number, at each end to make a connection. The combination of an IP address and
a port number is called a socket address. The client socket address defines the
client process uniquely, just as the server socket address defines the server process
uniquely (see Figure 4.5).
The IP header contains the IP addresses; the UDP or TCP header contains the port
numbers.
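In the BSD socket API, a socket address is literally an (IP address, port number)
pair, as the short sketch below shows; the server host and port are assumptions
chosen for illustration.

    import socket

    with socket.create_connection(("example.com", 80), timeout=5) as s:
        print("client socket address:", s.getsockname())  # (local IP, ephemeral port)
        print("server socket address:", s.getpeername())  # (remote IP, 80)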
5. Multiplexing and Demultiplexing
The addressing mechanism allows multiplexing and demultiplexing by the
transport layer, as shown in Figure 4.6.
Multiplexing
At the sender site, there may be several processes that need to send packets.
However, there is only one transport layer protocol at any time. This is a many-
to-one relationship and requires multiplexing.
Demultiplexing
At the receiver site, the relationship is one-to-many and requires demultiplexing.
The transport layer receives datagrams from the network layer. After error
checking and dropping of the header, the transport layer delivers each message to
the appropriate process based on the port number.
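A toy sketch of demultiplexing, assuming a table that maps destination port
numbers to local processes (the ports and handlers here are invented for
illustration):

    # Registered "processes", keyed by the port each one listens on.
    handlers = {
        53: lambda payload: print("DNS process received", payload),
        123: lambda payload: print("NTP process received", payload),
    }

    def demultiplex(dst_port: int, payload: bytes) -> None:
        """Deliver the payload to the process bound to dst_port."""
        handler = handlers.get(dst_port)
        if handler is None:
            print("no process on port", dst_port)  # real stacks send an ICMP error here
        else:
            handler(payload)

    demultiplex(53, b"DNS response bytes")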
6. Connectionless Versus Connection-Oriented Service
A transport layer protocol can either be connectionless or connection-oriented.
Connectionless Service
In a connectionless service, the packets are sent from one party to another with
no need for connection establishment or connection release. The packets are not
numbered; they may be delayed or lost or may arrive out of sequence. There is
no acknowledgment either.
Connection-Oriented Service
In a connection-oriented service, a connection is first established between the
sender and the receiver. Data are transferred. At the end, the connection is
released.
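The difference shows up directly in the socket API. A hedged sketch (the
addresses are placeholders): UDP just sends, while TCP establishes a connection
first and releases it at the end.

    import socket

    # Connectionless (UDP): no establishment or release; the destination
    # address travels with every datagram.
    u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    u.sendto(b"hello", ("192.0.2.1", 9))   # 192.0.2.x is a documentation address
    u.close()

    # Connection-oriented (TCP): establish, transfer data, release.
    t = socket.create_connection(("example.com", 80), timeout=5)  # establish
    t.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")    # transfer
    t.close()                                                     # release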
7. Reliable Versus Unreliable
The transport layer service can be reliable or unreliable. If the application layer
program needs reliability, we use a reliable transport layer protocol by
implementing flow and error control at the transport layer. This means a slower
and more complex service.
In the Internet, there are three common transport layer protocols. UDP
is connectionless and unreliable; TCP and SCTP are connection-oriented and
reliable. These three can respond to the demands of the application layer
programs.
Because the network layer in the Internet is unreliable (best-effort delivery), we
need to implement reliability at the transport layer. To understand why error
control at the data link layer does not guarantee error control at the transport
layer, let us look at Figure 4.7.
TOPIC – 2 USER DATAGRAM PROTOCOL
User Datagram Protocol (UDP) is a transport layer protocol. UDP is
part of the Internet protocol suite, referred to as the UDP/IP suite.
Unlike TCP, it is an unreliable and connectionless protocol, so there is
no need to establish a connection before data transfer.
UDP helps to establish low-latency, loss-tolerating communication
over the network.
UDP enables process-to-process communication.
UDP offers only a minimal error-detection service (its checksum) and
no error recovery.
UDP Header
The UDP header is a simple, fixed 8-byte header, whereas the TCP header may vary
from 20 bytes to 60 bytes. The first 8 bytes contain all the necessary header
information, and the remaining part consists of data. The UDP port number fields are
each 16 bits long, so the range of port numbers is 0 to
65,535; port number 0 is reserved. Port numbers help to distinguish different
user requests or processes.
Source Port: A 2-byte field used to identify the port number
of the source.
Destination Port: A 2-byte field used to identify the destination port of the
packet.
Length: A 16-bit field giving the length of the UDP datagram, including the
header and the data.
Checksum: A 2-byte field. It is the 16-bit one's complement
of the one's complement sum of the UDP header, the pseudo-header of
information from the IP header, and the data, padded with zero octets at the end
(if necessary) to make a multiple of two octets.
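Since all four fields are 16 bits wide, the header can be unpacked in a few lines.
A sketch (the example values are invented):

    import struct

    def parse_udp_header(segment: bytes) -> dict:
        """Unpack the four 16-bit fields of the 8-byte UDP header."""
        src, dst, length, checksum = struct.unpack("!HHHH", segment[:8])
        return {"source_port": src, "destination_port": dst,
                "length": length, "checksum": checksum}

    # A made-up segment: source 49152, destination 53, length 12, checksum 0.
    demo = struct.pack("!HHHH", 49152, 53, 12, 0) + b"data"
    print(parse_udp_header(demo))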
Applications of UDP
Used for simple request-response communication when the size of the data is
small, so there is less concern about flow and error control (see the
sketch after this list).
It is a suitable protocol for multicasting, since multicast delivery is
connectionless and does not fit TCP's one-to-one connection model.
UDP is used by some routing update protocols such as RIP (Routing
Information Protocol).
Normally used for real-time applications which cannot tolerate uneven
delays between sections of a received message.
VoIP (Voice over Internet Protocol) services, such as Skype and
WhatsApp, use UDP for real-time voice communication. The delay in
voice communication can be noticeable if packets are delayed due to
congestion control, so UDP is used to ensure fast and efficient data
transmission.
DNS (Domain Name System) also uses UDP for its query/response
messages. DNS queries are typically small and require a quick response
time, making UDP a suitable protocol for this application.
DHCP (Dynamic Host Configuration Protocol) uses UDP to dynamically
assign IP addresses to devices on a network. DHCP messages are
typically small, and the delay caused by packet loss or retransmission is
generally not critical for this application.
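The sketch below shows the simple request-response pattern from the first point
in the list, assuming a placeholder server address; the timeout branch makes the
lack of delivery guarantees visible.

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(b"ping", ("192.0.2.1", 7))   # echo service, placeholder address
    try:
        reply, addr = sock.recvfrom(1024)
        print("reply from", addr, ":", reply)
    except socket.timeout:
        print("no reply: UDP itself offers no delivery guarantee")
    finally:
        sock.close()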
The following implementations use UDP as a transport layer protocol:
NTP (Network Time Protocol)
DNS (Domain Name System)
BOOTP, DHCP
NNP (Network News Protocol)
QOTD (Quote of the Day) protocol, TFTP (Trivial File Transfer Protocol),
RTSP (Real Time Streaming Protocol), RIP (Routing Information
Protocol)
The application layer can perform some of its tasks through UDP:
Trace Route
Record Route
Timestamp
At the receiver, UDP takes a datagram from the network layer, detaches its
header, and hands the message to the user process; at the sender, it simply
attaches its 8-byte header and passes the datagram down. With so little
processing, it works fast.
Advantages of UDP
Speed: UDP is faster than TCP because it does not have the overhead of
establishing a connection and ensuring reliable data delivery.
Lower latency: Since there is no connection establishment, there is lower
latency and faster response time.
Simplicity: UDP has a simpler protocol design than TCP, making it easier to
implement and manage.
Broadcast support: UDP supports broadcasting to multiple recipients, making
it useful for applications such as video streaming and online gaming.
Smaller packet size: UDP's 8-byte header is much smaller than TCP's 20 to 60
bytes, reducing per-packet overhead, which can reduce network congestion and
improve overall network performance.
User Datagram Protocol (UDP) is more efficient in terms of both latency and
bandwidth.
Disadvantages of UDP
No reliability: UDP does not guarantee delivery of packets or order of delivery,
which can lead to missing or duplicate data.
No congestion control: UDP does not have congestion control, which means
that it can send packets at a rate that can cause network congestion.
Vulnerable to attacks: UDP is vulnerable to denial-of-service attacks, where
an attacker can flood a network with UDP packets, overwhelming the network
and causing it to crash.
Limited use cases: UDP is not suitable for applications that require reliable
data delivery, such as email or file transfers, and is better suited for applications
that can tolerate some data loss, such as video streaming or online gaming.
TOPIC – 3 TRANSMISSION CONTROL
PROTOCOL
TCP (Transmission Control Protocol) is one of the main protocols of the
TCP/IP suite. It lies between the application and network layers and
provides reliable delivery services. Transmission Control
Protocol (TCP) ensures reliable and efficient data transmission over the
internet.
TCP plays a crucial role in managing the flow of data between
computers, guaranteeing that information is delivered accurately and in
the correct sequence.
Transmission Control Protocol (TCP) is a connection-oriented protocol
for communications that helps in the exchange of messages between
different devices over a network.
The Internet Protocol (IP), which establishes the technique for sending
data packets between computers, works with TCP.
The position of TCP is at the transport layer of the OSI model. TCP also
helps in ensuring that information is transmitted accurately by
establishing a virtual connection between the sender and receiver.
Working of Transmission Control Protocol (TCP)
The Transmission Control Protocol (TCP) breaks the data down into small
units, called segments, and then reassembles them into the original message at the
opposite end, making sure that each message reaches its target location intact.
Sending the information in small units makes it simpler to
maintain efficiency, as opposed to sending everything in one go.
After a particular message is broken down into segments, these segments may
travel along multiple routes if one route is jammed, but the destination remains
the same.
For example: When a user requests a web page on the internet, somewhere in
the world a server processes that request and sends back an HTML page to
that user. The server makes use of a protocol called HTTP. HTTP then
requests the TCP layer to set up the required connection and send the
HTML file.
Now, TCP breaks the data into small packets and forwards them to the
Internet Protocol (IP) layer. The packets are then sent to the destination through
different routes. The TCP layer in the user's system waits for the transmission to
finish and acknowledges once all packets have been received.
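A toy sketch of the segmentation and reassembly idea described above (an
illustration of the concept only, not real TCP):

    def segment(data: bytes, mss: int = 4) -> dict:
        """Split data into numbered segments of at most mss bytes."""
        return {seq: data[seq:seq + mss] for seq in range(0, len(data), mss)}

    def reassemble(segments: dict) -> bytes:
        """Put segments back in order using their sequence numbers."""
        return b"".join(segments[seq] for seq in sorted(segments))

    segs = segment(b"HELLO WORLD")
    # Segments may arrive out of order; the numbering restores the order.
    arrived = dict(reversed(list(segs.items())))
    print(reassemble(arrived))    # b'HELLO WORLD'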
Features of TCP
Some of the most prominent features of the Transmission Control Protocol are
mentioned below.
Segment Numbering System: TCP keeps track of the segments being
transmitted or received by assigning a number to each one. A specific byte
number is assigned to the data bytes that are to be transferred, while segments
are assigned sequence numbers. Acknowledgment numbers are assigned to
received segments.
Connection Oriented: The sender and receiver are connected to each other
until the completion of the process, and the order of the data is maintained,
i.e., the order remains the same before and after transmission.
Full Duplex: In TCP, data can be transmitted from the receiver to the sender
and vice versa at the same time. This increases the efficiency of data flow
between sender and receiver.
Flow Control: Flow control limits the rate at which a sender transfers data. This
is done to ensure reliable delivery. The receiver continually hints to the sender
about how much data it can receive, using a sliding window (see the sketch after
this list).
Error Control: TCP implements an error control mechanism for reliable data
transfer. Error control is byte-oriented. Segments are checked for error
detection. Error Control includes – Corrupted Segment & Lost Segment
Management, Out-of-order segments, Duplicate segments, etc.
Congestion Control: TCP takes into account the level of congestion in the
network. Congestion level is determined by the amount of data sent by a sender.
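To make the flow control idea concrete, here is a toy sliding-window sender (a
simplification, not real TCP: every burst is acknowledged at once).

    def send_with_window(data: bytes, advertised_window: int, mss: int = 2) -> None:
        """Toy sender: keep at most advertised_window unacknowledged bytes in flight."""
        base = 0        # sequence number of the oldest unacknowledged byte
        next_seq = 0    # sequence number of the next byte to send
        while base < len(data):
            # Send segments while the in-flight byte count fits in the window.
            while next_seq < len(data) and next_seq - base < advertised_window:
                seg = data[next_seq:next_seq + mss]
                print(f"send bytes {next_seq}-{next_seq + len(seg) - 1}")
                next_seq += len(seg)
            base = next_seq   # simplification: the whole burst is acknowledged at once
            print(f"ACK up to byte {base}; the window slides forward")

    send_with_window(b"ABCDEFGH", advertised_window=4)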
TOPIC – 4 SCTP CONGESTION CONTROL
Stream Control Transmission Protocol (SCTP) is a connection-oriented network
protocol used for transmitting multiple streams of data simultaneously between two
endpoints that have established a connection in a computer network. SCTP is a
transport layer protocol of the Internet Protocol (IP) suite.
SCTP supports telephone connections over the internet.
SCTP is a standard protocol developed by the Transport Area Working Group
(TSVWG) of the IETF (Internet Engineering Task Force). The protocol was developed
to provide a system similar to the telephone Signaling System 7 (SS7)
switching network for carrying call control signals over IP networks.
SCTP is similar to TCP, but it has the advantage of also providing message-oriented
data transfer, like the User Datagram Protocol (UDP), which makes it useful for
end-to-end communication over the internet. SCTP builds on concepts from both TCP
and UDP. Unlike TCP, SCTP ensures the concurrent transmission of several streams
of data, in units called messages, between the connected endpoints.
What is Multihoming in SCTP?
First, let us understand multihoming: multihoming is the practice of connecting a
host or network to multiple networks simultaneously, which is done to increase
reliability or performance.
Telecommunication systems are highly sensitive to time delays. Multihoming enables
a system with multiple interfaces to use one interface over another without waiting.
SCTP multihoming means that the connected endpoints can have multiple IP addresses
associated with them. Put simply, multihoming refers to sending data to an alternate
IP address if the primary IP address becomes unreachable for any reason. SCTP can
therefore establish multiple connection paths between two endpoints.
SCTP Packet
An SCTP packet consists of two main parts: the header and the payload. The header
is common to every packet, while the payload is made up of variable chunks.
The common SCTP header is 12 bytes long and consists of four fields:
Port Number (Source): identifies the sending port.
Port Number (Destination): identifies the receiving port.
Verification Tag: a 32-bit random value that distinguishes these packets from those
of a previous connection.
Checksum: a CRC32c checksum used for error detection.
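Given those field sizes (two 16-bit ports, a 32-bit verification tag, a 32-bit
checksum), the 12-byte common header can be unpacked as in this sketch; the demo
values are invented.

    import struct

    def parse_sctp_common_header(packet: bytes) -> dict:
        """Unpack the 12-byte SCTP common header."""
        src, dst, vtag, checksum = struct.unpack("!HHII", packet[:12])
        return {"source_port": src, "destination_port": dst,
                "verification_tag": vtag, "checksum": checksum}

    demo = struct.pack("!HHII", 5060, 5060, 0xDEADBEEF, 0)
    print(parse_sctp_common_header(demo))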
SCTP Services
Aggregate Server Access Protocol (ASAP)
Bearer-independent Call Control (BICC)
Direct Data Placement Segment chunk (DDP-segment)
Direct Data Placement Stream session control (DDP-stream)
Diameter in a DTLS/SCTP DATA chunk (Diameter-DTLS)
Applications of the SCTP Protocol
Telephone communication: It was developed for carrying telephony signaling over the
internet.
Multihoming support: It provides multihoming support, in which both endpoints of a
connection can have multiple IP addresses, which helps in detecting failures along
the communication path.
Transport for various applications: It is used to transport signaling messages to
and from SS7 (Signaling System 7) on devices supporting 3G networks, through M3UA
and M2UA.
Roaming security and RAN security: In mobile infrastructure it is used for roaming
security and RAN (Radio Access Network) security.
Reliable and secure transport: The protocol provides reliable and highly secure
transport, which minimizes end-to-end delay.
Advantages of SCTP
As SCTP is a full-duplex connection, it enables data to be sent and received
simultaneously. The data is delivered in chunks and in an ordered way independently
within each stream, which helps isolate the data of one stream from the others.
Like TCP, and unlike UDP, SCTP provides the following advantages:
Flow control: It adjusts the order and quantity of data transmission.
Congestion control: It checks the network prior to transmission to prevent
congestion on the links.
Fault tolerance: It can use IP addresses from different internet service providers,
so if one ISP fails, another connection can be used to keep the association alive.
It is message-oriented, like UDP, rather than byte-oriented like TCP.
It provides a path selection function to choose the primary data transmission path
and a monitoring function to test the connectivity of the transmission path.
TOPIC – 5 QUALITY OF SERVICE (QoS)
Quality of service (QoS) is the use of mechanisms or technologies that work on
a network to control traffic and ensure the performance of critical applications
with limited network capacity. It enables organizations to adjust their
overall network traffic by prioritizing specific high-performance applications.
QoS is typically applied to networks that carry traffic for resource-intensive
systems. Common services for which it is required include internet protocol
television (IPTV), online gaming, streaming media, videoconferencing, video on
demand (VOD), and Voice over IP (VoIP).
Using QoS in networking, organizations have the ability to optimize the
performance of multiple applications on their network and gain visibility into the
bit rate, delay, jitter, and packet rate of their network. This ensures they can
engineer the traffic on their network and change the way that packets are routed
to the internet or other networks to avoid transmission delay. This also ensures
that the organization achieves the expected service quality for applications and
delivers expected user experiences.
The key goal of QoS is to enable networks and organizations to
prioritize traffic, which includes offering dedicated bandwidth, controlled jitter, and
lower latency. The technologies used to ensure this are vital to enhancing the
performance of business applications, wide-area networks (WANs), and service
provider networks.
How Does QoS Work?
QoS networking technology works by marking packets to identify service types,
then configuring routers to create separate virtual queues for each application,
based on their priority. As a result, bandwidth is reserved for critical applications
or websites that have been assigned priority access.
QoS technologies provide capacity and handling allocation to specific flows in
network traffic. This enables the network administrator to assign the order in
which packets are handled and provide the appropriate amount of bandwidth to
each application or traffic flow.
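A toy sketch of the per-class queuing idea (the classes and priority values are
invented for illustration; real routers key queues off marking fields such as DSCP):

    import heapq
    import itertools

    PRIORITY = {"voice": 0, "video": 1, "best_effort": 2}  # lower value = served first
    order = itertools.count()   # tie-breaker so equal priorities stay FIFO
    queue = []

    def enqueue(traffic_class: str, packet: str) -> None:
        heapq.heappush(queue, (PRIORITY[traffic_class], next(order), packet))

    enqueue("best_effort", "web download chunk")
    enqueue("voice", "VoIP frame")
    enqueue("video", "video frame")
    while queue:
        _, _, packet = heapq.heappop(queue)
        print("transmit:", packet)   # voice, then video, then best effort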
Types of network traffic
Understanding how QoS network software works is reliant on defining the various types
of traffic that it measures. These are:
1. Bandwidth: The speed of a link. QoS can tell a router how to use bandwidth. For
example, assigning a certain amount of bandwidth to different queues for different traffic
types.
2. Delay: The time it takes for a packet to go from its source to its final destination. This can
   often be affected by queuing delay, which occurs during times of congestion when a
   packet waits in a queue before being transmitted. QoS enables organizations to avoid
   this by creating a priority queue for certain types of traffic.
3. Loss: The amount of data lost as a result of packet loss, which typically occurs due to
network congestion. QoS enables organizations to decide which packets to drop in this
event.
4. Jitter: The variation in packet delay on a network as a result of congestion, which can
   result in packets arriving late and out of sequence. This can cause distortion or gaps in
   the audio and video being delivered.
Advantages of QoS
The deployment of QoS is crucial for businesses that want to ensure the availability of
their business-critical applications. It is vital for delivering differentiated bandwidth and
ensuring data transmission takes place without interrupting traffic flow or causing packet
losses. Major advantages of deploying QoS include:
1. Unlimited application prioritization: QoS guarantees that businesses’ most mission-
critical applications will always have priority and the necessary resources to achieve high
performance.
2. Better resource management: QoS enables administrators to better manage the
organization’s internet resources. This also reduces costs and the need for investments
in link expansions.
3. Enhanced user experience: The end goal of QoS is to guarantee the high performance
of critical applications, which boils down to delivering optimal user experience.
Employees enjoy high performance on their high-bandwidth applications, which enables
them to be more effective and get their job done more quickly.
4. Point-to-point traffic management: Managing a network is vital however traffic is
delivered, be it end to end, node to node, or point to point. The latter enables
organizations to deliver customer packets in order from one point to the next over the
internet without suffering any packet loss.
5. Packet loss prevention: Packet loss can occur when packets of data are dropped in
transit between networks. This can often be caused by a failure or inefficiency, network
congestion, a faulty router, loose connection, or poor signal. QoS avoids the potential of
packet loss by prioritizing bandwidth of high-performance applications.
6. Latency reduction: Latency is the time it takes for a network request to go from the
sender to the receiver and for the receiver to process it. This is typically affected by
routers taking longer to analyze information and storage delays caused by intermediate
switches and bridges. QoS enables organizations to reduce latency, or speed up the
process of a network request, by prioritizing their critical application.
TOPIC – 6 TECHNIQUES TO IMPROVE QoS
1. LEAKY BUCKET
When too many packets are present in the network it causes packet
delay and loss of packet which degrades the performance of the
system. This situation is called congestion.
The network layer and transport layer share the responsibility for
handling congestion. One of the most effective ways to control
congestion is to reduce the load that the transport layer places
on the network. To achieve this, the network and transport layers
have to work together.
With too much traffic, performance drops sharply.
A simple leaky bucket algorithm can be implemented using a FIFO queue, which holds
the packets. If the traffic consists of fixed-size packets (e.g., cells in ATM networks), the
process removes a fixed number of packets from the queue at each tick of the clock. If the
traffic consists of variable-length packets, the fixed output rate must be based on the number
of bytes or bits.
The following is an algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. Repeat steps 3 to 5 until n is smaller than the packet size of the packet at the head of the queue.
3. Pop a packet out of the head of the queue; call it P.
4. Send the packet P into the network.
5. Decrement the counter by the size of packet P.
6. Reset the counter and go to step 1.
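A minimal Python sketch of the variable-length algorithm above; the queued packet
sizes and the per-tick budget n are invented for illustration.

    from collections import deque

    def leaky_bucket_tick(queue: deque, n: int) -> None:
        """Run one clock tick: send queued packets until the byte budget runs out."""
        counter = n                                  # step 1: initialize the counter to n
        while queue and queue[0] <= counter:         # step 2: stop when the head no longer fits
            size = queue.popleft()                   # step 3: pop packet P
            print("send packet of", size, "bytes")   # step 4: send P into the network
            counter -= size                          # step 5: decrement by the size of P
        # step 6: the counter is reset at the next tick

    q = deque([200, 400, 700, 300])   # queued packet sizes in bytes
    leaky_bucket_tick(q, n=1000)      # one tick with a 1000-byte budget
    print("still queued:", list(q))   # [700, 300]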
2. TOKEN BUCKET ALGORITHM
The token bucket algorithm is another technique for congestion
control. As described above, too many packets in the network cause
packet delay and packet loss, which degrade performance, and
controlling this again requires the network and transport layers to
work together to limit the load placed on the network.
The token bucket algorithm is diagrammatically represented as follows.
Token Bucket Algorithm
The leaky bucket algorithm enforces output at the average rate, no
matter how bursty the traffic is. So, to deal with bursty traffic, we need a
more flexible algorithm so that data is not lost. One such approach is the token
bucket algorithm.
Let us understand this algorithm step by step:
Compared to the leaky bucket, the token bucket algorithm is less restrictive,
which means it allows more (bursty) traffic. The limit of burstiness is set by the
number of tokens available in the bucket at a particular instant of time.
The implementation of the token bucket algorithm is easy: a variable is used
to count the tokens. Every t seconds the counter is incremented, and it
is decremented whenever a packet is sent. When the counter reaches zero,
no further packets are sent.
This is shown in the diagram below.
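Alongside the diagram, here is a minimal counter-based sketch of this description;
the capacity and tick counts are invented for illustration.

    class TokenBucket:
        """Counter-based token bucket: one token per tick, one token per packet."""

        def __init__(self, capacity: int):
            self.capacity = capacity   # bucket size: the largest burst that can be saved up
            self.tokens = 0

        def tick(self) -> None:
            # Every t seconds a token is added, up to the bucket's capacity.
            self.tokens = min(self.capacity, self.tokens + 1)

        def try_send(self) -> bool:
            # Sending a packet consumes one token; at zero tokens, nothing is sent.
            if self.tokens == 0:
                return False
            self.tokens -= 1
            return True

    bucket = TokenBucket(capacity=3)
    for _ in range(5):   # five idle ticks: tokens accumulate but are capped at 3
        bucket.tick()
    print([bucket.try_send() for _ in range(5)])   # [True, True, True, False, False]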