CN Unit 5

The document discusses the Transport layer's role in data transmission, including its functions such as segmentation, flow control, and error control. It also explains the User Datagram Protocol (UDP), highlighting its connectionless nature, features, and differences from TCP, along with its header format and queuing mechanism. Additionally, it covers the concepts of unicast and multicast communication, TCP connection management, and the reliability of data transfers.

TRANSPORT LAYER AND APPLICATION LAYER

Transport layer
5.1 Design issues

The basic function of the Transport layer is to accept data from the layer above, split it up into
smaller units, pass these data units to the Network layer, and ensure that all the pieces arrive
correctly at the other end.

Furthermore, all this must be done efficiently and in a way that isolates the upper layers from the
inevitable changes in the hardware technology.

The Transport layer also determines what type of service to provide to the Session layer, and,
ultimately, to the users of the network. The most popular type of transport connection is an error-
free point-to-point channel that delivers messages or bytes in the order in which they were sent.

The Transport layer is a true end-to-end layer, all the way from the source to the destination. In other
words, a program on the source machine carries on a conversation with a similar program on the
destination machine, using the message headers and control messages.

Functions of Transport Layer

1. Service Point Addressing: The Transport layer header includes a service-point address, which is the port address. This layer delivers the message to the correct process on the computer, unlike the Network layer, which delivers each packet to the correct computer.

2. Segmentation and Reassembling: A message is divided into segments; each segment contains a sequence number, which enables this layer to reassemble the message. The message is reassembled correctly upon arrival at the destination, and packets lost in transmission are retransmitted.

3. Connection Control: It includes 2 types:

o Connectionless Transport Layer: Each segment is considered an independent packet and delivered to the transport layer at the destination machine.

o Connection-Oriented Transport Layer: Before delivering packets, a connection is made with the transport layer at the destination machine.

4. Flow Control: In this layer, flow control is performed end to end.

5. Error Control: Error Control is performed end to end in this layer to ensure that the
complete message arrives at the receiving transport layer without any error. Error Correction
is done through retransmission.

Fig 5.1 Transport layer

Design Issues with Transport Layer

 Accepting data from the Session layer, splitting it into segments, and sending them to the Network layer.

 Ensuring correct and efficient delivery of data.

 Isolating the upper layers from technological changes.

 Performing error control and flow control.

5.2 UDP Header format

In computer networking, UDP stands for User Datagram Protocol. David P. Reed developed the UDP protocol in 1980. It is defined in RFC 768, and it is part of the TCP/IP protocol suite, so it is a standard protocol over the internet. The UDP protocol allows computer applications to send messages in the form of datagrams from one machine to another over an Internet Protocol (IP) network. UDP is an alternative communication protocol to TCP (Transmission Control Protocol). Like TCP, UDP provides a set of rules that governs how data should be exchanged over the internet. UDP works by encapsulating the data into a packet and providing its own header information for the packet. This UDP packet is then encapsulated in an IP packet and sent off to its destination. Both TCP and UDP send data over the internet protocol network, so they are also known as TCP/IP and UDP/IP. There are many differences between these two protocols. UDP enables process-to-process communication on top of IP's host-to-host communication. Since UDP sends messages in the form of datagrams, it is considered a best-effort mode of communication. TCP numbers and acknowledges individual segments, so it is a reliable transport medium. Another difference is that TCP is a connection-oriented protocol, whereas UDP is a connectionless protocol, as it does not require any virtual circuit to transfer the data.

UDP also provides a different port number to distinguish different user requests and also provides
the checksum capability to verify whether the complete data has arrived or not; the IP layer does not
provide these two services.

Features of UDP protocol

The following are the features of the UDP protocol:


o Transport layer protocol

UDP is the simplest transport layer communication protocol. It contains a minimal set of communication mechanisms. It is considered an unreliable protocol based on best-effort delivery: UDP provides no acknowledgment mechanism, which means that the receiver does not send an acknowledgment for a received packet, and the sender does not wait for an acknowledgment for a packet it has sent.


o Connectionless

UDP is a connectionless protocol as it does not create a virtual path to transfer the data. Because there is no fixed path, different packets may travel different routes between the sender and the receiver, so packets can be lost or arrive out of order.

Ordered delivery of data is not guaranteed.

Since the datagrams are not numbered, there is no guarantee that datagrams sent in some order will be received in that same order.

o Ports

The UDP protocol uses port numbers so that data can be sent to the correct destination process. Port numbers range from 0 to 65,535; the numbers from 0 to 1023 are the well-known ports.

o Faster transmission

UDP enables faster transmission as it is a connectionless protocol, i.e., no virtual path is required to transfer the data. But there is a chance that an individual packet is lost, which affects the transmission quality. In contrast, if a packet is lost on a TCP connection, it will be resent, so TCP guarantees the delivery of the data packets.

o Acknowledgment mechanism

UDP does not have any acknowledgment mechanism, i.e., there is no handshaking between the UDP sender and the UDP receiver. With TCP, the receiver first acknowledges that it is ready, and only then does the sender send the data; handshaking occurs between sender and receiver. In UDP, there is no such handshake.

o Segments are handled independently.

Each UDP segment is handled independently of the others, as each segment may take a different path to reach the destination. UDP segments can be lost or delivered out of order because there is no connection setup between the sender and the receiver.

o Stateless

UDP is a stateless protocol, which means that neither end maintains connection state; for example, the sender does not get an acknowledgment for a packet it has sent.

Why do we require the UDP protocol?

Although UDP is an unreliable protocol, it is still required in some cases. UDP is deployed where acknowledgment traffic would consume a large amount of bandwidth alongside the actual data. For example, in video streaming, acknowledging thousands of packets is troublesome and wastes a lot of bandwidth, while the loss of some packets does not cause a serious problem and can be ignored.

UDP Header Format

Fig 5.2 UDP Header Format

In UDP, the header size is 8 bytes, and the maximum packet size is 65,535 bytes. This full size is not achievable in practice, because the UDP packet must be encapsulated in an IP datagram, whose header is at least 20 bytes; therefore, the maximum UDP packet (header plus data) is 65,535 minus 20 = 65,515 bytes. The maximum data that a UDP packet can carry is 65,535 minus 28 = 65,507 bytes, accounting for the 8-byte UDP header and the 20-byte IP header.
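This size arithmetic can be checked in a few lines (a sketch; the constants come from the text and the minimum IPv4 header size):

```python
UDP_HEADER = 8          # bytes, fixed size of the UDP header (RFC 768)
MIN_IP_HEADER = 20      # bytes, IPv4 header without options
MAX_IP_PACKET = 65_535  # bytes, limit of the 16-bit IP total-length field

# Largest UDP packet (header + data) that fits in one IP datagram:
max_udp_packet = MAX_IP_PACKET - MIN_IP_HEADER
# Largest data payload a UDP packet can carry:
max_udp_payload = MAX_IP_PACKET - MIN_IP_HEADER - UDP_HEADER

print(max_udp_packet, max_udp_payload)  # 65515 65507
```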

The UDP header contains four fields:

o Source port number: A 16-bit field that identifies which port is going to send the packet.

o Destination port number: It identifies which port is going to accept the information. It is
16-bit information which is used to identify application-level service on the destination
machine.

o Length: A 16-bit field that specifies the entire length of the UDP packet, including the header. The minimum value is 8 bytes, the size of the header alone.

o Checksum: A 16-bit, optional field. The checksum verifies whether the information is accurate, as data can be corrupted during transmission. Because it is optional, the application decides whether to compute the checksum; if it does not, all 16 bits are set to zero. In UDP, the checksum covers the entire packet, i.e., the header as well as the data part, whereas in IP the checksum covers only the header.
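The four 16-bit fields above can be packed and parsed with Python's struct module (a sketch; the example port numbers used below are arbitrary):

```python
import struct

def build_udp_header(src_port, dst_port, payload_len, checksum=0):
    """Pack the four 16-bit UDP header fields (RFC 768) in network byte order."""
    length = 8 + payload_len  # length field covers header (8 bytes) + data
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

def parse_udp_header(header):
    """Unpack an 8-byte UDP header into its four fields."""
    src, dst, length, checksum = struct.unpack("!HHHH", header[:8])
    return {"src_port": src, "dst_port": dst, "length": length, "checksum": checksum}
```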

Concept of Queuing in UDP protocol

Fig 5.3 Queuing in UDP

In the UDP protocol, port numbers are used to distinguish the different processes on a server and a client. UDP provides process-to-process communication. The client creates processes that request services, while the server creates processes that provide services. Queues are available for both kinds of processes, i.e., two queues for each process: an incoming queue that receives messages and an outgoing queue that sends messages. A queue functions only while its process is running; if the process is terminated, the queue is destroyed.

UDP handles the sending and receiving of the UDP packets with the help of the following
components:


o Input queue: UDP uses a set of queues, one for each process.

o Input module: This module takes a user datagram from IP, then looks for the corresponding entry in the control block table by port number. If it finds an entry with the same port as the user datagram, it enqueues the data.

o Control Block Module: It manages the control block table.

o Control Block Table: The control block table contains the entry of open ports.

o Output module: The output module creates and sends the user datagram.

Several processes may want to use the services of UDP at the same time. UDP multiplexes and demultiplexes among these processes so that multiple processes can run on a single host.
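The port-based, connectionless delivery described above can be sketched with two local sockets (an illustrative example on the loopback interface; real deployments use remote addresses):

```python
import socket

def udp_send_receive(message: bytes) -> bytes:
    """Send one datagram between two local UDP sockets and return what the
    receiver got: a minimal sketch of UDP's port-based process-to-process
    delivery, with no connection setup."""
    # Receiver: bind to an OS-chosen free port (port 0) so the kernel can
    # demultiplex arriving datagrams to this process.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    port = rx.getsockname()[1]  # the port the OS picked for us

    # Sender: no handshake; just address the datagram and send it.
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(message, ("127.0.0.1", port))

    data, _addr = rx.recvfrom(2048)  # blocks until the datagram arrives
    tx.close()
    rx.close()
    return data
```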

Limitations

o It provides an unreliable connectionless delivery service. It adds little to the services of IP beyond process-to-process communication.

o The UDP message can be lost, delayed, duplicated, or can be out of order.

o It does not provide a reliable transport delivery service. It does not provide any
acknowledgment or flow control mechanism. However, it does provide error control to some
extent.

Advantages

o It incurs minimal overhead.

UDP Checksum

 Checksum :

i. Here the checksum includes three sections: a pseudo header, the UDP header, and the data
coming from the application layer.

ii. The pseudo header is the part of the header of the IP packet in which the user datagram is to be encapsulated, with some fields filled with 0s (see Fig 5.4).

Fig 5.4 Header

iii. If the checksum does not include the pseudo header, a user datagram may arrive safe and
sound. However, if the IP header is corrupted, it may be delivered to the wrong host.

iv. The protocol field is added to ensure that the packet belongs to UDP, and not to TCP.

v. The value of the protocol field for UDP is 17. If this value is changed during transmission,
the checksum calculation at the receiver will detect it and UDP drops the packet. It is not
delivered to the wrong protocol.

 Checksum Calculation:

UDP checksum calculation is similar to TCP checksum computation. It is also a 16-bit field containing the one's complement of the one's complement sum of a pseudo UDP header plus the UDP datagram.

UDP Unicast/Multicast Real time traffic

1. Unicast: Unicast is a type of information transfer used when there is a single sender and a single recipient, i.e., one-to-one mapping. For example, if a device with IP address 10.1.4.0 in one network wants to send a traffic stream (data packets) to a device with IP address 20.14.4.2 in another network, unicast is used. It is the most common form of data transfer over the network.

Fig 5.5 UDP Traffic

Multicasting: Multicasting has one or more senders and multiple recipients participating in the data transfer traffic. Multicast traffic lies between the boundaries of unicast and broadcast. Servers direct single copies of data streams, which are then replicated and routed to the hosts that request them. IP multicast requires the support of other protocols, such as the Internet Group Management Protocol (IGMP) and multicast routing, for its working. Also, in classful IP addressing, Class D is reserved for multicast groups.

Fig 5.6 Multicasting

Difference between Unicast and Multicast:

S.No.  Unicast                                            Multicast

1.     It has one sender and one receiver.                It has one or more senders and multiple receivers.

2.     It sends data from one device to a single device.  It sends data from one device to multiple devices.

3.     It works on single-node topology.                  It works on star, mesh, tree and hybrid topologies.

4.     It does not scale well for streaming media.        It does not scale well across large networks.

5.     Multiple unicasting utilizes more bandwidth.       It utilizes bandwidth efficiently.

6.     Web surfing and file transfer are examples.        Video streaming and online gaming are examples.

7.     It has one-to-one mapping.                         It has one-to-many mapping.

8.     Network traffic is high.                           Network traffic is low.

9.     A mobile phone is a unicast device.                A switch is a multicast device.
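The Class D reservation mentioned above is easy to check programmatically; a small sketch (the sample addresses are arbitrary):

```python
import ipaddress

def is_class_d(addr: str) -> bool:
    """True if addr falls in the Class D range 224.0.0.0-239.255.255.255,
    which classful addressing reserves for multicast groups (the leading
    bits of the first octet are 1110)."""
    first_octet = int(addr.split(".")[0])
    return 224 <= first_octet <= 239
```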

5.3 TCP connection Management

A TCP connection is established using the three-way handshake discussed earlier. One side, say the server, passively waits for an incoming connection by executing the LISTEN and ACCEPT primitives, either specifying a particular source or nobody in particular.

The other side executes a CONNECT primitive, specifying the IP address and port to which it wants to connect, the maximum TCP segment size it is willing to accept, and optionally some user data (for example, a password).

The CONNECT primitive transmits a TCP segment with the SYN bit on and the ACK bit off and
waits for a response.

The sequence of TCP segments sent in the typical case is shown in the figure below.

Fig 5.7 TCP Connection Management

When the segment sent by Host-1 reaches the destination, i.e., Host-2, the receiving server checks whether a process has done a LISTEN on the port given in the destination port field. If not, it sends a reply with the RST bit on to refuse the connection. Otherwise, it directs the TCP segment to the listening process, which can accept or reject the connection (for example, if the client does not look right).

Call Collision

If two hosts simultaneously try to establish a connection between the same two sockets, the sequence of events is as demonstrated in the figure. Only one connection is established, because connections are identified by their endpoints: if the first setup results in a connection identified by (x, y) and the second setup does too, only one table entry is made, for (x, y).

For the initial sequence number, a clock-based scheme is used, with a clock pulse coming every 4 microseconds. For additional safety after a host crashes, it may not establish new connections for one maximum packet lifetime. This is to make sure that no packets from previous connections are still roaming around the network.

TCP Reliability of data transfers

Transport layer protocols are a central piece of layered architectures; they provide logical communication between application processes. These processes use this logical communication to pass data from the transport layer down to the network layer, and this transfer of data should be reliable and secure. Data is transferred in the form of packets, but the problem lies in the reliable transfer of that data. The problem of transferring data reliably occurs not only at the transport layer, but also at the application layer and at the link layer. It arises whenever a reliable service runs on top of an unreliable one. For example, TCP (Transmission Control Protocol) is a reliable data transfer protocol that is implemented on top of an unreliable end-to-end network layer protocol, the Internet Protocol (IP).

Fig 5.8 Study of Reliable Data Transfer

In this model, we design the sender and receiver sides of a protocol over a reliable channel. In reliable data transfer, the layer receives data from the layer above, breaks the message into segments, puts a header on each segment, and passes them down. The layer below receives the segments, removes the header from each, and reassembles them. With a reliable data transfer protocol, no transferred bits are corrupted or lost, and all are delivered in the same sequence in which they were sent. This is the service model that TCP offers to the Internet applications that invoke this transfer of data.

Fig 5.9 Study of Unreliable Data Transfer

Similarly, for an unreliable channel we design the sending and receiving sides. The sending side of the protocol is invoked from the layer above by calling rdt_send(), which passes the data that is to be delivered to the application layer on the receiving side (here rdt stands for reliable data transfer protocol and _send() marks the sending side). On the receiving side, rdt_rcv() (where _rcv() marks the receiving side) is called when a packet arrives from the receiving side of the unreliable channel. When the rdt protocol wants to deliver data to the application layer, it does so by calling deliver_data() (a function for delivering data to the upper layer). Here we consider only the case of unidirectional data transfer, that is, transfer of data from the sending side to the receiving side; the bidirectional (full-duplex) case is conceptually more difficult. It is important to note, however, that the sending and receiving sides of our protocol still need to transmit packets in both directions, as shown in the figure above: in addition to exchanging packets containing the data to be transferred, the two sides of rdt also need to exchange control packets back and forth. Both sides of rdt send packets to the other side by calling udt_send() (where udt stands for unreliable data transfer protocol).
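The interfaces named above can be sketched in a few lines of Python. This is a minimal rdt 1.0-style illustration over a perfectly reliable channel; the in-memory channel list standing in for the network is an assumption for demonstration only:

```python
# Sketch of the rdt interfaces over a perfectly reliable channel.
channel = []    # packets "in flight" (stand-in for the underlying channel)
delivered = []  # data handed up to the receiving application

def udt_send(packet):
    """Hand a packet to the (here: perfectly reliable) channel."""
    channel.append(packet)

def deliver_data(data):
    """Pass data up to the application layer on the receiving side."""
    delivered.append(data)

def rdt_send(data):
    """Sending side: make a packet from the data and send it."""
    udt_send({"payload": data})

def rdt_rcv(packet):
    """Receiving side: extract the data and deliver it upward."""
    deliver_data(packet["payload"])

def run():
    """Drain the channel, invoking rdt_rcv for each arriving packet."""
    while channel:
        rdt_rcv(channel.pop(0))
```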

TCP flow control

Before discussing TCP flow control, it’s worth describing the flow control functionality in computer
networks. When two network hosts start communicating, one sends packets, and the other receives
them.

Both may have different hosting hardware, software design, and processing speed. If the receiver is
fast enough to consume messages at a higher rate than those generated by the sender, all works well.

But what will happen if the receiver consumes messages more slowly than the sender produces them? Messages will keep accumulating in the receiver's queue, and after some time messages will start dropping once the receiver queue is full. To overcome this fast-sender, slow-receiver problem, computer networks use a concept known as flow control.

Fig 5.10 TCP flow control

What is the TCP flow control?

 Slow sender, speedy receiver – no flow control required.

 Fast sender, slow receiver – flow control is needed.

The above diagram shows a slow receiver and a fast sender. Let's understand how the messages overflow after a certain period.

 The sender is sending messages at the rate of 10 messages per second, while the receiver is
receiving at the rate of 5 messages per second.

 When a sender sends a message, the network enqueues messages in the receiver queue.

 Once the user application reads a message, the message is removed from the queue, freeing one buffer slot.

 At these sender and receiver speeds, the receiver queue fills at the pace of 5 messages every second.

 Finally, after 40 seconds (assuming a 200-message queue), there will be no space remaining for incoming messages, and messages will start dropping.
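The scenario above can be simulated one second at a time (a sketch; the 200-message queue capacity is an assumption implied by the text's numbers, since a net growth of 5 messages per second fills it in 40 seconds):

```python
def seconds_until_full(send_rate, recv_rate, queue_capacity):
    """Step the fast-sender/slow-receiver scenario one second at a time
    and return the second in which the receiver queue becomes full."""
    backlog = 0   # messages waiting in the receiver queue
    seconds = 0
    while backlog < queue_capacity:
        seconds += 1
        backlog += send_rate - recv_rate  # net queue growth per second
    return seconds
```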

Why is TCP flow control required?

TCP is an example of a transport layer protocol as per the OSI reference model. It provides reliable
and sequenced delivery of messages. Because of reliable delivery, TCP retries to send a lost segment
if a packet is lost due to a slow receiver. If there is no flow control, TCP will keep resending
repeatedly, and the situation will worsen over the network.

With the flow control, the TCP receiver keeps sending the available space capacity for the incoming
messages to the sender during the communication. The sender updates the space information and
reduces the outgoing message rate. The space is known as the receiver window size.

How does the TCP implement the flow control?

For implementing flow control, the sender should know how much free space is available on the
receiver before sending further messages. In an earlier TCP header tutorial, we described various
protocol parameters. One parameter is the window size. Both ends add their own window size in the
header in each TCP segment.

During connection setup, the window size is the maximum capacity available. During packet
transfer, the window size keeps updating. The window size value is zero when a TCP end can not
accept further messages.
When the sender receives a window size of zero, it stops sending any further messages until it again receives a message with a window size greater than zero.

Flow control and TCP user:

Till now, all discussion was for the TCP layer. Here we will discuss the traffic source for TCP,
which is the user of the layer.

Difference Between Flow Control and Congestion Control:

Flow control operates end to end, between the two communicating hosts. Congestion control concerns the nodes on the path between them; for example, a router between two communicating nodes may become congested.

How does a TCP application know when to stop?

The purpose of flow control is to let the sender know that the receiver is slower. A sender application
should be informed in case of a slow receiver. Generally, the TCP module informs the sender while
sending data.

5.4 TCP Congestion Control

TCP congestion control is a method used by the TCP protocol to manage data flow over a network
and prevent congestion. TCP uses a congestion window and congestion policy that avoids
congestion. Previously, we assumed that only the receiver could dictate the sender’s window size.
We ignored another entity here, the network. If the network cannot deliver the data as fast as it is
created by the sender, it must tell the sender to slow down. In other words, in addition to the receiver,
the network is a second entity that determines the size of the sender’s window

Congestion Policy in TCP

1. Slow Start Phase: starts slowly; the increment is exponential up to the threshold.

2. Congestion Avoidance Phase: after reaching the threshold, the increment is by 1 per round.

3. Congestion Detection Phase: on detecting congestion, the sender goes back to the Slow Start phase or the Congestion Avoidance phase.

Slow Start Phase

Exponential increment: In this phase after every RTT the congestion window size increments
exponentially.

Example:- If the initial congestion window size is 1 segment, and the first segment is successfully
acknowledged, the congestion window size becomes 2 segments. If the next transmission is also
acknowledged, the congestion window size doubles to 4 segments. This exponential growth
continues as long as all segments are successfully acknowledged.

Initially cwnd = 1

After 1 RTT, cwnd = 2^(1) = 2

2 RTT, cwnd = 2^(2) = 4

3 RTT, cwnd = 2^(3) = 8

Congestion Avoidance Phase

Additive increment: This phase starts after the threshold value, also denoted ssthresh, is reached. The size of cwnd (the congestion window) increases additively: after each RTT, cwnd = cwnd + 1.

Example:- if the congestion window size is 20 segments and all 20 segments are successfully
acknowledged within an RTT, the congestion window size would be increased to 21 segments in the
next RTT. If all 21 segments are again successfully acknowledged, the congestion window size
would be increased to 22 segments, and so on.

Initially cwnd = i

After 1 RTT, cwnd = i+1

2 RTT, cwnd = i+2

3 RTT, cwnd = i+3

Congestion Detection Phase

Multiplicative decrement: If congestion occurs, the congestion window size is decreased. The only
way a sender can guess that congestion has happened is the need to retransmit a segment.
Retransmission is needed to recover a missing packet that is assumed to have been dropped by a
router due to congestion. Retransmission can occur in one of two cases: when the RTO timer times
out or when three duplicate ACKs are received.

Case 1: Retransmission due to Timeout – In this case, the congestion possibility is high.

(a) ssthresh is reduced to half of the current window size.

(b) set cwnd = 1

(c) start with the slow start phase again.

Case 2: Retransmission due to 3 Acknowledgement Duplicates – The congestion possibility is less.

(a) ssthresh value reduces to half of the current window size.

(b) set cwnd= ssthresh

(c) start with congestion avoidance phase
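The three phases and the two congestion-detection cases above can be sketched as simple transition functions. This counts the window in segments for clarity; real TCP counts bytes and adds many refinements:

```python
def on_ack(cwnd, ssthresh):
    """One RTT of successful ACKs: exponential growth below the threshold
    (slow start), additive growth at or above it (congestion avoidance)."""
    if cwnd < ssthresh:
        return cwnd * 2   # slow start: cwnd doubles every RTT
    return cwnd + 1       # congestion avoidance: cwnd = cwnd + 1

def on_timeout(cwnd):
    """Case 1, retransmission due to timeout: congestion likely.
    Returns (new_cwnd, new_ssthresh)."""
    return 1, cwnd // 2   # ssthresh halves, cwnd restarts at 1 (slow start)

def on_three_dup_acks(cwnd):
    """Case 2, three duplicate ACKs: congestion less likely.
    Returns (new_cwnd, new_ssthresh)."""
    half = cwnd // 2
    return half, half     # cwnd = ssthresh = half; congestion avoidance
```

For example, with ssthresh = 8, cwnd grows 1, 2, 4, 8 (slow start), then 9, 10, ... (congestion avoidance).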

Example

Assume a TCP connection exhibiting slow start behavior. At the 5th transmission round, with a threshold (ssthresh) value of 32, it enters the congestion avoidance phase and continues until the 10th transmission round. At the 10th round, 3 duplicate ACKs are received, and the sender enters additive increase mode again. A timeout occurs at the 16th transmission round. Plot the transmission round (time) vs. the congestion window size of the TCP segments.

Fig 5.11 Plot the transmission round (time) vs congestion window size of TCP segments

TCP header format

TCP stands for Transmission Control Protocol. It is a transport layer protocol that facilitates the transmission of packets from source to destination. It is a connection-oriented protocol, which means it establishes the connection prior to the communication that occurs between the computing devices in a network. This protocol is used together with the IP protocol, so the pair is referred to as TCP/IP.

The main functionality of TCP is to take data from the application layer, divide it into several packets, number these packets, and finally transmit them to the destination. On the other side, TCP reassembles the packets and passes them to the application layer. Since TCP is a connection-oriented protocol, the connection remains established until the communication between the sender and the receiver is complete.

Features of TCP protocol

The following are the features of a TCP protocol:

o Transport Layer Protocol

TCP is a transport layer protocol as it is used in transmitting the data from the sender to the
receiver.

o Reliable

TCP is a reliable protocol as it provides flow and error control mechanisms. It also supports an acknowledgment mechanism, which checks the safe and sound arrival of the data: the receiver sends either a positive or a negative acknowledgment to the sender, so the sender knows whether a data packet has been received or needs to be resent.

o Order of the data is maintained

This protocol ensures that the data reaches the intended receiver in the same order in which it
is sent. It orders and numbers each segment so that the TCP layer on the destination side can
reassemble them based on their ordering.

o Connection-oriented

It is a connection-oriented service that means the data exchange occurs only after the
connection establishment. When the data transfer is completed, then the connection will get
terminated.

o Full duplex

TCP is full duplex, which means that data can be transferred in both directions at the same time.

o Stream-oriented

TCP is a stream-oriented protocol as it allows the sender to send the data in the form of a
stream of bytes and also allows the receiver to accept the data in the form of a stream of
bytes. TCP creates an environment in which both the sender and receiver are connected by an
imaginary tube known as a virtual circuit. This virtual circuit carries the stream of bytes
across the internet.

Need for Transport Control Protocol

In the layered architecture of a network model, the whole task is divided into smaller tasks, each assigned to a particular layer. In the TCP/IP model, the five layers are the application layer, transport layer, network layer, data link layer, and physical layer. The transport layer has a critical role in providing end-to-end communication directly to application processes. It provides roughly 65,000 ports so that multiple applications can be reached at the same time. It takes data from the upper layer, divides it into smaller packets, and then transmits them to the network layer.

Fig 5.12 purpose of transport layer

Working of TCP

In TCP, the connection is established by using three-way handshaking. The client sends the segment
with its sequence number. The server, in return, sends its segment with its own sequence number as
well as the acknowledgement sequence, which is one more than the client sequence number. When
the client receives the acknowledgment of its segment, then it sends the acknowledgment to the
server. In this way, the connection is established between the client and the server.

Fig 5.13 Working of TCP protocol
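The sequence and acknowledgment numbers exchanged in this handshake can be sketched as follows (the initial sequence numbers 100 and 300 are arbitrary illustrative choices; real TCP picks them from a clock-based scheme):

```python
def three_way_handshake(client_isn=100, server_isn=300):
    """Return the three handshake segments as (flags, seq, ack) tuples."""
    syn = ("SYN", client_isn, None)                    # client -> server
    syn_ack = ("SYN+ACK", server_isn, client_isn + 1)  # server -> client: own seq, ack = client seq + 1
    ack = ("ACK", client_isn + 1, server_isn + 1)      # client -> server: acknowledge server seq + 1
    return [syn, syn_ack, ack]
```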

Advantages of TCP

o It provides a connection-oriented reliable service, which means that it guarantees the delivery
of data packets. If the data packet is lost across the network, then the TCP will resend the lost
packets.

o It provides a flow control mechanism using a sliding window protocol.

o It provides error detection by using a checksum and error control by using the Go-Back-N ARQ protocol.

o It eliminates the congestion by using a network congestion avoidance algorithm that includes
various schemes such as additive increase/multiplicative decrease (AIMD), slow start, and
congestion window.
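The checksum mentioned in the list above is the standard Internet checksum: the one's-complement sum of the data taken as 16-bit words. A minimal sketch, not tied to any particular TCP implementation:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum over 16-bit words (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                               # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]         # add next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)      # fold any carry back in
    return ~total & 0xFFFF                            # one's complement

print(hex(internet_checksum(b"he")))     # 0x979a
print(internet_checksum(b"he\x97\x9a"))  # 0 — data plus its checksum verifies
```

The receiver repeats the same sum over data plus checksum; a result of zero means no detected error.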

Disadvantage of TCP

TCP adds a significant amount of overhead, as each segment carries its own TCP header; in addition,
fragmentation by a router increases the overhead further.

TCP Header Format

Fig 5.14 TCP Header Format

o Source port: It defines the port of the application, which is sending the data. So, this field
contains the source port address, which is 16 bits.

o Destination port: It defines the port of the application on the receiving side. So, this field
contains the destination port address, which is 16 bits.

o Sequence number: This field contains the sequence number of data bytes in a particular
session.

o Acknowledgment number: When the ACK flag is set, this field contains the sequence
number of the next data byte the receiver expects, and it works as an acknowledgment for the
data received so far. For example, if the receiver receives the segment with sequence number
'x', then it responds with 'x + 1' as the acknowledgment number.

o HLEN: It specifies the length of the header in units of 4-byte words. The size of the header
lies between 20 and 60 bytes; therefore, the value of this field lies between 5 and 15.

o Reserved: It is a 6-bit field reserved for future use; by default, all bits are set to zero.

o Flags

There are six control bits or flags:

1. URG: It represents an urgent pointer. If it is set, then the data is processed urgently.

2. ACK: If the ACK bit is set, the acknowledgment number field is valid; if it is 0, the
segment does not carry an acknowledgment.

3. PSH: If this field is set, then it requests the receiving device to push the data to the
receiving application without buffering it.

5. RST: If it is set, it requests that the connection be reset (aborted).

5. SYN: It is used to establish a connection between the hosts.

6. FIN: It is used to release a connection, and no further data exchange will happen.

o Window size
It is a 16-bit field. It contains the size of data that the receiver can accept. This field is used
for the flow control between the sender and receiver and also determines the amount of buffer
allocated by the receiver for a segment. The value of this field is determined by the receiver.

o Checksum
It is a 16-bit field used for error detection. The checksum is optional in UDP, but in TCP it
is mandatory.

o Urgent pointer
It is a pointer that points to the urgent data byte if the URG flag is set to 1. It defines a value
that will be added to the sequence number to get the sequence number of the last urgent byte.

o Options
It provides additional options. The options field can occupy up to 40 bytes and must end on
a 32-bit boundary; if the options data does not fill a whole 32-bit word, padding is added to
obtain the remaining bits.
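The fixed 20-byte header described above can be unpacked with Python's struct module. The sample segment below (ports 1234 to 80, a SYN with sequence number 100) is a hypothetical value built just for illustration:

```python
import struct

def parse_tcp_header(segment: bytes):
    """Unpack the 20-byte fixed TCP header (network byte order)."""
    (src, dst, seq, ack, off_res, flags,
     window, checksum, urg) = struct.unpack("!HHIIBBHHH", segment[:20])
    return {
        "src_port": src, "dst_port": dst,
        "seq": seq, "ack": ack,
        "hlen_words": off_res >> 4,   # header length in 4-byte words
        "flags": flags & 0x3F,        # URG/ACK/PSH/RST/SYN/FIN bits
        "window": window, "checksum": checksum, "urgent_ptr": urg,
    }

# Hypothetical SYN segment: ports 1234 -> 80, seq 100, header length 5 words
hdr = struct.pack("!HHIIBBHHH", 1234, 80, 100, 0, 5 << 4, 0x02, 65535, 0, 0)
print(parse_tcp_header(hdr))
```

A header length of 5 words corresponds to the minimum 20-byte header with no options.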

TCP Timer Management

TCP (Transmission Control Protocol) is a crucial protocol in network communication, responsible


for ensuring reliable data transfer between devices over the Internet. However, due to the
unpredictable nature of networks and the fact that it relies on underlying IP (Internet Protocol) for
data transmission, TCP cannot guarantee that all packets will be delivered successfully. This is where
TCP timers come into play.

Definition of TCP Timers

TCP timers are mechanisms used by the protocol to manage and control various aspects of the data
transmission process. Essentially, these timers are implemented by a device's operating system and
are used to track different stages of a TCP connection. They ensure that packets are promptly
delivered between devices and help avoid issues such as packet loss or congestion.

Types of TCP Timers

TCP timers are an essential component of the Transmission Control Protocol. They are used to
manage various aspects of network communication, such as retransmission, congestion control, and
detecting inactive connections.

There are three main types of TCP timers: retransmission timer, persistence timer, and keepalive
timer. Each type serves a unique purpose in ensuring reliable data transfer.

Retransmission Timer

The retransmission timer is a critical component in providing reliable data transfer over the network.
Its primary function is to ensure that packets reach their destination by resending packets that may
have been lost or corrupted during transmission.

When a packet is sent over the network, an acknowledgement (ACK) is expected from the receiver.
If no ACK is received within a specified time frame set by the retransmission timer, the sender
assumes that the packet has been lost and will resend it.

Persistence Timer

The persistence timer deals with the zero-window deadlock. When the receiver advertises a
window size of zero, the sender must stop transmitting. If the receiver's later segment that
reopens the window is lost, both sides could wait forever: the sender waiting for a nonzero
window advertisement, and the receiver waiting for more data.

The persistence timer prevents this deadlock. When the sender receives a zero-window
advertisement, it starts the persistence timer; when the timer expires, the sender transmits a
small probe segment. The probe forces the receiver to reannounce its current window size, so
transmission resumes as soon as buffer space becomes available at the receiver.

Keepalive Timer

The Keepalive Timer (KT) is used to detect inactive connections. When a connection is idle for an
extended period, it can be challenging to know whether the session has been terminated or not. The
KT solves this by sending probes at regular intervals to check the status of the connection.

If there is no response from these probes before a specified time period elapses (set by KT), then the
connection is assumed to be dead and will be dropped. The KT ensures that resources are not wasted
on inactive sessions, freeing up network resources for active ones.
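On many systems the keepalive timer can be enabled per socket. The sketch below uses Python's socket module; the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT constants are platform-specific (Linux), so they are set only where available, and the interval values shown are arbitrary examples, not recommendations:

```python
import socket

# Enable TCP keepalive on a socket and, where the platform exposes
# them, tune the probe timing parameters.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

if hasattr(socket, "TCP_KEEPIDLE"):
    # idle seconds before the first probe is sent
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
if hasattr(socket, "TCP_KEEPINTVL"):
    # seconds between successive probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
if hasattr(socket, "TCP_KEEPCNT"):
    # unanswered probes before the connection is dropped
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))
s.close()
```

With these settings a dead peer would be detected after roughly 60 + 5 × 10 seconds of silence.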

TCP Timeout Mechanism

How TCP determines when to timeout a connection or packet transmission.

Timeouts are an essential aspect of TCP communication, as they allow the protocol to ensure reliable
data transfer. When a packet is sent, the TCP implementation starts a timer for that packet. If an
acknowledgment for the sent packet is not received before the timer expires, then TCP assumes that
the packet has been lost and initiates retransmission.

The duration of this timer determines how long it takes for TCP to recognize a lost transmission.
The retransmission timeout is not a fixed value: implementations adapt it from round-trip-time
measurements, typically starting from an initial value of about one second (RFC 6298 recommends
an initial RTO of 1 second) and backing off exponentially on repeated losses.

The impact of timeout on network performance.

TCP timeouts have a significant impact on network performance because they affect how quickly
applications can send and receive data. Long timeouts can cause delays in data transfer, which can
lead to poor application performance. On the other hand, short timeouts can result in unnecessary
retransmissions and increased network traffic.

Strategies for optimizing timeout values.

There are several strategies that can be used to optimize timeout values. One approach is to use
adaptive retransmission timers based on network conditions such as round-trip time (RTT) estimates
or the congestion window size (CWND).
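The adaptive approach mentioned above is what standard TCP does: the retransmission timeout is computed from a smoothed RTT and its variance. A sketch of the RFC 6298 estimator follows; the sample RTT values are invented for illustration:

```python
class RtoEstimator:
    """Adaptive retransmission timeout per RFC 6298 (Jacobson/Karels)."""
    ALPHA, BETA = 1 / 8, 1 / 4   # standard smoothing gains

    def __init__(self):
        self.srtt = None         # smoothed round-trip time
        self.rttvar = None       # round-trip time variance
        self.rto = 1.0           # initial RTO of 1 second

    def sample(self, rtt):
        if self.srtt is None:    # first measurement
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + \
                          self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        # RFC 6298 clamps the RTO to a minimum of 1 second
        self.rto = max(1.0, self.srtt + 4 * self.rttvar)
        return self.rto

est = RtoEstimator()
for rtt in (0.10, 0.12, 0.30, 0.11):
    print(round(est.sample(rtt), 3))
```

On a low-latency path the 1-second floor dominates; on slower or more variable paths the RTO grows with the measured variance.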

Another strategy is to use a hybrid approach that combines fixed and adaptive timeouts based on
different stages of communication between hosts. For example, initial connections may use fixed
timeouts while established connections may use adaptive ones.

TCP Congestion Control Mechanism

The Definition and Function of Congestion Control Mechanism

TCP congestion control mechanism is an algorithmic approach to manage the flow of data in a TCP
network. The congestion control mechanism regulates the rate at which data is transmitted, while
maintaining a balance between network utilization and reliability.

The mechanism works by adjusting the sending rate based on various feedback mechanisms during
data transmission. It ensures that the available bandwidth is shared fairly among all the users without
causing network congestion.

How TCP Manages Network Congestion Using Timers

The TCP congestion control mechanism uses timers to detect and respond to network congestion.
When a router experiences congestion, it will drop packets or delay their delivery, leading to
retransmissions from the sender.
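The sender's reaction can be sketched as the AIMD rule named earlier: grow the congestion window by one segment per round trip, halve it on a loss. A toy simulation, with the window counted in segments and the event sequence invented for illustration:

```python
def aimd(events, mss=1, initial=1):
    """Toy additive-increase/multiplicative-decrease congestion window.

    events: 'ack' grows cwnd by one MSS per round trip; anything else
    is treated as a loss and halves the window."""
    cwnd = initial
    trace = [cwnd]
    for e in events:
        if e == "ack":
            cwnd += mss                      # additive increase
        else:
            cwnd = max(mss, cwnd // 2)       # multiplicative decrease
        trace.append(cwnd)
    return trace

print(aimd(["ack", "ack", "ack", "loss", "ack"]))
# -> [1, 2, 3, 4, 2, 3]
```

The resulting sawtooth pattern (linear growth, sharp halving) is the characteristic shape of AIMD congestion control.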

Strategies for Optimizing Congestion Control Mechanism

To optimize TCP's congestion control mechanism, several strategies can be employed such as
implementing algorithms that are more sensitive to changes in network conditions or tweaking
existing algorithms parameters based on specific use cases. Another approach is using hybrid
mechanisms that combine different algorithms for better performance under varying traffic loads.
Furthermore, deploying Quality of Service (QoS) techniques can help prioritize different types of
traffic during periods of high demand or network congestion.

TCP Timers Configuration and Management

Configuring TCP timers is a critical aspect of optimizing network performance. The default timer
values provided by the operating system may not always be ideal for specific network conditions,
and it may be necessary to adjust these values depending on the situation. Most operating systems
provide different ways to configure TCP timers, including modifying the kernel parameters or using
third-party tools.

Configuring TCP timers on different operating systems

Configuring TCP timers on Linux can be done by modifying the kernel parameters using the sysctl
command. On Windows, it can be done through registry settings or using the netsh command-line
tool. Similarly, on macOS, it can be done through system preferences or using third-party tools such
as MacTCP Watcher.
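On Linux, the kernel's TCP timer parameters appear under /proc/sys, which is what the sysctl command reads and writes. A sketch that reads one of them, guarded so it degrades gracefully on systems without that interface (the parameter name is a real Linux sysctl, but values vary by system):

```python
import os

def read_sysctl(name, default=None):
    """Read an integer kernel parameter from /proc/sys, if present."""
    path = "/proc/sys/" + name.replace(".", "/")
    if os.path.exists(path):
        with open(path) as f:
            return int(f.read().split()[0])
    return default

# Seconds a connection must be idle before keepalive probing starts
keepalive = read_sysctl("net.ipv4.tcp_keepalive_time")
print("tcp_keepalive_time:", keepalive)
```

Writing to these files (or using `sysctl -w`) changes the value system-wide and normally requires root privileges.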

The process of configuring TCP timers is not straightforward and requires technical knowledge in
networking and operating systems.

Managing TCP timers to optimize network performance

The primary goal of managing TCP timers is to ensure reliable data transfer while minimizing delay
and congestion on the network. To achieve this goal, it’s essential to monitor timer values regularly
and adjust them accordingly based on network conditions.

Some best practices for managing TCP timers include:

 Regularly monitoring timer values using built-in networking tools or third-party software

 Analyzing network traffic patterns to identify potential issues such as packet loss or
congestion

 Tuning retransmission and persistence timers based on latency, bandwidth, packet loss rates,
etc.

 Maintaining consistency in timer configurations across all devices in a particular network

 Making gradual adjustments rather than sudden changes to avoid negative impacts on
network performance

Conclusion

In this article, we have discussed the TCP timers and their importance in network communication.
We have covered the different types of TCP timers, their functions, and how they work.

Additionally, we have explored how TCP determines when to timeout a connection or packet
transmission and how TCP manages network congestion using timers. Furthermore, we have
discussed best practices for configuring and managing TCP timers to optimize network performance.

5.5 WWW

The World Wide Web (WWW) is a repository of information linked together from points all over
the world. The WWW has a unique combination of flexibility, portability, and user-friendly features
that distinguish it from other services provided by the Internet.

Each site holds one or more documents, referred to as Web pages. Each Web page can contain a link
to other pages in the same site or at other sites. The pages can be retrieved and viewed by using
browsers.

Client (Browser)

A variety of vendors offer commercial browsers that interpret and display a Web document, and all
use nearly the same architecture. Each browser usually consists of three parts: a controller, client
protocol, and interpreters. The controller receives input from the keyboard or the mouse and uses the
client programs to access the document. After the document has been accessed, the controller uses
one of the interpreters to display the document on the screen. The client protocol can be one of the
protocols described previously such as FTP or HTTP (described later in the chapter). The interpreter
can be HTML, Java, or JavaScript, depending on the type of document.

Server

The Web page is stored at the server. Each time a client request arrives, the corresponding document
is sent to the client. To improve efficiency, servers normally store requested files in a cache in
memory; memory is faster to access than disk. A server can also become more efficient through
multithreading or multiprocessing. In this case, a server can answer more than one request at a time.

URL

A client that wants to access a Web page needs the address. To facilitate the access of documents
distributed throughout the world, HTTP uses locators. The uniform resource locator

(URL) is a standard for specifying any kind of information on the Internet. The URL defines four
things: protocol, host computer, port, and path.

The protocol is the client/server program used to retrieve the document. Many different protocols can
retrieve a document; among them are FTP or HTTP. The most common today is HTTP.
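The four parts of a URL named above (protocol, host, port, and path) can be seen by parsing an example address with Python's standard urllib; the URL itself is a made-up example:

```python
from urllib.parse import urlparse

# Split a URL into the four components named above.
u = urlparse("http://www.example.com:8080/docs/index.html")
print(u.scheme, u.hostname, u.port, u.path)
# -> http www.example.com 8080 /docs/index.html
```

When the port is omitted from the URL, the client falls back to the protocol's well-known port (80 for HTTP).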

HTTP

The Hypertext Transfer Protocol (HTTP) is a protocol used mainly to access data on the World Wide
Web. HTTP functions as a combination of FTP and SMTP. It is similar to FTP because it transfers
files and uses the services of TCP. However, it is much simpler than FTP because it uses only one
TCP connection. There is no separate control connection; only data are transferred between the client
and the server.

HTTP is like SMTP because the data transferred between the client and the server look like SMTP
messages. In addition, the format of the messages is controlled by MIME-like headers.

Unlike SMTP, the HTTP messages are not destined to be read by humans; they are read and
interpreted by the HTTP server and HTTP client (browser). SMTP messages are stored and
forwarded, but HTTP messages are delivered immediately. The commands from the client to the
server are embedded in a request message. The contents of the requested file or other information are
embedded in a response message. HTTP uses the services of TCP on well-known port 80.
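Because HTTP commands are plain text embedded in a request message, a minimal request can be assembled as a string; the host and path below are placeholder values:

```python
def build_get_request(host, path="/"):
    """Assemble the text of a minimal HTTP/1.1 request message."""
    return (
        f"GET {path} HTTP/1.1\r\n"   # request line: method, path, version
        f"Host: {host}\r\n"          # mandatory header in HTTP/1.1
        "Connection: close\r\n"      # ask the server to close after replying
        "\r\n"                       # blank line ends the header section
    )

msg = build_get_request("www.example.com", "/index.html")
print(msg)
```

Sent over a TCP connection to port 80, this text is exactly what a browser's request message looks like on the wire.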

5.6 File Transfer Protocol (FTP)

File Transfer Protocol (FTP) is the standard mechanism provided by TCP/IP for copying a file from
one host to another. Although transferring files from one system to another seems simple and
straightforward, some problems must be dealt with first. For example, two systems may use different
file name conventions. Two systems may have different ways to represent text and data. Two
systems may have different directory structures. All these problems have been solved by FTP in a
very simple and elegant approach.

FTP differs from other client/server applications in that it establishes two connections between the
hosts. One connection is used for data transfer, the other for control information (commands and
responses). Separation of commands and data transfer makes FTP more efficient. The control
connection uses very simple rules of communication.

We need to transfer only a line of command or a line of response at a time. The data connection, on
the other hand, needs more complex rules due to the variety of data types transferred. However, the
difference in complexity is at the FTP level, not TCP; for TCP, both connections are treated the
same. FTP uses two well-known TCP ports: port 21 is used for the control connection, and port 20 is
used for the data connection.
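The separate data connection shows up concretely in FTP's PORT command, whose argument encodes the client's IP address and data port as six comma-separated bytes; the port is split as p1 × 256 + p2. A sketch with made-up address values:

```python
def encode_port_command(ip, port):
    """Build the FTP PORT command: four IP octets plus the data port
    split into its high and low bytes."""
    octets = ip.split(".")
    return "PORT " + ",".join(octets + [str(port // 256), str(port % 256)])

def decode_port_argument(arg):
    """Recover (ip, port) from a PORT command string."""
    parts = arg.split()[-1].split(",")
    ip = ".".join(parts[:4])
    port = int(parts[4]) * 256 + int(parts[5])
    return ip, port

cmd = encode_port_command("192.168.1.2", 1027)
print(cmd)                      # PORT 192,168,1,2,4,3
print(decode_port_argument(cmd))
```

Here 1027 = 4 × 256 + 3, so the last two numbers of the command are 4 and 3.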

Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail (e-mail)
transmission across Internet Protocol (IP) networks.

SMTP is a connection-oriented, text-based protocol in which a mail sender communicates with a


mail receiver by issuing command strings and supplying necessary data over a reliable ordered data
stream channel, typically a Transmission Control Protocol (TCP) connection. An SMTP session
consists of commands originated by an SMTP client (the initiating agent, sender, or transmitter) and
corresponding responses from the SMTP server (the listening agent, or receiver) so that the session is
opened, and session parameters are exchanged. A session may include zero or more SMTP
transactions. An SMTP transaction consists of three command/reply sequences. They are:

1. MAIL command, to establish the return address, a.k.a. Return-Path, 5321.From, mfrom,
or envelope sender. This is the address for bounce messages.

2. RCPT command, to establish a recipient of this message. This command can be issued
multiple times, one for each recipient. These addresses are also part of the envelope.

3. DATA to send the message text. This is the content of the message, as opposed to its
envelope. It consists of a message header and a message body separated by an empty line.
DATA is actually a group of commands, and the server replies twice: once to the DATA
command proper, to acknowledge that it is ready to receive the text, and a second time after
the end-of-data sequence, to either accept or reject the entire message.
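The three command/reply sequences above can be sketched as a simulated transcript; the addresses and reply texts here are illustrative, not taken from a real server:

```python
def smtp_transaction(sender, recipient, body):
    """Return the (command, reply) pairs of one simulated SMTP transaction."""
    dialogue = [
        (f"MAIL FROM:<{sender}>", "250 OK"),                 # envelope sender
        (f"RCPT TO:<{recipient}>", "250 OK"),                # envelope recipient
        ("DATA", "354 Start mail input; end with <CRLF>.<CRLF>"),
        (body + "\r\n.", "250 OK: queued"),                  # text + end-of-data
    ]
    return dialogue

for cmd, reply in smtp_transaction("alice@a.org", "bob@b.org", "Hello Bob"):
    print("C:", cmd)
    print("S:", reply)
```

Note the two server replies around the message text: 354 accepting the DATA command, then 250 accepting the completed message.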

IMAP

The Internet Message Access Protocol (IMAP) is a cornerstone of modern email communication,
providing seamless access to email messages. As an integral part of the e-mail infrastructure, IMAP
changes the way users interact with their digital correspondence. Unlike its predecessor, the Post
Office Protocol (POP), IMAP offers a dynamic and synchronized approach to handling emails across
multiple devices and platforms.

What is IMAP?

Internet Message Access Protocol (IMAP) is an application layer protocol that operates as a
contract for receiving emails from the mail server. It was designed by Mark Crispin in 1986 as a
remote-access mailbox protocol; the current version of IMAP is IMAP4. It is the most commonly
used protocol for retrieving emails. This term is also known as Internet mail access protocol,
Interactive mail access protocol, and Interim mail access protocol. Unlike POP, which downloads
messages to a single device and typically removes them from the server (so that previously
downloaded messages cannot be read from another machine), IMAP keeps messages on the mail
server and synchronizes their state, so the same mailbox can be viewed from any device.

Features of IMAP

 It is capable of managing multiple mailboxes and organizing them into various categories.

 Provides adding of message flags to keep track of which messages are being seen.

 It is capable of deciding whether to retrieve email from a mail server before downloading.

 It makes it easy to download media when multiple files are attached.

Working of IMAP

IMAP follows a client-server architecture and is the most commonly used email retrieval protocol. It
is a combination of client and server processes running on different computers that are connected
through a network. This protocol resides over the TCP/IP protocol for communication. Once the
communication is set up, the server listens on port 143 by default, which is non-encrypted. For
secure encrypted communication, port 993 is used.

 The sending email client (for example, Gmail) establishes a connection with Gmail’s SMTP
server.

 By approving the sender’s and recipient’s email addresses, the SMTP server verifies
(authenticates) that the email can be sent.

 The email is relayed by Gmail’s SMTP server to the recipient’s (here, Outlook’s) SMTP
server.

 The recipient’s email address is authenticated by the Outlook SMTP server.

 The Outlook email client then retrieves the email from the Outlook mail server using IMAP
or POP3.

Architecture of IMAP

The Internet Message Access Protocol (IMAP) follows a client-server model that allows users to
access and view email messages stored on remote servers. Here is a summary of the components:

 IMAP clients: An IMAP client is an email application or software that users use to
communicate with their email accounts. Examples include Microsoft Outlook, Mozilla
Thunderbird, Apple Mail, and mobile email applications. The client communicates with the
IMAP server to receive, manage, and send email messages.

 IMAP Server: The IMAP server manages email messages and manages user mailboxes. It
responds to requests from IMAP clients, and provides access to email folders and messages.
The server stores emails in a structured format, usually organized in user-defined folders or
mailboxes. Common IMAP server software includes Dovecot, Courier IMAP, Cyrus IMAP,
and Microsoft Exchange Server.

 Network Protocols: IMAP works over TCP/IP (Transmission Control Protocol/Internet


Protocol) networks, and allows an IMAP client to connect to an IMAP server over the
Internet or local area networks. IMAP typically uses TCP port 143 for unencrypted
connections and TCP port 993 for encrypted connections using SSL/TLS (IMAPS).

Steps involve in IMAP Operation

 An email client, like Microsoft Outlook, connects to the server via IMAP when a user
logs in.

 Certain ports are used for connections.

 The email client shows the headers of every email.

 IMAP does not automatically download attachments; messages are downloaded to the client
only when the user taps on them.

 Compared to alternative email retrieval protocols like Post Office Protocol 3 (POP3), users
can check their mail more quickly with IMAP.

 Until they are specifically deleted by the user, emails will stay on the server.

 The IMAP server listens on port 143, while IMAP over Secure Sockets Layer
(SSL)/Transport Layer Security (TLS) is assigned port number 993.
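The two well-known ports mentioned in the steps above are recorded as constants in Python's standard imaplib module:

```python
import imaplib

# The standard library records the well-known IMAP ports as constants:
# 143 for plaintext IMAP, 993 for IMAP over SSL/TLS.
print(imaplib.IMAP4_PORT)      # 143
print(imaplib.IMAP4_SSL_PORT)  # 993

# A real session would be opened (against some mail server) with e.g.:
#   conn = imaplib.IMAP4_SSL("imap.example.com")  # hypothetical hostname
#   conn.login(user, password)
```

The connection lines are left commented out because they require network access to an actual IMAP server.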

Advantages

 It offers synchronization across all the maintained sessions by the user.

 It provides better security than the POP3 protocol, as the email exists only on the IMAP server.

 Users have remote access to all the contents.

 It offers easy migration between the devices as it is synchronized by a centralized server.

 There is no need to physically allocate any storage to save contents.

Disadvantages

 IMAP is complex to maintain.

 Emails of the user are only available when there is an internet connection.

 It is slower to load messages.

 Some email providers don’t support IMAP, which makes mailboxes difficult to manage.

5.7 DNS

Domain Name System (DNS) is a hostname-to-IP-address translation service. DNS is a distributed
database implemented in a hierarchy of name servers. It is an application layer protocol for message
exchange between clients and servers and is required for the functioning of the Internet.

What is the Need of DNS?

Every host is identified by an IP address, but remembering numbers is very difficult for people, and
IP addresses are not static; therefore, a mapping is required from the domain name to the IP
address. DNS is used to convert the domain name of a website to its numerical IP address.
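The name-to-address mapping can be exercised with a one-line resolver call from the standard library; "localhost" is used here because it resolves without contacting an external DNS server:

```python
import socket

# Resolve a hostname to an IPv4 address. "localhost" resolves locally,
# so no external DNS query is needed for this example.
ip = socket.gethostbyname("localhost")
print(ip)   # typically 127.0.0.1
```

Resolving a public name such as "www.example.com" works the same way but triggers an actual DNS lookup through the configured name servers.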

Types of Domain

There are various kinds of domain:

1. Generic domains: .com (commercial), .edu (educational), .mil (military), .org (nonprofit
organization), and .net (similar to commercial) are all generic domains.

2. Country domains: .in (India), .us (United States), .uk (United Kingdom).

3. Inverse domain: used when we want to know the domain name corresponding to an IP
address, i.e., IP-to-domain-name mapping.

SNMP

If an organization has 1000 devices, then checking all devices one by one every day to see whether
they are working properly is a hectic task. To ease this, the Simple Network Management Protocol
(SNMP) is used.

Simple Network Management Protocol (SNMP)

SNMP is an application layer protocol that uses UDP port numbers 161 (for agent requests) and 162
(for traps). SNMP is used to monitor the network, detect network faults, and sometimes even to
configure remote devices.

Components of SNMP

There are mainly three components of SNMP:

1. SNMP Manager –
It is a centralized system used to monitor the network. It is also known as a Network
Management Station (NMS). A router that runs the SNMP server program is called an agent,
while a host that runs the SNMP client program is called a manager.

2. SNMP agent –
It is a management software module installed on a managed device. The manager
accesses the values stored in the database, whereas the agent maintains the information in the
database. To ascertain if the router is congested or not, for instance, a manager can examine
the relevant variables that a router stores, such as the quantity of packets received and
transmitted.

3. Management Information Base –


MIB consists of information on resources that are to be managed. This information is
organized hierarchically. It consists of object instances, which are essentially variables. A
MIB, the collection of all the objects under management by the manager, is unique to each
agent. System, interface, address translation, ip, icmp, tcp, udp, and egp are the eight
categories that make up MIB. These groups reside under the mib object.

SNMP messages

 GetRequest : It is simply used to retrieve data from SNMP agents. In response to this, the
SNMP agent responds with the requested value through a response message.

 GetNextRequest : The manager sends the agent the GetNextRequest message to retrieve the
value of the variable that follows the named one. This kind of message is used to retrieve the
values of the entries in a table: the manager cannot access the values directly if it does not
know the entries’ indices. In certain circumstances, the GetNextRequest message is also
used to discover the objects an agent holds.

 SetRequest : It is used by the SNMP manager to set the value of an object instance on the
SNMP agent.

 Response : It is sent by the agent in reply to a request. When sent in response to a Set
message, it contains the newly set value as confirmation that the value has been set.

 Trap : These are messages sent by the agent without being requested by the manager. A trap
is sent when a fault has occurred.

 InformRequest : It was added to SNMPv2c and is used to determine if the manager has
received the trap message or not. It is the same as a trap but adds an acknowledgement that
the trap doesn’t provide.

SNMP security levels

The security level defines the type of security algorithm applied to SNMP packets. Security levels
are used only in SNMPv3. There are 3 security levels, namely:

1. noAuthNoPriv – This security level (no authentication, no privacy) uses a simple username
match for authentication and no encryption for privacy.

2. authNoPriv – This security level (authentication, no privacy) uses HMAC with MD5 or SHA
for authentication, and no encryption is used for privacy.

3. authPriv – This security level (authentication, privacy) uses HMAC with MD5 or SHA for
authentication, and encryption uses the DES-56 algorithm.

Versions of SNMP

There are three versions of SNMP including the below ones:

1. SNMPv1 –
It uses community strings for authentication and uses UDP only. SNMPv1 is the first version
of the protocol. It is described in RFCs 1155 and 1157 and is simple to set up.

2. SNMPv2c –
It uses community strings for authentication. It uses UDP but can be configured to use
TCP. Improved MIB structure elements, transport mappings, and protocol packet types are all
included in this updated version. However, it retains the “community-based” SNMPv1
administrative structure, which is why the version is called SNMPv2c. RFC 1901, RFC 1905,
and RFC 1906 all describe it.

3. SNMPv3 –
It uses hash-based MAC with MD5 or SHA for authentication and DES-56 for privacy.
Therefore, the conclusion is that the higher the version of SNMP, the more secure it will be.
SNMPv3 provides the remote configuration of SNMP entities. This is the most secure version
to date because it also includes authentication and encryption, which may be used alone or in
combination. RFC 1905, RFC 1906, RFC 2571, RFC 2572, RFC 2574, and RFC 2575 are
the RFCs for SNMPv3.

Advantages of SNMP

 It is simple to implement.

 Agents are widely implemented.

 Agent-level overhead is minimal.

 It is robust and extensible.

 The polling approach is good for LAN-based managed objects.

 It offers the best direct manager-agent interface.

 SNMP meets a critical need.

Limitations of SNMP

 It is too simple and does not scale well.

 There is no object-oriented data view.

 It has no standard control definition.

 It has many implementation-specific (private MIB) extensions.

 It has high communication overhead due to polling.

