CN Unit 5
Transport layer
5.1 Design issues
The basic function of the Transport layer is to accept data from the layer above, split it up into
smaller units, pass these data units to the Network layer, and ensure that all the pieces arrive
correctly at the other end.
Furthermore, all this must be done efficiently and in a way that isolates the upper layers from the
inevitable changes in the hardware technology.
The Transport layer also determines what type of service to provide to the Session layer, and,
ultimately, to the users of the network. The most popular type of transport connection is an error-
free point-to-point channel that delivers messages or bytes in the order in which they were sent.
The Transport layer is a true end-to-end layer, all the way from the source to the destination. In other
words, a program on the source machine carries on a conversation with a similar program on the
destination machine, using the message headers and control messages.
1. Service Point Addressing: The Transport layer header includes the service-point address, which is the port address. This layer delivers the message to the correct process on the computer, unlike the Network layer, which delivers each packet only to the correct computer.
5. Error Control: Error Control is performed end to end in this layer to ensure that the
complete message arrives at the receiving transport layer without any error. Error Correction
is done through retransmission.
It accepts data from the Session layer, splits it into segments, and sends them to the Network layer.
In computer networking, UDP stands for User Datagram Protocol. David P. Reed developed UDP in 1980. It is defined in RFC 768 and is part of the TCP/IP protocol suite, so it is a standard protocol on the internet. UDP allows computer applications to send messages, in the form of datagrams, from one machine to another over an Internet Protocol (IP) network. UDP is an alternative communication protocol to the Transmission Control Protocol (TCP). Like TCP, UDP provides a set of rules that governs how data is exchanged over the internet. UDP works by encapsulating the data into a packet and adding its own header information to the packet. This UDP packet is then encapsulated in an IP packet and sent off to its destination. Both TCP and UDP send data over the Internet Protocol network, which is why they are also referred to as TCP/IP and UDP/IP. There are many differences between these two protocols. UDP enables process-to-process communication, whereas TCP provides host-to-host communication. Since UDP sends messages in the form of datagrams, it is considered a best-effort mode of communication. TCP numbers and acknowledges individual segments, so it is a reliable transport medium. Another difference is that TCP is a connection-oriented protocol, whereas UDP is a connectionless protocol, as it does not require any virtual circuit to transfer the data.
UDP also provides port numbers to distinguish different user requests, and a checksum capability to verify whether the complete data has arrived; the IP layer provides neither of these two services.
UDP is the simplest transport layer communication protocol. It contains a minimal set of communication mechanisms. It is considered an unreliable protocol, as it is based on best-effort delivery. UDP provides no acknowledgment mechanism, which means that the receiver does not send an acknowledgment for a received packet, and the sender does not wait for an acknowledgment for a packet it has sent.
o Connectionless
UDP is a connectionless protocol, as it does not create a virtual path to transfer data. Because no virtual path is used, packets may travel along different paths between the sender and the receiver, which can lead to packets being lost or received out of order.
In the case of UDP, there is no guarantee that datagrams sent in some order will be received in the same order, as the datagrams are not numbered.
o Ports
The UDP protocol uses different port numbers so that the data can be sent to the correct destination. The well-known port numbers fall between 0 and 1023; UDP port numbers as a whole range from 0 to 65,535.
o Faster transmission
UDP transmission is faster than TCP because it has no connection setup, acknowledgment, or retransmission overhead.
o Acknowledgment mechanism
UDP does not have any acknowledgment mechanism, i.e., there is no handshaking between the UDP sender and the UDP receiver. In TCP, the receiver first acknowledges that it is ready, and only then does the sender send the data; handshaking occurs between the sender and the receiver. In UDP, there is no such handshaking.
Each UDP segment is handled independently of the others, and segments may take different paths to reach the destination. UDP segments can therefore be lost or delivered out of order, as there is no connection setup between the sender and the receiver.
o Stateless
It is a stateless protocol, meaning that neither end maintains connection state, and the sender does not receive an acknowledgment for the packets it has sent.
As we know, UDP is an unreliable protocol, but we still require it in some cases. UDP is deployed where acknowledging every packet would consume a large amount of bandwidth on top of the actual data. For example, in video streaming, acknowledging thousands of packets is troublesome and wastes a lot of bandwidth; the loss of some packets does not create a problem and can simply be ignored.
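The connectionless, no-handshake behaviour described above can be seen in a short sketch using Python's socket API (the loopback address and port 9999 are arbitrary assumptions):

```python
import socket
import threading
import time

results = []

def receiver():
    # Bind a UDP socket; no connection setup is needed before receiving.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("127.0.0.1", 9999))
        data, addr = sock.recvfrom(2048)   # one datagram per call
        results.append(data)

t = threading.Thread(target=receiver)
t.start()
time.sleep(0.2)   # give the receiver time to bind; unsent datagrams are not queued

# sendto() fires the datagram immediately: no handshake, and no ACK is expected.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.sendto(b"hello over UDP", ("127.0.0.1", 9999))

t.join()
print(results[0])   # b'hello over UDP'
```

Note that if the datagram were lost in transit, neither side would ever know; that is exactly the best-effort service UDP offers.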
Fig 5.2 UDP Header Format
In UDP, the header size is 8 bytes, and the maximum packet size is 65,535 bytes. This full packet size is not achievable in practice, as the data must be encapsulated in an IP datagram, whose header can be 20 bytes; therefore, the maximum UDP packet would be 65,535 minus 20 bytes. The size of the data that a UDP packet can carry is 65,535 minus 28 bytes: 8 bytes for the UDP header and 20 bytes for the IP header.
o Source port number: A 16-bit field that identifies which port is going to send the packet.
o Destination port number: A 16-bit field that identifies which port is going to accept the information; it identifies the application-level service on the destination machine.
o Length: A 16-bit field that specifies the entire length of the UDP packet, including the header. The minimum value is 8 bytes, the size of the header alone.
o Checksum: A 16-bit, optional field. The checksum checks whether the information is accurate, as the data can be corrupted during transmission. Being optional, it depends on the application whether it wants to compute the checksum or not; if it does not, all 16 bits are set to zero. In UDP, the checksum covers the entire packet, i.e., the header as well as the data part, whereas in IP, the checksum covers only the header.
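The four 16-bit fields above can be packed and parsed with Python's struct module; the port numbers and payload below are arbitrary:

```python
import struct

# Build an 8-byte UDP header: source port, destination port, length, checksum,
# each a 16-bit big-endian (network byte order) field.
payload = b"hi"
src_port, dst_port = 5000, 53
length = 8 + len(payload)          # header (8 bytes) + data
checksum = 0                       # 0 means "checksum not computed" (optional in IPv4)
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
packet = header + payload

# Parsing reverses the operation.
s, d, l, c = struct.unpack("!HHHH", packet[:8])
print(s, d, l, c)                  # 5000 53 10 0
assert l == len(packet)            # Length covers the header as well as the data
```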
Concept of Queuing in UDP protocol
In the UDP protocol, port numbers are used to distinguish the different processes on a server and a client. We know that UDP provides process-to-process communication: the client creates processes that request services, while the server creates processes that provide services. Queues are associated with these processes, two queues for each process: an incoming queue that receives messages and an outgoing queue that sends messages. A queue functions only while its process is running; if the process terminates, its queues are destroyed.
UDP handles the sending and receiving of the UDP packets with the help of the following
components:
o Input queue: UDP uses a set of queues for each process.
o Input module: This module takes the user datagram from IP and then looks up the information in the control block table for the same port. If it finds an entry in the control block table with the same port as the user datagram, it enqueues the data.
o Control Block Table: The control block table contains the entry of open ports.
o Output module: The output module creates and sends the user datagram.
Several processes want to use the services of UDP. UDP multiplexes and demultiplexes these processes so that multiple processes can run on a single host.
Limitations
o It provides an unreliable, connectionless delivery service. It does not add any services to IP except process-to-process communication.
o The UDP message can be lost, delayed, duplicated, or can be out of order.
o It does not provide a reliable transport delivery service. It does not provide any
acknowledgment or flow control mechanism. However, it does provide error control to some
extent.
UDP Checksum
i. Here the checksum includes three sections: a pseudo header, the UDP header, and the data
coming from the application layer.
ii. The pseudo header is the part of the header of the IP packet in which the user datagram is
to be encapsulated, with some fields filled with 0s (see Figure 1).
iii. If the checksum does not include the pseudo header, a user datagram may arrive safe and
sound. However, if the IP header is corrupted, it may be delivered to the wrong host.
iv. The protocol field is added to ensure that the packet belongs to UDP, and not to TCP.
v. The value of the protocol field for UDP is 17. If this value is changed during transmission,
the checksum calculation at the receiver will detect it and UDP drops the packet. It is not
delivered to the wrong protocol.
Checksum Calculation:
UDP checksum calculation is similar to the TCP checksum computation. It is also a 16-bit field containing the one's complement of the one's complement sum of a pseudo UDP header plus the UDP datagram.
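A sketch of this calculation in Python (the IPv4 addresses and ports are arbitrary; protocol number 17 identifies UDP in the pseudo header):

```python
import socket
import struct

def ones_complement_sum16(data: bytes) -> int:
    # Pad to an even number of bytes, then sum 16-bit words with end-around carry.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return total

def udp_checksum(src_ip: str, dst_ip: str, udp_segment: bytes) -> int:
    # Pseudo header: source IP, destination IP, zero byte, protocol (17), UDP length.
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 17, len(udp_segment)))
    return ~ones_complement_sum16(pseudo + udp_segment) & 0xFFFF

# UDP header with the checksum field zeroed, plus a 2-byte payload.
segment = struct.pack("!HHHH", 5000, 53, 10, 0) + b"hi"
csum = udp_checksum("192.0.2.1", "192.0.2.2", segment)
print(hex(csum))
```

A standard property of this checksum is that recomputing it over a segment that already carries the correct checksum yields zero, which is how the receiver verifies the datagram.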
Multicasting: In multicasting, one or more senders and multiple recipients participate in the data transfer. Multicast traffic lies between the boundaries of unicast and broadcast. Servers direct single copies of data streams, which are then replicated and routed to the hosts that request them. IP multicast requires the support of other protocols, such as the Internet Group Management Protocol (IGMP) and multicast routing, for its working. In classful IP addressing, Class D is reserved for multicast groups.
Unicast vs. Multicast:
1. Unicast has one sender and one receiver; multicast has one or more senders and multiple receivers.
2. Unicast sends data from one device to a single device; multicast sends data from one device to multiple devices.
4. Unicast does not scale well for streaming media; multicast does not scale well across large networks.
6. Web surfing and file transfer are examples of unicast; video streaming and online gaming are examples of multicast.
The connection is established in TCP using the three-way handshake, as discussed earlier. One side, say the server, passively waits for an incoming connection by executing the LISTEN and ACCEPT primitives, either specifying a particular peer or nobody in particular. The other side executes a CONNECT primitive, specifying the IP address and port to which it wants to connect, the maximum TCP segment size it will accept, and, optionally, some user data (for example, a password).
The CONNECT primitive transmits a TCP segment with the SYN bit on and the ACK bit off and waits for a response.
The sequence of TCP segments sent in the typical case is shown in the figure below.
Fig 5.7 TCP Connection Management
When the segment sent by Host-1 reaches its destination, i.e., Host-2, the receiving entity checks to see whether a process has done a LISTEN on the port given in the destination port field. If not, it sends a reply with the RST bit on to refuse the connection. Otherwise, it directs the TCP segment to the listening process, which can accept or reject the connection (for example, if the client does not look legitimate).
Call Collision
If two hosts try to establish a connection simultaneously between the same two sockets, the sequence of events is as demonstrated in the figure. Under such circumstances, only one connection is established, because connections are identified by their endpoints. If the first setup results in a connection identified by (x, y) and the second setup does the same, only one table entry is made, i.e., for (x, y). For the initial sequence number, a clock-based scheme is used, with a clock tick every 4 microseconds. For additional safety, when a host crashes, it may not reboot for the maximum packet lifetime. This is to make sure that no packets from previous connections are still roaming around.
Transport layer protocols are a central piece of layered architectures; they provide logical communication between application processes. These processes use the logical communication to transfer data from the transport layer to the network layer, and this transfer of data should be reliable and secure. The data is transferred in the form of packets, but the problem lies in the reliable transfer of data. The problem of transferring data reliably occurs not only at the transport layer but also at the application layer and the link layer. It occurs whenever a reliable service runs on top of an unreliable one. For example, TCP (Transmission Control Protocol) is a reliable data transfer protocol implemented on top of an unreliable network layer, the end-to-end Internet Protocol (IP).
In this model, we first design the sender and receiver sides of a protocol over a reliable channel. In reliable data transfer, the layer receives data from the layer above, breaks the message into segments, puts a header on each segment, and transfers them. The layer below receives the segments, removes the header from each one, and reassembles the message. With a reliable channel, no transferred data bits are corrupted or lost, and all are delivered in the same sequence in which they were sent; this is a reliable data transfer protocol. This is the service model that TCP offers to the Internet applications that invoke this transfer of data.
Similarly, over an unreliable channel, we design the sending and receiving sides. The sending side of the protocol is invoked from the layer above by a call to rdt_send(), which passes the data to be delivered to the application layer on the receiving side (here rdt_send() is a function for sending data, where rdt stands for reliable data transfer protocol and _send() denotes the sending side). On the receiving side, rdt_rcv() (a function for receiving data, where _rcv() denotes the receiving side) is called when a packet arrives from the receiving side of the unreliable channel. When the rdt protocol wants to deliver data to the application layer, it does so by calling deliver_data() (a function for delivering data to the upper layer). In reliable data transfer protocols, we consider only the case of unidirectional data transfer, that is, transfer of data from the sending side to the receiving side (i.e., only in one direction). The bidirectional (full-duplex) case, with data transfer on both sides, is conceptually more difficult. Although we consider only unidirectional data transfer, it is important to note that the sending and receiving sides of our protocol still need to transmit packets in both directions, as shown in the figure above. In order to exchange packets containing the data to be transferred, both the sending and receiving sides of rdt also need to exchange control packets in both directions (i.e., back and forth); both sides of rdt send packets to the other side by a call to udt_send() (a function for sending data to the other side, where udt stands for unreliable data transfer protocol).
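A minimal sketch of these interfaces, assuming a channel that happens to deliver everything in order (a real unreliable channel could drop or corrupt packets, which is what the fuller rdt protocols must handle):

```python
delivered = []          # what the receiving application ultimately sees
channel = []            # the "wire" between sender and receiver

def udt_send(packet):
    # Unreliable channel: here it happens to deliver everything in order,
    # but nothing in its contract prevents loss or corruption of `packet`.
    channel.append(packet)

def deliver_data(data):
    # Hand the payload up to the application layer on the receiving side.
    delivered.append(data)

def rdt_send(data):
    # Sending side: wrap data in a packet (header + payload) and push it down.
    udt_send({"seq": len(channel), "payload": data})

def rdt_rcv(packet):
    # Receiving side: strip the header and deliver the payload upward.
    deliver_data(packet["payload"])

for chunk in ["seg-1", "seg-2", "seg-3"]:
    rdt_send(chunk)
while channel:
    rdt_rcv(channel.pop(0))
print(delivered)   # ['seg-1', 'seg-2', 'seg-3']
```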
TCP flow control
Before discussing TCP flow control, it’s worth describing the flow control functionality in computer
networks. When two network hosts start communicating, one sends packets, and the other receives
them.
Both may have different hosting hardware, software design, and processing speed. If the receiver is
fast enough to consume messages at a higher rate than those generated by the sender, all works well.
But what happens if the receiver consumes messages more slowly than the sender produces them? The messages keep accumulating in the receiver's queue, and after some time messages start dropping once the receiver's queue is full. To overcome this fast-sender/slow-receiver problem, computer networks use a concept known as flow control.
The above diagram shows a slow receiver and a fast sender. Let's understand how the messages overflow after a certain period.
The sender is sending messages at a rate of 10 messages per second, while the receiver is consuming them at a rate of 5 messages per second.
When the sender sends a message, the network enqueues it in the receiver's queue.
Once the user application reads a message, the message is cleared from the queue, and one buffer slot is returned to the free space.
With the mentioned sender and receiver speeds, the receiver's free queue space keeps shrinking at a pace of 5 messages every second.
Finally, after 40 seconds, no space remains for incoming messages, and messages start dropping.
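The arithmetic above can be checked in a few lines; the queue capacity of 200 messages is an assumption consistent with the 40-second figure in the text:

```python
# Sketch of the fast-sender/slow-receiver arithmetic. The rates come from
# the text; the 200-message queue capacity is an assumption.
send_rate = 10      # messages per second produced by the sender
recv_rate = 5       # messages per second consumed by the receiver
capacity = 200      # assumed receiver queue capacity

backlog = 0
seconds = 0
while backlog < capacity:
    backlog += send_rate - recv_rate   # net growth of the queue per second
    seconds += 1
print(seconds)   # 40 -> after 40 seconds the queue is full and drops begin
```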
TCP is an example of a transport layer protocol as per the OSI reference model. It provides reliable and sequenced delivery of messages. Because of reliable delivery, TCP retransmits a segment if a packet is lost due to a slow receiver. Without flow control, TCP would keep resending repeatedly, and the situation would worsen across the network.
With flow control, the TCP receiver keeps advertising the buffer space available for incoming messages to the sender during the communication. The sender updates this space information and reduces the outgoing message rate accordingly. The advertised space is known as the receiver window size.
For implementing flow control, the sender should know how much free space is available on the
receiver before sending further messages. In an earlier TCP header tutorial, we described various
protocol parameters. One parameter is the window size. Both ends add their own window size in the
header in each TCP segment.
During connection setup, the window size is the maximum capacity available. During packet
transfer, the window size keeps updating. The window size value is zero when a TCP end can not
accept further messages.
When the sender receives a window size of zero, it stops sending any further messages till it gets
again a message with a window size of more than zero.
Until now, all discussion concerned the TCP layer itself. Here we discuss the traffic source for TCP, which is the user of the layer.
Flow control operates end to end, while congestion control concerns the nodes along the path between two endpoints; for example, a router between two communicating nodes may become congested.
The purpose of flow control is to let the sender know that the receiver is slower. The sender application should be informed in the case of a slow receiver; generally, the TCP module informs the sender while it is sending data.
TCP congestion control is a method used by the TCP protocol to manage data flow over a network
and prevent congestion. TCP uses a congestion window and congestion policy that avoids
congestion. Previously, we assumed that only the receiver could dictate the sender’s window size.
We ignored another entity here, the network. If the network cannot deliver the data as fast as it is
created by the sender, it must tell the sender to slow down. In other words, in addition to the receiver,
the network is a second entity that determines the size of the sender's window.
2. Congestion Detection Phase: The sender goes back to the Slow start phase or the
Congestion avoidance phase.
Exponential increment: In this phase, after every RTT the congestion window size increases exponentially.
Example: If the initial congestion window size is 1 segment and the first segment is successfully acknowledged, the congestion window size becomes 2 segments. If the next transmission is also acknowledged, the congestion window size doubles to 4 segments. This exponential growth continues as long as all segments are successfully acknowledged.
Initially cwnd = 1
After 1 RTT, cwnd = 2^1 = 2
After 2 RTT, cwnd = 2^2 = 4
After 3 RTT, cwnd = 2^3 = 8
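The doubling above can be traced in a few lines (Python, for illustration):

```python
# Trace of slow start's exponential growth: cwnd doubles every RTT while
# every segment in the window is acknowledged.
cwnd = 1
trace = [cwnd]
for rtt in range(3):
    cwnd *= 2            # one doubling per round-trip time
    trace.append(cwnd)
print(trace)   # [1, 2, 4, 8] -> after 3 RTTs, cwnd = 2**3 = 8
```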
Additive increment: This phase starts after the congestion window reaches the threshold value, also denoted ssthresh. The size of cwnd (congestion window) increases additively: after each RTT, cwnd = cwnd + 1.
Example: If the congestion window size is 20 segments and all 20 segments are successfully acknowledged within an RTT, the congestion window size is increased to 21 segments in the next RTT. If all 21 segments are again successfully acknowledged, the congestion window size is increased to 22 segments, and so on.
Initially cwnd = i
After 1 RTT, cwnd = i + 1
After 2 RTT, cwnd = i + 2
Multiplicative decrement: If congestion occurs, the congestion window size is decreased. The only way a sender can guess that congestion has happened is by the need to retransmit a segment.
Retransmission is needed to recover a missing packet that is assumed to have been dropped by a
router due to congestion. Retransmission can occur in one of two cases: when the RTO timer times
out or when three duplicate ACKs are received.
Case 1: Retransmission due to Timeout – In this case, the probability of congestion is high.
Example
Assume a TCP connection exhibiting slow start behavior. At the 5th transmission round, with a threshold (ssthresh) value of 32, it enters the congestion avoidance phase and continues until the 10th transmission round. At the 10th transmission round, 3 duplicate ACKs are received by the sender, and the connection enters additive increase mode. A timeout occurs at the 16th transmission round. Plot the transmission round (time) vs. the congestion window size of the TCP segments.
Fig 5.11 Plot the transmission round (time) vs congestion window size of TCP segments
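The example can also be traced numerically. The sketch below follows TCP Reno conventions (after three duplicate ACKs, ssthresh becomes half of cwnd and additive increase restarts from that value; after a timeout, cwnd restarts from 1); the exact values in the figure depend on these assumptions:

```python
def simulate(rounds, ssthresh, dupack_round, timeout_round):
    """Return the cwnd value at the start of each transmission round."""
    cwnd = 1
    trace = []
    for r in range(1, rounds + 1):
        trace.append(cwnd)
        if r == dupack_round:          # three duplicate ACKs: multiplicative decrease
            ssthresh = max(cwnd // 2, 1)
            cwnd = ssthresh            # Reno restarts additive increase here
        elif r == timeout_round:       # timeout: back to slow start from cwnd = 1
            ssthresh = max(cwnd // 2, 1)
            cwnd = 1
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)   # slow start (exponential)
        else:
            cwnd += 1                  # congestion avoidance (additive)
    return trace

print(simulate(rounds=18, ssthresh=32, dupack_round=10, timeout_round=16))
```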
TCP stands for Transmission Control Protocol. It is a transport layer protocol that facilitates the transmission of packets from source to destination. It is a connection-oriented protocol, which means that it establishes the connection prior to the communication that occurs between the computing devices in a network. This protocol is used with the IP protocol, so together they are referred to as TCP/IP.
The main functionality of TCP is to take the data from the application layer, divide it into several segments, number these segments, and finally transmit them to the destination. The TCP entity on the other side reassembles the segments and passes the data to the application layer. As TCP is a connection-oriented protocol, the connection remains established until the communication between the sender and the receiver is completed.
The following are the features of a TCP protocol:
TCP is a transport layer protocol as it is used in transmitting the data from the sender to the
receiver.
o Reliable
TCP is a reliable protocol as it provides flow and error control mechanisms. It also supports an acknowledgment mechanism, which checks for the safe and sound arrival of the data. In the acknowledgment mechanism, the receiver sends either a positive or a negative acknowledgment to the sender so that the sender knows whether a data packet has been received or needs to be resent.
This protocol ensures that the data reaches the intended receiver in the same order in which it
is sent. It orders and numbers each segment so that the TCP layer on the destination side can
reassemble them based on their ordering.
o Connection-oriented
It is a connection-oriented service that means the data exchange occurs only after the
connection establishment. When the data transfer is completed, then the connection will get
terminated.
o Full duplex
It is a full-duplex protocol, meaning that data can be transferred in both directions at the same time.
o Stream-oriented
TCP is a stream-oriented protocol as it allows the sender to send the data in the form of a
stream of bytes and also allows the receiver to accept the data in the form of a stream of
bytes. TCP creates an environment in which both the sender and receiver are connected by an
imaginary tube known as a virtual circuit. This virtual circuit carries the stream of bytes
across the internet.
In the layered architecture of a network model, the whole task is divided into smaller tasks, each assigned to a particular layer that processes it. The five layers of the TCP/IP model are the application layer, transport layer, network layer, data link layer, and physical layer. The transport layer has a critical role in providing end-to-end communication directly to the application processes. It provides roughly 65,000 ports so that multiple applications can be accessed at the same time. It takes data from the upper layer, divides it into smaller packets, and then transmits them to the network layer.
Working of TCP
In TCP, the connection is established using three-way handshaking. The client sends a segment with its sequence number. The server, in return, sends its own segment with its own sequence number as well as an acknowledgment number, which is one more than the client's sequence number. When the client receives this acknowledgment of its segment, it sends an acknowledgment back to the server. In this way, the connection is established between the client and the server.
Fig 5.13 Working of TCP protocol
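The handshake maps directly onto the Berkeley sockets API; a sketch in Python (the loopback address and port 6000 are arbitrary assumptions):

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 6000   # arbitrary loopback endpoint for the demo
results = []

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)              # LISTEN: passively wait for a connection
        conn, addr = srv.accept()  # ACCEPT: completes the three-way handshake
        with conn:
            results.append(conn.recv(1024))
            conn.sendall(b"ack")

t = threading.Thread(target=server)
t.start()
time.sleep(0.2)                    # let the server reach listen() first

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))      # CONNECT: SYN sent, SYN+ACK awaited, ACK returned
    cli.sendall(b"hello")
    reply = cli.recv(1024)
t.join()
print(results[0], reply)   # b'hello' b'ack'
```

Unlike the UDP example earlier, data can only flow here after connect() and accept() have both returned, i.e., after the handshake has completed.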
Advantages of TCP
o It provides a connection-oriented reliable service, which means that it guarantees the delivery
of data packets. If the data packet is lost across the network, then the TCP will resend the lost
packets.
o It provides error detection by using checksums and error control by using Go-Back-N or ARQ protocols.
o It avoids congestion by using network congestion avoidance algorithms that include schemes such as additive increase/multiplicative decrease (AIMD), slow start, and the congestion window.
Disadvantage of TCP
It adds a large amount of overhead, as each segment gets its own TCP header; fragmentation by routers increases the overhead further.
o Source port: It defines the port of the application, which is sending the data. So, this field
contains the source port address, which is 16 bits.
o Destination port: It defines the port of the application on the receiving side. So, this field
contains the destination port address, which is 16 bits.
o Sequence number: This field contains the sequence number of data bytes in a particular
session.
o Acknowledgment number: When the ACK flag is set, this field contains the next sequence number of the data byte expected and works as an acknowledgment for the previously received data. For example, if the receiver receives segment number 'x', then it responds with 'x + 1' as the acknowledgment number.
o HLEN: It specifies the length of the header in 4-byte words. The size of the header lies between 20 and 60 bytes; therefore, the value of this field lies between 5 and 15.
o Reserved: It is a 4-bit field reserved for future use, and by default, all are set to zero.
o Flags
1. URG: It represents an urgent pointer. If it is set, the data is processed urgently.
2. ACK: If the ACK bit is set to 0, the segment does not contain an acknowledgment.
3. PSH: If this field is set, it requests the receiving device to push the data to the receiving application without buffering it.
4. RST: It is used to reset the connection.
5. SYN: It is used to establish a connection and synchronize sequence numbers.
6. FIN: It is used to release a connection; no further data exchange will happen.
o Window size
It is a 16-bit field. It contains the size of data that the receiver can accept. This field is used
for the flow control between the sender and receiver and also determines the amount of buffer
allocated by the receiver for a segment. The value of this field is determined by the receiver.
o Checksum
It is a 16-bit field. This field is optional in UDP, but in TCP it is mandatory.
o Urgent pointer
It is a pointer that points to the urgent data byte if the URG flag is set to 1. It defines a value
that will be added to the sequence number to get the sequence number of the last urgent byte.
o Options
It provides additional options. The options field is expressed in 32-bit units; if an option occupies fewer than 32 bits, padding is added to fill the remaining bits.
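Several of the header fields above can be illustrated by packing and decoding a minimal 20-byte header (no options) with Python's struct module; the field values below are arbitrary:

```python
import struct

# Field layout: src port, dst port, seq, ack, HLEN/flags, window, checksum, urgent.
src, dst = 443, 51514
seq, ack = 1000, 2001
offset_flags = (5 << 12) | 0x018       # HLEN = 5 words (20 bytes); ACK+PSH flags set
window = 65535
header = struct.pack("!HHIIHHHH", src, dst, seq, ack, offset_flags, window, 0, 0)

s, d, sq, ak, of, win, csum, urg = struct.unpack("!HHIIHHHH", header)
hlen_bytes = (of >> 12) * 4            # HLEN field, converted from words to bytes
ack_flag = bool(of & 0x010)            # ACK bit
psh_flag = bool(of & 0x008)            # PSH bit
print(hlen_bytes, ack_flag, psh_flag, win)   # 20 True True 65535
```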
TCP timers are mechanisms used by the protocol to manage and control various aspects of the data
transmission process. Essentially, these timers are implemented by a device's operating system and
are used to track different stages of a TCP connection. They ensure that packets are promptly
delivered between devices and help avoid issues such as packet loss or congestion.
TCP timers are an essential component of the Transmission Control Protocol. They are used to
manage various aspects of network communication, such as retransmission, congestion control, and
detecting inactive connections.
There are three main types of TCP timers: retransmission timer, persistence timer, and keepalive
timer. Each type serves a unique purpose in ensuring reliable data transfer.
Retransmission Timer
The retransmission timer is a critical component in providing reliable data transfer over the network.
Its primary function is to ensure that packets reach their destination by resending packets that may
have been lost or corrupted during transmission.
When a packet is sent over the network, an acknowledgement (ACK) is expected from the receiver.
If no ACK is received within a specified time frame set by the retransmission timer, the sender
assumes that the packet has been lost and will resend it.
Persistence Timer
The Persistence Timer (PT) addresses a different problem: the zero-window deadlock. When the receiver advertises a window size of zero, the sender stops transmitting and waits for a window update. If that update segment is lost, both sides could wait forever, the receiver waiting for data and the sender waiting for permission to send.
The PT prevents this deadlock. It works by periodically sending small probe segments whenever the advertised window is zero. The receiver's response to a probe carries its current window size: if the window has opened, normal transmission resumes; if it is still zero, the probe is repeated when the timer expires again. PTs thus play a crucial role in keeping connections from stalling indefinitely.
Keepalive Timer
The Keepalive Timer (KT) is used to detect inactive connections. When a connection is idle for an
extended period, it can be challenging to know whether the session has been terminated or not. The
KT solves this by sending probes at regular intervals to check the status of the connection.
If there is no response from these probes before a specified time period elapses (set by KT), then the
connection is assumed to be dead and will be dropped. The KT ensures that resources are not wasted
on inactive sessions, freeing up network resources for active ones.
Timeouts are an essential aspect of TCP communication, as they allow the protocol to ensure reliable
data transfer. When a packet is sent, the TCP implementation starts a timer for that packet. If an
acknowledgment for the sent packet is not received before the timer expires, then TCP assumes that
the packet has been lost and initiates retransmission.
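The retransmit-on-timer-expiry logic can be sketched without sockets; lossy_channel below is a hypothetical stand-in for the network, not a real API:

```python
# Sketch of the retransmission timer's logic: send, wait for an ACK, and
# resend when the timer expires without one.
def lossy_channel(attempt):
    # Hypothetical channel: drops the first transmission, ACKs the second.
    return attempt >= 2

def send_with_retransmission(max_tries=5):
    for attempt in range(1, max_tries + 1):
        acked = lossy_channel(attempt)   # False models "timer expired, no ACK"
        if acked:
            return attempt               # number of transmissions needed
    raise TimeoutError("connection assumed dead")

print(send_with_retransmission())   # 2 -> first copy lost, retransmission ACKed
```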
The duration of this timer determines how long it takes for TCP to recognize a lost transmission. The
default value for this timeout varies across different operating systems but is usually set between 30 and 60 seconds.
TCP timeouts have a significant impact on network performance because they affect how quickly
applications can send and receive data. Long timeouts can cause delays in data transfer, which can
lead to poor application performance. On the other hand, short timeouts can result in unnecessary
retransmissions and increased network traffic.
There are several strategies that can be used to optimize timeout values. One approach is to use
adaptive retransmission times based on network conditions such as round-trip time (RTT) estimates
or congestion window size (CWND).
Another strategy is to use a hybrid approach that combines fixed and adaptive timeouts based on
different stages of communication between hosts. For example, initial connections may use fixed
timeouts while established connections may use adaptive ones.
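The adaptive approach mentioned above is what standard TCP actually does. The sketch below follows the RFC 6298 estimator: the first RTT sample initializes the smoothed RTT (SRTT) and RTT variance (RTTVAR), and later samples update them with the standard gains of 1/8 and 1/4; clock granularity is ignored for simplicity.

```python
class RtoEstimator:
    """Adaptive retransmission timeout per RFC 6298 (granularity ignored)."""
    ALPHA, BETA = 1 / 8, 1 / 4   # standard smoothing gains
    MIN_RTO = 1.0                # RFC 6298 lower bound, in seconds

    def __init__(self):
        self.srtt = None
        self.rttvar = None
        self.rto = 1.0           # initial RTO before any measurement

    def sample(self, rtt):
        """Feed one measured round-trip time (seconds); return the new RTO."""
        if self.srtt is None:    # first measurement initializes the estimator
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        self.rto = max(self.MIN_RTO, self.srtt + 4 * self.rttvar)
        return self.rto

est = RtoEstimator()
rto = est.sample(0.5)   # first RTT sample of 500 ms gives RTO = 1.5 s
```

A stable RTT shrinks the variance term over successive samples, so the RTO converges downward toward the 1-second floor.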
TCP congestion control mechanism is an algorithmic approach to manage the flow of data in a TCP
network. The congestion control mechanism regulates the rate at which data is transmitted, while
maintaining a balance between network utilization and reliability.
The mechanism works by adjusting the sending rate based on various feedback mechanisms during
data transmission. It ensures that the available bandwidth is shared fairly among all the users without
causing network congestion.
The TCP congestion control mechanism uses timers to detect and respond to network congestion.
When a router experiences congestion, it will drop packets or delay their delivery, leading to
retransmissions from the sender.
To optimize TCP's congestion control mechanism, several strategies can be employed such as
implementing algorithms that are more sensitive to changes in network conditions or tweaking
existing algorithms parameters based on specific use cases. Another approach is using hybrid
mechanisms that combine different algorithms for better performance under varying traffic loads.
Furthermore, deploying Quality of Service (QoS) techniques can help prioritize different types of
traffic during periods of high demand or network congestion.
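The sending-rate adjustment described above can be illustrated with a toy model of slow start and congestion avoidance. This is a simplified Tahoe-style sketch, not a real congestion-control implementation (algorithms such as Reno or CUBIC are considerably more involved):

```python
def next_cwnd(cwnd, ssthresh, loss):
    """One RTT step of a simplified TCP congestion window, in segments.

    Slow start doubles cwnd until it reaches ssthresh; congestion avoidance
    then grows it by one segment per RTT.  On loss, ssthresh is halved and
    cwnd restarts from one segment (multiplicative decrease).
    """
    if loss:
        return 1, max(2, cwnd // 2)
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh       # slow start: exponential growth
    return cwnd + 1, ssthresh           # congestion avoidance: linear growth

cwnd, ssthresh = 1, 8
trace = []
for rtt in range(6):
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss=False)
    trace.append(cwnd)
# trace: [2, 4, 8, 9, 10, 11] -- exponential until ssthresh=8, then linear
```

The loss branch shows the feedback loop: a dropped packet halves the threshold, and the window must climb back up, which is how the sending rate adapts to congestion.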
Configuring TCP timers is a critical aspect of optimizing network performance. The default timer
values provided by the operating system may not always be ideal for specific network conditions,
and it may be necessary to adjust these values depending on the situation. Most operating systems
provide different ways to configure TCP timers, including modifying the kernel parameters or using
third-party tools.
Configuring TCP timers on Linux can be done by modifying the kernel parameters using the sysctl
command. On Windows, it can be done through registry settings or using the netsh command-line
tool. Similarly, on macOS, it can be done through system preferences or using third-party tools such
as MacTCP Watcher.
The process of configuring TCP timers is not straightforward and requires technical knowledge in
networking and operating systems.
The primary goal of managing TCP timers is to ensure reliable data transfer while minimizing delay
and congestion on the network. To achieve this goal, it’s essential to monitor timer values regularly
and adjust them accordingly based on network conditions.
1. Regularly monitoring timer values using built-in networking tools or third-party software
2. Analyzing network traffic patterns to identify potential issues such as packet loss or congestion
3. Tuning retransmission and persistence timers based on latency, bandwidth, packet loss rates, etc.
4. Making gradual adjustments rather than sudden changes to avoid negative impacts on network performance
Conclusion
In this article, we have discussed the TCP timers and their importance in network communication.
We have covered the different types of TCP timers, their functions, and how they work.
Additionally, we have explored how TCP determines when to timeout a connection or packet
transmission and how TCP manages network congestion using timers. Furthermore, we have
discussed best practices for configuring and managing TCP timers to optimize network performance.
5.5 WWW
The World Wide Web (WWW) is a repository of information linked together from points all over
the world. The WWW has a unique combination of flexibility, portability, and user-friendly features
that distinguish it from other services provided by the Internet.
Each site holds one or more documents, referred to as Web pages. Each Web page can contain a link
to other pages in the same site or at other sites. The pages can be retrieved and viewed by using
browsers.
Client (Browser)
A variety of vendors offer commercial browsers that interpret and display a Web document, and all
use nearly the same architecture. Each browser usually consists of three parts: a controller, client
protocol, and interpreters. The controller receives input from the keyboard or the mouse and uses the
client programs to access the document. After the document has been accessed, the controller uses
one of the interpreters to display the document on the screen. The client protocol can be one of the
protocols described previously such as FTP or HTTP (described later in the chapter). The interpreter
can be HTML, Java, or JavaScript, depending on the type of document.
Server
The Web page is stored at the server. Each time a client request arrives, the corresponding document
is sent to the client. To improve efficiency, servers normally store requested files in a cache in
memory; memory is faster to access than disk. A server can also become more efficient through
multithreading or multiprocessing. In this case, a server can answer more than one request at a time.
URL
A client that wants to access a Web page needs the address. To facilitate the access of documents
distributed throughout the world, HTTP uses locators. The uniform resource locator
(URL) is a standard for specifying any kind of information on the Internet. The URL defines four
things: protocol, host computer, port, and path.
The protocol is the client/server program used to retrieve the document. Many different protocols can
retrieve a document; among them are FTP or HTTP. The most common today is HTTP.
HTTP
The Hypertext Transfer Protocol (HTTP) is a protocol used mainly to access data on the World Wide
Web. HTTP functions as a combination of FTP and SMTP. It is similar to FTP because it transfers
files and uses the services of TCP. However, it is much simpler than FTP because it uses only one
TCP connection. There is no separate control connection; only data are transferred between the client
and the server.
HTTP is like SMTP because the data transferred between the client and the server look like SMTP
messages. In addition, the format of the messages is controlled by MIME-like headers.
Unlike SMTP, the HTTP messages are not destined to be read by humans; they are read and
interpreted by the HTTP server and HTTP client (browser). SMTP messages are stored and
forwarded, but HTTP messages are delivered immediately. The commands from the client to the
server are embedded in a request message. The contents of the requested file or other information are
embedded in a response message. HTTP uses the services of TCP on well-known port 80.
File Transfer Protocol (FTP) is the standard mechanism provided by TCP/IP for copying a file from
one host to another. Although transferring files from one system to another seems simple and
straightforward, some problems must be dealt with first. For example, two systems may use different
file name conventions. Two systems may have different ways to represent text and data. Two
systems may have different directory structures. All these problems have been solved by FTP in a
very simple and elegant approach.
FTP differs from other client/server applications in that it establishes two connections between the
hosts. One connection is used for data transfer, the other for control information (commands and
responses). Separation of commands and data transfer makes FTP more efficient. The control
connection uses very simple rules of communication.
We need to transfer only a line of command or a line of response at a time. The data connection, on
the other hand, needs more complex rules due to the variety of data types transferred. However, the
difference in complexity is at the FTP level, not TCP. For TCP, both connections are treated the
same. FTP uses two well-known TCP ports: Port 21 is used for the control connection, and port 20 is
used for the data connection.
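Replies on the FTP control connection follow the simple rules described above: each reply begins with a three-digit code followed by text. A small parser, as a sketch:

```python
def parse_ftp_reply(line):
    """Split a one-line FTP control reply into (code, text).

    Examples of real reply lines: '220 Service ready',
    '331 User name okay, need password', '226 Closing data connection'.
    """
    code, _, text = line.partition(" ")
    if len(code) != 3 or not code.isdigit():
        raise ValueError(f"malformed FTP reply: {line!r}")
    return int(code), text

code, text = parse_ftp_reply("220 Service ready for new users")
```

The line-at-a-time simplicity of the control connection is exactly why its rules can be this small, while the data connection must handle many file types and transfer modes.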
Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail (e-mail)
transmission across Internet Protocol (IP) networks.
1. MAIL command, to establish the return address, also known as the Return-Path, RFC 5321.MailFrom
(MFROM), or envelope sender. This is the address to which bounce messages are sent.
2. RCPT command, to establish a recipient of this message. This command can be issued
multiple times, one for each recipient. These addresses are also part of the envelope.
3. DATA to send the message text. This is the content of the message, as opposed to its
envelope. It consists of a message header and a message body separated by an empty line. DATA is
actually a group of commands, and the server replies twice: once to the DATA command proper, to
acknowledge that it is ready to receive the text, and a second time after the end-of-data sequence,
to either accept or reject the entire message.
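The MAIL/RCPT/DATA sequence above can be modeled as a tiny state machine. This is an offline sketch of the envelope-building logic only, not a real SMTP server, and the addresses are hypothetical:

```python
class SmtpEnvelope:
    """Accumulates an SMTP envelope via MAIL, RCPT, and DATA commands."""

    def __init__(self):
        self.return_path = None
        self.recipients = []
        self.body = None

    def mail(self, sender):
        self.return_path = sender          # Return-Path / envelope sender
        return "250 OK"

    def rcpt(self, recipient):
        if self.return_path is None:       # RCPT before MAIL is an error
            return "503 Bad sequence of commands"
        self.recipients.append(recipient)  # may be issued multiple times
        return "250 OK"

    def data(self, text):
        if not self.recipients:            # DATA before RCPT is an error
            return "503 Bad sequence of commands"
        self.body = text                   # header and body, blank-line separated
        return "250 OK: message accepted"

env = SmtpEnvelope()
r1 = env.mail("alice@example.com")
r2 = env.rcpt("bob@example.com")
r3 = env.data("Subject: hi\r\n\r\nHello Bob")
```

The 503 replies illustrate why the commands form a sequence: the envelope must exist before content can be attached to it.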
IMAP
The Internet Message Access Protocol (IMAP) serves as a cornerstone of cutting-edge email
communication, facilitating seamless get admission to email messages. As a necessary element of the
e-mail infrastructure, IMAP revolutionizes the manner customers interact with their digital
correspondence. Unlike its predecessor, the Post Office Protocol (POP), IMAP gives a dynamic and
synchronized approach to handling emails across multiple gadgets and structures.
What is IMAP?
Internet Message Access Protocol (IMAP) is an application layer protocol used for retrieving
emails from a mail server. It was designed by Mark Crispin in 1986 as a remote-access mailbox
protocol; the current version is IMAP4. It is the most commonly used protocol for retrieving
emails. The term has also been expanded as Internet mail access protocol, Interactive mail access
protocol, and Interim mail access protocol. Unlike POP, which downloads your messages to a single
machine and typically removes them from the mail server as soon as they are downloaded (so that
previously downloaded messages cannot be viewed from a different device), IMAP keeps messages on
the server, so the same mailbox can be accessed and kept synchronized from any device.
Features of IMAP
It is capable of managing multiple mailboxes and organizing them into various categories.
It supports message flags, which keep track of which messages have been seen.
It can decide whether to retrieve an email from the mail server before downloading it.
Working of IMAP
IMAP follows a client-server architecture and is the most commonly used email protocol. It involves
client and server processes running on different computers connected through a network. The
protocol runs on top of TCP/IP for communication. Once communication is set up, the server listens
on port 143 by default, which is non-encrypted; for secure, encrypted communication, port 993 is
used.
For example, when a message travels from a Gmail account to an Outlook account:
By approving the sender’s and recipient’s email addresses, the SMTP server verifies
(authenticates) that the email can be sent.
The email is sent from Gmail’s SMTP server to the Outlook SMTP server.
IMAP or POP3 is then used by the Outlook mail server to deliver the email to the Outlook email
client.
Architecture of IMAP
The Internet Message Access Protocol (IMAP) follows a client-server model that allows users to
access and view email messages stored on remote servers. Here is a summary of the components:
IMAP clients: An IMAP client is an email application or software that users use to
communicate with their email accounts. Examples include Microsoft Outlook, Mozilla
Thunderbird, Apple Mail, and mobile email applications. The client communicates with the
IMAP server to receive, manage, and send email messages.
IMAP Server: The IMAP server manages email messages and manages user mailboxes. It
responds to requests from IMAP clients, and provides access to email folders and messages.
The server stores emails in a structured format, usually organized in user-defined folders or
mailboxes. Common IMAP server software includes Dovecot, Courier IMAP, Cyrus IMAP,
and Microsoft Exchange Server.
An email client, like Microsoft Outlook, connects to the server via IMAP when a user
logs in.
IMAP does not automatically download attachments; messages are downloaded to the client
only when the user taps on them.
Compared to alternative email retrieval protocols like Post Office Protocol 3 (POP3), users
can check their mail more quickly with IMAP.
Until they are specifically deleted by the user, emails will stay on the server.
The IMAP server listens on port 143, while IMAP over Secure Sockets Layer (SSL)/Transport
Layer Security (TLS) is assigned port number 993.
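The points above (messages stay on the server until deleted, and flags track which have been seen) can be illustrated with a toy server-side mailbox. This is a conceptual sketch of the behavior, not the IMAP wire protocol:

```python
class Mailbox:
    """Toy server-side mailbox: messages persist until explicitly deleted."""

    def __init__(self):
        self._messages = {}     # uid -> {"text": ..., "flags": set()}
        self._next_uid = 1

    def append(self, text):
        uid = self._next_uid
        self._next_uid += 1
        self._messages[uid] = {"text": text, "flags": set()}
        return uid

    def fetch(self, uid):
        msg = self._messages[uid]
        msg["flags"].add("\\Seen")    # reading marks the message as seen
        return msg["text"]            # the message itself stays on the server

    def delete(self, uid):
        del self._messages[uid]       # only an explicit delete removes it

    def unseen(self):
        return [u for u, m in self._messages.items()
                if "\\Seen" not in m["flags"]]

box = Mailbox()
uid = box.append("Hello from IMAP")
text = box.fetch(uid)
```

Because fetching only flags the message rather than removing it, any number of clients can see the same, synchronized mailbox state, which is the key difference from POP3.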
Advantages
It provides better security than the POP3 protocol, as the email exists only on the IMAP server.
Disadvantages
Emails of the user are only available when there is an internet connection.
5.7 DNS
Domain Name System (DNS) is a hostname-to-IP-address translation service. DNS is a distributed
database implemented in a hierarchy of name servers. It is an application layer protocol for message
exchange between clients and servers, and it is required for the functioning of the Internet.
Every host is identified by an IP address, but remembering numbers is very difficult for people, and
IP addresses are not static; therefore, a mapping is required from domain names to IP addresses. DNS
is used to convert the domain names of websites to their numerical IP addresses.
Types of Domain
1. Generic domain: .com (commercial), .edu (educational), .mil (military), .org (nonprofit
organization), .net (similar to commercial); all these are generic domains.
2. Country domain: .in (India), .us (United States), .uk (United Kingdom), etc.
3. Inverse domain: used when we want to know the domain name of a website from its IP address,
i.e., IP-to-domain-name mapping.
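The forward and inverse mappings can be illustrated with a toy in-memory resolver. Real DNS is a distributed hierarchy of name servers, and the names and addresses below are made-up example data:

```python
# Hypothetical zone data: domain name -> IPv4 address.
FORWARD = {
    "www.example.com": "93.184.216.34",
    "mail.example.com": "93.184.216.35",
}
# The inverse domain answers the opposite question: IP -> name.
INVERSE = {ip: name for name, ip in FORWARD.items()}

def resolve(name):
    """Forward lookup: what IP address does this hostname map to?"""
    return FORWARD.get(name)

def reverse(ip):
    """Inverse lookup: what hostname does this IP address map to?"""
    return INVERSE.get(ip)

ip = resolve("www.example.com")
name = reverse("93.184.216.34")
```

In a real resolver the dictionary lookups would be replaced by queries that walk the name-server hierarchy, but the question being asked is the same.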
SNMP
If an organization has 1000 devices, then checking every device, one by one, every day to see
whether it is working properly is a hectic task. To ease this, the Simple Network Management
Protocol (SNMP) is used.
SNMP is an application layer protocol that uses UDP port numbers 161/162. SNMP is used to
monitor the network, detect network faults, and sometimes even to configure remote devices.
Components of SNMP
1. SNMP Manager –
It is a centralized system used to monitor the network. It is also known as a Network
Management Station (NMS). A router that runs the SNMP server program is called an agent,
while a host that runs the SNMP client program is called a manager.
2. SNMP agent –
It is a software module installed on a managed device. The manager
accesses the values stored in the database, whereas the agent maintains the information in the
database. To ascertain whether the router is congested, for instance, a manager can examine
the relevant variables that a router stores, such as the number of packets received and
transmitted.
SNMP messages
GetRequest : It is simply used to retrieve data from SNMP agents. In response to this, the
SNMP agent responds with the requested value through a response message.
GetNextRequest : To get the value of a variable, the manager sends the agent the
GetNextRequest message. The values of the entries in a table are retrieved using this kind of
communication. The manager won’t be able to access the values if it doesn’t know the
entries’ indices. The GetNextRequest message is used to define an object in certain
circumstances.
SetRequest : It is used by the SNMP manager to set the value of an object instance on the
SNMP agent.
Response : When sent in response to the Set message, it will contain the newly set value as
confirmation that the value has been set.
Trap : These are messages sent by the agent without being requested by the manager. They are
sent when a fault has occurred.
InformRequest : It was added in SNMPv2c and is used to determine whether the manager has
received the trap message. It is the same as a trap, but it adds an acknowledgement that
the trap does not provide.
Security levels define the type of security algorithm applied to SNMP packets. They are used only in
SNMPv3. There are 3 security levels, namely:
1. noAuthNoPriv – This (no authentication, no privacy) security level uses a community string for
authentication and no encryption for privacy.
2. authNoPriv – This security level (authentication, no privacy) uses HMAC with MD5 for
authentication, and no encryption is used for privacy.
3. authPriv – This security level (authentication, privacy) uses HMAC with MD5 or SHA for
authentication, and encryption uses the DES-56 algorithm.
Versions of SNMP
1. SNMPv1 –
It uses community strings for authentication and uses UDP only. SNMPv1 is the first version
of the protocol. It is described in RFCs 1155 and 1157 and is simple to set up.
2. SNMPv2c –
It uses community strings for authentication. It uses UDP but can be configured to use
TCP. Improved MIB structure elements, transport mappings, and protocol packet types are all
included in this updated version. However, it also makes use of the current “community-
based” SNMPv1 administrative structure, which is why the version is called SNMPv2c. RFC
1901, RFC 1905, and RFC 1906 all describe it.
3. SNMPv3 –
It uses hash-based MAC with MD5 or SHA for authentication and DES-56 for privacy.
Therefore, the conclusion is that the higher the version of SNMP, the more secure it will be.
SNMPv3 provides the remote configuration of SNMP entities. This is the most secure version
to date because it also includes authentication and encryption, which may be used alone or in
combination. RFC 1905, RFC 1906, RFC 2571, RFC 2572, RFC 2574, and RFC 2575 are the
RFCs for SNMPv3.
Advantages of SNMP
1. It is simple to implement.
Limitation of SNMP