Transport Layer 23-24

The transport layer facilitates process-to-process communication between applications on different hosts, distinguishing it from host-to-host communication managed by the network layer. It provides essential services such as segmentation, error detection, flow control, and connection management, with protocols like TCP ensuring reliable communication and UDP offering a connectionless service. Key concepts include multiplexing, demultiplexing, and congestion control, with TCP and UDP serving different application needs based on reliability and overhead requirements.

CHAPTER 23 Introduction to Transport Layer

Transport Layer

The transport layer is located between the application layer and the network layer. It
provides process-to-process communication between two application layers, one at
the local host and the other at the remote host.

Difference between host-to-host communication and process-to-process communication

Host-to-host communication:

 Handled by the network layer.
 Transfers data between two computers (hosts).
 Uses IP addresses to route the message to the correct destination computer.

Process-to-process communication:

 Handled by: Transport layer (e.g., TCP, UDP).
 Scope: Involves delivering data between specific applications (processes) on different hosts.
 Responsibility: The transport layer ensures that data is delivered to the correct process or application on the destination host, using mechanisms like ports or sockets to identify the target process.

Transport Layer Services

Process-to-Process Communication:

The transport layer enables communication between specific applications (processes)
running on different computers. It uses port numbers or sockets to direct the message
to the correct application on the destination system.
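
A minimal sketch of this idea using Python's standard socket module; the loopback
address, port number 5005, and message are arbitrary choices for illustration:

import socket

# Receiver: a process identified by (IP address, port number), i.e., a socket address.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)    # UDP socket
receiver.bind(("127.0.0.1", 5005))           # port 5005 identifies this process

# Sender: another process on the same (or a different) host.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 5005))   # IP selects the host, port selects the process

data, addr = receiver.recvfrom(1024)
print(data, "received from", addr)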

Segmentation and Reassembly:

Large messages are broken down into smaller, manageable segments for transmission,
and these segments are reassembled at the destination.

Multiplexing and Demultiplexing:

The transport layer at the sender’s side combines data from multiple processes
(multiplexing), while the receiver’s transport layer separates the data and delivers it
to the correct process (demultiplexing).
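
As an illustration only (not an actual protocol implementation), demultiplexing can be
pictured as a table lookup keyed by the destination port number; the ports and handlers
below are hypothetical:

# Hypothetical demultiplexing table: destination port -> receiving process.
handlers = {
    53: lambda data: print("deliver to DNS process:", data),
    80: lambda data: print("deliver to web server process:", data),
}

def demultiplex(dst_port, data):
    # The transport layer reads the destination port from the segment header
    # and hands the payload to the matching process.
    handler = handlers.get(dst_port)
    if handler is None:
        print("no process bound to port", dst_port, "- segment dropped")
    else:
        handler(data)

demultiplex(80, b"GET / HTTP/1.1")
demultiplex(9999, b"???")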

Error Detection and Correction:

The transport layer checks for errors in the transmitted data and ensures that corrupted
data is retransmitted, improving the reliability of communication.

Flow Control:

Flow control techniques, like stop-and-wait or sliding window, regulate the pace at
which data is sent to ensure the receiver is not overwhelmed, maintaining stability in
communication.

Reliability:

The transport layer ensures reliable data delivery, especially with connection-oriented
protocols like TCP that guarantee data reaches its destination in the correct order.

Connection Establishment and Termination:

The transport layer manages the opening and closing of connections between sending
and receiving devices, ensuring proper synchronization during these processes.

Port Addressing:

Port numbers are used to identify specific processes running on devices, allowing
multiple applications to share the network while maintaining individual
communication streams.

Data Integrity:

The transport layer ensures that data remains intact during transmission; checksums
let the receiver detect corrupted segments so they are not delivered to the application.

Congestion Control:

The transport layer monitors and adjusts the transmission rate to prevent network
congestion and ensure efficient data flow.

Connection-Oriented Communication:

In protocols like TCP, the transport layer guarantees data delivery in sequence,
ensuring reliable communication between sender and receiver.

Connectionless Communication:

In protocols like UDP, the transport layer allows faster transmission without
guaranteeing the order or successful delivery of data.

Flow Control at Transport Layer

In transport layer communication, four key entities interact: the sender process,
sender transport layer, receiver transport layer, and receiver process.

1. Sender Process: The sender process generates data and passes it to the sender
transport layer for further processing.

2. Sender Transport Layer: It encapsulates the data into packets (segments) and
forwards them to the receiver transport layer.

3. Receiver Transport Layer: The receiver transport layer decapsulates the packets
and passes the data to the receiver process for final delivery.

4. Receiver Process: The receiver process pulls the data from the transport layer when
needed, completing the communication.

Buffers

Buffers are used in flow control to regulate data transmission rates. The sender’s
buffer stores outgoing data to control the pace of sending, while the receiver’s buffer
holds incoming data to prevent loss if it arrives too quickly. This setup helps
synchronize data flow, preventing overflow and ensuring smooth communication
between sender and receiver.
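
A rough sketch of how a bounded receiver buffer regulates the sending pace, using
Python's queue module; the buffer size of 4 is an arbitrary choice:

import queue

receiver_buffer = queue.Queue(maxsize=4)     # bounded buffer at the receiver

def sender_transport_send(packet):
    # put() blocks when the buffer is full, so the sender is automatically
    # slowed down to the pace the receiver can handle.
    receiver_buffer.put(packet, block=True)

def receiver_process_consume():
    # The receiver process pulls data from the buffer when it is ready.
    return receiver_buffer.get()

for i in range(4):
    sender_transport_send(f"packet {i}")
print(receiver_process_consume())            # frees one slot for the sender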

Error Control

In the Internet, error control at the transport layer ensures reliable communication over
the unreliable network layer (IP). This involves:

1. Detecting and discarding corrupted packets to maintain data integrity.
2. Tracking lost or discarded packets and resending them to prevent data loss.
3. Recognizing duplicate packets and discarding them to avoid redundancy.
4. Buffering out-of-order packets until all packets arrive in sequence, ensuring
correct order delivery.
Sequence Number

To make error control possible, packets are numbered with sequence numbers
(modulo 2^m for an m-bit field), so the receiver can tell lost, duplicate, and
out-of-order packets apart.

Acknowledgment

Acknowledgments (ACKs) are used for error control in the transport layer. The
receiver sends an ACK for correctly received packets, and the sender resends packets
if an ACK is not received before a timer expires. Duplicate packets are discarded, and
out-of-order packets are either discarded or buffered until the missing packet arrives,
ensuring reliable transmission.

Sliding Window

The sender and receiver each keep a sliding window over the sequence numbers: the
sender's window marks packets that may be sent or are awaiting acknowledgment, and
it slides forward as acknowledgments arrive, combining flow and error control.

Congestion control

Congestion control is essential in packet-switched networks like the Internet, where
congestion happens when too many packets exceed the network’s capacity. It involves
techniques to prevent overload. Congestion usually occurs due to overloaded queues
in routers and switches. While congestion at the network layer affects the transport
layer, protocols like TCP manage it with their own control mechanisms.

Connectionless Protocols vs Connection-Oriented Protocols

Connectionless Protocols
A communication method where data is sent without establishing a connection. Each
packet is transmitted independently, and there's no guarantee of delivery or order.

Connection-Oriented Protocols
A communication method where a connection is established between the sender and
receiver before data transmission. This ensures reliable, ordered, and complete
delivery of packets.

Connectionless Protocols

1. Data is sent without establishing a connection.
2. Packets are sent independently with no guarantee of delivery or order.
3. Faster, with less overhead due to no connection setup.
4. Example: UDP (User Datagram Protocol).

Connection-Oriented Protocols

1. Requires establishing a connection before data transmission.
2. Ensures reliable, ordered delivery of packets.
3. Slower, with more overhead due to connection setup and maintenance.
4. Example: TCP (Transmission Control Protocol).

Simple protocol

A simple protocol is a connectionless protocol without flow control or error
handling. It assumes the receiver can process any incoming packet immediately,
without being overloaded. There is no mechanism for managing packet loss or
sequencing.

The Stop-and-Wait Protocol

The Stop-and-Wait Protocol is a connection-oriented protocol using flow and error
control with a sliding window of size 1. The sender sends one packet at a time and
waits for an acknowledgment. If the receiver finds the checksum incorrect, it discards
the packet.

The sender starts a timer for each packet. If the acknowledgment arrives before the
timer expires, the next packet is sent. If the timer expires, the packet is resent,
ensuring reliable delivery with only one packet and acknowledgment in the channel
at a time.
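
A simplified sketch of the sender side of Stop-and-Wait; send_to_network and
wait_for_ack are placeholders for the underlying channel, and the 2-second timeout is
an arbitrary choice:

TIMEOUT = 2.0    # seconds; assumed value for illustration

def stop_and_wait_send(packets, send_to_network, wait_for_ack):
    seq = 0                                     # alternating sequence number 0/1
    for data in packets:
        while True:
            send_to_network(seq, data)          # send exactly one packet
            ack = wait_for_ack(timeout=TIMEOUT)    # start the timer, wait for the ACK
            if ack == seq:                      # ACK arrived before the timer expired
                break                           # move on to the next packet
            # timer expired (or wrong ACK): resend the same packet
        seq = 1 - seq                           # only one packet outstanding at a time
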
Efficiency

The efficiency of the Stop-and-Wait protocol is low in high-bandwidth, long-delay
channels. This is because the sender waits for an acknowledgment after sending
each packet, resulting in unused channel capacity. The bandwidth-delay product,
which represents the channel's capacity in bits, is not fully utilized, causing
inefficiency.
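
A worked example with assumed numbers (1 Mbps bandwidth, 20 ms round-trip time,
1000-bit packets), treating utilization as transmission time divided by transmission
time plus round-trip time:

bandwidth = 1_000_000     # bits per second (assumed)
rtt = 0.020               # round-trip time in seconds (assumed)
packet_size = 1000        # bits (assumed)

bdp = bandwidth * rtt                           # bandwidth-delay product = 20,000 bits
utilization = packet_size / (packet_size + bdp)
print(f"the channel can hold {bdp:.0f} bits, but only {packet_size} are in flight")
print(f"utilization is about {utilization:.1%}")   # roughly 5% of the capacity is used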

Pipelining

Pipelining in networking allows tasks to start before previous ones finish. It improves
efficiency by sending multiple packets without waiting for acknowledgments,
optimizing network bandwidth.
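
A rough sketch of a pipelined (sliding-window) sender with an assumed window of 4
packets; loss handling and retransmission are omitted, and acks_received is a
placeholder that returns the highest cumulative acknowledgment seen so far:

WINDOW_SIZE = 4    # assumed window size for illustration

def pipelined_send(packets, send_to_network, acks_received):
    base = 0                        # oldest unacknowledged packet
    next_seq = 0                    # next packet to send
    while base < len(packets):
        # keep sending as long as the window is not full
        while next_seq < base + WINDOW_SIZE and next_seq < len(packets):
            send_to_network(next_seq, packets[next_seq])
            next_seq += 1
        base = acks_received()      # a cumulative ACK slides the window forward
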
CHAPTER 24 Transport-Layer Protocols

24.2 USER DATAGRAM PROTOCOL:

The User Datagram Protocol (UDP) is a connectionless, unreliable transport protocol.
It does not add anything to the services of IP except for providing process-to-process
communication instead of host-to-host communication.

24.2.1 User Datagram

A UDP packet, also called a user datagram, has a fixed header of 8 bytes consisting
of four fields, each 2 bytes long. These fields include the source and destination port
numbers, the total length of the datagram (including both header and data), and an
optional checksum. The total length can go up to 65,535 bytes, but must be smaller to
fit within an IP datagram of this size.
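
The four 2-byte fields can be packed and unpacked with Python's struct module; the
port numbers and payload below are arbitrary examples:

import struct

payload = b"hello"
src_port, dst_port = 49152, 53
length = 8 + len(payload)                    # header (8 bytes) plus data
checksum = 0                                 # 0 means "checksum not used" over IPv4

# Four fields, each 2 bytes, in network (big-endian) byte order.
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
datagram = header + payload

# Receiver side: unpack the fixed 8-byte header.
s, d, total_len, c = struct.unpack("!HHHH", datagram[:8])
print(s, d, total_len, c, datagram[8:])
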
UDP Services

1. Process-to-Process Communication
UDP enables process-to-process communication using socket addresses,
which combine IP addresses and port numbers.
2. Connectionless Services
UDP operates in a connectionless manner, where each user datagram is
independent, and there is no connection establishment or termination. Each
datagram can follow a different path.
3. Flow Control
UDP does not provide flow control, meaning there is no window mechanism.
The process using UDP must handle any necessary flow control.
4. Error Control
UDP only provides a checksum for error detection. If an error is detected, the
receiver silently discards the datagram, and there is no feedback sent to the
sender.
5. Checksum
The UDP checksum includes a pseudoheader, UDP header, and application
data. It ensures error detection, but no recovery is performed (a short
sketch of the calculation follows this list).
6. Congestion Control
UDP does not implement congestion control, assuming that small and
sporadic packets will not create congestion.
7. Encapsulation and Decapsulation
UDP encapsulates and decapsulates messages when sending data between
processes.
8. Queuing
In UDP, queues are associated with ports, which may be incoming or
outgoing depending on the implementation.
9. Multiplexing and Demultiplexing
UDP multiplexes and demultiplexes data to handle multiple processes
requesting UDP services on a host.
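
A sketch of the checksum calculation mentioned in item 5 above: a 16-bit one's-complement
sum is taken over the pseudoheader (source and destination IP addresses, a zero byte,
protocol number 17, and the UDP length), the UDP header with its checksum field set to
zero, and the data. The addresses and ports are arbitrary examples:

import struct, socket

def ones_complement_sum16(data):
    if len(data) % 2:
        data += b"\x00"                        # pad to an even number of bytes
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)    # wrap the carry around
    return total

def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload):
    length = 8 + len(payload)
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 17, length))            # protocol 17 = UDP
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)   # checksum field = 0
    return ~ones_complement_sum16(pseudo + header + payload) & 0xFFFF

print(hex(udp_checksum("10.0.0.1", "10.0.0.2", 49152, 53, b"hello")))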

Comparison between UDP and Generic Simple Protocol

UDP (User Datagram Protocol)

1. Connectionless: No connection setup before data transfer.
2. Error Detection: Uses an optional checksum to detect errors.
3. No Flow Control: No mechanism to prevent overload at the receiver.
4. Unreliable: No guarantees for packet delivery or order.

Generic Simple Protocol

1. Connectionless: No connection establishment required.
2. No Error Detection: No error checking for corrupted packets.
3. No Flow Control: No flow control to manage data rate.
4. Unreliable: No guarantee for packet delivery.

UDP Applications:

1. Simple Request-Response: UDP is ideal for processes requiring simple
request-response communication with minimal flow and error control, unlike
protocols like FTP that transfer large data (see the sketch after this list).
2. Internal Flow/Error Control: Applications such as TFTP, which implement
their own flow and error control mechanisms, are well-suited for UDP.
3. Multicasting: UDP is designed for multicasting, making it the preferred choice
for multicast applications, whereas TCP lacks this capability.
4. Management Processes: UDP is commonly used in network management
protocols like SNMP for efficient communication without the overhead of error
and flow control.
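
A sketch of the simple request-response pattern from item 1: the application supplies
its own timeout and retry, since UDP itself gives no delivery guarantee. The server
address and the single retry are arbitrary choices:

import socket

SERVER = ("203.0.113.10", 9000)     # hypothetical server address

def request_response(request, retries=1, timeout=1.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    for _ in range(retries + 1):
        sock.sendto(request, SERVER)           # one datagram carries the request
        try:
            reply, _ = sock.recvfrom(2048)     # one datagram carries the reply
            return reply
        except socket.timeout:
            pass                               # request or reply lost: try again
    return None                                # give up; the application decides what to do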

Transmission Control Protocol (TCP)

TCP is a connection-oriented and reliable transport protocol. It defines clear phases
for connection establishment, data transfer, and connection termination, ensuring
reliable communication between processes.

TCP Services

1. Process-to-Process Communication
TCP provides communication between processes using port numbers.
2. Full-Duplex Communication
Data flows in both directions simultaneously, with separate buffers for
sending and receiving.
3. Multiplexing and Demultiplexing
TCP handles multiplexing at the sender and demultiplexing at the receiver,
requiring a connection for each pair of processes.
4. Connection-Oriented Service
TCP establishes a logical connection before data exchange, which is reliable,
ordered, and can handle lost or out-of-order data.
5. Reliable Service
Acknowledgments ensure reliable delivery of data, and lost or corrupted
packets are retransmitted.
6. Stream Delivery Service
TCP delivers data as a continuous byte stream, unlike UDP’s discrete
messages, ensuring ordered delivery.
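
A sketch of the stream delivery service with Python sockets on the loopback interface
(port 9090 is an arbitrary choice): two separate writes by the sender may arrive as a
single chunk, because TCP delivers a byte stream rather than message boundaries.

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 9090))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 9090))   # the connection is established before any data moves
conn, _ = server.accept()

client.sendall(b"first message ")
client.sendall(b"second message")     # two separate writes by the sending process

data = conn.recv(4096)                # may return both writes in one chunk; the
print(data)                           # receiving application must frame its own messages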

TCP Segment Format

A TCP packet, known as a segment, consists of a header (ranging from 20 to 60 bytes)
and application data. The header is typically 20 bytes unless options are included, in
which case it can extend up to 60 bytes.

Header Fields:

1. Source Port Address (16 bits): Specifies the port number of the sending
application.
2. Destination Port Address (16 bits): Specifies the port number of the receiving
application.
3. Sequence Number (32 bits): Indicates the byte number of the first data byte in
the segment. Each byte in a TCP stream is numbered.
4. Acknowledgment Number (32 bits): Indicates the next expected byte from
the other party. If byte number x has been successfully received, the
acknowledgment number will be x+1.
5. Header Length (HLEN) (4 bits): Specifies the length of the TCP header in 4-
byte words, ranging from 5 to 15.
6. Flags (Control) (6 bits): Contains the six control flags (URG, ACK, PSH, RST,
SYN, FIN) used for connection establishment/termination, acknowledgment, and
data transfer modes. Multiple flags may be set simultaneously to control different
aspects of the connection.
7. Window Size (16 bits): Used for flow control, indicating the size of the
receiving buffer.
8. Urgent Pointer (16 bits): Points to urgent data if the URG flag is set.
9. Checksum (16 bits): Used for error checking the segment.
10. Options and Padding: Optional fields that can extend the header size up to 40
bytes.
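
A sketch of reading the fixed 20-byte part of the header with Python's struct module,
following the field layout listed above; the example segment is hypothetical:

import struct

def parse_tcp_header(segment):
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    hlen = (offset_flags >> 12) * 4    # HLEN is counted in 4-byte words (5 to 15)
    flags = offset_flags & 0x3F        # URG, ACK, PSH, RST, SYN, FIN bits
    return src_port, dst_port, seq, ack, hlen, flags, window, checksum, urgent

# Hypothetical segment: ports 49152 -> 80, seq 1000, ack 2000, HLEN 5, SYN and ACK set.
example = struct.pack("!HHIIHHHH", 49152, 80, 1000, 2000, (5 << 12) | 0x12, 65535, 0, 0)
print(parse_tcp_header(example))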

Connection Establishment in TCP:

TCP transmits data in full-duplex mode, meaning both the client and server can send
data simultaneously. Before data transfer begins, both parties must establish
communication. This is done using a process called three-way handshaking.

Three-Way Handshaking Process:

1. Client Sends SYN:
The client sends a segment with only the SYN flag set, initiating
synchronization. It includes a random Initial Sequence Number (ISN) to
begin the connection. The SYN segment consumes one sequence number but
carries no data.
2. Server Sends SYN + ACK:
The server responds with a segment containing both SYN and ACK flags. It
acknowledges the client's SYN request and includes its own ISN. The server
also provides a receive window size (rwnd). This segment consumes one
sequence number but carries no data.
3. Client Sends ACK:
The client acknowledges the server’s SYN + ACK segment. This segment
confirms the successful handshake. If no data is transferred, it doesn’t
consume any additional sequence numbers, although it can carry data if
needed.
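
A small numeric illustration of the three segments, with assumed initial sequence
numbers (client ISN 8000, server ISN 15000):

client_isn = 8000     # assumed client Initial Sequence Number
server_isn = 15000    # assumed server Initial Sequence Number

# 1. SYN:       client -> server, seq = 8000, no data (consumes one sequence number)
# 2. SYN + ACK: server -> client, seq = 15000, ack = 8001, plus the server's rwnd
# 3. ACK:       client -> server, seq = 8001, ack = 15001 (consumes no sequence
#               number unless it carries data)
print("SYN      seq =", client_isn)
print("SYN+ACK  seq =", server_isn, " ack =", client_isn + 1)
print("ACK      seq =", client_isn + 1, " ack =", server_isn + 1)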

SYN Flooding Attack:

In a SYN flooding attack, attackers send fake SYN segments to a server, causing it
to allocate resources for non-existent clients. The server sends SYN + ACK
segments, but since no response is received, the server runs out of resources,
denying service to real clients.

Mitigation:

1. Limit connection requests.
2. Filter out fake IPs.
3. Use cookies for verification before allocating resources.

Connection Termination in TCP:

A connection can be closed by either the client or the server, but it is typically
initiated by the client. There are two options for terminating a connection:
three-way handshaking and four-way handshaking with a half-close option.

Three-Way Handshaking:

1. Client to Server:
The client sends a FIN segment to close the connection, which consumes one
sequence number (if it carries no data).
2. Server to Client:
The server sends a FIN + ACK segment to confirm receipt of the client's FIN
segment and to announce that the connection is also closing in the server-to-
client direction. This segment may carry the last chunk of data; if it carries
no data, it consumes only one sequence number.
3. Client to Server:
The client sends an ACK segment to acknowledge the server's FIN segment.
This segment contains the acknowledgment number (one plus the server's
sequence number) and consumes no sequence numbers.

Half-Close:

In a half-close, one end can stop sending data while still receiving it. This can be
initiated by either the client or the server. For example, if the server needs all data
before processing (like in sorting), the client can close the connection in the client-
to-server direction, while the server keeps the server-to-client direction open.

 Client to Server: The client sends a FIN segment to stop sending data.
 Server to Client: The server sends an ACK segment to accept the half-close,
and can continue sending data.
 Once the server finishes sending all processed data, it sends a FIN segment,
which the client acknowledges with an ACK.
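
In the Berkeley sockets API a half-close corresponds to shutting down one direction of
a connected TCP socket. A sketch of the client side described above; the server address
and the data are placeholders:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("198.51.100.7", 7000))    # hypothetical sorting server

sock.sendall(b"3\n1\n2\n")              # send all the data to be sorted
sock.shutdown(socket.SHUT_WR)           # half-close: a FIN is sent and the client-to-server
                                        # direction closes, but the client can still receive

while True:
    chunk = sock.recv(4096)             # the server keeps its direction open, sends the
    if not chunk:                       # sorted result, and finally its own FIN
        break
    print(chunk)
sock.close()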

Connection Reset (RST):

The RST (reset) flag is used to:

 Deny a connection request.
 Abort an existing connection.
 Terminate an idle connection.

This action can occur from either side to reset the connection.

Stream Control Transmission Protocol (SCTP)

SCTP is a transport-layer protocol that combines features of both UDP and TCP to
offer better support for multimedia communication.

SCTP Services:

1. Process-to-Process Communication: SCTP enables communication between
application processes, like TCP and UDP.
2. Multiple Streams: SCTP supports multiple streams in a single connection
(called an association). This prevents delays due to packet loss in one stream,
allowing other streams to continue delivering data. This is ideal for real-time
applications like audio and video.
3. Multihoming: SCTP supports multihoming, allowing both the client and
server to have multiple IP addresses. This ensures fault tolerance, as the
protocol can switch to a different IP address if one path fails, without
interrupting data transfer.
4. Full-Duplex Communication: SCTP enables full-duplex communication,
allowing data to flow in both directions simultaneously.
5. Connection-Oriented Service: SCTP is connection-oriented, where the
connection is termed an association. An association must be established before
communication begins.
6. Reliable Service: SCTP guarantees reliable data transmission using an
acknowledgment mechanism, ensuring the safe delivery of data.
