The transport and application layers are crucial components of the TCP/IP model, ensuring reliable communication and enabling user-level network services. The transport layer manages end-to-end delivery, error control and flow control using protocols like TCP and UDP, while the application layer provides services such as HTTP, FTP, DNS and SMTP. A deep understanding of these layers is essential for diagnosing network issues, optimizing performance and excelling in technical networking interviews.
1. Explain how TCP’s 3-Way Handshake works and discuss what vulnerabilities it introduces.
Process:
- SYN: Client sends a SYN segment carrying its initial sequence number (ISN).
- SYN-ACK: Server replies with SYN-ACK, acknowledging the client's sequence number and supplying its own ISN.
- ACK: Client sends the final ACK, completing the connection setup.
Purpose: Ensures both parties agree on initial sequence numbers -> reliable, synchronized communication.
Vulnerabilities:
- Susceptible to SYN flood attacks: attacker sends large number of SYNs but never completes handshake, filling server’s half-open connection queue.
- Can lead to denial-of-service (DoS).
Mitigation: SYN cookies, firewalls, intrusion prevention systems (IPS) and rate limiting.
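A minimal sketch of the handshake from the client side, using Python's standard socket module (the hostname is just a placeholder): a blocking connect() only returns once the three-way exchange completes.

```python
import socket

# connect() performs the full 3-way handshake before returning: the OS sends
# SYN, waits for the server's SYN-ACK, then replies with the final ACK.
# Running this alongside "tcpdump -i any port 80" would show all three segments.
sock = socket.create_connection(("example.com", 80), timeout=5)
print("Handshake complete, local endpoint:", sock.getsockname())
sock.close()
```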
2. How does TCP differ from UDP in terms of reliability and why might UDP still be preferred?
TCP:
- Connection-oriented.
- Reliability via sequence numbers, ACKs, retransmissions.
- Provides error-checking and congestion/flow control.
- Higher overhead.
UDP:
- Connectionless and unreliable (no ACKs or retransmissions).
- Low overhead, faster delivery.
- No congestion/flow control.
Why UDP Preferred:
- Real-time apps (VoIP, gaming, streaming) value low latency over reliability.
- Minor packet loss is tolerable; retransmissions would hurt performance more than they help.
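A small illustration of the difference, assuming placeholder endpoints: TCP must complete the handshake before any byte moves, while UDP hands its datagram straight to the network.

```python
import socket

# TCP: connect() blocks until the 3-way handshake finishes.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")  # reliable, ordered
tcp.close()

# UDP: no handshake, no ACKs -- fire-and-forget (203.0.113.1 is a documentation
# address; in real use this would be a live media or game server).
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("203.0.113.1", 9999))  # delivery is not guaranteed
udp.close()
```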
3. What is the role of port numbers in transport layer communication and how are ephemeral ports used?
Role of Port Numbers:
- Identify specific applications/services on a host.
- Example: HTTP -> port 80, HTTPS -> 443, DNS -> 53.
- Allow multiplexing of multiple services on the same IP.
Ephemeral Ports:
- Temporary, dynamically allocated by the client OS (IANA dynamic range 49152–65535; Linux defaults to 32768–60999).
- Used for outgoing connections.
- Each session uniquely identified by (Source IP, Source Port, Destination IP, Destination Port).
- Allow multiple simultaneous connections (e.g., opening many browser tabs).
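The ephemeral-port mechanics are easy to observe with a quick sketch (placeholder host): the OS picks a fresh source port for each outgoing connection, and getsockname() reveals it.

```python
import socket

# Each connection to the same server gets its own ephemeral source port,
# so the (src IP, src port, dst IP, dst port) tuple stays unique.
connections = []
for _ in range(3):
    s = socket.create_connection(("example.com", 80), timeout=5)
    ip, port = s.getsockname()
    print(f"local {ip}:{port} -> remote example.com:80")  # port differs each time
    connections.append(s)

for s in connections:
    s.close()
```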
4. Explain how congestion control in TCP differs from flow control.
Flow Control (Receiver-Side):
- Ensures sender doesn’t overwhelm receiver’s buffer.
- Managed using the receive window (rwnd) field in the TCP header.
- Protects end-hosts.
Congestion Control (Network-Side):
- Prevents sender from flooding the network with too much traffic.
- Algorithms: Slow Start, Congestion Avoidance, Fast Retransmit, Fast Recovery.
- Protects network paths.
Difference:
- Flow control = end-to-end buffer management.
- Congestion control = managing bandwidth and preventing congestion collapse.
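A toy model (not real kernel behavior) of how the congestion window grows, with an illustrative ssthresh of 16 MSS; the sender's effective limit is always min(cwnd, rwnd), which is where the two mechanisms meet.

```python
# Illustrative slow start / congestion avoidance growth, in MSS units.
cwnd, ssthresh, rwnd = 1, 16, 64

for rtt in range(1, 11):
    phase = "slow start" if cwnd < ssthresh else "congestion avoidance"
    effective = min(cwnd, rwnd)        # flow control caps congestion control
    print(f"RTT {rtt:2d}: cwnd={cwnd:3d} MSS, sendable={effective:3d} ({phase})")
    if cwnd < ssthresh:
        cwnd *= 2                      # exponential growth per RTT
    else:
        cwnd += 1                      # additive (linear) growth per RTT
```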
5. How does HTTP/2 improve upon HTTP/1.1 in terms of application layer performance?
HTTP/1.1 Limitations:
- Head-of-line blocking: only one outstanding request per connection (pipelining exists but is rarely usable in practice).
- High overhead with repeated headers.
- Inefficient multiple connections per page.
HTTP/2 Improvements:
- Multiplexing: Multiple streams over a single TCP connection, removing head-of-line blocking at the application layer (TCP-level blocking remains).
- Header Compression (HPACK): Reduces repetitive header overhead.
- Server Push: Server can proactively send resources before client requests them.
- Binary Framing: More efficient parsing vs. text-based HTTP/1.1.
Result: Reduced latency, faster page loads, better performance on high-latency or lossy networks.
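A short demonstration of multiplexing, assuming the third-party httpx library (installed with `pip install "httpx[http2]"`) and a server that negotiates HTTP/2:

```python
import httpx

# Both requests share one TCP+TLS connection; HTTP/2 frames interleave
# the streams instead of queueing whole responses behind each other.
with httpx.Client(http2=True) as client:
    r1 = client.get("https://www.example.com/")
    r2 = client.get("https://www.example.com/")
    print(r1.http_version, r2.http_version)  # "HTTP/2" if negotiated
```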
6. Compare persistent and non-persistent HTTP connections and their impact on performance.
Non-persistent (HTTP/1.0):
- Each HTTP request/response requires a separate TCP connection. For a webpage with multiple objects (HTML, CSS, JS, images), the browser opens multiple TCP sessions, incurring the overhead of a 3-way handshake and slow start each time.
- Impact: Higher latency, inefficient bandwidth use.
Persistent (HTTP/1.1 and beyond):
- A single TCP connection can handle multiple requests sequentially or in parallel (with pipelining/multiplexing).
- Impact: Less handshake overhead, reduced latency, faster page loads.
Note: Persistent connections dramatically improve web performance by minimizing setup costs.
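The reuse is visible with Python's standard http.client, where one HTTPConnection (placeholder host) carries several request/response cycles over a single TCP session:

```python
import http.client

# One persistent HTTP/1.1 connection: the TCP handshake and slow start
# are paid once, then reused for every request below.
conn = http.client.HTTPConnection("example.com", 80, timeout=5)
for _ in range(3):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()                  # drain the body before reusing the socket
    print(resp.status, "served over the same TCP connection")
conn.close()
```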
7. How does DNS caching work and what risks does it introduce?
Mechanism:
- DNS responses are cached at several levels:
  - Browser cache
  - OS resolver cache
  - ISP recursive DNS server cache
- Caching reduces query time and avoids repetitive lookups.
Risks:
- DNS cache poisoning: An attacker injects forged DNS records, redirecting users to malicious websites.
- Stale cache entries: Users may reach outdated IPs if records aren’t refreshed.
Mitigation: DNSSEC (digital signatures), short TTL values and cache flushing.
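A minimal stub-resolver cache, assuming a fixed TTL for illustration (a real resolver honors the TTL carried in each DNS record, which is also what bounds how long a stale or poisoned entry survives):

```python
import socket
import time

CACHE: dict[str, tuple[str, float]] = {}   # hostname -> (IP, expiry time)
TTL_SECONDS = 60.0                         # illustrative fixed TTL

def resolve(hostname: str) -> str:
    entry = CACHE.get(hostname)
    if entry and time.monotonic() < entry[1]:
        return entry[0]                        # cache hit: no network query
    ip = socket.gethostbyname(hostname)        # miss or expired: look up again
    CACHE[hostname] = (ip, time.monotonic() + TTL_SECONDS)
    return ip

print(resolve("example.com"))  # first call queries DNS
print(resolve("example.com"))  # second call is answered from the cache
```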
8. Explain how QUIC differs from TCP + TLS in terms of transport layer operations.
Traditional TCP + TLS:
- Requires the 3-way TCP handshake plus a TLS handshake before secure data transmission (3 RTTs with TLS 1.2, 2 with TLS 1.3).
- Susceptible to TCP-level head-of-line blocking (a lost packet stalls all data behind it on the connection).
QUIC (on UDP):
- Combines transport + encryption (TLS 1.3) in a single handshake (typically 1 RTT; 0-RTT on resumption) -> faster setup.
- Multiplexing streams independently avoids head-of-line blocking.
- Connection migration: QUIC identifies connections by IDs, not IP/port, so mobility (switching Wi-Fi -> 4G) is seamless.
Note: QUIC provides faster, more reliable transport optimized for web traffic -> backbone of HTTP/3.
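Back-of-the-envelope setup cost before the first request byte, assuming a 50 ms round-trip time and ignoring variations like TCP Fast Open:

```python
rtt_ms = 50                            # assumed round-trip time

tcp_tls12 = (1 + 2) * rtt_ms           # TCP handshake + TLS 1.2 handshake
tcp_tls13 = (1 + 1) * rtt_ms           # TCP handshake + TLS 1.3 handshake
quic_new  = 1 * rtt_ms                 # QUIC: transport + TLS 1.3 combined
quic_0rtt = 0                          # resumed QUIC connection with 0-RTT data

print(f"TCP+TLS1.2: {tcp_tls12} ms | TCP+TLS1.3: {tcp_tls13} ms | "
      f"QUIC: {quic_new} ms | QUIC 0-RTT: {quic_0rtt} ms")
```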
9. What is SMTP’s role in the application layer and how does it differ from IMAP and POP3?
SMTP (Simple Mail Transfer Protocol):
- Used for sending emails along the delivery path: mail client -> mail server -> destination server.
- Default ports: 25 (server-to-server), 587 (client submission).
POP3 (Post Office Protocol):
- Downloads emails to local device.
- Typically deletes server copy -> no synchronization across devices.
IMAP (Internet Message Access Protocol):
- Keeps emails on the server.
- Supports multiple devices with synced state (read/unread, folders).
Note: SMTP = sending; POP3 = download-only access; IMAP = synchronized multi-device access.
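A minimal submission sketch with Python's standard smtplib; the server name and credentials are placeholders, so this only runs against a real account:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"], msg["To"] = "me@example.com", "you@example.com"   # placeholders
msg["Subject"] = "Hello"
msg.set_content("Sent via SMTP; retrieval would use IMAP or POP3 instead.")

# Port 587 is the client-submission port; starttls() upgrades to encryption.
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()
    server.login("me@example.com", "app-password")             # placeholder
    server.send_message(msg)
```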
10. How does TCP handle out-of-order packets and why does it matter for application performance?
Handling:
- TCP assigns sequence numbers to each byte.
- Receiver buffers out-of-order packets until missing ones arrive.
- Cumulative ACKs confirm only the highest contiguous byte received; repeated (duplicate) ACKs for the same byte trigger fast retransmit.
Why it matters:
- Ensures data reliability.
- However, frequent reordering -> unnecessary retransmissions -> throughput reduction.
Note: TCP guarantees ordered delivery but at the cost of performance degradation under high reordering scenarios (e.g., wireless or multipath networks).
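A toy receive buffer illustrating the mechanism: out-of-order data is held until the gap fills, then everything contiguous is released to the application in order.

```python
buffer: dict[int, bytes] = {}   # sequence number -> buffered payload
next_seq = 0                    # next byte the application expects

def receive(seq: int, payload: bytes) -> bytes:
    """Buffer a segment; return whatever is now deliverable in order."""
    global next_seq
    buffer[seq] = payload
    delivered = b""
    while next_seq in buffer:              # drain every contiguous segment
        chunk = buffer.pop(next_seq)
        delivered += chunk
        next_seq += len(chunk)
    return delivered                       # empty while a gap remains

print(receive(5, b"world"))   # b'' -- buffered, bytes 0-4 are still missing
print(receive(0, b"hello"))   # b'helloworld' -- gap filled, delivered in order
```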
11. How does TCP handle the problem of packet loss and ensure data arrives in order?
Packet Loss Handling: TCP assigns sequence numbers to segments and uses ACKs (acknowledgments) to confirm delivery. If an ACK is not received within the Retransmission Timeout (RTO), TCP retransmits the lost segment.
In-order Delivery: The receiver uses sequence numbers to reorder out-of-sequence segments before delivering them to the application.
Optimizations:
- Fast Retransmit: Resends a segment after receiving three duplicate ACKs without waiting for RTO.
- Selective Acknowledgment (SACK): Allows the receiver to inform the sender about exactly which segments were received, avoiding unnecessary retransmissions.
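A sketch of the sender-side trigger, under the simplifying assumption that each ACK is just a number: three duplicates of the same ACK prompt an immediate resend.

```python
def process_acks(acks: list[int]) -> list[int]:
    """Return the sequence numbers that fast retransmit would resend."""
    retransmitted = []
    last_ack, dup_count = None, 0
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:             # 3 duplicate ACKs -> fast retransmit
                retransmitted.append(ack)  # resend the segment starting here
        else:
            last_ack, dup_count = ack, 0
    return retransmitted

# ACK 1000 repeats because the segment at byte 1000 was lost in transit.
print(process_acks([1000, 1000, 1000, 1000, 2000]))  # -> [1000]
```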
12. Why does TCP use both sequence numbers and acknowledgment numbers?
Sequence Numbers:
- Uniquely identify each byte of data sent in a TCP stream.
- Ensure correct ordering of segments (helps reassemble out-of-order packets).
- Prevent duplication when retransmissions occur.
Acknowledgment Numbers:
- Indicate the next expected byte from the other side.
- Provide feedback for flow control and retransmission.
Note: Using both ensures bidirectional reliability: sender knows exactly which bytes were received and receiver knows how to reorder and validate incoming segments.
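The arithmetic linking the two numbers is simple: an ACK names the next byte expected, i.e. the received sequence number plus the payload length.

```python
# A segment carrying bytes 0-99 (seq=0, len=100) is acknowledged with ack=100:
# "everything up to byte 99 arrived; send byte 100 next."
seq, length = 0, 100
ack = seq + length
print(f"received seq={seq} len={length} -> reply ACK {ack}")
```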
13. How does UDP handle real-time applications better than TCP?
UDP Characteristics:
- Connectionless, no handshake.
- No retransmissions or acknowledgments.
- No ordering of packets.
Advantages for Real-Time:
- Low Latency: Data sent immediately without waiting for ACKs.
- Tolerance of Loss: Real-time apps (VoIP, video streaming, online gaming) prefer minor packet loss over delays.
- Reduced Jitter: Continuous stream without retransmission delays ensures smoother playback.
Note: UDP sacrifices reliability for speed and timeliness, which real-time apps need.
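A toy receiver showing the trade in practice: each frame gets a fixed time budget, and a late or lost datagram is simply skipped rather than waited for (loopback address and port are placeholders; point a UDP sender at it to see frames arrive).

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 5005))
sock.settimeout(0.02)            # 20 ms budget per frame

for frame in range(3):
    try:
        data, _ = sock.recvfrom(2048)
        print(f"frame {frame}: play {len(data)} bytes")
    except socket.timeout:
        # No retransmission request -- conceal the loss and keep playing.
        print(f"frame {frame}: late/lost -> skipped")
sock.close()
```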
14. Explain the working of the sliding window protocol in TCP.
The sliding window protocol allows TCP to send multiple segments before requiring ACKs.
Window Size: Determines how much unacknowledged data can be "in flight."
Operation:
- Sender transmits up to the window size.
- As ACKs arrive, the window "slides forward," allowing new data to be sent.
Benefits:
- Efficient bandwidth utilization.
- Supports pipelining (multiple outstanding packets).
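A toy trace of the sliding motion, assuming a fixed window of 4 segments and one cumulative ACK per loop iteration:

```python
segments = list(range(10))   # segment numbers to transmit
window = 4                   # max unacknowledged segments in flight
base = 0                     # oldest unacknowledged segment
next_to_send = 0

while base < len(segments):
    # Fill the window: keep sending while in-flight count is below the limit.
    while next_to_send < len(segments) and next_to_send - base < window:
        print(f"send segment {next_to_send}")
        next_to_send += 1
    # An ACK for the oldest segment arrives; the window slides forward.
    print(f"  ACK for segment {base} -> window slides")
    base += 1
```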
15. Why is TLS/SSL considered an Application Layer protocol despite operating on TCP?
TLS/SSL Role: Provides confidentiality (encryption), integrity (hashing) and authentication (certificates) for data exchanged between applications.
Positioning:
- Works above TCP and below application protocols like HTTP, SMTP, FTP.
- Does not change TCP’s transport functions (connection, reliability).
Reason for Application Layer Classification:
- Security services are application-specific (e.g., HTTPS, FTPS).
- Operates as a library/service invoked by applications rather than as part of TCP itself.
Note: TLS/SSL is treated as an Application Layer protocol that enhances communication security without modifying TCP.
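The layering is visible in Python's standard library: ssl wraps an ordinary TCP socket, leaving the transport untouched while the application reads and writes plaintext (placeholder host).

```python
import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("example.com", 443), timeout=5) as tcp_sock:
    # Same TCP connection underneath; TLS adds encryption above it.
    with context.wrap_socket(tcp_sock, server_hostname="example.com") as tls:
        print(tls.version())   # e.g. "TLSv1.3"
        tls.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls.recv(64))    # encrypted on the wire, plaintext here
```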
16. How does TCP avoid the “Silly Window Syndrome (SWS)”?
Problem (SWS): Occurs when the sender transmits tiny segments or the receiver advertises tiny windows, causing inefficient use of bandwidth (many small packets, each carrying full header overhead).
Prevention (by TCP):
- Sender Side (Nagle’s Algorithm): Hold back small segments until outstanding data is acknowledged or a full MSS (Maximum Segment Size) of data accumulates.
- Receiver Side (Clark’s Solution): Don’t advertise very small window sizes; wait until there’s enough buffer space.
Note: These rules keep transmission efficient by avoiding a stream of unnecessarily small packets; a toy sketch of Nagle’s send rule follows.
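A toy sketch of the sender-side rule (not kernel code): small writes are buffered while earlier data is unacknowledged, and flushed on an ACK or once a full MSS accumulates.

```python
MSS = 1460                # illustrative maximum segment size in bytes
pending = bytearray()     # small writes waiting to be coalesced
unacked = False           # is earlier data still in flight?

def flush():
    global unacked
    print(f"segment sent: {len(pending)} bytes")
    pending.clear()
    unacked = True

def app_write(data: bytes):
    pending.extend(data)
    if not unacked or len(pending) >= MSS:   # Nagle's send condition
        flush()

def on_ack():
    global unacked
    unacked = False
    if pending:
        flush()                              # ACK releases buffered bytes

app_write(b"h")   # nothing in flight -> sent immediately
app_write(b"i")   # held back: previous segment not yet ACKed
on_ack()          # ACK arrives -> the buffered byte goes out
```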
17. Explain the concept of "connection teardown" in TCP.
TCP connection termination is a four-way FIN/ACK handshake:
- Sender sends FIN to indicate no more data.
- Receiver ACKs FIN.
- Receiver sends its own FIN when ready to close.
- Sender ACKs the receiver's FIN.
This ensures a graceful shutdown with no loss of data in transit; the side that closes first also lingers in TIME_WAIT to absorb delayed segments.
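The first half of the teardown is directly scriptable: shutdown(SHUT_WR) sends our FIN while leaving the receive side open, a half-close (placeholder host).

```python
import socket

sock = socket.create_connection(("example.com", 80), timeout=5)
sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
sock.shutdown(socket.SHUT_WR)        # our FIN: "no more data from this side"
while chunk := sock.recv(4096):      # peer may keep sending until its own FIN
    pass                             # recv() returning b"" means the peer's FIN arrived
sock.close()                         # release resources after both halves close
```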
18. Why does TCP use exponential backoff for retransmissions?
- Working: If a segment times out, TCP doubles the Retransmission Timeout (RTO) for the next attempt (1s -> 2s -> 4s …).
- Reason: Prevents network collapse during congestion by avoiding aggressive retransmissions. Gives routers time to clear queued packets.
- Advantage: Adaptive and fair -> balances retransmission urgency with network health.
Note: Exponential backoff is crucial to TCP’s congestion control and ensures stability under heavy load.
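A sketch of the doubling pattern (TCP does this inside the kernel; the same idiom is widely reused for application-level retries), with a random stand-in for whether an ACK arrived:

```python
import random

rto = 1.0                                   # initial retransmission timeout (s)
for attempt in range(1, 6):
    acked = random.random() < 0.3           # stand-in for "ACK arrived in time"
    if acked:
        print(f"attempt {attempt}: acknowledged")
        break
    print(f"attempt {attempt}: timeout after {rto:.0f}s -> RTO doubles")
    rto *= 2                                # 1s -> 2s -> 4s -> 8s ...
```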
19. What problem does Nagle’s algorithm solve in TCP and when can it be harmful?
Problem Solved:
- Reduces network congestion by combining many small packets into fewer, larger segments.
- Especially useful in applications sending small chunks of data (e.g., keystrokes).
Working: TCP holds back small outgoing segments until the previous segment is acknowledged, then sends accumulated data.
Harmful Case:
- Causes latency in interactive applications (e.g., online games, VoIP) where immediate transmission is required.
- The delay is worst when Nagle’s algorithm interacts with delayed ACKs (the “Nagle + delayed ACK” problem).
Note: Nagle’s algorithm improves efficiency but can hurt low-latency, real-time applications, which is why such applications often disable it (see below).
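Latency-sensitive code disables Nagle’s algorithm with the standard TCP_NODELAY socket option (placeholder host):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
sock.connect(("example.com", 80))
sock.sendall(b"x")   # sent immediately, not held for coalescing with later writes
sock.close()
```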
20. Compare TCP and UDP checksum mechanisms. Why is UDP checksum optional?
TCP Checksum:
- Mandatory.
- Covers header + data + pseudo-header (source/destination IPs).
- Ensures reliability (bit errors get detected -> retransmission).
UDP Checksum:
- Optional in IPv4 (mandatory in IPv6).
- Covers header + data + pseudo-header.
- If checksum field = 0 -> means “not used.”
Reason Optional in UDP:
- UDP is often used in real-time apps (VoIP, video streaming) where speed > reliability.
- Applications may implement their own error-checking mechanisms.
Note: TCP demands strict reliability, while UDP provides flexibility for low-latency use cases.
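Both protocols use the same one's-complement Internet checksum (RFC 1071); a minimal sketch over an arbitrary byte string:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum, as used by TCP and UDP (computed over
    pseudo-header + header + payload in the real protocols)."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add 16-bit words
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF                        # one's complement of the sum

print(hex(internet_checksum(b"hello world")))
# A receiver that sums data plus the checksum gets 0xFFFF if no bits flipped.
```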