Gautami Mudhol C47, Exp7

IPv4 addressing uses unique 32-bit addresses for devices on a network, structured in classes to manage routing, but suffers from inefficiencies leading to address scarcity. To address this, subnetting and CIDR were developed, allowing for better allocation and management of IP addresses. IPv6 was introduced as a successor to IPv4 with a 128-bit address space, improved header processing, and built-in security features, addressing the limitations of IPv4, including address depletion.


1. Explain IPv4 addressing in detail.

IPv4 (Internet Protocol version 4) addressing is a fundamental concept in computer
networking, defining how devices are identified and located on a network.

What is an IPv4 Address?


Every host and router connected to the Internet is assigned a unique 32-bit IPv4 address.
These addresses are crucial for the proper routing of IP packets, serving as both source
and destination identifiers within the IP packet headers.

Structure and Format of IPv4 Addresses


An IPv4 address is 32 bits long. Historically, these addresses were divided into classes
(Class A, B, and C) based on their first few bits, which determined the network and host
portions of the address. This hierarchical design aimed to allow for scaling in Internet
routing.

However, this class-based system led to inefficiencies:


• Wasteful Allocation: Class B networks, with 65,536 addresses, were often too
large for most organizations, leading to millions of allocated but unused addresses.
Many organizations that could have used a Class C network (256 addresses) opted
for Class B, contributing to address scarcity.
• Fixed Block Sizes: The fixed sizes of address blocks in the classful system
contributed to the rapid depletion of the free address space.

Subnetting

To mitigate the issues of address waste and better manage large blocks of IP addresses
within an organization, subnetting was introduced. Subnetting allows a single allocated
block of IP addresses to be divided into smaller, multiple sub-networks. This division is
internal to the organization and not visible to the external Internet, meaning changes to
subnet divisions do not require external updates to organizations like ICANN (Internet
Corporation for Assigned Names and Numbers).
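The subnetting idea above can be sketched with Python's standard `ipaddress` module. The block and split sizes here are hypothetical, chosen only to illustrate dividing one allocation into equal sub-networks:

```python
import ipaddress

# Hypothetical example: an organization holds the block 192.168.0.0/24
# and splits it internally into four equal /26 subnets.
block = ipaddress.ip_network("192.168.0.0/24")
subnets = list(block.subnets(prefixlen_diff=2))   # /24 -> four /26s

for net in subnets:
    # Each /26 contains 64 addresses; 62 are usable for hosts
    # (the network and broadcast addresses are reserved).
    print(net, "usable hosts:", net.num_addresses - 2)
```

Because the split is internal, the outside world still routes on the original /24; only routers inside the organization need to know about the /26s.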

CIDR (Classless Inter-Domain Routing)


Even with efficient allocation of IP addresses through subnetting, the problem of routing
table explosion persisted in the core of the Internet. To address this, Classless Inter-Domain Routing (CIDR) was developed. CIDR removed the fixed class boundaries,
allowing for more flexible allocation of IP addresses using variable-length subnet masks
(VLSM). This flexibility enables:
• Address Aggregation: Multiple smaller networks can be grouped into a single,
larger block advertised to the Internet. This significantly reduces the number of
entries in routing tables, improving router performance and stability.
• Longest Matching Prefix Routing: When prefixes overlap, routers use the most
specific route (the longest matching prefix) to direct traffic, providing flexibility in
routing.
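Longest-matching-prefix selection can be illustrated with a small sketch (the routing table and next-hop labels below are hypothetical):

```python
import ipaddress

# Hypothetical routing table: an aggregated block plus a more
# specific route whose prefix overlaps it.
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "ISP uplink",
    ipaddress.ip_network("10.1.0.0/16"): "customer link",
}

def lookup(addr):
    """Return the next hop for the longest matching prefix."""
    ip = ipaddress.ip_address(addr)
    matches = [(net, hop) for net, hop in routes.items() if ip in net]
    # When prefixes overlap, the most specific (longest) prefix wins.
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(lookup("10.1.2.3"))   # matches both; the /16 is more specific
print(lookup("10.9.9.9"))   # only the /8 matches
```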
IPv4 Address Scarcity and NAT
The tremendous growth of the Internet led to a critical shortage of available IPv4
addresses. This problem was recognized decades ago, and despite efforts like CIDR to use
addresses more sparingly, the complete exhaustion of IPv4 addresses was predicted.

One significant temporary solution developed was Network Address Translation (NAT).
NAT allows multiple devices within a private network to share a single public IPv4 address.
This works by replacing the private source IP address and port with the public IP address
and a unique port when a packet leaves the private network, and vice-versa for incoming
packets. While effective in conserving IP addresses, NAT has been criticized by some
networking purists for violating the architectural model of IP, which posits that every IP
address uniquely identifies a single machine worldwide.
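The translation described above can be sketched as a minimal NAT table (all names, addresses, and port numbers below are hypothetical, and real NAT also tracks protocols and connection state):

```python
# Minimal sketch of a NAT translation table. Outbound packets from a
# private (ip, port) pair are rewritten to the shared public IP plus a
# unique public port; replies arriving at that port are mapped back.

PUBLIC_IP = "203.0.113.5"

class Nat:
    def __init__(self):
        self.out_map = {}      # (private_ip, private_port) -> public_port
        self.in_map = {}       # public_port -> (private_ip, private_port)
        self.next_port = 40000

    def outbound(self, src_ip, src_port):
        key = (src_ip, src_port)
        if key not in self.out_map:          # allocate a port on first use
            self.out_map[key] = self.next_port
            self.in_map[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.out_map[key]

    def inbound(self, public_port):
        return self.in_map[public_port]

nat = Nat()
pub_ip, pub_port = nat.outbound("192.168.1.10", 5000)
print(pub_ip, pub_port)          # the rewritten public source
print(nat.inbound(pub_port))     # maps back to the private pair
```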

The long-term solution to IPv4 address scarcity is the migration to IPv6, which uses 128-bit
addresses, making address depletion highly unlikely in the foreseeable future.

2. Describe various congestion control policies of TCP in brief.

TCP (Transmission Control Protocol) congestion control is a crucial mechanism designed
to prevent network collapse by regulating the rate at which data is sent into the network,
especially when the network is overloaded. It's distinct from flow control, which manages
the data flow between a sender and a receiver to prevent the receiver's buffer from
overflowing. Congestion control is a global issue, aiming to ensure the network can carry
the offered traffic, while flow control is a local issue between a specific sender and
receiver.

The core idea of TCP congestion control is to dynamically adjust the sending rate based on
feedback from the network. This feedback is often implicit, primarily through packet loss
(indicated by timeouts or duplicate acknowledgements) or explicit, though less commonly
used.
Here are brief descriptions of the main congestion control policies and related
mechanisms in TCP:

• Additive Increase Multiplicative Decrease (AIMD): This is the fundamental control
law for TCP congestion control.
o Additive Increase: When the network is not congested (i.e.,
acknowledgements are received), TCP increases its congestion window (the
amount of unacknowledged data it can send) by a fixed amount (typically
one segment per Round Trip Time or RTT). This allows the sending rate to
gradually probe for available bandwidth.
o Multiplicative Decrease: When congestion is detected (e.g., via a timeout
or duplicate acknowledgements), TCP reduces its congestion window by a
multiplicative factor (usually half). This quick reduction aims to alleviate
congestion rapidly. The AIMD control law aims to achieve a fair and efficient
allocation of bandwidth among competing flows.
• Slow Start: When a TCP connection begins or after a lengthy timeout (indicating
severe congestion), the congestion window is initialized to a small value (typically
one or two segments). The window then increases exponentially, rather than
linearly, for each acknowledgement received. This rapid increase allows TCP to
quickly find available bandwidth at the start of a transmission. Slow start continues
until the congestion window reaches a predefined threshold, at which point it
transitions to congestion avoidance.
• Congestion Avoidance: Once the slow start threshold is reached, TCP switches to
congestion avoidance. In this phase, the congestion window increases linearly
(additively) rather than exponentially. This more cautious increase helps to avoid
overwhelming the network while still probing for additional bandwidth.
• Fast Retransmit: If a sender receives three duplicate acknowledgements for the
same packet, it's an indication that the packet immediately following the
acknowledged one has likely been lost, even before a retransmission timeout
occurs. TCP can then immediately retransmit the presumed lost packet without
waiting for the retransmission timer to expire.
• Fast Recovery: This policy is often used in conjunction with Fast Retransmit. After a
Fast Retransmit, instead of dropping back to slow start (as after a timeout), Fast
Recovery sets the congestion window to roughly half of its value at the time the
duplicate ACKs arrived and keeps data flowing by allowing new packets to be sent
as further duplicate acknowledgements come in.
• Karn's Algorithm: This algorithm addresses an issue with retransmission timeouts.
It states that round-trip time (RTT) estimates should not be updated for segments
that have been retransmitted. This is because it's impossible to know if the
acknowledgement received is for the original transmission or a retransmission,
which could contaminate the RTT estimate. Additionally, the timeout is doubled for
each successive retransmission until the segment is acknowledged.
• Selective Acknowledgements (SACK): A significant improvement to TCP, SACK
allows a receiver to explicitly inform the sender about all the segments it has
received, not just the highest in-order segment. This provides more precise
information about packet loss, enabling the sender to retransmit only the genuinely
lost segments, improving efficiency, especially when multiple packets are lost
within a single window.
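The interaction of slow start, congestion avoidance, and multiplicative decrease described above can be sketched as a toy congestion-window simulation. All the numbers here are hypothetical, the update runs once per RTT, and the single loss event is injected by hand; real TCP tracks the window in bytes and is considerably more involved:

```python
# Toy simulation of the TCP congestion window (cwnd, in segments).
# Slow start doubles cwnd each RTT, congestion avoidance adds one
# segment per RTT, and a loss halves cwnd (multiplicative decrease).

def next_cwnd(cwnd, ssthresh, loss):
    if loss:
        half = max(cwnd // 2, 1)
        return half, half              # multiplicative decrease
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh      # slow start: exponential growth
    return cwnd + 1, ssthresh          # congestion avoidance: linear growth

cwnd, ssthresh = 1, 16
history = []
for rtt in range(10):
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss=(rtt == 6))
    history.append(cwnd)
print(history)   # [2, 4, 8, 16, 17, 18, 9, 10, 11, 12]
```

Note how the trace grows exponentially up to the threshold of 16, then linearly, then halves at the loss and resumes linear growth from the new, lower threshold.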

Over time, various TCP versions like TCP Tahoe, TCP NewReno, CUBIC TCP (used in Linux),
and Compound TCP (used in Windows) have refined these policies and their
implementations. The continuous evolution of TCP congestion control aims to balance
efficiency and fairness in sharing network resources.

3. Explain the structure of the IPv6 address. Discuss the difference between IPv4 and
IPv6.

IPv6 (Internet Protocol version 6) was developed as the successor to IPv4 to address its
limitations, most notably the impending exhaustion of its address space, and to introduce
improvements in header processing, security, and quality of service.

Structure of the IPv6 Address


An IPv6 address is 128 bits long, significantly larger than IPv4's 32-bit addresses. These
addresses are written as eight groups of four hexadecimal digits, with each group
separated by colons. For example: 8000:0000:0000:0000:0123:4567:89AB:CDEF.

To simplify the representation of these long addresses, especially those containing many
zeros, three optimizations are allowed:
• Omitting Leading Zeros: Leading zeros within any four-hexadecimal-digit group
can be omitted. For instance, 0123 can be written as 123.
• Zero Compression: One or more consecutive groups of 16 zero bits can be
replaced by a double colon (::). This compression can only be used once in an
address to avoid ambiguity. For example,
8000:0000:0000:0000:0123:4567:89AB:CDEF can be shortened to
8000::123:4567:89AB:CDEF.
• IPv4-Compatible Addresses: IPv4 addresses can be written as a pair of colons
followed by the old dotted-decimal IPv4 address format, such as ::192.31.20.46.
The sheer size of the IPv6 address space, 2^128 (approximately 3 x 10^38), makes address
depletion unlikely for the foreseeable future.
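The leading-zero and zero-compression rules can be checked with Python's standard `ipaddress` module, which prints the canonical compressed form of the example address used above:

```python
import ipaddress

# ipaddress applies the leading-zero and zero-compression rules
# automatically when formatting an IPv6 address.
addr = ipaddress.ip_address("8000:0000:0000:0000:0123:4567:89AB:CDEF")
print(addr)            # compressed form with :: and lowercase hex
print(addr.exploded)   # full eight-group form with leading zeros restored
```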

Differences Between IPv4 and IPv6


The transition from IPv4 to IPv6 involves several key differences, primarily aimed at
improving efficiency, scalability, and functionality:
• Address Length: IPv4 addresses are 32 bits long, while IPv6 addresses are 128 bits
long. This vastly expands the available address space.
• Header Format: IPv6 features a simpler, fixed-length main header of 40 bytes,
which improves routing efficiency by reducing processing overhead for routers.
Unlike IPv4, which has a variable-length header due to optional fields, IPv6 moves
most optional and non-essential fields into separate extension headers. These
extension headers are only processed by the destination host or by specific
intermediate routers if required, speeding up packet forwarding.
• Checksum: The IPv4 header includes a checksum, which routers recompute at
every hop. IPv6 eliminates this header checksum, relying on higher-layer protocols
(like TCP or UDP) and link-layer technologies to provide error detection, thereby
speeding up packet processing in routers.
• Fragmentation: In IPv4, routers can fragment packets if they exceed the Maximum
Transmission Unit (MTU) of a link. In IPv6, fragmentation can only be performed by
the source host. If a packet is too large for a link, the router discards it and sends an
ICMPv6 "Packet Too Big" message back to the source, prompting the source to
retransmit the packet with a smaller size.
• Quality of Service (QoS): IPv6 includes fields like the "Flow label" and
"Differentiated services" fields in its header to provide better support for real-time
traffic and QoS, allowing for improved handling of multimedia and other delay-
sensitive applications.
• Security: IPv6 was designed with built-in security features, specifically IPsec (IP
Security), which provides authentication and encryption capabilities at the network
layer. While IPsec was later retrofitted for IPv4, it is an integral part of the IPv6
standard.
• Stateless Auto-configuration: IPv6 supports stateless address auto-configuration
(SLAAC), allowing devices to automatically generate their own IPv6 addresses
without the need for a DHCP server, simplifying network management.
• No Broadcast Address: IPv6 does not use broadcast addresses. Instead, it uses
multicast addresses for sending traffic to a group of interfaces and anycast
addresses for sending traffic to the nearest interface among a group of interfaces,
providing more efficient ways to deliver data to multiple destinations.

4. Explain the Distance Vector Routing Algorithm in detail.

The Distance Vector Routing Algorithm is a dynamic routing algorithm where each router
maintains a table (a vector) of the best-known distance to every other destination and the
outgoing line to use to reach that destination. This algorithm operates on the principle that
routers iteratively update their routing tables by exchanging information with their directly
connected neighbors.

Here's a detailed explanation of its operation:

1. Core Principle: Distance Vectors

Each router in the network maintains a routing table that contains an entry for every other
router (or destination) in the network. This entry typically includes two pieces of
information:

a. The "distance" (or cost) to reach that destination. This distance can be
measured in various metrics, such as the number of hops, delay, or a
combination of factors like bandwidth and communication cost.
b. The "next hop" or outgoing line that should be used to send packets towards
that destination to achieve the minimum distance.
2. Information Exchange

Periodically (e.g., every T milliseconds), each router sends its entire distance vector to all
its directly connected neighbors. Concurrently, it receives similar distance vectors from its
neighbors.

3. Route Calculation and Updates

When a router receives a distance vector from a neighbor (let's call it neighbor X), it
performs the following calculation for each destination i in the received vector:
a. It determines the delay (or cost) to reach neighbor X (let's call this m). This m
value can be directly measured (e.g., using ECHO packets) or estimated.
b. For each destination i, if neighbor X claims to reach i with a delay of Xi, then
the router calculates that it can reach destination i via neighbor X in Xi + m
milliseconds.
c. The router compares this newly calculated path cost ( Xi + m) with its current
best-known path cost to destination i. If the new path via neighbor X is
shorter (i.e., Xi + m is less than its current stored distance to i), the router
updates its routing table. It sets its new distance to i as Xi + m and records
neighbor X as the next hop for destination i.
d. Crucially, the old routing table is not used in this calculation; only the
incoming vector from the neighbor and the direct cost to that neighbor are
considered.
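The calculation in steps a–c can be sketched as a single Bellman-Ford-style relaxation. The table contents, router names, and costs below are hypothetical, and for simplicity this sketch merges one incoming vector into the existing table, whereas the algorithm as described recomputes from the latest vectors of all neighbors:

```python
# Simplified sketch of the distance-vector update step.
# 'table' maps destination -> (cost, next_hop).

def merge_vector(table, neighbor, m, neighbor_vector):
    """If neighbor reaches dest i at cost Xi, we can reach it at Xi + m."""
    for dest, xi in neighbor_vector.items():
        new_cost = xi + m
        if dest not in table or new_cost < table[dest][0]:
            table[dest] = (new_cost, neighbor)   # shorter path found

table = {"A": (5, "X")}                          # current best: A via X, cost 5
merge_vector(table, "Y", 2, {"A": 1, "B": 2})    # measured delay to Y is m = 2
print(table)   # A via Y at cost 3; B via Y at cost 4
```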
4. Convergence

This exchange and update process continues iteratively. As routers propagate their
updated distance vectors, information about path changes (like new, shorter paths)
gradually spreads throughout the network. This "good news" tends to spread quickly, at the
rate of one hop per exchange. Eventually, all routers converge on the optimal paths to all
destinations, assuming the network topology remains stable for long enough.

5. The "Count-to-Infinity" Problem

A significant drawback of the Distance Vector Routing Algorithm is its slow convergence in
the face of "bad news," specifically when a link or router fails, leading to the count-to-
infinity problem.

a. Consider a scenario where router B has a path to destination A with a cost of


1 (via a direct link), and router C has a path to A with a cost of 2 (via B).
b. If the link between A and B fails (or A goes down), B initially detects that its
direct path to A is gone. However, when B receives an update from C, C
might still advertise a path to A with a cost of 2 (because C's path goes
through B, but B doesn't know this).
c. B, unaware that C's path is indirectly through itself, might then update its
path to A via C, calculating a cost of 3 (2 from C + 1 to C).
d. This "bad news" (the unavailability of a path) then propagates slowly through
the network. Each router increases its perceived distance to the unreachable
destination by one in each exchange, gradually "counting to infinity".
e. This problem can cause routing loops and packets being sent into black
holes for an extended period until the distance reaches a predefined
"infinity" value (which should be set to the longest possible path plus 1 to
ensure eventual convergence).
f. Heuristics like "split horizon with poisoned reverse" (preventing routers from
advertising their best paths back to the neighbors from which they heard
them) have been proposed to mitigate this, but they do not fully solve the
core problem.

Due to the count-to-infinity problem and slow convergence, Distance Vector Routing
was replaced by Link State Routing in the ARPANET in 1979 for internal routing.
However, variants adapted for mobile environments, like AODV (Ad hoc On-demand
Distance Vector), still exist for specialized networks.
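The scenario in steps a–e can be demonstrated with a toy linear topology A–B–C. The costs and the RIP-style "infinity" of 16 below are hypothetical:

```python
# Toy demonstration of count-to-infinity. A - B - C in a line; the A-B
# link fails. B and C then raise each other's estimated distance to A
# by one hop per exchange until both hit the "infinity" value.

INFINITY = 16                    # RIP-style maximum hop count

dist_b, dist_c = INFINITY, 2     # B lost its link to A; C still advertises 2
trace = []
while dist_b < INFINITY or dist_c < INFINITY:
    dist_b = min(dist_c + 1, INFINITY)   # B hears C's vector (cost to C is 1)
    dist_c = min(dist_b + 1, INFINITY)   # C hears B's updated vector
    trace.append((dist_b, dist_c))
print(trace)   # estimates climb in lockstep until both reach 16
```

The first exchange already shows the pathology: B adopts the bogus cost-3 route through C, matching step c above, and the pair then counts upward together.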
