cn notes

The document provides an overview of various network topologies, including bus, star, ring, and mesh, along with their advantages and disadvantages. It also discusses two primary communication methods, circuit switching and packet switching, detailing their characteristics, benefits, and drawbacks. Additionally, it covers network types (LAN, MAN, WAN), the OSI and TCP/IP models, transmission media (twisted pair, coaxial cable, fiber optics), and data link layer design issues and protocols.


Module 1

Tuesday, November 7, 2023 1:33 PM

Network Topology
Network topology defines the structure of a network and how its components are connected to each other.
1. Bus Topology:
• In a bus topology, all devices are connected to a single central cable (the bus).
• Data is transmitted in both directions along the shared cable; every device sees each transmission, but only the device it is intended for processes it.
• It's a simple and inexpensive topology but can suffer from performance issues if too many devices are connected.
2. Star Topology:
• In a star topology, all devices are connected to a central hub or switch.
• Data passes through the hub or switch, allowing for easy management and fault isolation.
• If the hub or switch fails, the entire network can be affected.
3. Ring Topology:
• In a ring topology, each device is connected to exactly two other devices, forming a closed loop.
• Data travels in one direction through the ring until it reaches its destination.
• It's less common in modern networks due to its susceptibility to network disruptions if a single connection or device fails.
4. Mesh Topology:
• In a full mesh topology, every device is connected to every other device.
• Provides high redundancy and fault tolerance, making it suitable for critical applications.
• However, it can be expensive and challenging to manage in large networks.
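The cost difference between these topologies can be made concrete by counting the point-to-point links each needs for n devices. This is a small illustrative sketch (the function names are mine); the full-mesh count n(n−1)/2 is what makes mesh expensive at scale:

```python
# Number of links needed per topology, for n devices (illustrative sketch).
def links_bus(n):
    return 1                        # one shared cable that all n devices tap into

def links_star(n):
    return n                        # one link from each device to the central hub

def links_ring(n):
    return n                        # closed loop: one link per device

def links_full_mesh(n):
    return n * (n - 1) // 2         # every pair of devices gets a dedicated link

for n in (4, 8, 16):
    print(f"n={n}: star={links_star(n)} ring={links_ring(n)} "
          f"full mesh={links_full_mesh(n)}")
```

Note how the full-mesh link count grows quadratically: doubling the device count roughly quadruples the cabling.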

Switching
1. Circuit Switching
a. Circuit switching is a communication method where a dedicated communication path, or circuit, is established between two
devices before data transmission begins.
b. The circuit remains dedicated to the communication for the duration of the session, and no other devices can use it while the
session is in progress.
c. Circuit switching is commonly used in voice communication and some types of data communication.
Advantages of Circuit Switching:
• Guaranteed bandwidth: Circuit switching provides a dedicated path for communication, ensuring that bandwidth is guaranteed
for the duration of the call.
• Low latency: Circuit switching provides low latency because the path is predetermined, and there is no need to establish a
connection for each packet.
• Predictable performance: Circuit switching provides predictable performance because the bandwidth is reserved, and there is
no competition for resources.
• Suitable for real-time communication: Circuit switching is suitable for real-time communication, such as voice and video, because it provides low latency and predictable performance.

Disadvantages of Circuit Switching:
• Inefficient use of bandwidth: Circuit switching is inefficient because the bandwidth is reserved for the entire duration of the
call, even when no data is being transmitted.
• Limited scalability: Circuit switching is limited in its scalability because the number of circuits that can be established is finite,
which can limit the number of simultaneous calls that can be made.
• High cost: Circuit switching is expensive because it requires dedicated resources, such as hardware and bandwidth, for the
duration of the call.

2. Packet Switching
a. Packet switching is a communication method where data is divided into smaller units called packets and transmitted over the
network.
b. Each packet contains the source and destination addresses, as well as other information needed for routing.
c. The packets may take different paths to reach their destination, and they may be transmitted out of order or delayed due to
network congestion.
Advantages of Packet Switching:

• Efficient use of bandwidth: Packet switching is efficient because bandwidth is shared among multiple users, and resources are allocated only when data needs to be transmitted.
• Flexible: Packet switching is flexible and can handle a wide range of data rates and packet sizes.
• Scalable: Packet switching is highly scalable and can handle large amounts of traffic on a network.
• Lower cost: Packet switching is less expensive than circuit switching because resources are shared among multiple users.
Disadvantages of Packet Switching:
• Higher latency: Packet switching has higher latency than circuit switching because packets must be routed through multiple
nodes, which can cause delay.
• Limited QoS: Packet switching provides limited QoS guarantees, meaning that different types of traffic may be treated equally.
• Packet loss: Packet switching can result in packet loss due to congestion on the network or errors in transmission.
• Unsuitable for real-time communication: Packet switching is not suitable for real-time communication, such as voice and video,
because of the potential for latency and packet loss.
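The packetization described above — data divided into addressed, sequence-numbered packets that may arrive out of order — can be sketched in Python. The packet format here is a toy dictionary, not a real protocol header:

```python
import random

def packetize(data: bytes, src: str, dst: str, size: int = 4):
    """Split data into packets, each carrying source/destination addresses
    and a sequence number so the receiver can reorder them."""
    count = (len(data) + size - 1) // size          # ceiling division
    return [{"src": src, "dst": dst, "seq": i,
             "payload": data[i * size:(i + 1) * size]}
            for i in range(count)]

def reassemble(packets):
    """Put packets back in order by sequence number and rejoin the payloads."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = packetize(b"HELLO WORLD", "10.0.0.1", "10.0.0.2")
random.shuffle(pkts)                 # packets may take different paths and arrive out of order
assert reassemble(pkts) == b"HELLO WORLD"
```

The shuffle stands in for packets taking different routes; the sequence number is what lets the destination restore the original order.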

Circuit Switching vs Packet Switching

1. Circuit switching has three phases: connection establishment, data transfer, and connection release. In packet switching, data transfer takes place directly.
2. In circuit switching, each data unit knows the entire path, which is provided by the source. In packet switching, each data unit knows only the final destination address; the intermediate path is decided by the routers.
3. In circuit switching, data is processed at the source system only. In packet switching, data is processed at all intermediate nodes, including the source system.
4. The delay between data units in circuit switching is uniform; in packet switching it is not uniform.
5. Resource reservation is a feature of circuit switching because the path is fixed for data transmission. In packet switching there is no resource reservation because bandwidth is shared among users.
6. Circuit switching is more reliable; packet switching is less reliable.
7. Wastage of resources is greater in circuit switching than in packet switching.
8. Circuit switching is not a store-and-forward technique; packet switching is a store-and-forward technique.
9. In circuit switching, transmission of the data is done only by the source. In packet switching, it is done not only by the source but also by the intermediate routers.
10. In circuit switching, congestion can occur during the connection establishment phase, when a request is made for a channel that is already occupied. In packet switching, congestion can occur during the data transfer phase if a large number of packets arrive in a short time.
11. Circuit switching is not convenient for handling bilateral traffic; packet switching is suitable for handling bilateral traffic.
12. In circuit switching, the charge depends on time and distance, not on traffic in the network. In packet switching, the charge is based on the number of bytes and the connection time.
13. Recording of packets is never possible in circuit switching; it is possible in packet switching.
14. In circuit switching there is a physical path between the source and the destination; in packet switching there is no physical path.
15. Call setup is required in circuit switching; no call setup is required in packet switching.
16. In circuit switching each packet follows the same route; in packet switching packets can follow any route.
17. Circuit switching is implemented at the physical layer; packet switching is implemented at the data link layer and network layer.
18. Circuit switching requires simple protocols for delivery; packet switching requires complex protocols.

Network Types
1. LAN (Local Area Network):
• LAN is a network that typically covers a small geographical area, such as a single building, office, or campus.
• Devices within a LAN are usually connected using Ethernet cables or wireless technologies like Wi-Fi.
• LANs are commonly used in homes, offices, and schools for sharing resources like printers, files, and internet connections.
• They are characterized by high data transfer rates and low latency.
2. MAN (Metropolitan Area Network):
• MAN covers a larger geographical area than a LAN but is still limited to a city or a large campus.
• MANs are used to interconnect multiple LANs within a metropolitan area, allowing for data sharing and communication between
them.
• They are often used by businesses, government agencies, or educational institutions to connect their various locations across a city.
3. WAN (Wide Area Network):
• WAN is a network that spans a wide geographical area, often across cities, countries, or even continents.
• WANs can be established using various technologies, including leased lines, fiber-optic cables, and satellite links.
• The internet itself is a global example of a WAN, connecting networks worldwide.
• WANs are suitable for long-distance communication and the exchange of data between distant locations.

Reference Models

1. OSI
The OSI (Open Systems Interconnection) model defines a conceptual framework for understanding and standardizing network
communication. It consists of seven layers, each with its specific functions and responsibilities:
1. Physical Layer: The physical layer deals with the actual transmission of raw binary data over physical media, such as cables and
electrical voltages. It defines characteristics like signal voltage levels, cable types, and data transmission rates.
2. Data Link Layer: This layer is responsible for creating a reliable link between two directly connected nodes, ensuring error detection
and correction and handling flow control. Ethernet and Wi-Fi are examples of data link layer technologies.
3. Network Layer: The network layer is in charge of routing data packets between different networks. It provides logical addressing (IP
addresses) and determines the best path for data to travel between the source and destination using routing algorithms.
4. Transport Layer: The transport layer establishes end-to-end communication, ensuring data integrity, reliability, and flow control. It
uses protocols like TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
5. Session Layer: The session layer manages, establishes, and terminates communication sessions or connections between applications.
It also handles synchronization and checkpointing during data exchange.
6. Presentation Layer: Responsible for data translation, encryption, and compression, the presentation layer ensures that data sent by
the application layer is in a format that the application on the receiving end can understand and use.
7. Application Layer: The top layer of the OSI model interacts directly with user applications and provides network services, including
file transfer, email, and remote access. It is where user-level protocols and applications operate, such as HTTP for web browsing and
SMTP for email.

2. TCP/IP
1. Application Layer: The application layer is responsible for providing network services directly to user applications. It includes various
protocols like HTTP, FTP, SMTP, and DNS, enabling applications to communicate over the network.
2. Transport Layer: This layer ensures end-to-end communication, handling functions such as data segmentation, flow control, error
detection, and reliability. Notable protocols in this layer are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
3. Internet Layer (Network Layer): The internet layer manages routing and forwarding of data packets between different networks. It
uses IP (Internet Protocol) to assign logical addresses (IP addresses) to devices and determine the best path for data transmission.
4. Link Layer (Data Link Layer): The link layer deals with the physical connection between directly connected nodes, handling data
framing, error detection, and physical addressing. Ethernet and Wi-Fi are examples of link layer technologies.

Module 2
Tuesday, November 7, 2023 3:26 PM

Guided Transmission Media


Twisted pair, coaxial cable, and fiber optics are three different types of transmission media used in networking and telecommunications
to transmit data and signals. Each has its own advantages and is suitable for different applications. Here's an overview of each:
1. Twisted Pair:
• Twisted pair cables are one of the most common types of transmission media in networking. They consist of pairs of insulated
copper wires twisted together.
• There are two main categories of twisted pair cables: unshielded twisted pair (UTP) and shielded twisted pair (STP). UTP is
commonly used in Ethernet networking, while STP has additional shielding to reduce electromagnetic interference.
• Twisted pair cables are relatively inexpensive and easy to install, making them a popular choice for home and office networks.
• They are suitable for shorter distances and lower data rates compared to other types of cables.
2. Coaxial Cable:
• Coaxial cables consist of a central conductor (usually copper or aluminum) surrounded by an insulating layer, a metallic shield, and
an outer insulating layer.
• Coaxial cables are known for their high bandwidth and ability to transmit data over longer distances than twisted pair cables.
• They are commonly used in cable television (CATV) systems, as well as for broadband internet connections. They provide good
resistance to interference and can carry both analog and digital signals.
3. Fiber Optics:
• Fiber optic cables use light pulses to transmit data. They consist of a core made of glass or plastic fibers that carry the light signals,
surrounded by cladding that reflects the light back into the core.
• Fiber optics offer extremely high data transfer rates and very long transmission distances, making them ideal for high-speed internet,
long-distance communication, and high-capacity data networks.
• They are immune to electromagnetic interference and are much more secure for data transmission since they are difficult to tap or
intercept without physical access to the cable.
• Fiber optics are used extensively in telecommunications networks, data centers, and high-performance computing environments.

Data Link Layer Design Issues


1. Frame Delimitation: One of the primary challenges in framing is to identify where one frame starts and ends within the continuous
stream of data. To achieve this, special markers or patterns are inserted into the data stream to indicate the beginning and end of each
frame. This helps the receiver distinguish between different frames and prevent the merging of adjacent frames.
2. Addressing: Each frame typically contains addressing information to specify the source and destination of the data. This addressing
allows the network to deliver the frame to the correct destination node. The addressing information can include source and destination
MAC (Media Access Control) addresses.
3. Error Detection: To ensure the integrity of data during transmission, the frame often includes error detection mechanisms. A common
method is to append a Frame Check Sequence (FCS) to the frame, which is used by the receiver to check for any errors in the frame. If
the receiver detects errors in the FCS, it can request a retransmission of the frame.
4. Flow Control: Frames can also be used for flow control purposes. Flow control mechanisms help manage the rate at which data is
transmitted, ensuring that the receiver can handle the incoming data without becoming overwhelmed. Flow control can involve
mechanisms like acknowledging received frames, preventing data congestion, and avoiding buffer overflow at the receiver.
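The FCS check from point 3 can be sketched with a CRC. Here CRC-32 from Python's zlib module stands in for the link layer's actual FCS polynomial (this is a sketch of the idea, not a real frame format):

```python
import zlib

def add_fcs(payload: bytes) -> bytes:
    """Sender side: append a 4-byte CRC-32 over the payload as the FCS."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_fcs(frame: bytes) -> bool:
    """Receiver side: recompute the CRC over the payload and compare
    it with the received FCS; a mismatch means the frame was corrupted."""
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == fcs

frame = add_fcs(b"hello")
assert check_fcs(frame)                  # intact frame passes

corrupted = b"jello" + frame[5:]         # flip bits in the payload
assert not check_fcs(corrupted)          # receiver detects the error
```

On a mismatch the receiver would discard the frame and (depending on the protocol) request or trigger a retransmission.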

Elementary Data Link Layer Protocols


1. Unrestricted simplex protocol:
Data transmission is only done in one direction. Transmission (Tx) and reception (Rx) are always available; processing time is irrelevant.
This protocol has an infinite buffer space and no faults, meaning no damaged or lost frames.

2. Simplex stop-and-wait protocol:
This protocol assumes that data is only transferred in one direction and that the channel is error-free, but the receiver can process incoming data only at a finite rate.
These assumptions imply that the transmitter cannot send frames faster than the receiver can process them. The critical issue here is
preventing the transmitter from flooding the receiver. The typical solution for this problem is for the receiver to provide feedback to the
sender; the approach is as follows:
Step 1: The acknowledgement frame is returned to the sender, informing it that the most recently received frame has been processed
and transmitted to the host.
Step 2: Permission is granted to send the following frame.
Step 3: After transmitting a frame, the sender must wait for an acknowledgement frame from the receiver before sending the next frame.
The simplex stop-and-wait protocol is used when the sender transmits one frame and waits for the recipient's response; only when an acknowledgement is received does the sender transmit the next frame.

3. Simplex protocol for noisy channels:


Data transfer is one-way, with a separate recipient and sender, restricted processing capacity & speed at the receiver, and errors in data
frames & acknowledgement frames to be expected due to the noisy channel. Each frame has its sequence number.
A timer is started for a limited time after each frame is transmitted. If the acknowledgement is not received before the timer expires, or if the acknowledgement or the transmitted data frame is damaged, the frame is retransmitted; the per-frame sequence number lets the receiver discard duplicates.

Flow Control Mechanisms
1. Stop-and-Wait:
○ In the stop-and-wait protocol, the sender sends one frame at a time and waits for an acknowledgment (ACK) from the receiver
before sending the next frame.
○ If the sender does not receive an ACK within a specified timeout period, it assumes that the frame was lost or damaged and
retransmits the same frame.
○ This method is simple but can lead to low efficiency, as the sender is often waiting for an acknowledgment, which may introduce
delays.
2. Sliding Window:
• Sliding window protocols allow multiple frames to be in transit between sender and receiver at the same time, increasing the
efficiency of data transfer.
• The sender can send a certain number of frames (window size) before requiring an acknowledgment. The receiver, in turn,
acknowledges the receipt of frames within the window.
• This allows for a continuous flow of data, as the sender can keep sending frames without waiting for individual acknowledgments.
• There are two types of sliding window protocols: Go-Back-N and Selective Repeat.
a. Go-Back-N:
• The sender can send multiple frames without waiting for individual acknowledgments.
• If an acknowledgment is not received within a specified time, the sender retransmits all frames starting from the lost/damaged frame.
b. Selective Repeat:
• The sender can send multiple frames, and the receiver can individually acknowledge the frames it successfully receives.
• If a frame is lost or damaged, only that specific frame is retransmitted, not the entire set of frames.
Stop-and-Wait Protocol vs Sliding Window Protocol

1. In the stop-and-wait protocol, the sender sends one frame and waits for an acknowledgment from the receiver. In the sliding window protocol, the sender sends more than one frame and retransmits the frame(s) that are damaged or suspected lost.
2. The efficiency of the stop-and-wait protocol is worse; the efficiency of the sliding window protocol is better.
3. The sender window size of stop-and-wait is 1; the sender window size of sliding window is N.
4. The receiver window size of stop-and-wait is 1; the receiver window size of sliding window may be 1 or N.
5. In stop-and-wait, sorting is not necessary; in sliding window, sorting may or may not be necessary.
6. The efficiency of stop-and-wait is 1/(1+2a); the efficiency of sliding window is N/(1+2a).
7. Stop-and-wait is half duplex; sliding window is full duplex.
8. Stop-and-wait is mostly used in low-speed, error-free networks; sliding window is mostly used in high-speed, error-prone networks.
9. In stop-and-wait, the sender cannot send any new frames until it receives an acknowledgment for the previous frame. In sliding window, the sender can continue to send new frames even if some of the earlier frames have not yet been acknowledged.
10. Stop-and-wait has lower throughput because of the idle time spent waiting for acknowledgments; sliding window has higher throughput because it allows continuous transmission of frames.
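The efficiency formulas in point 6 above (where a is the ratio of propagation delay to transmission delay) can be evaluated directly. A small sketch, with the sliding-window value capped at 1 since efficiency cannot exceed 100%:

```python
def eff_stop_and_wait(a: float) -> float:
    """Stop-and-wait link utilization: one frame per round trip."""
    return 1 / (1 + 2 * a)

def eff_sliding_window(n: int, a: float) -> float:
    """Sliding window with window size n; capped at 1.0, since once
    n >= 1 + 2a the sender keeps the link fully busy."""
    return min(1.0, n / (1 + 2 * a))

a = 2.0                                   # propagation delay = 2x transmission delay
print(eff_stop_and_wait(a))               # 0.2  (link idle 80% of the time)
print(eff_sliding_window(4, a))           # 0.8
print(eff_sliding_window(8, a))           # 1.0  (window large enough to fill the pipe)
```

This shows numerically why stop-and-wait performs poorly on links with long propagation delays, and how a large enough window restores full utilization.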
Module 3
Monday, November 13, 2023 3:27 PM

IPv4 Addressing
• IPv4 (Internet Protocol version 4) is the fourth version of the Internet Protocol, which is the communications protocol that provides an
identification and location system for computers on networks and routes traffic across the Internet.
• IPv4 addresses are 32 bits long and are typically represented in dotted-decimal notation.
• Each 32-bit IPv4 address is divided into four 8-bit octets, separated by periods. Each octet is converted to its decimal equivalent, resulting in a
format like x.x.x.x, where x is a decimal number between 0 and 255.

IPv4 Header Format

IPv4 Classes
1. Class A Addresses:
○ Range: 1.0.0.0 to 126.255.255.255
○ Network Portion: First octet
○ Host Portion: Last three octets
○ Class A addresses were designed for large organizations, as they could support a massive number of hosts on a single network (2^24 − 2 hosts).
2. Class B Addresses:
○ Range: 128.0.0.0 to 191.255.255.255
○ Network Portion: First two octets
○ Host Portion: Last two octets
○ Class B addresses were intended for medium-sized organizations, supporting a moderate number of hosts per network (2^16 − 2 hosts).
3. Class C Addresses:
○ Range: 192.0.0.0 to 223.255.255.255
○ Network Portion: First three octets
○ Host Portion: Last octet
○ Class C addresses were used for smaller networks, providing fewer host addresses (2^8 - 2 hosts).
4. Class D Addresses:
○ Range: 224.0.0.0 to 239.255.255.255
○ Reserved for multicast groups.
○ Not used for traditional unicast host addressing; Class D addresses are allocated for multicast communication.
5. Class E Addresses:
○ Range: 240.0.0.0 to 255.255.255.255
○ Reserved for experimental purposes.
○ Not used for regular host addressing; Class E addresses were set aside for research and development.
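The classful ranges above amount to a lookup on the first octet, which can be captured in a small sketch (127.x.x.x is reserved for loopback and falls outside classes A–C):

```python
def ipv4_class(addr: str) -> str:
    """Return the classful category of a dotted-decimal IPv4 address,
    based on the value of its first octet."""
    first = int(addr.split(".")[0])
    if 1 <= first <= 126:
        return "A"
    if first == 127:
        return "loopback"            # reserved, not part of class A's usable range
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D (multicast)"
    if 240 <= first <= 255:
        return "E (experimental)"
    return "invalid"

print(ipv4_class("10.0.0.1"))        # A
print(ipv4_class("172.16.0.5"))      # B
print(ipv4_class("192.168.1.1"))     # C
```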

IPv4 Classless Addressing


Classless addressing, also called Classless Inter-Domain Routing (CIDR), is an improved IP addressing system. It increases the effectiveness of IP
address allocation because of the absence of class distribution.
1. Structure
The CIDR block comprises two parts. These are as follows:

• Block id is used for the network identification, but the number of bits is not pre-defined as it is in the classful IP addressing scheme.
• Host id is used to identify the host part of the network.
2. Notation
• CIDR IP addresses look as follows: w.x.y.z/n
• In the notation above, w, x, y, and z each represent an 8-bit octet written in decimal, while n is the number of bits used to identify the network, called the IP network prefix (or mask).
3. Rules
Requirements for CIDR are defined below:
• Addresses should be contiguous.
• The number of addresses in the block must be a power of 2.
• The first address of every block must be divisible by the size of the block.
4. Block information
Given the following IP address, let's find the network and host bits.
200.56.23.41/28

The number of addresses in the block follows from the prefix length n, using the formula nh = 2^(32−n), where nh is the number of addresses in the block. Here n = 28, so nh = 2^(32−28) = 2^4 = 16 addresses in the block.
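Python's standard ipaddress module can verify this block computation for 200.56.23.41/28 (strict=False lets it derive the block from an address with host bits set):

```python
import ipaddress

# Derive the CIDR block containing 200.56.23.41 with a /28 prefix.
net = ipaddress.ip_network("200.56.23.41/28", strict=False)

print(net)                    # 200.56.23.32/28  (the block's first address)
print(net.num_addresses)      # 16 = 2**(32-28)
print(net.netmask)            # 255.255.255.240  (28 ones, 4 zeros)
```

Note that the block's first address (200.56.23.32) is divisible by the block size (16), as the CIDR rules above require.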

Subnetting
• Subnetting is a technique used in computer networking to divide a single network into multiple smaller networks, known as
subnetworks or subnets.
• The purpose of subnetting is to partition a large network into smaller, more efficient subnets, which can improve network
performance, security, and organization.
• In a subnetted network, each subnet has its own unique subnet mask and network address.
• This unique subnet mask and network address allow devices on the same subnet to communicate with each other directly, without having to go through a router or other networking device.
• Subnetting works by dividing the host part of an IP address into two or more subnets using a subnet mask. The subnet mask is a series
of ones and zeros that determines which portion of the IP address represents the network and which portion represents the host.
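The division described above can be demonstrated with Python's ipaddress module, for example borrowing two host bits from a /24 network to form four /26 subnets:

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")

# Borrow 2 host bits (new prefix /26) -> 2**2 = 4 subnets.
subnets = list(net.subnets(new_prefix=26))

for s in subnets:
    # Each /26 has 64 addresses; subtracting the network and
    # broadcast addresses leaves 62 usable host addresses.
    print(s, "netmask:", s.netmask, "usable hosts:", s.num_addresses - 2)
```

The netmask 255.255.255.192 shows the borrowed bits: the first 26 bits (ones) now identify the subnet, leaving 6 bits for hosts.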
• Advantages of Subnetting
There are several advantages of using subnetting in a computer network:
• Improved network performance: By dividing a large network into smaller subnets, you reduce the amount of traffic on the main network, improving communication speed and efficiency within the subnets.
• Enhanced security: Subnetting can create separate networks for different types of devices or users, improving security by limiting access to sensitive resources.
• Greater network scalability: Subnetting allows you to add more devices to a network without requiring additional IP addresses or
network infrastructure.
• Enhanced network organization: Subnetting allows you to group devices by location, department, or other criteria, making it easier to
manage and maintain the network.
• Reduced network congestion: Dividing a network into smaller subnets reduces the number of devices trying to communicate over the same network segment, reducing congestion and improving overall network performance.
• Improved network reliability: By creating redundant subnets, you can improve the reliability of your network by providing a backup
communication path if one subnet goes offline.
• Disadvantages of Subnetting
There are a few potential drawbacks to using subnetting in a computer network:
• Complexity: Subnetting can add complexity to a network, as it requires the use of subnet masks and network addresses, which may be
difficult for some users to understand.
• Additional hardware: In some cases, subnetting may require additional hardware or network devices, such as routers or switches, increasing the cost of the network.

• Increased configuration: Subnetting requires the configuration of subnet masks and network addresses, which can be time-consuming and may require the assistance of a network administrator.
• Limited scalability: While subnetting can allow you to add more devices to a network without requiring additional IP addresses, it limits
the number of subnets and devices that can be created.
• Security risks: Subnetting can improve security by creating separate networks for different users or devices, but it can also create
security risks if not configured properly, as it can allow unauthorized users to access restricted resources.

Network Address Translation


Network Address Translation (NAT) is a method of mapping an IP address space into another by modifying network address information in the IP
header of packets while they are in transit across a traffic routing device. The technique was originally used to bypass the need to assign a new
address to every host when a network was moved, or when the upstream Internet service provider was replaced, but could not route the
network's address space.
Purpose of NAT
NAT is used for a variety of purposes, including:
• Conserving IP addresses: IPv4 addresses are a finite resource, and NAT can help to conserve them by allowing multiple devices to share a
single public IP address.
• Improving security: NAT can help to improve security by hiding the private IP addresses of devices on a private network. This makes it more
difficult for attackers to gain access to these devices.
• Simplifying network management: NAT can simplify network management by making it easier to administer a network with multiple
devices.
Types of NAT
There are three main types of NAT:
• Static NAT: This type of NAT assigns a permanent public IP address to each device on the private network. This is useful for devices that
need to be accessible from the public network, such as web servers and file servers.
• Dynamic NAT: This type of NAT assigns a temporary public IP address to each device on the private network when it needs to access the
public network. This is more efficient than static NAT because it does not waste public IP addresses.
• Port Address Translation (PAT): This type of NAT is a form of dynamic NAT that translates multiple private IP addresses to a single public IP
address using different source ports. This is very efficient and is commonly used in home routers.
How NAT Works
NAT works by modifying the source IP address of packets that are sent from the private network to the public network. The NAT device also
keeps track of which private IP address corresponds to each public IP address. When a packet is sent from the public network to the private
network, the NAT device modifies the destination IP address of the packet so that it is sent to the correct private device.
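The bookkeeping described above can be sketched for PAT, the variant most home routers use. All names and the port range here are illustrative (a toy translation table, not any real NAT implementation):

```python
class PatTable:
    """Toy Port Address Translation table: maps many private
    (ip, port) pairs onto one public IP with distinct source ports."""

    def __init__(self, public_ip: str, first_port: int = 40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.out = {}    # (private_ip, private_port) -> public_port
        self.back = {}   # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip: str, private_port: int):
        """Outbound packet: rewrite the source to (public_ip, public_port)."""
        key = (private_ip, private_port)
        if key not in self.out:                 # allocate a new public port
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.out[key])

    def translate_in(self, public_port: int):
        """Inbound packet: look up which private device owns this port."""
        return self.back.get(public_port)

nat = PatTable("203.0.113.5")
a = nat.translate_out("192.168.1.10", 5000)     # ('203.0.113.5', 40000)
b = nat.translate_out("192.168.1.11", 5000)     # ('203.0.113.5', 40001)
assert nat.translate_in(40001) == ("192.168.1.11", 5000)
```

Two private hosts using the same source port (5000) share one public address yet stay distinguishable, which is exactly what makes PAT so address-efficient.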
Benefits of NAT
There are several benefits to using NAT, including:
• Reduced cost: NAT allows multiple devices to share a single public IP address, which can save money on Internet service costs.
• Improved security: NAT can make it more difficult for attackers to gain access to devices on a private network.
• Simplified network management: NAT can simplify network management by making it easier to administer a network with multiple
devices.
Drawbacks of NAT
There are also some drawbacks to using NAT, including:
• Reduced performance: NAT can reduce network performance by adding overhead to each packet that is translated.
• Increased complexity: NAT can increase the complexity of a network, which can make it more difficult to troubleshoot problems.
• Reduced compatibility: NAT can make some applications incompatible because it can hide the source IP address of packets.
IPv4 vs IPv6
| Feature | IPv4 | IPv6 |
|---|---|---|
| Address Length | 32 bits | 128 bits |
| Address Notation | Dotted-decimal | Hexadecimal (eight sets of four digits) |
| Address Space | Limited (approx. 4.3 billion) | Vast (approx. 3.4 x 10^38) |
| Address Configuration | Manual or DHCP | Stateless Autoconfiguration (SLAAC) and DHCPv6 |
| Broadcast | Utilizes broadcast | Replaced by multicast and anycast |
| Subnetting | Common, often necessary | Still relevant, but less critical due to vast address space |
| Network Configuration | NAT often used, DHCP for address assignment | Designed to eliminate the need for NAT, supports DHCPv6 |
| Header Complexity | Complex header structure | Simplified header structure with optional extension headers |
| Security Features | Originally lacked integrated security features, IPSec added later | IPSec support is mandatory |
| Deployment Status | Widely deployed, but facing challenges due to address exhaustion | Coexisting with IPv4, gaining increased adoption |
| Transition Mechanisms | Dual-stack, tunneling | Transition mechanisms for gradual migration from IPv4 to IPv6 |
| Use Cases | Predominantly used in existing networks, the Internet, and many devices | Increasingly adopted, especially in new network deployments |
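The address-length and address-space figures in the table can be checked with Python's standard `ipaddress` module; the example addresses 192.0.2.1 and 2001:db8::1 are documentation-reserved values, not real hosts.

```python
# Compare IPv4 and IPv6 address sizes using the stdlib ipaddress module.
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")    # dotted-decimal notation
v6 = ipaddress.ip_address("2001:db8::1")  # hexadecimal, '::' compresses zeros

print(v4.version, v4.max_prefixlen)  # 4 32   (32-bit addresses)
print(v6.version, v6.max_prefixlen)  # 6 128  (128-bit addresses)

# Total address space: 2^32 vs 2^128.
print(2**32)            # 4294967296  (~4.3 billion)
print(f"{2**128:.1e}")  # ~3.4e+38
```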
Routed Protocols vs Routing Protocols
| Feature | Routed Protocol | Routing Protocol |
|---|---|---|
| Definition | A protocol used to send data from one network to another. Examples include IPv4 and IPv6. | A protocol used by routers to determine the best path for data to travel from source to destination. |
| Purpose | Focuses on the end-to-end delivery of data packets. | Focuses on the process of selecting the best path for data to travel within a network. |
| Examples | IPv4, IPv6, IPX, AppleTalk | RIP (Routing Information Protocol), OSPF (Open Shortest Path First), BGP (Border Gateway Protocol) |
| Functionality | Provides addressing and packet forwarding capabilities. | Determines the optimal path for data and shares routing information with other routers. |
| Configuration | Generally requires manual configuration of network addresses. | Involves configuration of routing tables and protocols to share routing information dynamically. |
| Network Layer | Operates at the network layer of the OSI model. | Operates at the network layer and is responsible for path determination. |
| Dependency | Independent of the specific routing methods used. | Dependent on the routing algorithm and protocols implemented in the network. |
| Dynamic Updates | Does not dynamically update routing information. | Typically supports dynamic updates to adapt to changes in the network topology. |
| Examples of Usage | Used in conjunction with routing protocols to facilitate data transfer between networks. | Implemented on routers to enable dynamic routing and efficient data forwarding within a network. |
Classification of Routing Algorithms
Static Routing Algorithms:
Definition: Static routing involves manually configuring the routes in a network. The network administrator defines the paths that data packets should take to reach their destination.
Characteristics:
• Paths are predetermined and configured in advance.
• Changes in network topology are not automatically accommodated.
• Simple and predictable, but less adaptable to dynamic changes.
Dynamic Routing Algorithms:
Definition: Dynamic routing algorithms determine paths for data packets in real-time based on current network conditions. These algorithms
adapt to changes in the network topology.
Characteristics:
• Paths are determined dynamically based on real-time information.
• Adapt to changes in the network, making them more flexible.
• Examples include RIP, OSPF, and BGP.
Distance Vector Routing:
Definition: Distance Vector Routing algorithms operate by routers exchanging information about their routing tables with their neighbors. Each
router makes decisions based on the distance and direction (vector) to a destination.
Characteristics:
• Each router maintains a table indicating the distance (number of hops) to reachable destinations.
• Examples include RIP (Routing Information Protocol).
• Simple, but may suffer from slow convergence in large networks.
Link-State Routing:
Definition: Link-State Routing algorithms consider the state of links in the entire network. Routers share information about the state of their
links, allowing each router to build a comprehensive view of the network.
Characteristics:
• Each router maintains a detailed map of the entire network.
• Examples include OSPF (Open Shortest Path First).
• More scalable and adaptable to larger networks but can be more complex to implement.
Path Vector Routing:
Definition: Path Vector Routing is similar to distance vector routing but includes the entire path information in the routing updates. This
provides more information about the network topology.
Characteristics:
• The routing information includes the complete path to a destination.
• Example: BGP (Border Gateway Protocol).
• Commonly used in inter-domain routing and the global Internet.

Link State Routing


Link-State Routing is a type of dynamic routing algorithm used in computer networks. The fundamental concept behind link-state routing is the
exchange of information about the state of links throughout the entire network. This information is used to build a comprehensive and up-to-
date map of the network, allowing routers to make informed decisions about the best paths for data transmission.
Here's a more detailed explanation of Link-State Routing:
Characteristics:
1. Topology Database:
○ Each router maintains a detailed database of the entire network's topology. This database includes information about all routers and
the state of their links.
2. Link State Advertisements (LSAs):
○ Routers periodically send out Link State Advertisements, which contain information about the state of their links. This includes details
such as link cost, link status, and neighboring routers.
3. Dijkstra's Shortest Path Algorithm:
○ Based on the information gathered from LSAs, each router independently runs Dijkstra's Shortest Path Algorithm to calculate the
shortest path to every other router in the network. This results in a routing table that reflects the optimal paths.
4. Flooding:
○ LSAs are flooded throughout the network to ensure that every router receives the most up-to-date information about the network's
state.
5. Hierarchical Structure:
○ Link-State Routing often employs a hierarchical structure, dividing the network into areas. Routers within an area have a detailed map
of that area but may have a summarized view of other areas. This helps in scalability.
Advantages:
• Efficiency: Link-State Routing algorithms are efficient in finding the shortest path to a destination, leading to optimal routing.
• Fast Convergence: Changes in the network are quickly propagated, and routers can rapidly adjust to new topologies.
• Scalability: The hierarchical structure allows for the scaling of large networks by dividing them into manageable areas.
• Reliability: Link-State Routing tends to be more reliable and robust because routers have an accurate and updated map of the network.
Examples:
• Open Shortest Path First (OSPF):
○ OSPF is a widely used Link-State Routing protocol. It operates within an autonomous system and uses link-state information to
calculate the shortest path.
Limitations:
• Complexity: Implementing and managing Link-State Routing can be more complex compared to other routing algorithms.
• Resource Consumption: Maintaining a detailed database and flooding LSAs consume more resources than some other routing approaches.
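The per-router shortest-path step from point 3 above can be sketched with Dijkstra's algorithm over a toy topology database; the graph, node names, and link costs below are invented for the example.

```python
# Dijkstra's shortest-path algorithm, as each link-state router runs it
# over its copy of the topology database.
import heapq

def dijkstra(graph, source):
    """graph: {node: {neighbor: cost}}. Returns {node: cost_from_source}."""
    dist = {source: 0}
    pq = [(0, source)]                       # priority queue of (cost, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                         # stale queue entry, skip
        for v, w in graph[u].items():        # relax each outgoing link
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(topology, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Note that A reaches C via B (cost 1 + 2 = 3) rather than over the direct cost-4 link, which is exactly the kind of decision the routing table ends up encoding.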

Distance Vector Routing


Distance Vector Routing is a type of dynamic routing algorithm used in computer networks. This approach relies on routers exchanging
information about their routing tables with their neighboring routers. The primary metric used in these algorithms is the "distance" to a
destination, often measured in terms of the number of hops (routers) between the source and destination. Here's a more detailed explanation
of Distance Vector Routing:

Key Characteristics:
1. Distance as Metric:
○ The fundamental metric used in Distance Vector Routing is the "distance" to a destination. This distance is typically measured in terms
of the number of hops or routers between the source and destination.
2. Routing Table Updates:
○ Routers periodically send updates to their neighboring routers, sharing information about the distances to various destinations. These
updates are often referred to as "vectors."
3. Bellman-Ford Algorithm:
○ Distance Vector Routing algorithms often use the Bellman-Ford algorithm to calculate the shortest path to all destinations. The
algorithm iteratively refines its estimates based on the received distance vectors.
4. Convergence Time:
○ Convergence time refers to the time it takes for all routers in the network to have consistent and updated information about the
network topology. Distance Vector Routing algorithms can experience longer convergence times, especially in large networks.
5. Routing by Rumor:
○ The routing updates are often described as routers "telling" their neighbors about the distances to various destinations. This process of routers sharing information can be likened to a rumor spreading through the network.
Examples:
• Routing Information Protocol (RIP):
○ RIP is a classic example of a Distance Vector Routing protocol. It operates within an autonomous system and uses hop count as its
metric.
• Interior Gateway Routing Protocol (IGRP):
○ IGRP is another example, developed by Cisco. It takes into account factors such as bandwidth and delay in addition to hop count.
Advantages:
• Simplicity: Distance Vector Routing algorithms are relatively simple to understand and implement.
• Ease of Configuration: Configuring routers in a distance vector routing environment is typically straightforward.
• Low Overhead: The amount of routing information exchanged between routers is generally less than in some other routing algorithms.
Limitations:
• Slow Convergence: Distance Vector Routing algorithms can experience slow convergence, especially in larger networks. This is because
routers need time to exchange and process routing updates.
• Count to Infinity Problem: This is a common issue in distance vector algorithms where routers may incorrectly believe they have found a
shorter path, leading to routing loops.
• Limited Scalability: Distance Vector Routing may not scale well to very large networks due to the overhead associated with frequent
updates.
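The neighbour-to-neighbour exchange described above can be sketched as repeated Bellman-Ford relaxation over a toy topology; the links and costs are invented, and a fixed number of update rounds stands in for the periodic advertisements real routers send.

```python
# Toy distance-vector routing: each router repeatedly updates its table
# from its neighbours' advertised distances (Bellman-Ford relaxation).
INF = float("inf")

def distance_vector(links, nodes, rounds=5):
    """links: {(u, v): cost}, bidirectional. Returns {node: {dest: dist}}."""
    nbrs = {n: {} for n in nodes}
    for (u, v), c in links.items():
        nbrs[u][v] = c
        nbrs[v][u] = c
    # Initially each router knows only itself (distance 0).
    table = {n: {d: (0 if d == n else INF) for d in nodes} for n in nodes}
    for _ in range(rounds):                    # periodic update rounds
        for u in nodes:
            for v, c in nbrs[u].items():       # learn from each neighbour
                for dest in nodes:
                    if c + table[v][dest] < table[u][dest]:
                        table[u][dest] = c + table[v][dest]
    return table

links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 7}
tables = distance_vector(links, ["A", "B", "C"])
print(tables["A"])  # {'A': 0, 'B': 1, 'C': 3}
```

A learns a cost-3 route to C through B rather than the direct cost-7 link, purely from B's advertisement — which is also why bad advertisements can propagate (the count-to-infinity problem mentioned above).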

Module 4
Tuesday, November 14, 2023 5:32 PM

Transport Layer: Service Primitives


Transport Layer is responsible for end-to-end communication between devices across a network. To facilitate this communication, the Transport Layer provides a set of
services to the upper layers (such as the application layer) and interacts with the lower layers (such as the network layer). Service primitives are the operations or functions
provided by the Transport Layer to these upper layers.
The basic service primitives of the Transport Layer are typically divided into two categories: Connection-oriented services and Connectionless services.
1. Connection-Oriented Services:
○ CONNECTION-ESTABLISHMENT (REQUEST): This primitive is used by the sender to request the establishment of a connection.
○ CONNECTION-ESTABLISHMENT (INDICATION): This primitive is used by the receiver to indicate that a connection request has been received.
○ CONNECTION-ACCEPTANCE (RESPONSE): This primitive is used by the receiver to accept the connection request.
○ CONNECTION-ACCEPTANCE (INDICATION): This primitive is used by the sender to indicate that the connection request has been accepted.
○ DATA-TRANSFER (REQUEST): This primitive is used by both the sender and receiver to request the transfer of data over the established connection.
○ CONNECTION-RELEASE (REQUEST): This primitive is used by either the sender or receiver to request the release of the connection.
○ CONNECTION-RELEASE (INDICATION): This primitive is used by the other party to indicate that a connection release has been requested.
2. Connectionless Services:
○ UNIT-DATA-TRANSFER (REQUEST): This primitive is used to request the transfer of a single unit of data.
○ UNIT-DATA-TRANSFER (INDICATION): This primitive is used to indicate the arrival of a single unit of data.
○ ERROR REPORT (INDICATION): This primitive is used to indicate the occurrence of an error in the received data.

Sockets
Sockets play a crucial role in the Transport Layer of a computer network, providing a programming interface for network communication. A socket is essentially an endpoint
for sending or receiving data across a computer network. It acts as an abstraction layer that allows applications to communicate with each other, regardless of the underlying
network details.
1. Socket Types:
○ Stream Sockets (TCP): These provide a reliable, connection-oriented communication channel. TCP (Transmission Control Protocol) is the most common protocol
associated with stream sockets. It ensures reliable, ordered delivery of data between the sender and receiver.
○ Datagram Sockets (UDP): These provide connectionless communication, where individual packets (datagrams) are sent without establishing a connection first.
UDP (User Datagram Protocol) is often associated with datagram sockets. It is faster but less reliable compared to TCP.
2. Socket Operations:
• Socket Creation: An application creates a socket using the socket() system call or function. The socket can be of type SOCK_STREAM (for stream sockets) or
SOCK_DGRAM (for datagram sockets).
• Binding: The socket is bound to a specific address and port using the bind() operation. This step is crucial, especially for servers, as it specifies the network address
and port on which the server will listen for incoming connections or data.
• Listening (for Stream Sockets): For servers using stream sockets, the listen() operation is used to wait for incoming connection requests.
• Connection Establishment (for Stream Sockets): The accept() operation is used by a server to accept an incoming connection request. This operation returns a new
socket for communication with the client.
• Connecting (for Stream Sockets): For clients using stream sockets, the connect() operation is used to establish a connection with a server.
• Sending and Receiving Data: The send() and recv() operations are used to send and receive data over the socket.
• Closing: The close() operation is used to release the socket when communication is complete.
3. Sockets are identified by an IP address and port number. In the case of stream sockets, the combination of local and remote IP addresses and port numbers uniquely
identifies a connection.
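The socket operations listed above can be sketched with a minimal TCP echo server and client on localhost; the port number 50007 is an arbitrary choice for this example, and a real server would loop over `accept()` rather than serve one connection.

```python
# Minimal TCP echo server/client illustrating socket(), bind(), listen(),
# accept(), connect(), send()/recv(), and close().
import socket
import threading

def echo_server(port, ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))                             # bind()
    srv.listen(1)                                             # listen()
    ready.set()                                               # signal "listening"
    conn, _addr = srv.accept()                                # accept()
    with conn:
        conn.sendall(conn.recv(1024))                         # echo back
    srv.close()                                               # close()

def echo_client(port, message):
    with socket.create_connection(("127.0.0.1", port)) as c:  # connect()
        c.sendall(message)                                    # send()
        return c.recv(1024)                                   # recv()

ready = threading.Event()
t = threading.Thread(target=echo_server, args=(50007, ready))
t.start()
ready.wait()
print(echo_client(50007, b"hello"))
t.join()
```

The `(local IP, local port, remote IP, remote port)` four-tuple mentioned in point 3 is what lets the kernel deliver the echoed bytes back to the right client connection.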

Connection Management
The Three-Way Handshake is a key process in the establishment of a connection in the Transmission Control Protocol (TCP), which is a connection-oriented protocol in the
Transport Layer of the Internet Protocol Suite. The purpose of the Three-Way Handshake is to ensure that both the sender and receiver are ready to exchange data before
actual communication begins. Here are the steps involved in the Three-Way Handshake:
1. Step 1: SYN (Synchronize)
○ The client initiates the connection by sending a TCP segment to the server with the SYN (Synchronize) flag set.
○ This segment contains the client's initial sequence number (ISN), which is a randomly chosen number to identify the first data byte in the communication.
2. Step 2: SYN-ACK (Synchronize-Acknowledge)
○ Upon receiving the initial SYN segment, the server responds with a TCP segment that has both the SYN and ACK (Acknowledge) flags set.
○ The server also selects its own initial sequence number (ISN).
3. Step 3: ACK (Acknowledge)
○ In the final step of the Three-Way Handshake, the client acknowledges the server's response by sending a TCP segment with the ACK flag set.
○ The acknowledgment (ACK) indicates that the client has received the server's acknowledgment, and the connection is now established.

The termination process typically involves the following steps:


1. Sending FIN (Finish): The entity initiating the connection termination sends a FIN (finish) segment to the other entity, indicating that it will no longer send data. This
segment includes a sequence number indicating the last byte of data that the sender will transmit.

2. Acknowledging FIN: The receiving entity acknowledges the FIN segment with an ACK (acknowledgment) segment. This ACK confirms that the receiving entity has
received the FIN segment and understands that the sender is closing the connection.
3. Sending FIN: Once the receiving entity has finished sending any remaining data, it sends its own FIN segment to the initiating entity, indicating that it has also finished
sending data.
4. Acknowledging FIN: The initiating entity acknowledges the receiving entity's FIN segment with an ACK segment, completing the four-way handshake and finalizing the
connection termination.

UDP
The User Datagram Protocol (UDP) is a core communication protocol of the Internet Protocol suite (TCP/IP) used to send messages (datagrams) across an IP network. UDP is
an unreliable, connectionless protocol, meaning it does not guarantee delivery, ordering, or duplicate protection of data packets. This makes UDP a faster and more
lightweight protocol than its counterpart, Transmission Control Protocol (TCP), but also more prone to errors.
Key Characteristics of UDP:
• Connectionless: UDP does not establish a connection between the sender and receiver before sending data. This makes UDP faster and more efficient for time-sensitive applications.
• Unreliable: UDP does not guarantee delivery or ordering of data packets. It is up to the application to handle any lost or out-of-order packets.
• Best-effort delivery: UDP delivers data packets on a best-effort basis, meaning it makes no guarantees about their delivery. If a packet is lost or corrupted, UDP will not
retransmit it.
• Efficient: UDP is a very efficient protocol due to its simplicity and lack of connection-establishment overhead.
Advantages of UDP:
• Speed: UDP is significantly faster than TCP due to its lack of connection establishment and error checking overhead.
• Efficiency: UDP is a very efficient protocol in terms of bandwidth usage.
• Simplicity: UDP is a simple protocol with a minimal header structure, making it easier to implement and understand.
Disadvantages of UDP:
• Unreliability: UDP does not guarantee delivery or ordering of data packets, which can lead to data loss or corruption.
• Lack of error checking: UDP does not perform extensive error checking, making it more susceptible to errors.
• Vulnerability to attacks: UDP's lack of connection establishment and error checking makes it more vulnerable to certain types of attacks, such as denial-of-service (DoS)
attacks.
Applications of UDP:
• Real-time applications: UDP is well-suited for real-time applications where speed is more important than accuracy, such as video streaming, online gaming, and VoIP
(Voice over IP).
• Small data transfers: UDP is also suitable for small data transfers, such as DNS (Domain Name System) lookups.
• Broadcasting and multicast: UDP supports broadcasting and multicasting, which allows a single sender to send data to multiple receivers simultaneously.
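The connectionless exchange described above can be sketched in a few lines: no handshake takes place, the sender simply addresses a datagram to the receiver. The payload is illustrative, and port 0 asks the OS to pick a free port.

```python
# Minimal connectionless UDP exchange on localhost.
import socket

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))            # port 0: let the OS choose
port = recv_sock.getsockname()[1]

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# No connect()/accept(): the datagram is just sent to the address.
send_sock.sendto(b"dns-query?", ("127.0.0.1", port))

data, addr = recv_sock.recvfrom(1024)
print(data)                                 # b'dns-query?'
send_sock.close()
recv_sock.close()
```

If this datagram were lost on a real network, neither side would ever know — delivery, ordering, and retransmission are left entirely to the application.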

TCP
The Transmission Control Protocol (TCP) is a fundamental protocol of the Internet Protocol suite (TCP/IP) that ensures reliable, ordered, and error-checked delivery of data packets across an IP network. TCP operates at the transport layer of the TCP/IP model, sitting above the Internet Protocol (IP) and below application-layer protocols such as HTTP and FTP.
Key Characteristics of TCP:
• Connection-oriented: TCP establishes a connection between the sender and receiver before sending data. This connection ensures reliable data transfer and allows for
error checking and retransmission mechanisms.
• Reliable: TCP guarantees delivery, ordering, and duplicate protection of data packets. It employs error detection and correction mechanisms to ensure data integrity.
• Ordered delivery: TCP delivers data packets in the same order they were sent, ensuring the correct sequence of information.
• Flow control: TCP employs flow control mechanisms to regulate the rate at which data is sent, preventing congestion and ensuring smooth data transfer.
• Congestion control: TCP implements congestion control algorithms to adapt its transmission rate based on network conditions, preventing network overload.
Advantages of TCP:
• Reliability: TCP provides reliable data delivery, minimizing data loss or corruption.
• Ordered delivery: TCP maintains the correct order of data packets, ensuring the integrity of information.
• Error checking and retransmission: TCP employs error detection and correction mechanisms to identify and retransmit lost or corrupted packets.
• Congestion control: TCP's congestion control algorithms prevent network congestion and ensure efficient data transfer.
Disadvantages of TCP:
• Overhead: TCP's connection establishment and error checking mechanisms add overhead compared to UDP, making it slightly slower.
• Complexity: TCP is a more complex protocol than UDP, requiring more implementation effort.
Applications of TCP:
• File transfers: TCP is the preferred protocol for file transfers due to its reliability and error checking capabilities.
• Web browsing: TCP is widely used for web browsing, ensuring reliable delivery of web pages and other web content.
• Email: TCP is the standard protocol for sending and receiving emails, guaranteeing message delivery and integrity.
• Remote access: TCP is used for remote access protocols such as SSH (Secure Shell) and FTP, providing secure remote access to computer systems.

TCP vs UDP
| Feature | TCP | UDP |
|---|---|---|
| Connection | Connection-oriented: establishes a connection between sender and receiver before data transmission, maintaining a stateful session for reliable communication. | Connectionless: sends data without establishing a prior connection, making it stateless and faster. |
| Reliability | Reliable: guarantees delivery, ordering, and duplicate protection of data packets, ensuring accurate and complete information transfer. | Unreliable: does not guarantee delivery, ordering, or duplicate protection of data packets, making it more prone to errors and data loss. |
| Error Checking | Extensive: employs error detection and correction mechanisms to identify and retransmit lost or corrupted data packets, ensuring data integrity. | Limited: performs minimal error checking, relying on the application layer to handle errors, making it less robust in error-prone environments. |
| Speed | Slower: introduces overhead due to connection establishment and error checking, resulting in slightly slower data transfer. | Faster: lacks connection-establishment and error-checking overhead, enabling faster data transfer, particularly for real-time applications. |
| Efficiency | Less efficient: overhead associated with connection establishment and error checking reduces bandwidth utilization compared to UDP. | More efficient: lack of overhead makes UDP more efficient in terms of bandwidth usage, particularly for small data transfers. |
| Flow Control | Yes: implements flow control mechanisms to regulate the rate at which data is sent, preventing congestion and ensuring smooth data transfer. | No: does not implement flow control, relying on the application layer to manage data flow, making it more susceptible to congestion. |
| Congestion Control | Yes: employs congestion control algorithms to adapt its transmission rate based on network conditions, preventing network overload and ensuring efficient data transfer. | No: lacks congestion control mechanisms, relying on the network to handle congestion, making it more prone to congestion-related issues. |
| Applications | File transfers (reliable, complete delivery of large files); web browsing (accurate, error-free retrieval of pages and content); email. | Real-time applications (video streaming, online gaming, VoIP), prioritizing speed over reliability; small data transfers such as DNS lookups and network management messages. |
TCP: State Transition
The TCP state transition diagram is a finite state machine that describes the different states that a TCP connection can be in and the events that trigger transitions between
those states. The diagram is shown below:
                 +---------+
                 | CLOSED  |
                 +---------+
         passive |         | active open /
         open    |         | send SYN
                 v         v
           +--------+   +----------+
           | LISTEN |   | SYN-SENT |
           +--------+   +----------+
                |             |
    recv SYN /  |             | recv SYN-ACK /
  send SYN-ACK  |             | send ACK
                v             |
       +--------------+       |
       | SYN-RECEIVED |       |
       +--------------+       |
                | recv ACK    |
                v             v
            +-------------------+
            |    ESTABLISHED    |
            +-------------------+
     active close: |         | passive close:
     send FIN      |         | recv FIN / send ACK
                   v         v
         +------------+   +------------+
         | FIN-WAIT-1 |   | CLOSE-WAIT |
         +------------+   +------------+
         recv ACK |           | app close /
                  |           | send FIN
                  v           v
         +------------+   +----------+
         | FIN-WAIT-2 |   | LAST-ACK |
         +------------+   +----------+
       recv FIN / |           | recv ACK
       send ACK   v           v
         +-----------+    +--------+
         | TIME-WAIT |--->| CLOSED |
         +-----------+    +--------+
           2MSL timeout
(The CLOSING state, entered when both sides send FIN simultaneously, is omitted from the diagram for clarity.)
State Descriptions:
• LISTEN: The server is actively listening for incoming connection requests.
• SYN-SENT: The client has sent a SYN (synchronization) segment to the server, requesting a connection.
• SYN-RECEIVED: The server has received a SYN segment from the client and has sent a SYN-ACK (synchronization acknowledgment) segment in response.
• ESTABLISHED: The connection is established and data can be exchanged between the client and server.
• FIN-WAIT-1: The side performing the active close has sent a FIN (finish) segment and is waiting for it to be acknowledged.
• FIN-WAIT-2: The FIN has been acknowledged; this side now waits for the peer's FIN.
• CLOSE-WAIT: A FIN has been received and acknowledged; this side waits for its local application to finish and close, after which it sends its own FIN.
• CLOSING: Both sides sent FIN segments at the same time (simultaneous close); each waits for an ACK of its FIN.
• LAST-ACK: The passive closer has sent its own FIN and is waiting for the final ACK.
• TIME-WAIT: The active closer waits for twice the maximum segment lifetime (2MSL) to ensure its final ACK was received and that no delayed segments from this connection remain in the network.
• CLOSED: The connection is closed and no further data can be exchanged.
Events:
• SYN: The client sends a SYN segment to the server.
• SYN-ACK: The server sends a SYN-ACK segment to the client.
• ACK: An acknowledgment segment is sent to acknowledge the receipt of a segment.
• FIN: A finish segment is sent to indicate that data transmission is complete.
Transitions:
• CLOSED → SYN-SENT when the client sends a SYN segment (active open).
• CLOSED → LISTEN when a server starts listening (passive open).
• LISTEN → SYN-RECEIVED when the server receives a SYN and replies with a SYN-ACK.
• SYN-SENT → ESTABLISHED when the client receives the SYN-ACK and replies with an ACK.
• SYN-RECEIVED → ESTABLISHED when the server receives the client's ACK.
• ESTABLISHED → FIN-WAIT-1 when a side sends a FIN to begin an active close.
• FIN-WAIT-1 → FIN-WAIT-2 when that FIN is acknowledged by the peer.
• FIN-WAIT-2 → TIME-WAIT when the peer's FIN arrives and is acknowledged.
• ESTABLISHED → CLOSE-WAIT when a FIN is received from the peer (passive close).
• CLOSE-WAIT → LAST-ACK when the passive closer sends its own FIN.
• LAST-ACK → CLOSED when the final ACK is received.
• TIME-WAIT → CLOSED after the 2MSL timer expires.
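The normal client-side open and active close can be encoded as a small transition table. This is a simplified sketch: the server-side path and the simultaneous open/close paths are omitted, and the event names are my own shorthand.

```python
# Client-side TCP state machine (active open, active close) as a lookup table.
TRANSITIONS = {
    ("CLOSED",      "active_open/send SYN"):   "SYN-SENT",
    ("SYN-SENT",    "recv SYN-ACK/send ACK"):  "ESTABLISHED",
    ("ESTABLISHED", "close/send FIN"):         "FIN-WAIT-1",
    ("FIN-WAIT-1",  "recv ACK"):               "FIN-WAIT-2",
    ("FIN-WAIT-2",  "recv FIN/send ACK"):      "TIME-WAIT",
    ("TIME-WAIT",   "2MSL timeout"):           "CLOSED",
}

def run(start, events):
    """Drive the state machine through a sequence of events."""
    state = start
    for e in events:
        state = TRANSITIONS[(state, e)]   # KeyError = illegal transition
    return state

events = ["active_open/send SYN", "recv SYN-ACK/send ACK", "close/send FIN",
          "recv ACK", "recv FIN/send ACK", "2MSL timeout"]
print(run("CLOSED", events))       # CLOSED
print(run("CLOSED", events[:2]))   # ESTABLISHED
```

An event that has no entry for the current state raises a `KeyError`, mirroring how a real stack rejects segments that are invalid for the connection's state.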

TCP: Transition Timers


• Retransmission Timer – To retransmit lost segments, TCP uses a retransmission timeout (RTO). When TCP sends a segment, the timer starts; it stops when the acknowledgment is received. If the timer expires before the acknowledgment arrives, a timeout occurs and the segment is retransmitted. Setting the RTO requires an estimate of the round-trip time (RTT), which comes in three forms:
• Measured RTT(RTTm) – The measured round-trip time for a segment is the time required for the segment to reach the destination and be acknowledged, although the
acknowledgement may include other segments.
• Smoothed RTT(RTTs) – It is the weighted average of RTTm. RTTm is likely to change and its fluctuation is so high that a single measurement cannot be used to calculate RTO.
• Deviated RTT(RTTd) – Most implementations do not use RTTs alone so RTT deviated is also calculated to find out RTO
• Persistent Timer – To deal with a zero-window-size deadlock situation, TCP uses a persistence timer. When the sending TCP receives an acknowledgment with a window size
of zero, it starts a persistence timer. When the persistence timer goes off, the sending TCP sends a special segment called a probe. This segment contains only 1 byte of new
data. It has a sequence number, but its sequence number is never acknowledged; it is even ignored in calculating the sequence number for the rest of the data. The probe
causes the receiving TCP to resend the acknowledgment which was lost.
• Keep Alive Timer – A keepalive timer is used to prevent a long idle connection between two TCPs. Suppose a client opens a TCP connection to a server, transfers some data, and then crashes or falls silent; the connection would otherwise remain open forever. To handle this, each time the server hears from the client, it resets the keepalive timer. The timeout is usually 2 hours. If the server does not hear from the client after 2 hours, it sends a probe segment. If there is no response after 10 probes, each 75 seconds apart, it assumes the client is down and terminates the connection.
• Time Wait Timer – This timer is used during TCP connection termination. It starts after the last ACK (acknowledging the second FIN) is sent, and the connection is fully closed only when the timer expires.
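The RTTs/RTTd bookkeeping behind the retransmission timer can be sketched as a small estimator. The weights alpha = 1/8 and beta = 1/4 and the rule RTO = RTTs + 4·RTTd follow the standard Jacobson/Karels scheme (as in RFC 6298); the sample values below are invented for illustration.

```python
# Smoothed RTT (RTTs) and RTT deviation (RTTd) estimator producing an RTO.

def make_estimator(alpha=1/8, beta=1/4):
    state = {"rtts": None, "rttd": None}
    def update(rttm):
        """Feed one measured RTT (RTTm); returns the new RTO."""
        if state["rtts"] is None:          # first sample (per RFC 6298)
            state["rtts"] = rttm
            state["rttd"] = rttm / 2
        else:
            # Update deviation first, using the previous smoothed value.
            state["rttd"] = (1 - beta) * state["rttd"] + beta * abs(state["rtts"] - rttm)
            state["rtts"] = (1 - alpha) * state["rtts"] + alpha * rttm
        return state["rtts"] + 4 * state["rttd"]   # RTO
    return update

rto = make_estimator()
print(rto(100))  # 300.0  (first sample: RTTs=100, RTTd=50, RTO=100+4*50)
print(rto(120))  # 272.5  (RTTs=102.5, RTTd=42.5)
```

The deviation term is what keeps a single fluctuating RTTm measurement from whipsawing the timeout, which is exactly why RTTm alone cannot be used to set the RTO.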

Congestion Control
Open-loop congestion control and closed-loop congestion control are two approaches to managing and mitigating network congestion, each with its own characteristics and
methods. Let's explore each of these concepts:
Open-Loop Congestion Control:
• Definition: In open-loop congestion control, network adjustments are made without direct feedback from the network itself. The control actions are predefined and
executed without real-time information about the current state of the network.
• Characteristics:
o Predefined Policies: Open-loop control relies on predefined policies and strategies to determine how to send or regulate traffic.
o Lack of Real-time Feedback: There is no continuous feedback loop from the network to adjust the control actions based on current conditions.

o Simple Implementation: Open-loop control is often simpler to implement as it does not require real-time monitoring and response mechanisms.
• Example: A network administrator might manually set traffic-shaping policies based on expected usage patterns and peak hours. These policies are predetermined and applied without continuous feedback from the network.
Closed-Loop Congestion Control:
• Definition: Closed-loop congestion control, also known as feedback-based congestion control, involves adjusting network parameters based on real-time feedback about
the state of the network. This feedback loop allows for dynamic and adaptive control actions.
• Characteristics:
o Real-time Feedback: Closed-loop control relies on continuous feedback from the network to make informed decisions about adjusting traffic parameters.
o Adaptability: The system can dynamically respond to changing network conditions, making it more adaptive to variations in traffic and congestion levels.
o Complexity: Closed-loop control systems are often more complex to implement than open-loop systems due to the need for monitoring and feedback
mechanisms.
• Example: TCP (Transmission Control Protocol) utilizes closed-loop congestion control. It dynamically adjusts the rate at which data is sent based on acknowledgments and
network conditions. If packet loss is detected, TCP assumes congestion and reduces the transmission rate.
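The TCP behavior described above can be sketched as a simple AIMD (additive-increase, multiplicative-decrease) feedback loop. The window sizes and loss pattern below are illustrative, not taken from any real trace:

```python
# A minimal sketch of TCP-style closed-loop congestion control (AIMD:
# additive increase, multiplicative decrease). Window sizes are in MSS units.

def aimd_step(cwnd, loss_detected, increase=1, decrease_factor=0.5):
    """Return the next congestion window given feedback from the network."""
    if loss_detected:
        # Loss is interpreted as congestion: cut the window multiplicatively.
        return max(1, cwnd * decrease_factor)
    # No loss: probe for more bandwidth additively.
    return cwnd + increase

cwnd = 1.0
history = []
for loss in [False, False, False, True, False, False]:
    cwnd = aimd_step(cwnd, loss)
    history.append(cwnd)
print(history)  # [2.0, 3.0, 4.0, 2.0, 3.0, 4.0]
```

The key point is the feedback: each adjustment depends on the most recent signal (a loss or a successful round trip), which an open-loop scheme by definition cannot use.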
Sliding Window Protocol
Flow control is a crucial aspect of data communication, ensuring that the data transfer rate matches the receiver's processing capacity. In the context of the Transmission
Control Protocol (TCP), the sliding window protocol serves as the primary flow control mechanism, regulating the data transmission between the sender and receiver to
prevent congestion and ensure smooth data flow.
The Essence of Sliding Window Protocol:
The sliding window protocol operates by maintaining two separate windows, one at the sender and the other at the receiver. These windows define the range of
sequence numbers that each device is permitted to send or receive, respectively.
Sender's Window:
The sender's window represents the sequence numbers of packets that the sender is allowed to transmit. As the receiver acknowledges received packets, the sender's
window slides forward, enabling the transmission of subsequent packets.
Receiver's Window:
The receiver's window, on the other hand, defines the sequence numbers of packets that the receiver is ready to process. As the receiver sends acknowledgments, its window
slides forward, indicating its capacity to accept new data.
Data Transmission and Window Management:
The sender can only transmit packets that fall within the receiver's window. Upon receiving an acknowledgment, the sender's window slides forward, allowing the
transmission of the next sequence number within the receiver's window.
Similarly, the receiver can only acknowledge packets that fall within its window. When an acknowledgment is sent, the receiver's window slides forward, indicating its
readiness to process the next sequence number.
Benefits of Sliding Window Protocol:
• Prevents Buffer Overflow: By limiting the number of unacknowledged packets, the sliding window prevents the sender from overwhelming the receiver's buffer, ensuring
that the receiver can handle the incoming data efficiently.
• Enhances Efficiency: The sliding window allows the sender to continue transmitting data even if the receiver is slow to process it, ensuring that the sender is not idle,
maximizing network utilization.
• Reduces Congestion: By regulating the flow of data, the sliding window helps to prevent congestion on the network, ensuring that the network's capacity is not
exceeded.
Limitations of Sliding Window Protocol:
• Overhead: The sliding window protocol introduces some overhead due to the need to maintain and update the window sizes and exchange acknowledgments. This
overhead can slightly impact the overall efficiency of the connection.
• Vulnerability to Out-of-Order Packets: If out-of-order packets are received, the receiver may have to discard or re-order them, potentially reducing the overall efficiency
of the connection and introducing delays.
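The sender-side window bookkeeping described above can be sketched in a few lines. This assumes cumulative acknowledgments (Go-Back-N style); the class and field names are illustrative:

```python
# Minimal sketch of a sliding-window sender with cumulative ACKs.

class SlidingWindowSender:
    def __init__(self, window_size):
        self.window_size = window_size
        self.base = 0          # oldest unacknowledged sequence number
        self.next_seq = 0      # next sequence number to send

    def can_send(self):
        # Only sequence numbers inside the window may be transmitted.
        return self.next_seq < self.base + self.window_size

    def send(self):
        assert self.can_send()
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def ack(self, ack_num):
        # Cumulative ACK: slides the window forward past ack_num.
        if ack_num >= self.base:
            self.base = ack_num + 1

sender = SlidingWindowSender(window_size=3)
sent = [sender.send() for _ in range(3)]   # sends 0, 1, 2
print(sender.can_send())                   # False: window is full
sender.ack(1)                              # ACKs 0 and 1 cumulatively
print(sender.can_send(), sender.base)      # True 2: window slid forward
```

Note how the sender stalls exactly when the window is full, which is the flow-control property that protects the receiver's buffer.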
HTTP
HTTP, standing for Hypertext Transfer Protocol, is a fundamental communication protocol that operates at the application layer of the OSI model. It forms the backbone of
data communication on the World Wide Web, facilitating the exchange of information between web servers and clients. Here's an in-depth exploration of HTTP's role at the
application layer:
1. Communication Paradigm:
○ HTTP follows a client-server communication paradigm. A client, typically a web browser, initiates requests, and a server processes these requests and sends back the
corresponding responses.
2. Stateless Protocol:
○ HTTP is inherently stateless, meaning each request from a client is independent of any previous requests. Servers do not retain information about the client's
previous interactions, simplifying the protocol's design and implementation.
3. Request-Response Model:
○ Communication in HTTP revolves around a request-response model. Clients send HTTP requests to servers, specifying the desired action, and servers respond with
the requested information or perform the specified action.
4. Uniform Resource Identifiers (URIs):
○ HTTP uses Uniform Resource Identifiers (URIs) to identify and locate resources on the web. URIs include Uniform Resource Locators (URLs) and Uniform Resource
Names (URNs), providing a standardized way to address web resources.
5. Methods:
○ HTTP defines various request methods, such as GET, POST, PUT, and DELETE, each serving a specific purpose. For example, GET is used to retrieve data, while POST is
used to submit data to be processed.
6. Headers:
○ Both HTTP requests and responses contain headers that convey metadata about the message, including information about the client, server, content type, and
more. Headers play a crucial role in informing the recipient about the nature of the data being exchanged.
7. Cookies and Sessions:
○ HTTP supports the use of cookies for maintaining stateful interactions between clients and servers. Cookies enable servers to store information on the client's
device, facilitating personalized and session-aware experiences.
8. Status Codes:
○ HTTP responses include status codes indicating the outcome of the request. Common status codes include 200 (OK), 404 (Not Found), and 500 (Internal Server
Error), providing information about the success or failure of the request.
9. Security:
○ While HTTP itself is not secure, HTTPS (HTTP Secure) is an extension that adds a layer of security through encryption using protocols like TLS/SSL. HTTPS is widely
used, especially for sensitive transactions such as online banking and e-commerce.
10. RESTful Principles:
○ Many modern web applications adhere to REST (Representational State Transfer) principles, which leverage HTTP methods and status codes to create scalable and
maintainable APIs (Application Programming Interfaces).
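The request-response model and message format described above can be illustrated with raw HTTP text. The host, path, and response below are canned examples for parsing practice, not fetched from a real server:

```python
# Sketch of the HTTP/1.1 message format: a GET request and a canned response.

request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"      # illustrative host
    "Accept: text/html\r\n"
    "Connection: close\r\n"
    "\r\n"                           # blank line ends the headers
)

response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 13\r\n"
    "\r\n"
    "Hello, world!"
)

# Parse the response: status line, headers, then body.
head, _, body = response.partition("\r\n\r\n")
status_line, *header_lines = head.split("\r\n")
version, status, reason = status_line.split(" ", 2)
headers = dict(line.split(": ", 1) for line in header_lines)
print(status, reason)                  # 200 OK
print(headers["Content-Type"], body)   # text/html Hello, world!
```

The blank line separating headers from the body, and the three-part status line, are exactly the framing a browser and server exchange on every request.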
SMTP
SMTP, or Simple Mail Transfer Protocol, operates at the application layer of the OSI model and is essential for the reliable transmission of electronic mail (email) over the
Internet. SMTP governs the communication between mail servers to send, relay, and receive email messages. Here's a detailed exploration of SMTP's role at the application
layer:
1. Communication Model:
○ SMTP follows a client-server communication model, where an email client acts as the client, and a mail server operates as the server. The client initiates a
connection to the server to send an email.
2. Message Format:
○ SMTP defines the format for email messages, specifying how the sender's information, recipient's information, subject, and the email body should be structured.
The email body can be plain text or include multimedia content.
3. Commands and Responses:
○ SMTP communication consists of a series of commands and responses. Commands, such as HELO, MAIL, RCPT, and DATA, are used by the client to initiate and
control the email transmission process. The server responds to these commands with numeric codes indicating the success or failure of the operation.
4. Relay of Messages:
○ SMTP is responsible for the relay of messages between mail servers. When an email is sent, it may pass through multiple SMTP servers before reaching its final
destination, with each server forwarding the message closer to the recipient.
5. Port Numbers:
○ SMTP traditionally uses port 25 for server-to-server mail relay. Port 587 is the standard mail submission port for clients, typically secured with STARTTLS, and port 465 is
reserved for SMTPS (SMTP over implicit TLS).
6. Store-and-Forward Model:
○ SMTP operates on a store-and-forward model, meaning that it accepts, stores, and then forwards messages to their destinations. This model ensures the reliable
delivery of emails, even if the recipient's server is temporarily unavailable.
7. Security Considerations:
○ Historically, SMTP lacked built-in encryption, leading to potential security vulnerabilities. However, with the advent of protocols like STARTTLS and the widespread
adoption of secure email practices, SMTP can now be used securely over TLS-encrypted connections.
8. Email Routing:
○ SMTP is crucial for routing emails to their intended recipients. MX (Mail Exchange) records in the DNS (Domain Name System) specify the mail servers responsible
for receiving emails on behalf of a domain.
9. Authentication:
○ SMTP authentication mechanisms, such as LOGIN and PLAIN, are employed to verify the identity of users sending emails, enhancing the security of the email
transmission process.
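The command/response exchange described above might look like the following canned dialogue. The addresses are hypothetical; the numeric reply codes mirror those a real server would send:

```python
# Sketch of an SMTP session as (client command, server reply) pairs.

dialogue = [
    ("HELO client.example.com", "250 Hello client.example.com"),
    ("MAIL FROM:<alice@example.com>", "250 OK"),
    ("RCPT TO:<bob@example.org>", "250 OK"),
    ("DATA", "354 End data with <CRLF>.<CRLF>"),
    ("Subject: Hi\r\n\r\nHello Bob.\r\n.", "250 OK: queued"),
    ("QUIT", "221 Bye"),
]

def reply_ok(reply):
    # 2xx means success, 3xx means "continue"; 4xx/5xx are failures.
    return reply[0] in "23"

assert all(reply_ok(resp) for _, resp in dialogue)
print([resp.split()[0] for _, resp in dialogue])
# ['250', '250', '250', '354', '250', '221']
```

Note the 354 code after DATA: the server is not done yet, it is inviting the client to send the message body terminated by a lone dot.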
DHCP
Dynamic Host Configuration Protocol (DHCP) is a network management protocol used to dynamically assign an IP address to any device, or node, on a network so it can
communicate using IP (Internet Protocol). DHCP automates and centrally manages these configurations, so there is no need to manually assign IP addresses to new devices
and no user configuration is required to connect to a DHCP-based network.
DHCP does the following:
• DHCP manages the provision of all the nodes or devices added or dropped from the network.
• DHCP maintains the unique IP address of the host using a DHCP server.
• It sends a request to the DHCP server whenever a client/node/device, which is configured to work with DHCP, connects to a network. The server acknowledges by
providing an IP address to the client/node/device.
DHCP runs at the application layer of the TCP/IP protocol stack to dynamically assign IP addresses to DHCP clients/nodes and to allocate TCP/IP configuration information to
the DHCP clients. This information includes the subnet mask, default gateway, IP address, and Domain Name System (DNS) server addresses.
Components of DHCP
• DHCP Server: A DHCP server is a networked device running the DHCP service that holds IP addresses and related configuration information. This is typically a server or a
router but could be anything that acts as a host, such as an SD-WAN appliance.
• DHCP client: DHCP client is the endpoint that receives configuration information from a DHCP server. This can be any device like computer, laptop, IoT endpoint or
anything else that requires connectivity to the network. Most of the devices are configured to receive DHCP information by default.
• IP address pool: IP address pool is the range of addresses that are available to DHCP clients. IP addresses are typically handed out sequentially from lowest to the
highest.
• Subnet: Subnet is the partitioned segments of the IP networks. Subnet is used to keep networks manageable.
• Lease: Lease is the length of time for which a DHCP client holds the IP address information. When a lease expires, the client has to renew it.
• DHCP relay: A host or router that listens for client messages being broadcast on that network and then forwards them to a configured server. The server then sends
responses back to the relay agent that passes them along to the client. DHCP relay can be used to centralize DHCP servers instead of having a server on each subnet.
Benefits of DHCP:
• Centralized administration of IP configuration: DHCP IP configuration information can be stored in a single location, enabling the administrator to centrally manage
all IP address configuration information.
• Dynamic host configuration: DHCP automates the host configuration process and eliminates the need to manually configure individual hosts when TCP/IP (Transmission
Control Protocol/Internet Protocol) is first deployed or when IP infrastructure changes are required.
• Seamless IP host configuration: The use of DHCP ensures that DHCP clients get accurate and timely IP configuration parameters, such as the IP address,
subnet mask, default gateway, and IP address of the DNS server, without user intervention.
• Flexibility and scalability: Using DHCP gives the administrator increased flexibility, allowing the administrator to easily change the IP configuration when the
infrastructure changes.
FTP
FTP (File Transfer Protocol) is a standard network protocol used for the transfer of files between a client and a server on a TCP-based network, such as the Internet.
Client-Server Model:
FTP operates on a client-server model. The client initiates a connection to the server, and after authentication, files can be uploaded (sent) or downloaded (received) between
the client and server.
Modes of FTP:
FTP supports two modes: active and passive. In active mode, the client opens a random port for data transfer, while in passive mode, the server opens a port, and the client
connects to it.
Port Numbers:
FTP uses port 21 for control commands (e.g., authentication and directory listing) and port 20 for data transfer in active mode. In passive mode, dynamic port numbers are
used for data transfer.
Authentication:
FTP typically uses username and password authentication for access to files on the server. It can also support anonymous FTP, allowing users to log in with a generic
username (e.g., "anonymous") and their email address as the password.
File Operations:
FTP supports various file operations, including uploading (put), downloading (get), renaming, deleting, and creating directories on the server. It also allows for the transfer of
entire directories and their contents.
Modes of Operation:
FTP operates in two modes: ASCII and binary. ASCII mode is suitable for text files, ensuring proper line ending conversion, while binary mode is used for non-text files to
ensure accurate and unaltered data transfer.
Secure Variants:
Due to security concerns associated with plain FTP (e.g., data and credentials transmitted in plaintext), secure variants have been developed. FTPS (FTP Secure) adds a layer of
security through SSL/TLS encryption, while SFTP (SSH File Transfer Protocol) uses the secure SSH protocol for both data transfer and user authentication.
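Active mode's separate data channel can be illustrated with the PORT command encoding: the client sends its IP and port as six comma-separated numbers, with the port split into a high and a low byte (port = p1*256 + p2). The address used here is hypothetical:

```python
# Sketch of FTP's PORT command encoding (active mode data channel).

def encode_port_command(ip, port):
    h = ip.split(".")
    # Four IP octets, then the port as high byte and low byte.
    return "PORT {},{},{},{},{},{}".format(*h, port // 256, port % 256)

def decode_port_command(cmd):
    parts = cmd.split(" ", 1)[1].split(",")
    ip = ".".join(parts[:4])
    port = int(parts[4]) * 256 + int(parts[5])
    return ip, port

cmd = encode_port_command("192.168.1.5", 50000)
print(cmd)                       # PORT 192,168,1,5,195,80
print(decode_port_command(cmd))  # ('192.168.1.5', 50000)
```

This encoding is why active mode struggles behind NAT: the client advertises a private address the server cannot reach, which is one reason passive mode is usually preferred today.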
DNS and Types of Name Servers
• The Domain Name System (DNS) is a decentralized hierarchical naming system that translates human-readable domain names into IP addresses. It plays a crucial role in
enabling users to access websites and other services using domain names rather than numeric IP addresses.
• DNS is based on a distributed database architecture, with various servers distributed across the Internet. This distribution enhances scalability, fault tolerance, and
efficient resolution of domain names.
• When a user enters a domain name in a web browser, the DNS resolution process begins. The client queries DNS servers to resolve the domain name into an IP address,
enabling communication with the desired server.
• Types of DNS Servers:
○ Root DNS Servers:
▪ The root DNS servers are the starting point of the DNS resolution process. They provide information about the authoritative name servers for top-level
domains (TLDs) such as .com, .net, and .org.
○ Top-Level Domain (TLD) Servers:
▪ TLD servers are responsible for handling requests related to specific top-level domains. For instance, .com TLD servers handle requests for domain names
ending in .com.
○ Authoritative DNS Servers:
▪ Authoritative DNS servers store and provide authoritative information about domain names, including the mapping of domain names to IP addresses. They are
responsible for the actual resolution of domain names.
○ Recursive DNS Servers:
▪ Recursive DNS servers perform the iterative process of querying other DNS servers until they obtain the final authoritative answer. They often cache the results
to speed up subsequent requests.
○ Caching DNS Servers:
▪ Caching DNS servers temporarily store DNS query results to reduce the need for repeated queries to authoritative servers. This caching mechanism improves
DNS resolution efficiency.
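The caching behavior of recursive/caching DNS servers can be sketched as follows. The zone data is a hypothetical stand-in for the authoritative servers; no real queries are made, and a real resolver would also honor record TTLs:

```python
# Sketch of a caching resolver: the first lookup goes "upstream",
# subsequent lookups for the same name are answered from the cache.

AUTHORITATIVE = {                       # assumed records for illustration
    "www.example.com": "93.184.216.34",
}

class CachingResolver:
    def __init__(self):
        self.cache = {}
        self.upstream_queries = 0

    def resolve(self, name):
        if name in self.cache:          # cache hit: no upstream query
            return self.cache[name]
        self.upstream_queries += 1      # miss: ask the authoritative server
        ip = AUTHORITATIVE[name]
        self.cache[name] = ip
        return ip

r = CachingResolver()
r.resolve("www.example.com")
r.resolve("www.example.com")            # served from cache
print(r.upstream_queries)               # 1
```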
Telnet
• Telnet is a network protocol that allows a user to remotely access and control another computer over the Internet or local area network (LAN). It enables a user to
establish a connection to a remote system and perform tasks as if they were sitting in front of that computer.
• It is a client-server protocol, which means that a client device initiates the connection to a server device. The client sends commands to the server, and the server
responds with output, allowing the user to interact with the remote system’s command-line interface.
• It uses the Transmission Control Protocol (TCP) as its underlying transport protocol.
• Telnet is primarily text-oriented, transmitting keystrokes and displaying text output. It allows users to interact with remote systems as if they were physically present at
the terminal.
• Telnet commonly uses port 23 for communication.
• Telnet transmits data, including usernames and passwords, in plain text. Due to this lack of encryption, Telnet is considered insecure, and its usage over untrusted
networks is discouraged.
• Telnet establishes a virtual terminal connection, providing a command-line interface to the remote device. Users can execute commands, access files, and manage the
remote system.
• Telnet is widely used for interactive sessions with remote servers and network devices, especially in troubleshooting and configuration scenarios.
• Telnet provides specific commands for managing the connection, including options for setting terminal type, toggling character echo, and controlling data flow.
• Due to its lack of encryption, Telnet is vulnerable to eavesdropping and man-in-the-middle attacks. It has largely been replaced by more secure protocols like SSH (Secure
Shell).
• SSH (Secure Shell) has become the preferred alternative to Telnet due to its encryption capabilities, providing a secure method for remote terminal connections.
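The in-band option negotiation mentioned above (setting terminal type, toggling echo, and so on) uses the IAC byte (255) followed by a command and an option code. A minimal client that refuses every option might respond like this; it is a sketch of the negotiation bytes only, not a full Telnet implementation:

```python
# Sketch of Telnet option negotiation: reply WONT to every DO and
# DONT to every WILL, i.e. refuse all options (a minimal client).

IAC, DO, DONT, WILL, WONT = 255, 253, 254, 251, 252
ECHO = 1   # option code for character echo

def refuse_all(data):
    reply = bytearray()
    i = 0
    while i < len(data):
        if data[i] == IAC and i + 2 < len(data) and data[i + 1] in (DO, WILL):
            cmd = WONT if data[i + 1] == DO else DONT
            reply += bytes([IAC, cmd, data[i + 2]])
            i += 3
        else:
            i += 1   # ordinary data byte: skip (a real client would echo it)
    return bytes(reply)

server = bytes([IAC, DO, ECHO, IAC, WILL, ECHO])
print(list(refuse_all(server)))  # [255, 252, 1, 255, 254, 1]
```

Because these bytes, like everything else in Telnet, travel in plaintext, an eavesdropper sees the whole session, which is the weakness SSH was designed to fix.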
Module 5
Wednesday, November 15, 2023 5:35 PM
Cisco Service Oriented Network Architecture

The SONA framework defines the following three layers:


Networked Infrastructure layer: Where all the IT resources are interconnected across a converged network foundation. The IT resources include
servers, storage, and clients. The Networked Infrastructure layer represents how these resources exist in different places in the network,
including the campus, branch, data center, enterprise edge, WAN, metropolitan-area network (MAN), and with the teleworker. The objective of
this layer is to provide connectivity, anywhere and anytime. The Networked Infrastructure layer includes the network devices and links to
connect servers, storage, and clients in different places in the network.
Interactive Services layer: Includes both application networking services and infrastructure services. This layer enables efficient allocation of
resources to applications and business processes delivered through the networked infrastructure. This layer includes the following services:
• Voice and collaboration services
• Mobility services
• Wireless services
• Security and identity services
• Storage services
• Compute services
• Application networking services (content networking services)
• Network infrastructure virtualization
• Adaptive network management services
• Quality of service (QoS)
• High availability
• IP multicast
Application layer: This layer includes business applications and collaboration applications. The objective of this layer is to meet business
requirements and achieve efficiencies by leveraging the interactive services layer. This layer includes the following collaborative applications:
• Instant messaging
• Cisco Unified Contact Center
• Cisco Unity (unified messaging)
• Cisco IP Communicator and Cisco Unified IP Phones
• Cisco Unified Meeting Place
• Video delivery using Cisco Digital Media System
• IP telephony.
The benefits of SONA include the following:
• Functionality: Supports the organizational requirements.
• Scalability: Supports growth and expansion of organizational tasks by separating functions and products into layers; this separation makes it
easier to grow the network.
• Availability: Provides the necessary services, reliably, anywhere, anytime.
• Performance: Provides the desired responsiveness, throughput, and utilization on a per-application basis through the network infrastructure
and services.
• Manageability: Provides control, performance monitoring, and fault detection.
• Efficiency: Provides the required network services and infrastructure with reasonable operational costs and appropriate capital investment
on a migration path to a more intelligent network, through step-by-step network services growth.
• Security: Provides for an effective balance between usability and security while protecting information assets and infrastructure from inside
and outside threats.
PPDIOO
PPDIOO stands for Prepare, Plan, Design, Implement, Operate, and Optimize. PPDIOO is a Cisco methodology that defines the continuous
lifecycle of services required for a network.
The PPDIOO phases are as follows:
• Prepare: Involves establishing the organizational requirements, developing a network strategy, and proposing a high-level conceptual
architecture identifying technologies that can best support the architecture. The prepare phase can establish a financial justification for
network strategy by assessing the business case for the proposed architecture.
• Plan: Involves identifying initial network requirements based on goals, facilities, user needs, and so on. The plan phase involves
characterizing sites and assessing any existing networks and performing a gap analysis to determine whether the existing system
infrastructure, sites, and the operational environment can support the proposed system. A project plan is useful for helping manage the
tasks, responsibilities, critical milestones, and resources required to implement changes to the network. The project plan should align with
the scope, cost, and resource parameters established in the original business requirements.
• Design: The initial requirements that were derived in the planning phase drive the activities of the network design specialists. The network
design specification is a comprehensive detailed design that meets current business and technical requirements, and incorporates
specifications to support availability, reliability, security, scalability, and performance. The design specification is the basis for the
implementation activities.
• Implement: The network is built or additional components are incorporated according to the design specifications, with the goal of
integrating devices without disrupting the existing network or creating points of vulnerability.
• Operate: Operation is the final test of the appropriateness of the design. The operational phase involves maintaining network health
through day-to-day operations, including maintaining high availability and reducing expenses. The fault detection, correction, and
performance monitoring that occur in daily operations provide the initial data for the optimization phase.
• Optimize: Involves proactive management of the network. The goal of proactive management is to identify and resolve issues before they
affect the organization. Reactive fault detection and correction (troubleshooting) is needed when proactive management cannot predict and
mitigate failures. In the PPDIOO process, the optimization phase can prompt a network redesign if too many network problems and errors
arise, if performance does not meet expectations, or if new applications are identified to support organizational and technical requirements.

Network Design Methodology
| Aspect | Top-Down Approach | Bottom-Up Approach |
| --- | --- | --- |
| Definition | Begins with defining high-level requirements and goals | Starts with specific details and builds up to the overall design |
| Focus | Emphasizes the overall architecture and objectives | Concentrates on specific components and their integration |
| Planning | Involves strategic planning before detailed design | Involves detailed planning before considering the big picture |
| Phases | Typically involves phases like planning, design, implementation, and maintenance | Phases often include detailed design, integration, testing, and scaling |
| Flexibility | Offers flexibility in adapting to changing requirements | May be less adaptable to changes as it is built from specific details |
| Risk Assessment | Identifies and addresses risks at the early stages | Risks are addressed as they arise during the detailed design |
| Cost | Initial planning may require significant resources | Initial costs may be lower, but detailed design costs may accumulate |
| Timeframe | May take longer due to comprehensive planning | Initial implementation may be quicker, but changes may take longer |
| Scalability | Easier to scale and accommodate future growth | Scalability may require revisiting and adjusting specific components |
| Complexity | Handles complexity by breaking it into manageable parts | Deals with complexity incrementally, potentially increasing overall complexity |
| Example Scenario | Designing a corporate network infrastructure | Building a specific network component, like a server cluster |
Classic Three-Layer Hierarchical Model

1. Core Layer
The core layer serves as the backbone of the network, providing high-speed connectivity between the distribution layer devices. It is responsible
for routing traffic between different regions of the network, ensuring that data packets flow efficiently and reliably. Core layer devices, typically
high-performance routers, are characterized by their large switching capacity, low latency, and robust fault tolerance capabilities.
2. Distribution Layer
The distribution layer acts as an intermediary between the core and access layers, providing policy-based connectivity and controlling the
boundary between the two layers. It is responsible for filtering traffic, applying security policies, and aggregating data from the access layer
devices before forwarding it to the core layer. Distribution layer devices, typically Layer 3 routers or multilayer switches, play a crucial role in
managing network traffic flow and applying network-wide policies.
3. Access Layer
The access layer provides direct connections to end-user devices, such as workstations, servers, and printers. It is responsible for providing
network access to these devices, forwarding their traffic to the distribution layer for routing. Access layer devices, typically Layer 2 switches or
wireless access points, are located close to the end-users, ensuring efficient data transmission and reliable network connectivity.

By dividing the network into these three layers, the classic three-layer hierarchical model offers several benefits:
• Improved Scalability: As the network grows, additional devices can be easily added to the access or distribution layers, without significantly
impacting the core layer. This modularity allows the network to scale seamlessly to accommodate increasing demands.
• Enhanced Security: Each layer can implement its own security policies, providing a layered defense against network threats. This segmented
approach isolates potential security breaches and limits their impact on the overall network.
• Simplified Management: By dividing the network into smaller, manageable segments, the three-layer model simplifies network
administration and troubleshooting. Network administrators can focus on specific layers and devices, reducing the complexity of network
management.

Campus Design Considerations
The multilayer approach to campus network design combines data link layer and multilayer switching to achieve robust, highly available campus
networks. This section discusses factors to consider in a Campus LAN design.
The Enterprise Campus network is the foundation for enabling business applications, enhancing productivity, and providing a multitude of
services to end users. The following three characteristics should be considered when designing the campus network:
1. Network application characteristics
The organizational requirements, services, and applications place stringent requirements on a campus network solution—for example, in
terms of bandwidth and delay.
2. Environmental characteristics:
The network’s environment includes its geography and the transmission media used.
• The physical environment of the building or buildings influences the design, as do the number of, distribution of, and distance between
the network nodes (including end users, hosts, and network devices). Other factors include space, power, and heating, ventilation, and
air conditioning support for the network devices.
• Cabling is one of the biggest long-term investments in network deployment. Therefore, transmission media selection depends not only
on the required bandwidth and distances, but also on the emerging technologies that might be deployed over the same infrastructure
in the future.
3. Infrastructure Device Characteristics
The characteristics of the network devices selected influence the design (for example, they determine the network’s flexibility) and
contribute to the overall delay. Trade-offs between data link layer switching—based on media access control (MAC) addresses—and
multilayer switching—based on network layer addresses, transport layer, and application awareness—need to be considered.
• High availability and high throughput are requirements that might require consideration throughout the infrastructure.
• Most Enterprise Campus designs use a combination of data link layer switching in the access layer and multilayer switching in the
distribution and core layers.

Campus Network Design Topology
A typical enterprise hierarchical campus network design includes the following three layers:
• The Core layer that provides optimal transport between sites and high performance routing
• The Distribution layer that provides policy-based connectivity and control boundary between the access and core layers
• The Access layer that provides workgroup/user access to the network
1. Core Layer:
• The Core layer serves as the backbone of the network, providing high-speed, high-capacity transport between different sites or buildings
within the enterprise.
• It focuses on efficient and fast packet forwarding, ensuring optimal data transport between geographically dispersed locations.
• The Core layer is characterized by high-speed routers and switches capable of handling large volumes of data traffic with minimal latency.
• It often employs redundant and fault-tolerant configurations to ensure maximum availability and reliability.
• Aggregates traffic from the Distribution layer.
• Provides a high-speed, low-latency path for data to travel between different parts of the network.
• Ensures fast and reliable connectivity between various sites.
2. Distribution Layer:
• The Distribution layer acts as an intermediary between the Access and Core layers, providing policy-based connectivity and acting as a
control boundary.
• It manages and controls the flow of traffic between different access points within the network.
• Implements policies and controls traffic between different access points, enforcing security and quality of service (QoS) policies.
• Provides segmentation for different VLANs (Virtual Local Area Networks) to enhance security and optimize network performance.
• Aggregates and filters traffic from the Access layer before forwarding it to the Core layer.
• Implements routing protocols to efficiently direct traffic to its destination.
• Enforces policies related to security, QoS, and network segmentation.
3. Access Layer:
• The Access layer is responsible for providing workgroup and user access to the network. It is the point of entry for end-user devices into the
network infrastructure.
• Handles a large number of user devices, such as computers, printers, and other network peripherals.
• Often includes network access devices like switches that provide connectivity to end-user devices.
• Connects end-user devices to the network, allowing them to access resources and services.
• Enforces security policies, controlling access to the network based on user identity and device type.
• Provides a point of connectivity for user devices to the Distribution layer.

Module 6
Wednesday, November 15, 2023 11:56 PM

Introduction to Software Defined Networks
• SDN is a networking approach that separates the control plane from the data plane.
• The control plane is responsible for making decisions about how traffic should be routed, while the data plane is responsible for actually
forwarding traffic.
• In traditional networks, the control plane and data plane are tightly coupled, and network administrators must configure each device
individually. This can be a time-consuming and error-prone process.
• SDN allows network administrators to configure the network from a centralized location using a software application.

• This makes it easier to manage and automate network tasks. SDN also allows network administrators to create more flexible and dynamic
networks.
• SDN uses an open standard called OpenFlow to communicate between the control plane and the data plane. OpenFlow allows the control
plane to send instructions to the data plane, telling it how to forward traffic.
• The control plane typically consists of an SDN controller, which is a software application that runs on a server. The data plane consists of
SDN-enabled network devices, such as switches and routers.
• SDN offers a number of benefits over traditional networking, including:
• Increased agility: SDN makes it easier to make changes to the network, which can be helpful for organizations that need to quickly adapt to
changing business requirements.
• Improved automation: SDN allows network administrators to automate many network tasks, which can save time and money.
• Greater flexibility: SDN makes it possible to create more flexible and dynamic networks that can better meet the needs of applications.
• Enhanced security: SDN can be used to improve the security of the network by providing centralized control over network traffic.

Fundamental Characteristics of SDN


1. Centralized Control: SDN centralizes network control in a single software entity, known as the controller. This centralized control enables
more effective and consistent network management, allowing for dynamic adjustments based on network-wide conditions.
2. Decoupling of Control and Data Planes: SDN separates the control plane, responsible for decision-making, from the data plane, which is
responsible for forwarding traffic. This decoupling enhances flexibility and programmability, simplifying network management.
3. Programmability: SDN introduces programmability to network devices, enabling administrators to programmatically control and configure
network elements through software applications. This feature streamlines network changes, policies, and optimizations, reducing manual
configuration efforts.
4. Abstraction of Network Resources: SDN abstracts the underlying network infrastructure, presenting a simplified and logical view to
applications and administrators. This abstraction simplifies the complexity of the physical network, providing a more intuitive and
manageable representation.
5. Open APIs (Application Programming Interfaces): SDN relies on open APIs to facilitate communication between the SDN controller and
network devices (southbound APIs) as well as applications or business logic (northbound APIs). Open APIs promote interoperability,
enabling integration with various networking hardware and the development of diverse applications.
6. Dynamic Configuration and Adjustment: SDN enables dynamic and real-time adjustments to network configurations. Policies can be
modified on-the-fly based on changing network conditions or business requirements, enhancing the responsiveness of the network.
7. Network Virtualization: SDN facilitates network virtualization by creating multiple virtual networks on a shared physical infrastructure. This
capability allows for logical isolation of network segments, providing flexibility and security.
8. Fine-Grained Traffic Control: SDN enables granular control over network traffic, allowing administrators to define and implement policies
at a per-flow or per-application level. This fine-grained control enhances security, quality of service (QoS), and overall network
performance.
9. Adaptability and Agility: SDN's flexibility and programmability make networks more adaptable to changing conditions and business
requirements. Swift adjustments and optimizations can be made through software applications, contributing to a more responsive and
efficient network infrastructure.

SDN Building Blocks


1. SDN Controller: The SDN controller is a centralized software entity that acts as the brain of the network. It provides a global view of the
network and makes decisions based on network-wide conditions and policies. Centralized control, global network visibility, decision-making.
2. Southbound APIs: Southbound APIs facilitate communication between the SDN controller and the networking devices in the data plane.
OpenFlow is a common protocol used for southbound communication. Instructing switches and routers on how to forward traffic based on
the controller's decisions.
3. Northbound APIs: Northbound APIs enable communication between the SDN controller and applications or business logic. These APIs
provide a standardized way for applications to request and receive information about the network and influence its behavior. Allowing
external applications to interact with and control the network through the SDN controller.
4. Decoupled Control Plane: In SDN, the control plane is decoupled from the data plane. The control plane makes decisions about where
traffic should be sent, while the data plane is responsible for forwarding the actual traffic. Enhancing flexibility and programmability,
simplifying network management.
5. Programmable Network Devices: SDN introduces programmability to network devices, allowing administrators to configure and manage
them through software applications. Facilitating automation, dynamic adjustments, and the implementation of network policies.
6. Abstraction Layer: The abstraction layer in SDN simplifies the underlying network infrastructure, providing a logical and simplified
representation to applications and administrators. Making the network more manageable and intuitive.

7. OpenFlow Protocol: OpenFlow is a standard communication protocol used for southbound communication between the SDN controller
and network devices. It defines how the controller can interact with the forwarding plane of network devices. Enabling the SDN controller
to instruct switches and routers on how to handle traffic.
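The relationship between these building blocks can be sketched as a toy model — `ToyController` and `ToySwitch` are invented names, and the "southbound API" here is just a method call standing in for OpenFlow:

```python
# Toy model of the SDN building blocks: a centralized controller holds the
# global view and programs flow entries into switches through a stand-in
# southbound interface. Not a real controller framework.

class ToySwitch:
    """Data-plane device: forwards only by consulting its flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}               # match (dst) -> action (out port)

    def install_flow(self, match, action): # southbound: a Modify-State message
        self.flow_table[match] = action

    def forward(self, packet):
        # On a table miss, a real switch would raise a Packet-In event.
        return self.flow_table.get(packet["dst"], "send-to-controller")

class ToyController:
    """Control plane: makes one decision and programs every switch."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def push_policy(self, match, action):
        for sw in self.switches:           # one decision, applied network-wide
            sw.install_flow(match, action)

ctrl = ToyController()
s1, s2 = ToySwitch("s1"), ToySwitch("s2")
ctrl.register(s1); ctrl.register(s2)
ctrl.push_policy("10.0.0.2", "port-3")

print(s1.forward({"dst": "10.0.0.2"}))  # port-3
print(s2.forward({"dst": "10.0.0.9"}))  # send-to-controller (table miss)
```

The single `push_policy` call replacing per-device configuration is the point: the controller's global view turns N device configurations into one decision.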

Control Plane and Data Plane


The control plane and data plane are two fundamental components that work together to facilitate the operation of network devices. These planes play distinct roles in the processing and forwarding of network traffic.
Control Plane:
• Description: The control plane is responsible for making decisions about how network traffic should be forwarded. It involves activities such
as routing updates, maintaining routing tables, and making decisions based on network protocols.
• Function:
○ Decides the best path for network traffic.
○ Exchanges routing information with neighboring devices.
○ Maintains and updates the routing tables.
○ Responds to network changes and updates.
Data Plane:
• Description: The data plane, also known as the forwarding plane, is responsible for the actual forwarding of network traffic based on the
decisions made by the control plane. It involves activities such as packet forwarding, switching, and filtering.
• Function:
○ Forwards packets based on the information in the routing tables.
○ Performs packet switching and filtering.
○ Directly handles and processes user data.
○ Executes actions based on the rules defined by the control plane.
Relationship between Control Plane and Data Plane:
• The control plane and data plane are tightly integrated within network devices, such as routers and switches. These devices use the
information from the control plane to make forwarding decisions in the data plane.
• The control plane is responsible for determining the optimal paths for network traffic, and the data plane executes the actual forwarding of
packets based on these decisions.
• Separating the control plane from the data plane, as seen in Software-Defined Networking (SDN), allows for centralized control and
programmability, providing more flexibility in managing network resources.
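The split can be made concrete with a small sketch: the control plane runs a path computation over the topology and builds the routing table; the data plane then forwards purely by table lookup. The topology and node names are made up for illustration:

```python
# Sketch of the control/data plane split inside one router: the control plane
# makes decisions (here, a BFS shortest-path computation producing next hops);
# the data plane only performs per-packet table lookups.

from collections import deque

def control_plane(topology, me):
    """Decision-making: compute a next-hop routing table from the topology."""
    table, seen = {}, {me}
    q = deque((nbr, nbr) for nbr in topology[me])  # (destination, first hop)
    while q:
        node, first_hop = q.popleft()
        if node in seen:
            continue
        seen.add(node)
        table[node] = first_hop
        q.extend((nbr, first_hop) for nbr in topology[node])
    return table

def data_plane(table, packet):
    """Forwarding: no routing logic, just a lookup in the installed table."""
    return table.get(packet["dst"], "drop")

# A chain topology A - B - C - D, viewed from router A.
topology = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
table = control_plane(topology, "A")
print(table)                              # {'B': 'B', 'C': 'B', 'D': 'B'}
print(data_plane(table, {"dst": "D"}))    # B
```

In a conventional router both functions live in one box; SDN's move is to lift `control_plane` out into a central controller and leave only `data_plane` on the device.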

SDN Operations
1. Centralized Network Control:
Centralized network control is a foundational operation in SDN, where a centralized controller makes global decisions for the entire network.
This operation provides a unified view for efficient network management, allowing administrators to configure and control the network from a
central point. It enables consistent and coordinated decision-making across the network.
2. Programmability and Automation:
Programmability and automation involve using software applications to configure and manage network operations dynamically. This operation
reduces manual configuration efforts and potential errors, facilitating rapid adaptation to changing network requirements and enhancing overall
operational efficiency.
3. Dynamic Traffic Management:
Dynamic traffic management allows for real-time adjustments to network traffic flows based on changing conditions and requirements. This
operation optimizes network resources dynamically, supports efficient use of bandwidth, and enables adaptive responses to varying traffic
patterns.
4. Flow-Based Control:
Flow-based control involves defining, managing, and controlling network flows, allowing for granular control over packet forwarding. This
operation enables administrators to define specific flow-based policies, enhances network visibility and control, and facilitates optimized packet
forwarding based on flow definitions.
5. Monitoring and Analytics:
Monitoring and analytics operations involve gathering real-time data and using analytical tools to gain insights into network performance. This
operation provides visibility into network behavior and performance, facilitating proactive issue identification and resolution, and informing
decision-making for optimizing network resources.

OpenFlow Messages - Controller to Switch


Controller-to-switch messages are initiated by the controller and used to directly manage or inspect the state of the switch. Controller-to-switch messages might or might not require a response from the switch.

The controller-to-switch messages include the following subtypes:
• Features—The controller requests the basic capabilities of a switch by sending a features request. The switch must respond with a features
reply that specifies the basic capabilities of the switch.
• Configuration—The controller sets and queries configuration parameters in the switch. The switch only responds to a query from the
controller.
• Modify-State—The controller sends Modify-State messages to manage state on the switches. Their primary purpose is to add, delete, and
modify flow or group entries in the OpenFlow tables and to set switch port properties.
• Read-State—The controller sends Read-State messages to collect various information from the switch, such as current configuration and
statistics.
• Packet-out—These are used by the controller to send packets out of the specified port on the switch, or to forward packets received
through packet-in messages. Packet-out messages must contain a full packet or a buffer ID representing a packet stored in the switch. The
message must also contain a list of actions to be applied in the order they are specified. An empty action list drops the packet.
• Barrier—Barrier messages are used to confirm the completion of previous operations. The controller sends a Barrier request. The switch
must send a Barrier reply when all the previous operations are complete.
• Role-Request—Role-Request messages are used by the controller to set the role of its OpenFlow channel, or query that role. It is typically
used when the switch connects to multiple controllers.
• Asynchronous-Configuration—These are used by the controller to set an additional filter on the asynchronous messages that it wants to
receive, or to query that filter. It is typically used when the switch connects to multiple controllers.
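On the wire, every one of these messages starts with the same 8-byte OpenFlow header (version, type, length, transaction id) in network byte order. A Features request, for instance, is header-only, so its length field is 8. The sketch below hand-packs one for OpenFlow 1.0 (type code 5 is OFPT_FEATURES_REQUEST in that version):

```python
# Building a raw OpenFlow 1.0 Features request byte-by-byte. Every OpenFlow
# message begins with the common header: u8 version, u8 type, u16 length,
# u32 xid, all big-endian. A Features request carries no body.

import struct

OFP_VERSION_1_0 = 0x01        # wire version for OpenFlow 1.0
OFPT_FEATURES_REQUEST = 5     # message type code in OpenFlow 1.0

def ofp_header(version, msg_type, length, xid):
    """Pack the common header: '!BBHI' = u8, u8, u16, u32, network byte order."""
    return struct.pack("!BBHI", version, msg_type, length, xid)

def features_request(xid):
    # Header-only message, so total length is exactly 8 bytes.
    return ofp_header(OFP_VERSION_1_0, OFPT_FEATURES_REQUEST, 8, xid)

msg = features_request(xid=42)
print(msg.hex())   # 010500080000002a
print(len(msg))    # 8
```

The switch echoes the same xid back in its Features reply, which is how the controller matches requests to responses.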

Symmetric and Asynchronous Messages


1. Asynchronous messages
Switches send asynchronous messages to controllers to inform a packet arrival or switch state change. For example, when a flow entry is
removed due to timeout, the switch sends a flow-removed message to inform the controller.
The asynchronous messages include the following subtypes:
• Packet-In—Transfer the control of a packet to the controller. For all packets forwarded to the Controller reserved port using a flow entry or
the table-miss flow entry, a packet-in event is always sent to controllers. Other processing, such as TTL checking, can also generate packet-in
events to send packets to the controller. The packet-in events can include the full packet or can be configured to buffer packets in the
switch. If the packet-in event is configured to buffer packets, the packet-in events contain only some fraction of the packet header and a
buffer ID. The controller processes the full packet or the combination of the packet header and the buffer ID. Then, the controller sends a
packet-out message to direct the switch to process the packet.
• Flow-Removed—Inform the controller about the removal of a flow entry from a flow table. These are generated due to a controller flow
delete request or the switch flow expiry process when one of the flow timeouts is exceeded.
• Port-status—Inform the controller of a state or setting change on a port.
• Error—Inform the controller of a problem or error.
2. Symmetric messages
Symmetric messages are sent without solicitation, in either direction.
The symmetric messages contain the following subtypes:
• Hello—Hello messages are exchanged between the switch and controller upon connection startup.
• Echo—Echo request or reply messages can be sent from either the switch or the controller, and must return an echo reply. They are mainly
used to verify the liveness of a controller-switch connection, and might also be used to measure its latency or bandwidth.
OpenFlow timers
An OpenFlow switch supports the following timers:
• Connection detection interval—Interval at which the OpenFlow switch sends an Echo Request message to a controller. When the OpenFlow
switch receives no Echo Reply message within three intervals, the OpenFlow switch is disconnected from the controller.
• Reconnection interval—Interval for the OpenFlow switch to wait before it attempts to reconnect to a controller.
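The connection-detection behavior described above — declare the controller unreachable after three intervals without an Echo reply — can be sketched as a pure simulation (no real sockets or timers; the reply pattern is supplied as a list):

```python
# Sketch of the OpenFlow connection-detection timer: one Echo request is sent
# per interval, and the connection is declared dead after three consecutive
# intervals with no Echo reply. Simulation only.

def probe_controller(reply_pattern, max_missed=3):
    """reply_pattern[i] is True if the i-th Echo request got an Echo reply."""
    missed = 0
    for got_reply in reply_pattern:
        missed = 0 if got_reply else missed + 1  # a reply resets the counter
        if missed >= max_missed:
            return "disconnected"
    return "connected"

print(probe_controller([True, True, False, True]))    # connected
print(probe_controller([True, False, False, False]))  # disconnected
```

After a disconnect, the switch waits for the reconnection interval before attempting to re-establish the channel.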

NOX Architecture

NOX was the first OpenFlow controller platform. It is written in C++ (with early Python bindings) and exposes an event-driven programming model: control applications register handlers for OpenFlow events such as packet-in and switch join/leave, and use the NOX API to install flow entries on switches.

POX Architecture

POX is a Python-based successor to NOX aimed at rapid prototyping and teaching. Control logic is written as POX components that register listeners on the POX core; the core dispatches OpenFlow events (connection up, packet-in, flow-removed) from connected switches to those components.
