Computer Networks Notes
Network Topology
Network topology defines the structure of a network and how its components are connected to each other.
1. Bus Topology:
• In a bus topology, all devices are connected to a single central cable (the bus).
• Data transmitted on the bus travels in both directions and reaches every connected device, but it is accepted only by the device it is intended for.
• It's a simple and inexpensive topology but can suffer from performance issues if too many devices are connected.
2. Star Topology:
• In a star topology, all devices are connected to a central hub or switch.
• Data passes through the hub or switch, allowing for easy management and fault isolation.
• If the hub or switch fails, the entire network can be affected.
3. Ring Topology:
• In a ring topology, each device is connected to exactly two other devices, forming a closed loop.
• Data travels in one direction through the ring until it reaches its destination.
• It's less common in modern networks due to its susceptibility to network disruptions if a single connection or device fails.
4. Mesh Topology:
• In a full mesh topology, every device is connected to every other device.
• Provides high redundancy and fault tolerance, making it suitable for critical applications.
• However, it can be expensive and challenging to manage in large networks.
Switching
1. Circuit Switching
a. Circuit switching is a communication method where a dedicated communication path, or circuit, is established between two
devices before data transmission begins.
b. The circuit remains dedicated to the communication for the duration of the session, and no other devices can use it while the
session is in progress.
c. Circuit switching is commonly used in voice communication and some types of data communication.
Advantages of Circuit Switching:
• Guaranteed bandwidth: Circuit switching provides a dedicated path for communication, ensuring that bandwidth is guaranteed
for the duration of the call.
• Low latency: Circuit switching provides low latency because the path is predetermined, and there is no need to establish a
connection for each packet.
• Predictable performance: Circuit switching provides predictable performance because the bandwidth is reserved, and there is
no competition for resources.
• Suitable for real-time communication: Circuit switching is suitable for real-time communication, such as voice and video,
because it provides low latency and predictable performance.
Disadvantages of Circuit Switching:
• Inefficient use of bandwidth: Circuit switching is inefficient because the bandwidth is reserved for the entire duration of the
call, even when no data is being transmitted.
• Limited scalability: Circuit switching is limited in its scalability because the number of circuits that can be established is finite,
which can limit the number of simultaneous calls that can be made.
• High cost: Circuit switching is expensive because it requires dedicated resources, such as hardware and bandwidth, for the
duration of the call.
2. Packet Switching
a. Packet switching is a communication method where data is divided into smaller units called packets and transmitted over the
network.
b. Each packet contains the source and destination addresses, as well as other information needed for routing.
c. The packets may take different paths to reach their destination, and they may be transmitted out of order or delayed due to
network congestion.
Advantages of Packet Switching:
• Efficient use of bandwidth: Packet switching is efficient because bandwidth is shared among multiple users, and resources are
allocated only when data needs to be transmitted.
• Flexible: Packet switching is flexible and can handle a wide range of data rates and packet sizes.
• Scalable: Packet switching is highly scalable and can handle large amounts of traffic on a network.
• Lower cost: Packet switching is less expensive than circuit switching because resources are shared among multiple users.
Disadvantages of Packet Switching:
• Higher latency: Packet switching has higher latency than circuit switching because packets must be routed through multiple
nodes, which can cause delay.
• Limited QoS: Packet switching provides limited QoS guarantees, meaning that different types of traffic may be treated equally.
• Packet loss: Packet switching can result in packet loss due to congestion on the network or errors in transmission.
• Unsuitable for real-time communication: Packet switching is not suitable for real-time communication, such as voice and video,
because of the potential for latency and packet loss.
Circuit Switching vs Packet Switching

| Circuit Switching | Packet Switching |
|---|---|
| Has three phases: i) connection establishment, ii) data transfer, iii) connection release. | Data transfer takes place directly, with no setup phase. |
| Each data unit knows the entire path, which is provided by the source. | Each data unit knows only the final destination address; the intermediate path is decided by the routers. |
| Data is processed at the source system only. | Data is processed at all intermediate nodes, including the source system. |
| The delay between data units is uniform. | The delay between data units is not uniform. |
| Resource reservation is a feature, because the path is fixed for data transmission. | There is no resource reservation, because bandwidth is shared among users. |
| Wastage of resources is higher. | Wastage of resources is lower. |
| Transmission of the data is done by the source only. | Transmission of the data is done not only by the source but also by the intermediate routers. |
| Congestion can occur during the connection establishment phase, when a channel is requested but already occupied. | Congestion can occur during the data transfer phase, when a large number of packets arrive in a short time. |
| Not convenient for handling bilateral traffic. | Suitable for handling bilateral traffic. |
| The charge depends on time and distance, not on traffic in the network. | The charge is based on the number of bytes and connection time. |
| Recording of packets is never possible. | Recording of packets is possible. |
| There is a physical path between the source and the destination. | There is no dedicated physical path between the source and the destination. |
| Does not support store-and-forward transmission. | Supports store-and-forward transmission. |
| Call setup is required. | No call setup is required. |
| Each packet follows the same route. | Packets can follow any route. |
| Implemented at the physical layer. | Implemented at the data link layer and network layer. |
| Requires simple protocols for delivery. | Requires complex protocols for delivery. |
Network Types
1. LAN (Local Area Network):
• LAN is a network that typically covers a small geographical area, such as a single building, office, or campus.
• Devices within a LAN are usually connected using Ethernet cables or wireless technologies like Wi-Fi.
• LANs are commonly used in homes, offices, and schools for sharing resources like printers, files, and internet connections.
• They are characterized by high data transfer rates and low latency.
2. MAN (Metropolitan Area Network):
• MAN covers a larger geographical area than a LAN but is still limited to a city or a large campus.
• MANs are used to interconnect multiple LANs within a metropolitan area, allowing for data sharing and communication between
them.
• They are often used by businesses, government agencies, or educational institutions to connect their various locations across a city.
3. WAN (Wide Area Network):
• WAN is a network that spans a wide geographical area, often across cities, countries, or even continents.
• WANs can be established using various technologies, including leased lines, fiber-optic cables, and satellite links.
• The internet itself is a global example of a WAN, connecting networks worldwide.
• WANs are suitable for long-distance communication and the exchange of data between distant locations.
Reference Layers
1. OSI
The OSI (Open Systems Interconnection) model defines a conceptual framework for understanding and standardizing network
communication. It consists of seven layers, each with its specific functions and responsibilities:
1. Physical Layer: The physical layer deals with the actual transmission of raw binary data over physical media, such as cables and
electrical voltages. It defines characteristics like signal voltage levels, cable types, and data transmission rates.
2. Data Link Layer: This layer is responsible for creating a reliable link between two directly connected nodes, ensuring error detection
and correction and handling flow control. Ethernet and Wi-Fi are examples of data link layer technologies.
3. Network Layer: The network layer is in charge of routing data packets between different networks. It provides logical addressing (IP
addresses) and determines the best path for data to travel between the source and destination using routing algorithms.
4. Transport Layer: The transport layer establishes end-to-end communication, ensuring data integrity, reliability, and flow control. It
uses protocols like TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
5. Session Layer: The session layer manages, establishes, and terminates communication sessions or connections between applications.
It also handles synchronization and checkpointing during data exchange.
6. Presentation Layer: Responsible for data translation, encryption, and compression, the presentation layer ensures that data sent by
the application layer is in a format that the application on the receiving end can understand and use.
7. Application Layer: The top layer of the OSI model interacts directly with user applications and provides network services, including
file transfer, email, and remote access. It is where user-level protocols and applications operate, such as HTTP for web browsing and
SMTP for email.
2. TCP/IP
1. Application Layer: The application layer is responsible for providing network services directly to user applications. It includes various
protocols like HTTP, FTP, SMTP, and DNS, enabling applications to communicate over the network.
2. Transport Layer: This layer ensures end-to-end communication, handling functions such as data segmentation, flow control, error
detection, and reliability. Notable protocols in this layer are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).
3. Internet Layer (Network Layer): The internet layer manages routing and forwarding of data packets between different networks. It
uses IP (Internet Protocol) to assign logical addresses (IP addresses) to devices and determine the best path for data transmission.
4. Link Layer (Data Link Layer): The link layer deals with the physical connection between directly connected nodes, handling data
framing, error detection, and physical addressing. Ethernet and Wi-Fi are examples of link layer technologies.
Module 2
Tuesday, November 7, 2023 3:26 PM
2. Simplex stop-and-wait protocol:
This protocol assumes that data is transferred in one direction only, that transmission is error-free, and that the receiver can process incoming data only at a finite rate. These assumptions imply that the transmitter must not send frames faster than the receiver can process them. The critical issue here is preventing the transmitter from flooding the receiver. The typical solution is for the receiver to provide feedback to the sender; the approach is as follows:
Step 1: The acknowledgement frame is returned to the sender, informing it that the most recently received frame has been processed
and transmitted to the host.
Step 2: Permission is granted to send the following frame.
Step 3: After transmitting a frame, the sender must wait for an acknowledgement frame from the receiver before sending another frame.
The Simplex stop-and-wait protocol is used when the sender transmits one frame and waits for the recipient's response; only after an acknowledgement is received does the sender transmit the next frame. The Simplex Stop & Wait Protocol is depicted diagrammatically as follows:
Flow Control Mechanisms
1. Stop-and-Wait:
○ In the stop-and-wait protocol, the sender sends one frame at a time and waits for an acknowledgment (ACK) from the receiver
before sending the next frame.
○ If the sender does not receive an ACK within a specified timeout period, it assumes that the frame was lost or damaged and
retransmits the same frame.
○ This method is simple but can lead to low efficiency, as the sender is often waiting for an acknowledgment, which may introduce
delays.
2. Sliding Window:
• Sliding window protocols allow multiple frames to be in transit between sender and receiver at the same time, increasing the
efficiency of data transfer.
• The sender can send a certain number of frames (window size) before requiring an acknowledgment. The receiver, in turn,
acknowledges the receipt of frames within the window.
• This allows for a continuous flow of data, as the sender can keep sending frames without waiting for individual acknowledgments.
• There are two types of sliding window protocols: Go-Back-N and Selective Repeat.
a. Go-Back-N:
• The sender can send multiple frames without waiting for individual acknowledgments.
• If an acknowledgment is not received within a specified time, the sender assumes that all frames starting from the lost/damaged
frame need to be retransmitted.
b. Selective Repeat:
• The sender can send multiple frames, and the receiver can individually acknowledge the frames it successfully receives.
• If a frame is lost or damaged, only that specific frame is retransmitted, not the entire set of frames.
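The difference in retransmission behavior can be sketched with a toy model (the frame counts, the loss index, and the function names below are made-up examples; real protocols also bound retransmission by the window size, which this sketch ignores):

```python
# Toy model of retransmission after a single lost frame.
# Assumes cumulative ACKs for Go-Back-N and per-frame ACKs for Selective Repeat.

def go_back_n_resend(total_frames: int, lost_frame: int) -> list[int]:
    """Go-Back-N: the lost frame and every frame after it are resent."""
    return list(range(lost_frame, total_frames))

def selective_repeat_resend(total_frames: int, lost_frame: int) -> list[int]:
    """Selective Repeat: only the lost frame is resent."""
    return [lost_frame]

print(go_back_n_resend(10, 3))        # frames 3..9 are resent
print(selective_repeat_resend(10, 3)) # only frame 3 is resent
```

The gap between the two grows with the number of in-flight frames, which is why Selective Repeat is preferred on lossy links despite its extra buffering cost.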
| S.No | Stop-and-Wait Protocol | Sliding Window Protocol |
|---|---|---|
| 1 | The sender sends one frame and waits for an acknowledgment from the receiver. | The sender sends more than one frame to the receiver and retransmits the frame(s) which is/are damaged or suspected. |
| 2 | Efficiency is worse. | Efficiency is better. |
| 3 | Sender window size is 1. | Sender window size is N. |
| 4 | Receiver window size is 1. | Receiver window size may be 1 or N. |
| 5 | Sorting is not necessary. | Sorting may or may not be necessary. |
| 6 | Efficiency is 1/(1+2*a). | Efficiency is N/(1+2*a). |
| 7 | Half duplex. | Full duplex. |
| 8 | Mostly used in low-speed, error-free networks. | Mostly used in high-speed, error-prone networks. |
| 9 | The sender cannot send any new frames until it receives an acknowledgment for the previous frame. | The sender can continue to send new frames even if some of the earlier frames have not yet been acknowledged. |
| 10 | Lower throughput, as there is a lot of idle time while waiting for the acknowledgment. | Higher throughput, as it allows continuous transmission of frames. |
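The efficiency formulas above (1/(1+2*a) for Stop-and-Wait, N/(1+2*a) for sliding window, where a is the ratio of propagation delay to transmission time) can be evaluated directly; the values of a and N below are arbitrary examples:

```python
def stop_and_wait_efficiency(a: float) -> float:
    """Stop-and-Wait link utilization; a = propagation delay / transmission time."""
    return 1 / (1 + 2 * a)

def sliding_window_efficiency(n: int, a: float) -> float:
    """Sliding window utilization; capped at 1 once the window keeps the pipe full."""
    return min(1.0, n / (1 + 2 * a))

a = 2.0  # example: propagation delay is twice the transmission time
print(stop_and_wait_efficiency(a))      # 0.2
print(sliding_window_efficiency(4, a))  # 0.8
```

Note that for N >= 1 + 2a the sliding window keeps the link fully utilized, which is the motivation for choosing a window size that covers the round-trip time.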
Module 3
Monday, November 13, 2023 3:27 PM
IPv4 Addressing
• IPv4 (Internet Protocol version 4) is the fourth version of the Internet Protocol, which is the communications protocol that provides an
identification and location system for computers on networks and routes traffic across the Internet.
• IPv4 addresses are 32 bits long and are typically represented in dotted-decimal notation.
• Each 32-bit IPv4 address is divided into four 8-bit octets, separated by periods. Each octet is converted to its decimal equivalent, resulting in a
format like x.x.x.x, where x is a decimal number between 0 and 255.
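The octet-splitting rule described above can be sketched in a few lines (the 32-bit address value and the function name are arbitrary examples):

```python
# Sketch: converting a 32-bit IPv4 address to dotted-decimal notation.

def to_dotted_decimal(addr: int) -> str:
    """Split a 32-bit integer into four 8-bit octets, high byte first."""
    octets = [(addr >> shift) & 0xFF for shift in (24, 16, 8, 0)]
    return ".".join(str(o) for o in octets)

print(to_dotted_decimal(0xC0A80001))  # 192.168.0.1
```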
IPv4 Header Format
IPv4 Classes
1. Class A Addresses:
○ Range: 1.0.0.0 to 126.255.255.255
○ Network Portion: First octet
○ Host Portion: Last three octets
○ Class A addresses were designed for large organizations, as they could support a massive number of hosts on a single network (2^24 - 2 hosts).
2. Class B Addresses:
○ Range: 128.0.0.0 to 191.255.255.255
○ Network Portion: First two octets
○ Host Portion: Last two octets
○ Class B addresses were intended for medium-sized organizations, supporting a moderate number of hosts per network (2^16 - 2 hosts).
3. Class C Addresses:
○ Range: 192.0.0.0 to 223.255.255.255
○ Network Portion: First three octets
○ Host Portion: Last octet
○ Class C addresses were used for smaller networks, providing fewer host addresses (2^8 - 2 hosts).
4. Class D Addresses:
○ Range: 224.0.0.0 to 239.255.255.255
○ Reserved for multicast groups.
○ Not used for traditional unicast host addressing. Class D addresses are allocated for multicast communication.
5. Class E Addresses:
○ Range: 240.0.0.0 to 255.255.255.255
○ Reserved for experimental purposes.
○ Not used for regular host addressing. Class E addresses were set aside for research and development.
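Since the class is determined entirely by the first octet, the ranges above can be encoded as a small lookup (the function name is ours; 0 and 127 fall outside the classes as reserved and loopback respectively):

```python
def ipv4_class(address: str) -> str:
    """Classify an IPv4 address (dotted decimal) by the value of its first octet."""
    first = int(address.split(".")[0])
    if first == 0 or first == 127:   # 0.x.x.x reserved, 127.x.x.x loopback
        return "reserved/loopback"
    if first <= 126:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"
    return "E"

print(ipv4_class("10.0.0.1"))   # A
print(ipv4_class("224.0.0.5"))  # D
```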
CIDR (Classless Inter-Domain Routing)
• Block id is used for network identification, but the number of bits is not pre-defined as it is in the classful IP addressing scheme.
• Host id is used to identify the host part of the network.
2. Notation
• CIDR IP addresses look as follows: w.x.y.z/n
• In the example above w, x, y, z each defines an 8-bit binary number, while n tells us about the number of bits used to identify the network
and is called an IP network prefix or mask.
3. Rules
Requirements for CIDR are defined below:
• Addresses should be contiguous.
• The number of addresses in the block must be in the power of 2.
• The first address of every block must be divisible by the size of the block.
4. Block information
Given the following IP address, let's find the network and host bits.
200.56.23.41/28
The following illustration gives a clear understanding of the aforementioned IP address scheme:
To find the number of addresses in the block, we use the following formula, where N_h represents the number of addresses in the network and n is the prefix length:
N_h = 2^(32 - n)
In this particular case n = 28, the number of block id bits, so subtracting it from 32 gives the total number of addresses expected in the network:
N_h = 2^(32 - 28) = 2^4 = 16
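The same calculation can be checked with Python's standard `ipaddress` module (`strict=False` lets us pass a host address such as 200.56.23.41 rather than the block's first address):

```python
import ipaddress

def addresses_in_block(prefix_len: int) -> int:
    """Number of addresses in a CIDR block with the given prefix length."""
    return 2 ** (32 - prefix_len)

net = ipaddress.ip_network("200.56.23.41/28", strict=False)
print(addresses_in_block(28))  # 16
print(net.num_addresses)       # 16
print(net.network_address)     # 200.56.23.32 (first address of the block)
```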
Subnetting
• Subnetting is a technique used in computer networking to divide a single network into multiple smaller networks, known as
subnetworks or subnets.
• The purpose of subnetting is to partition a large network into smaller, more efficient subnets, which can improve network
performance, security, and organization.
• In a subnetted network, each subnet has its own unique subnet mask and network address.
• This unique subnet mask and network address allow devices on the subnet to communicate with each other directly, without having to go through a router or other networking device.
• Subnetting works by dividing the host part of an IP address into two or more subnets using a subnet mask. The subnet mask is a series
of ones and zeros that determines which portion of the IP address represents the network and which portion represents the host.
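The mechanics described above can be sketched with the standard `ipaddress` module, borrowing two host bits to split a /24 into four /26 subnets (the 192.168.1.0/24 network is a made-up private-range example):

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")  # 256 addresses, mask 255.255.255.0
subnets = list(net.subnets(new_prefix=26))    # borrow 2 host bits -> 4 subnets

for s in subnets:
    print(s, "-", s.num_addresses, "addresses")
# 192.168.1.0/26, 192.168.1.64/26, 192.168.1.128/26, 192.168.1.192/26
```

Each extra borrowed bit doubles the number of subnets and halves the number of addresses per subnet.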
• Advantages of Subnetting
There are several advantages of using subnetting in a computer network:
• Improved network performance: By dividing a large network into smaller subnets, you can reduce the amount of network traffic on the main network, thus improving communication speed and efficiency within the subnets.
• Enhanced security: Subnetting can create separate networks for different types of devices or users, thus helping to improve security by limiting access to sensitive resources.
• Greater network scalability: Subnetting allows you to add more devices to a network without requiring additional IP addresses or
network infrastructure.
• Enhanced network organization: Subnetting allows you to group devices by location, department, or other criteria, making it easier to
manage and maintain the network.
• Reduced network congestion: By dividing a network into smaller subnets, you can reduce the number of devices trying to
communicate over the same network segment. Thus, reducing congestion and improving overall network performance.
• Improved network reliability: By creating redundant subnets, you can improve the reliability of your network by providing a backup
communication path if one subnet goes offline.
• Disadvantages of Subnetting
There are a few potential drawbacks to using subnetting in a computer network:
• Complexity: Subnetting can add complexity to a network, as it requires the use of subnet masks and network addresses, which may be
difficult for some users to understand.
• Additional hardware: In some cases, subnetting may require the use of additional hardware or network devices, such as routers or switches, thus increasing the cost of the network.
• Increased configuration: Subnetting requires the configuration of subnet masks and network addresses, which can be time-consuming and may require the assistance of a network administrator.
• Limited scalability: While subnetting can allow you to add more devices to a network without requiring additional IP addresses, it limits
the number of subnets and devices that can be created.
• Security risks: Subnetting can improve security by creating separate networks for different users or devices, but it can also create
security risks if not configured properly, as it can allow unauthorized users to access restricted resources.
IPv4 vs IPv6

| Feature | IPv4 | IPv6 |
|---|---|---|
| Address Space | Limited (approx. 4.3 billion) | Vast (approx. 3.4 x 10^38) |
| Subnetting | Common, often necessary | Still relevant, but less critical due to the vast address space |
| Network Configuration | NAT often used, DHCP for address assignment | Designed to eliminate the need for NAT, supports DHCPv6 |
| Header Complexity | Complex header structure | Simplified header structure with optional extension headers |
| Security Features | Originally lacked integrated security features, IPSec added later | IPSec support is mandatory |
| Deployment Status | Widely deployed, but facing challenges due to address exhaustion | Coexisting with IPv4, gaining increased adoption |
| Use Cases | Predominantly used in existing networks, the Internet, and many devices | Increasingly adopted, especially in new network deployments |
Routed Protocols vs Routing Protocols

| Feature | Routed Protocol | Routing Protocol |
|---|---|---|
| Definition | A protocol used to send data from one network to another. Examples include IPv4 and IPv6. | A protocol used by routers to determine the best path for data to travel from source to destination. |
| Purpose | Focuses on the end-to-end delivery of data packets. | Focuses on the process of selecting the best path for data to travel within a network. |
| Examples | IPv4, IPv6, IPX, AppleTalk | RIP (Routing Information Protocol), OSPF (Open Shortest Path First), BGP (Border Gateway Protocol) |
| Functionality | Provides addressing and packet forwarding capabilities. | Determines the optimal path for data and shares routing information with other routers. |
| Configuration | Generally requires manual configuration of network addresses. | Involves configuration of routing tables and protocols to share routing information dynamically. |
| Network Layer | Operates at the network layer of the OSI model. | Operates at the network layer and is responsible for path determination. |
| Dependency | Independent of the specific routing methods used. | Dependent on the routing algorithm and protocols implemented in the network. |
| Dynamic Updates | Does not dynamically update routing information. | Typically supports dynamic updates to adapt to changes in the network topology. |
| Examples of Usage | Used in conjunction with routing protocols to facilitate data transfer between networks. | Implemented on routers to enable dynamic routing and efficient data forwarding within a network. |
Classification of Routing Algorithms
Static Routing Algorithms:
Definition: Static routing involves manually configuring the routes in a network. The network administrator defines the paths that data packets should take to reach their destination.
Characteristics:
• Paths are predetermined and configured in advance.
• Changes in network topology are not automatically accommodated.
• Simplicity and predictability, but less adaptable to dynamic changes.
Dynamic Routing Algorithms:
Definition: Dynamic routing algorithms determine paths for data packets in real-time based on current network conditions. These algorithms
adapt to changes in the network topology.
Characteristics:
• Paths are determined dynamically based on real-time information.
• Adapt to changes in the network, making them more flexible.
• Examples include RIP, OSPF, and BGP.
Distance Vector Routing:
Definition: Distance Vector Routing algorithms operate by routers exchanging information about their routing tables with their neighbors. Each
router makes decisions based on the distance and direction (vector) to a destination.
Characteristics:
• Each router maintains a table indicating the distance (number of hops) to reachable destinations.
• Examples include RIP (Routing Information Protocol).
• Simplicity, but may suffer from slow convergence in large networks.
Link-State Routing:
Definition: Link-State Routing algorithms consider the state of links in the entire network. Routers share information about the state of their
links, allowing each router to build a comprehensive view of the network.
Characteristics:
• Each router maintains a detailed map of the entire network.
• Examples include OSPF (Open Shortest Path First).
• More scalable and adaptable to larger networks but can be more complex to implement.
Path Vector Routing:
Definition: Path Vector Routing is similar to distance vector routing but includes the entire path information in the routing updates. This
provides more information about the network topology.
Characteristics:
• The routing information includes the complete path to a destination.
• Example: BGP (Border Gateway Protocol).
• Commonly used in inter-domain routing and the global Internet.
Key Characteristics:
1. Distance as Metric:
○ The fundamental metric used in Distance Vector Routing is the "distance" to a destination. This distance is typically measured in terms
of the number of hops or routers between the source and destination.
2. Routing Table Updates:
○ Routers periodically send updates to their neighboring routers, sharing information about the distances to various destinations. These
updates are often referred to as "vectors."
3. Bellman-Ford Algorithm:
○ Distance Vector Routing algorithms often use the Bellman-Ford algorithm to calculate the shortest path to all destinations. The
algorithm iteratively refines its estimates based on the received distance vectors.
4. Convergence Time:
○ Convergence time refers to the time it takes for all routers in the network to have consistent and updated information about the
network topology. Distance Vector Routing algorithms can experience longer convergence times, especially in large networks.
5. Routing by Rumor:
○ The routing updates are often described as routers "telling" their neighbors about the distances to various destinations. This process of routers sharing information can be likened to a rumor spreading through the network.
Examples:
• Routing Information Protocol (RIP):
○ RIP is a classic example of a Distance Vector Routing protocol. It operates within an autonomous system and uses hop count as its
metric.
• Interior Gateway Routing Protocol (IGRP):
○ IGRP is another example, developed by Cisco. It takes into account factors such as bandwidth and delay in addition to hop count.
Advantages:
• Simplicity: Distance Vector Routing algorithms are relatively simple to understand and implement.
• Ease of Configuration: Configuring routers in a distance vector routing environment is typically straightforward.
• Low Overhead: The amount of routing information exchanged between routers is generally less than in some other routing algorithms.
Limitations:
• Slow Convergence: Distance Vector Routing algorithms can experience slow convergence, especially in larger networks. This is because
routers need time to exchange and process routing updates.
• Count to Infinity Problem: This is a common issue in distance vector algorithms where routers may incorrectly believe they have found a
shorter path, leading to routing loops.
• Limited Scalability: Distance Vector Routing may not scale well to very large networks due to the overhead associated with frequent
updates.
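The Bellman-Ford relaxation at the heart of distance vector routing can be sketched in a few lines; the three-router topology and link costs below are a made-up example:

```python
# Sketch of the Bellman-Ford relaxation used by distance vector protocols.
INF = float("inf")

def bellman_ford(nodes, edges, source):
    """Return least-cost distances from `source` to every node."""
    dist = {n: INF for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):    # at most |V| - 1 relaxation rounds
        for u, v, w in edges:
            if dist[u] + w < dist[v]:  # relax edge u -> v
                dist[v] = dist[u] + w
    return dist

nodes = ["A", "B", "C"]
edges = [("A", "B", 1), ("B", "A", 1),  # each link listed in both directions
         ("B", "C", 2), ("C", "B", 2),
         ("A", "C", 5), ("C", "A", 5)]

d = bellman_ford(nodes, edges, "A")
print(d["C"])  # 3 -- A reaches C via B (1 + 2), cheaper than the direct link (5)
```

In a real protocol each router runs only its own local relaxation against its neighbors' advertised vectors, which is where the slow-convergence and count-to-infinity problems noted above come from.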
Module 4
Tuesday, November 14, 2023 5:32 PM
Sockets
Sockets play a crucial role in the Transport Layer of a computer network, providing a programming interface for network communication. A socket is essentially an endpoint
for sending or receiving data across a computer network. It acts as an abstraction layer that allows applications to communicate with each other, regardless of the underlying
network details.
1. Socket Types:
○ Stream Sockets (TCP): These provide a reliable, connection-oriented communication channel. TCP (Transmission Control Protocol) is the most common protocol
associated with stream sockets. It ensures reliable, ordered delivery of data between the sender and receiver.
○ Datagram Sockets (UDP): These provide connectionless communication, where individual packets (datagrams) are sent without establishing a connection first.
UDP (User Datagram Protocol) is often associated with datagram sockets. It is faster but less reliable compared to TCP.
2. Socket Operations:
• Socket Creation: An application creates a socket using the socket() system call or function. The socket can be of type SOCK_STREAM (for stream sockets) or
SOCK_DGRAM (for datagram sockets).
• Binding: The socket is bound to a specific address and port using the bind() operation. This step is crucial, especially for servers, as it specifies the network address
and port on which the server will listen for incoming connections or data.
• Listening (for Stream Sockets): For servers using stream sockets, the listen() operation is used to wait for incoming connection requests.
• Connection Establishment (for Stream Sockets): The accept() operation is used by a server to accept an incoming connection request. This operation returns a new
socket for communication with the client.
• Connecting (for Stream Sockets): For clients using stream sockets, the connect() operation is used to establish a connection with a server.
• Sending and Receiving Data: The send() and recv() operations are used to send and receive data over the socket.
• Closing: The close() operation is used to release the socket when communication is complete.
3. Sockets are identified by an IP address and port number. In the case of stream sockets, the combination of local and remote IP addresses and port numbers uniquely
identifies a connection.
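The operations above can be exercised end to end with Python's standard `socket` module, running a one-shot echo server on the loopback interface (binding to port 0 asks the OS for any free port; this is an illustrative sketch, not production code):

```python
import socket
import threading

def echo_once(server_sock):
    conn, _addr = server_sock.accept()  # accept(): returns a new socket per client
    conn.sendall(conn.recv(1024))       # echo the client's bytes straight back
    conn.close()
    server_sock.close()

# Server side: socket() -> bind() -> listen() -> accept()
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # SOCK_STREAM = TCP
server.bind(("127.0.0.1", 0))           # bind(): pick address; port 0 = any free port
server.listen(1)                        # listen(): wait for incoming connections
port = server.getsockname()[1]
t = threading.Thread(target=echo_once, args=(server,))
t.start()

# Client side: socket() -> connect() -> send()/recv() -> close()
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))     # TCP three-way handshake happens here
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
print(reply)  # b'hello'
```

The `connect()` call is also where the Three-Way Handshake described in the next section takes place, hidden from the application by the sockets abstraction.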
Connection Management
The Three-Way Handshake is a key process in the establishment of a connection in the Transmission Control Protocol (TCP), which is a connection-oriented protocol in the
Transport Layer of the Internet Protocol Suite. The purpose of the Three-Way Handshake is to ensure that both the sender and receiver are ready to exchange data before
actual communication begins. Here are the steps involved in the Three-Way Handshake:
1. Step 1: SYN (Synchronize)
○ The client initiates the connection by sending a TCP segment to the server with the SYN (Synchronize) flag set.
○ This segment contains the client's initial sequence number (ISN), which is a randomly chosen number to identify the first data byte in the communication.
2. Step 2: SYN-ACK (Synchronize-Acknowledge)
○ Upon receiving the initial SYN segment, the server responds with a TCP segment that has both the SYN and ACK (Acknowledge) flags set.
○ The server also selects its own initial sequence number (ISN).
3. Step 3: ACK (Acknowledge)
○ In the final step of the Three-Way Handshake, the client acknowledges the server's response by sending a TCP segment with the ACK flag set.
○ The acknowledgment (ACK) indicates that the client has received the server's acknowledgment, and the connection is now established.
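The sequence-number arithmetic in these three steps can be modeled with a short Python sketch. This is a toy model, not real TCP: the point is that each side picks a random ISN and that the SYN flag consumes one sequence number, so each side acknowledges the peer's ISN + 1:

```python
import random

def three_way_handshake():
    """Toy model of the TCP three-way handshake (sequence numbers only)."""
    client_isn = random.randrange(2**32)   # Step 1: client sends SYN, seq = client_isn
    server_isn = random.randrange(2**32)   # server picks its own ISN
    # Step 2: server sends SYN-ACK, seq = server_isn, ack = client_isn + 1
    syn_ack = {"seq": server_isn, "ack": client_isn + 1}
    # Step 3: client sends ACK, seq = client_isn + 1, ack = server_isn + 1
    ack = {"seq": syn_ack["ack"], "ack": server_isn + 1}
    return client_isn, server_isn, syn_ack, ack

client_isn, server_isn, syn_ack, ack = three_way_handshake()
# The SYN consumes one sequence number, hence the "+ 1" in each acknowledgment
print(syn_ack["ack"] == client_isn + 1)  # True
print(ack["ack"] == server_isn + 1)      # True
```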
Connection Termination (Four-Way Handshake)
Closing a TCP connection uses a four-step exchange of FIN and ACK segments:
1. Sending FIN: The entity initiating the close sends a FIN (finish) segment, indicating that it has finished sending data.
2. Acknowledging FIN: The receiving entity acknowledges the FIN segment with an ACK (acknowledgment) segment. This ACK confirms that the receiving entity has
received the FIN segment and understands that the sender is closing the connection.
3. Sending FIN: Once the receiving entity has finished sending any remaining data, it sends its own FIN segment to the initiating entity, indicating that it has also finished
sending data.
4. Acknowledging FIN: The initiating entity acknowledges the receiving entity's FIN segment with an ACK segment, completing the four-way handshake and finalizing the
connection termination.
UDP
The User Datagram Protocol (UDP) is a core communication protocol of the Internet Protocol suite (TCP/IP) used to send messages (datagrams) across an IP network. UDP is
an unreliable, connectionless protocol, meaning it does not guarantee delivery, ordering, or duplicate protection of data packets. This makes UDP a faster and more
lightweight protocol than its counterpart, Transmission Control Protocol (TCP), but also more prone to errors.
Key Characteristics of UDP:
• Connectionless: UDP does not establish a connection between the sender and receiver before sending data. This makes UDP faster and more efficient for time-sensitive applications.
• Unreliable: UDP does not guarantee delivery or ordering of data packets. It is up to the application to handle any lost or out-of-order packets.
• Best-effort delivery: UDP delivers data packets on a best-effort basis, meaning it makes no guarantees about their delivery. If a packet is lost or corrupted, UDP will not
retransmit it.
• Efficient: UDP is a very efficient protocol due to its simplicity and lack of connection establishment overhead.
Advantages of UDP:
• Speed: UDP is significantly faster than TCP due to its lack of connection establishment and error checking overhead.
• Efficiency: UDP is a very efficient protocol in terms of bandwidth usage.
• Simplicity: UDP is a simple protocol with a minimal header structure, making it easier to implement and understand.
Disadvantages of UDP:
• Unreliability: UDP does not guarantee delivery or ordering of data packets, which can lead to data loss or corruption.
• Lack of error checking: UDP does not perform extensive error checking, making it more susceptible to errors.
• Vulnerability to attacks: UDP's lack of connection establishment and error checking makes it more vulnerable to certain types of attacks, such as denial-of-service (DoS)
attacks.
Applications of UDP:
• Real-time applications: UDP is well-suited for real-time applications where speed is more important than accuracy, such as video streaming, online gaming, and VoIP
(Voice over IP).
• Small data transfers: UDP is also suitable for small data transfers, such as DNS (Domain Name System) lookups.
• Broadcasting and multicast: UDP supports broadcasting and multicasting, which allows a single sender to send data to multiple receivers simultaneously.
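The connectionless send/receive pattern can be sketched with Python's socket module: sendto() fires a datagram with no handshake and no delivery guarantee. (On the loopback interface the datagram arrives reliably in practice, which keeps this example deterministic; over a real network the application would have to tolerate loss.)

```python
import socket

# Create two UDP (SOCK_DGRAM) sockets; no connection is ever established
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # OS picks a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sendto() just fires the datagram: no handshake, no acknowledgment, no retransmission
sender.sendto(b"ping", addr)

# Each recvfrom() returns one whole datagram plus the sender's address
data, source = receiver.recvfrom(1024)
sender.close()
receiver.close()
print(data)  # b'ping'
```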
TCP
The Transmission Control Protocol (TCP) is a fundamental protocol of the Internet Protocol suite (TCP/IP) that ensures reliable, ordered, and error-checked delivery of data
packets across an IP network. TCP operates at the transport layer of the TCP/IP model, sitting above the Internet Protocol (IP) and below application-layer protocols such as
HTTP and FTP.
Key Characteristics of TCP:
• Connection-oriented: TCP establishes a connection between the sender and receiver before sending data. This connection ensures reliable data transfer and allows for
error checking and retransmission mechanisms.
• Reliable: TCP guarantees delivery, ordering, and duplicate protection of data packets. It employs error detection and correction mechanisms to ensure data integrity.
• Ordered delivery: TCP delivers data packets in the same order they were sent, ensuring the correct sequence of information.
• Flow control: TCP employs flow control mechanisms to regulate the rate at which data is sent, preventing congestion and ensuring smooth data transfer.
• Congestion control: TCP implements congestion control algorithms to adapt its transmission rate based on network conditions, preventing network overload.
Advantages of TCP:
• Reliability: TCP provides reliable data delivery, minimizing data loss or corruption.
• Ordered delivery: TCP maintains the correct order of data packets, ensuring the integrity of information.
• Error checking and retransmission: TCP employs error detection and correction mechanisms to identify and retransmit lost or corrupted packets.
• Congestion control: TCP's congestion control algorithms prevent network congestion and ensure efficient data transfer.
Disadvantages of TCP:
• Overhead: TCP's connection establishment and error checking mechanisms add overhead compared to UDP, making it slightly slower.
• Complexity: TCP is a more complex protocol than UDP, requiring more implementation effort.
Applications of TCP:
• File transfers: TCP is the preferred protocol for file transfers due to its reliability and error checking capabilities.
• Web browsing: TCP is widely used for web browsing, ensuring reliable delivery of web pages and other web content.
• Email: TCP is the standard protocol for sending and receiving emails, guaranteeing message delivery and integrity.
• Remote access: TCP is used for remote access protocols such as SSH (Secure Shell) and FTP, providing secure remote access to computer systems.
TCP vs UDP
• Connection: TCP is connection-oriented, establishing a connection between sender and receiver before data transmission and maintaining a stateful session for reliable communication. UDP is connectionless, sending data without establishing a prior connection, which makes it stateless and faster.
• Reliability: TCP is reliable, guaranteeing delivery, ordering, and duplicate protection of data packets to ensure accurate and complete information transfer. UDP is unreliable, guaranteeing none of these, which makes it more prone to errors and data loss.
• Error Checking: TCP's error checking is extensive; it employs error detection and correction mechanisms to identify and retransmit lost or corrupted data packets, ensuring data integrity. UDP performs only minimal error checking, relying on the application layer to handle errors, which makes it less robust in error-prone environments.
• Speed: TCP is slower, introducing overhead through connection establishment and error checking mechanisms. UDP lacks this overhead, enabling faster data transfer, particularly for real-time applications.
• Efficiency: TCP is less efficient, as the overhead of connection establishment and error checking reduces bandwidth utilization. UDP's lack of overhead makes it more efficient in terms of bandwidth usage, particularly for small data transfers.
• Flow Control: TCP implements flow control mechanisms to regulate the rate at which data is sent, preventing congestion and ensuring smooth data transfer. UDP does not, relying on the application layer to manage data flow, which makes it more susceptible to congestion.
• Congestion Control: TCP employs congestion control algorithms to adapt its transmission rate to network conditions, preventing network overload. UDP lacks congestion control, relying on the network to handle congestion, which makes it more prone to congestion-related issues.
• Applications: TCP suits file transfers (reliable, complete delivery of large files) and web browsing (accurate, error-free retrieval of web pages and content). UDP suits real-time applications (video streaming, online gaming, VoIP) that prioritize speed over reliability, and small data transfers such as DNS lookups and network management messages.
TCP: State Transition
The TCP state transition diagram is a finite state machine that describes the different states that a TCP connection can be in and the events that trigger transitions between
those states. The diagram is shown below:
Connection establishment:

    Client:  CLOSED --(send SYN)--> SYN-SENT --(recv SYN-ACK / send ACK)--> ESTABLISHED
    Server:  CLOSED --(passive open)--> LISTEN --(recv SYN / send SYN-ACK)--> SYN-RECEIVED
             --(recv ACK)--> ESTABLISHED

Connection termination:

    Active close:       ESTABLISHED --(send FIN)--> FIN-WAIT-1 --(recv ACK)--> FIN-WAIT-2
                        --(recv FIN / send ACK)--> TIME-WAIT --(2*MSL timeout)--> CLOSED
    Passive close:      ESTABLISHED --(recv FIN / send ACK)--> CLOSE-WAIT --(send FIN)--> LAST-ACK
                        --(recv final ACK)--> CLOSED
    Simultaneous close: FIN-WAIT-1 --(recv FIN / send ACK)--> CLOSING --(recv ACK)--> TIME-WAIT
State Descriptions:
• LISTEN: The server is waiting for incoming connection requests.
• SYN-SENT: The client has sent a SYN (synchronization) segment to the server and is waiting for a matching SYN-ACK.
• SYN-RECEIVED: The server has received a SYN segment and responded with a SYN-ACK (synchronization acknowledgment) segment; it is waiting for the final ACK.
• ESTABLISHED: The connection is established and data can be exchanged in both directions between client and server.
• FIN-WAIT-1: The side closing first (the active closer) has sent a FIN (finish) segment, indicating it has finished sending data, and is waiting for that FIN to be acknowledged.
• FIN-WAIT-2: The active closer has received the ACK for its FIN and is waiting for the other side's FIN.
• CLOSE-WAIT: The passive closer has received a FIN from the peer and acknowledged it; the local application has not yet closed its own end.
• CLOSING: Both sides sent FIN segments at nearly the same time (simultaneous close); each is waiting for the ACK of its own FIN.
• LAST-ACK: The passive closer has sent its own FIN and is waiting for the final ACK.
• TIME-WAIT: The active closer waits for twice the Maximum Segment Lifetime (2*MSL, often several minutes in total) to ensure the final ACK was received and that delayed segments from this connection have expired.
• CLOSED: The connection is closed and no further data can be exchanged.
Events:
• SYN: The client sends a SYN segment to the server.
• SYN-ACK: The server sends a SYN-ACK segment to the client.
• ACK: An acknowledgment segment is sent to acknowledge the receipt of a segment.
• FIN: A finish segment is sent to indicate that data transmission is complete.
Transitions:
• A client transitions from CLOSED to SYN-SENT when it sends a SYN segment; a server transitions from CLOSED to LISTEN when it performs a passive open.
• The server transitions from LISTEN to SYN-RECEIVED when it receives a SYN segment and replies with a SYN-ACK.
• The client transitions from SYN-SENT to ESTABLISHED when it receives the SYN-ACK and sends the final ACK; the server transitions from SYN-RECEIVED to ESTABLISHED when that ACK arrives.
• The active closer transitions from ESTABLISHED to FIN-WAIT-1 when it sends a FIN segment.
• FIN-WAIT-1 transitions to FIN-WAIT-2 when the ACK for the FIN arrives, or to CLOSING if the peer's FIN arrives first (simultaneous close).
• FIN-WAIT-2 transitions to TIME-WAIT when the peer's FIN arrives and is acknowledged; after the 2*MSL timeout, TIME-WAIT transitions to CLOSED.
• The passive closer transitions from ESTABLISHED to CLOSE-WAIT when it receives a FIN and acknowledges it, from CLOSE-WAIT to LAST-ACK when it sends its own FIN, and from LAST-ACK to CLOSED when the final ACK arrives.
Congestion Control
Open-loop congestion control and closed-loop congestion control are two approaches to managing and mitigating network congestion, each with its own characteristics and
methods. Let's explore each of these concepts:
Open-Loop Congestion Control:
• Definition: In open-loop congestion control, network adjustments are made without direct feedback from the network itself. The control actions are predefined and
executed without real-time information about the current state of the network.
• Characteristics:
o Predefined Policies: Open-loop control relies on predefined policies and strategies to determine how to send or regulate traffic.
o Lack of Real-time Feedback: There is no continuous feedback loop from the network to adjust the control actions based on current conditions.
o Simple Implementation: Open-loop control is often simpler to implement as it does not require real-time monitoring and response mechanisms.
• Example: A network administrator might manually set traffic shaping policies based on expected usage patterns and peak hours. These policies are predetermined and
applied without continuous feedback from the network.
Closed-Loop Congestion Control:
• Definition: Closed-loop congestion control, also known as feedback-based congestion control, involves adjusting network parameters based on real-time feedback about
the state of the network. This feedback loop allows for dynamic and adaptive control actions.
• Characteristics:
o Real-time Feedback: Closed-loop control relies on continuous feedback from the network to make informed decisions about adjusting traffic parameters.
o Adaptability: The system can dynamically respond to changing network conditions, making it more adaptive to variations in traffic and congestion levels.
o Complexity: Closed-loop control systems are often more complex to implement than open-loop systems due to the need for monitoring and feedback
mechanisms.
• Example: TCP (Transmission Control Protocol) utilizes closed-loop congestion control. It dynamically adjusts the rate at which data is sent based on acknowledgments and
network conditions. If packet loss is detected, TCP assumes congestion and reduces the transmission rate.
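TCP's closed-loop behavior is often summarized as AIMD (additive increase, multiplicative decrease). The following toy simulation (not real TCP, just the feedback idea) grows the congestion window by one segment per round and halves it whenever loss feedback, the stand-in for a congestion signal, arrives:

```python
def aimd(rounds, loss_rounds):
    """Toy additive-increase/multiplicative-decrease loop (TCP-style closed-loop control)."""
    cwnd = 1.0       # congestion window, in segments
    history = []
    for r in range(rounds):
        if r in loss_rounds:           # closed-loop feedback: loss detected
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease
        else:
            cwnd += 1.0                # additive increase
        history.append(cwnd)
    return history

h = aimd(8, loss_rounds={4})  # one loss event at round 4
print(h)  # [2.0, 3.0, 4.0, 5.0, 2.5, 3.5, 4.5, 5.5]
```

The sawtooth shape of the history list is the characteristic signature of feedback-based congestion control: probe upward gently, back off sharply on congestion.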
HTTP
HTTP, standing for Hypertext Transfer Protocol, is a fundamental communication protocol that operates at the application layer of the OSI model. It forms the backbone of
data communication on the World Wide Web, facilitating the exchange of information between web servers and clients. Here's an in-depth exploration of HTTP's role at the
application layer:
1. Communication Paradigm:
○ HTTP follows a client-server communication paradigm. A client, typically a web browser, initiates requests, and a server processes these requests and sends back the
corresponding responses.
2. Stateless Protocol:
○ HTTP is inherently stateless, meaning each request from a client is independent of any previous requests. Servers do not retain information about the client's
previous interactions, simplifying the protocol's design and implementation.
3. Request-Response Model:
○ Communication in HTTP revolves around a request-response model. Clients send HTTP requests to servers, specifying the desired action, and servers respond with
the requested information or perform the specified action.
4. Uniform Resource Identifiers (URIs):
○ HTTP uses Uniform Resource Identifiers (URIs) to identify and locate resources on the web. URIs include Uniform Resource Locators (URLs) and Uniform Resource
Names (URNs), providing a standardized way to address web resources.
5. Methods:
○ HTTP defines various request methods, such as GET, POST, PUT, and DELETE, each serving a specific purpose. For example, GET is used to retrieve data, while POST is
used to submit data to be processed.
6. Headers:
○ Both HTTP requests and responses contain headers that convey metadata about the message, including information about the client, server, content type, and
more. Headers play a crucial role in informing the recipient about the nature of the data being exchanged.
7. Cookies and Sessions:
○ HTTP supports the use of cookies for maintaining stateful interactions between clients and servers. Cookies enable servers to store information on the client's
device, facilitating personalized and session-aware experiences.
8. Status Codes:
○ HTTP responses include status codes indicating the outcome of the request. Common status codes include 200 (OK), 404 (Not Found), and 500 (Internal Server
Error), providing information about the success or failure of the request.
9. Security:
○ While HTTP itself is not secure, HTTPS (HTTP Secure) is an extension that adds a layer of security through encryption using protocols like TLS/SSL. HTTPS is widely
used, especially for sensitive transactions such as online banking and e-commerce.
10. RESTful Principles:
○ Many modern web applications adhere to REST (Representational State Transfer) principles, which leverage HTTP methods and status codes to create scalable and
maintainable APIs (Application Programming Interfaces).
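The request-response model, methods, headers, and status codes can all be seen with Python's standard library. This sketch starts a throwaway local server (serving the current directory) purely so the example is self-contained; a real client would target a remote host instead:

```python
import http.client
import http.server
import threading

# Throwaway server on an ephemeral localhost port (serves the current directory)
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client: one stateless request-response exchange
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/")                        # HTTP method + URI
resp = conn.getresponse()
status = resp.status                            # status code, e.g. 200 (OK)
content_type = resp.getheader("Content-Type")   # a response header (metadata)
resp.read()
conn.close()
server.shutdown()

print(status)  # 200 for the directory listing
```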
SMTP
SMTP, or Simple Mail Transfer Protocol, operates at the application layer of the OSI model and is essential for the reliable transmission of electronic mail (email) over the
Internet. SMTP governs the communication between mail servers to send, relay, and receive email messages. Here's a detailed exploration of SMTP's role at the application
layer:
1. Communication Model:
○ SMTP follows a client-server communication model, where an email client acts as the client, and a mail server operates as the server. The client initiates a
connection to the server to send an email.
2. Message Format:
○ SMTP defines the format for email messages, specifying how the sender's information, recipient's information, subject, and the email body should be structured.
The email body can be plain text or include multimedia content.
3. Commands and Responses:
○ SMTP communication consists of a series of commands and responses. Commands, such as HELO, MAIL, RCPT, and DATA, are used by the client to initiate and
control the email transmission process. The server responds to these commands with numeric codes indicating the success or failure of the operation.
4. Relay of Messages:
○ SMTP is responsible for the relay of messages between mail servers. When an email is sent, it may pass through multiple SMTP servers before reaching its final
destination, with each server forwarding the message closer to the recipient.
5. Port Numbers:
○ SMTP typically uses port 25 for server-to-server relay. Port 587 is the standard port for mail submission by clients, usually secured with STARTTLS, and port 465 is used for SMTP over implicit TLS (SMTPS).
6. Store-and-Forward Model:
○ SMTP operates on a store-and-forward model, meaning that it accepts, stores, and then forwards messages to their destinations. This model ensures the reliable
delivery of emails, even if the recipient's server is temporarily unavailable.
7. Security Considerations:
○ Historically, SMTP lacked built-in encryption, leading to potential security vulnerabilities. However, with the advent of protocols like STARTTLS and the widespread
adoption of secure email practices, SMTP can now be used securely over TLS-encrypted connections.
8. Email Routing:
○ SMTP is crucial for routing emails to their intended recipients. MX (Mail Exchange) records in the DNS (Domain Name System) specify the mail servers responsible
for receiving emails on behalf of a domain.
9. Authentication:
○ SMTP authentication mechanisms, such as LOGIN and PLAIN, are employed to verify the identity of users sending emails, enhancing the security of the email
transmission process.
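The command/response dialogue described in point 3 can be illustrated with a small helper that builds the client-side command sequence for one message. This is illustrative only: the addresses and hostname are placeholders, and a real client waits for the server's numeric reply (e.g., 250 OK, 354 Start mail input) after each command:

```python
def smtp_commands(sender, recipient, body):
    """Build the client-side SMTP command sequence for one message (illustrative)."""
    return [
        "HELO client.example.com",    # identify the client (EHLO in extended SMTP)
        f"MAIL FROM:<{sender}>",      # envelope sender
        f"RCPT TO:<{recipient}>",     # envelope recipient
        "DATA",                       # server replies 354: send the message body
        body + "\r\n.",               # body ends with a line containing only '.'
        "QUIT",                       # close the session
    ]

cmds = smtp_commands("alice@example.com", "bob@example.org", "Hello Bob")
print(cmds[1])  # MAIL FROM:<alice@example.com>
```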
DHCP
Dynamic Host Configuration Protocol (DHCP) is a network management protocol used to dynamically assign an IP address to any device, or node, on a network so it can
communicate using IP (Internet Protocol). DHCP automates and centrally manages these configurations, so there is no need to manually assign IP addresses to new devices
and no user configuration is required to join a DHCP-based network.
DHCP does the following:
• DHCP manages the provision of all the nodes or devices added or dropped from the network.
• DHCP maintains the unique IP address of the host using a DHCP server.
• Whenever a client/node/device that is configured to work with DHCP connects to a network, it sends a request to the DHCP server. The server acknowledges by
providing an IP address to the client/node/device.
DHCP runs at the application layer of the TCP/IP protocol stack to dynamically assign IP addresses to DHCP clients/nodes and to allocate TCP/IP configuration information to
them. This information includes the subnet mask, default gateway, IP address, and DNS (Domain Name System) server addresses.
Components of DHCP
• DHCP Server: The DHCP server is a networked device running the DHCP service that holds IP addresses and related configuration information. This is typically a server or a
router but could be anything that acts as a host, such as an SD-WAN appliance.
• DHCP client: DHCP client is the endpoint that receives configuration information from a DHCP server. This can be any device like computer, laptop, IoT endpoint or
anything else that requires connectivity to the network. Most of the devices are configured to receive DHCP information by default.
• IP address pool: IP address pool is the range of addresses that are available to DHCP clients. IP addresses are typically handed out sequentially from lowest to the
highest.
• Subnet: A subnet is a partitioned segment of an IP network. Subnets are used to keep networks manageable.
• Lease: Lease is the length of time for which a DHCP client holds the IP address information. When a lease expires, the client has to renew it.
• DHCP relay: A host or router that listens for client messages being broadcast on that network and then forwards them to a configured server. The server then sends
responses back to the relay agent that passes them along to the client. DHCP relay can be used to centralize DHCP servers instead of having a server on each subnet
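The interplay of the address pool and time-limited leases can be sketched as a toy model. This is not a real DHCP implementation: the DISCOVER/OFFER/REQUEST/ACK message exchange is omitted, time is just an integer, and the addresses are a made-up private range:

```python
import ipaddress

class LeasePool:
    """Toy model of a DHCP server's address pool with time-limited leases."""

    def __init__(self, first, last, lease_time):
        start = int(ipaddress.ip_address(first))
        end = int(ipaddress.ip_address(last))
        # IP address pool: addresses are handed out from lowest to highest
        self.free = [str(ipaddress.ip_address(a)) for a in range(start, end + 1)]
        self.lease_time = lease_time
        self.leases = {}  # client MAC -> (ip, lease expiry time)

    def request(self, mac, now):
        """Grant (or renew) a lease for the client identified by `mac`."""
        if mac in self.leases:
            ip, _ = self.leases[mac]   # renewal keeps the same address
        else:
            ip = self.free.pop(0)      # otherwise hand out the lowest free address
        self.leases[mac] = (ip, now + self.lease_time)
        return ip

    def expire(self, now):
        """Reclaim addresses whose leases have run out."""
        for mac, (ip, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[mac]
                self.free.append(ip)
        self.free.sort(key=lambda s: int(ipaddress.ip_address(s)))

pool = LeasePool("192.168.1.10", "192.168.1.12", lease_time=100)
a = pool.request("aa:bb:cc:dd:ee:01", now=0)    # lowest free address
b = pool.request("aa:bb:cc:dd:ee:02", now=0)    # next address in the pool
pool.expire(now=200)                            # both leases have expired
c = pool.request("aa:bb:cc:dd:ee:03", now=200)  # a reclaimed address is reused
print(a, b, c)  # 192.168.1.10 192.168.1.11 192.168.1.10
```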
Benefits of DHCP:
• Centralized administration of IP configuration: DHCP configuration information can be stored in a single location, enabling the administrator to centrally manage
all IP address configuration information.
• Dynamic host configuration: DHCP automates the host configuration process and eliminates the need to manually configure individual hosts when TCP/IP (Transmission
Control Protocol/Internet Protocol) is first deployed or when IP infrastructure changes are required.
• Seamless IP host configuration: DHCP ensures that clients receive accurate and timely IP configuration parameters, such as the IP address,
subnet mask, default gateway, and IP address of the DNS server, without user intervention.
• Flexibility and scalability: DHCP gives the administrator increased flexibility, allowing IP configurations to be changed easily when the
infrastructure changes.
FTP
FTP (File Transfer Protocol) is a standard network protocol used for the transfer of files between a client and a server on a TCP-based network, such as the Internet.
Client-Server Model:
FTP operates on a client-server model. The client initiates a connection to the server, and after authentication, files can be uploaded (sent) or downloaded (received) between
the client and server.
Modes of FTP:
FTP supports two modes: active and passive. In active mode, the client opens a random port for data transfer, while in passive mode, the server opens a port, and the client
connects to it.
Port Numbers:
FTP uses port 21 for control commands (e.g., authentication and directory listing) and port 20 for data transfer in active mode. In passive mode, dynamic port numbers are
used for data transfer.
Authentication:
FTP typically uses username and password authentication for access to files on the server. It can also support anonymous FTP, allowing users to log in with a generic
username (e.g., "anonymous") and their email address as the password.
File Operations:
FTP supports various file operations, including uploading (put), downloading (get), renaming, deleting, and creating directories on the server. It also allows for the transfer of
entire directories and their contents.
Modes of Operation:
FTP operates in two modes: ASCII and binary. ASCII mode is suitable for text files, ensuring proper line ending conversion, while binary mode is used for non-text files to
ensure accurate and unaltered data transfer.
Secure Variants:
Due to security concerns associated with plain FTP (e.g., data and credentials transmitted in plaintext), secure variants have been developed. FTPS (FTP Secure) adds a layer of
security through SSL/TLS encryption, while SFTP (SSH File Transfer Protocol) uses the secure SSH protocol for both data transfer and user authentication.
DNS and Types of Name Servers
• The Domain Name System (DNS) is a decentralized hierarchical naming system that translates human-readable domain names into IP addresses. It plays a crucial role in
enabling users to access websites and other services using domain names rather than numeric IP addresses.
• DNS is based on a distributed database architecture, with various servers distributed across the Internet. This distribution enhances scalability, fault tolerance, and
efficient resolution of domain names.
• When a user enters a domain name in a web browser, the DNS resolution process begins. The client queries DNS servers to resolve the domain name into an IP address,
enabling communication with the desired server.
• Types of DNS Servers
○ Root DNS Servers:
▪ The root DNS servers are the starting point of the DNS resolution process. They provide information about the authoritative name servers for top-level
domains (TLDs) such as .com, .net, and .org.
○ Top-Level Domain (TLD) Servers:
▪ TLD servers are responsible for handling requests related to specific top-level domains. For instance, .com TLD servers handle requests for domain names
ending in .com.
○ Authoritative DNS Servers:
▪ Authoritative DNS servers store and provide authoritative information about domain names, including the mapping of domain names to IP addresses. They are
responsible for the actual resolution of domain names.
○ Recursive DNS Servers:
▪ Recursive DNS servers perform the iterative process of querying other DNS servers until they obtain the final authoritative answer. They often cache the results
to speed up subsequent requests.
○ Caching DNS Servers:
▪ Caching DNS servers temporarily store DNS query results to reduce the need for repeated queries to authoritative servers. This caching mechanism improves
DNS resolution efficiency.
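The hierarchy above can be illustrated with a toy resolver that walks root, TLD, and authoritative data and caches the answer, the way a recursive/caching server would. All zone data and the resulting IP address here are made up for illustration:

```python
# Toy model of iterative DNS resolution through the server hierarchy.
# The "zone data" below is fabricated purely to show the three-step walk.
ROOT = {"com": "tld-com-server"}                              # root knows TLD servers
TLD = {"tld-com-server": {"example.com": "ns1.example.com"}}  # TLD knows authoritatives
AUTHORITATIVE = {"ns1.example.com": {"www.example.com": "93.184.216.34"}}

cache = {}  # a caching/recursive server remembers prior answers

def resolve(name):
    if name in cache:                      # cache hit: no upstream queries needed
        return cache[name]
    tld = name.rsplit(".", 1)[-1]          # 1. ask a root server about the TLD
    tld_server = ROOT[tld]
    domain = ".".join(name.split(".")[-2:])
    auth_server = TLD[tld_server][domain]  # 2. ask the TLD server for the domain
    ip = AUTHORITATIVE[auth_server][name]  # 3. ask the authoritative server
    cache[name] = ip                       # store the result for later requests
    return ip

print(resolve("www.example.com"))  # 93.184.216.34
print("www.example.com" in cache)  # True: the next lookup skips all three steps
```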
Telnet
• Telnet is a network protocol that allows a user to remotely access and control another computer over the Internet or local area network (LAN). It enables a user to
establish a connection to a remote system and perform tasks as if they were sitting in front of that computer.
• It is a client-server protocol, which means that a client device initiates the connection to a server device. The client sends commands to the server, and the server
responds with output, allowing the user to interact with the remote system’s command-line interface.
• It uses the Transmission Control Protocol (TCP) as its underlying transport protocol.
• Telnet is primarily text-oriented, transmitting keystrokes and displaying text output. It allows users to interact with remote systems as if they were physically present at
the terminal.
• Telnet commonly uses port 23 for communication.
• Telnet transmits data, including usernames and passwords, in plain text. Due to this lack of encryption, Telnet is considered insecure, and its usage over untrusted
networks is discouraged.
• Telnet establishes a virtual terminal connection, providing a command-line interface to the remote device. Users can execute commands, access files, and manage the
remote system.
• Telnet is widely used for interactive sessions with remote servers and network devices, especially in troubleshooting and configuration scenarios.
• Telnet provides specific commands for managing the connection, including options for setting terminal type, toggling character echo, and controlling data flow.
• Due to its lack of encryption, Telnet is vulnerable to eavesdropping and man-in-the-middle attacks. It has largely been replaced by more secure protocols like SSH (Secure
Shell).
• SSH (Secure Shell) has become the preferred alternative to Telnet due to its encryption capabilities, providing a secure method for remote terminal connections.
Module 5
Wednesday, November 15, 2023 5:35 PM
• Manageability: Provides control, performance monitoring, and fault detection.
• Efficiency: Provides the required network services and infrastructure with reasonable operational costs and appropriate capital investment
on a migration path to a more intelligent network, through step-by-step network services growth.
• Security: Provides for an effective balance between usability and security while protecting information assets and infrastructure from inside
and outside threats.
PPDIOO
PPDIOO stands for Prepare, Plan, Design, Implement, Operate, and Optimize. PPDIOO is a Cisco methodology that defines the continuous
lifecycle of services required for a network.
The PPDIOO phases are as follows:
• Prepare: Involves establishing the organizational requirements, developing a network strategy, and proposing a high-level conceptual
architecture identifying technologies that can best support the architecture. The prepare phase can establish a financial justification for
network strategy by assessing the business case for the proposed architecture.
• Plan: Involves identifying initial network requirements based on goals, facilities, user needs, and so on. The plan phase involves
characterizing sites and assessing any existing networks and performing a gap analysis to determine whether the existing system
infrastructure, sites, and the operational environment can support the proposed system. A project plan is useful for helping manage the
tasks, responsibilities, critical milestones, and resources required to implement changes to the network. The project plan should align with
the scope, cost, and resource parameters established in the original business requirements.
• Design: The initial requirements that were derived in the planning phase drive the activities of the network design specialists. The network
design specification is a comprehensive detailed design that meets current business and technical requirements, and incorporates
specifications to support availability, reliability, security, scalability, and performance. The design specification is the basis for the
implementation activities.
• Implement: The network is built or additional components are incorporated according to the design specifications, with the goal of
integrating devices without disrupting the existing network or creating points of vulnerability.
• Operate: Operation is the final test of the appropriateness of the design. The operational phase involves maintaining network health
through day-to-day operations, including maintaining high availability and reducing expenses. The fault detection, correction, and
performance monitoring that occur in daily operations provide the initial data for the optimization phase.
• Optimize: Involves proactive management of the network. The goal of proactive management is to identify and resolve issues before they
affect the organization. Reactive fault detection and correction (troubleshooting) is needed when proactive management cannot predict and
mitigate failures. In the PPDIOO process, the optimization phase can prompt a network redesign if too many network problems and errors
arise, if performance does not meet expectations, or if new applications are identified to support organizational and technical requirements.
Top-Down vs. Bottom-Up Network Design
• Phases: Top-down typically involves phases like planning, design, implementation, and maintenance; bottom-up often includes detailed design, integration, testing, and scaling.
• Flexibility: Top-down offers flexibility in adapting to changing requirements; bottom-up may be less adaptable to changes, as it is built from specific details.
• Risk assessment: Top-down identifies and addresses risks at the early stages; in bottom-up, risks are addressed as they arise during the detailed design.
• Cost: Top-down initial planning may require significant resources; in bottom-up, initial costs may be lower, but detailed design costs may accumulate.
• Timeframe: Top-down may take longer due to comprehensive planning; bottom-up initial implementation may be quicker, but changes may take longer.
• Scalability: Top-down makes it easier to scale and accommodate future growth; in bottom-up, scalability may require revisiting and adjusting specific components.
• Complexity: Top-down handles complexity by breaking it into manageable parts; bottom-up deals with complexity incrementally, potentially increasing overall complexity.
• Example scenario: Top-down suits designing a corporate network infrastructure; bottom-up suits building a specific network component, like a server cluster.
1. Core Layer
The core layer serves as the backbone of the network, providing high-speed connectivity between the distribution layer devices. It is responsible
for routing traffic between different regions of the network, ensuring that data packets flow efficiently and reliably. Core layer devices, typically
high-performance routers, are characterized by their large switching capacity, low latency, and robust fault tolerance capabilities.
2. Distribution Layer
The distribution layer acts as an intermediary between the core and access layers, providing policy-based connectivity and controlling the
boundary between the two layers. It is responsible for filtering traffic, applying security policies, and aggregating data from the access layer
devices before forwarding it to the core layer. Distribution layer devices, typically Layer 3 routers or multilayer switches, play a crucial role in
managing network traffic flow and applying network-wide policies.
3. Access Layer
The access layer provides direct connections to end-user devices, such as workstations, servers, and printers. It is responsible for providing
network access to these devices, forwarding their traffic to the distribution layer for routing. Access layer devices, typically Layer 2 switches or
wireless access points, are located close to the end-users, ensuring efficient data transmission and reliable network connectivity.
By dividing the network into these three layers, the classic three-layer hierarchical model offers several benefits:
• Improved Scalability: As the network grows, additional devices can be easily added to the access or distribution layers, without significantly
impacting the core layer. This modularity allows the network to scale seamlessly to accommodate increasing demands.
• Enhanced Security: Each layer can implement its own security policies, providing a layered defense against network threats. This segmented
approach isolates potential security breaches and limits their impact on the overall network.
• Simplified Management: By dividing the network into smaller, manageable segments, the three-layer model simplifies network
administration and troubleshooting. Network administrators can focus on specific layers and devices, reducing the complexity of network
management.
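The division of labor between the three layers can be sketched as a packet's path from an end device upward: the access layer admits the frame, the distribution layer applies policy before aggregating it, and the core only transports it. All function names and the VLAN-filter policy below are illustrative, not from any vendor API:

```python
# Minimal sketch of the three-layer hierarchical model: a frame enters
# at the access layer, is policy-filtered at the distribution layer,
# and is transported across the core. The VLAN filter stands in for
# the distribution layer's policy role; it is a made-up example.

def access_layer(frame):
    # Access layer: admit the end-device frame and record its path.
    frame["path"] = ["access"]
    return frame

def distribution_layer(frame, blocked_vlans):
    # Distribution layer: apply policy (here, a simple VLAN filter)
    # before aggregating traffic toward the core.
    if frame.get("vlan") in blocked_vlans:
        return None  # policy drop
    frame["path"].append("distribution")
    return frame

def core_layer(frame):
    # Core layer: high-speed transport only; no policy is applied here.
    frame["path"].append("core")
    return frame

def forward(frame, blocked_vlans=frozenset()):
    frame = access_layer(frame)
    frame = distribution_layer(frame, blocked_vlans)
    return core_layer(frame) if frame else None
```

Note that the filtering logic lives only in `distribution_layer`, mirroring the text's point that the core should stay fast and policy-free.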
• The physical environment of the building or buildings influences the design, as do the number of, distribution of, and distance between
the network nodes (including end users, hosts, and network devices). Other factors include space, power, and heating, ventilation, and
air conditioning support for the network devices.
• Cabling is one of the biggest long-term investments in network deployment. Therefore, transmission media selection depends not only
on the required bandwidth and distances, but also on the emerging technologies that might be deployed over the same infrastructure
in the future.
3. Infrastructure Device Characteristics
The characteristics of the network devices selected influence the design (for example, they determine the network’s flexibility) and
contribute to the overall delay. Trade-offs between data link layer switching—based on media access control (MAC) addresses—and
multilayer switching—based on network layer addresses, transport layer, and application awareness—need to be considered.
• High availability and high throughput are requirements that might require consideration throughout the infrastructure.
• Most Enterprise Campus designs use a combination of data link layer switching in the access layer and multilayer switching in the
distribution and core layers.
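The trade-off above contrasts two lookup styles: data link layer switching does an exact match on the destination MAC address, while multilayer switching does a longest-prefix match on the destination IP address. A minimal sketch, with made-up tables and port numbers:

```python
import ipaddress

# Data link layer switching (access layer): exact-match lookup on the
# destination MAC address. Table contents are invented for illustration.
mac_table = {"aa:bb:cc:00:00:01": 1, "aa:bb:cc:00:00:02": 2}

def l2_forward(dst_mac):
    # A real switch floods unknown destinations; None stands in for that.
    return mac_table.get(dst_mac)

# Multilayer switching (distribution/core): longest-prefix match on the
# destination IP address.
route_table = {
    ipaddress.ip_network("10.0.0.0/8"): 10,
    ipaddress.ip_network("10.1.0.0/16"): 20,
}

def l3_forward(dst_ip):
    addr = ipaddress.ip_address(dst_ip)
    matches = [(net, port) for net, port in route_table.items() if addr in net]
    if not matches:
        return None
    # Prefer the most specific (longest) matching prefix.
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

For a destination like 10.1.2.3, both prefixes match, but the more specific /16 wins, which is exactly the extra work (and flexibility) multilayer switching adds over a flat MAC lookup.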
Module 6
Wednesday, November 15, 2023 11:56 PM
• This makes it easier to manage and automate network tasks. SDN also allows network administrators to create more flexible and dynamic
networks.
• SDN uses an open standard called OpenFlow to communicate between the control plane and the data plane. OpenFlow allows the control
plane to send instructions to the data plane, telling it how to forward traffic.
• The control plane typically consists of an SDN controller, which is a software application that runs on a server. The data plane consists of
SDN-enabled network devices, such as switches and routers.
• SDN offers a number of benefits over traditional networking, including:
• Increased agility: SDN makes it easier to make changes to the network, which can be helpful for organizations that need to quickly adapt to
changing business requirements.
• Improved automation: SDN allows network administrators to automate many network tasks, which can save time and money.
• Greater flexibility: SDN makes it possible to create more flexible and dynamic networks that can better meet the needs of applications.
• Enhanced security: SDN can be used to improve the security of the network by providing centralized control over network traffic.
7. OpenFlow Protocol: OpenFlow is a standard communication protocol used for southbound communication between the SDN controller
and network devices. It defines how the controller interacts with the forwarding plane of network devices, enabling the SDN controller
to instruct switches and routers on how to handle traffic.
SDN Operations
1. Centralized Network Control:
Centralized network control is a foundational operation in SDN, where a centralized controller makes global decisions for the entire network.
This operation provides a unified view for efficient network management, allowing administrators to configure and control the network from a
central point. It enables consistent and coordinated decision-making across the network.
2. Programmability and Automation:
Programmability and automation involve using software applications to configure and manage network operations dynamically. This operation
reduces manual configuration efforts and potential errors, facilitating rapid adaptation to changing network requirements and enhancing overall
operational efficiency.
3. Dynamic Traffic Management:
Dynamic traffic management allows for real-time adjustments to network traffic flows based on changing conditions and requirements. This
operation optimizes network resources dynamically, supports efficient use of bandwidth, and enables adaptive responses to varying traffic
patterns.
4. Flow-Based Control:
Flow-based control involves defining, managing, and controlling network flows, allowing for granular control over packet forwarding. This
operation enables administrators to define specific flow-based policies, enhances network visibility and control, and facilitates optimized packet
forwarding based on flow definitions.
5. Monitoring and Analytics:
Monitoring and analytics operations involve gathering real-time data and using analytical tools to gain insights into network performance. This
operation provides visibility into network behavior and performance, facilitating proactive issue identification and resolution, and informing
decision-making for optimizing network resources.
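The flow-based control operation described above can be sketched as a flow table: each entry matches on header fields, carries a priority, and specifies an action, with the highest-priority matching entry winning. The field names and actions below are illustrative and do not follow the OpenFlow wire format:

```python
# Sketch of flow-based control: a flow entry matches on header fields
# and carries an action; the highest-priority matching entry wins.
# Fields, priorities, and actions are invented for illustration.

flow_table = [
    # (priority, match fields, action)
    (200, {"ip_dst": "10.0.0.5", "tcp_dst": 80}, "forward:port2"),
    (100, {"ip_dst": "10.0.0.5"}, "forward:port1"),
    (0, {}, "drop"),  # table-miss entry: matches everything
]

def apply_flow_table(packet):
    """Return the action of the highest-priority entry whose match
    fields are all satisfied by the packet's headers."""
    for priority, match, action in sorted(flow_table, reverse=True,
                                          key=lambda e: e[0]):
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"
```

The empty-match, priority-0 entry plays the role of a table-miss rule, so traffic that matches no specific policy is dropped by default; this is the kind of granular, flow-level policy the text refers to.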
The controller-to-switch messages include the following subtypes:
• Features—The controller requests the basic capabilities of a switch by sending a features request. The switch must respond with a features
reply that specifies the basic capabilities of the switch.
• Configuration—The controller sets and queries configuration parameters in the switch. The switch only responds to a query from the
controller.
• Modify-State—The controller sends Modify-State messages to manage state on the switches. Their primary purpose is to add, delete, and
modify flow or group entries in the OpenFlow tables and to set switch port properties.
• Read-State—The controller sends Read-State messages to collect various information from the switch, such as current configuration and
statistics.
• Packet-out—These are used by the controller to send packets out of the specified port on the switch, or to forward packets received
through packet-in messages. Packet-out messages must contain a full packet or a buffer ID representing a packet stored in the switch. The
message must also contain a list of actions to be applied in the order they are specified. An empty action list drops the packet.
• Barrier—Barrier messages are used to confirm the completion of the previous operations. The controller sends a Barrier request. The switch
must send a Barrier reply when all the previous operations are complete.
• Role-Request—Role-Request messages are used by the controller to set the role of its OpenFlow channel, or query that role. It is typically
used when the switch connects to multiple controllers.
• Asynchronous-Configuration—These are used by the controller to set an additional filter on the asynchronous messages that it wants to
receive, or to query that filter. It is typically used when the switch connects to multiple controllers.
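The request/reply discipline of these message subtypes can be illustrated with a toy switch that answers a Features request with its capabilities, absorbs Modify-State messages without replying, and answers a Barrier request only once prior operations are accounted for. This mocks the message flow only; the class, method, and field names are invented and do not implement the real OpenFlow wire protocol:

```python
# Toy simulation of controller-to-switch exchanges described above.
# Names are invented for illustration; this is not a real OpenFlow stack.

class MockSwitch:
    def __init__(self, capabilities):
        self.capabilities = capabilities
        self.flow_table = []
        self.pending_ops = 0

    def handle(self, msg_type, payload=None):
        if msg_type == "features_request":
            # The switch must respond with its basic capabilities.
            return ("features_reply", self.capabilities)
        if msg_type == "modify_state":
            # Add a flow entry; modeled as a pending operation.
            self.flow_table.append(payload)
            self.pending_ops += 1
            return None  # no reply is required for Modify-State
        if msg_type == "barrier_request":
            # Reply only after all previous operations are complete.
            self.pending_ops = 0
            return ("barrier_reply", None)
        raise ValueError("unknown message type")
```

A controller-side session would then send `features_request` on connect, push rules with `modify_state`, and issue a `barrier_request` to confirm the rules took effect before proceeding.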
NOX Architecture
POX Architecture