CYS 204 Lecture Note
Uploaded by yusuffabiola172
© All Rights Reserved

Lecture Note

CYS 204

Communication Networks
Department of CyberSecurity

Federal University of Technology, Akure.


Basic Concepts of Networking

Basic networking concepts form the foundation of understanding how computer networks
operate and communicate. Here are some fundamental concepts:

Network: A network is a collection of computers, servers, mainframes, network devices, and
other devices connected to one another for the purpose of sharing data and resources.

Node: A node is any device connected to the network, such as a computer, printer, or router.

Communication: Communication involves the exchange of data between nodes in a
network. This can be achieved through wired or wireless connections.

Protocol: Protocols are rules and conventions that govern how data is transmitted and
received in a network. Common protocols include TCP/IP, HTTP, and SMTP.

IP Address: An IP address is a unique numerical label assigned to each device on a network.
It identifies the device's location and allows for communication within the network.

Subnet: A subnet is a logical subdivision of an IP network. It helps organize and manage IP
addresses efficiently.

Router: A router is a network device that connects different networks and directs data
between them. It operates at the network layer (Layer 3) of the OSI model.

Switch: A switch is a network device that connects devices within the same network and
uses MAC addresses to forward data to the correct destination.

Hub: A hub is a basic networking device that connects multiple devices in a network. It
operates at the physical layer (Layer 1) and simply broadcasts data to all connected devices.

Firewall: A firewall is a network security device that monitors and controls incoming and
outgoing network traffic based on predetermined security rules.

DNS (Domain Name System): DNS translates human-readable domain names (like
www.example.com) into IP addresses that machines use to identify each other on a network.
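As a quick illustration, Python's standard socket module performs this name-to-address translation. The snippet below resolves localhost, which works without network access; replacing it with a real name such as www.example.com would trigger an actual DNS query:

```python
import socket

# DNS resolution in practice: translate a hostname into an IP address.
# "localhost" resolves locally without contacting a DNS server, so this
# runs even offline; a public hostname would query the configured resolver.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```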

LAN (Local Area Network): A LAN is a network that is limited to a small geographic area,
such as a single building or campus.
WAN (Wide Area Network): A WAN is a network that spans a larger geographic area, often
connecting multiple LANs. The internet is a vast example of a WAN.

OSI Model: The OSI (Open Systems Interconnection) model is a conceptual framework used
to understand network interactions, divided into seven layers: Physical, Data Link, Network,
Transport, Session, Presentation, and Application.

TCP/IP: TCP/IP (Transmission Control Protocol/Internet Protocol) is the fundamental suite of
protocols that powers the internet and most modern networks.

Bandwidth: Bandwidth is the maximum rate of data transfer across a network. It is often
measured in bits per second (bps), kilobits per second (kbps), or megabits per second
(Mbps).

These concepts provide a basic understanding of how networks function and how
devices communicate within them. As one delves deeper into networking, more advanced
concepts related to security, wireless networking, and network management become
important.
Network Topology
Network topology refers to the arrangement or physical layout of computer devices,
communication devices, and the links between them in a computer network. It defines how
different elements of a network are connected and how data is transmitted from one device
to another. There are several types of network topologies, each with its own advantages and
disadvantages. Here are some common network topologies:

Bus Topology:
In a bus topology, all devices share a single communication line (the bus). Data is
transmitted along the bus, and each device on the network receives the transmitted data.
However, only the intended recipient processes the data.
Pros: Simple and inexpensive to set up.
Cons: Performance degrades as more devices are added.

Star Topology:
In a star topology, all devices are connected to a central hub or switch. A hub simply
regenerates and broadcasts data to all connected devices, while a switch forwards it only
to the intended recipient.
Pros: Easy to install, easy to troubleshoot, and if one device fails, it doesn't affect the others.
Cons: Relies heavily on the central hub; if it fails, the entire network is affected.

Ring Topology:
In a ring topology, each device is connected to exactly two other devices, forming a closed
loop or ring. Data circulates around the ring until it reaches the intended recipient.
Pros: Simple and easy to install.
Cons: A failure in one device can disrupt the entire network.

Mesh Topology:
In a mesh topology, devices are richly interconnected. There are full-mesh and partial-mesh
configurations. Full mesh means every device is directly connected to every other device,
while partial mesh connects only some device pairs, trading some redundancy for lower cost.
Pros: High redundancy; if one link or device fails, alternative paths are available.
Cons: Costly to implement and maintain due to the numerous connections.

Hybrid Topology:
A hybrid topology is a combination of two or more different types of topologies. For example,
a network might have a combination of star and bus topologies.
Pros: Offers benefits of multiple topologies.
Cons: Complex and can be costly.
The choice of network topology depends on factors such as the size of the network, the
requirements for scalability and fault tolerance, cost considerations, and the physical layout
of the environment. Each topology has its strengths and weaknesses, and the optimal choice
varies based on specific network needs.
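One way to compare topologies is by the number of links each requires for n devices. The sketch below (illustrative Python, not from the lecture) makes the cost of full mesh concrete, since a full mesh needs n(n-1)/2 links:

```python
# Number of point-to-point links each topology needs for n devices.
def link_count(topology: str, n: int) -> int:
    if topology == "bus":
        return 1                 # one shared backbone cable
    if topology == "star":
        return n                 # one link per device to the central hub/switch
    if topology == "ring":
        return n                 # each device links to the next, closing the loop
    if topology == "full-mesh":
        return n * (n - 1) // 2  # every pair of devices directly connected
    raise ValueError(f"unknown topology: {topology}")

for t in ("bus", "star", "ring", "full-mesh"):
    print(t, link_count(t, 10))  # full-mesh needs 45 links for just 10 devices
```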
OSI (Open Systems Interconnection) model
The OSI (Open Systems Interconnection) model is a conceptual framework that standardizes
the functions of a communication system or network into seven abstraction layers. Each
layer is responsible for specific tasks related to network communication, and the model
serves as a guide for designing and understanding how different networking protocols
interact. The OSI model does not directly represent any specific implementation but rather
provides a conceptual blueprint.
Here are the seven layers of the OSI model, listed from the lowest level (Physical) to the
highest level (Application):

Physical Layer (Layer 1):


The physical layer deals with the physical connection between devices. It defines the
hardware elements of a network, such as cables, connectors, and the transmission of raw
bits over a physical medium.

Data Link Layer (Layer 2):


The data link layer is responsible for creating a reliable link between two directly connected
nodes. It handles error detection and correction, as well as the framing of data packets. This
layer is divided into two sub-layers: Logical Link Control (LLC) and Media Access Control
(MAC).

Network Layer (Layer 3):


The network layer focuses on routing packets between different networks. It deals with
logical addressing, such as IP addresses, and determines the optimal path for data to travel
from the source to the destination across multiple devices and networks.

Transport Layer (Layer 4):


The transport layer ensures end-to-end communication, providing error detection and
correction, flow control, and segmentation and reassembly of data. TCP (Transmission
Control Protocol) and UDP (User Datagram Protocol) operate at this layer.

Session Layer (Layer 5):


The session layer manages sessions or connections between applications on different
devices. It establishes, maintains, and terminates communication sessions, allowing for
synchronization and checkpointing of data.
Presentation Layer (Layer 6):
The presentation layer is responsible for translating data between the application layer and
the lower layers. It deals with data formatting, encryption/decryption, and character set
conversions to ensure that data is presented in a readable format.

Application Layer (Layer 7):


The application layer is the topmost layer and interacts directly with end-user applications. It
provides network services directly to end-users, such as email, file transfers, and web
browsing. Protocols like HTTP, SMTP, and FTP operate at this layer.
The OSI model is a helpful tool for understanding and discussing network communication, as
it breaks down the complex process into modular layers. However, it's essential to note that
many real-world networking protocols, like the TCP/IP suite used on the internet, don't
precisely align with the OSI model's seven layers. The TCP/IP model, which is more
commonly referenced, merges the OSI's physical and data link layers into a single link layer
and its session, presentation, and application layers into a single application layer.
TCP/IP (Transmission Control Protocol/Internet Protocol)
The TCP/IP (Transmission Control Protocol/Internet Protocol) suite is a set of protocols that
form the foundation for communication on the internet and many private networks. It
provides end-to-end communication and specifies how data should be packetized,
addressed, transmitted, routed, and received across networks. The TCP/IP suite consists of
four layers, each with its own set of protocols:

Link Layer (or Network Interface Layer):


The Link Layer corresponds to the physical and data link layers of the OSI model. It deals
with the transmission of raw bits over a physical medium and the framing of data into frames.
Ethernet, Wi-Fi, and PPP (Point-to-Point Protocol) are examples of link layer protocols.

Internet Layer:
The Internet Layer corresponds to the network layer of the OSI model. It focuses on the
routing of data packets between different networks. The key protocol at this layer is the
Internet Protocol (IP), which provides logical addressing (IP addresses) to devices and
determines the best path for data to travel across interconnected networks.

Transport Layer:
The Transport Layer is similar to the transport layer of the OSI model. It ensures reliable
end-to-end communication between devices. Two primary protocols at this layer are:
Transmission Control Protocol (TCP): Provides reliable, connection-oriented communication
with error detection and correction, flow control, and sequencing of data.
User Datagram Protocol (UDP): Offers faster, connectionless communication with minimal
overhead. It is commonly used for real-time applications where occasional data loss is
acceptable.
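The difference is easy to see in code. This minimal Python sketch sends a single UDP datagram over the loopback interface: no handshake, no acknowledgement, just fire-and-forget (the port is chosen by the OS and is purely illustrative):

```python
import socket

# Minimal UDP exchange over loopback: connectionless, no handshake,
# no delivery guarantee -- the trade-offs described above.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)        # fire-and-forget datagram

data, peer = receiver.recvfrom(1024)
print(data)                          # b'hello'
sender.close()
receiver.close()
```

A TCP version of the same exchange would first perform a three-way handshake via connect() and accept() before any data moved.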

Application Layer:
The Application Layer aligns with the top three layers (session, presentation, and application)
of the OSI model. It interacts directly with end-user applications and provides network
services. Numerous protocols operate at this layer, including:
Hypertext Transfer Protocol (HTTP): Used for web browsing.
File Transfer Protocol (FTP): Used for file transfers.
Simple Mail Transfer Protocol (SMTP): Used for email transmission.
Post Office Protocol version 3 (POP3) and Internet Message Access Protocol (IMAP): Used
for retrieving email.
The TCP/IP suite is a modular and scalable architecture, allowing for the addition of new
protocols and services as needed. It is the standard networking protocol suite for the
internet and is widely used in various network environments. The suite's layered structure
promotes interoperability and allows for the independent development and modification of
protocols within each layer.
Client-server communication
Client-server communication is a model for network communication where devices or
processes, known as clients and servers, interact with each other to share resources,
services, or data. This model is prevalent in various computing environments, including the
internet, where it forms the basis for many online services and applications. Here's an
overview of client-server communication:

Client:
The client is a device, application, or process that initiates a request for a service or resource.
Clients are typically end-user devices like computers, smartphones, or IoT devices. Clients
make requests to servers, expecting a response.

Server:
The server is a device, application, or process that provides services or resources in
response to client requests. Servers are often dedicated machines designed to handle
multiple client requests simultaneously. Examples include web servers, database servers,
and email servers.

Request-Response Model:
Communication between clients and servers follows a request-response model. The client
sends a request to the server, specifying the desired service or resource. The server
processes the request and sends a response back to the client.

Protocols:
Communication between clients and servers relies on standardized protocols that define
how data is formatted, transmitted, and interpreted. Common protocols include
HTTP/HTTPS for web communication, SMTP for email, and FTP for file transfers.

Statelessness:
Many client-server interactions are designed to be stateless, meaning each request from the
client to the server is independent and contains all the information needed for the server to
understand and fulfill the request. This simplifies the design and scalability of the system.

Examples:
In a web environment, a client (web browser) requests a webpage from a server using the
HTTP protocol. The server processes the request, retrieves the webpage, and sends it back
to the client for display.
In a database system, a client application may send a request to a database server to
retrieve or update data. The server processes the SQL query and returns the query results to
the client.
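The request-response pattern can be sketched in a few lines of Python. This toy example (loopback only; the message contents are invented for illustration) runs a one-shot server in a thread while the client sends a request and reads the reply:

```python
import socket
import threading

# Toy request-response exchange on loopback: the server thread accepts one
# connection, reads the request, and replies; the client sends a request and
# reads the response. Real servers (HTTP, databases) follow the same pattern
# with richer protocols.
def serve_once(srv: socket.socket) -> None:
    conn, _ = srv.accept()
    request = conn.recv(1024)
    conn.sendall(b"response to: " + request)
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))           # port 0: the OS assigns a free port
srv.listen(1)
t = threading.Thread(target=serve_once, args=(srv,))
t.start()

cli = socket.socket()
cli.connect(srv.getsockname())
cli.sendall(b"GET /page")
reply = cli.recv(1024)
print(reply)                         # b'response to: GET /page'
cli.close()
t.join()
srv.close()
```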

Scalability:
The client-server model allows for scalable systems, as additional clients can connect to a
server without affecting the overall architecture. Servers can be designed to handle multiple
concurrent requests, enabling efficient resource utilization.

Security:
Security measures can be implemented to control access to resources and protect sensitive
data. Authentication and authorization mechanisms ensure that clients have the appropriate
permissions to access server resources.
Client-server communication is fundamental to the functioning of various distributed systems,
enabling the efficient sharing of resources and services across networks. It provides a
structured and organized approach to networked computing, supporting diverse
applications and services.
Network Performance
Network performance refers to the efficiency, reliability, and speed with which data is
transmitted and received across a computer network. It is a measure of how well a network
is operating in terms of delivering the expected level of service and meeting the requirements
of its users. Network performance is crucial for ensuring a seamless and responsive
experience for users and optimizing the utilization of network resources. Several factors
contribute to network performance:

Bandwidth:
Bandwidth is the capacity of a network to transmit data. It is typically measured in bits per
second (bps). Higher bandwidth allows for the transmission of more data simultaneously,
leading to faster network performance.

Latency:
Latency, often referred to as delay, is the time it takes for data to travel from the source to
the destination. Low-latency networks result in quicker response times, critical for real-time
applications like video conferencing and online gaming.

Jitter:
Jitter is the variation in latency, causing irregular delays in the delivery of data packets.
Consistent and low jitter is important for applications that require a steady and predictable
flow of data, such as voice and video calls.

Packet Loss:
Packet loss occurs when data packets transmitted across the network do not reach their
destination. Excessive packet loss can degrade the quality of network communication and
impact the performance of applications.

Reliability:
Network reliability refers to the consistency and stability of network connections. Reliable
networks minimize downtime and disruptions, ensuring continuous availability of services.

Throughput:
Throughput is the actual amount of data transmitted successfully over the network within a
specific timeframe. It is a practical measure of network performance, taking into account
factors like latency, jitter, and packet loss.
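Bandwidth and latency interact: their product (the bandwidth-delay product) gives the amount of data that must be "in flight" to keep a link fully utilized, which is why high-bandwidth, high-latency links need large buffers. The figures below are illustrative, not from the text:

```python
# Bandwidth-delay product: how much data must be in transit to keep the
# link busy. Example figures: a 100 Mbps link with a 20 ms round-trip time.
bandwidth_bps = 100_000_000          # 100 Mbps
rtt_s = 0.02                         # 20 ms round-trip latency
bdp_bytes = bandwidth_bps * rtt_s / 8
print(f"bandwidth-delay product: {bdp_bytes:.0f} bytes")  # 250000 bytes
```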
Scalability:
Scalability is the ability of a network to accommodate an increasing number of users,
devices, or data traffic without a significant decrease in performance. Scalable networks can
handle growth without compromising efficiency.

Quality of Service (QoS):


QoS mechanisms prioritize certain types of traffic over others, ensuring that critical
applications receive sufficient resources and bandwidth. This helps maintain performance
levels for important tasks.

Load Balancing:
Load balancing involves distributing network traffic across multiple servers or paths to
prevent overloading on a specific resource. It improves overall performance by optimizing
resource utilization.
Monitoring and managing network performance are essential for identifying and addressing
issues proactively. Various tools and techniques, such as network monitoring software,
traffic analysis, and performance testing, are used to assess and optimize network
performance for a better user experience and efficient resource utilization.
Circuit switching, packet switching, and virtual circuit switching
Circuit switching, packet switching, and virtual circuit switching are three different
approaches to managing the flow of data in a telecommunications network. Each method
has its own characteristics, advantages, and use cases. Let's explore each of them:

Circuit Switching:
Description: In circuit switching, a dedicated communication path is established between
two devices for the duration of their conversation. The path remains reserved exclusively for
the use of those two devices until the conversation is completed.

Characteristics:
Dedicated Path: A dedicated circuit is established for the entire duration of the
communication.

Resource Reservation: Resources (bandwidth) are reserved for the entire duration of the
call, even if no data is being transmitted.

Predictable Timing: Circuit switching provides a predictable and constant delay, making it
suitable for real-time applications like voice calls.
Examples: Traditional telephone networks (PSTN) often use circuit switching.

Packet Switching:
Description: In packet switching, data is divided into packets, which are small units of data.
These packets are then independently routed from the source to the destination, where they
are reassembled to reconstruct the original message.

Characteristics:
Shared Resources: Network resources are shared among multiple users and packets.
Variable Timing: Packet switching provides variable delay as packets may take different
routes to reach the destination.

Efficient Use of Resources: Bandwidth is used more efficiently as resources are
dynamically allocated based on demand.
Examples: The Internet predominantly uses packet switching, including protocols like IP
(Internet Protocol).
Virtual Circuit Switching:
Description: Virtual circuit switching is a hybrid of circuit switching and packet switching. It
establishes a virtual circuit (logical path) between the source and destination, but the data is
still transmitted in packets.

Characteristics:
Dedicated Path (Virtual): A virtual circuit is established for the duration of the
communication, but it is not a physical path.
Resource Reservation: Resources are reserved for the virtual circuit, ensuring a certain
level of quality of service.
Predictable Timing: Similar to circuit switching, virtual circuit switching provides more
predictable timing than pure packet switching.
Examples: Frame Relay and ATM (Asynchronous Transfer Mode) networks use virtual circuit
switching.

Summary:

Circuit Switching: Dedicated circuit for the entire duration; resources reserved; predictable
timing; used in traditional telephony.

Packet Switching: Divides data into packets; shared resources; variable timing; efficient use
of resources; used in the Internet.

Virtual Circuit Switching: Establishes a virtual circuit; resources reserved for the virtual
circuit; predictable timing; used in Frame Relay and ATM networks.
Each switching method has its own trade-offs, and the choice between them depends on
the specific requirements of the application and network.
CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance)

CSMA/CA is a network protocol used in wireless networks to manage access to a shared
communication medium. It is designed to avoid collisions in wireless environments where it
is challenging to detect collisions reliably.

Operation:
Carrier Sense: A device using CSMA/CA listens to the wireless channel to check if it is idle
before attempting to transmit.
Collision Avoidance: Instead of waiting for a collision to be detected, CSMA/CA aims to
avoid collisions by employing techniques such as Request to Send (RTS) and Clear to Send
(CTS) frames. Before transmitting data, a device sends an RTS frame to the destination. The
destination replies with a CTS frame if it is ready to receive. This exchange helps in reserving
the channel and reducing the chance of collisions.

CSMA/CD (Carrier Sense Multiple Access with Collision Detection):


CSMA/CD is a network protocol used in traditional Ethernet networks with shared
communication mediums, such as coaxial cables. It is designed to detect and handle
collisions that may occur when multiple devices attempt to transmit simultaneously on the
same network segment.

Operation:
Carrier Sense: A device using CSMA/CD listens to the network to check if it is idle before
attempting to transmit.
Collision Detection: If a collision is detected (two devices transmitting simultaneously), the
devices stop transmitting, send a jam signal to ensure all devices are aware of the collision,
and then initiate a backoff period before attempting to transmit again.
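The backoff step can be sketched as the truncated binary exponential backoff used in classic Ethernet: after the k-th consecutive collision, a station waits a random number of slot times in the range [0, 2^k - 1], with k capped at 10. The slot time shown is the 10 Mbps Ethernet value:

```python
import random

# Truncated binary exponential backoff, as in classic Ethernet: after the
# k-th collision, wait a random number of slot times in [0, 2^k - 1],
# with the exponent capped at 10.
SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet, in microseconds

def backoff_slots(collisions: int) -> int:
    k = min(collisions, 10)
    return random.randint(0, 2**k - 1)

for c in (1, 2, 3):
    slots = backoff_slots(c)
    print(f"collision {c}: wait {slots} slots ({slots * SLOT_TIME_US:.1f} us)")
```

Randomizing the wait makes it unlikely that the same two stations collide again on their retry; doubling the range after each collision adapts to how congested the segment is.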
Differences:

Environment:
CSMA/CA: Primarily used in wireless networks where collision detection is challenging.
CSMA/CD: Historically used in wired Ethernet networks with a shared medium.

Collision Handling:
CSMA/CA: Aims to avoid collisions using techniques like RTS/CTS and prioritizing collision
avoidance.
CSMA/CD: Detects and handles collisions by stopping transmission, sending jam signals,
and initiating backoff periods.
Network Type:
CSMA/CA: Common in Wi-Fi networks.
CSMA/CD: Common in older shared-medium Ethernet (e.g., 10BASE5, 10BASE2, and
half-duplex 10BASE-T/100BASE-TX) but not used in modern full-duplex switched Ethernet.

Applicability:
CSMA/CA: Suited for environments with a high potential for interference and where collision
detection is unreliable (e.g., wireless networks).
CSMA/CD: Suited for wired environments where collision detection is feasible and was
historically used in shared Ethernet segments.

Efficiency:
CSMA/CA: Introduces additional overhead with RTS/CTS frames but can be more efficient in
avoiding collisions in wireless environments.
CSMA/CD: Uses bandwidth more efficiently in wired environments where collision detection
is feasible, since it avoids the RTS/CTS overhead.
Both protocols were crucial in the evolution of networking, with CSMA/CD being a
foundational element of early Ethernet networks and CSMA/CA addressing the challenges
posed by wireless communication. However, as wired Ethernet networks have evolved,
modern Ethernet standards have moved away from CSMA/CD in favor of full-duplex
communication.
Media Access Control (MAC) address
A Media Access Control (MAC) address, also known as a hardware address or physical
address, is a unique identifier assigned to network interfaces for communications at the data
link layer of a network. MAC addresses are crucial for the proper functioning of networking
devices within a local network, and they play a fundamental role in the Ethernet protocol.

Key Characteristics of MAC Addresses:

Uniqueness:
Every network interface card (NIC) or network adapter is assigned a globally unique MAC
address by its manufacturer. In principle, no two devices should share the same MAC
address, although addresses can be overridden in software.
48-Bit Address:
A MAC address is a 48-bit address, typically represented in hexadecimal format and
commonly written as six pairs of two characters separated by colons or dashes (e.g.,
00:1A:2B:3C:4D:5E).

OUI (Organizationally Unique Identifier):


The first half of the MAC address (the first three pairs) represents the OUI, which is assigned
to the manufacturer or organization that produces the network interface. It helps identify the
vendor of the device.

Interface-Specific:
The second half of the MAC address (the last three pairs) is assigned by the manufacturer to
the specific network interface. This part is unique to each individual NIC.
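Splitting the example address from the text into these two halves is straightforward; the helper below is a hypothetical utility written for illustration, not part of any standard library:

```python
# Split a MAC address into its OUI (vendor) half and its
# interface-specific half. 00:1A:2B:3C:4D:5E is the example from the text.
def split_mac(mac: str) -> tuple[str, str]:
    octets = mac.replace("-", ":").split(":")
    if len(octets) != 6:
        raise ValueError("a MAC address has six octets (48 bits)")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, nic = split_mac("00:1A:2B:3C:4D:5E")
print("OUI:", oui)   # 00:1A:2B -> identifies the manufacturer
print("NIC:", nic)   # 3C:4D:5E -> unique to this individual interface
```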
Usage in Networking:

Addressing:
MAC addresses are used to uniquely identify devices on a local network. When data is
transmitted within a local network, it is addressed to the MAC address of the destination
device.

Switching:
Ethernet switches use MAC addresses to forward data only to the specific device that is the
intended recipient. This process enhances the efficiency of network communication.

ARP (Address Resolution Protocol):


ARP is used to map IP addresses to MAC addresses. Devices on a local network use ARP to
discover the MAC address associated with a specific IP address.
Security:
MAC filtering is a security measure where a network administrator configures a network to
allow or deny specific devices based on their MAC addresses.

Wake-on-LAN:
Wake-on-LAN allows a device to be powered on remotely. It works by sending a special
"magic packet" containing the MAC address of the target device.

Device Identification:
MAC addresses are often used for device identification and tracking within networks for
administrative and troubleshooting purposes.
It's important to note that while MAC addresses are essential for local network
communication, they are not typically used for routing data across the broader internet. At
the network layer (Layer 3) of the OSI model, IP addresses are used for addressing and
routing.
IP (Internet Protocol)
IP (Internet Protocol) addressing is a fundamental concept in computer networking that
involves assigning unique numerical labels to devices connected to a network, enabling
them to communicate with each other. An IP address serves two primary purposes:
identifying the host or network interface and providing the location of the host in the network.

Key aspects of IP addressing include:


IP Address Types:
IPv4 (Internet Protocol version 4): The most widely used version, IPv4 addresses are 32-bit
numerical labels written in dotted-decimal format (e.g., 192.168.1.1).
IPv6 (Internet Protocol version 6): Developed to address the exhaustion of IPv4 addresses,
IPv6 uses a 128-bit address space, allowing for a vastly increased number of unique
addresses.

IPv4 Address Structure:


An IPv4 address consists of four octets (32 bits), separated by dots. Each octet is
represented in decimal form (0-255). For example, the address 192.168.1.1 is divided into
four octets: 192, 168, 1, and 1.

IPv6 Address Structure:


An IPv6 address is represented in hexadecimal notation and separated by colons. For
example, 2001:0db8:85a3:0000:0000:8a2e:0370:7334, which can be shortened to
2001:db8:85a3::8a2e:370:7334 by dropping leading zeros and compressing one run of
all-zero groups into "::".
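Python's ipaddress module applies these zero-compression rules automatically, which makes it a handy way to check a shortened form:

```python
import ipaddress

# IPv6 shortening: leading zeros are dropped and the longest run of
# all-zero groups is compressed to "::". Address taken from the text.
full = "2001:0db8:85a3:0000:0000:8a2e:0370:7334"
print(ipaddress.ip_address(full))   # 2001:db8:85a3::8a2e:370:7334
```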
Addressing for Hosts and Networks:
IP addresses are divided into two parts: the network portion and the host portion. The
structure of an IP address is determined by the subnet mask, which indicates how the
address is divided between the network and host.

Subnetting:
Subnetting involves dividing an IP network into smaller sub-networks, allowing for better
organization and management of IP addresses. It helps optimize the use of available
addresses.

IP Address Classes:
In the early days of IPv4, addresses were divided into classes (A, B, C, D, and E) based on
the size of the network. However, this classful addressing scheme has been largely replaced
by Classless Inter-Domain Routing (CIDR).
Dynamic and Static Address Assignment:
Dynamic IP Addressing: Devices are assigned IP addresses dynamically by a DHCP
(Dynamic Host Configuration Protocol) server. This is common in home networks and large
organizations.
Static IP Addressing: Devices are assigned fixed, manually configured IP addresses. Static
addressing is often used for servers and network infrastructure.

Private and Public IP Addresses:


Private IP addresses are reserved for use within private networks (e.g., 192.168.x.x). Public
IP addresses are routable on the global internet.
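The ipaddress module can also classify an address as private or public; 8.8.8.8 appears below purely as a well-known example of a public address:

```python
import ipaddress

# Classify addresses: 192.168.x.x falls in the RFC 1918 private ranges,
# while 8.8.8.8 is globally routable (public).
for addr in ("192.168.1.1", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    print(addr, "->", "private" if ip.is_private else "public")
```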
IP addressing is a foundational element of the TCP/IP protocol suite, providing the basis for
communication between devices in networks, whether they are local area networks (LANs) or
connected to the global internet.
Subnetting
Subnetting is the practice of dividing a larger IP network into smaller, more manageable sub-
networks, or subnets. This process allows network administrators to improve the efficiency
of IP address allocation, enhance network performance, and implement better security and
management practices. Subnetting is a fundamental concept in networking, and it involves
dividing an IP address space into smaller blocks, each assigned to a specific subnet.

Here's a breakdown of key aspects of subnetting:


IPv4 Address Structure:
In IPv4, addresses are 32 bits long, typically represented in dotted-decimal format (e.g.,
192.168.1.1). The address is divided into two parts: the network portion and the host portion.

Network Portion vs. Host Portion:


The network portion identifies the subnet, while the host portion identifies an individual
device within that subnet. The division between the network and host portions is determined
by the subnet mask.

Subnet Mask:
The subnet mask is a 32-bit value that consists of consecutive '1' bits followed by
consecutive '0' bits. It is used to separate the network and host portions of an IP address.
For example, in the subnet mask 255.255.255.0, the first 24 bits are allocated to the network,
and the last 8 bits are for hosts.

CIDR (Classless Inter-Domain Routing):


CIDR introduced a more flexible way to represent IP addresses and subnet masks. CIDR
notation allows network administrators to specify both the IP address and the subnet mask
in a concise format (e.g., 192.168.1.0/24).

Benefits of Subnetting:
Efficient Use of IP Addresses: Subnetting allows for the efficient allocation of IP addresses,
preventing wastage of address space.

Network Organization: Subnets help organize devices based on departments, functions, or
geographical locations.

Improved Performance: Smaller, well-designed subnets can reduce network traffic and
improve performance.
Enhanced Security: Subnetting allows for the isolation of network segments, enhancing
security by controlling the flow of traffic between subnets.

Subnetting Process:
The subnetting process involves determining the required number of subnets and hosts per
subnet, selecting an appropriate subnet mask, and assigning addresses to each subnet.

Example:
If you have the IP address 192.168.1.0 and want to create four subnets, you can use a
subnet mask that allocates more bits for subnets. For example, a subnet mask of
255.255.255.192 (/26 in CIDR notation) allows for four subnets with 62 hosts each.
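The worked example can be checked with Python's ipaddress module, which enumerates the /26 subnets of the /24 block and their usable host counts:

```python
import ipaddress

# Verify the worked example: splitting 192.168.1.0/24 with a /26 mask
# (255.255.255.192) yields four subnets with 62 usable hosts each.
net = ipaddress.ip_network("192.168.1.0/24")
subnets = list(net.subnets(new_prefix=26))
for s in subnets:
    usable = s.num_addresses - 2     # minus the network and broadcast addresses
    print(s, "->", usable, "usable hosts")
print(len(subnets), "subnets")       # 4 subnets
```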
Subnetting is a valuable skill for network administrators, as it provides the flexibility to design
and manage networks effectively while optimizing resource usage.
Imagine you have a large piece of land, and you want to divide it into smaller, more
manageable sections. Subnetting in networking is a bit like that, but with IP addresses.
Let's use an analogy to make it intuitive.

Analogy: Neighborhood with Houses

The Big Neighborhood (IP Address Space):


Think of the entire neighborhood as a large block of IP addresses. This represents the
available range of IP addresses you have.

Houses in the Neighborhood (Devices):
Each house in the neighborhood is like a device on your network (computer, printer,
smartphone, etc.). These houses need unique addresses to receive mail (data).

Street Address (IP Address):
The street address of each house is similar to an IP address. It uniquely identifies each
house (device) within the neighborhood.

Street Blocks (Subnets):
Now, you want to organize the neighborhood better. You decide to divide it into smaller
blocks, each with its own unique character. These blocks are like subnets.

Subnet 1 (Block 1):
In the first block, you decide to have houses with addresses like 192.168.1.x. This is your
first subnet. The 'x' can be any number from 1 to 254, representing individual houses.
Subnet 2 (Block 2):
In the second block (subnet), you have houses with addresses like 192.168.2.x. Each block
is independent, and houses in one block don't have the same address as houses in another
block.

Why Subnetting is Useful:

Organization:
Subnetting allows you to organize your IP address space more efficiently. Each subnet can
represent a different department, floor, or purpose in your network.

Efficient Use of Addresses:
Instead of having one big range of addresses, you can allocate smaller ranges to different
subnets. This helps in efficient use of available addresses.

Security:
Subnetting can enhance security. You can control the flow of traffic between subnets,
allowing for more secure communication within specific blocks.

Reduced Network Traffic:
Devices in the same subnet communicate more directly with each other, reducing
unnecessary network traffic.

Practical Example:
Let's say you have a company with two departments: Sales and Marketing. Subnetting
allows you to give unique addresses to devices in each department, making it easier to
manage, secure, and organize the network.
In essence, subnetting is like dividing a large space into smaller, more manageable sections
to create order, improve efficiency, and provide structure to the addressing of devices on a
network.
Internet Routing
Internet routing is the process by which data packets are directed from one host to another
across multiple networks, forming the basis for communication on the global internet. It
involves determining the most efficient path for data to traverse through the interconnected
web of routers and networks, ensuring that the data reaches its intended destination.

Key components and concepts related to internet routing include:

Router:
Routers are network devices responsible for forwarding data packets between networks.
They operate at the network layer (Layer 3) of the OSI model and use routing tables to make
decisions about the best path for packet delivery.

Routing Protocol:
Routing protocols are a set of rules and conventions that routers use to exchange
information about the networks they are connected to. Common routing protocols include
BGP (Border Gateway Protocol), OSPF (Open Shortest Path First), and RIP (Routing
Information Protocol).

Routing Tables:
Each router maintains a routing table that contains information about known networks and
the optimal paths to reach them. The table is updated dynamically based on information
received from neighboring routers.
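A routing table lookup can be sketched as a longest-prefix match: among all entries that contain the destination address, the most specific (longest) prefix wins. The prefixes and next-hop addresses below are invented for illustration:

```python
import ipaddress

# A toy routing table: (destination prefix, next hop). All values are invented.
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),  "10.0.0.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "10.1.0.1"),
    (ipaddress.ip_network("0.0.0.0/0"),   "192.0.2.1"),  # default route
]

def lookup(dst: str) -> str:
    """Return the next hop for the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routing_table if addr in net]
    best = max(matches, key=lambda m: m[0].prefixlen)
    return best[1]

print(lookup("10.1.2.3"))  # matches /8, /16 and /0; the /16 is longest -> 10.1.0.1
print(lookup("8.8.8.8"))   # only the default route matches -> 192.0.2.1
```

Real routers implement the same rule with specialized data structures (tries, TCAM) for speed, but the selection logic is this one.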

IP Addressing:
Internet Protocol (IP) addresses uniquely identify devices on a network. Routing decisions
are made based on these addresses, allowing routers to determine the destination of data
packets.

Autonomous System (AS):
An Autonomous System is a collection of IP networks and routers under the control of a
single organization, typically an Internet Service Provider (ISP) or a large enterprise. BGP is
often used to exchange routing information between autonomous systems.

BGP (Border Gateway Protocol):
BGP is a key exterior gateway protocol used to exchange routing information between
different autonomous systems. It plays a crucial role in interconnecting the diverse networks
that make up the internet.
Internet Backbone:
The internet backbone consists of high-capacity, long-distance networks that interconnect
major data centers and internet exchange points. These backbone networks facilitate the
efficient transfer of data between regions.

Peering and Transit:
Internet service providers (ISPs) engage in peering and transit agreements to exchange
traffic with each other. Peering involves a direct connection between ISPs to exchange traffic,
while transit allows one ISP to use the network of another to reach destinations.

Anycast Routing:
Anycast is a routing technique that associates a single IP address with multiple servers
distributed across different locations. When a user sends a request to the anycast IP, the
routing system directs the request to the nearest server in the anycast group.
Internet routing is dynamic and constantly adapts to changes in network conditions, such as
link failures, traffic congestion, and changes in routing policies. The goal is to efficiently route
data between hosts while ensuring reliability, low latency, and optimal use of network
resources.

Routing Information Protocol
The Routing Information Protocol (RIP) is one of the oldest and most straightforward routing
protocols used in computer networking. It falls under the category of distance-vector routing
protocols, designed to help routers dynamically share information about network paths and
determine the most efficient routes for data packets to reach their destinations. RIP is
commonly used in small to medium-sized networks.

Key features and characteristics of RIP include:

Distance-Vector Routing:
RIP operates on the principle of distance-vector routing. Each router maintains a routing
table where it stores information about the distance (hop count) to reach each known
network. The "vector" refers to the direction or next-hop information associated with each
route.
Hop Count:
RIP uses hop count as its metric to measure the distance between routers. A hop is
essentially a router through which data passes. The route with the fewest hops is considered
the most efficient.

Routing Updates:
Routers using RIP periodically send updates, called routing updates or advertisements, to
their neighboring routers. These updates contain information about the router's known
networks and their associated hop counts.
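One round of processing such an update can be sketched in a few lines of Python. The network names, hop counts, and neighbor advertisement below are invented for illustration; the rule itself (keep a route only if going via the neighbor is cheaper) is the Bellman-Ford relaxation that distance-vector protocols rely on:

```python
# One round of a RIP-style distance-vector update.
INFINITY = 16  # RIP treats 16 hops as unreachable

# Our current table: destination network -> hop count (invented values)
my_table = {"netA": 1, "netB": 3}

# An advertisement received from a directly connected neighbor (1 hop away)
neighbor_table = {"netB": 1, "netC": 2}

for dest, hops in neighbor_table.items():
    via_neighbor = min(hops + 1, INFINITY)  # add one hop to reach the neighbor
    if dest not in my_table or via_neighbor < my_table[dest]:
        my_table[dest] = via_neighbor  # shorter path found via this neighbor

print(my_table)  # {'netA': 1, 'netB': 2, 'netC': 3}
```

Here netB improves from 3 hops to 2 via the neighbor, and netC is learned for the first time at 3 hops.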

Split Horizon:
RIP uses a mechanism called split horizon to prevent routing loops. In simple terms, a router
does not advertise routes back out the interface from which it received them to avoid
creating a loop.

Hold-down Timers:
RIP uses hold-down timers to prevent the rapid and potentially unstable fluctuation of
routing tables. When a router receives information about a route going down, it enters a
hold-down state during which it delays accepting new information about that route for a
specified period.

Convergence:
RIP has a relatively slow convergence time. It may take some time for routers to adjust to
changes in the network topology, which can result in suboptimal routes being used until the
network stabilizes.

Classful Routing:
Originally, RIP was designed to work with classful IP addressing, so version 1 does not
carry subnet mask information in its updates. RIP version 2 added subnet mask support,
enabling classless inter-domain routing (CIDR), and RIPng (RIP Next Generation) extended
the protocol to IPv6.

Authentication:
RIP supports a basic form of authentication to secure routing updates. This is essential to
prevent unauthorized devices from injecting false routing information into the network.
While RIP has been widely used, it has certain limitations, such as its slow convergence and
reliance on hop count as the sole metric. More modern routing protocols, like OSPF (Open
Shortest Path First) and BGP (Border Gateway Protocol), are often preferred in larger and
more complex networks due to their enhanced features and scalability.
OSPF
Open Shortest Path First (OSPF) is a link-state routing protocol designed to efficiently route
IP packets within an autonomous system (AS), which is a collection of routers and networks
under a common administration. OSPF is widely used in large and complex networks,
offering scalability, flexibility, and rapid convergence.

Here are key features and concepts associated with OSPF:

Link-State Protocol:
OSPF is a link-state routing protocol, which means that each router in the OSPF domain
maintains a detailed and synchronized database of the entire network's topology. Routers
share information about the state of their links, including their neighbors, the cost of links,
and the state of those links.

Areas:
OSPF divides large networks into smaller, more manageable units called areas. Each area
has its own link-state database, and routers within an area have detailed knowledge of the
area's topology. A backbone area (Area 0) connects different areas, ensuring
interconnectivity.

Neighbor Discovery:
OSPF routers discover and establish adjacencies with their neighboring routers. Hellos are
exchanged between routers to form neighbor relationships. This ensures that routers have
an up-to-date view of their neighbors and the network.

Link-State Database:
Each OSPF router maintains a link-state database containing information about all routers
and links within the OSPF domain. This database is used to calculate the shortest path to
each destination.

Dijkstra's Shortest Path First (SPF) Algorithm:
OSPF uses Dijkstra's SPF algorithm to calculate the shortest path to each destination based
on the information in the link-state database. This ensures that routers have a consistent and
optimal view of the network's topology.
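A minimal version of the SPF calculation can be written with a priority queue. The four-router topology and link costs below are invented for illustration; OSPF runs this same computation over its link-state database:

```python
import heapq

def spf(graph, source):
    """Dijkstra's shortest-path-first: cheapest total cost from source to every node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry; a cheaper path was already found
        for neigh, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neigh, float("inf")):
                dist[neigh] = new_cost
                heapq.heappush(heap, (new_cost, neigh))
    return dist

# Invented topology: routers with OSPF-style link costs
graph = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}
print(spf(graph, "R1"))
```

From R1, the algorithm finds R4 at cost 11 (via R2) rather than 25 (via R3), exactly the "lowest cumulative cost" behavior described below under the cost metric.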

Cost Metric:
OSPF uses a cost metric to determine the preferred path to a destination. The cost is
assigned to each link based on factors such as bandwidth. Routers choose paths with lower
cumulative costs.
Type of Service (TOS) Support:
OSPF supports the concept of Type of Service (TOS), allowing routers to consider different
paths based on the characteristics of the traffic, such as delay or bandwidth requirements.

Hierarchical Design:
OSPF's hierarchical design, with the use of areas, enhances scalability. Changes within an
area do not impact routers in other areas, reducing the amount of routing information that
routers need to exchange.

Authentication:
OSPF provides authentication mechanisms to secure the routing information exchanged
between routers. This helps prevent unauthorized routers from participating in the OSPF
process.

Multiple Routing Tables:
OSPF routers maintain separate routing tables for different IP routing protocols, allowing for
efficient routing of different types of traffic.

Scalability and Flexibility:
OSPF is well-suited for large and complex networks. Its hierarchical structure, efficient use
of resources, and ability to scale make it a preferred choice for many organizations.
Overall, OSPF is a robust and widely adopted routing protocol that plays a crucial role in
ensuring efficient and reliable routing within autonomous systems.

BGP
Border Gateway Protocol (BGP) is a standardized exterior gateway protocol used to
exchange routing and reachability information between autonomous systems (ASes) on the
Internet. BGP is a path vector protocol that enables routers in different ASes to make
informed decisions about the most efficient routes for exchanging data. Here are key
aspects and features of BGP:

Autonomous Systems (AS):
An Autonomous System is a collection of IP networks and routers under the control of a
single organization or entity that presents a common routing policy to the internet. Each AS
is assigned a unique identification number, known as the AS number.
Path Vector Protocol:
BGP is a path vector protocol, which means that it maintains a table of network paths and
selects the best path based on policies, rules, and attributes associated with each path. The
selected path is then advertised to other routers.

Neighbor Relationships:
BGP routers establish neighbor relationships with other routers in different ASes. These
neighbor relationships are formed through manual configuration, and BGP peers exchange
routing information.

BGP Routing Table:
BGP routers maintain a BGP routing table, which contains information about the best paths
to reach various network destinations. BGP routers use attributes like AS path, next-hop
information, and various other criteria to determine the best path.

Attributes:
BGP uses attributes to characterize and determine the best route to a destination. Common
attributes include:

AS Path: The sequence of AS numbers that the route has traversed.

Next Hop: The IP address of the next router along the chosen path.

Weight: A local parameter used to influence route selection within an AS.

Local Preference: Indicates the preference for an external route within an AS.

Multi-Exit Discriminator (MED): Used to influence the path selection process between
neighboring ASes.

Route Advertisement:
BGP routers advertise routes to their neighboring routers, providing information about the
networks they can reach and the associated attributes. BGP routers do not advertise the
entire routing table to every neighbor; instead, they share updates selectively.

Path Selection:
BGP uses a decision process to select the best path to reach a destination. The decision is
based on a set of rules and attributes. The router chooses the path with the highest
preference based on these criteria.
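A highly simplified sketch of this decision process, comparing only local preference and AS-path length (real BGP applies many more tie-breakers, and the routes below are invented for illustration):

```python
# Candidate routes for the same prefix, as a router might learn them
# from different neighbors. All attribute values are invented.
routes = [
    {"prefix": "203.0.113.0/24", "as_path": [65001, 65002, 65003], "local_pref": 100},
    {"prefix": "203.0.113.0/24", "as_path": [65010, 65003],        "local_pref": 100},
    {"prefix": "203.0.113.0/24", "as_path": [65020],               "local_pref": 50},
]

def best_path(candidates):
    # Tuple key: prefer higher local_pref first, then fewer ASes in the path.
    return max(candidates, key=lambda r: (r["local_pref"], -len(r["as_path"])))

best = best_path(routes)
print(best["as_path"])  # [65010, 65003]: tied on local_pref, shorter AS path wins
```

Note that the route with the shortest AS path overall ([65020]) loses because its local preference is lower: policy attributes outrank path length in BGP's decision order.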
Policy-Based Routing:
BGP allows for policy-based routing decisions. Network administrators can define policies to
influence BGP's path selection based on their organization's requirements.

Scaling and Stability:
BGP is designed to handle the scalability and stability challenges of the global internet. It
allows for the flexible representation of complex internet routing policies.

Internet Backbone:
BGP is a critical protocol for the functioning of the internet backbone. It facilitates the
exchange of routing information between major ISPs and ensures that data can traverse the
interconnected networks that make up the internet.

Security:
BGP lacks inherent security mechanisms, and incidents like route hijacking can occur.
Efforts, such as Resource Public Key Infrastructure (RPKI), have been introduced to enhance
the security of BGP routing.
BGP plays a crucial role in the core of the internet, enabling the interconnection of diverse
networks and the exchange of routing information across autonomous systems. It provides
the foundation for the global routing infrastructure.
Transport Layer Protocol
The Transport Layer is the fourth layer of the OSI (Open Systems Interconnection) model and
the TCP/IP protocol suite. Its primary purpose is to ensure the reliable and efficient
communication between two devices across a network. The Transport Layer achieves this
by providing end-to-end communication services, error detection and correction, flow
control, and multiplexing/demultiplexing of multiple communication streams. Two prominent
Transport Layer protocols are Transmission Control Protocol (TCP) and User Datagram
Protocol (UDP).

Transmission Control Protocol (TCP):
Reliability: TCP is a connection-oriented protocol that ensures reliable and error-free data
delivery. It employs mechanisms like acknowledgments, retransmissions, and sequence
numbers to guarantee that data sent by one device is received correctly by the other.

Flow Control: TCP implements flow control to manage the rate of data transmission
between sender and receiver, preventing congestion. It uses a sliding window mechanism to
dynamically adjust the amount of data that can be in transit at any given time.

Connection Establishment and Termination: TCP establishes a connection between
devices before data exchange and terminates it after completion. This ensures a reliable and
orderly exchange of information.

Ordered Data Delivery: TCP guarantees the ordered delivery of data. If packets arrive out of
order, TCP reorders them before passing them to higher-layer applications.

Full Duplex Communication: TCP supports full-duplex communication, allowing both
devices to send and receive data simultaneously.

User Datagram Protocol (UDP):
Connectionless: UDP is a connectionless protocol, which means it does not establish a
connection before transmitting data. It is more lightweight than TCP but lacks some of the
reliability features.

Low Overhead: UDP has minimal overhead, making it faster and more suitable for real-time
applications where low latency is crucial, such as streaming or online gaming.
Unreliable: Unlike TCP, UDP does not provide error detection, correction, or
acknowledgment mechanisms. Therefore, it is less reliable, and there is no guarantee that
the data will be successfully delivered.

No Flow Control: UDP does not implement flow control. In scenarios where the sender
transmits data faster than the receiver can process, UDP may lead to packet loss.

Simple and Stateless: UDP is a simple and stateless protocol. Each UDP packet is
independent, and there is no concept of a connection or session.

In summary, TCP and UDP are the two primary Transport Layer protocols, each designed to
address different requirements. TCP prioritizes reliability, ordering, and flow control, making
it suitable for applications that demand accurate and complete data delivery. UDP, on the
other hand, is faster, more lightweight, and appropriate for applications where real-time
communication and low overhead are critical, even at the expense of occasional packet loss.
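The difference shows up directly at the socket API. The sketch below sends a single UDP datagram over the loopback interface: no handshake, no acknowledgment, just a fire-and-forget message (delivery is only dependable here because loopback does not drop packets):

```python
import socket

# Minimal UDP exchange over loopback: connectionless, no delivery guarantee.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
receiver.settimeout(2)            # avoid blocking forever if the datagram is lost
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))  # fire and forget, no connect()

data, addr = receiver.recvfrom(1024)
print(data)  # b'hello'

sender.close()
receiver.close()
```

A TCP version of the same exchange would need `listen()`, `accept()`, and `connect()` calls first, which is the connection establishment described above.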
Application Layer Protocols
The Application Layer is the top layer of the OSI (Open Systems Interconnection) model and
the TCP/IP protocol suite. It is responsible for providing network services directly to end-
users and applications. The Application Layer protocols define the communication between
software applications and the network, enabling various applications to communicate over a
network. Some prominent Application Layer protocols include Hypertext Transfer Protocol
(HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), and Domain
Name System (DNS).

Hypertext Transfer Protocol (HTTP):
Purpose: HTTP is the foundation of data communication on the World Wide Web. It
facilitates the transfer of hypertext documents between web servers and clients (browsers).
Connectionless: HTTP operates in a connectionless manner, where each request from the
client and each response from the server is independent. This enables quick retrieval of web
pages and resources.

Stateless: HTTP is a stateless protocol, meaning that each request-response cycle is
independent and does not store information about the client's previous interactions. To
maintain state, mechanisms like cookies are often used.
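A minimal HTTP/1.1 request message illustrates this self-contained, text-based format (the host and path below are placeholders, not a real endpoint):

```python
# The text of a minimal HTTP/1.1 GET request. Each request carries everything
# the server needs, which is what makes the protocol stateless.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"   # required in HTTP/1.1
    "Connection: close\r\n"
    "\r\n"                        # blank line ends the header section
)

# Parse the request line back out, as a server would.
method, path, version = request.split("\r\n")[0].split(" ")
print(method, path, version)  # GET /index.html HTTP/1.1
```

Headers like Cookie would be added to such a request to carry the state that HTTP itself does not keep.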
File Transfer Protocol (FTP):

Purpose: FTP is used for the transfer of files between a client and a server on a network. It
allows users to upload, download, and manipulate files on remote servers.

Two Modes: FTP operates in two modes, active and passive. In active mode, the client
listens on a port and the server initiates the data connection back to the client; in passive
mode, the server opens a port and the client initiates the data connection. Passive mode is
commonly used because it passes more easily through firewalls and NAT.

Simple Mail Transfer Protocol (SMTP):
Purpose: SMTP is a protocol for sending email messages between servers. It is used to
transfer emails from the sender's email client to the recipient's email server.
Text-Based: SMTP is a text-based protocol that defines how messages are formatted and
transmitted. It works in conjunction with other protocols like POP (Post Office Protocol) and
IMAP (Internet Message Access Protocol) for email retrieval.

Domain Name System (DNS):
Purpose: DNS is a distributed database system that translates human-readable domain
names (like www.example.com) into IP addresses that are used by network devices for
communication.

Hierarchy: DNS operates in a hierarchical structure, with domain names organized into a
tree-like structure. The top-level domain (TLD) servers handle high-level domains
(e.g., .com, .org), and authoritative DNS servers provide IP address information for specific
domain names.
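Applications reach this translation service through the resolver API. The sketch below resolves "localhost" (chosen so the example needs no internet access; a real program would pass a public hostname):

```python
import socket

# Resolve a hostname to IPv4 addresses, as a browser does before connecting.
infos = socket.getaddrinfo("localhost", None, socket.AF_INET)
addresses = sorted({info[4][0] for info in infos})  # deduplicate the results
print(addresses)  # typically ['127.0.0.1']
```

For a name like www.example.com, the same call triggers the hierarchical lookup described above, walking from root to TLD to authoritative servers (usually via a caching resolver).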

Post Office Protocol (POP) and Internet Message Access Protocol (IMAP):
Purpose: POP and IMAP are protocols used by email clients to retrieve emails from a mail
server.
POP: POP is a simple protocol where emails are downloaded to the client's device and
usually removed from the server. It is suitable for scenarios where emails are primarily
accessed from a single device.
IMAP: IMAP allows emails to be stored on the server, providing more flexibility for accessing
emails from multiple devices. Changes made on one device (e.g., marking an email as read)
are reflected on the server.

Simple Network Management Protocol (SNMP):
Purpose: SNMP is used for managing and monitoring network devices such as routers,
switches, and servers. It enables administrators to collect information and control devices on
a network.
Get and Set Operations: SNMP supports two primary operations—GET (retrieve information
from a device) and SET (modify the configuration of a device). It uses a manager-agent
model where the SNMP manager collects data from SNMP agents.

Transmission Control Protocol (TCP) and User Datagram Protocol (UDP):
Purpose: While TCP and UDP are Transport Layer protocols, they play a crucial role in
supporting various application layer protocols. TCP is connection-oriented, ensuring reliable
data delivery, while UDP is connectionless, providing faster but less reliable communication.
These Application Layer protocols facilitate a wide range of network-based applications,
allowing users to browse the web, send emails, transfer files, and manage network
resources. Each protocol serves a specific purpose and follows its own set of rules and
conventions to enable effective communication between applications and network devices.
