
1. Data Flow

 Defines the direction in which data travels between devices.

 Three main types: Simplex, Half Duplex, and Full Duplex.

2. Simplex Mode

 Unidirectional Communication: Data flows in only one direction.

 Example: Keyboard to computer.

 Use Case: Ideal for scenarios where response isn't required.

3. Half Duplex Mode

 Bidirectional Communication, But Not Simultaneous: Data can flow both ways, but only one
direction at a time.

 Example: Walkie-talkies.

 Limitation: One device transmits at a time; the other must wait.

4. Full Duplex Mode

 Simultaneous Bidirectional Communication: Data can flow in both directions at the same
time.

 Example: Phone call.

 Advantage: Increases efficiency by allowing continuous data exchange.

 Requirement: Both sender and receiver need full-duplex-capable hardware.

--------------------------------------------------------------------------------------------------------------------------------------

1. Types of Network

 Networks are categorized based on size, area coverage, and purpose.

 Common types include PAN, LAN, CAN, MAN, WAN, SAN, VPN, Client-Server, and Peer-to-
Peer networks.

2. Personal Area Network (PAN)

 Size: Smallest; covers an individual workspace (few meters).

 Use Case: Connects personal devices like smartphones, laptops, wearables.

 Example: Bluetooth, USB connections.

3. Local Area Network (LAN)

 Size: Covers a single building or small area.

 Use Case: Connects computers in homes, offices, or schools.

 Advantage: High-speed connection, usually managed by one administrator.


4. Campus Area Network (CAN)

 Size: Larger than LAN, covering multiple buildings in a specific area (like a campus).

 Use Case: Connects academic institutions, business campuses.

 Example: University network spanning multiple buildings.

5. Metropolitan Area Network (MAN)

 Size: Covers a city or large campus.

 Use Case: Connects users within a city or town.

 Example: Citywide Wi-Fi or cable TV networks.

6. Wide Area Network (WAN)

 Size: Covers a large geographical area, even countries or continents.

 Use Case: Connects multiple LANs, allowing communication across long distances.

 Example: The Internet.

7. Storage Area Network (SAN)

 Purpose: Dedicated high-speed network for data storage.

 Use Case: Used in data centers to connect servers to storage devices.

 Advantage: Improves performance for data-intensive applications.

8. Virtual Private Network (VPN)

 Purpose: Securely extends a private network over a public network (e.g., the Internet).

 Use Case: Allows remote access to a private network.

 Benefit: Encrypts data, ensuring privacy and security.

9. Client-Server Network

 Structure: Central server provides resources to multiple clients.

 Use Case: Common in business networks for managing data and applications centrally.

 Advantage: Easier management and control over resources.

10. Peer-to-Peer Network (P2P)

 Structure: Each device (peer) acts as both client and server.

 Use Case: Common for file-sharing networks and small home networks.

 Advantage: No central server, easy to set up, but harder to manage in large networks.

--------------------------------------------------------------------------------------------------------------------------------------

1. Bus Topology

 Description: Single central cable (bus) to which all devices are connected.
 Advantages:

o Easy to implement and expand.

o Cost-effective for small networks.

 Disadvantages:

o Limited cable length and number of nodes.

o Difficult to troubleshoot; a cable failure affects the whole network.

2. Star Topology

 Description: All devices are connected to a central hub or switch.

 Advantages:

o Easy to manage and troubleshoot.

o Failure of one device does not affect others.

 Disadvantages:

o If the central hub fails, the entire network goes down.

o More cabling required, increasing cost.

3. Ring Topology

 Description: Each device is connected to two others, forming a circular pathway.

 Advantages:

o Data travels in a predictable direction, reducing collisions.

o Suitable for handling high traffic.

 Disadvantages:

o A failure in one device or link can disrupt the entire network.

o Slower as network size grows, since data passes through multiple devices.

4. Hybrid Topology

 Description: Combines two or more different topologies (e.g., star and bus).

 Advantages:

o Flexible and scalable, adaptable to different requirements.

o Inherits advantages of the topologies used.

 Disadvantages:

o Complex to design and manage.

o Higher installation cost due to complexity.

5. Mesh Topology
 Description: Each device is connected to every other device.

 Advantages:

o Highly reliable; failure of one link doesn’t affect the rest.

o Provides high data privacy and security.

 Disadvantages:

o Expensive due to extensive cabling and hardware.

o Complex setup and maintenance.

--------------------------------------------------------------------------------------------------------------------------------------

1. Network Interface Card (NIC)

 Function: Connects a computer to a network.

 Key Point: Each NIC has a unique MAC address, which identifies the device on the local network.

2. MAC Address

 Function: A unique hardware identifier assigned to each NIC (network interface).

 Key Point: Used for addressing devices within the same network.

3. Repeater

 Function: Amplifies or regenerates signals over long distances.

 Key Point: Used to extend the range of a network; works at the Physical layer.

4. Hub

 Function: Connects multiple devices in a LAN; broadcasts data to all connected devices.

 Key Point: Operates at the Physical layer; non-intelligent device without filtering capability.

5. Switch

 Function: Connects devices within a network; forwards data only to the specific destination
device.

 Key Point: Operates at the Data Link layer; more efficient than a hub as it reduces data
collisions.

6. Bridge

 Function: Connects and filters traffic between two or more LAN segments.

 Key Point: Works at the Data Link layer; used to divide large networks into smaller, more
manageable sections.

7. Router

 Function: Directs data packets between different networks.


 Key Point: Operates at the Network layer; essential for connecting different LANs to the
internet.

8. Gateway

 Function: Connects networks with different protocols and architectures.

 Key Point: Acts as a translator between two systems; works across multiple layers.

9. Firewall

 Function: Monitors and controls incoming and outgoing network traffic.

 Key Point: Enhances security by filtering data packets based on predetermined rules.

10. Wi-Fi Card

 Function: Connects a device to a wireless network.

 Key Point: Similar to a NIC but for wireless connectivity; often integrated into laptops and
mobile devices.

11. Access Point

 Function: Extends wireless network coverage by allowing wireless devices to connect.

 Key Point: Acts as a central transmitter and receiver for wireless signals, enhancing
connectivity within a WLAN.

1. Twisted Pair Cabling

 Description: Pairs of insulated copper wires twisted together to reduce interference.

 Types:

o Unshielded Twisted Pair (UTP): Common in Ethernet networks, affordable but less
shielded from interference.

o Shielded Twisted Pair (STP): More protection against interference, commonly used
in industrial environments.

 Advantages:

o Inexpensive and easy to install.

o Flexible and suitable for short to medium distances.

 Disadvantages:

o Susceptible to electromagnetic interference (especially UTP).

o Limited bandwidth and distance compared to coaxial or fiber optics.

2. Coaxial Cable

 Description: Central conductor wire surrounded by insulation, metallic shielding, and outer
insulation.
 Use Case: Commonly used for cable TV and early Ethernet networks.

 Advantages:

o Better shielding against electromagnetic interference than twisted pair.

o Higher bandwidth and longer distance capabilities.

 Disadvantages:

o Bulkier and less flexible than twisted pair.

o More expensive and challenging to install and maintain.

3. Fiber Optic Cable

 Description: Transmits data as light pulses through strands of glass or plastic fibers.

 Types:

o Single-mode Fiber: For long distances, ideal for telecom and internet backbones.

o Multi-mode Fiber: Shorter distances, commonly used in local networks.

 Advantages:

o Extremely high bandwidth and data transmission speed.

o Immune to electromagnetic interference.

o Suitable for long-distance communication.

 Disadvantages:

o Expensive and fragile.

o Complex installation and maintenance.

--------------------------------------------------------------------------------------------------------------------------------------

1. Connection-Oriented Service

 Definition: A communication service where a connection is established between the sender and receiver before any data is transmitted.

 Characteristics:

o Involves a handshake or connection setup before data transfer.

o Guarantees in-order delivery of data.

o Common in protocols like TCP (Transmission Control Protocol).

2. Six Service Primitives

 Request: The service user requests the service provider to perform an operation.

 Indication: The service provider indicates an event to the service user.


 Response: The service user responds to an indication or request.

 Confirm: The service provider confirms the successful execution of a requested operation.

 Abort: The operation or connection is forcibly terminated.

 Rejection: A service request is denied by the provider.

3. Connectionless Service

 Definition: A communication service where data is sent without establishing a dedicated connection.

 Characteristics:

o No connection setup is required before sending data.

o Each packet is sent independently and may arrive out of order.

o Common in protocols like UDP (User Datagram Protocol).

--------------------------------------------------------------------------------------------------------------------------------------

1. Layering in Computer Networks

 Definition: The process of dividing a network architecture into different layers, each
responsible for specific functions.

 Purpose: Simplifies design, implementation, and troubleshooting by breaking down complex processes into smaller, manageable tasks.

 Common Models: OSI (Open Systems Interconnection) model and TCP/IP model.

2. Layered Model

 Definition: A conceptual framework that standardizes network functions into different layers.

 Goal: Enables interoperability between different devices and technologies, while isolating
each layer’s responsibilities.

 Common Models: OSI and TCP/IP are the most widely used layered models.

3. OSI Model (Open Systems Interconnection Model)

 Definition: A reference model that standardizes the functions of communication systems into
7 layers, from physical transmission to application.

 Purpose: To guide product development and inter-network communication standards.

 7 Layers:
4. 7 Layers of OSI Model

1. Physical Layer

o Function: Defines the hardware elements for transmission and reception of raw data
bits over a physical medium (e.g., cables, radio waves).

o Example: Network interface cards (NICs), cables, hubs.

2. Data Link Layer

o Function: Provides error-free transfer of data frames between two devices on the
same network. It manages access to the physical medium and handles MAC
addressing.

o Example: Ethernet, Wi-Fi.

3. Network Layer

o Function: Responsible for routing data packets between devices across different
networks. Handles logical addressing and packet forwarding.

o Example: IP (Internet Protocol), Routers.

4. Transport Layer

o Function: Ensures reliable data transfer between end systems, handles flow control,
error correction, and segmentation.

o Example: TCP (Transmission Control Protocol), UDP (User Datagram Protocol).

5. Session Layer

o Function: Manages sessions or connections between applications. It establishes, maintains, and terminates communication sessions.

o Example: NetBIOS, RPC (Remote Procedure Call).

6. Presentation Layer

o Function: Translates data formats between the application and transport layers. It
handles encryption, compression, and translation of data formats.

o Example: SSL/TLS, JPEG, ASCII.

7. Application Layer

o Function: Interfaces with end-user applications to provide network services like file
transfer, email, and web browsing.

o Example: HTTP, FTP, SMTP, DNS.

1. TCP/IP Model

 Definition: A networking model that guides communication between systems on a network. It is the basis for the Internet.
 Developed by: The Department of Defense Advanced Research Projects Agency (DARPA) in
the 1970s.

 Layers: The TCP/IP model has 4 layers, each responsible for specific networking functions.

2. DARPA (Defense Advanced Research Projects Agency)

 Role in TCP/IP: DARPA funded the research and development that led to the creation of the
TCP/IP protocol suite.

 Impact: The model was originally designed for robust, fault-tolerant communication for
military networks and later became the foundation for the modern Internet.

3. Application Layer

 Function: Responsible for enabling communication between end-user applications and the
network.

 Key Services:

o Defines protocols that applications use for data communication (e.g., HTTP, FTP,
SMTP).

o Provides services such as email, file transfer, and remote login.

 Example Protocols: HTTP, FTP, DNS, SMTP.

4. Transport Layer

 Function: Ensures reliable data transfer between devices and controls the flow of data
between hosts.

 Key Responsibilities:

o Segmentation and reassembly of data.

o Error detection and correction.

o Flow control and congestion management.

 Protocols:

o TCP (Transmission Control Protocol): Provides reliable, connection-oriented communication.

o UDP (User Datagram Protocol): Provides connectionless, faster communication without error checking.

5. Internet Layer

 Function: Handles logical addressing, routing, and forwarding of packets across networks.
 Key Responsibilities:

o Routing packets from the source to the destination across different networks.

o IP addressing and addressing schemes.

 Protocols:

o IP (Internet Protocol): Responsible for logical addressing and packet forwarding.

o ICMP (Internet Control Message Protocol): Used for diagnostic and error reporting.

o ARP (Address Resolution Protocol): Resolves IP addresses to MAC addresses.

6. Network Access Layer (Link Layer)

 Function: Defines the protocols and standards for communication over physical networks,
handling the transmission of raw data between devices on the same local network.

 Key Responsibilities:

o Data encapsulation for transmission over physical media.

o Error detection and handling at the physical level.

 Protocols:

o Ethernet: Defines standards for data transmission over wired local area networks
(LAN).

o Wi-Fi: Defines standards for wireless LAN communication.

o PPP (Point-to-Point Protocol): Used for direct communication between two devices
over serial links.

Application Layer Protocols

 HTTP (Hypertext Transfer Protocol): Used for transferring web pages over the internet.

 TELNET: Allows remote login to other computers over a network.

 FTP (File Transfer Protocol): Used for transferring files between a client and server.

 SMTP (Simple Mail Transfer Protocol): Responsible for sending emails.

 SNMP (Simple Network Management Protocol): Used for managing and monitoring
network devices.

 POP3 (Post Office Protocol 3): Retrieves emails from a server to a client.

 IMAP (Internet Message Access Protocol): Retrieves and manages emails from a server.

 BOOTP (Bootstrap Protocol): Used to automatically assign an IP address to a network device during boot-up.

Transport Layer Protocols


 TCP (Transmission Control Protocol): A reliable, connection-oriented protocol that ensures
error-free, ordered delivery of data.

 UDP (User Datagram Protocol): A connectionless protocol that provides fast but unreliable
data transmission.

 SCTP (Stream Control Transmission Protocol): A transport protocol that supports multi-
homing and message-oriented communication. It combines features of both TCP and UDP.

Internet Layer Protocols

 IP (Internet Protocol): Responsible for logical addressing and routing of packets between
devices across networks.

 ICMP (Internet Control Message Protocol): Used for error messages and network diagnostics
(e.g., ping).

 IGMP (Internet Group Management Protocol): Manages multicast groups for IP networking.

 ARP (Address Resolution Protocol): Resolves IP addresses to MAC (hardware) addresses.

 RARP (Reverse Address Resolution Protocol): Used to map a known MAC address to an IP
address (older protocol, replaced by DHCP).

Network Access Layer Protocols

 Ethernet: A protocol used in LANs for packet-switched communication over wired networks.

 Packet Radio: A communication protocol used in radio systems to send and receive data
packets.

World Wide Web (WWW) Protocols

 HTTPS (Hypertext Transfer Protocol Secure): A secure version of HTTP that encrypts data
during transfer using SSL/TLS to ensure privacy and security.

Web Caching: Proxy Server

 Definition: A proxy server acts as an intermediary between a client (user) and the server,
retrieving data from the server on behalf of the client and caching it for future use.

 Purpose:

o Reduces server load by storing copies of frequently accessed content.

o Improves response time by serving cached data rather than fetching it from the
original server.

o Can filter requests, improve security, and provide anonymity.


Proxy Server Location

 Location in the Network:

o Forward Proxy: Positioned between a client and the server. It forwards client
requests to the server and caches the responses.

o Reverse Proxy: Positioned between the server and client. It handles client requests
on behalf of the server and can cache server responses.

 Common Uses:

o Forward Proxy: Used by clients to access resources on the internet through a proxy.

o Reverse Proxy: Used by websites to protect servers, load balance traffic, and cache
content for faster access.

Cache Update

 Definition: The process by which a cached copy of content is refreshed to ensure that it
remains up-to-date.

 Methods of Cache Update:

1. Time-based Expiration: Cached data expires after a certain period, prompting a refresh from the origin server.

2. Cache-Control Headers: Servers send directives to cache proxies, specifying how long to keep data before it’s considered stale.

3. Conditional Requests: A proxy may request only updates for data that has changed,
reducing unnecessary downloads.

4. Manual Refresh: In some cases, cache administrators can manually refresh or invalidate cached content.

Client-Server Model

 Definition: A network architecture where the client makes requests for services and
resources, while the server provides them.

 Roles:

o Client: The requesting entity, typically a user or an application that consumes services provided by the server.

o Server: The providing entity, which offers services, resources, or data in response to
client requests.

 Characteristics:

o Clients communicate with servers using predefined protocols (e.g., HTTP, FTP).

o Servers may handle multiple clients simultaneously.


o The server is usually a more powerful machine with centralized resources, while
clients are typically less powerful.

Socket

 Definition: An endpoint for communication between two machines or processes over a network.

 Function: Sockets allow applications to communicate over a network by sending and receiving data packets.

 Types:

o Stream Sockets (TCP): Provide reliable, connection-oriented communication.

o Datagram Sockets (UDP): Provide connectionless, unreliable communication.

 Components:

o A socket is defined by a combination of an IP address and a port number.

Socket Address

 Definition: A combination of an IP address and a port number that uniquely identifies a network socket.

 Components:

o IP Address: Specifies the host machine on the network.

o Port Number: Identifies the specific service or application on the host.

 Example: A socket address for a web server might look like 192.168.1.10:80, where
192.168.1.10 is the IP address and 80 is the port number.
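
A minimal Python sketch of this idea (the local address 127.0.0.1:8080 is purely illustrative): the operating system identifies the listening socket by exactly this IP address + port pair.

import socket

# A socket address is the (IP address, port) pair the socket is bound to.
# 127.0.0.1:8080 is a hypothetical local address used only for illustration.
server_addr = ("127.0.0.1", 8080)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # TCP (stream) socket
sock.bind(server_addr)                                      # attach it to IP + port
sock.listen(1)
print("listening on %s:%d" % sock.getsockname())            # e.g. 127.0.0.1:8080
sock.close()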

Ports

 Definition: A port is a 16-bit number that identifies a specific process or service running on a
device.

 Types of Ports:

o Well-known Ports (0-1023): Reserved for common protocols like HTTP (port 80),
HTTPS (port 443), FTP (port 21), etc.

o Registered Ports (1024-49151): Used by applications that are not as standardized as well-known ports but still have a specific registry.

o Dynamic/Private Ports (49152-65535): Used for ephemeral or temporary connections, often by client-side applications.

 Usage:
o The port allows multiple services to run on the same machine, each listening on
different ports.

==================================================================================
-------------------------------------------------------------------------------------------------------------------------------------

Module 02 - Application Layer

Standard Application Layer Protocols

 Definition: Well-established protocols that are widely used for network communication
across the internet.

 Examples:

o HTTP (Hypertext Transfer Protocol): Used for transferring web pages and data
between a client and a server.

o FTP (File Transfer Protocol): Facilitates the transfer of files between client and server.

o SMTP (Simple Mail Transfer Protocol): Used for sending email.

o DNS (Domain Name System): Resolves domain names to IP addresses.

o IMAP (Internet Message Access Protocol): Retrieves and manages email from a
server.

o POP3 (Post Office Protocol 3): Retrieves email from a server but with fewer features
than IMAP.

Nonstandard Application Layer Protocols

 Definition: Protocols that are not as widely adopted or standardized, often used for
specialized or proprietary applications.

 Examples:

o SNMP (Simple Network Management Protocol): Used for managing and monitoring
network devices, but can be replaced by more modern protocols.

o XMPP (Extensible Messaging and Presence Protocol): Used for real-time communication like chat services.

o Telnet: Provides terminal emulation for remote login, though largely replaced by SSH
(Secure Shell) for security reasons.

o BOOTP (Bootstrap Protocol): An early protocol used for assigning IP addresses, now
largely replaced by DHCP.

Traditional Paradigm - Client-Server Paradigm

 Definition: A model where the client requests services or resources from a server.
 Characteristics:

o Centralized Control: Servers manage resources and provide services to clients.

o Scalability Issues: Servers can become overloaded as more clients connect, requiring
additional hardware or load balancing.

o Example: Web browsing, where a browser (client) requests data from a web server.

New Paradigm - Peer-to-Peer (P2P)

 Definition: A decentralized model where each device (peer) can act as both a client and a
server, sharing resources and services directly with one another.

 Characteristics:

o Decentralized Control: All peers have equal authority, and there is no central server.

o Resource Sharing: Peers share resources like processing power, bandwidth, and
storage.

o Scalability: Can easily scale since new peers can join without putting strain on a
central server.

o Example: File sharing applications (e.g., BitTorrent) or decentralized communication systems.

Mixed Paradigm

 Definition: A hybrid model that combines elements of both the Client-Server and Peer-to-
Peer paradigms.

 Characteristics:

o Flexibility: Can take advantage of both centralized control and decentralized sharing.

o Examples:

 Cloud Computing: A cloud service can provide client-server-like centralized services, but users can share resources (e.g., file sharing and collaboration).

 Hybrid P2P systems: Some applications use centralized servers for coordination but allow direct communication between peers (e.g., Skype for voice calls).

--------------------------------------------------------------------------------------------------------------------------------------

Electronic Mail (Email)

 Definition: A method of exchanging digital messages over the internet.

 Components:

o Sender: The individual or system sending the email.


o Recipient: The individual or system receiving the email.

o Email Server: Manages the sending, receiving, and storage of emails.

o Protocols:

 SMTP (Simple Mail Transfer Protocol): Used to send email messages to an email server.

 POP3 (Post Office Protocol 3): Retrieves emails from the server (downloads
messages).

 IMAP (Internet Message Access Protocol): Allows email access and management directly on the server (messages remain on the server).

 Mail Clients: Software used by users to manage and send emails, such as Outlook or Gmail.

Local Logging

 Definition: The process of recording system or application events locally (on the machine
where the event occurred).

 Purpose:

o Used for troubleshooting, tracking activity, and auditing purposes.

 Types of Logs:

o System Logs: Tracks system-level events like booting, shutdowns, errors.

o Application Logs: Track events specific to applications, such as errors or transactions.

 Storage: Logs are stored locally on the device’s disk.

Remote Logging

 Definition: The practice of sending logs generated by a system or application to a remote server for storage and analysis.

 Purpose:

o Provides centralized log management.

o Helps with monitoring multiple systems across different locations.

 Protocols:

o Syslog: A widely used protocol for sending logs from devices to a central logging
server.

o Remote Syslog Servers: These servers collect and store logs from multiple devices
across a network.
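
As an illustration, Python's standard logging library can forward records to a remote syslog collector over the network; the host name below is a hypothetical placeholder for a real log server.

import logging
import logging.handlers

# "logs.example.com" is a placeholder; point this at a real syslog collector.
handler = logging.handlers.SysLogHandler(address=("logs.example.com", 514))

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# This record travels over the network (UDP port 514 by default) to the
# central server instead of being written only to a local file.
logger.info("user login succeeded")
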
Concept of NVT (Network Virtual Terminal)

 Definition: A concept used in the Telnet protocol for providing a standard communication
interface between devices.

 Purpose:

o Allows different types of systems (e.g., mainframes, PCs) to communicate as if they were connected to the same terminal, regardless of their internal hardware and software differences.

 NVT in Telnet:

o Telnet uses the NVT concept to translate different control characters (e.g., carriage
returns, line feeds) to a common format.

o This allows remote terminals and host systems to understand and interpret each
other's input/output in a standard way.

--------------------------------------------------------------------------------------------------------------------------------------

DNS (Domain Name System)

 Definition: A hierarchical system that translates human-readable domain names (e.g., www.example.com) into IP addresses (e.g., 192.168.1.1) that computers can understand.

 Purpose:

o Enables users to access websites and services using easy-to-remember domain names rather than numerical IP addresses.

o Helps in routing traffic to the correct destination on the internet.

 Components:

o Domain Names: Organized into a hierarchy, such as .com, .org, etc.

o DNS Resolver: A server that queries DNS records for resolving domain names to IP
addresses.

o DNS Records: Various types of records (A, AAAA, MX, CNAME, etc.) that store
information like IP addresses, mail servers, etc.

 Process:

o When you type a domain name into a browser, the DNS resolver looks up the
associated IP address and returns it, enabling the browser to connect to the correct
server.
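
For example, the resolution step can be observed from Python by asking the system's resolver for the addresses behind a name (www.example.com is used purely as an illustrative domain):

import socket

# Ask the local DNS resolver to translate the name into IP addresses.
infos = socket.getaddrinfo("www.example.com", 80, proto=socket.IPPROTO_TCP)

for family, _type, _proto, _canonname, sockaddr in infos:
    print(family.name, sockaddr[0])   # address family and the resolved IP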

Compression

 Definition: The process of reducing the size of data to save storage space or transmission
time.

 Types:
o Lossless Compression: Reduces file size without losing any data, allowing the original
data to be fully restored.

o Lossy Compression: Reduces file size by removing some data, resulting in a loss of
quality, which is often imperceptible to the human senses.

Lossless Compression

 Definition: A type of compression where the original data can be perfectly reconstructed
from the compressed data.

 Characteristics:

o No Loss of Quality: The decompressed data is identical to the original data.

o Use Cases: Ideal for text, code, or images where every bit of information is crucial.

 Examples:

o ZIP files: For compressing documents or software without data loss.

o PNG (Portable Network Graphics): A lossless image format.

o FLAC (Free Lossless Audio Codec): A lossless audio format.
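
A small Python sketch of lossless compression using the standard zlib module: the decompressed bytes are bit-for-bit identical to the original.

import zlib

original = b"lossless compression preserves every original byte " * 50

compressed = zlib.compress(original)       # smaller representation
restored = zlib.decompress(compressed)     # perfectly reconstructed

print(len(original), "->", len(compressed))
assert restored == original                # nothing was lost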

Lossy Compression

 Definition: A type of compression where some data is discarded, resulting in a smaller file
size at the cost of some loss in quality.

 Characteristics:

o Quality Loss: The compressed data is not identical to the original data, but the loss is
usually imperceptible to humans.

o Use Cases: Often used for audio, video, and images where perfect quality isn't
necessary and file size reduction is a priority.

 Examples:

o JPEG (Joint Photographic Experts Group): A lossy image format widely used for
photographs.

o MP3 (MPEG Audio Layer III): A lossy audio format commonly used for music files.

o MPEG (Moving Picture Experts Group): A lossy video compression format.

--------------------------------------------------------------------------------------------------------------------------------------

Cross-Site Scripting (XSS)

 Definition: A vulnerability that allows attackers to inject malicious scripts into web pages
viewed by other users.

 Types:
o Stored XSS (Persistent):

 Malicious script is permanently stored on the target server (e.g., in a database or message board).

 Triggered every time a user views the compromised page.

o Reflected XSS (Non-Persistent):

 Malicious script is reflected off the web server and executed immediately
when the victim clicks on a malicious link.

 The script is not stored permanently on the server.
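
A common mitigation for both XSS variants is output encoding: untrusted input is escaped before it is placed into a page, so the browser displays it as text instead of executing it. A minimal Python sketch:

import html

user_input = '<script>alert("xss")</script>'   # untrusted, attacker-controlled

safe = html.escape(user_input)                 # markup becomes harmless text
print(safe)   # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;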

SQL Injection

 Definition: A vulnerability that allows attackers to manipulate SQL queries by injecting malicious SQL code through user inputs, potentially leading to unauthorized access or modification of the database.

 Impact:

o Can lead to data breaches, unauthorized access to sensitive information, or even complete database compromise.

 Prevention: Use parameterized queries, prepared statements, and input validation to prevent injection attacks.
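
A minimal sketch of the parameterized-query defence, using Python's built-in sqlite3 module and an in-memory database; the placeholder keeps user input as pure data, so it cannot change the structure of the SQL statement.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "alice' OR '1'='1"   # classic injection payload supplied by the user

# The ? placeholder binds the value safely instead of splicing it into SQL.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
print(rows)   # [] -- the payload matches no user and cannot alter the query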

LDAP Injection (Lightweight Directory Access Protocol Injection)

 Definition: A vulnerability where an attacker can inject malicious LDAP queries into user
input fields, leading to unauthorized access to LDAP directories.

 Impact: Allows attackers to bypass authentication or query sensitive directory information.

 Prevention: Validate and sanitize input, use parameterized queries for LDAP requests.

Cross-Site Request Forgery (CSRF)

 Definition: A vulnerability that allows attackers to trick authenticated users into performing
unwanted actions on a website or web application (e.g., changing account settings,
transferring money).

 Mechanism: The attacker forces the victim to send an authenticated request to a web server
without their consent.

 Prevention: Use anti-CSRF tokens, require re-authentication for sensitive operations, and
validate requests to ensure they are legitimate.

Session Hijacking
 Definition: An attack where an attacker steals or intercepts a valid session token, gaining
unauthorized access to a user's session.

 Impact: Allows attackers to impersonate legitimate users and perform actions on their
behalf.

 Prevention: Use secure cookies (with HttpOnly and Secure flags), enable session expiration,
and use encryption (e.g., HTTPS) to protect session tokens.

Cookie Poisoning

 Definition: An attack where an attacker manipulates the contents of a cookie to alter the
behavior of a web application or gain unauthorized access.

 Impact: Can lead to privilege escalation or session hijacking.

 Prevention: Use encryption to protect cookie data, and implement secure cookie attributes
(e.g., Secure, HttpOnly).
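
A brief sketch of the Secure and HttpOnly attributes using Python's http.cookies module; the token value is purely illustrative.

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-random-token"   # illustrative value only
cookie["session"]["httponly"] = True        # hidden from page scripts (limits XSS theft)
cookie["session"]["secure"] = True          # sent only over HTTPS

print(cookie.output())   # the Set-Cookie header a server would emit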

DDoS (Distributed Denial of Service)

 Definition: A type of attack where multiple compromised systems are used to flood a target
server with traffic, overwhelming it and causing service disruption or downtime.

 Impact: Can bring down websites or services by consuming all network or server resources.

 Prevention: Use rate limiting, firewalls, DDoS mitigation services, and traffic analysis tools to
detect and block malicious traffic.

------------------------------------------------------------------------------------------

Module 03 Transport Layer

Port Number

 Definition: A port number is a 16-bit integer that identifies a specific service or process on a
device, allowing multiple services to run on the same machine.

 Range: 0 to 65535.

o Well-known ports (0-1023): Reserved for commonly used services (e.g., HTTP on
port 80, HTTPS on port 443).

o Registered ports (1024-49151): Used by applications that are not as standardized as well-known services.

o Dynamic/Private ports (49152-65535): Temporary ports used for ephemeral connections, often for client-side applications.

IANA Port Number Ranges


 IANA (Internet Assigned Numbers Authority) is responsible for assigning port numbers.

 Port Number Ranges:

o 0-1023 (Well-known ports): Reserved for system and widely-used protocols (e.g.,
FTP, HTTP).

o 1024-49151 (Registered ports): Used by software applications and services.

o 49152-65535 (Dynamic or private ports): Typically used for client-side connections or ephemeral communication.

IP Addresses vs. Port Numbers

 IP Address:

o Identifies a specific device on a network, such as a computer or server.

o Represents the network layer address that helps route data to the correct host.

 Port Number:

o Identifies a specific application or service on a device.

o Allows multiple applications to use the same IP address without conflict by differentiating them via ports (e.g., web traffic uses port 80, FTP uses port 21).

Multiplexing

 Definition: The process of combining multiple communication streams (from different applications or services) into a single transmission channel.

 Purpose: Allows efficient utilization of network resources by sending multiple data streams
simultaneously over the same network connection.

 Example: Different web browsers accessing multiple websites at the same time through a
single IP address and port.

Demultiplexing

 Definition: The process of separating and directing received data to the appropriate
application or service based on port numbers.

 Purpose: Ensures that data sent over the network is routed to the correct process or service
on the receiving machine.

 Example: A server receiving data on port 80 (HTTP) sends it to a web server process, while
data on port 25 (SMTP) is sent to an email server.
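
Conceptually, demultiplexing is a lookup from the destination port of an arriving segment to the local process bound to that port. The operating system performs this for real sockets; the port-to-process table below is a hypothetical Python sketch of the idea.

# Hypothetical table of which local process is bound to which port.
listeners = {
    80: "web server process",
    25: "mail server process",
}

def demultiplex(dest_port: int) -> str:
    # Hand the payload to whichever process owns the destination port.
    return listeners.get(dest_port, "no listener: discard / send ICMP error")

print(demultiplex(80))    # web server process
print(demultiplex(4444))  # no listener: discard / send ICMP error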

Transport Layer - Reliable vs. Unreliable


 Reliable Transport (TCP):

o Guarantees the delivery of data in the correct order.

o Ensures no data is lost during transmission and provides error correction.

o Uses acknowledgments (ACKs) and retransmission for lost packets.

o Example: TCP (Transmission Control Protocol).

 Unreliable Transport (UDP):

o Does not guarantee delivery, order, or error checking.

o Used in applications where speed is prioritized over reliability (e.g., streaming).

o Example: UDP (User Datagram Protocol).

Error Control

 Definition: Mechanisms used to detect and correct errors in transmitted data.

 Types:

o Error Detection: Detects if an error has occurred in the transmitted data (e.g.,
checksums, cyclic redundancy checks).

o Error Correction: Attempts to correct errors in the data without needing to retransmit (e.g., forward error correction).

 In Transport Layer (TCP):

o Checksum: Used to detect errors in TCP and UDP packets.

o Retransmission: In TCP, if a packet is lost or corrupted, it is retransmitted to ensure reliable communication.
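
The checksum used by TCP, UDP, and IP is the standard Internet checksum: a one's-complement sum of 16-bit words. A short Python sketch of the calculation:

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, folded and inverted."""
    if len(data) % 2:            # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

print(hex(internet_checksum(b"example payload")))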

TCP Header

 Definition: The TCP header is used to encapsulate the data for transmission in the
Transmission Control Protocol (TCP), which is a reliable, connection-oriented protocol.

 Size: Typically 20 bytes (without options) but can be larger if options are included.

 Structure:

1. Source Port (16 bits): The port number on the sender's side.

2. Destination Port (16 bits): The port number on the receiver's side.

3. Sequence Number (32 bits): Identifies the order of the data bytes for proper
reassembly.

4. Acknowledgment Number (32 bits): Indicates the next expected byte (used for
acknowledgment).

5. Data Offset (4 bits): Specifies the size of the TCP header.


6. Reserved (3 bits): Reserved for future use, should be set to 0.

7. Flags (9 bits):

 URG (1 bit): Urgent pointer field is valid.

 ACK (1 bit): Acknowledgment field is valid.

 PSH (1 bit): Push Function – the receiver should pass the data to the
application immediately.

 RST (1 bit): Reset the connection.

 SYN (1 bit): Synchronize sequence numbers (used for connection establishment).

 FIN (1 bit): No more data from the sender (used for connection termination).

8. Window Size (16 bits): Specifies the size of the sender's receive window (flow
control).

9. Checksum (16 bits): Used for error-checking of the header and data.

10. Urgent Pointer (16 bits): Points to the last urgent byte if the URG flag is set.

11. Options (Variable length): Can include various options like maximum segment size,
timestamp, etc.

12. Data (Variable length): The actual data being transmitted.

UDP Header

 Definition: The UDP header is used in the User Datagram Protocol (UDP), which is an
unreliable, connectionless protocol.

 Size: 8 bytes (fixed size).

 Structure:

1. Source Port (16 bits): The port number on the sender's side.

2. Destination Port (16 bits): The port number on the receiver's side.

3. Length (16 bits): The total length of the UDP header and data (minimum is 8 bytes).

4. Checksum (16 bits): Used for error-checking of the header and data (optional in IPv4
but mandatory in IPv6).

5. Data (Variable length): The actual data being transmitted.

Key Differences between TCP Header and UDP Header:

 Size: The TCP header is much larger than the UDP header due to additional fields for
reliability, error control, and connection management.
 Reliability Features: TCP includes sequence numbers, acknowledgment numbers, flags for
connection setup and teardown, and a window size for flow control, while UDP lacks these
features and is simpler.

 Error Checking: Both headers contain a checksum, but UDP does not offer error recovery or
retransmission mechanisms, while TCP does.
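
Because the UDP header is a fixed 8 bytes, it is easy to build and parse; a small Python sketch with hypothetical field values:

import struct

# Source port 5353, destination port 53, length 12 (8-byte header plus a
# 4-byte payload), checksum 0 (permitted in IPv4). All values are illustrative.
header = struct.pack("!HHHH", 5353, 53, 12, 0)
datagram = header + b"data"

src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
print(src, dst, length, checksum, datagram[8:])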

Connectionless Services

 Definition: A type of communication service where data is sent from the source to the
destination without establishing a dedicated end-to-end connection.

 Characteristics:

o No Connection Setup: Data is sent without establishing or maintaining a session between sender and receiver.

o Unreliable: The service does not guarantee delivery, ordering, or error correction of
data packets.

o Examples:

 UDP (User Datagram Protocol): A prime example of a connectionless service, where data is sent as independent packets, called datagrams.

 Use Cases:

o Used in scenarios where speed is crucial, and some data loss is acceptable, such as
streaming, online gaming, or DNS lookups.
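
A minimal sketch of the connectionless style using Python UDP sockets on the local loopback interface (port 9999 is an arbitrary choice): no handshake takes place, the sender simply fires a datagram at an address.

import socket

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # receiver
rx.bind(("127.0.0.1", 9999))                             # arbitrary local port

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # sender
tx.sendto(b"hello", ("127.0.0.1", 9999))                 # no connection setup

data, addr = rx.recvfrom(1024)
print(data, "from", addr)

tx.close()
rx.close()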

Flow and Error Control

 Flow Control:

o Definition: Mechanisms to manage the rate of data transmission between sender and receiver to prevent congestion and ensure that the receiver can handle the incoming data.

o Methods:

 Windowing: Used in TCP (Transmission Control Protocol), where the sender is allowed to send a certain number of unacknowledged packets before needing an acknowledgment.

 Rate Limiting: The sender adjusts its data rate based on the receiver’s
capacity to process the data.

 Error Control:

o Definition: Mechanisms to detect and correct errors in transmitted data to ensure data integrity.

o Methods:
 Checksums: A simple method where a value is calculated from the data and
sent along with it; the receiver can recalculate and compare to detect errors.

 Retransmission: In case of errors (such as packet loss or corruption), the data is retransmitted (used in reliable protocols like TCP).

 Acknowledgments: The receiver sends back an acknowledgment for successfully received packets, prompting the sender to send the next one.

 Application in TCP:

o Both flow and error control are implemented in TCP to provide reliable data transfer,
using mechanisms like sliding windows and retransmission for lost packets.

Encapsulation and Decapsulation

 Encapsulation:

o Definition: The process of adding headers (and sometimes trailers) to data as it moves down the layers of the OSI or TCP/IP model. Each layer adds its own header to the data received from the layer above it.

o Process:

 At the application layer, data is generated.

 The transport layer adds a transport header (e.g., TCP/UDP header).

 The network layer adds an IP header.

 The data link layer adds a link header (e.g., Ethernet header).

o Example: A user sends an email; at each layer, the message is encapsulated with
protocol-specific headers before transmission.

 Decapsulation:

o Definition: The reverse process of encapsulation. As data moves up the layers at the
receiver end, each layer removes the corresponding header and passes the data to
the next layer.

o Process:

 At the physical layer, raw bits are received.

 The link layer removes the link header.

 The network layer removes the IP header.

 The transport layer removes the TCP/UDP header.

 The application layer processes the actual data.

o Example: When the email reaches the recipient, the data is decapsulated at each
layer.
Queuing

 Definition: The process of managing and storing packets in buffers or queues when they
cannot be immediately transmitted.

 Types:

o Input Queuing: Packets are held in a queue before being processed by the router or
switch.

o Output Queuing: After processing, packets are queued before being sent to the next
hop or destination.

 Purpose:

o Ensures that packets are transmitted in an orderly manner.

o Prevents packet loss in the case of network congestion.

 Challenges:

o Queue Overflow: If the queue becomes too full, packets may be dropped.

o Latency: Queuing introduces delay, especially in high-traffic networks.

TCP Services

 Definition: TCP (Transmission Control Protocol) provides a set of services that ensure
reliable, connection-oriented communication between devices on a network.

 Characteristics:

o Reliable Delivery: Guarantees that data is delivered to the destination in the correct
order without errors.

o Flow Control: Manages the rate of data transmission to prevent congestion.

o Error Control: Detects and retransmits lost or corrupted packets.

o Congestion Control: Adjusts the data sending rate to avoid overwhelming the
network.

o Connection-Oriented: Establishes a reliable connection before data transmission.

Process-to-Process Communication

 Definition: Refers to the communication between two specific processes running on different devices, using a well-defined interface provided by the transport layer (TCP).

 How it works: TCP uses port numbers to differentiate between different processes running
on the same device. A process on the sender's side communicates with a process on the
receiver's side using the respective source and destination ports.
 Example: A web browser (using an ephemeral client port) requests data from a web server listening on port 80; the port numbers ensure the data is delivered to the correct application on each host.

Stream Delivery Service

 Definition: In TCP, data is transmitted as a continuous stream of bytes, ensuring that the data
is delivered to the receiving application exactly as it was sent, with no gaps or reordering.

 How it works: TCP maintains the order of data and provides a stream that abstracts the
underlying packetization, allowing the application to read data as if it were a continuous
stream.

 Example: When you load a webpage, the HTML, images, and other data come in as a
continuous stream, which is reassembled and processed by the browser.

Sending and Receiving Buffers

 Sending Buffer:

o Temporarily stores data that is ready to be sent, allowing the sender to continue
processing data while the network is being used to transmit.

o The data in the buffer is segmented and sent over the network in small chunks called
TCP segments.

 Receiving Buffer:

o Stores incoming data temporarily as it arrives, ensuring that data is passed to the
receiving application in the correct order.

o If data arrives out of order, TCP reorders the segments before delivering them to the
application.

 Buffering Role: Buffers ensure that TCP can handle variations in network speed, allowing
data to be sent and received smoothly even when there are delays or congestion.

Segments

 Definition: A TCP segment is the unit of data transmitted over the network in TCP
communications.

 Structure:

o Each TCP segment contains a header (with control information like sequence
numbers, acknowledgment, etc.) and a payload (the actual data being transmitted).

 Purpose: Segmentation allows large amounts of data to be broken down into smaller,
manageable chunks for transmission, with each segment containing information needed for
error checking, sequencing, and acknowledgment.
Full Duplex Communication

 Definition: Full duplex means that data can be transmitted and received simultaneously in
both directions between the sender and receiver.

 How it works: In TCP, both the sender and receiver can send and receive data at the same
time, allowing for continuous and simultaneous communication without waiting for one side
to finish.

 Example: A telephone call, where both parties can speak and listen at the same time.

Connection-Oriented Service

 Definition: TCP provides a connection-oriented service, meaning a reliable connection is established before any data is transmitted.

 How it works:

o Three-way Handshake: A connection is first established using a three-step process (SYN, SYN-ACK, ACK) to synchronize both sides before data transmission begins.

o Data Transmission: Once the connection is established, data can flow, and the
receiver sends acknowledgments (ACKs) to confirm successful receipt.

o Connection Teardown: After data transfer, the connection is closed through a four-step process (FIN, ACK, FIN, ACK).

 Purpose: This ensures reliable data transfer and proper management of the data flow.

Reliable Service

 Definition: TCP provides a reliable service by ensuring that all data is transmitted and
received without errors, in the correct order, and with no data loss.

 Features ensuring reliability:

o Acknowledgments (ACKs): The receiver sends acknowledgments back to the sender for each successfully received packet.

o Retransmission: If the sender doesn't receive an acknowledgment within a certain time, it retransmits the data.

o Sequence Numbers: Each segment has a sequence number to keep track of the
order of data, ensuring correct reassembly at the destination.

o Checksums: Used for error detection to ensure that the data received is the same as
what was sent.

 Example: A file transfer where TCP ensures that every byte of data is received by the
destination application correctly and in the right order.
TCP Services

1. Numbering System

 Definition: TCP uses a sequence number for each byte of data to ensure proper ordering and
tracking of packets.

 Purpose: Ensures that the data is delivered to the application in the correct order, even if the
packets arrive out of sequence.

 Acknowledgments: The receiver sends back an acknowledgment (ACK) with the next
expected sequence number.

2. Flow Control

 Definition: Mechanism to prevent the sender from overwhelming the receiver with too
much data at once.

 How it works: TCP uses a sliding window to manage how much data the sender can send
before waiting for an acknowledgment from the receiver.

 Window Size: The receiver advertises a window size that specifies the amount of data it can
handle at once, which the sender uses to control its transmission rate.
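
A toy Python sketch of the idea (all sizes are made-up numbers): the sender never lets the amount of unacknowledged data exceed the receiver's advertised window.

data = b"x" * 10000      # application data waiting to be sent
window = 4000            # receiver-advertised window, in bytes (illustrative)
mss = 1460               # maximum segment size, in bytes (illustrative)

sent = 0                 # bytes handed to the network so far
acked = 0                # bytes acknowledged by the receiver so far

while acked < len(data):
    # Transmit segments only while unacknowledged data fits in the window.
    while sent < len(data) and (sent - acked) < window:
        seg = min(mss, window - (sent - acked), len(data) - sent)
        sent += seg
    print("bytes in flight:", sent - acked)
    acked = sent         # pretend a cumulative ACK for everything arrives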

3. Error Control

 Definition: Mechanisms to detect and correct errors in transmitted data.

 How it works:

o Checksums: Each TCP segment includes a checksum to detect errors in the header
and data.

o Retransmission: If the sender does not receive an acknowledgment (or if an error is detected), the data is retransmitted.

 Acknowledgments: The receiver sends ACKs for successfully received segments, and the
sender can identify which segments need retransmission.

4. Congestion Control

 Definition: Mechanisms to prevent network congestion by adjusting the sender’s data transmission rate.

 How it works:

o Slow Start: Initially, TCP starts with a small congestion window and gradually
increases it as the connection stabilizes.

o Congestion Window: The sender adjusts its sending rate based on feedback (like
packet loss or delayed acknowledgments) to avoid overwhelming the network.

o Algorithms: TCP uses algorithms such as TCP Reno or TCP Cubic to manage
congestion by monitoring round-trip times and packet loss.
TCP Connection

1. Connection Establishment:

 Three-Way Handshaking: A process used to establish a reliable connection between the sender and receiver in TCP.

1. SYN: The client sends a TCP segment with the SYN (synchronize) flag set to initiate
the connection and synchronize sequence numbers.

2. SYN-ACK: The server responds with a segment containing both SYN and ACK
(acknowledgment) flags to acknowledge the client's request and synchronize
sequence numbers from its side.

3. ACK: The client sends an acknowledgment (ACK) to the server, confirming the receipt
of the SYN-ACK, completing the handshake.

 Result: After the handshake, both sides have synchronized sequence numbers, and the
connection is established for data transfer.
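
From an application's point of view, the whole handshake is hidden inside a single connect() call; a brief Python sketch (example.com:80 is just an illustrative endpoint):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)

# connect() performs the SYN / SYN-ACK / ACK exchange; when it returns,
# both sides are in the ESTABLISHED state and data transfer can begin.
sock.connect(("example.com", 80))
print("connected:", sock.getsockname(), "->", sock.getpeername())
sock.close()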

2. Data Transfer:

 Process:

o Once the connection is established, data can be transmitted between the sender and
receiver.

o TCP Segments: Data is split into smaller TCP segments, each with a sequence
number for proper ordering and acknowledgment.

o Flow Control: The sender respects the receiver’s advertised window size to prevent
overwhelming the receiver.

o Reliability: Each segment is acknowledged by the receiver. If no acknowledgment is received in a given timeout, the sender retransmits the segment.

o Full-Duplex Communication: Data can flow in both directions simultaneously, allowing both sides to send and receive data concurrently.

3. Connection Termination:

 Four-Way Handshaking: A process to gracefully terminate a TCP connection.

1. FIN (Sender): The sender (client or server) sends a segment with the FIN (finish) flag
to indicate that it has no more data to send.

2. ACK (Receiver): The receiver acknowledges the FIN segment by sending an ACK back.

3. FIN (Receiver): The receiver then sends its own FIN segment to indicate it has
finished sending data.

4. ACK (Sender): The sender acknowledges the receiver's FIN segment with an ACK,
completing the termination process.
 Result: After four steps, the connection is fully closed, and resources are released on both
sides.

State Transition Diagram in TCP

TCP connections transition between different states throughout their lifecycle. The State Transition
Diagram represents these changes and interactions, detailing the connection phases like
establishment, data transfer, and termination.

1. Half Close

 Definition: A half-close occurs when one side of the connection finishes sending data, but
the other side can still send data.

 How it works:

o A side sends a FIN signal to indicate it has finished sending data.

o The connection remains open for the other side to continue sending data.

 State Transition:

o One side transitions to the FIN_WAIT_1 state after sending a FIN.

o The other side enters CLOSE_WAIT, where it acknowledges the FIN but keeps the
connection open for its own data.

o The connection is fully closed when both sides have sent their FIN segments.

 Example: A client finishes sending a file to a server and initiates a half-close, allowing the
server to send data back if needed.

2. Simultaneous Open

 Definition: In some cases, both sides of the connection may attempt to establish the
connection at the same time.

 How it works:

o Both the client and the server send SYN segments simultaneously, leading to both
sides entering the SYN_SENT state.

o Both sides then acknowledge each other's SYN segment and transition to the
ESTABLISHED state.

 State Transition:

o Client and server both start in the SYN_SENT state.

o Once both sides acknowledge the SYN from the other, they enter the ESTABLISHED
state, and data transfer begins.

 Example: In some peer-to-peer applications, both parties may attempt to initiate communication at the same time.
3. Simultaneous Close

 Definition: A simultaneous close occurs when both sides of the connection attempt to
terminate the connection at the same time.

 How it works:

o Both sides send FIN segments simultaneously, indicating that neither side has data to
send.

o The connection proceeds through a four-step termination process.

 State Transition:

o Both sides transition to the FIN_WAIT_1 state after sending their FIN.

o When each side receives the other’s FIN, it sends an ACK and moves to the CLOSING state.

o After the ACK for its own FIN arrives, each side enters TIME_WAIT and then reaches the CLOSED state.

 Example: Both parties decide to terminate the connection at the same time, commonly seen
in systems where both sides finish their tasks concurrently.

Summary of Transitions:

 Half Close: One side sends a FIN, while the other side can still send data.

 Simultaneous Open: Both sides initiate the connection at the same time (both send SYN).

 Simultaneous Close: Both sides try to close the connection at the same time (both send FIN).

TCP Flow Control:

 Sliding Window:

o Definition: Mechanism to control the flow of data between sender and receiver.

o How it works: The sender can send a specified amount of data (window size) before
waiting for an acknowledgment from the receiver.

o Purpose: Ensures that the receiver is not overwhelmed by too much data at once,
optimizing the data transfer rate.

Related Flow Control Mechanisms:

1. Silly Window Syndrome:

o Definition: Occurs when small chunks of data are sent repeatedly, causing inefficient
use of the network.

o Cause: The receiver advertises a small window size, prompting the sender to send
very small packets.
o Solution: The sender waits for more data to accumulate before sending, improving
efficiency.

2. Nagle's Algorithm:

o Definition: Optimizes the sending of small packets by combining multiple small writes into one larger packet.

o How it works: Prevents sending small packets (each smaller than the Maximum
Segment Size) and instead sends the data in larger, more efficient segments.

o Benefit: Reduces network congestion and improves efficiency for applications that
send small amounts of data.
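
Applications that need low latency can turn Nagle's algorithm off per socket with the TCP_NODELAY option; a short Python sketch:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Nagle's algorithm is enabled by default; setting TCP_NODELAY to 1
# disables it so small writes are sent immediately.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))   # 1
sock.close()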

3. Clark's Solution:

o Definition: A receiver-side fix for Silly Window Syndrome: the receiver avoids advertising very small windows.

o How it works: The receiver withholds window updates until it can accept a full segment (or a substantial part of its buffer), so the sender is never invited to transmit tiny chunks of data.

o Benefit: Prevents the sender from transmitting very small amounts of data, leading
to better throughput.

4. Delayed Acknowledgment Solution:

o Definition: Delays sending an acknowledgment for a short period to allow for more
efficient data transmission.

o How it works: The receiver waits for a small time before sending an
acknowledgment, allowing it to potentially combine the acknowledgment with data
that needs to be sent.

o Benefit: Reduces the number of ACK packets, optimizing the flow control process.
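
The buffering rule behind Nagle's algorithm (item 2 above) can be summarized in a few lines. The following is a minimal Python sketch of that decision logic only; the segment size, callback, and class name are illustrative assumptions, not part of any real TCP stack:

MSS = 1460  # assumed Maximum Segment Size in bytes

class NagleSender:
    """Illustrative model of Nagle's decision rule; not a real TCP implementation."""

    def __init__(self, send_segment):
        self.send_segment = send_segment      # callback that actually transmits bytes
        self.buffer = bytearray()             # small writes waiting to be coalesced
        self.unacked_small_segment = False    # is an unacknowledged small segment in flight?

    def write(self, data: bytes):
        self.buffer.extend(data)
        # Full segments are always sent immediately.
        while len(self.buffer) >= MSS:
            self.send_segment(bytes(self.buffer[:MSS]))
            del self.buffer[:MSS]
        # A small segment may be sent only if no other small segment is outstanding.
        if self.buffer and not self.unacked_small_segment:
            self.send_segment(bytes(self.buffer))
            self.buffer.clear()
            self.unacked_small_segment = True

    def on_ack(self):
        # The ACK releases any data buffered while the small segment was outstanding.
        self.unacked_small_segment = False
        if self.buffer:
            self.write(b"")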

TCP Error Control

 Purpose: Ensures that data is delivered accurately and reliably, by detecting and correcting
errors during transmission.

1. Retransmission:

 Definition: If a segment is lost or corrupted, TCP retransmits the segment to ensure reliable
delivery.

 How it works:

o When the sender does not receive an acknowledgment (ACK) within a certain time,
it retransmits the unacknowledged segment.
o Retransmission is triggered by RTO (Retransmission Timeout), signaling a possible
data loss.

2. RTO (Retransmission Timeout):

 Definition: The time a sender waits before retransmitting a segment if no acknowledgment is received.

 How it works:

o The RTO is dynamically calculated based on network conditions.

o If the sender does not receive an ACK for a transmitted segment within this time, it
assumes the segment was lost and retransmits it.

 Calculation: RTO is derived from measured Round Trip Time (RTT) samples: a smoothed RTT
estimate plus a multiple of the RTT variation (commonly RTO = SRTT + 4 × RTTVAR).

3. Round Trip Time (RTT) Estimation:

 Definition: Measures the time it takes for a segment to travel from the sender to the receiver
and for the acknowledgment to return to the sender.

 Purpose: RTT is used to estimate the RTO and adjust the retransmission strategy.

 How it works: The sender tracks the time between sending a segment and receiving its
acknowledgment. RTT is then smoothed to account for network fluctuations.
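
As a concrete illustration, the smoothing commonly described for TCP (in the style of RFC 6298) keeps an exponentially weighted average of RTT samples. The Python sketch below shows only that arithmetic; the constants are the usual textbook values and the class name is invented for illustration:

class RttEstimator:
    """Textbook-style RTT smoothing (RFC 6298 flavour); illustrative only."""

    ALPHA = 1 / 8   # weight of a new sample in the smoothed RTT
    BETA = 1 / 4    # weight of a new sample in the RTT variation
    K = 4           # variation multiplier used for the timeout

    def __init__(self):
        self.srtt = None    # smoothed round-trip time (seconds)
        self.rttvar = None  # round-trip time variation (seconds)
        self.rto = 1.0      # initial retransmission timeout (seconds)

    def on_rtt_sample(self, r: float) -> float:
        if self.srtt is None:               # first measurement
            self.srtt = r
            self.rttvar = r / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - r)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * r
        self.rto = max(1.0, self.srtt + self.K * self.rttvar)   # keep a 1-second floor
        return self.rto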

4. Some Scenarios:

 Normal Operation:

o How it works: Data is transmitted, and ACKs are received in a timely manner. The
sender sends a new segment once it receives the ACK for the previous one.

o Result: No errors, smooth transmission.

 Lost Segment:

o How it works: When a segment is lost (due to network issues), the sender does not
receive an ACK within the RTO.

o Action: The sender retransmits the lost segment, and the receiver processes the
retransmitted data once it arrives.

Summary of Key Concepts:

 Retransmission: Ensures data is resent if ACK is not received.

 RTO Timeouts: Time before retransmitting a lost segment.


 RTT Estimation: Used to adjust the RTO dynamically.

 Lost Segment: Handled by retransmission once the segment is identified as lost.

Congestion Control in TCP

 Purpose: Prevents congestion in the network by managing the amount of data sent into the
network, ensuring that routers and links do not become overloaded.

1. Congestion Control vs. Congestion Avoidance

 Congestion Control:

o Definition: Mechanisms to reduce the amount of data sent into the network when
congestion is detected.

o Goal: Ensure the network does not become overloaded by controlling the
transmission rate.

o Key Actions: Decrease transmission rate, slow down data sending when congestion
occurs.

 Congestion Avoidance:

o Definition: Strategies to avoid congestion before it happens by monitoring the network's state and proactively adjusting the transmission rate.

o Goal: Prevent congestion from occurring by keeping network traffic below a certain
threshold.

o Key Actions: Gradually increase the sending rate until network capacity is reached to
avoid sudden congestion.

2. Detect Congestion

 How it works:

o Loss of Packets: Loss of packets (detected by timeouts or duplicate ACKs) is a primary indicator of congestion.

o Delayed Acknowledgments: A sudden increase in round-trip time (RTT) can indicate congestion.

o Explicit Congestion Notification (ECN): Some networks use ECN bits to signal
congestion without packet loss.

o Timeouts: When the sender doesn’t receive ACKs within the expected time, it
detects potential congestion.

3. Rate Adjustment Algorithm


 Definition: Mechanisms that dynamically adjust the sender's transmission rate based on
feedback about network congestion.

Key Algorithms:

o Slow Start: Initially, the sender starts with a small congestion window size and
increases it exponentially until it hits a threshold (slow start threshold).

o Congestion Avoidance (Additive Increase, Multiplicative Decrease): Once the threshold is reached, the sender increases the window size linearly, and decreases it multiplicatively when congestion is detected.

o Fast Retransmit and Fast Recovery: Upon detecting lost segments (e.g., through
triple duplicate ACKs), the sender retransmits the lost segment quickly and then
adjusts the window size to recover from congestion.

Summary of Key Concepts:

 Congestion Control: Reduces the transmission rate when congestion is detected.

 Congestion Avoidance: Proactively adjusts the transmission rate to avoid congestion.

 Detecting Congestion: Achieved via packet loss, delays, timeouts, and ECN.

 Rate Adjustment Algorithm: Uses mechanisms like Slow Start, Additive Increase,
Multiplicative Decrease, and Fast Recovery to manage the data transmission rate.

1. Multiplicative Increase and Additive Decrease

 Multiplicative Increase:

o Definition: A method to increase the sending rate (or window size) exponentially in
response to favorable network conditions (low congestion).

o How it works: When the congestion window (CWND) increases, it does so multiplicatively, typically doubling the rate until a threshold is reached.

o Context: Used in congestion control algorithms like TCP Reno or TCP Cubic to adjust
sending rates.

 Additive Decrease:

o Definition: A method to reduce the sending rate (or window size) by a fixed amount
when congestion is detected.

o How it works: When packet loss is detected, the CWND is reduced by a constant amount
(e.g., by one MSS per loss event); note that halving the window, as standard TCP does, is a
multiplicative rather than additive decrease. Either way the reduction helps to alleviate
network congestion.

o Context: Used in conjunction with multiplicative increase for controlling flow.

2. Additive Increase and Additive Decrease


 Additive Increase:

o Definition: A method to gradually increase the sending rate or window size by a fixed
increment, helping to probe for available bandwidth.

o How it works: Once the network is stable and congestion is not detected, the sender
increases the window size linearly (e.g., by 1 MSS per RTT).

o Context: Often used in TCP’s congestion control algorithm to increase the transmission rate once the congestion window is not causing any packet loss.

 Additive Decrease:

o Definition: When congestion is detected, the sender decreases its window size by a
fixed amount (e.g., by one MSS; halving the window would be a multiplicative decrease).

o How it works: Typically triggered by packet loss or delayed ACKs, the sender reduces
its transmission rate gradually to reduce congestion.

3. Slow Start

 Definition: A phase of TCP congestion control where the sender begins with a small
congestion window and increases it exponentially until it reaches a threshold.

 How it works:

o Initially, the window size is 1 MSS (Maximum Segment Size).

o For each acknowledgment received, the window grows by one MSS, which doubles the
window every round-trip time and exponentially increases the sending rate.

o This process continues until the threshold (ssthresh) is reached, after which the
sender switches to congestion avoidance mode.
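
The interaction between slow start, the threshold, and congestion avoidance can be expressed as a few lines of arithmetic. The Python sketch below is a simplified model of that behaviour; the function name and the timeout-style reaction to loss are assumptions for illustration, not the exact rules of any specific TCP variant:

def next_cwnd(cwnd, ssthresh, event):
    """One step of a simplified slow-start / congestion-avoidance model.

    cwnd and ssthresh are in MSS units; event is 'ack' or 'loss'.
    Illustrative only; real TCP stacks add many refinements (fast recovery, pacing, ...).
    """
    if event == "loss":
        ssthresh = max(cwnd / 2, 2)   # multiplicative decrease of the threshold
        cwnd = 1                      # restart from one segment (timeout-style reaction)
    elif cwnd < ssthresh:
        cwnd += 1                     # slow start: +1 MSS per ACK, doubling per RTT
    else:
        cwnd += 1 / cwnd              # congestion avoidance: roughly +1 MSS per RTT
    return cwnd, ssthresh

# Example: watch the window grow, then react to a loss.
cwnd, ssthresh = 1.0, 8.0
for event in ["ack"] * 10 + ["loss"] + ["ack"] * 5:
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, event)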

4. TCP Timers

 Definition: Various timers used in TCP to manage retransmissions and control flow.

Types of TCP timers:

o Retransmission Timeout (RTO): The timer waiting for an acknowledgment of a transmitted packet. If it expires, the packet is retransmitted.

o Persistence Timer: Prevents deadlock when the receiver has advertised a zero window; on expiry the sender probes the receiver so it does not wait indefinitely for a window size update.

o Keep-Alive Timer: Periodically sends messages to keep the connection alive when
there is no data to transfer.

o Time-Wait Timer: Keeps a connection in the TIME_WAIT state (typically twice the maximum
segment lifetime) after an active close, so the final ACK can be delivered and delayed
duplicate segments from the old connection expire before the connection is fully closed.
5. Quality of Service (QoS)

 Definition: Techniques to prioritize network traffic, ensuring that critical data (e.g., real-time
video, voice) is transmitted with low latency and high reliability.

6. Techniques to Improve QoS

1. Scheduling:

o Definition: The method of managing the order in which packets are transmitted
across the network.

o Types:

 Priority Queuing: Gives higher priority to important packets.

 Weighted Fair Queuing (WFQ): Allocates bandwidth fairly across multiple traffic flows.

2. Traffic Shaping:

o Definition: Controls the rate at which traffic is sent into the network to avoid
congestion.

o How it works: Limits the amount of data sent at once to ensure that the network can
handle it without overload, typically using a token bucket or leaky bucket approach.

3. Resource Reservation:

o Definition: Allocating network resources (such as bandwidth) for specific flows or applications.

o How it works: Ensures that certain traffic, like voice or video, has reserved
bandwidth to guarantee low latency and high performance.

4. Admission Control:

o Definition: The process of deciding whether a new connection can be accepted based on current network load.

o How it works: If the network is congested, admission control will reject new
connections to maintain the quality of existing connections.

7. Leaky Bucket Algorithm

 Definition: A traffic shaping algorithm that controls data flow into the network.

 How it works:

o Incoming packets are placed in a "bucket."

o Packets leave the bucket at a constant rate (like water dripping out), smoothing out
bursts of data.
o If the bucket overflows (i.e., data arrives too quickly), excess packets are discarded or
delayed.

 Purpose: Used to regulate the rate of data flow and avoid network congestion by shaping
bursty traffic into a steady, manageable stream.
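
The leaky-bucket behaviour described above fits naturally into a short simulation. The Python sketch below drains the bucket at a constant rate once per tick; the capacity, rate, and arrival pattern are arbitrary illustrative values:

from collections import deque

def leaky_bucket(arrivals, capacity, rate):
    """Simulate a leaky bucket: arrivals[i] packets arrive at tick i.

    capacity = maximum packets the bucket can hold,
    rate     = packets drained (transmitted) per tick.
    Returns (packets sent per tick, total packets dropped). Illustrative only.
    """
    bucket = deque()
    sent_per_tick, dropped = [], 0
    for arriving in arrivals:
        # Admit new packets until the bucket is full; drop the rest.
        for _ in range(arriving):
            if len(bucket) < capacity:
                bucket.append(1)
            else:
                dropped += 1
        # Drain at the constant output rate, smoothing the burst.
        sent = min(rate, len(bucket))
        for _ in range(sent):
            bucket.popleft()
        sent_per_tick.append(sent)
    return sent_per_tick, dropped

# A bursty arrival pattern is smoothed to at most 2 packets per tick.
print(leaky_bucket([8, 0, 0, 5, 0, 0], capacity=6, rate=2))   # -> ([2, 2, 2, 2, 2, 1], 2)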

Summary:

 Multiplicative Increase/Decrease: Methods for adjusting the transmission rate in response to congestion (exponential increase, constant decrease).

 Additive Increase/Decrease: Gradual adjustments to the transmission rate in TCP to find the
optimal sending rate.

 Slow Start: A method used to gradually increase the congestion window during connection
initiation.

 TCP Timers: Manage retransmissions and timeouts.

 QoS Techniques: Improve network performance through scheduling, traffic shaping, resource reservation, and admission control.

 Leaky Bucket Algorithm: Controls data flow and smooths bursty traffic.

--------------------------------------------------------------------------------------

Module 04 Network Layer

1. Logical Addresses

 Definition: Logical addresses are used to identify devices on a network. These addresses are
assigned to devices in a way that they can communicate across different networks.

 Key Points:

o Logical addresses are used in network layer (Layer 3) for routing data between
devices.

o Logical addresses can be IP addresses (IPv4 and IPv6) assigned to devices for
communication over the Internet.

2. IPv4 (Internet Protocol version 4)

 Definition: The fourth version of the Internet Protocol (IP) used for identifying devices on a
network using 32-bit addresses.

 Key Features:

o Address Format: IPv4 addresses are 32-bit long, written as four octets in decimal
(e.g., 192.168.1.1).

o Total Address Space: Approximately 4.3 billion addresses (2^32).


o Address Classes: Divided into Class A, B, C, D, and E.

o Depletion: IPv4 address space is limited and nearly exhausted due to the growing
number of devices.

3. IPv6 (Internet Protocol version 6)

 Definition: The sixth version of the Internet Protocol, designed to replace IPv4, uses 128-bit
addresses.

 Key Features:

o Address Format: IPv6 addresses are 128-bit long, written as eight groups of four
hexadecimal digits (e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334).

o Total Address Space: Provides approximately 340 undecillion addresses (2^128), solving the address exhaustion problem.

o Features: Includes improvements like better routing efficiency, no need for NAT
(Network Address Translation), and built-in security features.

4. Classful Addressing

 Definition: An older IP address classification system used in IPv4, which divides addresses
into predefined classes based on the first few bits of the address.

 Key Classes:

o Class A: Range: 1.0.0.0 to 127.255.255.255 (Used for large networks, 8-bit network
part).

o Class B: Range: 128.0.0.0 to 191.255.255.255 (Used for medium-sized networks, 16-bit network part).

o Class C: Range: 192.0.0.0 to 223.255.255.255 (Used for small networks, 24-bit network part).

o Class D: Range: 224.0.0.0 to 239.255.255.255 (Used for multicast groups).

o Class E: Range: 240.0.0.0 to 255.255.255.255 (Reserved for experimental purposes).
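
Because the class is determined entirely by the first octet, the lookup can be written directly. A small Python sketch (the function name is just illustrative):

def ipv4_class(address: str) -> str:
    """Return the classful-addressing class (A-E) of a dotted-decimal IPv4 address."""
    first_octet = int(address.split(".")[0])
    if first_octet <= 127:
        return "A"          # 1.0.0.0 - 127.255.255.255 (127.x.x.x is loopback)
    if first_octet <= 191:
        return "B"          # 128.0.0.0 - 191.255.255.255
    if first_octet <= 223:
        return "C"          # 192.0.0.0 - 223.255.255.255
    if first_octet <= 239:
        return "D"          # multicast
    return "E"              # experimental

print(ipv4_class("192.168.1.1"))   # -> C
print(ipv4_class("10.5.5.5"))      # -> A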

5. Special IPv4 Addresses

 Direct Broadcast Address:

o Definition: Used to send a message to all hosts on a specific network. It is the last
address in the address range of a subnet.

o Example: If the network address is 192.168.1.0/24, the broadcast address is 192.168.1.255.

 Limited Broadcast Address:


o Definition: A special address used to communicate with all devices in the local
network, often used when a device does not know its own IP address (e.g., during
DHCP discovery).

o Address: 255.255.255.255.

o Scope: This address is not routed and is only used within the local network.

 Loopback Address:

o Definition: A special address used to test the network stack of a device. It allows the
device to send data to itself.

o Address: 127.0.0.1 (most commonly used).

o Purpose: Useful for testing and diagnostics (e.g., ping 127.0.0.1).

 Private Address:

o Definition: A range of IP addresses reserved for private networks, not routable on the public internet. These are typically used in internal networks and require NAT (Network Address Translation) for accessing public networks.

o Address Ranges:

 Class A: 10.0.0.0 to 10.255.255.255

 Class B: 172.16.0.0 to 172.31.255.255

 Class C: 192.168.0.0 to 192.168.255.255

Summary of Key Concepts:

 IPv4: Uses 32-bit addresses, with limited address space (4.3 billion addresses).

 IPv6: Uses 128-bit addresses, providing an extremely large address space (340 undecillion
addresses).

 Classful Addressing: Divides IPv4 addresses into classes (A, B, C, D, E).

 Special IPv4 Addresses:

o Direct Broadcast: Used to send messages to all devices on a specific network.

o Limited Broadcast: Used for local network communication (255.255.255.255).

o Loopback Address: Used for internal device communication (127.0.0.1).

o Private Address: Reserved for private networks (e.g., 10.x.x.x, 192.168.x.x).

1. Unicast

 Definition: Communication between a single sender and a single receiver on a network.


 Key Points:

o The message is addressed to a single device identified by its unique IP address.

o Used for most traditional communication protocols (e.g., web browsing, file
transfers).

o Example: Sending a request to 192.168.1.10.

2. Multicast

 Definition: Communication between a single sender and multiple receivers, but not all
devices on the network.

 Key Points:

o The sender sends a message to a multicast group address.

o Devices that wish to receive the message must subscribe to the multicast group.

o Used in applications like streaming media or online video conferences.

o Example: 224.0.0.1 is a typical multicast address.

3. Broadcast

 Definition: Communication from a single sender to all devices in the network segment or
broadcast domain.

 Key Points:

o The message is sent to all devices, regardless of their IP address.

o Often used for announcements or discovery protocols (e.g., DHCP).

o Example: IPv4 broadcast address 255.255.255.255.

4. Subnetting

 Definition: The process of dividing a larger network into smaller, more manageable sub-
networks (subnets).

 Key Points:

o Used to optimize routing, improve security, and reduce network traffic.

o Subnetting is done by adjusting the subnet mask to create different network segments.

o Helps with efficient use of IP address space and network management.

o Example: Splitting 192.168.1.0/24 into four /26 subnets (192.168.1.0/26, 192.168.1.64/26, 192.168.1.128/26, and 192.168.1.192/26).
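
Python's standard ipaddress module can perform this split directly; a short sketch reproducing the example above:

import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")

# Split the /24 into four /26 subnets (64 addresses each).
for subnet in network.subnets(new_prefix=26):
    print(subnet, "-", subnet.num_addresses, "addresses")

# Output:
# 192.168.1.0/26 - 64 addresses
# 192.168.1.64/26 - 64 addresses
# 192.168.1.128/26 - 64 addresses
# 192.168.1.192/26 - 64 addresses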


5. Classless Addressing

 Definition: A method of IP addressing that does not rely on the traditional class-based
addressing scheme (Class A, B, C).

 Key Points:

o Introduced to address the limitations of classful addressing and improve address space utilization.

o Uses CIDR (Classless Inter-Domain Routing) notation (e.g., 192.168.1.0/24), where the subnet mask is specified as part of the address.

o Allows more flexible allocation of IP address blocks.

o Example: 10.0.0.0/8 (Classless block of addresses).

6. NAT (Network Address Translation)

 Definition: A method for translating private IP addresses to public IP addresses, and vice
versa, enabling multiple devices in a private network to share a single public IP address.

 Key Points:

o NAT helps conserve public IP addresses and allows for secure internal network
communication.

o Common types of NAT include PAT (Port Address Translation), which maps multiple
internal addresses to a single public IP with different port numbers.

o Example: A private address 192.168.1.10 gets translated to a public IP 203.0.113.1 when accessing the internet.

7. ISP (Internet Service Provider)

 Definition: A company or organization that provides access to the internet.

 Key Points:

o ISPs assign IP addresses to customers and provide services like email, web hosting,
and broadband connectivity.

o Can be categorized as residential, business, or mobile ISPs.

o Example: Comcast, AT&T, Vodafone, or Jio are examples of ISPs.

8. IPv6

 Definition: The sixth version of the Internet Protocol, designed to replace IPv4, using 128-bit
addresses.

 Key Points:
o Provides a vastly larger address space (2^128 addresses) to solve IPv4 address
exhaustion.

o Supports autoconfiguration, better security, and efficient routing.

o Address Format: Written in eight groups of four hexadecimal digits (e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334).

o IPv6 adoption is gradually increasing due to the exhaustion of IPv4 addresses.

Summary of Key Concepts:

 Unicast: One-to-one communication between devices.

 Multicast: One-to-many communication, but not all devices.

 Broadcast: One-to-all communication within a network segment.

 Subnetting: Dividing a large network into smaller sub-networks for better management.

 Classless Addressing: IP addressing using CIDR for more flexible and efficient address
allocation.

 NAT: Translates private IP addresses to public addresses for internet communication.

 ISP: Provides internet access to users and organizations.

 IPv6: The latest IP version with a large address space to replace IPv4.

1. Supernetting

 Definition: The process of combining several smaller subnets into a larger network. It is the
opposite of subnetting.

 Key Points:

o Used to create supernets by aggregating multiple IP address blocks into a single, larger network.

o Reduces the size of routing tables by simplifying network representation.

o Supernetting is done using CIDR notation (e.g., 192.168.0.0/22 combines multiple /24 networks).

o Example: Combining 192.168.0.0/24 through 192.168.3.0/24 into the single block 192.168.0.0/22.
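
The aggregation can again be checked with the standard ipaddress module; a minimal sketch of the example above:

import ipaddress

subnets = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(4)]

# collapse_addresses merges adjacent blocks into the smallest covering set.
supernet = list(ipaddress.collapse_addresses(subnets))
print(supernet)                            # -> [IPv4Network('192.168.0.0/22')]

# Any of the original /24s is contained in the /22 supernet.
print(subnets[3].subnet_of(supernet[0]))   # -> True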

2. SuperNetwork

 Definition: A supernetwork is a network created by supernetting, where multiple smaller address blocks are aggregated into a larger address block.

 Key Points:
o Typically, a CIDR block is used to represent supernetted addresses.

o Used to optimize routing by reducing the number of network entries in the routing
table.

o Example: A supernetwork could be 10.0.0.0/16 representing multiple subnets like 10.0.0.0/24, 10.0.1.0/24, etc.

3. IPv4 Datagram and Fragmentation

 IPv4 Datagram:

o Definition: A basic unit of data transmission in IPv4. It is encapsulated in the payload of a lower-layer protocol (like Ethernet).

o Structure: Consists of the header (with source and destination IP, TTL, etc.) and data
(the payload).

 Fragmentation:

o Definition: The process of breaking an IPv4 datagram into smaller units called
fragments to fit the Maximum Transmission Unit (MTU) of a network.

o Key Points:

 Fragmentation occurs when the datagram exceeds the MTU limit (typically
1500 bytes in Ethernet).

 Each fragment contains a portion of the original datagram along with its own
IP header.

 The receiver reassembles fragments into the original datagram using fragment offset and identification fields.

 Example: A 2000-byte datagram might be fragmented into two or more smaller packets if the network MTU is 1500 bytes.
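
A rough sketch of how a payload is cut into fragments follows; the header size, MTU, and returned fields are simplified assumptions (real IPv4 expresses the offset in 8-byte units and copies the IP header into every fragment):

def fragment(payload_len: int, mtu: int = 1500, header_len: int = 20):
    """Return (byte offset, length, more_fragments flag) for each fragment of a payload."""
    max_data = ((mtu - header_len) // 8) * 8   # data per fragment, a multiple of 8 bytes
    fragments, offset = [], 0
    while offset < payload_len:
        length = min(max_data, payload_len - offset)
        more = (offset + length) < payload_len  # MF flag: set on all but the last fragment
        fragments.append((offset, length, more))
        offset += length
    return fragments

# A 2000-byte payload over a 1500-byte MTU becomes two fragments.
print(fragment(2000))   # -> [(0, 1480, True), (1480, 520, False)]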

4. Routing Algorithm

Routing algorithms determine the best path for data packets from the source to the destination in a
network.

4.1 Link State Routing

 Definition: A routing algorithm where each router maintains a map of the entire network
and uses Dijkstra’s algorithm to calculate the shortest path to all other routers.

 Key Points:

o Each router broadcasts its link state (directly connected neighbors) to all other
routers in the network.
o Advantages:

 Provides faster convergence and more accurate routing information compared to distance-vector routing.

 Routers have complete knowledge of the network topology.

o Example Protocol: OSPF (Open Shortest Path First) is a widely used link-state
protocol.

4.2 Distance Vector Routing

 Definition: A routing algorithm where each router maintains a routing table and shares it
with its immediate neighbors to calculate the best path.

 Key Points:

o Routers send periodic updates to their neighbors with information about their
known networks and distances.

o Each router chooses the shortest path based on the distance to the destination
network.

o Limitations:

 Slower convergence and less accuracy than link-state routing.

 Prone to routing loops and count-to-infinity problems.

o Example Protocol: RIP (Routing Information Protocol) is a common distance-vector protocol.

Summary of Key Concepts:

 Supernetting: Combines smaller subnets into a larger network to simplify routing.

 SuperNetwork: A network formed by aggregating multiple subnets, typically using CIDR notation.

 IPv4 Datagram: A basic unit of data encapsulated for transmission; fragmentation is used
when it exceeds MTU limits.

 Routing Algorithms:

o Link State Routing: Uses a map of the entire network for efficient routing and fast
convergence. Example: OSPF.

o Distance Vector Routing: Uses routing tables and periodic updates to calculate
paths, prone to slower convergence. Example: RIP.

1. Creation of LSP (Link-State Packet)

 Definition: LSP is a packet containing information about the state of a router's links (such as
bandwidth, delay, or status).
 Key Points:

o Each router creates an LSP containing information about its directly connected
neighbors.

o LSPs contain information such as the router’s identifier, the links it is connected to,
and their states.

o LSPs are used in Link-State Routing Protocols (e.g., OSPF, IS-IS) to allow routers to
build a map of the network.

2. Flooding of LSP

 Definition: The process of distributing the LSP from the originating router to all other routers
in the network.

 Key Points:

o Routers flood their LSPs to all their neighbors.

o Each router receives an LSP and stores it in its database if it hasn't seen it before.

o Flooding ensures that all routers have the same link-state database to compute the
best paths.

3. Formation of Shortest Path Tree (SPT)

 Definition: The Shortest Path Tree is a tree-like structure formed by selecting the shortest
paths from a source router to all other routers.

 Key Points:

o After receiving all LSPs, each router calculates the Shortest Path Tree (SPT) using the
information from the link-state database.

o The SPT helps in determining the best paths for routing the data.

o Dijkstra’s algorithm is typically used to form the SPT.

4. Dijkstra’s Algorithm

 Definition: A graph search algorithm used to find the shortest path from a source node to all
other nodes in a network.

 Key Points:

o Dijkstra's algorithm calculates the minimum cost path from a source router to all
other routers based on the link-state information.

o It assigns tentative distances to each router and iteratively updates the shortest
paths.
o Steps:

1. Assign a distance value to every node (start with 0 for the source node and
infinity for all others).

2. Mark all nodes as unvisited. Set the source node as current.

3. For each neighbor of the current node, calculate their tentative distance.

4. After visiting all neighbors, mark the current node as visited. Move to the
unvisited node with the smallest tentative distance.

5. Repeat until all nodes have been visited.
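
The steps above translate almost directly into code when the link-state database is represented as a weighted adjacency map. A compact Python sketch using a priority queue (the graph and node names are invented for illustration):

import heapq

def dijkstra(graph, source):
    """Shortest-path costs from 'source' in a weighted graph {node: {neighbor: cost}}."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    visited = set()
    queue = [(0, source)]                     # (tentative distance, node)
    while queue:
        d, node = heapq.heappop(queue)        # unvisited node with smallest distance
        if node in visited:
            continue
        visited.add(node)
        for neighbor, cost in graph[node].items():
            if d + cost < dist[neighbor]:     # relax the edge
                dist[neighbor] = d + cost
                heapq.heappush(queue, (dist[neighbor], neighbor))
    return dist

# Example link-state database for four routers.
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(graph, "A"))   # -> {'A': 0, 'B': 1, 'C': 3, 'D': 4}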

5. Routing Algorithms

5.1 RIP (Routing Information Protocol)

 Definition: A distance-vector routing protocol used to determine the best path in an IPv4
network.

 Key Points:

o Metric: RIP uses hop count as the metric, with a maximum of 15 hops allowed.

o Updates: Routers send periodic updates every 30 seconds with their routing tables.

o Limitations: RIP suffers from slow convergence and does not scale well for large
networks.

o Example Use: Small to medium-sized networks.

5.2 OSPF (Open Shortest Path First)

 Definition: A link-state routing protocol used to determine the best paths in large enterprise
networks.

 Key Points:

o Metric: OSPF uses cost as the metric, which is typically based on link bandwidth.

o Updates: Routers send link-state advertisements (LSAs) to flood LSPs across the
network.

o Efficiency: OSPF supports hierarchical routing with areas to optimize large networks.

o Example Use: Large networks with complex routing needs.

5.3 BGP (Border Gateway Protocol)

 Definition: An inter-domain (external) routing protocol used to exchange routing information between different Autonomous Systems (AS).

 Key Points:

o Metric: BGP uses path vector as its routing metric and selects paths based on policy
rules (e.g., AS path).
o Scalability: BGP is used to route data between ISPs and large networks.

o Types: There are two types: IBGP (Internal BGP) and EBGP (External BGP).

o Example Use: Used by ISPs to manage routing between different ASs.

6. Interior and Exterior Routing

6.1 Interior Routing

 Definition: Routing that occurs within a single Autonomous System (AS).

 Key Points:

o Interior routing protocols (IRPs) like RIP, OSPF, and IS-IS are used within an AS.

o These protocols focus on efficient routing within a single organization’s network.

6.2 Exterior Routing

 Definition: Routing that occurs between different Autonomous Systems (ASes).

 Key Points:

o Exterior routing protocols like BGP are used for communication between different
ISPs or large networks.

o BGP allows the exchange of routing information across the internet.

Summary of Key Concepts:

 LSP Creation: Routers create Link-State Packets with information about their links.

 Flooding of LSP: LSPs are flooded to all routers to ensure all have the same network
topology.

 Shortest Path Tree (SPT): A tree structure formed using link-state information to determine
the best path to all routers.

 Dijkstra’s Algorithm: Used to calculate the shortest paths from the source router to all other
routers.

 Routing Protocols:

o RIP: Distance-vector protocol using hop count.

o OSPF: Link-state protocol using cost as the metric, supports large networks.

o BGP: Path-vector protocol used between ASes, essential for internet routing.

 Interior vs. Exterior Routing:

o Interior: Routing within an AS (using RIP, OSPF).

o Exterior: Routing between ASes (using BGP).


-----------------------------------------------------------------------------------------

Chapter 05 - Data Link Layer

1. Data Link Layer

 Definition: The Data Link Layer (Layer 2) in the OSI model is responsible for node-to-node
communication and error detection/correction.

 Key Points:

o It ensures reliable data transfer over a physical link.

o It handles framing, error control, and flow control to ensure accurate


communication between devices on the same network.

2. Framing

 Definition: Framing is the process of breaking down the data stream into smaller,
manageable units called frames.

 Key Points:

o Frames contain both data and control information like headers and trailers.

o The frame structure allows the Data Link Layer to identify where one message ends
and another begins.

o Frame Delimiters (start and end) are used to mark the boundaries of each frame.

3. Error Control

 Definition: Error control ensures the integrity of data by detecting and correcting errors that
may occur during transmission.

 Key Points:

o Error Detection: Identifies errors using methods like CRC (Cyclic Redundancy Check)
or parity bits.

o Error Correction: If an error is detected, the data is retransmitted or corrected using methods like ARQ (Automatic Repeat reQuest).

o Types of error detection methods: Parity bits, checksums, and CRC.

4. Flow Control

 Definition: Flow control regulates the rate of data transmission between sender and receiver
to prevent the receiver from being overwhelmed.
 Key Points:

o Ensures the sender does not send data faster than the receiver can process it.

o Techniques include Stop-and-Wait and Sliding Window protocols.

5. Functions of the Data Link Layer

The main functions of the Data Link Layer are:

 Framing: Divides the data from the Network Layer into frames for transmission.

 Error Control: Detects and corrects errors in the transmitted frames.

 Flow Control: Manages the rate of data transmission to prevent buffer overflow.

 Access Control: Manages how devices on a shared medium (like Ethernet) access the
channel.

6. Services Provided to the Network Layer

a. Virtual Communication

 Definition: Virtual communication is an abstract service provided by the Data Link Layer,
where it appears as though there is a direct connection between the sender and receiver.

 Key Points:

o The physical medium might be shared, but the Data Link Layer provides the illusion
of a dedicated link.

o Allows the Network Layer to work without concern for physical layer specifics.

b. Actual Communication

 Definition: Actual communication refers to the real, physical data transfer over the medium
between devices.

 Key Points:

o The Data Link Layer ensures the actual movement of data from one device to
another over the link.

o Handles how devices access and transmit data on the medium.

7. Types of Errors

a. Single Bit Error

 Definition: A single bit error occurs when one bit in a frame is altered due to noise or
interference.

 Key Points:
o Can be detected easily with error-detection schemes like parity bits or checksums;
correcting it requires retransmission or a correcting code such as Hamming code.

o Example: A flipped 1 to 0 or 0 to 1 in a transmitted frame.

b. Multiple Bit Error

 Definition: A multiple bit error occurs when more than one bit in a frame is altered during
transmission.

 Key Points:

o Multiple bit errors are harder to detect and correct than single bit errors.

o Often detected using more sophisticated methods like CRC.

c. Burst Error

 Definition: A burst error occurs when a sequence of bits is altered in the frame, typically due
to noise over a period of time.

 Key Points:

o Burst errors involve multiple bits in a continuous sequence.

o These errors are often more damaging but can be detected using methods like CRC.

Summary of Key Concepts:

 Data Link Layer: Provides reliable node-to-node communication through framing, error
control, and flow control.

 Framing: Divides data into manageable units called frames.

 Error Control: Detects and corrects errors in transmitted frames.

 Flow Control: Ensures the receiver isn’t overwhelmed by controlling the rate of transmission.

 Services to Network Layer:

o Virtual Communication: Abstracts the physical link for the Network Layer.

o Actual Communication: Manages the physical transfer of data.

 Types of Errors:

o Single Bit Error: One bit is altered.

o Multiple Bit Error: More than one bit is altered.

o Burst Error: A sequence of bits is altered.

1. Error Detection

 Definition: Error detection is the process of identifying errors in transmitted data, ensuring
that the data received is correct.
 Key Points:

o Involves adding extra bits (redundancy) to the data being transmitted.

o Errors are detected at the receiving end using these redundant bits.

2. VRC (Vertical Redundancy Check)

 Definition: VRC adds a parity bit to each data unit (like a byte or word) to make the total
number of 1s either even or odd.

 Key Points:

o Each column of bits (vertical) is checked for parity.

o A parity bit (even or odd) is added to ensure an even or odd number of 1s in the
data.

o Even Parity: Even number of 1s.

o Odd Parity: Odd number of 1s.

 Error Detection: Can detect single-bit errors (in fact any odd number of flipped bits) but misses errors that change an even number of bits, including many burst errors.
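
A parity bit is just the XOR of the data bits; a tiny Python sketch for even parity:

def even_parity_bit(bits: str) -> str:
    """Return the parity bit that makes the total number of 1s even."""
    return "1" if bits.count("1") % 2 else "0"

data = "1011001"
codeword = data + even_parity_bit(data)   # append the check bit
print(codeword)                           # -> 10110010
# The receiver recomputes: an odd number of 1s in the codeword signals an error.
print(codeword.count("1") % 2 == 0)       # -> True (no error detected)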

3. LRC (Longitudinal Redundancy Check)

 Definition: LRC extends the idea of VRC by adding an extra parity bit across the entire block
of data (longitudinal).

 Key Points:

o LRC checks parity on a column-by-column basis, treating the entire data block as a
unit.

o After checking each column’s parity, the LRC parity bit is computed for the entire
message.

o Error Detection: Detects errors in vertical and horizontal directions, but like VRC, it is
limited in detecting certain errors (e.g., multiple bit errors).

4. ERC (Error Redundancy Check)

 Definition: ERC is a more generalized error-checking method, often seen in more complex
error detection schemes, though it's not as commonly referenced by this name.

 Key Points:

o Error Redundancy checks for errors by introducing redundancy through additional error-checking codes.

o It can work in conjunction with methods like CRC or Checksum.

o Error Detection: Detects errors but is less standardized compared to VRC or CRC.
5. CRC (Cyclic Redundancy Check)

 Definition: CRC is a powerful and widely used error-detection method based on polynomial
division.

 Key Points:

o Process: The sender divides the data by a predetermined polynomial and appends
the remainder (CRC value) to the data.

o The receiver performs the same division and checks if the remainder is 0 (i.e., no
error).

o Error Detection: Highly effective for detecting burst errors and single-bit errors. CRC
is commonly used in Ethernet, CDs, and other communications.

o Types of CRC: CRC-16, CRC-32, etc., depending on the length of the polynomial.
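
The polynomial division at the heart of CRC can be done with shifts and XORs. Below is a short bitwise Python sketch using the generator x^8 + x^2 + x + 1 (0x107) purely as an example; real protocols fix their own polynomial, bit ordering, and initial value:

def crc_remainder(data: bytes, poly: int = 0x107, width: int = 8) -> int:
    """Compute the CRC remainder of 'data' by binary (mod-2) polynomial division."""
    remainder = 0
    for byte in data:
        remainder ^= byte << (width - 8)          # bring the next byte into the register
        for _ in range(8):
            if remainder & (1 << (width - 1)):    # top bit set: shift and subtract (XOR) the poly
                remainder = (remainder << 1) ^ poly
            else:
                remainder <<= 1
            remainder &= (1 << width) - 1         # keep only 'width' bits
    return remainder

message = b"HELLO"
crc = crc_remainder(message)
# Sender appends the CRC; the receiver recomputes it and compares (a mismatch means an error).
print(hex(crc), crc_remainder(message) == crc)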

6. Checksum

 Definition: A checksum is a mathematical value calculated from the data and transmitted
along with the data to check for errors.

 Key Points:

o The sender calculates a checksum by summing all data units (like bytes) and sending
the sum as a checksum value.

o The receiver computes the sum of received data and compares it with the
transmitted checksum.

o Error Detection: Can detect errors like missing or extra bits but is less reliable for
complex errors like burst errors.
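
The Internet-style 16-bit checksum illustrates the idea: sum the data in 16-bit words with end-around carry, then take the one's complement. A Python sketch (padding and byte order follow the usual convention; this is illustrative, not a full protocol implementation):

def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words, in the style of IP/TCP/UDP."""
    if len(data) % 2:
        data += b"\x00"                            # pad to a whole number of 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        word = (data[i] << 8) | data[i + 1]
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF                         # one's complement of the sum

payload = b"network"
checksum = internet_checksum(payload)
print(hex(checksum))
# Receiver check: the one's-complement sum of the data words plus the checksum is 0xFFFF when intact.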

7. Hamming Code

 Definition: Hamming code is an error detection and correction code that uses redundant bits
to detect and correct single-bit errors.

 Key Points:

o Hamming code adds extra parity bits to the data, and the positions of these bits
follow a specific rule (powers of 2).

o It can detect and correct single-bit errors and can detect two-bit errors.

o Process: Parity bits are placed at positions 1, 2, 4, 8, etc., and each parity bit covers
certain data bits to ensure overall consistency.

o Error Detection: It can correct single-bit errors and detect two-bit errors.
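
For the common Hamming(7,4) arrangement, parity bits sit at positions 1, 2, and 4, and each covers the positions whose binary index contains that power of two. A compact Python sketch of encoding and single-error correction (bit positions are numbered from 1, as in the usual textbook presentation):

def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4        # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4        # covers positions 3, 6, 7
    p4 = d2 ^ d3 ^ d4        # covers positions 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]   # positions 1..7

def hamming74_correct(code):
    """Locate and flip a single-bit error; the syndrome gives the bad position."""
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4        # 0 means no error detected
    if syndrome:
        c[syndrome - 1] ^= 1               # flip the erroneous bit
    return c, syndrome

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                               # inject a single-bit error at position 5
fixed, pos = hamming74_correct(word)
print(pos, fixed == hamming74_encode([1, 0, 1, 1]))   # -> 5 True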
Summary of Key Concepts:

 VRC (Vertical Redundancy Check): Adds a parity bit to each data unit to ensure even or odd
parity.

 LRC (Longitudinal Redundancy Check): Adds an additional parity check over entire data
blocks.

 ERC (Error Redundancy Check): A general error-detection method involving redundancy checks, often used with other methods.

 CRC (Cyclic Redundancy Check): Uses polynomial division to detect burst errors, commonly
used in networking protocols.

 Checksum: Sums the data and appends the result, with the receiver performing the same
check.

 Hamming Code: Adds parity bits at specific positions to detect and correct single-bit errors
and detect two-bit errors.

These methods are fundamental in ensuring data integrity during transmission and can be used in
combination for more robust error detection and correction.

1. Sliding Window Protocols

 Definition: Sliding Window Protocol is a method used in data link layer communication to
manage the flow of data between sender and receiver.

 Key Points:

o It allows the sender to send multiple frames before waiting for acknowledgments,
increasing efficiency.

o The "window" refers to the set of frames that the sender is allowed to send at a
time.

o The window "slides" as the sender receives acknowledgments from the receiver.

2. One Bit Sliding Window Protocol

 Definition: This is the simplest sliding window protocol where the window size is fixed at 1.

 Key Points:

o The sender can send only one frame before waiting for the acknowledgment.

o After receiving an acknowledgment for the current frame, the sender can send the
next frame.

o This protocol essentially behaves like Stop-and-Wait.

o Error Handling: If a frame is lost or corrupted, the sender resends it after a timeout.

 Efficiency: It has low efficiency because only one frame is sent at a time.
3. Go-Back-N Protocol

 Definition: Go-Back-N is a sliding window protocol where the sender can send multiple
frames before receiving acknowledgments, but the receiver can only process frames in order.

 Key Points:

o Window Size: The sender's window size is N, meaning it can send N frames without
waiting for an acknowledgment.

o Receiver's Role: The receiver is expected to acknowledge the frames in order. If any
frame is lost or corrupted, the receiver will discard that frame and all subsequent
frames, even if they are correct.

o Retransmission: The sender goes back and retransmits the lost or corrupted frame
along with any frames that follow it (hence "Go-Back-N").

o Efficiency: More efficient than One Bit Sliding Window because multiple frames are
sent at once, but can suffer from retransmitting frames unnecessarily.
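
The essential Go-Back-N behaviour (sending up to N frames, cumulative ACKs, and resending the whole outstanding window on timeout) can be sketched briefly. The Python model below tracks only sequence numbers; the window size, the transmit callback, and the event methods are illustrative assumptions:

class GoBackNSender:
    """Toy model of a Go-Back-N sender; tracks sequence numbers only."""

    def __init__(self, window_size, transmit):
        self.N = window_size
        self.transmit = transmit      # callback that actually sends frame number i
        self.base = 0                 # oldest unacknowledged frame
        self.next_seq = 0             # next frame number to send

    def send_if_possible(self):
        while self.next_seq < self.base + self.N:   # window not yet full
            self.transmit(self.next_seq)
            self.next_seq += 1

    def on_ack(self, ack_num):
        # Cumulative ACK: everything up to and including ack_num is confirmed.
        self.base = max(self.base, ack_num + 1)
        self.send_if_possible()

    def on_timeout(self):
        # Go back N: resend every outstanding frame from the oldest unacknowledged one.
        for seq in range(self.base, self.next_seq):
            self.transmit(seq)

sender = GoBackNSender(window_size=4, transmit=lambda i: print("send frame", i))
sender.send_if_possible()   # frames 0..3 go out
sender.on_ack(1)            # frames 0-1 acknowledged; frames 4-5 can now be sent
sender.on_timeout()         # frames 2..5 are retransmitted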

4. Selective Repeat Protocol

 Definition: Selective Repeat is a more advanced sliding window protocol where the sender
can send multiple frames, and the receiver can accept out-of-order frames.

 Key Points:

o Window Size: Both the sender and receiver have a window size of N. This allows the
sender to send N frames before waiting for an acknowledgment.

o Receiver's Role: The receiver can accept frames out-of-order, acknowledging only
the frames that have been received correctly.

o Retransmission: If a frame is lost or corrupted, only the specific frame is retransmitted, not all frames (unlike Go-Back-N).

o Efficiency: More efficient than Go-Back-N because only lost or corrupted frames are
retransmitted, reducing unnecessary retransmissions.

Summary of Key Concepts:

 One Bit Sliding Window Protocol: The simplest form of sliding window, similar to Stop-and-
Wait, where only one frame is sent at a time and the sender waits for an acknowledgment.

 Go-Back-N Protocol: Allows the sender to send up to N frames without waiting for an
acknowledgment but requires retransmission of all frames after a lost or corrupted frame.

 Selective Repeat Protocol: Similar to Go-Back-N but allows the receiver to accept out-of-
order frames and retransmits only the lost or corrupted frames, improving efficiency.
These sliding window protocols are critical for managing data flow in communication systems,
improving throughput and efficiency by enabling multiple frames to be sent before waiting for
acknowledgments.

1. HDLC (High-Level Data Link Control)

 Definition: HDLC is a bit-oriented protocol used for data link layer communication. It
provides error control, flow control, and framing services.

 Key Points:

o Synchronous communication: Data is transmitted in frames with a defined structure.

o Frame Structure: It consists of a flag (start and end of frame), address field, control
field, data field, and FCS (Frame Check Sequence).

o HDLC supports both point-to-point and multipoint configurations.

o Modes of Operation:

 Normal Response Mode (NRM): Master-slave configuration.

 Asynchronous Balanced Mode (ABM): Peer-to-peer configuration.

2. FCS (Frame Check Sequence) Field

 Definition: FCS is a part of the HDLC frame structure that is used for error detection.

 Key Points:

o It contains a Cyclic Redundancy Check (CRC) value used for detecting errors in the
transmitted frame.

o The sender calculates the CRC over the frame's data, and the receiver performs the
same calculation. If the calculated CRC matches the received CRC, the frame is
considered error-free.

o Location in Frame: The FCS is placed at the end of the HDLC frame, after the data
field.

o Error Detection: Helps in identifying any bit errors that may have occurred during
transmission.

3. PPP (Point-to-Point Protocol)

 Definition: PPP is a data link layer protocol used to establish a direct connection between
two network nodes.

 Key Points:
o Used for: Internet connections over dial-up, DSL, and other point-to-point links.

o Encapsulation: PPP encapsulates network layer packets (e.g., IP) for transmission
over point-to-point links.

o Components:

 Link Control Protocol (LCP): Used to establish, configure, and test the data
link connection.

 Network Control Protocol (NCP): Used to configure and support multiple network layer protocols (e.g., IP, IPX).

o Error Detection: PPP provides error detection but does not include automatic
retransmission of lost packets (error correction is not a part of PPP itself).

o Authentication: PPP supports PAP (Password Authentication Protocol) and CHAP (Challenge Handshake Authentication Protocol) for authentication.

4. SLIP (Serial Line Internet Protocol)

 Definition: SLIP is an older, simple protocol used to encapsulate IP packets for transmission
over serial connections.

 Key Points:

o Usage: Primarily used for dial-up connections to the Internet.

o Frame Structure: SLIP is very simple and does not include features like error
detection or flow control. It simply encapsulates IP packets with a start and end
marker.

o Limitations:

 No error detection: Unlike PPP, SLIP does not have error detection
mechanisms such as CRC or FCS.

 No support for multiple network layer protocols: SLIP only supports IP and
does not natively support protocols like IPX or AppleTalk.

o Deprecation: SLIP has been largely replaced by PPP due to its limitations and lack of
advanced features.

Summary of Key Concepts:

 HDLC (High-Level Data Link Control): A bit-oriented protocol for data link layer
communication, providing error control, flow control, and framing.

 FCS (Frame Check Sequence): A field in HDLC frames used for error detection via CRC.

 PPP (Point-to-Point Protocol): A data link protocol used for direct communication between
two nodes, supporting various network layer protocols and providing optional
authentication.
 SLIP (Serial Line Internet Protocol): An older, simpler protocol for encapsulating IP packets,
lacking error detection and advanced features.

These protocols play important roles in point-to-point communication and error detection in data link layer communication.

1. Channel Allocation Problem

 Definition: The channel allocation problem refers to how to assign communication channels
to different users or data streams efficiently, especially when multiple devices or users are
trying to use the same channel.

 Key Points:

o Goal: Maximize the use of the available bandwidth while minimizing interference
and ensuring fairness among users.

o The challenge arises when there is a limited number of channels available to a large
number of users, especially in wireless networks.

2. Dynamic Channel Allocation in LANs and MANs

 Definition: Dynamic channel allocation involves assigning channels to users or devices in real-time based on demand, rather than pre-assigning fixed channels.

 Key Points:

o LAN (Local Area Network): Uses dynamic allocation methods like Carrier Sense
Multiple Access (CSMA).

o MAN (Metropolitan Area Network): Typically involves more sophisticated methods, including time-sharing and frequency-sharing mechanisms.

o Advantages: More efficient utilization of available channels, adaptable to varying network load.

o Challenges: Complex management of resources, potential collisions, and delays.

3. Taxonomy of Multiple Access Protocols

 Definition: Multiple access protocols define how multiple users or devices can share the
same communication medium efficiently.

 Key Points:

o Types of Protocols:

 Random Access: Protocols where users transmit without coordination, e.g., ALOHA, CSMA/CD.
 Controlled Access: Protocols where a central controller or token manages
access, e.g., Polling, Token Passing.

 Channelized Access: Protocols that divide the channel into multiple smaller
channels, e.g., FDMA, TDMA, CDMA.

4. Channelization

 Definition: Channelization refers to the technique of dividing a single communication channel into multiple smaller, separate channels to allow simultaneous communication.

 Key Points:

o Goal: To increase the capacity of a communication medium by using the available bandwidth more efficiently.

o Common methods of channelization include Frequency Division, Time Division, and Code Division.

5. FDMA (Frequency Division Multiple Access)

 Definition: FDMA is a channelization technique where the available bandwidth is divided into several frequency bands, and each user is allocated a specific frequency band for communication.

 Key Points:

o Fixed Allocation: Each user gets a fixed frequency slot, and they transmit in that slot.

o Usage: Common in analog communication systems and early cellular networks.

o Limitations: Inefficient for bursty traffic as the channels are reserved even when not
in use.

6. TDMA (Time Division Multiple Access)

 Definition: TDMA is a channelization technique where users share the same frequency
channel but transmit in different time slots.

 Key Points:

o Time Slot Allocation: Users are assigned specific time slots for transmission.

o Usage: Common in digital communication systems and cellular networks (e.g., GSM).

o Advantages: Efficient use of the available bandwidth, as only one user transmits at a
time.

o Limitations: Requires precise synchronization to avoid collisions.


7. CDMA (Code Division Multiple Access)

 Definition: CDMA is a channelization technique where each user is assigned a unique code,
and all users transmit simultaneously but on the same frequency channel.

 Key Points:

o Spreading Code: The signal from each user is spread over the available bandwidth
using a unique code (e.g., PN Sequence).

o Usage: Used in modern cellular systems (e.g., 3G networks).

o Advantages: Allows efficient sharing of the same frequency by multiple users.

o Limitations: Requires complex signal processing for separation of users' signals.

8. Walsh Table

 Definition: A Walsh table is used in CDMA systems to assign unique orthogonal codes to
different users to minimize interference.

 Key Points:

o Orthogonal Codes: The codes in a Walsh table are designed so that they do not
interfere with each other when transmitted over the same frequency.

o Usage: Used in CDMA for code assignment, ensuring efficient and interference-free
communication.

o Structure: The Walsh matrix has a binary structure, and the size of the table depends
on the number of users.
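
Walsh codes can be generated recursively from the 1x1 matrix [1] by the Hadamard construction; rows of the resulting matrix are mutually orthogonal. A short Python sketch:

def walsh(n):
    """Return the n x n Walsh/Hadamard matrix (n must be a power of two)."""
    if n == 1:
        return [[1]]
    half = walsh(n // 2)
    top = [row + row for row in half]                       # [W  W]
    bottom = [row + [-x for x in row] for row in half]      # [W -W]
    return top + bottom

W = walsh(4)
for row in W:
    print(row)

# Orthogonality check: the dot product of two different rows is 0.
dot = sum(a * b for a, b in zip(W[1], W[2]))
print(dot)   # -> 0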

9. Ethernet

 Definition: Ethernet is a widely used networking technology that operates at the data link
layer to connect devices in a Local Area Network (LAN).

 Key Points:

o Technology: Originally based on coaxial cables, modern Ethernet uses twisted pair
cables (Cat 5, Cat 6) and fiber optics.

o Access Method: Uses Carrier Sense Multiple Access with Collision Detection
(CSMA/CD) to manage access to the shared communication medium.

o Data Frames: Ethernet uses frames for data encapsulation.

o Speed and Standards: Ethernet supports a wide range of speeds, from 10 Mbps
(10BASE-T) to 100 Gbps (100GBASE).

o Switching: Modern Ethernet LANs use Ethernet switches for better performance
compared to hubs.
Summary of Key Concepts:

 Channel Allocation Problem: The challenge of efficiently assigning communication channels


to multiple users.

 Dynamic Channel Allocation: Real-time assignment of channels in LANs and MANs.

 Multiple Access Protocols Taxonomy: Defines how multiple users can share a
communication medium (e.g., Random, Controlled, Channelized).

 Channelization: Dividing a communication channel into smaller channels (e.g., FDMA, TDMA,
CDMA).

 FDMA: Divides bandwidth into separate frequency bands for each user.

 TDMA: Allocates time slots to users for communication over a shared frequency.

 CDMA: Assigns unique codes to users, allowing simultaneous transmission on the same
frequency.

 Walsh Table: A method of assigning orthogonal codes in CDMA to minimize interference.

 Ethernet: A widely used LAN technology using CSMA/CD for channel access and supporting
high-speed communication.

---------------------------------------------------------------------------------------

Module 06 1. Physical Layer

 Definition: The physical layer is the lowest layer in the OSI model responsible for transmitting
raw bit streams over a physical medium.

 Key Functions:

o Transmission of raw data (bits) as electrical, optical, or radio signals.

o Defines hardware elements such as cables, connectors, and wireless transmission systems.

2. Transmission Media

 Definition: Transmission media refers to the physical pathways used to carry data between
devices in a network.

 Types:

o Guided (Wired): Media that involves physical cables.

o Unguided (Wireless): Transmission through the air or space without physical cables.

3. Electromagnetic Spectrum

 Definition: The electromagnetic spectrum is the range of all frequencies of electromagnetic radiation used for transmitting signals over various transmission media.
 Key Points:

o Includes radio waves, microwaves, infrared, visible light, and ultraviolet.

o Different frequency ranges are used for different transmission technologies (e.g.,
radio, fiber optics, wireless communication).

4. Classes of Transmission Media

Guided (Wired) Media:

 Definition: Media that uses physical cables or wires to transmit data.

 Types:

o Twisted Pair Cable:

 Structure: Pairs of insulated copper wires twisted together.

 Types:

 Unshielded Twisted Pair (UTP): No shielding; commonly used in Ethernet networks.

 Shielded Twisted Pair (STP): Wires are shielded to reduce electromagnetic interference (EMI).

 Uses: Telephone lines, LAN cables, and Ethernet networks.

 Speed/Distance: Limited bandwidth and range compared to coaxial or fiber optic.

o Coaxial Cable:

 Structure: Consists of a central copper conductor, an insulating layer, a metallic shield, and an outer insulating layer.

 Uses: Cable TV, broadband internet, and legacy Ethernet (10BASE2).

 Advantages: Less interference than twisted pair cables.

o Fiber Optic Cable:

 Structure: Consists of glass or plastic fibers that carry light signals.

 Types:

 Single-mode fiber (SMF): For long-distance transmission, using a single light wave.

 Multi-mode fiber (MMF): For shorter distances, using multiple light paths.

 Uses: High-speed, high-capacity networks (e.g., internet backbone, data centers).
 Advantages: High bandwidth, long-distance transmission with minimal signal
loss and immunity to electromagnetic interference.

UTP and STP (Unshielded Twisted Pair and Shielded Twisted Pair):

 UTP: Commonly used in Ethernet networks (Cat5, Cat6 cables), cheaper and more flexible
but more prone to interference.

 STP: More expensive and harder to install, but offers better protection against interference
due to its shielding.

BNC Connectors:

 Definition: A type of coaxial cable connector commonly used for RF (radio frequency) signals.

 Uses: Typically found in older networks and video connections (e.g., CCTV).

Optical Fiber:

 Definition: High-speed data transmission medium using light pulses to transmit data over
long distances.

 Key Points:

o Immune to electromagnetic interference.

o Offers extremely high bandwidth, suitable for long-distance communications.

5. Unguided (Wireless) Media

 Definition: Media that uses air or space to transmit signals without any physical cables.

 Types:

o Air (Free Space):

 Definition: The natural medium for wireless communication using radio waves, microwaves, or light.

 Key Points: Wireless communication via air is subject to interference from physical obstructions, weather, and other environmental factors.

o Radio Waves:

 Frequency Range: Typically from 3 kHz to 300 GHz.

 Uses: AM/FM radio, television broadcasts, Wi-Fi, Bluetooth, and cellular networks.

 Key Points: Easily transmitted through air and can cover large areas; subject
to interference.

o Microwave:

 Frequency Range: 300 MHz to 300 GHz.

 Uses: Point-to-point communication, satellite communication, radar systems.


 Key Points: Requires line-of-sight transmission; can be blocked by physical
obstructions like buildings or mountains.

o Infrared:

 Frequency Range: 300 GHz to 400 THz.

 Uses: Short-range communication, e.g., remote controls, infrared sensors, wireless peripherals.

 Key Points: Line-of-sight required; limited range due to absorption by air, objects, or other obstacles.

Summary of Key Concepts:

 Physical Layer: Responsible for the transmission of raw bits over a physical medium.

 Transmission Media: Can be guided (wired) or unguided (wireless).

o Guided Media: Includes twisted pair, coaxial cable, and fiber optic cables, each
offering different levels of interference protection, speed, and range.

o Unguided Media: Includes radio waves, microwaves, and infrared, used in wireless
communication systems.

 Electromagnetic Spectrum: The range of frequencies used for various communication technologies.

 BNC Connectors: Coaxial connectors used in older systems for RF transmission.

 Optical Fiber: High-performance, long-distance transmission medium using light to carry data.

1. Communication Satellites

Geostationary Orbit Satellites (GEO):

 Orbit: Fixed 35,786 km above Earth’s equator.

 Key Points:

o Always in the same position relative to the Earth, ideal for TV, weather, and
communication.

o High latency due to the distance from Earth.

o Examples: Communication satellites like Intelsat.

Medium Earth Orbit Satellites (MEO):

 Orbit: Between 2,000 and 35,786 km above Earth.

 Key Points:

o Lower latency than GEO.


o Used for navigation (e.g., GPS).

o Typically used in constellations for broader coverage.

Low Earth Orbit Satellites (LEO):

 Orbit: Below 2,000 km above Earth.

 Key Points:

o Very low latency and fast data transmission.

o Used for satellite internet (e.g., SpaceX's Starlink).

o Need more satellites to cover the entire Earth due to their lower coverage area.

2. Wired LAN Digital Signal Encoding

Nonreturn to Zero-Level (NRZ-L):

 Definition: A binary encoding where the signal remains at a constant level for the duration of
a bit period.

 Key Points:

o Simple, but susceptible to DC bias and lack of synchronization.

Nonreturn to Zero Inverted (NRZI):

 Definition: A signal encoding where a change in level represents a ‘1’ bit and no change
represents a ‘0’ bit.

 Key Points:

o Reduces DC bias issues compared to NRZ-L.

Manchester Encoding:

 Definition: A binary encoding where each bit is represented by two signal levels: a transition
in the middle of the bit period.

 Key Points:

o Provides clock synchronization and is self-clocking.

Differential Manchester Encoding:

 Definition: Similar to Manchester but the bit values are represented by the absence or
presence of transitions at the start of the bit period.

 Key Points:

o Reduces errors in noisy environments.

Bipolar - AMI (Alternate Mark Inversion):

 Definition: A binary encoding where ‘1’ is represented by alternating positive and negative
voltages, and ‘0’ is represented by zero voltage.
 Key Points:

o Reduces the risk of DC bias.

Pseudo Ternary:

 Definition: A binary encoding scheme where ‘0’ is represented by alternating positive and
negative voltages, and ‘1’ is represented by zero voltage (no signal).

 Key Points:

o The same idea as AMI, but with the roles of ‘1’ and ‘0’ swapped.
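
To make the difference between these schemes concrete, the Python sketch below maps a bit string onto signal levels per bit period for NRZ-L, NRZI, and Manchester. The +1/-1 levels and the polarity conventions are illustrative assumptions; textbooks differ on which level or transition direction represents ‘1’:

def nrz_l(bits):
    """NRZ-L: one constant level per bit (+1 for '1', -1 for '0' in this sketch)."""
    return [+1 if b == "1" else -1 for b in bits]

def nrzi(bits, level=-1):
    """NRZI: invert the level on a '1', hold it on a '0'."""
    out = []
    for b in bits:
        if b == "1":
            level = -level
        out.append(level)
    return out

def manchester(bits):
    """Manchester: two half-bit levels with a mid-bit transition ('1' = low-to-high here)."""
    return [(-1, +1) if b == "1" else (+1, -1) for b in bits]

bits = "10110"
print(nrz_l(bits))        # [1, -1, 1, 1, -1]
print(nrzi(bits))         # [1, 1, -1, 1, 1]
print(manchester(bits))   # [(-1, 1), (1, -1), (-1, 1), (-1, 1), (1, -1)]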

3. WAN Communication Networks

Example: Public Telephone Network

 Definition: The worldwide network of telephone lines and switching systems.

 Key Points:

o Provides voice communication via analog or digital signals.

o Uses circuit-switching technology for voice calls.

Hierarchical Network Structure:

 Definition: A network design that organizes communication nodes in a hierarchy.

 Key Points:

o Often seen in large-scale communication systems (e.g., telephone networks).

4. Major Components of the Telephone System

Local Loops:

 Definition: The connection from a user’s device to the telephone exchange.

 Key Points:

o Composed of twisted-pair copper wires or fiber optic cables in modern systems.

Transmission Problems:

 Definition: Issues like attenuation, distortion, and noise that degrade signal quality.

 Key Points:

o Can be mitigated by amplifiers (analog signals) or repeaters/regenerators (digital signals).

Modulation Techniques:

 Definition: Methods to modify a carrier signal to transmit data.

 Key Points:
o Amplitude Modulation (AM)

o Frequency Modulation (FM)

o Phase Modulation (PM)
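To show how the three methods just listed differ, here is a short Python/NumPy sketch (an illustration added to these notes, not a telephone-system implementation). The sampling rate, carrier and message frequencies, and modulation indices are arbitrary values chosen for the example.

import numpy as np

fs = 8000                       # sampling rate, Hz
t = np.arange(0, 0.01, 1 / fs)  # 10 ms of samples
fc, fm = 1000, 100              # carrier and message frequencies, Hz
message = np.cos(2 * np.pi * fm * t)

# AM: the carrier's amplitude follows the message.
am = (1 + 0.5 * message) * np.cos(2 * np.pi * fc * t)
# FM: the carrier's instantaneous frequency follows the message
# (the phase deviates by the integral of the message; 5 is the modulation index).
fm_wave = np.cos(2 * np.pi * fc * t + 5 * np.sin(2 * np.pi * fm * t))
# PM: the carrier's phase follows the message directly.
pm_wave = np.cos(2 * np.pi * fc * t + (np.pi / 2) * message)

Plotting the three arrays against t makes the difference visible: AM varies the envelope, while FM and PM keep a constant envelope and vary the timing of the zero crossings.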

Modems:

 Definition: Devices that convert digital data to analog signals and vice versa.

 Key Points:

o Essential for dial-up Internet access.

Trunks:

 Definition: High-capacity transmission paths between central offices in the telephone system.

 Key Points:

o Often use multiplexing to carry multiple calls simultaneously.

Multiplexing:

 Definition: Combining multiple signals into one for transmission over a shared medium.

 Key Points:

o Time Division Multiplexing (TDM), illustrated in the sketch after this list

o Frequency Division Multiplexing (FDM)

o Wavelength Division Multiplexing (WDM)
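As a minimal picture of TDM, the sketch below (illustrative only; the channel names and data are made up) interleaves fixed-size slots from three channels into successive frames on a shared link:

# Round-robin time-division multiplexing of three channels.
channels = {
    "A": list("aaaa"),
    "B": list("bbbb"),
    "C": list("cccc"),
}

frames = []
for slot in range(4):
    # One slot per channel per frame, always in the same order.
    frame = [channels[name][slot] for name in sorted(channels)]
    frames.append(frame)

print(frames)  # [['a', 'b', 'c'], ['a', 'b', 'c'], ['a', 'b', 'c'], ['a', 'b', 'c']]
# The receiver demultiplexes purely by position: slot 0 -> A, slot 1 -> B, slot 2 -> C.

FDM and WDM achieve the same sharing in frequency (or wavelength) rather than in time.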

Switching Offices:

 Circuit Switching: Dedicated path is established for the entire duration of a call.

 Packet Switching: Data is sent in packets, each possibly taking a different route.

 Message Switching: Entire messages are routed from source to destination.

 Fully Interconnected Network: Every node is connected directly to every other node.
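As a back-of-the-envelope comparison of the first two switching styles, the sketch below estimates the time to move a message over a few store-and-forward hops versus over an already-set-up circuit. Every number (message size, link rate, hop count, packet size, setup time) is an assumption made for illustration, and propagation delay and packet headers are ignored.

# Illustrative timing comparison: circuit vs. store-and-forward packet switching.
MESSAGE_BITS = 8_000_000   # 1 MB message
RATE = 10_000_000          # 10 Mb/s on every link
HOPS = 3                   # links between source and destination
PACKET_BITS = 8_000        # 1 KB packets
SETUP = 0.5                # assumed circuit setup time, seconds

# Circuit switching: pay the setup cost once, then stream the whole message.
circuit_time = SETUP + MESSAGE_BITS / RATE

# Packet switching: the first packet crosses all hops, the rest pipeline behind it.
packets = MESSAGE_BITS // PACKET_BITS
packet_time = (HOPS + packets - 1) * (PACKET_BITS / RATE)

print(f"circuit switching: {circuit_time:.3f} s")
print(f"packet switching : {packet_time:.3f} s")

With these assumed numbers packet switching wins because it avoids the setup delay and pipelines the packets, but the trade-off changes as soon as setup is cheap or per-packet overhead grows.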

5. The Mobile Telephone

First-generation Mobile Phones (1G):

 Technology: Analog voice communication.

 Key Points:

o Prone to interference and limited coverage.

Second-generation Mobile Phones (2G):

 Technology: Digital voice transmission.


 Key Points:

o 2G GSM: Global System for Mobile Communications.

o 2G CDMA: Code Division Multiple Access.

o 2G D-AMPS: Digital AMPS.

Third-generation Mobile Phones (3G):

 Technology: Digital voice and data services.

 Key Points:

o Enabled mobile internet, video calling, and faster data transfer.

1G, 2G, 3G:

 1G: Analog, voice-only.

 2G: Digital, voice, and text.

 3G: Digital, voice, text, and data (mobile internet).

6. Cable Television

Community Antenna Television (CATV):

 Definition: A cable TV service that receives broadcast signals with a shared community antenna and distributes them to subscribers over coaxial cable.

 Key Points:

o Provides better reception in areas with poor over-the-air signals.

Internet over Cable:

 Definition: Providing high-speed internet access via the same coaxial cable infrastructure
used for cable television.

 Key Points:

o Uses cable modems to convert signals for internet access.

Spectrum Allocation:

 Definition: The distribution of frequency bands to different services (e.g., television, radio,
mobile).

 Key Points:

o Managed by governmental agencies to avoid interference.

Cable Modems:

 Definition: Devices that modulate and demodulate signals for internet access via cable TV
lines.
 Key Points:

o Provide broadband speeds for internet access.

ADSL vs. Cable:

 ADSL (Asymmetric Digital Subscriber Line): Uses existing telephone lines for internet access, with a higher download speed than upload speed (hence "asymmetric").

 Cable: Uses coaxial (HFC) cable and generally provides faster speeds than ADSL, though the bandwidth is shared among subscribers on the same segment.

HFC and PSTN:

 HFC (Hybrid Fiber-Coaxial): A mix of fiber optic and coaxial cables used for broadband.

 PSTN (Public Switched Telephone Network): Traditional analog telephone network.

Local Loop:

 Definition: The final segment of the telephone network that connects users to the exchange.

 Key Points:

o Bandwidth: Refers to the data rate the local loop can support.

Summary of Key Concepts:

 Communication Satellites: GEO, MEO, and LEO satellites provide different coverage and
latency options.

 Wired LAN Digital Signal Encoding: Various encoding techniques like NRZ, Manchester, and
AMI help in efficient transmission of data.

 WAN Communication Networks: Includes public telephone networks, which rely on circuit
switching, packet switching, and multiplexing.

 Telephone System Components: Features like local loops, modems, trunks, and multiplexing
ensure efficient data and voice transmission.

 Mobile Telephones: Evolved from 1G (analog) to 3G (digital voice and data) systems.

 Cable Television: CATV provides TV services, while cable modems offer internet access over
the same infrastructure.

--------------------------------------------------------------------------------------------------------------------------------------
