COMPUTER NETWORKS

Module-I
INTRODUCTION
❖ Each of the past three centuries has been dominated by a single new technology.
❖ The 18th century was the era of the great mechanical systems accompanying the Industrial Revolution.
❖ The 19th century was the age of the steam engine.
❖ During the 20th century, the technology was information gathering, processing, and distribution. Ex:
Telephone networks, invention of radio and computer industry, communication satellites, Internet.
❖ In the 21st century, networks connect not only human to human but also machine to machine.
❖ Computer Network: A collection of autonomous computers interconnected by a single technology. Two
computers are said to be interconnected if they are able to exchange information. The connection need not
be via a copper wire; fiber optics, microwaves, infrared and communication satellites can also be used.
Networks come in many sizes, shapes and forms, as we will see later. They are usually connected
together to make larger networks, for example the Internet.
❖ Uses of Computer Networks

A) Business Applications: Resource Sharing, Client-Server Model, E-mail, Video Conferencing


❖ Many companies have a substantial number of computers. For example, a company may have separate
computers to monitor production, keep track of inventories, and do the payroll. Initially, each of these
computers may have worked in isolation from the others, but at some point, management may have
decided to connect them to be able to extract and correlate information about the entire company.

1. Resource sharing: The main purpose of connecting computers is to share resources. For example,
a high-volume networked printer may be installed instead of a large collection of individual printers.
2. Information sharing: Large and medium-sized companies, and many small ones, are vitally
dependent on computerized information. Information can be shared using a simple client-server model
connected by a network, as illustrated in Fig. 1.4.

In the client-server model, two processes are involved, one on the client machine and one on
the server machine. Communication takes the form of the client process sending a message over the
network to the server process. The client process then waits for a reply message. When the server
process gets the request, it performs the requested work or looks up the requested data and sends back
a reply. These messages are shown in Fig. 1.5.
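
As a rough illustration of this request/reply pattern, the sketch below simulates a client process and a server process exchanging messages through two in-memory queues that stand in for the network; the record names and data are invented.

```python
# Toy simulation of the client-server exchange described above.
# The queues stand in for the network; names and data are hypothetical.
import queue
import threading

to_server = queue.Queue()   # carries the client's request
to_client = queue.Queue()   # carries the server's reply

def server_process(database):
    request = to_server.get()                           # server receives the request ...
    to_client.put(database.get(request, "not found"))   # ... does the lookup and replies

def client_process(key):
    to_server.put(key)                                  # client sends its request ...
    return to_client.get()                              # ... then blocks waiting for the reply

db = {"employee/42": "Jane Doe, Payroll"}
t = threading.Thread(target=server_process, args=(db,))
t.start()
print(client_process("employee/42"))                    # -> Jane Doe, Payroll
t.join()
```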


3. Connecting people: Another use of a computer network has to do with people
rather than information or even computers. It is achieved through e-mail and video
conferencing.
4. E-commerce: Many companies do business electronically with other companies,
especially suppliers and customers, and with consumers over the Internet.

B) Home Applications: Shopping, Digital library, email, game playing, TV, Twitter, Instagram
The computer network provides better connectivity for home applications via desktop
computers, laptops, iPads, iPhones. Some of the more popular uses of the Internet for home users are
as follows:
1. Access to remote information.
2. Person-to-person communication (peer-to-peer).
i. Peer-to-peer - there are no fixed clients and servers.
ii. Audio and Video sharing
3. Interactive entertainment.
4. Electronic commerce.

C) Mobile Users: Notebook Computer, Hotspots, Text Messaging, GPS

As wireless technology becomes more widespread, numerous other applications are likely to
emerge. Wireless networks are of great value to fleets of trucks, taxis, delivery vehicles, and
repairpersons for keeping in contact with home. Wireless networks are also important to the
military.
Although wireless networking and mobile computing are often related, they are not identical, as
Table 1.3 shows. Here we see a distinction between fixed wireless and mobile wireless. Even
notebook computers are sometimes wired. For example, if a traveler plugs a notebook computer
into the telephone jack in a hotel room, he has mobility without a wireless network.


Another area in which wireless could save money is utility meter reading. If electricity, gas,
water, and other meters in people's homes were to report usage over a wireless network, there would
be no need to send out meter readers.

D) Social Issues: Phishing, Network Neutrality (all traffic treated equally)

The widespread introduction of networking has introduced new social, ethical, and political
problems. A popular feature of many networks is newsgroups or bulletin boards whereby people can
exchange messages with like-minded individuals. As long as the subjects are restricted to technical
topics or hobbies like gardening, not too many problems will arise.
The following are issues that arise in society from the misuse of computer
networks.

1. Network neutrality
2. Digital Millennium Copyright Act
3. Profiling users
4. Phishing

NETWORK HARDWARE
❖ There is no generally accepted taxonomy into which all computer networks fit, but two dimensions stand out
as important: transmission technology and scale.
❖ There are two transmission technologies: Broadcast links and Point-to-Point links.

❖ Point-to-point links connect individual pairs of machines. To go from the source to the destination on a
network made up of point-to-point links, short messages, called packets in certain contexts, may have to first
visit one or more intermediate machines.
❖ Often multiple routes of different lengths are possible, so finding good ones is important in point-to-point
networks.
❖ Point-to-point transmission with exactly one sender and exactly one receiver is sometimes called unicasting.
Example browsing a website.
❖ Broadcast links: single communication channel shared by all machines, for example wireless network.

❖ An address field within each packet specifies the intended recipient. Upon receiving a packet, a machine
checks the address field. If the packet is intended for the receiving machine, that machine processes the
packet; if the packet is intended for some other machine, it is just ignored.
❖ Some broadcast systems also support transmission to a subset of the machines, which is known as
multicasting.
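
The address check described above can be sketched as follows; the addresses, the special broadcast value, and the group identifiers are invented for illustration only.

```python
# Sketch of how a machine on a broadcast link decides whether to keep a packet.
BROADCAST = "ff:ff"          # assumed "all machines" address

def accept_packet(packet_dst, my_address, my_groups):
    if packet_dst == my_address:      # unicast: meant for this machine
        return True
    if packet_dst == BROADCAST:       # broadcast: meant for everyone
        return True
    if packet_dst in my_groups:       # multicast: meant for a subset it belongs to
        return True
    return False                      # meant for some other machine: ignore it

print(accept_packet("aa:01", "aa:01", set()))        # True  (unicast)
print(accept_packet("ff:ff", "aa:02", set()))        # True  (broadcast)
print(accept_packet("01:05", "aa:02", {"01:05"}))    # True  (multicast group member)
print(accept_packet("aa:03", "aa:02", set()))        # False (ignored)
```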


Topologies:

1. Bus Topology:
• Description: In a bus topology, all devices share a common communication medium, or "bus."
Data is transmitted along the bus, and each device has a unique address. When a device wants to
communicate, it sends the data onto the bus, and all devices can see the transmission.
• Advantages: Simple and cost-effective for small networks.
• Disadvantages: Performance degrades as more devices are added, and the failure of the bus can
disrupt the entire network.
2. Star Topology:
• Description: In a star topology, each device is connected to a central hub or switch. All
communication passes through the central point, making it easy to manage and identify faults.
• Advantages: Easy to install and manage. If one connection fails, it doesn't affect the rest of the
network.
• Disadvantages: Dependence on the central hub; if it fails, the entire network is affected.
3. Ring Topology:
• Description: In a ring topology, each device is connected to exactly two other devices, forming a
closed loop. Data travels in one direction around the ring until it reaches its destination.
• Advantages: Simple and easy to install. Equal access to the network for all devices.
• Disadvantages: A failure in one device or connection can disrupt the entire network.
4. Mesh Topology:
• Description: In a mesh topology, every device is connected to every other device in the network.
This creates multiple paths for data transmission, increasing redundancy and fault tolerance.
• Advantages: High fault tolerance and redundancy. Data can take multiple routes to reach its
destination.
• Disadvantages: Expensive and complex to install and manage, especially as the number of devices
increases.
5. Hybrid Topology:
• Description: A hybrid topology is a combination of two or more different topologies. For example,
a network might combine a star topology in one area with a bus topology in another.
• Advantages: Offers the benefits of multiple topologies, allowing for flexibility and scalability.
• Disadvantages: Complexity and cost can be higher than simpler topologies.
6. Tree Topology:
• Description: Tree topology is a combination of the bus and star topologies. It consists of groups of
star-configured networks connected to a linear bus backbone.
• Advantages: Scalable and can cover large geographical areas.
• Disadvantages: If the backbone fails, the entire network can be affected.


Types of network
PANs (Personal Area Networks):
❖ PANs let devices communicate over the range of a person.
❖ A common example is a wireless network that connects a computer with its peripherals; without wireless, this
connection must be made with cables.
❖ Many new users have a hard time finding the right cables and plugging them into the right places.
❖ To help these users, some companies got together to design a short-range wireless network called
Bluetooth to connect these components without wires.
❖ The idea is that if your devices have Bluetooth, then you need no cables. You just put them down, turn
them on, and they work together. For many people, this ease of operation is a big plus.
❖ In the simplest form, Bluetooth networks use the master-slave paradigm shown in the figure. The system
unit (the PC) is normally the master, talking to the mouse, keyboard, etc., as slaves. The
master tells the slaves what addresses to use, when they can broadcast, how long they can transmit, what
frequencies they can use, and so on.
❖ Bluetooth can also connect a headset to a mobile phone without cords, and it can let a digital music player
connect to other nearby devices without wires.

LAN (Local Area Network):

❖ It is a privately owned network that operates within and nearby a single building like a home, office or factory.
❖ When LANs are used by companies, they are called enterprise networks.

❖ Wireless LANs(IEEE 802.11) /Wireless Fidelity (WiFi): It is used in homes, older office buildings, cafeterias,
and other places where it is too much trouble to install cables.
❖ In these systems, every computer has a radio modem and an antenna that it uses to communicate with other
computers.

❖ An AP (Access Point), wireless router, or base station, relays packets between the wireless computers and also
between them and the Internet.
❖ Wireless LANs operate at speeds from 11 Mbps up to hundreds of Mbps.
❖ Wired LANs use a range of different transmission technologies. Most of them use copper wires, but some
use optical fiber. LANs are restricted in size, which means that the worst-case transmission time is bounded
and known in advance.
❖ Wired LANs run at speeds of 100 Mbps to 1 Gbps, have low delay and few errors. Newer LANs can operate
at up to 10 Gbps. It is just easier to send signals over a wire or through a fiber than through the air.
❖ The topology of many wired LANs is built from point-to-point links. IEEE 802.3, called Ethernet, is, by far, the
most common type of wired LAN. Fig. (b) Switched Ethernet. Each computer speaks the Ethernet protocol
and connects to a box called a switch with a point-to-point link. A switch has multiple ports, each of which
can connect to one computer. The job of the switch is to relay packets between computers that are attached to
it, using the address in each packet to determine which computer to send it to.
❖ Both wireless and wired broadcast networks can be divided into static and dynamic designs, depending on
how the channel is allocated.

❖ Static allocation would be to divide time into discrete intervals and use a round-robin algorithm, allowing each
machine to broadcast only when its time slot comes up.
❖ Static allocation wastes channel capacity when a machine has nothing to say during its allocated slot, so most
systems attempt to allocate the channel dynamically (i.e., on demand).
❖ Dynamic allocation methods for a common channel are either centralized or decentralized.
❖ Centralized channel allocation: There is a single entity, for example, the base station in cellular networks,
which determines who goes next. It might do this by accepting multiple packets and prioritizing them
according to some internal algorithm.
❖ Decentralized channel allocation: there is no central entity; each machine must decide for itself whether to
transmit.
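
The difference matters in practice. Below is a toy sketch of the static (round-robin) scheme described above, with invented station names and queued frames; it shows how channel capacity is wasted when a station has nothing to send.

```python
# Toy sketch of static round-robin channel allocation: each machine may
# transmit only in its own time slot (stations and frames are invented).
machines = ["A", "B", "C", "D"]
pending = {"A": ["a1", "a2"], "B": [], "C": ["c1"], "D": []}   # queued frames

for slot in range(8):
    m = machines[slot % len(machines)]        # fixed, repeating transmission order
    if pending[m]:
        print(f"slot {slot}: {m} sends {pending[m].pop(0)}")
    else:
        print(f"slot {slot}: {m} is silent -> channel capacity wasted")
```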


MAN (Metropolitan Area Network):

❖ It covers a city and best-known example is cable television networks available in many cities. These systems
grew from earlier community antenna systems used in areas with poor over-the-air television reception. In
those early systems, a large antenna was placed on top of a nearby hill and a signal was then piped to the
subscribers’ houses.

❖ At first, these were locally designed, ad hoc systems. The next step was television programming and even
entire channels designed for cable only. Often these channels were highly specialized, such as all news, all
sports, all cooking, all gardening, and so on.

❖ When the Internet began attracting a mass audience, the cable TV network operators began to realize that with
some changes to the system, they could provide two-way Internet service in unused parts of the spectrum.

❖ In the figure, both television signals and Internet traffic are fed into the centralized cable headend for subsequent
distribution to people’s homes.

❖ Recent developments in high-speed wireless Internet access have resulted in another MAN, which has been
standardized as IEEE 802.16 and is popularly known as WiMAX.

WAN (Wide Area Network):


❖ It spans a large geographical area, often a country or continent.
❖ In most WANs, the subnet consists of two distinct components: transmission lines and switching elements.
❖ Transmission lines (copper wire, optical fiber, or even radio links) move bits between machines.
❖ Switching elements (switches), are specialized computers that connect two or more transmission lines.
❖ When data arrives on an incoming line, the switching element must choose an outgoing line on which
to forward them.


Relation between hosts on LANs and the subnet


❖ The routers will usually connect different kinds of networking technology. The networks inside the offices
may be switched Ethernet, for example, while the long-distance transmission lines may be SONET links.

❖ VPN (Virtual Private Network): it provides flexible reuse of a resource (Internet connectivity).
❖ Its disadvantage is a lack of control over the underlying resources, and performance may vary with the
underlying Internet connection.

❖ The subnet operator is known as a network service provider and the offices are its customers. The
subnet operator will connect to other customers too, as long as they can pay and it can provide
service.
❖ A subnet operator is called an ISP (Internet Service Provider) and the subnet is an ISP network.
Its customers who connect to the ISP receive Internet service.

❖ How the network makes the decision as to which path to use is called the routing algorithm.
❖ How each router makes the decision as to where to send a packet next is called the forwarding
algorithm.
❖ Some WANs make heavy use of wireless technologies, e.g., satellite systems.
❖ The cellular telephone network is another example of a WAN that uses wireless technology.
❖ The first generation was analog and for voice only. The second generation was digital and for voice
only. The third generation is digital and is for both voice and data.
❖ Each cellular base station covers a distance much larger than a wireless LAN, with a range measured
in kilometers rather than tens of meters.
Internetworks:
❖ A collection of interconnected networks is called an internetwork or internet.
❖ The Internet uses ISP networks to connect enterprise networks, home networks, and many other
networks.
❖ There are two rules of thumb that are useful. First, if different organizations have paid to construct
different parts of the network and each maintains its part, we have an internetwork rather than a
single network.
❖ Second, if the underlying technology is different in different parts (e.g., broadcast versus point-to-
point and wired versus wireless), we probably have an internetwork.
❖ Gateways are distinguished by the layer at which they operate in the protocol hierarchy.

NETWORK SOFTWARE:
Protocol Hierarchies:

• To reduce their design complexity, most networks are organized as a stack of layers or levels,
each one built upon the one below it. The number of layers, the name of each layer, the
contents of each layer, and the function of each layer differ from network to network.

• The purpose of each layer is to offer certain services to the higher layers while shielding those
layers from the details of how the offered services are actually implemented. In a sense, each
layer is a kind of virtual machine, offering certain services to the layer above it.

• When layer n on one machine carries on a conversation with layer n on another machine, the
rules and conventions used in this conversation are collectively known as the layer n protocol.
Basically, a protocol is an agreement between the communicating parties on how
communication is to proceed.

A five-layer network is illustrated:

❖ In reality, no data are directly transferred from layer n on one machine to layer n on another machine.
Instead, each layer passes data and control information to the layer immediately below it, until the lowest
layer is reached.
❖ Below layer 1 is the physical medium through which actual communication occurs. Virtual communication
is shown by dotted lines and physical communication by solid lines.
❖ Interface: defines which primitive operations and services the lower layer makes available to the upper one.
❖ Clear-cut interfaces also make it simpler to replace one layer with a completely different protocol or
implementation.
❖ A set of layers and protocols is called network architecture. The specification of architecture must contain
enough information to allow an implementer to write the program or build the hardware for each layer so that
it will correctly obey the appropriate protocol.
❖ A list of the protocols used by a certain system, one protocol per layer, is called a protocol stack.


❖ In this example, M is split into two parts, M1 and M2, that will be transmitted separately.
Layer 3 decides which of the outgoing lines to use and passes the packets to layer 2. Layer 2 adds to
each piece not only a header but also a trailer, and gives the resulting unit to layer 1 for physical
transmission.
❖ At the receiving machine the message moves upward, from layer to layer, with headers being
stripped off as it progresses. None of the headers for layers below n are passed up to layer n.

Design Issues for the Layers:


Some of the key design issues that occur in computer networks are present in several layers. The
following briefly describes some of the more important ones.


• Identifying senders and receivers - some form of addressing is needed in order to specify a
specific source and destination.
• Rules for data transfer - The protocol must also determine the direction of data flow, how
many logical channels the connection corresponds to and what their priorities are. Many
networks provide at least two logical channels per connection, one for normal data and one for
urgent data.
• Error control – when circuits are not perfect, both ends of the connection must agree on
which error-detecting and error-correcting codes are being used.

• Sequencing - protocol must make explicit provision for the receiver to allow the pieces to be
reassembled properly.

• Flow Control - how to keep a fast sender from swamping a slow receiver with data. This is
done by feedback-based (receiver to sender) or agreed-on transmission rate.

• Segmentation and reassembly - a problem at several levels is the inability of all processes to accept
arbitrarily long messages. This leads to mechanisms for disassembling, transmitting, and then
reassembling messages.
• Multiplexing and demultiplexing – to share the communication medium among several users.
• Routing - When there are multiple paths between source and destination, a route must be
chosen.

Connection-Oriented Versus Connectionless Service:


❖ In connection-oriented network service, the service user first establishes a connection, uses the
connection, and then releases the connection.
❖ Connection acts like a tube: the sender pushes objects (bits) in at one end, and the receiver takes
them out at the other end. In most cases the order is preserved so that the bits arrive in the order
they were sent.
❖ In some cases when a connection is established, the sender, receiver, and subnet conduct a
negotiation about the parameters to be used, such as max message size, QoS required etc. Typically,
one side makes a proposal, and the other side can accept it, reject it, or make a counter- proposal.
❖ Connectionless service is modeled after the postal system. Each message (letter) carries the full
destination address, and each one is routed through the intermediate nodes inside the system
independent of all the subsequent messages.
❖ There are different names for messages in different contexts; a packet is a message at the network
layer. When the intermediate nodes receive a message in full before sending it on to the next node,
this is called store-and-forward switching.
❖ The alternative, in which the onward transmission of a message at a node starts before it is
completely received by the node, is called cut-through switching.
❖ Unreliable (meaning not acknowledged) connectionless service is often called datagram service, in


analogy with telegram service, which also does not return an acknowledgement to the sender.

Service Primitives (Operations):


❖ These primitives tell the service to perform some action or report on an action taken by peer entity.
The primitives for connection-oriented service are different from those of connectionless service.

❖ First, the server executes LISTEN to indicate that it is prepared to accept incoming connections. A
common way to implement LISTEN is to make it a blocking system call. After executing the
primitive, the server process is blocked until a request for connection appears.
❖ Next, the client process executes CONNECT to establish a connection with the server. The
CONNECT call needs to specify who to connect to, so it might have a parameter giving the server’s
address. Client is suspended until there is a response.

❖ When the packet arrives at the server, the operating system sees that the packet is requesting a
connection. It checks to see if there is a listener, and if so it unblocks the listener. The server process

can then establish the connection with the ACCEPT call.


❖ The next step is for the server to execute RECEIVE to prepare to accept the first request.
Normally, the server does this immediately upon being released from the LISTEN, before
the acknowledgement can get back to the client. The RECEIVE call blocks the server.
❖ Then the client executes SEND to transmit its request followed by the execution of RECEIVE to get
the reply.
❖ When the client is done, it executes DISCONNECT to terminate the connection.
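
These primitives correspond closely to the Berkeley socket calls. The sketch below maps them one-to-one; the port number and message contents are assumptions, and both ends run in a single process only to keep the example self-contained.

```python
# Sketch mapping the primitives above onto socket calls.
# Port number and messages are hypothetical; both ends run in one process.
import socket
import threading

PORT = 6000
ready = threading.Event()        # only so the client does not connect too early

def server():
    srv = socket.socket()
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)                          # LISTEN: prepare to accept connections
    ready.set()
    conn, _ = srv.accept()                 # ACCEPT: establish the connection
    request = conn.recv(1024)              # RECEIVE: block until the request arrives
    conn.sendall(b"reply to " + request)   # SEND the reply
    conn.close()                           # DISCONNECT (server side)
    srv.close()

def client():
    ready.wait()
    cli = socket.socket()
    cli.connect(("127.0.0.1", PORT))       # CONNECT: suspend until the server responds
    cli.sendall(b"request 1")              # SEND the request
    print(cli.recv(1024))                  # RECEIVE the reply
    cli.close()                            # DISCONNECT

t = threading.Thread(target=server)
t.start()
client()
t.join()
```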

The Relationship of Services to Protocols


❖ A service is a set of primitives (operations) that a layer provides to the layer above it. It defines
what operations the layer is prepared to perform on behalf of its users. A service relates to an
interface between two layers, with the lower layer being the service provider and the upper layer
being the service user.
❖ A service is like an abstract data type or an object in an object-oriented language. It defines
operations that can be performed on an object but does not specify how these operations are
implemented.
❖ A protocol, in contrast, is a set of rules governing the format and meaning of the packets, or
messages that are exchanged by the peer entities within a layer.
❖ In contrast, a protocol relates to the implementation of the service and as such is not visible to the
user of the service.

REFERENCE MODELS
The OSI Reference Model

❖ This model is based on a proposal developed by the International Standards Organization (ISO) as a
first step toward international standardization of the protocols used in the various layers (Day and
Zimmermann, 1983).
❖ It was revised in 1995 (Day, 1995). The model is called the ISO OSI (Open Systems Interconnection)
Reference Model because it deals with connecting open systems—that is, systems that are open for
communication with other systems.
❖ The OSI model has seven layers. The principles that were applied to arrive at the seven
layers can be briefly summarized as follows:

1. A layer should be created where a different abstraction is needed.


2. Each layer should perform a well-defined function.
3. The function of each layer should be chosen with an eye toward defining internationally
standardized protocols.
4. The layer boundaries should be chosen to minimize the information flow across the
interfaces.
5. The number of layers should be large enough that distinct functions need not be thrown
together in the same layer out of necessity and small enough that the architecture does
not become unwieldy.

1 The Physical Layer


❖ This layer is concerned with transmitting a raw bit stream (a sequence of 0s and 1s) over a communication
channel.
❖ The design issues deal with mechanical, electrical, and timing interfaces, as well as the physical
transmission medium, which lies below the physical layer.

2 The Data Link Layer


❖ This layer transforms a raw transmission facility into a line that appears free of undetected


transmission errors.
❖ It accomplishes this task by having the sender break up the input data into data frames (typically a
few hundred or a few thousand bytes) and transmit the frames sequentially.
❖ If the service is reliable, the receiver confirms correct receipt of each frame by sending back an
acknowledgement frame.
• Another issue in the data link layer is how to keep a fast transmitter from drowning a slow receiver in
data. Some traffic regulation mechanisms are used.
• Medium access control sub layer deals with how to control access to the shared channel.

3 The Network Layer


❖ Controls the operation of the subnet.
❖ A key design issue is determining how packets are routed from source to destination.
❖ Routes can be based on static tables that are ‘‘wired into’’ the network and rarely changed, or more
often they can be updated automatically to avoid failed components.
❖ If too many packets are present in the subnet at the same time, they will get in one another’s way,
forming bottlenecks. Handling congestion is also a responsibility of the network layer.
❖ The network layer also allows heterogeneous networks to be interconnected.

4 The Transport Layer


❖ It accepts data from above it, splits it up into smaller units if need be, passes these to the network layer,
and ensures that the pieces all arrive correctly at the other end.
❖ It also determines what type of service to provide to the session layer, ultimately, to the users of the
network.
❖ The most popular type of transport connection is an error-free point-to-point channel that delivers
messages or bytes in the order in which they were sent.
❖ It also provides the service of transporting isolated messages with no guarantee about the order of
delivery, and the broadcasting of messages to multiple destinations.
❖ The transport layer is a true end-to-end layer; it carries data all the way from the source to the
destination.

5 The Session Layer


❖ It allows users on different machines to establish sessions between them.
❖ Sessions offer various services, including dialog control (keeping track of whose turn it is to transmit),
token management (preventing two parties from attempting the same critical operation
simultaneously), and synchronization (check pointing long transmissions to allow them to pick up
from where they left off in the event of a crash and subsequent recovery).

6 The Presentation Layer


❖ This layer is concerned with the syntax and semantics of the information transmitted.
❖ In order to make it possible for computers with different internal data representations to communicate,
the data structures to be exchanged can be defined in an abstract way, along with a standard encoding
to be used ‘‘on the wire.’’

❖ The presentation layer manages these abstract data structures and allows higher-level data structures
(e.g., banking records) to be defined and exchanged.

7 The Application Layer


❖ It contains a variety of protocols that are commonly needed by users.
❖ One widely used application protocol is HTTP (Hyper Text Transfer Protocol), which is the basis
for the World Wide Web. When a browser wants a Web page, it sends the name of the page it wants
to the server hosting the page using HTTP. The server then sends the page back.
❖ Other application protocols are used for file transfer, electronic mail, and network news.

The TCP/IP Reference Model


❖ This reference Model is a four-layered suite of communication protocols, developed by the DoD
(Department of Defense) in the 1960s. It is named after the two main protocols that are used in the
model, namely, TCP and IP.

1 The Link Layer


❖ It describes what links such as serial lines and classic Ethernet must do to meet the needs of this
connectionless internet layer.
❖ It is not really a layer at all, in the normal sense of the term, but rather an interface between hosts and
transmission links.

2 The Internet Layer


❖ Its job is to permit hosts to inject packets into any network and have them travel independently to the
destination.
❖ Packets may arrive in a completely different order than they were sent, in which case it is the job of
higher layers to rearrange them, if in-order delivery is desired.
❖ This layer defines an official packet format and protocol called IP (Internet Protocol), plus a
companion protocol called ICMP (Internet Control Message Protocol) that helps it function.
❖ The job of the internet layer is to deliver IP packets where they are supposed to go. Packet routing
is clearly a major issue here, as is congestion (though IP has not proven effective at avoiding
congestion).

3 The Transport Layer


❖ It is designed to allow peer entities on the source and destination hosts to carry on a conversation, just
as in the OSI transport layer.
❖ Two end-to-end transport protocols have been defined here: TCP and UDP.
❖ TCP (Transmission Control Protocol) is a reliable connection-oriented protocol that allows a byte
stream originating on one machine to be delivered without error on any other machine in the internet.
❖ It segments the incoming byte stream into discrete messages and passes each one on to the internet
layer. At the destination, the receiving TCP process reassembles the received messages into the output
stream.
❖ TCP also handles flow control to make sure a fast sender cannot swamp a slow receiver with more
messages than it can handle.
❖ UDP (User Datagram Protocol), is an unreliable, connectionless protocol for applications that do not
want TCP’s sequencing or flow control and wish to provide their own.
❖ It is also widely used for one-shot, client-server-type request-reply queries and applications in which
prompt delivery is more important than accurate delivery, such as transmitting speech or video.

4 The Application Layer


❖ It contains all the higher-level protocols. File transfer (FTP), and electronic mail (SMTP). Domain
Name System (DNS), for mapping host names onto their network addresses, HTTP, the protocol for
fetching pages on the World Wide Web, RTP, the protocol for delivering real-time media such as
voice or movies.

A Comparison of the OSI and TCP/IP Reference Models


❖ Three concepts are central to the OSI model:
1. Services: A service tells the layer's semantics, what the layer does, not how entities above it access it or how the
layer works.
2. Interfaces: It specifies what the parameters are and what results to expect.

3. Protocols: provide the offered services.


❖ The TCP/IP model did not originally clearly distinguish between services, interfaces, and protocols,
although people have tried to retrofit it after the fact to make it more OSI-like.
❖ The protocols in the OSI model are better hidden than in the TCP/IP model and can be replaced
relatively easily as the technology changes.
❖ With TCP/IP the reverse was true: the protocols came first, and the model was really just a
description of the existing protocols. There was no problem with the protocols fitting the model.
❖ The OSI model has seven layers and the TCP/IP model has four. Both have (inter)network, transport,
and application layers, but the other layers are different.


❖ The OSI model supports both connectionless and connection-oriented communication in the network
layer, but only connection-oriented communication in the transport layer, where it counts.
❖ The TCP/IP model supports only one mode in the network layer (connectionless) but both in the
transport layer, giving the users a choice.

PHYSICAL LAYER: GUIDED TRANSMISSION MEDIA


Transmission media, also known as communication channels, are the physical pathways that transport
data signals from one device to another in a network. Guided transmission media refers to those
communication channels that use a physical, wired medium to transmit data. Here are some
commonly used guided transmission media:

1. Magnetic Media :
❖ One of the most common ways to transport data from one computer to another is to write them onto
magnetic tape or removable media (e.g., recordable DVDs), physically transport the tape or disks to
the destination machine and read them back in again.
❖ Although this method is not as sophisticated as using a geosynchronous communication satellite, it is
often more cost effective, especially for applications in which high bandwidth or cost per bit
transported is the key factor.
❖ An industry-standard Ultrium tape can hold 800 gigabytes. A box 60 x 60 x 60 cm can hold about
1000 of these tapes, for a total capacity of 800 terabytes, or 6400 terabits (6.4 petabits).
❖ A box of tapes can be delivered anywhere in the United States in 24 hours by Federal Express and
other companies. The effective bandwidth of this transmission is 6400 terabits/86,400 sec, or a bit
over 70 Gbps.
❖ If the destination is only an hour away by road, the bandwidth is increased to over 1700 Gbps. No
computer network can even approach this. Of course, networks are getting faster, but tape densities
are increasing, too.
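
The effective-bandwidth figure quoted above follows directly from the numbers already given; a worked restatement of the arithmetic (no new data) is:

```latex
1000 \times 800\ \text{GB} = 800\ \text{TB} = 6400\ \text{Tbit},
\qquad
\frac{6400 \times 10^{12}\ \text{bit}}{86{,}400\ \text{s}} \approx 74\ \text{Gbit/s}.
```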


❖ The cost of an Ultrium tape is around $40 when bought in bulk. A tape can be reused at least 10
times, so the tape cost is maybe $4000 per box per use.

2. Twisted Pairs

❖ A twisted pair consists of two insulated copper wires, typically about 1 mm thick. The wires are
twisted together in a helical form, just like a DNA molecule.

❖ Twisting is done because two parallel wires constitute a fine antenna. When the wires are twisted, the
waves from different twists cancel out, so the wire radiates less effectively.

❖ A signal is usually carried as the difference in voltage between the two wires in the pair. This
provides better immunity to external noise because the noise tends to affect both wires the same,
leaving the differential unchanged.
❖ The most common application of the twisted pair is the telephone system.

❖ Twisted pairs can run several kilometers without amplification, but for longer distances the signal
becomes too attenuated and repeaters are needed.

❖ The bandwidth depends on the thickness of the wire and the distance traveled, but several
megabits/sec can be achieved for a few kilometers in many cases.

❖ Twisted-pair cabling comes in several varieties. A Category 5 (‘‘Cat 5’’) twisted pair consists of two
insulated wires gently twisted together. Four such pairs are typically grouped in a plastic sheath to
protect the wires and keep them together.
❖ Links that can be used in both directions at the same time, like a two-lane road, are called full-duplex
links.
❖ Links that can be used in either direction, but only one way at a time, like a single-track railroad line,
are called half-duplex links.
❖ Links that allow traffic in only one direction, like a one-way street, are called simplex links.
❖ Cat 5 replaced earlier Category 3 cables but has more twists per meter. More twists result in less
crosstalk and a better-quality signal over longer distances, making the cables more suitable for
high-speed computer communication, especially 100-Mbps and 1-Gbps Ethernet LANs.

❖ Category 6 or even Category 7 has more stringent specifications to handle signals with greater
bandwidths.

3. Coaxial Cable
❖ It has better shielding and greater bandwidth than unshielded twisted pairs, so it can span longer
distances at higher speed.
❖ There are two kinds, one kind, 50-ohm cable, is commonly used when it is intended for digital
transmission from the start. The other kind, 75-ohm cable, is commonly used for analog transmission
and cable television.
❖ A coaxial cable consists of a stiff copper wire as the core, surrounded by an insulating material. The
insulator is encased by a cylindrical conductor, often as a closely woven braided mesh.
❖ The outer conductor is covered in a protective plastic sheath.

❖ The construction and shielding of the coaxial cable give it a good combination of high bandwidth
and excellent noise immunity. The bandwidth possible depends on the cable quality and length.
❖ Coaxial cables used to be widely used within the telephone system for long-distance lines but have
now largely been replaced by fiber optics on long-haul routes. Coax is still widely used for cable
television and metropolitan area networks, however.

4. Power Lines
❖ Power lines deliver electrical power to houses, and electrical wiring within houses distributes the
power to electrical outlets. Power lines have been used by electricity companies for low-rate communication such
as remote metering for many years, as well as in the home to control devices.
❖ Simply plug a TV and a receiver into the wall, which you must do anyway because they need power,
and they can send and receive movies over the electrical wiring.
❖ The difficulty with using household electrical wiring for a network is that it was designed to
distribute power signals.
❖ Electrical signals are sent at 50–60 Hz and the wiring attenuates the much higher frequency (MHz)
signals needed for high-rate data communication.
❖ Transient currents when appliances switch on and off create electrical noise over a wide range of
frequencies.


❖ Despite these difficulties, it is practical to send at least 100 Mbps over typical household electrical
wiring by using communication schemes that resist impaired frequencies and bursts of errors.

5. Fiber Optics
❖ In contrast to copper wires, the achievable bandwidth with fiber technology is in excess of 50,000 Gbps (50 Tbps),
and we are nowhere near reaching this limit.
❖ The current practical limit of around 100 Gbps is due to our inability to convert between electrical
and optical signals any faster.
❖ Fiber optics are used for long-haul transmission in network backbones, high-speed LANs and high-
speed Internet access such as FttH (Fiber to the Home).
❖ An optical transmission system has three key components: the light source, the transmission medium,
and the detector.
❖ Conventionally, a pulse of light indicates a 1 bit and the absence of light indicates a 0 bit. The
transmission medium is an ultra-thin fiber of glass. The detector generates an electrical pulse when
light falls on it.

Transmission of Light through Fiber:

Optical fibers are made of glass, which, in turn, is made from sand, an inexpensive raw material
available in unlimited amounts.

Fiber Cables:
❖ Fiber optic cables are similar to coax, except without the braid. At the center is the glass core through
which the light propagates. In multimode fibers, the core is typically 50 microns in diameter, about
the thickness of a human hair. In single-mode fibers, the core is 8 to 10 microns.


❖ The core is surrounded by a glass cladding with a lower index of refraction than the core, to keep all
the light in the core. Next comes a thin plastic jacket to protect the cladding. Fibers are typically
grouped in bundles, protected by an outer sheath.

WIRELESS TRANSMISSION
The Electromagnetic Spectrum
❖ When electrons move, they create electromagnetic waves that can propagate through space . These
waves were predicted by the British physicist James Clerk Maxwell in 1865 and first observed by
the German physicist Heinrich Hertz in 1887.
❖ The number of oscillations per second of a wave is called its frequency, f, and is measured in Hz.
The distance between two consecutive maxima (or minima) is called the wavelength λ (lambda).
❖ When an antenna of the appropriate size is attached to an electrical circuit, the electromagnetic
waves can be broadcast efficiently and received by a receiver some distance away.
❖ In a vacuum, all electromagnetic waves travel at the same speed, no matter what their frequency. This
speed, called the speed of light, c, is approximately 3 × 10^8 m/sec, or about 1 foot (30 cm) per nanosecond.
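
Frequency f, wavelength λ, and the speed of light c are tied together by λf = c. A worked example (the 2.4 GHz value is chosen here only for illustration):

```latex
\lambda f = c
\quad\Longrightarrow\quad
\lambda = \frac{c}{f}
= \frac{3 \times 10^{8}\ \text{m/s}}{2.4 \times 10^{9}\ \text{Hz}}
\approx 0.125\ \text{m} = 12.5\ \text{cm}.
```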

❖ In frequency hopping spread spectrum, the transmitter hops from frequency to frequency
hundreds of times per second. It is popular for military communication because it makes
transmissions hard to detect and next to impossible to jam.
❖ Direct sequence spread spectrum uses a code sequence to spread the data signal over a wider
frequency band. It is widely used commercially as a spectrally efficient way to let multiple
signals share the same frequency band.


Radio Transmission
❖ Radio frequency (RF) waves are easy to generate, can travel long distances, and can penetrate
buildings easily, so they are widely used for communication, both indoors and outdoors.
❖ Radio waves also are omnidirectional, meaning that they travel in all directions from the source, so
the transmitter and receiver do not have to be carefully aligned physically.
❖ The properties of radio waves are frequency dependent. At low frequencies, radio waves pass
through obstacles well, but the power falls off sharply with distance from the source—at least as fast as
1/r² in air—as the signal energy is spread more thinly over a larger surface. This attenuation is
called path loss.

Microwave Transmission
❖ Before fiber optics, for decades these microwaves formed the heart of the long-distance telephone
transmission system.
❖ In fact, MCI, one of AT&T’s first competitors after it was deregulated, built its entire system with
microwave communications passing between towers tens of kilometers apart. Even the company’s
name reflected this (MCI stood for Microwave Communications, Inc.).
❖ Microwaves travel in a straight line, so if the towers are too far apart, the earth will get in the way.
❖ Unlike radio waves at lower frequencies, microwaves do not pass through buildings well. In addition,
even though the beam may be well focused at the transmitter, there is still some divergence in space.
❖ The demand for more and more spectrum drives operators to yet higher frequencies. Bands up to 10
GHz are now in routine use, but at about 4 GHz a new problem sets in: absorption by water.
❖ Microwave communication is so widely used for long-distance telephone communication, mobile
phones, television distribution, and other purposes that a severe shortage of spectrum has developed.
❖ Microwaves are also relatively inexpensive. Putting up two simple towers and putting antennas on
each one may be cheaper than burying 50 km of fiber through a congested urban area or up over a
mountain, and it may also be cheaper than leasing the telephone company’s fiber.

Infrared Transmission
❖ Unguided infrared waves are widely used for short-range communication. The remote controls used
for televisions, VCRs, and stereos all use infrared communication.


❖ On the other hand, the fact that infrared waves do not pass through solid walls well is also a plus. It
means that an infrared system in one room of a building will not interfere with a similar system in
adjacent rooms or buildings: you cannot control your neighbor’s television with your remote control.
❖ Furthermore, security of infrared systems against eavesdropping is better than that of radio systems
precisely for this reason.

❖ Infrared communication has a limited use on the desktop, for example, to connect notebook
computers and printers with the IrDA (Infrared Data Association) standard, but it is not a major
player in the communication game.

Light Transmission
❖ Unguided optical signaling or free-space optics has been in use for centuries.
❖ Optical signaling using lasers is inherently unidirectional, so each end needs its own laser and its own
Photodetector. This scheme offers very high bandwidth at very low cost and is relatively secure
because it is difficult to tap a narrow laser beam.
❖ The laser’s strength, a very narrow beam, is also its weakness here. Aiming a laser beam 1 mm wide
at a target the size of a pin head 500 meters away requires the marksmanship of a latter-day Annie
Oakley.

❖ To add to the difficulty, wind and temperature changes can distort the beam and laser beams also
cannot penetrate rain or thick fog, although they normally work well on sunny days.
❖ Unguided optical communication may seem like an exotic networking technology today, but it might
soon become much more prevalent.
❖ Communicating with visible light in this way is inherently safe and creates a low-speed network in
the immediate vicinity of the display.


MODULE – 2
The Data link layer
DATA LINK LAYER DESIGN ISSUES

➢ The data link layer uses the services of the physical layer to send and receive bits over communication channels.

➢ Functions of data link layer include:


• Providing a well-defined service interface to the network layer.
• Dealing with transmission errors.
• Regulating the flow of data so that slow receivers are not swamped by fast senders.
➢ To accomplish these goals, the data link layer takes the packets it gets from the network layer and encapsulates
them into frames for transmission. Each frame contains a frame header, a payload field for holding the packet,
and a frame trailer, as illustrated in Fig. 3-1. Frame management forms the heart of what the data link layer does.

The following are the design issues and functions of the data link layer:
1. Services Provided to the Network Layer
2. Framing
3. Error Control
4. Flow Control

Services Provided to the Network Layer

➢ The function of the data link layer is to provide services to the network layer. The principal service is transferring
data from the network layer on the source machine to the network layer on the destination machine.

➢ On the source machine is an entity (a process) in the network layer that hands some bits to the data link layer for
transmission to the destination.

➢ The job of the data link layer is to transmit the bits to the destination machine so they can be handed over to the
network layer there, as shown in Fig. 3-2(a). The actual transmission follows the path of Fig. 3-2(b)


➢ The data link layer can be designed to offer various services:

✓ Unacknowledged connectionless service.


✓ Acknowledged connectionless service.
✓ Acknowledged connection-oriented service.
➢ Unacknowledged connectionless service
✓ Consists of having the source machine send independent frames to the destination machine without
having the destination machine acknowledge them. Eg: Ethernet
✓ No logical connection is established beforehand or released afterward.
✓ If a frame is lost due to noise on the line, no attempt is made to detect the loss or recover from it in
the data link layer.
✓ This class of service is appropriate when the error rate is very low, so recovery is left to higher layers.
It is also appropriate for real-time traffic, such as voice, in which late data are worse than bad data.
➢ Acknowledged connectionless service
✓ No logical connection is established beforehand or released afterward.
✓ But each frame sent is individually acknowledged. In this way, the sender knows whether a frame has
arrived correctly or been lost. If it has not arrived within a specified time interval, it can be sent again.
This service is useful over unreliable channels, such as wireless systems. Eg: 802.11 (WiFi)
➢ Acknowledged connection-oriented service
✓ The most sophisticated service the data link layer can provide to the network layer is connection-oriented
service.
✓ With this service, the source and destination machines establish a connection before any data are
transferred. Each frame sent over the connection is numbered, and the data link layer guarantees that each
frame sent is indeed received.
✓ Furthermore, it guarantees that each frame is received exactly once and that all frames are received in the

right order.
✓ When connection-oriented service is used, transfers go through three distinct phases.
• First, connection is established by having both sides initialize variables and counters needed to keep
track of which frames have been received and which ones have not.
• Second, one or more frames are actually transmitted.
• Third, connection is released, freeing up the variables, buffers, and other resources used to maintain
the connection.
Framing
➢ To provide service to the network layer, the data link layer must use the service provided to it by the physical
layer. The physical layer may introduce errors if the channel is noisy, so the data link layer has to detect and, if
necessary, correct them. To do so, it adds a checksum to each frame.
➢ Breaking up the bit stream into frames is more difficult than it at first appears.
➢ Framing methods:

1. Byte count.
2. Flag bytes with byte stuffing.
3. Flag bits with bit stuffing.
4. Physical layer coding violations.

Byte count

This method uses a field in the header to specify the number of bytes in the frame. When the data link layer at
the destination sees the byte count, it knows how many bytes follow and hence where the end of the frame is.
This technique is shown in Fig. 3-3(a) for four small example frames of sizes 5, 5, 8, and 8 bytes, respectively.
The trouble with this algorithm is that the count can be garbled by a transmission error. For example, if the byte
count of 5 in the second frame of Fig. 3-3(b) becomes a 7 due to a single bit flip, the destination will get out of
synchronization. It will then be unable to locate the correct start of the next frame.


Flag bytes with byte stuffing

This method resolves the problem of resynchronization after an error by having each frame start and end with
special bytes. Often the same byte, called a flag byte, is used as both the starting and ending delimiter. This byte
is shown in Fig. 3-4(a) as FLAG. Two consecutive flag bytes indicate the end of one frame and the start of the
next. Thus, if the receiver ever loses synchronization it can just search for two flag bytes to find the end of the
current frame and the start of the next frame.
However, there is still a problem we have to solve. It may happen that the flag byte occurs in the data, especially
when binary data such as photographs or songs are being transmitted. This situation would interfere with the
framing. One way to solve this problem is to have the sender’s data link layer insert a special escape byte (ESC)
just before each ‘‘accidental’’ flag byte in the data. Thus, a framing flag byte can be distinguished from one in
the data by the absence or presence of an escape byte before it. The data link layer on the receiving end removes
the escape bytes before giving the data to the network layer. This technique is called byte stuffing.

If an escape byte occurs in the middle of the data, it is also stuffed with an escape byte. At the receiver, the first
escape byte is removed, leaving the data byte that follows it (which might be another escape byte or the flag byte).
Some examples are shown in Fig. 3-4(b).
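
A sketch of byte stuffing and destuffing is given below. The FLAG and ESC values (0x7E and 0x7D) are chosen only for familiarity; the scheme itself does not depend on them.

```python
# Sketch of byte stuffing with a FLAG delimiter and an ESC escape byte.
FLAG, ESC = 0x7E, 0x7D                     # illustrative values

def stuff(payload: bytes) -> bytes:
    body = bytearray([FLAG])               # opening flag
    for b in payload:
        if b in (FLAG, ESC):
            body.append(ESC)               # escape an accidental FLAG or ESC
        body.append(b)
    body.append(FLAG)                      # closing flag
    return bytes(body)

def unstuff(frame: bytes) -> bytes:
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                         # drop the escape byte ...
        out.append(body[i])                # ... and keep the byte that follows it
        i += 1
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])   # payload containing FLAG and ESC
assert unstuff(stuff(data)) == data
print(stuff(data).hex())
```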

Bit stuffing
The third method of delimiting the bit stream gets around a disadvantage of byte stuffing, which is that it is tied
to the use of 8-bit bytes. Framing can also be done at the bit level, so frames can contain an arbitrary number
of bits made up of units of any size. It was developed for the once very popular HDLC (High-level Data Link
Control) protocol. Each frame begins and ends with a special bit pattern, 01111110 or 0x7E in hexadecimal.
This pattern is a flag byte. Whenever the sender’s data link layer encounters five consecutive 1s in the data, it
automatically stuffs a 0 bit into the outgoing bit stream. This bit stuffing is analogous to byte stuffing, in which
an escape byte is stuffed into the outgoing character stream before a flag byte in the data. It also ensures a
minimum density of transitions that help the physical layer maintain synchronization. USB (Universal Serial Bus)
uses bit stuffing for this reason.
When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it automatically destuffs (i.e.,
deletes) the 0 bit. Just as byte stuffing is completely transparent to the network layer in both computers, so is bit
stuffing. If the user data contain the flag pattern, 01111110, this flag is transmitted as 011111010 but stored in

the receiver’s memory as 01111110. Figure 3-5 gives an example of bit stuffing.

With bit stuffing, the boundary between two frames can be unambiguously recognized by the flag pattern.
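
A minimal sketch of the stuffing rule follows, with bits modeled as a string. The destuffer assumes it is handed only stuffed payload (in a real receiver a sixth consecutive 1 would mark a flag or an error).

```python
# Sketch of bit stuffing: after five consecutive 1s the sender inserts a 0;
# the receiver deletes the 0 that follows five consecutive 1s.
def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")            # stuff a 0 after five consecutive 1s
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1                     # skip the stuffed 0 that must follow
            run = 0
        i += 1
    return "".join(out)

data = "01111110"                      # the flag pattern appearing in user data
print(bit_stuff(data))                 # -> 011111010, as in the text
assert bit_unstuff(bit_stuff(data)) == data
```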

Physical layer coding violations


➢ The last method of framing exploits the fact that the encoding of bits as signals often includes redundancy to help the receiver.
This redundancy means that some signals will not occur in regular data. For example, in the 4B/5B line
code 4 data bits are mapped to 5 signal bits to ensure sufficient bit transitions. This means that 16 out of
the 32 signal possibilities are not used. We can use some reserved signals to indicate the start and end of
frames. In effect, we are using ‘‘coding violations’’ to delimit frames. The beauty of this scheme is that,
because they are reserved signals, it is easy to find the start and end of frames and there is no need to
stuff the data.

Error Control

➢ Next task of Data link layer is to make sure all frames are eventually delivered to the network layer at the
destination and in the proper order.
➢ The usual way to ensure reliable delivery is to provide the sender with some feedback about what is
happening at the other end of the line. If the sender receives a positive acknowledgement about a frame,
it knows the frame has arrived safely. On the other hand, a negative acknowledgement means that
something has gone wrong and the frame must be transmitted again.
➢ But certain frames can go missing due to noise in the signal. If an acknowledgement is lost, the sender will not know what to do. That is where timers are useful.
When the sender transmits a frame, it generally also starts a timer. The timer is set to expire after an
interval long enough for the frame to reach the destination, be processed there, and have the
acknowledgement propagate back to the sender. Normally, the frame will be correctly received and the
acknowledgement will get back before the timer runs out, in which case the timer will be canceled.
➢ During retransmissions the receiver may get duplicate frames. To prevent this from happening, sequence
numbers are assigned to outgoing frames, so that the receiver can distinguish retransmissions from
originals.
➢ The whole issue of managing the timers and sequence numbers so as to ensure that each frame is
ultimately passed to the network layer at the destination exactly once, no more and no less, is an important
part of the duties of the data link layer (and higher layers)


Flow Control

➢ Another important design issue that occurs in the data link layer (and higher layers as well) is what to do with a sender that systematically wants to transmit frames faster than the receiver can accept them. The sender should send data at a rate the receiver is capable of receiving; otherwise the receiver will lose frames.
➢ Two approaches are commonly used.
1. Feedback-based flow control - the receiver sends back information to the sender giving it permission to
send more data, or at least telling the sender how the receiver is doing.
2. Rate-based flow control - the protocol has a built-in mechanism that limits the rate at which senders may
transmit data, without using feedback from the receiver.

ERROR DETECTION AND CORRECTION


➢ Transmission errors are unavoidable. We have to learn to deal with them.
➢ Two basic strategies for dealing with errors. Both add redundant information to the data that is sent.
1. Include enough redundant information to enable the receiver to deduce what the transmitted data must have been. This approach uses error-correcting codes. The use of error-correcting codes is often referred to as FEC (Forward Error Correction).
2. Include only enough redundancy to allow the receiver to deduce that an error has occurred (but not which
error) and have it request a retransmission. It uses error-detecting codes.

On channels that are highly reliable, such as fiber, it is cheaper to use an error-detecting code and just retransmit the
occasional block found to be faulty. However, on channels such as wireless links that make many errors, it is better
to add redundancy to each block so that the receiver is able to figure out what the originally transmitted block was.
FEC is used on noisy channels because retransmissions are just as likely to be in error as the first transmission.
Errors are caused by extreme values of thermal noise that overwhelm the signal briefly and occasionally, giving rise to isolated single-bit errors; on other channels, errors tend to come in bursts.
PARITY METHOD

➢ It is an error detection technique


➢ appends a parity bit to the end of each word in the frame
➢ Even parity is used for asynchronous Transmission
➢ Odd parity is used for synchronous Transmission

At Sender Side-
• Consider the data unit 1001001. The total number of 1's in the data unit is counted, which is 3 in this example.
• For even parity, parity bit = 1 is added to the data unit to make total number of 1’s even.


Then, the code word 10010011 is transmitted to the receiver.


At Receiver Side-
• After receiving the code word, total number of 1’s in the code word is counted.
• Consider receiver receives the correct code word = 10010011.
• Even parity is used and total number of 1’s is even.
• So, receiver assumes that no error occurred in the data during the transmission.
• If one bit, or any odd number of bits, is erroneously inverted during transmission, the receiver will detect an error. However, if two bits, or any even number of bits, are inverted, the error goes undetected, as the sketch below illustrates.
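
A minimal sketch of even-parity generation and checking, using the 1001001 data unit from the example:

def add_even_parity(data_bits: str) -> str:
    # Append a parity bit so that the total number of 1s is even.
    parity = "1" if data_bits.count("1") % 2 else "0"
    return data_bits + parity

def check_even_parity(code_word: str) -> bool:
    # Accept the code word only if the number of 1s is even.
    return code_word.count("1") % 2 == 0

code = add_even_parity("1001001")         # -> "10010011", as in the example
assert check_even_parity(code)            # no error detected
assert not check_even_parity("10010010")  # a single inverted bit is detected
assert check_even_parity("10010000")      # two inverted bits slip through undetected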

Burst error: a burst error means that two or more bits in the data unit have changed. A single parity bit detects a burst only when it happens to invert an odd number of bits.

CRC Method
➢ CRC is an error detecting technique.
➢ CRC – Cyclic Redundancy Check
➢ Consider bits of a message to be coefficients of a polynomial M(x)
1011 → 1·x^3 + 0·x^2 + 1·x^1 + 1·x^0
➢ Sender and receiver agree on a generator polynomial G(x) of degree r
Sender Side
➢ Multiply Message M(x) by xr. Let this be T(x).
➢ Divide T(x) by G(x) using modulo 2 division (no carries or borrows) getting the remainder polynomial R(x)


➢ Transmit Code word : C(x) = T(x) - R(x);


Receiver side

➢ At receiving end code word is divided by G(x)


➢ If remainder is 0, it indicates that there are no errors in the data.
➢ If remainder is not 0, it means received data has some errors.

1. Divide the received code word by G(x) at the receiver end. If the remainder is zero, then the frame was transmitted correctly. Ex. Frame: 1101011011
Generator: 10011

Message after appending 4 zero bits: 11010110110000. Dividing this by the generator gives the remainder 1110, so the transmitted code word is 11010110111110.


At the receiver, dividing the received code word 11010110111110 by 10011 gives a remainder of zero, so there is no error in the transmitted frame.
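
A minimal Python sketch of the modulo-2 (XOR) division, checked against the frame and generator used above:

def mod2_div_remainder(dividend: str, generator: str) -> str:
    # Plain modulo-2 long division on bit strings; returns the remainder.
    work = list(dividend)
    for i in range(len(dividend) - len(generator) + 1):
        if work[i] == "1":                       # only subtract (XOR) when the leading bit is 1
            for j, g in enumerate(generator):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return "".join(work[-(len(generator) - 1):])

def crc_code_word(frame: str, generator: str) -> str:
    # Sender: append r zero bits, divide, and append the remainder to the frame.
    r = len(generator) - 1
    return frame + mod2_div_remainder(frame + "0" * r, generator)

frame, gen = "1101011011", "10011"
sent = crc_code_word(frame, gen)                       # "1101011011" + "1110"
assert mod2_div_remainder(sent, gen) == "0000"         # receiver: zero remainder means no error
flipped = sent[:5] + ("0" if sent[5] == "1" else "1") + sent[6:]
assert mod2_div_remainder(flipped, gen) != "0000"      # a single flipped bit is detected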


Checksum(Error detection technique)

Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words including the checksum are added using one’s complement addition.
4. The sum is complemented and becomes the checksum.
5. The checksum is sent with the data.
Receiver site:
1. The message (including checksum) is divided into 16-bit words.


2. All words are added using one’s complement addition.


3. The sum is complemented and becomes the new checksum.
4. If the value of checksum is 0, the message is accepted; otherwise, it is rejected.
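
A minimal sketch of this 16-bit one's-complement (Internet-style) checksum; the data words are arbitrary example values:

def ones_complement_sum(words):
    # Add 16-bit words with end-around carry (one's complement addition).
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back into the sum
    return total

def make_checksum(words):
    # Sender: sum the words (checksum field taken as 0) and complement the result.
    return ~ones_complement_sum(words) & 0xFFFF

def verify(words_with_checksum):
    # Receiver: the complemented sum of all words, including the checksum, must be 0.
    return (~ones_complement_sum(words_with_checksum) & 0xFFFF) == 0

data = [0x4500, 0x0030, 0x4422, 0x4000]   # arbitrary 16-bit words
cksum = make_checksum(data)
assert verify(data + [cksum])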

Hamming Distance
➢ The Hamming distance between two words (of the same size) is the number of differences between the
corresponding bits. Hamming distance between two words x and y is represented as d(x, y). The Hamming distance can be found by applying the XOR operation on the two words and counting the number of 1s in the result.
➢ Example: 1. The Hamming distance d(000, 011) is 2 because 000 XOR 011 is 011 (two 1s). 2. The Hamming distance d(10101, 11110) is 3 because 10101 XOR 11110 is 01011 (three 1s).
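
Since the distance is just the number of 1s in the XOR of the two words, a one-line helper suffices (a sketch on integer-encoded words):

def hamming_distance(x: int, y: int) -> int:
    # Count the 1 bits in x XOR y.
    return bin(x ^ y).count("1")

assert hamming_distance(0b000, 0b011) == 2        # example 1
assert hamming_distance(0b10101, 0b11110) == 3    # example 2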
Hamming Code
Refer class notes

Elementary Data Link Protocols

Protocols in the data link layer are designed so that this layer can perform its basic functions: framing, error control and flow control. Framing is the process of dividing the bit stream from the physical layer into data frames whose size ranges from a few hundred to a few thousand bytes.

Unrestricted Simplex Protocol:


Following assumptions are made

• Data transmission is simplex i.e. transmitted in one direction only.


• Both transmitting and receiving network layers are ready.
• Processing time is ignored.
• Infinite buffer space is available.
• An error free channel.

This is an unrealistic protocol, which has a nickname “Utopia”.

2. A simplex stop and wait protocol:


The following assumptions are made

a. Error free channel.


b. Data transmission simplex.

Since the transmitter waits for Δt time for an Ack this protocol is called stop and wait protocol.

3. A simplex protocol for a noisy channel


When this protocol fails?

In this situation the protocol fails because the receiver receives a duplicate frame and there is no way to find out whether the received frame is an original or a duplicate. So the protocol fails here.

What is needed is some way for the receiver to distinguish a new frame from a duplicate. To achieve this, the sender puts a sequence number in the header of each frame it sends. The receiver can check the sequence number of each arriving frame to see if it is a new frame or a duplicate.


Here a question arises: what is the minimum number of bits needed for the sequence number? The only ambiguity is between a frame and its immediate successor, so a 1-bit sequence number (0 or 1) is sufficient. At each instant of time, the receiver expects a particular sequence number next. Any arriving frame containing the wrong sequence number is rejected as a duplicate. When a frame containing the correct sequence number arrives, it is accepted, passed to the network layer, and the expected sequence number is incremented (i.e., 0 becomes 1 and 1 becomes 0). Protocols in which a sender waits for a positive acknowledgement before advancing to the next data item are often called PAR (Positive Acknowledgement with Retransmission) or ARQ (Automatic Repeat reQuest).
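
The sender side of such a PAR/ARQ scheme can be sketched as follows; send_frame and wait_for_ack_or_timeout are hypothetical callbacks supplied by the caller, not part of any real API:

def par_sender(packets, send_frame, wait_for_ack_or_timeout):
    # Stop-and-wait PAR sender with a 1-bit sequence number.
    # send_frame(seq, packet) transmits a frame; wait_for_ack_or_timeout() returns the
    # acknowledged sequence number, or None if the timer expired first.
    seq = 0
    for packet in packets:
        while True:
            send_frame(seq, packet)              # (re)transmit the current frame
            ack = wait_for_ack_or_timeout()
            if ack == seq:                       # correct ack: advance to the next frame
                seq = 1 - seq                    # 1-bit sequence number: 0 -> 1 -> 0 ...
                break
            # timeout or wrong ack: loop and retransmit the same frame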

When this protocol fails?

6. Now A thinks that the Ack received is the ack of the new frame F0 and A sends the next frame, F1. So frame F0 is missed. In this situation the protocol fails.

PIGGY BACKING
In most practical situations there is a need to transmit data in both directions. This can be achieved by full-duplex transmission. If this is done, we have two separate physical circuits, each with a 'forward' and a 'reverse' channel. In both cases, the bandwidth of the reverse channel is almost entirely wasted on acknowledgements. To overcome this problem, a technique called piggybacking is used.

The technique of temporarily delaying outgoing acknowledgements so that they can be hooked onto the next outgoing
data frame is known as piggybacking.

However, piggybacking introduces a complication not present with separate acknowledgements: how long should the data link layer wait for a packet onto which to piggyback the acknowledgement? If it waits longer than the sender's timeout period, the frame will be retransmitted, defeating the whole purpose of having acknowledgements. Of course, the data link layer cannot foretell the future, so it must resort to some ad hoc scheme, such as waiting a fixed number of milliseconds. If a new packet arrives quickly, the acknowledgement is piggybacked onto it; otherwise, if no new packet has arrived by the end of this time period, the data link layer just sends a separate acknowledgement frame.

SLIDING WINDOW PROTOCOLS


In all sliding window protocols, each outbound frame contains a sequence number, ranging from 0 up to some maximum. The maximum is usually 2^n − 1, so the sequence number fits nicely in an n-bit field. The stop-and-wait sliding window protocol uses n = 1, restricting the sequence numbers to 0 and 1, but more sophisticated versions can use an arbitrary n.

The essence of all sliding window protocols is that at any instant of time, the sender maintains a set of sequence numbers corresponding to frames it is permitted to send. These frames are said to fall within the sending window. Similarly, the receiver maintains a receiving window corresponding to the set of frames it is permitted to accept. The sender's window and the receiver's window need not have the same lower and upper limits, or even have the same size. In some protocols they are fixed in size, but in others they can grow or shrink as frames are sent and received.

The sequence numbers within the sender's window represent frames sent but not yet acknowledged. Whenever a new packet arrives from the network layer, it is given the next highest sequence number, and the upper edge of the window is advanced by one. When an acknowledgement comes in, the lower edge is advanced by one. In this way the sender continuously maintains a list of unacknowledged frames.
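
A sketch of the sender-side window bookkeeping (MAX_SEQ and the class layout are illustrative; MAX_SEQ would be 2^n − 1 for an n-bit sequence number field):

MAX_SEQ = 7                    # example: 3-bit sequence numbers, assumed for illustration

def inc(seq):
    # Advance a sequence number circularly in the range 0..MAX_SEQ.
    return (seq + 1) % (MAX_SEQ + 1)

class SenderWindow:
    # Tracks frames that have been sent but not yet acknowledged.
    def __init__(self, size):
        self.size = size
        self.lower = 0             # oldest unacknowledged frame (lower edge)
        self.next_to_send = 0      # upper edge: next sequence number to use
        self.buffer = {}           # seq -> frame, kept for possible retransmission

    def can_send(self):
        return len(self.buffer) < self.size

    def send(self, frame):
        seq = self.next_to_send
        self.buffer[seq] = frame           # keep a copy until it is acknowledged
        self.next_to_send = inc(seq)       # advance the upper edge
        return seq

    def ack(self, seq):
        if seq in self.buffer:
            del self.buffer[seq]
            # advance the lower edge past frames that are now acknowledged
            while self.lower not in self.buffer and self.lower != self.next_to_send:
                self.lower = inc(self.lower)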

PIPELINING
1. Up to now we made the assumption that the transmission time required for a frame to arrive at the receiver plus the time for the acknowledgement to come back is negligible.
2. Sometimes this is not true, when there is a long round trip propagation time is there.


3. In these cases the round-trip propagation time can have important implications for the efficiency of the bandwidth utilization.
Consider the below example.
Let the channel capacity b = 50 kbps,
the round-trip propagation delay = 500 ms,
and the frame size = 1000 bits.
Without considering the round-trip propagation delay,
the time taken to transmit one frame = 1000 bits / 50 kbps = 20 ms.
Considering the round-trip propagation delay,
the time taken per frame = 20 ms + 500 ms = 520 ms.
The channel utilization = (20/520) × 100 ≈ 4%

i.e., we are wasting 96% of the channel time. To overcome this problem we use a technique called PIPELINING.

In this technique, the sender is allowed to transmit up to 'w' frames before blocking, instead of just 1. With an appropriate choice of w, the sender will be able to transmit frames continuously for a time equal to the round-trip transit time without filling up the window.

In the above example w would be at least 26 frames. (520/20 = 26 frames)

By the time it has finished sending 26 frames, at t=520 ms, the ack for frame 0 will have just arrived. Thereafter ack
will arrive every 20 ms, so the sender always gets permission to continue just when it needs it. Hence, we can say
the sender window size is 26.

Derivation:
Let the channel capacity = b bps

Let the frame size = l bits

Let the round-trip delay = R sec

The time to transmit one frame is l/b sec

Including the round-trip delay, the time taken per frame is l/b + R = (l + bR)/b sec

The channel utilization is therefore (l/b) / (l/b + R) = l / (l + bR)

If l > bR the efficiency will be greater than 50%. If l < bR the efficiency will be less than 50%.

If l = bR the efficiency will be exactly 50%.
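
A quick numerical check of the formula against the 50 kbps example above:

def stop_and_wait_utilization(frame_bits, rate_bps, round_trip_s):
    # Utilization = l / (l + bR) for stop-and-wait with round-trip delay R.
    return frame_bits / (frame_bits + rate_bps * round_trip_s)

u = stop_and_wait_utilization(1000, 50_000, 0.5)
print(round(u * 100, 1), "%")        # about 3.8%, i.e. the ~4% quoted in the example

# Window size needed to keep the channel busy: total time per frame / transmission time.
print((0.020 + 0.5) / 0.020)         # 26 frames, matching the text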

Ex 1. A channel has a bit rate of 4 kbps and a propagation delay of 20 msec. For what range of frame sizes does stop-and-wait give an efficiency of at least 50%?


2. Consider an error free 64 kbps channel used to send 512 –byte data frames in one direction, with very short
acknowledgements coming back the other way. What is the maximum throughput for window sizes of 1,7,15,and
127?

Pipelining frames over an unreliable channel raises some serious issues.

First, what happens if a frame in the middle of a long stream is damaged or lost? When a damaged frame arrives at the receiver, it obviously should be discarded, but what should the receiver do with all the correct frames following it?

There are two basic approaches to dealing with errors .

1. Go Back ‘n’
2. Selective repeat or Selective Reject
One way, called go-back-n, is for the receiver simply to discard all subsequent frames, sending no acknowledgements for the discarded frames. In other words, the data link layer refuses to accept any frame except the next one it must give to the network layer.

Selective Repeat:
The receiving data link layer stores all the correct frames following the bad frame rather than discarding them. When the bad frame is retransmitted and the second try succeeds, the receiving data link layer will now have many correct frames in sequence, so they can all be handed off to the network layer quickly and the highest number acknowledged. This strategy corresponds to a receiver window larger than 1.

[Figure: (a) Go-back-N — the frames following the error are discarded and must be retransmitted; (b) Selective reject — the correct frames following the error are buffered and only the bad frame is retransmitted.]

MEDIUM ACCESS CONTROL SUBLAYER (MAC)


Networks can be categorized in two ways:
a) Point to point

b) Broadcast channel

- In a broadcast network, the key issue is how to share the channel among several users.

- Ex: a conference call with five people.

- Broadcast channels are also called multi-access channels or random access channels.

- Protocols for multi-access channels belong to a sublayer of the data link layer called the MAC sublayer.

The Channel Allocation problem:

a) Static channel allocation in LANs & MANs

i) FDM ii) TDM


Drawbacks: 1) Channel capacity is wasted if one or more stations do not send data.

2) If the number of users increases, static allocation does not scale well.

b) Dynamic channel allocation

i) Pure ALOHA & Slotted ALOHA


ii) CSMA
• CSMA/CA
• CSMA/CD
Pure ALOHA

-In the 1970s, Norman Abramson and his colleagues devised this method, using ground-based radio broadcasting. This is called the ALOHA system.
-The basic idea: many users are competing for the use of a single shared channel.
-There are two versions of ALOHA: Pure and Slotted.

-Pure ALOHA does not require global time synchronization, whereas in slotted ALOHA the time is divided into discrete slots into which all frames must fit.
-Let users transmit whenever they have data to be sent.

-There will be collisions and all collided frames will be damaged.

-Senders will know, through the feedback property of broadcasting, whether the frame was destroyed or not by listening to the channel.

[-With a LAN this feedback is immediate; with a satellite, it takes about 270 msec.]

-If the frame was destroyed, the sender waits a random amount of time and sends the frame again.

-The waiting time must be random; otherwise the same frames will collide over and over.


Frames are transmitted at completely arbitrary times

-Whenever two frames try to occupy the channel at the same time, there will be a collision and both will be destroyed.

-We have to find out what is the efficiency of an ALOHA channel?

-Let us consider an infinite collection of interactive users sitting at their systems (stations).

-A user is always in one of two states: typing or waiting.

-Let the 'frame time' denote the time required to transmit one fixed-length frame.

-Assume that the infinite population of users generates new frames according to a Poisson distribution with mean N frames per frame time.

-If N > 1, the users are generating frames at a higher rate than the channel can handle.

-For reasonable throughput, 0 < N < 1.

-In addition to new frames, the stations also generate retransmissions of frames.

-The mean number of old and new frames combined is G per frame time.

-G ≥ N.

-At low load there will be few collisions, so G ≈ N.

-Under all loads, the throughput S = G·P0, where P0 is the probability that a frame does not suffer a collision.

-A frame will not suffer a collision if no other frames are sent within one frame time of its start.

-Let 't' be the time required to send a frame.

-If any other user has generated a frame between time t0 and t0+t, the end of that frame will collide with the beginning of the shaded frame.

-Similarly, any other frame started between t0+t and t0+2t will bump into the end of the shaded frame.

-The probability that 'k' frames are generated during a given frame time is given by the Poisson distribution:

Pr[k] = (G^k · e^(−G)) / k!

-The probability of zero frames is just e^(−G).

-In an interval two frame times long, the mean number of frames generated is 2G.

-The probability of no other traffic being initiated during the entire vulnerable period is given by

P0 = e^(−2G)

so that S = G·e^(−2G)  (since S = G·P0).

The maximum throughput occurs at G = 0.5, with S = 1/(2e) ≈ 0.184.

The channel utilization of pure ALOHA is therefore about 18%.

Throughput versus offered traffic for ALOHA systems


Slotted ALOHA

-In 1972, Roberts devised a method for doubling the capacity of an ALOHA system.

-In this system the time is divided into discrete intervals, each interval corresponding to one frame.

-One way to achieve synchronization would be to have one special station emit a pip at the start of each interval, like a clock.

-In Roberts' method, which has come to be known as slotted ALOHA, in contrast to Abramson's pure ALOHA, a computer is not permitted to send whenever a carriage return is typed.

-Instead, it is required to wait for the beginning of the next slot.

-Thus the continuous pure ALOHA is turned into a discrete one.

-Since the vulnerable period is now halved, the probability of no other traffic during the same slot as our test frame is e^(−G), which leads to

S = G·e^(−G)

- At G=1, slotted ALOHA will have maximum throughput.


- So S=1/e or about 0.368, twice that of pure ALOHA.
- The channel utilization is 37% in slotted ALOHA.
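
The two throughput curves can be reproduced numerically with a small sketch (only math.exp is needed):

import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)      # S = G e^(-2G)

def slotted_aloha_throughput(G):
    return G * math.exp(-G)          # S = G e^(-G)

print(round(pure_aloha_throughput(0.5), 3))     # 0.184 at G = 0.5
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368 at G = 1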
Carrier Sense Multiple Access Protocols
Protocols in which stations listen for a carrier (transmission) and act accordingly are called carrier sense protocols.

Persistent CSMA

When a station has data to send, it first listens to the channel to see if anyone else is transmitting at that moment. If the channel is busy, the station waits until it becomes idle. When the station detects an idle channel, it transmits a frame. If a collision occurs, the station waits a random amount of time and starts all over again. The protocol is called 1-persistent because the station transmits with a probability of 1 whenever it finds the channel idle.

The propagation delay has an important effect on the performance of the protocol: the longer the propagation delay, the worse the performance of the protocol.

Even if the propagation delay is zero, there will still be collisions. If two stations sense that the channel is idle at the same time, both will send a frame and there will be a collision.

Non persistent CSMA

In this protocol, before sending, a station senses the channel. If no one else is sending, the station begins doing so itself. However, if the channel is busy, the station does not continually sense it; instead, it waits a random amount of time and repeats the process.

This algorithm leads to better channel utilization but longer delays than 1-persistent CSMA.

With 1-persistent CSMA, what happens if two stations become active when a third station is busy? Both wait for the active station to finish, then simultaneously launch a packet, resulting in a collision. There are two ways to handle this problem:

a) P-persistent CSMA b) exponential backoff.

P-persistent CSMA
The first technique is for a waiting station not to launch a packet immediately when the channel becomes idle, but
first toss a coin, and send a packet only if the coin comes up heads. If the coin comes up tails, the station waits
for some time (one slot for slotted CSMA), then repeats the process. The idea is that if two stations are both
waiting for the medium, this reduces the chance of a collision from 100% to 25%. A simple generalization of the
scheme is to use a biased coin, so that the probability of sending a packet when the medium becomes idle is not
0.5 but p, where 0 < p < 1. We call such a scheme P-persistent CSMA. The original scheme, where p = 1, is thus called 1-persistent CSMA.

Exponential backoff

The key idea is that each station, after transmitting a packet, checks whether the packet transmission was successful.
Successful transmission is indicated either by an explicit acknowledgement from the receiver or the absence of a
signal from a collision detection circuit. If the transmission is successful, the station is done. Otherwise, the station
retransmits the packet, simultaneously realizing that at least one other station is also contending for the medium.
To prevent its retransmission from colliding with the other station’s retransmission, each station backs off (that
is, idles) for a random time chosen from the interval [0,2*max-propagation_delay] before retransmitting its
packet. If the retransmission also fails, then the station backs off for a random time in the interval [0,4*
max_propagation_delay], and tries again. Each subsequent collision doubles the backoff interval length, until
the retransmission finally succeeds. On a successful transmission, the backoff interval is reset to the initial
value. We call this type of backoff exponential backoff.
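
A sketch of the doubling backoff-interval computation; max_propagation_delay and the example values are illustrative parameters, not values from any standard:

import random

def backoff_delay(attempt, max_propagation_delay):
    # Pick a random idle time from an interval that doubles with each failed attempt:
    # attempt 1 -> [0, 2*max_propagation_delay], attempt 2 -> [0, 4*...], and so on.
    upper = (2 ** attempt) * max_propagation_delay
    return random.uniform(0, upper)

for attempt in range(1, 5):
    print(attempt, round(backoff_delay(attempt, max_propagation_delay=0.01), 4))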

CSMA/CA

In many wireless LANS, unlike wired LANS, the station has no idea whether the packet collided with another packet
or not until it receives an acknowledgement from receiver. In this situation, collisions have a greater effect on
performance than with CSMA/CD, where colliding packets can be quickly detected and aborted. Thus, it makes
sense to try to avoid collisions, if possible. CSMA/CA is basically p-persistence, with the twist that when the
medium becomes idle, a station must wait for a time called the interframe spacing or IFS before contending for a
slot. A station gets a higher priority if it is allocated a smaller interframe spacing.

When a station wants to transmit data, it first checks if the medium is busy. If it is, it continuously senses the medium,
waiting for it to become idle. When the medium becomes idle, the station first waits for an interframe spacing
corresponding to its priority level, then sets a contention timer to a time interval randomly selected in the range
[0,CW], where CW is a predefined contention window length. When this timer expires, it transmits a packet and
waits for the receiver to send an ack. If no ack is received, the packet is assumed lost to collision, and the source
tries again, choosing a contention timer at random from an interval twice as long as the one before(binary
exponential backoff). If the station senses that another station has begun transmission while it was waiting for the
expiration of the contention timer, it does not reset its timer, but merely freezes it, and restarts the countdown
when the packet completes transmission. In this way, stations that happen to choose a longer timer value get
higher priority in the next round of contention.

Collision-Free Protocols

A Bit-Map Protocol
In the basic bit-map method, each contention period consists of exactly N slots. If station 0 has a frame to send, it transmits a 1 bit during the zeroth slot. No other station is allowed to transmit during this slot. Regardless of what station 0 does, station 1 gets the opportunity to transmit a 1 during slot 1, but only if it has a frame queued. In general, station j may announce that it has a frame to send by inserting a 1 bit into slot j. After all N slots have passed by, each station has complete knowledge of which stations wish to transmit.


The basic bit-map protocol


Since everyone agrees on who goes next, there will never be any collisions. After the last ready station has transmitted its frame, an event all stations can easily monitor, another N-bit contention period is begun. If a station becomes
ready just after its bit slot has passed by, it is out of luck and must remain silent until every station has had a
chance and the bit map has come around again. Protocols like this in which the desire to transmit is broadcast
before the actual transmission are called reservation protocols.

Binary Countdown
A problem with the basic bit-map protocol is that the overhead is 1 bit per station. To do better, binary station addresses can be used. A station wanting to use the channel now broadcasts its address as a binary bit string, starting with the high-order bit. All addresses are assumed to be the same length. The bits in each address position from different stations are Boolean ORed together. We will call this protocol binary countdown. It is used in Datakit.

As soon as a station sees that a high-order bit position that is 0 in its address has been overwritten with a 1, it gives
up. For example, if stations 0010, 0100, 1001, and 1010 are all trying to get the channel, in the first bit time the
stations transmit 0,0,1, and 1, respectively. Stations 0010 and 0100 see the 1 and know that a higher-numbered
station is competing for the channel, so they give up for the current round. Stations 1001 and 1010 continue.

The next bit is 0, and both stations continue. The next bit is 1, so station 1001 gives up. The winner is station 1010,
because it has the highest address. After winning the bidding, it may now transmit a frame, after which another
bidding cycle starts.

The binary countdown protocol. A dash indicates silence
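
A sketch of the arbitration logic, using the four station addresses from the example:

def binary_countdown(addresses, width=4):
    # Returns the winning address: a station drops out as soon as it sees a 1 on the
    # channel in a bit position where its own address has a 0.
    contenders = list(addresses)
    for bit in range(width - 1, -1, -1):                      # high-order bit first
        channel = any((a >> bit) & 1 for a in contenders)     # Boolean OR of the transmitted bits
        if channel:
            contenders = [a for a in contenders if (a >> bit) & 1]   # stations with a 0 give up
    return contenders[0]

assert binary_countdown([0b0010, 0b0100, 0b1001, 0b1010]) == 0b1010   # station 1010 wins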


IEEE Standard 802 for LANS and MANS

The IEEE 802.3 standard is for a 1-persistent CSMA/CD LAN. Xerox built a 2.94-Mbps CSMA/CD system to connect over 100 personal workstations on a 1-km cable. This system was called Ethernet, after the luminiferous ether through which electromagnetic radiation was once thought to propagate. Xerox, DEC, and Intel later drew up a standard for a 10-Mbps Ethernet. The 802.3 standard differs from it in that 802.3 describes a family of CSMA/CD systems running at speeds from 1 to 10 Mbps on various media. The second difference between the two is one header field (the 802.3 length field is used for the packet type in Ethernet).


802.3 Cabling

Four types of cabling are commonly used. 10Base5 cabling, called thick Ethernet, came first. It resembles a yellow garden hose, with markings every 2.5 m to show where the taps go. Connections to it are generally made using vampire taps, in which a pin is carefully forced halfway into the coaxial cable's core. The notation 10Base5 means that it operates at 10 Mbps, uses baseband signaling, and can support segments of up to 500 m.

Name       Cable          Max. segment   Nodes/seg.   Advantages
10Base5    Thick coax     500 m          100          Good for backbones
10Base2    Thin coax      200 m          30           Cheapest system
10Base-T   Twisted pair   100 m          1024         Easy maintenance
10Base-F   Fiber optics   2000 m         1024         Best between buildings

The second cable type was 10Base2, or thin Ethernet, which, in contrast to the garden-hose-like thick Ethernet, bends easily. Connections to it are made using industry-standard BNC connectors to form T-junctions, rather than using vampire taps. These are easier to use and more reliable. Thin Ethernet is much cheaper and easier to install, but it can run for only 200 m and can handle only 30 machines per cable segment.

Cable breaks, bad taps, or loose connectors can be detected by a technique called time domain reflectometry.

For 10Base5, a transceiver is clamped securely around the cable so that its tap makes contact with the inner core.
The transceiver contains the electronics that handle carrier detection and collision detection. When a collision is detected, the transceiver also puts a special invalid signal on
the cable to ensure that all other transceivers also realize that a collision has occurred.

The transceiver cable terminates on an interface board inside the computer. The interface board contains a controller
chip that transmits frames to, and receives frames from, the transceiver. The controller is responsible for
assembling the data into the proper frame format, as well as computing checksums on outgoing frames and
verifying them on incoming frames.

With 10Base2, the connection to the cable is just a passive BNC T-junction connector. The transceiver electronics
are on the controller board, and each station always has its own transceiver.

With 10Base-T, there is no cable at all, just the hub (a box full of electronics). Adding or removing a station is simple
in this configuration, and cable breaks can be detected easily. The disadvantage of 10Base-T is that the maximum
cable run from the hub is only 100 m, maybe 150 m if high-quality (Category 5) twisted pair is used. 10Base-T is becoming steadily more popular due to the ease of maintenance. A fourth option is 10Base-F, which uses fiber optics. This alternative is expensive due to the cost of the connectors and terminators, but it has excellent noise immunity and
is the method of choice when running between buildings or widely separated hubs.

Each version of 802.3 has a maximum cable length per segment. To allow larger networks, multiple cables can be
connected by repeaters. A repeater is a physical layer device. It receives, amplifies, and retransmits signals in
both directions. As far as the software is concerned, a series of cable segments connected by repeaters is no
different than a single cable (except for some delay introduced by the repeater). A system may contain multiple
cable segments and multiple repeaters, but no two transceivers may be more than 2.5km apart and no path between
any two transceivers may traverse more than four repeaters.

802.3 uses Manchester encoding. (A related scheme, differential Manchester encoding, is used by the 802.5 token ring.)


The 802.3 MAC sub layer protocol:

I) Preamble:
Each frame start with a preamble of 7 bytes each containing a bit pattern 10101010.

II) Start of frame byte:


It denotes the start of the frame itself. It contains 10101011.

III) Destination address:


This gives the destination address. The high-order bit is zero for an ordinary address and 1 for a group address (multicasting). If all bits in the destination field are 1s, the frame will be delivered to all stations (broadcasting).

The 46th bit (adjacent to the high-order bit) is used to distinguish local from global addresses.

IV) Length field:


This tells how many bytes are present in the data field from 0 to 1500.

V) Data field:
This contains the actual data that the frame contains.

VI) Pad:
A valid frame must be at least 64 bytes long, from destination address to checksum. If the frame is shorter than 64 bytes, the pad field is used to fill out the frame to the minimum size.

VII) Checksum:

It is used to find out whether the received frame is correct or not. A CRC is used here.


Other Ethernet Networks

Switched Fast Gigabit

Switched Ethernet:

- 10Base-T Ethernet is a shared-media network.

- The entire medium is involved in each transmission.
- The hub used in this network is a passive device (not intelligent).
- In switched Ethernet the hub is replaced with a switch, which is an active (intelligent) device.
Fast Ethernet

Gigabit Ethernet

IEEE 802.4 (Token Bus)

802.3 frames do not have priorities, making them unsuited for real-time systems in which important frames should
not be held up waiting for unimportant frames. A simple system with a known worst case is a ring in which the
stations take turns sending frames. If there are n stations and it takes T sec to send a frame, no frame will ever
have to wait more than nT sec to be sent.


This standard, 802.4, is described as a token bus. Physically, the token bus is a linear or tree-shaped cable onto which the stations are attached. Logically, the stations are organized into a ring, with each station knowing the address of the station to its "left" and "right." When the logical ring is initialized, the highest numbered station may send the first frame. After it
Logically, the stations are organized into a ring, with each station knowing the address of the station to its “left”
and “right.” When the logical ring is initialized, the highest numbered station may send the first frame. After it
is done, it passes permission to its immediate neighbor by sending the neighbor a special control frame called a
token. The token propagates around the logical ring, with only the token holder being permitted to transmit frames.
Since only one station at a time holds the token, collisions do not occur.

Since the cable is inherently a broadcast medium, each station receives each frame, discarding those not addressed to
it. When a station passes the token, it sends a token frame specifically addressed to its logical neighbor in the
ring, irrespective of where that station is physically located on the cable. It is also worth noting that when stations
are first powered on, they will not be in the ring, so the MAC protocol has provisions for adding stations to, and
deleting stations from, the ring. For the physical layer, the token bus uses the 75-ohm broadband coaxial cable
used for cable television. Both single and dual-cable systems are allowed, with or without head-ends.

The frame control field is used to distinguish data frames from control frames. For data frames, it carries the frame's
priority. It can also carry an indicator requiring the destination station to acknowledge correct or incorrect receipt
of the frame.

For control frames, the frame control field is used to specify the frame type. The allowed types include token
passing and various ring maintenance frames, including the mechanism for letting new stations enter the
ring, the mechanism for allowing stations to leave the ring, and so on.


Connecting devices

Connecting devices and the OSI model

Bridges

LANS can be connected by devices called bridges, which operate in the data link layer. Bridges do not examine the
network layer header and can thus copy IP, IPX, and OSI packets equally well.

There are various reasons why bridges are used:

1. Many university and corporate departments have their own LANS, primarily to connect their own personal
computers, workstations, and servers. Since the goals of the various departments differ, different departments
choose different LANS, without regard to what other departments are doing. Sooner or later, there is a need
for interaction, so bridges are needed.
2. The organization may be geographically spread over several buildings separated by considerable distances. It
may be cheaper to have separate LANS in each building and connect them with bridges and infrared links than
to run a single coaxial cable over the entire site.
3. It may be necessary to split what is logically a single LAN into separate LANS to accommodate the load. Putting all the workstations on a single LAN would require far too much total bandwidth; instead, multiple LANS connected by bridges are used.
4. In some situations, a single LAN would be adequate in terms of the load, but the physical distance between the
most distant machines is too great (e.g., more than 2.5km for 802.3). Even if laying the cable is easy to do, the
network would not work due to the excessively long round-trip delay. The only solution is to partition the LAN and install bridges between the segments.
5. There is the matter of reliability. On a single LAN, a defective node that keeps outputting a continuous stream of garbage will cripple the LAN. Bridges can be inserted at critical places to prevent a single node which has gone berserk from bringing down the entire system.
6. And last, bridges can contribute to the organization's security. By inserting bridges at various places and being careful not to forward sensitive traffic, it is possible to isolate parts of the network so that its traffic cannot escape and fall into the wrong hands.

Types of Bridges
Simple Bridge

Simple bridges are the most primitive and least expensive type of bridge. A simple bridge links two segments and
contains a table that lists the addresses of all the stations included in each of them. Before a simple bridge can be
used, an operator must sit down and enter the addresses of every station. Whenever a new station is added, the
table must be modified. If a station is removed, the newly invalid address must be deleted. Installation and
maintenance of simple bridges are time-consuming and potentially more trouble than the cost savings are worth.

Transparent Bridge

A transparent, or learning, bridge builds its table of station addresses on its own as it performs its bridge functions.
When the transparent bridge is first installed, its table is empty. As it encounters each packet, it looks at both the
destination and the source addresses. It checks the destination to decide where to send the packet. If it does not
yet recognize the destination address, it relays the packet to all of the stations on both segments. It uses the source
address to build its table. As it reads the source address, it notes which side the packet came from and associates
that address with the segment to which it belongs. By continuing this process even after the table is complete, a
transparent bridge is also self-updating.

This bridge uses flooding and backward learning algorithms.

The routing procedure for an incoming frame depends on the LAN it arrives on (the source LAN) and the LAN its
destination is on (the destination LAN), as follows.

1) If destination and source LANS are the same, discard the frame.
2) If the destination and source LANS are different, forward the frame.
3) If the destination LAN is unknown, use flooding.
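
A sketch of the backward-learning and forwarding logic described above (the port numbers and station names are illustrative):

class TransparentBridge:
    # Learns which port each source address lives on and forwards accordingly.
    def __init__(self, ports):
        self.ports = ports         # e.g. [1, 2] for a bridge joining two LANs
        self.table = {}            # station address -> port it was last seen on

    def handle(self, src, dst, arrival_port):
        self.table[src] = arrival_port                  # backward learning from the source address
        out_port = self.table.get(dst)
        if out_port == arrival_port:
            return []                                   # same LAN: discard the frame
        if out_port is not None:
            return [out_port]                           # known destination: forward on one port
        return [p for p in self.ports if p != arrival_port]   # unknown destination: flood

bridge = TransparentBridge(ports=[1, 2])
print(bridge.handle("A", "B", 1))   # B not learned yet -> flooded to port 2
print(bridge.handle("B", "A", 2))   # A was learned on port 1 -> [1]
print(bridge.handle("A", "B", 1))   # B now known on port 2 -> [2]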
Two Parallel transparent bridges


Spanning Tree Algorithm


Bridges are normally installed redundantly, which means that two LANs may be connected by more than one bridge.
In this case, if the bridges are transparent bridges, they may create a loop, which means a packet may be going
round and round, from one LAN to another and back again to the first LAN. To avoid this situation, bridges today
use what is called the spanning tree algorithm.


MODULE 3
THE NETWORK LAYER
Introductions
The network layer is responsible for ensuring packets travel from the source to the destination.
This often involves multiple hops through intermediate routers. Therefore, the network layer
handles end-to-end transmission.
To accomplish its tasks, the network layer must understand the network's topology, select
appropriate paths, and manage traffic to prevent congestion. It also addresses challenges when the
source and destination are in different networks. This chapter focuses primarily on the Internet and its network layer protocol, IP.

I. NETWORK LAYER DESIGN ISSUES


1. Store-and-Forward Packet Switching
In the context of network layer protocols, it's essential to understand the components of the
network. In Figure 5-1, we see two key elements: ISP equipment (routers connected by
transmission lines) within the shaded oval and customers' equipment outside the oval. Host H1 is
directly connected to an ISP router, while H2 is on a LAN, connected to a customer-owned router.
Although routers like F are outside the ISP's ownership, they are considered part of the ISP network
for the purpose of this chapter, as they run similar algorithms. This distinction is relevant because
our focus here is on these algorithms.

When a host has a packet to send, it sends it to the nearest router, either on its own LAN or via a
point-to-point link to the ISP. The router stores the packet until it's fully received and processed
(including checksum verification), after which it forwards the packet to the next router along the
path. This process continues until the packet reaches its destination host, where it is delivered. This
mechanism is known as store-and-forward packet switching.


2. Services Provided to the Transport Layer


The network layer provides services to the transport layer, which must be carefully designed with
specific goals in mind:
1. Services should be router technology independent.
2. The transport layer should remain unaware of the number, type, and topology of routers.
3. Network addresses available to the transport layer should follow a uniform numbering plan,
even across LANs and WANs.
The primary debate in network layer design centers on whether it should offer a connection-
oriented or connectionless service. One perspective, advocated by the Internet community,
suggests that the network's inherent unreliability necessitates hosts handling error control and flow
control independently. Thus, a connectionless service with minimal primitives like SEND
PACKET and RECEIVE PACKET is preferable.
In contrast, another camp, represented by telephone companies, argues for a reliable, connection-
oriented service. They draw from a century of experience with the telephone system, emphasizing
quality of service, especially for real-time traffic like voice and video.
This ongoing controversy has persisted for decades. While early networks were often connection-
oriented, connectionless network layers, like IP, have gained popularity, guided by the end-to-end
argument. Still, even within the Internet, connection-oriented features are evolving as quality-of-
service gains importance, as seen in technologies like MPLS and VLANs.
3. Implementation of Connectionless Service
In implementing a connectionless service at the network layer, two distinct approaches are
employed, depending on the type of service being offered.
a. Connectionless Service (Datagram Networks):
• In a connectionless service, packets, often referred to as datagrams, are injected into the
network independently and are routed without any prior setup.
• Each packet is treated as a standalone unit and is forwarded based on its own routing
information.
• No advance path establishment is required for transmitting individual packets.

In contrast, if a connection-oriented service is employed:


b. Connection-Oriented Service (Virtual-Circuit Networks):
• Before transmitting data packets, a path from the source router to the destination router,
known as a Virtual Circuit (VC), must be established.
• This setup process involves configuring a specific path that the data packets will follow.
• The network is referred to as a virtual-circuit network, drawing an analogy to the physical
circuits established in the traditional telephone system.

In a datagram network, let's consider the process of transmitting a long message from process P1
on host H1 to process P2 on host H2. Here's how it works:

1. Process P1 hands the message to the transport layer, which adds a transport header to the
message. This transport layer code runs on H1.


2. The resulting data is then passed to the network layer, typically another procedure within
the operating system.
3. Since the message is longer than the maximum packet size, it's divided into four packets:
1, 2, 3, and 4. Each packet is sent one after the other to router A using a point-to-point
protocol like PPP.
4. Within the ISP's network, each router has a routing table indicating where to send packets
for each possible destination. These tables contain pairs of destinations and the outgoing
lines to use.

5. Router A initially has a routing table as shown in the figure. Packets 1, 2, and 3 are briefly
stored and checked for checksum on arrival. Then they are forwarded according to A's
table, reaching H2 via routers E and F.
6. However, something different happens to packet 4. Router A sends it to router B instead of
following the same route as the first three packets. This change in routing might occur if A
learns of a traffic jam along the initial route and updates its routing table.
7. The algorithm responsible for managing routing tables and making routing decisions is
called the routing algorithm, a central topic in this chapter. Different types of routing
algorithms exist.
8. IP (Internet Protocol), the foundation of the Internet, is a prime example of a connectionless
network service. Each packet carries a destination IP address, allowing routers to forward
packets individually. IPv4 packets have 32-bit addresses, while IPv6 packets have 128-bit
addresses.
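
The per-router forwarding step can be sketched as a simple table lookup; the destinations and line names loosely mirror the routers of the figure, but the table contents themselves are made-up placeholders:

# Hypothetical routing table for router A: destination -> outgoing line.
routing_table_A = {
    "B": "B", "C": "C", "D": "B",
    "E": "C", "F": "C", "H2": "C",
}

def forward(destination, routing_table):
    # Datagram forwarding: every packet is looked up independently by its destination address.
    return routing_table[destination]

print(forward("H2", routing_table_A))

# A routing update (e.g. after learning of congestion) changes the path of later packets only:
routing_table_A["H2"] = "B"
print(forward("H2", routing_table_A))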


4. Implementation of Connection-Oriented Service

In a connection-oriented service, a virtual-circuit network is employed to streamline packet


routing. Here's how it works:

1. Instead of choosing a new route for every packet as in connectionless service, a route from
the source to the destination is selected and stored in tables within the routers when a
connection is established. This chosen route is then used for all data traffic over that
connection, similar to how the telephone system operates.
2. When the connection is terminated, the virtual circuit is also released. Each packet in a
connection-oriented service carries an identifier indicating which virtual circuit it belongs
to.

3. For example, if host H1 establishes connection 1 with host H2, this connection is recorded
as the first entry in the routing tables of each router along the route. Packets from H1 to H2
are then directed through this established route.
4. If another host, say H3, wants to establish a connection to H2, it chooses connection
identifier 1 and requests the network to establish a new virtual circuit. This leads to the
creation of a second entry in the routing tables.
5. To avoid conflicts, routers need the ability to replace connection identifiers in outgoing
packets. This process is sometimes called label switching. An example of a connection-
oriented network service is MPLS (Multi-Protocol Label Switching), used within ISP
networks. MPLS uses a 20-bit connection identifier or label to route traffic efficiently.
6. MPLS is often employed within ISP networks to establish long-term connections for large
traffic volumes. It helps ensure quality of service and assists with various traffic
management tasks, even though it's often transparent to customers.
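
A sketch of per-router virtual-circuit tables with identifier rewriting; the entries loosely follow the H1/H3-to-H2 example, and all concrete values are illustrative:

# Each router maps (incoming line, incoming VC id) -> (outgoing line, outgoing VC id).
vc_table_A = {
    ("H1", 1): ("C", 1),    # H1's connection 1
    ("H3", 1): ("C", 2),    # H3 also chose id 1, so A rewrites it to 2 to avoid a clash
}

def switch(vc_table, in_line, in_vc):
    # Label switching: the packet carries only a short VC identifier, not a full address.
    return vc_table[(in_line, in_vc)]

print(switch(vc_table_A, "H1", 1))   # ('C', 1)
print(switch(vc_table_A, "H3", 1))   # ('C', 2) -- identifier replaced on the way out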


5. Comparison of Virtual-Circuit and Datagram Networks

Inside a network, there are trade-offs between virtual circuits and datagrams:

1. Setup Time vs. Address Parsing Time:


• Virtual circuits require a setup phase, which consumes time and resources but
simplifies packet routing. Datagram networks don't need setup but require more
complex address lookup.
2. Address Length:
• Datagram networks use longer destination addresses with global meaning, which
can result in significant overhead for short packets, wasting bandwidth.
3. Table Space in Router Memory:
• Datagram networks need entries for every possible destination, whereas virtual-
circuit networks only need entries for each virtual circuit. However, connection
setup packets in virtual-circuit networks also use destination addresses.
4. Quality of Service and Congestion Management:
• Virtual circuits offer advantages in guaranteeing quality of service and managing
congestion by reserving resources in advance during connection setup. Datagram
networks face more challenges in congestion avoidance.
5. Use Case Dependence:


• For transaction processing systems, setting up and clearing virtual circuits may be
too costly compared to their use. However, for long-running applications like
VPNs, permanent virtual circuits can be valuable.
6. Vulnerability:
• Virtual circuits are vulnerable to disruptions if a router crashes and loses memory,
requiring all circuits to be aborted. Datagram networks are more resilient in this
regard.
7. Network Load Balancing:
• Datagram networks allow routers to balance traffic by changing routes during a
sequence of packet transmissions, providing flexibility compared to fixed virtual
circuits.

II. ROUTING ALGORITHMS


The network layer's primary function is routing packets from the source machine to the destination
machine, often requiring multiple hops in most networks. Routing algorithms and data structures
are central to network layer design.

• Routing Algorithm: It decides which output line an incoming packet should be sent on. For
datagram networks, this decision is made for every data packet, while virtual-circuit networks
determine routes during setup and reuse them. This process can be divided into routing (route
selection) and forwarding (packet arrival handling).
• Desirable Properties: Routing algorithms should be correct, simple, robust (able to handle
failures and topology changes), stable (converging to fixed paths), fair, and efficient.
• Trade-offs: Efficiency and fairness can sometimes conflict in routing decisions, requiring a
balance. Networks often optimize goals like minimizing packet delay, maximizing total
throughput, or reducing the number of hops a packet travels.
• Routing Algorithm Classes: Routing algorithms are categorized as nonadaptive (pre-
computed routes) and adaptive (responding to changes in topology and traffic). Adaptive
algorithms can differ in information sources, frequency of route changes, and optimization
metrics.

1. The Optimality Principle


Before diving into specific routing algorithms, it's important to understand the optimality principle,
which holds true regardless of network topology or traffic patterns. This principle, introduced by
Bellman in 1957, states that if router J lies on the optimal path from router I to router K, then the
optimal path from J to K also follows the same route. This can be explained by considering routes
from I to J (r1) and from J to K (r2). If a better route than r2 existed from J to K, it could be
combined with r1 to create a superior route from I to K, contradicting the optimality of r1r2.


• Sink Tree: The sink tree is a tree-like structure formed by optimal routes from all sources
to a specific destination, adhering to the optimality principle. Sink trees play a central role
in routing algorithms, helping efficiently route packets in networks fig(b).
• Directed Acyclic Graphs (DAGs): Sink trees can extend to become DAGs if all possible
paths are considered, allowing for more flexibility in route selection. However, the
fundamental structure of a sink tree is retained in DAGs.
• Network Dynamics: In practice, network dynamics, such as link and router failures, can
affect the stability and accuracy of sink trees. Routers may have varying views of the
current topology, leading to dynamic adjustments in the sink tree.

2. Shortest Path Algorithm


The Shortest Path Algorithm is a fundamental routing technique that computes the optimal paths
within a network when provided with a complete network overview. These are the paths we aim
for a distributed routing algorithm to discover, even if not all routers possess comprehensive
network details.
Here's how it works: We construct a network graph, where each node represents a router, and each
edge signifies a communication link. When determining the route between a specific pair of
routers, the algorithm simply seeks the shortest path within this graph.
The concept of a "shortest path" may be interpreted in various ways. One approach measures path
length in terms of hops, making paths ABC and ABE in Fig. 5-7 equal in length. Another metric
could be geographical distance in kilometers, where ABC would be considerably longer than ABE
if the figure is drawn to scale. Beyond hops and physical distance, numerous other metrics are
feasible. For instance, edges could be labeled with the mean delay of a standard test packet, making
the shortest path the fastest path, regardless of the number of edges or kilometers.
In the broader context, edge labels could be computed based on various factors such as distance,
bandwidth, average traffic, communication cost, measured delay, and more. By adjusting the
weighting function, the algorithm can compute the "shortest" path according to different criteria
or a combination thereof.

Several algorithms are available for calculating the shortest path between two nodes in a graph.
The one we'll discuss is attributed to Dijkstra (1959) and is utilized to find the shortest paths from
a source to all destinations within the network. Each node is assigned a label (in parentheses)
indicating its distance from the source node along the best-known path. Distances must always be
non-negative, particularly when they are based on real factors like bandwidth and delay.
Initially, no paths are known, so all nodes are labeled with infinity. As the algorithm progresses
and identifies paths, labels may change to reflect better routes. Labels can be either tentative or
permanent, with all labels beginning as tentative. Once it's determined that a label represents the
shortest possible path from the source to a particular node, it becomes permanent and remains
unchanged thereafter.

In this algorithm illustration, we'll demonstrate the process using a weighted, undirected graph
(Fig. 5-7a), where the weights represent parameters like distance. Our objective is to discover the
shortest path from A to D. Here's how it works:
1. We begin by marking node A as permanent (indicated by a filled-in circle).
2. Next, we examine each node adjacent to A, one by one (the working node). We relabel each
of these nodes with the distance to A. Additionally, we record the node from which the
probe was made to reconstruct the final path later. If multiple shortest paths exist from A
to D, we must remember all the probe nodes that could reach a node with the same distance.


3. Having examined all nodes adjacent to A, we then inspect all tentatively labeled nodes
across the entire graph. We make the one with the smallest label permanent (as shown in
Fig. 5-7b), and this node becomes the new working node.
4. We repeat the process, starting from node B and examining all nodes adjacent to it. If the
sum of the label on B and the distance from B to the considered node is less than the label
on that node, we have a shorter path, so the node is relabeled.
5. After inspecting all nodes adjacent to the working node and updating tentative labels if
necessary, we search the entire graph for the tentatively labeled node with the smallest
value. This node becomes permanent and serves as the working node for the next round.
Fig. 5-7 illustrates the first six steps of the algorithm.
To understand why this algorithm works, consider Fig. 5-7c. At this point, we've made E
permanent. Now, suppose there were a shorter path than ABE, let's say AXYZE (with X and Y as
intermediate nodes). There are two possibilities: Either node Z has already been made permanent,
or it hasn't. If it has, then E has already been probed in a previous round when Z became permanent,
so the AXYZE path has been considered, and it cannot be shorter.
Now, let's look at the case where Z is still tentatively labeled. If the label at Z is greater than or
equal to that at E, then the AXYZE path cannot be shorter than ABE. If the label is less than that
of E, then Z will become permanent before E, allowing E to be probed from Z.
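
A compact sketch of Dijkstra's algorithm on a weighted graph stored as an adjacency dictionary; the example graph is illustrative and not the exact topology of Fig. 5-7:

import heapq

def dijkstra(graph, source):
    # Returns (distance, previous-hop) labels for every node reachable from source.
    # graph maps each node to {neighbour: edge weight}; weights must be non-negative.
    dist = {source: 0}
    prev = {}
    permanent = set()
    heap = [(0, source)]                      # tentative labels, smallest first
    while heap:
        d, node = heapq.heappop(heap)
        if node in permanent:
            continue
        permanent.add(node)                   # smallest tentative label becomes permanent
        for neighbour, weight in graph[node].items():
            new_d = d + weight
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d       # relabel: a shorter path has been found
                prev[neighbour] = node
                heapq.heappush(heap, (new_d, neighbour))
    return dist, prev

g = {   # undirected edges entered in both directions
    "A": {"B": 2, "G": 6},
    "B": {"A": 2, "C": 7, "E": 2},
    "C": {"B": 7, "D": 3},
    "D": {"C": 3, "E": 4},
    "E": {"B": 2, "D": 4, "G": 1},
    "G": {"A": 6, "E": 1},
}
dist, prev = dijkstra(g, "A")
print(dist["D"], prev["D"])          # shortest distance from A to D and the hop before D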
3. Flooding
Routing algorithms often rely on local knowledge rather than a complete network view. One basic
local technique is "flooding," where every incoming packet is sent out on all outgoing lines except
the one it arrived on.
To control the potential chaos of flooding, a hop counter is added to each packet's header,
decrementing with each hop until the packet is discarded when the counter hits zero. Ideally, the
hop counter should be initialized to the estimated path length from source to destination.
To further manage flooding, routers keep track of flooded packets to avoid duplication. This can
be achieved by having source routers include sequence numbers in their packets and maintaining
lists of seen sequence numbers for each source.
While not practical for most packets, flooding has its uses. It ensures delivery to every node in the
network, making it useful for broadcasting information and maintaining robustness, even in
challenging conditions. Flooding's minimal setup requirements also make it a foundational element
for more efficient routing algorithms and a valuable metric for performance comparisons. Flooding
inherently chooses the shortest path, as it explores all possible paths simultaneously.
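
A minimal sketch of flooding with both safeguards mentioned above (a hop counter and per-source sequence numbers) is given below; the Router and Packet classes and the three-node topology are purely illustrative, not part of the text.

from dataclasses import dataclass

@dataclass
class Packet:
    source: str
    seq: int
    hops: int
    data: bytes = b''

class Router:
    def __init__(self, name):
        self.name = name
        self.links = {}        # neighbor name -> Router object
        self.seen = set()      # (source, seq) pairs already flooded once

    def receive(self, packet, incoming_link):
        key = (packet.source, packet.seq)
        if key in self.seen:               # duplicate: already flooded, drop it
            return
        self.seen.add(key)
        if packet.hops <= 1:               # hop counter exhausted: discard
            return
        copy = Packet(packet.source, packet.seq, packet.hops - 1, packet.data)
        for name, neighbor in self.links.items():
            if name != incoming_link:      # never send back on the arrival line
                neighbor.receive(copy, incoming_link=self.name)

# Tiny illustrative topology: A - B - C
a, b, c = Router('A'), Router('B'), Router('C')
a.links = {'B': b}; b.links = {'A': a, 'C': c}; c.links = {'B': b}
a.receive(Packet('A', seq=1, hops=3), incoming_link=None)
print(sorted(r.name for r in (a, b, c) if ('A', 1) in r.seen))   # -> ['A', 'B', 'C']
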
4. Distance Vector Routing

Distance Vector Routing is a common dynamic routing algorithm used in computer networks. It
operates by having each router maintain a table, known as a vector, which contains information
about the best-known distance to each destination and the appropriate link to reach it. These tables

are continually updated through communication with neighboring routers, allowing each router to
determine the optimal path to every destination.
In distance vector routing, each router's routing table includes entries for every router in the
network. Each entry consists of the preferred outgoing line for that destination and an estimate of
the distance to reach it. The distance can be measured in various ways, such as the number of hops
or other metrics like propagation delay, which can be determined using specialized ECHO packets.
For instance, let's consider using delay as a metric, and assume that each router knows the delay
to its neighboring routers. Periodically, let's say every T milliseconds, each router shares a list of
its estimated delays to reach various destinations with its neighbors. In return, it receives similar
lists from its neighbors.

For instance, if a router receives a table from neighbor X with Xi representing X's delay estimates
to different routers, and it already knows the delay to X is m milliseconds, it can calculate that it
can reach router i via X in Xi + m milliseconds. By performing this calculation for each neighbor's
estimates, the router can determine the best estimate and corresponding link to update its routing
table. Notably, the old routing table is not used in this calculation.
This updating process is demonstrated in Figure 5-9. Part (a) depicts a network, and part (b)
illustrates delay vectors received from router J's neighbors. Each neighbor claims different delay
values to various destinations. Assuming J has its own delay estimates to its neighbors A, I, H, and
K as 8, 10, 12, and 6 milliseconds, respectively, it calculates its new route to router G. For example,
J knows it can reach A in 8 milliseconds, and A claims an 18-millisecond delay to G. Thus, J can
expect a 26-millisecond delay to G if it forwards packets through A. It performs similar


calculations for other neighbors and destinations, updating its routing table accordingly.
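
The update J performs can be written as a short sketch. The delays of 8, 10, 12 and 6 ms to A, I, H and K come from the example above; the neighbors' claimed delays to G (other than A's 18 ms) are assumed values used only for illustration.

def update_routes(delay_to_neighbor, neighbor_vectors):
    # For each destination, pick the neighbor that minimizes
    # (delay to that neighbor) + (neighbor's claimed delay to the destination).
    # The old routing table is deliberately not consulted.
    table = {}
    for dest in next(iter(neighbor_vectors.values())):
        best_link, best_delay = None, float('inf')
        for neighbor, vector in neighbor_vectors.items():
            candidate = delay_to_neighbor[neighbor] + vector[dest]
            if candidate < best_delay:
                best_link, best_delay = neighbor, candidate
        table[dest] = (best_delay, best_link)
    return table

# J's measured delays to its neighbors, and the (partly assumed) vectors they sent
delay_to_neighbor = {'A': 8, 'I': 10, 'H': 12, 'K': 6}
neighbor_vectors = {'A': {'G': 18}, 'I': {'G': 31}, 'H': {'G': 6}, 'K': {'G': 31}}
print(update_routes(delay_to_neighbor, neighbor_vectors))
# -> {'G': (18, 'H')}: 12 + 6 via H beats the 26 ms estimate via A
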
The Count-to-Infinity Problem
Convergence refers to the process of routers settling on the best paths within a network. Distance
vector routing, while a straightforward method for routers to collectively calculate shortest paths,
suffers from a significant drawback: it can be slow to converge. It quickly responds to positive
changes but sluggishly adapts to negative ones.
To illustrate the speed at which good news spreads, let's examine a simple five-node network (Fig.
5-10) with hop count as the metric. Initially, router A is down, and all other routers register A's
delay as infinity. When A comes back up, the routers learn about it through vector exchanges.
Imagine there's a signal (like a gong) that triggers simultaneous vector exchanges across all routers.
During the first exchange, B learns that its left neighbor has zero delay to A and updates its routing
table accordingly. The others still think A is down. The good news about A's revival propagates at
one hop per exchange. In a network with the longest path of N hops, it takes N exchanges for
everyone to know about revived links or routers.

Now consider the scenario in Fig. 5-10(b), where all links and routers are initially operational.
Routers B, C, D, and E have distances to A of 1, 2, 3, and 4 hops, respectively. Suddenly, either A
goes down or the A-B link is severed (effectively the same from B's perspective).
In the first exchange, B doesn't receive any news from A. Fortunately, C reports having a 2-hop
path to A. However, B doesn't know that C's path includes itself, so it believes the path to A via C
is 3 hops long. D and E don't update their entries for A in the first exchange.
In the second exchange, C realizes that its neighbors all claim to have 3-hop paths to A, so it
randomly picks one and updates its distance to A as 4 hops. Subsequent exchanges follow the
history shown in the figure.
This scenario illustrates why bad news travels slowly: no router's value exceeds the minimum of
its neighbors' values by more than one. Gradually, all routers increase their distance to infinity, but
the number of exchanges required depends on the value set for infinity. Therefore, setting infinity
to the longest path plus 1 is a prudent choice.
This problem is known as the count-to-infinity problem, and various attempts have been made to
solve it, like using heuristics such as the split horizon with poisoned reverse rule discussed in RFC
1058. However, none of these heuristics work well in practice due to the fundamental challenge:
routers can't determine if they are on a path when another router advertises it.
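
A tiny simulation, in the spirit of Fig. 5-10(b), shows the slow convergence. The linear A-B-C-D-E topology and the hop-count metric follow the example; the choice of 16 as "infinity" is an assumption (it happens to be the value RIP uses).

INFINITY = 16   # an assumed hop-count "infinity"

# Hop counts to A before the failure; then A (or the A-B link) dies.
dist = {'B': 1, 'C': 2, 'D': 3, 'E': 4}
neighbors = {'B': ['C'], 'C': ['B', 'D'], 'D': ['C', 'E'], 'E': ['D']}

for exchange in range(1, 7):
    # Every router recomputes from the vectors its neighbors sent last round;
    # B can no longer hear A directly, so only neighbor claims are available.
    dist = {r: min(INFINITY, 1 + min(dist[n] for n in neighbors[r])) for r in dist}
    print(exchange, dist)
# The estimates creep upward a hop or two per exchange until INFINITY is reached.
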
5. Link State Routing
Link State Routing replaced Distance Vector Routing in the ARPANET in 1979 due to the count-
to-infinity problem, which caused slow convergence after network topology changes. This
innovative algorithm, now known as Link State Routing, forms the basis for widely used routing
protocols like IS-IS and OSPF. The Link State Routing concept comprises five key steps for routers
to function effectively:

1. Neighbor Discovery: Routers identify their neighboring devices and learn their network
addresses.
2. Distance Metric Assignment: Each router assigns distance or cost metrics to its
neighboring routers.
3. Packet Construction: Routers construct packets containing the information they've
gathered.
4. Packet Exchange: These packets are sent to and received from all other routers in the
network.
5. Shortest Path Computation: Routers calculate the shortest path to reach every other
router in the network.

Learning about the Neighbors: When a router boots up, it identifies its neighbors by sending
HELLO packets over point-to-point connections, receiving replies with unique neighbor names.
For broadcast links (e.g., Ethernet), the setup is more complex. Instead of modeling each link
separately, a LAN is treated as a single node. This node, with a designated router, connects routers
like A, C, and F. This simplifies the topology and reduces message overhead. Connectivity between
routers, like A and C, is represented as a path through this LAN node, such as A-N-C (where N is the node that stands for the LAN).

Setting Link Costs: In the link state routing algorithm, each link must have a distance or cost
metric for calculating the shortest paths. These metrics can be automatically determined or
configured by the network operator. Typically, cost is inversely proportional to link bandwidth,
making higher-capacity links preferable. For geographically dispersed networks, link delay can be
factored into the cost, measured using Round-Trip Time with special ECHO packets, ensuring
shorter links are favored in path selection.

Building Link State Packets: In the link state routing algorithm, each router creates a packet
containing sender identity, sequence number, age, and neighbor list with associated costs. Packet
creation is straightforward. The challenge lies in determining when to build these packets. They
can be generated periodically or triggered by significant events like a line or neighbor status change
(see Fig. 5-11).

Distributing the Link State Packets: Distributing link state packets is a crucial aspect of the
algorithm. To ensure all routers quickly and reliably receive these packets, a flooding approach is
used. Each packet contains a sequence number that is incremented with each new transmission,
preventing duplicates. However, to address potential issues, packet age is also included and
decremented periodically. Once the age reaches zero, the information is discarded. This technique
ensures timely updates and prevents obsolete data. Additionally, some refinements include a
holding area for incoming packets and the comparison of sequence numbers to handle duplicates
and errors, with all link state packets acknowledged to enhance robustness.

• Router B uses a data structure, as shown in Fig. 5-13, to manage link state packets. Each
row in this structure represents an incoming packet with information about its origin,
sequence number, age, and data. It also includes flags for sending and acknowledging
packets on B's three links (to A, C, and F).
• For example, when a packet arrives directly from A, it's sent to C and F and acknowledged
to A as indicated by the flag bits. Similarly, a packet from F is forwarded to A and C and
acknowledged to F.
• However, if a packet, like the one from E, arrives through multiple paths (EAB and EFB),
it's sent only to C but acknowledged to both A and F, reflecting the bit settings.

• In cases of duplicate packets arriving while the original is still in the buffer, the flag bits
are adjusted. For instance, if a copy of C's state arrives from F before the original is
forwarded, the bits are changed to indicate acknowledgment to F but no sending.
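
A sketch of the per-packet bookkeeping that Fig. 5-13 describes is shown below; the field names, B's three links (A, C, F) and the sample sequence and age values are illustrative.

from dataclasses import dataclass, field

@dataclass
class PacketBufferEntry:
    # One row of B's link state packet buffer: origin, sequence, age,
    # plus send and acknowledgment flags for each of B's links.
    source: str
    seq: int
    age: int
    send_flags: dict = field(default_factory=lambda: {'A': 0, 'C': 0, 'F': 0})
    ack_flags: dict = field(default_factory=lambda: {'A': 0, 'C': 0, 'F': 0})

def on_link_state_packet(entry_table, source, seq, age, arrival_link):
    # Flood a new packet on every link except the arrival link, and ACK it back.
    entry = PacketBufferEntry(source, seq, age)
    for link in entry.send_flags:
        entry.send_flags[link] = 1 if link != arrival_link else 0
        entry.ack_flags[link] = 1 if link == arrival_link else 0
    entry_table[(source, seq)] = entry
    return entry

table = {}
print(on_link_state_packet(table, 'A', 21, 60, arrival_link='A').send_flags)
# -> {'A': 0, 'C': 1, 'F': 1}: forward to C and F, acknowledge back to A
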

Computing the New Routes: In link state routing, once a router has collected all link state packets
representing the entire network graph, it can construct a comprehensive view of the network,
considering links in both directions with potential different costs. Dijkstra's algorithm is then
employed locally to compute the shortest paths to all destinations, informing the router of the
preferred link for each destination, which is added to the routing tables.

Compared to distance vector routing, link state routing demands more memory and computation,
with memory needs proportional to k*n (n = number of routers, k = average neighbors), potentially
exceeding the size of a routing table. However, link state routing offers faster convergence, making
it practical for many scenarios. Actual networks often use link state routing protocols like IS-IS
(Intermediate System-Intermediate System) and OSPF (Open Shortest Path First).

Question Bank (Network Layer)


1. What are the primary design goals and responsibilities of the network layer?
2. Explain the concept of routing and its significance in network layer design.
3. What factors should be considered when choosing routing algorithms for a network?
4. How does scalability impact network layer design, and what are some strategies to address
it?
5. Discuss the role of addressing in the network layer and the difference between logical and
physical addresses.
6. What is store-and-forward packet switching, and how does it differ from other switching
techniques?
7. Describe the advantages and disadvantages of store-and-forward packet switching.
8. How does store-and-forward packet switching handle packets of varying sizes and types?
9. Explain the term "packet switching delay" in the context of store-and-forward switching.
10. What services does the network layer provide to the transport layer, and why are these
services important?
11. Compare and contrast the services offered by the network layer with those of the data link
layer.
12. How does the network layer handle routing and forwarding of packets to their destinations?
13. What is a connectionless service in the network layer, and when is it typically used?
14. Describe the steps involved in implementing a connectionless service in a network.
15. Discuss the advantages and disadvantages of connectionless service in comparison to
connection-oriented service.
16. What is a connection-oriented service, and in what scenarios is it beneficial?
17. Explain the steps involved in establishing and releasing a connection in a connection-
oriented network.
18. How does connection-oriented service handle packet delivery and reordering?
19. Define virtual-circuit and datagram networks and highlight their key characteristics.
20. Compare the advantages and disadvantages of virtual-circuit and datagram network
architectures.
21. In which situations would you choose one network type over the other?
22. What is the fundamental goal of routing algorithms, and why are they crucial in computer
networks?
23. Explain the concept of the Optimality Principle in routing. How does it influence routing
decisions?
24. Compare and contrast proactive (table-driven) and reactive (on-demand) routing
algorithms.
25. Describe Dijkstra's algorithm for finding the shortest path in a network. What are its
limitations?
26. How does the Bellman-Ford algorithm work, and how does it handle negative-weight
edges?
27. What is flooding in the context of routing? When is it used, and what are its advantages
and disadvantages?
28. How can network loops be prevented when using flooding as a routing mechanism?

29. Explain the principles behind distance vector routing algorithms. Provide examples of such
algorithms.
30. What is the Bellman-Ford equation, and how is it used in distance vector routing?
31. Describe the fundamentals of link state routing. How do routers exchange link state
information?
32. What is the SPF (Shortest Path First) algorithm, and how is it used in link state routing?
33. What is hierarchical routing, and why is it important in large-scale networks?
34. Discuss the advantages of using hierarchical addressing and routing.
35. Explain the concept of multicast routing. How is it different from unicast and broadcast
routing?
36. What are the challenges in multicast routing, especially in terms of building efficient
multicast trees?
37. What is anycast routing, and when is it used in network design?
38. Describe the process of routing packets to the nearest or best anycast node.
39. How do mobile IP protocols enable routing for hosts that change their network attachment
points?
40. What are the key components of mobile IP, and how do they work together?
43. What is congestion control in computer networks, and why is it important?
44. Explain the key approaches to congestion control in computer networks.
45. Compare and contrast open-loop and closed-loop congestion control algorithms.
46. Describe the basic principles behind TCP congestion control algorithms, such as TCP Reno
or TCP Vegas.
47. What is traffic-aware routing, and how does it contribute to congestion control?
48. Explain how traffic-aware routing algorithms can dynamically adapt to network conditions.
49. What is admission control in the context of network management, and why is it necessary?
50. Discuss the role of admission control in preventing network congestion and maintaining
QoS.
51. How does traffic throttling work, and what is its purpose in congestion control?
52. Provide examples of situations where traffic throttling might be applied.
53. Describe load shedding as a congestion control mechanism. When is it typically used?
54. What factors are considered when deciding which traffic to shed during congestion?
55. Explain the concept of Quality of Service (QoS) in computer networks.
56. What are the key metrics used to measure QoS, and how are they defined?
57. How does QoS benefit applications with different requirements in a network?
58. Discuss how the specific requirements of applications (e.g., voice, video, data) impact QoS
design.
59. Provide examples of applications that demand low latency, high bandwidth, or other QoS
parameters.
60. Describe the concept of packet scheduling in QoS management.
61. How do different packet scheduling algorithms, such as Weighted Fair Queuing (WFQ)
and Priority Queuing, work?
62. What is Integrated Services (IntServ) in the context of QoS provisioning?
63. How does RSVP (Resource Reservation Protocol) contribute to IntServ?


64. Explain the principles of Differentiated Services (DiffServ) in QoS management.
65. How are Differentiated Services Code Points (DSCPs) used to mark and classify packets?

Module 4
THE TRANSPORT LAYER

The Transport Layer: The Transport Service, Elements of Transport Protocols, Congestion
Control, The Internet transport protocols: UDP, TCP, Performance problems in computer networks,
Network performance measurement.

THE TRANSPORT SERVICE
Services Provided to the Upper Layers


The ultimate goal of the transport layer is to provide efficient, reliable, and cost-effective data
transmission service to its users, normally processes in the application layer. To achieve this, the transport
layer makes use of the services provided by the network layer. The software and/or
hardware within the transport layer that does the work is called the transport entity. The transport
entity can be located in the operating system kernel, in a library package bound into network applications,
in a separate user process, or even on the network interface card. The first two options are most common
on the Internet. The (logical) relationship of the network, transport, and application layers is illustrated in
Fig.

There are also two types of transport service. The connection-oriented transport service is similar to the
connection-oriented network service in many ways. In both cases, connections have three phases:
establishment, data transfer, and release. Addressing and flow control are also similar in both layers.
Furthermore, the connectionless transport service is also very similar to the connectionless network service.
However, note that it can be difficult to provide a connectionless transport service on top of a connection-
oriented network service, since it is inefficient to set up a connection to send a single packet and then tear
it down immediately afterwards. The obvious question is this: if the transport layer service is so similar to
the network layer service, why are there two distinct layers? Why is one layer not adequate? The answer
is that problems can occur in the network layer, and the users have no real control over that layer, so they
cannot solve the problem of poor
service by using better routers or putting more error handling in the data link layer because they don’t
own the routers. The only possibility is to put on top of the network layer another layer that improves
the quality of the service. If, in a connectionless network, packets are lost or mangled, the transport entity
can detect the problem and compensate for it by using retransmissions. If, in a connection-oriented
network, a transport entity is informed halfway through a long transmission that its network connection has
been abruptly terminated, with no indication of what has happened to the data currently in transit, it
can set up a new network connection to the remote transport entity. Using this new network connection,
it can send a query to its peer asking which data arrived and which did not, and knowing where it was,
pick up from where it left off.

Transport Service Primitives

To allow users to access the transport service, the transport layer must provide some operations to
application programs, that is, a transport service interface. Each transport service has its own
interface. In this section, we will first examine a simple (hypothetical) transport service and its
interface to see the bare essentials. In the following section, we will look at a real example. The
transport service is similar to the network service, but there are also some important differences. The
main difference is that the network service is intended to model the service offered by real networks,
warts and all. Real networks can lose packets, so the network service is generally unreliable.

A quick note on terminology is now in order. For lack of a better term, we will use the term segment
for messages sent from transport entity to transport entity.TCP, UDP and other Internet protocols use this
term. Some older protocols used the ungainly name TPDU (Transport Protocol Data Unit). That term is
not used much anymore now but you may see it in older papers and books the network entity similarly
processes the packet header and then passes the contents of the packet payload up to the transport
entity. This nesting is illustrated in Fig. 6-3

Connection Establishment
Establishing a connection sounds easy, but it is actually surprisingly tricky. At first glance, it would
seem sufficient for one transport entity to just send a CONNECTION REQUEST segment to the destination
and wait for a CONNECTION ACCEPTED reply. The problem occurs when the network
can lose, delay, corrupt, and duplicate packets. This behaviour causes serious complications. Imagine
a network that is so congested that acknowledgements hardly ever get back in time and each packet times
out and is retransmitted two or three times. Suppose that the network uses datagrams inside and that every
packet follows a different route. Some of the packets might get stuck in a traffic jam inside
the network and take a long time to arrive. That is, they may be delayed in the network and pop out much
later, when the sender thought that they had been lost. The worst possible nightmare is as follows. A user
establishes a connection with a bank, sends messages telling the bank to transfer a large amount of money
to the account of a not-entirely-trustworthy person. Unfortunately, the packets decide to take the scenic
route to the destination and go off exploring a remote corner of the
network. The sender then times out and sends them all again. This time the packets take the shortest route
and are delivered quickly, so the sender releases the connection. Unfortunately, the original batch of
delayed packets eventually emerges from the network and arrives at the bank in order, asking it to set up
a new connection and transfer the money again.
Packet lifetime can be restricted to a known maximum using one (or more) of the following techniques:
1. Restricted network design.
2. Putting a hop counter in each packet.
3. Time stamping each packet.
The first technique includes any method that prevents packets from looping, combined with some way
of bounding delay including congestion over the (now known) longest possible path. It is
difficult, given that internets may range from a single city to international in scope. The second
method consists of having the hop count initialized to some appropriate value and decremented each time
the packet is forwarded. The network protocol simply discards any packet whose hop counter becomes
zero. The third method requires each packet to bear the time it was created, with the routers
agreeing to discard any packet older than some agreed-upon time. This latter method requires the router
clocks to be synchronized, which itself is a nontrivial task, and in practice a hop counter is a close enough
approximation to age.

TCP uses a three-way handshake to establish connections. Within a connection, a timestamp is used
to extend the 32-bit sequence number so that it will not wrap within the maximum packet
lifetime, even for gigabit-per-second connections. This mechanism is a fix to TCP that was needed as it
was used on faster and faster links. It is described in RFC 1323 and called PAWS (Protection Against
Wrapped Sequence numbers). Across connections, for the initial sequence numbers and before PAWS
can come into play, TCP originally used the clock-based scheme just described. However, this turned
out to have a security vulnerability. The clock made it easy for an attacker to
predict the next initial sequence number and send packets that tricked the three-way handshake and
established a forged connection. To close this hole, pseudorandom initial sequence numbers are used for
connections in practice.

It is also important that the initial sequence numbers do not repeat for an interval, even though they
appear random to an observer. Otherwise, delayed duplicates can wreak havoc.

Connection Release
Releasing a connection is easier than establishing one. Nevertheless, there are more pitfalls than one
might expect here. As we mentioned earlier, there are two styles of terminating a connection: asymmetric
release and symmetric release Asymmetric release is the way the telephone system works: when one
party hangs up, the connection is broken. Symmetric release treats the connection
as two separate unidirectional connections and requires each one to be released separately.

Asymmetric release is abrupt and may result in data loss. Consider the scenario of Fig. After the connection
is established, host 1 sends a segment that arrives properly at host 2. Then host 1 sends another segment.
Unfortunately, host 2 issues a DISCONNECT before the second segment arrives. The result is that the
connection is released and data are lost.

Crash Recovery
If hosts and routers are subject to crashes or connections are long-lived (e.g., large software or media
downloads), recovery from these crashes becomes an issue. If the transport entity is entirely within the
hosts, recovery from network and router crashes is straightforward. The transport entities expect lost
segments all the time and know
how to cope with them by using retransmissions. A more troublesome problem is how to recover from host
crashes. In particular, it may be desirable for clients to be able to continue working when servers crash and
quickly reboot. To illustrate the difficulty, let us assume that one host, the client, is sending a long file to
another host, the file server, using a simple Stop-and-wait protocol. The transport layer on the server just
passes the incoming segments to the transport user, one by one. Partway through the transmission, the
server crashes. When it comes back up, its tables are reinitialized, so it no longer knows precisely where
it was. In an attempt to recover its previous status, the server might send a broadcast segment to all

other hosts, announcing that it has just crashed and requesting that its clients inform it of the status of
all open connections. Each client can be in one of two states: one segment outstanding, S1, or no segments
outstanding, S0. Based on only this state information, the client must decide whether to retransmit the most
recent segment.
UDP Protocol
UDP provides connectionless, unreliable, datagram service. Connectionless service means that there
is no logical connection between the two ends exchanging messages. Each message is an independent
entity encapsulated in a datagram.

UDP does not see any relation (connection) between consecutive datagrams coming from the same
source and going to the same destination.
UDP has an advantage: it is message-oriented. It gives boundaries to the messages exchanged. An
application program may be designed to use UDP if it is sending small messages and if simplicity and
speed are more important for the application than reliability.

User Datagram
UDP packets, called user datagrams, have a fixed-size header of 8 bytes made of four fields, each of 2
bytes (16 bits). The 16-bit length field can define a total length of 0 to 65,535 bytes. However, the total
length needs to be less, because a UDP user datagram is carried inside an IP datagram whose own total
length is limited to 65,535 bytes. The last field carries the optional checksum.
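
Because the header is just four fixed 16-bit fields, it can be built or parsed with a one-line struct format. The sketch below is illustrative; the port numbers and payload are made up.

import struct

def build_udp_header(src_port, dst_port, payload, checksum=0):
    # Fixed 8-byte UDP header: source port, destination port, total length, checksum.
    total_length = 8 + len(payload)          # header plus data, in bytes
    return struct.pack('!HHHH', src_port, dst_port, total_length, checksum)

header = build_udp_header(50000, 53, b'example query')
print(struct.unpack('!HHHH', header))        # -> (50000, 53, 21, 0)
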

UDP Services
Process-to-Process Communication
UDP provides process-to-process communication using socket addresses, a combination of IP
addresses and port numbers.
Connectionless Services
As mentioned previously, UDP provides a connectionless service. This means that each user
datagram sent by UDP is an independent datagram. There is no relationship between the different user
datagrams, even if they come from the same source process and go to the same destination program.
Flow Control
UDP is a very simple protocol. There is no flow control, and hence no window mechanism. The
receiver may overflow with incoming messages.
Error Control
There is no error control mechanism in UDP except for the checksum. This means that the sender
does not know if a message has been lost or duplicated.
Checksum
UDP checksum calculation includes three sections: a pseudo header, the UDP header, and the data
coming from the application layer. The pseudo header is the part of the header of the IP packet in
which the user datagram is to be encapsulated with some fields filled with 0s
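
A sketch of that calculation using the standard Internet (16-bit one's-complement) checksum is shown below; the udp_header passed in is assumed to carry zero in its checksum field, and the IPv4 addresses are given as 4-byte values.

import struct

def internet_checksum(data):
    # Standard 16-bit one's-complement sum used by IP, UDP and TCP.
    if len(data) % 2:
        data += b'\x00'                              # pad to an even length
    total = sum(struct.unpack('!%dH' % (len(data) // 2), data))
    while total >> 16:                               # fold the carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def udp_checksum(src_ip, dst_ip, udp_header, payload):
    # Pseudo header: source IP, destination IP, a zero byte, protocol 17 (UDP)
    # and the UDP length, followed by the UDP header and the data.
    pseudo = struct.pack('!4s4sBBH', src_ip, dst_ip, 0, 17,
                         len(udp_header) + len(payload))
    return internet_checksum(pseudo + udp_header + payload)

print(hex(udp_checksum(bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]),
                       struct.pack('!HHHH', 50000, 53, 13, 0), b'hello')))
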

UDP Applications
UDP Features
Connectionless Service


As we mentioned previously,
➢ UDP is a connectionless protocol. Each UDP packet is independent from other packets sent by the
same application program. This feature can be considered as an advantage or disadvantage depending on
the application requirements.
➢ UDP does not provide error control; it provides an unreliable service. Most applications expect
reliable service from a transport-layer protocol. Although a reliable service is desirable, it carries extra
overhead that some applications cannot afford.
Typical Applications
The following shows some typical applications that can benefit more from the services of UDP
➢ UDP is suitable for a process that requires simple request-response communication with little
concern for flow and error control
➢ UDP is suitable for a process with internal flow- and error-control mechanisms. For example, the
Trivial File Transfer Protocol (TFTP)
➢ UDP is a suitable transport protocol for multicasting. Multicasting capability is embedded in the
UDP software
➢ UDP is used for management processes such as SNMP
➢ UDP is used for some route updating protocols such as Routing Information Protocol (RIP)

➢ UDP is normally used for interactive real-time applications that cannot tolerate uneven delay
between sections of a received message

TRANSMISSION CONTROL PROTOCOL


Transmission Control Protocol (TCP) is a connection-oriented, reliable protocol. TCP explicitly
defines connection establishment, data transfer, and connection teardown phases to provide a
connection-oriented service.
TCP Services
Process-to-Process Communication
As with UDP, TCP provides process-to-process communication using port numbers. We have
already given some of the port numbers used by TCP.
Stream Delivery Service
In UDP, a process sends messages with predefined boundaries to UDP for delivery. UDP adds its
own header to each of these messages and delivers it to IP for transmission.
TCP, on the other hand, allows the sending process to deliver data as a stream of bytes and allows the
receiving process to obtain data as a stream of bytes.TCP creates an environment in which the two
processes seem to be connected by an imaginary
"tube" that carries their bytes across the Internet.

Sending and Receiving Buffers


Because the sending and the receiving processes may not necessarily write or read data at the same
rate, TCP needs buffers for storage.
There are two buffers, the sending buffer and the receiving buffer, one for each direction.
➢ At the sender, the buffer has three types of chambers. The white section contains empty
chambers that can be filled by the sending process (producer).
➢ The colored area holds bytes that have been sent but not yet acknowledged.
➢ The TCP sender keeps these bytes in the buffer until it receives an acknowledgment. The shaded area
contains bytes to be sent by the sending TCP.
➢ The operation of the buffer at the receiver is simpler. The circular buffer is divided into two
areas (shown as white and colored).
➢ The white area contains empty chambers to be filled by bytes received from the network.
➢ The colored sections contain received bytes that can be read by the receiving process. When a byte
is read by the receiving process, the chamber is recycled and added to the pool of empty
chambers.

Segments
➢ Although buffering handles the disparity between the speed of the producing and consuming
Processes, we need one more step before we can send data.
➢ The network layer, as a service provider for TCP, needs to send data in packets, not as a stream of
bytes. At the transport layer, TCP groups a number of bytes together into a packet called a segment.
➢ The segments are encapsulated in an IP datagram and transmitted. This entire operation is
transparent to the receiving process.
Format
The segment consists of a header of 20 to 60 bytes, followed by data from the application
program. The header is 20 bytes if there are no options and up to 60 bytes if it contains options.

Source port address This is a 16-bit field that defines the port number of the application program in the
host that is sending the segment.
Destination port address This is a 16-bit field that defines the port number of the application
program in the host that is receiving the segment.
Sequence number This 32-bit field defines the number assigned to the first byte of data contained in this
segment.
Acknowledgment number This 32-bit field defines the byte number that the receiver of the segment is
expecting to receive from the other party.
Header length This 4-bit field indicates the number of 4-byte words in the TCP header. The length of
the header can be between 20 and 60 bytes.
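
The fixed 20-byte part of the header described above can be decoded with a single struct format. The sketch below is illustrative and ignores any options that may follow the fixed part.

import struct

def parse_tcp_header(segment):
    # Decode the fixed 20-byte part of a TCP header.
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack('!HHIIHHHH', segment[:20])
    header_len = (offset_flags >> 12) * 4     # 4-bit field counts 4-byte words
    flags = offset_flags & 0x3F               # URG, ACK, PSH, RST, SYN, FIN bits
    return {'src_port': src_port, 'dst_port': dst_port,
            'seq': seq, 'ack': ack, 'header_len': header_len,
            'flags': flags, 'window': window}

# Sample segment header: data offset 5 (20 bytes), ACK+PSH flags set
sample = struct.pack('!HHIIHHHH', 443, 51000, 1000, 2000, (5 << 12) | 0x18, 65535, 0, 0)
print(parse_tcp_header(sample))
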

A TCP Connection
➢ TCP is connection-oriented. a connection-oriented transport protocol establishes a logical path
between the source and destination.
➢ All of the segments belonging to a message are then sent over this logical path.
➢ TCP operates at a higher level. TCP uses the services of IP to deliver individual segments to the
receiver, but it controls the connection itself.
➢ In TCP, connection-oriented transmission requires three phases: connection establishment, data
transfer, and connection termination.

Connection Establishment
TCP transmits data in full-duplex mode. When two TCPs in two machines are connected, they are
able to send segments to each other simultaneously.

Three- Way Handshaking


The connection establishment in TCP is called three-way handshaking. An application program,
called the client, wants to make a connection with another application program, called the server, using
TCP as the transport-layer protocol. The process starts with the server. The server program tells its TCP
that it is ready to accept a connection. This request is called a passive open.
Although the server TCP is ready to accept a connection from any machine in the world, it cannot
make the connection itself.
The client program issues a request for an active open. A client that wishes to connect to an open server
tells its TCP to connect to a particular server.
➢ A SYN segment cannot carry data, but it consumes one sequence number.
➢ A SYN + ACK segment cannot carry data, but it does consume one sequence number.

➢ An ACK segment, if carrying no data, consumes no sequence number.
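
At the application level, the passive open and active open correspond to listen()/accept() and connect(); the SYN, SYN + ACK and ACK segments themselves are exchanged by the operating system's TCP. A minimal sketch (the port number is arbitrary):

import socket

# Passive open: the server tells its TCP that it is willing to accept a connection.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 12345))       # the port number is illustrative
server.listen()

# Active open: the client's connect() triggers the SYN, SYN+ACK, ACK exchange.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', 12345))

conn, addr = server.accept()            # returns once the handshake has completed
conn.sendall(b'hello')
print(client.recv(1024))                # -> b'hello'
for s in (conn, client, server):
    s.close()
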

MODULE V: APPLICATION LAYER

Principles of Network Applications


• Network-applications are the driving forces for the explosive development of the internet.
• Examples of network-applications:

1) Web
2) File transfers
3) E-mail
4) P2P file sharing
5) Social networking (Facebook, Twitter)
6) Video distribution (YouTube)
7) Real-time video conferencing (Skype)
8) On-line games (World of Warcraft)

• In network-applications, program usually needs to


→ run on the different end-systems and
→ communicate with one another over the network.
• For ex: In the Web application, there are 2 different programs:
1) The browser program running in the user's host (Laptop or Smartphone).
2) The Web-server program running in the Web-server host.

Network Application Architectures


• Two approaches for developing an application:
1) Client-Server architecture 2) P2P (Peer to Peer) architecture

Client-Server Architecture
• In this architecture, there is a server and many clients distributed over the network (Figure 1.1a).
• The server is always-on while a client can be randomly run.
• The server is listening on the network and a client initializes the communication.
• Upon the requests from a client, the server provides certain services to the client.
• Usually, there is no communication between two clients.
• The server has a fixed IP address.
• A client contacts the server by sending a packet to the server's IP address.
• A server is able to communicate with many clients.
• The applications such as FTP, telnet, Web, e-mail etc use the client-server architecture.

Data Center
• Earlier, client-server architecture had a single-server host.
• But now, a single-server host is unable to keep up with all the requests from large no. of clients.
• For this reason, a data-center is used.
• A data-center contains a large number of hosts.
• A data-center is used to create a powerful virtual server.
• In a data-center, hundreds of servers must be powered and maintained.
• For example:
➢ Google has around 50 data-centers distributed around the world.

➢ These 50 data-centers handle search, YouTube, Gmail, and other services.

P2P Architecture
• There is no dedicated server (Figure 1.1b).
• Pairs of hosts are called peers.
• The peers communicate directly with each other.
• The peers are not owned by the service-provider. Rather, the peers are laptops controlled by users.
• Many of today's most popular and traffic-intensive applications are based on P2P architecture.
• Examples include file sharing (BitTorrent), Internet telephone (Skype) etc.
• Main feature of P2P architectures: self-scalability.
• For ex: In a P2P file-sharing system,
➢ Each peer generates workload by requesting files.
➢ Each peer also adds service-capacity to the system by distributing files to other peers.
• Advantage: Cost-effective, because server-infrastructure & server bandwidth are normally not required.
• Three challenges of the P2P applications:
1) ISP Friendly
➢ Most residential ISPs have been designed for asymmetrical bandwidth usage.
➢ Asymmetrical bandwidth means there is more downstream-traffic than upstream-traffic.
➢ But P2P applications shift upstream-traffic from servers to residential ISPs, which puts stress
on the ISPs.
2) Security
➢ Because of their highly distributed and open nature, P2P applications can be a challenge to security.
3) Incentive
➢ Success of P2P depends on convincing users to volunteer bandwidth & resources to the
applications.

Figure 1.1: (a) Client-server architecture; (b) P2P architecture

Processes Communicating
Process
• A process is an instance of a program running in a computer.
• The processes may run on the 1) same system or 2) different systems.
1) The processes running on the same end-system can communicate with each other using IPC
(Inter-Process Communication).
2) The processes running on the different end-systems can communicate by exchanging messages.
i) A sending-process creates and sends messages into the network.
ii) A receiving-process receives the messages and responds by sending messages back.

Client & Server Processes


• A network-application consists of pairs of processes:
1) The process that initiates the communication is labeled as the client.
2) The process that waits to be contacted to begin the session is labeled as the server.
• For example:
1) In Web application, a client-browser process communicates with a Web-server-process.
2) In P2P file system, a file is transferred from a process in one peer to a process in another peer.

Interface between the Process and the Computer Network: Socket


• Any message sent from one process to another must go through the underlying-network.
• A process sends/receives message through a software-interface of underlying-network called socket.
• Socket is an API between the application-layer and the transport layer within a host (Figure 1.2).
• The application-developer has complete control at the application-layer side of the socket.
• But, the application-developer has little control of the transport-layer side of the socket. On that side,
the application-developer can control only (see the sketch after Figure 1.2):
1) The choice of transport-protocol: TCP or UDP. (API Application Programming Interface)
2) The ability to fix parameters such as maximum-buffer & maximum-segment-sizes.

Figure 1.2: Application processes, sockets, and transport-protocol
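
• A minimal sketch of the two things the developer does control when creating a socket; the destination address, port and buffer size below are illustrative.

import socket

# The application chooses its transport protocol when the socket is created:
# SOCK_STREAM selects TCP, SOCK_DGRAM selects UDP.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# It may also fix a few transport-side parameters, e.g. the receive-buffer size.
udp_sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)

# Messages are handed to the transport layer through the socket.
udp_sock.sendto(b'hello', ('127.0.0.1', 9999))
udp_sock.close()
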

Addressing Processes
• To identify the receiving-process, two pieces of information need to be specified:
1) IP address of the destination-host.
2) Port-number that specifies the receiving-process in the destination-host.
• In the Internet, the host is identified by IP address.


• An IP address is a 32-bit number that uniquely identifies the host.
• The sending-process needs to identify the receiving-process, because a host may run several network-applications.
• For this purpose, a destination port-number is used.
• For example,
A Web-server is identified by port-number 80. A mail-server is identified by port-number 25.

Transport Services Available to Applications


• Networks usually provide more than one transport-layer protocols for different applications.
• An application-developer should choose certain protocol according to the type of applications.
• Different protocols may provide different services.

Reliable Data Transfer


• Reliable means guaranteeing the data from the sender to the receiver is delivered correctly. For ex:
TCP provides reliable service to an application.
• Unreliable means the data from the sender to the receiver may never arrive. For ex: UDP provides
unreliable service to an application.
• Unreliability may be acceptable for loss-tolerant applications, such as multimedia applications.
• In multimedia applications, the lost data might result in a small glitch in the audio/video.

Throughput
• Throughput is the rate at which the sending-process can deliver bits to the receiving-process.
• Since other hosts are using the network, the throughput can fluctuate with time.
• Two types of applications:
1) Bandwidth Sensitive Applications
➢ These applications need a guaranteed throughput. For ex: Multimedia applications
➢ Some transport-protocol provides guaranteed throughput at some specified rate (r bits/sec).
2) Elastic Applications
➢ These applications may not need a guaranteed throughput. For ex: Electronic mail, File transfer &
Web transfers.

Timing
• A transport-layer protocol can provide timing-guarantees.
• For ex: guaranteeing every bit arrives at the receiver in less than 100 msec.
• Timing constraints are useful for real-time applications such as
→ Internet telephony
→ Virtual environments
→ Teleconferencing and
→ Multiplayer games

Security
• A transport-protocol can provide one or more security services.
• For example,
1) In the sending host, a transport-protocol can encrypt all the transmitted-data.
2) In the receiving host, the transport-protocol can decrypt the received-data.


Transport Services Provided by the Internet
• The Internet makes two transport-protocols available to applications, UDP and TCP.
• An application-developer who creates a new network-application must use either: UDP or TCP.
• Both UDP & TCP offers a different set of services to the invoking applications.
• Table 1.1 shows the service requirements for some selected applications.

Table 1.1: Requirements of selected network-applications


Application                               Data Loss       Throughput                  Time Sensitive
File transfer/download                    No loss         Elastic                     No
E-mail                                    No loss         Elastic                     No
Web documents                             No loss         Elastic (few kbps)          No
Internet telephony/Video conferencing     Loss-tolerant   Audio: few kbps-1 Mbps;     Yes: 100s of ms
                                                          Video: 10 kbps-5 Mbps
Streaming stored audio/video              Loss-tolerant   Same as above               Yes: few seconds
Interactive games                         Loss-tolerant   Few kbps-10 kbps            Yes: 100s of ms
Instant messaging                         No loss         Elastic                     Yes and no

TCP Services
• An application using transport-protocol TCP, receives following 2 services.
1) Connection-Oriented Service
➢ Before the start of communication, client & server need to exchange control-information.
➢ This phase is called handshaking phase.
➢ Then, the two processes can send messages to each other over the connection.
➢ After the end of communication, the applications must tear down the connection.
2) Reliable Data Transfer Service
➢ The communicating processes must deliver all data sent without error & in the proper order.
• TCP also includes a congestion-control.
• The congestion-control throttles a sending-process when the network is congested.

UDP Services
• UDP is a lightweight transport-protocol, providing minimal services.
• UDP is connectionless, so there is no handshaking before the 2 processes start to communicate.
• UDP provides an unreliable data transfer service.
• Unreliable means providing no guarantee that the message will reach the receiving-process.
• Furthermore, messages that do arrive at the receiving-process may arrive out-of-order.
• UDP does not include a congestion-control.
• UDP can pump data into the network-layer at any rate.

The Web & HTTP


• The appearance of Web dramatically changed the Internet.


• Web has many advantages for a lot of applications.
• It operates on demand so that the users receive what they want when they want it.
• It provides an easy way for everyone to make information available all over the world.
• Hyperlinks and search engines help us navigate through an ocean of Web-sites.
• Forms, JavaScript, Java applets, and many other devices enable us to interact with pages and sites.
• The Web serves as a platform for many killer applications including YouTube, Gmail, and Facebook.

Overview of HTTP
Web
• A web-page consists of objects (HTML Hyper Text Markup Language).
• An object is a file such as an HTML file, a JPEG image, a Java applet, or a video clip.
• The object is addressable by a single URL (URL Uniform Resource Locator).
• Most Web-pages consist of a base HTML file & several referenced objects.
• For example:
If a Web-page contains HTML text and five JPEG images; then the Web-page has six objects:
1) Base HTML file and
2) Five images.
• The base HTML file references the other objects in the page with the object's URLs.
• URL has 2 components:
1) The hostname of the server that houses the object and
2) The object’s path name.
• For example:
“http://www.someSchool.edu/someDepartment/picture.gif”
In above URL,
1) Hostname = “www.someSchool.edu ”
2) Path name = “/someDepartment/picture.gif”.
• The web browsers implement the client-side of HTTP. For ex: Google Chrome, Internet Explorer
• The web-servers implement the server-side of HTTP. For ex: Apache

HTTP
• HTTP is Web’s application-layer protocol (Figure 1.3) (HTTP HyperText Transfer Protocol).
• HTTP defines
→ how clients request Web-pages from servers and
→ how servers transfer Web-pages to clients.

Figure 1.3: HTTP request-response behavior

• When a user requests a Web-page, the browser sends HTTP request to the server.
• Then, the server responds with HTTP response that contains the requested-objects.
• HTTP uses TCP as its underlying transport-protocol.
• The HTTP client first initiates a TCP connection with the server.
• After connection setup, the browser and the server-processes access TCP through their sockets.

• HTTP is a stateless protocol.


• Stateless means the server sends requested-object to client w/o storing state-info about the client.
• HTTP uses the client-server architecture:
1) Client
➢ Browser that requests receive and displays Web objects.
2) Server
➢ Web-server sends objects in response to requests.

Non-Persistent & Persistent Connections


• In many internet applications, the client and server communicate for an extended period of time.
• When this client-server interaction takes place over TCP, a decision should be made:
1) Should each request/response pair be sent over a separate TCP connection or
2) Should all requests and their corresponding responses be sent over same TCP connection?
• These different connections are called non-persistent connections (1) or persistent connections (2).
• Default mode: HTTP uses persistent connections.

HTTP with Non-Persistent Connections


• A non-persistent connection is closed after the server sends the requested-object to the client.
• In other words, the connection is used exactly for one request and one response.
• For downloading multiple objects, multiple connections must be used.
• Suppose user enters URL: "http://www.someSchool.edu/someDepartment/home.index"
• Assume above link contains text and references to 10 jpeg images.

Figure 1.4: Back-of-the-envelope calculation for the time needed to request and receive an HTML file

• Here is how it works:

• RTT is the time taken for a packet to travel from client to server and then back to the client.
• The total response time is sum of following (Figure 1.4):
i) One RTT to initiate TCP connection (RTT Round Trip Time).
ii) One RTT for HTTP request and first few bytes of HTTP response to return.

iii) File transmission time.


i.e. Total response time = (i) + (ii) + (iii) = 1 RTT+ 1 RTT+ File transmission time
= 2(RTT) + File transmission time
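
• A quick back-of-the-envelope calculation of the formula above; the RTT, object size and link rate are assumed numbers, not values from the text.

# Assumed numbers: 100 ms round-trip time, 1 Mbit object, 10 Mbit/s link.
rtt = 0.100                     # seconds
object_bits = 1_000_000
link_rate = 10_000_000          # bits per second

transmission_time = object_bits / link_rate          # 0.1 s
total = 2 * rtt + transmission_time                  # TCP setup + request/response + transfer
print(total)                                         # -> 0.3 seconds for this object
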

HTTP with Persistent Connections


• Problem with Non-Persistent Connections:
1) A new connection must be established and maintained for each requested-object.
➢ Hence, buffers must be allocated and state info must be kept in both the client and server.
➢ This results in a significant burden on the server.
2) Each object suffers a delivery delay of two RTTs:
i) One RTT to establish the TCP connection and
ii) One RTT to request and receive an object.
• Solution: Use persistent connections.
• With persistent connections, the server leaves the TCP connection open after sending responses.
• Hence, subsequent requests & responses b/w same client & server can be sent over same connection
• The server closes the connection only when the connection is not used for a certain amount of time.
• Default mode of HTTP: Persistent connections with pipelining.
• Advantages:
1) This method requires only one RTT for all the referenced-objects.
2) The performance is improved by 20%.

HTTP Message Format


• Two types of HTTP messages: 1) Request-message and 2) Response-message.

HTTP Request Message

Figure 1.5: General format of an HTTP request-message

• An example of request-message is as follows:

GET /somedir/page.html HTTP/1.1
Host: www.someschool.edu
Connection: close
User-agent: Mozilla/5.0
Accept-language: eng

• The request-message contains 3 sections (Figure 1.5):


1) Request-line
2) Header-line and
3) A blank line (carriage return & line feed) that marks the end of the header-lines.
• The first line of message is called the request-line. The subsequent lines are called the header-lines.
• The request-line contains 3 fields. The meaning of the fields is as follows:
1) Method
➢ “GET”: This method is used when the browser requests an object from the server.
2) URL
➢ “/somedir/page.html”: This is the object requested by the browser.
3) Version
➢ “HTTP/1.1”: This is version used by the browser.
• The request-message contains 4 header-lines. The meaning of the header-lines is as follows:
1) “Host: www.someschool.edu” specifies the host on which the object resides.
2) “Connection: close” means requesting a non-persistent connection.
3) “User-agent:Mozilla/5.0” means the browser used is the Firefox.
4) “Accept-language:eng” means English is the preferred language.
• The method field can take following values: GET, POST, HEAD, PUT and DELETE.
1) GET is used when the browser requests an object from the server.
2) POST is used when the user fills out a form & sends to the server.
3) HEAD is identical to GET except the server must not return a message-body in the response.
4) PUT is used to upload objects to servers.


5) DELETE allows an application to delete an object on a server.
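
• A sketch of sending such a request-message directly over a TCP socket. The host used here is example.com (a real, reserved test host) instead of the fictitious www.someschool.edu, so the sketch can actually be run.

import socket

request = ("GET / HTTP/1.1\r\n"
           "Host: example.com\r\n"
           "Connection: close\r\n"
           "User-agent: Mozilla/5.0\r\n"
           "Accept-language: eng\r\n"
           "\r\n")                                   # blank line ends the header section

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('example.com', 80))                    # port 80 is the standard HTTP port
sock.sendall(request.encode('ascii'))
response = b''
while True:
    chunk = sock.recv(4096)
    if not chunk:                                    # server closed the connection
        break
    response += chunk
sock.close()
print(response.split(b'\r\n')[0])                    # status line, e.g. b'HTTP/1.1 200 OK'
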

HTTP Response Message

Figure 1.6: General format of an HTTP response-message

• An example of response-message is as follows:

HTTP/1.1 200 OK
Connection: close
Date: Tue, 09 Aug 2011 15:44:04 GMT
Server: Apache/2.2.3 (CentOS)
Last-Modified: Tue, 09 Aug 2011 15:11:03 GMT
Content-Length: 6821
Content-Type: text/html

(data data data data data ...)

• The response-message contains 3 sections (Figure 1.6):


1) Status line
2) Header-lines and
3) Data (Entity body).
• The status line contains 3 fields:
1) Protocol version
2) Status-code and
3) Status message.
• Some common status-codes and associated messages include:
1) 200 OK: Standard response for successful HTTP requests.
2) 400 Bad Request: The server cannot process the request due to a client error.
3) 404 Not Found: The requested resource cannot be found.
• The meaning of the Status line is as follows:
“HTTP/1.1 200 OK”: This line indicates the server is using HTTP/1.1 & that everything is OK.
• The response-message contains 6 header-lines. The meaning of the header-lines is as follows:
1) Connection: This line tells the client that the server will close the TCP connection after sending the message.
2) Date: This line indicates the time & date when the response was sent by the server.
3) Server: This line indicates that the message was generated by an Apache Web-server.
4) Last-Modified: This line indicates the time & date when the object was last modified.

5) Content-Length: This line indicates the number of bytes in the sent-object.


6) Content-Type: This line indicates that the object in the entity body is HTML text.

User-Server Interaction: Cookies


• Cookies refer to a small text file created by a Web-site that is stored in the user's computer.
• Cookies are stored either temporarily for that session only or permanently on the hard disk.
• Cookies allow Web-sites to keep track of users.
• Cookie technology has four components:
1) A cookie header-line in the HTTP response-message.
2) A cookie header-line in the HTTP request-message.
3) A cookie file kept on the user’s end-system and managed by the user’s browser.
4) A back-end database at the Web-site.

Figure 1.7: Keeping user state with cookies

• Here is how it works (Figure 1.7):


1) When a user first time visits a site, the server
→ creates a unique identification number (1678) and
→ creates an entry in its back-end database by the identification number.
2) The server then responds to user’s browser.
➢ HTTP response includes Set-cookie: header which contains the identification number (1678)

3) The browser then stores the identification number into the cookie-file.
4) Each time the user requests a Web-page, the browser
→ extracts the identification number from the cookie file, and
→ puts the identification number in the HTTP request.
5) In this manner, the server is able to track user’s activity at the web-site.

Web Caching
• A Web-cache is a network entity that satisfies HTTP requests on behalf of an origin Web-server.
• The Web-cache has disk-storage.
• The disk-storage contains copies of recently requested-objects.

Figure 1.8: Clients requesting objects through a Web-cache (or Proxy Server)

• Here is how it works (Figure 1.8):


1) The user's HTTP requests are first directed to the web-cache.
2) If the cache has the object requested, the cache returns the requested-object to the client.
3) If the cache does not have the requested-object, then the cache
→ connects to the original server and
→ asks for the object.
4) When the cache receives the object, the cache
→ stores a copy of the object in local-storage and
→ sends a copy of the object to the client.
• A cache acts as both a server and a client at the same time.
1) The cache acts as a server when the cache
→ receives requests from a browser and
→ sends responses to the browser.
2) The cache acts as a client when the cache
→ requests to an original server and
→ receives responses from the origin server.
• Advantages of caching:
1) To reduce response-time for client-request.
2) To reduce traffic on an institution’s access-link to the Internet.
3) To reduce Web-traffic in the Internet.

The Conditional GET


• Conditional GET refers a mechanism that allows a cache to verify that the objects are up to date.
• An HTTP request-message is called conditional GET if
1) Request-message uses the GET method and
2) Request-message includes an If-Modified-Since: header-line.
• The following is an example of using conditional GET:
GET /fruit/kiwi.fig HTTP/1.1
Host: www.exoriguecuisine.com
If-modified-since: Wed, 7 Sep 2011 09:23:24

• The response is:


HTTP/1.1 304 Not Modified
Date: Sat, 15 Oct 2011 15:39:29
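• A cache could issue the same conditional GET from Python as sketched below (standard http.client; the host and path are taken from the example above and may not actually exist):

import http.client

conn = http.client.HTTPConnection("www.exoriguecuisine.com")
conn.request("GET", "/fruit/kiwi.fig",
             headers={"If-Modified-Since": "Wed, 7 Sep 2011 09:23:24"})
resp = conn.getresponse()

if resp.status == 304:                    # Not Modified: the cached copy is still valid
    print("use the cached copy")
else:                                     # 200 OK: the object changed, refresh the cache
    fresh_object = resp.read()

conn.close()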

File Transfer: FTP


• FTP is used by the local host to transfer files to or from a remote-host over the network.
• FTP uses client-server architecture (Figure 1.9).
• FTP uses 2 parallel TCP connections (Figure 1.10):
1) Control Connection
➢ The control-connection is used for sending control-information between the local and remote-hosts.
➢ The control-information includes:
→ user identification
→ password
→ commands to change directory and
→ commands to put & get files.
2) Data Connection
➢ The data-connection is used to transfer files.

Figure 1.9: FTP moves files between local and remote file systems

Figure 1.10: Control and data-connections

• Here is how it works:


1) When session starts, the client initiates a control-connection with the server on port 21.
2) The client sends user-identity and password over the control-connection.
3) Then, the server initiates data-connection to the client on port 20.
4) FTP sends exactly one file over the data-connection and then closes the data-connection.
5) Usually, the control-connection remains open throughout the duration of the user-session.
6) But, a new data-connection is created for each file transferred within a session.
• During a session, the server must maintain the state-information about the user.
• For example:
The server must keep track of the user's current directory.
• Disadvantage:
Keeping track of state-info limits the no. of sessions maintained simultaneously by a server.
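• A minimal sketch of an FTP session using Python's standard ftplib is shown below; the host, credentials and file name are placeholders:

from ftplib import FTP

ftp = FTP("ftp.example.com")                    # control-connection on port 21
ftp.login("username", "password")               # USER / PASS over the control-connection
ftp.cwd("/pub")                                 # change the current remote directory
ftp.retrlines("LIST")                           # one data-connection for the listing
with open("file.txt", "wb") as f:
    ftp.retrbinary("RETR file.txt", f.write)    # another data-connection for the file
ftp.quit()                                      # close the control-connection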

FTP Commands & Replies


• The commands are sent from client to server.
• The replies are sent from server to client.
• The commands and replies are sent across the control-connection in 7-bit ASCII format.
• Each command consists of 4-uppercase ASCII characters followed by optional arguments.
• For example:
1) USER username
➢ Used to send the user identification to the server.
2) PASS password
➢ Used to send the user password to the server.
3) LIST
➢ Used to ask the server to send back a list of all the files in the current remote directory.
4) RETR filename
➢ Used to retrieve a file from the current directory of the remote-host.
5) STOR filename
➢ Used to store a file into the current directory of the remote-host.
• Each reply consists of a 3-digit number followed by an optional message.
• For example:
1) 331 Username OK, password required
2) 125 Data-connection already open; transfer starting
3) 425 Can’t open data-connection
4) 452 Error writing file

Electronic Mail in the Internet


• e-mail is an asynchronous communication medium in which people send and read messages.
• e-mail is fast, easy to distribute, and inexpensive.
• e-mail has features such as
→ messages with attachments
→ hyperlinks
→ HTML-formatted text and
→ embedded photos.
• Three major components of an e-mail system (Figure 1.11):
1) User Agents
➢ User-agents allow users to read, reply to, forward, save and compose messages.
➢ For example: Microsoft Outlook and Apple Mail
2) Mail Servers
➢ Mail-servers contain mailboxes for users.
➢ A message is first sent to the sender's mail-server.
➢ Then, the sender’s mail-server sends the message to the receiver's mail-server.
➢ If the sender’s server cannot deliver mail to receiver’s server, the sender’s server
→ holds the message in a message queue and
→ attempts to transfer the message later.
3) SMTP (Simple Mail Transfer Protocol)
➢ SMTP is an application-layer protocol used for email.
➢ SMTP uses TCP to transfer mail from the sender’s mail-server to the recipient’s mail-server.
➢ SMTP has two sides:
1) A client-side, which executes on the sender’s mail-server.
2) A server-side, which executes on the recipient’s mail-server.
➢ Both the client and server-sides of SMTP run on every mail-server.
➢ When a mail-server receives mail from other mail-servers, the mail-server acts as a server. When a
mail-server sends mail to other mail-servers, the mail-server acts as a client.

Figure 1.11: A high-level view of the Internet e-mail system

SMTP
• SMTP is the most important protocol of the email system.
• Three characteristics of SMTP (that differ from other applications):
1) The message body uses 7-bit ASCII code only.
2) Normally, no intermediate mail-servers are used for sending mail.
3) Mail is transferred across multiple networks by mail relaying.
• Here is how it works:
1) Usually, mail-servers listen on port 25.
2) The sending server initiates a TCP connection to the receiving mail-server.
3) If the receiver's server is down, the sending server will try later.
4) If connection is established, the client & the server perform application-layer handshaking.
5) Then, the client indicates the e-mail address of the sender and the recipient.
6) Finally, the client sends the message to the server over the same TCP connection.
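• The same steps can be sketched with Python's standard smtplib; the mail-server name and addresses are placeholders:

import smtplib

msg = ("From: alice@crepes.fr\r\n"
       "To: bob@hamburger.edu\r\n"
       "Subject: Test\r\n"
       "\r\n"
       "Do you like ketchup?\r\n")

server = smtplib.SMTP("mail.hamburger.edu", 25)   # steps 1-4: TCP connection + handshake
server.sendmail("alice@crepes.fr",                # step 5: sender address ...
                ["bob@hamburger.edu"],            # ... and recipient address
                msg)                              # step 6: the message itself
server.quit()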

Comparison of SMTP with HTTP


1) HTTP is mainly a pull protocol. This is because
→ someone loads information on a web-server and
→ users use HTTP to pull the information from the server.
➢ On the other hand, SMTP is primarily a push protocol. This is because
→ the sending mail-server pushes the file to receiving mail-server.
2) SMTP requires each message to be in seven-bit ASCII format.

➢ If message contains binary-data, the message has to be encoded into 7-bit ASCII format.
➢ HTTP does not have this restriction.
3) HTTP encapsulates each object of message in its own response-message.
➢ SMTP places all of the message's objects into one message.

Mail Access Protocols


• It is not practical to run a mail-server on a user's PC or laptop. This is because
→ mail-servers must be always-on and
→ mail-servers must have fixed IP addresses
• Problem: How can a user access e-mail from a PC or laptop?
• Solution: Use mail access protocols.
• Three mail access protocols:
1) Post Office Protocol (POP)
2) Internet Mail Access Protocol (IMAP) and
3) HTTP.

POP
• POP is an extremely simple mail access protocol.
• A POP server listens on port 110.
• Here is how it works:
➢ The user-agent on the client's computer opens a TCP connection to the mail-server.
➢ POP then progresses through three phases:
1) Authentication
➢ The user-agent sends a user name and password to authenticate the user.
2) Transaction
➢ The user-agent retrieves messages.
➢ Also, the user-agent can
→ mark messages for deletion
→ remove deletion marks &
→ obtain mail statistics.
➢ The user-agent issues commands, and the server responds to each command with a reply.
➢ There are two responses:
i) +OK: used by the server to indicate that the previous command was fine.
ii) –ERR: used by the server to indicate that something is wrong.
3) Update
➢ After user issues a quit command, the mail-server removes all messages marked for deletion.
• Disadvantage:
The user cannot manage mail folders on the remote mail-server. For ex: the user cannot create remote folders and move messages between them.
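• A minimal sketch of a POP session using Python's standard poplib is shown below (server name and credentials are placeholders); the three phases described above are visible:

import poplib

mbox = poplib.POP3("pop.example.com", 110)      # TCP connection to port 110
mbox.user("username")                           # authentication phase
mbox.pass_("password")

count, size = mbox.stat()                       # transaction phase: mail statistics
for i in range(1, count + 1):
    resp, lines, octets = mbox.retr(i)          # retrieve message i
    mbox.dele(i)                                # mark message i for deletion

mbox.quit()                                     # update phase: deletions take effect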

IMAP
• IMAP is another mail access protocol, which has more features than POP.
• An IMAP server will associate each message with a folder.
• When a message first arrives at server, the message is associated with recipient's INBOX folder

• Then, the recipient can


→ move the message into a new, user-created folder
→ read the message
→ delete the message and
→ search remote folders for messages matching specific criteria.
• An IMAP server maintains user state-information across IMAP sessions.
• IMAP permits a user-agent to obtain components of messages.
For example, a user-agent can obtain just the message header of a message.
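• A minimal sketch using Python's standard imaplib (server name and credentials are placeholders); it fetches only a message header, illustrating that IMAP can obtain components of messages:

import imaplib

imap = imaplib.IMAP4_SSL("imap.example.com")
imap.login("username", "password")
imap.select("INBOX")                                   # messages are kept in folders
typ, data = imap.search(None, "ALL")                   # message numbers in INBOX
if data[0]:
    first = data[0].split()[0].decode()
    typ, header = imap.fetch(first, "(BODY[HEADER])")  # header only, not the whole body
imap.logout()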

Web-Based E-Mail
• HTTP is now commonly used to access Web-based e-mail.
• The user-agent is an ordinary Web browser.
• The user communicates with its remote-server via HTTP.
• Now, Web-based emails are provided by many companies including Google, Yahoo etc.

DNS — The Internet’s Directory Service


• DNS is an internet service that translates domain-names into IP addresses.
For ex: the domain-name “www.google.com” might translate to IP address “198.105.232.4”.
• Because domain-names are alphabetic, they are easier to remember for human being.
• But the Internet itself is based on IP addresses. (DNS = Domain Name System)

Services Provided by DNS


• The DNS is
1) A distributed database implemented in a hierarchy of DNS servers.
2) An application-layer protocol that allows hosts to query the distributed database.
• DNS servers are often UNIX machines running the BIND software.
• The DNS protocol runs over UDP and uses port 53. (BIND = Berkeley Internet Name Domain)
• DNS is used by application-layer protocols such as HTTP, SMTP, and FTP.
• Assume a browser requests the URL www.someschool.edu/index.html.
• Next, the user’s host must first obtain the IP address of www.someschool.edu
• This is done as follows:
1) The same user machine runs the client-side of the DNS application.
2) The browser
→ extracts the hostname “www.someschool.edu” from the URL and
→ passes the hostname to the client-side of the DNS application.
3) The client sends a query containing the hostname to a DNS server.
4) The client eventually receives a reply, which includes the IP address for the hostname.
5) After receiving the IP address, the browser can initiate a TCP connection to the HTTP server.
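• From the application's point of view, the same sequence can be sketched in Python: the hostname is resolved via DNS first, then a TCP connection is opened to the returned IP address (the hostname is the one from the example above and may not actually resolve):

import socket

hostname = "www.someschool.edu"
ip_address = socket.gethostbyname(hostname)     # steps 1-4: DNS query and reply

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((ip_address, 80))                  # step 5: TCP connection to the HTTP server
sock.close()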
• DNS also provides the following services:
1) Host Aliasing
➢ A host with a complicated hostname can have one or more alias names.
2) Mail Server Aliasing
➢ For obvious reasons, it is highly desirable that e-mail addresses be mnemonic.

3) Load Distribution
➢ DNS is also used to perform load distribution among replicated servers.
➢ Busy sites are replicated over multiple servers & each server runs on a different system.

Overview of How DNS Works


• Distributed database design is more preferred over centralized design because:
1) A Single Point of Failure
➢ With a single centralized DNS server, a crash of that server would effectively bring the entire Internet to a halt.
2) Traffic Volume
➢ A Single DNS Server cannot handle the huge global DNS traffic.
➢ But with a distributed system, the traffic is spread across many servers, which reduces the load on each one.
3) Distant Centralized Database
➢ A single DNS server cannot be “close to” all the querying clients.
➢ If we put the single DNS server in Mysore,
then all queries from USA must travel to the other side of the globe.
➢ This can lead to significant delays.
4) Maintenance
➢ The single DNS server would have to keep records for all Internet hosts.
➢ This centralized database has to be updated frequently to account for every new host.

A Distributed, Hierarchical Database

Figure 1.12: Portion of the hierarchy of DNS servers

• Suppose a client wants to determine IP address for hostname “www.amazon.com” (Figure 1.12):
1) The client first contacts one of the root servers, which returns IP addresses for the TLD (top-level domain) servers.
2) Then, the client contacts one of these TLD servers.
➢ The TLD server returns the IP address of an authoritative-server for “amazon.com”.
3) Finally, the client contacts one of the authoritative-servers for amazon.com.
➢ The authoritative-server returns the IP address for the hostname “www.amazon.com”.

Recursive Queries & Iterative Queries

Figure 1.13: Interaction of the various DNS servers

• The example shown in Figure 1.13 makes use of both recursive queries and iterative queries.
• The query 1 sent from cis.poly.edu to dns.poly.edu is a recursive query. This is because
→ the query asks dns.poly.edu to obtain the mapping on its behalf.
• But the subsequent three queries 2, 4 and 6 are iterative. This is because
→ all replies are directly returned to dns.poly.edu.

DNS Records & Messages


• The DNS server stores resource-records (RRs).
• RRs provide hostname-to-IP address mappings.
• Each DNS reply message carries one or more resource-records.
• A resource-record is a 4-tuple that contains the following fields: (Name, Value, Type, TTL)
• TTL (time to live) determines when a resource should be removed from a cache.
• The meaning of Name and Value depend on Type:
1) If Type=A, then Name is a hostname and Value is the IP address for the hostname.
➢ Thus, a Type A record provides the standard hostname-to-IP address mapping. For ex:
(relay1.bar.foo.com, 145.37.93.126, A)
2) If Type=NS, then
i) Name is a domain (such as foo.com) and
ii) Value is the hostname of an authoritative DNS server.
➢ This record is used to route DNS queries further along in the query chain. For ex: (foo.com,
dns.foo.com, NS) is a Type NS record.
3) If Type=CNAME, then Value is a canonical hostname for the alias hostname Name.
➢ This record can provide querying hosts the canonical name for a hostname. For ex: (foo.com,
relay1.bar.foo.com, CNAME) is a CNAME record.
4) If Type=MX, Value is the canonical name of a mail-server that has an alias hostname Name.
➢ MX records allow the hostnames of mail-servers to have simple aliases. For ex: (foo.com,
mail.bar.foo.com, MX) is an MX record.

DNS Messages
• Two types of DNS messages: 1) query and 2) reply.
• Both query and reply messages have the same format.

Figure 1.14: DNS message format

• The various fields in a DNS message are as follows (Figure 1.14):


1) Header Section
• The first 12 bytes is the header-section.
• This section has following fields:
i) Identification
➢ This field identifies the query.
➢ This identifier is copied into the reply message to a query.
➢ This identifier allows the client to match received replies with sent queries.
ii) Flag
➢ This field has following 3 flag-bits:
a) Query/Reply
¤ This flag-bit indicates whether the message is a query (0) or a reply (1).
b) Authoritative
¤ This flag-bit is set in a reply message when a DNS server is an authoritative-server.
c) Recursion Desired
¤ This flag-bit is set when a client desires that the DNS server perform recursion.
iii) Four Number-of-Fields
➢ These fields indicate the no. of occurrences of 4 types of data sections that follow the header.
2) Question Section
• This section contains information about the query that is being made.
• This section has following fields:
i) Name
➢ This field contains the domain-name that is being queried.
ii) Type
➢ This field indicates the type of question being asked about the domain-name.
3) Answer Section

• This section contains a reply from a DNS server.


• This section contains the resource-records for the name that was originally queried.
• A reply can return multiple RRs in the answer, since a hostname can have multiple IP addresses.
4) Authority Section
• This section contains records of other authoritative-servers.
5) Additional Section
• This section contains other helpful records.

MODULE WISE QUESTIONS

1) Explain client-server & P2P architecture.


2) With block diagram, explain how application processes communicate through a socket.
3) Explain 4 transport services available to applications.
4) Briefly explain 2 transport layer protocols.
5) With block diagram, explain the working of Web & HTTP.
6) Explain HTTP non-persistent & persistent connections.
7) With general format, explain HTTP request- & response-messages.
8) With a diagram, explain how cookies are used in user-server interaction.
9) With a diagram, explain the working of web caching.
10) With a diagram, explain the working of FTP.
11) With a diagram, explain the working of e-mail system.
12) Briefly explain 3 mail access protocols.
13) Briefly explain the working of DNS.
14) With general format, explain DNS messages.

