COMPUTER NETWORKS

(III-CSE, SEMESTER-1, R-22)


PREPARED BY-MAGANTI APPARAO
HEAD OF THE DEPARTMENT
ST. MARY’S ENGINEERING COLLEGE

UNIT – II
DATA LINK LAYER
 Design issues, framing
 Error detection and correction
ELEMENTARY DATA LINK PROTOCOLS
 simplex protocol
 A simplex stop and wait protocol for an error-free
channel
 A simplex stop and wait protocol for noisy channel
SLIDING WINDOW PROTOCOLS:
 A one-bit sliding window protocol
 A protocol using Go-Back-N
 A protocol using Selective Repeat
 Example data link protocols
MEDIUM ACCESS SUB LAYER
 The channel allocation problem
MULTIPLE ACCESS PROTOCOLS
 ALOHA
 Carrier sense multiple access protocols
 collision free protocols
 Wireless LANs
 Data link layer switching
DATA LINK LAYER FUNCTIONS
(SERVICES)
1) Providing services to the network layer
Unacknowledged connectionless service: appropriate for channels
with low error rates and for real-time traffic. Example: Ethernet.
Acknowledged connectionless service: useful over unreliable
channels such as Wi-Fi; each frame is acknowledged, and a timer
triggers retransmission if the acknowledgement does not arrive.
Acknowledged connection-oriented service: guarantees that frames
are received exactly once and in the right order. Appropriate
over long, unreliable links such as a satellite channel or a
long-distance telephone circuit.
2) Framing: the data link layer divides the stream of bits
received from the network layer into manageable data units
called frames.
Physical addressing: the data link layer adds a header to the
frame to define the physical address of the sender and/or
receiver of the frame, which is needed when frames are to be
distributed to different systems on the network.
3) Flow control: a receiving node can receive frames at a
faster rate than it can process them. Without flow control,
the receiver's buffer can overflow and frames can get lost. To
overcome this problem, the data link layer uses flow control to
prevent the sending node on one side of the link from
overwhelming the receiving node on the other side. This
prevents a traffic jam at the receiver side.
4) Error control: error control is achieved by adding a
trailer at the end of the frame. This mechanism also detects
and discards duplicate frames.
5) Error detection: Errors can be introduced by signal
attenuation and noise. Data Link Layer protocol provides
a mechanism to detect one or more errors. This is
achieved by adding error detection bits in the frame and
then receiving node can perform an error check.
6) Error correction: error correction is similar to error
detection, except that the receiving node not only detects the
errors but also determines where in the frame they occurred.
7) Access Control: Protocols of this layer determine
which of the devices has control over the link at any given
time, when two or more devices are connected to the same
link.
8) Reliable delivery: the data link layer can provide a
reliable delivery service, i.e., transmit the network layer
datagram without error. Reliable delivery is accomplished with
retransmissions and acknowledgements. The data link layer
mainly provides reliable delivery over links with high error
rates, because an error can then be corrected locally, on the
link where it occurs, rather than forcing an end-to-end
retransmission of the data.
9) Half-duplex & full-duplex: in full-duplex mode, both nodes
can transmit data at the same time. In half-duplex mode, only
one node can transmit at a time.

FRAMING
To provide service to the network layer, the data link
layer must use the service provided to it by the physical
layer. What the physical layer does is accept a raw bit
stream and attempt to deliver it to the destination. This bit
stream is not guaranteed to be error free. The number of
bits received may be less than, equal to, or more than
the number of bits transmitted, and they may have
different values. It is up to the data link layer to detect
and, if necessary, correct errors. The usual approach is
for the data link layer to break the bit stream up into
discrete frames and compute the checksum for each frame
(framing). When a frame arrives at the destination, the
checksum is recomputed. If the newly computed
checksum is different from the one contained in the
frame, the data link layer knows that an error has occurred
and takes steps to deal with it (e.g., discarding the bad
frame and possibly also sending back an error report).
We will look at four framing methods:
 Character count.
 Flag bytes with byte stuffing.
 Starting and ending flags, with bit stuffing.
 Physical layer coding violations.
Character count method uses a field in the header to
specify the number of characters in the frame. When the
data link layer at the destination sees the character count,
it knows how many characters follow and hence where
the end of the frame is. This technique is shown in Fig. (a)
for four frames of sizes 5, 5, 8, and 8 characters,
respectively. The weakness of the method is that a
transmission error can garble the count itself, after which
the receiver loses track of every following frame boundary.
A character stream. (a) Without errors. (b) With one error
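The parsing step can be sketched in Python (a minimal sketch; the convention that the count byte includes itself is an assumption chosen to match the frame sizes in the figure):

```python
def parse_frames(stream):
    """Split a byte stream into frames using the character-count method.
    The first byte of each frame gives the frame's total length,
    including the count byte itself."""
    frames = []
    i = 0
    while i < len(stream):
        count = stream[i]                 # header field: frame length
        frames.append(stream[i:i + count])
        i += count                        # jump to the next frame's start
    return frames

# Four frames of sizes 5, 5, 8, and 8 characters, as in the figure.
stream = bytes([5, 1, 2, 3, 4,
                5, 6, 7, 8, 9,
                8, 0, 1, 2, 3, 4, 5, 6,
                8, 7, 8, 9, 0, 1, 2, 3])
print([len(f) for f in parse_frames(stream)])   # [5, 5, 8, 8]
```

If any count byte is corrupted in transit, every subsequent frame boundary computed by this loop is wrong, which is exactly the failure shown in part (b) of the figure.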
Flag bytes with byte stuffing method gets around the
problem of resynchronization after an error by having
each frame start and end with special bytes. In the past,
the starting and ending bytes were different, but in recent
years most protocols have used the same byte, called a
flag byte, as both the starting and ending delimiter, as
shown in Fig. (a) as FLAG.
In this way, if the receiver ever loses synchronization, it
can just search for the flag byte to find the end of the
current frame. Two consecutive flag bytes indicate the
end of one frame and start of the next one.
A frame delimited by flag bytes (b) Four examples of
byte sequences before and after byte stuffing
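The stuffing and de-stuffing steps can be sketched as follows (the FLAG and ESC byte values 0x7E and 0x7D are illustrative choices, not mandated by the text):

```python
FLAG, ESC = 0x7E, 0x7D   # illustrative flag and escape byte values

def byte_stuff(payload):
    """Delimit with flags; escape any flag or escape byte in the data."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)        # stuff an escape byte before it
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame):
    """Strip the delimiting flags and remove the stuffed escape bytes."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                 # skip the escape, keep the next byte
        out.append(body[i])
        i += 1
    return bytes(out)

data = bytes([0x41, FLAG, 0x42])   # payload containing a flag byte
framed = byte_stuff(data)
print(byte_unstuff(framed) == data)   # True: stuffing is transparent
```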
Starting and ending flags, with bit stuffing allows data
frames to contain an arbitrary number of bits and allows
character codes with an arbitrary number of bits per
character. It works like this. Each frame begins and
ends with a special bit pattern, 01111110 (in fact, a flag
byte).
Whenever the sender's data link layer encounters five
consecutive 1s in the data, it automatically stuffs a 0 bit
into the outgoing bit stream. This bit stuffing is analogous
to byte stuffing, in which an escape byte is stuffed into
the outgoing character stream before a flag byte in the
data.
When the receiver sees five consecutive incoming 1 bits
followed by a 0 bit, it automatically de-stuffs (i.e., deletes)
the 0 bit. Just as byte stuffing is completely transparent to the
network layer in both computers, so is bit stuffing. If the
user data contain the flag pattern 01111110, this flag
is transmitted as 011111010 but stored in the receiver's
memory as 01111110.

Fig: Bit stuffing. (a) The original data. (b) The data as
they appear on the line.
(c) The data as they are stored in the receiver's memory
after destuffing.
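The stuffing and de-stuffing rules above can be sketched over a bit string (a minimal illustration over '0'/'1' characters, not a production framing routine):

```python
def bit_stuff(bits):
    """After five consecutive 1s, insert a 0 into the outgoing stream."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')   # stuffed bit
            run = 0
    return ''.join(out)

def bit_unstuff(bits):
    """Delete the 0 that follows five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == '1' else 0
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
        i += 1
    return ''.join(out)

print(bit_stuff('01111110'))     # 011111010, as in the text
print(bit_unstuff('011111010'))  # 01111110
```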
Physical layer coding violations method of framing is
only applicable to networks in which the encoding on
the physical medium contains some redundancy. For
example, some LANs encode 1 bit of data by using 2
physical bits. Normally, a 1 bit is a high-low pair and a
0 bit is a low-high pair.
The scheme means that every data bit has a transition
in the middle, making it easy for the receiver to locate
the bit boundaries. The combinations high- high and
low-low are not used for data but are used for delimiting
frames in some protocols.

ERROR DETECTION
An error is a condition in which the receiver's information
does not match the sender's information. During
transmission, digital signals suffer from noise that can
introduce errors in the binary bits travelling from
sender to receiver: a 0 bit may change to 1,
or a 1 bit may change to 0.
Error Detecting Codes (Implemented either at Data
link layer or Transport Layer of OSI Model)
Whenever a message is transmitted, it may get
scrambled by noise or data may get corrupted. To avoid
this, we use error-detecting codes which are additional
data added to a given digital message to help us detect
if any error has occurred during transmission of the
message.
Basic approach used for error detection is the use of
redundancy bits, where additional bits are added to
facilitate detection of errors. Some popular techniques for
error detection are:
1. Simple Parity check
2. Two-dimensional Parity check
3. Checksum
4. Cyclic redundancy check
1) Simple parity check
Blocks of data from the source are passed through a
parity-bit generator: a parity bit of 1 is appended if the
block contains an odd number of 1s, and 0 is appended if it
contains an even number of 1s. This scheme makes the total
number of 1s even, which is why it is called even parity
checking.
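The generation and checking steps can be sketched as:

```python
def add_even_parity(bits):
    """Append 1 if the block has an odd number of 1s, else 0,
    making the total number of 1s even."""
    return bits + str(bits.count('1') % 2)

def check_even_parity(codeword):
    """Accept only if the total number of 1s (data + parity) is even."""
    return codeword.count('1') % 2 == 0

word = add_even_parity('1011')     # three 1s -> parity bit 1
print(word)                        # 10111
print(check_even_parity(word))     # True
print(check_even_parity('00111'))  # False: a single-bit error is caught
```

Note that simple parity detects any odd number of flipped bits but misses an even number of flips.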
2)Two-dimensional Parity check
Parity check bits are calculated for each row, which is
equivalent to a simple parity check bit. Parity check bits
are also calculated for all columns, then both are sent
along with the data. At the receiving end these are
compared with the parity bits calculated on the received
data.
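A sketch of two-dimensional parity over a small block of rows (even parity for both rows and columns; the sample bits are arbitrary):

```python
def two_d_parity(rows):
    """Append an even-parity bit to each row, then append a parity row
    computed column-wise over the rows and their parity bits."""
    with_row = [r + [sum(r) % 2] for r in rows]
    parity_row = [sum(col) % 2 for col in zip(*with_row)]
    return with_row + [parity_row]

def verify(block):
    """Every row and every column of a valid block has even parity."""
    return all(sum(r) % 2 == 0 for r in block) and \
           all(sum(c) % 2 == 0 for c in zip(*block))

block = two_d_parity([[1, 0, 1], [0, 1, 1]])
print(verify(block))   # True
block[0][1] ^= 1       # a single-bit error
print(verify(block))   # False: one row and one column parity now fail
```

Because the failing row and failing column intersect at the flipped bit, this scheme can also locate (and hence correct) a single-bit error.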

3)Checksum
In the checksum error detection scheme, the data is divided
into k segments of m bits each. At the sender's end
the segments are added using 1's complement
arithmetic to get the sum. The sum is complemented to
get the checksum. The checksum segment is sent along
with the data segments.
At the receiver’s end, all received segments are added
using 1’s complement arithmetic to get the sum. The sum
is complemented.
If the result is zero, the received data is accepted;
otherwise discarded.
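The sender and receiver steps can be sketched with 4-bit segments (the segment values are arbitrary illustrations):

```python
def ones_complement_sum(segments, m):
    """Add m-bit segments with end-around carry (1's complement)."""
    total = 0
    for s in segments:
        total += s
        if total >> m:                           # carry out of m bits:
            total = (total & ((1 << m) - 1)) + 1  # fold it back in
    return total

def make_checksum(segments, m):
    """Complement of the 1's complement sum of the data segments."""
    return ((1 << m) - 1) ^ ones_complement_sum(segments, m)

# Sender: three 4-bit segments plus their checksum.
segs, m = [0b1001, 0b1110, 0b0011], 4
ck = make_checksum(segs, m)

# Receiver: re-add everything; the complement of the result must be zero.
received_sum = ones_complement_sum(segs + [ck], m)
print(((1 << m) - 1) ^ received_sum == 0)   # True: accept the data
```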

4)Cyclic redundancy checks (CRC)


Unlike checksum scheme, which is based on addition,
CRC is based on binary division.
In CRC, a sequence of redundant bits, called cyclic
redundancy check bits, are appended to the end of data
unit so that the resulting data unit becomes exactly
divisible by a second, predetermined binary number.
At the destination, the incoming data unit is divided by
the same number. If at this step there is no remainder, the
data unit is assumed to be correct and is therefore
accepted.
A remainder indicates that the data unit has been
damaged in transit and therefore must be rejected.
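The modulo-2 (XOR) division can be sketched as follows; the data word 1101011011 and divisor 10011 are classic textbook values, which leave the remainder 1110:

```python
def mod2_div(bits, divisor):
    """Modulo-2 long division (XOR, no borrows); returns the remainder."""
    bits = list(bits)
    n = len(divisor) - 1
    for i in range(len(bits) - n):
        if bits[i] == '1':                 # divide only where the bit is set
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return ''.join(bits[-n:])

def crc_remainder(data, divisor):
    """Append len(divisor)-1 zero bits, then divide to get the CRC."""
    return mod2_div(data + '0' * (len(divisor) - 1), divisor)

data, divisor = '1101011011', '10011'
rem = crc_remainder(data, divisor)
print(rem)                             # 1110
print(mod2_div(data + rem, divisor))   # 0000: the codeword divides exactly
```

At the destination the receiver runs the same division over the whole received codeword; any non-zero remainder means the unit was damaged in transit.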
ERROR CORRECTION
Error Correction codes are used to detect and correct
the errors when data is transmitted from the sender to the
receiver.
Error Correction can be handled in two ways:
Backward error correction: Once the error is
discovered, the receiver requests the sender to retransmit
the entire data unit.
Forward error correction: In this case, the receiver
uses the error-correcting
code which automatically corrects the errors.
A single additional bit can detect the error, but cannot
correct it.
For correcting the errors, one has to know the exact
position of the error. For example, to correct a single-bit
error in a 7-bit codeword, the error correction code must
determine which of the seven bits is in error. To achieve
this, we have to add some redundant bits.
Suppose r is the number of redundant bits and d is the
number of data bits. The number of redundant
bits r can be calculated using the formula:

2^r >= d + r + 1
The value of r is the smallest value satisfying this
relation. For example, if the value of d is 4, the
smallest value of r that satisfies the relation is
3.
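The smallest r satisfying 2^r >= d + r + 1 can be found by simple search:

```python
def redundant_bits(d):
    """Smallest r with 2**r >= d + r + 1 for d data bits."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

print(redundant_bits(4))   # 3, as in the text
print(redundant_bits(7))   # 4 (the classic 7 data + 4 check Hamming code)
```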
To determine the position of the bit in error, a
technique developed by R. W. Hamming, known as the Hamming
code, can be applied to data units of any length; it uses
the relationship between the data bits and the redundant
bits described above.
HAMMING CODE
Parity bits: The bit which is appended to the original
data of binary bits so that the total number of 1s is even
or odd.
Even parity: To check for even parity, if the total
number of 1s is even, then the
value of the parity bit is 0. If the total number of 1s
occurrences is odd, then the value of the parity bit is 1.
Odd Parity: To check for odd parity, if the total number
of 1s is even, then the
value of parity bit is 1. If the total number of 1s is odd,
then the value of parity bit is 0.
Algorithm of Hamming code:
An information word of d bits is combined with r redundant
bits to form a codeword of d + r bits. Each of the (d + r)
bit positions is assigned a decimal value, starting from 1.
The r bits are placed at the positions that are powers of
2: positions 1, 2, 4, ..., 2^(r-1).
At the receiving end, the parity bits are recalculated. The
decimal value of the parity bits determines the position of
an error.
Relationship b/w Error position & binary number

Let's understand the concept of Hamming code through an
example. Suppose the original data to be sent is 1010.
Total number of data bits 'd' = 4

Number of redundant bits r: 2^r >= d + r + 1

2^r >= 4 + r + 1
Therefore, r = 3 satisfies the relation. Total number of
bits = d + r = 4 + 3 = 7.
Determining the position of the redundant bits
The number of redundant bits is 3. The three bits are
represented by r1, r2, r4. The positions of the redundant
bits are the powers of 2: 2^0 = 1, 2^1 = 2, 2^2 = 4.
The position of r1 = 1, the position of r2 = 2, the
position of r4 = 4.

Representation of Data on the addition of parity bits:

Determining the Parity bits


Determining the r1 bit: The r1 bit is calculated by
performing a parity check on the bit positions whose
binary representation includes 1 in the first position.

We observe from the above figure that the bit positions
whose binary representation includes 1 in the first
position are 1, 3, 5, 7. Now we perform the even-parity
check at these bit positions. The total number of 1s at
these positions corresponding to r1 is even; therefore,
the value of the r1 bit is 0.
Determining r2 bit: The r2 bit is calculated by
performing a parity check on the bit positions whose
binary representation includes 1 in the second position

We observe from the above figure that the bit positions
whose binary representation includes 1 in the second
position are 2, 3, 6, 7. Now we perform the even-parity
check at these bit positions. The total number of 1s at
these positions corresponding to r2 is odd; therefore,
the value of the r2 bit is 1.
Determining r4 bit: The r4 bit is calculated by
performing a parity check on the bit positions whose
binary representation includes 1 in the third position.
We observe from the above figure that the bit positions
whose binary representation includes 1 in the third
position are 4, 5, 6, 7. Now we perform the even-parity
check at these bit positions. The total number of 1s at
these positions corresponding to r4 is even; therefore,
the value of the r4 bit is 0.
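The whole construction can be sketched in Python. The data bits are supplied in increasing position order (positions 3, 5, 6, 7), so the text's word 1010, whose most significant bit sits at position 7, is passed as '0101'; the computed parity bits match r1 = 0, r2 = 1, r4 = 0 above.

```python
def hamming_encode(data):
    """Build a Hamming codeword: parity bits sit at the power-of-two
    positions 1, 2, 4, ..., each giving even parity over the positions
    whose binary representation has that bit set."""
    d = len(data)
    r = 0
    while 2 ** r < d + r + 1:          # smallest r with 2^r >= d + r + 1
        r += 1
    n = d + r
    code = [0] * (n + 1)               # index 0 unused: positions are 1-based
    it = iter(data)
    for pos in range(1, n + 1):
        if pos & (pos - 1):            # not a power of two: a data position
            code[pos] = int(next(it))
    for i in range(r):
        p = 2 ** i
        # even parity over every position whose binary form has bit i set
        code[p] = sum(code[k] for k in range(1, n + 1) if k & p) % 2
    return ''.join(str(b) for b in code[1:])

code = hamming_encode('0101')   # data 1010 in increasing position order
print(code)                     # positions 1..7: 0100101 (r1=0, r2=1, r4=0)
print(code[::-1])               # read from position 7 down to 1: 1010010
```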

ELEMENTARY DATA LINK PROTOCOLS


Protocols in the data link layer are designed so that this layer
can perform its basic functions: framing, error control and flow
control. Framing is the process of dividing bit - streams from
physical layer into data frames whose size ranges from a few
hundred to a few thousand bytes.
Error control mechanisms deal with transmission errors and the
retransmission of corrupted and lost frames. Flow control
regulates the speed of delivery so that a fast sender does not
drown a slow receiver.
Simplex Protocol
The Simplex protocol is a hypothetical protocol designed for
unidirectional data transmission over an ideal channel, i.e. a
channel through which transmission can never go wrong. It has
distinct procedures for sender and receiver. The sender simply
sends all its data onto the channel as soon as it becomes
available in its buffer. The receiver is assumed to process all
incoming data instantly. It is hypothetical since it does not
handle flow control or error control.
Stop – and – Wait Protocol
The Stop-and-Wait protocol also assumes a noiseless channel. It
provides unidirectional data transmission without any error
control facilities. However, it provides flow control, so that
a fast sender does not drown a slow receiver. The receiver has
a finite buffer size and finite processing speed. The sender may
send a frame only after it has received an indication from the
receiver that it is ready for further data.
Stop – and – Wait ARQ
Stop – and – wait Automatic Repeat Request (Stop – and – Wait
ARQ) is a variation of the above protocol with added error
control mechanisms, appropriate for noisy channels. The
sender keeps a copy of the sent frame. It then waits for a finite
time to receive a positive acknowledgement from receiver. If
the timer expires or a negative acknowledgement is received,
the frame is retransmitted. If a positive acknowledgement is
received, then the next frame is sent.
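The retransmit-until-acknowledged loop can be sketched as a toy simulation (the loss rate, seed, and frame names are illustrative; a real implementation uses timers and acknowledgement frames rather than a loop over random losses):

```python
import random

def stop_and_wait(frames, loss_rate=0.3, seed=1):
    """Toy Stop-and-Wait ARQ: retransmit each frame until its ACK arrives.
    Returns (delivered frames, total transmissions)."""
    rng = random.Random(seed)
    delivered, sends = [], 0
    for f in frames:
        while True:
            sends += 1
            if rng.random() < loss_rate:   # frame or ACK lost: timer expires
                continue                   # retransmit the same frame
            delivered.append(f)            # positive ACK: move on
            break
    return delivered, sends

data = ['F0', 'F1', 'F2']
got, sends = stop_and_wait(data)
print(got == data)            # True: everything delivered, in order
print(sends >= len(data))     # True: losses cost extra transmissions
```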
Go – Back – N ARQ
Go – Back – N ARQ provides for sending multiple frames
before receiving the acknowledgement for the first frame. It
uses the concept of sliding window, and so is also called sliding
window protocol. The frames are sequentially numbered and a
finite number of frames are sent. If the acknowledgement of a
frame is not received within the time period, all frames starting
from that frame are retransmitted.
Selective Repeat ARQ
This protocol also provides for sending multiple frames before
receiving the acknowledgement for the first frame. However,
here only the erroneous or lost frames are retransmitted, while
the good frames are received and buffered.
Elementary Data Link protocols are classified into three
categories, as given below −
 Protocol 1 − Unrestricted simplex protocol
 Protocol 2 − Simplex stop and wait protocol
 Protocol 3 − Simplex protocol for noisy channels.
UNRESTRICTED SIMPLEX PROTOCOL
Data transmission is carried out in one direction only. The
transmitter (Tx) and receiver (Rx) are always ready, and
processing time can be ignored. In this protocol, infinite buffer
space is available, and no errors occur: there are no damaged
frames and no lost frames.
The Unrestricted Simplex Protocol is diagrammatically
represented as follows −
SIMPLEX STOP AND WAIT PROTOCOL FOR
AN ERROR FREE CHANNEL
In this protocol we assume that data is transmitted in one
direction only. No errors occur, and the receiver can only
process the received information at a finite rate. These
assumptions imply that the transmitter cannot send frames at a
rate faster than the receiver can process them.
The main problem here is how to prevent the sender from
flooding the receiver. The general solution for this problem is
to have the receiver send some sort of feedback to sender, the
process is as follows −
Step 1 − The receiver sends the acknowledgement frame back
to the sender telling the sender that the last received frame has
been processed and passed to the host.
Step 2 − Permission to send the next frame is granted.
Step 3 − The sender after sending the sent frame has to wait for
an acknowledge frame from the receiver before sending
another frame.
This protocol is called the Simplex Stop-and-Wait protocol:
the sender sends one frame and waits for feedback from the
receiver. When the ACK arrives, the sender sends the next
frame.
The Simplex Stop and Wait Protocol is diagrammatically
represented as follows −

SIMPLEX STOP AND WAIT PROTOCOL FOR


NOISY CHANNEL
Data transfer is in one direction only; consider a separate
sender and receiver, with finite processing capacity and speed
at the receiver. Since the channel is noisy, errors in data
frames or acknowledgement frames are expected. Every frame
carries a unique sequence number.
After a frame has been transmitted, a timer is started for a
finite time. If the acknowledgement is not received before the
timer expires (because the acknowledgement got corrupted or the
data frame was damaged), the frame is retransmitted. Without
the timer, the sender might wait forever before transmitting
the next frame.
The Simplex Protocol for Noisy Channel is diagrammatically
represented as follows −
SLIDING WINDOW PROTOCOLS
The sliding window is a technique for sending multiple frames
at a time. It controls the data packets between the two devices
where reliable and gradual delivery of data frames is needed. It
is also used in TCP (Transmission Control Protocol).
In this technique, each frame is assigned a sequence number.
The sequence numbers are used to find missing data at the
receiver end. The sliding window technique also uses the
sequence numbers to avoid delivering duplicate data.
Types of Sliding Window Protocol
1) A One-Bit Sliding Window Protocol

2) Go-Back-N ARQ
3) Selective Repeat ARQ
A ONE-BIT SLIDING WINDOW PROTOCOL
Sliding window protocols are data link layer protocols for
reliable and sequential delivery of data frames. The sliding
window is also used in the Transmission Control Protocol. In
these protocols, the sender has a buffer called the sending
window and the receiver has a buffer called the receiving
window.
In the one-bit sliding window protocol, the size of the window
is 1. The sender transmits a frame, waits for its
acknowledgment, then transmits the next frame; in this respect
it works like the stop-and-wait protocol. This protocol
provides for full-duplex communication, so the acknowledgment
can be attached to the next outgoing data frame by
piggybacking.
Working Principle
The data frames to be transmitted additionally have an
acknowledgment field, Ack field that is of a few bits’ length.
The Ack field contains the sequence number of the last frame
received without error. If this sequence number matches with
the sequence number of the frame to be sent, then it is inferred
that there is no error and the frame is transmitted. Otherwise, it
is inferred that there is an error in the frame and the previous
frame is retransmitted.
Since this is a bi-directional protocol, the same algorithm
applies to both the communicating parties.
Illustrative Example
The following diagram depicts a scenario with sequence
numbers 0, 1, 2, 3, 0, 1, 2 and so on. It depicts the sliding
windows in the sending and the receiving stations during frame
transmission.
A PROTOCOL USING GO-BACK-N
Before understanding the working of Go-Back-N ARQ, we
first look at the sliding window protocol. As we know, the
sliding window protocol is different from the stop-and-wait
protocol.
In the stop-and-wait protocol, the sender can send only one
frame at a time and cannot send the next frame without
receiving the acknowledgment of the previously sent frame,
whereas, in the case of sliding window protocol, the multiple
frames can be sent at a time.
What is Go-Back-N ARQ
In Go-Back-N ARQ, N is the sender's window size. For example,
Go-Back-3 means that three frames can be sent at a time before
an acknowledgment is expected from the receiver.
It uses the principle of protocol pipelining, in which multiple
frames can be sent before receiving the acknowledgment of the
first frame. If we have five frames and use Go-Back-3, then
three frames (frame 1, frame 2, frame 3) can be sent before
the acknowledgment of frame 1 is expected.
In Go-Back-N ARQ, the frames are numbered sequentially. Since
Go-Back-N ARQ sends multiple frames at a time, a numbering
approach is required to distinguish one frame from another;
these numbers are known as sequence numbers.
The number of frames that can be sent at a time totally depends
on the size of the sender's window. So, we can say that 'N' is
the number of frames that can be sent at a time before receiving
the acknowledgment from the receiver.
If the acknowledgment of a frame is not received within an
agreed-upon time period, then all the frames available in the
current window will be retransmitted. Suppose we have sent
the frame no 5, but we didn't receive the acknowledgment of
frame no 5, and the current window is holding three frames,
then these three frames will be retransmitted.
The sequence numbers of the outbound frames depend on the size
of the sender's window. Suppose we have ten frames to send;
the sequence numbers will not simply be
1,2,3,4,5,6,7,8,9,10. Let's understand through an example.
N is the sender's window size.
If the size of the sender's window is 4, the sequence numbers
will be 0,1,2,3,0,1,2,3,0,1,2, and so on: the two bits of
sequence number generate the binary sequence 00, 01, 10, 11.
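The wrap-around is just counting modulo 2^m, where m is the number of sequence-number bits (a small sketch; note that for correct operation a Go-Back-N window can hold at most 2^m - 1 frames):

```python
def sequence_numbers(n_frames, m):
    """Frame sequence numbers repeat modulo 2**m for m sequence bits."""
    return [i % (2 ** m) for i in range(n_frames)]

print(sequence_numbers(11, 2))   # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2]
print(2 ** 2 - 1)                # 3: the largest safe Go-Back-N window
```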
Working of Go-Back-N ARQ
Suppose there are a sender and a receiver, and let's assume that
there are 11 frames to be sent. These frames are represented as
0,1,2,3,4,5,6,7,8,9,10, and these are the sequence numbers of
the frames. Mainly, the sequence number is decided by the
sender's window size. But, for the better understanding, we
took the running sequence numbers, i.e., 0,1,2,3,4,5,6,7,8,9,10.
Let's consider the window size as 4, which means that the four
frames can be sent at a time before expecting the
acknowledgment of the first frame.
Step 1: Firstly, the sender will send the first four frames to the
receiver, i.e., 0,1,2,3, and now the sender is expected to receive
the acknowledgment of the 0th frame.

Let's assume that the receiver has sent the acknowledgment
for frame 0, and the sender has successfully received it.

The sender will then send the next frame, i.e., 4, and the
window slides to contain four frames (1,2,3,4).
The receiver will then send the acknowledgment for frame 1.
After receiving the acknowledgment, the sender will send the
next frame, i.e., frame 5, and the window will slide to
contain four frames (2,3,4,5).

Now, let's assume that the receiver is not acknowledging
frame 2: either the frame is lost, or the acknowledgment is
lost. Instead of sending frame 6, the sender goes back to
2, which is the first frame of the current window, and
retransmits all the frames in the current window, i.e.,
2, 3, 4, 5.
Important points related to Go-Back-N ARQ:
o In Go-Back-N, N determines the sender's window size,
and the size of the receiver's window is always 1.
o It does not consider the corrupted frames and simply
discards them.
o It does not accept the frames which are out of order and
discards them.
o If the sender does not receive the acknowledgment, it
leads to the retransmission of all the current window
frames.
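The go-back behaviour can be sketched as a highly simplified trace (cumulative ACKs and real timers are abstracted away; the hypothetical `lost_first_try` argument marks frames lost on their first transmission only):

```python
def go_back_n(frames, window, lost_first_try):
    """Toy trace of Go-Back-N: send up to `window` frames past the last
    acknowledged one; on a loss, time out and resume from the lost frame."""
    log, base = [], 0
    lost = set(lost_first_try)
    while base < len(frames):
        burst = frames[base:base + window]
        for i, f in enumerate(burst):
            log.append(('send', f))
            if f in lost:
                lost.discard(f)            # it gets through on the retry
                log.append(('timeout', f))
                break
        else:
            base += len(burst)             # whole burst acknowledged
            continue
        base += i                          # go back to the lost frame
    return log

trace = go_back_n([0, 1, 2, 3, 4, 5], window=3, lost_first_try=[2])
sends = [f for op, f in trace if op == 'send']
print(sends)   # [0, 1, 2, 2, 3, 4, 5]: frame 2 is transmitted twice
```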
PROTOCOL USING SELECTIVE REPEAT
In Go-Back-N ARQ, the receiver keeps track of only one
variable, and there is no need to buffer out-of-order
frames; they are simply discarded. However, this
protocol is very inefficient over a noisy link.
For noisy links, there is another mechanism that does
not resend N frames when just one frame is damaged;
only the damaged frame is resent. This mechanism is
called Selective Repeat ARQ.
It is more efficient for noisy links, but the processing at
the receiver is more complex.
Sender window: the sender window works as in Go-Back-N
(before and after sliding); the only difference between
Go-Back-N and Selective Repeat on the sender side is the
window size.
Receiver window
The receiver window in Selective Repeat is totally
different from the one in Go-Back-N. First, the size of
the receive window is the same as the size of the send
window (2^(m-1)).
Because the sizes of the send window and receive
window are the same, all the frames in the send window
can arrive out of order and be stored until they can be
delivered.
However, the receiver never delivers packets out of order
to the network layer. Those slots inside the window that
are colored define frames that have arrived out of order
and are waiting for their neighbors to arrive before
delivery to the network layer.
In Selective Repeat ARQ, the size of the sender and
receiver window must be at most one-half of 2^m.
EXAMPLE DATA LINK PROTOCOLS
Data link layer protocols are generally responsible for
ensuring that the bits and bytes received are identical to the
bits and bytes transferred. A data link protocol is basically a
set of specifications used to implement the data
link layer just above the physical layer of the Open Systems
Interconnection (OSI) model.
There are various data link protocols that are required for Wide
Area Network (WAN) and modem connections. Logical Link
Control (LLC) is a data link protocol of Local Area Network
(LAN). Some of data link protocols are given below:
Synchronous Data Link Protocol (SDLC)
SDLC is a communication protocol designed and developed by IBM
in 1975. It supports multipoint links as well as error recovery
and error correction. It is usually used to carry SNA (Systems
Network Architecture) traffic and is the precursor to HDLC. It
is used to connect remote devices to mainframe computers at
central locations, in point-to-point (one-to-one) or
point-to-multipoint (one-to-many) configurations, and to make
sure that data units arrive correctly and flow properly from
one network point to the next.
High-Level Data Link Protocol (HDLC)
HDLC is a protocol that is now considered an umbrella under
which many Wide Area protocols sit. It was adopted as part of
the X.25 network and was developed by ISO in 1979. This
protocol is based on SDLC. It provides both best-effort
unreliable service and reliable service. HDLC is a bit-oriented
protocol that is applicable to both point-to-point and
multipoint communications.
Serial Line Internet Protocol (SLIP)
SLIP is an older protocol that merely adds a framing byte at
the end of each IP packet. It is a data link control facility
for transferring IP packets, usually between an Internet
Service Provider (ISP) and a home user over a dial-up link.
It is an encapsulation of TCP/IP designed to work over serial
ports and router connections. It has some limitations: for
example, it provides no mechanisms for error detection or
error correction.
Point-to-Point Protocol (PPP)
PPP is a protocol that provides the same basic functionality
as SLIP. It is a more robust protocol that can transport other
types of packets in addition to IP packets, and it can also be
used over dial-up and leased router-to-router lines. It
provides a framing method to delimit frames.
It is a character-oriented protocol that also supports error
detection, and it comprises two sub-protocols, NCP and LCP.
Link Control Protocol (LCP)
LCP is the PPP sub-protocol used for establishing,
configuring, testing, maintaining, and terminating links for
the transmission of data frames. (HDLC-style service on a LAN
is instead provided by Logical Link Control, defined in IEEE
802.2, which is a different protocol.)
Link Access Procedure (LAP)
LAP protocols are data link layer protocols for framing and
transferring data across point-to-point links. They also
include some reliability features. There are three main
variants: LAPB (Link Access Procedure Balanced), LAPD (Link
Access Procedure D-Channel), and LAPF (Link Access Procedure
Frame-Mode Bearer Services). LAP originated from IBM SDLC,
which IBM submitted to the ISO for standardization.
Network Control Protocol (NCP)
NCP was an older protocol implemented on the ARPANET. It
allowed users to access computers and devices at remote
locations and to transfer files between two or more computers;
on the ARPANET it was replaced by TCP/IP in the 1980s. The name
is also used for the family of Network Control Protocols that
form part of PPP: an NCP is available for each higher-layer
protocol supported by PPP.
THE MEDIUM ACCESS SUB LAYER
To coordinate access to the channel, multiple access protocols are
required. All these protocols belong to the MAC sublayer. The data
link layer is divided into two sublayers:
1. Logical Link Control (LLC) − responsible for error control and
flow control.
2. Medium Access Control (MAC) − responsible for multiple access
resolution.
The following diagram depicts the position of the MAC layer.
Functions of MAC Layer
It provides an abstraction of the physical layer to the LLC and
upper layers of the OSI network.
It is responsible for encapsulating frames so that they are
suitable for transmission via the physical medium.
It resolves the addressing of source station as well as the
destination station, or groups of destination stations.
It performs multiple access resolutions when more than one
data frame is to be transmitted. It determines the channel access
methods for transmission.
It also performs collision resolution and initiates retransmission
in case of collisions.
It generates the frame check sequences and thus contributes to
protection against transmission errors.
MAC Addresses
MAC address or media access control address is a unique
identifier allotted to a network interface controller (NIC) of
a device. It is used as a network address for data transmission
within a network segment like Ethernet, Wi-Fi,
and Bluetooth.
MAC address is assigned to a network adapter at the time of
manufacturing. It is hardwired or hard-coded in the network
interface card (NIC). A MAC address comprises six groups of two
hexadecimal digits, separated by hyphens, colons, or no separators,
for example 00:0A:95:9D:68:16.
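As an illustration of the address format described above, the small Python sketch below normalizes a MAC address written with hyphens, colons, or no separators into the colon-separated form (the helper name `normalize_mac` is ours, not a standard API):

```python
# Hypothetical helper: normalize a MAC address written with hyphens,
# colons, or no separators into the common colon-separated form.
def normalize_mac(mac: str) -> str:
    digits = mac.replace(":", "").replace("-", "").lower()
    if len(digits) != 12 or any(c not in "0123456789abcdef" for c in digits):
        raise ValueError(f"not a valid MAC address: {mac!r}")
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

print(normalize_mac("00-0A-95-9D-68-16"))   # → 00:0a:95:9d:68:16
print(normalize_mac("000A959D6816"))        # same address, no separators
```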
THE CHANNEL ALLOCATION PROBLEM
When more than one user desires to access a shared network channel,
an algorithm is deployed for channel allocation among the competing
users. The network channel
may be a single cable or optical fiber connecting multiple
nodes, or a portion of the wireless spectrum.
Channel allocation algorithms allocate the wired channels and
bandwidths to the users, who may be base stations, access
points, or terminal equipment. In broadcast networks, a single
channel is shared by several stations. This channel can be allocated
to only one transmitting user at a time. There are two different
methods of channel allocation:
1. Static Channel Allocation − a single channel is divided among
various users either on the basis of frequency (FDM) or on the basis
of time (TDM). In FDM, a fixed frequency band is assigned to each
user, whereas in TDM, a fixed time slot is assigned to each user.
2. Dynamic Channel Allocation − no user is assigned a fixed
frequency or fixed time slot. All users are dynamically assigned a
frequency or time slot, depending upon their requirements.
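The difference between the two schemes can be illustrated with static TDM, where each user owns a fixed, repeating time slot whether or not it has data to send (the helper `tdm_owner` is a hypothetical illustration):

```python
# Static TDM sketch: each of n_users owns a fixed, repeating time slot
# regardless of whether it currently has data to send.
def tdm_owner(slot_index: int, n_users: int) -> int:
    """Return which user owns a given time slot under static TDM."""
    return slot_index % n_users

# With 4 users, slots 0..7 belong to users 0, 1, 2, 3, 0, 1, 2, 3.
print([tdm_owner(s, 4) for s in range(8)])  # → [0, 1, 2, 3, 0, 1, 2, 3]
```

Dynamic allocation avoids exactly this rigidity: a user with nothing to send would still own its slots here, wasting capacity.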
MULTIPLE ACCESS PROTOCOLS
The Data Link Layer is responsible for transmission of data
between two nodes. Its main functions are-
Data Link Control
Multiple Access Control
Data Link control
The data link control is responsible for reliable transmission
of message over transmission channel by using techniques
like framing, error control and flow control. For Data link
control refer to – Stop and Wait ARQ
Multiple Access Control
If there is a dedicated link between the sender and the receiver
then data link control layer is sufficient, however if there is no
dedicated link present then multiple stations can access the
channel simultaneously. Hence multiple access protocols are
required to decrease collision and avoid crosstalk.
For example, in a classroom full of students, when a teacher
asks a question and all the students (or stations) start
answering simultaneously (send data at same time) then a lot
of chaos is created (data overlap or data lost) then it is the job
of the teacher (multiple access protocols) to manage the
students and make them answer one at a time.
Thus, protocols are required for sharing data on non-dedicated
channels. Multiple access protocols can be subdivided further as
follows.
1. Random Access Protocol:
In random access protocols, all stations have equal priority; no
station has more priority than another. Any station can send data
depending on the medium's state (idle or busy). Random access has
two features:
There is no fixed time for sending data
There is no fixed sequence of stations sending data
The random access protocols are:
1. ALOHA
2. CSMA (Carrier Sense Multiple Access)
3. CSMA/CD (Carrier Sense Multiple Access with Collision
Detection)
4. CSMA/CA (Carrier Sense Multiple Access with Collision
Avoidance)
ALOHA
ALOHA was developed at University of Hawaii in early 1970s
by Norman Abramson. It was used for ground based radio
broadcasting. In this method, stations share a common channel.
When two stations transmit simultaneously, collision occurs
and frames are lost.
It was designed for wireless LAN but is also applicable for
shared medium. In this, multiple stations can transmit data at
the same time and can hence lead to collision and data being
garbled.
PURE ALOHA
When a station sends data it waits for an acknowledgement. If
the acknowledgement doesn’t come within the allotted time,
then the station waits for a random amount of time called
back-off time (Tb) and re-sends the data. Since different
stations wait for different amount of time, the probability of
further collision decreases.
SLOTTED ALOHA:
It is similar to pure ALOHA, except that time is divided into slots
and sending of data is allowed only at the beginning of these slots.
If a station misses the allowed time, it must wait for the next
slot. This reduces the probability of collision.
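The classic throughput results quantify this improvement: S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA, where G is the offered load in frames per frame time. The sketch below evaluates both at their maxima:

```python
import math

# S = G * e^(-2G) for pure ALOHA, S = G * e^(-G) for slotted ALOHA,
# where G is the offered load in frames per frame time.
def pure_aloha_throughput(g: float) -> float:
    return g * math.exp(-2 * g)

def slotted_aloha_throughput(g: float) -> float:
    return g * math.exp(-g)

print(round(pure_aloha_throughput(0.5), 3))     # → 0.184 (peak at G = 0.5)
print(round(slotted_aloha_throughput(1.0), 3))  # → 0.368 (peak at G = 1)
```

Halving the vulnerable period thus doubles the best-case channel utilization, from about 18.4% to about 36.8%.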
CARRIER SENSE MULTIPLE ACCESS PROTOCOLS
Carrier Sense Multiple Access ensures fewer collisions as the
station is required to first sense the medium (for idle or busy)
before transmitting data. If it is idle then it sends data, otherwise
it waits till the channel becomes idle.
However, there is still a chance of collision in CSMA due to
propagation delay. For example, if station A wants to send data, it
will first sense the medium. If it finds the channel idle, it will
start sending data.
However, before the first bit from station A reaches station B (due
to propagation delay), station B may also sense the medium, find it
idle, and send its own data. This results in a collision between the
data from stations A and B.
CSMA access modes-
1-persistent: The node senses the channel, if idle it sends the
data, otherwise it continuously keeps on checking the medium
for being idle and transmits unconditionally (with 1
probability) as soon as the channel gets idle.
Non-Persistent: The node senses the channel, if idle it sends
the data, otherwise it checks the medium after a random amount
of time (not continuously) and transmits when found idle.
P-persistent: The node senses the medium; if idle, it sends the data
with probability p. If it does not transmit (with probability 1−p),
it waits for some time and checks the medium again; if the medium is
still idle, it again sends with probability p. This process repeats
until the frame is sent. It is used in Wi-Fi and packet radio
systems.
O-persistent: Superiority of nodes is decided beforehand and
transmission occurs in that order. If the medium is idle, node
waits for its time slot to send data.
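The p-persistent rule, for example, boils down to the per-slot decision below (a hypothetical helper, assuming the surrounding MAC layer supplies the channel state and time slots; note that p = 1 reduces it to 1-persistent CSMA):

```python
import random

# Hypothetical helper showing the p-persistent rule for one time slot;
# the surrounding MAC layer is assumed to supply the channel state.
def p_persistent_decision(channel_idle: bool, p: float) -> str:
    if not channel_idle:
        return "wait"          # channel busy: keep sensing
    if random.random() < p:
        return "transmit"      # idle: send with probability p
    return "defer"             # idle: wait a slot, then sense again

print(p_persistent_decision(False, 0.5))  # → wait
print(p_persistent_decision(True, 1.0))   # p = 1 is 1-persistent: transmit
```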
COLLISION FREE PROTOCOLS
In computer networks, when more than one station tries to
transmit simultaneously via a shared channel, the transmitted
data is garbled. This event is called collision. The Medium
Access Control (MAC) layer of the OSI model is responsible
for handling collision of frames.
Collision-free protocols are devised so that collisions do not
occur. Protocols like CSMA/CD and CSMA/CA nullify the possibility of
collisions once the transmission channel is acquired by a station;
however, collisions can still occur during the contention period if
more than one station starts to transmit at the same time.
Collision-free protocols resolve contention without collisions, so
the possibility of a collision is eliminated even in the contention
period.
Types of Collision-Free Protocols
Bit – map Protocol
In bit map protocol, the contention period is divided into N
slots, where N is the total number of stations sharing the
channel. If a station has a frame to send, it sets the
corresponding bit in the slot. So, before transmission, each
station knows whether the other stations want to transmit.
Collisions are avoided by mutual agreement among the
contending stations on who gets the channel.
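A reservation round of this scheme can be sketched as follows; `bitmap_round` is a hypothetical helper that turns the N announced reservation bits into the agreed transmission order:

```python
# Hypothetical helper: given the reservation bit each of the N stations
# announced during the contention period, return the transmission order.
def bitmap_round(wants_to_send: list) -> list:
    return [station for station, wants in enumerate(wants_to_send) if wants]

# Stations 1, 3 and 4 set their reservation bits; they transmit in
# station-number order, with no collisions.
print(bitmap_round([False, True, False, True, True]))  # → [1, 3, 4]
```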
Binary Countdown
This protocol overcomes the overhead of 1 bit per station of the
bit – map protocol. Here, binary addresses of equal lengths are
assigned to each station. For example, if there are 6 stations,
they may be assigned the binary addresses 001, 010, 011, 100,
101 and 110. All stations wanting to communicate broadcast their
addresses, bit by bit, starting with the most significant bit. The
station with the highest address gets the highest priority for
transmitting.
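The arbitration can be sketched as below: contending stations assert their addresses bit by bit (most significant bit first) onto a wired-OR channel, and a station drops out when it sent a 0 but overheard a 1. The winner is always the station with the highest address (the function name is ours):

```python
# Contending stations assert their addresses bit by bit, MSB first, on a
# wired-OR channel; a station drops out when it sent 0 but overheard 1.
def binary_countdown(addresses: list, bits: int) -> int:
    contenders = list(addresses)
    for position in reversed(range(bits)):                      # MSB first
        channel = any((a >> position) & 1 for a in contenders)  # wired-OR
        if channel:
            contenders = [a for a in contenders if (a >> position) & 1]
    return contenders[0]   # the highest address always wins

# Stations 010, 100 and 011 contend; 100 (decimal 4) wins.
print(binary_countdown([0b010, 0b100, 0b011], bits=3))  # → 4
```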
Limited Contention Protocols
These protocols combine the advantages of collision-based protocols
and collision-free protocols. Under light load, they behave like the
ALOHA scheme; under heavy load, they behave like bit-map protocols.
Adaptive Tree Walk Protocol
In the adaptive tree walk protocol, the stations or nodes are
arranged in the form of a binary tree. Initially, all nodes (A, B,
…, G, H) are permitted to compete for the channel. If a node is
successful in acquiring the channel, it transmits its frame. In case
of a collision, the nodes are divided into two groups (A, B, C, D in
one group and E, F, G, H in the other), and only the nodes of one
group are permitted to compete at a time. This process continues
until a successful transmission occurs.
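A minimal sketch of this splitting behaviour, assuming we know which stations are ready: after a collision the station set is probed half by half until each probe finds at most one ready station:

```python
# After a collision, the station set is split in half and each half is
# probed recursively until a probe finds at most one ready station.
def tree_walk(stations: list, ready: set, order: list) -> None:
    active = [s for s in stations if s in ready]
    if len(active) <= 1:              # idle slot or successful transmission
        order.extend(active)
        return
    mid = len(stations) // 2          # collision: split and probe halves
    tree_walk(stations[:mid], ready, order)
    tree_walk(stations[mid:], ready, order)

order = []
tree_walk(list("ABCDEFGH"), {"B", "G"}, order)
print(order)  # → ['B', 'G']
```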
WIRELESS LANS
WLAN stands for Wireless Local Area Network. WLAN is
a local area network that uses radio communication to provide
mobility to the network users while maintaining the
connectivity to the wired network. A WLAN basically, extends
a wired local area network.
WLANs are built by attaching a device called an access point (AP) to
the edge of the wired network. Clients communicate with the AP using
a wireless network adapter, which is similar in function to an
Ethernet adapter. A WLAN is also called a LAWN (local area wireless
network).
HISTORY
Norman Abramson, a professor at the University of Hawaii, developed
the world's first wireless computer communication network. In 1979,
Gfeller and U. Bapst published a paper in the Proceedings of the
IEEE reporting an experimental wireless local area network using
diffused infrared communications. The first IEEE workshop on
wireless LANs was held in 1991.
Wireless LAN technology is based on the IEEE 802.11 standard. Its
predecessor, IEEE 802.3, commonly referred to as Ethernet, is the
most widely deployed member of the family; IEEE 802.11 is commonly
referred to as wireless Ethernet because of its close similarity to
IEEE 802.3. There are three media that can be used for transmission
over wireless LANs: infrared, radio frequency, and microwave.
Components of WLANs
The components of WLAN architecture as laid down in IEEE
802.11 are −
Stations (STA) − Stations comprise all devices and equipment that
are connected to the wireless LAN. Each station has a wireless
network interface controller. A station can be of two types −
Wireless Access Point (WAP or AP)
Client
Basic Service Set (BSS) − A basic service set is a group of
stations communicating at the physical layer level. BSS can be
of two categories −
Infrastructure BSS
Independent BSS
Extended Service Set (ESS) − It is a set of all connected BSS.
Distribution System (DS) − It connects access points in ESS.
Types of WLANS
WLANs, as standardized by IEEE 802.11, operate in two basic modes:
infrastructure mode and ad hoc mode.
Infrastructure Mode − Mobile devices or clients connect to
an access point (AP) that in turn connects via a bridge to the
LAN or Internet. The client transmits frames to other clients via
the AP.
Ad Hoc Mode − Clients transmit frames directly to each other
in a peer-to-peer fashion.
Advantages of WLANs
They provide clutter-free homes, offices and other networked
places.
The LANs are scalable in nature, i.e. devices may be added or
removed from the network at greater ease than wired LANs.
The system is portable within the network coverage. Access to
the network is not bounded by the length of the cables.
Installation and setup are much easier than wired counterparts.
The equipment and setup costs are reduced.
Disadvantages of WLANs
Since radio waves are used for communications, the signals are
noisier with more interference from nearby systems.
Greater care is needed for encrypting information. Also, they
are more prone to errors. So, they require greater bandwidth
than the wired LANs.
WLANs are slower than wired LANs.

DATA LINK LAYER SWITCHING
Data link layer is the second layer of the Open System
Interconnections (OSI) model whose function is to divide the
stream of bits from physical layer into data frames and transmit
the frames according to switching requirements.
Switching in data link layer is done by network devices
called bridges.
BRIDGES
A data link layer bridge connects multiple LANs (local area
networks) together to form a larger LAN. This process of
aggregating networks is called network bridging. A bridge
connects the different components so that they appear as parts
of a single network.
The following diagram shows connection by a bridge.
When a user accesses the internet or another computer network
outside their immediate location, messages are sent through the
network of transmission media. This technique of transferring
information from one computer network to another is known as
switching.
Switching in a computer network is achieved by using
switches. A switch is a small hardware device which is used to
join multiple computers together with one local area network
(LAN).
Network switches operate at layer 2 (Data link layer) in the OSI
model.
Switching is transparent to the user and does not require any
configuration in the home network.
Switches are used to forward the packets based on MAC
addresses.
A Switch is used to transfer the data only to the device that has
been addressed. It verifies the destination address to route the
packet appropriately.
It operates in full duplex mode.
Packet collision is minimum as it directly communicates
between source and destination.
It does not broadcast messages to every port, which conserves the
limited bandwidth.
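The forwarding behaviour described above can be sketched as a learning switch: the MAC table is filled from source addresses as frames arrive, known destinations get a single output port, and unknown destinations are flooded (class and method names here are illustrative, not a real API):

```python
# Illustrative learning switch: learn source addresses, forward known
# destinations out one port, flood unknown destinations.
class LearningSwitch:
    def __init__(self, n_ports: int):
        self.n_ports = n_ports
        self.mac_table = {}                          # MAC address -> port

    def receive(self, src: str, dst: str, in_port: int) -> list:
        self.mac_table[src] = in_port                # learn source location
        if dst in self.mac_table:
            return [self.mac_table[dst]]             # known: single port
        return [p for p in range(self.n_ports) if p != in_port]  # flood

sw = LearningSwitch(n_ports=4)
print(sw.receive("aa", "bb", in_port=0))  # dst unknown → flood [1, 2, 3]
print(sw.receive("bb", "aa", in_port=2))  # "aa" was learned → [0]
```

Flooding is why an unaddressed frame can still reach every device once, while subsequent frames go only to the addressed device.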
Why is the Switching Concept Required?
Switching concept is developed because of the following
reasons:
Bandwidth: It is defined as the maximum transfer rate of a
cable. It is a very critical and expensive resource. Therefore,
switching techniques are used for the effective utilization of the
bandwidth of a network.
Collision: Collision is the effect that occurs when more than
one device transmits the message over the same physical
media, and they collide with each other. To overcome this
problem, switching technology is implemented so that packets
do not collide with each other.
Advantages of Switching:
Switch increases the bandwidth of the network.
It reduces the workload on individual PCs as it sends the
information to only that device which has been addressed.
It increases the overall performance of the network by reducing
the traffic on the network.
There are fewer frame collisions, as the switch creates a separate
collision domain for each connection.
Disadvantages of Switching:
A Switch is more expensive than network bridges.
A Switch cannot determine the network connectivity issues
easily.
Proper designing and configuration of the switch are required
to handle multicast packets.
