UNIT III

DATA LINK LAYER

❖ DATA LINK LAYER

● This layer is responsible for the error-free transfer of data frames.


● It defines the format of the data on the network.
● It provides reliable and efficient communication between two or more devices.
● It is mainly responsible for the unique identification of each device that resides on a local
network.
● It contains two sub-layers:
1. Logical Link Control Layer
➔ It is responsible for transferring packets to the network layer of the
receiving device.
➔ It identifies the address of the network layer protocol from the header.
➔ It also provides flow control.
2. Media Access Control Layer
➔ The Media Access Control layer is a link between the Logical Link Control layer
and the network's physical layer.

Services:
We can list the services provided by a data-link layer as shown below.

FRAMING: The first service provided by the data-link layer is framing. The data-link
layer at each node needs to encapsulate the datagram (the packet received from the network layer) in a
frame before sending it to the next node. The node also needs to decapsulate the datagram from the
frame received on the logical channel.

FLOW CONTROL: The sending data-link layer at the end of a link is a producer of frames; the
receiving data-link layer at the other end of a link is a consumer.
If the rate of produced frames is higher than the rate of consumed frames, frames at the
receiving end need to be buffered while waiting to be consumed (processed). Since we cannot have an
unlimited buffer size at the receiving side, we have two choices. The first choice is to let
the receiving data-link layer drop the frames if its buffer is full.

The second choice is to let the receiving data-link layer send feedback to the sending
data-link layer asking it to stop or slow down. Different data-link-layer protocols use different
strategies for flow control.

ERROR CONTROL: At the sending node, a frame in a data-link layer needs to be changed to bits,
transformed to electromagnetic signals, and transmitted through the transmission media. At the
receiving node, electromagnetic signals are received, transformed to bits, and put together to create
a frame.

Since electromagnetic signals are susceptible to error, a frame is susceptible to error. The
error needs first to be detected. After detection, it needs to be either corrected at the receiver
node or discarded and retransmitted by the sending node.

CONGESTION CONTROL: Although a link may be congested with frames, which may result in frame
loss, most data-link-layer protocols do not directly use congestion control to alleviate congestion,
although some wide-area networks do.

❖ FRAMING:

The data-link layer needs to pack bits into frames, so that each frame is distinguishable from
another. Framing in the data-link layer separates a message from one source to a destination by
adding a sender address and a destination address.

When a message is carried in one very large frame, even a single-bit error would require the
retransmission of the whole frame. When a message is divided into smaller frames, a single-bit error
affects only that small frame.

Frame Size:

Frames can be of fixed or variable size. In fixed-size framing, there is no need for defining
the boundaries of the frames; the size itself can be used as a delimiter.

In variable-size framing, we need a way to define the end of one frame and the beginning of
the next. Historically, two approaches were used for this purpose: a character-oriented approach and
a bit-oriented approach.

1.Character-Oriented Framing:

In character-oriented (or byte-oriented) framing, data to be carried are 8-bit characters
from a coding system such as ASCII.

The header, which normally carries the source and destination addresses and other control
information, and the trailer, which carries error detection redundant bits, are also multiples of 8
bits. Figure shows the format of a frame in a character-oriented protocol.
FIGURE 2.5: A FRAME IN A CHARACTER-ORIENTED PROTOCOL

Character-oriented framing was popular when only text was exchanged by the data-link layers.

However, we now send other types of information such as graphs, audio, and video; any character
used for the flag could also appear in the information. If this happens, the receiver, when it
encounters this pattern in the middle of the data, thinks it has reached the end of the frame.

To fix this problem, a byte-stuffing strategy was added to character-oriented framing. In
byte stuffing (or character stuffing), a special byte is added to the data section of the frame when
there is a character with the same pattern as the flag. The data section is stuffed with an extra
byte.

This byte is usually called the escape character (ESC) and has a predefined bit pattern.
Whenever the receiver encounters the ESC character, it removes it from the data section and treats
the next character as data, not as a delimiting flag.
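As a rough illustration of the idea, the sketch below implements byte stuffing and unstuffing in Python. The FLAG and ESC byte values are placeholders chosen for the example, not values defined in this section.

FLAG = 0x7E  # assumed flag byte (01111110) for illustration
ESC = 0x7D   # assumed escape byte; the real value is protocol-specific

def byte_stuff(payload: bytes) -> bytes:
    """Precede every FLAG or ESC byte in the data section with an ESC byte."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)   # stuff the escape character
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Drop each ESC and treat the byte after it as ordinary data."""
    out = bytearray()
    escaped = False
    for b in stuffed:
        if not escaped and b == ESC:
            escaped = True    # next byte is data, not a delimiter
            continue
        out.append(b)
        escaped = False
    return bytes(out)

# A payload that happens to contain the flag pattern survives the round trip.
payload = bytes([0x01, FLAG, 0x02, ESC, 0x03])
assert byte_unstuff(byte_stuff(payload)) == payload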

2.Bit-Oriented Framing
In bit-oriented framing, the data section of a frame is a sequence of bits to be interpreted by
the upper layer as text, graphic, audio, video, and so on. However, in addition to headers (and possible
trailers), we still need a delimiter to separate one frame from the other.

Most protocols use a special 8-bit pattern flag, 01111110, as the delimiter to define the
beginning and the end of the frame, as shown in Figure.

FIGURE 2.6: A FRAME IN A BIT-ORIENTED PROTOCOL

This flag can create the same type of problem we saw in the character-oriented protocols.
That is, if the flag pattern appears in the data, we need to somehow inform the receiver that this is
not the end of the frame. We do this by stuffing a single bit (instead of a byte) to prevent the
pattern from looking like a flag. The strategy is called bit stuffing.

In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0 is added. This
extra stuffed bit is eventually removed from the data by the receiver.
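A minimal Python sketch of the sender and receiver sides of bit stuffing, operating on a string of '0'/'1' characters for readability; a real implementation would of course work on raw bits.

def bit_stuff(bits: str) -> str:
    """After every five consecutive 1s in the data, insert an extra 0
    so the data can never look like the 01111110 flag."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that the sender stuffed after every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:
            skip = False      # this is the stuffed 0; drop it
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
            run = 0
    return "".join(out)

data = "0111111111100111110"
assert bit_unstuff(bit_stuff(data)) == data
assert "111111" not in bit_stuff(data)   # the flag pattern cannot appear in the data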
❖ DATA-LINK LAYER PROTOCOLS

Traditionally four protocols have been defined for the data-link layer to deal with flow and
error control: Simple, Stop-and-Wait, Go-Back-N, and Selective-Repeat. Although the first two
protocols still are used at the data-link layer, the last two have disappeared.

1.Simple Protocol

Our first protocol is a simple protocol with neither flow nor error control. We assume that the
receiver can immediately handle any frame it receives. In other words, the receiver can never be
overwhelmed with incoming frames.

The data-link layer at the sender gets a packet from its network layer, makes a frame out of it, and
sends the frame. The data-link layer at the receiver receives a frame from the link, extracts the
packet from the frame, and delivers the packet to its network layer.

The data-link layers of the sender and receiver provide transmission services for their network
layers.

2.Stop-and-Wait Protocol:
Stop-and-Wait protocol uses both flow and error control. In this protocol, the sender sends one
frame at a time and waits for an acknowledgement before sending the next one. To detect corrupted
frames, we need to add a CRC to each data frame.

When a frame arrives at the receiver site, it is checked. If its CRC is incorrect, the frame is
corrupted and silently discarded. The silence of the receiver is a signal for the sender that a frame
was either corrupted or lost.

Every time the sender sends a frame, it starts a timer. If an acknowledgment arrives before the
timer expires, the timer is stopped and the sender sends the next frame (if it has one to send). If
the timer expires, the sender resends the previous frame, assuming that the frame was either lost
or corrupted.

This means that the sender needs to keep a copy of the frame until its acknowledgment arrives.
When the corresponding acknowledgment arrives, the sender discards the copy and sends the next
frame if it is ready.
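The following Python sketch simulates the sender behaviour described above: keep a copy of the frame, (re)send it, and move on only when an acknowledgment arrives. The unreliable_send helper, the loss probability, the alternating sequence bit, and the retry cap are invented for the simulation and are not part of the protocol description.

import random

MAX_TRIES = 20  # assumed retry cap for the sketch

def unreliable_send(frame):
    """Simulated link: returns an ACK most of the time, or None on loss/corruption."""
    return "ACK" if random.random() > 0.3 else None

def stop_and_wait_send(frames):
    """Send one frame at a time; keep a copy and resend it until its ACK arrives."""
    for seq, payload in enumerate(frames):
        copy = (seq % 2, payload)        # sender keeps a copy until it is acknowledged
        for attempt in range(MAX_TRIES):
            ack = unreliable_send(copy)  # start timer, transmit, wait
            if ack == "ACK":
                break                    # stop timer, discard the copy, next frame
            # timer expired: silence means the frame or its ACK was lost/corrupted
        else:
            raise RuntimeError(f"frame {seq} never acknowledged")

stop_and_wait_send(["pkt0", "pkt1", "pkt2"])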

3.HDLC:

High-level Data Link Control (HDLC) is a bit-oriented protocol for communication
over point-to-point and multipoint links. It implements the Stop-and-Wait protocol.

Configurations and Transfer Modes:

HDLC provides two common transfer modes that can be used in different
configurations: normal response mode (NRM) and asynchronous balanced mode (ABM). In
normal response mode (NRM), the station configuration is unbalanced.
To provide the flexibility necessary to support all the options possible in the modes
and configurations just described, HDLC defines three types of frames:

1. I-frames: are used to transport user data and control information relating to user data
(piggybacking).
2. S-frames: are used only to transport control information.
3. U-frames: are reserved for system management. Information carried by U-frames is
intended for managing the link itself.

Each frame in HDLC may contain up to six fields, as shown in Figure 2.9: a beginning
flag field, an address field, a control field, an information field, a frame check sequence
(FCS) field, and an ending flag field. In multiple-frame transmissions, the ending flag of one
frame can serve as the beginning flag of the next frame.

FIGURE 2.9: HDLC FRAMES

● Flag field: This field contains synchronization pattern 01111110, which identifies both
the beginning and the end of a frame.

● Address field: This field contains the address of the secondary station. If a primary
station created the frame, it contains a "to" address; if a secondary station creates the
frame, it contains a "from" address. The address field can be one byte or several bytes long,
depending on the needs of the network.

● Control field: The control field is one or two bytes used for flow and error control.

● Information field: The information field contains the user's data from the network
layer or management information. Its length can vary from one network to another.

● FCS field: The frame check sequence (FCS) is the HDLC error detection field. It can
contain either a 2- or 4-byte CRC.
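As a sketch only, the function below assembles the six HDLC fields in order. It assumes one-byte address and control fields and uses Python's binascii CRC-CCITT routine as a stand-in for the real FCS; bit stuffing of the frame body is omitted.

import binascii

FLAG = 0x7E  # 01111110 synchronization pattern

def build_hdlc_like_frame(address: int, control: int, info: bytes) -> bytes:
    """Assemble flag, address, control, information, FCS, and ending flag.
    Illustrative layout only: one-byte address/control, simplified 2-byte CRC."""
    body = bytes([address, control]) + info
    fcs = binascii.crc_hqx(body, 0).to_bytes(2, "big")   # stand-in for the HDLC FCS
    return bytes([FLAG]) + body + fcs + bytes([FLAG])

frame = build_hdlc_like_frame(address=0x03, control=0x00, info=b"user data")
print(frame.hex())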

4.POINT-TO-POINT PROTOCOL (PPP):


Services provided by PPP:

PPP defines the format of the frame to be exchanged between devices. It also defines
how two devices can negotiate the establishment of the link and the exchange of data. PPP is
designed to accept payloads from several network layers (not only IP).

Authentication is also provided in the protocol, but it is optional. The new version of
PPP, called Multilink PPP, provides connections over multiple links. One interesting feature of
PPP is that it provides network address configuration. This is particularly useful when a home
user needs a temporary network address to connect to the Internet.

Services Not Provided by PPP:

PPP does not provide flow control. A sender can send several frames one after another
with no concern about overwhelming the receiver. PPP has a very simple mechanism for error
control. A CRC field is used to detect errors.

If the frame is corrupted, it is silently discarded; the upper-layer protocol needs to
take care of the problem. Lack of error control and sequence numbering may cause a packet
to be received out of order. PPP does not provide a sophisticated addressing mechanism to
handle frames in a multipoint configuration.

Framing:
PPP uses a character-oriented (or byte-oriented) frame. Figure 2.10 shows the format
of a PPP frame. The description of each field follows:

FIGURE 2.10: PPP FRAME FORMAT

● Flag: A PPP frame starts and ends with a 1-byte flag with the bit pattern 01111110.
● Address: The address field in this protocol is a constant value and set to 11111111
(broadcast address).
● Control: This field is set to the constant value 00000011 (imitating unnumbered frames
in HDLC).
● Protocol: The protocol field defines what is being carried in the data field: either user
data or other information. This field is by default 2 bytes long, but the two parties can
agree to use only 1 byte.
● Payload field: The data field is a sequence of bytes with the default of a maximum of
1500 bytes; but this can be changed during negotiation.
■ The data field is byte-stuffed if the flag byte pattern appears in this field.
■ Because there is no field defining the size of the data field, padding is
needed if the size is less than the maximum default value or the maximum
negotiated value.
● FCS: The frame check sequence (FCS) is simply a 2-byte or 4-byte standard CRC.

❖ Sliding Window Protocol


In this technique, each frame is sent with a sequence number. The sequence numbers are
used to find the missing data at the receiver end. The sliding window technique also uses the
sequence numbers to avoid duplicate data.

1. Go-Back-N ARQ
Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat Request. It is a data
link layer protocol that uses a sliding window method. In this, if any frame is corrupted or lost,
all subsequent frames have to be sent again.

The size of the sender window is N in this protocol. For example, in Go-Back-8 the size of the
sender window is 8. The receiver window size is always 1.

If the receiver receives a corrupted frame, it discards it; the receiver does not accept a
corrupted frame. When the timer expires, the sender sends the correct frame again. The
design of the Go-Back-N ARQ protocol is shown below.
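A toy Python simulation of the Go-Back-N behaviour described above: up to N outstanding frames, a receiver window of 1 that accepts only the expected in-order frame, and a go-back resend of the whole window when no progress is made. The lossy-channel helper and the loss rate are invented for the simulation.

import random

N = 8   # sender window size, e.g. Go-Back-8; the receiver window size is always 1

def link_delivers(frame) -> bool:
    """Simulated channel: a frame is occasionally lost or corrupted (20% here)."""
    return random.random() > 0.2

def go_back_n_send(packets):
    """Keep up to N unacknowledged frames outstanding; when no cumulative ACK
    arrives (timeout), go back and resend from the oldest unacknowledged frame."""
    base, next_seq, expected = 0, 0, 0   # expected = receiver's in-order counter
    while base < len(packets):
        while next_seq < base + N and next_seq < len(packets):
            if link_delivers(packets[next_seq]) and next_seq == expected:
                expected += 1            # receiver accepts only the in-order frame
            next_seq += 1                # out-of-order or corrupted frames are discarded
        if expected > base:
            base = expected              # cumulative ACK slides the sender window
        else:
            next_seq = base              # timeout: go back N and resend the window
    return expected

assert go_back_n_send([f"pkt{i}" for i in range(20)]) == 20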
2.Selective Repeat ARQ
Selective Repeat ARQ is also known as the Selective Repeat Automatic Repeat Request. It is a
data link layer protocol that uses a sliding window method. The Go-Back-N ARQ protocol works
well if the error rate is low, but if many frames are corrupted, a lot of bandwidth is wasted in
resending them; so we use the Selective Repeat ARQ protocol instead. In this protocol, the
size of the sender window is always equal to the size of the receiver window, and the size of the
sliding window is always greater than 1.

If the receiver receives a corrupt frame, it does not directly discard it. It sends a negative
acknowledgement to the sender, and the sender retransmits that frame as soon as it receives the
negative acknowledgment, without waiting for any time-out. The design of the Selective Repeat
ARQ protocol is shown below.
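For contrast, a similar toy simulation of Selective Repeat: the receiver buffers out-of-order frames, and only the missing (negatively acknowledged or timed-out) frames are retransmitted. The window size and loss rate are arbitrary choices for the sketch.

import random

WINDOW = 4   # sender and receiver windows have the same size

def link_delivers(frame) -> bool:
    return random.random() > 0.2     # simulated loss/corruption

def selective_repeat_send(packets):
    """Resend only the frames that are NAK'd or time out; the receiver
    buffers out-of-order frames instead of discarding them."""
    delivered = [False] * len(packets)
    base = 0
    while base < len(packets):
        for seq in range(base, min(base + WINDOW, len(packets))):
            if not delivered[seq] and link_delivers(packets[seq]):
                delivered[seq] = True          # receiver buffers this frame
            # an undelivered frame is resent on its own NAK/timeout next pass
        while base < len(packets) and delivered[base]:
            base += 1                          # slide past in-order delivered frames
    return all(delivered)

assert selective_repeat_send([f"pkt{i}" for i in range(20)])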
Difference between the Go-Back-N ARQ and Selective Repeat ARQ

1. Go-Back-N ARQ: if a frame is corrupted or lost, all subsequent frames have to be sent again.
   Selective Repeat ARQ: only the frame that is corrupted or lost is sent again.

2. Go-Back-N ARQ: at a high error rate it wastes a lot of bandwidth.
   Selective Repeat ARQ: it wastes very little bandwidth.

3. Go-Back-N ARQ: it is less complex.
   Selective Repeat ARQ: it is more complex because it has to do sorting and searching as well, and it also requires more storage.

4. Go-Back-N ARQ: it does not require sorting.
   Selective Repeat ARQ: sorting is done to get the frames in the correct order.

5. Go-Back-N ARQ: it does not require searching.
   Selective Repeat ARQ: a search operation is performed at the receiver.

6. Go-Back-N ARQ: it is used more often.
   Selective Repeat ARQ: it is used less often because it is more complex.

❖ MEDIA ACCESS CONTROL (MAC)


When nodes or stations are connected and use a common link, called a multipoint or broadcast
link, we need a multiple-access protocol to coordinate access to the link. The problem of controlling
the access to the medium is similar to the rules of speaking in an assembly.

Many protocols have been devised to handle access to a shared link. All of these protocols
belong to a sub layer in the data-link layer called media access control (MAC). We categorize them
into three groups, as shown in Figure 2.11.

FIGURE 2.11: TAXONOMY OF MULTIPLE-ACCESS PROTOCOLS


A] RANDOM ACCESS PROTOCOL
In this protocol, all stations have equal priority to send data over the channel. In a
random access protocol, no station depends on another station, and no station controls
another station. Depending on the channel's state (idle or busy), each station transmits
its data frame. However, if more than one station sends data over the channel at the same time, there
may be a collision (data conflict). Because of the collision, the data frames may be lost or
corrupted, and hence they are not received correctly by the receiver.

Following are the different methods of random-access protocols for broadcasting frames on
the channel.

1. Aloha
2. CSMA
3. CSMA/CD
4. CSMA/CA

1. ALOHA Random Access Protocol

It was designed for wireless LANs (Local Area Networks) but can also be used on any shared medium
to transmit data. With this method, any station can transmit data across the network at any time,
whenever it has a data frame ready for transmission.

Aloha Rules

1. Any station can transmit data to the channel at any time.
2. It does not require any carrier sensing.
3. Collisions may occur and data frames may be lost when multiple stations transmit at the same time.
4. Acknowledgment of the frames exists in Aloha; hence, there is no collision detection.
5. It requires retransmission of data after some random amount of time.

Pure Aloha
In pure Aloha, a station transmits whenever it has a frame to send. If the acknowledgment does not
arrive within the time-out period, the station waits a random back-off time and retransmits the frame.

Slotted Aloha
In slotted Aloha, time is divided into slots equal to one frame-transmission time, and a station may
begin transmitting only at the beginning of a slot, which reduces the chance of collision compared to
pure Aloha.
2. CSMA (Carrier Sense Multiple Access)

Carrier Sense Multiple Access is a media-access protocol in which a station senses the traffic on the
channel (idle or busy) before transmitting data. If the channel is idle, the station can send data on
the channel; otherwise, it must wait until the channel becomes idle. This reduces the chance of a
collision on the transmission medium.

CSMA Access Modes

1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel; if the
channel is idle, it sends the data immediately. Otherwise, it keeps sensing the channel continuously
and transmits the frame as soon as the channel becomes idle.

Non-Persistent: In this access mode of CSMA, each node senses the channel before transmitting; if the
channel is idle, it sends the data immediately. Otherwise, the station waits for a random time (it does
not sense continuously) and then senses the channel again, transmitting the frame when the channel is
found to be idle.

P-Persistent: This is a combination of the 1-persistent and non-persistent modes. Each node senses
the channel, and if the channel is idle, it sends a frame with probability p. With probability q = 1 - p
the data is not transmitted; the node then waits for the next time slot and repeats the process
(a short sketch contrasting these strategies follows the list of access modes).

O-Persistent: The O-persistent method defines a transmission order (priority) of the stations before
a frame is transmitted on the shared channel. If the channel is found to be idle, each station waits
for its assigned turn to transmit its data.
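The sketch below contrasts the 1-persistent, non-persistent, and p-persistent strategies. channel_is_idle is a random stand-in for real carrier sensing, and the sleep intervals are arbitrary values chosen for the example.

import random
import time

def channel_is_idle() -> bool:
    """Stand-in for carrier sensing; real hardware would sample the medium."""
    return random.random() > 0.5

def transmit():
    print("frame sent")

def one_persistent():
    """1-persistent: sense continuously and send the moment the channel is idle."""
    while not channel_is_idle():
        pass                       # keep watching the channel
    transmit()

def non_persistent():
    """Non-persistent: if busy, wait a random time before sensing again."""
    while not channel_is_idle():
        time.sleep(random.uniform(0.0, 0.01))
    transmit()

def p_persistent(p=0.3):
    """p-persistent: when idle, send with probability p; with probability
    q = 1 - p wait for the next time slot and try again."""
    while True:
        if channel_is_idle() and random.random() < p:
            transmit()
            return
        time.sleep(0.001)          # wait for the next time slot

one_persistent()
non_persistent()
p_persistent()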
3. CSMA/ CD

Carrier Sense Multiple Access with Collision Detection is a network protocol for transmitting data
frames that works within the medium access control layer. A station first senses the shared channel
before broadcasting a frame; if the channel is idle, it transmits the frame and monitors whether the
transmission was successful. If the frame is successfully received, the station sends the next frame.
If a collision is detected, the station sends a jam/stop signal on the shared channel to terminate the
data transmission and then waits for a random time before resending the frame.
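A minimal sketch of the CSMA/CD send loop described above: sense until idle, transmit, and on a detected collision back off for a random time and retry. The medium_idle and collision_detected helpers, the probabilities, and the retry cap are all invented for the simulation.

import random
import time

MAX_ATTEMPTS = 10     # assumed retry cap for the sketch

def medium_idle() -> bool:
    return random.random() > 0.3     # simulated carrier sense

def collision_detected() -> bool:
    return random.random() < 0.2     # simulated: another station transmitted too

def csma_cd_send(frame) -> bool:
    """Sense the channel, transmit, and watch for a collision; on collision,
    send a jam signal and retry after a random waiting time."""
    for _ in range(MAX_ATTEMPTS):
        while not medium_idle():
            pass                     # wait until the shared channel is idle
        if not collision_detected():
            return True              # frame went through; send the next one
        # collision: the jam signal terminates the transmission, then back off
        time.sleep(random.uniform(0.0, 0.01))
    return False

print("sent" if csma_cd_send("frame") else "gave up")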

4. CSMA/ CA

It is a carrier sense multiple access/collision avoidance network protocol for the carrier
transmission of data frames, and it works with the medium access control layer.
When a station sends a data frame on the channel, it listens to check whether the channel is clear.
If the station hears only a single signal (its own), the data frame has been successfully
transmitted to the receiver. But if it hears two signals (its own and another station's), a collision
of frames has occurred on the shared channel; in practice the sender learns of the collision because
the expected acknowledgment signal does not arrive.
Following are the methods used in CSMA/CA to avoid collisions:

Interframe space: In this method, the station waits for the channel to become idle, and when it
finds the channel idle, it does not send the data immediately. Instead, it waits for some time, and
this time period is called the interframe space (IFS). The IFS duration is also used to define the
priority of a station.

Contention window: In the contention window method, the total time is divided into slots.
When the station/sender is ready to transmit a data frame, it chooses a random number of slots as
its wait time. If the channel becomes busy during the wait, the station does not restart the entire
process; it only pauses the timer and resumes it when the channel becomes idle again, sending the
data when the timer reaches zero.

Acknowledgment: In the acknowledgment method, the sender retransmits the data frame on the
shared channel if the acknowledgment is not received before the time-out expires.

B] CONTROLLED ACCESS PROTOCOLS:

In controlled access, the stations consult one another to find which station has the right to
send. A station cannot send unless it has been authorized by other stations. There are three
controlled-access methods:

1.Reservation:
In the reservation method, a station needs to make a reservation before sending data. Time is
divided into intervals. In each interval, a reservation frame precedes the data frames sent in that
interval.

If there are N stations in the system, there are exactly N reservation minislots in the
reservation frame. Each minislot belongs to a station. When a station needs to send a data frame, it
makes a reservation in its own minislot.

FIGURE 2.12: RESERVATION ACCESS METHOD

The stations that have made reservations can send their data frames after the reservation
frame. Figure 2.12 shows a situation with five stations and a five-minislot reservation frame. In the
first interval, only stations 1, 3, and 4 have made reservations. In the second interval, only station 1
has made a reservation.
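A small sketch of the reservation access method using the five-station example above; each station owns one minislot in the reservation frame, and the stations that reserved then transmit in order.

N_STATIONS = 5

def reservation_frame(stations_with_data):
    """One minislot per station: station i sets its own minislot when it has data."""
    return {i: (i in stations_with_data) for i in range(1, N_STATIONS + 1)}

def transmission_order(minislots):
    """After the reservation frame, the stations that reserved transmit in turn."""
    return [i for i, reserved in minislots.items() if reserved]

# First interval of Figure 2.12: only stations 1, 3 and 4 made reservations.
slots = reservation_frame({1, 3, 4})
print(transmission_order(slots))   # [1, 3, 4]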

2.Polling:
Polling works with topologies in which one device is designated as a primary station and the
other devices are secondary stations. All data exchanges must be made through the primary device
even when the ultimate destination is a secondary device.

The primary device controls the link; the secondary devices follow its instructions. It is up to
the primary device to determine which device is allowed to use the channel at a given time.

The primary device, therefore, is always the initiator of a session (see Figure 2.13). This
method uses poll and select functions to prevent collisions. However, the drawback is that if the
primary station fails, the system goes down.
Select: The select function is used whenever the primary device has something to send.
Before sending data, the primary creates and transmits a select (SEL) frame, one field of which
includes the address of the intended secondary.

Poll: The poll function is used by the primary device to solicit transmissions from the
secondary devices. When the primary is ready to receive data, it must ask (poll) each device in turn if
it has anything to send.

FIGURE 2.13: SELECT & POLL FUNCTIONS IN POLLING-ACCESS METHOD

When the first secondary is approached, it responds either with a NAK frame if it has nothing
to send or with data (in the form of a data frame) if it does. If the response is negative (a NAK
frame), then the primary polls the next secondary in the same manner until it finds one with data to
send.

When the response is positive (a data frame), the primary reads the frame and returns an
acknowledgment (ACK frame), verifying its receipt.

3.Token Passing:
In the token-passing method, the stations in a network are organized in a logical ring. In
other words, for each station, there is a predecessor and a successor.

The predecessor is the station which is logically before the station in the ring; the successor
is the station which is after the station in the ring. The current station is the one that is accessing
the channel now. The right to this access has been passed from the predecessor to the current
station. The right will be passed to the successor when the current station has no more data to send.

But how is the right to access the channel passed from one station to another? In this
method, a special packet called a token circulates through the ring.

The possession (meaning control) of the token gives the station the right to access the
channel and send its data. When a station has some data to send, it waits until it receives the token
from its predecessor.

It then holds the token and sends its data. When the station has no more data to send, it
releases the token, passing it to the next logical station in the ring. The station cannot send data
again until it receives the token in the next round. In this process, when a station receives the token
and has no data to send, it just passes the token on to the next station.
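A toy Python model of one circulation of the token around the logical ring: each station sends its queued frames while holding the token and then releases it to its successor. Real token-passing networks also bound how long a station may hold the token, which this sketch ignores.

from collections import deque

def token_ring_round(stations):
    """One circulation of the token around the logical ring.
    `stations` maps station name -> queue of frames waiting to be sent."""
    log = []
    for name in stations:                  # the token visits each station in ring order
        queue = stations[name]
        while queue:                       # the holder of the token may send its data
            log.append((name, queue.popleft()))
        # no more data: release the token to the successor (next loop iteration)
    return log

ring = {
    "A": deque(["a1", "a2"]),
    "B": deque([]),                        # B has nothing: it just passes the token on
    "C": deque(["c1"]),
}
print(token_ring_round(ring))              # [('A', 'a1'), ('A', 'a2'), ('C', 'c1')]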

C] CHANNELIZATION PROTOCOLS:

● Channelization (or channel partition, as it is sometimes called) is a multiple-access
method in which the available bandwidth of a link is shared in time, frequency, or
through code, among different stations.
● There are three channelization protocols: FDMA, TDMA, and CDMA.

1. FDMA:
In frequency-division multiple access (FDMA), the available bandwidth is divided into
frequency bands. Each station is allocated a band to send its data. In other words, each band is
reserved for a specific station, and it belongs to the station all the time.

2.TDMA:
In time-division multiple access (TDMA), the stations share the bandwidth of the channel in
time. Each station is allocated a time slot during which it can send data. Each station transmits
its data in its assigned time slot.

3.CDMA:
Code-division multiple access (CDMA) was conceived several decades ago. Recent advances in
electronic technology have finally made its implementation possible.
CDMA differs from FDMA in that only one channel occupies the entire bandwidth of the link.
It differs from TDMA in that all stations can send data simultaneously; there is no
timesharing. In CDMA, one channel carries all transmissions simultaneously.
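The classic textbook illustration of CDMA uses orthogonal (Walsh) chip sequences; these values are not spelled out in the section above, so treat them as an example. Each station multiplies its data bit (+1 for 1, -1 for 0, 0 when silent) by its chip sequence, the shared channel adds everything together, and the receiver recovers one station's bit with an inner product.

# Orthogonal chip sequences for four stations (standard textbook example).
CHIPS = {
    "S1": [+1, +1, +1, +1],
    "S2": [+1, -1, +1, -1],
    "S3": [+1, +1, -1, -1],
    "S4": [+1, -1, -1, +1],
}

def encode(bits_per_station):
    """Every station multiplies its bit by its chip sequence; the shared channel
    simply adds all the sequences (all stations send at the same time)."""
    channel = [0, 0, 0, 0]
    for station, bit in bits_per_station.items():
        for i, chip in enumerate(CHIPS[station]):
            channel[i] += bit * chip
    return channel

def decode(channel, station):
    """Recover one station's bit: inner product of the channel signal with that
    station's chip sequence, divided by the code length."""
    return sum(c * chip for c, chip in zip(channel, CHIPS[station])) // len(channel)

signal = encode({"S1": -1, "S2": +1, "S3": 0, "S4": +1})   # S3 is silent
assert decode(signal, "S1") == -1    # S1 sent a 0
assert decode(signal, "S2") == +1    # S2 sent a 1
assert decode(signal, "S4") == +1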
FDMA vs TDMA vs CDMA

Full Form:
● FDMA: the term FDMA is an acronym for Frequency Division Multiple Access.
● TDMA: the term TDMA is an acronym for Time Division Multiple Access.
● CDMA: the term CDMA is an acronym for Code Division Multiple Access.

Mode of Operation:
● FDMA shares one single bandwidth among various stations by splitting it into sub-channels.
● TDMA shares only the time of transmission via the satellite, not the channel itself.
● CDMA shares both time and bandwidth among various stations by assigning a different code for every slot.

Idea of Transmission:
● FDMA segments a single band of frequency into various disjoint sub-bands.
● TDMA segments the sending time of data into disjoint time slots, in a fixed or demand-driven pattern.
● CDMA spreads one spectrum into multiple slots by making use of orthogonal codes.

Codeword:
● FDMA does not need a codeword.
● TDMA also needs no codeword.
● A codeword is a prerequisite in the case of CDMA.

Synchronization:
● FDMA does not require any synchronization.
● TDMA requires synchronization.
● CDMA also requires no synchronization.

Terminals:
● FDMA: every terminal has its own constant frequency.
● TDMA: every terminal on the same frequency is active for just a short period of time.
● CDMA: every terminal may remain operational at the same time and in the same location without interruption.

Cell Capacity:
● FDMA has a limited cell capacity.
● TDMA also has a limited cell capacity.
● CDMA has no fixed capacity restriction for a channel, although it is interference-limited.

Cost:
● FDMA has a high cost.
● TDMA has a low cost.
● CDMA has a high installation cost but a low operational cost.

Guard Times and Bands:
● FDMA needs guard bands.
● TDMA needs guard times.
● CDMA needs both guard times and guard bands.

Fading Mitigation:
● FDMA does not require an equalizer.
● TDMA needs an equalizer.
● CDMA can use RAKE receivers.

Advantages:
● FDMA is very reliable, well-established, and straightforward.
● TDMA is highly flexible, entirely digital, and well-established.
● CDMA is more flexible, needs less frequency planning, and offers a softer signal handover.

Disadvantages:
● FDMA is inflexible, and the frequencies are limited.
● TDMA requires guard space.
● CDMA works with extremely complicated receivers, and senders/transmitters need a more complex power control method.

Question Bank:

1. Explain Framing. (6 Marks)
   a. Character-Oriented
   b. Bit-Oriented

2. What is channelization? Explain the channelization protocols (FDMA, TDMA, CDMA). (7 Marks)

3. Explain the following protocols (any two). (8 Marks)
   a. HDLC
   b. PPP
   c. Stop-and-Wait
   d. ALOHA

4. Compare Go-Back-N ARQ to Selective Repeat ARQ. (6 Marks)
