
Module 2

Data Link Layer and Medium Access Sub Layer


Error Detection & Correction
• There are many causes, such as noise and crosstalk, that can corrupt data during transmission.
• The upper layers work on a generalized view of the network architecture and are not aware of the actual hardware-level data processing.
• Hence, the upper layers expect error-free transmission between the systems.
• Most applications will not function as expected if they receive erroneous data.
• Applications such as voice and video are less affected and may still work acceptably with some errors.
Error Detection & Correction
• The data link layer uses error control mechanisms to ensure that frames (data bit streams) are transmitted with a certain level of accuracy.
• But to understand how errors are controlled, it is essential to know what types of errors may occur.
Types of Errors
• Single-bit error: Only one bit in the frame, anywhere in it, is corrupted.

• Multiple-bit error: The frame is received with more than one bit in a corrupted state.

• Burst error: The frame contains more than one consecutive corrupted bit.


An error control mechanism may involve two approaches:
•Error detection
•Error correction
Error Detection Method
• An error is a condition in which the receiver’s information does not match the sender’s information.
• During transmission, digital signals suffer from noise that can introduce errors in the binary bits travelling from sender to receiver.
• That means a 0 bit may change to 1, or a 1 bit may change to 0.
Error Detecting Codes
• Implemented either at the data link layer or the transport layer of the OSI model.
• Whenever a message is transmitted, it may get scrambled by noise, or the data may get corrupted.
• To avoid this, we use error-detecting codes which are additional data
added to a given digital message to help us detect if any error has
occurred during transmission of the message.
• Basic approach used for error detection is the use of redundancy bits,
where additional bits are added to facilitate detection of errors.
Some popular techniques for error
detection are
• Simple Parity check
• Two-dimensional Parity check
• Checksum
• Cyclic redundancy check (CRC)
• Longitudinal Redundancy Check (LRC)
Simple Parity check
• Blocks of data from the source are passed through a parity-bit generator, which appends a parity bit of:
• 1 if the block contains an odd number of 1’s, and
• 0 if the block contains an even number of 1’s.
• This scheme makes the total number of 1’s even, which is why it is called even parity checking.
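The following Python sketch illustrates the even-parity scheme described above; the bit-string representation and function name are illustrative choices, not part of any particular implementation.

```python
def even_parity_bit(block):
    """Return the parity bit that makes the total number of 1s even.

    `block` is a string of '0'/'1' characters (an illustrative representation;
    real hardware works directly on bits).
    """
    return '1' if block.count('1') % 2 == 1 else '0'

# Sender appends the parity bit; the receiver recomputes it and compares.
data = "1011101"                          # five 1s, so a parity bit of '1' is appended
codeword = data + even_parity_bit(data)
assert codeword.count('1') % 2 == 0       # total number of 1s is now even
```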
Two-dimensional Parity check
• Parity check bits are calculated for each row, which is equivalent to a
simple parity check bit.
• Parity check bits are also calculated for all columns, then both are
sent along with the data.
• At the receiving end these are compared with the parity bits
calculated on the received data.
Checksum
• In checksum error detection scheme, the data is divided into k
segments each of m bits.
• At the sender’s end, the segments are added using 1’s complement arithmetic to get the sum. The sum is complemented to get the checksum.
• The checksum segment is sent along with the data segments.
• At the receiver’s end, all received segments are added using
1’s complement arithmetic to get the sum. The sum is
complemented.
• If the result is zero, the received data is accepted; otherwise
discarded.
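A minimal sketch of the checksum procedure above, assuming illustrative segment values and a 4-bit segment size (the helper names are hypothetical):

```python
def ones_complement_sum(segments, m):
    """Add m-bit segments using 1's complement (wrap-around carry) arithmetic."""
    mask = (1 << m) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> m)   # fold any carry back into the sum
    return total

def make_checksum(segments, m):
    return ones_complement_sum(segments, m) ^ ((1 << m) - 1)   # complement the sum

segs = [0b1001, 0b1110, 0b0101]            # k = 3 segments of m = 4 bits (example values)
checksum = make_checksum(segs, 4)          # sent along with the data segments
# Receiver: add all segments plus the checksum and complement; zero means "accept".
print(make_checksum(segs + [checksum], 4) == 0)   # True
```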
Cyclic redundancy check (CRC)
• Unlike checksum scheme, which is based on addition, CRC is based on
binary division.
• In CRC, a sequence of redundant bits, called the cyclic redundancy check bits, is appended to the end of the data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary number.
• At the destination, the incoming data unit is divided by the same
number. If at this step there is no remainder, the data unit is assumed
to be correct and is therefore accepted.
• A remainder indicates that the data unit has been damaged in transit
and therefore must be rejected.
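Below is a small modulo-2 division sketch of the CRC idea described above; the data and generator values are illustrative textbook-style bit strings, not any standard CRC polynomial.

```python
def mod2_div(bits, divisor):
    """Modulo-2 (XOR) long division; returns the remainder as a bit string."""
    bits = list(bits)
    n = len(divisor) - 1
    for i in range(len(bits) - n):
        if bits[i] == '1':
            for j, d in enumerate(divisor):
                bits[i + j] = '1' if bits[i + j] != d else '0'   # XOR with the divisor
    return ''.join(bits[-n:])

data, gen = "100100", "1101"                              # illustrative values
remainder = mod2_div(data + '0' * (len(gen) - 1), gen)    # sender appends n zero bits first
codeword = data + remainder                               # '100100' + '001' is transmitted
print(mod2_div(codeword, gen))                            # '000' -> accepted at the receiver
```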
Longitudinal Redundancy Check
• In the longitudinal redundancy check method, a block of bits is arranged in a table format (in rows and columns) and the parity bit is calculated for each column separately. The set of these parity bits is sent along with the original data bits.
• Longitudinal redundancy check is a bit-by-bit parity computation, as the parity of each column is calculated individually.
• This method can easily detect burst errors and single-bit errors, but it fails to detect two-bit errors that occur in the same vertical slice (column).
Error Correction Method
• Error Correction codes are used to detect and correct the errors when
data is transmitted from the sender to the receiver.
• Error Correction can be handled in two ways:
• Backward error correction: Once the error is discovered,
the receiver requests the sender to retransmit the entire
data unit.
• Forward error correction: In this case, the receiver uses
the error-correcting code which automatically corrects the
errors.
• A single additional bit can detect an error but cannot correct it.
• To correct errors, one has to know the exact position of the error. For example, to correct a single-bit error in a 7-bit codeword, the error correction code must determine which one of the seven bits is in error. To achieve this, we have to add some additional redundant bits.
• Suppose r is the number of redundant bits and d is the number of data bits. The number of redundant bits r can be calculated using the formula:
2^r >= d + r + 1
• The value of r is the smallest integer that satisfies the above relation. For example, if the value of d is 4, then the smallest value of r that satisfies the relation is 3.
• To determine the position of the bit in error, R.W. Hamming developed a technique known as the Hamming code, which can be applied to a data unit of any length and uses the relationship between data bits and redundant bits.
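A quick way to check the relation 2^r >= d + r + 1 is to search for the smallest r, as in this small sketch:

```python
def redundant_bits(d):
    """Smallest r satisfying 2**r >= d + r + 1 for d data bits."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

print(redundant_bits(4))   # 3 -> a 4-bit data unit needs 3 redundant bits (7-bit codeword)
print(redundant_bits(7))   # 4
```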
Hamming Code
• Parity bit: A bit appended to the original data bits so that the total number of 1s becomes either even or odd.
• Even parity: If the total number of 1s is even, the value of the parity bit is 0. If the total number of 1s is odd, the value of the parity bit is 1.
• Odd parity: If the total number of 1s is even, the value of the parity bit is 1. If the total number of 1s is odd, the value of the parity bit is 0.
Algorithm of Hamming code
• The 'r' redundant bits are added to the 'd' information bits to form a (d+r)-bit codeword.
• The location of each of the (d+r) digits is assigned a decimal value.
• The 'r' bits are placed at the positions that are powers of 2, i.e., positions 1, 2, 4, ..., 2^(k-1).
• At the receiving end, the parity bits are recalculated. The decimal value of the recalculated parity bits gives the position of the error.
Relationship between error position and binary number
Let's understand the concept of
Hamming code through an example
• Suppose the original data is 1010 which is to be sent.
• Total number of data bits 'd' = 4
• Number of redundant bits r: 2^r >= d + r + 1
• 2^3 = 8 >= 4 + 3 + 1 = 8
• Therefore, the value of r is 3, which satisfies the above relation.
• Total number of bits = d+r = 4+3 = 7;
Determining the position of the
redundant bits
• The number of redundant bits is 3.
• The three bits are represented by r1, r2, r4.
• The positions of the redundant bits correspond to powers of 2.
• Therefore, their corresponding positions are 1 (2^0), 2 (2^1), and 4 (2^2).
• The position of r1 = 1
• The position of r2 = 2
• The position of r4 = 4
Representation of Data on the
addition of parity bits:
Determining the Parity Bits

Determining the r1 bit
• The r1 bit is calculated by performing a parity check on the bit positions whose binary representation includes a 1 in the first (least significant) position.
• The bit positions that include a 1 in the first position are 1, 3, 5, and 7. Performing the even-parity check at these bit positions, the total number of 1s covered by r1 is even; therefore, the value of the r1 bit is 0.
Determining the r2 bit
• The r2 bit is calculated by performing a parity check on the bit positions whose binary representation includes a 1 in the second position.
• The bit positions that include a 1 in the second position are 2, 3, 6, and 7. Performing the even-parity check at these bit positions, the total number of 1s covered by r2 is odd; therefore, the value of the r2 bit is 1.
Determining the r4 bit
• The r4 bit is calculated by performing a parity check on the bit positions whose binary representation includes a 1 in the third position.
• The bit positions that include a 1 in the third position are 4, 5, 6, and 7. Performing the even-parity check at these bit positions, the total number of 1s covered by r4 is even; therefore, the value of the r4 bit is 0.
Data transferred
• The transmitted codeword (positions 7 down to 1) is 1 0 1 0 0 1 0.
• Suppose the 4th bit is changed from 0 to 1 at the receiving end; the parity bits are then recalculated. Read as a binary number, the recalculated bits r4 r2 r1 = 100 give 4, so the receiver knows the error is at position 4 and corrects it.
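The whole worked example can be condensed into a short sketch. It follows the slide's bit-position layout (r1, r2, r4 at positions 1, 2, 4); the function names are illustrative only.

```python
def hamming74_encode(d7, d6, d5, d3):
    """Build the 7-bit even-parity codeword; data bits are named after their positions."""
    r1 = d3 ^ d5 ^ d7                      # covers positions 1, 3, 5, 7
    r2 = d3 ^ d6 ^ d7                      # covers positions 2, 3, 6, 7
    r4 = d5 ^ d6 ^ d7                      # covers positions 4, 5, 6, 7
    return [r1, r2, d3, r4, d5, d6, d7]    # index 0 holds position 1

def error_position(code):
    """Recompute the checks; read as r4 r2 r1, the result is the error position (0 = no error)."""
    c1 = code[0] ^ code[2] ^ code[4] ^ code[6]
    c2 = code[1] ^ code[2] ^ code[5] ^ code[6]
    c4 = code[3] ^ code[4] ^ code[5] ^ code[6]
    return 4 * c4 + 2 * c2 + c1

code = hamming74_encode(1, 0, 1, 0)        # data 1010 -> codeword 1 0 1 0 0 1 0 (positions 7..1)
code[3] ^= 1                               # the 4th bit is flipped in transit
print(error_position(code))                # 4 -> the receiver flips position 4 back
```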
Data Link Controls
• Data Link Control is the service provided by the Data Link Layer to
provide reliable data transfer over the physical medium.
• For example, in half-duplex transmission mode, only one device can transmit data at a time.
• If both devices at the ends of the link transmit data simultaneously, the frames will collide, leading to loss of information.
• The data link layer provides coordination among the devices so that no collision occurs.
Data link layer provides three functions: Line Discipline, Flow Control, and Error Control
Line Discipline
• Line Discipline is a functionality of the data link layer that coordinates the devices on the link. It determines which device can send, and when it can send, the data.
• Line Discipline can be achieved in two ways:
• ENQ/ACK
• Poll/Select
ENQ/ACK
• ENQ/ACK stands for Enquiry/Acknowledgement. It is used when there is a dedicated link between two devices, so that the only device capable of receiving the transmission is the intended one.
• ENQ/ACK coordinates which device will start the transmission and whether the recipient is ready or not.
Working of ENQ/ACK
• The transmitter transmits the frame called an Enquiry (ENQ)
asking whether the receiver is available to receive the data
or not.
• The receiver responds either with a positive acknowledgement (ACK) or with a negative acknowledgement (NACK), where a positive acknowledgement means that the receiver is ready to receive the transmission and a negative acknowledgement means that the receiver is unable to accept the transmission.
Responses of the receiver
• If the response to the ENQ is positive, the sender transmits its data, and once all of its data has been transmitted, it finishes the transmission with an EOT (End of Transmission) frame.
• If the response to the ENQ is negative, the sender disconnects and restarts the transmission at another time.
• If the response is neither negative nor positive, the sender assumes that the ENQ frame was lost during transmission and makes three attempts to establish a link before giving up.
Poll/Select
• The Poll/Select method of line discipline works with
those topologies where one device is designated as a
primary station, and other devices are secondary
stations.
Working of Poll/Select
• In this method, the primary device and multiple secondary devices share a single transmission line, and all exchanges are made through the primary device, even when the final destination is a secondary device.
• The primary device has control over the communication link, and the secondary devices follow the instructions of the primary device.
• The primary device determines which device is allowed to use the communication channel. Therefore, it is the initiator of the session.
• If the primary device wants to receive data from a secondary device, it asks the secondary device whether it has anything to send; this process is known as polling.
• If the primary device wants to send data to a secondary device, it tells the target secondary device to get ready to receive the data; this process is known as selecting.
Select
• The select mode is used when the primary device has something to
send.
• When the primary device wants to send some data, then it alerts the
secondary device for the upcoming transmission by transmitting a
Select (SEL) frame, one field of the frame includes the address of the
intended secondary device.
• When the secondary device receives the SEL frame, it sends an acknowledgement that indicates its ready status.
• If the secondary device is ready to accept the data, the primary device sends two or more data frames to the intended secondary device. Once the data has been transmitted, the secondary device sends an acknowledgement specifying that the data has been received.
Poll
• The Poll mode is used when the primary device wants to receive some
data from the secondary device.
• When a primary device wants to receive data, it asks each secondary device whether it has anything to send.
• First, the primary polls the first secondary device; if it responds with a NAK (negative acknowledgement), it has nothing to send. The primary then approaches the second secondary device; if it responds with an ACK, it has data to send. The secondary device can send several frames one after another, or sometimes an ACK may be required before each frame is sent, depending on the type of protocol being used.
Flow Control
• It is a set of procedures that tells the sender how much data it can
transmit before the data overwhelms the receiver.
• The receiving device has limited speed and limited memory to store
the data. Therefore, the receiving device must be able to inform the
sending device to stop the transmission temporarily before the limits
are reached.
• It requires a buffer, a block of memory for storing the information until it is processed.
Two methods have been developed
to control the flow of data
• Stop-and-wait
• Sliding window
Stop-and-wait
• In the Stop-and-wait method, the sender waits for an
acknowledgement after every frame it sends.
• Only when an acknowledgement has been received is the next frame sent. This process of alternately sending a frame and waiting continues until the sender transmits an EOT (End of Transmission) frame.
Advantage & Disadvantage of Stop-
and-wait
• The Stop-and-wait method is simple as each frame is checked and
acknowledged before the next frame is sent.
• The Stop-and-wait technique is inefficient because each frame must travel all the way to the receiver, and the acknowledgement must travel all the way back, before the next frame can be sent. Each frame sent and received uses the entire time needed to traverse the link.
Sliding Window
• The Sliding Window is a method of flow control in which a sender can transmit several frames before getting an acknowledgement.
• In sliding window control, multiple frames can be sent one after another, due to which the capacity of the communication channel can be utilized efficiently.
• A single ACK can acknowledge multiple frames.
• Sliding window refers to imaginary boxes at both the sender and receiver ends.
• The window can hold the frames at either end, and it provides the
upper limit on the number of frames that can be transmitted before
the acknowledgement.
• Frames can be acknowledged even when the window is not
completely filled.
• The window has a specific size n, and the frames are numbered modulo n, which means they are numbered from 0 to n-1. For example, if n = 8, the frames are numbered 0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1,...
Sender Window
• At the beginning of a transmission, the sender window contains n-1 frames; as frames are sent out, the left boundary moves inward, shrinking the size of the window. For example, if the size of the window is w and three frames are sent out, then the number of frames left in the sender window is w-3.
• Once an ACK arrives, the sender window expands by a number equal to the number of frames acknowledged by that ACK.
Example
• The size of the window is 7, and if frames 0 through 4 have
been sent out and no acknowledgement has arrived, then
the sender window contains only two frames, i.e., 5 and 6.
• Now, if an ACK arrives with the number 4, it means that frames 0 through 3 have arrived undamaged, and the sender window is expanded to include the next four frames.
• Therefore, the sender window contains six frames (5,6,7,0,1,2).
Receiver Window
• At the beginning of transmission, the receiver window does
not contain n frames, but it contains n-1 spaces for frames.
• When the new frame arrives, the size of the window shrinks.
• The receiver window does not represent the number of
frames received, but it represents the number of frames that
can be received before an ACK is sent. For example, the size
of the window is w, if three frames are received then the
number of spaces available in the window is (w-3).
Receiver Window
• Once the acknowledgement is sent, the receiver window expands by
the number equal to the number of frames acknowledged.
• Suppose the size of the window is 7, meaning the receiver window contains seven spaces for seven frames. If one frame is received, the receiver window shrinks, moving the boundary from 0 to 1. In this way, the window shrinks one space at a time, so it now contains six spaces. If frames 0 through 4 have been received, the window contains two spaces before an acknowledgement is sent.
Error Control
Error Control is a technique of error detection and retransmission
Stop-and-wait ARQ
• Stop-and-wait ARQ is a technique used to retransmit the data in case
of damaged or lost frames.
• This technique works on the principle that the sender will not
transmit the next frame until it receives the acknowledgement of the
last transmitted frame.
Four features are required for the
retransmission
• The sending device keeps a copy of the last transmitted frame until the
acknowledgement is received. Keeping the copy allows the sender to
retransmit the data if the frame is not received correctly.
• Both the data frames and the ACK frames are numbered alternately 0 and 1 so that they can be identified individually. An ACK 1 frame acknowledges the data 0 frame, meaning that the data 0 frame arrived correctly and the receiver now expects the data 1 frame.
• If an error occurs in the last transmitted frame, then the receiver sends
the NAK frame which is not numbered. On receiving the NAK frame,
sender retransmits the data.
• It works with the timer. If the acknowledgement is not received within
the allotted time, then the sender assumes that the frame is lost during
the transmission, so it will retransmit the frame.
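The retransmission behaviour described above can be illustrated with a toy simulation; `channel_delivers` is a hypothetical stand-in for a lossy link, not a real network API.

```python
import random

def stop_and_wait_send(frames, channel_delivers=lambda: random.random() > 0.2):
    """Toy Stop-and-Wait ARQ sender: keep a copy, alternate sequence numbers 0/1,
    and retransmit on timeout (a lost frame and a lost ACK look the same here)."""
    seq = 0
    for frame in frames:
        while True:
            print(f"send data {seq}: {frame!r}")              # copy kept until acknowledged
            if channel_delivers() and channel_delivers():     # frame arrives AND ACK returns
                print(f"recv ACK {1 - seq}")                  # ACK names the next expected frame
                break
            print("timeout -> retransmit")
        seq = 1 - seq                                         # only two sequence numbers needed

stop_and_wait_send(["f0", "f1", "f2"])
```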
Two possibilities of the
retransmission
• Damaged Frame:
• When the receiver receives a damaged frame, i.e., a frame that contains an error, it returns a NAK frame.
• For example, the sender transmits the data 0 frame; the receiver returns ACK 1, meaning that data 0 arrived correctly and it expects data 1. The sender transmits the data 1 frame; it arrives undamaged, and the receiver returns ACK 0. The sender transmits the next data 0 frame; the receiver detects an error and returns a NAK frame, so the sender retransmits the data 0 frame.
Two possibilities of the
retransmission
• Lost Frame:
• The sender is equipped with a timer that starts when a frame is transmitted. Sometimes the frame does not arrive at the receiving end, so it can be acknowledged neither positively nor negatively. The sender waits for an acknowledgement until the timer goes off; when the timer goes off, it retransmits the last transmitted frame.
Characteristics
• Used in connection-oriented communication.
• It offers both error control and flow control.
• It is used in the Data Link and Transport layers.
• Stop-and-Wait ARQ essentially implements the Sliding Window Protocol concept with a window size of 1.
Useful Terms
• Propagation Delay: Amount of time taken by a packet to make a
physical journey from one router to another router.
Propagation Delay = (Distance between routers) / (Velocity of
propagation)
• RoundTripTime (RTT) = 2* Propagation Delay
• TimeOut (TO) = 2* RTT
• Time To Live (TTL) = 2* TimeOut. (Maximum TTL is 180 seconds)
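Plugging illustrative numbers into the formulas above (the distance and propagation speed are assumed values, not figures from the slides):

```python
distance = 2000e3              # metres between the two routers (assumed)
velocity = 2e8                 # propagation speed in m/s (typical for copper/fibre)

propagation_delay = distance / velocity    # 0.01 s
rtt = 2 * propagation_delay                # 0.02 s
timeout = 2 * rtt                          # 0.04 s
print(propagation_delay, rtt, timeout)
```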
Simple Stop and Wait
Sender:
• Rule 1) Send one data packet at a time.
• Rule 2) Send the next packet only after receiving acknowledgement for
the previous.

Receiver:
• Rule 1) Send an acknowledgement after receiving and consuming a data packet.
• Rule 2) After consuming a packet, an acknowledgement needs to be sent (flow control).
Problems
1: Lost Data
2: Lost Acknowledgement
3: Delayed Acknowledgement/Data

• After a timeout on the sender side, a long-delayed acknowledgement might be wrongly considered as an acknowledgement of some other, more recent packet.
Stop-and-Wait ARQ (Automatic Repeat Request)
• The above three problems are resolved by Stop-and-Wait ARQ (Automatic Repeat Request), which performs both error control and flow control by adding two mechanisms:
• Time Out
• Sequence Number (Data)
Working of Stop-and-Wait ARQ
• Sender A sends a data frame or packet with sequence
number 0.
• Receiver B, after receiving the data frame, sends an
acknowledgement with sequence number 1 (the sequence
number of the next expected data frame or packet)
• There is only a one-bit sequence number that implies that
both sender and receiver have a buffer for one frame or
packet only.
Characteristics of Stop-and-Wait ARQ
• It uses the link between sender and receiver as a half-duplex link.
• Throughput = 1 data packet/frame per RTT.
• If the bandwidth-delay product is very high, the stop-and-wait protocol is not very useful: the sender has to keep waiting for acknowledgements before sending the next packet.
• It is an example of a “closed loop”, or connection-oriented, protocol.
• It is a special case of the Sliding Window Protocol (SWP) with a window size of 1.
• Irrespective of the number of packets the sender has, the stop-and-wait protocol requires only 2 sequence numbers, 0 and 1.
• Stop-and-Wait ARQ solves the three main problems but may cause big performance issues, as the sender always waits for an acknowledgement even if it has the next packet ready to send.
• Consider a situation where you have a high-bandwidth connection and the propagation delay is also high (you are connected to a server in another country through a high-speed connection).
• To solve this problem, we can send more than one packet at a time, with a larger sequence number space. These protocols are discussed in the following sections.
• So Stop-and-Wait ARQ may work fine where the propagation delay is very small, for example on LAN connections, but it performs badly for long-distance connections such as satellite links.
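A rough calculation shows why a high bandwidth-delay product hurts Stop-and-Wait; the link speed, frame size and delay below are assumed values for illustration only.

```python
bandwidth = 10e6             # 10 Mbps link (assumed)
frame_size = 10e3            # 10,000-bit frame (assumed)
propagation_delay = 0.05     # 50 ms one way, e.g. a long-distance path (assumed)

tt = frame_size / bandwidth                 # transmission time = 1 ms
cycle = tt + 2 * propagation_delay          # one frame per (Tt + RTT) in Stop-and-Wait
print(f"throughput ~ {frame_size / cycle / 1e6:.2f} Mbps, "
      f"link utilisation ~ {100 * tt / cycle:.1f}%")   # ~0.10 Mbps, ~1% of a 10 Mbps link
```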
Sliding Window ARQ
• Sliding Window ARQ is a technique used for continuous transmission
error control.
• The sliding window is a technique for sending multiple frames at a
time. It controls the data packets between the two devices where
reliable and gradual delivery of data frames is needed. It is also used
in TCP (Transmission Control Protocol).
• In this technique, each frame is sent with a sequence number. The sequence numbers are used to find the missing data at the receiver end.
• The sliding window technique also uses the sequence numbers to avoid duplicate data.
Three Features used for
retransmission
• In this case, the sender keeps the copies of all the transmitted frames
until they have been acknowledged.
• Suppose frames 0 through 4 have been transmitted and the last acknowledgement was for frame 2; the sender has to keep copies of frames 3 and 4 until they are received correctly.
Three Features used for
retransmission
• The receiver can send either NAK or ACK depending on the
conditions.
• The NAK frame tells the sender that the data have been received
damaged.
• Since the sliding window is a continuous transmission mechanism,
both ACK and NAK must be numbered for the identification of a
frame.
• The ACK frame consists of a number that represents the next frame
which the receiver expects to receive. The NAK frame consists of a
number that represents the damaged frame.
Three Features used for
retransmission
• The sliding window ARQ is equipped with the timer to handle the lost
acknowledgements.
• Suppose n-1 frames have been sent before any acknowledgement is received.
• The sender starts the timer and waits for an acknowledgement before sending any more frames.
• If the allotted time runs out, the sender retransmits one or all the
frames depending upon the protocol used.
Two protocols used in sliding window
ARQ
• Go-Back-N ARQ: In the Go-Back-N ARQ protocol, if one frame is lost or damaged, the sender retransmits all frames sent after the last frame for which a positive ACK was received.
• Go-Back-N ARQ protocol is also known as Go-Back-N Automatic
Repeat Request.
• It is a data link layer protocol that uses a sliding window method.
• In this, if any frame is corrupted or lost, all subsequent frames have to
be sent again.
• The size of the sender window is N in this protocol.
• For example, in Go-Back-8 the size of the sender window is 8. The receiver window size is always 1.
• If the receiver receives a corrupted frame, it discards it.
• The receiver does not accept a corrupted frame or any frame received out of order.
• When the timer expires, the sender sends the correct frame again. The design of the Go-Back-N ARQ protocol is illustrated below.
Example of Go-Back-N ARQ
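Since the original design figure is not reproduced here, the following toy simulation sketches the Go-Back-N behaviour instead (the loss model and window size are arbitrary assumptions):

```python
import random

def go_back_n_send(frames, N=4, loss=0.2):
    """Toy Go-Back-N sender: a cumulative ACK slides the window; on a timeout,
    every unacknowledged frame from `base` onwards is retransmitted."""
    base = 0                                              # oldest unacknowledged frame
    while base < len(frames):
        window = range(base, min(base + N, len(frames)))
        for i in window:
            print(f"send frame {i % (N + 1)} ({frames[i]!r})")
        delivered = 0
        for i in window:                                  # receiver accepts frames in order
            if random.random() < loss:                    # ...until the first loss/corruption
                break
            delivered += 1
        if delivered:
            base += delivered
            print(f"cumulative ACK up to frame {base - 1}")
        else:
            print("timeout -> go back and resend the whole window")

go_back_n_send([f"f{i}" for i in range(6)])
```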
Three possibilities can occur for
retransmission
• Damaged Frame
• Lost Data Frame
• Lost Acknowledgement
Selective-Reject ARQ
• Selective Repeat ARQ is also known as the Selective Repeat Automatic
Repeat Request.
• It is a data link layer protocol that uses a sliding window method.
• The Go-Back-N ARQ protocol works well if it has few errors.
• But if there are many errors in the frames, a lot of bandwidth is lost in sending the frames again.
• So, we use the Selective Repeat ARQ protocol.
• In this protocol, the size of the sender window is always equal to the size
of the receiver window.
• The size of the sliding window is always greater than 1.
Selective-Reject ARQ
• If the receiver receives a corrupted frame, it does not simply discard it.
• It sends a negative acknowledgement to the sender.
• The sender sends that frame again as soon as it receives the negative acknowledgement.
• It does not wait for any time-out to resend that frame.
• The design of the Selective Repeat ARQ protocol is shown below.
Features of Selective-Reject ARQ
• Selective-Reject ARQ technique is more efficient than Go-Back-n ARQ.
• In this technique, only those frames are retransmitted for which
negative acknowledgement (NAK) has been received.
• The receiver buffer keeps all the frames received after a damaged frame on hold until the frame in error is correctly received.
• The receiver must have appropriate logic for reinserting the frames in the correct order.
• The sender must have a searching mechanism that selects only the requested frame for retransmission.
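The reinsertion logic mentioned above can be sketched as a small receiver that buffers out-of-order frames (the arrival list is an invented example):

```python
def selective_repeat_receiver(arrivals, expected=0):
    """Buffer out-of-order frames and deliver them in order once the gap is filled.
    `arrivals` is a list of (sequence_number, payload) pairs as they come in."""
    buffer = {}
    delivered = []
    for seq, payload in arrivals:
        buffer[seq] = payload
        while expected in buffer:          # deliver any in-order run we now have
            delivered.append(buffer.pop(expected))
            expected += 1
    return delivered

# Frame 1 was damaged and retransmitted, so it arrives after frames 2 and 3.
print(selective_repeat_receiver([(0, "f0"), (2, "f2"), (3, "f3"), (1, "f1")]))
# ['f0', 'f1', 'f2', 'f3']
```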
| Go-Back-N ARQ | Selective Repeat ARQ |
| --- | --- |
| If a frame is corrupted or lost, all subsequent frames have to be sent again. | Only the frame that is corrupted or lost is sent again. |
| If the error rate is high, it wastes a lot of bandwidth. | Less bandwidth is lost. |
| It is less complex. | It is more complex because it has to do sorting and searching as well, and it also requires more storage. |
| It does not require sorting. | Sorting is done to get the frames in the correct order. |
| It does not require searching. | A search operation is performed. |
| It is used more. | It is used less because it is more complex. |

Piggybacking
• The technique in which an outgoing acknowledgement is delayed temporarily so that it can be attached to the next outgoing data frame is called piggybacking.
Advantages of piggybacking:
1. The major advantage of piggybacking is better use of the available channel bandwidth, because an acknowledgement frame does not need to be sent separately.
2. Usage cost reduction.
3. Improved latency of data transfer.
Disadvantages of piggybacking:
1. The main disadvantage of piggybacking is the additional complexity.
2. If the data link layer waits too long before transmitting the acknowledgement (blocking the ACK for some time), the sender will time out and rebroadcast the frame.
Random Access Protocol
• The data link layer is used in a computer network to transmit data between two devices or nodes.
• The layer is divided into two sublayers: data link control and multiple access resolution/protocol.
• The upper sublayer is responsible for flow control and error control in the data link layer, and hence it is termed logical (data) link control.
• The lower sublayer is used to handle and reduce collisions caused by multiple access to a channel; hence it is termed media access control, or multiple access resolution.
• Data link control provides a reliable channel for transmitting data over a dedicated link, using techniques such as framing, error control and flow control of data packets in the computer network.
Multiple access protocol?
• When a sender and receiver have a dedicated link to transmit data
packets, the data link control is enough to handle the channel.
• Suppose there is no dedicated path to communicate or transfer the data between two devices.
• In that case, multiple stations access the channel and may transmit data over the channel simultaneously.
• This may create collisions and crosstalk. Hence, a multiple access protocol is required to reduce collisions and avoid crosstalk between the channels.
• For example, suppose there is a classroom full of students. When a teacher asks a question, all the students (small channels) in the class start answering at the same time (transferring the data simultaneously).
• Because all the students respond at the same time, the answers overlap or are lost.
• Therefore it is the responsibility of the teacher (the multiple access protocol) to manage the students and make them answer one at a time.
Random Access Protocol
• In this protocol, all stations have equal priority to send data over the channel.
• In a random access protocol, no station depends on another station, and no station controls another station.
• Depending on the channel's state (idle or busy), each station transmits its data frame.
• However, if more than one station sends data over the channel at the same time, there may be a collision or data conflict.
• Due to the collision, the data frames may be lost or changed, and hence they are not received correctly at the receiver end.
ALOHA Random Access Protocol
• ALOHA was designed for wireless LANs (Local Area Networks) but can also be used on any shared medium to transmit data.
• Using this method, any station can transmit data across the network whenever a data frame is available for transmission.
Aloha Rules
• Any station can transmit data to a channel at any time.
• It does not require any carrier sensing.
• Collisions may occur and data frames may be lost when multiple stations transmit at the same time.
• Acknowledgement of the frames exists in Aloha; there is no collision detection.
• It requires retransmission of data after some random amount of time.
Pure Aloha
• In pure Aloha, each station transmits data to the channel without checking whether the channel is idle or not, so collisions may occur and the data frame may be lost.
• When a station transmits a data frame to the channel, it waits for the receiver's acknowledgement.
• If the acknowledgement does not arrive from the receiver within the specified time, the station waits for a random amount of time, called the backoff time (Tb).
• The station then assumes the frame has been lost or destroyed, and it retransmits the frame until the data is successfully transmitted to the receiver.
Pure Aloha
• The total vulnerable time of pure Aloha is 2 * Tfr.
• Maximum throughput occurs when G = 1/2, and it is about 18.4%.
• The probability of successful transmission of a data frame is S = G * e^(-2G).
• In a typical scenario, four stations access a shared channel and transmit data frames. Some frames collide because several stations send their frames at the same time; for instance, only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the receiver end.

• The other frames are lost or destroyed. Whenever two frames occupy the shared channel at the same time, a collision occurs and both suffer damage: if the first bit of a new frame enters the channel before the last bit of another frame has left it, both frames are destroyed and both stations must retransmit their data frames.
Slotted Aloha
• Slotted Aloha was designed to improve the efficiency of pure Aloha, because pure Aloha has a very high probability of frame collisions.
• In slotted Aloha, the shared channel is divided into fixed time intervals called slots.
• A station that wants to send a frame on the shared channel can send it only at the beginning of a slot, and only one frame is allowed to be sent in each slot.
• If a station is unable to send its data at the beginning of a slot, it has to wait until the beginning of the next slot.
• However, the possibility of a collision remains when two or more stations try to send a frame at the beginning of the same time slot.
• Maximum throughput occurs in slotted Aloha when G = 1, and it is about 37%.
• The probability of successfully transmitting a data frame in slotted Aloha is S = G * e^(-G).
• The total vulnerable time required in slotted Aloha is Tfr.
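The two throughput formulas can be checked numerically; this is just an evaluation of S = G·e^(-2G) and S = G·e^(-G) at their respective optimal loads.

```python
import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)      # S = G * e^(-2G)

def slotted_aloha_throughput(G):
    return G * math.exp(-G)          # S = G * e^(-G)

print(f"pure Aloha,    G = 0.5: S = {pure_aloha_throughput(0.5):.3f}")    # ~0.184 (18.4%)
print(f"slotted Aloha, G = 1.0: S = {slotted_aloha_throughput(1.0):.3f}") # ~0.368 (36.8%)
```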
| S.no. | On the basis of | Pure Aloha | Slotted Aloha |
| --- | --- | --- | --- |
| 1. | Basic | In pure Aloha, data can be transmitted at any time by any station. | In slotted Aloha, data can be transmitted only at the beginning of a time slot. |
| 2. | Introduced by | It was introduced under the leadership of Norman Abramson in 1970 at the University of Hawaii. | It was introduced by Roberts in 1972 to improve pure Aloha's capacity. |
| 3. | Time | Time is not synchronized in pure Aloha; time is continuous. | Time is globally synchronized in slotted Aloha; time is discrete. |
| 4. | Number of collisions | It does not decrease the number of collisions by half. | Slotted Aloha enhances the efficiency of pure Aloha and decreases the number of collisions by half. |
| 5. | Vulnerable time | In pure Aloha, the vulnerable time is 2 x Tt. | In slotted Aloha, the vulnerable time is Tt. |
| 6. | Successful transmission | The probability of successful transmission of a frame is S = G * e^(-2G). | The probability of successful transmission of a frame is S = G * e^(-G). |
| 7. | Throughput | The maximum throughput in pure Aloha is about 18%. | The maximum throughput in slotted Aloha is about 37%. |
CSMA (Carrier Sense Multiple Access)
• Carrier Sense Multiple Access (CSMA) is a media access protocol in which a station senses the traffic on the channel (idle or busy) before transmitting data.
• If the channel is idle, the station can send data on the channel.
• Otherwise, it must wait until the channel becomes idle. Hence, CSMA reduces the chance of a collision on the transmission medium.
CSMA Access Modes
• 1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel; if the channel is idle, it immediately sends the data. Otherwise, it keeps monitoring the channel and transmits the frame unconditionally as soon as the channel becomes idle.
• Non-Persistent: In this mode, each node senses the channel before transmitting data; if the channel is idle, it immediately sends the data. Otherwise, the station waits for a random time (it does not sense continuously), and when the channel is then found to be idle, it transmits the frame.
CSMA Access Modes
• P-Persistent: This is a combination of the 1-persistent and non-persistent modes. In p-persistent mode, each node senses the channel; if the channel is idle, it sends a frame with probability p. With probability q = 1-p, it defers to the next time slot and repeats the process.
• O-Persistent: In the o-persistent method, a transmission order (priority) is assigned to the stations before transmission on the shared channel. When the channel is found to be idle, each station waits for its turn to transmit the data.
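A minimal sketch of the p-persistent rule, assuming hypothetical `channel_is_idle` and `send_frame` hooks that stand in for the real medium-access hardware:

```python
import random

def p_persistent_attempt(channel_is_idle, send_frame, p=0.3, max_slots=100):
    """Sense the channel each slot; when it is idle, transmit with probability p,
    otherwise defer to the next slot (probability q = 1 - p) and sense again."""
    for _ in range(max_slots):
        if not channel_is_idle():
            continue                       # keep sensing until the channel is idle
        if random.random() < p:
            send_frame()
            return True
        # defer: fall through to the next slot and sense the channel again
    return False                           # gave up after max_slots slots

# Example wiring with stand-in callables:
p_persistent_attempt(channel_is_idle=lambda: random.random() > 0.5,
                     send_frame=lambda: print("frame sent"))
```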
CSMA/ CD
• It is a carrier sense multiple access/ collision detection network
protocol to transmit data frames.
• The CSMA/CD protocol works with a medium access control layer.
• Therefore, it first senses the shared channel before broadcasting the
frames, and if the channel is idle, it transmits a frame to check
whether the transmission was successful.
• If the frame is successfully received, the station sends another frame.
• If any collision is detected in the CSMA/CD, the station sends a jam/
stop signal to the shared channel to terminate data transmission.
• After that, it waits for a random time before sending a frame to a
channel.
Advantages of CSMA CD
• It detects collisions on a shared channel within a very short time.
• CSMA/CD is better than CSMA at dealing with collisions.
• CSMA/CD is used to avoid any form of wasted transmission.
• When necessary, it allows each station to use or share the available bandwidth.
• It has lower overhead than CSMA/CA.
Disadvantage of CSMA CD
• It is unsuitable for long-distance networks because CSMA CD's
efficiency decreases as the distance increases.
• It can detect collisions only up to 2500 meters; beyond this range, it
cannot detect collisions.
• When multiple devices are added to a CSMA CD, collision detection
performance is reduced.
CSMA/ CA
• It is a carrier sense multiple access/collision avoidance network protocol
for carrier transmission of data frames.
• It is a protocol that works with a medium access control layer.
• When a data frame is sent on the channel, the station receives an acknowledgement signal to check whether the channel is clear.
• If the station receives only a single (its own) acknowledgement, the data frame has been successfully transmitted to the receiver.
• But if it gets two signals (its own and one more, meaning frames have collided), a collision of frames has occurred on the shared channel. The sender thus detects the collision of a frame when it receives the acknowledgement signal.
Advantage of CSMA CA
• When the size of data packets is large, the chance of collision in CSMA/CA is lower.
• It controls the data packets and sends the data only when the receiver is ready to receive them.
• It is used to prevent collision rather than collision detection on the
shared channel.
• CSMA CA avoids wasted transmission of data over the channel.
• It is best suited for wireless transmission in a network.
• It avoids unnecessary data traffic on the network with the help of the
RTS/ CTS extension.
Disadvantage of CSMA CA
• CSMA/CA can involve a longer-than-usual waiting time before a data packet is transmitted.
• It consumes more bandwidth at each station.
• Its efficiency is lower than that of CSMA/CD.
Controlled Access Protocol
• It is a method of reducing data frame collision on a shared channel.
• In the controlled access method, the stations interact with each other, and a station can send a data frame only when it has been approved by all other stations.
• This means that a single station cannot send data frames unless it is approved by all other stations. There are three types of controlled access: Reservation, Polling, and Token Passing.
