
Data Communication

Unit 4
Error Detection and Correction

• There are many reasons, such as noise and cross-talk, that may cause data to get corrupted during transmission. The upper layers work on a generalized view of the network architecture and are not aware of actual hardware data processing. Hence, the upper layers expect error-free transmission between systems. Most applications would not function as expected if they received erroneous data; applications such as voice and video may not be affected as much and may still function well with some errors.
• The data-link layer uses error control mechanisms to ensure that frames (data bit streams) are transmitted with a certain level of accuracy. But to understand how errors are controlled, it is essential to know what types of errors may occur.
Types of Errors
• An error control mechanism may involve two approaches:
• Error detection
• Error correction
• Error Detection
• Errors in received frames are detected by means of a Parity Check or a Cyclic Redundancy Check (CRC). In both cases, a few extra bits are sent along with the actual data to confirm that the bits received at the other end are the same as those that were sent. If the counter-check at the receiver's end fails, the bits are considered corrupted.
Parity Check
• One extra bit is sent along with the original bits to make the number of 1s either even (even parity) or odd (odd parity).
• The sender, while creating a frame, counts the number of 1s in it. For example, if even parity is used and the number of 1s is already even, a bit with value 0 is added, so the number of 1s remains even. If the number of 1s is odd, a bit with value 1 is added to make it even.
• The receiver simply counts the number of 1s in the frame. If the count of 1s is even and even parity is used, the frame is considered not corrupted and is accepted. If the count of 1s is odd and odd parity is used, the frame is likewise considered not corrupted.
• If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But when more than one bit is erroneous, it is very hard for the receiver to detect the error.
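As a small illustration (a sketch, not part of the original text; the function names are hypothetical), the following Python snippet shows even parity at the sender and the check at the receiver, including the two-bit-error case that goes undetected:

```python
def add_even_parity(bits):
    """Append one parity bit so that the total number of 1s is even."""
    parity = sum(bits) % 2           # 0 if the count of 1s is already even, else 1
    return bits + [parity]

def check_even_parity(frame):
    """Receiver side: the frame is accepted only if the count of 1s is even."""
    return sum(frame) % 2 == 0

data = [1, 0, 1, 1, 0, 0, 1]         # 7 data bits; count of 1s is 4 (even)
frame = add_even_parity(data)        # parity bit 0 is appended
assert check_even_parity(frame)      # accepted

frame[2] ^= 1                        # single-bit error in transit
print(check_even_parity(frame))      # False: the error is detected

frame[4] ^= 1                        # a second bit error
print(check_even_parity(frame))      # True: two flipped bits go undetected
```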
Cyclic Codes

• Cyclic codes are special linear block codes with one extra property: if a code word is cyclically shifted (rotated), the result is another code word. For example, if 1011000 is a code word and we cyclically left-shift it, then 0110001 is also a code word.
• In this case, if we call the bits in the first word a0 to a6 and the bits in the second word b0 to b6, we can describe the shift as follows:

• b1 = a0, b2 = a1, b3 = a2, b4 = a3, b5 = a4, b6 = a5, b0 = a6
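In code, the rotation of a 7-bit code word can be written compactly (a small sketch based on the example above):

```python
def cyclic_left_shift(word, n_bits=7):
    """Rotate an n_bits-wide code word one position to the left."""
    mask = (1 << n_bits) - 1
    return ((word << 1) & mask) | (word >> (n_bits - 1))

print(format(cyclic_left_shift(0b1011000), '07b'))  # '0110001', as in the example
```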


Cyclic Redundancy Check (CRC)

• CRC is a different approach to detecting whether the received frame contains valid data. The technique involves binary division of the data bits being sent; the divisor is generated from a polynomial. The sender performs the division on the bits being sent and calculates the remainder. Before sending the actual bits, it appends the remainder to the end of the actual bits. The actual data bits plus the remainder are called a codeword, and the sender transmits the data bits as codewords.
• At the other end, the receiver performs the same division on the codeword using the same CRC divisor. If the remainder is all zeros, the data bits are accepted; otherwise it is assumed that some data corruption occurred in transit.
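The division can be sketched in a few lines of Python. The generator 1011 (the polynomial x^3 + x + 1) and the data word below are illustrative choices, not values from the text:

```python
def xor_divide(bits, divisor):
    """Modulo-2 (XOR) long division; returns the remainder as a bit string."""
    bits = list(bits)
    n = len(divisor) - 1
    for i in range(len(bits) - n):
        if bits[i] == '1':                      # only subtract when the leading bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return ''.join(bits[-n:])

data = '1101011011'
divisor = '1011'                                # example generator x^3 + x + 1
remainder = xor_divide(data + '0' * (len(divisor) - 1), divisor)  # sender appends zeros first
codeword = data + remainder                     # codeword = data bits + CRC remainder
print(xor_divide(codeword, divisor))            # receiver: an all-zero remainder means "accept"
```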
• Error Correction
• In the digital world, error correction can be done in two ways:
• Backward Error Correction: when the receiver detects an error in the received data, it requests the sender to retransmit the data unit.
• Forward Error Correction: when the receiver detects an error in the received data, it executes an error-correcting code, which helps it to auto-recover and correct some kinds of errors.
• The first approach, Backward Error Correction, is simple and can only be used efficiently where retransmission is not expensive, for example over optical fiber. In the case of wireless transmission, retransmission may cost too much, and Forward Error Correction is used instead.
• To correct an error in a data frame, the receiver must know exactly which bit in the frame is corrupted. To locate the bit in error, redundant bits are used as parity bits. For example, with ASCII words (7 data bits), there are 8 kinds of information we need: seven to indicate which bit is in error and one more to indicate that there is no error.
• For m data bits, r redundant bits are used; r bits can provide 2^r combinations of information. In an (m + r)-bit codeword, the r bits themselves may also get corrupted, so the r bits must be able to indicate any of the m + r bit locations plus the no-error case, i.e. 2^r ≥ m + r + 1.
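A minimal sketch of that condition in Python (the function name is just for illustration):

```python
def redundant_bits_needed(m):
    """Smallest r such that 2**r >= m + r + 1, i.e. the r parity bits can
    point at any of the m + r bit positions or signal 'no error'."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

print(redundant_bits_needed(7))   # 4 redundant bits for a 7-bit ASCII word
print(redundant_bits_needed(4))   # 3 redundant bits for 4 data bits (Hamming(7,4))
```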
• Block coding is a method used in digital electronics to encode data
into a specific format. The purpose of block coding is to add
redundant information to the data, which can be used to detect and
correct errors that may occur during transmission or storage. Block
coding is often used in conjunction with error correction codes (ECCs)
to provide a more robust way of transmitting and storing data
There are several types of block codes, including:
• Hamming Codes: Hamming codes are a type of block code that can detect
and correct single-bit errors. They are commonly used in digital systems to
ensure the accuracy of transmitted data.
• Reed-Solomon Codes: Reed-Solomon codes are a type of block code that
can correct multiple-bit errors. They are commonly used in storage
systems, such as CD-ROMs and DVDs, to ensure the integrity of stored
data.
• BCH Codes: BCH codes are a type of block code that can correct a specific
number of errors. They are commonly used in digital communication
systems to ensure the accuracy of transmitted data.
• Block coding can provide many benefits in digital electronics, including
improved reliability, increased data accuracy, and greater efficiency in the
transmission and storage of data. However, block coding also has some
disadvantages, including increased complexity and increased overhead in
terms of processing time and memory usage
Linear Block Code
• A linear block code is a type of error-correcting code in which the actual information bits are linearly combined with parity-check bits to generate a linear codeword that is transmitted through the channel. The other major type of error-correcting code is the convolutional code.
• In the linear block code technique, the complete message is divided into blocks, and these blocks are combined with redundant bits to handle error detection and correction.
Linear Block Coding
• In block coding, the complete message bit stream is divided into blocks where each block holds the same number of bits. Suppose each block contains k bits; each k-bit block defines a dataword. Hence, there are 2^k possible datawords. At this point we have not considered any redundancy; we only have the actual message bit stream divided into datawords.
• Now, in order to perform encoding, the datawords are encoded as codewords of n bits each. As discussed, a block has k bits, and after encoding there are n bits in each block (of course, n > k), and these n bits are transmitted across the channel. The additional n − k bits are not message bits; they are called parity bits, but during transmission they act as if they were part of the message bits.
• So, structurally, a codeword is represented as the k dataword bits followed by the n − k parity bits.
• Hence, there are 2^n possible codewords, out of which only 2^k are valid datawords. If errors are introduced during transmission, a valid codeword will most probably be changed into a word outside the code, which the receiver can detect as an error.
• With reference to the terms codeword and dataword, the term code rate is used; it is defined as the ratio of dataword bits to codeword bits, i.e. code rate = k/n.
• A code is represented as (n, k). Consider an example where n = 6 and k = 3; the code is then (6, 3), indicating that a dataword of 3 bits is encoded into a codeword of 6 bits.
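To make the (6, 3) example concrete, here is a minimal sketch in Python. The generator matrix G = [I | P] below is an arbitrary illustrative choice, not one specified in the text; any such G gives a systematic linear block code with code rate k/n = 3/6 = 1/2.

```python
import numpy as np

# Systematic generator matrix G = [I_3 | P] for an illustrative (6, 3) code.
G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def encode(dataword):
    """Codeword = dataword . G over GF(2): 3 data bits in, 6 code bits out."""
    return np.mod(np.dot(dataword, G), 2)

for d in ([0, 0, 0], [1, 0, 1], [1, 1, 1]):
    print(d, '->', encode(np.array(d)))
# Only 2**3 = 8 of the 2**6 = 64 possible 6-bit words are valid codewords.
```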
Checksum

• Checksum error detection is a method used to identify errors in transmitted data. The process involves dividing the data into equally sized segments and using 1's complement arithmetic to calculate the sum of these segments. The complemented sum (the checksum) is sent along with the data to the receiver. At the receiver's end, the same process is repeated, and if the complemented sum is all zeros, the data is taken to be correct.
• Checksum – Operation at Sender's Side
• Firstly, the data is divided into k segments, each of m bits.
• On the sender's end, the segments are added using 1's complement arithmetic to get the sum. The sum is complemented to get the checksum.
• The checksum segment is sent along with the data segments.
• Checksum – Operation at Receiver's Side
• At the receiver's end, all received segments are added using 1's complement arithmetic to get the sum. The sum is complemented.
• If the result is zero, the received data is accepted; otherwise it is discarded.
• Disadvantages
• If one or more bits of a segment are damaged and the corresponding bit or bits of opposite value in a second segment are also damaged, the two errors cancel out in the sum and the corruption goes undetected.
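A minimal sketch of the 1's-complement checksum in Python. The 16-bit segment width and the sample segment values are assumptions for illustration; the text only speaks of k segments of m bits:

```python
def ones_complement_sum(segments, width=16):
    """Add segments with end-around carry (1's complement arithmetic)."""
    mask = (1 << width) - 1
    total = 0
    for s in segments:
        total += s
        total = (total & mask) + (total >> width)   # wrap the carry back in
    return total

def make_checksum(segments, width=16):
    """Sender: complement of the 1's complement sum of the segments."""
    return ~ones_complement_sum(segments, width) & ((1 << width) - 1)

data = [0x4500, 0x0073, 0x0000, 0x4000]           # example 16-bit segments
checksum = make_checksum(data)                     # sender appends this value

# Receiver: add data + checksum and complement; zero means "accept".
total = ones_complement_sum(data + [checksum])
print((~total & 0xFFFF) == 0)                      # True when no corruption occurred
```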
Data-link Control & Protocols
• The data-link layer is responsible for implementing point-to-point flow and error control mechanisms.
• Flow Control
• When a data frame (Layer-2 data) is sent from one host to another over a single medium, the sender and receiver must work at the same speed; that is, the sender should send at a rate at which the receiver can process and accept the data. What if the speed (hardware/software) of the sender or receiver differs? If the sender is sending too fast, the receiver may be overloaded (swamped) and data may be lost.
Two types of mechanisms can be deployed to control the flow:
• Stop and Wait
• This flow control mechanism forces the sender, after transmitting a data frame, to stop and wait until the acknowledgement of the data frame is received.
• Sliding Window
• In this flow control mechanism, both sender and receiver agree on the number of data frames after which an acknowledgement should be sent. As we have seen, the stop-and-wait flow control mechanism wastes resources; this protocol tries to make use of the underlying resources as much as possible.
Error Control
• When a data frame is transmitted, there is a probability that it may be lost in transit or received corrupted. In both cases, the receiver does not receive the correct data frame and the sender knows nothing about the loss. In such cases, both sender and receiver are equipped with protocols that help them detect transit errors such as the loss of a data frame. Then either the sender retransmits the data frame or the receiver requests that the previous data frame be resent.
• Requirements for an error control mechanism:

• Error detection - The sender and receiver, either both or one of them, must be able to ascertain that there is an error in transit.
• Positive ACK - When the receiver receives a correct frame, it should acknowledge it.
• Negative ACK - When the receiver receives a damaged or duplicate frame, it sends a NACK back to the sender, and the sender must retransmit the correct frame.
• Retransmission: The sender maintains a clock and sets a timeout period. If an acknowledgement of a previously transmitted data frame does not arrive before the timeout, the sender retransmits the frame, assuming that the frame or its acknowledgement was lost in transit.
There are three techniques the data-link layer may deploy to control errors by Automatic Repeat Request (ARQ):
• Stop-and-wait ARQ
• The following transitions may occur in Stop-and-Wait ARQ:
• The sender maintains a timeout counter.
• When a frame is sent, the sender starts the timeout counter.
• If an acknowledgement of the frame arrives in time, the sender transmits the next frame in the queue.
• If the acknowledgement does not arrive in time, the sender assumes that either the frame or its acknowledgement was lost in transit. The sender retransmits the frame and restarts the timeout counter.
• If a negative acknowledgement is received, the sender retransmits the frame.
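The sender-side behaviour described above can be sketched as follows. The channel callbacks send_frame and recv_ack are hypothetical placeholders, not a real API:

```python
TIMEOUT = 2.0   # seconds; illustrative value

def stop_and_wait_send(frames, send_frame, recv_ack):
    """Sketch of the sender. send_frame(frame) transmits one frame;
    recv_ack(timeout) is a placeholder that returns 'ACK', 'NAK', or None
    if the timeout expires before any acknowledgement arrives."""
    for frame in frames:
        while True:
            send_frame(frame)                # transmit and (re)start the timeout
            ack = recv_ack(timeout=TIMEOUT)  # block until ACK/NAK or timeout
            if ack == 'ACK':
                break                        # acknowledged: move to the next frame
            # ack is 'NAK' or None (timeout): retransmit the same frame
```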
Go-Back-N ARQ
• The stop-and-wait ARQ mechanism does not utilize resources at their best: while waiting for an acknowledgement, the sender sits idle and does nothing. In the Go-Back-N ARQ method, both sender and receiver maintain a window.
• The sending-window size enables the sender to send multiple frames without receiving acknowledgements for the previous ones. The receiving window enables the receiver to receive multiple frames and acknowledge them. The receiver keeps track of the sequence numbers of incoming frames.
• When the sender has sent all the frames in the window, it checks up to which sequence number it has received positive acknowledgements. If all frames are positively acknowledged, the sender sends the next set of frames. If the sender finds that it has received a NACK, or has not received an ACK for a particular frame, it retransmits all frames starting from the one for which no positive ACK was received.
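A simplified sender-window sketch of this behaviour (the window size and the channel callbacks are assumptions; a real implementation would run the timer and reception concurrently):

```python
def go_back_n_send(frames, window_size, send_frame, recv_acks):
    """Sketch of a Go-Back-N sender.
    send_frame(seq, frame) transmits one numbered frame;
    recv_acks() is a placeholder returning the highest cumulative ACK
    received so far, or None if the timeout expired (loss or NACK)."""
    base = 0                                    # oldest unacknowledged frame
    while base < len(frames):
        upper = min(base + window_size, len(frames))
        for seq in range(base, upper):          # send every frame in the window
            send_frame(seq, frames[seq])
        ack = recv_acks()                       # cumulative acknowledgement
        if ack is None:                         # timeout / NACK: go back to base
            continue                            # ...and retransmit the whole window
        base = ack + 1                          # slide the window forward
```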
Selective Repeat ARQ
• In Go-Back-N ARQ, it is assumed that the receiver has no buffer space for its window and has to process each frame as it comes. This forces the sender to retransmit all frames that have not been acknowledged.
• In Selective Repeat ARQ, the receiver, while keeping track of sequence numbers, buffers frames in memory and sends a NACK only for the frame that is missing or damaged.

• The sender, in this case, resends only the frame for which a NACK is received.
Data Link Layer Protocols

• Data link layer protocols are generally responsible for ensuring and confirming that the bits and bytes received are identical to the bits and bytes that were transmitted. A data link protocol is basically a set of specifications used to implement the data link layer just above the physical layer of the Open Systems Interconnection (OSI) model.
• Some Common Data Link Protocols:
There are various data link protocols that are required for Wide Area Network (WAN) and modem connections. Logical Link Control (LLC) is a data link protocol of the Local Area Network (LAN). Some data link protocols are given below.
• Synchronous Data Link Control (SDLC) –
SDLC is a computer communication protocol. It supports multipoint links as well as error recovery and error correction. It is usually used to carry SNA (Systems Network Architecture) traffic and is the precursor to HDLC. It was designed and developed by IBM in 1975. It is used to connect remote devices to mainframe computers at central locations, in either point-to-point (one-to-one) or point-to-multipoint (one-to-many) configurations. It ensures that data units arrive correctly and flow properly from one network point to the next.
• High-Level Data Link Control (HDLC) –
HDLC is a protocol that is now considered an umbrella under which many Wide Area Network protocols sit. It was also adopted as part of the X.25 network. It was originally developed by ISO in 1979. The protocol is based on SDLC. It provides both a best-effort unreliable service and a reliable service. HDLC is a bit-oriented protocol that is applicable to both point-to-point and multipoint communications.
• Serial Line Internet Protocol (SLIP) –
SLIP is an older protocol that simply adds a framing byte at the end of each IP packet. It is basically a data link control facility for transferring IP packets, usually between an Internet Service Provider (ISP) and a home user over a dial-up link. It is an encapsulation of TCP/IP designed to work over serial ports and router connections. It has some limitations: it does not provide mechanisms such as error detection or error correction.
• Point-to-Point Protocol (PPP) –
PPP is a protocol that provides basically the same functionality as SLIP. It is a more robust protocol that can transport other types of packets in addition to IP packets. It can also be used for dial-up and leased router-to-router lines. It provides a framing method to delimit frames. It is a character-oriented protocol that also provides error detection. PPP comprises two component protocols, LCP and NCP: LCP is used for bringing lines up, negotiating options, and bringing them down, whereas NCP is used for negotiating network-layer protocols. PPP runs over the same serial interfaces as HDLC.
• Link Control Protocol (LCP) –
LCP is the component of PPP used for establishing, configuring, testing, maintaining, and ending or terminating links for the transmission of data frames. (The HDLC-style service on a LAN is provided by Logical Link Control, standardized as IEEE 802.2, which should not be confused with LCP.)
• Link Access Procedure (LAP) –
LAP protocols are data link layer protocols required for framing and transferring data across point-to-point links. They also include some reliability service features. There are basically three types of LAP: LAPB (Link Access Procedure, Balanced), LAPD (Link Access Procedure, D-channel), and LAPF (Link Access Procedure for Frame-Mode Bearer Services). LAP originated from IBM's SDLC, which was submitted by IBM to the ISO for standardization.
• Network Control Protocol (NCP) –
The original NCP was an older protocol implemented on the ARPANET; it allowed users to access computers and devices at remote locations and to transfer files between two or more computers, and it was replaced by TCP/IP in the 1980s. In PPP, NCP instead refers to the set of protocols that form part of PPP: an NCP is available for each higher-layer protocol supported by PPP to negotiate its use of the link.
What are noiseless and noisy channels?
• Data link layer protocols are divided into two categories based on whether the transmission channel is noiseless or noisy.
• Noiseless Channels
• There are two protocols for noiseless channels, which are as follows −
Simplest Protocol
Stop-and-Wait Protocol
• Let us consider an ideal channel in which no frames are lost, duplicated, or corrupted. We introduce two protocols for this type of channel. These two protocols are as follows −
A protocol that does not use flow control.
A protocol that uses flow control.
• Now let us consider the protocol that does not use flow control −
• Simplest Protocol
• Step 1 − The Simplest Protocol has no flow or error control.
• Step 2 − It is a unidirectional protocol in which data frames travel in only one direction, from the sender to the receiver.
• Step 3 − We assume that the receiver can handle any frame it receives with a processing time small enough to be negligible; the data link layer of the receiver immediately removes the header from the frame and hands the data packet to its network layer, which can also accept the packet immediately.
Stop-and-Wait Protocol

• Step 1 − If data frames arrive at the receiver faster than they can be processed, the frames must be stored until they are used.
• Step 2 − Generally, the receiver does not have enough storage space, especially if it is receiving data from many sources. This may result in either the discarding of frames or denial of service.
• Step 3 − To prevent the receiver from becoming overwhelmed with frames, the sender must slow down. There must be an ACK from the receiver to the sender.
• Step 4 − In this protocol the sender sends one frame, stops until it receives confirmation from the receiver, and then sends the next frame.
• Step 5 − We still have unidirectional communication for data frames, but auxiliary ACK frames travel in the other direction. We add flow control to the previous protocol.
Noisy Channels

• There are three protocols for noisy channels, which are as follows −
Stop-and-Wait Automatic Repeat Request.
Go-Back-N Automatic Repeat Request.
Selective Repeat Automatic Repeat Request.
• Noiseless channels are generally non-existent in practice. We must either ignore errors or add error control to our protocols.
Stop-and-Wait Automatic Repeat Request
• Step 1 − In a noisy channel, if a frame is damaged during transmission, the receiver will detect this with the help of the checksum.
• Step 2 − If a damaged frame is received, it will be discarded, and the transmitter will retransmit the same frame after receiving a proper acknowledgement.
• Step 3 − If the acknowledgement frame gets lost, the data link layer on 'A' eventually times out. Not having received an ACK, it assumes that its data frame was lost or damaged and sends the frame containing packet 1 again. This duplicate frame also arrives at the data link layer on 'B'; thus part of the file is duplicated and the protocol is said to have failed.
• Step 4 − To solve this problem, a sequence number is assigned in the header of the message.
• Step 5 − The receiver checks the sequence number to determine whether the message is a duplicate, since only one message is outstanding at any time.
• Step 6 − The sending and receiving stations need only a 1-bit alternating sequence number of '0' or '1' to maintain the relationship between a transmitted message and its ACK/NAK.
• Step 7 − A modulo-2 numbering scheme is used where the frames are alternately labelled '0' or '1' and positive acknowledgements are of the form ACK 0 and ACK 1.
• (Figures: normal operation of Stop-and-Wait ARQ; Stop-and-Wait ARQ with a lost frame.)
High-level Data Link Control (HDLC)

• High-level Data Link Control (HDLC) is a group of communication protocols of the data link layer for transmitting data between network points or nodes. Since it is a data link protocol, data is organized into frames. A frame is transmitted via the network to the destination, which verifies its successful arrival. It is a bit-oriented protocol that is applicable to both point-to-point and multipoint communications.
• Transfer Modes
• HDLC supports two types of transfer modes: normal response mode and asynchronous balanced mode.
• Normal Response Mode (NRM) − Here, there are two types of stations: a primary station that sends commands and secondary stations that respond to the received commands. It is used for both point-to-point and multipoint communications.
• Asynchronous Balanced Mode (ABM) − Here, the configuration is balanced, i.e. each station can both send commands and respond to commands. It is used only for point-to-point communications.
• HDLC Frame
• HDLC is a bit-oriented protocol where each frame contains up to six fields. The structure varies according to the type of frame. The fields of an HDLC frame are −
• Flag − An 8-bit sequence that marks the beginning and the end of the frame. The bit pattern of the flag is 01111110.
• Address − It contains the address of the receiver. If the frame is sent by the primary station, it contains the address(es) of the secondary station(s). If it is sent by a secondary station, it contains the address of the primary station. The address field may be from 1 byte to several bytes long.
• Control − It is 1 or 2 bytes containing flow and error control information.
• Payload − This carries the data from the network layer. Its length may vary from one network to another.
• FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard code used is CRC (cyclic redundancy code).
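Because HDLC is bit-oriented and delimits frames with the 01111110 flag, the sender performs bit stuffing: it inserts a 0 after every five consecutive 1s in the payload so the flag pattern cannot appear inside the frame. A minimal sketch (bits are represented as a string for readability; not from the original text):

```python
FLAG = '01111110'   # HDLC frame delimiter

def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s so the payload
    can never imitate the 01111110 flag pattern."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')   # stuffed bit, removed again by the receiver
            run = 0
    return ''.join(out)

payload = '011111101111110'
frame = FLAG + bit_stuff(payload) + FLAG
print(frame)   # the stuffed bits keep the payload distinct from the flags
```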
• Types of HDLC Frames
• There are three types of HDLC frames. The type of frame is determined by the control field of the frame −
• I-frame − I-frames, or Information frames, carry user data from the network layer. They also include flow and error control information piggybacked on the user data. The first bit of the control field of an I-frame is 0.
• S-frame − S-frames, or Supervisory frames, do not contain an information field. They are used for flow and error control when piggybacking is not required. The first two bits of the control field of an S-frame are 10.
• U-frame − U-frames, or Unnumbered frames, are used for a variety of miscellaneous functions, such as link management. A U-frame may contain an information field if required. The first two bits of the control field of a U-frame are 11.
Point-To-Point Protocol
• PPP stands for Point-to-Point Protocol. It is the most commonly used protocol for point-to-point access. For example, when a user wants to access the internet from home, the PPP protocol is used.
• It is a data link layer protocol that resides in layer 2 of the OSI model. It is used to encapsulate layer-3 protocols and all the information available in the payload so that they can be transmitted across serial links. The PPP protocol can be used on a synchronous link like ISDN as well as an asynchronous link like dial-up. It is mainly used for communication between two devices.
• It can be used over many types of physical networks such as serial cable, phone line, trunk line, cellular telephone, and fiber optic links such as SONET. Because a data link layer protocol identifies where a transmission starts and ends, ISPs (Internet Service Providers) use the PPP protocol to provide dial-up access to the internet.
Services provided by PPP

• It defines the format of the frames through which transmission occurs.
• It defines the link establishment process: if a user establishes a link with a server, how this link is established is handled by the PPP protocol.
• It defines the data exchange process, i.e., how data will be exchanged and the rate of the exchange.
• The main feature of the PPP protocol is encapsulation. It defines how network layer data and information in the payload are encapsulated in the data link frame.
• It defines the authentication process between the two devices: the authentication, the handshaking, and how the password will be exchanged between the two devices are decided by the PPP protocol.
Multiple Access Protocols in Computer Network

• The Data Link Layer is responsible for the transmission of data between two nodes. Its main functions are:
• Data Link Control
• Multiple Access Control
• Data Link Control –
Data link control is responsible for the reliable transmission of messages over a transmission channel by using techniques like framing, error control and flow control. For data link control, refer to Stop and Wait ARQ above.
Multiple Access Control –
• If there is a dedicated link between the sender and the receiver, then the data link control layer is sufficient; however, if there is no dedicated link, then multiple stations can access the channel simultaneously. Hence multiple access protocols are required to decrease collisions and avoid crosstalk. For example, in a classroom full of students, when a teacher asks a question and all the students (or stations) start answering simultaneously (send data at the same time), a lot of chaos is created (data overlaps or is lost); it is then the job of the teacher (the multiple access protocol) to manage the students and make them answer one at a time.
Thus, protocols are required for sharing data on non-dedicated channels. Multiple access protocols can be subdivided further as follows:
• 1. Random Access Protocol: Here, all stations have the same priority; that is, no station has more priority than another. Any station can send data depending on the medium's state (idle or busy). It has two features:
• There is no fixed time for sending data
• There is no fixed sequence of stations sending data
Controlled Access Protocols in Computer Network
• In controlled access, the stations seek information from one another to find which station has the right to send. It allows only one node to send at a time, to avoid collisions of messages on the shared medium. The three controlled-access methods are:
• Reservation
• Polling
• Token Passing
Reservation
• In the reservation method, a station needs to make a reservation before sending data.
• The timeline has two kinds of periods:
• A reservation interval of fixed length
• A data transmission period of variable-length frames.
• If there are N stations, the reservation interval is divided into N slots, and each station has one slot.
• Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1. No other station is allowed to transmit during this slot.
• In general, the i-th station may announce that it has a frame to send by inserting a 1 bit into the i-th slot. After all N slots have been checked, each station knows which stations wish to transmit.
• The stations that have reserved their slots transfer their frames in that order.
• After the data transmission period, the next reservation interval begins.
• Since everyone agrees on who goes next, there will never be any collisions.
Polling
• The polling process is similar to the roll call performed in a classroom. Just like the teacher, a controller sends a message to each node in turn.
• In this method, one device acts as the primary station (controller) and the others are secondary stations. All data exchanges must be made through the controller.
• The message sent by the controller contains the address of the node being selected for granting access.
• Although all nodes receive the message, only the addressed one responds to it and sends data, if any. If there is no data, usually a "poll reject" (NAK) message is sent back.
• Problems include the high overhead of the polling messages and the high dependence on the reliability of the controller.
• Advantages of Polling:
• The maximum and minimum access times and data rates on the channel are fixed and predictable.
• It has maximum efficiency.
• It has maximum bandwidth.
• No slot is wasted in polling.
• Priorities can be assigned to ensure faster access for some secondary stations.
• Disadvantages of Polling:
• It consumes more time.
• Since every station has an equal chance of winning in every round, link sharing is biased.
• Only some stations might run out of data to send.
• An increase in the turnaround time leads to a drop in the data rates of the channel under low loads.
• Token Passing
• In the token passing scheme, the stations are logically connected to each other in the form of a ring, and access to the stations is governed by tokens.
• A token is a special bit pattern or a small message that circulates from one station to the next in some predefined order.
• In a token ring, the token is passed from one station to the adjacent station in the ring, whereas in a token bus, each station uses the bus to send the token to the next station in some predefined order.
• In both cases, the token represents permission to send. If a station has a frame queued for transmission when it receives the token, it can send that frame before it passes the token to the next station. If it has no queued frame, it simply passes the token along.
• After sending a frame, each station must wait for all N stations (including itself) to send the token to their neighbours and for the other N − 1 stations to send a frame, if they have one.
• Problems such as duplication of the token, loss of the token, insertion of a new station, and removal of a station need to be tackled for correct and reliable operation of this scheme.
• The performance of a token ring can be characterized by two parameters:
• Delay, a measure of the time between when a packet is ready and when it is delivered; the average time (delay) required to send a token to the next station is a/N.
• Throughput, which is a measure of the successful traffic.
• Advantages of Token Passing:
• It can be applied with routers and cabling and includes built-in debugging features like protective relay and auto-reconfiguration.
• It provides good throughput under conditions of high load.
• Disadvantages of Token Passing:
• It is expensive.
• Topology components are more expensive than those of other, more widely used standards.
• The hardware elements of token rings are tricky to design, which implies that you should choose one manufacturer and use its equipment exclusively.
• 2. Controlled Access:
Here, data is sent by the station that is approved by all the other stations. For further details refer to the Controlled Access Protocols section above.
• 3. Channelization:
Here, the available bandwidth of the link is shared in time, frequency, or code among multiple stations so that they can access the channel simultaneously.
• Frequency Division Multiple Access (FDMA) – The available bandwidth is divided into equal bands so that each station can be allocated its own band. Guard bands are added so that no two bands overlap, to avoid crosstalk and noise.
• Time Division Multiple Access (TDMA) – Here, the bandwidth is shared between multiple stations. To avoid collisions, time is divided into slots and stations are allotted these slots to transmit data. However, there is a synchronization overhead, as each station needs to know its time slot; this is resolved by adding synchronization bits to each slot. Another issue with TDMA is propagation delay, which is resolved by the addition of guard times. For more details refer to Circuit Switching.
• Code Division Multiple Access (CDMA) – One channel carries all transmissions simultaneously; there is neither division of bandwidth nor division of time. For example, if there are many people in a room all speaking at the same time, perfect reception of data is still possible if only two people speak the same language. Similarly, data from different stations can be transmitted simultaneously using different code languages.
• Orthogonal Frequency Division Multiple Access (OFDMA) – In OFDMA the available bandwidth is divided into small subcarriers in order to increase the overall performance, and the data is transmitted through these small subcarriers. It is widely used in 5G technology.
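The "different code languages" of CDMA can be made concrete with orthogonal chip sequences. A minimal sketch using four 4-chip Walsh codes (the codes and the station bits below are illustrative choices, not values from the text):

```python
# Four mutually orthogonal 4-chip sequences (Walsh codes); illustrative only.
codes = {
    'A': [+1, +1, +1, +1],
    'B': [+1, -1, +1, -1],
    'C': [+1, +1, -1, -1],
    'D': [+1, -1, -1, +1],
}

# Each station multiplies its data bit (+1 or -1) by its chip sequence,
# and the shared channel simply adds all the sequences together.
bits = {'A': +1, 'B': -1, 'C': 0, 'D': +1}        # 0 = station stays silent
channel = [sum(bits[s] * codes[s][i] for s in codes) for i in range(4)]

# A receiver recovers one station's bit by taking the inner product of the
# channel signal with that station's code and dividing by the code length.
for s in codes:
    decoded = sum(channel[i] * codes[s][i] for i in range(4)) / 4
    print(s, decoded)    # prints +1.0, -1.0, 0.0, +1.0
```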
IEEE Standards in Computer Networks
• IEEE stands for the Institute of Electrical and Electronics Engineers. The main aim of IEEE is to foster technological innovation and excellence for the benefit of humanity. The IEEE standards in computer networks ensure communication between various devices; they also help to make sure that network services, i.e., the Internet and its related technologies, follow a set of guidelines and practices so that all networking devices can communicate and work smoothly. Since there are various types of computer system manufacturers, the IEEE Computer Society started a project in 1985 called Project 802 to enable standard communication between various devices. The standards that deal with computer networking are called the IEEE 802 standards.
• What are IEEE Standards in Computer Networks?
• Before learning about the IEEE standards in computer networks, let us get a brief introduction to IEEE. IEEE, or the Institute of Electrical and Electronics Engineers, is an organization that develops standards for the electronics and computer industries. IEEE is composed of numerous scientists, engineers, and students from all over the globe. The main aim of IEEE is to foster technological innovation and excellence for the benefit of humanity.
• The IEEE standards in computer networks ensure communication between various devices; they also help to make sure that network services, i.e., the Internet and its related technologies, follow a set of guidelines and practices so that all networking devices can communicate and work smoothly.
• Since there are various types of computer system manufacturers, the IEEE Computer Society started a project in 1985 called Project 802 to enable standard communication between various devices. Under this project, the IEEE divided the data link layer into two sub-layers, namely
• LLC or Logical Link Control and
• MAC or Media Access Control.
• The standards that deal with computer networking (networking in general) are called the IEEE 802 standards. IEEE 802 is a collection of networking standards that deals with data link layer and physical layer technologies such as Ethernet and wireless communications.
• There are various IEEE standards in computer networks. We will be discussing all
the IEEE standards in computer networks in the later section. Let us first learn
about the three notable IEEE standards.
• IEEE 802: The IEEE 802 deals with the standards of LAN and MAN, i.e., Local Area
Network and Metropolitan Area Network.
• IEEE 802.1: The IEEE 802.1 deals with the standards of LAN and MAN. Along with
that, it also deals with the MAC (Media Access Control) bridging.
• IEEE 802.2: The IEEE 802.2 deals with the LLC (Logical Link Control).
• Let us take an example of IEEE standards in computer networks. The IEEE 802.11 standard is used in various home devices such as laptops, printers, and smartphones, and allows them to communicate with each other and access the Internet. Hence, the IEEE 802.11 standard is useful for devices that use wireless communication, i.e., WiFi bands.
List of IEEE Standards in Computer Networks

IEEE standard – Description
IEEE 802 – Overview and architecture of LAN/MAN.
IEEE 802.1 – Bridging and management of LAN/MAN.
IEEE 802.1s – Multiple spanning trees.
IEEE 802.1w – Rapid reconfiguration of spanning trees.
IEEE 802.1x – Port-based network access control.
IEEE 802.2 – Logical Link Control (LLC).
IEEE 802.3 – Ethernet (CSMA/CD access method).
IEEE 802.3ae – 10 Gigabit Ethernet.
IEEE 802.4 – Token-passing bus access method and physical layer specifications.
IEEE 802.5 – Token ring access method and physical layer specifications.
IEEE 802.6 – Distributed Queue Dual Bus (DQDB) access method and physical layer specifications (MAN).
IEEE 802.7 – Broadband LAN.
IEEE 802.8 – Fiber optics.
IEEE 802.9 – Isochronous LANs.
IEEE 802.10 – Interoperable LAN/MAN security.
IEEE 802.11 – Wireless LAN, MAC and physical layer specifications.
IEEE 802.12 – Demand-priority access method, physical layer and repeater specifications.
Standard Ethernet

The original Ethernet was created in 1976 at Xerox's Palo Alto Research Center (PARC).
Since then, it has gone through four generations:
a. Standard Ethernet (10 Mbps)
b. Fast Ethernet (100 Mbps)
c. Gigabit Ethernet (1 Gbps)
d. Ten-Gigabit Ethernet (10 Gbps)
MAC Sublayer

In Standard Ethernet, the MAC sublayer governs the operation of the access method.
It also frames data received from the upper layer and passes them to the physical layer
The main parts of an Ethernet frame are
• Preamble − The starting field that provides an alert and timing pulse for transmission.
• Destination Address − A 6-byte field containing the physical address of the destination station.
• Source Address − A 6-byte field containing the physical address of the sending station.
• Length − It stores the number of bytes in the data field.
• Data and Padding − This carries the data from the upper layers.
• CRC − It contains error detection information.
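A minimal sketch that assembles the fields listed above into a frame (the helper name and the addresses are hypothetical; Python's zlib.crc32 happens to use the same CRC-32 polynomial as the Ethernet FCS):

```python
import struct, zlib

def build_frame(dest_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    """Sketch of an IEEE 802.3 frame body; the preamble precedes it on the wire."""
    length = struct.pack('!H', len(payload))     # 2-byte length of the data field
    if len(payload) < 46:
        payload = payload.ljust(46, b'\x00')     # pad the data field to the minimum
    body = dest_mac + src_mac + length + payload
    fcs = struct.pack('<I', zlib.crc32(body))    # CRC-32 over the frame body
    return body + fcs

frame = build_frame(b'\xaa\xbb\xcc\xdd\xee\xff', b'\x11\x22\x33\x44\x55\x66', b'hello')
print(len(frame))    # 6 + 6 + 2 + 46 + 4 = 64 bytes, the minimum Ethernet frame size
```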
Changes In The Standard

The 10-Mbps Standard Ethernet went through several changes before moving to higher data rates.
These changes opened the road for the evolution of Ethernet to become compatible with other high-data-rate LANs.

1. Bridged Ethernet
The first step in the Ethernet evolution was the division of a LAN by bridges.
Bridges have two effects on an Ethernet LAN: they raise the bandwidth and they separate collision domains.

Raising the Bandwidth

In an unbridged Ethernet network, the total capacity (10 Mbps) is shared among all stations with a frame to send; the stations share the bandwidth of the network. If only one station has frames to send, it benefits from the total capacity (10 Mbps). But if more than one station needs to use the network, the capacity is shared.
For example, if two stations have a lot of frames to send, they probably alternate in usage: when one station is sending, the other one refrains from sending.
• A bridge divides the network into two or more networks. Bandwidth-wise, each network is independent. For example, a network with 12 stations can be divided into two networks, each with 6 stations; now each network has a capacity of 10 Mbps. The 10-Mbps capacity in each segment is shared between 6 stations (actually 7, because the bridge acts as a station in each segment), not 12 stations. In a network with a heavy load, each station is theoretically offered 10/6 Mbps instead of 10/12 Mbps, assuming that the traffic does not go through the bridge.
Separating Collision Domains

Another advantage of a bridge is the separation of the collision domain. The collision domain becomes much smaller and the probability of collision is reduced tremendously. Without bridging, 12 stations contend for access to the medium; with bridging (the LAN divided into four segments of three stations each), only 3 stations contend for access to the medium.

2. Switched Ethernet
The idea of a bridged LAN can be extended to a switched LAN: instead of having two to four networks, why not have N networks, where N is the number of stations on the LAN? Then the bandwidth is shared only between the station and the switch (5 Mbps each).
In addition, the collision domain is divided into N domains.
A layer-2 switch is an N-port bridge with additional sophistication that allows faster handling of the packets.
Evolution from a bridged Ethernet to a switched Ethernet was a big step that opened the way to an even faster Ethernet.
3. Full-Duplex Ethernet
• One of the limitations of 10Base5 and 10Base2 is that communication is half-duplex: a station can either send or receive, but not both at the same time. The next step in the evolution was to move from switched Ethernet to full-duplex switched Ethernet. The full-duplex mode increases the capacity of each domain from 10 to 20 Mbps.
IEEE 802.11
• The IEEE 802.11 standard, popularly known as WiFi, lays down the architecture and specifications of wireless LANs (WLANs). WiFi, or WLAN, uses high-frequency radio waves instead of cables for connecting the devices in a LAN. Users connected by WLANs can move around within the area of network coverage.
• IEEE 802.11 Architecture
• The components of an IEEE 802.11 architecture are as follows −
• Stations (STA) − Stations comprise all devices and equipment that are connected to the wireless LAN. A station can be of two types −
• Wireless Access Point (WAP) − WAPs, or simply access points (APs), are generally wireless routers that form the base stations or access points.
• Client − Clients are workstations, computers, laptops, printers, smartphones, etc.
• Each station has a wireless network interface controller.
• Basic Service Set (BSS) − A basic service set is a group of stations communicating at the physical layer level. A BSS can be of two categories depending upon the mode of operation −
• Infrastructure BSS − Here, the devices communicate with other devices through access points.
• Independent BSS − Here, the devices communicate on a peer-to-peer basis in an ad hoc manner.
• Extended Service Set (ESS) − It is the set of all connected BSSs.
• Distribution System (DS) − It connects the access points in an ESS.
Frame Format of IEEE 802.11

• The main fields of a wireless LAN frame, as laid down by IEEE 802.11, are −
• Frame Control − It is a 2-byte starting field composed of 11 subfields. It contains control information for the frame.
• Duration − It is a 2-byte field that specifies the time period for which the frame and its acknowledgment will occupy the channel.
• Address fields − There are three 6-byte address fields containing the addresses of the source, the immediate destination, and the final endpoint, respectively.
• Sequence − It is a 2-byte field that stores the frame numbers.
• Data − This is a variable-sized field that carries the data from the upper layers. The maximum size of the data field is 2312 bytes.
• Check Sequence − It is a 4-byte field containing error detection information.
Bluetooth
• Bluetooth is a universal standard for short-range wireless voice and data communication. It is a Wireless Personal Area Network (WPAN) technology used for exchanging data over short distances. The technology was invented by Ericsson in 1994. It operates in the unlicensed industrial, scientific, and medical (ISM) band from 2.4 GHz to 2.485 GHz. A maximum of 7 devices can be connected at the same time. Bluetooth has a range of up to 10 meters. It provides data rates of up to 1 Mbps or 3 Mbps, depending on the version. The spreading technique it uses is FHSS (frequency-hopping spread spectrum). A Bluetooth network is called a piconet, and a collection of interconnected piconets is called a scatternet.
• What is Bluetooth?
• Bluetooth follows the principle of transmitting and receiving data using radio waves. A device can be paired with another device that also has Bluetooth, but the devices must be within the estimated communication range to connect. When two devices start to share data, they form a network called a piconet, which can further accommodate more than five devices.
• Points to remember for Bluetooth:
• Bluetooth transmission capacity is 720 kbps.
• Bluetooth is wireless.
• Bluetooth is a low-cost, short-distance radio communications standard.
• Bluetooth is robust and flexible.
• Bluetooth is a cable-replacement technology that can be used to connect almost any device to any other device.
• The basic architectural unit of Bluetooth is a piconet.
• Types of Bluetooth
• Various types of Bluetooth devices are available in the market nowadays. Let us look at them.
• In-Car Headset: One can make calls from the car speaker system without using a mobile phone.
• Stereo Headset: To listen to music in a car or on music players at home.
• Webcam: One can link a camera to a laptop or phone with the help of Bluetooth.
• Bluetooth-equipped Printer: The printer can be used when connected via Bluetooth to a mobile phone or laptop.
• Bluetooth Global Positioning System (GPS): To use GPS in a car, one can connect a phone to the car system via Bluetooth to fetch directions to an address.
• Advantages:
• It is a low-cost and easy-to-use technology.
• It can also penetrate through walls.
• It creates an ad hoc connection immediately, without any wires.
• It is used for voice and data transfer.
• Disadvantages:
• It can be hacked and is hence less secure.
• It has a slow data transfer rate of up to 3 Mbps.
• It has a small range of about 10 meters.
• Bluetooth communication does not support routing.
• The issue of handoffs has not been addressed.
• Applications:
• It can be used in laptops, wireless PCs, and printers.
• It can be used in wireless headsets, wireless PANs, and LANs.
• It can connect a digital camera wirelessly to a mobile phone.
• It can transfer data such as videos, songs, photographs, or files from one cell phone to another cell phone or to a computer.
• It is used in the sectors of medical health care, sports and fitness, and the military.
