
Error detection and correction

In information theory and coding theory, with applications in computer science and telecommunication, error detection and correction or error control are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data in many cases.
The general definitions of the terms are as follows:

Error detection is the detection of errors caused by noise or other impairments during
transmission from the transmitter to the receiver.

Error correction is the detection of errors and reconstruction of the original, error-free
data.

Implementations

Error correction may generally be realized in two different ways:

Automatic repeat request (ARQ) (sometimes also referred to as backward error correction): This is an error control technique whereby an error detection scheme is combined with requests for retransmission of erroneous data. Every block of data received is checked using the error detection code used, and if the check fails, retransmission of the data is requested; this may be done repeatedly, until the data can be verified. (A minimal sketch of this retransmission loop appears after the FEC description below.)

Forward error correction (FEC): The sender encodes the data using an error-
correcting code (ECC) prior to transmission. The additional information (redundancy)
added by the code is used by the receiver to recover the original data. In general, the
reconstructed data is what is deemed the "most likely" original data.
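
The retransmission loop at the heart of ARQ can be illustrated with a short sketch. The following Python fragment is a toy model under stated assumptions, not a real protocol: the modular byte sum and the simulated noisy_channel are illustrative stand-ins for a real error detection code and a real transmission medium.

    import random

    def checksum(block: bytes) -> int:
        # Toy error detection code: a modular byte sum (demonstration only).
        return sum(block) % 256

    def noisy_channel(frame: bytes, error_rate: float = 0.3) -> bytes:
        # Simulated channel that occasionally flips one bit of the frame.
        data = bytearray(frame)
        if random.random() < error_rate:
            i = random.randrange(len(data))
            data[i] ^= 1 << random.randrange(8)
        return bytes(data)

    def arq_send(block: bytes, max_retries: int = 10) -> bytes:
        # Append the checksum to the frame; if the receiver's recomputed
        # checksum disagrees with the delivered one, request retransmission.
        frame = block + bytes([checksum(block)])
        for _ in range(max_retries):
            received = noisy_channel(frame)
            body, tag = received[:-1], received[-1]
            if checksum(body) == tag:
                return body
        raise IOError("data could not be verified after retransmissions")

    print(arq_send(b"hello"))   # b'hello' once an intact frame gets through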

Error detection schemes

Error detection is most commonly realized using a suitable hash function (or checksum algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided.

There exists a vast variety of different hash function designs. However, some are of
particularly widespread use because of either their simplicity or their suitability for detecting
certain kinds of errors (e.g., the cyclic redundancy check's performance in detecting burst
errors).
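
As a concrete illustration of this tag-and-recompute pattern, the following Python sketch uses zlib.crc32 as one widely available checksum function; any other suitable hash function could be substituted.

    import zlib

    def tag(message: bytes) -> bytes:
        # Fixed-length 4-byte tag computed with zlib.crc32.
        return zlib.crc32(message).to_bytes(4, "big")

    def verify(message: bytes, received_tag: bytes) -> bool:
        # The receiver recomputes the tag and compares it with the delivered one.
        return tag(message) == received_tag

    t = tag(b"some payload")
    print(verify(b"some payload", t))   # True
    print(verify(b"some paylaod", t))   # False: the corruption is caught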

A random-error-correcting code based on minimum distance coding can provide a strict guarantee on the number of detectable errors, but it may not protect against a preimage attack. A repetition code, described in the section below, is a special case of error-correcting codes: although rather inefficient, a repetition code is suitable in some applications of error correction and detection due to its simplicity.
Repetition codes

A repetition code is a coding scheme that repeats the bits across a channel to achieve error-
free communication. Given a stream of data to be transmitted, the data are divided into
blocks of bits. Each block is transmitted some predetermined number of times. For example,
to send the bit pattern "1011", the four-bit block can be repeated three times, thus
producing "1011 1011 1011". However, if this twelve-bit pattern was received as "1010 1011
1011" where the first block is unlike the other two it can be determined that an error has
occurred.
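
A minimal sketch of such a scheme in Python, using a majority vote per bit position to both detect and correct errors, under the assumption that fewer than half of the copies are corrupted at any given position:

    def encode_repetition(block: str, copies: int = 3) -> list[str]:
        # Transmit the same block a fixed number of times.
        return [block] * copies

    def decode_repetition(received: list[str]) -> str:
        # Majority vote per bit position recovers the block as long as
        # fewer than half of the copies are corrupted at that position.
        n = len(received)
        bits = []
        for column in zip(*received):
            ones = column.count("1")
            bits.append("1" if ones > n // 2 else "0")
        return "".join(bits)

    # The example from the text: the first copy is corrupted.
    received = ["1010", "1011", "1011"]
    print(decode_repetition(received))   # "1011" - the error is outvoted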

Parity bits

A parity bit is a bit that is added to a group of source bits to ensure that the number of set
bits (i.e., bits with value 1) in the outcome is even or odd. It is a very simple scheme that
can be used to detect single or any other odd number (i.e., three, five, etc.) of errors in the
output. An even number of flipped bits will make the parity bit appear correct even though
the data is erroneous.
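
An even-parity scheme is only a few lines of Python; the sketch below appends a parity bit and checks it on receipt:

    def add_parity(bits: str) -> str:
        # Even parity: append a bit so the total number of 1s is even.
        parity = bits.count("1") % 2
        return bits + str(parity)

    def parity_ok(word: str) -> bool:
        # A received word passes if its 1-count is even; any odd number of
        # flipped bits is caught, while any even number slips through.
        return word.count("1") % 2 == 0

    word = add_parity("1011")    # "10111" (three 1s, so the parity bit is 1)
    print(parity_ok(word))       # True
    print(parity_ok("10110"))    # False: a single flipped bit is detected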

Checksums

A checksum of a message is a modular arithmetic sum of message code words of a fixed word length (e.g., byte values). The sum may be negated by means of a ones'-complement operation prior to transmission to detect errors resulting in all-zero messages.
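
A sketch in the style of the 16-bit Internet checksum (RFC 1071), which folds carries back into the sum and negates the result before transmission:

    def ones_complement_checksum(data: bytes) -> int:
        # 16-bit ones'-complement sum (RFC 1071 style); the final negation
        # means an all-zero message cannot pass as carrying a valid sum.
        if len(data) % 2:
            data += b"\x00"                            # pad to a whole 16-bit word
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
        return ~total & 0xFFFF                         # negate prior to transmission

    tag = ones_complement_checksum(b"error control")
    print(ones_complement_checksum(b"error contro1") == tag)   # False: corrupted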

Cyclic redundancy checks (CRCs)

A cyclic redundancy check (CRC) is a non-secure hash function designed to detect accidental
changes to digital data in computer networks; as a result, it is not suitable for detecting
maliciously introduced errors. It is characterized by specification of what is called
a generator polynomial, which is used as the divisor in a polynomial long division over
a finite field, taking the input data as the dividend, such that the remainder becomes the
result.
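
The polynomial long division can be carried out bit by bit. The sketch below computes an 8-bit CRC with the generator polynomial x^8 + x^2 + x + 1 (0x07); real deployments typically use longer, table-driven CRCs such as CRC-32.

    def crc8(data: bytes, poly: int = 0x07) -> int:
        # Treat the message as a polynomial over GF(2) and return the
        # remainder of long division by the generator polynomial.
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                if crc & 0x80:       # top bit set: subtract (XOR) the divisor
                    crc = ((crc << 1) ^ poly) & 0xFF
                else:
                    crc = (crc << 1) & 0xFF
        return crc

    print(hex(crc8(b"123456789")))   # 0xf4 for this plain CRC-8 variant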

Cryptographic hash functions

The output of a cryptographic hash function, also known as a message digest, can provide
strong assurances about data integrity, whether changes of the data are accidental (e.g.,
due to transmission errors) or maliciously introduced. Any modification to the data will likely
be detected through a mismatching hash value. Furthermore, given some hash value, it is
infeasible to find some input data (other than the one given) that will yield the same hash
value. If an attacker can change not only the message but also the hash value, then a keyed
hash or message authentication code (MAC) can be used for additional security. Without
knowing the key, it is infeasible for the attacker to calculate the correct keyed hash value for
a modified message.
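
Python's standard library covers both cases: hashlib for plain message digests and hmac for keyed hashes. A brief sketch, with an illustrative key:

    import hashlib
    import hmac

    message = b"transfer 100 units to account 42"

    # Message digest: any change to the data almost certainly changes the hash.
    print(hashlib.sha256(message).hexdigest())

    # Keyed hash (MAC): without the key, an attacker cannot forge a valid tag
    # for a modified message, even if they can replace the digest itself.
    key = b"shared-secret"            # illustrative key, not a real secret
    tag = hmac.new(key, message, hashlib.sha256).digest()

    # Receiver-side check with a constant-time comparison.
    print(hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest()))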

Error-correcting codes

Any error-correcting code can be used for error detection. A code with minimum Hamming distance, d, can detect up to d − 1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired.
Codes with minimum Hamming distance d = 2 are degenerate cases of error-correcting
codes, and can be used to detect single errors. The parity bit is an example of a single-error-
detecting code.
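
The detection guarantee is easy to see with the three-fold repetition code for a single bit, whose two code words "000" and "111" are at Hamming distance d = 3:

    def hamming_distance(a: str, b: str) -> int:
        # Number of positions at which two equal-length code words differ.
        return sum(x != y for x, y in zip(a, b))

    codewords = ["000", "111"]
    d = hamming_distance(*codewords)   # d = 3: up to d - 1 = 2 errors detected
    corrupted = "011"                  # "000" with two bits flipped
    print(corrupted in codewords)      # False: the corruption is detected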

Applications

Applications that require low latency (such as telephone conversations) cannot use automatic repeat request (ARQ); they must use forward error correction (FEC). By the time an ARQ system discovers an error and retransmits the data, the re-sent data will arrive too late to be usable.

Applications where the transmitter immediately forgets the information as soon as it is sent (such as most television cameras) cannot use ARQ; they must use FEC because when an error occurs, the original data is no longer available. (This is also why FEC is used in data storage systems such as RAID and distributed data stores.)

Applications that use ARQ must have a return channel; applications having no return
channel cannot use ARQ. Applications that require extremely low error rates (such as digital
money transfers) must use ARQ. Reliability and inspection engineering also make use of the
theory of error-correcting codes.[8]
