Cryptography: Recent Advances and Research Perspectives
Abstract
1. Introduction
In the earliest days of cryptography, simple substitution ciphers were used to encrypt messages. Over time, more complex
algorithms were developed such as the Hill cipher and the data encryption standard
(DES). The development of the advanced encryption standard (AES) in the late
twentieth century marked a significant improvement in symmetric key cryptography
as it provided stronger encryption and faster processing times.
Asymmetric key cryptography, also known as public-key cryptography, is a
more recent development in the field of cryptography. It is based on the use of two
different keys—a public key and a private key—to encrypt and decrypt messages. The
concept of asymmetric key cryptography was first introduced by Whitfield Diffie and
Martin Hellman in 1976 [2]. This led to the development of various algorithms such as
the Rivest-Shamir-Adleman (RSA) algorithm [3] and the Diffie-Hellman key
exchange [4].
Hash functions are another important component of modern-day encryption. A
hash function is a mathematical function that takes an input (or message) and pro-
duces a fixed-length output (or hash) [5]. Hash functions are used to ensure the
integrity of data as any change to the original input will result in a different hash. The
history of hash functions dates back to the 1950s, when the concept of message
digests was introduced. Over time, more complex algorithms were developed such as
the secure hash algorithm (SHA) and the message digest (MD) [5, 6].
Digital signatures are used to provide authentication and non-repudiation in digital
communications. A digital signature is a mathematical scheme for demonstrating the
authenticity of a digital message or document. The history of digital signature algo-
rithms dates back to the early 1980s, after the concept of public-key cryptography
was first introduced. Over time, various algorithms were developed such as the
digital signature algorithm (DSA) and the elliptic curve digital signature algorithm
(ECDSA) [7].
The evolution of cryptographic algorithms has been driven by major historical
events and technological advancements. With the advent of the internet and the
increase in digital communication, the need for stronger and more efficient encryp-
tion methods became more pressing. As computing power continues to increase, the
potential for cracking encryption algorithms also increases. This has led to the need
for stronger and more advanced cryptographic algorithms, such as post-quantum
cryptography, which can withstand attacks from quantum computers.
In addition to the potential threats to encryption technology, there is also the
potential for integrating artificial intelligence tools with cryptographic algorithms. For
example, machine learning algorithms could be used to identify potential vulnerabil-
ities in encryption systems and improve their security.
As the digital landscape continues to evolve, the importance of staying ahead of the
curve in encryption technology cannot be overstated. This chapter provides an over-
view of the history and evolution of cryptographic algorithms, highlighting the need
for ongoing innovation and development in this field. By continuing to push the
boundaries of encryption technology, we can help to safeguard the privacy and secu-
rity of sensitive data in the digital age.
Encryption is a critical component of modern communication and information
security [8]. By converting data into a secure format that can only be accessed
with the correct key or password, encryption ensures that sensitive information is
protected from unauthorized access. Throughout history, cryptography has
played a significant role in the security of sensitive information from the early substi-
tution ciphers used by ancient civilizations to the modern public-key encryption
algorithms.
The history of cryptography dates back to ancient civilizations, where people used
various methods to protect their messages from unauthorized access. The earliest
examples of cryptography being used to protect information were found in an
inscription carved around 1900 BC, in the main chamber of the tomb of the nobleman
Khnumhotep II, in Egypt [12, 13]. The inscription, known as the “Cryptography
Inscription,” described a method for hiding the meaning of hieroglyphic
inscriptions by using symbols to represent individual letters. The symbols were
then scrambled in a specific way to make the text difficult to read. The main purpose
of the “Cryptography Inscription” was not to hide the message but rather to change its
form in a way that would make it appear dignified. While the symbols used in the
inscription were scrambled, they were still readable by those who were familiar with
the method of substitution used. This suggests that the inscription was intended for a
specific audience who were already familiar with the method rather than as a means of
keeping the message secret from all who might view it.
One of the earliest known methods is the simple substitution cipher, which
involves replacing each letter of the alphabet with another letter according to a
predetermined pattern. There are two types of substitution cipher: monoalphabetic and polyalphabetic.
Figure 1.
Monoalphabetic substitution cryptography.
Figure 2.
Caesar cipher with shifts of 1, 2, 3, and 4 to the left.
Figure 3.
Vigenère square.
The Vigenère cipher can be applied in a simple way, similar to the Caesar cipher, or in a more sophisticated way in which a keyword determines the shifts: the keyword is repeated over the length of the plaintext, and each letter of the keyword shifts the corresponding letter of the plaintext by a certain number of positions in the alphabet. For example, encrypting "security" the simple way gives "TGFYWOAG," whereas the sophisticated way with the keyword "IBRI" produces the ciphertext "AFTCZJKG." To make the cipher more secure, Vigenère suggested using a different keyword for each message rather than reusing the same keyword over and over again. He also suggested using longer keywords to make the cipher even harder to crack. However, if the length of the keyword is known, the cipher can be broken using frequency analysis [15]. Figure 4 shows an example of onetime pad encryption/decryption.
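Before turning to the onetime pad, the keyword-based Vigenère scheme just described can be sketched in a few lines of Python. The sketch assumes an A–Z alphabet with A = 0, ..., Z = 25 and plaintext consisting only of letters; with the keyword "IBRI" it reproduces the "AFTCZJKG" ciphertext given above.

def vigenere_encrypt(plaintext: str, keyword: str) -> str:
    # Shift each plaintext letter by the value of the matching keyword letter (A = 0, ..., Z = 25).
    key = keyword.upper()
    out = []
    for i, ch in enumerate(plaintext.upper()):
        shift = ord(key[i % len(key)]) - ord("A")          # the keyword is repeated over the plaintext
        out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

def vigenere_decrypt(ciphertext: str, keyword: str) -> str:
    # Reverse the shifts to recover the plaintext.
    key = keyword.upper()
    out = []
    for i, ch in enumerate(ciphertext.upper()):
        shift = ord(key[i % len(key)]) - ord("A")
        out.append(chr((ord(ch) - ord("A") - shift) % 26 + ord("A")))
    return "".join(out)

print(vigenere_encrypt("SECURITY", "IBRI"))                 # AFTCZJKG, as in the example above
print(vigenere_decrypt("AFTCZJKG", "IBRI"))                 # SECURITY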
The onetime pad cipher is not a type of Vigenère cipher. It is a completely different
encryption method that is based on using a long, randomly generated key that is at
least as long as the plaintext. The key is made up of a series of random symbols, and
each symbol is used only once to encrypt one character of the plaintext. Because the
key is truly random and used only once, the onetime pad cipher is considered
unbreakable, provided that the key is kept secret and destroyed after use by both the
sender and the receiver.
The key must be as long as the plaintext for the onetime pad to be unbreakable, because the onetime pad achieves perfect secrecy: the ciphertext provides no information about the plaintext, even if the attacker has unlimited computational power.
Figure 4.
onetime-pad encryption/decryption example.
Generating truly random keys that are as long as the plaintext is a challenging task,
and transmitting them securely to the recipient is also a difficult problem. This is why
the onetime pad is mostly used in special cases such as diplomatic and intelligence
traffic. Also, onetime pad only guarantees confidentiality and not integrity. This
means that an attacker who intercepts the ciphertext cannot recover the plaintext, but
they can easily modify the ciphertext to change the meaning of the message. Onetime
pad requires a unique key for every message, and the keys should be securely
destroyed after use to prevent reuse.
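A minimal sketch of the onetime pad idea is shown below. It operates on bytes and uses XOR as the combining operation (the chapter's Figure 4 may use a different alphabet or operation); the key is drawn from Python's secrets module so that it is as long as the plaintext and unpredictable, and it must be used only once.

import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # Generate a truly random key exactly as long as the plaintext and XOR the two together.
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR with the same key recovers the plaintext; the key must then be destroyed and never reused.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ciphertext, key = otp_encrypt(b"HELLO WORLD")
print(otp_decrypt(ciphertext, key))                         # b'HELLO WORLD'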
The Playfair cipher is a polygraphic substitution cipher invented in 1854 by Sir
Charles Wheatstone [16]. It was the first cipher that allowed for the encryption of
pairs of letters instead of single letters. The Playfair cipher uses a 5 × 5 grid of letters,
with each letter of the alphabet appearing once. The letters in the grid are usually
chosen using a keyword. The keyword is then written into the grid, and the remaining
spaces are filled with the letters of the alphabet in order.
To encrypt plaintext using the Playfair cipher, the plaintext is divided into pairs of
letters. If there is an odd number of letters, a dummy letter such as “X” is added at the
end. Each pair of letters is then encrypted using the following steps and demonstrated in
Figure 5:
Step 1: Construct the matrix
◦ Build a 5 × 5 table from the keyword, filling the remaining cells with the rest of the alphabet
◦ Skip the letter J
Figure 5.
Playfair cipher steps (A: simple and B: sophisticated).
Step 2: Prepare the plaintext
◦ Repeating plaintext letters that fall in the same pair are separated with an X
◦ If there is an odd letter at the end of the message, insert the letter X
Step 3: Encrypt each pair
◦ If the two letters are in the same row, replace each with the letter to its right; if they are in the same column, replace each with the letter below it (wrapping around in both cases)
◦ Otherwise, swap each letter with the letter in its own row at the other end of the rectangle the pair forms
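The following Python sketch implements the Playfair steps above under the standard conventions (J merged with I; same-row pairs take the letter to the right, same-column pairs the letter below, with wrap-around); the keyword "KEYWORD" in the usage line is only an illustrative choice.

def playfair_matrix(keyword: str) -> list[str]:
    # Build the 5 x 5 matrix: keyword letters first, then the rest of the alphabet, J merged into I.
    seen, cells = set(), []
    for ch in keyword.upper() + "ABCDEFGHIKLMNOPQRSTUVWXYZ":
        ch = "I" if ch == "J" else ch
        if ch.isalpha() and ch not in seen:
            seen.add(ch)
            cells.append(ch)
    return cells                                            # 25 letters in row-major order

def playfair_pairs(plaintext: str) -> list[str]:
    # Split into digraphs, separating repeated letters with X and padding an odd length with X.
    letters = [("I" if c == "J" else c) for c in plaintext.upper() if c.isalpha()]
    pairs, i = [], 0
    while i < len(letters):
        a = letters[i]
        b = letters[i + 1] if i + 1 < len(letters) else "X"
        if a == b:
            pairs.append(a + "X")
            i += 1
        else:
            pairs.append(a + b)
            i += 2
    return pairs

def playfair_encrypt(plaintext: str, keyword: str) -> str:
    m = playfair_matrix(keyword)
    pos = {ch: (i // 5, i % 5) for i, ch in enumerate(m)}
    out = []
    for a, b in playfair_pairs(plaintext):
        (ra, ca), (rb, cb) = pos[a], pos[b]
        if ra == rb:                      # same row: take the letter to the right (wrap around)
            out.append(m[ra * 5 + (ca + 1) % 5] + m[rb * 5 + (cb + 1) % 5])
        elif ca == cb:                    # same column: take the letter below (wrap around)
            out.append(m[((ra + 1) % 5) * 5 + ca] + m[((rb + 1) % 5) * 5 + cb])
        else:                             # rectangle: swap the column coordinates
            out.append(m[ra * 5 + cb] + m[rb * 5 + ca])
    return "".join(out)

print(playfair_encrypt("HELLO WORLD", "KEYWORD"))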
The transposition cipher is another early method, in which the letters of the message are
rearranged according to a certain pattern, but the letters themselves are not changed
as shown in Figure 6. Unlike substitution ciphers, which replace plaintext characters
with different symbols or letters, transposition ciphers do not change the characters
themselves. Instead, they simply reorder the characters to create a new message. The
security of a transposition cipher is based on the difficulty of reconstructing the
original message from the reordered characters without knowledge of the transposition algorithm used.
The Rail Fence cipher is a type of transposition cipher that was first used during
the American Civil War. The technique involves writing the plaintext in a zigzag pattern down and up across the rows (rails) of a grid, then reading the letters off row by row to produce the ciphertext. The number of rows in the grid can be adjusted to increase the
complexity of the cipher.
Figure 6.
Transposition cipher example.
Figure 7.
Rail Fence encryption example.
For example, suppose we want to encrypt the message “HELLO WORLD” using
a Rail Fence cipher with three rows. Write the letters on a grid as shown in Figure 7.
To decrypt the message, we reconstruct the same zigzag pattern, fill the rows of the grid with the ciphertext, and then read the letters off in the original zigzag order to recover the plaintext.
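A short Python sketch of the Rail Fence cipher with an adjustable number of rows is given below; it encrypts the "HELLO WORLD" example and then recovers it.

def rail_fence_encrypt(plaintext: str, rails: int) -> str:
    # Write the message in a zigzag over the rows, then read the rows left to right.
    rows = [[] for _ in range(rails)]
    row, step = 0, 1
    for ch in plaintext:
        rows[row].append(ch)
        if row == 0:
            step = 1
        elif row == rails - 1:
            step = -1
        row += step
    return "".join("".join(r) for r in rows)

def rail_fence_decrypt(ciphertext: str, rails: int) -> str:
    # First pass: record which row each plaintext position falls on in the zigzag.
    pattern, row, step = [], 0, 1
    for _ in ciphertext:
        pattern.append(row)
        if row == 0:
            step = 1
        elif row == rails - 1:
            step = -1
        row += step
    # Second pass: hand the ciphertext out row by row, then read it back in zigzag order.
    counts = [pattern.count(r) for r in range(rails)]
    chunks, start = [], 0
    for c in counts:
        chunks.append(list(ciphertext[start:start + c]))
        start += c
    return "".join(chunks[r].pop(0) for r in pattern)

c = rail_fence_encrypt("HELLO WORLD", 3)
print(c)
print(rail_fence_decrypt(c, 3))                             # HELLO WORLD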
While these ancient methods of cryptography may seem primitive by today’s
standards, they laid the foundation for the development of more complex encryption
techniques in the future. The principles of substitution and transposition ciphers are
still used in modern cryptography, and the need for secure communication continues
to drive the evolution of cryptographic algorithms.
One of the most widely used symmetric key encryption algorithms is the data
encryption standard (DES), which was developed by IBM in the 1970s and adopted in
1977 by the National Bureau of Standards, now the National Institute of Standards and
Technology (NIST), as a U.S. Federal Information Processing Standard. DES is a block cipher that encrypts data in 64-bit blocks using a 56-bit key. The algorithm transforms a 64-bit input (plaintext) in a series of steps into a
64-bit output (ciphertext). The same steps, with the same key, are used to reverse the
encryption. The encryption process in DES involves three phases: an initial permutation, 16 rounds of a key-dependent round function, and a final permutation that is the inverse of the initial one.
Figure 8.
The initial permutation and its inverse.
The initial permutation and its inverse are defined by tables that indicate the position of
each bit in the input to the output as shown in Figure 8. The permutation tables
are used in the encryption and decryption processes to rearrange the bits of the
input according to the specified permutation.
Figure 9.
Internal structure of single round.
Figure 10.
Expansion permutation table.
A. Separation: The left and right halves of each 64-bit intermediate value are
treated as separate 32-bit quantities, labeled L (left) and R (right).
B. Expansion: The input key for each round is 48 bits and the right side (R) is
32 bits. In order to XOR Ki with Ri, we need to expand the length of Ri to 48
bits. The expansion table in Figure 10 is used for this purpose.
C. Key mixing: The 48-bit result from the expansion step is XORed with a 48-bit
subkey derived from the main 56-bit key. The subkey is generated using a
combination of permutation and substitution operations on the main key. As
shown in Figure 9, the subkey generation in DES involves the following steps:
Figure 11.
Tables used in subkeys generation.
• The key is first permuted using permuted choice 1 (PC-1), which selects the 56 key bits, and is then split into two 28-bit halves, C0 and D0.
• In each round, the two halves are rotated left by one or two bit positions according to a fixed shift schedule.
• After each shift, the two halves are combined to form a 56-bit value,
which is then permuted using a fixed permutation called the
permutation choice 2 (PC-2) as shown in Figure 9. The output of this
step is a 48-bit subkey.
• The subkeys are used in the encryption process as inputs to the round
function, which combines them with the plaintext to produce the
ciphertext.
Figure 12.
S-boxes used in the substitution step in DES.
D. Substitution: The 48-bit result of the key-mixing step is divided into eight 6-bit groups, each of which is passed through one of the eight S-boxes shown in Figure 12. Each S-box produces a 4-bit output, giving 32 bits in total.

E. Permutation: The 32-bit outputs from the S-boxes are then concatenated and subjected to a fixed permutation using the P-box permutation.
The main steps are summarized in Figure 13. The DES key schedule generates sixteen 48-bit round keys from the initial 56-bit key. These keys are used in each round of the encryption process to modify the plaintext. The key schedule involves applying a series of operations, including a permutation, a compression function, and left shifts, to the 56-bit key. The resulting subkeys are used one at a time in each round of the encryption process.
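For a concrete feel of the block and key sizes involved, the following sketch encrypts a single 64-bit block with DES. It assumes the third-party PyCryptodome package (Crypto.Cipher.DES), whose DES object takes an 8-byte key (56 key bits plus 8 parity bits); the key value and message are arbitrary illustrative choices, and DES itself should not be used for new systems.

from Crypto.Cipher import DES                   # PyCryptodome: pip install pycryptodome

key = bytes.fromhex("133457799BBCDFF1")         # 8 bytes = 64 bits, of which 56 are key bits (8 are parity)
cipher = DES.new(key, DES.MODE_ECB)             # one 64-bit block at a time, no chaining

plaintext = b"8bytes!!"                         # exactly one 64-bit block
ciphertext = cipher.encrypt(plaintext)
print(ciphertext.hex())

# The same key, with the subkeys applied in reverse order internally, reverses the encryption.
print(DES.new(key, DES.MODE_ECB).decrypt(ciphertext))   # b'8bytes!!'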
However, due to its small key size, DES is now considered insecure [19] and has
been replaced by the advanced encryption standard (AES).
Figure 13.
DES Algorithm steps.
A strengthened variant, 3DES (also known as Triple DES or TDEA), is a symmetric key
cipher that uses the DES algorithm three times in succession to increase its security
[1, 20]. The standard 3DES encryption process can be described as follows:
1. The plaintext is encrypted using the first 56-bit key (K1) with the DES algorithm
to produce a ciphertext.
2. The ciphertext from step 1 is decrypted using the second 56-bit key (K2) with the
DES algorithm to produce an intermediate value.
3. The intermediate value from step 2 is encrypted again using the third 56-bit key
(K3) with the DES algorithm to produce the final ciphertext.
Thus, 3DES involves encrypting the plaintext with K1, decrypting the result with
K2, and encrypting again with K3. The three keys K1, K2, and K3 are usually indepen-
dent keys generated randomly, although some variants of 3DES use a “keying option”
that allows for fewer keys to be used while still maintaining a higher level of security.
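The encrypt-decrypt-encrypt sequence above can be made explicit by composing three single-DES operations, again assuming the PyCryptodome package; the three keys are random illustrative values.

from Crypto.Cipher import DES
from Crypto.Random import get_random_bytes      # PyCryptodome

k1, k2, k3 = (get_random_bytes(8) for _ in range(3))   # three independent 64-bit DES keys
block = b"TOPSECRT"                                    # one 64-bit block

# Step 1: encrypt with K1; Step 2: decrypt with K2; Step 3: encrypt with K3 (E-D-E).
step1 = DES.new(k1, DES.MODE_ECB).encrypt(block)
step2 = DES.new(k2, DES.MODE_ECB).decrypt(step1)
ciphertext = DES.new(k3, DES.MODE_ECB).encrypt(step2)

# Decryption runs the steps in reverse: D(K3), then E(K2), then D(K1).
middle = DES.new(k3, DES.MODE_ECB).decrypt(ciphertext)
middle = DES.new(k2, DES.MODE_ECB).encrypt(middle)
print(DES.new(k1, DES.MODE_ECB).decrypt(middle))       # b'TOPSECRT'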
While 3DES is slower than DES due to its triple encryption process, it is still
considered a relatively fast algorithm and can be implemented in hardware as well as
in software. Nevertheless, like DES, it has largely been superseded by the advanced
encryption standard (AES).
1. SubBytes: The substitute bytes stage of AES uses a fixed S-box, which is a 256-byte lookup table, to perform a byte-by-byte substitution of the input block.
Figure 14.
The structure of AES algorithm.
The S-box is designed so that each input byte is replaced by a unique output byte. The inverse S-box, which maps each output byte back to its original input byte, is used in the decryption process. The S-box is a nonlinear component of the AES algorithm, which helps to increase the resistance of the cipher to various attacks. For example, the byte 19 is mapped to the value at the intersection of row 1 and column 9, which is D4 in the S-box, as shown in Figure 15.
Figure 15.
S-Box used in AES.
Figure 16.
ShiftRows operation and its output (with example).
2. ShiftRows: The ShiftRows stage is a permutation step that cyclically shifts the
bytes in each row of the state array by a certain number of bytes. This operation
is applied to each row independently, with no mixing of the bytes between the
rows. The number of bytes shifted is determined by the row number: the first
row is not shifted at all, the second row is shifted by one byte to the left, the third
row is shifted by two bytes to the left, and the fourth row is shifted by three
bytes to the left as shown in Figure 16.
This operation provides diffusion of the input data, which increases the security
of the cipher. The inverse operation, used for decryption, is a cyclic shift to the
right instead of the left so that the original byte positions are restored.
3. MixColumns: Each column of the state array is treated as a polynomial over the
finite field GF(2^8), where each byte is a coefficient of the polynomial. The bytes
are then multiplied by a fixed polynomial, and the result is reduced modulo
another fixed polynomial. This transformation ensures that each byte in a column
is dependent on all four bytes in the same column as demonstrated in Figure 17.
The multiplication and reduction are done using a pre-computed table of values.
The table is constructed in such a way that multiplication is reduced to a simple
table lookup and XOR operation.
During decryption, the inverse operation of MixColumns is performed. This
involves multiplying each column by a different fixed polynomial and reducing
the result modulo another fixed polynomial.
Figure 17.
Mix column function.
Figure 18.
Description of the AddRoundkey in AES.
4. AddRoundKey: Each byte of the current block is XORed with the corresponding byte of the round key. The round key is derived from the main encryption key using a key schedule algorithm, which generates a round key for each round of encryption. This stage serves to add a layer of confusion to the encryption process, making it more difficult to analyze and break the cipher. Figure 18 describes the AddRoundKey process in AES. A minimal sketch of all four round transformations is given after this list.
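The following Python sketch pulls the four round transformations together on a 4 × 4 state of bytes. Rather than typing in the 256-entry table, it derives the S-box from its definition (the multiplicative inverse in GF(2^8) followed by the fixed affine mapping); the example state and the all-zero round key are illustrative values only.

def xtime(a):                        # multiply by x (i.e. 2) in GF(2^8), modulo x^8 + x^4 + x^3 + x + 1
    a <<= 1
    return (a ^ 0x1B) & 0xFF if a & 0x100 else a

def gmul(a, b):                      # general multiplication in GF(2^8)
    p = 0
    while b:
        if b & 1:
            p ^= a
        a, b = xtime(a), b >> 1
    return p

def make_sbox():                     # S-box = affine transform of the multiplicative inverse
    sbox = []
    for x in range(256):
        inv = 0 if x == 0 else next(y for y in range(1, 256) if gmul(x, y) == 1)
        s = 0
        for i in range(8):
            bit = ((inv >> i) ^ (inv >> (i + 4) % 8) ^ (inv >> (i + 5) % 8)
                   ^ (inv >> (i + 6) % 8) ^ (inv >> (i + 7) % 8) ^ (0x63 >> i)) & 1
            s |= bit << i
        sbox.append(s)
    return sbox

SBOX = make_sbox()
assert SBOX[0x19] == 0xD4            # the chapter's example: byte 19 maps to D4

def sub_bytes(state):                # state is a 4 x 4 list of byte values, state[row][col]
    return [[SBOX[b] for b in row] for row in state]

def shift_rows(state):               # row r is rotated r positions to the left
    return [row[r:] + row[:r] for r, row in enumerate(state)]

def mix_columns(state):              # each column is multiplied by the fixed polynomial matrix
    M = [[2, 3, 1, 1], [1, 2, 3, 1], [1, 1, 2, 3], [3, 1, 1, 2]]
    out = [[0] * 4 for _ in range(4)]
    for c in range(4):
        col = [state[r][c] for r in range(4)]
        for r in range(4):
            out[r][c] = gmul(M[r][0], col[0]) ^ gmul(M[r][1], col[1]) ^ gmul(M[r][2], col[2]) ^ gmul(M[r][3], col[3])
    return out

def add_round_key(state, round_key): # byte-wise XOR with the round key
    return [[state[r][c] ^ round_key[r][c] for c in range(4)] for r in range(4)]

state = [[0x19, 0xA0, 0x9A, 0xE9],   # example 4 x 4 state (column-major layout of a 16-byte block)
         [0x3D, 0xF4, 0xC6, 0xF8],
         [0xE3, 0xE2, 0x8D, 0x48],
         [0xBE, 0x2B, 0x2A, 0x08]]
round_key = [[0x00] * 4 for _ in range(4)]     # all-zero round key, purely for illustration
print(add_round_key(mix_columns(shift_rows(sub_bytes(state))), round_key))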
The AES key expansion algorithm takes the cipher key (128, 192, or 256 bits) as input and generates
a sequence of round keys, one for each round of the AES encryption process. The key
expansion algorithm uses a key schedule to generate these round keys, which involves
performing a series of operations on the input key to generate an expanded key.
The key schedule begins by copying the input key into the first four words of the
key schedule. Then, the key expansion algorithm applies a series of operations to the
last four words of the current key schedule to generate the next four words. This
process is repeated until the key schedule contains the necessary number of round
keys for the specified key size. For example, for a 128-bit key, the key schedule will
generate 11 round keys, one for each of the 10 rounds of AES encryption plus an initial
round key. For a 192-bit key, the key schedule will generate 13 round keys, and for a
256-bit key, the key schedule will generate 15 round keys.
In the key expansion algorithm, every fourth word is formed by taking the previous word, applying a series of operations to it, and XORing the result with the word four positions back.
These operations include a one-byte circular left shift (RotWord), byte substitution
using the S-box (SubWord), and XORing with a round constant (Rcon[j]), the values
of Rcon[j] shown in Figure 19. In the 256-bit key/14-round version, an additional step
is performed on the middle word. The steps are:

1. RotWord performs a one-byte circular left shift on its input word.

2. SubWord performs a byte substitution on each byte of its input word, using the S-box.

3. The result is then XORed with the round constant Rcon[j].
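A small sketch of RotWord and of how the Rcon[j] constants arise (each first byte is double the previous one in GF(2^8)) is given below; the sample word is an arbitrary illustrative value, and the printed constants should match Figure 19.

def rot_word(word: bytes) -> bytes:
    # One-byte circular left shift: [a0, a1, a2, a3] -> [a1, a2, a3, a0].
    return word[1:] + word[:1]

def rcon(n: int) -> list[int]:
    # First bytes of Rcon[1..n]; each value is double the previous one in GF(2^8).
    rc, values = 0x01, []
    for _ in range(n):
        values.append(rc)
        rc = ((rc << 1) ^ 0x11B) & 0xFF if rc & 0x80 else rc << 1
    return values

print(rot_word(bytes([0x09, 0xCF, 0x4F, 0x3C])).hex())      # cf4f3c09
print([hex(v) for v in rcon(10)])    # 0x1, 0x2, 0x4, 0x8, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36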
The AES cipher is widely used in various applications, including secure communi-
cations, data storage, and authentication. Its security has been extensively analyzed,
and it is considered to be highly secure against various types of attacks.
Figure 19.
The values of Rcon[j] in hexadecimal.
• Blowfish [23]: A symmetric key block cipher that uses variable-length keys (up to
448 bits) and a block size of 64 bits. Blowfish is widely used in cryptographic
applications and is known for its fast encryption and decryption speed.
• Twofish [24]: A symmetric key block cipher that is a successor to Blowfish. It uses a
block size of 128 bits and supports key sizes up to 256 bits. Twofish is considered a
strong and secure encryption algorithm but is slower than some other algorithms.
• Rivest Cipher 4 (RC4) [25]: A symmetric key stream cipher that is widely used in
wireless networks, secure socket layer (SSL), and other applications. RC4 uses a
variable-length key (up to 2048 bits) to generate a stream of pseudo-random bytes,
which are XORed with the plaintext to produce the ciphertext. However, RC4 has
been found to be vulnerable to attacks and is now considered insecure for many
applications.
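Because RC4 is so compact, its key-scheduling and pseudo-random generation algorithms can be sketched directly; the sketch below is for illustration only, given the security caveats above, and the key/plaintext pair used is a commonly cited test vector.

def rc4_keystream(key: bytes, length: int) -> bytes:
    # Key-scheduling algorithm (KSA): shuffle the state S under control of the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): emit one pseudo-random byte per step.
    i = j = 0
    out = bytearray()
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def rc4_crypt(key: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same operation: XOR the data with the keystream.
    ks = rc4_keystream(key, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

c = rc4_crypt(b"Key", b"Plaintext")
print(c.hex())                       # the commonly cited test vector for this pair is bbf316e8d940af0ad3
print(rc4_crypt(b"Key", c))          # b'Plaintext'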
Since block ciphers operate on fixed-size blocks of data, they cannot be directly
used to encrypt or decrypt messages that are larger than the block size. A mode of
operation is a technique used to apply a block cipher to encrypt or decrypt data that is
larger than the block size of the cipher.
Modes of operation are used to overcome this limitation by allowing the encryption or
decryption of data that is larger than the block size of the cipher. These modes provide
methods to break up the input message into blocks, and then apply the block cipher to
each block. This process is typically performed using feedback mechanisms that generate
input for each subsequent block, based on the output of the previous block.
There are several modes of operation defined by NIST, each with its own strengths
and weaknesses and suitable for different types of applications. For example, some
modes are designed to provide confidentiality, while others also provide message
integrity and authentication. The five modes of operation defined by NIST are:
1. Electronic codebook (ECB): This is the simplest mode of operation, where each
block of plaintext is encrypted independently with the same key as shown in
Figure 20. However, it is not suitable for encrypting large amounts of data or
data with a predictable structure. It suffers from the lack of diffusion, which
means that identical plaintext blocks will result in identical ciphertext blocks.
Figure 20.
ECB mode encryption.
Figure 21.
CBC mode encryption.
2. Cipher block chaining (CBC): The cipher block chaining (CBC) mode of operation
addresses the issue of repetitive plaintext blocks in ECB mode. This mode XORs
each plaintext block with the previous ciphertext block before encryption as
shown in Figure 21. This helps to provide diffusion and makes the encryption
process more secure than ECB. It is worth noting that the sequential nature of CBC
encryption can also be useful in some cases. If a ciphertext block is corrupted or modified during transmission, the corresponding plaintext block and part of the following one will be affected, which makes accidental corruption easier to notice (although CBC by itself does not provide cryptographic authentication).
However, the fact that a one-bit change in the plaintext or IV affects all subsequent ciphertext blocks can also be a weakness. This can make it difficult to implement certain types of
secure communications protocols such as those that require random access to
encrypted data. Additionally, CBC requires a secure and unpredictable initialization
vector (IV) for each message, which can be challenging to generate and transmit
securely in some scenarios. Finally, as with any mode of operation that relies on a
shared secret key, CBC is vulnerable to attacks that exploit weaknesses in the
underlying block cipher or key management protocols.
3. Cipher feedback (CFB): In this mode, the block cipher is used as a feedback
mechanism to create a stream cipher. The block cipher encrypts the previous ciphertext (or the IV for the first block), and its output is XORed with the plaintext to produce the next ciphertext as shown in
Figure 22. This mode allows for variable-length plaintext and provides a self-
synchronizing stream cipher. The initial value is called the initialization vector (IV),
and it is used to seed the process. The size of the shift registers determines the
amount of feedback. For example, if s = 8, the encryption process operates on an 8-
bit subset of the plaintext block at a time. If s = n, then the entire plaintext block is
used at once.
One advantage of CFB mode is that error propagation is limited. If a bit error occurs during transmission, only the plaintext block that contains the error and the one that follows it are affected; the remaining blocks are unchanged. However, one disadvantage of CFB
mode is that it is sequential, which means that it cannot be parallelized.
Figure 22.
CFB mode encryption.
4. Output feedback (OFB): OFB mode operates on full blocks of plaintext and
ciphertext, as other block cipher modes of operation do. However, instead of
encrypting the plaintext, the block cipher is used to encrypt an IV to generate a
keystream. The keystream is then XORed with the plaintext to produce the
ciphertext. The keystream is independent of the plaintext and ciphertext, so it can be computed in advance, as shown in Figure 23. The main
difference between OFB and CFB is that OFB generates a key stream that is
independent of the plaintext, while CFB uses the ciphertext as feedback to
generate the key stream.
5. Counter (CTR): This mode encrypts a counter value with a block cipher to
produce a keystream, which is then XORed with the plaintext to produce the
ciphertext. This mode is similar to OFB, but it allows for parallel encryption and decryption and supports random access to encrypted data. The counter is incremented for each block of plaintext, and the resulting keystream block is used to encrypt that block, see
Figure 24. The advantage of the CTR mode is that it allows for parallel
encryption and decryption of blocks since the keystream is generated
independently of the plaintext or ciphertext. This can lead to significant speed
improvements over other modes, particularly for large messages.
One potential drawback of CTR mode is the need to ensure that the counter
values are never repeated as this could compromise the security of the
encryption. This can be achieved by using a unique counter value for each block
of plaintext, for example by using a nonce (a number used only once) as part of
the counter value.
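To make the counter-mode structure concrete, the sketch below builds CTR by hand from an AES-ECB primitive (the PyCryptodome package is assumed), using an 8-byte nonce concatenated with an 8-byte block counter as one possible counter-block layout. In practice the library's built-in CTR mode, or an authenticated mode, should be used instead.

from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes      # PyCryptodome

def ctr_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Encrypt counter blocks to build a keystream, then XOR it with the data.
    # Decryption is the identical operation, and every block can be processed independently.
    ecb = AES.new(key, AES.MODE_ECB)             # the block cipher is only ever used in the forward direction
    out = bytearray()
    for offset in range(0, len(data), 16):
        counter_block = nonce + (offset // 16).to_bytes(8, "big")   # 8-byte nonce || 8-byte counter
        keystream = ecb.encrypt(counter_block)
        chunk = data[offset:offset + 16]
        out.extend(c ^ k for c, k in zip(chunk, keystream))
    return bytes(out)

key, nonce = get_random_bytes(16), get_random_bytes(8)
ciphertext = ctr_encrypt(key, nonce, b"The same nonce/counter pair must never be reused under one key.")
print(ctr_encrypt(key, nonce, ciphertext))       # XORing with the same keystream recovers the plaintext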
Figure 23.
OFB mode encryption.
Figure 24.
Counter mode encryption.
4.1 RSA
RSA is a widely used public-key cryptosystem. It is named after its inventors
Ron Rivest, Adi Shamir, and Leonard Adleman. Its security is based on the difficulty
of factoring large integers, which serves as the foundation for its mathematical oper-
ation. RSA has been used for over four decades and is still considered a secure and
practical public-key cryptosystem. RSA involves the generation of a public and a
private key pair. The public key is distributed to others, while the private key is kept
secret. The public key can be used to encrypt messages that only the owner of the
private key can decrypt.
The security of RSA is based on the fact that factoring large integers is a difficult
problem, and the larger the key size, the more difficult it becomes. RSA keys typically
range in size from 1024 to 4096 bits. We can say that RSA is widely accepted and
implemented in various applications such as secure communication, digital signatures,
and key exchange [30]. RSA encryption and decryption are performed as follows:
• Key generation:

1. Choose two large prime numbers p and q.

2. Compute the modulus n = p × q and φ(n) = (p − 1)(q − 1).

3. Choose an integer e such that 1 < e < φ(n) and gcd(e, φ(n)) = 1. This value is called the public exponent.

4. Compute d such that d × e ≡ 1 (mod φ(n)). This value is the private exponent. The public key is (n, e) and the private key is (n, d).

• Encryption: The sender represents the message as an integer m with 0 ≤ m < n and computes the ciphertext c = m^e mod n.

• Decryption: The receiver recovers the message by computing m = c^d mod n.
The security of RSA is based on the difficulty of factoring large composite numbers
into their prime factors. Breaking RSA encryption requires factoring the modulus n
into its two prime factors p and q, which is a computationally intensive task for large
values of n. Therefore, the security of RSA increases as the size of the keys and the
modulus increase.
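A toy numerical sketch of textbook RSA with the small primes p = 61 and q = 53 is shown below; real deployments use much larger primes and padded RSA through a vetted library.

from math import gcd

# Toy parameters only: real RSA uses primes that are hundreds of digits long.
p, q = 61, 53
n = p * q                       # modulus: 3233
phi = (p - 1) * (q - 1)         # 3120

e = 17                          # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)             # private exponent: modular inverse of e mod phi (2753)

message = 65                    # the message must be an integer smaller than n
ciphertext = pow(message, e, n)         # c = m^e mod n
recovered = pow(ciphertext, d, n)       # m = c^d mod n
print(ciphertext, recovered)            # 2790 65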
4.2 Diffie-Hellman
Diffie-Hellman (DH) is a key exchange algorithm that allows two parties to estab-
lish a shared secret key over an insecure channel. It was developed by Whitfield Diffie
and Martin Hellman in 1976 and is based on the discrete logarithm problem in
modular arithmetic.
In DH, each party generates a public-private key pair. The public keys are
exchanged and used to derive a shared secret key. The derivation of the key involves
modular exponentiation and is based on the fact that the discrete logarithm problem is
believed to be hard. The DH protocol works as follows:
1. Alice and Bob publicly agree on a large prime number p and a primitive root of p,
denoted by g.
2. Alice randomly chooses a secret integer a and calculates A = g^a mod p. She sends
A to Bob.
3. Bob randomly chooses a secret integer b and calculates B = g^b mod p. He sends
B to Alice.
4. Alice computes the shared secret K = B^a mod p.

5. Bob computes the shared secret K = A^b mod p. Since B^a = (g^b)^a = (g^a)^b = A^b (mod p), both parties arrive at the same value of K.
6. Alice and Bob now have a shared secret key that can be used for symmetric
encryption.
The security here relies on the fact that computing the discrete logarithm of g mod
p is computationally infeasible. This means that an attacker who intercepts A and B
cannot calculate a or b, and therefore cannot compute the shared secret key K.
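The exchange can be sketched numerically with the toy parameters p = 23 and g = 5 (real deployments use primes of 2048 bits or more):

import secrets

# Toy parameters: p would be a prime of 2048 bits or more in practice.
p, g = 23, 5                       # public: prime modulus and a primitive root of p

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)                   # Alice sends A = g^a mod p
B = pow(g, b, p)                   # Bob sends B = g^b mod p

k_alice = pow(B, a, p)             # Alice computes B^a mod p
k_bob = pow(A, b, p)               # Bob computes A^b mod p
assert k_alice == k_bob            # both sides now share the same secret key
print(k_alice)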
The DH algorithm can be used for secure communication by combining it with a
symmetric encryption algorithm. The shared secret key derived using DH is used as the key
for the symmetric encryption algorithm, providing confidentiality for the communication. DH is widely used in many cryptographic protocols such as Secure Sockets Layer (SSL)/Transport Layer Security (TLS), the Secure Shell protocol (SSH), and virtual private networks (VPNs)
[31, 32]. However, it does not provide authentication [32], and therefore a man-in-the-
middle attack is possible if the channel is not authenticated. To address this issue, DH is
often used in combination with digital signatures or other authentication mechanisms [33].
5. Hash functions
A hash function is a one-way function that takes an input (also known as the
message or data) of arbitrary length and produces a fixed-size output, typically
represented as a sequence of bytes. The output is often referred to as the hash or
message digest. A good hash function should have the following properties:
• Deterministic: The same input should always produce the same output.

• Preimage resistance: Given a hash value, it should be computationally infeasible to find an input that produces it.

• Collision resistance: It should be computationally infeasible to find two different inputs that produce the same hash.

• Avalanche effect: A small change in the input should produce a completely different hash.
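This behaviour can be seen directly with Python's standard hashlib module: the digest has a fixed length, is deterministic, and changes completely when a single character of the input changes.

import hashlib

digest1 = hashlib.sha256(b"hello world").hexdigest()
digest2 = hashlib.sha256(b"hello worle").hexdigest()   # a single-character change in the input

print(digest1)          # fixed-length 256-bit (64 hex character) output
print(digest2)          # a completely different value
print(len(digest1))     # 64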
6. Digital signatures
Digital signatures are used to ensure the authenticity, integrity, and non-
repudiation of a digital document or message. The process of creating a digital signa-
ture involves applying a mathematical algorithm to the message or document using
the signer’s private key. The resulting value, known as the signature, is unique to both
the message and the signer’s private key.
The receiver of the message or document can verify the signature using the signer’s
public key, which confirms that the message was indeed sent by the signer and that it
has not been altered since it was signed.
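A brief sketch of signing and verification is shown below. It assumes the third-party cryptography package and uses ECDSA over the P-256 curve with SHA-256; verification raises InvalidSignature if either the message or the signature has been altered.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

private_key = ec.generate_private_key(ec.SECP256R1())   # the signer's private key
public_key = private_key.public_key()                   # distributed to verifiers

message = b"Pay 100 to Alice"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))   # depends on the message and private key

try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid: message is authentic and unmodified")
except InvalidSignature:
    print("signature check failed: message or signature was altered")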
Digital signatures can be used in a variety of applications, including software
updates, online transactions, and legal documents. They provide a means of verifying
the identity of the sender, ensuring the integrity of the message or document, and
preventing the sender from denying that they sent the message or document.
7. Future of cryptography
Cryptography has come a long way since its early beginnings, and it continues to play
a critical role in securing our digital world today. The advancement of technology has led
to more complex and sophisticated encryption methods, which have become essential for
protecting sensitive information such as financial transactions, personal data, and confi-
dential communication. With the rise of the internet and mobile technology, cryptogra-
phy has become more important than ever. It is used in everything from e-commerce to
social media to secure online communication [34]. As technology continues to evolve, so
will the field of cryptography, and new techniques and algorithms will be developed to
stay ahead of emerging threats. The future of cryptography holds great promise as
researchers work to develop quantum-resistant encryption and new methods for securing
blockchain technology. As we rely more and more on digital communication and storage,
the role of cryptography in securing our data will only become more critical.
Quantum computers have the potential to break many of the current crypto-
graphic schemes that rely on the difficulty of certain mathematical problems [35].
Quantum cryptography aims to develop new cryptographic schemes that are resistant
to attacks by quantum computers [36]. It makes use of the principles of quantum mechanics to provide a high level of security, relying on quantum mechanical properties to protect information in transit.
In traditional cryptography, the security of the system relies on the complexity of
mathematical algorithms, while in quantum cryptography, the security relies on the
laws of physics. Specifically, quantum cryptography uses the principle of quantum
entanglement, which involves the correlation of quantum states between two particles.
The most widely known application of quantum cryptography is quantum key
distribution (QKD) [37]. QKD is a protocol that enables two parties to establish a
shared secret key that is completely secure against eavesdropping, even by an attacker
with unlimited computing power. QKD works by transmitting a series of quantum
states, or qubits, between two parties, typically named Alice and Bob. The qubits are
generated using a laser and a polarizer. Alice sends a random sequence of polarizations
to Bob, who measures the polarizations using his own set of polarizers. By comparing
the polarizations, Alice and Bob can detect the presence of an eavesdropper.
There are many challenges to overcome before quantum cryptography can be widely
adopted. One of the main challenges is the difficulty of building practical quantum
cryptography systems, which require precise control of the quantum states involved.
Additionally, there is a need for more research in quantum computing, as well as a need
for new protocols that can be used to secure communications in different contexts.
Secure multi-party computation (MPC) can be computationally expensive, especially when the number of parties and the complexity of the function
being computed increase. Despite these challenges, MPC is a powerful tool for
achieving secure collaboration and computation among multiple parties [44].
8. Conclusions
Author details
Monther Tarawneh
Computer Science Department, Isra University, Amman, Jordan
© 2023 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of
the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0),
which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.
References
[7] Menezes AJ, van Oorschot PC, Vanstone SA. Handbook of Applied Cryptography. 2021;1:1-810

[18] Smart NP. The Enigma machine. In: Cryptography Made Simple. 2016;64(2):133-161

[21] Smid ME. Development of the advanced encryption standard. Journal of Research of the National Institute of Standards and Technology. 2021;126:1-18

[22] Daemen J, Rijmen V. AES Proposal: Rijndael. National Institute of Standards and Technology; 1999

[32] Carts DA. A review of the Diffie-Hellman algorithm and its use in secure internet protocols. SANS Institute; 2001;751:1-7

[33] Medina R III. Systems and Methods for Digital Signature Detection. Google Patents; 2015