NETWORK SECURITY

Network Security is a branch of computer science that involves securing a computer network and
network infrastructure devices to prevent unauthorized access, data theft, network misuse, and
unauthorized modification of devices and data. Network Security also works to prevent DoS (Denial of
Service) attacks and to assure continuous service for legitimate network users. It involves proactive
defense methods and mechanisms to protect data, the network, and network devices from external and internal threats.

Data is the most precious asset of today's businesses. Top business organizations spend billions of
dollars every year to secure their computer networks and to keep their business data safe. Imagine the loss
of all the important research data on which a company has spent millions of dollars and years of
work!

We depend on computers today for controlling large money transfers between banks, and for insurance,
markets, telecommunication, electrical power distribution, health and medical services, nuclear power
plants, space research, and satellites. We cannot compromise on security in these critical areas.

Basic concepts of network security


Network devices, such as routers, firewalls, gateways, switches, hubs, and so forth, create the
infrastructure of local area networks (on the corporate scale) and the Internet (on the global scale).
Securing such devices is fundamental to protecting the environment and outgoing/incoming
communications.

Firewalls

A firewall is a hardware device or software application installed on the border of a secured network to
examine and control incoming and outgoing network communications. As the first line of network
defense, firewalls provide protection from outside attacks, but they have no control over attacks from
within the corporate network, and some firewalls also block traffic and services that are actually legitimate.

Router

A router is a networking device that forwards data packets between computer networks. Routers
perform the traffic directing functions on the Internet. A data packet is typically forwarded from
one router to another through the networks that constitute the internetwork until it reaches its
destination node.

A router is connected to two or more data lines from different networks. When a data packet
comes in on one of the lines, the router reads the address information in the packet to determine
its ultimate destination.
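
To make the forwarding decision concrete, here is a minimal sketch in Python of the core lookup a router performs: among all routing-table prefixes that contain the destination address, the most specific (longest) prefix wins. The table entries and interface names are invented for illustration.

# Longest-prefix-match lookup, using only the standard-library
# ipaddress module. The routing table here is illustrative.
import ipaddress

ROUTING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"),  "eth1"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth2"),
    (ipaddress.ip_network("0.0.0.0/0"),   "eth0"),  # default route
]

def next_hop(dest: str) -> str:
    addr = ipaddress.ip_address(dest)
    # Among all matching prefixes, pick the most specific (longest) one.
    matches = [(net, iface) for net, iface in ROUTING_TABLE if addr in net]
    net, iface = max(matches, key=lambda m: m[0].prefixlen)
    return iface

print(next_hop("10.1.2.3"))   # eth2 (the /16 beats the /8)
print(next_hop("8.8.8.8"))    # eth0 (only the default route matches)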
Gateway

A gateway is a network node connecting two networks that use different protocols. Gateways can
take several forms -- including routers or computers -- and can perform a variety of tasks. These
range from simply passing traffic on to the next hop on its path to offering complex
traffic filtering, proxies or protocol translations at various network layers.

Switch

A network switch is a multiport network bridge that uses hardware addresses to process and
forward data at the data link layer (layer 2) of the OSI model. Switches can also process data at
the network layer (layer 3) by additionally incorporating routing functionality that most
commonly uses IP addresses to perform packet forwarding; such switches are commonly known
as layer-3 switches or multilayer switches.
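
The learning behavior behind layer-2 forwarding can be sketched in a few lines of Python. This toy model (ports and MAC addresses are invented) records which port each source address was seen on and forwards frames for known destinations to a single port; unknown destinations are flooded to every other port, which is also what a hub does for every frame.

# Toy model of a learning layer-2 switch.
class Switch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}          # MAC address -> port number

    def receive(self, in_port: int, src_mac: str, dst_mac: str):
        self.mac_table[src_mac] = in_port      # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]   # forward to one port
        # Unknown destination: flood out every port except the ingress.
        return [p for p in range(self.num_ports) if p != in_port]

sw = Switch(4)
print(sw.receive(0, "aa:aa", "bb:bb"))  # [1, 2, 3] -- flooded
print(sw.receive(1, "bb:bb", "aa:aa"))  # [0]       -- learned earlier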

Hub

A hub is a common connection point for devices in a network. Hubs are commonly used to
connect segments of a LAN. A hub contains multiple ports. When a packet arrives at one port, it
is copied to the other ports so that all segments of the LAN can see all packets.

Hubs and switches both serve as a central connection point for your network equipment and handle
a data type known as frames, which carry your data. When a frame is received, it is amplified and
then transmitted: a hub repeats the frame out of every port, while a switch sends it only to the port
of the destination PC.

Establishing security perimeter for network protection


Your perimeter is the first layer of defense in your network, so it is important to take a step back
and review its design. To ensure a sound architecture, start with what ultimately must be protected
and then design your perimeter security so it can scale as your needs grow and change. Since the
threats you know about and face today may not be the ones you face tomorrow, be sure your design
is flexible enough to meet future needs.

Think of your network perimeter as a castle in medieval times, with multiple layers of defense: a
moat, high walls, a big gate, guards, and so on. Even in medieval times, people understood the
importance of having layers of security, and the concept is no different in information security
today. Here are three tips:

1. Build layers of security around your area


No defense is 100% effective. That's why defense-in-depth is so important when it comes to
building out your security. The traditional first line of defense against attacks is typically the
firewall, which is configured to allow or deny traffic by source/destination IP, port, or protocol. It's
very binary: traffic is either allowed or blocked by these variables. The evolution of these
network security devices has brought the next-generation firewall, which can include
application control, identity awareness, and other capabilities such as IPS, web filtering, and
advanced malware detection baked into one appliance.

Whether it's part of your firewall or a separate device, an IPS is another important perimeter
defense mechanism. Having your IPS properly optimized and monitored is a good way to catch
attackers that have slipped past the first castle defense (the firewall/router).

The move toward the cloud has brought cloud-based malware detection and DDoS mitigation
services. Unlike appliance-based solutions, these are cloud-based services that sit outside
your architecture and analyze traffic before it hits your network.

2. Harden your device configurations, software updates and security policies


Here is where we start building the walls that keep attackers out of the castle. The first line of
defense typically involves network security devices such as routers, firewalls, and load balancers,
each of which acts like the guards, gate, and moat of long ago.

For each layer of security, ensure the devices are running the most up-to-date software and
operating systems and are configured properly. A common misstep occurs when organizations
assume they are secure because of their many layers of defense; a single misconfigured device is
like handing an attacker a key to the castle. Another important practice is to tighten security
policies (without impacting the business, of course) so that, for example, you don't have a router
allowing ANY source to Telnet to it from outside your network. A hypothetical audit for exactly
that rule is sketched below.
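
As a toy illustration of that last point, the following hypothetical audit script scans a rule list for exactly that kind of overly permissive entry; the rule format is invented for the sketch.

# Flag firewall/router rules that allow ANY source to reach Telnet (port 23).
RULES = [
    {"action": "allow", "src": "10.0.0.0/8", "dport": 22,  "proto": "tcp"},
    {"action": "allow", "src": "any",        "dport": 23,  "proto": "tcp"},
    {"action": "allow", "src": "any",        "dport": 443, "proto": "tcp"},
]

def overly_permissive(rules):
    # Any-source access to the Telnet service is the pattern we warn about.
    return [r for r in rules
            if r["action"] == "allow" and r["src"] == "any" and r["dport"] == 23]

for rule in overly_permissive(RULES):
    print("tighten this rule:", rule)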

3. Enable secure network access

While firewalls, routers, and other security layers are in place to prevent unauthorized access,
they must also enable access that is approved. So how do we let authorized personnel into the castle?
The drawbridge, of course! Next-generation firewalls can help here by scanning inbound and
outbound user traffic while looking for patterns of suspicious behavior.

Another way to provide secure access through the perimeter is to install a VPN configured to
allow encrypted communication to your network from the outside. Using two-factor
authentication with the VPN helps ensure the integrity of the users making the request. The VPN
is external-facing and allows users to tunnel into your LAN from the outside once the
appropriate measures are taken to secure access.
Different types of networks

Different types of (private) networks are distinguished based on their size (in terms of the
number of machines), their data transfer speed, and their reach.
Private networks are networks that belong to a single organization.
There are usually said to be three categories of networks:

LAN (local area network)


MAN (metropolitan area network)
WAN (wide area network)

1. Local Area Network


LAN stands for Local Area Network.
It's a group of computers which all belong to the same organization and which are linked within
a small geographic area, often using the same technology (the most widespread being Ethernet).
A local area network is a network in its simplest form. Data transfer speeds over a local area
network range from 10 Mbps (classic Ethernet) through 100 Mbps (Fast Ethernet or FDDI) up to
1 Gbps (Gigabit Ethernet).
A local area network can reach as many as 100, or even 1000, users.
By expanding the definition of a LAN to the services that it provides, two different operating
modes can be defined:
In a "peer-to-peer" network, communication is carried out from one computer to
another without a central computer, and each computer has the same role. In a
"client/server" environment, a central computer provides network services to users.

2. Metropolitan Area Network


MANs (Metropolitan Area Networks) connect multiple geographically nearby LANs to one
another (over an area of up to a few dozen kilometres) at high speeds. Thus, a MAN lets two
remote nodes communicate as if they were part of the same local area network.
A MAN is made from switches or routers connected to one another with high-speed links
(usually fibre optic cables).
3. Wide Area Network
A WAN (Wide Area Network or extended network) connects multiple LANs to one another over
great geographic distances.
The speed available on a WAN varies depending on the cost of the connections (which increases
with distance) and may be low.
WANs operate using routers, which can "choose" the most appropriate path for data to take to
reach a network node.

FIREWALL
In computing, a firewall is a network security system that monitors and controls the incoming
and outgoing network traffic based on predetermined security rules. A firewall typically
establishes a barrier between a trusted, secure internal network and another outside network,
such as the Internet, that is assumed not to be secure or trusted. Firewalls are often categorized as
either network firewalls or host-based firewalls. Network firewalls filter traffic between two or
more networks; they are either software appliances running on general purpose hardware, or
hardware-based firewall computer appliances. Host-based firewalls provide a layer of software
on one host that controls network traffic in and out of that single machine.
First generation: packet filters

The first type of firewall was the packet filter, which looks at the network addresses and ports of a
packet and determines whether that packet should be allowed or blocked.[18] The first paper published
on firewall technology appeared in 1988, when engineers from Digital Equipment Corporation (DEC)
developed the filter systems known as packet filter firewalls.
Packet filters act by inspecting the "packets" which are transferred between computers on the
Internet. If a packet does not match the packet filter's set of filtering rules, the packet filter will
drop (silently discard) the packet or reject it (discard it, and send "error responses" to the source).
Conversely, if the packet matches one or more of the programmed filters, the packet is allowed to
pass. This type of packet filtering pays no attention to whether a packet is part of an existing
stream of traffic.
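
The stateless, rule-by-rule decision described above can be sketched as follows; the packet fields and rules are illustrative, and a real packet filter would of course operate on raw network traffic rather than Python objects.

# First-generation packet filtering: each packet is checked in isolation
# against an ordered rule list; the first match wins, unmatched packets drop.
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    proto: str
    dport: int

RULES = [
    # (predicate, verdict)
    (lambda p: p.proto == "tcp" and p.dport == 80, "allow"),
    (lambda p: p.proto == "tcp" and p.dport == 23, "reject"),  # discard, send error back
]

def filter_packet(pkt: Packet) -> str:
    for predicate, verdict in RULES:
        if predicate(pkt):
            return verdict
    return "drop"   # default policy: silently discard

print(filter_packet(Packet("1.2.3.4", "10.0.0.5", "tcp", 80)))  # allow
print(filter_packet(Packet("1.2.3.4", "10.0.0.5", "udp", 53)))  # drop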
Second generation: "stateful" filters

Second-generation firewalls perform the work of their first-generation predecessors but operate
up to layer 4 (the transport layer) of the OSI model. This is achieved by retaining packets until
enough information is available to make a judgment about the state of the connection they belong
to.[24] Known as stateful packet inspection, this approach records all connections passing through
the firewall and determines whether a packet is the start of a new connection, a part of an existing
connection, or not part of any connection.[25] Though static rules are still used, these rules can
now contain connection state as one of their test criteria.
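
A rough sketch of the stateful idea, with a deliberately simplified connection key: packets belonging to a recorded connection are admitted directly, and the static rules are consulted only for packets that would open a new connection.

# Stateful filtering: track established connections in a table.
ALLOWED_NEW = {("tcp", 443)}             # static rules for new connections
connections = set()                      # established connection table

def stateful_filter(src, dst, proto, dport, syn):
    key = (src, dst, proto, dport)
    if key in connections:
        return "allow"                   # part of an existing connection
    if syn and (proto, dport) in ALLOWED_NEW:
        connections.add(key)             # record the new connection
        return "allow"
    return "drop"                        # not part of any known connection

print(stateful_filter("10.0.0.5", "1.2.3.4", "tcp", 443, syn=True))   # allow
print(stateful_filter("10.0.0.5", "1.2.3.4", "tcp", 443, syn=False))  # allow
print(stateful_filter("10.0.0.5", "1.2.3.4", "tcp", 23,  syn=True))   # drop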
Third generation: application layer

Marcus Ranum, Wei Xu, and Peter Churchyard developed an application firewall known as the
Firewall Toolkit (FWTK). In June 1994, Wei Xu extended the FWTK with kernel enhancements
for IP filtering and socket transparency. The result, released as the commercial Gauntlet firewall
at Trusted Information Systems, is known as the first transparent application firewall. Gauntlet
was rated among the top firewalls during 1995-1998.
The key benefit of application layer filtering is that it can "understand" certain applications and
protocols (such as File Transfer Protocol (FTP), Domain Name System (DNS), or Hypertext
Transfer Protocol (HTTP)). This is useful as it is able to detect if an unwanted application or
service is attempting to bypass the firewall using a protocol on an allowed port, or detect if a
protocol is being abused in any harmful way.

Different topologies of firewall


Depending on your needs, you can have a very simple firewall setup that provides enough
protection for your personal computer or small network, or you can choose a more complicated
setup that provides more protection and security.
Let's have a look, starting from the simple solutions and then moving on to the more complicated
ones. Keep in mind that we are not talking about a firewall that is merely a piece of software
running on the same computer you use for your work, but about a dedicated physical computer
acting as a firewall.
1. A Simple Dual-Homed Firewall
The dual-homed firewall is one of the simplest and possibly most common ways to use a firewall.
The Internet comes into the firewall directly via a dial-up modem or through some other type of
connection, such as an ISDN line or cable modem.
The firewall passes packets that satisfy its filtering rules between the internal network and the
Internet, and vice versa; it may also use IP masquerading, and that's all it does. This is known as a
dual-homed host. The two "homes" refer to the two networks that the firewall machine is part of:
one interface connected to the outside home, and the other connected to the inside home.
2. A Two-Legged Network with a fully exposed DMZ
In this more advanced configuration, the router that connects to the outside world is connected to
a hub.

Machines that want direct access to the outside world, unfiltered by the firewall, connect to this
hub. One of the firewall's network adapters also connects to this hub. The other network adapter
connects to the internal hub. Machines that need to be protected by the firewall need to connect
to this hub. Any of these hubs could be replaced with switches for added security and speed, and
it would be more effective to use a switch for the internal hub.
3. The Three-legged firewall
In this setup, the DMZ gets its own additional network adapter in your firewall box. The
firewall is then configured to route packets between the outside world and the DMZ differently
than between the outside world and the internal network. This is a useful configuration, and I
have seen many of our customers using it.
Cryptography
Cryptography is the practice and study of techniques for secure communication in the
presence of third parties called adversaries. More generally, cryptography is about constructing
and analyzing protocols that prevent third parties or the public from reading private messages;
various aspects of information security, such as data confidentiality, data integrity, authentication,
and non-repudiation, are central to modern cryptography. Modern cryptography exists at the
intersection of the disciplines of mathematics, computer science, and electrical engineering.
Applications of cryptography include ATM cards, computer passwords, and electronic
commerce.
Modern cryptography is heavily based on mathematical theory and computer science practice;
cryptographic algorithms are designed around computational hardness assumptions, making such
algorithms hard to break in practice by any adversary. It is theoretically possible to break such a
system, but it is infeasible to do so by any known practical means. These schemes are therefore
termed computationally secure; theoretical advances, e.g., improvements in integer factorization
algorithms, and faster computing technology require these solutions to be continually adapted.

Modern cryptography
The modern field of cryptography can be divided into several areas of study. The chief ones are
discussed here; see Topics in Cryptography for more.
Symmetric-key cryptography
Symmetric-key cryptography refers to encryption methods in which both the sender and receiver
share the same key (or, less commonly, in which their keys are different but related in an easily
computable way). This was the only kind of encryption publicly known until June 1976.
Symmetric-key ciphers are implemented as either block ciphers or stream ciphers. A block cipher
enciphers input in blocks of plaintext, as opposed to the individual characters that form the input
of a stream cipher.
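
The stream-cipher idea can be illustrated with a toy XOR construction in Python. This is emphatically not secure (the "keystream" here is just a repeating key, whereas a real stream cipher derives a pseudorandom keystream from the key); it only shows the byte-by-byte operation that distinguishes stream ciphers from block ciphers.

# Toy stream cipher: XOR each plaintext byte with a keystream byte.
from itertools import cycle

def xor_stream(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

ct = xor_stream(b"attack at dawn", b"secret")
print(xor_stream(ct, b"secret"))  # the same operation decrypts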
The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are block
cipher designs that have been designated cryptography standards by the US government (though
DES's designation was finally withdrawn after the AES was adopted). Despite its deprecation as
an official standard, DES (especially its still-approved and much more secure triple-DES variant)
remains quite popular; it is used across a wide range of applications, from ATM encryption to e-
mail privacy and secure remote access. Many other block ciphers have been designed and
released, with considerable variation in quality. Many have been thoroughly broken, such
as FEAL.
Public-key cryptography

Symmetric-key cryptosystems use the same key for encryption and decryption of a message,
though a message or group of messages may have a different key than others. A significant
disadvantage of symmetric ciphers is the key management necessary to use them securely: each
distinct pair of communicating parties must, ideally, share a different key, and perhaps a different
one for each ciphertext exchanged as well. The number of keys required increases as the square of
the number of network members, which very quickly requires complex key management schemes
to keep them all consistent and secret. A further difficulty is that of securely establishing a secret
key between two communicating parties when a secure channel does not already exist between them.
In public-key cryptosystems, the public key may be freely distributed, while its paired private
key must remain secret. In a public-key encryption system, the public key is used for encryption,
while the private or secret key is used for decryption. While Diffie and Hellman could not find
such a system, they showed that public-key cryptography was indeed possible by presenting
the Diffie-Hellman key exchange protocol, a solution that is now widely used in secure
communications to allow two parties to secretly agree on a shared encryption key.
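
The mechanics of the Diffie-Hellman exchange fit in a few lines of Python; the numbers below are deliberately tiny toy values, whereas real deployments use primes of 2048 bits or more.

# Toy Diffie-Hellman exchange with deliberately small public parameters.
p = 23          # public prime modulus (toy-sized)
g = 5           # public generator

a = 6           # Alice's private value
b = 15          # Bob's private value

A = pow(g, a, p)              # Alice sends g^a mod p
B = pow(g, b, p)              # Bob sends g^b mod p

alice_shared = pow(B, a, p)   # (g^b)^a mod p
bob_shared   = pow(A, b, p)   # (g^a)^b mod p

assert alice_shared == bob_shared
print("shared secret:", alice_shared)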
Cryptanalysis

The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus
permitting its subversion or evasion.
It is a common misconception that every encryption method can be broken. In connection with
his WWII work at Bell Labs, Claude Shannon proved that the one-time pad cipher is
unbreakable, provided the key material is truly random, never reused, kept secret from all
possible attackers, and of equal or greater length than the message.[39] Most ciphers, apart from
the one-time pad, can be broken with enough computational effort by brute force attack, but the
amount of effort needed may be exponentially dependent on the key size, as compared to the
effort needed to make use of the cipher. In such cases, effective security could be achieved if it is
proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of
any adversary. This means it must be shown that no efficient method (as opposed to the time-
consuming brute force method) can be found to break the cipher. Since no such proof has been
found to date, the one-time-pad remains the only theoretically unbreakable cipher.
Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block
ciphers or stream ciphers that are more efficient than any attack that could work against a perfect
cipher. For example, a simple brute-force attack against DES requires one known plaintext and
2^55 decryptions, trying approximately half of the possible keys, to reach a point at which the
chances are better than even that the key sought will have been found.
Cryptosystems

One or more cryptographic primitives are often used to develop a more complex algorithm,
called a cryptographic system, or cryptosystem. Cryptosystems (e.g., El-Gamal encryption) are
designed to provide particular functionality (e.g., public key encryption) while guaranteeing
certain security properties (e.g., chosen-plaintext attack (CPA) security in the random oracle
model). Cryptosystems use the properties of the underlying cryptographic primitives to support
the system's security properties. Of course, as the distinction between primitives and
cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a
combination of several more primitive cryptosystems. In many cases, the cryptosystem's
structure involves back and forth communication among two or more parties in space (e.g.,
between the sender of a secure message and its receiver) or across time (e.g., cryptographically
protected backup data). Such cryptosystems are sometimes called cryptographic protocols.

Digital signature
A digital signature is a mathematical scheme for demonstrating the authenticity of a digital
message or document. A valid digital signature gives a recipient reason to believe that the
message was created by a known sender (authentication), that the sender cannot deny having sent
the message (non-repudiation), and that the message was not altered in transit.
Digital signatures are a standard element of most cryptographic protocol suites, and are
commonly used for software distribution, financial transactions, contract management software,
and in other cases where it is important to detect forgery or tampering.
Digital signatures are often used to implement electronic signatures, a broader term that refers to
any electronic data that carries the intent of a signature, but not all electronic signatures use
digital signatures. In some countries, including the United States, India, Brazil, Indonesia, Saudi
Arabia, Switzerland and the countries of the European Union, electronic signatures have legal
significance.
Digital signatures employ asymmetric cryptography. In many instances they provide a layer of
validation and security to messages sent through a nonsecure channel: properly implemented, a
digital signature gives the receiver reason to believe the message was sent by the claimed sender.
Digital seals and signatures are equivalent to handwritten signatures and stamped seals in many
respects, but properly implemented digital signatures are more difficult to forge than the
handwritten type.
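
As a concrete sketch, here is a sign-and-verify round trip using the third-party Python "cryptography" package (one reasonable choice among several); it also shows that a tampered message fails verification.

# Sign a message with an RSA private key and verify with the public key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"wire $1,000 to account 42"

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

public_key = private_key.public_key()
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")

# Any tampering invalidates the signature:
try:
    public_key.verify(signature, b"wire $9,000 to account 666", pss, hashes.SHA256())
except InvalidSignature:
    print("tampered message rejected")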
Applications of digital signatures
As organizations move away from paper documents with ink signatures or authenticity stamps,
digital signatures can provide added assurance of the provenance, identity, and status of an
electronic document, as well as acknowledgment of informed consent and approval by a
signatory. The United States Government Printing Office (GPO) publishes electronic versions of
the budget, public and private laws, and congressional bills with digital signatures. Universities
including Penn State, University of Chicago, and Stanford are publishing electronic student
transcripts with digital signatures.
Below are some common reasons for applying a digital signature to communications:
Authentication
Although messages may often include information about the entity sending a message, that
information may not be accurate. Digital signatures can be used to authenticate the source of
messages. When ownership of a digital signature secret key is bound to a specific user, a valid
signature shows that the message was sent by that user. The importance of high confidence in
sender authenticity is especially obvious in a financial context. For example, suppose a bank's
branch office sends instructions to the central office requesting a change in the balance of an
account. If the central office is not convinced that such a message is truly sent from an
authorized source, acting on such a request could be a grave mistake.
Integrity
In many scenarios, the sender and receiver of a message may have a need for confidence that the
message has not been altered during transmission. Although encryption hides the contents of a
message, it may be possible to change an encrypted message without understanding it. (Some
encryption algorithms, known as nonmalleable ones, prevent this, but others do not.) However, if
a message is digitally signed, any change in the message after signature invalidates the signature.
Furthermore, there is no efficient way to modify a message and its signature to produce a new
message with a valid signature, because this is still considered computationally infeasible for most
cryptographic hash functions (see collision resistance).
Non-repudiation
Non-repudiation, or more specifically non-repudiation of origin, is an important aspect of digital
signatures. By this property, an entity that has signed some information cannot at a later time
deny having signed it. Similarly, access to the public key only does not enable a fraudulent party
to fake a valid signature.

Cryptographic Algorithms
1. Advanced Encryption Standard (AES)
The Advanced Encryption Standard, or AES, is a symmetric block cipher used by the U.S.
government to protect classified information and is implemented in software and hardware
throughout the world to encrypt sensitive data.
The origins of AES date back to 1997, when the National Institute of Standards and Technology
(NIST) announced that it needed a successor to the aging Data Encryption Standard (DES),
which was becoming vulnerable to brute-force attacks.
This new encryption algorithm would be unclassified and had to be "capable of protecting
sensitive government information well into the next century." It was to be easy to implement in
hardware and software, as well as in restricted environments (for example, in a smart card) and
offer good defenses against various attack techniques.
Choosing AES
The selection process to find this new encryption algorithm was fully open to public scrutiny and
comment; this ensured a thorough, transparent analysis of the designs. Fifteen competing designs
were subject to preliminary analysis by the world cryptographic community, including the
National Security Agency (NSA). In August 1999, NIST selected five algorithms for more
extensive analysis. These were:

MARS, submitted by a large team from IBM Research


RC6, submitted by RSA Security
Rijndael, submitted by two Belgian cryptographers, Joan Daemen and Vincent Rijmen
Serpent, submitted by Ross Anderson, Eli Biham and Lars Knudsen
Twofish, submitted by a large team of researchers including Counterpane's respected
cryptographer, Bruce Schneier
Implementations of all of the above were tested extensively in the ANSI C and Java languages for
speed and reliability in encryption and decryption, key and algorithm setup time, and resistance
to various attacks, in both hardware- and software-centric systems. Members of the global
cryptographic community conducted detailed analyses (including some teams that tried to break
their own submissions).
After much feedback, debate and analysis, the Rijndael cipher -- a blend of the names of its
Belgian creators, Joan Daemen and Vincent Rijmen -- was selected as the proposed algorithm for
AES in October 2000 and was published by NIST as U.S. FIPS PUB 197. The Advanced
Encryption Standard became effective as a federal government standard in 2002. It is also
included in the ISO/IEC 18033-3 standard which specifies block ciphers for the purpose of data
confidentiality.
In June 2003, the U.S. government announced that AES could be used to protect classified
information, and it soon became the default encryption algorithm for protecting classified
information as well as the first publicly accessible and open cipher approved by the NSA for top-
secret information. AES is one of the Suite B cryptographic algorithms used by NSA's
Information Assurance Directorate in technology approved for protecting national security
systems.
Its successful use by the U.S. government led to widespread use in the private sector, leading
AES to become the most popular algorithm used in symmetric key cryptography. The transparent
selection process helped create a high level of confidence in AES among security and
cryptography experts. AES is more secure than its predecessors -- DES and 3DES -- as the
algorithm is stronger and uses longer key lengths. It also enables faster encryption than DES and
3DES, making it ideal for software applications, firmware and hardware that require either low-
latency or high throughput, such as firewalls and routers. It is used in many protocols such as
SSL/TLS and can be found in most modern applications and devices that need encryption
functionality.

How AES encryption works


AES comprises three block ciphers: AES-128, AES-192 and AES-256. Each cipher encrypts and
decrypts data in blocks of 128 bits using cryptographic keys of 128, 192 and 256 bits,
respectively. (Rijndael was designed to handle additional block sizes and key lengths, but the
functionality was not adopted in AES.) Symmetric or secret-key ciphers use the same key for
encrypting and decrypting, so both the sender and the receiver must know and use the same
secret key. All key lengths are deemed sufficient to protect classified information up to the
"Secret" level, with "Top Secret" information requiring either 192- or 256-bit keys. There
are 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys; a
round consists of several processing steps that include substitution, transposition and mixing of
the input plaintext, transforming it into the final output of ciphertext.
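
A minimal AES round trip, again using the third-party Python "cryptography" package, might look as follows. CTR mode is chosen here simply to avoid padding details; it is one option among several.

# AES-256 encrypt/decrypt round trip in CTR mode.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)      # 256-bit key -> 14 rounds internally
nonce = os.urandom(16)    # per-message nonce; never reuse with the same key

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(b"attack at dawn") + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
print(decryptor.update(ciphertext) + decryptor.finalize())  # b'attack at dawn'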
As a cipher, AES has proven reliable. The only successful attacks against it have been side-
channel attacks on weaknesses found in the implementation or key management of certain AES-
based encryption products. (Side-channel attacks don't use brute force or theoretical weaknesses
to break a cipher, but rather exploit flaws in the way it has been implemented.) The BEAST
browser exploit against the TLS v1.0 protocol is a good example; TLS can use AES to encrypt
data, but due to the information that TLS exposes, attackers managed to predict the initialization
vector block used at the start of the encryption process.
Various researchers have published attacks against reduced-round versions of the Advanced
Encryption Standard, and a research paper published in 2011 demonstrated that using a technique
called a biclique attack could recover AES keys faster than a brute-force attack by a factor of
between three and five, depending on the cipher version. Even this attack, though, does not
threaten the practical use of AES due to its high computational complexity.
2. RSA Algorithm
RSA is one of the first practical public-key cryptosystems and is widely used for secure data
transmission. In such a cryptosystem, the encryption key is public and differs from the
decryption key, which is kept secret. In RSA, this asymmetry is based on the practical difficulty
of factoring the product of two large prime numbers, the factoring problem. The name RSA comes
from the initial letters of the surnames of Ron Rivest, Adi Shamir, and Leonard Adleman, who
first publicly described the algorithm in 1977. Clifford Cocks, an English mathematician working
for the UK intelligence agency GCHQ, had developed an equivalent system in 1973, but it was not
declassified until 1997.
A user of RSA creates and then publishes a public key based on two large prime numbers, along
with an auxiliary value. The prime numbers must be kept secret. Anyone can use the public key
to encrypt a message, but with currently published methods, if the public key is large enough,
only someone with knowledge of the prime numbers can feasibly decode the message.[2]
Breaking RSA encryption is known as the RSA problem; whether it is as hard as the factoring
problem remains an open question.
RSA is a relatively slow algorithm, and because of this it is less commonly used to directly
encrypt user data. More often, RSA passes encrypted shared keys for symmetric key
cryptography which in turn can perform bulk encryption-decryption operations at much higher
speed.
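
That hybrid pattern can be sketched with the third-party Python "cryptography" package: RSA (with OAEP padding) protects a small random symmetric key, and the symmetric scheme (Fernet, an AES-based construction, is used here for convenience) encrypts the bulk data.

# Hybrid encryption: RSA wraps the symmetric key, the symmetric key
# encrypts the bulk data.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.fernet import Fernet

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: encrypt the bulk data with a fresh symmetric key...
sym_key = Fernet.generate_key()
bulk_ciphertext = Fernet(sym_key).encrypt(b"a large document" * 1000)

# ...and encrypt only that small key with the recipient's RSA public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = recipient_key.public_key().encrypt(sym_key, oaep)

# Recipient: unwrap the symmetric key, then decrypt the bulk data.
recovered_key = recipient_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(bulk_ciphertext)
assert plaintext.startswith(b"a large document")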
The idea of an asymmetric public-private key cryptosystem is attributed to Whitfield Diffie and
Martin Hellman, who published the concept in 1976. They also introduced digital signatures and
attempted to apply number theory; their formulation used a shared secret key created from
exponentiation of some number, modulo a prime number. However, they left open the problem of
realizing a one-way function, possibly because the difficulty of factoring was not well studied at
the time.
Explaining RSA's popularity
RSA derives its security from the difficulty of factoring large integers that are the product of two
large prime numbers. Multiplying these two numbers is easy, but determining the original primes
from the product -- factoring -- is considered infeasible due to the time it would take even
using today's supercomputers.
The public and private key-generation algorithm is the most complex part of RSA
cryptography. Two large prime numbers, p and q, are generated using the Rabin-Miller primality
test. A modulus, n, is calculated by multiplying p and q. This number is used by both the
public and private keys and provides the link between them; its length, usually expressed in bits,
is called the key length. The public key consists of the modulus n and a public exponent, e,
which is normally set to 65537, a prime number that is not too large. The value e does not
have to be secret, since the public key is shared with everyone.
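
The relationships among p, q, n, e and d can be made concrete with toy-sized numbers in Python (pow(e, -1, phi) computes the modular inverse and needs Python 3.8 or later); real keys use random primes hundreds of digits long.

# Toy RSA key generation and encrypt/decrypt round trip.
p, q = 61, 53                 # two (toy-sized) primes, kept secret
n = p * q                     # modulus, part of both keys
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent (65537 in practice)
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

m = 65                        # message, encoded as an integer < n
c = pow(m, e, n)              # encrypt with the public key (n, e)
assert pow(c, d, n) == m      # decrypt with the private key (n, d)
print(f"n={n}, e={e}, d={d}, ciphertext={c}")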
Security of RSA
As discussed, the security of RSA relies on the computational difficulty of factoring large
integers. As computing power increases and more efficient factoring algorithms are discovered,
the ability to factor larger and larger numbers also increases. Encryption strength is directly tied
to key size, and doubling key length delivers an exponential increase in strength, although it does
impair performance. RSA keys are typically 1024- or 2048-bits long, but experts believe that
1024-bit keys could be broken in the near future, which is why government and industry are
moving to a minimum key length of 2048-bits. Barring an unforeseen breakthrough in quantum
computing, it should be many years before longer keys are required, but elliptic curve
cryptography is gaining favor with many security experts as an alternative to RSA for
implementing public-key cryptography. It can create faster, smaller and more efficient
cryptographic keys. Much of today's hardware and software is ECC-ready, and its popularity is
likely to grow, as ECC can deliver equivalent security with lower computing power and battery
usage, making it more suitable for mobile apps than RSA. Finally, a team of researchers that
included Adi Shamir, a co-inventor of RSA, successfully extracted a 4096-bit RSA key using
acoustic cryptanalysis; however, any encryption implementation is vulnerable to this type of
attack.
An algorithm that uses different keys for encryption and decryption is said to be asymmetric.
Anybody knowing the public key can use it to create encrypted messages, but only the owner of
the secret key can decrypt them. Conversely, the owner of the secret key can encrypt messages
that can be decrypted by anybody with the public key. Anybody successfully decrypting such
messages can be sure that only the owner of the secret key could have encrypted them. This fact
is the basis of the digital signature technique.
Without going into detail about how e, d and N are related, d can be deduced from e and N if the
factors of N can be determined. Therefore, the security of RSA depends on the difficulty of
factorizing N. Because factorization is believed to be a hard problem, the longer N is, the more
secure the cryptosystem. Given the power of modern computers, a length of 768 bits is no longer
considered safe; for serious commercial use, at least 1024 bits is recommended, and, as noted
above, government and industry are moving toward 2048 bits.
The problem with choosing long keys is that RSA is very slow compared with a symmetric block
cipher such as DES, and the longer the key, the slower it is. The best solution is to use RSA for
digital signatures and for protecting symmetric keys, and to perform bulk data encryption with
the symmetric cipher.
