Basic Concepts of Network Security
Network Security is a branch of computer science concerned with securing a computer network and
its infrastructure devices to prevent unauthorized access, data theft, network misuse, and modification
of devices and data. Network Security also aims to prevent DoS (Denial of Service) attacks and to
assure continuous service for legitimate network users. It involves proactive defence methods and
mechanisms to protect data, the network, and network devices from external and internal threats.
Data is the most precious asset of today's businesses. Top business organizations spend billions of
dollars every year to secure their computer networks and keep their business data safe. Imagine the loss
of all the important research data on which a company has invested millions of dollars and years of
work!
We depend on computers today to control large money transfers between banks, insurance,
markets, telecommunications, electrical power distribution, health and medical fields, nuclear power
plants, space research, and satellites. We cannot compromise on security in these critical areas.
Firewalls
A firewall is a hardware device or software application installed on the borderline of secured networks to
examine and control incoming and outgoing network communications. As the first line of network
defense, firewalls provide protection from outside attacks, but they have no control over attacks from
within the corporate network. Some firewalls also block traffic and services that are actually legitimate.
Router
A router is a networking device that forwards data packets between computer networks. Routers
perform the traffic directing functions on the Internet. A data packet is typically forwarded from
one router to another through the networks that constitute the internetwork until it reaches its
destination node.
A router is connected to two or more data lines from different networks. When a data packet
comes in on one of the lines, the router reads the address information in the packet to determine
the ultimate destination.
Gateway
A gateway is a network node connecting two networks that use different protocols. Gateways can
take several forms -- including routers or computers -- and can perform a variety of tasks. These
range from simply passing traffic on to the next hop on its path to offering complex
traffic filtering, proxies or protocol translations at various network layers.
Switch
A network switch is a multiport network bridge that uses hardware addresses to process and
forward data at the data link layer (layer 2) of the OSI model. Switches can also process data at
the network layer (layer 3) by additionally incorporating routing functionality that most
commonly uses IP addresses to perform packet forwarding; such switches are commonly known
as layer-3 switches or multilayer switches.
Hub
A hub is a common connection point for devices in a network. Hubs are commonly used to
connect segments of a LAN. A hub contains multiple ports. When a packet arrives at one port, it
is copied to the other ports so that all segments of the LAN can see all packets.
Hubs and switches serve as a central connection point for all of your network equipment and handle
a data type known as frames. Frames carry your data. When a hub receives a frame, it amplifies it
and transmits it out of every other port; a switch instead forwards the frame only to the port of the
destination PC.
Think of your network perimeter as a castle in medieval times, with multiple layers
of defense: a moat, high walls, a big gate, guards, and so on. Even in medieval times, people understood
the importance of having layers of security, and the concept is no different today in information
security. Here are three tips:
Whether it's part of your firewall or a separate device, an IPS (intrusion prevention system) is another
important perimeter defense mechanism. Having your IPS properly optimized and monitored is a good
way to catch attackers who have slipped past the first castle defense (the firewall/router).
The move toward the cloud has brought cloud-based malware detection and DDoS mitigation
services. Unlike appliance-based solutions, these are cloud-based services that sit outside
your architecture and analyze traffic before it hits your network.
For each layer of security, you want to ensure devices are running the most up-to-date software and
operating systems, and that they are configured properly. A common misstep occurs when
organizations assume they are secure because of their many layers of defense, but a
misconfigured device is like giving an attacker a key to the castle. Another important practice is
to tighten security policies (without impacting the business, of course), so that, for example, you
don't have a router allowing ANY source to Telnet to it from outside your network.
While firewalls, routers and other security layers are in place to prevent unauthorized access,
they also enable access that is approved. So how do we let authorized personnel into the castle?
The drawbridge of course! Next-generation firewalls can help here by scanning inbound and
outbound user traffic, all while looking for patterns of suspicious behavior.
Another way to have secure access from the outside through the perimeter is to install a VPN that
is configured to allow encrypted communication to your network from the outside. Utilizing two-
factor authentication with a VPN contributes towards ensuring the integrity of the users making
the request. This is external-facing to your network and allows users to tunnel into your LAN
from the outside once the appropriate measures are taken to secure access.
Different types of networks
Different types of (private) networks are distinguished based on their size (in terms of the
number of machines), their data transfer speed, and their reach.
Private networks are networks that belong to a single organization.
There are usually said to be three categories of networks: LANs (local area networks), MANs
(metropolitan area networks), and WANs (wide area networks).
FIREWALL
In computing, a firewall is a network security system that monitors and controls the incoming
and outgoing network traffic based on predetermined security rules. A firewall typically
establishes a barrier between a trusted, secure internal network and another outside network,
such as the Internet, that is assumed not to be secure or trusted. Firewalls are often categorized as
either network firewalls or host-based firewalls. Network firewalls filter traffic between two or
more networks; they are either software appliances running on general purpose hardware, or
hardware-based firewall computer appliances. Host-based firewalls provide a layer of software
on one host that controls network traffic in and out of that single machine.
First generation: packet filters
The first type of firewall was the packet filter which looks at network addresses and ports of the
packet and determines if that packet should be allowed or blocked.[18] The first paper published
on firewall technology was in 1988, when engineers from Digital Equipment Corporation (DEC)
developed filter systems known as packet filter firewalls.
Packet filters act by inspecting the "packets" which are transferred between computers on the
Internet. If a packet does not match the packet filter's set of filtering rules, the packet filter will
drop (silently discard) the packet or reject it (discard it, and send "error responses" to the source).
Conversely, if the packet matches one or more of the programmed filters, the packet is allowed to
pass. This type of packet filtering pays no attention to whether a packet is part of an existing
stream of traffic.
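As a rough illustration, a stateless packet filter can be sketched as a first-match rule table; the addresses, ports, and rule set below are hypothetical, not drawn from any real firewall:

```python
# Hypothetical sketch of stateless packet filtering: each rule matches on
# address and port fields only, with no notion of connection state.
RULES = [
    # (src_ip, dst_ip, dst_port, action); "*" is a wildcard
    ("*", "10.0.0.5", 80, "allow"),   # web traffic to the server
    ("*", "10.0.0.5", 22, "deny"),    # block SSH from outside
]

def filter_packet(src_ip, dst_ip, dst_port, default="deny"):
    """Return the action of the first matching rule, else the default."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if rule_src in ("*", src_ip) and rule_dst in ("*", dst_ip) \
                and rule_port in ("*", dst_port):
            return action
    return default  # default-deny: unmatched packets are dropped

print(filter_packet("8.8.8.8", "10.0.0.5", 80))  # allow
print(filter_packet("8.8.8.8", "10.0.0.5", 22))  # deny
```

A default-deny policy, as sketched here, is the usual practice: anything not explicitly permitted is silently discarded.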
Second generation: "stateful" filters
Second-generation firewalls perform the work of their first-generation predecessors but operate
up to layer 4 (the transport layer) of the OSI model. This is achieved by retaining packets until
enough information is available to make a judgment about the state of the connection.[24] Known
as stateful packet inspection, this approach records all connections passing through the firewall
and determines whether a packet is the start of a new connection, part of an existing connection,
or not part of any connection.[25] Though static rules are still used, these rules can now include
connection state as one of their test criteria.
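The connection-tracking idea can be sketched in a few lines; the table and classification names below are a simplification for illustration, not any particular vendor's implementation:

```python
# Hypothetical sketch of stateful inspection: a connection table lets the
# filter classify each packet as new, established, or unrelated.
connections = set()  # (src, dst, dst_port) tuples for tracked connections

def inspect(src, dst, dst_port, syn=False):
    key = (src, dst, dst_port)
    if key in connections:
        return "established"   # part of an existing connection
    if syn:                    # a TCP SYN opens a new connection
        connections.add(key)
        return "new"
    return "unrelated"         # not part of any tracked connection

print(inspect("1.2.3.4", "10.0.0.5", 80, syn=True))  # new
print(inspect("1.2.3.4", "10.0.0.5", 80))            # established
print(inspect("6.7.8.9", "10.0.0.5", 80))            # unrelated
```

A rule such as "allow inbound packets only for established connections" then becomes a simple test against this table.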
Third generation: application layer
Marcus Ranum, Wei Xu, and Peter Churchyard developed an application firewall known as the
Firewall Toolkit (FWTK). In June 1994, Wei Xu extended the FWTK with a kernel enhancement
providing IP filtering and socket transparency. This became known as the first transparent
application firewall, released as a commercial product under the name Gauntlet firewall at Trusted
Information Systems. The Gauntlet firewall was rated among the top firewalls from 1995 to 1998.
The key benefit of application layer filtering is that it can "understand" certain applications and
protocols (such as File Transfer Protocol (FTP), Domain Name System (DNS), or Hypertext
Transfer Protocol (HTTP)). This is useful as it is able to detect if an unwanted application or
service is attempting to bypass the firewall using a protocol on an allowed port, or detect if a
protocol is being abused in any harmful way.
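As a minimal illustration of the idea, an application-layer filter might check that traffic on an allowed HTTP port actually begins like an HTTP request; the check below is deliberately naive and hypothetical:

```python
# Hypothetical sketch of application-layer inspection: even if port 80 is
# allowed, the payload must actually look like HTTP to be passed through.
HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ")

def looks_like_http(payload: bytes) -> bool:
    # A real application firewall parses the protocol fully; checking the
    # request line's method is only a first rough test.
    return payload.startswith(HTTP_METHODS)

print(looks_like_http(b"GET /index.html HTTP/1.1\r\n"))  # True
print(looks_like_http(b"\x16\x03\x01..."))               # False: not HTTP
```

Traffic that fails such a check, for example a tunnelled protocol masquerading on port 80, can then be blocked even though the port itself is open.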
Machines that want direct access to the outside world, unfiltered by the firewall, connect to this
hub. One of the firewall's network adapters also connects to this hub. The other network adapter
connects to the internal hub. Machines that need to be protected by the firewall need to connect
to this hub. Any of these hubs could be replaced with switches for added security and speed, and
it would be more effective to use a switch for the internal hub.
3. The Three-legged firewall
This means you need an additional network adapter in your firewall box for your DMZ. The
firewall is then configured to route packets between the outside world and the DMZ differently
than between the outside world and the internal network. This is a useful configuration, and I
have seen many of our customers using it.
Cryptography
Cryptography is the practice and study of techniques for secure communication in the
presence of third parties, called adversaries. More generally, cryptography is about constructing
and analyzing protocols that prevent third parties or the public from reading private messages.
Various aspects of information security, such as data confidentiality, data integrity, authentication,
and non-repudiation, are central to modern cryptography, which exists at the intersection of
mathematics, computer science, and electrical engineering. Applications of cryptography include
ATM cards, computer passwords, and electronic commerce.
Modern cryptography is heavily based on mathematical theory and computer science practice;
cryptographic algorithms are designed around computational hardness assumptions, making such
algorithms hard to break in practice by any adversary. It is theoretically possible to break such a
system, but it is infeasible to do so by any known practical means. These schemes are therefore
termed computationally secure; theoretical advances, e.g., improvements in integer factorization
algorithms, and faster computing technology require these solutions to be continually adapted.
Modern cryptography
The modern field of cryptography can be divided into several areas of study. The chief ones are
discussed here; see Topics in Cryptography for more.
Symmetric-key cryptography
Symmetric-key cryptography refers to encryption methods in which both the sender and receiver
share the same key (or, less commonly, in which their keys are different, but related in an easily
computable way). This was the only kind of encryption publicly known until June 1976.
Symmetric key ciphers are implemented as either block ciphers or stream ciphers. A block cipher
enciphers input in blocks of plaintext as opposed to individual characters, the input form used by
a stream cipher.
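The difference can be sketched with toy (insecure) transformations: below, XOR against a keystream stands in for a real stream cipher, and PKCS#7-style padding prepares input into the fixed-size blocks a block cipher would encipher as units. Both helpers are illustrative assumptions, not real ciphers:

```python
import itertools

def stream_encipher(plaintext: bytes, keystream) -> bytes:
    # A stream cipher produces one output byte per input byte.
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

def to_blocks(plaintext: bytes, block_size: int = 16) -> list:
    # A block cipher would encipher each of these fixed-size blocks as a unit,
    # so the input is first padded to a whole number of blocks (PKCS#7-style).
    pad = block_size - len(plaintext) % block_size
    padded = plaintext + bytes([pad]) * pad
    return [padded[i:i + block_size] for i in range(0, len(padded), block_size)]

ct = stream_encipher(b"hello", itertools.cycle(b"key"))
print(len(ct))                   # 5: same length as the input
print(len(to_blocks(b"hello")))  # 1: a single padded 16-byte block
```

Note that enciphering the ciphertext again with the same keystream recovers the plaintext, since XOR is its own inverse.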
The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are block
cipher designs that have been designated cryptography standards by the US government (though
DES's designation was finally withdrawn after the AES was adopted). Despite its deprecation as
an official standard, DES (especially its still-approved and much more secure triple-DES variant)
remains quite popular; it is used across a wide range of applications, from ATM encryption to e-
mail privacy and secure remote access. Many other block ciphers have been designed and
released, with considerable variation in quality. Many have been thoroughly broken, such
as FEAL.
Public-key cryptography
Symmetric-key cryptosystems use the same key for encryption and decryption of a message,
though a message or group of messages may use a different key than others. A significant
disadvantage of symmetric ciphers is the key management necessary to use them securely. Each
distinct pair of communicating parties must, ideally, share a different key, and perhaps a different
key for each ciphertext exchanged as well. The number of keys required increases as the square of
the number of network members, which very quickly requires complex key management schemes
to keep them all consistent and secret. A further difficulty is securely establishing a secret key
between two communicating parties when a secure channel does not already exist between them.
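The quadratic growth is easy to check: n parties need one shared key per pair, i.e. n(n-1)/2 keys in total:

```python
# Each pair of parties needs its own shared secret, so the number of keys
# grows quadratically with the size of the group.
def pairwise_keys(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_keys(n))  # 10 -> 45, 100 -> 4950, 1000 -> 499500
```

Public-key cryptography, discussed next, avoids this blow-up: each party needs only one key pair regardless of group size.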
In public-key cryptosystems, the public key may be freely distributed, while its paired private
key must remain secret. In a public-key encryption system, the public key is used for encryption,
while the private or secret key is used for decryption. While Diffie and Hellman could not find
such a system, they showed that public-key cryptography was indeed possible by presenting
the Diffie-Hellman key exchange protocol, a solution that is now widely used in secure
communications to allow two parties to secretly agree on a shared encryption key.
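The mechanics of the exchange can be sketched with a toy (insecure) prime; real deployments use parameters of 2048 bits or more:

```python
# Toy Diffie-Hellman exchange with a deliberately tiny prime, to show the
# mechanics only; these parameters offer no real security.
import secrets

p, g = 23, 5                      # public parameters (toy values)
a = secrets.randbelow(p - 2) + 1  # Alice's private exponent
b = secrets.randbelow(p - 2) + 1  # Bob's private exponent

A = pow(g, a, p)  # Alice sends A to Bob over the open channel
B = pow(g, b, p)  # Bob sends B to Alice over the open channel

shared_alice = pow(B, a, p)  # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)    # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob  # both arrive at the same secret
```

Only A and B travel over the channel; an eavesdropper who sees them cannot feasibly recover the shared secret when the parameters are of realistic size.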
Cryptanalysis
The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus
permitting its subversion or evasion.
It is a common misconception that every encryption method can be broken. In connection with
his WWII work at Bell Labs, Claude Shannon proved that the one-time pad cipher is
unbreakable, provided the key material is truly random, never reused, kept secret from all
possible attackers, and of equal or greater length than the message.[39] Most ciphers, apart from
the one-time pad, can be broken with enough computational effort by brute force attack, but the
amount of effort needed may be exponentially dependent on the key size, as compared to the
effort needed to make use of the cipher. In such cases, effective security could be achieved if it is
proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of
any adversary. This means it must be shown that no efficient method (as opposed to the time-
consuming brute force method) can be found to break the cipher. Since no such proof has been
found to date, the one-time pad remains the only theoretically unbreakable cipher.
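The one-time pad itself is short enough to sketch; as the paragraph above notes, its security depends entirely on the key being truly random, kept secret, and never reused:

```python
# One-time pad sketch: XOR the message with a truly random key of equal
# length. Reusing or leaking the key destroys the security guarantee.
import secrets

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # random key, same length as message

ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == message  # XOR with the same key inverts the encryption
```

Because every possible plaintext of the same length is equally consistent with a given ciphertext, no amount of computation helps an attacker who lacks the key.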
Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against block
ciphers or stream ciphers that are more efficient than any attack that could be mounted against a
perfect cipher. For example, a simple brute-force attack against DES requires one known plaintext
and 2^55 decryptions, trying approximately half of the possible keys, to reach a point at which the
chances are better than even that the key sought will have been found.
Cryptosystems
One or more cryptographic primitives are often used to develop a more complex algorithm,
called a cryptographic system, or cryptosystem. Cryptosystems (e.g., El-Gamal encryption) are
designed to provide particular functionality (e.g., public key encryption) while guaranteeing
certain security properties (e.g., chosen-plaintext attack (CPA) security in the random oracle
model). Cryptosystems use the properties of the underlying cryptographic primitives to support
the system's security properties. Of course, as the distinction between primitives and
cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a
combination of several more primitive cryptosystems. In many cases, the cryptosystem's
structure involves back and forth communication among two or more parties in space (e.g.,
between the sender of a secure message and its receiver) or across time (e.g., cryptographically
protected backup data). Such cryptosystems are sometimes called cryptographic protocols.
Digital signature
A digital signature is a mathematical scheme for demonstrating the authenticity of a digital
message or document. A valid digital signature gives a recipient reason to believe that the
message was created by a known sender (authentication), that the sender cannot deny having sent
the message (non-repudiation), and that the message was not altered in transit.
Digital signatures are a standard element of most cryptographic protocol suites, and are
commonly used for software distribution, financial transactions, contract management software,
and in other cases where it is important to detect forgery or tampering.
Digital signatures are often used to implement electronic signatures, a broader term that refers to
any electronic data that carries the intent of a signature, but not all electronic signatures use
digital signatures. In some countries, including the United States, India, Brazil, Indonesia, Saudi
Arabia, Switzerland, and the countries of the European Union, electronic signatures have legal
significance.
Digital signatures employ asymmetric cryptography. In many instances they provide a layer of
validation and security to messages sent through a nonsecure channel: Properly implemented, a
digital signature gives the receiver reason to believe the message was sent by the claimed sender.
Digital seals and signatures are equivalent to handwritten signatures and stamped seals. Digital
signatures are equivalent to traditional handwritten signatures in many respects, but properly
implemented digital signatures are more difficult to forge than the handwritten type.
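As a sketch of the sign-and-verify flow, the toy textbook-RSA scheme below uses hypothetical parameters, omits the padding schemes real systems require, and is not secure for real use; it only shows how the private exponent signs and the public key verifies:

```python
# Textbook-RSA signature sketch (toy parameters, NOT secure): the signer
# holds the private exponent d; anyone can verify with the public pair (e, n).
import hashlib

p, q = 104729, 1299709             # small, well-known primes; real keys use 1024+ bit primes
n = p * q                          # public modulus
e = 65537                          # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (modular inverse, Python 3.8+)

def digest(msg: bytes) -> int:
    # Hash the message, then reduce into the toy modulus' range.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    return pow(digest(msg), d, n)   # only the private-key holder can compute this

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, e, n) == digest(msg)

sig = sign(b"pay 100 to Bob")
print(verify(b"pay 100 to Bob", sig))  # True: authentic and unaltered
print(verify(b"pay 999 to Bob", sig))  # False: the altered message fails
```

The failed second verification illustrates the integrity property discussed below: any change to the message after signing invalidates the signature.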
Applications of digital signatures
As organizations move away from paper documents with ink signatures or authenticity stamps,
digital signatures can provide added assurance of the provenance, identity, and status of an
electronic document, as well as acknowledging informed consent and approval by a
signatory. The United States Government Printing Office (GPO) publishes electronic versions of
the budget, public and private laws, and congressional bills with digital signatures. Universities
including Penn State, University of Chicago, and Stanford are publishing electronic student
transcripts with digital signatures.
Below are some common reasons for applying a digital signature to communications:
Authentication
Although messages may often include information about the entity sending a message, that
information may not be accurate. Digital signatures can be used to authenticate the source of
messages. When ownership of a digital signature secret key is bound to a specific user, a valid
signature shows that the message was sent by that user. The importance of high confidence in
sender authenticity is especially obvious in a financial context. For example, suppose a bank's
branch office sends instructions to the central office requesting a change in the balance of an
account. If the central office is not convinced that such a message is truly sent from an
authorized source, acting on such a request could be a grave mistake.
Integrity
In many scenarios, the sender and receiver of a message may have a need for confidence that the
message has not been altered during transmission. Although encryption hides the contents of a
message, it may be possible to change an encrypted message without understanding it. (Some
encryption algorithms, known as nonmalleable ones, prevent this, but others do not.) However, if
a message is digitally signed, any change in the message after signature invalidates the signature.
Furthermore, there is no efficient way to modify a message and its signature to produce a new
message with a valid signature, because this is considered computationally infeasible for most
cryptographic hash functions (see collision resistance).
Non-repudiation
Non-repudiation, or more specifically non-repudiation of origin, is an important aspect of digital
signatures. By this property, an entity that has signed some information cannot at a later time
deny having signed it. Similarly, access to the public key alone does not enable a fraudulent party
to forge a valid signature.
Cryptographic Algorithms
1. Advanced Encryption Standard (AES)
The Advanced Encryption Standard or AES is a symmetric block cipher used by the U.S.
government to protect classified information and is implemented in software and hardware
throughout the world to encrypt sensitive data.
The origins of AES date back to 1997, when the National Institute of Standards and Technology
(NIST) announced that it needed a successor to the aging Data Encryption Standard (DES), which
was becoming vulnerable to brute-force attacks.
This new encryption algorithm would be unclassified and had to be "capable of protecting
sensitive government information well into the next century." It was to be easy to implement in
hardware and software, as well as in restricted environments (for example, in a smart card) and
offer good defenses against various attack techniques.
Choosing AES
The selection process to find this new encryption algorithm was fully open to public scrutiny and
comment; this ensured a thorough, transparent analysis of the designs. Fifteen competing designs
were subject to preliminary analysis by the world cryptographic community, including the
National Security Agency (NSA). In August 1999, NIST selected five algorithms for more
extensive analysis: MARS, RC6, Rijndael, Serpent, and Twofish.