Cloud Computing Unit 1 Notes
Uploaded by sri yoga sankar

CLOUD COMPUTING

UNIT-1 Notes
Cloud Computing

UNIT- I

Introduction to Cloud Computing: Characteristics of Cloud Computing, Cloud


Models, Cloud Services Examples, Cloud based services and Applications, Cloud
Concepts and Technologies: Virtualization, Load Balancing, Scalability and
Elasticity, Deployment, Replication, Monitoring, Software defined networking,
Network function virtualization, Map Reduce, Identity and Access Management,
Service Level Agreements, Billing.

Cloud Computing -Introduction


1) Introduction: -
Cloud computing is Internet-based computing where information, software and shared
resources are provided to computers and devices on-demand. IBM defined cloud as “A cloud is a
pool of virtualized computer resources”. Users can access and deploy cloud applications from
anywhere in the world at very competitive costs.
Cloud computing involves provisioning of computing, networking and storage resources
on demand and providing these resources as metered services to the users, in a "pay as you go"
model.
Many applications, such as e-mail, web conferencing and customer relationship management
(CRM), run in the cloud.
Cloud computing is a transformative computing paradigm that involves delivering
applications and services over the internet.
The National Institute of Standards and Technology (NIST) defines the term Cloud
Computing as follows:

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a
shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort or service
provider interaction.

2) Characteristics of Cloud Computing:-


NIST identifies five essential characteristics of cloud computing (a–e); further characteristics
commonly associated with the cloud (f–j) are also listed below:
a) On-demand self service
b) Broad network access
c) Resource pooling
d) Rapid elasticity
e) Measured service
f) Performance
g) Reduced cost
h) Outsourced Management
i) Reliability
j) Multi-tenancy
a) On-demand self service
Cloud computing resources can be provisioned on-demand by the users, without requiring
interactions with the cloud service provider. The process of provisioning resources is automated.

b) Broad network access


Cloud computing resources can be accessed over the network using standard access mechanisms
that provide platform-independent access through the use of heterogeneous client platforms such as
workstations, laptops, tablets and smartphones.
c) Resource pooling
The computing and storage resources provided by cloud service providers are pooled to serve
multiple users using multi-tenancy. Multi-tenant aspects of the cloud allow multiple users to be
served by the same physical hardware.
Users are assigned virtual resources that run on top of the physical resources. Various forms of
virtualization approaches such as full virtualization, para-virtualization and hardware virtualization
are available for this purpose.


d) Rapid elasticity
Cloud computing resources can be provisioned rapidly and elastically. Cloud resources can be
rapidly scaled up or down based on demand.
Two types of scaling options exist:
-Horizontal Scaling (scaling out): Horizontal scaling or scaling-out involves launching
and provisioning additional server resources.
-Vertical Scaling (scaling up): Vertical scaling or scaling-up involves changing the computing
capacity assigned to the server resources while keeping the number of server resources constant.

e) Measured service
Cloud computing resources are provided to users on a pay-per-use model. The usage of the cloud
resources is measured and the user is charged based on specific metrics. Metrics such as the
amount of CPU cycles used, the amount of storage space used, the number of network I/O requests,
etc. are used to calculate the usage charges for the cloud resources.
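The metering idea above can be sketched in a few lines of Python; the metric names and per-unit rates below are purely illustrative, not any provider's actual pricing:

```python
# Pay-per-use metering sketch (hypothetical metrics and rates).
RATES = {"cpu_hours": 0.05, "storage_gb": 0.02, "network_io_requests": 0.00001}

def usage_charge(usage):
    """Bill = sum over metrics of (measured usage x per-unit rate)."""
    return sum(RATES[metric] * amount for metric, amount in usage.items())

bill = usage_charge({"cpu_hours": 100, "storage_gb": 50, "network_io_requests": 200000})
# 100*0.05 + 50*0.02 + 200000*0.00001 = 5.0 + 1.0 + 2.0 = 8.0
```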

In addition to the above five essential characteristics of cloud computing, other characteristics,
several of which again highlight savings in cost, include:
f) Performance
Cloud computing provides improved performance for applications since the resources available to
the applications can be scaled up or down based on the dynamic application workloads.
g) Reduced cost
Cloud computing provides cost benefits for applications, as only as much computing and storage
resources as required can be provisioned dynamically, and upfront investment in the purchase of
computing assets is minimized.
This saves significant cost for organizations and individuals. Applications can experience large
variations in the workloads which can be due to seasonal or other factors.
For example, e-Commerce applications typically experience higher workloads in holiday seasons.
To ensure market readiness of such applications, adequate resources need to be provisioned so that
the applications can meet the demand of specified workload levels and at the same time ensure that
service level agreements are met.

h) Outsourced Management
Cloud computing allows the users (individuals, large organizations, small and medium enterprises
and governments) to outsource the IT infrastructure requirements to external cloud providers.
Thus, the consumers can save large upfront capital expenditures in setting up the IT infrastructure
and pay only for the operational expenses for the cloud resources used.
The outsourced nature of the cloud services provides a reduction in the IT infrastructure
management costs.
i) Reliability
Applications deployed in cloud computing environments generally have a higher reliability since
the underlying IT infrastructure is professionally managed by the cloud service provider.
Cloud service providers specify and guarantee the reliability and availability levels for their cloud
resources in the form of service level agreements (SLAs).
j) Multi-tenancy
The multi-tenanted approach of the cloud allows multiple users to make use of the same shared
resources.
Modern applications such as e-Commerce, Business-to-Business, Banking and Financial, Retail
and Social Networking applications that are deployed in cloud computing environments are
multi-tenanted applications.
Multi-tenancy can be of different forms:
Virtual multi-tenancy: In virtual multi-tenancy, computing and storage resources are
shared among multiple users. Multiple tenants are served from virtual machines (VMs) that
execute concurrently on top of the same computing and storage resources.

Organic multi-tenancy: In organic multi-tenancy every component in the system


architecture is shared among multiple tenants, including hardware, OS, database servers,
application servers, load balancers, etc. Organic multi-tenancy exists when explicit
multi-tenant design patterns are coded into the application.


Service Models of Cloud Computing


Introduction:-
Cloud computing is Internet-based computing where information, software and shared resources
are provided to computers and devices on-demand.

Cloud computing services are offered to users in different forms. NIST defines at least three cloud
service models as follows:
1) Software as a Service
2) Platform as a Service
3) Infrastructure as a Service
The diagram below shows the different categories of cloud service models from the perspective of
who can access them.

The diagram below shows the cloud computing service models from the perspective of what can be
accessed:

1) Software as a Service (SaaS):


SaaS model allows the users to use available software applications of the Cloud as a service.
Such applications can be email, customer relationship management, and other office
productivity applications. Enterprise services are usually billed monthly or by usage.
SaaS provides the users a complete software application or the user interface to the application itself.
The cloud service provider manages the underlying cloud infrastructure including servers,
network, operating systems, storage and application software, and the user is unaware of the
underlying architecture of the cloud.

Applications are provided to the user through a thin client interface (e.g., a browser). SaaS
applications are platform independent and can be accessed from various client devices such as
workstations, laptops, tablets and smartphones, running different operating systems.
Since the cloud service provider manages both the application and data, the users are able to
access the applications from anywhere.

SaaS is a software delivery methodology that provides licensed multi-tenant access to


software and its functions remotely as a Web-based service.
Services at the software level consist of complete applications that do not require development.


The following lists the benefits, characteristics and adoption of SaaS model:

2) Platform as a Service (PaaS):


PaaS provides the users the capability to develop and deploy applications in the cloud using
the development tools, application programming interfaces (APIs), software libraries and services
provided by the cloud service provider.
PaaS Services provide Operating Systems, middleware, databases, development tools, and
runtime support of Programming Languages.

The cloud service provider manages the underlying cloud infrastructure including servers,
network, operating systems and storage. The users, themselves, are responsible for developing,
deploying, configuring and managing applications on the cloud infrastructure.
This model enables the user to develop and deploy user-built applications onto a virtualized
cloud platform.

The following lists the benefits, characteristics and adoption of PaaS model:

3) Infrastructure as a Service (IaaS):


IaaS provides the users the capability to provision computing and storage resources. These
resources are provided to the users as virtual machine instances and virtual storage.
Users can start, stop, configure and manage the virtual machine instances and virtual
storage. Users can deploy operating systems and applications of their choice on the virtual
resources provisioned in the cloud.
The cloud service provider manages the underlying infrastructure. Virtual resources
provisioned by the users are billed based on a pay-per-use paradigm.
Common metering metrics used are the number of virtual machine hours used and/or the
amount of storage space provisioned.
IaaS is the delivery of technology infrastructure as an on demand scalable service.


The IaaS model puts together infrastructures demanded by users—namely servers, storage, networks, and the
data centre.
Clients are billed for infrastructure services based on what resources are consumed. This
eliminates the need to procure and operate physical servers, data storage systems, or networking
resources.
The infrastructure layer builds on the virtualization layer by offering the virtual machines as
a service to users.
The following lists the benefits, characteristics and adoption of the IaaS model:


Cloud Deployment Models


Introduction:-

Cloud computing is Internet-based computing where Applications, information, software


and shared resources are provided to Users, devices online and on-demand.

Deployment models define the type of access to the cloud, i.e., where the cloud is located
and how it is accessed.

There are different cloud computing deployment models. NIST defines the following
cloud deployment models:
1. Public Cloud
2. Private Cloud
3. Hybrid Cloud
4. Community Cloud


Public Cloud:-
In the public cloud deployment model, cloud services are available to the general public
or a large group of companies.
The cloud resources are shared among different users (individuals, large organizations,
small and medium enterprises and governments).
Public clouds are best suited for users who want to use cloud infrastructure for
development and testing of applications and host applications in the cloud to serve large workloads,
without upfront investments in IT infrastructure.
A public cloud is built over the Internet and can be accessed by any user who has paid for the
service.

Many public clouds are available, including


-Google App Engine (GAE),
-Amazon Web Services (AWS),
-Microsoft Azure,
-IBM Blue Cloud, and others.
Private Cloud :-
In the private cloud deployment model, cloud infrastructure is operated for
exclusive use of a single organization.
Private cloud services are dedicated for a single organization. Cloud infrastructure can be
setup on premise or off-premise and may be managed internally or by a third-party.
Private clouds are best suited for applications where security is very important and
organizations that want to have very tight control over their data.
A private cloud is built within the domain of an intranet owned by a single organization.

Therefore, it is organization owned and managed, and its access is limited to the owning
clients and their partners.
Private clouds give local users a flexible and agile private infrastructure to run service workloads
within their administrative domains. They offer increased security because of their private nature.

Examples of private clouds include the following:

Research Compute Cloud (RC2)
The Research Compute Cloud (RC2) is a private cloud, built by IBM, that interconnects the
computing and IT resources at IBM Research Centres scattered throughout the United States,
Europe, and Asia.


Hybrid Cloud :-
The hybrid cloud deployment model combines the services of multiple clouds (private or
public).
The individual clouds retain their unique identities but are bound by standardized or
proprietary technology that enables data and application portability.
Hybrid clouds are best suited for organizations that want to take advantage of secured
application and data hosting on a private cloud, and at the same time benefit from cost savings by
hosting shared applications and data in public clouds.

Community Cloud :-
The community cloud infrastructure is provisioned for exclusive use by a specific
community of consumers from several organizations that have shared concerns (e.g., mission,
security requirements, policy, and compliance considerations).

It may be owned, managed, and operated by one or more of the organizations in the
community, a third party, or some combination of them, and it may exist on or off premises.

In the community cloud deployment model, the cloud services are shared by several
organizations that have the same policy and compliance considerations.

Community clouds are best suited for organizations that want access to the same
applications and data, and want the cloud costs to be shared with the larger group.

In summary,

public clouds promote standardization, preserve capital investment, and offer application flexibility.

Private clouds attempt to achieve customization and offer higher efficiency, resiliency, security, and
privacy.

Hybrid clouds operate in the middle, with many compromises in terms of resource sharing.


Cloud Concepts & Technologies


Introduction: -
Cloud computing is Internet-based computing where Applications, information, software
and shared resources are provided to Users online and on-demand.
Concepts and enabling technologies of cloud computing include the following:
-Virtualization
-Load balancing
-Scalability & Elasticity
-Deployment
-Replication
-Monitoring
-Software defined networking
-Network function virtualization
-MapReduce
-Identity and Access Management
-Service Level Agreements
-Billing

1) Virtualization: -
Virtualization refers to the partitioning of the resources of a physical system (such as
computing, storage, network and memory) into multiple virtual resources.
Virtualization is the key enabling technology of cloud computing and allows pooling of
resources. In cloud computing, resources are pooled to serve multiple users using multi-tenancy.
Multi-tenant aspects of the cloud allow multiple users to be served by the same physical hardware.
Users are assigned virtual resources that run on top of the physical resources.
Figure 2.1 shows the architecture of a virtualization technology in cloud computing.


The physical resources such as computing, storage, memory and network resources are virtualized.
The virtualization layer partitions the physical resources into multiple virtual machines. The
virtualization layer allows multiple operating system instances to run concurrently as virtual
machines on the same underlying physical resources.
The virtualization layer consists of a hypervisor or a virtual machine monitor (VMM).
Hypervisor:
The hypervisor presents a virtual operating platform to a guest operating system (OS).
There are two types of hypervisors as shown below:
Type-1 hypervisors or the native hypervisors run directly on the host hardware and control the
hardware and monitor the guest operating systems.

Type-2 hypervisors or hosted hypervisors run on top of a conventional (main/host) operating
system and monitor the guest operating systems.

Guest OS:
A guest OS is an operating system that is installed in a virtual machine in addition to the host or
main OS. In virtualization, the guest OS can be different from the host OS.

Types of Virtualizations:
Various forms of virtualization approaches exist:
-Full Virtualization
-Para Virtualization
-Hardware Virtualization


a) Full Virtualization:
In full virtualization, the virtualization layer completely decouples the guest OS from the
underlying hardware. The guest OS requires no modification and is not aware that it is being
virtualized.

Full virtualization is enabled by direct execution of user requests and binary translation of OS
requests. Figure 2.4 shows the full virtualization approach.

b) Para Virtualization:
In para-virtualization, the guest OS is modified to enable communication with the hypervisor to
improve performance and efficiency.

The guest OS kernel is modified to replace non-virtualizable instructions with hypercalls that
communicate directly with the virtualization layer hypervisor. Figure 2.5 shows the
para-virtualization approach.


c) Hardware Virtualization:
Hardware assisted virtualization is enabled by hardware features such as Intel’s Virtualization
Technology (VT-x) and AMD’s AMD-V. In hardware assisted virtualization, privileged and
sensitive calls are set to automatically trap to the hypervisor. Thus, there is no need for either binary
translation or para-virtualization.

Table 2.1 lists some examples of popular hypervisors:

2) Load Balancing: -
One of the important features of Cloud computing is scalability. Cloud computing resources
can be scaled up on demand to meet the performance requirements of applications.
Load balancing distributes workloads/user requests across multiple servers/resources to meet
the application workloads.
The goals of load balancing techniques are:
-to achieve maximum utilization of resources,
-to minimize response times,
-to maximize throughput.
With load balancing, cloud-based applications can achieve high availability and reliability.
Since multiple resources under a load balancer are used to serve the user requests, in the
event of failure of one or more of the resources, the load balancer can automatically reroute the user
traffic to the healthy resources.
To the end user accessing a cloud-based application, a load balancer makes the pool of
servers under the load balancer appear as a single server with high computing capacity.

Load Balancing Algorithms: The routing of user requests is determined based on a load balancing
algorithm. Commonly used load balancing algorithms include:
-Round Robin load balancing
-Weighted Round Robin load balancing
-Low Latency load balancing
-Least Connections load balancing
-Priority load balancing
-Overflow load balancing


a) Round Robin load balancing


In Round robin load balancing, the servers are selected one by one to serve the incoming
requests in a non-hierarchical circular fashion with no priority assigned to a specific server.
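As a minimal sketch, round robin selection is just a circular iterator over the server pool (server names below are illustrative):

```python
from itertools import cycle

# Round robin: each incoming request goes to the next server in a fixed
# circular order, with no priority assigned to any server.
servers = ["server-1", "server-2", "server-3"]
rotation = cycle(servers)

assigned = [next(rotation) for _ in range(5)]
# ["server-1", "server-2", "server-3", "server-1", "server-2"]
```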

b) Weighted Round Robin load balancing


In weighted round robin load balancing, servers are assigned weights. The incoming
requests are proportionally routed using a static or dynamic ratio of the respective weights.
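Weighted round robin can be sketched by repeating each server in the rotation according to its weight; the names and static weights below are illustrative:

```python
from itertools import cycle

# Weighted round robin sketch: a server with weight w appears w times in the
# rotation, so it receives a proportional share of the requests.
def weighted_rotation(weights):
    schedule = [server for server, w in weights.items() for _ in range(w)]
    return cycle(schedule)

rotation = weighted_rotation({"big-server": 3, "small-server": 1})
first_cycle = [next(rotation) for _ in range(4)]
# "big-server" receives 3 of every 4 requests, "small-server" receives 1
```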

c) Low Latency load balancing


In low latency load balancing, the load balancer monitors the latency of each server. Each
incoming request is routed to the server which has the lowest latency.


d) Least Connections load balancing


In least connections load balancing, the incoming requests are routed to the server with the least
number of connections.
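This rule is a one-line minimum over the servers' connection counts; the counts below are simulated, not measured:

```python
# Least connections sketch: route each request to the server currently holding
# the fewest active connections.
def least_connections(active_connections):
    return min(active_connections, key=active_connections.get)

active = {"server-a": 12, "server-b": 4, "server-c": 9}
target = least_connections(active)   # "server-b"
active[target] += 1                  # the chosen server takes the new connection
```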

e) Priority load balancing


In priority load balancing, each server is assigned a priority. The incoming traffic is routed to
the highest priority server as long as the server is available. When the highest priority server
fails, the incoming traffic is routed to a server with a lower priority.

f) Overflow load balancing


Overflow load balancing is similar to priority load balancing. When the incoming requests to the
highest priority server overflow, the requests are routed to a lower priority server.


Load Balancing Persistence Approaches: Since load balancing can route successive
requests from a user session to different servers, maintaining the state or the information
of the session is important.
Persistence Approaches include the following:
-Sticky sessions
-Session Database
-Browser cookies
-URL re-writing
a) Sticky sessions: In this approach all the requests belonging to a user session are routed to the
same server. These sessions are called sticky sessions.
The benefit of this approach is that it makes session management simple. However, a drawback of
this approach is that if a server fails all the sessions belonging to that server are lost since there is
no automatic failover possible.
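One common way to implement stickiness is to derive the target server from a stable hash of the session identifier, so every request in the same session maps to the same server (the ids and server names below are illustrative):

```python
import hashlib

# Sticky sessions sketch: a stable hash of the session id selects the server,
# so repeated requests in one session always reach the same server.
def sticky_server(session_id, servers):
    digest = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

servers = ["s1", "s2", "s3"]
first = sticky_server("session-42", servers)
second = sticky_server("session-42", servers)   # same server every time
```

Note that this mapping has exactly the drawback described above: if the selected server fails, the sessions pinned to it are lost.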

b) Session Database: In this approach, all the session information is stored externally in a
separate session database, which is often replicated to avoid a single point of failure.
Though this approach involves the additional overhead of storing the session information,
unlike the sticky session approach, it allows automatic failover.

c) Browser cookies: In this approach, the session information is stored on the client side in the
form of browser cookies. The benefit of this approach is that it makes session management
easy and has the least amount of overhead for the load balancer.

d) URL re-writing: In this approach, a URL re-write engine stores the session information by
modifying the URLs on the client side.
Though this approach avoids overhead on the load balancer, a drawback is that the amount of
session information that can be stored is limited. For applications that require larger amounts of
session information, this approach does not work.
Load balancing can be implemented in software or hardware.
Software-based load balancers run on standard operating systems and, like other cloud resources,
can themselves be virtualized.
Hardware-based load balancers implement load balancing algorithms in Application Specific
Integrated Circuits (ASICs). In a hardware load balancer, the incoming user requests are routed to
the underlying servers based on some pre-configured load balancing strategy, and the response from
the servers is sent back either directly to the user (at layer-4) or back to the load balancer (at
layer-7), where it is manipulated before being sent back to the user.
Table 2.2 lists some examples of load balancers.


3) Scalability & Elasticity:-


Multi-tier applications such as e-Commerce, social networking, business-to-business, etc. can
experience rapid changes in their traffic.
Each website has a different traffic pattern which is determined by a number of factors that
are generally hard to predict beforehand. Modern web applications have multiple tiers of
deployment with a varying number of servers in each tier.
Capacity planning involves determining the right sizing of each tier of the deployment of an
application in terms of the number of resources and the capacity of each resource.
Capacity planning may be for computing, storage, memory or network resources.
Figure 2.7 shows the cost versus capacity curves for traditional and cloud approaches.

Scaling Approaches:
Traditional approaches for capacity planning are based on predicted demands for
applications and account for worst case peak loads of applications. When the workloads of
applications increase, the traditional approaches have been either to scale up or scale out.
a) Vertical Scaling/Scaling up: Scaling up involves upgrading the hardware resources (adding
additional computing, memory, storage or network resources).
b) Horizontal Scaling/Scaling out: Scaling out involves addition of more resources of the same
type.
Traditional scaling up and scaling out approaches are based on demand forecasts at regular intervals
of time.
When variations in workloads are rapid and scale-up and scale-out are not applied dynamically,
the result is either over-provisioning or under-provisioning of resources.
Over-provisioning of resources leads to higher capital expenditures than required.
Under-provisioning of resources leads to traffic overloads, slow response times, low throughputs
and hence loss of opportunity to serve the customers.
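Cloud elasticity avoids both failure modes by adjusting capacity dynamically. A minimal threshold-based sketch follows; the 70%/30% thresholds are illustrative choices, not recommendations:

```python
# Threshold-based elasticity sketch: scale out when average utilization is high
# (under-provisioned) and scale in when it is low (over-provisioned).
def scaling_decision(avg_utilization, servers, high=0.70, low=0.30):
    if avg_utilization > high:
        return servers + 1          # under-provisioned: add a server
    if avg_utilization < low and servers > 1:
        return servers - 1          # over-provisioned: release a server
    return servers                  # within band: keep current capacity

grown = scaling_decision(0.85, servers=4)    # 5
shrunk = scaling_decision(0.20, servers=4)   # 3
steady = scaling_decision(0.50, servers=4)   # 4
```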


4) Deployment:-
The cloud deployment model identifies the specific type of cloud environment based on
ownership, scale, and access, as well as the cloud's nature and purpose. The location of the
servers you're utilizing and who controls them are defined by a cloud deployment model.
This includes architecting, planning, implementing and operating workloads on cloud.
Below diagram shows the cloud application deployment lifecycle.

Deployment prototyping can help in making deployment architecture design choices. By comparing
performance of alternative deployment architectures, deployment prototyping can help in choosing
the best and most cost-effective deployment architecture that can meet the application performance
requirements.


Deployment design is an iterative process that involves the following steps:


a) Deployment Design
• The variables in this step include the number of servers in each tier, the
computing, memory and storage capacities of servers, server
interconnection, load balancing and replication strategies.

b) Performance Evaluation
• To verify whether the application meets the performance requirements with the
deployment.
• Involves monitoring the workload on the application and
measuring various workload parameters such as response time
and throughput.
• Utilization of servers (CPU, memory, disk, I/O, etc.) in each tier is also
monitored.

c) Deployment Refinement
• Various alternatives can exist in this step, such as vertical scaling
(or scaling up), horizontal scaling (or scaling out), alternative
server interconnections, and alternative load balancing and replication
strategies.

Below Table lists some popular cloud deployment management tools.


5) Replication:-
Replication is used to create and maintain multiple copies of the data in the cloud.
Replication of data is important for practical reasons such as business continuity and disaster
recovery.
In the event of data loss at the primary location, organizations can continue to operate their
applications from secondary data sources.
With real-time replication of data, organizations can achieve faster recovery from failures.
Cloud based data replication approaches provide replication of data in multiple locations,
automated recovery, low recovery point objective (RPO) and low recovery time objective (RTO).

Cloud enables rapid implementation of replication solutions for disaster recovery for small
and medium enterprises and large organizations. With cloud-based data replication organizations
can plan for disaster recovery without making any capital expenditures on purchasing, configuring
and managing secondary site locations.
Cloud providers offer affordable replication solutions with pay-per-use/pay-as-you-go pricing
models. There are three types of replication approaches as shown below:
-Array-based Replication
-Network-based Replication
-Host-based Replication
a) Array-based Replication:
Array-based replication uses compatible storage arrays to automatically copy data from a
local storage array to a remote storage array. Arrays replicate data at the disk sub-system level,
therefore the type of hosts accessing the data and the type of data is not important. Thus array-based
replication can work in heterogeneous environments with different operating systems. Array-based
replication uses Network Attached Storage (NAS) or Storage Area Network (SAN), to replicate. A
drawback of this array-based replication is that it requires similar arrays at local and remote
locations. Thus the costs for setting up array-based replication are higher than the other approaches.

b) Network-based Replication:
Network-based replication uses an appliance that sits on the network and intercepts packets
that are sent from hosts and storage arrays. The intercepted packets are replicated to a secondary
location. The benefit of this approach is that it supports heterogeneous environments and requires
only a single point of management. However, this approach involves higher initial costs due to
replication hardware and software.


c) Host-based Replication:
Host-based replication runs on standard servers and uses software to transfer data from a
local to a remote location. The host acts as the replication control mechanism.
An agent is installed on the hosts that communicates with the agents on the other hosts.
Host-based replication can either be block-based or file-based.
Block-based replication typically requires dedicated volumes of the same size on both the
local and remote servers.
File-based replication allows the administrators to choose the files or folders to be
replicated. File-based replication requires less storage as compared to block-based replication.
Host-based replication with cloud infrastructure provides affordable replication solutions.


6) Monitoring:-
Cloud resources can be monitored by monitoring services provided by the cloud
service providers. Monitoring services allow cloud users to collect and analyze the data on
various monitoring metrics. Figure 2.10 shows a generic architecture for a cloud monitoring
service.

A monitoring service collects data on various system and application metrics from the cloud
computing instances. Monitoring services provide various pre-defined metrics. Users can also
define their custom metrics for monitoring the cloud resources. Users can define various actions
based on the monitoring data, for example, auto-scaling a cloud deployment when the CPU usage
of monitored resources becomes high. Monitoring services also provide various statistics based on
the monitoring data collected. Table 2.4 lists the commonly used monitoring metrics for cloud
computing resources.

Monitoring of cloud resources is important because it allows the users to keep track of the
health of applications and services deployed in the cloud. For example, an organization which has
its website hosted in the cloud can monitor the performance of the website and also the website
traffic. With the monitoring data available at run-time, users can make operational decisions such as
scaling up or scaling down cloud resources.
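A minimal sketch of such a monitoring-driven action, with illustrative threshold values rather than any provider's defaults:

```python
def scaling_action(cpu_samples, high=80.0, low=20.0):
    """Decide a scaling action from collected CPU-utilization samples (percent).

    Returns 'scale-up', 'scale-down' or 'no-action' depending on whether the
    average utilization crosses the user-defined thresholds.
    """
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:
        return "scale-up"
    if avg < low:
        return "scale-down"
    return "no-action"
```

A real monitoring service would collect these samples automatically and let the user attach such threshold-based actions (e.g. an auto-scaling policy) to the metric.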


7) Software Defined Networking:-


Software-defined networking (SDN) technology is an approach to network
management that enables dynamic, programmatically efficient network configuration in order to
improve network performance and monitoring, making it more like cloud computing than
traditional network management.
SDN is meant to address the static architecture of traditional networks. SDN attempts to
centralize network intelligence in one network component by disassociating the forwarding process
of network packets (data plane) from the routing process (control plane).
The control plane consists of one or more controllers, which are considered the brain of the
SDN network where the whole intelligence is incorporated.
SDN was commonly associated with the OpenFlow protocol (for remote communication with
network plane elements for the purpose of determining the path of network packets across network
switches)
Proprietary SDN platforms include Cisco Systems' Open Network Environment and Nicira's network virtualization platform.
Software-Defined Networking (SDN) is a networking architecture that separates the control
plane from the data plane and centralizes the network controller.
Figure 2.11 shows the conventional network architecture built with specialized hardware
(switches, routers, etc.).

Reasons for using SDN:


1. Traditional networks are tedious to manage, outdated, and rely on rigid commands and console-based configuration.
2. Traditional networks require manual configuration and a lot of administration.
3. Traditional networks are complex and show an inability to scale.


Benefits of SDN:
1. It is an open technology and more flexible.
2. Supports more interoperability, speed and automation.
3. SDN helps to improve server virtualization.
4. Supports increased resource usage efficiency.

Figure 2.12 shows the SDN architecture.


Figure 2.13 shows the SDN layers in which the control and data planes are decoupled and
the network controller is centralized.

Software-based SDN controllers maintain a unified view of the network and make
configuration, management and provisioning simpler.
The underlying network infrastructure is abstracted from the applications.
Network devices become simple with SDN as they do not require implementations of a
large number of protocols.
Network devices receive instructions from the SDN controller on how to
forward the packets. These devices can be simpler and cost less as they can be built from
standard hardware and software components.

Key elements of SDN are as follows:


• Centralized Network Controller
• With the control and data planes decoupled and the network controller centralized,
network administrators can rapidly configure the network.
• Programmable Open APIs
• SDN architecture supports programmable open APIs for the interface between the SDN
application and control layers (Northbound interface). These open APIs allow
implementing various network services such as routing, quality of service (QoS), access
control, etc.
• Standard Communication Interface (OpenFlow)
• SDN architecture uses a standard communication interface between the control and
infrastructure layers (Southbound interface). OpenFlow, which is defined by the
Open Networking Foundation (ONF), is the broadly accepted SDN protocol for the
Southbound interface.
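The decoupling of the two planes can be illustrated with a toy model (not real OpenFlow): a centralized controller programs match-to-action flow rules into simple switches, which merely look up the rules when forwarding.

```python
class Switch:
    """Data plane: a simple device that only applies installed flow rules."""

    def __init__(self):
        self.flow_table = {}          # match field (destination) -> action

    def install_rule(self, match, action):
        self.flow_table[match] = action

    def forward(self, dst):
        # In real SDN, packets with no matching rule are sent to the controller.
        return self.flow_table.get(dst, "send-to-controller")


class Controller:
    """Control plane: centralized intelligence that programs every switch."""

    def __init__(self, switches):
        self.switches = switches

    def program_route(self, dst, port):
        for sw in self.switches:
            sw.install_rule(dst, f"output:{port}")
```

The point of the sketch is the separation of concerns: all path computation lives in `Controller`, while `Switch` holds no protocol logic of its own.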


8) MapReduce:-
• MapReduce is a parallel data processing model for processing and analysis of massive scale
data.

• MapReduce phases:
• Map Phase: In the Map phase, data is read from a distributed file system, partitioned
among a set of computing nodes in the cluster, and sent to the nodes as a set of
key-value pairs.
• The Map tasks process the input records independently of each other and produce
intermediate results as key-value pairs.
• The intermediate results are stored on the local disk of the node running the Map
task.
• Reduce Phase: When all the Map tasks are completed, the Reduce phase begins in
which the intermediate data with the same key is aggregated.
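The two phases can be simulated in plain Python for a word count; a real framework such as Hadoop would distribute the Map tasks across nodes and shuffle the intermediate key-value pairs to the Reduce tasks, but the data flow is the same.

```python
from collections import defaultdict

def map_phase(records):
    """Map: process each input record independently, emitting (word, 1) pairs."""
    for record in records:
        for word in record.split():
            yield (word, 1)

def shuffle(pairs):
    """Group intermediate values by key (done by the framework in practice)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: aggregate all intermediate values that share the same key."""
    return {key: sum(values) for key, values in grouped.items()}

counts = reduce_phase(shuffle(map_phase(["the cloud", "the map the reduce"])))
```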

9) Identity and Access Management:-


Identity and Access Management (IDAM) for cloud describes the authentication
and authorization of users to provide secure access to cloud resources.
Organizations with multiple users can use IAM services provided by the cloud
service provider for management of user identifiers and user permissions.

IAM services allow organizations to centrally manage users, access permissions,
security credentials and access keys.
Organizations can enable role-based access control to cloud resources and
applications using the IDAM services.

IDAM services allow creation of user groups where all the users in a group have the
same access permissions.
Identity and Access Management is enabled by a number of technologies such as
OpenAuth, Role-based Access Control (RBAC), Digital Identities, Security Tokens,
Identity Providers, etc.


Figure 2.20 shows the examples of OAuth and RBAC.

OAuth is an open standard for authorization that allows resource owners to share their
private resources stored on one site with another site without handing out the credentials.
In the OAuth model, an application (which is not the resource owner) requests access to
resources controlled by the resource owner (but hosted by the server). The resource owner
grants permission to access the resources in the form of a token and a matching shared-secret.

Tokens make it unnecessary for the resource owner to share its credentials with the
application. Tokens can be issued with a restricted scope and limited lifetime, and revoked
independently. RBAC is an approach for restricting access to authorized users.
Figure 2.21 shows an example of a typical RBAC framework.


A user who wants to access cloud resources is required to send his/her data to the
system administrator, who assigns permissions and access control policies that are stored
in the User Roles and Data Access Policies databases, respectively.
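The RBAC check itself reduces to a role-to-permission lookup, as the sketch below shows; the role names, user assignments and permission strings are hypothetical.

```python
# Permissions attach to roles, not directly to users (the core RBAC idea).
ROLE_PERMISSIONS = {
    "admin":     {"start-vm", "stop-vm", "delete-vm"},
    "developer": {"start-vm", "stop-vm"},
    "auditor":   {"view-logs"},
}

# Users are assigned roles by the system administrator.
USER_ROLES = {"alice": "admin", "bob": "developer"}

def is_authorized(user, permission):
    """Grant access only if the user's assigned role includes the permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Because access is mediated by roles, changing what a whole class of users may do requires editing one role entry rather than every individual user.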

10) Service Level Agreements:-


A Service Level Agreement (SLA) for cloud specifies the level of service that is
formally defined as a part of the service contract with the cloud service provider.
SLAs specify the level of service for each service in the form of a
minimum guaranteed level and a target level.
SLAs contain a number of performance metrics and the corresponding service level
objectives.
Table 2.5 lists the common criteria for cloud SLAs.
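As a worked example of one common SLA criterion, an availability guarantee can be converted into the maximum downtime the provider may accrue; the 99.9% figure below is illustrative, not any specific provider's guarantee.

```python
def max_downtime_minutes(uptime_percent, days=30):
    """Maximum downtime (minutes) allowed per billing period under an
    availability SLA of the given uptime percentage."""
    total_minutes = days * 24 * 60
    return total_minutes * (100.0 - uptime_percent) / 100.0
```

For a 30-day month, 99.9% availability permits about 43.2 minutes of downtime, while 99.99% permits only about 4.3 minutes; exceeding the limit typically triggers the service credits defined in the SLA.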


11) Billing: -
Cloud service providers offer a number of billing models described as follows:
• Elastic Pricing
• In elastic pricing or pay-as-you-use pricing model, the customers are charged based
on the usage of cloud resources.
• Fixed Pricing
• In fixed pricing models, customers are charged a fixed amount per month for the
cloud resources.
• Spot Pricing
• Spot pricing models offer variable pricing for cloud resources which is driven by
market demand.
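The three models can be contrasted with a toy cost calculation; all rates below are made-up illustrative numbers, not real provider prices.

```python
def elastic_bill(hours_used, rate_per_hour):
    """Pay-as-you-use: charge only for the hours actually consumed."""
    return hours_used * rate_per_hour

def fixed_bill(monthly_rate):
    """Fixed pricing: a flat amount per month regardless of usage."""
    return monthly_rate

def spot_bill(hourly_spot_prices):
    """Spot pricing: the market-driven price may differ every hour."""
    return sum(hourly_spot_prices)
```

Which model is cheapest depends on the workload: steady, predictable usage favors fixed pricing, bursty usage favors elastic pricing, and interruption-tolerant batch jobs can exploit low spot prices.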

12) Network Function Virtualization: -


Network Function Virtualization (NFV) is a technology that leverages virtualization to
consolidate the heterogeneous network devices onto industry standard high-volume servers,
switches and storage.

NFV is complementary to SDN as NFV can provide the infrastructure on which SDN can
run. NFV and SDN are mutually beneficial to each other but not dependent. Network functions can
be virtualized without SDN, similarly, SDN can run without NFV.

The diagram below shows the NFV architecture, as standardized by the European
Telecommunications Standards Institute (ETSI).


Key elements of the NFV architecture are as follows:


• Virtualized Network Function (VNF): VNF is a software implementation of a
network function which is capable of running over the NFV Infrastructure (NFVI).
• NFV Infrastructure (NFVI): NFVI includes compute, network and storage
resources that are virtualized.
• NFV Management and Orchestration: NFV Management and Orchestration
focuses on all virtualization-specific management tasks and covers the orchestration
and lifecycle management of physical and/or software resources that support the
infrastructure virtualization, and the lifecycle management of VNFs.
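The interplay of these three elements can be sketched as a toy model, with hypothetical class names: an orchestrator instantiates and terminates VNFs on a virtualized resource pool standing in for the NFVI.

```python
class NFVI:
    """Virtualized compute pool (NFVI) on which VNFs are placed."""

    def __init__(self, vcpus):
        self.vcpus = vcpus

    def allocate(self, n):
        if n > self.vcpus:
            raise RuntimeError("insufficient NFVI resources")
        self.vcpus -= n

    def release(self, n):
        self.vcpus += n


class Orchestrator:
    """NFV Management and Orchestration: handles the VNF lifecycle."""

    def __init__(self, nfvi):
        self.nfvi = nfvi
        self.running = {}             # VNF name -> vCPUs allocated to it

    def instantiate(self, vnf_name, vcpus):
        self.nfvi.allocate(vcpus)     # place the VNF on the infrastructure
        self.running[vnf_name] = vcpus

    def terminate(self, vnf_name):
        # Lifecycle management: tearing down a VNF frees its resources.
        self.nfvi.release(self.running.pop(vnf_name))
```

Because a VNF here is just software plus a resource allocation, upgrading it means replacing the software and re-instantiating, with no change to the underlying hardware.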

NFV comprises network functions implemented in software that run on virtualized
resources in the cloud. NFV enables a separation of the network functions, which are
implemented in software, from the underlying hardware.

Thus, network functions can be easily tested and upgraded by installing new
software while the hardware remains the same.
Virtualizing network functions reduces the equipment costs and also reduces
power consumption.

The multi-tenanted nature of the cloud allows virtualized network functions to
be shared for multiple network services.
NFV is applicable to data plane and control plane functions in fixed and mobile
networks.


Figure 2.17 shows use cases of NFV for home and enterprise networks, content
delivery networks, mobile base stations, mobile core network and security functions.
