Cloud Application Development

The document outlines the architectural design of cloud computing, focusing on Compute and Storage Clouds, and their layered architecture comprising Infrastructure, Platform, and Application layers. It discusses market-oriented cloud architecture, inter-cloud resource management, and challenges such as service availability, data privacy, and performance bottlenecks. Additionally, it covers resource provisioning methods and the importance of efficient management to meet user demands while maintaining service level agreements.

Cloud Application Development
(UNIT-3: CLOUD INFRASTRUCTURE)
Architectural Design of Compute and Storage Clouds

Cloud computing architectures are designed to provide scalable, on-demand access to computing and storage resources. The two primary components are Compute Clouds (for processing) and Storage Clouds (for data persistence). Below is a detailed breakdown of their architectural designs.
A Generic Cloud Architecture

• The architecture of cloud computing combines SOA (Service-Oriented Architecture) and EDA (Event-Driven Architecture).
• The cloud architecture is divided into two parts:
Frontend
Backend
• Client Infrastructure: Client infrastructure is part of the frontend. It contains the applications and user interfaces required to access the cloud platform; in other words, it provides a GUI (Graphical User Interface) to interact with the cloud.
• Application: The application is the backend software or platform that the client accesses; it delivers the service requested by the client.
• Service: Service in the backend refers to the three major types of cloud-based services, i.e., SaaS, PaaS, and IaaS, and manages which type of service the user accesses.
• Runtime Cloud: The runtime cloud in the backend provides the execution and runtime platform/environment to the virtual machines.

Fig 3.1: Layered architectural development of the cloud platform for IaaS, PaaS, and SaaS applications over the Internet.
A Generic Cloud Architecture Cont.

• Storage: Storage in the backend provides a flexible, scalable storage service and management of stored data.
• Infrastructure: Cloud infrastructure in the backend refers to the hardware and software components of the cloud, including servers, storage, network devices, and virtualization software.
• Management: Management in the backend refers to the management of backend components such as the application, service, runtime cloud, storage, infrastructure, and other security mechanisms.
• Security: Security in the backend refers to the implementation of security mechanisms that secure cloud resources, systems, files, and infrastructure for end users.
Layered Cloud Architectural Development

• The architecture of a cloud is developed at three layers:
1. Infrastructure
2. Platform
3. Application/Software
• These three development layers are implemented with virtualization and standardization of hardware and software resources provisioned in the cloud.
• The infrastructure layer serves as the foundation for building the platform layer of the cloud, which supports PaaS services.
• The platform layer is in turn the foundation for implementing the application layer for SaaS applications.

Fig 3.2: Layered architectural development of the cloud platform for IaaS, PaaS, and SaaS applications over the Internet.
Layered Cloud Architectural Development

• The infrastructure layer is built with virtualized compute, storage, and network resources. Proper utilization of these resources provides the flexibility demanded by users.
• The platform layer provides an environment for developing, testing, deploying, and monitoring applications. Indirectly, a virtualized cloud platform acts as 'system middleware' between the infrastructure and application layers of a cloud.
• The application layer is formed from the collection of software modules needed for SaaS applications. General service applications include information retrieval, document processing, and authentication services. This layer is also used at large scale for CRM, financial transactions, and supply chain management.
Layered Cloud Architectural Development

• The application layer is also heavily used by enterprises in business marketing and sales, customer relationship management (CRM), financial transactions, and supply chain management.
• It should be noted that not all cloud services are restricted to a single layer; many applications may use resources at mixed layers. After all, the three layers are built from the bottom up with a dependence relationship.
• In general, SaaS demands the most work from the provider, PaaS is in the middle, and IaaS demands the least.
Market-Oriented Cloud Architecture

• Users, or brokers acting on users' behalf, submit service requests from anywhere in the world to the data center and cloud to be processed.
• The SLA resource allocator acts as the interface between the data center/cloud service provider and external users/brokers.
• When a service request is first submitted, the Service Request Examiner interprets the request for QoS requirements before determining whether to accept or reject it.

Fig 3.3: Market-oriented cloud architecture to expand/shrink leasing of resources with variation in QoS/demand from users.
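The examiner's accept/reject decision can be sketched as a simple admission-control check. The resource fields, the QoS terms, and the decision rule below are illustrative assumptions, not the design of any specific allocator:

```python
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    cpu_cores: int     # QoS requirement: cores needed
    memory_gb: int     # QoS requirement: memory needed
    deadline_s: float  # QoS requirement: completion deadline in seconds

def examine_request(req, free_cores, free_memory_gb, est_runtime_s):
    """Accept the request only if current capacity can meet its QoS terms."""
    if req.cpu_cores > free_cores or req.memory_gb > free_memory_gb:
        return "reject"  # not enough spare resources
    if est_runtime_s > req.deadline_s:
        return "reject"  # deadline cannot be met
    return "accept"
```

A real examiner would also consult the VM Monitor and Pricing mechanisms described next before committing resources.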
Market-Oriented Cloud Architecture

• The Pricing mechanism decides how service requests are charged. For instance, requests can be charged based on submission time (peak/off-peak), pricing rates (fixed/changing), or availability of resources (supply/demand).
• The Accounting mechanism maintains the actual usage of resources by requests so that the final cost can be computed and charged to users.
• The VM Monitor mechanism keeps track of the availability of VMs and their resource entitlements.
• The Dispatcher mechanism starts the execution of accepted service requests on allocated VMs.
• The Service Request Monitor mechanism keeps track of the execution progress of service requests.
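As a toy illustration of submission-time pricing, the following charges a request by the hour it was submitted. The peak window and both rates are made-up values, not any provider's real tariff:

```python
def price_request(cpu_hours, submit_hour, peak_rate=0.12, offpeak_rate=0.06):
    """Charge per CPU-hour, with a higher rate during assumed
    peak hours (09:00-17:00)."""
    rate = peak_rate if 9 <= submit_hour < 17 else offpeak_rate
    return round(cpu_hours * rate, 2)

print(price_request(10, 10))  # 10 CPU-hours submitted at peak
print(price_request(10, 22))  # same usage submitted off-peak
```

The Accounting mechanism would supply the measured `cpu_hours` figure from actual resource usage.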
Market-Oriented Cloud Architecture

• Multiple VMs can be started and stopped on demand on a single physical machine to meet accepted service requests, providing maximum flexibility to configure various partitions of resources on the same physical machine to the specific requirements of different service requests.
• In addition, multiple VMs can concurrently run applications based on different operating system environments on a single physical machine, since the VMs are isolated from one another.

Fig 3.3: Market-oriented cloud architecture to expand/shrink leasing of resources with variation in QoS/demand from users.
Architectural Design Challenges

Six open challenges in cloud architecture development:
1. Service Availability and Data Lock-in Problem
2. Data Privacy and Security Concerns
3. Unpredictable Performance and Bottlenecks
4. Distributed Storage and Widespread Software Bugs
5. Cloud Scalability, Interoperability, and Standardization
6. Software Licensing and Reputation Sharing
Architectural Design Challenges Cont.
1. Service Availability and Data Lock-in Problem
• Cloud services managed by a single provider can become a single point of failure.
• Lock-in happens when proprietary APIs prevent users from migrating data or applications across platforms.
• Solution: Use multiple providers and promote API standardization.

2. Data Privacy and Security Concerns
• Public cloud environments are more exposed to threats such as malware, guest hopping, or VM rootkits.
• Solution: Use encryption, virtual LANs, and firewalls, and adhere to data residency laws.

3. Unpredictable Performance and Bottlenecks
• Sharing of physical resources among VMs causes I/O bottlenecks and unpredictable performance.
• Solution: Improve I/O architecture and the virtualization of interrupts and I/O channels.
Architectural Design Challenges Cont.
4. Distributed Storage and Widespread Software Bugs
• Cloud applications demand scalable and durable storage, but software bugs at scale are hard to reproduce.
• Solution: Use virtual machines for debugging, or high-fidelity simulators.

5. Cloud Scalability, Interoperability, and Standardization
• As clouds grow, interoperability and standards compliance become harder to maintain.
• Solution: Design scalable management systems and adopt common standards.

6. Software Licensing and Reputation Sharing
• Licensing models often don't fit the cloud's dynamic nature; also, users may hesitate to trust unknown providers.
• Solution: New licensing models and reputation systems are needed to build trust and transparency.
Inter-Cloud Resource Management

Inter-cloud resource management is about enabling cloud providers to work together by sharing and trading computing and storage resources dynamically. This is done to:
• Handle spikes in workload.
• Improve service availability and performance.
• Deliver cost-effective and quality services.
Inter-Cloud Resource Management
Extended Cloud Computing Services
• Fig 3.4 shows six layers of cloud services, ranging from hardware, network, and collocation to infrastructure, platform, and software applications.
• We already introduced the top three service layers as SaaS, PaaS, and IaaS, respectively.
• The cloud platform provides PaaS, which sits on top of the IaaS infrastructure. The top layer offers SaaS.
• Although the three basic models are dissimilar in usage, they are built one on top of another. The implication is that one cannot launch SaaS applications without a cloud platform, and the cloud platform cannot be built if compute and storage infrastructures are not there.
Inter-Cloud Resource Management

Fig 3.4: A stack of six layers of cloud services and their providers.
Inter-Cloud Resource Management
Extended Cloud Computing Services Cont.

1. Cloud Service Tasks and Trends
• Cloud services are introduced in five layers. The top layer is for SaaS applications, further subdivided into five application areas, mostly for business applications. For example, CRM is heavily practiced in business promotion, direct sales, and marketing services.
• PaaS is provided by Google, Salesforce.com, and Facebook, among others.
• IaaS is provided by Amazon, Windows Azure, and Rackspace, among others.
• Collocation services require multiple cloud providers to work together to support supply chains in manufacturing.
Inter-Cloud Resource Management
Extended Cloud Computing Services Cont.
2. Software Stack for Cloud Computing
• Despite the various types of nodes in a cloud computing cluster, the overall software stack is built from scratch to meet rigorous goals. Developers have to consider how to design the system to meet critical requirements such as high throughput, high availability (HA), and fault tolerance.
• Even the operating system might be modified to meet the special requirements of cloud data processing.
• The platform for running cloud computing services can consist of either physical or virtual servers. By using VMs, the platform becomes flexible: the running services are not bound to specific hardware platforms.
• The software layer on top of the platform stores massive amounts of data. This layer acts like the file system in a traditional single machine.
• Other layers running on top of the file system execute cloud computing applications. They include the database storage system, programming support for large-scale clusters, and data query language support.
• Together, these layers form the components of the cloud software stack.
Inter-Cloud Resource Management
Extended Cloud Computing Services Cont.

3. Runtime Support Services
• As in a cluster environment, there are also runtime support services in the cloud computing environment. Cluster monitoring is used to collect the runtime status of the entire cluster.
• The scheduler queues the tasks submitted to the whole cluster and assigns them to processing nodes according to node availability.
• The SaaS model provides software applications as a service, rather than letting users purchase the software. As a result, on the customer side, there is no upfront investment in servers or software licensing.
• On the provider side, costs are rather low compared with conventional hosting of user applications. Customer data is stored in a cloud that is either vendor proprietary or a publicly hosted cloud supporting PaaS and IaaS.
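The availability-based scheduler described above can be sketched as a least-loaded assignment over per-node free slots. This is a deliberately simplified model (real cluster schedulers also weigh data locality, priorities, and fairness), and the node names and slot counts are invented:

```python
import heapq

def schedule(tasks, node_free_slots):
    """Assign each queued task to the node with the most free slots,
    a simple least-loaded policy."""
    # Max-heap on free slots, implemented by negating the counts.
    heap = [(-slots, node) for node, slots in node_free_slots.items()]
    heapq.heapify(heap)
    assignment = {}
    for task in tasks:
        neg_slots, node = heapq.heappop(heap)
        if -neg_slots <= 0:
            raise RuntimeError(f"no capacity left for task {task}")
        assignment[task] = node
        heapq.heappush(heap, (neg_slots + 1, node))  # one fewer free slot
    return assignment

print(schedule(["t1", "t2", "t3"], {"a": 2, "b": 1}))
```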
Resource Provisioning and Platform Deployment
• The emergence of computing clouds suggests fundamental changes in software and hardware architecture.
• Cloud architecture puts more emphasis on the number of processor cores or VM instances. Parallelism is exploited at the cluster node level.

Provisioning of Compute Resources (VMs):
• Providers supply cloud services by signing SLAs (Service Level Agreements) with end users.
• The SLAs must commit sufficient resources, such as CPU, memory, and bandwidth, that the user can use for a preset period.
• Underprovisioning of resources will lead to broken SLAs and penalties.
• Overprovisioning of resources will lead to resource underutilization and, consequently, a decrease in revenue for the provider.
Provisioning of Compute Resources (VMs):
Contd..
• Deploying an autonomous system to efficiently provision resources to users is a challenging problem. The difficulty comes from the unpredictability of consumer demand, software and hardware failures, heterogeneity of services, power management, and conflicts in signed SLAs between consumers and service providers.
• Efficient VM provisioning depends on the cloud architecture and the management of cloud infrastructures.
• To deploy VMs, users treat them as physical hosts with customized operating systems for specific applications. For example, Amazon's EC2 uses Xen as the virtual machine monitor (VMM). The same VMM is used in IBM's Blue Cloud.
• The EC2 platform also provides some predefined VM templates from which users can choose different kinds of VMs.
• The provider should offer resource-economic services. Power-efficient schemes for caching, query processing, and thermal management are mandatory due to increasing energy waste from data centers.
• Public or private clouds promise to streamline the on-demand provisioning of software, hardware, and data as a service, achieving economies of scale in IT deployment and operation.
Resource Provisioning Methods

Fig 3.5: Three cases of cloud resource provisioning without elasticity: (a) heavy waste due to overprovisioning, (b) underprovisioning, and (c) under- and then overprovisioning.
Resource Provisioning Methods

• Fig 3.5 shows three cases of static cloud resource provisioning policies.
• In case (a), overprovisioning for the peak load causes heavy resource waste (shaded area).
• In case (b), underprovisioning (along the capacity line) results in losses for both user and provider: demand that users have paid for (the shaded area above the capacity line) goes unserved, while wasted resources still exist in the periods where demand falls below the provisioned capacity.
• In case (c), constant provisioning of fixed capacity against a declining user demand can result in even worse resource waste. The user may give up the service by canceling the demand, resulting in reduced revenue for the provider. Both the user and provider may be losers in resource provisioning without elasticity.
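The shaded areas in Fig 3.5 can be quantified with a small helper that compares a demand trace against a fixed capacity; the demand numbers below are invented for illustration:

```python
def provisioning_loss(demand, capacity):
    """For a static capacity and a demand trace (one sample per period),
    return (wasted capacity, unserved demand) summed over the trace --
    the shaded areas in cases (a) and (b) of Fig 3.5."""
    wasted = sum(max(capacity - d, 0) for d in demand)
    unserved = sum(max(d - capacity, 0) for d in demand)
    return wasted, unserved

# Provisioning for the peak (case a) wastes capacity in every off-peak period:
demand = [20, 35, 80, 100, 60, 30]
print(provisioning_loss(demand, capacity=100))  # (275, 0)
# A lower static capacity (case b) leaves peak demand unserved instead:
print(provisioning_loss(demand, capacity=50))   # (65, 90)
```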
Resource Provisioning Methods

There are three resource provisioning methods:
1. The demand-driven method provides static resources and has been used in grid computing for many years.
2. The event-driven method is based on workload predicted by time.
3. The popularity-driven method is based on monitored Internet traffic.
Resource Provisioning Methods

1. Demand-Driven Resource Provisioning
• This method adds or removes computing instances based on the current utilization level of the allocated resources.
• For example, the demand-driven method automatically allocates two Xeon processors for a user application when the user has been using one Xeon processor more than 60 percent of the time for an extended period.
• In general, when a resource has surpassed a threshold for a certain amount of time, the scheme increases that resource based on demand. When a resource is below a threshold for a certain amount of time, that resource could be decreased accordingly.
• Amazon implements such an auto-scale feature in its EC2 platform.
• This method is easy to implement, but the scheme does not work well when the workload changes abruptly.
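The threshold rule above can be sketched as follows. The 60% upper threshold follows the example in the text, while the lower threshold and the three-sample sustain window are assumptions added for the sketch:

```python
def autoscale(instances, history, high=0.60, low=0.20, sustain=3):
    """Demand-driven rule: grow when utilization stays above `high` for
    `sustain` consecutive samples, shrink when it stays below `low`.
    `history` holds recent utilization samples (0.0-1.0), newest last."""
    window = history[-sustain:]
    if len(window) < sustain:
        return instances  # not enough evidence to act yet
    if all(u > high for u in window):
        return instances + 1
    if all(u < low for u in window) and instances > 1:
        return instances - 1
    return instances

print(autoscale(1, [0.65, 0.80, 0.70]))  # sustained high load -> 2
```

Because the rule waits for a sustained signal, it reacts slowly to abrupt workload changes, matching the caveat above.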
Resource Provisioning Methods

2. Event-Driven Resource Provisioning
• This scheme adds or removes machine instances based on a specific time event. It works better for seasonal or predicted events, during which the number of users grows before the event period and then decreases during it.
• The scheme anticipates peak traffic before it happens, resulting in minimal loss of QoS if the event is predicted correctly.
• Otherwise, wasted resources are even greater, due to events that do not follow a fixed pattern.
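A minimal event-driven sketch: capacity ramps up ahead of a known calendar event, since users arrive before the event period. The event dates, ramp length, and instance counts are all invented for illustration:

```python
import datetime

# Hypothetical event calendar: (start, end, extra instances needed).
EVENTS = [(datetime.date(2024, 11, 29), datetime.date(2024, 12, 2), 8)]

def planned_instances(day, baseline=2, ramp_days=2):
    """Return the instance count planned for `day`, scaling up
    `ramp_days` before each known event begins."""
    for start, end, extra in EVENTS:
        if start - datetime.timedelta(days=ramp_days) <= day <= end:
            return baseline + extra
    return baseline

print(planned_instances(datetime.date(2024, 11, 27)))  # ramping up early
print(planned_instances(datetime.date(2024, 11, 20)))  # normal baseline
```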
Resource Provisioning Methods

3. Popularity-Driven Resource Provisioning
• In this method, the provider monitors the popularity of certain applications on the Internet and creates instances according to popularity demand.
• The scheme anticipates increased traffic with popularity, and again suffers minimal loss of QoS if the predicted popularity is correct.
• Resources may be wasted if traffic does not occur as expected.
Global Exchange of Cloud Resources

To support a large number of application service consumers from around the world, cloud infrastructure providers (i.e., IaaS providers) have established data centers in multiple geographical locations to provide redundancy and ensure reliability in case of site failures.

For example, Amazon has data centers in the United States (e.g., one on the East Coast and another on the West Coast) and in Europe. However, Amazon currently expects its cloud customers (i.e., SaaS providers) to express a preference regarding where they want their application services to be hosted. Amazon does not provide seamless/automatic mechanisms for scaling its hosted services across multiple geographically distributed data centers.
Global Exchange of Cloud Resources
This approach has many shortcomings.
• First, it is difficult for cloud customers to determine in advance the best location for hosting their services, as they may not know the origin of the consumers of those services.
• Second, SaaS providers may not be able to meet the QoS expectations of service consumers originating from multiple geographical locations.
• This necessitates building mechanisms for seamless federation of the data centers of one or more cloud providers, supporting dynamic scaling of applications across multiple domains in order to meet the QoS targets of cloud customers.
• In addition, no single cloud infrastructure provider will be able to establish its data centers at all possible locations throughout the world. As a result, cloud application service (SaaS) providers will have difficulty meeting QoS expectations for all their consumers. Hence, they would like to make use of the services of multiple cloud infrastructure service providers who can provide better support.
• To address these challenges, the Cloudbus Project at the University of Melbourne has proposed an Inter-Cloud architecture supporting brokering and exchange of cloud resources for scaling applications across multiple clouds.
• Fig 3.6 shows the high-level components of the Melbourne group's proposed Inter-Cloud architecture.
Global Exchange of Cloud Resources

Fig 3.6: Inter-cloud exchange of cloud resources through brokering.
Global Exchange of Cloud Resources
• By realizing Inter-Cloud architectural principles as mechanisms in their offerings, cloud providers will be able to dynamically expand or resize their provisioning capability based on sudden spikes in workload demands by leasing available computational and storage capabilities from other cloud service providers.
• They will operate as part of a market-driven resource leasing federation, where application service providers such as Salesforce.com host their services based on negotiated SLA contracts driven by competitive market prices.
• They will deliver on-demand, reliable, cost-effective, and QoS-aware services based on virtualization technologies while ensuring high QoS standards and minimizing service costs.
• They need to be able to utilize market-based utility models as the basis for provisioning virtualized software services and federated hardware infrastructure among users with heterogeneous applications.
• The architecture cohesively couples the administratively and topologically distributed storage and compute capabilities of clouds as part of a single resource leasing abstraction.
• The system will ease cross-domain capability integration for on-demand, flexible, energy-efficient, and reliable access to infrastructure based on virtualization technology.
• The Cloud Exchange (CEx) acts as a market maker, bringing together service producers and consumers. It supports trading of cloud services based on competitive economic models such as commodity markets and auctions.
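The market-making role of such an exchange can be illustrated with a minimal double-auction matcher; the prices and the midpoint clearing rule are illustrative assumptions, not the actual CEx design:

```python
def match_exchange(asks, bids):
    """Match provider asks against consumer bids (price per VM-hour):
    trade while the best bid meets the best ask, clearing each trade
    at the midpoint price."""
    asks = sorted(asks)                 # cheapest ask first
    bids = sorted(bids, reverse=True)   # highest bid first
    trades = []
    while asks and bids and bids[0] >= asks[0]:
        trades.append(round((asks[0] + bids[0]) / 2, 2))
        asks.pop(0)
        bids.pop(0)
    return trades

# One provider's 0.10 ask matches the 0.30 bid; the 0.15 bid
# cannot afford the remaining 0.20 ask, so only one trade clears.
print(match_exchange([0.10, 0.20], [0.30, 0.15]))
```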
