Unit 2 Cloud Computing
UNIT-2
Cloud Service Models
Characteristics of IaaS
A PaaS cloud computing platform is created for programmers to develop, test, run,
and manage applications.
Characteristics of PaaS
Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App
Engine, Apache Stratos, Magento Commerce Cloud, and OpenShift.
Characteristics of SaaS
The table below shows the differences between IaaS, PaaS, and SaaS -
What is a Service?
Service Connections
Service-Oriented Terminologies
o Services - The services are the logical entities defined by one or more
published interfaces.
o Service provider - It is a software entity that implements a service
specification.
o Service consumer - It can be called a requestor or client that calls a service
provider. A service consumer can be another service or an end-user
application.
o Service locator - It is a service provider that acts as a registry. It is responsible
for examining service provider interfaces and service locations.
o Service broker - It is a service provider that passes service requests to one or
more additional service providers.
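The roles above can be sketched in a few lines of Python. This is a minimal in-process illustration, not a real SOA framework; the class names (ServiceRegistry, GreetingService) and method names are illustrative assumptions.

```python
class ServiceRegistry:
    """Service locator: maps published interface names to providers."""
    def __init__(self):
        self._services = {}

    def publish(self, interface_name, provider):
        # a service provider publishes itself under an interface name
        self._services[interface_name] = provider

    def lookup(self, interface_name):
        # a consumer asks the locator where the service lives
        return self._services[interface_name]


class GreetingService:
    """Service provider: implements the published 'greeting' interface."""
    def invoke(self, name):
        return f"Hello, {name}"


# Service consumer: finds the provider via the registry and calls it.
registry = ServiceRegistry()
registry.publish("greeting", GreetingService())

provider = registry.lookup("greeting")
print(provider.invoke("cloud user"))   # prints: Hello, cloud user
```

In a real system the registry, provider, and consumer would be separate processes talking over a network protocol, but the division of responsibilities is the same.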
Characteristics of SOA
The service-oriented architecture stack can be categorized into two parts - functional
aspects and quality of service aspects.
Functional aspects
o Transport - It transports the service requests from the service consumer to the
service provider and service responses from the service provider to the service
consumer.
o Service Communication Protocol - It allows the service provider and the
service consumer to communicate with each other.
o Service Description - It describes the service and data required to invoke it.
o Service - It is an actual service.
o Business Process - It represents the group of services called in a particular
sequence associated with the particular rules to meet the business
requirements.
o Service Registry - It contains the service descriptions and data that service
providers use to publish their services.
Advantages of SOA
MULTICORE TECHNOLOGY:
A set of servers with multicore processors can allow the cloud to
create and scale up more VM instances on demand.
Performance
Reliability
In multi-core CPUs, software can be assigned to different cores. When one piece
of software fails, the others remain unaffected; a defect affects
only one core. As a result, multi-core CPUs are better able to tolerate faults.
Application Speed
Although a multi-core CPU is designed for multitasking, it does not necessarily
speed up a single application. An application tends to bounce from one core to the
next while it runs, so each core's cache must be refilled, which reduces speed.
Jitter
Analysis
Doing two or more things at once requires more complex memory models, which
makes analysis of a multi-core machine tough. Timing constraints, in particular,
are difficult to determine and may be inaccurate.
Storage Devices
Block storage devices offer raw storage to clients. This raw storage is
partitioned to create volumes.
File storage devices offer storage to clients in the form of files, maintaining their
own file system. This storage typically takes the form of Network Attached Storage (NAS).
Unmanaged cloud storage means the storage is preconfigured for the customer. The
customer can neither format it, nor install his own file system, nor change drive
properties.
Managed cloud storage offers online storage space on-demand. The managed cloud
storage system appears to the user to be a raw disk that the user can partition and
format.
The cloud storage system stores multiple copies of data on multiple servers at
multiple locations. If one system fails, only the pointer to the location
where the object is stored needs to be changed.
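The replication idea can be illustrated with a toy Python sketch: the same object is written to several "servers", and a read is simply redirected to a healthy replica when one server fails. The class and server names below are illustrative assumptions, not a real cloud storage API.

```python
class ReplicatedStore:
    def __init__(self, servers):
        # each "server" is just a dict acting as its object store
        self.servers = {name: {} for name in servers}
        self.failed = set()

    def put(self, key, value):
        # write a copy of the object to every server
        for store in self.servers.values():
            store[key] = value

    def get(self, key):
        # point the read at the first healthy replica holding the object
        for name, store in self.servers.items():
            if name not in self.failed and key in store:
                return store[key]
        raise KeyError(key)

store = ReplicatedStore(["us-east", "eu-west", "ap-south"])
store.put("report.pdf", b"contents")
store.failed.add("us-east")           # one location goes down
print(store.get("report.pdf"))        # still served from another replica
```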
To aggregate storage assets into cloud storage systems, the cloud provider can
use storage virtualization software known as StorageGRID. It creates a
virtualization layer that pools storage from different storage devices into a single
management system. It can also manage data from CIFS and NFS file systems over
the Internet. The following diagram shows how StorageGRID virtualizes storage
into storage clouds:
Networking technologies:
DNS: The Domain Name System (DNS) is a protocol that is used to translate
human-readable domain names (such as www.google.com) into IP addresses
that computers can understand.
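The translation step can be illustrated with a toy lookup table standing in for real DNS servers. The addresses below are made-up examples, not actual DNS records, and the resolve function is an illustrative sketch, not a real resolver API.

```python
# Toy name-to-address table standing in for the distributed DNS database.
DNS_TABLE = {
    "www.google.com": "142.250.0.1",     # example address, not a real record
    "example.org": "93.184.216.34",
}

def resolve(domain, table=DNS_TABLE):
    """Translate a human-readable domain name into an IP address string."""
    try:
        return table[domain]
    except KeyError:
        # real resolvers return an NXDOMAIN error for unknown names
        raise LookupError(f"NXDOMAIN: {domain}")

print(resolve("www.google.com"))
```

A real resolver would query a hierarchy of name servers (root, TLD, authoritative) and cache the answers, but the input and output are the same: a name goes in, an address comes out.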
Firewall: A firewall is a security device that is used to monitor and control
incoming and outgoing network traffic. Firewalls are used to protect networks
from unauthorized access and other security threats.
LAN: A Local Area Network (LAN) is a network that covers a small area, such as
an office or a home. LANs are typically used to connect computers and other
devices within a building or a campus.
WAN: A Wide Area Network (WAN) is a network that covers a large geographic
area, such as a city, country, or even the entire world. WANs are used to connect
LANs together and are typically used for long-distance communication.
Cloud Networks: A cloud network can be viewed as a Wide Area Network
(WAN), since it is hosted by public or private cloud service providers and is
available on demand. Cloud networks consist of virtual routers, firewalls, and
similar virtualized components.
WEB 2.0
Although Web 2.0 is more of a change in thinking than a truly updated version of
the World Wide Web, a few key web technologies introduced this
massive shift in the way users viewed and interacted with web pages. Here are just a
few examples of this technology and its improved compatibility:
JavaScript
Adobe Flash
Microsoft Silverlight
RSS
Eclipse
Ajax
Another way to think of this reshaping of the World Wide Web is called the
“read/write” web. Since this reframing of the internet, web users have been able to
communicate in real-time with servers, edit web pages, post comments, and
communicate with other users. Here are just a few categories to help you
understand this major shift in the way the web is used.
Social Web
Thanks to hypertext transfer protocol (HTTP) and other innovations, Web 2.0 acts
as a social web. Users can add comments, like pages, submit reviews, and create
social media accounts for increased levels of interaction. All of this user-generated
content dramatically increases opportunities for communication across all users.
Web 3.0
Web 3.0 is a prediction of the future of the web. It is also called Web3.
Web 3.0 focuses on increased compatibility, decentralized implementation of
user-generated content, and tokenization which relies on blockchain
technology. The result is a web that isn’t static or reliant on Big Tech
corporations.
The exact programming and applications of Web 3.0 aren’t fully realized. This
means that the definition and use of this technology are still changing and
evolving.
Artificial Intelligence
1. A workflow model: This shows the series of activities in the process along
with their inputs, outputs, and dependencies. The activities in this model
represent human actions.
2. A dataflow or activity model: This represents the process as a set of
activities, each of which carries out some data transformation. It shows how
the input to the process, such as a specification, is converted to an output such
as a design. The activities here may be at a lower level than the activities in a
workflow model.
1. The waterfall approach: This takes the above activities and represents them as
separate process phases such as requirements specification, software design,
implementation, testing, and so on. After each stage is defined, it is "signed off"
and development goes on to the following stage.
2. Evolutionary development: This method interleaves the activities of
specification, development, and validation. An initial system is rapidly
developed from a very abstract specification.
3. Formal transformation: This method is based on producing a formal
mathematical system specification and transforming this specification, using
mathematical methods, into a program. These transformations are 'correctness
preserving,' which means you can be sure that the developed program
meets its specification.
4. System assembly from reusable components: This method assumes that
parts of the system already exist. The system development process focuses on
integrating these parts rather than developing them from scratch.
Software Crisis
1. Size: Software is becoming more expensive and more complex as the
expectations placed on it grow. For example, the code in
consumer products is doubling every couple of years.
2. Quality: Many software products have poor quality, i.e., they exhibit
defects after being put into use because of ineffective testing techniques. For
example, software testing typically finds 25 errors per 1000 lines of code.
3. Cost: Software development is costly, both in the time taken to develop and
in the money involved. For example, development of the FAA's Advanced
Automation System cost over $700 per line of code.
4. Delayed Delivery: Serious schedule overruns are common. Very often the
software takes longer than the estimated time to develop, which in turn leads
to cost shooting up. For example, one in four large-scale development projects
is never completed.
Software is more than programs. Any program is a subset of software; it becomes
software only when documentation and operating-procedure manuals are prepared.
Agile Software Development Life Cycle (SDLC) is a combination of the iterative
and incremental process models. It focuses on process adaptability and customer
satisfaction through rapid delivery of a working software product. Agile SDLC breaks
the product down into small incremental builds, which are delivered in iterations.
In the agile SDLC development process, the customer is able to see the result and
judge whether he or she is satisfied with it. This is one of the advantages of
the agile SDLC model. One of its disadvantages is the absence of defined requirements,
so it is difficult to estimate the resources and development cost.
Requirements Gathering
In this phase, you must define the requirements. You should explain the business
opportunity and plan the time and effort needed to build the project. Based on this
information, you can evaluate technical and economic feasibility.
When you have identified the project, work with stakeholders to define requirements.
You can use the user flow diagram or the high-level UML diagram to show the work
of new features and show how it will apply to your existing system.
Construction/ Iteration
When the team has defined the requirements, the work begins. The designers and
developers start working on the project; their aim is to deploy a working product
within the estimated time. The product will go through various stages of
improvement, so at first it includes only simple, minimal functionality.
Deployment
In this phase, the team issues a product for the user's work environment.
Testing
In this phase, the Quality Assurance team examines the product's performance and
looks for bugs.
Feedback
After the product is released, the last step is feedback. In this step, the team
receives feedback about the product and works through it.
Pervasive Computing
Applications:
There are a rising number of pervasive devices available in the market nowadays.
The areas of application of these devices include:
Retail
Airline booking and check-in
Sales force automation
Healthcare
Tracking
Car information systems
Email access via WAP (Wireless Application Protocol) and voice.
Application environments
An environment is a user-defined collection of resources that hosts an application.
Referenced: ${p:environment/propertyName}.
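Resolving a ${p:environment/propertyName}-style reference can be sketched in Python, assuming environment properties are held in a plain dict. The regex and helper below are illustrative, not the deployment product's real resolver.

```python
import re

def resolve_refs(text, env_props):
    """Replace ${p:environment/NAME} references with values from env_props."""
    pattern = re.compile(r"\$\{p:environment/([A-Za-z0-9_.-]+)\}")
    return pattern.sub(lambda m: env_props[m.group(1)], text)

# Hypothetical environment properties for illustration.
props = {"dbHost": "db01.internal", "port": "5432"}
cmd = "connect ${p:environment/dbHost}:${p:environment/port}"
print(resolve_refs(cmd, props))   # connect db01.internal:5432
```

The point is that the same component definition can be deployed to different environments, with each environment supplying its own property values.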
Creating environments
Before you can run a deployment, you must define at least one application
environment that associates components with an agent on the target host.
Creating environments from WebSphere Application Server cells
You can import information about a WebSphere® Application Server cell into
resources and then use those resources in an environment.
Creating environment gates
To create an environment gate, specify the conditions that must be met
before component versions can be deployed to the environment.
The creation of a virtual machine on top of an existing operating system and
hardware is known as hardware virtualization. A virtual machine provides an
environment that is logically separated from the underlying hardware.
The machine on which the virtual machine is created is known as the host
machine, and the virtual machine is referred to as the guest machine.
Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
directly on the hardware system, this is known as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory,
and other hardware resources.
Usage:
Hardware virtualization is mainly done for the server platforms, because controlling
virtual machines is much easier than controlling a physical server.
2) Operating System Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
on the host operating system instead of directly on the hardware system, this is
known as operating system virtualization.
Usage:
3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
directly on the server system, this is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into
multiple servers on demand and to balance the load.
4) Storage Virtualization:
Storage virtualization is the process of grouping physical storage from multiple
network storage devices so that it appears to be a single storage device.
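The pooling idea can be sketched in Python: several physical devices sit behind one logical interface, and the client sees only the aggregate. The class names and capacities below are illustrative assumptions, not a real storage API.

```python
class PhysicalDevice:
    """One network storage device with a fixed capacity."""
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb


class VirtualStoragePool:
    """Virtualization layer: presents many devices as a single store."""
    def __init__(self, devices):
        self.devices = devices

    @property
    def total_capacity_gb(self):
        # the client sees one aggregate capacity, not individual devices
        return sum(d.capacity_gb for d in self.devices)


pool = VirtualStoragePool(
    [PhysicalDevice(500), PhysicalDevice(1000), PhysicalDevice(250)]
)
print(pool.total_capacity_gb)   # 1750
```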
Usage: