Cloud Computing
Cloud computing is the delivery of computing services over the Internet, allowing a vast amount of data to be stored in one place and accessed from anywhere, on any device. This minimizes the cost of physically installing data centers and servers.
Some examples of cloud computing are −
Dropbox − A one-stop solution for services such as file storage, sharing, and synchronization.
Microsoft Azure − Provides a wide range of services, such as data backup and recovery from any type of disaster.
Evolution of Cloud Computing
Cloud computing has evolved from distributed systems to the technology we use today, and it is now used by businesses of all sizes and in every field.
Cloud computing has been developing since the 1950s, and companies now rely on it for their specific needs. At first cloud computing was not widely accepted; once its features became clear, people began investing in cloud data storage.
1. Distributed Systems
In a distributed system, different independent systems are connected over a network: they exchange messages even though they are physically located in various places. Some examples of distributed systems are Ethernet (a LAN technology), telecommunication networks, and parallel processing.
The Basic functions of the distributed systems are −
Resource Sharing − The Resources like data, hardware, and
software can be shared between them.
Open-to-all − The software is designed to be open, so it can be shared and extended.
Fault Detection − The error or failure in the system is detected and
can be corrected.
Apart from these functions, the main disadvantage was that all the systems had to be present in the same geographical location. This disadvantage was overcome by the
following systems −
Mainframe Computing
Cluster Computing
Grid Computing
2. Mainframe Computing
It was developed in the year 1951 and provides powerful features.
Mainframe Computing is still in existence due to its ability to deal with a
large amount of data. For a company that needs to access and share a vast
amount of data, this type of computing is preferred. Among the four types of
computers, mainframe computers perform very fast and lengthy
computations easily.
They typically handle bulk processing of data on large-scale hardware. Despite this performance, mainframe computing is very expensive.
3. Cluster Computing
In cluster computing, multiple computers are connected so that they act as a single computing system. Tasks are performed concurrently by each computer, also known as a node, connected to the network. The activities performed by any single node are known to all the nodes in the cluster, which can increase performance, transparency, and processing speed.
Cluster computing came into existence to reduce the cost of mainframes. A cluster can also be resized by adding or removing nodes.
4. Grid Computing
It was introduced in the 1990s. As in cluster computing, the structure includes different computers or nodes, but here the nodes are placed in different geographical locations and connected to the same network over the internet.
The computing methods seen so far use homogeneous nodes located in the same place; in grid computing, the nodes may belong to different organizations. Grid computing reduced the problems of cluster computing, but the distance between nodes raised a new problem.
5. Web 2.0
Web 2.0 lets users generate their own content and collaborate with other people or share information using social media, for example Facebook, Twitter, and Orkut. Web 2.0 combines second-generation World Wide Web (WWW) technology with web services, and it is the type of computing used today.
6. Virtualization
It came into existence about 40 years ago and has become a core technique used in IT firms. It employs a software layer over the hardware, and through this layer it provides customers with cloud-based services.
7. Utility Computing
Utility computing provides resources based on the user's need: users, companies, and clients can rent data storage or other resources according to their business needs and pay only for what they use.
Characteristics of Cloud Computing
Last Updated : 24 May, 2024
There are many characteristics of Cloud Computing; here are a few of them:
1. On-demand self-service: Cloud computing services do not require any human administrators; users themselves are able to provision, monitor, and manage computing resources as needed.
2. Broad network access: Computing services are generally provided over standard networks and accessible from heterogeneous devices.
3. Rapid elasticity: Computing services should have IT resources that can scale out and in quickly and on a need basis. Whenever users require services, the resources are provided to them and are scaled back in as soon as the requirement is over.
4. Resource pooling: The IT resources present (e.g., networks, servers, storage, applications, and services) are shared across multiple applications and tenants in an uncommitted manner. Multiple clients are served from the same physical resources.
5. Measured service: Resource utilization is tracked for each application and tenant, providing both the user and the resource provider with an account of what has been used. This is done for various reasons, such as billing and monitoring the effective use of resources.
6. Multi-tenancy: Cloud computing providers can support multiple tenants (users
or organizations) on a single set of shared resources.
7. Virtualization: Cloud computing providers use virtualization technology to
abstract underlying hardware resources and present them as logical resources to
users.
8. Resilient computing: Cloud computing services are typically designed with
redundancy and fault tolerance in mind, which ensures high availability and
reliability.
9. Flexible pricing models: Cloud providers offer a variety of pricing models,
including pay-per-use, subscription-based, and spot pricing, allowing users to
choose the option that best suits their needs.
10. Security: Cloud providers invest heavily in security measures to protect their
users’ data and ensure the privacy of sensitive information.
11. Automation: Cloud computing services are often highly automated, allowing
users to deploy and manage resources with minimal manual intervention.
12. Sustainability: Cloud providers are increasingly focused on sustainable
practices, such as energy-efficient data centers and the use of renewable energy
sources, to reduce their environmental impact.
Fig – characteristics of cloud computing
Parallel Computing:
In parallel computing, multiple processors perform multiple tasks assigned to them simultaneously. Memory in parallel systems can be either shared or distributed. Parallel computing provides concurrency and saves time and money.
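As a minimal sketch of the idea above, the following Python snippet uses the standard library's `concurrent.futures` to have several workers compute parts of one result simultaneously; the worker function and inputs are illustrative, not part of any real workload.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # Task assigned to a worker; each worker handles different inputs.
    return n * n

def parallel_sum_of_squares(numbers):
    # Workers run concurrently and share the process's memory,
    # matching the "shared memory" variant of parallel systems.
    # (For CPU-bound work, ProcessPoolExecutor would give true
    # multi-core parallelism; threads keep this sketch simple.)
    with ThreadPoolExecutor(max_workers=4) as pool:
        return sum(pool.map(square, numbers))

print(parallel_sum_of_squares(range(10)))  # 285
```

The map/reduce shape here (distribute inputs, combine results) is the same pattern whether the workers are threads, processes, or machines on a network.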
Distributed Computing :
In distributed computing, we have multiple autonomous computers which appear to the user as a single system. In distributed systems there is no shared memory, and computers communicate with each other through message passing. In distributed computing, a single task is divided among different computers.
Difference between Parallel Computing and Distributed Computing:
| S.No | Parallel Computing | Distributed Computing |
|------|--------------------|-----------------------|
| 1. | Many operations are performed simultaneously | System components are located at different locations |
| 2. | A single computer is required | Uses multiple computers |
| 3. | Multiple processors perform multiple operations | Multiple computers perform multiple operations |
| 4. | It may have shared or distributed memory | It has only distributed memory |
| 5. | Processors communicate with each other through a bus | Computers communicate with each other through message passing |
| 6. | Improves system performance | Improves system scalability, fault tolerance, and resource sharing capabilities |
Elasticity in cloud computing
Elasticity in cloud computing allows businesses to adjust their capacity to meet
demand, either manually or automatically. It can help businesses save time and money
by eliminating the need for extra capacity or lengthy purchasing processes.
Cloud Elasticity: Elasticity refers to the ability of a cloud to automatically expand or compress infrastructural resources on a sudden rise or fall in requirements, so that the workload can be managed efficiently. This elasticity helps to minimize infrastructural costs. It is not applicable to all kinds of environments; it helps only in scenarios where resource requirements fluctuate suddenly for a specific interval of time. It is not practical where persistent resource infrastructure is required to handle a heavy workload.
Elasticity is vital for mission-critical or business-critical applications, where any compromise in performance may lead to enormous business losses. Thus, elasticity comes into the picture: extra resources are provisioned for such applications to meet performance requirements.
It works in such a way that when the number of client accesses increases, applications are automatically provisioned additional computing, storage, and network resources (CPU, memory, storage, or bandwidth), and when there are fewer clients, those resources are automatically reduced as required.
Elasticity in the cloud is a popular feature associated with scale-out solutions (horizontal scaling), which allows resources to be dynamically added or removed when required.
It is mostly associated with public cloud resources and is generally featured in pay-per-use or pay-as-you-go services.
Elasticity is the ability to grow or shrink infrastructure resources (such as compute, storage, or network) dynamically as needed to adapt to workload changes in the applications, in an autonomic manner.
It makes for maximum resource utilization, which results in savings in overall infrastructure costs.
Depending on the environment, elasticity can be applied to resources in the system that are not limited to hardware: software, network, QoS, and other policies as well.
Elasticity depends entirely on the environment: in some cases it may become a negative trait, for example where the performance of certain applications must be guaranteed.
It is most commonly used in pay-per-use public cloud services, where IT managers are willing to pay only for the duration for which they consumed the resources.
Example: Consider an online shopping site whose transaction workload increases during a festive season like Christmas. For this specific period of time, the resources need to spike up. To handle this kind of situation, we can go for a Cloud Elasticity service rather than Cloud Scalability. As soon as the season is over, the deployed resources can be requested for withdrawal.
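The festive-season scenario can be sketched as a simple scaling policy; the thresholds, instance counts, and function name below are illustrative assumptions, not any cloud provider's actual API.

```python
def desired_instances(current, avg_load_pct, min_n=1, max_n=20):
    # Scale out when instances are overloaded, scale back in when
    # they sit mostly idle; the 80%/20% thresholds are illustrative.
    if avg_load_pct > 80:
        target = current * 2          # sudden spike: double capacity
    elif avg_load_pct < 20:
        target = current // 2         # demand gone: release capacity
    else:
        target = current              # steady state: no change
    return max(min_n, min(target, max_n))

# Festive-season spike, then the season ends:
print(desired_instances(2, 95))   # 4  (scale out)
print(desired_instances(4, 10))   # 2  (scale back in)
```

The `min_n`/`max_n` clamp mirrors the floor and ceiling that real autoscaling groups impose so that a traffic spike cannot provision unbounded (and unbilled-for) capacity.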
Cloud Scalability: Cloud scalability is used to handle the growing workload where
good performance is also needed to work efficiently with software or applications.
Scalability is commonly used where the persistent deployment of resources is
required to handle the workload statically.
Example: Consider that you are the owner of a company whose database size was small in its earlier days. As time passed, your business grew and the size of your database increased, so in this case you just need to ask your cloud service vendor to scale up your database capacity to handle the heavier workload.
It is totally different from what you have read above in Cloud Elasticity. Scalability
is used to fulfill the static needs while elasticity is used to fulfill the dynamic need
of the organization. Scalability is a similar kind of service provided by the cloud
where the customers have to pay-per-use. So, in conclusion, we can say that
Scalability is useful where the workload remains high and increases statically.
Types of Scalability:
1. Vertical Scalability (Scale-up): In this type of scalability, we increase the power of existing resources in the working environment in an upward direction.
2. Horizontal Scalability (Scale-out): In this kind of scaling, resources are added in a horizontal row.
3. Diagonal Scalability: A mixture of horizontal and vertical scalability, where resources are added both vertically and horizontally.
Difference Between Cloud Elasticity and Scalability :
| | Cloud Elasticity | Cloud Scalability |
|---|------------------|-------------------|
| 1 | Elasticity is used just to meet sudden ups and downs in the workload for a small period of time. | Scalability is used to meet the static increase in the workload. |
| 2 | Elasticity is used to meet dynamic changes, where the resource needs can increase or decrease. | Scalability is always used to address the increase in workload in an organization. |
| 3 | Elasticity is commonly used by small companies whose workload and demand increase only for a specific period of time. | Scalability is used by giant companies whose customer base persistently grows, in order to carry out their operations efficiently. |
| 4 | It is short-term planning, adopted just to deal with an unexpected or seasonal increase in demand. | Scalability is long-term planning, adopted to deal with an expected increase in demand. |
What is REpresentational State
Transfer (REST)
REST (REpresentational State Transfer) is an architectural style for developing web
services and systems that can easily communicate with each other. REST is popular
due to its simplicity and the fact that it builds upon existing systems and features of
the internet's HTTP to achieve its objectives, as opposed to creating new standards,
frameworks and technologies.
It is popularly believed that REST is a protocol or standard. However, it is neither.
REST is an architectural style that is commonly adopted for building web-based
application programming interfaces (APIs).
In this architectural style, systems interact through operations on resources. Resources
include all data and functionality. They are accessed using Uniform Resource
Identifiers (URIs) and acted upon using simple operations.
Systems, service interfaces or APIs that comply with the REST architectural style are
called RESTful systems (or RESTful APIs). Typically, these applications are
lightweight, fast, reliable, scalable and portable.
REST constraints
For a system to be RESTful, it must satisfy five mandatory constraints:
It must have a uniform interface to simplify system architecture and improve the
visibility of interactions between system components.
It must incorporate the client-server design pattern, allowing for the separation of
concerns and for the client and server implementations to be done independently.
It must be stateless, meaning that the server and client don't need to know
anything about each other's state so they can both understand the messages
received from each other without having to see previous messages.
It must be cacheable, meaning a response should label itself as cacheable or
noncacheable, and copies of frequently accessed data must be stored in
multiple caches along the request-response path.
It must be layered to constrain component behavior, to remove the need to edit
code (on the client or server) and to improve the web app's security.
In addition, the client system might extend its functionality with the help of
code applets or scripts. This constraint in REST is generally known as "code on
demand."
REST and the HTTP methods
In a REST system, numerous resource methods are used for resource interactions and
to enable resource state transitions. These methods are also known as HTTP verbs.
The default operation of HTTP is GET, used when retrieving a resource or set of
resources from the server by specifying the resource ID. In addition to GET, HTTP
also defines several other request methods, including PUT (update a resource by ID),
POST (create a new resource) and DELETE (remove a resource by ID).
The REST philosophy asserts that to delete something on the server, you would
simply use the URL for the resource and specify the DELETE method of HTTP. For
saving data to the server, a URL and the PUT method would be used. For operations
that are more involved than simply saving, reading or deleting information, the POST
method of HTTP can be used.
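The verb semantics described above can be sketched with a small in-memory resource store; this is not a real HTTP server, and the dispatch function, resource shape, and status codes are wired together here purely for illustration.

```python
# In-memory resource collection illustrating REST verb semantics.
store = {}
next_id = 1

def handle(method, resource_id=None, body=None):
    """Dispatch a request the way a RESTful service would,
    returning an (HTTP status code, payload) pair."""
    global next_id
    if method == "POST":                  # create a new resource
        rid, next_id = next_id, next_id + 1
        store[rid] = body
        return 201, rid
    if resource_id not in store:          # all other verbs need a valid ID
        return 404, None
    if method == "GET":                   # read a resource by ID
        return 200, store[resource_id]
    if method == "PUT":                   # update a resource by ID
        store[resource_id] = body
        return 200, resource_id
    if method == "DELETE":                # remove a resource by ID
        del store[resource_id]
        return 200, resource_id
    return 400, None                      # unrecognized method

status, rid = handle("POST", body={"item": "book"})
print(status, handle("GET", rid))
```

Note how the URI-plus-verb pair fully determines the operation: the same resource ID is read, updated, and deleted without any operation name appearing in the payload.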
Advantages of REST
Using the REST architecture for web apps offers the following advantages:
Resource-based. REST enforces statelessness through resources rather than
commands, improving reliability, performance and scalability.
Simple interface. In REST, each resource involved in client-server interactions is
identified and is uniformly represented in the server response to define a consistent
and simple interface for all interactions.
Familiar constructs. REST interactions are based on constructs that are familiar
to anyone accustomed to using HTTP, including operations (GET, POST,
DELETE, etc.) and URIs. That said, REST and HTTP are not the same and
developers must note the differences when implementing and using REST.
Communication. The status of REST-based interactions between the server and
clients is communicated through numerical HTTP status codes. REST APIs use the
following HTTP status codes to detect errors and ease the API monitoring process:
o 400 error indicates that the request cannot be processed due to a bad request.
o 404 error indicates that a requested resource wasn't found.
o 401 status response code is triggered by an unauthorized request.
o 200 status response code indicates that a request was successful.
o 500 error signals an unexpected internal server error.
All communications between service agents and components are completely visible,
increasing the system's reliability.
Language-independent. When creating RESTful APIs or web services,
developers can employ any language that uses HTTP.
Widespread use. REST is widely used, making it a popular choice for numerous
server- and client-side implementations. For example, on the server side,
developers can employ REST-based frameworks like Restlet and Apache CXF,
while on the client side, they can employ jQuery, Node.js, Angular or Ember.js,
and invoke RESTful web services using standard libraries built into their APIs.
Web APIs. RESTful services employ effective HTTP mechanisms for caching to
reduce latency and the load on servers.
Separation of client and server. By providing many endpoints, a REST API
makes it easy to create complex queries that can meet specific deployment needs.
Also, different clients hit the same REST endpoints and receive the same responses
if they use a REST interface, improving reliability and performance.
Resilience. In a REST system, the failure of a single connector or component does
not result in the entire system collapsing.
Disadvantages of REST
Its benefits notwithstanding, there are some disadvantages of the REST architecture of
which developers should be aware:
Design limitations. There are some limitations of the REST architecture design.
These include multiplexing several requests over a single TCP connection, having
different resource requests for each resource file, server request uploads and long
HTTP request headers, which cause delays in webpage loading. Also, the freedom
that REST provides regarding design decisions can make REST APIs harder to
maintain.
Stateless applications. Because the server does not store state-based information
between request-response cycles, the client must perform state management tasks,
which makes it difficult to implement server updates without using client-side
polling or other types of webhooks that send data and executable commands
between apps.
Definition. REST lacks a clear reference implementation or a definitive standard
to determine whether a design can be defined as RESTful or whether a web API
conforms to REST-based principles.
Data overfetching/underfetching. RESTful services frequently return large
amounts of unusable data along with relevant information -- typically the result of
multiple server queries -- increasing the time it takes for a client to return all the
required data.
What is REST?
REpresentational State Transfer (REST) is a software architectural style that defines constraints for creating web services. Web services that follow the REST architectural style are called RESTful web services. REST enables computer systems to interoperate through web services. The REST architectural style describes six constraints.
1. Uniform Interface
The uniform interface defines the interface between client and server. It simplifies and decouples the architecture, which enables every part to be developed independently. The uniform interface has four guiding principles:
o Resource-based: Individual resources are identified in requests using URIs as resource identifiers. The resources themselves are distinct from the representations returned to the client. For example, the server does not send its database, but rather HTML, XML, or JSON representing some database records, depending on the request and the implementation details.
o Manipulation of resources through representations: When a client holds a representation of a resource, including any associated metadata, it has enough information to modify or delete that resource on the server.
o Self-Descriptive Message: Each message contains enough information to
describe how the message is processed. For example, the parser can be
specified by the Internet media type (known as the MIME type).
o Hypermedia As The Engine Of Application State (HATEOAS): Clients deliver state via query-string parameters, body content, request headers, and the requested URI. Services deliver state to clients via response codes, response headers, and response body content. This mechanism is called hypermedia (hyperlinks within hypertext).
o In addition to the above, HATEOAS also means that, where necessary, links are contained in the returned body (or headers) to supply the URI for retrieving the object itself or related objects.
o Having the same interface for any REST service is fundamental to the design.
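As a rough sketch of HATEOAS, a representation can carry links that advertise the next valid state transitions; the `orders` resource, link relations, and URIs below are invented for illustration only.

```python
import json

def order_representation(order_id, status):
    # Every representation links to itself; additional links appear
    # only when the corresponding transition is currently allowed.
    links = [{"rel": "self", "href": f"/orders/{order_id}"}]
    if status == "pending":
        # Only a pending order may be cancelled, so only then does
        # the representation advertise that transition to the client.
        links.append({"rel": "cancel", "href": f"/orders/{order_id}/cancel"})
    return json.dumps({"id": order_id, "status": status, "links": links})

print(order_representation(7, "pending"))
```

A client driven by these links never hardcodes which operations exist; it discovers them from each response, which is what "hypermedia as the engine of application state" means in practice.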
2. Client-server
The client-server constraint separates the client from the server. For example, separating user-interface concerns from data-storage concerns improves the portability of the client code across platforms. Servers are not concerned with the user interface or user state, which makes them simpler and more scalable. Servers and clients can be replaced and developed independently, as long as the interface between them is unchanged.
3. Stateless
Stateless means the state of the service does not persist between subsequent requests and responses: the request itself contains all the state required to handle it. That state can be carried in a query-string parameter, the entity body, or a header, or as part of the URI. The URI identifies the resource, and the body contains the state (or state change) of that resource. After the server processes the request, the appropriate pieces of state are sent back to the client through headers, status codes, and the response body.
o Most of us in the industry have been accustomed to programming within a container, which gives us the concept of a "session" that maintains state across multiple HTTP requests. In REST, the client must include all information needed for the server to fulfil the request, resending state as necessary across multiple requests. Statelessness enables greater scalability because the server does not maintain, update, or communicate any session state. The resource state is the data that defines a resource representation.
For example, the data stored in a database is resource state. Application state, by contrast, is data that may vary by client and request. Resource state is constant for every client who requests it.
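Statelessness can be illustrated with a paging handler in which every request carries the state the server needs (here, a cursor), so no session is kept between calls; the field names and collection are illustrative.

```python
ITEMS = [f"item-{i}" for i in range(10)]      # the resource state

def list_items(request):
    # The server keeps no session: the client's own request supplies
    # the offset, and the response tells it what to send next time.
    offset = request.get("offset", 0)
    limit = request.get("limit", 3)
    page = ITEMS[offset:offset + limit]
    return {"items": page, "next_offset": offset + len(page)}

first = list_items({})                        # client sends no state yet
second = list_items({"offset": first["next_offset"]})
print(second["items"])                        # ['item-3', 'item-4', 'item-5']
```

Because each call is self-contained, any server replica can answer any request, which is exactly why statelessness improves scalability.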
4. Layered system
A client cannot tell whether it is connected directly to the end server or to an intermediary along the way. Intermediate servers improve system scalability by enabling load balancing and providing shared caches. Layers can also enforce security policies.
5. Cacheable
On the World Wide Web, clients can cache responses. Responses must therefore, implicitly or explicitly, label themselves as cacheable or non-cacheable, to prevent clients from reusing stale or inappropriate data in further requests. Well-managed caching eliminates some client-server interactions, improving scalability and performance.
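A response cache that honors the cacheable/non-cacheable label might be sketched as follows; the TTL, cache-entry layout, and `origin` callback are assumptions made for illustration, not a real HTTP cache implementation.

```python
import time

_cache = {}   # uri -> {"body": ..., "at": timestamp}

def fetch(uri, origin, ttl=60, now=time.time):
    # Serve a fresh cached copy when one exists; otherwise ask the
    # origin, which labels its own response cacheable or not.
    entry = _cache.get(uri)
    if entry is not None and now() - entry["at"] < ttl:
        return entry["body"], True            # cache hit
    body, cacheable = origin(uri)
    if cacheable:                             # only store labeled responses
        _cache[uri] = {"body": body, "at": now()}
    return body, False                        # served by the origin
```

A second `fetch` of a cacheable URI skips the client-server interaction entirely, which is the scalability gain the constraint is after; a non-cacheable URI goes to the origin every time.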
6. Code on Demand (optional)
Servers can temporarily extend or customize the functionality of a client by transferring logic to it that the client can execute. Examples include compiled components such as Java applets and client-side scripts.
Complying with these constraints enables any distributed hypermedia system to have desirable emergent properties such as performance, scalability, modifiability, visibility, portability, and reliability.
Note: Code on Demand is the only optional constraint of the REST architecture. If a service violates any of the other constraints, it cannot strictly be called RESTful.
Virtualization in Cloud Computing and Types
Last Updated : 13 Jul, 2024
Virtualization is used to create a virtual version of an underlying service. With the help of virtualization, multiple operating systems and applications can run on the same machine and the same hardware at the same time, increasing the utilization and flexibility of the hardware. It was initially developed during the mainframe era.
It is one of the main cost-effective, hardware-reducing, and energy-saving techniques
used by cloud providers. Virtualization allows sharing of a single physical instance of
a resource or an application among multiple customers and organizations at one time.
It does this by assigning a logical name to physical storage and providing a pointer to
that physical resource on demand. The term virtualization is often synonymous with
hardware virtualization, which plays a fundamental role in efficiently delivering
Infrastructure-as-a-Service (IaaS) solutions for cloud computing. Moreover,
virtualization technologies provide a virtual environment for not only executing
applications but also for storage, memory, and networking.
Host Machine: The machine on which the virtual machine is going to be
built is known as Host Machine.
Guest Machine: The virtual machine is referred to as a Guest Machine.
Work of Virtualization in Cloud Computing
Virtualization has a prominent impact on cloud computing. In cloud computing, users store data in the cloud, but with virtualization they gain the extra benefit of sharing the infrastructure. Cloud vendors take care of the required physical resources, but they charge a significant amount for these services, which affects every user and organization. Virtualization lets users and organizations have the services a company requires maintained by external (third-party) people, which helps reduce costs for the company. This is the way virtualization works in cloud computing.
Benefits of Virtualization
More flexible and efficient allocation of resources.
Enhance development productivity.
It lowers the cost of IT infrastructure.
Remote access and rapid scalability.
High availability and disaster recovery.
Pay-per-use access to IT infrastructure on demand.
Enables running multiple operating systems.
Drawbacks of Virtualization
High Initial Investment: Clouds have a very high initial investment, but it
is also true that it will help in reducing the cost of companies.
Learning New Infrastructure: As the companies shifted from Servers to
Cloud, it requires highly skilled staff who have skills to work with the cloud
easily, and for this, you have to hire new staff or provide training to current
staff.
Risk of Data: Hosting data on third-party resources can put that data at risk; it has a chance of being attacked by any hacker or cracker very easily.
For more benefits and drawbacks, you can refer to the Pros and Cons of
Virtualization.
Characteristics of Virtualization
Increased Security: The ability to control the execution of a guest
program in a completely transparent manner opens new possibilities for
delivering a secure, controlled execution environment. All the operations of
the guest programs are generally performed against the virtual machine,
which then translates and applies them to the host machine.
Managed Execution: In particular, sharing, aggregation, emulation, and
isolation are the most relevant features.
Sharing: Virtualization allows the creation of a separate computing
environment within the same host.
Aggregation: It is possible to share physical resources among several
guests, but virtualization also allows aggregation, which is the opposite
process.
For more characteristics, you can refer to Characteristics of Virtualization.
Types of Virtualization
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data virtualization
1. Application Virtualization: Application virtualization gives a user remote access to an application hosted on a server. The server stores all personal information and other characteristics of the application, yet the application can still be run on a local workstation through the internet. An example would be a user who needs to run two different versions of the same software. Technologies that use application virtualization include hosted applications and packaged applications.
2. Network Virtualization: The ability to run multiple virtual networks, each with a separate control and data plane, co-existing on top of one physical network. They can be managed by individual parties that are potentially confidential to each other. Network virtualization provides a facility to create and provision virtual networks, logical switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload security within days or even weeks.
3. Desktop Virtualization: Desktop virtualization allows the users’ OS to be stored remotely on a server in the data center. It lets users access their desktops virtually, from any location, on a different machine. Users who want an operating system other than Windows Server will need a virtual desktop. The main benefits of desktop virtualization are user mobility, portability, and easy management of software installation, updates, and patches.
4. Storage Virtualization: Storage virtualization is an array of servers managed by a virtual storage system. The servers aren’t aware of exactly where their data is stored and instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository. Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.
5. Server Virtualization: This is a kind of virtualization in which the masking of server resources takes place. Here the central (physical) server is divided into multiple virtual servers by changing their identity numbers and processors, so each system can run its own operating system in an isolated manner, while each sub-server still knows the identity of the central server. This increases performance and reduces operating costs by dividing the main server's resources into sub-server resources. It is beneficial for virtual migration, reducing energy consumption, reducing infrastructure costs, etc.
6. Data Virtualization: This is the kind of virtualization in which data is collected from various sources and managed in a single place, without needing to know technical details such as how the data was collected, stored, and formatted. The data is then arranged logically so that its virtual view can be accessed remotely by interested stakeholders and users through various cloud services. Many giant companies, such as Oracle, IBM, AtScale, and CData, provide these services.
Uses of Virtualization
Data-integration
Business-integration
Service-oriented architecture data-services
Searching organizational data