Cloud Computing Material

The document provides an overview of cloud computing, detailing five types of clouds: Public, Private, Hybrid, Community, and Multi-Cloud, each serving different organizational needs. It also discusses the cloud computing lifecycle, key characteristics, service models, and virtualization, emphasizing the importance of automation and elasticity. Additionally, it covers data storage types, including DAS, SAN, and NAS, and the tools used for data storage management.

CLOUD COMPUTING

UNIT = 1

Types of cloud :
The following five types of cloud can be deployed according to an organization's needs:

o Public Cloud
o Private Cloud
o Hybrid Cloud
o Community Cloud
o Multi Cloud

Public Cloud

A public cloud is open to everyone for storing and accessing information over the Internet on a pay-per-usage basis. In a public cloud, computing resources are managed and operated by the Cloud Service Provider (CSP). Because of its open architecture, anyone with an internet connection can use the public cloud, regardless of location or company size. With the pay-per-usage model, customers are charged only for the resources they actually use, which makes it a sound financial choice.

Examples: Amazon Elastic Compute Cloud (EC2), IBM SmartCloud Enterprise, Google App Engine, and Microsoft Windows Azure Services Platform.
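To make the pay-per-usage idea concrete, the sketch below launches and then terminates a single EC2 instance with the boto3 SDK; billing accrues only while the instance runs. This is a minimal sketch: the AMI ID, instance type and region are illustrative placeholders, not values from this material.

# Minimal sketch of pay-per-usage on a public cloud using AWS EC2 via boto3.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Launch a single small instance; charges accrue only while it runs.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance = instances[0]
instance.wait_until_running()
print("Running instance:", instance.id)

# Terminating the instance ends the pay-per-usage charges.
instance.terminate()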

Private Cloud

A private cloud is also known as an internal cloud or corporate cloud. It is used by organizations to build and manage their own data centers, either internally or through a third party. It can be deployed using open-source tools such as OpenStack and Eucalyptus.

Examples: VMware vSphere, OpenStack, Microsoft Azure Stack, Oracle Cloud at Customer, and IBM Cloud Private.

Hybrid Cloud

Hybrid Cloud is a combination of the public cloud and the private cloud. In other words:
Hybrid Cloud = Public Cloud + Private Cloud

A hybrid cloud is only partially secure because services running on the public cloud can be accessed by anyone, while services running on the private cloud can be accessed only by the organization's users. In a hybrid cloud setup, organizations can leverage the benefits of both public and private clouds to create a flexible and scalable computing environment.

Examples: Google Application Suite (Gmail, Google Apps and Google Drive), Office 365 (MS Office on the Web and OneDrive), and Amazon Web Services.

Community Cloud

A community cloud allows systems and services to be accessed by a group of organizations that share information within a specific community. It is owned, managed and operated by one or more of the organizations in the community, a third party, or a combination of them. In a community cloud setup, the participating organizations, which may come from the same industry, government sector or any other community, collaborate to establish a shared cloud infrastructure. This infrastructure gives them access to shared services, applications and data relevant to their community.

Example: Health Care community cloud.

Multi-Cloud

Multi-cloud is a strategy in which companies use more than one cloud service provider or platform to meet their computing needs. It involves distributing workloads, applications and data across multiple cloud environments, which may include public, private and hybrid clouds.
Examples: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform
(GCP).

UNIT = 2

2.1 CLOUD COMPUTING TECHNOLOGY :

2.1.1 Cloud life cycle :


Cloud lifecycle management provides:
● Ease of administering the cloud and service portal
● Manageable services
● Established multi-tenancy
● Performance and capacity management
● Support for heterogeneity
Phases of CDLC :

1) Requirement and Analysis:
The requirement and analysis phase is used to evaluate and understand the requirements of the end user. This is done by gathering the significant complaints from the users, network solutions, management and customers of the present system.
2) Architect:
The structural behaviour of the cloud architecture gives a solution for the cloud system, comprising on-premise resources, cloud resources, cloud services, cloud middleware, software components, data server location and the externally visible properties of the data server location.

3) Implementation and Integration:
The third phase of CDLC is the actual formation and enablement of private, public, community, hybrid, inter-cloud and hosted cloud solutions to a computing problem.
Implementation: Issues such as privacy, protection, regulation, legality, inter-machine messaging and privacy theory are addressed within the implementation phase. Two components of cloud computing are implemented in this phase:
A) The implementation of the file system.
B) The implementation of the map-reduce system (a small sketch follows).
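As a minimal, single-machine sketch of the map-reduce idea referred to above (not the distributed implementation a real cloud deployment would use): each document is mapped to (word, 1) pairs, and the reduce step sums the counts per word.

# Toy map-reduce word count: map documents to (word, 1) pairs, reduce by summing.
from collections import defaultdict

def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

docs = ["cloud computing scales out", "cloud storage scales too"]
print(reduce_phase(map_phase(docs)))
# {'cloud': 2, 'computing': 1, 'scales': 2, 'out': 1, 'storage': 1, 'too': 1}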
Integration: Integration sits between the source and target systems, extracting data, mediating it and publishing it. Five recommendations for integrating into the cloud effectively are as follows:
1. Plan and set realistic goals
2. Learn from others' experience
3. Require IT specialist team
4. Address security concerns
5. Maximize connectivity options
4) Quality Assurance and Verification:
In this phase, cloud auditing is performed to ensure the quality of the cloud network. It also confirms the performance, reliability, availability, elasticity and safety of the cloud network at the service level.
5) Deploy, Testing and Improvement:
In this phase, different platform service providers drastically reduce the deployment cost of the application by pre-building and pre-configuring a stack of application infrastructure.
6) Monitor, Migrate and Audit:
This phase is marked by periodically monitoring the cloud environment and measuring the performance of the system. The extra cost and value that a client incurs in moving to the cloud from the traditional SOA approach, as well as integration with existing methods, are considered in this phase.

2.1.2 Cloud Computing Model:


The cloud computing model supports convenient, on-demand access to software over the Internet. The computing resources used are released after usage without any manual intervention.
Necessary Characteristics :

● On-demand self-service: A customer can unilaterally provision computing capabilities such as network storage and server time as needed, without requiring human interaction with each service provider.
● Broad network access: Services are available over the network and can be accessed through standard mechanisms that promote use by heterogeneous thick or thin client platforms (e.g., handheld devices such as mobile phones, laptops and PDAs).
● Resource pooling: The provider's resources are pooled to serve multiple users by means of a multi-tenant model, with different virtual and physical resources assigned dynamically.
● Rapid elasticity: Services can be provisioned elastically and rapidly to scale out quickly and released just as quickly to scale in. To the customer, the services available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
● Measured service: In a cloud system, resource use is controlled and optimized automatically by metering at some level of abstraction appropriate to the kind of service, for example bandwidth, processing, storage and active user accounts (see the sketch after this list).
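A small sketch of the measured-service characteristic referenced above: metered quantities are multiplied by per-unit rates and summed into a bill. The rates below are made-up figures, not real provider prices.

# Toy metering/billing calculation for a measured cloud service.
METERED_RATES = {
    "compute_hours": 0.05,     # hypothetical price per instance-hour
    "storage_gb_month": 0.02,  # hypothetical price per GB-month
    "bandwidth_gb": 0.01,      # hypothetical price per GB transferred
}

def monthly_bill(usage):
    """Multiply each metered quantity by its rate and sum the charges."""
    return sum(METERED_RATES[item] * amount for item, amount in usage.items())

usage = {"compute_hours": 720, "storage_gb_month": 100, "bandwidth_gb": 50}
print(f"Estimated bill: ${monthly_bill(usage):.2f}")  # Estimated bill: $38.50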

Service Models :

● Cloud software as a service (SaaS): The capability provided to the customer is to use the provider's applications running on the cloud infrastructure. The deployed applications can be accessed from any device that supports the Web. The customer does not control or manage the network, servers, operating systems, storage, memory or even individual applications, with the possible exception of user-specific application settings and configuration.
● Cloud platform as a service (PaaS): The capability provided is to deploy onto the cloud infrastructure applications created or acquired by the user, written in programming languages using tools supported and/or provided by the service provider. The end user does not control or manage the underlying cloud infrastructure, which comprises servers, networks, storage and operating systems.
● Cloud infrastructure as a service (IaaS): Here the consumer is provided with fundamental computing resources on which they can deploy and run software of their choice. The user does not control the underlying infrastructure.

Deployment Models

● Private cloud: The cloud functions within the organization and behind its firewall.
● Community cloud: This cloud infrastructure is shared by several organizations.
● Public cloud: This cloud infrastructure is available to the public or to large industry groups.
● Hybrid cloud: A composition of two or more clouds.

2.2 Cloud Computing Architecture :

The architecture of cloud computing is a combination of SOA (Service-Oriented Architecture) and EDA (Event-Driven Architecture). Client infrastructure, application, service, runtime cloud, storage, infrastructure, management and security are all components of the cloud computing architecture.
The cloud architecture is divided into two parts:
1. Frontend
2. Backend
1. Frontend

The frontend of the cloud architecture refers to the client side of the cloud computing system. It contains all the user interfaces and applications used by the client to access the cloud computing services/resources, for example a web browser used to access the cloud platform.

2. Backend

The backend refers to the cloud itself, which is operated by the service provider. It contains and manages the resources and provides security mechanisms. Along with this, it includes large-scale storage, virtual applications, virtual machines, traffic control mechanisms, deployment models, etc. (a small sketch of a frontend calling a backend follows).
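A minimal sketch of the frontend/backend split described above: a client-side script (the frontend) calls a provider-managed HTTP API (the backend), which handles storage, security and traffic control. The endpoint URL, token and response fields are hypothetical placeholders, not any real provider's API.

# Frontend-side script calling a hypothetical cloud backend over HTTP.
import requests

API_BASE = "https://api.example-cloud.com/v1"   # hypothetical backend endpoint
TOKEN = "example-access-token"                  # hypothetical credential

response = requests.get(
    f"{API_BASE}/storage/objects",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
response.raise_for_status()

# The backend manages the resources; the frontend only consumes the result.
for obj in response.json().get("objects", []):
    print(obj["name"], obj["size"])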
Examples of Cloud Computing
Gmail and Yahoo Mail are examples of cloud computing technology. When users send or receive e-mail, they do not use any application software installed on their computers; they only use an Internet connection. The operating cost of cloud computing is comparatively low compared with maintaining individual infrastructure. The main concerns with cloud computing technology are security and privacy.
Service models: (refer to the previous pages)

2.3 CLOUD MODELLING AND DESIGN :

2.3.1 Key Principles of Cloud Computing


Three key principles of cloud computing are abstraction, automation and elasticity.
1) Abstraction:
IT providers need to standardize their IT operations so that optimizing those operations becomes easy. Cloud computing provides some basic but well-defined services. Managing the software services is passed on to the developer or user. A well-defined abstraction layer acts as grease between clouds and developers or users, helping them work efficiently and independently of each other. The three abstraction layers in clouds are:
a. Application as a Service (AaaS)
b. Platform as a Service (PaaS)
c. Infrastructure as a Service (IaaS)
2) Automation:
Automation in the cloud means that developers or users have complete control over their resources without human interaction, even from the developer's or user's side. In this environment, when the user needs more servers, the load balancer signals the cloud to provide the extra capacity. There is no need to wait for unpacking, connecting machines and installing software; all of this is done automatically. This automatic process reduces cost and complexity, and it puts the developer or user in control. The user can now reduce time to market for the next rollout, because it can be done without the intervention of professionals or any waiting period (a toy sketch follows).
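A toy sketch of the automation idea above: a load balancer reports utilization, and the cloud adds or removes servers without human intervention. The thresholds and server counts are illustrative, not taken from any real provider.

# Toy auto-scaling rule driven by reported average CPU utilization.
def scale(current_servers, avg_cpu_percent, min_servers=1, max_servers=10):
    """Return the new server count based on a simple utilization rule."""
    if avg_cpu_percent > 80 and current_servers < max_servers:
        return current_servers + 1   # scale out under heavy load
    if avg_cpu_percent < 20 and current_servers > min_servers:
        return current_servers - 1   # scale in when mostly idle
    return current_servers

print(scale(current_servers=3, avg_cpu_percent=90))  # 4
print(scale(current_servers=3, avg_cpu_percent=10))  # 2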
3) Elasticity:
In the early years, organizations bought expensive servers and waited a long time to use their full capacity. This was a highly inefficient model, as most of the time the server was underutilized. In the dot-com era, people started scaling horizontally, which allowed them to add capacity according to their needs.

2.4 VIRTUALIZATION :
‘Virtualization is a methodology for dividing computer resources into more than one execution environment by applying concepts such as partitioning, time-sharing, machine simulation and emulation.’ Virtualization allows a single physical instance of a resource or an application to be shared among multiple customers and organizations at the same time. It does this by assigning a logical name to physical storage and providing a pointer to that physical resource on demand.

Virtualization is a method in which multiple independent operating systems run on a single physical computer. It maximizes the use of the available physical resources; by adopting this method, one can achieve high server utilization (a brief sketch follows the list of reasons below).
Some reasons for using virtualization:
● VMs are used to run legacy applications.
● A VM provides a secure sandbox for running untrusted applications.
● VMs help in building secure computing platforms.
● A VM provides an illusion of hardware.
● VMs can simulate networks of independent computers.
● VMs support running distinct operating systems and versions side by side.
● VMs are used for performance monitoring; operating systems can be checked without disturbing productivity.
● VMs provide fault and error containment.
● VM tools are good for research and academic experiments.
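As one concrete, hedged illustration of multiple operating systems running on one physical host, the sketch below lists the virtual machines known to a local hypervisor. It assumes the libvirt-python bindings and a QEMU/KVM host are available; the connection URI varies by setup, and libvirt is only one of many virtualization stacks.

# List VMs on a local hypervisor via libvirt (assumes libvirt-python and QEMU/KVM).
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for domain in conn.listAllDomains():
        state = "running" if domain.isActive() else "stopped"
        print(f"VM {domain.name()}: {state}")
finally:
    conn.close()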

Types of Virtualization
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data virtualization

● Server virtualization is a kind of virtualization used for masking server resources, which include the number of physical servers, processors and operating systems. The intention of this method is to hide the complexity of server resources and hence increase sharing and utilization while maintaining server capacity.
● Network virtualization is a method in which network resources are combined based on available bandwidth. Each channel is assigned to a particular server. By adopting this method of virtualization, the true complexity of the network is hidden and managed, much like partitioning a hard drive. Network virtualization brings lower TCO, higher return on investment, security and dynamic computing.
● Storage virtualization is a type of virtualization in which a pool of physical storage from different networked storage devices appears as a single storage device. This kind of virtualization is usually adopted in SANs (storage area networks). Storage virtualization is advantageous for disaster recovery, business continuity, lower TCO, higher return on investment, dynamic computing, security, testing and development.
● Desktop virtualization supports various computing needs such as utility and dynamism, testing, development and security.
● Application virtualization allows server consolidation, application and desktop deployment, and business continuity. Apart from this, disaster recovery, lower TCO with higher ROI, dynamic computing, testing and development become possible.
● Management virtualization provides a variety of features: server consolidation, centralized policy-based management, business continuity and disaster recovery, lower TCO with higher ROI, utility and dynamic computing, testing and development, and security.
2.5 Grid cloud and virtualization :

Grid computing can be defined as a network of computers working together to perform a task that would be difficult for a single machine. All machines on that network work under the same protocol to act as a virtual supercomputer. Grid computing is a subset of distributed computing, where the virtual supercomputer comprises machines on a network connected by some bus, mostly Ethernet or sometimes the Internet.
Working of Grid Computing:
A grid computing network mainly consists of three types of machines (a toy sketch of these roles follows):
A) Control node: A computer, usually a server or a group of servers, which administrates the whole network and keeps account of the resources in the network pool.
B) Provider: A computer that contributes its resources to the network resource pool.
C) User: A computer that uses the resources on the network.
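A toy sketch of the three grid roles described above: a "control node" function splits a job into chunks and farms them out to "provider" workers (here, local processes standing in for machines on the network), and the "user" receives the combined result.

# Toy grid: control node splits work, provider processes compute partial sums.
from concurrent.futures import ProcessPoolExecutor

def provider_task(chunk):
    """Work done by one provider machine: sum its chunk of numbers."""
    return sum(chunk)

def control_node(data, workers=4):
    """Split the job, farm the chunks out, and combine the partial results."""
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(provider_task, chunks)
    return sum(partials)

if __name__ == "__main__":
    # The "user" requests the result of the distributed job.
    print(control_node(list(range(1_000_000))))  # 499999500000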
Use Cases of Grid Computing :

● Drug Discovery
● Cancer Research
● Weather Forecasting
● Risk Analysis
● Computer-Aided Design (CAD)
● Animation and Visual Effects
● Collaborative Projects
VIRTUALIZATION IN CLOUD :

Virtualization is a tool for system administrators and has many technical uses beyond the cloud. Virtualization allows IT organizations to perform multiple operations on a single piece of physical hardware. Running multiple OS instances on a single device is more cost-effective than running a separate server for each task. Cloud computing is accessed through the Web and takes advantage of virtualization, but cloud computing can also be used without virtualization.
Cloud computing and virtualization are two different technologies that work independently, although cloud computing is better utilized if desktop virtualization is done first, since it requires multiple virtual servers and storage devices, that is, multi-tenancy. Virtualization lets organizations save on their infrastructure, that is, resources are virtualized.
UNIT = 3 (DATA STORAGE AND CLOUD COMPUTING)

3.1 DATA STORAGE :


Storage is a resource allocated to organizations to add value. Data storage management includes a set of tools to configure storage, back it up and assign it to users according to defined policies. The various types of storage subsystems are:
● Direct Attached Storage (DAS)
● Storage Area Network (SAN)
● Network Attached Storage (NAS)
DAS is the most basic storage system and is employed in building SAN and NAS, either directly or indirectly. NAS is the topmost layer, having SAN and DAS as its base; SAN lies between DAS and NAS.

DAS: Direct Attached Storage

DAS is the basic storage system, providing block-level storage, and is used for building SAN and NAS. The performance of SAN and NAS depends on DAS. The performance of DAS will always be high because it is directly connected to the system. Storage technologies used to build a DAS subsystem include SCSI, PATA, SATA, SAS, FC, flash and RAM.

SAN: Storage Area Network


When multiple hosts need to connect to a single storage device, a SAN is used. A SAN provides block-level storage; simultaneous access to the same volume is not permitted, hence it is suited to clustering environments. SAN technologies include FC (Fibre Channel), iSCSI (Internet SCSI) and AoE (ATA over Ethernet).

NAS: Network Attached Storage


For file-level storage, NAS is used. SAN and DAS act as base systems for NAS. NAS is also called a ‘file server’. The main advantage of NAS is that multiple hosts can share a single volume at the same time, whereas with SAN or DAS only one client can access the volume at a time.

Data Storage Management Tools:


Storage Resource Management (SRM) tools include configuration tools, provisioning
tools and measurement tools.
● Configuration tools handle the set-up of storage resources. These tools help organize and manage RAID devices by assigning groups, defining levels or assigning spare drives.
● Provisioning tools define and control access to storage resources for preventing a
network user from being able to use any other user’s storage.
● Measurement tools analyse performance based on behavioural information about
a storage device. An administrator can use that information for future capacity and
upgrade planning.

3.2 CLOUD STORAGE :


Cloud storage is data storage hosted remotely on storage devices accessed over the Web and maintained by a third party (the service provider). Cloud storage is a part of cloud computing. It is deployed over WAN infrastructure, which includes hardware components such as switches and routers.

Cloud storage can be deployed in many ways. For example:
● Local data (desktop/laptop) can be backed up to cloud storage (see the sketch after this list).
● A virtual disk can be synced to the cloud and distributed.
● The cloud can be used as a reservoir for storing data.
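A minimal sketch of the first option above: backing up a local file to cloud object storage with the boto3 S3 client. The bucket name is a hypothetical placeholder and must already exist, and S3 is just one of several object stores that could be used.

# Back up a local file to a (hypothetical, pre-existing) S3 bucket.
import boto3

s3 = boto3.client("s3")

local_path = "report.pdf"            # local file to back up
bucket = "example-backup-bucket"     # hypothetical bucket name
key = "backups/report.pdf"           # object key inside the bucket

s3.upload_file(local_path, bucket, key)
print(f"Backed up {local_path} to s3://{bucket}/{key}")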

DATA MANAGEMENT FOR CLOUD STORAGE :


Cloud data management involves collecting, organizing and managing data stored in cloud environments. It enables easy access, dynamic file sharing and collaboration, while also requiring specialized tools and platforms for effective data management.
Cloud Data Management Interface (CDMI)
The Cloud Data Management Interface (CDMI) is used to create, retrieve, update and delete objects in a cloud. The functions in CDMI are:
● Discovery of cloud storage offerings by clients
● Management of containers and the data within them
● Synchronization of metadata with containers and objects
CDMI is also used to manage containers, domains, security access and billing information. The CDMI standard is also used as a protocol for accessing storage.
CDMI defines how to manage data as well as ways of storing and retrieving it. ‘Data path’ refers to how data is stored and retrieved; ‘control path’ refers to how data is managed. The CDMI standard supports both data path and control path interfaces (a hedged request sketch follows).
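A hedged sketch of reading a container over the CDMI data path with plain HTTP, assuming a CDMI-compliant endpoint at a hypothetical URL. The header names follow the SNIA CDMI convention, but the accepted specification version, response fields and authentication scheme depend on the provider.

# Read a CDMI container from a hypothetical CDMI-compliant storage endpoint.
import requests

CDMI_ENDPOINT = "https://storage.example.com/cdmi"   # hypothetical endpoint

response = requests.get(
    f"{CDMI_ENDPOINT}/my-container/",
    headers={
        "Accept": "application/cdmi-container",
        "X-CDMI-Specification-Version": "1.1.1",
    },
    auth=("user", "password"),   # placeholder credentials
    timeout=10,
)
response.raise_for_status()

container = response.json()
print("Container metadata:", container.get("metadata", {}))
print("Child objects:", container.get("children", []))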
Cloud Storage Requirements :
A) Multi-tenancy
B) Security
C) Secure Transmission Channel
D) Performance
E) Quality of Service (QoS)
F) Data Protection and Availability

3.4 CLOUD SERVICES :


Basis of Services:
A) Infrastructure as a Service (IaaS)
B) Platform as a Service (PaaS)
C) Software as a Service (SaaS)

A) Infrastructure as a Service (IaaS):


Infrastructure as a Service (IaaS) is a form of service for consigning and outsourcing all types of computing infrastructure. It provides hardware, server, storage and software services.
B) Platform as a Service (PaaS):
A PaaS cloud computing platform is created for programmers to develop, test, run and manage applications. With PaaS, developers can build web applications without installing any tools on their own computers.
The components of PaaS are:
● Browser-based development studio
● Seamless deployment to a hosted run-time environment
● Management and monitoring tools
● Pay-as-you-go billing
Types of PaaS:
● Social application platforms
● Computation platforms
● Web application platforms
● Business application platforms

C) Software as a Service (SaaS)

SaaS is also known as "on-demand software". It is software in which the applications are hosted by a cloud service provider. Users can access these applications with the help of an internet connection and a web browser.
Two Main Categories of SaaS
1. Line-of-business services: solutions provided to businesses and organizations on a subscription basis. Applications in this class cover business processes, for example supply-chain management programs, customer-related applications, etc.
2. Customer-oriented services: offered to the general public either on a subscription basis (more often than not) or for free. Web-based e-mail services fall under this general category.
Best SaaS Examples
● SalesForce CRM
● Google Apps
● DeskAway
● Impel CRM
● Wipro w-SaaS

UNIT = 4

4.1 Cloud Computing Risks


TYPES OF RISKS :
Threat #1: Misuse and illicit use of cloud computing:
Lawless individuals may take advantage of easy registration, straightforward procedures and relatively anonymous access to cloud services to launch diverse attacks. Examples of such attacks include password and key cracking, DDoS, malicious data hosting, launching dynamic attack points, botnet command and control, and CAPTCHA-solving farms. Targets: IaaS, PaaS.
Threat #2: Insecure interfaces and APIs:
Customers provision and interact with cloud services through interfaces or APIs. Providers should ensure that security is incorporated into their service models, while users should be aware of security risks in the use, implementation, administration and monitoring of such services. API dependencies, weak logging capabilities, inflexible access controls, anonymous access, reusable passwords, clear-text authentication and transmission of content, and improper authorizations are examples of such risks. Targets: IaaS, PaaS, SaaS.
Threat #3: Malicious insiders:
Malicious insiders represent a larger risk in a cloud computing environment, since clients may not have a clear view of provider policies and procedures. Malicious insiders can gain unauthorized access to organizations and their assets. Some risks include damage, financial impact and loss of productivity. Targets: IaaS, PaaS, SaaS.
Threat #4: Issues related to technology sharing:
IaaS is based on shared infrastructure, which is often not designed to accommodate a multi-tenant architecture. Overlooked flaws have allowed guests to gain unauthorized rights and/or leverage on the platform. Targets: IaaS.
Threat #5: Data loss or leakage:
Compromised data may involve (i) data deleted or changed without a backup being made, (ii) unlinking a record, (iii) loss of an encoding key and (iv) unauthorized access to sensitive data. The likelihood of data compromise rises considerably in cloud computing, due to its architecture and operations. Examples of data loss/leakage risks include: (i) insufficient authentication, (ii) authorization, (iii) audit (AAA) controls, (iv) inconsistent encryption, (v) inconsistent program keys, (vi) operational failures, (vii) disposal challenges, (viii) risk of association, (ix) jurisdiction/political issues, (x) persistence issues, and (xi) data centre reliability and disaster recovery. Targets: IaaS, PaaS, SaaS.
Threat #6: Hijacking (account/service):
Account or service hijacking is generally carried out with stolen credentials. Such attacks encompass phishing, fraud and exploitation of software vulnerabilities. Using stolen credentials, attackers can access critical areas of cloud computing services and compromise the confidentiality, integrity and availability (CIA) of such services. Examples of such attacks include eavesdropping on transactions and sensitive activities, manipulation of data, returning falsified data, and redirection to illegitimate sites. Targets: IaaS, PaaS, SaaS.
Threat #7: Unknown risk profile:
Cloud services mean that organizations are less engaged with hardware and software ownership and maintenance. Although this offers important benefits, organizations should be aware that matters such as internal security procedures, security compliance, configuration hardening, patching, auditing and logging may be overlooked. Targets: IaaS, PaaS, SaaS.

The Risk Management Process :

● Step 1: Determine the objectives of the risk management program, deciding precisely what the organization expects its risk management program to do. One prime objective of the risk management effort is to maintain the operating effectiveness of the organization.
● Step 2: Identify the risks, which involves someone being aware of the risks.
● Step 3: Once the risks are recognized, the risk manager should evaluate them. Evaluation entails assessing the potential size of the loss and the likelihood that it will occur. The evaluation requires ranking risks by priority as critical, significant or insignificant.
● Step 4: Consider the alternatives and select the risk treatment device. This step examines the diverse approaches used to deal with risks and the selection of the method that should be used for each one.
● Step 5: Risk financing means include risk retention and risk transfer or risk shifting. In deciding which risk treatment method to use for a given risk, the risk manager considers the potential size of the loss, its likelihood and the resources that would be available to meet the loss if it should occur.
The last step, evaluation and review, is crucial to the program because, within the risk management process, the business environment changes, new risks arise and old ones disappear.

4.2 DATA SECURITY IN CLOUD :


Some of the key security benefits of a cloud computing environment are as follows:
● Data centralization: In a cloud environment, the service provider takes responsibility for storage, so small organizations need not spend more money on their own storage devices. Cloud-based storage also provides a way to centralize data much faster and probably at lower cost.
● Incident response: IaaS providers offer a dedicated forensic server which can be used on demand. Whenever there is a violation of the security policy, the server can be notified online.
● When there is an investigation, a backup of the environment can be effortlessly made and brought up in the cloud without affecting the usual course of business.
● Forensic image verification time: Some cloud storage implementations expose a cryptographic checksum or hash.
● Logging: In the usual computing paradigm, logging is, by and large, an afterthought; insufficient disk space is generally allocated, which makes logging either non-existent or minimal. In a cloud, however, the storage requirement for standard logs is automatically solved.
Disadvantages in Cloud Environments :
● Investigation: Investigating illegal activity may be impossible in cloud environments. Cloud services are especially hard to investigate because data for multiple customers may be co-located and may also be spread across multiple data centres. Users have little information about the network topology of the underlying environment. The service provider may also impose limits on the network security of its users.
● Data segregation: Data in the cloud is normally stored in a shared environment alongside data from other customers. Encryption cannot be assumed to be the sole solution to data segregation issues. Some clients may not want to encrypt their data, because encryption mistakes can destroy the data.
● Long-term viability: Service providers should double-check data security in changing business situations, such as mergers and acquisitions. Customers should double-check data accessibility in these situations. The service provider should also confirm data security in adverse situations such as an extended outage.
● Compromised servers: In a cloud computing environment, users do not even have the alternative of using a personal acquisition toolkit. In a situation where a server is compromised, they have to shut their servers down until they get a backup of the data. This further creates resource accessibility concerns.
● Regulatory compliance: Traditional service providers are subjected to external audits and security certifications. If a cloud service provider does not adhere to these security audits, it leads to a conspicuous decline in customer trust.
● Recovery: Cloud service providers should double-check data security in natural and man-made disasters. Generally, data is replicated across multiple sites. However, in the case of such disasters, the provider should perform a complete and fast restoration.

4.3 CLOUD SECURITY SERVICES :


Cloud security service refers to a set of security rules, processes, tools, and
technologies used to safeguard people, confidential information, applications, and
architecture in cloud-based computing environments. The most complete cloud
security solutions shield SaaS resources, users, and workloads from malware, data
breaches, and other security risks.
BENEFITS:
● Data security: By preventing the loss of information and controlling access, cloud security solutions are designed expressly to guarantee data security. Confidential information is shielded from unwanted access both in transit and at rest.
● Access control: Multi-factor authentication is used by cloud security services to guarantee that only authorized users can access the cloud.
● Application security: Firewalls and vulnerability scanning are used to safeguard user data in applications.
● Endpoint security: To provide safe access to the cloud, cloud security services safeguard endpoint devices including laptops, tablets and smartphones.
● Integrated privacy: From a single location, cloud monitoring solutions examine possible risks across several businesses. This makes it possible to secure protection on all devices, create disaster-recovery plans and install applications on time.
● Durability and accessibility: Cloud security maintains availability by keeping cloud services functional even if certain components fail.

4.4 CLOUD COMPUTING TOOLS :


Open-Source Tools for Constructing and Organizing Clouds
There are eight key components to address when constructing an internal or external compute cloud. They are:
1. Shared infrastructure
2. Self-service automated portal
3. Scalable
4. Rich application container
5. Programmatic control
6. 100% virtual hardware abstraction
7. Strong multi-tenancy
8. Chargeback

Cloud Service Providers (CSPs)

These are platforms offering on-demand computing services:

Amazon Web Services (AWS) – Most widely used; services like EC2, S3, Lambda,
RDS.

Microsoft Azure – Strong integration with Microsoft products like Office 365 and
Active Directory.

Google Cloud Platform (GCP) – Known for data analytics, AI/ML services.

IBM Cloud, Oracle Cloud, Alibaba Cloud – Other notable players.

4.5 CLOUD MASHUPS :

A cloud mashup is a web or app solution created by combining multiple cloud-based services, APIs, or data sources into a single, unified application or user experience. A cloud mashup takes data, services, or APIs from different sources and combines them to create a new application or service.
How it works:

Mashups leverage APIs (Application Programming Interfaces) to access data and services from various sources, enabling developers to integrate different functionalities into a single application.

Examples:

● A mapping mashup might combine geographic data from one provider with real estate listings from another.
● A weather mashup could integrate weather data from multiple sources and present it in a user-friendly format.
● A social media mashup could combine feeds from different social networks.
For example, a news website might pull in weather updates from Weather.com (or another provider), stock and share information, and even additional news items. Mashups use APIs (application programming interfaces) to combine one or more website elements. A cloud mashup is simply an instance of a web-based mashup in which the application content resides in the cloud. The advantage is that if a mashup is hosted in the cloud, it sits next to useful software building tools, provided the user subscribes to a cloud mashup centre service. A good enterprise mashup platform features reusable application blocks that can be used to build new applications (a small sketch follows).
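A small sketch of the mashup idea in code: fetch JSON from two separate web APIs and merge the results into one payload. Both endpoint URLs and the response fields are hypothetical placeholders; a real mashup would use the providers' documented endpoints and API keys.

# Combine two hypothetical web APIs (weather + news) into one mashup response.
import requests

def fetch_json(url):
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json()

def news_with_weather(city):
    weather = fetch_json(f"https://weather.example.com/api/current?city={city}")
    headlines = fetch_json(f"https://news.example.com/api/top?city={city}")
    # Merge the two sources into a single, unified payload.
    return {
        "city": city,
        "temperature": weather.get("temperature"),
        "headlines": headlines.get("articles", [])[:5],
    }

print(news_with_weather("Chennai"))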
