Cloud Computing (Finals)

Cloud computing:

Cloud computing is a technology that allows users to access computing resources such as storage,
processing power, and applications over the internet. Instead of owning and maintaining physical
hardware and software, users utilize resources hosted in remote data centres operated by cloud
service providers.
Evolved:

Cloud computing evolved from two fields of computing: grid computing and cluster computing.

Dependence:
Cloud computing depends on virtualization technology for the dynamic creation and provisioning of
computing resources.

Cloud computing definition by NIST:


Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared
pool of configurable computing resources that can be rapidly provisioned and released with minimal
management effort or service provider interaction.
Essential characteristics of cloud computing by NIST:

 On-demand self-service
 Broad network access
 Rapid elasticity
 Measured service
 Resource pooling

Grid Computing:
Grid computing is defined as a network of computers working together to perform tasks that would be
difficult for a single machine. All the machines on the network work under the same protocol to act as
one virtual supercomputer.

Cluster computing:
It is a collection of loosely or tightly connected computers that work together to act as a single entity. The
connected systems execute programs together to give the illusion of a single system. The clustered
systems are connected through a fast local area network (LAN).

Resource cluster mechanism:

The resource cluster mechanism is used to group multiple IT resources so that they can be used as a single IT
resource.

Advantages:
The main advantages of the resource cluster mechanism are:

 Computing capacity
 Load balancing
 High availability
Links:
High-speed communication links are used to connect the clustered IT resources for workload distribution,
task scheduling, data sharing, and system synchronization.

Types of clusters:
The following are some types of clusters:

 High performance clusters – use multiple nodes in parallel to solve complex computational problems.
 Load balancing clusters – distribute incoming requests across multiple nodes to prevent any single
node from being overloaded.
 High availability clusters – maintain redundant nodes to ensure continuous operation and data
availability, serving as backup systems in case of failure.
 Server clusters – consist of physical or virtual servers. Virtualized clusters support the migration of
VMs for scaling and load balancing.
 Database clusters – keep copies of databases on multiple servers and ensure the data is the same on
all copies, which helps in case of server failures.
 Large dataset clusters – split big datasets across servers without losing accuracy. Each server works
independently, without needing to communicate with the others.

Cloud service model:


There are three main types of cloud service model:

 Infrastructure as a Service (IaaS)
 Platform as a Service (PaaS)
 Software as a Service (SaaS)

Cloud deployment:
A cloud deployment model refers to the way in which a cloud computing environment is implemented and
managed. It dictates how resources are provisioned, shared, and accessed within the cloud infrastructure.

Types of cloud computing deployment model:

 Public cloud – Cloud services are owned and operated by a third-party cloud service provider and
made available to the general public over the internet.
 Private cloud – It involves deploying cloud resources within a dedicated environment that is
exclusively used by a single organization.
 Hybrid cloud – It combines elements of both public and private clouds.
 Community cloud – This model involves sharing cloud resources and infrastructure among several
organizations with similar interests.
 Multi-cloud – This model involves using multiple cloud service providers to host different
components of an organization's IT infrastructure.

Multi-Device Broker:
This mechanism is used to transform messages (received from the heterogeneous devices of cloud
consumers) into a standard format before conveying them to the cloud service. Response messages from
the cloud service are intercepted and transformed back into the device-specific format before being
conveyed to the devices through the multi-device broker.
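The broker's transformation step can be sketched in a few lines. This is a minimal illustration with made-up device types and message fields (`"mobile"`, `"browser"`, `"act"`, `"payload"` are all hypothetical), not an API of any real broker product:

```python
# Minimal multi-device broker sketch: device-specific messages are
# normalized into one standard format before reaching the cloud service,
# and responses are converted back to each device's format.

def to_standard(device_type, message):
    """Transform a device-specific message into the standard format."""
    if device_type == "mobile":
        # hypothetical mobile format: {"act": ..., "payload": ...}
        return {"action": message["act"], "data": message["payload"]}
    if device_type == "browser":
        # hypothetical browser format: {"action": ..., "body": ...}
        return {"action": message["action"], "data": message["body"]}
    raise ValueError(f"unknown device type: {device_type}")

def to_device(device_type, response):
    """Transform a standard-format response back to the device format."""
    if device_type == "mobile":
        return {"act": response["action"], "payload": response["data"]}
    if device_type == "browser":
        return {"action": response["action"], "body": response["data"]}
    raise ValueError(f"unknown device type: {device_type}")

standard_msg = to_standard("mobile", {"act": "upload", "payload": "file-bytes"})
```

The point is only that the cloud service sees a single uniform format regardless of which device sent the request.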

State Management Database: It is a storage device used to temporarily store the state data of software programs.

Remote Administration System in Cloud Computing:


A system that provides tools and APIs (ways for software programs to talk to each other) for cloud providers
and users. It allows users to manage and control cloud services through online portals.

Types of Portals:

Usage and Administration Portal:


 For managing cloud resources.

 Provides reports on how cloud resources are being used.

Self-Service Portal:
 Lets users browse and select cloud services.
 Users can request these services, and the cloud provider sets them up automatically.

VIM:
The resource management system utilizes the virtual infrastructure manager (VIM) for creating and
managing virtual IT resources.

SLA:
An SLA, or Service Level Agreement, is a formal contract between a service provider and a customer.

FUNDAMENTAL CLOUD ARCHITECTURES:


a) Resource Pooling Architecture:
Resource pooling is a cloud computing concept where multiple physical and virtual resources (such as
servers, storage, and network bandwidth) are combined to serve multiple customers. These resources are
dynamically allocated and reallocated based on demand, ensuring efficient utilization and scalability.
It is based on one or more resource pools, in which identical IT resources are grouped and maintained
automatically by a system that also ensures the resource pools remain synchronized.

Example:

 Physical server pools
 VM pools
 Cloud storage pools
 Network pools
 CPU pools

Types of resource pools:

A resource pool can be divided into sibling pools as well as nested pools.

 Sibling pools are independent and isolated from each other and may contain different types of IT
resources.
 Nested pools are drawn from a larger pool and consist of the same types of IT resources as the
parent pool.
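The sibling/nested distinction above can be made concrete with a small sketch. The class and pool names are illustrative, not part of any cloud API; the only point is that a nested pool draws identical resources from its parent, while sibling pools are isolated from each other:

```python
# Sketch of sibling and nested resource pools (all names are made up).

class ResourcePool:
    def __init__(self, name, resources):
        self.name = name
        self.resources = list(resources)

    def create_nested_pool(self, name, count):
        """Carve a nested pool out of this (parent) pool.

        The nested pool holds the same type of resources as the parent."""
        if count > len(self.resources):
            raise ValueError("parent pool has too few resources")
        taken = [self.resources.pop() for _ in range(count)]
        return ResourcePool(name, taken)

# Two sibling pools: independent, and holding different resource types.
vm_pool = ResourcePool("vm-pool", [f"vm-{i}" for i in range(8)])
storage_pool = ResourcePool("storage-pool", [f"disk-{i}" for i in range(4)])

# A nested pool drawn from the VM pool for one department's use.
dept_pool = vm_pool.create_nested_pool("dept-a", 3)
```

Creating the nested pool reduces the parent's free capacity; the storage sibling pool is unaffected.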
b) Dynamic Scalability Architecture:
Dynamic scalability is provided through dynamic allocation of available resources from the resource pool

Scaling:
Scaling can be horizontal or vertical, and can also be achieved through dynamic relocation.

Requirements:
To implement this architecture, the automated scaling listener (ASL) and Resource Replication Mechanism
are utilized.
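The automated scaling listener's decision can be sketched as a simple rule: watch the current request load and compute how many instances the replication mechanism should keep running. This is a toy model with invented parameters (per-VM request limit, min/max instance counts), not the behaviour of any specific cloud platform:

```python
# Toy automated scaling listener (ASL) decision rule: given the current
# request load, return how many VM instances should be provisioned,
# clamped between a minimum and maximum pool size.

def scale(current_requests, requests_per_vm_limit, min_vms=1, max_vms=10):
    # ceiling division: enough VMs so no VM exceeds its request limit
    needed = -(-current_requests // requests_per_vm_limit)
    return max(min_vms, min(max_vms, needed))

# 230 concurrent requests, each VM handling at most 50 -> 5 VMs
vms_needed = scale(current_requests=230, requests_per_vm_limit=50)
```

A real ASL would sample load continuously and trigger the resource replication mechanism on each change; this captures only the sizing decision.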

c) Workload Distribution Architecture:


The workload distribution is required to prevent the following scenarios:

 Over-utilization of IT resources, which leads to loss of performance.
 Under-utilization of IT resources, which leads to over-expenditure.

The workload is distributed on the basis of a load balancing algorithm.

d) Service Load Balancing Architecture:


Load balancing architecture in cloud computing consists of a load balancer that sits between servers and
client devices to manage traffic.
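One of the simplest load balancing algorithms is round-robin: requests are routed to each server instance in turn. A minimal sketch (server names are made up):

```python
# Minimal round-robin load balancer sketch: client requests are routed
# across server instances in turn, so no single instance is overloaded.

import itertools

class RoundRobinBalancer:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def route(self, request):
        """Pick the next server in rotation for this request."""
        server = next(self._cycle)
        return server, request

lb = RoundRobinBalancer(["vm-a", "vm-b", "vm-c"])
targets = [lb.route(f"req-{i}")[0] for i in range(6)]
```

Production balancers typically add health checks and weighted or least-connections policies on top of this basic rotation.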

e) Elastic Disk Provisioning Architecture:


It establishes a dynamic storage provisioning system that ensures the cloud consumer is granularly billed
for the exact amount of storage that it actually uses.
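The granular billing idea can be shown with a short calculation. The hourly rate here is invented for illustration; the point is that the bill tracks actual usage per hour rather than a fixed pre-allocated disk size:

```python
# Granular pay-per-use storage billing sketch (rate is hypothetical):
# the consumer pays only for the GB actually used in each hour.

RATE_PER_GB_HOUR = 0.0002  # made-up price per GB-hour

def storage_bill(gb_used_per_hour):
    """Sum the cost of the exact GB used in each hour of the period."""
    return sum(gb * RATE_PER_GB_HOUR for gb in gb_used_per_hour)

# A thin-provisioned disk grew from 10 GB to 12 GB over three hours:
bill = storage_bill([10, 11, 12])
```

With fixed provisioning, the consumer would instead pay for the full allocated size (say 100 GB) for all three hours, regardless of actual use.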

ADVANCED CLOUD ARCHITECTURES:

a) Hypervisor Clustering Architecture:


A hypervisor is software that allows multiple virtual machines to run on a single physical machine.
Hypervisor clustering creates a high-availability cluster across multiple physical servers: hypervisors
are clustered so that if one fails, its active virtual servers are transferred to another. Heartbeat messages
are passed between the clustered hypervisors and a central VIM to maintain status monitoring. The
hypervisor cluster uses shared storage to support prompt live migration of VMs.

b) Load Balanced Virtual Server/VM Instances Architecture:


The load balancer functions as a "traffic controller", which routes client requests across the services or VM
instances to make sure that the traffic is distributed efficiently, avoiding overloading any particular instance.

c) Non-Disruptive Service Relocation Architecture:


Non-disruptive relocation moves services without disruption through replication and live migration.

d) Zero Downtime Architecture:


Zero downtime deployment is a deployment method where your website or application is never down or in
an unstable state during the deployment process. To achieve this the web server doesn't start serving the
changed code until the entire deployment process is complete.
e) Cloud Balancing Architecture:

It is the implementation of a failover system across multiple clouds. It improves the following features:

 Performance and scalability of IT resources
 Availability and reliability of IT resources
 Load balancing

This architecture requires an automated scaling listener and a failover system.

f) Resource Reservation Architecture:


Resource Reservation Architecture is a system used in cloud computing to reserve specific resources (like
CPU, memory, or storage) for certain tasks or users in advance. This ensures that the necessary resources are
available when needed, preventing delays and guaranteeing performance.

g) Dynamic Failure Detection and Recovery Architecture:


Dynamic Failure Detection and Recovery Architecture is a system in cloud computing designed to
automatically detect failures in the system and quickly recover from them without manual intervention. This
ensures continuous availability and reliability of services.
This architecture allows the implementation of an automated recovery policy consisting of predefined steps
and may involve actions such as:

 Running a script
 Sending a message
 Restarting services
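The detection-plus-recovery loop above can be sketched in a few lines. Host names, the timeout value, and the policy step names are all illustrative; a real system would execute scripts and restart services rather than return action labels:

```python
# Sketch of dynamic failure detection and recovery: a watchdog checks
# heartbeat timestamps and applies a predefined recovery policy to any
# host that has gone silent for longer than the timeout.

def detect_failures(heartbeats, now, timeout):
    """Return hosts whose last heartbeat is older than `timeout` seconds."""
    return [host for host, last in heartbeats.items() if now - last > timeout]

# Predefined recovery policy, matching the actions listed above.
RECOVERY_POLICY = ["run_script", "send_message", "restart_services"]

def recover(host):
    """Apply the automated recovery policy; returns the actions taken."""
    return [(step, host) for step in RECOVERY_POLICY]

beats = {"node-1": 100.0, "node-2": 95.0, "node-3": 40.0}
failed = detect_failures(beats, now=100.0, timeout=30.0)
actions = [recover(h) for h in failed]
```

Only `node-3` has been silent for more than 30 seconds, so only it triggers the recovery policy; no manual intervention is involved.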

h) Bare-Metal Provisioning Architecture:


Bare Metal in cloud computing refers to physical servers that are provided to users without any pre-
installed virtualization or operating system layers. Users can install their own operating system and software
directly on these servers, gaining full access to and control over the hardware resources.
Bare-Metal Provisioning Architecture is a system in cloud computing that allows users to deploy and
manage applications directly on physical hardware (bare-metal servers) rather than on virtual machines. This
architecture provides high performance and full control over the hardware resources.

i) Rapid Provisioning Architecture:


Rapid Provisioning Architecture in cloud computing refers to a system designed to quickly and
automatically set up IT resources, such as virtual machines (VMs), to save time, reduce human errors, and
increase efficiency. This architecture uses automated processes and pre-defined templates to provision
multiple resources simultaneously.

j) Storage Workload Management Architecture:


Storage Workload Management Architecture refers to the design and implementation of strategies, tools, and
processes used to effectively manage and optimize the storage resources within a computing environment.
This architecture focuses on ensuring that storage systems perform efficiently, meet performance
requirements, and are scalable to handle varying workloads.

k) Direct I/O Architecture:


Direct I/O Architecture refers to a method in computer systems, particularly in the context of storage and
input/output operations, where data is transferred directly between storage devices and application memory
without passing through the operating system's buffer cache. It is commonly implemented with the help of
Direct Memory Access (DMA) hardware and is designed to optimize data transfer speeds and reduce latency
by bypassing unnecessary layers of processing.

l) Dynamic Data Normalization Architecture:


Dynamic Data Normalization Architecture refers to a system or approach in data management where data
from various sources or formats is standardized and made uniform in real-time or near real-time. This
architecture ensures that data is consistent, accurate, and ready for analysis or processing across different
systems or applications.

m) Elastic Network Capacity Architecture:


Elastic Network Capacity Architecture refers to a network infrastructure design that allows for dynamic and
scalable adjustment of network resources based on changing demands and conditions.

n) Cross Storage Device Vertical Tiering Architecture:


Cross Storage Device Vertical Tiering Architecture refers to a storage management strategy in which data is
classified and stored across different types or tiers of storage devices based on its usage patterns and
performance requirements. This architecture optimizes storage efficiency and performance by placing data
on the most appropriate storage tier, ensuring that frequently accessed data resides on faster and more
expensive storage devices, while less accessed or archival data is stored on slower and more cost-effective
storage devices. Imagine you're organizing items in a warehouse:

Different Shelves: You place commonly used items on easily accessible shelves near the front.
Less Used Items: Items used less frequently are stored on higher or harder-to-reach shelves in the back.

o) Intra Storage Device Vertical Data Tiering Architecture:


Intra Storage Device Vertical Data Tiering Architecture is a storage management approach that involves
organizing data within a single storage device into multiple tiers based on its usage patterns, access
frequency, and performance requirements. Unlike cross-storage device tiering which spans different types of
storage media, intra storage device tiering focuses on optimizing data placement within a single storage
system to maximize performance and efficiency.

p) Load Balanced Virtual Switches Architecture:


Load Balanced Virtual Switches Architecture refers to a network design strategy used in virtualized
environments, particularly in cloud computing and data centres, to distribute network traffic efficiently
across multiple virtual switches. This architecture ensures that network resources are utilized optimally,
avoiding bottlenecks and improving overall performance and reliability.

q) Multipath Resource Access Architecture:


Multipath Resource Access Architecture refers to a design approach in computer networking and storage
systems where multiple paths or routes are utilized simultaneously to access resources. This architecture
enhances reliability, performance, and availability by providing redundant paths that can dynamically adjust
to network or system changes without disrupting operations.

r) Persistent Virtual Network Configuration Architecture:


Persistent Virtual Network Configuration Architecture refers to a design approach in virtualized
environments where network configurations (such as virtual networks, VLANs, firewall rules, etc.) are
stored and maintained persistently across system reboots or migrations. This architecture ensures that
network settings remain consistent and are automatically applied to virtual machines (VMs) and network
devices without manual intervention.

s) Redundant Physical Connection for Virtual Servers Architecture:


Redundant Physical Connection for Virtual Servers Architecture is a network design strategy aimed at
ensuring high availability and reliability for virtualized servers by providing redundant physical network
connections. This architecture prevents single points of failure and enhances resilience against network
disruptions.

t) Storage Maintenance Window Architecture:


Storage Maintenance Window Architecture refers to a structured approach for scheduling and managing
maintenance activities related to storage systems in a way that minimizes disruption to ongoing operations
and maximizes availability.

CLOUD FEDERATION & BROKERAGE:


Cloud federation is the interconnection of the cloud computing infrastructures of two or more cloud
providers for load balancing. One of the providers buys services from the other. The federation agreement
may be temporary or permanent.

Workload Placement in Federated Clouds:


Each cloud provider has a limited number of servers and resources they can allocate to customers' requests.
Some requests have deadlines; for example, a business might need data processed by a certain time. If a
cloud provider doesn't have enough capacity to handle all requests on time, it faces congestion and may fail
to meet Service Level Agreements (SLAs). SLA violations can lead to penalties for the cloud provider,
affecting their reputation and business.

Solution:
Evaluate Options: Decide whether to handle excess requests internally or through partnerships with other
cloud providers.

Consider Costs: Calculate the revenue from extra requests versus the cost of using another provider's
resources.
Prioritize Requests: Balance the urgency of customer deadlines against the time it takes to process requests
remotely.
Federation Benefits: Joining a federation allows providers to cooperate, sharing resources and reducing
delays caused by network distances.

Horizontal Expansion: Federating with other providers at the same service level to add capacity.
Vertical Expansion: Expanding across different types of cloud offerings (like Infrastructure as a
Service, Platform as a Service, and Software as a Service).

Note:
In general scaling terminology, vertical expansion means adding more hardware or resources to existing
nodes, while horizontal expansion means adding more nodes, services, or features.

Cloud Brokerage:
Cloud Brokerage simplifies cloud service selection and management by acting as a middleman between
cloud service providers and customers. It offers a marketplace where customers can choose from various
cloud services, provides added services like monitoring and security, and helps negotiate the best terms. This
makes it easier for businesses to find and use the right cloud solutions without dealing directly with multiple
providers.

Advantages:
Finding the Best Provider: A broker, like a middleman, helps businesses find the perfect cloud provider.
They do this based on what the business needs and wants.
Saving Time and Effort: Instead of businesses searching on their own, the broker does the searching. This
saves businesses a lot of time and work.
Understanding Needs: Brokers work closely with businesses to understand what they need from the cloud,
like how much they can spend and what they need to do with it.

Offering Choices: Brokers give businesses a list of options, so they can choose the cloud provider that’s
right for them. They look at things like budget and what the business wants to do.
Negotiating and Contracting: Brokers can even negotiate with providers on behalf of the business. They
help set up contracts and make sure everything is fair.

Extra Help: Some brokers offer tools to help businesses use their cloud resources better. They might help
with keeping data safe, managing how it moves around, and giving advice on how to get the most out of the
cloud.

CLOUD DELIVERY/SERVICE MODELS’ PERSPECTIVE:

Cloud Provider's Perspective about IaaS:


Virtual Machines and Storage: Cloud providers offer virtual machines with operating systems, virtual
RAM, CPU, and storage. These resources are set up using predefined configurations or bare-metal
provisioning for more control.
Geographical Reach: Cloud services span multiple data centres across different locations, connected by fast
networks.

Isolation and Scalability: VLANs and network controls separate VMs for each customer, while resource
pools and management systems ensure scalability.
Reliability and Security: Data replication ensures high availability, and multipath access enhances
reliability. Billing and SLA monitors track usage for accurate billing and management.

Security Measures: Encryption, authentication, and authorization systems protect data and ensure secure
access.

Note:
Encryption keeps data secure by encoding it, authentication verifies users' identities, and authorization
controls what actions they can take or what data they can access once authenticated.

Cloud Provider's Perspective about PaaS:
Ready-made Environments: Developers access pre-configured environments with software tools and
SDKs for building and testing applications.
Scalability and Multitenancy: PaaS environments support scaling applications based on demand and
budget, using automated scaling and load balancers for workload distribution.

Reliability: Non-disruptive service relocation and failover systems maintain application availability across
multiple VMs and data centers.
Monitoring and Security: Pay-per-use and SLA monitors track resource usage and failures, leveraging IaaS
security features for protection.

Cloud Provider's Perspective about SaaS:


Concurrent Users and Custom Implementations: Each SaaS deployment is unique, supporting different
programming logic, resource needs, and user workloads (e.g., Google Apps, email services).

Implementation Mediums: SaaS applications are accessed via mobile apps, REST or web services,
providing APIs for functions like payments (e.g., PayPal) and maps (e.g., Google Maps).
Multi-Device Access: Mobile-based SaaS apps use a multi-device broker for diverse device access.

Architecture and Monitoring: SaaS relies on load balancing, dynamic failure detection, storage
maintenance, elastic resource/network capacity, and cloud balancing for efficient operation. Usage data
collected by pay-per-use monitors helps with billing, and additional security measures ensure data
protection.

Cloud Consumer’s Perspective about IaaS:


IaaS allows consumers to manage virtual machines (VMs) and storage:

Accessing VMs: Consumers use remote applications like remote desktop for Windows or SSH for Mac and
Linux to connect to their VMs, which have an operating system installed.
Managing Cloud Storage: Cloud storage can be connected directly to VMs or to local devices on-site.
Different storage types like networked file systems, storage area networks, or object-based storage are
accessible through web interfaces.

Administrative Control: Consumers have extensive rights to manage their IaaS resources, including
scaling, starting or stopping VMs, setting up networks and firewalls, attaching storage, configuring failover
settings, monitoring SLAs, installing basic software, selecting VM startup images, and managing passwords
and credentials.

Management Tools: IaaS resources are managed through remote administration portals or command line
interfaces using code scripts.

Cloud Consumer’s Perspective about PaaS:


PaaS provides tools and environments for developing and deploying applications:
Components Received: PaaS consumers get access to software libraries, class libraries, frameworks, APIs,
databases, and cloud emulation environments to build applications.

Application Deployment: Completed applications developed using PaaS are deployed directly to the cloud.
Administrative Control: PaaS consumers manage aspects like user logins for their services, selecting tools
from ready-made environments, choosing cloud storage, controlling IT resource usage costs, deploying
automated scaling, load balancing, and replication mechanisms, and monitoring SLAs.

Cloud Consumer’s Perspective about SaaS:


SaaS delivers ready-made software applications over the internet:

API Integrations: SaaS applications come with APIs that allow integration into websites and other
applications, like using Google Maps.
Administration: SaaS consumers have limited administrative privileges and responsibilities compared to
IaaS and PaaS. They manage only a few runtime configurations such as controlling usage costs, monitoring
SLAs, and configuring security settings.

Usage and Cost: Many SaaS services are free, but providers may collect background data. Consumers focus
more on using the service rather than managing its infrastructure.

INTER-CLOUD RESOURCE MANAGEMENT:

Inter Cloud:
Inter-Cloud is like a "Cloud of Clouds," similar to how the Internet is a "network of networks." It connects
multiple cloud providers together.

Main Purpose:
Big tech companies like IBM, HP, CISCO, and RedHat are working on creating this interconnected system
of clouds. They aim to solve challenges such as:

Interoperability: Making different cloud systems work together.


Inter-cloud communication: Allowing clouds to communicate with each other.

Security: Ensuring data is safe across multiple clouds.


Workload migration: Moving workloads smoothly between clouds.

CLOUD COST METRICS AND PRICING MODELS:

Business Cost Metrics:

Upfront Costs: These are the initial expenses you pay when you start using cloud services, like setting up
your IT equipment. Using the cloud usually costs less at the beginning compared to setting up everything
yourself on-site.

On-going Costs: These are the regular expenses, such as software licenses, electricity, insurance, and staff
salaries. Over time, using cloud services can become more expensive than maintaining your own IT setup

Additional Costs:

 Cost of Capital: The cost of borrowing money. It’s more expensive if you need a lot of money
quickly.

 Sunk Costs: Money already spent on your current IT setup. If you move to the cloud, this money is
considered lost.
 Integration Costs: The time and effort required to set up and test new cloud services.
 Locked-in Costs: Expenses related to being dependent on a single cloud provider because different
providers don't always work well together.

Cloud Usage Cost Metrics

Network Usage: The cost based on the amount of data moving in and out of the cloud. Many providers
don’t charge for data coming in to encourage you to use their services.

VM Usage: Charges for using virtual machines (VMs). This can be a fixed cost, based on usage, or depend
on the features of the VM.

Cloud Storage Device Usage: Costs based on the amount of storage used, usually charged hourly. Some
providers may charge based on the number of data operations, but this is rare.

Cloud Service Usage: Costs based on how long you use the service, how many users there are, and the
number of transactions processed.

TCO: The total cost of owning IT resources, including all expenses from buying to maintaining them.
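The usage cost metrics above combine into a simple bill. All rates below are hypothetical; the example also reflects the note that many providers charge only for outbound traffic, so inbound data is accepted by the function but deliberately not billed:

```python
# Illustrative monthly cloud bill built from the usage cost metrics
# above. Every rate is a made-up example value.

RATES = {
    "vm_hour": 0.05,            # per VM-hour
    "storage_gb_hour": 0.0001,  # per GB of storage per hour
    "outbound_gb": 0.09,        # per GB of outbound traffic
}

def monthly_bill(vm_hours, storage_gb_hours, outbound_gb, inbound_gb=0):
    cost = (vm_hours * RATES["vm_hour"]
            + storage_gb_hours * RATES["storage_gb_hour"]
            + outbound_gb * RATES["outbound_gb"])
    # inbound_gb is accepted but not billed: inbound traffic is free
    return round(cost, 2)

# One VM running a 720-hour month, 50 GB stored all month,
# 100 GB out and 500 GB in:
bill = monthly_bill(vm_hours=720, storage_gb_hours=720 * 50,
                    outbound_gb=100, inbound_gb=500)
```

Here the bill is 36.00 (VM) + 3.60 (storage) + 9.00 (network) = 48.60; the 500 GB inbound contributes nothing.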

Cost Management Considerations:

Cost management for cloud services happens at different stages, including:

 Design & Development: Planning and creating the cloud service.


 Deployment: Setting up and launching the cloud service.
 Service Contracting: Making agreements and contracts with customers.
 Provisioning & Decommissioning: Providing and eventually removing the cloud service.

The cost templates (pricing plans) that providers use depend on:

 Market Competition: How much competition there is in the market.


 Overhead Costs: The extra costs during the design, deployment, and operation of the service.
 Cost Reduction: Finding ways to save money by sharing IT resources more effectively.

A cloud service pricing model can include:

 Cost Metrics: Different measures to calculate costs.


 Fixed and Variable Rates: Set prices and prices that change based on usage.
 Discount Offerings: Special price reductions.
 Cost Customization: Adjusting costs based on specific needs.
 Negotiations: Allowing consumers to negotiate prices.
 Payment Options: Various ways to pay for the service.

Cloud Service Quality Metrics:

Metrics are used to define and monitor Service Level Agreements (SLAs).

Key QoS Characteristics:

 Availability: Up-time, down-time, service duration.


 Reliability: Minimum time between failures, successful response rate.
 Performance: Capacity, response time, delivery time.
 Scalability: Ability to handle capacity changes and maintain performance.
 Resiliency: Ability to recover from failures.

Definitions:

Availability: How much time a service is up and running versus down and not working.

Reliability: How often the service works without breaking and how often it responds correctly.

Performance: How well the service handles its tasks, including how quickly it responds and delivers
results.

Scalability: How well the service can handle more users or tasks without losing performance.

Resiliency: How quickly and effectively the service recovers from problems or failures.

Service Availability Metrics:

Availability Rate:

 Measured as total up-time/total time.


 E.g., 99.5% minimum.

Down-time Duration:

Down-time is the period when a service or system is not operational or available. During this time, users
cannot access the service because it is either under maintenance, experiencing a failure, or facing other
issues that prevent it from functioning properly.

 Measures maximum and average continuous down-time.


 E.g., 1 hour max, 15 minutes average.
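The availability rate metric is just up-time divided by total time. A short worked example, checked against the 99.5% threshold used above (the 30-day month and 2-hour outage are invented numbers):

```python
# Availability rate = total up-time / total time, expressed as a
# percentage and compared against an example 99.5% SLA threshold.

def availability_rate(total_hours, downtime_hours):
    return 100.0 * (total_hours - downtime_hours) / total_hours

# A 30-day month (720 h) with 2 h of total down-time:
rate = availability_rate(720, 2)
sla_met = rate >= 99.5
```

Two hours of down-time in a 720-hour month gives roughly 99.72% availability, which clears the 99.5% minimum; about 3.6 hours of down-time would be the most the SLA allows.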

Service Reliability Metrics

Mean-Time Between Failures (MTBF):

 Expected time between consecutive failures.


 E.g., 90 days average.

Service Reliability Rate:


 Percentage of successful service outcomes.
 E.g., 99.5% minimum.

Service Resiliency Metrics:

Mean-Time to Switchover (MTSO):

 Time to switch over to a replicated instance after failure.


 E.g., 12 minutes average.

Mean-Time System Recovery (MTSR):

 Time expected for complete recovery from failure.


 E.g., 100 minutes average.

CloudSim:

Testing cloud research and theories on real data centers is hard, so CloudSim provides a free simulation
environment. Researchers, IaaS/PaaS users, and cloud providers use it to improve performance, test policies,
and manage workloads.

CloudSim: Configuration:

Requirements: Needs Java 8 or newer.

Setup: Simply unpack CloudSim and it's ready to use. To remove it, just delete the folder.

Examples and Tutorials: Comes with example codes and video tutorials for easy understanding.

Workload Unit: Called a "Cloudlet."

Computer security Basics:

Computer Security: It is the protection of computer systems from unauthorized access, destruction,
distribution, and modification.

Information System: Software that helps organize and analyse data.

Privacy: The right to control what information about you is collected, stored, and shared.

Key Terminologies:

The key terminologies of privacy are:

Data controller: An individual or a body which individually or jointly determines the purpose and
procedure of processing an item of personal information.

Data processor: An individual or body which processes the personal information on behalf of the data
controller

Data subject: An identified or identifiable individual to whom personal information relates directly or
indirectly.

Confidentiality: Only allowing authorized people to access information.

Example: Student grades should only be available to students, their parents, and certain school staff.

Integrity: Protecting information from unauthorized changes or destruction.

Example: Medical records must be accurate to prevent harm.

Availability: Ensuring information is accessible when needed.

Example: Authentication services need to be available to avoid disrupting work.

Authentication: Proving the identity of a user, e.g., through a login and password.

Authorization: Verification of the access rights of an authenticated user, e.g., basic vs. premium
subscription access to an online gaming website.

Computer Security & Trust:

Trust: The belief that a person or system will behave as expected, even with some risks.

 Hard Trust: Based on security measures like authentication and encryption.


 Soft Trust: Based on factors like user experience and brand loyalty.

Cloud Trust: Can be long-term (persistent) or short-term (dynamic) and is enhanced by security features.

Cryptography: The science of securing information by converting it into unreadable formats for
unauthorized users.

Cryptanalysis: The study of breaking encrypted codes.

Functions of Cryptography: Privacy, authentication, integrity, non-repudiation (proof of sender), and key
exchange.

Authentication: Verifying the identity of a user.

Methods: Passwords, electronic cards, fingerprints, etc.
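Password-based authentication is typically implemented by storing a salted, slowly derived hash rather than the password itself. A minimal sketch in Python (the function names are illustrative, not from any particular system):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # PBKDF2 with a random salt: slow to brute-force, unique per user.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    # Re-derive with the stored salt; compare in constant time.
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("s3cret")
print(verify_password("s3cret", salt, stored))  # True
print(verify_password("wrong", salt, stored))   # False
```

Only the salt and the derived hash are stored, so a stolen credential database does not directly reveal any password.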

Access Control: Limiting what an authenticated user can do.

Managed by software that checks user actions and permissions.

Malware: Programs designed to harm a computer system.


Examples: Adware (pop-up ads), Keyloggers (record keystrokes), Viruses (replicate and spread), Worms
(network-spreading programs), etc.

DoS Attack: Overloading systems with traffic to make them unavailable to users.

Symptoms: Slow network, inaccessible websites, and spam emails.

Remedies: Contact ISP, use DoS detection tools, and manage network traffic.

Firewall: A barrier that blocks unauthorized access to a network. Filters traffic based on rules and monitors
connections.
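The rule-based filtering a firewall performs can be sketched as an ordered list of rules with a default-deny fallback; the rule set below is hypothetical:

```python
import ipaddress

# Hypothetical ordered rule set: first match wins, default deny.
RULES = [
    {"action": "allow", "port": 443, "src": "any"},         # HTTPS from anywhere
    {"action": "allow", "port": 22,  "src": "10.0.0.0/8"},  # SSH from the internal net only
    {"action": "deny",  "port": 22,  "src": "any"},         # SSH from anywhere else
]

def filter_packet(src_ip, dst_port):
    for rule in RULES:
        src_ok = rule["src"] == "any" or \
            ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
        if src_ok and rule["port"] == dst_port:
            return rule["action"]
    return "deny"  # anything not explicitly allowed is blocked

print(filter_packet("10.1.2.3", 22))  # allow: internal SSH
print(filter_packet("8.8.8.8", 22))   # deny: external SSH
```

Real firewalls match on many more fields (protocol, direction, connection state), but the first-match, default-deny structure is the same.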

Intrusion Detection System (IDS): Software or hardware that detects and alerts about suspicious activities
on a network.

Buffer Overflow: When a program writes more data to a memory block than it can hold, potentially
allowing an attacker to take control. Common in languages with direct memory access like C and C++.

OS Security: Implementing security measures during the installation and operation of an operating system.

Steps: Secure BIOS, apply updates, remove unnecessary services, set permissions, and use security tools.

Virtualization Security: Ensuring the isolation and monitoring of virtual machines (VMs).

Methods: Secure hypervisor installation, administrative control, and proper mapping of virtual to physical
devices.

Threat: A potential security breach.

Vulnerability: A weakness that can be exploited.

Risk: The chance of harm or loss from a threat exploiting a vulnerability.

Threat Agent: A factor capable of carrying out an attack.

Types:

 Anonymous Attacker: Outsider launching attacks.


 Malicious Service Agent: Has harmful code and can intercept network traffic.
 Trusted Attacker: A legitimate user who exploits weak security.
 Malicious Insider: An employee who causes damage with administrative rights.

Network Security Basics:

Internet Security: Protecting against threats that come from the Internet.

Major threats include:


 Unauthorized access to computer systems, email accounts, websites, and personal or banking
information.
 Viruses and other malicious software (malware).
 Social engineering (tricking people into giving away information).

Secure Sockets Layer (SSL): A protocol for encrypting communication between a web browser and a web
server.
Working:

1. A website enables SSL.


2. A browser requests a secure connection to the website.
3. The website shares its security certificate (issued by a Certificate Authority) with the browser.
4. The browser confirms the certificate's validity.
5. The browser generates a session key for encryption and shares it with the website.
6. An encrypted communication session starts, using HTTPS (https://) and showing a padlock symbol
in the URL bar.
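In Python, the standard-library `ssl` module applies the same certificate checks by default; a client-side sketch of steps 3-4 (no network connection is made here):

```python
import ssl

# A default client context requires a valid certificate chaining to a
# trusted Certificate Authority, and that the name matches the cert.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: certificates are verified
print(context.check_hostname)                    # True: hostname must match

# Wrapping a TCP socket with context.wrap_socket(sock, server_hostname=...)
# would then perform the handshake and start the encrypted session.
```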

Wireless Network Security:

Wireless Network Security: Protecting wireless networks from unauthorized access.

Threats:

 Packets can be easily intercepted and recorded.


 Traffic can be modified and retransmitted more easily than on wired networks.
 Wireless networks are more vulnerable to DoS attacks at access points (APs).

Security Protocols for Wireless Networks:

Wired Equivalent Privacy (WEP):

 Designed to provide the same level of security as wired networks.


 Uses RC4 encryption keys (40-128 bits).
 Has many security flaws, is difficult to configure, and can be easily cracked.

Wi-Fi Protected Access (WPA):

 An improvement over WEP, providing better security.


 Uses enhanced RC4 through Temporal Key Integrity Protocol (TKIP).
 Backward compatible with WEP.

Wi-Fi Protected Access 2 (WPA2):

 The most secure wireless security standard, standardized as IEEE 802.11i.


 Uses Advanced Encryption Standard (AES) and Counter Mode with Cipher Block Chaining Message
Authentication Code Protocol (CCMP).
 Allows seamless roaming between access points without needing to reauthenticate.

Cloud Security Mechanisms:
Encryption:
Plaintext: Data in human-readable format.
Encryption: Transforming plaintext into a protected, unreadable format called ciphertext to ensure
confidentiality and integrity.
Cipher: The algorithm used for encryption.
Encryption Key: A secret string of characters used to encrypt and decrypt data.
Types of Encryption:
1. Symmetric Encryption: Uses a single key for both encryption and decryption. Simple but less
secure if the key is shared.
2. Asymmetric Encryption: Uses a pair of keys (public and private). The public key encrypts, and the
private key decrypts. More secure since only the private key can decrypt the message.
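A toy illustration of the single-key property of symmetric encryption, using XOR with a repeating key (real systems use vetted ciphers such as AES, never this scheme):

```python
def xor_cipher(data, key):
    # XOR each byte with the repeating key; applying the same key
    # twice restores the original, so one shared key does both jobs.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-secret"
ciphertext = xor_cipher(b"meet at noon", key)   # encrypt
plaintext = xor_cipher(ciphertext, key)         # the same key decrypts
print(plaintext)  # b'meet at noon'
```

This is exactly the weakness noted above: anyone who obtains the shared key can decrypt, which is what asymmetric key pairs avoid.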
Hashing and Digital Signatures:
 Hashing: Creating a fixed-length code (hash) from a message to verify its integrity. If the message
changes, the hash code changes, indicating tampering.
 Digital Signatures: Verify the authenticity and integrity of digital messages or documents,
similar to a handwritten signature. Used to ensure the message hasn't been altered and is from the
claimed sender.
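The tamper-detection idea behind hashing can be shown with SHA-256 from Python's standard library:

```python
import hashlib

message = b"transfer $100 to account 42"
digest = hashlib.sha256(message).hexdigest()  # fixed-length fingerprint

# A one-character change produces a completely different hash.
tampered = b"transfer $900 to account 42"
print(hashlib.sha256(tampered).hexdigest() == digest)  # False: tampering detected
print(hashlib.sha256(message).hexdigest() == digest)   # True: message intact
```

A digital signature goes one step further: the sender encrypts the hash with their private key, so anyone holding the public key can verify both integrity and origin.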
Public Key Infrastructure (PKI):
PKI: A system for managing asymmetric encryption keys and digital certificates.
 Digital Certificates: Bind a public key to its owner and are issued by a Certificate Authority (CA).
 Purpose: Implement encryption, manage identities, and protect against threats like unauthorized
access.

Identity and Access Management (IAM):

IAM: Policies and procedures for managing user identities and access to IT resources.

Components:

 Authentication: Verifying user identity (usernames, passwords, biometrics).


 Authorization: Controlling access to resources.
 User Management: Creating and managing user accounts and privileges.
 Credential Management: Establishing rules for identities and access.

Single Sign-On (SSO): Allows users to sign in once and access multiple services without re-authenticating.

Privacy Issues in Cloud Computing:

Lack of User Control

 Data Privacy Issues: Concerns include unauthorized access, improper use of data, retention without
permission, and assurance that data is deleted when needed.
 User Control:
o Infrastructure: Users do not own or control the cloud infrastructure, leading to risks of data
theft and misuse.
o Access: It's often unclear if or when cloud providers access user data, and detecting
unauthorized access is difficult.
o Data Lifecycle: Users cannot be sure their deleted data is actually removed, and there's no
regulation to enforce data erasure by providers.
o Provider Change: It's unclear how to retrieve and ensure the deletion of data when switching
cloud providers.
o Notification: It's difficult to determine responsibility for unauthorized access.

Lack of Training and Expertise

 Skilled Personnel: Running cloud services requires highly skilled staff, particularly with STEM
skills.
 Privacy Impact: Lack of understanding of privacy implications can increase security risks.
 Employee Behaviour: More devices mean more chances for privacy breaches, like unattended
laptops with sensitive data.
 Public Cloud Access: Careful control is needed to prevent privacy issues from public cloud services.

Unauthorized Secondary Usage

 Secondary Usage: Cloud data may be used without permission.


o Legal: Data can be sold for advertising.
o Illegal: Data could be sold to competitors.
 Regulation: There's a need for legal measures to prevent and check unauthorized data use.

Complexity of Regulatory Compliance

 Global Rules: Cloud computing makes it hard to comply with different regional rules and
regulations.
 Data Location: Data may be replicated across various locations, making compliance difficult.
 Cross-Border Data: It's tough to control data movement across borders, especially with multiple
cloud providers.

Addressing Trans-Border Data Flow Restrictions

 Regulations: Many countries restrict personal data flow across borders (e.g., EU, Australia, Canada).
 Adequate Protection: Data can flow to countries with adequate protections or agreements (e.g., US
Safe Harbor agreement).
 Cloud Compliance: Cloud providers need to comply with these data flow restrictions.

Litigation

 Court Orders: Cloud providers may be forced to hand over data due to legal orders.
 Private Agreements: Legal agreements can prevent private entities from accessing data without
permission.
Legal Uncertainty

 Evolving Laws: Cloud computing often outpaces current legal frameworks, leading to uncertainties.
 Data Anonymization: Legal consent for anonymizing data and the applicability of privacy laws to
anonymized data is unclear.
 Framework Application: Uncertainty exists on how existing privacy laws apply to cloud
computing.

Conclusions

 Global Privacy: Privacy protection is uncertain globally, and new demands are emerging.
 Policy Changes: Policymakers are pushing for updated security frameworks and accountability.
 Privacy Regulations: The USA and EU are considering new privacy protection frameworks.
 Cloud Challenges: Meeting global privacy regulations in cloud computing is complex, especially
with data location and deletion concerns.

Security Issues in Cloud Computing:

Gap in Security
 User Control: Lack of control by users leads to security risks.
 Service Level Agreements (SLAs): Often don’t specify necessary security measures.
 Type of Service: Security responsibilities vary by service type (IaaS, PaaS, SaaS).
Unwanted Access
 Government Access: Laws like the US Patriot Act allow government access to data.
 Security Breaches: Risks from inadequate security, malicious employees, and other consumers.
Vendor Lock-in
 Interoperability Issues: Lack of standard formats and APIs makes switching providers difficult.
 Data Migration: Hard to move data between providers or bring it back in-house.
Inadequate Data Deletion
 Data Residuals: No assurance that deleted data is completely removed.
 Shared Resources: Data may persist across shared or reallocated resources.
Compromise of the Management Interface
 Remote Access Risks: Internet-based access poses higher risks.
 Vulnerabilities: Can lead to malicious access to extensive resources.
Backup Vulnerabilities
 Multiple Copies: While backups increase reliability, they also introduce risks.
 Data Loss: Risks of losing data before backups are made or losing context with missing data keys.
Isolation Failure
 Multi-Tenancy: Shared applications may fail to separate data properly.
 Virtualization Attacks: Virtual machines, though isolated, can be compromised if the host server is
attacked.

Missing Assurance and Transparency
 Liability: Providers often take minimal responsibility for data loss.
 Assurances: Consumers need guarantees for data safety and alerts for unauthorized access.
Inadequate Monitoring, Compliance, and Audit
 Auditing Difficulties: Complex cloud infrastructures make monitoring and auditing challenging.
 Compliance: Ensuring cloud procedures match consumer security policies is tough.
Conclusion
 Varied Security Issues: Depend on service type and deployment model.
 Outsourcing Security: Can lead to better security but finding the right provider is crucial.

Trust Issues in Cloud Computing:


Trust in the Clouds

 Trust Boundaries: Traditionally, security boundaries like firewalls create a trusted area for data. In
the cloud, data may be stored and processed outside these boundaries, making it essential to extend
trust to cloud providers.
 Trusted Providers: Trust should be based on recommendations from trusted sources like auditors,
security experts, and established companies.
 Importance of Trust: Especially crucial for personal or business-critical information.

Lack of Consumer Trust

 Consumer Concerns: Many consumers, especially in Europe, worry about unauthorized use of their
data.
 Trust Factors: Factors like reputation, recommendations, trial experiences, and contracts influence
trust in cloud providers.
 Enterprise Concerns: Businesses worry about data security, SLA compliance, vendor lock-in, and
interoperability.

Weak Trust Relationships

 Supply Chain Risks: Using subcontractors can weaken trust, as consumers may not know where
their data is or who has access.
 Lack of Transparency: Consumers may not know the identity of subcontractors, leading to weak
trust.
 Rapid Provisioning: Adding new providers quickly for extra capacity can create weak trust
relationships.

Lack of Consensus About Trust Management Approaches to Be Used

 Missing Consensus: There’s no agreement on how to manage and measure trust in cloud computing.
 Standardized Models Needed: Current models for trust evaluation are inadequate and lack suitable
metrics.
 Verification Challenges: No consensus on what evidence is needed to verify trust mechanisms.

Conclusions

 Trust as a Key Concern: Trust issues are a major barrier to the wider adoption of cloud services.
 Fear of Data Misuse: Concerns about unauthorized access and misuse of data.
 Trade-offs: Using cloud services involves balancing privacy, security, compliance, costs, and
benefits.
 Propagating Trust: Trust mechanisms need to extend throughout the service provision chain.
 Developing Trust Models: Comprehensive trust measurement models are required.

Trust Management in Cloud Computing

Systematic Trust Management: A system is needed to monitor and evaluate trust in cloud services.

Trust Attributes: Trust can be measured using attributes like:

 Data Integrity: Security, privacy, and accuracy.


 Security: Protection of personal data.
 Credibility: Quality of Service (QoS).
 Efficiency: Actual vs. promised turnaround time.
 Availability: Access to provider’s resources and services.
 Reliability: Success rate in performing agreed functions on time.
 Adaptability: Avoiding single points of failure through redundancy.
 Customer Support: Quality of support provided.
 Consumer Feedback: Reviews and ratings from users.

Trust Computation: These attributes can be graded to compute a trust value for future reference.
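One simple way to grade and combine such attributes into a trust value is a weighted sum; the attribute weights and grades below are hypothetical:

```python
# Hypothetical grading: each attribute scored 0-10, weights sum to 1.
weights = {"security": 0.30, "availability": 0.25, "reliability": 0.25, "support": 0.20}
grades  = {"security": 8,    "availability": 9,    "reliability": 7,    "support": 6}

trust_value = sum(weights[a] * grades[a] for a in weights)
print(round(trust_value, 2))  # 7.6 on a 0-10 scale
```

Stored over time, such scores let consumers compare providers and track whether a provider's trustworthiness is improving or degrading.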

Approaches to Addressing Privacy, Security, and Trust Issues

 Three Dimensions:
o Regulatory Frameworks: Innovative regulations to facilitate cloud operations and address
privacy, security, and trust.
o Responsible Governance: Providers should demonstrate a commitment to safeguarding data
and prove it through audits.
o Supporting Technologies: Use technologies like encryption and anonymization to enhance
privacy and security.
 Combined Approach: Using a mix of these dimensions can reassure consumers and build trust in
cloud providers.

Open Issues in Cloud Computing:

 Not Universally Suitable: Cloud computing isn't ideal for all IT needs or applications.
 Common Issues: Like any complex system, cloud computing faces hardware failures and security
vulnerabilities.
 Addressing Issues: Techniques exist to mitigate and isolate these failures and compromises.

Computing Performance:

 Real-Time Applications: Cloud computing may struggle with high-performance demands and
predictability.
 Latency: Delays in communication can affect performance.
 Data Synchronization: Managing updates to data across multiple copies in the cloud can be
challenging, requiring robust synchronization mechanisms.
 Scalability: Legacy applications may need updates to fully utilize cloud computing's scalability.

 Data Control: Consumers need control over data lifecycle and information on any unauthorized
access.

Cloud Reliability:

Definition: Reliability refers to the probability of a system providing uninterrupted service.

Factors Affecting Reliability:

 Network Dependence: Internet reliability impacts cloud service reliability.


 Safety-Critical Processing: Applications involving critical operations (e.g., avionics, medical
devices) aren't suitable for cloud hosting due to reliability concerns.

Economic Goals:

Benefits: Cloud offers cost savings, scalability, and reduced maintenance costs.

Risks:

 SLA Evaluation: Lack of automated tools for Service Level Agreement (SLA) compliance requires
standardized templates for clarity.
 Portability: Challenges exist in transferring data to the cloud securely and moving workloads
between providers.
 Interoperability: Lack of compatibility among cloud providers can lead to vendor lock-in.
 Disaster Recovery: Plans for recovering from physical or electronic disasters are crucial to avoid
economic and performance losses.

Compliance:

Definition: Compliance involves adhering to laws and security standards.

Responsibilities:

 Provider Role: Providers implement compliance measures but often lack transparency about them.


 Consumer Role: Consumers must ensure their data meets legal requirements.

Examples: Healthcare information protection laws (HIPAA), payment security standards, etc.

Forensics: Necessary for investigating incidents and preparing legal actions.

Information Security:

Confidentiality and Integrity: Ensuring data confidentiality, integrity, and availability.

Control Measures:

 Administrative Controls: Define user permissions for data handling.


 Physical Controls: Secure data storage devices physically.
 Technical Controls: Use Identity and Access Management (IAM), encryption, and data auditing.

Cloud-Specific Security: Public and private clouds have unique security risks.

Provider Responsibilities: Providers should monitor security compliance to reassure consumers.

Disaster Recovery in Cloud Computing:

Understanding the threats

 Disk Failure: Hard drives can fail due to wear and tear or disasters like fire or floods. Manufacturers
provide Mean Time Between Failures (MTBF) estimates, but relying solely on these isn't enough.
 Strategies:
o Traditional Backup: Storing data on separate devices. If one fails, data can be restored from
the backup, but if both are lost, data is gone.
o RAID (Redundant Array of Independent Disks): Distributes data across multiple drives. If
one fails, data can be recovered, but complete RAID failure means data loss unless backed
up.
o Cloud-based Backup: Replicates data to remote servers automatically, enhancing reliability
and reducing downtime compared to traditional backups.
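The RAID recovery idea rests on XOR parity: the parity block is the XOR of the data blocks, so any single lost block equals the XOR of the survivors. A minimal sketch:

```python
def xor_blocks(a, b):
    # Byte-wise XOR of two equally sized blocks.
    return bytes(x ^ y for x, y in zip(a, b))

block1 = b"AAAA"                      # data on drive 1
block2 = b"BBBB"                      # data on drive 2
parity = xor_blocks(block1, block2)   # parity stored on drive 3

# Drive 2 fails: rebuild its block from the survivors.
recovered = xor_blocks(block1, parity)
print(recovered == block2)  # True
```

This tolerates one drive failure; losing two drives at once still destroys data, which is why RAID complements rather than replaces backups.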


Power Failure or Disruption: Power surges or outages can damage computers and lead to data loss.

Solutions:

 Surge Protectors: Protect devices from sudden power spikes.


 UPS (Uninterruptible Power Supply): Provides backup power during outages, commonly used in
data centers.
 Cloud Migration: Moving IT operations to the cloud where providers manage robust power backups
and automatic failover to other power grids.


Computer Viruses: Malware can infect devices via internet downloads or shared drives.

Protection:

 Antivirus Software: Essential for detecting and removing viruses.


 Firewalls: Control network traffic to prevent unauthorized access.

Cloud Advantage: Virtualization isolates workloads in the cloud, making it harder for malware from outside
the cloud to spread. Providers also implement strong security measures.


Fire, Flood & Disgruntled Employees: Natural disasters or internal threats like disgruntled employees can
destroy equipment or data.

Mitigation:

 Fire Prevention: Cloud providers manage fire prevention systems and backup data remotely,
reducing consumer costs and efforts.
 Location Strategy: Avoid placing data centers in flood-prone areas.
 Access Control: Limit access to sensitive data and quickly revoke access for terminated employees
using Identity as a Service (IDaaS).


Lost Equipment & Desktop Failure: Lost or stolen devices can lead to data loss.

Solutions:

 Data Synchronization: Cloud services sync data across devices, reducing the risk of permanent data
loss.
 Desktop as a Service (DaaS): Employees can access their work from any device connected to the
cloud, minimizing downtime.


Server Failure & Network Failure: Servers can fail, disrupting operations.

Cloud Solutions:

 Redundancy: Cloud providers maintain redundant servers to ensure high uptime.


 Network Redundancy: Multiple internet connections or backup devices ensure continuous
connectivity.


Database System Failure & Phone System Failure: Database failures affect critical applications.

Cloud Solutions:

 Database Replication: Cloud databases use replication and failover systems to minimize downtime.
 Phone Systems: Cloud-based phone systems offer reliability through internal redundancy.

Measuring Business Impact, Disaster Recovery Plan Template

Risk Assessment: Evaluate risks and their potential impact on business operations.

Disaster Recovery Plan (DRP):

 Documentation: Formally document procedures for various disaster scenarios.


 Continuity: Ensure cloud providers comply with agreed disaster recovery plans for critical
applications.
 Compliance: Confirm provider adherence to legal and regulatory standards.

Data Governance:

Data Access & Separation: Ensure data interfaces are adaptable and secure.

Integrity & Regulations: Implement checksums, data replication, and compliance measures.

Recovery & Disposal: Securely delete data when no longer needed and verify its deletion.

Backup & Archiving: Assess provider's backup and recovery procedures.

Security & Reliability:

Consumer-Side Security: Harden consumer platforms against attacks and ensure strong encryption.

Physical Security: Verify physical security measures at provider's facilities.

Authentication & Access Management: Use advanced authentication methods to prevent unauthorized
access.

Performance Requirements: Benchmark application performance before deployment to the cloud.

VMs, Software & Applications:

VM Security: Ensure VM isolation and network security.

Application Security: Integrate security frameworks and configure applications securely.

Performance & Compatibility: Test application performance and compatibility with cloud environments.

Migrating to the cloud:

Define System Goals and Requirements:

Before moving to the cloud, it's crucial to plan carefully. Start by clearly defining what your system needs to
achieve and the requirements it must meet. Considerations include:

 Data Security and Privacy: Ensure your data is protected according to regulatory requirements.
 Site Capacity Plan: Determine how much cloud computing power you'll initially need.
 Scalability Requirements: Plan for how your system will handle increases in usage.
 System Uptime: Define how reliable your system needs to be in terms of uptime.
 Business Continuity and Disaster Recovery: Have plans in place for potential disruptions.
 Budget: Understand the financial implications of moving to the cloud.
 Operating System and Programming Language: Ensure compatibility with your current systems.
 Type of Cloud: Decide whether a public, private, or hybrid cloud setup best suits your needs.

 Data Backup: Establish how and where your data will be backed up.
 Client Device Support: Consider compatibility with different types of devices.
 Training: Plan for any training needed to use the new cloud-based system effectively.
 Programming API Requirements: Determine the APIs necessary for integration with other
systems.
 Data Export and Reporting Requirements: Specify how data can be exported and reporting
capabilities needed.

Protect Existing Data and Know Your Application Characteristics:

Backup Data: Before migrating, always back up your data to avoid loss.

Data Lifecycle and Disposal: Define how data will be managed and deleted as needed.

Regulatory Compliance: Ensure your cloud solution meets any legal requirements for data privacy and
access.

Application Requirements: Understand how much computing power, storage, and bandwidth your
applications require.

Usage Patterns: Know when your applications experience high and low demand.

Resource Needs: Determine how much RAM and disk storage your applications will need.

Bandwidth Usage: Estimate how much data your applications will transfer over the network.

Caching Needs: Consider whether your applications need data caching for better performance.

Establish a Realistic Deployment Schedule, Review Budget, and Identify IT Governance Issues:

Deployment Schedule: Plan a realistic timeline for moving to the cloud, including testing and training
phases.

Budget Review: Compare the costs of cloud solutions with maintaining in-house systems.

IT Governance: Align your cloud solution with your company's business strategy and establish controls for
system access and monitoring.

Designing Cloud-Based Solution Metrics

Designing a cloud solution that meets both functional (what the system does) and non-functional (how well
the system performs) requirements. Key considerations include:

 Accessibility: Ensure authorized users can access the system easily.


 Auditability: Log critical system events for auditing purposes.
 Availability: Design for high uptime using redundant systems.
 Backup: Plan for data backup and recovery.
 Capacity Planning: Determine current and future resource needs.

 Configuration Management: Support multiple operating systems and devices.
 Disaster Recovery: Prepare for unexpected events that could disrupt operations.
 Interoperability: Ensure your system can work with other cloud services.
 Maintainability: Design for easy updates and maintenance.
 Performance: Optimize speed and responsiveness.
 Privacy: Protect sensitive data from unauthorized access.
 Portability: Design for easy migration to other platforms if needed.
 Reliability: Minimize system downtime due to hardware failures.
 Security: Implement measures to protect against cyber threats.
 Testability: Develop tests to ensure your system meets requirements.
 Usability: Design an intuitive interface for ease of use.

Cloud Application Scalability and Resource Scheduling:

Cloud Application Scalability

Load Balancing:

 Scaling Up vs. Scaling Out: Scaling up means upgrading existing resources for more power, while
scaling out means adding more resources.
 Load Balancer: Distributes work (like client requests) evenly across multiple cloud resources using
algorithms like round robin or random distribution.
 Application Design: A well-designed cloud app should scale efficiently without being too rigid or
too costly.
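A round-robin load balancer can be sketched in a few lines; the server names are placeholders:

```python
import itertools

class RoundRobinBalancer:
    """Hands out servers in a fixed cycle so requests spread evenly."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["vm-1", "vm-2", "vm-3"])
print([lb.next_server() for _ in range(5)])  # ['vm-1', 'vm-2', 'vm-3', 'vm-1', 'vm-2']
```

Production balancers add health checks and weighting, but round robin remains the baseline distribution algorithm.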


Optimizing Key Pages:

 Minimize Objects: Simplify pages like the home page or forms by reducing unnecessary items such
as graphics, animations, and audio for faster loading.
 Selecting Measurement Points: Identify and optimize the most critical parts of your code to
improve overall system performance.
 Database Operations: Analyse how data is read and written to optimize performance, considering
whether operations can be split across multiple databases (horizontal scaling).


Capacity Planning vs. Scalability:

 Capacity Planning: Estimate the resources needed at a specific time for your application.
 Diminishing Returns: Scaling should stop when adding more resources doesn't significantly
improve performance.
 Performance Tuning: Besides scaling, improve performance by reducing graphics, page load times,
and using caching.
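Caching is one of the cheapest of these performance wins; memoizing an expensive function with Python's `functools.lru_cache` illustrates the idea (the render function here is a stand-in, not a real workload):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=128)
def render_page(page_id):
    # Stand-in for an expensive render; repeats are served from the cache.
    time.sleep(0.01)
    return f"<html>page {page_id}</html>"

render_page(7)                             # first call: computed
print(render_page(7))                      # second call: served from cache
print(render_page.cache_info().hits >= 1)  # True
```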

Cloud Resource Scheduling Overview:

Effective Resource Scheduling:

Goals: Reduce costs, execution time, and energy consumption while meeting Quality of Service (QoS)
requirements like reliability, security, availability, and scalability.

Provider vs. Consumer: Providers aim to maximize resource use and profit, while consumers want to
minimize costs and execution time.

Types of Resource Scheduling:

These modules cover various strategies for scheduling cloud resources:

 Cost-Based: Prioritizes tasks based on cost constraints, often resulting in a first-come, first-served
approach with considerations for QoS and time.
 Time-Based: Prioritizes tasks based on their deadlines, ensuring tasks nearing their deadlines get
priority.
 Cost & Time-Based: Balances cost constraints with deadline priorities to optimize resource use and
task completion.
 Bargain-Based: Involves negotiation between users and providers to lower processing costs.
 Profit-Based: Focuses on maximizing provider profit while considering SLA (Service Level
Agreement) violations and penalties.
 SLA & QoS Based: Ensures tasks are completed within SLA limits while maintaining QoS
standards.
 Energy-Based: Minimizes energy consumption across data centers to reduce costs and
environmental impact.
 Optimization-Based: Uses advanced algorithms to optimize resource use based on factors like
revenue, efficiency, and task completion times.
 Priority-Based: Assigns priority levels to tasks to avoid starvation during resource contention, with
mechanisms like aging to prevent low-priority tasks from being ignored.
 VM-Based: Manages resources at the virtual machine level, allowing migration of VMs to servers
with available resources to avoid starvation.

Mobile Cloud Computing:

Introduction to Mobile Cloud Computing:

Overview:

 Mobile Device Usage: Mobile devices are widely used globally because they offer flexibility in
terms of time and location.
 Resource Constraints: Mobile devices have limitations like processing power, memory, storage,
bandwidth, and battery life.
 Benefits of Cloud: Cloud computing provides unlimited resources over the internet, which can help
overcome these mobile device limitations.

Need for Mobile Cloud Computing:


Examples of Need:

 Optical Character Recognition (OCR): Translating text for tourists can strain mobile resources,
making a cloud-based solution more efficient.
 Disaster Site Data Sharing: Sharing images to understand disaster sites benefits from cloud
processing to gather and process data efficiently.
 Sensor Data Collection: Gathering data from sensors across a large area is best managed using
cloud applications due to their scalability.

Applications of Mobile Cloud Computing:

Examples:

 Mobile Commerce: Overcomes mobile device limitations like bandwidth and security by
integrating with cloud services.
 Mobile Learning: Allows for larger educational content, faster processing, and better battery
efficiency by using cloud resources.
 Mobile Healthcare: Enables remote monitoring and quick responses in medical emergencies using
cloud-based services.
 Mobile Gaming: Moves heavy processing tasks to the cloud, utilizing only the mobile screen for
gameplay.

Mobile Cloud Computing Architecture:

Architecture Overview:

 Connection: Mobile devices connect to mobile networks via base stations or satellites.
 Network Services: Base stations handle user requests and connect to servers that manage mobile
network services.
 Cloud Interaction: User requests are then sent to the cloud over the internet, where cloud
controllers handle the services requested.

Mobile Cloud Models:

Different Models:

 Direct Cloud Access: Mobile devices directly access applications hosted on cloud servers, like
email through 3G connections.
 Peer-to-Peer: Some mobile devices share resources with each other using mobile peer-to-peer
networks.
 Cloudlet Integration: Mobile devices connect to cloudlets, which are closer than cloud servers,
reducing latency for certain applications.

Advantages of Mobile Cloud Computing:

Benefits:

 Battery Life: Offloading tasks to the cloud saves battery power and reduces response times.
 Resource Enhancement: Cloud storage overcomes mobile device storage limitations, and cloud
processing reduces energy and time costs.
 Reliability: Cloud backups and disaster recovery enhance data and application reliability.

Cost Benefit Analysis of Mobile Cloud Computing:

Cost Consideration:

 Decision Making: Evaluates initial and running costs against benefits like performance, energy
conservation, and quality.
 Task Offloading: Determines whether to offload tasks to the cloud based on device energy
consumption, network throughput, and application characteristics.
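A simplified energy model, of the kind commonly used in the offloading literature, makes this trade-off concrete: offloading saves energy when the cost of computing locally exceeds the cost of idling while the cloud computes plus the cost of transferring the data. The function below is a sketch under that assumption; every speed, bandwidth, and power figure is made up for illustration.

```python
# Sketch of an energy-based offload test. Offloading helps when
#   E_local > E_idle_while_cloud_runs + E_data_transfer.
# All parameter values below are illustrative assumptions.

def energy_saved_by_offload(cycles, local_speed, cloud_speed,
                            data_bytes, bandwidth,
                            p_compute, p_idle, p_transmit):
    e_local = (cycles / local_speed) * p_compute    # J to compute on the device
    e_wait = (cycles / cloud_speed) * p_idle        # J idling during the cloud run
    e_xfer = (data_bytes / bandwidth) * p_transmit  # J sending input/output data
    return e_local - (e_wait + e_xfer)              # > 0 means offloading saves energy

saving = energy_saved_by_offload(
    cycles=2e9, local_speed=1e9, cloud_speed=10e9,  # cloud ~10x faster
    data_bytes=1e6, bandwidth=1e6,                  # 1 MB over a ~1 MB/s link
    p_compute=0.9, p_idle=0.3, p_transmit=1.3)      # watts, illustrative
print(f"offload {'saves' if saving > 0 else 'costs'} {abs(saving):.2f} J")
```

Note how the decision flips with the network: shrink `bandwidth` and the transfer term dominates, so the same task is better run locally.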

Mobile Cloud Computing Security:

Security Concerns:

 Issues: Includes mobile device vulnerabilities, wireless network security, and security bugs in
mobile cloud applications.
 Management: Addresses security and privacy concerns unique to mobile cloud environments.

Communication Issues in Mobile Cloud Computing:

Communication Challenges:

 Bandwidth: Limited radio resources in wireless networks pose challenges for mobile cloud
applications.
 Availability: Ensuring continuous service availability despite network failures or signal losses.
 Heterogeneity: Managing diverse mobile devices with different wireless technologies (2G, 3G,
WLAN) while maintaining high availability, scalability, and energy efficiency.

Computational Offloading in Mobile Cloud Computing:

Offloading Decisions:

 Static vs. Dynamic: Static decisions are made at task start, while dynamic decisions adapt to
runtime conditions like network bandwidth and battery life.
 Efficiency Considerations: Offloading is chosen based on whether the benefits (like reduced battery
usage) outweigh the costs (like network usage).
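The static/dynamic contrast can be shown with a small sketch: a static policy is a constant fixed at task start, while a dynamic policy re-reads runtime conditions before each decision. The function name, thresholds, and inputs below are hypothetical illustrations, not a standard API.

```python
# Sketch contrasting static and dynamic offload decisions.
# A static policy is fixed once; a dynamic policy consults runtime
# conditions (measured bandwidth, battery level) on every call.
# Thresholds are illustrative assumptions.

STATIC_OFFLOAD = True  # static: decided once, e.g. at task start

def dynamic_offload(bandwidth_mbps, battery_pct,
                    min_bandwidth=2.0, low_battery=20):
    """Offload when the link is fast enough, or whenever the battery
    is so low that saving local CPU energy matters most."""
    if battery_pct <= low_battery:
        return True                          # prioritise energy saving
    return bandwidth_mbps >= min_bandwidth   # otherwise, need a usable link

print(dynamic_offload(bandwidth_mbps=8.0, battery_pct=80))   # fast link: offload
print(dynamic_offload(bandwidth_mbps=0.5, battery_pct=80))   # slow link: run locally
print(dynamic_offload(bandwidth_mbps=0.5, battery_pct=10))   # low battery: offload anyway
```

The third call shows why dynamic decisions exist at all: the same slow link yields opposite answers depending on the battery state measured at runtime.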

End User Issues in Mobile Cloud Computing:

User Experience:

 Incentives: Encouraging resource sharing among mobile devices through incentives, whether
monetary or shared interests.
 User Interface: Addressing the challenge of diverse device interfaces to ensure a user-friendly
experience.

 Performance Assurance: Ensuring service availability and performance despite connectivity issues
like network failures or depleted batteries.

Data Access Challenges in Mobile Cloud Computing:

Data Access Issues:

 Challenges: Accessing data efficiently despite low bandwidth, signal losses, or energy constraints.
 Optimization: Developing approaches to optimize data access patterns and using mobile cloudlets
as file caches.
 Interoperability: Ensuring data compatibility across different devices and platforms.
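The "cloudlet as file cache" idea above can be sketched as a small least-recently-used cache sitting between devices and the distant cloud: hits are served locally, misses trigger an expensive wide-area fetch. The class name, capacity, and fetch callback are hypothetical; a real cloudlet cache would add consistency and expiry handling.

```python
# Sketch: a cloudlet acting as a tiny LRU file cache in front of
# the cloud. Capacity and file names are illustrative only.
from collections import OrderedDict

class CloudletCache:
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.files = OrderedDict()  # name -> data, least recently used first

    def get(self, name, fetch_from_cloud):
        if name in self.files:
            self.files.move_to_end(name)       # refresh recency on a hit
            return self.files[name], "hit"
        data = fetch_from_cloud(name)          # expensive wide-area fetch
        self.files[name] = data
        if len(self.files) > self.capacity:
            self.files.popitem(last=False)     # evict least recently used
        return data, "miss"

cache = CloudletCache(capacity=2)
fetch = lambda name: f"<contents of {name}>"
print(cache.get("map.png", fetch)[1])   # first request goes to the cloud
print(cache.get("map.png", fetch)[1])   # repeat request is served locally
```

Even this toy version shows the payoff: repeated accesses by nearby devices avoid the low-bandwidth wide-area link entirely.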

Miscellaneous Issues in Mobile Cloud Computing:

Other Challenges:

 Performance Management: Balancing workload offloading to optimize performance, especially when using resources from nearby devices or cloudlets.
 Resource Utilization: Efficiently utilizing cloud processing power for tasks handled by mobile
cloud applications.
 Battery Consumption: Managing energy consumption by determining whether offloading tasks to
the cloud saves battery compared to local processing.

Issues and Challenges in Mobile Cloud Computing:

Overall Challenges:

 Mobility Support: Ensuring mobile devices remain connected to the cloud despite movement, using
solutions like cloudlets in specific locations.
 Security Assurance: Addressing ongoing challenges in ensuring data privacy, security, and trust
between users and service providers.
 Resource Management: Managing incentives, trust, and payment methods among users sharing
resources in ad hoc mobile cloud setups.

Mobile Cloud Computing vs Cloud Computing:

Comparison:

 Cloud Computing: Provides various services (IaaS, PaaS, SaaS) to users ranging from individuals
to enterprises.
 Mobile Cloud Computing: Focuses on delivering cloud-based applications to individual users,
addressing specific mobile device challenges like connectivity, security, and performance.

Big Data

Definition: Big Data refers to a massive amount of data that traditional databases cannot handle effectively
due to its volume, variety, and velocity.

 Volume: Refers to the sheer amount of data being generated.
 Variety: Includes different types of data, like text, images, videos, etc.
 Velocity: Describes the speed at which data is generated and processed.

Importance: Big Data helps organizations extract valuable insights for making informed decisions,
improving products/services, and understanding trends.

Cloud Computing

Definition: Cloud Computing delivers computing services (like storage, processing power, software) over
the internet, rather than on local servers or personal devices.

Service Models:

 Infrastructure as a Service (IaaS): Provides virtualized computing resources over the internet.
 Platform as a Service (PaaS): Offers a platform allowing customers to develop, run, and manage
applications without building or maintaining infrastructure.
 Software as a Service (SaaS): Delivers software applications over the internet on a subscription
basis.

Deployment Models:

 Public Cloud: Services provided over the public internet and available to anyone.
 Private Cloud: Services hosted on a private network and accessible only by specific users.
 Hybrid Cloud: Combination of public and private clouds, offering flexibility.

Benefits: Cost-effectiveness, scalability, flexibility, and accessibility are key advantages of Cloud Computing.

Software Defined Networking (SDN)

Definition: SDN separates the network's control plane (decision-making) from the data plane (traffic
forwarding), enabling easier management and more efficient network operation.

Key Concepts:

Centralized Control: Network control is managed by a software-based controller, rather than distributed
across individual devices.

Programmable Network: Allows administrators to manage network behaviour dynamically through software applications.

OpenFlow Protocol: Standardized communication interface used between the controller and network
devices.

Applications: SDN improves network agility, scalability, and reduces costs by centralizing control and
enabling automation.

