Unit 3
In cloud computing, the idea of sharing one machine's hardware is taken further. Cloud providers use virtualization to split one big server into many smaller virtual ones, so businesses can use just what they need: no extra hardware, no extra cost.
Virtualization
Suppose there is a company that requires servers for several different purposes:
The customer data server requires a lot of space and a Windows operating system.
The online shopping website requires a high-traffic server and needs a Linux operating
system.
The payroll system requires greater internal memory (RAM) and must use a certain version of
the operating system.
In order to fulfill these requirements, the company initially configures a separate physical server for each purpose. This means the company must purchase each server, keep it running, and upgrade it individually, which is very expensive.
Now, by utilizing virtualization, the company can run all of these applications on a few physical servers through multiple virtual machines (VMs). Each VM behaves as an independent server, with its own operating system and resources. In this way, the company cuts down on expenses, conserves resources, and manages everything from a single location with ease.
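The saving from consolidation can be sketched with a rough back-of-the-envelope calculation (every figure below is an invented assumption, not a real price):

```python
# Illustrative only: all figures are invented assumptions.
server_cost = 5000         # purchase price of one physical server
upkeep_per_server = 1200   # yearly power, cooling, and maintenance

# Several dedicated physical servers vs. one host running the same workloads as VMs:
dedicated = 4 * (server_cost + upkeep_per_server)
virtualized = 1 * (server_cost + upkeep_per_server)

print(dedicated, virtualized)  # 24800 6200
```

Even with these toy numbers, consolidating workloads onto fewer hosts cuts both the purchase and the upkeep cost roughly in proportion to the number of servers removed.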
Working of Virtualization
Virtualization uses special software, known as a hypervisor, to create many virtual computers (cloud instances) on one physical computer. The virtual machines behave like actual computers but share the same physical machine.
After installing virtualization software, you can create one or more virtual machines on your
computer.
The real physical computer is called the Host, while the virtual machines are called Guests.
Each guest can have its own operating system, which may be the same or different from the
host OS.
Every virtual machine functions like a standalone computer, with its own settings, programs,
and configuration.
VMs access system resources such as CPU, RAM, and storage, but they work as if they are
using their own hardware.
Hypervisors
A hypervisor is the software that makes virtualization work. It serves as an intermediary between the physical computer and the virtual machines, controlling how the virtual machines use the physical resources (such as the CPU and memory) of the host computer.
For instance, if one virtual machine needs additional computing capacity, it requests it from the hypervisor, which forwards the request to the physical hardware and carries it out.
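This mediation can be sketched as a toy model in Python (the class and method names are invented for illustration and do not correspond to any real hypervisor API):

```python
class Hypervisor:
    """Toy model: mediates VM requests for a host's physical resources."""

    def __init__(self, total_cpus, total_ram_gb):
        self.free_cpus = total_cpus
        self.free_ram_gb = total_ram_gb
        self.vms = {}

    def create_vm(self, name, cpus, ram_gb):
        # Grant the request only if the host still has capacity left.
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            raise RuntimeError(f"host cannot satisfy request for {name}")
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}

host = Hypervisor(total_cpus=16, total_ram_gb=64)
host.create_vm("web", cpus=4, ram_gb=8)       # high-traffic web server VM
host.create_vm("payroll", cpus=2, ram_gb=32)  # RAM-heavy payroll VM
print(host.free_cpus, host.free_ram_gb)       # remaining capacity: 10 24
```

Each VM sees only what it was granted, while the hypervisor tracks what remains free on the host.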
Type 2 Hypervisor:
A Type 2 (hosted) hypervisor runs on top of an existing operating system and is employed when you need to run more than one operating system on a single machine.
Types of Virtualization
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data Virtualization
1. Application Virtualization: Application virtualization enables remote access: users interact directly with deployed applications without installing them on their local machines. Your personal data and the application's settings are stored on the server, but you can still run it locally over the internet. It's useful if you need to work with multiple versions of the same software. Common examples include hosted or packaged apps.
Example: Microsoft Azure lets people use applications without installing them on their own computers. Once an application is set up in the cloud, employees can use it from any device, such as a laptop or tablet. It feels like the application is on their computer, but it is really running on Azure's servers. This makes things easier, faster, and safer for the company.
2. Network Virtualization: This allows multiple virtual networks to run on the same physical network,
each operating independently. You can quickly set up virtual switches, routers, firewalls, and VPNs,
making network management more flexible and efficient.
Example: Google Cloud is an example of network virtualization. With Google Cloud, companies create their own networks in software instead of with physical devices. They can set up IP addresses, firewalls, and private connections entirely in the cloud, which makes the network easy to manage, change, and grow without buying any hardware. This saves time and money and gives more flexibility.
Network Virtualization
3. Desktop Virtualization: Desktop virtualization is a process in which you can create different virtual desktops that users can access from any device, such as a laptop or tablet. It's great for users who need flexibility, as it simplifies software updates and provides portability.
4. Storage Virtualization: This combines storage from different servers into a single system, making it
easier to manage. It ensures smooth performance and efficient operations even when the underlying
hardware changes or fails.
Example: Amazon S3 is an example of storage virtualization because S3 can store any amount of data and make it accessible from anywhere. Suppose an MNC has a large volume of company files and data to store. With Amazon S3, the company can keep all of its files and data in one place and access them securely from anywhere without any issues.
5. Server Virtualization: This splits a physical server into multiple virtual servers, each functioning
independently. It helps improve performance, cut costs and makes tasks like server migration and
energy management easier.
Example: A startup has one powerful physical server. It can use server virtualization software such as VMware vSphere, Microsoft Hyper-V, or KVM to create multiple virtual machines (VMs) on that server.
Each VM is an isolated server that runs its own operating system (such as Windows or Linux) and its own applications. For example, the company might run a web server on one VM, a database server on another, and a file server on a third, all on the same physical machine. This reduces costs, makes servers easier to manage and back up, and allows quick recovery if one VM fails.
Server Virtualization
6. Data Virtualization: This brings data from different sources together in one place without needing
to know where or how it’s stored. It creates a unified view of the data, which can be accessed
remotely via cloud services.
Example: Companies like Oracle and IBM offer solutions for this.
Challenges in Cloud Resource Management
Cloud resource management presents several key challenges, primarily revolving around cost, security, and the complexity of multi-cloud environments. Specifically, organizations grapple with resource sprawl, cost overruns, and security risks. Effectively managing these areas is crucial for maximizing the benefits of cloud computing.
1. Cost Management:
Resource Sprawl:
Unused or forgotten resources accumulate as teams spin up services across accounts, and poor monitoring and optimization can result in unexpected and significant costs. Cloud monitoring tools are essential for tracking resource usage, detecting anomalies, and identifying areas for optimization.
2. Security:
Data Breaches:
Unauthorized access to sensitive data is a major concern. Organizations need to implement robust
security measures, including identity and access management (IAM), encryption, and compliance
frameworks.
Inadequate IAM:
Properly managing user identities and permissions is critical to prevent unauthorized access and
potential breaches.
Insecure APIs:
Cloud services are exposed through APIs, and poorly secured or misconfigured APIs can expose data or create security risks, so cloud environments must be configured properly.
Shared Infrastructure:
While cloud providers offer shared infrastructure, organizations need to be aware of potential vulnerabilities and take steps to mitigate risks.
Insider Threats:
Employees with access to cloud resources can pose a threat if not properly managed. Organizations
need to implement access controls and monitoring to prevent malicious or accidental misuse.
3. Multi-Cloud and Hybrid Cloud Complexity:
Complexity of Integration:
Integrating multiple cloud environments (public and private) can be complex due to different architectures, tools, and APIs.
Limited Visibility:
Gaining comprehensive visibility across different cloud environments can be challenging due to the distributed nature of resources.
Skill Gaps:
Effectively managing hybrid cloud environments requires specialized knowledge and expertise.
Workload Placement:
Optimizing workload placement across different clouds is important for performance and cost
efficiency.
4. Other Challenges:
Network Dependence:
Cloud computing relies heavily on network connectivity, making it vulnerable to network outages or
latency issues.
Performance:
Ensuring optimal performance for cloud-based applications is crucial. Organizations need to monitor
and optimize resource allocation to avoid performance bottlenecks.
Compliance:
Cloud environments must comply with various regulations, which can be challenging to manage.
1. Full Virtualization
Definition:
Full virtualization is a virtualization technique in which the hypervisor provides a complete simulation
of the underlying hardware, allowing unmodified guest operating systems to run as if they were on
physical hardware.
How it works:
The hypervisor traps and emulates all privileged CPU instructions from the guest OS.
Hardware resources (CPU, memory, storage, network) are allocated and controlled entirely
by the hypervisor.
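The trap-and-emulate idea above can be illustrated with a toy sketch (the instruction names and handler are invented; a real hypervisor works at the CPU privilege-level granularity):

```python
PRIVILEGED = {"HLT", "OUT", "IN"}   # toy "privileged" instruction set (invented)

def run_guest(instructions, hypervisor_log):
    """Run a guest's instruction stream; privileged ops trap to the hypervisor."""
    for op in instructions:
        if op in PRIVILEGED:
            # Trap: the hypervisor intercepts and emulates the instruction,
            # so the unmodified guest never touches real hardware directly.
            hypervisor_log.append(op)
        # Unprivileged instructions run directly on the CPU (modelled as a no-op).

traps = []
run_guest(["ADD", "OUT", "MOV", "HLT"], traps)
print(traps)  # ['OUT', 'HLT']
```

The guest issues instructions as if on bare metal; only the privileged ones are diverted, which is why no guest modification is needed.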
Key Features:
The guest OS runs unmodified and is unaware that it is virtualized.
Advantages:
Any standard operating system can run without changes, and VMs are strongly isolated from each other.
Disadvantages:
Trapping and emulating privileged instructions adds performance overhead compared to running on bare metal.
Examples:
VMware Workstation
Oracle VirtualBox
2. Para Virtualization
Definition:
Para virtualization is a virtualization technique in which the guest OS is aware that it is running in a
virtualized environment and is modified to communicate directly with the hypervisor using special
APIs.
How it works:
Instead of emulating all hardware instructions, the hypervisor provides hypercalls to the
guest OS.
The guest OS interacts directly with the hypervisor for privileged operations.
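The hypercall path can be contrasted with a toy sketch (all names are invented; real hypercall interfaces such as Xen's are far richer):

```python
class ToyHypervisor:
    def hypercall(self, op, **kwargs):
        # Privileged work arrives as an explicit, well-defined request.
        return f"hypervisor handled {op}"

class ParavirtGuestOS:
    """A 'modified' guest: it knows it is virtualized and calls the
    hypervisor directly instead of executing privileged instructions."""

    def __init__(self, hv):
        self.hv = hv

    def update_page_table(self, entry):
        # No trapping needed: the guest asks the hypervisor explicitly.
        return self.hv.hypercall("update_page_table", entry=entry)

guest = ParavirtGuestOS(ToyHypervisor())
print(guest.update_page_table(entry=0x1000))  # hypervisor handled update_page_table
```

Compared with trap-and-emulate, the privileged operation is a direct function call, which is why paravirtualization avoids much of the emulation overhead.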
Key Features:
The guest OS is modified to use hypercalls instead of relying on instruction trapping.
Advantages:
Lower overhead and better performance than full virtualization, since expensive trap-and-emulate steps are avoided.
Disadvantages:
The guest OS must be modified, so unmodified or closed-source operating systems may not be supported.
Examples:
Xen Hypervisor
4) VMM
Virtual Machine Monitor (VMM) – Detailed Notes
1. Definition
A Virtual Machine Monitor (VMM), commonly called a Hypervisor, is the control program that
enables virtualization by creating, managing, and monitoring virtual machines on a physical host
system.
It acts as an intermediary between hardware and the virtual machines, ensuring that multiple
operating systems (guest OSs) can run concurrently without interfering with each other.
2. Key Functions of a VMM
1. Virtualization of Resources
o Simulates CPU, memory, storage, and network interfaces for each VM.
o Gives the illusion that each VM has its own dedicated hardware.
2. Isolation
o Keeps VMs separated from one another so a fault in one cannot affect the others.
3. Security
4. Device Emulation
o Simulates hardware devices so the guest OS can run without modification (in full
virtualization).
3. Architecture of a VMM
A VMM sits between the physical hardware and guest operating systems.
Basic Diagram:
4. Types of VMM (Hypervisors)
A. Type 1 — Bare-Metal Hypervisors
Examples:
o VMware ESXi
o Xen
Advantages:
Runs directly on the hardware, giving high performance and strong isolation.
Disadvantages:
Requires dedicated hardware and is more complex to install and manage.
Diagram:
B. Type 2 — Hosted Hypervisors
Examples:
o VMware Workstation
o Oracle VirtualBox
o Parallels Desktop
Advantages:
Easy to install and use on top of an existing host OS; well suited to testing and development.
Disadvantages:
Slower than Type 1, because requests pass through the host operating system.
Diagram:
5. VMM in Full vs Para-Virtualization

Feature                  Full Virtualization    Para-Virtualization
Guest OS Modification    Not required           Required
Hardware Emulation       Yes                    Partial (direct calls)
4. Disaster Recovery
o Snapshots and migration help restore services quickly.
5. Multi-Tenancy
1. Xen
Overview
Architecture
1. Hardware Layer
2. Hypervisor Layer
o The Xen hypervisor runs directly on the hardware and schedules the domains.
3. Domain Layer
o Dom0 (Control Domain): Privileged domain that runs first, manages hardware
drivers, creates and controls guest VMs.
2. vBlades
vBlades is a userspace implementation of the ATA over Ethernet (AoE) protocol.
It is often used in storage virtualization and cloud computing environments to allow disk devices to be exported over a network as if they were local drives.
Key Points
Full Form: Virtual Blade Server (in some contexts) or vBlade (AoE target emulator).
Function: Acts as a virtual storage target for clients using the AoE protocol.
Protocol Used: ATA over Ethernet (AoE) — a lightweight SAN protocol designed for local
Ethernet storage sharing.
Role in Virtualization: Used to present virtual block devices to virtual machines over a high-
speed network without the overhead of TCP/IP.
How it Works
1. Server (vBlade target)
Runs the vBlade daemon and exports a file or disk device as an AoE target.
2. Client (initiator)
Loads the AoE driver, discovers exported targets on the local Ethernet, and sees them as local block devices.
3. Virtual Machines
Can use these exported devices for storage, similar to iSCSI but simpler and more efficient
for local networks.
Advantages
Limitations
In a cloud platform:
Combined with a hypervisor like Xen or KVM, VMs can use AoE-backed disks to store OS
images, snapshots, or persistent data.
Feedback Control of Cloud Resources
Core Concepts:
Feedback Loop:
The system constantly monitors key performance indicators (KPIs) like response time, throughput,
and resource utilization. This data is fed back to a controller, which then adjusts resource allocation
based on predefined policies and control algorithms.
Dynamic Adjustment:
Unlike static resource allocation, feedback control enables the system to adapt to changing
workloads and user demands in real-time. This dynamic adjustment helps optimize resource
utilization and maintain desired QoS levels.
Control Theory Principles:
Concepts like stability, gain scheduling, and feedback mechanisms are applied to design robust and predictable resource management systems. This ensures that the cloud environment remains stable and responsive under various conditions.
How it Works:
1. Monitoring:
The system continuously monitors resource usage, performance metrics, and user requests.
2. Comparison:
The observed values are compared against predefined thresholds or desired setpoints (e.g.,
acceptable response time, CPU utilization).
3. Control Action:
Based on the comparison, the controller decides on appropriate actions like adding or removing
resources (e.g., virtual machines, storage), adjusting resource allocation within VMs, or modifying
network parameters.
4. Feedback:
The results of these actions are fed back into the system, and the process repeats, creating a
continuous feedback loop.
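The four steps above can be sketched as one iteration of a toy control loop (the setpoint, thresholds, and workload numbers are all invented for illustration):

```python
def control_step(latency_ms, setpoint_ms, vms, min_vms=1, max_vms=20):
    """One pass through the loop: monitor -> compare -> act.

    Compares the observed KPI (response time) to its setpoint and adds or
    removes a VM; the new allocation is measured again on the next cycle.
    """
    if latency_ms > setpoint_ms * 1.1:   # too slow: add capacity
        return min(vms + 1, max_vms)
    if latency_ms < setpoint_ms * 0.7:   # over-provisioned: remove capacity
        return max(vms - 1, min_vms)
    return vms                           # within the dead band: no change

vms = 4
for latency in [250, 240, 180, 120, 90]:  # simulated measurements (ms)
    vms = control_step(latency, setpoint_ms=200, vms=vms)
print(vms)  # 4
```

The dead band (here 70%-110% of the setpoint) prevents the controller from oscillating on small fluctuations, one of the stability concerns mentioned above.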
Benefits:
Improved Efficiency:
Optimizes resource utilization by dynamically adjusting allocation based on actual needs, reducing
waste and cost.
Enhanced QoS:
Ensures that applications and services meet performance requirements by proactively adjusting
resources to handle varying workloads and user demands.
Scalability:
The dynamic nature of feedback control allows the cloud environment to scale resources up or down as needed, providing flexibility to adapt to changing business requirements.
Automation:
Automates resource management tasks, reducing the burden on administrators and allowing them to focus on higher-level strategic initiatives.
Resilience:
Can be designed to handle failures and unexpected events by dynamically adjusting resources to compensate for the loss of capacity.
Examples:
VM Scaling:
Automatically scaling the number of virtual machines based on workload fluctuations to maintain
desired performance levels.
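VM scaling can also be expressed as a simple sizing rule (the per-VM capacity and headroom figures below are invented for illustration):

```python
import math

def required_vms(req_per_sec, vm_capacity=100, headroom=0.2):
    """Number of VMs needed for a measured request rate, with safety headroom.

    vm_capacity (requests/s one VM handles) and headroom are invented figures.
    """
    return max(1, math.ceil(req_per_sec * (1 + headroom) / vm_capacity))

print(required_vms(450))  # 6  (540 req/s of effective demand / 100 per VM)
print(required_vms(80))   # 1  (never scale below one VM)
```

Sizing from measured demand like this complements threshold-based reactions: the fleet is resized toward what the workload actually needs rather than one VM at a time.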
Resource Allocation:
Dynamically allocating CPU, memory, and storage resources to different virtual machines based on
their specific needs and priorities.
Load Balancing:
Distributing incoming traffic across multiple servers to prevent any single server from becoming
overloaded and to ensure optimal performance.
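The distribution idea above can be sketched as a least-loaded balancer (server and request names are placeholders):

```python
import heapq

def least_loaded_balancer(servers, requests):
    """Send each request to the server with the fewest active connections."""
    heap = [(0, name) for name in servers]   # (active_connections, server)
    heapq.heapify(heap)
    assignment = {}
    for req in requests:
        load, name = heapq.heappop(heap)     # least-loaded server wins
        assignment[req] = name
        heapq.heappush(heap, (load + 1, name))
    return assignment

print(least_loaded_balancer(["s1", "s2"], ["r1", "r2", "r3"]))
# {'r1': 's1', 'r2': 's2', 'r3': 's1'}
```

Because the heap always surfaces the server with the smallest load counter, no single server accumulates requests while another sits idle.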
Energy Optimization:
Consolidating workloads onto fewer physical servers and powering down idle machines to reduce energy use.
Start-Time Fair Queuing (SFQ)
SFQ is a fair scheduling algorithm used to allocate cloud resources (CPU, storage I/O, or network
bandwidth) proportionally and fairly among multiple virtual machines (VMs), containers, or
tenants.
It is particularly useful in multi-tenant cloud environments to prevent resource starvation and
ensure predictable service quality.
In cloud platforms like AWS EC2, Google Cloud Compute, or OpenStack, SFQ can be applied to CPU scheduling, storage I/O, and network bandwidth sharing.
SFQ maintains virtual start times for each request (job, packet, or VM resource request) instead of
just real-time arrival.
Main Components:
1. Per-Tenant Request Queues – Hold the pending requests of each VM, container, or tenant.
2. Virtual Time Calculator – Computes the virtual start time for each task based on arrival time
and weight (priority).
3. Scheduler – Picks the task with the smallest virtual start time for execution.
4. Resource Allocator – Assigns actual cloud resource (CPU slice, I/O slot, bandwidth chunk).
5. Weight Manager – Applies tenant/service priorities if some clients pay for premium service.
Cloud Example:
If VM-A and VM-B share a CPU, SFQ ensures each gets CPU time slices proportionally to their
assigned weights, regardless of request burstiness.
For each new request r from tenant i, SFQ assigns
Start(r) = max(v, Finish(prev_r))  and  Finish(r) = Start(r) + cost(r) / w_i
Where:
v is the current system virtual time, Finish(prev_r) is the finish tag of tenant i's previous request, cost(r) is the amount of resource the request needs (e.g., CPU time), and w_i is the tenant's weight.
The request with smallest start time across all queues is served first.
Over the long term, each tenant gets resource allocation proportional to their weight,
regardless of request arrival patterns.
Premium clients (e.g., higher subscription tiers) can have higher weights to get faster service
while still maintaining fairness for others.
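The scheme can be sketched in Python (the two-VM workload and weights are invented; this is a minimal illustration, not a production scheduler): each request receives a start tag equal to the maximum of the current virtual time and its flow's previous finish tag, a finish tag of start plus cost divided by weight, and the pending request with the smallest start tag is served first.

```python
import heapq

def sfq_schedule(requests, weights):
    """Start-time Fair Queuing sketch.

    requests: list of (flow, cost) pairs, all assumed to arrive at once.
    weights:  dict mapping each flow to its weight (higher = bigger share).
    Returns the order in which requests are served.
    """
    vtime = 0.0
    finish = {flow: 0.0 for flow in weights}         # last finish tag per flow
    heap = []
    for i, (flow, cost) in enumerate(requests):
        start = max(vtime, finish[flow])             # start tag
        finish[flow] = start + cost / weights[flow]  # finish tag
        heapq.heappush(heap, (start, i, flow))
    order = []
    while heap:
        start, _, flow = heapq.heappop(heap)         # smallest start tag first
        vtime = start                                # virtual time follows served tags
        order.append(flow)
    return order

# VM-A has twice VM-B's weight, so it is served twice as often:
reqs = [("A", 1), ("A", 1), ("A", 1), ("A", 1), ("B", 1), ("B", 1)]
print(sfq_schedule(reqs, {"A": 2.0, "B": 1.0}))  # ['A', 'B', 'A', 'A', 'B', 'A']
```

Note how the service order interleaves the two VMs in a 2:1 ratio even though all of VM-A's requests were submitted in one burst: fairness depends on the tags, not on arrival order.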
1. Fairness:
Ensures each tenant/service gets a proportional share of resources.
2. Low Complexity:
O(1) or O(log N) scheduling complexity makes it suitable for large cloud data centers.
3. Prevents Starvation:
No flow can completely block others, even under heavy load.
4. QoS Friendly:
Works well with QoS policies in virtualized cloud environments.
Coordination of Cloud Resources
Coordination ensures seamless interaction between distributed cloud components such as data centers, servers, storage systems, and network nodes.
It focuses on:
Balancing workloads
1. Task Scheduling – Deciding the execution order of tasks across multiple cloud nodes.
5. Fault Tolerance Coordination – Managing redundancy and failover in case of node failure.
Main Policies
Spot Instances – Using spare cloud capacity at lower cost but with the risk of interruption.
High Availability – Ensures resources are always ready for critical workloads.
5. Advantages