Project Virtualisation
SUBMITTED BY
NISHIYA - SRO0773258
DECLARATION
I, Nishiya, ICITSS student of ICAI, do hereby declare that the dissertation entitled
“VIRTUALIZATION”, submitted in fulfilment of the Information Technology
Training Programme, is a record of bona fide work carried out by me.
Place: Kottayam
ACKNOWLEDGEMENT
I express my humble gratitude to the Almighty God for the constant help and providence with
which He has accompanied me. I would like to take this opportunity to express my profound
gratitude towards everyone who generously gave their time, energy and resources to
contribute to my success. In addition, I address my special thanks to our faculty for
their commitment to guiding me throughout the research. I am extremely grateful to the ICAI
Kottayam Branch, the Chairman, the faculty members and staff members for providing the
facilities. Last but not least, I thank my batch mates for their valuable support and
encouragement.
NISHIYA
TABLE OF CONTENTS
SL NO PARTICULARS
1 Introduction
2 Types of Virtualization
3 Methods of Virtualization
4 Why Virtualize?
5 Virtualization Costs
6 Virtualization and Cloud Computing
7 OpenVZ and Xen Virtualization Technology
8 Virtual Machine (VM)
9 What is Virtualized Security?
10 Virtual Infrastructure
11 Challenges of Virtualization
12 The Virtualization of Computer Hardware
13 Virtualization Business Benefits
14 Conclusion
15 Webliography
TABLE OF FIGURES
FIG NO PARTICULARS
1.1 Server Virtualization
1.2 Storage Virtualization
1.3 Virtualization Tech in Movies
1.4 Cloud Computing
1.5 Virtual Security Levels
VIRTUALISATION
INTRODUCTION
This project provides a thorough review of the different types of
virtualisation at both the technical and application levels. At the
technical level, it explains, with the help of descriptive diagrams,
how virtualization technologies work in general.
WHAT IS VIRTUALISATION
Virtualization uses software to create an abstraction layer over computer hardware that
allows the hardware elements of a single computer—processors, memory, storage and more—to
be divided into multiple virtual computers, commonly called virtual machines (VMs). Each VM
runs its own operating system (OS) and behaves like an independent computer, even though it is
running on just a portion of the actual underlying computer hardware.
Virtualization is the ability to run multiple virtual machines on a single piece of hardware. The
hardware runs software which enables you to install multiple operating systems which are able
to run simultaneously and independently, in their own secure environment, with minimal
reduction in performance. Each virtual machine has its own virtual CPU, network interfaces,
storage and operating system.
Virtualization has become a critically important focus of the IT world in recent years.
Virtualization technologies are used by countless thousands of companies to consolidate their
workloads and to make their IT environments scalable and more flexible. If you want to learn
cloud computing, you'll simply have to absorb the basic virtualization technology concepts at
some point.
This project covers the fundamental concepts needed to understand how virtualization works:
why it is so important and how the industry moved from virtualization to cloud computing. As an
introduction, it shows how virtualization helps companies and professionals achieve a better
TCO and how it works from a technical point of view. It explains what a hypervisor is, how virtual
machines are separated inside the same physical host, and how they communicate with lower
hardware levels. Anyone who wants to start a career in the cloud computing industry will need to
know how the most common virtualization technologies work and how they are used in cloud
infrastructures.
Virtualization is a process that allows for more efficient utilization of physical computer
hardware and is the foundation of cloud computing.
TYPES OF VIRTUALISATION
Today the term virtualization is widely applied to a number of concepts including:
• Server Virtualization
• Client / Desktop / Application Virtualization
• Network Virtualization
• Storage Virtualization
• Service / Application Infrastructure Virtualization
In most of these cases, either one physical resource is virtualized into many virtual resources,
or many physical resources are combined into one virtual resource.
• SERVER VIRTUALIZATION
Server virtualization is the most active segment of the virtualization industry, featuring
established companies such as
VMware, Microsoft, and Citrix. With server virtualization, one physical machine is divided into
many virtual servers. At the core of such virtualization is the concept of a hypervisor (virtual
machine monitor). A hypervisor is a thin software layer that intercepts operating system calls to
hardware. Hypervisors typically provide a virtualized CPU and memory for the guests running
on top of them.
The term was first used in conjunction with the IBM CP-370.
BENEFITS OF VIRTUALISATION
• Increased Hardware Utilization – This results in hardware savings, reduced
administration overhead, and energy savings.
• Security – Clean images can be used to restore compromised systems. Virtual machines
can also provide sandboxing and isolation to limit attacks.
• Development – Debugging and performance monitoring scenarios can be easily setup in
a repeatable fashion. Developers also have easy access to operating systems they might
not otherwise be able to install on their desktops.
Correspondingly there are a number of potential downsides that must be considered:
• Security – There are now more entry points such as the hypervisor and virtual
networking layer to monitor. A compromised image can also be propagated easily with
virtualization technology.
• Administration – While there are fewer physical machines to maintain, there may be more
machines in aggregate. Such maintenance may require new skills and familiarity with
software that administrators otherwise would not need.
• Licensing/Cost Accounting – Many software-licensing schemes do not take
virtualization into account. For example, running four copies of Windows on one box may
require four separate licenses.
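The licensing point above can be sketched with a toy calculation. The host names and the per-instance scheme below are hypothetical illustrations, not any vendor's actual terms:

```python
# Toy license accounting: under a per-instance scheme, every guest OS copy
# needs its own license, regardless of how the guests are consolidated.
# Host names and counts are hypothetical.

def licenses_required(hosts):
    """Count OS licenses needed under a per-instance scheme.

    `hosts` maps a physical host name to the number of guest VMs
    running the licensed OS on it.
    """
    # Consolidating guests onto one box does not reduce the count:
    # four copies of Windows on one host still need four licenses.
    return sum(hosts.values())

before = {"box1": 1, "box2": 1, "box3": 1, "box4": 1}  # four physical servers
after = {"box1": 4}                                    # consolidated onto one host

print(licenses_required(before))  # 4
print(licenses_required(after))   # 4: virtualization saved hardware, not licenses
```

The point of the sketch is that hardware consolidation and license consolidation are independent; only a licensing scheme that is virtualization-aware would change the second number.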
• APPLICATION/DESKTOP VIRTUALIZATION
Virtualization is not only a server domain technology. It is being put to a number of uses on the
client side at both the desktop and application level. Such virtualization can be broken out into
four categories:
• Local Application Virtualization/Streaming
Disadvantages include:
Benefits of desktop virtualization include most of those with application virtualization as well
as:
• High Availability – Downtime can be minimized with replication and fault tolerant
hosted configurations.
• Extended Refresh Cycles – Larger capacity servers as well as limited demands on the
client PCs can extend their lifespan.
• Multiple Desktops – Users can access multiple desktops suited for various tasks from the
same client PC.
Disadvantages of desktop virtualization are similar to server virtualization. There is also the
added disadvantage that clients must have network connectivity to access their virtual desktops.
This is problematic for offline work and also increases network demands at the office.
The final segment of client virtualization is local desktop virtualization. It could be said that this
is where the recent resurgence of virtualization began with VMware’s introduction of VMware
Workstation in the late 90’s. Today the market includes competitors such as Microsoft Virtual
PC and Parallels Desktop. Local desktop virtualization has also played a key part in the
increasing success of Apple’s move to Intel processors, since products like VMware Fusion and
Parallels allow easy access to Windows applications. Some of the benefits of local desktop
virtualization include:
• Security – With local virtualization organizations can lock down and encrypt just the
valuable contents of the virtual machine/disk. This can be more performant than
encrypting a user’s entire disk or operating system.
• Isolation – Related to security is isolation. Virtual machines allow corporations to isolate
corporate assets from third party machines they do not control. This allows employees to
use personal computers for corporate use in some instances.
• Development/Legacy Support – Local virtualization allows a user’s computer to
support many configurations and environments it would otherwise not be able to support
without a different hardware or host operating system. Examples include running
Windows in a virtualized environment on OS X and testing legacy Windows 98
support on a machine whose primary OS is Vista.
• NETWORK VIRTUALIZATION
Up to this point, the types of virtualization covered have centered on applications or entire
machines. However, these are not the only granularity levels that can be virtualized. Other
computing concepts also lend themselves to being virtualized in software.
• STORAGE VIRTUALIZATION
Another computing concept that is frequently virtualized is storage. Unlike the concepts
covered so far, storage virtualization is hard to define in a fixed manner due to the variety
of ways the functionality can be provided. Typically, it is provided as a feature of:
• Host Based with Special Device Drivers
• Array Controllers
• Network Switches
• Stand Alone Network Appliances.
METHODS OF VIRTUALIZATION
Full virtualization
Full virtualization uses an unmodified version of the guest operating system. The
hypervisor intercepts the guest’s privileged instructions and emulates them, or relies on
hardware assistance so that most guest code runs directly on the CPU. Because the guest is
unaware it is virtualized, it needs no changes.
Paravirtualization
Paravirtualization uses a modified guest operating system. The guest is aware of the
hypervisor and communicates with it through explicit calls, which the hypervisor passes on
to the CPU and other interfaces, both real and virtual. Batching work into such calls can
reduce the overhead of trapping individual instructions.
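The contrast between the two methods can be illustrated with a toy model. This is a simplification for intuition only, not a real hypervisor, and the batch size is an arbitrary assumption:

```python
# Toy contrast between the two methods (not a real hypervisor):
# - full virtualization: the unmodified guest issues privileged instructions,
#   each of which the hypervisor must trap and emulate individually;
# - paravirtualization: the modified guest batches work into explicit
#   hypercalls, so fewer transitions into the hypervisor occur.

def full_virtualization(privileged_ops):
    # One trap-and-emulate round trip per privileged instruction.
    traps = len(privileged_ops)
    return traps

def paravirtualization(privileged_ops, batch_size=4):
    # The guest is modified to group operations into one hypercall per batch.
    hypercalls = -(-len(privileged_ops) // batch_size)  # ceiling division
    return hypercalls

ops = ["set_page_table"] * 8
print(full_virtualization(ops))   # 8 hypervisor entries
print(paravirtualization(ops))    # 2 hypervisor entries
```

The counts only measure transitions into the hypervisor; real performance depends on many other factors, including hardware virtualization support.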
WHY VIRTUALIZE?
With increased server provisioning in the data centre, several factors play a role in stifling
growth. Increased power and cooling costs, physical space constraints, manpower and
interconnection complexity all contribute significantly to the cost and feasibility of continued
expansion.
Commodity hardware manufacturers have begun to address some of these concerns by shifting
their design goals. Rather than focus solely on raw gigahertz performance, manufacturers have
enhanced the feature sets of CPUs and chip sets to include lower wattage CPUs, multiple cores
per CPU die, advanced power management, and a range of virtualization features. By
employing appropriate software to enable these features, several advantages are realized:
• Reduction of Complexity: Infrastructure costs are massively reduced by removing the
need for physical hardware, and networking. Instead of having a large number of physical
computers, all networked together, consuming power and administration costs, fewer
computers can be used to achieve the same goal. Administration and physical setup is less
time consuming and costly.
• Legacy Support: With traditional bare-metal operating system installations, when the
hardware vendor replaces a component of a system, the operating system vendor is
required to make a corresponding change to enable the new hardware (for example, an
Ethernet card). As an operating system ages, the operating system vendor may no longer
provide hardware enabling updates. In a virtualized operating system, the hardware
remains constant for as long as the virtual environment is in place, regardless of any
changes occurring in the real hardware, including full replacement.
VIRTUALIZATION COSTS
Virtualization can be expensive to introduce, but it often saves money in the long term.
Consider the following benefits:
Less power
Using virtualization negates much of the need for multiple physical platforms. This
equates to less power being drawn for machine operation and cooling, resulting in
reduced energy costs. The initial cost of purchasing multiple physical platforms,
combined with the machines' power consumption and required cooling, is drastically cut
by using virtualization.
Less maintenance
Provided that adequate planning is performed before migrating physical systems to
virtualized ones, less time is needed to maintain them. This means less money needs to be
spent on parts and labour.
Longer software life
Older versions of software may not be able to run directly on more recent bare-metal
machines. By running older software virtually on a larger, faster system, the life of the
software may be extended while taking advantage of the better performance of a newer
system.
Predictable costs
A Red Hat Enterprise Linux subscription provides support for virtualization at a fixed
rate, making it easy to predict costs.
Less space
Consolidating servers onto fewer machines means less physical space is required for
computer systems.
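The power savings described above can be sketched with a back-of-the-envelope calculation. Every figure here (wattage, tariff, server counts, consolidation ratio) is a made-up illustration, and cooling costs are ignored:

```python
# Back-of-the-envelope consolidation savings. All figures are hypothetical
# illustrations, not vendor pricing, and cooling is ignored.

def annual_power_cost(servers, watts_per_server=350, cost_per_kwh=0.12):
    """Yearly electricity cost for a fleet of always-on servers."""
    hours = 24 * 365
    kwh = servers * watts_per_server * hours / 1000
    return kwh * cost_per_kwh

physical = 20  # standalone servers before virtualization
hosts = 4      # virtualization hosts after consolidating roughly 5:1

saving = annual_power_cost(physical) - annual_power_cost(hosts)
print(round(annual_power_cost(physical), 2))  # about 7358.4 per year before
print(round(saving, 2))                       # the bulk of that is recovered
```

Under these assumptions, consolidation cuts the power bill roughly in proportion to the reduction in machine count; a real estimate would also weigh cooling, licensing and the cost of the larger hosts.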
VIRTUALIZATION AND CLOUD COMPUTING
The term cloud refers to a network or the internet. It is a technology that uses remote servers on
the internet to store, manage, and access data online rather than local drives. The data can be
anything such as files, images, documents, audio, video, and more.
Small as well as large IT companies have traditionally followed conventional methods to
provide IT infrastructure. That means any IT company needs a server room, which is its
basic requirement.
In that server room, there should be a database server, mail server, networking, firewalls,
routers, modems, switches, adequate QPS capacity (Queries Per Second, a measure of how much
query load the server can handle), configurable systems, a high-speed network connection, and
maintenance engineers.
To establish such IT infrastructure, we need to spend lots of money. To overcome all these
problems and to reduce the IT infrastructure cost, Cloud Computing comes into existence.
Fig 1.4 Cloud Computing
Virtualization plays a very important role in cloud computing. Normally in cloud computing
users share the data present in the cloud, such as applications, but with the help of
virtualization users actually share the infrastructure.
The main use of virtualization technology here is to provide applications in their standard
versions to cloud users. When the next version of an application is released, the cloud
provider has to supply the latest version to its cloud users, and in practice this is
difficult because it is expensive.
To overcome this, virtualization technology is used: all the servers and software
applications required by the cloud provider are maintained by a third party, and the cloud
provider pays on a monthly or annual basis.
Essentially, virtualization means running multiple operating systems on a single machine
while sharing all the hardware resources. It provides a pool of IT resources that can be
shared in order to gain benefits in the business.
OPENVZ AND XEN VIRTUALIZATION TECHNOLOGY
OPENVZ
OpenVZ is an open-source virtualization engine for the x86, x86_64, and IA64 processors.
OpenVZ itself is built on top of
Linux. Unlike Xen’s paravirtualization technique, with OS-level virtualization (OSLV) the
operating environment is virtualized instead of the hardware. Thus, while there is only one
operating system kernel, multiple programs run in isolation from each other within the single
OS instance.
XEN
Xen is a virtualization engine (to be exact, a virtual machine monitor) for the x86, x86-64,
Itanium and PowerPC platforms. On a number of processors, Xen applies a paravirtualization
technique. This means that the operating systems run on Xen are modified in order to
achieve high performance on a wide range of hardware architectures that were not initially
intended for virtualization technologies.
Both technologies deliver the typical benefits of virtualization:
• consolidation
• increased utilization
• rapid provisioning
• dynamic fault tolerance against software failures (through rapid bootstrapping or
rebooting)
• hardware fault tolerance (through migration of a virtual machine to different hardware)
The virtualization overhead observed in both OpenVZ and Xen is limited. Various opinions
exist on the difference in performance between the two. However, in both cases the
performance of the virtualized environment, compared to the real hardware, is at an
acceptable level. Specific figures depend on a great number of factors and do not support a
general conclusion.
Unlike OpenVZ, Xen can support legacy software as well as new OS instances
on the same computer. That means proprietary systems can be installed on a Xen-based
host without any additional modification if hardware-assisted virtualization is used. OpenVZ
provides compatibility only among systems sharing a similar kernel, such as various
distributions of Linux.
Both engines are based on Unix-like OSs and therefore scale well. For example, OpenVZ,
which employs a single-kernel model, is as scalable as the Linux kernel itself. Such a kernel
supports up to 64 CPUs and up to 64 GB of RAM (on 32-bit with PAE). A single container
can scale up to the whole physical system, i.e., use all the CPUs and all the RAM.
VIRTUAL MACHINE (VM)
Virtualisation is the process of creating a software-based or "virtual" version of a computer,
with dedicated amounts of CPU, memory and storage that are "borrowed" from a physical host
computer—such as your personal computer— and/or a remote server—such as a server in a
cloud provider's datacentre. A virtual machine is a computer file, typically called an image,
which behaves like an actual computer. It can run in a window as a separate computing
environment, often to run a different operating system—or even to function as the user's entire
computer experience—as is common on many people's work computers. The virtual machine is
partitioned from the rest of the system, meaning that the software inside a VM cannot interfere
with the host computer's primary operating system.
• AGILITY AND SPEED – Spinning up a VM is relatively easy and quick, and much
simpler than provisioning an entire new environment for your developers. Virtualisation
makes the process of running dev-test scenarios a lot quicker.
• LOWERED DOWNTIME – Because VMs are portable and easy to move from one
hypervisor to another on a different machine, they are a great solution for
backup in the event the host goes down unexpectedly.
• SCALABILITY- VMs allow you to more easily scale your apps by adding more
physical or virtual servers to distribute the workload across multiple VMs. As a result,
you can increase the availability and performance of your apps.
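The scalability point can be sketched as a toy round-robin spread of requests across VMs, the way a load balancer in front of an app tier might behave. The VM names are hypothetical:

```python
# Minimal sketch of scaling out: spread incoming requests across VMs in
# rotation. VM names are hypothetical; a real load balancer would also
# weigh health checks and current load.

from itertools import cycle

def distribute(requests, vms):
    """Assign each request to the next VM in rotation."""
    assignment = {vm: [] for vm in vms}
    rotation = cycle(vms)
    for req in requests:
        assignment[next(rotation)].append(req)
    return assignment

vms = ["vm-1", "vm-2"]
plan = distribute(range(6), vms)
print({vm: len(reqs) for vm, reqs in plan.items()})  # {'vm-1': 3, 'vm-2': 3}

# Scaling out: adding a VM re-spreads the same workload more thinly.
plan = distribute(range(6), vms + ["vm-3"])
print({vm: len(reqs) for vm, reqs in plan.items()})  # each VM now gets 2
```

Because VMs can be provisioned quickly, adding `vm-3` here corresponds to a fast, software-only capacity increase rather than a hardware purchase.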
WHAT IS VIRTUALIZED SECURITY?
Virtualized security, or security virtualization, refers to security solutions that are software-
based and designed to work within a virtualized IT environment. This differs from traditional,
hardware-based network security, which is static and runs on devices such as traditional
firewalls, routers, and switches.
Virtualized security is now effectively necessary to keep up with the complex security demands
of a virtualized network, plus it’s more flexible and efficient than traditional physical security.
Here are some of its specific benefits:
Virtualized security can take the functions of traditional security hardware appliances (such as
firewalls and antivirus protection) and deploy them via software. In addition, virtualized
security can also perform additional security functions. These functions are only possible due to
the advantages of virtualization, and are designed to address the specific security needs of a
virtualized environment.
For example, an enterprise can insert security controls (such as encryption) between the
application layer and the underlying infrastructure, or use strategies such as micro-segmentation
to reduce the potential attack surface.
It’s important to note, however, that many of these risks are already present in a virtualized
environment, whether security services are virtualized or not. Following enterprise security best
practices (such as spinning down virtual machines when they are no longer needed and using
automation to keep security policies up to date) can help mitigate such risks.
Traditional physical security is hardware-based, and as a result, it’s inflexible and static. The
traditional approach depends on devices deployed at strategic points across a network and is
often focused on protecting the network perimeter (as with a traditional firewall). However, the
perimeter of a virtualized, cloud-based network is necessarily porous, and workloads and
applications are dynamically created, increasing the potential attack surface.
Traditional security also relies heavily upon port and protocol filtering, an approach that’s
ineffective in a virtualized environment where addresses and ports are assigned dynamically. In
such an environment, traditional hardware-based security is not enough; a cloud-based network
requires virtualized security that can move around the network along with workloads and
applications.
Virtualized security commonly takes forms such as:
Segmentation, or making specific resources available only to specific applications and users.
This typically takes the form of controlling traffic between different network segments or tiers.
Isolation, or separating independent workloads and applications on the same network. This is
particularly important in a multitenant public cloud environment, and can also be used to isolate
virtual networks from the underlying physical infrastructure, protecting the infrastructure from
attack.
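Segmentation of this kind can be sketched as a default-deny policy table. The segment names, ports, and rules below are hypothetical:

```python
# Toy micro-segmentation policy: traffic is allowed only when an explicit
# rule permits the (source segment, destination segment, port) combination.
# Segment names, ports, and rules are hypothetical.

ALLOW = {
    ("web", "app", 8080),  # web tier may call the app tier
    ("app", "db", 5432),   # app tier may query the database
}

def allowed(src, dst, port):
    # Default deny: anything not explicitly permitted is blocked,
    # so the web tier cannot reach the database directly.
    return (src, dst, port) in ALLOW

print(allowed("web", "app", 8080))  # True
print(allowed("web", "db", 5432))   # False: segmentation blocks the hop
```

The default-deny stance is what shrinks the attack surface: compromising the web tier does not automatically grant a path to the database.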
Fig 1.5 Virtual Security Levels
VIRTUAL INFRASTRUCTURE
Virtual infrastructure is a collection of software-defined components that make up an enterprise
IT environment. A virtual infrastructure provides the same IT capabilities as physical resources,
but with software, so that IT teams can allocate these virtual resources quickly and across
multiple systems, based on the varying needs of the enterprise.
By decoupling physical hardware from an operating system, a virtual infrastructure can help
organizations achieve greater IT resource utilization, flexibility, scalability and cost savings.
These benefits are especially helpful to small businesses that require reliable infrastructure but
can’t afford to invest in costly physical hardware.
The benefits of virtualization touch every aspect of an IT infrastructure, from storage and server
systems to networking tools. Here are some key benefits of a virtual infrastructure:
• Cost savings: By consolidating servers, virtualization reduces capital and operating
costs associated with variables such as electrical power, physical security, hosting and
server development.
• Scalability: A virtual infrastructure allows organizations to react quickly to changing
customer demands and market trends by ramping up CPU utilization or scaling back
accordingly.
• Increased productivity: Faster provisioning of applications and resources allows IT teams
to respond more quickly to employee demands for new tools and technologies. The
result: increased productivity, efficiency and agility for IT teams, and an enhanced
employee experience and increased talent retention rates without hardware procurement
delays.
• Simplified server management: From seasonal spikes in consumer demand to unexpected
economic downturns, organizations need to respond quickly. Simplified server
management makes sure IT teams can spin up, or down, virtual machines when required
and re-provision resources based on real-time needs. Furthermore, many management
consoles offer dashboards, automated alerts and reports so that IT teams can respond
immediately to server performance issues.
By separating physical hardware from operating systems, virtualization can provision compute,
memory, storage and networking resources across multiple virtual machines (VMs) for greater
application performance, increased cost savings and easier management. Despite variances in
design and functionality, a virtual infrastructure typically consists of these key components:
Virtualized compute:
This component offers the same capabilities as physical servers, but with the ability to be more
efficient. Through virtualization, many operating systems and applications can run on a single
physical server, whereas in traditional infrastructure servers were often underutilized. Virtual
compute also makes newer technologies like cloud computing and containers possible.
Virtualized storage:
This component frees organizations from the constraints and limitations of hardware by
combining pools of physical storage capacity into a single, more manageable repository. By
connecting storage arrays to multiple servers using storage area networks, organizations can
bolster their storage resources and gain more flexibility in provisioning them to virtual
machines. Widely used storage solutions include Fibre Channel SAN arrays, iSCSI SAN arrays,
and NAS arrays.
Plan ahead:
When designing a virtual infrastructure, IT teams should consider how business growth, market
fluctuations and advancements in technology might impact their hardware requirements and
reliance on compute, networking and storage resources.
A virtual infrastructure architecture can help organizations transform and manage their IT
system infrastructure through virtualization. But it requires the right building blocks to deliver
results. These include:
Host: A virtualization layer that manages resources and other services for virtual machines.
Virtual machines run on these individual hosts, which continuously perform monitoring and
management activities in the background. Multiple hosts can be grouped together to work on
the same network and storage subsystems, culminating in combined computing and memory
resources to form a cluster. Machines can be dynamically added or removed from a cluster.
Hypervisor: A software layer that enables one host computer to simultaneously support
multiple virtual operating systems, also known as virtual machines. By sharing the same
physical computing resources, such as memory, processing and storage, the hypervisor stretches
available resources and improves IT flexibility.
Virtual machine: These software-defined computers encompass operating systems, software
programs and documents. Managed by a virtual infrastructure, each virtual machine has its own
operating system called a guest operating system.
The key advantage of virtual machines is that IT teams can provision them faster and more
easily than physical machines without the need for hardware procurement. Better yet, IT teams
can easily deploy and suspend a virtual machine, and control access privileges, for greater
security. These privileges are based on policies set by a system administrator.
User interface: This front-end element means administrators can view and manage virtual
infrastructure components by connecting directly to the server host or through a browser-based
interface.
CHALLENGES OF VIRTUALIZATION
Bad storage, server, and network configurations are just a few reasons why virtualization fails.
These are technical in nature and are often easy to fix, but some organizations overlook the need
to protect their entire virtualized environments, thinking that they’re inherently more secure
than traditional IT environments. Others use the same tools they use to protect their existing
physical infrastructure. The bottom line is that a virtualized environment is more complex and
requires a new management approach. These are the common problems talked about behind
closed doors.
• Resource distribution
Virtualization can partition systems in varied ways: some partitions might function
really well, while others might not provide users access to enough resources to
meet their needs. Resource distribution problems often occur in the shift to virtualization
and can be fixed by working on capacity planning with your service provider.
• Backward compatibility
Using legacy systems can cause problems with newer virtualized software programs.
Compatibility issues can be time-consuming and difficult to solve. A good provider may
be able to suggest upgrades and workarounds to ensure that everything functions the way
it should.
• Performance monitoring
Virtualized systems don’t lend themselves to the same kind of performance monitoring
as hardware like mainframes and hard drives do. Tools such as VMmark can create
benchmarks that measure performance on virtual networks and monitor resource usage
as well.
• Backup
In a virtualized environment, there is no actual hard drive on which data and systems can
be backed up. This means frequent software updates can make it difficult to access
backup at times. Software programs like Windows Server Backup tools can make this
process easier and allow backups to be stored in one place for easier tracking and access.
• Security
Virtual systems can be vulnerable when users don’t keep them secure and
apply best practices for passwords or downloads. Security then becomes a problem for
virtualization, but the isolation of each VM can mitigate security risks and
prevent systems from being breached or compromised.
• Licensing Compliance
Using existing licensed software in a virtual environment can lead to compliance
issues if more VMs are created than the company is licensed to use the software on. It's
important to keep track of how licensed software is being used and to be
sure compliance is maintained as the virtual environment grows.
• Network Configuration
There is a lot of work involved in managing multiple virtual machines, even with a VM
management solution like VMware vSphere. Making poor configuration choices, like
allowing file sharing between VMs, or leaving unused firewall ports open could be all
that's needed for a hacker to gain access to your virtual infrastructure. This
misconfiguration can also include the physical servers, which can become a security risk
without the latest security patches and firmware.
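The capacity-planning remedy mentioned under resource distribution above can be sketched as a simple fit check. The VM sizes and the 10% hypervisor headroom are illustrative assumptions:

```python
# Capacity-planning sketch: check whether a host can accommodate the vCPU
# and memory reservations of the VMs placed on it, holding back headroom
# for the hypervisor. All sizes and the 10% reserve are illustrative.

def fits(host_cpus, host_ram_gb, vms, reserve_ratio=0.1):
    """Return True if the VMs' reservations fit within the host after
    holding back `reserve_ratio` of each resource."""
    usable_cpus = host_cpus * (1 - reserve_ratio)
    usable_ram = host_ram_gb * (1 - reserve_ratio)
    need_cpus = sum(cpu for cpu, _ in vms)
    need_ram = sum(ram for _, ram in vms)
    return need_cpus <= usable_cpus and need_ram <= usable_ram

vms = [(2, 8), (4, 16), (2, 8)]         # (vCPUs, RAM in GB) per VM
print(fits(16, 64, vms))                 # True: 8 vCPU / 32 GB fits easily
print(fits(8, 32, vms + [(4, 16)]))      # False: 12 vCPU exceeds headroom
```

Running a check like this before placing VMs is the kind of capacity planning that prevents the resource-distribution problems described above, where some workloads starve while others idle.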
THE VIRTUALIZATION OF COMPUTER HARDWARE
The need for virtualization came into existence because of the original design of the computer
system or personal computer. To provide a simplified illustration of the concept, imagine the
form and function of a typical personal computer. A basic system requires a combination of
hardware and software. The software handles the commands coming from the user, and then
utilizes the computer hardware to perform certain calculations. In this configuration, one
person typing on a keyboard elicits a response from the computer hardware. In this layout, it
is impossible for another person to access the same computer, because it is dedicated to one
user.
The virtualization of computer hardware requires the use of software known as a hypervisor,
which creates a mechanism that enables the user or the system administrator to share the
resources of a single hardware device. As a result, several students, engineers, designers,
and professionals may use the same server or computer system. This setup makes sense
because one person cannot utilize the full computing power of a single hardware device.
An ordinary person without the extensive knowledge of a network administrator can use such
a computer thinking that it is a dedicated computer powered by its own processor. Sharing
the computing capability of a single piece of hardware maximizes the full potential of the
system.
It is important to point out that virtualization is not utilized only to reduce the cost of
operations. According to Red Hat, applying virtualization technology leads to the creation of
separate, distinct, and secure virtual environments (Red Hat, Inc., 2017, par. 1). In other
words, there are two additional advantages when administrators and ordinary users adopt this
type of technology. First, the creation of distinct VMs makes it easier to isolate and study
errors or problems in the system. Second, separate VMs make it easier to identify the system's
vulnerabilities or the source of external attacks on the system (Portnoy, 2015). Therefore, the
adoption of virtualization technologies is a practical choice in terms of safety, ease of
management, and cost-efficiency.
A special piece of software known as the hypervisor enables the user to create virtual machines
on a computer hardware system. The specific target of the hypervisor is the computer's central
processing unit, or CPU. Once in effect, the hypervisor decouples the operating system from the
CPU it was once linked to. After this decoupling, the hypervisor can run multiple operating
systems, known as guest operating systems (Cvetanov, 2015). Without this procedure, the original
OS is limited to one CPU; in a traditional setup there is a one-to-one relationship between the
OS and the CPU. For example, a rack server traditionally requires its own OS to function as a
single web server, whereas with virtualization an organization can enjoy the benefits of twenty
web servers while using only the resources of one rack server.
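The "twenty web servers on one rack server" figure is a matter of simple resource arithmetic. The sketch below uses hypothetical host and guest sizes (the core and RAM numbers are illustrative, not from the text) to show how the guest count is limited by whichever resource runs out first.

```python
def max_guests(host_cores, host_ram_gb, guest_cores, guest_ram_gb):
    """How many identical guests fit on one host.

    The answer is bounded by whichever resource (CPU cores or RAM)
    is exhausted first.
    """
    by_cpu = host_cores // guest_cores
    by_ram = host_ram_gb // guest_ram_gb
    return min(by_cpu, by_ram)


# A hypothetical rack server with 40 cores and 256 GB RAM, where each
# web-server guest is allocated 2 cores and 8 GB RAM:
print(max_guests(40, 256, 2, 8))  # 20 guests on a single host
```

Real hypervisors complicate this picture with overcommitment, scheduling overhead, and memory for the hypervisor itself, but the basic consolidation logic is as above.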
Depending on the type of virtualization, enterprises receive multiple benefits when adopting
it, including the following:
• Better hardware utilization
Most importantly, no physical server is now underutilized, as each one can be used to run
multiple applications and OSs.
• Easier IT management
With software VMs, IT administrators can easily have a "single view" of the entire IT
infrastructure through their admin portal. This simplifies the centralized management of
various IT resources, thus improving operational efficiency.
For instance, network administrators can deploy workload management processes that speed up
server deployment and configuration. Further, automation tools can configure VMs and
applications without time-consuming or error-prone manual steps.
• Minimizes IT downtime
Data losses or server downtime can be devastating to any business and can lower overall
productivity. Virtualization runs multiple VMs at the same time and minimizes IT downtime
through failovers. Furthermore, backup-related operations can run on the same VM, reducing the
negative impact of data losses or breaches.
Also, by providing operational visibility, administrators have real-time access to network
data, making disaster recovery (in the event of any emergency) much easier to manage.
Furthermore, virtualization tools make workload balancing easier through the dynamic allocation
of hardware resources. Effectively, IT management personnel only need to change a few of the
existing configurations to improve overall performance.
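The failover and workload-balancing ideas above can be illustrated with a toy placement model. This is a minimal sketch, not a real cluster manager; the host and VM names are invented, and the rebalancing rule (move each orphaned VM to the least-loaded survivor) is a deliberately simple stand-in for what products like vSphere do with far more sophistication.

```python
def failover(placement, failed_host):
    """Reassign every VM from a failed host to surviving hosts.

    `placement` maps host name -> list of VM names. Each orphaned VM
    is moved to whichever surviving host currently runs the fewest VMs,
    a crude form of workload balancing.
    """
    # Copy the surviving hosts' VM lists so the input is not mutated.
    survivors = {h: list(vms) for h, vms in placement.items()
                 if h != failed_host}
    for vm in placement.get(failed_host, []):
        target = min(survivors, key=lambda h: len(survivors[h]))
        survivors[target].append(vm)
    return survivors


placement = {"host-a": ["web1", "web2"], "host-b": ["db1"], "host-c": []}
print(failover(placement, "host-a"))
```

After the call, host-a no longer appears, and web1 and web2 have been spread across the two surviving hosts, so the workloads keep running despite the hardware failure.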
INDIA DESKTOP VIRTUALIZATION MARKET - GROWTH, TRENDS,
COVID-19 IMPACT, AND FORECASTS (2023 - 2028)
The range of virtualization tools and applications being used has given rise to increased work
productivity, which is one of the major priorities for all end-user industries.
On-premise Segment Holds the Largest Market Share
In the on-premise model, infrastructure and maintenance are handled by the clients. Sourcing
licenses for on-premise deployments is comparatively simpler. Connectivity and customization
advantages also support the on-premise model.
Enterprises can use VMware's virtualization software to manage their IT operations in Azure.
Mar 2019 - Microsoft Corporation introduced Azure Stack HCI Solutions, a new implementation of
its on-premise Azure product for hyperconverged infrastructure hardware.
India Desktop Virtualization Market Top Players
1. Microsoft Corporation
2. Wipro Limited
3. Amazon Web Services Inc.
4. Dell Inc.
5. Hewlett-Packard Company
CONCLUSION
Virtualization technologies came about after the limitations of the conventional computer
design were recognized. In the old setup, access to the resources of a computer hardware device
was limited to a single user, yet typical usage does not require the full capacity of the CPU,
RAM, storage, and networking capability of the system. Virtualization technologies therefore
enabled the sharing of resources and maximized the potential of a single computer system. This
technology allows users to enjoy the benefits of consolidation, redundancy, safety, and
cost-efficiency. Its ability to create distinct and separate VMs has made it an indispensable
component of cloud computing.
As a result, network administrators, programmers, and ordinary users are able to develop
systems that run the same set of applications on multiple machines. It is now possible not only
to multiply the capability of a single hardware configuration, but also to test applications
without fear of affecting other VMs that are performing critical operations. Virtualizing
resources lets administrators pool their physical resources, so their hardware can truly be
commoditized. Even legacy infrastructure that is expensive to maintain, but supports important
apps, can be virtualized for optimal use.
WEBLIOGRAPHY
• Stewart, Vaughn. Virtualization Changes Everything: Storage Strategies for VMware vSphere &
Cloud Computing.
• "Virtualization: Issues, Security Threats, and Solutions." ACM Computing Surveys, vol. 45,
no. 2, February 2013, Article 17, pp. 1-39.