VIRTUALIZATION-Basics & Applications: Abstract

Virtualization allows multiple virtual machines to run on a single physical machine, improving efficiency. It works by inserting a thin software layer that allocates hardware resources dynamically between virtual machines. This allows different operating systems and applications to run concurrently on the same physical computer while sharing resources. Virtualization can improve IT resource utilization and availability while reducing costs.

Uploaded by Swathi Bhat M
© Attribution Non-Commercial (BY-NC)

VIRTUALIZATION-Basics & Applications

Abstract:

Start by eliminating the old “one server, one application” model and run multiple virtual
machines on each physical machine. Improve the efficiency and availability of IT
resources and applications through virtualization. Free your IT admins to spend less
time managing servers and more time innovating. About 70% of a typical IT budget in
a non-virtualized datacenter goes towards just maintaining the existing infrastructure,
with little left for innovation.

Today’s x86 computer hardware was designed to run a single operating system and a
single application, leaving most machines vastly underutilized. Virtualization lets you run
multiple virtual machines on a single physical machine, with each virtual machine sharing
the resources of that one physical computer across multiple environments. Different
virtual machines can run different operating systems and multiple applications on the
same physical computer. While others are now leaping aboard the virtualization
bandwagon, VMware is the market leader in virtualization. Its technology is
production-proven, used by more than 170,000 customers, including 100% of the Fortune 100.

Definition:

Virtualization is the creation of a virtual (rather than actual) version of something, such as
an operating system, a server, a storage device, or network resources.

How Does Virtualization Work?


The VMware virtualization platform is built on a business-ready architecture. Use
software such as VMware vSphere to transform or “virtualize” the hardware resources of
an x86-based computer—including the CPU, RAM, hard disk and network controller—to
create a fully functional virtual machine that can run its own operating system and
applications just like a “real” computer. Each virtual machine contains a complete system,
eliminating potential conflicts. VMware virtualization works by inserting a thin layer of
software directly on the computer hardware or on a host operating system. This layer
contains a virtual machine monitor or "hypervisor" that allocates hardware resources dynamically
and transparently. Multiple operating systems run concurrently on a single physical
computer and share hardware resources with each other. By encapsulating an entire
machine, including CPU, memory, operating system, and network devices, a virtual
machine is completely compatible with all standard x86 operating systems, applications,
and device drivers. You can safely run several operating systems and applications at the
same time on a single computer, with each having access to the resources it needs when it
needs them.
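As a minimal sketch of this idea, the toy Python below carves one physical machine's CPU and RAM into per-VM allocations, the way a hypervisor parcels out hardware resources to guests. All class and method names here are invented for illustration; this is not how any real hypervisor is implemented.

```python
# Toy sketch (not a real hypervisor): one physical host's CPU and RAM are
# carved into per-VM allocations, and the "hypervisor" refuses to
# overcommit. All names are illustrative.

class PhysicalHost:
    def __init__(self, cpus, ram_mb):
        self.cpus = cpus
        self.ram_mb = ram_mb
        self.vms = {}

    def create_vm(self, name, cpus, ram_mb):
        used_cpus = sum(v[0] for v in self.vms.values())
        used_ram = sum(v[1] for v in self.vms.values())
        if used_cpus + cpus > self.cpus or used_ram + ram_mb > self.ram_mb:
            raise RuntimeError("insufficient physical resources")
        self.vms[name] = (cpus, ram_mb)
        return name

host = PhysicalHost(cpus=8, ram_mb=16384)
host.create_vm("web", cpus=2, ram_mb=4096)   # e.g. a Linux web server
host.create_vm("db", cpus=4, ram_mb=8192)    # e.g. a Windows database
print(len(host.vms))  # 2 guests sharing one physical machine
```

Note that real hypervisors typically do allow overcommitting resources; the hard cap here only illustrates that all guests ultimately draw on one physical pool.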

Virtualization can run in two ways:

1. Virtualization on a host OS

2. Virtualization without a host OS

Using Virtualization to Host Multiple Operating Systems

In many cases, virtualization is used to create many virtual private servers (VPS), which
are essentially small duplicates of the same operating system within directories that
appear to the user as servers in themselves.
In hosted virtualization, the host OS is the primary one installed on your server. You then
install the virtualization software within that OS. Finally, you install the guest OS within a
virtual machine. Unlike emulation, virtualization software does not usually emulate
hardware. In fact, in some cases, it directly interfaces with the server's hardware, giving
you real-time performance (essentially like running the two OSes side by side, rather than
one on top of the other). With this setup, it is possible to run two distinct OSes with different
web server software, different scripting languages, and different web applications, all
within the same physical box.

OS virtualization: Virtualizing without the hypervisor

Virtualization doesn't require a hypervisor. Imagine a virtualization infrastructure
completely devoid of a hypervisor. Having no hypervisor eliminates the need for driver
emulation. Getting rid of driver emulation means faster performance. Faster performance
means more virtual machines that can be run simultaneously. And more simultaneous
virtual machines nets you higher density, all of which means more bang for your
virtualization dollar.

Applications:

1. Hardware Virtualization

2. Network Virtualization

3. I/O Virtualization

4. Memory Virtualization

Hardware Virtualization:

In hardware virtualization, the virtual machine manager is embedded in the circuits
of a hardware component instead of being called up from a third-party software
application. The virtual machine manager is called a hypervisor.

The job of the hypervisor is to control processor, memory and other firmware resources.
The hypervisor acts like a traffic cop, allowing multiple operating systems to run on the
same device without requiring source code or binary changes. Each operating system
appears to have the processor, memory, and other firmware resources all to itself -- but in
reality, the hypervisor is controlling the processor and its resources, allocating what is
needed to each operating system in turn.
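The "traffic cop" behavior above can be sketched as a simple round-robin scheduler in Python. This is purely illustrative; real hypervisor schedulers are far more sophisticated, weighing priorities, I/O waits, and resource shares.

```python
# Toy round-robin "traffic cop": the hypervisor hands the (single) CPU to
# each guest OS in turn, so each one appears to own the processor.
from collections import deque

def schedule(guests, time_slices):
    """Return the order in which guests receive CPU time slices."""
    queue = deque(guests)
    order = []
    for _ in range(time_slices):
        guest = queue.popleft()
        order.append(guest)      # guest runs for one slice
        queue.append(guest)      # then goes to the back of the line
    return order

print(schedule(["linux", "windows", "solaris"], 6))
# ['linux', 'windows', 'solaris', 'linux', 'windows', 'solaris']
```

Each guest gets the processor "all to itself" for a slice, then yields it, which is why no source code or binary changes are needed in the guest operating systems.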

Hardware virtualization is a system which uses one processor to act as if it were several
different computers. This has two main purposes. One is to run different operating
systems on the same hardware. The other is to allow more than one user to use the
processor at the same time. While there are both logistical and financial benefits to
hardware virtualization, there are still some practical limitations.

The name hardware virtualization is used to cover a range of similar technologies
carrying out the same basic function. Strictly speaking, it should be called
hardware-assisted virtualization. This is because the processor itself carries out some of the
virtualization work. This is in contrast to techniques which are solely software based.

The primary use of hardware virtualization is to allow multiple users to access the
processor. This means that each user can have a separate monitor, keyboard and mouse
and run his or her OS independently. As far as the user is concerned, they will effectively
be running their own computer. This set-up can cut costs considerably as multiple users
can share the same core hardware.

There are some significant limitations to hardware virtualization. One is that it still
requires dedicated software to carry out the virtualization, which can bring additional
costs. Another is that, depending on the way the virtualization is carried out, it may not be
as easy to add in extra processing power later on as and when it is needed. Perhaps the
biggest drawback is that no matter how efficiently the virtualization is carried out, the
maximum processing power of the chip cannot be exceeded. This means it must be split
between the different users.

Network Virtualization:

Network virtualization is a method of combining the available resources in a network by
splitting up the available bandwidth into channels, each of which is independent of the
others, and each of which can be assigned (or reassigned) to a particular server or device
in real time. Each channel is independently secured. Every subscriber has shared access to
all the resources on the network from a single computer.
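A hypothetical sketch of that channel idea in Python: a link's bandwidth is split into independent channels, and each channel can be assigned or reassigned on the fly. The function and field names are invented for illustration and do not come from any real product.

```python
# Toy sketch: split a link's bandwidth into independent channels and
# assign (or reassign) each one to a server. Illustrative names only.

def split_bandwidth(total_mbps, n_channels):
    """Divide available bandwidth into equal independent channels."""
    per_channel = total_mbps // n_channels
    return {f"ch{i}": {"mbps": per_channel, "assigned_to": None}
            for i in range(n_channels)}

channels = split_bandwidth(total_mbps=1000, n_channels=4)
channels["ch0"]["assigned_to"] = "web-server"     # assign in "real time"
channels["ch1"]["assigned_to"] = "db-server"
channels["ch0"]["assigned_to"] = "backup-server"  # ...and later reassign
print(channels["ch0"])  # {'mbps': 250, 'assigned_to': 'backup-server'}
```

Real implementations (VLANs, virtual circuits) enforce isolation and security per channel in hardware or in the network stack; the dictionary above only models the bookkeeping.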

Network management can be a tedious and time-consuming business for a human
administrator. Network virtualization is intended to improve productivity, efficiency, and
job satisfaction of the administrator by performing many of these tasks automatically,
thereby disguising the true complexity of the network. Files, images, programs, and
folders can be centrally managed from a single physical site. Storage media such as hard
drives and tape drives can be easily added or reassigned. Storage space can be shared or
reallocated among the servers.

Network virtualization is intended to optimize network speed, reliability, flexibility,
scalability, and security. Network virtualization is said to be especially effective in
networks that experience sudden, large, and unforeseen surges in usage.

Memory Virtualization:

A guest operating system that executes within a virtual machine expects a zero-based
physical address space, as provided by real hardware. ESX Server gives each VM this
illusion, virtualizing physical memory by adding an extra level of address translation. A
machine address refers to actual hardware memory, while a physical address is a
software abstraction used to provide the illusion of hardware memory to a virtual
machine. The word "physical" is often placed in quotes to highlight this deviation from
its usual meaning.
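The extra level of address translation can be illustrated with a small Python model: each guest sees a zero-based "physical" address space, which the virtualization layer maps onto scattered machine pages. The page size and mappings below are made up for the example.

```python
# Toy illustration of the extra translation level: guest "physical"
# pages are mapped to arbitrary machine pages by a per-VM table
# (similar in spirit to ESX Server's pmap structure).

PAGE = 4096

# per-VM table: guest "physical" page number -> machine page number
pmap = {"vm1": {0: 7, 1: 3, 2: 12},
        "vm2": {0: 5, 1: 9}}

def to_machine(vm, phys_addr):
    """Translate a guest 'physical' address to a machine address."""
    page, offset = divmod(phys_addr, PAGE)
    return pmap[vm][page] * PAGE + offset

# Both guests use address 0, yet land on different machine pages:
print(to_machine("vm1", 0))     # 28672 (machine page 7)
print(to_machine("vm2", 0))     # 20480 (machine page 5)
print(to_machine("vm1", 4100))  # 12292 (machine page 3, offset 4)
```

This is why every guest can assume a zero-based address space: the same "physical" address in two VMs resolves to two different machine addresses.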

In computer science, memory virtualization decouples volatile random access memory
(RAM) resources from individual systems in the data center, and then aggregates those
resources into a virtualized memory pool available to any computer in the cluster. The
memory pool is accessed by the operating system or applications running on top of the
operating system. The distributed memory pool can then be utilized as a high-speed
cache, a messaging layer, or a large, shared memory resource for a CPU or a GPU
application.

Memory virtualization allows networked, and therefore distributed, servers to share a
pool of memory to overcome physical memory limitations, a common bottleneck in
software performance. With this capability integrated into the network, applications can
take advantage of a very large amount of memory to improve overall performance and
system utilization, increase memory usage efficiency, and enable new use cases. Software
on the memory pool nodes (servers) allows nodes to connect to the memory pool to
contribute memory, and store and retrieve data. Management software manages the
shared memory, data insertion, eviction and provisioning policies, data assignment to
contributing nodes, and handles requests from client nodes. The memory pool may be
accessed at the application level or operating system level. At the application level, the
pool is accessed through an API or as a networked file system to create a high-speed
shared memory cache. At the operating system level, a page cache can utilize the pool as
a very large memory resource that is much faster than local or networked storage.
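As a rough, single-process mock of what application-level access to such a pool might look like, the Python below sketches a put/get cache with a least-recently-used eviction policy. Real memory virtualization spans many servers over a network; only the interface shape is suggested here, and the class name is invented.

```python
# Toy, in-process mock of a pooled memory cache with LRU eviction,
# suggesting the kind of put/get API an application might use.
from collections import OrderedDict

class MemoryPool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # insertion order tracks recency

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

    def get(self, key):
        if key not in self.store:
            return None                     # miss: fall back to storage
        self.store.move_to_end(key)
        return self.store[key]

pool = MemoryPool(capacity=2)
pool.put("a", 1)
pool.put("b", 2)
pool.get("a")         # touch "a" so it becomes most recently used
pool.put("c", 3)      # evicts "b", the least recently used entry
print(pool.get("b"))  # None
print(pool.get("a"))  # 1
```

The eviction policy is exactly the kind of decision the management software described above makes for the shared pool, just reduced to one process and one dictionary.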

Memory virtualization implementations are distinguished from shared memory systems.
Shared memory systems do not permit abstraction of memory resources, thus requiring
implementation with a single operating system instance (i.e. not within a clustered
application environment).

I/O Virtualization:

Input/output (I/O) virtualization is a methodology to simplify management, lower costs
and improve performance of servers in enterprise environments. I/O virtualization
environments are created by abstracting the upper layer protocols from the physical
connections.

The technology enables one physical adapter card to appear as multiple virtual network
interface cards (vNICs) and virtual host bus adapters (vHBAs). Virtual NICs and HBAs
function as conventional NICs and HBAs, and are designed to be compatible with
existing operating systems, hypervisors, and applications. To networking resources
(LANs and SANs), they appear as normal cards.

In the physical view, virtual I/O replaces a server’s multiple I/O cables with a single cable
that provides a shared transport for all network and storage connections. That cable (or
commonly two cables for redundancy) connects to an external device, which then
provides connections to the data center networks.
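A simplified sketch of that shared transport in Python: each frame from a virtual NIC or HBA is tagged with its source so the external device can demultiplex the single cable back into separate network and storage flows. The function names and tagging scheme are entirely illustrative.

```python
# Toy sketch of I/O virtualization: several virtual adapters share one
# physical transport; each frame is tagged so the external device can
# split the traffic back out. Illustrative only.

def multiplex(frames_by_vnic):
    """Interleave per-adapter frames onto one shared 'cable', tagging each."""
    wire = []
    for vnic, frames in frames_by_vnic.items():
        for frame in frames:
            wire.append((vnic, frame))   # tag with the originating adapter
    return wire

def demultiplex(wire):
    """Split the shared transport back into per-adapter traffic."""
    out = {}
    for vnic, frame in wire:
        out.setdefault(vnic, []).append(frame)
    return out

traffic = {"vnic0": ["lan-pkt1", "lan-pkt2"], "vhba0": ["scsi-cmd1"]}
wire = multiplex(traffic)
print(demultiplex(wire) == traffic)  # True: nothing lost on the shared cable
```

Tag-and-demultiplex is the same basic trick real fabrics use (e.g. per-flow identifiers on a converged link), which is how one cable can stand in for a server's many network and storage cables.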

Benefits of I/O Virtualization:

 Management agility: By abstracting upper layer protocols from physical
connections, I/O virtualization provides greater flexibility, greater utilization and
faster provisioning when compared to traditional NIC and HBA card
architectures.
 Reduced cost: Virtual I/O lowers costs and enables simplified server management
by using fewer cards, cables, and switch ports, while still achieving full network
I/O performance.
 Reduced cabling: In a virtualized I/O environment, only one cable is needed to
connect servers to both storage and network traffic. This can reduce data center
server-to-network, and server-to-storage cabling within a single server rack by
more than 70 percent, which equates to reduced cost, complexity, and power
requirements. Because the high-speed interconnect is dynamically shared among
various requirements, it frequently results in increased performance as well.
 Increased density: I/O virtualization increases the practical density of I/O by
allowing more connections to exist within a given space. This in turn enables
greater utilization of dense 1U high servers and blade servers that would otherwise
be I/O constrained.

Virtualization Tools:

10 Free Virtualization Tools You Should Know

1. OpenVZ
2. FreeVPS
3. Sun xVM
4. VirtualBox
5. PlateSpin Power Recon
6. Vizioncore vOptimizer Free Ware
7. Virtual Iron Single Server Edition
8. Enomalism Virtualized Management Dashboard (VMD)
9. Microsoft Virtual Server Migration Toolkit (VSMT)
10. Moka5 LivePC Engine

OpenVZ:

Tool Type: Server Platform


Developer/Sponsor: SWsoft
Available From: www.openvz.org
This operating system-level virtualization platform, also called a 'containers-type'
platform, offers Linux users a simple and free way to create virtualized environments that
operate like independent servers. OpenVZ proponents claim the platform offers less of a
performance hit than solutions such as VMware and Xen.

FreeVPS:

Tool Type: Server Platform


Developer/Sponsor: Positive Software
Available From: www.freevps.com

Another Linux-based solution, FreeVPS enables users to create isolated virtual private
servers that are independent of one another and of the hardware. FreeVPS also offers
shared administration for data backups, task and network traffic monitoring and batch
installations.

Sun xVM:

Tool Type: Platform and Management Console


Developer/Sponsor: Sun Microsystems
Available From: www.openxvm.org

The open source xVM hypervisor is based on the Xen hypervisor but runs on Sun's
Solaris kernel rather than Linux. Sun also offers a free distribution, xVM Ops Center,
which is designed to simplify administration of virtualized servers. These solutions are
free, but support will cost you in the form of a subscription.

PlateSpin Power Recon:

Tool Type: VM Inventory


Developer/Sponsor: PlateSpin
Available From: www.platespin.com

This solution is offered free for up to 100 servers. It gives users the power to create a
software and hardware inventory for physical and virtual servers running on Windows,
Linux and Solaris operating systems. This can be a great resource for asset management
and also to plan for power and cooling needs.
VirtualBox:

Tool Type: Server/Desktop Platform


Developer/Sponsor: innotek
Available From: www.virtualbox.org

This x86 virtualization solution is open source, but also comes with a software
development kit to allow for easy interface customization. The VMs created by
VirtualBox have their settings stored in XML, allowing for hardware independence and
easy transfer.

Vizioncore vOptimizer Free Ware:

Tool Type: Performance Optimizer


Developer/Sponsor: Vizioncore
Available From: www.vizioncore.com

This free version of Vizioncore's vOptimizer allows a maximum of two users to shrink
their virtual machines' hard drives to the most compact size possible and to tweak
Windows guest operating systems to improve speed and performance.

Virtual Iron Single Server Edition:

Tool Type: Server Platform


Developer/Sponsor: Virtual Iron
Available From: www.virtualiron.com

This is Virtual Iron's most basic offering, a free edition for single servers that allows the
creation of up to 12 virtual machines on local storage. It is usable for Windows or Linux
operating systems and can be a good solution for items like file and print servers.

Enomalism Virtualized Management Dashboard (VMD):

Tool Type: Management Dashboard


Developer/Sponsor: Enomaly Inc
Available From: www.enomalism.com

This web-based virtual server manager can be a key utility for organizations that have
already embraced virtualization and are now trying to get a handle on their environment.
The dashboard can assist with load balancing, configuration management, capacity
diagnosis, deployment planning and automatic virtual machine migration.

Microsoft Virtual Server Migration Toolkit (VSMT):

Tool Type: Migration Tool


Developer/Sponsor: Microsoft
Available From: www.microsoft.com

This free utility from Microsoft enables administrators to easily transfer a physical server
into the virtual realm. It is limited to migrations into Virtual Server 2005, but can be very
useful in automating migrations within this environment.

Moka5 LivePC Engine:

Tool Type: Desktop Platform


Developer/Sponsor: Moka5
Available From: www.moka5.com

This desktop virtualization platform allows you to create a virtual machine that can be
launched from a USB device, which can be ideal for testing applications in a virtualized
environment without virtualizing the whole system. Be aware, though, that it won't work
in a system that has already been virtualized.

Conclusion:

Higher-density virtual machine platforms make it possible to conserve space, maintain
functionality, and keep performance constant, while at the same time lowering the costs
of management, cooling, and power. TCO benefits make it possible to create an
environment with smaller computing footprints while losing nothing in the transition.
