Virtualization Guide 5.2: Red Hat Enterprise Linux
Virtualization Guide
5.2
Christopher Curran
The Red Hat Enterprise Linux Virtualization Guide contains information on installation,
configuring, administering, tips, tricks and troubleshooting virtualization technologies used in
Red Hat Enterprise Linux.
Red Hat Enterprise Linux: Virtualization Guide
Author Christopher Curran <[email protected]>
Author Jan Mark Holzer <[email protected]>
Translator Don Dutile
Translator Barry Donahue
Translator Rick Ring
Translator Michael Kearey
Translator Marco Grigull
Translator Eugene Teo
Copyright © 2008 Red Hat, Inc. This material may only be distributed subject to the terms and conditions set forth in the
Open Publication License, V1.0 or later with the restrictions noted below (the latest version of the OPL is presently
available at http://www.opencontent.org/openpub/).
Distribution of substantively modified versions of this document is prohibited without the explicit permission of the
copyright holder.
Distribution of the work or derivative of the work in any standard (paper) book form for commercial purposes is
prohibited unless prior permission is obtained from the copyright holder.
Red Hat and the Red Hat "Shadow Man" logo are registered trademarks of Red Hat, Inc. in the United States and other
countries.
All other trademarks referenced herein are the property of their respective owners.
24. Configuring ELILO
25. Configuration files
VI. Tips and Tricks
Tips and Tricks to Enhance Productivity
26. Tips and tricks
1. Automatically starting domains during the host system boot
2. Modifying /etc/grub.conf
3. Example guest configuration files and parameters
4. Duplicating an existing guest and its configuration file
5. Identifying guest type and implementation
6. Generating a new unique MAC address
7. Limit network bandwidth for a guest
8. Starting domains automatically during system boot
9. Modifying dom0
10. Configuring guest live migration
11. Very Secure ftpd
12. Configuring LUN Persistence
13. Disable SMART disk monitoring for guests
14. Cleaning up the /var/lib/xen/ folder
15. Configuring a VNC Server
16. Cloning guest configuration files
27. Creating custom Red Hat Virtualization scripts
1. Using XML configuration files with virsh
28. Compiling para-virtualized driver packages from source code
VII. Troubleshooting
Introduction to Troubleshooting and Problem Solving
29. How To troubleshoot Red Hat Virtualization
1. Debugging and troubleshooting Red Hat Virtualization
2. Log files overview
3. Log file descriptions
4. Important directory locations
5. Troubleshooting with the logs
6. Troubleshooting with the serial console
7. Para-virtualized guest console access
8. Fully virtualized guest console access
9. SELinux considerations
10. Accessing data on guest disk image
11. Common troubleshooting situations
12. Guest creation errors
13. Serial console errors
14. Network bridge errors
15. Guest configuration files
16. Interpreting error messages
17. The layout of the log directories
18. Online troubleshooting resources
30. Troubleshooting
1. Identifying available storage and partitions
How should CIOs Think about Virtualization?
You may already be heavily invested in the rapidly emerging technology of virtualization. If so,
consider some of the ideas below for further exploiting the technology. If not, now is the right
time to get started.
Virtualization provides a set of tools for increasing flexibility and lowering costs, things that are
important in every enterprise and Information Technology organization. Virtualization solutions
are becoming increasingly available and rich in features.
Since virtualization can provide significant benefits to your organization in multiple areas, you
should be establishing pilots, developing expertise and putting virtualization technology to work
now.
Organizations looking to innovate find that the ability to create new systems and services
without installing additional hardware (and to quickly tear down those systems and services
when they are no longer needed) can be a significant boost to innovation.
Among possible approaches are the rapid establishment of development systems for the
creation of custom software, the ability to quickly set up test environments, the capability to
provision alternate software solutions and compare them without extensive hardware
investments, support for rapid prototyping and agile development environments, and the ability
to quickly establish new production services on demand.
These environments can be created in house or provisioned externally, as with Amazon’s EC2
offering. Since the cost to create a new virtual environment can be very low, and can take
advantage of existing hardware, innovation can be facilitated and accelerated with minimal
investment.
Virtualization can also excel at supporting innovation through the use of virtual environments for
training and learning. These services are ideal applications for virtualization technology. A
student can start course work with a known, standard system environment. Class work can be
isolated from the production network. Learners can establish unique software environments
without demanding exclusive use of hardware resources.
As the capabilities of virtual environments continue to grow, we’re likely to see increasing use of
virtualization to enable portable environments tailored to the needs of a specific user. These
environments can be moved dynamically to an accessible or local processing environment,
regardless of where the user is located. The user’s virtual environments can be stored on the
network or carried on a portable memory device.
How these applications of virtualization technology apply in your enterprise will vary. If you are
already using the technology in more than one of the areas noted above, consider an additional
investment in a solution requiring rapid development. If you haven’t started with virtualization,
start with a training and learning implementation to develop skills, then move on to application
development and testing. Enterprises with broader experience in virtualization should consider
implementing portable virtual environments or application appliances.
Further benefits include the ability to add hardware capacity in a non-disruptive manner and to
dynamically migrate workloads to available resources.
Depending on the needs of your organization, it may be possible to create a virtual environment
for disaster recovery. Introducing virtualization can significantly reduce the need to replicate
identical hardware environments and can also enable testing of disaster scenarios at lower cost.
Virtualization provides an excellent solution for addressing peak or seasonal workloads. If you
have complementary workloads in your organization, you can dynamically allocate resources to
the applications which are currently experiencing the greatest demand. If you have peak
workloads that you are currently provisioning inside your organization, you may be able to buy
capacity on demand externally and implement it efficiently using virtual technology.
Cost savings from server consolidation can be compelling. If you aren’t exploiting virtualization
for this purpose, you should start a program now. As you gain experience with virtualization,
explore the benefits of workload balancing and virtualized disaster recovery environments.
If you don’t have plans to incorporate virtualization in your solution architecture, now is a very
good time to identify a pilot project, allocate some underutilized hardware platforms, and
develop expertise with this flexible and cost-effective technology. Then, extend your target
architectures to incorporate virtual solutions. Although substantial benefits are available from
virtualizing existing services, building new applications with an integrated virtualization strategy
can yield further benefits in both manageability and availability.
About this book
You can learn more about Red Hat’s virtualization solutions at http://www.redhat.com/products/. This book covers the following topics:
• System Requirements
• Installation
• Configuration
• Administration
• Reference
• Troubleshooting
2. Document Conventions
Certain words in this manual are represented in different fonts, styles, and weights. This
highlighting indicates that the word is part of a specific category. The categories include the
following:
Courier font
Courier font represents commands, file names and paths, and prompts.
If you have to run a command as root, the root prompt (#) precedes the command:
# gconftool-2
bold font
Bold font represents application programs and text found on a graphical interface.
Additionally, the manual uses different strategies to draw your attention to pieces of information.
In order of how critical the information is to you, these items are marked as follows:
Note
A note is typically information that you need to understand the behavior of the
system.
Tip
A tip is typically an alternative way of performing a task.
Important
Important information is necessary, but possibly unexpected, such as a
configuration change that will not persist after a reboot.
Caution
A caution indicates an act that would violate your support agreement, such as
recompiling the kernel.
Warning
A warning indicates potential data loss, as may happen when tuning hardware
for maximum performance.
3. We Need Feedback
If you find a typographical error in the Virtualization Guide, or if you have thought of a way to
make this manual better, we would love to hear from you! Please submit a report in Bugzilla:
http://bugzilla.redhat.com/bugzilla/ against the component Virtualization_Guide.
If you have a suggestion for improving the documentation, try and be as specific as possible
when describing it. If you have found an error, please include the section number and some of
the surrounding text so we can find it easily.
Part I. System Requirements for
Red Hat Enterprise Linux
Virtualization
Chapter 1.
System requirements
Your system will require the attributes listed in this chapter to successfully run virtualization on
Red Hat Enterprise Linux. You will require a host running Red Hat Enterprise Linux 5 Server
with the virtualization packages. The host will need a configured hypervisor. For information on
installing the hypervisor, read Chapter 4, Installing Red Hat Virtualization packages on the host.
You require installation media for the guest systems. The installation media must be available to
the host as follows:
• for para-virtualized guests you require the Red Hat Enterprise Linux 5 installation tree
available over NFS, FTP or HTTP.
• for fully-virtualized guest installations you will require DVD or CD-ROM distribution media or a
bootable .iso file and a network accessible installation tree.
You also require storage for the guest systems. Guest storage can be one of the following:
• a file
• an LVM partition
Tip
Use /var/lib/xen/images/ for file based guest images. If you use a different
directory you must add the directory to your SELinux policy. For more information
see Chapter 11, SELinux and virtualization.
1. Hardware prerequisites
Hardware requirements for para-virtualization and full virtualization
The following list is the recommended RAM and disk space for each para-virtualized or fully
virtualized guest
It is advised to have at least one processing core or hyper-thread for each virtual machine.
Your system will also require hardware virtualization extensions to use fully virtualized guest
operating systems. The steps to identify whether your system has virtualization extensions can
be found at Hardware virtualization extensions.
The virtualization extensions cannot be disabled in the BIOS for AMD-V capable processors
installed in a Rev 2 socket. The Intel® VT extensions can be disabled in the BIOS. Certain
laptop vendors have disabled the Intel® VT extensions by default in their CPUs.
These instructions enable the Intel® VT virtualization extensions if they are disabled in the BIOS:
1. Run the xm dmesg | grep VMX command. The output should display as follows:
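A sketch of the expected output on a host with two CPUs; xm dmesg prefixes hypervisor messages with (XEN) and, as noted below, VMXON is reported once per CPU:
(XEN) VMXON is done
(XEN) VMXON is done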
2. Run the cat /proc/cpuinfo | grep vmx command to verify the CPU flags have been set.
The output should be similar to the following. Note vmx in the output:
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts
acpi mmx fxsr
sse sse2 ss ht tm syscall lm constant_tsc pni monitor ds_cpl vmx est tm2
cx16 xtpr lahf_lm
Do not proceed if the output of xm dmesg | grep VMX does not show VMXON is done for each CPU.
If other messages are reported, check the BIOS settings.
The following commands verify that the virtualization extensions are enabled on AMD-V
architectures:
1. Run the xm dmesg | grep SVM command. The output should look like the following:
2. Run the cat /proc/cpuinfo | grep svm command to verify the CPU flags have been set. The output
should be similar to the following. Note svm in the output:
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush mmx
fxsr sse sse2
ht syscall nx mmxext fxsr_opt lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy
svm cr8legacy ts
fid vid ttp tm stc
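The two checks can be combined in a short shell sketch (this assumes the host is booted into the Xen hypervisor and the xm tool is installed):

#!/bin/sh
# Report whether the CPU advertises hardware virtualization extensions
# and whether the running Xen hypervisor detected them at boot.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "CPU flags advertise hardware virtualization (vmx/svm)"
else
    echo "No vmx or svm flag found; check the BIOS settings"
fi
# The hypervisor boot messages confirm the extensions were enabled in the BIOS.
xm dmesg | grep -iE 'VMX|SVM'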
First, verify the Intel® VT or AMD-V capabilities are enabled via the BIOS. The BIOS settings for
Intel® VT or AMD-V are usually in the Chipset or Processor menus. However, they can
sometimes be hidden under obscure menus, such as Security Settings or other non standard
menus.
For Intel® VT architectures perform these steps to make sure Intel Virtualization Technology is
enabled. Some menu items may have slightly different names:
1. Reboot the computer and open the system's BIOS menu. This can usually be done by
pressing delete or Alt + F4.
4. Power on the machine and open the BIOS Setup Utility. Open the Processor section and
enable Intel® Virtualization Technology or AMD-V. The values may also be called
Virtualization Extensions on some machines. Select Save & Exit.
Important
After resetting or updating the BIOS you must reboot the system for the updated
settings to take effect.
Chapter 2.
Itanium® support
Red Hat Virtualization on the Itanium® architecture requires the guest firmware
image package; refer to Installing Red Hat Virtualization with yum for more information.
Chapter 3.
Virtualization limitations
This chapter covers the limitations of virtualization in Red Hat Enterprise Linux. There are
several aspects of virtualization which make virtualized guests unsuitable for certain types of
applications.
Para-virtualized guests running Red Hat Enterprise Linux are unable to
subscribe to RHN for the following additional services:
• RHN Satellite
• RHN Proxy
It is possible to configure these as fully virtualized guests, but this should be avoided due to the
high I/O requirements these services impose. This impact may be mitigated by full support of hardware
virtualization I/O extensions in future Z-stream releases of Red Hat Virtualization.
The following applications should be avoided due to their high I/O requirements:
• kdump server
• netdump server
You should carefully scrutinize database applications before running them on a virtualized
guest. Databases generally use network and storage I/O devices intensively. These applications
may not be suitable for a fully virtualized environment. Consider para-virtualization or
para-virtualized drivers (see Chapter 13, Introduction to Para-virtualized Drivers).
Other platforms and software applications that heavily utilize I/O or require real-time performance
should be evaluated carefully. Using full virtualization with the para-virtualized drivers (see Chapter 13,
Introduction to Para-virtualized Drivers) or para-virtualization will result in better performance
with I/O intensive applications. However, applications will always suffer some performance
degradation running on virtualized environments. The performance benefits of virtualization
through consolidating to newer and faster hardware should not be underestimated as they will
often outweigh the potential disadvantages of moving to a virtualized platform.
For a list of other limitations and issues affecting Red Hat Virtualization read the
Red Hat Enterprise Linux Release Notes for your version. The Release Notes
cover the present known issues and limitations as they are updated or
discovered.
Part II. Installation Procedures
Installing Red Hat Enterprise Linux Virtualization
These chapters provide the information you need to install host and guest systems using Red Hat
Virtualization and to get started with it.
Chapter 4.
Installing Red Hat Virtualization packages on the host
More information on kickstart files can be found on Red Hat's website (http://www.redhat.com/docs/manuals/enterprise/),
in the Installation Guide for your Red Hat Enterprise Linux version.
RHN registration
Your machines must be registered with Red Hat Network and you require a valid
Red Hat Network account in order to install Red Hat Virtualization on Red Hat
Enterprise Linux.
If you do not have a valid Red Hat subscription, visit the Red Hat online store
(https://www.redhat.com/wapps/store/catalog.html).
1. Log in to Red Hat Network (https://rhn.redhat.com/).
2. Select the systems you want to install Red Hat Virtualization on.
3. In the System Properties section the present system's entitlements are listed next to the
Entitlements header. Use the (Edit These Properties) link to change your entitlements.
Your system is now entitled to receive the Red Hat Virtualization packages. The next section
covers installing these packages.
Fully virtualized guests on the Itanium® architecture require the guest firmware image
package (xen-ia64-guest-firmware) from the supplementary installation DVD. This package
can also be installed from RHN with the yum command:
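A typical invocation, assuming the system is entitled to the channel providing the package:
# yum install xen-ia64-guest-firmware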
python-virtinst
Provides the virt-install command for creating virtual machines.
libvirt-python
The libvirt-python package contains a module that permits applications written in the Python
programming language to use the interface supplied by the libvirt library to use the Xen
virtualization framework.
libvirt
libvirt is an API library which uses the Xen virtualization framework; it also provides the virsh
command line tool to manage and control virtual machines.
virt-manager
Virtual Machine Manager provides a graphical tool for administering virtual machines. It
uses the libvirt library as the management API.
To install the other recommended virtualization packages, use the command below:
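A sketch of the command, using the package names listed above:
# yum install virt-manager libvirt libvirt-python python-virtinst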
Chapter 5.
Installing guests
This chapter will guide you through the process for installing guest virtual machines.
There are several questions which are important to consider and note before commencing the
installation process on the host.
Start the installation process using either the New button in virt-manager or the command
line interface virt-install.
If you are using the virt-install CLI command and you select the --vnc option for a
graphical installation you will also see the graphical installation screen as shown below.
The virt-install script provides a number of options one can pass on the command line.
Below is the output from virt-install --help:

virt-install --help
usage: virt-install [options]

options:
  -h, --help            show this help message and exit
  -n NAME, --name=NAME  Name of the guest instance
  -r MEMORY, --ram=MEMORY
                        Memory to allocate for guest instance in megabytes
  -u UUID, --uuid=UUID  UUID for the guest; if none is given a random UUID
                        will be generated. If you specify UUID, you should use
                        a 32-digit hexadecimal number.
  --vcpus=VCPUS         Number of vcpus to configure for your guest
  --check-cpu           Check that vcpus do not exceed physical CPUs and warn
                        if they do.
  --cpuset=CPUSET       Set which physical CPUs Domain can use.
  -f DISKFILE, --file=DISKFILE
                        File to use as the disk image
  -s DISKSIZE, --file-size=DISKSIZE
                        Size of the disk image (if it doesn't exist) in
                        gigabytes
  --nonsparse           Don't use sparse files for disks. Note that this will
                        be significantly slower for guest creation
  --nodisks             Don't set up any disks for the guest.
  -m MAC, --mac=MAC     Fixed MAC address for the guest; if none or RANDOM is
                        given a random address will be used
  -b BRIDGE, --bridge=BRIDGE
                        Bridge to connect guest NIC to; if none given, will
                        try to determine the default
  -w NETWORK, --network=NETWORK
                        Connect the guest to a virtual network, forwarding to
                        the physical network with NAT
  --vnc                 Use VNC for graphics support
  --vncport=VNCPORT     Port to use for VNC
  --sdl                 Use SDL for graphics support
  --nographics          Don't set up a graphical console for the guest.
  --noautoconsole       Don't automatically try to connect to the guest
                        console
  -k KEYMAP, --keymap=KEYMAP
                        set up keymap for a graphical console
  --accelerate          Use kernel acceleration capabilities
  --connect=CONNECT     Connect to hypervisor with URI
  --livecd              Specify the CDROM media is a LiveCD
  -v, --hvm             This guest should be a fully virtualized guest
  -c CDROM, --cdrom=CDROM
                        File to use a virtual CD-ROM device for fully
                        virtualized guests
  --pxe                 Boot an installer from the network using the PXE boot
                        protocol
  --os-type=OS_TYPE     The OS type for fully virtualized guests, e.g.
                        'linux', 'unix', 'windows'
  --os-variant=OS_VARIANT
                        The OS variant for fully virtualized guests, e.g.
                        'fedora6', 'rhel5', 'solaris10', 'win2k', 'vista'
  --noapic              Disables APIC for fully virtualized guest (overrides
                        value in os-type/os-variant db)
  --noacpi              Disables ACPI for fully virtualized guest (overrides
                        value in os-type/os-variant db)
  --arch=ARCH           The CPU architecture to simulate
  -p, --paravirt        This guest should be a paravirtualized guest
  -l LOCATION, --location=LOCATION
                        Installation source for paravirtualized guest (eg,
                        nfs:host:/path, http://host/path, ftp://host/path)
  -x EXTRA, --extra-args=EXTRA
                        Additional arguments to pass to the installer with
                        paravirt guests
  -d, --debug           Print debugging information
  --noreboot            Disables the automatic rebooting when the installation
                        is complete.
  --force               Do not prompt for input. Answers yes where applicable,

Create a guest using virt-manager
1. Start virt-manager. To start a virt-manager session open a terminal and enter the following command as root:
# virt-manager &
The virt-manager & command opens a new virt-manager graphical user interface. If you
do not have root privileges the New button will be grayed out and you will not be able to
create a new virtual machine.
2. You will see a dialog box like the one below. Select the Connect button and the main
virt-manager window will appear:
3. The main virt-manager window will allow you to create a new virtual machine using the
New button:
4. The next window provides a summary of the information you will need to provide in order to
create a virtual machine:
After you have reviewed all of the information required for your installation you can continue
to the next screen.
5. Depending on whether your system has Intel® VT or AMD-V capable processors, the next
window will display either a single choice to create a para-virtualized guest, or two choices:
one for para-virtualized guest creation (a modified operating system optimized for
virtualization) and a second for fully virtualized guest creation (an unmodified operating
system):
6. The next screen will ask for the installation media for the type of installation you selected.
The para-virtualized installation requires an installation tree accessible via HTTP,
FTP or NFS (which can be set up on the same system where you install the guest). You can
easily create an installation tree by mounting the installation media DVD in a local
directory and exporting it via NFS, or making it available via FTP or HTTP. If your media is
an .iso file you can loopback mount the file and extract the files onto a local directory.
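As a minimal sketch (the ISO file name, mount point and export path below are illustrative assumptions), an NFS exported installation tree can be created from an ISO image as follows:
# mkdir -p /var/lib/xen/trees/rhel5 /mnt/iso
# mount -o loop RHEL5-Server-disc1.iso /mnt/iso
# cp -a /mnt/iso/* /var/lib/xen/trees/rhel5/
# echo "/var/lib/xen/trees/rhel5 *(ro,sync)" >> /etc/exports
# service nfs restart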
7. The fully virtualized installation will ask for the location of a boot media (ISO image, DVD or
CD-ROM drive). Depending on your installation media and process you can either perform a
network based installation after booting your guest off the .iso file, or perform the whole
installation off a DVD .iso file. Windows installations typically use DVD/CD .iso files;
Linux and unix-like operating systems such as Red Hat Enterprise Linux use an .iso file
to install a base system and then continue from a network based tree:
8. This screen is for selecting storage for the guest. Choose a disk partition, LUN or a file
based image for the location of the guest image. The convention for Red Hat Enterprise
Linux 5 is to install all file based guest images in the /var/lib/xen/images/ directory, as
SELinux blocks access to images located in other directories. If you run SELinux in
enforcing mode, see Chapter 11, SELinux and virtualization for more information on
installing guests. You must choose a size for file based guest image storage. The size of
your guest image should be larger than the size of the installation, any additional packages
and applications, and the size of the guest's swap file. The installation process will choose
the size of the guest's swap file based on the size of the RAM allocated to the guest.
Remember to allocate extra space if the guest is to store additional files, for example web
server log files or user storage.
Note
It is recommended to use the default directory for virtual machine images, which is
/var/lib/xen/images/. If you are using a different location (such as
/xen/images/ in this example) make sure it is added to your SELinux policy and
relabeled before you continue with the installation (later in the document you will
find information on how to modify your SELinux policy).
9. The last configuration information to enter is the memory size of the guest you are installing
and the number of virtual CPUs you would like to assign to your guest. Red Hat Enterprise
Linux 5 virtualization requires physical memory to back a guest's memory, so you must ensure your
system is configured with sufficient memory to accommodate the guests you intend to run.
A good practice for virtual CPU assignment is to not configure
more virtual CPUs in a single guest than the number of physical processors available in the host. You can
allocate more virtual processors across a number of virtual machines than the number of
physical processors available; however, you should generally avoid doing this as it will
significantly degrade the performance of your guests and host.
10. At this step you will be presented with a summary screen of all configuration information
you entered. Review the information presented and use the Back button to make changes.
Once you are satisfied with the data entered click the Finish button and the installation
process will commence:
Press the Finish button in virt-manager to conclude the installation and automatically
launch a VNC based window for the installation process.
Chapter 6.
Guest operating system installation processes
If you use virt-install instead of virt-manager, the following command line would start a guest
installation with the same parameters as selected above via the GUI:
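A representative invocation (the guest name, image path, size and installation tree URL are illustrative assumptions; every option used here appears in the virt-install help output shown in the previous chapter):
# virt-install -n rhel5PV -r 512 -f /var/lib/xen/images/rhel5PV.img -s 6 \
    --vnc -p -l http://installserver.example.com/rhel5-tree/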
After your guest has completed its initial bootstrap you will be dropped into the standard
installation process for Red Hat Enterprise Linux or the operating system specific installer you
selected. For most installations the default answers are acceptable.
1. The Red Hat Enterprise Linux installer will ask you for the installation language:
2. Next you have to select the keyboard layout you want to use:
3. Assign the guest's network address. You can either choose DHCP (as shown below) or
static:
4. If you have chosen DHCP for your guest the installation process will now try to acquire an
IP address for your guest via DHCP:
5. If you elected to use a static IP address for your guest the next screen will ask you for the
details on the guest's networking configuration:
a. Enter a valid IP address, also make sure the IP address you enter can reach your
installation server which provides the installation tree.
b. Enter a valid Subnet mask, default gateway and name server address.
7. After you have finished the networking configuration of your guest the installation process will
retrieve the installation files required to continue:
1.1. Graphical Red Hat Enterprise Linux 5 installation
1. If you are installing a Beta or early release distribution you need to confirm that you really
want to install the operating system:
2. The next step is to enter a valid registration code. If you have a valid RHN subscription key,
enter it in the Installation Number field:
3. The release notes for Red Hat Enterprise Linux 5 have temporary subscription keys for
Server and Client installations. If you are installing a local environment with no need to
access RHN enter V for virtualization, S for clustered storage (CLVM or GFS) and C for
clustering (Red Hat Cluster Suite) if required. In most cases entering V is sufficient:
Note
Depending on your version of Red Hat Enterprise Linux the setup numbers
above may not work. In that case you can skip this step and confirm your Red
Hat Network account details after the installation using the rhn_register
command.
4. The installation will now confirm you want to erase the data on the storage you selected for
the installation:
5. This screen will allow you to review the storage configuration and partition layout. You can
also choose to select the advanced storage configuration if you want to use iSCSI as guest
storage:
6. After you have reviewed the storage choice just confirm that you indeed want to use the
drive for the installation:
7. Next is the guest networking configuration and hostname settings. The information will be
populated with the data you entered earlier in the installation process:
10. Now you get to select the software packages you want to install. You can also choose to
customize the software selection by selecting the Customize Now button:
11. For our installation we chose the office packages and web server packages:
12. After you have selected the packages to install, dependencies and space requirements will
be verified:
13. After all of the installation dependencies and space requirements have been verified you
need to press the Next button to start the actual installation:
14. Now the installation will automatically install all of the selected software packages:
15. After the installation has finished you need to reboot your guest:
16. Your newly installed guest will not reboot; instead it will shut down:
1.2. The first boot after the guest installation of Red Hat
Enterprise Linux 5
To reboot your new guest use the command xm create GuestName where GuestName is the
name you entered during the initial installation. Guest configuration files are located in
/etc/xen/.
Once you have started your guest using the command above you can use virt-manager to
open a graphical console window for your guest. Start virt-manager, select your guest from the
list and click the Open tab, you will see a window similar to the one below:
1.3. First boot configuration
3. If you choose to disable the Firewall configuration you need to confirm your choice one more
time:
4. Next is SELinux. It is strongly recommended to run SELinux in enforcing mode but you can
choose to either run SELinux in permissive mode or completely disable it:
7. Confirm the time and date are set correctly for your guest. If you installed a para-virtualized guest
the time and date should be in sync with dom0/the hypervisor, as the guest gets its time from
dom0. A fully virtualized guest may need additional configuration, such as NTP, to avoid
clock skew relative to dom0:
8. If you have a Red Hat Network subscription or would like to try one you can use the screen
below to register your newly installed guest in RHN:
10. Once setup has finished you may see one more screen if you opted out of RHN at this time:
11. This screen will allow you to create a user besides the standard root account:
12. The following warning appears if you do not choose to create a personal account:
13. If the install detects a sound device you will be asked to calibrate it on this screen:
14. If you want to install any additional software packages from CD you could do so on this
screen. It is often more efficient to not install any additional software at this point but add it
later using yum:
15. After you press the Finish button your guest will reconfigure any settings you may have
changed and continue with the boot process:
16. And finally you will be greeted with the Red Hat Enterprise Linux 5 login screen:
17. After you have logged in you will see the standard Red Hat Enterprise Linux 5 desktop
environment:
2. Installing a Windows XP Guest as a fully virtualized guest
Itanium® support
Presently, Red Hat Enterprise Linux hosts on the Itanium® architecture do not
support fully virtualized Windows guests. This section only applies to x86 and
x86-64 hosts.
1. First you start virt-manager and select the New button to create a new virtual machine. As
you are installing a Windows based virtual machine you need to select the option to install a Fully
virtualized guest:
3. Specify the location for the ISO image you want to use for your Windows installation:
4. Select the storage backing store; either a file based image, a partition or a logical volume
can be used:
6. Before the installation will continue you will see the summary screen. Press Finish to
proceed to the actual installation:
7. Now the actual Windows installation will start. As you need to make a hardware selection it
is important that you open a console window very quickly after the installation has started.
Once you press Finish, set focus to the virt-manager summary window and
select your newly started Windows guest. Double click on the system name and a console
window will open. Quickly press F5 to select a new HAL; once you get the dialog box in the
Windows installer, select the 'Generic i486 Platform' tab (you can scroll through the selections
using the Up and Down arrows).
8. The installation will proceed like any other standard Windows installation:
10. After your drive has been formatted Windows will start copying the files onto your new hard
drive:
11. After setup has completed your Windows virtual machine will be rebooted:
12. You will have to halt the virtual machine after its initial reboot, as you need to manually edit
the guest's configuration file in /etc/xen/.
13. You will need to modify the disk entry and add a cdrom entry to the configuration file. The old
entry will look similar to the following:
disk = [ 'file:/var/lib/xen/images/winxp.dsk,hda,w' ]
Add a cdrom entry for the installation ISO so the new entry looks similar to the following:
disk = [ 'file:/var/lib/xen/images/winxp.dsk,hda,w' ,
'file:/xen/pub/trees/MS/en_winxp_pro_with_sp2.iso,hdc:cdrom,r', ]
14. Now you can restart your Windows virtual machine using the xm create WindowsGuest
command, where WindowsGuest is the name of your virtual machine.
15. Once you open the console window you will see Windows continuing with the setup phase:
16. If your installation seems to get stuck during the setup phase you can restart the virtual
machine using the command mentioned above. This will usually get the installation to
continue. As you restart the virtual machine you will see a Setup is being restarted
message:
17. After setup has finished you will see the Windows boot screen:
18. Now you can continue with the standard setup of your Windows installation:
19. After you completed the setup process you will be presented with your new Windows
desktop or login screen:
3. Installing a Windows 2003 SP1 Server Guest as a fully virtualized guest
It may be easier to use virt-install for installing Windows Server 2003 as the console for the
Windows guest will open quicker and allow for F5 to be pressed which is required to select a
new HAL.
Itanium® support
Presently, Red Hat Enterprise Linux hosts on the Itanium® architecture do not
support fully virtualized Windows guests. This section only applies to x86 and
x86-64 hosts.
An example of using virt-install for installing a Windows Server 2003 guest:
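A representative command (the guest name, image path, memory size and ISO location below are illustrative assumptions):
# virt-install --hvm -n windows2003sp1 -f /var/lib/xen/images/windows2003sp1.img \
    -s 6 -r 1024 --vnc --cdrom /var/lib/xen/images/en_windows_server_2003_sp1.iso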
1. After starting your guest installation you need to quickly press F5. This opens a dialog
window to select a different HAL or Computer Type. Choose Standard PC as the Computer
Type:
Part III. Configuration
Configuring Red Hat Enterprise Linux Virtualization
These chapters contain specialized information for certain advanced virtualization tasks. Users
wanting enhanced security, additional devices or better performance are advised to read these
chapters.
Chapter 7.
Virtualized block devices
This section uses a guest system created with virt-manager running a fully virtualized Red Hat
Enterprise Linux installation with an image located in /var/lib/xen/images/rhel5FV-1.img.
Note
There is no guarantee that this section will work for your system at this time.
Para-virtualized guests can access floppy drives as well as using para-virtualized
drivers on a fully virtualized system. For more information on using
para-virtualized drivers read Chapter 13, Introduction to Para-virtualized Drivers.
Create the XML configuration file for your guest image using the following command on a
running guest. This will save the configuration settings as an XML file which can be edited to
customize the operations the guest performs when the guest is started. For another example of
editing the virsh XML files, read Chapter 27, Creating custom Red Hat Virtualization scripts.
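A sketch of the export step, assuming the guest described above is named rhel5FV:
# virsh dumpxml rhel5FV > rhel5FV.xml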
Add the content below, changing where appropriate, to your guest's configuration XML file. This
example creates a guest with a floppy device as a file based virtual device.
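A sketch of the steps, with an illustrative floppy image path; the disk element goes inside the <devices> section of the XML file, and its exact attributes should be checked against your libvirt version:
# dd if=/dev/zero of=/var/lib/xen/images/rhel5FV-floppy.img bs=512 count=2880
<disk type='file' device='floppy'>
  <source file='/var/lib/xen/images/rhel5FV-floppy.img'/>
  <target dev='fda'/>
</disk>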
Stop the guest system, and restart the guest using the XML configuration file.
The floppy device is now available in the guest and stored as an image file on the host.
Additional storage can be provided to guests using file based containers, physical block devices
(partitions or LUNs), or logical volumes. To add a file based container to a guest:
1. Create an empty container file or use an existing file container (such as an ISO file).
• to create a sparse file use the following command (note that using sparse files is not
recommended due to data integrity and performance issues; they may be used for testing
but not in a production environment)
• or if you want to create a non-sparse file (recommended) just use the command
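Both commands are sketched below for a 4 GB image stored under /var/lib/xen/images/ (the file name is an illustrative assumption). The first creates a sparse file, the second writes out the full 4 GB up front:
# dd if=/dev/zero of=/var/lib/xen/images/FileName.img bs=1M seek=4096 count=0
# dd if=/dev/zero of=/var/lib/xen/images/FileName.img bs=1M count=4096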
2. Once you have created or identified the file you want to assign to your virtual machine you
can add it to the virtual machine's configuration file.
3. Open the guest's configuration file and locate the disk= parameter. A typical existing entry
looks like the following:
disk = [ 'tap:aio:/var/lib/xen/images/rhel5vm01.dsk,xvda,w', ]
4. To add the additional storage, add a new file based container entry in the disk= section of
the configuration file. Ensure you have specified a device name for the virtual block device
(xvd) which has not yet been used by other storage devices. The following is an example
configuration entry adding a file called oracle.dsk:
disk = [ 'tap:aio:/var/lib/xen/images/rhel5vm01.dsk,xvda,w',\
'tap:aio:/xen/images/oracle.dsk,xvdb,w', ]
5. Using the above entry your virtual machine will see the file oracle.dsk as the device /dev/xvdb
inside the guest.
1. Make the block device available to the host and configure it for your guest's needs (that is, the
name, persistence, multipath and so on).
2. Open the guest's configuration file and locate the existing disk= entry, which will look similar
to the following:
disk = [ 'tap:aio:/var/lib/xen/images/rhel5vm01.dsk,xvda,w', ]
3. To add the additional storage, add a new entry in the disk= section of
the configuration file. Ensure you specify the type phy and use a virtual block device name for
the new virtual block device (xvd) which has not yet been used by other storage devices. The
following is an example configuration entry which adds the device /dev/sdb1:
disk = [ 'tap:aio:/var/lib/xen/images/rhel5vm01.dsk,xvda,w',\
'phy:/dev/sdb1,xvdb,w', ]
4. Using the above entry your virtual machine will see the partition /dev/sdb1 as the device
/dev/xvdb inside the guest.
The same procedure can be used to allow a guest machine access to other physical block
devices, for example a CD-ROM or DVD drive.
Storage can also be attached to a running virtual machine dynamically with xm. In order to
dynamically add storage to a virtual machine/domain you need to perform the following steps:
1. Identify the block device or image file you want to make available to the virtual machine (for
our example we use /dev/sdb1)
2. After you have selected the storage you want to present to the guest you can use the xm
block-attach command to assign it to your virtual machine. The syntax for xm
block-attach is:
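A sketch of the syntax, followed by an invocation matching the example in the surrounding text (domain, backend device, frontend device and mode):
xm block-attach domain backdev frontdev mode
# xm block-attach MyVirtualMachine phy:/dev/sdb1 xvdb w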
The above command will attach /dev/sdb1 to the virtual machine MyVirtualMachine and
the device would be seen as /dev/xvdb inside the virtual machine.
1. The first step is to acquire the device UUIDs. Open your /etc/scsi_id.config file and
verify that the following line is commented out:
# options=-b
Then add (or confirm) the following line:
options=-g
This option configures udev to assume that all attached SCSI devices return a UUID.
3. To display the UUID for a given device execute the following command: scsi_id -g -s
/block/sdc. The output will look similar to the following:
# scsi_id -g -s /block/sdc
3600a0b800013275100000015427b625e
The result of the command above represents the device's UUID. At this time you should
verify the UUID you just retrieved is the same displayed via each path the device can be
accessed through. The UUID will be used as the primary/sole key to the device. UUIDs will
be persistent across reboots and as you add additional storage to your environment.
4. The next step is to create a rule to name your device. In /etc/udev/rules.d create the file
20-names.rules. You will be adding the new rules to the file
/etc/udev/rules.d/20-names.rules; any subsequent rules will be added to the same file
using the same format. Rules should have the following format:
Replace UUID and devicename with the UUID retrieved above, and the desired name for the
device. In the example, the rule would look as follows:
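A sketch of the rule format and of the example rule, written in the older udev key syntax that Red Hat Enterprise Linux 5 accepted; treat the exact keys and the scsi_id arguments as assumptions to verify against your udev version:
KERNEL="sd*", BUS="scsi", PROGRAM="/sbin/scsi_id -g -s %p", RESULT="UUID", NAME="devicename"
KERNEL="sd*", BUS="scsi", PROGRAM="/sbin/scsi_id -g -s %p", RESULT="3600a0b800013275100000015427b625e", NAME="mydevice"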
This forces the system to verify all devices which correspond to a block device ( /dev/sd*)
for the given UUID. When it finds a matching device, it will create a device node named
/dev/devicename. In the example above, the device node is labeled /dev/mydevice.
Apply the new rules by running:
/sbin/start_udev
Multi-path configuration.
To implement LUN persistence in a multipath environment, you need to define alias names for
your multipath devices. To identify a device's UUID or WWID follow the steps from the single
path configuration section. The multipath devices will be created in the /dev/mpath directory. In
/etc/multipath.conf, add an alias entry for each device, as in the following example:
multipaths {
multipath {
wwid 3600805f30015987000000000768a0019
alias oramp1
}
multipath {
wwid 3600805f30015987000000000d643001a
alias oramp2
}
multipath {
wwid 3600805f3001598700000000086fc001b
alias oramp3
}
multipath {
wwid 3600805f300159870000000000984001c
alias oramp4
}
}
The syntax for adding an ISO image as a new device to a guest is as follows:
disk = [ 'file:/var/lib/xen/images/win2003sp1.dsk,hda,w',\
'file:/xen/trees/ISO/WIN/en_windows_server_2003_with_sp1_standard.iso,hdc:cdrom,r',
]
Chapter 8.
Each domain network interface is connected to a virtual network interface in dom0 by a point to
point link. These devices are named vif<domid>.<vifid>: vif1.0 for the first interface in dom1;
vif3.1 for the second interface in domain 3.
dom0 handles traffic on these virtual interfaces by using standard Linux conventions for
bridging, routing, rate limiting, etc. The xend daemon employs two shell scripts to perform initial
configuration of your network and new virtual interfaces. These scripts configure a single bridge
for all virtual interfaces. You can configure additional routing and bridging by customizing these
scripts.
Red Hat Virtualization's virtual networking is controlled by the two shell scripts, network-bridge
and vif-bridge. xend calls these scripts when certain events occur. Arguments can be passed
to the scripts to provide additional contextual information. These scripts are located in the
/etc/xen/scripts directory. You can change script properties by modifying the
xend-config.sxp configuration file located in the /etc/xen directory.
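The relevant stock entries in /etc/xen/xend-config.sxp look like the following (a sketch of the defaults; your file may differ):
(network-script network-bridge)
(vif-script vif-bridge)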
The network-bridge script is invoked when xend is started or stopped; it initializes or
shuts down the virtual network. The initial configuration creates the bridge xenbr0
and moves eth0 onto that bridge, modifying the routing accordingly. When xend finally exits, it
deletes the bridge and removes eth0, thereby restoring the original IP and routing configuration.
vif-bridge is a script that is invoked for every virtual interface on the domain. It configures
firewall rules and can add the vif to the appropriate bridge.
There are other scripts that you can use to help in setting up Red Hat Virtualization to run on
your network, such as network-route, network-nat, vif-route, and vif-nat. Or these
scripts can be replaced with customized variants.
Chapter 9.
• Enable SELinux to run in enforcing mode. You can do this by executing the command below.
# setenforce 1
• Remove or disable any unnecessary services such as AutoFS, NFS, FTP, HTTP, NIS, telnetd,
sendmail and so on.
• Only add the minimum number of user accounts needed for platform management on the
server and remove unnecessary user accounts.
• Avoid running any unessential applications on your host. Running applications on the host
may impact virtual machine performance and can affect server stability. Any application which
may crash the server will also cause all virtual machines on the server to go down.
• Use a central location for virtual machine installations and images. Virtual machine images
should be stored under /var/lib/xen/images/. If you are using a different directory for your
virtual machine images make sure you add the directory to your SELinux policy and relabel it
before starting the installation.
• Installation sources, trees, and images should be stored in a central location, usually the
location of your vsftpd server.
Chapter 10.
• Run the lowest number of necessary services. You do not want to include too many jobs and
services in dom0. The fewer processes and services running on dom0, the higher the level of
security and performance.
• Enable SELinux on the hypervisor(dom0). Read Chapter 11, SELinux and virtualization for
more information on using SELinux and virtualization.
• Use a firewall to restrict traffic to dom0. You can set up a firewall with default-reject rules that
will help secure dom0 from attacks. It is also important to limit network facing services.
• Do not allow normal users to access dom0. If you do permit normal users dom0 access, you
run the risk of rendering dom0 vulnerable. Remember, dom0 is privileged, and granting
access to unprivileged accounts may compromise the level of security.
Chapter 11.
SELinux and virtualization
Set the SELinux context for a block device used by a guest using the semanage and
restorecon commands. In the example below the block device is /dev/sda2:
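A sketch of the two commands for /dev/sda2; xen_image_t is the type used for Xen guest images, and the -f -b arguments mark the target as a block device (verify the exact semanage options against your policycoreutils version):
# semanage fcontext -a -t xen_image_t -f -b /dev/sda2
# restorecon /dev/sda2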
The semanage and restorecon commands can also be used to add an additional directory, which
allows you to store guest images somewhere other than /var/lib/xen/images/. If you have a guest image
outside of /var/lib/xen/images/ and the location is not covered by the SELinux policy, Xen will be
unable to access the image; attempts to use it typically fail with a file not found error even though
ls shows the file exists.
You can modify your SELinux policy to include other directories used to store guest images.
You will need to add the directory to the SELinux policy and relabel it. To add another directory
(in our example the directory /home/admin/xen/ will be added) to your SELinux policy, use the
following command:
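A sketch of the command, assuming the xen_image_t type and the example directory above (verify the type and regular expression against your policy):
# semanage fcontext -a -t xen_image_t "/home/admin/xen(/.*)?"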
The last step is to relabel the directory using the following command:
restorecon /home/admin/xen
Chapter 12.
Virtualized network devices
Most guest network configuration occurs during the guest initialization and installation process.
To learn about configuring networking during the guest installation process, read the relevant
sections of the installation process, Chapter 5, Installing guests.
Network configuration is also covered in the tool specific reference chapters for
virsh (Chapter 20, Managing guests with virsh) and virt-manager (Chapter 21, Managing
guests with Virtual Machine Manager (virt-manager)). Those chapters provide a detailed
description of the networking configuration tasks using both tools.
Tip
Using para-virtualized network drivers can improve performance on fully
virtualized Linux guests. Chapter 13, Introduction to Para-virtualized Drivers
explains how to utilize para-virtualized network drivers.
#/etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
USERCTL=no
IPV6INIT=no
PEERDNS=yes
TYPE=Ethernet
NETMASK=255.255.255.0
IPADDR=10.1.1.1
GATEWAY=10.1.1.254
ARP=yes
3. Edit /etc/xen/xend-config.sxp and add a line pointing to your new network bridge script (we will
call the script network-xen-multi-bridge; an example is below). The new line should read
(network-script network-xen-multi-bridge); make sure the default
(network-script network-bridge) line is commented out.
4. Create a custom script to create multiple Red Hat Virtualization network bridges. A sample
script is below; this example script will create two Red Hat Virtualization bridges (xenbr0
and xenbr1), one attached to eth1 and the other to eth0. If you want to create
additional bridges just follow the example in the script and copy/paste the lines accordingly:
#!/bin/sh
# network-xen-multi-bridge
# Exit if anything goes wrong.
set -e
# First arg is the operation.
OP=$1
shift
script=/etc/xen/scripts/network-bridge.xen
case ${OP} in
start)
$script start vifnum=1 bridge=xenbr1 netdev=eth1
$script start vifnum=0 bridge=xenbr0 netdev=eth0
;;
stop)
$script stop vifnum=1 bridge=xenbr1 netdev=eth1
$script stop vifnum=0 bridge=xenbr0 netdev=eth0
;;
status)
$script status vifnum=1 bridge=xenbr1 netdev=eth1
$script status vifnum=0 bridge=xenbr0 netdev=eth0
;;
*)
echo 'Unknown command: ' ${OP}
echo 'Valid commands are: start, stop, status'
exit 1
esac
Laptop network configuration
This setup will also enable you to run Red Hat Virtualization in offline mode when you have no
active network connection on your laptop. The easiest solution to run Red Hat Virtualization on
a laptop is to follow the procedure outlined below:
• You basically will be configuring a 'dummy' network interface which will be used by Red Hat
Virtualization. In this example the interface is called dummy0. This will also allow you to use a
hidden IP address space for your guests/Virtual Machines.
• You will need to use static IP addresses as DHCP will not listen on the dummy interface for
DHCP requests. You can compile your own version of DHCP to listen on dummy interfaces;
however, you may want to look into using dnsmasq for DNS, DHCP and tftpboot services in a
Red Hat Virtualization environment. Setup and configuration are explained further down in
this section.
• You can also configure NAT/IP masquerading in order to enable access to the network from
your guests/virtual machines.
1. Create a dummy0 network interface and assign it a static IP address. In this example
10.1.1.1 was selected to avoid routing problems in the environment. To enable dummy device
support add the following lines to /etc/modprobe.conf:
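A plausible pair of entries for the dummy module (an assumption, not taken verbatim from the
original):
alias dummy0 dummy
options dummy numdummies=1
2. Configure networking for dummy0 by editing /etc/sysconfig/network-scripts/ifcfg-dummy0
so that it resembles the example below: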
DEVICE=dummy0
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
IPV6INIT=no
PEERDNS=yes
TYPE=Ethernet
NETMASK=255.255.255.0
IPADDR=10.1.1.1
ARP=yes
3. Bind xenbr0 to dummy0, so you can use networking even when not connected to a physical
network. Edit /etc/xen/xend-config.sxp to include the netdev=dummy0 entry:
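A hedged example of the entry, using the standard network-bridge script syntax:
(network-script 'network-bridge bridge=xenbr0 netdev=dummy0')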
4. Open /etc/sysconfig/network in the guest and modify the default gateway to point to
dummy0. If you are using a static IP, set the guest's IP address to exist on the same subnet
as dummy0.
NETWORKING=yes
HOSTNAME=localhost.localdomain
GATEWAY=10.1.1.1
IPADDR=10.1.1.10
NETMASK=255.255.255.0
5. Setting up NAT in the host will allow the guests access to the Internet, including over
wireless, solving the Red Hat Virtualization and wireless card issues. The script below enables
NAT based on the interface currently used for your network connection.
#!/bin/bash
PATH=/usr/bin:/sbin:/bin:/usr/sbin
export PATH
GATEWAYDEV=`ip route | grep default | awk {'print $5'}`
iptables -F
case "$1" in
    start)
        if test -z "$GATEWAYDEV"; then
            echo "No gateway device found"
        else
            echo "Masquerading using $GATEWAYDEV"
            /sbin/iptables -t nat -A POSTROUTING -o $GATEWAYDEV -j MASQUERADE
        fi
        echo "Enabling IP forwarding"
        echo 1 > /proc/sys/net/ipv4/ip_forward
        echo "IP forwarding set to `cat /proc/sys/net/ipv4/ip_forward`"
        echo "done."
        ;;
    *)
        echo "Usage: $0 {start|restart|status}"
        ;;
esac
One solution to the above challenges is to use dnsmasq, which can provide all of these services
in a single package and also allows you to restrict its services so that they are only available to
requests from your dummy interface. Below is a short write-up on how to configure dnsmasq on
a laptop running Red Hat Virtualization:
• Download dnsmasq.tgz from http://et.redhat.com/~jmh/tools/xen/ along with the other files
referenced below. The tar archive includes the following files:
• nm-dnsmasq can be used as a dispatcher script for NetworkManager. It will be run every
time NetworkManager detects a change in connectivity and forces a restart or reload of
dnsmasq. It should be copied to /etc/NetworkManager/dispatcher.d/nm-dnsmasq
• xenDNSmasq can be used as the main start up or shut down script for
/etc/init.d/xenDNSmasq
• Once you have unpacked and built dnsmasq (the default installation places the binary in
/usr/local/sbin/dnsmasq) you need to edit your dnsmasq configuration file. The file is
located at /etc/dnsmasq.conf. More information on dnsmasq is available at
http://www.thekelleys.org.uk/dnsmasq/ and http://www.thekelleys.org.uk/dnsmasq/doc.html.
Edit the configuration to suit your local needs and requirements. The following parameters are
likely the ones you want to modify:
• interface — if you want dnsmasq to listen for DHCP and DNS requests only on specified
interfaces (that is, your dummy interface(s) but not your public interfaces) and the loopback,
give the name of the interface (for example dummy0). Repeat the line for more than one
interface. An example would be interface=dummy0
• dhcp-range — to enable the integrated DHCP server, you need to supply the range of
addresses available for lease and, optionally, a lease time. If you have more than one
network, you will need to repeat this for each network on which you want to supply DHCP
service. An example would be (for the 10.1.1.0/24 network and a lease time of 12 hours):
dhcp-range=10.1.1.10,10.1.1.50,255.255.255.0,12h
• dhcp-option — to override the default route supplied by dnsmasq, which assumes the router
is the same machine as the one running dnsmasq. An example would be
dhcp-option=3,10.1.1.1
• After configuring dnsmasq you can copy the script below as xenDNSmasq to /etc/init.d
• If you want to automatically start dnsmasq during system boot you should register it using
chkconfig(8), as shown in the example after this list.
• The NetworkManager dispatcher will execute the script (in alphabetical order if you have
other scripts in the same directory) every time there is a change in connectivity
• dnsmasq will also detect changes in your /etc/resolv.conf and automatically reload them
(for example, if you start up a VPN session).
• Both the nm-dnsmasq and xenDNSmasq scripts will also set up NAT if you have your virtual
machines on a hidden network, to allow them access to the public network.
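A minimal sketch of the chkconfig registration, assuming the script has been copied to
/etc/init.d/xenDNSmasq (the exact commands are an assumption):
# chkconfig --add xenDNSmasq
# chkconfig --level 345 xenDNSmasq on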
Chapter 13. Introduction to Para-virtualized Drivers
Para-virtualized drivers provide increased performance for fully virtualized Red Hat Enterprise
Linux guests. Use these drivers if you are using fully virtualized Red Hat Enterprise Linux guests
and require better performance.
The RPM packages for the para-virtualized drivers include the modules for storage and
networking para-virtualized drivers for the supported Red Hat Enterprise guest operating
systems. These drivers enable high performance throughput of I/O operations in unmodified
Red Hat Enterprise Linux guest operating systems on top of a Red Hat Enterprise Linux 5.1 (or
greater) host.
The drivers are not supported on Red Hat Enterprise Linux guest operating systems prior to
Red Hat Enterprise Linux 3.
Using Red Hat Enterprise Linux 5 as the virtualization platform allows System Administrators to
consolidate Linux and Windows workloads onto newer, more powerful hardware with increased
power and cooling efficiency. Red Hat Enterprise Linux 4 (as of update 6) and Red Hat
Enterprise Linux 5 guest operating systems are aware of the underlying virtualization technology
and can interact with it efficiently using specific interfaces and capabilities. This approach can
achieve similar throughput and performance characteristics compared to running on the bare
metal system.
As this approach requires modifications to the guest operating system, not all operating systems
and use models can use para-virtualization. For operating systems which can not be modified,
the underlying virtualization infrastructure has to emulate the server hardware (CPU, memory,
and I/O devices for storage and network). Emulation of I/O devices can be very slow and is
especially troubling for high-throughput disk and network subsystems. The majority of the
performance loss occurs in this area.
The para-virtualized device drivers, which are part of the distributed RPM packages, bring many
of the performance advantages of para-virtualized guest operating systems to unmodified
operating systems because only the para-virtualized device driver (but not the rest of the
operating system) is aware of the underlying virtualization platform.
After installing the para-virtualized device drivers, a disk device or network card will continue to
appear as a normal, physical disk or network card to the operating system. However, now the
device driver interacts directly with the virtualization platform (with no emulation) to efficiently
deliver disk and network access, allowing the disk and network subsystems to operate at near
native speeds even in a virtualized environment, without requiring changes to existing guest
operating systems.
The para-virtualized drivers have certain host requirements. 64 bit hosts can run:
• 32 bit guests.
• 64 bit guests.
On 32 bit Red Hat Enterprise Linux hosts, the para-virtualized drivers only work with 32 bit guests.
1. System requirements
This section provides the requirements for para-virtualized drivers with Red Hat Enterprise
Linux.
Installation.
Before you install the para-virtualized drivers the following requirements must be met.
You will need the following RPM packages for the para-virtualized drivers for each guest
operating system installation.
Red Hat Enterprise Linux 3 requires:
• kmod-xenpv.
Red Hat Enterprise Linux 4 requires:
• kmod-xenpv,
• modules-init-tools (for versions prior to Red Hat Enterprise Linux 4.6z you require
modules-init-tools-3.1-0.pre5.3.4.el4_6.1 or greater), and
• modversions.
Red Hat Enterprise Linux 5 requires:
• kmod-xenpv.
You require at least 50MB of free disk space in the /lib file system.
Running a 32 bit guest operating system with para-virtualized drivers on 64 bit Red Hat
Enterprise Linux 5 Virtualization is supported.
The table below indicates the kernel variants supported with the para-virtualized drivers. You
can use the command shown below to identify the exact kernel revision currently installed on
your host. Compare the output against the table to determine if it is supported.
The Red Hat Enterprise Linux 5 i686 and x86_64 kernel variants include Symmetric
Multiprocessing (SMP); no separate SMP kernel RPM is required.
Take note of processor specific kernel requirements for Red Hat Enterprise Linux 3 guests in
the tables below.
Kernel Architecture   Red Hat Enterprise   Red Hat Enterprise   Red Hat Enterprise
                      Linux 3              Linux 4              Linux 5
athlon                Supported (AMD)
athlon-SMP            Supported (AMD)
i32e                  Supported (Intel)
i686                  Supported (Intel)    Supported            Supported
i686-PAE                                                        Supported
i686-SMP              Supported (Intel)    Supported
i686-HUGEMEM          Supported (Intel)    Supported
x86_64                Supported (AMD)      Supported            Supported
x86_64-SMP            Supported (AMD)      Supported
x86_64-LARGESMP                            Supported
Itanium (IA64)                                                  Supported
Note
The table above is for guest operating systems. AMD and Intel processors are
supported for the Red Hat Enterprise Linux 5.1 host.
Take note
Write down or remember the output of the command below. This is the value that determines
which packages and modules you need to download.
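The command itself is presumably an rpm query such as the following (an assumption
consistent with the example output below):
# rpm -q --queryformat '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' kernel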
kernel-PAE-2.6.18-53.1.4.el5.i686
Important Restrictions.
Para-virtualized device drivers can be installed after successfully installing a guest operating
system. You will need a functioning host and guest before you can install these drivers.
After installing the para-virtualized drivers on a guest operating system you should only use the
xm command or virsh to start the guests. If xm or virsh is not used the network interfaces (for
example, eth1 and so on) will not be connected correctly during boot. This is a known problem,
Bugzilla number 300531, and a bug fix is in progress. The bug connects the network interface
to qemu-dm and subsequently limits the performance dramatically.
The table below shows which host kernel is required to run a Red Hat Enterprise Linux 3 guest
compiled for an Intel processor.
The table below shows which host kernel is required to run a Red Hat Enterprise Linux 3 guest
compiled for an AMD processor.
Note
After installing the para-virtualized drivers on a guest operating system you
should only use the xm command or the virsh command to start the guests. If
xm or virsh are not used the network interfaces (for example, eth1) will not be
correctly connected during boot. This is a known problem, Bugzilla number
300531 (https://bugzilla.redhat.com/show_bug.cgi?id=300531). The bug connects
the network interface to qemu-dm and subsequently limits the performance
dramatically. A bug fix is presently in development and will be released via RHN.
Common installation steps
If you intend to use the para-virtualized block device drivers, you should create the guest with
at least two disks.
Specifically, use the first disk to install the MBR and the boot loader (GRUB), and to
contain the /boot partition. (This disk can be very small, as it only needs to have
enough capacity to hold the /boot partition.)
Use the second disk and any additional disks for all other partitions (for example /, /usr)
or logical volumes.
Using this installation method, when the para-virtualized block device drivers are
later installed after completing the install of the guest, only booting the guest and
accessing the /boot partition will use the emulated (virtualized IDE) block device drivers.
1. Copy the RPMs for your hardware architecture to a suitable location in your guest operating
system. Your home directory is sufficient. If you do not know which RPM you require verify
against the table at Section 2, “Para-virtualization Restrictions and Support”.
2. Use the rpm utility to install the RPM packages. The rpm utility will install the following four
new kernel modules into
/lib/modules/[%kversion][%kvariant]/extra/xenpv/%release:
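The four modules are likely the platform PCI, balloon, block and network driver modules
shipped in the kmod-xenpv package (an assumption):
• xen-platform-pci.ko
• xen-balloon.ko
• xen-vbd.ko
• xen-vnif.ko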
3. If the guest operating system does not support automatically loading the para-virtualized
drivers (for example Red Hat Enterprise Linux 3) perform the required post-install steps to
copy the drivers into the operating system specific locations.
5. Reconfigure the guest operating system configuration file on the host to use the installed
para-virtualized drivers.
7. Add any additional storage entities you want to use for the para-virtualized block device
driver.
8. Restart your guest using the “xm create YourGuestName” command where
YourGuestName is the name of the guest operating system.
Please note
These packages do not support booting from a para-virtualized disk. Booting the
guest operating system kernel still requires the use of the emulated IDE driver,
while any other (non-system) user-level application and data disks can use the
para-virtualized block device driver.
Driver Installation.
The list below covers the steps to install a Red Hat Enterprise Linux 3 guest with
para-virtualized drivers.
1. Copy the kmod-xenpv rpm corresponding to your hardware architecture and kernel variant to
your guest operating system.
2. Use the rpm utility to install the RPM packages. Make sure you have correctly identified which
package you need for your guest operating system variant and architecture.
3. You need to perform the commands below to enable the correct and automated loading of
the para-virtualized drivers. %kvariant is the kernel variant the para-virtualized drivers have
been built against and %release corresponds to the release version of the para-virtualized
drivers.
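One plausible approach (an assumption, not the exact commands from the original) is to copy
the installed modules into the running kernel's module tree and rebuild the module dependency
data so they can be loaded automatically:
cd /lib/modules/`uname -r`/
cp /lib/modules/[%kversion][%kvariant]/extra/xenpv/%release/xen* kernel/drivers/
depmod -ae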
Note
Warnings will be generated by insmod when installing the binary driver modules
due to Red Hat Enterprise Linux 3 having MODVERSIONS enabled. These
warnings can be ignored.
4. Verify /etc/modules.conf and make sure you have an alias for eth0 like the one below. If
you are planning to configure multiple interfaces add an additional line for each interface.
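For example (the same alias shown for /etc/modprobe.conf later in this chapter):
alias eth0 xen-vnif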
Note
Substitute “%release” with the actual release version (for example 0.1-5.el) for
the para-virtualized drivers. If you update the para-virtualized driver RPM
package make sure you update the release version to the appropriate version.
5. Shutdown the virtual machine (use “#shutdown -h now” inside the guest).
• Add any additional disk partitions, volumes or LUNs to the guest so that they can be
accessed via the para-virtualized (xen-vbd) disk driver.
• For each additional physical device, LUN, partition or volume add an entry similar to the
one below to the “disk=” section in the guest configuration file. The original “disk=” entry
might also look like the entry below.
disk = [ "file:/var/lib/xen/images/rhel3_64_fv.dsk,hda,w"]
• Once you have added the additional physical devices, LUNs, partitions or volumes to your
para-virtualized driver entry, the entry should resemble the one shown below.
disk = [ "file:/var/lib/xen/images/rhel3_64_fv.dsk,hda,w",
"tap:aio:/var/lib/xen/images/UserStorage.dsk,xvda,w" ]
Note
Use “tap:aio” for the para-virtualized device if a file based image is used.
# xm create YourGuestName
Note
You must use "xm create <virt-machine-name>" on Red Hat Enterprise Linux
5.1. The para-virtualized network driver (xen-vnif) will not be connected to eth0
properly if you are using Red Hat Enterprise Linux 5.1 and the virt-manager or
virsh interfaces. This issue is currently a known bug, BZ 300531.
Red Hat Enterprise Linux 5.2 does not have this bug and the virt-manager or
virsh interfaces will correctly load the para-virtualized drivers.
Be aware
The para-virtualized drivers are not automatically added and loaded to the
system because weak-modules and modversions support is not provided in Red
Hat Enterprise Linux 3. To insert the module execute the command below.
insmod xen-vbd.ko
Red Hat Enterprise Linux 3 requires the manual creation of the special files for the block
devices which use xen-vbd. The steps below will cover how to create and register
para-virtualized block devices.
Use the following script to create the special files after the para-virtualized block device driver is
loaded.
#!/bin/sh
module="xvd"
mode="664"
major=`awk "\\$2==\"$module\" {print \\$1}" /proc/devices`
# <mknod for as many or as few partitions on the xvd disk attached to the FV guest>
# change/add xvda to xvdb, xvdc, etc. for the 2nd, 3rd, etc., disk added in
# the xen config file, respectively.
mknod /dev/xvdb b $major 0
mknod /dev/xvdb1 b $major 1
mknod /dev/xvdb2 b $major 2
chgrp disk /dev/xvd*
chmod $mode /dev/xvd*
For each additional virtual disk, increment the minor number by 16. In the example below an
additional device, minor number 16, is created.
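A hedged example, following the numbering scheme of the script above:
# mknod /dev/xvdc b $major 16
# mknod /dev/xvdc1 b $major 17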
This would make the next device 32 which can be created by:
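For example:
# mknod /dev/xvdd b $major 32
# mknod /dev/xvdd1 b $major 33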
Now you should verify the partitions which you have created are available.
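The listing below is presumably the output of cat /proc/partitions (the command and column
header are shown later in this chapter):
# cat /proc/partitions
major minor  #blocks name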
3 0 10485760 hda
3 1 104391 hda1
3 2 10377990 hda2
202 0 64000 xvdb
202 1 32000 xvdb1
202 2 32000 xvdb2
253 0 8257536 dm-0
253 1 2031616 dm-1
In the above output, you can observe that the partitioned device “xvdb” is available to the
system.
The commands below mount the new block devices to local mount points and update the
/etc/fstab inside the guest to mount the devices and partitions during boot.
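The mount commands themselves are a hedged example, following the mount point used in
the df command below:
# mkdir /mnt/pvdisk_p1
# mkdir /mnt/pvdisk_p2
# mount /dev/xvdb1 /mnt/pvdisk_p1
# mount /dev/xvdb2 /mnt/pvdisk_p2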
[root@rhel3]# df /mnt/pvdisk_p1
Performance tip
Using a Red Hat Enterprise Linux 5.1 host (dom0), the "noapic" parameter
should be added to the kernel boot line in your virtual guest's
/boot/grub/grub.conf entry as seen below. Keep in mind your architecture
and kernel version may be different.
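A hypothetical grub.conf kernel line with the parameter appended (the kernel version and root
device shown are placeholders, not taken from the original):
kernel /vmlinuz-<version> ro root=/dev/VolGroup00/LogVol00 rhgb noapic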
A Red Hat Enterprise Linux 5.2 dom0 will not need this kernel parameter for the
guest.
Please note
The Itanium (ia64) binary RPM packages and builds are not presently available.
Please note
These packages do not support booting from a para-virtualized disk. Booting the
guest operating system kernel still requires the use of the emulated IDE driver,
while any other (non-system) user-level application and data disks can use the
para-virtualized block device driver.
Driver Installation.
The list below covers the steps to install a Red Hat Enterprise Linux 4 guest with
para-virtualized drivers.
2. Use the rpm utility to install the RPM packages. Make sure you have correctly identified which
package you need for your guest operating system variant and architecture. An updated
module-init-tools is required for this package; it is available with the Red Hat Enterprise
Linux 4-6-z kernel and beyond.
Note
There are different packages for the UP, SMP and Hugemem kernel variants and for different
architectures, so make sure you have the right RPMs for your kernel.
3. Execute cat /etc/modprobe.conf to verify you have an alias for eth0 like the one below. If
you are planning to configure multiple interfaces add an additional line for each interface. If it
does not look like the entry below, change it.
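For example (the same alias used for the other guest operating systems in this chapter):
alias eth0 xen-vnif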
4. Shutdown the virtual machine (use “#shutdown -h now” inside the guest).
• Add any additional disk partitions, volumes or LUNs to the guest so that they can be
accessed via the para-virtualized (xen-vbd) disk driver.
• For each additional physical device, LUN, partition or volume add an entry similar to the
one shown below to the “disk=” section in the guest configuration file. The original “disk=”
entry might also look like the entry below.
disk = [ "file:/var/lib/xen/images/rhel4_64_fv.dsk,hda,w"]
• Once you have added the additional physical devices, LUNs, partitions or volumes to your
para-virtualized driver entry, the entry should resemble the one shown below.
disk = [ "file:/var/lib/xen/images/rhel3_64_fv.dsk,hda,w",
"tap:aio:/var/lib/xen/images/UserStorage.dsk,xvda,w" ]
Note
Use “tap:aio” for the para-virtualized device if a file based image is used.
# xm create YourGuestName
Note
You must use "xm create <virt-machine-name>" on Red Hat Enterprise Linux
5.1. The para-virtualized network driver (xen-vnif) will not be connected to eth0
properly if you are using Red Hat Enterprise Linux 5.1 and the virt-manager or
virsh interfaces. This issue is currently a known bug, BZ 300531.
Red Hat Enterprise Linux 5.2 does not have this bug and the virt-manager or
virsh interfaces will correctly load the para-virtualized drivers.
On the first reboot of the virtual guest, kudzu will ask you to "Keep or Delete the Realtek
Network device" and "Configure the xen-bridge device". You should configure the xen-bridge
and delete the Realtek network device.
Performance tip
Using a Red Hat Enterprise Linux 5.1 host (dom0), the "noapic" parameter
should be added to the kernel boot line in your virtual guest's
/boot/grub/grub.conf entry, as shown in the earlier performance tip. Keep in
mind your architecture and kernel version may be different.
A Red Hat Enterprise Linux 5.2 dom0 will not need this kernel parameter for the
guest.
Now, verify the partitions which you have created are available.
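The command producing the listing below is presumably cat /proc/partitions, as shown later
in this chapter:
# cat /proc/partitions
major minor  #blocks name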
3 0 10485760 hda
3 1 104391 hda1
3 2 10377990 hda2
202 0 64000 xvdb
202 1 32000 xvdb1
202 2 32000 xvdb2
253 0 8257536 dm-0
253 1 2031616 dm-1
In the above output, you can see the partitioned device “xvdb” is available to the system.
The commands below mount the new block devices to local mount points and update the
/etc/fstab inside the guest to mount the devices and partitions during boot.
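A hedged example of the mount commands, matching the mount point shown in the df output
below:
# mkdir /mnt/pvdisk_p1
# mkdir /mnt/pvdisk_p2
# mount /dev/xvdb1 /mnt/pvdisk_p1
# mount /dev/xvdb2 /mnt/pvdisk_p2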
[root@rhel4]# df /mnt/pvdisk_p1
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvdb1 32000 15 31985 1% /mnt/pvdisk_p1
Note
This package is not supported for Red Hat Enterprise Linux 4-GA through Red
Hat Enterprise Linux 4 update 2 systems and kernels.
Also note...
IA64 binary RPM packages and builds are not presently available.
A handy tip
If the xen-vbd driver does not automatically load, issue the following command
from the guest's terminal. Substitute %release with the correct release version
for the para-virtualized drivers.
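A hedged guess at the command, based on the module location given earlier in this chapter
(the path is an assumption):
# insmod /lib/modules/`uname -r`/extra/xenpv/%release/xen-vbd.ko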
Please note
These packages do not support booting from a para-virtualized disk. Booting the
guest operating system kernel still requires the use of the emulated IDE driver,
while any other (non-system) user-level application and data disks can use the
para-virtualized block device driver.
Driver Installation.
The list below covers the steps to install a Red Hat Enterprise Linux 5 guest with
para-virtualized drivers.
1. Copy the kmod-xenpv RPM corresponding to your hardware architecture and kernel variant to
your guest operating system.
2. Use the rpm utility to install the RPM packages. Make sure you correctly identify which
package you need for your guest operating system variant and architecture.
3. Issue the command below to disable automatic hardware detection inside the guest operating
system:
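The command presumably disables the kudzu hardware detection service (an assumption):
# chkconfig kudzu off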
4. Execute cat /etc/modprobe.conf to verify you have an alias for eth0 like the one below. If
you are planning to configure multiple interfaces add an additional line for each interface. If it
does not look like the entry below, change it.
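For example:
alias eth0 xen-vnif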
5. Shutdown the virtual machine (use “#shutdown -h now” inside the guest).
6. Edit the guest configuration file in /etc/xen/<YourGuestName> in the following ways:
• Add any additional disk partitions, volumes or LUNs to the guest so that they can be
accessed via the para-virtualized (xen-vbd) disk driver.
• For each additional physical device, LUN, partition or volume add an entry similar to the
one shown below to the “disk=” section in the guest configuration file. The original “disk=”
entry might also look like the entry below.
disk = [ "file:/var/lib/xen/images/rhel4_64_fv.dsk,hda,w"]
• Once you have added the additional physical devices, LUNs, partitions or volumes to your
para-virtualized driver entry, the entry should resemble the one shown below.
disk = [ "file:/var/lib/xen/images/rhel3_64_fv.dsk,hda,w",
"tap:aio:/var/lib/xen/images/UserStorage.dsk,xvda,w" ]
Note
Use “tap:aio” for the para-virtualized device if a file based image is used.
# xm create YourGuestName
Note
You must use "xm create <virt-machine-name>" on Red Hat Enterprise Linux
5.1. The para-virtualized network driver (xen-vnif) will not be connected to eth0
properly if you are using Red Hat Enterprise Linux 5.1 and the virt-manager or
virsh interfaces. This issue is currently a known bug, BZ 300531.
Red Hat Enterprise Linux 5.2 does not have this bug and the virt-manager or
virsh interfaces will correctly load the para-virtualized drivers.
To verify the network interface has come up after installing the para-virtualized drivers issue the
following command on the guest. It should display the interface information including an
assigned IP address.
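The command is presumably ifconfig, as used later in this chapter:
# ifconfig eth0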
Now, verify the partitions which you have created are available.
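The output referred to below would mirror the Red Hat Enterprise Linux 4 example earlier in
this chapter, for instance:
# cat /proc/partitions
major minor  #blocks name
   3     0   10485760 hda
   3     1     104391 hda1
   3     2   10377990 hda2
 202     0      64000 xvdb
 202     1      32000 xvdb1
 202     2      32000 xvdb2
 253     0    8257536 dm-0
 253     1    2031616 dm-1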
In the above output, you can see the partitioned device “xvdb” is available to the system.
The commands below mount the new block devices to local mount points and update the
/etc/fstab inside the guest to mount the devices and partitions during boot.
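A hedged example of the commands (mirroring the Red Hat Enterprise Linux 4 example earlier
in this chapter):
# mkdir /mnt/pvdisk_p1
# mkdir /mnt/pvdisk_p2
# mount /dev/xvdb1 /mnt/pvdisk_p1
# mount /dev/xvdb2 /mnt/pvdisk_p2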
Performance tip
Using a Red Hat Enterprise Linux 5.1 host (dom0), the "noapic" parameter
should be added to the kernel boot line in your virtual guest's
/boot/grub/grub.conf entry, as shown in the earlier performance tip. Keep in
mind your architecture and kernel version may be different.
A Red Hat Enterprise Linux 5.2 dom0 will not need this kernel parameter for the
guest.
Perform the following steps to reconfigure the network interface inside the guest.
1. In virt-manager open the console window for the guest and log in as root.
2. On Red Hat Enterprise Linux 4 verify the file /etc/modprobe.conf contains the line “alias
eth0 xen-vnif”.
# cat /etc/modprobe.conf
alias eth0 xen-vnif
3. To display the present settings for eth0 execute “# ifconfig eth0”. If you receive an error
about the device not existing you should load the modules manually as outlined in Section 5,
“Manually loading the para-virtualized drivers”.
ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:00:00:6A:27:3A
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:630150 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:109336431 (104.2 MiB) TX bytes:846 (846.0 b)
5. Select the 'Xen Virtual Ethernet Card (eth0)' entry and click 'Forward'.
7. Press the 'Activate' button to apply the new settings and restart the network.
8. You should now see the new network interface with an IP address assigned.
ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:16:3E:49:E4:E0
inet addr:192.168.78.180 Bcast:192.168.79.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:630150 errors:0 dropped:0 overruns:0 frame:0
TX packets:501209 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:109336431 (104.2 MiB) TX bytes:46265452 (44.1 MiB)
vif = [ "mac=00:16:3e:2e:c5:a9,bridge=xenbr0" ]
Add an additional entry to the “vif=” section of the configuration file similar to the one seen
below.
vif = [ "mac=00:16:3e:2e:c5:a9,bridge=xenbr0",
"mac=00:16:3e:2f:d5:a9,bridge=xenbr0" ]
Make sure you generate a unique MAC address for the new interface. You can use the
command below.
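One way to generate a MAC address in the Xen vendor prefix (00:16:3e); this one-liner is an
illustration, not necessarily the script referenced by the original document:
# python -c 'import random; print "00:16:3e:%02x:%02x:%02x" % (random.randint(0,255), random.randint(0,255), random.randint(0,255))'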
After the guest has been rebooted perform the following step in the guest operating system.
Verify the update has been added to your /etc/modules.conf in Red Hat Enterprise Linux 3 or
/etc/modprobe.conf in Red Hat Enterprise Linux 4 and Red Hat Enterprise Linux 5. Add a
new alias for each new interface you added.
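For example, for a second interface (mirroring the eth0 alias shown earlier):
alias eth1 xen-vnif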
Now test each new interface you added to make sure it is available inside the guest.
# ifconfig eth1
The command above should display the properties of eth1, repeat the command for eth2 if you
added a third interface, and so on.
Now you can configure the new network interfaces using redhat-config-network on Red Hat
Enterprise Linux 3 or system-config-network on Red Hat Enterprise Linux 4 and Red Hat
Enterprise Linux 5.
disk = [ "file:/var/lib/xen/images/rhel5_64_fv.dsk,hda,w"]
Add an additional entry for your new physical device, LUN, partition or volume to the “disk=”
section of the configuration file. The storage entity uses the para-virtualized driver; the updated
entry would look like the following. Note the use of “tap:aio” for the para-virtualized device if a
file based image is used.
disk = [ "file:/var/lib/xen/images/rhel5_64_fv.dsk,hda,w",
"tap:aio:/var/lib/xen/images/UserStorage1.dsk,xvda,w" ]
If you want to add more entries just add them to the “disk=” section as a comma separated list.
Note
You need to increment the letter for the 'xvd' device, that is for your second
storage entity it would be 'xvdb' instead of 'xvda'.
disk = [ "file:/var/lib/xen/images/rhel5_64_fv.dsk,hda,w",
"tap:aio:/var/lib/xen/images/UserStorage1.dsk,xvda,w",
"tap:aio:/var/lib/xen/images/UserStorage2.dsk,xvdb,w" ]
# cat /proc/partitions
major minor #blocks name
3 0 10485760 hda
3 1 104391 hda1
3 2 10377990 hda2
202 0 64000 xvda
202 1 64000 xvdb
253 0 8257536 dm-0
253 1 2031616 dm-1
In the above output you can see the partition or device “xvdb” is available to the system.
Mount the new devices and disks to local mount points and update the /etc/fstab inside the
guest to mount the devices and partitions at boot time.
# mkdir /mnt/pvdisk_xvda
# mkdir /mnt/pvdisk_xvdb
# mount /dev/xvda /mnt/pvdisk_xvda
# mount /dev/xvdb /mnt/pvdisk_xvdb
# df /mnt/pvdisk_xvda /mnt/pvdisk_xvdb
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda 64000 15 63985 1% /mnt/pvdisk_xvda
/dev/xvdb 64000 15 63985 1% /mnt/pvdisk_xvdb
Part IV. Administration
Administering Red Hat Enterprise Linux Virtualization
These chapters contain information for administering host and guest systems using Red Hat
Virtualization tools and technologies.
Chapter 14.
chkconfig xendomains on
The chkconfig xendomains on command does not start domains immediately; instead it will
start the domains on the next boot.
The chkconfig xendomains off command does not terminate currently running domains; it
prevents the domains from being started automatically during the next boot.
Chapter 15. Managing guests with xend
The xend node control daemon performs system management functions that relate to virtual
machines. This daemon controls the virtualized resources, and xend must be running to interact
with virtual machines. Before you start xend, you must specify the operating parameters by
editing the xend configuration file xend-config.sxp which is located in the /etc/xen directory.
(Table: xend-config.sxp configuration items — Item and Description.)
After setting these operating parameters, you should verify that xend is running and if not,
initialize the daemon. At the command prompt, you can start the xend daemon by entering the
following:
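A hedged sketch, assuming the standard Red Hat Enterprise Linux 5 init scripts (the exact
commands shown here are an assumption):
service xend start
The daemon can likewise be stopped with:
service xend stop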
This stops the daemon from running.
Chapter 16. Managing CPUs
Red Hat Virtualization allows a domain's virtual CPUs to associate with one or more host CPUs.
This can be used to allocate real resources among one or more guests. This approach allows
Red Hat Virtualization to make optimal use of processor resources when employing dual-core,
hyperthreading, or other advanced CPU technologies. If you are running I/O intensive tasks, it is
typically better to dedicate either a hyperthread or an entire core to run domain0. The Red Hat
Virtualization credit scheduler automatically rebalances virtual CPUs between physical ones, to
maximize system use. The Red Hat Virtualization system allows the credit scheduler to move
virtual CPUs around as necessary, unless the virtual CPU is pinned to a physical CPU.
To view virtual CPUs using virsh refer to Displaying virtual CPU information for more information.
To set CPU affinities using virsh refer to Configuring virtual CPU affinity for more information.
To configure and view CPU information with virt-manager refer to Section 13, “Displaying virtual
CPUs” for more information.
Chapter 17. Virtualization live migration
• Live mode, using the --live option for the command xm migrate --live
VirtualMachineName HostName.
To enable the use of migration a few changes must be made to the configuration file
/etc/xen/xend-config.sxp. By default migration is disabled because of the potentially harmful
effects on the host's security. Opening the relocation port carries the potential for unauthorized
hosts and users to initiate migrations or connect to the relocation ports. As there is no specific
authentication for relocation requests, and the only control mechanism is based on hostnames
and IP addresses, special care should be taken to make sure the migration port and server are
not accessible to unauthorized hosts.
Enabling migration.
(xend-relocation-server yes)
The default value is no, which keeps the migration server deactivated. Do not enable the
relocation server unless you are using a trusted network, because the domain virtual memory
is exchanged in raw form without encryption of the communication.
(xend-relocation-port 8002)
The parameter, (xend-relocation-port), specifies the port xend should use for the
relocation interface, if xend-relocation-server is set to yes.
The default value of this variable should work for most installations. If you change the value
make sure you are using an unused port on the relocation server.
(xend-relocation-address '')
(xend-relocation-address) is the address on which xend listens for
relocation-socket connections, if xend-relocation-server is set.
The default is to listen on all active interfaces; the parameter can be used to restrict the
relocation server to only listening on a specific interface. The default value in
/etc/xen/xend-config.sxp is an empty string (''). This value should be replaced with a
valid list of addresses or regular expressions surrounded by single quotes.
(xend-relocation-hosts-allow '')
The (xend-relocation-hosts-allow) parameter is used to control which hosts are
allowed to talk to the relocation port.
If the value is empty, as denoted in the example above by an empty string surrounded by
single quotes, then all connections are allowed. This assumes the connection arrives on a
port and interface which the relocation server listens on (see also xend-relocation-port
and xend-relocation-address above).
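A hedged example restricting relocation to the local host (the regular expressions shown are
illustrative, not taken from the original):
(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')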
After you have configured the parameters in your configuration file you should reboot the host to
restart your environment with the new parameters.
A live migration example
The configuration below consists of two servers (et-virt07 and et-virt08). Both of them use
eth1 as their default network interface, hence they use xenbr1 as their Red Hat Virtualization
networking bridge. We are using a locally attached SCSI disk (/dev/sdb) on et-virt07 for
shared storage using NFS.
# mkdir /xentest
# mount /dev/sdb /xentest
Important
Ensure the directory is exported with the correct options. If you are exporting the
default directory /var/lib/xen/images/ make sure you only export
/var/lib/xen/images/ and not /var/lib/xen/ as this directory is used by the
xend daemon and other Xen components. Sharing /var/lib/xen/ will cause
unpredictable behavior.
# cat /etc/exports
/xentest *(rw,async,no_root_squash)
# showmount -e et-virt07
Export list for et-virt07:
/xentest *
Check the relocation parameters have been configured in the Xen config file on both hosts:
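For example:
# grep xend-relocation /etc/xen/xend-config.sxp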
Make sure the Xen relocation server has started and is listening on the dedicated port for Xen
migrations (8002):
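For example:
# lsof -i :8002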
Verify the NFS directory has been mounted on the other host and you can see and access the
virtual machine image and file system:
[et-virt07 ~]# xm li
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 1880 8 r----- 50.7
[et-virt07 ~]# xm li
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 983 8 r----- 58.2
xentesttravelvm01 1 1024 1 -b---- 9.2
[et-virt07 xentest]# xm li
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 975 8 r----- 110.7
[et-virt07 xentest]# xm li
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 975 8 r----- 118.5
xentesttravelvm01 3 1023 1 -b---- 0.0
Note
The local host's clock is set to a different time, 4 hours ahead of the remote host's
clock.
# while true
> do
> hostname ; date
> sleep 3
> done
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 06:50:16 EST 2007
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 06:50:19 EST 2007
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 06:50:22 EST 2007
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 06:50:25 EST 2007
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 02:22:24 EST 2007
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 02:22:27 EST 2007
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 02:22:30 EST 2007
[et-virt08 xen]# xm li
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 975 4 r----- 45.9
xentesttravelvm01 1 1023 1 -b---- 1.3
Initiate the live migration to et-virt08. In the example below et-virt08 is the hostname you
are migrating to and <domain-id> must be replaced with a guest domain available to the host
system.
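A hedged example of the migration command, using the standard xm migrate syntax:
[et-virt07 ~]# xm migrate --live <domain-id> et-virt08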
[et-virt07 xentest]# xm li
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 975 8 r----- 161.1
[et-virt08 ~]# xm li
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 975 4 r----- 46.3
xentesttravelvm01 1 1023 1 -b---- 1.6
#!/bin/bash
while true
do
touch /var/tmp/$$.log
echo `hostname` >> /var/tmp/$$.log
echo `date` >> /var/tmp/$$.log
cat /var/tmp/$$.log
df /var/tmp
ls -l /var/tmp/$$.log
sleep 3
done
Remember, that script is only for testing purposes and is unnecessary for a live migration in a
production environment.
Verify the virtual machine is running on et-virt08 before we try to migrate it to et-virt07:
[et-virt08 ~]# xm li
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 975 4 r----- 46.3
xentesttravelvm01 1 1023 1 -b---- 1.6
Initiate a live migration to et-virt07. You can add the time command to see how long the
migration takes:
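A sketch using the standard xm migrate syntax (the guest name is taken from the listings above):
[et-virt08 ~]# time xm migrate --live xentesttravelvm01 et-virt07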
# ./doit
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 02:26:27 EST 2007
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
2983664 2043120 786536 73% /
-rw-r--r-- 1 root root 62 Jan 12 02:26 /var/tmp/2279.log
dhcp78-218.lab.boston.redhat.com
[et-virt08 ~]# xm li
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 975 4 r----- 56.3
[et-virt07 xentest]# xm li
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 975 8 r----- 198.1
xentesttravelvm01 4 1023 1 -b---- 1.0
Run through another cycle migrating from et-virt07 to et-virt08. Initiate a migration from
et-virt07 to et-virt08:
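A hedged example of the command:
[et-virt07 ~]# xm migrate --live xentesttravelvm01 et-virt08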
[et-virt07 xentest]# xm li
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 975 8 r----- 221.7
Before initiating the migration start the simple script in the guest and note the change in time
when migrating the guest:
# ./doit
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 06:57:53 EST 2007
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
2983664 2043120 786536 73% /
-rw-r--r-- 1 root root 62 Jan 12 06:57 /var/tmp/2418.log
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 06:57:53 EST 2007
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 06:57:56 EST 2007
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
2983664 2043120 786536 73% /
-rw-r--r-- 1 root root 124 Jan 12 06:57 /var/tmp/2418.log
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 06:57:53 EST 2007
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 06:57:56 EST 2007
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 06:58:00 EST 2007
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
2983664 2043120 786536 73% /
-rw-r--r-- 1 root root 186 Jan 12 06:57 /var/tmp/2418.log
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 06:57:53 EST 2007
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 06:57:56 EST 2007
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 06:58:00 EST 2007
dhcp78-218.lab.boston.redhat.com
Fri Jan 12 02:30:00 EST 2007
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
After the migration command completes on et-virt07 verify on et-virt08 that the virtual
machine has started:
[et-virt08 ~]# xm li
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 975 4 r----- 67.3
xentesttravelvm01 2 1023 1 -b---- 0.4
At this point you have successfully performed an offline and a live migration test.
Chapter 18. Remote management of virtualized guests
SSH is usually configured by default, so you probably already have SSH keys set up and no
extra firewall rules are needed to access the management service or VNC console.
Be aware of the issues with using SSH for remotely managing your virtual machines, including:
• you require root login access to the remote machine for managing virtual machines,
• there is no standard or trivial way to revoke a user's key on all hosts or guests, and
• ssh does not scale well with larger numbers of remote machines.
1. You need a public key pair on the machine where you will run virt-manager. If ssh is
already configured you can skip this command.
$ ssh-keygen -t rsa
2. To permit remote log in, virt-manager needs a copy of the public key on each remote
machine running libvirt. Copy the file $HOME/.ssh/id_rsa.pub from the machine you
want to use for remote management using the scp command:
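For example (the destination file name matches the key file used in the next step; somehost is
the placeholder host name used below):
$ scp $HOME/.ssh/id_rsa.pub root@somehost:/root/key-dan.pub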
3. After the file has been copied, use ssh to connect to the remote machines as root and add the
file that you copied to the list of authorized keys. If the root user on the remote host does not
already have a list of authorized keys, make sure the file permissions are set correctly:
$ ssh root@somehost
# mkdir /root/.ssh
# chmod go-rwx /root/.ssh
# cat /root/key-dan.pub >> /root/.ssh/authorized_keys
# chmod go-rw /root/.ssh/authorized_keys
$ ssh root@somehost
# chkconfig libvirtd on
# service libvirtd start
After libvirtd and SSH are configured you should be able to remotely access and manage
your virtual machines. You should also be able to access your guests with VNC at this point.
Remote management over TLS and SSL
Using this method you will not need to give users shell accounts on the remote machines being
managed. However, extra firewall rules are needed to access the management service or VNC
console. Certificate revocation lists can be used to revoke access to users.
In the virt-manager user interface, use the 'SSL/TLS' transport mechanism option when
connecting to a host.
To enable SSL and TLS for VNC, it is necessary to put the certificate authority and client
certificates into $HOME/.pki, that is the following three files:
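The three files are presumably the CA certificate, client certificate and client key (the file
names below are an assumption, not taken from the original):
• ca-cert.pem — the CA certificate
• clientcert.pem — the client certificate signed by the CA
• clientkey.pem — the client private key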
Part V. Virtualization Reference Guide
Tools Reference Guide for Red Hat Enterprise Linux Virtualization
These chapters provide in depth description of the tools used by Red Hat Enterprise Linux
Virtualization. Users wanting to find information on advanced functionality should read these
chapters.
Chapter 19. Red Hat Virtualization tools
• xentop
• xm dmesg
• xm log
• vmstat
• iostat
• lsof
# lsof -i :5900
xen-vncfb 10635 root 5u IPv4 218738 TCP grumble.boston.redhat.com:5900
(LISTEN)
• XenOprofile
• systemTap
• crash
• xen-gdbserver
• sysrq
• sysrq t
• sysrq w
• sysrq c
Networking
brctl
# brctl show
bridge name bridge id STP enabled interfaces
xenbr0 8000.feffffffffff no vif13.0
pdummy0
vif0.0
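The detailed per-port listing that follows appears to come from a separate command,
presumably:
# brctl showstp xenbr0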
vif13.0 (3)
port id 8003 state
forwarding
designated root 8000.feffffffffff path cost 100
designated bridge 8000.feffffffffff message age timer 0.00
designated port 8003 forward delay timer 0.00
designated cost 0 hold timer 0.43
flags
pdummy0 (2)
port id 8002 state
forwarding
designated root 8000.feffffffffff path cost 100
designated bridge 8000.feffffffffff message age timer 0.00
designated port 8002 forward delay timer 0.00
designated cost 0 hold timer 0.43
flags
vif0.0 (1)
port id 8001 state
forwarding
designated root 8000.feffffffffff path cost 100
designated bridge 8000.feffffffffff message age timer 0.00
designated port 8001 forward delay timer 0.00
designated cost 0 hold timer 0.43
flags
• ifconfig
• tcpdump
Chapter 20. Managing guests with virsh
The virsh tool is built on the libvirt management API and operates as an alternative to the
xm tool and the graphical guest manager (virt-manager). Unprivileged users can employ this
utility for read-only operations. If you plan on running xend, you should enable xend to run
as a service. After modifying the respective configuration file, reboot the system, and xend will
run as a service. You can use virsh to load scripts for the guest machines.
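The connection command referred to in the next sentence is presumably:
virsh connect <name>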
Where <name> is the machine name of the hypervisor. If you want to initiate a read-only
connection, append the above command with --readonly.
Creating a guest.
Guests can be created from XML configuration files. You can copy existing XML from previously
created guests or use the dumpxml option (refer to Creating a virtual machine XML
dump (configuration file)). To create a guest with virsh from an XML file:
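Using the standard virsh syntax, the command is presumably (configuration_file.xml is a
placeholder file name):
virsh create configuration_file.xml
Creating a virtual machine XML dump (configuration file).
To output a guest's XML configuration with virsh (again, a sketch using the standard syntax):
virsh dumpxml [domain-id, domain-name or domain-uuid]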
This command outputs the domain information (in XML) to stdout. You save the data by piping
the output to a file.
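For example, saving the XML to the guest.xml file referenced below (GuestID is a placeholder
for the domain id or name):
virsh dumpxml GuestID > guest.xml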
The file guest.xml can then be used to recreate the guest (refer to Creating a guest). You can
edit this XML configuration file to configure additional devices or to deploy additional guests.
Refer to Section 1, “Using XML configuration files with virsh” for more information on modifying
files created with virsh dumpxml.
Suspending a guest.
To suspend a guest with virsh:
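The command is presumably (following the argument pattern used for shutdown later in this
chapter):
virsh suspend [domain-id, domain-name or domain-uuid]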
When a domain is in a suspended state, it still consumes system RAM. Disk and network I/O
will not occur while the guest is suspended. This operation is immediate and the guest must be
restarted with the resume option.
Resuming a guest.
To restore a suspended guest with virsh using the resume option:
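Presumably:
virsh resume [domain-id, domain-name or domain-uuid]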
This operation is immediate and the guest parameters are preserved for suspend and resume
operations.
Saving a guest.
To save the current state of a guest to a file using the virsh command:
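A sketch using the standard syntax:
virsh save [domain-id, domain-name or domain-uuid] [filename]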
This stops the guest you specify and saves the data to a file, which may take some time given
the amount of memory in use by your guest. You can restore the state of the guest with the
restore option.
Restoring a guest.
To restore a guest that you previously saved with the virsh save option using the virsh
command:
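A sketch using the standard syntax:
virsh restore [filename]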
This restarts the saved guest, which may take some time. The guest's name and UUID are
preserved but are allocated for a new id.
Shutting down a guest.
To shut down a guest using the virsh command:
virsh shutdown [domain-id, domain-name or domain-uuid]
You can control the behavior of the guest being shut down by modifying the on_shutdown
parameter of the xmdomain.cfg file.
Rebooting a guest.
To reboot a guest using virsh command:
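Presumably:
virsh reboot [domain-id, domain-name or domain-uuid]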
You can control the behavior of the rebooting guest by modifying the on_reboot parameter of
the xmdomain.cfg file.
Terminating a guest.
To terminate, destroy or delete guest use the virsh command with destroy:
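Presumably:
virsh destroy [domain-id, domain-name or domain-uuid]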
This command does an immediate ungraceful shutdown and stops any guest domain sessions
(which could potentially lead to corrupted file systems still in use by the guest). You should
use the destroy option only when the guest is non-responsive. For a para-virtualized guest,
you should use the shutdown option.
virsh nodeinfo
This displays the node information and the machines that support the virtualization process.
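The listing command described next is presumably virsh list, which accepts the options
discussed below:
virsh list [--inactive | --all]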
The --inactive option lists inactive domains (domains that have been defined but are not
currently active). The --all option lists all domains, whether active or not. Your output should
resemble this example:
 Id Name                 State
----------------------------------
  0 Domain-0             running
  1 Domain202            paused
  2 Domain010            inactive
  3 Domain9600           crashed
Using virsh list will output domains with the following states:
• The running state refers to domains which are currently active on a CPU.
• Domains listed as blocked or blocking are presently idle, waiting for I/O, or waiting for the
hypervisor or dom0.
• Domains in the shutoff state are off and not using system resources.
• crashed domains have failed while running and are no longer running.
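The commands being described next are presumably virsh vcpupin and virsh setvcpus (a
sketch using the standard syntax):
virsh vcpupin [domain-id] [vcpu] [cpulist]
virsh setvcpus [domain-id] [count]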
Where [vcpu] is the virtual CPU number and [cpulist] lists the physical CPUs to pin it to.
You cannot increase the virtual CPU count above the amount you specified when you created
the guest.
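The memory commands themselves are presumably virsh setmem and virsh setmaxmem (a
sketch using the standard syntax):
virsh setmem [domain-id] [count]
virsh setmaxmem [domain-id] [count]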
You must specify the [count] in kilobytes. The new count value cannot exceed the amount you
specified when you created the guest. Values lower than 64 MB are unlikely to work with most
guest operating systems. A higher maximum memory value will not affect an active guest
unless the new value is lower, which will shrink the available memory usage.
virsh net-list
</ip>
</network>
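A hedged, illustrative example of complete virsh net-dumpxml output for the default network
(the bridge name and addresses shown are typical defaults, not taken from the original):
# virsh net-dumpxml default
<network>
  <name>default</name>
  <forward dev='eth0'/>
  <bridge name='virbr0' />
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254' />
    </dhcp>
  </ip>
</network>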
• virsh net-create [XML file] — Generates and starts a new network using a preexisting
XML file
• virsh net-define [XML file] — Generates a new network from a preexisting XML file
without starting it
• virsh net-name [network UUID] — Convert a specified [network UUID] to a network name
• virsh net-uuid [network name] — Convert a specified [network name] to a network UUID
Chapter 21. Managing guests with Virtual Machine Manager (virt-manager)
Your local desktop can intercept key combinations (for example, Ctrl+Alt+F11) to prevent them
from being sent to the guest machine. You can use virt-manager's sticky key capability to send
these sequences. You must press any modifier key (Ctrl or Alt) 3 times and the key you specify
is treated as active until the next non-modifier key is pressed. Then you can send Ctrl-Alt-F11
to the guest by entering the key sequence 'Ctrl Ctrl Ctrl Alt+F11'.
6. Starting virt-manager
To start a virt-manager session, from the Applications menu, click System Tools and select
Virtual Machine Manager (virt-manager).
Alternatively, virt-manager can be started remotely using ssh as demonstrated in the following
command:
Using ssh to manage virtual machines and hosts is discussed further in Section 1, “Remote
management with ssh”.
• Summarize running domains with live performance and resource utilization statistics.
• Display graphs that show performance and resource utilization over time.
• Use the embedded VNC client viewer which presents a full graphical console to the guest
domain.
Before creating new guest virtual machines you should consider the following options. This list
is a summary of the installation process using the Virtual Machine Manager.
• Decide whether you will use full virtualization (required for non-Red Hat Enterprise Linux
guests; full virtualization provides more flexibility but less performance) or para-virtualization
(only for Red Hat Enterprise Linux 4 and 5 guests; provides performance close to
bare-metal).
• Para-virtualized guests require network based installation media. That is, your installation
media must be hosted on an NFS, FTP or HTTP server.
• Fully virtualized guests require ISO images, CD-ROMs or DVDs of the installation media.
• If you are creating a fully virtualized guest, identify the operating system type and variant.
• Decide the location and type (for example, a file or partition) of the storage for the virtual disk.
• Decide how much of your physical memory and how many CPU cores, or processors, you are
going to allocate to the guest. Be aware of the physical limitations of your system and the
system requirements of your virtual machines.
Note:
You must install Red Hat Enterprise Linux 5, virt-manager, and the kernel
packages on all systems that require virtualization. All systems then must be
booted and running the Red Hat Virtualization kernel.
1. you have not booted the correct kernel. Verify you are running the
kernel-xen kernel by running uname.
$ uname -r
2.6.18-53.1.14.el5xen
For other issues see the troubleshooting section, Part VII, “Troubleshooting”.
These are the steps required to install a guest operating system on Red Hat Enterprise Linux 5
using the Virtual Machine Manager:
1. From the Applications menu, select System Tools and then Virtual Machine Manager.
3. Click Forward.
4. Enter the name of the new virtual system. This name will be the name of the configuration
file for the virtual machine, the name of the virtual disk and the name displayed by
virt-manager's main screen.
Warning
Do not use the kernel-xen as the file name for a Red Hat Enterprise Linux 5
fully virtualized guest. Using this kernel on fully virtualized guests can cause your
system to hang.
5. Choose a virtualization method to use for your guest, either para-virtualization or full
virtualization.
6. Enter the location of your install media. The location of the kickstart file is optional. Then
click Forward .
Storage media
For installation media on an http server the address should resemble
"http://servername.example.com/pub/dist/rhel5" where the actual source on your
local host is /var/www/html/pub/dist/rhel5.
For more information on configuring these network services read the relevant
sections of your Red Hat Enterprise Linux Deployment Guide in the
System->Documentation menu.
7. For fully virtualized guests you must use an .iso file, CD-ROM or DVD.
8. Install either to a physical disk partition or install to a virtual file system within a file.
Note
This example installs a virtual system within a file.
The default SELinux policy only allows storage of virtualization disk images in the
/var/lib/xen/images folder.
restorecon -v /path/to/file
Choose the “Shared Physical Device” option to allow the guest access to the same
network as the host, making it accessible to other computers on the network.
Choose the “Virtual Network” option if you want your guest to be on a virtual network. You
can bridge a virtual network, making it accessible to external networked computers; read
Chapter 8, Configuring networks and guests for configuration instructions.
Note
The section, Section 17, “Creating a virtual network”, can guide you through the
process of creating and configuring network devices or the chapter on network
devices, Chapter 12, Virtualized network devices.
10. Select memory to allocate the guest and the number of virtual CPUs then click Forward.
Note
Avoid allocating more memory to all of your virtual machines than you have
physically available. Over allocating will cause the system to use the swap
partition excessively, causing unworkable performance levels.
11. Review your selections, then click Forward to open a console and begin the installation.
13. Type xm create -c xen-guest to start the Red Hat Enterprise Linux 5 guest. Right click
on the guest in the Virtual Machine Manager and choose Open to open a virtual console.
in the main window. Domain0 is your host system. If there are no machines present, this means
that currently there are no machines running on the system.
4. Click Open.
The saved virtual system appears in the Virtual Machine Manager main window.
1. In the Virtual Machine Manager main window, highlight the virtual machine that you want to
view.
2. From the Virtual Machine Manager Edit menu, select Machine Details (or click the Details
button on the bottom of the Virtual Machine Manager main window).
The Virtual Machine Details Overview window appears. This window summarizes CPU and
memory usage for the domain(s) you specified.
4. On the Hardware tab, click on Processor to view or change the current processor
allocation.
5. On the Hardware tab, click on Memory to view or change the current RAM
allocation.
6. On the Hardware tab, click on Disk to view or change the current hard disk configuration.
7. On the Hardware tab, click on Network to view or change the current network
configuration.
2. From the Status monitoring area selection box, specify the interval (in seconds) at which you
want the status display to update.
3. From the Consoles area, specify how to open a console and specify an input device.
2. The Virtual Machine Manager lists the Domain IDs for all domains on your system.
2. The Virtual Machine Manager lists the status of all virtual machines on your system.
1. From the View menu, select the Virtual CPUs check box.
2. The Virtual Machine Manager lists the Virtual CPUs for all virtual machines on your system.
1. From the View menu, select the CPU Usage check box.
2. The Virtual Machine Manager lists the percentage of CPU in use for all virtual machines on
your system.
1. From the View menu, select the Memory Usage check box.
2. The Virtual Machine Manager lists the memory in use (in megabytes and as a percentage of
total memory) for all virtual machines on your system.
2. This will open the Host Details menu. Click the Virtual Networks tab.
3. All available virtual networks are listed in the left-hand box of the menu. You can edit the
configuration of a virtual network by selecting it from this box and editing as you see fit.
1. Open the Host Details menu (refer to Section 16, “Managing a virtual network”) and click
the Add button.
This will open the Create a new virtual network menu. Click Forward to continue.
2. Enter an appropriate name for your virtual network and click Forward.
3. Enter an IPv4 address space for your virtual network and click Forward.
4. Define the DHCP range for your virtual network by specifying a Start and End range of IP
addresses. Click Forward to continue.
5. Select how the virtual network should connect to the physical network.
If you select Forwarding to physical network, choose whether the Destination should be
NAT to any physical device or NAT to physical device eth0.
6. You are now ready to create the network. Check the configuration of your network and click
Finish.
7. The new virtual network is now available in the Virtual Network tab of the Host Details
menu.
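If you prefer the command line, a virtual network can also be created with virsh, assuming your libvirt version provides the net-define and net-start subcommands. The network name, bridge name and address ranges below are examples only:
# cat > example-network.xml << 'EOF'
<network>
  <name>example-net</name>
  <bridge name="virbr1"/>
  <forward/>
  <ip address="192.168.101.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.101.2" end="192.168.101.254"/>
    </dhcp>
  </ip>
</network>
EOF
# virsh net-define example-network.xml
# virsh net-start example-net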
Chapter 22.
Commands for Red Hat Virtualization
command      description
help         print help
list         list domains
create       create a domain from an XML file
start        start a previously created inactive domain
destroy      destroy a domain
define       define (but do not start) a domain from an XML file
domid        convert a domain name or UUID to domain id
domuuid      convert a domain name or id to domain UUID
dominfo      domain information
domname      convert a domain id or UUID to domain name
domstate     domain state
quit         quit this interactive terminal
reboot       reboot a domain
restore      restore a domain from a saved state in a file
resume       resume a domain
save         save a domain state to a file
shutdown     gracefully shutdown a domain
suspend      suspend a domain
undefine     undefine an inactive domain
command      description
setmem       changes the allocated memory
setmaxmem    changes maximum memory limit
setvcpus     changes number of virtual CPUs
vcpuinfo     domain vcpu information
vcpupin      control the domain vcpu affinity

command      description
version      show version
dumpxml      domain information in XML
nodeinfo     node information
virsh # list
Id Name State
----------------------------------
0 Domain-0 running
13 r5b2-mySQL01 blocked
</os>
<memory>512000</memory>
<vcpu>1</vcpu>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<interface type='bridge'>
<source bridge='xenbr0'/>
<mac address='00:16:3e:49:1d:11'/>
<script path='vif-bridge'/>
</interface>
<graphics type='vnc' port='5900'/>
<console tty='/dev/pts/4'/>
</devices>
virsh # version
Compiled against library: libvir 0.1.7
Using library: libvir 0.1.7
Using API: Xen 3.0.1
Running hypervisor: Xen 3.0.0
$ xm list
Name              ID   Mem(MiB)  VCPUs  State    Time(s)
Domain-0           0        520      2  r-----    1275.5
r5b2-mySQL01      13        500      1  -b----      16.1
• xm create [-c] DomainName/ID: start a virtual machine. If the -c option is used, the start up
process will attach to the guest's console.
• xm reboot DomainName/ID: reboot a virtual machine, runs through the normal system shut
down and start up process.
• xm shutdown DomainName/ID: shut down a virtual machine, runs a normal system shut down
procedure.
• xm pause DomainName/ID: pause a virtual machine.
• xm unpause DomainName/ID: unpause a paused virtual machine.
• xm save DomainName/ID FileName: save the state of a virtual machine to a file.
• xm restore FileName: restore a virtual machine from a previously saved state file.
• xm migrate DomainName/ID HostName: migrate a virtual machine to another host.
• xm mem-set DomainName/ID Count: set the memory allocation (in megabytes) of a virtual machine.
$ xm vcpu-list
Name ID VCPUs CPU State Time(s) CPU Affinity
Domain-0 0 0 0 r-- 708.9 any cpu
Domain-0 0 1 1 -b- 572.1 any cpu
r5b2-mySQL01 13 0 1 -b- 16.1 any cpu
• xm vcpu-pin DomainName/ID VCPU CPUs: pin virtual CPUs to physical CPUs.
• xm vcpu-set DomainName/ID Count: set the number of virtual CPUs for a domain.
• use the xm sched-credit command to display scheduler parameters for a given domain:
$ xm sched-credit -d 0
{'cap': 0, 'weight': 256}
$ xm sched-credit -d 13
{'cap': 25, 'weight': 256}
• xm top: display real-time information about the host system and its domains.
• xm dmesg: read the message buffer of the Xen hypervisor.
• xm info: display information about the host system.
• xm log: display the xend log.
$ xm uptime
Name ID Uptime
Domain-0 0 3:42:18
r5b2-mySQL01 13 0:06:27
• xm sysrq DomainName/ID key: send a system request (sysrq) key to a domain.
• xm dump-core DomainName/ID: dump the core of a domain.
• xm rename OldDomainName NewDomainName: rename a domain.
• xm domid DomainName: convert a domain name to a domain id.
• xm domname DomainID: convert a domain id to a domain name.
Chapter 23.
Configuring GRUB
The GNU Grand Unified Boot Loader (GRUB) is a program which enables the user to select
which installed operating system or kernel to load at system boot time. It also allows the user to
pass arguments to the kernel. The GRUB configuration file (located at /boot/grub/grub.conf)
is used to create a list of operating systems to boot in GRUB's menu interface. When you install
the kernel-xen RPM, a post-install script adds kernel-xen entries to the GRUB configuration file.
You can edit the grub.conf file and enable the following GRUB parameters.
If you set your Linux grub entries to reflect this example, the boot loader loads the hypervisor,
initrd image, and Linux kernel. Since the kernel entry is on top of the other entries, the kernel
loads into memory first. The boot loader passes command line arguments to both the
hypervisor and the Linux kernel. The following example entry shows how you would restrict the
Dom0 Linux kernel memory to 800 MB.
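A stanza along the following lines illustrates this; the kernel and initrd versions are taken from the examples in Chapter 26 and are illustrative only, and the dom0_mem=800M argument on the xen line is what restricts the Dom0 kernel memory:
title Red Hat Enterprise Linux Server (2.6.17-1.2519.4.21.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.17-1.2519.4.21.el5 dom0_mem=800M
        module /vmlinuz-2.6.17-1.2519.4.21.el5xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.17-1.2519.4.21.el5xen.img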
You can use these GRUB parameters to configure the Virtualization hypervisor:
mem
This limits the amount of memory that is available to the hypervisor kernel.
com1=115200,8n1
This enables the first serial port in the system to act as the serial console (com2 is assigned to
the next port, and so on).
dom0_mem
This sets the amount of memory allocated to dom0 at boot time (for example, dom0_mem=256M).
dom0_max_vcpus
This limits the number of virtual CPUs available to dom0.
acpi
This controls how ACPI tables are passed to the hypervisor and domain0. The ACPI parameter options
include:
noacpi
Disables ACPI for interrupt delivery.
Chapter 24.
Configuring ELILO
ELILO is the boot loader used on EFI-based systems, notably Itanium®. Similar to GRUB,
the boot loader on x86 and x86-64 systems, ELILO allows the user to select which installed
kernel to load during the system boot sequence. It also allows the user to pass arguments to the
kernel. The ELILO configuration file, which is located in the EFI boot partition and symbolically
linked to /etc/elilo.conf, contains a list of global options and image stanzas. When you
install the kernel-xen RPM, a post install script adds the appropriate image stanza to the
elilo.conf.
• Global options that affect the behavior of ELILO and all the entries. Typically there's no need
to change these from the default values.
• Image stanzas that define a boot selection along with associated options.
image=vmlinuz-2.6.18-92.el5xen
vmm=xen.gz-2.6.18-92.el5
label=linux
initrd=initrd-2.6.18-92.el5xen.img
read-only
root=/dev/VolGroup00/rhel5_2
append="-- rhgb quiet"
The image parameter indicates that the following lines apply to a single boot selection. This stanza
defines a hypervisor (vmm), initrd, and command line arguments (read-only, root and
append) to the hypervisor and kernel. When ELILO is loaded during the boot sequence, this
stanza is labeled linux.
ELILO translates read-only to the kernel command line option ro, which causes the root file
system to be mounted read-only until the initscripts mount the root drive as read-write. ELILO
copies the "root" line to the kernel command line. These are merged with the "append" line to
build a complete command line.
The -- is used to delimit hypervisor and kernel arguments. The hypervisor arguments come
first, then the -- delimiter, followed by the kernel arguments. The hypervisor does not usually
have any arguments.
Technical note
ELILO passes the entire command line to the hypervisor. The hypervisor divides
the content and passes the kernel options to the kernel.
To customize the hypervisor, insert parameters before the --. An example of the hypervisor
memory(mem) parameter and the quiet parameter for the kernel:
append="dom0_mem=2G -- quiet"
A modified example of the configuration above, showing syntax for appending memory and cpu
allocation parameters to the hypervisor:
image=vmlinuz-2.6.18-92.el5xen
vmm=xen.gz-2.6.18-92.el5
label=linux
initrd=initrd-2.6.18-92.el5xen.img
read-only
root=/dev/VolGroup00/rhel5_2
append="dom0_mem=2G dom0_max_vcpus=2 --"
Additionally, this example removes the kernel parameters "rhgb quiet" so that kernel and
initscript output is generated on the console. Note that the double-dash remains so that the
append line is correctly interpreted as hypervisor arguments.
Chapter 25.
Configuration files
Red Hat Virtualization configuration files contain the following standard variables. Configuration
items within these files must be enclosed in single quotes('). These configuration files reside in
the /etc/xen directory.
The table below, Table 25.2, “Red Hat Virtualization configuration files reference”, is formatted
output from xm create --help_config.
Parameter Description
vncpasswd=NAME Password for VNC console on HVM domain.
vncviewer=no | yes Spawn a vncviewer listening for a vnc server
in the domain. The address of the vncviewer
is passed to the domain on the kernel
command line using
VNC_SERVER=<host>:<port>. The port used
by vnc is 5500 + DISPLAY. A display value
with a free port is chosen if possible. Only
valid when vnc=1.
vncconsole=no | yes Spawn a vncviewer process for the domain's
graphical console. Only valid when vnc=1.
name=NAME Domain name. Must be unique.
bootloader=FILE Path to bootloader.
bootargs=NAME Arguments to pass to boot loader
bootentry=NAME DEPRECATED. Entry to boot via boot loader.
Use bootargs.
kernel=FILE Path to kernel image.
ramdisk=FILE Path to ramdisk.
features=FEATURES Features to enable in guest kernel
builder=FUNCTION Function to use to build the domain.
memory=MEMORY Domain memory in MB.
maxmem=MEMORY Maximum domain memory in MB.
shadow_memory=MEMORY Domain shadow memory in MB.
cpu=CPU CPU to run the VCPU0 on.
cpus=CPUS CPUS to run the domain on.
pae=PAE Disable or enable PAE of HVM domain.
acpi=ACPI Disable or enable ACPI of HVM domain.
apic=APIC Disable or enable APIC of HVM domain.
vcpus=VCPUS # of Virtual CPUS in domain.
cpu_weight=WEIGHT Set the new domain's cpu weight. WEIGHT is a
float that controls the domain's share of the
cpu.
restart=onreboot | always | never Deprecated. Use on_poweroff, on_reboot,
and on_crash instead. Whether the domain
should be restarted on exit. - onreboot: restart
on exit with shutdown code reboot - always:
always restart on exit, ignore exit code -
never: never restart on exit, ignore exit code
on_poweroff=destroy | restart | preserve | rename-restart
    Behavior when a domain exits with reason 'poweroff'. - destroy: the domain is cleaned up
    as normal; - restart: a new domain is started in place of the old one; - preserve: no
    clean-up is done until the domain is manually destroyed (using xm destroy, for example); -
    rename-restart: the old domain is not cleaned up, but is renamed and a new domain started
    in its place.
on_reboot=destroy | restart | preserve | rename-restart
    Behavior when a domain exits with reason 'reboot'. - destroy: the domain is cleaned up
    as normal; - restart: a new domain is started in place of the old one; - preserve: no
    clean-up is done until the domain is manually destroyed (using xm destroy, for example); -
    rename-restart: the old domain is not cleaned up, but is renamed and a new domain started
    in its place.
on_crash=destroy | restart | preserve | rename-restart
    Behavior when a domain exits with reason 'crash'. - destroy: the domain is cleaned up as
    normal; - restart: a new domain is started in place of the old one; - preserve: no clean-up
    is done until the domain is manually destroyed (using xm destroy, for example); -
    rename-restart: the old domain is not cleaned up, but is renamed and a new domain started
    in its place.
blkif=no | yes Make the domain a block device backend.
netif=no | yes Make the domain a network interface
backend.
tpmif=no | yes Make the domain a TPM interface backend.
disk=phy:DEV,VDEV,MODE[,DOM] Add a disk device to a domain. The physical
device is DEV, which is exported to the domain
as VDEV. The disk is read-only if MODE is r,
read-write if MODE is w. If DOM is specified it
defines the backend driver domain to use for
the disk. The option may be repeated to add
more than one disk.
pci=BUS:DEV.FUNC Add a PCI device to a domain, using given
params (in hex). For example pci=c0:02.1a.
The option may be repeated to add more than
one pci device.
ioports=FROM[-TO] Add a legacy I/O range to a domain, using
given params (in hex). For example
ioports=02f8-02ff. The option may be
repeated to add more than one i/o range.
irq=IRQ Add an IRQ (interrupt line) to a domain. For
example irq=7. This option may be repeated
to add more than one IRQ.
usbport=PATH Add a physical USB port to a domain, as
specified by the path to that port. This option
may be repeated to add more than one port.
vfb=type={vnc,sdl}, vncunused=1, vncdisplay=N,
vnclisten=ADDR, display=DISPLAY,
xauthority=XAUTHORITY, vncpasswd=PASSWORD,
keymap=KEYMAP
    Make the domain a framebuffer backend. The backend type should be either sdl or vnc.
    For type=vnc, connect an external vncviewer. The server will listen on ADDR (default
    127.0.0.1) on port N+5900. N defaults to the domain id. If vncunused=1, the server will try
    to find an arbitrary unused port above 5900. For type=sdl, a viewer will be started
    automatically using the given DISPLAY and XAUTHORITY, which default to the current
    user's ones.
vif=type=TYPE, mac=MAC, bridge=BRIDGE,
ip=IPADDR, script=SCRIPT, backend=DOM,
vifname=NAME
    Add a network interface with the given MAC address and bridge. The vif is configured by
    calling the given configuration script. If type is not specified, the default is netfront, not the
    ioemu device. If mac is not specified, a random MAC address is used (if not specified, the
    network backend chooses its own MAC address). If bridge is not specified, the first bridge
    found is used. If script is not specified, the default script is used. If backend is not
    specified, the default backend driver domain is used. If vifname is not specified, the
    backend virtual interface will have the name vifD.N, where D is the domain id and N is the
    interface id. This option may be repeated to add more than one vif. Specifying vifs will
    increase the number of interfaces as needed.
vtpm=instance=INSTANCE,backend=DOM Add a TPM interface. On the backend side
use the given instance as virtual TPM
instance. The given number is merely the
preferred instance number. The hotplug script
will determine which instance number will
actually be assigned to the domain. The
association between virtual machine and the
TPM instance number can be found in
/etc/xen/vtpm.db. Use the backend in the
given domain.
access_control=policy=POLICY,label=LABEL Add a security label and the security policy
reference that defines it. The local ssid
reference is calculated when
starting/resuming the domain. At this time, the
policy is checked against the active policy as
well. This way, migrating through save/restore
is covered and local labels are automatically
created correctly on the system where a
domain is started / resumed.
nics=NUM DEPRECATED. Use empty vif entries
instead. Set the number of network interfaces.
Use the vif option to define interface
parameters, otherwise defaults are used.
Specifying vifs will increase the number of
interfaces as needed.
root=DEVICE Set the root= parameter on the kernel
command line. Use a device, e.g. /dev/sda1,
or /dev/nfs for NFS root.
extra=ARGS Set extra arguments to append to the kernel
command line.
ip=IPADDR Set the kernel IP interface address.
gateway=IPADDR Set the kernel IP gateway.
netmask=MASK Set the kernel IP netmask.
hostname=NAME Set the kernel IP hostname.
interface=INTF Set the kernel IP interface name.
dhcp=off|dhcp Set the kernel dhcp option.
nfs_server=IPADDR Set the address of the NFS server for NFS
root.
nfs_root=PATH Set the path of the root NFS directory.
device_model=FILE Path to device model program.
fda=FILE Path to fda
fdb=FILE Path to fdb
serial=FILE Path to serial or pty or vc
localtime=no | yes Is RTC set to localtime?
keymap=FILE Set keyboard layout used
usb=no | yes Emulate USB devices?
usbdevice=NAME Name of USB device to add?
stdvga=no | yes Use standard VGA or Cirrus Logic graphics
isa=no | yes Simulate an ISA only system?
boot=a|b|c|d Default boot device
nographic=no | yes Should device models use graphics?
soundhw=audiodev Should device models enable audio device?
vnc Should the device model use VNC?
vncdisplay VNC display to use
vnclisten Address for VNC server to listen on.
vncunused Try to find an unused port for the VNC server.
Only valid when vnc=1.
sdl Should the device model use SDL?
display=DISPLAY X11 display to use
xauthority=XAUTHORITY X11 Authority to use
uuid xenstore UUID (universally unique identifier)
to use. One will be randomly generated if this
option is not set, just like MAC addresses for
virtual network interfaces. This must be a
unique value across the entire cluster.
Table 25.4, “Configuration parameter default values” lists all available configuration parameters,
the Python parser function used to set each value, and each parameter's default value. The setter
function gives an idea of what the parser does with the values you specify: it reads them as
Python values, then feeds them to a setter function to store them. If the value is not valid
Python, you get an obscure error message. If the setter rejects your value, you should get a
reasonable error message, although it may be lost along with your invalid setting. If the setter
accepts a value that makes no sense, the program proceeds and can be expected to fail later.
Parser function Valid arguments
set_bool
Accepts a boolean value. Valid arguments are:
• yes
• y
• no
• n
set_float
Accepts a floating point number with Python's
float(). For example:
• 3.14
• 10.
• .001
• 1e100
• 3.14e-10
set_int
Accepts an integer with Python's int().
set_value
accepts any Python value.
append_value
accepts any Python value, and appends it to
the previous value which is stored in an array.
Parameter Parser function Default value
shadow_memory set_int 0
pae set_int 0
acpi set_int 0
apic set_int 0
vcpus set_int 1
blkif set_bool 0
netif set_bool 0
tpmif append_value 0
disk append_value []
pci append_value []
ioports append_value []
irq append_value []
usbport append_value []
vfb append_value []
vif append_value []
vtpm append_value []
access_control append_value []
nics set_int -1
ip set_value ''
hostname set_value ''
localtime set_bool 0
usb set_bool 0
stdvga set_bool 0
isa set_bool 0
nographic set_bool 0
vncunused set_bool 1
Part VI. Tips and Tricks
Tips and Tricks to Enhance Productivity
These chapters contain useful hints and tips to improve Red Hat Virtualization.
Chapter 26.
Tips and tricks
The example below shows how you can configure a symbolic link for a guest named
example so that it boots automatically during the system boot.
# cd /etc/xen/auto
# ls
# ln -s /var/lib/xen/images/example .
# ls -l
lrwxrwxrwx 1 root root 27 Dec 14 10:02 example -> /var/lib/xen/images/example
2. Modifying /etc/grub.conf
This section describes how to safely and correctly change your /etc/grub.conf file to use the
virtualization kernel. You must use the virtualization kernel for domain0 in order to successfully
run the hypervisor. When you copy your existing virtualized kernel entry, make sure you copy all of the
important lines, or your system will panic upon boot (the initrd will have a length of '0'). If you need
to specify hypervisor-specific values, you must add them to the xen line of your grub entry.
The output below is an example of a grub.conf entry from a Red Hat Virtualization system. The
grub.conf on your system may vary. The important part is the section from the title line to the
next new line; a complete entry including the title stanza is shown in Section 9, “Modifying dom0”.
#boot=/dev/sda
default=0
timeout=15
#splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=10 serial console
To set the amount of memory assigned to your host system at boot time to 256MB, append
dom0_mem=256M to the xen line in your grub.conf. The following is a modified version of the grub
configuration file from the previous example:
#boot=/dev/sda
default=0
timeout=15
#splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=10 serial console
name = "rhel5b2vm01"
memory = "2048"
disk = [ 'tap:aio:/var/lib/xen/images/rhel5b2vm01.dsk,xvda,w', ]
vif = [ 'mac=00:16:3e:33:79:3c, bridge=xenbr0', ]
vnc=1
vncunused=1
uuid = "302bd9ce-4f60-fc67-9e40-7a77d9b4e1ed"
bootloader="/usr/bin/pygrub"
vcpus=2
on_reboot = 'restart'
on_crash = 'restart'
name = "rhel4u4-x86_64"
builder = "hvm"
memory = "500"
disk = [ 'file:/var/lib/xen/images/rhel4u4-x86_64.dsk,hda,w', ]
vif = [ 'type=ioemu, mac=00:16:3e:09:f0:12, bridge=xenbr0', 'type=ioemu,
mac=00:16:3e:09:f0:13, bridge=xenbr1' ]
uuid = "b10372f9-91d7-a05f-12ff-372100c99af5"
device_model = "/usr/lib64/xen/bin/qemu-dm"
kernel = "/usr/lib/xen/boot/hvmloader"
vnc=1
vncunused=1
apic=1
acpi=1
pae=1
vcpus=1
serial = "pty" # enable serial console
on_reboot = 'restart'
name
The name of your guest as it is known to the hypervisor and displayed in the management
utilities. This entry should be unique on your system.
uuid
A unique handle for the guest, a new UUID can be regenerated using the uuidgen
command. A sample UUID output:
$ uuidgen
a984a14f-4191-4d14-868e-329906b211e5
vif
• You must define a unique MAC address for each guest. This is done
automatically if the standard tools are used. If you are copying a guest configuration
from an existing guest, you can use the script from Section 6, “Generating a new unique MAC
address”.
• If you are moving or duplicating an existing guest configuration file to a new host, you
have to make sure you adjust the xenbr entry to correspond with your local networking
configuration (you can obtain the Red Hat Virtualization bridge information using the
brctl show command).
• For device entries, make sure you adjust the entries in the disk= section to point to the
correct guest image.
/etc/sysconfig/network
Modify the HOSTNAME entry to the guest's new hostname.
/etc/sysconfig/network-scripts/ifcfg-eth0
Verify that the HWADDR entry matches the new guest's MAC address (the output of ifconfig
eth0) and, if a static address is used, update the IPADDR entry.
/etc/selinux/config
Change the SELinux enforcement policy from Enforcing to Disabled. Use the GUI tool
system-config-securitylevel, edit /etc/selinux/config, or run the following command
(which sets permissive mode for the running system only):
# setenforce 0
#!/bin/bash
declare -i IS_HVM=0
declare -i IS_PARA=0
check_hvm()
{
IS_X86HVM="$(strings /proc/acpi/dsdt | grep int-xen)"
if [ x"${IS_X86HVM}" != x ]; then
echo "Guest type is full-virt x86hvm"
IS_HVM=1
fi
}
check_para()
{
if grep -q control_d /proc/xen/capabilities; then
echo "Host is dom0"
IS_PARA=1
else
$ ./macgen.py
00:16:3e:20:b0:11
#!/usr/bin/python
# macgen.py script to generate a MAC address for Red Hat Virtualization
guests
#
import random
#
def randomMAC():
mac = [ 0x00, 0x16, 0x3e,
random.randint(0x00, 0x7f),
random.randint(0x00, 0xff),
random.randint(0x00, 0xff) ]
return ':'.join(map(lambda x: "%02x" % x, mac))
#
print randomMAC()
# echo 'import virtinst.util ; print virtinst.util.uuidToString(virtinst.util.randomUUID())' | python
# echo 'import virtinst.util ; print virtinst.util.randomMAC()' | python
The commands above can also be implemented as a script file, as seen below.
#!/usr/bin/env python
# -*- mode: python; -*-
print ""
print "New UUID:"
import virtinst.util ; print
virtinst.util.uuidToString(virtinst.util.randomUUID())
print "New MAC:"
import virtinst.util ; print virtinst.util.randomMAC()
print ""
rate
The rate= option can be added to the vif= entry in a virtual machine configuration file to
limit a virtual machine's network bandwidth, optionally with a time window that controls the
granularity of credit replenishment.
time window
The time window is optional to the rate= option.
A smaller time window provides less burst transmission, but the replenishment rate
and latency increase.
The default 50ms time window is a good balance between latency and throughput and in
most cases does not require changing.
rate=10Mb/s
Limit the outgoing network traffic from the guest to 10Mb/s.
rate=250KB/s
Limit the outgoing network traffic from the guest to 250KB/s.
rate=10MB/s@50ms
Limit bandwidth to 10MB/s and provide the guest with a 500KB chunk every 50ms.
In the virtual machine configuration, a sample vif entry with a rate setting would look like the following:
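The MAC address and bridge name in this sketch are illustrative; substitute the values from your own configuration:
vif = [ 'rate=10MB/s, mac=00:16:3e:7a:12:34, bridge=xenbr1' ]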
This rate entry would limit the virtual machine's interface to 10MB/s for outgoing traffic.
# cd /etc/xen
# cd auto
# ls
# ln -s ../rhel5vm01 .
# ls -l
lrwxrwxrwx 1 root root 14 Dec 14 10:02 rhel5vm01 -> ../rhel5vm01
#
9. Modifying dom0
To use Red Hat Virtualization to manage domain0, you will constantly be making changes to the
grub.conf configuration file (located at /boot/grub/grub.conf and symbolically linked from
/etc/grub.conf). Because of the large number of domains to manage, many system administrators
prefer to use the 'cut and paste' method when editing grub.conf. If you do this, make sure that you
include all five lines in the Virtualization entry (or this will create system errors). If you require Xen
hypervisor-specific values, you must add them to the 'xen' line. This example represents a correct
grub.conf Virtualization entry:
# boot=/dev/sda
default=0
timeout=15
#splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=10 serial console
For example, if you need to change your dom0 hypervisor's memory to 256MB at boot time, you
must edit the 'xen' line and append it with the correct entry, 'dom0_mem=256M' . This example
represents the respective grub.conf xen entry:
# boot=/dev/sda
default=0
timeout=15
#splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=10 serial console
title Red Hat Enterprise Linux Server (2.6.17-1.2519.4.21.el5xen)
root (hd0,0)
kernel /xen.gz-2.6.17-1.2519.4.21.el5 com1=115200,8n1 dom0_mem=256MB
module /vmlinuz-2.6.17-1.2519.4.21.el5xen ro
root=/dev/VolGroup00/LogVol00
module /initrd-2.6.17-1.2519.4.21.el5xen.img
(xend-relocation-server yes)
The default for this parameter is 'no', which keeps the relocation/migration server
deactivated. Only enable it on a trusted network, because the domain virtual memory is
exchanged in raw form without encryption.
(xend-relocation-port 8002)
This parameter sets the port that xend uses for migration. The default value is suitable for
most configurations; just make sure to remove the comment that comes before it.
(xend-relocation-address )
This parameter is the address on which xend listens for relocation socket connections after
you enable xend-relocation-server. Setting it restricts migration to a particular interface.
(xend-relocation-hosts-allow )
This parameter controls the hosts that are allowed to communicate with the relocation port. If the
value is empty, all incoming connections are allowed. Otherwise, change it to a
space-separated sequence of regular expressions (such as '^localhost\\.localdomain$').
A host with a fully qualified domain name or IP address that matches one of these expressions
is accepted.
After you configure these parameters, reboot the host for Red Hat Virtualization to
accept your new parameters.
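Taken together, the relevant lines in /etc/xen/xend-config.sxp would resemble the following sketch (the host patterns are examples); a live migration can then be started with the xm migrate command, where the destination host name is also an example:
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')

# xm migrate --live rhel5vm01 destination.example.com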
1. To configure vsftpd, edit /etc/passwd using vipw and change the ftp user's home directory
to the directory where you are going to keep the installation trees for your para-virtualized
guests. An example entry for the FTP user would look like the following:
ftp:x:14:50:FTP User:/xen/pub:/sbin/nologin
2. To have vsftpd start automatically during system boot, use the chkconfig utility to enable the
automatic start up of vsftpd.
3. Verify that vsftpd is not already enabled using the chkconfig --list vsftpd command.
4. Run chkconfig --level 345 vsftpd on to start vsftpd automatically for run levels 3, 4
and 5.
5. Use the chkconfig --list vsftpd command to verify that vsftpd has been enabled to start
during system boot.
6. Use service vsftpd start to start the vsftpd service.
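A typical session for steps 3 to 6 would resemble the following; the run level states and exact messages may differ slightly on your system:
# chkconfig --list vsftpd
vsftpd          0:off   1:off   2:off   3:off   4:off   5:off   6:off
# chkconfig --level 345 vsftpd on
# chkconfig --list vsftpd
vsftpd          0:off   1:off   2:off   3:on    4:on    5:on    6:off
# service vsftpd start
Starting vsftpd for vsftpd:                                [  OK  ]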
# options=-b
# options=-g
This tells udev to monitor all system SCSI devices for returning UUIDs. To determine the
system UUIDs, type:
# scsi_id -g -s /block/sdc
*3600a0b80001327510000015427b625e*
This long string of characters is the UUID. To get the device names to key off the UUID, check
each device path to ensure that the UUID number is the same for each device. The UUIDs do
not change when you add a new device to your system. Once you have checked the device
paths, you must create rules for the device naming. To create these rules, you must edit the
20-names.rules file that resides in the /etc/udev/rules.d directory. The device naming
rules you create here should follow this format:
Replace the UUID and device name in the rule with the UUID retrieved above and the device name
you want to use. The rule should resemble the following:
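A sketch of such a rule, using the UUID and device name from this example and the udev rule syntax of that era, would be:
KERNEL="sd*", BUS="scsi", PROGRAM="/sbin/scsi_id", RESULT="3600a0b80001327510000015427b625e", NAME="mydevice"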
This causes the system to check every device that matches /dev/sd* for the given UUID.
When it finds a matching device, it creates a device node called /dev/devicename. For this
example, the device node is /dev/mydevice. Finally, append the /etc/rc.local file
with this line:
/sbin/start_udev
multipath {
wwid 3600a0b80001327510000015427b625e
alias oramp1
}
multipath {
wwid 3600a0b80001327510000015427b6
alias oramp2
}
multipath {
wwid 3600a0b80001327510000015427b625e
alias oramp3
}
multipath {
wwid 3600a0b80001327510000015427b625e
alias oramp4
}
1. Edit the ~/.vnc/xstartup file to start a GNOME session whenever vncserver is started.
The first time you run the vncserver script it will ask you for a password you want to use for
your VNC session.
#!/bin/sh
# Uncomment the following two lines for normal desktop:
# unset SESSION_MANAGER
# exec /etc/X11/xinit/xinitrc
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
#xsetroot -solid grey
#vncconfig -iconic &
#xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
#twm &
if test -z "$DBUS_SESSION_BUS_ADDRESS" ; then
eval `dbus-launch --sh-syntax --exit-with-session`
echo "D-BUS per-session daemon address is: \
$DBUS_SESSION_BUS_ADDRESS"
fi
exec gnome-session
You must also modify these system configuration settings on your guest: modify the
HOSTNAME entry of the /etc/sysconfig/network file to match the new guest's hostname.
Chapter 27.
Chapter 26, Tips and tricks, is recommended reading for programmers thinking of writing new
applications which use Red Hat Virtualization.
# cat satelliteiso.xml
<disk type="file" device="disk">
<driver name="file"/>
<source
file="/var/lib/xen/images/rhn-satellite-5.0.1-11-redhat-linux-as-i386-4-embedded-oracle.iso"/>
<target dev="hdc"/>
<readonly/>
</disk>
Run virsh attach-device to attach the ISO as hdc to a guest called "satellite":
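Using the XML file from the cat example above, the command would be along these lines:
# virsh attach-device satellite satelliteiso.xml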
Chapter 28.
Before compiling, the kverrel and kverbase variables in the xenpv.spec file must be changed
to match the kernel version that binary modules are being built for. Usually, this is the value
returned from 'uname -r'.
You can then use the following command to rebuild the kmod-xenpv package. Change
SOURCE to reflect the correct path to your kmod-xenpv RPM file.
cd /usr/src/redhat/SOURCE/xenpv
rpmbuild -bb xenpv.spec
Part VII. Troubleshooting
Introduction to Troubleshooting and Problem Solving
The following chapters provide information to assist you in troubleshooting issues you may
encounter using Red Hat Virtualization.
Chapter 29.
How To troubleshoot Red Hat Virtualization
• troubleshooting techniques,
xentop
xentop displays real-time information about a Red Hat Virtualization host system and its
domains.
xm
Using the dmesg and log
• vmstat
• iostat
• lsof
You can employ these Advanced Debugging Tools and logs to assist with troubleshooting:
• XenOprofile
• systemtap
• crash
• sysrq
• sysrq t
• sysrq w
These networking tools can assist with troubleshooting virtualization networking problems:
• ifconfig
• tcpdump
• brctl
brctl is a networking tool that inspects and configures the Ethernet bridge configuration in the
Linux kernel used for virtualization. You must have root access to run these example
commands:
# brctl show
xenbr0
bridge-id 8000.fefffffffff
designated-root 8000.fefffffffff
root-port 0 path-cost 0
aging-time 300.01
Other utilities can be used to troubleshoot virtualization on Red Hat Enterprise Linux 5. All
utilities mentioned can be found in the Server repositories of the Red Hat Enterprise Linux 5
Server distribution:
• strace is a command which traces system calls and events received and used by another
process.
• vncviewer: connect to a VNC server running on your server or a virtual machine. Install
vncviewer using the yum install vnc command.
• vncserver: start a remote desktop on your server. Gives you the ability to run graphical user
interfaces such as virt-manager via a remote session. Install vncserver using the yum
install vnc-server command.
• The Red Hat Virtualization main configuration directory is /etc/xen/. This directory contains
the xend daemon and other virtual machine configuration files. The networking script files
reside here as well (in the /scripts subdirectory).
• All of the log files that you will consult for troubleshooting purposes reside in
the /var/log/xen directory.
• You should also know that the default directory for all virtual machine file based disk images
resides in the /var/lib/xen directory.
• Red Hat Virtualization information for the /proc file system reside in the /proc/xen/
directory.
• xend.log is the log file that contains all the data collected by the xend daemon, whether it is
a normal system event or an operator initiated action. All virtual machine operations (such as
create, shutdown, destroy, and so on) appear here. The xend.log is usually the first place to look
when you track down event or performance problems. It contains detailed entries and
conditions of the error messages.
• xend-debug.log is the log file that contains records of event errors from xend and the
Virtualization subsystems (such as framebuffer, Python scripts, etc.).
• xen-hotplug.log is the log file that contains data from hotplug events. If a device or a
network script does not come online, the event appears here.
• qemu-dm.[PID].log is the log file created by the qemu-dm process for each fully virtualized
guest. When using this log file, you must retrieve the given qemu-dm process PID by using
the ps command to examine the process arguments and isolate the qemu-dm process belonging to
the virtual machine. Note that you must replace the [PID] symbol with the actual PID of the
qemu-dm process.
If you encounter any errors with the Virtual Machine Manager, you can review the generated
data in the virt-manager.log file that resides in the ~/.virt-manager directory. Note that
every time you start the Virtual Machine Manager, it overwrites the existing log file contents.
Make sure to back up the virt-manager.log file before you restart the Virtual Machine
Manager after a system error.
• When you restart the xend daemon, it updates the xend-database that resides in the
/var/lib/xen/xend-db directory.
• Virtual machine dumps (performed with the xm dump-core command) reside in the
/var/lib/xen/dumps directory.
• The /etc/xen directory contains the configuration files that you use to manage system
resources. The xend daemon configuration file is called xend-config.sxp and you can use
this file to implement system-wide changes and configure the networking callouts.
• The proc folders are another resource that allows you to gather system information. These
proc entries reside in the /proc/xen directory:
/proc/xen/capabilities
/proc/xen/balloon
/proc/xen/xenbus/
The other log file, xend-debug.log, is very useful to system administrators since it contains
even more detailed information than xend.log; for a kernel domain creation problem, it records
the same error with additional detail.
When calling customer support, always include a copy of both of these log files.
The serial console is helpful in troubleshooting difficult problems. If the Virtualization kernel
crashes and the hypervisor generates an error, there is no way to track the error on the local host.
However, the serial console allows you to capture it on a remote host. You must configure the
host to output data to the serial console, and then configure the remote host to capture
the data. To do this, modify the following options in the grub.conf file to enable a 38400-bps
serial console on com1 (/dev/ttyS0):
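An entry along the following lines matches the parameters discussed below; the kernel and initrd versions are illustrative only:
title Red Hat Enterprise Linux Server (2.6.18-92.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.18-92.el5 com1=38400,8n1 sync_console
        module /vmlinuz-2.6.18-92.el5xen ro root=/dev/VolGroup00/LogVol00 console=ttyS0 console=tty pnpacpi=off
        module /initrd-2.6.18-92.el5xen.img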
The sync_console parameter can help determine a problem that causes hangs with asynchronous
hypervisor console output, and "pnpacpi=off" works around a problem that breaks input
on the serial console. The parameters "console=ttyS0" and "console=tty" mean that
kernel errors get logged on both the normal VGA console and the serial console. You can then
install and set up ttywatch to capture the data on a remote host connected by a
standard null-modem cable. For example, on the remote host you could type:
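The invocation would be along these lines; the option names shown are an assumption and should be checked against your ttywatch version, while myhost matches the log file name mentioned next:
ttywatch --name myhost --port /dev/ttyS0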
This pipes the output from /dev/ttyS0 into the file /var/log/ttywatch/myhost.log .
Where domain100 represents a running name or number. You can also use the Virtual Machine
Manager to display the virtual text console. On the Virtual Machine Details window, select Serial
Console from the View menu.
Fully virtualized guest console access
xm console
You can also use the Virtual Machine Manager to display the serial console. On the Virtual
Machine Details window, select Serial Console from the View menu.
9. SELinux considerations
This section covers things you must consider when you implement SELinux in your Red
Hat Virtualization environment. When you deploy system changes or add devices, you must
update your SELinux policy accordingly. To configure an LVM volume for a guest, you must
modify the SELinux context for the respective underlying block device and volume group.
The Boolean parameter xend_disable_t can be used to set xend in unconfined mode after
restarting the daemon. It is better to disable protection for a single daemon than for the whole
system. It is advisable that you do not re-label directories as xen_image_t that you will use
elsewhere.
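For example, using the Boolean named above, a sketch of the commands to run xend unconfined would be:
# setsebool xend_disable_t 1
# service xend restart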
You can use the kpartx application to handle partitioned disks or LVM volume groups:
To access LVM volumes on a second partition, you must rescan LVM with vgscan and activate
the volume group on the partition (called VolGroup00 by default) by using the vgchange -ay
command:
# kpartx -a /dev/xen/guest1
# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
# vgchange -ay VolGroup00
2 logical volume(s) in volume group "VolGroup00" now active.
# lvs
LV VG Attr LSize Origin Snap% Move Log Copy%
LogVol00 VolGroup00 -wi-a- 5.06G
LogVol01 VolGroup00 -wi-a- 800.00M
# mount /dev/VolGroup00/LogVol00 /mnt/
....
# umount /mnt/
# vgchange -an VolGroup00
# kpartx -d /dev/xen/guest1
You must remember to deactivate the logical volumes with vgchange -an, remove the partitions
with kpartx -d, and delete the loop device with losetup -d when you finish.
You try to run xend start manually and receive more errors:
import images
xc = xen.lowlevel.xc.xc ()
What most likely happened here is that you rebooted your host into a kernel that is not a
kernel-xen kernel. To correct this, you must select the kernel-xen kernel at boot
time (or set the kernel-xen kernel as the default in your grub.conf file).
You do a yum update and receive a new kernel, and the grub.conf default kernel switches right
back to a bare-metal kernel instead of the Virtualization kernel.
To correct this problem you must modify the /etc/sysconfig/kernel file and ensure that
kernel-xen is set as the default kernel package, so that the kernel-xen entry remains the
default in your grub.conf file.
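A sketch of the check, assuming the DEFAULTKERNEL variable is the one consulted when new kernels are installed:
# grep DEFAULTKERNEL /etc/sysconfig/kernel
DEFAULTKERNEL=kernel-xen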
These changes to the grub.conf should enable your serial console to work correctly. You
should be able to use any number for the ttyS and it should work like ttyS0 .
#/etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
USERCTL=no
IPV6INIT=no
PEERDNS=yes
TYPE=Ethernet
NETMASK=255.255.255.0
IPADDR=10.1.1.1
GATEWAY=10.1.1.254
ARP=yes
Edit /etc/xen/xend-config.sxp and add a line for your new network bridge script (this
example uses network-xen-multi-bridge).
In the xend-config.sxp file, the new line should reflect your new script:
network-script network-xen-multi-bridge
instead of the default:
network-script network-bridge
If you want to create multiple Xen bridges, you must create a custom script. The example below
creates two Xen bridges (called xenbr0 and xenbr1) and attaches them to eth0 and eth1,
respectively:
#!/bin/sh
# network-xen-multi-bridge
# Exit if anything goes wrong
set -e
# First arg is operation.
OP=$1
shift
script=/etc/xen/scripts/network-bridge.xen
case ${OP} in
start)
$script start vifnum=1 bridge=xenbr1 netdev=eth1
$script start vifnum=0 bridge=xenbr0 netdev=eth0
;;
stop)
$script stop vifnum=1 bridge=xenbr1 netdev=eth1
$script stop vifnum=0 bridge=xenbr0 netdev=eth0
;;
status)
$script status vifnum=1 bridge=xenbr1 netdev=eth1
$script status vifnum=0 bridge=xenbr0 netdev=eth0
;;
*)
echo 'Unknown command: ' ${OP}
echo 'Valid commands are: start, stop, status'
exit 1
esac
If you want to create additional bridges, just use the example script and copy/paste the file
accordingly.
name = "rhel5vm01"
memory = "2048"
disk = ['tap:aio:/xen/images/rhel5vm01.dsk,xvda,w',]
vif = [ 'type=ioemu, mac=00:16:3e:09:f0:12, bridge=xenbr0',
        'type=ioemu, mac=00:16:3e:09:f0:13, bridge=xenbr1' ]
vnc = 1
vncunused = 1
uuid = "302bd9ce-4f60-fc67-9e40-7a77d9b4e1ed"
bootloader = "/usr/bin/pygrub"
vcpus = 2
on_reboot = "restart"
on_crash = "restart"
Note that the serial="pty" is the default for the configuration file. This configuration file
example is for a fully-virtualized guest:
name = "rhel5u5-86_64"
builder = "hvm"
memory = 500
disk = [ 'file:/xen/images/rhel5u5-x86_64.dsk,hda,w' ]
vif = [ 'type=ioemu, mac=00:16:3e:09:f0:12,
bridge=xenbr0', 'type=ioemu, mac=00:16:3e:09:f0:13, bridge=xenbr1']
uuid = "b10372f9-91d7-a05f-12ff-372100c99af5"
device_model = "/usr/lib64/xen/bin/qemu-dm"
kernel = "/usr/lib/xen/boot/hvmloader"
vnc = 1
vncunused = 1
apic = 1
acpi = 1
pae = 1
vcpus =1
serial ="pty" # enable serial console
on_reboot = 'restart'
A domain can fail to start if there is not enough RAM available, because Domain0 did not balloon down
enough to provide space for the newly created guest. You can check the xend.log file for this error.
You can check the amount of memory in use by domain0 with the xm list Domain-0
command. If dom0 is not ballooned down, you can use the command "xm mem-set Domain-0
NewMemSize" to reduce the amount of memory in use by dom0.
Interpreting error messages
This message indicates that you are trying to run an unsupported guest kernel image on your
hypervisor. This happens when you try to boot a non-PAE para-virtualized guest kernel on a
Red Hat Enterprise Linux 5 host. Red Hat Virtualization only supports guest kernels with PAE
and 64 bit architectures.
# xm create -c va-base
If you need to run a 32 bit non-PAE kernel you will need to run your guest as a fully virtualized
virtual machine. For para-virtualized guests, if you need to run a 32 bit PAE guest, you
must have a 32 bit PAE hypervisor. For para-virtualized guests, to run a 64 bit guest, you
must have a 64 bit hypervisor. For fully virtualized guests, you must run a 64 bit guest
with a 64 bit hypervisor. The 32 bit PAE hypervisor that comes with Red Hat Enterprise Linux 5
i686 only supports running 32 bit PAE para-virtualized and 32 bit fully virtualized guest operating
systems. The 64 bit hypervisor only supports 64 bit para-virtualized guests.
This happens when you move a fully virtualized HVM guest onto a Red Hat Enterprise Linux 5
system. Your guest may fail to boot and you will see an error on the console screen. Check the
PAE entry in your configuration file and ensure that pae=1. You should use a 32 bit distribution.
This happens when the virt-manager application fails to launch. This error occurs when there is
no localhost entry in the /etc/hosts configuration file. Check the file and verify that the localhost
entry is present.
This happens when the guest's bridge is incorrectly configured and this forces the Xen hotplug
scripts to time out. If you move configuration files between hosts, you must ensure that you
update the guest configuration files to reflect any changes to the network topology and configuration.
When you attempt to start a guest that has an incorrect or non-existent Xen bridge
configuration, you will receive the following errors:
/local/domain/0/backend/vif/2/0/hotplug-status
/local/domain/0/backend/vif/2/0/hotplug-status
To resolve this problem, you must edit your guest configuration file, and modify the vif entry.
When you locate the vif entry of the configuration file, assuming you are using xenbr0 as the
default bridge, ensure that the proper entry resembles the following:
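The MAC address in this sketch is illustrative:
vif = [ 'mac=00:16:3e:49:1d:11, bridge=xenbr0' ]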
Python generates these messages when you supply an invalid (or incorrect) configuration file. To resolve
this problem, you must correct the configuration file or generate a new one.
/etc/xen/
• Hosts the scripts used to set up the Virtualization networking environment (in the
scripts subdirectory).
• Sometimes the system administrator may decide to keep the virtual machine
configuration files in a different or central location. Make sure you are not working off old
or stale configuration files.
/var/log/xen/
• Directory holding all of the log files generated by the xend daemon and the qemu-dm
process.
/var/lib/xen/
• Default directory for Virtualization related files (such as XenDB and virtual machine
images).
/var/lib/xen/images/
• The default directory for file-based virtual machine images. If you are using a different
directory for your virtual machine images, make sure you add the directory to your SELinux
policy and relabel it before starting the installation.
/proc/xen/
• Red Hat Virtualization information for the /proc file system (such as the capabilities,
balloon and xenbus entries).
Online troubleshooting resources
• Virtualization Technologies Overview
  http://www.openvirtualization.com/
• http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/index.html
• Libvirt API
  http://www.libvirt.org/
• http://virt-manager.et.redhat.com/
• http://www.xensource.com/xen/xen/
• http://virt.kernelnewbies.org/
• http://et.redhat.com/
Chapter 30.
Troubleshooting
This chapter covers common problems and solutions with Red Hat Enterprise Linux
virtualization.
# cat /proc/partitions
major minor #blocks name
202 16 104857600 xvdb
3 0 8175688 hda
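The example referred to below would normally be a line such as the following in /etc/modprobe.conf (an assumption based on the standard mechanism for raising the number of loop devices); the loop module must be reloaded, or the host rebooted, for it to take effect:
options loop max_loop=64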
This example uses 64, but you can specify another number to set the maximum loop value. You
may also have to implement loop device backed guests on your system. To employ loop device
backed guests for a para-virtualized guest, use the phy: block device or tap:aio
parameters. To employ loop device backed guests for a fully virtualized system, use the phy:
device or file: file parameters.
This may cause a domain to fail to start. The reason for this is that there is not enough memory
available, or dom0 has not ballooned down enough to provide space for a recently created or
started guest. Check /var/log/xen/xend.log for an error message indicating that this has
occurred.
You can verify the amount of memory currently used by dom0 with the command “xm list
Domain-0”. If dom0 is not ballooned down you can use the command “xm mem-set Domain-0
NewMemSize” where NewMemSize should be a smaller value.
# xm create testVM
Using config file "./testVM".
Going to boot Red Hat Enterprise Linux Server (2.6.18-1.2839.el5)
kernel: /vmlinuz-2.6.18-1.2839.el5
initrd: /initrd-2.6.18-1.2839.el5.img
Error: (22, 'Invalid argument')
In the above error you can see that the kernel line shows that it's trying to boot a non-xen
kernel. The correct entry in the example is ”kernel: /vmlinuz-2.6.18-1.2839.el5xen”.
The solution is to verify you have indeed installed a kernel-xen in your guest and it is the default
kernel to boot in your /etc/grub.conf configuration file.
If you do have a kernel-xen installed in your guest you can start your guest using the command
“xm create -c GuestName” where GuestName is the name of the guest. The previous
command will present you with the Grub boot loader screen and allow you to select the kernel
to boot. You will have to choose the kernel-xen kernel to boot. Once the guest has completed
the boot process you can log into the guest and edit /etc/grub.conf to change the default
boot kernel to your kernel-xen. Simply change the line “default=X” (where X is a number
starting at '0') to correspond to the entry with your kernel-xen line. The numbering starts at '0' so
if your kernel-xen entry is the second entry you would enter '1' as the default, for example
“default=1”.
If you try to boot a non-PAE para-virtualized guest you will see the error message below. It
basically indicates you are trying to run a guest kernel on your hypervisor which is not supported
at this time. Red Hat Enterprise Linux 5 presently only supports PAE and 64 bit
para-virtualized guest kernels.
# xm create -c va-base
Using config file "va-base".
Error: (22, 'Invalid argument')
[2006-12-14 14:55:46 xend.XendDomainInfo 3874] ERROR (XendDomainInfo:202)
Domain construction failed
Traceback (most recent call last):
File "/usr/lib/python2.4/site-packages/xen/xend/XendDomainInfo.py",
line 195, in create vm.initDomain()
File "/usr/lib/python2.4/site-packages/xen/xend/XendDomainInfo.py",
line 1363, in initDomain raise VmError(str(exn))
VmError: (22, 'Invalid argument')
[2006-12-14 14:55:46 xend.XendDomainInfo 3874] DEBUG (XendDomainInfo:1449)
XendDomainInfo.destroy: domid=1
[2006-12-14 14:55:46 xend.XendDomainInfo 3874] DEBUG (XendDomainInfo:1457)
XendDomainInfo.destroyDomain(1)
If you need to run a 32 bit or non-PAE kernel you will need to run your guest as a
fully-virtualized virtual machine. The rules for hypervisor compatibility are:
• For para-virtualized guests, your guest must match the architecture type of your hypervisor.
Therefore, if you want to run a 32 bit PAE guest you must have a 32 bit PAE hypervisor.
• To run a 64 bit para-virtualized guest, your hypervisor must be a 64 bit version too.
• For fully virtualized guests, your hypervisor can be 32 bit or 64 bit for 32 bit guests: you can run a
32 bit (PAE and non-PAE) guest on a 32 bit or 64 bit hypervisor.
• To run a 64 bit fully virtualized guest, your hypervisor must be 64 bit too.
Applying Intel CPU microcode update: FATAL: Module microcode not found.
ERROR: Module microcode does not exist in /proc/modules
As the virtual machine is running on virtual CPUs there is no point updating the microcode.
Disabling the microcode update for your virtual machines will stop this error:
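One way to do this, assuming the standard Red Hat Enterprise Linux service name, is to disable the microcode_ctl service inside the guest:
# chkconfig microcode_ctl off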
If you try to start a guest which has an incorrect or non-existent Xen bridge configured, you will
see the following error after starting the guest:
# xm create r5b2-mySQL01
Using config file "r5b2-mySQL01".
Going to boot Red Hat Enterprise Linux Server (2.6.18-1.2747.el5xen)
kernel: /vmlinuz-2.6.18-1.2747.el5xen
initrd: /initrd-2.6.18-1.2747.el5xen.img
Error: Device 0 (vif) could not be connected. Hotplug scripts not working
and in /var/log/xen/xend.log you will see the following messages (or similar messages)
being logged.
To resolve this issue, edit your guest's configuration file and modify the vif entry to reflect your
local configuration. For example, if your local configuration uses xenbr0 as its default bridge,
change the bridge= value in the vif entry of your configuration file accordingly.
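A sketch of such a change, with an illustrative MAC address:
vif = [ 'mac=00:16:3e:1a:b3:4a, bridge=xenbr1' ]
becomes
vif = [ 'mac=00:16:3e:1a:b3:4a, bridge=xenbr0' ]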
Another cause is an incorrect configuration file in your current working directory; “xm create”
looks in the current directory for a configuration file and then in /etc/xen.
# xm shutdown win2k3xen12
# xm create win2k3xen12
Using config file "win2k3xen12".
/usr/lib64/python2.4/site-packages/xen/xm/opts.py:520: DeprecationWarning:
Non-ASCII character '\xc0' in file win2k3xen12 on line 1, but no encoding
declared; see http://www.python.org/peps/pep-0263.html for details
execfile(defconfig, globs, locs)
Chapter 31.
Troubleshooting Para-virtualized
Drivers
This chapter deals with issues you may encounter with Red Hat Enterprise Linux hosts and
fully virtualized guests that use the para-virtualized drivers.
/var/log/xen/
directory holding all log files generated by the xend daemon and the qemu-dm process.
xend.log
• This logfile is used by xend to log any events generated by normal system operation or by
operator initiated actions.
• Virtual machine operations such as create, shutdown, destroy and so on are all logged in this
logfile.
• Usually this logfile will be the first place to look in the event of a problem. In many
cases you will be able to identify the root cause by scanning the logfile and reviewing the
entries logged just prior to the actual error message.
xend-debug.log
• Used to record error events from xend and its subsystems (such as the framebuffer and
Python scripts).
xen-hotplug.log
• Events such as devices not coming online or network bridges not coming online are logged in
this file.
qemu-dm.PID.log
• This file is created by the qemu-dm process, which is started for each fully-virtualized guest.
• The PID is replaced with the PID of the related qemu-dm process.
• You can retrieve the PID for a given qemu-dm process using the ps command; by
looking at the process arguments you can identify the virtual machine the qemu-dm
process belongs to.
If you are troubleshooting a problem with the virt-manager application, you can also review the
logfile generated by it. The logfile for virt-manager will be in a subdirectory called
.virt-manager in the home directory of the user running virt-manager. For example,
~/.virt-manager/virt-manager
Note
The logfile is overwritten every time virt-manager is started. If you are
troubleshooting a problem with virt-manager, make sure you save the logfile
before you restart virt-manager after an error has occurred.
/var/lib/xen/images/
the standard directory for file-based virtual machine images.
/var/lib/xen/xend-db/
directory that holds the xend database, which is regenerated every time the daemon is
restarted.
/etc/xen/
holds a number of configuration files used to tailor your Red Hat Enterprise Linux 5
Virtualization environment to suit your local needs.
/var/xen/dump/
holds dumps generated by virtual machines or by the xm dump-core command.
/proc/xen/
has a number of entries which can be used to retrieve additional information:
• /proc/xen/capabilities
• /proc/xen/privcmd
• /proc/xen/balloon
• /proc/xen/xenbus
• /proc/xen/xsd_port
• /proc/xen/xsd_kva
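For example, reading /proc/xen/capabilities is a quick way to confirm that you are running in dom0; on dom0 it contains the string control_d (an illustrative check, not taken from the original text):
# cat /proc/xen/capabilities
control_d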
Para-virtualized guests fail to load on a Red Hat Enterprise Linux 3 guest operating system
When the para-virtualized driver modules are inserted, a long list of unresolved symbols is
displayed. A shortened excerpt of the error can be seen below.
insmod xen-platform-pci.o
Warning: kernel-module version mismatch
xen-platform-pci.o was compiled for kernel version
2.4.21-52.EL
while this kernel is version 2.4.21-50.EL
xen-platform-pci.o: unresolved symbol
__ioremap_R9eac042a
xen-platform-pci.o: unresolved symbol
flush_signals_R50973be2
xen-platform-pci.o: unresolved symbol
pci_read_config_byte_R0e425a9e
xen-platform-pci.o: unresolved symbol
__get_free_pages_R9016dd82
[...]
The solution is to use the correct RPM package for your hardware architecture for the
para-virtualized drivers.
The important part of the message above is the last line, which should state that the module has
been loaded, with warnings.
If the guest operating system has been booted using the virt-manager (GUI) or
virsh (command line) interface, the boot process will detect the “new” old Realtek
card. This is because libvirt, the underlying API for virt-manager and virsh,
always adds type=ioemu to the networking section, and the guest then prompts the system
administrator to reconfigure networking. It is recommended that you interrupt the boot
process (using virt-manager, virsh or xm) and boot the guest using the xm command instead. If
the guest operating system has booted all the way to multi-user mode, you will find
that there is no networking active because the backend and frontend drivers are not connected
properly.
To fix this issue, shut down the guest and boot it using “xm create”. During the boot process
kudzu (the hardware detection process) will detect the “old” Realtek card. Simply select
“Remove Configuration” to delete the Realtek card from the guest operating system. The
guest should continue to boot and configure the network interfaces correctly.
You can identify whether your guest has been booted with virt-manager, virsh or “xm create” using
the command “# xm list --long YourGuestName”.
In the screenshot below you can see the entry “ioemu” highlighted in the “device vif”
(networking) section. This means the guest was booted with virt-manager or virsh and
networking is not configured correctly, that is, without the para-virtualized network driver.
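As an illustration of what such output looks like, the relevant portion of the “xm list --long” output for a guest booted with virt-manager or virsh contains a type ioemu entry in the vif device section. The excerpt below is abbreviated and hypothetical (the MAC address is an example value), not a reproduction of the screenshot:
(device
    (vif
        (bridge xenbr0)
        (mac 00:16:3e:1a:b3:4a)
        (script vif-bridge)
        (type ioemu)
    )
)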
In the screenshot below you can see there is no “type ioemu” entry in the “device vif”
section, so you can safely assume the guest has been booted with “xm create
YourGuestName”. This means networking is configured to use the para-virtualized network
driver.
This allows you to reconfigure networking or storage entities, or identify why they failed to load in
the first place. The steps below show how to locate and load the para-virtualized driver modules manually.
# cd /lib/modules/`uname -r`/
# find . -name 'xen-*.ko' -print
Take note of the location and load the modules manually. Substitute {LocationofPV-drivers} with
the correct location you noted from the output of the commands above.
# insmod /lib/modules/`uname -r`/{LocationofPV-drivers}/xen-platform-pci.ko
# insmod /lib/modules/`uname -r`/{LocationofPV-drivers}/xen-balloon.ko
# insmod /lib/modules/`uname -r`/{LocationofPV-drivers}/xen-vnif.ko
# insmod /lib/modules/`uname -r`/{LocationofPV-drivers}/xen-vbd.ko
After the para-virtualized drivers have been installed and the guest has been rebooted, you can
verify that the drivers have loaded. First, confirm that the drivers have logged their
loading into /var/log/messages.
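One illustrative way to do this (an assumed command, not necessarily the one shown in the original text) is to search the system log for the Xen driver messages:
# grep -i xen /var/log/messages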
You can also use the lsmod command to list the loaded para-virtualized drivers. It should output
a list containing the xen_vnif, xen_vbd, xen_platform_pci and xen_balloon modules.
# lsmod|grep xen
xen_vbd                19168  1
xen_vnif               28416  0
xen_balloon            15256  1 xen_vnif
xen_platform_pci       98520  3 xen_vbd,xen_vnif,xen_balloon,[permanent]
Appendix A. Revision History
Revision History
Revision 5.2-10 Wednesday May 14 2008 Christopher Curran <[email protected]>
Resolves: #322761
Many spelling and grammar errors corrected.
Chapter on Remote Management added.
Revision 5.2-5 Tue Mar 19 2008 Christopher Curran <[email protected]>
Resolves: #428915
New Virtualization Guide created.
Appendix B. Red Hat Virtualization system architecture
A functional Red Hat Virtualization system is multi-layered and is driven by the privileged Red
Hat Virtualization component. Red Hat Virtualization can host multiple guest operating systems.
Each guest operating system runs in its own domain, and Red Hat Virtualization schedules virtual
CPUs within the virtual machines to make the best use of the available physical CPUs. Each
guest operating system handles its own applications and schedules them accordingly.
You can deploy Red Hat Virtualization in one of two modes: full virtualization or
para-virtualization. Full virtualization provides total abstraction of the underlying physical system
and creates a new virtual system in which the guest operating systems can run. No
modifications are needed in the guest OS or application (the guest OS or application is not
aware of the virtualized environment and runs normally). Para-virtualization requires user
modification of the guest operating systems that run on the virtual machines (these guest
operating systems are aware that they are running on a virtual machine) and provides
near-native performance. You can deploy both para-virtualization and full virtualization across
your virtualization infrastructure.
The first domain, known as domain0 (dom0), is automatically created when you boot the
system. Domain0 is the privileged guest and possesses management capabilities which can
create new domains and manage their virtual devices. Domain0 handles the physical hardware,
such as network cards and hard disk controllers. Domain0 also handles administrative tasks
such as suspending, resuming, or migrating guest domains to other physical hosts.
The hypervisor (Red Hat's Virtual Machine Monitor) is a virtualization platform that allows
multiple operating systems to run on a single host simultaneously within a full virtualization
environment. A guest is an operating system (OS) that runs on a virtual machine in addition to
the host or main OS.
With Red Hat Virtualization, each guest's memory comes from a slice of the host's physical
memory. For para-virtualized guests, you can set both the initial memory and the maximum size
of the virtual machine. You can add (or remove) physical memory to the virtual machine at
runtime without exceeding the maximum size you specify. This process is called ballooning.
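As an illustration (the guest name and values are hypothetical, not from the original text), the initial and maximum memory are set in the guest configuration file, and the current allocation can be changed at runtime with xm:
memory = 512
maxmem = 1024
# xm mem-set rhel5vm01 768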
You can configure each guest with a number of virtual CPUs (called vcpus). The Virtual Machine
Manager schedules the vcpus according to the workload on the physical CPUs.
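For example (hypothetical values and guest name), the number of virtual CPUs is set in the guest configuration file and their placement can be inspected at runtime:
vcpus = 2
# xm vcpu-list rhel5vm01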
You can grant a guest any number of virtual disks. The guest sees these as either hard disks
or (for full virtual guests) as CD-ROM drives. Each virtual disk is served to the guest from a
block device or from a regular file on the host. The device on the host contains the full
disk image for the guest, and usually includes partition tables, multiple partitions, and potentially
LVM physical volumes.
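An illustrative disk entry in a guest configuration file (the paths and device names are hypothetical) might serve one virtual disk from a block device and another from a regular file on the host:
disk = [ 'phy:/dev/VolGroup00/rhel5vm01,xvda,w',
         'file:/var/lib/xen/images/data.img,xvdb,w' ]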
Virtual networking interfaces run on the guest. Other interfaces, such as
virtual Ethernet Internet cards (VNICs), can also run on the guest. These network interfaces are configured with a
persistent virtual media access control (MAC) address. The default installation of a new guest
installs the VNIC with a MAC address selected at random from a reserved pool of over 16
million addresses, so it is unlikely that any two guests will receive the same MAC address.
Complex sites with a large number of guests can allocate MAC addresses manually to ensure
that they remain unique on the network.
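For example, a manually allocated MAC address can be specified in the guest configuration file; the address below is hypothetical but uses the 00:16:3e prefix reserved for Xen virtual interfaces:
vif = [ 'mac=00:16:3e:00:00:01, bridge=xenbr0' ]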
Each guest has a virtual text console that connects to the host. You can redirect guest logins
and console output to the text console.
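For example, you can attach to a guest's virtual text console from the host with the xm console command (the guest name is hypothetical):
# xm console rhel5vm01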
You can configure any guest to use a virtual graphical console that corresponds to the normal
video console on the physical host. You can do this for full virtual and para-virtualized guests. It
employs the features of the standard graphics adapter, such as boot messaging, graphical booting,
multiple virtual terminals, and can launch the X Window System. You can also use the graphical
console to configure the virtual keyboard and mouse.
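As an illustrative sketch (assuming the common Xen configuration options vnc, vnclisten and sdl; not a statement from the original text), the graphical console of a fully virtualized guest is typically exposed over VNC through entries in the guest configuration file:
vnc = 1
vnclisten = "127.0.0.1"
sdl = 0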
Guests can be identified by any of three identifiers: domain name (domain-name), identity
(domain-id), or UUID. The domain-name is a text string that corresponds to a guest
configuration file. The domain-name is used to launch the guests, and when the guest runs the
same name is used to identify and control it. The domain-id is a unique, non-persistent number
that gets assigned to an active domain and is used to identify and control it. The UUID is a
persistent, unique identifier that is controlled from the guest's configuration file and ensures that
the guest is identified over time by system management tools. It is visible to the guest when it
runs. A new UUID is automatically assigned to each guest by the system tools when the guest
first installs.
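For example (illustrative commands with a hypothetical guest name), the domain-name and domain-id of running guests are shown by xm list, and the UUID can be retrieved with virsh:
# xm list
# virsh domuuid rhel5vm01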
Appendix C. Additional resources
To learn more about Red Hat Virtualization, refer to the following resources.
1. Online resources
• http://www.libvirt.org/ is the official website for the libvirt virtualization API that interacts
with the virtualization framework of a host OS.
2. Installed documentation
Glossary
This glossary is intended to define the terms used in this Virtualization Guide.
B
Bare-metal The term bare-metal refers to the underlying physical
architecture of a computer. Running an operating system on
bare-metal is another way of referring to running an unmodified
version of the operating system on the physical hardware.
Examples of operating systems running on bare-metal include
dom0 and natively installed operating systems.
D
dom0 Also known as the Host or host operating system.
Domains domU and dom0 are both domains. Domains run on the
Hypervisor. The term domain has a similar meaning to Virtual
machine and the two are technically interchangeable. A
domain is a virtual machine.
domU domU refers to the guest operating systems which run on the
host system (Domains).
F
Full virtualization You can deploy Red Hat Virtualization in one of two modes:
full virtualization or para-virtualization. Full virtualization
provides total abstraction of the underlying physical system
(Bare-metal) and creates a new virtual system in which the
guest operating systems can run. No modifications are needed
in the guest operating system. The guest operating system and
any applications on the guest are not aware of the virtualized
environment and run normally. Para-virtualization requires a
modified version of the Linux operating system.
G
Guest system Also known as guests, virtual machines or domU.
H
Hardware Virtual Machine See Full virtualization
I
I/O Short for input/output (pronounced "eye-oh"). The term I/O is
used to describe any program, operation or device that
transfers data to or from a computer and to or from a peripheral
device. Every transfer is an output from one device and an
input into another. Devices such as keyboards and mice are
input-only devices while devices such as printers are
output-only. A writable CD-ROM is both an input and an output
device.
K
Kernel-based Virtual Machine KVM is a Full virtualization kernel module which will be
incorporated into future releases of Red Hat Enterprise Linux.
KVM is presently available in the Fedora Linux distribution and
other Linux distributions.
L
LUN A Logical Unit Number (LUN) is the number assigned to a logical
unit (a SCSI protocol entity).
M
Migration See also Relocation
MAC Addresses The Media Access Control Address is the hardware address for
a Network Interface Controller. In the context of virtualization
MAC addresses must be generated for virtual network
interfaces with each MAC on your local domain being unique.
P
Para-virtualization Para-virtualization uses a special kernel, sometimes referred to
as the xen kernel or kernel-xen, to virtualize another
environment while using the host's libraries and devices. A
para-virtualized installation has complete access to all
devices on the system. Para-virtualization is significantly faster
than full virtualization and can be used effectively for load
balancing, provisioning, security and consolidation.
Para-virtualized drivers Para-virtualized drivers are device drivers that operate on fully
virtualized Linux guests. These drivers greatly increase the
performance of network and block device I/O for fully
virtualized guests.
R
Relocation Another term for Migration usually used to describe moving a
virtual machine image across geographic locations.
V