UbuntuHelp:Xen

Introduction

Xen is an open-source virtual machine layer that runs on the bare hardware, allowing multiple operating systems to run on the same machine at the same time. It does this without emulation or instruction translation, and provides near-native (~97%) CPU performance. Xen is optimized for servers: running many instances of Linux or other operating systems, each with its own kernel, securely and cleanly partitioned from one another, on one piece of server hardware. If you just want to run a virtual instance of Windows on your workstation, then KVM or VirtualBox is probably what you want instead.

Xen allows a roughly tenfold decrease in the number of boxes in a typical data center. It also abstracts away the differences between hardware, allowing you to migrate entire operating system images between different machines by simply shutting them down, moving the bits, then booting them back up again, with no messing around with drivers or grub. If you have a SAN (or DRBD, or GNBD), you can do this migration without even moving the bits. As if that were not enough, Xen also supports "live migration": you can migrate a running server instance from one box to another without suspending or shutting it down. A Xen administrator can do this without the users, or even the server's root sysadmin, knowing it happened. Even ssh connections stay up. Xen is what has made "cloud computing" possible, including Amazon's EC2.

The Xen "hypervisor" is the piece that runs on the bare hardware. It is a very thin, very small operating system that dices the hardware up into virtual machines which are nearly Intel-like, but with slightly modified interrupt, memory, and I/O handling. Anyone with mainframe experience will recognize that the Xen hypervisor is the equivalent of IBM's mainframe VM, but for PC hardware.

The tradeoff for all this performance and flexibility is that an operating system's kernel needs to be slightly ported to run under the Xen hypervisor; Xen is just another hardware architecture. More recent versions of Xen, running on more recent Intel or AMD CPUs, may be able to run some operating systems, such as Windows, without modification, but this is not yet Xen's core use case: there is a lot of old hardware out there, and there will be for a long time. One of Xen's strengths is the ability to make this old hardware (circa Pentium III or later) usable again in a modern data center.

Ubuntu's Support of Xen

There has been some controversy about Ubuntu's support for Xen, mostly fueled by some sensationalist articles about KVM in places like CNET and The Register. It's true that Intrepid does not include a dom0 Linux kernel, but it does still include the Xen 3.3 hypervisor and userland tools. According to Evan Broder, this is likely just a workload issue: Intrepid runs a Linux kernel version which Xen does not explicitly support, so the Ubuntu kernel team would have had to forward-port the Xen patches from 2.6.18. This kind of mismatch will go away when Xen dom0 support gets into the mainline kernel, which may actually happen soon. It's also worth noting that the Xen 3.3 hypervisor is included in Jaunty as well.

In the meantime, a suitable workaround is to install the Xen 3.3 hypervisor and userland tools for Intrepid, Jaunty, or whatever later version of Ubuntu you're running, and then get a dom0 Linux kernel from Debian. See also https://wiki.ubuntu.com/Xen and https://wiki.ubuntu.com/Virtualization, and on freenode try ##xen or #ubuntu-virt.
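
If you go that route, installing the hypervisor and userland tools from the Ubuntu repositories looks roughly like the following sketch. This is hedged: the exact package names vary between releases, so check what your release actually ships first.

apt-cache search xen-hypervisor      # see which hypervisor packages your release provides
sudo aptitude install xen-hypervisor-3.3 xen-utils-3.3 xen-tools bridge-utils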

Glossary

  • Domain 0 (dom0): The Xen administrator's operating system, also called the "Xen host". It is booted automatically right after the hypervisor boots and is given special management privileges and some direct access to the physical hardware via the hypervisor. Grub boots the hypervisor, then loads the dom0 kernel; the dom0 kernel is listed as a 'module' in grub.
  • Domain U (domU): A Xen guest domain. A domU is a single Xen virtual machine. The “U” stands for “unprivileged”. The domU kernels are not mentioned in grub at all; they are booted from and managed by tools in dom0.
  • HVM: Hardware Virtual Machine. A feature of more recent CPUs (Intel VT-x or AMD-V) which allows running unmodified operating systems, such as Windows, as Xen domUs.
  • Hypervisor: A very thin, very small operating system which runs on the bare hardware, dicing it up into virtual machines. Like IBM's mainframe VM, but for Intel-like hardware. Grub boots the hypervisor; the filename of the hypervisor is listed as the 'kernel' in grub.

Karmic Notes

The installation guide below is helpful for most generations of Xen with most releases of Ubuntu, but a few of its details are outdated. The notes here briefly cover those differences. If you haven't used Xen before, you will want to read at least the guide below in addition to these brief additions.

  • Karmic (and presumably later releases) requires a kernel that provides the /proc/<pid>/mountinfo extensions. These extensions first appeared in kernel 2.6.26, so earlier kernels will not work with Karmic, either as dom0 or domU.
  • Recent Ubuntu kernels are compressed with newer compression formats (lzma and bzip2), which Xen 3.2 does not know how to decompress. If you want to use these kernels in your dom0, you need a more recent version of Xen; Xen 3.3 has worked with kernels up to 2.6.31-19-server. (Note: can anyone confirm whether domU kernels have the same issue?)
  • Newer Ubuntu kernels include the pv_ops extensions, which means they come ready to run as domUs. These are shipped as the "-server" kernels, and you can fetch a complete kernel, including the /lib/modules tree and the other pieces needed to boot it, as the linux-virtual package. These kernels are not suitable as dom0 kernels. (pv_ops is a new architecture for virtualizing Linux kernels that provides a fake hardware interface for the domU to run against. Many of the Linux virtualization projects, including Xen, are switching to pv_ops to make it easy for any distribution to provide domU kernels. Work is under way to make these kernels also serve as Xen dom0 kernels, but that is a different and more challenging problem.)
  • If you fetch this package, the kernel will be placed in the /boot directory of your domU. Remember to copy the kernel and initrd to the /boot directory of your dom0, which has to load them into memory to kick off the domU, and remember to edit the domU configuration file to use the new kernel and initrd. If you install the kernel some other way, remember to copy the appropriate /lib/modules/ subdirectory into your domU. (This setup is somewhat unintuitive, but the idea is that the dom0 needs the domU kernel and initrd to get the kernel started, but does not need the modules, since those are only accessed once the kernel is running. The domU does not need the kernel or initrd, since those will already be in memory, but does need the modules. It's fine to have the extra kernels in your dom0, since your bootloader (e.g., grub) will pick the right one, and it doesn't hurt to have the kernels in the domU /boot, since they will be ignored. A sketch of this copying step follows this list.)
  • The detailed instructions below for setting up the Xen configuration describe how to fix getty by editing the automatic scripts. On Karmic things have changed further. If you want a console for your domUs, you need getty running on /dev/hvc0 inside the domU. On Karmic, getty is run from /etc/init/hvc0.conf (on Jaunty it's /etc/event.d/hvc0.conf). Copy /etc/init/tty1.conf and edit it appropriately. Make sure you have:
extra = "console=hvc0"

in the domU configuration file. (The domU console device has changed names over the years. See the table at the bottom of the XenDom0Kernels wiki page for details, but basically with the pv_ops kernels since 2.6.26 the correct name is hvc0; before that it was often xvc0; and before that the syntax was "xencons=tty".)

  • If you also want to upgrade your dom0, remember that you will need at least a 2.6.26 kernel if you're going to Karmic. A good source of these kernels is Debian; the Debian 2.6.26-2-xen-amd64 kernels are known to work with at least Jaunty, and should in principle be sufficient for Karmic. (Please report definitive experience here if you can.)
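
As an illustration of the kernel-copying step described above, here is a minimal, hedged sketch. It assumes the domU's filesystem is mounted at /mnt/myguest (a hypothetical path) and that the linux-virtual package installed the 2.6.31-19-server kernel mentioned earlier; adjust the names to match your system.

VERSION=2.6.31-19-server
sudo cp /mnt/myguest/boot/vmlinuz-$VERSION /boot/       # the dom0 needs the kernel...
sudo cp /mnt/myguest/boot/initrd.img-$VERSION /boot/    # ...and the initrd to start the domU
# the modules stay inside the domU under /mnt/myguest/lib/modules/$VERSION

Then point the kernel and ramdisk lines of the domU configuration file at the copied files.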

Installation

Note: This guide is written for Feisty. It is currently not fully compatible with Gutsy; use at your own risk! In general, more recent versions of Ubuntu will require fewer workarounds. Intrepid, for instance, includes everything but the Linux dom0 kernel itself, though you can get a kernel from Debian (see above).

Install from packages (recommended)

sudo aptitude install ubuntu-xen-server

Note: There are also ubuntu-xen-desktop and ubuntu-xen-desktop-amd64 packages. Their purpose is a bit ambiguous, but they install Firefox and all kinds of GNOME libraries, and they may not be PAE-enabled (i.e., compiled for systems with more than 4GB of RAM). Even if you are on a 64-bit system, you should still install ubuntu-xen-server. More package details: http://packages.ubuntu.com/feisty/base/ubuntu-xen-server

The Feisty AMD64 version of Ubuntu doesn't have an ubuntu-xen-server metapackage so far; instead, install the packages directly. For Feisty AMD64:

sudo aptitude install xen-image-2.6.19-4-generic-amd64 bridge-utils libxen3.0 python-xen3.0 xen-docs-3.0 xen-hypervisor-3.0 xen-ioemu-3.0 xen-tools xen-utils-3.0

For Gutsy AMD64:

sudo aptitude install ubuntu-xen-server        # Not in repository at 02/25/2008
sudo aptitude install ubuntu-xen-desktop-amd64 # includes xenman

You may want to install xenman too, but note that it pulls in all kinds of GNOME packages. Next you need to enable networking. [Gutsy: I had to reboot into the Xen kernel before I could run the xend script]

vim /etc/xen/xend-config.sxp

#(network-script network-dummy)
(network-script network-bridge)

then restart xend for the change to take effect:

sudo /etc/init.d/xend restart

It's also a good idea to increase the default number of loop mounts allowed. This is not really needed if you are going to use LVM, but it won't break anything either.

vim /etc/modules

loop max_loop=64

This is probably a good point to reboot your machine so that you are running the Xen kernel. After the reboot, check your network and make sure it works. Both Feisty and Gutsy may have network problems with certain hardware; if you are experiencing network problems, check out this potential solution.
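
A quick way to confirm that you actually booted the Xen kernel and that the bridge came up is sketched below; the bridge name is assumed to be the usual xenbr0, but it can differ depending on your Xen version and configuration.

uname -r          # should name a -xen kernel
sudo xm list      # should list Domain-0 if the hypervisor and xend are running
brctl show        # should show the Xen bridge (e.g. xenbr0) with your NIC attached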

ACPI

If, after the reboot, you see a kernel oops in dmesg or a suggestion to boot with irqpoll, try disabling the ACPI and Plug 'n Play OS options in your BIOS. If you have no access to these options in the BIOS, you may need to boot your kernel with the acpi=off option. The reason is that there is no ACPI support in the Xen kernel. Edit /boot/grub/menu.lst and find the module line:

title           Xen 3.1 / Ubuntu 7.10, kernel 2.6.22-14-xen
root            (hd0,0)
kernel          /boot/xen-3.1.gz
module          /boot/vmlinuz-2.6.22-14-xen root=UUID=your-uuid-here ro console=tty0
module          /boot/initrd.img-2.6.22-14-xen

change the module line that carries the kernel options to:

module          /boot/vmlinuz-2.6.22-14-xen root=UUID=your-uuid-here ro console=tty0 acpi=off

Initrd

Most people can skip this section. If you get a kernel panic at reboot, you probably have SCSI or SATA modules that need to be included in an initrd. Create one like this:

sudo depmod -a 2.6.19-4-generic
sudo mkinitramfs -o /boot/xen-3.0-i386.initrd.img 2.6.19-4-generic

And then add this as a second module line in the Xen section of your menu.lst file.

module      /boot/xen-3.0-i386.initrd.img

This recommendation might be wrong. Please correct it if so. See here and here for more details.

AACRaid Bugfix

Currently, on some server configurations running 2.6.24-18-xen, AACRaid will cause a kernel panic (the error is 'out of SW-IOMMU space'). This can happen either on boot-up or immediately after accessing the disk (e.g., using apt-get). If this happens to you, try adding "swiotlb=128" to your /boot/grub/menu.lst file like so:

module   /boot/vmlinuz-2.6.24-18-xen root=<your UUID> ro console=tty0 swiotlb=128

You will also have to add this switch to any DomU config files you create. This flag fixed the kernel panics on IBM X-Server 3550s with attached SAN arrays. This bug was originally reported at: [1] (Thanks to everyone who's working on it. Paul Mc.)

Prebuilt Binaries install

This section has not yet been written. In the meantime, see:

http://xen.xensource.com/download/dl_31tarballs.html
http://www.howtoforge.com/xen_3.0_ubuntu_dapper_drake

Guest Templates

For a full list of possible Xen domU config options, type:

sudo xm create --help_config

LVM partitioning

xen-tools can create LVM volumes for you, so you can skip this section. If you need to make LVM volumes yourself, you can use these commands. I just used the Ubuntu installer to set up my LVM volume group initially. If you are using local .img files for your Xen guests, then you don't need LVM.

sudo lvdisplay                                          # To see existing LVM volumes.
sudo lvcreate -n myguest-disk -L +100G my_volume_group  # To create a volume
sudo lvcreate -n myguest-swap -L +4G   my_volume_group  # To create another volume

How to extend an LVM partition

# e2fsck -f /dev/vg0/<DomU_name>
# lvextend -L +1G /dev/vg0/<DomU_name>
# resize2fs /dev/vg0/<DomU_name>
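
These commands resize the filesystem offline. As a hedged reminder of the surrounding workflow, using the same placeholder names as above:

sudo xm shutdown <DomU_name>               # the volume must not be in use by a running guest
# ...then run the e2fsck / lvextend / resize2fs commands above as root...
sudo xm create /etc/xen/<DomU_name>.cfg    # boot the guest again with the larger disk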

Using loopback-mounted-file

Create a sparse file for the disk, and a non-sparse file for the swap.

dd if=/dev/zero of=/mnt/domains/myslice/disk.img bs=1 count=0 seek=25G
dd if=/dev/zero of=/mnt/domains/myslice/swap.img bs=1G count=1

mkfs.ext3 ./disk.img    # answer y when warned that it is not a block special device
mkswap ./swap.img

Network for DomU

If you use the bridged network setup, it may be necessary to enable DHCP for eth0 in the guest. Make sure that your /etc/network/interfaces looks something like this:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

If you are using a static IP configuration, your /etc/network/interfaces will look something like this. Do not copy and paste this verbatim; change the IP addresses to match your network.

 auto lo
 iface lo inet loopback

 auto eth0
 iface eth0 inet static
 address 192.168.44.247
 netmask 255.255.252.0
 gateway 192.168.47.254 

Make sure the hostname of the DomU is correct in /etc/hostname. That file should contain a single line with your hostname. Also make sure the /etc/hosts file is correct:

127.0.0.1       localhost
127.0.1.1       yourhostname 

DomU using xen-tools (recommended)

First you need to edit some of the default values in xen-tools.conf. Go through the file and set them to what you want; the more important ones are mentioned below. Make sure you set a gateway and netmask, or networking won't work.

# vim /etc/xen-tools/xen-tools.conf

gateway   = 192.168.0.1
netmask   = 255.255.255.0
passwd = 1
kernel = /boot/vmlinuz-2.6.19-4-server
initrd = /boot/initrd.img-2.6.19-4-server
mirror = http://archive.ubuntu.com/ubuntu/

Create a new image.

sudo xen-create-image --hostname=xen1.example.com --ip=192.168.1.10 --ide --force

Tail the log file under /var/log/xen to see progress; there is no real indication on the command line that anything is happening, though you may see some network traffic. Someone said that it won't work without --ide, so that advice is followed here; man xen-create-image says "--ide  Use IDE names for virtual devices (hda not sda)". When the command finishes, it leaves a config file in /etc/xen named after the hostname you specified. That config file is used in the command below to start the virtual instance. Start a Xen guest:

sudo xm create /etc/xen/xen1.example.com.cfg
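
To watch the new guest boot and get a login prompt, you can attach to its console; a usage example, assuming the config file name created above:

sudo xm create -c /etc/xen/xen1.example.com.cfg    # boot the guest and attach to its console
sudo xm console xen1.example.com                   # or attach to a guest that is already running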

Other DomU Setups

Stuff goes here.

Other DomU Guest Configurations

Stuff to check when converting a disk image to a DomU:

  • /etc/fstab
  • /etc/conf.d/net or /etc/network/interfaces
  • /etc/resolv.conf
  • /lib/modules/<kernel version> (copy them into the guest if needed)
  • On Gentoo, fix the /sbin/rc bug that causes /sys and /proc errors.
  • Make sure you set up an empty /sys, an empty /proc, and a skeleton /dev.
  • Set the root password.
  • Set the hostname.
  • Look out for /etc/udev/rules.d/70-persistent-net.rules, which can change your eth device id.

A sketch of some of these steps follows.
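
A minimal, hedged sketch of a few of these steps, assuming the guest image is a loopback file mounted at /mnt/myguest (hypothetical paths; adapt for raw partitions or LVM):

sudo mount -o loop /var/vm/myvm/disk.img /mnt/myguest
sudo mkdir -p /mnt/myguest/sys /mnt/myguest/proc /mnt/myguest/dev    # empty /sys and /proc, skeleton /dev
sudo cp -a /lib/modules/`uname -r` /mnt/myguest/lib/modules/         # dom0 modules, if the domU boots the same kernel
sudo chroot /mnt/myguest passwd root                                 # set the root password
echo myguest | sudo tee /mnt/myguest/etc/hostname                    # set the hostname
# review /etc/fstab, /etc/network/interfaces, /etc/resolv.conf and
# /etc/udev/rules.d/70-persistent-net.rules inside the image by hand
sudo umount /mnt/myguest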

Using debootstrap

Mount the guest partition (loopback file, raw partition, or LVM) under /mnt/myguest, then bootstrap Ubuntu. In the commands below, change the "export" lines to match your configuration, and select a mirror in your country/continent.

sudo apt-get install debootstrap
export ARCH=amd64
export DISTRIBUTION=hardy
export MIRROR=http://us.archive.ubuntu.com/ubuntu/
sudo debootstrap --arch $ARCH $DISTRIBUTION /mnt/myguest/ $MIRROR

Copy the modules of current running kernel

sudo cp -a  /lib/modules/`uname -r`/   /mnt/myguest/lib/modules/

Fix networking as stated above in the "Network for DomU" section. Then edit /mnt/myguest/etc/fstab: use hda instead of sda if you use IDE, and include the swap line only if you prepared a swap partition.

/dev/sda1     /      ext3     errors=remount-ro        0     1
proc          /proc  proc     rw,nodev,nosuid,noexec   0     0
/dev/sda2     none   swap     sw                       0     0

Now create a config file following the instructions in the "Creating a DomU config file by hand" section below. You can then run your guest with:

sudo xm create /etc/xen/YOURCONF.cfg

Debootstrap does not create locales. If you get "Setting locale failed" warnings when you run the virtual machine, see the "Setting locale failed errors" entry in the Troubleshooting section below.

Creating a DomU config file by hand

Here is a skeleton DomU config file. Put this file in the /etc/xen directory and edit the settings to match your setup.

  • Find your kernel version with the "uname -r" command, and replace the versions in the "kernel" and "ramdisk" lines with the correct version.
  • As you can guess, "memory" sets DomU memory
  • "name" will be the name used to refer to this guest, keeping it the same as your DomU hostname will keep things simpler.
  • The "disk" line depends heavily on how you created the disks for the DomU. If there is no swap, remove the second entry. If you use IDE, change "sda" to "hda".
 
kernel  = "/boot/vmlinuz-2.6.19-4-generic"
ramdisk = "/boot/initrd.img-2.6.19-4-generic"
builder = 'linux'
memory  = 128
name    = "yourhostname"
vcpus   = 1
vif     = [ 'bridge=xenbr0' ]
disk    = [ 'file:/var/vm/myvm/disk.img,sda1,w' , 'file:/var/vm/myvm/swap.img,sda2,w' ]
root    = "/dev/sda1 ro"
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'

If you want to give static MAC addresses and static IPs, the "vif" line should look like the one below. MAC addresses beginning with "00:16:3e" are reserved for Xen guest machines; fill in the remaining part randomly.

vif  = [ 'mac=00:16:3e:XX:XX:XX, ip="YYY.YYY.YYY.YYY"' ]
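
If you would rather generate the random part than invent it by hand, a small helper in plain bash (using the reserved 00:16:3e prefix mentioned above):

printf '00:16:3e:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))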

If you use a physical partition, the disk line should look something like this:

 disk = [ 'phy:/dev/hda6,ioemu:hda1,w' ]

Gentoo Guest

http://bugs.gentoo.org/show_bug.cgi?id=192436

Ubuntu Guest is Gutsy 7.10 or newer

The xen-tools hook scripts included for the gutsy target are merely a link to the edgy ones, which is problematic: the init system has changed since Edgy, and there is also a bug with accessing hwclock. The result is that a newly created Gutsy domU appears to crash or hang after mounting the rootfs. The xen-tools patch/workaround:

1) Remove the symbolic link /usr/lib/xen-tools/gutsy.d, replace it with a copy of /usr/lib/xen-tools/edgy.d, and rename the hwclock hook:
rm /usr/lib/xen-tools/gutsy.d
cp -a /usr/lib/xen-tools/edgy.d /usr/lib/xen-tools/gutsy.d
mv /usr/lib/xen-tools/gutsy.d/15-disable-hwclock /usr/lib/xen-tools/gutsy.d/21-disable-hwclock

2) Edit /usr/lib/xen-tools/gutsy.d/21-disable-hwclock and make sure it contains the lines below:
chroot ${prefix} /usr/sbin/update-rc.d -f hwclock.sh remove
chroot ${prefix} /usr/sbin/update-rc.d -f hwclockfirst.sh remove
chroot ${prefix} rm -f /etc/udev/rules.d/85-hwclock.rules
chroot ${prefix} ln -sf /bin/true /sbin/hwclock

3) Create a hook to enable the gettys

cp /usr/lib/xen-tools/gutsy.d/30-disable-gettys /usr/lib/xen-tools/gutsy.d/32-enable-gettys
Now edit that file so that it executes the following:

#
#  Change first console setting to xvc0 upstart
#
echo "xvc0" >> ${prefix}/etc/securetty
sed -i "s/tty1/xvc0/" ${prefix}/etc/event.d/tty1

(Note that on Karmic the getty configuration must go in /etc/init/hvc0.conf instead of /etc/event.d/tty1; a hedged example of such a file is shown at the end of this section. Also note that on recent kernels, including all of those that will actually boot under Karmic, the correct console device is /dev/hvc0; that change must be made both above and in the extra line below.)

Set the runlevel and console device:

echo "extra = '2 console=xvc0'" >> /etc/xen/guestname.cfg

Edit guestname.cfg and assign a MAC address to the vif:

vif = [ 'mac=xx:xx:xx:xx:xx:xx, ip=a.b.c.d' ]
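
For Karmic, a hedged example of what the resulting /etc/init/hvc0.conf can look like, based on copying /etc/init/tty1.conf and replacing tty1 with hvc0 as described in the Karmic Notes above; check it against the tty1.conf your release actually ships:

# hvc0 - getty on the Xen console
start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]

respawn
exec /sbin/getty -8 38400 hvc0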

Windows HVM Guests

Make sure you have HVM support turned on in the BIOS. You can check that the hypervisor sees it with the command below (the output mentions VMX on Intel CPUs; on AMD CPUs, grep for SVM instead):

sudo xm dmesg | grep VMX

For the initial install you can mount an ISO as a CD-ROM. A VNC server will be started on localhost. To get the server to listen on the machine's public-facing IPs, make this change:

vim /etc/xen/xend-config.sxp

#(vnc-listen '127.0.0.1')
(vnc-listen '0.0.0.0')

and don't forget to restart xend:

sudo /etc/init.d/xend restart

Your xen guest config file should look like this:

#Kernel and memory size
kernel = '/usr/lib/xen-ioemu-3.0/boot/hvmloader'
device_model = "/usr/lib/xen-ioemu-3.0/bin/qemu-dm"
builder = 'hvm'
memory  = '512'
disk    = [ 'phy:barracudas/winxp01-disk,ioemu:hda,w', 'file:/home/steven/winxp.iso,ioemu:hdc:cdrom,r' ]

#  Hostname and Networking
name    = 'winxp01'
vif  = [ 'type=ioemu, bridge=xenbr0' ]

#  Behaviour
boot='d'  #d is cdrom boot, c is disk boot.
vnc=1
vncviewer=1
sdl=0
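
Once the config file is in place, you can start the guest and connect to its display. A hedged usage example, assuming the config above is saved as /etc/xen/winxp01.cfg; the VNC display number depends on the domain, so check which port qemu-dm is listening on if :0 does not work.

sudo xm create /etc/xen/winxp01.cfg
vncviewer localhost:0     # or <dom0-ip>:0 if you set vnc-listen to '0.0.0.0' above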

This works for booting a Windows 2003 HVM guest:

#  -*- mode: python; -*-

import os, re
arch = os.uname()[4]
if re.search('64', arch):
    arch_libdir = 'lib64'
else:
    arch_libdir = 'lib'

kernel = "/usr/lib/xen/boot/hvmloader"
builder='hvm'
memory = 756
shadow_memory = 8
name = "Windoze"
vif = [ 'type=ioemu, bridge=xenbr0' ]
disk = [ 'phy:/dev/vm-disks/win2k3,ioemu:hda,w', 'file:/root/en_win_srv_2003_r2_standard_cd1.iso,hdc:cdrom,r' ]
boot = "d"

device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
vnc=1
vncpasswd=''
serial='pty'

GNOME as domU guest

  1. Configure GDM to start VNC by editing /etc/X11/gdm/gdm.conf.

Under the [servers] heading, add this line, and comment out any others like it:

  0=VNC 

Before the [server-Standard] section, add:

  [server-VNC]
  name=VNC server
  command=/usr/bin/Xvnc -geometry 800x600 -depth 24
  flexible=true 
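
With GDM configured this way, the Xvnc server inside the domU serves the desktop. A hedged usage example, assuming the domU's IP address and display :0 as configured in the [server-VNC] command above:

vncviewer <domU-ip-address>:0     # Xvnc for display :0 listens on port 5900 inside the guest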

More information:

  • http://wiki.xensource.com/xenwiki/XenDemoLaptop
  • http://www.mail-archive.com/[email protected]/msg24961.html
  • http://openvz.org/pipermail/users/2007-January/000521.html

Troubleshooting

Setting locale failed errors

If you are getting errors like the following on your guest (common with guests created by debootstrap, which does not generate locales), run the command below.

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory

To fix this, run the following command, replacing en_US.UTF-8 with your own locale.

sudo locale-gen en_US.UTF-8

Remember, locale settings are in /etc/environment

LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8

Logging into your domU via the console

In order to be able to log in to your domU from the console using:

xm create {your hostname}.cfg -c

(to set the root password for ssh, for instance, or to see more output than just kernel messages when debugging) it may be necessary to add the following line to your /etc/xen/{your hostname}.cfg:

extra='xencons=tty'

With newer kernels you should instead use:

extra = "console=hvc0"

domU freezes as init touches the hardware clock

Additionally, if the domU seems to freeze after "Setting the system clock..", you may need to mount the domU's filesystem and add the following line to the bottom of its /etc/default/rcS:

HWCLOCKACCESS=no
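
A minimal, hedged sketch of applying that change from the dom0, assuming a loopback disk image at the example path used earlier (for an LVM volume, mount the logical volume instead):

sudo mount -o loop /var/vm/myvm/disk.img /mnt/myguest
echo "HWCLOCKACCESS=no" | sudo tee -a /mnt/myguest/etc/default/rcS
sudo umount /mnt/myguest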

Other issues

See the older wiki entry XenVirtualMachine for additional suggestions to try. Also try this guide for Xen on Feisty: http://www.howtoforge.com/ubuntu_7.04_xen_from_repositories

Great, it's set up. Now how do I use it?

Here are the most important Xen commands:

xm create -c /path/to/config - Start a virtual machine.
xm shutdown <name> - Stop a virtual machine.
xm destroy <name> - Stop a virtual machine immediately without shutting it down; it's as if you pulled the power cord.
xm list - List all running systems.
xm console <name> - Log in on a virtual machine.
xm help - List of all commands.

Links

Sister Wikis

Learning Sites

  • [11] - A wiki dedicated to documenting the different virtualization technologies available in Linux.

Other Reference