UbuntuHelp:KVM/Networking

#title KVM Networking

<<Include(KVM/Header)>>

Configuring the network

There are several different ways to give a virtual machine access to the external network.

Configuring Usermode Networking

In the default configuration, the guest operating system can access the network, but it is not visible to other machines on the network. The guest can, for example, browse web pages, but cannot act as a web server reachable from outside.

By default, the guest operating system gets an IP address in the 10.0.2.0/24 network, and the host operating system is reachable from the guest at 10.0.2.2.

From inside the guest OS you can ssh to the host OS at 10.0.2.2 and then use scp to copy files.
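For example, a quick sketch of pulling a file from the host into the guest over the user-mode network (the user name and path are placeholders, and the host must be running an SSH server):

# run inside the guest: the host is reachable at 10.0.2.2
scp user@10.0.2.2:/path/to/file .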

If this is sufficient for your needs, no further configuration is required.

If your guest cannot connect properly with the default configuration, see Troubleshooting below.

Configuring Bridged Networking

Bridged networking lets the virtual interfaces connect to the outside network through the physical interface, so that they appear to the rest of the network as ordinary hosts.

Warning: Network bridging will NOT work when the physical network device used for bridging (e.g., eth1, ath0) is a wireless device (e.g., ipw3945), as most wireless device drivers do not support bridging!

Enabling the CAP_NET_ADMIN capability

Bridged networking does not work out of the box. Starting with the 2.6.18 kernel released in September 2006, using TUN/TAP networking requires the CAP_NET_ADMIN capability to be enabled (bug #103010).

  • Install the qemu package (qemu-kvm in lucid or newer):
    sudo apt-get install qemu
  • Install the Linux capabilities tools:
    sudo apt-get install libcap2-bin
  • Ubuntu 10.04 (Lucid) or newer - grant the CAP_NET_ADMIN capability to specific users. This capability should be assigned cautiously, as it allows those users to disrupt all networking on the system.
    • Give 'qemu' the inheritable CAP_NET_ADMIN capability, for 64-bit:
      sudo setcap cap_net_admin=ei /usr/bin/qemu-system-x86_64

      for 32-bit:
      sudo setcap cap_net_admin=ei /usr/bin/qemu
    • To allow the specified users to inherit the CAP_NET_ADMIN capability, edit ''/etc/security/capability.conf'': <pre>cap_net_admin USER-NAME-HERE</pre>
  • Before Ubuntu 10.04 (Lucid) - this grants the CAP_NET_ADMIN capability to all users. Use with extreme caution, as it gives every user the ability to disrupt all networking on the system.
    • Give qemu the forced CAP_NET_ADMIN capability:
      sudo setcap cap_net_admin=ep /usr/bin/qemu-system-*
  • Note that the filesystem capabilities above will be lost on every qemu upgrade, since setting filesystem capabilities is not supported by Ubuntu packaging (see FilesystemCapabilities for details on the blockers). You can check whether the capability is currently set with getcap, as sketched after this list. For a good overview of Linux capabilities and QEMU see this writeup.
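A minimal check using the getcap tool from the same libcap2-bin package (adjust the path to whichever qemu binary you actually use; empty output means no capability is set):

getcap /usr/bin/qemu-system-x86_64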

Creating a network bridge on the host

Install the bridge-utils package:

sudo apt-get install bridge-utils

We are now going to change the network configuration<<FootNote(This assumes you are not using NetworkManager to manage your network card (eth0 in this example). If you are using NetworkManager, either disable it or tell it not to manage that card, and use the configuration for your card as the network configuration of the bridge (br0 in the example).)>>. To do this, we first stop the network<<FootNote(This is needed for example when you move from DHCP to a static address: it stops the DHCP client, which a restart will not do if you have already changed the configuration. If you are changing this remotely, prepare your new configuration in a separate file and then use a script to stop networking, put the new configuration in place and start it back up.)>>:

sudo invoke-rc.d networking stop

If you are connected remotely and therefore cannot stop the network, continue with the steps below and finish with sudo invoke-rc.d networking restart. Be careful: if you make a mistake in the configuration, you will not be able to recover the connection.

To set up a bridge, edit /etc/network/interfaces and comment out or replace the existing configuration (replace the values with those of your network):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
        address 192.168.0.10
        network 192.168.0.0
        netmask 255.255.255.0
        broadcast 192.168.0.255
        gateway 192.168.0.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0


Or, using DHCP:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0

This creates a virtual interface br0.

Now restart the network:
sudo /etc/init.d/networking restart
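
Once the network is back up, you can check that the bridge exists and that eth0 is attached to it, using the brctl tool from bridge-utils:

brctl show br0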

Configuring ubuntu-vm-builder to create bridged guests by default

This is handled by giving ubuntu-vm-builder the --bridge=br0 flag in karmic.
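
As a rough, hypothetical example (the suite and hostname are placeholders, and the exact option syntax varies between releases, so check ubuntu-vm-builder --help on your system):

sudo ubuntu-vm-builder kvm karmic --bridge=br0 --hostname guest1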

Virtual machines are defined in XML files; ubuntu-vm-builder, the tool we will use to create VMs, bases them on the template file /etc/vmbuilder/libvirt/libvirtxml.tmpl (before Ubuntu 8.10: /usr/share/ubuntu-vm-builder/templates/libvirt.tmpl). Open that file, and change:

    <interface type='network'>
      <source network='default'/>
    </interface>

To:

    <interface type='bridge'>
      <source bridge='br0'/>
    </interface>

Generating a KVM MAC

If you are managing your guests via the command line, the following script might be helpful for generating a randomized MAC. If you get an error about 'rl', install the package 'randomize-lines'.

#!/bin/sh
# Generate a random MAC address with the fixed prefix 54:52:00.
# Requires 'rl' from the randomize-lines package.
printf "54:52:00"
for i in 1 2 3; do
        printf ":"
        for j in 1 2; do
                # pick one random hex digit
                for k in 0 1 2 3 4 5 6 7 8 9 A B C D E F; do
                        echo $k
                done | rl | sed -n 1p
        done | while read m; do
                printf "%s" "$m"
        done
done
echo

Here is another way to generate a randomized MAC.

MACADDR="52:54:$(dd if=/dev/urandom count=1 2>/dev/null | md5sum | sed 's/^\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4/')"; echo $MACADDR

Converting an existing guest

If you have already created VMs, you can make them use bridged networking by changing the XML definition (in /etc/libvirt/qemu/) of the network interface, adjusting the MAC address as desired, from:

    <interface type='network'>
      <mac address='00:11:22:33:44:55'/>
      <source network='default'/>
    </interface>

to:

    <interface type='bridge'>
      <mac address='00:11:22:33:44:55'/>
      <source bridge='br0'/>
    </interface>

Note: Make sure the first octet of your MAC address is EVEN (e.g. 00:), as MAC addresses with an odd first byte (e.g. 01:) are reserved for multicast communication and can cause confusing problems. For instance, the guest will be able to receive ARP packets and reply to them, but the reply will confuse other machines. This is not a KVM issue; it is just the way Ethernet works.

You do not need to restart libvirtd to reload the changes; the easiest way is to log into virsh (a command-line tool to manage VMs), stop the VM, reread its configuration file, and restart the VM:

yhamon@paris:/etc/libvirt/qemu$ ls
mirror.xml  networks  vm2.xml
yhamon@paris:/etc/libvirt/qemu$ virsh --connect qemu:///system
Connecting to uri: qemu:///system
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list
 Id Name                 State
----------------------------------
 10 vm2                  running
 15 mirror               running

virsh # shutdown mirror
Domain mirror is being shutdown

virsh # define mirror.xml
Domain mirror defined from mirror.xml

virsh # start mirror
Domain mirror started

The VM "mirror" is now using bridged networking.

DNS and DHCP Guests

libvirt uses dnsmasq to hand out IP addresses to guests that are configured to use DHCP. If, on your host machine, you add 192.168.122.1 (the default IP of your host in libvirt) as the first nameserver in /etc/resolv.conf, then you can do name resolution for your guests. dnsmasq is smart enough to use the other 'nameserver' entries in your /etc/resolv.conf for resolving non-libvirt addresses. For example, if your current /etc/resolv.conf is:
search example.com
nameserver 10.0.0.1
Change this to be:
search example.com
nameserver 192.168.122.1
nameserver 10.0.0.1
Now, if you have a virtual machine named 'hardy-amd64', after starting it, you can do:
$ host hardy-amd64
hardy-amd64 has address <IP address given by dnsmasq>
Note that when using ssh you may need to use a trailing '.' after the hostname:
$ ssh hardy-amd64.
Finally, for this to work, your guest must send its hostname as part of the dhcp request. This is done automatically on many operating systems. For systems that do not send this automatically and use dhcp3, you can adjust the dhclient.conf file. For example, on Ubuntu 6.06 LTS (Dapper), adjust /etc/dhcp3/dhclient.conf to have:
send host-name "<your guest hostname here>";

IMPORTANT: Depending on your network configuration, your host's /etc/resolv.conf file might be periodically overwritten. You will have to either adjust the DHCP server on your network to hand out the additional libvirt name server to your libvirt hosts, or adjust each host machine accordingly. As there are many possible configurations for host machines, users are encouraged to look at resolvconf and/or man interfaces.
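
One possible approach on a host with a static bridge configuration is to let the resolvconf machinery add the libvirt name server via /etc/network/interfaces. The stanza below is only a sketch based on the earlier bridge example and assumes the resolvconf package is in use; adapt the addresses and search domain to your own network:

auto br0
iface br0 inet static
        address 192.168.0.10
        netmask 255.255.255.0
        gateway 192.168.0.1
        bridge_ports eth0
        dns-search example.com
        dns-nameservers 192.168.122.1 10.0.0.1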

Booting Over the Network Using PXE

The current Ubuntu release does not ship PXE binary ROM images because the source code needed to recreate them is not included in the upstream tarball. There may be a way to automate the creation of these files as part of the package. In order to use -boot n, you will need to download or create the appropriate ROM images from [1]. KVM and QEMU can emulate a number of network cards. Here are the current ROM files:

'KVM Name' (nic,model=)   'Etherboot Identification'          'Etherboot Filename'           'KVM filename'
i82551                                                                                       pxe-i82551.bin
i82557b                                                                                      pxe-i82557b.bin
i82559er                                                                                     pxe-i82559er.bin
ne2k_pci (default)        ns8390:rtl8029 -- [10ec,8029]       gpxe-0.9.3-rtl8029.rom         pxe-ne2k_pci.bin
ne2k_isa                                                                                     pxe-ne2k_isa.bin
pcnet                                                                                        pxe-pcnet.bin
rtl8139                                                                                      pxe-rtl8139.bin
e1000                     e1000:e1000-0x1026 -- [8086,1026]   gpxe-0.9.3-e1000-0x1026.rom    pxe-e1000.bin
smc91c111                                                                                    pxe-smc91c111.bin
lance                                                                                        pxe-lance.bin
mcf_fec                                                                                      pxe-mcf_fec.bin

Copy the respective file to /usr/share/kvm and/or /usr/share/qemu. <<Anchor(virtio)>>
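
For example, for the default ne2k_pci model, the downloaded gPXE image would be installed under the KVM filename from the table above (the destination directory is an assumption; use whichever of the two directories your release actually has):

sudo cp gpxe-0.9.3-rtl8029.rom /usr/share/kvm/pxe-ne2k_pci.bin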

Use virtio for Ubuntu Hardy/Intrepid or Windows guests

For Windows guests, follow these instructions. You may find network performance relatively poor (approx. 100-120 Mbit/s on my servers, which are quite fast). If you are running Ubuntu Hardy or Intrepid guests, you can enable virtio. Go to the definition file of your VM and add the virtio line to the definition of your network interface:

    <interface type='bridge'>
      <mac address='52:54:00:a0:41:92'/>
      <source bridge='br0'/>
      <model type='virtio'/>   <-- add this line, leave the rest
    </interface>

Or, if you're using KVM on the command line, add the options:

-net nic,model=virtio -net user

This improves network performance considerably (by nearly a factor of 10). For the moment it only works with Ubuntu Hardy or Intrepid guests, which is why it is not enabled by default. Note that this also fixes the issue some people report where their network connection goes away after a period of time or after a data transfer.
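
For instance, a minimal sketch of a full command line combining virtio with the user-mode networking described earlier (guest.img is a placeholder for your disk image; the memory size is arbitrary):

sudo kvm -m 512 -hda guest.img -net nic,model=virtio -net user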

Using multiple NICs with multiple subnets, i.e. VLANs

You may experience some KVM host connectivity issues when using multiple NICs, each on its own subnet/VLAN (possibly due to multiple default routes). In my case, SSH logins to the KVM host would take a long time, and connectivity would be cut when I restarted the network interfaces, making ssh sessions and virt-manager connections crash.

I needed multiple NICs, each on a separate subnet (VLAN). Each NIC is then dedicated to a specific VM on the KVM host, and the VMs connect directly to the network using a bridge device. I never experienced problems with KVM guest connectivity, only with the KVM host. I fixed the problem using the following configuration in /etc/network/interfaces on the KVM host. Please note the use of "manual" and "metric". YMMV. :D

Note: first make sure that the guest OS loads the right network drivers. This worked for me: remove the network modules 8139cp and 8139too, then modprobe 8139cp.

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
        metric 0

auto eth1
iface eth1 inet manual

auto br1
iface br1 inet dhcp
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        metric 1

auto eth2
iface eth2 inet manual

auto br2
iface br2 inet dhcp
        bridge_ports eth2
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        metric 1

# add more ethN and brN as needed
