Introduction

This page describes the installation of OpenVZ on "Ubuntu Server" as a host. To follow the practical steps in this guide, you should be comfortable with command-line applications, the Bourne Again SHell (bash) environment, and editing system configuration files with your preferred text editor.

About OpenVZ

OpenVZ is a server virtualization solution for Linux. It enables one to create multiple virtual Linux servers which are isolated from the host and from each other, based on a technique called "Operating System Virtualization". Similar techniques are used in Solaris Zones, Linux-VServer and FreeBSD jails. This technique does not use hardware virtualization like KVM, Xen or VMware. The so-called "Virtual Servers" or VPSs behave like stand-alone servers. They consume fewer resources than their hardware-virtualized counterparts, but must use the same kernel as the host; therefore you can only have Linux VPSs on a Linux host. The original documentation can be found at http://openvz.org/

Installing OpenVZ

7.04 Feisty

This installation was tested on Ubuntu Server 7.04 (Feisty)

  • Because OpenVZ is not part of the standard Ubuntu repositories, we first have to add the Debian repository for OpenVZ. Therefore, add the following line to your "/etc/apt/sources.list":

"deb http://debian.systs.org/ stable openvz" (without quotes)

  • Now we have to get the correct signing key for this repository added to our system:
sudo wget http://debian.systs.org/dso_archiv_signing_key.asc
sudo apt-key add dso_archiv_signing_key.asc
sudo rm dso_archiv_signing_key.asc
  • When this is done we can update APT and install the ovz kernel.
sudo apt-get update
sudo apt-get install ovzkernel-2.6.18

When the new kernel is installed, change the /boot/grub/menu.lst file so that it boots the ovz kernel automatically.
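As a sketch for legacy GRUB (the entry index depends on your menu.lst, so count the "title" entries first, starting from 0), point the default line at the ovz kernel entry:

# in /boot/grub/menu.lst: boot the N-th "title" entry by default (0-based)
default 0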

  • Now reboot the system into the new ovz kernel
  • We still have to install the OpenVZ control utilities: vzctl and vzquota
sudo apt-get install vzctl vzquota

Networking

  • For networking to work properly, IPv4 forwarding has to be turned on.

This enables your VPSs to communicate with the outside world. You can check it with the command cat /proc/sys/net/ipv4/ip_forward; it should read "1". If it reads "0", forwarding can be turned on by editing the "/etc/sysctl.conf" file. (Malfunctioning network settings are probably caused by this bug, see [2].)

  • To edit this file, type "sudo nano -w /etc/sysctl.conf", add the line "net.ipv4.conf.all.forwarding=1", and save the document. We can then reload this config file into the system with: "sudo sysctl -p"

8.04 Hardy

  • Install the kernel and tools
$ sudo apt-get install linux-openvz vzctl

Important! As of 2008-04-28 the linux-image-2.6.24-16-openvz kernel is broken and does not boot; see the bug report at https://bugs.launchpad.net/ubuntu/+source/linux/+bug/210672. You can either recompile your kernel or use Will Nowak's PPA repository (https://edge.launchpad.net/~compbrain/+archive), which contains the linux-image-2.6.24-17-openvz package; it boots and works perfectly. Both solutions are simple and secure, and you can find instructions in the bug report mentioned above.

  • Reboot into the openvz kernel
  • Remove the `-server` kernel (or the `-generic` one if you are on a desktop machine)
$ sudo apt-get remove --purge --auto-remove `dpkg -l linux-image-*server | awk '$1 ~ /ii/ {print $2}'`
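The backquoted pipeline lists installed (`ii`) linux-image packages whose names end in `server` and hands them to apt-get. To preview what would be removed, you can run the pipeline on its own first:

$ dpkg -l linux-image-*server | awk '$1 ~ /ii/ {print $2}'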
  • Change the sysctl variables in `/etc/sysctl.conf`

This step may become unnecessary once the vzctl package is updated.

 # On Hardware Node we generally need
 # packet forwarding enabled and proxy arp disabled
 
 net.ipv4.conf.default.forwarding=1
 net.ipv4.conf.default.proxy_arp = 0
 net.ipv4.ip_forward=1
 
 # Enables source route verification
 net.ipv4.conf.all.rp_filter = 1
 
 # Enables the magic-sysrq key
 kernel.sysrq = 1
 
 # TCP Explicit Congestion Notification
 #net.ipv4.tcp_ecn = 0
 
 # we do not want all our interfaces to send redirects
 net.ipv4.conf.default.send_redirects = 1
 net.ipv4.conf.all.send_redirects = 0
  • Apply the sysctl changes
$ sudo sysctl -p
  • Create a symlink to /vz, because most of the vz tools expect the OpenVZ folders to reside there. This step is not strictly necessary, but it can prevent problems later when other vz-related components are installed.
$ sudo ln -s /var/lib/vz /vz

Download Template(s)

Before we can create a new Virtual Private Server, we first have to either download or create a template of the distro we want to use. OpenVZ uses "templates" and "cached templates". The difference is that "templates" are a sort of recipe for "cached templates": a package manager uses them to download and build the cached template of the chosen distribution. Because cached versions of most popular distros already exist and are not that big, it is easiest to download the cached version and place it in the "/var/lib/vz/template/cache" directory (or the path you have chosen in the "/etc/vz/vz.conf" file). The cached templates can be found at http://openvz.org/download/template/cache/
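For example, fetching a cached Ubuntu template straight into place could look like this (the exact file name is an assumption; pick one that actually exists in the listing):

$ sudo wget -P /var/lib/vz/template/cache http://openvz.org/download/template/cache/ubuntu-8.04-i386-minimal.tar.gz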

Create Template

This section describes how to create a minimal Ubuntu 8.04 Hardy template. Documentation format:

  • Run the command on the OpenVZ host system
[HW] $ command
  • Run the command on the OpenVZ container
[VPS] $ command

Prerequisites

  • debootstrap
[HW] $ sudo apt-get install debootstrap

Creating template

Running debootstrap

  • Create a working directory:
[HW] $ mkdir hardy-chroot
  • Run debootstrap to install a minimal Hardy Heron system into that directory:
[HW] $ sudo debootstrap [--arch ARCH] hardy hardy-chroot

If the ARCH of the host machine is the same as the container's, you can skip the --arch option; but if you need to build an OS template for another ARCH, specify it explicitly:

  • for AMD64/x86_64, use `amd64`
  • for i386, use `i386`
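For instance, building an i386 template on an amd64 host would be:

[HW] $ sudo debootstrap --arch i386 hardy hardy-chroot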

Preparing/starting a container

Now that you have an installation created by `debootstrap`, you can run it as a container. In the example below a CT ID of 777 is used; any other unallocated ID could be used instead.

  • Moving installation to container private area
[HW] $ sudo mv hardy-chroot /vz/private/777
  • All files need to be owned by root
[HW] $ sudo chown -R root:root /vz/private/777
  • Setting initial container configuration
[HW] $ sudo vzctl set 777 --applyconfig vps.basic --save
  • Setting container's `OSTEMPLATE`
[HW] $ echo "OSTEMPLATE=ubuntu-8.04" | sudo tee -a /etc/vz/conf/777.conf >/dev/null
  • Setting container's IP address. (This is just a temporary setting for the update process to work)
[HW] $ sudo vzctl set 777 --ipadd x.x.x.x --save
  • Setting DNS server for the container (This is just a temporary setting for the update process to work)
[HW] $ sudo vzctl set 777 --nameserver x.x.x.x --save
  • Removing `udev` from the `/etc/rcS.d` and `klogd` from the `/etc/rc2.d` folders

If udev is left in place, the container might not start: it can get stuck so that even `vzctl enter` cannot reach the container's command line. If klogd is left in place, it may keep the change to runlevel 2 from finishing.

[HW] $ sudo rm /vz/private/777/etc/rcS.d/S10udev /vz/private/777/etc/rc2.d/S11klogd
  • Starting the container
[HW] $ sudo vzctl start 777

Modify the installation

  • Enter a container:
[HW] $ sudo vzctl enter 777

Warning! Do not run the commands below on the hardware node; they are only to be run within the container. Note: you will not need `sudo` within the container; you enter as root when you use `vzctl enter`.

  • Remove unnecessary packages:
[VPS] $ apt-get remove --purge busybox-initramfs console-setup dmidecode eject \
ethtool initramfs-tools klibc-utils laptop-detect libiw29 libklibc \
libvolume-id0 mii-diag module-init-tools ntpdate pciutils pcmciautils ubuntu-minimal \
udev usbutils wireless-tools wpasupplicant xkb-data tasksel tasksel-data

Note: If you want to use the `tasksel` tool, do not remove it, but then you also have to let laptop-detect stay. Note: after removing the `module-init-tools` package, a fake modprobe is needed for IPv6 addresses; see below.

  • The DHCP client can also be removed if you know that you will not need it.
[VPS] $ apt-get remove --purge --auto-remove dhcp3-client dhcp3-common
  • Clean up after udev
[VPS] $ rm -fr /lib/udev
  • Disable getty

On a usual Linux system, getty runs on virtual terminals, which a container does not have. So having getty running makes no sense; worse, it keeps complaining that it cannot open a terminal device, which clutters the logs.

[VPS] $ initctl stop tty{1,2,3,4,5,6}
[VPS] $ rm /etc/event.d/tty*
  • Set sane permissions for /root directory
[VPS] $ chmod 700 /root
  • Disable root login
[VPS] $ usermod -L root
  • "fake-modprobe" needed for IPv6 addresses
[VPS] $ ln -s /bin/true /sbin/modprobe

When IPv6 is set up, the command "modprobe -Q IPv6" is called, which fails without the fake modprobe.

  • Set the default repositories for Hardy

Make sure that you replace <YOURCOUNTRY> with your country code, keeping the trailing dot (for example, COUNTRY=us. yields us.archive.ubuntu.com). The variable must be set as a separate command: a one-line prefix assignment (`COUNTRY=... cat ...`) would only be placed in cat's environment and would not be visible when the shell expands the here-document.

[VPS] $ COUNTRY=<YOURCOUNTRY>.
[VPS] $ cat >/etc/apt/sources.list <<EOF
# Binary
deb http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
deb http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse

# Binary Canonical
# deb http://archive.canonical.com/ubuntu hardy partner

# Binary backport
# deb http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy-backports main restricted universe multiverse

# Source
# deb-src http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
# deb-src http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
# deb-src http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse

# Source backport
# deb-src http://${COUNTRY}archive.ubuntu.com/ubuntu/ hardy-backports main restricted universe multiverse

# Source Canonical
# deb-src http://archive.canonical.com/ubuntu hardy partner
EOF

Note: Only the "main restricted universe multiverse" binary repositories are enabled. Change it if you need more.

  • Apply new security updates
[VPS] $ apt-get update && apt-get upgrade
  • Install some more packages
[VPS] $ apt-get install ssh quota
  • Fix SSH host keys

This is only useful if you installed SSH above. Each individual container should have its own pair of SSH host keys. The code below will wipe out the existing SSH keys and instruct the newly-created container to create new SSH keys on first boot.

[VPS] $ rm -f /etc/ssh/ssh_host_*
[VPS] $ cat << EOF > /etc/rc2.d/S15ssh_gen_host_keys
#!/bin/sh
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -t rsa -N ''
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -t dsa -N ''
rm -f \$0
EOF
[VPS] $ chmod a+x /etc/rc2.d/S15ssh_gen_host_keys
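Note the escaped \$0 in the here-document: the backslash keeps the shell from expanding it while writing the file, so the literal $0 ends up in the script, and on first boot the script deletes itself after generating the keys. This way each container gets its own host keys exactly once.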
  • Link `/etc/mtab` to `/proc/mounts`, so `df` and friends will work:
[VPS] $ rm -f /etc/mtab
[VPS] $ ln -s /proc/mounts /etc/mtab
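To confirm the link works, `df` should now report the container's mounts without complaining about /etc/mtab:

[VPS] $ df -h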
  • After that, it makes sense to disable the `mtab.sh` script, which messes with `/etc/mtab`
[VPS] $ update-rc.d -f mtab.sh remove
  • Disable some services

In most cases you don't want klogd to run (the only exception being if you configure iptables to log some events), so you can disable it.

[VPS] $ update-rc.d -f klogd remove
  • Set default hostname
[VPS] $ echo "localhost" > /etc/hostname
  • Set `/etc/hosts`
[VPS] $ echo "127.0.0.1 localhost.localdomain localhost" > /etc/hosts
  • Add `ptys` to `/dev`

This is needed in case `/dev/pts` is not mounted after the container starts. If `/dev/ttyp*` and `/dev/ptyp*` files are present and LEGACY_PTYS support is enabled in the kernel, vzctl will still be able to enter the container.

[VPS] $ cd /dev && /sbin/MAKEDEV ptyp
  • Remove nameserver(s)

[VPS] $ > /etc/resolv.conf

  • Clean the apt cache
[VPS] $ apt-get clean
  • Cleaning up log files
[VPS] $ > /var/log/messages; > /var/log/auth.log; > /var/log/kern.log; > /var/log/bootstrap.log; \
> /var/log/dpkg.log; > /var/log/syslog; > /var/log/daemon.log; > /var/log/apt/term.log; rm -f /var/log/*.0 /var/log/*.1
  • Exit the container
[VPS] $ exit

Preparing for and packing template cache

The following commands should be run on the host system (i.e. not inside a container).

  • We don't need an IP for the container anymore, and we definitely do not need it in the template cache, so remove it
[HW] $ sudo vzctl set 777 --ipdel all --save
  • Stop the container
[HW] $ sudo vzctl stop 777
  • Change to the container's private directory
[HW] $ cd /vz/private/777
  • Now create a cached OS tarball. In the command below, you'll want to replace <arch> with your architecture (i386, amd64).

Note the space and the dot at the end of the command.

[HW] $ sudo tar czf /vz/template/cache/ubuntu-8.04-<arch>-minimal.tar.gz .
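As an optional sanity check, list the first entries of the tarball; you should see the top of a root filesystem tree (bin/, etc/, ...):

[HW] $ tar tzf /vz/template/cache/ubuntu-8.04-<arch>-minimal.tar.gz | head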
  • Cleanup
[HW] $ sudo vzctl destroy 777
[HW] $ sudo rm -f /etc/vz/conf/777.conf.destroyed

Testing template cache

  • We can now create a container based on the just-created template cache. Be sure to change <arch> to your architecture, just like you did when you named the tarball above.
[HW] $ sudo vzctl create 123456 --ostemplate ubuntu-8.04-<arch>-minimal
  • Now make sure that your new container works
[HW] $ sudo vzctl start 123456
[HW] $ sudo vzctl exec 123456 ps axf

You should see that a few processes are running.

  • Cleanup
[HW] $ sudo vzctl stop 123456
[HW] $ sudo vzctl destroy 123456
[HW] $ sudo rm -f /etc/vz/conf/123456.conf.destroyed

Administration

When we create a VPS, we must give it a number. This number must be unique, and it is used to control the VPS throughout its existence. A good guideline is to use the last three digits of the IP address you are going to assign to the VPS, e.g. 10.0.0.101 would be VPS 101.
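Following that guideline, creating, addressing and starting a container for 10.0.0.101 from the minimal template built above would look like this (template name assumed from the earlier section):

[HW] $ sudo vzctl create 101 --ostemplate ubuntu-8.04-i386-minimal
[HW] $ sudo vzctl set 101 --ipadd 10.0.0.101 --save
[HW] $ sudo vzctl start 101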

Creating a container from OS template

  • Create a container
[HW] $ sudo vzctl create <VEID> --ostemplate <the name of your template>
  • Set the IP, nameserver, hostname as described below
  • Enter the container (equivalent to a chroot)
[HW] $ sudo vzctl enter [VEID]
  • Install language support: language-pack-[LANGUAGE]-base; for English, use language-pack-en-base
[VPS] $ apt-get install language-pack-en-base

You might need to run `apt-get update` first.

  • Set timezone
[VPS] $ dpkg-reconfigure tzdata
  • Exit the container
[VPS] $ exit

Configuring a container

  • Adding IP address
[HW] $ sudo vzctl set [VEID|VENAME] --ipadd [IP_ADDRESS] --save
  • Deleting IP address
[HW] $ sudo vzctl set [VEID|VENAME] --ipdel [IP_ADDRESS] --save
  • Setting hostname
[HW] $ sudo vzctl set [VEID|VENAME] --hostname [HOSTNAME] --save
  • Setting nameserver
[HW] $ sudo vzctl set [VEID|VENAME] --nameserver [NAMESERVER_IP] --save
  • Setting virtual name
[HW] $ sudo vzctl set [VEID] --name [VENAME] --save
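Once a name is set, it can be used instead of the numeric ID in any of the vzctl commands above; for example (the name web01 is illustrative):

[HW] $ sudo vzctl set 101 --name web01 --save
[HW] $ sudo vzctl start web01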
Start, stop, take snapshot or revert to snapshot
  • Start
[HW] $ sudo vzctl start [VEID|VENAME]
  • Stop
[HW] $ sudo vzctl stop [VEID|VENAME]
  • Take snapshot
[HW] $ sudo vzctl chkpnt [VEID|VENAME] [--dumpfile <name>]
  • Revert to snapshot
[HW] $ sudo vzctl restore [VEID|VENAME] [--dumpfile <name>]
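For example, checkpointing container 101 to an explicit dump file and restoring it later (the path is illustrative):

[HW] $ sudo vzctl chkpnt 101 --dumpfile /vz/dump/vps.101.dump
[HW] $ sudo vzctl restore 101 --dumpfile /vz/dump/vps.101.dump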
Destroying a container
[HW] $ sudo vzctl destroy [VEID|VENAME]
Monitoring
  • List running VPS
[HW] $ sudo vzlist
  • List all VPS
[HW] $ sudo vzlist -a
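vzlist can also print selected fields; for instance, to show the ID, hostname, status and IP of all containers (assuming your vzlist supports the -o field list):

[HW] $ sudo vzlist -a -o veid,hostname,status,ip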