Note: The procedures documented in this article are not endorsed by XenVZ.co.uk. They come with no guarantees whatsoever. Following the procedures in this article could destroy your data, or leave you with a raft of important services stacked on top of a hopelessly unstable system. Proceed entirely at your own risk.
For Xen VPSs, XenVZ currently offers Ubuntu 8.04 Hardy and 9.04 Jaunty, but nothing later than that.
If you start with 9.04 Jaunty and try an out-of-the-box in-place upgrade to 9.10 Karmic, you find that your machine will no longer boot.
So if you want to use something later like 9.10 Karmic or 10.04 Lucid, you need to do something a bit cleverer.
For Xen VPSs, XenVZ uses a stock kernel which appears to be some version of 2.6.18 out of RHEL 5.something. This is all patched and updated and all fine for many purposes. This stock kernel is, by default, booted from outside the VPS and is used regardless of what is configured inside the VPS.
However Karmic's default boot configuration requires /proc/<process-id>/mountinfo, which I believe was a new feature for kernel 2.6.26.
To get Karmic working, we would either need to alter the boot configuration, or make the kernel provide the required feature.
Kernel 2.6.18 seems fairly ancient to me, so my approach would be to use a later kernel.
Fortunately, XenVZ now offer an "expert" feature for configuring one's own kernel via PvGrub/PyGrub. This can be turned on through a tick box in the control panel under "Custom Kernel".
It may be possible to use a kernel from Ubuntu, but for various reasons, I have gone down the path of using a kernel from Debian. (After writing the original version of this article, I switched the system over to an Ubuntu kernel and it worked fine; better, in fact, because it booted up with fewer warnings. So it may be possible, and even better, to use an Ubuntu kernel from the beginning. Please let me know if you have any success doing this.)
First guess might be the Debian 5.0 Stable "Lenny" kernel which is version 2.6.26. But this apparently has problems running as a domU under Xen.
The next guess I had was to use a kernel from Debian Testing "Squeeze", which would be a version 2.6.32 kernel.
Nicely for us, kernels nowadays seem to support Xen domU operation even in the "ordinary" kernels, so we don't need to go hunting around for a special Xen-domU-enabled kernel. (This seems to be related to "pvops": http://wiki.xensource.com/xenwiki/XenParavirtOps.) So we should be able to use the basic kernel from Squeeze and it should work.
So, our steps would be:
Set your VPS up with Ubuntu 9.04 Jaunty using the XenVZ web control panel.
Install any updates which have been published since the system image was made:
# aptitude update
# aptitude safe-upgrade
Download the deb package for the appropriate kernel from Debian Squeeze. This can be done using "wget" (which isn't installed by default, so on a fresh system you will first have to put it on: aptitude install wget).
Install this kernel using "dpkg -i".
(This is lifted from http://wiki.debian.org/PyGrub) Create the environment needed for running update-grub:
# mkdir /boot/grub
# echo "(hd0) /dev/sda" > /boot/grub/device.map
# mknod /dev/sda b 202 0
(This is a bit I added in the middle.) We need to run "update-grub" to generate /boot/grub/menu.lst. However, because the above kernel does not have the word "xen" in its name, "update-grub" will not recognise this kernel as suitable for Xen domU use, so ordinarily it skips it. There may be a good way of fixing this, but for now I just fudge "update-grub" to think all kernels are suitable for Xen domU use. Edit "/usr/sbin/update-grub", look for the bit that goes "is_xen=" and change it to "is_xen=1". (I've raised Bug 586756 against Ubuntu, and provided a patch.)
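If you prefer not to edit the script by hand, the same fudge can be applied with sed. The exact form of the "is_xen=" line inside update-grub is an assumption on my part, so check the result with grep afterwards; the sketch below demonstrates the substitution on a scratch copy rather than the real /usr/sbin/update-grub:

```shell
# Demonstrate the substitution on a scratch file; for real, run the
# same sed with -i against /usr/sbin/update-grub. The exact form of
# the line in the script is an assumption here.
printf 'is_xen=\n' > update-grub.sample
sed -i 's/^\([[:space:]]*\)is_xen=$/\1is_xen=1/' update-grub.sample
grep 'is_xen' update-grub.sample   # now shows is_xen=1
```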
(This is lifted from http://wiki.debian.org/PyGrub) Create the first /boot/grub/menu.lst based on the content of /boot:
# update-grub
In the XenVZ web control panel, turn on "Power User" and "Custom Kernel".
In "/etc/event.d", copy the file "xvc0" to "hvc0", then edit "hvc0" and replace all instances of "xvc0" with "hvc0". This is needed because in the later kernel, the console device name changes from xvc0 to hvc0.
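The copy-and-substitute can be done in one go with sed. The getty line below is a made-up stand-in, since the real file's contents vary; on the actual system you would read from /etc/event.d/xvc0 and write to /etc/event.d/hvc0:

```shell
# Demo files; on the real system, substitute the /etc/event.d paths.
mkdir -p event.d-demo
printf 'start on runlevel 2\nexec /sbin/getty 38400 xvc0\n' > event.d-demo/xvc0
# Rename every occurrence of xvc0 while copying the job file:
sed 's/xvc0/hvc0/g' event.d-demo/xvc0 > event.d-demo/hvc0
cat event.d-demo/hvc0
```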
In "/etc/fstab", change "sda1" to "xvda1" and "sda2" to "xvda2". This is needed because in the later kernel, the block devices exposed through Xen get different names. Also you may want to check the sixth field of all records and make sure it is zero for anything which isn't a real filesystem. This is suggested because from a certain version of Ubuntu, it starts to try to fsck the swap partition and complains about not being able to find fsck.swap.
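The device renaming is mechanical enough for sed. The fstab content below is a made-up example, not taken from a real XenVZ image; for the real thing, run the same sed with -i against /etc/fstab after taking a backup:

```shell
# A made-up fstab for demonstration; the real target is /etc/fstab.
cat > fstab.sample <<'EOF'
/dev/sda1  /     ext3  defaults  0  1
/dev/sda2  none  swap  sw        0  0
EOF
# sda -> xvda on every line:
sed -i 's|/dev/sda|/dev/xvda|' fstab.sample
cat fstab.sample
```

The sixth (fsck pass) field still wants checking by eye; for swap it should read 0, as above.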
You should now be able to re-start the VPS and it should boot up on the new kernel.
Assuming it comes up, check that your swap is enabled (cat /proc/swaps), and check that console login access is working (to get console access, follow the instructions provided in the web control panel).
There may be other things we can do to clean this up further — please let me know if you have any suggestions.
If you ignored my suggestion to get console login access working earlier then stop and do it now. One day you will need it and when you need it, (a) you will really need it, and (b) you will not then be in a position to set it up. You need to check that your console access is working ahead of time so that it is ready and working for when you need it.
Once you have the new kernel, you should be able to upgrade to Ubuntu 9.10 Karmic. The procedure is outlined at https://help.ubuntu.com/community/KarmicUpgrades. In essence it goes as: run "do-release-upgrade".
If you are currently on 9.04 Jaunty, it should suggest an upgrade to 9.10 Karmic. The upgrade will probably ask you about a network configuration file which has been modified. I took the new version of the file and it worked.
That should all run through and then ask you to re-boot. You should be fine to reboot at this point. When it comes back up you should then be on 9.10 Karmic.
I don't recall whether it is Karmic or Lucid, but one of them changes how console login access works relative to the previous version. So you may now want to check the contents of "/etc/init". You should have a file "tty1.conf". Copy this file to "hvc0.conf" (upstart only reads files in /etc/init whose names end in .conf), then edit it and change all instances of "tty1" to "hvc0". We then need to get init ("upstart") to re-read its configuration; "initctl reload-configuration" should do this, and in any case a re-boot will get it working soon enough.
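This is the same sed trick as before. The getty line below is a made-up stand-in for the real tty1.conf; on the actual system the source is /etc/init/tty1.conf and the result should be written to /etc/init/hvc0.conf:

```shell
# Demo stand-in for /etc/init/tty1.conf; on the real system write the
# result to /etc/init/hvc0.conf (the .conf suffix matters to upstart).
mkdir -p init-demo
printf 'start on runlevel [2345]\nexec /sbin/getty -8 38400 tty1\n' > init-demo/tty1.conf
sed 's/tty1/hvc0/g' init-demo/tty1.conf > init-demo/hvc0.conf
cat init-demo/hvc0.conf
```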
Check that console login access is working.
Once we have 9.10 Karmic up and running, we can upgrade to 10.04 Lucid. This can be done in the same way as the upgrade to Karmic (do-release-upgrade).
Lucid seems to install "Plymouth" (a boot splash manager) by default, which is a bit pointless for a co-located VPS. Unfortunately it seems that some critical-looking things depend on Plymouth, so I didn't fancy uninstalling it. But we can disable Plymouth easily enough by moving its configuration files out of /etc/init:
# mkdir /etc/init-disabled
# mv /etc/init/plymouth* /etc/init-disabled
Also, you may want to look at the kernel boot parameters in /boot/grub/menu.lst. You may see a line like:
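For illustration only (the kernel version, UUID and exact parameters here are placeholders, not values from any real system), such an entry looks roughly like:

```
title   Ubuntu, kernel 2.6.32-5-686
root    (hd0)
kernel  /boot/vmlinuz-2.6.32-5-686 root=UUID=<your-root-uuid> ro quiet splash
initrd  /boot/initrd.img-2.6.32-5-686
```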
(If I understand correctly, the UUID will be different for each system.)
You may want to remove the 'splash' and 'quiet' parameters: 'splash' enables a graphical splash screen during boot, which is pointless on a VPS, and 'quiet' reduces the amount of chatter during boot-up, which is probably the opposite of what we want here.
Again, check that console login access is working.
I also took the opportunity to disable the redundant "tty" entries in /etc/init, by moving them to /etc/init-disabled.
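These gettys are redundant because the only console a Xen domU of this sort has is hvc0. Moving them aside is a one-liner; the demo below uses scratch directories, and on the real system the glob would be /etc/init/tty*.conf with /etc/init-disabled as the destination:

```shell
# Scratch directories standing in for /etc/init and /etc/init-disabled.
mkdir -p init-demo2 init-disabled-demo
touch init-demo2/tty1.conf init-demo2/tty2.conf init-demo2/tty3.conf
# Move every ttyN job out of the directory upstart scans:
mv init-demo2/tty*.conf init-disabled-demo/
ls init-disabled-demo
```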
Since first writing the article, I discovered that we can use Ubuntu kernels. Using Ubuntu kernels makes it boot more smoothly. I only switched to an Ubuntu kernel at the end of the process, but it may have worked to use an Ubuntu kernel from the beginning.
It turns out that following the above approach results in a system which doesn't do UTF-8 by default. If you want UTF-8 support, you have to put that on yourself. See Setting up a default locale under Ubuntu 10.04 Lucid.
If you have any further information or suggestions, please let me know.