Building your own kernel and ramdisk is necessary if you want to
customize the kernel configuration,
keep up with the absolute latest SSI code available through CVS,
or test your SSI bugfix or kernel enhancement with UML.
Otherwise, feel free to skip this section.
SSI source code is available as official release tarballs and through CVS. The CVS repository contains the latest, bleeding-edge code. It can be less stable than the official release, but it has features and bugfixes that the release does not have.
The latest SSI release can be found at the top of this release list. At the time of this writing, the latest release is 0.6.5.
Download the latest release.
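If you fetch it from the command line, the download might look something like this. The exact URL is an assumption based on the usual SourceForge download layout; use whatever location the release list actually gives you.

host$ wget http://prdownloads.sourceforge.net/ssic-linux/ssi-linux-2.4.16-v0.6.5.tar.bz2

Then extract the tarball.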
host$ tar jxvf ~/ssi-linux-2.4.16-v0.6.5.tar.bz2
Determine the corresponding kernel version number from the release name. It appears before the SSI version number. For the 0.6.5 release, the corresponding kernel version is 2.4.16.
Follow these instructions to do a CVS checkout of the latest SSI code. The module name is ssic-linux.
You also need to check out the latest CI code. Follow these instructions to do that. The module name is ci-linux.
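For reference, an anonymous checkout of both modules might look like the following. The pserver hostnames follow the usual SourceForge pattern and are assumptions; the instructions linked above are authoritative. (Press Enter at the password prompts.)

host$ cvs -d:pserver:anonymous@cvs.ssic-linux.sourceforge.net:/cvsroot/ssic-linux login
host$ cvs -z3 -d:pserver:anonymous@cvs.ssic-linux.sourceforge.net:/cvsroot/ssic-linux co ssic-linux
host$ cvs -d:pserver:anonymous@cvs.ci-linux.sourceforge.net:/cvsroot/ci-linux login
host$ cvs -z3 -d:pserver:anonymous@cvs.ci-linux.sourceforge.net:/cvsroot/ci-linux co ci-linux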
To do a developer checkout, you must be a CI or SSI developer. If you are interested in becoming a developer, read Section 8.3 and Section 8.4.
Determine the corresponding kernel version with:

host$ head -4 ssic-linux/ssi-kernel/Makefile
VERSION = 2
PATCHLEVEL = 4
SUBLEVEL = 16
EXTRAVERSION =
In this case, the corresponding kernel version is 2.4.16. If you're paranoid, you might want to make sure the corresponding kernel version for CI is the same.
host$ head -4 ci-linux/ci-kernel/Makefile
VERSION = 2
PATCHLEVEL = 4
SUBLEVEL = 16
EXTRAVERSION =
They will only differ when I'm merging them up to a new kernel version. There is a window between checking in the new CI code and the new SSI code. I'll do my best to minimize that window. If you happen to see it, wait a few hours, then update your sandboxes.
host$ cd ssic-linux
host$ cvs up -d
host$ cd ../ci-linux
host$ cvs up -d
host$ cd ..
Download the appropriate kernel source. Get the version you determined in Section 4.1. Kernel source can be found on this U.S. server or any one of these mirrors around the world.
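For the 2.4.16 kernel, for instance, the download might look like this; the path follows the standard kernel.org mirror layout.

host$ wget http://www.kernel.org/pub/linux/kernel/v2.4/linux-2.4.16.tar.bz2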
Extract the source. This will take a little time.
host$ tar jxvf ~/linux-2.4.16.tar.bz2
or
host$ tar zxvf ~/linux-2.4.16.tar.gz
Follow the appropriate instructions, based on whether you downloaded an official SSI release or did a CVS checkout.
Apply the kernel patch that ships in the SSI release tree.
host$ cd linux
host$ patch -p1 <../ssi-linux-2.4.16-v0.6.5/ssi-linux-2.4.16-v0.6.5.patch
Apply the UML patch from either the CI or SSI sandbox. It will fail to patch Makefile; this is expected, so don't worry about it.
host$ cd linux
host$ patch -p1 <../ssic-linux/3rd-party/uml-patch-2.4.18-22
Copy CI and SSI code into place.
host$ cp -alf ../ssic-linux/ssi-kernel/. .
host$ cp -alf ../ci-linux/ci-kernel/. .
Apply the GFS patch from the SSI sandbox.
host$ patch -p1 <../ssic-linux/3rd-party/opengfs-ssi.patch
Apply any other patches from ssic-linux/3rd-party at your discretion. They have had little or no testing in the UML environment, and the KDB patch is rather useless there.
Configure the kernel with the provided configuration file. The following commands assume you are still in the kernel source directory.
host$ cp config.uml .config
host$ make oldconfig ARCH=um
Build the kernel image and modules.
host$ make dep linux modules ARCH=um
To install the kernel, you must be able to loopback mount the GFS root image. You need to do a few things to the host system to make that possible.
Download any version of OpenGFS after 0.0.92, or check out the latest source from CVS.
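An anonymous CVS checkout might look like the following; the pserver hostname follows the usual SourceForge pattern and is an assumption.

host$ cvs -d:pserver:anonymous@cvs.opengfs.sourceforge.net:/cvsroot/opengfs login
host$ cvs -z3 -d:pserver:anonymous@cvs.opengfs.sourceforge.net:/cvsroot/opengfs co opengfs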
Apply the appropriate kernel patches from the kernel_patches directory to your host kernel source tree. Make sure you enable the /dev filesystem, but do not have it automatically mounted at boot. (When you configure the kernel, select 'File systems -> /dev filesystem support' and deselect 'File systems -> /dev filesystem support -> Automatically mount at boot'.) Build the kernel as usual, install it, rewrite your boot block, and reboot.
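In the resulting .config, those two choices should come out as the following lines (these are the standard 2.4 devfs option names; verify them against your own tree):

CONFIG_DEVFS_FS=y
# CONFIG_DEVFS_MOUNT is not set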
Configure, build and install the GFS modules and utilities.
host$ cd opengfs
host$ ./autogen.sh --with-linux_srcdir=host_kernel_source_tree
host$ make
host$ su
host# make install
Configure two aliases for one of the host's network devices. The first alias should be 192.168.50.1, and the other should be 192.168.50.101. Both should have a netmask of 255.255.255.0.
host# ifconfig eth0:0 192.168.50.1 netmask 255.255.255.0
host# ifconfig eth0:1 192.168.50.101 netmask 255.255.255.0
cat the contents of /proc/partitions. Select two device names that you're not using for anything else, and make two loopback devices with their names. For example:
host# mknod /dev/ide/host0/bus0/target0/lun0/part1 b 7 1
host# mknod /dev/ide/host0/bus0/target0/lun0/part2 b 7 2
Finally, load the necessary GFS modules and start the lock server daemon.
host# modprobe gfs
host# modprobe memexp
host# memexpd
host# Ctrl-D
Your host system now has GFS support.
Loopback mount the shared root.
host$ su
host# losetup /dev/loop1 root_cidev
host# losetup /dev/loop2 root_fs
host# passemble
host# mount -t gfs -o hostdata=192.168.50.1 /dev/pool/pool0 /mnt
Install the modules into the root image.
host# make modules_install ARCH=um INSTALL_MOD_PATH=/mnt
host# Ctrl-D
You have to repeat some of the steps from Section 4.5. Extract another copy of the OpenGFS source, and call it opengfs-uml. Add the line marked with a + below to make/modules.mk.in.
KSRC := /root/linux-ssi
INCL_FLAGS := -I. -I.. -I$(GFS_ROOT)/src/include -I$(KSRC)/include \
+             -I$(KSRC)/arch/um/include \
              $(EXTRA_INCL)
DEF_FLAGS := -D__KERNEL__ -DMODULE $(EXTRA_FLAGS)
OPT_FLAGS := -O2 -fomit-frame-pointer
Configure, build and install the GFS modules and utilities for UML.
host$ cd opengfs-uml
host$ ./autogen.sh --with-linux_srcdir=UML_kernel_source_tree
host$ make
host$ su
host# make install DESTDIR=/mnt
Change root into the loopback mounted root image, and use the --uml argument to cluster_mkinitrd to build a ramdisk.
host# /usr/sbin/chroot /mnt
host# cluster_mkinitrd --uml initrd-ssi.img 2.4.16-21um
Move the new ramdisk out of the root image, and assign ownership to the appropriate user. Wrap things up.
host# mv /mnt/initrd-ssi.img ~username
host# chown username ~username/initrd-ssi.img
host# umount /mnt
host# passemble -r all
host# losetup -d /dev/loop1
host# losetup -d /dev/loop2
host# Ctrl-D
host$ cd ..
Pass the new kernel and ramdisk images into ssi-start with the appropriate pathnames for KERNEL and INITRD in ~/.ssiuml/ssiuml.conf. An example for KERNEL would be ~/linux/linux. An example for INITRD would be ~/initrd-ssi.img.
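Assuming the file uses simple shell-style assignments (an assumption; check your existing ~/.ssiuml/ssiuml.conf for its exact format), the two lines might look like this:

KERNEL=~/linux/linux
INITRD=~/initrd-ssi.img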
Stop the currently running cluster and start it again.
host$ ssi-stop
host$ ssi-start
You should see a three-node cluster booting with your new kernel. Feel free to take it through the exercises in Section 3 to make sure it's working correctly.