Revision 16 as of 2006-02-07 21:30:51


TableOfContents

Installation

Caveats

The following classes of systems need some extra attention for installation:

Overview

SL3/4 hosts are installed using kickstart ([http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/sysadmin-guide/ch-kickstart2.html online manual]). The repository is mirrored from [ftp://ftp.scientificlinux.org/linux/scientific/ the ftp server at FNAL] and is located on the installation server, z.ifh.de, in /net1/z/DL6/SL. The host profiles for the kickstart install are kept in /net1/z/DL6/profiles, and some files needed during the postinstallation, before the AFS client is available, in /net1/z/DL6/postinstall (accessible through the http server running on z). More files, and most utility scripts, are located in /project/linux/SL3. Most of them are exactly the same for SL4. If in doubt, try the SL3 script unless you can find one for SL4.

Installation takes the following steps:

Anchor(cfvamos)

System Configuration in VAMOS

Choose a default derived from sl3-def. Defaults starting with "sl3-" are 32bit, those starting with "sl3a-" are 64bit. These will mainly differ in the settings for OS_ARCH and AFS_SYSNAME (see the sl3a-mod modifier). 64bit capable systems can run the 32bit version as well.

OS_ARCH is read by several tools in the following steps to determine what to install. The same is true for CF_SL_release: This variable determines which minor SL release the system will use. Both OS_ARCH and CF_SL_release affect the choice of installation kernel & initrd, installation repository, and yum repositories for updating and installing additional packages.
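For illustration only, the way these two variables select a repository can be sketched as a simple path computation (path layout taken from the examples further down this page; no such script exists):

```shell
# Hypothetical sketch: OS_ARCH and CF_SL_release select the
# installation repository on the installation server z.
OS_ARCH=i386
CF_SL_release=304
repo="/net1/z/DL6/SL/${CF_SL_release}/${OS_ARCH}"
echo "$repo"
```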

It should now be safe to do this step without disabling sue on the system, since sue.bootstrap will no longer permit OS_ARCH to change.

Run the Workflow whenever a system changes from DLx to SL3 or back, since some tools (scout) can only consult the netgroups to decide how things should be done. This is wrong, wrong, wrong, but ...

Anchor(profiles)

Creating System Profiles

This is done with the tool CKS3.pl which reads "host.cks3" files and creates "host.ks" files from them, using additional information from VAMOS, the AMS directory, or the live system still running DL4, DL5, SL3 or SL4, as well as pre/post script building blocks from /project/linux/SL{3|4}/{pre|post}.

CKS3.pl is located in /project/linux/SL3/CKS3, and is fully perldoc'd. A sample DEFAULT.cks with many comments is located in the same directory. This is exactly the same for SL4, and the only SL4-specific option (SELINUX) is documented.

To create a profile:

Anchor(ai)

Activating Private Key Distribution

If you followed the instructions above (read the CKS3 output), you already know what to do: {{{ ssh mentor sudo activ-ai <host> }}} This activates the one-shot mechanism for giving the host (back) its private keys (root password, kerberos keyfile, vamos/ssh keys, ...). The init script retrieved during postinstall starts /products/ai/scripts/ai-start, which NFS-exports a certain directory to mentor, puts an SSL public key there, and asks mentor to encrypt the credentials with that key and copy them into the directory. If the host has its credentials after the installation, it worked, and no other system can possibly have them as well. If it hasn't, the keys are burned and have to be scrubbed. This hasn't happened yet, but who knows.
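The encrypt-and-copy exchange can be sketched with openssl (hypothetical file names and key type; the real ai-start and mentor-side tooling differ in detail):

```shell
# Sketch of the one-shot credential handover (all names hypothetical).
tmp=$(mktemp -d)

# 1. The installing host generates a keypair and exposes the public
#    key (in reality via the NFS-exported directory).
openssl genrsa -out "$tmp/host.key" 2048 2>/dev/null
openssl rsa -in "$tmp/host.key" -pubout -out "$tmp/host.pub" 2>/dev/null

# 2. mentor encrypts the credentials with that public key ...
echo "root:s3cret" > "$tmp/credentials"
openssl pkeyutl -encrypt -pubin -inkey "$tmp/host.pub" \
    -in "$tmp/credentials" -out "$tmp/credentials.enc"

# 3. ... and only the installing host, holding the private key,
#    can recover them.
plain=$(openssl pkeyutl -decrypt -inkey "$tmp/host.key" \
    -in "$tmp/credentials.enc")
echo "$plain"
```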

If ai-start fails, the system will retry after 5 minutes. Mails will be sent to linuxroot from both mentor and the installing system, indicating that this happened. The reason is usually that this step was forgotten. Remember it has to be repeated before every reinstallation.

Anchor(boot)

Booting the system into installation

There are several options:


System/Boot Method Matrix

Package Handling & Automatic Updates

See the "aaru" feature for how all this (except kernels) is handled.

There are three distinct mechanisms for package handling on the client:

SL Standard & Errata Packages

<!> For yum on SL3, the command to create the necessary repository data is {{{yum-arch <dir>}}}. For SL4, it is {{{createrepo <dir>}}}.

Errata are synced to arwen with /project/linux/SL3/sync-arwen.pl and then to z with /project/linux/SL3/sync-z.pl (still manually). Packages to be installed additionally by /sbin/yumsel, or updated by /sbin/aaru.yum.boot and /sbin/aaru.yum.daily, are NOT taken from the errata mirror created this way, but from "staged errata" directories created (also still manually) by the script /project/linux/SL3/yum/stage-errata/stage-errata. The sync/stage scripts send mail to linuxroot@ifh.de unless run in dryrun mode. The stage-errata script is fully perldoc'ed; the others are too simple to need it.

For SL4, there are separate sync scripts in /project/linux/SL4. The stage-errata script, however, is now shared between SL3 and SL4. Creation of repository data for SL4 (with createrepo) has to be done via ssh and nfs (currently on arwen) until our new installation server is available. This is slow; don't worry, just let it finish.

Addon Packages (Zeuthen)

Most of these are found in /afs/ifh.de/packages/RPMS/@sys/System, with their (no)src rpms in /afs/ifh.de/packages/SRPMS and the source tarballs in /afs/ifh.de/packages/SOURCES. Some come from external sources like the dag repository (http://dag.wieers.com/home-made/), freshrpms (http://freshrpms.net/) or the SuSE 8.2/9.0 distributions. These latter ones are typically not accompanied by a src rpm.

After adding a package, make it available to yum like this:

{{{
cd /afs/.ifh.de/packages/RPMS/@sys/System
yum-arch .   # SL3
createrepo . # SL4
arcx vos release $PWD
}}}

Selectable Addon Packages (Zeuthen)

There's a way to provide packages in selectable repositories. For example, this was used to install an openafs-1.2.13 update on selected systems while the default for SL3 was still 1.2.11, and we didn't want to have 1.2.13 on every system.

These packages reside in directories SL/<release>/<arch>_extra/<name> on the installation server. For example, the afs update packages for 3.0.4/i386 are in /net/z/DL6/SL/304/i386_extra/afs1213 . To have clients access this repository, set any vamos variable starting with CF_YUM_extrarepos (CF_YUM_extrarepos or CF_YUM_extrarepos_host or ...) to a space separated list of subdirectories in <arch>_extra.

For example, CF_YUM_extrarepos='afs1213' will make aaru.yum.create add this repository (accessible via nfs or http) to the host's yum configuration.
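The stanza that ends up in the host's yum configuration might then look roughly like this (illustrative only; the exact section name and baseurl are chosen by aaru.yum.create):

```ini
[afs1213]
name=Zeuthen extra packages: afs1213
baseurl=http://z.ifh.de/SL/304/i386_extra/afs1213
```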

To make available packages in such a repository, you must yum-arch the *sub*directory (not <arch>_extra). While the installation server is still running DL5, use /project/linux/SL3/YUM-DL5/yum-arch-dl5 (ignore the error messages about it being unable to open some file in /tmp).

Note that matching kernel modules must still reside in a directory searched by the update script (see below). This should generally not cause problems since these aren't updated by yum anyway.

Additional Modules for Kernel Updates

Handled by the kernel feature, the script /usr/sbin/KUSL3.pl reads its information about which kernels to install from VAMOS variables Linux_kernel_version and a few others, and carries out whatever needs to be done in order to install new kernels and remove old ones. The script is perldoc'ed.

Basically, set Linux_kernel_version in VAMOS, and on the host (after a sue.bootstrap) run KUSL3.pl. Make sure you like what it would do, then run KUSL3.pl -x.

Kernels and additional packages are found in the repository mirror including the errata directory (CF_SL_release is used to find those), and in /afs/ifh.de/packages/RPMS/@sys/System (and some subdirectories).

If the variable Linux_kernel_modules is set to a (whitespace separated) list of module names, KUSL3 will install (and require the availability of) the corresponding kernel-module rpm. For example, if Linux_kernel_version is 2.4.21-20.0.1.EL 2.4.21-27.0.2.EL, and Linux_kernel_modules is foo bar, the mandatory modules are the kernel-module-foo and kernel-module-bar packages for each of the two kernel versions (four packages in all).
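The expansion is just the cross product of the two lists; assuming the usual kernel-module-<name>-<kernelversion> naming, it can be sketched as:

```shell
# Hypothetical sketch: expand the two VAMOS variables into the
# mandatory kernel-module package names (naming convention assumed).
Linux_kernel_version="2.4.21-20.0.1.EL 2.4.21-27.0.2.EL"
Linux_kernel_modules="foo bar"

mandatory=""
for kernel in $Linux_kernel_version; do
    for module in $Linux_kernel_modules; do
        mandatory="$mandatory kernel-module-${module}-${kernel}"
    done
done
echo $mandatory
```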

Generally speaking, kernel module packages must comply with the SL conventions.

KUSL3 will refuse to install a kernel if mandatory packages are not available. Non-mandatory packages include kernel-source, sound modules, and kernel-doc.

ALSA

This is not needed on SL4.

Matching kernel-module-alsa-`uname -r` packages are installed by KUSL3.pl if (a) they are available and (b) the package alsa-driver is installed (the latter should be the case on desktops after yumsel has run for the first time).

Both are created from the alsa-driver srpm found in /packages/SRPMS/System. Besides manual rebuilds, there is now the option to use the script /project/linux/SL3/modules/build-alsa-modules.pl.

Short instructions for building the kernel modules package manually (for an easier method, [#alsascrp see below]):

{{{
sed -i 's/^\(EXTRAVERSION.*\)custom/\1/' Makefile
make oldconfig
make dep
make clean
}}}

Anchor(alsascrp)

Scripted build of the kernel modules packages:

ESD CAN Module (for PITZ Radiation Monitor)

This hasn't been done yet for SL4.

This is similar to the ALSA modules, but:

Nvidia

This hasn't been done yet for SL4.

Again, similar to alsa. Maybe a bit simpler since the spec will deal with the kernel sources correctly and without further attention (it makes a copy of the source directory and then does the right thing).

  1. install the right kernel-source package (there's a build requirement)
  2. install the srpm:
    rpm -ivh /packages/SRPMS/System/nvidia-driver-1.0.7174-3.src.rpm
  3. build (on an SMP system; on a UP system the define changes accordingly):
    rpmbuild -ba nvidia-driver-1.0.7174-3.spec
    • on i386, will build i386 userspace packages
    • on x86_64, will build userspace and kernel package for current kernel
    rpmbuild -bb --target i686 nvidia-driver-1.0.7174-3.spec
    • on i386, will build kernel module for running kernel
    rpmbuild -bb --target i686 --define 'kernel 2.4.21-27.0.2.EL' ...
    • on i386, will build kernel module for other kernels
    rpmbuild -bb --define 'kernel ...' --define 'build_module 1' nvidia...
    • on x86_64, will build kernel module for other kernel
  4. copy the .rpms to /afs/.ifh.de/packages/@sys/System/nvidia, yum-arch and release
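When several kernels need module packages, the per-kernel invocations from step 3 can be wrapped in a small helper; the function below only prints the command (a dry-run sketch, not an existing script):

```shell
# Dry-run sketch: emit the rpmbuild call for one kernel version.
nvidia_build_cmd() {
    echo "rpmbuild -bb --target i686 --define 'kernel $1' nvidia-driver-1.0.7174-3.spec"
}

# Print the call for each kernel that still needs a module package.
for k in 2.4.21-20.0.1.EL 2.4.21-27.0.2.EL; do
    nvidia_build_cmd "$k"
done
cmd=$(nvidia_build_cmd 2.4.21-27.0.2.EL)
```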

XFS (SL4)

There are two ways to use the XFS filesystem which is not supported by the standard SL kernels:

To do the latter for a specific kernel:

  1. make sure the right kernel[-smp]-devel package is installed

  2. rebuild the source rpm:
    rpmbuild --rebuild --define 'kernel_topdir /lib/modules/2.6.9-22.0.1.ELsmp/build' \
            /packages/SRPMS/System/xfs/kernel-module-xfs-2.6.9-22.EL-0.1-1.src.rpm
    Note that the version-release in the source rpm (here: 2.6.9-22.EL) need not match the kernel for which you want to build (here: 2.6.9-22.0.1.EL), so there is no need to create and store a new source rpm.
  3. copy the rpm to /afs/.ifh.de/packages/@sys/System/xfs, createrepo, and release

Adding a new SL3 release

There are quarterly releases of SL, following Red Hat's updates to RHEL. Each new release must be made available for installation and updates. The procedure is the same for SL3 and SL4. Just substitute filenames and paths as appropriate:

Step 1: Mirror the new subdirectory

Modify sync-arwen.sh and sync-z.sh to include the new release. Make sure there's enough space on both arwen and z. Now sync-arwen, then sync-z. If you're using 30rolling for testing, make a link like this:

/net1/z/DL6/SL/304 -> 30rolling
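The link itself is a plain symlink; demonstrated here in a scratch directory (the real location is /net1/z/DL6/SL on the installation server):

```shell
# Demo in a scratch directory; on z this would be done in
# /net1/z/DL6/SL instead.
dir=$(mktemp -d)
mkdir "$dir/30rolling"      # stands in for the rolling mirror tree
ln -sfn 30rolling "$dir/304"
target=$(readlink "$dir/304")
echo "$target"
```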

Step 2: Create empty extra postinstall repositories

Note: as of SL 3.0.5, these are no longer used and no longer included in the yum.conf automatically created.

{{{
mkdir /net1/z/DL6/SL/304/i386_post
cd /net1/z/DL6/SL/304/i386_post
/project/linux/SL3/YUM-DL5/yum-arch-dl5 .

mkdir /net1/z/DL6/SL/304/x86_64_post
cd /net1/z/DL6/SL/304/x86_64_post
/project/linux/SL3/YUM-DL5/yum-arch-dl5 .
}}}

If some packages are needed at this stage, of course put them there...

Step 3: Create staged errata directories

Modify /project/linux/SL3/yum/stage-errata/stage-errata.cf to include the new release. Note if you're trying 30rolling as a test for the release, you must configure 30rolling, not 304 (or whatever). Now run stage-errata.

Step 4: Make the kernel/initrd available for PXE boot

Go into /tftpboot on z. Do something like

{{{
cp -i /net1/z/DL6/SL/304/i386/images/SL/pxeboot/vmlinuz      vmlinuz.sl304
cp -i /net1/z/DL6/SL/304/x86_64/images/SL/pxeboot/vmlinuz    vmlinuz.sl304amd64
cp -i /net1/z/DL6/SL/304/i386/images/SL/pxeboot/initrd.img   initrd.sl304
cp -i /net1/z/DL6/SL/304/x86_64/images/SL/pxeboot/initrd.img initrd.sl304amd64
}}}

Then cd into pxelinux.cfg. Make copies of the relevant configuration files (cp SL303-i386-ks SL304-i386-ks; cp SL303-x86_64-ks SL304-x86_64-ks) and edit them accordingly (s/303/304/g).
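The copy-and-substitute step can be done like this (demonstrated in a scratch directory with made-up file contents; on z the files live in /tftpboot/pxelinux.cfg):

```shell
# Demo of copying a PXE config for the new release and fixing the
# release number in place (file contents are illustrative only).
dir=$(mktemp -d)
printf 'kernel vmlinuz.sl303\nappend initrd=initrd.sl303\n' \
    > "$dir/SL303-i386-ks"
cp "$dir/SL303-i386-ks" "$dir/SL304-i386-ks"
sed -i 's/303/304/g' "$dir/SL304-i386-ks"
cat "$dir/SL304-i386-ks"
```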

Step 5: Make the release available in VAMOS

Fire up the GUI, select "vars" as the top object, go to CF_SL_release, choose the "values_host" tab, and add the new value to the available choices. Set it on some test host.

Step 6: test

Make sure this works and sets the right link:

  /project/linux/SL3/PXE/pxe <testhost>

Make sure this chooses the right directory:

  cd /net1/z/DL6/profiles
  ./CKS3.pl <testhost>

Make sure SL3U works correctly:

  ssh <testhost>
  /project/linux/SL3/SL3U/SL3U.pl yes please

Try an installation:

Try updating an existing installation:

Booting a rescue system

There are several ways to do this, including:

From CD1 of the distribution

Simply boot from CD1 of the distribution. At the boot prompt, type linux rescue.

Over the network with the unified installation CD

Just boot whatever entry you normally would for installation, but add the keyword rescue to the command line.

Building Kernel Packages

32-bit SL3

First install the kernel srpm (not kernel-source). Make your changes to the spec, add patches etc.

rpmbuild --sign -ba kernel-2.4.spec

This will build

rpmbuild --sign -ba --target i686 kernel-2.4.spec

This will build

Trying to turn off the build of the hugemem kernel breaks the spec.

Additional modules for these are built as for any other SL kernel, with one exception of course:

Building the kernel-module-openafs packages

For ordinary SL kernels, this is done at FNAL or CERN, hence we needn't bother. But for our own kernels we have to do this ourselves.

/!\ This only works correctly on non-SMP build systems. On an SMP system, the build will work but the modules will not.

Install the kernel-source, kernel, and kernel-smp RPMs on the build system; they're all needed.

Then run:

{{{
PATH=/usr/kerberos/bin:$PATH rpmbuild --rebuild --sign --target i686 --define 'kernel 2.4.21...' openafs-...src.rpm
}}}

References

http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/sysadmin-guide/ch-kickstart2.html

http://linux.duke.edu/projects/yum/