Client setup
Client protocol extension
Install a suitable kernel module
% rpm -q kernel-module-openafs-`uname -r`
kernel-module-openafs-2.6.18-128.1.10.el5-1.4.7-68.2.SL5.i686
Upon startup of the afs service, the following should show up:
# service afs restart
...
# dmesg | grep patch
libafs with 'rxosd support' 'vicep-access' patches
(vicep-access is not necessary for RxOSD operation per se, but is required for making proper use of a Lustre or GPFS backend.)
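If those lines do not show up, a quick sanity check is whether the OpenAFS kernel module (libafs) is loaded at all:
# lsmod | grep libafs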
Proper afs commands
Install a suitable openafs package
# rpm -q openafs
openafs-1.4.10.osd.vpa.r691-77alma.sl5.x86_64
# fs protocol
Enabled protocols are RXOSD (1 parallel streams on connections with rtt > 10 ms).
# osd help
osd: Commands are:
...
Memory cache
When using a cluster filesystem backend, memcache has proven to be the fastest alternative (at least for Lustre via InfiniBand).
In /etc/sysconfig/afs:
OPTIONS="-memcache -stat 8192 -chunksize 20 -daemons 8 -volumes 64"
This selects memcache and boosts the chunk size to 1 MB. To avoid ending up with too few cache chunks overall, also raise the cache size:
CACHESIZE="262144"
256 MB seems a reasonable size.
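The arithmetic behind these numbers (-chunksize is a power of two, CACHESIZE counts 1 KB blocks):
2^20 bytes = 1 MB per chunk
262144 x 1 KB = 256 MB of cache, i.e. 256 MB / 1 MB = 256 chunks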
Accessing a cluster filesystem backend
E.g. Lustre:
Make sure the shared vice partition is accessible:
# ln -s /lustre/... /vicepal
or, alternatively:
# mount -t lustre 141.34.218.7@tcp:/zn_test /vicept
(see "Using a Lustre storage backend" below)
Make the client detect the vice partition:
# service afs-vicep-access restart
...
# dmesg | tail
...
Visible OSD 2 lun 37 cell 33 == ifh.de
Fileserver setup
Adding an rxosd service
It is sensible to have fileserver and rxosd processes share the same machines. This way, access to files even within a single volume can be spread across all fileservers (provided that clients don't access the same (set of) file(s) all the time).
create a dedicated vice partition
# mkdir /vicepc
<edit /etc/fstab>
# mount /vicepc
# touch /vicepc/OnlyRxosd
- The last command ensures that the fileserver process will not claim this partition.
create the service instance (with admin privileges or -localauth)
# bos create <server-machine> rxosd simple /usr/afs/bin/rxosd
There is no harm in doing this even before creating a vice partition.
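To verify that the instance came up, query bos status (the output shown is only a sketch of what a running instance reports):
# bos status <server-machine> rxosd
Instance rxosd, currently running normally.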
add the new OSD to the OSDDB
# osd createosd -id <id> -name <osd-name> -ip <machine-ip> -lun <lun> 1m 64g
- id
must not have existed in the OSDDB before; check with osd l -all
- osd-name
must also be unique
- lun
numeric representation of the vice partition letter(s): /vicepa is 0, /vicepz is 25, /vicepaa is 26, and so on; hence /vicepc is 2 and /vicepal is 37
- size constraints
1m and 64g are arbitrary. Note that the minimum size directly influences which files get stored to OSD. If this step is done before creating the vice partition and/or the service instance, the OSD will appear as "down" in the osd l output. No harm done. A concrete example follows below.
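As a concrete sketch (the name lustre1 is made up here; id 2 and lun 37 match the dmesg example above):
# osd createosd -id 2 -name lustre1 -ip <machine-ip> -lun 37 1m 64g
# osd l
The second command should now list the new OSD; as noted, it will show as "down" until both the vice partition and the rxosd instance exist.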
Using a Lustre storage backend
Basically, this is the same process as adding a regular rxosd service. The vice partition resides in a Lustre filesystem, however.
A larger maximum file size for your Lustre OSD is not required; the fileserver will never get to see a size that surpasses any client's cache size.
Lustre mountpoint
E.g.
# mount -t lustre 141.34.218.7@tcp:/zn_test /vicept
# touch /vicept/OnlyRxosd
The rest is identical to the above.
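Put together, the whole server-side sequence might look like this (the osd id and name are made up; lun 19 corresponds to /vicept):
# mount -t lustre 141.34.218.7@tcp:/zn_test /vicept
# touch /vicept/OnlyRxosd
# bos create <server-machine> rxosd simple /usr/afs/bin/rxosd
# osd createosd -id 3 -name lustre-t -ip <machine-ip> -lun 19 1m 64g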
Symlink
If Lustre is mounted anywhere else:
# ln -s /lustre/vpxl /vicepal
# touch /vicepal/OnlyRxosd
# touch /vicepal/AlwaysAttach
The last command enables the use of a regular directory as a vice partition. The rest is identical to the above.