Revision 2 as of 2009-06-09 08:25:35


Client setup

Client protocol extension

Install a suitable kernel module

% rpm -q kernel-module-openafs-`uname -r`
kernel-module-openafs-2.6.18-128.1.10.el5-1.4.7-68.2.SL5.i686 
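
As a quick sanity check, the expected package name can be derived from the running kernel release. The helper below is only an illustrative sketch (module_pkg is not part of OpenAFS):

```shell
# Hypothetical helper: compose the expected module package name
# for a given kernel release string.
module_pkg() {
    echo "kernel-module-openafs-$1"
}

# Query rpm for the package matching the running kernel;
# complain if it is missing.
pkg=$(module_pkg "$(uname -r)")
rpm -q "$pkg" >/dev/null 2>&1 || echo "install ${pkg} before starting the afs service"
```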

Upon startup of the afs service, the following should show up:

# service afs restart
...
# dmesg | grep patch                           
libafs with   'rxosd support' 'vicep-access' patches 

(vicep-access is not necessary for RxOSD operation per se, but is required for making proper use of a Lustre or GPFS backend.)

Proper afs commands

Install a suitable openafs package

# rpm -q openafs
openafs-1.4.10.osd.vpa.r691-77alma.sl5.x86_64 
# fs protocol
Enabled protocols are  RXOSD (1 parallel streams on connections with rtt > 10 ms). 
# osd help
osd: Commands are:
... 

Memory cache

When using a cluster filesystem backend, memcache has proven to be the fastest alternative (at least for Lustre via Infiniband).

In /etc/sysconfig/afs:

OPTIONS="-memcache -stat 8192 -chunksize 20 -daemons 8 -volumes 64" 

This selects the memory cache and raises the chunk size to 1 MB (-chunksize takes a power-of-two exponent, so 20 means 2^20 bytes). To avoid ending up with too few cache chunks overall, also raise the cache size:

CACHESIZE="262144" 

256 MB seems a reasonable size.
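
The arithmetic behind these two settings can be sketched as follows (values taken from the lines above):

```shell
# -chunksize 20 selects chunks of 2^20 bytes = 1024 KB.
CHUNKSIZE_LOG2=20
# CACHESIZE is given in KB; 262144 KB = 256 MB.
CACHESIZE_KB=262144

CHUNK_KB=$(( (1 << CHUNKSIZE_LOG2) / 1024 ))
NCHUNKS=$(( CACHESIZE_KB / CHUNK_KB ))
echo "chunk size: ${CHUNK_KB} KB, cache chunks: ${NCHUNKS}"
# prints: chunk size: 1024 KB, cache chunks: 256
```

With the default 64 KB chunks, the same 256 MB cache would hold 4096 chunks; boosting the chunk size is what makes watching the chunk count worthwhile.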

Accessing a cluster filesystem backend

E.g. Lustre:

Fileserver setup

Adding an rxosd service

It is sensible to have fileserver and rxosd processes share the same machines. That way, access even to files within a single volume can be spread across all fileservers (provided the clients are not all accessing the same set of files all the time).

Using a Lustre storage backend

Basically, this is the same as adding a regular rxosd service, except that the vice partition resides in a Lustre filesystem.

(!) A larger maximum file size for your Lustre OSD is not required: the fileserver will never see a size that surpasses any client's cache size.

Lustre mountpoint

E.g.

# mount -t lustre 141.34.218.7@tcp:/zn_test /vicept 
# touch /vicept/OnlyRxosd 

The rest is identical to the above.

If Lustre is mounted anywhere else:

# ln -s /lustre/vpxl /vicepal
# touch /vicepal/OnlyRxosd
# touch /vicepal/AlwaysAttach 

Creating the AlwaysAttach file enables the use of a regular directory as a vice partition. The rest is identical to the above.
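
The rule applied here can be paraphrased as: a real partition mounted at /vicepXX is attached as-is, while a plain directory is only attached if it carries an AlwaysAttach marker. A small sketch of that decision (is_attachable is a hypothetical helper, not an OpenAFS command):

```shell
# Hypothetical sketch of the attach decision for a /vicepXX path:
# a separate filesystem is always eligible; a plain directory needs
# an AlwaysAttach marker file.
is_attachable() {
    dir=$1
    if mountpoint -q "$dir" 2>/dev/null || [ -e "$dir/AlwaysAttach" ]; then
        echo yes
    else
        echo no
    fi
}
```

For example, is_attachable /vicepal reports yes only once AlwaysAttach has been created in the directory.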

Database servers

Packages

If appropriate openafs-server packages are in use, there is nothing to do.

# rpm -q openafs-server
openafs-server-1.4.10.osd.vpa.r657-74alma.2.sl5.x86_64
# rpm -qf /usr/afs/bin/osdserver
openafs-server-1.4.10.osd.vpa.r657-74alma.2.sl5.x86_64 

To keep the intrusion on existing DB servers minimal, use the dedicated openafs-osdserver package instead:

% rpm -q openafs-server openafs-osdserver
openafs-server-1.4.7-68.2.SL5.x86_64
openafs-osdserver-1.4.10.osd.vpa.r689-76alma.sl5.x86_64
% rpm -qf /usr/afs/bin/osdserver 
openafs-osdserver-1.4.10.osd.vpa.r689-76alma.sl5.x86_64 

It has no dependencies and will install regardless of the rest of your local AFS installation.

Launching the OSDDB service

With admin privileges:

bos create <db-machine> osddb simple /usr/afs/bin/osdserver 
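
If several database servers carry the osddb, the same bnode has to be created on each; a loop like the following is one way to do it (the hostnames are placeholders, and bos status is the standard way to verify the bnode afterwards):

```shell
# Placeholder hostnames -- replace with your actual DB servers.
for db in afsdb1.example.org afsdb2.example.org afsdb3.example.org; do
    bos create "$db" osddb simple /usr/afs/bin/osdserver
    bos status "$db" osddb      # verify the bnode is running
done
```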

Once the osdservers are up and running, clients should be able to see this:

% osd l
 id name(loc)     ---total space---      flag  prior. own. server lun size range
  1 local_disk                                 wr  rd                 (0kb-1mb) 

The local_disk entry is present by default.