Revision 1 as of 2009-06-08 17:53:19 (comment: client and file server howto)
Client setup
Client protocol extension
Install a suitable kernel module
{{{
% rpm -q kernel-module-openafs-`uname -r`
kernel-module-openafs-2.6.18-128.1.10.el5-1.4.7-68.2.SL5.i686
}}}
Upon startup of the afs service, the following should show up:
{{{
# service afs restart
...
# dmesg | grep patch
libafs with 'rxosd support' 'vicep-access' patches
}}}
(vicep-access is not necessary for RxOSD operation per se, but is required for making proper use of a Lustre or GPFS backend.)
Proper afs commands
Install a suitable openafs package
{{{
# rpm -q openafs
openafs-1.4.10.osd.vpa.r691-77alma.sl5.x86_64
# fs protocol
Enabled protocols are RXOSD (1 parallel streams on connections with rtt > 10 ms).
# osd help
osd: Commands are:
...
}}}
Memory cache
When using a cluster filesystem backend, memcache has proven to be the fastest alternative (at least for Lustre via Infiniband).
In /etc/sysconfig/afs:
{{{
OPTIONS="-memcache -stat 8192 -chunksize 20 -daemons 8 -volumes 64"
}}}
This selects memcache and raises the chunk size to 1 MB (-chunksize takes the base-2 logarithm of the chunk size in bytes, so 20 means 2^20 bytes). To avoid ending up with too few cache chunks overall, also raise the cache size:
{{{
CACHESIZE="262144"
}}}
256 MB seems a reasonable size.
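As a sanity check (plain arithmetic, not an AFS command), the settings above imply how many chunks the memcache will hold. This assumes, as described above, that -chunksize is the base-2 logarithm of the chunk size in bytes and that CACHESIZE is given in KiB:

```shell
# Number of memcache chunks implied by the settings above.
chunk_bytes=$((1 << 20))             # -chunksize 20 -> 1 MiB per chunk
cache_bytes=$((262144 * 1024))       # CACHESIZE=262144 (KiB) -> 256 MiB
echo $((cache_bytes / chunk_bytes))  # prints 256
```

256 chunks of 1 MB each is comfortably above the point where a client starts thrashing its cache on parallel access.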
Accessing a cluster filesystem backend
E.g. Lustre:
Make sure the shared vice partition is accessible:
{{{
# ln -s /vicepal /lustre/...
}}}
or, respectively:
{{{
# mount -t lustre 141.34.218.7@tcp:/zn_test /vicept
}}}
(see "Using a Lustre storage backend" below)

Make the client detect the vice partition:
{{{
# service afs-vicep-access restart
...
# dmesg | tail
...
Visible OSD 2 lun 37 cell 33 == ifh.de
}}}
Fileserver setup
Adding an rxosd service
It is sensible to have fileserver and rxosd processes share the same machines. This way, access to files even within a single volume can be spread across all fileservers (provided that clients do not all access the same file or set of files all the time).
create a dedicated vice partition
{{{
# mkdir /vicepc
<edit /etc/fstab>
# mount /vicepc
# touch /vicepc/OnlyRxosd
}}}
The last command ensures that the fileserver process will not claim this partition.
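For the `<edit /etc/fstab>` step, an entry might look like the following. The device name /dev/sdb1 and the ext3 filesystem type are purely illustrative assumptions; use whatever matches your machine:

```
# hypothetical fstab entry for a dedicated rxosd vice partition
/dev/sdb1   /vicepc   ext3   defaults   0   2
```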
create the service instance (with admin privileges or -localauth)
{{{
# bos create <server-machine> rxosd simple /usr/afs/bin/rxosd
}}}
There is no harm in doing this even before creating a vice partition.
add the new OSD to the OSDDB
{{{
# osd createosd -id <id> -name <osd-name> -ip <machine-ip> -lun <lun> 1m 64g
}}}
- id: must not have existed in the OSDDB before; check with osd l -all
- osd-name: must also be unique
- lun: the numeric representation of the vice partition; /vicepa is 0, /vicepc is 2 and /vicepal is 37
- size constraints: 1m and 64g are arbitrary. Note that the minimum size directly influences which files get stored to OSD.

If this is done before creating the vice partition and/or the service instance, the OSD will appear as "down" in the osd l output. No harm done.
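The partition-letter-to-lun mapping follows from the examples above: single letters count a=0 through z=25, then double letters continue with aa=26, ab=27, and so on. A small shell sketch of that assumed mapping; lun_of is a hypothetical helper for checking your own partition names, not part of the AFS tools:

```shell
# lun_of: hypothetical helper computing the lun number of a vice partition.
# Assumes the naming scheme /vicepa=0 .. /vicepz=25, /vicepaa=26, /vicepab=27, ...
lun_of() {
    suffix="${1#/vicep}"                     # strip the /vicep prefix
    lun=0
    while [ -n "$suffix" ]; do
        first="${suffix%"${suffix#?}"}"      # first character of the suffix
        code=$(printf '%d' "'$first")        # its ASCII code ('a' is 97)
        lun=$(( lun * 26 + code - 96 ))      # bijective base-26: 'a' -> 1
        suffix="${suffix#?}"
    done
    echo $(( lun - 1 ))
}

lun_of /vicepa    # prints 0
lun_of /vicepc    # prints 2
lun_of /vicepal   # prints 37
```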
Using a Lustre storage backend
Basically, this is the same process as adding a regular rxosd service. The vice partition resides in a Lustre filesystem, however.
A larger maximum file size for your Lustre-OSD is not required; the fileserver will never see a size that surpasses any client's cache size.
Lustre mountpoint
E.g.
{{{
# mount -t lustre 141.34.218.7@tcp:/zn_test /vicept
# touch /vicept/OnlyRxosd
}}}
The rest is identical to the above.
Symlink
If Lustre is mounted anywhere else:
{{{
# ln -s /lustre/vpxl /vicepal
# touch /vicepal/OnlyRxosd
# touch /vicepal/AlwaysAttach
}}}
The last command enables the use of a regular directory as a vice partition. The rest is identical to the above.
Database servers
Packages
If appropriate openafs-server packages are in use, there is nothing to do.
{{{
# rpm -q openafs-server
openafs-server-1.4.10.osd.vpa.r657-74alma.2.sl5.x86_64
# rpm -qf /usr/afs/bin/osdserver
openafs-server-1.4.10.osd.vpa.r657-74alma.2.sl5.x86_64
}}}
For a minimum intrusion on existing DB servers, use the dedicated openafs-osdserver package:
{{{
% rpm -q openafs-server openafs-osdserver
openafs-server-1.4.7-68.2.SL5.x86_64
openafs-osdserver-1.4.10.osd.vpa.r689-76alma.sl5.x86_64
% rpm -qf /usr/afs/bin/osdserver
openafs-osdserver-1.4.10.osd.vpa.r689-76alma.sl5.x86_64
}}}
It has no dependencies and will install regardless of the rest of your local AFS installation.
Launching the OSDDB service
With admin privileges:
{{{
bos create <db-machine> osddb simple /usr/afs/bin/osdserver
}}}
Once the osdservers are up and running, clients should be able to see this:
{{{
% osd l
id name(loc)  ---total space---  flag prior. own. server lun size range
 1 local_disk                    wr rd                       (0kb-1mb)
}}}
The local_disk entry is a default.