
TableOfContents

cabling

This is the rear view of the SCSI cabling:

attachment:iole12-setup.png [attachment:iole12-setup.fig xfig source]

RAID array setup and mapping

RAID         drives  LD  Partition  SCSI Channel  ID  Host   Adapter  device
raid-iole    1-4     0   0          0             1   iole1  0        sda
raid-iole    1-4     0   1          0             2   iole1  0        sdb
raid-iole    5-8     1   -          0             3   iole1  0        sdc
raid-iole    9-12    2   0          1             1   iole2  0        sda
raid-iole    9-12    2   1          1             2   iole2  0        sdb
raid-iole    13-16   3   -          1             3   iole2  0        sdc
raid-iolaos  1-4     0   0          0             1   iole1  1        sdd
raid-iolaos  1-4     0   1          0             2   iole1  1        sde
raid-iolaos  5-8     1   -          0             3   iole1  1        sdf
raid-iolaos  9-12    2   0          1             1   iole2  1        sdd
raid-iolaos  9-12    2   1          1             2   iole2  1        sde
raid-iolaos  13-16   3   -          1             3   iole2  1        sdf

kickstart partitioning

clearpart --drives=sda,sdd --initlabel
part raid.01 --size   256 --ondisk sda
part raid.03 --size  1024 --ondisk sda
part raid.05 --size  2048 --ondisk sda
part raid.07 --size 10240 --ondisk sda
part raid.09 --size     1 --ondisk sda --grow
part raid.02 --size   256 --ondisk sdd
part raid.04 --size  1024 --ondisk sdd
part raid.06 --size  2048 --ondisk sdd
part raid.08 --size 10240 --ondisk sdd
part raid.10 --size     1 --ondisk sdd --grow
raid /boot      --level=1 --device=md0 --fstype ext2 raid.01 raid.02
raid /afs_cache --level=1 --device=md1 --fstype ext3 raid.03 raid.04
raid swap       --level=1 --device=md2 --fstype swap raid.05 raid.06
raid /          --level=1 --device=md3 --fstype ext3 raid.07 raid.08
raid /usr1      --level=1 --device=md4 --fstype ext3 raid.09 raid.10

This should be safe to reuse if anything ever has to be reinstalled, since there are no data partitions on any of these block devices. To play it safe, add the following to the cks3 files:

$cfg{PREINSTALL_ADD} = "sleep 86400";

and rerun CKS3. Then check /proc/partitions on virtual console #2: you should see six disks, and sda and sdd should be the small ones. It is then safe to run "killall -TERM sleep" to continue the installation.
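The /proc/partitions check can be scripted. The helper below (list_disks is a hypothetical name, not from the original setup) prints only the whole-disk entries and their sizes, so the six disks and the two small ones (sda, sdd) are easy to spot. A minimal sketch, assuming the device naming in the table above:

```shell
# list_disks: print whole-disk entries (name and size in 1K blocks)
# from /proc/partitions-style input on stdin; partitions such as sda1
# are filtered out by the anchored regex.
list_disks() {
  awk '$4 ~ /^sd[a-f]$/ { print $4, $3 }'
}

# Example against a captured snippet (sizes are illustrative, not real):
list_disks <<'EOF'
major minor  #blocks  name

   8     0   71687369 sda
   8     1     262144 sda1
   8    16  143374741 sdb
EOF
# On the installer's virtual console: list_disks < /proc/partitions
```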

adding the MD devices for the vice partitions

First, primary partitions spanning the whole device were created on sdb, sdc, sde, and sdf with fdisk. Note that the partition type must be fd (Linux RAID autodetect).

Then the MD devices were created:

mdadm --create /dev/md5 --level=1 --raid-devices=2 /dev/sdb1 /dev/sde1
mdadm --create /dev/md6 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1

Initialization takes a very long time! When all four devices were initialized at the same time, it took more than 24 hours, even though the maximum bandwidth was set to 50000 in /proc/sys/dev/raid/speed_limit_max. The limiting factor seemed to be the writes to raid-iolaos.
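Resync progress can be followed in /proc/mdstat. The small helper below (mdprogress is a hypothetical name, not part of the original setup) filters out just the md device lines and the resync/recovery progress lines; a sketch, run here on a captured snippet with made-up figures:

```shell
# mdprogress: keep only md device lines and resync/recovery progress
# lines from /proc/mdstat-style input on stdin
mdprogress() {
  grep -E '^md|resync|recovery'
}

# Example on a captured snippet (percentages and speeds are illustrative):
mdprogress <<'EOF'
Personalities : [raid1]
md5 : active raid1 sde1[1] sdb1[0]
      143374656 blocks [2/2] [UU]
      [=>...................]  resync =  7.3% (10526080/143374656) finish=1250.4min speed=1770K/sec
unused devices: <none>
EOF
# On the live system: mdprogress < /proc/mdstat
```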

installing GRUB on sdd

While kickstart will happily put /boot onto /dev/md0, it installs GRUB in the master boot record of /dev/sda only. Hence, if raid-iole is not operational, neither iole1 nor iole2 can boot.

To remedy this, GRUB was manually installed into the master boot record of sdd on both systems. After starting grub (as root):

grub> device (hd0) /dev/sdd
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

This of course assumes that /boot is a separate partition. The device command accounts for the fact that /dev/sdd will be the first BIOS drive if raid-iole is unavailable.

Some more information can be found [http://www.linuxquestions.org/questions/archive/8/2005/03/1/297043 here].

iole1_and_iole2_Setup (last edited 2008-10-30 11:40:13 by localhost)