Warning: Livedoc is no longer being updated and will be deprecated shortly. Please refer to https://documentation.tjhsst.edu.

SAN/iSCSI Administration

From Livedoc - The Documentation Repository
Revision as of 12:21, 12 June 2013 by Andrew Hamilton (talk | contribs) (added multipath configuration details)

This page contains administration guidelines and commands for iSCSI exports on the CSL SAN.

Adding an iSCSI LUN

This procedure adds a new iSCSI LUN to the SAN.

ZVOL creation

All of the iSCSI exports on the SAN are currently backed by ZVOL block devices. ZVOLs provide much better performance than raw data files on top of a normal filesystem, and they are easily expanded and managed using ZFS.

For ease of management, related ZVOLs are typically grouped underneath a ZFS filesystem. First we create the filesystem, then we create each of the ZVOLs beneath it. For most VMs, you will need two ZVOLs: one for the root partition and one for swap.

zfs create apocalypse/vms/<vm>
zfs create -V 15G apocalypse/vms/<vm>/root
zfs create -V 1G apocalypse/vms/<vm>/swap

iSCSI Exports

iSCSI Exports are currently done using the LIO iSCSI Target managed through Pacemaker. Therefore, we need to add the new LUNs to Pacemaker which will handle bringing them online on the appropriate host.

First you need to generate a unique value for each LUN to use as a SCSI ID. We use the first half of the MD5 hash of the LUN's name as a reliably unique value.

echo -n apocalypse-<vm>-root | md5sum | head -c 16
echo -n apocalypse-<vm>-swap | md5sum | head -c 16
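
The two commands above can be combined into one loop; a sketch using ion (the example VM name used elsewhere on this page) as a stand-in:

```shell
# Generate a 16-character SCSI ID for each LUN of a VM.
# "ion" is a stand-in; substitute the real VM name.
vm=ion
for part in root swap; do
  sn=$(echo -n "apocalypse-$vm-$part" | md5sum | head -c 16)
  echo "$part: $sn"
done
```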

You will also need a LUN number for each LUN you are adding; these are generally assigned starting from 0 and incrementing with each LUN added. To find the highest-numbered LUN currently in the cluster, run the following and read through the configuration:

crm configure show
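
If the configuration is long, the highest LUN number can be pulled out mechanically, assuming LUN numbers appear as lun="N" attributes in the output. The sample text below stands in for real `crm configure show` output; in practice, pipe the real command through the same filter:

```shell
# Hypothetical excerpt of `crm configure show` output.
sample='primitive apocalypse-ion-root ocf:heartbeat:iSCSILogicalUnit params lun="9"
primitive apocalypse-ion-swap ocf:heartbeat:iSCSILogicalUnit params lun="10"'

# Extract every lun="N" attribute, keep only the digits, and take the largest.
highest=$(printf '%s\n' "$sample" \
  | grep -o 'lun="[0-9]*"' \
  | sed 's/[^0-9]//g' \
  | sort -n | tail -1)
echo "$highest"   # the next LUN to assign is highest + 1
```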

Now it's time to add the LUNs to the cluster configuration. This will be done using a Shadow CIB (a cloned copy of the configuration). This is *IMPORTANT*: if the entire configuration is not added at once, the cluster may behave unexpectedly.

First, we create the new Shadow CIB:

crm(live)# cib
crm(live)cib# new <vm>
##Notice we are now viewing/editing the (<vm>) configuration, not the (live) configuration
crm(<vm>)cib# cd ..

Next we create the iSCSILogicalUnit configurations; make sure to adjust the LUN number, scsi_sn, and path values appropriately.

crm(<vm>)# configure
crm(<vm>)configure# primitive apocalypse-<vm>-root ocf:heartbeat:iSCSILogicalUnit \
 params implementation="lio" target_iqn="iqn.1992-03.edu.tjhsst:target:apocalypse.0" lun="9" \
 path="/dev/zvol/apocalypse/vms/<vm>/root" scsi_sn="<sn>" \
 op start interval="0" timeout="10s" \
 op stop interval="0" timeout="60s"
crm(<vm>)configure# primitive apocalypse-<vm>-swap ocf:heartbeat:iSCSILogicalUnit \
 params implementation="lio" target_iqn="iqn.1992-03.edu.tjhsst:target:apocalypse.0" lun="10" \
 path="/dev/zvol/apocalypse/vms/<vm>/swap" scsi_sn="<sn>" \
 op start interval="0" timeout="10s" \
 op stop interval="0" timeout="60s"

Next, you need to add the new iSCSILogicalUnits to the apocalypse-luns resource group to ensure that they are appropriately managed in conjunction with the rest of the iSCSI system.

crm(<vm>)configure# edit apocalypse-luns
##The configuration entry for apocalypse-luns will open in EDITOR (usually vim)
##Add apocalypse-<vm>-root apocalypse-<vm>-swap to the end of the configuration line, just before apocalypseTarget
##When you are done, save and quit the editor (default :wq)
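
After editing, the group definition should look something like the following. The pre-existing members shown here are illustrative; what matters is that the two new LUNs are appended just before apocalypseTarget:

```
group apocalypse-luns apocalypse-ion-root apocalypse-ion-swap \
    apocalypse-<vm>-root apocalypse-<vm>-swap apocalypseTarget
```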

Finally, commit the configuration changes to the Shadow CIB and leave the configuration submenu:

crm(<vm>)configure# commit
crm(<vm>)configure# cd ..

At this point if you have any questions, now is a very good time to ask someone to double-check your configuration before you crash the SAN. Otherwise, if you are VERY sure you know what you're doing, commit the changes to the live configuration.

crm(<vm>)# cib
crm(<vm>)cib# commit <vm>
INFO: committed '<vm>' shadow CIB to the cluster
crm(<vm>)cib# exit

Finally, start the cluster monitor and make sure your new resources have started:

crm_mon

Multipath Configuration

Multipath now needs to be configured on the client servers. This is most easily done on one server and then copied to the others. On one of the client servers, rescan the iSCSI sessions so the new LUNs appear:

iscsiadm -m session --rescan

Then, for each LUN you are adding, run:

ls /dev/mapper/ | grep 090427202e6db84c #(replace this with the SCSI_SN of the LUN)

For each LUN, add an entry in /etc/multipath.conf similar to the following, replacing the WWID with the value returned above and the alias with the name of the LUN.

multipath {
    wwid     36001405090427202e6db84c000000000
    alias    apocalypse-ion-root
    features "1 queue_if_no_path"
}

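Note that these blocks live inside the top-level multipaths section of /etc/multipath.conf. A sketch with both LUNs of the example VM (the second WWID is hypothetical, shown only to illustrate the layout):

```
multipaths {
    multipath {
        wwid     36001405090427202e6db84c000000000
        alias    apocalypse-ion-root
        features "1 queue_if_no_path"
    }
    multipath {
        wwid     360014050123456789abcdef000000000
        alias    apocalypse-ion-swap
        features "1 queue_if_no_path"
    }
}
```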
Next, for each LUN, remove the temporary multipath alias (which defaults to the WWID) by running:

multipath -f 36001405090427202e6db84c000000000 #(replace this with the WWID)

Finally, re-run the multipath command to regenerate the multipath devices with the new aliases.

You should now have an entry in /dev/mapper/ for each LUN you added to the SAN. Once you have verified that everything is working, you need to replicate this configuration across the other systems connected to the SAN. Copy the updated /etc/multipath.conf to them, then run:

iscsiadm -m session --rescan

See the main SAN article for a list of systems currently connected to the SAN via iSCSI.