SAN/iSCSI Administration


This page contains administration guidelines and commands for iSCSI exports on the CSL SAN.

==Adding an iSCSI LUN==

This procedure adds a new iSCSI LUN to the SAN.

===ZVOL creation===

All of the iSCSI exports on the SAN are currently backed by ZVOL block devices. ZVOLs provide much better performance than raw data files on top of a normal filesystem, and they are easily expanded and managed using ZFS.

For ease of management, related ZVOLs are typically grouped underneath a ZFS filesystem. First we create the filesystem, then we create each of the ZVOLs beneath it. For most VMs, you will need two ZVOLs: one for the root partition and one for swap.

 zfs create apocalypse/vms/<vm>
 zfs create -V 15G apocalypse/vms/<vm>/root
 zfs create -V 1G apocalypse/vms/<vm>/swap
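
ZVOLs can be inspected and resized like any other ZFS dataset; for example, to list the new ZVOLs or to grow a root volume later (the 20G size here is only an illustration):

 zfs list -r apocalypse/vms/<vm>
 zfs set volsize=20G apocalypse/vms/<vm>/root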

===iSCSI Exports===

iSCSI exports are currently done using the LIO iSCSI target managed through Pacemaker. Therefore, we need to add the new LUNs to Pacemaker, which will handle bringing them online on the appropriate host.
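
If you want to see which LUNs Pacemaker already manages before adding new ones, you can inspect the cluster configuration from the crm shell (a sketch; it assumes crmsh is available on the SAN nodes):

 crm configure show | grep iSCSILogicalUnit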

First, you need to generate a unique value for each LUN to use as a SCSI ID. We use the first half of the md5 hash of the LUN's name as a reliably unique value.

 echo -n apocalypse-<vm>-root | md5sum | head -c 16
 echo -n apocalypse-<vm>-swap | md5sum | head -c 16

You will also need a LUN number for each LUN you are adding; these are generally assigned starting from 0 and incrementing with each LUN added. To find the highest LUN number currently assigned, check the add.luns file, then give each new LUN the next number (previous highest plus 1). Add a line for each new LUN to the add.luns file in the format shown below. Make sure the LUNs in the add.luns file have been appended to the end of the luns file before clearing add.luns.

 apocalypse-<vm>-root,<LUN number>,/dev/zvol/apocalypse/vms/<vm>/root,<SCSI ID>
 apocalypse-<vm>-swap,<LUN number>,/dev/zvol/apocalypse/vms/<vm>/swap,<SCSI ID>
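
If you prefer not to compute the fields by hand, the two entries can be generated with a short shell snippet (a sketch only; the VM name and starting LUN number are values you must supply):

 VM=<vm>            # name of the new VM
 LUN=<LUN number>   # previous highest LUN number plus 1
 for part in root swap; do
   SN=$(echo -n apocalypse-$VM-$part | md5sum | head -c 16)
   echo "apocalypse-$VM-$part,$LUN,/dev/zvol/apocalypse/vms/$VM/$part,$SN" >> add.luns
   LUN=$((LUN+1))
 done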

Then, run the add-luns.sh script.

./add-luns.sh

Finally, each VM server needs to be told to rescan for iSCSI LUN changes. IMPORTANT: DO NOT restart iscsid, or you will crash all of the VMs on the server; instead, use the following command to instruct iscsid to scan for changes.

 iscsiadm -m session --rescan
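
To confirm that the rescan picked up the new LUNs, you can list the SCSI disks attached to each session (the exact output format varies with the open-iscsi version):

 iscsiadm -m session -P 3 | grep "Attached scsi disk"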

===Multipath Configuration===

Multipath now needs to be configured on the client servers. This is most easily done on one server and then copied to the others. On one of the client servers, run:

 multipath

Then, for each LUN you are adding, run the following (the WWID that multipath reports embeds the scsi_sn you assigned, which is why you can grep for it):

 ls /dev/mapper/ | grep 090427202e6db84c #(replace this with the SCSI_SN of the LUN)
 36001405090427202e6db84c000000000

For each LUN, add an entry in /etc/multipath.conf similar to the following, replacing the WWID with the value returned above and the alias with the name of the LUN.

 multipath {
     wwid 36001405090427202e6db84c000000000
     alias apocalypse-ion-root
     features                "1 queue_if_no_path"
 }
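
Note that these multipath { } stanzas belong inside the multipaths { } section of /etc/multipath.conf. With both LUNs of a VM added, the section would look roughly like this (the swap WWID is a placeholder for the value you looked up above):

 multipaths {
   multipath {
      wwid 36001405090427202e6db84c000000000
      alias apocalypse-ion-root
      features                "1 queue_if_no_path"
   }
   multipath {
      wwid <WWID of the swap LUN>
      alias apocalypse-ion-swap
      features                "1 queue_if_no_path"
   }
 }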

Next, for each LUN, you need to remove the temporary multipath alias (which defaults to the WWID) by running:

 multipath -f 36001405090427202e6db84c000000000 #(replace this with the WWID)

Finally, re-run the multipath command to regenerate the multipath devices with the new aliases.

You should now have an entry in /dev/mapper/ for each LUN you added to the SAN. Once you have verified that everything is working, you need to replicate this configuration across the other systems connected to the SAN. Copy the updated /etc/multipath.conf to them, then run:

 iscsiadm -m session --rescan
 multipath
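
As a final sanity check on each host, confirm that the aliased devices now exist (assuming the aliases configured above):

 ls /dev/mapper/ | grep apocalypse-<vm>
 multipath -ll apocalypse-<vm>-root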

See the main SAN article for a list of systems currently connected to the SAN via iSCSI.