
SAN/Add iSCSI Initiator

From Livedoc - The Documentation Repository
Revision as of 20:22, 20 November 2013 by Andrew Hamilton (talk | contribs) (finish article)

This page contains administration guidelines and commands for connecting a server to the iSCSI storage provided by the CSL SAN.



The following software packages must be installed:

  • iproute2
  • open-iscsi
  • multipath-tools

emerge -a iproute2 open-iscsi multipath-tools


For security reasons, all iSCSI traffic on the SAN is run on an isolated VLAN (currently VLAN 16). This VLAN needs to be added to the server's trunk link(s) to allow the server to access the SAN.


The following kernel modules need to be available. NOTE - these MUST be built as loadable modules, not built into the kernel, or iscsid will fail to start.
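The module list above can be verified before starting iscsid. A minimal check, assuming the usual open-iscsi module name (iscsi_tcp) - confirm the exact names against the kernel config in use:

```shell
# Load the iSCSI-over-TCP transport module; this pulls in libiscsi
# and scsi_transport_iscsi as dependencies. The module name is the
# usual one for open-iscsi; verify against your kernel config.
modprobe iscsi_tcp

# Confirm the iSCSI modules are loaded
lsmod | grep -i iscsi
```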


Networking Configuration

First, the SAN VLAN needs to be configured on the server. The following lines need to be added/modified in /etc/conf.d/net:

#Add VLAN 16 to the below list
vlans_bond0="16 1600 1802"

#Replace the last two octets of the below IP with the last two
#octets of the server's IP address

Then, either restart the server's trunk interface (BAD idea if it's in production) or use the following commands to manually configure the interface.

ip link add link bond0 name vlan16 type vlan id 16
ip addr add dev vlan16
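After bringing the interface up manually, it is worth confirming that the VLAN tag and address took effect:

```shell
# Show link details for the new interface; the output should
# include "vlan id 16" on the bond0 parent link
ip -d link show vlan16

# Show the address assigned to the interface
ip addr show dev vlan16
```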

iSCSI configuration

First you will need to configure the server's initiator name and alias for iSCSI. Edit /etc/iscsi/initiatorname.iscsi and modify the following lines:


You will also need to make sure that iSCSI automatically starts connections at boot. In /etc/iscsi/iscsid.conf, set the following line:

node.startup = automatic

Next, make sure that iscsid is in the default runlevel and started:

rc-update add iscsid default
/etc/init.d/iscsid start
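You can confirm that the daemon came up cleanly before moving on to discovery:

```shell
# Check the OpenRC service status; it should report "started"
/etc/init.d/iscsid status
```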

Next you will need to "discover" the apocalypse iSCSI target. Execute the following commands to do so:

iscsiadm -m discovery -t sendtargets -p
iscsiadm -m discovery -t sendtargets -p
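Once discovery succeeds, the discovered target records can be listed; each portal/IQN pair should appear on its own line:

```shell
# List all target records known to the initiator
iscsiadm -m node
```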

Finally, tell iSCSI to start the connections to apocalypse. Note that we open two connections, one to each IP address. In the next section, we will use multipath to aggregate these connections for speed and reliability.

iscsiadm -m node -T iqn.1992-03.edu.tjhsst:target:apocalypse.0 -p -l
iscsiadm -m node -T iqn.1992-03.edu.tjhsst:target:apocalypse.0 -p -l
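After logging in, verify that both sessions are active and that the SCSI disks appeared:

```shell
# Show active iSCSI sessions; there should be one per portal
iscsiadm -m session

# The new LUNs show up as additional SCSI disks
ls /dev/disk/by-path/ | grep iscsi
```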

Multipath configuration

Next we will configure multipath, which creates a virtual block device on top of our two iSCSI connections. We will use these block devices as the disks for our VMs since they provide redundant paths. Multipath will also queue I/O requests while the SAN fails over between servers, providing the time buffer needed for the failover to complete.

To configure multipath, simply copy /etc/multipath.conf from an existing VM server, then make sure that both multipath and multipathd are in the default runlevel.
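The runlevel setup mirrors the iscsid step above; a sketch of the commands, assuming the standard init script names shipped by multipath-tools:

```shell
# Add multipath (device map creation) and multipathd (path monitoring)
# to the default runlevel, then start the daemon
rc-update add multipath default
rc-update add multipathd default
/etc/init.d/multipathd start

# List the multipath maps; each LUN should show two paths,
# one per iSCSI session
multipath -ll
```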