SAN/Add iSCSI Initiator
The following software packages must be installed:
emerge -a iproute2 open-iscsi multipath-tools
For security reasons, all iSCSI traffic on the SAN is run on an isolated VLAN (currently VLAN 16). This VLAN needs to be added to the server's trunk link(s) to allow the server to access the SAN.
The following kernel modules need to be available. NOTE - these MUST be built as kernel modules, not into the kernel, or iscsid will fail to start.
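The module list is not reproduced here; on a stock open-iscsi setup the relevant kernel options are usually the following (an assumption — verify against your own kernel config):

CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_ISCSI_TCP=m

You can confirm the modules load cleanly with modprobe iscsi_tcp before starting iscsid.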
First, the SAN VLAN needs to be configured on the server. The following lines need to be added/modified in /etc/conf.d/net
# Add VLAN 16 to the below list
vlans_bond0="16 1600 1802"
vlan16_name="vlan16"
# Replace the last two octets of the below IP with the last two
# octets of the server's IP address
config_vlan16="172.16.17.45/16"
Then, either restart the server's trunk interface (a BAD idea if it's in production) or use the following commands to manually configure the interface:
ip link add link bond0 name vlan16 type vlan id 16
ip addr add 172.16.17.45/16 dev vlan16
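Once the interface is up, it is worth confirming the address took and that the server can reach a SAN portal (172.16.3.1, the address used in the discovery step below):

ip addr show vlan16
ping -c 3 172.16.3.1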
First you will need to configure the server's initiator name and alias for iSCSI. Edit /etc/iscsi/initiatorname.iscsi and set the following lines:
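The file should end up looking something like the following (the hostname portion of the IQN and the alias are placeholders; substitute your server's name):

InitiatorName=iqn.1992-03.edu.tjhsst:servername
InitiatorAlias=servername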
You will also need to make sure that iSCSI is set to automatically start connections at boot. Edit /etc/iscsi/iscsid.conf and set the following line:
node.startup = automatic
Next, make sure that iscsid is in the default runlevel and started:
rc-update add iscsid default
/etc/init.d/iscsid start
Next you will need to "discover" the apocalypse iSCSI target; execute the following commands to do so, one per portal address:
iscsiadm -m discovery -t sendtargets -p 172.16.3.1
iscsiadm -m discovery -t sendtargets -p 172.16.30.1
Finally, tell iSCSI to start the connections to apocalypse. Note that we open two connections, one to each IP address. In the next section, we will use multipath to aggregate these connections for speed and reliability.
iscsiadm -m node -T iqn.1992-03.edu.tjhsst:target:apocalypse.0 -p 172.16.3.1 -l
iscsiadm -m node -T iqn.1992-03.edu.tjhsst:target:apocalypse.0 -p 172.16.30.1 -l
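If both logins succeed, the two sessions can be confirmed with:

iscsiadm -m session

You should see one session per portal address before moving on to multipath.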
Next we will configure multipath, which creates a virtual block device on top of our two iSCSI connections. We will use these block devices as the disks for our VMs since they provide redundant paths. Multipath will also queue I/O requests while the SAN fails over between servers, providing the time buffer needed for the failover to take place.
To configure multipath, simply copy /etc/multipath.conf from an existing VM server and then make sure that both multipath and multipathd are in the default runlevel.
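For example (OpenRC commands; the two service names are assumed to match the init scripts shipped by multipath-tools as installed here):

rc-update add multipath default
rc-update add multipathd default
/etc/init.d/multipathd start

Once multipathd is running, multipath -ll should list the aggregated device with both iSCSI paths underneath it.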