SAN

The CSL SAN (Storage Area Network) is a redundant cluster providing iSCSI and NFS storage to other servers. Its primary purpose is to provide iSCSI storage for VM hard drives.

Hardware

Snares and Bottom each have an LSI PCI-E dual-port SAS-2 HBA. They are connected via copper SFF-8088 cables to Apocalypse such that each server has access to all of the drives in Apocalypse. Either server, either HBA, or any backplane component within Apocalypse can fail without a loss of functionality.
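
Because each server has its own path into Apocalypse, a quick way to verify the redundant cabling (a sketch only; the exact device names depend on the enclosure and kernel) is to compare what each server sees:

 lsscsi -t                 # list SCSI devices with their SAS transport addresses
 ls /dev/disk/by-path/     # every drive in Apocalypse should appear on both Snares and Bottom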

In addition, Rockhopper is included in the cluster as a standby member, providing a third cluster member to break quorum ties and prevent split-brain situations. Because it has no connection to a storage array, it is neither capable of nor configured to run any SAN services.
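
One way to express this in the cluster (a sketch assuming the crm shell is used and that Rockhopper is kept in Pacemaker's standby mode; the CSL may instead rely on location constraints) is:

 crm node standby rockhopper    # quorum vote only; Pacemaker will never start resources here
 crm_mon -1                     # one-shot status: all three nodes online, rockhopper in standby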

Software

Several pieces of software run on top of the SAN hardware to provide data management and redundancy, high availability, and iSCSI and NFS access.

ZFS

ZFS, via the ZFS on Linux project, is used as the filesystem on the SAN. Currently there is a single zpool, named apocalypse, with 10 drives in a RAID-Z2 vdev and an eleventh drive as an online spare.
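
For illustration, a pool with that layout would be created with something like the following; the sdb through sdl device names are placeholders, not the actual CSL disks (which would normally be referenced by stable /dev/disk/by-id paths):

 zpool create apocalypse \
       raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk \
       spare sdl
 zpool status apocalypse    # verify: one raidz2 vdev of 10 disks, plus the spare listed as AVAIL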

Using ZFS as our base filesystem provides a number of benefits, including transparent data checksumming and compression. ZFS is also capable of transparent deduplication; however, we do not currently have that feature enabled because of the amount of memory it requires and because it is not yet well tested in ZFS on Linux.
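
As a sketch of how those properties are handled (dataset name taken from the pool above; the values shown reflect the behaviour described here, not necessarily the exact CSL settings):

 zfs set compression=on apocalypse               # transparent compression, inherited by child datasets
 zfs get checksum,compression,dedup apocalypse   # checksumming is on by default; dedup stays off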

Corosync/Pacemaker

Corosync and Pacemaker are used to provide high-availability failover of SAN services in the event of a hardware or software failure. Corosync handles cluster membership and messaging, while Pacemaker decides which node runs each SAN resource and moves it when a node fails.
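
As a rough sketch of what a Pacemaker-managed SAN resource looks like (the resource name and IP address below are hypothetical, not the actual CSL configuration), a floating service address defined through the crm shell would resemble:

 crm configure primitive san_ip ocf:heartbeat:IPaddr2 \
       params ip=192.0.2.10 cidr_netmask=24 \
       op monitor interval=30s
 crm_mon -1    # after a node failure, the resource restarts on the surviving SAN server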