
VM Creation

''Note: Parts of this guide are old.''
==Storage Setup==

We use iSCSI LUNs to back the partitions for our VM cluster. Each VM has at least two LUNs for a root and a swap partition. Some VMs may have additional partitions for specific purposes.
Once you know how many and what size partitions you will need, see the [[SAN/iSCSI Administration]] guide for detailed instructions on configuring a LUN for each required partition; the steps for the current apocalypse setup are summarized below. When you are done, you should have device nodes in /dev/mapper for each new volume.
SSH to bottom/snares (wherever the apocalypse zpool is imported) and gain root. Create a parent dataset for the new VM and a zvol for each of its partitions:

<code>
  zfs create apocalypse/vms/<hostname>
  zfs create -V 10G apocalypse/vms/<hostname>/root
  zfs create -V 1G apocalypse/vms/<hostname>/swap
</code>
 
 
 
Next, export the zvols over iSCSI. Create a block backstore for each zvol, then map each backstore to a LUN on the apocalypse target:

<code>
  targetcli
  targetcli> cd backstores/block
  targetcli> create apocalypse-<hostname>-root /dev/zvol/apocalypse/vms/<hostname>/root
  targetcli> create apocalypse-<hostname>-swap /dev/zvol/apocalypse/vms/<hostname>/swap
  targetcli> cd /iscsi/iqn.1992-03.edu.tjhsst:storage:apocalypse.0/tpg1/luns
  targetcli> create /backstores/block/apocalypse-<hostname>-root
  targetcli> create /backstores/block/apocalypse-<hostname>-swap
  targetcli> exit
  echo yes | tcm_dump --o
</code>

Record the LUN numbers reported when the LUNs are created; you will need them later.
 
 
 
SSH to the VM server you plan to use for the new VM (waitaha/littleblue) and gain root. Rescan the iSCSI sessions so the new LUNs show up:

<code>
  iscsiadm -m session --rescan
</code>
 
 
 
Generate the WWID of each new LUN by running:

<code>
  /lib/udev/scsi_id -g /dev/disk/by-path/ip-172.16.3.1*-lun-<lunnum>
</code>
 
 
 
If you forgot the LUN numbers, run:

<code>
  targetcli
  targetcli> cd /iscsi/iqn.1992-03.edu.tjhsst:storage:apocalypse.0/tpg1/luns
  targetcli> ls
</code>
 
 
 
Edit /etc/multipath.conf and create two entries, one each for the root and swap volumes, following the existing examples. The WWID for each entry is the value printed by scsi_id for that volume. After editing, copy /etc/multipath.conf to the other VM servers.
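
The exact contents depend on the existing file, but each new entry is likely to look something like this sketch; the WWIDs below are placeholders for the scsi_id output, and the entries are assumed to live inside the existing multipaths section:

<code>
  multipaths {
      multipath {
          # WWID of the root LUN (placeholder value)
          wwid   36001405aaaaaaaaaaaaaaaaaaaaaaaaa
          alias  apocalypse-<hostname>-root
      }
      multipath {
          # WWID of the swap LUN (placeholder value)
          wwid   36001405bbbbbbbbbbbbbbbbbbbbbbbbb
          alias  apocalypse-<hostname>-swap
      }
  }
</code>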
 
 
 
Run multipath to create the new device maps, then verify that they appear:

<code>
  multipath
  multipath -ll
</code>

You should now have device nodes in /dev/mapper for each new volume.
 
  
==Filesystems==

We currently use ext4 as the main filesystem for all of our VM partitions except for swap partitions (obviously). Create filesystems on each of the new VM's partitions now; then mount the root filesystem on the host VM server.

<code>
  mkfs.ext4 /dev/mapper/apocalypse-<hostname>-root
  mkswap /dev/mapper/apocalypse-<hostname>-swap

  mkdir /mnt/<hostname>
  mount /dev/mapper/apocalypse-<hostname>-root /mnt/<hostname>
</code>

Double-check that the device is writable; occasionally it gets locked RO for some unknown reason.

<code>
  touch /mnt/<hostname>/t
  rm /mnt/<hostname>/t
</code>

If this doesn't work, you may need to recreate the device nodes:

<code>
  multipath -f apocalypse-<hostname>-root
  multipath -f apocalypse-<hostname>-swap
  iscsiadm -m session --rescan
  multipath
</code>

==VM Installation==

We maintain a prebuilt base VM along with an excludes file that can be used to very quickly install a new VM without having to go through the normal Gentoo install process. Use the following commands to copy the stage64 image to the new VM.

<code>
  scp stage64:newvm-excludes ~/
  rsync -avSz --numeric-ids --exclude-from=~/newvm-excludes stage64:/ /mnt/<hostname>
</code>

If you need the kernel sources (only if you need to compile third-party modules such as AFS):

<code>
  cd /mnt/<hostname>/usr/src/
  git clone git://haimageserver/linux.git linux.git
</code>

==VM Postinstall==

There are a few postinstall steps that need to be completed on the copied image before it is ready to run. First, chroot into the new VM and set the root password.

<code>
  chroot /mnt/<hostname> /bin/bash
  env-update
  source /etc/profile
  export PS1="{<hostname>}$PS1"
  passwd
</code>

If you have any additional partitions beyond root and swap, add them to /etc/fstab.
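
For example, an extra data volume might get an entry like the following sketch; the guest device name and mount point here are assumptions, so match them to how the extra LUN is actually attached and used:

<code>
  # /etc/fstab -- hypothetical entry for an additional partition
  /dev/vdb    /data    ext4    noatime    0 2
</code>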

Configure the system's new identity in various files and make sure that no part of stage64's identity was copied over.

<code>
  vim /etc/conf.d/hostname
  vim /etc/conf.d/net

  rm /etc/ssh/ssh_host_* #These should already be gone
  rm /etc/krb5.keytab
  ktutil -k /etc/krb5.keytab get -p ahamilto/admin host/<FQDN>
  vim /etc/issue #Remove the warning and change the hostname
  vim /etc/nagios/nrpe.cfg #Edit the bind IP and update check values if appropriate
  vim /etc/security/access.groups #remove stage64 and add in appropriate hostname group

  crontab -l #Configure appropriate backup time
</code>
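
For reference, on a Gentoo/OpenRC system the hostname and network files generally look something like the sketch below; the interface name, address, netmask, and gateway are placeholders, so copy the exact conventions from an existing VM:

<code>
  # /etc/conf.d/hostname -- illustrative only
  hostname="<hostname>"

  # /etc/conf.d/net -- illustrative only
  config_eth0="<ip-address> netmask 255.255.255.0"
  routes_eth0="default via <gateway>"
</code>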

Finally, exit the chroot and unmount all of the VM's partitions.

<code>
  exit
  # umount additional partitions if needed, then:
  umount /mnt/<hostname>
</code>

==Libvirt VM Definition==

Copy the stage64 XML configuration file to create the new VM's configuration; then edit it as detailed below.

<code>
  cd /etc/libvirt/qemu
  cp stage64.xml <hostname>.xml
  vim <hostname>.xml
</code>

Set the name, partitions, kernel, memory, and networking as appropriate. The MAC address is derived from the IPv4 address of the system.
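
The elements you will usually touch are sketched below; the device names, bus types, bridge name, and memory size are assumptions, so follow whatever stage64.xml already contains. The MAC shown only illustrates one common scheme of embedding the IPv4 octets (e.g. 172.16.3.45 -> ac:10:03:2d) behind a locally administered prefix; check an existing VM's definition for the actual site convention.

<code>
  <name><hostname></name>
  <memory>1048576</memory>              <!-- in KiB; size as appropriate -->
  <disk type='block' device='disk'>
    <source dev='/dev/mapper/apocalypse-<hostname>-root'/>
    <target dev='vda' bus='virtio'/>
  </disk>
  <interface type='bridge'>
    <mac address='52:54:ac:10:03:2d'/>  <!-- placeholder derived from the example IP -->
    <source bridge='br0'/>
  </interface>
</code>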

Load the new VM's configuration into libvirt and then start the new VM.

<code>
  virsh define /etc/libvirt/qemu/<hostname>.xml
  virsh start <hostname>
</code>