Auto-deploy VC on Rocks

From PRAGMA wiki


This script enables a user to deploy a virtual cluster (VC) on a Rocks VM hosting system, using VC image files stored in Gfarm or in a local depository.

Usage: pragma_boot <vc-name> [number-of-compute-nodes] [local-depository-path]
vc-name is required.
If number-of-compute-nodes is not specified, 1 is assumed.
If local-depository-path is not specified, the standard Gfarm path is assumed.
This script must be invoked from a non-privileged user account, but the user must belong to a group with certain sudo privileges. See the sudoers setup example below.
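As a rough sketch, the argument defaults above can be expressed as follows; the argument handling and the gfarm:/vm-images default path are assumptions inferred from the usage line, not pragma_boot's actual code:

```shell
# Hedged sketch of pragma_boot's argument defaults, inferred from the usage
# line above; the gfarm:/vm-images default path is an assumption.
pragma_boot_args() {
    vc_name="$1"
    nodes="${2:-1}"                  # number-of-compute-nodes defaults to 1
    depot="${3:-gfarm:/vm-images}"   # assumed standard Gfarm depository path
    if [ -z "$vc_name" ]; then
        echo "usage: pragma_boot <vc-name> [number-of-compute-nodes] [local-depository-path]" >&2
        return 1
    fi
    echo "$vc_name $nodes $depot"
}

pragma_boot_args calit2-119-222 2   # prints: calit2-119-222 2 gfarm:/vm-images
```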

What does the script do

  • Creates a new VC with the specified number of compute nodes
  • Copies the VC images, then modifies their network settings according to the specifications in the VC's xml file and the script's configuration files
  • Appends the contents of all .pub files in the user's .ssh directory and root's .ssh directory to the /root/.ssh/authorized_keys file in the VC's frontend image. (If the VM image was configured to allow root login with an ssh key, this gives the user and the VM hosting system's root account ssh access to the deployed VM's root account. Otherwise it does not guarantee ssh access to the VM root; the script does not change the system or security configuration of the VM image.)
  • Boots up the new VC
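The key-append step can be sketched as below; FE_ROOT is a name invented here for the mounted frontend image root, and the loop simply skips files it cannot read:

```shell
# Hedged sketch of the authorized_keys step above: append every readable .pub
# file from the user's and root's .ssh directories to the frontend image's
# /root/.ssh/authorized_keys. FE_ROOT is a hypothetical mount point.
FE_ROOT="${FE_ROOT:-/tmp/fe-root}"
mkdir -p "$FE_ROOT/root/.ssh"
for pub in "$HOME"/.ssh/*.pub /root/.ssh/*.pub; do
    if [ -r "$pub" ]; then
        cat "$pub" >> "$FE_ROOT/root/.ssh/authorized_keys"
    fi
done
```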

Current limitations

  • Uses the Rocks default resource allocation scheme for cluster creation
  • Only tested on Rocks 6.1 with the VC image calit2-119-222 (see gfarm:/vm-images/SDSC/calit2-119-222)

Installation and setup

  • Download the tarball from gfarm:/vm-images/SDSC/vc-scripts-1.tar or
  • Choose an installation directory (for example, /opt/vm-scripts) and un-tar the files there
  • Edit pragma_boot to set "scriptdir" to the installation directory path
  • Edit AvailableIP, LocalSettings and resolv.conf files in the installation directory
  • On your VM hosting system frontend and all VM containers, create a group (for example, vmdisks)
  • Give the group rwx access to the VM disk image directory path (the default is /state/partition1/kvm/disks). For example,
$ ls -ld /state/partition1/kvm
drwxr-x--- 4 root vmdisks 4096 Jan  7 2011 /state/partition1/kvm
$ ls -ld /state/partition1/kvm/disks
drwxrwx--- 2 root vmdisks 4096 Oct  5 00:05 /state/partition1/kvm/disks
  • Add users to the group
  • Add a line in /etc/sudoers (via visudo) to let the group run the sub-scripts with sudo. For example,
%vmdisks ALL=NOPASSWD:/opt/vm-scripts/vm-new, /opt/vc-scripts/vc-new, /opt/vm-scripts/vm-makeover, /opt/vc-scripts/fe-makeover, /opt/vc-scripts/cn-makeover, /opt/vm-scripts/vm-start, /opt/vc-scripts/vc-start, /opt/vm-scripts/vm-cleanup, /opt/vc-scripts/vc-cleanup, /opt/vm-scripts/vm-free, /opt/vc-scripts/vc-free, /opt/rocks/bin/rocks, /bin/sh
  • Add "/etc/sudoers" to the file list in /var/411/, then run "rocks sync users"
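To sanity-check the group setup above, a small helper like the following (written for GNU stat on Linux; the path and the vmdisks group are the document's examples) can confirm that the disks directory is group-owned and group-writable:

```shell
# Hedged helper: verify a directory is owned by the given group and carries
# group rwx bits, matching the ls -ld example above (requires GNU stat).
check_group_rwx() {
    dir="$1"; grp="$2"
    mode=$(stat -c '%A' "$dir") || return 1        # e.g. drwxrwx---
    owner_grp=$(stat -c '%G' "$dir") || return 1
    case "$mode" in
        ????rwx*) ;;                               # chars 5-7 are the group bits
        *) echo "group lacks rwx on $dir"; return 1 ;;
    esac
    if [ "$owner_grp" = "$grp" ]; then
        echo "ok: $grp has rwx on $dir"
    else
        echo "$dir is group-owned by $owner_grp, not $grp"; return 1
    fi
}

# Example (document's default path and example group):
# check_group_rwx /state/partition1/kvm/disks vmdisks
```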


Deploy with VC images from Gfarm

  • You must have the Gfarm client installed
  • Run "grid-proxy-init", then run "gfexport /vm-images/vcdb.txt" to test gfarm access
  • To deploy a VC, for example calit2-119-222 with 2 compute nodes
    • Run "pragma_boot calit2-119-222 2". A VC named calit2-119-222-<your-user-name> should be created.
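The Gfarm deployment steps above can be chained as follows; grid-proxy-init, gfexport, and pragma_boot are the document's commands, and the run_if_present guard is added here only so the sketch degrades gracefully when a tool is missing:

```shell
# Hedged sequence of the Gfarm deployment steps above; each of the document's
# commands is skipped if it is not installed on this machine.
run_if_present() {
    if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "skip: $1 not installed"; fi
}

run_if_present grid-proxy-init                 # obtain a grid proxy
run_if_present gfexport /vm-images/vcdb.txt    # quick test of Gfarm access
run_if_present pragma_boot calit2-119-222 2    # deploy with 2 compute nodes
```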

Deploy with VC images from local disk

  • Set up a depository on your local disk with a directory structure similar to the one on Gfarm (if you don't want to set up the Gfarm client and only want to use a local depository, you can get the calit2-119-222 VC image files from
  • Copy vcdb.txt and the directory tree of the VC images to your local depository
  • To deploy a VC, for example calit2-119-222 with 2 compute nodes when the local depository top path is /home/cindy/vm-images
$ pragma_boot calit2-119-222 2 /home/cindy/vm-images
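A minimal local depository can be laid out like this; the SDSC/<vc-name> subpath mirrors the Gfarm paths cited earlier and may differ on your site, and DEPOT defaults to /tmp here rather than the /home/cindy example:

```shell
# Hedged sketch of a local depository layout mirroring the Gfarm paths above:
# vcdb.txt at the top, VC images under SDSC/<vc-name> (the layout is an
# assumption based on the gfarm:/vm-images paths cited in this document).
DEPOT="${DEPOT:-/tmp/vm-images}"       # document's example uses /home/cindy/vm-images
mkdir -p "$DEPOT/SDSC/calit2-119-222"
: > "$DEPOT/vcdb.txt"                  # placeholder: copy the real vcdb.txt here
# ...then copy the VC image files into "$DEPOT/SDSC/calit2-119-222/"
```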

Remove or cleanup

  • To remove a VC that pragma_boot created successfully, for example one named calit2-119-222-cindy
    • Run "vm-remove calit2-119-222-cindy". The VC calit2-119-222-cindy will be removed.
  • If the script aborts on its own, it should clean up after itself. Once you fix the cause of the failure, you should be able to run the script again.
  • If you terminated the script prematurely (with Control-C, for example)
    • Check whether your VC image files are still mounted (df should show them).
      • If so, unmount them manually
      • On a KVM system, run "kpartx -dv <VM/image/file/path>" to release the devices for the VC image
    • Then run "vc-remove"
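The manual cleanup above can be scripted roughly as follows; the VC name and image path are examples (the image path is a guess based on the default disks directory), and each tool is skipped when not present. On a real host, run kpartx and vc-remove with the sudo rights configured earlier:

```shell
# Hedged sketch of the manual cleanup steps; the VC name and image path are
# examples, and missing tools are skipped rather than assumed to exist.
VC=calit2-119-222-cindy
IMG="/state/partition1/kvm/disks/$VC.img"       # hypothetical image path
maybe() {
    if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "skip: $1 not found"; fi
}

df | grep "$VC" || echo "no $VC filesystems mounted"   # step 1: check with df
# umount any mount points the df check reports, then:
maybe kpartx -dv "$IMG"      # step 2 (KVM): release the image's device mappings
maybe vc-remove "$VC"        # step 3: remove the VC
```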