From PRAGMA wiki

The Rocks-based bio application VM was created by Nadya Williams at SDSC/UCSD. The gfarm roll, the VM deployment scripts, and this documentation are created and maintained by Cindy Zheng.

Auto-deploy with scripts on Rocks/Xen hosting server

  • To set up VM deployment scripts on a Rocks/Xen hosting server, follow the documented example
  • This bio application VM image is relatively large (4GB). For faster deployment, you can load the image onto a local disk. For example,
    • Create a directory on a locally mounted file system, /home/cindy/vm-backup.
    • Copy gfarm:/vm-images/vmdb.txt to /home/cindy/vm-backup directory.
    • Create the same subdirectories as in gfarm. For example, /home/cindy/vm-backup/SDSC.
    • Copy the VM image file to the sub-directory.
  • Deploy Bio application VM
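The local staging steps above can be sketched as a short shell function. This is only a sketch: the gfarm source directory, the site name SDSC, and the image name bioapp5.img are the examples from the text, and the demo below stands in throwaway directories for the real gfarm mount and /home/cindy/vm-backup.

```shell
#!/bin/sh
# stage_vm_image SRC DST SITE IMAGE
# Mirror the staging steps above: create the local backup directory,
# copy vmdb.txt into it, recreate the site subdirectory, and copy the image.
stage_vm_image() {
    src=$1; dst=$2; site=$3; image=$4
    mkdir -p "$dst/$site"
    cp "$src/vmdb.txt" "$dst/"
    cp "$src/$site/$image" "$dst/$site/"
}

# Demo with throwaway directories standing in for gfarm:/vm-images and
# /home/cindy/vm-backup (all paths and names here are illustrative only).
tmp=$(mktemp -d)
mkdir -p "$tmp/gfarm/SDSC"
echo "bioapp5 ..." > "$tmp/gfarm/vmdb.txt"
: > "$tmp/gfarm/SDSC/bioapp5.img"
stage_vm_image "$tmp/gfarm" "$tmp/vm-backup" SDSC bioapp5.img
```

After staging, pass the backup directory as the last argument to vm-deploy, as in the single-instance example below.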

To find the syntax of the VM deployment script, run it without arguments:

$ /opt/vm-scripts/vm-deploy
Usage: vm-deploy vm-image-name [rocks] [number of instances] [local directory path]

Where vm-image-name is the first field of the gfarm:/vm-images/vmdb.txt file. Specify "rocks" if the VM image was created on a Rocks system. Specify the number of instances you'd like to deploy. The optional local directory path points to a locally staged copy of the image (see above).

  • To deploy a single instance VM where the VM image file is in a local directory /home/cindy/vm-backup/SDSC:
$ /opt/vm-scripts/vm-deploy bioapp5 rocks 1 /home/cindy/vm-backup
  • To deploy a single instance VM from a VM image file in gfarm:
$ grid-proxy-init
$ /opt/vm-scripts/vm-deploy bioapp5 rocks

Manually deploy on KVM/OpenNebula

This is a manual procedure documented by the AIST team. Contact Yoshio Tanaka and Akihiko Ota. It deploys a bio application VM that was created on a Rocks/Xen platform onto a KVM/OpenNebula system.

1. Mount the bioapp image file
   If you can use the lomount command:
      % sudo lomount -diskimage bioapp5.img -partition 1 /some/path
   If you cannot use the lomount command, use losetup and kpartx:
      % sudo losetup /dev/loop0 bioapp5.img
      % sudo kpartx -av /dev/loop0
      add map loop0p1 (253:0): 0 20159937 linear /dev/loop0 63
      % sudo mount /dev/mapper/loop0p1 /mnt
2. Install standard kernel
   2.1. chroot to the bioapp5.img:
        % sudo chroot /mnt /bin/bash
        # export PS1="[chroot]# "
   2.2. change /etc/modprobe.conf:
        [chroot]# vi /etc/modprobe.conf
        (modify as follows)
        alias scsi_hostadapter ata_piix
        alias scsi_hostadapter1 virtio_blk
        alias net-pf-10 off
        alias ipv6 off
        options ipv6 disable=1
        alias eth0 virtio_net
        alias eth1 virtio_net
   2.3. install kernel:
        [chroot]# yum install kernel.x86_64
        Running Transaction
        Installing     : kernel                                      1/1 
        Modulefile is /etc/modprobe.conf
        error opening /sys/block: No such file or directory
        grubby fatal error: unable to find a suitable template
        error: failed to stat /dev/pts: No such file or directory
           kernel.x86_64 0:2.6.18-238.19.1.el5                                                               
        (Some errors occur, but they can safely be ignored here.)
   2.4. modify grub.conf:
        [chroot]# cp -p /boot/grub/grub.conf /boot/grub/grub.conf.original 
        [chroot]# vi /boot/grub/grub.conf 
        (insert following description) 
           serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1 
           terminal --timeout=5 console serial

           title CentOS (2.6.18-238.19.1.el5)
                   root (hd0,0)
                   kernel /boot/vmlinuz-2.6.18-238.19.1.el5 ro root=LABEL=/ rhgb quiet console=ttyS0,115200
                   initrd /boot/initrd-2.6.18-238.19.1.el5.img
        (diff from original grub.conf)
        [chroot]# diff -u /boot/grub/grub.conf.original /boot/grub/grub.conf
        --- /boot/grub/grub.conf.original		2012-04-08 16:12:47.000000000 -0700
        +++ /boot/grub/grub.conf			2012-04-20 00:50:42.000000000 -0700
        @@ -10,6 +10,14 @@
         +serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
         +terminal --timeout=5 console serial
        +title CentOS (2.6.18-238.19.1.el5)
        +        root (hd0,0)
        +        kernel /boot/vmlinuz-2.6.18-238.19.1.el5 ro root=LABEL=/ rhgb quiet console=ttyS0,115200
        +        initrd /boot/initrd-2.6.18-238.19.1.el5.img
         title Rocks (2.6.18-238.19.1.el5xen)
               root (hd0,0)
               kernel /boot/vmlinuz-2.6.18-238.19.1.el5xen ro root=LABEL=/ rhgb quiet

        (Confirm that "default=" points at the standard kernel.)
   2.5. rebuild initrd with VirtIO drivers:
        [chroot]# cd /boot
        [chroot]# mv initrd-2.6.18-238.19.1.el5.img initrd-2.6.18-238.19.1.el5.img.original
        [chroot]# mkinitrd --with=virtio_pci --with=virtio_blk --with=virtio_net -v -f initrd-2.6.18-238.19.1.el5.img 2.6.18-238.19.1.el5
3. Set up the serial console (to watch boot messages) and append SSH keys
   [chroot]# vi /etc/inittab
   (append following line)
      co:2345:respawn:/sbin/agetty -h -L 115200 ttyS0 ansi
   [chroot]# vi /etc/securetty
   (append "ttyS0" so that root can log in on the serial console)
   [chroot]# vi /root/.ssh/authorized_keys
   (append SSH public key)
   [chroot]# chmod 700 /root/.ssh 
   [chroot]# chmod 600 /root/.ssh/authorized_keys
   You may set PasswordAuthentication to no in /etc/ssh/sshd_config if so desired.
4. exit chroot, umount, and detach image file 
   [chroot]# exit 
   % sudo umount /mnt
   % sudo kpartx -dv /dev/loop0
   del devmap : loop0p1
   % sudo losetup -d /dev/loop0
5. (If needed) GRUB re-setup
   (I don't know why, but in the AIST environment KVM cannot read the GRUB of the bioapp VM image file. So I booted KVM from a Linux Live-CD iso image together with the bioapp VM image and re-installed the bioapp VM's GRUB. I used SystemRescueCD version 2.5.1.)
   5.1. Boot the KVM from the Live-CD iso image with the bioapp VM image:
        % sudo /usr/libexec/qemu-kvm \
        -hda ./bioapp5.img \
        -cdrom ./systemrescuecd-x86-2.5.1.iso \
        -boot d \
        -m 1024 \
        -monitor stdio \
        -vnc :0
   5.2. Mount the bioapp5.img as /dev/sda1:
        (connect via VNC and login to the Live-CD environment)
        % mount /dev/sda1 /mnt
        % mount -t proc none /mnt/proc
        % mount --rbind /dev /mnt/dev
   5.3. chroot to the bioapp5.img:
        % chroot /mnt /bin/bash
        # export PS1="[chroot]# "
   5.4. Create /etc/mtab:
        [chroot]# grep -v rootfs /proc/mounts > /etc/mtab
   5.5. re-setup GRUB:
        [chroot]# grub --no-floppy
        grub> root (hd0,0)
        grub> setup (hd0)
        grub> quit
   5.6. umount the bioapp5.img and shutdown the Live-CD environment:
        [chroot]# exit
        % umount -l /mnt/proc
        % umount -l /mnt/dev
        % umount -l /mnt
        % shutdown -h now
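The attach/chroot/detach sequence of steps 1 and 4 can be wrapped in a small script for repeatability. This is a dry-run sketch only: it prints the commands instead of executing them, since the real sequence needs root, a free /dev/loop0, and an actual image file (bioapp5.img below is an example path).

```shell
#!/bin/sh
# Dry-run sketch of the attach/chroot/detach sequence from steps 1 and 4.
# run() prints each command; swap its body for  "$@"  to execute for real
# (as root, with losetup/kpartx installed and /dev/loop0 free).
IMG=bioapp5.img
run() { echo "$@"; }

run losetup /dev/loop0 "$IMG"        # attach the image to a loop device
run kpartx -av /dev/loop0            # map partitions -> /dev/mapper/loop0p1
run mount /dev/mapper/loop0p1 /mnt   # mount the first partition
run chroot /mnt /bin/bash            # do the kernel/GRUB work interactively
run umount /mnt                      # then undo everything in reverse order
run kpartx -dv /dev/loop0
run losetup -d /dev/loop0
```

Keeping the teardown in the same script helps avoid leaving stale loop devices behind after an interrupted session.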

Finish configuration

Once your Bioapp image is up and running, a couple of remaining steps will update your image networking according to your network specifications. When these steps succeed, the VM is ready for use.

Checks after first boot

Once the image is up and running, check the network configuration parameters.
1. Check /etc/resolv.conf

cat /etc/resolv.conf
search local your.domain

This file should have:

  • your correct domain listed in "search local" line (your.domain)
  • a valid correct IP for at least one nameserver in your domain

Correct any errors if found.
2. Check /etc/sysconfig/network-scripts/ifcfg-eth0; it should look similar to:

cat /etc/sysconfig/network-scripts/ifcfg-eth0

Correct any errors if found.

3. Check /etc/sysconfig/network-scripts/ifcfg-eth1; it should look similar to:

cat /etc/sysconfig/network-scripts/ifcfg-eth1
IPADDR=     <-- your host IP
NETMASK=    <-- your host netmask

Correct any errors if found.

4. Check /etc/sysconfig/network; it should look similar to:

NETWORKING=yes
HOSTNAME=           <-- your host FQDN
GATEWAY=            <-- your gateway

Correct any errors if found.
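The four checks above can be scripted so they are easy to repeat after each boot. A minimal sketch, assuming the file locations from the text; the idea of flagging blank values (and the ROOT variable, which lets you point the checks at a test tree) is an addition for illustration:

```shell
#!/bin/sh
# Report obviously empty networking settings after first boot.
# Checks the same files as the numbered steps above and prints a
# warning for each value that is still blank or missing.
# ROOT (normally empty) can point at an alternate root for testing.
ROOT=${ROOT:-}

check_nonempty() {   # check_nonempty FILE KEY -> warn if KEY= has no value
    if ! grep -q "^$2=..*" "$ROOT$1" 2>/dev/null; then
        echo "WARN: $2 not set in $1"
    fi
}

grep -q "^nameserver" "$ROOT/etc/resolv.conf" 2>/dev/null \
    || echo "WARN: no nameserver in /etc/resolv.conf"
check_nonempty /etc/sysconfig/network-scripts/ifcfg-eth1 IPADDR
check_nonempty /etc/sysconfig/network-scripts/ifcfg-eth1 NETMASK
check_nonempty /etc/sysconfig/network GATEWAY
```

Any WARN line points at a file to edit before running the VMreconfig script below.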

Run VMreconfig script

This script will update the rocks database with your correct networking information, then will reconfigure all cluster services that depend on these values including SGE, Condor and Opal configurations.
1. Run the script as:


There will be some messages on the screen. The error messages about SGE configuration are expected.
2. Once the script has finished, please check the /tmp/VMchange.log file for errors.
3. Reboot your Bioapp VM.
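The log check in step 2 can be done quickly with grep. The log path is the one named in the text; the choice of keywords to match is an assumption:

```shell
#!/bin/sh
# Scan the VMreconfig log for likely problems. LOG defaults to the
# path named in the text; match common keywords, case-insensitively,
# with line numbers so hits are easy to locate.
LOG=${LOG:-/tmp/VMchange.log}
grep -in 'error\|fail' "$LOG" || echo "no errors found in $LOG"
```

Remember that some SGE-related error messages are expected, so review each match rather than treating every hit as fatal.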

Access Bioapp applications via Opal service

Point your browser to your image address: http://your.image.fqdn/opal2/dashboard?command=serviceList
You will see a page listing the available Opal services.
Note: the example page may list more applications than your bioapp image provides.
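The dashboard URL can also be built and fetched from the command line; a sketch, where your.image.fqdn is a placeholder for your image's real hostname (fetch the printed URL with curl or wget):

```shell
#!/bin/sh
# Print the Opal 2 service-list URL for a given image FQDN.
# "your.image.fqdn" below is a placeholder hostname.
opal_service_list_url() {
    printf 'http://%s/opal2/dashboard?command=serviceList\n' "$1"
}
opal_service_list_url your.image.fqdn
```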