Snooze 2.1.4 is out
http://snooze.inria.fr/snooze-2-1-4-is-out/ (Fri, 25 Apr 2014)

Today, Snooze version 2.1.4 has been released.

What’s new in this version?

1 Snoozeclient

The command-line interface for Snooze is now supported again. It comes with new commands:

  • images: list all the available images
  • hosts: list all the hosts
  • dump: dump the hierarchy and check the system status

The visualizer has been removed; it is replaced by the one already integrated into the Snooze web interface.

At a glance:

  • Define a new cluster
  • $) snoozeclient define -vcn test
    Cluster has been defined!
    
  • List images
  • $) snoozeclient images
    Name                                	  Capacity        	 Allocation      	 Format          	 BackingStore   
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    debian-hadoop-context-big.qcow2     	  53687091200     	 2020872192      	 qcow2           	 null           
    context.iso                         	  403456          	 405504          	 iso             	 null           
    resilin-base.raw                    	  47100985344     	 2028998656      	 qcow2           	 null           
    
  • Add a virtual machine to this cluster
  • $) snoozeclient add -vcn test -iid debian-hadoop-context-big.qcow2
    Add command successfull!
    

    Note that optional parameters allow you to specify a name, the amount of vCPUs and memory, or the network requested.

  • Start the cluster
  • $) snoozeclient start -vcn test
    Name                      	 VM address      	 GM address      	 LC address      	 Status    
    ---------------------------------------------------------------------------------------------------------------
    88524977-cc30-4e91-b4e7-d 	 10.164.0.101    	 172.16.129.9    	 172.16.130.25   	 RUNNING 
    
  • Dump the hierarchy (GL = Group Leader, GM = Group Manager, LC = Local Controller, VM = virtual machine)
  • $) snoozeclient dump
    GL : 0
    	 GM : 52415ef7-f0c1-4001-a58d-1c4f83e49d45
    	 	 LC : 341c6ccf-06a2-4020-a13d-01a659bfe887
    	 	 	 VM : de5bfe11-09ac-43b2-8b28-8e1a5afcf9d6
    	 	 LC : e7365665-41bc-45c5-b7b6-520448c5e5bd
    	 	 	 VM : a0bda947-e309-4af7-b1a1-f1087fd50cf6
    	 	 	 VM : f55d612e-ada3-4db9-b39a-66bcb2123e30
    
    

2 Other services

The other services did not change much in this release; see the changelog below:

  • Snoozenode
      • Implement searchVirtualMachine in GroupManagerCassandraRepository.
      • Implement getLocalControllerList in GroupManagerResource.

  • Snoozecommon
      • Implement migration in the restlet communicator.

3 snooze-deploy-localcluster

The script now includes a one-line installer: see here.

Static scheduling in Snooze
http://snooze.inria.fr/static-scheduling-in-snooze/ (Thu, 13 Feb 2014)

One (hidden) feature of Snooze is the ability to pass a hostId parameter in your submission request:


"virtualMachineTemplates"=><a href='http://snooze.inria.fr/static-scheduling-in-snooze/out-54_r/' rel='attachment wp-att-2025'>out-54_r</a>
  [
    {
      "name"    => "vm1",
      "hostId"  => "parapluie-25",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 2,
      "memory"  => 2097152,
    }
  ]

The above snippet (Ruby style) tells Snooze that we want vm1 on host parapluie-25.
Originally, the id was a UUID randomly generated at start time on each node. With a recent fix, different ways (determined in the config file) can be used to generate the id. This makes the feature more practical to use.
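
For reference, the same kind of request can be sent directly with curl. This is only a minimal sketch based on the bootstrap endpoint used in the script below; adjust the host and port to your own deployment:

# Submit a single statically-placed VM to the bootstrap REST API (sketch)
curl -X 'POST' \
     -H 'Content-type: application/json' \
     -d '{"virtualMachineTemplates": [{"name": "vm1", "hostId": "parapluie-25", "imageId": "debian-hadoop-context-big.qcow2", "vcpus": 2, "memory": 2097152}]}' \
     'http://localhost:5000/bootstrap?startVirtualCluster'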

Example of a full description:

#!/usr/bin/env ruby

require 'json'

$templates =  {
  "virtualMachineTemplates"=>
  [
    {
      "name"    => "vm1",
      "hostId"  => "parapluie-25",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 2,
      "memory"  => 2097152,
    },  
    {  
      "name"    => "vm2",
      "hostId"  => "parapluie-25",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 4,
      "memory"  => 2097152,
    },  
    {  
      "name"    => "vm3",
      "hostId"  => "parapide-2",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 4,
      "memory"  => 1048576,
    },  
    {  
      "name"    => "vm4",
      "hostId"  => "parapide-2",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 2,
      "memory"  => 2097152,
    },  
    {  
      "name"    => "vm5",
      "hostId"  => "parapide-2",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 2,
      "memory"  => 2097152,
    },  
    {  
      "name"    => "vm6",
      "hostId"  => "parapluie-8",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 8,
      "memory"  => 1048576,
    },  
    {  
      "name"    => "vm7",
      "hostId"  => "parapluie-21",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 12,
      "memory"  => 2097152,
    }
  ]
}

# Note: "\\" yields a literal backslash, so the shell sees a line continuation.
`curl \\
 -X 'POST' \\
 -H 'Content-type: application/json' \\
 -d '#{JSON.dump($templates)}' 'http://localhost:5000/bootstrap?startVirtualCluster'`

You can then successively destroy and restart the virtual machines under the same conditions:


for i in {1..7}
do
  curl \
   -d "vm$i" \
   -X 'POST' \
   localhost:5000/bootstrap?destroyVirtualMachine
done


Snooze local deployment how-to
http://snooze.inria.fr/snooze-local-deployment-how-to/ (Mon, 20 Jan 2014)

In this post I propose to go through the local deployment of a Snooze cluster and show how it can be used to run multiple snoozenode services on multiple machines.

Requirements

  • A switch with multicast routing enabled
  • Hardware supporting the KVM/Xen hypervisor
  • A Linux-based operating system

1 Snooze on one machine

First follow the local deployment tutorial in the documentation. This will guide you through the set-up of your local cluster.

2 Snooze on three machines

If you have successfully set up Snooze on a single machine, installing Snooze on three machines is pretty straightforward. In order to have a realistic deployment, only one Local Controller instance must run on each server. We propose the following topology for the deployment.

Scenario:

  • machine 1: 1 BS, 2 GMs, snoozeimage, snoozeec2, Cassandra, RabbitMQ
  • machine 2: 1 LC
  • machine 3: 1 LC

Setup of machine 1

You have already done all of the configuration for this machine in the first part of the tutorial. If you decide to also run a Local Controller on this machine, we suggest not activating the energy management.

Setup of machine 2

Install the snooze-deploy-localcluster scripts on this machine.
In the config file (by default /usr/share/snoozenode/configs/snooze_node.cfg), fill in the following lines with the appropriate IP address or hostname of machine 1. If you use a hostname, be sure to have DNS (or /etc/hosts, see the sketch below) properly configured:

# Zookeeper
faultTolerance.zookeeper.hosts = machine1:2181

# Image repository
imageRepository.address = machine1

# Database
database.type = cassandra # if "memory" you don't need the following line
database.cassandra.hosts = machine1:9160

# Rabbitmq
external.notifier.address = machine1
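
If you rely on /etc/hosts rather than DNS, a minimal sketch of the entry to add on machines 2 and 3 looks as follows (the IP address is only an illustration; use the real address of machine 1):

# /etc/hosts on machines 2 and 3
192.168.1.10    machine1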

Setup of machine 3

It’s the same configuration as machine 2.

NFS

The Local Controller nodes get the image disks through an NFS shared directory, so you will have to configure an NFS share between your three machines. We propose the following scenario.

Scenario:

  • machine 1: NFS server, exports /var/lib/libvirt/images (the default libvirt pool)
  • machine 2: NFS client, mounts the exported directory at /var/lib/libvirt/images
  • machine 3: NFS client, mounts the exported directory at /var/lib/libvirt/images

On machine 1

Configure the NFS server, for example as sketched below.
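
A minimal sketch of the corresponding NFS export on machine 1; the client names and export options below are assumptions to adapt to your network:

# /etc/exports on machine 1
/var/lib/libvirt/images  machine2(rw,sync,no_subtree_check,no_root_squash) machine3(rw,sync,no_subtree_check,no_root_squash)

# reload the export table
exportfs -ra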
Check the snoozeimages configuration to make sure that the configured pool is a directory pool using /var/lib/libvirt/images/ as its backend.

On machines 2 & 3

Mount the NFS shared directory.
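For example, a minimal sketch (choose the mount options yourself, and add the equivalent /etc/fstab entry if you want the mount to survive a reboot):

# on machines 2 and 3: mount machine 1's export at the default libvirt pool path
mount -t nfs machine1:/var/lib/libvirt/images /var/lib/libvirt/images
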
Then check the snoozenode configuration file:

imageRepository.manager.source = /var/lib/libvirt/images
imageRepository.manager.destination = /var/lib/libvirt/images

3 Launch Snooze

On each machine, you’ll have to launch the following two commands (from the root of the local deployment script):

start_local_cluster.sh -l
start_local_cluster.sh -s
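
Once the services are up, you can check that the hierarchy has formed correctly with the snoozeclient dump command described in the Snooze 2.1.4 release notes above:

$) snoozeclient dump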
How green is the deployment on Grid’5000?
http://snooze.inria.fr/how-green-is-the-deployment-on-grid5000/ (Thu, 19 Dec 2013)

I was playing lately with the power meters installed at the Lyon site.
I have built a small web application to show the power consumption of the nodes in real time.

System set up

I deployed Snooze on the Taurus cluster. Two of the 16 machines were out of order, so I deployed Snooze on the 14 remaining machines with the following setup:

  • 1 node: bootstrap
  • 3 nodes: group manager
  • 8 nodes: local controller
  • 2 nodes: Cassandra

All the figures below depict the power consumption of the nodes during the deployment of Snooze. Only one node of each type is represented.

Let’s go through the different phases…

1 Respawning phase

At the time of the reservation, some of the nodes used in the deployment were shut down. The first phase is thus the respawning phase, which ensures that all 14 nodes are up and running before the deployment starts. This phase took approximately 4 minutes.

2 Deployment phase

Once all the nodes are up, a Debian Wheezy environment is deployed on the 14 nodes. This phase took approximately 6 minutes.

3 Puppet phase

Once the Wheezy environment is deployed on all the nodes, we configure them using different Puppet recipes.

  • First, RabbitMQ is installed on the bootstrap node (taurus-1)
  • Then Cassandra is installed (here on taurus-9)
  • Snooze is installed on all the nodes (except taurus-9)
  • Finally, NFS is configured

These steps are sequential; we could perhaps parallelize them.

4 Post-install phase

In this phase we prepare the cluster: uploading base images into the image repository, configuring the network on the compute hosts, and so on.

5 Cluster start phase

And finally, we start the cluster.

After that phase, we observed that without any load, the power consumption was about 1700 W in total (14 nodes).

Global view
