Snooze 2.1.5 released (Mon, 02 Jun 2014)

Today Snooze version 2.1.5 has been released.

This version supports the qemu domain type in addition to kvm.
You can now run Snooze inside a virtual machine (useful when working on Apple Mac OS X).
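Concretely, the domain type shows up in the generated libvirt template; this is standard libvirt syntax (the rest of the domain definition is omitted here):

<domain type='kvm'>   <!-- hardware-assisted virtualization -->
<domain type='qemu'>  <!-- plain emulation, usable inside a virtual machine -->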

You can check this gist if you want to set up a development and test environment with Vagrant:

https://gist.github.com/msimonin/fa502fa74f33ae024ea4
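If you just want the general shape of such a setup, it boils down to something like this (the box name and box URL are illustrative, not the ones from the gist; Vagrant and VirtualBox are assumed to be installed):

# create a Vagrantfile for a Debian box, then boot and enter the VM
vagrant init debian-wheezy64 http://example.com/debian-wheezy64.box
vagrant up
vagrant ssh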

Once a virtual machine is ready on your local machine, you can try the web install:

# web install
curl https://raw.githubusercontent.com/snoozesoftware/snooze-deploy-localcluster/master/webinstall.sh | sh

Snooze 2.1.4 is out (Fri, 25 Apr 2014)

Today Snooze version 2.1.4 has been released.

What’s new in this version?

1 Snoozeclient

The command-line interface for Snooze is now supported again. It comes with new commands:

  • images : to list all the images available
  • hosts : to list all the hosts
  • dump : to dump the hierarchy and check the system status

The visualizer has been removed; it is replaced by the one already integrated in the Snooze web interface.

At a glance:

  • Define a new cluster
  • $) snoozeclient define -vcn test
    Cluster has been defined!
    
  • List images
  • $) snoozeclient images
    Name                                	  Capacity        	 Allocation      	 Format          	 BackingStore   
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    debian-hadoop-context-big.qcow2     	  53687091200     	 2020872192      	 qcow2           	 null           
    context.iso                         	  403456          	 405504          	 iso             	 null           
    resilin-base.raw                    	  47100985344     	 2028998656      	 qcow2           	 null           
    
  • Add a virtual machine to this cluster
  • $) snoozeclient add -vcn test -iid debian-hadoop-context-big.qcow2
    Add command successfull!
    

Note that optional parameters allow you to specify a name, the number of vcpus, the amount of memory, or the network requested.

  • Start the cluster
  • $) snoozeclient start -vcn test
    Name                      	 VM address      	 GM address      	 LC address      	 Status    
    ---------------------------------------------------------------------------------------------------------------
    88524977-cc30-4e91-b4e7-d 	 10.164.0.101    	 172.16.129.9    	 172.16.130.25   	 RUNNING 
    
  • Dump the hierarchy
  • $) snoozeclient dump
    GL : 0
    	 GM : 52415ef7-f0c1-4001-a58d-1c4f83e49d45
    	 	 LC : 341c6ccf-06a2-4020-a13d-01a659bfe887
    	 	 	 VM : de5bfe11-09ac-43b2-8b28-8e1a5afcf9d6
    	 	 LC : e7365665-41bc-45c5-b7b6-520448c5e5bd
    	 	 	 VM : a0bda947-e309-4af7-b1a1-f1087fd50cf6
    	 	 	 VM : f55d612e-ada3-4db9-b39a-66bcb2123e30
    
    

2 Other services

The other services didn’t change much in this release; see the changelog below:

  • Snoozenode
    * implement searchVirtualMachine in GroupManagerCassandraRepository
    * implement getLocalControllerList in GroupManagerResource
  • Snoozecommon
    * implement migration in the restlet communicator
    

3 snooze-deploy-localcluster

The scripts now include a one-line installer.
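It can be fetched and run directly from the repository (the same webinstall.sh shown in the 2.1.5 post):

curl https://raw.githubusercontent.com/snoozesoftware/snooze-deploy-localcluster/master/webinstall.sh | sh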

Generate your virtual machine base images with Packer (Tue, 22 Apr 2014)

The snooze-packer project has been released today.

Packer eases the virtual machine image creation process. You can check the Packer web site for further information.
We now use it to create virtual machine images and to prepare them with the contextualization script (see the documentation).

Check out the snooze-packer project for the first images provided with the tool: https://github.com/snoozesoftware/snooze-packer
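In practice, once a template is written, building an image boils down to two commands (the template name below is illustrative; see the repository for the real templates):

# check the template, then build the image it describes
packer validate debian-base.json
packer build debian-base.json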

Snooze 2.1.3 released, Snooze 3 coming soon (Wed, 02 Apr 2014)

Snooze 2.1.3 has been released.

See the changelog below:

#87 Add "test" hypervisor to the template generation chain.
This eases testing of the template generation.
#83 Fix serial console template issue.
This fixes compatibility with recent libvirt versions.
#81 Handle destroy image with backingImageManager and src=dest path.
This bug appeared when using NFS to store backing and diff disk files.
#82 Get the virtual machine list even if the in-memory database is used.
This was mostly a missing feature.

The next version of Snooze has been pushed to the develop branch.
It will be a major release introducing a more complete plugin mechanism.
We are now working on demonstrating the modularity of the system by plugging in different algorithms.
This will come soon 😉

Static scheduling in Snooze (Thu, 13 Feb 2014)

One (hidden) feature of Snooze is the ability to pass a hostId parameter in your submission request:


"virtualMachineTemplates"=><a href='http://snooze.inria.fr/static-scheduling-in-snooze/out-54_r/' rel='attachment wp-att-2025'>out-54_r</a>
  [
    {
      "name"    => "vm1",
      "hostId"  => "parapluie-25",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 2,
      "memory"  => 2097152,
    }
  ]

The above snippet (Ruby style) tells Snooze that we want vm1 on the host parapluie-25.
Originally the id was a UUID randomly generated at start time on each node. With a recent fix, different methods (selected in the config file) can be used to generate the id. This makes the feature more practical to use.

Example of a full description:

#!/usr/bin/env ruby

require 'json'

$templates =  {
  "virtualMachineTemplates"=>
  [
    {
      "name"    => "vm1",
      "hostId"  => "parapluie-25",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 2,
      "memory"  => 2097152,
    },  
    {  
      "name"    => "vm2",
      "hostId"  => "parapluie-25",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 4,
      "memory"  => 2097152,
    },  
    {  
      "name"    => "vm3",
      "hostId"  => "parapide-2",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 4,
      "memory"  => 1048576,
    },  
    {  
      "name"    => "vm4",
      "hostId"  => "parapide-2",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 2,
      "memory"  => 2097152,
    },  
    {  
      "name"    => "vm5",
      "hostId"  => "parapide-2",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 2,
      "memory"  => 2097152,
    },  
    {  
      "name"    => "vm6",
      "hostId"  => "parapluie-8",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 8,
      "memory"  => 1048576,
    },  
    {  
      "name"    => "vm7",
      "hostId"  => "parapluie-21",
      "imageId" => "debian-hadoop-context-big.qcow2",
      "vcpus"   => 12,
      "memory"  => 2097152,
    }
  ]
}

`curl -X 'POST' -H 'Content-type: application/json' -d '#{JSON.dump($templates)}' http://localhost:5000/bootstrap?startVirtualCluster`

You can then successively destroy and restart the virtual machines in the same conditions:


for i in {1..7}
do
  curl \
   -d "vm$i"\
   -X 'POST'\
   localhost:5000/bootstrap?destroyVirtualMachine
done


Snooze local deployment how to (Mon, 20 Jan 2014)

In this post I go through the local deployment of a Snooze cluster and show how the scripts can be used to run snoozenode on multiple machines.

Requirements

  • Multicast-routing enabled switch
  • Hardware supporting the KVM/XEN hypervisor
  • Linux based operating system

1 Snooze on one machine

First follow the local deployment tutorial in the documentation. This will guide you through the set-up of your local cluster.

2 Snooze on three machines

If you have successfully set up Snooze on a single machine, installing Snooze on three machines is pretty straightforward. In order to have a realistic deployment, only one local controller instance must run on each server. We propose the following topology for the deployment.

Scenario :

  • machine 1 : 1 BS, 2 GMs, snoozeimage, snoozeec2, Cassandra, Rabbitmq
  • machine 2 : 1 LC
  • machine 3 : 1 LC

Setup of machine 1

You’ve already done all the configuration in the first part of the tutorial! If you decide to run a Local Controller on this machine, we suggest not enabling the energy management.

Setup of machine 2

Install the snooze-deploy-localcluster scripts on this machine.
In the config file (by default /usr/share/snoozenode/configs/snooze_node.cfg), fill the following lines with the appropriate IP or hostname of machine 1. In the latter case, be sure to have DNS (or /etc/hosts) properly configured:

# Zookeeper
faultTolerance.zookeeper.hosts = machine1:2181

# Image repository
imageRepository.address = machine1

# Database
database.type = cassandra # if "memory" you don't need the following line
database.cassandra.hosts = machine1:9160

# Rabbitmq
external.notifier.address = machine1

Setup of machine 3

It’s the same configuration as machine 2.

NFS

The Local Controller nodes get the image disks through an NFS shared directory, so you will have to configure an NFS share between your three machines. We propose the following scenario:

Scenario :

  • machine 1 : NFS server, exports /var/lib/libvirt/images (default libvirt pool)
  • machine 2 : NFS client, mounts the exported directory on /var/lib/libvirt/images
  • machine 3 : NFS client, mounts the exported directory on /var/lib/libvirt/images

On machine 1

Configure the NFS server.
Check the snoozeimages configuration to be sure that the configured pool is a directory pool using /var/lib/libvirt/images/ as its backend.
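A minimal sketch of the export, assuming the clients sit on the 172.16.0.0/16 network (the range is illustrative):

# /etc/exports on machine 1 (illustrative network range)
/var/lib/libvirt/images 172.16.0.0/16(rw,sync,no_subtree_check)

Then reload the exports with exportfs -ra.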

On machines 2 & 3

Mount the NFS shared directory, for example as shown below.
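A minimal sketch, assuming the export shown in the previous section:

# mount the exported libvirt pool at the same path as on machine 1
mount -t nfs machine1:/var/lib/libvirt/images /var/lib/libvirt/images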
Check the snoozenode configuration file :

imageRepository.manager.source = /var/lib/libvirt/images
imageRepository.manager.destination = /var/lib/libvirt/images

3 Launch Snooze

On each machine, you’ll have to launch the following two commands (from the root of the local deployment scripts):

start_local_cluster.sh -l
start_local_cluster.sh -s
Focus on Hadoop (Thu, 09 Jan 2014)

The Snooze deployment on Grid’5000 comes with scripts for configuring a Hadoop cluster and launching benchmarks on it.

1 System deployment

You just have to follow the deployment procedure explained in the documentation.

After that you can launch some virtual machines. Since those virtual machines will host Hadoop services, we suggest setting the number of vcpus to at least 3 and the RAM to 3 GB.

2 Configure Hadoop

Once deployed, you will find the Hadoop deployment scripts on the first bootstrap:

$bootstrap) cd /tmp/snooze/experiments

You need to create a file containing the IP addresses of your virtual machines.
To achieve this, you can query the EC2 API for the list of running instances; this returns an XML document describing all the instances (and their IPs).
You then just have to parse the output to extract the IPs.

In the following code, we assume that you are connected to the first bootstrap and that the SnoozeEC2 service is running on this node.


$bootstrap) curl localhost:4001?Action=DescribeInstances > instances

The following code outputs the list of IP addresses of your running virtual machines. Redirect it to the file /tmp/snooze/experiments/tmp/virtual_machine_hosts.txt, as shown after the snippet.


require 'rexml/document'
include REXML
# instance file contains the output of "curl snoozeec2?Action=DescribeInstances"
file = File.new 'instances'
doc = Document.new file

XPath.each(doc, "//ipAddress"){ |item| puts item.text}
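Assuming the snippet above is saved as get_ips.rb (a hypothetical name):

$bootstrap) ruby get_ips.rb > /tmp/snooze/experiments/tmp/virtual_machine_hosts.txt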

Finally, go to /tmp/snooze/experiments/ and launch:

$bootstrap) ./experiments -m configure
--
[Snooze-Experiments] Configuration mode (normal, variable_data):
normal

3 Launch a benchmark

You can get the list of available benchmarks by typing:


$bootstrap) ./experiments -m benchmark
--
[Snooze-Experiments] Benchmark name (e.g. dfsio, dfsthroughput, mrbench, nnbench, pi, teragen, terasort, teravalidate, censusdata, censusbench, wikidata, wikibench):

Choose one and you’re done.

Snooze 2.1.1 released (Wed, 08 Jan 2014)

A patch of the Snooze system has been released under the 2.1.1 tag.

Changelog:

* snoozeec2 : fix compatibility with euca2ools.

* snoozecommon/snoozenode : fix a migration issue when using a backing file stored locally on the local controller.

* snooze-capistrano : remove the maint-2.1.0 and v2.1.0 stages. Introduce the latest stage.

Euca2ools compatibility

You can now use a subset of the euca2ools commands to interact with Snooze. The supported commands are:

  euca-describe-images
  euca-describe-instances
  euca-run-instances
  euca-terminate-instances
  euca-reboot-instances
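As a sketch, pointing euca2ools at a Snooze deployment only requires setting the endpoint (here the SnoozeEC2 port 4001 used elsewhere on this site; the credentials are illustrative dummy values):

# environment variables understood by euca2ools
export EC2_URL=http://localhost:4001
export EC2_ACCESS_KEY=snooze
export EC2_SECRET_KEY=snooze
euca-describe-instances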

Migration fix

In 2.1.0, with the backing disk parameter, migration was only possible when the master file and the virtual machine disk images were located in a shared directory. Now you can migrate virtual machines even if the virtual machine disk is local to the node. The master file must still reside in a shared directory.
Full disk migration will be investigated for the next release.

Snooze capistrano

We removed the stages v2.1.0 and maint-2.1.0 and introduced an abstract stage: latest (an alias for 2.1.1 here). This will lead to better maintainability of the scripts.
Some output has been improved as well.

How green is the deployment on grid’5000 ? (Thu, 19 Dec 2013)

I have lately been playing with the power meters installed at the Lyon site.
I’ve built a small web application to show the power consumption of the nodes in real time.

System setup

I deployed Snooze on the taurus cluster. Two of the 16 machines were out of order, so I deployed Snooze on the 14 remaining machines with the following setup:

  • 1 node : bootstrap
  • 3 nodes : group manager
  • 8 nodes : local controller
  • 2 nodes : cassandra

All the figures below depict the power consumption of the nodes during the deployment of Snooze. Only one node of each type is represented.

Let’s go through the different phases…

1 Respawning phase

At the time of the reservation, some of the nodes used in the deployment were shut down. The first phase is thus the respawning phase, which ensures that all 14 nodes are up and running before the deployment phase. This phase took approximately 4 minutes.

2 Deployment phase

Once all the nodes are up, a Debian Wheezy environment is deployed on the 14 nodes. This phase took approximately 6 minutes.

3 Puppet phase

Once the Wheezy environment is deployed on all the nodes, we configure them using different Puppet recipes.

  • First, RabbitMQ is installed on the bootstrap (taurus-1)
  • Then Cassandra is installed (here on taurus-9)
  • Snooze is installed on all the nodes (except taurus-9)
  • Finally NFS is configured

These steps are sequential; maybe we could parallelize them.

4 Post-install phase

In this phase we prepare the cluster by uploading base images into the image repository, configuring the network on compute hosts…

5 Cluster start phase

And finally, we start the cluster.

After that phase, we observed that without any load, the power consumption was about 1700 W in total (14 nodes).

Global view

Euca2ools support (Wed, 18 Dec 2013)

Release 2.1 introduced a small subset of the EC2 API methods.
This feature can be used to manage a Snooze deployment through libcloud (with the Eucalyptus provider). You can refer to the documentation for further information.

But support for the command-line tool *euca2ools* wasn’t complete at the time of the release.
That’s why we will soon propose a new version of the SnoozeEC2 system service adding support for *euca2ools*.
