Snooze
http://snooze.inria.fr

Snooze 2.1.3 released, Snooze 3 coming soon (Wed, 02 Apr 2014)
http://snooze.inria.fr/snooze-2-1-3-released-snooze-3-coming-soon/

Snooze 2.1.3 has been released.

See the changelog below:

#87 Add a "test" hypervisor to the template generation chain.
This makes testing template generation easier.
#83 Fix a serial console template issue.
This fixes compatibility with recent libvirt versions.
#81 Handle image destruction with the backingImageManager when the source and destination paths are identical.
This bug appeared when using NFS to store backing and diff disk files.
#82 Return the virtual machine list even when the in-memory database is used.
This was mostly a missing feature.

The next version of Snooze has been pushed to the develop branch.
It will be a major release introducing a more complete plugin mechanism.
We are now working on demonstrating the modularity of the system by plugging in different algorithms.
This will come soon 😉

Snooze local deployment how to (Mon, 20 Jan 2014)
http://snooze.inria.fr/snooze-local-deployment-how-to/

In this post I walk through the local deployment of a Snooze cluster and show how the same scripts can be used to run multiple snoozenode instances on multiple machines.

Requirements

  • Multicast-routing enabled switch
  • Hardware supporting the KVM/Xen hypervisor (see the quick check below)
  • Linux-based operating system
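
As a quick sanity check for the KVM requirement, the following commands should do (a sketch for a Linux host; a non-zero count from the first command means the CPU exposes hardware virtualization):

# Count the hardware virtualization flags (Intel VT-x: vmx, AMD-V: svm)
egrep -c '(vmx|svm)' /proc/cpuinfo

# Check that the KVM kernel modules are loaded
lsmod | grep kvm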

1 Snooze on one machine

First follow the local deployment tutorial in the documentation. This will guide you through the set-up of your local cluster.

2 Snooze on three machines

If you have successfully set up Snooze on a single machine, installing Snooze on three machines is straightforward. For a realistic deployment, only one Local Controller instance must run on each server. We propose the following topology for the deployment.

Scenario:

  • machine 1: 1 BS (bootstrap), 2 GMs (group managers), snoozeimage, snoozeec2, Cassandra, RabbitMQ
  • machine 2: 1 LC (local controller)
  • machine 3: 1 LC

Setup of machine 1

You have already done all the configuration for this machine in the first part of the tutorial! If you decide to also run a Local Controller on this machine, we suggest not activating energy management.

Setup of machine 2

Install the snooze-deploy-localcluster scripts on this machine.
In the config file (by default /usr/share/snoozenode/configs/snooze_node.cfg), fill in the following lines with the appropriate IP address or hostname of machine 1. In the latter case, be sure to have DNS (or /etc/hosts) properly configured:

# Zookeeper
faultTolerance.zookeeper.hosts = machine1:2181

# Image repository
imageRepository.address = machine1

# Database
database.type = cassandra # if "memory" you don't need the following line
database.cassandra.hosts = machine1:9160

# Rabbitmq
external.notifier.address = machine1
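
Before starting the node, you can check that machine 2 actually reaches the services on machine 1. A minimal sketch using netcat, with the ports taken from the configuration above (the RabbitMQ port is an assumption, 5672 being the AMQP default):

# Zookeeper
nc -z machine1 2181
# Cassandra (only if database.type = cassandra)
nc -z machine1 9160
# RabbitMQ (default AMQP port, adjust if yours differs)
nc -z machine1 5672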

Setup of machine 3

It’s the same configuration as machine 2.

NFS

The Local Controller nodes get the disk images through an NFS shared directory, so you will have to configure an NFS share between your three machines. We propose the following scenario.

Scenario:

  • machine 1: NFS server, exports /var/lib/libvirt/images (the default libvirt pool)
  • machine 2: NFS client, mounts the exported directory at /var/lib/libvirt/images
  • machine 3: NFS client, mounts the exported directory at /var/lib/libvirt/images

On machine 1

Configure the NFS server.
Check the snoozeimages configuration to be sure that the configured pool is a directory pool using /var/lib/libvirt/images/ as its backend.
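
As an illustration, a minimal export could look like this (the subnet is an assumption, replace it with the one of your cluster):

# /etc/exports on machine 1
/var/lib/libvirt/images 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

Then reload the export table, and optionally verify the libvirt pool path (assuming the standard pool named default):

exportfs -ra
virsh pool-dumpxml default | grep path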

On machine 2 & 3

Mount the NFS shared directory.
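
For example, assuming machine 1 is known as machine1:

mount -t nfs machine1:/var/lib/libvirt/images /var/lib/libvirt/images

To make the mount persistent across reboots, the corresponding /etc/fstab line would be:

machine1:/var/lib/libvirt/images /var/lib/libvirt/images nfs defaults 0 0
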
Check the snoozenode configuration file:

imageRepository.manager.source = /var/lib/libvirt/images
imageRepository.manager.destination = /var/lib/libvirt/images

3 Launch Snooze

On each machine, you’ll have to launch the following two commands (from the root of the local deployment script):

start_local_cluster.sh -l
start_local_cluster.sh -s
Focus on Hadoop (Thu, 09 Jan 2014)
http://snooze.inria.fr/focus-on-hadoop/

The Snooze deployment on Grid’5000 comes with scripts for configuring a Hadoop cluster and launching benchmarks on it.

1 System deployment

You just have to follow the deployment procedure explained in the documentation.

After that you can launch some virtual machines. Since those virtual machines will host Hadoop services, we suggest setting the number of vCPUs to at least 3 and the RAM to 3 GB.

2 Configure Hadoop

Once the system is deployed, you will find the Hadoop deployment scripts on the first bootstrap:

$bootstrap) cd /tmp/snooze/experiments

You need to create a file containing the IP addresses of your virtual machines.
To achieve this, you can query the EC2 API for the list of running instances; this returns an XML document describing all the instances (and their IPs).
You then just have to parse the output to extract the IPs.

In the following code, we assume that you are connected to the first bootstrap and that the SnoozeEC2 service is running on this node.


$bootstrap) curl localhost:4001?Action=DescribeInstances > instances

The following Ruby snippet outputs the list of IP addresses of your running virtual machines. Redirect its output to the file /tmp/snooze/experiments/tmp/virtual_machine_hosts.txt.


require 'rexml/document'
include REXML

# 'instances' contains the output of "curl localhost:4001?Action=DescribeInstances"
file = File.new('instances')
doc = Document.new(file)

# Print the IP address of every instance, one per line
XPath.each(doc, '//ipAddress') { |item| puts item.text }
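
Assuming the snippet above is saved as parse_ips.rb (the name is arbitrary), the whole extraction then boils down to:

$bootstrap) ruby parse_ips.rb > /tmp/snooze/experiments/tmp/virtual_machine_hosts.txt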

Finally, go to /tmp/snooze/experiments/ and launch:

$bootstrap) ./experiments -m configure
--
[Snooze-Experiments] Configuration mode (normal, variable_data):
normal

3 Launch a benchmark

You can get the list of available benchmarks by typing:


$bootstrap) ./experiments -m benchmark
--
[Snooze-Experiments] Benchmark name (e.g. dfsio, dfsthroughput, mrbench, nnbench, pi, teragen, terasort, teravalidate, censusdata, censusbench, wikidata, wikibench):

Choose one and you’re done.

Snooze 2.1.1 released (Wed, 08 Jan 2014)
http://snooze.inria.fr/snooze-2-1-1-released/

A patch release of the Snooze system has been published under the 2.1.1 tag.

Changelog:

* snoozeec2: fix compatibility with euca2ools.

* snoozecommon/snoozenode: fix a migration issue when using a backing file stored locally on the local controller.

* snooze-capistrano: remove the maint-2.1.0 and v2.1.0 stages. Introduce the latest stage.

Euca2ools compatibility

You can now use a subset of the euca2ools commands to interact with Snooze. The supported commands are:

  euca-describe-images
  euca-describe-instances
  euca-run-instances
  euca-terminate-instances
  euca-reboot-instances
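
euca2ools reads its endpoint from the EC2_URL environment variable, so a minimal session could look like this (assuming the snoozeec2 service listens on port 4001 on machine1, as in the local deployment post):

export EC2_URL=http://machine1:4001
euca-describe-images
euca-describe-instances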

Migration fix

In 2.1.0, with the backing disk parameter, migration was only possible when the master file and the virtual machine disk images were located in a shared directory. Now you can migrate a virtual machine even if its disk is local to the node. The master file must still reside in a shared directory.
Full disk migration will be investigated for the next release.

Snooze capistrano

We removed the v2.1.0 and maint-2.1.0 stages and introduced an abstract stage: latest (an alias for 2.1.1 here). This will make the scripts easier to maintain.
Some output has been improved as well.
