Snooze 2.1.4 is out
http://snooze.inria.fr/snooze-2-1-4-is-out/
Fri, 25 Apr 2014 12:10:18 +0000

Today Snooze version 2.1.4 has been released.

What’s new in this version?

1 Snoozeclient

The command line interface for Snooze is supported again. It comes with new commands:

  • images : list all the available images
  • hosts : list all the hosts
  • dump : dump the hierarchy and check the system status

The standalone visualizer has been removed; it is replaced by the one already integrated into the Snooze web interface.

At a glance:

  • Define a new cluster
$) snoozeclient define -vcn test
    Cluster has been defined!
    
  • List images
$) snoozeclient images
    Name                                	  Capacity        	 Allocation      	 Format          	 BackingStore   
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    debian-hadoop-context-big.qcow2     	  53687091200     	 2020872192      	 qcow2           	 null           
    context.iso                         	  403456          	 405504          	 iso             	 null           
    resilin-base.raw                    	  47100985344     	 2028998656      	 qcow2           	 null           
    
  • Add a virtual machine to this cluster
  • $) snoozeclient add -vcn test -iid debian-hadoop-context-big.qcow2
    Add command successfull!
    

Note that optional parameters allow you to specify a name, the amount of vCPUs and memory, or the network requested.

  • Start the cluster
  • $) snoozeclient start -vcn test
    Name                      	 VM address      	 GM address      	 LC address      	 Status    
    ---------------------------------------------------------------------------------------------------------------
    88524977-cc30-4e91-b4e7-d 	 10.164.0.101    	 172.16.129.9    	 172.16.130.25   	 RUNNING 
    
  • Dump the hierarchy
  • $) snoozeclient dump
    GL : 0
    	 GM : 52415ef7-f0c1-4001-a58d-1c4f83e49d45
    	 	 LC : 341c6ccf-06a2-4020-a13d-01a659bfe887
    	 	 	 VM : de5bfe11-09ac-43b2-8b28-8e1a5afcf9d6
    	 	 LC : e7365665-41bc-45c5-b7b6-520448c5e5bd
    	 	 	 VM : a0bda947-e309-4af7-b1a1-f1087fd50cf6
    	 	 	 VM : f55d612e-ada3-4db9-b39a-66bcb2123e30
    
    

2 Other services

The other services didn’t change much in this release; see the changelog below:

  • Snoozenode
    • Implement searchVirtualMachine in GroupManagerCassandraRepository.
    • Implement getLocalControllerList in GroupManagerResource.

  • Snoozecommon
    • Implement migration in the restlet communicator.
    

3 snooze-deploy-localcluster

The scripts now include a one-line installer: see here

Snooze local deployment how to
http://snooze.inria.fr/snooze-local-deployment-how-to/
Mon, 20 Jan 2014 09:50:00 +0000

In this post I go through the local deployment of the Snooze cluster and show how it can be used to run multiple snoozenode services on multiple machines.

Requirements

  • A switch with multicast routing enabled
  • Hardware supporting the KVM or Xen hypervisor
  • A Linux-based operating system
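Before starting, it is worth checking the hardware requirement. A minimal sketch for the KVM case, using standard Linux paths:

```shell
# Check whether the CPU advertises hardware virtualization,
# which KVM needs: the "vmx" flag on Intel, "svm" on AMD.
if grep -q -E 'vmx|svm' /proc/cpuinfo; then
    echo "hardware virtualization: available"
else
    echo "hardware virtualization: not detected"
fi

# The /dev/kvm device appears once the kvm kernel modules are loaded.
[ -e /dev/kvm ] && echo "/dev/kvm: present" || echo "/dev/kvm: missing"
```

If virtualization is not detected, it may also simply be disabled in the BIOS/UEFI settings.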

1 Snooze on one machine

First follow the local deployment tutorial in the documentation. This will guide you through the set-up of your local cluster.

2 Snooze on three machines

If you have successfully set up Snooze on a single machine, installing Snooze on three machines is pretty straightforward. In order to have a realistic deployment, only one Local Controller instance must run on each server. We propose the following topology for the deployment.

Scenario :

  • machine 1 : 1 Bootstrap (BS), 2 Group Managers (GMs), snoozeimage, snoozeec2, Cassandra, RabbitMQ
  • machine 2 : 1 Local Controller (LC)
  • machine 3 : 1 Local Controller (LC)

Setup of machine 1

You’ve already done all of this machine’s configuration in the first part of the tutorial. If you decide to also run a Local Controller on this machine, we suggest not activating the energy management.

Setup of machine 2

Install the snooze-deploy-localcluster scripts on this machine.
In the config file (by default /usr/share/snoozenode/configs/snooze_node.cfg), fill in the following lines with the IP address or hostname of machine 1. In the latter case, be sure that DNS (or /etc/hosts) is properly configured:

# Zookeeper
faultTolerance.zookeeper.hosts = machine1:2181

# Image repository
imageRepository.address = machine1

# Database
database.type = cassandra # if "memory" you don't need the following line
database.cassandra.hosts = machine1:9160

# Rabbitmq
external.notifier.address = machine1

Setup of machine 3

It’s the same configuration as machine 2.

NFS

The Local Controller nodes get the image disks through an NFS shared directory, so you will have to configure an NFS share between your three machines. We propose the following scenario:

Scenario :

  • machine 1 : NFS server, exports /var/lib/libvirt/images (the default libvirt pool)
  • machine 2 : NFS client, mounts the exported directory at /var/lib/libvirt/images
  • machine 3 : NFS client, mounts the exported directory at /var/lib/libvirt/images

On machine 1

Configure the NFS server.
Check the snoozeimages configuration to make sure that the configured pool is a directory pool using /var/lib/libvirt/images/ as its backend.
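As a sketch, the export boils down to one line in /etc/exports on machine 1, applied with `exportfs -ra`. The client subnet below is an assumption for illustration; substitute the network your three machines actually share:

```
# /etc/exports on machine 1 -- share the default libvirt pool.
# The client subnet is an assumption; substitute your own network.
/var/lib/libvirt/images 172.16.0.0/16(rw,sync,no_subtree_check)
```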

On machine 2 & 3

Mount the NFS shared directory, then check the snoozenode configuration file:

imageRepository.manager.source = /var/lib/libvirt/images
imageRepository.manager.destination = /var/lib/libvirt/images
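On the client side, the mount can be made persistent with one /etc/fstab line; `machine1` again stands for machine 1's IP or hostname. A one-shot `mount machine1:/var/lib/libvirt/images /var/lib/libvirt/images` achieves the same thing until the next reboot.

```
# /etc/fstab on machines 2 and 3 -- mount machine 1's libvirt pool.
# "machine1" is a placeholder for machine 1's IP or hostname.
machine1:/var/lib/libvirt/images  /var/lib/libvirt/images  nfs  defaults  0  0
```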

3 Launch snooze

On each machine, run the following two commands (from the root of the local deployment scripts):

start_local_cluster.sh -l
start_local_cluster.sh -s