    # gnt-cluster init cluster1.example.com
    # gnt-node add node2.example.com
    # gnt-instance add -n node2.example.com \
    > -o debootstrap --disk 0:size=30g \
    > -t plain instance1.example.com
The Ganeti software manages the physical nodes and virtual instances of a cluster built on top of virtualization software. The current version (2.2) supports Xen 3.x and KVM (version 72 or above) as hypervisors.
First you must install the software on all the cluster nodes, either from sources or (if available) from a package. The next step is to create the initial cluster configuration, using gnt-cluster init.
Then you can add other nodes, or start creating instances.
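A quick way to confirm that the new cluster is healthy after these steps is to run a few read-only commands; a minimal check, reusing the example names from the quick-start commands above, might look like this:

    # gnt-cluster verify
    # gnt-node list
    # gnt-instance list

gnt-cluster verify reports inconsistencies across the cluster, while the two list commands show the known nodes and instances and their current state.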
In Ganeti 2.0, the architecture of the cluster is a little more complicated than in 1.2. The cluster is coordinated by a master daemon (ganeti-masterd(8)), running on the master node. Each node runs (as before) a node daemon, and the master has the RAPI daemon running too.
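As a quick illustration of this split, any node of the cluster can report which node currently holds the master role (and therefore runs the master and RAPI daemons):

    # gnt-cluster getmaster

The answer is read from locally distributed configuration values, so it can be run on any node, not just the master.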
Each node can be in one of the following roles (the example after this list shows how roles can be changed):

master: Only one node per cluster can be in this role; it holds the authoritative copy of the cluster configuration and is the only node that can actually execute commands on the cluster and modify the cluster state. See more details under Cluster configuration.

master candidate: The node receives the full cluster configuration (configuration file and jobs) and can become a master via the gnt-cluster master-failover command. Nodes that are not in this role cannot transition into the master role because they lack that state.

regular: This is the normal role for a node.

drained: Nodes in this role are functioning normally but cannot receive new instances, because the intention is to set them offline or to remove them from the cluster.

offline: These nodes are still recorded in the Ganeti configuration, but except for the master daemon's startup voting procedure they are not actually contacted by the master. This role was added to allow broken machines that are being repaired to remain in the cluster without causing problems.
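Node roles can be changed at runtime with gnt-node modify, and the master role can be moved with gnt-cluster master-failover. As a sketch (node2.example.com and node3.example.com are just example names), promoting one node to master candidate and draining another could look like:

    # gnt-node modify --master-candidate=yes node2.example.com
    # gnt-node modify --drained=yes node3.example.com
    # gnt-cluster master-failover

Note that gnt-cluster master-failover has to be run on the master candidate that should take over the master role, not on the current master.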
The master node keeps, and is responsible for, the cluster configuration. On the filesystem, this is stored under the /usr/local/var/lib/ganeti directory, and if the master daemon is stopped it can be backed up normally.
The master daemon replicates the configuration database, called config.data, and the job files to all nodes in the master candidate role. It also distributes a copy of some configuration values via the ssconf files, which are stored in the same directory and start with the ssconf_ prefix, to all nodes.
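A simple way to back up this data, as described above, is to stop the master daemon and archive the data directory; a minimal sketch (the archive path is arbitrary, and the data directory depends on how Ganeti was installed, e.g. /var/lib/ganeti for distribution packages):

    # tar czf /root/ganeti-backup.tar.gz /usr/local/var/lib/ganeti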
All cluster modifications are done via jobs. A job consists of one or more opcodes, and the list of opcodes is processed serially. If an opcode fails, the entire job fails and later opcodes are no longer processed. A job can be in one of the following states (the example after this list shows how to inspect and manage jobs):

queued: The job has been submitted but not yet processed by the master daemon.

waiting: The job is waiting for locks before the first of its opcodes can start.

canceling: The job is waiting for locks, but has been marked for cancellation. It will not transition to running, but to canceled.

running: The job is currently being executed.

canceled: The job has been canceled before starting execution.

success: The job has finished successfully.

error: The job has failed during runtime, or the master daemon has been stopped during the job's execution.
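Jobs can be listed, inspected, and (while still queued or waiting) canceled with the gnt-job command; a brief sketch, where 42 is just a placeholder job ID:

    # gnt-job list
    # gnt-job info 42
    # gnt-job cancel 42

Canceling only affects jobs that have not yet started running, which matches the canceling and canceled states described above.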
Report bugs to http://code.google.com/p/ganeti/ or contact the developers using the Ganeti mailing list <ganeti@googlegroups.com>.
Ganeti overview and specifications: ganeti(7) (general overview), ganeti-os-interface(7) (guest OS definitions).
Ganeti commands: gnt-cluster(8) (cluster-wide commands), gnt-job(8) (job-related commands), gnt-node(8) (node-related commands), gnt-instance(8) (instance commands), gnt-os(8) (guest OS commands), gnt-backup(8) (instance import/export commands), gnt-debug(8) (debug commands).
Ganeti daemons: ganeti-watcher(8) (automatic instance restarter), ganeti-cleaner(8) (job queue cleaner), ganeti-noded(8) (node daemon), ganeti-masterd(8) (master daemon), ganeti-rapi(8) (remote API daemon).
Copyright (C) 2006, 2007, 2008, 2009, 2010 Google Inc. Permission is granted to copy, distribute and/or modify under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
On Debian systems, the complete text of the GNU General Public License can be found in /usr/share/common-licenses/GPL.