add [--readd] [-s secondary_ip] {nodename}
Adds the given node to the cluster.
This command is used to join a new node to the cluster. You will have to provide the root password of the node in order to add it to the cluster. The command needs to be run on the Ganeti master.
Note that the command is potentially destructive, as it will forcibly join the specified host to the cluster, not paying attention to its current status (it could already be a member of another cluster, etc.).
The -s option is used in dual-home clusters and specifies the new node's IP address in the secondary network. See the discussion in gnt-cluster(8) for more information.
In case you're re-adding a node after a hardware failure, you can use the --readd parameter. In this case, you don't need to pass the secondary IP again; it will be reused from the cluster configuration. Also, the drained and offline flags of the node will be cleared before re-adding it.
Example:
# gnt-node add node5.example.com
# gnt-node add -s 192.0.2.5 node5.example.com
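A node that failed and was reinstalled can typically be re-added along these lines (hostname illustrative):
# gnt-node add --readd node5.example.com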
add-tags [--from file] {nodename} {tag...}
Add tags to the given node. If any of the tags contains invalid characters, the entire operation will abort.
If the --from option is given, the list of tags will be extended with the contents of that file (each line becomes a tag). In this case, there is no need to pass tags on the command line (if you do, both sources will be used). A file name of - will be interpreted as stdin.
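Example (tag names and file path illustrative):
# gnt-node add-tags node5.example.com webfarm rack1
# cat /tmp/tags.txt | gnt-node add-tags --from - node5.example.com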
evacuate [-f] [--early-release] [--iallocator NAME | --new-secondary destination_node] {node...}
This command will move all secondary instances away from the given node(s). It works only for instances having a drbd disk template.
The new location for the instances can be specified in two ways:
- as a single node for all instances, via the --new-secondary option, or
- via the --iallocator option, giving a script name as parameter, so each instance will in turn be placed on the (per the script) optimal node.
The --early-release option changes the code so that the old storage on the node being evacuated is removed early (before the resync is completed) and the internal Ganeti locks are also released for both the current secondary and the new secondary, thus allowing more parallelism in the cluster operation. This should be used only when recovering from a disk failure on the current secondary (thus the old storage is already broken) or when the storage on the primary node is known to be fine (thus we won't need the old storage for potential recovery).
Example:
# gnt-node evacuate -I dumb node3.example.com
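Alternatively, a fixed destination can be given for all evacuated instances (node names illustrative):
# gnt-node evacuate --new-secondary node4.example.com node3.example.com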
failover [-f] [--ignore-consistency] {node}
This command will fail over all instances having the given node as primary to their secondary nodes. This works only for instances having a drbd disk template.
Normally the failover will check the consistency of the disks before failing over the instance. If you are trying to migrate instances off a dead node, this will fail. Use the --ignore-consistency option for this purpose.
Example:
# gnt-node failover node1.example.com
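If the node is already dead, the consistency check can be skipped (node name illustrative):
# gnt-node failover --ignore-consistency node1.example.com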
info [node...]
Show detailed information about the nodes in the cluster. If you don't give any arguments, all nodes will be shown; otherwise the output will be restricted to the given names.
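Example (node names illustrative):
# gnt-node info node1.example.com node2.example.com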
list [--sync] [--no-headers] [--separator=SEPARATOR] [--units=UNITS] [-o [+]FIELD,...] [--roman] [node...]
Lists the nodes in the cluster.
The --no-headers option will skip the initial header line. The --separator option takes an argument which denotes what will be used between the output fields. Both these options are to help scripting.
The units used to display the numeric values in the output vary, depending on the options given. By default, the values will be formatted in the most appropriate unit. If the --separator option is given, then the values are shown in mebibytes to allow parsing by scripts. In both cases, the --units option can be used to enforce a given output unit.
By default, the query of nodes will be done in parallel with any running jobs. This might give inconsistent results for the free disk/memory. The --sync option can be used to grab locks for all the nodes and ensure a consistent view of the cluster (but this might stall the query for a long time).
If the --roman option is passed, gnt-node list will try to output some of its fields in a Latin-friendly way (using Roman numerals). This is not the default, for backwards-compatibility reasons.
The -o option takes a comma-separated list of output fields. The available fields and their meaning are:
name: the node name
pinst_cnt: the number of instances having this node as primary
pinst_list: the list of instances having this node as primary, comma separated
sinst_cnt: the number of instances having this node as a secondary node
sinst_list: the list of instances having this node as a secondary node, comma separated
pip: the primary IP of this node (used for cluster communication)
sip: the secondary IP of this node (used for data replication in dual-IP clusters, see gnt-cluster(8))
dtotal: total disk space in the volume group used for instance disk allocations
dfree: available disk space in the volume group
mtotal: total memory on the physical node
mnode: the memory used by the node itself
mfree: memory available for instance allocations
bootid: the node bootid value; this is a Linux-specific feature that assigns a new UUID to the node at each boot and can be used to detect node reboots (by tracking changes in this value)
tags: comma-separated list of the node's tags
serial_no: the so-called 'serial number' of the node; this is a numeric field that is incremented each time the node is modified, and it can be used to detect modifications
ctime: the creation time of the node; note that this field contains spaces and as such it's harder to parse; if this attribute is not present (e.g. when upgrading from older versions), then "N/A" will be shown instead
mtime: the last modification time of the node; note that this field contains spaces and as such it's harder to parse; if this attribute is not present (e.g. when upgrading from older versions), then "N/A" will be shown instead
uuid: the UUID of the node (generated automatically by Ganeti)
ctotal: the total number of logical processors
cnodes: the number of NUMA domains on the node, if the hypervisor can export this information
csockets: the number of physical CPU sockets, if the hypervisor can export this information
master_candidate: whether the node is a master candidate or not
drained: whether the node is drained or not; the cluster still communicates with drained nodes but excludes them from allocation operations
offline: whether the node is offline or not; if offline, the cluster does not communicate with offline nodes; useful for nodes that are not reachable in order to avoid delays
role: a condensed version of the node flags; this field will output a one-character field, with the following possible values:
  M for the master node
  C for a master candidate
  R for a regular node
  D for a drained node
  O for an offline node
If the value of the option starts with the character +, the new fields will be added to the default list. This allows one to quickly see the default list plus a few other fields, instead of retyping the entire list of fields.
Note that some of these fields are known from the configuration of the cluster (e.g. name, pinst, sinst, pip, sip), and thus the master does not need to contact the node for this data (making the listing fast if only fields from this set are selected), whereas the other fields are "live" fields and require a query to the cluster nodes.
Depending on the virtualization type and implementation details, the mtotal, mnode and mfree fields may have slightly varying meanings. For example, some solutions share the node memory with the pool of memory used for instances (KVM), whereas others have separate memory for the node and for the instances (Xen).
If no node names are given, then all nodes are queried. Otherwise, only the given nodes will be listed.
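As an illustration (node name illustrative, field names as listed above), a script-friendly query could look like:
# gnt-node list --no-headers --separator=: -o name,mfree,bootid node1.example.com
and the default field set can be extended in place:
# gnt-node list -o +ctotal,tags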
migrate [-f] [--non-live] [--migration-mode=live|non-live] {node}
This command will migrate all instances having the given node as primary to their secondary nodes. This works only for instances having a drbd disk template.
As for the gnt-instance migrate command, the options --non-live and --migration-mode can be given to influence the migration type.
Example:
# gnt-node migrate node1.example.com
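A non-live migration of all primary instances off a node could be requested along these lines (node name illustrative):
# gnt-node migrate --non-live node1.example.com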
modify [-f] [--submit] [--master-candidate=yes|no] [--drained=yes|no] [--offline=yes|no] [--auto-promote] {node}
This command changes the role of the node. Each option takes either a literal yes or no, and only one option should be given as yes. The meaning of the roles is described in the manpage ganeti(7).
In case a node is demoted from the master candidate role, the operation will be refused unless you pass the --auto-promote option. This option will cause the operation to lock all cluster nodes (thus it will not be able to run in parallel with most other jobs), but it allows automated maintenance of the cluster candidate pool. If locking all cluster nodes is too expensive, another option is to manually promote another node to master candidate before demoting the current one.
Example (setting a node offline, which will demote it from the master candidate role if it is in that role):
# gnt-node modify --offline=yes node1.example.com
Example (setting the node back to online and master candidate):
# gnt-node modify --offline=no --master-candidate=yes node1.example.com
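Example (demoting a node from master candidate while letting Ganeti automatically promote a replacement; node name illustrative):
# gnt-node modify --master-candidate=no --auto-promote node1.example.com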
remove {nodename}
Removes a node from the cluster. Instances must be removed or migrated to another cluster beforehand.
Example:
# gnt-node remove node5.example.com
remove-tags [--from file] {nodename} {tag...}
Remove tags from the given node. If any of the given tags does not exist on the node, the entire operation will abort.
If the --from option is given, the list of tags will be extended with the contents of that file (each line becomes a tag). In this case, there is no need to pass tags on the command line (if you do, both sources will be used). A file name of - will be interpreted as stdin.
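Example (tag names and file path illustrative):
# gnt-node remove-tags node5.example.com webfarm
# cat /tmp/tags.txt | gnt-node remove-tags --from - node5.example.com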
volumes [--no-headers] [--human-readable] [--separator=SEPARATOR] [--output=FIELDS] [node...]
Lists all logical volumes and their physical disks from the node(s) provided.
The --no-headers option will skip the initial header line. The --separator option takes an argument which denotes what will be used between the output fields. Both these options are to help scripting.
The units used to display the numeric values in the output vary, depending on the options given. By default, the values will be formatted in the most appropriate unit. If the --separator option is given, then the values are shown in mebibytes to allow parsing by scripts. In both cases, the --units option can be used to enforce a given output unit.
The -o option takes a comma-separated list of output fields. The available fields and their meaning are:
node: the node name on which the volume exists
phys: the physical drive (on which the LVM physical volume lives)
vg: the volume group name
name: the logical volume name
size: the logical volume size
instance: the name of the instance to which this volume belongs, or (in case it's an orphan volume) the character "-"
Example:
# gnt-node volumes node5.example.com
Node              PhysDev   VG    Name                                  Size Instance
node1.example.com /dev/hdc1 xenvg instance1.example.com-sda_11000.meta  128  instance1.example.com
node1.example.com /dev/hdc1 xenvg instance1.example.com-sda_11001.data  256  instance1.example.com
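A script-friendly variant, selecting only a few of the fields listed above, could look like:
# gnt-node volumes --no-headers --separator=: --output=node,name,size,instance node5.example.com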
list-storage [--no-headers] [--human-readable] [--separator=SEPARATOR] [--storage-type=STORAGE_TYPE] [--output=FIELDS] [node...]
Lists the available storage units and their details for the given node(s).
The --no-headers option will skip the initial header line. The --separator option takes an argument which denotes what will be used between the output fields. Both these options are to help scripting.
The units used to display the numeric values in the output vary, depending on the options given. By default, the values will be formatted in the most appropriate unit. If the --separator option is given, then the values are shown in mebibytes to allow parsing by scripts. In both cases, the --units option can be used to enforce a given output unit.
The --storage-type option can be used to choose a storage unit type. Possible choices are lvm-pv, lvm-vg or file.
The -o option takes a comma-separated list of output fields. The available fields and their meaning are:
node: the node name on which the storage unit exists
type: the type of the storage unit (currently just what is passed in via --storage-type)
name: the path/identifier of the storage unit
size: total size of the unit; for the file type see a note below
used: used space in the unit; for the file type see a note below
free: available disk space
allocatable: whether the unit is available for allocation (only lvm-pv can change this setting, the other types always report true)
Note that for the "file" type, the total disk space might not equal the sum of used and free, due to the method Ganeti uses to compute each of them. The total and free values are computed as the total and free space values for the filesystem to which the directory belongs, whereas the used space is computed from the used space under that directory only, which is not necessarily the root of the filesystem; as such, there could be files outside the file storage directory using disk space and causing a mismatch in the values.
Example:
node1# gnt-node list-storage node2
Node  Type   Name      Size   Used Free   Allocatable
node2 lvm-pv /dev/sda7 673.8G 1.5G 672.3G Y
node2 lvm-pv /dev/sdb1 698.6G 0M   698.6G Y
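To restrict the listing to one storage unit type, for instance only volume groups:
node1# gnt-node list-storage --storage-type=lvm-vg node2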
modify-storage [--allocatable=yes|no] {node} {storage-type} {volume-name}
Modifies storage volumes on a node. Only LVM physical volumes can be modified at the moment. They have a storage type of "lvm-pv".
Example:
# gnt-node modify-storage --allocatable no node5.example.com lvm-pv /dev/sdb1
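The volume can later be made allocatable again in the same way:
# gnt-node modify-storage --allocatable yes node5.example.com lvm-pv /dev/sdb1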
repair-storage [--ignore-consistency] {node} {storage-type} {volume-name}
Repairs a storage volume on a node. Only LVM volume groups can be repaired at this time. They have the storage type "lvm-vg".
On LVM volume groups, repair-storage runs "vgreduce --removemissing".
Running this command can lead to data loss. Use it with care.
The --ignore-consistency option will ignore any inconsistent disks (on the nodes paired with this one). Use of this option is most likely to lead to data loss.
Example:
# gnt-node repair-storage node5.example.com lvm-vg xenvg
powercycle [--yes] [--force] {node}
This command (tries to) forcefully reboot a node. It can be used if the node environment is broken, such that the admin can no longer log in over SSH, but the Ganeti node daemon is still working.
Note that this command is not guaranteed to work; how effective the reboot attempt is depends on the hypervisor. For Linux, this command requires the kernel option CONFIG_MAGIC_SYSRQ to be enabled.
The --yes option can be used to skip confirmation, while the --force option is needed if the target node is the master node.
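Example (skipping the confirmation prompt; hostname illustrative):
# gnt-node powercycle --yes node5.example.com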
Report bugs to http://code.google.com/p/ganeti/ or contact the developers using the Ganeti mailing list <ganeti@googlegroups.com>.
Ganeti overview and specifications: ganeti(7) (general overview), ganeti-os-interface(7) (guest OS definitions).
Ganeti commands: gnt-cluster(8) (cluster-wide commands), gnt-job(8) (job-related commands), gnt-node(8) (node-related commands), gnt-instance(8) (instance commands), gnt-os(8) (guest OS commands), gnt-backup(8) (instance import/export commands), gnt-debug(8) (debug commands).
Ganeti daemons: ganeti-watcher(8) (automatic instance restarter), ganeti-cleaner(8) (job queue cleaner), ganeti-noded(8) (node daemon), ganeti-masterd(8) (master daemon), ganeti-rapi(8) (remote API daemon).
Copyright (C) 2006, 2007, 2008, 2009, 2010 Google Inc. Permission is granted to copy, distribute and/or modify under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
On Debian systems, the complete text of the GNU General Public License can be found in /usr/share/common-licenses/GPL.