
Class LUGroupAssignNodes (ganeti.cmdlib)



Logical unit for assigning nodes to groups.

Instance Methods

 ExpandNames(self)
   Expand names for this LU.

 DeclareLocks(self, level)
   Declare LU locking needs for a level.

 CheckPrereq(self)
   Check prerequisites.

 Exec(self, feedback_fn)
   Assign nodes to a new group.

Inherited from NoHooksLU: BuildHooksEnv, BuildHooksNodes

Inherited from LogicalUnit: CheckArguments, HooksCallBack, __init__

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Static Methods

 CheckAssignmentForSplitInstances(changes, node_data, instance_data) -> a two-tuple
   Check for split instances after a node assignment.

Class Variables
  REQ_BGL = False

Inherited from NoHooksLU: HPATH, HTYPE

Instance Variables

Inherited from LogicalUnit: dry_run_result

Properties

Inherited from object: __class__

Method Details

ExpandNames(self)


Expand names for this LU.

This method is called before starting to execute the opcode, and it should update all the parameters of the opcode to their canonical form (e.g. a short node name must be fully expanded after this method has successfully completed). This way locking, hooks, logging, etc. can work correctly.

LUs that implement this method must also populate the self.needed_locks member, a dict with lock levels as keys and lists of needed lock names as values. Rules:

  • use an empty dict if you don't need any lock
  • if you don't need any lock at a particular level, omit that level
  • don't put anything for the BGL level
  • if you want all locks at a level, use locking.ALL_SET as the value

If you need to share locks (rather than acquire them exclusively) at one level, you can modify self.share_locks, setting a true value (usually 1) for that level; see the last example below. By default locks are not shared.

This function can also define a list of tasklets, which will then be executed in order instead of the usual LU-level CheckPrereq and Exec functions, if those are not defined by the LU.

Examples:

 # Acquire all nodes and one instance
 self.needed_locks = {
   locking.LEVEL_NODE: locking.ALL_SET,
   locking.LEVEL_INSTANCE: ['instance1.example.com'],
 }
 # Acquire just two nodes
 self.needed_locks = {
   locking.LEVEL_NODE: ['node1.example.com', 'node2.example.com'],
 }
 # Acquire no locks
 self.needed_locks = {} # No, you can't leave it to the default value None
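
The share_locks mechanism described above has no example on this page; a minimal sketch, assuming a read-only LU for which shared node locks are sufficient:

 # Acquire all node locks, but in shared mode
 self.needed_locks = {
   locking.LEVEL_NODE: locking.ALL_SET,
 }
 self.share_locks[locking.LEVEL_NODE] = 1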
Overrides: LogicalUnit.ExpandNames
(inherited documentation)

DeclareLocks(self, level)


Declare LU locking needs for a level.

While most LUs can just declare their locking needs at ExpandNames time, some locks can only be calculated after the locks at lower levels have been acquired. This function is called just before acquiring locks at a particular level, but after acquiring the ones at lower levels, and permits such calculations. It can be used to modify self.needed_locks; by default it does nothing.

This function is only called if you have something already set in self.needed_locks for the level.

Parameters:
  • level - Locking level which is going to be locked
Overrides: LogicalUnit.DeclareLocks
(inherited documentation)
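
A hypothetical sketch of the pattern this enables, calculating instance locks only once the node locks declared in ExpandNames are held; the helper self.cfg.GetNodeInstances is an assumption for illustration, not taken from this page:

 def DeclareLocks(self, level):
   if level == locking.LEVEL_INSTANCE:
     # The node locks are already acquired at this point, so the set
     # of affected instances can now be computed from them
     nodes = self.needed_locks[locking.LEVEL_NODE]
     self.needed_locks[locking.LEVEL_INSTANCE] = (
       self.cfg.GetNodeInstances(nodes))  # assumed helper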

CheckPrereq(self)


Check prerequisites.

Overrides: LogicalUnit.CheckPrereq

Exec(self, feedback_fn)


Assign nodes to a new group.

Overrides: LogicalUnit.Exec
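
A hedged sketch of how Exec could combine the assignment with the split check documented below; the attributes (self.op.nodes, self.group_uuid, self.node_data, self.instance_data) and the config helper are assumptions, not the actual Ganeti implementation:

 def Exec(self, feedback_fn):
   changes = [(node, self.group_uuid) for node in self.op.nodes]
   # Warn about instances that this assignment would split
   new_split, prev_split = self.CheckAssignmentForSplitInstances(
     changes, self.node_data, self.instance_data)
   if new_split:
     feedback_fn("Warning: the following instances get split: %s" %
                 ", ".join(sorted(new_split)))
   self.cfg.AssignGroupNodes(changes)  # assumed config helper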

CheckAssignmentForSplitInstances(changes, node_data, instance_data)
Static Method


Check for split instances after a node assignment.

This method considers a series of node assignments as an atomic operation and returns information about split instances after applying the set of changes.

In particular, it returns information about newly split instances, and about instances that were already split and remain so after the change.

Only instances whose disk template is listed in constants.DTS_INT_MIRROR are considered.

Parameters:
  • changes (list of (node_name, new_group_uuid) pairs) - list of node assignments to consider.
  • node_data - a dict with data for all nodes
  • instance_data - a dict with all instances to consider
Returns: a two-tuple
a list of instances that were previously whole and become split as a consequence of this change, and a list of instances that were previously split and that this change does not fix.
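
A self-contained sketch of the check described above. The node/instance record shapes (group, primary_node, secondary_nodes, disk_template) and the DTS_INT_MIRROR stand-in are assumptions chosen for illustration; the real types live in Ganeti's objects and constants modules:

 from collections import namedtuple

 Node = namedtuple("Node", ["group"])
 Instance = namedtuple("Instance",
                       ["name", "disk_template", "primary_node",
                        "secondary_nodes"])
 DTS_INT_MIRROR = frozenset(["drbd8"])  # assumed stand-in

 def CheckAssignmentForSplitInstances(changes, node_data, instance_data):
   """Return (newly split instances, instances that stay split)."""
   changed = dict(changes)  # node name -> new group UUID

   def NewGroup(node_name):
     # Pending assignments take precedence over the current group
     return changed.get(node_name, node_data[node_name].group)

   def Groups(inst, group_fn):
     nodes = [inst.primary_node] + list(inst.secondary_nodes)
     return set(group_fn(n) for n in nodes)

   new_split, prev_split = [], []
   for inst in instance_data.values():
     if inst.disk_template not in DTS_INT_MIRROR:
       continue  # only internally mirrored instances can be split
     was = len(Groups(inst, lambda n: node_data[n].group)) > 1
     now = len(Groups(inst, NewGroup)) > 1
     if now and not was:
       new_split.append(inst.name)
     elif now and was:
       prev_split.append(inst.name)
   return (new_split, prev_split)

 # Example: moving node2 out of group-A splits the mirrored instance
 nodes = {"node1": Node("group-A"), "node2": Node("group-A")}
 insts = {"inst1": Instance("inst1", "drbd8", "node1", ["node2"])}
 print(CheckAssignmentForSplitInstances([("node2", "group-B")],
                                        nodes, insts))
 # -> (['inst1'], [])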