LUClusterVerifyGroup

Verifies the status of a node group.
Nested Classes:
    NodeImage - A class representing the logical and physical status of a node.
Class Variables:
    HPATH = "cluster-verify"
    HTYPE = "CLUSTER"
    REQ_BGL = False
    _HOOKS_INDENT_RE = re.compile("^", re.M)

Method Details
Expand names for this LU.

This method is called before starting to execute the opcode, and it should update all the parameters of the opcode to their canonical form (e.g. a short node name must be fully expanded after this method has successfully completed). This way locking, hooks, logging, etc. can work correctly.

LUs which implement this method must also populate the self.needed_locks member, as a dict with lock levels as keys, and a list of needed lock names as values. Rules:

    - use an empty dict if you don't need any lock
    - if you don't need any lock at a particular level, omit that level (note that in this case DeclareLocks won't be called at all for that level)
    - if you need locks at a level, but you can't calculate them in this function, initialise that level with an empty list and do further processing in DeclareLocks (see that function's docstring)
    - don't put anything for the BGL level
    - if you want all locks at a level, use locking.ALL_SET as a value

If you need to share locks (rather than acquire them exclusively) at one level, you can modify self.share_locks, setting a true value (usually 1) for that level. By default locks are not shared.

This function can also define a list of tasklets, which then will be executed in order instead of the usual LU-level CheckPrereq and Exec functions, if those are not defined by the LU.

Examples:

    # Acquire all nodes and one instance
    self.needed_locks = {
      locking.LEVEL_NODE: locking.ALL_SET,
      locking.LEVEL_INSTANCE: ['instance1.example.com'],
    }
    # Acquire just two nodes
    self.needed_locks = {
      locking.LEVEL_NODE: ['node1.example.com', 'node2.example.com'],
    }
    # Acquire no locks
    self.needed_locks = {}
    # No, you can't leave it to the default value None
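The share_locks mechanism is easiest to see in a runnable sketch. Everything below is illustrative rather than Ganeti source: _FakeLocking stands in for the real locking module, and MyLU for a concrete LU.

    # A minimal sketch, assuming a stand-in for Ganeti's locking module.
    class _FakeLocking:
        LEVEL_INSTANCE = 1
        LEVEL_NODE = 2
        ALL_SET = None  # sentinel: "every lock at this level"

    locking = _FakeLocking()

    class MyLU:
        def ExpandNames(self):
            # All node locks, plus one instance lock.
            self.needed_locks = {
                locking.LEVEL_NODE: locking.ALL_SET,
                locking.LEVEL_INSTANCE: ["instance1.example.com"],
            }
            # Locks default to exclusive; a true value (usually 1) for a
            # level makes the locks at that level shared instead.
            self.share_locks = {locking.LEVEL_NODE: 1}

    lu = MyLU()
    lu.ExpandNames()
    print(lu.needed_locks, lu.share_locks)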
Declare LU locking needs for a level.

While most LUs can just declare their locking needs at ExpandNames time, sometimes there's the need to calculate some locks after having acquired the ones before. This function is called just before acquiring locks at a particular level, but after acquiring the ones at lower levels, and permits such calculations. It can be used to modify self.needed_locks, and by default it does nothing.

This function is only called if you have something already set in self.needed_locks for the level.
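A minimal sketch of this two-phase pattern, assuming an invented helper _nodes_of_instance in place of a real configuration lookup:

    LEVEL_INSTANCE = 1
    LEVEL_NODE = 2

    class MyLU:
        def ExpandNames(self):
            self.needed_locks = {
                LEVEL_INSTANCE: ["instance1.example.com"],
                LEVEL_NODE: [],  # computed later, in DeclareLocks
            }

        def DeclareLocks(self, level):
            if level == LEVEL_NODE:
                # The instance locks are held by now, so the instances'
                # node lists can no longer change under us.
                self.needed_locks[LEVEL_NODE] = [
                    node
                    for inst in self.needed_locks[LEVEL_INSTANCE]
                    for node in self._nodes_of_instance(inst)
                ]

        def _nodes_of_instance(self, name):
            # Invented helper standing in for a configuration lookup.
            return ["node1.example.com", "node2.example.com"]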
Check prerequisites for this LU.

This method should check that the prerequisites for the execution of this LU are fulfilled. It can do internode communication, but it should be idempotent - no cluster or system changes are allowed. The method should raise errors.OpPrereqError in case something is not fulfilled. Its return value is ignored.

This method should also update all the parameters of the opcode to their canonical form if it hasn't been done by ExpandNames before.
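A sketch of the contract, with OpPrereqError standing in for ganeti.errors.OpPrereqError and an invented status helper:

    class OpPrereqError(Exception):
        pass

    class MyLU:
        def CheckPrereq(self):
            # Idempotent checks only: querying is fine, mutating is not.
            if not self._node_is_online("node1.example.com"):
                raise OpPrereqError("node node1.example.com is offline")
            # The return value is ignored by the framework.

        def _node_is_online(self, name):
            # Invented helper standing in for a real status lookup.
            return True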
Perform some basic validation on data returned from a node. Returns a boolean indicating whether overall the call was successful (and the respective result fields can be expected to hold reasonable values).
Check the node time.
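The gist of a node-time check fits in a few lines. The 150-second limit mirrors Ganeti's NODE_MAX_CLOCK_SKEW constant, but treat the exact value and the helper below as illustrative:

    import time

    MAX_CLOCK_SKEW = 150.0  # seconds; assumption, see lead-in

    def check_node_time(node_timestamp, max_skew=MAX_CLOCK_SKEW):
        """Return (ok, drift) for a UNIX timestamp reported by a node."""
        drift = abs(time.time() - node_timestamp)
        return (drift <= max_skew, drift)

    ok, drift = check_node_time(time.time() + 3)  # clock 3s ahead: passes
    assert ok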
Check the node LVM results and update info for cross-node checks.
Check cross-node consistency in LVM.
Check the node bridges.
Check the results of the user-script presence and executability checks on the node.
Check the node network connectivity results.
Verify an instance. This function checks whether the required block devices are available on the instance's node, and whether the nodes are in the correct state.
Verify if there are any unknown volumes in the cluster. The .os, .swap and backup volumes are ignored. All other volumes are reported as unknown.
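The underlying computation is a set difference. A sketch with invented volume names and ignored suffixes:

    def find_unknown_volumes(reported, expected):
        ignored = (".os", ".swap")  # illustrative ignore list
        return sorted(vol for vol in reported
                      if vol not in expected and not vol.endswith(ignored))

    reported = {"xenvg/inst1.disk0", "xenvg/stale"}
    expected = {"xenvg/inst1.disk0"}
    print(find_unknown_volumes(reported, expected))  # ['xenvg/stale']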
Verify N+1 Memory Resilience. Check that if one single node dies we can still start all the instances it was primary for.
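A toy version of the N+1 idea, on made-up data rather than Ganeti's node/instance images: for every node that could die, the memory of its primary instances must fit into the free memory of their failover targets.

    def n_plus_one_ok(node_free_mem, instances):
        """instances: list of (primary, secondary, memory_mb) tuples."""
        for dead in node_free_mem:
            # Memory each surviving node must absorb if `dead` fails.
            needed = {}
            for primary, secondary, mem in instances:
                if primary == dead:
                    needed[secondary] = needed.get(secondary, 0) + mem
            if any(mem > node_free_mem[tgt] for tgt, mem in needed.items()):
                return False
        return True

    free = {"node1": 4096, "node2": 1024}
    insts = [("node1", "node2", 2048)]  # 2 GiB instance fails over to node2
    print(n_plus_one_ok(free, insts))   # False: node2 lacks 2048 MiB free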
Verifies file checksums collected from all nodes.
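A sketch of the cross-node comparison, with fake nodes, paths and fingerprints (the real check also flags files missing from nodes that should have them):

    def find_mismatched_files(per_node_checksums):
        """per_node_checksums: {node: {path: checksum}} -> odd paths."""
        seen = {}  # path -> set of distinct checksums
        for sums in per_node_checksums.values():
            for path, csum in sums.items():
                seen.setdefault(path, set()).add(csum)
        return sorted(p for p, sums in seen.items() if len(sums) > 1)

    report = {
        "node1": {"/etc/example.conf": "ab12"},
        "node2": {"/etc/example.conf": "ff00"},  # diverged copy
    }
    print(find_mismatched_files(report))  # ['/etc/example.conf']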
Verifies the node DRBD status.
Builds the node OS structures.
Verifies the node OS list.
Verifies paths in pathutils.FILE_STORAGE_PATHS_FILE.
Verifies out of band functionality of a node.
Verifies and updates the node volume data. This function will update a NodeImage's internal structures with data from the remote call.
Verifies and updates the node instance list. If the listing was successful, then updates this node's instance list. Otherwise, it marks the RPC call as failed for the instance list key.
Verifies and computes a node information map.
Gets per-disk status information for all instances. Returns a dict of the form {instance: {node: [(success, payload)]}}: per-instance dicts with nodes as keys and, as values, one (success, payload) tuple per disk.
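The returned structure can be walked like this (data invented for the example; on success the payload carries the disk status, otherwise the error):

    diskstatus = {
        "inst1.example.com": {
            "node1.example.com": [(True, "ok"), (False, "can't find device")],
        },
    }

    for instance, per_node in diskstatus.items():
        for node, disks in per_node.items():
            for idx, (success, payload) in enumerate(disks):
                if not success:
                    print("%s: disk/%d broken on %s: %s"
                          % (instance, idx, node, payload))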
Choose which nodes should talk to which other nodes. We will make nodes contact all nodes in their group, and one node from every other group. Warning: This algorithm has a known issue if one node group is much smaller than others (e.g. just one node). In such a case all other nodes will talk to the single node.
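A toy implementation of the stated rule, with invented group data; the real code operates on Ganeti node objects and its choice of representative may differ:

    def select_peers(groups, my_group):
        """groups: {group_name: [node, ...]} -> peers for my_group's nodes."""
        peers = list(groups[my_group])        # everyone in our own group
        for name, nodes in groups.items():
            if name != my_group and nodes:
                peers.append(sorted(nodes)[0])  # one node per other group
        return peers

    groups = {
        "default": ["node1", "node2", "node3"],
        "remote": ["node9"],  # one-node group: every other node picks node9,
                              # which is exactly the known issue noted above
    }
    print(select_peers(groups, "default"))
    # ['node1', 'node2', 'node3', 'node9']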
Build hooks env. Returns the hook environment as a dict. Cluster-Verify hooks run only in the post phase; if they fail, their output is logged in the verify output and the verification fails.
Build hooks nodes. Returns a tuple of two lists: the nodes on which to run the pre-hooks and those on which to run the post-hooks.
Verify integrity of the node group, performing various tests on nodes.
Analyze the post-hooks' result. This method analyses the hook result, handles it, and sends some nicely-formatted feedback back to the user.
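Note the _HOOKS_INDENT_RE class variable listed above: re.compile("^", re.M) matches the start of every line. A plausible (assumed) use is indenting multi-line hook output before it is fed back to the user:

    import re

    _HOOKS_INDENT_RE = re.compile("^", re.M)

    def indent_hook_output(output, prefix="  "):
        # Insert the prefix at the start of every line of the output.
        return _HOOKS_INDENT_RE.sub(prefix, output)

    print(indent_hook_output("hook failed\nexit code 1"))
    # prints:
    #   hook failed
    #   exit code 1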