class TLReplaceDisks(Tasklet):

Replaces disks for an instance.

Note: Locking is not within the scope of this class.

Method __init__ Initializes this class.
Method CheckPrereq Check prerequisites.
Method Exec Execute disk replacement.
Instance Variable disks Undocumented
Instance Variable early_release Undocumented
Instance Variable iallocator_name Undocumented
Instance Variable ignore_ipolicy Undocumented
Instance Variable instance Undocumented
Instance Variable instance_name Undocumented
Instance Variable instance_uuid Undocumented
Instance Variable mode Undocumented
Instance Variable new_node_uuid Undocumented
Instance Variable node_secondary_ip Undocumented
Instance Variable other_node_uuid Undocumented
Instance Variable remote_node_info Undocumented
Instance Variable remote_node_uuid Undocumented
Instance Variable target_node_uuid Undocumented
Static Method _RunAllocator Compute a new secondary node using an IAllocator.
Method _CheckDevices Undocumented
Method _CheckDisksActivated Checks if the instance disks are activated.
Method _CheckDisksConsistency Undocumented
Method _CheckDisksExistence Undocumented
Method _CheckVolumeGroup Undocumented
Method _CreateNewStorage Create new storage on the primary or secondary node.
Method _ExecDrbd8DiskOnly Replace a disk on the primary or secondary for DRBD 8.
Method _ExecDrbd8Secondary Replace the secondary node for DRBD 8.
Method _FindFaultyDisks Wrapper for FindFaultyInstanceDisks.
Method _RemoveOldStorage Undocumented
Method _UpdateDisksSecondary Update the configuration of disks to have a new secondary.

Inherited from Tasklet:

Instance Variable cfg Undocumented
Instance Variable lu Undocumented
Instance Variable rpc Undocumented
def __init__(self, lu, instance_uuid, instance_name, mode, iallocator_name, remote_node_uuid, disks, early_release, ignore_ipolicy):

Initializes this class.

def CheckPrereq(self):

Check prerequisites.

This checks that the instance is in the cluster.

def Exec(self, feedback_fn):

Execute disk replacement.

This dispatches the disk replacement to the appropriate handler.
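
For orientation, here is a minimal sketch of how a logical unit might construct and drive this tasklet. The mode constants are real Ganeti names, but the opcode field names and the empty disks list are illustrative assumptions:

  # Illustrative sketch only; the opcode field names are assumptions.
  from ganeti import constants

  def _BuildReplaceDisksTasklet(lu, op):
      return TLReplaceDisks(
          lu,
          instance_uuid=op.instance_uuid,
          instance_name=op.instance_name,
          mode=constants.REPLACE_DISK_CHG,  # pick a new secondary node
          iallocator_name=op.iallocator,    # or None if remote_node is given
          remote_node_uuid=None,
          disks=[],                         # disk indices to replace
          early_release=op.early_release,
          ignore_ipolicy=op.ignore_ipolicy)

  # The LU framework then runs the tasklet in two phases:
  #   tl = _BuildReplaceDisksTasklet(lu, op)
  #   tl.CheckPrereq()       # verify the instance is in the cluster
  #   tl.Exec(feedback_fn)   # dispatch to a DRBD 8 handler below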

@staticmethod
def _RunAllocator(lu, iallocator_name, instance_uuid, relocate_from_node_uuids):

Compute a new secondary node using an IAllocator.
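
In the Ganeti tree this helper is essentially a relocation request to the iallocator framework. IAReqRelocate and IAllocator live in ganeti.masterd.iallocator; the sketch below mirrors that shape, though the error message and return handling are paraphrased:

  from ganeti import errors
  from ganeti.masterd import iallocator

  def _RunAllocatorSketch(lu, iallocator_name, instance_uuid,
                          relocate_from_node_uuids):
      req = iallocator.IAReqRelocate(
          inst_uuid=instance_uuid,
          relocate_from_node_uuids=list(relocate_from_node_uuids))
      ial = iallocator.IAllocator(lu.cfg, lu.rpc, req)
      ial.Run(iallocator_name)
      if not ial.success:
          raise errors.OpPrereqError("Can't compute nodes using iallocator"
                                     " '%s': %s" % (iallocator_name, ial.info),
                                     errors.ECODE_NORES)
      # the allocator returns a single-element list with the chosen node
      return ial.result[0]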

def _CheckDevices(self, node_uuid, iv_names):

Undocumented

def _CheckDisksActivated(self, instance):

Checks if the instance disks are activated.

Parameters
    instance: the instance whose disks should be checked
Returns
    True if the disks are activated, False otherwise

def _CheckDisksConsistency(self, node_uuid, on_primary, ldisk):

Undocumented

def _CheckDisksExistence(self, node_uuids):

Undocumented

def _CheckVolumeGroup(self, node_uuids):

Undocumented

def _CreateNewStorage(self, node_uuid):

Create new storage on the primary or secondary node.

This is used only for same-node replaces, not for changing the secondary node; hence, the existing disk must not be modified.
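
As context for what "new storage" means here: every DRBD 8 disk is backed by a data LV plus a small metadata LV. objects.Disk, constants.DT_PLAIN and constants.DRBD_META_SIZE are real Ganeti names; the helper below and its unique-name callback are illustrative assumptions.

  from ganeti import constants, objects

  def _NewLvPairSketch(vg_name, dev, idx, unique_name_fn):
      # unique_name_fn is a stand-in for Ganeti's unique-name generator
      data_name = unique_name_fn(".disk%d_data" % idx)
      meta_name = unique_name_fn(".disk%d_meta" % idx)
      lv_data = objects.Disk(dev_type=constants.DT_PLAIN, size=dev.size,
                             logical_id=(vg_name, data_name))
      lv_meta = objects.Disk(dev_type=constants.DT_PLAIN,
                             size=constants.DRBD_META_SIZE,
                             logical_id=(vg_name, meta_name))
      return lv_data, lv_meta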

def _ExecDrbd8DiskOnly(self, feedback_fn):

Replace a disk on the primary or secondary for DRBD 8.

The algorithm for replace is quite complicated (a sketch of the rename dance follows below):

  1. for each disk to be replaced:

     1. create new LVs on the target node with unique names
     2. detach old LVs from the drbd device
     3. rename old LVs to name_replaced.<time_t>
     4. rename new LVs to old LVs
     5. attach the new LVs (with the old names now) to the drbd device

  2. wait for sync across all devices

  3. for each modified disk:

     1. remove old LVs (which have the name name_replaced.<time_t>)

Failures are not very well handled.
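
As a rough illustration of the rename dance in step 1: the rpc calls named below exist in Ganeti's rpc layer, but their exact signatures vary across versions, and the helper itself is hypothetical.

  import time

  def _SwapLvsSketch(rpc, target_node_uuid, iv_names):
      # iv_names is assumed to yield (drbd_dev, old_lvs, new_lvs) per disk
      suffix = "_replaced.%d" % int(time.time())
      for dev, old_lvs, new_lvs in iv_names:
          # detach the old LVs from the drbd device
          rpc.call_blockdev_removechildren(target_node_uuid, dev, old_lvs)
          # rename the old LVs out of the way ...
          ren_old = [(lv, (lv.logical_id[0], lv.logical_id[1] + suffix))
                     for lv in old_lvs]
          rpc.call_blockdev_rename(target_node_uuid, ren_old)
          # ... then give the new LVs the old names
          ren_new = [(new, old.logical_id)
                     for old, new in zip(old_lvs, new_lvs)]
          rpc.call_blockdev_rename(target_node_uuid, ren_new)
          # attach the new LVs (now carrying the old names) to the drbd
          rpc.call_blockdev_addchildren(target_node_uuid, dev, new_lvs)
      # the caller then waits for sync and removes the renamed old LVs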

def _ExecDrbd8Secondary(self, feedback_fn):

Replace the secondary node for DRBD 8.

The algorithm for replace is quite complicated (see the sketch after this list):

  • for all disks of the instance:
    • create new LVs on the new node with same names
    • shutdown the drbd device on the old secondary
    • disconnect the drbd network on the primary
    • create the drbd device on the new secondary
    • reattach the drbd network on the primary, using a trick: the drbd code for Attach() will connect to the network if it finds a device that is attached to the correct local disks but has no network configuration
  • wait for sync across all devices
  • remove all disks from the old secondary

Failures are not very well handled.
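
The sketch below captures only the ordering of this dance; call_drbd_disconnect_net and call_drbd_attach_net are real Ganeti rpc calls, but the argument shapes here are assumptions and the creation of the new drbd minors is elided.

  def _SecondaryReplaceSketch(lu, instance, old_node_uuid, new_node_uuid):
      rpc = lu.rpc
      # new LVs with the same names were already created on new_node_uuid
      for dev in instance.disks:
          # shut down the drbd device on the old secondary
          rpc.call_blockdev_shutdown(old_node_uuid, (dev, instance))
      # disconnect the drbd network on the primary; devices go StandAlone
      rpc.call_drbd_disconnect_net([instance.primary_node],
                                   (instance.disks, instance))
      # ... create the drbd devices on the new secondary (elided) ...
      # reattach the network on the primary: Attach() reconnects devices
      # that sit on the right local disks but have no network configured
      rpc.call_drbd_attach_net([instance.primary_node],
                               (instance.disks, instance),
                               instance.name, False)
      # finally wait for sync, then remove all disks from old_node_uuid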

def _FindFaultyDisks(self, node_uuid):

Wrapper for FindFaultyInstanceDisks.

def _RemoveOldStorage(self, node_uuid, iv_names):

Undocumented

def _UpdateDisksSecondary(self, iv_names, feedback_fn):

Update the configuration of disks to have a new secondary.

Parameters
    iv_names: iterable of triples for all volumes of the instance; the first component has to be the device and the third the logical id
    feedback_fn: function used to send feedback back to the caller of the OpCode
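
A minimal sketch of what the update amounts to, assuming iv_names yields the documented triples and that the configuration writer accepts disk objects in Update() (both assumptions):

  def _UpdateSecondarySketch(cfg, iv_names, feedback_fn):
      for dev, _unused, new_logical_id in iv_names:
          feedback_fn("Updating disk %s to the new secondary" % dev.iv_name)
          # the new logical id encodes the new (primary, secondary) pair
          dev.logical_id = new_logical_id
          cfg.Update(dev, feedback_fn)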