This design document describes the support for generic disk template conversion in Ganeti. The logic is disk-template agnostic and aims to cover the majority of conversions among the supported disk templates.
Currently, Ganeti supports choosing among different disk templates when creating an instance. However, converting the disk template of an existing instance is possible only between the plain and drbd templates. This feature was added in Ganeti's early versions, when the number of supported disk templates was limited. Now that Ganeti supports a wide variety of disk templates, this feature should be extended to provide more flexibility to the user.
The procedure for converting from the plain to the drbd disk template works as follows. First, a completely new disk template is generated, matching the size, mode, and count of the current instance's disks. The missing volumes are created manually on both the primary node (the meta disk) and the secondary node. The original LVs on the primary node are renamed to match the new names. The last step is to manually associate the DRBD devices with their mirror block device pairs. The conversion from the drbd to the plain disk template is much simpler. First, the DRBD mirroring is manually disabled. Then the unnecessary volumes are removed: the meta disk(s) of the primary node, and the meta and data disk(s) of the former secondary node.
This design proposes the creation of a unified interface for handling the disk template conversions in Ganeti. Currently, there is no such interface and each one of the supported conversions uses a separate code path.
This proposal introduces a single, disk-agnostic interface for handling disk template conversions in Ganeti, keeping in mind that we want it to be as generic as possible. An exception will be the currently supported conversions between the LVM-based disk templates: their basic functionality will not be affected and will diverge from the rest of the disk template conversions. The target is to support conversions among the majority of the available disk templates, and also to create a mechanism that can easily support any new templates that may be added to Ganeti in the future.
Currently, the supported conversions for the LVM-based templates are handled by the LUInstanceSetParams LU. Our implementation will follow the same approach. From a high-level point of view, this design can be split in two parts: the copy of the instance's disk data, which takes place at the node level, and the logic controlling the conversion, which runs at the master level.
The instance must be stopped before starting the disk template conversion, as is currently required; otherwise the operation will fail. The new mechanism will need to copy the disks' data for the conversion to be possible. We propose using the Unix dd command to copy the instance's data. It copies data from source to destination block by block, regardless of the filesystem type, making it a convenient tool for this case. Since the conversion will be done via a full data copy, bigger disks will take longer to copy, and consequently the instance will need more time to switch to the new template.
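As an illustration, the core of such a block-level copy could look like the following minimal sketch; the device paths are hypothetical placeholders, not names Ganeti produces.

```python
import subprocess

SRC_DEV = "/dev/xenvg/disk0.data"  # assumed source block device path
DEST_DEV = "/dev/drbd0"            # assumed destination block device path

# dd copies raw blocks, so whatever filesystem (if any) lives on the
# device is irrelevant; a large block size keeps the transfer fast.
subprocess.check_call(
    ["dd", "if=%s" % SRC_DEV, "of=%s" % DEST_DEV, "bs=1M", "conv=fsync"])
```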
Some template conversions can be done faster, without explicitly copying the disks' data. One such case is the conversions between the LVM-based templates, i.e., drbd and plain, which will be done as they are now, without the dd command. Also, this implementation will provide partial support for the blockdev disk template, which will act only as a source template. Since those volumes are adopted, pre-existing block devices, we will not support conversions targeting this template. Another exception is the diskless template. Since it is a testing template that creates instances with no disks, we will not support conversions that include this template type.
We divide the design into the following parts:
The block device abstract class will be extended with two new methods, named Import and Export. Those methods will be responsible for building the commands used for the data copy between the corresponding devices. The Export method will build the command that exports the data from the source device, while the Import method will do the opposite: it will import the data into the newly created target device. Those two methods will not perform the actual data copy; they will simply return the requested commands for transferring the data from/to the individual devices. The output of the two methods will be combined using a pipe ("|") by the caller method at the backend level.
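A sketch of how these methods could look on the abstract class follows; the method names come from this design, while the attribute name and the exact dd options are illustrative assumptions.

```python
class BlockDev(object):
    """Abstract base class for block devices (simplified sketch)."""

    def __init__(self, dev_path):
        self.dev_path = dev_path

    def Export(self):
        """Return a shell command that writes the device's data to stdout."""
        return "dd if=%s bs=1M" % self.dev_path

    def Import(self):
        """Return a shell command that reads data from stdin into the device."""
        return "dd of=%s bs=1M conv=fsync" % self.dev_path
```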
By default, the data import and export will be done using the dd command. All the inherited classes will use the base functionality unless there is a faster way to convert. In that case, the underlying block device will override those methods with its device-specific functionality. A use case is the Ceph/RADOS block devices, which will use the rbd import and rbd export commands to copy their data instead of the default dd command.
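Continuing the sketch above, such an override for RADOS devices could look as follows; the class layout and the pool/image attributes are illustrative assumptions, while "-" is the documented rbd convention for stdout/stdin.

```python
class RADOSBlockDevice(BlockDev):  # BlockDev as sketched above
    """RBD device using the native rbd commands for the data copy."""

    def __init__(self, pool, image):
        self.pool = pool
        self.image = image

    def Export(self):
        # "rbd export <pool>/<image> -" streams the image to stdout
        return "rbd export %s/%s -" % (self.pool, self.image)

    def Import(self):
        # "rbd import - <pool>/<image>" fills the image from stdin
        return "rbd import - %s/%s" % (self.pool, self.image)
```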
Keeping the data copy functionality in the block device layer provides us with a generic mechanism that works for almost all conversions and, furthermore, can easily be extended for new disk templates. It also covers the devices that support the access=userspace parameter and solves this problem in a generic way, by implementing the logic at the right level, where we know what is best to do for each device.
Introduce a new RPC call:

blockdev_convert(src_disk, dest_disk)
where src_disk and dest_disk are the original and the new disk objects, respectively. First, the actual device instances will be computed, and then they will be used to build the export and import commands for the data copy. The output of those methods will be concatenated using a pipe, following a similar approach to the impexp daemon. Finally, the unified data copy command will be executed at this level by the node daemon.
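A sketch of what this node-level helper could look like; the function and helper names here are illustrative, not Ganeti's actual internals.

```python
import subprocess

def BlockdevConvert(src_disk, dest_disk):
    """Copy all data from the source disk to the destination disk."""
    # Resolve the disk config objects to their attached block device
    # instances; _FindBlockDevice is a hypothetical helper.
    src_dev = _FindBlockDevice(src_disk)
    dest_dev = _FindBlockDevice(dest_disk)
    # Join the per-device commands with a pipe, in the same spirit as
    # the command lines built for the impexp daemon.
    copy_cmd = "%s | %s" % (src_dev.Export(), dest_dev.Import())
    subprocess.check_call(copy_cmd, shell=True)
```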
The main modifications will be made in the LUInstanceSetParams LU. The user-visible parts of the conversion mechanism are described in the following paragraphs.
The -t (--disk-template) option of the gnt-instance modify command will specify the disk template to convert to, as it does now. The rest of the disk options, such as the size, the mode, and the name, will be computed from the original volumes by the conversion mechanism; the user will not provide them explicitly.
When converting to an ExtStorage disk template, the provider=PROVIDER option, which specifies the ExtStorage provider, will be mandatory. Arbitrary parameters can also be passed to the ExtStorage provider; those parameters are optional and can be passed as additional comma-separated options. Since it is not allowed to convert the disk template of an instance and use the --disk option at the same time, we propose introducing a new option named --ext-params to handle the ext template conversions:
gnt-instance modify -t ext --ext-params provider=pvdr1 test_vm
gnt-instance modify -t ext --ext-params provider=pvdr1,param1=val1,param2=val2 test_vm
For conversions to a file-based template, the --file-storage-dir and --file-driver options can be used, similarly to the gnt-instance add command, to manually configure the storage directory and the preferred driver for the file-based disks:
gnt-instance modify -t file --file-storage-dir=mysubdir test_vm
This is a summary of the disk template conversions that the conversion mechanism will support (rows are the source templates, columns are the targets):
| Source | Plain | DRBD | File | Sharedfile | Gluster | RBD | Ext | BlockDev | Diskless |
|---|---|---|---|---|---|---|---|---|---|
| Plain | N/A | Yes | Yes | Yes | Yes | Yes | Yes | No | No |
| DRBD | Yes | N/A | Yes | Yes | Yes | Yes | Yes | No | No |
| File | Yes | Yes | N/A | Yes | Yes | Yes | Yes | No | No |
| Sharedfile | Yes | Yes | Yes | N/A | Yes | Yes | Yes | No | No |
| Gluster | Yes | Yes | Yes | Yes | N/A | Yes | Yes | No | No |
| RBD | Yes | Yes | Yes | Yes | Yes | N/A | Yes | No | No |
| Ext | Yes | Yes | Yes | Yes | Yes | Yes | N/A | No | No |
| BlockDev | Yes | Yes | Yes | Yes | Yes | Yes | Yes | N/A | No |
| Diskless | No | No | No | No | No | No | No | No | N/A |
Expand the conversion mechanism to provide a visual indication of the data copy operation. We could monitor the progress of the data sent through the pipe and provide the user with information such as the time elapsed, the percentage completed (probably with a progress bar), the total data transferred, and so on, similar to the progress tracking currently done by the impexp daemon.
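A rough sketch of one way such progress tracking could work, assuming the copy is relayed through the reporting process; all names and the reporting format are illustrative, not an existing Ganeti interface.

```python
import subprocess
import sys
import time

def copy_with_progress(export_cmd, import_cmd, total_bytes):
    """Relay data between the export and import commands, reporting progress."""
    exporter = subprocess.Popen(export_cmd, shell=True, stdout=subprocess.PIPE)
    importer = subprocess.Popen(import_cmd, shell=True, stdin=subprocess.PIPE)
    copied = 0
    start = time.time()
    while True:
        chunk = exporter.stdout.read(1024 * 1024)
        if not chunk:
            break
        importer.stdin.write(chunk)
        copied += len(chunk)
        sys.stderr.write("\r%.1f%% done, %d bytes, %.0fs elapsed"
                         % (100.0 * copied / total_bytes, copied,
                            time.time() - start))
    importer.stdin.close()
    exporter.wait()
    importer.wait()
    sys.stderr.write("\n")
```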