Volume migration between OpenStack clusters¶
In the following we describe two procedures for migrating Cinder volumes between OpenStack clusters. As explained below, you will most likely need both of them to migrate a virtual machine.
Assumptions¶
The examples provided below assume that both OpenStack clusters use Ceph as the storage backend, and that we want to migrate volumes from site PA1 to site/region CT1/PA1new.
The destination OpenStack cluster relies on several distinct Ceph clusters: there is only one Glance pool, and more than one Cinder pool, one for each region. Specifically:
* at the source OpenStack site, the Ceph cluster name is ceph and the Cinder/Glance Ceph pool names are oscinder/osglance
* at the destination OpenStack site:
  * the Glance Ceph pool name is glance-ct1-cl1, and the Ceph cluster name is ceph
  * the Cinder Ceph pool name is cinder-ceph-ct1-cl1, and the Ceph cluster name is cephpa1
We also assume that Ceph mirroring via rbd-mirror has already been configured. Specifically, we assume:
* image-based mirroring for the Cinder Ceph pool
* pool-based mirroring for the Glance Ceph pool
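If you want to double-check the mirroring setup before starting, the mirroring mode of each pool can be inspected on the source Ceph cluster; a minimal sketch, assuming Ceph admin access:
rbd --cluster ceph mirror pool info oscinder   # should report Mode: image
rbd --cluster ceph mirror pool info osglance   # should report Mode: pool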
Volume migration using Cinder¶
This procedure applies to the case where you want to move a Ceph-backed Cinder volume to a different Ceph-backed OpenStack cluster (the destination uses a different Ceph cluster as well as a different OpenStack cluster).
Overall description and caveats¶
This strategy works well for any non-bootable volumes.
Moreover, depending on the source Ceph cluster, the procedure may require powering the virtual server off for the duration of the Ceph mirroring. The critical point is enabling the Ceph features needed for mirroring between clusters: you may want to try on a non-critical machine first, and make sure that enabling the features does not cause the machine to freeze (should it freeze, remove the Ceph features and perform a “hard reboot”).
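Should the hard reboot be needed, it can also be issued from the OpenStack CLI; a minimal sketch, where <serverName> is a placeholder for the affected instance:
openstack server reboot --hard <serverName>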
The procedure was successfully tested on additional volumes presented to a virtual machine, and on non-bootable volumes which are currently not in use.
A large part of the procedure requires Ceph admin-like privileges; the remaining, smaller part should be executed by the OpenStack tenant admin.
Procedure¶
Within Cinder, a volume has a name like volume-<hexString>, where <hexString> is the volume UUID. Therefore, we advise you to define environment variables like:
export volID=<hexString>
export volName=volume-$volID
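If you do not have the volume UUID at hand, it can be retrieved with the OpenStack CLI; a minimal sketch, where <volumeName> is a placeholder for the OpenStack name of the volume:
export volID=$(openstack volume show <volumeName> -f value -c id)   # look up the UUID by volume name
export volName=volume-$volID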
Ceph part¶
The following operations are performed on the source and destination Ceph clusters.
First of all, activate the Ceph mirroring of the volume (again, if the volume is in-use, please make sure the relevant server is powered off):
remove any snapshots which are no longer needed, as they would be mirrored too, slowing down the whole procedure
check the features enabled on the volume: rbd-mirror requires exclusive-lock and journaling:
rbd --cluster ceph -p oscinder info $volName
if needed, enable exclusive-lock and then journaling; you may first need to disable other features:
rbd --cluster ceph -p oscinder feature disable $volName deep-flatten
rbd --cluster ceph -p oscinder feature disable $volName fast-diff
rbd --cluster ceph -p oscinder feature disable $volName object-map
rbd --cluster ceph -p oscinder feature enable $volName exclusive-lock
rbd --cluster ceph -p oscinder feature enable $volName journaling
check features: only layering, exclusive-lock and journaling should be shown by the following command:
rbd --cluster ceph -p oscinder info $volName | grep -e ^rbd -e features
enable mirroring (remember, for this pool we configured image-based mirroring):
rbd --cluster ceph -p oscinder mirror image enable $volName
check destination Ceph cluster:
rbd --cluster cephpa1 -p oscinder info $volName
while the mirroring is ongoing, at the destination you will also see a temporary volume with a name such as $volName.@.rbd-mirror…; once it disappears, the mirroring is complete (see also the status check sketched right below)
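The mirroring progress can also be followed with the rbd mirror status commands; a minimal sketch, assuming Ceph admin access on the destination cluster:
rbd --cluster cephpa1 -p oscinder mirror image status $volName   # per-image state, typically up+syncing and then up+replaying
rbd --cluster cephpa1 mirror pool status oscinder                # overall health of the mirrored pool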
Before importing the volume into OpenStack, we need to copy it to the pool managed by Cinder (cinder-ceph-pa1-cl1), by executing:
rbd --cluster cephpa1 cp oscinder/$volName cinder-ceph-pa1-cl1/$volName
rbd --cluster cephpa1 -p cinder-ceph-pa1-cl1 ls -l | grep $volName
Note
Although we keep the original volume name, after the OpenStack import described in the next section the Ceph volume will be renamed to volume-<newHexString>.
OpenStack part¶
As previously stated, after the import the Ceph volume will be renamed to volume-<newHexString>.
We can take advantage of this procedure to also give the volume a human-meaningful name, so for example we will set:
export volName=volume-<hexString>
export volOSname=serverXdevVdB
Now, using the OpenStack tenant admin credentials, execute:
cinder manage --name=$volOSname cinder-pa1-cl1@cinder-ceph-pa1-cl1#cinder-ceph-pa1-cl1 $volName
At this stage, if you have access to the Ceph Cinder pool, you can verify that $volName has been renamed to volume-<newHexString>.
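If you prefer the command line, the rename can be verified by retrieving the new UUID from OpenStack and inspecting the corresponding Ceph image; a minimal sketch, assuming both the tenant credentials and read access to the destination Cinder pool:
export newVolID=$(openstack volume show $volOSname -f value -c id)   # UUID assigned by cinder manage
rbd --cluster cephpa1 -p cinder-ceph-pa1-cl1 info volume-$newVolID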
From the OpenStack GUI, you can now attach the volume to a virtual machine.
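Equivalently, the volume can be attached from the command line; a minimal sketch, where <serverName> is a placeholder for the target instance:
openstack server add volume <serverName> $volOSname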
Ceph part, final¶
At this point, if you feel brave enough, you can clean things up by either:
* deleting the volume in the source Ceph pool, or
* turning the mirroring off, which will cause the mirrored volume to disappear, and removing the mirroring features from the source volume
In this example we opt for the second option:
rbd --cluster ceph -p oscinder mirror image disable $volName
rbd --cluster ceph -p oscinder feature disable $volName exclusive-lock
rbd --cluster ceph -p oscinder feature disable $volName journaling
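You can then confirm that the source volume is back to its original feature set; for instance:
rbd --cluster ceph -p oscinder info $volName | grep features   # journaling and exclusive-lock should no longer be listed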
Volume migration via Glance¶
This procedure applies to the case where you want to move a bootable Ceph-backed Cinder volume to a different Ceph-backed OpenStack cluster (the destination uses a different Ceph cluster as well as a different OpenStack cluster).
Overall description and caveats¶
The idea is to take a snapshot of the virtual machine, transfer it to the destination OpenStack cluster, and spawn a new virtual machine at the destination.
This strategy is best suited for bootable volumes, and does not require the server to be powered off.
Procedure¶
OpenStack source¶
The following commands will be issued by the OpenStack tenant admin.
It will be convenient to set a bunch of environment variables:
export volID=<hexString> # where <hexString> is the Cinder volume UUID
export srvOSname=<serverName>
export imgOSname=${srvOSname}_vda_img ; echo $imgOSname # this will be the OpenStack name of the glance image
If the source server has additional disks defined in /etc/fstab, you may want to comment out the relevant entries before performing the snapshot, and uncomment them right after the snapshot has been taken. Then execute:
cinder upload-to-image --force True --disk-format qcow2 $volID ${imgOSname}
then we need to get the ID of the newly created image:
export imgOSid=`glance image-list | grep $imgOSname | awk '{print $2}'` ; echo $imgOSid
glance image-show $imgOSid # wait until the command returns status=active
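The wait can also be scripted; a minimal sketch that polls the image status until it becomes active:
until glance image-show $imgOSid | grep -qw active ; do echo "waiting for image $imgOSid ..." ; sleep 30 ; done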
Activate mirroring for the image by enabling the exclusive-lock and journaling features; note that in the Glance Ceph pool the RBD image name is simply the Glance image UUID, without any prefix.
On the source OpenStack client, prepare and execute the following commands:
a command (Command_ceph_src), whose output should later be executed by the Ceph source admin:
echo "echo Command_ceph_src" ; echo "export imgOSid=$imgOSid" ; \ echo "rbd --cluster ceph -p osglance feature enable \$imgOSid exclusive-lock ; \ rbd --cluster ceph -p osglance feature enable \$imgOSid journaling ; \ rbd --cluster ceph -p osglance info \$imgOSid ; echo sleep ; sleep 10 ; \ rbd --cluster cephpa1 -p osglance info \$imgOSid"
a command (Command_ceph_dst), whose output should later be executed by the Ceph destination admin:
echo "echo Command_ceph_dst: Ceph destination" ; echo "export imgOSid=$imgOSid" ; \ echo "rbd --cluster cephpa1 -p osglance ls -l | grep \$imgOSid ; echo === ; sleep 5" ; \ echo "rbd --cluster cephpa1 -p osglance cp \$imgOSid glance-ct1-cl1/\$imgOSid ; \ rbd --cluster cephpa1 -p glance-ct1-cl1 snap create \${imgOSid}@snap ; \ rbd --cluster cephpa1 -p glance-ct1-cl1 snap protect \${imgOSid}@snap ; sleep 2 ; \ rbd --cluster cephpa1 -p glance-ct1-cl1 ls -l | grep \$imgOSid"
a command (Command_openstack), whose output should later be executed by the OpenStack destination tenant admin:
echo "echo Command_openstack" ; echo "export imgOSid=$imgOSid ; \ export imgOSname=$imgOSname ; \ echo glance --os-image-api-version 1 image-create --name \$imgOSname --store rbd --disk-format qcow2 --container-format bare --location rbd://cephpa1/glance-pa1-cl1/\${imgOSid}/snap"
a command (Command_clean), whose output should later be executed by the Ceph source admin, after the image has successfully made it to the new OpenStack cluster:
echo "echo Command_clean: Ceph source" ; echo "export imgOSid=$imgOSid" ; echo "rbd --cluster ceph -p osglance ls -l | grep \$imgOSid ; echo === ; sleep 5" ; \ echo "rbd --cluster ceph -p osglance feature disable \$imgOSid journaling ; \ rbd --cluster ceph -p osglance feature disable \$imgOSid exclusive-lock ; \ rbd --cluster ceph -p osglance info \$imgOSid ; echo sleep ; sleep 10 ; \ rbd --cluster ceph -p osglance info \$imgOSid"
Ceph source¶
Execute the output of Command_ceph_src above, prepared by the OpenStack source tenant admin.
Ceph destination¶
Execute the output of Command_ceph_dst above, prepared by the OpenStack source tenant admin.
Note that you should proceed only after the output of:
rbd --cluster cephpa1 -p osglance ls -l | grep $imgOSid
shows the mirroring has completed.
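As an additional check, the mirroring state of the image can be queried on the destination cluster; a minimal sketch:
rbd --cluster cephpa1 -p osglance mirror image status $imgOSid   # typically up+syncing while copying, then up+replaying or up+stopped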
The other commands take care of copying the image to the pool managed by Glance and of preparing the snapshot Glance expects to find.
Warning
Note that we copy the image to the Glance pool rather than asking Glance to point to the non-standard pool; otherwise the procedure would get a bit more complicated at cleanup time (namely, the destination image would disappear as soon as the mirroring is switched off, and we would have to tolerate an error message when deleting the image from the destination pool).
Ceph source: cleanup¶
At this point, you can clean things up by:
* executing the output of Command_clean above, prepared by the OpenStack source tenant admin
* deleting the volume in the source Ceph pool (this part is not included in the above command set, to be conservative)
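In addition, once the new instance at the destination has been verified, the temporary Glance image that was created at the source solely for this migration can be removed; a minimal sketch, using the source tenant credentials:
glance image-delete $imgOSid   # remove the migration image from the source Glance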
OpenStack destination¶
Using tenant admin credentials, execute the command Command_openstack prepared on the source OpenStack cluster.
At this point, in the destination OpenStack cluster you can launch a new instance from the image.
Warning
Make sure the disk of the new instance is at least as big as that of the original instance.
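For reference, the new instance can also be launched from the command line; a minimal sketch, where the flavor, network and server names are placeholders to adapt to your project (pick a flavor whose disk satisfies the warning above):
openstack server create --image $imgOSname --flavor <flavorName> --nic net-id=<networkID> <newServerName>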