OpenNebula doesn't currently support migrating VMs from one type of datastore to a different type. I have been working on a method that works, and I want to document it here to get some feedback and opinions on it.
A datastore type is identified primarily by the Transfer Manager driver ('TM_MAD') setting. This setting cannot be changed through Sunstone or the CLI, so we need a method to do just that. This is what I did. I started with a fresh install of OpenNebula 5.4.13 on one VM, plus two node VMs, all running Debian 9 inside VMware virtual machines (don't forget to enable virtualisation in the VM CPU options).
NOTE: This is an experimental process, so make sure you back up everything first!
Steps
To migrate to a different type of datastore there are a few steps we need to follow:
- Set up the NFS share exports,
- Move the VM images to the NFS share and mount the datastore,
- Change the datastore types,
- Configure the nodes for the NFS share.
Setup NFS Server
The first thing we want to do is set up the NFS shares that we want to use. I'm using a single share for the base datastores folder, but you could use separate shares for each datastore ID, even from different NFS servers.
- On the NFS server, create the datastore folder, e.g.
mkdir /share/one_datastore
- Add the datastore path to /etc/exports (example entry below), then export the new share with
exportfs -rav
- Confirm the share is available:
showmount -e localhost
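For reference, a minimal /etc/exports entry might look something like this (the subnet and options are assumptions for my lab network; adjust to suit, and make sure the oneadmin user can write to the share):
/share/one_datastore 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)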
Prepare to Migrate
Before we modify the datastores, there are a few things to do first:
- Shut down any running VMs and undeploy them. This saves the machine state and copies the images back to the image datastore (a CLI sketch for this follows the list),
- Stop the OpenNebula and Sunstone services:
systemctl stop opennebula && systemctl stop opennebula-sunstone
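If you prefer the command line to Sunstone for shutting down and undeploying, the onevm CLI can do it as the oneadmin user; a quick sketch (the VM IDs come from your own list):
onevm list
onevm undeploy [vm-id]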
Migrate Data
With shared storage, all the nodes access the same VM disk images, so we need to copy the VM data to the NFS share ready for mounting.
- From the OpenNebula frontend server, confirm the NFS shares are visible:
showmount -e [nfs-server]
- Create a temporary folder to mount the share in:
mkdir /mnt/datastore
- Temporarily mount the NFS share there:
mount [nfs-server]:/share/one_datastore /mnt/datastore
- Move the datastore folders to the share:
mv /var/lib/one/datastores/* /mnt/datastore/
- The OpenNebula datastore folders now live on the NFS server:
ls /mnt/datastore
should list folders 0, 1 and 2,
- Mount the NFS share in place of the OpenNebula datastores folder:
mount [nfs-server]:/share/one_datastore /var/lib/one/datastores
- Confirm the folders are available:
ls /var/lib/one/datastores
should list our three folders 0, 1 and 2,
- Add the mount to /etc/fstab to persist it across reboots (see the example entry below).
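A minimal fstab entry on the frontend might look something like this (the mount options are just a sensible default; adjust for your environment):
[nfs-server]:/share/one_datastore  /var/lib/one/datastores  nfs  defaults  0  0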
The OpenNebula frontend is now configured to access the datastore folders from the NFS share. Next we want to change the datastore types from ssh to shared.
Change Datastore Types
The datastore configuration is stored in the OpenNebula database at /var/lib/one/one.db. We can change the driver type by editing the datastore configuration data, which tells OpenNebula which drivers to use and how to handle the datastore data. By default OpenNebula uses an SQLite database, with MySQL as an option. I'm using SQLite, but the same approach works for MySQL.
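Since we're about to edit the database directly, it's worth taking a copy of it first while the services are stopped, for example:
cp /var/lib/one/one.db /var/lib/one/one.db.bak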
- Open the OpenNebula database:
sqlite3 /var/lib/one/one.db
- View all tables with
.tables
datastore_pool is the table we want to modify,
- List all the records in the table:
select * from datastore_pool;
This will result in a screen-full of configuration data. Each record has an identifier, oid, which matches the datastore ID, like this (the first 0 is the datastore ID for the default SYSTEM datastore):
0|system|<DATASTORE><ID>0</ID><UID>0</UID><GID>0</GID><UNAME>oneadmin</UNAME><GNAME>oneadmin</GNAME><NAME>system</NAME><PERMISSIONS><OWNER_U>1</OWNER_U><OWNER_M>1</OWNER_M><OWNER_A>0</OWNER_A><GROUP_U>1</GROUP_U><GROUP_M>0</GROUP_M><GROUP_A>0</GROUP_A><OTHER_U>0</OTHER_U><OTHER_M>0</OTHER_M><OTHER_A>0</OTHER_A></PERMISSIONS><DS_MAD><![CDATA[-]]></DS_MAD><TM_MAD><![CDATA[ssh]]></TM_MAD><BASE_PATH><![CDATA[/var/lib/one//datastores/0]]></BASE_PATH><TYPE>1</TYPE><DISK_TYPE>0</DISK_TYPE><STATE>0</STATE><CLUSTERS><ID>0</ID></CLUSTERS><TOTAL_MB>0</TOTAL_MB><FREE_MB>0</FREE_MB><USED_MB>0</USED_MB><IMAGES></IMAGES><TEMPLATE><ALLOW_ORPHANS><![CDATA[NO]]></ALLOW_ORPHANS><DISK_TYPE><![CDATA[FILE]]></DISK_TYPE><DS_MIGRATE><![CDATA[YES]]></DS_MIGRATE><RESTRICTED_DIRS><![CDATA[/]]></RESTRICTED_DIRS><SAFE_DIRS><![CDATA[/var/tmp]]></SAFE_DIRS><SHARED><![CDATA[NO]]></SHARED><TM_MAD><![CDATA[ssh]]></TM_MAD><TYPE><![CDATA[SYSTEM_DS]]></TYPE></TEMPLATE></DATASTORE>|0|0|1|1|0
- Now to change the datastore type. Grab the data from the third column, body (you can run
select body from datastore_pool where oid=0;
), and copy it into your favourite text editor (that's the chunk starting with <DATASTORE> and ending with </DATASTORE>). Find and replace:
Find: <TM_MAD><![CDATA[ssh]]></TM_MAD>
Replace with: <TM_MAD><![CDATA[shared]]></TM_MAD>
Find: <SHARED><![CDATA[NO]]></SHARED>
Replace with: <SHARED><![CDATA[YES]]></SHARED>
- Now update the SYSTEM datastore record. Run the following command in the database, replacing [datastore-config] with the text block you just modified:
update datastore_pool set body='[datastore-config]' where oid=0;
- Updating the IMAGE datastore is a little different. There is no SHARED option, but we want to use either the shared or qcow2 driver. I used qcow2. So run
select body from datastore_pool where oid=1;
and then find and replace:
Find: <TM_MAD><![CDATA[ssh]]></TM_MAD>
Replace: <TM_MAD><![CDATA[qcow2]]></TM_MAD>
- Update the record:
update datastore_pool set body='[datastore-config]' where oid=1;
- Update the FILES datastore (oid=2 on a fresh install) by replacing
<TM_MAD><![CDATA[ssh]]></TM_MAD>
with
<TM_MAD><![CDATA[shared]]></TM_MAD>
and updating the record using the same method as above.
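As an alternative to pasting the whole body back in, SQLite's built-in replace() function can make the substitution in place; a sketch for the image datastore (the same pattern works for oid 0 and 2, and it's worth re-running the select afterwards to confirm the change):
update datastore_pool set body = replace(body, '<TM_MAD><![CDATA[ssh]]></TM_MAD>', '<TM_MAD><![CDATA[qcow2]]></TM_MAD>') where oid=1;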
Now that the datastores have been updated to use the shared drivers, let's start OpenNebula and Sunstone and check that the datastores show up.
systemctl start opennebula && systemctl start opennebula-sunstone
Jump into the Sunstone web UI and go to Datastores. Open each datastore to check whether SHARED is enabled and the correct drivers show, i.e. shared or qcow2.
~DON'T DO ANYTHING YET~ We still need to configure the nodes!
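You can also verify the change from the CLI as the oneadmin user, for example:
onedatastore list
onedatastore show 0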
Configure the Nodes
Because we stopped and undeployed the VMs, there shouldn't be any data left in the node datastores, so we can simply mount the NFS share over the datastores folder. Confirm the folders are empty first and make sure to take backups! This is an experimental process, so be warned! Right, let's get on with it:
- Check the contents of /var/lib/one/datastores. If you are mounting each datastore-ID folder to its own NFS share, you can do that instead of mounting the entire datastores folder; in that case, empty the 0, 1 and 2 folders. Otherwise, remove all folders from the datastores folder,
- If not already installed:
apt-get install nfs-common
- Check for NFS shares:
showmount -e [nfs-server]
- Mount the NFS share to the datastores folder:
mount [nfs-server]:/share/one_datastore /var/lib/one/datastores
- Confirm the mount, e.g. with
df
- Edit /etc/fstab and add the mount so it's mounted on the next boot,
- Reboot the node to confirm the NFS datastore mount persists, and to give it a fresh restart!
Repeat with all host nodes.
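For reference, the node-side setup boils down to something like this, run as root on each node (the fstab options are just a sensible default, so adjust to suit):
apt-get install nfs-common
showmount -e [nfs-server]
mount [nfs-server]:/share/one_datastore /var/lib/one/datastores
echo '[nfs-server]:/share/one_datastore /var/lib/one/datastores nfs defaults 0 0' >> /etc/fstab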
Test it Out
In Sunstone, go to the Hosts tab and check that the hosts are up and running. Next, grab a VM and deploy it. It should deploy without any issues and start booting.
Once it's up and running, I like to continuously ping the VM while testing live migration. So start a ping (ping [vm-ip] -t on Windows), then in Sunstone open the VM and do a 'Live Migrate' to another node. Watch the ping and check the logs to make sure it succeeded. I found I had to refresh the display and go to the Hosts tab to check that the VM had migrated; after that it showed correctly, but I think it's a caching issue in my browser. After the live migration you should still see the ping rolling along, with maybe one dropped ping in the results.
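If you prefer the CLI, the same live migration can be triggered with onevm (the VM and host IDs come from onevm list and onehost list):
onevm migrate --live [vm-id] [target-host]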
Conclusion
So that's the process I used to migrate from ssh local storage to shared storage. I've tested it and it is working without any issues. However, if you do have any issues or an opinion on this process, please let me know. If there are any pitfalls I have overlooked, please let me know as well.
Ok, have fun with it. I'm off to try moving the shared storage over to some kind of shared cluster like Ceph or GlusterFS!