Migrating virtual guests to other nodes is a useful feature in a
cluster. There are settings to control the behavior of such
migrations. These settings can be configured cluster-wide in the
configuration file datacenter.cfg, or passed for a specific migration
via API or command-line parameters.
It makes a difference if a guest is online or offline, or if it has local resources (like a local disk).
For details about virtual machine migration, see the QEMU/KVM Migration chapter (Section 10.3, “Migration”).
For details about container migration, see the Container Migration chapter (Section 11.10, “Migration”).
The migration type defines whether the migration data should be sent
over an encrypted (secure) channel or an unencrypted (insecure) one.
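The chosen type can be applied cluster-wide through the migration property of /etc/pve/datacenter.cfg, which is described in more detail below. As a minimal sketch, setting only the channel type (no dedicated network yet) would be:

# always use the encrypted channel for migration traffic
migration: secure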
Setting the migration type to insecure means that the RAM content of a
virtual guest is also transferred unencrypted, which can lead to
information disclosure of critical data from inside the guest (for
example, passwords or encryption keys).
Therefore, we strongly recommend using the secure channel if you do not have full control over the network and cannot guarantee that no one is eavesdropping on it.
Storage migration does not follow this setting. Currently, it always sends the storage content over a secure channel.
Encryption requires a lot of computing power, so this setting is often changed to insecure to achieve better performance. The impact on modern systems is lower, because they implement AES encryption in hardware. The performance impact is particularly evident in fast networks, where you can transfer 10 Gbps or more.
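Whether this trade-off matters on a given node can be checked by looking for the aes flag among the host's CPU capabilities. A small sketch using the standard Linux /proc/cpuinfo interface and grep's -w (whole word) and -q (quiet) options:

# grep -wq aes /proc/cpuinfo && echo "hardware AES supported"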
By default, Proxmox VE uses the network in which cluster communication takes place to send the migration traffic. This is not optimal, both because sensitive cluster traffic can be disrupted and because this network may not have the best bandwidth available on the node.
Setting the migration network parameter allows the use of a dedicated network for all migration traffic. In addition to the memory traffic, this also affects the storage traffic for offline migrations.
The migration network is set as a network using CIDR notation. This has the advantage that you don’t have to set individual IP addresses for each node. Proxmox VE can determine the real address on the destination node from the network specified in the CIDR form. To enable this, the network must be specified so that each node has exactly one IP in the respective network.
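As an illustration, assume the migration network is set to 10.1.2.0/24 (the example used below) and the target node carries the addresses shown in the following hypothetical output. Proxmox VE would pick 10.1.2.2 as the migration address, since it is the node's only IP inside that network:

# ip -br -4 addr show    # hypothetical output on the target node
lo               UNKNOWN        127.0.0.1/8
vmbr0            UP             192.X.Y.58/24
eno2             UP             10.1.1.2/24
eno3             UP             10.1.2.2/24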
We assume that we have a three-node setup with three separate networks: one for public communication with the Internet, one for cluster communication, and a very fast one, which we want to use as a dedicated network for migration.
A network configuration for such a setup might look as follows:
iface eno1 inet manual

# public network
auto vmbr0
iface vmbr0 inet static
        address 192.X.Y.57/24
        gateway 192.X.Y.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# cluster network
auto eno2
iface eno2 inet static
        address 10.1.1.1/24

# fast network
auto eno3
iface eno3 inet static
        address 10.1.2.1/24
Here, we will use the network 10.1.2.0/24 as a migration network. For
a single migration, you can do this using the migration_network
parameter of the command line tool:
# qm migrate 106 tre --online --migration_network 10.1.2.0/24
To configure this as the default network for all migrations in the
cluster, set the migration
property of the /etc/pve/datacenter.cfg
file:
# use dedicated migration network
migration: secure,network=10.1.2.0/24
The migration type must always be set when the migration network is
set in /etc/pve/datacenter.cfg.
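Parameters given for a specific migration on the command line can still deviate from this cluster-wide default (see above). For example, assuming the qm tool's migration_type option and reusing the example VM 106 and node tre, a one-off migration that explicitly uses the unencrypted channel might look like this:

# qm migrate 106 tre --online --migration_type insecure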