--- /dev/null
+<html>
+ <body>
+ <h1>Guest migration</h1>
+
+ <ul id="toc"></ul>
+
+ <p>
+ Migration of guests between hosts is a complicated problem with many possible
+ solutions, each with their own positive and negative points. For maximum
+      flexibility of both hypervisor integration and administrator deployment,
+ libvirt implements several options for migration.
+ </p>
+
+ <h2><a id="transport">Network data transports</a></h2>
+
+ <p>
+ There are two options for the data transport used during migration, either
+ the hypervisor's own <strong>native</strong> transport, or <strong>tunnelled</strong>
+ over a libvirtd connection.
+ </p>
+
+ <h3><a id="transportnative">Hypervisor native transport</a></h3>
+ <p>
+ <em>Native</em> data transports may or may not support encryption, depending
+ on the hypervisor in question, but will typically have the lowest computational costs
+ by minimising the number of data copies involved. The native data transports will also
+ require extra hypervisor-specific network configuration steps by the administrator when
+      deploying a host. For some hypervisors, it might be necessary to open up a large range
+ of ports on the firewall to allow multiple concurrent migration operations.
+ </p>
+
+ <p>
+ <img class="diagram" src="migration-native.png" alt="Migration native path">
+ </p>
+
+ <h3><a id="transporttunnel">libvirt tunnelled transport</a></h3>
+ <p>
+ <em>Tunnelled</em> data transports will always be capable of strong encryption
+ since they are able to leverage the capabilities built in to the libvirt RPC protocol.
+ The downside of a tunnelled transport, however, is that there will be extra data copies
+      involved on both the source and destination hosts as the data is moved between libvirtd
+ and the hypervisor. This is likely to be a more significant problem for guests with
+ very large RAM sizes, which dirty memory pages quickly. On the deployment side, tunnelled
+ transports do not require any extra network configuration over and above what's already
+      required for general libvirtd <a href="remote.html">remote access</a>, and only a
+      single port needs to be open on the firewall to support multiple concurrent
+ migration operations.
+ </p>
+
+ <p>
+ <img class="diagram" src="migration-tunnel.png" alt="Migration tunnel path">
+ </p>
+
+ <h2><a id="flow">Communication control paths/flows</a></h2>
+
+ <p>
+ Migration of virtual machines requires close co-ordination of the two
+ hosts involved, as well as the application invoking the migration,
+ which may be on the source, the destination, or a third host.
+ </p>
+
+ <h3><a id="flowmanageddirect">Managed direct migration</a></h3>
+
+ <p>
+ With <em>managed direct</em> migration, the libvirt client process
+ controls the various phases of migration. The client application must
+ be able to connect and authenticate with the libvirtd daemons on both
+ the source and destination hosts. There is no need for the two libvirtd
+ daemons to communicate with each other. If the client application
+ crashes, or otherwise loses its connection to libvirtd during the
+ migration process, an attempt will be made to abort the migration and
+ restart the guest CPUs on the source host. There may be scenarios
+      where this cannot be safely done, in which case the guest will be
+ left paused on one or both of the hosts.
+ </p>
+
+ <p>
+ <img class="diagram" src="migration-managed-direct.png" alt="Migration direct, managed">
+ </p>
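+
+    <p>
+      As a concrete illustration of this flow, the following is a minimal C sketch
+      using <code>virDomainMigrate</code>. The connection URIs and guest name are
+      purely illustrative and most error handling is omitted.
+    </p>
+
+    <pre>
+/* Managed direct migration: the client opens libvirt connections to both
+ * hosts and drives the migration itself via virDomainMigrate(). */
+#include &lt;stdio.h&gt;
+#include &lt;libvirt/libvirt.h&gt;
+
+int main(void)
+{
+    virConnectPtr src = virConnectOpen("qemu+ssh://srchost/system");
+    virConnectPtr dst = virConnectOpen("qemu+ssh://desthost/system");
+    virDomainPtr dom = virDomainLookupByName(src, "web1");
+
+    /* Live migration; with no hypervisor specific URI (5th argument),
+     * libvirt derives one from the destination's configured hostname */
+    virDomainPtr moved = virDomainMigrate(dom, dst, VIR_MIGRATE_LIVE,
+                                          NULL, NULL, 0);
+    if (moved == NULL)
+        fprintf(stderr, "migration failed\n");
+    else
+        virDomainFree(moved);
+
+    virDomainFree(dom);
+    virConnectClose(dst);
+    virConnectClose(src);
+    return 0;
+}
+    </pre>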
+
+
+ <h3><a id="flowpeer2peer">Managed peer to peer migration</a></h3>
+
+ <p>
+ With <em>peer to peer</em> migration, the libvirt client process only
+ talks to the libvirtd daemon on the source host. The source libvirtd
+      daemon controls the entire migration process itself, by connecting
+      directly to the destination host's libvirtd. If the client application crashes,
+ or otherwise loses its connection to libvirtd, the migration process
+ will continue uninterrupted until completion.
+ </p>
+
+ <p>
+ <img class="diagram" src="migration-managed-p2p.png" alt="Migration peer-to-peer">
+ </p>
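+
+    <p>
+      A minimal C sketch of this flow, using <code>virDomainMigrateToURI</code> with
+      the VIR_MIGRATE_PEER2PEER flag; the hostnames and guest name are illustrative:
+    </p>
+
+    <pre>
+/* Managed peer to peer migration: the client only contacts the source
+ * libvirtd, which then connects to the destination libvirtd itself. */
+#include &lt;stdio.h&gt;
+#include &lt;libvirt/libvirt.h&gt;
+
+int main(void)
+{
+    virConnectPtr src = virConnectOpen("qemu+ssh://srchost/system");
+    virDomainPtr dom = virDomainLookupByName(src, "web1");
+
+    /* The URI is a libvirt URI for the destination host, used by the
+     * source libvirtd to establish the peer to peer connection */
+    if (virDomainMigrateToURI(dom, "qemu+ssh://desthost/system",
+                              VIR_MIGRATE_LIVE | VIR_MIGRATE_PEER2PEER,
+                              NULL, 0) &lt; 0)
+        fprintf(stderr, "migration failed\n");
+
+    virDomainFree(dom);
+    virConnectClose(src);
+    return 0;
+}
+    </pre>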
+
+
+ <h3><a id="flowunmanageddirect">Unmanaged direct migration</a></h3>
+
+ <p>
+      With <em>unmanaged direct</em> migration, neither the libvirt client
+      nor the libvirtd daemon controls the migration process. Control is instead
+      delegated to the hypervisor's own management services (if any). The
+ libvirt client merely initiates the migration via the hypervisor's
+      management layer. If the libvirt client or libvirtd crashes, the
+ migration process will continue uninterrupted until completion.
+ </p>
+
+ <p>
+ <img class="diagram" src="migration-unmanaged-direct.png" alt="Migration direct, unmanaged">
+ </p>
+
+
+ <h2><a id="security">Data security</a></h2>
+
+ <p>
+ Since the migration data stream includes a complete copy of the guest
+ OS RAM, snooping of the migration data stream may allow compromise
+ of sensitive guest information. If the virtualization hosts have
+ multiple network interfaces, or if the network switches support
+ tagged VLANs, then it is very desirable to separate guest network
+ traffic from migration or management traffic.
+ </p>
+
+ <p>
+ In some scenarios, even a separate network for migration data may
+ not offer sufficient security. In this case it is possible to apply
+ encryption to the migration data stream. If the hypervisor does not
+ itself offer encryption, then the libvirt tunnelled migration
+ facility should be used.
+ </p>
+
+ <h2><a id="uris">Migration URIs</a></h2>
+
+ <p>
+ Initiating a guest migration requires the client application to
+ specify up to three URIs, depending on the choice of control
+ flow and/or APIs used. The first URI is that of the libvirt
+ connection to the source host, where the virtual guest is
+ currently running. The second URI is that of the libvirt
+ connection to the destination host, where the virtual guest
+ will be moved to. The third URI is a hypervisor specific
+ URI used to control how the guest will be migrated. With
+ any managed migration flow, the first and second URIs are
+ compulsory, while the third URI is optional. With the
+ unmanaged direct migration mode, the first and third URIs are
+ compulsory and the second URI is not used.
+ </p>
+
+ <p>
+ Ordinarily management applications only need to care about the
+ first and second URIs, which are both in the normal libvirt
+ connection URI format. Libvirt will then automatically determine
+ the hypervisor specific URI, by looking up the target host's
+ configured hostname. There are a few scenarios where the management
+ application may wish to have direct control over the third URI.
+ </p>
+
+ <ol>
+ <li>The configured hostname is incorrect, or DNS is broken. If a
+ host has a hostname which will not resolve to match one of its
+ public IP addresses, then libvirt will generate an incorrect
+ URI. In this case the management application should specify the
+ hypervisor specific URI explicitly, using an IP address, or a
+ correct hostname.</li>
+      <li>The host has multiple network interfaces. If a host has multiple
+ network interfaces, it might be desirable for the migration data
+ stream to be sent over a specific interface for either security
+ or performance reasons. In this case the management application
+ should specify the hypervisor specific URI, using an IP address
+ associated with the network to be used.</li>
+      <li>The firewall restricts what ports are available. When libvirt
+        generates a migration URI, it will pick a port number using hypervisor
+        specific rules. Some hypervisors only require a single port to be
+        open in the firewall, while others require a whole range of port
+        numbers. In the latter case the management application may wish
+        to choose a specific port number outside the default range in order
+        to comply with local firewall policies; an example is sketched below.</li>
+ </ol>
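+
+    <p>
+      For instance, with the QEMU driver the hypervisor specific URI can pin the
+      migration traffic to a particular address and port. The following C sketch is
+      purely illustrative; the address, port number and guest name are assumptions
+      about the local deployment.
+    </p>
+
+    <pre>
+/* Force the migration data over a dedicated interface and a fixed port,
+ * instead of letting libvirt derive the URI from the hostname */
+#include &lt;libvirt/libvirt.h&gt;
+
+void migrate_pinned(virConnectPtr src, virConnectPtr dst)
+{
+    virDomainPtr dom = virDomainLookupByName(src, "web1");
+
+    /* The 5th argument is the hypervisor specific migration URI */
+    virDomainPtr moved = virDomainMigrate(dom, dst, VIR_MIGRATE_LIVE, NULL,
+                                          "tcp://10.0.0.1:49152", 0);
+    if (moved)
+        virDomainFree(moved);
+    virDomainFree(dom);
+}
+    </pre>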
+
+ <h2><a id="config">Configuration file handling</a></h2>
+
+ <p>
+ There are two types of virtual machine known to libvirt. A <em>transient</em>
+ guest only exists while it is running, and has no configuration file stored
+ on disk. A <em>persistent</em> guest maintains a configuration file on disk
+ even when it is not running.
+ </p>
+
+ <p>
+ By default, a migration operation will not attempt to change any configuration
+ files that may be stored on either the source or destination host. It is the
+      responsibility of the administrator, or the management application, to manage
+      distribution of configuration files (if desired). It is important to note that the <code>/etc/libvirt</code>
+ directory <strong>MUST NEVER BE SHARED BETWEEN HOSTS</strong>. There are some
+ typical scenarios that might be applicable:
+ </p>
+
+ <ul>
+ <li>Centralized configuration files outside libvirt, in shared storage. A cluster
+ aware management application may maintain all the master guest configuration
+ files in a cluster filesystem. When attempting to start a guest, the config
+ will be read from the cluster FS and used to deploy a persistent guest.
+ For migration the configuration will need to be copied to the destination
+ host and removed on the original.
+ </li>
+ <li>Centralized configuration files outside libvirt, in a database. A data center
+        management application may not store configuration files at all. Instead it
+ may generate libvirt XML on the fly when a guest is booted. It will typically
+ use transient guests, and thus not have to consider configuration files during
+ migration.
+ </li>
+ <li>Distributed configuration inside libvirt. The configuration file for each
+ guest is copied to every host where the guest is able to run. Upon migration
+        the existing config merely needs to be updated with any changes.
+ </li>
+ <li>Ad-hoc configuration management inside libvirt. Each guest is tied to a
+ specific host and rarely migrated. When migration is required, the config
+ is moved from one host to the other.
+ </li>
+ </ul>
+
+ <p>
+ As mentioned above, libvirt will not touch configuration files during
+ migration by default. The <code>virsh</code> command has two flags to
+      influence this behaviour. The <code>--undefinesource</code> flag
+      will cause the configuration file to be removed on the source host
+      after a successful migration. The <code>--persistent</code> flag will
+ cause a configuration file to be created on the destination host
+ after a successful migration. The following table summarizes the
+ configuration file handling in all possible state and flag
+ combinations.
+ </p>
+
+ <table class="data">
+ <thead>
+ <tr class="head">
+ <th colspan="3">Before migration</th>
+ <th colspan="2">Flags</th>
+ <th colspan="3">After migration</th>
+ </tr>
+ <tr class="subhead">
+ <th>Guest type</th>
+ <th>Source config</th>
+ <th>Dest config</th>
+        <th>--undefinesource</th>
+        <th>--persistent</th>
+ <th>Guest type</th>
+ <th>Source config</th>
+ <th>Dest config</th>
+ </tr>
+ </thead>
+ <tbody>
+ <!-- src:N, dst:N -->
+ <tr>
+ <td>Transient</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ <td>Transient</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ </tr>
+ <tr>
+ <td>Transient</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ <td class="n">N</td>
+ <td>Transient</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ </tr>
+ <tr>
+ <td>Transient</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ <td>Persistent</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ </tr>
+ <tr>
+ <td>Transient</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ <td>Persistent</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ </tr>
+
+ <!-- src:N, dst:Y -->
+ <tr>
+ <td>Transient</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ <td>Transient</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ </tr>
+ <tr>
+ <td>Transient</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ <td class="n">N</td>
+ <td>Transient</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ </tr>
+ <tr>
+ <td>Transient</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ <td>Persistent</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ </tr>
+ <tr>
+ <td>Transient</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ <td>Persistent</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ </tr>
+
+ <!-- src:Y dst:N -->
+ <tr>
+ <td>Persistent</td>
+ <td class="y">Y</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ <td>Transient</td>
+ <td class="y">Y</td>
+ <td class="n">N</td>
+ </tr>
+ <tr>
+ <td>Persistent</td>
+ <td class="y">Y</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ <td class="n">N</td>
+ <td>Transient</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ </tr>
+ <tr>
+ <td>Persistent</td>
+ <td class="y">Y</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ <td>Persistent</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ </tr>
+ <tr>
+ <td>Persistent</td>
+ <td class="y">Y</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ <td>Persistent</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ </tr>
+
+ <!-- src:Y dst:Y -->
+ <tr>
+ <td>Persistent</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ <td class="n">N</td>
+ <td class="n">N</td>
+ <td>Persistent</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ </tr>
+ <tr>
+ <td>Persistent</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ <td class="n">N</td>
+ <td>Persistent</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ </tr>
+ <tr>
+ <td>Persistent</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ <td>Persistent</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ </tr>
+ <tr>
+ <td>Persistent</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ <td class="y">Y</td>
+ <td>Persistent</td>
+ <td class="n">N</td>
+ <td class="y">Y</td>
+ </tr>
+ </tbody>
+ </table>
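+
+    <p>
+      At the API level the equivalent behaviour is requested with the
+      VIR_MIGRATE_PERSIST_DEST and VIR_MIGRATE_UNDEFINE_SOURCE flags. A minimal
+      C sketch, with an illustrative guest name and no error handling:
+    </p>
+
+    <pre>
+/* Create a config file on the destination and remove the one on the
+ * source, i.e. move a persistent guest's configuration with it */
+#include &lt;libvirt/libvirt.h&gt;
+
+void migrate_and_move_config(virConnectPtr src, virConnectPtr dst)
+{
+    virDomainPtr dom = virDomainLookupByName(src, "web1");
+    unsigned long flags = VIR_MIGRATE_LIVE |
+                          VIR_MIGRATE_PERSIST_DEST |     /* like --persistent */
+                          VIR_MIGRATE_UNDEFINE_SOURCE;   /* like --undefinesource */
+
+    virDomainPtr moved = virDomainMigrate(dom, dst, flags, NULL, NULL, 0);
+    if (moved)
+        virDomainFree(moved);
+    virDomainFree(dom);
+}
+    </pre>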
+
+ <h2><a id="scenarios">Migration scenarios</a></h2>
+
+
+ <h3><a id="scenarionativedirect">Native migration, client to two libvirtd servers</a></h3>
+
+ <p>
+ At an API level this requires use of virDomainMigrate, without the
+ VIR_MIGRATE_PEER2PEER flag set. The destination libvirtd server
+ will automatically determine the native hypervisor URI for migration
+ based off the primary hostname. To force migration over an alternate
+      based on the primary hostname. To force migration over an alternate
+      network interface, the optional hypervisor specific URI must be provided.
+
+ <pre>
+ syntax: virsh migrate GUESTNAME DEST-LIBVIRT-URI [HV-URI]
+
+
+ eg using default network interface
+
+ virsh migrate web1 qemu+ssh://desthost/system
+ virsh migrate web1 xen+tls://desthost/system
+
+
+ eg using secondary network interface
+
+ virsh migrate web1 qemu://desthost/system tcp://10.0.0.1/
+virsh migrate web1 xen+tcp://desthost/system xenmigr://10.0.0.1/
+ </pre>
+
+ <p>
+      Supported by Xen, QEMU, VMware and VirtualBox drivers
+ </p>
+
+ <h3><a id="scenarionativepeer2peer">Native migration, client to and peer2peer between, two libvirtd servers</a></h3>
+
+ <p>
+ virDomainMigrate, with the VIR_MIGRATE_PEER2PEER flag set,
+ using the libvirt URI format for the 'uri' parameter. The
+ destination libvirtd server will automatically determine
+      the native hypervisor URI for migration, based on the
+ primary hostname. The optional uri parameter controls how
+ the source libvirtd connects to the destination libvirtd,
+ in case it is not accessible using the same address that
+ the client uses to connect to the destination, or a different
+ encryption/auth scheme is required. There is no
+ scope for forcing an alternative network interface for the
+ native migration data with this method.
+ </p>
+
+ <p>
+ This mode cannot be invoked from virsh
+ </p>
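+
+    <p>
+      A minimal C sketch of this mode; the URIs and guest name are illustrative
+      and error handling is omitted:
+    </p>
+
+    <pre>
+/* Client to, and peer2peer between, two libvirtd servers: the client still
+ * opens both connections, but the migration is driven by the source libvirtd
+ * connecting directly to the destination libvirtd. */
+#include &lt;libvirt/libvirt.h&gt;
+
+void migrate_p2p(virConnectPtr src, virConnectPtr dst)
+{
+    virDomainPtr dom = virDomainLookupByName(src, "web1");
+
+    /* The 'uri' argument is a libvirt URI telling the source libvirtd how to
+     * reach the destination libvirtd, e.g. with a different auth scheme */
+    virDomainPtr moved = virDomainMigrate(dom, dst,
+                                          VIR_MIGRATE_LIVE | VIR_MIGRATE_PEER2PEER,
+                                          NULL, "qemu+tls://desthost/system", 0);
+    if (moved)
+        virDomainFree(moved);
+    virDomainFree(dom);
+}
+    </pre>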
+
+ <p>
+ Supported by QEMU driver
+ </p>
+
+    <h3><a id="scenariotunnelpeer2peer1">Tunnelled migration, client to and peer2peer between, two libvirtd servers</a></h3>
+
+ <p>
+      virDomainMigrate, with the VIR_MIGRATE_PEER2PEER and VIR_MIGRATE_TUNNELLED
+ flags set, using the libvirt URI format for the 'uri' parameter. The
+ destination libvirtd server will automatically determine
+      the native hypervisor URI for migration, based on the
+ primary hostname. The optional uri parameter controls how
+ the source libvirtd connects to the destination libvirtd,
+ in case it is not accessible using the same address that
+ the client uses to connect to the destination, or a different
+ encryption/auth scheme is required. The native hypervisor URI
+ format is not used at all.
+ </p>
+
+ <p>
+ This mode cannot be invoked from virsh
+ </p>
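+
+    <p>
+      A minimal C sketch; it differs from the previous mode only by the extra
+      VIR_MIGRATE_TUNNELLED flag, and the names remain illustrative:
+    </p>
+
+    <pre>
+/* Tunnelled peer to peer migration: the migration data is carried over
+ * the libvirtd RPC connection rather than a native hypervisor channel */
+#include &lt;libvirt/libvirt.h&gt;
+
+void migrate_p2p_tunnelled(virConnectPtr src, virConnectPtr dst)
+{
+    virDomainPtr dom = virDomainLookupByName(src, "web1");
+    unsigned long flags = VIR_MIGRATE_LIVE |
+                          VIR_MIGRATE_PEER2PEER |
+                          VIR_MIGRATE_TUNNELLED;
+
+    virDomainPtr moved = virDomainMigrate(dom, dst, flags, NULL, NULL, 0);
+    if (moved)
+        virDomainFree(moved);
+    virDomainFree(dom);
+}
+    </pre>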
+
+ <p>
+ Supported by QEMU driver
+ </p>
+
+ <h3><a id="nativedirectunmanaged">Native migration, client to one libvirtd server</a></h3>
+
+ <p>
+ virDomainMigrateToURI, without the VIR_MIGRATE_PEER2PEER flag set,
+ using a hypervisor specific URI format for the 'uri' parameter.
+ There is no use or requirement for a destination libvirtd instance
+ at all. This is typically used when the hypervisor has its own
+ native management daemon available to handle incoming migration
+ attempts on the destination.
+ </p>
+
+ <pre>
+ syntax: virsh migrate GUESTNAME HV-URI
+
+
+ eg using same libvirt URI for all connections
+
+ virsh migrate --direct web1 xenmigr://desthost/
+ </pre>
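+
+    <p>
+      At the API level this corresponds to a call along the following lines; the
+      hypervisor URI and guest name are illustrative:
+    </p>
+
+    <pre>
+/* Unmanaged direct migration: no destination libvirtd is involved; the
+ * hypervisor's own management layer accepts the incoming migration */
+#include &lt;stdio.h&gt;
+#include &lt;libvirt/libvirt.h&gt;
+
+void migrate_direct(virConnectPtr src)
+{
+    virDomainPtr dom = virDomainLookupByName(src, "web1");
+
+    /* The URI is hypervisor specific, not a libvirt connection URI */
+    if (virDomainMigrateToURI(dom, "xenmigr://desthost/",
+                              VIR_MIGRATE_LIVE, NULL, 0) &lt; 0)
+        fprintf(stderr, "migration failed\n");
+
+    virDomainFree(dom);
+}
+    </pre>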
+
+ <p>
+ Supported by Xen driver
+ </p>
+
+ <h3><a id="nativepeer2peer">Native migration, peer2peer between two libvirtd servers</a></h3>
+
+ <p>
+ virDomainMigrateToURI, with the VIR_MIGRATE_PEER2PEER flag set,
+ using the libvirt URI format for the 'uri' parameter. The
+ destination libvirtd server will automatically determine
+      the native hypervisor URI for migration, based on the
+ primary hostname. There is no scope for forcing an alternative
+ network interface for the native migration data with this method.
+ </p>
+
+ <pre>
+ syntax: virsh migrate GUESTNAME DEST-LIBVIRT-URI [ALT-DEST-LIBVIRT-URI]
+
+
+ eg using same libvirt URI for all connections
+
+ virsh migrate --p2p web1 qemu+ssh://desthost/system
+
+
+ eg using different libvirt URI auth scheme for peer2peer connections
+
+virsh migrate --p2p web1 qemu+ssh://desthost/system qemu+tls://desthost/system
+
+
+ eg using different libvirt URI hostname for peer2peer connections
+
+ virsh migrate --p2p web1 qemu+ssh://desthost/system qemu+ssh://10.0.0.1/system
+ </pre>
+
+ <p>
+ Supported by the QEMU driver
+ </p>
+
+ <h3><a id="scenariotunnelpeer2peer2">Tunnelled migration, peer2peer between two libvirtd servers</a></h3>
+
+ <p>
+      virDomainMigrateToURI, with the VIR_MIGRATE_PEER2PEER and VIR_MIGRATE_TUNNELLED
+ flags set, using the libvirt URI format for the 'uri' parameter. The
+ destination libvirtd server will automatically determine
+      the native hypervisor URI for migration, based on the
+ primary hostname. The optional uri parameter controls how
+ the source libvirtd connects to the destination libvirtd,
+ in case it is not accessible using the same address that
+ the client uses to connect to the destination, or a different
+ encryption/auth scheme is required. The native hypervisor URI
+ format is not used at all.
+ </p>
+
+ <pre>
+ syntax: virsh migrate GUESTNAME DEST-LIBVIRT-URI [ALT-DEST-LIBVIRT-URI]
+
+
+ eg using same libvirt URI for all connections
+
+ virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system
+
+
+ eg using different libvirt URI auth scheme for peer2peer connections
+
+virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system qemu+tls://desthost/system
+
+
+ eg using different libvirt URI hostname for peer2peer connections
+
+ virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system qemu+ssh://10.0.0.1/system
+ </pre>
+
+ <p>
+ Supported by QEMU driver
+ </p>
+
+ </body>
+</html>