+Thu May 8 10:19:11 EST 2008 Daniel P. Berrange <berrange@redhat.com>
+
+ * docs/page.xsl: Fix detection of sub-headings
+ * docs/domain.html, docs/domain.html.in: Re-write content to
+ reflect current domain XML format
+
Thu May 8 07:51:11 EST 2008 Daniel P. Berrange <berrange@redhat.com>
* src/auth.html.in, src/auth.html: Fix policykit config docs
</div>
<div id="content">
<h1>Domain XML format</h1>
- <p>This section describes the XML format used to represent domains, there are
-variations on the format based on the kind of domains run and the options
-used to launch them:</p>
- <h3 id="Normal"><a name="Normal1" id="Normal1">Normal paravirtualized Xen
-guests</a>:</h3>
- <p>The root element must be called <code>domain</code> with no namespace, the
-<code>type</code> attribute indicates the kind of hypervisor used, 'xen' is
-the default value. The <code>id</code> attribute gives the domain id at
-runtime (not however that this may change, for example if the domain is saved
-to disk and restored). The domain has a few children whose order is not
-significant:</p>
- <ul><li>name: the domain name, preferably ASCII based</li><li>memory: the maximum memory allocated to the domain in kilobytes</li><li>vcpu: the number of virtual cpu configured for the domain</li><li>os: a block describing the Operating System, its content will be
- dependent on the OS type
- <ul><li>type: indicate the OS type, always linux at this point</li><li>kernel: path to the kernel on the Domain 0 filesystem</li><li>initrd: an optional path for the init ramdisk on the Domain 0
- filesystem</li><li>cmdline: optional command line to the kernel</li><li>root: the root filesystem from the guest viewpoint, it may be
- passed as part of the cmdline content too</li></ul></li><li>devices: a list of <code>disk</code>, <code>interface</code> and
- <code>console</code> descriptions in no special order</li></ul>
- <p>The format of the devices and their type may grow over time, but the
-following should be sufficient for basic use:</p>
- <p>A <code>disk</code> device indicates a block device, it can have two
-values for the type attribute either 'file' or 'block' corresponding to the 2
-options available at the Xen layer. It has two mandatory children, and one
-optional one in no specific order:</p>
- <ul><li>source with a file attribute containing the path in Domain 0 to the
- file or a dev attribute if using a block device, containing the device
- name ('hda5' or '/dev/hda5')</li><li>target indicates in a dev attribute the device where it is mapped in
- the guest</li><li>readonly an optional empty element indicating the device is
- read-only</li><li>shareable an optional empty element indicating the device
- can be used read/write with other domains</li></ul>
- <p>An <code>interface</code> element describes a network device mapped on the
-guest, it also has a type whose value is currently 'bridge', it also have a
-number of children in no specific order:</p>
- <ul><li>source: indicating the bridge name</li><li>mac: the optional mac address provided in the address attribute</li><li>ip: the optional IP address provided in the address attribute</li><li>script: the script used to bridge the interface in the Domain 0</li><li>target: and optional target indicating the device name.</li></ul>
- <p>A <code>console</code> element describes a serial console connection to
-the guest. It has no children, and a single attribute <code>tty</code> which
-provides the path to the Pseudo TTY on which the guest console can be
-accessed</p>
- <p>Life cycle actions for the domain can also be expressed in the XML format,
-they drive what should be happening if the domain crashes, is rebooted or is
-poweroff. There is various actions possible when this happen:</p>
- <ul><li>destroy: The domain is cleaned up (that's the default normal processing
- in Xen)</li><li>restart: A new domain is started in place of the old one with the same
- configuration parameters</li><li>preserve: The domain will remain in memory until it is destroyed
- manually, it won't be running but allows for post-mortem debugging</li><li>rename-restart: a variant of the previous one but where the old domain
- is renamed before being saved to allow a restart</li></ul>
- <p>The following could be used for a Xen production system:</p>
- <pre><domain>
- ...
- <on_reboot>restart</on_reboot>
- <on_poweroff>destroy</on_poweroff>
- <on_crash>rename-restart</on_crash>
- ...
-</domain></pre>
- <p>While the format may be extended in various ways as support for more
-hypervisor types and features are added, it is expected that this core subset
-will remain functional in spite of the evolution of the library.</p>
- <h3 id="Fully">
- <a name="Fully1" id="Fully1">Fully virtualized guests</a>
+ <ul><li>
+ <a href="#elements">Element and attribute overview</a>
+ <ul><li>
+ <a href="#elementsMetadata">General metadata</a>
+ </li><li>
+ <a href="#elementsOS">Operating system booting</a>
+ <ul><li>
+ <a href="#elementsOSBIOS">BIOS bootloader</a>
+ </li><li>
+ <a href="#elementsOSBootloader">Host bootloader</a>
+ </li><li>
+ <a href="#elementsOSKernel">Direct kernel boot</a>
+ </li></ul>
+ </li><li>
+ <a href="#elementsResources">Basic resources</a>
+ </li><li>
+ <a href="#elementsLifecycle">Lifecycle control</a>
+ </li><li>
+ <a href="#elementsFeatures">Hypervisor features</a>
+ </li><li>
+ <a href="#elementsTime">Time keeping</a>
+ </li><li>
+ <a href="#elementsDevices">Devices</a>
+ <ul><li>
+ <a href="#elementsDisks">Hard drives, floppy disks, CDROMs</a>
+ </li><li>
+ <a href="#elementsNICS">Network interfaces</a>
+ <ul><li>
+ <a href="#elementsNICSVirtual">Virtual network</a>
+ </li><li>
+ <a href="#elementsNICSBridge">Bridge to to LAN</a>
+ </li><li>
+ <a href="#elementsNICSSlirp">Userspace SLIRP stack</a>
+ </li><li>
+ <a href="#elementsNICSEthernet">Generic ethernet connection</a>
+ </li><li>
+ <a href="#elementsNICSMulticast">Multicast tunnel</a>
+ </li><li>
+ <a href="#elementsNICSTCP">TCP tunnel</a>
+ </li></ul>
+ </li><li>
+ <a href="#elementsInput">Input devices</a>
+ </li><li>
+ <a href="#elementsGraphics">Graphical framebuffers</a>
+ </li><li>
+ <a href="#elementsConsole">Consoles, serial & parallel devices</a>
+ <ul><li>
+ <a href="#elementsCharSTDIO">Domain logfile</a>
+ </li><li>
+ <a href="#elementsCharFle">Device logfile</a>
+ </li><li>
+ <a href="#elementsCharVC">Virtual console</a>
+ </li><li>
+ <a href="#elementsCharNull">Null device</a>
+ </li><li>
+ <a href="#elementsCharPTY">Pseudo TTY</a>
+ </li><li>
+ <a href="#elementsCharHost">Host device proxy</a>
+ </li><li>
+ <a href="#elementsCharTCP">TCP client/server</a>
+ </li><li>
+ <a href="#elementsCharUDP">UDP network console</a>
+ </li><li>
+ <a href="#elementsCharUNIX">UNIX domain socket client/server</a>
+ </li></ul>
+ </li></ul>
+ </li></ul>
+ </li><li>
+ <a href="#examples">Example configs</a>
+ </li></ul>
+ <p>
+      This section describes the XML format used to represent domains. There are
+      variations on the format based on the kind of domains run and the options
+      used to launch them. For hypervisor specific details consult the
+      <a href="drivers.html">driver docs</a>.
+ </p>
+ <h2>
+ <a name="elements" id="elements">Element and attribute overview</a>
+ </h2>
+ <p>
+ The root element required for all virtual machines is
+      named <code>domain</code>. It has two attributes:
+      <code>type</code> specifies the hypervisor used for running
+      the domain. The allowed values are driver specific, but
+      include "xen", "kvm", "qemu", "lxc" and "kqemu". The
+      second attribute, <code>id</code>, is a unique
+ integer identifier for the running guest machine. Inactive
+ machines have no id value.
+ </p>
+ <h3>
+ <a name="elementsMetadata" id="elementsMetadata">General metadata</a>
</h3>
- <p>There is a few things to notice specifically for HVM domains:</p>
- <ul><li>the optional <code><features></code> block is used to enable
- certain guest CPU / system features. For HVM guests the following
- features are defined:
- <ul><li><code>pae</code> - enable PAE memory addressing</li><li><code>apic</code> - enable IO APIC</li><li><code>acpi</code> - enable ACPI bios</li></ul></li><li>the optional <code><clock></code> element is used to specify
- whether the emulated BIOS clock in the guest is synced to either
- <code>localtime</code> or <code>utc</code>. In general Windows will
- want <code>localtime</code> while all other operating systems will
- want <code>utc</code>. The default is thus <code>utc</code></li><li>the <code><os></code> block description is very different, first
- it indicates that the type is 'hvm' for hardware virtualization, then
- instead of a kernel, boot and command line arguments, it points to an os
- boot loader which will extract the boot information from the boot device
- specified in a separate boot element. The <code>dev</code> attribute on
- the <code>boot</code> tag can be one of:
- <ul><li><code>fd</code> - boot from first floppy device</li><li><code>hd</code> - boot from first harddisk device</li><li><code>cdrom</code> - boot from first cdrom device</li></ul></li><li>the <code><devices></code> section includes an emulator entry
- pointing to an additional program in charge of emulating the devices</li><li>the disk entry indicates in the dev target section that the emulation
- for the drive is the first IDE disk device hda. The list of device names
- supported is dependent on the Hypervisor, but for Xen it can be any IDE
- device <code>hda</code>-<code>hdd</code>, or a floppy device
- <code>fda</code>, <code>fdb</code>. The <code><disk></code> element
- also supports a 'device' attribute to indicate what kinda of hardware to
- emulate. The following values are supported:
- <ul><li><code>floppy</code> - a floppy disk controller</li><li><code>disk</code> - a generic hard drive (the default it
- omitted)</li><li><code>cdrom</code> - a CDROM device</li></ul>
- For Xen 3.0.2 and earlier a CDROM device can only be emulated on the
- <code>hdc</code> channel, while for 3.0.3 and later, it can be emulated
- on any IDE channel.</li><li>the <code><devices></code> section also include at least one
- entry for the graphic device used to render the os. Currently there is
- just 2 types possible 'vnc' or 'sdl'. If the type is 'vnc', then an
- additional <code>port</code> attribute will be present indicating the TCP
- port on which the VNC server is accepting client connections.</li></ul>
- <p>It is likely that the HVM description gets additional optional elements
-and attributes as the support for fully virtualized domain expands,
-especially for the variety of devices emulated and the graphic support
-options offered.</p>
+ <pre>
+ <domain type='xen' id='3'>
+ <name>fv0</name>
+ <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid>
+ ...</pre>
+ <dl><dt><code>name</code></dt><dd>The content of the <code>name</code> element provides
+ a short name for the virtual machine. This name should
+ consist only of alpha-numeric characters and is required
+ to be unique within the scope of a single host. It is
+ often used to form the filename for storing the persistent
+ configuration file. <span class="since">Since 0.0.1</span></dd><dt><code>uuid</code></dt><dd>The content of the <code>uuid</code> element provides
+ a globally unique identifier for the virtual machine.
+ The format must be RFC 4122 compliant, eg <code>3e3fce45-4f53-4fa7-bb32-11f34168b82b</code>.
+ If omitted when defining/creating a new machine, a random
+ UUID is generated. <span class="since">Since 0.0.1</span></dd></dl>
<h3>
- <a name="Net1" id="Net1">Networking interface options</a>
- </h3>
- <p>The networking support in the QEmu and KVM case is more flexible, and
-support a variety of options:</p>
- <ol><li>Userspace SLIRP stack
- <p>Provides a virtual LAN with NAT to the outside world. The virtual
- network has DHCP & DNS services and will give the guest VM addresses
- starting from <code>10.0.2.15</code>. The default router will be
- <code>10.0.2.2</code> and the DNS server will be <code>10.0.2.3</code>.
- This networking is the only option for unprivileged users who need their
- VMs to have outgoing access. Example configs are:</p>
- <pre><interface type='user'/></pre>
- <pre>
-<interface type='user'>
- <mac address="11:22:33:44:55:66"/>
-</interface>
- </pre>
- </li><li>Virtual network
- <p>Provides a virtual network using a bridge device in the host.
- Depending on the virtual network configuration, the network may be
- totally isolated, NAT'ing to an explicit network device, or NAT'ing to
- the default route. DHCP and DNS are provided on the virtual network in
- all cases and the IP range can be determined by examining the virtual
- network config with '<code>virsh net-dumpxml <network
- name></code>'. There is one virtual network called 'default' setup out
- of the box which does NAT'ing to the default route and has an IP range of
- <code>192.168.22.0/255.255.255.0</code>. Each guest will have an
- associated tun device created with a name of vnetN, which can also be
- overridden with the <target> element. Example configs are:</p>
- <pre><interface type='network'>
- <source network='default'/>
-</interface>
-
-<interface type='network'>
- <source network='default'/>
- <target dev='vnet7'/>
- <mac address="11:22:33:44:55:66"/>
-</interface>
- </pre>
- </li><li>Bridge to to LAN
- <p>Provides a bridge from the VM directly onto the LAN. This assumes
- there is a bridge device on the host which has one or more of the hosts
- physical NICs enslaved. The guest VM will have an associated tun device
- created with a name of vnetN, which can also be overridden with the
- <target> element. The tun device will be enslaved to the bridge.
- The IP range / network configuration is whatever is used on the LAN. This
- provides the guest VM full incoming & outgoing net access just like a
- physical machine. Examples include:</p>
- <pre><interface type='bridge'>
- <source bridge='br0'/>
-</interface>
-
-<interface type='bridge'>
- <source bridge='br0'/>
- <target dev='vnet7'/>
- <mac address="11:22:33:44:55:66"/>
-</interface></pre>
- </li><li>Generic connection to LAN
- <p>Provides a means for the administrator to execute an arbitrary script
- to connect the guest's network to the LAN. The guest will have a tun
- device created with a name of vnetN, which can also be overridden with the
- <target> element. After creating the tun device a shell script will
- be run which is expected to do whatever host network integration is
- required. By default this script is called /etc/qemu-ifup but can be
- overridden.</p>
- <pre><interface type='ethernet'/>
+ <a name="elementsOS" id="elementsOS">Operating system booting</a>
+ </h3>
+ <p>
+      There are a number of different ways to boot virtual machines,
+      each with its own pros and cons.
+ </p>
+ <h4>
+ <a name="elementsOSBIOS" id="elementsOSBIOS">BIOS bootloader</a>
+ </h4>
+ <p>
+ Booting via the BIOS is available for hypervisors supporting
+ full virtualization. In this case the BIOS has a boot order
+ priority (floppy, harddisk, cdrom, network) determining where
+ to obtain/find the boot image.
+ </p>
+ <pre>
+ ...
+ <os>
+ <type>hvm</type>
+ <loader>/usr/lib/xen/boot/hvmloader</loader>
+ <boot dev='hd'/>
+ </os>
+ ...</pre>
+ <dl><dt><code>type</code></dt><dd>The content of the <code>type</code> element specifies the
+ type of operating system to be booted in the virtual machine.
+ <code>hvm</code> indicates that the OS is one designed to run
+ on bare metal, so requires full virtualization. <code>linux</code>
+ (badly named!) refers to an OS that supports the Xen 3 hypervisor
+ guest ABI. There are also two optional attributes, <code>arch</code>
+        specifying the CPU architecture to virtualize, and <code>machine</code>
+        referring to the machine type. The <a href="formatcaps.html">Capabilities XML</a>
+ provides details on allowed values for these. <span class="since">Since 0.0.1</span></dd><dt><code>loader</code></dt><dd>The optional <code>loader</code> tag refers to a firmware blob
+ used to assist the domain creation process. At this time, it is
+        only needed by Xen fully virtualized domains. <span class="since">Since 0.1.0</span></dd><dt><code>boot</code></dt><dd>The <code>dev</code> attribute takes one of the values "fd", "hd",
+ "cdrom" or "network" and is used to specify the next boot device
+ to consider. The <code>boot</code> element can be repeated multiple
+        times to set up a priority list of boot devices to try in turn,
+        as shown in the sketch below.
+ <span class="since">Since 0.1.3</span>
+ </dd></dl>
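+    <p>
+      As an illustrative sketch, a boot order that tries the CDROM
+      first and falls back to the first hard disk could be expressed
+      by repeating the <code>boot</code> element:
+    </p>
+    <pre>
+  ...
+  &lt;os&gt;
+    &lt;type&gt;hvm&lt;/type&gt;
+    &lt;boot dev='cdrom'/&gt;
+    &lt;boot dev='hd'/&gt;
+  &lt;/os&gt;
+  ...</pre>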
+ <h4>
+ <a name="elementsOSBootloader" id="elementsOSBootloader">Host bootloader</a>
+ </h4>
+ <p>
+ Hypervisors employing paravirtualization do not usually emulate
+      a BIOS, and instead the host is responsible for kicking off the
+      operating system boot. This may use a pseudo-bootloader in the
+ host to provide an interface to choose a kernel for the guest.
+ An example is <code>pygrub</code> with Xen.
+ </p>
+ <pre>
+ ...
+ <bootloader>/usr/bin/pygrub</bootloader>
+ <bootloader_args>--append single</bootloader_args>
+ ...</pre>
+ <dl><dt><code>bootloader</code></dt><dd>The content of the <code>bootloader</code> element provides
+        a fully-qualified path to the bootloader executable in the
+ host OS. This bootloader will be run to choose which kernel
+        to boot. The required output of the bootloader is dependent
+ on the hypervisor in use. <span class="since">Since 0.1.0</span></dd><dt><code>bootloader_args</code></dt><dd>The optional <code>bootloader_args</code> element allows
+ command line arguments to be passed to the bootloader.
+ <span class="since">Since 0.2.3</span>
+ </dd></dl>
+ <h4>
+ <a name="elementsOSKernel" id="elementsOSKernel">Direct kernel boot</a>
+ </h4>
+ <p>
+ When installing a new guest OS it is often useful to boot directly
+ from a kernel and initrd stored in the host OS, allowing command
+ line arguments to be passed directly to the installer. This capability
+      is usually available for both para and fully virtualized guests.
+ </p>
+ <pre>
+ ...
+ <os>
+ <type>hvm</type>
+ <loader>/usr/lib/xen/boot/hvmloader</loader>
+ <kernel>/root/f8-i386-vmlinuz</kernel>
+ <initrd>/root/f8-i386-initrd</initrd>
+ <cmdline>console=ttyS0 ks=http://example.com/f8-i386/os/</cmdline>
+ </os>
+ ...</pre>
+ <dl><dt><code>type</code></dt><dd>This element has the same semantics as described earlier in the
+ <a href="#elementsOSBIOS">BIOS boot section</a></dd><dt><code>type</code></dt><dd>This element has the same semantics as described earlier in the
+ <a href="#elementsOSBIOS">BIOS boot section</a></dd><dt><code>kernel</code></dt><dd>The contents of this element specify the fully-qualified path
+ to the kernel image in the host OS.</dd><dt><code>initrd</code></dt><dd>The contents of this element specify the fully-qualified path
+ to the (optional) ramdisk image in the host OS.</dd><dt><code>cmdline</code></dt><dd>The contents of this element specify arguments to be passed to
+        the kernel (or installer) at boot time. This is often used to
+        specify an alternate primary console (eg a serial port), or the
+        installation media source / kickstart file.</dd></dl>
+ <h3>
+ <a name="elementsResources" id="elementsResources">Basic resources</a>
+ </h3>
+ <pre>
+ ...
+ <memory>524288</memory>
+ <currentMemory>524288</currentMemory>
+ <vcpu>1</vcpu>
+ ...</pre>
+ <dl><dt><code>memory</code></dt><dd>The maximum allocation of memory for the guest at boot time.
+        The units for this value are kilobytes.</dd><dt><code>currentMemory</code></dt><dd>The actual allocation of memory for the guest. This value may
+        be less than the maximum allocation, to allow for ballooning
+        up the guest's memory on the fly. If this is omitted, it defaults
+        to the same value as the <code>memory</code> element</dd><dt><code>vcpu</code></dt><dd>The content of this element defines the number of virtual
+ CPUs allocated for the guest OS.</dd></dl>
+ <h3>
+ <a name="elementsLifecycle" id="elementsLifecycle">Lifecycle control</a>
+ </h3>
+ <p>
+      It is sometimes necessary to override the default actions taken
+ when a guest OS triggers a lifecycle operation. The following
+ collections of elements allow the actions to be specified. A
+ common use case is to force a reboot to be treated as a poweroff
+ when doing the initial OS installation. This allows the VM to be
+ re-configured for the first post-install bootup.
+ </p>
+ <pre>
+ ...
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>restart</on_crash>
+ ...</pre>
+ <dl><dt><code>on_poweroff</code></dt><dd>The content of this element specifies the action to take when
+        the guest requests a poweroff.</dd><dt><code>on_reboot</code></dt><dd>The content of this element specifies the action to take when
+        the guest requests a reboot.</dd><dt><code>on_crash</code></dt><dd>The content of this element specifies the action to take when
+ the guest crashes.</dd></dl>
+ <p>
+      Each of these states allows for the same four possible actions.
+ </p>
+ <dl><dt><code>destroy</code></dt><dd>The domain will be terminated completely and all resources
+ released</dd><dt><code>restart</code></dt><dd>The domain will be terminated, and then restarted with
+        the same configuration</dd><dt><code>preserve</code></dt><dd>The domain will be terminated, and its resources preserved
+ to allow analysis.</dd><dt><code>rename-restart</code></dt><dd>The domain will be terminated, and then restarted with
+ a new name</dd></dl>
+ <h3>
+ <a name="elementsFeatures" id="elementsFeatures">Hypervisor features</a>
+ </h3>
+ <p>
+ Hypervisors may allow certain CPU / machine features to be
+ toggled on/off.
+ </p>
+ <pre>
+ ...
+ <features>
+ <pae/>
+ <acpi/>
+ <apic/>
+ </features>
+ ...</pre>
+ <p>
+ All features are listed within the <code>features</code>
+      element; omitting a togglable feature tag turns it off.
+ The available features can be found by asking
+ for the <a href="formatcaps.html">capabilities XML</a>,
+ but a common set for fully virtualized domains are:
+ </p>
+ <dl><dt><code>pae</code></dt><dd>Physical address extension mode allows 32-bit guests
+ to address more than 4 GB of memory.</dd><dt><code>acpi</code></dt><dd>ACPI is useful for power management, for example, with
+ KVM guests it is required for graceful shutdown to work.
+ </dd></dl>
+ <h3>
+ <a name="elementsTime" id="elementsTime">Time keeping</a>
+ </h3>
+ <p>
+ The guest clock is typically initialized from the host clock.
+ Most operating systems expect the hardware clock to be kept
+ in UTC, and this is the default. Windows, however, expects
+      it to be in so-called 'localtime'.
+ </p>
+ <pre>
+ ...
+ <clock sync="localtime"/>
+ ...</pre>
+ <dl><dt><code>clock</code></dt><dd>The <code>sync</code> attribute takes either "utc" or
+ "localtime" to specify how the guest clock is initialized
+ in relation to the host OS.
+ </dd></dl>
+ <h3>
+ <a name="elementsDevices" id="elementsDevices">Devices</a>
+ </h3>
+ <p>
+      The final set of XML elements are all used to describe devices
+ provided to the guest domain. All devices occur as children
+ of the main <code>devices</code> element.
+ <span class="since">Since 0.1.3</span>
+ </p>
+ <pre>
+ ...
+ <devices>
+ <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
+ ...</pre>
+ <dl><dt><code>emulator</code></dt><dd>
+ The contents of the <code>emulator</code> element specify
+ the fully qualified path to the device model emulator binary.
+ The <a href="formatcaps.html">capabilities XML</a> specifies
+ the recommended default emulator to use for each particular
+ domain type / architecture combination.
+ </dd></dl>
+ <h4>
+ <a name="elementsDisks" id="elementsDisks">Hard drives, floppy disks, CDROMs</a>
+ </h4>
+ <p>
+ Any device that looks like a disk, be it a floppy, harddisk,
+ cdrom, or paravirtualized driver is specified via the <code>disk</code>
+ element.
+ </p>
+ <pre>
+ ...
+ <disk type='file'>
+ <driver name="tap" type="aio">
+ <source file='/var/lib/xen/images/fv0'/>
+ <target dev='hda' bus='ide'/>
+ </disk>
+ ...</pre>
+ <dl><dt><code>disk</code></dt><dd>The <code>disk</code> element is the main container for describing
+ disks. The <code>type</code> attribute is either "file" or "block"
+ and refers to the underlying source for the disk. The optional
+ <code>device</code> attribute indicates how the disk is to be exposed
+ to the guest OS. Possible values for this attribute are "floppy", "disk"
+ and "cdrom", defaulting to "disk".
+ <span class="since">Since 0.0.3; "device" attribute since 0.1.4</span></dd><dt><code>source</code></dt><dd>If the disk <code>type</code> is "file", then the <code>file</code> attribute
+ specifies the fully-qualified path to the file holding the disk. If the disk
+ <code>type</code> is "block", then the <code>dev</code> attribute specifies
+ the path to the host device to serve as the disk. <span class="since">Since 0.0.3</span></dd><dt><code>target</code></dt><dd>The <code>target</code> element controls the bus / device under which the
+ disk is exposed to the guest OS. The <code>dev</code> attribute indicates
+ the "logical" device name. The actual device name specified is not guarenteed to map to
+ the device name in the guest OS. Treat it as a device ordering hint.
+ The optional <code>bus</code> attribute specifies the type of disk device
+ to emulate; possible values are driver specific, with typical values being
+ "ide", "scsi", "virtio", "xen". If omitted, the bus type is inferred from
+        the style of the device name; eg, a device named 'sda' will typically be
+ exported using a SCSI bus.
+ <span class="since">Since 0.0.3; <code>bus</code> attribute since 0.4.3</span></dd><dt><code>driver</code></dt><dd>If the hypervisor supports multiple backend drivers, then the optional
+ <code>driver</code> element allows them to be selected. The <code>name</code>
+ attribute is the primary backend driver name, while the optional <code>type</code>
+ attribute provides the sub-type. <span class="since">Since 0.1.8</span>
+ </dd></dl>
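+    <p>
+      As a further illustrative sketch combining the attributes above,
+      a read-only CDROM backed by a host block device might look as
+      follows (the source path is a placeholder, and the empty
+      &lt;readonly/&gt; element is carried over from the earlier
+      revision of this page):
+    </p>
+    <pre>
+  ...
+  &lt;disk type='block' device='cdrom'&gt;
+    &lt;source dev='/dev/cdrom'/&gt;
+    &lt;target dev='hdc' bus='ide'/&gt;
+    &lt;readonly/&gt;
+  &lt;/disk&gt;
+  ...</pre>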
+ <h4>
+ <a name="elementsNICS" id="elementsNICS">Network interfaces</a>
+ </h4>
+ <pre>
+ ...
+ <interface type='bridge'>
+ <source bridge='xenbr0'/>
+ <mac address='00:16:3e:5d:c7:9e'/>
+ <script path='vif-bridge'/>
+ </interface>
+ ...</pre>
+ <h5>
+ <a name="elementsNICSVirtual" id="elementsNICSVirtual">Virtual network</a>
+ </h5>
+ <p>
+ <strong><em>
+ This is the recommended config for general guest connectivity on
+ hosts with dynamic / wireless networking configs
+ </em></strong>
+ </p>
+ <p>
+ Provides a virtual network using a bridge device in the host.
+ Depending on the virtual network configuration, the network may be
+ totally isolated, NAT'ing to an explicit network device, or NAT'ing to
+ the default route. DHCP and DNS are provided on the virtual network in
+ all cases and the IP range can be determined by examining the virtual
+ network config with '<code>virsh net-dumpxml [networkname]</code>'.
+ There is one virtual network called 'default' setup out
+ of the box which does NAT'ing to the default route and has an IP range of
+      <code>192.168.122.0/255.255.255.0</code>. Each guest will have an
+ associated tun device created with a name of vnetN, which can also be
+ overridden with the <target> element.
+ </p>
+ <pre>
+ ...
+ <interface type='network'>
+ <source network='default'/>
+ </interface>
+ ...
+ <interface type='network'>
+ <source network='default'/>
+ <target dev='vnet7'/>
+ <mac address="11:22:33:44:55:66"/>
+ </interface>
+ ...</pre>
+ <h5>
+ <a name="elementsNICSBridge" id="elementsNICSBridge">Bridge to to LAN</a>
+ </h5>
+ <p>
+ <strong><em>
+ This is the recommended config for general guest connectivity on
+ hosts with static wired networking configs
+ </em></strong>
+ </p>
+ <p>
+ Provides a bridge from the VM directly onto the LAN. This assumes
+      there is a bridge device on the host which has one or more of the host's
+ physical NICs enslaved. The guest VM will have an associated tun device
+ created with a name of vnetN, which can also be overridden with the
+ <target> element. The tun device will be enslaved to the bridge.
+ The IP range / network configuration is whatever is used on the LAN. This
+ provides the guest VM full incoming & outgoing net access just like a
+ physical machine.
+ </p>
+ <pre>
+ ...
+ <interface type='bridge'>
+ <source bridge='br0'/>
+ </interface>
-<interface type='ethernet'>
- <target dev='vnet7'/>
- <script path='/etc/qemu-ifup-mynet'/>
-</interface></pre>
- </li><li>Multicast tunnel
- <p>A multicast group is setup to represent a virtual network. Any VMs
- whose network devices are in the same multicast group can talk to each
- other even across hosts. This mode is also available to unprivileged
- users. There is no default DNS or DHCP support and no outgoing network
- access. To provide outgoing network access, one of the VMs should have a
- 2nd NIC which is connected to one of the first 4 network types and do the
- appropriate routing. The multicast protocol is compatible with that used
- by user mode linux guests too. The source address used must be from the
- multicast address block.</p>
- <pre><interface type='mcast'>
- <source address='230.0.0.1' port='5558'/>
-</interface></pre>
- </li><li>TCP tunnel
- <p>A TCP client/server architecture provides a virtual network. One VM
- provides the server end of the network, all other VMS are configured as
- clients. All network traffic is routed between the VMs via the server.
- This mode is also available to unprivileged users. There is no default
- DNS or DHCP support and no outgoing network access. To provide outgoing
- network access, one of the VMs should have a 2nd NIC which is connected
- to one of the first 4 network types and do the appropriate routing.</p>
- <p>Example server config:</p>
- <pre><interface type='server'>
- <source address='192.168.0.1' port='5558'/>
-</interface></pre>
- <p>Example client config:</p>
- <pre><interface type='client'>
- <source address='192.168.0.1' port='5558'/>
-</interface></pre>
- </li></ol>
- <p>To be noted, options 2, 3, 4 are also supported by Xen VMs, so it is
-possible to use these configs to have networking with both Xen &
-QEMU/KVMs connected to each other.</p>
- <h2>Example configs</h2>
+ <interface type='bridge'>
+ <source bridge='br0'/>
+ <target dev='vnet7'/>
+ <mac address="11:22:33:44:55:66"/>
+ </interface>
+ ...</pre>
+ <h5>
+ <a name="elementsNICSSlirp" id="elementsNICSSlirp">Userspace SLIRP stack</a>
+ </h5>
+ <p>
+ Provides a virtual LAN with NAT to the outside world. The virtual
+ network has DHCP & DNS services and will give the guest VM addresses
+ starting from <code>10.0.2.15</code>. The default router will be
+ <code>10.0.2.2</code> and the DNS server will be <code>10.0.2.3</code>.
+ This networking is the only option for unprivileged users who need their
+ VMs to have outgoing access.
+ </p>
+ <pre>
+ ...
+ <interface type='user'/>
+ ...
+ <interface type='user'>
+ <mac address="11:22:33:44:55:66"/>
+ </interface>
+ ...</pre>
+ <h5>
+ <a name="elementsNICSEthernet" id="elementsNICSEthernet">Generic ethernet connection</a>
+ </h5>
+ <p>
+ Provides a means for the administrator to execute an arbitrary script
+ to connect the guest's network to the LAN. The guest will have a tun
+ device created with a name of vnetN, which can also be overridden with the
+ <target> element. After creating the tun device a shell script will
+ be run which is expected to do whatever host network integration is
+ required. By default this script is called /etc/qemu-ifup but can be
+ overridden.
+ </p>
+ <pre>
+ ...
+ <interface type='ethernet'/>
+ ...
+ <interface type='ethernet'>
+ <target dev='vnet7'/>
+ <script path='/etc/qemu-ifup-mynet'/>
+ </interface>
+ ...</pre>
+ <h5>
+ <a name="elementsNICSMulticast" id="elementsNICSMulticast">Multicast tunnel</a>
+ </h5>
+ <p>
+      A multicast group is set up to represent a virtual network. Any VMs
+ whose network devices are in the same multicast group can talk to each
+ other even across hosts. This mode is also available to unprivileged
+ users. There is no default DNS or DHCP support and no outgoing network
+ access. To provide outgoing network access, one of the VMs should have a
+ 2nd NIC which is connected to one of the first 4 network types and do the
+ appropriate routing. The multicast protocol is compatible with that used
+ by user mode linux guests too. The source address used must be from the
+ multicast address block.
+ </p>
+ <pre>
+ ...
+ <interface type='mcast'>
+ <source address='230.0.0.1' port='5558'/>
+ </interface>
+ ...</pre>
+ <h5>
+ <a name="elementsNICSTCP" id="elementsNICSTCP">TCP tunnel</a>
+ </h5>
+ <p>
+ A TCP client/server architecture provides a virtual network. One VM
+      provides the server end of the network; all other VMs are configured as
+ clients. All network traffic is routed between the VMs via the server.
+ This mode is also available to unprivileged users. There is no default
+ DNS or DHCP support and no outgoing network access. To provide outgoing
+ network access, one of the VMs should have a 2nd NIC which is connected
+ to one of the first 4 network types and do the appropriate routing.</p>
+ <pre>
+ ...
+ <interface type='server'>
+ <source address='192.168.0.1' port='5558'/>
+ </interface>
+ ...
+ <interface type='client'>
+ <source address='192.168.0.1' port='5558'/>
+ </interface>
+ ...</pre>
+ <h4>
+ <a name="elementsInput" id="elementsInput">Input devices</a>
+ </h4>
+ <p>
+ Input devices allow interaction with the graphical framebuffer in the guest
+ virtual machine. When enabling the framebuffer, an input device is automatically
+ provided. It may be possible to add additional devices explicitly, for example,
+ to provide a graphics tablet for absolute cursor movement.
+ </p>
+ <pre>
+ ...
+ <input type='mouse' bus='usb'/>
+ ...</pre>
+    <dl><dt><code>input</code></dt><dd>The <code>input</code> element has one mandatory attribute, the <code>type</code>
+ whose value can be either 'mouse' or 'tablet'. The latter provides absolute
+ cursor movement, while the former uses relative movement. The optional
+ <code>bus</code> attribute can be used to refine the exact device type.
+ It takes values "xen" (paravirtualized), "ps2" and "usb".</dd></dl>
+ <h4>
+ <a name="elementsGraphics" id="elementsGraphics">Graphical framebuffers</a>
+ </h4>
+ <p>
+ A graphics device allows for graphical interaction with the
+ guest OS. A guest will typically have either a framebuffer
+ or a text console configured to allow interaction with the
+ admin.
+ </p>
+ <pre>
+ ...
+ <graphics type='vnc' port='5904'/>
+ ...</pre>
+ <dl><dt><code>graphics</code></dt><dd>The <code>graphics</code> element has a mandatory <code>type</code>
+ attribute which takes the value "sdl" or "vnc". The former displays
+ a window on the host desktop, while the latter activates a VNC server.
+        If the latter is used, the <code>port</code> attribute specifies the
+ TCP port number (with -1 indicating that it should be auto-allocated).
+ The <code>listen</code> attribute is an IP address for the server to
+ listen on. The <code>password</code> attribute provides a VNC password
+ in clear text.</dd></dl>
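+    <p>
+      For instance, a VNC server with an auto-allocated port, bound to
+      a specific listen address and protected by a password, might be
+      declared as follows (illustrative values):
+    </p>
+    <pre>
+  ...
+  &lt;graphics type='vnc' port='-1' listen='127.0.0.1' password='XYZ12345'/&gt;
+  ...</pre>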
+ <h4>
+ <a name="elementsConsole" id="elementsConsole">Consoles, serial & parallel devices</a>
+ </h4>
+ <p>
+ A character device provides a way to interact with the virtual machine.
+ Paravirtualized consoles, serial ports and parallel ports are all
+ classed as character devices and so represented using the same syntax.
+ </p>
+ <pre>
+ ...
+ <parallel type='pty'>
+ <source path='/dev/pts/2'/>
+ <target port='0'/>
+ </parallel>
+ <serial type='pty'>
+ <source path='/dev/pts/3'/>
+ <target port='0'/>
+ </serial>
+ <console type='pty'>
+ <source path='/dev/pts/4'/>
+ <target port='0'/>
+ </console>
+ </devices>
+ </domain></pre>
+ <dl><dt><code>parallel</code></dt><dd>Represents a parallel port</dd><dt><code>serial</code></dt><dd>Represents a serial port</dd><dt><code>console</code></dt><dd>Represents the primary console. This can be the paravirtualized
+        console with Xen guests, or it duplicates the primary serial port
+ for fully virtualized guests without a paravirtualized console.</dd><dt><code>source</code></dt><dd>The attributes available for the <code>source</code> element
+ vary according to the <code>type</code> attribute on the parent
+ tag. Allowed variations will be described below</dd><dt><code>target</code></dt><dd>The port number of the character device is specified via the
+ <code>port</code> attribute, numbered starting from 1. There is
+ usually only one console device, and 0, 1 or 2 serial devices
+ or parallel devices.
+ </dd></dl>
+ <h5>
+ <a name="elementsCharSTDIO" id="elementsCharSTDIO">Domain logfile</a>
+ </h5>
+ <p>
+ This disables all input on the character device, and sends output
+      into the virtual machine's logfile.
+ </p>
+ <pre>
+ ...
+ <console type='stdio'>
+ <target port='1'>
+ </console>
+ ...</pre>
+ <h5>
+ <a name="elementsCharFle" id="elementsCharFle">Device logfile</a>
+ </h5>
+ <p>
+ A file is opened and all data sent to the character
+ device is written to the file.
+ </p>
+ <pre>
+ ...
+ <serial type="file">
+ <source path="/var/log/vm/vm-serial.log"/>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+ <h5>
+ <a name="elementsCharVC" id="elementsCharVC">Virtual console</a>
+ </h5>
+ <p>
+ Connects the character device to the graphical framebuffer in
+ a virtual console. This is typically accessed via a special
+      hotkey sequence such as "ctrl+alt+3".
+ </p>
+ <pre>
+ ...
+ <serial type='vc'>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+ <h5>
+ <a name="elementsCharNull" id="elementsCharNull">Null device</a>
+ </h5>
+ <p>
+ Connects the character device to the void. No data is ever
+ provided to the input. All data written is discarded.
+ </p>
+ <pre>
+ ...
+ <serial type='null'>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+ <h5>
+ <a name="elementsCharPTY" id="elementsCharPTY">Pseudo TTY</a>
+ </h5>
+ <p>
+ A Pseudo TTY is allocated using /dev/ptmx. A suitable client
+ such as 'virsh console' can connect to interact with the
+ serial port locally.
+ </p>
+ <pre>
+ ...
+ <serial type="pty">
+ <source path="/dev/pts/3"/>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+ <p>
+      NB: in the special case of &lt;console type='pty'&gt;, the TTY
+      path is also duplicated as an attribute tty='/dev/pts/3'
+      on the top level &lt;console&gt; tag. This provides compatibility
+      with the existing syntax for &lt;console&gt; tags.
+ </p>
+ <h5>
+ <a name="elementsCharHost" id="elementsCharHost">Host device proxy</a>
+ </h5>
+ <p>
+ The character device is passed through to the underlying
+ physical character device. The device types must match,
+ eg the emulated serial port should only be connected to
+      a host serial port; don't connect a serial port to a parallel
+ port.
+ </p>
+ <pre>
+ ...
+ <serial type="dev">
+ <source path="/dev/ttyS0"/>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+ <h5>
+ <a name="elementsCharTCP" id="elementsCharTCP">TCP client/server</a>
+ </h5>
+ <p>
+ The character device acts as a TCP client connecting to a
+ remote server, or as a server waiting for a client connection.
+ </p>
+ <pre>
+ ...
+ <serial type="tcp">
+ <source mode="connect" host="0.0.0.0" service="2445"/>
+ <wiremode type="telnet"/>
+ <target port="1"/>
+ </serial>
+ ...</pre>
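+    <p>
+      For the server end of the connection, the <code>mode</code>
+      attribute would presumably be "bind" instead, by analogy with
+      the UDP and UNIX examples below (an assumed sketch):
+    </p>
+    <pre>
+  ...
+  &lt;serial type="tcp"&gt;
+    &lt;source mode="bind" host="0.0.0.0" service="2445"/&gt;
+    &lt;wiremode type="telnet"/&gt;
+    &lt;target port="1"/&gt;
+  &lt;/serial&gt;
+  ...</pre>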
+ <h5>
+ <a name="elementsCharUDP" id="elementsCharUDP">UDP network console</a>
+ </h5>
+ <p>
+ The character device acts as a UDP netconsole service,
+ sending and receiving packets. This is a lossy service.
+ </p>
+ <pre>
+ ...
+ <serial type="udp">
+ <source mode="bind" host="0.0.0.0" service="2445"/>
+ <source mode="connect" host="0.0.0.0" service="2445"/>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+ <h5>
+ <a name="elementsCharUNIX" id="elementsCharUNIX">UNIX domain socket client/server</a>
+ </h5>
+ <p>
+ The character device acts as a UNIX domain socket server,
+ accepting connections from local clients.
+ </p>
+ <pre>
+ ...
+ <serial type="unix">
+ <source mode="bind" path="/tmp/foo"/>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+ <h2>
+ <a name="examples" id="examples">Example configs</a>
+ </h2>
<p>
    Example configurations for each driver are provided on the
driver specific pages listed below
<body>
<h1>Domain XML format</h1>
- <p>This section describes the XML format used to represent domains, there are
-variations on the format based on the kind of domains run and the options
-used to launch them:</p>
-
- <h3 id="Normal"><a name="Normal1" id="Normal1">Normal paravirtualized Xen
-guests</a>:</h3>
-
- <p>The root element must be called <code>domain</code> with no namespace, the
-<code>type</code> attribute indicates the kind of hypervisor used, 'xen' is
-the default value. The <code>id</code> attribute gives the domain id at
-runtime (not however that this may change, for example if the domain is saved
-to disk and restored). The domain has a few children whose order is not
-significant:</p>
- <ul>
- <li>name: the domain name, preferably ASCII based</li>
- <li>memory: the maximum memory allocated to the domain in kilobytes</li>
- <li>vcpu: the number of virtual cpu configured for the domain</li>
- <li>os: a block describing the Operating System, its content will be
- dependent on the OS type
- <ul><li>type: indicate the OS type, always linux at this point</li><li>kernel: path to the kernel on the Domain 0 filesystem</li><li>initrd: an optional path for the init ramdisk on the Domain 0
- filesystem</li><li>cmdline: optional command line to the kernel</li><li>root: the root filesystem from the guest viewpoint, it may be
- passed as part of the cmdline content too</li></ul></li>
- <li>devices: a list of <code>disk</code>, <code>interface</code> and
- <code>console</code> descriptions in no special order</li>
- </ul>
- <p>The format of the devices and their type may grow over time, but the
-following should be sufficient for basic use:</p>
- <p>A <code>disk</code> device indicates a block device, it can have two
-values for the type attribute either 'file' or 'block' corresponding to the 2
-options available at the Xen layer. It has two mandatory children, and one
-optional one in no specific order:</p>
- <ul>
- <li>source with a file attribute containing the path in Domain 0 to the
- file or a dev attribute if using a block device, containing the device
- name ('hda5' or '/dev/hda5')</li>
- <li>target indicates in a dev attribute the device where it is mapped in
- the guest</li>
- <li>readonly an optional empty element indicating the device is
- read-only</li>
- <li>shareable an optional empty element indicating the device
- can be used read/write with other domains</li>
- </ul>
- <p>An <code>interface</code> element describes a network device mapped on the
-guest, it also has a type whose value is currently 'bridge', it also have a
-number of children in no specific order:</p>
- <ul>
- <li>source: indicating the bridge name</li>
- <li>mac: the optional mac address provided in the address attribute</li>
- <li>ip: the optional IP address provided in the address attribute</li>
- <li>script: the script used to bridge the interface in the Domain 0</li>
- <li>target: and optional target indicating the device name.</li>
- </ul>
- <p>A <code>console</code> element describes a serial console connection to
-the guest. It has no children, and a single attribute <code>tty</code> which
-provides the path to the Pseudo TTY on which the guest console can be
-accessed</p>
- <p>Life cycle actions for the domain can also be expressed in the XML format,
-they drive what should be happening if the domain crashes, is rebooted or is
-poweroff. There is various actions possible when this happen:</p>
- <ul>
- <li>destroy: The domain is cleaned up (that's the default normal processing
- in Xen)</li>
- <li>restart: A new domain is started in place of the old one with the same
- configuration parameters</li>
- <li>preserve: The domain will remain in memory until it is destroyed
- manually, it won't be running but allows for post-mortem debugging</li>
- <li>rename-restart: a variant of the previous one but where the old domain
- is renamed before being saved to allow a restart</li>
- </ul>
- <p>The following could be used for a Xen production system:</p>
- <pre><domain>
- ...
- <on_reboot>restart</on_reboot>
- <on_poweroff>destroy</on_poweroff>
- <on_crash>rename-restart</on_crash>
- ...
-</domain></pre>
- <p>While the format may be extended in various ways as support for more
-hypervisor types and features are added, it is expected that this core subset
-will remain functional in spite of the evolution of the library.</p>
-
- <h3 id="Fully"><a name="Fully1" id="Fully1">Fully virtualized guests</a></h3>
- <p>There is a few things to notice specifically for HVM domains:</p>
- <ul>
- <li>the optional <code><features></code> block is used to enable
- certain guest CPU / system features. For HVM guests the following
- features are defined:
- <ul><li><code>pae</code> - enable PAE memory addressing</li><li><code>apic</code> - enable IO APIC</li><li><code>acpi</code> - enable ACPI bios</li></ul></li>
- <li>the optional <code><clock></code> element is used to specify
- whether the emulated BIOS clock in the guest is synced to either
- <code>localtime</code> or <code>utc</code>. In general Windows will
- want <code>localtime</code> while all other operating systems will
- want <code>utc</code>. The default is thus <code>utc</code></li>
- <li>the <code><os></code> block description is very different, first
- it indicates that the type is 'hvm' for hardware virtualization, then
- instead of a kernel, boot and command line arguments, it points to an os
- boot loader which will extract the boot information from the boot device
- specified in a separate boot element. The <code>dev</code> attribute on
- the <code>boot</code> tag can be one of:
- <ul><li><code>fd</code> - boot from first floppy device</li><li><code>hd</code> - boot from first harddisk device</li><li><code>cdrom</code> - boot from first cdrom device</li></ul></li>
- <li>the <code><devices></code> section includes an emulator entry
- pointing to an additional program in charge of emulating the devices</li>
- <li>the disk entry indicates in the dev target section that the emulation
- for the drive is the first IDE disk device hda. The list of device names
- supported is dependent on the Hypervisor, but for Xen it can be any IDE
- device <code>hda</code>-<code>hdd</code>, or a floppy device
- <code>fda</code>, <code>fdb</code>. The <code><disk></code> element
- also supports a 'device' attribute to indicate what kinda of hardware to
- emulate. The following values are supported:
- <ul><li><code>floppy</code> - a floppy disk controller</li><li><code>disk</code> - a generic hard drive (the default it
- omitted)</li><li><code>cdrom</code> - a CDROM device</li></ul>
- For Xen 3.0.2 and earlier a CDROM device can only be emulated on the
- <code>hdc</code> channel, while for 3.0.3 and later, it can be emulated
- on any IDE channel.</li>
- <li>the <code><devices></code> section also include at least one
- entry for the graphic device used to render the os. Currently there is
- just 2 types possible 'vnc' or 'sdl'. If the type is 'vnc', then an
- additional <code>port</code> attribute will be present indicating the TCP
- port on which the VNC server is accepting client connections.</li>
- </ul>
- <p>It is likely that the HVM description gets additional optional elements
-and attributes as the support for fully virtualized domain expands,
-especially for the variety of devices emulated and the graphic support
-options offered.</p>
-
- <h3>
- <a name="Net1" id="Net1">Networking interface options</a>
- </h3>
- <p>The networking support in the QEmu and KVM case is more flexible, and
-support a variety of options:</p>
- <ol>
- <li>Userspace SLIRP stack
- <p>Provides a virtual LAN with NAT to the outside world. The virtual
- network has DHCP & DNS services and will give the guest VM addresses
- starting from <code>10.0.2.15</code>. The default router will be
- <code>10.0.2.2</code> and the DNS server will be <code>10.0.2.3</code>.
- This networking is the only option for unprivileged users who need their
- VMs to have outgoing access. Example configs are:</p>
- <pre><interface type='user'/></pre>
- <pre>
-<interface type='user'>
- <mac address="11:22:33:44:55:66"/>
-</interface>
- </pre>
- </li>
- <li>Virtual network
- <p>Provides a virtual network using a bridge device in the host.
- Depending on the virtual network configuration, the network may be
- totally isolated, NAT'ing to an explicit network device, or NAT'ing to
- the default route. DHCP and DNS are provided on the virtual network in
- all cases and the IP range can be determined by examining the virtual
- network config with '<code>virsh net-dumpxml <network
- name></code>'. There is one virtual network called 'default' setup out
- of the box which does NAT'ing to the default route and has an IP range of
- <code>192.168.22.0/255.255.255.0</code>. Each guest will have an
- associated tun device created with a name of vnetN, which can also be
- overridden with the <target> element. Example configs are:</p>
- <pre><interface type='network'>
- <source network='default'/>
-</interface>
-
-<interface type='network'>
- <source network='default'/>
- <target dev='vnet7'/>
- <mac address="11:22:33:44:55:66"/>
-</interface>
- </pre>
- </li>
- <li>Bridge to to LAN
- <p>Provides a bridge from the VM directly onto the LAN. This assumes
- there is a bridge device on the host which has one or more of the hosts
- physical NICs enslaved. The guest VM will have an associated tun device
- created with a name of vnetN, which can also be overridden with the
- <target> element. The tun device will be enslaved to the bridge.
- The IP range / network configuration is whatever is used on the LAN. This
- provides the guest VM full incoming & outgoing net access just like a
- physical machine. Examples include:</p>
- <pre><interface type='bridge'>
- <source bridge='br0'/>
-</interface>
-
-<interface type='bridge'>
- <source bridge='br0'/>
- <target dev='vnet7'/>
- <mac address="11:22:33:44:55:66"/>
-</interface></pre>
- </li>
- <li>Generic connection to LAN
- <p>Provides a means for the administrator to execute an arbitrary script
- to connect the guest's network to the LAN. The guest will have a tun
- device created with a name of vnetN, which can also be overridden with the
- <target> element. After creating the tun device a shell script will
- be run which is expected to do whatever host network integration is
- required. By default this script is called /etc/qemu-ifup but can be
- overridden.</p>
- <pre><interface type='ethernet'/>
-
-<interface type='ethernet'>
- <target dev='vnet7'/>
- <script path='/etc/qemu-ifup-mynet'/>
-</interface></pre>
- </li>
- <li>Multicast tunnel
- <p>A multicast group is setup to represent a virtual network. Any VMs
- whose network devices are in the same multicast group can talk to each
- other even across hosts. This mode is also available to unprivileged
- users. There is no default DNS or DHCP support and no outgoing network
- access. To provide outgoing network access, one of the VMs should have a
- 2nd NIC which is connected to one of the first 4 network types and do the
- appropriate routing. The multicast protocol is compatible with that used
- by user mode linux guests too. The source address used must be from the
- multicast address block.</p>
- <pre><interface type='mcast'>
- <source address='230.0.0.1' port='5558'/>
-</interface></pre>
- </li>
- <li>TCP tunnel
- <p>A TCP client/server architecture provides a virtual network. One VM
- provides the server end of the network, all other VMS are configured as
- clients. All network traffic is routed between the VMs via the server.
- This mode is also available to unprivileged users. There is no default
- DNS or DHCP support and no outgoing network access. To provide outgoing
- network access, one of the VMs should have a 2nd NIC which is connected
- to one of the first 4 network types and do the appropriate routing.</p>
- <p>Example server config:</p>
- <pre><interface type='server'>
- <source address='192.168.0.1' port='5558'/>
-</interface></pre>
- <p>Example client config:</p>
- <pre><interface type='client'>
- <source address='192.168.0.1' port='5558'/>
-</interface></pre>
- </li>
- </ol>
- <p>To be noted, options 2, 3, 4 are also supported by Xen VMs, so it is
-possible to use these configs to have networking with both Xen &
-QEMU/KVMs connected to each other.</p>
-
- <h2>Example configs</h2>
+ <ul id="toc"></ul>
+
+ <p>
+      This section describes the XML format used to represent domains. There are
+      variations on the format based on the kind of domains run and the options
+      used to launch them. For hypervisor specific details consult the
+      <a href="drivers.html">driver docs</a>.
+ </p>
+
+
+ <h2><a name="elements">Element and attribute overview</a></h2>
+
+ <p>
+ The root element required for all virtual machines is
+      named <code>domain</code>. It has two attributes:
+      <code>type</code> specifies the hypervisor used for running
+      the domain. The allowed values are driver specific, but
+      include "xen", "kvm", "qemu", "lxc" and "kqemu". The
+      second attribute, <code>id</code>, is a unique
+ integer identifier for the running guest machine. Inactive
+ machines have no id value.
+ </p>
+
+
+ <h3><a name="elementsMetadata">General metadata</a></h3>
+
+ <pre>
+ <domain type='xen' id='3'>
+ <name>fv0</name>
+ <uuid>4dea22b31d52d8f32516782e98ab3fa0</uuid>
+ ...</pre>
+
+ <dl>
+ <dt><code>name</code></dt>
+ <dd>The content of the <code>name</code> element provides
+ a short name for the virtual machine. This name should
+ consist only of alpha-numeric characters and is required
+ to be unique within the scope of a single host. It is
+ often used to form the filename for storing the persistent
+ configuration file. <span class="since">Since 0.0.1</span></dd>
+ <dt><code>uuid</code></dt>
+ <dd>The content of the <code>uuid</code> element provides
+ a globally unique identifier for the virtual machine.
+ The format must be RFC 4122 compliant, eg <code>3e3fce45-4f53-4fa7-bb32-11f34168b82b</code>.
+ If omitted when defining/creating a new machine, a random
+ UUID is generated. <span class="since">Since 0.0.1</span></dd>
+ </dl>
+
+ <h3><a name="elementsOS">Operating system booting</a></h3>
+
+ <p>
+      There are a number of different ways to boot virtual machines,
+      each with its own pros and cons.
+ </p>
+
+ <h4><a name="elementsOSBIOS">BIOS bootloader</a></h4>
+
+ <p>
+ Booting via the BIOS is available for hypervisors supporting
+ full virtualization. In this case the BIOS has a boot order
+ priority (floppy, harddisk, cdrom, network) determining where
+ to obtain/find the boot image.
+ </p>
+
+ <pre>
+ ...
+ <os>
+ <type>hvm</type>
+ <loader>/usr/lib/xen/boot/hvmloader</loader>
+ <boot dev='hd'/>
+ </os>
+ ...</pre>
+
+ <dl>
+ <dt><code>type</code></dt>
+ <dd>The content of the <code>type</code> element specifies the
+ type of operating system to be booted in the virtual machine.
+ <code>hvm</code> indicates that the OS is one designed to run
+ on bare metal, so requires full virtualization. <code>linux</code>
+ (badly named!) refers to an OS that supports the Xen 3 hypervisor
+ guest ABI. There are also two optional attributes, <code>arch</code>
+      specifying the CPU architecture to virtualize, and <code>machine</code>
+      referring to the machine type. The <a href="formatcaps.html">Capabilities XML</a>
+ provides details on allowed values for these. <span class="since">Since 0.0.1</span></dd>
+ <dt><code>loader</code></dt>
+ <dd>The optional <code>loader</code> tag refers to a firmware blob
+ used to assist the domain creation process. At this time, it is
+      only needed by Xen fully virtualized domains. <span class="since">Since 0.1.0</span></dd>
+ <dt><code>boot</code></dt>
+ <dd>The <code>dev</code> attribute takes one of the values "fd", "hd",
+ "cdrom" or "network" and is used to specify the next boot device
+ to consider. The <code>boot</code> element can be repeated multiple
+          times to set up a priority list of boot devices to try in turn,
+          as shown in the sketch below.
+ <span class="since">Since 0.1.3</span>
+ </dd>
+ </dl>
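+
+    <p>
+      As an illustrative sketch, a boot order that tries the CDROM
+      first and falls back to the first hard disk could be expressed
+      by repeating the <code>boot</code> element:
+    </p>
+
+    <pre>
+  ...
+  &lt;os&gt;
+    &lt;type&gt;hvm&lt;/type&gt;
+    &lt;boot dev='cdrom'/&gt;
+    &lt;boot dev='hd'/&gt;
+  &lt;/os&gt;
+  ...</pre>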
+
+ <h4><a name="elementsOSBootloader">Host bootloader</a></h4>
+
+ <p>
+ Hypervisors employing paravirtualization do not usually emulate
+      a BIOS, and instead the host is responsible for kicking off the
+      operating system boot. This may use a pseudo-bootloader in the
+ host to provide an interface to choose a kernel for the guest.
+ An example is <code>pygrub</code> with Xen.
+ </p>
+
+ <pre>
+ ...
+ <bootloader>/usr/bin/pygrub</bootloader>
+ <bootloader_args>--append single</bootloader_args>
+ ...</pre>
+
+ <dl>
+ <dt><code>bootloader</code></dt>
+ <dd>The content of the <code>bootloader</code> element provides
+        a fully-qualified path to the bootloader executable in the
+        host OS. This bootloader will be run to choose which kernel
+        to boot. The required output of the bootloader is dependent
+ on the hypervisor in use. <span class="since">Since 0.1.0</span></dd>
+ <dt><code>bootloader_args</code></dt>
+ <dd>The optional <code>bootloader_args</code> element allows
+ command line arguments to be passed to the bootloader.
+ <span class="since">Since 0.2.3</span>
+ </dd>
+
+ </dl>
+
+ <h4><a name="elementsOSKernel">Direct kernel boot</a></h4>
+
+ <p>
+ When installing a new guest OS it is often useful to boot directly
+ from a kernel and initrd stored in the host OS, allowing command
+ line arguments to be passed directly to the installer. This capability
+      is usually available for both paravirtualized and fully virtualized guests.
+ </p>
+
+ <pre>
+ ...
+ <os>
+ <type>hvm</type>
+ <loader>/usr/lib/xen/boot/hvmloader</loader>
+ <kernel>/root/f8-i386-vmlinuz</kernel>
+ <initrd>/root/f8-i386-initrd</initrd>
+ <cmdline>console=ttyS0 ks=http://example.com/f8-i386/os/</cmdline>
+ </os>
+ ...</pre>
+
+ <dl>
+ <dt><code>type</code></dt>
+ <dd>This element has the same semantics as described earlier in the
+ <a href="#elementsOSBIOS">BIOS boot section</a></dd>
+      <dt><code>loader</code></dt>
+      <dd>This element has the same semantics as described earlier in the
+        <a href="#elementsOSBIOS">BIOS boot section</a></dd>
+ <dt><code>kernel</code></dt>
+ <dd>The contents of this element specify the fully-qualified path
+ to the kernel image in the host OS.</dd>
+ <dt><code>initrd</code></dt>
+ <dd>The contents of this element specify the fully-qualified path
+ to the (optional) ramdisk image in the host OS.</dd>
+ <dt><code>cmdline</code></dt>
+ <dd>The contents of this element specify arguments to be passed to
+        the kernel (or installer) at boot time. This is often used to
+        specify an alternate primary console (eg serial port), or the
+        installation media source / kickstart file.</dd>
+ </dl>
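+
+    <p>
+      A paravirtualized guest can likewise be booted directly from a
+      kernel, simply omitting the <code>loader</code> element. A minimal
+      sketch (the kernel / initrd paths and kernel arguments here are
+      hypothetical):
+    </p>
+
+    <pre>
+      ...
+      <os>
+        <type>linux</type>
+        <kernel>/boot/vmlinuz-2.6-xen</kernel>
+        <initrd>/boot/initrd-2.6-xen</initrd>
+        <cmdline>root=/dev/xvda1</cmdline>
+      </os>
+      ...</pre>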
+
+ <h3><a name="elementsResources">Basic resources</a></h3>
+
+ <pre>
+ ...
+ <memory>524288</memory>
+ <currentMemory>524288</currentMemory>
+ <vcpu>1</vcpu>
+ ...</pre>
+
+ <dl>
+ <dt><code>memory</code></dt>
+ <dd>The maximum allocation of memory for the guest at boot time.
+        The units for this value are kilobytes.</dd>
+ <dt><code>currentMemory</code></dt>
+ <dd>The actual allocation of memory for the guest. This value
+ be less than the maximum allocation, to allow for ballooning
+ up the guests memory on the fly. If this is omitted, it defaults
+ to the same value as the <code>memory<code> element</dd>
+ <dt><code>vcpu</code></dt>
+ <dd>The content of this element defines the number of virtual
+ CPUs allocated for the guest OS.</dd>
+ </dl>
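+
+    <p>
+      For example, a guest could be started with only half of its
+      maximum memory committed, leaving room to balloon up later
+      (the values are illustrative, in kilobytes):
+    </p>
+
+    <pre>
+      ...
+      <memory>524288</memory>
+      <currentMemory>262144</currentMemory>
+      <vcpu>1</vcpu>
+      ...</pre>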
+
+ <h3><a name="elementsLifecycle">Lifecycle control</a></h3>
+
+ <p>
+      It is sometimes necessary to override the default actions taken
+ when a guest OS triggers a lifecycle operation. The following
+ collections of elements allow the actions to be specified. A
+ common use case is to force a reboot to be treated as a poweroff
+ when doing the initial OS installation. This allows the VM to be
+ re-configured for the first post-install bootup.
+ </p>
+
+ <pre>
+ ...
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>restart</on_crash>
+ ...</pre>
+
+ <dl>
+ <dt><code>on_poweroff</code></dt>
+ <dd>The content of this element specifies the action to take when
+ the guest requests a poweroff.</dd>
+      <dt><code>on_reboot</code></dt>
+ <dd>The content of this element specifies the action to take when
+ the guest requests a reboot.</dd>
+      <dt><code>on_crash</code></dt>
+ <dd>The content of this element specifies the action to take when
+ the guest crashes.</dd>
+ </dl>
+
+ <p>
+      Each of these lifecycle events allows for the same four possible actions.
+ </p>
+
+ <dl>
+ <dt><code>destroy</code></dt>
+ <dd>The domain will be terminated completely and all resources
+ released</dd>
+ <dt><code>restart</code></dt>
+ <dd>The domain will be terminated, and then restarted with
+ the same configuration</dd>
+ <dt><code>preserve</code></dt>
+      <dd>The domain will be terminated, and its resources preserved
+ to allow analysis.</dd>
+ <dt><code>rename-restart</code></dt>
+ <dd>The domain will be terminated, and then restarted with
+ a new name</dd>
+ </dl>
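+
+    <p>
+      For the installation use case mentioned above, the reboot that
+      the installer triggers when it finishes can be turned into a
+      poweroff like this (an illustrative sketch):
+    </p>
+
+    <pre>
+      ...
+      <on_poweroff>destroy</on_poweroff>
+      <on_reboot>destroy</on_reboot>
+      <on_crash>destroy</on_crash>
+      ...</pre>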
+
+ <h3><a name="elementsFeatures">Hypervisor features</a></h3>
+
+ <p>
+ Hypervisors may allow certain CPU / machine features to be
+ toggled on/off.
+ </p>
+
+ <pre>
+ ...
+ <features>
+ <pae/>
+ <acpi/>
+ <apic/>
+ </features>
+ ...</pre>
+
+ <p>
+      All features are listed within the <code>features</code>
+      element; omitting a togglable feature tag turns it off.
+ The available features can be found by asking
+ for the <a href="formatcaps.html">capabilities XML</a>,
+ but a common set for fully virtualized domains are:
+ </p>
+
+ <dl>
+ <dt><code>pae</code></dt>
+ <dd>Physical address extension mode allows 32-bit guests
+ to address more than 4 GB of memory.</dd>
+ <dt><code>acpi</code></dt>
+ <dd>ACPI is useful for power management, for example, with
+ KVM guests it is required for graceful shutdown to work.
+ </dd>
+ </dl>
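+
+    <p>
+      As an illustration of the toggle semantics, a guest that should
+      run with ACPI and APIC but without PAE would simply omit the
+      <code>pae</code> tag (a sketch):
+    </p>
+
+    <pre>
+      ...
+      <features>
+        <acpi/>
+        <apic/>
+      </features>
+      ...</pre>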
+
+ <h3><a name="elementsTime">Time keeping</a></h3>
+
+ <p>
+ The guest clock is typically initialized from the host clock.
+ Most operating systems expect the hardware clock to be kept
+ in UTC, and this is the default. Windows, however, expects
+      it to be in so-called 'localtime'.
+ </p>
+
+ <pre>
+ ...
+ <clock sync="localtime"/>
+ ...</pre>
+
+ <dl>
+ <dt><code>clock</code></dt>
+ <dd>The <code>sync</code> attribute takes either "utc" or
+ "localtime" to specify how the guest clock is initialized
+ in relation to the host OS.
+ </dd>
+ </dl>
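+
+    <p>
+      The default can also be stated explicitly:
+    </p>
+
+    <pre>
+      ...
+      <clock sync="utc"/>
+      ...</pre>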
+
+ <h3><a name="elementsDevices">Devices</a></h3>
+
+ <p>
+      The final set of XML elements are all used to describe devices
+ provided to the guest domain. All devices occur as children
+ of the main <code>devices</code> element.
+ <span class="since">Since 0.1.3</span>
+ </p>
+
+ <pre>
+ ...
+ <devices>
+ <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
+ ...</pre>
+
+ <dl>
+ <dt><code>emulator</code></dt>
+ <dd>
+ The contents of the <code>emulator</code> element specify
+ the fully qualified path to the device model emulator binary.
+ The <a href="formatcaps.html">capabilities XML</a> specifies
+ the recommended default emulator to use for each particular
+ domain type / architecture combination.
+ </dd>
+ </dl>
+
+ <h4><a name="elementsDisks">Hard drives, floppy disks, CDROMs</a></h4>
+
+ <p>
+ Any device that looks like a disk, be it a floppy, harddisk,
+      cdrom, or paravirtualized driver, is specified via the <code>disk</code>
+ element.
+ </p>
+
+ <pre>
+ ...
+ <disk type='file'>
+      <driver name="tap" type="aio"/>
+ <source file='/var/lib/xen/images/fv0'/>
+ <target dev='hda' bus='ide'/>
+ </disk>
+ ...</pre>
+
+ <dl>
+ <dt><code>disk</code></dt>
+ <dd>The <code>disk</code> element is the main container for describing
+ disks. The <code>type</code> attribute is either "file" or "block"
+ and refers to the underlying source for the disk. The optional
+ <code>device</code> attribute indicates how the disk is to be exposed
+ to the guest OS. Possible values for this attribute are "floppy", "disk"
+ and "cdrom", defaulting to "disk".
+ <span class="since">Since 0.0.3; "device" attribute since 0.1.4</span></dd>
+ <dt><code>source</code></dt>
+ <dd>If the disk <code>type</code> is "file", then the <code>file</code> attribute
+ specifies the fully-qualified path to the file holding the disk. If the disk
+ <code>type</code> is "block", then the <code>dev</code> attribute specifies
+ the path to the host device to serve as the disk. <span class="since">Since 0.0.3</span></dd>
+ <dt><code>target</code></dt>
+ <dd>The <code>target</code> element controls the bus / device under which the
+ disk is exposed to the guest OS. The <code>dev</code> attribute indicates
+ the "logical" device name. The actual device name specified is not guarenteed to map to
+ the device name in the guest OS. Treat it as a device ordering hint.
+ The optional <code>bus</code> attribute specifies the type of disk device
+ to emulate; possible values are driver specific, with typical values being
+ "ide", "scsi", "virtio", "xen". If omitted, the bus type is inferred from
+ the style of the device name. eg, a device named 'sda' will typically be
+ exported using a SCSI bus.
+ <span class="since">Since 0.0.3; <code>bus</code> attribute since 0.4.3</span></dd>
+ <dt><code>driver</code></dt>
+ <dd>If the hypervisor supports multiple backend drivers, then the optional
+ <code>driver</code> element allows them to be selected. The <code>name</code>
+ attribute is the primary backend driver name, while the optional <code>type</code>
+ attribute provides the sub-type. <span class="since">Since 0.1.8</span>
+ </dd>
+ </dl>
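+
+    <p>
+      As a further example, a read-only CDROM backed by a host block
+      device might be described as follows (the host device path is
+      illustrative):
+    </p>
+
+    <pre>
+      ...
+      <disk type='block' device='cdrom'>
+        <source dev='/dev/cdrom'/>
+        <target dev='hdc' bus='ide'/>
+        <readonly/>
+      </disk>
+      ...</pre>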
+
+ <h4><a name="elementsNICS">Network interfaces</a></h4>
+
+ <pre>
+ ...
+ <interface type='bridge'>
+ <source bridge='xenbr0'/>
+ <mac address='00:16:3e:5d:c7:9e'/>
+ <script path='vif-bridge'/>
+ </interface>
+ ...</pre>
+
+ <h5><a name="elementsNICSVirtual">Virtual network</a></h5>
+
+ <p>
+ <strong><em>
+ This is the recommended config for general guest connectivity on
+ hosts with dynamic / wireless networking configs
+ </em></strong>
+ </p>
+
+ <p>
+ Provides a virtual network using a bridge device in the host.
+ Depending on the virtual network configuration, the network may be
+ totally isolated, NAT'ing to an explicit network device, or NAT'ing to
+ the default route. DHCP and DNS are provided on the virtual network in
+ all cases and the IP range can be determined by examining the virtual
+ network config with '<code>virsh net-dumpxml [networkname]</code>'.
+      There is one virtual network called 'default' set up out
+      of the box which does NAT'ing to the default route and has an IP range of
+      <code>192.168.122.0/255.255.255.0</code>. Each guest will have an
+ associated tun device created with a name of vnetN, which can also be
+ overridden with the <target> element.
+ </p>
+
+ <pre>
+ ...
+ <interface type='network'>
+ <source network='default'/>
+ </interface>
+ ...
+ <interface type='network'>
+ <source network='default'/>
+ <target dev='vnet7'/>
+ <mac address="11:22:33:44:55:66"/>
+ </interface>
+ ...</pre>
+
+ <h5><a name="elementsNICSBridge">Bridge to to LAN</a></h5>
+
+ <p>
+ <strong><em>
+ This is the recommended config for general guest connectivity on
+ hosts with static wired networking configs
+ </em></strong>
+ </p>
+
+ <p>
+ Provides a bridge from the VM directly onto the LAN. This assumes
+      there is a bridge device on the host which has one or more of the host's
+ physical NICs enslaved. The guest VM will have an associated tun device
+ created with a name of vnetN, which can also be overridden with the
+ <target> element. The tun device will be enslaved to the bridge.
+ The IP range / network configuration is whatever is used on the LAN. This
+ provides the guest VM full incoming & outgoing net access just like a
+ physical machine.
+ </p>
+
+ <pre>
+ ...
+ <interface type='bridge'>
+ <source bridge='br0'/>
+ </interface>
+
+ <interface type='bridge'>
+ <source bridge='br0'/>
+ <target dev='vnet7'/>
+ <mac address="11:22:33:44:55:66"/>
+ </interface>
+ ...</pre>
+
+ <h5><a name="elementsNICSSlirp">Userspace SLIRP stack</a></h5>
+
+ <p>
+ Provides a virtual LAN with NAT to the outside world. The virtual
+ network has DHCP & DNS services and will give the guest VM addresses
+ starting from <code>10.0.2.15</code>. The default router will be
+ <code>10.0.2.2</code> and the DNS server will be <code>10.0.2.3</code>.
+ This networking is the only option for unprivileged users who need their
+ VMs to have outgoing access.
+ </p>
+
+ <pre>
+ ...
+ <interface type='user'/>
+ ...
+ <interface type='user'>
+ <mac address="11:22:33:44:55:66"/>
+ </interface>
+ ...</pre>
+
+
+ <h5><a name="elementsNICSEthernet">Generic ethernet connection</a></h5>
+
+ <p>
+ Provides a means for the administrator to execute an arbitrary script
+ to connect the guest's network to the LAN. The guest will have a tun
+ device created with a name of vnetN, which can also be overridden with the
+ <target> element. After creating the tun device a shell script will
+ be run which is expected to do whatever host network integration is
+ required. By default this script is called /etc/qemu-ifup but can be
+ overridden.
+ </p>
+
+ <pre>
+ ...
+ <interface type='ethernet'/>
+ ...
+ <interface type='ethernet'>
+ <target dev='vnet7'/>
+ <script path='/etc/qemu-ifup-mynet'/>
+ </interface>
+ ...</pre>
+
+ <h5><a name="elementsNICSMulticast">Multicast tunnel</a></h5>
+
+ <p>
+      A multicast group is set up to represent a virtual network. Any VMs
+ whose network devices are in the same multicast group can talk to each
+ other even across hosts. This mode is also available to unprivileged
+ users. There is no default DNS or DHCP support and no outgoing network
+ access. To provide outgoing network access, one of the VMs should have a
+ 2nd NIC which is connected to one of the first 4 network types and do the
+ appropriate routing. The multicast protocol is compatible with that used
+      by User Mode Linux guests too. The source address used must be from the
+ multicast address block.
+ </p>
+
+ <pre>
+ ...
+ <interface type='mcast'>
+ <source address='230.0.0.1' port='5558'/>
+ </interface>
+ ...</pre>
+
+ <h5><a name="elementsNICSTCP">TCP tunnel</a></h5>
+
+ <p>
+ A TCP client/server architecture provides a virtual network. One VM
+      provides the server end of the network, all other VMs are configured as
+ clients. All network traffic is routed between the VMs via the server.
+ This mode is also available to unprivileged users. There is no default
+ DNS or DHCP support and no outgoing network access. To provide outgoing
+ network access, one of the VMs should have a 2nd NIC which is connected
+ to one of the first 4 network types and do the appropriate routing.</p>
+
+ <pre>
+ ...
+ <interface type='server'>
+ <source address='192.168.0.1' port='5558'/>
+ </interface>
+ ...
+ <interface type='client'>
+ <source address='192.168.0.1' port='5558'/>
+ </interface>
+ ...</pre>
+
+
+ <h4><a name="elementsInput">Input devices</a></h4>
+
+ <p>
+ Input devices allow interaction with the graphical framebuffer in the guest
+ virtual machine. When enabling the framebuffer, an input device is automatically
+ provided. It may be possible to add additional devices explicitly, for example,
+ to provide a graphics tablet for absolute cursor movement.
+ </p>
+
+ <pre>
+ ...
+ <input type='mouse' bus='usb'/>
+ ...</pre>
+
+ <dl>
+ <dt><code>input</code></dt>
+      <dd>The <code>input</code> element has one mandatory attribute, the <code>type</code>
+ whose value can be either 'mouse' or 'tablet'. The latter provides absolute
+ cursor movement, while the former uses relative movement. The optional
+ <code>bus</code> attribute can be used to refine the exact device type.
+ It takes values "xen" (paravirtualized), "ps2" and "usb".</dd>
+ </dl>
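+
+    <p>
+      For example, a USB tablet for absolute cursor movement can be
+      added alongside the default pointing device:
+    </p>
+
+    <pre>
+      ...
+      <input type='tablet' bus='usb'/>
+      ...</pre>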
+
+
+ <h4><a name="elementsGraphics">Graphical framebuffers</a></h4>
+
+ <p>
+ A graphics device allows for graphical interaction with the
+ guest OS. A guest will typically have either a framebuffer
+ or a text console configured to allow interaction with the
+ admin.
+ </p>
+
+ <pre>
+ ...
+ <graphics type='vnc' port='5904'/>
+ ...</pre>
+
+ <dl>
+ <dt><code>graphics</code></dt>
+ <dd>The <code>graphics</code> element has a mandatory <code>type</code>
+ attribute which takes the value "sdl" or "vnc". The former displays
+ a window on the host desktop, while the latter activates a VNC server.
+      If the latter is used the <code>port</code> attribute specifies the
+      TCP port number (with -1 indicating that it should be auto-allocated).
+      The <code>listen</code> attribute is an IP address for the server to
+      listen on. The <code>passwd</code> attribute provides a VNC password
+ in clear text.</dd>
+ </dl>
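+
+    <p>
+      For instance, a VNC server with an auto-allocated port, listening
+      on all host interfaces and protected by a password, could be
+      configured as follows (the password value is obviously illustrative):
+    </p>
+
+    <pre>
+      ...
+      <graphics type='vnc' port='-1' listen='0.0.0.0' passwd='examplepass'/>
+      ...</pre>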
+
+ <h4><a name="elementsConsole">Consoles, serial & parallel devices</a></h4>
+
+ <p>
+ A character device provides a way to interact with the virtual machine.
+ Paravirtualized consoles, serial ports and parallel ports are all
+ classed as character devices and so represented using the same syntax.
+ </p>
+
+ <pre>
+ ...
+ <parallel type='pty'>
+ <source path='/dev/pts/2'/>
+ <target port='0'/>
+ </parallel>
+ <serial type='pty'>
+ <source path='/dev/pts/3'/>
+ <target port='0'/>
+ </serial>
+ <console type='pty'>
+ <source path='/dev/pts/4'/>
+ <target port='0'/>
+ </console>
+ </devices>
+ </domain></pre>
+
+
+ <dl>
+ <dt><code>parallel</code></dt>
+ <dd>Represents a parallel port</dd>
+ <dt><code>serial</code></dt>
+ <dd>Represents a serial port</dd>
+ <dt><code>console</code></dt>
+ <dd>Represents the primary console. This can be the paravirtualized
+ console with Xen guests, or duplicates the primary serial port
+ for fully virtualized guests without a paravirtualized console.</dd>
+ <dt><code>source</code></dt>
+ <dd>The attributes available for the <code>source</code> element
+ vary according to the <code>type</code> attribute on the parent
+ tag. Allowed variations will be described below</dd>
+ <dt><code>target</code></dt>
+      <dd>The port number of the character device is specified via the
+        <code>port</code> attribute, numbered starting from 0. There is
+        usually only one console device, and 0, 1 or 2 serial devices
+        or parallel devices.</dd>
+ </dl>
+
+ <h5><a name="elementsCharSTDIO">Domain logfile</a></h5>
+
+ <p>
+ This disables all input on the character device, and sends output
+      into the virtual machine's logfile.
+ </p>
+
+ <pre>
+ ...
+ <console type='stdio'>
+      <target port='1'/>
+ </console>
+ ...</pre>
+
+
+ <h5><a name="elementsCharFle">Device logfile</a></h5>
+
+ <p>
+ A file is opened and all data sent to the character
+ device is written to the file.
+ </p>
+
+ <pre>
+ ...
+ <serial type="file">
+ <source path="/var/log/vm/vm-serial.log"/>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+
+ <h5><a name="elementsCharVC">Virtual console</a></h5>
+
+ <p>
+ Connects the character device to the graphical framebuffer in
+ a virtual console. This is typically accessed via a special
+      hotkey sequence such as "ctrl+alt+3".
+ </p>
+
+ <pre>
+ ...
+ <serial type='vc'>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+
+ <h5><a name="elementsCharNull">Null device</a></h5>
+
+ <p>
+ Connects the character device to the void. No data is ever
+ provided to the input. All data written is discarded.
+ </p>
+
+ <pre>
+ ...
+ <serial type='null'>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+
+ <h5><a name="elementsCharPTY">Pseudo TTY</a></h5>
+
+ <p>
+ A Pseudo TTY is allocated using /dev/ptmx. A suitable client
+ such as 'virsh console' can connect to interact with the
+ serial port locally.
+ </p>
+
+ <pre>
+ ...
+ <serial type="pty">
+ <source path="/dev/pts/3"/>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+
+ <p>
+      NB special case if <console type='pty'>, then the TTY
+      path is also duplicated as an attribute tty='/dev/pts/3'
+      on the top level <console> tag. This provides compatibility
+      with the existing syntax for <console> tags.
+ </p>
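+
+    <p>
+      Such a console would therefore appear along these lines (a sketch;
+      the actual pty path is assigned when the guest starts):
+    </p>
+
+    <pre>
+      ...
+      <console type='pty' tty='/dev/pts/3'>
+        <source path='/dev/pts/3'/>
+        <target port='0'/>
+      </console>
+      ...</pre>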
+
+ <h5><a name="elementsCharHost">Host device proxy</a></h5>
+
+ <p>
+ The character device is passed through to the underlying
+ physical character device. The device types must match,
+ eg the emulated serial port should only be connected to
+      a host serial port - don't connect a serial port to a parallel
+ port.
+ </p>
+
+ <pre>
+ ...
+ <serial type="dev">
+ <source path="/dev/ttyS0"/>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+
+ <h5><a name="elementsCharTCP">TCP client/server</a></h5>
+
+ <p>
+ The character device acts as a TCP client connecting to a
+ remote server, or as a server waiting for a client connection.
+ </p>
+
+ <pre>
+ ...
+ <serial type="tcp">
+ <source mode="connect" host="0.0.0.0" service="2445"/>
+      <protocol type="telnet"/>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+
+ <h5><a name="elementsCharUDP">UDP network console</a></h5>
+
+ <p>
+ The character device acts as a UDP netconsole service,
+ sending and receiving packets. This is a lossy service.
+ </p>
+
+ <pre>
+ ...
+ <serial type="udp">
+ <source mode="bind" host="0.0.0.0" service="2445"/>
+ <source mode="connect" host="0.0.0.0" service="2445"/>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+
+ <h5><a name="elementsCharUNIX">UNIX domain socket client/server</a></h5>
+
+ <p>
+ The character device acts as a UNIX domain socket server,
+ accepting connections from local clients.
+ </p>
+
+ <pre>
+ ...
+ <serial type="unix">
+ <source mode="bind" path="/tmp/foo"/>
+ <target port="1"/>
+ </serial>
+ ...</pre>
+
+ <h2><a name="examples">Example configs</a></h2>
<p>
      Example configurations for each driver are provided on the
<xsl:template name="toc">\r
<ul>\r
<xsl:for-each select="/html/body/h2[count(a) = 1]">\r
- <xsl:variable name="thishead" select="."/>\r
+ <xsl:variable name="thish2" select="."/>\r
<li>\r
<a href="#{a/@name}"><xsl:value-of select="a/text()"/></a>\r
- <xsl:if test="count(./following-sibling::h3[preceding-sibling::h2[1] = $thishead and count(a) = 1]) > 0">\r
+ <xsl:if test="count(./following-sibling::h3[preceding-sibling::h2[1] = $thish2 and count(a) = 1]) > 0">\r
<ul>\r
- <xsl:for-each select="./following-sibling::h3[preceding-sibling::h2[1] = $thishead and count(a) = 1]">\r
- <xsl:variable name="thissubhead" select="."/>\r
+ <xsl:for-each select="./following-sibling::h3[preceding-sibling::h2[1] = $thish2 and count(a) = 1]">\r
+ <xsl:variable name="thish3" select="."/>\r
<li>\r
<a href="#{a/@name}"><xsl:value-of select="a/text()"/></a>\r
- <xsl:if test="count(./following-sibling::h4[preceding-sibling::h3[1] = $thissubhead and count(a) = 1]) > 0">\r
+ <xsl:if test="count(./following-sibling::h4[preceding-sibling::h3[1] = $thish3 and count(a) = 1]) > 0">\r
<ul>\r
- <xsl:for-each select="./following-sibling::h4[preceding-sibling::h3[1] = $thissubhead and count(a) = 1]">\r
+ <xsl:for-each select="./following-sibling::h4[preceding-sibling::h3[1] = $thish3 and count(a) = 1]">\r
+ <xsl:variable name="thish4" select="."/>\r
<li>\r
<a href="#{a/@name}"><xsl:value-of select="a/text()"/></a>\r
- <xsl:if test="count(./following-sibling::h5[preceding-sibling::h4[1] = $thissubhead and count(a) = 1]) > 0">\r
+ <xsl:if test="count(./following-sibling::h5[preceding-sibling::h4[1] = $thish4 and count(a) = 1]) > 0">\r
<ul>\r
- <xsl:for-each select="./following-sibling::h5[preceding-sibling::h4[1] = $thissubhead and count(a) = 1]">\r
+ <xsl:for-each select="./following-sibling::h5[preceding-sibling::h4[1] = $thish4 and count(a) = 1]">\r
+ <xsl:variable name="thish5" select="."/>\r
<li>\r
<a href="#{a/@name}"><xsl:value-of select="a/text()"/></a>\r
- <xsl:if test="count(./following-sibling::h6[preceding-sibling::h5[1] = $thissubhead and count(a) = 1]) > 0">\r
+ <xsl:if test="count(./following-sibling::h6[preceding-sibling::h5[1] = $thish5 and count(a) = 1]) > 0">\r
<ul>\r
- <xsl:for-each select="./following-sibling::h6[preceding-sibling::h5[1] = $thissubhead and count(a) = 1]">\r
+ <xsl:for-each select="./following-sibling::h6[preceding-sibling::h5[1] = $thish5 and count(a) = 1]">\r
<li>\r
<a href="#{a/@name}"><xsl:value-of select="a/text()"/></a>\r
</li>\r