--- /dev/null
+\documentclass[11pt,twoside,final,openright]{xenstyle}
+\usepackage{a4,graphicx,setspace}
+\setstretch{1.15}
+
+\begin{document}
+
+% TITLE PAGE
+\pagestyle{empty}
+\begin{center}
+\vspace*{\fill}
+\includegraphics{figs/xenlogo.eps}
+\vfill
+\vfill
+\vfill
+\begin{tabular}{l}
+{\Huge \bf Users' manual} \\[4mm]
+{\huge Xen v2.0 for x86} \\[80mm]
+
+{\Large Xen is Copyright (c) 2004, The Xen Team} \\[3mm]
+{\Large University of Cambridge, UK} \\[20mm]
+{\large Last updated on 26th October, 2004}
+\end{tabular}
+\vfill
+\end{center}
+\cleardoublepage
+
+% TABLE OF CONTENTS
+\pagestyle{plain}
+\pagenumbering{roman}
+{ \parskip 0pt plus 1pt
+ \tableofcontents }
+\cleardoublepage
+
+% PREPARE FOR MAIN TEXT
+\pagenumbering{arabic}
+\raggedbottom
+\widowpenalty=10000
+\clubpenalty=10000
+\parindent=0pt
+\renewcommand{\topfraction}{.8}
+\renewcommand{\bottomfraction}{.8}
+\renewcommand{\textfraction}{.2}
+\renewcommand{\floatpagefraction}{.8}
+\setstretch{1.15}
+
+\newcommand{\path}[1]{{\tt #1}}
+
+\part{Introduction and Tutorial}
+\chapter{Introduction}
+
+{\bf
+DISCLAIMER: This documentation is currently under active development
+and as such there may be mistakes and omissions --- watch out for
+these and please report any you find to the developers' mailing list.
+Contributions of material, suggestions and corrections are welcome.
+}
+
+Xen is a {\em paravirtualising} virtual machine monitor (VMM), or
+``Hypervisor'', for the x86 processor architecture.  Xen can securely
+multiplex heterogeneous virtual machines on a single physical machine
+with near-native performance.  The virtual machine technology
+facilitates enterprise-grade functionality, including:
+
+\begin{itemize}
+\item Virtual machines with close to native performance.
+\item Live migration of running virtual machines.
+\item Excellent hardware support (use unmodified Linux device drivers).
+\item Suspend to disk / resume from disk of running virtual machines.
+\item Transparent copy on write disks.
+\item Sandboxed, restartable device drivers.
+\item Pervasive debugging - debug whole OSes, from kernel to applications.
+\end{itemize}
+
+Xen support is available for a growing number of operating systems.
+The following OSs have either been ported already or a port is in
+progress:
+\begin{itemize}
+\item Linux 2.4
+\item Linux 2.6
+\item NetBSD 2.0
+\item Dragonfly BSD
+\item FreeBSD 5.3
+\item Plan 9
+% \item Windows XP
+\end{itemize}
+
+Right now, Linux 2.4, Linux 2.6 and NetBSD are available for Xen 2.0.
+It is intended that Xen support be integrated into the official
+releases of Linux 2.6, NetBSD 2.0, FreeBSD and Dragonfly BSD.
+
+Even running multiple copies of Linux can be very useful, since it
+provides a means of containing faults to one OS image, offers
+performance isolation between the various OS instances, and makes it
+easy to try out multiple distributions.
+
+% The Windows XP port is only available to those who have signed the
+% Microsoft Academic Source License. Publically available XP support
+% will not be available for the foreseeable future (this may change when
+% Intel's Vanderpool Technology becomes available).
+
+Possible usage scenarios for Xen include:
+\begin{description}
+\item [Kernel development] test and debug kernel modifications in a
+ sandboxed virtual machine --- no need for a separate test
+ machine
+\item [Multiple OS Configurations] run multiple operating systems
+ simultaneously, for instance for compatibility or QA purposes
+\item [Server consolidation] move multiple servers onto one box,
+              providing performance and fault isolation at virtual machine
+ boundaries
+\item [Cluster computing] improve manageability and efficiency by
+              running services in virtual machines, isolated from
+              machine specifics, and by load-balancing using live
+              migration
+\item [High availability computing] run device drivers in sandboxed
+ domains for increased robustness
+\item [Hardware support for custom OSes] export drivers from a
+ mainstream OS (e.g. Linux) with good hardware support
+ to your custom OS, avoiding the need for you to port existing
+ drivers to achieve good hardware support
+\end{description}
+
+\section{Structure}
+
+\subsection{High level}
+
+A Xen system has multiple layers. The lowest layer is Xen itself ---
+the most privileged piece of code in the system. On top of Xen run
+guest operating system kernels. These are scheduled pre-emptively by
+Xen. On top of these run the applications of the guest OSs. Guest
+OSs are responsible for scheduling their own applications within the
+time allotted to them by Xen.
+
+One of the domains --- {\em Domain 0} --- is privileged.  It is
+started by Xen at system boot and is responsible for initialising and
+managing the whole machine. Domain 0 builds other domains and manages
+their virtual devices. It also performs suspend, resume and
+migration of other virtual machines. Where it is used, the X server
+is also run in domain 0.
+
+Within Domain 0, a process called ``Xend'' runs to manage the system.
+Xend is responsible for managing virtual machines and providing access
+to their consoles. Commands are issued to Xend over an HTTP
+interface, either from a command-line tool or from a web browser.
+
+XXX need diagram(s) here to make this make sense
+
+\subsection{Paravirtualisation}
+
+Paravirtualisation allows very high performance virtual machine
+technology, even on architectures (like x86) which are traditionally
+hard to virtualise.
+
+Paravirtualisation requires guest operating systems to be {\em
+ported} to run on the VMM.  This process is similar to a port of an
+operating system to a new hardware platform.  Although operating
+system kernels must explicitly support Xen in order to run in a
+virtual machine, {\em user space applications and libraries do not
+require modification}.
+
+\section{Hardware Support}
+
+Xen currently runs on the x86 architecture, but could in principle be
+ported to others. In fact, it would have been rather easier to write
+Xen for pretty much any other architecture as x86 is particularly
+tricky to handle. A good description of Xen's design, implementation
+and performance is contained in the October 2003 SOSP paper, available
+at:\\
+{\tt http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf}\\
+Work to port Xen to x86\_64 and IA64 is currently underway.
+
+Xen requires a ``P6'' or newer processor (e.g. Pentium Pro, Celeron,
+Pentium II, Pentium III, Pentium IV, Xeon, AMD Athlon, AMD Duron).
+Multiprocessor machines are supported, and we also have basic support
+for HyperThreading (SMT), although this remains a topic for ongoing
+research. We're also working on an x86\_64 port (though Xen already
+runs on these systems just fine in 32-bit mode).
+
+Xen can currently use up to 4GB of memory. It is possible for x86
+machines to address up to 64GB of physical memory but (unless an
+external developer volunteers) there are no plans to support these
+systems. The x86\_64 port is the planned route to supporting more
+than 4GB of memory.
+
+Xen offloads most of the hardware support issues to the guest OS
+running in Domain 0. Xen itself only contains code to detect and
+start additional processors, setup interrupt routing and perform PCI
+bus enumeration. Device drivers run within a privileged guest OS
+rather than within Xen itself. This means that we should be
+compatible with the majority of device hardware supported by Linux.
+The default XenLinux build contains support for relatively modern
+server-class network and disk hardware, but you can add support for
+other hardware by configuring your XenLinux kernel in the normal way
+(e.g. \verb_# make ARCH=xen menuconfig_).
+
+\section{History}
+
+
+``Xen'' is a Virtual Machine Monitor (VMM) originally developed by the
+Systems Research Group of the University of Cambridge Computer
+Laboratory, as part of the UK-EPSRC funded XenoServers project.
+
+The XenoServers project aims to provide a ``public infrastructure for
+global distributed computing'', and Xen plays a key part in that,
+allowing us to efficiently partition a single machine to enable
+multiple independent clients to run their operating systems and
+applications in an environment providing protection, resource
+isolation and accounting. The project web page contains further
+information along with pointers to papers and technical reports:
+{\tt http://www.cl.cam.ac.uk/xeno}
+
+Xen has since grown into a project in its own right, enabling us to
+investigate interesting research issues regarding the best techniques
+for virtualizing resources such as the CPU, memory, disk and network.
+The project has been bolstered by support from Intel Research
+Cambridge, and HP Labs, who are now working closely with us.
+% We're also in receipt of support from Microsoft Research Cambridge to
+% port Windows XP to run on Xen.
+
+Xen was first described in the 2003 paper at SOSP \\
+({\tt http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf}).
+The first public release of Xen (1.0) was made in October 2003. Xen
+was developed as a research project by the University of Cambridge
+Computer Laboratory (UK). Xen was the first Virtual Machine Monitor
+to make use of {\em paravirtualisation} to achieve near-native
+performance virtualisation of commodity operating systems. Since
+then, Xen has been extensively developed and is now used in production
+scenarios on multiple sites.
+
+Xen 2.0 is the latest release, featuring greatly enhanced hardware
+support, configuration flexibility, usability and a larger complement
+of supported operating systems. We think that Xen has the potential
+to become {\em the} definitive open source virtualisation solution and
+will work to conclusively achieve that position.
+
+
+\chapter{Installation}
+
+The Xen distribution includes three main components: Xen itself,
+utilities to convert a standard Linux tree to run on Xen and the
+userspace tools required to operate a Xen-based system.
+
+This manual describes how to install the Xen 2.0 distribution from
+source. Alternatively, there may be packages available for your
+operating system distribution.
+
+\section{Prerequisites}
+\label{sec:prerequisites}
+\begin{itemize}
+\item An i686-class CPU or newer.
+\item A working installation of your favourite Linux distribution.
+\item A working installation of the GRUB bootloader.
+\item An installation of Twisted v1.3 or above (see {\tt
+http://www.twistedmatrix.com}). There may be a package available for
+your distribution; alternatively it can be installed by running {\tt \#
+make install-twisted} in the root of the Xen source tree.
+\item The Python logging package (see {\tt http://www.red-dove.com/}),
+for Xend logging.
+\item The Linux bridge control tools (see {\tt
+http://bridge.sourceforge.net}). There may be packages of these tools
+available for your distribution.
+\item Linux IP Routing Tools
+\item make
+\item gcc
+\item libcurl
+\item zlib-dev
+\item python-dev
+\item python2.3-pycurl
+\item python2.3-twisted
+\end{itemize}
+
+\section{Install BitKeeper (Optional)}
+
+To fetch a local copy of the repository, first download the BitKeeper
+tools.  Download instructions can be obtained by filling out the form
+at: \\ {\tt
+http://www.bitmover.com/cgi-bin/download.cgi }
+
+The BitKeeper install program is designed to be run with X. If X is
+not available, you can specify the install directory on the command
+line.
+
+\section{Download the Xen source code}
+
+\subsection{Using BitKeeper}
+
+The public master BK repository for the 2.0 release lives at: \\
+{\tt bk://xen.bkbits.net/xen-2.0.bk}.  You can use BitKeeper to
+download it and keep it updated with the latest features and fixes.
+
+Change to the directory in which you want to put the source code, then
+run:
+\begin{verbatim}
+# bk clone bk://xen.bkbits.net/xen-2.0.bk
+\end{verbatim}
+
+Under your current directory, a new directory named `xen-2.0.bk' has
+been created, which contains all the source code for the Xen
+hypervisor and the Xen tools. The directory also contains `sparse' OS
+source trees, containing only the files that require changes to allow
+the OS to run on Xen.
+
+Once you have cloned the repository, you can update to the newest
+changes to the repository by running:
+\begin{verbatim}
+# cd xen-2.0.bk # to change into the local repository
+# bk pull # to update the repository
+\end{verbatim}
+
+\subsection{Without BitKeeper}
+
+The Xen source tree is also available in gzipped tarball form from the
+Xen downloads page:\\
+{\tt http://www.cl.cam.ac.uk/Research/SRG/netos/xen/downloads.html}.
+Prebuilt tarballs are also available from this page but are very large.
+
+\section{The distribution}
+
+The Xen source code repository is structured as follows:
+
+\begin{description}
+\item[\path{tools/}] Xen node controller daemon (Xend), command line tools,
+ control libraries
+\item[\path{xen/}] The Xen hypervisor itself.
+\item[\path{linux-2.4.27-xen-sparse/}] Xen support for Linux 2.4
+\item[\path{linux-2.6.9-xen-sparse/}] Xen support for Linux 2.6
+\item[\path{linux-2.6.9-patches/}] Experimental patches for Linux 2.6
+\item[\path{netbsd-2.0-xen-sparse/}] Xen support for NetBSD 2.0
+\item[\path{docs/}] Various documentation files for users and developers
+\item[\path{extras/}] Currently contains the Mini OS, aimed at developers
+\end{description}
+
+\section{Build and install}
+
+The Xen makefile includes a target ``world'' that will do the
+following:
+
+\begin{itemize}
+\item Build Xen
+\item Build the control tools, including Xend
+\item Download (if necessary) and unpack the Linux 2.6 source code,
+ and patch it for use with Xen
+\item Build a Linux kernel to use in domain 0 and a smaller
+ unprivileged kernel, which can optionally be used for
+ unprivileged virtual machines.
+\end{itemize}
+
+Inspect the Makefile if you want to see what goes on during a build.
+Building Xen and the tools is straightforward, but XenLinux is more
+complicated. The makefile needs a `pristine' linux kernel tree which
+it will then add the Xen architecture files to. You can tell the
+makefile the location of the appropriate linux compressed tar file by
+setting the LINUX\_SRC environment variable, e.g. \\
+\verb!# LINUX_SRC=/tmp/linux-2.6.8.1.tar.bz2 make world! \\ or by
+placing the tar file somewhere in the search path of {\tt
+LINUX\_SRC\_PATH} which defaults to ``{\tt .:..}''.  If the makefile
+can't find a suitable kernel tar file it attempts to download it from
+kernel.org (this won't work if you're behind a firewall).
+
+After untarring the pristine kernel tree, the makefile uses the {\tt
+mkbuildtree} script to add the Xen patches to the kernel. It then
+builds two different XenLinux images, one with a ``-xen0'' extension
+which contains hardware device drivers and drivers for Xen's virtual
+devices, and one with a ``-xenU'' extension that just contains the
+virtual ones.
+
+The procedure for building the Linux 2.4 port is similar: \\
+\verb!# LINUX_SRC=/path/to/linux2.4/source make linux24!
+
+The NetBSD port can be built using: \\ \verb!# make netbsd! \\ The
+NetBSD port is built using a snapshot of the netbsd-2-0 cvs branch.
+The snapshot is downloaded as part of the build process, if it is not
+yet present in the {\tt NETBSD\_SRC\_PATH} search path. The build
+process also downloads a toolchain which includes all the tools
+necessary to build the NetBSD kernel under Linux.
+
+If you have an SMP machine you may wish to give the {\tt '-j4'}
+argument to make to get a parallel build.
+
+If you have an existing Linux kernel configuration that you would like
+to use for domain 0, you should copy it to
+install/boot/config-2.6.8.1-xen0. During the first build, you may be
+asked about some Xen-specific options.  We advise accepting the
+defaults for these options.
+
+\framebox{\parbox{5in}{
+{\bf Distro specific:} \\
+{\it Gentoo} --- if not using udev (most installations, currently), you'll need
+to enable devfs and devfs mount at boot time in the xen0 config.
+}}
+
+The files produced by the build process are stored under the
+\path{install/} directory. To install them in their default
+locations, do: \\
+\verb_# make install_
+
+Alternatively, users with special installation requirements may wish
+to install them manually by copying the files to their appropriate
+destinations.
+
+Files in \path{install/boot/} include:
+\begin{itemize}
+\item \path{install/boot/xen.gz} The Xen `kernel'
+\item \path{install/boot/vmlinuz-2.6.8.1-xen0} Domain 0 XenLinux kernel
+\item \path{install/boot/vmlinuz-2.6.8.1-xenU} Unprivileged XenLinux kernel
+\end{itemize}
+
+The difference between the two Linux kernels that are built is due to
+the configuration file used for each.  The ``U''-suffixed unprivileged
+version doesn't contain any of the physical hardware device drivers
+--- it is 30\% smaller and hence may be preferred for your
+non-privileged domains. The ``0'' suffixed privileged version can be
+used to boot the system, as well as in driver domains and unprivileged
+domains.
+
+The \path{install/boot} directory will also contain the config files
+used for building the XenLinux kernels, and also versions of Xen and
+XenLinux kernels that contain debug symbols (\path{xen-syms} and
+\path{vmlinux-syms-2.6.8.1-xen0}) which are essential for interpreting crash
+dumps. Retain these files as the developers may wish to see them if
+you post on the mailing list.
+
+\section{Configuration}
+
+\subsection{GRUB Configuration}
+
+An entry should be added to \path{grub.conf} (often found under
+\path{/boot/} or \path{/boot/grub/}) to allow Xen / XenLinux to boot.
+This file is sometimes called \path{menu.lst}, depending on your
+distribution. The entry should look something like the following:
+
+\begin{verbatim}
+title Xen 2.0 / XenLinux 2.6.8.1
+ kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1
+ module /boot/vmlinuz-2.6.8.1-xen0 root=/dev/sda4 ro console=tty0 console=ttyS0
+\end{verbatim}
+
+The first line of the configuration (kernel...) tells GRUB where to
+find Xen itself and what boot parameters should be passed to it (in
+this case, setting domain 0's memory allocation and the settings for
+the serial port).
+
+The second line of the configuration describes the location of the
+XenLinux kernel that Xen should start and the parameters that should
+be passed to it (these are standard Linux parameters, identifying the
+root device and specifying it be initially mounted read only and
+instructing that console output be sent both to the screen and to the
+serial port).
+
+If you want to use an initrd, just add another {\tt module} line to
+the configuration, as usual:
+\begin{verbatim}
+  module /boot/my_initrd.gz
+\end{verbatim}
+
+As always when installing a new kernel, it is recommended that you do
+not remove the original contents of \path{menu.lst} --- you may want
+to boot up with your old Linux kernel in future, particularly if you
+have problems.
+
+\framebox{\parbox{5in}{
+{\bf Distro specific:} \\
+{\it SuSE} --- Omit the {\tt ro} option from the XenLinux kernel
+command line, since the partition won't be remounted rw during boot.
+}}
+
+\subsection{Serial Console}
+
+In order to configure serial console output, it is necessary to add a
+line into \path{/etc/inittab}. The XenLinux console driver is
+designed to make this procedure the same as configuring a normal
+serial console. Add the line:
+
+{\tt c:2345:respawn:/sbin/mingetty ttyS0}
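
In addition, if you want to be able to log in as root over the serial
line and your distribution performs the conventional {\tt securetty}
check at login (an assumption --- this depends on your login / PAM
configuration), the serial tty must be listed in \path{/etc/securetty},
one tty name per line:

```
ttyS0
```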
+
+\subsection{TLS Libraries}
+
+Users of the XenLinux 2.6 kernel should disable Thread Local Storage
+(e.g. by doing a {\tt mv /lib/tls /lib/tls.disabled}) before
+attempting to run with a XenLinux kernel. You can always reenable it
+by restoring the directory to its original location (i.e. {\tt mv
+ /lib/tls.disabled /lib/tls}).
+
+The TLS implementation uses segmentation in a way that is not
+permissible under Xen.  If TLS is not disabled, an emulation mode is
+used within Xen which reduces performance substantially and is not
+guaranteed to work perfectly.
+
+\section{Test the new install}
+
+It should now be possible to restart the system and use Xen. Reboot
+as usual but choose the new Xen option when the Grub screen appears.
+
+What follows should look much like a conventional Linux boot. The
+first portion of the output comes from Xen itself, supplying low level
+information about itself and the machine it is running on. The
+following portion of the output comes from XenLinux.
+
+You may see some errors during the XenLinux boot. These are not
+necessarily anything to worry about --- they may result from kernel
+configuration differences between your XenLinux kernel and the one you
+usually use.
+
+When the boot completes, you should be able to log into your system as
+usual. If you are unable to log in to your system running Xen, you
+should still be able to reboot with your normal Linux kernel.
+
+
+\chapter{Starting a domain}
+
+The first step in creating a new domain is to prepare a root
+filesystem for it to boot off. Typically, this might be stored in a
+normal partition, an LVM or other volume manager partition, a disk
+file or on an NFS server.
+
+A simple way to do this is to boot from your standard OS
+install CD and install the distribution into another partition on your
+hard drive.
+
+{\em N.B.:} you can boot with Xen and XenLinux without installing any
+special userspace tools but will need to have the prerequisites
+described in Section~\ref{sec:prerequisites} and the Xen control tools
+installed before you proceed.
+
+\section{From the web interface}
+
+Boot the Xen machine and start Xensv (see Chapter~\ref{cha:xensv} for
+more details) using the command: \\
+\verb_# xensv start_ \\
+This will also start Xend (see Chapter~\ref{cha:xend} for more information).
+
+The domain management interface will then be available at {\tt
+http://your\_machine:8080/}. This provides a user friendly wizard for
+starting domains and functions for managing running domains.
+
+\section{From the command line}
+
+Full details of the {\tt xm} tool are found in Chapter~\ref{cha:xm}.
+
+This example explains how to use the \path{xmdefconfig} file. If you
+require a more complex setup, you will want to write a custom
+configuration file --- details of the configuration file formats are
+included in Chapter~\ref{cha:config}.
+
+The \path{xmexample1} file is a simple template configuration file
+for describing a single VM.
+
+The \path{xmexample2} file is a template description that is intended
+to be reused for multiple virtual machines. Setting the value of the
+{\tt vmid} variable on the {\tt xm} command line
+fills in parts of this template.
+
+Both of them can be found in \path{/etc/xen/}.
+
+\subsection{Editing \path{xmdefconfig}}
+
+At minimum, you should edit the following variables in \path{/etc/xen/xmdefconfig}:
+
+\begin{description}
+\item[kernel] Set this to the path of the kernel you compiled for use
+ with Xen. [e.g. {\tt kernel =
+ '/root/xen-2.0.bk/install/boot/vmlinuz-2.4.27-xenU'}]
+\item[memory] Set this to the size of the domain's memory in
+megabytes. [e.g. {\tt memory = 64 } ]
+\item[disk] Set the first entry in this list to calculate the offset
+of the domain's root partition, based on the domain ID. Set the
+second to the location of \path{/usr} (if you are sharing it between
+domains). [i.e. {\tt disk = ['phy:your\_hard\_drive\%d,sda1,w' \%
+(base\_partition\_number + vmid), 'phy:your\_usr\_partition,sda6,r' ]}]
+\item[dhcp] Uncomment the dhcp variable, so that the domain will
+receive its IP address from a DHCP server. [i.e. \verb_dhcp="dhcp"_]
+\end{description}
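
Since the configuration file is interpreted as Python, the
\%-substitution in the {\tt disk} line can be checked directly.  A
small sketch with hypothetical values (the drive name {\tt hda} and the
partition numbers are illustrative assumptions, not part of your actual
setup):

```python
# Hypothetical layout: domain root partitions start at hda6, /usr lives on hda7
base_partition_number = 5
vmid = 1  # normally supplied on the xm command line, e.g. xm create vmid=1

# The %-substitution picks the root partition for this particular domain
disk = ['phy:hda%d,sda1,w' % (base_partition_number + vmid),
        'phy:hda7,sda6,r']

print(disk[0])  # the writable root disk entry for domain vmid=1
```

So with these values domain 1 sees \path{hda6} as its root disk
(\path{sda1} inside the domain) and the shared \path{hda7} read-only as
\path{sda6}.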
+
+You may also want to edit the {\bf vif} variable in order to choose
+the MAC address of the virtual ethernet interface yourself. For
+example: \\ \verb_vif = ['mac=00:06:AA:F6:BB:B3']_\\ If you do not set
+this variable, Xend will automatically generate a random MAC address
+from an unused range.
+
+If you don't have a \path{xmdefconfig} file, simply create your own
+by copying one of the \path{/etc/xen/xmexample} files.
+\subsection{Starting the domain}
+
+The {\tt xm} tool provides a variety of commands for managing domains.
+Use the {\tt create} command to start new domains.  For example, to
+start the virtual machine with virtual machine ID 1:
+
+\begin{verbatim}
+# xm create -c vmid=1
+\end{verbatim}
+
+The {\tt -c} switch causes {\tt xm} to turn into the domain's console
+after creation. The {\tt vmid=1} sets the {\tt vmid} variable used in
+the {\tt xmdefconfig} file. The tool uses the
+\path{/etc/xen/xmdefconfig} file, since no custom configuration file
+was specified on the command line.
+
+\section{Example: ttylinux}
+
+Ttylinux is a very small Linux distribution, designed to
+require very few resources. We will use it as a concrete example of
+how to start a Xen domain. Most users will probably want to install a
+more complex mainstream distribution once they have mastered the
+basics.
+
+\begin{enumerate}
+\item Download the ttylinux disk image from XXX where from?
+\item Create a configuration file like the following:
+\begin{verbatim}
+kernel = "/boot/vmlinuz-2.6.8.1-xenU" # or a 2.4 kernel or a xen0 kernel
+memory = 64
+name = "ttylinux"
+cpu = -1 # leave to Xen to pick
+nics = 1
+ip = "1.2.3.4"
+disk = ['file:/path/to/ttylinux-disk,sda1,w']
+root = "/dev/sda1 ro"
+\end{verbatim}
+\item Now start the domain and connect to its console:
+\begin{verbatim}
+xm create -f configfile -c
+\end{verbatim}
+\item Login as root, password root.
+\end{enumerate}
+
+\section{Starting / Stopping domains automatically}
+
+It is possible to have certain domains start automatically at boot
+time and to have dom0 wait for all running domains to shutdown before
+it shuts down the system.
+
+To specify that a domain should start at boot time, place its
+configuration file (or a link to it) under \path{/etc/xen/auto/}.
+
+A Sys-V style init script for RedHat and LSB-compliant systems is
+provided and will be automatically copied to \path{/etc/init.d/}
+during install.  You can then enable it in the appropriate way for your
+distribution.
+
+For instance, on RedHat:
+
+\verb_# chkconfig --add xendomains_
+
+By default, this will start the boot-time domains in runlevels 3, 4
+and 5.
+
+You can also use the {\tt service} command to run this script manually, e.g:
+
+\verb_# service xendomains start_
+
+This starts all the domains with config files under \path{/etc/xen/auto/}.
+
+\verb_# service xendomains stop_
+
+This shuts down ALL running Xen domains.
+
+
+\chapter{Domain management tasks}
+
+The previous chapter described a simple example of how to configure
+and start a domain. This chapter summarises the tools available to
+manage running domains.
+
+\section{Command line management}
+
+Command line management tasks are also performed using the {\tt xm}
+tool. For online help for the commands available, type:\\
+\verb_# xm help_
+
+\subsection{Basic management commands}
+
+The most important {\tt xm} commands are: \\
+\verb_# xm list_ : Lists all domains running. \\
+\verb_# xm consoles_ : Gives information about the domain consoles. \\
+\verb_# xm console_ : Opens a console to a domain,
+e.g. \verb_# xm console 1_ (opens a console to domain 1).
+
+\subsection{\tt xm list}
+
+The output of {\tt xm list} is in rows of the following format:\\
+\verb_name domid memory cpu state cputime console_
+
+\begin{description}
+\item[name] The descriptive name of the virtual machine.
+\item[domid] The domain ID this virtual machine is running in.
+\item[memory] Memory size in megabytes.
+\item[cpu] The CPU this domain is running on.
+\item[state] Domain state consists of 5 fields:
+ \begin{description}
+ \item[r] running
+ \item[b] blocked
+ \item[p] paused
+ \item[s] shutdown
+ \item[c] crashed
+ \end{description}
+\item[cputime] How much CPU time (in seconds) the domain has used so far.
+\item[console] TCP port accepting connections to the domain's console.
+\end{description}
+
+The {\tt xm list} command also supports a long output format when the
+{\tt -l} switch is used.  This outputs the full details of the
+running domains in Xend's SXP configuration format.
+
+For example, suppose the system is running the ttylinux domain as
+described earlier. The list command should produce output somewhat
+like the following:
+\begin{verbatim}
+# xm list
+Name Id Mem(MB) CPU State Time(s) Console
+Domain-0 0 251 0 r---- 172.2
+ttylinux 5 63 0 -b--- 3.0 9605
+\end{verbatim}
+
+Here we can see the details for the ttylinux domain, as well as for
+domain 0 (which of course is always running). Note that the console
+port for the ttylinux domain is 9605. This can be connected to by TCP
+using a terminal program (e.g. {\tt telnet} or, better, {\tt
+xencons}). The simplest way to connect is to use the {\tt xm console}
+command, specifying the domain name or ID. To connect to the console
+of the ttylinux domain, we could use:
+\begin{verbatim}
+# xm console ttylinux
+\end{verbatim}
+or:
+\begin{verbatim}
+# xm console 5
+\end{verbatim}
+
+\chapter{Other kinds of storage}
+
+It is possible to directly export any Linux block device to a virtual
+machine, or to export filesystems / devices to virtual machines using
+standard network protocols (e.g. NBD, iSCSI, NFS, etc.).  This chapter
+covers some of the possibilities.
+
+\section{File-backed virtual block devices}
+
+It is possible to use a file in Domain 0 as the primary storage for a
+virtual machine. As well as being convenient, this also has the
+advantage that the virtual block device will be {\em sparse} --- space
+will only actually be allocated as parts of the file are used.  So if
+a virtual machine uses only half of its disk space, the file will take
+up only half of the size allocated.
+
+For example, to create a 2GB sparse file-backed virtual block device
+(actually only consumes 1KB of disk):
+
+\verb_# dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1_
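
The {\tt seek} skips over 2048k one-kilobyte blocks and only the final
1KB block is actually written, so almost no real disk space is
consumed.  You can verify this by comparing the file's apparent size
with the space actually allocated (a sketch using GNU coreutils):

```shell
# Create the sparse 2GB image: seek over 2048k one-kilobyte blocks,
# then write a single 1KB block at the very end.
dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1
# Apparent size (just over 2GB) vs space actually consumed (a few KB):
ls -lh vm1disk
du -h vm1disk
```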
+
+Make a file system in the disk file: \\
+\verb_# mkfs -t ext3 vm1disk_
+
+(when the tool asks for confirmation, answer `y')
+
+Populate the file system e.g. by copying from the current root:
+\begin{verbatim}
+# mount -o loop vm1disk /mnt
+# cp -ax / /mnt
+\end{verbatim}
+Tailor the file system by editing \path{/etc/fstab},
+\path{/etc/hostname}, etc (don't forget to edit the files in the
+mounted file system, instead of your domain 0 filesystem, e.g. you
+would edit \path{/mnt/etc/fstab} instead of \path{/etc/fstab}).  For
+this example, set the root device in fstab to \path{/dev/sda1}.
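
For instance, assuming an ext3 root on \path{/dev/sda1} as in this
example, the root line of \path{/mnt/etc/fstab} might read (the mount
options shown are illustrative defaults):

```
/dev/sda1  /  ext3  defaults  1  1
```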
+
+Now unmount (this is important!):\\
+\verb_# umount /mnt_
+
+In the configuration file set:\\
+\verb_disk = ['file:/full/path/to/vm1disk,sda1,w']_
+
+As the virtual machine writes to its `disk', the sparse file will be
+filled in and consume more space up to the original 2GB.
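The sparse behaviour can be demonstrated with a short Python sketch (the file name and 16MB size are illustrative; measuring allocation via {\tt st\_blocks} assumes a filesystem with sparse-file support):

```python
import os
import tempfile

def make_sparse(path, size_bytes):
    # Equivalent of the dd invocation above: seek past the end of
    # the file and write a single byte, leaving a hole behind it.
    with open(path, "wb") as f:
        f.seek(size_bytes - 1)
        f.write(b"\0")

def sizes(path):
    st = os.stat(path)
    # st_size is the apparent size; st_blocks counts the 512-byte
    # blocks actually allocated on disk.
    return st.st_size, st.st_blocks * 512

path = os.path.join(tempfile.mkdtemp(), "vm1disk")
make_sparse(path, 16 * 1024 * 1024)
apparent, allocated = sizes(path)
# On filesystems with sparse-file support, 'allocated' stays tiny
# until data is actually written into the file.
```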
+
++\section{NFS Root}
++
++The procedure for using NFS root in a virtual machine is basically the
++same as you would follow for a real machine. NB. the Linux NFS root
++implementation is known to have stability problems under high load
++(this is not a Xen-specific problem), so this configuration may not be
++appropriate for critical servers.
++
++First, populate a root filesystem in a directory on the server machine
++--- this can be on another physical machine, or perhaps just another
++virtual machine on the same node.
++
++Now, configure the NFS server to export this filesystem over the
++network by adding a line to \path{/etc/exports}, for instance:
++
++\begin{verbatim}
++/export/vm1root w.x.y.z/m(rw,sync,no_root_squash)
++\end{verbatim}
++
++Finally, configure the domain to use NFS root. In addition to the
++normal variables, you should make sure to set the following values in
++the domain's configuration file:
++
++\begin{verbatim}
++root = '/dev/nfs'
++nfs_server = 'a.b.c.d' # Substitute the IP for the server here
++nfs_root = '/path/to/root' # Path to root FS on the server machine
++\end{verbatim}
++
++The domain will need network access at boot-time, so either statically
++configure an IP address (using the config variables {\tt ip}, {\tt
++netmask}, {\tt gateway}, {\tt hostname}) or enable DHCP ({\tt
++dhcp='dhcp'}).
++
+\section{LVM-backed virtual block devices}
+
+LVM logical volumes can be exported to a virtual machine in the same
+way as any other block device, using the {\tt phy:} prefix in the
+domain's {\tt disk} configuration line, for instance {\tt disk =
+['phy:myvg/myvol,sda1,w']} (the volume group and volume names here
+are illustrative). Consult the LVM documentation for details of
+creating and managing logical volumes.
+
+\part{Quick Reference}
+
+\chapter{Domain Configuration Files}
+\label{cha:config}
+
+
+Xen configuration files contain the following standard variables:
+
+\begin{description}
+\item[kernel] Path to the kernel image (on the server).
+\item[ramdisk] Path to a ramdisk image (optional).
+% \item[builder] The name of the domain build function (e.g. {\tt'linux'} or {\tt'netbsd'}.
+\item[memory] Memory size in megabytes.
+\item[cpu] CPU to assign this domain to.
+\item[nics] Number of virtual network interfaces.
+\item[vif] List of MAC addresses (random addresses are assigned if not given).
+\item[disk] Regions of disk to export to the domain.
+\item[dhcp] Set to {\tt 'dhcp'} if you want the IP address to be allocated via DHCP.
+\item[netmask] IP netmask.
+\item[gateway] IP address for the gateway (if any).
+\item[hostname] Set the hostname for the virtual machine.
+\item[root] Set the root device.
+\item[nfs\_server] IP address for the NFS server.
+\item[nfs\_root] Path of the root filesystem on the NFS server.
+\item[extra] Extra string to append to the kernel command line.
+\item[restart] Three possible options:
+  \begin{description}
+  \item[always] Always restart the domain, no matter what
+       its exit code is.
+  \item[never] Never restart the domain.
+  \item[onreboot] Restart the domain only if it requests a reboot.
+  \end{description}
+\end{description}
+
+It is also possible to include Python scripting commands in
+configuration files. This is done in the \path{xmdefconfig} file in
+order to handle the {\tt vmid} variable.
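For concreteness, a minimal configuration file exercising some of the standard variables might look like the following (a hypothetical example --- the kernel path and device names are illustrative only):

```python
# A minimal, hypothetical domain configuration file.  Configuration
# files are Python scripts, so ordinary scripting is allowed; here a
# custom 'vmid' variable (which could be overridden on the xm
# command line) is used to derive the exported disk.
vmid    = 1
kernel  = "/boot/vmlinuz-2.4.27-xenU"       # illustrative path
memory  = 64                                # megabytes
nics    = 1
disk    = ["phy:sda%d,sda1,w" % (vmid + 1)]
root    = "/dev/sda1 ro"
restart = "onreboot"
```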
+
+
+\chapter{Xend (Node control daemon)}
+\label{cha:xend}
+
+The Xen Daemon (Xend) performs system management functions related to
+virtual machines. It forms a central point of control for a machine
+and can be controlled using an HTTP-based protocol. Xend must be
+running in order to start and manage virtual machines.
+
+Xend must be run as root because it needs access to privileged system
+management functions. A small set of commands may be issued on the
+Xend command line:
+
+\begin{tabular}{ll}
+\verb_# xend start_ & start Xend, if not already running \\
+\verb_# xend stop_ & stop Xend if already running \\
+\verb_# xend restart_ & restart Xend if running, otherwise start it \\
+\end{tabular}
+
+A SysV init script called {\tt xend} is provided to start Xend at boot
+time. {\tt make install} installs this script in \path{/etc/init.d}.
+To enable it, make symbolic links in the appropriate runlevel
+directories or use the {\tt chkconfig} tool, where available.
+
+Once Xend is running, more sophisticated administration can be done
++using the xm tool (see Chapter~\ref{cha:xm}) and the experimental
++Xensv web interface (see Chapter~\ref{cha:xensv}).
+
+\chapter{The xm tool}
+\label{cha:xm}
+
+The xm tool is the primary tool for managing Xen from the console.
+The general format of an xm command line is:
+
+\begin{verbatim}
+# xm command [switches] [arguments] [variables]
+\end{verbatim}
+
+The available {\em switches} and {\em arguments} depend on the {\em
+command} chosen. {\em Variables} may be set using declarations of the
+form {\tt variable=value}; command line declarations override any of
+the values in the configuration file being used, including the
+standard variables described above and any custom variables (for
+instance, the \path{xmdefconfig} file uses a {\tt vmid} variable).
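The override semantics can be sketched in a few lines of Python (an illustration of the behaviour described above, not xm's actual parsing code):

```python
def apply_overrides(config_vars, words):
    # Words containing '=' are treated as variable declarations and
    # override the configuration file's values; everything else is
    # passed through as an ordinary argument.
    positional = []
    for word in words:
        if "=" in word:
            name, value = word.split("=", 1)
            config_vars[name] = value
        else:
            positional.append(word)
    return positional

cfg = {"vmid": "1", "memory": "64"}
rest = apply_overrides(cfg, ["myvmconfig", "vmid=7"])
# cfg["vmid"] is now "7"; rest is ["myvmconfig"]
```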
+
+The available commands are as follows:
+
+\begin{description}
+\item[balloon] Request a domain to adjust its memory footprint.
+\item[create] Create a new domain.
+\item[destroy] Kill a domain immediately.
+\item[list] List running domains.
+\item[shutdown] Ask a domain to shut down.
+\item[dmesg] Fetch the Xen (not Linux!) boot output.
+\item[consoles] List the available consoles.
+\item[console] Connect to the console for a domain.
+\item[help] Get help on xm commands.
+\item[save] Suspend a domain to disk.
+\item[restore] Restore a domain from disk.
+\item[pause] Pause a domain's execution.
+\item[unpause] Unpause a domain.
+\item[pincpu] Pin a domain to a CPU.
+\item[bvt] Set BVT scheduler parameters for a domain.
+\item[bvt\_ctxallow] Set the BVT context switching allowance for the system.
+\item[atropos] Set the atropos parameters for a domain.
+\item[rrobin] Set the round robin time slice for the system.
+\item[info] Get information about the Xen host.
+\item[call] Call a Xend HTTP API function directly.
+\end{description}
+
+For a detailed overview of switches, arguments and variables to each command
+try
+\begin{verbatim}
+# xm help command
+\end{verbatim}
+
++\chapter{Xensv (Web control interface)}
++\label{cha:xensv}
++
++Xensv is the experimental web control interface for managing a Xen
++machine. It can be used to perform some (but not yet all) of the
++management tasks that can be done using the xm tool.
++
++It can be started using:\\ \verb_# xensv start_ \\ and
++stopped using: \verb_# xensv stop_ \\ It will automatically start Xend
++if it is not already running.
++
++By default, Xensv will serve out the web interface on port 8080. This
++can be changed by editing {\tt
++/usr/lib/python2.3/site-packages/xen/sv/params.py}.
++
++Once Xensv is running, the web interface can be used to manage running
++domains and provides a user friendly domain creation wizard.
++
++
++
+\chapter{Glossary}
+
+\begin{description}
+\item[Atropos] One of the CPU schedulers provided by Xen.
+ Atropos provides domains with absolute shares
+ of the CPU, with timeliness guarantees and a
++                       mechanism for sharing out ``slack time''.
+
+\item[BVT] The BVT scheduler is used to give proportional
+ fair shares of the CPU to domains.
+
+\item[Exokernel] A minimal piece of privileged code, similar to
+ a {\bf microkernel} but providing a more
+ `hardware-like' interface to the tasks it
+ manages. This is similar to a paravirtualising
+ VMM like {\bf Xen} but was designed as a new
+ operating system structure, rather than
+ specifically to run multiple conventional OSs.
+
+\item[Domain] A domain is the execution context that
+ contains a running { \bf virtual machine }.
+ The relationship between virtual machines
+ and domains on Xen is similar to that between
+ programs and processes in an operating
+ system: a virtual machine is a persistent
+ entity that resides on disk (somewhat like
+ a program). When it is loaded for execution,
+ it runs in a domain. Each domain has a
+ { \bf domain ID }.
+
+\item[Domain 0] The first domain to be started on a Xen
+ machine. Domain 0 is responsible for managing
+ the system.
+
+\item[Domain ID]       A unique identifier for a { \bf domain },
+                       analogous to a process ID in an operating
+                       system.
+
+\item[Full virtualisation] An approach to virtualisation which
+ requires no modifications to the hosted
+ operating system, providing the illusion of
+ a complete system of real hardware devices.
+
+\item[Hypervisor] An alternative term for { \bf VMM }, used
+ because it means ``beyond supervisor'',
+ since it is responsible for managing multiple
+ ``supervisor'' kernels.
+
+\item[Live migration] A technique for moving a running virtual
+ machine to another physical host, without
+ stopping it or the services running on it.
+
+\item[Microkernel]     A small base of code running at the highest
+                       hardware privilege level. A microkernel is
+                       responsible for sharing CPU and memory (and
+                       sometimes other devices) between less
+                       privileged tasks running on the system.
+                       This is similar to a VMM, particularly a
+                       {\bf paravirtualising} VMM, but typically
+                       addressing a different problem space and
+                       providing a different kind of interface.
+
+\item[NetBSD/Xen] A port of NetBSD to the Xen architecture.
+
+\item[Paravirtualisation] An approach to virtualisation which requires
+ modifications to the operating system in
+ order to run in a virtual machine. Xen
+ uses paravirtualisation but preserves
+ binary compatibility for user space
+ applications.
+
+\item[Shadow pagetables] A technique for hiding the layout of machine
+ memory from a virtual machine's operating
+ system. Used in some {\bf VMM}s to provide
+ the illusion of contiguous physical memory,
+ in Xen this is used during
+ {\bf live migration}.
+
+\item[Virtual Machine] The environment in which a hosted operating
+                       system runs, providing the abstraction of a
+                       dedicated machine. A virtual machine may
+                       be identical to the underlying hardware (as
+                       in { \bf full virtualisation }), or it may
+                       differ (as in { \bf paravirtualisation }).
+
+\item[VMM] Virtual Machine Monitor - the software that
+ allows multiple virtual machines to be
+ multiplexed on a single physical machine.
+
+\item[Xen] Xen is a paravirtualising virtual machine
+ monitor, developed primarily by the
+ Systems Research Group at the University
+ of Cambridge Computer Laboratory.
+
+\item[XenLinux] Official name for the port of the Linux kernel
+ that runs on Xen.
+
+\end{description}
+
+\part{Advanced Topics}
+
+\chapter{Advanced Network Configuration}
+
+For simple systems with a single ethernet interface and a simple
+configuration, the default installation should work ``out of the
+box''. More complicated network setups, for instance with multiple
+ethernet interfaces and / or existing bridging setups, will require
+some special configuration.
+
+The purpose of this chapter is to describe the mechanisms provided by
+xend to allow a flexible configuration for Xen's virtual networking.
+
+\section{Xen networking scripts}
+
+Xen's virtual networking is configured by three shell scripts. These are
+called automatically by Xend when certain events occur, with arguments
+to the scripts providing further contextual information. These
+scripts are found by default in \path{/etc/xen}. The names and
+locations of the scripts can be configured in \path{xend-config.sxp}.
+
+\subsection{\path{network}}
+
+This script is called once when Xend is started and once when Xend is
+stopped. Its job is to do any advance preparation required for the
+Xen virtual network when Xend starts and to do any corresponding
+cleanup when Xend exits.
+
+In the default configuration, this script creates the bridge
+``xen-br0'' and moves eth0 onto that bridge, modifying the routing
+accordingly.
+
+In configurations where the bridge already exists, this script could
+be replaced with a link to \path{/bin/true} (for instance).
+
+When Xend exits, this script is called with the {\tt stop} argument,
+which causes it to delete the Xen bridge and remove {\tt eth0} from
+it, restoring the normal IP and routing configuration.
+
+\subsection{\path{vif-bridge}}
+
+This script is called for every domain virtual interface. This should
+do things like configuring firewalling rules for that interface and
+adding it to the appropriate bridge.
+
+By default, this adds and removes VIFs on the default Xen bridge.
+This script can be customized to properly deal with more complicated
+bridging setups.
+
+\chapter{Advanced Scheduling Configuration}
+
+\section{Scheduler selection}
+
+Xen offers a boot time choice between multiple schedulers. To select
+a scheduler, pass the boot parameter {\tt sched=sched\_name} to Xen,
+substituting the appropriate scheduler name. Details of the schedulers
+and their parameters are included below; future versions of the tools
+will provide a higher-level interface for configuring them.
+
+It is expected that system administrators configure their system to
+use the scheduler most appropriate to their needs. Currently, the BVT
+scheduler is the recommended choice, since the Atropos scheduler is
+not finished.
+
+\section{Borrowed Virtual Time}
+
+{\tt sched=bvt } (the default) \\
+
+BVT provides proportional fair shares of the CPU time. It has been
+observed to penalise domains that block frequently (e.g. I/O intensive
+domains), but this can be compensated for by using warping.
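The role of warping can be illustrated with a small sketch (the field names and values are invented for illustration; this shows the general BVT idea, not Xen's scheduler code):

```python
def effective_vt(dom):
    # Each domain accumulates 'actual virtual time' as it runs.  A
    # warped domain's effective virtual time is reduced by its warp
    # value, letting a latency-sensitive domain jump the queue.
    return dom["avt"] - (dom["warp"] if dom["warped"] else 0)

def pick_next(domains):
    # The scheduler runs the domain with the smallest effective
    # virtual time.
    return min(domains, key=effective_vt)

domains = [
    {"name": "batch",       "avt": 100, "warp": 0,  "warped": False},
    {"name": "interactive", "avt": 110, "warp": 50, "warped": True},
]
chosen = pick_next(domains)   # interactive wins: 110 - 50 < 100
```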
+
+\subsection{Global Parameters}
+
+\begin{description}
+\item[ctx\_allow]
+   The context switch allowance is similar to the ``quantum''
+   in traditional schedulers. It is the minimum time that
+   a scheduled domain will be allowed to run before being
+   pre-empted. This prevents thrashing of the CPU.
+\end{description}
+
+\subsection{Per-domain parameters}
+
+\begin{description}
+\item[mcuadv]
+   The MCU (Minimum Charging Unit) advance determines the
+   proportional share of the CPU that a domain receives. It
+   is set inversely proportional to a domain's sharing weight.
+\item[warp]
+   The amount of ``virtual time'' the domain is allowed to warp
+   backwards.
+\item[warpl]
+   The warp limit is the maximum time a domain can run warped for.
+\item[warpu]
+   The unwarp requirement is the minimum time a domain must
+   run unwarped for before it can warp again.
+\end{description}
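Since mcuadv is inversely proportional to a domain's weight, the CPU share each domain receives can be sketched as follows (an illustration of the relationship, not scheduler code):

```python
def bvt_shares(mcuadv):
    # Each domain's weight is 1/mcuadv; its share of the CPU is
    # that weight normalised over all domains.
    inv = {dom: 1.0 / adv for dom, adv in mcuadv.items()}
    total = sum(inv.values())
    return {dom: v / total for dom, v in inv.items()}

# Domain 1 has half the mcuadv of domain 2, hence twice the share:
shares = bvt_shares({1: 10, 2: 20})   # {1: 2/3, 2: 1/3}
```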
+
+\section{Atropos}
+
+{\tt sched=atropos } \\
+
+Atropos is a Soft Real Time scheduler. It provides guarantees about
+absolute shares of the CPU (with a method for optionally sharing out
+slack CPU time on a best-effort basis) and can provide timeliness
+guarantees for latency-sensitive domains.
+
+Every domain has an associated period and slice. The domain should
+receive {\tt slice} nanoseconds every {\tt period} nanoseconds. This allows
+the administrator to configure both the absolute share of the CPU a
+domain receives and the frequency with which it is scheduled. When
+domains unblock, their period is reduced to the value of the latency
+hint (the slice is scaled accordingly so that they still get the same
+proportion of the CPU). For each subsequent period, the slice and
+period times are doubled until they reach their original values.
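The unblocking behaviour described above can be sketched as follows (the function is an illustration, not Xen's code; times may be in any consistent unit):

```python
def unblock_recovery(period, slice_, latency):
    # On unblocking, the period drops to the latency hint and the
    # slice is scaled to preserve the same CPU proportion; both
    # then double each period until the original values return.
    ratio = slice_ / period
    p = latency
    steps = [(p, p * ratio)]
    while p < period:
        p = min(p * 2, period)
        steps.append((p, p * ratio))
    return steps

# A 10ms slice every 100ms with a 25ms latency hint recovers via
# (25, 2.5) -> (50, 5.0) -> (100, 10.0):
steps = unblock_recovery(100, 10, 25)
```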
+
+Note: don't overcommit the CPU when using Atropos (i.e. don't reserve
+more CPU than is available --- the utilisation should be kept to
+slightly less than 100\% in order to ensure predictable behaviour).
+
+\subsection{Per-domain parameters}
+
+\begin{description}
+\item[slice]
+ The length of time per period that a domain is guaranteed.
+\item[period]
+ The period over which a domain is guaranteed to receive
+ its slice of CPU time.
+\item[latency]
+   The latency hint is used to control how soon a domain is
+   scheduled after waking up.
+\item[xtratime]
+ This is a true (1) / false (0) flag that specifies whether
+ a domain should be allowed a share of the system slack time.
+\end{description}
+
+\section{Round Robin}
+
+{\tt sched=rrobin } \\
+
+The Round Robin scheduler is included as a simple demonstration of
+Xen's internal scheduler API. It is not intended for production use
+--- the other schedulers included are all more general and should give
+higher throughput.
+
+\subsection{Global parameters}
+
+\begin{description}
+\item[rr\_slice]
+ The maximum time each domain runs before the next
+ scheduling decision is made.
+\end{description}
+
+\chapter{Privileged domains}
+
+There are two possible types of privileges: IO privileges and
+administration privileges.
+
+\section{Driver domains (IO Privileges)}
+
+IO privileges can be assigned to allow a domain to drive PCI devices
+itself. This is used to support driver domains.
+
+Setting backend privileges is currently only supported in SXP format
+config files. To allow a domain to function as a backend for others,
+somewhere within the {\tt vm} element of its configuration file must
+be a {\tt backend} element of the form {\tt (backend ({\em type}))}
+where {\tt \em type} may be either {\tt netif} or {\tt blkif},
+according to the type of virtual device this domain will service.
+After this domain has been built, Xend will connect all new and
+existing {\em virtual} devices (of the appropriate type) to that
+backend.
+
+Note that:
+\begin{itemize}
+\item a block backend cannot import virtual block devices from other
+domains
+\item a network backend cannot import virtual network devices from
+other domains
+\end{itemize}
+
+Thus (particularly in the case of block backends, which cannot import
+a virtual block device as their root filesystem), you may need to boot
+a backend domain from a ramdisk or a network device.
+
+The privilege to drive PCI devices may also be specified on a
+per-device basis. Xen will assign the minimal set of hardware
+privileges to a domain that are required to control its devices. This
+can be configured in either format of configuration file:
+
+\begin{itemize}
+\item SXP Format:
+ Include {\tt device} elements
+ {\tt (device (pci (bus {\em x}) (dev {\em y}) (func {\em z}))) } \\
+ inside the top-level {\tt vm} element. Each one specifies the address
+ of a device this domain is allowed to drive ---
+ the numbers {\em x},{\em y} and {\em z} may be in either decimal or
+ hexadecimal format.
+\item Flat Format: Include a list of PCI device addresses of the
+ format: \\ {\tt pci = ['x,y,z', ...] } \\ where each element in the
+ list is a string specifying the components of the PCI device
+ address, separated by commas. The components ({\tt \em x}, {\tt \em
+ y} and {\tt \em z}) of the list may be formatted as either decimal
+ or hexadecimal.
+\end{itemize}
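A sketch of how such a flat-format list might be parsed (an illustrative helper, not Xend's actual code; it assumes hexadecimal components carry a {\tt 0x} prefix so the base can be detected automatically):

```python
def parse_pci_list(entries):
    # Each entry is a 'bus,dev,func' string; int(s, 0) accepts both
    # plain decimal ('29') and prefixed hexadecimal ('0x1d').
    devices = []
    for entry in entries:
        bus, dev, func = (int(part, 0) for part in entry.split(","))
        devices.append((bus, dev, func))
    return devices

devices = parse_pci_list(["0x3,0x1d,0x7", "1,2,0"])
# -> [(3, 29, 7), (1, 2, 0)]
```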
+
+\section{Administration Domains}
+
+Administration privileges allow a domain to use the ``dom0
+operations'' (so called because they are usually available only to
+domain 0). A privileged domain can build other domains, set scheduling
+parameters, etc.
+
+% Support for other administrative domains is not yet available...
+
+\chapter{Xen build options}
+
+For most users, the default build of Xen will be adequate. For some
+advanced uses, Xen provides a number of build-time options:
+
+At build time, these options should be set as environment variables or
+passed on make's command-line. For example:
+
+\begin{verbatim}
+export option=y; make
+option=y make
+make option1=y option2=y
+\end{verbatim}
+
+\section{List of options}
+
+{\bf verbose=y }\\
+Enable debugging messages when Xen detects an unexpected condition.
+Also enables console output from all domains. \\
+{\bf debug=y }\\
+Enable debug assertions. Implies {\bf verbose=y }.
+(Primarily useful for tracing bugs in Xen). \\
+{\bf debugger=y }\\
+Enable the in-Xen pervasive debugger (PDB).
+This can be used to debug Xen, guest OSes, and
+applications. For more information see the
+XenDebugger-HOWTO. \\
+{\bf perfc=y }\\
+Enable performance-counters for significant events
+within Xen. The counts can be reset or displayed
+on Xen's console via console control keys. \\
+{\bf trace=y }\\
+Enable per-cpu trace buffers which log a range of
+events within Xen for collection by control
+software. For more information see the chapter on debugging,
+in the Xen Interface Manual.
+
+\chapter{Boot options}
+
+\section{Xen boot options}
+
+These options are used to configure Xen's behaviour at runtime. They
+should be appended to Xen's command line, either manually or by
+editing \path{grub.conf}.
+
+{\bf ignorebiostables }\\
+ Disable parsing of BIOS-supplied tables. This may help with some
+ chipsets that aren't fully supported by Xen. If you specify this
+ option then ACPI tables are also ignored, and SMP support is
+ disabled. \\
+
+{\bf noreboot } \\
+ Don't reboot the machine automatically on errors. This is
+ useful to catch debug output if you aren't catching console messages
+ via the serial line. \\
+
+{\bf nosmp } \\
+ Disable SMP support.
+ This option is implied by 'ignorebiostables'. \\
+
+{\bf noacpi } \\
+ Disable ACPI tables, which confuse Xen on some chipsets.
+ This option is implied by 'ignorebiostables'. \\
+
+{\bf watchdog } \\
+ Enable NMI watchdog which can report certain failures. \\
+
+{\bf noht } \\
+ Disable Hyperthreading. \\
+
+{\bf badpage=$<$page number$>$[,$<$page number$>$] } \\
+ Specify a list of pages not to be allocated for use
+ because they contain bad bytes. For example, if your
+ memory tester says that byte 0x12345678 is bad, you would
+ place 'badpage=0x12345' on Xen's command line (i.e., the
+ last three digits of the byte address are not
+ included!). \\
+
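Dropping the last three hex digits is equivalent to a 12-bit shift, since pages are 4KB; a one-line sketch:

```python
def bad_page(byte_address):
    # The page containing a bad byte is its address with the low
    # 12 bits (the last three hex digits) removed.
    return byte_address >> 12

page = bad_page(0x12345678)   # 0x12345
```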
+{\bf com1=$<$baud$>$,DPS[,$<$io\_base$>$,$<$irq$>$] \\
+ com2=$<$baud$>$,DPS[,$<$io\_base$>$,$<$irq$>$] } \\
+ Xen supports up to two 16550-compatible serial ports.
+ For example: 'com1=9600,8n1,0x408,5' maps COM1 to a
+ 9600-baud port, 8 data bits, no parity, 1 stop bit,
+ I/O port base 0x408, IRQ 5.
+ If the I/O base and IRQ are standard (com1:0x3f8,4;
+ com2:0x2f8,3) then they need not be specified. \\
+
+{\bf console=$<$specifier list$>$ } \\
+ Specify the destination for Xen console I/O.
+ This is a comma-separated list of, for example:
+\begin{description}
+ \item[vga] use VGA console and allow keyboard input
+ \item[com1] use serial port com1
+ \item[com2H] use serial port com2. Transmitted chars will
+ have the MSB set. Received chars must have
+ MSB set.
+ \item[com2L] use serial port com2. Transmitted chars will
+ have the MSB cleared. Received chars must
+ have MSB cleared.
+\end{description}
+ The latter two examples allow a single port to be
+ shared by two subsystems (e.g. console and
+ debugger). Sharing is controlled by MSB of each
+ transmitted/received character.
+ [NB. Default for this option is 'com1,tty'] \\
+
+{\bf conswitch=$<$switch-char$><$auto-switch-char$>$ } \\
+ Specify how to switch serial-console input between
+ Xen and DOM0. The required sequence is CTRL-$<$switch-char$>$
+ pressed three times. Specifying '`' disables switching.
+ The $<$auto-switch-char$>$ specifies whether Xen should
+ auto-switch input to DOM0 when it boots --- if it is 'x'
+ then auto-switching is disabled. Any other value, or
+ omitting the character, enables auto-switching.
+ [NB. Default for this option is 'a'] \\
+
+{\bf nmi=xxx } \\
+ Specify what to do with an NMI parity or I/O error. \\
+ 'nmi=fatal': Xen prints a diagnostic and then hangs. \\
+ 'nmi=dom0': Inform DOM0 of the NMI. \\
+ 'nmi=ignore': Ignore the NMI. \\
+
+{\bf dom0\_mem=xxx } \\
+ Set the maximum amount of memory for domain0. \\
+
+{\bf tbuf\_size=xxx } \\
+ Set the size of the per-cpu trace buffers, in pages
+ (default 1). Note that the trace buffers are only
+ enabled in debug builds. Most users can ignore
+ this feature completely. \\
+
+{\bf sched=xxx } \\
+ Select the CPU scheduler Xen should use. The current
+ possibilities are 'bvt', 'atropos' and 'rrobin'. The
+ default is 'bvt'. For more information see
+ Sched-HOWTO.txt. \\
+
+{\bf pci\_dom0\_hide=(xx.xx.x)(yy.yy.y)... } \\
+Hide selected PCI devices from domain 0 (for instance, to stop it
+taking ownership of them so that they can be driven by another
+domain). Device IDs should be given in hex format. Bridge devices do
+not need to be hidden --- they are hidden implicitly, since guest OSes
+do not need to configure them.
+
+\section{XenLinux Options}
+
+{\bf xencons=xxx}
+Specify the device node to
+which the Xen virtual console driver is attached: \\
+ 'xencons=off': disable virtual console \\
+ 'xencons=tty': attach console to /dev/tty1 (tty0 at boot-time) \\
+ 'xencons=ttyS': attach console to /dev/ttyS0\\
+The default is ttyS for dom0 and tty for all other domains.
+
+\chapter{Further Support}
+
+If you have questions that are not answered by this manual, the
+sources of information listed below may be of interest to you. Note
+that bug reports, suggestions and contributions related to the
+software (or the documentation) should be sent to the Xen developers'
+mailing list (address below).
+
+\section{Other documentation}
+
+For developers interested in porting operating systems to Xen, the
+{\em Xen Interface Manual} is distributed in the \path{docs/}
+directory of the Xen source distribution. Various HOWTOs are
+available in \path{docs/HOWTOS} but this content is being integrated
+into this manual.
+
+\section{Online references}
+
+The official Xen web site is found at: \\
+{\tt
+http://www.cl.cam.ac.uk/Research/SRG/netos/xen/}.
+
+Links to other
+documentation sources are listed at: \\ {\tt
+http://www.cl.cam.ac.uk/Research/SRG/netos/xen/documentation.html}.
+
+\section{Mailing lists}
+
+There are currently three official Xen mailing lists:
+
+\begin{description}
+\item[xen-devel@lists.sourceforge.net] Used for development
+discussions and requests for help. Subscribe at: \\
+{\tt http://lists.sourceforge.net/mailman/listinfo/xen-devel}
+\item[xen-announce@lists.sourceforge.net] Used for announcements only.
+Subscribe at: \\
+{\tt http://lists.sourceforge.net/mailman/listinfo/xen-announce}
+\item[xen-changelog@lists.sourceforge.net] Changelog feed
+from the unstable and 2.0 trees --- developer oriented. Subscribe at: \\
+{\tt http://lists.sourceforge.net/mailman/listinfo/xen-changelog}
+\end{description}
+
+Although there is no specific user support list, the developers try to
+assist users who post on xen-devel. As the bulk of traffic on this
+list increases, a dedicated user support list may be introduced.
+
+\appendix
+
+\chapter{Installing Debian}
+
+The Debian project provides a tool called {\tt debootstrap} which
+allows a base Debian system to be installed into a filesystem without
+requiring the host system to have any Debian-specific software (such
+as {\tt apt}).
+
+This appendix describes how to install Debian 3.1 (Sarge) for an
+unprivileged Xen domain:
+
+\begin{enumerate}
+\item Set up Xen 2.0 and test that it's working, as described earlier in
+ this manual.
+
+\item Create disk images for root-fs and swap (alternatively, you
+ might create dedicated partitions, LVM logical volumes, etc. if
+ that suits your setup).
+\begin{verbatim}
+dd if=/dev/zero of=/path/diskimage bs=1024k count=size_in_mbytes
+dd if=/dev/zero of=/path/swapimage bs=1024k count=size_in_mbytes
+\end{verbatim}
+   If you're going to use this filesystem / diskimage only as a
+   `template' for other vm diskimages, something like 300MB should
+   be enough (of course, this depends on what kind of packages you
+   are planning to install to the template).
+
+\item Create the filesystem and initialise the swap image
+\begin{verbatim}
+mkfs.ext3 /path/diskimage
+mkswap /path/swapimage
+\end{verbatim}
+
+\item Mount the diskimage for installation
+\begin{verbatim}
+mount -o loop /path/diskimage /mnt/disk
+\end{verbatim}
+
+\item Install {\tt debootstrap}
+
+Make sure you have debootstrap installed on the host. If you are
+running Debian sarge (3.1 / testing) or unstable you can install it by
+running {\tt apt-get install debootstrap}. Otherwise, it can be
+downloaded from the Debian project website.
+
+\item Install the Debian base system to the diskimage:
+\begin{verbatim}
+debootstrap --arch i386 sarge /mnt/disk \
+ http://ftp.<countrycode>.debian.org/debian
+\end{verbatim}
+
+You can use any other Debian http/ftp mirror you want.
+
+\item When debootstrap completes successfully, modify settings:
+\begin{verbatim}
+chroot /mnt/disk /bin/bash
+\end{verbatim}
+
+Edit the following files using vi or nano and make needed changes:
+\begin{verbatim}
+/etc/hostname
+/etc/hosts
+/etc/resolv.conf
+/etc/network/interfaces
+/etc/networks
+\end{verbatim}
+
+Set up access to the services, edit:
+\begin{verbatim}
+/etc/hosts.deny
+/etc/hosts.allow
+/etc/inetd.conf
+\end{verbatim}
+
+Add Debian mirror to:
+\begin{verbatim}
+/etc/apt/sources.list
+\end{verbatim}
+
+Create fstab like this:
+\begin{verbatim}
+/dev/sda1 / ext3 errors=remount-ro 0 1
+/dev/sda2 none swap sw 0 0
+proc /proc proc defaults 0 0
+\end{verbatim}
+
+Logout
+
+\item Umount the diskimage
+\begin{verbatim}
+umount /mnt/disk
+\end{verbatim}
+
+\item Create Xen 2.0 configuration file for the new domain. You can
+ use the example-configurations coming with xen as a template.
+
+ Make sure you have the following set up:
+\begin{verbatim}
+disk = [ 'file:/path/diskimage,sda1,w', 'file:/path/swapimage,sda2,w' ]
+root = "/dev/sda1 ro"
+\end{verbatim}
+
+\item Start the new domain
+\begin{verbatim}
+xm create -f domain_config_file
+\end{verbatim}
+
+Check that the new domain is running:
+\begin{verbatim}
+xm list
+\end{verbatim}
+
+\item Attach to the console of the new domain.
+ You should see something like this when starting the new domain:
+
+\begin{verbatim}
+Started domain testdomain2, console on port 9626
+\end{verbatim}
+
+  There you can see the ID of the console: 26. You can also list
+  the consoles with {\tt xm consoles}. (The ID is the last two
+  digits of the port number.)
+
+ Attach to the console:
+
+\begin{verbatim}
+xm console 26
+\end{verbatim}
+
+  or by telnetting to port 9626 of localhost (the xm console
+  program works better).
+
+\item Log in and run base-config
+
+  By default there is no root password.
+
+ Check that everything looks OK, and the system started without
+ errors. Check that the swap is active, and the network settings are
+ correct.
+
+ Run {\tt /usr/sbin/base-config} to set up the Debian settings.
+
+ Set up the password for root using passwd.
+
+\item Done. You can exit the console by pressing {\tt Ctrl + ]}
+
+\end{enumerate}
+
+If you need to create new domains, you can just copy the contents of
+the `template'-image to the new disk images, either by mounting the
+template and the new image, and using {\tt cp -a} or {\tt tar} or by
+simply copying the image file. Once this is done, modify the
+image-specific settings (hostname, network settings, etc).
+
+\end{document}
+
+
+%% Other stuff without a home
+
+%% Instructions Re Python API
+
+%% Other Control Tasks using Python
+%% ================================
+
+%% A Python module 'Xc' is installed as part of the tools-install
+%% process. This can be imported, and an 'xc object' instantiated, to
+%% provide access to privileged command operations:
+
+%% # import Xc
+%% # xc = Xc.new()
+%% # dir(xc)
+%% # help(xc.domain_create)
+
+%% In this way you can see that the class 'xc' contains useful
+%% documentation for you to consult.
+
+%% A further package of useful routines (xenctl) is also installed:
+
+%% # import xenctl.utils
+%% # help(xenctl.utils)
+
+%% You can use these modules to write your own custom scripts or you can
+%% customise the scripts supplied in the Xen distribution.
++
++% Explain about AGP GART