ia64/xen-unstable

changeset 2891:59c1b7c5785d

bitkeeper revision 1.1159.1.374 (418a8468jkIUKrzzY4OldxspTUbSdQ)

Merge tempest.cl.cam.ac.uk:/auto/groups/xeno/BK/xeno.bk
into tempest.cl.cam.ac.uk:/local/scratch/smh22/xeno.bk
author smh22@tempest.cl.cam.ac.uk
date Thu Nov 04 19:35:04 2004 +0000 (2004-11-04)
parents 2d20bd697dda 4b5799ad3285
children bf724487f9e7 0d70a2747177
files docs/src/user.tex
line diff
     1.1 --- a/docs/src/user.tex	Thu Nov 04 18:55:09 2004 +0000
     1.2 +++ b/docs/src/user.tex	Thu Nov 04 19:35:04 2004 +0000
     1.3 @@ -6,6 +6,9 @@
     1.4  \def\Xend{{Xend}\xspace}
     1.5  \def\xend{{xend}\xspace}
     1.6  
     1.7 +\latexhtml{\newcommand{\path}[1]{{\small {\tt #1}}}}{\newcommand{\path}[1]{{\tt #1}}}
     1.8 +
     1.9 +
    1.10  
    1.11  \begin{document}
    1.12  
    1.13 @@ -56,8 +59,6 @@ Contributions of material, suggestions a
    1.14  \renewcommand{\floatpagefraction}{.8}
    1.15  \setstretch{1.1}
    1.16  
    1.17 -\latexhtml{\newcommand{\path}[1]{{\small {\tt #1}}}}{\newcommand{\path}[1]{{\tt #1}}}
    1.18 -
    1.19  \part{Introduction and Tutorial}
    1.20  \chapter{Introduction}
    1.21  
    1.22 @@ -68,7 +69,7 @@ close-to-native performance.  The virtua
    1.23  facilitates enterprise-grade functionality, including:
    1.24  
    1.25  \begin{itemize}
    1.26 -\item Virtual machines with performance nearly identical to native
    1.27 +\item Virtual machines with performance close to native
    1.28    hardware.
    1.29  \item Live migration of running virtual machines between physical hosts.
    1.30  \item Excellent hardware support (supports most Linux device drivers).
    1.31 @@ -89,7 +90,7 @@ applications and libraries {\em do not} 
    1.32  Xen support is available for increasingly many operating systems:
    1.33  right now, Linux 2.4, Linux 2.6 and NetBSD are available for Xen 2.0.
    1.34  We expect that Xen support will ultimately be integrated into the
    1.35 -official releases of Linux, NetBSD and FreeBSD.  Other OS ports,
    1.36 +releases of Linux, NetBSD and FreeBSD.  Other OS ports,
    1.37  including Plan 9, are in progress.
    1.38  
    1.39  Possible usage scenarios for Xen include:
    1.40 @@ -104,7 +105,8 @@ Possible usage scenarios for Xen include
    1.41        virtual machine boundaries. 
    1.42  \item [Cluster computing.] Management at VM granularity provides more
    1.43        flexibility than separately managing each physical host, but
    1.44 -      better control and isolation than single-system image solutions.
    1.45 +      better control and isolation than single-system image solutions, 
    1.46 +      particularly by using live migration for load balancing. 
    1.47  \item [Hardware support for custom OSes.] Allow development of new OSes
    1.48        while benefiting from the wide-ranging hardware support of
    1.49        existing OSes such as Linux.
    1.50 @@ -141,12 +143,13 @@ Multiprocessor machines are supported, a
    1.51  for HyperThreading (SMT), although this remains a topic for ongoing
    1.52  research. A port specifically for x86/64 is in
    1.53  progress, although Xen already runs on such systems in 32-bit legacy
    1.54 -mode.
     1.55 +mode. In addition, a port to the IA64 architecture is approaching 
    1.56 +completion. 
    1.57  
    1.58  Xen can currently use up to 4GB of memory.  It is possible for x86
    1.59  machines to address up to 64GB of physical memory but there are no
    1.60 -plans to support these systems.  The x86/64 port is the planned route
    1.61 -to supporting larger memory sizes. 
    1.62 +current plans to support these systems.  The x86/64 port is the
    1.63 +planned route to supporting larger memory sizes.
    1.64  
    1.65  Xen offloads most of the hardware support issues to the guest OS
    1.66  running in Domain 0.  Xen itself contains only the code required to
    1.67 @@ -162,7 +165,7 @@ other hardware by configuring your XenLi
    1.68  
    1.69  Xen was originally developed by the Systems Research Group at the
    1.70  University of Cambridge Computer Laboratory as part of the XenoServers
    1.71 -project, funded by UK-EPSRC.
    1.72 +project, funded by the UK-EPSRC.
    1.73  XenoServers aim to provide a `public infrastructure for
    1.74  global distributed computing', and Xen plays a key part in that,
    1.75  allowing us to efficiently partition a single machine to enable
    1.76 @@ -170,7 +173,7 @@ multiple independent clients to run thei
    1.77  applications in an environment providing protection, resource
    1.78  isolation and accounting.  The project web page contains further
    1.79  information along with pointers to papers and technical reports:
    1.80 -{\small {\tt http://www.cl.cam.ac.uk/xeno}}
    1.81 +\path{http://www.cl.cam.ac.uk/xeno} 
    1.82  
    1.83  Xen has since grown into a fully-fledged project in its own right,
    1.84  enabling us to investigate interesting research issues regarding the
    1.85 @@ -194,50 +197,86 @@ definitive open source solution for virt
    1.86  \chapter{Installation}
    1.87  
    1.88  The Xen distribution includes three main components: Xen itself, ports
    1.89 -of Linux and NetBSD to run on Xen, and the user-space tools required
    1.90 -to manage a Xen-based system.  This chapter describes how to install
    1.91 -the Xen 2.0 distribution from source.  Alternatively, there may be
    1.92 -pre-built packages available as part of your operating system
    1.93 -distribution.
    1.94 +of Linux 2.4 and 2.6 and NetBSD to run on Xen, and the user-space
    1.95 +tools required to manage a Xen-based system.  This chapter describes
    1.96 +how to install the Xen 2.0 distribution from source.  Alternatively,
    1.97 +there may be pre-built packages available as part of your operating
    1.98 +system distribution.
    1.99  
   1.100  \section{Prerequisites}
   1.101  \label{sec:prerequisites}
   1.102 +
   1.103 +The following is a full list of prerequisites. Items marked `$*$' are
   1.104 +only required if you wish to build from source; items marked `$\dag$'
   1.105 +are only required if you wish to run more than one virtual machine.
   1.106 +
   1.107  \begin{itemize}
   1.108  \item A working Linux distribution using the GRUB bootloader and
   1.109  running on a P6-class (or newer) CPU.
   1.110 -\item The Linux bridge control tools\footnote{{\tt
   1.111 -http://bridge.sourceforge.net}}.
   1.112 -\item Build tools (gcc v3.2.x or v3.3.x, binutils, GNU make).
   1.113 -\item Development installation of libcurl.
   1.114 -\item Development installation of zlib (e.g., zlib-dev).
   1.115 -\item Development installation of Python v2.2 or later (e.g., python-dev).
   1.116 -\item An installation of Twisted v1.3 or above\footnote{{\tt
   1.117 +\item [$*$] Build tools (gcc v3.2.x or v3.3.x, binutils, GNU make).
    1.118 +\item [$*$] Development installation of libcurl (e.g., libcurl-devel).
   1.119 +\item [$*$] Development installation of zlib (e.g., zlib-dev).
   1.120 +\item [$*$] Development installation of Python v2.2 or later (e.g., python-dev).
   1.121 +\item [$*$] \LaTeX, transfig and tgif are required to build the documentation.
   1.122 +\item [$\dag$] The \path{iproute2} package. 
   1.123 +\item [$\dag$] The Linux bridge-utils\footnote{Available from 
    1.124 +{\tt http://bridge.sourceforge.net}} (e.g., \path{/sbin/brctl}).
   1.125 +\item [$\dag$] An installation of Twisted v1.3 or
   1.126 +above\footnote{Available from {\tt
   1.127  http://www.twistedmatrix.com}}. There may be a binary package
   1.128  available for your distribution; alternatively it can be installed by
   1.129 -running `{\sl make install-twisted}' in the root of the Xen source tree.
   1.130 -\item \LaTeX, transfig and tgif are required to build the documentation.
   1.131 +running `{\sl make install-twisted}' in the root of the Xen source
   1.132 +tree.
   1.133  \end{itemize}
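A quick way to probe for some of the build-time tools is a loop like the one below. The list is illustrative, not exhaustive (development headers such as libcurl and zlib need a separate check via your package manager), and exact package names vary by distribution:

```shell
# Probe for a few of the build-time prerequisites (illustrative only).
for tool in gcc make python; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING"
    fi
done
```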
   1.134  
   1.135 -\section{Download the Xen source code}
   1.136 -
   1.137 -\subsection{Tarball}
   1.138 -
   1.139 -The Xen source tree is available as a compressed tarball from the
   1.140 -Xen download page (pre-built tarballs are also available from this page):
   1.141 +Once you have satisfied the relevant prerequisites, you can 
   1.142 +now install either a binary or source distribution of Xen. 
   1.143 +
   1.144 +\section{Installing from Binary Tarball} 
   1.145 +
   1.146 +Pre-built tarballs are available for download from the Xen 
    1.147 +download page:
   1.148  \begin{quote} 
   1.149  {\tt http://xen.sf.net}
   1.150  \end{quote} 
   1.151  
   1.152 -\subsection{Using BitKeeper}
   1.153 -
   1.154 +Once you've downloaded the tarball, simply unpack and install: 
   1.155 +\begin{verbatim}
   1.156 +# tar zxvf xen-2.0-install.tgz
   1.157 +# cd xen-2.0-install
   1.158 +# sh ./install.sh 
   1.159 +\end{verbatim} 
   1.160 +
    1.161 +Once you've installed the binaries, you need to configure
   1.162 +your system as described in Section~\ref{s:configure}. 
   1.163 +
   1.164 +\section{Installing from Source} 
   1.165 +
   1.166 +This section describes how to obtain, build, and install 
   1.167 +Xen from source. 
   1.168 +
   1.169 +\subsection{Obtaining the Source} 
   1.170 +
    1.171 +The Xen source tree is available as either a compressed source
    1.172 +tarball or as a clone of our master BitKeeper repository.
   1.173 +
   1.174 +\begin{description} 
   1.175 +\item[Obtaining the Source Tarball]\mbox{} \\  
   1.176 +Stable versions (and daily snapshots) of the Xen source tree are
    1.177 +available as compressed tarballs from the Xen download page:
   1.178 +\begin{quote} 
   1.179 +{\tt http://xen.sf.net}
   1.180 +\end{quote} 
   1.181 +
   1.182 +\item[Using BitKeeper]\mbox{} \\  
   1.183  If you wish to install Xen from a clone of our latest BitKeeper
   1.184  repository then you will need to install the BitKeeper tools.
   1.185  Download instructions for BitKeeper can be obtained by filling out the
   1.186  form at:
   1.187 +
   1.188  \begin{quote} 
   1.189  {\tt http://www.bitmover.com/cgi-bin/download.cgi}
   1.190  \end{quote}
   1.191 -
   1.192  The public master BK repository for the 2.0 release lives at: 
   1.193  \begin{quote}
   1.194  {\tt bk://xen.bkbits.net/xen-2.0.bk}  
   1.195 @@ -251,15 +290,15 @@ run:
   1.196  # bk clone bk://xen.bkbits.net/xen-2.0.bk
   1.197  \end{verbatim}
   1.198  
   1.199 -
   1.200 -Under your current directory, a new directory named `xen-2.0.bk' has
   1.201 -been created, which contains all the source code for Xen, the OS
   1.202 +Under your current directory, a new directory named \path{xen-2.0.bk}
   1.203 +has been created, which contains all the source code for Xen, the OS
   1.204  ports, and the control tools. You can update your repository with the
   1.205  latest changes at any time by running:
   1.206  \begin{verbatim}
   1.207  # cd xen-2.0.bk # to change into the local repository
   1.208  # bk pull       # to update the repository
   1.209  \end{verbatim}
   1.210 +\end{description} 
   1.211  
   1.212  %\section{The distribution}
   1.213  %
   1.214 @@ -276,9 +315,9 @@ latest changes at any time by running:
   1.215  %\item[\path{extras/}] Bonus extras.
   1.216  %\end{description}
   1.217  
   1.218 -\section{Build and install}
   1.219 -
   1.220 -The Xen makefile includes a target `world' that will do the
   1.221 +\subsection{Building from Source} 
   1.222 +
   1.223 +The top-level Xen Makefile includes a target `world' that will do the
   1.224  following:
   1.225  
   1.226  \begin{itemize}
   1.227 @@ -291,6 +330,42 @@ following:
   1.228        unprivileged virtual machines.
   1.229  \end{itemize}
   1.230  
   1.231 +
   1.232 +After the build has completed you should have a top-level 
   1.233 +directory called \path{dist/} in which all resulting targets 
    1.234 +will be placed; of particular interest are the two 
   1.235 +XenLinux kernel images, one with a `-xen0' extension
   1.236 +which contains hardware device drivers and drivers for Xen's virtual
   1.237 +devices, and one with a `-xenU' extension that just contains the
   1.238 +virtual ones. These are found in \path{dist/install/boot/} along
   1.239 +with the image for Xen itself and the configuration files used
   1.240 +during the build. 
   1.241 +
   1.242 +The NetBSD port can be built using: 
   1.243 +\begin{quote}
   1.244 +\begin{verbatim}
   1.245 +# make netbsd20
   1.246 +\end{verbatim} 
   1.247 +\end{quote} 
    1.248 +The NetBSD port is built using a snapshot of the netbsd-2-0 cvs branch.
   1.249 +The snapshot is downloaded as part of the build process, if it is not
   1.250 +yet present in the \path{NETBSD\_SRC\_PATH} search path.  The build
   1.251 +process also downloads a toolchain which includes all the tools
   1.252 +necessary to build the NetBSD kernel under Linux.
   1.253 +
    1.254 +To further customize the set of kernels built, you need to edit
   1.255 +the top-level Makefile. Look for the line: 
   1.256 +
   1.257 +\begin{quote}
   1.258 +\begin{verbatim}
   1.259 +KERNELS ?= mk.linux-2.6-xen0 mk.linux-2.6-xenU
   1.260 +\end{verbatim} 
   1.261 +\end{quote} 
   1.262 +
   1.263 +You can edit this line to include any set of operating system 
   1.264 +kernels which have configurations in the top-level 
   1.265 +\path{buildconfigs/} directory. 
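Because the assignment uses `?=' (assign only if not already set), the kernel list can also be overridden from the make command line without editing the Makefile at all. The throwaway Makefile below demonstrates just the `?=' semantics; the kernel names are taken from the default line above:

```shell
# Demonstrate GNU make's `?=': a value supplied on the make command
# line overrides the Makefile default.  demo.mk is a scratch file.
printf 'KERNELS ?= mk.linux-2.6-xen0 mk.linux-2.6-xenU\nshow:\n\t@echo $(KERNELS)\n' > demo.mk
make -s -f demo.mk show
# -> mk.linux-2.6-xen0 mk.linux-2.6-xenU
make -s -f demo.mk show KERNELS=mk.linux-2.4-xen0
# -> mk.linux-2.4-xen0
```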
   1.266 +
   1.267  %% Inspect the Makefile if you want to see what goes on during a build.
   1.268  %% Building Xen and the tools is straightforward, but XenLinux is more
   1.269  %% complicated.  The makefile needs a `pristine' Linux kernel tree to which
   1.270 @@ -306,36 +381,10 @@ following:
   1.271  %% After untaring the pristine kernel tree, the makefile uses the {\tt
   1.272  %% mkbuildtree} script to add the Xen patches to the kernel. 
   1.273  
   1.274 -After the build has completed you should have a top-level 
   1.275 -directory called {\tt dist/} in which all resulting targets 
   1.276 -will be placed; of particular interest are the two kernels 
   1.277 -XenLinux kernel images, one with a `-xen0' extension
   1.278 -which contains hardware device drivers and drivers for Xen's virtual
   1.279 -devices, and one with a `-xenU' extension that just contains the
   1.280 -virtual ones. These are found in \path{dist/install/boot/} along
   1.281 -with the image for Xen itself and the configuration files used
   1.282 -during the build. 
   1.283  
   1.284  %% The procedure is similar to build the Linux 2.4 port: \\
   1.285  %% \verb!# LINUX_SRC=/path/to/linux2.4/source make linux24!
   1.286  
   1.287 -The NetBSD port can be built using: \\ \verb!# make netbsd20! \\ The
   1.288 -NetBSD port is built using a snapshot of the netbsd-2-0 cvs branch.
   1.289 -The snapshot is downloaded as part of the build process, if it is not
   1.290 -yet present in the {\tt NETBSD\_SRC\_PATH} search path.  The build
   1.291 -process also downloads a toolchain which includes all the tools
   1.292 -necessary to build the NetBSD kernel under Linux.
   1.293 -
   1.294 -% If you have an SMP machine you may wish to give the {\tt '-j4'}
   1.295 -% argument to make to get a parallel build.
   1.296 -
   1.297 -
   1.298 -If you have an existing Linux kernel configuration that you would like
   1.299 -to use for domain 0, you should copy it to
   1.300 -\path{dist/install/boot/config-2.6.9-xen0}; for example, certain
   1.301 -distributions require a kernel with {\tt devfs} support at boot time.
   1.302 -During the first build, you may be prompted with some Xen-specific
   1.303 -options.  We advise accepting the defaults for these options.
   1.304  
   1.305  %% \framebox{\parbox{5in}{
   1.306  %% {\bf Distro specific:} \\
   1.307 @@ -343,10 +392,54 @@ options.  We advise accepting the defaul
   1.308  %% to enable devfs and devfs mount at boot time in the xen0 config.
   1.309  %% }}
   1.310  
   1.311 +\subsection{Custom XenLinux Builds}
   1.312 +
   1.313 +% If you have an SMP machine you may wish to give the {\tt '-j4'}
   1.314 +% argument to make to get a parallel build.
   1.315 +
   1.316 +If you wish to build a customized XenLinux kernel (e.g. to support
   1.317 +additional devices or enable distribution-required features), you can
   1.318 +use the standard Linux configuration mechanisms, specifying that the
    1.319 +architecture being built for is \path{xen}, e.g.:
   1.320 +\begin{quote}
   1.321 +\begin{verbatim} 
   1.322 +# cd linux-2.6.9-xen0 
   1.323 +# make ARCH=xen xconfig 
   1.324 +\end{verbatim} 
   1.325 +\end{quote} 
   1.326 +
   1.327 +You can also copy an existing Linux configuration (\path{.config}) 
   1.328 +into \path{linux-2.6.9-xen0} and execute:  
   1.329 +\begin{quote}
   1.330 +\begin{verbatim} 
   1.331 +# make oldconfig 
   1.332 +\end{verbatim} 
   1.333 +\end{quote} 
   1.334 +
   1.335 +You may be prompted with some Xen-specific options; we 
    1.336 +advise accepting the defaults.
   1.337 +
   1.338 +Note that the only difference between the two types of Linux kernel
    1.339 +that are built is the configuration file used for each.  The `U'
   1.340 +suffixed (unprivileged) versions don't contain any of the physical
   1.341 +hardware device drivers, leading to a 30\% reduction in size; hence
   1.342 +you may prefer these for your non-privileged domains.  The `0'
   1.343 +suffixed privileged versions can be used to boot the system, as well
   1.344 +as in driver domains and unprivileged domains.
   1.345 +
   1.346 +
   1.347 +\subsection{Installing the Binaries}
   1.348 +
   1.349 +
   1.350  The files produced by the build process are stored under the
   1.351 -\path{dist/install/} directory.  To install them in their default
   1.352 -locations, do: \\
   1.353 -\verb_# make install_
   1.354 +\path{dist/install/} directory. To install them in their default
   1.355 +locations, do:
   1.356 +\begin{quote}
   1.357 +\begin{verbatim}
   1.358 +# make install
   1.359 +\end{verbatim} 
   1.360 +\end{quote}
   1.361 +
   1.362  
   1.363  Alternatively, users with special installation requirements may wish
   1.364  to install them manually by copying the files to their appropriate
   1.365 @@ -359,14 +452,6 @@ destinations.
   1.366  %% \item \path{install/boot/vmlinuz-2.6.9-xenU}  Unprivileged XenLinux kernel
   1.367  %% \end{itemize}
   1.368  
   1.369 -The difference between the two Linux kernels that are built is due to
   1.370 -the configuration file used for each.  The "U" suffixed unprivileged
   1.371 -version doesn't contain any of the physical hardware device drivers
   1.372 ---- it is 30\% smaller and hence may be preferred for your
   1.373 -non-privileged domains.  The `0' suffixed privileged version can be
   1.374 -used to boot the system, as well as in driver domains and unprivileged
   1.375 -domains.
   1.376 -
   1.377  The \path{dist/install/boot} directory will also contain the config files
   1.378  used for building the XenLinux kernels, and also versions of Xen and
   1.379  XenLinux kernels that contain debug symbols (\path{xen-syms} and
   1.380 @@ -374,8 +459,12 @@ XenLinux kernels that contain debug symb
   1.381  dumps.  Retain these files as the developers may wish to see them if
   1.382  you post on the mailing list.
   1.383  
   1.384 +
   1.385 +
   1.386 +
   1.387 +
   1.388  \section{Configuration}
   1.389 -
   1.390 +\label{s:configure}
   1.391  Once you have built and installed the Xen distribution, it is 
   1.392  simple to prepare the machine for booting and running Xen. 
   1.393  
   1.394 @@ -403,9 +492,8 @@ The module line of the configuration des
   1.395  XenLinux kernel that Xen should start and the parameters that should
   1.396  be passed to it (these are standard Linux parameters, identifying the
   1.397  root device and specifying it be initially mounted read only and
   1.398 -instructing that console output be sent both to the screen and to the
   1.399 -serial port). Some distributions such as SuSE do not require the 
   1.400 -{\small {\tt ro}} parameter. 
   1.401 +instructing that console output be sent to the screen).  Some
   1.402 +distributions such as SuSE do not require the \path{ro} parameter.
   1.403  
   1.404  %% \framebox{\parbox{5in}{
   1.405  %% {\bf Distro specific:} \\
   1.406 @@ -414,7 +502,7 @@ serial port). Some distributions such as
   1.407  %% }}
   1.408  
   1.409  
   1.410 -If you want to use an initrd, just add another {\small {\tt module}} line to
   1.411 +If you want to use an initrd, just add another \path{module} line to
   1.412  the configuration, as usual:
   1.413  {\small
   1.414  \begin{verbatim}
   1.415 @@ -462,23 +550,26 @@ regular Linux. Simply add the line:
   1.416  \end{quote} 
   1.417  
   1.418  and you should be able to log in. Note that to successfully log in 
   1.419 -as root over the serial will require adding \path{ttyS0} to
   1.420 +as root over the serial line will require adding \path{ttyS0} to
   1.421  \path{/etc/securetty} in most modern distributions. 
   1.422  
   1.423  \subsection{TLS Libraries}
   1.424  
   1.425  Users of the XenLinux 2.6 kernel should disable Thread Local Storage
   1.426 -(e.g. by doing a {\small {\tt mv /lib/tls /lib/tls.disabled}}) before
   1.427 +(e.g. by doing a \path{mv /lib/tls /lib/tls.disabled}) before
   1.428  attempting to run with a XenLinux kernel.  You can always reenable it
   1.429 -by restoring the directory to its original location (i.e. {\small 
   1.430 -{\tt mv /lib/tls.disabled /lib/tls}}).
   1.431 +by restoring the directory to its original location (i.e. 
   1.432 +\path{mv /lib/tls.disabled /lib/tls}).
   1.433  
   1.434  The reason for this is that the current TLS implementation uses
   1.435  segmentation in a way that is not permissible under Xen.  If TLS is
   1.436  not disabled, an emulation mode is used within Xen which reduces
   1.437  performance substantially and is not guaranteed to work perfectly.
   1.438  
   1.439 -\section{Test the new install}
   1.440 +We hope that this issue can be resolved by working 
   1.441 +with Linux distribution vendors. 
   1.442 +
   1.443 +\section{Booting Xen} 
   1.444  
   1.445  It should now be possible to restart the system and use Xen.  Reboot
   1.446  as usual but choose the new Xen option when the Grub screen appears.
   1.447 @@ -498,29 +589,25 @@ usual.  If you are unable to log in to y
   1.448  should still be able to reboot with your normal Linux kernel.
   1.449  
   1.450  
   1.451 -\chapter{Starting a domain}
   1.452 +\chapter{Starting Additional Domains}
   1.453  
   1.454  The first step in creating a new domain is to prepare a root
   1.455  filesystem for it to boot off.  Typically, this might be stored in a
   1.456  normal partition, an LVM or other volume manager partition, a disk
   1.457 -file or on an NFS server.
   1.458 -A simple way to do this is simply to boot from your standard OS
   1.459 -install CD and install the distribution into another partition on your
   1.460 -hard drive.
   1.461 -
   1.462 -You can boot Xen and a single XenLinux instance without installing any
   1.463 -special user-space tools. To proceed further than this you will need
   1.464 -to install the prerequisites described in Section~\ref{sec:prerequisites}
   1.465 -and the Xen control tools. The control tools are installed by entering
   1.466 -the tools subdirectory of the repository and typing \\
   1.467 -\verb!# make install! \\
   1.468 -
   1.469 -To start the \xend control daemon, type \\ \verb!# xend start! \\ If you
    1.470 +file or on an NFS server.  A simple way to do this is to boot
   1.471 +from your standard OS install CD and install the distribution into
   1.472 +another partition on your hard drive.
   1.473 +
   1.474 +To start the \xend control daemon, type
   1.475 +\begin{quote}
   1.476 +\verb!# xend start!
   1.477 +\end{quote}
   1.478 +If you
   1.479  wish the daemon to start automatically, see the instructions in
   1.480  Section~\ref{s:xend}. Once the daemon is running, you can use the
   1.481 -{\tt xm} tool to monitor and maintain the domains running on your
   1.482 +\path{xm} tool to monitor and maintain the domains running on your
   1.483  system. This chapter provides only a brief tutorial: we provide full
   1.484 -details of the {\tt xm} tool in Section~\ref{s:xm}. 
   1.485 +details of the \path{xm} tool in the next chapter. 
   1.486  
   1.487  %\section{From the web interface}
   1.488  %
   1.489 @@ -535,76 +622,87 @@ details of the {\tt xm} tool in Section~
   1.490  %
   1.491  %\section{From the command line}
   1.492  
   1.493 -This example explains how to use the \path{xmdefconfig} file.  If you
   1.494 -require a more complex setup, you will want to write a custom
   1.495 -configuration file --- details of the configuration file formats are
   1.496 -included in Section~\ref{s:cfiles}. 
   1.497 -
   1.498 -The \path{xmexample1} file is a simple template configuration file
   1.499 -for describing a single VM.
   1.500 -
   1.501 -The \path{xmexample2} file is a template description that is intended
   1.502 -to be reused for multiple virtual machines.  Setting the value of the
   1.503 -{\tt vmid} variable on the {\tt xm} command line
   1.504 -fills in parts of this template.
   1.505 -
   1.506 -Both of them can be found in \path{/etc/xen/}
   1.507 -
   1.508 -\section{Editing {\tt xmdefconfig}}
   1.509 -
   1.510 -At minimum, you should edit the following 
   1.511 -variables in \path{/etc/xen/xmdefconfig}:
   1.512 -
   1.513 +
   1.514 +\section{Creating a Domain Configuration File} 
   1.515 +
   1.516 +Before you can start an additional domain, you must create a
   1.517 +configuration file. We provide two example files which you 
   1.518 +can use as a starting point: 
   1.519 +\begin{itemize} 
   1.520 +  \item \path{/etc/xen/xmexample1} is a simple template configuration file
   1.521 +    for describing a single VM.
   1.522 +
    1.523 +  \item \path{/etc/xen/xmexample2} is a template description that
   1.524 +    is intended to be reused for multiple virtual machines.  Setting
   1.525 +    the value of the \path{vmid} variable on the \path{xm} command line
   1.526 +    fills in parts of this template.
   1.527 +\end{itemize} 
   1.528 +
   1.529 +Copy one of these files and edit it as appropriate.
   1.530 +Typical values you may wish to edit include: 
   1.531 +
   1.532 +\begin{quote}
   1.533  \begin{description}
   1.534  \item[kernel] Set this to the path of the kernel you compiled for use
   1.535 -              with Xen. [e.g. {\tt kernel = '/boot/vmlinuz-2.6.9-xenU'}]
    1.536 +              with Xen (e.g.\  \path{kernel = '/boot/vmlinuz-2.6.9-xenU'}).
   1.537  \item[memory] Set this to the size of the domain's memory in
   1.538 -megabytes. [e.g. {\tt memory = 64} ]
    1.539 +megabytes (e.g.\ \path{memory = 64}).
   1.540  \item[disk] Set the first entry in this list to calculate the offset
   1.541  of the domain's root partition, based on the domain ID.  Set the
   1.542 -second to the location of \path{/usr} (if you are sharing it between
   1.543 -domains). [i.e. {\tt disk = ['phy:your\_hard\_drive\%d,sda1,w' \%
   1.544 +second to the location of \path{/usr} if you are sharing it between
   1.545 +domains (e.g.\ \path{disk = ['phy:your\_hard\_drive\%d,sda1,w' \%
    1.546 +(base\_partition\_number + vmid), 'phy:your\_usr\_partition,sda6,r' ]}).
   1.547  \item[dhcp] Uncomment the dhcp variable, so that the domain will
   1.548 -receive its IP address from a DHCP server. [i.e. {\tt dhcp='dhcp'}]
    1.549 +receive its IP address from a DHCP server (e.g.\ \path{dhcp='dhcp'}).
   1.550  \end{description}
   1.551 +\end{quote}
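The configuration file is interpreted as Python, which is why the disk entry above uses `%' string formatting: \path{\%d} is replaced by the computed partition number. The snippet below just shows what the template expands to; all values are made up for the demonstration, and \path{python3} is used to run it even though the 2004-era tools ran under Python 2:

```shell
# xm config files are Python syntax; `%' below is ordinary Python
# string formatting.  vmid and base_partition_number are illustrative.
python3 -c "
base_partition_number = 4
vmid = 1
disk = ['phy:hda%d,sda1,w' % (base_partition_number + vmid),
        'phy:hda6,sda6,r']
print(disk[0])
"
# -> phy:hda5,sda1,w
```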
   1.552  
   1.553  You may also want to edit the {\bf vif} variable in order to choose
   1.554  the MAC address of the virtual ethernet interface yourself.  For
   1.555 -example: \\ \verb_vif = ['mac=00:06:AA:F6:BB:B3']_\\ If you do not set
   1.556 -this variable, \xend will automatically generate a random MAC address
   1.557 -from an unused range.
   1.558 -
   1.559 -If you don't have a \path{xmdefconfig} file, simply create your own 
   1.560 -by copying one of the \path{/etc/xen/xmexample} files.
   1.561 -\section{Starting the domain}
   1.562 -
   1.563 -The {\tt xm} tool provides a variety of commands for managing domains.
   1.564 -Use the {\tt create} command to start new domains.  To start the
   1.565 -virtual machine with virtual machine ID 1.
   1.566 -
   1.567 +example: 
   1.568 +\begin{quote}
   1.569 +\verb_vif = ['mac=00:06:AA:F6:BB:B3']_
   1.570 +\end{quote}
   1.571 +If you do not set this variable, \xend will automatically generate a
   1.572 +random MAC address from an unused range.
   1.573 +
   1.574 +
   1.575 +\section{Booting the Domain}
   1.576 +
   1.577 +The \path{xm} tool provides a variety of commands for managing domains.
   1.578 +Use the \path{create} command to start new domains. Assuming you've 
   1.579 +created a configuration file \path{myvmconf} based around
   1.580 +\path{/etc/xen/xmexample2}, to start a domain with virtual 
    1.581 +machine ID~1, you should type: 
   1.582 +
   1.583 +\begin{quote}
   1.584  \begin{verbatim}
   1.585 -# xm create -c vmid=1
    1.586 +# xm create -c -f myvmconf vmid=1
   1.587  \end{verbatim}
   1.588 -
   1.589 -The {\tt -c} switch causes {\tt xm} to turn into the domain's console
   1.590 -after creation.  The {\tt vmid=1} sets the {\tt vmid} variable used in
   1.591 -the {\tt xmdefconfig} file.  The tool uses the
   1.592 -\path{/etc/xen/xmdefconfig} file, since no custom configuration file
   1.593 -was specified on the command line.
   1.594 +\end{quote}
   1.595 +
   1.596 +
   1.597 +The \path{-c} switch causes \path{xm} to turn into the domain's
   1.598 +console after creation.  The \path{vmid=1} sets the \path{vmid}
   1.599 +variable used in the \path{myvmconf} file.
   1.600 +
   1.601 +
   1.602 +You should see the console boot messages from the new domain 
   1.603 +appearing in the terminal in which you typed the command, 
   1.604 +culminating in a login prompt. 
   1.605 +
   1.606  
   1.607  \section{Example: ttylinux}
   1.608  
   1.609 -Ttylinux is a very small Linux distribution, designed to
   1.610 -require very few resources.  We will use it as a concrete example of
   1.611 -how to start a Xen domain.  Most users will probably want to install a
   1.612 -full-featured distribution once they have mastered the
   1.613 -basics.
   1.614 +Ttylinux is a very small Linux distribution, designed to require very
   1.615 +few resources.  We will use it as a concrete example of how to start a
   1.616 +Xen domain.  Most users will probably want to install a full-featured
   1.617 +distribution once they have mastered the basics.
   1.618  
   1.619  \begin{enumerate}
   1.620  \item Download and extract the ttylinux disk image from the Files
   1.621 -section of the project's SourceForge site (see {\tt
   1.622 -http://sf.net/projects/xen/}).
   1.623 +section of the project's SourceForge site (see 
   1.624 +\path{http://sf.net/projects/xen/}).
   1.625  \item Create a configuration file like the following:
   1.626  \begin{verbatim}
   1.627  kernel = "/boot/vmlinuz-2.6.9-xenU"
   1.628 @@ -623,7 +721,8 @@ xm create -f configfile -c
   1.629  \item Login as root, password root.
   1.630  \end{enumerate}
   1.631  
   1.632 -\section{Starting / Stopping domains automatically}
   1.633 +
   1.634 +\section{Starting / Stopping Domains Automatically}
   1.635  
   1.636  It is possible to have certain domains start automatically at boot
   1.637  time and to have dom0 wait for all running domains to shutdown before
   1.638 @@ -639,47 +738,64 @@ your distribution.
   1.639  
   1.640  For instance, on RedHat:
   1.641  
   1.642 +\begin{quote}
   1.643  \verb_# chkconfig --add xendomains_
   1.644 +\end{quote}
   1.645  
   1.646  By default, this will start the boot-time domains in runlevels 3, 4
   1.647  and 5.
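Concretely, the xendomains script starts whichever domains have config files under \path{/etc/xen/auto/}. A minimal sketch of that layout, run against a scratch directory so it is self-contained (the domain name `ttylinux' and the symlink arrangement are illustrative):

```shell
# Sketch: xendomains starts every domain whose config file appears under
# /etc/xen/auto/.  A scratch directory stands in for /etc/xen here so the
# example runs anywhere; a symlink back to the real config file is one
# convenient arrangement.
xen_etc=$(mktemp -d)
mkdir -p "$xen_etc/auto"
touch "$xen_etc/ttylinux"                      # the real domain config file
ln -s "$xen_etc/ttylinux" "$xen_etc/auto/ttylinux"
ls "$xen_etc/auto"
```

On a real system the target directory would be \path{/etc/xen/auto} itself, and the init script does the rest at boot.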
   1.648  
   1.649 -You can also use the {\tt service} command to run this script manually, e.g:
   1.650 -
   1.651 +You can also use the \path{service} command to run this script
   1.652 +manually, e.g.:
   1.653 +
   1.654 +\begin{quote}
   1.655  \verb_# service xendomains start_
   1.656  
   1.657  Starts all the domains with config files under /etc/xen/auto/.
   1.658 -
   1.659 +\end{quote}
   1.660 +
   1.661 +
   1.662 +\begin{quote}
   1.663  \verb_# service xendomains stop_
   1.664  
   1.665  Shuts down ALL running Xen domains.
   1.666 -
   1.667 -
   1.668 -\chapter{Domain management tasks}
   1.669 +\end{quote}
   1.670 +
   1.671 +\chapter{Domain Management Tools}
   1.672  
   1.673  The previous chapter described a simple example of how to configure
   1.674  and start a domain.  This chapter summarises the tools available to
   1.675  manage running domains.
   1.676  
   1.677 -\section{Command line management}
   1.678 -
   1.679 -Command line management tasks are also performed using the {\tt xm}
   1.680 -tool.  For online help for the commands available, type:\\
   1.681 +\section{Command-line Management}
   1.682 +
   1.683 +Command-line management tasks are also performed using the \path{xm}
   1.684 +tool.  For online help for the commands available, type:
   1.685 +\begin{quote}
   1.686  \verb_# xm help_
   1.687 -
   1.688 -\subsection{Basic management commands}
   1.689 -
   1.690 -The most important {\tt xm} commands are: \\
   1.691 -\verb_# xm list_ : Lists all domains running. \\
   1.692 -\verb_# xm consoles_ : Gives information about the domain consoles. \\
   1.693 -\verb_# xm console_: Opens a console to a domain.
   1.694 -e.g. \verb_# xm console 1_ (open console to domain 1)
   1.695 +\end{quote}
   1.696 +
   1.697 +You can also type \path{xm help $<$command$>$} for more information 
   1.698 +on a given command. 
   1.699 +
   1.700 +\subsection{Basic Management Commands}
   1.701 +
   1.702 +The most important \path{xm} commands are: 
   1.703 +\begin{quote}
   1.704 +\verb_# xm list_: Lists all domains running.\\
   1.705 +\verb_# xm consoles_: Gives information about the domain consoles.\\
   1.706 +\verb_# xm console_: Opens a console to a domain (e.g.\
   1.707 +  \verb_# xm console myVM_)
   1.708 +\end{quote}
   1.709  
   1.710  \subsection{\tt xm list}
   1.711  
   1.712 -The output of {\tt xm list} is in rows of the following format:\\
   1.713 -\verb_name domid memory cpu state cputime console_
   1.714 -
   1.715 +The output of \path{xm list} is in rows of the following format:
   1.716 +\begin{center}
   1.717 +{\tt name domid memory cpu state cputime console}
   1.718 +\end{center}
   1.719 +
   1.720 +\begin{quote}
   1.721  \begin{description}
   1.722  \item[name]  The descriptive name of the virtual machine.
   1.723  \item[domid] The number of the domain ID this virtual machine is running in.
   1.724 @@ -696,9 +812,10 @@ The output of {\tt xm list} is in rows o
   1.725  \item[cputime] How much CPU time (in seconds) the domain has used so far.
   1.726  \item[console] TCP port accepting connections to the domain's console.
   1.727  \end{description}
   1.728 -
   1.729 -The {\tt xm list} command also supports a long output format when the
   1.730 -{\tt -l} switch is used.  This outputs the fulls details of the
   1.731 +\end{quote}
   1.732 +
   1.733 +The \path{xm list} command also supports a long output format when the
   1.734 +\path{-l} switch is used.  This outputs the full details of the
   1.735  running domains in \xend's SXP configuration format.
   1.736  
   1.737  For example, suppose the system is running the ttylinux domain as
   1.738 @@ -714,8 +831,8 @@ ttylinux           5       63    0  -b--
   1.739  Here we can see the details for the ttylinux domain, as well as for
   1.740  domain 0 (which, of course, is always running).  Note that the console
   1.741  port for the ttylinux domain is 9605.  This can be connected to by TCP
   1.742 -using a terminal program (e.g. {\tt telnet} or, better, {\tt
   1.743 -xencons}).  The simplest way to connect is to use the {\tt xm console}
   1.744 +using a terminal program (e.g. \path{telnet} or, better, 
   1.745 +\path{xencons}).  The simplest way to connect is to use the \path{xm console}
   1.746  command, specifying the domain name or ID.  To connect to the console
   1.747  of the ttylinux domain, we could use:
   1.748  \begin{verbatim}
   1.749 @@ -726,7 +843,7 @@ or:
   1.750  # xm console 5
   1.751  \end{verbatim}
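Because the console port is the last field of each \path{xm list} row, the lookup can be scripted. A sketch, run against sample text that mirrors the row format described above (the column values are illustrative):

```shell
# Sketch: find the console port for a named domain by parsing `xm list`
# output.  The sample text follows the documented row format
# (name domid memory cpu state cputime console); values are illustrative.
sample='Name       Dom  Mem(MB)  CPU  State  Time(s)  Console
Domain-0     0      251    0  r----    172.2
ttylinux     5       63    0  -b---      3.0     9605'
port=$(printf '%s\n' "$sample" | awk '$1 == "ttylinux" { print $NF }')
echo "$port"
```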
   1.752  
   1.753 -\section{Domain save and restore}
   1.754 +\section{Domain Save and Restore}
   1.755  
   1.756  The administrator of a Xen system may suspend a virtual machine's
   1.757  current state into a disk file in domain 0, allowing it to be resumed
   1.758 @@ -741,16 +858,16 @@ the command:
   1.759  This will stop the domain named `ttylinux' and save its current state
   1.760  into a file called \path{ttylinux.xen}.
   1.761  
   1.762 -To resume execution of this domain, use the {\tt xm restore} command:
   1.763 +To resume execution of this domain, use the \path{xm restore} command:
   1.764  \begin{verbatim}
   1.765  # xm restore ttylinux.xen
   1.766  \end{verbatim}
   1.767  
   1.768  This will restore the state of the domain and restart it.  The domain
   1.769  will carry on as before and the console may be reconnected using the
   1.770 -{\tt xm console} command, as above.
   1.771 -
   1.772 -\section{Live migration}
   1.773 +\path{xm console} command, as above.
   1.774 +
   1.775 +\section{Live Migration}
   1.776  
   1.777  Live migration is used to transfer a domain between physical hosts
   1.778  whilst that domain continues to perform its usual activities --- from
   1.779 @@ -758,14 +875,16 @@ the user's perspective, the migration sh
   1.780  
   1.781  To perform a live migration, both hosts must be running Xen / \xend and
   1.782  the destination host must have sufficient resources (e.g. memory
   1.783 -capacity) to accommodate the domain after the move.
   1.784 +capacity) to accommodate the domain after the move. Furthermore, we
   1.785 +currently require both source and destination machines to be on the 
   1.786 +same L2 subnet. 
   1.787  
   1.788  Currently, there is no support for providing access to disk
   1.789  filesystems when a domain is migrated.  Administrators should choose
   1.790  an appropriate storage solution (i.e. SAN, NAS, etc.) to ensure that
   1.791  domain filesystems are also available on their destination node.
   1.792  
   1.793 -A domain may be migrated using the {\tt xm migrate} command.  To
   1.794 +A domain may be migrated using the \path{xm migrate} command.  To
   1.795  live migrate a domain to another machine, we would use
   1.796  the command:
   1.797  
   1.798 @@ -783,11 +902,11 @@ The domain will then continue on the new
   1.799  for a fraction of a second (usually between about 60 -- 300ms).
   1.800  
   1.801  For now it will be necessary to reconnect to the domain's console on
   1.802 -the new machine using the {\tt xm console} command.  If a migrated
   1.803 +the new machine using the \path{xm console} command.  If a migrated
   1.804  domain has any open network connections then they will be preserved,
   1.805  so SSH connections do not have this limitation.
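As a sketch, the command line for such a migration can be assembled as below. It is not executed here, since a real migration needs two running Xen hosts; the domain name, the destination host, and the exact \path{--live} flag spelling are assumptions:

```shell
# Sketch: assemble (but do not execute) a live-migration command line.
# The domain and destination names are hypothetical, and --live is an
# assumed flag; without a live option the domain would be paused for
# the duration of the transfer.
domain="ttylinux"
destination="host2.example.com"
cmd="xm migrate --live $domain $destination"
echo "$cmd"
```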
   1.806  
   1.807 -\section{Managing domain memory (ballooning and memory limits)}
   1.808 +\section{Managing Domain Memory}
   1.809  
   1.810  XenLinux domains have the ability to relinquish / reclaim machine
   1.811  memory at the request of the administrator or the user of the domain.
   1.812 @@ -795,7 +914,7 @@ memory at the request of the administrat
   1.813  \subsection{Setting memory footprints from dom0}
   1.814  
   1.815  The machine administrator can request that a domain alter its memory
   1.816 -footprint using the {\tt xm balloon} command.  For instance, we can
   1.817 +footprint using the \path{xm balloon} command.  For instance, we can
   1.818  request that our example ttylinux domain reduce its memory footprint
   1.819  to 32 megabytes.
   1.820  
   1.821 @@ -803,7 +922,7 @@ to 32 megabytes.
   1.822  # xm balloon ttylinux 32
   1.823  \end{verbatim}
   1.824  
   1.825 -We can now see the result of this in the output of {\tt xm list}:
   1.826 +We can now see the result of this in the output of \path{xm list}:
   1.827  
   1.828  \begin{verbatim}
   1.829  # xm list
   1.830 @@ -823,9 +942,9 @@ can restore the domain to its original s
   1.831  
   1.832  The virtual file \path{/proc/xen/memory\_target} allows the owner of a
   1.833  domain to adjust their own memory footprint.  Reading the file
   1.834 -(e.g. \verb!# cat /proc/xen/memory\_target!) prints out the current
   1.835 +(e.g. \path{cat /proc/xen/memory\_target}) prints out the current
   1.836  memory footprint of the domain.  Writing the file
   1.837 -(e.g. \verb!# echo new\_target > /proc/xen/memory\_target!) requests
   1.838 +(e.g. \path{echo new\_target > /proc/xen/memory\_target}) requests
   1.839  that the kernel adjust the domain's memory footprint to a new value.
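The read/write protocol can be sketched against a stand-in file, so the example runs outside a Xen domain. Inside a XenLinux guest the path would be \path{/proc/xen/memory\_target}, and the kernel, not the script, would maintain the current value:

```shell
# Sketch of the /proc/xen/memory_target protocol against a stand-in
# temporary file (the real path only exists inside a XenLinux domain).
# The numeric values are illustrative.
target=$(mktemp)
echo 65536 > "$target"     # pretend this is the current footprint
cat "$target"              # reading prints the current target
echo 32768 > "$target"     # writing requests a new, smaller footprint
cat "$target"
```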
   1.840  
   1.841  \subsection{Setting memory limits}
   1.842 @@ -834,30 +953,55 @@ Xen associates a memory size limit with 
   1.843  is the amount of memory the domain is originally started with,
   1.844  preventing the domain from ever growing beyond this size.  To permit a
   1.845  domain to grow beyond its original allocation or to prevent a domain
   1.846 -you've shrunk from reclaiming the memory it relinquished, use the {\tt
   1.847 -xm maxmem} command.
   1.848 -
   1.849 -\chapter{Domain filesystem storage}
   1.850 +you've shrunk from reclaiming the memory it relinquished, use the 
   1.851 +\path{xm maxmem} command.
   1.852 +
   1.853 +\chapter{Domain Filesystem Storage}
   1.854  
   1.855  It is possible to directly export any Linux block device in dom0 to
   1.856 -another domain,
   1.857 -or to export filesystems / devices to virtual machines using standard
   1.858 -network protocols (e.g. NBD, iSCSI, NFS, etc).  This chapter covers
   1.859 -some of the possibilities.
   1.860 -
   1.861 -\section{Warning: Block device sharing}
   1.862 -
   1.863 +another domain, or to export filesystems / devices to virtual machines
   1.864 +using standard network protocols (e.g. NBD, iSCSI, NFS, etc).  This
   1.865 +chapter covers some of the possibilities.
   1.866 +
   1.867 +
   1.868 +\section{Exporting Physical Devices as VBDs} 
   1.869 +
   1.870 +One of the simplest configurations is to directly export 
   1.871 +individual partitions from domain 0 to other domains. To 
   1.872 +achieve this, use the \path{phy:} specifier in your domain 
   1.873 +configuration file. For example, a line like
   1.874 +\begin{quote}
   1.875 +\verb_disk = ['phy:hda3,sda1,w']_
   1.876 +\end{quote}
   1.877 +specifies that the partition \path{/dev/hda3} in domain 0 
   1.878 +should be exported to the new domain as \path{/dev/sda1}; 
   1.879 +one could equally well export it as \path{/dev/hda3} or 
   1.880 +\path{/dev/sdb5} should one wish. 
   1.881 +
   1.882 +In addition to local disks and partitions, it is possible to export
   1.883 +any device that Linux considers to be ``a disk'' in the same manner.
   1.884 +For example, if you have iSCSI disks or GNBD volumes imported into
   1.885 +domain 0 you can export these to other domains using the \path{phy:}
   1.886 +disk syntax.
   1.887 +
   1.888 +
   1.889 +\begin{center}
   1.890 +\framebox{\bf Warning: Block device sharing}
   1.891 +\end{center}
   1.892 +\begin{quote}
   1.893  Block devices should only be shared between domains in a read-only
   1.894  fashion, otherwise the Linux kernels will obviously get very confused
   1.895  as the file system structure may change underneath them (having the
   1.896  same partition mounted rw twice is a sure-fire way to cause
   1.897  irreparable damage)!  If you want read-write sharing, export the
   1.898 -directory to other domains via NFS from domain0.
   1.899 -
   1.900 -\section{File-backed virtual block devices}
   1.901 -
   1.902 -It is possible to use a file in Domain 0 as the primary storage for a
   1.903 -virtual machine.  As well as being convenient, this also has the
   1.904 +directory to other domains via NFS from domain0. 
   1.905 +\end{quote}
   1.906 +
   1.907 +
   1.908 +\section{Using File-backed VBDs}
   1.909 +
   1.910 +It is also possible to use a file in Domain 0 as the primary storage
   1.911 +for a virtual machine.  As well as being convenient, this also has the
   1.912  advantage that the virtual block device will be {\em sparse} --- space
   1.913  will only really be allocated as parts of the file are used.  So if a
   1.914  virtual machine uses only half of its disk space then the file really
   1.915 @@ -865,74 +1009,174 @@ takes up half of the size allocated.
   1.916  
   1.917  For example, to create a 2GB sparse file-backed virtual block device
   1.918  (actually only consumes 1KB of disk):
   1.919 -
   1.920 +\begin{quote}
   1.921  \verb_# dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1_
   1.922 -
   1.923 -Make a file system in the disk file: \\
   1.924 +\end{quote}
   1.925 +
   1.926 +Make a file system in the disk file: 
   1.927 +\begin{quote}
   1.928  \verb_# mkfs -t ext3 vm1disk_
   1.929 +\end{quote}
   1.930  
   1.931  (when the tool asks for confirmation, answer `y')
   1.932  
   1.933  Populate the file system e.g. by copying from the current root:
   1.934 +\begin{quote}
   1.935  \begin{verbatim}
   1.936  # mount -o loop vm1disk /mnt
   1.937  # cp -ax /{root,dev,var,etc,usr,bin,sbin,lib} /mnt
   1.938  # mkdir /mnt/{proc,sys,home,tmp}
   1.939  \end{verbatim}
   1.940 +\end{quote}
   1.941 +
   1.942  Tailor the file system by editing \path{/etc/fstab},
   1.943  \path{/etc/hostname}, etc (don't forget to edit the files in the
   1.944  mounted file system, instead of your domain 0 filesystem, e.g. you
   1.945  would edit \path{/mnt/etc/fstab} instead of \path{/etc/fstab}).  For
   1.946  this example, set \path{/dev/sda1} as the root device in fstab.
   1.947  
   1.948 -Now unmount (this is important!):\\
   1.949 +Now unmount (this is important!):
   1.950 +\begin{quote}
   1.951  \verb_# umount /mnt_
   1.952 -
   1.953 -In the configuration file set:\\
   1.954 +\end{quote}
   1.955 +
   1.956 +In the configuration file set:
   1.957 +\begin{quote}
   1.958  \verb_disk = ['file:/full/path/to/vm1disk,sda1,w']_
   1.959 +\end{quote}
   1.960  
   1.961  As the virtual machine writes to its `disk', the sparse file will be
   1.962  filled in and consume more space up to the original 2GB.
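The sparseness property itself can be demonstrated at a smaller scale (1MB apparent size rather than 2GB): the apparent size and the blocks actually allocated differ until the file is written to.

```shell
# Sketch: seeking past the end and writing one block at the tail creates
# a sparse file whose apparent size (1MB here) far exceeds the space
# actually allocated on most filesystems.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1k seek=1023 count=1 2>/dev/null
stat -c %s "$img"          # apparent size in bytes: 1048576
du -k "$img" | cut -f1     # KB actually allocated: typically far less
```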
   1.963  
   1.964 -\section{NFS Root}
   1.965 -
   1.966 -The procedure for using NFS root in a virtual machine is basically the
   1.967 -same as you would follow for a real machine.  NB. the Linux NFS root
   1.968 -implementation is known to have stability problems under high load
   1.969 -(this is not a Xen-specific problem), so this configuration may not be
   1.970 -appropriate for critical servers.
   1.971 -
   1.972 -First, populate a root filesystem in a directory on the server machine
   1.973 ---- this can be on another physical machine, or perhaps just another
   1.974 -virtual machine on the same node.
   1.975 -
   1.976 -Now, configure the NFS server to export this filesystem over the
   1.977 -network by adding a line to /etc/exports, for instance:
   1.978 -
   1.979 +
   1.980 +\section{Using LVM-backed VBDs}
   1.981 +
   1.982 +A particularly appealing solution is to use LVM volumes 
   1.983 +as backing for domain file-systems, since this allows dynamic
   1.984 +growing/shrinking of volumes as well as snapshots and other 
   1.985 +features. 
   1.986 +
   1.987 +To initialise a partition to support LVM volumes:
   1.988 +\begin{quote}
   1.989 +\begin{verbatim} 
   1.990 +# pvcreate /dev/sda10		
   1.991 +\end{verbatim} 
   1.992 +\end{quote}
   1.993 +
   1.994 +Create a volume group named `vg' on the physical partition:
   1.995 +\begin{quote}
   1.996 +\begin{verbatim} 
   1.997 +# vgcreate vg /dev/sda10
   1.998 +\end{verbatim} 
   1.999 +\end{quote}
  1.1000 +
  1.1001 +Create a logical volume of size 4GB named `myvmdisk1':
  1.1002 +\begin{quote}
  1.1003 +\begin{verbatim} 
  1.1004 +# lvcreate -L4096M -n myvmdisk1 vg
  1.1005 +\end{verbatim} 
  1.1006 +\end{quote}
  1.1007 +
  1.1008 +You should now see that you have a \path{/dev/vg/myvmdisk1} device.
  1.1009 +Make a filesystem, mount it and populate it, e.g.:
  1.1010 +\begin{quote}
  1.1011 +\begin{verbatim} 
  1.1012 +# mkfs -t ext3 /dev/vg/myvmdisk1
  1.1013 +# mount /dev/vg/myvmdisk1 /mnt
  1.1014 +# cp -ax / /mnt
  1.1015 +# umount /mnt
  1.1016 +\end{verbatim} 
  1.1017 +\end{quote}
  1.1018 +
  1.1019 +Now configure your VM with the following disk configuration:
  1.1020 +\begin{quote}
  1.1021 +\begin{verbatim} 
  1.1022 + disk = [ 'phy:vg/myvmdisk1,sda1,w' ]
  1.1023 +\end{verbatim} 
  1.1024 +\end{quote}
  1.1025 +
  1.1026 +LVM enables you to grow the size of logical volumes, but you'll need
  1.1027 +to resize the corresponding file system to make use of the new
  1.1028 +space. Some file systems (e.g. ext3) now support on-line resize.  See
  1.1029 +the LVM manuals for more details.
  1.1030 +
  1.1031 +You can also use LVM for creating copy-on-write clones of LVM
  1.1032 +volumes (known as writable persistent snapshots in LVM
  1.1033 +terminology). This facility is new in Linux 2.6.8, so it isn't as
  1.1034 +stable as one might hope. In particular, using lots of CoW LVM
  1.1035 +disks consumes a lot of dom0 memory, and error conditions such as
  1.1036 +running out of disk space are not handled well. Hopefully this
  1.1037 +will improve in future.
  1.1038 +
  1.1039 +To create two copy-on-write clones of the above file system you
  1.1040 +would use the following commands:
  1.1041 +
  1.1042 +\begin{quote}
  1.1043 +\begin{verbatim} 
  1.1044 +# lvcreate -s -L1024M -n myclonedisk1 /dev/vg/myvmdisk1
  1.1045 +# lvcreate -s -L1024M -n myclonedisk2 /dev/vg/myvmdisk1
  1.1046 +\end{verbatim} 
  1.1047 +\end{quote}
  1.1048 +
  1.1049 +Each of these can grow to have 1GB of differences from the master
  1.1050 +volume. You can grow the amount of space for storing the
  1.1051 +differences using the \path{lvextend} command, e.g.:
  1.1052 +\begin{quote}
  1.1053 +\begin{verbatim} 
  1.1054 +# lvextend -L +100M /dev/vg/myclonedisk1
  1.1055 +\end{verbatim} 
  1.1056 +\end{quote}
  1.1057 +
  1.1058 +Don't let the `differences volume' ever fill up, otherwise LVM gets
  1.1059 +rather confused. It may be possible to automate the growing
  1.1060 +process by using \path{dmsetup wait} to spot the volume getting full
  1.1061 +and then issue an \path{lvextend}.
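The decision step of such a monitor can be sketched by parsing the used/total sector counts that \path{dmsetup status} reports for a snapshot target. The sample status line and the 80\% threshold below are illustrative, and a real monitor would block in \path{dmsetup wait} between checks rather than run once:

```shell
# Sketch: decide whether a snapshot volume needs an lvextend by parsing
# the used/total sector counts from `dmsetup status` output.  The sample
# line and the 80% threshold are illustrative.
status="vg-myclonedisk1: 0 2097152 snapshot 1781760/2097152 16"
usage=$(printf '%s\n' "$status" |
        awk '{ split($5, a, "/"); print int(100 * a[1] / a[2]) }')
echo "usage=${usage}%"
if [ "$usage" -ge 80 ]; then
    echo "would run: lvextend -L +100M /dev/vg/myclonedisk1"
fi
```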
  1.1062 +
  1.1063 +%% In principle, it is possible to continue writing to the volume
  1.1064 +%% that has been cloned (the changes will not be visible to the
  1.1065 +%% clones), but we wouldn't recommend this: have the cloned volume
  1.1066 +%% as a 'pristine' file system install that isn't mounted directly
  1.1067 +%% by any of the virtual machines.
  1.1068 +
  1.1069 +
  1.1070 +\section{Using NFS Root}
  1.1071 +
  1.1072 +First, populate a root filesystem in a directory on the server
  1.1073 +machine. This can be on a distinct physical machine, or simply 
  1.1074 +run within a virtual machine on the same node.
  1.1075 +
  1.1076 +Now configure the NFS server to export this filesystem over the
  1.1077 +network by adding a line to \path{/etc/exports}, for instance:
  1.1078 +
  1.1079 +\begin{quote}
  1.1080  \begin{verbatim}
  1.1081  /export/vm1root      w.x.y.z/m (rw,sync,no_root_squash)
  1.1082  \end{verbatim}
  1.1083 +\end{quote}
  1.1084  
  1.1085  Finally, configure the domain to use NFS root.  In addition to the
  1.1086  normal variables, you should make sure to set the following values in
  1.1087  the domain's configuration file:
  1.1088  
  1.1089 +\begin{quote}
  1.1090 +\begin{small}
  1.1091  \begin{verbatim}
  1.1092  root       = '/dev/nfs'
  1.1093 -nfs_server = 'a.b.c.d'       # Substitute the IP for the server here
  1.1094 -nfs_root   = '/path/to/root' # Path to root FS on the server
  1.1095 +nfs_server = 'a.b.c.d'       # substitute IP address of server 
  1.1096 +nfs_root   = '/path/to/root' # path to root FS on the server
  1.1097  \end{verbatim}
  1.1098 -
  1.1099 -The domain will need network access at boot-time, so either statically
  1.1100 -configure an IP address (Using the config variables {\tt ip}, {\tt
  1.1101 -netmask}, {\tt gateway}, {\tt hostname}) or enable DHCP ({\tt
  1.1102 -dhcp='dhcp'}).
  1.1103 -
  1.1104 -%% \section{LVM-backed virtual block devices}
  1.1105 -
  1.1106 -%% XXX Put some simple examples here - would be nice if an LVM user could
  1.1107 -%% contribute some, although obviously users would have to read the LVM
  1.1108 -%% docs to do advanced stuff.
  1.1109 +\end{small}
  1.1110 +\end{quote}
  1.1111 +
  1.1112 +The domain will need network access at boot time, so either statically
  1.1113 +configure an IP address (using the config variables \path{ip}, 
  1.1114 +\path{netmask}, \path{gateway}, \path{hostname}) or enable DHCP
  1.1115 +(\path{dhcp='dhcp'}).
  1.1116 +
  1.1117 +Note that the Linux NFS root implementation is known to have stability
  1.1118 +problems under high load (this is not a Xen-specific problem), so this
  1.1119 +configuration may not be appropriate for critical servers.
  1.1120 +
  1.1121  
  1.1122  \part{User Reference Documentation}
  1.1123  
  1.1124 @@ -942,7 +1186,7 @@ The Xen control software includes the \x
  1.1125  must be running), the xm command line tools, and the prototype 
  1.1126  xensv web interface. 
  1.1127  
  1.1128 -\section{\Xend (Node control daemon)}
  1.1129 +\section{\Xend (node control daemon)}
  1.1130  \label{s:xend}
  1.1131  
  1.1132  The Xen Daemon (\Xend) performs system management functions related to
  1.1133 @@ -971,7 +1215,7 @@ Once \xend is running, more sophisticate
  1.1134  using the xm tool (see Section~\ref{s:xm}) and the experimental
  1.1135  Xensv web interface (see Section~\ref{s:xensv}).
  1.1136  
  1.1137 -\section{Xm (Command line interface)}
  1.1138 +\section{Xm (command line interface)}
  1.1139  \label{s:xm}
  1.1140  
  1.1141  The xm tool is the primary tool for managing Xen from the console.
  1.1142 @@ -1022,7 +1266,7 @@ try
  1.1143  \end{verbatim}
  1.1144  \end{quote}
  1.1145  
  1.1146 -\section{Xensv (Web control interface)}
  1.1147 +\section{Xensv (web control interface)}
  1.1148  \label{s:xensv}
  1.1149  
  1.1150  Xensv is the experimental web control interface for managing a Xen
  1.1151 @@ -1076,7 +1320,9 @@ vif = [ 'mac=aa:00:00:00:00:11, bridge=x
  1.1152  \item[disk] List of block devices to export to the domain,  e.g. \\
  1.1153    \verb_disk = [ 'phy:hda1,sda1,r' ]_ \\
  1.1154    exports physical device \path{/dev/hda1} to the domain 
  1.1155 -  as \path{/dev/sda1} with read-only access. 
  1.1156 +  as \path{/dev/sda1} with read-only access. Exporting a disk read-write 
  1.1157 +  which is currently mounted is dangerous -- if you are \emph{certain}
  1.1158 +  you wish to do this, you can specify \path{w!} as the mode. 
  1.1159  \item[dhcp] Set to {\tt 'dhcp'} if you want to use DHCP to configure
  1.1160    networking. 
  1.1161  \item[netmask] Manually configured IP netmask.
  1.1162 @@ -1163,12 +1409,12 @@ according to the type of virtual device 
  1.1163  %% existing {\em virtual} devices (of the appropriate type) to that
  1.1164  %% backend.
  1.1165  
  1.1166 -Note that a block backend cannot import virtual block devices from
  1.1167 -other domains, and a network backend cannot import virtual network
  1.1168 -devices from other domains.  Thus (particularly in the case of block
  1.1169 -backends, which cannot import a virtual block device as their root
  1.1170 -filesystem), you may need to boot a backend domain from a ramdisk or a
  1.1171 -network device.
  1.1172 +Note that a block backend cannot currently import virtual block
  1.1173 +devices from other domains, and a network backend cannot import
  1.1174 +virtual network devices from other domains.  Thus (particularly in the
  1.1175 +case of block backends, which cannot import a virtual block device as
  1.1176 +their root filesystem), you may need to boot a backend domain from a
  1.1177 +ramdisk or a network device.
  1.1178  
  1.1179  Access to PCI devices may be configured on a per-device basis.  Xen
  1.1180  will assign the minimal set of hardware privileges to a domain that
  1.1181 @@ -1294,14 +1540,14 @@ slightly less than 100\% in order to ens
  1.1182    should be allowed a share of the system slack time.
  1.1183  \end{description}
  1.1184  
  1.1185 -\section{Round Robin}
  1.1186 +\subsection{Round Robin}
  1.1187  
  1.1188  {\tt sched=rrobin} \\
  1.1189  
  1.1190  The round robin scheduler is included as a simple demonstration of
  1.1191  Xen's internal scheduler API.  It is not intended for production use. 
  1.1192  
  1.1193 -\subsection{Global parameters}
  1.1194 +\subsubsection{Global Parameters}
  1.1195  
  1.1196  \begin{description}
  1.1197  \item[rr\_slice]
  1.1198 @@ -1325,7 +1571,7 @@ Xen's internal scheduler API.  It is not
  1.1199  This chapter describes the build- and boot-time options 
  1.1200  which may be used to tailor your Xen system. 
  1.1201  
  1.1202 -\section{Xen build options}
  1.1203 +\section{Xen Build Options}
  1.1204  
  1.1205  Xen provides a number of build-time options which should be 
  1.1206  set as environment variables or passed on make's command-line.  
  1.1207 @@ -1349,7 +1595,7 @@ events within Xen for collection by cont
  1.1208  software. 
  1.1209  \end{description} 
  1.1210  
  1.1211 -\section{Xen boot options}
  1.1212 +\section{Xen Boot Options}
  1.1213  \label{s:xboot}
  1.1214  
  1.1215  These options are used to configure Xen's behaviour at runtime.  They
  1.1216 @@ -1506,7 +1752,7 @@ that bug reports, suggestions and contri
  1.1217  software (or the documentation) should be sent to the Xen developers'
  1.1218  mailing list (address below).
  1.1219  
  1.1220 -\section{Other documentation}
  1.1221 +\section{Other Documentation}
  1.1222  
  1.1223  For developers interested in porting operating systems to Xen, the
  1.1224  {\em Xen Interface Manual} is distributed in the \path{docs/}
  1.1225 @@ -1515,7 +1761,7 @@ directory of the Xen source distribution
  1.1226  %Various HOWTOs are available in \path{docs/HOWTOS} but this content is
  1.1227  %being integrated into this manual.
  1.1228  
  1.1229 -\section{Online references}
  1.1230 +\section{Online References}
  1.1231  
  1.1232  The official Xen web site is found at:
  1.1233  \begin{quote}
  1.1234 @@ -1525,20 +1771,20 @@ The official Xen web site is found at:
  1.1235  This contains links to the latest versions of all on-line 
  1.1236  documentation. 
  1.1237  
  1.1238 -\section{Mailing lists}
  1.1239 +\section{Mailing Lists}
  1.1240  
  1.1241  There are currently three official Xen mailing lists:
  1.1242  
  1.1243  \begin{description}
  1.1244  \item[xen-devel@lists.sourceforge.net] Used for development
  1.1245  discussions and requests for help.  Subscribe at: \\
  1.1246 -{\small {\tt http://lists.sourceforge.net/mailman/listinfo/xen-devel}}
  1.1247 +\path{http://lists.sourceforge.net/mailman/listinfo/xen-devel}
  1.1248  \item[xen-announce@lists.sourceforge.net] Used for announcements only.
  1.1249  Subscribe at: \\
  1.1250 -{\small {\tt http://lists.sourceforge.net/mailman/listinfo/xen-announce}}
  1.1251 +\path{http://lists.sourceforge.net/mailman/listinfo/xen-announce}
  1.1252  \item[xen-changelog@lists.sourceforge.net]  Changelog feed
  1.1253  from the unstable and 2.0 trees - developer oriented.  Subscribe at: \\
  1.1254 -{\small {\tt http://lists.sourceforge.net/mailman/listinfo/xen-changelog}}
  1.1255 +\path{http://lists.sourceforge.net/mailman/listinfo/xen-changelog}
  1.1256  \end{description}
  1.1257  
  1.1258  Although there is no specific user support list, the developers try to
  1.1259 @@ -1548,12 +1794,12 @@ list increases, a dedicated user support
  1.1260  \appendix
  1.1261  
  1.1262  
  1.1263 -\chapter{Installing Debian}
  1.1264 -
  1.1265 -The Debian project provides a tool called {\small {\tt debootstrap}} which
  1.1266 +\chapter{Installing Xen / XenLinux on Debian}
  1.1267 +
  1.1268 +The Debian project provides a tool called \path{debootstrap} which
  1.1269  allows a base Debian system to be installed into a filesystem without
  1.1270  requiring the host system to have any Debian-specific software (such
  1.1271 -as {\small {\tt apt}}).
  1.1272 +as \path{apt}).
  1.1273  
  1.1274  Here's some info how to install Debian 3.1 (Sarge) for an unprivileged
  1.1275  Xen domain:
  1.1276 @@ -1585,11 +1831,11 @@ mkswap /path/swapimage
  1.1277  mount -o loop /path/diskimage /mnt/disk
  1.1278  \end{verbatim}\end{small}
  1.1279  
  1.1280 -\item Install {\small {\tt debootstrap}}
  1.1281 +\item Install \path{debootstrap}
  1.1282  
  1.1283  Make sure you have debootstrap installed on the host.  If you are
  1.1284  running Debian sarge (3.1 / testing) or unstable you can install it by
  1.1285 -running {\small {\tt apt-get install debootstrap}}.  Otherwise, it can be
  1.1286 +running \path{apt-get install debootstrap}.  Otherwise, it can be
  1.1287  downloaded from the Debian project website.
  1.1288  
  1.1289  \item Install Debian base to the disk image:
  1.1290 @@ -1667,7 +1913,7 @@ Started domain testdomain2, console on p
  1.1291  \end{verbatim}\end{small}
  1.1292          
  1.1293          There you can see the ID of the console: 26. You can also list
  1.1294 -        the consoles with {\small {\tt xm consoles}} (ID is the last two
  1.1295 +        the consoles with \path{xm consoles} (ID is the last two
  1.1296          digits of the port number.)
  1.1297  
  1.1298          Attach to the console:
  1.1299 @@ -1687,18 +1933,17 @@ xm console 26
  1.1300          errors.  Check that the swap is active, and the network settings are
  1.1301          correct.
  1.1302  
  1.1303 -        Run {\small {\tt/usr/sbin/base-config}} to set up the Debian settings.
  1.1304 +        Run \path{/usr/sbin/base-config} to set up the Debian settings.
  1.1305  
  1.1306          Set up the password for root using passwd.
  1.1307  
  1.1308 +\item     Done. You can exit the console by pressing \path{Ctrl + ]}.
  1.1309 +\item     Done. You can exit the console by pressing \path{Ctrl + ]}
  1.1310  
  1.1311  \end{enumerate}
  1.1312  
  1.1313  If you need to create new domains, you can just copy the contents of
  1.1314  the `template'-image to the new disk images, either by mounting the
  1.1315 -template and the new image, and using {\small {\tt cp -a}} or {\small
  1.1316 -    {\tt tar}} or by
  1.1317 +template and the new image, and using \path{cp -a} or \path{tar} or by
  1.1318  simply copying the image file.  Once this is done, modify the
  1.1319  image-specific settings (hostname, network settings, etc).
  1.1320  
  1.1321 @@ -1713,9 +1958,9 @@ to update the hwclock, change the consol
  1.1322  map, start apmd (power management), or gpm (mouse cursor).  Either
  1.1323  ignore the errors (they should be harmless), or remove them from the
  1.1324  startup scripts.  Deleting the following links is a good start:
  1.1325 -{\small\path{S24pcmcia}}, {\small\path{S09isdn}},
  1.1326 -{\small\path{S17keytable}}, {\small\path{S26apmd}},
  1.1327 -{\small\path{S85gpm}}.
  1.1328 +{\path{S24pcmcia}}, {\path{S09isdn}},
  1.1329 +{\path{S17keytable}}, {\path{S26apmd}},
  1.1330 +{\path{S85gpm}}.
  1.1331  
  1.1332  If you want to use a single root file system that works cleanly for
  1.1333  both domain 0 and unprivileged domains, a useful trick is to use
  1.1334 @@ -1726,14 +1971,14 @@ level number passed on the kernel comman
  1.1335  
  1.1336  If using NFS root file systems mounted either from an
  1.1337  external server or from domain0, there are a couple of other gotchas.
  1.1338 -The default {\small\path{/etc/sysconfig/iptables}} rules block NFS, so part
  1.1339 +The default {\path{/etc/sysconfig/iptables}} rules block NFS, so part
  1.1340  way through the boot sequence things will suddenly go dead.
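One fix is to let the client keep talking to its NFS server before the
firewall comes fully up. The fragment below shows the kind of rules
one might add to \path{/etc/sysconfig/iptables}; the server address is
a placeholder, and the standard portmapper (111) and NFS (2049) ports
are assumed, so adjust both for your site:

```shell
# Illustrative client-side additions to /etc/sysconfig/iptables
# (iptables-save format).  192.168.1.1 stands in for your NFS server.
-A INPUT -s 192.168.1.1 -p udp -m udp --sport 2049 -j ACCEPT
-A INPUT -s 192.168.1.1 -p udp -m udp --sport 111 -j ACCEPT
-A INPUT -s 192.168.1.1 -p tcp -m tcp --sport 2049 -j ACCEPT
```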
  1.1341  
  1.1342 -If you're planning on having a separate NFS {\small\path{/usr}} partition, the
  1.1343 +If you're planning on having a separate NFS {\path{/usr}} partition, the
  1.1344  RH9 boot scripts don't make life easy - they attempt to mount NFS file
  1.1345  systems way too late in the boot process. The easiest way I found to do
  1.1346 -this was to have a {\small\path{/linuxrc}} script run ahead of
  1.1347 -{\small\path{/sbin/init}} that mounts {\small\path{/usr}}:
  1.1348 +this was to have a {\path{/linuxrc}} script run ahead of
  1.1349 +{\path{/sbin/init}} that mounts {\path{/usr}}:
  1.1350  
  1.1351  \begin{quote}
  1.1352  \begin{small}\begin{verbatim}
  1.1353 @@ -1748,19 +1993,19 @@ this was to have a {\small\path{/linuxrc
  1.1354  %$ XXX SMH: font lock fix :-)  
  1.1355  
  1.1356  The one slight complication with the above is that
  1.1357 -{\small\path{/sbin/portmap}} is dynamically linked against
  1.1358 -{\small\path{/usr/lib/libwrap.so.0}} Since this is in
  1.1359 -{\small\path{/usr}}, it won't work. This can be solved by copying the
  1.1360 +{\path{/sbin/portmap}} is dynamically linked against
  1.1361 +{\path{/usr/lib/libwrap.so.0}}. Since this is in
  1.1362 +{\path{/usr}}, it won't work. This can be solved by copying the
  1.1363  file (and link) below the \path{/usr} mount point, and just let the file be
  1.1364  `covered' when the mount happens.
  1.1365  
  1.1366 -In some installations, where a shared read-only {\small\path{/usr}} is
  1.1367 +In some installations, where a shared read-only {\path{/usr}} is
  1.1368  being used, it may be desirable to move other large directories over
  1.1369 -into the read-only {\small\path{/usr}}. For example, you might replace
  1.1370 -{\small\path{/bin}}, {\small\path{/lib}} and {\small\path{/sbin}} with
  1.1371 -links into {\small\path{/usr/root/bin}}, {\small\path{/usr/root/lib}}
  1.1372 -and {\small\path{/usr/root/sbin}} respectively. This creates other
  1.1373 -problems for running the {\small\path{/linuxrc}} script, requiring
  1.1374 +into the read-only {\path{/usr}}. For example, you might replace
  1.1375 +{\path{/bin}}, {\path{/lib}} and {\path{/sbin}} with
  1.1376 +links into {\path{/usr/root/bin}}, {\path{/usr/root/lib}}
  1.1377 +and {\path{/usr/root/sbin}} respectively. This creates other
  1.1378 +problems for running the {\path{/linuxrc}} script, requiring
  1.1379  bash, portmap, mount, ifconfig, and a handful of other shared
  1.1380  libraries to be copied below the mount point --- a simple
  1.1381  statically-linked C program would solve this problem.
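As a sketch of that suggestion (not code from the Xen distribution): a
statically linked replacement for \path{/linuxrc} could run the mount
and then exec the real init, so no shared libraries need to be copied
below the mount point. The paths, and the reliance on a \path{mount}
binary still being reachable, are assumptions; a fully self-contained
version would issue the \path{mount(2)} system call itself.

```c
/* Hypothetical statically linked stand-in for the /linuxrc script:
 * mount the NFS file systems from /etc/fstab, then hand control to
 * the real init.  Build with:  gcc -static -o linuxrc linuxrc.c
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* mount everything of type nfs listed in /etc/fstab */
        execl("/bin/mount", "mount", "-a", "-t", "nfs", (char *)NULL);
        perror("execl /bin/mount");
        _exit(1);
    }
    if (pid > 0)
        waitpid(pid, NULL, 0);   /* don't race init against the mount */

    /* replace ourselves with the real init */
    execl("/sbin/init", "init", (char *)NULL);
    perror("execl /sbin/init");
    return 1;
}
```

This is a boot-time fragment, so it is shown as a sketch rather than
something to run on a live system.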