1 \documentclass[11pt,twoside,final,openright]{report}
2 \usepackage{a4,graphicx,html,parskip,setspace,times,xspace}
3 \setstretch{1.15}
6 \def\Xend{{Xend}\xspace}
7 \def\xend{{xend}\xspace}
9 \latexhtml{\newcommand{\path}[1]{{\small {\tt #1}}}}{\newcommand{\path}[1]{{\tt #1}}}
13 \begin{document}
15 % TITLE PAGE
16 \pagestyle{empty}
17 \begin{center}
18 \vspace*{\fill}
19 \includegraphics{figs/xenlogo.eps}
20 \vfill
21 \vfill
22 \vfill
23 \begin{tabular}{l}
24 {\Huge \bf Users' manual} \\[4mm]
25 {\huge Xen v2.0 for x86} \\[80mm]
27 {\Large Xen is Copyright (c) 2002-2004, The Xen Team} \\[3mm]
28 {\Large University of Cambridge, UK} \\[20mm]
29 \end{tabular}
30 \end{center}
32 {\bf
33 DISCLAIMER: This documentation is currently under active development
34 and as such there may be mistakes and omissions --- watch out for
35 these and please report any you find to the developer's mailing list.
36 Contributions of material, suggestions and corrections are welcome.
37 }
39 \vfill
40 \cleardoublepage
42 % TABLE OF CONTENTS
43 \pagestyle{plain}
44 \pagenumbering{roman}
45 { \parskip 0pt plus 1pt
46 \tableofcontents }
47 \cleardoublepage
49 % PREPARE FOR MAIN TEXT
50 \pagenumbering{arabic}
51 \raggedbottom
52 \widowpenalty=10000
53 \clubpenalty=10000
54 \parindent=0pt
55 \parskip=5pt
56 \renewcommand{\topfraction}{.8}
57 \renewcommand{\bottomfraction}{.8}
58 \renewcommand{\textfraction}{.2}
59 \renewcommand{\floatpagefraction}{.8}
60 \setstretch{1.1}
62 \part{Introduction and Tutorial}
63 \chapter{Introduction}
65 Xen is a {\em paravirtualising} virtual machine monitor (VMM), or
66 `hypervisor', for the x86 processor architecture. Xen can securely
67 execute multiple virtual machines on a single physical system with
68 close-to-native performance. The virtual machine technology
69 facilitates enterprise-grade functionality, including:
71 \begin{itemize}
72 \item Virtual machines with performance close to native
73 hardware.
74 \item Live migration of running virtual machines between physical hosts.
75 \item Excellent hardware support (supports most Linux device drivers).
76 \item Sandboxed, restartable device drivers.
77 \end{itemize}
79 Paravirtualisation permits very high performance virtualisation,
80 even on architectures like x86 that are traditionally
81 very hard to virtualise.
82 The drawback of this approach is that it requires operating systems to
83 be {\em ported} to run on Xen. Porting an OS to run on Xen is similar
to supporting a new hardware platform; however, the process
85 is simplified because the paravirtual machine architecture is very
86 similar to the underlying native hardware. Even though operating system
87 kernels must explicitly support Xen, a key feature is that user space
88 applications and libraries {\em do not} require modification.
Xen support is available for a growing number of operating systems:
91 right now, Linux 2.4, Linux 2.6 and NetBSD are available for Xen 2.0.
92 A FreeBSD port is undergoing testing and will be incorporated into the
93 release soon. Other OS ports, including Plan 9, are in progress. We
hope that the arch-xen patches will be incorporated into the
95 mainstream releases of these operating systems in due course (as has
96 already happened for NetBSD).
98 Possible usage scenarios for Xen include:
99 \begin{description}
100 \item [Kernel development.] Test and debug kernel modifications in a
101 sandboxed virtual machine --- no need for a separate test
102 machine.
103 \item [Multiple OS configurations.] Run multiple operating systems
104 simultaneously, for instance for compatibility or QA purposes.
105 \item [Server consolidation.] Move multiple servers onto a single
106 physical host with performance and fault isolation provided at
107 virtual machine boundaries.
108 \item [Cluster computing.] Management at VM granularity provides more
109 flexibility than separately managing each physical host, but
110 better control and isolation than single-system image solutions,
111 particularly by using live migration for load balancing.
112 \item [Hardware support for custom OSes.] Allow development of new OSes
113 while benefiting from the wide-ranging hardware support of
114 existing OSes such as Linux.
115 \end{description}
117 \section{Structure of a Xen-Based System}
119 A Xen system has multiple layers, the lowest and most privileged of
120 which is Xen itself.
121 Xen in turn may host multiple {\em guest} operating systems, each of
122 which is executed within a secure virtual machine (in Xen terminology,
123 a {\em domain}). Domains are scheduled by Xen to make effective use of
124 the available physical CPUs. Each guest OS manages its own
125 applications, which includes responsibility for scheduling each
126 application within the time allotted to the VM by Xen.
128 The first domain, {\em domain 0}, is created automatically when the
129 system boots and has special management privileges. Domain 0 builds
130 other domains and manages their virtual devices. It also performs
131 administrative tasks such as suspending, resuming and migrating other
132 virtual machines.
134 Within domain 0, a process called \emph{xend} runs to manage the system.
135 \Xend is responsible for managing virtual machines and providing access
136 to their consoles. Commands are issued to \xend over an HTTP
137 interface, either from a command-line tool or from a web browser.
139 \section{Hardware Support}
141 Xen currently runs only on the x86 architecture, requiring a `P6' or
142 newer processor (e.g. Pentium Pro, Celeron, Pentium II, Pentium III,
143 Pentium IV, Xeon, AMD Athlon, AMD Duron). Multiprocessor machines are
144 supported, and we also have basic support for HyperThreading (SMT),
145 although this remains a topic for ongoing research. A port
146 specifically for x86/64 is in progress, although Xen already runs on
147 such systems in 32-bit legacy mode. In addition a port to the IA64
148 architecture is approaching completion. We hope to add other
149 architectures such as PPC and ARM in due course.
152 Xen can currently use up to 4GB of memory. It is possible for x86
machines to address up to 64GB of physical memory, but there are no
current plans to support these systems: the x86/64 port is the
155 planned route to supporting larger memory sizes.
157 Xen offloads most of the hardware support issues to the guest OS
158 running in Domain~0. Xen itself contains only the code required to
159 detect and start secondary processors, set up interrupt routing, and
160 perform PCI bus enumeration. Device drivers run within a privileged
161 guest OS rather than within Xen itself. This approach provides
162 compatibility with the majority of device hardware supported by Linux.
163 The default XenLinux build contains support for relatively modern
164 server-class network and disk hardware, but you can add support for
165 other hardware by configuring your XenLinux kernel in the normal way.
167 \section{History}
169 Xen was originally developed by the Systems Research Group at the
170 University of Cambridge Computer Laboratory as part of the XenoServers
171 project, funded by the UK-EPSRC.
172 XenoServers aim to provide a `public infrastructure for
173 global distributed computing', and Xen plays a key part in that,
174 allowing us to efficiently partition a single machine to enable
175 multiple independent clients to run their operating systems and
176 applications in an environment providing protection, resource
177 isolation and accounting. The project web page contains further
178 information along with pointers to papers and technical reports:
179 \path{http://www.cl.cam.ac.uk/xeno}
181 Xen has since grown into a fully-fledged project in its own right,
182 enabling us to investigate interesting research issues regarding the
183 best techniques for virtualising resources such as the CPU, memory,
184 disk and network. The project has been bolstered by support from
185 Intel Research Cambridge, and HP Labs, who are now working closely
186 with us.
188 Xen was first described in a paper presented at SOSP in
189 2003\footnote{\tt
190 http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf}, and the first
191 public release (1.0) was made that October. Since then, Xen has
192 significantly matured and is now used in production scenarios on
193 many sites.
195 Xen 2.0 features greatly enhanced hardware support, configuration
196 flexibility, usability and a larger complement of supported operating
197 systems. This latest release takes Xen a step closer to becoming the
198 definitive open source solution for virtualisation.
200 \chapter{Installation}
202 The Xen distribution includes three main components: Xen itself, ports
203 of Linux 2.4 and 2.6 and NetBSD to run on Xen, and the user-space
204 tools required to manage a Xen-based system. This chapter describes
205 how to install the Xen 2.0 distribution from source. Alternatively,
206 there may be pre-built packages available as part of your operating
207 system distribution.
209 \section{Prerequisites}
210 \label{sec:prerequisites}
212 The following is a full list of prerequisites. Items marked `$\dag$'
213 are required by the \xend control tools, and hence required if you
214 want to run more than one virtual machine; items marked `$*$' are only
215 required if you wish to build from source.
216 \begin{itemize}
217 \item A working Linux distribution using the GRUB bootloader and
218 running on a P6-class (or newer) CPU.
219 \item [$\dag$] The \path{iproute2} package.
220 \item [$\dag$] The Linux bridge-utils\footnote{Available from
221 {\tt http://bridge.sourceforge.net}} (e.g., \path{/sbin/brctl})
222 \item [$\dag$] An installation of Twisted v1.3 or
223 above\footnote{Available from {\tt
224 http://www.twistedmatrix.com}}. There may be a binary package
225 available for your distribution; alternatively it can be installed by
226 running `{\sl make install-twisted}' in the root of the Xen source
227 tree.
228 \item [$*$] Build tools (gcc v3.2.x or v3.3.x, binutils, GNU make).
229 \item [$*$] Development installation of libcurl (e.g., libcurl-devel)
230 \item [$*$] Development installation of zlib (e.g., zlib-dev).
231 \item [$*$] Development installation of Python v2.2 or later (e.g., python-dev).
232 \item [$*$] \LaTeX and transfig are required to build the documentation.
233 \end{itemize}
235 Once you have satisfied the relevant prerequisites, you can
236 now install either a binary or source distribution of Xen.
238 \section{Installing from Binary Tarball}
240 Pre-built tarballs are available for download from the Xen
241 download page
242 \begin{quote}
243 {\tt http://xen.sf.net}
244 \end{quote}
246 Once you've downloaded the tarball, simply unpack and install:
247 \begin{verbatim}
248 # tar zxvf xen-2.0-install.tgz
249 # cd xen-2.0-install
250 # sh ./install.sh
251 \end{verbatim}
253 Once you've installed the binaries you need to configure
254 your system as described in Section~\ref{s:configure}.
256 \section{Installing from Source}
258 This section describes how to obtain, build, and install
259 Xen from source.
261 \subsection{Obtaining the Source}
263 The Xen source tree is available as either a compressed source tar
264 ball or as a clone of our master BitKeeper repository.
266 \begin{description}
267 \item[Obtaining the Source Tarball]\mbox{} \\
268 Stable versions (and daily snapshots) of the Xen source tree are
269 available as compressed tarballs from the Xen download page
270 \begin{quote}
271 {\tt http://xen.sf.net}
272 \end{quote}
274 \item[Using BitKeeper]\mbox{} \\
275 If you wish to install Xen from a clone of our latest BitKeeper
276 repository then you will need to install the BitKeeper tools.
277 Download instructions for BitKeeper can be obtained by filling out the
278 form at:
280 \begin{quote}
281 {\tt http://www.bitmover.com/cgi-bin/download.cgi}
282 \end{quote}
283 The public master BK repository for the 2.0 release lives at:
284 \begin{quote}
285 {\tt bk://xen.bkbits.net/xen-2.0.bk}
286 \end{quote}
287 You can use BitKeeper to
288 download it and keep it updated with the latest features and fixes.
290 Change to the directory in which you want to put the source code, then
291 run:
292 \begin{verbatim}
293 # bk clone bk://xen.bkbits.net/xen-2.0.bk
294 \end{verbatim}
296 Under your current directory, a new directory named \path{xen-2.0.bk}
297 has been created, which contains all the source code for Xen, the OS
298 ports, and the control tools. You can update your repository with the
299 latest changes at any time by running:
300 \begin{verbatim}
301 # cd xen-2.0.bk # to change into the local repository
302 # bk pull # to update the repository
303 \end{verbatim}
304 \end{description}
306 %\section{The distribution}
307 %
308 %The Xen source code repository is structured as follows:
309 %
310 %\begin{description}
311 %\item[\path{tools/}] Xen node controller daemon (Xend), command line tools,
312 % control libraries
313 %\item[\path{xen/}] The Xen VMM.
314 %\item[\path{linux-*-xen-sparse/}] Xen support for Linux.
315 %\item[\path{linux-*-patches/}] Experimental patches for Linux.
316 %\item[\path{netbsd-*-xen-sparse/}] Xen support for NetBSD.
317 %\item[\path{docs/}] Various documentation files for users and developers.
318 %\item[\path{extras/}] Bonus extras.
319 %\end{description}
321 \subsection{Building from Source}
323 The top-level Xen Makefile includes a target `world' that will do the
324 following:
326 \begin{itemize}
327 \item Build Xen
328 \item Build the control tools, including \xend
329 \item Download (if necessary) and unpack the Linux 2.6 source code,
330 and patch it for use with Xen
331 \item Build a Linux kernel to use in domain 0 and a smaller
332 unprivileged kernel, which can optionally be used for
333 unprivileged virtual machines.
334 \end{itemize}
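Assuming the prerequisites listed in Section~\ref{sec:prerequisites}
are installed, the entire build can be started from the top of the
source tree with:
\begin{quote}
\begin{verbatim}
# make world
\end{verbatim}
\end{quote}
(On a multiprocessor build host you may wish to pass a \path{-j}
argument to make to get a parallel build.)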
337 After the build has completed you should have a top-level
338 directory called \path{dist/} in which all resulting targets
will be placed; of particular interest are the two XenLinux kernel
images, one with a `-xen0' extension
341 which contains hardware device drivers and drivers for Xen's virtual
342 devices, and one with a `-xenU' extension that just contains the
343 virtual ones. These are found in \path{dist/install/boot/} along
344 with the image for Xen itself and the configuration files used
345 during the build.
347 The NetBSD port can be built using:
348 \begin{quote}
349 \begin{verbatim}
350 # make netbsd20
351 \end{verbatim}
352 \end{quote}
The NetBSD port is built using a snapshot of the netbsd-2-0 CVS branch.
354 The snapshot is downloaded as part of the build process, if it is not
355 yet present in the \path{NETBSD\_SRC\_PATH} search path. The build
356 process also downloads a toolchain which includes all the tools
357 necessary to build the NetBSD kernel under Linux.
To further customize the set of kernels built, edit
the top-level Makefile. Look for the line:
362 \begin{quote}
363 \begin{verbatim}
364 KERNELS ?= mk.linux-2.6-xen0 mk.linux-2.6-xenU
365 \end{verbatim}
366 \end{quote}
368 You can edit this line to include any set of operating system kernels
369 which have configurations in the top-level \path{buildconfigs/}
370 directory, for example \path{mk.linux-2.4-xenU} to build a Linux 2.4
371 kernel containing only virtual device drivers.
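For example, to also build the Linux 2.4 unprivileged kernel just
mentioned, the line might become:
\begin{quote}
\begin{verbatim}
KERNELS ?= mk.linux-2.6-xen0 mk.linux-2.6-xenU mk.linux-2.4-xenU
\end{verbatim}
\end{quote}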
373 %% Inspect the Makefile if you want to see what goes on during a build.
374 %% Building Xen and the tools is straightforward, but XenLinux is more
375 %% complicated. The makefile needs a `pristine' Linux kernel tree to which
376 %% it will then add the Xen architecture files. You can tell the
377 %% makefile the location of the appropriate Linux compressed tar file by
378 %% setting the LINUX\_SRC environment variable, e.g. \\
379 %% \verb!# LINUX_SRC=/tmp/linux-2.6.9.tar.bz2 make world! \\ or by
380 %% placing the tar file somewhere in the search path of {\tt
381 %% LINUX\_SRC\_PATH} which defaults to `{\tt .:..}'. If the makefile
382 %% can't find a suitable kernel tar file it attempts to download it from
383 %% kernel.org (this won't work if you're behind a firewall).
385 %% After untaring the pristine kernel tree, the makefile uses the {\tt
386 %% mkbuildtree} script to add the Xen patches to the kernel.
389 %% The procedure is similar to build the Linux 2.4 port: \\
390 %% \verb!# LINUX_SRC=/path/to/linux2.4/source make linux24!
393 %% \framebox{\parbox{5in}{
394 %% {\bf Distro specific:} \\
395 %% {\it Gentoo} --- if not using udev (most installations, currently), you'll need
396 %% to enable devfs and devfs mount at boot time in the xen0 config.
397 %% }}
399 \subsection{Custom XenLinux Builds}
401 % If you have an SMP machine you may wish to give the {\tt '-j4'}
402 % argument to make to get a parallel build.
404 If you wish to build a customized XenLinux kernel (e.g. to support
405 additional devices or enable distribution-required features), you can
406 use the standard Linux configuration mechanisms, specifying that the
architecture being built for is \path{xen}, e.g.:
408 \begin{quote}
409 \begin{verbatim}
410 # cd linux-2.6.9-xen0
411 # make ARCH=xen xconfig
412 # cd ..
413 # make
414 \end{verbatim}
415 \end{quote}
417 You can also copy an existing Linux configuration (\path{.config})
418 into \path{linux-2.6.9-xen0} and execute:
419 \begin{quote}
420 \begin{verbatim}
421 # make ARCH=xen oldconfig
422 \end{verbatim}
423 \end{quote}
425 You may be prompted with some Xen-specific options; we
426 advise accepting the defaults for these options.
428 Note that the only difference between the two types of Linux kernel
that are built is the configuration file used for each. The `U'
suffixed (unprivileged) versions don't contain any of the physical
431 hardware device drivers, leading to a 30\% reduction in size; hence
432 you may prefer these for your non-privileged domains. The `0'
433 suffixed privileged versions can be used to boot the system, as well
434 as in driver domains and unprivileged domains.
437 \subsection{Installing the Binaries}
440 The files produced by the build process are stored under the
441 \path{dist/install/} directory. To install them in their default
442 locations, do:
443 \begin{quote}
444 \begin{verbatim}
445 # make install
446 \end{verbatim}
447 \end{quote}
450 Alternatively, users with special installation requirements may wish
451 to install them manually by copying the files to their appropriate
452 destinations.
454 %% Files in \path{install/boot/} include:
455 %% \begin{itemize}
456 %% \item \path{install/boot/xen.gz} The Xen 'kernel'
457 %% \item \path{install/boot/vmlinuz-2.6.9-xen0} Domain 0 XenLinux kernel
458 %% \item \path{install/boot/vmlinuz-2.6.9-xenU} Unprivileged XenLinux kernel
459 %% \end{itemize}
461 The \path{dist/install/boot} directory will also contain the config files
462 used for building the XenLinux kernels, and also versions of Xen and
463 XenLinux kernels that contain debug symbols (\path{xen-syms} and
464 \path{vmlinux-syms-2.6.9-xen0}) which are essential for interpreting crash
465 dumps. Retain these files as the developers may wish to see them if
466 you post on the mailing list.
472 \section{Configuration}
473 \label{s:configure}
474 Once you have built and installed the Xen distribution, it is
475 simple to prepare the machine for booting and running Xen.
477 \subsection{GRUB Configuration}
479 An entry should be added to \path{grub.conf} (often found under
480 \path{/boot/} or \path{/boot/grub/}) to allow Xen / XenLinux to boot.
481 This file is sometimes called \path{menu.lst}, depending on your
482 distribution. The entry should look something like the following:
484 {\small
485 \begin{verbatim}
486 title Xen 2.0 / XenLinux 2.6.9
487 kernel /boot/xen.gz dom0_mem=131072
488 module /boot/vmlinuz-2.6.9-xen0 root=/dev/sda4 ro console=tty0
489 \end{verbatim}
490 }
492 The kernel line tells GRUB where to find Xen itself and what boot
parameters should be passed to it (in this case, setting domain 0's
memory allocation). For more
495 details on the various Xen boot parameters see Section~\ref{s:xboot}.
497 The module line of the configuration describes the location of the
498 XenLinux kernel that Xen should start and the parameters that should
499 be passed to it (these are standard Linux parameters, identifying the
500 root device and specifying it be initially mounted read only and
501 instructing that console output be sent to the screen). Some
502 distributions such as SuSE do not require the \path{ro} parameter.
504 %% \framebox{\parbox{5in}{
505 %% {\bf Distro specific:} \\
506 %% {\it SuSE} --- Omit the {\tt ro} option from the XenLinux kernel
507 %% command line, since the partition won't be remounted rw during boot.
508 %% }}
511 If you want to use an initrd, just add another \path{module} line to
512 the configuration, as usual:
513 {\small
514 \begin{verbatim}
515 module /boot/my_initrd.gz
516 \end{verbatim}
517 }
519 As always when installing a new kernel, it is recommended that you do
520 not delete existing menu options from \path{menu.lst} --- you may want
521 to boot your old Linux kernel in future, particularly if you
522 have problems.
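Putting the pieces together, a complete GRUB entry using an initrd
might look like the following (the kernel version, paths and root
device shown here are illustrative and should match your own
installation):
{\small
\begin{verbatim}
title Xen 2.0 / XenLinux 2.6.9
kernel /boot/xen.gz dom0_mem=131072
module /boot/vmlinuz-2.6.9-xen0 root=/dev/sda4 ro console=tty0
module /boot/my_initrd.gz
\end{verbatim}
}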
525 \subsection{Serial Console (optional)}
527 %% kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1
528 %% module /boot/vmlinuz-2.6.9-xen0 root=/dev/sda4 ro
531 In order to configure Xen serial console output, it is necessary to add
a boot option to your GRUB config; e.g. replace the above kernel line
533 with:
534 \begin{quote}
535 {\small
536 \begin{verbatim}
537 kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1
538 \end{verbatim}}
539 \end{quote}
541 This configures Xen to output on COM1 at 115,200 baud, 8 data bits,
1 stop bit and no parity. Modify these parameters to suit your setup.
544 One can also configure XenLinux to share the serial console; to
545 achieve this append ``\path{console=ttyS0}'' to your
546 module line.
549 If you wish to be able to log in over the XenLinux serial console it
550 is necessary to add a line into \path{/etc/inittab}, just as per
551 regular Linux. Simply add the line:
552 \begin{quote}
553 {\small
554 {\tt c:2345:respawn:/sbin/mingetty ttyS0}
555 }
556 \end{quote}
and you should be able to log in. Note that on most modern
distributions, logging in as root over the serial line also requires
adding \path{ttyS0} to \path{/etc/securetty}.
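On such a distribution this just means appending a line to that file
(assuming \path{ttyS0} is not already listed):
\begin{quote}
{\small
\begin{verbatim}
# echo "ttyS0" >> /etc/securetty
\end{verbatim}}
\end{quote}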
562 \subsection{TLS Libraries}
564 Users of the XenLinux 2.6 kernel should disable Thread Local Storage
565 (e.g.\ by doing a \path{mv /lib/tls /lib/tls.disabled}) before
566 attempting to run with a XenLinux kernel\footnote{If you boot without first
567 disabling TLS, you will get a warning message during the boot
568 process. In this case, simply perform the rename after the machine is
569 up and then run \texttt{/sbin/ldconfig} to make it take effect.}. You can
570 always reenable it by restoring the directory to its original location
571 (i.e.\ \path{mv /lib/tls.disabled /lib/tls}).
573 The reason for this is that the current TLS implementation uses
574 segmentation in a way that is not permissible under Xen. If TLS is
575 not disabled, an emulation mode is used within Xen which reduces
576 performance substantially.
578 We hope that this issue can be resolved by working with Linux
579 distribution vendors to implement a minor backward-compatible change
580 to the TLS library.
582 \section{Booting Xen}
584 It should now be possible to restart the system and use Xen. Reboot
585 as usual but choose the new Xen option when the Grub screen appears.
587 What follows should look much like a conventional Linux boot. The
588 first portion of the output comes from Xen itself, supplying low level
589 information about itself and the machine it is running on. The
590 following portion of the output comes from XenLinux.
592 You may see some errors during the XenLinux boot. These are not
593 necessarily anything to worry about --- they may result from kernel
594 configuration differences between your XenLinux kernel and the one you
595 usually use.
597 When the boot completes, you should be able to log into your system as
598 usual. If you are unable to log in to your system running Xen, you
599 should still be able to reboot with your normal Linux kernel.
602 \chapter{Starting Additional Domains}
The first step in creating a new domain is to prepare a root
filesystem for it to boot from. Typically, this might be stored in a
normal partition, an LVM or other volume manager partition, a disk
file or on an NFS server. A simple way to do this is to boot
from your standard OS install CD and install the distribution into
another partition on your hard drive.
611 To start the \xend control daemon, type
612 \begin{quote}
613 \verb!# xend start!
614 \end{quote}
615 If you
616 wish the daemon to start automatically, see the instructions in
617 Section~\ref{s:xend}. Once the daemon is running, you can use the
618 \path{xm} tool to monitor and maintain the domains running on your
619 system. This chapter provides only a brief tutorial: we provide full
620 details of the \path{xm} tool in the next chapter.
622 %\section{From the web interface}
623 %
624 %Boot the Xen machine and start Xensv (see Chapter~\ref{cha:xensv} for
625 %more details) using the command: \\
626 %\verb_# xensv start_ \\
627 %This will also start Xend (see Chapter~\ref{cha:xend} for more information).
628 %
629 %The domain management interface will then be available at {\tt
630 %http://your\_machine:8080/}. This provides a user friendly wizard for
631 %starting domains and functions for managing running domains.
632 %
633 %\section{From the command line}
636 \section{Creating a Domain Configuration File}
638 Before you can start an additional domain, you must create a
639 configuration file. We provide two example files which you
640 can use as a starting point:
641 \begin{itemize}
642 \item \path{/etc/xen/xmexample1} is a simple template configuration file
643 for describing a single VM.
\item \path{/etc/xen/xmexample2} is a template description that
646 is intended to be reused for multiple virtual machines. Setting
647 the value of the \path{vmid} variable on the \path{xm} command line
648 fills in parts of this template.
649 \end{itemize}
651 Copy one of these files and edit it as appropriate.
652 Typical values you may wish to edit include:
654 \begin{quote}
655 \begin{description}
656 \item[kernel] Set this to the path of the kernel you compiled for use
657 with Xen (e.g.\ \path{kernel = '/boot/vmlinuz-2.6.9-xenU'})
658 \item[memory] Set this to the size of the domain's memory in
659 megabytes (e.g.\ \path{memory = 64})
660 \item[disk] Set the first entry in this list to calculate the offset
661 of the domain's root partition, based on the domain ID. Set the
662 second to the location of \path{/usr} if you are sharing it between
663 domains (e.g.\ \path{disk = ['phy:your\_hard\_drive\%d,sda1,w' \%
(base\_partition\_number + vmid), 'phy:your\_usr\_partition,sda6,r' ]})
665 \item[dhcp] Uncomment the dhcp variable, so that the domain will
666 receive its IP address from a DHCP server (e.g.\ \path{dhcp='dhcp'})
667 \end{description}
668 \end{quote}
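As a concrete sketch, a minimal configuration built from these settings
might look like the following (all names, paths and device numbers are
examples only):
\begin{quote}
\begin{verbatim}
kernel = '/boot/vmlinuz-2.6.9-xenU'
memory = 64
name   = 'myvm'
disk   = ['phy:sda7,sda1,w']
dhcp   = 'dhcp'
\end{verbatim}
\end{quote}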
670 You may also want to edit the {\bf vif} variable in order to choose
671 the MAC address of the virtual ethernet interface yourself. For
672 example:
673 \begin{quote}
674 \verb_vif = ['mac=00:06:AA:F6:BB:B3']_
675 \end{quote}
676 If you do not set this variable, \xend will automatically generate a
677 random MAC address from an unused range.
680 \section{Booting the Domain}
682 The \path{xm} tool provides a variety of commands for managing domains.
683 Use the \path{create} command to start new domains. Assuming you've
684 created a configuration file \path{myvmconf} based around
685 \path{/etc/xen/xmexample2}, to start a domain with virtual
686 machine ID~1 you should type:
688 \begin{quote}
689 \begin{verbatim}
690 # xm create -c myvmconf vmid=1
691 \end{verbatim}
692 \end{quote}
695 The \path{-c} switch causes \path{xm} to turn into the domain's
696 console after creation. The \path{vmid=1} sets the \path{vmid}
697 variable used in the \path{myvmconf} file.
700 You should see the console boot messages from the new domain
701 appearing in the terminal in which you typed the command,
702 culminating in a login prompt.
705 \section{Example: ttylinux}
707 Ttylinux is a very small Linux distribution, designed to require very
708 few resources. We will use it as a concrete example of how to start a
709 Xen domain. Most users will probably want to install a full-featured
710 distribution once they have mastered the basics\footnote{ttylinux is
711 maintained by Pascal Schmidt. You can download source packages from
712 the distribution's home page: {\tt http://www.minimalinux.org/ttylinux/}}.
714 \begin{enumerate}
715 \item Download and extract the ttylinux disk image from the Files
716 section of the project's SourceForge site (see
717 \path{http://sf.net/projects/xen/}).
718 \item Create a configuration file like the following:
719 \begin{verbatim}
720 kernel = "/boot/vmlinuz-2.6.9-xenU"
721 memory = 64
722 name = "ttylinux"
723 nics = 1
724 ip = "1.2.3.4"
725 disk = ['file:/path/to/ttylinux/rootfs,sda1,w']
726 root = "/dev/sda1 ro"
727 \end{verbatim}
728 \item Now start the domain and connect to its console:
729 \begin{verbatim}
730 xm create configfile -c
731 \end{verbatim}
\item Log in as root, using the password `root'.
733 \end{enumerate}
736 \section{Starting / Stopping Domains Automatically}
738 It is possible to have certain domains start automatically at boot
739 time and to have dom0 wait for all running domains to shutdown before
740 it shuts down the system.
To specify that a domain should start at boot time, place its
configuration file (or a link to it) under \path{/etc/xen/auto/}.
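For example, assuming the \path{myvmconf} file created earlier lives in
\path{/etc/xen/}:
\begin{quote}
\begin{verbatim}
# ln -s /etc/xen/myvmconf /etc/xen/auto/myvmconf
\end{verbatim}
\end{quote}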
745 A Sys-V style init script for RedHat and LSB-compliant systems is
746 provided and will be automatically copied to \path{/etc/init.d/}
747 during install. You can then enable it in the appropriate way for
748 your distribution.
750 For instance, on RedHat:
752 \begin{quote}
753 \verb_# chkconfig --add xendomains_
754 \end{quote}
756 By default, this will start the boot-time domains in runlevels 3, 4
757 and 5.
759 You can also use the \path{service} command to run this script
760 manually, e.g:
762 \begin{quote}
763 \verb_# service xendomains start_
765 Starts all the domains with config files under /etc/xen/auto/.
766 \end{quote}
769 \begin{quote}
770 \verb_# service xendomains stop_
772 Shuts down ALL running Xen domains.
773 \end{quote}
775 \chapter{Domain Management Tools}
777 The previous chapter described a simple example of how to configure
778 and start a domain. This chapter summarises the tools available to
779 manage running domains.
781 \section{Command-line Management}
783 Command line management tasks are also performed using the \path{xm}
784 tool. For online help for the commands available, type:
785 \begin{quote}
786 \verb_# xm help_
787 \end{quote}
789 You can also type \path{xm help $<$command$>$} for more information
790 on a given command.
792 \subsection{Basic Management Commands}
794 The most important \path{xm} commands are:
795 \begin{quote}
\verb_# xm list_: Lists all domains running.\\
\verb_# xm consoles_: Gives information about the domain consoles.\\
\verb_# xm console_: Opens a console to a domain (e.g.\
\verb_# xm console myVM_).
800 \end{quote}
802 \subsection{\tt xm list}
804 The output of \path{xm list} is in rows of the following format:
805 \begin{center}
806 {\tt name domid memory cpu state cputime console}
807 \end{center}
809 \begin{quote}
810 \begin{description}
811 \item[name] The descriptive name of the virtual machine.
\item[domid] The ID of the domain this virtual machine is running in.
813 \item[memory] Memory size in megabytes.
814 \item[cpu] The CPU this domain is running on.
815 \item[state] Domain state consists of 5 fields:
816 \begin{description}
817 \item[r] running
818 \item[b] blocked
819 \item[p] paused
820 \item[s] shutdown
821 \item[c] crashed
822 \end{description}
823 \item[cputime] How much CPU time (in seconds) the domain has used so far.
824 \item[console] TCP port accepting connections to the domain's console.
825 \end{description}
826 \end{quote}
828 The \path{xm list} command also supports a long output format when the
\path{-l} switch is used. This outputs the full details of the
830 running domains in \xend's SXP configuration format.
832 For example, suppose the system is running the ttylinux domain as
833 described earlier. The list command should produce output somewhat
834 like the following:
835 \begin{verbatim}
836 # xm list
837 Name Id Mem(MB) CPU State Time(s) Console
838 Domain-0 0 251 0 r---- 172.2
839 ttylinux 5 63 0 -b--- 3.0 9605
840 \end{verbatim}
842 Here we can see the details for the ttylinux domain, as well as for
843 domain 0 (which, of course, is always running). Note that the console
port for the ttylinux domain is 9605. This can be connected to over TCP
845 using a terminal program (e.g. \path{telnet} or, better,
846 \path{xencons}). The simplest way to connect is to use the \path{xm console}
847 command, specifying the domain name or ID. To connect to the console
848 of the ttylinux domain, we could use any of the following:
849 \begin{verbatim}
850 # xm console ttylinux
851 # xm console 5
852 # xencons localhost 9605
853 \end{verbatim}
855 \section{Domain Save and Restore}
857 The administrator of a Xen system may suspend a virtual machine's
858 current state into a disk file in domain 0, allowing it to be resumed
859 at a later time.
861 The ttylinux domain described earlier can be suspended to disk using
862 the command:
863 \begin{verbatim}
864 # xm save ttylinux ttylinux.xen
865 \end{verbatim}
867 This will stop the domain named `ttylinux' and save its current state
868 into a file called \path{ttylinux.xen}.
870 To resume execution of this domain, use the \path{xm restore} command:
871 \begin{verbatim}
872 # xm restore ttylinux.xen
873 \end{verbatim}
875 This will restore the state of the domain and restart it. The domain
876 will carry on as before and the console may be reconnected using the
877 \path{xm console} command, as above.
879 \section{Live Migration}
881 Live migration is used to transfer a domain between physical hosts
882 whilst that domain continues to perform its usual activities --- from
883 the user's perspective, the migration should be imperceptible.
885 To perform a live migration, both hosts must be running Xen / \xend and
886 the destination host must have sufficient resources (e.g. memory
887 capacity) to accommodate the domain after the move. Furthermore we
888 currently require both source and destination machines to be on the
889 same L2 subnet.
891 Currently, there is no support for providing automatic remote access
892 to filesystems stored on local disk when a domain is migrated.
893 Administrators should choose an appropriate storage solution
894 (i.e. SAN, NAS, etc.) to ensure that domain filesystems are also
895 available on their destination node. GNBD is a good method for
896 exporting a volume from one machine to another. iSCSI can do a similar
897 job, but is more complex to set up.
When a domain migrates, its MAC and IP address move with it; thus it
900 is only possible to migrate VMs within the same layer-2 network and IP
901 subnet. If the destination node is on a different subnet, the
902 administrator would need to manually configure a suitable etherip or
903 IP tunnel in the domain 0 of the remote node.
905 A domain may be migrated using the \path{xm migrate} command. To
906 live migrate a domain to another machine, we would use
907 the command:
909 \begin{verbatim}
910 # xm migrate --live mydomain destination.ournetwork.com
911 \end{verbatim}
913 Without the \path{--live} flag, \xend simply stops the domain and
914 copies the memory image over to the new node and restarts it. Since
915 domains can have large allocations this can be quite time consuming,
916 even on a Gigabit network. With the \path{--live} flag \xend attempts
917 to keep the domain running while the migration is in progress,
918 resulting in typical `downtimes' of just 60--300ms.
920 For now it will be necessary to reconnect to the domain's console on
921 the new machine using the \path{xm console} command. If a migrated
922 domain has any open network connections then they will be preserved,
923 so SSH connections do not have this limitation.
925 \section{Managing Domain Memory}
927 XenLinux domains have the ability to relinquish / reclaim machine
928 memory at the request of the administrator or the user of the domain.
930 \subsection{Setting memory footprints from dom0}
932 The machine administrator can request that a domain alter its memory
933 footprint using the \path{xm balloon} command. For instance, we can
934 request that our example ttylinux domain reduce its memory footprint
935 to 32 megabytes.
937 \begin{verbatim}
938 # xm balloon ttylinux 32
939 \end{verbatim}
941 We can now see the result of this in the output of \path{xm list}:
943 \begin{verbatim}
944 # xm list
945 Name Id Mem(MB) CPU State Time(s) Console
946 Domain-0 0 251 0 r---- 172.2
947 ttylinux 5 31 0 -b--- 4.3 9605
948 \end{verbatim}
950 The domain has responded to the request by returning memory to Xen. We
951 can restore the domain to its original size using the command line:
953 \begin{verbatim}
954 # xm balloon ttylinux 64
955 \end{verbatim}
957 \subsection{Setting memory footprints from within a domain}
959 The virtual file \path{/proc/xen/memory\_target} allows the owner of a
960 domain to adjust their own memory footprint. Reading the file
961 (e.g. \path{cat /proc/xen/memory\_target}) prints out the current
962 memory footprint of the domain. Writing the file
963 (e.g. \path{echo new\_target > /proc/xen/memory\_target}) requests
964 that the kernel adjust the domain's memory footprint to a new value.
966 \subsection{Setting memory limits}
968 Xen associates a memory size limit with each domain. By default, this
969 is the amount of memory the domain is originally started with,
970 preventing the domain from ever growing beyond this size. To permit a
971 domain to grow beyond its original allocation or to prevent a domain
972 you've shrunk from reclaiming the memory it relinquished, use the
973 \path{xm maxmem} command.
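For example, to raise the limit for the ttylinux domain so that it can
later be ballooned up to 128MB, an invocation along the following lines
should work (check \path{xm help maxmem} for the exact argument
format):
\begin{quote}
\begin{verbatim}
# xm maxmem ttylinux 128
\end{verbatim}
\end{quote}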
975 \chapter{Domain Filesystem Storage}
977 It is possible to directly export any Linux block device in dom0 to
978 another domain, or to export filesystems / devices to virtual machines
979 using standard network protocols (e.g. NBD, iSCSI, NFS, etc). This
980 chapter covers some of the possibilities.
983 \section{Exporting Physical Devices as VBDs}
984 \label{s:exporting-physical-devices-as-vbds}
986 One of the simplest configurations is to directly export
987 individual partitions from domain 0 to other domains. To
988 achieve this use the \path{phy:} specifier in your domain
989 configuration file. For example a line like
990 \begin{quote}
991 \verb_disk = ['phy:hda3,sda1,w']_
992 \end{quote}
993 specifies that the partition \path{/dev/hda3} in domain 0
994 should be exported read-write to the new domain as \path{/dev/sda1};
995 one could equally well export it as \path{/dev/hda} or
996 \path{/dev/sdb5} should one wish.
998 In addition to local disks and partitions, it is possible to export
999 any device that Linux considers to be ``a disk'' in the same manner.
1000 For example, if you have iSCSI disks or GNBD volumes imported into
1001 domain 0 you can export these to other domains using the \path{phy:}
1002 disk syntax. E.g.:
1003 \begin{quote}
1004 \verb_disk = ['phy:vg/lvm1,sda2,w']_
1005 \end{quote}
1009 \begin{center}
1010 \framebox{\bf Warning: Block device sharing}
1011 \end{center}
1012 \begin{quote}
Block devices should typically only be shared between domains in a
read-only fashion, otherwise the Linux kernel's file systems will get
very confused as the file system structure may change underneath them
(having the same ext3 partition mounted rw twice is a sure-fire way to
1017 cause irreparable damage)! \Xend will attempt to prevent you from
1018 doing this by checking that the device is not mounted read-write in
1019 domain 0, and hasn't already been exported read-write to another
1020 domain.
1021 If you want read-write sharing, export the directory to other domains
1022 via NFS from domain0 (or use a cluster file system such as GFS or
1023 ocfs2).
1025 \end{quote}
1028 \section{Using File-backed VBDs}
1030 It is also possible to use a file in Domain 0 as the primary storage
1031 for a virtual machine. As well as being convenient, this also has the
1032 advantage that the virtual block device will be {\em sparse} --- space
1033 will only really be allocated as parts of the file are used. So if a
1034 virtual machine uses only half of its disk space then the file really
1035 takes up half of the size allocated.
1037 For example, to create a 2GB sparse file-backed virtual block device
(which actually consumes only about 1KB of disk):
1039 \begin{quote}
1040 \verb_# dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1_
1041 \end{quote}
1043 Make a file system in the disk file:
1044 \begin{quote}
1045 \verb_# mkfs -t ext3 vm1disk_
1046 \end{quote}
1048 (when the tool asks for confirmation, answer `y')
1050 Populate the file system e.g. by copying from the current root:
1051 \begin{quote}
1052 \begin{verbatim}
1053 # mount -o loop vm1disk /mnt
1054 # cp -ax /{root,dev,var,etc,usr,bin,sbin,lib} /mnt
1055 # mkdir /mnt/{proc,sys,home,tmp}
1056 \end{verbatim}
1057 \end{quote}
Tailor the file system by editing \path{/etc/fstab},
\path{/etc/hostname}, etc.\ (don't forget to edit the files in the
mounted file system, rather than your domain 0 filesystem; e.g. you
would edit \path{/mnt/etc/fstab} instead of \path{/etc/fstab}). For
this example, set \path{/dev/sda1} as the root device in fstab.
1065 Now unmount (this is important!):
1066 \begin{quote}
1067 \verb_# umount /mnt_
1068 \end{quote}
1070 In the configuration file set:
1071 \begin{quote}
1072 \verb_disk = ['file:/full/path/to/vm1disk,sda1,w']_
1073 \end{quote}
1075 As the virtual machine writes to its `disk', the sparse file will be
1076 filled in and consume more space up to the original 2GB.
1078 {\bf Note that file-backed VBDs may not be appropriate for backing
1079 I/O-intensive domains.} File-backed VBDs are known to experience
1080 substantial slowdowns under heavy I/O workloads, due to the I/O handling
1081 by the loopback block device used to support file-backed VBDs in dom0.
1082 Better I/O performance can be achieved by using either LVM-backed VBDs
1083 (Section~\ref{s:using-lvm-backed-vbds}) or physical devices as VBDs
1084 (Section~\ref{s:exporting-physical-devices-as-vbds}).
1086 Linux supports a maximum of eight file-backed VBDs across all domains by
1087 default. This limit can be statically increased by using the {\em
1088 max\_loop} module parameter if CONFIG\_BLK\_DEV\_LOOP is compiled as a
1089 module in the dom0 kernel, or by using the {\em max\_loop=n} boot option
1090 if CONFIG\_BLK\_DEV\_LOOP is compiled directly into the dom0 kernel.
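For example, to allow up to 64 loopback devices when the driver is
built as a module (the figure 64 is arbitrary):
\begin{quote}
\begin{verbatim}
# modprobe loop max_loop=64
\end{verbatim}
\end{quote}
If the driver is built into the dom0 kernel, append \path{max\_loop=64}
to the dom0 kernel's \path{module} line in \path{grub.conf} instead.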
1093 \section{Using LVM-backed VBDs}
1094 \label{s:using-lvm-backed-vbds}
1096 A particularly appealing solution is to use LVM volumes
1097 as backing for domain file-systems since this allows dynamic
growing/shrinking of volumes as well as snapshots and other
1099 features.
1101 To initialise a partition to support LVM volumes:
1102 \begin{quote}
1103 \begin{verbatim}
1104 # pvcreate /dev/sda10
1105 \end{verbatim}
1106 \end{quote}
1108 Create a volume group named `vg' on the physical partition:
1109 \begin{quote}
1110 \begin{verbatim}
1111 # vgcreate vg /dev/sda10
1112 \end{verbatim}
1113 \end{quote}
1115 Create a logical volume of size 4GB named `myvmdisk1':
1116 \begin{quote}
1117 \begin{verbatim}
1118 # lvcreate -L4096M -n myvmdisk1 vg
1119 \end{verbatim}
1120 \end{quote}
You should now see that you have a \path{/dev/vg/myvmdisk1} device.
Make a filesystem, mount it and populate it, e.g.:
1124 \begin{quote}
1125 \begin{verbatim}
1126 # mkfs -t ext3 /dev/vg/myvmdisk1
1127 # mount /dev/vg/myvmdisk1 /mnt
1128 # cp -ax / /mnt
1129 # umount /mnt
1130 \end{verbatim}
1131 \end{quote}
1133 Now configure your VM with the following disk configuration:
1134 \begin{quote}
1135 \begin{verbatim}
1136 disk = [ 'phy:vg/myvmdisk1,sda1,w' ]
1137 \end{verbatim}
1138 \end{quote}
1140 LVM enables you to grow the size of logical volumes, but you'll need
1141 to resize the corresponding file system to make use of the new
1142 space. Some file systems (e.g. ext3) now support on-line resize. See
1143 the LVM manuals for more details.
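As a sketch of the procedure (assuming the volume holds an ext3 file
system, the guest using it has been shut down, and \path{resize2fs} is
available in domain 0; an offline resize also requires a preceding
\path{e2fsck}):
\begin{quote}
\begin{verbatim}
# lvextend -L+1G /dev/vg/myvmdisk1
# e2fsck -f /dev/vg/myvmdisk1
# resize2fs /dev/vg/myvmdisk1
\end{verbatim}
\end{quote}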
1145 You can also use LVM for creating copy-on-write clones of LVM
1146 volumes (known as writable persistent snapshots in LVM
1147 terminology). This facility is new in Linux 2.6.8, so isn't as
1148 stable as one might hope. In particular, using lots of CoW LVM
1149 disks consumes a lot of dom0 memory, and error conditions such as
1150 running out of disk space are not handled well. Hopefully this
1151 will improve in future.
To create two copy-on-write clones of the above file system, you
would use the following commands:
1156 \begin{quote}
1157 \begin{verbatim}
1158 # lvcreate -s -L1024M -n myclonedisk1 /dev/vg/myvmdisk1
1159 # lvcreate -s -L1024M -n myclonedisk2 /dev/vg/myvmdisk1
1160 \end{verbatim}
1161 \end{quote}
1163 Each of these can grow to have 1GB of differences from the master
1164 volume. You can grow the amount of space for storing the
1165 differences using the lvextend command, e.g.:
1166 \begin{quote}
1167 \begin{verbatim}
1168 # lvextend +100M /dev/vg/myclonedisk1
1169 \end{verbatim}
1170 \end{quote}
Don't let the `differences volume' ever fill up, otherwise LVM gets
rather confused. It may be possible to automate the growing
process by using \path{dmsetup wait} to spot the volume getting full
and then issuing an \path{lvextend}.
1177 In principle, it is possible to continue writing to the volume
1178 that has been cloned (the changes will not be visible to the
1179 clones), but we wouldn't recommend this: have the cloned volume
1180 as a `pristine' file system install that isn't mounted directly
1181 by any of the virtual machines.
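A guest using the first clone as its root disk could then be configured
with a line such as:
\begin{quote}
\begin{verbatim}
disk = [ 'phy:vg/myclonedisk1,sda1,w' ]
\end{verbatim}
\end{quote}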
1184 \section{Using NFS Root}
1186 First, populate a root filesystem in a directory on the server
1187 machine. This can be on a distinct physical machine, or simply
1188 run within a virtual machine on the same node.
1190 Now configure the NFS server to export this filesystem over the
1191 network by adding a line to \path{/etc/exports}, for instance:
1193 \begin{quote}
1194 \begin{small}
1195 \begin{verbatim}
/export/vm1root 1.2.3.4/24(rw,sync,no_root_squash)
1197 \end{verbatim}
1198 \end{small}
1199 \end{quote}
1201 Finally, configure the domain to use NFS root. In addition to the
1202 normal variables, you should make sure to set the following values in
1203 the domain's configuration file:
1205 \begin{quote}
1206 \begin{small}
1207 \begin{verbatim}
1208 root = '/dev/nfs'
1209 nfs_server = '2.3.4.5' # substitute IP address of server
1210 nfs_root = '/path/to/root' # path to root FS on the server
1211 \end{verbatim}
1212 \end{small}
1213 \end{quote}
The domain will need network access at boot time, so either statically
configure an IP address (using the config variables \path{ip},
\path{netmask}, \path{gateway} and \path{hostname}) or enable DHCP
(\path{dhcp='dhcp'}).
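For example, a static configuration (all addresses here are
placeholders) might add:
\begin{quote}
\begin{small}
\begin{verbatim}
ip       = '2.3.4.6'
netmask  = '255.255.255.0'
gateway  = '2.3.4.1'
hostname = 'vm1'
\end{verbatim}
\end{small}
\end{quote}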
1220 Note that the Linux NFS root implementation is known to have stability
1221 problems under high load (this is not a Xen-specific problem), so this
1222 configuration may not be appropriate for critical servers.
1225 \part{User Reference Documentation}
1227 \chapter{Control Software}
1229 The Xen control software includes the \xend node control daemon (which
1230 must be running), the xm command line tools, and the prototype
1231 xensv web interface.
1233 \section{\Xend (node control daemon)}
1234 \label{s:xend}
1236 The Xen Daemon (\Xend) performs system management functions related to
1237 virtual machines. It forms a central point of control for a machine
1238 and can be controlled using an HTTP-based protocol. \Xend must be
1239 running in order to start and manage virtual machines.
1241 \Xend must be run as root because it needs access to privileged system
1242 management functions. A small set of commands may be issued on the
1243 \xend command line:
1245 \begin{tabular}{ll}
1246 \verb!# xend start! & start \xend, if not already running \\
1247 \verb!# xend stop! & stop \xend if already running \\
1248 \verb!# xend restart! & restart \xend if running, otherwise start it \\
1249 % \verb!# xend trace_start! & start \xend, with very detailed debug logging \\
1250 \verb!# xend status! & indicates \xend status by its return code
1251 \end{tabular}
1253 A SysV init script called {\tt xend} is provided to start \xend at boot
time. {\tt make install} installs this script in \path{/etc/init.d}.
1255 To enable it, you have to make symbolic links in the appropriate
1256 runlevel directories or use the {\tt chkconfig} tool, where available.
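For example, on a Red Hat style system the script can be enabled for
the standard runlevels with:
\begin{quote}
\begin{verbatim}
# chkconfig --add xend
\end{verbatim}
\end{quote}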
1258 Once \xend is running, more sophisticated administration can be done
1259 using the xm tool (see Section~\ref{s:xm}) and the experimental
1260 Xensv web interface (see Section~\ref{s:xensv}).
1262 As \xend runs, events will be logged to \path{/var/log/xend.log} and,
1263 if the migration assistant daemon (\path{xfrd}) has been started,
1264 \path{/var/log/xfrd.log}. These may be of use for troubleshooting
1265 problems.
1267 \section{Xm (command line interface)}
1268 \label{s:xm}
1270 The xm tool is the primary tool for managing Xen from the console.
1271 The general format of an xm command line is:
1273 \begin{verbatim}
1274 # xm command [switches] [arguments] [variables]
1275 \end{verbatim}
1277 The available {\em switches} and {\em arguments} are dependent on the
1278 {\em command} chosen. The {\em variables} may be set using
1279 declarations of the form {\tt variable=value} and command line
1280 declarations override any of the values in the configuration file
1281 being used, including the standard variables described above and any
1282 custom variables (for instance, the \path{xmdefconfig} file uses a
1283 {\tt vmid} variable).
1285 The available commands are as follows:
1287 \begin{description}
1288 \item[balloon] Request a domain to adjust its memory footprint.
1289 \item[create] Create a new domain.
1290 \item[destroy] Kill a domain immediately.
1291 \item[list] List running domains.
1292 \item[shutdown] Ask a domain to shutdown.
1293 \item[dmesg] Fetch the Xen (not Linux!) boot output.
1294 \item[consoles] Lists the available consoles.
1295 \item[console] Connect to the console for a domain.
1296 \item[help] Get help on xm commands.
1297 \item[save] Suspend a domain to disk.
1298 \item[restore] Restore a domain from disk.
1299 \item[pause] Pause a domain's execution.
1300 \item[unpause] Unpause a domain.
1301 \item[pincpu] Pin a domain to a CPU.
1302 \item[bvt] Set BVT scheduler parameters for a domain.
1303 \item[bvt\_ctxallow] Set the BVT context switching allowance for the system.
1304 \item[atropos] Set the atropos parameters for a domain.
1305 \item[rrobin] Set the round robin time slice for the system.
1306 \item[info] Get information about the Xen host.
1307 \item[call] Call a \xend HTTP API function directly.
1308 \end{description}
1310 For a detailed overview of switches, arguments and variables to each command
1311 try
1312 \begin{quote}
1313 \begin{verbatim}
1314 # xm help command
1315 \end{verbatim}
1316 \end{quote}
1318 \section{Xensv (web control interface)}
1319 \label{s:xensv}
1321 Xensv is the experimental web control interface for managing a Xen
1322 machine. It can be used to perform some (but not yet all) of the
1323 management tasks that can be done using the xm tool.
1325 It can be started using:
1326 \begin{quote}
1327 \verb_# xensv start_
1328 \end{quote}
1329 and stopped using:
1330 \begin{quote}
1331 \verb_# xensv stop_
1332 \end{quote}
1334 By default, Xensv will serve out the web interface on port 8080. This
1335 can be changed by editing
1336 \path{/usr/lib/python2.3/site-packages/xen/sv/params.py}.
1338 Once Xensv is running, the web interface can be used to create and
1339 manage running domains.
1344 \chapter{Domain Configuration}
1345 \label{cha:config}
1347 The following contains the syntax of the domain configuration
files and a description of how to further specify networking,
1349 driver domain and general scheduling behaviour.
1351 \section{Configuration Files}
1352 \label{s:cfiles}
1354 Xen configuration files contain the following standard variables.
1355 Unless otherwise stated, configuration items should be enclosed in
1356 quotes: see \path{/etc/xen/xmexample1} and \path{/etc/xen/xmexample2}
1357 for concrete examples of the syntax.
1359 \begin{description}
1360 \item[kernel] Path to the kernel image
1361 \item[ramdisk] Path to a ramdisk image (optional).
1362 % \item[builder] The name of the domain build function (e.g. {\tt'linux'} or {\tt'netbsd'}.
1363 \item[memory] Memory size in megabytes.
1364 \item[cpu] CPU to run this domain on, or {\tt -1} for
1365 auto-allocation.
1366 \item[console] Port to export the domain console on (default 9600 + domain ID).
1367 \item[nics] Number of virtual network interfaces.
1368 \item[vif] List of MAC addresses (random addresses are assigned if not
1369 given) and bridges to use for the domain's network interfaces, e.g.
1370 \begin{verbatim}
1371 vif = [ 'mac=aa:00:00:00:00:11, bridge=xen-br0',
1372 'bridge=xen-br1' ]
1373 \end{verbatim}
1374 to assign a MAC address and bridge to the first interface and assign
1375 a different bridge to the second interface, leaving \xend to choose
1376 the MAC address.
1377 \item[disk] List of block devices to export to the domain, e.g. \\
1378 \verb_disk = [ 'phy:hda1,sda1,r' ]_ \\
1379 exports physical device \path{/dev/hda1} to the domain
1380 as \path{/dev/sda1} with read-only access. Exporting a disk read-write
1381 which is currently mounted is dangerous -- if you are \emph{certain}
1382 you wish to do this, you can specify \path{w!} as the mode.
1383 \item[dhcp] Set to {\tt 'dhcp'} if you want to use DHCP to configure
1384 networking.
1385 \item[netmask] Manually configured IP netmask.
1386 \item[gateway] Manually configured IP gateway.
1387 \item[hostname] Set the hostname for the virtual machine.
1388 \item[root] Specify the root device parameter on the kernel command
1389 line.
1390 \item[nfs\_server] IP address for the NFS server (if any).
1391 \item[nfs\_root] Path of the root filesystem on the NFS server (if any).
1392 \item[extra] Extra string to append to the kernel command line (if
1393 any)
1394 \item[restart] Three possible options:
1395 \begin{description}
1396 \item[always] Always restart the domain, no matter what
1397 its exit code is.
1398 \item[never] Never restart the domain.
1399 \item[onreboot] Restart the domain iff it requests reboot.
1400 \end{description}
1401 \end{description}
1403 For additional flexibility, it is also possible to include Python
1404 scripting commands in configuration files. An example of this is the
1405 \path{xmexample2} file, which uses Python code to handle the
1406 \path{vmid} variable.
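As an illustrative sketch of the idea (this is not the literal contents
of \path{xmexample2}):
\begin{quote}
\begin{verbatim}
# Configuration values may be computed with ordinary Python code:
vmid = int(vmid)                  # vmid is set on the xm command line
name = "VM%d" % vmid
disk = ['phy:sda%d,sda1,w' % (6 + vmid)]
\end{verbatim}
\end{quote}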
1409 %\part{Advanced Topics}
1411 \section{Network Configuration}
1413 For many users, the default installation should work `out of the box'.
1414 More complicated network setups, for instance with multiple ethernet
interfaces and/or existing bridging setups, will require some
1416 special configuration.
1418 The purpose of this section is to describe the mechanisms provided by
1419 \xend to allow a flexible configuration for Xen's virtual networking.
1421 \subsection{Xen virtual network topology}
1423 Each domain network interface is connected to a virtual network
1424 interface in dom0 by a point to point link (effectively a `virtual
1425 crossover cable'). These devices are named {\tt
1426 vif$<$domid$>$.$<$vifid$>$} (e.g. {\tt vif1.0} for the first interface
1427 in domain 1, {\tt vif3.1} for the second interface in domain 3).
1429 Traffic on these virtual interfaces is handled in domain 0 using
1430 standard Linux mechanisms for bridging, routing, rate limiting, etc.
1431 Xend calls on two shell scripts to perform initial configuration of
1432 the network and configuration of new virtual interfaces. By default,
1433 these scripts configure a single bridge for all the virtual
1434 interfaces. Arbitrary routing / bridging configurations can be
1435 configured by customising the scripts, as described in the following
1436 section.
1438 \subsection{Xen networking scripts}
1440 Xen's virtual networking is configured by two shell scripts (by
1441 default \path{network} and \path{vif-bridge}). These are
1442 called automatically by \xend when certain events occur, with
1443 arguments to the scripts providing further contextual information.
1444 These scripts are found by default in \path{/etc/xen/scripts}. The
1445 names and locations of the scripts can be configured in
1446 \path{/etc/xen/xend-config.sxp}.
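For instance, a sketch of how alternative scripts might be specified in
\path{/etc/xen/xend-config.sxp} is shown below; the parameter names and
script names here are illustrative only, so check the comments in that
file for the exact syntax used by your release:
\begin{verbatim}
# Use site-customised copies of the networking scripts
# (hypothetical script names).
(network-script my-network)
(vif-script     my-vif-bridge)
\end{verbatim}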
1448 \begin{description}
1450 \item[network:] This script is called whenever \xend is started or
1451 stopped to respectively initialise or tear down the Xen virtual
1452 network. In the default configuration initialisation creates the
1453 bridge `xen-br0' and moves eth0 onto that bridge, modifying the
1454 routing accordingly. When \xend exits, it deletes the Xen bridge and
1455 removes eth0, restoring the normal IP and routing configuration.
1457 %% In configurations where the bridge already exists, this script could
1458 %% be replaced with a link to \path{/bin/true} (for instance).
1460 \item[vif-bridge:] This script is called for every domain virtual
1461 interface and can configure firewalling rules and add the vif
1462 to the appropriate bridge. By default, this adds and removes
1463 VIFs on the default Xen bridge.
1465 \end{description}
1467 For more complex network setups (e.g. where routing is required or
1468 integration with existing bridges is needed) these scripts may be
1469 replaced with customised variants for your site's preferred configuration.
1471 %% There are two possible types of privileges: IO privileges and
1472 %% administration privileges.
1474 \section{Driver Domain Configuration}
1476 I/O privileges can be assigned to allow a domain to directly access
1477 PCI devices itself. This is used to support driver domains.
1479 Setting backend privileges is currently only supported in SXP format
1480 config files. To allow a domain to function as a backend for others,
1481 somewhere within the {\tt vm} element of its configuration file must
1482 be a {\tt backend} element of the form {\tt (backend ({\em type}))}
1483 where {\tt \em type} may be either {\tt netif} or {\tt blkif},
1484 according to the type of virtual device this domain will service.
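For example, a minimal sketch of the relevant fragment for a domain
intended to serve virtual block devices to other domains might look like
the following (the rest of the {\tt vm} element is elided):
\begin{verbatim}
(vm
  ...
  (backend (blkif))
)
\end{verbatim}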
1485 %% After this domain has been built, \xend will connect all new and
1486 %% existing {\em virtual} devices (of the appropriate type) to that
1487 %% backend.
1489 Note that a block backend cannot currently import virtual block
1490 devices from other domains, and a network backend cannot import
1491 virtual network devices from other domains. Thus (particularly in the
1492 case of block backends, which cannot import a virtual block device as
1493 their root filesystem), you may need to boot a backend domain from a
1494 ramdisk or a network device.
1496 Access to PCI devices may be configured on a per-device basis. Xen
1497 will assign the minimal set of hardware privileges to a domain that
1498 are required to control its devices. This can be configured in either
1499 configuration file format (examples of both formats are given below):
1501 \begin{itemize}
1502 \item SXP Format: Include device elements of the form: \\
1503 \centerline{ {\tt (device (pci (bus {\em x}) (dev {\em y}) (func {\em z})))}} \\
1504 inside the top-level {\tt vm} element. Each one specifies the address
1505 of a device this domain is allowed to access ---
1506 the numbers {\em x},{\em y} and {\em z} may be in either decimal or
1507 hexadecimal format.
1508 \item Flat Format: Include a list of PCI device addresses of the
1509 format: \\
1510 \centerline{{\tt pci = ['x,y,z', ...]}} \\
1511 where each element in the
1512 list is a string specifying the components of the PCI device
1513 address, separated by commas. The components ({\tt \em x}, {\tt \em
1514 y} and {\tt \em z}) of the list may be formatted as either decimal
1515 or hexadecimal.
1516 \end{itemize}
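For example, to give a domain access to a hypothetical device at bus 2,
device 3, function 0, one could write either
\begin{verbatim}
(device (pci (bus 0x2) (dev 0x3) (func 0x0)))
\end{verbatim}
inside the top-level {\tt vm} element of an SXP file, or
\begin{verbatim}
pci = [ '0x2,0x3,0x0' ]
\end{verbatim}
in a flat format file.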
1518 %% \section{Administration Domains}
1520 %% Administration privileges allow a domain to use the `dom0
1521 %% operations' (so called because they are usually available only to
1522 %% domain 0). A privileged domain can build other domains, set scheduling
1523 %% parameters, etc.
1525 % Support for other administrative domains is not yet available... perhaps
1526 % we should plumb it in some time
1532 \section{Scheduler Configuration}
1533 \label{s:sched}
1536 Xen offers a boot-time choice between multiple schedulers. To select
1537 a scheduler, pass the boot parameter {\em sched=sched\_name} to Xen,
1538 substituting the appropriate scheduler name. Details of the schedulers
1539 and their parameters are included below; future versions of the tools
1540 will provide a higher-level interface for configuring them.
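For example, adding {\tt sched=atropos} to the Xen command line in
\path{grub.conf} (see Section~\ref{s:xboot}) selects the Atropos
scheduler described below.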
1542 It is expected that system administrators configure their system to
1543 use the scheduler most appropriate to their needs. Currently, the BVT
1544 scheduler is the recommended choice.
1546 \subsection{Borrowed Virtual Time}
1548 {\tt sched=bvt} (the default) \\
1550 BVT provides proportional fair shares of the CPU time. It has been
1551 observed to penalise domains that block frequently (e.g. I/O intensive
1552 domains), but this can be compensated for by using warping.
1554 \subsubsection{Global Parameters}
1556 \begin{description}
1557 \item[ctx\_allow]
1558 the context switch allowance is similar to the `quantum'
1559 in traditional schedulers. It is the minimum time that
1560 a scheduled domain will be allowed to run before being
1561 pre-empted.
1562 \end{description}
1564 \subsubsection{Per-domain parameters}
1566 \begin{description}
1567 \item[mcuadv]
1568 the MCU (Minimum Charging Unit) advance determines the
1569 proportional share of the CPU that a domain receives. It
1570 is set inversely proportional to a domain's sharing weight.
1571 \item[warp]
1572 the amount of `virtual time' the domain is allowed to warp
1573 backwards.
1574 \item[warpl]
1575 the warp limit is the maximum time a domain can run warped for.
1576 \item[warpu]
1577 the unwarp requirement is the minimum time a domain must
1578 run unwarped for before it can warp again.
1579 \end{description}
1581 \subsection{Atropos}
1583 {\tt sched=atropos} \\
1585 Atropos is a soft real time scheduler. It provides guarantees about
1586 absolute shares of the CPU, with a facility for sharing
1587 slack CPU time on a best-effort basis. It can provide timeliness
1588 guarantees for latency-sensitive domains.
1590 Every domain has an associated period and slice. The domain should
1591 receive `slice' nanoseconds every `period' nanoseconds. This allows
1592 the administrator to configure both the absolute share of the CPU a
1593 domain receives and the frequency with which it is scheduled.
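For example, a domain given a slice of 10ms and a period of 40ms is
guaranteed 25\% of the CPU, delivered in 10ms blocks at a frequency of
25Hz; halving both values preserves the 25\% share but schedules the
domain twice as often.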
1595 %% When
1596 %% domains unblock, their period is reduced to the value of the latency
1597 %% hint (the slice is scaled accordingly so that they still get the same
1598 %% proportion of the CPU). For each subsequent period, the slice and
1599 %% period times are doubled until they reach their original values.
1601 Note: don't overcommit the CPU when using Atropos (i.e. don't reserve
1602 more CPU than is available --- the utilisation should be kept to
1603 slightly less than 100\% in order to ensure predictable behaviour).
1605 \subsubsection{Per-domain parameters}
1607 \begin{description}
1608 \item[period] The regular time interval during which a domain is
1609 guaranteed to receive its allocation of CPU time.
1610 \item[slice]
1611 The length of time per period that a domain is guaranteed to run
1612 for (in the absence of voluntary yielding of the CPU).
1613 \item[latency]
1614 The latency hint is used to control how soon after
1615 waking up a domain it should be scheduled.
1616 \item[xtratime] This is a boolean flag that specifies whether a domain
1617 should be allowed a share of the system slack time.
1618 \end{description}
1620 \subsection{Round Robin}
1622 {\tt sched=rrobin} \\
1624 The round robin scheduler is included as a simple demonstration of
1625 Xen's internal scheduler API. It is not intended for production use.
1627 \subsubsection{Global Parameters}
1629 \begin{description}
1630 \item[rr\_slice]
1631 The maximum time each domain runs before the next
1632 scheduling decision is made.
1633 \end{description}
1646 \chapter{Build, Boot and Debug options}
1648 This chapter describes the build- and boot-time options
1649 which may be used to tailor your Xen system.
1651 \section{Xen Build Options}
1653 Xen provides a number of build-time options which should be set as
1654 environment variables or passed on make's command-line; an example invocation is given after the list below.
1656 \begin{description}
1657 \item[verbose=y] Enable debugging messages when Xen detects an unexpected condition.
1658 Also enables console output from all domains.
1659 \item[debug=y]
1660 Enable debug assertions. Implies {\bf verbose=y}.
1661 (Primarily useful for tracing bugs in Xen).
1662 \item[debugger=y]
1663 Enable the in-Xen debugger. This can be used to debug
1664 Xen, guest OSes, and applications.
1665 \item[perfc=y]
1666 Enable performance counters for significant events
1667 within Xen. The counts can be reset or displayed
1668 on Xen's console via console control keys.
1669 \item[trace=y]
1670 Enable per-cpu trace buffers which log a range of
1671 events within Xen for collection by control
1672 software.
1673 \end{description}
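For example, to produce a debug-enabled build one might invoke make
along the following lines (a sketch only, assuming the standard
top-level {\tt world} target used during installation):
\begin{verbatim}
make debug=y world
\end{verbatim}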
1675 \section{Xen Boot Options}
1676 \label{s:xboot}
1678 These options are used to configure Xen's behaviour at runtime. They
1679 should be appended to Xen's command line, either manually or by
1680 editing \path{grub.conf}.
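As a sketch only (adjust the image names, root device and memory size
to match your installation), a \path{grub.conf} entry passing some of
the options described below might look like:
\begin{verbatim}
title Xen 2.0 / XenLinux
    kernel /boot/xen.gz dom0_mem=131072 console=vga noreboot
    module /boot/vmlinuz-2.6-xen0 root=/dev/sda1 ro
\end{verbatim}
Options on the {\tt kernel} line are passed to Xen itself, while those
on the {\tt module} line are passed to the domain 0 kernel.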
1682 \begin{description}
1683 \item [ignorebiostables ]
1684 Disable parsing of BIOS-supplied tables. This may help with some
1685 chipsets that aren't fully supported by Xen. If you specify this
1686 option then ACPI tables are also ignored, and SMP support is
1687 disabled.
1689 \item [noreboot ]
1690 Don't reboot the machine automatically on errors. This is
1691 useful to catch debug output if you aren't catching console messages
1692 via the serial line.
1694 \item [nosmp ]
1695 Disable SMP support.
1696 This option is implied by `ignorebiostables'.
1698 \item [noacpi ]
1699 Disable ACPI tables, which confuse Xen on some chipsets.
1700 This option is implied by `ignorebiostables'.
1702 \item [watchdog ]
1703 Enable NMI watchdog which can report certain failures.
1705 \item [noht ]
1706 Disable Hyperthreading.
1708 \item [badpage=$<$page number$>$,$<$page number$>$, \ldots ]
1709 Specify a list of pages not to be allocated for use
1710 because they contain bad bytes. For example, if your
1711 memory tester says that byte 0x12345678 is bad, you would
1712 place `badpage=0x12345' on Xen's command line.
1714 \item [com1=$<$baud$>$,DPS,$<$io\_base$>$,$<$irq$>$
1715 com2=$<$baud$>$,DPS,$<$io\_base$>$,$<$irq$>$ ] \mbox{}\\
1716 Xen supports up to two 16550-compatible serial ports.
1717 For example: `com1=9600,8n1,0x408,5' maps COM1 to a
1718 9600-baud port, 8 data bits, no parity, 1 stop bit,
1719 I/O port base 0x408, IRQ 5.
1720 If the I/O base and IRQ are standard (com1:0x3f8,4;
1721 com2:0x2f8,3) then they need not be specified.
1723 \item [console=$<$specifier list$>$ ]
1724 Specify the destination for Xen console I/O.
1725 This is a comma-separated list of, for example:
1726 \begin{description}
1727 \item[vga] use VGA console and allow keyboard input
1728 \item[com1] use serial port com1
1729 \item[com2H] use serial port com2. Transmitted chars will
1730 have the MSB set. Received chars must have
1731 MSB set.
1732 \item[com2L] use serial port com2. Transmitted chars will
1733 have the MSB cleared. Received chars must
1734 have MSB cleared.
1735 \end{description}
1736 The latter two examples allow a single port to be
1737 shared by two subsystems (e.g. console and
1738 debugger). Sharing is controlled by MSB of each
1739 transmitted/received character.
1740 [NB. Default for this option is `com1,vga']
1742 \item [conswitch=$<$switch-char$><$auto-switch-char$>$ ]
1743 Specify how to switch serial-console input between
1744 Xen and DOM0. The required sequence is CTRL-$<$switch-char$>$
1745 pressed three times. Specifying the backtick character
1746 disables switching.
1747 The $<$auto-switch-char$>$ specifies whether Xen should
1748 auto-switch input to DOM0 when it boots --- if it is `x'
1749 then auto-switching is disabled. Any other value, or
1750 omitting the character, enables auto-switching.
1751 [NB. default switch-char is `a']
1753 \item [nmi=xxx ]
1754 Specify what to do with an NMI parity or I/O error. \\
1755 `nmi=fatal': Xen prints a diagnostic and then hangs. \\
1756 `nmi=dom0': Inform DOM0 of the NMI. \\
1757 `nmi=ignore': Ignore the NMI.
1759 \item [dom0\_mem=xxx ]
1760 Set the amount of memory (in kB) to be allocated to domain0.
1762 \item [tbuf\_size=xxx ]
1763 Set the size of the per-cpu trace buffers, in pages
1764 (default 1). Note that the trace buffers are only
1765 enabled in debug builds. Most users can ignore
1766 this feature completely.
1768 \item [sched=xxx ]
1769 Select the CPU scheduler Xen should use. The current
1770 possibilities are `bvt' (default), `atropos' and `rrobin'.
1771 For more information see Section~\ref{s:sched}.
1773 \item [physdev\_dom0\_hide=(xx:xx.x)(yy:yy.y)\ldots ]
1774 Hide selected PCI devices from domain 0 (for instance, to stop it
1775 taking ownership of them so that they can be driven by another
1776 domain). Device IDs should be given in hex format. Bridge devices do
1777 not need to be hidden --- they are hidden implicitly, since guest OSes
1778 do not need to configure them.
1779 \end{description}
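As an example of the last option above, {\tt
physdev\_dom0\_hide=(02:03.0)} would hide a (hypothetical) device at
bus 2, slot 3, function 0 from domain 0, leaving it free to be driven
by another domain.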
1783 \section{XenLinux Boot Options}
1785 In addition to the standard Linux kernel boot options, we support:
1786 \begin{description}
1787 \item[xencons=xxx ] Specify the device node to which the Xen virtual
1788 console driver is attached. The following options are supported:
1789 \begin{center}
1790 \begin{tabular}{l}
1791 `xencons=off': disable virtual console \\
1792 `xencons=tty': attach console to /dev/tty1 (tty0 at boot-time) \\
1793 `xencons=ttyS': attach console to /dev/ttyS0
1794 \end{tabular}
1795 \end{center}
1796 The default is ttyS for dom0 and tty for all other domains.
1797 \end{description}
1801 \section{Debugging}
1802 \label{s:keys}
1804 Xen has a set of debugging features that can be useful when trying to
1805 figure out what's going on. Hit `h' on the serial line (if you
1806 specified a baud rate on the Xen command line) or ScrollLock-h on the
1807 keyboard to get a list of supported commands.
1809 If you have a crash you'll likely get a crash dump containing an EIP
1810 (PC) value which, along with the output of \path{objdump -d} on the
1811 kernel image, can be useful in figuring out what's happened. Debug a
1812 XenLinux image just as you would any other Linux kernel.
1814 %% We supply a handy debug terminal program which you can find in
1815 %% \path{/usr/local/src/xen-2.0.bk/tools/misc/miniterm/}
1816 %% This should be built and executed on another machine that is connected
1817 %% via a null modem cable. Documentation is included.
1818 %% Alternatively, if the Xen machine is connected to a serial-port server
1819 %% then we supply a dumb TCP terminal client, {\tt xencons}.
1824 \chapter{Further Support}
1826 If you have questions that are not answered by this manual, the
1827 sources of information listed below may be of interest to you. Note
1828 that bug reports, suggestions and contributions related to the
1829 software (or the documentation) should be sent to the Xen developers'
1830 mailing list (address below).
1832 \section{Other Documentation}
1834 For developers interested in porting operating systems to Xen, the
1835 {\em Xen Interface Manual} is distributed in the \path{docs/}
1836 directory of the Xen source distribution.
1838 %Various HOWTOs are available in \path{docs/HOWTOS} but this content is
1839 %being integrated into this manual.
1841 \section{Online References}
1843 The official Xen web site is found at:
1844 \begin{quote}
1845 {\tt http://www.cl.cam.ac.uk/netos/xen/}
1846 \end{quote}
1848 This contains links to the latest versions of all on-line
1849 documentation (including the latest version of the FAQ).
1851 \section{Mailing Lists}
1853 There are currently four official Xen mailing lists:
1855 \begin{description}
1856 \item[xen-devel@lists.xensource.com] Used for development
1857 discussions and bug reports. Subscribe at: \\
1858 {\small {\tt http://lists.xensource.com/xen-devel}}
1859 \item[xen-users@lists.xensource.com] Used for installation and usage
1860 discussions and requests for help. Subscribe at: \\
1861 {\small {\tt http://lists.xensource.com/xen-users}}
1862 \item[xen-announce@lists.xensource.com] Used for announcements only.
1863 Subscribe at: \\
1864 {\small {\tt http://lists.xensource.com/xen-announce}}
1865 \item[xen-changelog@lists.xensource.com] Changelog feed
1866 from the unstable and 2.0 trees; developer oriented. Subscribe at: \\
1867 {\small {\tt http://lists.xensource.com/xen-changelog}}
1868 \end{description}
1871 \appendix
1874 \chapter{Installing Xen / XenLinux on Debian}
1876 The Debian project provides a tool called \path{debootstrap} which
1877 allows a base Debian system to be installed into a filesystem without
1878 requiring the host system to have any Debian-specific software (such
1879 as \path{apt}).
1881 Here is how to install Debian 3.1 (Sarge) into an unprivileged
1882 Xen domain:
1884 \begin{enumerate}
1885 \item Set up Xen 2.0 and test that it's working, as described earlier in
1886 this manual.
1888 \item Create disk images for root-fs and swap (alternatively, you
1889 might create dedicated partitions, LVM logical volumes, etc. if
1890 that suits your setup).
1891 \begin{small}\begin{verbatim}
1892 dd if=/dev/zero of=/path/diskimage bs=1024k count=size_in_mbytes
1893 dd if=/dev/zero of=/path/swapimage bs=1024k count=size_in_mbytes
1894 \end{verbatim}\end{small}
1895 If you're going to use this filesystem / disk image only as a
1896 `template' for other VM disk images, something like 300 MB should
1897 be enough (of course, it depends on what kind of packages you are
1898 planning to install into the template).
1900 \item Create the filesystem and initialise the swap image
1901 \begin{small}\begin{verbatim}
1902 mkfs.ext3 /path/diskimage
1903 mkswap /path/swapimage
1904 \end{verbatim}\end{small}
1906 \item Mount the disk image for installation
1907 \begin{small}\begin{verbatim}
1908 mount -o loop /path/diskimage /mnt/disk
1909 \end{verbatim}\end{small}
1911 \item Install \path{debootstrap}
1913 Make sure you have debootstrap installed on the host. If you are
1914 running Debian sarge (3.1 / testing) or unstable you can install it by
1915 running \path{apt-get install debootstrap}. Otherwise, it can be
1916 downloaded from the Debian project website.
1918 \item Install Debian base to the disk image:
1919 \begin{small}\begin{verbatim}
1920 debootstrap --arch i386 sarge /mnt/disk \
1921 http://ftp.<countrycode>.debian.org/debian
1922 \end{verbatim}\end{small}
1924 You can use any other Debian http/ftp mirror you want.
1926 \item When debootstrap completes successfully, modify settings:
1927 \begin{small}\begin{verbatim}
1928 chroot /mnt/disk /bin/bash
1929 \end{verbatim}\end{small}
1931 Edit the following files using vi or nano and make needed changes:
1932 \begin{small}\begin{verbatim}
1933 /etc/hostname
1934 /etc/hosts
1935 /etc/resolv.conf
1936 /etc/network/interfaces
1937 /etc/networks
1938 \end{verbatim}\end{small}
1940 Set up access to the services, edit:
1941 \begin{small}\begin{verbatim}
1942 /etc/hosts.deny
1943 /etc/hosts.allow
1944 /etc/inetd.conf
1945 \end{verbatim}\end{small}
1947 Add Debian mirror to:
1948 \begin{small}\begin{verbatim}
1949 /etc/apt/sources.list
1950 \end{verbatim}\end{small}
1952 Create fstab like this:
1953 \begin{small}\begin{verbatim}
1954 /dev/sda1  /      ext3  errors=remount-ro  0  1
1955 /dev/sda2  none   swap  sw                 0  0
1956 proc       /proc  proc  defaults           0  0
1957 \end{verbatim}\end{small}
1959 Exit the chroot shell (type \path{exit}).
1961 \item Unmount the disk image
1962 \begin{small}\begin{verbatim}
1963 umount /mnt/disk
1964 \end{verbatim}\end{small}
1966 \item Create a Xen 2.0 configuration file for the new domain. You can
1967 use the example configurations shipped with Xen as a template.
1969 Make sure you have the following set up:
1970 \begin{small}\begin{verbatim}
1971 disk = [ 'file:/path/diskimage,sda1,w', 'file:/path/swapimage,sda2,w' ]
1972 root = "/dev/sda1 ro"
1973 \end{verbatim}\end{small}
1975 \item Start the new domain
1976 \begin{small}\begin{verbatim}
1977 xm create -f domain_config_file
1978 \end{verbatim}\end{small}
1980 Check that the new domain is running:
1981 \begin{small}\begin{verbatim}
1982 xm list
1983 \end{verbatim}\end{small}
1985 \item Attach to the console of the new domain.
1986 You should see something like this when starting the new domain:
1988 \begin{small}\begin{verbatim}
1989 Started domain testdomain2, console on port 9626
1990 \end{verbatim}\end{small}
1992 There you can see the ID of the console: 26. You can also list
1993 the consoles with \path{xm consoles} (the ID is the last two
1994 digits of the port number).
1996 Attach to the console:
1998 \begin{small}\begin{verbatim}
1999 xm console 26
2000 \end{verbatim}\end{small}
2002 or by telnetting to port 9626 on localhost (though the xm console
2003 program works better).
2005 \item Log in and run base-config
2007 By default there is no root password.
2009 Check that everything looks OK, and the system started without
2010 errors. Check that the swap is active, and the network settings are
2011 correct.
2013 Run \path{/usr/sbin/base-config} to set up the Debian settings.
2015 Set up the password for root using passwd.
2017 \item Done. You can exit the console by pressing \path{Ctrl + ]}.
2019 \end{enumerate}
2021 If you need to create new domains, you can just copy the contents of
2022 the `template' image to the new disk images, either by mounting the
2023 template and the new image and using \path{cp -a} or \path{tar}, or by
2024 simply copying the image file. Once this is done, modify the
2025 image-specific settings (hostname, network settings, etc.).
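As a sketch of the mount-and-copy approach (with hypothetical image
paths):
\begin{small}\begin{verbatim}
mkdir -p /mnt/template /mnt/new
mount -o loop /path/template-image /mnt/template
mount -o loop /path/new-diskimage  /mnt/new
cp -a /mnt/template/. /mnt/new/
umount /mnt/template /mnt/new
\end{verbatim}\end{small}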
2027 \chapter{Installing Xen / XenLinux on Redhat or Fedora Core}
2029 When using Xen / XenLinux on a standard Linux distribution there are
2030 a couple of things to watch out for:
2032 Note that, because unprivileged domains (domain ID $>$ 0) don't have any
2033 privileged access at all, certain commands in the default boot sequence
2034 will fail, e.g. attempts to update the hwclock, change the console font,
2035 update the keytable map, start apmd (power management), or gpm (mouse
2036 cursor). Either ignore the errors (they should be harmless), or remove
2037 them from the startup scripts. Deleting the following links is a good
2038 start: {\path{S24pcmcia}}, {\path{S09isdn}},
2039 {\path{S17keytable}}, {\path{S26apmd}},
2040 {\path{S85gpm}}.
2042 If you want to use a single root file system that works cleanly for
2043 both domain 0 and unprivileged domains, a useful trick is to use
2044 different `init' run levels. For example, use
2045 run level 3 for domain 0, and run level 4 for other domains. This
2046 enables different startup scripts to be run depending on the run
2047 level number passed on the kernel command line, as sketched below.
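A minimal sketch of this arrangement: boot domain 0 into run level 3 by
appending {\tt 3} to its kernel command line in \path{grub.conf}, and
give unprivileged domains run level 4 via the {\tt extra} variable in
their configuration files:
\begin{small}\begin{verbatim}
extra = "4"
\end{verbatim}\end{small}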
2049 If using NFS root file systems mounted either from an
2050 external server or from domain0, there are a couple of other gotchas.
2051 The default {\path{/etc/sysconfig/iptables}} rules block NFS, so part
2052 way through the boot sequence things will suddenly go dead.
2054 If you're planning on having a separate NFS {\path{/usr}} partition, the
2055 RH9 boot scripts don't make life easy - they attempt to mount NFS file
2056 systems way too late in the boot process. The easiest way I found around
2057 this was to have a {\path{/linuxrc}} script run ahead of
2058 {\path{/sbin/init}} that mounts {\path{/usr}}:
2060 \begin{quote}
2061 \begin{small}\begin{verbatim}
2062 #!/bin/bash
2063 /sbin/ifconfig lo 127.0.0.1
2064 /sbin/portmap
2065 /bin/mount /usr
2066 exec /sbin/init "$@" <>/dev/console 2>&1
2067 \end{verbatim}\end{small}
2068 \end{quote}
2070 %$ XXX SMH: font lock fix :-)
2072 The one slight complication with the above is that
2073 {\path{/sbin/portmap}} is dynamically linked against
2074 {\path{/usr/lib/libwrap.so.0}}. Since this is in
2075 {\path{/usr}}, it won't work. This can be solved by copying the
2076 file (and link) below the {\path{/usr}} mount point, and letting the
2077 file be `covered' when the mount happens.
2079 In some installations, where a shared read-only {\path{/usr}} is
2080 being used, it may be desirable to move other large directories over
2081 into the read-only {\path{/usr}}. For example, you might replace
2082 {\path{/bin}}, {\path{/lib}} and {\path{/sbin}} with
2083 links into {\path{/usr/root/bin}}, {\path{/usr/root/lib}}
2084 and {\path{/usr/root/sbin}} respectively. This creates other
2085 problems for running the {\path{/linuxrc}} script, requiring
2086 bash, portmap, mount, ifconfig, and a handful of other shared
2087 libraries to be copied below the mount point --- a simple
2088 statically-linked C program would solve this problem.
2093 \chapter{Glossary of Terms}
2095 \begin{description}
2096 \item[Atropos] One of the CPU schedulers provided by Xen.
2097 Atropos provides domains with absolute shares
2098 of the CPU, with timeliness guarantees and a
2099 mechanism for sharing out `slack time'.
2101 \item[BVT] The BVT scheduler is used to give proportional
2102 fair shares of the CPU to domains.
2104 \item[Exokernel] A minimal piece of privileged code, similar to
2105 a {\bf microkernel} but providing a more
2106 `hardware-like' interface to the tasks it
2107 manages. This is similar to a paravirtualising
2108 VMM like {\bf Xen} but was designed as a new
2109 operating system structure, rather than
2110 specifically to run multiple conventional OSs.
2112 \item[Domain] A domain is the execution context that
2113 contains a running {\bf virtual machine}.
2114 The relationship between virtual machines
2115 and domains on Xen is similar to that between
2116 programs and processes in an operating
2117 system: a virtual machine is a persistent
2118 entity that resides on disk (somewhat like
2119 a program). When it is loaded for execution,
2120 it runs in a domain. Each domain has a
2121 {\bf domain ID}.
2123 \item[Domain 0] The first domain to be started on a Xen
2124 machine. Domain 0 is responsible for managing
2125 the system.
2127 \item[Domain ID] A unique identifier for a {\bf domain},
2128 analogous to a process ID in an operating
2129 system.
2131 \item[Full virtualisation] An approach to virtualisation which
2132 requires no modifications to the hosted
2133 operating system, providing the illusion of
2134 a complete system of real hardware devices.
2136 \item[Hypervisor] An alternative term for {\bf VMM}, used
2137 because it means `beyond supervisor',
2138 since it is responsible for managing multiple
2139 `supervisor' kernels.
2141 \item[Live migration] A technique for moving a running virtual
2142 machine to another physical host, without
2143 stopping it or the services running on it.
2145 \item[Microkernel] A small base of code running at the highest
2146 hardware privilege level. A microkernel is
2147 responsible for sharing CPU and memory (and
2148 sometimes other devices) between less
2149 privileged tasks running on the system.
2150 This is similar to a VMM, particularly a
2151 {\bf paravirtualising} VMM, but typically
2152 addresses a different problem space and
2153 provides a different kind of interface.
2155 \item[NetBSD/Xen] A port of NetBSD to the Xen architecture.
2157 \item[Paravirtualisation] An approach to virtualisation which requires
2158 modifications to the operating system in
2159 order to run in a virtual machine. Xen
2160 uses paravirtualisation but preserves
2161 binary compatibility for user space
2162 applications.
2164 \item[Shadow pagetables] A technique for hiding the layout of machine
2165 memory from a virtual machine's operating
2166 system. Used in some {\bf VMMs} to provide
2167 the illusion of contiguous physical memory;
2168 in Xen it is used during
2169 {\bf live migration}.
2171 \item[Virtual Machine] The environment in which a hosted operating
2172 system runs, providing the abstraction of a
2173 dedicated machine. A virtual machine may
2174 be identical to the underlying hardware (as
2175 in {\bf full virtualisation}), or it may
2176 differ, as in {\bf paravirtualisation}.
2178 \item[VMM] Virtual Machine Monitor - the software that
2179 allows multiple virtual machines to be
2180 multiplexed on a single physical machine.
2182 \item[Xen] Xen is a paravirtualising virtual machine
2183 monitor, developed primarily by the
2184 Systems Research Group at the University
2185 of Cambridge Computer Laboratory.
2187 \item[XenLinux] Official name for the port of the Linux kernel
2188 that runs on Xen.
2190 \end{description}
2193 \end{document}
2196 %% Other stuff without a home
2198 %% Instructions Re Python API
2200 %% Other Control Tasks using Python
2201 %% ================================
2203 %% A Python module 'Xc' is installed as part of the tools-install
2204 %% process. This can be imported, and an 'xc object' instantiated, to
2205 %% provide access to privileged command operations:
2207 %% # import Xc
2208 %% # xc = Xc.new()
2209 %% # dir(xc)
2210 %% # help(xc.domain_create)
2212 %% In this way you can see that the class 'xc' contains useful
2213 %% documentation for you to consult.
2215 %% A further package of useful routines (xenctl) is also installed:
2217 %% # import xenctl.utils
2218 %% # help(xenctl.utils)
2220 %% You can use these modules to write your own custom scripts or you can
2221 %% customise the scripts supplied in the Xen distribution.
2225 % Explain about AGP GART
2228 %% If you're not intending to configure the new domain with an IP address
2229 %% on your LAN, then you'll probably want to use NAT. The
2230 %% 'xen_nat_enable' installs a few useful iptables rules into domain0 to
2231 %% enable NAT. [NB: We plan to support RSIP in future]
2236 %% Installing the file systems from the CD
2237 %% =======================================
2239 %% If you haven't got an existing Linux installation onto which you can
2240 %% just drop down the Xen and Xenlinux images, then the file systems on
2241 %% the CD provide a quick way of doing an install. However, you would be
2242 %% better off in the long run doing a proper install of your preferred
2243 %% distro and installing Xen onto that, rather than just doing the hack
2244 %% described below:
2246 %% Choose one or two partitions, depending on whether you want a separate
2247 %% /usr or not. Make file systems on it/them e.g.:
2248 %% mkfs -t ext3 /dev/hda3
2249 %% [or mkfs -t ext2 /dev/hda3 && tune2fs -j /dev/hda3 if using an old
2250 %% version of mkfs]
2252 %% Next, mount the file system(s) e.g.:
2253 %% mkdir /mnt/root && mount /dev/hda3 /mnt/root
2254 %% [mkdir /mnt/usr && mount /dev/hda4 /mnt/usr]
2256 %% To install the root file system, simply untar /usr/XenDemoCD/root.tar.gz:
2257 %% cd /mnt/root && tar -zxpf /usr/XenDemoCD/root.tar.gz
2259 %% You'll need to edit /mnt/root/etc/fstab to reflect your file system
2260 %% configuration. Changing the password file (etc/shadow) is probably a
2261 %% good idea too.
2263 %% To install the usr file system, copy the file system from CD on /usr,
2264 %% though leaving out the "XenDemoCD" and "boot" directories:
2265 %% cd /usr && cp -a X11R6 etc java libexec root src bin dict kerberos local sbin tmp doc include lib man share /mnt/usr
2267 %% If you intend to boot off these file systems (i.e. use them for
2268 %% domain 0), then you probably want to copy the /usr/boot directory on
2269 %% the cd over the top of the current symlink to /boot on your root
2270 %% filesystem (after deleting the current symlink) i.e.:
2271 %% cd /mnt/root ; rm boot ; cp -a /usr/boot .