\documentclass[11pt,twoside,final,openright]{xenstyle}
\usepackage{a4,graphicx,html,parskip,setspace,times,xspace}
\setstretch{1.15}

\def\Xend{{Xend}\xspace}
\def\xend{{xend}\xspace}

\latexhtml{\newcommand{\path}[1]{{\small {\tt #1}}}}{\newcommand{\path}[1]{{\tt #1}}}

\begin{document}
% TITLE PAGE
\pagestyle{empty}
\begin{center}
\vspace*{\fill}
\includegraphics{figs/xenlogo.eps}
\vfill
\vfill
\vfill
\begin{tabular}{l}
{\Huge \bf Users' manual} \\[4mm]
{\huge Xen v2.0 for x86} \\[80mm]

{\Large Xen is Copyright (c) 2002-2004, The Xen Team} \\[3mm]
{\Large University of Cambridge, UK} \\[20mm]
\end{tabular}
\end{center}

{\bf
DISCLAIMER: This documentation is currently under active development
and as such there may be mistakes and omissions --- watch out for
these and please report any you find to the developer's mailing list.
Contributions of material, suggestions and corrections are welcome.
}

\vfill
\cleardoublepage
% TABLE OF CONTENTS
\pagestyle{plain}
\pagenumbering{roman}
{ \parskip 0pt plus 1pt
\tableofcontents }
\cleardoublepage

% PREPARE FOR MAIN TEXT
\pagenumbering{arabic}
\raggedbottom
\widowpenalty=10000
\clubpenalty=10000
\parindent=0pt
\parskip=5pt
\renewcommand{\topfraction}{.8}
\renewcommand{\bottomfraction}{.8}
\renewcommand{\textfraction}{.2}
\renewcommand{\floatpagefraction}{.8}
\setstretch{1.1}
\part{Introduction and Tutorial}
\chapter{Introduction}

Xen is a {\em paravirtualising} virtual machine monitor (VMM), or
`hypervisor', for the x86 processor architecture. Xen can securely
execute multiple virtual machines on a single physical system with
close-to-native performance. The virtual machine technology
facilitates enterprise-grade functionality, including:

\begin{itemize}
\item Virtual machines with performance close to native
hardware.
\item Live migration of running virtual machines between physical hosts.
\item Excellent hardware support (supports most Linux device drivers).
\item Sandboxed, restartable device drivers.
\end{itemize}

Paravirtualisation permits very high performance virtualisation,
even on architectures like x86 that are traditionally
very hard to virtualise.
The drawback of this approach is that it requires operating systems to
be {\em ported} to run on Xen. Porting an OS to run on Xen is similar
to supporting a new hardware platform; however, the process
is simplified because the paravirtual machine architecture is very
similar to the underlying native hardware. Even though operating system
kernels must explicitly support Xen, a key feature is that user space
applications and libraries {\em do not} require modification.

Xen support is available for increasingly many operating systems:
right now, Linux 2.4, Linux 2.6 and NetBSD are available for Xen 2.0.
A FreeBSD port is undergoing testing and will be incorporated into the
release soon. Other OS ports, including Plan 9, are in progress. We
hope that the arch-xen patches will be incorporated into the
mainstream releases of these operating systems in due course (as has
already happened for NetBSD).

Possible usage scenarios for Xen include:
\begin{description}
\item [Kernel development.] Test and debug kernel modifications in a
sandboxed virtual machine --- no need for a separate test
machine.
\item [Multiple OS configurations.] Run multiple operating systems
simultaneously, for instance for compatibility or QA purposes.
\item [Server consolidation.] Move multiple servers onto a single
physical host with performance and fault isolation provided at
virtual machine boundaries.
\item [Cluster computing.] Management at VM granularity provides more
flexibility than separately managing each physical host, and
better control and isolation than single-system image solutions,
particularly by using live migration for load balancing.
\item [Hardware support for custom OSes.] Allow development of new OSes
while benefiting from the wide-ranging hardware support of
existing OSes such as Linux.
\end{description}
\section{Structure of a Xen-Based System}

A Xen system has multiple layers, the lowest and most privileged of
which is Xen itself.
Xen in turn may host multiple {\em guest} operating systems, each of
which is executed within a secure virtual machine (in Xen terminology,
a {\em domain}). Domains are scheduled by Xen to make effective use of
the available physical CPUs. Each guest OS manages its own
applications, which includes responsibility for scheduling each
application within the time allotted to the VM by Xen.

The first domain, {\em domain 0}, is created automatically when the
system boots and has special management privileges. Domain 0 builds
other domains and manages their virtual devices. It also performs
administrative tasks such as suspending, resuming and migrating other
virtual machines.

Within domain 0, a process called \xend runs to manage the system.
\Xend is responsible for managing virtual machines and providing access
to their consoles. Commands are issued to \xend over an HTTP
interface, either from a command-line tool or from a web browser.
\section{Hardware Support}

Xen currently runs only on the x86 architecture, requiring a `P6' or
newer processor (e.g. Pentium Pro, Celeron, Pentium II, Pentium III,
Pentium IV, Xeon, AMD Athlon, AMD Duron). Multiprocessor machines are
supported, and we also have basic support for HyperThreading (SMT),
although this remains a topic for ongoing research. A port
specifically for x86/64 is in progress, although Xen already runs on
such systems in 32-bit legacy mode. In addition, a port to the IA64
architecture is approaching completion. We hope to add other
architectures such as PPC and ARM in due course.

Xen can currently use up to 4GB of memory. It is possible for x86
machines to address up to 64GB of physical memory but there are no
current plans to support these systems: the x86/64 port is the
planned route to supporting larger memory sizes.

Xen offloads most of the hardware support issues to the guest OS
running in Domain 0. Xen itself contains only the code required to
detect and start secondary processors, set up interrupt routing, and
perform PCI bus enumeration. Device drivers run within a privileged
guest OS rather than within Xen itself. This approach provides
compatibility with the majority of device hardware supported by Linux.
The default XenLinux build contains support for relatively modern
server-class network and disk hardware, but you can add support for
other hardware by configuring your XenLinux kernel in the normal way.
\section{History}

Xen was originally developed by the Systems Research Group at the
University of Cambridge Computer Laboratory as part of the XenoServers
project, funded by the UK-EPSRC.
XenoServers aim to provide a `public infrastructure for
global distributed computing', and Xen plays a key part in that,
allowing us to efficiently partition a single machine to enable
multiple independent clients to run their operating systems and
applications in an environment providing protection, resource
isolation and accounting. The project web page contains further
information along with pointers to papers and technical reports:
\path{http://www.cl.cam.ac.uk/xeno}

Xen has since grown into a fully-fledged project in its own right,
enabling us to investigate interesting research issues regarding the
best techniques for virtualising resources such as the CPU, memory,
disk and network. The project has been bolstered by support from
Intel Research Cambridge and HP Labs, who are now working closely
with us.

Xen was first described in a paper presented at SOSP in
189 2003\footnote{\tt
190 http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf}, and the first
191 public release (1.0) was made that October. Since then, Xen has
192 significantly matured and is now used in production scenarios on
193 many sites.
195 Xen 2.0 features greatly enhanced hardware support, configuration
196 flexibility, usability and a larger complement of supported operating
197 systems. This latest release takes Xen a step closer to becoming the
198 definitive open source solution for virtualisation.
\chapter{Installation}

The Xen distribution includes three main components: Xen itself, ports
of Linux 2.4 and 2.6 and NetBSD to run on Xen, and the user-space
tools required to manage a Xen-based system. This chapter describes
how to install the Xen 2.0 distribution from source. Alternatively,
there may be pre-built packages available as part of your operating
system distribution.
\section{Prerequisites}
\label{sec:prerequisites}

The following is a full list of prerequisites. Items marked `$\dag$'
are required by the \xend control tools, and hence required if you
want to run more than one virtual machine; items marked `$*$' are only
required if you wish to build from source.
\begin{itemize}
\item A working Linux distribution using the GRUB bootloader and
running on a P6-class (or newer) CPU.
\item [$\dag$] The \path{iproute2} package.
\item [$\dag$] The Linux bridge-utils\footnote{Available from
{\tt http://bridge.sourceforge.net}} (e.g., \path{/sbin/brctl}).
\item [$\dag$] An installation of Twisted v1.3 or
above\footnote{Available from {\tt
http://www.twistedmatrix.com}}. There may be a binary package
available for your distribution; alternatively it can be installed by
running `{\sl make install-twisted}' in the root of the Xen source
tree.
\item [$*$] Build tools (gcc v3.2.x or v3.3.x, binutils, GNU make).
\item [$*$] Development installation of libcurl (e.g., libcurl-devel).
\item [$*$] Development installation of zlib (e.g., zlib-dev).
\item [$*$] Development installation of Python v2.2 or later (e.g., python-dev).
\item [$*$] \LaTeX, transfig and tgif are required to build the documentation.
\end{itemize}

Once you have satisfied the relevant prerequisites, you can
now install either a binary or source distribution of Xen.

\section{Installing from Binary Tarball}

Pre-built tarballs are available for download from the Xen
download page
\begin{quote}
{\tt http://xen.sf.net}
\end{quote}

Once you've downloaded the tarball, simply unpack and install:
\begin{verbatim}
# tar zxvf xen-2.0-install.tgz
# cd xen-2.0-install
# sh ./install.sh
\end{verbatim}

Once you've installed the binaries you need to configure
your system as described in Section~\ref{s:configure}.
\section{Installing from Source}

This section describes how to obtain, build, and install
Xen from source.

\subsection{Obtaining the Source}

The Xen source tree is available as either a compressed source
tarball or as a clone of our master BitKeeper repository.
\begin{description}
\item[Obtaining the Source Tarball]\mbox{} \\
Stable versions (and daily snapshots) of the Xen source tree are
available as compressed tarballs from the Xen download page
\begin{quote}
{\tt http://xen.sf.net}
\end{quote}

\item[Using BitKeeper]\mbox{} \\
If you wish to install Xen from a clone of our latest BitKeeper
repository then you will need to install the BitKeeper tools.
Download instructions for BitKeeper can be obtained by filling out the
form at:

\begin{quote}
{\tt http://www.bitmover.com/cgi-bin/download.cgi}
\end{quote}
The public master BK repository for the 2.0 release lives at:
\begin{quote}
{\tt bk://xen.bkbits.net/xen-2.0.bk}
\end{quote}
You can use BitKeeper to
download it and keep it updated with the latest features and fixes.

Change to the directory in which you want to put the source code, then
run:
\begin{verbatim}
# bk clone bk://xen.bkbits.net/xen-2.0.bk
\end{verbatim}

Under your current directory, a new directory named \path{xen-2.0.bk}
has been created, which contains all the source code for Xen, the OS
ports, and the control tools. You can update your repository with the
latest changes at any time by running:
\begin{verbatim}
# cd xen-2.0.bk # to change into the local repository
# bk pull # to update the repository
\end{verbatim}
\end{description}
%\section{The distribution}
%
%The Xen source code repository is structured as follows:
%
%\begin{description}
%\item[\path{tools/}] Xen node controller daemon (Xend), command line tools,
% control libraries
%\item[\path{xen/}] The Xen VMM.
%\item[\path{linux-*-xen-sparse/}] Xen support for Linux.
%\item[\path{linux-*-patches/}] Experimental patches for Linux.
%\item[\path{netbsd-*-xen-sparse/}] Xen support for NetBSD.
%\item[\path{docs/}] Various documentation files for users and developers.
%\item[\path{extras/}] Bonus extras.
%\end{description}
\subsection{Building from Source}

The top-level Xen Makefile includes a target `world' that will do the
following:

\begin{itemize}
\item Build Xen.
\item Build the control tools, including \xend.
\item Download (if necessary) and unpack the Linux 2.6 source code,
and patch it for use with Xen.
\item Build a Linux kernel to use in domain 0 and a smaller
unprivileged kernel, which can optionally be used for
unprivileged virtual machines.
\end{itemize}
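To perform all of the above steps, invoke this target from the root of
the Xen source tree:
\begin{quote}
\begin{verbatim}
# make world
\end{verbatim}
\end{quote}
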
After the build has completed you should have a top-level
directory called \path{dist/} in which all resulting targets
will be placed; of particular interest are the two
XenLinux kernel images, one with a `-xen0' extension
which contains hardware device drivers and drivers for Xen's virtual
devices, and one with a `-xenU' extension that just contains the
virtual ones. These are found in \path{dist/install/boot/} along
with the image for Xen itself and the configuration files used
during the build.

The NetBSD port can be built using:
\begin{quote}
\begin{verbatim}
# make netbsd20
\end{verbatim}
\end{quote}
The NetBSD port is built using a snapshot of the netbsd-2-0 CVS branch.
The snapshot is downloaded as part of the build process, if it is not
yet present in the \path{NETBSD\_SRC\_PATH} search path. The build
process also downloads a toolchain which includes all the tools
necessary to build the NetBSD kernel under Linux.

To further customize the set of kernels built you need to edit
the top-level Makefile. Look for the line:

\begin{quote}
\begin{verbatim}
KERNELS ?= mk.linux-2.6-xen0 mk.linux-2.6-xenU
\end{verbatim}
\end{quote}

You can edit this line to include any set of operating system kernels
which have configurations in the top-level \path{buildconfigs/}
directory, for example {\tt mk.linux-2.4-xenU} to build a Linux 2.4
kernel containing only virtual device drivers.
%% Inspect the Makefile if you want to see what goes on during a build.
%% Building Xen and the tools is straightforward, but XenLinux is more
%% complicated. The makefile needs a `pristine' Linux kernel tree to which
%% it will then add the Xen architecture files. You can tell the
%% makefile the location of the appropriate Linux compressed tar file by
%% setting the LINUX\_SRC environment variable, e.g. \\
%% \verb!# LINUX_SRC=/tmp/linux-2.6.9.tar.bz2 make world! \\ or by
%% placing the tar file somewhere in the search path of {\tt
%% LINUX\_SRC\_PATH} which defaults to `{\tt .:..}'. If the makefile
%% can't find a suitable kernel tar file it attempts to download it from
%% kernel.org (this won't work if you're behind a firewall).

%% After untaring the pristine kernel tree, the makefile uses the {\tt
%% mkbuildtree} script to add the Xen patches to the kernel.

%% The procedure is similar to build the Linux 2.4 port: \\
%% \verb!# LINUX_SRC=/path/to/linux2.4/source make linux24!

%% \framebox{\parbox{5in}{
%% {\bf Distro specific:} \\
%% {\it Gentoo} --- if not using udev (most installations, currently), you'll need
%% to enable devfs and devfs mount at boot time in the xen0 config.
%% }}
\subsection{Custom XenLinux Builds}

% If you have an SMP machine you may wish to give the {\tt '-j4'}
% argument to make to get a parallel build.

If you wish to build a customized XenLinux kernel (e.g. to support
additional devices or enable distribution-required features), you can
use the standard Linux configuration mechanisms, specifying that the
architecture being built for is \path{xen}, e.g.:
\begin{quote}
\begin{verbatim}
# cd linux-2.6.9-xen0
# make ARCH=xen xconfig
# cd ..
# make
\end{verbatim}
\end{quote}

You can also copy an existing Linux configuration (\path{.config})
into \path{linux-2.6.9-xen0} and execute:
\begin{quote}
\begin{verbatim}
# make ARCH=xen oldconfig
\end{verbatim}
\end{quote}

You may be prompted with some Xen-specific options; we
advise accepting the defaults for these options.

Note that the only difference between the two types of Linux kernel
that are built is the configuration file used for each. The `U'
suffixed (unprivileged) versions don't contain any of the physical
hardware device drivers, leading to a 30\% reduction in size; hence
you may prefer these for your non-privileged domains. The `0'
suffixed privileged versions can be used to boot the system, as well
as in driver domains and unprivileged domains.
\subsection{Installing the Binaries}

The files produced by the build process are stored under the
\path{dist/install/} directory. To install them in their default
locations, do:
\begin{quote}
\begin{verbatim}
# make install
\end{verbatim}
\end{quote}

Alternatively, users with special installation requirements may wish
to install them manually by copying the files to their appropriate
destinations.

%% Files in \path{install/boot/} include:
%% \begin{itemize}
%% \item \path{install/boot/xen.gz} The Xen 'kernel'
%% \item \path{install/boot/vmlinuz-2.6.9-xen0} Domain 0 XenLinux kernel
%% \item \path{install/boot/vmlinuz-2.6.9-xenU} Unprivileged XenLinux kernel
%% \end{itemize}

The \path{dist/install/boot} directory will also contain the config files
used for building the XenLinux kernels, and also versions of Xen and
XenLinux kernels that contain debug symbols (\path{xen-syms} and
\path{vmlinux-syms-2.6.9-xen0}) which are essential for interpreting crash
dumps. Retain these files as the developers may wish to see them if
you post on the mailing list.
\section{Configuration}
\label{s:configure}
Once you have built and installed the Xen distribution, it is
simple to prepare the machine for booting and running Xen.

\subsection{GRUB Configuration}

An entry should be added to \path{grub.conf} (often found under
\path{/boot/} or \path{/boot/grub/}) to allow Xen / XenLinux to boot.
This file is sometimes called \path{menu.lst}, depending on your
distribution. The entry should look something like the following:

{\small
\begin{verbatim}
title Xen 2.0 / XenLinux 2.6.9
kernel /boot/xen.gz dom0_mem=131072
module /boot/vmlinuz-2.6.9-xen0 root=/dev/sda4 ro console=tty0
\end{verbatim}
}

The kernel line tells GRUB where to find Xen itself and what boot
parameters should be passed to it (in this case, setting domain 0's
memory allocation). For more
details on the various Xen boot parameters see Section~\ref{s:xboot}.

The module line of the configuration describes the location of the
XenLinux kernel that Xen should start and the parameters that should
be passed to it (these are standard Linux parameters, identifying the
root device and specifying it be initially mounted read only and
instructing that console output be sent to the screen). Some
distributions such as SuSE do not require the \path{ro} parameter.

%% \framebox{\parbox{5in}{
%% {\bf Distro specific:} \\
%% {\it SuSE} --- Omit the {\tt ro} option from the XenLinux kernel
%% command line, since the partition won't be remounted rw during boot.
%% }}

If you want to use an initrd, just add another \path{module} line to
the configuration, as usual:
{\small
\begin{verbatim}
module /boot/my_initrd.gz
\end{verbatim}
}

As always when installing a new kernel, it is recommended that you do
not delete existing menu options from \path{menu.lst} --- you may want
to boot your old Linux kernel in future, particularly if you
have problems.
\subsection{Serial Console (optional)}

%% kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1
%% module /boot/vmlinuz-2.6.9-xen0 root=/dev/sda4 ro

In order to configure Xen serial console output, it is necessary to add
a boot option to your GRUB config; e.g. replace the above kernel line
with:
\begin{quote}
{\small
\begin{verbatim}
kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1
\end{verbatim}}
\end{quote}

This configures Xen to output on COM1 at 115,200 baud, 8 data bits,
1 stop bit and no parity. Modify these parameters for your setup.

One can also configure XenLinux to share the serial console; to
achieve this append ``\path{console=ttyS0}'' to your
module line.

If you wish to be able to log in over the XenLinux serial console it
is necessary to add a line into \path{/etc/inittab}, just as per
regular Linux. Simply add the line:
\begin{quote}
{\small
{\tt c:2345:respawn:/sbin/mingetty ttyS0}
}
\end{quote}

and you should be able to log in. Note that to successfully log in
as root over the serial line will require adding \path{ttyS0} to
\path{/etc/securetty} in most modern distributions.
\subsection{TLS Libraries}

Users of the XenLinux 2.6 kernel should disable Thread Local Storage
(e.g. by doing a \path{mv /lib/tls /lib/tls.disabled}) before
attempting to run with a XenLinux kernel. You can always re-enable it
by restoring the directory to its original location (i.e.
\path{mv /lib/tls.disabled /lib/tls}).
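For example, to disable TLS before booting into XenLinux:
\begin{quote}
\begin{verbatim}
# mv /lib/tls /lib/tls.disabled
\end{verbatim}
\end{quote}
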
The reason for this is that the current TLS implementation uses
segmentation in a way that is not permissible under Xen. If TLS is
not disabled, an emulation mode is used within Xen which reduces
performance substantially.

We hope that this issue can be resolved by working with Linux
distribution vendors to implement a minor backward-compatible change
to the TLS library.
\section{Booting Xen}

It should now be possible to restart the system and use Xen. Reboot
as usual but choose the new Xen option when the GRUB screen appears.

What follows should look much like a conventional Linux boot. The
first portion of the output comes from Xen itself, supplying low level
information about itself and the machine it is running on. The
following portion of the output comes from XenLinux.

You may see some errors during the XenLinux boot. These are not
necessarily anything to worry about --- they may result from kernel
configuration differences between your XenLinux kernel and the one you
usually use.

When the boot completes, you should be able to log into your system as
usual. If you are unable to log in to your system running Xen, you
should still be able to reboot with your normal Linux kernel.
\chapter{Starting Additional Domains}

The first step in creating a new domain is to prepare a root
filesystem for it to boot off. Typically, this might be stored in a
normal partition, an LVM or other volume manager partition, a disk
file or on an NFS server. A simple way to do this is to boot
from your standard OS install CD and install the distribution into
another partition on your hard drive.

To start the \xend control daemon, type
\begin{quote}
\verb!# xend start!
\end{quote}
If you
wish the daemon to start automatically, see the instructions in
Section~\ref{s:xend}. Once the daemon is running, you can use the
\path{xm} tool to monitor and maintain the domains running on your
system. This chapter provides only a brief tutorial: we provide full
details of the \path{xm} tool in the next chapter.
%\section{From the web interface}
%
%Boot the Xen machine and start Xensv (see Chapter~\ref{cha:xensv} for
%more details) using the command: \\
%\verb_# xensv start_ \\
%This will also start Xend (see Chapter~\ref{cha:xend} for more information).
%
%The domain management interface will then be available at {\tt
%http://your\_machine:8080/}. This provides a user friendly wizard for
%starting domains and functions for managing running domains.
%
%\section{From the command line}
\section{Creating a Domain Configuration File}

Before you can start an additional domain, you must create a
configuration file. We provide two example files which you
can use as a starting point:
\begin{itemize}
\item \path{/etc/xen/xmexample1} is a simple template configuration file
for describing a single VM.

\item \path{/etc/xen/xmexample2} is a template description that
is intended to be reused for multiple virtual machines. Setting
the value of the \path{vmid} variable on the \path{xm} command line
fills in parts of this template.
\end{itemize}

Copy one of these files and edit it as appropriate.
Typical values you may wish to edit include:

\begin{quote}
\begin{description}
\item[kernel] Set this to the path of the kernel you compiled for use
with Xen (e.g.\ \path{kernel = '/boot/vmlinuz-2.6.9-xenU'})
\item[memory] Set this to the size of the domain's memory in
megabytes (e.g.\ \path{memory = 64})
\item[disk] Set the first entry in this list to calculate the offset
of the domain's root partition, based on the domain ID. Set the
second to the location of \path{/usr} if you are sharing it between
domains (e.g.\ \path{disk = ['phy:your\_hard\_drive\%d,sda1,w' \%
(base\_partition\_number + vmid), 'phy:your\_usr\_partition,sda6,r' ]})
\item[dhcp] Uncomment the dhcp variable, so that the domain will
receive its IP address from a DHCP server (e.g.\ \path{dhcp='dhcp'})
\end{description}
\end{quote}

You may also want to edit the {\bf vif} variable in order to choose
the MAC address of the virtual ethernet interface yourself. For
example:
\begin{quote}
\verb_vif = ['mac=00:06:AA:F6:BB:B3']_
\end{quote}
If you do not set this variable, \xend will automatically generate a
random MAC address from an unused range.
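Putting these variables together, a minimal configuration file might
look like the following sketch (the kernel path, domain name and
partition names are illustrative and should be adjusted for your
system):
\begin{quote}
\begin{verbatim}
kernel = '/boot/vmlinuz-2.6.9-xenU'
memory = 64
name = 'myvm'
disk = ['phy:hda3,sda1,w']
vif = ['mac=00:06:AA:F6:BB:B3']
\end{verbatim}
\end{quote}
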
\section{Booting the Domain}

The \path{xm} tool provides a variety of commands for managing domains.
Use the \path{create} command to start new domains. Assuming you've
created a configuration file \path{myvmconf} based around
\path{/etc/xen/xmexample2}, to start a domain with virtual
machine ID~1 you should type:

\begin{quote}
\begin{verbatim}
# xm create -c myvmconf vmid=1
\end{verbatim}
\end{quote}

The \path{-c} switch causes \path{xm} to turn into the domain's
console after creation. The \path{vmid=1} sets the \path{vmid}
variable used in the \path{myvmconf} file.

You should see the console boot messages from the new domain
appearing in the terminal in which you typed the command,
culminating in a login prompt.
\section{Example: ttylinux}

Ttylinux is a very small Linux distribution, designed to require very
few resources. We will use it as a concrete example of how to start a
Xen domain. Most users will probably want to install a full-featured
distribution once they have mastered the basics.

\begin{enumerate}
\item Download and extract the ttylinux disk image from the Files
section of the project's SourceForge site (see
\path{http://sf.net/projects/xen/}).
\item Create a configuration file like the following:
\begin{verbatim}
kernel = "/boot/vmlinuz-2.6.9-xenU"
memory = 64
name = "ttylinux"
nics = 1
ip = "1.2.3.4"
disk = ['file:/path/to/ttylinux/rootfs,sda1,w']
root = "/dev/sda1 ro"
\end{verbatim}
\item Now start the domain and connect to its console:
\begin{verbatim}
xm create configfile -c
\end{verbatim}
\item Log in as root, password root.
\end{enumerate}
\section{Starting / Stopping Domains Automatically}

It is possible to have certain domains start automatically at boot
time and to have dom0 wait for all running domains to shutdown before
it shuts down the system.

To specify that a domain should start at boot-time, place its
configuration file (or a link to it) under \path{/etc/xen/auto/}.
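For example, to register the \path{myvmconf} file used earlier (the
source path is illustrative):
\begin{quote}
\verb_# ln -s /etc/xen/myvmconf /etc/xen/auto/_
\end{quote}
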
A Sys-V style init script for RedHat and LSB-compliant systems is
provided and will be automatically copied to \path{/etc/init.d/}
during install. You can then enable it in the appropriate way for
your distribution.

For instance, on RedHat:

\begin{quote}
\verb_# chkconfig --add xendomains_
\end{quote}

By default, this will start the boot-time domains in runlevels 3, 4
and 5.

You can also use the \path{service} command to run this script
manually, e.g.:

\begin{quote}
\verb_# service xendomains start_

Starts all the domains with config files under /etc/xen/auto/.
\end{quote}

\begin{quote}
\verb_# service xendomains stop_

Shuts down ALL running Xen domains.
\end{quote}
\chapter{Domain Management Tools}

The previous chapter described a simple example of how to configure
and start a domain. This chapter summarises the tools available to
manage running domains.

\section{Command-line Management}

Command line management tasks are also performed using the \path{xm}
tool. For online help for the commands available, type:
\begin{quote}
\verb_# xm help_
\end{quote}

You can also type \path{xm help $<$command$>$} for more information
on a given command.
\subsection{Basic Management Commands}

The most important \path{xm} commands are:
\begin{quote}
\verb_# xm list_: Lists all domains running.\\
\verb_# xm consoles_: Gives information about the domain consoles.\\
\verb_# xm console_: Opens a console to a domain (e.g.\
\verb_# xm console myVM_)
\end{quote}
\subsection{\tt xm list}

The output of \path{xm list} is in rows of the following format:
\begin{center}
{\tt name domid memory cpu state cputime console}
\end{center}

\begin{quote}
\begin{description}
\item[name] The descriptive name of the virtual machine.
\item[domid] The number of the domain ID this virtual machine is running in.
\item[memory] Memory size in megabytes.
\item[cpu] The CPU this domain is running on.
\item[state] Domain state consists of 5 fields:
\begin{description}
\item[r] running
\item[b] blocked
\item[p] paused
\item[s] shutdown
\item[c] crashed
\end{description}
\item[cputime] How much CPU time (in seconds) the domain has used so far.
\item[console] TCP port accepting connections to the domain's console.
\end{description}
\end{quote}

The \path{xm list} command also supports a long output format when the
\path{-l} switch is used. This outputs the full details of the
running domains in \xend's SXP configuration format.

For example, suppose the system is running the ttylinux domain as
described earlier. The list command should produce output somewhat
like the following:
\begin{verbatim}
# xm list
Name              Id  Mem(MB)  CPU  State  Time(s)  Console
Domain-0           0      251    0  r----    172.2
ttylinux           5       63    0  -b---      3.0     9605
\end{verbatim}

Here we can see the details for the ttylinux domain, as well as for
domain 0 (which, of course, is always running). Note that the console
port for the ttylinux domain is 9605. This can be connected to by TCP
using a terminal program (e.g. \path{telnet} or, better,
\path{xencons}). The simplest way to connect is to use the \path{xm console}
command, specifying the domain name or ID. To connect to the console
of the ttylinux domain, we could use:
\begin{verbatim}
# xm console ttylinux
\end{verbatim}
or:
\begin{verbatim}
# xm console 5
\end{verbatim}
or:
\begin{verbatim}
# xencons localhost 9605
\end{verbatim}
\section{Domain Save and Restore}

The administrator of a Xen system may suspend a virtual machine's
current state into a disk file in domain 0, allowing it to be resumed
at a later time.

The ttylinux domain described earlier can be suspended to disk using
the command:
\begin{verbatim}
# xm save ttylinux ttylinux.xen
\end{verbatim}

This will stop the domain named `ttylinux' and save its current state
into a file called \path{ttylinux.xen}.

To resume execution of this domain, use the \path{xm restore} command:
\begin{verbatim}
# xm restore ttylinux.xen
\end{verbatim}

This will restore the state of the domain and restart it. The domain
will carry on as before and the console may be reconnected using the
\path{xm console} command, as above.
\section{Live Migration}

Live migration is used to transfer a domain between physical hosts
whilst that domain continues to perform its usual activities --- from
the user's perspective, the migration should be imperceptible.

To perform a live migration, both hosts must be running Xen / \xend and
the destination host must have sufficient resources (e.g. memory
capacity) to accommodate the domain after the move. Furthermore we
currently require both source and destination machines to be on the
same L2 subnet.

Currently, there is no support for providing automatic remote access
to filesystems stored on local disk when a domain is migrated.
Administrators should choose an appropriate storage solution
(i.e. SAN, NAS, etc.) to ensure that domain filesystems are also
available on their destination node. GNBD is a good method for
exporting a volume from one machine to another. iSCSI can do a similar
job, but is more complex to set up.

When a domain migrates, its MAC and IP address move with it; thus it
is only possible to migrate VMs within the same layer-2 network and IP
subnet. If the destination node is on a different subnet, the
administrator would need to manually configure a suitable etherip or
IP tunnel in the domain 0 of the remote node.

A domain may be migrated using the \path{xm migrate} command. To
live migrate a domain to another machine, we would use
the command:

\begin{verbatim}
# xm migrate --live mydomain destination.ournetwork.com
\end{verbatim}

Without the {\tt --live} flag, \xend simply stops the domain and
copies the memory image over to the new node and restarts it. Since
domains can have large allocations this can be quite time consuming,
even on a Gigabit network. With the {\tt --live} flag \xend attempts
to keep the domain running while the migration is in progress,
resulting in typical `downtimes' of just 60--300ms.

For now it will be necessary to reconnect to the domain's console on
the new machine using the \path{xm console} command. If a migrated
domain has any open network connections then they will be preserved,
so SSH connections do not have this limitation.
\section{Managing Domain Memory}

XenLinux domains have the ability to relinquish / reclaim machine
memory at the request of the administrator or the user of the domain.

\subsection{Setting memory footprints from dom0}

The machine administrator can request that a domain alter its memory
footprint using the \path{xm balloon} command. For instance, we can
request that our example ttylinux domain reduce its memory footprint
to 32 megabytes:

\begin{verbatim}
# xm balloon ttylinux 32
\end{verbatim}

We can now see the result of this in the output of \path{xm list}:

\begin{verbatim}
# xm list
Name              Id  Mem(MB)  CPU  State  Time(s)  Console
Domain-0           0      251    0  r----    172.2
ttylinux           5       31    0  -b---      4.3     9605
\end{verbatim}

The domain has responded to the request by returning memory to Xen. We
can restore the domain to its original size using the command line:

\begin{verbatim}
# xm balloon ttylinux 64
\end{verbatim}
\subsection{Setting memory footprints from within a domain}

The virtual file \path{/proc/xen/memory\_target} allows the owner of a
domain to adjust their own memory footprint. Reading the file
(e.g. \path{cat /proc/xen/memory\_target}) prints out the current
memory footprint of the domain. Writing the file
(e.g. \path{echo new\_target > /proc/xen/memory\_target}) requests
that the kernel adjust the domain's memory footprint to a new value.
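For example, from within the domain (where {\tt new\_target} stands
for the desired footprint):
\begin{quote}
\begin{verbatim}
# cat /proc/xen/memory_target
# echo new_target > /proc/xen/memory_target
\end{verbatim}
\end{quote}
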
\subsection{Setting memory limits}

Xen associates a memory size limit with each domain. By default, this
is the amount of memory the domain is originally started with,
preventing the domain from ever growing beyond this size. To permit a
domain to grow beyond its original allocation or to prevent a domain
you've shrunk from reclaiming the memory it relinquished, use the
\path{xm maxmem} command.
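For example, to allow the ttylinux domain to grow to 128 megabytes (a
sketched invocation, assuming the limit is given in megabytes by
analogy with \path{xm balloon}; check \path{xm help maxmem} for the
exact syntax):
\begin{quote}
\begin{verbatim}
# xm maxmem ttylinux 128
\end{verbatim}
\end{quote}
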
\chapter{Domain Filesystem Storage}

It is possible to directly export any Linux block device in dom0 to
another domain, or to export filesystems / devices to virtual machines
using standard network protocols (e.g. NBD, iSCSI, NFS, etc). This
chapter covers some of the possibilities.

\section{Exporting Physical Devices as VBDs}

One of the simplest configurations is to directly export
individual partitions from domain 0 to other domains. To
achieve this use the \path{phy:} specifier in your domain
configuration file. For example, a line like
\begin{quote}
\verb_disk = ['phy:hda3,sda1,w']_
\end{quote}
specifies that the partition \path{/dev/hda3} in domain 0
should be exported read-write to the new domain as \path{/dev/sda1};
one could equally well export it as \path{/dev/hda} or
\path{/dev/sdb5} should one wish.

In addition to local disks and partitions, it is possible to export
any device that Linux considers to be ``a disk'' in the same manner.
For example, if you have iSCSI disks or GNBD volumes imported into
domain 0 you can export these to other domains using the \path{phy:}
disk syntax. E.g.:
\begin{quote}
\verb_disk = ['phy:vg/lvm1,sda2,w']_
\end{quote}

\begin{center}
\framebox{\bf Warning: Block device sharing}
\end{center}
\begin{quote}
Block devices should typically only be shared between domains in a
read-only fashion otherwise the Linux kernel's file systems will get
very confused as the file system structure may change underneath them
(having the same ext3 partition mounted rw twice is a sure-fire way to
cause irreparable damage)! \Xend will attempt to prevent you from
doing this by checking that the device is not mounted read-write in
domain 0, and hasn't already been exported read-write to another
domain.
If you want read-write sharing, export the directory to other domains
via NFS from domain0 (or use a cluster file system such as GFS or
ocfs2).
\end{quote}
\section{Using File-backed VBDs}

It is also possible to use a file in Domain 0 as the primary storage
for a virtual machine. As well as being convenient, this also has the
advantage that the virtual block device will be {\em sparse} --- space
will only really be allocated as parts of the file are used. So if a
virtual machine uses only half of its disk space then the file really
takes up half of the size allocated.

For example, to create a 2GB sparse file-backed virtual block device
(actually only consumes 1KB of disk):
\begin{quote}
\verb_# dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1_
\end{quote}

Make a file system in the disk file:
\begin{quote}
\verb_# mkfs -t ext3 vm1disk_
\end{quote}

(when the tool asks for confirmation, answer `y')

Populate the file system, e.g. by copying from the current root:
\begin{quote}
\begin{verbatim}
# mount -o loop vm1disk /mnt
# cp -ax /{root,dev,var,etc,usr,bin,sbin,lib} /mnt
# mkdir /mnt/{proc,sys,home,tmp}
\end{verbatim}
\end{quote}

Tailor the file system by editing \path{/etc/fstab},
\path{/etc/hostname}, etc. (don't forget to edit the files in the
mounted file system, instead of your domain 0 filesystem, e.g. you
would edit \path{/mnt/etc/fstab} instead of \path{/etc/fstab}). For
this example, use \path{/dev/sda1} as the root device in fstab.

Now unmount (this is important!):
\begin{quote}
\verb_# umount /mnt_
\end{quote}

In the configuration file set:
\begin{quote}
\verb_disk = ['file:/full/path/to/vm1disk,sda1,w']_
\end{quote}

As the virtual machine writes to its `disk', the sparse file will be
filled in and consume more space up to the original 2GB.
\section{Using LVM-backed VBDs}

A particularly appealing solution is to use LVM volumes
as backing for domain file-systems since this allows dynamic
growing/shrinking of volumes as well as snapshot and other
features.

To initialise a partition to support LVM volumes:
\begin{quote}
\begin{verbatim}
# pvcreate /dev/sda10
\end{verbatim}
\end{quote}

Create a volume group named `vg' on the physical partition:
\begin{quote}
\begin{verbatim}
# vgcreate vg /dev/sda10
\end{verbatim}
\end{quote}

Create a logical volume of size 4GB named `myvmdisk1':
\begin{quote}
\begin{verbatim}
# lvcreate -L4096M -n myvmdisk1 vg
\end{verbatim}
\end{quote}

You should now see that you have a \path{/dev/vg/myvmdisk1}.
Make a filesystem, mount it and populate it, e.g.:
\begin{quote}
\begin{verbatim}
# mkfs -t ext3 /dev/vg/myvmdisk1
# mount /dev/vg/myvmdisk1 /mnt
# cp -ax / /mnt
# umount /mnt
\end{verbatim}
\end{quote}

Now configure your VM with the following disk configuration:
\begin{quote}
\begin{verbatim}
disk = [ 'phy:vg/myvmdisk1,sda1,w' ]
\end{verbatim}
\end{quote}

LVM enables you to grow the size of logical volumes, but you'll need
to resize the corresponding file system to make use of the new
space. Some file systems (e.g. ext3) now support on-line resize. See
the LVM manuals for more details.

You can also use LVM for creating copy-on-write clones of LVM
volumes (known as writable persistent snapshots in LVM
terminology). This facility is new in Linux 2.6.8, so isn't as
stable as one might hope. In particular, using lots of CoW LVM
disks consumes a lot of dom0 memory, and error conditions such as
running out of disk space are not handled well. Hopefully this
will improve in future.

To create two copy-on-write clones of the above file system you
would use the following commands:

\begin{quote}
\begin{verbatim}
# lvcreate -s -L1024M -n myclonedisk1 /dev/vg/myvmdisk1
# lvcreate -s -L1024M -n myclonedisk2 /dev/vg/myvmdisk1
\end{verbatim}
\end{quote}

Each of these can grow to have 1GB of differences from the master
volume. You can grow the amount of space for storing the
differences using the lvextend command, e.g.:
\begin{quote}
\begin{verbatim}
# lvextend -L+100M /dev/vg/myclonedisk1
\end{verbatim}
\end{quote}

Don't let the `differences volume' ever fill up, otherwise LVM gets
rather confused. It may be possible to automate the growing
process by using \path{dmsetup wait} to spot the volume getting full
and then issue an \path{lvextend}.
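A rough, untested sketch of such a watcher script follows; it assumes
the snapshot's device-mapper name is {\tt vg-myclonedisk1} and simply
grows the volume on every device event:
\begin{quote}
\begin{verbatim}
#!/bin/sh
# Rough sketch: grow the snapshot each time the device reports
# an event. A robust version would track the event counter and
# check how full the snapshot actually is before extending it.
while true; do
    dmsetup wait vg-myclonedisk1
    lvextend -L+100M /dev/vg/myclonedisk1
done
\end{verbatim}
\end{quote}
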
In principle, it is possible to continue writing to the volume
that has been cloned (the changes will not be visible to the
clones), but we wouldn't recommend this: have the cloned volume
as a `pristine' file system install that isn't mounted directly
by any of the virtual machines.
\section{Using NFS Root}

First, populate a root filesystem in a directory on the server
machine. This can be on a distinct physical machine, or simply
run within a virtual machine on the same node.

Now configure the NFS server to export this filesystem over the
network by adding a line to \path{/etc/exports}, for instance:

\begin{quote}
\begin{verbatim}
/export/vm1root 1.2.3.4/24(rw,sync,no_root_squash)
\end{verbatim}
\end{quote}

Finally, configure the domain to use NFS root. In addition to the
normal variables, you should make sure to set the following values in
the domain's configuration file:

\begin{quote}
\begin{small}
\begin{verbatim}
root = '/dev/nfs'
nfs_server = '2.3.4.5' # substitute IP address of server
nfs_root = '/path/to/root' # path to root FS on the server
\end{verbatim}
\end{small}
\end{quote}

The domain will need network access at boot time, so either statically
configure an IP address (using the config variables \path{ip},
\path{netmask}, \path{gateway}, \path{hostname}) or enable DHCP
(\path{dhcp='dhcp'}).

Note that the Linux NFS root implementation is known to have stability
problems under high load (this is not a Xen-specific problem), so this
configuration may not be appropriate for critical servers.
\part{User Reference Documentation}

\chapter{Control Software}

The Xen control software includes the \xend node control daemon (which
must be running), the xm command line tools, and the prototype
xensv web interface.

\section{\Xend (node control daemon)}
\label{s:xend}

The Xen Daemon (\Xend) performs system management functions related to
virtual machines. It forms a central point of control for a machine
and can be controlled using an HTTP-based protocol. \Xend must be
running in order to start and manage virtual machines.

\Xend must be run as root because it needs access to privileged system
management functions. A small set of commands may be issued on the
\xend command line:

\begin{tabular}{ll}
\verb!# xend start! & start \xend, if not already running \\
\verb!# xend stop! & stop \xend if already running \\
\verb!# xend restart! & restart \xend if running, otherwise start it \\
% \verb!# xend trace_start! & start \xend, with very detailed debug logging \\
\verb!# xend status! & indicates \xend status by its return code
\end{tabular}

A SysV init script called {\tt xend} is provided to start \xend at boot
time. {\tt make install} installs this script in \path{/etc/init.d}.
To enable it, you have to make symbolic links in the appropriate
runlevel directories or use the {\tt chkconfig} tool, where available.
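For example, where {\tt chkconfig} is available (mirroring the
{\tt xendomains} example earlier):
\begin{quote}
\verb_# chkconfig --add xend_
\end{quote}
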
Once \xend is running, more sophisticated administration can be done
using the xm tool (see Section~\ref{s:xm}) and the experimental
Xensv web interface (see Section~\ref{s:xensv}).

As \xend runs, events will be logged to {\tt /var/log/xend.log} and
{\tt /var/log/xfrd.log}, and these may be useful for troubleshooting
problems.
\section{Xm (command line interface)}
\label{s:xm}

The xm tool is the primary tool for managing Xen from the console.
The general format of an xm command line is:

\begin{verbatim}
# xm command [switches] [arguments] [variables]
\end{verbatim}

The available {\em switches} and {\em arguments} are dependent on the
{\em command} chosen. The {\em variables} may be set using
declarations of the form {\tt variable=value}; command line
declarations override any of the values in the configuration file
being used, including the standard variables described above and any
custom variables (for instance, the \path{xmdefconfig} file uses a
{\tt vmid} variable).
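For example, to create a domain from \path{myvmconf} while overriding
the {\tt vmid} variable (a variant of the tutorial command from
earlier):
\begin{quote}
\begin{verbatim}
# xm create -c myvmconf vmid=3
\end{verbatim}
\end{quote}
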
The available commands are as follows:

\begin{description}
\item[balloon] Request a domain to adjust its memory footprint.
\item[create] Create a new domain.
\item[destroy] Kill a domain immediately.
\item[list] List running domains.
\item[shutdown] Ask a domain to shutdown.
\item[dmesg] Fetch the Xen (not Linux!) boot output.
\item[consoles] Lists the available consoles.
\item[console] Connect to the console for a domain.
\item[help] Get help on xm commands.
\item[save] Suspend a domain to disk.
\item[restore] Restore a domain from disk.
\item[pause] Pause a domain's execution.
\item[unpause] Unpause a domain.
\item[pincpu] Pin a domain to a CPU.
\item[bvt] Set BVT scheduler parameters for a domain.
\item[bvt\_ctxallow] Set the BVT context switching allowance for the system.
\item[atropos] Set the atropos parameters for a domain.
\item[rrobin] Set the round robin time slice for the system.
\item[info] Get information about the Xen host.
\item[call] Call a \xend HTTP API function directly.
\end{description}

For a detailed overview of switches, arguments and variables to each
command, try:
\begin{quote}
\begin{verbatim}
# xm help command
\end{verbatim}
\end{quote}
\section{Xensv (web control interface)}
\label{s:xensv}

Xensv is the experimental web control interface for managing a Xen
machine. It can be used to perform some (but not yet all) of the
management tasks that can be done using the xm tool.

It can be started using:\\ \verb_# xensv start_ \\ and
stopped using: \verb_# xensv stop_ \\

By default, Xensv will serve out the web interface on port 8080. This
can be changed by editing {\tt
/usr/lib/python2.3/site-packages/xen/sv/params.py}.

Once Xensv is running, the web interface can be used to create and
manage running domains.
\chapter{Domain Configuration}
\label{cha:config}

The following contains the syntax of the domain configuration
files and a description of how to further specify networking,
driver domain and general scheduling behaviour.

\section{Configuration Files}
\label{s:cfiles}

Xen configuration files contain the following standard variables.
Unless otherwise stated, configuration items should be enclosed in
quotes: see \path{/etc/xen/xmexample1} for an example.

\begin{description}
\item[kernel] Path to the kernel image.
\item[ramdisk] Path to a ramdisk image (optional).
% \item[builder] The name of the domain build function (e.g. {\tt'linux'} or {\tt'netbsd'}).
\item[memory] Memory size in megabytes.
\item[cpu] CPU to run this domain on, or {\tt -1} for
auto-allocation.
\item[nics] Number of virtual network interfaces.
\item[vif] List of MAC addresses (random addresses are assigned if not
given) and bridges to use for the domain's network interfaces, e.g.
\begin{verbatim}
vif = [ 'mac=aa:00:00:00:00:11, bridge=xen-br0',
        'bridge=xen-br1' ]
\end{verbatim}
to assign a MAC address and bridge to the first interface and assign
a different bridge to the second interface, leaving \xend to choose
the MAC address.
\item[disk] List of block devices to export to the domain, e.g. \\
\verb_disk = [ 'phy:hda1,sda1,r' ]_ \\
exports physical device \path{/dev/hda1} to the domain
as \path{/dev/sda1} with read-only access. Exporting a disk read-write
which is currently mounted is dangerous -- if you are \emph{certain}
you wish to do this, you can specify \path{w!} as the mode.
\item[dhcp] Set to {\tt 'dhcp'} if you want to use DHCP to configure
networking.
\item[netmask] Manually configured IP netmask.
\item[gateway] Manually configured IP gateway.
\item[hostname] Set the hostname for the virtual machine.
\item[root] Specify the root device parameter on the kernel command
line.
\item[nfs\_server] IP address for the NFS server (if any).
\item[nfs\_root] Path of the root filesystem on the NFS server (if any).
\item[extra] Extra string to append to the kernel command line (if
any).
\item[restart] Three possible options:
\begin{description}
\item[always] Always restart the domain, no matter what
its exit code is.
\item[never] Never restart the domain.
\item[onreboot] Restart the domain iff it requests reboot.
\end{description}
\end{description}

For additional flexibility, it is also possible to include Python
scripting commands in configuration files. An example of this is the
\path{xmexample2} file, which uses Python code to handle the {\tt
vmid} variable.

%\part{Advanced Topics}

\section{Network Configuration}

For many users, the default installation should work `out of the box'.
More complicated network setups, for instance with multiple ethernet
interfaces and/or existing bridging setups, will require some
special configuration.

The purpose of this section is to describe the mechanisms provided by
\xend to allow a flexible configuration for Xen's virtual networking.
\subsection{Xen networking scripts}

Xen's virtual networking is configured by two shell scripts (by
default \path{network} and \path{vif-bridge}). These are
called automatically by \xend when certain events occur, with
arguments to the scripts providing further contextual information.
These scripts are found by default in \path{/etc/xen/scripts}. The
names and locations of the scripts can be configured in
\path{/etc/xen/xend-config.sxp}.

\begin{description}

\item[network:] This script is called whenever \xend is started or
stopped to respectively initialise or tear down the Xen virtual
network. In the default configuration initialisation creates the
bridge `xen-br0' and moves eth0 onto that bridge, modifying the
routing accordingly. When \xend exits, it deletes the Xen bridge and
removes eth0, restoring the normal IP and routing configuration.

%% In configurations where the bridge already exists, this script could
%% be replaced with a link to \path{/bin/true} (for instance).

\item[vif-bridge:] This script is called for every domain virtual
interface and can configure firewalling rules and add the vif
to the appropriate bridge. By default, this adds and removes
VIFs on the default Xen bridge.

\end{description}
1425 %% There are two possible types of privileges: IO privileges and
1426 %% administration privileges.
1428 \section{Driver Domain Configuration}
1430 I/O privileges can be assigned to allow a domain to directly access
1431 PCI devices itself. This is used to support driver domains.
1433 Setting backend privileges is currently only supported in SXP format
1434 config files. To allow a domain to function as a backend for others,
1435 somewhere within the {\tt vm} element of its configuration file must
1436 be a {\tt backend} element of the form {\tt (backend ({\em type}))}
1437 where {\tt \em type} may be either {\tt netif} or {\tt blkif},
1438 according to the type of virtual device this domain will service.
1439 %% After this domain has been built, \xend will connect all new and
1440 %% existing {\em virtual} devices (of the appropriate type) to that
1441 %% backend.
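For example, a minimal (purely illustrative) fragment for a domain
intended to serve virtual network devices might contain:

\begin{small}\begin{verbatim}
(vm
  ...
  (backend (netif))
)
\end{verbatim}\end{small}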
1443 Note that a block backend cannot currently import virtual block
1444 devices from other domains, and a network backend cannot import
1445 virtual network devices from other domains. Thus (particularly in the
1446 case of block backends, which cannot import a virtual block device as
1447 their root filesystem), you may need to boot a backend domain from a
1448 ramdisk or a network device.
1450 Access to PCI devices may be configured on a per-device basis. Xen
1451 will assign the minimal set of hardware privileges to a domain that
1452 are required to control its devices. This can be configured in either
1453 format of configuration file:
1455 \begin{itemize}
1456 \item SXP Format: Include device elements of the form: \\
1457 \centerline{ {\tt (device (pci (bus {\em x}) (dev {\em y}) (func {\em z})))}} \\
1458 inside the top-level {\tt vm} element. Each one specifies the address
1459 of a device this domain is allowed to access ---
the numbers {\em x}, {\em y} and {\em z} may be in either decimal or
1461 hexadecimal format.
1462 \item Flat Format: Include a list of PCI device addresses of the
1463 format: \\
1464 \centerline{{\tt pci = ['x,y,z', ...]}} \\
1465 where each element in the
1466 list is a string specifying the components of the PCI device
1467 address, separated by commas. The components ({\tt \em x}, {\tt \em
1468 y} and {\tt \em z}) of the list may be formatted as either decimal
1469 or hexadecimal.
1470 \end{itemize}
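For instance, to allow a domain access to a hypothetical device at
bus 1, device 13, function 0, the flat format entry would be:

\begin{small}\begin{verbatim}
pci = ['0x1,0xd,0x0']
\end{verbatim}\end{small}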
1472 %% \section{Administration Domains}
1474 %% Administration privileges allow a domain to use the `dom0
1475 %% operations' (so called because they are usually available only to
1476 %% domain 0). A privileged domain can build other domains, set scheduling
1477 %% parameters, etc.
1479 % Support for other administrative domains is not yet available... perhaps
1480 % we should plumb it in some time
1486 \section{Scheduler Configuration}
1487 \label{s:sched}
1490 Xen offers a boot time choice between multiple schedulers. To select
1491 a scheduler, pass the boot parameter {\em sched=sched\_name} to Xen,
1492 substituting the appropriate scheduler name. Details of the schedulers
and their parameters are included below; future versions of the tools
will provide a higher-level interface to scheduler configuration.
1496 It is expected that system administrators configure their system to
1497 use the scheduler most appropriate to their needs. Currently, the BVT
1498 scheduler is the recommended choice.
1500 \subsection{Borrowed Virtual Time}
1502 {\tt sched=bvt} (the default) \\
1504 BVT provides proportional fair shares of the CPU time. It has been
1505 observed to penalise domains that block frequently (e.g. I/O intensive
1506 domains), but this can be compensated for by using warping.
1508 \subsubsection{Global Parameters}
1510 \begin{description}
1511 \item[ctx\_allow]
The context switch allowance is similar to the `quantum'
1513 in traditional schedulers. It is the minimum time that
1514 a scheduled domain will be allowed to run before being
1515 pre-empted.
1516 \end{description}
1518 \subsubsection{Per-domain parameters}
1520 \begin{description}
1521 \item[mcuadv]
The MCU (Minimum Charging Unit) advance determines the
proportional share of the CPU that a domain receives. It
is set inversely proportionally to a domain's sharing weight.
\item[warp]
The amount of `virtual time' the domain is allowed to warp
backwards.
\item[warpl]
The warp limit is the maximum time a domain can run warped for.
\item[warpu]
The unwarp requirement is the minimum time a domain must
run unwarped for before it can warp again.
\end{description}
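If your control tools provide the BVT subcommands of \path{xm} (an
assumption --- check \path{xm help} on your system), these parameters
might be adjusted at runtime along the following lines, with
illustrative values:

\begin{small}\begin{verbatim}
# Global: set the context switch allowance.
xm bvt_ctxallow 5000
# Domain 1: mcuadv=10, no warping (assumed argument order:
# domain, mcuadv, warp, warpl, warpu).
xm bvt 1 10 0 0 0
\end{verbatim}\end{small}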
1535 \subsection{Atropos}
1537 {\tt sched=atropos} \\
1539 Atropos is a soft real time scheduler. It provides guarantees about
1540 absolute shares of the CPU, with a facility for sharing
1541 slack CPU time on a best-effort basis. It can provide timeliness
1542 guarantees for latency-sensitive domains.
1544 Every domain has an associated period and slice. The domain should
1545 receive `slice' nanoseconds every `period' nanoseconds. This allows
1546 the administrator to configure both the absolute share of the CPU a
1547 domain receives and the frequency with which it is scheduled.
1549 %% When
1550 %% domains unblock, their period is reduced to the value of the latency
1551 %% hint (the slice is scaled accordingly so that they still get the same
1552 %% proportion of the CPU). For each subsequent period, the slice and
1553 %% period times are doubled until they reach their original values.
1555 Note: don't overcommit the CPU when using Atropos (i.e. don't reserve
1556 more CPU than is available --- the utilisation should be kept to
1557 slightly less than 100\% in order to ensure predictable behaviour).
1559 \subsubsection{Per-domain parameters}
1561 \begin{description}
1562 \item[period] The regular time interval during which a domain is
1563 guaranteed to receive its allocation of CPU time.
1564 \item[slice]
1565 The length of time per period that a domain is guaranteed to run
1566 for (in the absence of voluntary yielding of the CPU).
1567 \item[latency]
1568 The latency hint is used to control how soon after
1569 waking up a domain it should be scheduled.
1570 \item[xtratime] This is a boolean flag that specifies whether a domain
1571 should be allowed a share of the system slack time.
1572 \end{description}
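If your control tools expose an {\tt atropos} subcommand of \path{xm}
(an assumption --- check \path{xm help} on your system), these
parameters might be set at runtime along the following lines, with
illustrative values giving domain 1 a 10ms slice in every 100ms
period:

\begin{small}\begin{verbatim}
# domain, period (ns), slice (ns), latency hint (ns), xtratime flag
xm atropos 1 100000000 10000000 1000000 1
\end{verbatim}\end{small}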
1574 \subsection{Round Robin}
1576 {\tt sched=rrobin} \\
1578 The round robin scheduler is included as a simple demonstration of
1579 Xen's internal scheduler API. It is not intended for production use.
1581 \subsubsection{Global Parameters}
1583 \begin{description}
1584 \item[rr\_slice]
1585 The maximum time each domain runs before the next
1586 scheduling decision is made.
1587 \end{description}
1600 \chapter{Build, Boot and Debug options}
1602 This chapter describes the build- and boot-time options
1603 which may be used to tailor your Xen system.
1605 \section{Xen Build Options}
1607 Xen provides a number of build-time options which should be
1608 set as environment variables or passed on make's command-line.
1610 \begin{description}
1611 \item[verbose=y] Enable debugging messages when Xen detects an unexpected condition.
1612 Also enables console output from all domains.
1613 \item[debug=y]
1614 Enable debug assertions. Implies {\bf verbose=y}.
1615 (Primarily useful for tracing bugs in Xen).
1616 \item[debugger=y]
1617 Enable the in-Xen debugger. This can be used to debug
1618 Xen, guest OSes, and applications.
1619 \item[perfc=y]
1620 Enable performance counters for significant events
1621 within Xen. The counts can be reset or displayed
1622 on Xen's console via console control keys.
1623 \item[trace=y]
1624 Enable per-cpu trace buffers which log a range of
1625 events within Xen for collection by control
1626 software.
1627 \end{description}
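For example, a debug build (which implies verbose output) could be
produced with the usual {\tt world} target:

\begin{small}\begin{verbatim}
make debug=y world
\end{verbatim}\end{small}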
1629 \section{Xen Boot Options}
1630 \label{s:xboot}
1632 These options are used to configure Xen's behaviour at runtime. They
1633 should be appended to Xen's command line, either manually or by
1634 editing \path{grub.conf}.
1636 \begin{description}
1637 \item [ignorebiostables ]
1638 Disable parsing of BIOS-supplied tables. This may help with some
1639 chipsets that aren't fully supported by Xen. If you specify this
1640 option then ACPI tables are also ignored, and SMP support is
1641 disabled.
1643 \item [noreboot ]
1644 Don't reboot the machine automatically on errors. This is
1645 useful to catch debug output if you aren't catching console messages
1646 via the serial line.
1648 \item [nosmp ]
1649 Disable SMP support.
1650 This option is implied by `ignorebiostables'.
1652 \item [noacpi ]
1653 Disable ACPI tables, which confuse Xen on some chipsets.
1654 This option is implied by `ignorebiostables'.
1656 \item [watchdog ]
1657 Enable NMI watchdog which can report certain failures.
1659 \item [noht ]
1660 Disable Hyperthreading.
1662 \item [badpage=$<$page number$>$,$<$page number$>$, \ldots ]
1663 Specify a list of pages not to be allocated for use
1664 because they contain bad bytes. For example, if your
1665 memory tester says that byte 0x12345678 is bad, you would
1666 place `badpage=0x12345' on Xen's command line.
1668 \item [com1=$<$baud$>$,DPS,$<$io\_base$>$,$<$irq$>$
1669 com2=$<$baud$>$,DPS,$<$io\_base$>$,$<$irq$>$ ] \mbox{}\\
1670 Xen supports up to two 16550-compatible serial ports.
For example: `com1=9600,8n1,0x408,5' maps COM1 to a
1672 9600-baud port, 8 data bits, no parity, 1 stop bit,
1673 I/O port base 0x408, IRQ 5.
1674 If the I/O base and IRQ are standard (com1:0x3f8,4;
1675 com2:0x2f8,3) then they need not be specified.
1677 \item [console=$<$specifier list$>$ ]
1678 Specify the destination for Xen console I/O.
1679 This is a comma-separated list of, for example:
1680 \begin{description}
1681 \item[vga] use VGA console and allow keyboard input
1682 \item[com1] use serial port com1
1683 \item[com2H] use serial port com2. Transmitted chars will
1684 have the MSB set. Received chars must have
1685 MSB set.
1686 \item[com2L] use serial port com2. Transmitted chars will
1687 have the MSB cleared. Received chars must
1688 have MSB cleared.
1689 \end{description}
1690 The latter two examples allow a single port to be
1691 shared by two subsystems (e.g. console and
1692 debugger). Sharing is controlled by MSB of each
1693 transmitted/received character.
1694 [NB. Default for this option is `com1,vga']
1696 \item [conswitch=$<$switch-char$><$auto-switch-char$>$ ]
1697 Specify how to switch serial-console input between
1698 Xen and DOM0. The required sequence is CTRL-$<$switch-char$>$
1699 pressed three times. Specifying the backtick character
1700 disables switching.
1701 The $<$auto-switch-char$>$ specifies whether Xen should
1702 auto-switch input to DOM0 when it boots --- if it is `x'
1703 then auto-switching is disabled. Any other value, or
1704 omitting the character, enables auto-switching.
1705 [NB. default switch-char is `a']
1707 \item [nmi=xxx ]
1708 Specify what to do with an NMI parity or I/O error. \\
1709 `nmi=fatal': Xen prints a diagnostic and then hangs. \\
1710 `nmi=dom0': Inform DOM0 of the NMI. \\
1711 `nmi=ignore': Ignore the NMI.
1713 \item [dom0\_mem=xxx ]
1714 Set the amount of memory (in kB) to be allocated to domain0.
1716 \item [tbuf\_size=xxx ]
1717 Set the size of the per-cpu trace buffers, in pages
1718 (default 1). Note that the trace buffers are only
1719 enabled in debug builds. Most users can ignore
1720 this feature completely.
1722 \item [sched=xxx ]
1723 Select the CPU scheduler Xen should use. The current
1724 possibilities are `bvt' (default), `atropos' and `rrobin'.
1725 For more information see Section~\ref{s:sched}.
1727 \item [pci\_dom0\_hide=(xx.xx.x)(yy.yy.y)\ldots ]
1728 Hide selected PCI devices from domain 0 (for instance, to stop it
1729 taking ownership of them so that they can be driven by another
1730 domain). Device IDs should be given in hex format. Bridge devices do
1731 not need to be hidden --- they are hidden implicitly, since guest OSes
1732 do not need to configure them.
1733 \end{description}
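Putting some of these options together, a \path{grub.conf} entry
might look like the following (the kernel paths, memory size and
serial parameters are illustrative --- adjust them for your system):

\begin{small}\begin{verbatim}
title Xen 2.0 / XenLinux 2.6
  kernel /boot/xen.gz dom0_mem=131072 console=com1,vga com1=115200,8n1
  module /boot/vmlinuz-2.6-xen0 root=/dev/sda1 ro console=tty0
\end{verbatim}\end{small}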
1737 \section{XenLinux Boot Options}
1739 In addition to the standard Linux kernel boot options, we support:
1740 \begin{description}
1741 \item[xencons=xxx ] Specify the device node to which the Xen virtual
1742 console driver is attached. The following options are supported:
1743 \begin{center}
1744 \begin{tabular}{l}
1745 `xencons=off': disable virtual console \\
1746 `xencons=tty': attach console to /dev/tty1 (tty0 at boot-time) \\
1747 `xencons=ttyS': attach console to /dev/ttyS0
1748 \end{tabular}
1749 \end{center}
1750 The default is ttyS for dom0 and tty for all other domains.
1751 \end{description}
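For domains other than domain 0, the easiest way to supply such
options is usually the {\tt extra} variable in the domain's
configuration file, for example:

\begin{small}\begin{verbatim}
extra = "xencons=off"
\end{verbatim}\end{small}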
1755 \section{Debugging}
1756 \label{s:keys}
Xen has a set of debugging features that can be useful for figuring
out what's going on. Hit `h' on the serial line (if you specified a
baud rate on the Xen command line) or ScrollLock-h on the keyboard to
get a list of supported commands.

If you have a crash you'll likely get a crash dump containing an EIP
(PC) which, along with an `objdump -d image', can be useful in
figuring out what's happened. Debug a XenLinux image just as you
would any other Linux kernel.
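For example, assuming the crash dump reported an EIP of
{\tt 0xc01234e0} (an invented value), you could disassemble the
uncompressed kernel image from your build tree and search for that
address:

\begin{small}\begin{verbatim}
objdump -d vmlinux > disasm.txt
grep -n "c01234e0:" disasm.txt
\end{verbatim}\end{small}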
1768 %% We supply a handy debug terminal program which you can find in
1769 %% \path{/usr/local/src/xen-2.0.bk/tools/misc/miniterm/}
1770 %% This should be built and executed on another machine that is connected
1771 %% via a null modem cable. Documentation is included.
1772 %% Alternatively, if the Xen machine is connected to a serial-port server
1773 %% then we supply a dumb TCP terminal client, {\tt xencons}.
1778 \chapter{Further Support}
1780 If you have questions that are not answered by this manual, the
1781 sources of information listed below may be of interest to you. Note
1782 that bug reports, suggestions and contributions related to the
1783 software (or the documentation) should be sent to the Xen developers'
1784 mailing list (address below).
1786 \section{Other Documentation}
1788 For developers interested in porting operating systems to Xen, the
1789 {\em Xen Interface Manual} is distributed in the \path{docs/}
1790 directory of the Xen source distribution.
1792 %Various HOWTOs are available in \path{docs/HOWTOS} but this content is
1793 %being integrated into this manual.
1795 \section{Online References}
1797 The official Xen web site is found at:
1798 \begin{quote}
1799 {\tt http://www.cl.cam.ac.uk/netos/xen/}
1800 \end{quote}
1802 This contains links to the latest versions of all on-line
1803 documentation.
1805 \section{Mailing Lists}
1807 There are currently three official Xen mailing lists:
1809 \begin{description}
1810 \item[xen-devel@lists.sourceforge.net] Used for development
1811 discussions and requests for help. Subscribe at: \\
1812 \path{http://lists.sourceforge.net/mailman/listinfo/xen-devel}
1813 \item[xen-announce@lists.sourceforge.net] Used for announcements only.
1814 Subscribe at: \\
1815 \path{http://lists.sourceforge.net/mailman/listinfo/xen-announce}
1816 \item[xen-changelog@lists.sourceforge.net] Changelog feed
from the unstable and 2.0 trees --- developer-oriented. Subscribe at: \\
1818 \path{http://lists.sourceforge.net/mailman/listinfo/xen-changelog}
1819 \end{description}
1821 Although there is no specific user support list, the developers try to
assist users who post on xen-devel. As the volume of traffic on this
list increases, a dedicated user support list may be introduced.
1825 \appendix
1828 \chapter{Installing Xen / XenLinux on Debian}
1830 The Debian project provides a tool called \path{debootstrap} which
1831 allows a base Debian system to be installed into a filesystem without
requiring the host system to have any Debian-specific software (such
as \path{apt}).
Here's how to install Debian 3.1 (Sarge) for use in an unprivileged
Xen domain:
1838 \begin{enumerate}
1839 \item Set up Xen 2.0 and test that it's working, as described earlier in
1840 this manual.
1842 \item Create disk images for root-fs and swap (alternatively, you
1843 might create dedicated partitions, LVM logical volumes, etc. if
1844 that suits your setup).
1845 \begin{small}\begin{verbatim}
1846 dd if=/dev/zero of=/path/diskimage bs=1024k count=size_in_mbytes
1847 dd if=/dev/zero of=/path/swapimage bs=1024k count=size_in_mbytes
1848 \end{verbatim}\end{small}
If you're going to use this filesystem / disk image only as a
`template' for other vm disk images, something like 300 MB should
be enough (it depends, of course, on what kind of packages you are
planning to install in the template).
1854 \item Create the filesystem and initialise the swap image
1855 \begin{small}\begin{verbatim}
1856 mkfs.ext3 /path/diskimage
1857 mkswap /path/swapimage
1858 \end{verbatim}\end{small}
1860 \item Mount the disk image for installation
1861 \begin{small}\begin{verbatim}
1862 mount -o loop /path/diskimage /mnt/disk
1863 \end{verbatim}\end{small}
1865 \item Install \path{debootstrap}
1867 Make sure you have debootstrap installed on the host. If you are
1868 running Debian sarge (3.1 / testing) or unstable you can install it by
1869 running \path{apt-get install debootstrap}. Otherwise, it can be
1870 downloaded from the Debian project website.
1872 \item Install Debian base to the disk image:
1873 \begin{small}\begin{verbatim}
1874 debootstrap --arch i386 sarge /mnt/disk \
1875 http://ftp.<countrycode>.debian.org/debian
1876 \end{verbatim}\end{small}
1878 You can use any other Debian http/ftp mirror you want.
\item When debootstrap completes successfully, chroot into the new
system and modify its settings:
1881 \begin{small}\begin{verbatim}
1882 chroot /mnt/disk /bin/bash
1883 \end{verbatim}\end{small}
1885 Edit the following files using vi or nano and make needed changes:
1886 \begin{small}\begin{verbatim}
1887 /etc/hostname
1888 /etc/hosts
1889 /etc/resolv.conf
1890 /etc/network/interfaces
1891 /etc/networks
1892 \end{verbatim}\end{small}
1894 Set up access to the services, edit:
1895 \begin{small}\begin{verbatim}
1896 /etc/hosts.deny
1897 /etc/hosts.allow
1898 /etc/inetd.conf
1899 \end{verbatim}\end{small}
1901 Add Debian mirror to:
1902 \begin{small}\begin{verbatim}
1903 /etc/apt/sources.list
1904 \end{verbatim}\end{small}
Create \path{/etc/fstab} like this:
1907 \begin{small}\begin{verbatim}
1908 /dev/sda1 / ext3 errors=remount-ro 0 1
1909 /dev/sda2 none swap sw 0 0
1910 proc /proc proc defaults 0 0
1911 \end{verbatim}\end{small}
Log out of the chroot.
1915 \item Unmount the disk image
1916 \begin{small}\begin{verbatim}
1917 umount /mnt/disk
1918 \end{verbatim}\end{small}
\item Create a Xen 2.0 configuration file for the new domain. You can
use the example configurations supplied with Xen as a template.
1923 Make sure you have the following set up:
1924 \begin{small}\begin{verbatim}
1925 disk = [ 'file:/path/diskimage,sda1,w', 'file:/path/swapimage,sda2,w' ]
1926 root = "/dev/sda1 ro"
1927 \end{verbatim}\end{small}
1929 \item Start the new domain
1930 \begin{small}\begin{verbatim}
1931 xm create -f domain_config_file
1932 \end{verbatim}\end{small}
1934 Check that the new domain is running:
1935 \begin{small}\begin{verbatim}
1936 xm list
1937 \end{verbatim}\end{small}
1939 \item Attach to the console of the new domain.
1940 You should see something like this when starting the new domain:
1942 \begin{small}\begin{verbatim}
1943 Started domain testdomain2, console on port 9626
1944 \end{verbatim}\end{small}
There you can see the ID of the console: 26. You can also list
the consoles with \path{xm consoles} (the ID is the last two
digits of the port number).
1950 Attach to the console:
1952 \begin{small}\begin{verbatim}
1953 xm console 26
1954 \end{verbatim}\end{small}
or by telnetting to port 9626 on localhost (though the \path{xm
console} program works better).
1959 \item Log in and run base-config
By default there is no root password.
Check that everything looks OK and that the system started without
errors. Check that the swap is active and that the network settings
are correct.
1967 Run \path{/usr/sbin/base-config} to set up the Debian settings.
Set a password for root using \path{passwd}.
\item Done. You can exit the console by pressing \path{Ctrl + ]}.
1973 \end{enumerate}
If you need to create new domains, you can just copy the contents of
the `template' image to the new disk images, either by mounting the
template and the new image and using \path{cp -a} or \path{tar}, or
by simply copying the image file. Once this is done, modify the
image-specific settings (hostname, network settings, etc.).
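For example, one way to clone a template image and customise it
(paths are illustrative):

\begin{small}\begin{verbatim}
cp /path/template-image /path/newdomain-image
mount -o loop /path/newdomain-image /mnt/disk
# edit hostname, network settings, etc. under /mnt/disk/etc
umount /mnt/disk
\end{verbatim}\end{small}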
1981 \chapter{Installing Xen / XenLinux on Redhat or Fedora Core}
1983 When using Xen / XenLinux on a standard Linux distribution there are
1984 a couple of things to watch out for:
Note that, because unprivileged domains have no privileged access at
all, certain commands in the default boot sequence will fail, e.g.\
attempts to update the hwclock, change the console font, update the
keytable map, or start apmd (power management) or gpm (mouse cursor).
Either ignore the errors (they should be harmless), or remove them
from the startup scripts. Deleting the following links is a good
start: {\path{S24pcmcia}}, {\path{S09isdn}}, {\path{S17keytable}},
{\path{S26apmd}}, {\path{S85gpm}}.
If you want to use a single root file system that works cleanly for
both domain 0 and unprivileged domains, a useful trick is to use
different `init' run levels. For example, use run level 3 for
domain 0 and run level 4 for other domains. This enables different
startup scripts to be run depending on the run level number passed
on the kernel command line.
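For example, to boot an unprivileged domain at run level 4, you might
pass the run level via the {\tt extra} variable in the domain's
configuration file:

\begin{small}\begin{verbatim}
extra = "4"
\end{verbatim}\end{small}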
If you are using NFS root file systems mounted either from an
external server or from domain 0, there are a couple of other
gotchas. The default {\path{/etc/sysconfig/iptables}} rules block
NFS, so part way through the boot sequence things will suddenly go
dead.
If you're planning on having a separate NFS {\path{/usr}} partition,
the RH9 boot scripts don't make life easy --- they attempt to mount
NFS file systems way too late in the boot process. The easiest way I
found to do this was to have a {\path{/linuxrc}} script run ahead of
{\path{/sbin/init}} that mounts {\path{/usr}}:
2014 \begin{quote}
2015 \begin{small}\begin{verbatim}
#!/bin/bash
# Bring up loopback and start the portmapper so the NFS mount of
# /usr can succeed, then hand control to the real init.
/sbin/ifconfig lo 127.0.0.1
/sbin/portmap
/bin/mount /usr
exec /sbin/init "$@" <>/dev/console 2>&1
2021 \end{verbatim}\end{small}
2022 \end{quote}
2024 %$ XXX SMH: font lock fix :-)
The one slight complication with the above is that
{\path{/sbin/portmap}} is dynamically linked against
{\path{/usr/lib/libwrap.so.0}}. Since this is in {\path{/usr}}, it
won't work. This can be solved by copying the file (and link) below
the /usr mount point, and just letting the file be `covered' when the
mount happens.
2033 In some installations, where a shared read-only {\path{/usr}} is
2034 being used, it may be desirable to move other large directories over
2035 into the read-only {\path{/usr}}. For example, you might replace
2036 {\path{/bin}}, {\path{/lib}} and {\path{/sbin}} with
2037 links into {\path{/usr/root/bin}}, {\path{/usr/root/lib}}
2038 and {\path{/usr/root/sbin}} respectively. This creates other
2039 problems for running the {\path{/linuxrc}} script, requiring
2040 bash, portmap, mount, ifconfig, and a handful of other shared
2041 libraries to be copied below the mount point --- a simple
2042 statically-linked C program would solve this problem.
2047 \chapter{Glossary of Terms}
2049 \begin{description}
2050 \item[Atropos] One of the CPU schedulers provided by Xen.
2051 Atropos provides domains with absolute shares
2052 of the CPU, with timeliness guarantees and a
2053 mechanism for sharing out `slack time'.
2055 \item[BVT] The BVT scheduler is used to give proportional
2056 fair shares of the CPU to domains.
2058 \item[Exokernel] A minimal piece of privileged code, similar to
2059 a {\bf microkernel} but providing a more
2060 `hardware-like' interface to the tasks it
2061 manages. This is similar to a paravirtualising
2062 VMM like {\bf Xen} but was designed as a new
2063 operating system structure, rather than
2064 specifically to run multiple conventional OSs.
2066 \item[Domain] A domain is the execution context that
2067 contains a running {\bf virtual machine}.
2068 The relationship between virtual machines
2069 and domains on Xen is similar to that between
2070 programs and processes in an operating
2071 system: a virtual machine is a persistent
2072 entity that resides on disk (somewhat like
2073 a program). When it is loaded for execution,
2074 it runs in a domain. Each domain has a
2075 {\bf domain ID}.
2077 \item[Domain 0] The first domain to be started on a Xen
2078 machine. Domain 0 is responsible for managing
2079 the system.
2081 \item[Domain ID] A unique identifier for a {\bf domain},
2082 analogous to a process ID in an operating
2083 system.
2085 \item[Full virtualisation] An approach to virtualisation which
2086 requires no modifications to the hosted
2087 operating system, providing the illusion of
2088 a complete system of real hardware devices.
2090 \item[Hypervisor] An alternative term for {\bf VMM}, used
2091 because it means `beyond supervisor',
2092 since it is responsible for managing multiple
2093 `supervisor' kernels.
2095 \item[Live migration] A technique for moving a running virtual
2096 machine to another physical host, without
2097 stopping it or the services running on it.
2099 \item[Microkernel] A small base of code running at the highest
2100 hardware privilege level. A microkernel is
2101 responsible for sharing CPU and memory (and
2102 sometimes other devices) between less
2103 privileged tasks running on the system.
This is similar to a VMM, particularly a
{\bf paravirtualising} VMM, but typically
addresses a different problem space and
provides a different kind of interface.
2109 \item[NetBSD/Xen] A port of NetBSD to the Xen architecture.
2111 \item[Paravirtualisation] An approach to virtualisation which requires
2112 modifications to the operating system in
2113 order to run in a virtual machine. Xen
2114 uses paravirtualisation but preserves
2115 binary compatibility for user space
2116 applications.
2118 \item[Shadow pagetables] A technique for hiding the layout of machine
2119 memory from a virtual machine's operating
system. Used in some {\bf VMMs} to provide
the illusion of contiguous physical memory;
in Xen it is used during
{\bf live migration}.
2125 \item[Virtual Machine] The environment in which a hosted operating
2126 system runs, providing the abstraction of a
dedicated machine. A virtual machine may
be identical to the underlying hardware (as
in {\bf full virtualisation}), or it may
differ (as in {\bf paravirtualisation}).
\item[VMM] Virtual Machine Monitor --- the software that
2133 allows multiple virtual machines to be
2134 multiplexed on a single physical machine.
2136 \item[Xen] Xen is a paravirtualising virtual machine
2137 monitor, developed primarily by the
2138 Systems Research Group at the University
2139 of Cambridge Computer Laboratory.
2141 \item[XenLinux] Official name for the port of the Linux kernel
2142 that runs on Xen.
2144 \end{description}
2147 \end{document}
2150 %% Other stuff without a home
2152 %% Instructions Re Python API
2154 %% Other Control Tasks using Python
2155 %% ================================
2157 %% A Python module 'Xc' is installed as part of the tools-install
2158 %% process. This can be imported, and an 'xc object' instantiated, to
2159 %% provide access to privileged command operations:
2161 %% # import Xc
2162 %% # xc = Xc.new()
2163 %% # dir(xc)
2164 %% # help(xc.domain_create)
2166 %% In this way you can see that the class 'xc' contains useful
2167 %% documentation for you to consult.
2169 %% A further package of useful routines (xenctl) is also installed:
2171 %% # import xenctl.utils
2172 %% # help(xenctl.utils)
2174 %% You can use these modules to write your own custom scripts or you can
2175 %% customise the scripts supplied in the Xen distribution.
2179 % Explain about AGP GART
2182 %% If you're not intending to configure the new domain with an IP address
2183 %% on your LAN, then you'll probably want to use NAT. The
2184 %% 'xen_nat_enable' installs a few useful iptables rules into domain0 to
2185 %% enable NAT. [NB: We plan to support RSIP in future]
2190 %% Installing the file systems from the CD
2191 %% =======================================
2193 %% If you haven't got an existing Linux installation onto which you can
2194 %% just drop down the Xen and Xenlinux images, then the file systems on
2195 %% the CD provide a quick way of doing an install. However, you would be
2196 %% better off in the long run doing a proper install of your preferred
2197 %% distro and installing Xen onto that, rather than just doing the hack
2198 %% described below:
2200 %% Choose one or two partitions, depending on whether you want a separate
2201 %% /usr or not. Make file systems on it/them e.g.:
2202 %% mkfs -t ext3 /dev/hda3
2203 %% [or mkfs -t ext2 /dev/hda3 && tune2fs -j /dev/hda3 if using an old
2204 %% version of mkfs]
2206 %% Next, mount the file system(s) e.g.:
2207 %% mkdir /mnt/root && mount /dev/hda3 /mnt/root
2208 %% [mkdir /mnt/usr && mount /dev/hda4 /mnt/usr]
2210 %% To install the root file system, simply untar /usr/XenDemoCD/root.tar.gz:
2211 %% cd /mnt/root && tar -zxpf /usr/XenDemoCD/root.tar.gz
2213 %% You'll need to edit /mnt/root/etc/fstab to reflect your file system
2214 %% configuration. Changing the password file (etc/shadow) is probably a
2215 %% good idea too.
2217 %% To install the usr file system, copy the file system from CD on /usr,
2218 %% though leaving out the "XenDemoCD" and "boot" directories:
2219 %% cd /usr && cp -a X11R6 etc java libexec root src bin dict kerberos local sbin tmp doc include lib man share /mnt/usr
2221 %% If you intend to boot off these file systems (i.e. use them for
2222 %% domain 0), then you probably want to copy the /usr/boot directory on
2223 %% the cd over the top of the current symlink to /boot on your root
2224 %% filesystem (after deleting the current symlink) i.e.:
2225 %% cd /mnt/root ; rm boot ; cp -a /usr/boot .