\documentclass[11pt,twoside,final,openright]{report}
\usepackage{a4,graphicx,html,parskip,setspace,times,xspace}
\setstretch{1.15}

\def\Xend{{Xend}\xspace}
\def\xend{{xend}\xspace}

\latexhtml{\newcommand{\path}[1]{{\small {\tt #1}}}}{\newcommand{\path}[1]{{\tt #1}}}

\begin{document}

% TITLE PAGE
\pagestyle{empty}
\begin{center}
\vspace*{\fill}
\includegraphics{figs/xenlogo.eps}
\vfill
\vfill
\vfill
\begin{tabular}{l}
{\Huge \bf Users' manual} \\[4mm]
{\huge Xen v2.0 for x86} \\[80mm]
{\Large Xen is Copyright (c) 2002-2004, The Xen Team} \\[3mm]
{\Large University of Cambridge, UK} \\[20mm]
\end{tabular}
\end{center}

{\bf
DISCLAIMER: This documentation is currently under active development
and as such there may be mistakes and omissions --- watch out for
these and please report any you find to the developer's mailing list.
Contributions of material, suggestions and corrections are welcome.
}

\vfill
\cleardoublepage

% TABLE OF CONTENTS
\pagestyle{plain}
\pagenumbering{roman}
{ \parskip 0pt plus 1pt
\tableofcontents }
\cleardoublepage

% PREPARE FOR MAIN TEXT
\pagenumbering{arabic}
\raggedbottom
\widowpenalty=10000
\clubpenalty=10000
\parindent=0pt
\parskip=5pt
\renewcommand{\topfraction}{.8}
\renewcommand{\bottomfraction}{.8}
\renewcommand{\textfraction}{.2}
\renewcommand{\floatpagefraction}{.8}
\setstretch{1.1}

\part{Introduction and Tutorial}
\chapter{Introduction}

Xen is a {\em paravirtualising} virtual machine monitor (VMM), or
`hypervisor', for the x86 processor architecture. Xen can securely
execute multiple virtual machines on a single physical system with
close-to-native performance. The virtual machine technology
facilitates enterprise-grade functionality, including:

\begin{itemize}
\item Virtual machines with performance close to native hardware.
\item Live migration of running virtual machines between physical hosts.
\item Excellent hardware support (supports most Linux device drivers).
\item Sandboxed, restartable device drivers.
\end{itemize}

Paravirtualisation permits very high performance virtualisation,
even on architectures like x86 that are traditionally
very hard to virtualise.
The drawback of this approach is that it requires operating systems to
be {\em ported} to run on Xen. Porting an OS to run on Xen is similar
to supporting a new hardware platform; however, the process
is simplified because the paravirtual machine architecture is very
similar to the underlying native hardware. Even though operating system
kernels must explicitly support Xen, a key feature is that user space
applications and libraries {\em do not} require modification.

Xen support is available for increasingly many operating systems:
right now, Linux 2.4, Linux 2.6 and NetBSD are available for Xen 2.0.
A FreeBSD port is undergoing testing and will be incorporated into the
release soon. Other OS ports, including Plan 9, are in progress. We
hope that the arch-xen patches will be incorporated into the
mainstream releases of these operating systems in due course (as has
already happened for NetBSD).

Possible usage scenarios for Xen include:
\begin{description}
\item [Kernel development.] Test and debug kernel modifications in a
sandboxed virtual machine --- no need for a separate test
machine.
\item [Multiple OS configurations.] Run multiple operating systems
simultaneously, for instance for compatibility or QA purposes.
\item [Server consolidation.] Move multiple servers onto a single
physical host with performance and fault isolation provided at
virtual machine boundaries.
\item [Cluster computing.] Management at VM granularity provides more
flexibility than separately managing each physical host, but
better control and isolation than single-system image solutions,
particularly by using live migration for load balancing.
\item [Hardware support for custom OSes.] Allow development of new OSes
while benefiting from the wide-ranging hardware support of
existing OSes such as Linux.
\end{description}

\section{Structure of a Xen-Based System}

A Xen system has multiple layers, the lowest and most privileged of
which is Xen itself.
Xen in turn may host multiple {\em guest} operating systems, each of
which is executed within a secure virtual machine (in Xen terminology,
a {\em domain}). Domains are scheduled by Xen to make effective use of
the available physical CPUs. Each guest OS manages its own
applications, which includes responsibility for scheduling each
application within the time allotted to the VM by Xen.

The first domain, {\em domain 0}, is created automatically when the
system boots and has special management privileges. Domain 0 builds
other domains and manages their virtual devices. It also performs
administrative tasks such as suspending, resuming and migrating other
virtual machines.

Within domain 0, a process called \emph{xend} runs to manage the system.
\Xend is responsible for managing virtual machines and providing access
to their consoles. Commands are issued to \xend over an HTTP
interface, either from a command-line tool or from a web browser.

\section{Hardware Support}

Xen currently runs only on the x86 architecture, requiring a `P6' or
newer processor (e.g. Pentium Pro, Celeron, Pentium II, Pentium III,
Pentium IV, Xeon, AMD Athlon, AMD Duron). Multiprocessor machines are
supported, and we also have basic support for HyperThreading (SMT),
although this remains a topic for ongoing research. A port
specifically for x86/64 is in progress, although Xen already runs on
such systems in 32-bit legacy mode. In addition a port to the IA64
architecture is approaching completion. We hope to add other
architectures such as PPC and ARM in due course.

Xen can currently use up to 4GB of memory. It is possible for x86
machines to address up to 64GB of physical memory but there are no
current plans to support these systems; the x86/64 port is the
planned route to supporting larger memory sizes.

Xen offloads most of the hardware support issues to the guest OS
running in Domain~0. Xen itself contains only the code required to
detect and start secondary processors, set up interrupt routing, and
perform PCI bus enumeration. Device drivers run within a privileged
guest OS rather than within Xen itself. This approach provides
compatibility with the majority of device hardware supported by Linux.
The default XenLinux build contains support for relatively modern
server-class network and disk hardware, but you can add support for
other hardware by configuring your XenLinux kernel in the normal way.

\section{History}

Xen was originally developed by the Systems Research Group at the
University of Cambridge Computer Laboratory as part of the XenoServers
project, funded by the UK-EPSRC.
The XenoServers project aims to provide a `public infrastructure for
global distributed computing', and Xen plays a key part in that,
allowing us to efficiently partition a single machine to enable
multiple independent clients to run their operating systems and
applications in an environment providing protection, resource
isolation and accounting. The project web page contains further
information along with pointers to papers and technical reports:
\path{http://www.cl.cam.ac.uk/xeno}

Xen has since grown into a fully-fledged project in its own right,
enabling us to investigate interesting research issues regarding the
best techniques for virtualising resources such as the CPU, memory,
disk and network. The project has been bolstered by support from
Intel Research Cambridge and HP Labs, who are now working closely
with us.

Xen was first described in a paper presented at SOSP in
2003\footnote{\tt
http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf}, and the first
public release (1.0) was made that October. Since then, Xen has
significantly matured and is now used in production scenarios on
many sites.

Xen 2.0 features greatly enhanced hardware support, configuration
flexibility, usability and a larger complement of supported operating
systems. This latest release takes Xen a step closer to becoming the
definitive open source solution for virtualisation.

\chapter{Installation}

The Xen distribution includes three main components: Xen itself, ports
of Linux 2.4 and 2.6 and NetBSD to run on Xen, and the user-space
tools required to manage a Xen-based system. This chapter describes
how to install the Xen 2.0 distribution from source. Alternatively,
there may be pre-built packages available as part of your operating
system distribution.

\section{Prerequisites}
\label{sec:prerequisites}

The following is a full list of prerequisites. Items marked `$\dag$'
are required by the \xend control tools, and hence required if you
want to run more than one virtual machine; items marked `$*$' are only
required if you wish to build from source.
\begin{itemize}
\item A working Linux distribution using the GRUB bootloader and
running on a P6-class (or newer) CPU.
\item [$\dag$] The \path{iproute2} package.
\item [$\dag$] The Linux bridge-utils\footnote{Available from
{\tt http://bridge.sourceforge.net}} (e.g., \path{/sbin/brctl}).
\item [$\dag$] An installation of Twisted v1.3 or
above\footnote{Available from {\tt
http://www.twistedmatrix.com}}. There may be a binary package
available for your distribution; alternatively it can be installed by
running `{\sl make install-twisted}' in the root of the Xen source
tree.
\item [$*$] Build tools (gcc v3.2.x or v3.3.x, binutils, GNU make).
\item [$*$] Development installation of libcurl (e.g., libcurl-devel).
\item [$*$] Development installation of zlib (e.g., zlib-dev).
\item [$*$] Development installation of Python v2.2 or later (e.g., python-dev).
\item [$*$] \LaTeX, transfig and tgif are required to build the documentation.
\end{itemize}

Once you have satisfied the relevant prerequisites, you can
now install either a binary or source distribution of Xen.

\section{Installing from Binary Tarball}

Pre-built tarballs are available for download from the Xen
download page
\begin{quote}
{\tt http://xen.sf.net}
\end{quote}

Once you've downloaded the tarball, simply unpack and install:
\begin{verbatim}
# tar zxvf xen-2.0-install.tgz
# cd xen-2.0-install
# sh ./install.sh
\end{verbatim}

Once you've installed the binaries you need to configure
your system as described in Section~\ref{s:configure}.

\section{Installing from Source}

This section describes how to obtain, build, and install
Xen from source.

\subsection{Obtaining the Source}

The Xen source tree is available as either a compressed source
tarball or as a clone of our master BitKeeper repository.

\begin{description}
\item[Obtaining the Source Tarball]\mbox{} \\
Stable versions (and daily snapshots) of the Xen source tree are
available as compressed tarballs from the Xen download page
\begin{quote}
{\tt http://xen.sf.net}
\end{quote}

\item[Using BitKeeper]\mbox{} \\
If you wish to install Xen from a clone of our latest BitKeeper
repository then you will need to install the BitKeeper tools.
Download instructions for BitKeeper can be obtained by filling out the
form at:

\begin{quote}
{\tt http://www.bitmover.com/cgi-bin/download.cgi}
\end{quote}
The public master BK repository for the 2.0 release lives at:
\begin{quote}
{\tt bk://xen.bkbits.net/xen-2.0.bk}
\end{quote}
You can use BitKeeper to
download it and keep it updated with the latest features and fixes.

Change to the directory in which you want to put the source code, then
run:
\begin{verbatim}
# bk clone bk://xen.bkbits.net/xen-2.0.bk
\end{verbatim}

This will create a new directory named \path{xen-2.0.bk} under your
current directory, containing all the source code for Xen, the OS
ports, and the control tools. You can update your repository with the
latest changes at any time by running:
\begin{verbatim}
# cd xen-2.0.bk # to change into the local repository
# bk pull       # to update the repository
\end{verbatim}
\end{description}

%\section{The distribution}
%
%The Xen source code repository is structured as follows:
%
%\begin{description}
%\item[\path{tools/}] Xen node controller daemon (Xend), command line tools,
% control libraries
%\item[\path{xen/}] The Xen VMM.
%\item[\path{linux-*-xen-sparse/}] Xen support for Linux.
%\item[\path{linux-*-patches/}] Experimental patches for Linux.
%\item[\path{netbsd-*-xen-sparse/}] Xen support for NetBSD.
%\item[\path{docs/}] Various documentation files for users and developers.
%\item[\path{extras/}] Bonus extras.
%\end{description}

\subsection{Building from Source}

The top-level Xen Makefile includes a target `world' that will do the
following:

\begin{itemize}
\item Build Xen.
\item Build the control tools, including \xend.
\item Download (if necessary) and unpack the Linux 2.6 source code,
and patch it for use with Xen.
\item Build a Linux kernel to use in domain 0 and a smaller
unprivileged kernel, which can optionally be used for
unprivileged virtual machines.
\end{itemize}
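
For example, to run the full build from the top of the Xen source
tree:
\begin{quote}
\begin{verbatim}
# make world
\end{verbatim}
\end{quote}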

After the build has completed you should have a top-level
directory called \path{dist/} in which all resulting targets
will be placed; of particular interest are the two XenLinux
kernel images, one with a `-xen0' extension
which contains hardware device drivers and drivers for Xen's virtual
devices, and one with a `-xenU' extension that just contains the
virtual ones. These are found in \path{dist/install/boot/} along
with the image for Xen itself and the configuration files used
during the build.

The NetBSD port can be built using:
\begin{quote}
\begin{verbatim}
# make netbsd20
\end{verbatim}
\end{quote}
The NetBSD port is built using a snapshot of the netbsd-2-0 CVS branch.
The snapshot is downloaded as part of the build process, if it is not
yet present in the \path{NETBSD\_SRC\_PATH} search path. The build
process also downloads a toolchain which includes all the tools
necessary to build the NetBSD kernel under Linux.

To further customize the set of kernels built you need to edit
the top-level Makefile. Look for the line:

\begin{quote}
\begin{verbatim}
KERNELS ?= mk.linux-2.6-xen0 mk.linux-2.6-xenU
\end{verbatim}
\end{quote}

You can edit this line to include any set of operating system kernels
which have configurations in the top-level \path{buildconfigs/}
directory, for example \path{mk.linux-2.4-xenU} to build a Linux 2.4
kernel containing only virtual device drivers.
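
For instance, to additionally build a Linux 2.4 unprivileged kernel,
the line would become:
\begin{quote}
\begin{verbatim}
KERNELS ?= mk.linux-2.6-xen0 mk.linux-2.6-xenU mk.linux-2.4-xenU
\end{verbatim}
\end{quote}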

%% Inspect the Makefile if you want to see what goes on during a build.
%% Building Xen and the tools is straightforward, but XenLinux is more
%% complicated. The makefile needs a `pristine' Linux kernel tree to which
%% it will then add the Xen architecture files. You can tell the
%% makefile the location of the appropriate Linux compressed tar file by
%% setting the LINUX\_SRC environment variable, e.g. \\
%% \verb!# LINUX_SRC=/tmp/linux-2.6.9.tar.bz2 make world! \\ or by
%% placing the tar file somewhere in the search path of {\tt
%% LINUX\_SRC\_PATH} which defaults to `{\tt .:..}'. If the makefile
%% can't find a suitable kernel tar file it attempts to download it from
%% kernel.org (this won't work if you're behind a firewall).

%% After untaring the pristine kernel tree, the makefile uses the {\tt
%% mkbuildtree} script to add the Xen patches to the kernel.

%% The procedure is similar to build the Linux 2.4 port: \\
%% \verb!# LINUX_SRC=/path/to/linux2.4/source make linux24!

%% \framebox{\parbox{5in}{
%% {\bf Distro specific:} \\
%% {\it Gentoo} --- if not using udev (most installations, currently), you'll need
%% to enable devfs and devfs mount at boot time in the xen0 config.
%% }}

\subsection{Custom XenLinux Builds}

% If you have an SMP machine you may wish to give the {\tt '-j4'}
% argument to make to get a parallel build.

If you wish to build a customized XenLinux kernel (e.g. to support
additional devices or enable distribution-required features), you can
use the standard Linux configuration mechanisms, specifying that the
architecture being built for is \path{xen}, e.g.:
\begin{quote}
\begin{verbatim}
# cd linux-2.6.9-xen0
# make ARCH=xen xconfig
# cd ..
# make
\end{verbatim}
\end{quote}

You can also copy an existing Linux configuration (\path{.config})
into \path{linux-2.6.9-xen0} and execute:
\begin{quote}
\begin{verbatim}
# make ARCH=xen oldconfig
\end{verbatim}
\end{quote}

You may be prompted with some Xen-specific options; we
advise accepting the defaults for these options.

Note that the only difference between the two types of Linux kernel
that are built is the configuration file used for each. The `U'
suffixed (unprivileged) versions don't contain any of the physical
hardware device drivers, leading to a 30\% reduction in size; hence
you may prefer these for your non-privileged domains. The `0'
suffixed privileged versions can be used to boot the system, as well
as in driver domains and unprivileged domains.

\subsection{Installing the Binaries}

The files produced by the build process are stored under the
\path{dist/install/} directory. To install them in their default
locations, do:
\begin{quote}
\begin{verbatim}
# make install
\end{verbatim}
\end{quote}

Alternatively, users with special installation requirements may wish
to install them manually by copying the files to their appropriate
destinations.

%% Files in \path{install/boot/} include:
%% \begin{itemize}
%% \item \path{install/boot/xen.gz} The Xen 'kernel'
%% \item \path{install/boot/vmlinuz-2.6.9-xen0} Domain 0 XenLinux kernel
%% \item \path{install/boot/vmlinuz-2.6.9-xenU} Unprivileged XenLinux kernel
%% \end{itemize}

The \path{dist/install/boot} directory will also contain the config files
used for building the XenLinux kernels, and also versions of Xen and
XenLinux kernels that contain debug symbols (\path{xen-syms} and
\path{vmlinux-syms-2.6.9-xen0}) which are essential for interpreting crash
dumps. Retain these files as the developers may wish to see them if
you post on the mailing list.

\section{Configuration}
\label{s:configure}
Once you have built and installed the Xen distribution, it is
simple to prepare the machine for booting and running Xen.

\subsection{GRUB Configuration}

An entry should be added to \path{grub.conf} (often found under
\path{/boot/} or \path{/boot/grub/}) to allow Xen / XenLinux to boot.
This file is sometimes called \path{menu.lst}, depending on your
distribution. The entry should look something like the following:

{\small
\begin{verbatim}
title Xen 2.0 / XenLinux 2.6.9
kernel /boot/xen.gz dom0_mem=131072
module /boot/vmlinuz-2.6.9-xen0 root=/dev/sda4 ro console=tty0
\end{verbatim}
}

The kernel line tells GRUB where to find Xen itself and what boot
parameters should be passed to it (in this case, setting domain 0's
memory allocation). For more
details on the various Xen boot parameters see Section~\ref{s:xboot}.

The module line of the configuration describes the location of the
XenLinux kernel that Xen should start and the parameters that should
be passed to it (these are standard Linux parameters, identifying the
root device and specifying it be initially mounted read only and
instructing that console output be sent to the screen). Some
distributions such as SuSE do not require the \path{ro} parameter.

%% \framebox{\parbox{5in}{
%% {\bf Distro specific:} \\
%% {\it SuSE} --- Omit the {\tt ro} option from the XenLinux kernel
%% command line, since the partition won't be remounted rw during boot.
%% }}

If you want to use an initrd, just add another \path{module} line to
the configuration, as usual:
{\small
\begin{verbatim}
module /boot/my_initrd.gz
\end{verbatim}
}

As always when installing a new kernel, it is recommended that you do
not delete existing menu options from \path{menu.lst} --- you may want
to boot your old Linux kernel in future, particularly if you
have problems.

\subsection{Serial Console (optional)}

%% kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1
%% module /boot/vmlinuz-2.6.9-xen0 root=/dev/sda4 ro

In order to configure Xen serial console output, it is necessary to
add a boot option to your GRUB config; e.g. replace the kernel line
above with:
\begin{quote}
{\small
\begin{verbatim}
kernel /boot/xen.gz dom0_mem=131072 com1=115200,8n1
\end{verbatim}}
\end{quote}

This configures Xen to output on COM1 at 115,200 baud, 8 data bits,
1 stop bit and no parity. Modify these parameters for your setup.

One can also configure XenLinux to share the serial console; to
achieve this append ``\path{console=ttyS0}'' to your
module line.
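
For example, the module line from the GRUB entry above would become
(keeping \path{console=tty0} so that output still goes to the screen
as well):
{\small
\begin{verbatim}
module /boot/vmlinuz-2.6.9-xen0 root=/dev/sda4 ro console=tty0 console=ttyS0
\end{verbatim}
}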

If you wish to be able to log in over the XenLinux serial console it
is necessary to add a line into \path{/etc/inittab}, just as per
regular Linux. Simply add the line:
\begin{quote}
{\small
{\tt c:2345:respawn:/sbin/mingetty ttyS0}
}
\end{quote}

and you should be able to log in. Note that to log in as root over
the serial line you will need to add \path{ttyS0} to
\path{/etc/securetty} in most modern distributions.
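
For example:
\begin{quote}
{\small
\verb_# echo 'ttyS0' >> /etc/securetty_
}
\end{quote}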

\subsection{TLS Libraries}

Users of the XenLinux 2.6 kernel should disable Thread Local Storage
(e.g.\ by doing a \path{mv /lib/tls /lib/tls.disabled}) before
attempting to run with a XenLinux kernel\footnote{If you boot without first
disabling TLS, you will get a warning message during the boot
process. In this case, simply perform the rename after the machine is
up and then run \texttt{/sbin/ldconfig} to make it take effect.}. You can
always re-enable it by restoring the directory to its original location
(i.e.\ \path{mv /lib/tls.disabled /lib/tls}).

The reason for this is that the current TLS implementation uses
segmentation in a way that is not permissible under Xen. If TLS is
not disabled, an emulation mode is used within Xen which reduces
performance substantially.

We hope that this issue can be resolved by working with Linux
distribution vendors to implement a minor backward-compatible change
to the TLS library.

\section{Booting Xen}

It should now be possible to restart the system and use Xen. Reboot
as usual but choose the new Xen option when the GRUB screen appears.

What follows should look much like a conventional Linux boot. The
first portion of the output comes from Xen itself, supplying low level
information about itself and the machine it is running on. The
following portion of the output comes from XenLinux.

You may see some errors during the XenLinux boot. These are not
necessarily anything to worry about --- they may result from kernel
configuration differences between your XenLinux kernel and the one you
usually use.

When the boot completes, you should be able to log into your system as
usual. If you are unable to log in to your system running Xen, you
should still be able to reboot with your normal Linux kernel.

\chapter{Starting Additional Domains}

The first step in creating a new domain is to prepare a root
filesystem for it to boot off. Typically, this might be stored in a
normal partition, an LVM or other volume manager partition, a disk
file or on an NFS server. One simple approach is to boot from your
standard OS install CD and install the distribution into another
partition on your hard drive.

To start the \xend control daemon, type
\begin{quote}
\verb!# xend start!
\end{quote}
If you
wish the daemon to start automatically, see the instructions in
Section~\ref{s:xend}. Once the daemon is running, you can use the
\path{xm} tool to monitor and maintain the domains running on your
system. This chapter provides only a brief tutorial: we provide full
details of the \path{xm} tool in the next chapter.

%\section{From the web interface}
%
%Boot the Xen machine and start Xensv (see Chapter~\ref{cha:xensv} for
%more details) using the command: \\
%\verb_# xensv start_ \\
%This will also start Xend (see Chapter~\ref{cha:xend} for more information).
%
%The domain management interface will then be available at {\tt
%http://your\_machine:8080/}. This provides a user friendly wizard for
%starting domains and functions for managing running domains.
%
%\section{From the command line}

\section{Creating a Domain Configuration File}

Before you can start an additional domain, you must create a
configuration file. We provide two example files which you
can use as a starting point:
\begin{itemize}
\item \path{/etc/xen/xmexample1} is a simple template configuration file
for describing a single VM.

\item \path{/etc/xen/xmexample2} is a template description that
is intended to be reused for multiple virtual machines. Setting
the value of the \path{vmid} variable on the \path{xm} command line
fills in parts of this template.
\end{itemize}

Copy one of these files and edit it as appropriate.
Typical values you may wish to edit include:

\begin{quote}
\begin{description}
\item[kernel] Set this to the path of the kernel you compiled for use
with Xen (e.g.\ \path{kernel = '/boot/vmlinuz-2.6.9-xenU'}).
\item[memory] Set this to the size of the domain's memory in
megabytes (e.g.\ \path{memory = 64}).
\item[disk] Set the first entry in this list to calculate the offset
of the domain's root partition, based on the domain ID. Set the
second to the location of \path{/usr} if you are sharing it between
domains (e.g.\ \path{disk = ['phy:your\_hard\_drive\%d,sda1,w' \%
(base\_partition\_number + vmid), 'phy:your\_usr\_partition,sda6,r' ]}).
\item[dhcp] Uncomment the dhcp variable, so that the domain will
receive its IP address from a DHCP server (e.g.\ \path{dhcp='dhcp'}).
\end{description}
\end{quote}

You may also want to edit the {\bf vif} variable in order to choose
the MAC address of the virtual ethernet interface yourself. For
example:
\begin{quote}
\verb_vif = ['mac=00:06:AA:F6:BB:B3']_
\end{quote}
If you do not set this variable, \xend will automatically generate a
random MAC address from an unused range.

\section{Booting the Domain}

The \path{xm} tool provides a variety of commands for managing domains.
Use the \path{create} command to start new domains. Assuming you've
created a configuration file \path{myvmconf} based around
\path{/etc/xen/xmexample2}, to start a domain with virtual
machine ID~1 you should type:

\begin{quote}
\begin{verbatim}
# xm create -c myvmconf vmid=1
\end{verbatim}
\end{quote}

The \path{-c} switch causes \path{xm} to connect to the domain's
console after creation. The \path{vmid=1} sets the \path{vmid}
variable used in the \path{myvmconf} file.

You should see the console boot messages from the new domain
appearing in the terminal in which you typed the command,
culminating in a login prompt.

\section{Example: ttylinux}

Ttylinux is a very small Linux distribution, designed to require very
few resources. We will use it as a concrete example of how to start a
Xen domain. Most users will probably want to install a full-featured
distribution once they have mastered the basics\footnote{ttylinux is
maintained by Pascal Schmidt. You can download source packages from
the distribution's home page: {\tt http://www.minimalinux.org/ttylinux/}}.

\begin{enumerate}
\item Download and extract the ttylinux disk image from the Files
section of the project's SourceForge site (see
\path{http://sf.net/projects/xen/}).
\item Create a configuration file like the following:
\begin{verbatim}
kernel = "/boot/vmlinuz-2.6.9-xenU"
memory = 64
name = "ttylinux"
nics = 1
ip = "1.2.3.4"
disk = ['file:/path/to/ttylinux/rootfs,sda1,w']
root = "/dev/sda1 ro"
\end{verbatim}
\item Now start the domain and connect to its console:
\begin{verbatim}
# xm create configfile -c
\end{verbatim}
\item Log in as root, password `root'.
\end{enumerate}

\section{Starting / Stopping Domains Automatically}

It is possible to have certain domains start automatically at boot
time and to have dom0 wait for all running domains to shut down before
it shuts down the system.

To specify that a domain should start at boot time, place its
configuration file (or a link to it) under \path{/etc/xen/auto/}.
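
For example, assuming the \path{myvmconf} file from the previous
chapter lives in \path{/etc/xen}, a symbolic link suffices:
\begin{quote}
\verb_# ln -s /etc/xen/myvmconf /etc/xen/auto/_
\end{quote}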

A Sys-V style init script for RedHat and LSB-compliant systems is
provided and will be automatically copied to \path{/etc/init.d/}
during install. You can then enable it in the appropriate way for
your distribution.

For instance, on RedHat:

\begin{quote}
\verb_# chkconfig --add xendomains_
\end{quote}

By default, this will start the boot-time domains in runlevels 3, 4
and 5.

You can also use the \path{service} command to run this script
manually, e.g.:

\begin{quote}
\verb_# service xendomains start_

Starts all the domains with config files under /etc/xen/auto/.
\end{quote}

\begin{quote}
\verb_# service xendomains stop_

Shuts down ALL running Xen domains.
\end{quote}

\chapter{Domain Management Tools}

The previous chapter described a simple example of how to configure
and start a domain. This chapter summarises the tools available to
manage running domains.

\section{Command-line Management}

Command line management tasks are also performed using the \path{xm}
tool. For online help for the commands available, type:
\begin{quote}
\verb_# xm help_
\end{quote}

You can also type \path{xm help $<$command$>$} for more information
on a given command.

\subsection{Basic Management Commands}

The most important \path{xm} commands are:
\begin{quote}
\verb_# xm list_: Lists all domains running.\\
\verb_# xm consoles_: Gives information about the domain consoles.\\
\verb_# xm console_: Opens a console to a domain (e.g.\
\verb_# xm console myVM_).
\end{quote}

\subsection{\tt xm list}

The output of \path{xm list} is in rows of the following format:
\begin{center}
{\tt name domid memory cpu state cputime console}
\end{center}

\begin{quote}
\begin{description}
\item[name] The descriptive name of the virtual machine.
\item[domid] The ID of the domain in which this virtual machine is running.
\item[memory] Memory size in megabytes.
\item[cpu] The CPU this domain is running on.
\item[state] Domain state consists of 5 fields:
\begin{description}
\item[r] running
\item[b] blocked
\item[p] paused
\item[s] shutdown
\item[c] crashed
\end{description}
\item[cputime] How much CPU time (in seconds) the domain has used so far.
\item[console] TCP port accepting connections to the domain's console.
\end{description}
\end{quote}

The \path{xm list} command also supports a long output format when the
\path{-l} switch is used. This outputs the full details of the
running domains in \xend's SXP configuration format.

For example, suppose the system is running the ttylinux domain as
described earlier. The list command should produce output somewhat
like the following:
\begin{verbatim}
# xm list
Name              Id  Mem(MB)  CPU  State  Time(s)  Console
Domain-0           0      251    0  r----    172.2
ttylinux           5       63    0  -b---      3.0     9605
\end{verbatim}

Here we can see the details for the ttylinux domain, as well as for
domain 0 (which, of course, is always running). Note that the console
port for the ttylinux domain is 9605. This can be connected to over
TCP using a terminal program (e.g. \path{telnet} or, better,
\path{xencons}). The simplest way to connect is to use the \path{xm console}
command, specifying the domain name or ID. To connect to the console
of the ttylinux domain, we could use any of the following:
\begin{verbatim}
# xm console ttylinux
# xm console 5
# xencons localhost 9605
\end{verbatim}

\section{Domain Save and Restore}

The administrator of a Xen system may suspend a virtual machine's
current state into a disk file in domain 0, allowing it to be resumed
at a later time.

The ttylinux domain described earlier can be suspended to disk using
the command:
\begin{verbatim}
# xm save ttylinux ttylinux.xen
\end{verbatim}

This will stop the domain named `ttylinux' and save its current state
into a file called \path{ttylinux.xen}.

To resume execution of this domain, use the \path{xm restore} command:
\begin{verbatim}
# xm restore ttylinux.xen
\end{verbatim}

This will restore the state of the domain and restart it. The domain
will carry on as before and the console may be reconnected using the
\path{xm console} command, as above.

\section{Live Migration}

Live migration is used to transfer a domain between physical hosts
whilst that domain continues to perform its usual activities --- from
the user's perspective, the migration should be imperceptible.

To perform a live migration, both hosts must be running Xen / \xend and
the destination host must have sufficient resources (e.g. memory
capacity) to accommodate the domain after the move. Furthermore we
currently require both source and destination machines to be on the
same L2 subnet.

Currently, there is no support for providing automatic remote access
to filesystems stored on local disk when a domain is migrated.
Administrators should choose an appropriate storage solution
(i.e. SAN, NAS, etc.) to ensure that domain filesystems are also
available on their destination node. GNBD is a good method for
exporting a volume from one machine to another. iSCSI can do a similar
job, but is more complex to set up.

When a domain migrates, its MAC and IP address move with it; thus it
is only possible to migrate VMs within the same layer-2 network and IP
subnet. If the destination node is on a different subnet, the
administrator would need to manually configure a suitable etherip or
IP tunnel in the domain 0 of the remote node.

A domain may be migrated using the \path{xm migrate} command. To
live migrate a domain to another machine, we would use
the command:

\begin{verbatim}
# xm migrate --live mydomain destination.ournetwork.com
\end{verbatim}

Without the \path{--live} flag, \xend simply stops the domain and
copies the memory image over to the new node and restarts it. Since
domains can have large allocations this can be quite time consuming,
even on a Gigabit network. With the \path{--live} flag \xend attempts
to keep the domain running while the migration is in progress,
resulting in typical `downtimes' of just 60--300ms.

For now it will be necessary to reconnect to the domain's console on
the new machine using the \path{xm console} command. If a migrated
domain has any open network connections then they will be preserved,
so SSH connections do not have this limitation.

\section{Managing Domain Memory}

XenLinux domains have the ability to relinquish / reclaim machine
memory at the request of the administrator or the user of the domain.

\subsection{Setting memory footprints from dom0}

The machine administrator can request that a domain alter its memory
footprint using the \path{xm balloon} command. For instance, we can
request that our example ttylinux domain reduce its memory footprint
to 32 megabytes:

\begin{verbatim}
# xm balloon ttylinux 32
\end{verbatim}

We can now see the result of this in the output of \path{xm list}:

\begin{verbatim}
# xm list
Name              Id  Mem(MB)  CPU  State  Time(s)  Console
Domain-0           0      251    0  r----    172.2
ttylinux           5       31    0  -b---      4.3     9605
\end{verbatim}

The domain has responded to the request by returning memory to Xen. We
can restore the domain to its original size using the command:

\begin{verbatim}
# xm balloon ttylinux 64
\end{verbatim}

\subsection{Setting memory footprints from within a domain}

The virtual file \path{/proc/xen/memory\_target} allows the owner of a
domain to adjust their own memory footprint. Reading the file
(e.g. \path{cat /proc/xen/memory\_target}) prints out the current
memory footprint of the domain. Writing the file
(e.g. \path{echo new\_target > /proc/xen/memory\_target}) requests
that the kernel adjust the domain's memory footprint to a new value.
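
For example, from within the domain (a sketch with an illustrative
target value; consult your kernel's documentation for the units
expected by this interface):
\begin{quote}
\begin{verbatim}
# cat /proc/xen/memory_target
# echo 32768 > /proc/xen/memory_target
\end{verbatim}
\end{quote}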

\subsection{Setting memory limits}

Xen associates a memory size limit with each domain. By default, this
is the amount of memory the domain is originally started with,
preventing the domain from ever growing beyond this size. To permit a
domain to grow beyond its original allocation or to prevent a domain
you've shrunk from reclaiming the memory it relinquished, use the
\path{xm maxmem} command.
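
For instance, to allow the ttylinux domain from earlier to grow back
to its original 64MB and beyond (a sketch, assuming \path{xm maxmem}
takes a domain and a size in megabytes, as \path{xm balloon} does):
\begin{quote}
\begin{verbatim}
# xm maxmem ttylinux 128
\end{verbatim}
\end{quote}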

\chapter{Domain Filesystem Storage}

It is possible to directly export any Linux block device in dom0 to
another domain, or to export filesystems / devices to virtual machines
using standard network protocols (e.g. NBD, iSCSI, NFS, etc.). This
chapter covers some of the possibilities.

\section{Exporting Physical Devices as VBDs}

One of the simplest configurations is to directly export
individual partitions from domain 0 to other domains. To
achieve this use the \path{phy:} specifier in your domain
configuration file. For example, a line like
\begin{quote}
\verb_disk = ['phy:hda3,sda1,w']_
\end{quote}
specifies that the partition \path{/dev/hda3} in domain 0
should be exported read-write to the new domain as \path{/dev/sda1};
one could equally well export it as \path{/dev/hda} or
\path{/dev/sdb5} should one wish.

In addition to local disks and partitions, it is possible to export
any device that Linux considers to be ``a disk'' in the same manner.
For example, if you have iSCSI disks or GNBD volumes imported into
domain 0 you can export these to other domains using the \path{phy:}
disk syntax. E.g.:
\begin{quote}
\verb_disk = ['phy:vg/lvm1,sda2,w']_
\end{quote}

\begin{center}
\framebox{\bf Warning: Block device sharing}
\end{center}
\begin{quote}
Block devices should typically only be shared between domains in a
read-only fashion, otherwise the Linux kernel's file systems will get
very confused as the file system structure may change underneath them
(having the same ext3 partition mounted rw twice is a sure-fire way to
cause irreparable damage)! \Xend will attempt to prevent you from
doing this by checking that the device is not mounted read-write in
domain 0, and hasn't already been exported read-write to another
domain.
If you want read-write sharing, export the directory to other domains
via NFS from domain 0 (or use a cluster file system such as GFS or
ocfs2).

\end{quote}

\section{Using File-backed VBDs}

It is also possible to use a file in Domain 0 as the primary storage
for a virtual machine. As well as being convenient, this also has the
advantage that the virtual block device will be {\em sparse} --- space
will only really be allocated as parts of the file are used. So if a
virtual machine uses only half of its disk space then the file really
takes up half of the size allocated.

For example, to create a 2GB sparse file-backed virtual block device
(actually only consumes 1KB of disk):
\begin{quote}
\verb_# dd if=/dev/zero of=vm1disk bs=1k seek=2048k count=1_
\end{quote}

Make a file system in the disk file:
\begin{quote}
\verb_# mkfs -t ext3 vm1disk_
\end{quote}

(when the tool asks for confirmation, answer `y')

Populate the file system e.g. by copying from the current root:
\begin{quote}
\begin{verbatim}
# mount -o loop vm1disk /mnt
# cp -ax /{root,dev,var,etc,usr,bin,sbin,lib} /mnt
# mkdir /mnt/{proc,sys,home,tmp}
\end{verbatim}
\end{quote}

Tailor the file system by editing \path{/etc/fstab},
\path{/etc/hostname}, etc.\ (don't forget to edit the files in the
mounted file system, instead of your domain 0 filesystem, e.g. you
would edit \path{/mnt/etc/fstab} instead of \path{/etc/fstab}). For
this example, set the root device to \path{/dev/sda1} in fstab.

Now unmount (this is important!):
\begin{quote}
\verb_# umount /mnt_
\end{quote}

In the configuration file set:
\begin{quote}
\verb_disk = ['file:/full/path/to/vm1disk,sda1,w']_
\end{quote}

As the virtual machine writes to its `disk', the sparse file will be
filled in and consume more space up to the original 2GB.

\section{Using LVM-backed VBDs}

A particularly appealing solution is to use LVM volumes
as backing for domain file-systems since this allows dynamic
growing/shrinking of volumes as well as snapshot and other
features.

To initialise a partition to support LVM volumes:
\begin{quote}
\begin{verbatim}
# pvcreate /dev/sda10
\end{verbatim}
\end{quote}

Create a volume group named `vg' on the physical partition:
\begin{quote}
\begin{verbatim}
# vgcreate vg /dev/sda10
\end{verbatim}
\end{quote}

Create a logical volume of size 4GB named `myvmdisk1':
\begin{quote}
\begin{verbatim}
# lvcreate -L4096M -n myvmdisk1 vg
\end{verbatim}
\end{quote}

You should now see that you have a \path{/dev/vg/myvmdisk1} device.
Make a filesystem, mount it and populate it, e.g.:
\begin{quote}
\begin{verbatim}
# mkfs -t ext3 /dev/vg/myvmdisk1
# mount /dev/vg/myvmdisk1 /mnt
# cp -ax / /mnt
# umount /mnt
\end{verbatim}
\end{quote}

Now configure your VM with the following disk configuration:
\begin{quote}
\begin{verbatim}
disk = [ 'phy:vg/myvmdisk1,sda1,w' ]
\end{verbatim}
\end{quote}

LVM enables you to grow the size of logical volumes, but you'll need
to resize the corresponding file system to make use of the new
space. Some file systems (e.g. ext3) now support on-line resize. See
the LVM manuals for more details.
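
For example, to add 1GB to the volume and then grow the ext3 file
system into the new space (a sketch; the resize tool and whether it
can run on-line depend on your kernel and distribution, e.g.\
\path{resize2fs} or \path{ext2online}):
\begin{quote}
\begin{verbatim}
# lvextend -L+1024M /dev/vg/myvmdisk1
# resize2fs /dev/vg/myvmdisk1
\end{verbatim}
\end{quote}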

You can also use LVM for creating copy-on-write clones of LVM
volumes (known as writable persistent snapshots in LVM
terminology). This facility is new in Linux 2.6.8, so isn't as
stable as one might hope. In particular, using lots of CoW LVM
disks consumes a lot of dom0 memory, and error conditions such as
running out of disk space are not handled well. Hopefully this
will improve in future.

To create two copy-on-write clones of the above file system you
would use the following commands:

\begin{quote}
\begin{verbatim}
# lvcreate -s -L1024M -n myclonedisk1 /dev/vg/myvmdisk1
# lvcreate -s -L1024M -n myclonedisk2 /dev/vg/myvmdisk1
\end{verbatim}
\end{quote}

Each of these can grow to have 1GB of differences from the master
volume. You can grow the amount of space for storing the
differences using the lvextend command, e.g.:
\begin{quote}
\begin{verbatim}
# lvextend -L+100M /dev/vg/myclonedisk1
\end{verbatim}
\end{quote}

Don't let the `differences volume' ever fill up, otherwise LVM gets
rather confused. It may be possible to automate the growing
process by using \path{dmsetup wait} to spot the volume getting full
and then issue an \path{lvextend}.

In principle, it is possible to continue writing to the volume
that has been cloned (the changes will not be visible to the
clones), but we wouldn't recommend this: have the cloned volume
as a `pristine' file system install that isn't mounted directly
by any of the virtual machines.

\section{Using NFS Root}

First, populate a root filesystem in a directory on the server
machine. This can be on a distinct physical machine, or simply
run within a virtual machine on the same node.

Now configure the NFS server to export this filesystem over the
network by adding a line to \path{/etc/exports}, for instance:

\begin{quote}
\begin{small}
\begin{verbatim}
/export/vm1root 1.2.3.4/24(rw,sync,no_root_squash)
\end{verbatim}
\end{small}
\end{quote}

Finally, configure the domain to use NFS root. In addition to the
normal variables, you should make sure to set the following values in
the domain's configuration file:

\begin{quote}
\begin{small}
\begin{verbatim}
root = '/dev/nfs'
nfs_server = '2.3.4.5'       # substitute IP address of server
nfs_root   = '/path/to/root' # path to root FS on the server
\end{verbatim}
\end{small}
\end{quote}

The domain will need network access at boot time, so either statically
configure an IP address (using the config variables \path{ip},
\path{netmask}, \path{gateway}, \path{hostname}) or enable DHCP
(\path{dhcp='dhcp'}).
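
For example, a static configuration might add the following to the
domain's configuration file (illustrative values):
\begin{quote}
\begin{small}
\begin{verbatim}
ip = '1.2.3.4'
netmask = '255.255.255.0'
gateway = '1.2.3.1'
hostname = 'vm1'
\end{verbatim}
\end{small}
\end{quote}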

Note that the Linux NFS root implementation is known to have stability
problems under high load (this is not a Xen-specific problem), so this
configuration may not be appropriate for critical servers.

\part{User Reference Documentation}

\chapter{Control Software}

The Xen control software includes the \xend node control daemon (which
must be running), the xm command line tools, and the prototype
xensv web interface.

\section{\Xend (node control daemon)}
\label{s:xend}

The Xen Daemon (\Xend) performs system management functions related to
virtual machines. It forms a central point of control for a machine
and can be controlled using an HTTP-based protocol. \Xend must be
running in order to start and manage virtual machines.

\Xend must be run as root because it needs access to privileged system
management functions. A small set of commands may be issued on the
\xend command line:

\begin{tabular}{ll}
\verb!# xend start! & start \xend, if not already running \\
\verb!# xend stop! & stop \xend if already running \\
\verb!# xend restart! & restart \xend if running, otherwise start it \\
% \verb!# xend trace_start! & start \xend, with very detailed debug logging \\
\verb!# xend status! & indicates \xend status by its return code
\end{tabular}

A SysV init script called {\tt xend} is provided to start \xend at boot
time. {\tt make install} installs this script in \path{/etc/init.d}.
To enable it, you have to make symbolic links in the appropriate
runlevel directories or use the {\tt chkconfig} tool, where available.

Once \xend is running, more sophisticated administration can be done
using the xm tool (see Section~\ref{s:xm}) and the experimental
Xensv web interface (see Section~\ref{s:xensv}).

As \xend runs, events will be logged to \path{/var/log/xend.log} and,
if the migration assistant daemon (\path{xfrd}) has been started,
\path{/var/log/xfrd.log}. These may be of use for troubleshooting
problems.

\section{Xm (command line interface)}
\label{s:xm}

The xm tool is the primary tool for managing Xen from the console.
The general format of an xm command line is:

\begin{verbatim}
# xm command [switches] [arguments] [variables]
\end{verbatim}

The available {\em switches} and {\em arguments} are dependent on the
{\em command} chosen. The {\em variables} may be set using
declarations of the form {\tt variable=value}; command line
declarations override any of the values in the configuration file
being used, including the standard variables described in
Chapter~\ref{cha:config} and any custom variables (for instance, the
\path{xmdefconfig} file uses a {\tt vmid} variable).
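
For example, using the \path{myvmconf} file from the tutorial, both
the custom {\tt vmid} variable and the standard {\tt memory} variable
can be overridden on the command line:
\begin{quote}
\begin{verbatim}
# xm create myvmconf vmid=3 memory=128
\end{verbatim}
\end{quote}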

The available commands are as follows:

\begin{description}
\item[balloon] Request a domain to adjust its memory footprint.
\item[create] Create a new domain.
\item[destroy] Kill a domain immediately.
\item[list] List running domains.
\item[shutdown] Ask a domain to shut down.
\item[dmesg] Fetch the Xen (not Linux!) boot output.
\item[consoles] Lists the available consoles.
\item[console] Connect to the console for a domain.
\item[help] Get help on xm commands.
\item[save] Suspend a domain to disk.
\item[restore] Restore a domain from disk.
\item[pause] Pause a domain's execution.
\item[unpause] Unpause a domain.
\item[pincpu] Pin a domain to a CPU.
\item[bvt] Set BVT scheduler parameters for a domain.
\item[bvt\_ctxallow] Set the BVT context switching allowance for the system.
\item[atropos] Set the atropos parameters for a domain.
\item[rrobin] Set the round robin time slice for the system.
\item[info] Get information about the Xen host.
\item[call] Call a \xend HTTP API function directly.
\end{description}

For a detailed overview of switches, arguments and variables to each command
try
\begin{quote}
\begin{verbatim}
# xm help command
\end{verbatim}
\end{quote}

\section{Xensv (web control interface)}
\label{s:xensv}

Xensv is the experimental web control interface for managing a Xen
machine. It can be used to perform some (but not yet all) of the
management tasks that can be done using the xm tool.

It can be started using:
\begin{quote}
\verb_# xensv start_
\end{quote}
and stopped using:
\begin{quote}
\verb_# xensv stop_
\end{quote}

By default, Xensv will serve out the web interface on port 8080. This
can be changed by editing
\path{/usr/lib/python2.3/site-packages/xen/sv/params.py}.

Once Xensv is running, the web interface can be used to create and
manage running domains.

\chapter{Domain Configuration}
\label{cha:config}

This chapter describes the syntax of the domain configuration files
and how to further specify networking, driver domain and general
scheduling behaviour.

\section{Configuration Files}
\label{s:cfiles}

Xen configuration files contain the following standard variables.
Unless otherwise stated, configuration items should be enclosed in
quotes: see \path{/etc/xen/xmexample1} and \path{/etc/xen/xmexample2}
for concrete examples of the syntax.

\begin{description}
\item[kernel] Path to the kernel image.
\item[ramdisk] Path to a ramdisk image (optional).
% \item[builder] The name of the domain build function (e.g. {\tt'linux'} or {\tt'netbsd'}).
\item[memory] Memory size in megabytes.
\item[cpu] CPU to run this domain on, or {\tt -1} for
auto-allocation.
\item[console] Port to export the domain console on (default 9600 + domain ID).
\item[nics] Number of virtual network interfaces.
\item[vif] List of MAC addresses (random addresses are assigned if not
given) and bridges to use for the domain's network interfaces, e.g.
\begin{verbatim}
vif = [ 'mac=aa:00:00:00:00:11, bridge=xen-br0',
        'bridge=xen-br1' ]
\end{verbatim}
to assign a MAC address and bridge to the first interface and assign
a different bridge to the second interface, leaving \xend to choose
the MAC address.
\item[disk] List of block devices to export to the domain, e.g. \\
\verb_disk = [ 'phy:hda1,sda1,r' ]_ \\
exports physical device \path{/dev/hda1} to the domain
as \path{/dev/sda1} with read-only access. Exporting a disk read-write
which is currently mounted is dangerous --- if you are \emph{certain}
you wish to do this, you can specify \path{w!} as the mode.
\item[dhcp] Set to {\tt 'dhcp'} if you want to use DHCP to configure
networking.
\item[netmask] Manually configured IP netmask.
\item[gateway] Manually configured IP gateway.
\item[hostname] Set the hostname for the virtual machine.
\item[root] Specify the root device parameter on the kernel command
line.
\item[nfs\_server] IP address for the NFS server (if any).
\item[nfs\_root] Path of the root filesystem on the NFS server (if any).
\item[extra] Extra string to append to the kernel command line (if
any).
\item[restart] Three possible options:
\begin{description}
\item[always] Always restart the domain, no matter what
its exit code is.
\item[never] Never restart the domain.
\item[onreboot] Restart the domain iff it requests reboot.
\end{description}
\end{description}

For additional flexibility, it is also possible to include Python
scripting commands in configuration files. An example of this is the
\path{xmexample2} file, which uses Python code to handle the
\path{vmid} variable.

%\part{Advanced Topics}

\section{Network Configuration}

For many users, the default installation should work `out of the box'.
More complicated network setups, for instance with multiple ethernet
interfaces and/or existing bridging setups will require some
special configuration.

The purpose of this section is to describe the mechanisms provided by
\xend to allow a flexible configuration for Xen's virtual networking.

\subsection{Xen virtual network topology}

Each domain network interface is connected to a virtual network
interface in dom0 by a point to point link (effectively a `virtual
crossover cable'). These devices are named {\tt
vif$<$domid$>$.$<$vifid$>$} (e.g. {\tt vif1.0} for the first interface
in domain 1, {\tt vif3.1} for the second interface in domain 3).

Traffic on these virtual interfaces is handled in domain 0 using
standard Linux mechanisms for bridging, routing, rate limiting, etc.
Xend calls on two shell scripts to perform initial configuration of
the network and configuration of new virtual interfaces. By default,
these scripts configure a single bridge for all the virtual
interfaces. Arbitrary routing / bridging configurations can be
configured by customising the scripts, as described in the following
section.
1422 \subsection{Xen networking scripts}
1424 Xen's virtual networking is configured by two shell scripts (by
1425 default \path{network} and \path{vif-bridge}). These are
1426 called automatically by \xend when certain events occur, with
1427 arguments to the scripts providing further contextual information.
1428 These scripts are found by default in \path{/etc/xen/scripts}. The
1429 names and locations of the scripts can be configured in
1430 \path{/etc/xen/xend-config.sxp}.
1432 \begin{description}
1434 \item[network:] This script is called whenever \xend is started or
1435 stopped to respectively initialise or tear down the Xen virtual
network. In the default configuration, initialisation creates the
1437 bridge `xen-br0' and moves eth0 onto that bridge, modifying the
1438 routing accordingly. When \xend exits, it deletes the Xen bridge and
1439 removes eth0, restoring the normal IP and routing configuration.
1441 %% In configurations where the bridge already exists, this script could
1442 %% be replaced with a link to \path{/bin/true} (for instance).
1444 \item[vif-bridge:] This script is called for every domain virtual
1445 interface and can configure firewalling rules and add the vif
1446 to the appropriate bridge. By default, this adds and removes
1447 VIFs on the default Xen bridge.
1449 \end{description}
For more complex network setups (e.g. where routing is required, or
integration with existing bridges is needed) these scripts may be
replaced with customised variants for your site's preferred
configuration.
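For example, to have \xend use customised copies of the scripts, one
might add entries along the following lines to
\path{/etc/xen/xend-config.sxp} (a sketch; check the comments in that
file for the exact key names used by your version of the tools):
\begin{small}\begin{verbatim}
(network-script network-custom)
(vif-script    vif-custom)
\end{verbatim}\end{small}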
1455 %% There are two possible types of privileges: IO privileges and
1456 %% administration privileges.
1458 \section{Driver Domain Configuration}
1460 I/O privileges can be assigned to allow a domain to directly access
1461 PCI devices itself. This is used to support driver domains.
1463 Setting backend privileges is currently only supported in SXP format
1464 config files. To allow a domain to function as a backend for others,
1465 somewhere within the {\tt vm} element of its configuration file must
1466 be a {\tt backend} element of the form {\tt (backend ({\em type}))}
1467 where {\tt \em type} may be either {\tt netif} or {\tt blkif},
1468 according to the type of virtual device this domain will service.
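For example, the relevant fragment of the configuration for a
network-backend domain might look like this (a sketch; the other
elements of the {\tt vm} configuration are omitted):
\begin{small}\begin{verbatim}
(vm
    ...
    (backend (netif))
)
\end{verbatim}\end{small}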
1469 %% After this domain has been built, \xend will connect all new and
1470 %% existing {\em virtual} devices (of the appropriate type) to that
1471 %% backend.
1473 Note that a block backend cannot currently import virtual block
1474 devices from other domains, and a network backend cannot import
1475 virtual network devices from other domains. Thus (particularly in the
1476 case of block backends, which cannot import a virtual block device as
1477 their root filesystem), you may need to boot a backend domain from a
1478 ramdisk or a network device.
Access to PCI devices may be configured on a per-device basis. Xen
will assign to a domain the minimal set of hardware privileges
required to control its devices. This can be configured in either
format of configuration file:
1485 \begin{itemize}
1486 \item SXP Format: Include device elements of the form: \\
1487 \centerline{ {\tt (device (pci (bus {\em x}) (dev {\em y}) (func {\em z})))}} \\
1488 inside the top-level {\tt vm} element. Each one specifies the address
1489 of a device this domain is allowed to access ---
1490 the numbers {\em x},{\em y} and {\em z} may be in either decimal or
1491 hexadecimal format.
1492 \item Flat Format: Include a list of PCI device addresses of the
1493 format: \\
1494 \centerline{{\tt pci = ['x,y,z', ...]}} \\
1495 where each element in the
1496 list is a string specifying the components of the PCI device
1497 address, separated by commas. The components ({\tt \em x}, {\tt \em
1498 y} and {\tt \em z}) of the list may be formatted as either decimal
1499 or hexadecimal.
1500 \end{itemize}
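As a concrete (illustrative) example, to allow a domain access to the
device at bus 0, device 29, function 3, one could write either:
\begin{small}\begin{verbatim}
(device (pci (bus 0) (dev 0x1d) (func 3)))
\end{verbatim}\end{small}
in SXP format, or:
\begin{small}\begin{verbatim}
pci = [ '0,0x1d,3' ]
\end{verbatim}\end{small}
in flat format (note that 0x1d is simply 29 written in hexadecimal).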
1502 %% \section{Administration Domains}
1504 %% Administration privileges allow a domain to use the `dom0
1505 %% operations' (so called because they are usually available only to
1506 %% domain 0). A privileged domain can build other domains, set scheduling
1507 %% parameters, etc.
1509 % Support for other administrative domains is not yet available... perhaps
1510 % we should plumb it in some time
1516 \section{Scheduler Configuration}
1517 \label{s:sched}
1520 Xen offers a boot time choice between multiple schedulers. To select
1521 a scheduler, pass the boot parameter {\em sched=sched\_name} to Xen,
1522 substituting the appropriate scheduler name. Details of the schedulers
and their parameters are included below; future versions of the tools
will provide a higher-level interface to these parameters.
1526 It is expected that system administrators configure their system to
1527 use the scheduler most appropriate to their needs. Currently, the BVT
1528 scheduler is the recommended choice.
1530 \subsection{Borrowed Virtual Time}
1532 {\tt sched=bvt} (the default) \\
BVT provides proportional fair shares of CPU time. It has been
1535 observed to penalise domains that block frequently (e.g. I/O intensive
1536 domains), but this can be compensated for by using warping.
1538 \subsubsection{Global Parameters}
1540 \begin{description}
1541 \item[ctx\_allow]
The context switch allowance is similar to the `quantum'
in traditional schedulers. It is the minimum time that
a scheduled domain will be allowed to run before being
pre-empted.
1546 \end{description}
\subsubsection{Per-domain Parameters}

\begin{description}
\item[mcuadv]
The MCU (Minimum Charging Unit) advance determines the
proportional share of the CPU that a domain receives. It
is set in inverse proportion to a domain's sharing weight.
\item[warp]
The amount of `virtual time' the domain is allowed to warp
backwards.
\item[warpl]
The warp limit is the maximum time a domain can run warped for.
\item[warpu]
The unwarp requirement is the minimum time a domain must
run unwarped for before it can warp again.
\end{description}
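These parameters can be adjusted at runtime using the \path{xm} tool;
the following is a sketch (check \path{xm help} for the exact syntax
of your version of the tools):
\begin{small}\begin{verbatim}
xm bvt_ctxallow 50000    # set the global context switch allowance
xm bvt 1 10 0 0 0        # set domain 1's mcuadv, warp, warpl, warpu
\end{verbatim}\end{small}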
1565 \subsection{Atropos}
1567 {\tt sched=atropos} \\
Atropos is a soft real-time scheduler. It provides guarantees about
1570 absolute shares of the CPU, with a facility for sharing
1571 slack CPU time on a best-effort basis. It can provide timeliness
1572 guarantees for latency-sensitive domains.
1574 Every domain has an associated period and slice. The domain should
1575 receive `slice' nanoseconds every `period' nanoseconds. This allows
1576 the administrator to configure both the absolute share of the CPU a
1577 domain receives and the frequency with which it is scheduled.
1579 %% When
1580 %% domains unblock, their period is reduced to the value of the latency
1581 %% hint (the slice is scaled accordingly so that they still get the same
1582 %% proportion of the CPU). For each subsequent period, the slice and
1583 %% period times are doubled until they reach their original values.
1585 Note: don't overcommit the CPU when using Atropos (i.e. don't reserve
1586 more CPU than is available --- the utilisation should be kept to
1587 slightly less than 100\% in order to ensure predictable behaviour).
\subsubsection{Per-domain Parameters}
1591 \begin{description}
1592 \item[period] The regular time interval during which a domain is
1593 guaranteed to receive its allocation of CPU time.
1594 \item[slice]
1595 The length of time per period that a domain is guaranteed to run
1596 for (in the absence of voluntary yielding of the CPU).
1597 \item[latency]
1598 The latency hint is used to control how soon after
1599 waking up a domain it should be scheduled.
1600 \item[xtratime] This is a boolean flag that specifies whether a domain
1601 should be allowed a share of the system slack time.
1602 \end{description}
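Again, these can be set at runtime via \path{xm}; a sketch (the
argument order and units should be checked against \path{xm help}):
\begin{small}\begin{verbatim}
# give domain 1 a 10ms slice every 100ms, a 50ms latency hint,
# and a share of slack time (times in nanoseconds)
xm atropos 1 100000000 10000000 50000000 1
\end{verbatim}\end{small}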
1604 \subsection{Round Robin}
1606 {\tt sched=rrobin} \\
1608 The round robin scheduler is included as a simple demonstration of
1609 Xen's internal scheduler API. It is not intended for production use.
1611 \subsubsection{Global Parameters}
1613 \begin{description}
1614 \item[rr\_slice]
1615 The maximum time each domain runs before the next
1616 scheduling decision is made.
1617 \end{description}
\chapter{Build, Boot and Debug Options}
1632 This chapter describes the build- and boot-time options
1633 which may be used to tailor your Xen system.
1635 \section{Xen Build Options}
Xen provides a number of build-time options which should be
set as environment variables or passed on make's command line.
1640 \begin{description}
1641 \item[verbose=y] Enable debugging messages when Xen detects an unexpected condition.
1642 Also enables console output from all domains.
1643 \item[debug=y]
1644 Enable debug assertions. Implies {\bf verbose=y}.
1645 (Primarily useful for tracing bugs in Xen).
1646 \item[debugger=y]
1647 Enable the in-Xen debugger. This can be used to debug
1648 Xen, guest OSes, and applications.
1649 \item[perfc=y]
1650 Enable performance counters for significant events
1651 within Xen. The counts can be reset or displayed
1652 on Xen's console via console control keys.
1653 \item[trace=y]
1654 Enable per-cpu trace buffers which log a range of
1655 events within Xen for collection by control
1656 software.
1657 \end{description}
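For example, a debug build of the hypervisor might be produced as
follows (the {\tt xen} target name is illustrative; check the
top-level makefile for the targets in your tree):
\begin{small}\begin{verbatim}
make xen debug=y       # passed on make's command line
debug=y make xen       # or set in the environment
\end{verbatim}\end{small}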
1659 \section{Xen Boot Options}
1660 \label{s:xboot}
1662 These options are used to configure Xen's behaviour at runtime. They
1663 should be appended to Xen's command line, either manually or by
1664 editing \path{grub.conf}.
1666 \begin{description}
1667 \item [ignorebiostables ]
1668 Disable parsing of BIOS-supplied tables. This may help with some
1669 chipsets that aren't fully supported by Xen. If you specify this
1670 option then ACPI tables are also ignored, and SMP support is
1671 disabled.
1673 \item [noreboot ]
1674 Don't reboot the machine automatically on errors. This is
1675 useful to catch debug output if you aren't catching console messages
1676 via the serial line.
1678 \item [nosmp ]
1679 Disable SMP support.
1680 This option is implied by `ignorebiostables'.
1682 \item [noacpi ]
1683 Disable ACPI tables, which confuse Xen on some chipsets.
1684 This option is implied by `ignorebiostables'.
1686 \item [watchdog ]
1687 Enable NMI watchdog which can report certain failures.
1689 \item [noht ]
1690 Disable Hyperthreading.
\item [badpage=$<$page number$>$,$<$page number$>$, \ldots ]
Specify a list of pages not to be allocated for use
because they contain bad bytes. For example, if your
memory tester says that byte 0x12345678 is bad, you would
place `badpage=0x12345' on Xen's command line (the page
number is the byte address with the low 12 bits dropped).
1698 \item [com1=$<$baud$>$,DPS,$<$io\_base$>$,$<$irq$>$
1699 com2=$<$baud$>$,DPS,$<$io\_base$>$,$<$irq$>$ ] \mbox{}\\
1700 Xen supports up to two 16550-compatible serial ports.
For example: `com1=9600,8n1,0x408,5' maps COM1 to a
9600-baud port, 8 data bits, no parity, 1 stop bit,
I/O port base 0x408, IRQ 5.
1704 If the I/O base and IRQ are standard (com1:0x3f8,4;
1705 com2:0x2f8,3) then they need not be specified.
1707 \item [console=$<$specifier list$>$ ]
1708 Specify the destination for Xen console I/O.
This is a comma-separated list of destinations, for example:
1710 \begin{description}
1711 \item[vga] use VGA console and allow keyboard input
1712 \item[com1] use serial port com1
1713 \item[com2H] use serial port com2. Transmitted chars will
1714 have the MSB set. Received chars must have
1715 MSB set.
1716 \item[com2L] use serial port com2. Transmitted chars will
1717 have the MSB cleared. Received chars must
1718 have MSB cleared.
1719 \end{description}
1720 The latter two examples allow a single port to be
1721 shared by two subsystems (e.g. console and
debugger). Sharing is controlled by the MSB of each
transmitted/received character.
1724 [NB. Default for this option is `com1,vga']
1726 \item [conswitch=$<$switch-char$><$auto-switch-char$>$ ]
1727 Specify how to switch serial-console input between
1728 Xen and DOM0. The required sequence is CTRL-$<$switch-char$>$
1729 pressed three times. Specifying the backtick character
1730 disables switching.
1731 The $<$auto-switch-char$>$ specifies whether Xen should
1732 auto-switch input to DOM0 when it boots --- if it is `x'
1733 then auto-switching is disabled. Any other value, or
1734 omitting the character, enables auto-switching.
1735 [NB. default switch-char is `a']
1737 \item [nmi=xxx ]
1738 Specify what to do with an NMI parity or I/O error. \\
1739 `nmi=fatal': Xen prints a diagnostic and then hangs. \\
1740 `nmi=dom0': Inform DOM0 of the NMI. \\
1741 `nmi=ignore': Ignore the NMI.
1743 \item [dom0\_mem=xxx ]
1744 Set the amount of memory (in kB) to be allocated to domain0.
1746 \item [tbuf\_size=xxx ]
1747 Set the size of the per-cpu trace buffers, in pages
1748 (default 1). Note that the trace buffers are only
1749 enabled in debug builds. Most users can ignore
1750 this feature completely.
1752 \item [sched=xxx ]
1753 Select the CPU scheduler Xen should use. The current
1754 possibilities are `bvt' (default), `atropos' and `rrobin'.
1755 For more information see Section~\ref{s:sched}.
1757 \item [pci\_dom0\_hide=(xx.xx.x)(yy.yy.y)\ldots ]
1758 Hide selected PCI devices from domain 0 (for instance, to stop it
1759 taking ownership of them so that they can be driven by another
1760 domain). Device IDs should be given in hex format. Bridge devices do
1761 not need to be hidden --- they are hidden implicitly, since guest OSes
1762 do not need to configure them.
1763 \end{description}
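By way of example, a \path{grub.conf} entry combining several of these
options might look like the following (paths, kernel file names and
values are illustrative and will vary between installations):
\begin{small}\begin{verbatim}
title Xen 2.0 / XenLinux 2.6
    kernel /boot/xen.gz dom0_mem=131072 console=com1,vga com1=115200,8n1
    module /boot/vmlinuz-2.6-xen0 root=/dev/sda1 ro console=tty0
\end{verbatim}\end{small}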
1767 \section{XenLinux Boot Options}
1769 In addition to the standard Linux kernel boot options, we support:
1770 \begin{description}
1771 \item[xencons=xxx ] Specify the device node to which the Xen virtual
1772 console driver is attached. The following options are supported:
1773 \begin{center}
1774 \begin{tabular}{l}
1775 `xencons=off': disable virtual console \\
1776 `xencons=tty': attach console to /dev/tty1 (tty0 at boot-time) \\
1777 `xencons=ttyS': attach console to /dev/ttyS0
1778 \end{tabular}
1779 \end{center}
1780 The default is ttyS for dom0 and tty for all other domains.
1781 \end{description}
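For example, to disable the virtual console in an unprivileged domain,
one could append the option to the kernel command line via the
{\tt extra} variable in that domain's configuration file (a sketch):
\begin{small}\begin{verbatim}
extra = "xencons=off"
\end{verbatim}\end{small}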
1785 \section{Debugging}
1786 \label{s:keys}
Xen has a set of debugging features that can be useful to try and
figure out what's going on. Hit `h' on the serial line (if you
specified a baud rate on the Xen command line) or ScrollLock-h on the
keyboard to get a list of supported commands.

If you have a crash you'll likely get a crash dump containing an EIP
(PC) value which, along with the output of \path{objdump -d} on the
relevant image, can be useful in figuring out what's happened. Debug
a XenLinux image just as you would any other Linux kernel.
1798 %% We supply a handy debug terminal program which you can find in
1799 %% \path{/usr/local/src/xen-2.0.bk/tools/misc/miniterm/}
1800 %% This should be built and executed on another machine that is connected
1801 %% via a null modem cable. Documentation is included.
1802 %% Alternatively, if the Xen machine is connected to a serial-port server
1803 %% then we supply a dumb TCP terminal client, {\tt xencons}.
1808 \chapter{Further Support}
1810 If you have questions that are not answered by this manual, the
1811 sources of information listed below may be of interest to you. Note
1812 that bug reports, suggestions and contributions related to the
1813 software (or the documentation) should be sent to the Xen developers'
1814 mailing list (address below).
1816 \section{Other Documentation}
1818 For developers interested in porting operating systems to Xen, the
1819 {\em Xen Interface Manual} is distributed in the \path{docs/}
1820 directory of the Xen source distribution.
1822 %Various HOWTOs are available in \path{docs/HOWTOS} but this content is
1823 %being integrated into this manual.
1825 \section{Online References}
1827 The official Xen web site is found at:
1828 \begin{quote}
1829 {\tt http://www.cl.cam.ac.uk/netos/xen/}
1830 \end{quote}
This contains links to the latest versions of all on-line
documentation (including the latest version of the FAQ).
1835 \section{Mailing Lists}
1837 There are currently three official Xen mailing lists:
1839 \begin{description}
1840 \item[xen-devel@lists.sourceforge.net] Used for development
1841 discussions and requests for help. Subscribe at: \\
1842 \path{http://lists.sourceforge.net/mailman/listinfo/xen-devel}
1843 \item[xen-announce@lists.sourceforge.net] Used for announcements only.
1844 Subscribe at: \\
1845 \path{http://lists.sourceforge.net/mailman/listinfo/xen-announce}
\item[xen-changelog@lists.sourceforge.net] Changelog feed
from the unstable and 2.0 trees --- developer-oriented. Subscribe at: \\
1848 \path{http://lists.sourceforge.net/mailman/listinfo/xen-changelog}
1849 \end{description}
1851 Although there is no specific user support list, the developers try to
1852 assist users who post on xen-devel. As the bulk of traffic on this
1853 list increases, a dedicated user support list may be introduced.
1855 \appendix
1858 \chapter{Installing Xen / XenLinux on Debian}
The Debian project provides a tool called \path{debootstrap} which
allows a base Debian system to be installed into a filesystem without
requiring the host system to have any Debian-specific software (such
as \path{apt}).

Here is how to install Debian 3.1 (Sarge) for an unprivileged
Xen domain:
1868 \begin{enumerate}
1869 \item Set up Xen 2.0 and test that it's working, as described earlier in
1870 this manual.
1872 \item Create disk images for root-fs and swap (alternatively, you
1873 might create dedicated partitions, LVM logical volumes, etc. if
1874 that suits your setup).
1875 \begin{small}\begin{verbatim}
1876 dd if=/dev/zero of=/path/diskimage bs=1024k count=size_in_mbytes
1877 dd if=/dev/zero of=/path/swapimage bs=1024k count=size_in_mbytes
1878 \end{verbatim}\end{small}
If you're going to use this filesystem / disk image only as a
`template' for other VM disk images, something like 300 MB should
be enough --- though of course this depends on what kind of packages
you are planning to install in the template.
1884 \item Create the filesystem and initialise the swap image
1885 \begin{small}\begin{verbatim}
1886 mkfs.ext3 /path/diskimage
1887 mkswap /path/swapimage
1888 \end{verbatim}\end{small}
1890 \item Mount the disk image for installation
1891 \begin{small}\begin{verbatim}
1892 mount -o loop /path/diskimage /mnt/disk
1893 \end{verbatim}\end{small}
1895 \item Install \path{debootstrap}
1897 Make sure you have debootstrap installed on the host. If you are
1898 running Debian sarge (3.1 / testing) or unstable you can install it by
1899 running \path{apt-get install debootstrap}. Otherwise, it can be
1900 downloaded from the Debian project website.
1902 \item Install Debian base to the disk image:
1903 \begin{small}\begin{verbatim}
1904 debootstrap --arch i386 sarge /mnt/disk \
1905 http://ftp.<countrycode>.debian.org/debian
1906 \end{verbatim}\end{small}
1908 You can use any other Debian http/ftp mirror you want.
1910 \item When debootstrap completes successfully, modify settings:
1911 \begin{small}\begin{verbatim}
1912 chroot /mnt/disk /bin/bash
1913 \end{verbatim}\end{small}
1915 Edit the following files using vi or nano and make needed changes:
1916 \begin{small}\begin{verbatim}
1917 /etc/hostname
1918 /etc/hosts
1919 /etc/resolv.conf
1920 /etc/network/interfaces
1921 /etc/networks
1922 \end{verbatim}\end{small}
1924 Set up access to the services, edit:
1925 \begin{small}\begin{verbatim}
1926 /etc/hosts.deny
1927 /etc/hosts.allow
1928 /etc/inetd.conf
1929 \end{verbatim}\end{small}
Add a Debian mirror to:
1932 \begin{small}\begin{verbatim}
1933 /etc/apt/sources.list
1934 \end{verbatim}\end{small}
Create \path{/etc/fstab} like this:
1937 \begin{small}\begin{verbatim}
1938 /dev/sda1 / ext3 errors=remount-ro 0 1
1939 /dev/sda2 none swap sw 0 0
1940 proc /proc proc defaults 0 0
1941 \end{verbatim}\end{small}
Log out of the chroot.
1945 \item Unmount the disk image
1946 \begin{small}\begin{verbatim}
1947 umount /mnt/disk
1948 \end{verbatim}\end{small}
\item Create a Xen 2.0 configuration file for the new domain. You can
use the example configurations supplied with Xen as a template.
1953 Make sure you have the following set up:
1954 \begin{small}\begin{verbatim}
1955 disk = [ 'file:/path/diskimage,sda1,w', 'file:/path/swapimage,sda2,w' ]
1956 root = "/dev/sda1 ro"
1957 \end{verbatim}\end{small}
1959 \item Start the new domain
1960 \begin{small}\begin{verbatim}
1961 xm create -f domain_config_file
1962 \end{verbatim}\end{small}
1964 Check that the new domain is running:
1965 \begin{small}\begin{verbatim}
1966 xm list
1967 \end{verbatim}\end{small}
1969 \item Attach to the console of the new domain.
1970 You should see something like this when starting the new domain:
1972 \begin{small}\begin{verbatim}
1973 Started domain testdomain2, console on port 9626
1974 \end{verbatim}\end{small}
There you can see the ID of the console: 26. You can also list
the consoles with \path{xm consoles} (the ID is the last two
digits of the port number).
1980 Attach to the console:
1982 \begin{small}\begin{verbatim}
1983 xm console 26
1984 \end{verbatim}\end{small}
or by telnetting to port 9626 on localhost (although the xm console
program works better).
1989 \item Log in and run base-config
By default there is no root password.

Check that everything looks OK and that the system started without
errors. Check that the swap is active and that the network settings
are correct.

Run \path{/usr/sbin/base-config} to set up the Debian settings.

Set a password for root using \path{passwd}.
\item Done. You can exit the console by pressing \path{Ctrl + ]}.
2003 \end{enumerate}
If you need to create new domains, you can just copy the contents of
the `template' image to the new disk images, either by mounting the
template and the new image and using \path{cp -a} or \path{tar}, or by
simply copying the image file. Once this is done, modify the
image-specific settings (hostname, network settings, etc.).
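For instance, to clone the template into a fresh image (paths are
illustrative):
\begin{small}\begin{verbatim}
# Either copy the image file directly:
cp /path/diskimage /path/newimage

# Or copy at the file level (the new image must already contain a
# filesystem, e.g. created with mkfs.ext3 as described earlier):
mkdir -p /mnt/template /mnt/new
mount -o loop /path/diskimage /mnt/template
mount -o loop /path/newimage /mnt/new
cp -a /mnt/template/. /mnt/new/
umount /mnt/template /mnt/new
\end{verbatim}\end{small}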
2011 \chapter{Installing Xen / XenLinux on Redhat or Fedora Core}
2013 When using Xen / XenLinux on a standard Linux distribution there are
2014 a couple of things to watch out for:
Note that, because domains $>$ 0 don't have any privileged access at
all, certain commands in the default boot sequence will fail, e.g.
attempts to update the hwclock, change the console font, update the
keytable map, start apmd (power management), or gpm (mouse cursor).
Either ignore the errors (they should be harmless), or remove them
from the startup scripts. Deleting the following links is a good start:
2022 {\path{S24pcmcia}}, {\path{S09isdn}},
2023 {\path{S17keytable}}, {\path{S26apmd}},
2024 {\path{S85gpm}}.
If you want to use a single root file system that works cleanly for
both domain 0 and unprivileged domains, a useful trick is to use
different `init' run levels. For example, use
run level 3 for domain 0, and run level 4 for other domains. This
enables different startup scripts to be run depending on the run
level number passed on the kernel command line.
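For example (a sketch), domain 0 could boot at its usual run level
while unprivileged domains request run level 4 via the {\tt extra}
variable in their configuration files:
\begin{small}\begin{verbatim}
extra = "4"    # boot unprivileged domains into run level 4
\end{verbatim}\end{small}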
If using NFS root file systems mounted either from an
external server or from domain 0, there are a couple of other gotchas.
The default {\path{/etc/sysconfig/iptables}} rules block NFS, so part
way through the boot sequence things will suddenly go dead.
If you're planning on having a separate NFS {\path{/usr}} partition, the
RH9 boot scripts don't make life easy --- they attempt to mount NFS file
systems way too late in the boot process. The easiest way I found to do
this was to have a {\path{/linuxrc}} script run ahead of
{\path{/sbin/init}} that mounts {\path{/usr}}:
2044 \begin{quote}
2045 \begin{small}\begin{verbatim}
#!/bin/bash
# bring up loopback and the portmapper, both needed to mount NFS /usr
/sbin/ifconfig lo 127.0.0.1
/sbin/portmap
/bin/mount /usr
exec /sbin/init "$@" <>/dev/console 2>&1
2051 \end{verbatim}\end{small}
2052 \end{quote}
2054 %$ XXX SMH: font lock fix :-)
The one slight complication with the above is that
{\path{/sbin/portmap}} is dynamically linked against
{\path{/usr/lib/libwrap.so.0}}. Since this is in
{\path{/usr}}, it won't work. This can be solved by copying the
file (and link) below the {\path{/usr}} mount point, and just letting
the file be `covered' when the mount happens.
2063 In some installations, where a shared read-only {\path{/usr}} is
2064 being used, it may be desirable to move other large directories over
2065 into the read-only {\path{/usr}}. For example, you might replace
2066 {\path{/bin}}, {\path{/lib}} and {\path{/sbin}} with
2067 links into {\path{/usr/root/bin}}, {\path{/usr/root/lib}}
2068 and {\path{/usr/root/sbin}} respectively. This creates other
2069 problems for running the {\path{/linuxrc}} script, requiring
2070 bash, portmap, mount, ifconfig, and a handful of other shared
2071 libraries to be copied below the mount point --- a simple
2072 statically-linked C program would solve this problem.
2077 \chapter{Glossary of Terms}
2079 \begin{description}
2080 \item[Atropos] One of the CPU schedulers provided by Xen.
2081 Atropos provides domains with absolute shares
2082 of the CPU, with timeliness guarantees and a
2083 mechanism for sharing out `slack time'.
2085 \item[BVT] The BVT scheduler is used to give proportional
2086 fair shares of the CPU to domains.
2088 \item[Exokernel] A minimal piece of privileged code, similar to
2089 a {\bf microkernel} but providing a more
2090 `hardware-like' interface to the tasks it
2091 manages. This is similar to a paravirtualising
2092 VMM like {\bf Xen} but was designed as a new
2093 operating system structure, rather than
2094 specifically to run multiple conventional OSs.
2096 \item[Domain] A domain is the execution context that
2097 contains a running {\bf virtual machine}.
2098 The relationship between virtual machines
2099 and domains on Xen is similar to that between
2100 programs and processes in an operating
2101 system: a virtual machine is a persistent
2102 entity that resides on disk (somewhat like
2103 a program). When it is loaded for execution,
2104 it runs in a domain. Each domain has a
2105 {\bf domain ID}.
2107 \item[Domain 0] The first domain to be started on a Xen
2108 machine. Domain 0 is responsible for managing
2109 the system.
2111 \item[Domain ID] A unique identifier for a {\bf domain},
2112 analogous to a process ID in an operating
2113 system.
2115 \item[Full virtualisation] An approach to virtualisation which
2116 requires no modifications to the hosted
2117 operating system, providing the illusion of
2118 a complete system of real hardware devices.
2120 \item[Hypervisor] An alternative term for {\bf VMM}, used
2121 because it means `beyond supervisor',
2122 since it is responsible for managing multiple
2123 `supervisor' kernels.
2125 \item[Live migration] A technique for moving a running virtual
2126 machine to another physical host, without
2127 stopping it or the services running on it.
2129 \item[Microkernel] A small base of code running at the highest
2130 hardware privilege level. A microkernel is
2131 responsible for sharing CPU and memory (and
2132 sometimes other devices) between less
2133 privileged tasks running on the system.
This is similar to a VMM, particularly a
{\bf paravirtualising} VMM, but typically
addressing a different problem space and
providing a different kind of interface.
2139 \item[NetBSD/Xen] A port of NetBSD to the Xen architecture.
2141 \item[Paravirtualisation] An approach to virtualisation which requires
2142 modifications to the operating system in
2143 order to run in a virtual machine. Xen
2144 uses paravirtualisation but preserves
2145 binary compatibility for user space
2146 applications.
2148 \item[Shadow pagetables] A technique for hiding the layout of machine
2149 memory from a virtual machine's operating
system. Used in some {\bf VMMs} to provide
the illusion of contiguous physical memory;
in Xen it is used during
{\bf live migration}.
2155 \item[Virtual Machine] The environment in which a hosted operating
2156 system runs, providing the abstraction of a
dedicated machine. A virtual machine may
be identical to the underlying hardware (as
in {\bf full virtualisation}), or it may
differ (as in {\bf paravirtualisation}).
\item[VMM] Virtual Machine Monitor --- the software that
2163 allows multiple virtual machines to be
2164 multiplexed on a single physical machine.
2166 \item[Xen] Xen is a paravirtualising virtual machine
2167 monitor, developed primarily by the
2168 Systems Research Group at the University
2169 of Cambridge Computer Laboratory.
2171 \item[XenLinux] Official name for the port of the Linux kernel
2172 that runs on Xen.
2174 \end{description}
2177 \end{document}
2180 %% Other stuff without a home
2182 %% Instructions Re Python API
2184 %% Other Control Tasks using Python
2185 %% ================================
2187 %% A Python module 'Xc' is installed as part of the tools-install
2188 %% process. This can be imported, and an 'xc object' instantiated, to
2189 %% provide access to privileged command operations:
2191 %% # import Xc
2192 %% # xc = Xc.new()
2193 %% # dir(xc)
2194 %% # help(xc.domain_create)
2196 %% In this way you can see that the class 'xc' contains useful
2197 %% documentation for you to consult.
2199 %% A further package of useful routines (xenctl) is also installed:
2201 %% # import xenctl.utils
2202 %% # help(xenctl.utils)
2204 %% You can use these modules to write your own custom scripts or you can
2205 %% customise the scripts supplied in the Xen distribution.
2209 % Explain about AGP GART
2212 %% If you're not intending to configure the new domain with an IP address
2213 %% on your LAN, then you'll probably want to use NAT. The
2214 %% 'xen_nat_enable' installs a few useful iptables rules into domain0 to
2215 %% enable NAT. [NB: We plan to support RSIP in future]
2220 %% Installing the file systems from the CD
2221 %% =======================================
2223 %% If you haven't got an existing Linux installation onto which you can
2224 %% just drop down the Xen and Xenlinux images, then the file systems on
2225 %% the CD provide a quick way of doing an install. However, you would be
2226 %% better off in the long run doing a proper install of your preferred
2227 %% distro and installing Xen onto that, rather than just doing the hack
2228 %% described below:
2230 %% Choose one or two partitions, depending on whether you want a separate
2231 %% /usr or not. Make file systems on it/them e.g.:
2232 %% mkfs -t ext3 /dev/hda3
2233 %% [or mkfs -t ext2 /dev/hda3 && tune2fs -j /dev/hda3 if using an old
2234 %% version of mkfs]
2236 %% Next, mount the file system(s) e.g.:
2237 %% mkdir /mnt/root && mount /dev/hda3 /mnt/root
2238 %% [mkdir /mnt/usr && mount /dev/hda4 /mnt/usr]
2240 %% To install the root file system, simply untar /usr/XenDemoCD/root.tar.gz:
2241 %% cd /mnt/root && tar -zxpf /usr/XenDemoCD/root.tar.gz
2243 %% You'll need to edit /mnt/root/etc/fstab to reflect your file system
2244 %% configuration. Changing the password file (etc/shadow) is probably a
2245 %% good idea too.
2247 %% To install the usr file system, copy the file system from CD on /usr,
2248 %% though leaving out the "XenDemoCD" and "boot" directories:
2249 %% cd /usr && cp -a X11R6 etc java libexec root src bin dict kerberos local sbin tmp doc include lib man share /mnt/usr
2251 %% If you intend to boot off these file systems (i.e. use them for
2252 %% domain 0), then you probably want to copy the /usr/boot directory on
2253 %% the cd over the top of the current symlink to /boot on your root
2254 %% filesystem (after deleting the current symlink) i.e.:
2255 %% cd /mnt/root ; rm boot ; cp -a /usr/boot .